Researcher: Doug Bonn
Position: Professor in Physics and Astronomy
Faculty: Science
Department: Physics and Astronomy
Year level: First and Second
Number of students: 150-180
Problem addressed:
The traditional first-year physics laboratory is known to be ineffective at helping students learn physics concepts. What can a first-year lab be used for instead?
Solution approach:
Students completed simple experiments designed to promote scientific reasoning. Every measurement they made was compared either to another measurement or to a model. They then had to reflect on what that comparison meant, come up with a plan for doing better and, of course, execute the new plan.
Evaluation approach:
Using an adaptation of Bloom’s Taxonomy, the researchers coded the notes students took during these experiments, assigning each a score from 1 to 4 to assess the level of reflection the students were doing.
Preliminary findings:
It took roughly seven weeks for students to start iterating as a matter of habit. Researchers found that students who had this iterative experience were reflecting on their data much more deeply and in a more sophisticated way than students who had not.
Can you give some background on the research?
Doug Bonn: There’s a traditional style of first-year physics laboratory that is known to not be very effective: using the lab as a venue for students to learn physics. The evidence suggests that it doesn’t actually help them learn concepts. So I turned and said, “Since we’re spending a lot of money on labs, what are they good for?” I decided that what a first-year lab is good for is teaching students about data, models, and how scientists work with data and models.
What was the research question?
DB: Anyone who teaches the first-year lab is tired of seeing students leave at the end of three hours with a lousy set of data fit poorly to a bad model. They just leave with that kind of very bad end result, and it’s not good for their attitudes. A few years ago, I added a twist to our labs with the help of my graduate student Natasha Holmes. In addition to teaching them this data skill set, we decided we’d pick off a couple of more expert-like behaviors that we wanted students to adopt, specifically the iterative scientific process.
We wanted to know: “Can we teach first-year students how to understand and think critically about scientific evidence?” We created a framework that made students compare the data they had obtained, think critically about these comparisons, and then adapt the experiments to obtain better results, all on their own.
How was the experiment set up?
DB: The scheme that we arrived at has a very simple structure. Every experiment the students do is fairly simple, so that they can repeat it many times if they have to in order to get it right. Every measurement they make is then compared: either they compare two different measurements against one another, or they compare a measurement to a model.
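One common way to quantify such a comparison, when two measurements A ± δA and B ± δB are supposed to agree, is the ratio of their difference to its combined uncertainty (a standard form, not written out in the interview itself):

$$ t' = \frac{A - B}{\sqrt{(\delta A)^2 + (\delta B)^2}} $$

A value of |t'| around 1 or below suggests the measurements agree within their uncertainties; a value well above 2 flags a discrepancy worth chasing down.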
Once they’ve made that comparison, they have to reflect on what it means. Did things agree or not agree, for instance? Then they come up with a plan for doing more and doing better, and they go back and try to do a better job with the experiment. It’s a kind of infinite loop that only ends when they’re out of time. The framework is scaffolded with instructions to repeat this loop over and over again, and then we fade that scaffolding out after about seven weeks.
That allows students to keep reflecting on and improving their results, which is what scientists do. We redo and improve experiments and models all the time, as we discover that something didn’t work well.
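As a rough illustration of the loop being described, here is a minimal Python sketch (the course involved no programming; every name here is hypothetical, and the agreement test uses the t'-style ratio above):

import math

def t_prime(a, da, b, db):
    # Compare two measurements a +/- da and b +/- db using the ratio above.
    return (a - b) / math.sqrt(da**2 + db**2)

def lab_session(measure, max_rounds):
    # Repeat measure -> compare -> reflect -> improve until time runs out
    # or the two measurements agree within their combined uncertainty.
    plan = "initial plan"
    for _ in range(max_rounds):
        (a, da), (b, db) = measure(plan)    # two measurements of the same quantity
        if abs(t_prime(a, da, b, db)) < 1:  # reflect: do the results agree?
            break                           # agreement reached; stop iterating
        plan = "reduce uncertainty or fix a suspected systematic error"
    return plan

The point of the structure is that the exit condition is agreement (or running out of time), not a fixed number of trials, which is what pushes students to iterate.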
What did you find?
DB: The first thing we wanted to look at was whether they would really start to develop this habit and how long we would have to scaffold it. With the first group of students we tried the transformation on, in week two we saw that some of them had started to game the system. They knew they had to iterate, so they would deliberately do a lousy job, already with an idea in their head of how they would do better. We removed the scaffolding, but their behavior reverted to not trying to improve, so we reintroduced the scaffolding and tapered it much more slowly over the course of seven weeks. After seven weeks, they started doing the behavior automatically.
We compared the quality of the students’ written notes to those of a similar group of students in a previous version of the course that did not have the scaffolded reflection. We noticed that the students who had this iterative experience were not just iterating; they were reflecting on their data much more deeply and in a more sophisticated way than the students in the previous year without the scaffolding. Somehow this iterative behavior was starting to interact with the different experiments we were giving them, and they were starting to think and reason much more like expert scientists. Furthermore, we found evidence of transfer of this more expert-like thinking into their second-year laboratory.
How did you evaluate your findings?
DB: Natasha came up with a scheme, loosely related to Bloom’s Taxonomy, to code the level of reflection that students were doing. She shrank it down to a scale of one to four. Level 1 comments reflect the simple application of analysis tools or comparisons without interpretation; level 2 comments analyze or interpret results; level 3 comments combine multiple ideas or propose something new; and level 4 comments evaluate or defend the new idea.
For example, they might compute a chi-squared value when they fit a model to their data. The lowest level would be saying “chi-squared equals such and such.” The next level might be “chi-squared equals such and such, which seems high to me”; there’s a bit of evaluation going on there. A level 3 comment would say, “chi-squared equals such and such, which looks a little high to me, but when I look at the graph of the residuals, I can see that there’s one piece of data that looks like there’s a problem with it,” and so on.
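For context, a standard definition (not spelled out in the interview): when a model f(x) with p free parameters is fit to N data points (x_i, y_i) with measurement uncertainties σ_i, the reduced chi-squared is

$$ \chi^2_\nu = \frac{1}{N - p} \sum_{i=1}^{N} \frac{\left( y_i - f(x_i) \right)^2}{\sigma_i^2} $$

Values near 1 indicate a fit consistent with the stated uncertainties, while values much greater than 1 “seem high” in exactly the sense Bonn describes: the model, the data, or the uncertainty estimates deserve another look.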
How will this study impact teaching and learning?
DB: This comparing idea is important. It’s something we do a lot in science, and in fact even outside of science: comparing is one of the really important tools we use when we reason. It’s not a skill that we have automatically; you have to acquire it. We’ve come up with a way to teach it.
What we’ve learned will benefit many more students at UBC, and there’s been a lot of interest at other institutions as well. With the cost of education, we’re at a stage where people are taking a hard look at labs, so you had better be able to prove that your lab is actually achieving something. With this scaffolding, students have a lot of free agency within a structure that tells them to reflect, iterate, and improve.
How will this study impact future research?
DB: Natasha is working with Carl Wieman at Stanford now, and she’s extending this in all sorts of ways, especially trying to come up with a survey tool that measures students’ scientific reasoning. At UBC this year, we are focused on one particular thing: getting students to reflect on their own learning and on their perception of whether they are more expert-like at the end of the course. Even though we’ve measured that they are more expert, a survey we carried out last year showed that they don’t think they are, or that they haven’t really internalized it. So what we’ve been doing this year is having them reflect each week on what they’ve learned, and then having a discussion with them about why that’s a behavior an expert scientist would engage in.
The results of this research were published in the Proceedings of the National Academy of Sciences.
Holmes, N. G., Wieman, C. E., & Bonn, D. A. (2015). Teaching critical thinking. Proceedings of the National Academy of Sciences, 112(36), 11199-11204.
http://www.pnas.org/content/112/36/11199.short