Researcher: Christina Hendricks
Position: Senior Instructor and Chair of the Arts One program
Faculty: Arts
Department: Philosophy
Year level: First
Number of students: 13
Problem addressed:
A research gap on the impact of peer feedback on writing. The project focused on whether the help received through peer feedback transfers between essays, and on how many peer feedback sessions are needed to produce positive change in a student's writing.
Solution approach:
Thirteen students in the Arts One program agreed to participate in the study. Researchers collected all 10 essays students wrote that year, in addition to all the peer comments as well as the comments made by instructors. The study used a common rubric for both the comments and the essays, with four main categories – Strength, Insight, Organization, Style & Mechanics.
Evaluation approach:
The research experiment used a cross-lagged panel design with autoregressive structure to evaluate to what extent the quality of each essay can be explained by the quality of previous essays and quality of peer feedback.
Main findings:
The mean number of critical comments made by the instructor decreased across all the categories over time. Also, essay quality according to the coders went up from essay one to essay ten across all four categories.
Can you give some background on the research?
Christina Hendricks: I teach in the Arts One program. Recently I did a survey of students who were in the program, students who had just finished and students who had finished a while ago but were still at UBC. I was trying to find out, “Do they think Arts One was helpful in the rest of their university career?” The common answer was that Arts One helped them to write better. They really highlighted that the tutorial process of peer feedback on the essays was what really helped them write better.
I was interested in finding out how, why or what's so great about that. I did a bunch of research into the literature around peer feedback. There's a lot of evidence that peer feedback is really helpful for writing. But there were a couple of things that were not shown in the literature. I was curious whether the help that they get by giving and receiving feedback transfers between essays, and how many peer feedback sessions are useful.
What were the research questions?
CH: We had a few research questions. One was, "How do students use peer comments that they give to others and receive from others for improving different essays, rather than improving the same essay?" The second research question was, "Are students more likely to use peer comments given and received for improving their writing after more than one or two feedback sessions, and how many sessions are optimal?" Our third question was, "Does the quality of peer comments improve over time?" We are trying to see to what degree the comments that students give on the essays relate to the quality scores that the coders gave to the essays. Are the comments they are giving connected to what the coders say the essay is actually like, and does that get better over time?
How was the experiment set up?
CH: We got 13 out of my 16 students in one year to agree to participate. We collected all 10 essays they wrote for Arts One that year and then we collected all the written comments that other students had made on their essays as well as the comments I had made on their essays. So we had 130 essays, 1,200 peer comments and over 3,000 instructor comments.
We collected all that, which is in itself a task, and then we hired three research assistants to do coding of that qualitative data. We used a common rubric for both the comments and the essays, and it had four main categories — Strength, Insight, Organization, Style & Mechanics. The coders gave each essay a number from 1-7 in each of those four main categories. For the comments we gave a numerical value from one to three. One is a significant problem. Two is a medium critique. Three is positive praise.
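As a rough illustration of how coded data like this can be summarized (this is not the project's actual pipeline, and all scores below are made up), one could aggregate the rubric ratings per category like so:

```python
# Hypothetical sketch: summarizing coded rubric data.
# Scales follow the study's scheme: essays rated 1-7 per category;
# comments rated 1 (significant problem), 2 (medium critique), 3 (praise).
from statistics import mean

CATEGORIES = ["Strength", "Insight", "Organization", "Style & Mechanics"]

# essay_scores[i][category] -> coder's 1-7 rating for essay i (made-up data)
essay_scores = [
    {"Strength": 3, "Insight": 2, "Organization": 4, "Style & Mechanics": 3},
    {"Strength": 4, "Insight": 3, "Organization": 4, "Style & Mechanics": 4},
    {"Strength": 5, "Insight": 4, "Organization": 5, "Style & Mechanics": 5},
]

# Each comment is (category, value) — again, made-up data
comment_scores = [("Strength", 2), ("Insight", 1), ("Strength", 3),
                  ("Organization", 2), ("Style & Mechanics", 3)]

# Mean essay rating per category across the coded essays
for cat in CATEGORIES:
    avg = mean(e[cat] for e in essay_scores)
    print(f"{cat}: mean essay score {avg:.2f}")

# Count of praise (value 3) comments per category
praise = {cat: sum(1 for c, v in comment_scores if c == cat and v == 3)
          for cat in CATEGORIES}
print(praise)
```

With counts like these per essay, one can then ask whether the coders' essay ratings rise, and instructor critiques fall, from essay one to essay ten.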
What did you find?
CH: The research is still ongoing. We've coded all of the comments and we've done half of the essays, so I can only say a few things. We don't have enough data yet to do the comparison we wanted to do.
We noticed the 2's I made on essays went down over all the categories — insight, strength, style, organization. For student comments the number of 2 comments in style went down. On the instructor comments, the 3's, the positive ones, went up in every category from the first essay to the tenth essay. So all of my comments — strength, organization, style, insight — I had a lot more 3's towards the end. Based on the 60 essays out of the 130 that we've coded so far, essay quality — how good the essay is according to the coders — does go up from essay one to essay ten across all four categories.
What we are really looking for is, “Are the comments on one essay related to what the students do on the next and later essays?” That’s the path we are most interested in, and since we don’t have a lot of data yet we can’t say that much.
How did you evaluate your findings?
CH: We used a cross-lagged panel design with autoregressive structure. This means that we evaluate to what extent the quality of each essay can be explained by the quality of previous essays, quality of feedback and areas directed by feedback. What we are looking at: Does the quality of the essay at the first time affect the quality of the essay the second time? We are looking at whether the comments on each one reflect the essay quality. We are looking at whether or not, if students get a certain kind of comment here do they tend to get that again? Are they actually not improving? Are they just getting the same comments over and over again? And the last is the one I’m most interested in: Do the comments that they give or they get on this essay affect the quality of the essay next time?
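The intuition behind one cross-lagged path can be sketched as a single regression: does the quality of essay t depend on the quality of essay t-1 (the autoregressive path) and on the feedback received on essay t-1 (the cross-lagged path)? The sketch below uses synthetic data and ordinary least squares; the study's actual model estimates all paths jointly, so this is illustration only.

```python
# Sketch of one cross-lagged, autoregressive regression:
#   quality[t] ~ b0 + b1 * quality[t-1] + b2 * feedback[t-1]
# All data here are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_students, n_essays = 13, 10  # panel dimensions matching the study

# Synthetic panel: essay quality on a 1-7 scale, feedback on a 1-3 scale
quality = rng.uniform(1, 7, size=(n_students, n_essays))
feedback = rng.uniform(1, 3, size=(n_students, n_essays))

# Outcome: quality of essays 2..10; predictors: lagged quality and feedback
y = quality[:, 1:].ravel()
X = np.column_stack([
    np.ones_like(y),           # intercept
    quality[:, :-1].ravel(),   # autoregressive term (previous essay quality)
    feedback[:, :-1].ravel(),  # cross-lagged term (previous feedback)
])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"autoregressive path: {b1:.3f}, cross-lagged feedback path: {b2:.3f}")
```

A full cross-lagged panel model would fit the reciprocal paths simultaneously (typically with structural equation modeling) rather than one regression at a time, but the question each coefficient answers is the same.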
How has this study impacted teaching and learning?
CH: So far this particular study hasn't changed anything in what I am doing, besides one thing I got from the literature review. In tutorial we actually sit down and talk, so I didn't use to have students write out their comments; they just gave them to each other orally. But when I was doing the literature review on peer feedback, I noticed that both oral and written comments are good. Oral comments are good because with written comments alone you can't really explain, and there might be misunderstandings. But if you just do oral comments, students might not remember them. So I started having students write out their comments too.
How will this study impact future research?
CH: There’s very little in the literature that shows that people transfer knowledge over between assignments. I would like to be able to show if that happens. Then people can say, “Oh, I can do peer feedback even if we are not doing drafts of the same essay.”
Also, this is a small study, 13 students. This was a pilot study to see if this whole idea was even feasible — Can we even do it? Can we take 3,000 comments and 130 essays and do what we want to do? We can. It just requires a lot of time. If we want to do 50 students we are going to have to figure out how to do this differently because it’s been a year and a half and we’re still not done.
This project has been supported by the SoTL Seed program, 2014.