Researcher: Tiffany Potter
Position: Senior Instructor in English and Associate Head, Curriculum and Planning
Faculty: Arts
Department: English
Year level: First
Number of students: 142
Problem addressed:
In courses with large class sizes, instructors are challenged to give every student as personal an active-learning experience as possible. Many of the available online tools are built around multiple-choice assessments, are variations on submitting work online to be assessed by the instructor, or are de facto discussion groups. These formats limit their usefulness for developing and practicing the nuanced, subjective critical reading and thinking skills taught in a literature course.
Solution approach:
The Adaptive Comparative Judgment Tool was piloted in courses in Physics, Math and English. The website allows instructors to input a question, which students answer. Students are then given three pairs of matched answers from their classmates and are asked to evaluate and rank them on two criteria: the quality of the idea and the quality of the articulation of that idea.
Evaluation approach:
The researchers delineated the tasks involved in the exercise: answering the question; giving feedback; comparing answers; receiving peer feedback; receiving TA feedback; and classroom discussion. Using a survey and focus groups, they then asked the students to assess how much positive outcome they felt they got from each of those steps.
Preliminary findings:
The data suggests that in the first-year English classroom, ACJ is a highly effective teaching technology. Students in English 110 identified ACJ as one of the most useful assignments of the term. Student responses in Physics and Math were not as favorable.
Can you give some background on the research?
Tiffany Potter: The economic requirements of universities these days mean that a lot of our first-year classes are very big. One of the gaps I felt I was experiencing was this: how do I give every single one of those 150 students a personal, active-learning experience? That’s tricky to do with 150 students. A lot of the online tools I have seen so far tend to offer multiple-choice assessments, online submission assessed by the instructor, or essentially discussion groups. I hadn’t really seen anything that would allow for the positive outcomes we know come from peer assessment and collaborative learning, but that was also usable for the kind of nuanced critical reading and thinking skills that we teach in a literature course.
The Adaptive Comparative Judgment project created a website where the instructor can go in and design a specific question. We went through quite a long process of coming up with different use cases and working out how we could develop this tool so that it would have multi-disciplinary applicability. English, Math and Physics had very different sets of case studies and very different models for how this kind of tool could be used to enhance student engagement and learning.
What was the research question?
TP: Several different research questions are being investigated by different parts of our team within our overarching project of extending the capabilities of a software prototype that has been developed here at UBC. We have now completed pilot implementations in first-year English and in Math, as well as an online Physics course.
Evaluation of a peer’s work—and the reflection it can prompt on one’s own answers and thinking—is a valuable skill to be practiced. What if we could develop a system that could help students to judge the submissions of their peers, and having done so, then reflect back on their own answers?
Our goal in the pilots was to assess whether the ACJ technology could facilitate a three-part set of learning benefits: learning from providing peer feedback (using a ‘student as tutor’ model) through comparisons and associated comments explaining their reasoning; learning from feedback received from peers on their own work, prompting self-reflection; and learning in the classroom, where instructors have access to a peer-generated list of submissions ranked by quality, which can be incorporated into future class sessions through a range of diverse pedagogical and disciplinary approaches.
How was the experiment set up?
TP: We did the full formal assessments from January to April of 2015. There were 142 students. For me, one particularly important learning outcome for English 110 (which is the first-year literature course) is developing the ability to take your observations and feelings about something that you’ve read and translate them onto the page as an arguable, critical position. It’s one of the skill-based learning outcomes in my course.
In ACJ we were able to design a system where I post a question, and students have a certain time window in which to answer my question. When that window closes, the next two-day window opens, where each student is given three pairs of matched answers from their (anonymized) classmates. They’re asked to evaluate and comment upon each pair of answers on two criteria: the quality of the idea and the quality of the articulation of that idea. At the end of the evaluation phase, they rank each pair: which is the better of the two?
The adaptive part of the tool is that, as the students are doing these comparisons, each subsequent pair is matched from answers that previous students evaluated similarly. So the assessment becomes slightly more complicated and slightly more nuanced. The third set will have had access to still more data, and will be still more closely matched, so it will be a little more complex yet to sort out which one is stronger and why. In a parallel process, the answers are marked by the TA as well, and then the TAs and professor take examples from within the group and use them as a teaching tool in the tutorial.
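Potter does not describe the matching algorithm itself, and ACJ systems in the literature are often framed in terms of Bradley-Terry or Rasch models, so the following is only a rough sketch of the adaptive behaviour she describes: each answer carries an Elo-style quality estimate that is updated after every judgment, and new pairs are drawn from answers with similar current estimates. All names, the rating scale, and the update constant here are hypothetical, not taken from the UBC tool.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    author: str
    text: str
    rating: float = 1000.0   # Elo-style quality estimate (hypothetical scale)
    judgments: int = 0       # number of comparisons this answer has appeared in

def expected_win(a: Answer, b: Answer) -> float:
    """Probability that `a` beats `b` under the current estimates."""
    return 1.0 / (1.0 + 10 ** ((b.rating - a.rating) / 400))

def record_judgment(winner: Answer, loser: Answer, k: float = 32.0) -> None:
    """Update both estimates after one student ranks a pair."""
    p = expected_win(winner, loser)
    winner.rating += k * (1.0 - p)
    loser.rating -= k * (1.0 - p)
    winner.judgments += 1
    loser.judgments += 1

def next_pair(pool: list[Answer]) -> tuple[Answer, Answer]:
    """Adaptive matching: take the least-judged answer, then pair it
    with the answer whose current estimate is closest to its own."""
    first = min(pool, key=lambda a: a.judgments)
    partner = min((a for a in pool if a is not first),
                  key=lambda a: abs(a.rating - first.rating))
    return first, partner
```

Because early judgments spread the estimates apart, later calls to next_pair return answers that are closer in estimated quality, which mirrors what students experience: the third comparison is harder to call than the first.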
What did you find?
TP: Our data suggests that it is a really effective teaching tool. I worked with Ido Roll and Letitia Englund at CTLT to design the assessment. First there was a survey administered in class by Letitia. Those students were also invited to participate in focus groups. I completely separated myself from that process: we wanted the narrative data to be as unmediated as possible. We also got one unexpectedly useful source of data from the UBC Student Evaluation of Teaching at the end of the year: without prompting, more than half of responding students named [ACJ] in answer to the question, “what assignment was most helpful to your learning in this course?”
What we found in our first round of data from surveys, focus groups, and the SEoT in the English pilot was that first-year students loved being able to read the answers their peers gave. Some expressed anxiety as non-experts about their capacity to give “correct” feedback (though not about their capacity to rank answers), and most found the final step in our process (using the answers, comparisons and rankings to facilitate high-quality collaborative learning in tutorial) to be a highly effective way to continue and build upon that online peer-assessment process. Students reported perceiving benefits in skills acquisition (the specific skill being practiced), in their ability to assess their own work, and in their capacity to provide effective peer feedback in future courses.
How did you evaluate your findings?
TP: We compared the various kinds of quantitative and qualitative data. We first assessed basic things like ease of use (the way students perceive the tool as being accessible). We then delineated the tasks involved: doing the assignment, giving feedback, comparing answers, receiving peer feedback, receiving TA feedback, and classroom discussion. We asked the students to assess how much positive outcome they felt they got from each of those steps, so we were able to assess each task’s contribution to learning. And then through the surveys and narrative interviews, we assessed what the students perceived the outcomes of the assignments to have been: “My ability to blank has benefited” (for example, to assess my own work, to give peer feedback, to develop future essays). We were assessing the degree to which students perceived that this new tool had (or had not) augmented their learning of a complex critical skill.
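The survey instrument itself is not reproduced in the interview, but the per-task analysis described here amounts to averaging a perceived-benefit rating over respondents for each task. Here is a minimal Python sketch under that assumption; the task names come from the interview, while the rating scale and the sample responses are invented placeholders.

```python
from statistics import mean

# Task names from the interview; ratings below are invented placeholders (1-5).
TASKS = ["doing the assignment", "giving feedback", "comparing answers",
         "receiving peer feedback", "receiving TA feedback",
         "classroom discussion"]

# One dict per responding student: perceived benefit of each task.
responses = [
    {"doing the assignment": 4, "giving feedback": 5, "comparing answers": 5,
     "receiving peer feedback": 3, "receiving TA feedback": 4,
     "classroom discussion": 5},
    # ... one dict per student
]

def benefit_by_task(responses: list[dict]) -> list[tuple[str, float]]:
    """Mean perceived benefit per task, highest first."""
    means = {t: mean(r[t] for r in responses if t in r) for t in TASKS}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

for task, score in benefit_by_task(responses):
    print(f"{task:>23}: {score:.2f}")
```

Sorting the means makes it easy to see which steps students credited with the most benefit.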
How will this study impact teaching and learning?
TP: It has contributed to my ongoing thinking about how to make the learning experience in a large class more personal and more engaged in active learning. How can I create a sense of a learning community even within such a large class? ACJ has turned out to be a really effective device for that. I will continue to use it in my large classes, and we hope that it will soon be available for use by other teachers, both within UBC and beyond.
How will this study impact future research?
TP: There’s so much conversation going on right now about how we can make technology in the classroom useful, and not just included for the sake of its bells and whistles. There are excellent tools out there, but for those of us in Arts disciplines, which typically use assessment modes requiring the kind of critical processes and nuanced engagement that students recognize as complex and to some degree subjective, it’s hard to find online tools that enhance that kind of learning for students and facilitate that kind of assessment for teachers. Making students actively engaged in the process of collaborative learning, peer assessment and self-assessment is something that I think a lot of teachers are constantly trying to improve. I am really hoping that the ACJ work we are doing will continue to contribute to the conversation about how we can bring technology into these disciplines as fruitfully as it has been brought into others.