ComPAIR Learning Application – Teaching and Learning Technologies

Last edited: June 2, 2017.

Project type: New technology

ComPAIR is a UBC-developed student peer assessment and feedback application, in which students submit their own answer to an assignment, then compare pairs of their peers’ answers to the same assignment. For each pair, students pick the answer they think best meets any instructor-set criteria (e.g., “Which answer is more clearly written?”, “Which answer is more accurate?”) and write constructive feedback to the anonymous author of each.

By contrasting pairs of answers, students can better reflect on what makes an answer stronger or weaker, encouraging richer peer feedback and deeper critical thinking, as well as an enhanced understanding of the strengths and weaknesses of their own work.

Evaluation goals

ComPAIR came out of a unique development process in which faculty members, researchers, technology experts, and students collaborated through participatory design to build the application from the ground up. The technology went through several iterative design cycles, with feedback on the developing prototypes solicited from faculty intending to use it and from student representatives who did use it (in usability testing). At that point, the application required formal evaluation to assess its potential as a teaching tool and its suitability for wider release at UBC, specifically:

  A. Usefulness: Did assignments in the application support effective teaching and learning?
  B. Usability: How could the technology and its implementation in courses be optimized?

Methodology

The three primary instructors who participated in the design of the application piloted the technology in their respective courses (English, Math, and Physics) in 2015. In each course, ComPAIR played a role in 2-3 assignments, and feedback was gathered near the end of term. Rather than trying to quantify learning, the evaluation focused on the perceptions of people using the technology. Data sources included:

  • Paper surveys: 168 students and 6 teaching assistants (TAs) responded
  • Focus groups: 4 students and 5 TAs participated in separate sessions (to follow up on themes emerging from the surveys)
  • One-on-one interviews: All 3 instructors participated in interviews (with questions loosely following those asked of students and TAs in the paper surveys)

Findings

The evaluation of the pilot showed that the application had the capacity to support a teaching and learning experience that instructors, TAs, and students viewed as beneficial (Goal A), although how assignments were introduced, designed, and integrated affected perceptions of that usefulness. In terms of usability, ease of use was rated highly (Goal B) by the majority of all user groups: 95% of English students, 70% of Math students, 100% of Physics students, 83% of teaching assistants, and 100% of instructors reported final ease of use as 4 or 5 (out of 5). The main usability issues raised were addressed in the next application version. Among the more significant improvements: fixing technical problems with PDF uploading/viewing, speeding up page loading, and preventing students from losing partially completed work when accidentally navigating away from a page.

Instructors thought answer quality was largely good (all rated it 4 or 5) and that students seemed to be putting more effort into assignments and participating in higher numbers than in prior similar assignments. All instructors also felt their learning objectives were generally met and that ComPAIR provided a useful way of checking student comprehension of core concepts (Goal A). English TAs liked having a pool of examples to discuss and believed the assignments helped: 1) prepare students better for tutorials, 2) create a positive in-person classroom climate for giving peer feedback, and 3) encourage critical thinking around a core course skill (Goal A).

Most instructors and TAs agreed that peer feedback quality was not as high as answer quality. Instructors rated feedback quality in the 3-5 range, while TAs rated it in the 1-3 range. Half of the English TAs did not see a benefit to students writing/receiving peer feedback. The consensus on improving this (Goal B) was that: 1) instructors could provide more guidelines on how to write quality feedback, 2) grades/grade weight could be added or increased for the feedback, and 3) the interface could be changed so the comparison process placed more emphasis on writing feedback.

The way instructors introduced, designed, and integrated the assignments affected the benefits students perceived, both from the assignments themselves and for their learning more generally. Comparing student responses with the context provided by each course’s instructor and TAs allowed us to surface some themes for optimizing the student experience (Goal B).

  • Students who better understood what underlying skill they were practicing in the application felt they learned more from the assignments. English and Physics students articulated a good grasp of what they were practicing (writing and problem-solving, respectively), and their ratings across the board were notably higher than those of Math students, who did not have a clear sense of what they were practicing. Student comments highlighted the importance of introducing the application and assignments in a “what’s-in-it-for-me” way. It is key for students to know the end goal of using ComPAIR (“how will this help me in this course?”), not only how to use it to complete the assignments.
  • Not surprisingly, students given more guidance (multiple criteria, additional rubrics) on how to compare said they learned more from comparing. Math students evaluated assignment pairs based on the application’s default comparison criterion (the basic “Which is better?” question), with no other guidance provided, and 40% rated comparing highly as a learning activity. English students evaluated assignment pairs based on two criteria in the system (which assignment had a better idea and which was better articulated), and 62% rated comparing highly. Physics students received a detailed handout with rubrics explaining how to evaluate assignment pairs, and 81% rated comparing highly. Helping students understand what to look for in the assignment comparisons can result in stronger learning and (possibly) richer feedback.
  • Assignments felt more beneficial to students when presented as part of a larger process. English and Physics ComPAIR assignments were part of key activities in the course, and this was reinforced by ComPAIR-related work done during (in-person and online) class time. In English, the assignments were the first step in learning to write better critical premises in order to form stronger in-class essays and a final term paper. In Physics, the assignments were the final step in practicing a problem-solving process, with ComPAIR enabling the fourth part (reflection). In Math, the assignments functioned as standalone assessments of conceptual understanding that students felt had already been tested in other ways, and little class time was spent discussing ComPAIR results. Student comments suggested it may be more useful for students to use ComPAIR in the context of a bigger goal, rather than in standalone assignments that are not concretely tied into the course.
  • Despite low reported confidence in giving peer feedback, many students said they learned simply from practicing this skill with answer pairs. Specifically, only 34% of English, 26% of Math, and 51% of Physics students were confident they had given good feedback. Yet 56% of English, 41% of Math, and 81% of Physics students still felt the practice improved their ability to write peer feedback in the future. A minimum of three comparisons per assignment is suggested to give students sufficient practice, as this was the number used in the pilot courses.

Recommendations

For project: Based on the outcomes of the pilot evaluation, significant workflow and user interface changes were made to the application to improve the experience for all user groups. Among the bigger changes: adding a new tab to the interface where students can find their submitted work (and the peer feedback on it), decreasing the number of response boxes students see during the comparison process, allowing TAs to give feedback privately in the application, adding a more complete instructor/TA overview of student work, providing instructors with a preview of the student view, and adding an instructor option to submit late work on behalf of a student.

For future evaluation:

  • We should better coordinate the timing of assignment-based surveys with instructors, so all surveys are administered at a comparable point based on course schedules rather than calendar dates. Differences in application experience/exposure at the time of survey distribution may have slightly skewed responses.
  • We could give stronger incentives and/or more opportunities for students to provide in-depth feedback. More attractive incentives, focus group times immediately preceding or following in-person classes, and an online space for engagement could have increased participation.

For participants: A list of best practices (similar to the student experience points noted above) was shared in workshops, in presentations, and on the ComPAIR support website.

Contact

Letitia Englund
UX/UI Analyst
Centre for Teaching, Learning & Technology
Email: letitia.englund@ubc.ca

Additional Information

ComPAIR support website

ComPAIR demo site

Surveys used for the evaluation

