Peer review can be stimulating and enjoyable, but busy faculty and designers often find it difficult to make time for review, reflection, and idea sharing. I conceptualized and pilot-tested methods for small- and large-group peer review that required very little time on the part of participants and produced successful outcomes for course design and revision. The large-group peer review process can be easily scaled for different group sizes and used by faculty, designers, and other program or course stakeholders.
For the first pilot, I tested a method of peer review with a very small group: four instructional designers who worked together on a team. Each term, a designer 1) provided a brief review request for their own course, and 2) independently reviewed another designer's course. This pilot lasted a year, during which time I gathered feedback. Next, I redesigned the process for a larger group of 12 instructional designers who took turns being the reviewee. These designers already met on a monthly basis, and the peer review took place in the first 15-20 minutes of the meeting; with one reviewee per meeting, each designer presented a challenge roughly once a year. The reviewee provided their course review challenge in advance of the meeting.
For these peer review processes, I provided a list of questions to reviewers as a starting point, but communicated that reviewees were welcome to substitute their own questions and use the peer evaluation in whatever way was most beneficial for assessing and exploring a course design challenge. I offered reviewees the opportunity either to take a work-in-progress approach for an upcoming course or to ask for an evaluation of a recently designed course.
Guiding questions for peer course review (loosely adapted from the Feldman Method of art criticism: description, analysis, interpretation, and judgment; Feldman, Varieties of Visual Experience, 1972):
Creative freedom was key to the success of the peer review processes. While designers followed the provided list in the first round of peer evaluation, they quickly began to explore areas of their own interest in their evaluation requests and reviews. They also brought in challenges posed by the faculty with whom they worked.
Common themes emerged in the peer review challenges:
In the spirit of review, designers also gave me helpful feedback and suggestions for making the process more effective for them, which I incorporated into the peer review processes. For example, participants in the larger group pilot requested an end-of-year recap session where designers could share the outcomes of their challenges.
Participants in both pilots found peer review to be highly beneficial. They enjoyed seeing challenges presented both for completed courses and for works in progress.
Design challenges were rarely unique to a single course. Participants found commonalities in challenges across a range of programs and subject areas, from Social Work to Computer Science. By sharing experiences and solutions, participants also deepened their understanding of each other's expertise. They began to follow up on one another's past challenges to see whether earlier solutions would apply to new, related challenges. Solutions were thus beneficial not only to the immediate experiments but also to a wider range of course outcomes.
Participants reported that they felt more inspired after reviewing challenges together, sharing helpful ideas, and celebrating achievements. (Their enthusiasm for the process also demonstrated how much they cared about design.)
After each pilot, the designers agreed unanimously to continue implementing the peer review processes.
Learning effectiveness – Courses are reviewed by a panel of designers with expertise in pedagogical theory and practice. Principles of universal design and usability testing are consistently applied. Learning outcomes are further measured by student evaluations at the end of each course.
Scale – The peer review practice has been scaled successfully from smaller to larger groups. Furthermore, design solutions are beneficial not only to individual courses but also to a wider range of course and program outcomes.
Access – Accessibility is a major component of the peer review. The panel includes designers with expertise in the WCAG guidelines who examine course elements to ensure they meet or exceed accessibility standards.
Faculty satisfaction – Design challenges taken up by the panel result in solutions that assist faculty with their teaching goals. Instructors have taken elements created for their online courses and brought them to their on-campus students to broaden their conceptual practice or deepen understanding.
Student satisfaction – (Note: These comments apply to the online courses we support, which benefit holistically from this practice. Students do not participate in the instructional designers' peer review process.) Students evaluate online courses at the end of each term. Evaluations are carefully read and addressed by online learning teams. Students are also encouraged to reach out to support administrators who work exclusively on online courses.
Equipment cost: Nothing beyond what organizations typically own (computers and meeting space).
Time cost: Total of 45-60 minutes per review, including prep and discussion.