"Automatic" gradesheets: A Holy Grail for simultaneously improving faculty and student satisfaction

Award Winner: 
2010 Sloan-C Effective Practice Award
Author Information
Author(s): 
James T. Fatzinger, M.Div., MBA
Institution(s) or Organization(s) Where EP Occurred: 
Metropolitan State University (Minneapolis and St. Paul, MN)
Effective Practice Abstract/Summary
Abstract/Summary of Effective Practice: 

It might not be too challenging to enhance faculty satisfaction: reduce class size, increase opportunities to interact with students, eliminate (or at least drastically reduce) paperwork, etc. Improving student satisfaction may be even easier: improve the quality of feedback as well as the timeliness with which the feedback is provided. It is clear, however, that the means of enhancing the latter (student satisfaction) can be at odds with the means of improving the former (faculty satisfaction). Providing higher-quality feedback more quickly can seem like an onerous burden in a context of steadily increasing class sizes and "extracurricular" demands. "Automatic" gradesheets, which drastically reduce the time needed to return high-quality feedback to students, may be an educational Holy Grail.

Description of the Effective Practice
Description of the Effective Practice: 

The idea is disarmingly simple: use the functionality of two (2) programs with which most faculty are already adept, Microsoft Excel® and Microsoft Word®, to create "automatic" gradesheets that require no more than a mouse click (see K.H.'s comments below) to generate feedback based on best practices, plus four-stroke "shortcuts" that insert often-used comments (color-coded, if so desired) into electronically submitted student papers. The supporting "architecture" for the practice is the seamless, transparent integration from the assignment description, to a rubric with four (4) behavioral anchors for each graded item, to a feedback form that "automatically" provides feedback on each item described in the rubric and calculates a grade for the assignment.
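The original tool lives entirely in Excel® and Word®, but the underlying logic is easy to express in any language. As a language-neutral illustration only -- the rubric items, behavioral anchors, point values, and comments below are hypothetical placeholders, not taken from the actual gradesheets -- here is a minimal Python sketch of the rubric-to-feedback-to-grade chain:

```python
# Minimal sketch of the logic behind an "automatic" gradesheet.
# Each graded item has four behavioral anchors (4 = best), and each
# anchor pairs a point value with a pre-written, reusable comment.
# All items, points, and comments are hypothetical examples.
RUBRIC = {
    "thesis": {
        4: (10, "Clear, arguable thesis that frames the entire paper."),
        3: (8, "Thesis is present but could be stated more precisely."),
        2: (5, "Thesis is implied rather than stated; make it explicit."),
        1: (2, "No identifiable thesis; begin by stating your main claim."),
    },
    "evidence": {
        4: (10, "Claims are consistently supported with cited evidence."),
        3: (8, "Most claims are supported; a few still need citations."),
        2: (5, "Evidence is thin; support each major claim with a source."),
        1: (2, "Assertions are largely unsupported by evidence."),
    },
}

def gradesheet(student: str, levels: dict) -> str:
    """Given one anchor level per rubric item, assemble the feedback
    for every item and calculate a grade for the assignment."""
    earned, possible, lines = 0, 0, [f"Feedback for {student}:"]
    for item, level in levels.items():
        points, comment = RUBRIC[item][level]
        earned += points
        possible += max(pts for pts, _ in RUBRIC[item].values())
        lines.append(f"- {item} (level {level}, {points} pts): {comment}")
    lines.append(f"Grade: {earned}/{possible} ({100 * earned / possible:.0f}%)")
    return "\n".join(lines)

# The grader's only per-paper work is choosing a level for each item --
# the single-click step the faculty comments below describe:
print(gradesheet("Student A", {"thesis": 3, "evidence": 4}))
```

In the gradesheets themselves, the equivalent of choosing a level is the single mouse click described above, and the assembled comments flow into the Word® feedback form that also calculates the grade.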

Supporting Information for this Effective Practice
Evidence of Effectiveness: 
Existing information makes it easier to provide quantitative evidence of the effectiveness of the automatic gradesheets with regard to the "student satisfaction" pillar.
Metropolitan State University administers an instrument called the Instructional Improvement Questionnaire (IIQ) in all classes, across all modalities (traditional classroom, fully online, and "Web-enhanced" [hybrid]), at the end of every semester. Instructor behaviors have a strong, direct influence on seventeen (17) items on the IIQ, e.g., "Demonstrated mastery of subject matter," "Explained course requirements and evaluation criteria," etc.
Student ratings on two (2) items, "Provided helpful feedback on student assignments" and "Informed students of their progress in time to correct deficiencies," however, stand out in stark contrast to the other fifteen. University-wide (Fall, 2009, N=8,889 and Spring, 2010, N=9,411), on a 5-point Likert scale where 1 represents the best possible rating and 5 the worst, the mean for the top-scoring fifteen (15) items was 1.40 for Fall semester and 1.41 for Spring semester. Contrast this with the scores for "Provided helpful feedback on student assignments" (Fall, 2009 = 1.64; Spring, 2010 = 1.65) and "Informed students of their progress in time to correct deficiencies" (Fall, 2009 = 1.67; Spring, 2010 = 1.70)!
Ratings on these same items for the instructor piloting the automatic gradesheets are markedly better: 1.44 for "Provided helpful feedback on student assignments," as opposed to the University-wide means of 1.64 (Fall, 2009) and 1.65 (Spring, 2010). Even better results appear for "Informed students of their progress in time to correct deficiencies": the average for 21 classes over 14 semesters for the instructor using the automatic gradesheets was 1.43, compared to the University-wide means of 1.67 (Fall, 2009) and 1.70 (Spring, 2010). The evidence suggests that more useful feedback was returned to students more quickly using the automatic gradesheets -- resulting in improved student satisfaction.
Unfortunately, Metropolitan State University does not collect comparable information from faculty, so evidence of the automatic gradesheets' effectiveness on the "faculty satisfaction" pillar takes the form of qualitative comments like those below:
  • K.H. (tenured professor, four-year university): "I cut my grading time by approximately 50% by using the automatic grading sheets. Since I use many of the same sentences to provide feedback to numerous students on the same assignments, the automatic grade sheets allow me to provide accurate feedback by the 'click' of my mouse."
  • M.C. (tenure-track professor, community college): "Automatic gradesheets have helped me to better communicate with my students.  My feedback is more consistent and it is always in line with the rubric.  Additionally, with the new process I provide feedback on all levels of the rubric -- whether it is positive or negative comments.  That is a change that I enjoy.  Previously, I spent far too much time letting students know what needed improvement.  The automatic gradesheets allow me a seamless way of also letting students know what they did well."
  • A.B. (community faculty): "The automatic gradesheets were very helpful for providing consistent feedback. The programmed comments were useful and easily customizable. The gradesheets reduced the time it took for me to write feedback for each assignment and provided useful information for the students. I will continue to use these tools."
How does this practice relate to pillars?: 
1. Improve faculty satisfaction by reducing time spent on repetitive grading tasks. The Pareto Principle applies to grading: the vast majority of faculty time is spent commenting on and correcting a relatively small number of frequently repeated errors. The "automatic gradesheet" dramatically reduces the time spent on this low-reward activity! (A sketch of the repeated-comment mechanism follows this list.)
2. Improve student satisfaction by delivering high-quality (totally customizable) feedback on assignments in a fraction of the time required without automatic feedback (see the "Evidence of Effectiveness" section).
3. A primary purpose of feedback is to help students close the gap between the goal (what a given assignment purports to assess) and performance (the degree to which the submitted work meets the assessment criteria). Enhance learning effectiveness by:
   a. increasing the "transparency" from assignment description to rubric to (automatic) gradesheet,
   b. providing formative feedback focused on closing gaps in performance, and
   c. enabling students to use feedback to increase self-regulation and improve performance on similar future assignments.
4. Scalability: This practice can be implemented in steps. Faculty can collaborate on developing common descriptions for assignments (e.g., case studies), and students can be engaged in developing evaluation criteria. New automatic gradesheets can be brought online each semester until a complete collection has been developed.
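To make pillar 1 concrete: because a small set of comments accounts for most of what graders write (the Pareto observation above), each frequent comment can be bound to a short code that expands on entry. In the actual practice this is done with four-stroke shortcuts in Word®; the Python sketch below uses entirely hypothetical codes and comment text to show the same expansion idea:

```python
# Illustrative sketch of the "four-stroke shortcut" idea -- NOT the
# author's Word(R) implementation. Each short code a grader types
# expands into a full, carefully worded, reusable comment.
# All codes and comments below are hypothetical examples.
SHORTCUTS = {
    "#cit": "Add a citation: this claim needs a supporting source.",
    "#frg": "Sentence fragment: this clause has no main verb.",
    "#gwc": "Good word choice -- precise and well suited to the audience.",
}

def expand_shortcuts(draft: str) -> str:
    """Replace every shortcut code in a draft comment with its full text."""
    for code, comment in SHORTCUTS.items():
        draft = draft.replace(code, comment)
    return draft

# Four keystrokes per recurring comment instead of a full sentence:
print(expand_shortcuts("Paragraph 2: #cit Paragraph 3: #gwc"))
```

However it is implemented, the payoff is the same: a few keystrokes produce a complete, consistent comment, so the marginal cost of giving detailed feedback on a frequently repeated error approaches zero.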
Equipment necessary to implement Effective Practice: 
I am an ardent advocate for keeping costs low by using readily available resources. All that is needed to implement the automatic gradesheets is equipment most faculty already have -- a computer and Microsoft Office®!
Estimate the probable costs associated with this practice: 

Most instructors will already have access to Microsoft Office®; the only other "cost" would be the time needed to become proficient with the specific processes in Excel® and Word®. Additional time would be needed if the faculty member is not yet familiar with Excel®, and to enter her/his detailed feedback comments into Word®. Attending a single webinar and/or following detailed, step-by-step handouts with screenshots should bring any faculty member up to speed in a few hours.

References, supporting documents: 
2009-10 college prices. CollegeBoard. Retrieved June 24, 2010, from http://www.collegeboard.com/student/pay/add-it-up/4494.html
Blase, J. J. (1986). A qualitative analysis of sources of teacher stress: Consequences for performance. American Educational Research Journal, 23(1), 13-40. http://www.jstor.org/stable/1163040
Brookhart, S. (2007). Feedback that fits. Educational Leadership, 65(4), 54-59. Retrieved from Academic Search Premier database.
Certo, J., & Fox, J. (2002). Retaining quality teachers. The High School Journal, 86(1), 57-75. doi:10.1353/hsj.2002.0015
Chong Leng, T. (2005). Mail merge: A function to improve efficiency…beyond the generation of customized feedback documents. Decision Sciences Journal of Innovative Education, 3(1), 151-159. doi:10.1111/j.1540-4609.2005.00059.x
German, K. (1990). Computerized instructional responses: An option for providing student feedback. Association for Communication Administration Bulletin, (74), 83-91. Retrieved from Communication & Mass Media Complete database.
Gibbs, G., & Simpson, C. (2004). Does your assessment support your students' learning? Open University. Retrieved June 19, 2010, from http://artsonline.tki.org.nz/documents/GrahamGibbAssessmentLearning.pdf
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112. doi:10.3102/003465430298487
IIQ results, university level, Fall 2009. Obtained from Metropolitan State University Department of Institutional Research, June 28, 2010.
IIQ results, university level, Spring 2010. Obtained from Metropolitan State University Department of Institutional Research, June 28, 2010.
Jones, S. (1998). Student and staff appraisal: How to give effective feedback. Management in Education, 12(4), 23-25. doi:10.1177/089202069801200408
Lipnevich, A. A., & Smith, J. K. (2009). Effects of differential feedback on students' examination performance. Journal of Experimental Psychology: Applied, 15(4), 319-333. doi:10.1037/a0017841
McIntyre, F. S., Hoover, G. A., & Gilbert, F. W. (1997, May). Evaluating oral presentations using behaviorally anchored rating scales. Academy of Educational Leadership Journal, 1(2), 1-6. Retrieved June 22, 2010, from Academic OneFile.
Nias, J. (1981). Satisfaction and dissatisfaction: Herzberg's 'two-factor' hypothesis revisited. British Journal of Sociology of Education, 2(3), 235-246. Retrieved June 22, 2010, from http://www.jstor.org/stable/1392621
Nicol, D., & Macfarlane-Dick, D. (n.d.). Rethinking formative assessment in higher education: A theoretical model and seven principles of good feedback practice. A briefing paper from The Higher Education Academy (Scotland). Retrieved June 19, 2010, from http://www.heacademy.ac.uk/assets/York/documents/ourwork/assessment/web0015_rethinking_formative_assessment_in_he.pdf
Reybold, L. (2005). Surrendering the dream: Early career conflict and faculty dissatisfaction thresholds. Journal of Career Development, 32(2), 107-121. doi:10.1177/0894845305279163
Other Comments: 

I will be presenting the "automatic" gradesheets in the workshop titled "The Holy Grail: Increasing student satisfaction while decreasing faculty time spent on repetitious grading tasks."

Contact(s) for this Effective Practice
Effective Practice Contact: 
James T. (Jim) Fatzinger
Email this contact: 
effsolns@mindspring.com