A New Methodology for Evaluation: The Pedagogical Rating of Online Courses

Author Information
Author(s): 
Nishikant Sonwalkar
Institution(s) or Organization(s) Where EP Occurred: 
Massachusetts Institute of Technology
Effective Practice Abstract/Summary
Abstract/Summary of Effective Practice: 

Massachusetts Institute of Technology's pedagogical effectiveness index provides a simple yet multidimensional tool for evaluating the learning effectiveness of online courses based on their media elements, learning models, and interactivity elements.

Supporting Information for this Effective Practice
Evidence of Effectiveness: 

The pedagogical effectiveness index is now used at many institutions of higher learning involved in online education, and the number of users of the methodology is growing. The instrument is also available online for those who would like to use it for institutional course evaluation.

How does this practice relate to pillars?: 
learning effectiveness: Online course offerings are increasing in number every day. Most universities and corporate training facilities now offer some or all of their courses online: there are more than 1,000 corporate universities and online course providers offering everything from IT training to Chinese cooking. Though it is clearly advantageous for asynchronous learners to access educational content anywhere and anytime, it remains quite difficult to evaluate the quality and effectiveness of online courses and learning modules. Open source learning platforms and public Web course content have gained popular attention and support because higher education at large can benefit from joint development efforts and shared resources, ultimately reducing the overall cost of online learning. Consortia are leveraging the volumes of shared information and courseware that students can benefit from, and several vendors are providing some of their technology as open source. In the open source, open content environment we are entering, it is important to develop a common, objective scale and summative instrument with which to measure the pedagogical effectiveness of online instructional technologies and course offerings.

Models of Evaluation

In my two previous articles in Syllabus (Nov. and Dec. 2001), I described the pedagogical learning cube in the context of instructional design. In this article, I will again invoke the cube, including the six media elements: text, graphics, audio, video, animation, and simulation (y-axis); the five functional learning styles: apprenticeship, incidental, inductive, deductive, and discovery (x-axis); and the third axis of the cube (the z-axis), which represents the interactive aspects of learning. The z-axis indicates the degree of engagement of students with the learning content, moving from a teacher-centric approach to a student-centered approach. This interactivity axis (z-direction) of the cube may be defined in terms of five elements: system feedback, adaptive remediation, e-mail, discussion board, and bulletin board.

With this definition of the three-dimensional learning cube, an educational framework can be constructed that defines pedagogy as a three-dimensional space. The pedagogical effectiveness of an online course is the sum total of its media elements, learning styles, and interactivity. Pedagogical effectiveness is at the heart of the online offering and defines a critical parameter for the evaluation of courses. However, learning management systems provide the essential integrative layer for online courses; if online courses are delivered in the context of learning management systems, several additional factors must be considered in any evaluation. I propose a new assessment tool based on a five-factor summative rating system and a pedagogy effectiveness index, which together provide a thorough evaluation.

The Pedagogy Effectiveness Index (PEI)

Expanding on the above arguments, the pedagogical effectiveness of an online course can be defined as the summation of its media elements, learning styles, and interactivity. Assuming that each of these factors is equally likely and mutually exclusive, a probability distribution tree can be drawn with three branches, with sub-branches for each axis of the pedagogical learning cube. A pedagogical effectiveness index can therefore be determined by a summative rule, and the corresponding probability multipliers can be shown in a simple matrix.
Pedagogy Effectiveness Index = Σ Mi pi + Σ Sj pj + Σ Ik pk

where M = Media, S = Style, and I = Interaction; the subscripts range over i = 1 to 6, j = 1 to 5, and k = 1 to 5; and Σ represents summation. The probability multipliers are:

Style            pj      Media        pi      Interaction   pk
Apprenticeship   0.068   Text         0.055   Feedback      0.066
Incidental       0.068   Graphics     0.055   Revision      0.066
Inductive        0.068   Audio        0.055   E-mail        0.066
Deductive        0.068   Video        0.055   Discussion    0.066
Discovery        0.068   Animation    0.055   Bulletin      0.066
                         Simulation   0.055
Total            0.34                 0.33                  0.33

Consider the following cases as examples of the application of the PEI:

Case 1: The PEI for a course with one media element, one learning style, and one interactive element is: PEI = 0.055 + 0.068 + 0.066 = 0.189

Case 2: The PEI for a course with four media elements, three learning styles, and two interactive elements is: PEI = 4 x 0.055 + 3 x 0.068 + 2 x 0.066 = 0.556

Case 3: The PEI for a course with six media elements, five learning styles, and five interactive elements is: PEI = 6 x 0.055 + 5 x 0.068 + 5 x 0.066 = 1.0

These cases illustrate that the PEI varies from 0 to 1. The probability of pedagogical effectiveness increases as cognitive opportunity increases with the inclusion of media elements, learning styles, and interaction. The PEI is based on a simple probability distribution and should be considered an approximate indicator within the bounds of the assumptions listed above, specifically the flexible learning approach depicted by the pedagogical learning cube.
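The index can also be computed mechanically. The following Python sketch (the function name and structure are mine, not part of the published instrument) encodes the probability multipliers from the table above and reproduces the three cases:

    # Probability multipliers from the PEI table above.
    MEDIA_WEIGHT = 0.055        # 6 media elements x 0.055 = 0.33
    STYLE_WEIGHT = 0.068        # 5 learning styles x 0.068 = 0.34
    INTERACTION_WEIGHT = 0.066  # 5 interactivity elements x 0.066 = 0.33

    def pedagogy_effectiveness_index(n_media, n_styles, n_interactions):
        """Sum the multipliers for the elements a course actually uses."""
        if not (0 <= n_media <= 6 and 0 <= n_styles <= 5
                and 0 <= n_interactions <= 5):
            raise ValueError("media in 0..6, styles and interactions in 0..5")
        return (n_media * MEDIA_WEIGHT
                + n_styles * STYLE_WEIGHT
                + n_interactions * INTERACTION_WEIGHT)

    print(round(pedagogy_effectiveness_index(1, 1, 1), 3))  # Case 1: 0.189
    print(round(pedagogy_effectiveness_index(4, 3, 2), 3))  # Case 2: 0.556
    print(round(pedagogy_effectiveness_index(6, 5, 5), 3))  # Case 3: 1.0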

Summative Rating for Online Courses

The PEI serves as an indicator of the pedagogical richness of a course. However, online course delivery systems include several additional factors that affect the measure of success. Objective criteria for a summative evaluation should be applied in five major areas: (1) content factors, (2) learning factors, (3) delivery and support factors, (4) usability and human factors, and (5) technological factors. These factors are evaluated with reference to the learning technology standards proposed by IMS, AICC, and SCORM.

Content Factors. The content is the basis for course delivery, and its quality has to be good to begin with: mediocre content cannot be made better simply by the infusion of pedagogical styles or multimedia enhancements. It is important that an independent authority authenticate the accuracy and quality of the content. The source and author of the content must be given proper attribution, both to avoid copyright and compensation issues and to hold the author responsible for the quality of the content.

Learning Factors. The effectiveness of an online course depends on the quality of its pedagogically driven instructional design. The learning factors at the core of the educational quality of an online course include concept identification, pedagogical styles, media enhancements, interactivity with the educational content, testing and feedback, and collaboration. Often the objectives of a course are not well defined and therefore do not convey its intent clearly. The learning styles define content sequencing and presentation, so it is important that the instructional design be sensitive to the functional learning styles, accommodating individual content sequencing and aggregation preferences.

Delivery and Support Factors. The success of an online course depends heavily on the delivery support functions essential for course instructors, administrators, and users. User authentication, portfolio information, and records of users' activities during the completion of the online course should be administered by a user management module, which also manages the course content elements, including video streaming servers, audio servers, and HTML servers. Note also that it is now a federal requirement to make course Web sites accessible to visually and hearing impaired students: Sections 255, 504, and 508 require that course Web sites be designed to work with screen readers, with alt tags for graphics and sign-language closed captions for video.

Usability Factors. Usability is an important element of human-factors-based man-machine interface design. Despite high-quality content, pedagogical styles, and media enhancements, an online course can be a complete failure if its usability is poor. The user interacts with an online Web course through the GUI, and the design of the graphical elements, color scheme, font type, and navigational elements all affect how a course is organized and perceived by the student. Information overload on a Web page, with excessive scrolling within a window, can be detrimental to the educational quality of the presentation; several information design experts recommend a small chunk of information on an 800x600 pixel window as optimal. The page layout of the information chunks, and access from the page to the various parts of the course Web site through navigation bars, are very important to the success of an online course.

Technological Factors. Online course Web sites run on a technological infrastructure. The issues that influence the technological success of online courses include bandwidth, target system configuration, server capacity, browser client, and database connectivity. The network bandwidth defines the lowest common denominator for course Web page design.
Designing for 56 Kbps modem access imposes more constraints than designing for a T1 connection with roughly 1.5 Mbps of bandwidth. The number of simultaneous users a Web server can handle is an important constraint for large-scale deployment of online courses.
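To make the bandwidth constraint concrete, here is a rough back-of-the-envelope sketch in Python; the 250 KB page size is an assumed example, not a figure from the article:

    # Approximate download time for a course page at two connection speeds.
    def download_seconds(page_kilobytes, link_kbps):
        return page_kilobytes * 8 / link_kbps  # kilobytes -> kilobits / rate

    print(round(download_seconds(250, 56), 1))    # ~35.7 s over a 56 Kbps modem
    print(round(download_seconds(250, 1544), 1))  # ~1.3 s over a T1 line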

The choice between MS Internet Explorer and Netscape Navigator can make a difference in which HTML 4.0 and JavaScript features the browser supports, and it also affects the plug-ins that may be required to run interactive applications. Most large-scale online course deployments are powered by a database back end, and the database connectivity and connection pooling mechanisms can become a bottleneck if not dealt with properly. It is important to emphasize that the intent of the methodology described here is to create objective criteria for evaluating the quality of an online course based on the existing elements that represent pedagogical content.

Summative Evaluation Instrument

Most rating systems are summative and depend on a precise definition of the quantitative scale. The most widely used rating system is the Likert scale, which I have selected for the proposed summative evaluation instrument (shown below). The summative evaluation results and the pedagogical effectiveness index can be combined to give a final result that provides a view of the overall effectiveness of the online course:

Overall Pedagogical Rating = PEI x Summative Rating Score

The advantage of this rating formula is that it evaluates both the pedagogical and delivery-system-based scores and provides a final rating useful for comparing online course offerings and online course delivery systems.

Towards a Successful Online Education

The pedagogical effectiveness index and the summative evaluation instrument, used in combination, can be powerful tools for the evaluation of large numbers of online offerings. These criteria place clear emphasis on pedagogically driven design. Widespread use of these tools could guide and motivate online education developers, universities, and training centers towards the creation of educational systems marked by measurable success.

Summative evaluation instrument. Each factor is rated on a Likert scale: 0 = Absent, 1 = Poor, 2 = Average, 3 = Good, 4 = Excellent.

1. Content Factors (0-4): Quality, Authenticity, Validity, Media presentation, Attribution
2. Learning Factors (0-4): Concept identification, Pedagogical styles, Media enhancements, Interactivity, Testing and feedback, Collaboration
3. Delivery & Support Factors (0-4): User management, Course content, Accessibility, Reporting
4. Usability Factors (0-4): GUI, Interactive design, Clarity, Chunk size, Page layouts
5. Technological Factors (0-4): Network bandwidth, Target system configuration, Server capacity, Browser client, Database connectivity
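The combined rating can be computed directly. In the Python sketch below, the five factor ratings are simply summed (maximum 20); the article specifies the Likert scale but not an explicit aggregation rule, so the summation is an assumption on my part:

    FACTORS = ["content", "learning", "delivery_support",
               "usability", "technological"]

    def summative_score(ratings):
        """Sum the 0-4 Likert ratings for the five evaluation factors."""
        for name in FACTORS:
            if not 0 <= ratings[name] <= 4:
                raise ValueError(name + " rating must be between 0 and 4")
        return sum(ratings[name] for name in FACTORS)

    def overall_rating(pei, ratings):
        """Overall Pedagogical Rating = PEI x Summative Rating Score."""
        return pei * summative_score(ratings)

    # Hypothetical course: PEI of 0.556 (Case 2 above) and mixed factor scores.
    example = {"content": 3, "learning": 4, "delivery_support": 2,
               "usability": 3, "technological": 3}
    print(round(overall_rating(0.556, example), 2))  # 0.556 * 15 = 8.34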

Estimate the probable costs associated with this practice: 
The instrument developed for the evaluation and pedagogical rating is available to individual users and institutions at no cost by sending a request to:

Dr. Nishikant Sonwalkar, Principal Educational Architect, AMPS, MIT. nish@alum.mit.edu. Phone: (617) 642-1767; cell: (617) 258-8730

References, supporting documents: 
Burrell, B., Wiggins, R.J.N., Sonwalkar, N., Kutney, M.C., Dalzell, W., and Colton, C.K., "A Comparison of Web-based and Laboratory Learning Environments," Proceedings of the American Society for Engineering Education Annual Conference, 2000.
Contact(s) for this Effective Practice
Effective Practice Contact: 
Dr. Nishikant Sonwalkar
Email this contact: 
nish@alum.mit.edu