This effective practice was evaluated in the context of a study to determine whether entries written to an electronic portfolio (blog portfolio or bportfolio) by preservice teachers improved in quality after an intervention was deployed. The study also compared portfolio metadata to writing quality scores to determine whether there was a relationship. Participants included a convenience sample of 11 undergraduate students enrolled in a teacher education program. Primary analyses focused on comparing portfolio entries, written before and after the intervention, using a repeated measures design. Secondary analyses involved calculating correlations between writing quality and portfolio metadata. Results showed that writing improved at a statistically significant level, t(10) = 4.99, p < .001, d = 3.16, 95% CI = 1.91 to 5.00. In addition, statistically significant correlations were found between writing quality and the number of unique terms shown on portfolio tag clouds, r = .60, N = 11, p < .05, d = 1.50, as well as writing quality and the total number of portfolio entries, r = .72, N = 11, p < .05, d = 2.08. These findings suggest that the intervention improved writing quality on entries made to electronic portfolios and that metadata predicted the quality of portfolio content.

Keywords: electronic portfolios, evidence, metadata, preservice teacher, writing
Undergraduate students created electronic portfolios using WordPress blogs. Each portfolio included a landing page, or blog page; four auxiliary pages addressing professional teaching standards; and a tag cloud and archive. Students began writing entries to their portfolios at an average rate of one every two weeks. The contents of portfolio entries varied: some described instructional theory, presumably written for a specific course, while other entries recounted events from classroom observations. These entries were assessed by course instructors and practicum supervisors using a variety of methods, such as comments and points.
A writing intervention was deployed, in combination with the use of blog portfolios. The writing intervention included the following instructional practices:
1) explicit direction on content and format,
2) communication of assessment criteria,
3) evaluating evidence,
4) instructor and peer feedback.
These practices were deployed as participants began writing their third portfolio entry. Graham and Perin (2007) identified these particular methods as characteristic of the following approaches to writing instruction: procedural facilitation, product goals, inquiry, feedback, and process writing. According to Graham and Perin (2007), these instructional practices have a positive impact on writing skill and writing quality.

Three portfolio entries were scored for each student using a repeated measures design. Two of these entries were written by students and assessed by the instructor before the intervention was deployed. The oldest entry, further referred to as the first entry, was written nine months before the intervention was administered. The next entry, further referred to as the second entry, was written between one day and one month before the beginning of the intervention. The third entry was written during the intervention. The intervention lasted one hour, spread across two class sessions separated by one week. Students wrote, and then revised, their third entry outside of class.

The first, second, and third entries were scored for writing quality, which was operationalized using a rubric with five columns and two rows. Columns were scaled from 0 (not passing) to 4 (exemplary). The first row assessed the integration of artifacts used to show evidence of teaching competence; artifacts included lesson plans, student work samples, teaching videos, and course papers, among other items. To achieve a score of 2 or above on this criterion, participants had to reference the artifact and interpret or evaluate its impact on their practice or student learning. The second row assessed the participants’ analysis and evaluation of their teaching in comparison to a given professional standard.
To achieve a score of 2 or above on this criterion, participants had to reference the professional standard, analyze and evaluate their performance in comparison to the standard, identify significant conclusions about their teaching practice, and support those conclusions by referencing the artifact. Participants’ third entry was also scored on a four-point scale of reflective writing designed by Kember, McKay, Sinclair, and Wong (2008). The original Kember et al. (2008) scale identified four levels of reflective writing by letter designation: nonreflection, understanding, reflection, and critical reflection. These levels were assigned numerical values from 1 to 4, respectively, and then used to assess the third entry (M = 3.18, SD = .41).

In addition, two types of metadata were collected from student portfolios before implementing the intervention. Each portfolio showed a tag cloud and archive (Figure 1). The number of words or phrases, further referred to as terms, in each tag cloud was counted (M = 18.36, SE = 10.42); course numbers and generic titles, such as EDU 1234, weekly blog, and entry #4, were excluded. The total number of portfolio entries was also counted by summing the numerals shown on each portfolio’s archive menu (M = 23.00, SE = 9.92). Scores from another portfolio assignment, which were not used to answer the research questions for this study, showed positive, statistically significant correlations with the number of tag cloud terms and the total number of portfolio entries (mean r = .64, N = 11, p < .05), indicating some convergent validity between writing quality and metadata.
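The term-counting procedure described above can be sketched as a small helper function. This is a minimal illustration only: the study does not specify its exact exclusion rules, so the filtering patterns below (course numbers such as "EDU 1234" and generic titles such as "weekly blog" or "entry #4") are assumptions modeled on the examples given.

```python
import re

# Illustrative exclusion patterns (assumptions; the study's exact rules are unspecified)
GENERIC_PATTERNS = [
    re.compile(r"^[A-Z]{2,4}\s*\d{3,4}$"),  # course numbers, e.g. "EDU 1234"
    re.compile(r"^weekly blog$", re.IGNORECASE),  # generic titles
    re.compile(r"^entry\s*#?\d+$", re.IGNORECASE),  # e.g. "entry #4"
]

def count_unique_terms(tags):
    """Count unique tag cloud terms, excluding course numbers and generic titles."""
    kept = {t.strip() for t in tags
            if not any(p.match(t.strip()) for p in GENERIC_PATTERNS)}
    return len(kept)
```

For example, applying the function to the tags "EDU 1234", "weekly blog", "entry #4", "assessment", and "lesson planning" would keep only the last two terms.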
A paired-samples t-test was conducted to evaluate whether the quality of students’ third portfolio entry improved in comparison to the second entry. The mean score for the third entry (M = 5.82, SD = 1.08) was significantly greater than the mean score for the second entry (M = 2.36, SD = 1.70), t(10) = 4.99, p < .001. The standardized effect size index, d, was 3.16, with some overlap in the distributions of rubric scores between the second and third entries, as shown in Figure 2. The 95% confidence interval for the mean difference between the two ratings was 1.91 to 5.00. A second paired-samples t-test compared the writing quality of the third and first entries. The mean score for the third entry was significantly greater than the mean score for the first entry (M = 0.55, SD = .52), t(10) = 22.24, p < .001. The standardized effect size index was 14.07, with no overlap in the distributions of rubric scores between the third and first entries, as shown in Figure 2. The 95% confidence interval for the mean difference between the two ratings was 4.75 to 5.80.

A Pearson correlation was computed to assess the relationship between writing quality scores for the third entry and scores assessing the level of reflection in written work (Kember et al., 2008). Results showed a statistically significant correlation, r = .77, N = 11, p < .01, d = 2.41. A second correlation, between third entry writing quality and tag cloud term counts, was also statistically significant, r = .60, N = 11, p < .05, d = 1.50. A final correlation, between third entry writing quality scores and the total number of portfolio entries, was likewise statistically significant, r = .72, N = 11, p < .05, d = 2.08.
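The d values reported alongside each correlation are consistent with the standard conversion from a Pearson r to Cohen's d, d = 2r / sqrt(1 − r²). A quick check in Python reproduces the reported effect sizes:

```python
import math

def d_from_r(r):
    """Convert a Pearson correlation r to Cohen's d: d = 2r / sqrt(1 - r^2)."""
    return 2 * r / math.sqrt(1 - r ** 2)

# Reported correlations and their effect size indices:
# r = .77 -> d ≈ 2.41; r = .60 -> d = 1.50; r = .72 -> d ≈ 2.08
for r in (0.77, 0.60, 0.72):
    print(f"r = {r:.2f} -> d = {d_from_r(r):.2f}")
```

This conversion is a common way to express a correlation on the same standardized-mean-difference scale as the t-test results reported above.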
The quality of student writing improved significantly in comparison to entries written before the intervention. This finding corroborates research by Graham and Perin (2007), who found that writing interventions aligned with procedural facilitation, product goals, inquiry, feedback, and process writing improved students’ writing skill and writing quality. However, unlike previous studies, this one utilized electronic portfolios and other online learning methods in combination. The results of this study, and perhaps more importantly the characteristics of the intervention, directly relate to the Sloan-C pillar of Learning Effectiveness. Moreover, the results add to the literature on the use of online portfolios for improving student achievement in important ways. Beyond improving the quality of student writing, the metadata analyzed in this study was predictive of the writing quality of the third entry. Notably, the total number of portfolio entries was a stronger predictor of writing quality than the number of terms shown on a tag cloud. These results suggest that metadata is useful to instructors as an informal assessment measure of the general writing quality of electronic portfolio entries. However, metadata features such as tag clouds and archives currently appear to be available only through generic blogging tools, such as WordPress and Blogger.
There were no costs associated with the intervention. Blackboard was used to manage assignment submission and the dissemination of directions and documents, but this was not a central part of the effective practice, and no-cost alternatives could be easily implemented.
Sample student portfolios:
http://hannahgj.wordpress.com/
http://mholter.wordpress.com/
http://lusbym.wordpress.com/