Session G ~ Assessing Digital Writing: Opportunities and Challenges for Programs and Instructors


Review by Justine Neiderhiser

Panelists
Judith Fourzan-Rice, University of Texas at El Paso
Crystal VanKooten, University of Michigan
Anthony Atkins and Colleen Reilly, University of North Carolina Wilmington

This roundtable led a discussion of the challenges and opportunities of assessing digital writing from both institutional and instructor perspectives.

Judith Fourzan-Rice (University of Texas at El Paso) began the session from an institutional perspective, describing how programmatic changes at her university led to the implementation of many forms of digital writing. While she described shifts towards hybrid classrooms, e-books for teacher training, and a major overhaul of the assignment sequence in first-year writing, her main emphasis was on the adoption of Miner Writer, an electronic system for distributing and assessing student writing. As Fourzan-Rice explained, when students submit their class assignments in Miner Writer, the assignments are distributed to a trained scoring committee that provides feedback to students. Teachers have access to all of the students’ work in the system, but they do not themselves grade the essays. The program allows administrators to create new assignments, edit assignments, upload late submissions, and troubleshoot issues students may experience while submitting their work. Although she emphasized the merits of such a system for her writing program, she also described important drawbacks: developing a program that works with this system is expensive (scorers need to be trained and normed, and essays should be double-scored to verify accuracy), and the system itself doesn’t have everything an administrator needs (reports must be generated manually, and there is no mechanism for programmatic assessment). The takeaway, then, was that systems for the digital collection and assessment of student writing have many affordances, but they are not fail-safe solutions to the difficulties of assessment. Every system has its own issues, and even systems like Miner Writer require a great deal of effort to implement and sustain.

Crystal VanKooten (University of Michigan) then shifted focus to models for assessing student-produced new media projects. She began by describing the difficulties she experienced when using rubrics co-constructed with her students to assess their digital projects. She found that students often lacked the language needed to express realizable goals at the start of an assignment. Consequently, she turned to the work of Stuart Selber, Paul Allison, and Michael Neal to develop a new model for assessment. Following Selber’s work, VanKooten’s model is rooted firmly in the process of goal-setting. In this model, students begin by setting both functional goals (such as using hardware or technical effects) and rhetorical goals (such as conveying a particular message to a particular group of people) for their individual projects. VanKooten emphasized the importance of working with students and providing examples to help them set these goals. A key aspect of this process is that the goals are allowed to evolve as the project develops. In addition to these goals, the model also assesses the product itself and the process students engaged in to get there. A crucial point VanKooten raised was that although this assessment model was designed with video projects in mind, it can be used much more broadly. She also emphasized the benefit of workshopping assessment models with students, sharing video of her own students responding to what they identified as the strengths and weaknesses of the model. This feedback, she explained, allowed her to develop the model in ways that made it more meaningful to students.

Finally, Anthony Atkins and Colleen Reilly (University of North Carolina Wilmington) shared their experiences with digital writing assessment as instructors of a professional writing course. Atkins began by raising concerns about the range of success that he and Reilly saw in their classes: while some students seemed to complete assignments with relative ease, others struggled greatly to accomplish the same tasks. The root of this problem, as he identified it, was that some students were unfairly disadvantaged by not being familiar with the software used in class. In response to this disparity, Atkins explained, they attempted to shift their assessments towards aspirational criteria, taking into account students’ motivations, efforts, and risk-taking. To accomplish this, they developed a method of assessment that combined primary trait scoring with student-generated levels of assessment and foregrounded reflective activities such as maintaining a daily process blog. Reilly emphasized the ways in which this Freirean assessment model informed the writing process for students. Because students were asked to establish criteria for both meeting the baseline and reaching excellence for each of the traits assessed, they were more likely to go above and beyond the minimum criteria outlined and strive to achieve what they had deemed “excellent.” In this way, Reilly emphasized, the assessment model that she and Atkins implemented informed the entire development of student projects. Key here is the function of assessment: instead of coming in at the end, this global approach was integrated throughout the writing process.

At the conclusion, the roundtable opened up for a Q&A session largely focused on practical questions about constructing assessment models and on difficult scenarios some attendees had encountered in their own assessments of digital writing. Questions were raised about whether students should perform assessments before being given (or developing) an assessment model, how teachers might discuss appropriateness (as opposed to correctness) with students, how student-set standards help them succeed in non-product-based assessments, how to assess beautiful projects that don’t match assignment criteria, and even what to do if a student is visibly intoxicated in a video. The tenor of these questions suggests that the concerns that come into play when assessing digital writing often overlap with those that arise in the assessment of any kind of writing. And, though the roundtable offered promising means through which digital writing can be assessed, the takeaway from each speaker can be extended to other forms of writing as well. The difficulties encountered by Fourzan-Rice were not really difficulties with digital writing, but difficulties with training new writing instructors, finding a way to reliably score student essays, and effectively managing burgeoning administrative responsibilities; VanKooten’s model was developed to assess student-produced videos, but could usefully be extended to any writing context with only slight modification; Atkins and Reilly were responding to what they deemed unfair differences in student experiences with software, but we could say the same thing about unfair differences in student experiences with academic texts, with particular genres, with writing more generally…

What these speakers demonstrate, then, is the utility not only of their particular approaches to assessing digital writing, but also of new media as a magnifying glass for issues that deserve more attention in writing assessment generally, regardless of medium.

Justine Neiderhiser is a doctoral student in the Joint Program in English and Education at the University of Michigan. She is currently researching the role that confidence plays in student self-assessments.
