This panel deftly blended a research study on labor-based contract grading in OWI and hybrid courses, a smart and engaging presentation about assessment broadly construed, and a visually spectacular presentation about how to use data visualization tools for writing placement. It was, in short, one of the best presentations I’ve seen at a conference in many years.
Conceptualizations of Time and Labor in Contract-Assessed Online Writing Courses
- Sydney Sullivan, University of California, Davis
- Mikenna Sims, University of California, Davis
- Jennifer Burke Reifman, University of California, Davis
Sydney Sullivan, Mikenna Sims, and Jennifer Burke Reifman (all instructors and PhD students at UC Davis) presented on research into how writing teachers thought about and experienced contract grading (particularly labor-based contract grading) while teaching hybrid and online classes. Collectively they described a tight mixed-methods study, focusing particularly on the results of interviews with eight instructors (seven contingent faculty and one tenure-track faculty member).
Sydney Sullivan opened the presentation by walking through notions of contract grading and explaining how the research grew out of a class on responding to student writing taught by Dana Ferris. It was also here that notions of labor and precarity were introduced to the discussion—more on this in a bit.
Mikenna Sims then discussed how a labor-based grading contract demands a fair amount of “emotional labor,” and can be a “disruption” (and a productive one) to the “grading ecology.” Sims also laid out the key codes that emerged from the interviews: role shifts, movement away from grade justifications, and faculty labor.
Jennifer Burke Reifman ended the talk with a fascinating discussion of the codes and made a very interesting point about the use of audio and other tools embedded in the UC Davis CMS, which changed the nature of response to student writing—particularly from a labor-based contract grading perspective.
Unflattening Assessment: Three-dimensional Thinking, Modeling, and Mapping for Classrooms and Writing Programs
- Stephanie West-Puckett, University of Rhode Island
Stephanie West-Puckett’s presentation was simultaneously the most abstract and the most concrete of the three. She moved between theoretical discussion and handing out tangible objects for audience members to manipulate, inviting us to consider a notion of writing assessment grounded in queer theory and in the idea that “failure” is often more interesting and useful than “success” in terms of assessment.
Apparently drawing on a book she co-authored, Failing Sideways: Queer Possibilities for Writing Assessment, West-Puckett suggested that hyperobjects might be an apt way to describe writing assessment, particularly institutional writing assessment. Hyperobjects are non-local, sticky, viscous, temporary, unfixed, and inter-objective. West-Puckett pointed out that humans conceptually “have problems” with hyperobjects generally, and that there is a tendency to look for “shallow” and “observable” traits of these objects—which perfectly aligned with my experience of institutional writing assessment. Too often we focus on the shapes and appearances of writing rather than on the context and more interesting elements of the object/thing/hyperobject/writing that we are assessing.
To move the audience, and perhaps the field, away from reified notions of writing assessment, West-Puckett brought out solid versions of Pyraminxes and Emoji Squishies to help all of us in the audience think a bit about what assessment might be or become. As I happily turned a blank, filled balloon into my own emoji squishie as a way of identifying how I felt, I thought about how I might, in my classroom and home writing program, move away from a lateral notion of writing assessment that privileges success and instead treat “failure as a heuristic” when considering writing.
Analyzing and Visualizing Student Writing in a Hybrid Approach to Placement
- Madeleine Sorapure, UC Santa Barbara
Madeleine Sorapure presented on how she is using data visualization software to think through the writing placement practices at the University of California, Santa Barbara (UCSB). She provided some background on the “Collaborative Writing Placement” (CWP) process that UCSB now uses, having moved away from a more traditional timed writing test for placement. She then demonstrated how she was using two AI programs, Infranodus and the OpenAI Playground, to sort through placement data (from the survey that students filled out as part of the CWP) to try to answer the question, “Can we use computer programs and data visualization to help us understand and help students in terms of writing placement?”
The answer appears to be yes, particularly using Infranodus, but Sorapure was careful to point out that her work was preliminary, and she even invited feedback from the audience.
This presentation allowed me, as someone interested in and using labor-based contract grading, to think about how I might research and explore my own practice (thank you to Sydney Sullivan, Mikenna Sims, and Jennifer Burke Reifman for this). In particular, I was grateful that their research looked at issues of labor and precarity vis-à-vis labor-based grading, and it affirmed something I have come to understand: labor-based contract grading helps make grading more equitable, but it comes at the cost of more labor—particularly emotional labor.
From Stephanie West-Puckett’s thoroughly engaging presentation, I got to spend some time thinking about how I can reconceptualize both my own classroom practice (I’m already planning to have students design emoji squishies on day one of the class I’ll be teaching this summer) and my rather linear understanding of programmatic writing assessment.
And from Madeleine Sorapure, I was introduced to Infranodus, which seems to promise a really interesting way of not only looking at writing assessment but also doing corpus research (which is part of my own research agenda) in a novel way. Sorapure also made it very evident that AI tools can help one start to see patterns in data about writing, and even in the writing itself. That seems to me a big takeaway from the presentation.
The panel as a whole did something that doesn’t often happen now that I’m slipping into the “late career” stage of my profession: I walked out with new, actionable teaching and research ideas.