Panel F.8: “Fostering Critical Engagement with AI-Technologies: Examining Human-Machine Writing, Pedagogy, and Ethical Frameworks”


Presenters: Dr. Heidi McKee, Dr. Nathan Riggs, and Alan Knowles (Miami University)

Artificial intelligence (AI) is rapidly becoming one of the most salient issues of the twenty-first century. Although AI involves an array of technologies and applications, its deployment in the production and processing of natural language is uniquely intriguing and vexing for language scholars and educators. Researchers in Rhetoric and Composition and elsewhere have for decades studied the computerized automation of language technologies, but the focus has largely been on their evaluative facility, such as grammar checkers and essay scoring programs (Shermis & Burstein, 2003, 2013). Computer technologies designed to sort, rank, evaluate, and assign quantitative metrics to human writing are surprisingly rudimentary and pale in comparison to today's AI language machines, known technically as Large Language Models. With the ability to predict and prompt writers with words, clauses, sentences, and now even paragraphs to choose from, Large Language Models have the potential to fundamentally alter the nature of writing and thus its teaching, reshaping the basic exchange between teacher and student, writer and prose.

Despite the grandiosity of the proclamations above, AI language technology is already quietly mediating our communications in simple ways. As I type this review on a word processor and then email the DRC admins, I am prompted by Word and Outlook with next-word prediction, Microsoft's statistical models necessarily influencing the direction and flow of my prose. But it is precisely the seemingly modest features of next-word prediction and related technologies that have the most profound implications. That is why Panel F.8 at the 2022 Computers and Writing Conference, "Fostering Critical Engagement with AI-Technologies: Examining Human-Machine Writing, Pedagogy, and Ethical Frameworks," offered vital inquiries on multiple fronts into the changing landscape of AI, writing, and pedagogy. The panel began with Nathan Riggs of Miami University, who meticulously detailed the ethics of language-oriented AI, a necessary theoretical prerequisite for later practical concerns of AI engagement. After reviewing historical and philosophical ethical frameworks, Riggs concluded that AI technology lacks a cognate to human agency and is therefore incapable of acting morally and thus rhetorically. Riggs's careful examination of ethical paradigms offers an analysis crucial for both the developers and users of AI technologies, motivating a deeply necessary consideration of the humanistic dimension of AI that is too often eclipsed by the excitement of its sheer technological capabilities.
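For readers curious about the mechanics, the kind of prediction described above can be illustrated with a toy statistical sketch: count which words follow which in a body of text, then offer the most frequent continuations as suggestions. This is purely illustrative and assumed for exposition; the systems discussed in this review (Word's and Outlook's predictive text, or Large Language Models like GPT-2) use far more sophisticated neural models, not raw word-pair counts.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word, k=3):
    """Suggest up to k of the most frequent words seen after `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

# A tiny illustrative corpus (invented for this sketch).
corpus = ("the teacher reads the essay and the teacher grades the essay "
          "while the student writes the essay")
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # → ['essay', 'teacher', 'student']
```

Even at this miniature scale, the sketch makes the review's point concrete: the "suggestions" are statistical regularities of prior text, nudging a writer toward the most common continuation rather than the most apt one.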

Next, Heidi McKee, also of Miami University, offered a discussion of AI chatbots and similar technology, counseling the audience to confront rather than ignore the inevitability of AI in the classroom and workplace. McKee’s presentation helpfully aggregated resources available for researchers, educators, and workers, including chatbot repositories and texts and tools for emergent research methodologies. Deftly avoiding the low-hanging pessimism that often attends the topic of automated pedagogy, McKee instead proposed an alternative: productive collaborations between humans and machines. Rather than advance the case for automating teachers, McKee’s proposed collaborations and AI-inflected digital research methods ultimately highlighted the stark differences between humans and machines, reminding even the most tech-pious among us that certain aspects of education cannot and will never be truly replicated by AI.

Finally, Alan Knowles, also of Miami University, delighted the audience with the most fascinating and challenging presentation of the group. Knowles described a digital rhetoric course he taught that incorporated AI writing technology into the course assignments. Taught at the upper level, Knowles's course demonstrated the innovative pedagogical opportunities that increasing access to Large Language Models affords educators and students alike. Using an earlier version of OpenAI's Large Language Model, GPT-2, Knowles required students to train and use the program to produce writing in a variety of genres. Both practical and theoretical, Knowles's course introduced students to a technology that is sure to be used in all manner of workplaces. More importantly, however, Knowles's course exercised new rhetorical analysis muscles, compelling students to make judgments about the compositional choices of the GPT-2 program.

For example, students were required to submit weekly responses to the semester readings, but the twist was that they had to use the AI word processor to compose them. The browser-based processor obviated the need for coding knowledge and worked essentially as a next-word predictor on steroids, providing multiple words and clauses calculated by the AI to most accurately complete sentences, which students then critically analyzed and reflected on. Rather than capitulate to the AI overlords, this dynamic allowed Knowles's students to dictate and control machine-mediated compositions, and thus wield rhetorical dominion over them. The virtue of Knowles's course is that by actually using the technology rather than simply reading articles about it, students were better able to detect its rhetorical shortcomings while also learning a new form of blended composition, one that is supplemented, but not totally supplanted, by AI. Knowles's presentation concluded by reviewing OpenAI's next-generation Large Language Model, GPT-3, which generates essay-length prose with uncanny precision rather than single clauses and sentences. The prospect of GPT-3's compositional ability would seem more terrifying if not for Knowles's innovative reclamation of AI for pedagogical purposes.

As AI technology generally and Large Language Models specifically continue their unassailable march into every aspect of our professional lives, we should take the insights of this panel to heart and try to stay ahead of the technology by reclaiming and collaborating with it. As Riggs thoroughly argued, humans will always have the upper hand in matters of communicative exchange like education, as AI is fundamentally amoral and arhetorical. While the successive achievements of machines in the realm of intelligence raise many concerns, the presenters of Panel F.8 remind me that all hope is not yet lost and that the simulation of intelligence, however proximal to the real thing, remains a simulation still and all.


Shermis, M. D., & Burstein, J. (2003). Automated essay scoring: A cross-disciplinary perspective. New York: Routledge.

Shermis, M. D., & Burstein, J. (2013). Handbook of automated essay evaluation. New York: Routledge.

About the Author

Daniel Ernst

Daniel Ernst is an Assistant Professor of English at Texas Woman's University where he studies automated language technology, educational assessment, and technical communication.
