How can users be literate in systems that evade the comprehension of even the experts who created them? For generative A.I., such a literacy seems to depend upon terms of art: misleading homonyms like ‘explainable’, ‘memorization’, ‘natural’, etc. In computer science, terms of art serve as metrics to quantitatively evaluate A.I. performance, with the aim of achieving greater efficiency in the tasks those systems were programmed to perform in the first place. Apparently, machine intelligence has blasted off; it approaches infinity and beyond while we stand still. With this post, I attempt to carve off a clean…