How can users be literate in systems that evade the comprehension of even the experts who created them? For generative A.I., such literacy seems to depend upon terms of art: misleading homonyms like 'explainable', 'memorization', 'natural', etc. In the field of computer science, terms of art serve as metrics to quantitatively evaluate A.I. performance, with the aim of achieving greater efficiency in the tasks those systems were programmed to perform in the first place. Apparently, machine intelligence has blasted off; it approaches infinity and beyond while we stand still. With this post, I attempt to carve off a clean…