Conversational artificial intelligence (AI) systems like ChatGPT frequently generate falsehoods, yet ChatGPT maintains that “I cannot lie” since “I don’t have personal beliefs, intentions, or consciousness.”1 ChatGPT’s assertion echoes the anti-anthropomorphic views of experts like Shanahan (2023), who warns, “It is a serious mistake to unreflectingly apply to AI systems the same intuitions that we deploy in our dealings with each other” (p. 1). Shanahan understandably discourages “imputing to [AI systems] capacities they lack”: anthropomorphizing AI risks vulnerable users developing false senses of attachment (Deshpande et al., 2023) and traditional interpersonal relationships being degraded by increased human-computer interaction (Babushkina…