Conversational artificial intelligence (AI) systems like ChatGPT frequently generate falsehoods, yet ChatGPT maintains that “I cannot lie” since “I don’t have personal beliefs, intentions, or consciousness.”1 ChatGPT’s assertion echoes the anti-anthropomorphic views of experts like Shanahan (2023), who warns, “It is a serious mistake to unreflectingly apply to AI systems the same intuitions that we deploy in our dealings with each other” (p. 1). Shanahan understandably discourages “imputing to [AI systems] capacities they lack,” since anthropomorphizing AI risks vulnerable users developing false senses of attachment (Deshpande et al., 2023) and traditional interpersonal relationships being degraded by increased human-computer interaction (Babushkina…