Conversational artificial intelligence (AI) systems like ChatGPT frequently generate falsehoods, yet ChatGPT maintains that “I cannot lie” since “I don’t have personal beliefs, intentions, or consciousness.”1 ChatGPT’s assertion echoes the anti-anthropomorphic views of experts like Shanahan (2023), who warns, “It is a serious mistake to unreflectingly apply to AI systems the same intuitions that we deploy in our dealings with each other” (p. 1). Shanahan understandably discourages “imputing to [AI systems] capacities they lack,” since anthropomorphizing AI risks vulnerable users developing false senses of attachment (Deshpande et al., 2023) and traditional interpersonal relationships being degraded by increased human-computer interaction (Babushkina…