Unless you’re rooting for social media bots to become Nazis, Microsoft’s Tay was a resounding failure. When she was released “into the wild” on Twitter, she learned quickly from her input data: interactions with users on the platform. As those users inundated Tay with misogyny, xenophobia, and racism, she began spouting hateful messages of her own. It has been a couple of years since Tay’s troubles, and Microsoft has since tried another bot, Zo, which has likewise run into problems. Bots remain in the news for their misbehavior; in fact, bots and bad behavior are now almost synonymous, especially in light…