Unless you’re rooting for social media bots to become Nazis, Microsoft’s Tay was a resounding failure. When she was released “into the wild” on Twitter, she learned quickly from her input data: interactions with users on the platform. As those users inundated Tay with misogyny, xenophobia, and racism, Tay began spouting hateful messages of her own. It’s been a couple of years since Tay’s troubles, and Microsoft has since tried another bot, Zo, which has likewise had a few problems. Bots remain in the news for their problems; in fact, bots and bad behavior are now almost synonymous, especially in light…