Unless you’re rooting for social media bots to become Nazis, Microsoft’s Tay was a resounding failure. When she was released “into the wild” on Twitter, she learned quickly from her input data: interactions with users on the platform. As those users inundated Tay with misogyny, xenophobia, and racism, Tay began spouting hateful messages of her own. It’s been a couple of years since Tay’s troubles, and Microsoft has even tried another bot, Zo, which has likewise had its problems. Bots are still in the news for their misbehavior; in fact, bots and bad behavior are now almost synonymous, especially in light…