    Introduction: Angela Glotfelter

By Angela Glotfelter on September 24, 2018 | DRC Grad Fellows

[Image: The author stands in front of a Super Pac-Man arcade game.]

As kids, my siblings and I would often play Super Mario Bros. on the NES, blasting through the first Mushroom Kingdom level to get to the “basement” and, eventually, to “water world” and Bowser’s lair beyond. We’d also spend hours in my grandparents’ basement, where they had original Ms. Pac-Man and Crazy Kong arcade games.

Those memories stand out as my introduction to digital media and rhetoric. Though we don’t often get the chance to sit down around the same console anymore, my siblings and I still game together when we can; we’ve just graduated to World of Warcraft.

Professionally, this interest in digital media has translated into a curiosity about how humans and machines work with and against each other to create action. Asking these kinds of questions has taken me to many places, including working with small nonprofits to examine their use of social media. Now, I wonder things like: What does success on social media mean? And, perhaps more importantly, how do we sometimes define online success in ways that clash with or even contradict the missions and purposes of our community partners?

More recently, I’ve been following how conversations in our field have shifted from social media to platforms to the algorithms that govern those platforms. And the questions both academics and the public are asking of algorithms are big ones.

I think of Safiya Noble’s recent Algorithms of Oppression, where she examines the ways that algorithms have been built, and sometimes have learned, to behave in discriminatory ways. I think of Tarleton Gillespie, who warns us to beware of algorithms that claim to know “the mind of the public” in objective, factual ways when, in reality, many algorithms are modeled on datasets that are only proxies for what is actually being measured.

At a time when edge providers like Facebook and Twitter are called upon to do something about the hate speech and fake news spreading like wildfire on their platforms, these companies are looking for scalable algorithmic solutions that will let them minimize their human costs and maximize capital. But, currently, Facebook’s algorithms can only identify hate speech a measly 38% of the time, leaving the rest of the work to the 7,500 content moderators the company now employs (Koebler & Cox, 2018).

    These issues leave a lot of unanswered questions—questions that I hope to explore in part during my time as a DRC fellow. If you’d like to get in touch or collaborate, you can reach me at glotfeam@miamioh.edu or at @amglotfelter on Twitter.

    Angela Glotfelter

    Angela is a PhD student in Composition and Rhetoric at Miami University of Ohio, where she researches how content creators deal with the effects of algorithms on their work.
