As we wrap up our 13th Blog Carnival, we want to thank all of our contributors for their engaging and thought-provoking ideas. In January 2018, we shared our CFP with the goals of considering how digital rhetoricians are being called to help fill important theoretical voids in the ethics of AI technologies and how intelligence in AI is identified/defined, and by whom. In response, our contributors have offered critical observations about the connections between AI and digital-rhetorical theories. Here are three main themes that emerged from the seven posts:
1. Feminist resistance to AI narratives
In our first post, “Tropes of Feminine AI,” Patricia Fancher examines feminine AI through chatbots and personal assistants, arguing that “feminized AI fit into the same tropes that women have been placed in for centuries: sex object or mother.” Fancher reminds us: “With the prevalence and persuasive capacity of these sexy chatbots and feminized personal assistant apps, we are reminded once again of the importance of this research: the codes of feminized intelligence reinforce the roles that are assigned as typical or appropriate for women.” Meanwhile, Marcia Bost in “Pay No Attention to the Man Behind the Curtain” offers a critique of several “temptations” and ethical limitations of AI. These include: (1) the temptation “to delegate routine tasks to a chat bot” instead of offering a human connection to students; (2) our willingness to believe that more data “will result in greater understandings of ourselves as humans and of the universe itself,” which may not be true; (3) the need to “take into account how the medium…affects the messages we send;” and (4) the ethical questions surrounding our treatment of AI as agents themselves, rather than examining the people who create and code those responses.
2. AI reflexivity and authorship
As writing teachers and scholars, we are concerned about the ways in which AI might impact our pedagogy and writing processes. A key observation our contributors offered in this blog carnival is how AI activates the writer’s sense of self. In “Becoming the Bot,” Jeremy David Johnson shares with us his experience with a “counterintuitive” approach to learning how bots think––by becoming a Twitter bot himself. From his experimentation, Johnson identifies the conditions for self-awareness in AI bots and warns us against their lack of rhetorical reflexivity. Heidi McKee & Jim Porter look at authorship from an instructional perspective in “The Impact of AI on Writing and Writing Instruction,” a must-read for those who are concerned about AI instructors, teaching assistants, and how deep learning might affect the future of writing instruction. McKee & Porter think teachers should “advocate for a critical engagement with technology development to insure that its designs and uses are truly smart, not just convenient or cost cutting, appropriate for our educational mission and goals and for our students.”
3. Ethical considerations in AI development
In “Good AI Computing Well,” Jennifer Maher highlights the importance of the rhetorical education of AI as an ethical endeavor. Reflecting on some of the initial research in AI and its development thus far, Maher emphasizes rhetoric’s role in relation to language and the opaque methods through which these systems are generated. Maher concludes that AI “must compute through a rhetorical and ethical understanding of the world.” Meanwhile, in “Bots or Ghosts? Ethical Considerations of Bots as Ghostwriters,” Charles Grimm explores the increasing prevalence of bots as ghostwriters, particularly on social media platforms. He examines dlvr.it, an automated service used by many businesses to create posts. Through this examination, he offers an ethical consideration of our use of such services, including questioning whose interests we serve in this process, how this work can affect “authenticity,” and the consequences of using these services, particularly to curate a professional digital identity.
In “Inscrutable AI: Deep Learning and the Problem of Technology and Trust,” Andrew Kulak looks at the ethical implications of deep learning, “a type of machine learning that implements digital structures similar to those of neurological systems and uses the neural networks to process information.” Using rhetoric as a framework of critique, Kulak suggests that rhetoric can help us understand how these AI “shift the object of analysis from artificial intelligence as a unit to what Jane Bennett describes as assemblages, or agentive collections of human and non-human actors,” as well as create methods of accountability for deep-learning AI.
AI brings much complexity to what we do in our field. We hope this blog carnival opens up discussion about:
- the need to consider questions of ethics, authenticity, and human(ity/ness) as we grapple with AI;
- a need to recognize where and how we use AI (professionally, personally, etc.) and what kinds of relationships AI are creating and/or replacing; and
- alternative ways of using AI (feminist methodologies, e.g.).
Please leave comments here or under the individual blog posts to let our contributors know what you think. Happy reading!