One of the many memorable scenes in The Wizard of Oz shows Dorothy and her friends overawed by the enormous wizard. When the little dog sniffs out the man behind the mechanics, the “wizard” says, “Pay no attention to the man behind the curtain.” I’m concerned that we humans may be similarly distracted by the shiny affordances of Artificial Intelligence: passing the Turing test, winning Jeopardy, turning off our lights and locking our doors, ordering our pizza, and driving our cars. I suggest that we must pay attention to those “behind the curtain” who create AI, especially when considering ethics.
Patricia Fancher (2016), in “Composing Artificial Intelligence,” exposes these “fathers” of AI algorithms, suggesting that they are embedding their own patriarchy in the code. She describes in detail the chat bot that passed the Turing test, a feat based on deceit by its makers, who coded it not to know everything. Since it deceived only 33 percent of the scientists with whom it was chatting, I suggest that a rhetorical focus on the ability of any chat bot to pass this low bar obscures its use as a tool by its creators. I will explore the temptations of algorithmic-based AI and the limitations of algorithms for ethics.
How great it would be to delegate routine tasks to a chat bot! My own personal Siri could respond to online students who don’t post an assignment while I could be shopping yard sales or even reading those scholarly books on my endless list. However, my usual response (“What happened?”) is an invitation to a dialogue, and Siri might not be able to respond ethically. How would a chatbot balance the scholarly standard of documented quotes with grace and mercy for a student who has a child with the flu, a new baby, and/or a military deployment? In addition, research has shown that students want to experience the instructor’s presence in an online classroom (Capra 2011). If my online students suspected a chatbot was responding instead of a human, their worst fears about the impersonality of distance learning would be confirmed.
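To make the limitation concrete, consider a minimal sketch of how such a responder is typically built. The function and its keyword rules below are hypothetical, not drawn from any real courseware bot; the point is that every “choice” the bot makes was scripted in advance by its coder, and anything outside those scripts falls through to a canned default:

```python
# Hypothetical sketch of a rule-based instructor bot. Every reply it can
# give, and every circumstance it can "recognize," was chosen by its coder.
def instructor_bot(message: str) -> str:
    """Return a canned reply based on keyword rules the coder wrote."""
    rules = [
        # (keywords the coder anticipated, the coder's prepared reply)
        (("flu", "sick", "baby"),
         "I hope things settle down soon. Please post when you can."),
        (("deployment", "deployed"),
         "Thank you for your service. Let's arrange an extension."),
    ]
    text = message.lower()
    for keywords, reply in rules:
        if any(keyword in text for keyword in keywords):
            return reply
    # A circumstance the coder did not anticipate gets the default reply:
    # the bot cannot weigh grace or mercy against situations it was never
    # coded to recognize.
    return "What happened?"
```

A student whose apartment flooded, for instance, would receive only the default reply, because no rule anticipated that situation; the bot’s apparent judgment is exactly as wide as its maker’s list of keywords.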
The second temptation of AI is the proposition that greater data will result in greater understanding of ourselves as humans and of the universe itself (Maher 2016). It seems to be the dream of AI enthusiasts that a supercomputer or a network of supercomputers will be able to crunch all the data and finally we will answer all of our hearts’ questions, including thorny ethical questions such as who are my neighbors and how should I treat them. The AI creed seems to posit that algorithms can describe all natural phenomena, including human relations. Yet even within the field of mathematics, not all concepts can be quantified or proved, as Ming Li and Paul M.B. Vitanyi (2007) point out in “Applications of Algorithmic Information Theory.” I understand this to refer to such problems as the final digit or repeating pattern of pi, the last numeral of a googol, or Fermat’s Last Theorem (that a^n + b^n = c^n has no whole-number solutions for n > 2). In the end, we may find that greater data will not lead to better ethics or even a better understanding of ourselves.
Another temptation is to ignore the well-known media theory of Marshall McLuhan, who argued that the medium is the message. In other words, the medium we use, whether that be video or chat bots or Snapchat, will affect the way we compose and receive the message. For instance, Nicholas Carr in “Is Google Making Us Stupid?” raises concerns that online reading, while providing more access to knowledge through “power browsing,” has led to less critical reading and thinking. Timothy Laquintano and Annette Vee (2017) in “How Automated Writing Systems Affect the Circulation of Political Information Online” describe in detail how the creators of chat bots, the automated systems themselves, and the end users together created an ecology of online writing. I would add that the ethics of both the creators and the end users should be further explored. Carr points to the commercial interests of internet giants like Google and Facebook. Recent revelations of Facebook’s selling of user data (McKenzie 2018) reinforce this concern, but the company has been doing so for several years, as Hachman (2015) reports. Thus, we must take into account how the medium, with its little-known capacity to collect all our data, affects the messages we send and the ethics we embody.
Finally, the greatest temptation might be to consider algorithmic-based AI to be an agent in and of itself, apart from those who coded the responses. While I will discuss in more depth the question of a chatbot’s agency (as defined by sociologists Mustafa Emirbayer and Ann Mische) in my presentation at Computers and Writing 2018, I suggest now that chat bots as currently configured can only respond to patterns of human discourse that have been coded into their databases by their “fathers.” Any ethical choices will be a result of their mathematical DNA, and therefore future ethical analysis must be transparent about the purpose and authorship of these fathers. In other words, I am seconding the call of Nick Seaver in “Knowing Algorithms” (2014) to “examine the logic that guides the hands, picking certain algorithms rather than others, choosing particular representations of data, and translating ideas into code” (10). Without explicitly pointing to the coders, Gillespie (2017) also suggests that we need to know the assumptions and priorities on which algorithms are based. To be more specific, we human end users need to know the ethical creed espoused by the fathers of chatbots and other automated writing. Are they utilitarians like Bentham who might code their chat bots to avoid suffering for humans, possibly leading computers to eliminate human suffering by eliminating humans (Maher 2016)? Is there a HAL 9000 (2001: A Space Odyssey) somewhere in our future who will decide that we are jeopardizing “the mission”? Those are ethical questions that we need to consider.
Capra, Theresa. (2011) “Online Education: Promise and Problems.” MERLOT Journal of Online Learning and Teaching 7, no. 2: 228-234.
Carr, Nicholas. (2008) “Is Google Making Us Stupid?” The Atlantic. Accessed 3/1/2016 at https://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/
Emirbayer, Mustafa, and Ann Mische. (1998) “What Is Agency?” American Journal of Sociology 103, no. 4: 962-1023. Stable URL: http://www.jstor.org/stable/10.1086/231294. Accessed 9/29/17.
Fancher, Patricia. (2016) “Composing Artificial Intelligence: Performing Whiteness and Masculinity.” Present Tense 6, no. 1. Accessed 2/6/18 at http://www.presenttensejournal.org/volume-6/composing-artificial-intelligence/
Gillespie, Tarleton. (2017) “Algorithmically Recognizable: Santorum’s Google Problem, and Google’s Santorum Problem.” Information, Communication & Society 20, no. 1: 63-80. DOI: 10.1080/1369118X.2016.1199721.
Hachman, Mark. (2015) “The Price of Free: How Apple, Facebook, Microsoft and Google Sell You to Advertisers.” PC World. Accessed 3/25/18.
Laquintano, Timothy, and Annette Vee. (2017) “How Automated Writing Systems Affect the Circulation of Political Information Online.” Literacy in Composition Studies 5, no. 2. Accessed 3/23/18.
Li, Ming, and Paul M.B. Vitanyi. (2007) “Applications of Algorithmic Information Theory.” Scholarpedia 2, no. 5: 2658. Accessed 3/9/18.
Maher, Jennifer. (2016) “Artificial Rhetorical Agents and the Computing of Phronesis.” Computational Culture. Accessed 1/31/18 at http://computationalculture.net/artificial-rhetorical-agents-and-the-computing-of-phronesis/
McKenzie, Sheena. (2018) “Facebook’s Zuckerberg Says Sorry in Full-Page Newspaper Ads.” CNN, March 25, 2018. Accessed 3/26/18 at https://www.cnn.com/2018/03/25/europe/facebook-zuckerberg-cambridge-analytica-sorry-ads-newspapers-intl/index.html
Seaver, Nick. (2014) “Knowing Algorithms.” Media in Transition 8, unpublished draft. Accessed 3/23/18.