In (Partial) Defense of Algorithms

Defending the Digital

There is a noticeable dystopic strand in digital rhetoric scholarship – a sense that we have a responsibility to extend rhetorical analysis into the digital because we need to find out what’s really happening, to discover how our world is being undermined or overtaken. This is even more pronounced in popular media, where headlines such as “Don’t Believe the Algorithm” and “If an Algorithm Wrote This, How Would You Even Know?” stoke popular fears about robot overlords emerging from parthenogenetic, mysteriously mathematical creation cycles. Rhetorical scholarship rarely becomes so fearmongering, but it also tends to take on a defensive posture, looking for responses to the problem of the digital rather than approaching the digital on its own terms.

The Problem of Cooperation

To be fair, there is much to be concerned about. One has only to look through trade publications such as Cathy O’Neil’s Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy to get a sense that technology, AI, and algorithms are threatening humans’ ways of life. In academia, for example, O’Neil describes the way U.S. News & World Report rankings have changed colleges’ priorities and topography, encouraging luxury sports complexes and housing rather than faculty development and student diversity (53-57). Where Kenneth Burke (1969) claimed war was “the ultimate disease of cooperation” (22), O’Neil would claim the ultimate disease of digital cooperation is a massively scaled, invasive, unquestioned algorithm.

The question is no longer whether we should know about such digital objects, but how best to go about it (Vee 2012; Jackson 2014). David M. Rieder (2012) points out that “if you can’t write code, if you can’t think with code, if you can’t write algorithmically, you may eventually find yourself stuck in the logocentric sands of the past.” Yet Aaron Beveridge (2015) admits that even if rhetorical scholars do know how to code, we rarely know enough.

Human Muddling

That said, there is hope even within the most threatening of algorithmic invasions. While we can condemn complex systems, they are also fertile sites for discovering how human muddling preserves or can counter existing cultural biases. By this, I mean that coders and programmers often cobble together various programs or data sets in order to solve an immediate problem. One coder, Maciej Cegłowski, described the situation using two fictional coders, Chad and Brad, who “are not specific people. They’re my mental shorthand for developers who are just trying to crush out some code on deadline, and don’t think about the wider consequences of their actions,” which is why they use outdated or racist data to create algorithms that have far-reaching consequences. This insider description of the pressures and shortcuts explains how, for example, gender biases become embedded in the word-sorting algorithms that guide Google search results (Rutkin 2016). At the same time, there has been a great deal of work, including that of Dwork et al. (2011), that aims to counter such biases at the algorithmic level.
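The mechanism Rutkin describes can be sketched in miniature. The toy word vectors below are invented for illustration – real systems learn embeddings (e.g., word2vec) from enormous text corpora – but they show how ordinary vector arithmetic over biased data reproduces biased analogies of the kind Bolukbasi et al. (2016) document, such as “man is to computer programmer as woman is to homemaker”:

```python
import math

# Invented 3-dimensional "embeddings" standing in for vectors a real model
# would learn from a biased corpus; the numbers here are illustrative only.
EMBEDDINGS = {
    "man":        (1.0, 0.1, 0.3),
    "woman":      (-1.0, 0.1, 0.3),
    "programmer": (0.8, 0.9, 0.2),
    "homemaker":  (-0.8, 0.9, 0.2),
    "engineer":   (0.7, 0.85, 0.25),
    "nurse":      (-0.7, 0.8, 0.3),
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' via the vector offset b - a + c."""
    target = tuple(EMBEDDINGS[b][i] - EMBEDDINGS[a][i] + EMBEDDINGS[c][i]
                   for i in range(3))
    candidates = {w: v for w, v in EMBEDDINGS.items() if w not in (a, b, c)}
    # The nearest remaining word completes the analogy - along gender lines,
    # because the invented vectors encode a gendered corpus.
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("man", "programmer", "woman"))  # homemaker
```

No one wrote a sexist rule here; the bias arrives silently with the training data, which is exactly the muddling-on-deadline problem Cegłowski names.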

The role of digital rhetoricians is to be part of the team identifying dangerously scaled algorithms. We must question who made them, why, and how before we decide how best to respond.

An Example

Consider the top-secret algorithm driving Facebook’s news feed. Few algorithms have received as much attention as this one, especially in the wake of the 2016 presidential election. Will Oremus (2016) managed to discover only the basics when researching his profile of it for Slate: the news feed algorithm is a complex of smaller algorithms, each of which assigns a value to particular news items. What appears on a person’s feed is the result of those designated values. Sometimes the feed shows only the high-value stories, and sometimes it mixes high- and low-value stories in order to gauge whether the algorithm’s determination of “high value” meets user preferences.
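Facebook’s actual code remains secret, but the general pattern Oremus describes – many small scorers whose values are combined into a ranking, with the occasional low-value story mixed in as a test – can be sketched roughly as follows. Every signal name and weight here is a hypothetical stand-in:

```python
import random

# Each small function scores one signal; real systems reportedly combine
# hundreds of such sub-algorithms. These names and weights are invented.
def recency_score(story):
    return 1.0 / (1.0 + story["hours_old"])

def affinity_score(story):
    return story["interactions_with_poster"] / 100.0

def engagement_score(story):
    return story["likes"] / (story["likes"] + story["hides"] + 1)

SCORERS = [(recency_score, 0.5), (affinity_score, 0.3), (engagement_score, 0.2)]

def rank_feed(stories, explore_rate=0.1, rng=random):
    """Rank stories by their combined weighted score. Occasionally surface
    the lowest-scoring story near the top to test whether the algorithm's
    notion of 'low value' actually matches the user's preferences."""
    scored = sorted(stories,
                    key=lambda s: sum(w * f(s) for f, w in SCORERS),
                    reverse=True)
    if len(scored) > 1 and rng.random() < explore_rate:
        scored.insert(1, scored.pop())  # mix in the lowest-value story
    return scored
```

Even in this toy version, the rhetorical stakes are visible: whoever chooses the weights in `SCORERS` decides whose posts a user sees.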

More specific information was revealed in a 2014 study published in PNAS. Written by Facebook and Cornell researchers, the study is ethically questionable in terms of whether or not test subjects knew they were being studied. That aside, it did reveal one way that Facebook designates news “value.” In order to determine how to prioritize “emotional content,” Facebook skewed hundreds of thousands of users’ news feeds, showing either predominantly positive or predominantly negative emotional content as determined by an established linguistic model (8789). The results showed a small but significant “emotional contagion” effect: people who saw predominantly negative or positive content in their news feeds were likely to post correspondingly negative or positive status updates. Even the control group was revealing: those shown less emotional content of either kind were also less engaged generally.
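The experiment’s mechanism can likewise be sketched. The study classified posts by counting words against an established lexicon (LIWC); the tiny word lists below are invented stand-ins for it, but the filtering logic follows the same shape:

```python
# Stand-in word lists; the actual study counted words against the much
# larger LIWC lexicon rather than these four-word sets.
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def classify(post):
    """Label a post by comparing counts of positive and negative words."""
    words = set(post.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def skew_feed(posts, suppress):
    """Withhold posts of one polarity, as the experiment's reduced-positivity
    and reduced-negativity conditions did."""
    return [p for p in posts if classify(p) != suppress]
```

A crude word count, scaled to hundreds of thousands of feeds, becomes an instrument for nudging collective mood – which is precisely why the scale of such muddling matters.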

This all seems to support the debunking approach. However, as Oremus points out, “When the algorithm errs, humans are to blame. When it evolves, it’s because a bunch of humans read a bunch of spreadsheets, held a bunch of meetings, ran a bunch of tests, and decided to make it better.” Here is where digital rhetoric should stake a strong claim. The aim should not be primarily defensive, by which I mean that we should not simply concern ourselves with self-protection from algorithms (Brunton and Nissenbaum 2015). Rather, we should approach algorithm-building humans on their own terms.

Generosity and Hope

Humans make our algorithmic world, and they do so in collaboration. In those many spreadsheets and boring meetings, humans are deciding what our digital world will be. The Facebook news feed algorithm is the work of dozens of ever-changing teams working like mosaic artists. They take small pieces from other works and fuse them together. Before we condemn them, though, it’s important to note that those building algorithms don’t always realize the values they salvage. They could put a shard of Sainte-Chapelle next to one from Walter Womacka’s “Marx Window.” As I’ve noted before, they almost certainly don’t know the mathematical underpinnings of each individual piece, only that it usually generates particular results. This means a flawed or outdated mathematical program – perhaps one that statistically favors a particular group of people or a particular value – gets built into the final algorithm, for social good or ill, and the builders are oblivious.

That’s not to say we shouldn’t uncover these values and cope with them, but digital rhetoricians, no matter how skilled in coding, are ill-equipped unless we revise our single-author research model. I’m certainly not the first to advocate for more collaborative projects (Beveridge 2015; McGrath 2011). Nor am I the first to argue that these collaborations ought to occur more often across disciplinary borders. In the context of digital rhetoric, such collaboration must aim to uncover the human behind the machine. Such awareness should enable us to confront the humanity behind a digital object before responding to it. A machine may churn out an answer in milliseconds; critics of digital objects shouldn’t feel compelled to do the same.

Hannah Fry argues that we should treat algorithmically driven systems with the same caution we treat any human creation. She describes a moment during the Cold War when a report came in from the Soviet Union’s nuclear early warning system: five incoming missiles from the US. The military officer in charge hesitated: five missiles seemed too few. And it was too few – there was a flaw in the algorithm, one caught because the officer waited, analyzing the situation and the possibility that the algorithm had made an error. The human responded to the machine not as an infallible mathematical object, but as an object that – like the humans who created it – could be flawed. Could be wrong.

Accepting the capacity for human error embedded within all such machines and working alongside those who know how to discern it enables us to better understand a digital object’s internal construction and its power within our world.

One more point – if and when we discover hope in the depths of any of these newly opened black boxes, if and when we find humans making our world better – we have a responsibility to hold it up as evidence of the human capacity to continue to create good, even if sometimes by happenstance.

References

Beveridge, A. (2015). Looking in the dustbin: Data janitorial work, statistical reasoning, and information rhetorics. Computers and Composition Online. Retrieved from http://cconlinejournal.org/fall15/beveridge/.

Biggs, J. (2018). This tech (scarily) lets video change reality. TechCrunch. Retrieved from https://techcrunch.com/2018/09/11/this-tech-scarily-lets-video-change-reality/amp/

Bolukbasi, T. et al. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. New York City: Fairness, Accountability, and Transparency in Machine Learning Workshop. Retrieved from https://arxiv.org/abs/1607.06520

Brunton F., & Nissenbaum H. (2015) Obfuscation: A user’s guide for privacy and protest. Cambridge: MIT Press.

Burke, K. (1969). A rhetoric of motives. Berkeley: University of California Press.

Cegłowski, M. (2016). Who will command the robot armies? Idle Words. Retrieved from http://idlewords.com/talks/robot_armies.htm

Dwork, C., et al. (2011). Fairness through awareness. Retrieved from https://arxiv.org/pdf/1104.3913.pdf

Fry, H. (2018). Hello, world: Being human in the age of algorithms. New York: W. W. Norton & Co.

Jackson, R. (2014). Four notes towards propaganda and the post-digital symptom. APRJA, 3(1). Retrieved from www.aprja.net/?p=1388.

Juszkiewicz, J., & Warfel J. (2016). The rhetoric of mathematical programming. Enculturation. Retrieved from http://enculturation.net/the-rhetoric-of-mathematical-programming

Kleiner, J.P. (2018). Walter Womacka and East Berlin’s Socialist face. Retrieved from https://gdrobjectified.wordpress.com/2018/01/15/walter-womacka/

Kramer, A. D. I., et al. (2014). Experimental evidence of massive-scale emotional contagion through social networks. PNAS, 111(24), 8788-8790. Retrieved from http://www.pnas.org/content/pnas/111/24/8788.full.pdf

Fry, H. (2018). Don’t believe the algorithm. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/dont-believe-the-algorithm-1536157620

Matas, J. (n.d.). Art at Facebook HQ [Painting]. Retrieved from https://newsroom.fb.com/media-gallery/artists-in-residence/jonathan-matas/

McGrath, L. (Ed.). (2011). Collaborative approaches to the digital in English studies. Logan, UT: Computers and Composition Digital Press/Utah State University Press. Retrieved from http://ccdigitalpress.org/cad

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Crown.

Oremus, W. (2016). Who controls your Facebook feed. Slate. Retrieved from http://www.slate.com/articles/technology/cover_story/2016/01/how_facebook_s_news_feed_algorithm_works.html

Rieder, D. (2012). Programming is the new ground of writing. Enculturation. Retrieved from http://enculturation.net/node/5267

Rutkin, A. (2016). Lazy coders are training artificial intelligences to be sexist. New Scientist. Retrieved from https://www.newscientist.com/article/2115175-lazy-coders-are-training-artificial-intelligences-to-be-sexist/

Vee, A. (2012). Coding values. Enculturation. Retrieved from http://enculturation.net/node/5268.

About Author(s)

Jennifer is on the English faculty and Director of the Writing Center at Saint Mary's College, Notre Dame, Ind. She is also a Ph.D. candidate at Indiana University - Bloomington, specializing in spatial and digital rhetoric, composition theory, and WAC/WID institutional history.
