In 2023, hundreds of think pieces predicted “The End of Writing,” thanks to the arrival of generative AI chatbots. Before the digital ink could dry, the cautionary tales began to roll in. Notable among them: a lawyer asked ChatGPT to write a legal brief in support of his client’s spat with an airline. He filed the brief without noticing that the chatbot had invented, and confidently cited, imaginary cases.
Ha, ha. So we still need human editors. So far, so funny. Once we’ve heard 100 more such tales, should we conclude: Ho hum?
Um—No. Definitely not.
The recent warning statement from the Center for AI Safety about the future risks AI poses to humanity was irresponsibly vague. It obscured the fact that AI has already killed people. AI will not wreak havoc like a pandemic or climate change. It works in a much simpler and more familiar manner.
AI kills when we invite it to serve in place of human experts. And it kills the underprivileged, not the experts who delight in its increasing efficiency. It kills when we take it at its word (never mind that every word it generates was looted from human memory, via CommonCrawl, and filtered for bias and hate speech by underpaid ghost workers in Kenya). AI kills when we invite it to take the pilot’s seat.
We typically talk about self-driving cars as AI, but not self-flying planes, perhaps because passengers might not trust “self-flying planes.” In fact, for decades, Boeing has mostly designed airplanes that fly themselves, with minimal supervision from pilots, who receive less training than in the past. They thus acquire less expertise, or “airmanship,” a term for a human expert pilot’s physiological-cognitive sense of how to fly a plane in various challenging circumstances, which can only be gained by flying, a lot, in various challenging circumstances,
a plane that is not flying itself.
When two Boeing 737 Max airplanes crashed, in October 2018 and March 2019, killing 346 people in Indonesia and Ethiopia, and grounding 340 other active 737 Max airplanes all over the world, the fault was clearly Boeing’s. Boeing engineers designed and tested the plane, and Boeing technical writers, presumably, wrote the flight manuals. But what exactly was their mistake?
In very rare circumstances, the airplane’s obscure MCAS (Maneuvering Characteristics Augmentation System) made it all but impossible for two inadequately trained pilots to escape a death dive. Astonishingly, the pilots did not even know the system existed. “Boeing believed the [MCAS] system to be so innocuous, even if it malfunctioned, that the company did not inform pilots of its existence or include a description of it in the airplane’s flight manuals,” as William Langewiesche explained.
How did this terrible oversight get past Boeing’s engineers and writers? The utterly preventable loss of 346 lives was the direct consequence of four converging errors, all related to AI:
1) overconfidence in artificial intelligence (which might more accurately be called advanced automation);
2) the erosion of training, and thus expertise, as a consequence of overconfidence in artificial intelligence;
3) profit-driven haste in design, production and documentation; and
4) inadequate audience awareness.
In assuming that it need not even mention the MCAS system to pilots, or in the flight instruction manuals, Boeing was still designing and documenting commercial passenger aircraft for expert pilots from major industrialized nations, not for undertrained pilots flying for underregulated airlines in developing, underprivileged countries. After all, just two of the ~342 identical aircraft flying around the world in 2019, clocking an average of 8,600 flights per week, crashed. Tragically, more experienced pilots could easily have pulled the airplanes out of the death dive. Even more tragically, the actual pilots could have escaped it, had they ever been instructed in the MCAS system.
AI will continue to cause damage, and even to kill, when we invite it to take the place of human expertise. Experts will make mistakes when they over-rely on AI. And students will fail to develop expertise in every field in which they over-rely on AI: the same human expertise that AI, like a vampire, feeds on to develop and grow in the first place.
Developing critical editing skills has never been more important than it is now. By critical editing skills, I mean applying to a text one’s discipline- and context-specific expertise and socio-political awareness of the rhetorical situation. A strong critical editor can deeply analyze and understand the audience, purpose, genre and changing context for a given communication task before beginning to write, or, as is becoming ever more common, before beginning to edit a draft generated by AI. Of course, the flight instruction manuals for the Boeing 737 Max were not written by AI, but they surely relied on templates, and were probably inadequately updated from the manuals for the previous 737 model. Incremental advancements in technology often feel inconsequential, but they are not. They add up, fast.
Generative AI was built, incrementally, on the achievements of advanced automation, which is not new. We must slow down and see through the hype around generative AI to understand the ways in which it is not new. The risks it presents are not new. They are just a whole lot bigger, and they are moving faster.
If Boeing engineers and writers had slowed down enough to register how much the audience, purpose and context for their airplanes were changing, from the former audience of extremely experienced pilots flying semi-automated planes to a newer audience of less experienced pilots flying almost fully automated planes, 346 people would not have lost their lives. (Nor would Boeing have lost at least $19 billion.)
Let’s hope the engineers and writers at Merlin, “the aviation technology company propelling the future of fully autonomous flight,” have strong critical editing skills.