When talking about A.I., Google’s Cassie Kozyrkov asks you to imagine an island of drunk workers, an island that does the mindless, repetitive work that humans don’t want to do. We’re used to thinking about computers this way—but when they become better writers than us, things grow complicated.
Natural Language Generation (NLG), the subset of A.I. that powers computer-written language, essentially works by predicting a response to given data. Feed it an incomplete sentence, and it calculates how likely each candidate next word is to follow. In our daily lives, this is built into language tools like Siri, customer-service chatbots, and autocomplete. Neural networks, inspired by our brain’s neurons, are the recent technology that has allowed these models to unlock breakthrough capabilities and scale.
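The prediction step described above can be sketched with a toy bigram model, which only looks at the single previous word. This is a deliberate simplification: real systems use neural networks over the full context, but the idea of ranking candidate next words by probability is the same.

```python
from collections import Counter, defaultdict

# Count word pairs (bigrams) in a tiny corpus, then rank candidate
# next words by how often they followed the previous word.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_probs(prev):
    """Return each candidate next word with its probability P(next | prev)."""
    counts = bigrams[prev]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Given the prefix “the”, the model assigns “cat” the highest probability simply because it appeared after “the” most often in the training text.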
Thus, as neural networks grow more advanced, computer-generated text grows more human.
At OpenAI, a leading research laboratory in California, a new language model has made headlines: meet GPT-3, described by OpenAI as “an autoregressive language model with 175 billion parameters.” Pre-trained on text from around the internet, GPT-3 uses the Transformer architecture (one type of neural network) to analyze previously seen words and deduce what follows. By picking out mathematical patterns in its training data, it maps out the structures of language and can write nearly anything.
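The autoregressive loop at the heart of such models — predict a word, append it, predict again — can be sketched as follows. The probability table here is hand-written for illustration; a real model like GPT-3 computes these probabilities with a Transformer over the entire preceding context.

```python
# Hypothetical next-word probabilities, hard-coded for illustration only.
# A real autoregressive model computes these with a neural network.
probs = {
    "once": {"there": 1.0},
    "there": {"was": 1.0},
    "was": {"a": 1.0},
    "a": {"man": 0.6, "musk": 0.4},
}

def generate(prompt, steps=4):
    """Greedy autoregressive decoding: repeatedly append the most
    probable next word until no continuation is known."""
    words = prompt.split()
    for _ in range(steps):
        candidates = probs.get(words[-1])
        if not candidates:
            break
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("once"))  # once there was a man
```

Each generated word becomes part of the input for the next prediction — that feedback loop is what “autoregressive” means.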
So what has it composed?
On top of penning news articles, computer code, short films, and tweets, the program has also generated Dr. Seuss poems about Elon Musk—“Once there was a man / who really was a Musk. / He liked to build robots / and rocket ships and such”—and a collection of emotional romances for the New York Times’ Modern Love column.
Yet we’re led to ask: what happens when GPT-3 is made public? As generated text sounds more human, so do fake news and nonsense, making this technology easy to exploit for disinformation. Furthermore, the internet that taught GPT-3 is the same internet filled with bias and discrimination. So when the time comes, will we be able to distinguish machine-generated from human-written text? Nils Köbis and Luca D. Mossink of the University of Amsterdam suggest not: in a study testing whether computers can pass as human poets, they concluded that “participants were incapable of reliably detecting algorithm-generated poetry.”
Not all news is worrying, though. Such revolutionary technology brims with positive potential. One possibility is that language generators will become tools that make our jobs easier, producing code for programmers and automating marketing and customer service. As for the future of human artistry, Dartmouth College professor Dan Rockmore encourages us to embrace A.I. as a creative collaborator: “We could think of ‘machine-generated’ as a kind of literary GMO-tag—or we could think of it as an entirely new, and worthy, category of art.”
In any case, A.I. is no longer just an island of drunk workers. This prompts the question: are we entering a progressive world with new forms of productivity and inspiration, or will GPT-3 ignite a science fiction reality of robot usurpers?
Köbis, Nils and Luca D. Mossink. “Artificial Intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate AI-generated from human-written poetry.” ScienceDirect, 8 Sep. 2020.
Kozyrkov, Cassie. “Advice for Finding AI Use Cases.” 14 June 2018.
Metz, Cade. “When A.I. Falls in Love.” The New York Times, 24 Nov. 2020.
Metz, Cade. “Meet GPT-3. It Has Learned to Code (and Blog and Argue).” The New York Times, 24 Nov. 2020.
OpenAI. “Language Models are Few-Shot Learners.” arXiv, 22 Jul. 2020.
Piper, Kelsey. “GPT-3, explained: This new language AI is uncanny, funny – and a big deal.” Vox, 13 Aug. 2020.
Reid, David. “How GPT-3 Is Poised to Change the Employee Benefits Landscape.” Forbes, 30 Nov. 2020.
Rockmore, Dan. “What happens when machines learn to write poetry.” The New Yorker, 7 Jan. 2020.
Sabeti, Arram. “Elon Musk by Dr. Seuss (GPT-3).” 14 Jul. 2020.
Simonite, Tom. “The AI Text Generator That’s Too Dangerous to Make Public.” WIRED, 14 Feb. 2019.
Sunnak et al. “Evolution of Natural Language Generation.” SFU Professional Master’s Program in Computer Science, 15 Mar. 2019.