February 13, 2023
Anthony Brandt and David Eagleman, in their book The Runaway Species: How Human Creativity Remakes the World, offer a three-part framework for understanding how novel things are created. Through three cognitive maneuvers—Breaking, Blending, and Bending—humans produce creative artifacts, solutions, and products, not out of nothing (like an all-powerful deity), but by reconfiguring found materials, tools, and objects into new arrangements and values.
What happens when a machine breaks, blends, and bends an age-old academic performance task, like composing essays? What happens when intelligent machines, like OpenAI’s ChatGPT, start to deterritorialize the landscape of writing into one where humans are writing with machines? As we confront this conundrum, I recommend exploring Kevin Kelly’s framing of our relationship with machines in the age of AI: “Everyone will have access to a personal robot, but simply owning one will not guarantee success. Rather, success will go to those who best optimize the process of working with bots and machines” (The Inevitable 58-59). Later he writes, “This is not a race against machines. If we race against them, we lose. This is a race with the machines. You’ll be paid in the future based on how well you work with robots” (60). I want to emphasize this claim: one of the most valued human competencies in the age of machines will come down to how well we work with machines.
There are already examples of concerted efforts to “work against” ChatGPT, and many have written about it from a rightfully concerned perspective, warning that it will break the college essay or the high school English class. This is the kind of “novel” outcome—Brandt and Eagleman’s act of Breaking—that could undoubtedly have negative consequences for society as a whole.
And I do share these concerns: I think of the anecdote from Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans in which she and the famous cognitive scientist Douglas Hofstadter met with Google’s AI research team. Hofstadter, to the surprise of Google’s AI team, expressed a sense of terror at what Google was trying to accomplish and the speed at which it was trying to get there. However,
“Hofstadter’s terror… was not about AI becoming too smart, too invasive, too malicious, or even too useful. Instead, he was terrified that intelligence, creativity, emotions, and maybe even consciousness itself would be too easy to produce–that what…