February 13, 2023
Anthony Brandt and David Eagleman, in their book The Runaway Species: How Human Creativity Remakes the World, offer a three-part framework for understanding how novel things are created. Through three cognitive maneuvers—Breaking, Blending, and Bending—humans produce creative artifacts, solutions, and products, not out of nothing (like an all-powerful deity), but by reconfiguring found materials, tools, and objects into new arrangements and values.
What happens when a machine breaks, blends, and bends an age-old academic performance task, like composing essays? What happens when intelligent machines, like OpenAI’s ChatGPT, start to deterritorialize the landscape of writing into one where humans are writing with machines? As we confront this conundrum, I recommend exploring Kevin Kelly’s framing of our relationship with machines in the age of AI: “Everyone will have access to a personal robot, but simply owning one will not guarantee success. Rather, success will go to those who best optimize the process of working with bots and machines” (The Inevitable 58-59). Later he writes, “This is not a race against machines. If we race against them, we lose. This is a race with the machines. You’ll be paid in the future based on how well you work with robots” (60). I want to emphasize this claim: one of the most valued human competencies in the age of machines will come down to “how well we work with machines.”
There are already examples of concerted efforts to “work against” ChatGPT, and many have written about ChatGPT from a rightfully concerned perspective, such as how it will break the college essay or the high school English class. This is the kind of “novel” outcome—Brandt and Eagleman’s act of Breaking—that undoubtedly could have negative consequences for society as a whole.
And I do share these concerns: I think of the anecdote from Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans, in which she and the famous cognitive scientist Douglas Hofstadter met with Google’s AI research team. Hofstadter, to the surprise of Google’s AI team, expressed a sense of terror at what Google was trying to accomplish and the speed at which they were trying to get there. However,
“Hofstadter’s terror… was not about AI becoming too smart, too invasive, too malicious, or even too useful. Instead, he was terrified that intelligence, creativity, emotions, and maybe even consciousness itself would be too easy to produce–that what he valued most in humanity would end up being nothing more than a ‘bag of tricks,’ and a superficial set of brute-force algorithms could explain the human spirit” (11).
I agree. To echo a point made by my friend and colleague Jeannette Lee Parikh, the value of teaching writing is not, from a competency-based perspective, only about communicating and conveying information effectively. Its deeper value lies in how it slows us down in a fast-brain world to do slow-brain work, that is, deep reflective thinking. Hofstadter, I think, worries that certain technological shortcuts will short-circuit profoundly important human activities that are integral to the healthy development of the self.
But just because there is a danger that machines could “work for us” in a way that’s damaging to the human spirit does not mean we must necessarily “work against” them as educators. I’d like to explore, with the co-authorial assistance of ChatGPT, the ways we can “work with” AI tools to discover examples of breaking, blending, and bending human activities—skills and tasks that are positively valuable and exciting. So, I asked ChatGPT for assistance:
Yes, ChatGPT may have broken some elements of the traditional essay-writing process in ways that give cause for concern, but it is also a tool that can help us “break” writer’s block and researcher’s block by aiding in brainstorming and by helping a student get started: organizing ideas and producing a basic draft for revision, rewriting, and improvement. In other words, ChatGPT can potentially serve as a catalyst for creative thinking.
One could make the case that a novel feature of ChatGPT is how powerfully it blends a personalized assistant (like Alexa) with a wealth of synthesized information (like Wikipedia), along with the ability to keep learning. Again, what a powerful tool for brainstorming, planning, and learning! The other day, a student was working on a project designing a pop-up restaurant for students and faculty at school. The student was chatting with the OpenAI system to gather ideas for the restaurant’s concept and design. In other words, ChatGPT can potentially serve as a catalyst for strategic thinking.
ChatGPT claims that AI could help “improve the overall quality and coherence of [students’] writing” and “enhance their writing style.” Perhaps one way to do this is to bend the concept of authorship itself. Descartes famously argued that a building designed by a single architect would be superior because the totality of its design would be more coherent, whereas a building worked on by multiple architects would be less uniform and therefore less favorable. I think almost every design firm in the 21st century would disagree with this argument, at least in its simplest form: we know that collaborative design teams produce superior results, and the same could be said about writing. Perhaps we’re moving even further away from Descartes’ dream of intellectual and creative individualism; perhaps writing with machines is a natural next step in this collaborative unfolding. What if students were asked to bring a piece of AI-generated writing to class and tasked with making choices to improve it, with the expectation that they explain the reasons for those choices? In other words, ChatGPT can potentially serve as a catalyst for critical, evaluative thinking.
As a former English teacher, toward the end of my tenure I became more and more uncomfortable with the age-old model: reading a common text, engaging in discussions, direct instruction, short writing, and finally a cumulative essay for a final score. Too often, I lacked confidence when evaluating the level of cognition a student’s essay expressed. Was it simply recall, a parroting back of everything said in class? If writing were just about communicating information, perhaps that would be fine. But I wanted writing to serve as a performance task that sparks, inspires, and develops deeper levels of cognitive expression. Working with machines, and having kids evaluate and make choices to improve the quality of a draft while also explaining their reasoning, could be a way of assessing this more precisely.
If I am correct that “how well we work with machines” will be a crucial competency for success in the age of machines, then we risk missing a huge opportunity by becoming the schools that banned calculators instead of the ones that integrated them into authentic learning environments. This is our calculator moment in the Humanities, and how we frame and respond to it will have decades-long ramifications.