Wise Integration: Sea Squirts, Tech Bans, and Cognitive Artifacts (Summer Series) | Brent Kaneft | 22 Min Read

July 25, 2023

“If we don’t change direction soon, we’ll end up where we’re going.”

“Professor” Irwin Corey

A sea squirt will metabolize its own brain. This hermaphroditic organism produces tadpole-like larvae that immediately swim to their eternal home, attach headfirst, and then proceed to absorb their eye, their spine, and their brain. The sea squirt isn’t being reckless but prudent—this cannibalizing process supports the development of its “digestive, circulatory, and reproductive organs.” Once the eye, spine, and brain have navigated the larva home, they serve no further function. There is a well-worn joke that the same thing happens to tenured professors. More charitably to tenured professors, this is a process most humans participate in: find a home, design for homeostasis, and then reduce the need for the brain through automatic processes or default modes—the brain is an “energy hog” and will take shortcuts whenever possible. Efficiency is its goal.

This isn’t a criticism; it’s biology (and sociology): brains crave predictability and seek efficiency, and both attributes make predictable, routinized environments desirable—disruption demands effort, and brains want to conserve energy. This reality makes change hard, at least initially. Think about the first time you used a smartphone: you expended a lot of cognitive energy trying to manipulate this new tool, perhaps became frustrated from time to time, until at last its functions were so integrated into your life as to be second nature (e.g., when was the last time you paused to remember how to share a video, image, or podcast with a friend?).

This neurotrait is why, for example, classroom management is so crucial. Students’ brains want a routine, behavioral expectations, and clear goals that prevent cognitive overload. Without that structure, students are constantly on alert and struggle to learn. What’s more, with good classroom management, the use of novelty to gain students’ attention can be even more powerful. Students can handle it because of the predictable, reliable environment the teacher has created. They can risk disruption, disorder, and cognitive dissonance because they trust the teacher.

The strategies, tools, activities, and innovations we integrate into the classroom, then, should be subject to thorough vetting, because whatever we add to this space changes the environment and the developmental outcomes of our students. Recently, Jonathan Haidt, NYU professor, bestselling author, and co-founder of Heterodox Academy, wrote a piece for The Atlantic, “Get Phones Out of Schools Now,” which calls for banning smartphones because “[t]hey impede learning, stunt relationships, and lessen belonging.” Haidt argues that those who believe “[s]martphones can be useful teaching tools […] and may make it easier for some teachers to create engaging lesson plans,” forget that “any increase in engagement during a lesson may be offset by students getting distracted during the same lesson.” He’s arguing that banning smartphones, a technology that has upended everything about our world, is a good thing for human development and, I must presume in this case, an effective way to deal with innovation.

Often, though, banning innovations feels too conservative, too ostrich-with-your-head-in-the-sand, too boring. Innovations are supposed to save us, redeem us, and improve our lots in life—tides and boats simultaneously rising. And there’s no escaping them anyway, apologists argue: smartphones, for example, are here to stay, along with the social media apps so essential for living a good life. Haidt’s ban is setting students up for failure in the real world, right? 

The Social Institute (TSI),[1] for example, whose mission is to “[offer] a gamified, online learning platform that empowers students to navigate their social world positively—including social media and technology—to fuel their health, happiness, and future success,” rejects the banning approach. They “aim to empower and equip, not scare and restrict.” Their product is alluring; it appears to address a hole in many schools’ programming, a head-on approach to social media use (among other social situations)[2] that parents, teachers, and administrators can support. Even better, they offer “turnkey lessons” (so teachers love it!)[3] and there is “[n]o knowledge of or experience with social media required”—just think about that for a second. Phones and social media are here for good, they maintain, and we need to help students navigate this virtual world.

On the surface, it’s hard to argue with their philosophy, until one remembers Audre Lorde’s famous observation: “The master’s tools will never dismantle the master’s house.” In other words, we cannot bend the algorithmic manipulation baked into social media platforms to our will. To believe otherwise is foolish. But TSI has been featured at several independent-school conferences and workshops, so I assume business is good.

It concerns me, though, that from what I can glean on their “Leadership” page, the company is led by a former social media manager, a front-end web developer, a former employee-relations officer, a former educator, and a brand manager. Though their research advisory committee includes some expertise in adolescent development, cognitive psychology, and neuroscience, where is the expert who has spent years designing social media to manipulate user attention? In what world does this company stand a chance against Silicon Valley’s best and brightest, who are exploiting neural circuitry through complicated algorithms? The audacity, the hubris of it, is startling. Social media is designed to capture attention and time, mostly by amplifying negativity and human folly. Can it be used for positive purposes? Of course, but who will sincerely argue that it has improved community, relationships, mental health, communication, or democracy? Who will stand by the statement, “The good outweighs the bad”? This criticism isn’t aimed at the actual leaders—I have met several of them, and they are kind, generous people—but I am troubled by the assumption that social media is a force we cannot reckon with for the sake of our students’ wellbeing.[4]

We are seeing people everywhere unplug from social media for lack of sustenance, grassroots organizations like “Wait Until 8th” gaining traction with parents,[5] and documentaries like The Social Dilemma rising to the top of Netflix (38 million views in the first four weeks). Are we truly fighting a losing battle against social media?[6]

As we all know by now, social media, among other technologies, is designed to create addicts because its business model relies on ads (the good news is, we can design it differently). What other highly addictive opportunities are our students possibly (or likely) engaging with: fast food, vaping, drinking (though perhaps less than past generations), pornography? Each of these hijacks neurotraits that create unhealthy addictions. Would Alcoholics Anonymous tell participants they merely need to navigate the perils of drinking more effectively, in order to keep alcohol in their lives? Or do they help them abstain from drinking altogether because they understand the power of the addiction? What good argument is there for integrating these types of opportunities into our students’ lives in “healthy” ways simply because students are going to engage with them whether we like it or not? Tristan Harris, co-founder and executive director of the Center for Humane Technology and featured in The Social Dilemma, argues that “Human downgrading is the climate change of culture”: “while we’ve been upgrading the machines we’ve been downgrading humans: downgrading our attention spans, downgrading our civility and our decency, downgrading democracy, downgrading mental health.” When I shared this quote with a lifelong friend of mine, Hamilton Davis, VP of Regulatory Affairs at Southern Current, a clean energy company, he questioned whether humans are being downgraded or revealed:

Our technology, specifically social media in this case, is revealing how humans behave in a particular technological context. Our culture, customs, institutions, and man-made environments have immense influence over how individuals conduct their lives. It is all too human to be captured by social media. That is what we have to understand and respond to. (emphasis mine)

He’s right, I think: our capacity to be deluded by a virtual world is being exposed, and our neurology hacked for the betterment of the economy, not human welfare. To solve human problems that cause existential anxiety (i.e., suffering, isolation, boredom, a lack of meaning), we are not doubling down on building resilience or comfort-with-the-discomfort in our students: we are impotently observing the slow and steady downgrading of humanity and the raw exposure of our frailty, and companies like TSI are abetting the effort.

My criticism of TSI is not that I am anti-technology; I am against profit-driven enterprises that masquerade as pro-student by teaching students how to take just enough poison to keep them alive but still enslaved. When we outsource student development to profit-driven companies, bad things can happen. Mistakes are made, mainly because there is a perverse incentive to ignore research on the science of learning and adolescent development. Consider the recent exposure of the “science of reading” scandal. Often, innovation blinds us to our responsibility as expert educators—ensuring that what we integrate into our programs, from curricular shifts to edtech products, is good for kids. The Rome of adolescence is burning, and companies like TSI are selling marshmallows when they should be selling fire extinguishers.

But again, banning, even temporarily, has little appeal in the marketplace of ideas. It is, in some sense, anathema to the zeitgeist to spend time in reflection about whether to integrate an innovation or not. The International Baccalaureate maintained this line when addressing AI software: “The IB will not ban the use of AI software. The simplest reason is that it is an ineffective way to deal with innovation.” Their Head of Assessment Principles and Practice, Dr. Matthew Glanville, stated that banning “is the wrong way to deal with innovation.” The integration of generative artificial intelligence may be pedagogically appropriate—I see tremendous potential in Khan Academy’s Khanmigo, an AI-powered guide and tutor—but I am less convinced by Dr. Glanville’s calls for celebration: “Ultimately what AI is likely to mean in the longer term is that we spend less time teaching the mechanics of essay-writing or communication and more on understanding, describing, and analyzing problems. This is something we can celebrate rather than fear” (emphasis mine). Dr. Glanville’s background is in math and mine in English, which means we probably have vastly different relationships with “teaching the mechanics of essay-writing or communication,” but my bias aside, what his statement suggests is that those forms of cognitive development will be replaced by AI software. Why that should be celebrated is lost on me, but how its erasure improves students’ ability to “understand, describe, or analyze problems” requires some deep reflection and research. 

Daniel Schmachtenberger, a founding member of The Consilience Project, recently argued that “The complexity of the situation”—in this conversation, referring to the development of democracy—“means I need more working memory to hold the whole [my view and someone else’s view and someone else’s view at the same time, right],” and…

[i]f my working memory and attention span becomes too small, I can only be a fundamentalist who has reactionary views that fit into soundbites. Because the truth is too nuanced to fit into a soundbite, so you have to have much longer attention spans, which means you have to train attention span, which means rather than fast cycles of low dopaminergic stuff, I need stuff like reading. That’s boring […]. And then also writing. When the medium was reading and writing, writing also meant I had to be able to understand what I thought well enough to articulate it in structured form. And so I’m not saying we go to a pre-digital age, we’re not going to, but we have to look at […] the cognitive capacities that those media [like reading and writing] were developing in us and say, in this new media age, how do we develop those same cognitive capacities? (emphasis mine)

In other words, what cognitive exercise (like yesterday’s reading and writing) does Glanville believe will develop students’ cognitive ability to “understand, describe, or analyze” problems? Cognitive development takes time, and while adults, who had to read and write and be “bored”[7] in school, see these innovations as a gift to student engagement, students may see them as tools to game the system as it’s currently designed—incentivized for grades, learning outcomes, and achievement as opposed to understanding and cognitive, social, and emotional development. As Joel Backon, editor of Intrepid Ed News, suggested, “Technology is on one evolutionary trajectory and student learning is on another. Currently, they are not in sync,” which doesn’t mean they can’t be; but until our schools are redesigned to maximize human-centered goals, innovations like ChatGPT will be a problem.

Dr. Zachary Stein, another founding member of The Consilience Project, as well as a philosopher of education, psychologist, futurist, and author of Education in a Time Between Worlds, in an incredibly engaging conversation with Mary Helen Immordino-Yang on the Future Learning Design Podcast with Tim Logan, suggests that this imbalance between what is good for human development and what appeals to the lower angels of our nature may be resolved by placing technological restrictions on students before they reach a certain age:

Already, like Facebook, TikTok, these places build psychometric profiles based on your behavior and then customize what they deliver to you [i.e., curative AI]. And so that’s, I think, problematic.  We’re reaching a state with what’s called generative AI [e.g. ChatGPT], where you’ll be able to couple the curative AI, which we already have, with the generative AI, both of them customized to your psychological profile. So that’s when it starts to get bad. Because this thing will be able to create very convincing text, very convincing video images, without a human prompting it, just based on what it thinks you want. And so you end up with this inexorably persuasive and incredibly charismatic, endless conversation partner customized to you. So if the phone didn’t already kind of break schools, these things are going to, and the only way out, I believe, well, I think we should regulate the technology. I think, honestly, that basically if you are below 18, you should have a very limited set of things that you can have access to technologically. I think we’re going to learn that lesson possibly soon with this thing. With the curation and creation, coupled, means that it’s an endless, endlessly fascinating, perfectly made for your limbic system set of experiences. So we may cross that threshold. Again, I think the only way out of that is to do the richer, embodied experience that is always more fascinating than anything the computer can generate. (emphasis mine)

We live under the powerful illusion that our regulatory powers are evolving exponentially alongside our technology, that there is a parallel trajectory, a beautiful symbiosis; but our brains, our limbic systems, as Zak Stein suggests, are being exploited, helplessly manipulated by perverse incentives and biological hacks. Perhaps the most threadbare quote of the 21st century comes from the American biologist and writer E.O. Wilson: “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.” Companies like The Social Institute use the same biological hacks to increase student engagement in their program, but what they offer is not a higher ideal of what humanity can be; what they offer is the delusion that humans can manipulate these social media platforms to their advantage without sinking into the quicksand themselves. And here is one of the great challenges educators face today: despite all of the demands on teachers’ attention and time, they have to design an environment for students where transcendence, through student discovery, is possible—is the norm, actually. As the yogis say, you will never trade a higher taste for a lower one, but if you’ve only eaten fatty, sugary, and salty fast food, it is a real challenge to choose (and then to enjoy) a fresh salad. This is the real reason, I believe, Jonathan Haidt is calling for the removal of smartphones—living in infinite distraction removes the possibility of personal transformation and transcendence. Why bring infinite distraction into our classrooms?

So the strange paradox of innovation is that every innovation has the potential to be an existential threat to the physical, social, spiritual, and cognitive development of humans. The allure is the convenience (our brains are always looking to save energy!) and the potential innovation offers, but the human cost can be staggering, either immediately or slowly, like mold secretly growing behind attractive wallpaper. To return to Tristan Harris’s point: machines are improving as humans downgrade in various ways. As professional educators, we have to ask whether an innovation will prove detrimental to the fundamental qualities we want to develop in our students. To take an example from parenting, consider the built-in backseat television screen. In Last Child in the Woods (2008), Richard Louv wonders,

Why do so many Americans say they want their children to watch less TV, yet continue to expand the opportunities for them to watch it? More important, why do so many people no longer consider the physical world worth watching? The highway’s edges may not be postcard perfect. But for a century, children’s early understanding of how cities and nature fit together was gained from the backseat: the empty farmhouse at the edge of the subdivision; the variety of architecture, here and there; the woods and fields and water beyond the seamy edges—all that was and is still available to the eye. This was the landscape that we watched as children. It was our drive-by movie.

For temporary convenience—a predictably peaceful and quiet car and a reduction of parental energy output (admittedly, in my house, a resource in short supply)—we sacrificed the “useful boredom” that helped our children develop geographical awareness. So kids are clueless about the surrounding environment, which leaves them completely reliant on GPS systems. A human without navigational awareness and some sense of direction is a fundamentally different type of human—and the narrative of innovation always ends there, in fundamentally different types of humans, underdeveloped in some way compared to previous generations. These “cognitive artifacts” like GPS systems—tools we have created that either “amplify” or “replace” our “representational ability”[8]—have, in the words of Donald Norman, who coined the term, made “possible the modern intellectual world.” But for many of us, daily navigation has become completely reliant on GPS systems, which is why David Krakauer, President of the Santa Fe Institute, divides cognitive artifacts into two camps: complementary and competitive.

As Glanville’s prediction suggests, AI software like ChatGPT is competitively replacing the cognitive task of writing essays, among other forms of writing. Students will eventually use this tool to replace the taxing and sometimes boring task of organizing their thoughts on paper. That task, however, helped students develop the representational ability to form sound arguments in their heads during, for example, a discussion with a friend about a political topic. That representational ability (what they graduate with and use, whether they know it or not, in their life beyond high school) is what students will possibly lose when schools integrate ChatGPT to its fullest capacity, and schools will choose to do so behind the unfettered belief that these competitive cognitive artifacts—calculators, GPS, ChatGPT—“aren’t going away,” which Krakauer calls a “totally reasonable” response to people like me who are concerned that these artifacts will permanently and negatively “[alter] the cultural landscape.” But when your GPS doesn’t work, what happens? When the teacher removes the calculator from your desk, what happens? When you have to form opinions and arguments without ChatGPT, what happens? Impotence, ignorance, infantilism: these are the human qualities we are enhancing the more we embrace and integrate competitive cognitive artifacts into our lives. Again from Joel Backon: “As well-educated adults, we have a very different view of tech tools than a student. We can opt to use the tool depending on ‘how lazy’ our brain feels at a given moment. Students see the tool as a way of circumventing understanding/meaning-making, and that is dangerous. That is how AI bots will become like the HAL9000 (“2001: A Space Odyssey” reference).” 

Donald Norman reminds us that there are two views of cognitive artifacts: the system view and the personal view. The former is an outsider perspective, and from that angle we see “the total structure of person plus artifact” and witness cognitive enhancement (e.g., a person using a GPS system arrives at their destination faster than a person navigating with a traditional road map—this is often called “progress”). The “personal view” sees “how the artifact has affected the task to be performed” (i.e., the driver faces a different cognitive task depending on which tool they’re using). In other words, the GPS system doesn’t enhance cognition; it replaces it. But the system view is how many educators see innovations: only the appearance of cognitive amplification, not the reality that the cognitive task has simply changed. And when the cognitive task changes, human development changes, which is why I believe, as Jonathan Haidt argued, banning certain technologies may be an effective way to deal with innovation, at least until we are more confident that the way a cognitive artifact will change student development is positive.

Some waiting period, some restraint, some time to reflect is the foundation of wise integration. As Daniel Schmachtenberger argues, wisdom “bind[s] naive versions of progress in a way that [is] actually better for the whole long term.” While Dr. Glanville may be correct in his projections, his reason for celebrating the removal of the “mechanics of essay-writing or communication” from the curriculum emerges, I believe, from an economic perspective, not a human development perspective. He is falling on his knees before the god of efficiency, which is why our modern institutions are in crisis. More efficiency, in the history of our world, has never been the necessary ingredient for more enlightened human beings.

Arguably, our technology will become so advanced that human cognition will be outcompeted at every level—a serious concern regarding where artificial intelligence is heading—which would mean that humans, like the sea squirt, can place their brains on the shelf. Schools are in a serious predicament: using advanced AI systems to replace human cognition will, on the surface, appear “innovative” and in the spirit of preparing students for their futures, but, as “Professor” Irwin Corey’s quote at the beginning of this article suggests, we are in danger of arriving exactly where we’re heading: the atrophying of the human brain. We may be regressing to something like sea-squirt status.

The question of whether human social, emotional, and cognitive development is essential is what’s at stake. I am not arguing that human development must equate to a person’s worth or bear some relationship to economic thriving, but if we reach the point where human development is unnecessary to sustain and thrive in life, then we are at the end of human civilization. It seems inevitable, given the advancements in technology, that for most people, cognitive development will be irrelevant in an open marketplace. This is the precipice unwise capitalism has brought us to.

It is the problem of Moloch, the topic of my next article. And the only reasonable advice I can give any educator or school leader comes from Azeem’s character (played by Morgan Freeman) in Robin Hood: Prince of Thieves: “If you would be free men, then you must fight. Join us now!”

You may also be interested in reading more articles written by Brent Kaneft for Intrepid Ed News.


[1] Full disclosure: My current school partnered with TSI last year, and we are in a 3-year contract with them. The Social Institute has been made aware of this article but does not directly or indirectly endorse the views of the author.
[2] TSI does take a broad perspective on social media. Their expanded definition of social media encompasses any way in which students engage socially through technology, such as using chat features in Google Docs for school projects, connecting with family through FaceTime, or even writing thoughtful emails to mentors for internships. For this article, I am mostly focused on popular social media like Instagram, TikTok, etc.
[3] When teachers don’t have any skin in the game, lessons suffer, which means students’ classroom experience suffers. Remember, “turnkey” or “canned” is a marketing ploy, not sound pedagogical practice. In reality, TSI would claim that the teacher still plays a key role in selecting lesson materials and facilitating lessons to best fit their classroom. The teacher continues to play an active role, with help from The Social Institute’s platform and content to keep teachers aware of student experiences in general and to better understand their own students’ experiences.
[4] TSI does, to be fair, spend some time educating students about the dangers of social media.
[5] If I had my preference, people would wait until 25, when the prefrontal cortex is, on average, fully developed.
[6] From “What If the Next Big Social Media App Is…Nothing” by Max Chafkin: “Social media itself has been in decline for years. Partly because of a growing awareness of the way these apps invade privacy, and partly because of a growing awareness that they’re tacky as hell, most of us have stopped posting about our beach trips, or our latte art, or even our kids. (In a telling rebuke of Zuckerberg’s Law, the man himself recently posted a family photo in which the faces of his two older daughters were deliberately obscured.) The decline of engagement meant that there wasn’t enough in our feeds to keep us coming back. So Facebook and other platforms ginned up new content for us, showing us an endless scroll of influencers, memes and outrage porn. As our friends got too busy living their actual lives to document them on social media, Facebook and Instagram copied TikTok, replacing feeds of friends with feeds of influencers you’ve never met and probably wouldn’t want to.”
[7] To be clear, I do believe we can design reading and writing to be engaging for students. I am parroting what I often hear from adults who didn’t enjoy school.
[8] Terms used by David Krakauer in an interview with neuroscientist and philosopher Sam Harris. All Krakauer quotes come from this source. Available in Making Sense: Conversations on Consciousness, Morality, and the Future of Humanity (2020): 353–383.

Brent Kaneft

Brent Kaneft is Head of School at Wilson Hall School, a PK-12 independent school, in South Carolina. He holds a master’s in literature from James Madison University and earned his master’s degree in educational leadership from Indiana University (Bloomington) in May 2022. Since 2016, Brent has led teacher workshops on how to translate Mind, Brain, and Education (MBE) research strategies into the classroom, and since 2020, he has focused on research-informed practices in the areas of social-emotional learning, mindfulness, and equity and inclusion. Brent’s recent publications include "The Belonging Apocalypse: Woke Bypassing, Contemplative Practices, and a Way Forward for DEI" (IntrepidEd News) and "The Problem with Nice: Moving from Congenial to Collegial Cultures" (Independent School Magazine).
