Agency & Artifact

Fear Is Not a Framework

Why AI Panic Misses the Point

Recent research from MIT triggered a familiar media frenzy: “ChatGPT is changing your brain,” headlines declared, as if that fact alone were cause for alarm. It is changing your brain, but so did writing, arithmetic, Google, and the invention of the index card. The capacity for cognitive change is not a flaw. It is the signature of human intelligence.

The Study and the Panic

The MIT study, led by Nataliya Kosmyna, used EEG to measure brain activity as 54 participants completed essay-writing tasks in three modes: unaided, assisted by Google Search, and assisted by ChatGPT. When using ChatGPT, participants showed a sharp drop in neural activity, especially in frontal and parietal regions tied to memory, semantic processing, and executive control. This was not just a momentary lull: even after writing stopped, the brain remained quiet. Kosmyna and her team called this “cognitive debt.”

That term sounds ominous, but the finding itself is unsurprising. We have known for years that offloading mental labor changes how and where work happens in the brain. When we start using GPS, our spatial reasoning atrophies. When we rely on calculators, we lose fluency in arithmetic. But we also gain something: speed, reach, and the ability to redirect our minds toward higher-level thinking. Tool use always involves tradeoffs.

The nuance here matters. The same MIT data showed that when participants drafted ideas first and used ChatGPT later, neural engagement remained robust. The brain only grew quiet when users began by outsourcing the hard part. In other words, it is not the tool that is the problem. It is the timing.

What Thinking Actually Looks Like

This insight echoes a century of cognitive theory. Lev Vygotsky argued that higher mental functions are shaped by tools: cultural, social, and technological. Offloading is not an error. It is how we evolve. But that evolution depends on how, when, and why we integrate new tools into our thinking. If we use AI as scaffolding, we build stronger minds. If we use it as a substitute, we risk mental flattening.

Cognitive offloading is not a glitch in the system. It is a core function of intelligence. From tally sticks and clay tablets to chalkboards and spreadsheets, humans have always externalized thought. Andy Clark called us “natural born cyborgs,” meaning that we are built to think not just with our brains, but also through our environments.

Despite this, our cultural reflex is to conflate intelligence with unaided mental labor: memorization, calculation, solitary reasoning. But that is not how people solve problems in the real world. Intelligence is not how much you can store in your head. It is how well you use the resources around you. It is adaptability, not austerity.

And yet, in academic and professional settings, we treat effort as virtue. We have been trained to see shortcuts as dishonest and struggle as proof of seriousness. But struggle is not always meaningful. Sometimes it is just friction. Sometimes the smart move is using the tool, provided you know why you are using it.

This is why the MIT findings should not spark panic. They should spark pedagogy.

AI in the Classroom: A Better Approach

If students rely on ChatGPT to write entire essays without reflection or revision, the failure is not just theirs. It is ours. It means we are assigning tasks that reward passivity instead of judgment. It means we have built systems that measure outputs instead of thinking.

There is a fix. It begins with design.

Instead of assigning “Write a five-page paper on climate policy,” a more effective prompt might be: “Use AI to generate an initial draft. Then revise it extensively. Submit both versions, along with a one-page reflection detailing what the AI missed, where you added complexity, and what changed in your thinking.” In that model, AI becomes a thought partner rather than a ghostwriter.

This approach reframes authorship. It values synthesis over generation, and revision over regurgitation. It also reflects how professionals work. Good writing is not a burst of originality. It is a process of refinement, critique, and strategic choice-making. AI can speed up the first step. It should never be the last one.

None of this is new. We have been here before.

When calculators entered classrooms, critics warned they would destroy math literacy. When Google became ubiquitous, Nicholas Carr asked whether it was “making us stupid.” Plato, writing millennia ago, worried that the written word would undermine memory. In each case, the concern was the same: that a new tool would erode the cognitive skills we most prized.

In each case, the result was not collapse but transformation. Writing reduced the need for memorization, but enabled science, philosophy, and law. Calculators diminished arithmetic fluency, but expanded what math could accomplish. Google changed what we remember, but enhanced how we find and evaluate information.

ChatGPT is part of this lineage. What makes it feel more threatening is its encroachment into domains long held sacred: language, creativity, argument. But the underlying process has not changed. We are still externalizing parts of cognition to make room for something new.

The question is: what new thing are we making room for?

Unfortunately, the media response to the MIT study did not ask that. Instead, it defaulted to fear. Headlines screamed about lazy brains and disappearing neurons. The actual findings, which were subtle, specific, and well contextualized, were reduced to alarmist soundbites.

This is more than clickbait. It is a failure of public science communication. And it is not hard to see why. Journalists are under pressure. The audience demands certainty. The story must be clear and urgent. And, importantly, many of the people interpreting this study (writers, editors, and pundits) are among those most threatened by generative AI.

When a new tool threatens your relevance, it is easy to see it as a threat to cognition itself.

But tools do not determine outcomes. People do. As Donald Norman put it, design and intention shape results. AI is not a philosophy. It is a lever. Used carelessly, it reduces effort. Used wisely, it provokes engagement.

That wisdom is what we must teach.

Five Principles for AI-Era Pedagogy

1. Separate process from product. If we continue assessing only final outputs, students will always look for the fastest way to generate them. Instead, build assessments that reward iteration, revision, and critical engagement. Make thinking visible.

2. Scaffold AI use explicitly. Do not just say “AI is allowed.” Model how to use it well. Provide examples of strong prompts, side-by-side comparisons of AI drafts and human revisions, and annotated feedback that shows what improvement looks like.

3. Center reflection. Require students to articulate what the AI got right, what it missed, and what choices they made during revision. This reinforces metacognition and helps students develop ownership over AI-assisted work.

4. Differentiate tool roles. AI is useful for brainstorming, tone smoothing, and early drafting, but not for fact-checking or original argumentation. Build rubrics that reflect those boundaries and teach students to evaluate AI’s role accordingly.

5. Modernize honesty policies. Blanket bans prevent transparency, not cheating. Create clear, realistic guidelines for appropriate AI use and explain the rationale. Trust students enough to give them rules worth respecting.

None of this requires a total curricular overhaul. It requires intentionality. The same principles that apply to good teaching (transparency, scaffolding, reflection, and iteration) also apply to teaching with AI. The challenge is not to reinvent education from scratch. It is to stop pretending the landscape has not changed.

The Real Question

Cognitive offloading is not an intellectual failure. It is a feature of how we grow. We write to remember. We draw diagrams to understand. We use metaphors to think abstractly. We are always extending the mind into the world.

As Clark and Chalmers argued in “The Extended Mind,” cognition is not confined to the skull. It operates through language, gesture, artifacts, software. Vygotsky said the same a century earlier: thought is mediated by tools, and development is a social, cultural process.

Seen through these lenses, ChatGPT is not a rupture. It is a continuation: faster, yes, and more visible, but built on the same principles that have always defined intelligence: distribution, adaptation, integration.

The right question is not “Is AI making us stupid?” It is: “Are we equipping people to think well with new tools?”

If the answer is no, the solution is not to fear the tools. It is to teach better.

We are not witnessing the collapse of cognition. We are witnessing its migration. The scaffolding is shifting. The terrain is unfamiliar. But the response we need is not retreat. It is design. It is pedagogy. It is collective discernment.

We should not be shielding students from AI. We should be preparing them to lead with it. The world they inherit will be fast, complex, and saturated with intelligent systems. Our job is not to slow that world down. It is to build the capacities that allow young people to thrive within it.

Intelligence has never been about purity. It has always been about power: the power to adapt, to judge, to learn. That power now includes tools like ChatGPT. Our responsibility is to ensure they are used with clarity, not fear.

The mind is already extended. The only question now is: What will we do with the reach?

Let us start building for the world we are actually in.