Cognitive Offloading or Cognitive Evolution?

Rethinking the MIT ‘Brain on LLM’ Study

There’s a new 200-page preprint from MIT that tries to measure what happens in our brains when we use a language model to help us write an essay.

Study link: https://www.brainonllm.com/

The goal was to understand the cognitive cost of relying on AI in an educational context. In other words: do we lose part of our thinking ability when we let an LLM help us?

When I came across it, I didn’t want to just skim it. As someone who uses AI every day to write software and to build Empisto, a framework for running private, local LLMs, I wanted to really understand what this means for how we think.

So, after reading it carefully, I reached a conclusion that may sound contradictory. The answer is both yes and no. Yes, LLMs can reduce visible cognitive effort. And no, that doesn’t mean they make us less capable. It depends on the person, and it depends on the task.

The context matters

The researchers used essay writing to measure brain activity. That is an interesting choice because writing an essay is a complex cognitive task. But it is also just one very specific form of expression.

They measured brain activity using EEG, which captures only surface signals. It tells us something about cortical activity, but not about deeper areas of the brain that drive emotion, motivation, or creative insight. So when they say that “brain activity decreases” when using LLMs, the right question is not “how much,” but “what kind” of activity decreases.


Tools are neutral. Usage is not.

AI is a tool. Like any other tool, its impact depends on how we use it.

Think about the evolution of writing technology. We went from carving text into stone to writing on papyrus, to printing presses, typewriters, computers, and now almost direct thought-to-text communication.

Each step made people worry that we were losing something essential. Yet each step also gave us more freedom.

Typing did not make us less literate. It made us faster thinkers. So perhaps “less brain activity” doesn’t always mean a loss. It might mean that our mind has learned to focus on higher-level reasoning instead of the mechanical act of expression.

The paradox of the silent brain

One of the most interesting parts of the study is the comparison between people who first wrote on their own and then used LLMs, versus those who started directly with AI assistance.

Those who wrote unaided first showed richer brain activity and better integration when using AI later. Those who started with the LLM showed lower engagement when the tool was removed.

To me, this means that it is not the presence of AI that matters, but the sequence. When you build cognitive foundations first, the tool extends your abilities. When you start with the tool, it can easily become a crutch.

What the study misses

The authors themselves admit that their work has limitations. The sample was small and homogeneous. Only one model was used (ChatGPT). Only one task was tested (essay writing). And EEG alone cannot measure deeper brain activity.

So the results are interesting but narrow. They show what happens in a specific context, not how all people use AI. A developer, an artist, or a scientist might experience a very different cognitive pattern when collaborating with an LLM.


Evolution, not erosion

Every major shift in how humans think started with the same fear. Writing was supposed to weaken memory. Printing was supposed to weaken faith. Computers were supposed to weaken attention.

None of that happened. We simply evolved.

Some people do lose depth; others gain new forms of clarity. It depends on how we use the tool, and what purpose drives its use.

That’s exactly the goal with Empisto. Not to replace thinking, but to support its natural evolution. To create private, trusted, local AI tools that work with human cognition, not against it.


Final thought

Lower brain activity does not always mean lower intelligence. Sometimes the quiet state of the brain reflects focus and flow. It’s what happens when effort becomes harmony.

The real risk is not that AI will stop us from thinking. It’s that we might stop wanting to think.

So maybe the right question is not “Do LLMs make us dumber?” but “Can we design them to make us want to think more?”

Let me know what you think in the comments. I’d really like to hear your perspective and continue the discussion.