Part 5 — Why AI Changes This Completely
AI does not change learning because it gives better answers. It changes learning because it changes the environment.
In the previous articles, I described how self-directed learning emerges when the environment makes thinking necessary. A real problem, no predefined path, access to information, and space to explore - these elements already shift how people learn.
Now something new has entered this environment: AI.
Most of the current conversation focuses on what AI produces - explanations, summaries, answers. This naturally leads to concern. If answers are instantly available, what happens to thinking?
But this question rests on an assumption: that learning is primarily about receiving answers.
If that were true, AI would indeed replace a large part of learning. But if learning is the process of investigating, structuring, and understanding, then AI does something very different.
It becomes part of the environment.
In a self-directed learning setting, AI is not the authority. It is not “the teacher.” It is a responsive surface that reacts to how we think. Vague questions produce vague answers. Refined questions change the structure of the response. Challenging the output generates alternatives.
This creates something we have not had at this scale before: a system that allows continuous interaction with one’s own thinking.
In the session described earlier, participants moved through phases - orientation, friction, structuring, and understanding. AI accelerates this process, not by removing it, but by making iteration immediate.
The critical shift is this: AI does not need to provide answers. It can expose thinking.
This is where a divide begins to appear. Some will use AI to replace effort - copying answers and moving on. Others will use it to test assumptions, refine questions, and explore alternative explanations.
The difference is not technical. It is behavioral. And again, it is shaped by environment.
If the environment rewards speed and completion, AI will reinforce passivity. If the environment requires inquiry and ownership, AI will amplify thinking.
This is why AI does not automatically improve learning. It intensifies whatever structure is already in place.
Used within a self-directed learning environment, however, something changes fundamentally. The learner is no longer limited by access to information. Instead, the central question becomes: how well can I think?
That is a very different kind of challenge.
The key skill is no longer finding answers, but asking precise questions, recognizing incomplete understanding, and refining thought through interaction.
Learning becomes visible.
We are moving into a situation where those who wait for instruction will struggle, while those who can direct their own learning will accelerate - not because they know more, but because they engage differently.
AI does not decide whether we think. The environment we place it in does.
Have you had comparable experiences where the environment was the decisive factor in progress?
In case you missed the first parts of this series, click here:
Part 1: https://shorturl.at/7H7Nf
Part 2: https://shorturl.at/UVkaF
Part 3: https://shorturl.at/YLeZg
Part 4: https://shorturl.at/LyJqz
AI isn’t just a tool; it reshapes the conditions under which students think. And that puts even more responsibility on us as educators to design environments where thinking is required, not bypassed. The question becomes less “What can AI do?” and more “What are we asking students to do with it?” When structured intentionally, AI can actually raise the level of thinking, pushing students to analyze, question, and justify, rather than just arrive at answers.
The environment question is the right question. In Field-Based STEM, the decisive variable is almost never the quality of instruction. It is whether students arrive with something real already forming. A student who has measured stream velocity, or held a macroinvertebrate, or stood at a site where a decision was made, enters the AI interaction differently from one who starts at a screen. The environment produced the thinking before the tool appeared.

Your intensification point is where I'd push hardest. AI doesn't just amplify thinking in an inquiry environment. It amplifies passivity in a compliance environment, faster and more completely than any previous tool. The design stakes are higher, not just the upside.

The environments in my NZ classroom work where thinking became unavoidable shared one feature: students had no path to completion that didn't require a genuine decision. The tool accelerated the iteration. The decision was already required. That's the design condition. It has to be in place before AI enters the sequence.