The steersman

In 1943, Norbert Wiener had a problem. German aircraft were flying faster than human gunners could track, and the standard approach — training the humans to be faster — had hit a ceiling. So Wiener, working on anti-aircraft fire control at MIT, closed the feedback loop instead.

He wired sensor data directly into the gun's aim, building a mechanism that tracked, predicted, and adjusted continuously without waiting for a human hand on the controls. The human operator chose which targets mattered. The mechanism handled the rest.

Wiener would go on to formalize this into an entire discipline he called cybernetics, from the Greek kybernētēs: steersman. A steersman steers — sets direction, corrects course based on what the system tells him. The rowing is someone else's job.
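The shape of Wiener's mechanism — measure, predict ahead, correct the aim — can be caricatured in a few lines. This is a toy sketch of a closed tracking loop, not his actual fire-control mathematics; the lead time and gain are made-up parameters:

```python
def track(observations, lead_time=1.0, gain=0.5):
    """Toy closed-loop tracker: estimate the target's velocity from
    consecutive observations, lead the target by that velocity, and
    nudge the aim toward the prediction. Illustrative only."""
    aim = observations[0]
    prev = observations[0]
    aims = []
    for pos in observations:
        velocity = pos - prev                   # motion since the last sample
        predicted = pos + velocity * lead_time  # lead the target
        aim += gain * (predicted - aim)         # correct toward the prediction
        aims.append(aim)
        prev = pos
    return aims
```

For a target moving at constant speed, the loop locks on within a few samples — no human hand on the controls, only the loop correcting itself.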

Closing the loop

Three years ago, a senior engineer's day was: write code, review code, deploy code, monitor code, fix code. The human was in every step. In 2026, background agents run continuously in most serious codebases — shipping features from specs, rewriting modules from prompts, triaging bugs, routing issues to the right team without anyone reading every report. The engineer sets constraints, reviews output, and intervenes when something drifts.

What's left for the engineer looks a lot like what was left for Wiener's gunner — and anyone can fill that role now. Your leverage is how many agents you can keep pointed in the right direction. Some companies are already asking candidates about their token budget the way they used to ask about equity.

The shift is uneven, which is why it's hard to see. One team closes the implementation loop but still triages bugs by hand. Another automates triage but hand-deploys. Zoom out and the pattern is clear: every piece of the software development lifecycle is acquiring its own feedback loop, and the human is moving from inside those loops to above them.

You set the policy, the system executes, you correct when the feedback says something's off. Wiener would have recognized this architecture immediately.
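The architecture in that sentence — policy in, execution delegated, judgment spent only where the feedback flags trouble — can be sketched as a loop. Everything here is hypothetical: the agent and the drift check are stand-ins for whatever system fills those roles:

```python
def steer(tasks, agent, policy, drift_check):
    """Toy human-above-the-loop pattern: the steersman sets policy,
    the agent executes each task, and only results that fail the
    drift check are routed back for human judgment. Illustrative
    sketch; agent and drift_check are assumed interfaces."""
    shipped, flagged = [], []
    for task in tasks:
        result = agent(task, policy)      # the inner loop runs on its own
        if drift_check(result, policy):   # feedback says something's off
            flagged.append(result)        # spend judgment here
        else:
            shipped.append(result)        # inside tolerance: no human touch
    return shipped, flagged
```

The point of the sketch is the asymmetry: the human appears only on the flagged branch, which is exactly where the loops below are still leaky.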

What the steersman keeps

Wiener's gunner retained three things: target selection, rules of engagement, the override decision. Everything else went to the machine.

The same pattern is playing out in software, across three areas that resist automation. Problem selection: which problems matter, in what order, for whom. The triage systems are getting better at surfacing signals, but the call is still yours. Constraint definition: architectural boundaries, performance budgets, the difference between a quick fix and a real solution. An agent needs to know what the codebase values before it can write code that belongs there. And drift correction: watching for the gap between what the system is doing and what it should be doing, because feedback loops degrade and models optimize for their metrics, not yours.

All three come down to judgment: looking at what the system produced and knowing whether it's right, whether it's good enough, whether it's solving the problem that matters. Judgment is a limited resource. You can't apply it to everything, so the steersman's real job is deciding where to spend it. That used to be a senior role. It's becoming the only role.

When the steersman stops steering

Right now, drift is real. When Devin demoed autonomous task completion in early 2024, unsupervised work was a coin flip. Two years later the agents are far more reliable, and the harnesses around them (TDD baked into the loop, QA agents that verify before shipping, evals that catch regression) are closing the gap fast. The steersman still matters in 2026 because these systems are good enough to trust on straightforward work and bad enough to miss the subtle stuff. That's a transitional state, and it's obvious which direction the transition is heading.

Wiener saw this too. In The Human Use of Human Beings (1950) he argued that as machines took over routine operations, the human role would shift to setting goals and managing exceptions. He missed something: goal-setting and exception-handling might automate as well. The steersman model assumes there's always a layer that requires human judgment. But the layers keep collapsing. Problem selection, constraint definition, drift correction. Each of these is already partially automated, and the partial is growing.

What happens when the loop closes all the way? It's unclear, because the level of capability that solves drift in software implies something much larger. If an agent can reliably judge whether work is good, whether it's solving the right problem, and whether it's accumulating in a coherent direction, that capability doesn't stay confined to codebases. It reaches into law, medicine, logistics, finance, anywhere that human judgment is currently the bottleneck. Software development is where we're seeing the steersman pattern emerge first, but it certainly won't be the last place.

Wiener's guns still needed someone to decide what to shoot at. For now, so do ours. As for what comes after: we don't know yet, and anyone who claims otherwise is selling something.



I wrote and edited this with Claude Code using marginreader.app, a free tool for providing specific feedback more efficiently and for building up a log of your feedback so your agent doesn't make the same mistakes again.

