Why Complexity Stops Working: The Daugherty Postulates of Stagnation
For years, progress across technology and organizations has followed a familiar pattern: when something isn’t working, we add more.
More data. More compute. More people. More processes. More structure.
Scale has become the default solution to friction. When systems slow down or stall, the instinct is to push harder rather than question the shape of what’s already there.
Recent research suggests that instinct may be wrong in a fundamental way.
Two of our SmartLedger Trust Infrastructure research efforts—one rooted in empirical optimization dynamics, the other in a reexamination of physical law—arrive at the same conclusion:
Complex systems rarely fail because they are too small. They fail because they become structurally misaligned.
This observation forms the basis of what I call The Daugherty Postulates of Stagnation: three descriptive principles that explain why systems get stuck, why scale stops helping, and why simplification often restores progress.
These are not prescriptive rules. They are patterns observed repeatedly across computational, physical, and organizational systems.
When “Trying Harder” Stops Working
The first line of research began with a practical question familiar to anyone who builds or operates complex systems:
Why do some optimization efforts converge quickly, others slowly, and others not at all—despite similar starting conditions?
The answer was not data quality, tool selection, or initial configuration. Instead, outcomes were governed by stagnation dynamics—distinct regimes where progress either compounds or collapses.
A parallel line of inquiry approached a very different question:
Why do physical constants and coherence appear fundamental, yet behave as if they emerge from deeper processes?
The conclusion was strikingly similar. Laws persist not because they are imposed, but because certain structures maintain coherence over time. Others cannot.
Taken together, these findings point to a shared constraint across systems:
Information, effort, and energy only remain useful when they can flow coherently through structure.
That insight leads directly to the three postulates.
Postulate I: The Minimum Coherence Principle
Before a system reaches a minimum level of internal structure, progress cannot accumulate.
Below this threshold, signals blur together. Memory does not stabilize. Gains reset instead of compounding. Effort produces motion, but not direction.
This appears in algorithms that churn without converging, teams that revisit the same decisions repeatedly, and organizations that stay busy without becoming more effective.
The implication is simple but often overlooked:
Meaning, control, and momentum are not guaranteed. They emerge only once a system becomes coherent enough to support them.
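The dynamic above can be sketched with a deliberately simple toy model. Nothing here comes from the underlying research; the model, its parameters, and the `simulate` function are illustrative assumptions. Each step of effort adds a unit of progress, but with probability proportional to the system's incoherence, the accumulated total resets to zero:

```python
import random

def simulate(coherence, steps=1000, seed=0):
    """Toy model (illustrative assumption, not the research's model):
    each step adds one unit of progress, but with probability
    (1 - coherence) the accumulated total resets to zero."""
    rng = random.Random(seed)
    progress = 0.0
    for _ in range(steps):
        progress += 1.0
        if rng.random() > coherence:
            progress = 0.0  # effort produced motion, but not direction
    return progress

# Averaged over many runs, low coherence leaves almost nothing behind,
# while the same effort at high coherence accumulates.
low = sum(simulate(0.3, seed=s) for s in range(100)) / 100
high = sum(simulate(0.9, seed=s) for s in range(100)) / 100
```

In this sketch, both regimes expend identical effort; only the retention structure differs. Below the threshold the gains keep resetting, so the average result stays near zero no matter how many steps are taken.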
Postulate II: The Stagnation Horizon
As systems grow more complex, they eventually reach a point where improving individual components no longer improves the system as a whole.
Past this horizon, local optimization reinforces global stagnation. Feedback loops tighten. Escape paths narrow. Additional effort deepens the trap instead of resolving it.
This is why adding more process can harden bureaucracy, why optimizing individual KPIs can degrade organizational outcomes, and why “working harder” can worsen cognitive loops.
The failure here isn’t due to insufficient capability.
It’s due to entrapment within the system’s own structure.
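A minimal sketch of this trap, under assumed toy numbers (the congestion model, the capacity of 20, and the function names are all hypothetical): two components share a resource, and each repeatedly maximizes its own KPI. The greedy equilibrium they converge to delivers less total output than optimizing the system-wide objective directly:

```python
def local_payoff(own, other, capacity=20.0):
    # A component's own KPI: its load, discounted by shared congestion.
    total = own + other
    return own * max(0.0, 1 - total / capacity)

def best_response(other, capacity=20.0):
    # The load that maximizes local_payoff, ignoring the system.
    return max(0.0, (capacity - other) / 2)

# Local optimization: each side repeatedly maximizes its own KPI.
x = y = 0.0
for _ in range(50):
    x = best_response(y)
    y = best_response(x)
local_total = local_payoff(x, y) + local_payoff(y, x)

# Global optimization: choose the shared load for total system output.
global_total = max(s * (1 - s / 20.0) for s in [i / 10 for i in range(201)])
```

Both sides act rationally at every step, yet the feedback loop locks them into a worse joint outcome: the horizon is crossed not by incompetence, but by the structure of the incentives.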
Postulate III: The Saturation of Usable Complexity
There is a limit to how much complexity a system can productively coordinate.
Beyond that limit, additional components reduce performance rather than enhance it. Friction increases. Control degrades. Stability becomes fragile.
This contradicts a common assumption in technology and management: that scale is always a net positive. In reality, effectiveness often improves when systems are constrained, simplified, or restructured—not expanded.
The takeaway is clear:
Growth that ignores structural limits eventually works against itself.
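The saturation effect can also be sketched with an assumed toy cost model (the linear-value, quadratic-coordination form and the `link_cost` parameter are illustrative, not derived from the research): each added component contributes fixed value, but every pairwise dependency carries a coordination cost, and the quadratic link count eventually overwhelms the linear contribution:

```python
def net_output(n, unit_value=1.0, link_cost=0.05):
    """Toy model: n components each add unit_value, but the
    n*(n-1)/2 pairwise dependencies each cost link_cost to
    coordinate. Output rises, saturates, then falls."""
    links = n * (n - 1) / 2
    return n * unit_value - link_cost * links

# Find where adding components stops helping and starts hurting.
peak_n = max(range(1, 61), key=net_output)
```

Under these assumed parameters the curve peaks around twenty components and goes negative well before sixty: past the saturation point, each addition destroys more coordination capacity than it contributes.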
A Practical Reframing of Progress
Together, these postulates point to a unifying idea:
Systems fail not because they lack resources, but because their structure cannot support the flow of what they already have.
This applies well beyond computation or physics.
It applies to AI systems scaling faster than they can be governed. To organizations growing faster than they can adapt. To institutions accumulating rules faster than meaning.
The lesson is not to reject complexity. It is to respect its limits.
Progress rarely comes from adding more. It comes from aligning structure with flow.
Sometimes, the most effective move forward is not acceleration—but simplification.