The Speed Paradox
AI lets everyone ship fast. Polished demos in days. MVPs in weeks. Features in hours.
The old signal (shipping fast means you're good) is dead.
Yet speed matters more than ever. Windows close faster. Competitors emerge overnight. If you can't move fast, you're irrelevant.
This is the paradox: speed is both everywhere and essential.
When everyone can move fast, speed stops discriminating. But you still die if you're slow.
Two Types of Speed
The answer isn't eliminating speed as a signal. It's recognizing that AI separated speed into two types that used to be bundled together.
Output speed is what everyone sees: features shipped, demos polished, pixels on screen. AI gives this to everyone now. A founder with Claude and Cursor produces what used to require a team. This is noise.
Judgment speed is what almost no one measures: how fast you recognize real signal versus dismiss it, how fast you cut scope when you hit a wall, how fast you adapt when reality pushes back, how fast you decide what NOT to build.
Before AI, these were bundled. You had to decide what to build before investing months of work. The cost of building forced good judgment.
Now they're separated. The unit of work shrunk from "build the feature" to "test the hypothesis." You can build ten wrong things before you notice they're wrong.
Output speed shows up in demos. Judgment speed only shows up under constraint.
What This Looks Like
I've observed 150+ teams over 18 months under identical pressure conditions. Three examples:
Team Alpha: Polished pitch. Strong demo. Fast execution. Won their track. Then withdrew from a follow-up session with no external constraint preventing participation. High output speed, low judgment speed.
Team Beta: Weak metrics. Rough product. Scored 6th of 11. But between observation points, the founder ran 119 customer interviews (up from 66) and rebuilt the product based on what they learned. Three design partners committed not despite the obvious gaps, but because the gaps were the right gaps: the founder knew exactly what to leave unfinished. When scores degraded more than any other team, conviction didn't waver. Two months later: top accelerator.
Team Charlie: Observed four times over 18 months, each observation independent. Signal strengthened at every point: unclear monetization → emerging fit → obvious value → 29% of evaluators would invest (highest of any team). One week after the fourth observation: funded by a major seed fund. Then: accelerator acceptance, additional seed checks, a user turned angel investor.
Four independent observations. Signal preceded every capital decision.
The difference across all three: what they optimized for under constraint. Alpha optimized for looking good. Beta and Charlie optimized for learning what's real.
Why This Matters Now
Before AI, output speed was hard to fake. Shipping fast signaled technical competence because building things took real capability.
After AI, output speed is trivial. Everyone ships fast. The question shifts entirely to judgment: Are you building the right thing? Do you know what matters? Can you adapt when reality pushes back?
But judgment speed is invisible in a pitch meeting. You see the demo. You hear the narrative. You check the metrics. All of that reflects output speed. Pitch meetings reward performance, not process. Founders rehearse, optimize for expected questions, smooth out rough edges. By the time they're in front of you, they've practiced the narrative.
Judgment speed only becomes visible when you watch teams operate under constraint. When they have to choose what NOT to do. When they have to prioritize with incomplete information. When pressure reveals whether they sharpen or collapse. They can't rehearse for that.
The Implication
For founders: optimize for judgment speed, not demo polish. The teams that win won't ship the most features. They'll make better decisions about what to build, faster.
For investors: the ones who win won't just have the best networks. They'll see judgment speed when everyone else is looking at output speed.
I've spent 18 months observing how teams make decisions under constraint. 150+ teams, identical conditions, longitudinal tracking. The pattern is consistent: judgment speed predicts outcomes. Output speed doesn't.
I'm building a fund around this edge.