Fail small, fail often
Who else is sick of hearing "fail fast" from the people who punish mistakes?
It sounds bold. It sounds like the kind of thing you'd put on a wall in a startup. But I think it's a bit dishonest about what real experimentation actually looks like, and I think it has quietly done more harm than good.
What I prefer is: fail small, fail often.
Not because it's a catchier alternative, but because it's more accurate. Small failures are manageable. They're informative. You can correct course without a post-mortem meeting and a damage report. They don't need to be celebrated, but they also don't need to be hidden. They're just part of how you find out what works.
The problem is that most companies are allergic to them anyway, even the small ones.
If your culture only rewards certainty, then you're not really innovating. You're doing the performance of it.
What AI has done is bring that discomfort to the surface for a lot of people who never had to think about it before. AI has forced people to try things they don't know the outcome of, and for many of them that's genuinely unfamiliar territory. We don't know what's coming tomorrow. We don't know what changes next week. Everyone is figuring it out in real time, including the people who are supposed to have the answers.
Some people have leaned into that. Others have sat still, because sitting still felt safer than moving and falling. What I'd say to that is: most of the time, if you're doing it right, you don't fall. You trip. And then you keep going.
Innovation was the core underpinning of my Master's degree, and it's something I think about constantly. One of the ideas the course kept coming back to is that new ideas are fragile. Not weak, fragile. There's a difference. They need a certain amount of protection early on, room to be half-formed, discussed, challenged gently, before they're ready to meet full scepticism. Expose them too early to the wrong environment and they just collapse. Not because they were bad ideas, but because they never got the chance to become anything.
That's a culture problem, not a process problem.
Most organisations treat innovation like a mode they switch into for a specific initiative or project, and then quietly abandon when things get uncertain and the pressure builds to do something safe. Real innovation doesn't work like that. It has to be embedded in how a company actually works day to day. It has to be okay to not know yet. It has to be okay to try something that might not land.
The teams I've seen genuinely get value from AI aren't necessarily the ones with the most sophisticated strategy. They're the ones with enough cultural permission to try things, get them wrong, adjust, and try again without it becoming an embarrassing story. Without that permission, even the best tools tend to get evaluated and shelved rather than actually used.
Restrictions, weirdly, can help. A completely blank canvas is often harder to work with than a set of constraints. Constraints make you actually commit to something rather than keeping all your options open forever. Some of the most useful small experiments I've seen came from someone saying "let's try this one thing, in this one context, and see what happens."
I won't end on a question this time as I don't think my opinion can be shifted on this.