When Data Lacks Context, Intelligence Fails
I’ve been thinking a lot about how often systems get the numbers right and the conclusions wrong.
For years, my electric utility has sent me emails comparing my energy usage to my neighbors'.
On paper, the data looks precise. Charts. Benchmarks. A neat scorecard showing whether I’m “more efficient” or “less efficient” than the people around me.
But the conclusion is wrong.
My wife, my son, and I all live and work from home. We’ve invested heavily in automation, smart lighting, and efficient heating and cooling. We’re present in the house most of the day. We are not a family that leaves at 8 a.m. and returns at 6 p.m.
None of that context shows up in the model.
So every few months, I get a subtle message that feels like a scolding: “You’re using more energy than your neighbors. You could do better.”
The system doesn’t understand me. Over time, that lack of understanding has eroded my trust in the utility, and it makes me wonder how many similar blind spots shape other parts of their data-driven decision making.
I’ve been seeing versions of this problem for more than 35 years.
As a researcher, and later as someone who spent decades inside large technology organizations, I’ve watched systems get better and better at measurement while staying surprisingly weak at understanding. We’ve gotten very good at collecting numbers, very good at optimizing around them, and far less thoughtful about what’s missing when the numbers are stripped of context.
That pattern is what ultimately led me to build what I now call a Qualitative Intelligence System.
Quantitative data is not wrong. It is incomplete on its own.
This same issue becomes far more consequential in higher-stakes environments.
Take rare disease and clinical research.
We collect enormous amounts of structured data. Lab results. Timelines. Enrollment numbers. Trial endpoints. However, the most important signals often live outside those fields.
Patient experience. Caregiver burden. Daily realities that shape adherence. The quiet reasons people hesitate, disengage, or never enroll at all.
When those qualitative factors are fragmented or ignored, trials struggle. Outcomes degrade. Promising therapies stall because the system never fully understood the people inside it.
What’s striking to me is that we’re now recreating the same mistake with AI itself.
Organizations are choosing a single model, a single system, a single “source of truth,” even though every model is trained by humans, shaped by assumptions, and bounded by its own blind spots. We talk about confidence and capability, but rarely about triangulation, iteration, or human validation.
In practice, that means we’re scaling incomplete perspectives faster than ever.
The work I care about now sits at that intersection.
Bringing quantitative and qualitative data together. Using multiple models in dialogue rather than isolation. Keeping humans in the loop, not as a formality, but as a safeguard. Preserving context and memory so decisions can be understood, revisited, and trusted over time.
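To make that idea a little more concrete, here is a minimal sketch in Python of what “models in dialogue” with a human checkpoint might look like. Everything here is illustrative and hypothetical: the names (triangulate, Decision), the stubbed models, and the console-style reviewer are placeholders, not a description of any particular system.

```python
from dataclasses import dataclass, field

# Illustrative sketch: multiple models consulted side by side,
# disagreement surfaced to a human, and the full context preserved.

@dataclass
class Decision:
    question: str
    model_views: dict = field(default_factory=dict)   # model name -> answer
    human_note: str = ""                              # human validation or added context
    history: list = field(default_factory=list)       # preserved so the decision can be revisited

def triangulate(question, models, human_review):
    decision = Decision(question=question)
    # 1. Ask every model independently rather than trusting one "source of truth".
    for name, model in models.items():
        decision.model_views[name] = model(question)
    # 2. Surface disagreement instead of averaging it away.
    needs_human = len(set(decision.model_views.values())) > 1
    # 3. Keep a human in the loop as a safeguard, not a formality.
    decision.human_note = human_review(decision.model_views) if needs_human else "models agree"
    # 4. Preserve context and memory so the decision can be understood later.
    decision.history.append({"views": dict(decision.model_views), "note": decision.human_note})
    return decision

# Example usage with stand-in models echoing the utility story above.
models = {
    "quantitative": lambda q: "usage is above the neighborhood benchmark",
    "qualitative":  lambda q: "three adults live and work from home all day",
}
result = triangulate(
    "Is this household inefficient?",
    models,
    human_review=lambda views: "benchmark comparison is not valid for this household",
)
print(result.model_views)
print(result.human_note)
```

The point of the sketch is the shape, not the code: no single model’s answer is treated as final, disagreement is a signal rather than noise, and the record of how the conclusion was reached stays attached to the conclusion.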
Accuracy without understanding looks impressive, but it’s brittle. Optimization without context erodes trust. Systems that don’t reflect lived reality eventually stop being believed.
If technology is going to shape our decisions, our health, and our future, it has to do more than count. It has to understand.