Quantum Computing Has a Confidence Problem

I'm not a quantum physicist. I'm a Test Architect. I build quality practices for complex enterprise systems: Dynamics 365 to SAP integrations with dozens of boundary applications in between. My job is making sure people can trust what a system tells them.

Quantum computing worries me. Not because of what it can do, but because of how hard it's going to be to prove that it did it correctly.

Quantum is moving into mission-critical territory. The U.S. Irregular Warfare Center published a paper in January 2025 outlining quantum applications for counterterrorism intelligence analysis [1]. HSBC is running quantum anti-money laundering pilots with the UK's National Quantum Computing Centre [2]. Researchers are proposing quantum graph neural networks for fraud detection [3] and quantum algorithms specifically targeting terrorist financing, money laundering, and market manipulation [4].

The problem is that in defense, intelligence, and financial services, the technology doesn't matter if people don't trust it. If an analyst can't trust a quantum threat detection system's output, they won't act on it. If a commander can't explain why an algorithm flagged a target, that decision doesn't survive legal review. If a regulator can't validate that a quantum AML system works better than the classical one it replaced, it's not going to production.

Why this is different

Quantum systems are probabilistic. Same inputs, different results on different runs. That single fact breaks the foundation of how we've tested software for decades. Every regression suite, every acceptance test, every audit trail assumes deterministic outputs.
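One consequence for test design: the exact-match assertion has to be replaced by a statistical one. Here is a minimal sketch of what that looks like, assuming a two-outcome measurement with a known expected distribution; `fair_circuit` is a classical stand-in for a quantum backend, not a real one, and the z-score tolerance is an illustrative policy choice.

```python
import math
import random
from collections import Counter

def run_shots(circuit, shots):
    """Execute a probabilistic 'circuit' many times and tally outcomes."""
    return Counter(circuit() for _ in range(shots))

def within_tolerance(observed, shots, expected_p, z=4.0):
    """Statistical acceptance check: is the observed frequency of an
    outcome consistent with its expected probability? Uses a normal
    approximation to the binomial; z=4 keeps false alarms rare."""
    sigma = math.sqrt(expected_p * (1 - expected_p) / shots)
    return abs(observed / shots - expected_p) <= z * sigma

# Stand-in for a real backend: a 'circuit' that returns "0" with
# probability 0.5, like measuring an equal superposition.
random.seed(7)
fair_circuit = lambda: "0" if random.random() < 0.5 else "1"

counts = run_shots(fair_circuit, shots=10_000)
assert within_tolerance(counts["0"], 10_000, expected_p=0.5)
```

The point isn't the arithmetic. It's that "pass" is now a statistical judgment with a tunable false-alarm rate, and that tolerance becomes part of the acceptance criteria somebody has to own.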

On top of that, current quantum hardware is noisy, error rates are far higher than in classical computing, and you can't observe a quantum computation mid-run without collapsing the very state it depends on. Traditional debugging and logging don't apply.

A recent benchmark study (July 2025) tested quantum and hybrid quantum-classical models against IBM's Anti-Money Laundering dataset and concluded that quantum AI is not yet mature enough to outperform classical methods [5]. That should concern anyone planning to deploy quantum-enhanced detection, because without rigorous quality benchmarking, organizations could deploy systems that actually perform worse than what they replaced. And in counterterrorism, a false positive could mean detaining the wrong person. A false negative could mean a missed threat.
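That benchmarking discipline can be made mechanical: a replacement system has to hold both error rates at or below the incumbent's before it ships. A minimal sketch of such a regression gate, with hypothetical confusion counts; the numbers and the `no_regression` policy are illustrative, not taken from the cited study.

```python
def rates(tp, fp, fn, tn):
    """False positive rate and false negative rate from confusion counts."""
    return fp / (fp + tn), fn / (fn + tp)

def no_regression(candidate, baseline, margin=0.0):
    """Gate a replacement system: it may not raise either error rate
    beyond the baseline plus an agreed margin."""
    c_fpr, c_fnr = candidate
    b_fpr, b_fnr = baseline
    return c_fpr <= b_fpr + margin and c_fnr <= b_fnr + margin

# Hypothetical counts from evaluating both systems on the same held-out set.
classical = rates(tp=80, fp=20, fn=20, tn=880)  # FNR 20%
quantum = rates(tp=70, fp=15, fn=30, tn=885)    # lower FPR, but FNR 30%

assert no_regression(quantum, classical) is False  # worse FNR: blocked
```

In a counterterrorism or AML context, both sides of that gate matter independently: a better false positive rate doesn't buy back a worse false negative rate.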

The Congressional Research Service notes quantum could enable improved pattern recognition and target identification [6]. MITRE flags that the Intelligence Community needs to carefully monitor when these capabilities are actually ready [7]. The capability research is moving fast. But nobody is asking the question that matters for adoption: can you prove it works, and can you explain how it got its answer?

The gap

The people building quantum systems are physicists and algorithm designers. They don't think about test strategy, acceptance criteria, or defect taxonomies. The QA community doesn't know enough about quantum to understand what needs testing. Quality is being treated as something that happens after the algorithm works, and we learned decades ago in enterprise systems that bolting quality on at the end doesn't work. It's more expensive, less effective, and it produces systems people don't trust.

Where this connects

The integration architecture of a quantum-classical hybrid system isn't that different from what we already deal with in enterprise environments. Data flows in from classical systems, gets encoded for quantum processing, runs through a quantum circuit, gets measured, and flows back into classical systems for interpretation. Each handoff point is an integration boundary that introduces failure modes. We've been testing exactly that kind of thing for years.
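Those handoff points can be tested as contracts, exactly the way we already test classical integration boundaries. A minimal sketch with the quantum stage stubbed out: `encode`, `measure`, and `decode` are hypothetical stage names, and the amplitude math is a classical stand-in for a real backend.

```python
class BoundaryError(Exception):
    """Raised when a handoff between stages violates its contract."""

def encode(record):
    """Classical -> quantum boundary: map a feature vector onto
    normalized amplitudes. Reject inputs the encoding can't represent."""
    if not record or any(v < 0 for v in record):
        raise BoundaryError("encode: features must be non-negative")
    norm = sum(v * v for v in record) ** 0.5
    if norm == 0:
        raise BoundaryError("encode: zero vector cannot be normalized")
    return [v / norm for v in record]

def measure(amplitudes):
    """Quantum -> classical boundary: squared amplitudes become outcome
    probabilities, which must still sum to ~1 (a unitarity sanity check)."""
    probs = [a * a for a in amplitudes]
    if abs(sum(probs) - 1.0) > 1e-9:
        raise BoundaryError("measure: probabilities do not sum to 1")
    return probs

def decode(probs, threshold=0.5):
    """Classical interpretation: flag the record when the dominant
    outcome clears an agreed decision threshold."""
    return max(probs) >= threshold

# Each handoff gets its own contract check, testable without hardware.
probs = measure(encode([3.0, 4.0]))  # amplitudes [0.6, 0.8]
assert decode(probs)                 # 0.64 clears the threshold
```

None of this requires quantum hardware to write or run, which is exactly the point: the boundary contracts can be built and exercised today, against simulators and stubs, before the quantum stage in the middle is production-ready.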

For a company like TEKsystems Global Services, the opportunity isn't to become a quantum computing company. It's to take what we already know about building confidence in complex systems and apply it where it's desperately needed. Productize it: a confidence framework for probabilistic outputs, integration boundary testing for hybrid architectures, and explainability assessments that evaluate whether system outputs can hold up under the scrutiny these environments require.

The defense primes building these systems need this. The financial regulators approving quantum-enhanced detection need this. The quantum platform providers whose government customers will demand quality assurance need this. The Congressional Research Service's 2025 defense primer puts practical quantum applications at least ten years out [6]. That's not a reason to wait. That's the runway to build the quality discipline before the hardware catches up.

Somebody's going to own this space. I'd rather it be us.

References

[1] U.S. Irregular Warfare Center - "The Emerging Potential for Quantum Computing in Irregular Warfare," January 2025

[2] World Economic Forum - "Quantum Technologies: Key Strategies and Opportunities for Financial Services Leaders," July 2025

[3] Innan et al. - "Financial Fraud Detection using Quantum Graph Neural Networks," arXiv, September 2023

[4] Weinberg and Shapovalova - "Quantum Algorithms: A New Frontier in Financial Crime Prevention," arXiv, March 2024

[5] "FD4QC: Classical and Quantum-Hybrid Machine Learning for Financial Fraud Detection," arXiv, July 2025

[6] Congressional Research Service - "Defense Primer: Quantum Technology," updated 2025

[7] MITRE - "Intelligence After Next: Quantum Computing," January 2025
