How should we measure progress in quantum computing? For years, progress has often been measured by qubit count. More qubits make for bigger headlines, but they tell us very little about whether a quantum computer can actually solve a useful problem.

The real barrier is errors. Quantum computers generate billions of them as they run. Unless those errors are corrected continuously and with extremely low latency, the computation fails long before it reaches a problem worth solving.

Today, Riverlane is publishing our Quantum Error Correction Technology Roadmap, which sets out the engineering milestones from today's experimental machines to utility-scale quantum computing. The roadmap measures progress not in qubits, but in QuOps, or reliable quantum operations. Think of it like the evolution of mobile networks: 2G, 3G, 4G, and 5G. Each generation had clearly understood levels of performance and capability. Quantum computing needs the same clarity.

Two milestones matter in particular:
- MegaQuOp systems, capable of one million reliable operations, are the point where quantum computers begin to outperform classical supercomputers on certain specialised problems.
- TeraQuOp systems, capable of one trillion reliable operations, mark the beginning of utility-scale quantum computing.

Our research, including work published in Nature Communications last year, shows that advances in real-time quantum error correction could accelerate that journey by three to five years. The industry needs milestones that reflect real capability and can guide progress across different quantum hardware platforms. This roadmap is our attempt to define them.

You can read more here: https://lnkd.in/eFGGXP8m
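To make the QuOp framing concrete, here is a minimal sketch of the budget arithmetic: a computation of N reliable operations roughly requires a per-operation logical error rate below 1/N. The surface-code suppression heuristic and its constants (prefactor A = 0.1, threshold p_th = 10^-2, physical error rate 10^-3) are common textbook approximations for illustration, not figures from the Riverlane roadmap:

```python
# Back-of-envelope: what logical error rate does a QuOp milestone imply,
# and what surface-code distance might reach it? Constants are illustrative.

def logical_error_rate(p_phys: float, distance: int,
                       p_th: float = 1e-2, prefactor: float = 0.1) -> float:
    """Textbook surface-code heuristic: p_L ~ A * (p/p_th)^((d+1)/2)."""
    return prefactor * (p_phys / p_th) ** ((distance + 1) / 2)

def distance_for_budget(quops: float, p_phys: float) -> int:
    """Smallest odd code distance at which ~`quops` operations succeed
    with high probability (i.e. p_L * quops < 1)."""
    d = 3
    while logical_error_rate(p_phys, d) * quops >= 1:
        d += 2  # surface-code distances are odd
    return d

for name, quops in [("MegaQuOp", 1e6), ("TeraQuOp", 1e12)]:
    d = distance_for_budget(quops, p_phys=1e-3)
    print(f"{name}: needs p_L < {1 / quops:.0e} per op -> distance ~{d}")
```

Under these assumptions a MegaQuOp machine needs roughly distance-11 logical qubits and a TeraQuOp machine roughly distance-23, which is one way to see why the two milestones sit years apart.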
Quantum Computing Metrics Beyond Qubit Count
Summary
Quantum computing progress is no longer measured only by the number of qubits in a system; instead, new metrics like reliable operations, fault tolerance, and information capacity per particle are becoming crucial to understanding real advances. These approaches focus on how well quantum computers perform, correct errors, and process complex information beyond simple qubit counts.
- Emphasize reliability: Look for metrics that track how many trustworthy operations a quantum computer can perform, since this is a better indicator of its utility for solving real-world problems.
- Prioritize error correction: Pay attention to advances in quantum error correction and fault-tolerant gates, as these developments help make quantum computations more stable and practical.
- Explore high-dimensional encoding: Consider the shift to qudits and multi-state encoding, which allows quantum computers to process more information per particle and opens new possibilities for future applications (a short illustration follows this list).
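As a toy illustration of that last point, n qudits of dimension d span d^n basis states, i.e. log2(d) qubits' worth of information per particle; the numbers below are generic counting, not tied to any specific hardware:

```python
import math

# n qubits have 2**n basis states; n qudits of dimension d span d**n,
# i.e. log2(d) "qubit-equivalents" of information per particle.
n = 10
for d in (2, 3, 4, 8):
    print(f"d={d}: {n} qudits span {d**n:>10} basis states "
          f"(~{n * math.log2(d):.1f} qubit-equivalents)")
```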
Looks like we've hit another turning point in quantum computing. Quantinuum just demonstrated logical gates built on a fault-tolerant protocol that beat the physical gates they're made from. This includes the hardest one: a non-Clifford two-qubit gate.

If you've followed quantum computing for a while, you know the game has long been about scaling: more qubits, better gates, lower error rates. But real fault tolerance? That's been the elusive frontier. Until now. Quantinuum's new work demonstrates the critical building blocks for a universal, fault-tolerant gate set.

So what does this mean? To unlock the full power of quantum computation, you need to go beyond Clifford gates. Non-Clifford gates (like T or controlled-Hadamard) are essential for quantum advantage, but they're notoriously hard to implement fault-tolerantly. Why? Because applying a non-Clifford gate directly to a logical qubit can spread a single error into a correlated mess that error correction can't handle. This is a fundamental limitation, not a hardware bug.

So what do we do? Instead of applying dangerous gates directly, we teleport them using special resource states, so-called magic states. Think of it like outsourcing the risky part of the operation to an ancilla that we can verify, discard if faulty, and only then use to apply the gate safely. That's the idea. But nobody had shown that this could be done fault-tolerantly and with better-than-physical performance.

Quantinuum just released two new papers that change that:
- Shival Dasu et al. prepared ultra-clean |H⟩ magic states using just 8 qubits, then used them to implement a logical non-Clifford CH gate, achieving a fidelity better than the physical gate. That's the elusive break-even point: logical > physical.
- Lucas Daguerre et al. prepared high-fidelity |T⟩ states directly in the distance-3 Steane code, using a clever code-switching protocol from the Reed-Muller code (where transversal T gates are allowed). The resulting magic state had lower error than any physical component involved.

Why are these landmark results? Because these two results together prove you can:
- Prepare magic states fault-tolerantly
- Use them to implement non-Clifford logic
- And do so with error rates below the physical layer

All on current hardware. No hand-waving. No simulations. Of course, not everything is solved: these are still distance-2 or -3 codes, and we haven't seen a full algorithm run start-to-finish with these techniques. But the last conceptual hurdles are falling. Not on superconducting qubits, but on ion traps.

📸 Credits: Daguerre et al. (arXiv:2506.14169)
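The teleport-and-fix-up idea is easy to see in a toy simulation. The sketch below is a minimal numpy statevector model of the textbook single-qubit T gadget (not Quantinuum's CH gadget, but the same pattern): consume one |T⟩ = T|+⟩ magic state, measure the ancilla, and apply a Clifford S correction when the measurement demands it; the output equals T|ψ⟩ either way, up to global phase.

```python
import numpy as np

T = np.diag([1, np.exp(1j * np.pi / 4)])   # non-Clifford T gate
S = np.diag([1, 1j])                       # Clifford S gate (the fix-up)

def t_via_magic_state(psi, rng=None):
    """Apply T to `psi` by consuming one |T> = T|+> magic state."""
    rng = rng or np.random.default_rng()
    magic = T @ (np.ones(2) / np.sqrt(2))          # the |T> ancilla
    state = np.kron(psi, magic)                    # q0 = data, q1 = ancilla
    cnot = np.array([[1, 0, 0, 0], [0, 1, 0, 0],   # CNOT: data controls ancilla
                     [0, 0, 0, 1], [0, 0, 1, 0]])
    state = cnot @ state
    p0 = abs(state[0])**2 + abs(state[2])**2       # prob. ancilla reads 0
    if rng.random() < p0:                          # outcome 0: done
        out = np.array([state[0], state[2]])
    else:                                          # outcome 1: apply S fix-up
        out = S @ np.array([state[1], state[3]])
    return out / np.linalg.norm(out)

psi = np.array([0.6, 0.8])                         # arbitrary input state
out = t_via_magic_state(psi)
print(abs(np.vdot(T @ psi, out)))                  # ~1.0: equal up to global phase
```

The non-Clifford "hard part" is entirely absorbed into preparing the ancilla, which is exactly why verifying or distilling magic states fault-tolerantly is the crux that both papers attack.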
-
One of the trickiest numbers in quantum computing is "logical qubit count." We often hear phrases such as "This algorithm requires 1,000 logical qubits," but what is this number actually telling us?

A logical qubit is not a single physical object, but rather encoded degrees of freedom constructed from many physical qubits working together to protect fragile quantum information from noise. And importantly, the number of physical qubits required to maintain a logical qubit is not fixed. Instead, it depends strongly on:
🔹 The physical error rates (gate, measurement, decoherence)
🔹 The structure and correlation of the noise
🔹 The error-correcting code being used
🔹 The target logical error rate you need
🔹 The depth and structure of the algorithm

In surface code architectures, for example, a single logical qubit might require hundreds or thousands of physical qubits, and that's before you even account for routing, magic-state production, or connectivity overhead. For this reason, quoting "logical qubit count" without context can be misleading.

Logical qubits provide more than just a scaling metric; they should instead be thought of as the final output of an entire stack that builds from fundamental physical interactions, to device engineering, to quantum control, error correction, and eventually deployable algorithms. Given the exponential suppression of logical error rates with code distance, even modest improvements across this stack can yield substantial reductions in overhead.

For those building systems: where do you see the greatest leverage today?

#QuantumComputing #ErrorCorrection #FaultTolerance #Physics
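To make the "it depends" concrete, here is a rough overhead estimate using the same textbook suppression heuristic as the first sketch above, plus the rotated-surface-code qubit count 2d^2 - 1 (d^2 data plus d^2 - 1 syndrome qubits). The constants are illustrative, not measured values for any device, and real overheads also include routing and magic-state factories:

```python
# Estimate physical qubits per logical qubit for a target logical error
# rate, under p_L ~ A * (p/p_th)^((d+1)/2) with illustrative constants.

def physical_per_logical(p_phys: float, p_target: float,
                         p_th: float = 1e-2, A: float = 0.1):
    d = 3
    while A * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2                       # surface-code distances are odd
    return d, 2 * d * d - 1          # rotated surface code: data + syndrome

for p in (1e-3, 1e-4):
    d, n = physical_per_logical(p, p_target=1e-12)
    print(f"p_phys={p:.0e}: distance {d}, ~{n} physical qubits per logical qubit")
```

Under these assumptions, improving the physical error rate from 10^-3 to 10^-4 cuts the overhead from roughly 881 to roughly 241 physical qubits per logical qubit, which is the post's point about leverage across the stack in miniature.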
-
From Qubit Counts to Capability: The Push for Logical Qubit Standards

The quantum computing industry is entering a critical transition phase, shifting from headline metrics like raw qubit counts to more meaningful measures of computational reliability. The emerging focus is on logical qubits, which represent stable, error-corrected units of quantum information and a more accurate indicator of real-world performance.

Leading organizations such as Microsoft, IBM, Google, IonQ, and Quantinuum have invested heavily in quantum error correction to address the inherent instability of physical qubits. Physical qubits are highly susceptible to noise and errors, making them unreliable for sustained, complex computations. Logical qubits solve this by combining multiple physical qubits into a single, more robust unit capable of maintaining coherence over longer operations.

This shift necessitates a standardized framework to measure progress across the industry. To address this, global bodies such as the International Electrotechnical Commission and the International Organization for Standardization have established joint initiatives to define benchmarks and interoperability standards for quantum technologies. These efforts aim to replace fragmented metrics with consistent definitions that reflect actual computational capability rather than theoretical potential.

The move toward logical qubit standards reflects a maturation of the quantum sector. Early competition emphasized scale, often measured by the number of qubits a system could support. Now the focus is shifting toward quality, reliability, and scalability. This evolution aligns more closely with the requirements of enterprise and scientific applications, where precision and repeatability are critical.

The implications are strategic and immediate. Standardization will enable clearer comparisons between platforms, accelerate industry collaboration, and reduce uncertainty for investors and adopters. More importantly, it will redefine the benchmark for quantum advantage, moving the conversation from experimental milestones to operational readiness. Organizations that align with this transition will be better positioned to capitalize on the next phase of quantum computing commercialization.

I share daily insights with tens of thousands of followers across defense, tech, and policy. If this topic resonates, I invite you to connect and continue the conversation.

Keith King
https://lnkd.in/gHPvUttw
-
Microsoft and Quantinuum reach a new milestone in quantum error correction. The collaboration claims to have used an innovative qubit-virtualization system on Quantinuum's H2 ion-trap platform to create 4 highly reliable logical qubits from only 30 physical qubits.

What is quantum error correction? The physical qubits, with error rates on the order of 10^-2, are combined to deliver logical qubits with error rates on the order of 10^-5. According to their press release, this is the largest gap between physical and logical error rates reported to date, and it has allowed them to run more than 14,000 individual experiments without a single error. (https://lnkd.in/dzETsvVA)

The race for qubit count seemed to end in 2023, with the latest update to IBM's roadmap focusing on quality rather than quantity (https://lnkd.in/dFu52wJR: "Until this year, our path was scaling the number of qubits. Going forward we will add a new metric, gate operations—a measure of the workloads our systems can run."), and with other developments in quantum error correction, like the one announced in December by Harvard University, Massachusetts Institute of Technology, QuEra Computing Inc., and National Institute of Standards and Technology (NIST)/University of Maryland (https://lnkd.in/dkW-TT-w).

Practical quantum computing gets a little closer, although it is still a distant target.

Microsoft press release: https://lnkd.in/deJ4QCBk
Quantinuum's press release: https://lnkd.in/d4Wnmvdq
More details from Microsoft: https://lnkd.in/dusfZ4KY
Paper: https://lnkd.in/dpPCX3td

#quantumcomputing #quantumerrorcorrection #technology
Microsoft and Quantinuum demonstrate the most reliable logical qubits on record
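As a sanity check on those figures, the arithmetic is simple. This is illustrative only: what counts as one "experiment" and its circuit depth are not specified here, so treat the expected-failure counts as order-of-magnitude estimates:

```python
# Order-of-magnitude check on the reported physical-to-logical gap.
p_physical, p_logical, runs = 1e-2, 1e-5, 14_000

print(f"Suppression factor: {p_physical / p_logical:.0f}x")        # 1000x
print(f"Expected failures in {runs} runs at the physical rate: "
      f"{runs * p_physical:.0f}")                                  # ~140
print(f"Expected failures at the logical rate: "
      f"{runs * p_logical:.2f}")                                   # ~0.14
```

At the physical error rate, a 14,000-run campaign would be expected to fail over a hundred times; at the logical rate, fewer than once, which is consistent with the zero-error result reported.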