Looks like we’ve hit another turning point in quantum computing. Quantinuum just demonstrated 𝗹𝗼𝗴𝗶𝗰𝗮𝗹 𝗴𝗮𝘁𝗲𝘀 𝗯𝘂𝗶𝗹𝘁 𝗼𝗻 𝗮 𝗳𝗮𝘂𝗹𝘁-𝘁𝗼𝗹𝗲𝗿𝗮𝗻𝘁 𝗽𝗿𝗼𝘁𝗼𝗰𝗼𝗹 𝘁𝗵𝗮𝘁 𝗯𝗲𝗮𝘁 𝘁𝗵𝗲 𝗽𝗵𝘆𝘀𝗶𝗰𝗮𝗹 𝗴𝗮𝘁𝗲𝘀 𝘁𝗵𝗲𝘆'𝗿𝗲 𝗺𝗮𝗱𝗲 𝗳𝗿𝗼𝗺. This includes the hardest one: 𝗮 𝗻𝗼𝗻-𝗖𝗹𝗶𝗳𝗳𝗼𝗿𝗱 𝘁𝘄𝗼-𝗾𝘂𝗯𝗶𝘁 𝗴𝗮𝘁𝗲.

If you’ve followed quantum computing for a while, you know the game has long been about scaling: more qubits, better gates, lower error rates. 𝗕𝘂𝘁 𝗿𝗲𝗮𝗹 𝗳𝗮𝘂𝗹𝘁 𝘁𝗼𝗹𝗲𝗿𝗮𝗻𝗰𝗲? That’s been the elusive frontier. Until now. 𝗤𝘂𝗮𝗻𝘁𝗶𝗻𝘂𝘂𝗺'𝘀 𝗻𝗲𝘄 𝘄𝗼𝗿𝗸 𝗱𝗲𝗺𝗼𝗻𝘀𝘁𝗿𝗮𝘁𝗲𝘀 𝘁𝗵𝗲 𝗰𝗿𝗶𝘁𝗶𝗰𝗮𝗹 𝗯𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗯𝗹𝗼𝗰𝗸𝘀 𝗳𝗼𝗿 𝗮 𝘂𝗻𝗶𝘃𝗲𝗿𝘀𝗮𝗹, 𝗳𝗮𝘂𝗹𝘁-𝘁𝗼𝗹𝗲𝗿𝗮𝗻𝘁 𝗴𝗮𝘁𝗲 𝘀𝗲𝘁.

𝗦𝗼 𝘄𝗵𝗮𝘁 𝗱𝗼𝗲𝘀 𝘁𝗵𝗶𝘀 𝗺𝗲𝗮𝗻? To unlock the full power of quantum computation, you need to go beyond Clifford gates. 𝗡𝗼𝗻-𝗖𝗹𝗶𝗳𝗳𝗼𝗿𝗱 𝗴𝗮𝘁𝗲𝘀 (like T or controlled-Hadamard) 𝗮𝗿𝗲 𝗲𝘀𝘀𝗲𝗻𝘁𝗶𝗮𝗹 𝗳𝗼𝗿 𝗾𝘂𝗮𝗻𝘁𝘂𝗺 𝗮𝗱𝘃𝗮𝗻𝘁𝗮𝗴𝗲, but they’re notoriously hard to implement fault-tolerantly. Why? Because applying a non-Clifford gate directly to a logical qubit can spread a single error into a correlated mess that error correction can't handle. This is a fundamental limitation, not a hardware bug.

𝗦𝗼 𝘄𝗵𝗮𝘁 𝗱𝗼 𝘄𝗲 𝗱𝗼? Instead of applying dangerous gates directly, we 𝘁𝗲𝗹𝗲𝗽𝗼𝗿𝘁 them using special resource states, so-called 𝗺𝗮𝗴𝗶𝗰 𝘀𝘁𝗮𝘁𝗲𝘀. Think of it like outsourcing the risky part of the operation to an ancilla that we can verify, discard if faulty, and only then use to apply the gate safely. That’s the idea. But nobody had shown that this could be done fault-tolerantly and with better-than-physical performance.

Quantinuum just released two new papers that change that:
• Shival Dasu et al. prepared ultra-clean ∣H⟩ magic states using just 8 qubits, then used them to implement a logical non-Clifford CH gate, achieving a fidelity better than the physical gate. That’s the elusive break-even point: logical > physical.
• Lucas Daguerre et al. prepared high-fidelity ∣T⟩ states directly in the distance-3 Steane code, using a clever code-switching protocol from the Reed-Muller code (where transversal T gates are allowed). The resulting magic state had lower error than any physical component involved.

Why are these landmark results? Because these two results together prove you can:
• Prepare magic states fault-tolerantly
• Use them to implement non-Clifford logic
• And do so with error rates below the physical layer

𝗔𝗹𝗹 𝗼𝗻 𝗰𝘂𝗿𝗿𝗲𝗻𝘁 𝗵𝗮𝗿𝗱𝘄𝗮𝗿𝗲. No hand-waving. No simulations. Of course, not everything is solved: these are still distance-2 or -3 codes, and we haven’t seen a full algorithm run start-to-finish with these techniques. But the last conceptual hurdles are falling. Not on superconducting qubits, but on ion traps.

📸 Credits: Daguerre et al. (arXiv:2506.14169)
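The teleportation gadget can be made concrete with a toy NumPy simulation of the single-qubit ∣T⟩ case (a noiseless sketch of the general idea, not the logical-qubit ∣H⟩/CH protocol from the papers): prepare the magic state ∣T⟩ = T∣+⟩, entangle it with the data qubit via a CNOT, measure the ancilla, and apply a Clifford S correction when the outcome is 1. The non-Clifford gate lands on the data qubit without ever being applied to it directly.

```python
import numpy as np

T = np.diag([1, np.exp(1j * np.pi / 4)])  # non-Clifford T gate
S = np.diag([1, 1j])                      # Clifford phase gate (fix-up)

def teleport_T(psi, outcome):
    """Apply T to data state `psi` via a |T> magic state, given the
    ancilla measurement `outcome` (0 or 1)."""
    magic = T @ (np.array([1, 1]) / np.sqrt(2))   # |T> = T|+>
    state = np.kron(psi, magic)                   # data (x) ancilla
    cnot = np.eye(4)[[0, 1, 3, 2]]                # control = data qubit
    state = cnot @ state
    post = state[outcome::2]                      # project ancilla onto |outcome>
    post = post / np.linalg.norm(post)
    if outcome == 1:                              # Clifford correction only
        post = S @ post
    return post

psi = np.array([0.6, 0.8j])                       # arbitrary normalized data state
target = T @ psi
for outcome in (0, 1):
    out = teleport_T(psi, outcome)
    fidelity = abs(np.vdot(target, out))          # 1 up to a global phase
    print(outcome, round(fidelity, 6))            # → 1.0 for both outcomes
```

The key point the gadget illustrates: the only operations touching the data qubit are a CNOT, a measurement, and an S gate, all Clifford; the "dangerous" non-Clifford content lives entirely in the magic state, which can be verified (and discarded if faulty) before use.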
Quantum Supremacy and Fault Tolerance in Computing
Explore top LinkedIn content from expert professionals.
Summary
Quantum supremacy refers to the point where a quantum computer can solve problems that are infeasible for classical computers, while fault tolerance means the system can detect and correct its own errors, ensuring reliable results even when individual qubits are fragile. Recent breakthroughs show quantum computers are moving from experimental stages toward practical, reliable machines capable of tackling complex, real-world tasks.
- Stay updated: Keep an eye on major advances in quantum hardware and error correction, as these shape the timeline for when quantum technology will impact industries like medicine, cryptography, and logistics.
- Understand scalability: Recognize that adding more qubits—and keeping error rates low—is key to unlocking practical quantum advantage and realizing tasks far beyond today's supercomputers.
- Consider industry shifts: Anticipate how quantum fault tolerance will drive innovation, from drug discovery to artificial intelligence, and influence competitive positioning across global technology landscapes.
Harvard’s 448-Qubit Breakthrough Pushes Fault-Tolerant Quantum Computing Into View

Introduction
Harvard researchers have delivered one of the most important quantum milestones to date: a 448-qubit system that successfully detects and corrects errors below the critical fault-tolerance threshold. This crosses the line from theoretical aspiration to practical scalability, moving quantum computing closer to real-world deployment.

What Makes This Breakthrough Different
• Qubits are notoriously fragile, collapsing under even minor environmental noise.
• For decades, error correction has been the defining barrier to scalable quantum computation.
• Harvard’s system—built on neutral rubidium atoms—integrates physical entanglement, logical entanglement, magic-state operations, and quantum teleportation in a single architecture.
• This is the first time error rates decreased as qubits were added, proving the architecture can scale rather than collapse under complexity.

Why This Matters for the Future of Quantum Systems
• Crossing the fault-tolerance threshold is the prerequisite for any quantum machine capable of outperforming classical supercomputers on meaningful tasks.
• The team demonstrated not just stability, but a “conceptually scalable architecture” that could, in time, grow toward the millions of qubits needed for fully error-corrected quantum machines.
• Paired with the group’s previous demonstration of 3,000+ qubits operating for over two hours, this marks rapid progress toward systems that can run long-duration algorithms without catastrophic loss of coherence.

Strategic and Industry Implications
• Large-scale, fault-tolerant quantum systems unlock the ability to simulate complex chemistry, run unbreakable encryption, optimize massive logistics systems, and test advanced materials—industries worth trillions.
• Defense and national security communities will see this as a clear signal: the quantum race is accelerating, and early pioneers will set the standards for cryptography, secure communications, and advanced AI-quantum hybrid systems.
• The collaborative nature of the research—Harvard, MIT, NIST, QuEra, and the University of Maryland—suggests a strengthening U.S. quantum ecosystem capable of competing with China’s nationalized quantum push.

Conclusion
With this 448-qubit fault-tolerant demonstration, quantum computing shifts from hope to horizon. The architecture is scalable, the physics is validated, and the pathway to million-qubit machines is no longer theory. As Mikhail Lukin noted, the dream is now “in direct sight”—and the next decade will determine who leads in the quantum era.

I share daily insights with 33,000+ followers across defense, tech, and policy. If this topic resonates, I invite you to connect and continue the conversation.

Keith King
https://lnkd.in/gHPvUttw
-
Google and IBM believe first workable quantum computer is in sight - meanwhile Europe offers a more collaborative vision

Yesterday, both Google and IBM signalled that quantum computing is entering its engineering phase:

Google’s Willow chip, introduced in December 2024, demonstrated scalable error correction: as more qubits were added, error rates dropped exponentially. It completed a benchmark task in under five minutes - one that would take today’s fastest supercomputer an unimaginable 10²⁵ years (i.e., 10 septillion years).

IBM revealed a detailed blueprint for industrial-scale quantum, outlining a path to building a fault-tolerant quantum supercomputer by late 2029.

Meanwhile, real-world applications are already emerging: IBM and Moderna have collaborated to simulate the longest mRNA sequence (60 nucleotides) ever modelled on a quantum computer, using 80 of the 156 qubits on IBM’s Heron chip. They applied a clever algorithm (a CVaR-based VQA) that makes earlier attempts at 42 nucleotides seem modest.

Now contrast that with Europe’s collaborative approach. Instead of centralised lab efforts, Europe is deploying nine quantum systems across at least seven countries - spanning superconducting, ion-trap, and annealing technologies - integrated with national supercomputing centres for shared access and resilience. I recently visited the Poznań Supercomputing Centre in Poland to witness one of these systems in action.

Europe’s model is about collective strength, diversity, and building long-term quantum infrastructure - demonstrating that the race isn’t just about breakthroughs, but also how you organise for scale and inclusivity.
-
Google Unveils Willow: A Leap Forward in Quantum Computing

Google Quantum AI has introduced Willow, a cutting-edge quantum chip designed to address two of the field’s most significant challenges: error correction and computational scalability. Willow, fabricated in Google’s Santa Barbara facility, achieves state-of-the-art performance, marking a pivotal step toward realizing a large-scale, commercially viable quantum computer. It gets way geekier from here – but if you’re with me so far…

Exponential Error Reduction
Julian Kelly, Director of Quantum Hardware at Google, emphasized Willow’s ability to exponentially reduce errors as the system scales. Utilizing a grid of superconducting qubits, Willow demonstrated a historic breakthrough in quantum error correction. By expanding arrays from 3×3 to 5×5 and then 7×7 qubits, researchers cut error rates in half with each iteration. This achievement, referred to as being “below threshold,” signifies that larger quantum systems can now exhibit fewer errors, a challenge pursued since Peter Shor introduced quantum error correction in 1995. The chip also achieved “beyond breakeven” performance, where arrays of qubits outperformed the lifetimes of individual qubits, which is key to ensuring the feasibility of practical quantum computations.

Ten Septillion Years in Five Minutes
Willow’s computational capabilities were validated using the Random Circuit Sampling (RCS) benchmark, a rigorous test of quantum supremacy. According to Google’s estimates, Willow completed a task in under five minutes that would take a modern supercomputer ten septillion years—a timescale exceeding the age of the universe. This achievement underscores the rapid, double-exponential performance improvements of quantum systems over classical alternatives. While the RCS benchmark lacks direct commercial applications, it remains a critical indicator of quantum computational power. Kelly noted that surpassing classical systems on this benchmark solidifies confidence in the broader potential of quantum technology.

Building Toward Practical Applications
Google’s roadmap aims to bridge the gap between theoretical quantum advantage and real-world utility. The team is now focused on achieving “useful, beyond-classical” computations that solve practical problems. Applications in drug discovery, battery design, and AI optimization are among the potential breakthroughs quantum computing could unlock. Willow’s advancements in quantum error correction and computational scalability highlight its transformative potential. As Kelly explained, “Quantum algorithms have fundamental scaling laws on their side,” making quantum computing indispensable for tasks beyond the reach of classical systems.

Practical quantum computing is still years away, but this is an exciting milestone. Considering the remarkable rate of technological improvement we’re experiencing right now, practical quantum computing (and quantum AI) may be closer than we think.

-s
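The halving pattern described above follows the standard below-threshold scaling law for surface codes: the logical error rate falls roughly as ε_d ≈ A·(p/p_th)^((d+1)/2), so once the physical error rate p sits below the threshold p_th, each step in code distance (3×3 → 5×5 → 7×7) divides the error by the same suppression factor Λ = p_th/p. A quick sketch with illustrative numbers (my own toy parameters, not Willow's measured values):

```python
# Toy model of below-threshold error suppression in a surface code.
# epsilon_d ~ A * (p / p_th) ** ((d + 1) / 2): each distance step
# d -> d + 2 divides the logical error rate by Lambda = p_th / p.
# A, p, and p_th below are illustrative, not measured values.

def logical_error(p, p_th, d, A=0.1):
    """Approximate logical error rate of a distance-d surface code."""
    return A * (p / p_th) ** ((d + 1) / 2)

p, p_th = 0.25e-2, 0.5e-2        # physical rate at half the threshold
for d in (3, 5, 7):              # the 3x3, 5x5, 7x7 arrays in the post
    print(d, logical_error(p, p_th, d))

# With p = p_th / 2, Lambda = 2: the error halves at each step,
# matching the "cut error rates in half with each iteration" claim.
```

The same formula shows why being *above* threshold is fatal: for p > p_th the ratio exceeds 1 and adding qubits makes the logical error worse, which is exactly the regime earlier hardware was stuck in.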
-
A new preprint from Google Quantum AI, Caltech, MIT, and Oratomic just made a bold claim: a quantum computer with fewer than 60 logical qubits can outperform any classical machine with exponentially more memory on real machine learning tasks (arXiv:2604.07639). Actual sentiment analysis on IMDb reviews and cell-type classification of scRNA-seq data.

The core idea is called quantum oracle sketching. Instead of loading an entire dataset into quantum memory at once (which has always been the Achilles heel of quantum ML), the algorithm streams data one sample at a time. Each sample drives a tiny rotation of the quantum state, a phase gate whose angle is proportional to the data value divided by the total number of samples. After processing enough samples, these microscopic rotations accumulate into an approximate quantum oracle for the full dataset, without ever storing the dataset itself. Because the circuit is deterministically constructed from data rather than trained by gradient descent, it also sidesteps the barren plateau problem that plagues variational quantum approaches.

That said, some challenges remain. The circuit depth scales linearly with the dataset size, which means wall-clock runtime grows with N even as memory stays tiny. And on conventional fault-tolerant hardware, those arbitrary-angle rotation gates must be approximated through T gate synthesis and magic state distillation, an overhead the paper does not account for and one that could easily dwarf the rest of the computation.

If you followed my post on the STAR architecture two weeks ago, you might already see where this is going. STAR's native support for arbitrary-angle rotations (small angles) removes precisely the magic state distillation overhead that makes quantum oracle sketching look expensive. The linear depth challenge remains open. Together they sketch a more credible path toward practical fault-tolerant quantum machine learning than either suggests alone.
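The streaming mechanism can be illustrated with a toy single-qubit simulation (my own sketch of the idea as described, not the paper's actual circuit; the function name and angle convention are assumptions): each sample x_i applies a phase rotation of angle x_i/N, so after the full stream the state carries the accumulated phase Σx_i/N = mean(x), with the dataset never held in memory.

```python
import numpy as np

# Toy sketch of "quantum oracle sketching": stream samples one at a
# time, each applying a microscopic phase rotation x_i / N. The final
# state's relative phase equals sum(x_i) / N = mean(x), even though
# no sample is ever stored after its rotation is applied.

def stream_phase_oracle(samples):
    n = len(samples)
    state = np.array([1, 1]) / np.sqrt(2)         # |+> working qubit
    for x in samples:                             # one sample at a time
        phase = np.diag([1, np.exp(1j * x / n)])  # tiny phase gate
        state = phase @ state                     # O(1) memory per step
    return state

data = np.random.default_rng(0).uniform(0, np.pi, 1000)
state = stream_phase_oracle(data)
encoded = np.angle(state[1] / state[0])           # accumulated phase
print(np.isclose(encoded, data.mean()))           # → True
```

Note how the sketch also makes the two caveats visible: the loop runs N times (linear circuit depth), and each step is an arbitrary-angle rotation, which on conventional fault-tolerant hardware would have to be synthesized from T gates.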
#QuantumComputing #QuantumML #AI #FTQC #QuantumResearch #EmergingTech
-
Harvard University researchers have achieved fault-tolerant universal quantum computation using 448 neutral atoms, marking a critical milestone toward scalable quantum systems. This isn't just incremental progress; it's the first demonstration of all key error-correction components in one setup, paving the way for practical quantum applications that could transform AI training, drug discovery, and complex simulations.

Why this matters:
• Error Correction Breakthrough: Quantum bits (qubits) are notoriously fragile due to environmental noise; this system operates below the error threshold, allowing real-time detection and correction without halting computations, essential for building larger, reliable quantum machines.
• Scalability Achieved: By showing that adding more qubits reduces overall errors, the team has overcome a major barrier; previous systems struggled with error accumulation, limiting size and utility.
• Impact on AI and Beyond: Quantum computers excel at parallel processing of vast datasets; this could accelerate AI model training by orders of magnitude, solving optimization problems that classical supercomputers take years to crack.
• Room for Growth: Using laser-controlled rubidium atoms, the architecture is hardware-agnostic and could integrate with existing tech, speeding up commercialization in fields like materials science and cryptography.

This positions quantum tech closer to real-world deployment, potentially disrupting industries reliant on high-compute tasks. Read more here: https://lnkd.in/dxM4pQYw

#QuantumComputing #AIBreakthroughs #TechInnovation #FutureOfComputing #QuantumAI
-
Quantum computers will finally be useful.

For years, quantum computing lived in that awkward category of “amazing physics, unclear usefulness.” This week’s Nature feature captures something I’ve been hearing more often from serious people in the field: a genuine vibe shift. The conversation is moving from “maybe someday” to “we can see the engineering path.”

What changed? Not one miracle breakthrough but rather several practical ones stacking up. First, error correction is finally crossing key thresholds: multiple groups have now shown versions of quantum error correction that actually reduce errors as you scale, which is a prerequisite for “fault-tolerant” machines. Second, hardware fidelity is climbing into “nines” territory: better qubits, better gates, better control systems, and less “fragile lab demo,” more “repeatable device.” Third, algorithms are getting leaner: smarter implementations can dramatically reduce the qubit overhead needed for meaningful tasks (including famous benchmarks like factoring).

The takeaway (for investors and operators): quantum’s near-term story isn’t “it will replace classical computing.” It’s “it may become a specialized tool for a few high-value problems” including materials, chemistry, optimization ... once fault tolerance is real. And the credible timeline many are now discussing is ~10 years, not “sometime this century.”

If you’re building in tech or life sciences, it’s worth asking now: what would you do differently if reliable quantum compute showed up by 2035?

#QuantumComputing #DeepTech #AI #Biotech #MaterialsScience #Cybersecurity
-
🚀 Validating the Power of Quantum Optimization! 🚀

I'm excited to share our latest research advance from the Quantum Computing Engineering Research team led by Ruslan Shaydulin at Global Technology Applied Research, JPMorganChase. Our work, titled "Threshold for Fault-tolerant Quantum Advantage with the Quantum Approximate Optimization Algorithm", has just appeared on arXiv: https://lnkd.in/eiEqA-a7

In this paper, we explore the Quantum Approximate Optimization Algorithm (#QAOA) combined with Amplitude Amplification (AA) to tackle the challenging random 8-SAT problem; the Boolean SATisfiability problem is NP-complete. Our findings reveal that with a QAOA depth of 623, a realistic fault-tolerant quantum computer can outperform state-of-the-art classical heuristics running in parallel on a supercomputer. Moreover, the smallest running time for which the #quantum speedup is realized is only a few hours.

Our research highlights the potential for quantum computers to deliver practical speedups over classical algorithms, even with low-degree polynomial speedups. By optimizing circuit components and leveraging recent advancements in error correction, we demonstrate that large-scale fault-tolerant quantum computers will be useful for #optimization.

Authors: Sivaprasad Omanakuttan, Zichang He, Zhiwei Zhang, Tianyi Hao, Arman B., Sami Boulebnane, Shouvanik Chakrabarti, Dylan Herman, Joseph Sullivan, Michael A Perlin, Ruslan Shaydulin, and Marco Pistoia.

Join us in celebrating this milestone in #QuantumComputing #research, and let's continue to push the boundaries of what is possible with #quantumtechnology. Feel free to share your thoughts and insights in the comments below!
-
One of the biggest challenges in quantum computing has always been error correction. Unlike classical computers, where errors are rare and manageable, quantum systems are incredibly sensitive: even the tiniest disturbance can disrupt a calculation. For decades, scientists feared that error correction might require so much overhead that it would outweigh the benefit of the computation itself—a roadblock for practical quantum computing.

This week, Google announced a major breakthrough with its new #Willow chip, showing that error-correction overhead doesn’t have to diverge. They demonstrated that their system can perform calculations with 105 qubits while simultaneously using error correction to manage and stabilize the system. For the first time, the overhead required for error correction scales in a manageable way as the system grows.

Here’s why it’s game-changing:
• 70 physical qubits are allocated to error correction for every logical qubit in the system, making the calculations reliable without overwhelming the computational capacity.
• It proves quantum systems can become reliable at scale, bringing us closer to real-world applications like drug discovery, clean energy breakthroughs, and revolutionary materials design.
• The Willow chip has already shown it can handle complex calculations that today’s fastest supercomputers couldn’t solve in the entire lifetime of the universe.

Even Elon Musk couldn’t help but react, commenting “Wow” on X when the news dropped. This marks a turning point for quantum computing—it’s no longer just theoretical. The pieces are falling into place for a future where these machines solve humanity’s toughest problems.

#AI #quantum