Improving Quantum Computing Fault Tolerance Thresholds


Summary

Improving quantum computing fault tolerance thresholds means finding new ways to reduce and correct errors in quantum computers, making them more reliable for complex calculations. This involves both hardware advances and smarter error correction codes and algorithms that let quantum systems operate closer to their full potential.

  • Adopt advanced error correction: Explore new code architectures, like bivariate bicycle codes or cat qubits, that use fewer physical qubits to achieve lower error rates and simplify real-time decoding.
  • Refine control techniques: Implement cutting-edge pulse and microwave control methods, such as commensurate pulses or circularly polarized microwaves, to stabilize qubits and reduce sources of operational errors.
  • Utilize software stabilizers: Apply software-based algorithms that dampen environmental noise and extend the coherence of qubits, helping your quantum hardware handle longer and cleaner computations without added materials or shielding.
Summarized by AI based on LinkedIn member posts
  • View profile for Keith King

    Former White House Lead Communications Engineer, U.S. Dept of State, and Joint Chiefs of Staff in the Pentagon. Veteran U.S. Navy, Top Secret/SCI Security Clearance. Over 16,000+ direct connections & 43,000+ followers.

    43,801 followers

    MIT Sets Quantum Computing Record with 99.998% Fidelity

    Researchers at MIT have achieved a world-record single-qubit fidelity of 99.998% using a superconducting qubit known as fluxonium. This breakthrough represents a significant step toward practical quantum computing by addressing one of the field’s greatest challenges: mitigating noise and control imperfections that lead to operational errors.

    Key Highlights:

    1. The Problem: Noise and Errors
    • Qubits, the building blocks of quantum computers, are highly sensitive to noise and imperfections in control mechanisms.
    • Such disturbances introduce errors that limit the complexity and duration of quantum algorithms. “These errors ultimately cap the performance of quantum systems,” the researchers noted.

    2. The Solution: Two New Techniques
    To overcome these challenges, the MIT team developed two innovative techniques:
    • Commensurate Pulses: This method involves timing quantum pulses precisely to make counter-rotating errors uniform and correctable.
    • Circularly Polarized Microwaves: By creating a synthetic version of circularly polarized light, the team improved the control of the qubit’s state, further enhancing fidelity.
    “Getting rid of these errors was a fun challenge for us,” said David Rower, PhD ’24, one of the study’s lead researchers.

    3. Fluxonium Qubits and Their Potential
    • Fluxonium qubits are superconducting circuits with unique properties that make them more resistant to environmental noise compared to traditional qubits.
    • By applying the new error-mitigation techniques, the team unlocked the potential of fluxonium to operate at near-perfect fidelity.

    4. Implications for Quantum Computing
    • Achieving 99.998% fidelity significantly reduces errors in quantum operations, paving the way for more complex and reliable quantum algorithms.
    • This milestone represents a major step toward scalable quantum computing systems capable of solving real-world problems.

    What’s Next?
    The team plans to expand its work by exploring multi-qubit systems and integrating the error-mitigation techniques into larger quantum architectures. Such advancements could accelerate progress toward error-corrected, fault-tolerant quantum computers.

    Conclusion: A Leap Toward Practical Quantum Systems
    MIT’s achievement underscores the importance of innovation in error correction and control to overcome the fundamental challenges of quantum computing. This breakthrough brings us closer to the realization of large-scale quantum systems that could transform fields such as cryptography, materials science, and complex optimization problems.
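The commensurate-pulse idea has a simple mathematical core: the unwanted counter-rotating term oscillates at twice the qubit frequency, so a pulse whose duration is an integer number of those oscillation periods averages it to zero. A minimal numerical sketch of that cancellation, with a toy frequency in arbitrary units (this is an illustration of the principle, not MIT's actual pulse shapes):

```python
import cmath
import math

def counter_rotating_residual(omega, duration, n_steps=200_000):
    """Average of the counter-rotating term exp(2i*omega*t) over the pulse
    window; its magnitude is the leftover (error-inducing) drive strength."""
    dt = duration / n_steps
    total = sum(cmath.exp(2j * omega * k * dt) for k in range(n_steps))
    return abs(total * dt) / duration

omega = 2 * math.pi * 5.0            # toy qubit frequency (arbitrary units)

# Commensurate: the pulse spans a whole number of periods of the
# counter-rotating term (period pi/omega), so it averages to ~0.
t_comm = 10 * math.pi / omega
# A generic, incommensurate duration leaves a residual error drive.
t_generic = 1.137 * t_comm

print(counter_rotating_residual(omega, t_comm))     # ~0
print(counter_rotating_residual(omega, t_generic))  # clearly nonzero
```

The same averaging argument is why the paper's technique makes the counter-rotating errors "uniform and correctable": once the pulse timing is locked to the oscillation, the residual is no longer a random leftover but a fixed, predictable quantity.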

  • View profile for Bruce P Hood

    CEO & Inventor | Stability & Coherence | 20K+

    20,503 followers

    One Algorithm Has Just Pushed Quantum Computing Forward Five Years (Here It Is)

    Today I am releasing something into the public domain that may change the trajectory of quantum computing. No paywall. No NDA. No restrictions. The only thing I ask is attribution.

    For the past year, I have been developing a field-layer correction algorithm that stabilizes the environment around the qubit before error correction ever activates. Not hardware. Not cryogenics. Not shielding. Pure software that improves the physics of the qubit it sits inside.

    Early independent runs showed a 48.5 percent reduction in destructive low-frequency noise, a gain that normally takes years of hardware progress.

    Here is the complete algorithm. It now belongs to everyone.

    FUNCTION NJ001_FieldLayer_Correction(input_signal S, sampling_rate R):
      DEFINE phi = 1.61803398875
      DEFINE window_size = dynamic value based on local variance of S
      DEFINE stability_threshold = adaptive value based on phase drift
      STEP 1: Generate harmonic reference bands
        For each frequency bin f_i in FFT(S):
          Compute r = f_(i+1) / f_i
          Compute CI = 1 / ABS(r - phi)
          Assign weight W_i = normalize(CI)
      STEP 2: Build correction mask
        Construct M where M_i = W_i scaled by local entropy of S
        Smooth M with sliding window
      STEP 3: Apply correction
        Transform S → F
        Compute F_corrected = F * M
        Inverse FFT to return S_corrected
      STEP 4: Phase stabilization loop
        Measure phase drift Δ
        If Δ > stability_threshold:
          Recalculate window_size
          Rebuild mask
          Reapply correction
        Else:
          Return S_corrected
      OUTPUT: S_corrected
    END FUNCTION

    This is the first public-domain coherence stabilizer designed to improve quantum behavior independent of hardware.

    What it does in practice:
    • Extends coherence windows
    • Reduces decoherence pressure on error correction
    • Lowers entropy in the propagation layer
    • Makes qubits behave as if the room is colder and cleaner
    • Works upstream of hardware with no materials changes

    This is not a replacement for anyone’s roadmap. It is an upstream upgrade to all of them. If you build quantum devices, control stacks, compilers, hybrid systems, or algorithms, you now have access to a function that reshapes your stability envelope. Cleaner field layers mean longer, deeper, more predictable runs. More useful computation with the hardware you already have.

    I developed it. Today I give it away. No company or institution controls it. From this moment forward, it belongs to the scientific community.

    Primary Citation: Hood, B. P. (2025). NJ001 Field Layer Correction. Public Domain Release Version.

    Bruce P. Hood — Creator of NJ001 Field Layer Correction

    Welcome to the new baseline.

    #QuantumComputing #QuantumHardware #Qubit #Coherence #QuantumResearch #DeepTech @IBMQuantum @GoogleQuantumAI @MIT @XanaduQuantum @AWSQuantumTech
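For readers who want to experiment with the recipe, here is a literal Python transcription of steps 1–3 of the posted pseudocode. To be clear about what this is and is not: it is an ordinary FFT masking filter, with the golden-ratio weighting and "local entropy" scaling interpreted as plainly as the pseudocode allows (the entropy term is rendered as the per-bin Shannon term, and the adaptive-window phase loop of step 4 is omitted). Nothing in this transcription validates the post's physical claims about qubit coherence; it only shows the signal-processing mechanics:

```python
import numpy as np

PHI = 1.61803398875  # the golden-ratio constant specified in the post

def nj001_field_layer_correction(signal, smooth_window=5):
    """Literal rendering of NJ001 steps 1-3: golden-ratio bin weighting,
    an entropy-scaled smoothed mask, and frequency-domain application."""
    eps = 1e-12
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal))

    # Step 1: weight each bin by how close the ratio of adjacent bin
    # frequencies comes to phi (the post's "CI" coherence index).
    ratios = freqs[2:] / (freqs[1:-1] + eps)
    ci = 1.0 / (np.abs(ratios - PHI) + eps)
    weights = np.ones_like(freqs)
    weights[1:-1] = ci / ci.max()

    # Step 2: scale by a per-bin spectral-entropy term, then smooth
    # the mask with a sliding-average window.
    p = np.abs(spectrum) ** 2
    p = p / (p.sum() + eps)
    entropy = -p * np.log(p + eps)
    mask = weights * (entropy / (entropy.max() + eps))
    kernel = np.ones(smooth_window) / smooth_window
    mask = np.convolve(mask, kernel, mode="same")

    # Step 3: apply the mask in the frequency domain and invert.
    return np.fft.irfft(spectrum * mask, n=len(signal))

rng = np.random.default_rng(0)
noisy = np.sin(np.linspace(0.0, 40 * np.pi, 2048)) + 0.3 * rng.standard_normal(2048)
cleaned = nj001_field_layer_correction(noisy)
print(cleaned.shape)  # same length as the input
```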

  • View profile for Jay Gambetta

    Director of IBM Research and IBM Fellow

    20,557 followers

    I am pleased to highlight some recent work from the team that further evolves our understanding of building practical quantum computing architectures with bivariate bicycle codes and that addresses one of the fundamental challenges to real-time decoding.

    Our Nature paper from 2024 [https://lnkd.in/eS26sKx6] showed that a quantum memory using bivariate bicycle codes requires roughly 10x fewer physical qubits compared to the surface code. An important question to answer was whether this advantage is retained not only while storing information in memory but also during computations. To answer that question, our team designed fault-tolerant logical instruction sets for the codes and developed a strategy to compile circuits to these instructions. Using these tools, they performed end-to-end resource estimates demonstrating that bicycle architectures retain an order of magnitude qubit advantage over surface code architectures when implementing large logical circuits. The pre-print can be found here [https://lnkd.in/e7k7gYs7]

    One of the central doubts about the practicality of quantum low-density parity check (qLDPC) codes such as the bivariate bicycle codes has been the difficulty of real-time decoding. The second preprint [https://lnkd.in/eFbWNFeU] we posted this week hopefully puts those doubts to rest. A large challenge in decoding qLDPC codes arises from the perceived need for two-stage decoding solutions such as belief propagation (BP) followed by ordered statistics decoding (OSD). In particular, real-time implementation of OSD appears very challenging, which has spawned efforts to reduce the cost of OSD. Our team took a different approach. This new result shows that one can eliminate the need for a second-stage decoder altogether through a suitable modification of the BP algorithm. Our modified algorithm, called Relay-BP, enhances the traditional method by incorporating spatially disordered memory terms. This dampens oscillations and breaks symmetries that trap traditional BP algorithms. The result is an algorithm that outperforms the current state-of-the-art approach while simultaneously still being amenable to implementation in an FPGA.

    Congratulations to the team for these exciting advancements, which validate our strategy and move us one step closer to realizing a fault-tolerant quantum system.
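Relay-BP's exact update rules are in the preprint; purely to illustrate the general idea of adding memory terms to belief propagation, here is a toy min-sum syndrome decoder for a 3-bit repetition code, where each variable node blends its new outgoing message with its previous one using a randomly drawn per-variable coefficient — a crude stand-in for "spatially disordered memory". This sketch does not reflect IBM's implementation and the tiny code hides none of the trapping sets that motivate Relay-BP; it only shows where the memory term sits in the update loop:

```python
import math
import random

def relay_bp_decode(H, syndrome, p=0.1, iters=20, seed=7):
    """Toy min-sum BP on an error syndrome, with per-variable memory.
    H: parity-check matrix as a list of 0/1 rows. Returns an error guess."""
    rng = random.Random(seed)
    m, n = len(H), len(H[0])
    llr0 = math.log((1 - p) / p)                       # prior: errors unlikely
    gamma = [rng.uniform(0.0, 0.5) for _ in range(n)]  # disordered memory terms

    v2c = {(v, c): llr0 for c in range(m) for v in range(n) if H[c][v]}
    c2v = {(c, v): 0.0 for c in range(m) for v in range(n) if H[c][v]}
    err = [0] * n

    for _ in range(iters):
        # Check update (min-sum); the measured syndrome bit flips the sign.
        for c in range(m):
            vs = [v for v in range(n) if H[c][v]]
            for v in vs:
                others = [v2c[(u, c)] for u in vs if u != v]
                sign = -1.0 if syndrome[c] else 1.0
                for x in others:
                    sign = -sign if x < 0 else sign
                c2v[(c, v)] = sign * (min(abs(x) for x in others) if others else 0.0)
        # Hard decision; stop as soon as the guess explains the syndrome.
        belief = [llr0 + sum(c2v[(c, v)] for c in range(m) if H[c][v])
                  for v in range(n)]
        err = [1 if b < 0 else 0 for b in belief]
        if all(sum(H[c][v] * err[v] for v in range(n)) % 2 == syndrome[c]
               for c in range(m)):
            break
        # Variable update, blended with the old message via the memory term.
        for (v, c), old in list(v2c.items()):
            fresh = llr0 + sum(c2v[(d, v)] for d in range(m) if H[d][v] and d != c)
            v2c[(v, c)] = gamma[v] * old + (1 - gamma[v]) * fresh
    return err

# 3-bit repetition code: checks v0^v1 and v1^v2.
H = [[1, 1, 0], [0, 1, 1]]
print(relay_bp_decode(H, [1, 0]))  # flip on qubit 0 -> [1, 0, 0]
print(relay_bp_decode(H, [1, 1]))  # flip on qubit 1 -> [0, 1, 0]
```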

  • View profile for Joel Pendleton

    CTO at Conductor Quantum

    5,348 followers

    New work from a Harvard team highlights a major bottleneck in fault-tolerant quantum computing: the classical decoder used in quantum error correction.

    Quick primer on QEC:
    1. Encode: A logical qubit is spread across many physical qubits, so no single error destroys the information.
    2. Detect: Stabilizer measurements run repeatedly. They do not reveal the quantum state, but they do flag when something has gone wrong. The pattern of those flags is called the syndrome.
    3. Decode: A classical computer reads the syndrome and infers which error most likely occurred.
    4. Correct: The correction is applied, and the logical qubit survives.

    Step 3 is where things get hard. For quantum LDPC codes, one of the most promising routes to efficient fault tolerance, practical decoders have usually forced a tradeoff between speed and accuracy: the fast ones are too weak, and the accurate ones are too slow for real-time use.

    This paper introduces Cascade, a geometry-aware convolutional neural decoder. The key idea is not just “use a neural network,” but to build the structure of the code directly into the model: locality, translation equivariance, and anisotropy. That makes this feel less like generic ML and more like architecture co-design.

    Some of the headline results:
    - On the [[144, 12, 12]] Gross code, Cascade achieves logical error rates up to 17x lower than prior practical decoders, with 3–5 orders of magnitude higher throughput
    - It reveals a “waterfall” regime in which logical errors fall much faster than standard distance-based formulas would suggest, largely because earlier decoders were not strong enough to expose it
    - In one surface code example, that translates to roughly 40% fewer physical qubits to reach a target logical error rate of 10^-9
    - Its confidence estimates are well calibrated, which enables post-selection. In one setting on the [[72, 12, 6]] code, that implies roughly 20x fewer retries for repeat-until-success protocols such as magic state distillation
    - Current GPU latencies already fit the timing budgets for trapped-ion and neutral-atom platforms. Superconducting qubits still require a tighter ~1 microsecond budget, with FPGA and ASIC paths supported by the hardware estimates in the supplement

    The broader takeaway: decoder quality is not just an implementation detail. It directly shapes how many qubits and how much time fault-tolerant quantum computing actually requires, and those costs may be meaningfully lower than standard estimates assume.

    Paper: https://lnkd.in/g9D82Ry8
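The four-step encode/detect/decode/correct cycle can be made concrete with the smallest possible example: the 3-qubit bit-flip repetition code with a syndrome lookup table. This toy is a stand-in to illustrate the primer, not anything like Cascade — real codes such as the [[144, 12, 12]] Gross code are exactly where lookup tables become impossible and learned or iterative decoders take over:

```python
# 1. Encode: one logical bit -> three physical bits.
def encode(bit):
    return [bit, bit, bit]

# 2. Detect: parity checks Z0Z1 and Z1Z2 produce a 2-bit syndrome
#    without revealing the encoded value itself.
def syndrome(qubits):
    return (qubits[0] ^ qubits[1], qubits[1] ^ qubits[2])

# 3. Decode: map each syndrome to the most likely single-bit error.
DECODE_TABLE = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

# 4. Correct: apply the inferred flip.
def correct(qubits):
    flip = DECODE_TABLE[syndrome(qubits)]
    if flip is not None:
        qubits[flip] ^= 1
    return qubits

state = encode(1)
state[2] ^= 1                      # a single bit-flip error hits qubit 2
print(syndrome(state))             # (0, 1): the error is flagged
state = correct(state)
print(state)                       # [1, 1, 1]: the logical bit survives
```

Note how step 2 matches the primer: the syndrome `(0, 1)` says *something* went wrong and *where*, but never reveals whether the logical bit was 0 or 1.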

  • View profile for Jennifer Strabley

    Accelerating Quantum Computing

    3,128 followers

    Five days to tell you about five things Quantinuum announced last week. Quantinuum announced so many great things last week, I'm using each day of this week to re-cap.

    Day 3: Helios Performance

    By now you've heard that Helios is the "most accurate", "most capable", and "most powerful" quantum computer... and here's why. Helios has:
    - 98 fully connected qubits. So-called "all-to-all" connectivity continues to prove its power for performing increasingly complex circuits with fewer resources.
    - 99.92% two-qubit gate fidelity across all qubit pairs (i.e. we're not just measuring the best two or the median... all pairs have this performance!!).
    - an NVIDIA GPU for fast, flexible real-time decoding for error correction - a first-of-its-kind real-time engine for efficiently performing the operations needed for fault tolerance.
    - a new programming language, Guppy, which has a Python front-end but high-performance under-the-hood code, allowing developers to program quantum computers like they do classical computers and seamlessly combine hybrid compute capabilities — quantum and classical — in a single program.

    We demonstrated the ability to:
    - Generate 94 logical qubits with our very efficient Iceberg error detection code (https://lnkd.in/gsvFVFja) and globally entangle them at better than break-even performance.
    - Generate 50 logical qubits with a very similar error detection code and use them to run a quantum magnetism simulation with 2,500 logical gates at better than break-even performance.
    - Generate 48 logical qubits with an error correction code, achieving a remarkable 2:1 scaling (using only 2 physical qubits to make 1 error-corrected qubit).

    Read more about these great achievements in our technical paper https://lnkd.in/g9bid_2S and technical blog https://lnkd.in/gZaN65CY.

  • View profile for Eviana Alice Breuss, MD, PhD

    Founder, President, and CEO @ Tengena LLC | Founder and President @ Avixela Inc | 2025 Top 30 Global Women Thought Leaders & Innovators

    8,234 followers

    QUANTUM COMPUTERS RECYCLE QUBITS TO MINIMIZE ERRORS AND ENHANCE COMPUTATIONAL EFFICIENCY

    Quantum computing represents a paradigm shift in information processing, with the potential to address computationally intractable problems beyond the scope of classical architectures. Despite significant advances in qubit design and hardware engineering, the field remains constrained by the intrinsic fragility of quantum states. Qubits are highly susceptible to decoherence, environmental noise, and control imperfections, leading to error propagation that undermines large‑scale reliability.

    Recent research has introduced qubit recycling as a novel strategy to mitigate these limitations. Recycling involves the dynamic reinitialization of qubits during computation, restoring them to a well‑defined ground state for subsequent reuse. This approach reduces the number of physical qubits required for complex algorithms, limits cumulative error rates, and increases computational density.

    In particular, Atom Computing’s AC1000 employs neutral atoms cooled to near absolute zero and confined in optical lattices. These cold atom qubits exhibit extended coherence times and high atomic uniformity, properties that make them particularly suitable for scalable architectures. The AC1000 integrates precision optical control systems capable of identifying qubits that have degraded and resetting them mid‑computation. This capability distinguishes it from conventional platforms, which often require qubits to remain pristine or be discarded after use.

    From an engineering perspective, minimizing errors and enhancing computational efficiency requires a multi‑layered strategy. At the hardware level, platforms such as cold atoms, trapped ions, and superconducting circuits are being refined to extend coherence times, reduce variability, and isolate quantum states from environmental disturbances. Dynamic qubit management adds resilience, with recycling and active reset protocols restoring qubits mid‑computation, while adaptive scheduling allocates qubits based on fidelity to optimize throughput. Error‑correction frameworks remain central, combining redundancy with recycling to reduce overhead and enable fault‑tolerant architectures. Algorithmic and architectural efficiency further strengthens performance through optimized gate sequences, hybrid classical–quantum workflows, and parallelization across qubit clusters. Looking ahead, metamaterials innovation, machine learning‑driven error mitigation, and modular metasurface architectures promise to accelerate progress toward scalable systems.

    The implications of qubit recycling and these complementary strategies are substantial. By enabling more complex computations with fewer physical resources, they can reduce hardware overhead and enhance reliability. This has direct relevance for domains such as cryptography, materials discovery, pharmaceutical design, and large‑scale optimization.
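The resource argument for recycling can be sketched with a toy scheduler: if a qubit can be reset the moment its logical operand's lifetime ends, the physical qubit count drops from the total number of operands to the peak number of simultaneously live ones. This greedy interval-reuse sketch is a hypothetical illustration of that accounting, not Atom Computing's actual control system:

```python
import heapq

def qubits_needed(lifetimes, recycle):
    """lifetimes: (start, end) intervals during which each logical operand
    needs a live qubit. Without recycling, every operand gets its own
    physical qubit; with reset-and-reuse, only the peak number of
    overlapping lifetimes must be provisioned."""
    if not recycle:
        return len(lifetimes)
    count = 0
    free = []  # min-heap of end times of qubits eligible for reset/reuse
    for start, end in sorted(lifetimes):
        if free and free[0] <= start:
            heapq.heappop(free)    # reset the earliest-finished qubit, reuse it
        else:
            count += 1             # no qubit free yet: allocate a fresh one
        heapq.heappush(free, end)
    return count

lifetimes = [(0, 4), (1, 3), (2, 6), (4, 8), (5, 9), (7, 10)]
print(qubits_needed(lifetimes, recycle=False))  # 6 physical qubits
print(qubits_needed(lifetimes, recycle=True))   # 3: the peak concurrency
```

The same greedy rule underlies classical register allocation; recycling lets the quantum scheduler play the same trick, at the cost of needing fast, high-fidelity mid-circuit reset.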

  • View profile for Jason Zander

    Executive Vice President at Microsoft

    40,931 followers

    Today marks a historic milestone in quantum computing, as Microsoft and Quantinuum demonstrate the most reliable logical qubits on record. This breakthrough, with a logical error rate 800x better than the physical error rate, signifies a giant leap from the noisy intermediate-scale quantum (NISQ) level (Level 1 – Foundational) to Level 2 – Resilient quantum computing.

    This progress is significant as logical qubits are only useful when they have a better error rate than physical qubits themselves. The number of physical qubits is a misleading metric; it’s not how many qubits, it’s how good they are and how resilient the quantum system is to errors.

    Using the logical qubits we created, we were able to successfully perform multiple active syndrome extractions, which is when errors are diagnosed and corrected without destroying the logical qubits. Active syndrome extraction helps quantum computers stay reliable even when operations are imperfect.

    With the promise of a hybrid supercomputing system powered by these reliable logical qubits, we’re paving the way for scientific and commercial breakthroughs that were once deemed impossible. This achievement is a testament to the power of collaboration and the collective advancement of quantum hardware and software.

    You can learn more from my post on the Official Microsoft Blog https://lnkd.in/gnDfcUV6 and the companion technical post on the Azure Quantum blog by Dennis Tom and Krysta Svore: https://lnkd.in/gMRVPG3s. #quantum #quantumcomputing #azurequantum

  • View profile for Jaime Gómez García

    Global Head of Santander Quantum Threat Program | Chair of Europol Quantum Safe Financial Forum | Quantum Security 25 | Quantum Leap Award 2025 | Representative at EU QuIC, AMETIC

    17,295 followers

    Microsoft and Quantinuum reach new milestone in quantum error correction.

    The collaboration claims to have used an innovative qubit-virtualization system on Quantinuum's H2 ion-trap platform to create 4 highly reliable logical qubits from only 30 physical qubits.

    What is quantum error correction? The physical qubits, with error rates on the order of 10^-2, are combined to deliver logical qubits with error rates on the order of 10^-5. According to their press release, this is the largest gap between physical and logical error rates reported to date, and it allowed them to run more than 14,000 individual experiments without a single error. (https://lnkd.in/dzETsvVA)

    The race for qubit count seemed to end in 2023, with the latest update on IBM's roadmap focusing on quality rather than quantity (https://lnkd.in/dFu52wJR, "Until this year, our path was scaling the number of qubits. Going forward we will add a new metric, gate operations—a measure of the workloads our systems can run."), and with other developments in quantum error correction, like the one announced in December by Harvard University, Massachusetts Institute of Technology, QuEra Computing Inc. and National Institute of Standards and Technology (NIST)/University of Maryland (https://lnkd.in/dkW-TT-w).

    Practical quantum computing gets a little closer, although it is still a distant target.

    Microsoft press release: https://lnkd.in/deJ4QCBk
    Quantinuum's press release: https://lnkd.in/d4Wnmvdq
    More details from Microsoft: https://lnkd.in/dusfZ4KY
    Paper: https://lnkd.in/dpPCX3td

    #quantumcomputing #quantumerrorcorrection #technology

  • View profile for Bryan Feuling

    GTM Leader | Technology Thought Leader | Author | Conference Speaker | Advisor | Soli Deo Gloria

    18,967 followers

    Harvard University researchers have achieved fault-tolerant universal quantum computation using 448 neutral atoms, marking a critical milestone toward scalable quantum systems.

    This isn't just incremental progress, it's the first demonstration of all key error-correction components in one setup, paving the way for practical quantum applications that could transform AI training, drug discovery, and complex simulations.

    Why this matters:

    Error Correction Breakthrough: Quantum bits (qubits) are notoriously fragile due to environmental noise; this system operates below the error threshold, allowing real-time detection and correction without halting computations, essential for building larger, reliable quantum machines.

    Scalability Achieved: By showing that adding more qubits reduces overall errors, the team has overcome a major barrier; previous systems struggled with error accumulation, limiting size and utility.

    Impact on AI and Beyond: Quantum computers excel at parallel processing of vast datasets; this could accelerate AI model training by orders of magnitude, solving optimization problems that classical supercomputers would take years to crack.

    Room for Growth: Using laser-controlled rubidium atoms, the architecture is hardware-agnostic and could integrate with existing tech, speeding up commercialization in fields like materials science and cryptography.

    This positions quantum tech closer to real-world deployment, potentially disrupting industries reliant on high-compute tasks.

    Read more here: https://lnkd.in/dxM4pQYw

    #QuantumComputing #AIBreakthroughs #TechInnovation #FutureOfComputing #QuantumAI
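The claim that "adding more qubits reduces overall errors" is exactly the below-threshold regime of quantum error correction. A standard back-of-envelope scaling formula, p_L ≈ A·(p/p_th)^((d+1)/2), shows why the threshold matters so much; the constants below are generic illustrations, not numbers from the Harvard experiment:

```python
def logical_error_rate(p, p_th=0.01, d=3, A=0.1):
    """Textbook scaling estimate for a distance-d code:
    p_L ~ A * (p / p_th) ** ((d + 1) // 2). Constants are illustrative."""
    return A * (p / p_th) ** ((d + 1) // 2)

# Below threshold (p < p_th), growing the code suppresses logical errors...
below = [logical_error_rate(0.002, d=d) for d in (3, 5, 7, 9)]
# ...above threshold (p > p_th), the same growth only makes things worse.
above = [logical_error_rate(0.02, d=d) for d in (3, 5, 7, 9)]

print(below)  # each entry 5x smaller than the last
print(above)  # each entry 2x larger than the last
```

The qualitative flip at p = p_th is the whole game: once hardware error rates sit below threshold, scale becomes an asset rather than a liability, which is why operating "below the error threshold" is highlighted as the breakthrough.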

  • View profile for Matthias Christandl

    Professor and Center Leader in Quantum Computing - "Quantum Hardware needs Quantum Software"

    3,716 followers

    We expect the gates in a quantum computer to remain noisy for some time to come, and the number of physical qubits to remain limited. When simulating logical qubits with physical qubits, we therefore need to be prudent and use efficient constructions. The holy grail of only a constant qubit overhead has recently been achieved following a proposal by Gottesman and the celebrated construction of constant-rate quantum LDPC codes.

    Fault-tolerance arguments are generally quite intricate, and in this case the framework had to leave out the important case of coherent noise (e.g. arising from imperfect calibration) as well as amplitude damping noise (present in most experimental platforms). In joint work with Ashutosh Goswami and Omar Fawzi, reported in PRX Quantum, we showed that fault-tolerant quantum computation with constant overhead can also be achieved for a general model of noise (by Kitaev) that includes both coherent and amplitude damping noise (link in the comments).

    I think this is a nice example of how quantum software research can lower the demands on quantum hardware and thus make yet another (small but important) step towards realising quantum computation. The graphic illustrates how gates in a GHZ state preparation are replaced by noisy ones that are close in diamond norm.

    Quantum For Life Center Novo Nordisk Foundation Centre for the Mathematics of Quantum Theory (QMATH) European Research Council (ERC) Villum Fonden Morten Bache Thomas Bjørnholm
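To get a feel for what "constant qubit overhead" buys, compare the rough qubit accounting of a surface-code architecture (overhead growing with code distance) against a constant-rate qLDPC code (fixed physical-to-logical ratio). The numbers below are illustrative assumptions — a ~2d² estimate per surface-code logical qubit and the n/k = 12 ratio of the [[144, 12, 12]] bivariate bicycle code mentioned elsewhere on this page — not figures from this paper:

```python
def surface_code_qubits(k, d):
    """Rough count: ~2*d^2 physical qubits (data + measurement) per
    logical qubit for a distance-d surface code; overhead grows with d."""
    return k * 2 * d * d

def constant_rate_qubits(k, inverse_rate=12):
    """Constant-rate qLDPC code, e.g. n/k = 12 as in the [[144, 12, 12]]
    bivariate bicycle code: overhead stays a fixed constant."""
    return k * inverse_rate

# 12 logical qubits at distance 12, under each accounting:
print(surface_code_qubits(12, d=12))   # 3456 physical qubits
print(constant_rate_qubits(12))        # 144 physical qubits
```

The gap widens as distance grows, which is why constant-overhead constructions matter so much when physical qubit counts are limited.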
