Fault-Tolerant Quantum Computing Methods

Summary

Fault-tolerant quantum computing methods are techniques designed to protect quantum information from errors, allowing quantum computers to run reliably even when their hardware is imperfect. These approaches are crucial for scaling quantum systems and unlocking their true computational power.

  • Implement error correction: Use continuous cycles of error detection and correction so that quantum information can survive through extended computations without being lost to environmental noise.
  • Explore topological encoding: Consider encoding quantum data in stable topological structures, such as skyrmions, to shield it from disturbances and preserve entangled states.
  • Utilize resource-efficient codes: Adopt advanced coding strategies like bivariate bicycle codes or distributed quantum architectures to reduce hardware demands and improve computational reliability.
Summarized by AI based on LinkedIn member posts
  • View profile for Michaela Eichinger, PhD

    Product Solutions Physicist @ Quantum Machines | I talk about quantum computing.

    16,208 followers

Looks like we’ve hit another turning point in quantum computing. Quantinuum just demonstrated logical gates built on a fault-tolerant protocol that beat the physical gates they're made from. This includes the hardest one: a non-Clifford two-qubit gate.

If you’ve followed quantum computing for a while, you know the game has long been about scaling. More qubits, better gates, lower error rates. But real fault tolerance? That’s been the elusive frontier. Until now. Quantinuum's new work demonstrates the critical building blocks for a universal, fault-tolerant gate set.

So what does this mean? To unlock the full power of quantum computation, you need to go beyond Clifford gates. Non-Clifford gates (like T or controlled-Hadamard) are essential for quantum advantage, but they’re notoriously hard to implement fault-tolerantly. Why? Because applying a non-Clifford gate directly to a logical qubit can spread a single error into a correlated mess that error correction can't handle. This is a fundamental limitation, not a hardware bug.

So what do we do? Instead of applying dangerous gates directly, we teleport them using special resource states, so-called magic states. Think of it like outsourcing the risky part of the operation to an ancilla that we can verify, discard if faulty, and only then use to apply the gate safely. That’s the idea. But nobody had shown that this could be done fault-tolerantly and with better-than-physical performance.

Quantinuum just released two new papers that change that:
• Shival Dasu et al. prepared ultra-clean |H⟩ magic states using just 8 qubits, then used them to implement a logical non-Clifford CH gate, achieving a fidelity better than the physical gate. That’s the elusive break-even point: logical > physical.
• Lucas Daguerre et al. prepared high-fidelity |T⟩ states directly in the distance-3 Steane code, using a clever code-switching protocol from the Reed-Muller code (where transversal T gates are allowed). The resulting magic state had lower error than any physical component involved.

Why are these landmark results? Because these two results together prove you can:
• Prepare magic states fault-tolerantly
• Use them to implement non-Clifford logic
• And do so with error rates below the physical layer

All on current hardware. No hand-waving. No simulations. Of course not everything is solved: these are still distance-2 or -3 codes, and we haven’t seen a full algorithm run start-to-finish with these techniques. But the last conceptual hurdles are falling. Not on superconducting qubits but on ion traps.

📸 Credits: Daguerre et al. (arXiv:2506.14169)
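
[Editor's note] To make the teleportation trick concrete, here is a minimal NumPy sketch of gate teleportation for a single-qubit T gate. This is a toy model of the general idea, not Quantinuum's logical-level |H⟩/CH or |T⟩ protocols: a magic state T|+⟩ is consumed by a CNOT and a measurement, and a conditional Clifford correction leaves T|ψ⟩ on the surviving qubit, so the non-Clifford part happens entirely in the offline state preparation.

```python
import numpy as np

# One-qubit gates
X = np.array([[0, 1], [1, 0]], dtype=complex)
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])

rng = np.random.default_rng(7)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)        # random data state
psi /= np.linalg.norm(psi)
magic = T @ np.array([1, 1], dtype=complex) / np.sqrt(2)  # |A> = T|+>

# Qubit 0 holds the magic state, qubit 1 holds the data
state = np.kron(magic, psi)

# CNOT with the magic-state qubit as control, data qubit as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
state = CNOT @ state

# Measure the data qubit in the Z basis
p0 = np.linalg.norm(state[[0, 2]]) ** 2      # amplitudes with qubit 1 = |0>
outcome = 0 if rng.random() < p0 else 1
kept = state[[outcome, 2 + outcome]]         # leftover state of qubit 0
kept /= np.linalg.norm(kept)

# Conditional Clifford correction: outcome 1 needs S.X (no T gate anywhere)
if outcome == 1:
    kept = S @ (X @ kept)

# The magic-state qubit now carries T|psi>, up to a global phase
print(abs(np.vdot(T @ psi, kept)))           # ~1.0 for either outcome
```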

  • View profile for Jay Gambetta

    Director of IBM Research and IBM Fellow

    20,557 followers

I am pleased to highlight some recent work from the team that further evolves our understanding of building practical quantum computing architectures with bivariate bicycle codes and that addresses one of the fundamental challenges to real-time decoding. Our Nature paper from 2024 [https://lnkd.in/eS26sKx6] showed that a quantum memory using bivariate bicycle codes requires roughly 10x fewer physical qubits compared to the surface code. An important question to answer was whether this advantage is retained not only while storing information in memory but also during computations. To answer that question, our team designed fault-tolerant logical instruction sets for the codes and developed a strategy to compile circuits to these instructions. Using these tools, they performed end-to-end resource estimates demonstrating that bicycle architectures retain an order of magnitude qubit advantage over surface code architectures when implementing large logical circuits. The pre-print can be found here [https://lnkd.in/e7k7gYs7]

One of the central doubts about the practicality of quantum low-density parity check (qLDPC) codes such as the bivariate bicycle codes has been the difficulty of real-time decoding. The second preprint [https://lnkd.in/eFbWNFeU] we posted this week hopefully puts those doubts to rest. A large challenge in decoding qLDPC codes arises from the perceived need for two-stage decoding solutions such as belief propagation (BP) followed by ordered statistics decoding (OSD). In particular, real-time implementation of OSD appears very challenging, which has spawned efforts to reduce the cost of OSD.

Our team took a different approach. This new result shows that one can eliminate the need for a second-stage decoder altogether through a suitable modification of the BP algorithm. Our modified algorithm, called Relay-BP, enhances the traditional method by incorporating spatially disordered memory terms. This dampens oscillations and breaks symmetries that trap traditional BP algorithms. The result is an algorithm that outperforms the current state-of-the-art approach while simultaneously still being amenable to implementation in an FPGA.

Congratulations to the team for these exciting advancements, which validate our strategy and move us one step closer to realizing a fault-tolerant quantum system.
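
[Editor's note] To make the decoding idea tangible, below is a hedged sketch of a min-sum belief-propagation syndrome decoder with per-variable memory terms. The exact Relay-BP update rules are specified in the preprint; this toy (function name `memory_bp` and all parameters are mine) only illustrates the mechanism it builds on: mixing each variable's previous belief into its new posterior with a spatially disordered weight, which damps the oscillations that trap plain BP.

```python
import numpy as np

def memory_bp(H, syndrome, prior_llr, gamma_range=(0.1, 0.9),
              iters=60, seed=0):
    """Min-sum BP syndrome decoder with disordered per-variable memory.

    A generic sketch of the mechanism, NOT the Relay-BP specification:
    each variable node blends its previous belief into the new posterior
    with a random weight gamma_j, damping oscillations.
    """
    m, n = H.shape
    rng = np.random.default_rng(seed)
    gammas = rng.uniform(*gamma_range, size=n)   # disordered memory weights
    msg_vc = np.tile(prior_llr, (m, 1)) * H      # variable -> check messages
    belief = prior_llr.astype(float)
    guess = np.zeros(n, dtype=int)
    for _ in range(iters):
        # Check -> variable (min-sum): syndrome sets the sign target,
        # magnitude is the minimum of the other incoming magnitudes.
        msg_cv = np.zeros_like(msg_vc)
        for i in range(m):
            idx = np.flatnonzero(H[i])
            in_msgs = msg_vc[i, idx]
            signs = np.where(in_msgs >= 0, 1.0, -1.0)
            total_sign = np.prod(signs) * (-1) ** syndrome[i]
            mags = np.abs(in_msgs)
            for k, j in enumerate(idx):
                msg_cv[i, j] = total_sign * signs[k] * np.delete(mags, k).min()
        # Variable update with memory: blend old belief into new posterior
        posterior = prior_llr + msg_cv.sum(axis=0)
        belief = gammas * belief + (1 - gammas) * posterior
        for j in range(n):
            for i in np.flatnonzero(H[:, j]):
                msg_vc[i, j] = belief[j] - msg_cv[i, j]   # extrinsic message
        guess = (belief < 0).astype(int)
        if np.array_equal((H @ guess) % 2, syndrome):
            return guess, True               # syndrome explained, stop early
    return guess, False
```

A quick check on a 3-bit repetition code (a tree, where BP should succeed):

```python
H = np.array([[1, 1, 0],
              [0, 1, 1]])                 # repetition-code parity checks
p = 0.1
prior = np.full(3, np.log((1 - p) / p))   # LLRs favouring 'no error'
correction, converged = memory_bp(H, np.array([1, 0]), prior)
print(correction, converged)              # should recover [1 0 0], True
```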

  • View profile for Laurent Prost

Product Manager at Alice & Bob

    5,883 followers

Today, I'd like to highlight an often overlooked, yet essential aspect of the Google Willow experiment: the repeatability of error correction cycles.

What is this exactly? Well, because errors happen constantly during a computation, error correction must run constantly too. Some even say that the art of fault-tolerant quantum computing is to squeeze useful operations between quantum error correction cycles. An error correction cycle is a series of operations aimed at detecting and correcting errors:
- You perform a bunch of gates (usually CNOTs) between the qubits carrying your data and a set of "auxiliary" qubits
- You measure your auxiliary qubits
- You use the result to guess which data qubits (if any) experienced an error

Interestingly, an error correction cycle doesn't always include actually correcting errors when they are detected. Knowing what errors happened may be enough, and you may just post-process the results of your circuit using the output of error correction cycles.

What happens when repeating error correction cycles is well illustrated in the graph below, taken from Google's latest paper. There's a lot to unpack here, so let's explain. Each point in this graph corresponds to 10^5 repetitions of an experiment where:
- A logical state is prepared
- t error correction cycles are applied and their results are stored
- The logical state is measured
- Using all the measurements above, the authors try to guess what the initial logical state was

Usually, half of the experiments are done in the Z basis (prepare |0>, measure Z) and half in the X basis (prepare |+>, measure X). When the state is not successfully retrieved, this counts as an error. The error rate then determines how high the point is.

As can be expected, the more error correction cycles you apply, the less likely you are to retrieve the initial state at the end of the experiment. Each cycle misses some of the errors, and this effect only compounds. You can however see that using a long distance for their error correction code (dark blue curve), Google is able to retrieve the initial state with a higher probability than by simply letting their best physical qubit decohere (green curve).

To the best of my knowledge, no experiments run on neutral atoms or trapped ions have been able to repeat error correction cycles like Google just did here. They did show one, maybe a handful of repetitions, but not hundreds like in the graph below. As I understand it, this is because on these platforms, it is hard to immediately reuse an auxiliary qubit after it has been measured. This is a challenge they will have to overcome if they wish to run the deep circuits for which we need fault-tolerant quantum computers. Of course, superconducting circuits will also have challenges of their own. But repeating error correction cycles doesn't seem to be one, and I hope you now understand how Google brilliantly showed it.
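
[Editor's note] The cycle structure described above is easy to play with in a toy simulation. The sketch below is an illustration with assumed parameters, not Google's surface-code experiment: a classical bit-flip repetition-code memory where each cycle injects errors, reads the neighbouring parities (standing in for the auxiliary-qubit measurements), and corrects from the syndrome alone. The retrieval probability decays with the number of cycles, which is the qualitative shape of the plot the post describes.

```python
import numpy as np

def repetition_memory(n=5, cycles=10, p=0.02, shots=20_000, seed=1):
    """Toy bit-flip repetition-code memory with perfect parity reads."""
    rng = np.random.default_rng(seed)
    fails = 0
    for _ in range(shots):
        data = np.zeros(n, dtype=int)                  # logical |0>
        for _ in range(cycles):
            data ^= (rng.random(n) < p).astype(int)    # X errors this cycle
            syndrome = data[:-1] ^ data[1:]            # auxiliary-qubit reads
            # Two error patterns fit the syndrome; undo the lighter one.
            cand = np.concatenate(([0], np.cumsum(syndrome) % 2))
            data ^= cand if cand.sum() <= n // 2 else cand ^ 1
        fails += int(data[0] == 1)         # logical flip survived to readout
    return 1.0 - fails / shots

# Retrieval probability decays as cycles accumulate
for t in (1, 5, 10, 20):
    print(f"{t:>2} cycles: retrieval probability {repetition_memory(cycles=t):.4f}")
```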

  • View profile for Marin Ivezic

    CEO Applied Quantum | PostQuantum.com | SANS Instructor | Former CISO, Big 4 Partner, Quantum Entrepreneur

    34,165 followers

Over the last few weeks, the quantum timeline has been crowded with big headlines: Quantinuum’s Helios launch, IBM’s new Loon and Nighthawk processors, and a steady stream of new “record” benchmarks and roadmaps. In that flood of news, one announcement from Harvard flew a bit under the radar - and it arguably deserves as much attention as any of them.

Lukin’s group has just published a Nature paper describing “a fault-tolerant neutral-atom architecture for universal quantum computation”. In practice, they show a reconfigurable neutral-atom processor that brings together the key building blocks of scalable fault tolerance: below-threshold surface-code style error correction, transversal logical gates, teleportation-based logical rotations, mid-circuit qubit reuse, and deep logical circuits that are explicitly engineered to keep entropy under control.

I’ve broken down what they achieved, how it compares to other platforms, and why I think this is a genuine inflection point for neutral atoms and for fault tolerance more broadly: https://lnkd.in/gqnYdPXQ

This also ties directly into something I’ve been arguing for years in my Path to CRQC / Q-Day methodology: operating below the error-correction threshold is not a nice-to-have, it’s a capability in its own right – the tipping point where adding more qubits and more code distance finally starts to reduce logical error, instead of making things worse.

Motivated by the Harvard result, I’ve also published a companion piece that walks through some of the most important below-threshold QEC experiments across platforms – bosonic cat codes, trapped-ion Bacon–Shor and color codes, superconducting surface codes, and now neutral atoms: https://lnkd.in/gvJDNhgm

If you’re trying to separate marketing noise from genuine progress toward fault-tolerant, cryptographically relevant quantum computers, these are the kinds of milestones worth tracking. My analysis of the Harvard announcement is here: https://lnkd.in/gqnYdPXQ #Quantum #QuantumComputing #QEC
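
[Editor's note] The "tipping point" the post describes can be illustrated with the standard heuristic surface-code scaling law. The constants A and p_th below are purely illustrative assumptions, not numbers from the Harvard paper; the point is only the sign of the trend on either side of threshold.

```python
# Heuristic below-threshold scaling: p_L ~ A * (p / p_th) ** ((d + 1) / 2),
# with assumed prefactor A and threshold p_th (illustration only).
A, p_th = 0.1, 0.01

def logical_error(p, d):
    return A * (p / p_th) ** ((d + 1) // 2)

for p in (0.005, 0.02):                     # below vs above threshold
    trend = ", ".join(f"d={d}: {logical_error(p, d):.1e}" for d in (3, 5, 7))
    print(f"p={p}: {trend}")
# Below threshold, raising the code distance d suppresses p_L
# exponentially; above it, the same formula grows, i.e. adding more
# qubits makes things worse.
```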

  • View profile for Matthias Christandl

    Professor and Center Leader in Quantum Computing - "Quantum Hardware needs Quantum Software"

    3,716 followers

We expect the gates in a quantum computer to remain noisy for some time to come, and the number of physical qubits to remain limited. When simulating logical qubits by physical qubits, we therefore need to be prudent and use efficient constructions. The holy grail of only a constant qubit overhead has recently been achieved, following a proposal by Gottesman and the celebrated construction of constant-rate quantum LDPC codes.

Fault-tolerance arguments are generally quite intricate, and in this case the framework had to leave out the important coherent noise (e.g. arising from imperfect calibration) as well as amplitude damping noise (present in most experimental platforms). In joint work with Ashutosh Goswami and Omar Fawzi, reported in PRX Quantum, we showed that fault-tolerant quantum computation with constant overhead can also be achieved for a general model of noise (by Kitaev) that includes both coherent and amplitude damping noise (link in the comments).

I think this is a nice example of how quantum software research can lower the demands on quantum hardware and thus take yet another (small but important) step towards realising quantum computation. The graphic illustrates how gates in a GHZ state preparation are replaced by noisy ones that are close in diamond norm.

Quantum For Life Center Novo Nordisk Foundation Centre for the Mathematics of Quantum Theory (QMATH) European Research Council (ERC) Villum Fonden Morten Bache Thomas Bjørnholm
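
[Editor's note] For readers unfamiliar with the two noise families named above, here is a minimal NumPy sketch of them as one-qubit channels. This is a textbook illustration, not the paper's Kitaev-style noise model; the calibration error eps and damping probability g are assumed values.

```python
import numpy as np

# Coherent noise: a small unitary over-rotation, exp(-i*eps*X/2),
# as would arise from a miscalibrated X gate (eps is assumed).
eps = 0.05
U_err = np.array([[np.cos(eps / 2), -1j * np.sin(eps / 2)],
                  [-1j * np.sin(eps / 2), np.cos(eps / 2)]])

# Amplitude damping with decay probability g: Kraus pair K0, K1.
g = 0.02
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
K1 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

def apply_channel(rho, kraus):
    """rho -> sum_k K rho K^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

rho1 = np.array([[0, 0], [0, 1]], dtype=complex)   # |1><1|
print(apply_channel(rho1, [U_err]))    # coherent: still a pure state
print(apply_channel(rho1, [K0, K1]))   # damping: population leaks to |0>
```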

  • View profile for Pablo Conte

    Merging Data with Intuition 📊 🎯 | AI & Quantum Engineer | Qiskit Advocate | PhD Candidate

    32,520 followers

⚛️ A Review of Variational Quantum Algorithms: Insights into Fault-Tolerant Quantum Computing 📜

Variational quantum algorithms (VQAs) have established themselves as a central computational paradigm in the Noisy Intermediate-Scale Quantum (NISQ) era. By coupling parameterized quantum circuits (PQCs) with classical optimization, they operate effectively under strict hardware limitations. However, as quantum architectures transition toward early fault-tolerant (EFT) and ultimate fault-tolerant (FT) regimes, the foundational principles and long-term viability of VQAs require systematic reassessment.

This review offers an insightful analysis of VQAs and their progression toward the fault-tolerant regime. We deconstruct the core algorithmic framework by examining ansatz design and classical optimization strategies, including cost function formulation, gradient computation, and optimizer selection. Concurrently, we evaluate critical training bottlenecks, notably barren plateaus (BPs), alongside established mitigation strategies.

The discussion then explores the EFT phase, detailing how the integration of quantum error mitigation and partial error correction can sustain algorithmic performance. Addressing the FT phase, we analyze the inherent challenges confronting current hybrid VQA models. Furthermore, we synthesize recent VQA applications across diverse domains, including many-body physics, quantum chemistry, machine learning, and mathematical optimization.

Ultimately, this review outlines a theoretical roadmap for adapting quantum algorithms to future hardware generations, elucidating how variational principles can be systematically refined to maintain their relevance and efficiency within an error-corrected computational environment.

ℹ️ Zhirao Wang et al - 2026
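
[Editor's note] The PQC-plus-classical-optimizer loop at the heart of a VQA fits in a few lines. Below is a minimal one-qubit sketch in plain NumPy (a toy of mine, not from the review): the circuit is RY(theta)|0>, the cost is the expectation value of Z, and the gradient comes from the parameter-shift rule, which is exact for rotation gates.

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def cost(theta):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # RY(theta)|0>
    return psi @ Z @ psi                                    # expectation <Z>

def grad(theta, shift=np.pi / 2):
    # Parameter-shift rule: exact gradient from two cost evaluations
    return 0.5 * (cost(theta + shift) - cost(theta - shift))

theta, lr = 0.1, 0.4
for _ in range(60):
    theta -= lr * grad(theta)            # classical optimizer step
print(theta / np.pi, cost(theta))        # -> 1.0 and -1.0: the Z ground state
```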

  • View profile for Jennifer Strabley

    Accelerating Quantum Computing

    3,128 followers

Five days to tell you about five things Quantinuum announced last week. Quantinuum announced so many great things last week, I'm using each day of this week to re-cap.

Day 3: Helios Performance

By now you've heard that Helios is the "most accurate", "most capable", and "most powerful" quantum computer... and here's why. Helios has:
- 98 fully connected qubits. So-called "all-to-all" connectivity continues to prove its power for performing increasingly complex circuits with fewer resources.
- 99.92% two-qubit gate fidelity across all qubit pairs (i.e. we're not just measuring the best 2 or the median... all pairs have this performance!!).
- an NVIDIA GPU for fast, flexible real-time decoding for error correction - a first-of-a-kind, real-time engine for efficiently doing the operations needed for fault-tolerant operations.
- a new programming language, Guppy, which has a Python front-end but high-performance under-the-hood code, allowing developers to program quantum computers like they do classical computers and seamlessly combine hybrid compute capabilities — quantum and classical — in a single program.

We demonstrated the ability to:
- Generate 94 logical qubits with our very efficient Iceberg Error Detection code (https://lnkd.in/gsvFVFja) and globally entangle them at better than break-even performance.
- Generate 50 logical qubits with a very similar error detection code and use these logical qubits to do a quantum magnetism simulation with 2,500 logical gates at better than break-even performance.
- Generate 48 logical qubits with an error correction code, achieving a remarkable 2:1 scaling (using only 2 physical qubits to make 1 error-corrected qubit).

Read more about these great achievements in our technical paper https://lnkd.in/g9bid_2S and technical blog https://lnkd.in/gZaN65CY.
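
[Editor's note] Error *detection* codes like Iceberg flag errors and discard the shot rather than correcting it. Below is a Pauli-frame toy of that idea for a distance-2 code with X...X and Z...Z checks, in the spirit of the [[k+2, k, 2]] Iceberg family; this is my illustration with assumed error rates, not Quantinuum's circuits.

```python
import numpy as np

def detection_run(n=4, p=0.01, shots=200_000, seed=3):
    """Distance-2 detection with post-selection (classical toy model)."""
    rng = np.random.default_rng(seed)
    kept = bad = 0
    for _ in range(shots):
        x_err = rng.random(n) < p            # X part of the error
        z_err = rng.random(n) < p            # Z part of the error
        # The Z...Z check flags odd-weight X errors, X...X flags odd-weight Z
        if (x_err.sum() % 2) or (z_err.sum() % 2):
            continue                          # detected: discard the shot
        kept += 1
        # Even-weight residual errors slip through; conservatively count
        # them all as faults (some are actually harmless stabilizers).
        bad += int(x_err.any() or z_err.any())
    print(f"kept {kept / shots:.3f} of shots; "
          f"residual error {bad / kept:.1e} vs physical {p:.0e}")

detection_run()
```

The trade is visible immediately: a modest fraction of shots is discarded, and the surviving shots have an error rate roughly an order of magnitude below the physical rate.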

  • View profile for Joel Pendleton

    CTO at Conductor Quantum

    5,348 followers

A quantum computer that learns from its own errors while it's computing. That's the framing in a recent paper from Google Quantum AI and Google DeepMind on reinforcement learning control of quantum error correction.

Large quantum processors drift. The standard fix is to halt the computation and recalibrate, which won't scale to algorithms expected to run for days or weeks. The authors ask whether QEC can calibrate itself from the data it already produces.

The idea: repurpose error detection events as a training signal for a reinforcement learning agent that continuously tunes the physical control parameters (pulse amplitudes, detunings, DRAG coefficients, CZ parameters, and so on). Rather than optimizing logical error rate directly, which is expensive and global, the agent minimizes average detector-event rate, a cheap local proxy whose gradient is approximately aligned with the gradient of LER in the small-perturbation regime.

The results on a Willow superconducting processor:
- On distance-5 surface and color codes, RL fine-tuning after conventional calibration and expert tuning yields about 20% additional LER suppression
- Against injected drift, RL steering improves logical stability 2.4x, rising to 3.5x when decoder parameters are also steered
- New record logical error per cycle: 7.72(9)×10⁻⁴ for a distance-7 surface code (with the AlphaQubit2 decoder) and 8.19(14)×10⁻³ for a distance-5 color code (with Tesseract)
- In simulation, the framework scales to a distance-15 surface code with roughly 40,000 control parameters, with a convergence rate that is independent of system size

The broader takeaway: calibration and computation may not need to be separate phases. If detector statistics can carry enough information to steer a large control stack online, fault tolerance becomes less about pausing to retune and more about a processor that keeps learning while it computes.

Worth noting that the current experiments rely on short repeated memory circuits, so real-time steering during a single long logical algorithm (where exploration noise would affect the computation directly) remains future work.

Paper: https://lnkd.in/gVQXnpzZ
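
[Editor's note] The core control idea, steering many parameters from a cheap noisy scalar, can be sketched without any RL machinery. The paper trains a reinforcement-learning agent; below, plain SPSA stands in for it, and the quadratic detector-rate model and every constant are assumptions for the toy. What it shows is only that a shot-noisy proxy measured online can pull a whole parameter stack back toward its optimum.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
optimum = rng.normal(size=n)                  # unknown drifted optimum

def detector_event_rate(theta, shots=5_000):
    # Synthetic proxy: rate grows with miscalibration, observed with
    # binomial shot noise as real syndrome statistics would be.
    rate = 0.005 + 0.05 * np.mean((theta - optimum) ** 2)
    return rng.binomial(shots, min(rate, 1.0)) / shots

theta = optimum + rng.normal(size=n)          # start miscalibrated
for k in range(400):
    a_k = 40.0 / (k + 10) ** 0.602            # standard SPSA gain decay
    c_k = 0.2 / (k + 1) ** 0.101
    delta = rng.choice([-1.0, 1.0], size=n)   # simultaneous perturbation
    g = (detector_event_rate(theta + c_k * delta)
         - detector_event_rate(theta - c_k * delta)) / (2 * c_k) * delta
    theta -= a_k * g
    if k % 100 == 0:                          # event rate trends downward
        print(k, detector_event_rate(theta, shots=50_000))
```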

  • View profile for Dimitrios A. Karras

    Assoc. Professor at National & Kapodistrian University of Athens (NKUA), School of Science, General Dept, Evripos Complex, adjunct prof. at EPOKA univ. Computer Engr. Dept., adjunct lecturer at GLA & Marwadi univ, India

    28,784 followers

By driving a quantum processor with laser pulses arranged according to the Fibonacci sequence, physicists observed the emergence of an entirely new phase of matter—one that displays extraordinary stability in a domain where fragility is the norm.

Quantum computers operate using qubits, which differ radically from classical bits. A qubit can exist in superposition, occupying multiple states at once, and can become entangled with others across space. These properties enable immense computational power, but they come with a cost: quantum states are notoriously short-lived. Environmental noise, microscopic imperfections, and edge effects rapidly degrade coherence, limiting how long quantum information can survive.

Seeking a new way to protect fragile quantum states, scientists at the Flatiron Institute applied laser pulses not at regular intervals but in a rhythm governed by the Fibonacci sequence—an ordered but non-repeating pattern long known to appear in biological growth, crystal structures, and wave interference. The experiment was carried out on a chain of ten trapped-ion qubits, driven by precisely timed laser pulses.

The result was the formation of what is described as a time quasicrystal. Unlike ordinary crystals, which repeat periodically in space, a time quasicrystal exhibits structure in time without repeating in a simple cycle. The Fibonacci-based driving created a temporal order that resisted disruption, allowing the quantum system to remain coherent far longer than expected.

The improvement was significant. Under standard conditions, the quantum state persisted for roughly 1.5 seconds. When driven by the Fibonacci pulse sequence, coherence times stretched to approximately 5.5 seconds—more than a threefold increase.

Even more intriguing was the system’s temporal behavior. Measurements indicated that the quantum dynamics unfolded as if time itself possessed two independent structural directions. This does not imply time flowing backward, but rather that the system’s evolution followed two intertwined temporal pathways—an emergent property arising purely from the Fibonacci drive.

The researchers propose that the non-repeating structure of the Fibonacci sequence suppresses errors that typically accumulate at the boundaries of quantum systems. By distributing disturbances in a highly ordered yet aperiodic way, the sequence stabilizes the collective behavior of the qubits. In effect, a mathematical pattern found throughout nature acts as a self-organizing error-management protocol.

The findings suggest a powerful new strategy for quantum control. Rather than fighting noise solely with complex correction algorithms, future quantum technologies may harness structured patterns—drawn from mathematics and natural order—to achieve resilience at a fundamental level.

https://lnkd.in/dVxp7R8J https://lnkd.in/dDVNRsPk
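
[Editor's note] The aperiodic schedule itself is easy to generate: it is the Fibonacci word, built by the substitution A -> AB, B -> A. Mapping A and B to the experiment's two drive pulses is an illustrative assumption here, not a detail taken from the paper.

```python
def fibonacci_word(generations):
    """Fibonacci word via the substitution A -> AB, B -> A."""
    word = "A"
    for _ in range(generations):
        word = "".join("AB" if ch == "A" else "A" for ch in word)
    return word

seq = fibonacci_word(7)
print(seq)         # ABAABABAABAAB... ordered but never exactly repeating
print(len(seq))    # successive lengths are Fibonacci numbers: 34 here
```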

  • View profile for Hrant Gharibyan, PhD

    CEO @ BlueQubit | PhD Stanford

    14,194 followers

Quantum Error Correction: Major Breakthroughs in the Past Year 🚀

The past year has been remarkable for quantum computing, with groundbreaking progress in quantum error correction (QEC) bringing us closer to realizing fault-tolerant quantum computers. Across various architectures, the advancements have been truly inspiring:

🔹 Neutral-Atom Systems: QuEra Computing Inc. & Harvard University (https://lnkd.in/dPxA2NuH), as well as Atom Computing & Microsoft (https://lnkd.in/dV7s3Gd2), demonstrated scalable logical quantum computations and reliable qubit operations using reconfigurable neutral-atom arrays with up to 256 atoms.

🔹 Superconducting Qubits: IBM Quantum (https://lnkd.in/dzaJH6vA) and Google's Quantum AI (https://lnkd.in/dR-CTUGm) reached a major milestone with surface code quantum memory, operating below the error-correction threshold on a 100+ qubit superconducting processor.

🔹 Trapped-Ion Systems: Quantinuum & Microsoft (https://lnkd.in/d5fPzcVU) set a new standard for reliability in logical qubits with Quantinuum’s 56-qubit H2 system, advancing the precision and scalability of trapped-ion quantum processors.

🔹 Cat Qubits: Amazon Web Services (AWS) & Caltech (https://lnkd.in/d3HRd86s) developed hardware-efficient QEC using concatenated bosonic qubits, reducing the physical qubit overhead and advancing the field of fault-tolerant quantum computation.

❓ Why it matters: These achievements represent more than technological milestones—they signify a paradigm shift. The timelines for realizing fault-tolerant quantum computers are accelerating, underscoring the rapid progress across quantum architectures.

#QuantumComputing #QuantumInnovation #QuantumErrorCorrection #FutureOfComputing
