Quantum Error Correction for Data Security


  • View profile for Michaela Eichinger, PhD

    Product Solutions Physicist @ Quantum Machines | I talk about quantum computing.

    16,215 followers

    Many talk about surface codes. But what if they're not the future? Quantum low-density parity-check (qLDPC) codes are gaining traction fast. IBM is building fault-tolerant memories using Bivariate Bicycle (BB) codes. IQM Quantum Computers is designing hardware with qLDPC in mind. And now, a new experiment from China shows the first working qLDPC code on a superconducting quantum processor.

    On the 32-qubit Kunlun chip, researchers implemented:
    • A [[18, 4, 4]] BB code
    • A [[18, 6, 3]] qLDPC code

    The notation [[n, k, d]] describes a quantum error correction code that uses n physical qubits to encode k logical qubits, with d being the code distance. qLDPC codes keep each error check (called a stabilizer) connected to only a small number of qubits (just 6 in this case) even as the code scales, while encoding far more logical qubits per physical qubit than surface codes. That means fewer ancillas, fewer gates, and potentially lower overhead for fault tolerance.

    The hardware was purpose-built for this experiment:
    • 32 frequency-tunable transmon qubits
    • 84 tunable couplers, enabling non-local interactions up to 6.5 mm apart
    • Air bridges to support a crossbar-style layout
    • Stabilizer checks executed in just 7 CZ layers

    Gate fidelities were solid:
    • Single-qubit: 99.95%
    • Two-qubit: 99.22%

    The decoding was performed offline using belief propagation with ordered statistics decoding (BP-OSD), an approach better suited to LDPC-style codes. Logical error rates were:
    • BB: 8.91 ± 0.17%
    • qLDPC: 7.77 ± 0.12%

    Both are still above the physical qubit error rate, but simulations show that a 2× improvement in fidelity would be enough to push these codes below threshold.

    qLDPC codes are no longer just a concept. They're being implemented, measured, and decoded on superconducting hardware.

    📸 Image Credits: Ke Wang, Zhide Lu, Chuanyu Zhang et al. (2025, arXiv)
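The [[n, k, d]] notation and the "low-density" property can be made concrete with a toy example. The sketch below uses the small [[4, 2, 2]] CSS code as a stand-in (not the paper's [[18, 4, 4]] BB code, whose check matrices aren't given here): for a CSS code, k equals n minus the GF(2) ranks of the X- and Z-check matrices, and low density means every check row touches only a few qubits.

```python
# Sketch: reading off [[n, k, d]] parameters and check weight for a small
# CSS code. Toy example only; the real BB code matrices are not shown here.

def gf2_rank(rows):
    """Rank of a binary matrix (list of lists) over GF(2)."""
    rows = [r[:] for r in rows]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

# [[4, 2, 2]] code: one X-type and one Z-type stabilizer, each weight 4.
Hx = [[1, 1, 1, 1]]
Hz = [[1, 1, 1, 1]]
n = 4
k = n - gf2_rank(Hx) - gf2_rank(Hz)          # logical qubits encoded
max_check_weight = max(sum(row) for row in Hx + Hz)  # qubits per stabilizer
print(n, k, max_check_weight)  # 4 physical qubits encode 2 logical qubits
```

For the codes in the experiment the same bookkeeping gives 18 physical qubits, 4 or 6 logical qubits, and stabilizer weight at most 6.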

  • View profile for Sam Stanwyck

    Director, Quantum Product

    6,777 followers

    I'm really happy with the rapid development of CUDA-Q QEC, our toolkit for quantum error correction. QEC is an incredibly rich and fast-moving field, and in CUDA-Q QEC we aim to provide a platform with a diverse set of accelerated decoders, AI infrastructure, and tools that enable researchers to develop and test their own codes, decoders, and architectures, hopefully even better than our own!

    As we dig deeper into the problem of scalable QEC, the benefits of GPUs and AI have become much clearer. We started with research tools for simulation and offline decoding, which remains an important capability. Now with the 0.5.0 release we also provide the infrastructure for real-time decoding, where syndrome processing occurs concurrently with quantum operations.

    This release also introduces GPU-accelerated algorithmic decoders like RelayBP, a promising approach developed in the past year that aims to overcome the convergence limitations of traditional belief propagation. For scenarios demanding maximum throughput, we have integrated a TensorRT-based inference engine that allows researchers to deploy custom AI decoders, trained in frameworks like PyTorch and exported to ONNX, directly into the quantum control loop. To address the complexities of continuous system operation, we added sliding window decoders that handle circuit-level noise across multiple rounds without assuming temporal periodicity.

    These tools are designed to be hardware-agnostic and scalable, supporting our partners across the ecosystem who are building the first generation of reliable logical qubits. Check out the full technical breakdown in our latest developer blog by Kevin Mato, Scott Thornton, Ph.D., Melody Ren, Ben Howe, and Tom L. https://lnkd.in/gvC__zRd
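The sliding-window idea mentioned above can be illustrated with a minimal sketch. This is not the CUDA-Q QEC API, just a hedged pure-Python outline of the general pattern: decode a window of syndrome rounds jointly, commit corrections only for the oldest rounds, then slide forward so later rounds get re-decoded with more context. The inner decoder here is a trivial stand-in.

```python
from collections import deque

def sliding_window_decode(syndrome_rounds, inner_decoder, window=3, commit=1):
    """Stream per-round syndromes through a sliding window.

    Each call to `inner_decoder` sees `window` consecutive rounds and
    returns one correction per round; only the corrections for the
    oldest `commit` rounds are committed before the window advances.
    """
    buf, committed = deque(), []
    for rnd in syndrome_rounds:
        buf.append(rnd)
        if len(buf) == window:
            corrections = inner_decoder(list(buf))
            committed.extend(corrections[:commit])
            for _ in range(commit):
                buf.popleft()
    if buf:  # flush the remaining tail once the stream ends
        committed.extend(inner_decoder(list(buf)))
    return committed

# Toy inner decoder: "correct" whichever bits are set in each round.
toy = lambda rounds: [[i for i, bit in enumerate(r) if bit] for r in rounds]
stream = [[0, 1], [1, 0], [0, 0], [1, 1], [0, 1]]
print(sliding_window_decode(stream, toy))  # one correction list per round
```

A real inner decoder would operate on a circuit-level detector graph spanning the window; the windowing logic around it stays the same.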

  • View profile for Keith King

    Former White House Lead Communications Engineer, U.S. Dept of State, and Joint Chiefs of Staff in the Pentagon. Veteran U.S. Navy, Top Secret/SCI Security Clearance. Over 16,000+ direct connections & 44,000+ followers.

    43,845 followers

    Nord Quantique Unveils Compact, Energy-Efficient Quantum Error Correction Breakthrough

    Introduction: Québec-based startup Nord Quantique has announced a major leap in quantum error correction, one that could dramatically reduce the size and energy needs of quantum data centers. By adopting a novel "multimode" encoding method, the company says it can overcome a longstanding challenge in quantum computing: maintaining qubit fidelity without exponentially scaling hardware.

    Key Points:
    • A New Approach to Error Correction:
      • Traditional quantum error correction relies on many redundant physical qubits to protect the information in one logical qubit.
      • Nord Quantique's multimode encoding stores quantum information across multiple resonance frequencies within a single aluminum cavity.
      • This allows a single physical element to represent more than one quantum state, increasing redundancy without needing more hardware.
    • Efficiency Gains in Space and Power:
      • Because the method doesn't require added physical qubits, quantum systems stay compact even as they scale.
      • Nord Quantique claims a dramatic reduction in power usage: 120 kilowatts (kW) for one hour to solve a difficult encryption task (RSA-830).
      • For comparison: a photonic quantum computer would require 1,400 kW over 10 hours, and a classical computer would reportedly need 1,300 kW and much longer.
    • Implications for Data Centers:
      • Today's error correction methods make large-scale quantum computing impractical for commercial deployment due to their hardware and power demands.
      • Nord Quantique's innovation could lead to more scalable, energy-efficient quantum data centers, paving the way for broader use in cryptography, chemistry, optimization, and AI.

    Why This Matters: Quantum error correction is one of the biggest barriers to practical, fault-tolerant quantum computing. By sidestepping the need for massive physical redundancy, Nord Quantique's multimode approach could radically simplify system architectures. This is a vital step toward making quantum data centers viable, economically and physically, opening new doors in secure computation, next-generation materials science, and machine learning.

    Conclusion: Nord Quantique's compact, low-power solution to error correction isn't just a technical refinement; it's a foundational breakthrough. If validated, it could mark the transition from quantum theory to real-world, large-scale deployment. The future of quantum computing may well be smaller, faster, and greener.

    Keith King
    https://lnkd.in/gHPvUttw
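The quoted power figures imply an even larger gap in total energy, since energy is power multiplied by time. A quick back-of-the-envelope check using only the numbers above (the classical system's runtime isn't given, so it is left out):

```python
# Energy (kWh) = power (kW) * time (h), using only the figures quoted above.
nord_quantique_kwh = 120 * 1      # 120 kW for 1 hour
photonic_kwh = 1400 * 10          # 1,400 kW for 10 hours
print(nord_quantique_kwh, photonic_kwh, photonic_kwh / nord_quantique_kwh)
# 120 kWh vs 14,000 kWh: roughly a 117x difference in total energy
```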

  • View profile for Joel Pendleton

    CTO at Conductor Quantum

    5,353 followers

    A quantum computer that learns from its own errors while it's computing. That's the framing in a recent paper from Google Quantum AI and Google DeepMind on reinforcement learning control of quantum error correction.

    Large quantum processors drift. The standard fix is to halt the computation and recalibrate, which won't scale to algorithms expected to run for days or weeks. The authors ask whether QEC can calibrate itself from the data it already produces.

    The idea: repurpose error detection events as a training signal for a reinforcement learning agent that continuously tunes the physical control parameters (pulse amplitudes, detunings, DRAG coefficients, CZ parameters, and so on). Rather than optimizing the logical error rate (LER) directly, which is expensive and global, the agent minimizes the average detector-event rate, a cheap local proxy whose gradient is approximately aligned with the gradient of the LER in the small-perturbation regime.

    The results on a Willow superconducting processor:
    - On distance-5 surface and color codes, RL fine-tuning after conventional calibration and expert tuning yields about 20% additional LER suppression
    - Against injected drift, RL steering improves logical stability 2.4x, rising to 3.5x when decoder parameters are also steered
    - New record logical error per cycle: 7.72(9)×10⁻⁴ for a distance-7 surface code (with the AlphaQubit2 decoder) and 8.19(14)×10⁻³ for a distance-5 color code (with Tesseract)
    - In simulation, the framework scales to a distance-15 surface code with roughly 40,000 control parameters, with a convergence rate that is independent of system size

    The broader takeaway: calibration and computation may not need to be separate phases. If detector statistics can carry enough information to steer a large control stack online, fault tolerance becomes less about pausing to retune and more about a processor that keeps learning while it computes.

    Worth noting that the current experiments rely on short repeated memory circuits, so real-time steering during a single long logical algorithm (where exploration noise would affect the computation directly) remains future work.

    Paper: https://lnkd.in/gVQXnpzZ
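The proxy-objective idea, nudging many control parameters to reduce a cheap, noisy detector-event rate rather than the logical error rate itself, can be sketched with simultaneous-perturbation stochastic approximation (SPSA), a classic method for optimizing against noisy measurements. To be clear, this is not the paper's RL agent: the quadratic "detector-event rate" and the five control knobs below are invented purely for illustration.

```python
import random

random.seed(0)

def detector_event_rate(params):
    """Stand-in proxy: a noisy bowl with its minimum at the ideal settings.
    In the paper's setting this would be a measured detection-event rate;
    here it is an invented function for illustration only."""
    return sum(p * p for p in params) + random.uniform(-0.01, 0.01)

def spsa_step(params, lr=0.1, c=0.1):
    """One simultaneous-perturbation step: probe all parameters at once
    with a random +/-1 pattern and descend the estimated gradient."""
    delta = [random.choice((-1.0, 1.0)) for _ in params]
    plus = detector_event_rate([p + c * d for p, d in zip(params, delta)])
    minus = detector_event_rate([p - c * d for p, d in zip(params, delta)])
    ghat = (plus - minus) / (2 * c)
    return [p - lr * ghat * d for p, d in zip(params, delta)]

params = [1.0] * 5        # 5 hypothetical control knobs, all miscalibrated
start = detector_event_rate(params)
for _ in range(200):      # steer using only noisy proxy measurements
    params = spsa_step(params)
print(start, detector_event_rate(params))  # proxy rate drops as knobs are steered
```

The appeal, as in the paper, is that each step costs only two noisy evaluations regardless of how many parameters are being tuned, which is why this family of methods scales to tens of thousands of control knobs.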
