I am pleased to highlight some recent work from the team that further evolves our understanding of building practical quantum computing architectures with bivariate bicycle codes and that addresses one of the fundamental challenges of real-time decoding.

Our 2024 Nature paper [https://lnkd.in/eS26sKx6] showed that a quantum memory using bivariate bicycle codes requires roughly 10x fewer physical qubits than the surface code. An important open question was whether this advantage is retained not only while storing information in memory but also during computation. To answer it, our team designed fault-tolerant logical instruction sets for these codes and developed a strategy for compiling circuits to those instructions. Using these tools, they performed end-to-end resource estimates demonstrating that bicycle architectures retain an order-of-magnitude qubit advantage over surface code architectures when implementing large logical circuits. The preprint can be found here: [https://lnkd.in/e7k7gYs7].

One of the central doubts about the practicality of quantum low-density parity-check (qLDPC) codes such as the bivariate bicycle codes has been the difficulty of real-time decoding. The second preprint we posted this week [https://lnkd.in/eFbWNFeU] hopefully puts those doubts to rest. A large part of the challenge in decoding qLDPC codes arises from the perceived need for two-stage decoding solutions such as belief propagation (BP) followed by ordered statistics decoding (OSD). Real-time implementation of OSD in particular appears very challenging, which has spawned efforts to reduce its cost. Our team took a different approach: this new result shows that a suitable modification of the BP algorithm eliminates the need for a second-stage decoder altogether. The modified algorithm, called Relay-BP, enhances the traditional method by incorporating spatially disordered memory terms. These dampen oscillations and break the symmetries that trap traditional BP algorithms. The result is an algorithm that outperforms the current state-of-the-art approach while remaining amenable to implementation on an FPGA.

Congratulations to the team on these exciting advancements, which validate our strategy and move us one step closer to realizing a fault-tolerant quantum system.
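For intuition, the memory mechanism described above can be sketched in a few lines. This is my own toy illustration of the idea, not the team's implementation: the per-node memory strengths `gamma`, the node count, and the oscillating input are all illustrative assumptions.

```python
import numpy as np

# Toy sketch of a disordered-memory BP update (my illustration, not the
# Relay-BP code): each variable node keeps its previous belief, and the new
# belief is a convex mix weighted by a node-specific ("spatially disordered")
# memory strength. Disordered weights damp the oscillations that trap plain BP.
rng = np.random.default_rng(0)
n_nodes = 8
gamma = rng.uniform(0.1, 0.9, n_nodes)   # disordered per-node memory strengths

def relay_update(prev_llr, bp_llr, gamma):
    """Blend remembered log-likelihood ratios with freshly computed BP ones."""
    return gamma * prev_llr + (1.0 - gamma) * bp_llr

# Demo: feed the update an oscillating stand-in for BP output (+1, -1, +1, ...).
# With memory, the blended beliefs settle at an amplitude strictly below 1
# instead of flipping at full strength forever.
llr = np.ones(n_nodes)
for t in range(50):
    bp_llr = (-1.0) ** t * np.ones(n_nodes)   # stand-in for oscillating BP
    llr = relay_update(llr, bp_llr, gamma)
print(float(np.max(np.abs(llr))))   # damped below the input's amplitude of 1
```

The steady-state amplitude works out to (1 - gamma) / (1 + gamma) per node, so stronger memory means stronger damping; the disorder across nodes is what breaks the degeneracies that a uniform damping factor cannot.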
Evaluating Logical Qubit Memory Performance
Summary
Evaluating logical qubit memory performance means investigating how reliably a quantum computer can store and process quantum information using "logical qubits," which are groups of physical qubits protected by error-correcting codes. This is crucial because logical qubits are designed to reduce errors and make quantum calculations practical, even though the underlying hardware is fragile and prone to noise.
- Monitor error rates: Track both physical and logical error rates closely to determine how well error correction is working during quantum operations.
- Test real-time decoding: Assess how quickly and accurately your system responds to errors mid-calculation, since fast feedback is vital for maintaining reliable quantum memory.
- Adapt to hardware changes: Incorporate methods that automatically adjust to qubit drift and environmental fluctuations to keep logical qubits stable over time.
Everyone agrees quantum error correction (QEC) is essential. But why do we care so much about things like ≤ 0.1% gate error or µs-scale decoder latency?

Here's the core idea: QEC combines many noisy qubits into a more stable logical qubit. If your hardware is good enough, you can reduce error rates exponentially by increasing the code size. But that only works if your system can keep up: decoding errors and reacting mid-circuit, fast. Especially for circuits with non-Clifford gates (like T-gates), you need real-time feedback between measurements and feedforward operations.

That's where the hardware starts to feel the pressure:
• Gate error rate ≤ 0.1%
• Decoder latency ≤ 15 µs
• Controller-decoder communication ≤ 10 µs
• Bandwidth ≥ 1 Mbit/s per qubit

These aren't wishful targets. They come from full-stack simulations of real quantum circuits, like Shor's algorithm for factoring 21 using surface codes. In those simulations, the system must handle:
• ~13 decoding tasks
• ~5 mid-circuit corrections
• ~1000 physical qubits

That's the blueprint. It doesn't just explain why QEC is hard; it points us toward what needs to work for it to succeed at scale.

Image credits: Yaniv Kurman et al. (2024, arXiv)
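The "exponentially with code size" claim can be made concrete with the standard sub-threshold scaling heuristic, p_L ≈ A·(p/p_th)^((d+1)/2). A minimal sketch; the prefactor A and threshold p_th below are illustrative assumptions, not values from the cited simulations:

```python
# Standard surface-code scaling heuristic: below threshold, each increase of
# the code distance d by 2 multiplies the logical error rate by ~(p / p_th).
def logical_error_rate(p_phys, d, p_th=0.01, prefactor=0.1):
    """Rough logical error rate for a distance-d code (illustrative constants)."""
    return prefactor * (p_phys / p_th) ** ((d + 1) / 2)

# With 0.1% physical error (10x below a ~1% threshold), every distance step
# buys another factor of ~10 in suppression:
for d in (3, 5, 7, 9):
    print(d, logical_error_rate(1e-3, d))
```

This is exactly why the ≤ 0.1% gate-error target matters: at 1% physical error the ratio p/p_th is ~1 and adding qubits buys nothing.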
-
Five days to tell you about five things Quantinuum announced last week. Quantinuum announced so many great things last week, I'm using each day of this week to recap.

Day 3: Helios Performance

By now you've heard that Helios is the "most accurate", "most capable", and "most powerful" quantum computer... and here's why. Helios has:
- 98 fully connected qubits. So-called "all-to-all" connectivity continues to prove its power for performing increasingly complex circuits with fewer resources.
- 99.92% two-qubit gate fidelity across all qubit pairs (i.e., we're not just measuring the best two or the median... all pairs have this performance!!).
- An NVIDIA GPU for fast, flexible real-time decoding for error correction.
- A first-of-its-kind real-time engine for efficiently performing the operations needed for fault-tolerant operation.
- A new programming language, Guppy, which has a Python front-end with high-performance code under the hood, allowing developers to program quantum computers the way they program classical computers and to seamlessly combine hybrid compute capabilities, quantum and classical, in a single program.

We demonstrated the ability to:
- Generate 94 logical qubits with our very efficient Iceberg error detection code (https://lnkd.in/gsvFVFja) and globally entangle them at better-than-break-even performance.
- Generate 50 logical qubits with a very similar error detection code and use them to run a quantum magnetism simulation with 2,500 logical gates at better-than-break-even performance.
- Generate 48 logical qubits with an error correction code, achieving a remarkable 2:1 scaling (only 2 physical qubits per error-corrected qubit).

Read more about these great achievements in our technical paper https://lnkd.in/g9bid_2S and technical blog https://lnkd.in/gZaN65CY.
-
Google Quantum AI Demonstrates Three Dynamic Surface Codes, Advancing Fault-Tolerant Quantum Computing

Introduction
Quantum computers promise exponential gains but remain constrained by extreme fragility: qubits are easily disrupted by noise, making error correction the central challenge of the field. Google Quantum AI has now taken a major step toward practical fault tolerance by successfully implementing three dynamic versions of the surface code, one of the most promising quantum error-correction frameworks.

Key Developments
• The team realized three distinct dynamic surface code circuits (hex, iSWAP, and walking), originally proposed in theoretical work by co-author Matt McEwen.
• Their experiments validate that multiple circuit variations can work on real hardware, expanding pathways for adapting error-correction codes to specific device architectures.
• Hex circuit: recompiles the surface code onto a hexagonal grid, reducing connectivity requirements from four neighbors to three. This simplifies fabrication and achieved 2.15× better error suppression.
• iSWAP circuit: replaces CZ gates with iSWAP gates, which are easier to execute and avoid leakage errors. Though they introduce CPHASE errors, the team showed strong performance even on hardware optimized for CZ gates, achieving 1.56× error suppression.
• Walking circuit: allows qubits to exchange roles, effectively "walking" logical information across the chip. This helps isolate and clean leakage errors and offers a new method for routing logical qubits, delivering 1.69× better suppression.
• All three implementations successfully detected and corrected noise without disturbing the quantum information, confirming the practicality of dynamic constructions.

Scientific Significance
• This is the strongest evidence yet that dynamic surface codes, adapted to hardware constraints, can function reliably in real quantum devices.
• The team also introduced a simplified "detector budgeting" technique, enabling easier analysis of how specific error sources impact logical performance.
• The work opens new avenues for designing codes tailored to imperfect hardware, enabling better yield and robustness as systems scale.
• Upcoming experiments will explore even more advanced dynamic circuits, including those based on the LUCI framework for routing around faulty qubits.

Why This Matters
Reliable quantum error correction is the linchpin of large-scale quantum computing. Google's demonstration shows that error-correcting codes can be adapted dynamically to real hardware constraints, unlocking higher performance, easier fabrication, and more flexible architectures. This progress accelerates the roadmap toward fault-tolerant quantum systems capable of solving real-world scientific and industrial problems.

I share daily insights with 34,000+ followers across defense, tech, and policy. If this topic resonates, I invite you to connect and continue the conversation. Keith King https://lnkd.in/gHPvUttw
-
**Qubits drift. Google Just Gave It an Autopilot.**

Quantum processors are not stable machines; they slowly drift out of tune. Tiny changes in temperature, vibrations, and electronics mean the gate you calibrated in the morning is slightly wrong by the afternoon. Over time, that drift quietly increases the error rate, and even with quantum error correction (QEC), your logical qubit fidelity starts to fall off.

The standard fix today is brutal: stop the computation, recalibrate, then resume. That's barely acceptable for short experiments, and totally unrealistic for fault-tolerant algorithms that might run for hours or days.

Google Quantum AI's new paper, "Reinforcement Learning Control of Quantum Error Correction", takes a different approach: it merges calibration with computation. Instead of pausing the QEC cycles, they:
• Treat QEC syndromes (error signals) as feedback about how the hardware is drifting.
• Use a reinforcement learning (RL) agent to nudge thousands of control parameters (pulse amplitudes, frequencies, couplings) while the code is running.
• Optimize for a lower logical error rate, not just pretty single-qubit gate metrics.

On their superconducting Willow processor, this RL "autopilot":
• Improves the logical error-rate stability of a distance-5 surface code by about 3.5× against injected drift.
• Gives ~20% extra suppression of the logical error rate on top of already hand-tuned, state-of-the-art calibration.
• Scales in simulation to larger surface codes (up to distance 15) with optimization speed that doesn't degrade with code size.

How does this compare to other decoders?
• Classical decoders (like matching decoders) assume the noise model is roughly fixed and then compute the best correction from the syndrome history.
• Learned decoders try to map syndromes → corrections more accurately, but still assume a mostly stable device.
• RL-QEC doesn't replace the decoder; it steers the hardware and decoder together so the same QEC stack keeps working even as the environment drifts.

If we want truly useful quantum computers, adding more qubits isn't enough. We'll also need systems that learn to stay calibrated while they compute, and this paper is one of the first serious demonstrations of that idea.

Paper: https://lnkd.in/ek2pDgek
#QuantumComputing #QuantumErrorCorrection #ReinforcementLearning #GoogleQuantumAI
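As a cartoon of "calibrating while computing": a controller can chase a drifting optimum using only an observed error signal, without ever pausing. This is my own toy, not Google's method, and it uses stochastic hill-climbing rather than RL proper; the drift rate, proposal noise, and quadratic defect-rate proxy are all assumptions for illustration.

```python
import random

random.seed(1)
target = 0.0   # the (drifting) optimal setting of some control parameter
param = 0.0    # the controller's current setting

def defect_rate(setting, target):
    """Toy proxy for the observed syndrome/defect rate: worse off-target."""
    return (setting - target) ** 2

for step in range(2000):
    target += 1e-3                        # slow hardware drift, never paused
    trial = param + random.gauss(0, 0.05) # nudge the control parameter
    # keep the nudge only if the observed defect rate improves
    if defect_rate(trial, target) < defect_rate(param, target):
        param = trial

# the controller tracks the drift (total 2.0) instead of falling behind
print(abs(param - target))
```

The point of the cartoon: the error signal the QEC stack already produces is enough feedback to keep control parameters pinned to a moving optimum, which is the core idea the RL agent scales up to thousands of parameters.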
-
Error correcting codes for near-term quantum computers - new blog post from IBM Research. "IBM scientists published the discovery of new codes that work with ten times fewer qubits." "As a concrete example, we show that 12 logical qubits can be preserved for ten million syndrome cycles using 288 physical qubits in total, assuming the physical error rate of 0.1%. We argue that achieving the same level of error suppression on 12 logical qubits with the surface code would require more than 4000 physical qubits. Our findings bring demonstrations of a low-overhead fault-tolerant quantum memory within the reach of near-term quantum processors." https://lnkd.in/du-WMhMv
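The quoted numbers pin down the overhead directly; a quick check using only the figures in the post:

```python
# Qubit overhead per logical qubit, from the numbers quoted above.
k_logical = 12
bicycle_physical = 288    # 12 logical qubits in 288 physical qubits
surface_physical = 4000   # post's surface-code estimate (a lower bound)

print(bicycle_physical / k_logical)          # 24 physical qubits per logical
print(surface_physical / k_logical)          # >333 physical qubits per logical
print(surface_physical / bicycle_physical)   # ~14x, the "ten times fewer" headline
```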
-
There is one thing that people don't notice about Alice & Bob's approach to logical qubits, but which is made very obvious by the table below (thank you Brian Siegelwax!).

Look at the 2Q fidelity here: everybody is at 99.50% or better, but the A&B approach works with "only" 98.40%. In spite of this somewhat unimpressive figure, the logical fidelity is very high (1e-8, meaning ~100M gates or ~1M depth), and the logical-to-physical ratio (1:15) is simply unmatched. Let me stress how counterintuitive this is: to reach a 1e-8 logical error rate, others don't just need more qubits, they also need their qubits to be better! How is this possible?

To understand this, you need to realize that most quantum error correction methods target two types of errors at the same time: bit-flips and phase-flips. This puts a strong constraint on the quality of qubits: a surface code only tolerates a ~1% error rate (i.e. 99% fidelity). And you need to be significantly better than this threshold if you want your error rates to scale nicely as you add more qubits. This is why everyone is looking at bringing error rates down to 0.1% or lower.

But with cat qubits, things are "easier" because there's one type of error you don't need to worry about: bit-flips! A cat qubit can be tuned to bring bit-flip error rates so low that they don't need to be targeted by quantum error correction. In the paper whose figures are in the table, the cycle time is 900 ns and the cat qubit is tuned to reach ~800 seconds of bit-flip lifetime. This means a ~1e-9 physical bit-flip error rate. When you have so few bit-flips, you can use an error correction code which only targets phase-flips; this is less demanding both in terms of qubit quality and qubit count.

But because there's no free lunch, this comes with a few inconveniences:
👉 Because the error correction code doesn't target bit-flips, it increases the bit-flip error rate. Fortunately, this increase is only linear in the code's distance, but you still go from ~1e-9 physical up to ~1e-8 logical.
👉 The logical error rate is limited by the physical bit-flip error rate. Going below a 1e-8 logical error rate requires improving the qubit's design, while other error correction codes "just" require more qubits.
👉 Reaching very long bit-flip lifetimes can be hard experimentally:
1️⃣ First, some saturation mechanisms may appear. The first cat qubits were implemented using a transmon, which limited the lifetime to a few milliseconds. Transmon-free designs have however proved that they have the potential to reach the hundreds of seconds you need.
2️⃣ But then comes a second problem: low bit-flip error rates are obtained at the expense of higher phase-flip error rates. If there are too many phase-flips, they can no longer be corrected. Fortunately, bit-flips decrease exponentially with the tuning, while phase-flips increase only linearly.

I'm unfortunately hitting the character limit, so check out the first comment for more ;)
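The bit-flip numbers above are easy to check. A quick sketch using only the figures quoted in the post; the distance used in the linear-scaling line is my own illustrative assumption:

```python
# Physical bit-flip error per QEC cycle, from the quoted cycle time and lifetime.
cycle_time_s = 900e-9        # 900 ns cycle
bitflip_lifetime_s = 800.0   # ~800 s tuned bit-flip lifetime
p_bitflip = cycle_time_s / bitflip_lifetime_s
print(p_bitflip)             # ~1.1e-9, matching the quoted ~1e-9 physical rate

# The phase-flip-only code raises the bit-flip rate only *linearly* in the
# code distance d (illustrative d, my assumption):
d = 9
print(d * p_bitflip)         # ~1e-8, the quoted logical bit-flip floor
```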
-
Quantum error correction below the surface code threshold experiment from Google Notes: - A logical qubit composed of 101 physical qubits lasted 2.4 times longer than its best physical qubit. - Demonstrated logical error rates decrease by over 50% with each increase in surface code size using 3, 5, and 7 data qubits on a side. - Experiments with 1D repetition codes using up to 29 data qubits showed that logical performance is limited by rare correlated error events occurring about once per hour.
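The ">50% per size step" observation defines a suppression factor Λ > 2 per distance step, which makes extrapolation simple arithmetic. A sketch; the Λ and starting error rate below are placeholders, not the experiment's fitted values:

```python
# If each distance step d -> d+2 divides the logical error rate by Lambda,
# then p_L(d) = p_L(3) / Lambda ** ((d - 3) / 2).
def extrapolate(p_l3, lam, d):
    """Logical error rate at distance d, given the rate at distance 3."""
    return p_l3 / lam ** ((d - 3) / 2)

# With Lambda just above 2 ("decrease by over 50%" per step), going from
# distance 3 to distance 7 is two steps, i.e. more than 4x suppression:
print(extrapolate(1e-2, 2.0, 7))
```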
-
⚛️ Architectural mechanisms of a universal fault-tolerant quantum computer 📑 Quantum error correction (QEC) is believed to be essential for the realization of large-scale quantum computers. However, due to the complexity of operating on the encoded 'logical' qubits, understanding the physical principles for building fault-tolerant quantum devices and combining them into efficient architectures is an outstanding scientific challenge. Here we utilize reconfigurable arrays of up to 448 neutral atoms to implement all key elements of a universal, fault-tolerant quantum processing architecture and experimentally explore their underlying working mechanisms. We first employ surface codes to study how repeated QEC suppresses errors, demonstrating 2.14(13)x below-threshold performance in a four-round characterization circuit by leveraging atom loss detection and machine learning decoding. We then investigate logical entanglement using transversal gates and lattice surgery, and extend it to universal logic through transversal teleportation with 3D [[15,1,3]] codes, enabling arbitrary-angle synthesis with logarithmic overhead. Finally, we develop mid-circuit qubit re-use, increasing experimental cycle rates by two orders of magnitude and enabling deep-circuit protocols with dozens of logical qubits and hundreds of logical teleportations with [[7,1,3]] and high-rate [[16,6,4]] codes while maintaining constant internal entropy. Our experiments reveal key principles for efficient architecture design, involving the interplay between quantum logic & entropy removal, judicious use of physical entanglement in logic gates & magic state generation, and leveraging teleportation for universality & physical qubit reset. These results establish foundations for scalable, universal error-corrected processing and its practical implementation with neutral atom systems. ℹ️ Bluvstein et al - 2025
-
New work from a Harvard team highlights a major bottleneck in fault-tolerant quantum computing: the classical decoder used in quantum error correction. Quick primer on QEC: 1. Encode: A logical qubit is spread across many physical qubits, so no single error destroys the information. 2. Detect: Stabilizer measurements run repeatedly. They do not reveal the quantum state, but they do flag when something has gone wrong. The pattern of those flags is called the syndrome. 3. Decode: A classical computer reads the syndrome and infers which error most likely occurred. 4. Correct: The correction is applied, and the logical qubit survives. Step 3 is where things get hard. For quantum LDPC codes, one of the most promising routes to efficient fault tolerance, practical decoders have usually forced a tradeoff between speed and accuracy: the fast ones are too weak, and the accurate ones are too slow for real-time use. This paper introduces Cascade, a geometry-aware convolutional neural decoder. The key idea is not just “use a neural network,” but to build the structure of the code directly into the model: locality, translation equivariance, and anisotropy. That makes this feel less like generic ML and more like architecture co-design. Some of the headline results: - On the [[144, 12, 12]] Gross code, Cascade achieves logical error rates up to 17x lower than prior practical decoders, with 3–5 orders of magnitude higher throughput - It reveals a “waterfall” regime in which logical errors fall much faster than standard distance-based formulas would suggest, largely because earlier decoders were not strong enough to expose it - In one surface code example, that translates to roughly 40% fewer physical qubits to reach a target logical error rate of 10^-9 - Its confidence estimates are well calibrated, which enables post-selection. 
In one setting on the [[72, 12, 6]] code, that implies roughly 20x fewer retries for repeat-until-success protocols such as magic state distillation - Current GPU latencies already fit the timing budgets for trapped-ion and neutral-atom platforms. Superconducting qubits still require a tighter ~1 microsecond budget, with FPGA and ASIC paths supported by the hardware estimates in the supplement The broader takeaway: decoder quality is not just an implementation detail. It directly shapes how many qubits and how much time fault-tolerant quantum computing actually requires, and those costs may be meaningfully lower than standard estimates assume. Paper: https://lnkd.in/g9D82Ry8
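The post-selection benefit can be framed with the geometric-distribution identity for repeat-until-success protocols. My framing, with illustrative acceptance probabilities; these are not figures from the paper:

```python
# In a repeat-until-success protocol, the expected number of attempts is
# 1 / p_accept (geometric distribution). Well-calibrated decoder confidence
# that raises the per-attempt acceptance probability 20x therefore cuts the
# expected attempt count by the same factor.
def expected_attempts(p_accept):
    """Mean attempts until the first accepted run."""
    return 1.0 / p_accept

print(expected_attempts(0.04))   # low acceptance: ~25 attempts on average
print(expected_attempts(0.80))   # 20x higher acceptance: ~1.25 attempts
```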