To build powerful quantum computers, we need to correct errors. One promising, hardware-friendly approach is to use 𝘣𝘰𝘴𝘰𝘯𝘪𝘤 𝘤𝘰𝘥𝘦𝘴, which store quantum information in superconducting cavities. These cavities are especially attractive because they can preserve quantum states far longer than even the best superconducting qubits.

But to manipulate the quantum state in the cavity, you need to connect it to a 'helper' qubit, typically a transmon. While effective, transmons often introduce new sources of error, including extra noise and unwanted nonlinearities that distort the cavity state.

The 𝗳𝗹𝘂𝘅𝗼𝗻𝗶𝘂𝗺 𝗾𝘂𝗯𝗶𝘁 offers a powerful alternative, with several advantages for controlling superconducting cavities:

• 𝗠𝗶𝗻𝗶𝗺𝗶𝘀𝗲𝗱 𝗗𝗲𝗰𝗼𝗵𝗲𝗿𝗲𝗻𝗰𝗲: Fluxonium qubits have demonstrated millisecond coherence times, minimising qubit-induced decoherence in the cavity.

• 𝗛𝗮𝗺𝗶𝗹𝘁𝗼𝗻𝗶𝗮𝗻 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴: Its rich energy-level structure offers significant design flexibility, allowing the qubit-cavity Hamiltonian to be tailored to minimise or eliminate undesirable nonlinearities.

• 𝗞𝗲𝗿𝗿-𝗙𝗿𝗲𝗲 𝗢𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻: Numerical simulations show that a fluxonium can be designed to achieve a large dispersive shift for fast control while simultaneously making the self-Kerr nonlinearity vanish. This is a regime that is extremely difficult for a transmon to reach without significant, undesirable qubit-cavity hybridisation.

And there are now experimental results that support this approach. Angela Kou's team coupled a fluxonium qubit to a superconducting cavity, generating Fock states and superpositions with fidelities up to 91%. The main limiting factors were qubit initialisation inefficiency and the modest 12 μs lifetime of the cavity in this prototype. Simulations suggest that in higher-coherence systems (such as 3D cavities), the fidelity could climb much higher, with error rates dropping below 1%.

Even more impressive: they show that an external magnetic flux can be used to tune the dispersive shift and self-Kerr nonlinearity independently. The experiment thus confirms that there are operating points where the unwanted Kerr term crosses zero while the desired dispersive coupling stays large.

In short: fluxonium qubits offer a practical, tunable path to high-fidelity bosonic control without sacrificing the long lifetimes that make cavity-based quantum memories so attractive in the first place.

📸 Credits: Ke Ni et al. (arXiv:2505.23641)

Want more breakdowns and deep dives straight to your inbox? Visit my profile/website to sign up. ☀️
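To make the Kerr-free point concrete, here is a minimal QuTiP sketch (my own illustration, not the paper's code) of why a residual self-Kerr K distorts cavity states while a Kerr-free operating point leaves them intact. All values are made up for illustration:

```python
# Minimal sketch (assuming QuTiP is installed): under the self-Kerr term
# H = (K/2) n(n-1), a coherent state in the cavity gets distorted, while at a
# Kerr-free operating point (K = 0) it is left untouched.
import numpy as np
import qutip as qt

N = 40                               # Fock-space truncation
n = qt.num(N)                        # photon-number operator a†a
psi0 = qt.coherent(N, 2.0)           # cavity prepared in a coherent state
tlist = np.linspace(0.0, 1.0, 2)     # evolve for one (arbitrary) time unit

for K in (2 * np.pi * 0.01, 0.0):    # small residual Kerr vs Kerr-free point
    H = 0.5 * K * n * (n - 1)        # self-Kerr term of the dispersive expansion
    final = qt.sesolve(H, psi0, tlist).states[-1]
    print(f"K/2pi = {K / (2 * np.pi):.3f}: |<psi0|psi(t)>|^2 = "
          f"{abs(psi0.overlap(final))**2:.3f}")
```

With K ≠ 0 the overlap with the initial coherent state decays as the state smears out in phase space; with K = 0 it stays at 1, which is exactly why a tunable, independently zeroed Kerr matters for bosonic encodings.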
Reducing Error Rates in Circuit QED Systems
Summary
Reducing error rates in circuit QED systems is crucial for advancing quantum computing: it means minimizing the mistakes that occur when quantum information is processed with superconducting circuits, typically in combination with quantum error correction strategies. Circuit QED (circuit quantum electrodynamics) systems use superconducting qubits and cavities to store and manipulate quantum states, but even tiny errors can disrupt computations, so researchers are developing new methods and hardware to maintain the integrity of these delicate states.
- Choose robust qubits: Using fluxonium qubits instead of traditional transmons can limit the impact of environmental noise and preserve quantum states for longer.
- Shape control pulses: Carefully designing the frequency and timing of control pulses reduces leakage errors during quantum operations and improves the reliability of gate actions.
- Combine error mitigation techniques: Integrating partial quantum error correction and new error-mitigation strategies can significantly cut error rates without requiring excessive hardware resources.
-
Reducing Leakage of Single-Qubit Gates for Superconducting Quantum Processors Using Analytical Control Pulse Envelopes
https://lnkd.in/e7ZhGXGM

Abstract: Improving the speed and fidelity of quantum logic gates is essential to reach quantum advantage with future quantum computers. However, fast logic gates lead to increased leakage errors in superconducting quantum processors based on qubits with low anharmonicity, such as transmons. To reduce leakage errors, we propose and experimentally demonstrate two new analytical methods, Fourier ansatz spectrum tuning derivative removal by adiabatic gate (FAST DRAG) and higher-derivative (HD) DRAG, both of which enable shaping single-qubit control pulses in the frequency domain to achieve stronger suppression of leakage transitions compared to previously demonstrated pulse shapes. Using the new methods to suppress the 𝑒𝑓 transition of a transmon qubit with an anharmonicity of −212 MHz, we implement 𝑅𝑋(𝜋/2) gates achieving a leakage error below 3.0×10⁻⁵ down to a gate duration of 6.25 ns, without the need for iterative closed-loop optimization. The obtained leakage error represents a 20-fold reduction in leakage compared to a conventional cosine DRAG pulse. Employing the FAST DRAG method, we further achieve an error per gate of (1.56±0.07)×10⁻⁴ at a 7.9 ns gate duration, outperforming conventional pulse shapes both in terms of error and gate speed. Furthermore, we study error-amplifying measurements for the characterization of temporal microwave control-pulse distortions, and demonstrate that non-Markovian coherent errors caused by such distortions may be a significant source of error for sub-10-ns single-qubit gates unless corrected using predistortion.
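For context, here is the conventional cosine DRAG envelope that the paper uses as its baseline, in its standard textbook form (this is not the authors' FAST DRAG or HD DRAG implementation; the peak Rabi rate below is an assumed value):

```python
# Sketch of a conventional cosine DRAG pulse: the quadrature (Q) envelope is the
# scaled derivative of the in-phase (I) envelope, which suppresses leakage to the
# second excited state of a weakly anharmonic transmon.
import numpy as np

def cosine_drag(t, T, amp, anharm, drag_coeff=1.0):
    """I and Q envelopes of a cosine DRAG pulse.

    t: time array [s], T: gate duration [s], amp: peak Rabi rate [rad/s],
    anharm: anharmonicity [rad/s], e.g. 2*pi*(-212e6) for the transmon above.
    """
    I = 0.5 * amp * (1.0 - np.cos(2.0 * np.pi * t / T))               # main envelope
    dI = 0.5 * amp * (2.0 * np.pi / T) * np.sin(2.0 * np.pi * t / T)  # its derivative
    Q = -drag_coeff * dI / anharm        # derivative term counteracts ef-leakage
    return I, Q

t = np.linspace(0.0, 6.25e-9, 101)       # a 6.25 ns gate, as quoted in the abstract
I, Q = cosine_drag(t, T=6.25e-9, amp=2 * np.pi * 80e6,   # 80 MHz peak: illustrative
                   anharm=2 * np.pi * (-212e6))
```

FAST DRAG and HD DRAG replace this simple derivative correction with envelopes shaped directly in the frequency domain, which is what buys the 20-fold leakage reduction reported above.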
-
MIT Sets Quantum Computing Record with 99.998% Fidelity

Researchers at MIT have achieved a world-record single-qubit fidelity of 99.998% using a superconducting qubit known as fluxonium. This breakthrough represents a significant step toward practical quantum computing by addressing one of the field's greatest challenges: mitigating the noise and control imperfections that lead to operational errors.

Key Highlights:

1. The Problem: Noise and Errors
• Qubits, the building blocks of quantum computers, are highly sensitive to noise and imperfections in control mechanisms.
• Such disturbances introduce errors that limit the complexity and duration of quantum algorithms. "These errors ultimately cap the performance of quantum systems," the researchers noted.

2. The Solution: Two New Techniques
To overcome these challenges, the MIT team developed two innovative techniques:
• Commensurate Pulses: This method involves timing quantum pulses precisely to make counter-rotating errors uniform and correctable.
• Circularly Polarized Microwaves: By creating a synthetic version of circularly polarized light, the team improved the control of the qubit's state, further enhancing fidelity.
"Getting rid of these errors was a fun challenge for us," said David Rower, PhD '24, one of the study's lead researchers.

3. Fluxonium Qubits and Their Potential
• Fluxonium qubits are superconducting circuits with unique properties that make them more resistant to environmental noise than traditional qubits.
• By applying the new error-mitigation techniques, the team unlocked fluxonium's potential to operate at near-perfect fidelity.

4. Implications for Quantum Computing
• Achieving 99.998% fidelity significantly reduces errors in quantum operations, paving the way for more complex and reliable quantum algorithms.
• This milestone represents a major step toward scalable quantum computing systems capable of solving real-world problems.

What's Next?
The team plans to expand its work by exploring multi-qubit systems and integrating the error-mitigation techniques into larger quantum architectures. Such advances could accelerate progress toward error-corrected, fault-tolerant quantum computers.

Conclusion: A Leap Toward Practical Quantum Systems
MIT's achievement underscores the importance of innovation in error correction and control to overcome the fundamental challenges of quantum computing. This breakthrough brings us closer to the realization of large-scale quantum systems that could transform fields such as cryptography, materials science, and complex optimization problems.
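A rough way to picture the commensurate-pulse idea (my paraphrase of the summary above, with made-up numbers): choose each gate duration to be an integer number of qubit Larmor periods, so that the counter-rotating dynamics repeat identically from gate to gate and can be calibrated out as a fixed, correctable error:

```python
# Illustrative sketch of "commensurate pulses". The qubit frequency and target
# duration are invented values, not MIT's parameters.
f_q = 500e6                        # fluxonium qubit frequency [Hz] (illustrative)
t_larmor = 1.0 / f_q               # Larmor period = 2 ns

def commensurate_duration(target, period=t_larmor):
    """Round a target gate duration to the nearest integer multiple of the period."""
    k = max(1, round(target / period))
    return k * period

print(commensurate_duration(16.7e-9))   # -> 1.6e-08 s (8 Larmor periods)
```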
-
🚨 Exciting #quantumcomputing alert! Now #QEC primitives actually make #quantumcomputers more powerful! 75-qubit GHZ state on a superconducting #QPU 🚨

In our latest work we address the elephant in the room about #quantumerrorcorrection: in the current era, where qubit counts are a bottleneck in available systems, adopting full-blown QEC can be a step backwards in terms of computational capacity. Even when it delivers net benefits in error reduction, QEC consumes a lot of qubits to do so, and we just don't have enough right now...

So how do we maximize value for end users while still pushing hard on the underpinning QEC technology? To answer this, the team at Q-CTRL set out to find new ways to significantly reduce the overhead penalties of QEC while delivering big benefits!

In this latest demonstration we show that we can adopt parts of QEC - indirect stabilizer measurements on ancilla qubits - to deliver large performance gains without the painful overhead of logical encoding. By combining error detection with deterministic error suppression, we can greatly improve the efficiency of the process, requiring only about 10% overhead in ancillae while maintaining a very low discard rate of executions with identified errors!

Using this approach we've set a new record for the largest demonstrated entangled state, at 75 qubits on an IBM quantum computer (validated by MQC), and also demonstrated a totally new way to teleport gates across large distances (where all-to-all connectivity isn't possible). The results outperform all previously published approaches and highlight the fact that our journey in dealing with errors in quantum computers is continuous.

Of course it isn't a panacea; in the long term, as we tackle even more complex algorithms, we believe logical encoding will become an important part of our toolbox. But that's the point: logical QEC is just one tool, and we have many to work with!

At Q-CTRL we never lose sight of the fact that our objective is to deliver maximum capability to QC end users. This work on deploying QEC primitives is a core part of how we're making quantum technology useful, right now. https://lnkd.in/gkG3W7eE
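As a concrete picture of the primitive being described - an indirect stabilizer measurement on an ancilla, used for error detection and discard rather than full logical encoding - here is a minimal Qiskit sketch (illustrative only, not Q-CTRL's implementation):

```python
# Indirect ZZ-parity check: map the Z0*Z1 stabilizer of a Bell pair onto an ancilla,
# measure the ancilla mid-circuit, and discard shots where an error was flagged.
from qiskit import QuantumCircuit

qc = QuantumCircuit(3, 3)        # qubits 0,1 = data, qubit 2 = ancilla
qc.h(0)
qc.cx(0, 1)                      # prepare a Bell pair on the data qubits
qc.cx(0, 2)
qc.cx(1, 2)                      # ancilla now holds the Z0*Z1 parity
qc.measure(2, 2)                 # mid-circuit syndrome measurement
qc.measure([0, 1], [0, 1])
# Post-processing: keep only shots where the ancilla bit is 0 (no error detected).
# The post reports ~10% ancilla overhead and a low discard rate at the 75-qubit scale.
```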
-
Google Unveils Willow: A Leap Forward in Quantum Computing

Google Quantum AI has introduced Willow, a cutting-edge quantum chip designed to address two of the field's most significant challenges: error correction and computational scalability. Willow, fabricated in Google's Santa Barbara facility, achieves state-of-the-art performance, marking a pivotal step toward a large-scale, commercially viable quantum computer. It gets way geekier from here - but if you're with me so far…

Exponential Error Reduction
Julian Kelly, Director of Quantum Hardware at Google, emphasized Willow's ability to exponentially reduce errors as the system scales. Using a grid of superconducting qubits, Willow demonstrated a historic breakthrough in quantum error correction: by expanding arrays from 3×3 to 5×5 and then 7×7 qubits, researchers cut error rates in half with each step. This achievement, referred to as being "below threshold," means that larger quantum systems can now exhibit fewer errors, a goal pursued since Peter Shor introduced quantum error correction in 1995. The chip also achieved "beyond breakeven" performance, where arrays of qubits outlived individual qubits, which is key to the feasibility of practical quantum computation.

Ten Septillion Years in Five Minutes
Willow's computational capabilities were validated using the Random Circuit Sampling (RCS) benchmark, a rigorous test of quantum supremacy. According to Google's estimates, Willow completed a task in under five minutes that would take a modern supercomputer ten septillion years - a timescale exceeding the age of the universe. This achievement underscores the rapid, double-exponential performance improvements of quantum systems over classical alternatives. While the RCS benchmark lacks direct commercial applications, it remains a critical indicator of quantum computational power. Kelly noted that surpassing classical systems on this benchmark solidifies confidence in the broader potential of quantum technology.

Building Toward Practical Applications
Google's roadmap aims to bridge the gap between theoretical quantum advantage and real-world utility. The team is now focused on achieving "useful, beyond-classical" computations that solve practical problems. Applications in drug discovery, battery design, and AI optimization are among the potential breakthroughs quantum computing could unlock. Willow's advances in quantum error correction and computational scalability highlight its transformative potential. As Kelly explained, "Quantum algorithms have fundamental scaling laws on their side," making quantum computing indispensable for tasks beyond the reach of classical systems.

Practical quantum computing is still years away, but this is an exciting milestone. Considering the remarkable rate of technological improvement we're experiencing right now, practical quantum computing (and quantum AI) may be closer than we think. -s
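The "below threshold" claim has a simple back-of-the-envelope form: the logical error rate of a distance-d surface code scales roughly as A/Λ^((d+1)/2), so an error-suppression factor Λ ≈ 2 halves the error with each step from 3×3 to 5×5 to 7×7. A quick sketch with illustrative numbers (A and Λ here are assumptions for the arithmetic, not Google's published fit):

```python
# Toy illustration of below-threshold scaling: each increase of the code distance
# by 2 divides the logical error rate by Lambda.
A, Lam = 3e-2, 2.0                 # prefactor and suppression factor (illustrative)
for d in (3, 5, 7):                # 3x3, 5x5, 7x7 arrays
    eps = A / Lam ** ((d + 1) / 2)
    print(f"d = {d}: logical error ~ {eps:.2e}")
# Output halves at each step: 7.50e-03 -> 3.75e-03 -> 1.88e-03
```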
-
Interesting new study: "EnQode: Fast Amplitude Embedding for Quantum Machine Learning Using Classical Data." The authors introduce a novel framework to address the limitations of traditional amplitude embedding (AE) [GitHub repo included].

Traditional AE methods often involve deep, variable-length circuits, which can lead to high output error due to extensive gate usage and inconsistent error rates across data samples. This variability in circuit depth and gate composition results in unequal noise exposure, obscuring the true performance of quantum algorithms. To overcome these challenges, the researchers developed EnQode, a fast AE technique based on symbolic representation. Instead of aiming for an exact amplitude representation of each sample, EnQode employs a cluster-based approach to achieve approximate AE with high fidelity.

Key aspects of EnQode:
* Clustering: EnQode begins by using the k-means clustering algorithm to group similar data samples. For each cluster, a mean state is calculated to represent the central characteristics of that cluster's data distribution.
* Hardware-optimized ansatz: For each cluster's mean state, a low-depth, machine-optimized ansatz is trained, tailored to the specific quantum hardware being used (e.g., IBM quantum devices).
* Transfer learning for fast embedding: Once the cluster models are trained offline, transfer learning enables rapid amplitude embedding of new data samples. An incoming sample is assigned to the nearest cluster, and its embedding circuit is initialized with the optimized parameters of that cluster's mean state. These parameters are then fine-tuned, significantly accelerating the embedding process without retraining from scratch.
* Reduced circuit complexity: EnQode achieved an average reduction of over 28× in circuit depth, over 11× in single-qubit gate count, and over 12× in two-qubit gate count, with zero variability across samples thanks to its fixed ansatz design.
* Higher state fidelity in noisy environments: In noisy simulations of IBM quantum hardware, EnQode showed a state-fidelity improvement of over 14× compared to the baseline. While the baseline achieves 100% fidelity in ideal simulations (it performs exact embedding), EnQode maintained an average of 89% fidelity when transpiled to real hardware in ideal simulations, a good approximation given the significant reduction in circuit complexity.

Here's the article: https://lnkd.in/dQMbNN7b
And here's the GitHub repo: https://lnkd.in/dbm7q3eJ
#qml #datascience #machinelearning #quantum #nisq #quantumcomputing
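Here is a minimal sketch of the cluster-then-warm-start pattern described above (my paraphrase; the authors' actual code is in the linked GitHub repo). The ansatz training and fine-tuning steps are deliberately stubbed out:

```python
# Offline: cluster the dataset and (not shown) train one fixed low-depth ansatz per
# cluster mean state. Online: warm-start each new sample from its cluster's params.
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(1000, 8)                        # classical samples to embed
X /= np.linalg.norm(X, axis=1, keepdims=True)      # amplitude vectors need unit norm

km = KMeans(n_clusters=16, n_init=10).fit(X)       # group similar samples

def embed_params(sample, trained_params):
    """Return warm-start ansatz parameters for one sample.

    trained_params: dict cluster_id -> parameters of that cluster's trained ansatz
    (training and per-sample fine-tuning are omitted in this sketch).
    """
    c = int(km.predict(sample.reshape(1, -1))[0])  # nearest cluster
    theta0 = trained_params[c]                     # transfer-learned initialization
    # ... a few optimizer steps would fine-tune theta0 for this sample ...
    return theta0
```

The fixed ansatz is the key design choice: every sample compiles to the same circuit shape, which is why the error variability across samples drops to zero.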
-
A quantum computer that learns from its own errors while it's computing. That's the framing in a recent paper from Google Quantum AI and Google DeepMind on reinforcement learning control of quantum error correction.

Large quantum processors drift. The standard fix is to halt the computation and recalibrate, which won't scale to algorithms expected to run for days or weeks. The authors ask whether QEC can calibrate itself from the data it already produces.

The idea: repurpose error-detection events as a training signal for a reinforcement learning agent that continuously tunes the physical control parameters (pulse amplitudes, detunings, DRAG coefficients, CZ parameters, and so on). Rather than optimizing the logical error rate (LER) directly, which is expensive and global, the agent minimizes the average detector-event rate, a cheap, local proxy whose gradient is approximately aligned with the gradient of the LER in the small-perturbation regime.

The results on a Willow superconducting processor:
- On distance-5 surface and color codes, RL fine-tuning after conventional calibration and expert tuning yields about 20% additional LER suppression
- Against injected drift, RL steering improves logical stability 2.4×, rising to 3.5× when decoder parameters are also steered
- New record logical error per cycle: 7.72(9)×10⁻⁴ for a distance-7 surface code (with the AlphaQubit2 decoder) and 8.19(14)×10⁻³ for a distance-5 color code (with Tesseract)
- In simulation, the framework scales to a distance-15 surface code with roughly 40,000 control parameters, with a convergence rate that is independent of system size

The broader takeaway: calibration and computation may not need to be separate phases. If detector statistics can carry enough information to steer a large control stack online, fault tolerance becomes less about pausing to retune and more about a processor that keeps learning while it computes.

Worth noting that the current experiments rely on short repeated memory circuits, so real-time steering during a single long logical algorithm (where exploration noise would affect the computation directly) remains future work.

Paper: https://lnkd.in/gVQXnpzZ
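One way to picture the core loop: treat the average detector-event rate as a cheap reward proxy and nudge the control parameters with a stochastic gradient estimate. The sketch below uses a simultaneous-perturbation (SPSA-style) update as a stand-in for the paper's RL agent; `measure_detection_rate` is a hypothetical function representing a batch of QEC cycles on hardware:

```python
# Toy control loop (my abstraction, not the paper's algorithm): descend the
# detector-event rate, the cheap local proxy for the logical error rate.
import numpy as np

def spsa_step(theta, measure_detection_rate, lr=1e-3, eps=1e-2, rng=np.random):
    """One simultaneous-perturbation update of the control-parameter vector theta."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # random +/-1 perturbation
    g = (measure_detection_rate(theta + eps * delta)
         - measure_detection_rate(theta - eps * delta)) / (2 * eps) * delta
    return theta - lr * g        # nudge pulse amplitudes, detunings, etc. online
```

The appeal of this structure is that each update costs only two (noisy) evaluations of the proxy, regardless of how many control parameters theta holds, which is consistent with the paper's claim of convergence rates independent of system size.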
-
𝗧𝘄𝗼 𝗻𝗲𝘄 𝗽𝗮𝗽𝗲𝗿𝘀 𝗳𝗿𝗼𝗺 𝗜𝗕𝗠 𝗤𝘂𝗮𝗻𝘁𝘂𝗺 𝗺𝗮𝗿𝗸 𝗶𝗺𝗽𝗼𝗿𝘁𝗮𝗻𝘁 𝗽𝗿𝗼𝗴𝗿𝗲𝘀𝘀 𝘁𝗼𝘄𝗮𝗿𝗱 𝘀𝗰𝗮𝗹𝗮𝗯𝗹𝗲 𝗳𝗮𝘂𝗹𝘁-𝘁𝗼𝗹𝗲𝗿𝗮𝗻𝘁 𝗾𝘂𝗮𝗻𝘁𝘂𝗺 𝗰𝗼𝗺𝗽𝘂𝘁𝗶𝗻𝗴.

What's notable is that these papers are not presenting isolated techniques. They are improvements across an entire #FTQC pipeline: a decoding algorithm that meets the architectural constraints of an efficient LDPC-based system, and an architecture that exploits those decoding capabilities to scale. In short, 𝘁𝗵𝗲𝘆 𝘀𝗵𝗼𝘄 𝗵𝗼𝘄 𝗙𝗧𝗤𝗖 𝗰𝗮𝗻 𝗯𝗲 𝗺𝗮𝗱𝗲 𝗲𝗮𝗿𝗹𝗶𝗲𝗿, 𝗳𝗮𝘀𝘁𝗲𝗿.

The first, Relay-BP, introduces a 𝗱𝗲𝗰𝗼𝗱𝗶𝗻𝗴 𝗮𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺 𝗳𝗼𝗿 #Quantum 𝗟𝗗𝗣𝗖 𝗰𝗼𝗱𝗲𝘀 that is accurate, efficient, and implementable in hardware. It achieves logical error rates up to 100× lower than previous methods like BP+OSD, while operating within the tight time budgets required for real-time decoding: under 600 iterations of ~20 ns each, comfortably within a 12 μs QEC cycle. It is parallel, low-footprint, and deployable on an FPGA or ASIC. This is essential for scaling beyond simulation.

The second paper outlines the Bicycle Architecture, a 𝗺𝗼𝗱𝘂𝗹𝗮𝗿 𝗙𝗧𝗤𝗖 𝗱𝗲𝘀𝗶𝗴𝗻 built on bivariate bicycle codes. These LDPC codes offer high encoding rates (e.g., 12 logical qubits in 144 or 288 physical qubits), with logical operations implemented via Pauli measurements, lattice surgery, and T-state injection. The architecture supports universal fault-tolerant logic with significantly reduced overhead. At physical error rates around 7×10⁻⁴, workloads like TFIM simulations become accessible with fewer than 10k physical qubits. Those numbers are roughly 10× more efficient than comparable surface-code implementations.

This comes just after Google's improved resource estimates for Shor's algorithm (two weeks ago they dropped their estimate for running Shor from 20m physical qubits to 1m). Across the field, assumptions about overhead, feasibility, and performance are shifting. The foundational tools for fault-tolerant architectures (decoding, code design, and modular compilation) are converging faster than expected.

FTQC is feasible, and the theory shows it is easier than we thought. Hardware is where it will shine!

https://lnkd.in/eQtn92zE
https://lnkd.in/eMXeUwzi

#QuantumComputing #FaultTolerance #QEC #LDPC #IBMQuantum #QuantumArchitecture
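The numbers quoted above are easy to sanity-check (all figures taken from the post itself, not recomputed from the papers):

```python
# Real-time decoding budget: worst-case Relay-BP iterations vs the QEC cycle time.
iters, t_iter, qec_cycle = 600, 20e-9, 12e-6
print(f"worst-case decode time: {iters * t_iter * 1e6:.1f} us "
      f"(budget: {qec_cycle * 1e6:.1f} us)")          # 12.0 us vs 12.0 us

# Encoding rate of the bivariate bicycle codes quoted above.
logical, physical = 12, 144
print(f"bicycle-code encoding rate: {logical}/{physical} = {logical / physical:.3f}")
# For comparison, a distance-d surface code encodes 1 logical qubit in roughly
# 2*d^2 physical qubits, which is where the large efficiency gap comes from.
```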
-
Qubits drift. Google just gave them an autopilot.

Quantum processors are not stable machines; they slowly drift out of tune. Tiny changes in temperature, vibrations, and electronics mean the gate you calibrated in the morning is slightly wrong by the afternoon. Over time, that drift quietly increases the error rate, and even with quantum error correction (QEC), your logical qubit fidelity starts to fall off.

The standard fix today is brutal: stop the computation, recalibrate, then resume. That's barely acceptable for short experiments, and totally unrealistic for fault-tolerant algorithms that might run for hours or days.

Google Quantum AI's new paper, "Reinforcement Learning Control of Quantum Error Correction", takes a different approach: it merges calibration with computation. Instead of pausing the QEC cycles, they:
• Treat QEC syndromes (error signals) as feedback about how the hardware is drifting.
• Use a reinforcement learning (RL) agent to nudge thousands of control parameters (pulse amplitudes, frequencies, couplings) while the code is running.
• Optimize for a lower logical error rate, not just pretty single-qubit gate metrics.

On their superconducting Willow processor, this RL "autopilot":
• Improves the logical error-rate stability of a distance-5 surface code by about 3.5× against injected drift.
• Gives ~20% extra suppression of the logical error rate on top of already hand-tuned, state-of-the-art calibration.
• Scales in simulation to larger surface codes (up to distance 15) with optimization speed that doesn't degrade with code size.

How does this compare to other decoders?
• Classical decoders (like matching decoders) assume the noise model is roughly fixed and then compute the best correction from the syndrome history.
• Learned decoders try to map syndromes → corrections more accurately, but still assume a mostly stable device.
• RL-QEC doesn't replace the decoder; it steers the hardware and decoder together so the same QEC stack keeps working even as the environment drifts.

If we want truly useful quantum computers, adding more qubits isn't enough. We'll also need systems that learn to stay calibrated while they compute, and this paper is one of the first serious demonstrations of that idea.

Paper: https://lnkd.in/ek2pDgek

#QuantumComputing #QuantumErrorCorrection #ReinforcementLearning #GoogleQuantumAI
-
One of the things I truly enjoy about quantum computing is how we can leverage its intrinsic properties - such as reversibility - to turn hardware limitations into opportunities in the NISQ era. 🤓

In a world where noise is unavoidable, what if we treat noise not just as a problem… but as part of the algorithmic workflow? 🚀 This is precisely the idea behind error-mitigation techniques like Zero Noise Extrapolation (ZNE).

The intuition is elegant:
We start by considering our original circuit as the baseline noise level (scale factor = 1). 👀 Then, we deliberately increase the noise - either locally or globally - by inserting additional gate operations that effectively compose to the identity. Mathematically, the circuit remains unchanged. 😆 Physically, however, the hardware accumulates more noise. 😲 By measuring the observable at different noise levels and extrapolating back to the zero-noise limit, we can estimate what the result would have been in an ideal, noiseless regime.

Instead of fighting noise directly, we model it - and use it.

Have you implemented ZNE in your workflows? Or have you explored how noise actually scales with additional gate insertions on real hardware? 🤓

I'm sharing a resource from QGSS25, where we discussed this in depth and built a hands-on notebook around it with some great colleagues: https://lnkd.in/eXDRrKBb

What other error-mitigation resources or techniques have you found useful? I'd love to hear your thoughts.

#QuantumComputing #NISQ #ErrorMitigation #ZeroNoiseExtrapolation #QuantumAlgorithms #QuantumHardware #QuantumEngineering #Qiskit #QuantumResearch #DeepTech #QuantumOptimization
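Here is a minimal sketch of the global-folding flavour of ZNE described above (illustrative, assuming Qiskit circuits; `run` is a hypothetical executor that returns the measured expectation value of your observable):

```python
# Global circuit folding: C -> C (C† C)^k composes to the identity mathematically,
# but physically scales the accumulated noise by an odd factor 2k+1. Measuring at
# several scale factors and fitting back to scale 0 estimates the noiseless value.
import numpy as np
from qiskit import QuantumCircuit

def fold_global(qc: QuantumCircuit, scale: int) -> QuantumCircuit:
    """Scale circuit depth by an odd factor without changing its unitary."""
    assert scale % 2 == 1, "global folding yields odd scale factors (1, 3, 5, ...)"
    folded = qc.copy()
    for _ in range((scale - 1) // 2):
        folded = folded.compose(qc.inverse()).compose(qc)  # identity-composing gates
    return folded

def zne_estimate(run, qc, scales=(1, 3, 5), order=1):
    """Measure <O> at each noise scale and extrapolate to the zero-noise limit."""
    values = [run(fold_global(qc, s)) for s in scales]
    coeffs = np.polyfit(scales, values, deg=order)   # linear (Richardson-like) fit
    return np.polyval(coeffs, 0.0)
```

The choice of extrapolation model (linear, polynomial, exponential) matters in practice, since how noise actually scales with the inserted gates is hardware-dependent - exactly the question the post raises.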