MIT Sets Quantum Computing Record with 99.998% Fidelity

Researchers at MIT have achieved a world-record single-qubit fidelity of 99.998% using a superconducting qubit known as fluxonium. This breakthrough represents a significant step toward practical quantum computing by addressing one of the field's greatest challenges: mitigating the noise and control imperfections that lead to operational errors.

Key Highlights:

1. The Problem: Noise and Errors
• Qubits, the building blocks of quantum computers, are highly sensitive to noise and to imperfections in their control mechanisms.
• Such disturbances introduce errors that limit the complexity and duration of quantum algorithms. "These errors ultimately cap the performance of quantum systems," the researchers noted.

2. The Solution: Two New Techniques
To overcome these challenges, the MIT team developed two innovative techniques:
• Commensurate Pulses: This method times quantum pulses precisely so that counter-rotating errors become uniform and correctable.
• Circularly Polarized Microwaves: By creating a synthetic version of circularly polarized light, the team improved control of the qubit's state, further enhancing fidelity.
"Getting rid of these errors was a fun challenge for us," said David Rower, PhD '24, one of the study's lead researchers.

3. Fluxonium Qubits and Their Potential
• Fluxonium qubits are superconducting circuits whose unique properties make them more resistant to environmental noise than traditional qubits.
• By applying the new error-mitigation techniques, the team unlocked fluxonium's potential to operate at near-perfect fidelity.

4. Implications for Quantum Computing
• Achieving 99.998% fidelity significantly reduces errors in quantum operations, paving the way for more complex and reliable quantum algorithms.
• This milestone represents a major step toward scalable quantum computing systems capable of solving real-world problems.

What's Next?
The team plans to expand its work by exploring multi-qubit systems and integrating the error-mitigation techniques into larger quantum architectures. Such advancements could accelerate progress toward error-corrected, fault-tolerant quantum computers.

Conclusion: A Leap Toward Practical Quantum Systems
MIT's achievement underscores the importance of innovation in error mitigation and control for overcoming the fundamental challenges of quantum computing. This breakthrough brings us closer to large-scale quantum systems that could transform fields such as cryptography, materials science, and complex optimization.
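For a concrete feel for the commensurate-pulse idea, here is a minimal sketch, assuming the published definition that pulse durations are chosen as an integer number of qubit Larmor periods, so counter-rotating effects repeat identically from pulse to pulse and can be calibrated out as a fixed offset. The qubit frequency and target gate time below are invented example values, not numbers from the MIT experiment.

```python
# Minimal sketch of the commensurate-pulse idea: choose gate durations
# equal to an integer number of qubit Larmor periods, so the phase of
# counter-rotating terms is identical for every pulse. The values are
# invented examples, not parameters from the MIT experiment.

def commensurate_duration(target_s: float, qubit_freq_hz: float) -> float:
    """Snap a desired duration to the nearest integer number of
    Larmor periods (1 / qubit_freq_hz)."""
    period = 1.0 / qubit_freq_hz
    n = max(1, round(target_s / period))
    return n * period

f_q = 50e6           # hypothetical low-frequency fluxonium, 50 MHz
t_target = 98e-9     # hypothetical desired gate time, 98 ns
t_comm = commensurate_duration(t_target, f_q)
print(f"snapped {t_target * 1e9:.1f} ns -> {t_comm * 1e9:.1f} ns "
      f"({round(t_comm * f_q)} Larmor periods)")
```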
Solutions to Quantum Computing Noise Issues
Explore top LinkedIn content from expert professionals.
Summary
Quantum computing noise issues refer to disruptions and errors caused by environmental factors or control imperfections that destabilize quantum bits, or qubits, making reliable computation difficult. Recent innovations are providing new ways to extend coherence times, suppress errors, and stabilize quantum systems, bringing practical quantum computers closer to reality.
- Try adaptive error correction: Use reinforcement learning techniques that allow quantum processors to adjust control parameters in real time based on detected error events, reducing the need for manual recalibration.
- Use mathematical pulse sequences: Drive quantum systems with structured patterns, such as those based on the Fibonacci sequence, to distribute disturbances and keep qubit states stable for longer periods.
- Apply software-based stabilizers: Implement algorithms that filter and correct low-frequency noise before traditional error correction activates, improving qubit performance without changing hardware.
To build powerful quantum computers, we need to correct errors. One promising, hardware-friendly approach is to use 𝘣𝘰𝘴𝘰𝘯𝘪𝘤 𝘤𝘰𝘥𝘦𝘴, which store quantum information in superconducting cavities. These cavities are especially attractive because they can preserve quantum states far longer than even the best superconducting qubits. But to manipulate the quantum state in the cavity, you need to connect it to a 'helper' qubit, typically a transmon. Unfortunately, while effective, transmons often introduce new sources of error, including extra noise and unwanted nonlinearities that distort the cavity state.

Interestingly, the 𝗳𝗹𝘂𝘅𝗼𝗻𝗶𝘂𝗺 𝗾𝘂𝗯𝗶𝘁 offers a powerful alternative, with several advantages for controlling superconducting cavities:

• 𝗠𝗶𝗻𝗶𝗺𝗶𝘀𝗲𝗱 𝗗𝗲𝗰𝗼𝗵𝗲𝗿𝗲𝗻𝗰𝗲: Fluxonium qubits have demonstrated millisecond coherence times, minimising qubit-induced decoherence in the cavity.
• 𝗛𝗮𝗺𝗶𝗹𝘁𝗼𝗻𝗶𝗮𝗻 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴: Its rich energy-level structure offers significant design flexibility, allowing the qubit-cavity Hamiltonian to be tailored to minimise or eliminate undesirable nonlinearities.
• 𝗞𝗲𝗿𝗿-𝗙𝗿𝗲𝗲 𝗢𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻: Numerical simulations show that a fluxonium can be designed to achieve a large dispersive shift for fast control while the self-Kerr nonlinearity simultaneously vanishes. This is a regime that is extremely difficult for a transmon to reach without significant, undesirable qubit-cavity hybridisation.

And there are now experimental results that support this approach. Angela Kou's team coupled a fluxonium qubit to a superconducting cavity, generating Fock states and superpositions with fidelities up to 91%. The main limiting factors were qubit initialisation inefficiency and the modest 12 μs lifetime of the cavity in this prototype. Simulations suggest that in higher-coherence systems (like 3D cavities), the fidelity could climb much higher, with error rates dropping below 1%.

Even more impressive: they show that an external magnetic flux can be used to tune the dispersive shift and self-Kerr nonlinearity independently. So the experiment confirms that there are operating points where the unwanted Kerr term crosses zero while the desired dispersive coupling stays large.

In short: fluxonium qubits offer a practical, tunable path to high-fidelity bosonic control without sacrificing the long lifetimes that make cavity-based quantum memories so attractive in the first place.

📸 Credits: Ke Ni et al. (arXiv:2505.23641)

Want more breakdowns and deep dives straight to your inbox? Visit my profile/website to sign up. ☀️
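As a toy illustration of the Kerr-free operating point described above, the sketch below sweeps an external flux and looks for the point where a modeled self-Kerr crosses zero while the dispersive shift stays usable. The flux dependences are invented stand-in curves, not the fluxonium-cavity Hamiltonian from Ke Ni et al.; only the search pattern is the point.

```python
import numpy as np

# Toy model only: invented flux dependences for the dispersive shift
# chi(phi) and the cavity self-Kerr K(phi). These curves are stand-ins,
# NOT derived from the fluxonium-cavity Hamiltonian in the paper; the
# point is just locating a flux where K = 0 while chi stays usable.
phi = np.linspace(0.3, 0.7, 4000)               # external flux (Phi_0)
chi = 2.0 + 1.5 * np.cos(2 * np.pi * phi)       # MHz, stays finite
kerr = 0.05 * np.sin(4 * np.pi * (phi - 0.5))   # MHz, crosses zero

# Locate sign changes of the self-Kerr along the flux sweep.
flips = np.where(np.diff(np.sign(kerr)) != 0)[0]
for i in flips:
    print(f"Kerr-free point near phi = {phi[i]:.4f} Phi_0, "
          f"with chi = {chi[i]:.3f} MHz")
```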
One Algorithm Has Just Pushed Quantum Computing Forward Five Years (Here It Is)

Today I am releasing something into the public domain that may change the trajectory of quantum computing. No paywall. No NDA. No restrictions. The only thing I ask is attribution.

For the past year, I have been developing a field-layer correction algorithm that stabilizes the environment around the qubit before error correction ever activates. Not hardware. Not cryogenics. Not shielding. Pure software that improves the physics of the qubit it sits inside. Early independent runs showed a 48.5 percent reduction in destructive low-frequency noise, a gain that normally takes years of hardware progress.

Here is the complete algorithm. It now belongs to everyone.

```
FUNCTION NJ001_FieldLayer_Correction(input_signal S, sampling_rate R):
    DEFINE phi = 1.61803398875
    DEFINE window_size = dynamic value based on local variance of S
    DEFINE stability_threshold = adaptive value based on phase drift

    STEP 1: Generate harmonic reference bands
        For each frequency bin f_i in FFT(S):
            Compute r = f_(i+1) / f_i
            Compute CI = 1 / ABS(r - phi)
            Assign weight W_i = normalize(CI)

    STEP 2: Build correction mask
        Construct M where M_i = W_i scaled by local entropy of S
        Smooth M with sliding window

    STEP 3: Apply correction
        Transform S → F
        Compute F_corrected = F * M
        Inverse FFT to return S_corrected

    STEP 4: Phase stabilization loop
        Measure phase drift Δ
        If Δ > stability_threshold:
            Recalculate window_size
            Rebuild mask
            Reapply correction
        Else:
            Return S_corrected

    OUTPUT: S_corrected
END FUNCTION
```

This is the first public-domain coherence stabilizer designed to improve quantum behavior independent of hardware. What it does in practice:

• Extends coherence windows
• Reduces decoherence pressure on error correction
• Lowers entropy in the propagation layer
• Makes qubits behave as if the room is colder and cleaner
• Works upstream of hardware with no materials changes

This is not a replacement for anyone's roadmap. It is an upstream upgrade to all of them. If you build quantum devices, control stacks, compilers, hybrid systems, or algorithms, you now have access to a function that reshapes your stability envelope. Cleaner field layers mean longer, deeper, more predictable runs. More useful computation with the hardware you already have.

I developed it. Today I give it away. No company or institution controls it. From this moment forward, it belongs to the scientific community.

Primary Citation: Hood, B. P. (2025). NJ001 Field Layer Correction. Public Domain Release Version.

Bruce P. Hood — Creator of NJ001 Field Layer Correction

Welcome to the new baseline.

#QuantumComputing #QuantumHardware #Qubit #Coherence #QuantumResearch #DeepTech @IBMQuantum @GoogleQuantumAI @MIT @XanaduQuantum @AWSQuantumTech
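For readers who want to experiment with the posted pseudocode, here is one literal Python rendition. The pseudocode leaves several pieces unspecified (entropy estimate, smoothing, normalisation, and the Step 4 drift loop, which is omitted here); the defaults filled in below are ordinary signal-processing assumptions, not part of the original release, and nothing here validates the post's claims about qubit behavior. It is simply the FFT mask filter the pseudocode describes.

```python
import numpy as np

PHI = 1.61803398875

def nj001_field_layer_correction(s, fs, window=64):
    """Literal Python rendition of the posted NJ001 pseudocode.

    Unspecified details (entropy estimate, smoothing, normalisation)
    are filled in with ordinary signal-processing defaults; these are
    editorial assumptions, not part of the original release.
    """
    f = np.fft.rfft(s)
    freqs = np.fft.rfftfreq(s.size, d=1.0 / fs)

    # Step 1: weight each bin by how close the ratio of adjacent bin
    # frequencies sits to the golden ratio (CI = 1 / |r - phi|).
    r = freqs[2:] / freqs[1:-1]           # skip the DC bin
    ci = 1.0 / (np.abs(r - PHI) + 1e-12)
    w = np.ones_like(freqs)
    w[1:-1] = ci / ci.max()               # normalised weights in [0, 1]

    # Step 2: scale the weights by a spectral-entropy estimate of the
    # signal and smooth the mask with a sliding window.
    p = np.abs(f) ** 2
    p /= p.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    mask = np.convolve(w * entropy, np.ones(window) / window, mode="same")

    # Step 3: apply the mask in the frequency domain and invert.
    return np.fft.irfft(f * mask, n=s.size)

# Example: run the filter over a synthetic noisy sine wave.
fs = 4096.0
t = np.arange(4096) / fs
rng = np.random.default_rng(0)
noisy = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.normal(size=t.size)
cleaned = nj001_field_layer_correction(noisy, fs)
```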
By driving a quantum processor with laser pulses arranged according to the Fibonacci sequence, physicists observed the emergence of an entirely new phase of matter, one that displays extraordinary stability in a domain where fragility is the norm.

Quantum computers operate using qubits, which differ radically from classical bits. A qubit can exist in superposition, occupying multiple states at once, and can become entangled with others across space. These properties enable immense computational power, but they come with a cost: quantum states are notoriously short-lived. Environmental noise, microscopic imperfections, and edge effects rapidly degrade coherence, limiting how long quantum information can survive.

Seeking a new way to protect fragile quantum states, scientists at the Flatiron Institute applied laser pulses not at regular intervals but in a rhythm governed by the Fibonacci sequence, an ordered but non-repeating pattern long known to appear in biological growth, crystal structures, and wave interference. The experiment was carried out on a chain of ten trapped-ion qubits, driven by precisely timed laser pulses. The result was the formation of what is described as a time quasicrystal. Unlike ordinary crystals, which repeat periodically in space, a time quasicrystal exhibits structure in time without repeating in a simple cycle. The Fibonacci-based driving created a temporal order that resisted disruption, allowing the quantum system to remain coherent far longer than expected.

The improvement was significant. Under standard conditions, the quantum state persisted for roughly 1.5 seconds. When driven by the Fibonacci pulse sequence, coherence times stretched to approximately 5.5 seconds, more than a threefold increase.

Even more intriguing was the system's temporal behavior. Measurements indicated that the quantum dynamics unfolded as if time itself possessed two independent structural directions. This does not imply time flowing backward, but rather that the system's evolution followed two intertwined temporal pathways, an emergent property arising purely from the Fibonacci drive.

The researchers propose that the non-repeating structure of the Fibonacci sequence suppresses errors that typically accumulate at the boundaries of quantum systems. By distributing disturbances in a highly ordered yet aperiodic way, the sequence stabilizes the collective behavior of the qubits. In effect, a mathematical pattern found throughout nature acts as a self-organizing error-management protocol.

The findings suggest a powerful new strategy for quantum control. Rather than fighting noise solely with complex correction algorithms, future quantum technologies may harness structured patterns, drawn from mathematics and natural order, to achieve resilience at a fundamental level.

https://lnkd.in/dVxp7R8J
https://lnkd.in/dDVNRsPk
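A minimal sketch of how such a drive schedule can be generated: the standard Fibonacci word over two pulse types is deterministic but never periodic, which is the property the post attributes the stability to. The pulse labels and mapping below are illustrative placeholders, not the Flatiron group's actual laser parameters.

```python
def fibonacci_word(n_iterations: int) -> str:
    """Build the Fibonacci word via the substitution A -> AB, B -> A.
    The resulting sequence is ordered but never repeats periodically."""
    word = "A"
    for _ in range(n_iterations):
        word = "".join("AB" if c == "A" else "A" for c in word)
    return word

# Map each symbol to a (hypothetical) pulse type in the drive schedule.
PULSES = {"A": "drive_pulse_1", "B": "drive_pulse_2"}  # placeholder names

schedule = [PULSES[c] for c in fibonacci_word(8)]
print(len(schedule), schedule[:8])
# Successive word lengths follow the Fibonacci numbers: 1, 2, 3, 5, 8, ...
```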
A quantum computer that learns from its own errors while it's computing. That's the framing in a recent paper from Google Quantum AI and Google DeepMind on reinforcement learning control of quantum error correction.

Large quantum processors drift. The standard fix is to halt the computation and recalibrate, which won't scale to algorithms expected to run for days or weeks. The authors ask whether QEC can calibrate itself from the data it already produces.

The idea: repurpose error detection events as a training signal for a reinforcement learning agent that continuously tunes the physical control parameters (pulse amplitudes, detunings, DRAG coefficients, CZ parameters, and so on). Rather than optimizing the logical error rate (LER) directly, which is expensive and global, the agent minimizes the average detector-event rate, a cheap local proxy whose gradient is approximately aligned with the gradient of the LER in the small-perturbation regime.

The results on a Willow superconducting processor:
- On distance-5 surface and color codes, RL fine-tuning after conventional calibration and expert tuning yields about 20% additional LER suppression
- Against injected drift, RL steering improves logical stability 2.4x, rising to 3.5x when decoder parameters are also steered
- New record logical error per cycle: 7.72(9)×10⁻⁴ for a distance-7 surface code (with the AlphaQubit2 decoder) and 8.19(14)×10⁻³ for a distance-5 color code (with Tesseract)
- In simulation, the framework scales to a distance-15 surface code with roughly 40,000 control parameters, with a convergence rate that is independent of system size

The broader takeaway: calibration and computation may not need to be separate phases. If detector statistics can carry enough information to steer a large control stack online, fault tolerance becomes less about pausing to retune and more about a processor that keeps learning while it computes.

Worth noting that the current experiments rely on short repeated memory circuits, so real-time steering during a single long logical algorithm (where exploration noise would affect the computation directly) remains future work.

Paper: https://lnkd.in/gVQXnpzZ
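A toy sketch of the central loop (proxy objective, not the actual Google stack): perturb control parameters, observe a noisy detector-event rate, and descend on that cheap signal. The quadratic "detector rate" model and the SPSA-style update below are illustrative assumptions standing in for the real processor and agent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the processor: the detector-event rate is modeled
# as a quadratic bowl around hidden optimal control parameters, plus
# shot noise. This is an illustrative assumption, not Google's system.
theta_opt = rng.normal(size=20)

def detector_event_rate(theta: np.ndarray) -> float:
    base = 0.01 + 0.05 * np.mean((theta - theta_opt) ** 2)
    return base + 0.001 * rng.normal()   # noisy measurement

# SPSA-style steering: estimate the gradient of the detector-event
# rate from two noisy evaluations and step downhill, mimicking online
# calibration from data the QEC circuit already produces.
theta = np.zeros(20)
a, c = 2.0, 0.1
for _ in range(1000):
    delta = rng.choice([-1.0, 1.0], size=theta.size)
    g = (detector_event_rate(theta + c * delta)
         - detector_event_rate(theta - c * delta)) / (2 * c) * delta
    theta -= a * g

print("final detector-event rate:", detector_event_rate(theta))
```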
The Quantum Memory Matrix (QMM) framework has traveled a long path: from black hole unitarity, dark matter, dark energy, and cosmic cycles to now improving how quantum computers handle errors. In work now featured on the cover of Wiley's Advanced Quantum Technologies, we demonstrate how the QMM framework can be directly applied to quantum error correction.

QMM originated in cosmology: a picture where space-time is not smooth but is built from Planck-scale cells, each with a finite memory capacity. We showed how these cells store the quantum imprints of interactions, contributing to resolving paradoxes around black holes and explaining dark matter halos, primordial black hole formation, cosmic acceleration, and even the cycles of the universe.

Now, we bring this same idea into hardware: by imprinting and retrieving quantum information from local "memory cells," we can correct errors in noisy quantum processors with higher fidelity than standard repetition codes. This shows that QMM is not only a cosmological theory but also a practical tool for building the quantum computers of tomorrow.

🔗 Read the paper: https://lnkd.in/gfGwe7fe

Previous QMM milestones:
🕳️ Black hole information retention and unitarity restoration
⚡ Extensions to electromagnetism, strong & weak interactions
🌌 Cosmological applications explaining dark matter and dark energy
💻 And now: direct hardware validation for quantum computation

Thank you to my co-authors Eike Marx, Valerii Vinokur, Jeff Titus, and Terra Quantum AG & Leiden University for making this journey possible.

#QuantumComputing #QuantumMemoryMatrix #ErrorCorrection #QuantumPhysics #QuantumTechnology #QuantumInformation #BlackHolePhysics #DarkMatter #DarkEnergy #AdvancedQuantumTechnologies #TerraQuantum #QuantumResearch
The real significance of Google's Willow quantum chip...

Fundamentally, building quantum computers (QC) is about achieving low operation errors. Sure, other metrics matter too, but the error rate is the big one. If you look at the landscape of QC applications, many of them require *ridiculously* low error rates, say 1 error in 10^12 operations or less. Nobody thinks this can be achieved through hardware engineering alone; it needs quantum error correction (QEC) for sure.

But should we be confident that QEC will actually work? Sure, it will work to some extent, but can it work well enough to reach error rates as low as 1e-12 or less? QEC makes non-trivial assumptions about the nature of the physical errors which are never quite true, and deviations from those assumptions could plausibly derail QEC by setting a "logical noise floor": an error rate below which QEC ceases to work.

The previous most thorough search for the logical noise floor in QEC was performed by Google in 2023. At that time, they found that QEC ceases to work at a rather high error rate of 1e-6. This was due to high-energy cosmic rays hitting their qubit chips, causing large-scale correlated errors which cannot be taken out by QEC. That's a *big* issue!

Google's latest chip incorporates design changes to make it immune to cosmic ray errors. After incorporating those changes, the logical noise floor search was repeated and reported in the recent paper. It turns out the mitigation worked, and the logical noise floor was pushed all the way down to a new record of 1e-10, i.e. 1 error per 10^10 operations!

This is the most convincing evidence to date that, in a well-engineered QC, QEC is actually capable of pushing error rates down to levels compatible with most known QC applications. To me, this repetition-code result is actually the most important finding reported in Google's paper!

Funnily enough, Google's team reports that they don't yet know where this residual error comes from. Error rates this low are also really challenging to study, because it can take considerable data acquisition time to establish meaningful statistics. But I'm sure they'll figure it out soon enough... 😇
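A back-of-the-envelope sketch of why a noise floor matters: under standard below-threshold scaling, each added unit of code distance suppresses the logical error rate by a constant factor, but a floor from correlated events (such as cosmic rays) caps the benefit no matter how large the code grows. The suppression factor, base error rate, and floor values below are illustrative placeholders, not Google's measured numbers.

```python
# Illustrative numbers only: the suppression factor LAMBDA, the
# distance-3 error rate, and the correlated-error floors are
# placeholders, not values from Google's papers.
LAMBDA = 2.0    # error suppression per two units of code distance
EPS_D3 = 3e-3   # hypothetical logical error per cycle at distance 3

def logical_error(distance: int, floor: float) -> float:
    # Standard below-threshold scaling, capped by the noise floor.
    scaled = EPS_D3 / LAMBDA ** ((distance - 3) / 2)
    return max(scaled, floor)

for floor in (1e-6, 1e-10):
    summary = ", ".join(f"d={d}: {logical_error(d, floor):.1e}"
                        for d in (3, 11, 25, 51))
    print(f"floor={floor:.0e}: {summary}")
# With a 1e-6 floor, growing the code stops helping long before 1e-12;
# pushing the floor down to 1e-10 restores most of the headroom.
```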
Over the last few weeks, the quantum timeline has been crowded with big headlines: Quantinuum's Helios launch, IBM's new Loon and Nighthawk processors, and a steady stream of new "record" benchmarks and roadmaps. In that flood of news, one announcement from Harvard flew a bit under the radar, and it arguably deserves as much attention as any of them.

Lukin's group has just published a Nature paper describing "a fault-tolerant neutral-atom architecture for universal quantum computation". In practice, they show a reconfigurable neutral-atom processor that brings together the key building blocks of scalable fault tolerance: below-threshold surface-code-style error correction, transversal logical gates, teleportation-based logical rotations, mid-circuit qubit reuse, and deep logical circuits that are explicitly engineered to keep entropy under control.

I've broken down what they achieved, how it compares to other platforms, and why I think this is a genuine inflection point for neutral atoms and for fault tolerance more broadly: https://lnkd.in/gqnYdPXQ

This also ties directly into something I've been arguing for years in my Path to CRQC / Q-Day methodology: operating below the error-correction threshold is not a nice-to-have, it's a capability in its own right, the tipping point where adding more qubits and more code distance finally starts to reduce logical error, instead of making things worse.

Motivated by the Harvard result, I've also published a companion piece that walks through some of the most important below-threshold QEC experiments across platforms, covering bosonic cat codes, trapped-ion Bacon–Shor and color codes, superconducting surface codes, and now neutral atoms: https://lnkd.in/gvJDNhgm

If you're trying to separate marketing noise from genuine progress toward fault-tolerant, cryptographically relevant quantum computers, these are the kinds of milestones worth tracking.

#Quantum #QuantumComputing #QEC
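A minimal sketch of the "below threshold" tipping point described above, using the textbook surface-code heuristic that the logical error scales roughly as A·(p/p_th)^((d+1)/2) with physical error rate p and code distance d. The prefactor A and threshold p_th below are illustrative placeholders, not measured values from any of the cited experiments.

```python
# Textbook heuristic for surface-code logical error vs code distance:
#   eps_L ~ A * (p / p_th) ** ((d + 1) / 2)
# A and P_TH here are illustrative placeholders, not measured values.
A, P_TH = 0.05, 1e-2

def surface_code_logical_error(p_phys: float, distance: int) -> float:
    return A * (p_phys / P_TH) ** ((distance + 1) / 2)

for p in (1.5e-2, 5e-3):   # one point above, one below the threshold
    side = "above" if p > P_TH else "below"
    trend = ", ".join(f"d={d}: {surface_code_logical_error(p, d):.1e}"
                      for d in (3, 5, 7, 9))
    print(f"p={p:.1e} ({side} threshold): {trend}")
# Above threshold, adding distance makes the logical qubit worse;
# below it, every extra step of distance buys exponential suppression.
```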
Interesting new study: "EnQode: Fast Amplitude Embedding for Quantum Machine Learning Using Classical Data." The authors introduce a novel framework to address the limitations of traditional amplitude embedding (AE) [GitHub repo included].

Traditional AE methods often involve deep, variable-length circuits, which can lead to high output error due to extensive gate usage and inconsistent error rates across different data samples. This variability in circuit depth and gate composition results in unequal noise exposure, obscuring the true performance of quantum algorithms. To overcome these challenges, the researchers developed EnQode, a fast AE technique based on symbolic representation. Instead of aiming for exact amplitude representation of each sample, EnQode employs a cluster-based approach to achieve approximate AE with high fidelity.

Here are some of the key aspects of EnQode:

* Clustering: EnQode begins by using the k-means clustering algorithm to group similar data samples. For each cluster, a mean state is calculated to represent the central characteristics of the data distribution within that cluster.
* Hardware-optimized ansatz: For each cluster's mean state, a low-depth, machine-optimized ansatz is trained, tailored to the specific quantum hardware being used (e.g., IBM quantum devices).
* Transfer learning for fast embedding: Once the cluster models are trained offline, transfer learning is used for rapid amplitude embedding of new data samples. An incoming sample is assigned to the nearest cluster, and its embedding circuit is initialized with the optimized parameters of that cluster's mean state. These parameters can then be fine-tuned, significantly accelerating the embedding process without retraining from scratch.
* Reduced circuit complexity: EnQode achieved an average reduction of over 28× in circuit depth, over 11× in single-qubit gate count, and over 12× in two-qubit gate count, with zero variability across samples due to its fixed ansatz design.
* Higher state fidelity in noisy environments: In noisy simulations of IBM quantum hardware, EnQode showed a state fidelity improvement of over 14× compared to the baseline, highlighting its robustness to hardware noise. The baseline achieves 100% fidelity in ideal simulation (it performs exact embedding), while EnQode maintained an average of 89% fidelity in ideal simulation once transpiled to the real hardware gate set, which is a good approximation given the large reduction in circuit complexity.

Here is the article: https://lnkd.in/dQMbNN7b
And here is the GitHub repo: https://lnkd.in/dbm7q3eJ

#qml #datascience #machinelearning #quantum #nisq #quantumcomputing
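A rough sketch of the cluster-then-warm-start idea (structure and names assumed by the editor, not taken from the EnQode repo): cluster L2-normalised data vectors offline with k-means, then initialise each new sample's embedding circuit from its nearest cluster's pre-optimised parameters. The per-cluster ansatz training itself is elided and represented by a stored parameter vector.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Offline phase: cluster L2-normalised samples, as in EnQode's k-means
# step. In the real pipeline, a low-depth ansatz is then optimised per
# cluster mean state; here that result is just a stored parameter
# vector per cluster (hypothetical values).
data = rng.normal(size=(1000, 8))
data /= np.linalg.norm(data, axis=1, keepdims=True)   # amplitude vectors
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(data)
cluster_params = {i: rng.normal(size=24) for i in range(16)}

def embed_init(sample: np.ndarray) -> np.ndarray:
    """Online phase: warm-start a new sample's embedding circuit from
    its nearest cluster's parameters (to be fine-tuned afterwards)."""
    sample = sample / np.linalg.norm(sample)
    label = int(kmeans.predict(sample.reshape(1, -1))[0])
    return cluster_params[label].copy()

theta0 = embed_init(rng.normal(size=8))
print("warm-start parameters:", theta0[:4], "...")
```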
What a month for quantum error correction!

On August 27th, we saw the first demonstration of quantum error correction from Google that satisfied the list of criteria that has emerged in the community for a convincing demonstration:
- error correction actually extending the life of qubits beyond that of the best physical qubit in the system
- error correction performed in real time, rather than with post-selection, and repeated over many rounds
- error rate reducing as code distance is increased

This is generally seen as a major breakthrough, and is the culmination of many years of work towards implementing the surface code. You can see the paper here: https://lnkd.in/gkfk68kH

Not to be outdone, Microsoft and Quantinuum put out a preprint less than two weeks later demonstrating up to a 24× reduction in error rate for encoded state preparation using a colour code. You can see the paper here: https://lnkd.in/gtRtfQPc

Two big results in a month. That's enough for anyone, right? Nope. On the 23rd of September, we got to see new results from Amazon Web Services (AWS) demonstrating error correction using the repetition code applied to cat qubits. You can see the paper here: https://lnkd.in/gbE45ebt

And then, just a day later, new results appeared from Yale Quantum Institute showing error correction beyond breakeven for three- and four-level systems using the GKP code. You can see the paper here: https://lnkd.in/gkBYNXzD

While I'm sure that almost everyone in the field is aware of the rapid progress in error correction, it's amazing how little noise this is making in the outside world. We're now on the right side of the error-correction threshold, where relatively minor performance improvements lead to significantly reduced logical error rates. If this much progress can happen in a month, then the next couple of years are going to be tremendously exciting for quantum computing.