How to Increase Quantum Computing Reliability

Explore top LinkedIn content from expert professionals.

Summary

Quantum computing reliability refers to the ability of quantum computers to deliver accurate, repeatable results by minimizing errors due to noise, imperfect control, or environmental disturbances. Recent advancements focus on new hardware designs and smarter error correction to make quantum systems more dependable for complex calculations.

  • Adopt smarter algorithms: Use software-based approaches, like field-layer correction, to stabilize qubits and reduce errors without needing changes to physical hardware.
  • Refine error correction: Implement advanced error correction methods, such as partial quantum error correction or new encoding schemes, to balance error reduction with hardware limitations.
  • Upgrade hardware design: Explore innovative qubit types, improved control techniques, and architectures with better connectivity to naturally resist noise and simplify error management.
Summarized by AI based on LinkedIn member posts
  • View profile for Bruce P Hood

    CEO & Inventor | Stability & Coherence | 20K+

    20,503 followers

    One Algorithm Has Just Pushed Quantum Computing Forward Five Years (Here It Is)

    Today I am releasing something into the public domain that may change the trajectory of quantum computing. No paywall. No NDA. No restrictions. The only thing I ask is attribution.

    For the past year, I have been developing a field-layer correction algorithm that stabilizes the environment around the qubit before error correction ever activates. Not hardware. Not cryogenics. Not shielding. Pure software that improves the physics of the qubit it sits inside. Early independent runs showed a 48.5 percent reduction in destructive low-frequency noise, a gain that normally takes years of hardware progress.

    Here is the complete algorithm. It now belongs to everyone.

    FUNCTION NJ001_FieldLayer_Correction(input_signal S, sampling_rate R):
      DEFINE phi = 1.61803398875
      DEFINE window_size = dynamic value based on local variance of S
      DEFINE stability_threshold = adaptive value based on phase drift
      STEP 1: Generate harmonic reference bands
        For each frequency bin f_i in FFT(S):
          Compute r = f_(i+1) / f_i
          Compute CI = 1 / ABS(r - phi)
          Assign weight W_i = normalize(CI)
      STEP 2: Build correction mask
        Construct M where M_i = W_i scaled by local entropy of S
        Smooth M with sliding window
      STEP 3: Apply correction
        Transform S → F
        Compute F_corrected = F * M
        Inverse FFT to return S_corrected
      STEP 4: Phase stabilization loop
        Measure phase drift Δ
        If Δ > stability_threshold:
          Recalculate window_size
          Rebuild mask
          Reapply correction
        Else:
          Return S_corrected
      OUTPUT: S_corrected
    END FUNCTION

    This is the first public-domain coherence stabilizer designed to improve quantum behavior independent of hardware. What it does in practice:
    • Extends coherence windows
    • Reduces decoherence pressure on error correction
    • Lowers entropy in the propagation layer
    • Makes qubits behave as if the room is colder and cleaner
    • Works upstream of hardware with no materials changes

    This is not a replacement for anyone's roadmap. It is an upstream upgrade to all of them. If you build quantum devices, control stacks, compilers, hybrid systems, or algorithms, you now have access to a function that reshapes your stability envelope. Cleaner field layers mean longer, deeper, more predictable runs. More useful computation with the hardware you already have.

    I developed it. Today I give it away. No company or institution controls it. From this moment forward, it belongs to the scientific community.

    Primary Citation: Hood, B. P. (2025). NJ001 Field Layer Correction. Public Domain Release Version.

    Bruce P. Hood — Creator of NJ001 Field Layer Correction. Welcome to the new baseline.

    #QuantumComputing #QuantumHardware #Qubit #Coherence #QuantumResearch #DeepTech @IBMQuantum @GoogleQuantumAI @MIT @XanaduQuantum @AWSQuantumTech
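    For readers who want to experiment with the posted pseudocode on classical signal data, below is a minimal Python/NumPy sketch of it. The post leaves the window sizing, the "local entropy" measure, and the drift metric unspecified, so the heuristics chosen here (variance-scaled window, spectral entropy, mean phase change) are assumptions, and nothing in this sketch has been validated on qubit hardware.

    import numpy as np

    PHI = 1.61803398875

    def nj001_field_layer_correction(s, rate, max_iters=10, drift_threshold=0.1):
        """Sketch of the NJ001 pseudocode above; all unstated heuristics assumed."""
        s = np.asarray(s, dtype=float)
        s_corr = s
        for _ in range(max_iters):
            var = np.var(s)
            window = int(np.clip(len(s) * var / (1.0 + var), 3, 31)) | 1  # odd width
            f = np.fft.rfft(s)
            freqs = np.fft.rfftfreq(len(s), d=1.0 / rate)
            # STEP 1: weight bins by how close adjacent-bin ratios sit to phi
            r = freqs[2:] / freqs[1:-1]          # skip the DC bin to avoid 0/0
            ci = 1.0 / np.maximum(np.abs(r - PHI), 1e-9)
            w = np.zeros(len(f))
            w[1:-1] = ci / ci.max()
            # STEP 2: scale weights by the spectral entropy of S, then smooth
            p = np.abs(f) ** 2
            p = p / p.sum()
            entropy = -np.sum(p * np.log(p + 1e-12))
            mask = np.convolve(w * entropy, np.ones(window) / window, mode="same")
            # STEP 3: apply the correction mask in the frequency domain
            s_corr = np.fft.irfft(f * mask, n=len(s))
            # STEP 4: iterate until the phase drift falls below the threshold
            drift = np.mean(np.abs(np.angle(np.fft.rfft(s_corr)[1:]) - np.angle(f[1:])))
            if drift < drift_threshold:
                break
            s = s_corr
        return s_corr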

  • View profile for Keith King

    Former White House Lead Communications Engineer, U.S. Dept of State, and Joint Chiefs of Staff in the Pentagon. Veteran U.S. Navy, Top Secret/SCI Security Clearance. 16,000+ direct connections & 43,000+ followers.

    43,801 followers

    MIT Sets Quantum Computing Record with 99.998% Fidelity

    Researchers at MIT have achieved a world-record single-qubit fidelity of 99.998% using a superconducting qubit known as fluxonium. This breakthrough represents a significant step toward practical quantum computing by addressing one of the field's greatest challenges: mitigating the noise and control imperfections that lead to operational errors.

    Key Highlights:

    1. The Problem: Noise and Errors
    • Qubits, the building blocks of quantum computers, are highly sensitive to noise and imperfections in control mechanisms.
    • Such disturbances introduce errors that limit the complexity and duration of quantum algorithms. "These errors ultimately cap the performance of quantum systems," the researchers noted.

    2. The Solution: Two New Techniques
    To overcome these challenges, the MIT team developed two innovative techniques:
    • Commensurate Pulses: timing quantum pulses so that each gate spans a whole number of qubit oscillation periods, which makes counter-rotating errors uniform and therefore correctable (see the sketch after this post).
    • Circularly Polarized Microwaves: by creating a synthetic version of circularly polarized light, the team improved control of the qubit's state, further enhancing fidelity.
    "Getting rid of these errors was a fun challenge for us," said David Rower, PhD '24, one of the study's lead researchers.

    3. Fluxonium Qubits and Their Potential
    • Fluxonium qubits are superconducting circuits with unique properties that make them more resistant to environmental noise than traditional qubits.
    • By applying the new error-mitigation techniques, the team unlocked fluxonium's potential to operate at near-perfect fidelity.

    4. Implications for Quantum Computing
    • Achieving 99.998% fidelity (an error rate of 2 in 100,000 operations) significantly reduces errors in quantum operations, paving the way for more complex and reliable quantum algorithms.
    • This milestone represents a major step toward scalable quantum computing systems capable of solving real-world problems.

    What's Next? The team plans to extend this work to multi-qubit systems and to integrate the error-mitigation techniques into larger quantum architectures. Such advances could accelerate progress toward error-corrected, fault-tolerant quantum computers.

    Conclusion: A Leap Toward Practical Quantum Systems. MIT's achievement underscores the importance of innovation in error suppression and control for overcoming the fundamental challenges of quantum computing. It brings us closer to large-scale quantum systems that could transform fields such as cryptography, materials science, and complex optimization.
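    To make the first technique concrete, here is a toy Python sketch of the commensurate-pulse idea: snapping each gate duration to a whole number of qubit oscillation periods, so the counter-rotating error is identical from gate to gate and can be calibrated away. The qubit frequency and pulse length below are illustrative assumptions, not MIT's actual parameters.

    def commensurate_duration(t_target, f_qubit):
        """Snap a pulse length to a whole number of qubit Larmor periods."""
        period = 1.0 / f_qubit
        k = max(1, round(t_target / period))  # nearest whole number of periods
        return k * period

    # Example: a ~21 ns target pulse on a 250 MHz (fluxonium-like) qubit
    t = commensurate_duration(21e-9, 250e6)
    print(f"pulse = {t * 1e9:.1f} ns = {round(t * 250e6)} qubit periods")
    # -> pulse = 20.0 ns = 5 qubit periods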

  • View profile for Michael Biercuk

    Helping make quantum technology useful for enterprise, aviation, defense, and R&D | CEO & Founder, Q-CTRL | Professor of Quantum Physics & Quantum Technology | Innovator | Speaker | TEDx | SXSW

    8,507 followers

    🚨 Exciting #quantumcomputing alert! #QEC primitives now actually make #quantumcomputers more powerful: a 75-qubit GHZ state on a superconducting #QPU 🚨

    In our latest work we address the elephant in the room about #quantumerrorcorrection: in the current era, where qubit counts are the bottleneck in available systems, adopting full-blown QEC can be a step backwards in computational capacity. Even when it delivers net benefits in error reduction, QEC consumes a lot of qubits to do so, and we just don't have enough right now.

    So how do we maximize value for end users while still pushing hard on the underpinning QEC technology? To answer this, the team at Q-CTRL set out to find new ways to significantly reduce the overhead penalties of QEC while still delivering big benefits.

    In this latest demonstration we show that we can adopt parts of QEC -- indirect stabilizer measurements on ancilla qubits -- to deliver large performance gains without the painful overhead of logical encoding. By combining error detection with deterministic error suppression we can greatly improve the efficiency of the process, requiring only about 10% overhead in ancilla qubits while maintaining a very low discard rate for executions in which errors are identified!

    Using this approach we've set a new record for the largest demonstrated entangled state, 75 qubits on an IBM quantum computer (validated by MQC), and also demonstrated a totally new way to teleport gates across large distances (where all-to-all connectivity isn't possible).

    The results outperform all previously published approaches and highlight that our journey in dealing with errors in quantum computers is continuous. Of course it isn't a panacea, and in the long term, as we tackle even more complex algorithms, we believe logical encoding will become an important part of our toolbox. But that's the point: logical QEC is just one tool, and we have many to work with!

    At Q-CTRL we never lose sight of the fact that our objective is to deliver maximum capability to QC end users. This work on deploying QEC primitives is a core part of how we're making quantum technology useful, right now. https://lnkd.in/gkG3W7eE
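    A minimal illustration of the primitive described above, an indirect stabilizer measurement with error detection and discard, written in generic Qiskit (qiskit and qiskit-aer assumed installed). This is a two-qubit toy for intuition, not Q-CTRL's 75-qubit implementation.

    from qiskit import QuantumCircuit
    from qiskit_aer import AerSimulator

    qc = QuantumCircuit(3, 3)
    qc.h(0)
    qc.cx(0, 1)            # data qubits 0 and 1 form a 2-qubit GHZ (Bell) state
    qc.cx(0, 2)
    qc.cx(1, 2)            # ancilla qubit 2 now holds the ZZ parity of the data
    qc.measure(2, 2)       # indirect stabilizer measurement via the ancilla
    qc.measure([0, 1], [0, 1])

    counts = AerSimulator().run(qc, shots=10_000).result().get_counts()
    # Error detection: keep only shots whose ancilla bit (leftmost in Qiskit's
    # bit ordering) reads 0; shots flagged 1 are discarded as erroneous.
    kept = {bits: n for bits, n in counts.items() if bits[0] == "0"}
    print(kept, "discard rate:", 1 - sum(kept.values()) / 10_000)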

  • View profile for Michaela Eichinger, PhD

    Product Solutions Physicist @ Quantum Machines | I talk about quantum computing.

    16,208 followers

    Who has the best quantum processor today? Ask the physics community quietly and many will say: Quantinuum. 𝗡𝗼𝘁 IBM Quantum. 𝗡𝗼𝘁 Google. 𝗡𝗼𝘁 IonQ.

    It's easy to get caught up in roadmaps, qubit counts or quantum advantage headlines. But the real turning point for the field currently isn't about scale. It's about fault tolerance - detecting and correcting quantum errors faster than they accumulate. Through that lens, Quantinuum's H-Series trapped-ion system stands apart. Here's why:

    • 𝗥𝗲𝗰𝗼𝗿𝗱-𝗛𝗶𝗴𝗵 𝗚𝗮𝘁𝗲 𝗙𝗶𝗱𝗲𝗹𝗶𝘁𝗶𝗲𝘀: The H-Series delivered sustained >99.9% two-qubit and >99.99% single-qubit gate fidelities. This is the quality baseline needed to give any QEC code a chance to work.
    • 𝗟𝗼𝗴𝗶𝗰𝗮𝗹 𝗕𝗿𝗲𝗮𝗸-𝗘𝘃𝗲𝗻: They've repeatedly demonstrated logical qubits that are more reliable than the physical hardware they're built from - the first milestone for practical quantum computing.
    • 𝗨𝗻𝗶𝘃𝗲𝗿𝘀𝗮𝗹 𝗚𝗮𝘁𝗲 𝗢𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻𝘀: Achieved logical gate fidelity an order of magnitude better than physical fidelity on 𝗻𝗼𝗻-𝗖𝗹𝗶𝗳𝗳𝗼𝗿𝗱 𝗴𝗮𝘁𝗲𝘀, which are the hardest operations to perform fault-tolerantly and essential for universal quantum computing.
    • 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗮𝗹 𝗔𝗱𝘃𝗮𝗻𝘁𝗮𝗴𝗲: All-to-all connectivity. The Quantum Charge-Coupled Device (QCCD) architecture uses ion shuttling to connect every qubit to every other qubit.
    • 𝗧𝗵𝗲 𝗤𝗘𝗖 𝗧𝗲𝘀𝘁𝗯𝗲𝗱: This architecture allows them to deploy a diverse range of QEC codes (Steane, Carbon, Tesseract) and test protocols like single-shot QEC and fault-tolerant teleportation. It is literally built to explore and accelerate the FTQC roadmap.

  • View profile for Laurent Prost

    Product Manager at Alice & Bob

    5,883 followers

    Google's Willow chip shows that quantum error correction is starting to work. Just "starting", because while the ~1e-3 error rate reached by Willow is good, comparable rates have been achieved by others without error correction.

    So how do we get error rates we couldn't reach with physical qubits alone? Easy: you "just" add more physical qubits to your logical qubit. But because quantum computers suffer errors along two dimensions (bit flips and phase flips), a 2D structure, the surface code, is usually required to correct them. This means that increasing protection against errors causes the number of qubits to grow quickly. With a surface code, protecting against 1 error at a time during an error correction cycle requires 17 qubits. 2 errors at a time? 49 qubits. 3 errors at a time? 97 qubits. This is the maximum Willow could achieve. This quadratic scaling leads Google to expect that reaching a 1e-6 error rate on a Willow-like chip will require some 1,457 physical qubits (protecting against 13 errors at a time).

    And this is why Alice & Bob is going for cat qubits instead. By suppressing bit-flip errors at the hardware level, cat qubits reduce error correction from a 2D to a 1D problem, which makes the scaling of error rates much more favorable. Even with the simplest error correction code (a repetition code), correcting one error at a time requires only 5 qubits. 2 errors? 9 qubits. 3 errors? 13 qubits. 13 errors? Just 53 qubits instead of 1,457! (A short sketch of this arithmetic follows after this post.)

    This situation is summarized in the graph from our white paper (link in the first comment), to which I added a point for the biggest Willow experiment.

    Now, to be fair, Alice & Bob still needs to release the results of even a 5-qubit experiment. But when that is done, there is a fair chance the error rates will quickly catch up with those achieved by Google and others, because so few additional qubits are required to improve them.

    There are big challenges on both sides. Mastering cat qubits is hard. Scaling chips is hard. But consistent progress is being made on both sides too. Anyway, I can't wait for the moment when I can add the Alice & Bob equivalent of the Willow experiment to the chart. And for once, I hope it will be up and to the left!
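    The qubit counts quoted in the post follow from two simple formulas, reproduced in this short Python sketch: correcting t errors per cycle needs code distance d = 2t + 1; a rotated surface code then uses 2d² − 1 qubits (data plus measure qubits), while a repetition code uses just 2d − 1.

    def surface_code_qubits(t):
        d = 2 * t + 1            # distance needed to correct t errors
        return 2 * d * d - 1     # rotated surface code: data + measure qubits

    def repetition_code_qubits(t):
        d = 2 * t + 1
        return 2 * d - 1         # 1D repetition code: data + ancilla qubits

    for t in (1, 2, 3, 13):
        print(f"t={t}: surface={surface_code_qubits(t)}, "
              f"repetition={repetition_code_qubits(t)}")
    # t=1: surface=17, repetition=5
    # t=2: surface=49, repetition=9
    # t=3: surface=97, repetition=13
    # t=13: surface=1457, repetition=53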

  • View profile for Zlatko Minev

    Google Quantum AI | MIT TR35 | Ex-Team & Tech Lead, Qiskit Metal & Qiskit Leap, IBM Quantum | Founder, Open Labs | JVA | Board, Yale Alumni

    26,206 followers

    So why don't quantum computers work perfectly right out of the box? Even when running a simple quantum circuit on real hardware, various sources of noise cause the measured results to decay away from their ideal values. Incoherent noise, miscalibrations, and measurement errors all pile up and degrade the signal quickly with circuit depth.

    Error mitigation offers a clever way to recover accurate results without the overhead of full quantum error correction. The idea behind probabilistic error cancellation is a bit counterintuitive: if you can learn how noise is affecting your gates, you can deliberately inject extra operations that cancel those errors out on average. You end up needing more circuit runs to compensate for the added randomness, but in return you get results that are free of systematic bias.

    I covered these topics and more in a lecture at the 2024 Near-Term Quantum Algorithms Summer School. It starts from a single qubit and builds up with worked examples and derivations along the way. The goal is to keep each topic approachable no matter one's individual background. Slides are available at: zlatko-minev.com/education

    #QuantumComputing #ErrorMitigation #Physics #NISQ #Science
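    Here is a self-contained NumPy toy of probabilistic error cancellation for a single qubit under depolarizing noise. The channel, its strength p, and the quasi-probability decomposition of its inverse are illustrative assumptions, not material from the lecture, but they show the mechanism: sample corrections with signed weights, and pay a variance overhead that grows with gamma.

    import numpy as np

    rng = np.random.default_rng(7)
    p = 0.05                     # assumed depolarizing error probability
    lam = 1 - 4 * p / 3          # Bloch-vector shrink factor of the channel

    # Quasi-probability decomposition of the INVERSE depolarizing channel:
    # D^{-1} = q_I * (identity) + q_P * (conjugation by each of X, Y, Z).
    q_P = (1 - 1 / lam) / 4      # negative once p > 0
    q_I = 1 - 3 * q_P
    quasi = np.array([q_I, q_P, q_P, q_P])
    gamma = np.abs(quasi).sum()  # sampling overhead; variance grows ~ gamma**2

    def noisy_z_shot():
        """One <Z> shot on |0> after depolarizing noise (Bloch-vector toy)."""
        z = 1.0
        if rng.random() < p:
            if rng.integers(3) < 2:  # an X or Y error flips the Z component
                z = -z
        return 1 if rng.random() < (1 + z) / 2 else -1

    # PEC estimator: sample a correction Pauli with probability |q_i| / gamma,
    # apply it (X and Y flip a Z-basis outcome), weight by gamma * sign(q_i).
    probs, signs = np.abs(quasi) / gamma, np.sign(quasi)
    shots, total = 100_000, 0.0
    for _ in range(shots):
        i = rng.choice(4, p=probs)
        z = noisy_z_shot()
        if i in (1, 2):          # correction was X or Y: flips the Z outcome
            z = -z
        total += signs[i] * gamma * z

    print(f"noisy <Z> ~ {lam:.4f}, PEC estimate ~ {total / shots:.4f}, ideal = 1")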

  • But in a new study, published May 7 in the journal Nature Communications Materials, researchers proposed using a new, pure form of silicon — the semiconductor material used in conventional computers — as the basis for a qubit that is far more scalable than existing technologies.

    Building qubits from semiconducting materials like silicon, gallium or germanium has advantages over superconducting metal qubits, according to the quantum computing company QuEra: the coherence times are relatively long, they are cheap to make, they operate at higher temperatures, and they are extremely tiny, meaning a single chip can hold huge numbers of qubits. But impurities in semiconducting materials cause decoherence during computations, which makes them unreliable.

    In the new study, the scientists proposed making a qubit out of silicon-28 (Si-28), which they described as the "world's purest silicon," after stripping away the impurities found in natural silicon. These silicon-based qubits would be less prone to failure, they said, and could be fabricated at the size of a pinhead.

    Natural silicon is normally made up of three isotopes, or atoms of different masses: Si-28, Si-29 and Si-30. Natural silicon works well in conventional computing due to its metalloid properties, but problems arise when using it in quantum computing. Si-29 in particular, which makes up 5% of natural silicon, causes a "nuclear flip-flopping effect" that leads to decoherence and the loss of information. In the study, the scientists got around this by developing a new method to engineer silicon without Si-29 and Si-30 atoms.

    "Now that we can produce extremely pure silicon-28, our next step will be to demonstrate that we can sustain quantum coherence for many qubits simultaneously," project co-supervisor David Jamieson, professor of physics at the University of Melbourne, said in the statement. "A reliable quantum computer with just 30 qubits would exceed the power of today's supercomputers for some applications." https://lnkd.in/gAUmAcdd

  • View profile for Giovanni Nicolai

    I am an active and curious mind looking for outstanding opportunities.

    3,087 followers

    SCIENTISTS FED THE FIBONACCI SEQUENCE INTO A QUANTUM COMPUTER AND SOMETHING STRANGE HAPPENED. The results were striking: the drive makes the system behave as if it had two directions of time. By applying the mathematical elegance of the Fibonacci sequence to quantum hardware, researchers have created a new phase of matter that preserves quantum data nearly four times longer.

    Physicists have achieved a major breakthrough in quantum computing by using laser pulses patterned after the Fibonacci sequence to create a stable new phase of matter. In an experiment involving a lineup of ten atoms, researchers at the Flatiron Institute discovered that driving qubits with this mathematical rhythm allowed them to maintain their quantum state for an impressive 5.5 seconds, nearly four times longer than standard methods. This remarkable stability stems from the quasi-periodic nature of the Fibonacci sequence, which effectively creates a temporal "quasicrystal" that organizes the drive without ever repeating it, shielding the system from the environmental noise that typically crashes quantum calculations.

    The most mind-bending aspect of this discovery is its temporal structure. Lead author Philipp Dumitrescu explains that the Fibonacci pulses make the system behave as if it existed in two distinct directions of time simultaneously. This complex temporal structure acts as a protective barrier, canceling out the errors that usually live on the edges of the quantum array. By mitigating the extreme fragility of qubits, this "two-time" approach provides a much-needed path toward developing reliable, large-scale quantum computers capable of solving problems that are currently impossible for classical machines.

    Source: Dumitrescu, P. T., et al. (2022). Dynamical topological phases realized in a trapped-ion quantum simulator. Nature.
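    The quasi-periodic pulse ordering at the heart of the experiment is easy to generate. The Python sketch below builds the Fibonacci word via the substitution rule A → AB, B → A; A and B stand in for the two laser pulse types, whose actual Hamiltonians are not reproduced here. Because the pattern never repeats, the pulse train acts as a "quasicrystal in time."

    def fibonacci_word(n_pulses):
        """First n_pulses symbols of the Fibonacci word (A -> AB, B -> A)."""
        word = "A"
        while len(word) < n_pulses:
            # apply the substitution rule to every symbol in parallel
            word = "".join("AB" if c == "A" else "A" for c in word)
        return word[:n_pulses]

    print(fibonacci_word(21))  # ABAABABAABAABABAABABA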

  • View profile for Jaime Gómez García

    Global Head of Santander Quantum Threat Program | Chair of Europol Quantum Safe Financial Forum | Quantum Security 25 | Quantum Leap Award 2025 | Representative at EU QuIC, AMETIC

    17,295 followers

    Microsoft and Quantinuum reach a new milestone in quantum error correction. The collaboration claims to have used an innovative qubit-virtualization system on Quantinuum's H2 ion-trap platform to create 4 highly reliable logical qubits from only 30 physical qubits.

    What is quantum error correction? Physical qubits, with error rates on the order of 10^-2, are combined to deliver logical qubits with error rates on the order of 10^-5. According to their press release, this is the largest gap between physical and logical error rates reported to date, and it has allowed them to run more than 14,000 individual experiments without a single error. (https://lnkd.in/dzETsvVA)

    The race for qubit count seemed to end in 2023, with the latest update to IBM's roadmap focusing on quality rather than quantity (https://lnkd.in/dFu52wJR: "Until this year, our path was scaling the number of qubits. Going forward we will add a new metric, gate operations—a measure of the workloads our systems can run."), and with other developments in quantum error correction, like the one announced in December by Harvard University, Massachusetts Institute of Technology, QuEra Computing Inc. and the National Institute of Standards and Technology (NIST)/University of Maryland (https://lnkd.in/dkW-TT-w).

    Practical quantum computing gets a little closer, although it is still a distant target.

    Microsoft press release: https://lnkd.in/deJ4QCBk
    Quantinuum's press release: https://lnkd.in/d4Wnmvdq
    More details from Microsoft: https://lnkd.in/dusfZ4KY
    Paper: https://lnkd.in/dpPCX3td

    #quantumcomputing #quantumerrorcorrection #technology

  • View profile for David Steenhoek

    Think Quantum | Creator | OUTlier | AI Evangelist | Observer | Filmmaker | Tech Founder | Investor | Artist | Blockchain Maxi | Ex: Chase Bank, Mosaic, LAUSD, DC. WE build a better 🌎 2Gether. Question Everything B Kind

    12,154 followers

    A new device cuts quantum computer heat emissions by a factor of 10,000, offering a breakthrough in cooling and efficiency for next-generation machines. Heat is a major challenge in quantum computing, as excess energy disrupts qubits and causes errors, so reducing emissions is essential for scaling up powerful quantum systems. The device operates at extremely low temperatures, maintaining qubits in stable states while drastically minimizing unwanted thermal noise and allowing longer computations with higher accuracy. It could be launched as early as 2026, potentially revolutionizing how quantum computers are built, cooled, and deployed, and making them more practical for real-world applications. Controlling heat at this scale reminds us that engineering solutions, combined with quantum science, are key to unlocking the full potential of quantum computing, enabling faster, more reliable, and energy-efficient machines. Thank YOU — Quantum Cookie

    The device is a cryogenic traveling-wave parametric amplifier (TWPA) made with specialized "quantum materials." Traditional amplifiers used for reading out qubit signals in superconducting quantum computers generate noticeable heat (even if small in absolute terms), which adds thermal noise, raises the cooling burden on dilution refrigerators, and limits how many qubits can be packed into one cryostat. Qubic's version reportedly cuts thermal output by a factor of 10,000, bringing it down to practically zero (on the order of 1–10 microwatts), while also reducing overall power consumption by about 50%.

    Why this matters for quantum computing:
    - Heat is a core scaling bottleneck: qubits (especially superconducting ones) must operate at millikelvin temperatures (~10–50 mK). Even tiny amounts of heat from readout electronics or control lines can cause decoherence, increase error rates, and require more powerful (and expensive) cryogenic systems.
    - The amplifier's role: it boosts the faint microwave signals from qubits without adding much noise. Conventional semiconductor-based amplifiers at cryogenic stages dissipate more heat; this new TWPA minimizes that, potentially allowing twice as many qubits per dilution refrigerator by easing the thermal load and simplifying cabling.
    - Potential impact: lower cooling demands could cut operational costs and energy use significantly, making larger, more practical quantum systems feasible for real-world applications rather than just lab prototypes.

    Timeline and status: the company has received grant funding and aims for commercialization/launch in 2026. As of early 2026 reports, development is ongoing, with targets such as 20 dB gain over a 4–12 GHz bandwidth. No major contradictions or retractions have appeared in credible coverage.
