Major Quantum Leap: Researchers Stabilize Majorana Modes for Scalable Quantum Computing

Introduction: A New Foundation for Fault-Tolerant Quantum Machines

Quantum computing promises transformative breakthroughs, but instability in quantum states has long been a critical barrier. Now, researchers from the University of Oxford, Delft University of Technology, Eindhoven University of Technology, and Quantum Machines have achieved a major milestone: successfully stabilizing Majorana zero modes (MZMs), exotic quantum particles long theorized as ideal components for reliable, scalable quantum computers.

Key Breakthroughs and Technical Advances

Why Majorana Zero Modes Matter
• MZMs are non-abelian quasiparticles believed to be intrinsically resistant to environmental noise, unlike traditional qubits, which are easily disrupted.
• Their topological protection could enable the creation of fault-tolerant quantum systems, solving one of the most pressing challenges in quantum hardware.

The Innovation: A Three-Site Kitaev Chain
• Researchers constructed a three-site Kitaev chain, a key structure in topological quantum theory.
• The chain consists of quantum dots and superconducting segments embedded in hybrid semiconductor-superconductor nanowires.
• The architecture allows precise control over quantum states while keeping the MZMs spatially separated, preserving their stability and reducing the chance of quantum decoherence.

Material and Engineering Breakthroughs
• Traditional materials had limited success in hosting stable MZMs due to microscopic imperfections and noise.
• The team overcame this by refining fabrication techniques and improving the coherence environment within the nanowire system.

Collaborative Research Power
• The success reflects a multinational effort, combining theoretical physics, materials science, and quantum engineering expertise from leading European institutions and industry partner Quantum Machines.

Why This Matters: Paving the Way to Real-World Quantum Applications

Stabilizing Majorana zero modes is a long-sought goal in the quest to build scalable, error-resistant quantum computers. This achievement offers a viable platform for creating topological qubits, which could drastically reduce the overhead needed for quantum error correction, a major bottleneck in today's quantum systems. As researchers inch closer to commercial quantum machines, this breakthrough represents a foundational step toward a new generation of robust, high-performance quantum processors capable of tackling challenges in cryptography, materials design, and complex optimization. The era of reliable quantum computation just got one step closer.
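For readers who want intuition for why a short Kitaev chain hosts protected zero modes, here is a minimal numpy sketch of the textbook single-particle (Bogoliubov-de Gennes) Kitaev-chain model at its so-called sweet spot (hopping t equal to pairing amplitude delta, chemical potential mu = 0). This is an idealized toy model with illustrative parameters, not the actual Hamiltonian of the devices in the experiment:

```python
import numpy as np

def kitaev_bdg(n_sites, t=1.0, delta=1.0, mu=0.0):
    """Bogoliubov-de Gennes matrix of an open Kitaev chain.

    Basis: (c_1..c_N, c_1^dag..c_N^dag). Eigenvalues come in +/- pairs;
    at the sweet spot t = delta, mu = 0 a pair of eigenvalues is pinned
    at zero energy, the signature of Majorana modes localized at the
    two ends of the chain.
    """
    h = np.zeros((n_sites, n_sites))   # normal (hopping + potential) block
    d = np.zeros((n_sites, n_sites))   # pairing block (antisymmetric)
    for j in range(n_sites):
        h[j, j] = -mu
    for j in range(n_sites - 1):
        h[j, j + 1] = h[j + 1, j] = -t
        d[j, j + 1], d[j + 1, j] = delta, -delta
    return np.block([[h, d], [-d, -h]])

# Three-site chain, as in the experiment's geometry
energies = np.linalg.eigvalsh(kitaev_bdg(3))
print(np.round(energies, 6))  # two eigenvalues sit exactly at zero
```

Moving away from the sweet spot (e.g. mu != 0) splits the zero modes, which is why the quantum-dot architecture's fine parameter control matters.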
Balancing Stability and Scalability in Quantum Computing
Explore top LinkedIn content from expert professionals.
Summary
Balancing stability and scalability in quantum computing means building machines that can reliably maintain delicate quantum states while also being able to grow to handle more complex problems. This involves finding ways to keep quantum bits (qubits) from getting disrupted and designing systems that can expand without losing performance.
- Invest in error correction: Using built-in or self-learning error correction methods helps maintain the reliability of quantum information as the system grows larger.
- Simplify hardware design: Building quantum platforms that trap or control thousands of qubits with fewer bulky components supports long-term scalability and reduces interference.
- Adopt real-time calibration: Allowing quantum processors to fine-tune themselves during computation helps keep them stable without frequent pauses for manual adjustments.
-
Many talk about surface codes. But what if they're not the future? Quantum low-density parity-check (qLDPC) codes are gaining traction fast. IBM is building fault-tolerant memories using bivariate bicycle (BB) codes. IQM Quantum Computers is designing hardware with qLDPC in mind. And now, a new experiment from China shows the first working qLDPC code on a superconducting quantum processor.

On the 32-qubit Kunlun chip, researchers implemented:
• a [[18, 4, 4]] BB code
• a [[18, 6, 3]] qLDPC code

The notation [[n, k, d]] describes a quantum error correction code that uses n physical qubits to encode k logical qubits, with d being the code distance. Unlike surface codes, LDPC codes keep each error check (called a stabilizer) connected to only a small number of qubits, just six in this case, even as the code scales. That means fewer ancillas, fewer gates, and potentially lower overhead for fault tolerance.

The hardware was purpose-built for this experiment:
• 32 frequency-tunable transmon qubits
• 84 tunable couplers, enabling non-local interactions up to 6.5 mm apart
• air bridges to support a crossbar-style layout
• stabilizer checks executed in just 7 CZ layers

Gate fidelities were solid:
• single-qubit: 99.95%
• two-qubit: 99.22%

The decoding was performed offline using belief propagation with ordered statistics decoding (BP-OSD), an approach better suited to LDPC-style codes. Logical error rates were:
• BB: 8.91 ± 0.17%
• qLDPC: 7.77 ± 0.12%

Both are still above the physical qubit error rate, but simulations show that a 2x improvement in fidelity would be enough to push these codes below threshold. qLDPC codes are no longer just a concept; they're being implemented, measured, and decoded on superconducting hardware.

📸 Image credits: Ke Wang, Zhide Lu, Chuanyu Zhang et al. (2025, arXiv)
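To make the [[n, k, d]] bookkeeping concrete, here is a small Python sketch (my own illustration, not from the paper) comparing encoding rate and physical-qubit overhead for the two codes above against a distance-3 rotated surface code, which needs d^2 = 9 data qubits for a single logical qubit. Counts are data qubits only; ancillas for stabilizer measurement come on top in every case:

```python
from dataclasses import dataclass

@dataclass
class Code:
    name: str
    n: int  # physical data qubits
    k: int  # logical qubits
    d: int  # code distance

    @property
    def rate(self) -> float:
        return self.k / self.n        # logical qubits per physical qubit

    @property
    def overhead(self) -> float:
        return self.n / self.k        # physical qubits per logical qubit

codes = [
    Code("BB [[18,4,4]]", 18, 4, 4),
    Code("qLDPC [[18,6,3]]", 18, 6, 3),
    Code("rotated surface, d=3", 9, 1, 3),  # d^2 data qubits, 1 logical
]

for c in codes:
    print(f"{c.name:22s} rate={c.rate:.3f}  "
          f"overhead={c.overhead:.1f} qubits/logical  distance={c.d}")
```

The rate advantage (4 or 6 logical qubits from 18 physical, versus 1 from 9) is exactly the "lower overhead" argument for qLDPC codes, and it widens as the codes scale.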
-
Scientists just trapped 78,400 atoms using a single flat surface thinner than a human hair, a breakthrough that could unlock the next era of quantum computing. By holding thousands of atoms in precise positions, researchers can create highly controlled quantum systems, a critical step toward building scalable, reliable quantum devices. This flat surface acts as a stable platform where quantum states can be maintained, minimizing the interference and decoherence that are major challenges in quantum technology. The experiment could accelerate the development of advanced quantum computers capable of solving problems far beyond the reach of classical machines, from drug discovery to materials design. Trapping atoms at this scale demonstrates how quantum physics can be harnessed with extreme precision, revealing the potential to control matter at the smallest levels and reshape the future of computing. (Thanks to Quantum Cookie.)

In March 2026, physicists at Tsinghua University in China (led by researchers including Tao Zhang) demonstrated an optical metasurface, a single flat silicon nitride chip patterned with nanoscale pillars and thinner than a human hair, that can generate a 280 × 280 array of 78,400 individual optical tweezers from one input laser beam. These tweezers are focused laser spots that trap and hold individual neutral atoms (likely rubidium or similar) in precise positions with high uniformity (>96% intensity consistency across the array). The metasurface replaces bulky, complex traditional optics such as spatial light modulators (SLMs) and acousto-optic deflectors (AODs), making the setup far more compact, stable, scalable, and CMOS-compatible for manufacturing.

Why this matters for quantum computing: neutral-atom platforms are promising for quantum computers because atoms are identical, can have long coherence times, and support two-qubit gates via Rydberg interactions. Scaling them up has been limited by the difficulty of creating and controlling huge numbers of stable traps without massive, expensive optical systems. This work shows a path to tens of thousands (or more) of trapped atoms on a simpler platform, addressing a key bottleneck. The team is already working on a larger ~19.5 mm metasurface aimed at >10,000 atoms in a more practical external configuration. Similar metasurface approaches have been explored by groups at Columbia University and others, but this sets a notable record for a single flat device generating that many traps.
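For intuition on how one flat optic can produce a grid of focal spots, here is a minimal numpy sketch of a lenslet-array phase profile: the aperture is tiled into cells, and each cell carries a hyperbolic lens phase that focuses its share of the beam to its own spot. This is a textbook construction with made-up parameters, not the Tsinghua team's actual metasurface design (which the nanopillars encode at far higher resolution):

```python
import numpy as np

# Illustrative parameters only (not from the paper)
wavelength = 0.8e-6     # trapping laser wavelength, m
focal_len = 1.0e-3      # focal length of each lenslet, m
cell_pitch = 20e-6      # trap spacing, m
n_spots = 8             # 8 x 8 grid here; the real device makes 280 x 280
samples_per_cell = 64

# Sample the full aperture on a square grid
size = n_spots * cell_pitch
n_px = n_spots * samples_per_cell
x = np.linspace(-size / 2, size / 2, n_px)
X, Y = np.meshgrid(x, x)

# Local coordinates within each lenslet cell (periodic tiling)
u = (X + size / 2) % cell_pitch - cell_pitch / 2
v = (Y + size / 2) % cell_pitch - cell_pitch / 2

# Hyperbolic lens phase per cell: light from each cell converges to a
# diffraction-limited spot a distance focal_len behind the chip.
k = 2 * np.pi / wavelength
phase = k * (focal_len - np.sqrt(u**2 + v**2 + focal_len**2))
phase_wrapped = np.mod(phase, 2 * np.pi)  # what the pillar heights encode

print(phase_wrapped.shape)  # (512, 512) phase map -> 64 focal spots
```

The engineering challenge the paper solves is doing this at 78,400 spots with >96% uniformity from a single beam, which is where the nanoscale pillar design goes well beyond this toy tiling.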
-
A quantum computer that learns from its own errors while it's computing. That's the framing in a recent paper from Google Quantum AI and Google DeepMind on reinforcement learning control of quantum error correction.

Large quantum processors drift. The standard fix is to halt the computation and recalibrate, which won't scale to algorithms expected to run for days or weeks. The authors ask whether QEC can calibrate itself from the data it already produces.

The idea: repurpose error detection events as a training signal for a reinforcement learning agent that continuously tunes the physical control parameters (pulse amplitudes, detunings, DRAG coefficients, CZ parameters, and so on). Rather than optimizing the logical error rate (LER) directly, which is expensive and global, the agent minimizes the average detector-event rate, a cheap local proxy whose gradient is approximately aligned with the gradient of the LER in the small-perturbation regime.

The results on a Willow superconducting processor:
- On distance-5 surface and color codes, RL fine-tuning after conventional calibration and expert tuning yields about 20% additional LER suppression
- Against injected drift, RL steering improves logical stability 2.4x, rising to 3.5x when decoder parameters are also steered
- New record logical error per cycle: 7.72(9)×10⁻⁴ for a distance-7 surface code (with the AlphaQubit2 decoder) and 8.19(14)×10⁻³ for a distance-5 color code (with Tesseract)
- In simulation, the framework scales to a distance-15 surface code with roughly 40,000 control parameters, with a convergence rate that is independent of system size

The broader takeaway: calibration and computation may not need to be separate phases. If detector statistics can carry enough information to steer a large control stack online, fault tolerance becomes less about pausing to retune and more about a processor that keeps learning while it computes.

Worth noting that the current experiments rely on short repeated memory circuits, so real-time steering during a single long logical algorithm (where exploration noise would affect the computation directly) remains future work.

Paper: https://lnkd.in/gVQXnpzZ
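To see the shape of such a control loop, here is a minimal sketch using simultaneous-perturbation stochastic approximation (SPSA) against a toy quadratic stand-in for the detector-event rate. This is not Google's RL agent; the objective, dimensions, and step sizes are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
optimum = rng.normal(size=40)          # unknown "ideal" control parameters

def detector_event_rate(theta: np.ndarray) -> float:
    """Toy proxy objective: a noisy quadratic bowl around the optimum.
    On hardware this would be the measured average detector-event rate."""
    return float(np.sum((theta - optimum) ** 2) + 0.01 * rng.normal())

theta = np.zeros(40)                   # start from nominal calibration
a, c = 0.01, 0.1                       # step size and probe amplitude

for step in range(3000):
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # random probe direction
    # Two-sided probe: estimate the directional gradient from two noisy reads
    g_hat = (detector_event_rate(theta + c * delta)
             - detector_event_rate(theta - c * delta)) / (2 * c) * delta
    theta -= a * g_hat                 # descend the proxy objective

print(detector_event_rate(theta))      # settles near the measurement noise floor
```

The point mirrors the paper's: you never query the expensive global quantity (LER); you steer tens of thousands of knobs using only a cheap, noisy, locally measurable proxy.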
-
Is software or hardware king in quantum computing?

It's tempting to treat this like a classical rivalry: chips vs. code, gates vs. logic. But quantum isn't a binary contest. It's a delicate negotiation with physics itself.

Hardware defines how nature computes. Whether it's superconducting qubits (IBM, Google), trapped ions (IonQ, Quantinuum), photonic lattices (PsiQuantum), or neutral atoms (Pasqal), every platform balances three parameters physicists call the triple constraint:
• Fidelity (accuracy per gate)
• Coherence time (how long a qubit resists decoherence)
• Scalability (how well you can add qubits without amplifying noise)

Today's leading systems achieve gate fidelities around 99.9% and coherence times in the millisecond range, but that still translates to roughly 1 error per 1,000 operations, which is fatal for deep quantum circuits.

That's where software becomes the real differentiator. Quantum compilers (Qiskit, Cirq, Tket) now reorder operations, compress circuits, and map logical qubits to the most stable physical ones (see the toy compilation example after this post). Machine learning models predict error pathways, dynamical decoupling sequences suppress phase drift, and hybrid orchestration frameworks offload unstable gates to classical backends in real time.

In short:
1. Hardware fights physics.
2. Software negotiates with it.

The next inflection won't come from 1,000 qubits; it'll come from 1,000 reliable qubit-cycles per computation. Whoever builds the middleware intelligence that converts unstable quantum states into stable computational outcomes will define the stack: the "AWS of uncertainty."

Quantum supremacy won't belong to whoever traps more atoms. It'll belong to whoever tames entropy with software.
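Here is what that compiler step looks like in practice: a minimal Qiskit sketch that maps a small circuit onto a linear qubit topology and a restricted basis-gate set. The connectivity and basis gates are illustrative choices, not any particular device's:

```python
from qiskit import QuantumCircuit, transpile

# A 4-qubit GHZ-style circuit written without worrying about hardware
qc = QuantumCircuit(4)
qc.h(0)
for i in range(3):
    qc.cx(i, i + 1)
qc.measure_all()

# Illustrative hardware constraints: linear connectivity, limited basis
coupling = [[0, 1], [1, 2], [2, 3]]
tqc = transpile(
    qc,
    coupling_map=coupling,
    basis_gates=["rz", "sx", "x", "cx"],
    optimization_level=3,   # heaviest optimization: routing, resynthesis
)

print("logical depth: ", qc.depth())
print("physical depth:", tqc.depth())
print("gate counts:   ", dict(tqc.count_ops()))
```

On a circuit whose two-qubit gates already match the line topology the overhead is small; write the CNOTs against the connectivity and the transpiler must insert SWAPs, which is exactly the cost good qubit mapping tries to minimize.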
-
After 20 years of research, Microsoft introduces the first Quantum Processing Unit (QPU) leveraging topological qubits. What will be the impact on the AI industry?

Some breakthroughs signal an incremental step forward. Others, like Microsoft's new Majorana 1 chip, could be a paradigm shift, also for the AI and generative AI industry.

For years, quantum computing faced a key challenge: building stable, scalable qubits. Microsoft's approach is different. According to Microsoft, they had to develop a whole new class of materials with a previously unobserved state of matter (yes: solid, liquid, gas, plasma, and now topological 🤯), topological conductors. Unlike traditional qubits, topological qubits are inherently stable and less affected by noise, making them promising for fault-tolerant quantum computing. The result? A potential path to one million qubits on a single chip, something once thought to be at least a decade away.

The new Quantum Processing Unit (QPU), called Majorana 1, is being compared to the invention of the transistor. Just as the transistor replaced vacuum tubes and launched the digital era, topological quantum computing could redefine what's possible.

What does this mean for the AI community? If Microsoft's Majorana 1 chip delivers on its promise of scalable, fault-tolerant quantum computing, it could further accelerate the development of AI and unlock new use cases:
✅ Faster AI training: today's largest AI models take weeks or months to train on thousands of GPUs; that could be reduced to hours or even minutes. Complex optimizations, like hyperparameter tuning, would become dramatically faster, enabling systems to evolve in real time.
✅ Quantum-powered AI could simulate physical, chemical, and biological systems, unlocking use cases like true-to-life 3D simulations, on-demand drug discovery, and hyper-realistic creative AI tools.
✅ AI-driven material discovery: quantum computers excel at simulating quantum mechanics, something classical computers struggle with.
✅ Smarter decision-making for complex systems: industries like logistics, finance, and supply chain management rely on solving massively complex optimization problems.

👉 Of course, challenges remain. Scaling from scientific discovery to a commercially viable product has derailed many promising technologies (like fusion energy, ...). But as quantum computing for AI advances, we could see a power shift in AI and cloud markets, where today's compute-centric monopolies face new challengers leveraging quantum breakthroughs, potentially leading to a bifurcation: either extreme consolidation (as only a few control quantum access) or rapid diversification as new players emerge. At the same time, industries like biotech, materials science, and logistics could be fundamentally reshaped as quantum-driven AI unlocks solutions previously thought impossible.

What are your thoughts? Will this be quantum's "transistor moment"?
-
By driving a quantum processor with laser pulses arranged according to the Fibonacci sequence, physicists observed the emergence of an entirely new phase of matter, one that displays extraordinary stability in a domain where fragility is the norm.

Quantum computers operate using qubits, which differ radically from classical bits. A qubit can exist in superposition, occupying multiple states at once, and can become entangled with others across space. These properties enable immense computational power, but they come with a cost: quantum states are notoriously short-lived. Environmental noise, microscopic imperfections, and edge effects rapidly degrade coherence, limiting how long quantum information can survive.

Seeking a new way to protect fragile quantum states, scientists at the Flatiron Institute took an unconventional approach: instead of applying laser pulses at regular intervals, they used a rhythm governed by the Fibonacci sequence, an ordered but non-repeating pattern long known to appear in biological growth, crystal structures, and wave interference. The experiment was carried out on a chain of ten trapped-ion qubits, driven by precisely timed laser pulses.

The result was the formation of what is described as a time quasicrystal. Unlike ordinary crystals, which repeat periodically in space, a time quasicrystal exhibits structure in time without repeating in a simple cycle. The Fibonacci-based driving created a temporal order that resisted disruption, allowing the quantum system to remain coherent far longer than expected.

The improvement was significant. Under standard conditions, the quantum state persisted for roughly 1.5 seconds. When driven by the Fibonacci pulse sequence, coherence times stretched to approximately 5.5 seconds, more than a threefold increase.

Even more intriguing was the system's temporal behavior. Measurements indicated that the quantum dynamics unfolded as if time itself possessed two independent structural directions. This does not imply time flowing backward, but rather that the system's evolution followed two intertwined temporal pathways, an emergent property arising purely from the Fibonacci drive.

The researchers propose that the non-repeating structure of the Fibonacci sequence suppresses errors that typically accumulate at the boundaries of quantum systems. By distributing disturbances in a highly ordered yet aperiodic way, the sequence stabilizes the collective behavior of the qubits. In effect, a mathematical pattern found throughout nature acts as a self-organizing error-management protocol.

The findings suggest a powerful new strategy for quantum control. Rather than fighting noise solely with complex correction algorithms, future quantum technologies may harness structured patterns, drawn from mathematics and natural order, to achieve resilience at a fundamental level.

https://lnkd.in/dVxp7R8J
https://lnkd.in/dDVNRsPk
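The drive's aperiodic rhythm can be generated with the standard Fibonacci substitution rule (A -> AB, B -> A). A minimal sketch, with the two pulse types labeled A and B purely for illustration; the actual laser parameters belong to the experiment:

```python
def fibonacci_word(generations: int) -> str:
    """Build the aperiodic Fibonacci word by repeated substitution:
    A -> AB, B -> A. Successive lengths follow the Fibonacci numbers
    (1, 2, 3, 5, 8, ...), and the sequence never settles into a cycle."""
    word = "A"
    for _ in range(generations):
        word = "".join("AB" if pulse == "A" else "A" for pulse in word)
    return word

schedule = fibonacci_word(7)
print(len(schedule), schedule)   # 34 pulses: ABAABABAABAAB...
```

The ratio of A pulses to B pulses tends to the golden ratio, and that irrationality is exactly why the drive is ordered yet never periodic, the property credited with suppressing boundary errors.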
-
QUANTUM SYSTEM AT THE EDGE OF CHAOS: A PATH TOWARD STABLE QUANTUM COMPUTATION

Quantum physics rarely offers moments where theory, engineering, and the raw behavior of many-body systems collide to reveal a new dynamical regime. Yet that is exactly what the 78-qubit Chuang-tzu 2.0 processor has uncovered: a quantum system pushed to the brink of chaos can be held in a long-lived, tunable prethermal state, an island of order suspended inside non-equilibrium turbulence.

This discovery goes far beyond Floquet physics. Periodic driving has already given us time crystals and engineered topological phases, but non-periodic driving, especially with structured randomness, has long been synonymous with rapid heating and the loss of quantum information. Instead, this experiment shows that temporal randomness can be engineered to suppress heating, stabilize dynamics, and preserve coherence far longer than expected. Random multipolar driving, neither periodic nor chaotic, acts as a hidden temporal scaffold that shapes how energy flows through the system.

Applied to a two-dimensional Bose-Hubbard model across 78 qubits and 137 couplers, this protocol prevents the system from collapsing into chaos. Instead, it enters a robust prethermal plateau where imbalance decays slowly, entanglement grows in a controlled way, and the heating rate becomes tunable, matching the universal algebraic scaling predicted for multipolar drives. This is not a subtle correction; it is a macroscopic reshaping of the system's dynamical landscape.

The geometry of entanglement is equally striking. Different subsystems show distinct behaviors, some oscillating coherently, others settling into plateaus, revealing a highly non-uniform spread of correlations across the lattice. It is the first time such fine-grained entanglement dynamics have been observed in a large, non-periodically driven quantum simulator. Classical tensor-network methods like GMPS and PEPS cannot keep pace once heating accelerates, confirming that these dynamics lie firmly beyond classical reach.

Quantum systems at the brink of chaos are not doomed to disorder. With the right temporal geometry, they can be shaped, stabilized, and made computationally powerful. This work demonstrates that the boundary between coherence and chaos is not a hard limit but a navigable frontier, and the future of quantum computation may lie precisely in mastering this edge.

Paper: https://lnkd.in/eJBkGts5
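For a feel of the drive structure, here is a sketch of the recursive block construction used in random multipolar driving proposals: an order-n block and its partner are built from the previous order's pair, and the drive randomly picks one of the two at each step. The +1/-1 labels stand in for the two elementary drive unitaries; this is my schematic reading of the protocol family, not the experiment's exact pulse table:

```python
import random

def multipolar_pair(order: int):
    """Return the pair of order-n multipolar blocks as +/-1 pulse lists.
    Order 0 is the elementary pair; each higher order concatenates the
    previous pair in opposite orders, suppressing low-frequency heating."""
    block, partner = [+1], [-1]
    for _ in range(order):
        block, partner = block + partner, partner + block
    return block, partner

def rmd_drive(order: int, n_blocks: int, seed: int = 0):
    """Random multipolar drive: at each step, randomly apply one block
    of the order-n pair. order = 0 reproduces fully random driving."""
    rng = random.Random(seed)
    block, partner = multipolar_pair(order)
    drive = []
    for _ in range(n_blocks):
        drive += block if rng.random() < 0.5 else partner
    return drive

print(rmd_drive(order=2, n_blocks=3))  # 12 elementary pulses
```

The key idea the sketch captures: the sequence is random at the block level (so it is not Floquet) yet highly structured within each block, and raising the multipolar order is what makes the heating rate tunable.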
-
Do you take something which is scalable and make it quantum? Or do you take something that is quantum and scale it up? The answer lies in NOISE.

For years, most quantum computing businesses have taken scaled technologies (semiconductors, optics, etc.) and used them to force quantum states. Typically this has resulted in low-quality qubits, unforeseen barriers leading to revised roadmaps, an existential reliance on massively improved error correction, and complicated, noisy setups.

Last week MSFT announced some encouraging indicators as they pursue their contrarian, bottom-up approach to building a quantum computer. They have taken the view that unless you build from the bottom up, noise and low qubit quality will eventually be your downfall.

That's the same approach we've taken at Silicon Quantum Computing. We've precision-engineered and manufactured silence by placing phosphorus atoms in isotopically pure silicon. Spin vacuums, resistant to noise.

Noise is the enemy. You need to build a QPU for the fight.