The more qubits we add, the more control lines we need—or do we? One of the big challenges in scaling superconducting quantum processors is the sheer number of control lines needed to manipulate the qubits. These lines carry the microwave pulses that drive operations like single- and two-qubit gates. But with thousands or even millions of qubits in future systems, fitting all those lines into a cryogenic system becomes a serious problem.

Frequency-multiplexed control offers a clever solution. Instead of dedicating a separate control line to each qubit, multiple qubits share a single line. Each qubit is uniquely addressed by a pulse tuned to its specific frequency. However, a problem arises when we send multiple control pulses—typically microwaves with a Gaussian envelope—down the same line. These pulses have broad frequency profiles, which can unintentionally excite nearby qubits. This limits how densely qubit frequencies can be packed and reduces gate fidelity.

A new solution to this problem is Selective Excitation Pulses (SEP). Instead of using Gaussian pulses, SEP carefully shapes the frequency spectrum of the pulse. The key idea is to create null points—frequencies where the pulse has negligible energy—at the frequencies of the non-target qubits. This isolates the target qubit and reduces unintended interactions, even when qubit frequencies are closely spaced.

A recent experiment demonstrated that SEP:
- Reduces unintended qubit excitations from 10% (Gaussian pulses) to just 0.2%.
- Maintains high gate fidelities, averaging 99.8% for the target qubit.

This technique is highly promising, as it provides a straightforward way to control more qubits on the same line. New pulse-shaping techniques like SEP may fly under the radar, but they are essential for improving gate fidelity and enabling scalability.
Advancements like these are a powerful reminder of how much innovation is still happening at the fundamental level of quantum control. 📸 Image Credits: Matsuda et al. (2025)
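The null-point idea is easy to sketch numerically: start from a sampled Gaussian envelope, zero its spectrum at an assumed neighbor detuning, and compare the residual leakage. All numbers here (sample rate, pulse width, 30 MHz detuning) are illustrative assumptions, not values from the paper, and zeroing FFT bins is only a crude stand-in for the actual SEP construction.

```python
import numpy as np

fs = 1e9                      # 1 GS/s sample rate (assumed)
n = 1024
t = np.arange(n) / fs
sigma = 20e-9                 # 20 ns Gaussian width (assumed)
env = np.exp(-0.5 * ((t - t[n // 2]) / sigma) ** 2)   # Gaussian envelope

freqs = np.fft.fftfreq(n, d=1 / fs)
spec = np.fft.fft(env)

# Spectral leakage of the plain Gaussian at a neighbor qubit detuned by 30 MHz
k = np.argmin(np.abs(freqs - 30e6))
leak_gauss = np.abs(spec[k]) / np.abs(spec).max()

# SEP-style shaping (illustrative): place null points at the neighbor's
# detuning, then transform back to get the modified pulse envelope
spec_sep = spec.copy()
for f0 in (30e6, -30e6):
    spec_sep[np.argmin(np.abs(freqs - f0))] = 0.0
env_sep = np.fft.ifft(spec_sep)

leak_sep = np.abs(np.fft.fft(env_sep)[k]) / np.abs(spec).max()
```

Even this toy version shows the point: the Gaussian still carries measurable energy at the neighbor's frequency, while the shaped pulse has essentially none there.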
Improving Reliability in Quantum Circuit Scaling
Summary
Improving reliability in quantum circuit scaling means finding ways to make quantum computers work with more qubits while maintaining accuracy and minimizing errors. As circuits get larger, new techniques are needed to control, monitor, and stabilize qubit interactions so quantum systems can reach their full potential.
- Shape control signals: Consider using selective excitation pulses or frequency-multiplexed lines to address individual qubits without disturbing their neighbors, which helps reduce errors as circuits grow.
- Integrate compact hardware: Embrace miniaturized and energy-efficient components, such as chip-scale optical modulators, to make it practical to scale up quantum systems without excessive heat or complexity.
- Apply error detection: Use clever circuit layouts and error-checking methods, including symmetry-based detection and randomized checks, to pinpoint faults and build confidence in large-scale quantum computations.
Tiny Optical Modulator Removes a Major Bottleneck to Scalable Quantum Computing

Introduction
A long-standing scaling barrier in quantum computing has been the inability to precisely control vast numbers of lasers without bulky, power-hungry hardware. New research now demonstrates an ultra-compact optical phase modulator that could unlock quantum systems with tens of thousands—or even millions—of qubits.

Core Breakthrough

What Was Built
• Researchers developed an optical phase modulator nearly 100 times smaller than the diameter of a human hair.
• The device manipulates visible laser light using microwave-frequency vibrations oscillating billions of times per second.
• It delivers precise, stable laser frequency control essential for atom- and ion-based quantum computers.

Why It Is Different
• The modulator consumes roughly 80 times less microwave power than many commercial alternatives.
• Lower power consumption dramatically reduces heat generation.
• The tiny footprint allows dense integration of many optical channels on a single chip.

Why Laser Control Matters
• Trapped-ion and trapped-neutral-atom quantum computers rely on lasers to control individual qubits.
• Each laser frequency must be tuned with extreme precision, often to billionths of a percent.
• Today's tabletop electro-optic modulators cannot scale to the thousands of channels future systems require.

Manufacturing at Scale
• The device was fabricated entirely using CMOS foundry processes—the same manufacturing approach used for mainstream microelectronics.
• This enables mass production of thousands or millions of identical photonic components at low cost.
• The approach moves optics away from bespoke, hand-assembled systems toward fully integrated photonic chips.

System-Level Impact
• Efficient frequency generation enables dense, chip-scale laser control architectures.
• Reduced heat output makes large-scale quantum systems physically and thermally feasible.
• The technology supports quantum computing, quantum sensing, and quantum networking.

What Comes Next
• The team is developing fully integrated photonic circuits combining frequency generation, filtering, and pulse shaping.
• Collaborations with quantum computing companies will test the chips in trapped-atom and neutral-atom platforms.
• The goal is a complete, scalable photonic control stack for very large quantum computers.

Why This Matters
This advance removes one of the most practical constraints on quantum computing scale. By combining extreme miniaturization, drastic power reduction, and CMOS-compatible manufacturing, the new optical modulator transforms laser control from a laboratory bottleneck into an engineering problem that can be solved at industrial scale. It represents a critical step toward quantum computers large enough to deliver real-world impact.

Keith King https://lnkd.in/gHPvUttw
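To put "billionths of a percent" in perspective, here is a quick back-of-the-envelope conversion to absolute frequency stability. The 729 nm wavelength is an assumption for illustration (a common trapped-ion qubit transition), not a number from the article.

```python
# "A billionth of a percent" as a fraction: 1e-9 of 1e-2
frac = 1e-9 * 0.01

# Assume a visible control laser near 729 nm (common for trapped ions)
c = 299_792_458.0            # speed of light, m/s
f_laser = c / 729e-9         # optical frequency, ~411 THz

# Required absolute frequency stability: only a few kHz out of ~411 THz
df = frac * f_laser
```

That is why stable, low-noise modulators matter: holding a ~411 THz carrier to within a few kHz is a part-in-10^11 requirement, per channel, across thousands of channels.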
-
The preparation of GHZ states is a common benchmark for quantum processors. These states are not only a test of device-wide entanglement, they are also useful resources in numerous quantum algorithms. Our team recently demonstrated a 120-qubit logical GHZ state on our Heron r2 processors, the largest reported on any hardware. This includes a 60-logical-qubit GHZ on a single-shot basis (i.e., with no readout error mitigation).

These experiments were enabled by error detection at both the device and circuit level. At the device level, we can use our knowledge of the device architecture to detect if some couplers fail during a particular shot. At the circuit level, we can use symmetries inherent in the GHZ state to detect if certain violations occur.

The state preparation proceeds as follows: we first eliminate edges with bad CZ or bad readout (above a given threshold). Then, starting from a qubit at the center of the remaining graph, we perform a breadth-first search (BFS) to prepare a GHZ state in shallow depth. During the BFS, some nodes are randomly blocked in order to increase the chance of check qubits being found. Afterwards, any node that does not belong to the GHZ but is adjacent to 2 of its qubits may act as a check in a ZZ parity measurement. We aim to maximize the "coverage" of checks that we can find through this randomization, while not increasing the depth beyond a given threshold above the best possible depth. The coverage of checks is the number of locations in the circuit whose failure is detected by one of the checks, which we can compute efficiently using Pauli propagation. Therefore, we can predict exactly how many failures will be detected using our checks, and can optimize the layout for them.

These experiments were performed by Ali Javadi and Simon Martiel. They also leverage many of the recent advances made by our team, including improved readout on Herons, characterization of coupler errors, and M3 readout error mitigation.
For comparison, the recent demonstrations by Microsoft/Atom with a 24-qubit GHZ, Quantinuum with a 50-qubit GHZ, and Q-CTRL with a 75-qubit GHZ (also on Heron) also relied on error detection. As we chart the path towards advantage, what really matters is how large a quantum circuit we can run and whether we can trust that the method used gives accurate results. While GHZ states are simple to simulate, this work shows that error detection with post-selection is a potentially viable tool to combine with error mitigation or sample-based quantum diagonalization when running experiments at the utility scale (100+ qubits), and to build the set of trusted tools needed to search for quantum advantage on near-term devices. This is why we are pushing near-term methods such as error mitigation and error detection on utility-scale quantum computers.
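The layout procedure described above, BFS-based GHZ preparation with blocked nodes and ZZ-parity check qubits, can be sketched as a plain graph algorithm. This is a simplified illustration of the idea, not IBM's actual code; the grid graph and blocked set in the usage below are made up for the example.

```python
from collections import deque

def ghz_bfs(adj, root, blocked=frozenset()):
    """BFS from `root` over adjacency dict `adj`, skipping blocked nodes.
    Returns the (parent, child) CX gates in BFS order; a Hadamard on `root`
    followed by these CXs prepares a GHZ state on the visited nodes in
    shallow depth."""
    visited, gates, q = {root}, [], deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in visited and v not in blocked:
                visited.add(v)
                gates.append((u, v))
                q.append(v)
    return gates, visited

def find_checks(adj, ghz_nodes):
    """Nodes outside the GHZ that are adjacent to >= 2 GHZ qubits can host
    a ZZ parity-check measurement."""
    return {u for u in adj
            if u not in ghz_nodes
            and sum(v in ghz_nodes for v in adj[u]) >= 2}
```

Randomizing the `blocked` set and re-running is then exactly the knob the post describes: each blocked node is excluded from the GHZ, and if it ends up adjacent to two GHZ qubits it becomes a usable check.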
-
By driving a quantum processor with laser pulses arranged according to the Fibonacci sequence, physicists observed the emergence of an entirely new phase of matter—one that displays extraordinary stability in a domain where fragility is the norm.

Quantum computers operate using qubits, which differ radically from classical bits. A qubit can exist in superposition, occupying multiple states at once, and can become entangled with others across space. These properties enable immense computational power, but they come with a cost: quantum states are notoriously short-lived. Environmental noise, microscopic imperfections, and edge effects rapidly degrade coherence, limiting how long quantum information can survive.

Seeking a new way to protect fragile quantum states, scientists at the Flatiron Institute took a different approach: instead of applying laser pulses at regular intervals, they used a rhythm governed by the Fibonacci sequence—an ordered but non-repeating pattern long known to appear in biological growth, crystal structures, and wave interference. The experiment was carried out on a chain of ten trapped-ion qubits, driven by precisely timed laser pulses.

The result was the formation of what is described as a time quasicrystal. Unlike ordinary crystals, which repeat periodically in space, a time quasicrystal exhibits structure in time without repeating in a simple cycle. The Fibonacci-based driving created a temporal order that resisted disruption, allowing the quantum system to remain coherent far longer than expected. The improvement was significant: under standard conditions, the quantum state persisted for roughly 1.5 seconds, while under the Fibonacci pulse sequence, coherence times stretched to approximately 5.5 seconds—more than a threefold increase. Even more intriguing was the system's temporal behavior. Measurements indicated that the quantum dynamics unfolded as if time itself possessed two independent structural directions.
This does not imply time flowing backward, but rather that the system’s evolution followed two intertwined temporal pathways—an emergent property arising purely from the Fibonacci drive. The researchers propose that the non-repeating structure of the Fibonacci sequence suppresses errors that typically accumulate at the boundaries of quantum systems. By distributing disturbances in a highly ordered yet aperiodic way, the sequence stabilizes the collective behavior of the qubits. In effect, a mathematical pattern found throughout nature acts as a self-organizing error-management protocol. The findings suggest a powerful new strategy for quantum control. Rather than fighting noise solely with complex correction algorithms, future quantum technologies may harness structured patterns—drawn from mathematics and natural order—to achieve resilience at a fundamental level. https://lnkd.in/dVxp7R8J https://lnkd.in/dDVNRsPk
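The aperiodic pulse pattern itself is simple to generate: the Fibonacci word comes from the standard substitution rule A → AB, B → A. A sketch, where "A" and "B" stand for the two pulse blocks in the drive sequence (the labels are illustrative):

```python
def fibonacci_word(min_len):
    """Generate the aperiodic Fibonacci word by repeated substitution
    A -> AB, B -> A; successive word lengths are Fibonacci numbers."""
    s = "A"
    while len(s) < min_len:
        s = "".join("AB" if ch == "A" else "A" for ch in s)
    return s[:min_len]

seq = fibonacci_word(13)   # "ABAABABAABAAB"
```

The sequence never settles into a repeating cycle, and the ratio of A pulses to B pulses approaches the golden ratio, which is the ordered-but-aperiodic structure the drive exploits.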
-
New preprint out today with my PhD student Simon Pettersson Fors and researcher Jorge Fernández Pendás at the Wallenberg Centre for Quantum Technology (WACQT) at Chalmers tekniska högskola: "Comprehensive explanation of ZZ coupling in superconducting qubits" https://lnkd.in/dyqWUn6X

We have now become so good at protecting superconducting quantum computers from decoherence due to unwanted couplings to their surroundings that unwanted couplings between qubits within the quantum computer are emerging as a major challenge for scaling up. All such couplings between qubits can manifest as a ZZ coupling, a shift of the energy of one qubit conditioned on the state of another qubit. This coupling can be used as a two-qubit controlled-Z (CZ) gate if it's strong, but when doing other operations on the quantum computer we want it to be zero or close to zero, to not introduce errors.

There have been many setups suggested in the past few years to suppress or cancel the ZZ coupling between two superconducting qubits. To guide the search in this large parameter space of setups and coupling strengths, it is important to have a good understanding of the mechanisms giving rise to the ZZ coupling.

In our paper, we present extensive analytical and numerical results for the ZZ coupling in a setup with two fixed-frequency transmon qubits and a flux-tunable transmon coupler. We introduce a diagrammatic perturbation theory to clarify the mechanisms behind the ZZ coupling to a greater extent than has been done before. To support our approximations in the perturbation theory, and the results emerging from it, we perform careful numerical modeling, which considers the Hamiltonian for the transmon qubits from a low level and leverages an improved algorithm to identify eigenstates in the system.
We find that the qubit frequencies, anharmonicities, and coupling strengths in our considered system can be chosen to create three types of parameter regions with zero or near-zero ZZ coupling that can be accessed by current technology without major redesigns. Through our diagrammatic perturbation theory we are able to explain the underlying mechanisms (both level repulsions and some higher-order mechanisms) for the existence of all these regions.

Our results thus open up possibilities both for improving gate speeds of CPHASE and CZ gates and for improving the fidelities of other gates, which are negatively affected by ZZ coupling. Through the analytical and numerical methods we introduce, system parameters and architectures can be constrained to a more manageable search space. Our methods are not limited to the three-transmon setup we study here as an example; we expect them to find applications in investigations of larger systems (including ZZZ and higher-order couplings), in setups with other types of superconducting qubits, and possibly also in other quantum-computing systems where ZZ coupling constitutes a challenge, e.g., semiconductor qubits.
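For readers who want to play with the numbers, the static ZZ shift of two coupled transmons can be estimated from a truncated Duffing-oscillator model. This is a minimal two-mode sketch without the tunable coupler the paper studies, and the GHz-scale parameters below are illustrative, not taken from the preprint.

```python
import numpy as np

def duffing_zz(w1, w2, alpha1, alpha2, g, d=3):
    """Static ZZ shift E11 - E10 - E01 + E00 for two exchange-coupled
    Duffing oscillators (transmons), each truncated to d levels. Dressed
    eigenstates are labeled by maximum overlap with the bare product states."""
    num = np.diag(np.arange(d)).astype(float)          # number operator
    a = np.diag(np.sqrt(np.arange(1, d)), 1)           # annihilation operator
    eye = np.eye(d)

    def duffing(w, alpha):
        # H = w*n + (alpha/2) * n * (n - 1)
        return w * num + 0.5 * alpha * (num @ num - num)

    H = (np.kron(duffing(w1, alpha1), eye)
         + np.kron(eye, duffing(w2, alpha2))
         + g * (np.kron(a.T, a) + np.kron(a, a.T)))    # exchange coupling
    evals, evecs = np.linalg.eigh(H)

    def energy(i, j):
        # eigenstate with the largest weight on bare state |i, j>
        k = np.argmax(np.abs(evecs[i * d + j, :]))
        return evals[k]

    return energy(1, 1) - energy(1, 0) - energy(0, 1) + energy(0, 0)
```

With the coupling switched off the ZZ shift vanishes, and with a modest exchange coupling it appears at the MHz scale, which is exactly the level-repulsion mechanism the diagrammatic treatment dissects (and which the flux-tunable coupler is designed to cancel).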
-
I'm really happy with the rapid development of CUDA-Q QEC, our toolkit for quantum error correction. QEC is an incredibly rich and fast-moving field, and in CUDA-Q QEC we aim to provide a platform with a diverse set of accelerated decoders, AI infrastructure, and tools to enable researchers to develop and test their own codes, decoders, and architectures, hopefully even better than our own!

As we dig deeper into the problem of scalable QEC, the benefits of GPUs and AI have become much clearer. We started with research tools, for simulation and offline decoding, which is still an important capability. Now with the 0.5.0 release we also provide the infrastructure for real-time decoding, where syndrome processing occurs concurrently with quantum operations. This release also introduces GPU-accelerated algorithmic decoders like RelayBP, a promising approach developed in the past year that aims to overcome the convergence limitations of traditional belief propagation. For scenarios demanding maximum throughput, we have integrated a TensorRT-based inference engine that allows researchers to deploy custom AI decoders trained in frameworks like PyTorch and exported to ONNX directly into the quantum control loop. To address the complexities of continuous system operation, we added sliding window decoders that handle circuit-level noise across multiple rounds without assuming temporal periodicity.

These tools are designed to be hardware-agnostic and scalable, supporting our partners across the ecosystem who are building the first generation of reliable logical qubits. Check out the full technical breakdown in our latest developer blog by Kevin Mato, Scott Thornton, Ph.D., Melody Ren, Ben Howe, and Tom L. https://lnkd.in/gvC__zRd
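To illustrate what "sliding window" means structurally (a generic sketch of the pattern, not the CUDA-Q QEC API): the decoder processes a fixed-size window of syndrome rounds at a time, commits a correction for the oldest round, then slides forward by one round as new syndromes stream in. The majority-vote inner decoder is a toy stand-in for a real decoder like RelayBP.

```python
from collections import deque

def window_majority(rounds):
    """Toy inner decoder: majority-vote each syndrome bit across the window."""
    k = len(rounds[0])
    return [int(2 * sum(r[i] for r in rounds) > len(rounds)) for i in range(k)]

def sliding_window_decode(syndrome_stream, window=3, inner=window_majority):
    """Decode a stream of syndrome rounds with a sliding window.
    Each time the window fills, the inner decoder runs on the buffered rounds
    and a correction is committed; the bounded deque then slides by one round,
    so decoding keeps pace with the stream instead of waiting for it to end."""
    committed, buf = [], deque(maxlen=window)
    for s in syndrome_stream:
        buf.append(s)
        if len(buf) == window:
            committed.append(inner(list(buf)))
    return committed
```

The key property is that corrections are emitted while measurement is still running, which is what makes real-time decoding (syndrome processing concurrent with quantum operations) possible at all.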
-
PHOTON-INTERFACED SCALABLE QUANTUM NODES LINKING LIGHT AND MATTER

The photon-interfaced ten-qubit register of trapped ions constitutes a potential advance in the development of scalable quantum network nodes. In this architecture, each ion in a ten-qubit linear chain is individually entangled with a propagating photon, producing a sequential train of ion–photon Bell pairs with high fidelity. Previous experiments had only achieved this capability for one or two ions, making the extension to a full ten-qubit register a meaningful step toward practical matter-to-light interfaces for distributed quantum information processing.

The system operates by dynamically transporting ions into the mode of an optical cavity and driving a cavity-mediated Raman transition that generates a single photon entangled with the ion's internal qubit state. This procedure yields a time-ordered photonic qubit stream in which each photon carries the quantum information of a distinct ion.

The significance of this work lies in its direct response to a central challenge in quantum networking: the need to map the quantum state of a multi-qubit matter register onto a set of photonic qubits that can propagate through optical fiber with low loss. Trapped ions serve as exceptionally coherent stationary qubits, but they cannot be transported between processors. Photons, by contrast, function as low-loss flying qubits capable of transmitting quantum information over long distances. Ion–photon entanglement is therefore the essential mechanism for linking spatially separated ion-based processors.

Scaling this interface to ten ions establishes a clear path toward high-rate, multiplexed entanglement distribution. This scaling is particularly relevant in light of recent long-distance demonstrations in which multiple ions, each entangled with its own photon, were used to increase entanglement distribution rates over fiber links exceeding one hundred kilometers.
Generating a rapid sequence of entangled photons—each correlated with a different ion—enables temporal multiplexing, which is indispensable for overcoming fiber loss and improving heralded entanglement rates. The ten-ion photon-interfaced register provides precisely the type of multiplexed matter-to-light source required for such architectures.

Despite its importance, several technical challenges remain. Photon detection probabilities must be increased to support long-distance networking without excessive repetition rates. Sequential ion shuttling introduces timing overhead and potential motional heating, and cavity alignment and stability become increasingly demanding as the register size grows. Maintaining spectral and temporal indistinguishability across the full photon train is essential for multi-node entanglement generation and remains an active area of optimization. These challenges, however, represent engineering refinements rather than fundamental limitations.

DOI: https://lnkd.in/e5HRus5e
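The multiplexing gain is simple to quantify: with per-attempt herald probability p, firing N sequential ion-photon attempts per cycle raises the chance of at least one herald to 1 - (1 - p)^N. The fiber-loss and efficiency numbers below are assumed for illustration, not taken from the experiment.

```python
# Telecom fiber attenuation ~0.2 dB/km; photon survival over a 100 km link
p_fiber = 10 ** (-0.2 * 100 / 10)      # ~1% transmission

# Assumed combined source/collection/detector efficiency (illustrative)
p_click = 0.5 * p_fiber                # herald probability per attempt

def herald_prob(p, n_ions):
    """P(at least one heralded ion-photon pair in n independent attempts)."""
    return 1 - (1 - p) ** n_ions

# Ten multiplexed ions vs a single ion per cycle
gain = herald_prob(p_click, 10) / herald_prob(p_click, 1)
```

In the low-success regime the gain is nearly linear in the register size, so a ten-ion photon train buys close to a tenfold improvement in heralded entanglement rate per cycle, which is exactly why temporal multiplexing matters for links exceeding one hundred kilometers.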
-
Qubic's new cryogenic amplifier cuts quantum computer heat emissions by 10,000 times, offering a breakthrough in cooling and efficiency for next-generation machines. Heat is a major challenge in quantum computing, as excess energy disrupts qubits and causes errors, so reducing emissions is essential for scaling up powerful quantum systems. The device operates at extremely low temperatures, maintaining qubits in stable states while drastically minimizing unwanted thermal noise, allowing longer computations with higher accuracy. It could be launched as early as 2026, potentially revolutionizing how quantum computers are built, cooled, and deployed, making them more practical for real-world applications. Controlling heat at this scale reminds us that engineering solutions, combined with quantum science, are key to unlocking the full potential of quantum computing, enabling faster, more reliable, and energy-efficient machines.

The device is a cryogenic traveling-wave parametric amplifier (TWPA) made with specialized "quantum materials." Traditional amplifiers used for reading out qubit signals in superconducting quantum computers generate noticeable heat (even if small in absolute terms), which adds thermal noise, raises the cooling burden on dilution refrigerators, and limits how many qubits can be packed into one cryostat. Qubic's version reportedly cuts thermal output by a factor of 10,000, bringing it down to practically zero (on the order of 1–10 microwatts), while also reducing overall power consumption by about 50%.

Why this matters for quantum computing
- Heat is a core scaling bottleneck: Qubits (especially superconducting ones) must operate at millikelvin temperatures (~10–50 mK). Even tiny amounts of heat from readout electronics or control lines can cause decoherence, increase error rates, and require more powerful (and expensive) cryogenic systems.
- The amplifier's role: It boosts the faint microwave signals from qubits without adding much noise. Conventional semiconductor-based amplifiers at cryogenic stages dissipate more heat; this new TWPA minimizes that, potentially allowing twice as many qubits per dilution refrigerator by easing the thermal load and simplifying cabling.
- Potential impact: Lower cooling demands could cut operational costs and energy use significantly, making larger, more practical quantum systems feasible for real-world applications rather than just lab prototypes.

Timeline and status
The company has received grant funding and aims for commercialization/launch in 2026. As of early 2026 reports, development is ongoing with targets like 20 dB gain over a 4–12 GHz bandwidth. No major contradictions or retractions have appeared in credible coverage.
-
Delighted to share some work we've been developing over the past months! 📄
🔗🔗 https://lnkd.in/enXEQdDc 🔗🔗

✨ Building on our previous research [https://lnkd.in/eCjCDv2D], we've explored a new direction for modular quantum computing with surface codes. The focus is on whether emission-based hardware can support fault-tolerant quantum error correction.

The question we set out to answer: 🤔
📡 Can we distribute entanglement across modules without relying on slow and noisy two-qubit gates?
🔗 Our earlier work showed emission-based platforms were feasible but limited to thresholds of 0.16%
⚡ Is there a more efficient protocol path forward?

Our approach: 🎯
We propose single-shot GHZ state generation — creating the entangled states needed for stabilizer measurements directly, without Bell-pair fusion. The optical setup generates Bell pairs, W states, and GHZ states by simply observing photon detection patterns.

Benchmarking on realistic hardware: 🧪 #DiamondColorCenters #QuantumHardware
🔴 We modeled this for diamond color-center platforms (what experimentalists are actually building)
🔴 Full noise modeling includes photon loss, detector efficiency, and circuit-level errors
🔴 Both photon-number-resolving and standard detectors analyzed

The findings: 📊
We're grateful for what the analysis reveals about this architecture with circuit-level noise:
💎 Threshold of 0.24% with photon-number-resolving detectors
💎 Threshold of 0.19% with standard detectors
💎 These thresholds scale with hardware improvements — unlike previous approaches that saturated

Why this matters: 🛣️ #FaultTolerance #ModularQuantumComputing #QuantumErrorCorrection
This work suggests a practical pathway toward scalable modular quantum computers using hardware that's already being developed in labs. The protocols require only modest enhancements to existing emission-based setups.
Looking ahead: 🔮 #ExperimentalQuantum #QuantumNetworks #DistributedQuantum We hope these results help guide the experimental community's next steps. We've tried to provide clear hardware targets and realistic thresholds that could inform near-term implementations. Special thanks to our collaborators at QuTech, Keio University, and OIST for making this collaborative effort possible. 🙏 Daniel Bhatti, Rikiya Kashiwagi, David Elkouss, Kazufumi Tanji, Wojciech Roga, Masahiro Takeoka #QuantumComputing #SurfaceCode #Photonics #ColorCenters #QuantumErrorCorrection #ModularArchitectures #QuantumInternet
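Why threshold numbers like 0.19% and 0.24% matter can be seen from the standard surface-code scaling heuristic p_L ≈ A (p/p_th)^((d+1)/2) for code distance d. The prefactor and operating points below are illustrative, not from the paper:

```python
def logical_error_rate(p_phys, p_th, d, A=0.1):
    """Heuristic sub-threshold scaling for a distance-d surface code:
    every +2 in distance suppresses the logical rate by a further factor
    of roughly (p_phys / p_th)."""
    return A * (p_phys / p_th) ** ((d + 1) // 2)

# Physical error rate at half of a 0.24% threshold
pL5 = logical_error_rate(0.0012, 0.0024, 5)
pL7 = logical_error_rate(0.0012, 0.0024, 7)
```

Below threshold, growing the code exponentially suppresses logical errors; above it, growing the code makes things worse. So a higher threshold directly widens the hardware operating window, which is why thresholds that keep improving with hardware (rather than saturating) are the headline result.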
-
Google has made significant strides in quantum computing with the development of its latest quantum chip, Willow. This chip represents a major advancement toward building practical, large-scale quantum computers capable of solving complex problems far beyond the reach of classical supercomputers.

Key Features of Willow:
(1) Enhanced Qubit Count: Willow boasts 105 qubits, nearly doubling the count from its predecessor, the Sycamore chip. This increase enables more complex computations and improved error correction capabilities.
(2) Error Correction Breakthrough: A notable achievement with Willow is its ability to reduce errors exponentially as the system scales. This addresses a fundamental challenge in quantum computing, where qubits are highly sensitive and prone to errors. By effectively managing these errors, Willow paves the way for more reliable quantum computations.
(3) Unprecedented Computational Speed: In benchmark tests, Willow completed a complex computation in under five minutes—a task that would take the most advanced classical supercomputers an estimated 10 septillion years. This dramatic speedup underscores the potential of quantum computing to tackle problems currently deemed intractable.

Implications and Future Prospects:
The advancements demonstrated by Willow have profound implications across various fields:
(1) Cryptography: The immense processing power of quantum computers like Willow could potentially break current cryptographic systems, prompting a reevaluation of data security measures. However, experts note that while Willow's 105 qubits are impressive, breaking encryption such as that used by Bitcoin would require a quantum computer with around 13 million qubits. Therefore, while the threat is not immediate, it is a consideration for the future.
(2) Scientific Research: Quantum computing can revolutionize fields like drug discovery, materials science, and complex system modeling by performing simulations and calculations at unprecedented speeds.
(3) Artificial Intelligence: The ability to process vast datasets and perform complex optimizations rapidly could significantly enhance AI development and deployment.

While Willow marks a significant milestone, the journey toward fully functional, large-scale quantum computers continues. Ongoing research focuses on further increasing qubit counts, enhancing error correction methods, and developing practical applications for this transformative technology.