The preparation of GHZ states is a common benchmark for quantum processors. These states are not only a test of device-wide entanglement but also useful resources in numerous quantum algorithms. Our team recently demonstrated a 120-qubit logical GHZ state on our Heron r2 processors, the largest reported on any hardware. This includes a 60-logical-qubit GHZ state on a single-shot basis (i.e., with no readout error mitigation).

These experiments were enabled by error detection at both the device and the circuit level. At the device level, we use our knowledge of the device architecture to detect when couplers fail during a particular shot. At the circuit level, we use symmetries inherent in the GHZ state to detect certain violations.

The state preparation proceeds as follows. We first eliminate edges with bad CZ gates or bad readout (error rates above a given threshold). Then, starting from a qubit at the center of the remaining graph, we perform a breadth-first search (BFS) to prepare a GHZ state in shallow depth. During the BFS, some nodes are randomly blocked to increase the chance of finding check qubits. Afterwards, any node that does not belong to the GHZ state but is adjacent to two of its qubits may act as a check in a ZZ parity measurement (a sketch of this construction follows below).

We aim to maximize the "coverage" of the checks found through this randomization, while not increasing the depth beyond a given threshold above the best possible depth. The coverage is the number of locations in the circuit whose failure is detected by one of the checks, which we can compute efficiently using Pauli propagation. We can therefore predict exactly how many failures our checks will detect, and optimize the layout accordingly.

These experiments were performed by Ali Javadi and Simon Martiel. They leverage many of the recent advances made by our team, including improved readout on Herons, characterization of coupler errors, and M3 readout error mitigation.

For comparison, the recent demonstrations by Microsoft/Atom with a 24-qubit GHZ state, Quantinuum with a 50-qubit GHZ state, and Q-CTRL with a 75-qubit GHZ state (also on Heron) also relied on error detection.

As we chart the path toward quantum advantage, what really matters is how large a quantum circuit we can run and whether we can trust that the methods used give accurate results. While GHZ states are simple to simulate classically, this work shows that error detection with post-selection is a potentially viable tool, alongside error mitigation and sample-based quantum diagonalization, for running experiments at the utility scale (100+ qubits) and building the set of trusted tools needed to search for quantum advantage on near-term devices. This is why we are pushing near-term methods such as error mitigation and error detection on utility-scale quantum computers.
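To make the construction concrete, here is a minimal sketch, not the authors' code: it assumes a hypothetical coupling graph `G` (networkx) whose nodes are integer qubit indices, with bad edges already removed and the root and blocked set chosen by the caller. Only standard Qiskit calls are used.

```python
# Minimal sketch (assumption-laden, not the authors' implementation) of
# BFS-based GHZ preparation with ZZ-parity check qubits on a coupling
# graph G. Assumes nodes are qubit indices 0..N-1, `root` is not blocked,
# and the unblocked subgraph remains connected.
import networkx as nx
from qiskit import QuantumCircuit

def ghz_with_checks(G: nx.Graph, root: int, blocked: set) -> QuantumCircuit:
    usable = G.subgraph(set(G) - set(blocked))
    ghz_qubits = set(nx.bfs_tree(usable, root).nodes)

    qc = QuantumCircuit(G.number_of_nodes(), G.number_of_nodes())
    qc.h(root)
    for parent, child in nx.bfs_edges(usable, root):
        qc.cx(parent, child)           # entangle outward, level by level

    # Any non-GHZ node adjacent to >= 2 GHZ qubits can measure their
    # ZZ parity: even parity is expected, odd parity flags an error.
    for v in G:
        if v in ghz_qubits:
            continue
        nbrs = [u for u in G.neighbors(v) if u in ghz_qubits]
        if len(nbrs) >= 2:
            qc.cx(nbrs[0], v)
            qc.cx(nbrs[1], v)
            qc.measure(v, v)           # shot is discarded if this reads 1
    return qc
```

And a similarly hedged sketch of the coverage count: inject a single Pauli fault after each gate, propagate it forward through the remaining CX gates with the standard symplectic update rules, and count the fault as covered if it flips some check ancilla's Z-basis readout. The gate-list representation here is an illustrative choice, not the authors' data structure.

```python
# Sketch of check "coverage" via Pauli propagation. An error is tracked
# as {qubit: [x_bit, z_bit]}; CX conjugation spreads X from control to
# target and Z from target to control.
def propagate_cx(err, ctrl, tgt):
    c = err.setdefault(ctrl, [0, 0])
    t = err.setdefault(tgt, [0, 0])
    t[0] ^= c[0]   # X on control spreads to target
    c[1] ^= t[1]   # Z on target spreads to control

def coverage(cx_gates, check_ancillas):
    """Fraction of single-Pauli faults (X or Z on either qubit, after
    each gate) whose propagated error flips a check measurement."""
    detected = total = 0
    for i, gate in enumerate(cx_gates):
        for qubit in gate:
            for pauli in ("X", "Z"):
                err = {qubit: [1, 0] if pauli == "X" else [0, 1]}
                for ctrl, tgt in cx_gates[i + 1:]:   # propagate forward
                    propagate_cx(err, ctrl, tgt)
                total += 1
                # X (or Y) on an ancilla flips its Z-basis readout.
                if any(err.get(a, [0, 0])[0] for a in check_ancillas):
                    detected += 1
    return detected / total

# e.g. a 3-qubit GHZ chain 0-1-2 with a check ancilla 3 coupled to 0 and 2:
print(coverage([(0, 1), (1, 2), (0, 3), (2, 3)], {3}))
```

In practice one would extract the CX list and ancilla set from the circuit above, then rerun the randomized blocking until coverage stops improving within the depth budget, matching the optimization loop described in the post.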
Strategies for Managing Large-Scale Quantum Experiments
Summary
Strategies for managing large-scale quantum experiments involve organizing and controlling complex quantum systems with many qubits, aiming for stable operations and error correction to make quantum computers reliable and scalable. These approaches address both the technical challenges in maintaining coherence and the practical steps needed for real-time management of quantum devices.
- Implement error detection: Use built-in checks at both the device and circuit levels to spot errors and discard corrupted shots, ensuring trustworthy results even as the number of qubits increases.
- Automate calibration: Track changes in device performance and apply real-time feedback protocols, so quantum systems can adjust themselves and maintain stability during long experiments.
- Simulate distributed systems: Model interactions across multiple quantum nodes to plan scalable architectures, making it easier to manage and allocate resources when building larger quantum computers.
Quantum Scaling Recipe: ARQUIN Provides Framework for Simulating Distributed Quantum Computing Systems

Key Insights:
• Researchers from 14 institutions collaborated under the Co-design Center for Quantum Advantage (C2QA) to develop ARQUIN, a framework for simulating large-scale distributed quantum computers across different layers.
• The ARQUIN framework was created to address the "challenge of scale", one of the biggest hurdles in building practical, large-scale quantum computers.
• The results of this research were published in ACM Transactions on Quantum Computing, marking a significant step forward in quantum computing scalability research.

The Multi-Node Quantum System Approach:
• The research, led by Michael DeMarco from Brookhaven National Laboratory and MIT, draws inspiration from classical computing strategies that combine multiple computing nodes into a single unified framework.
• In theory, distributing quantum computations across multiple interconnected nodes can enable the scaling of quantum computers beyond the physical constraints of single-chip architectures.
• However, superconducting quantum systems face a unique challenge: qubits must remain at extremely low temperatures, typically achieved using dilution refrigerators.

The Cryogenic Scaling Challenge:
• Dilution refrigerators are currently limited in size and capacity, making it difficult to scale a quantum chip beyond certain physical dimensions.
• The ARQUIN framework introduces a strategy to simulate and optimize distributed quantum systems, allowing quantum processors located in separate cryogenic environments to interact effectively.
• This simulation framework models how quantum information flows between nodes, ensuring coherence and minimizing errors during inter-node communication.

Implications of ARQUIN:
• Scalability: ARQUIN offers a roadmap for scaling quantum systems by distributing computations across multiple quantum nodes while preserving quantum coherence.
• Optimized Resource Allocation: The framework helps determine the optimal allocation of qubits and operations across multiple interconnected systems.
• Improved Error Management: Distributed systems modeled by ARQUIN can better manage and mitigate errors, a critical requirement for fault-tolerant quantum computing.

Future Outlook:
• ARQUIN provides a simulation-based foundation for designing and testing large-scale distributed quantum systems before they are physically built.
• This framework lays the groundwork for next-generation modular quantum architectures, where interconnected nodes collaborate seamlessly to solve complex problems.
• Future research will likely focus on enhancing inter-node quantum communication protocols and refining the ARQUIN models to handle larger and more complex quantum systems.
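For intuition on why inter-node link quality dominates such designs, here is a back-of-the-envelope sketch. It uses the textbook Werner-state composition rule for entanglement swapping, not ARQUIN's actual model: chaining Bell pairs across nodes multiplies their Werner parameters p = (4F - 1) / 3, so end-to-end fidelity decays with the number of links.

```python
# Back-of-the-envelope sketch (textbook Werner-state composition, not
# the ARQUIN model): entanglement swapping across n noisy links
# multiplies the Werner parameters of the individual Bell pairs.
def end_to_end_fidelity(link_fidelities):
    p = 1.0
    for F in link_fidelities:
        p *= (4 * F - 1) / 3          # Werner parameter of each link
    return (3 * p + 1) / 4            # fidelity of the final pair

# Four inter-node links at 99% fidelity each already drop the final
# pair to roughly 96%, before any local gate or memory errors.
print(end_to_end_fidelity([0.99] * 4))  # ~0.961
```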
⚛️ Architectural mechanisms of a universal fault-tolerant quantum computer 📑 Quantum error correction (QEC) is believed to be essential for the realization of large-scale quantum computers. However, due to the complexity of operating on the encoded 'logical' qubits, understanding the physical principles for building fault-tolerant quantum devices and combining them into efficient architectures is an outstanding scientific challenge. Here we utilize reconfigurable arrays of up to 448 neutral atoms to implement all key elements of a universal, fault-tolerant quantum processing architecture and experimentally explore their underlying working mechanisms. We first employ surface codes to study how repeated QEC suppresses errors, demonstrating 2.14(13)x below-threshold performance in a four-round characterization circuit by leveraging atom-loss detection and machine-learning decoding. We then investigate logical entanglement using transversal gates and lattice surgery, and extend it to universal logic through transversal teleportation with 3D [[15,1,3]] codes, enabling arbitrary-angle synthesis with logarithmic overhead. Finally, we develop mid-circuit qubit re-use, increasing experimental cycle rates by two orders of magnitude and enabling deep-circuit protocols with dozens of logical qubits and hundreds of logical teleportations with [[7,1,3]] and high-rate [[16,6,4]] codes while maintaining constant internal entropy. Our experiments reveal key principles for efficient architecture design, involving the interplay between quantum logic & entropy removal, the judicious use of physical entanglement in logic gates & magic state generation, and the leveraging of teleportations for universality & physical qubit reset. These results establish foundations for scalable, universal error-corrected processing and its practical implementation with neutral atom systems. ℹ️ Bluvstein et al., 2025
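For context on the "2.14(13)x below-threshold" figure: the conventional way to quantify sub-threshold operation (this is the standard definition; the paper's exact protocol may differ) is the error-suppression factor Λ between code distances:

```latex
% Standard sub-threshold scaling for a distance-d surface code: once the
% physical error rate p is below the threshold p_th, the logical error
% rate falls exponentially in the code distance d.
\[
  p_L(d) \propto \left(\frac{p}{p_{\mathrm{th}}}\right)^{(d+1)/2},
  \qquad
  \Lambda = \frac{p_L(d)}{p_L(d+2)} .
\]
```

So Λ ≈ 2.14 means that each increase of the code distance by two divides the logical error rate by roughly 2.14, the hallmark of operating below threshold.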
#NewPaperAlert ⚛️ Happy to start the year with an exciting result on scaling up solid-state spin qubits! Check out our paper, "Towards autonomous time-calibration of large quantum-dot devices: Detection, real-time feedback, and noise spectroscopy," on arXiv (2512.24894).

Scaling quantum computers is as much about maintaining stability as it is about qubit count; more qubits only help if we can control them. Today we have proof-of-principle few-qubit devices, but scaling to thousands or millions of qubits will require autonomous qubit control that can recalibrate devices in real time before noise exhausts their coherence (T2) times.

It is well known that device imperfections, fabrication inhomogeneities, and the vicious two-level fluctuators (#TLFs) can cause each qubit to face a different local environment, leading to non-Markovian and power-law noise processes. Manifesting as drifts in gate voltages, these lower qubit gate fidelity and eventually forbid fault tolerance. This raises the question: how do we autonomously track drift in device parameters and apply feedback to correct for it? Answer: by tracking quantum dots in (2+1)D!

With experimental collaborators, we present a study on evaluating drift in quantum dots, identifying noise processes, and applying real-time feedback. In this work, we propose to monitor a sequence of 2D charge stability maps in time as a probe of the local electrostatic environment. In a first set of experiments, we track 10 quantum dots arranged on a 2D lattice and autonomously flag drifts as big as 5 millivolts! Access to these local trajectories also lets us study the underlying noise processes, think power spectral densities and Allan variances of each dot, without a sensor next to it. This in turn informs us on any two-level switching and provides feedback on device fabrication.

Tracking all quantum dots helps us identify a linear correlation length in our device, approximately 188 nanometers, implying that qubits within this distance can have correlated errors (an absolute no-no!) and suggesting that qubits be operated farther apart than this length. We also propose simple proportional-only feedback protocols to stabilize each quantum dot over time (see the sketch below). To make contact with experiments, we benchmark the robustness of our approach and find that our method offers a detection accuracy of up to ~90% for signal-to-noise ratios of 0.7.

I hope these methods become a standard part of the autonomous qubit tuning stack, leading to more stable, fault-tolerant hardware. Huge thanks to my collaborators Barnaby van Straaten, Francesco Borsoi, Menno Veldhorst, and Justyna Zwolak for the support. Happy to see this collaboration between University of Maryland – College of Computer, Mathematical, and Natural Sciences and Delft University of Technology progress!

🔗 Read the full paper on arXiv: https://lnkd.in/edSVuCz3

#QuantumComputing #Physics #SpinQubits #DeepTech #FaultTolerance
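The proportional-only idea is simple enough to show in a toy loop. A hedged sketch, with a synthetic random walk standing in for the real charge-stability-map drift tracking; the gain and noise scale are illustrative numbers, not the paper's:

```python
# Toy proportional-only feedback loop: a gate voltage is nudged back
# toward its setpoint each cycle as slow drift (random walk, standing in
# for TLF-induced drift) accumulates. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
v_set, v_gate, drift, gain = 0.0, 0.0, 0.0, 0.5

residuals = []
for _ in range(1000):
    drift += rng.normal(0.0, 1e-4)       # slow environmental drift per cycle
    v_meas = v_gate + drift              # apparent dot position, incl. drift
    v_gate += gain * (v_set - v_meas)    # proportional correction only
    residuals.append(v_set - v_meas)

# With feedback on, the residual stays near zero instead of wandering
# off like the uncorrected random walk would.
print(f"rms residual: {np.std(residuals):.2e} V")
```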