Improving Quantum AI Performance Against Shot Noise

Summary

Improving quantum AI performance against shot noise means making quantum computers and algorithms more reliable despite the statistical fluctuations inherent in quantum measurement: each execution of a circuit (a "shot") returns a random outcome, so any quantity estimated from a finite number of shots carries sampling error. Shot noise degrades the accuracy of quantum calculations, so researchers look for smarter techniques to maintain performance even as these errors creep in.

  • Refine sampling methods: Use smarter protocols and mathematical models to reduce the number of measurements needed, making quantum algorithms less sensitive to noisy data (see the short sketch after this list for why shot count matters).
  • Tune parameters carefully: Adjust quantum circuit settings and optimization parameters to minimize the impact of shot noise and hardware drift during computations.
  • Incorporate real-time feedback: Integrate dynamic calibration and learning systems that adapt as the quantum hardware changes, improving stability and prediction accuracy throughout long experiments.
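
To make that concrete: the standard error of a sampled expectation value shrinks only as one over the square root of the shot count, so halving the error costs four times the measurements. Below is a minimal Python sketch of that scaling; the ±1 observable and its true expectation value are hypothetical stand-ins for a real circuit measurement.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical observable with eigenvalues +/-1 (e.g. a Pauli-Z
# measurement) whose true expectation value we pretend to know.
true_expval = 0.6
p_plus = (1 + true_expval) / 2   # probability of the +1 outcome

for shots in [100, 1_000, 10_000, 100_000]:
    # Simulate `shots` independent single-shot outcomes.
    outcomes = rng.choice([+1, -1], size=shots, p=[p_plus, 1 - p_plus])
    estimate = outcomes.mean()
    std_err = outcomes.std(ddof=1) / np.sqrt(shots)  # ~ 1/sqrt(shots)
    print(f"shots={shots:>7}  estimate={estimate:+.4f}  std_err~{std_err:.4f}")
```

Every technique in the posts below is, in one way or another, an attempt to beat or sidestep this square-root tax.
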
  • Jay Gambetta
    Director of IBM Research and IBM Fellow

    Recently the team published a paper in Nature Computational Science, in collaboration with researchers from Los Alamos National Lab and the University of Basel, on provable bounds for noise-free expectation values computed from noisy samples. This collaboration started in the optimization working group.

    The paper discusses how the “Layer Fidelity”, i.e. the effective two-qubit error as measured by the “Error Per Layered Gate” (EPLG), can be used to quantify the impact of hardware noise on sampling-based quantum (optimization) algorithms. Each one of our devices reports this number in the resource tab of the IBM Quantum Platform (https://lnkd.in/eRd2yKwB). The paper allows you to estimate the number of additional shots required to compensate for the impact of noise. It turns out that this method is much cheaper than mitigating the noise when unbiased estimators of expectation values are required (a sampling overhead of sqrt(gamma) vs. gamma^2).

    These insights allowed us to prove that the Conditional Value at Risk (CVaR), an alternative loss function borrowed from mathematical finance, suggested in 2019 and widely used to train variational algorithms, leads to provable bounds on expectation values using only noisy samples. The theoretical insights have been demonstrated on two use cases using up to 127 qubits: estimation of state fidelity (as required, e.g., to evaluate quantum kernels) and optimization (QAOA). In both cases, the team sees good agreement between theory and experiment. Read the paper here: https://lnkd.in/ehyz4GCJ
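
For readers unfamiliar with the CVaR objective mentioned in the post, here is a minimal sketch of how a CVaR estimator is commonly computed from sampled energies: average only the best alpha-fraction of shots. This is a generic illustration rather than the paper's exact procedure, and the Gaussian `energies` array is a hypothetical stand-in for bitstring energies measured on hardware.

```python
import numpy as np

def cvar(sample_energies: np.ndarray, alpha: float) -> float:
    """Conditional Value at Risk: the mean of the lowest alpha-fraction
    of sampled energies. alpha=1 recovers the plain sample mean; small
    alpha focuses the loss on the best shots observed so far."""
    sorted_e = np.sort(sample_energies)
    k = max(1, int(np.ceil(alpha * sorted_e.size)))
    return float(sorted_e[:k].mean())

# Hypothetical energies sampled from a noisy device.
rng = np.random.default_rng(0)
energies = rng.normal(loc=-1.0, scale=0.5, size=4096)

print("mean    :", round(float(energies.mean()), 4))  # alpha = 1
print("CVaR 25%:", round(cvar(energies, 0.25), 4))
print("CVaR  5%:", round(cvar(energies, 0.05), 4))
```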

  • Marco Pistoia
    CEO, IonQ Italia

    🚀 Exciting News! 🚀 I'm happy to share the most recent results from the longstanding collaboration between JPMorganChase and Argonne National Laboratory! Our scientific #QuantumComputing paper, "End-to-End Protocol for High-Quality QAOA Parameters with Few Shots," has just been published on arXiv. 📚✨

    In this article, we explore the #Quantum Approximate Optimization Algorithm (QAOA) parameter setting under realistic hardware execution scenarios, where the number of circuit executions (shots) is limited.

    🔍 Key Highlights:
    - Developed an end-to-end protocol for QAOA parameter setting, encompassing problem rescaling, parameter initialization, and shot-frugal fine-tuning.
    - Discovered that, given limited shots, an optimizer with the simplest internal model (linear) performs best.
    - Optimized the hyperparameters of the optimizer through extensive simulations.
    - Demonstrated the robustness of the pipeline to small amounts of hardware noise on both MaxCut and #PortfolioOptimization problems.

    Read the full paper here: https://lnkd.in/ecg2QMBs

    To the best of our knowledge, these are the largest demonstrations of QAOA parameter fine-tuning on a trapped-ion processor, using up to 32 qubits and five QAOA layers. A big thank you to our coauthors from the Global Technology Applied Research team at JPMorganChase: Tianyi Hao, Zichang He, and Ruslan Shaydulin; and to our coauthor from Argonne National Laboratory, Jeffrey Larson.
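
The finding that an optimizer with the simplest internal model (linear) performs best is worth unpacking. The sketch below illustrates the general idea of such an optimizer under a tight shot budget: fit a local linear surrogate to a handful of noisy objective evaluations and step along its downhill direction. The toy landscape, shot-noise model, and step sizes are all assumptions for illustration, not the protocol from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_objective(params: np.ndarray, shots: int) -> float:
    """Hypothetical stand-in for a QAOA energy estimated from a limited
    number of shots: a smooth landscape plus sampling noise whose
    standard error shrinks as 1/sqrt(shots)."""
    true_value = float(np.sum(np.sin(params) ** 2))  # toy landscape
    return true_value + rng.normal(0.0, 1.0 / np.sqrt(shots))

def linear_model_step(params, shots, delta=0.1, lr=0.2):
    """Fit a local *linear* model from a few noisy probes and step along
    its downhill direction -- the kind of simple internal model the post
    reports working best at low shot counts."""
    n = len(params)
    probes = rng.normal(0.0, delta, size=(2 * n, n))  # probe offsets
    values = np.array([noisy_objective(params + p, shots) for p in probes])
    # Least-squares fit: values ~ f0 + g . probe  =>  g ~ local gradient.
    A = np.hstack([np.ones((2 * n, 1)), probes])
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    gradient = coeffs[1:]
    return params - lr * gradient

params = rng.uniform(0, np.pi, size=4)   # 2 QAOA layers => 4 angles
for step in range(30):
    params = linear_model_step(params, shots=200)
print("final params:", np.round(params, 3))
print("final energy:", noisy_objective(params, shots=10_000))
```

The appeal at low shot counts is that a linear fit needs only a handful of noisy evaluations per step and cannot overfit the noise the way richer surrogate models can.
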

  • Zlatko Minev
    Google Quantum AI | MIT TR35 | Ex-Team & Tech Lead, Qiskit Metal & Qiskit Leap, IBM Quantum | Founder, Open Labs | JVA | Board, Yale Alumni

    I'm excited to share our latest work, Demonstration of robust and efficient quantum property learning with shallow shadows, published in Nature Communications! 🎉

    📝 Authors: Hong-Ye Hu, Andi Gu, Swarnadeep Majumder, Hang Ren, Yipei Zhang, Derek S. Wang, Yi-Zhuang You, Zlatko Minev, Susanne F. Yelin, Alireza Seif

    🔍 Context: Extracting information efficiently from quantum systems is crucial for advancing quantum information processing. Classical shadow tomography offers a powerful technique, but it struggles with noisy, high-dimensional quantum states and complex observables.

    🤔 Key Question: Can we overcome noise limitations and improve sample efficiency in quantum state learning, especially for high-weight and non-local observables, using shallow quantum circuits?

    💡 Our Findings: We introduce robust shallow shadows, a protocol designed to mitigate noise using Bayesian inference, enabling highly efficient learning of quantum state properties even in the presence of noise. Our experiments on a 127-qubit superconducting quantum processor confirm the protocol's practical use, showing up to a 5x reduction in sample complexity compared to traditional methods.

    ✨ Key Takeaways:
    1. Noise resilience: Accurate predictions across diverse quantum state properties.
    2. Sample efficiency: A substantial reduction in sample complexity for high-weight and non-local observables.
    3. Scalability: The protocol is well suited to near-term quantum devices, even with noise.

    Paper: https://lnkd.in/dW4NJ23Q
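
As background for what shallow shadows improve on, here is a minimal single-qubit sketch of plain (depth-0, random-Pauli) classical shadow tomography: measure in a random Pauli basis and apply the analytic inverse of the measurement channel to each snapshot. The paper's protocol replaces the single-qubit rotations with shallow entangling circuits and adds Bayesian noise inversion, neither of which is shown here; the state below is a hypothetical example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Basis-change unitaries: measuring Z after applying U is equivalent to
# measuring X, Y, or Z on the original state.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.diag([1, -1j]).astype(complex)
UNITARIES = [H, H @ Sdg, np.eye(2, dtype=complex)]  # X, Y, Z bases

# Hypothetical unknown state |psi> = cos(t)|0> + sin(t)|1>.
t = 0.4
psi = np.array([np.cos(t), np.sin(t)], dtype=complex)

def snapshot():
    """One shadow snapshot: pick a random Pauli basis, take one shot,
    then apply the analytic inverse of the measurement channel,
    rho_hat = 3 U^dag |b><b| U - I, whose average reproduces rho."""
    U = UNITARIES[rng.integers(3)]
    probs = np.abs(U @ psi) ** 2          # Born rule in the rotated basis
    b = rng.choice(2, p=probs / probs.sum())
    proj = np.zeros((2, 2), dtype=complex)
    proj[b, b] = 1                        # |b><b|
    return 3 * U.conj().T @ proj @ U - np.eye(2)

# Estimate <Z> from 10,000 single-shot snapshots.
Z = np.diag([1.0, -1.0])
rho_hat = sum(snapshot() for _ in range(10_000)) / 10_000
print("shadow estimate of <Z>:", np.trace(rho_hat @ Z).real)
print("exact <Z>             :", np.cos(t) ** 2 - np.sin(t) ** 2)
```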

  • Frédéric Barbaresco
    THALES "QUANTUM ALGORITHMS/COMPUTING" AND "AI/ALGO FOR SENSORS" SEGMENT LEADER

    Sample-based Krylov Quantum Diagonalization by IBM: https://lnkd.in/eEBBt7jJ

    Abstract: Approximating the ground state of many-body systems is a key computational bottleneck underlying important applications in physics and chemistry. It has long been viewed as a promising application for quantum computers. The most widely known quantum algorithm for ground state approximation, quantum phase estimation, is out of reach of current quantum processors due to its high circuit depths. Quantum diagonalization algorithms based on subspaces represent alternatives to phase estimation that are feasible for pre-fault-tolerant and early-fault-tolerant quantum computers. Here, we introduce a quantum diagonalization algorithm which combines two key ideas on quantum subspaces: a classical diagonalization based on quantum samples, and subspaces constructed with quantum Krylov states. We prove that our algorithm converges in polynomial time under the working assumptions of Krylov quantum diagonalization and sparseness of the ground state. We then show numerical investigations of lattice Hamiltonians, which indicate that our method can outperform existing Krylov quantum diagonalization in the presence of shot noise, making our approach well suited for near-term quantum devices. Finally, we carry out the largest ground-state quantum simulation of the single-impurity Anderson model on a system with 41 bath sites, using 85 qubits and up to 6·10³ two-qubit gates on a Heron quantum processor, showing excellent agreement with density matrix renormalization group calculations.
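
To see the shape of the algorithm, here is a small classical caricature of Krylov diagonalization: project the Hamiltonian onto a Krylov subspace and solve the resulting generalized eigenproblem. Everything here is exact linear algebra on a random symmetric stand-in matrix (SciPy assumed available); on hardware the subspace matrices are estimated from measured samples, which is where shot noise enters, and the paper's sample-based variant instead builds its subspace from bitstrings sampled out of the Krylov states, which is what buys robustness to that noise.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)

# Hypothetical stand-in Hamiltonian: a random real symmetric matrix.
n, k = 64, 6                     # Hilbert-space dim, Krylov dimension
A = rng.normal(size=(n, n))
Hmat = (A + A.T) / 2

# Krylov basis {v, Hv, H^2 v, ...} from a random reference vector
# (quantum versions typically use time-evolved states instead of powers).
v = rng.normal(size=n)
basis = [v / np.linalg.norm(v)]
for _ in range(k - 1):
    w = Hmat @ basis[-1]
    basis.append(w / np.linalg.norm(w))
B = np.stack(basis, axis=1)      # n x k matrix of basis columns

# Project into the subspace and solve the generalized eigenproblem
# Ht c = E S c (the step whose inputs a quantum device would estimate
# from samples rather than compute exactly).
S = B.T @ B                      # overlap (Gram) matrix
Ht = B.T @ Hmat @ B              # projected Hamiltonian
evals = eigh(Ht, S, eigvals_only=True)
print("Krylov estimate of E0:", evals[0])
print("exact E0             :", np.linalg.eigvalsh(Hmat)[0])
```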

  • Pranav Kulkarni
    Founding Research Engineer at QuPrayog

    **Qubits Drift. Google Just Gave Them an Autopilot.**

    Quantum processors are not stable machines – they slowly drift out of tune. Tiny changes in temperature, vibrations, and electronics mean the gate you calibrated in the morning is slightly wrong by the afternoon. Over time, that drift quietly increases the error rate, and even with quantum error correction (QEC), your logical qubit fidelity starts to fall off.

    The standard fix today is brutal: stop the computation, recalibrate, then resume. That's barely acceptable for short experiments, and totally unrealistic for fault-tolerant algorithms that might run for hours or days.

    Google Quantum AI's new paper, "Reinforcement Learning Control of Quantum Error Correction", takes a different approach: they merge calibration with computation. Instead of pausing the QEC cycles, they:
    • Treat QEC syndromes (error signals) as feedback about how the hardware is drifting.
    • Use a reinforcement learning (RL) agent to nudge thousands of control parameters (pulse amplitudes, frequencies, couplings) while the code is running.
    • Optimize for a lower logical error rate, not just pretty single-qubit gate metrics.

    On their superconducting Willow processor, this RL "autopilot":
    • Improves the logical error-rate stability of a distance-5 surface code by about 3.5× against injected drift.
    • Gives ~20% extra suppression of the logical error rate on top of already hand-tuned, state-of-the-art calibration.
    • Scales in simulation to larger surface codes (up to distance-15) with optimization speed that doesn't degrade with code size.

    How does this compare to other decoders?
    • Classical decoders (like matching decoders) assume the noise model is roughly fixed and then compute the best correction from the syndrome history.
    • Learned decoders try to map syndromes → corrections more accurately, but still assume a mostly stable device.
    • RL-QEC doesn't replace the decoder – it steers the hardware and decoder together so the same QEC stack keeps working even as the environment drifts.

    If we want truly useful quantum computers, adding more qubits isn't enough. We'll also need systems that learn to stay calibrated while they compute, and this paper is one of the first serious demonstrations of that idea.

    Paper: https://lnkd.in/ek2pDgek

    #QuantumComputing #QuantumErrorCorrection #ReinforcementLearning #GoogleQuantumAI
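
A toy caricature of the closed-loop idea (emphatically not Google's agent, reward function, or parameter set): treat a noisy error-rate signal as feedback and nudge a single drifting control parameter between cycles, without ever pausing. The drift model, probe rule, and step sizes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def error_rate(control: float, optimum: float) -> float:
    """Toy logical error rate: minimized when the control parameter
    matches a drifting optimum, observed only through noise (a stand-in
    for syndrome statistics gathered during QEC cycles)."""
    base = 1e-3 + 0.05 * (control - optimum) ** 2
    return base * rng.lognormal(0.0, 0.1)

optimum, control, step = 0.0, 0.0, 0.02
for cycle in range(2001):
    optimum += rng.normal(0.0, 0.001)        # slow hardware drift
    # Probe both directions and move toward whichever looked better,
    # without pausing the "computation" that produces the signal.
    if error_rate(control + step, optimum) < error_rate(control - step, optimum):
        control += 0.5 * step
    else:
        control -= 0.5 * step
    if cycle % 500 == 0:
        print(f"cycle {cycle:4d}: optimum={optimum:+.3f} control={control:+.3f}")
```

The real system does this over thousands of coupled parameters with a learned policy, but the core loop (observe syndrome statistics, adjust controls, keep computing) is the same.
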
