The recent results from the collaboration between IBM Quantum, Algorithmiq, and Prof. John Goold's group at Trinity College Dublin are a good example of how we can work together towards the goal of quantum advantage, and they continue to show that we are in what I call the era of quantum utility, where a quantum computer can be used to explore interesting science beyond exact circuit simulation. The team investigates a special case of many-body dynamics using a class of maximally chaotic circuits known as dual-unitary circuits, which are composed of gates that are unitary in both space and time. They execute these circuits on our Eagle processor (ibm_strasbourg), leveraging advances in Pauli noise learning, parametric updates for fast circuit compilation, and the tensor-network error mitigation (TEM) method developed by Algorithmiq, which runs entirely in classical post-processing. They exploit the fact that at the dual-unitary point there are analytical solutions for certain correlation functions, and then perturb the circuits away from this point, where neither analytical solutions nor brute-force classical simulation are possible. In this parameter regime they compare their results to approximate classical simulations, known as tensor-network methods, in both the Heisenberg and Schrödinger pictures. This is in itself a powerful benchmarking tool for quantum computers, since it can be used to show that error mitigation is working at scale, and it also continues to expand the methodology of how we search for advantage. The team was able to execute circuit volumes of up to 91 qubits and roughly 4,100 two-qubit gates (see figure) and show very good agreement with the exact solution. When they perturbed away from the analytically solvable point, the hardware results continued to agree with the Heisenberg-picture simulations, while the Schrödinger-picture simulations failed to reproduce them (see figure).
This gives us confidence that our quantum computers are working at what we call the utility scale, and that as a field we are developing new methods to perturb our circuits beyond exact verification while still maintaining confidence in their accuracy. Furthermore, this result adds to a growing body of work demonstrating the use of classical HPC to extend the reach of current quantum computers, an architecture we call quantum-centric supercomputing. The preprint can be found here: https://lnkd.in/ermzhdH3 and Algorithmiq have added their TEM method to our Qiskit Functions Catalog: https://lnkd.in/eWCNrsuY
Strategies for Quantum Circuit Execution in Noisy Environments
Summary
Strategies for quantum circuit execution in noisy environments focus on methods to run quantum algorithms on hardware that produces imperfect results due to random disturbances. These approaches help boost the reliability and accuracy of quantum computations, even when errors and noise are unavoidable.
- Implement error mitigation: Use techniques such as zero-noise extrapolation and probabilistic error cancellation to correct or reduce the impact of hardware-induced errors during quantum computations.
- Simplify circuit design: Reduce circuit depth and gate complexity by grouping similar data samples and tailoring circuits to the quantum hardware, which helps limit noise exposure and improve outcome reliability.
- Utilize classical simulation: Combine classical computing resources with quantum machines to verify results and extend the reach of quantum algorithms beyond what current hardware alone can achieve.
Interesting new study: "EnQode: Fast Amplitude Embedding for Quantum Machine Learning Using Classical Data." The authors introduce a novel framework to address the limitations of traditional amplitude embedding (AE) [GitHub repo included]. Traditional AE methods often involve deep, variable-length circuits, which can lead to high output error due to extensive gate usage and inconsistent error rates across different data samples. This variability in circuit depth and gate composition results in unequal noise exposure, obscuring the true performance of quantum algorithms. To overcome these challenges, the researchers developed EnQode, a fast AE technique based on symbolic representation. Instead of aiming for exact amplitude representation of each sample, EnQode employs a cluster-based approach to achieve approximate AE with high fidelity. Here are some of the key aspects of EnQode:
- Clustering: EnQode begins by using the k-means clustering algorithm to group similar data samples. For each cluster, a mean state is calculated to represent the central characteristics of the data distribution within that cluster.
- Hardware-optimized ansatz: For each cluster's mean state, a low-depth, machine-optimized ansatz is trained, tailored to the specific quantum hardware being used (e.g., IBM quantum devices).
- Transfer learning for fast embedding: Once the cluster models are trained offline, transfer learning is used for rapid amplitude embedding of new data samples. An incoming sample is assigned to the nearest cluster, and its embedding circuit is initialized with the optimized parameters of that cluster's mean state. These parameters can then be fine-tuned, significantly accelerating the embedding process without retraining from scratch.
- Reduced circuit complexity: EnQode achieved an average reduction of over 28× in circuit depth, over 11× in single-qubit gate count, and over 12× in two-qubit gate count, with zero variability across samples due to its fixed ansatz design.
- Higher state fidelity in noisy environments: In noisy IBM quantum hardware simulations, EnQode showed a state-fidelity improvement of over 14× compared to the baseline, highlighting its robustness to hardware noise. While the baseline achieves 100% fidelity in ideal simulation (as it performs exact embedding), EnQode maintained an average of 89% fidelity for hardware-transpiled circuits in ideal simulation, which is a good approximation given the significant reduction in circuit complexity.
Here is the article: https://lnkd.in/dQMbNN7b
And here is the GitHub repo: https://lnkd.in/dbm7q3eJ
#qml #datascience #machinelearning #quantum #nisq #quantumcomputing
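The clustering and warm-start routing described above can be sketched in plain NumPy. This is a toy illustration of the idea, not EnQode's implementation: the data, the hand-rolled k-means, and the routing step are all stand-ins, and the offline ansatz training that would attach parameters to each cluster is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for classical data to be amplitude-embedded: unit-norm vectors.
data = rng.normal(size=(200, 8))
data /= np.linalg.norm(data, axis=1, keepdims=True)

def kmeans(points, k, iters=25, seed=0):
    """Plain k-means: returns one centroid per cluster."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as its cluster's mean.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids

centroids = kmeans(data, k=4)
# Each cluster's normalized mean state would get its own low-depth ansatz
# trained offline; here we only compute the mean states themselves.
mean_states = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)

# At runtime, a new sample is routed to the nearest cluster, and that
# cluster's pre-trained ansatz parameters seed the fine-tuning.
new_sample = data[0]
nearest = np.linalg.norm(mean_states - new_sample, axis=1).argmin()
print(f"new sample warm-starts from cluster {nearest}'s parameters")
```

Because every sample reuses the same fixed ansatz shape, circuit depth and gate counts are identical across samples, which is where the "zero variability" property comes from.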
One of the things I truly enjoy about quantum computing is how we can leverage its intrinsic properties — such as reversibility — to turn hardware limitations into opportunities in the NISQ era. 🤓 In a world where noise is unavoidable, what if we treat noise not just as a problem… but as part of the algorithmic workflow? 🚀
This is precisely the idea behind error mitigation techniques like Zero Noise Extrapolation (ZNE). The intuition is elegant: We start by considering our original circuit as the baseline noise level (scale factor = 1). 👀 Then, we deliberately increase the noise — either locally or globally — by inserting additional gate operations that effectively compose to the identity. Mathematically, the circuit remains unchanged. 😆 Physically, however, the hardware accumulates more noise. 😲 By measuring the observable at different noise levels and extrapolating back to the zero-noise limit, we can estimate what the result would have been in an ideal, noiseless regime. Instead of fighting noise directly, we model it — and use it.
Have you implemented ZNE in your workflows? Or have you explored how noise actually scales with additional gate insertions on real hardware? 🤓 I'm sharing a resource from QGSS25, where we discussed this in depth and built a hands-on notebook around it with some great colleagues: https://lnkd.in/eXDRrKBb What other error mitigation resources or techniques have you found useful? I'd love to hear your thoughts. #QuantumComputing #NISQ #ErrorMitigation #ZeroNoiseExtrapolation #QuantumAlgorithms #QuantumHardware #QuantumEngineering #Qiskit #QuantumResearch #DeepTech #QuantumOptimization
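The extrapolation step above fits well in a few lines of NumPy. A minimal sketch: the expectation values and scale factors below are made-up illustrative numbers (in practice you would measure them on hardware after folding in identity-composing gate pairs), and a linear fit is just one of several extrapolation models.

```python
import numpy as np

# Hypothetical noisy expectation values of some observable, measured at
# increasing noise scale factors (1 = the unmodified circuit; 2 and 3
# obtained by inserting gate pairs that compose to the identity).
scale_factors = np.array([1.0, 2.0, 3.0])
noisy_expectations = np.array([0.81, 0.66, 0.54])  # illustrative data

# Fit a model (here linear) to expectation vs. noise scale, then
# extrapolate back to the zero-noise limit (scale factor -> 0).
slope, intercept = np.polyfit(scale_factors, noisy_expectations, deg=1)
zne_estimate = intercept  # value of the fit at scale factor 0

print(f"Zero-noise extrapolated value: {zne_estimate:.3f}")
```

Richardson, exponential, or polynomial extrapolants follow the same pattern; the choice matters because the true decay of the signal with noise is rarely exactly linear.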
So why don't quantum computers work perfectly right out of the box? Even when running a simple quantum circuit on real hardware, various sources of noise cause the measured results to decay away from ideal values. Furthermore, incoherent noise, miscalibrations, and measurement errors all pile up and degrade the signal quickly with circuit depth. Error mitigation offers a clever way to recover accurate results without the overhead of full quantum error correction. The idea behind probabilistic error cancellation is a bit counterintuitive. If you can learn how noise is affecting your gates, you can deliberately inject extra operations that cancel those errors out on average. You end up needing more circuit runs to compensate for the added randomness, but in return you get results that are free of systematic bias. I covered these topics and more in a lecture at the 2024 Near-Term Quantum Algorithms Summer School. It starts from a single qubit and builds up with worked examples and derivations along the way. The goal is to keep each topic approachable no matter one's individual background. Slides are available at: zlatko-minev.com/education #QuantumComputing #ErrorMitigation #Physics #NISQ #Science
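The "inject operations that cancel errors on average" idea can be made concrete with a toy model. A minimal sketch, assuming a single-qubit depolarizing channel with an assumed error probability p = 0.06 (real PEC learns sparse Pauli noise models for the actual device and handles multi-qubit layers); the inverse channel is written as a quasi-probability mixture of Pauli insertions, one of which carries a negative coefficient, and Monte Carlo sampling with sign reweighting recovers the unbiased expectation:

```python
import numpy as np

p = 0.06                      # assumed depolarizing error probability
lam = 1.0 - 4.0 * p / 3.0     # factor by which the channel shrinks <Z>

# Quasi-probability representation of the inverse channel:
# D^{-1} = alpha * Id + beta * (X·X + Y·Y + Z·Z conjugations), beta < 0.
beta = (1.0 - 1.0 / lam) / 4.0
alpha = 1.0 / lam + beta
gamma = abs(alpha) + 3 * abs(beta)   # sampling overhead (> 1)

# Effect of inserting Pauli P after the noisy gate on the measured <Z>:
# I and Z leave it unchanged (+1); X and Y flip its sign (-1).
signs = {"I": +1, "X": -1, "Y": -1, "Z": +1}
coeffs = {"I": alpha, "X": beta, "Y": beta, "Z": beta}

rng = np.random.default_rng(7)
paulis = list(coeffs)
probs = np.array([abs(coeffs[P]) for P in paulis]) / gamma

ideal = 1.0                   # <Z> for |0> under a noiseless identity gate
noisy = lam * ideal           # what the hardware would report

samples = []
for _ in range(20000):
    P = paulis[rng.choice(len(paulis), p=probs)]
    outcome = signs[P] * noisy           # expectation with P inserted
    weight = gamma * np.sign(coeffs[P])  # reweight by sign and overhead
    samples.append(weight * outcome)

print(f"noisy <Z> = {noisy:.3f}, PEC estimate = {np.mean(samples):.3f}")
```

The estimate is unbiased but its variance is inflated by gamma squared, which is exactly the "more circuit runs to compensate for the added randomness" trade-off described above.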