Interesting new study: "EnQode: Fast Amplitude Embedding for Quantum Machine Learning Using Classical Data." The authors introduce a novel framework to address the limitations of traditional amplitude embedding (AE) [GitHub repo included]. Traditional AE methods often require deep, variable-length circuits, whose extensive gate usage leads to high output error and to inconsistent error rates across data samples. This variability in circuit depth and gate composition results in unequal noise exposure, obscuring the true performance of quantum algorithms. To overcome these challenges, the researchers developed EnQode, a fast AE technique based on symbolic representation. Instead of aiming for an exact amplitude representation of each sample, EnQode employs a cluster-based approach to achieve approximate AE with high fidelity. Key aspects of EnQode:
* Clustering: EnQode first groups similar data samples using the k-means algorithm. For each cluster, a mean state is calculated to represent the central characteristics of the data distribution within that cluster.
* Hardware-optimized ansatz: For each cluster's mean state, a low-depth, hardware-optimized ansatz is trained, tailored to the specific quantum hardware being used (e.g., IBM quantum devices).
* Transfer learning for fast embedding: Once the cluster models are trained offline, transfer learning enables rapid amplitude embedding of new data samples. An incoming sample is assigned to the nearest cluster, and its embedding circuit is initialized with the optimized parameters of that cluster's mean state. These parameters can then be fine-tuned, significantly accelerating the embedding process without retraining from scratch.
* Reduced circuit complexity: EnQode achieved an average reduction of over 28× in circuit depth, over 11× in single-qubit gate count, and over 12× in two-qubit gate count, with zero variability across samples due to its fixed ansatz design.
* Higher state fidelity in noisy environments: In noisy simulations of IBM quantum hardware, EnQode showed a state fidelity improvement of over 14× compared to the baseline, highlighting its robustness to hardware noise. While the baseline achieves 100% fidelity in ideal simulation (it performs exact embedding), EnQode maintained an average fidelity of 89% when its circuits were transpiled for real hardware and run in ideal simulation, a good approximation given the significant reduction in circuit complexity.
Here is the article: https://lnkd.in/dQMbNN7b
And here is the GitHub repo: https://lnkd.in/dbm7q3eJ
#qml #datascience #machinelearning #quantum #nisq #quantumcomputing
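The cluster-then-warm-start workflow above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: `cluster_params` is a hypothetical stand-in for each cluster's pre-trained ansatz angles, and the data is synthetic.

```python
import numpy as np

def kmeans(data, k, iters=50, seed=0):
    """Plain k-means: returns centroids and per-sample cluster labels."""
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean(axis=0)
    return centroids, labels

def warm_start(sample, centroids, cluster_params):
    """Assign a new sample to its nearest cluster and return that cluster's
    pre-trained parameters as the fine-tuning starting point."""
    j = int(np.linalg.norm(centroids - sample, axis=1).argmin())
    return j, cluster_params[j]

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (20, 2)),    # blob around (0, 0)
                  rng.normal(5.0, 0.1, (20, 2))])   # blob around (5, 5)
centroids, labels = kmeans(data, k=2)
# Placeholder "trained ansatz angles" per cluster (hypothetical values).
params = {j: np.full(4, float(j)) for j in range(2)}
cluster, theta0 = warm_start(np.array([4.9, 5.1]), centroids, params)
```

The point of the design: the expensive ansatz optimization happens once per cluster offline, while each new sample only pays for a nearest-centroid lookup plus optional fine-tuning.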
Improving Success Rates in Quantum Simulations
Summary
Improving success rates in quantum simulations means increasing the reliability and accuracy of calculations performed by quantum computers, which are highly sensitive to noise and hardware errors. Recent advances focus on smarter error correction, streamlined circuit designs, and innovative hardware and software strategies that make quantum simulations more dependable and practical for real-world use.
- Streamline circuit design: Use efficient algorithms and reduced-depth circuits to minimize sources of error, making it easier for quantum simulations to run accurately even on imperfect hardware.
- Adopt robust error correction: Implement advanced error correction methods, like faster decoders or tailored protocols for multi-level quantum systems, to reduce mistakes and maintain the integrity of information during simulations.
- Speed up hardware-software coordination: Ensure fast communication and feedback between quantum hardware and control systems so errors can be detected and corrected in real time, which keeps simulations on track as systems scale up.
-
As quantum computers enter the utility era, with users executing circuits on 100 or more qubits, the performance of quantum computing software begins to play a prominent role. With this in mind, starting in 2020 Qiskit began the move from a mainly Python-based package to one utilizing the Rust programming language. What began with a highly optimized graph library in Rust (https://lnkd.in/eUdwqiMU) has now culminated in most of the circuit creation, manipulation, and transpilation code being fully ported over in the upcoming Qiskit 1.3. The fruits of this labor are easy to verify, with Qiskit outperforming competing SDKs in runtime by an order of magnitude or more, as measured by rigorous benchmarks (https://lnkd.in/e98wniXY). However, algorithmic improvements also play a critical role in Qiskit's continued success. The team recently released a paper highlighting 18 months of effort optimizing the routing of circuits to match the topology of a target quantum device. This new LightSABRE method (https://lnkd.in/eMgm3TMG) is 200x faster than previous implementations and reduces the number of two-qubit gates by nearly 20% compared to the original SABRE algorithm. In addition, LightSABRE supports complex quantum architectures, disjoint connectivity graphs, and classical flow control. The work the team puts into optimizing and enhancing Qiskit is one of the primary reasons why nearly 70% of quantum developers select Qiskit as their go-to quantum computing SDK.
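To see what routing actually does, here is a deliberately naive router: when a two-qubit gate acts on qubits that are not adjacent on the device, SWAPs must be inserted, and each SWAP costs three CNOTs on hardware. SABRE-family algorithms choose these SWAPs far more cleverly; this greedy version just walks the shortest path. The coupling graph and gate list are made up for illustration.

```python
from collections import deque

def shortest_path(adj, a, b):
    """BFS path between physical qubits a and b on the coupling graph."""
    prev = {a: None}
    queue = deque([a])
    while queue:
        u = queue.popleft()
        if u == b:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    path = [b]
    while prev[path[-1]] is not None:
        path.append(prev[path[-1]])
    return path[::-1]

def route(gates, adj):
    """Return how many SWAPs this greedy strategy inserts."""
    layout = {q: q for q in adj}   # logical -> physical
    inv = {q: q for q in adj}      # physical -> logical
    swaps = 0
    for a, b in gates:
        path = shortest_path(adj, layout[a], layout[b])
        for p in path[1:-1]:       # hop logical qubit a toward b, one SWAP per edge
            other = inv[p]
            inv[layout[a]], inv[p] = other, a
            layout[other], layout[a] = layout[a], p
            swaps += 1
    return swaps

line4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # 4 qubits in a line
n_swaps = route([(0, 3), (0, 3)], line4)          # second gate is already local
```

Note how the second `(0, 3)` gate costs nothing: the first routing pass left the operands adjacent. Exploiting such layout momentum across many gates, rather than routing each gate in isolation, is the kind of global decision SABRE and LightSABRE optimize.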
-
Everyone agrees quantum error correction (QEC) is essential. But why do we care so much about things like ≤ 𝟬.𝟭% 𝗴𝗮𝘁𝗲 𝗲𝗿𝗿𝗼𝗿 or µ𝘀-𝘀𝗰𝗮𝗹𝗲 𝗱𝗲𝗰𝗼𝗱𝗲𝗿 𝗹𝗮𝘁𝗲𝗻𝗰𝘆? Here’s the core idea: QEC combines many noisy qubits into a more stable 𝘭𝘰𝘨𝘪𝘤𝘢𝘭 qubit. If your hardware is good enough, you can reduce error rates 𝗲𝘅𝗽𝗼𝗻𝗲𝗻𝘁𝗶𝗮𝗹𝗹𝘆 by increasing code size. But that only works if your system can keep up—𝗱𝗲𝗰𝗼𝗱𝗶𝗻𝗴 𝗲𝗿𝗿𝗼𝗿𝘀 𝗮𝗻𝗱 𝗿𝗲𝗮𝗰𝘁𝗶𝗻𝗴 𝗺𝗶𝗱-𝗰𝗶𝗿𝗰𝘂𝗶𝘁, fast. Especially for circuits with non-Clifford gates (like T-gates), you need real-time feedback between measurements and feedforward operations. That’s where the hardware starts to feel the pressure:
• Gate error ≤ 𝟬.𝟭%
• Decoder latency ≤ 𝟭𝟱 µ𝘀
• Controller-decoder communication ≤ 𝟭𝟬 µ𝘀
• Bandwidth ≥ 𝟭 𝗠𝗯𝗶𝘁/𝘀 𝗽𝗲𝗿 𝗾𝘂𝗯𝗶𝘁
These aren’t wishful targets. They come from full-stack simulations of real quantum circuits, like Shor’s algorithm for factoring 21 using surface codes. In those simulations, the system must handle:
• ~13 decoding tasks
• ~5 mid-circuit corrections
• ~1000 physical qubits
That’s the blueprint. It doesn’t just explain 𝘸𝘩𝘺 QEC is hard—it points us toward what needs to work for it to succeed at scale.
Image Credits: Yaniv Kurman et al. (2024, arXiv)
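A quick back-of-envelope check shows how these targets compose at the ~1000-qubit scale above. The 1 µs syndrome-cycle time below is my assumption for illustration, not a number from the post.

```python
# Back-of-envelope check of the I/O targets for ~1000 physical qubits.
N_QUBITS = 1000
PER_QUBIT_BW = 1e6            # bit/s, the >= 1 Mbit/s per-qubit target
CYCLE_US = 1.0                # assumed syndrome-extraction cycle time (not from the post)
DECODER_BUDGET_US = 15.0      # the <= 15 us decoder-latency target

aggregate_bw_gbps = N_QUBITS * PER_QUBIT_BW / 1e9          # total classical link load
bits_per_cycle = N_QUBITS * PER_QUBIT_BW * CYCLE_US / 1e6  # syndrome bits per cycle
cycles_of_slack = DECODER_BUDGET_US / CYCLE_US             # cycles a decoder may lag
```

Under these assumptions the controller must move roughly 1 Gbit/s of syndrome data in aggregate, and a decoder that falls more than ~15 cycles behind the hardware can never catch up, which is why latency, not just throughput, is a hard constraint.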
-
New Approach Reduces Decoherence in Qudit-Based Quantum Processors
A team of physicists from the University of Southern California (USC) and UC Berkeley has developed a new method to reduce decoherence in qudit-based quantum computers, potentially improving their stability and computational power. The research, published in Physical Review Letters, introduces dynamical decoupling (DD) protocols tailored for qudits, which could significantly enhance the performance of multi-level quantum computing systems.
Why Qudits Matter
• Traditional quantum computers store and process information using qubits, which exist in a superposition of two states (0 and 1).
• Qudits, on the other hand, can exist in more than two states, allowing them to store more information per element and perform computations more efficiently.
• The challenge? Qudits are more prone to decoherence, a process where quantum states degrade due to environmental interference, leading to errors and data loss.
How the New Protocol Works
• The researchers developed a novel dynamical decoupling (DD) technique specifically designed to counteract environmental noise in qudit-based systems.
• By applying precisely timed quantum operations, the system cancels out decoherence effects, allowing for longer coherence times and more stable quantum operations.
• This approach could enable more practical and scalable quantum processors, as qudits have the potential to perform complex calculations more efficiently than qubit-based systems.
Implications for Quantum Computing
• Enhanced Quantum Performance – More stable qudit-based quantum computers could outperform qubit systems in optimization, simulation, and cryptography.
• Lower Hardware Requirements – Because each qudit can store more information, future quantum processors could require fewer physical components, reducing hardware complexity.
• A Step Closer to Practical Quantum Computing – Solving decoherence is one of the biggest challenges in making large-scale quantum computers viable for real-world applications.
The Bigger Picture
While qubit-based quantum computers dominate current research, this breakthrough highlights the growing interest in qudits as a more powerful alternative. If further developed, qudit-based quantum systems could revolutionize computing, unlocking greater efficiency and computational power while overcoming some of the biggest limitations of current quantum technology.
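The "precisely timed operations cancel decoherence" idea can be made concrete with a toy model, which is mine and not the USC/Berkeley protocol: a qutrit (d = 3) whose levels pick up phase at different static rates, dephasing a superposition. Ideal pulses that cyclically permute the levels make every amplitude spend equal time at every rate, so the accumulated phase becomes global and the state is recovered. The shift values are arbitrary, and real DD must also handle time-dependent noise, which this static toy ignores.

```python
import numpy as np

d = 3
shifts = np.array([0.0, 0.7, 1.3])        # hypothetical per-level shifts (rad/us)
psi0 = np.ones(d) / np.sqrt(d)            # equal superposition of the 3 levels

def free(psi, t):
    """Free evolution: level |k> acquires phase exp(-i * shifts[k] * t)."""
    return np.exp(-1j * shifts * t) * psi

# Decoupling pulse: cyclic permutation |k> -> |k+1 mod d>, so pulse**d = identity.
X = np.roll(np.eye(d), 1, axis=0)

t = 2.0
plain = free(psi0, t)                     # no decoupling: levels dephase
echoed = psi0
for _ in range(d):                        # split evolution, pulse between legs
    echoed = X @ free(echoed, t / d)

fidelity_plain = abs(np.vdot(psi0, plain)) ** 2   # well below 1
fidelity_dd = abs(np.vdot(psi0, echoed)) ** 2     # ~1: phase is now global
```

After d segments each amplitude has accumulated the same total phase (the average of all three shifts), so only an unobservable global phase remains; that symmetrization is the essence of decoupling, here shown in its simplest static form.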
-
New work from a Harvard team highlights a major bottleneck in fault-tolerant quantum computing: the classical decoder used in quantum error correction.
Quick primer on QEC:
1. Encode: A logical qubit is spread across many physical qubits, so no single error destroys the information.
2. Detect: Stabilizer measurements run repeatedly. They do not reveal the quantum state, but they do flag when something has gone wrong. The pattern of those flags is called the syndrome.
3. Decode: A classical computer reads the syndrome and infers which error most likely occurred.
4. Correct: The correction is applied, and the logical qubit survives.
Step 3 is where things get hard. For quantum LDPC codes, one of the most promising routes to efficient fault tolerance, practical decoders have usually forced a tradeoff between speed and accuracy: the fast ones are too weak, and the accurate ones are too slow for real-time use.
This paper introduces Cascade, a geometry-aware convolutional neural decoder. The key idea is not just “use a neural network,” but to build the structure of the code directly into the model: locality, translation equivariance, and anisotropy. That makes this feel less like generic ML and more like architecture co-design.
Some of the headline results:
- On the [[144, 12, 12]] Gross code, Cascade achieves logical error rates up to 17x lower than prior practical decoders, with 3–5 orders of magnitude higher throughput
- It reveals a “waterfall” regime in which logical errors fall much faster than standard distance-based formulas would suggest, largely because earlier decoders were not strong enough to expose it
- In one surface code example, that translates to roughly 40% fewer physical qubits to reach a target logical error rate of 10^-9
- Its confidence estimates are well calibrated, which enables post-selection. In one setting on the [[72, 12, 6]] code, that implies roughly 20x fewer retries for repeat-until-success protocols such as magic state distillation
- Current GPU latencies already fit the timing budgets for trapped-ion and neutral-atom platforms. Superconducting qubits still require a tighter ~1 microsecond budget, with FPGA and ASIC paths supported by the hardware estimates in the supplement
The broader takeaway: decoder quality is not just an implementation detail. It directly shapes how many qubits and how much time fault-tolerant quantum computing actually requires, and those costs may be meaningfully lower than standard estimates assume.
Paper: https://lnkd.in/g9D82Ry8
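The four-step cycle from the primer can be run end-to-end on the simplest possible code, a 3-qubit bit-flip repetition code. This is a toy stand-in for the surface and LDPC codes Cascade targets, but it shows why decoding works: the logical error rate (~3p² for two or more flips) falls below the physical rate p.

```python
import random

# syndrome (Z0Z1 parity, Z1Z2 parity) -> index of the qubit to flip, or None
LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def qec_cycle(logical_bit, p_flip, rng):
    data = [logical_bit] * 3                           # 1. encode
    for i in range(3):
        if rng.random() < p_flip:
            data[i] ^= 1                               # physical bit-flip error
    syndrome = (data[0] ^ data[1], data[1] ^ data[2])  # 2. detect
    fix = LOOKUP[syndrome]                             # 3. decode
    if fix is not None:
        data[fix] ^= 1                                 # 4. correct
    return max(set(data), key=data.count)              # majority-vote readout

rng = random.Random(7)
p = 0.05
trials = 20000
fails = sum(qec_cycle(0, p, rng) != 0 for _ in range(trials))
logical_rate = fails / trials   # ~3*p**2, well below the physical p = 0.05
```

For real codes the lookup table is replaced by matching, belief propagation, or, as in this paper, a neural network, and the whole decode step must land within the hardware's latency budget.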
-
⚛️ Hybrid Sequential Quantum Computing 📑 We introduce hybrid sequential quantum computing (HSQC), a paradigm for combinatorial optimization that systematically integrates classical and quantum methods within a structured, stagewise workflow. HSQC may involve an arbitrary sequence of classical and quantum processes, as long as the global result outperforms the standalone components. Our testbed begins with classical optimizers to explore the solution landscape, followed by quantum optimization to refine candidate solutions, and concludes with classical solvers to recover nearby or exact optimal states. We demonstrate two instantiations: (i) a pipeline combining simulated annealing (SA), bias-field digitized counterdiabatic quantum optimization (BF-DCQO), and memetic tabu search (MTS); and (ii) a variant combining SA, BF-DCQO, and a second round of SA. This workflow design is motivated by the complementary strengths of each component. Classical heuristics efficiently find low-energy configurations but often get trapped in local minima. BF-DCQO exploits quantum resources to tunnel through these barriers and improve solution quality. Due to decoherence and approximations, BF-DCQO may not always yield optimal results; thus, the best quantum-enhanced state is used to continue with a final classical refinement stage. Applied to challenging higher-order unconstrained binary optimization (HUBO) problems on a 156-qubit heavy-hexagonal superconducting quantum processor, we show that HSQC consistently recovers ground-state solutions in just a few seconds. Compared to standalone classical solvers, HSQC achieves a speedup of up to 700× over SA and up to 9× over MTS in estimated runtimes. These results demonstrate that HSQC provides a flexible and scalable framework capable of delivering up to two orders of magnitude runtime improvement at the quantum-advantage level on advanced commercial quantum processors. ℹ️ Chandarana et al. - 2025
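The stagewise shape of the workflow can be sketched on a small Ising instance. This is purely illustrative: stage 1 is simulated annealing (classical exploration), stage 2 is a classical stand-in for BF-DCQO's tunneling (random multi-spin kicks kept only when they lower the energy; the real quantum stage is of course not a random-kick heuristic), and stage 3 is greedy single-flip refinement in place of MTS.

```python
import math
import random

random.seed(3)
n = 12
J = {(i, j): random.choice([-1, 1]) for i in range(n) for j in range(i + 1, n)}

def energy(s):
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def local_field(s, i):
    return sum(J.get((min(i, j), max(i, j)), 0) * s[j] for j in range(n) if j != i)

def sa(s, steps=2000, T0=3.0):
    """Stage 1: simulated annealing with a linear temperature ramp."""
    for t in range(steps):
        T = T0 * (1 - t / steps) + 1e-9
        i = random.randrange(n)
        dE = -2 * s[i] * local_field(s, i)   # energy change of flipping spin i
        if dE < 0 or random.random() < math.exp(-dE / T):
            s[i] *= -1
    return s

def greedy(s):
    """Stage 3: flip any spin that lowers the energy until none does."""
    improved = True
    while improved:
        improved = False
        for i in range(n):
            if -2 * s[i] * local_field(s, i) < 0:
                s[i] *= -1
                improved = True
    return s

s = [random.choice([-1, 1]) for _ in range(n)]
s = sa(s)                            # stage 1: classical exploration
for _ in range(20):                  # stage 2: stand-in "tunneling" kicks
    cand = s[:]
    for i in random.sample(range(n), 3):
        cand[i] *= -1
    if energy(cand) < energy(s):
        s = cand
s = greedy(s)                        # stage 3: classical refinement
E = energy(s)
```

The division of labor mirrors the abstract: the first stage finds a good basin, the middle stage escapes local minima the single-flip dynamics cannot leave, and the last stage guarantees the output is at least locally optimal.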
-
Markov-chain Monte Carlo method enhanced by a quantum alternating operator ansatz https://lnkd.in/eygdQBCx Quantum computation is expected to accelerate certain computational tasks over classical counterparts. Its most primitive advantage is its ability to sample from classically intractable probability distributions. A promising approach to make use of this fact is the so-called quantum-enhanced Markov-chain Monte Carlo (qe-MCMC) method [D. Layden et al., Nature (London) 619, 282 (2023)], which uses outputs from quantum circuits as the proposal distributions. In this paper, we propose the use of a quantum alternating operator ansatz (QAOA) for qe-MCMC and provide a strategy to optimize its parameters to improve convergence speed while keeping its depth shallow. The proposed QAOA-type circuit is designed to satisfy the specific constraint which qe-MCMC requires with arbitrary parameters. Through our extensive numerical analysis, we find a correlation in a certain parameter range between an experimentally measurable value, acceptance rate of MCMC, and the spectral gap of the MCMC transition matrix, which determines the convergence speed. This allows us to optimize the parameter in the QAOA circuit and achieve quadratic speedup in convergence. Since MCMC is used in various areas such as statistical physics and machine learning, this paper represents an important step toward realizing practical quantum advantage with currently available quantum computers through qe-MCMC.
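The role of the quantum circuit in qe-MCMC is to supply a proposal distribution for a Metropolis chain. Here is a classical toy on a 1D Ising ring, with a symmetric classical proposal (flip a random subset of spins) standing in for the circuit output; because the proposal is symmetric, the acceptance rule below is the same one qe-MCMC uses, and the measured acceptance rate is the quantity the paper correlates with the spectral gap.

```python
import math
import random

random.seed(0)
n, beta = 10, 0.8

def energy(s):
    """1D ferromagnetic Ising ring energy."""
    return -sum(s[i] * s[(i + 1) % n] for i in range(n))

s = [random.choice([-1, 1]) for _ in range(n)]
accepts = 0
steps = 5000
for _ in range(steps):
    cand = s[:]
    for i in random.sample(range(n), random.randint(1, 3)):
        cand[i] *= -1                        # symmetric stand-in proposal
    dE = energy(cand) - energy(s)
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        s = cand                             # Metropolis accept
        accepts += 1
acceptance_rate = accepts / steps
```

In qe-MCMC the proposal step is replaced by preparing the current configuration on the quantum device, running the (here, QAOA-type) circuit, and measuring; the constraint the paper designs for is that this measured distribution stay symmetric for arbitrary circuit parameters, so the simple acceptance rule above remains valid.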
-
Quantum simulation represents a crucial bridge in the development of practical quantum algorithms, as limitations in current quantum hardware necessitate robust classical methods for testing and refinement. Guolong Zhong, Yi Fan, and Zhenyu Li, from the University of Science and Technology of China, address this need with a new, scalable approach to simulating quantum circuits. Their work introduces a comprehensively parallelised solution within the Q Chemistry software package, delivering substantial performance gains on both conventional CPUs and powerful GPUs. By optimising how calculations depend on each other and processing multiple operations simultaneously, this research demonstrates a significant leap forward in simulation speed and portability, consistently outperforming existing open-source simulators across a range of quantum circuit designs and paving the way for more complex algorithm development. https://lnkd.in/eXYDtU5J
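The inner loop such simulators optimize is applying a gate to one qubit of an n-qubit statevector. A minimal NumPy version of that kernel (my sketch, not the Q Chemistry implementation) uses the reshape-and-contract trick; production simulators fuse, batch, and parallelize exactly this operation across CPU threads or GPU blocks.

```python
import numpy as np

def apply_1q(state, gate, q, n):
    """Apply a 2x2 gate to qubit q of an n-qubit statevector."""
    state = state.reshape([2] * n)                      # one axis per qubit
    state = np.tensordot(gate, state, axes=([1], [q]))  # contract on qubit q
    state = np.moveaxis(state, 0, q)                    # restore axis order
    return state.reshape(-1)

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)            # Hadamard gate
psi = np.zeros(2**n, dtype=complex)
psi[0] = 1.0                                            # |000>
for q in range(n):
    psi = apply_1q(psi, H, q, n)                        # uniform superposition
```

Each call touches all 2^n amplitudes, which is why dependency-aware scheduling and parallelism over amplitudes, as in the work above, dominate simulator performance as n grows.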
-
My paper "Benchmarking Quantum-Assisted PINN (QA-PINN) for Computational Fluid Dynamics" is now available to read. Abstract: Physics-Informed Neural Network (PINN) is a powerful method in computer-aided engineering (CAE), especially for design space exploration studies (like surrogate modeling and design of experiments), that combines deep learning with physics-based modeling. Though PINN is in a nascent stage, its success hinges on two key factors - generalizability and training efficiency. One common aspect of these factors is trainable parameters. The goal of this study is to improve these two factors by the introduction of Quantum Machine Learning in a classical PINN framework. This study proposes a Quantum-Assisted PINN (QA-PINN) to reduce the trainable parameters while maintaining accuracy. The study utilized a standard forward problem of simulating partial differential equations (PDEs) from the field of Computational Fluid Dynamics (CFD), a well-understood area with classical PINN. A comparative analysis was performed between QA-PINN, equivalent classical vanilla PINN (c-PINN), and state-of-the-art classical PINN - Globally Adaptive Activation Function PINN (GAAF-PINN). All methods were benchmarked against the ground-truth analytical solution for 1D Burger's equation. The analysis is conducted for 3, 4, and 5-qubit QA-PINN architectures. In comparison to c-PINN and GAAF-PINN, 4 and 5-qubit QA-PINN is shown to reduce the number of trainable parameters of the network by 20%, and while maintaining the accuracy. The reduced number of trainable parameters indicates the reduced complexity of the model and the reduced risk of overfitting, making it more generalizable. Thus, the proposed approach can have a significant impact on PINNs enhancing generalization and improving training efficiency for deploying the model for real-world applications. Link to the paper: https://lnkd.in/dtjZt95Y I'm looking forward to your feedback. 
If you would like to discuss this work, you can reach out to me or my co-authors Abhishek Chopra and Rut Lineswala. You can expect more papers on this subject by BosonQ Psi (BQP)'s QML Team Jay Shah #QuantumComputing #MachineLearning #IBM #QML #Qiskit #Google #paper #CFD
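The parameter-count argument behind QA-PINN can be illustrated with rough numbers, which are mine and not the paper's: a variational quantum layer with q qubits and L layers carries on the order of q*L rotation angles, while each classical dense layer carries width_in*width_out weights plus width_out biases.

```python
def mlp_params(widths):
    """Trainable parameters of a dense MLP: weights + biases per layer."""
    return sum(a * b + b for a, b in zip(widths, widths[1:]))

classical = mlp_params([2, 20, 20, 20, 1])        # small vanilla PINN
q, L = 5, 4                                       # hypothetical quantum layer
quantum_layer = q * L                             # one angle per qubit per layer
hybrid = mlp_params([2, 20, 1]) + quantum_layer   # quantum layer replaces hidden stack
```

Under these made-up widths the hybrid network has roughly a ninth of the classical parameter count; the paper's reported figure is a more modest 20% reduction at matched accuracy, since the quantum layer must actually carry the lost expressivity.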
-
Prior studies identified efficient parallel quantum algorithms for greater model performance; additional studies utilizing quantum trainable gates were needed to further quantify machine learning benefits. (Fig. 3) Using a 10Q2L basis with a fixed number of qubits, parallel layers, and quantum embedding: Increasing the number of trainable rotational gates lowered training loss and validation loss and increased accuracy to 100%. Specifically, using 3 trainable gates yielded the lowest overall validation loss with 100% accuracy, while using 4 trainable gates produced comparable losses but with better convergence. Runtime and RAM both increased at similar, predictable rates. (Fig. 4-5) Utilizing the 10Q2L basis with a fixed number of qubits and quantum embedding for a given number of parallel layers: The number of trainable rotational gates and the number of parallel layers (1-8) were varied. The main finding was that each additional parallel layer yielded lower loss. For a given number of layers, a specific number of trainable rotational gates helped the majority of the runs. Runtime and RAM grew gradually as the number of layers increased. (Tables 6/7) Using a 10Q60L 1RYe embedding basis vs. a 10Q8L 8RYe basis: Varying the number of trainable rotational gates generally lowered training loss and validation loss and improved accuracy for both bases. Specifically, adding anywhere between 1-5 trainable gates to the weaker 1RYe embedding lowered loss values more than two-fold. The stronger 8RYe embedding with the smaller 10Q8L architecture had a trainable-gate configuration that lowered the embedding-only validation loss from 0.0034 to 0.0011 with 100% accuracy. Runtime and RAM were higher for the 10Q60L 1RYe system because over 7x as many parallel quantum algorithms were used. (Fig. 8-9) Applying a 2Q160L architecture to several traditional algorithms vs. a simpler embedding-plus-trainable-gate method showed higher performance from the latter. Adding a single trainable gate to the 8RYe embedding improved training loss 16X, with matching validation loss at 0.0003 and 100% accuracy. The results indicate that embeddings with better loss convergence should be identified first, before adding trainable gates. This experiment set required significantly less RAM relative to runtime, as only 2 qubits were needed to accommodate two-qubit CNOT gates and dataset requirements. (Table 10/11) Using a 10Q circuit with a fixed quantum algorithm: Increasing the number of parallel quantum algorithms generally lowered loss and increased accuracy on the ‘Make moons’ dataset with 2X noise. Specifically, 8 layers yielded over 5x better validation loss than the original single circuit, with a 13% increase in accuracy from 84.0% to 97.0%. (Fig. 12) Related quantum algorithm research using NVIDIA GPUs is being conducted by Brookhaven Lab, SandboxAQ, Terra Quantum, Classiq, and BASF.