New pre-print from PhD student Hang Zou on warm-starting the variational quantum eigensolver using flows: Flow-VQE! Flow-VQE is parameter transfer on steroids: it learns how to solve a family of related problems, dramatically reducing the aggregate compute cost! The cost advantages stem from embedding a generative model into the VQE optimization loop and learning it via preference-based optimization, which alleviates the need to evaluate gradients of the quantum circuit. Flow-VQE outperforms baseline optimization algorithms, achieving computational accuracy with fewer circuit evaluations (up to 100x improvement), and when warm-starting new systems it accelerates subsequent fine-tuning by up to 50x compared to Hartree-Fock (HF) initialization. Curious to read more about the experiments and the method? Check out the pre-print here: https://lnkd.in/dcYDGRBf Code will follow soon. Feedback and input very welcome! Collaboration with Anton Frisk Kockum and Martin Rahm
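The gradient-free, preference-based flavor of the idea can be sketched in a few lines of numpy. This toy replaces the measured circuit energy with a classical function and the conditional normalizing flow with a plain mean vector, so every name and update rule below is illustrative, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(theta):
    # Stand-in for a measured circuit expectation value; crucially, we
    # never differentiate through it, matching the gradient-free setting.
    return float(np.sum((theta - 1.0) ** 2))

# Toy "generator": a mean vector proposing circuit parameters (Flow-VQE's
# actual generator is a conditional normalizing flow over a problem family).
mean = np.zeros(4)

for _ in range(300):
    # Sample two candidate parameter sets and compare their energies:
    # a pairwise preference, no circuit gradients required.
    a = mean + rng.normal(scale=0.2, size=4)
    b = mean + rng.normal(scale=0.2, size=4)
    winner = a if energy(a) < energy(b) else b
    mean += 0.5 * (winner - mean)   # move the generator toward the preferred sample
```

This is essentially an evolutionary-strategy rendering of preference-based optimization; the flow-based, circuit-in-the-loop version in the paper differs substantially.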
Minimizing Computational Costs for Quantum Testing
Summary
Minimizing computational costs for quantum testing means finding ways to use fewer computing resources—like time, memory, and energy—when running quantum simulations or experiments. This involves making smart choices in algorithms and hardware to speed up calculations and make them more practical, even as quantum systems get larger and more complex.
- Streamline circuit depth: Break up long and complicated quantum circuits into shorter segments to reduce noise and save on measurement costs during simulations.
- Smart neural network use: Apply neural networks that directly learn quantum properties or correlations without relying on pre-computed datasets, which can cut computation time dramatically.
- Distribute tasks wisely: Assign qubits and divide circuits across multiple quantum devices while carefully managing expensive quantum communication to boost efficiency and performance.
❓ Ever wondered how Neural Networks (NNs) could revolutionize #quantum research? #NeuralNetworks aren't just transforming #AI —they're also pivotal in the quantum realm! In the work entitled "Parameter Estimation by Learning Quantum Correlations in Continuous Photon-Counting Data Using Neural Networks," Quantinuum proudly collaborated with global partners such as the Universidad Autónoma de Madrid, Chalmers University of Technology, and the University of Michigan, uniting expertise from every corner of the world. 🌍 https://lnkd.in/gj8qttdN 🔍 Key Findings: 1️⃣ The study introduces a novel inference method employing artificial neural networks for quantum probe parameter estimation. 2️⃣ This method leverages quantum correlations in discrete photon-counting data, offering a fresh perspective compared to existing techniques that focus on diffusive signals. 3️⃣ The approach achieves performance on par with Bayesian inference - renowned for its optimal information retrieval capability - yet does so at a fraction of the computational cost. 4️⃣ Beyond efficiency, the method is robust against imperfections in measurement and training data. 5️⃣ Potential applications span from quantum sensing and imaging to precise calibration tasks in laboratory setups. 🤔 Curious about the unknowns? The authors are sharing EVERYTHING on Zenodo! 🎉 The codes used to generate these results, including the proposed NN architectures as TensorFlow models, are available here: https://lnkd.in/gVdzJycM, along with all the data necessary to reproduce the results: https://lnkd.in/gVdzJycM Enrico Rinaldi, Manuel González Lastre, Sergio Garcia Herreros, Shahnawaz Ahmed, Maryam Khanahmadi, Franco Nori, and Carlos Sánchez Muñoz
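As a toy illustration of regressing a physical parameter directly from photon-count records, here is a tiny numpy network trained on simulated Poisson click data. The rate model, architecture, and sizes are invented for this sketch and bear no relation to the paper's TensorFlow models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy photon-counting records: Poisson clicks per time bin whose temporal
# pattern depends on an unknown parameter omega (an illustrative rate model).
def record(omega, bins=32):
    rates = 2.0 + np.sin(omega * np.arange(bins))
    return rng.poisson(rates).astype(float)

omegas = rng.uniform(0.2, 1.0, size=2000)
X = np.stack([record(w) for w in omegas])
X = (X - X.mean(0)) / (X.std(0) + 1e-9)   # normalize counts per bin
y = omegas

# One-hidden-layer regression network, trained by full-batch gradient
# descent to map a count record directly to an estimate of omega.
W1 = rng.normal(scale=0.1, size=(32, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=16);       b2 = 0.0

def mse():
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

mse_before = mse()
lr = 0.05
for _ in range(300):
    h = np.tanh(X @ W1 + b1)
    err = h @ W2 + b2 - y                     # prediction error, shape (2000,)
    W2 -= lr * h.T @ err / len(y)
    b2 -= lr * err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)     # backprop through tanh
    W1 -= lr * X.T @ gh / len(y)
    b1 -= lr * gh.mean(axis=0)
mse_after = mse()
```

The point of the sketch is only that the network learns the count-to-parameter map from simulated data; no Bayesian posterior computation appears anywhere in the loop.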
-
Neural networks solve quantum mechanics variationally—without needing pre-computed training data.

Simulating how electrons behave in molecules and materials is essential for designing drugs, batteries, and catalysts. The standard method—density functional theory—has enabled decades of discoveries, but its iterative self-consistent field approach and repeated matrix diagonalization become computationally demanding for large systems.

Luqi Dong and coauthors take a fundamentally different approach. Instead of training neural networks to mimic pre-computed results, they train networks to minimize the energy functional directly—turning machine learning into a variational solver. Their model, DeepDM, predicts the density matrix using equivariant graph neural networks, then maps it to the total energy. The network parameters are optimized through backpropagation to minimize energy, not to match labeled examples. This means no pre-computed datasets required.

The challenge: density matrices must satisfy strict physical constraints—Hermiticity, particle number conservation, and idempotency. They handle this through a two-stage architecture. First, a network generates an initial density matrix satisfying these constraints. Second, another network applies an exponential transformation that explores the space of valid density matrices while preserving all constraints mathematically.

The results match conventional calculations for both molecules (water, methane) and periodic systems (graphene, diamond). More remarkably, models trained only on primitive unit cells generalize accurately to larger supercells—the graph neural network architecture enables this scaling without retraining.

The implication: pre-trained models could provide near-converged initial guesses for subsequent calculations, potentially reducing the computational overhead of large-scale quantum simulations significantly.
Paper: https://lnkd.in/e9-8jPBb #ArtificialIntelligence #MachineLearning #DeepLearning #QuantumMechanics #DensityFunctionalTheory #ComputationalChemistry #MaterialsScience #NeuralNetworks #Physics #QuantumComputing #AIforScience #GraphNeuralNetworks #ComputationalPhysics #ElectronicStructure #ScientificComputing
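The variational trick — minimize the energy functional itself, with constraints enforced by construction — can be sketched without any neural-network machinery. Below, QR orthogonalization plays the role of DeepDM's constraint-preserving parametrization and finite differences stand in for backpropagation, all on an invented toy Hamiltonian; nothing here is the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy one-electron Hamiltonian with well-separated levels; in DeepDM the
# density matrix feeds a DFT energy functional instead.
n, n_occ = 6, 2
A = rng.normal(size=(n, n))
H = np.diag(np.arange(n, dtype=float)) + 0.05 * (A + A.T)

def density_matrix(X):
    # QR orthogonalization enforces the constraints by construction:
    # P = C C^T is Hermitian, idempotent (C^T C = I), and Tr(P) = n_occ.
    C, _ = np.linalg.qr(X)
    return C @ C.T

def energy(X):
    return float(np.trace(H @ density_matrix(X)))

# Variational loop: minimize the energy itself (finite-difference gradient
# descent standing in for backpropagation) -- no labeled data anywhere.
X = rng.normal(size=(n, n_occ))
eps, lr = 1e-5, 0.1
for _ in range(500):
    g = np.zeros_like(X)
    e0 = energy(X)
    for i in range(n):
        for j in range(n_occ):
            Xp = X.copy()
            Xp[i, j] += eps
            g[i, j] = (energy(Xp) - e0) / eps
    X -= lr * g

exact = np.linalg.eigvalsh(H)[:n_occ].sum()   # sum of the n_occ lowest levels
```

Because P is an exact projector at every step, the energy stays a variational upper bound on `exact` throughout the optimization.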
-
It is no secret that the number one killer of anything useful in near-term quantum computing (i.e., before error correction) for quantum simulations is circuit depth. As the system gets bigger and more complicated, the required circuit gets deeper—until the device is mostly producing noise instead of useful signal. In our recent Q-SENSE method ( https://lnkd.in/gRCJQGnZ ), we tackle this by writing the wavefunction as a linear combination of short-depth circuits (a subspace expansion), instead of one huge, fragile circuit. Anyone who has ever done subspace expansion knows the catch: it usually kills you on measurement cost because you need many Hamiltonian matrix elements. In Q-SENSE, we use seniority symmetries so that most Hamiltonian terms don’t couple our symmetry-adapted states. The result: a dramatic reduction in measurements—in our benchmarks, the cost is lower than a single VQE cycle for the same systems. #quantumcomputing, #quantumphysics, #quantumchemistry
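The subspace-expansion step that Q-SENSE builds on can be illustrated classically: given a handful of non-orthogonal states, the best variational energy in their span comes from the generalized eigenvalue problem Hc = ESc. A minimal numpy sketch, with exact linear algebra standing in for measured matrix elements and all sizes invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy Hamiltonian and a few non-orthogonal states standing in for the
# short-depth, symmetry-adapted circuits; on hardware the matrix elements
# H_ij and overlaps S_ij would be measured, not computed exactly.
dim, k = 8, 3
A = rng.normal(size=(dim, dim))
H = (A + A.T) / 2
states = rng.normal(size=(dim, k))
states /= np.linalg.norm(states, axis=0)

Hmat = states.T @ H @ states   # H_ij = <phi_i|H|phi_j>
Smat = states.T @ states       # S_ij = <phi_i|phi_j> (overlap matrix)

# Subspace expansion = generalized eigenvalue problem H c = E S c.
# Reduce it to a standard one via the Cholesky factor of S.
Linv = np.linalg.inv(np.linalg.cholesky(Smat))
E = np.linalg.eigvalsh(Linv @ Hmat @ Linv.T)

E0_subspace = E[0]                   # best energy within the span
E0_exact = np.linalg.eigvalsh(H)[0]  # true ground-state energy
```

The measurement-cost point in the post is about how many of the H_ij entries must actually be estimated on hardware; seniority symmetries zero out most of them before any shots are spent.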
-
⚛️ Time-Aware Qubit Assignment and Circuit Optimization for Distributed Quantum Computing 📑 Abstract—The emerging paradigm of distributed quantum computing promises a potential solution to scaling quantum computing to currently unfeasible dimensions. While this approach itself is still in its infancy, and many obstacles must still be overcome before its physical implementation, challenges from the software and algorithmic side must also be identified and addressed. For instance, this paradigm shift requires a new form of compiler that considers the network constraints in general as well as phenomena arising due to the nature of quantum communication. In distributed quantum computing, large circuits are divided into smaller subcircuits such that they can be executed individually and simultaneously on multiple QPUs that are connected through quantum channels. As quantum communication, for example in the form of teleportation, is expensive, it must be used sparingly. We address the problem of assigning qubits to QPUs to minimize communication costs in two different ways. First, we apply time-aware algorithms that take into account the changing connectivity of a given circuit as well as the underlying network topology. We define the optimization problem, use simulated annealing and an evolutionary algorithm, and compare the results to graph partitioning and sequential qubit assignment baselines. In another approach, we propose an evolutionary-based quantum circuit optimization algorithm that adjusts the circuit itself rather than the schedule to reduce the overall communication cost. We evaluate the techniques against random circuits and different network topologies. Both evolutionary algorithms outperform the baseline in terms of communication cost reduction. We give an outlook on how the approaches can be integrated into a compilation framework for distributed quantum computing. ℹ️ Sunkel et al - Institute for Informatics, LMU Munich, Munich, Germany - 2025
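The simulated-annealing flavor of qubit-to-QPU assignment can be sketched on a toy instance: minimize the number of two-qubit gates whose endpoints land on different QPUs, under balanced capacities. The circuit, move set, and annealing schedule below are made up for illustration and are not the authors' implementation:

```python
import math
import random

random.seed(0)

# Toy circuit on 8 qubits: two-qubit gates forming a ring; two QPUs with
# four qubits each. Gate endpoints on different QPUs require (expensive)
# quantum communication, e.g. teleportation.
gates = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 0)]
n_qubits, n_qpus = 8, 2

def comm_cost(assign):
    return sum(1 for a, b in gates if assign[a] != assign[b])

# Simulated annealing over capacity-preserving swaps of two qubits that
# currently sit on different QPUs.
assign = [q % n_qpus for q in range(n_qubits)]   # deliberately poor start
temp = 2.0
for _ in range(2000):
    i, j = random.sample(range(n_qubits), 2)
    if assign[i] == assign[j]:
        continue                       # swap would not change the partition
    new = assign[:]
    new[i], new[j] = new[j], new[i]
    delta = comm_cost(new) - comm_cost(assign)
    if delta <= 0 or random.random() < math.exp(-delta / temp):
        assign = new                   # accept improving (or lucky) moves
    temp *= 0.999                      # geometric cooling
```

For a ring, any balanced bipartition must cut at least two edges, so the annealer should drive the cost from 8 down toward that floor. The time-aware versions in the paper additionally let the assignment change between circuit layers.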
-
In the Amsterdam Modeling Suite (AMS), Machine Learning Potentials (MLP), DFTB, and PM6 represent three increasingly approximate yet highly efficient approaches for obtaining quantum-level insights, each with a distinct balance between speed and accuracy. MLPs in AMS, built using ParAMS and deployed through MLPotentials, are trained on DFT data to emulate near-DFT energies and forces at molecular-dynamics timescales, making them well suited for very large systems and long simulations once a model is trained. DFTB, available via the DFTB engine in AMS, is a semi-empirical, physics-grounded approximation to DFT that preserves an explicit electronic structure description at a fraction of the computational cost, offering reasonable transferability when appropriate parameter sets are available. PM6, accessed through the MOPAC engine in AMS, is a traditional NDDO-based semi-empirical method designed for rapid geometry optimization and property estimation in organic and biochemical molecules, though it is less transferable across diverse chemistries. Overall, PM6 provides maximum speed with limited generality, DFTB offers a practical compromise between efficiency and quantum detail, and MLP achieves near-DFT fidelity at force-field speed after training, making all three complementary choices in AMS depending on system size, accuracy needs, and data availability. #QM #MD #DFT #Compchem #Materials
-
Quantum Breakthrough Accelerates Fault-Tolerant Computing by Up to 100x

Introduction
Quantum computing has long promised revolutionary speedups, but fragile qubits and heavy error-correction overhead have slowed progress toward practical systems. A new breakthrough from researchers at QuEra demonstrates a fundamentally more efficient approach to managing errors, potentially advancing the timeline for large-scale, useful quantum computers by years.

Key Breakthrough: Algorithmic Fault Tolerance
Researchers introduced algorithmic fault tolerance (AFT), a method that embeds error detection and correction directly into quantum algorithms rather than relying on frequent external checks. In simulations using neutral-atom quantum architectures, this approach dramatically reduced overhead while preserving accuracy.

How AFT Changes Quantum Error Correction
Traditional quantum error correction inserts repeated checks that significantly slow computation and require large numbers of physical qubits. AFT restructures algorithms so error detection happens continuously within the computation flow. Instead of dozens of checks per operation, a single check per logical step may be sufficient. Simulations showed time and computational cost reductions between 10x and 100x, depending on the algorithm.

Why Neutral-Atom Systems Are a Strong Fit
Neutral-atom quantum computers use individual atoms controlled by lasers, enabling flexible, all-to-all qubit interactions. Qubits can be repositioned dynamically, avoiding fixed wiring constraints common in other platforms. Parallel operations allow errors to be isolated without cascading across the system. Systems operate at room temperature, reducing infrastructure complexity and cost.

Implications for Real-World Quantum Use
Faster error correction enables longer, more complex quantum calculations with less hardware. Problems once considered impractical, such as large-scale logistics optimization, could move from month-long runtimes to less than a day. Hardware tests of AFT are expected within one to two years, signaling tangible progress toward scalable, fault-tolerant quantum computing.

Why This Matters
This breakthrough challenges the assumption that massive error-correction overhead is unavoidable in quantum computing. By removing a critical bottleneck, algorithmic fault tolerance significantly improves feasibility, speed, and scalability. If validated on real hardware, AFT could mark a turning point where quantum systems transition from experimental promise to operational value across science, industry, and national-scale optimization challenges.

I share daily insights with 37,000+ followers across defense, tech, and policy. If this topic resonates, I invite you to connect and continue the conversation. Keith King https://lnkd.in/gHPvUttw
-
Stop waiting a full day for a 3-second quantum circuit to execute. Quantum computing is finally moving from isolated experiments into large-scale cloud workflows, but there is a massive bottleneck: the 60x overhead caused by provider queues. In today's landscape, a job that takes seconds to run can sit in a first-come-first-serve line for hours, often on a device that doesn't even offer the best fidelity for your specific task. Enter Qurator, a new architecture-agnostic scheduler designed to bridge the gap between classical HPC and heterogeneous quantum providers. Unlike traditional schedulers that treat queue time and circuit fidelity as separate issues, Qurator optimizes for both simultaneously. By reconciling incompatible calibration data from IBM (IBM Quantum), IonQ, Rigetti Computing, and others into a unified success score, Qurator makes intelligent mapping decisions. It uses advanced techniques like circuit cutting and merging to fit tasks onto the best available hardware while respecting quantum-specific constraints like the no-cloning theorem and entanglement synchronization. The results speak for themselves. Under high-load conditions, Qurator reduces queue wait times by 30% to 75% while keeping execution fidelity within a user-defined target. It proves that we don't have to sacrifice accuracy for speed if we treat quantum constraints as first-class scheduling concerns. If you are building the future of hybrid quantum-classical systems, this is how we scale. Read the full paper: Qurator: Scheduling Hybrid Quantum-Classical Workflows Across Heterogeneous Cloud Providers. #QuantumComputing #CloudComputing #HPC #QuantumSoftware #TechInnovation
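A "unified success score" of the kind described can be sketched as follows; the provider data, threshold, and the linear fidelity-vs-queue trade-off are all invented for illustration and are not Qurator's actual scoring model:

```python
# Toy provider snapshot: estimated queue wait (seconds) and expected
# circuit fidelity after mapping. Backend names are hypothetical.
providers = {
    "backend_a": {"queue_s": 3600, "fidelity": 0.97},
    "backend_b": {"queue_s": 300,  "fidelity": 0.91},
    "backend_c": {"queue_s": 900,  "fidelity": 0.95},
}

def score(p, min_fidelity=0.93, alpha=1e-4):
    # Joint objective: discard backends below the user's fidelity target,
    # then trade residual fidelity against expected queue time.
    if p["fidelity"] < min_fidelity:
        return float("-inf")
    return p["fidelity"] - alpha * p["queue_s"]

best = max(providers, key=lambda name: score(providers[name]))
```

With these numbers, the fastest backend is rejected for missing the fidelity floor, and the highest-fidelity backend loses to a slightly less accurate one with a much shorter queue — exactly the kind of joint decision a queue-only or fidelity-only scheduler cannot make.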
-
> Sharing Resource < Interesting: "TreeVQA: A Tree-Structured Execution Framework for Shot Reduction in Variational Quantum Algorithms" by Yuewen Hou, Dhanvi Bharadwaj, Gokul Subramanian Ravi Abstract: Variational Quantum Algorithms (VQAs) are promising for near- and intermediate-term quantum computing, but their execution cost is substantial. Each task requires many iterations and numerous circuits per iteration, and real-world applications often involve multiple tasks, scaling with the precision needed to explore the application's energy landscape. This demands an enormous number of execution shots, making practical use prohibitively expensive. We observe that VQA costs can be significantly reduced by exploiting execution similarities across an application's tasks. Based on this insight, we propose TreeVQA, a tree-based execution framework that begins by executing tasks jointly and progressively branches only as their quantum executions diverge. Implemented as a VQA wrapper, TreeVQA integrates with typical VQA applications. Evaluations on scientific and combinatorial benchmarks show shot count reductions of 25.9× on average and over 100× for large-scale problems at the same target accuracy. The benefits grow further with increasing problem size and precision requirements. Link: https://lnkd.in/e9kkZZX5 #quantumcomputing #quantummachinelearning #research #paper
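The branch-when-diverging idea can be mimicked classically: run related tasks with one shared execution while their measured signals agree, and split the tree once they diverge. Everything below (the quadratic stand-in "energies", the divergence test, the thresholds) is a toy, not TreeVQA's actual machinery:

```python
import numpy as np

# Two related toy VQA tasks with nearby optima.
targets = [np.array([1.0, 1.0]), np.array([1.0, 1.3])]

def grad(theta, target):
    # Stand-in for a gradient estimated from shot-based circuit executions.
    return 2.0 * (theta - target)

branches = {0: np.zeros(2), 1: np.zeros(2)}
merged, shots_used, lr = True, 0, 0.1

for _ in range(100):
    gs = [grad(branches[t], targets[t]) for t in (0, 1)]
    if merged:
        # Branch when the tasks' gradient directions stop agreeing.
        cos = gs[0] @ gs[1] / (np.linalg.norm(gs[0]) * np.linalg.norm(gs[1]))
        if cos < 0.9:
            merged = False
    if merged:
        g = 0.5 * (gs[0] + gs[1])   # one shared execution serves both tasks
        shots_used += 1
        for t in (0, 1):
            branches[t] = branches[t] - lr * g
    else:
        shots_used += 2             # separate executions per task
        for t in (0, 1):
            branches[t] = branches[t] - lr * gs[t]
```

Early on the gradients are nearly parallel, so both tasks share one execution per step; near the shared optimum they point apart, the tree branches, and each task still converges to its own target while total shot count stays below running the two tasks independently from the start.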
-
Interesting research in Quantum Machine Learning addresses key challenges in scalability and data encoding. The GitHub repository is included for further reference. A recent study titled "An Efficient Quantum Classifier Based on Hamiltonian Representations" (Tiblias et al.) proposes a novel approach to quantum classification. The study tackles the limitations of current QML methods that often rely on toy datasets or significant feature reduction due to hardware constraints and the high costs of encoding dense vector representations on quantum devices. The researchers introduce an efficient approach called the Hamiltonian classifier, which circumvents the costs of data encoding by mapping inputs to a finite set of Pauli strings and making predictions based on their expectation values. They also present two classifier variants, PEFF and SIM, with different trade-offs in terms of parameters and sample complexity. Key outcomes of this work include: * A new encoding scheme achieving logarithmic complexity in both qubits and quantum gates relative to the input dimensionality. * The development of classifier variants (PEFF and SIM) offers different performance-cost trade-offs. PEFF reduces model size, while SIM boasts better sample complexity. * The Simplified Hamiltonian (SIM) variant achieves logarithmic scaling in qubit and gate complexity along with a constant sample complexity, making it a strong candidate for practical implementation on Noisy Intermediate-Scale Quantum (NISQ) devices. * Experiments showed that increasing the number of Pauli strings in the SIM model leads to better performance and more stable training dynamics, with models using 500 to 1000 Pauli strings often matching the performance of classical baselines. You can find the GitHub repo here: https://lnkd.in/dN38CFPv. The article here: https://lnkd.in/dG4agXap #quantumcomputing #machinelearning #quantummachinelearning #artificialintelligence #research #nlp #imageclassification #datascience
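The core primitive of the Hamiltonian classifier is mapping inputs to Pauli strings and predicting from their expectation values. Here is a small statevector sketch of that primitive; the strings, weights, and decision rule are illustrative, not the PEFF/SIM constructions:

```python
import numpy as np

# Single-qubit Paulis; a Pauli string is their tensor (Kronecker) product.
P = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(s):
    op = np.array([[1.0 + 0j]])
    for ch in s:
        op = np.kron(op, P[ch])
    return op

def expectation(psi, s):
    # <psi|P_s|psi> is real because Pauli strings are Hermitian.
    return float(np.real(psi.conj() @ pauli_string(s) @ psi))

# Classifier-style prediction: sign of a weighted sum of expectation values.
psi = np.zeros(4, dtype=complex)
psi[0] = 1.0                                   # the two-qubit state |00>
feats = [expectation(psi, s) for s in ("ZI", "IZ", "XX")]
weights = [0.5, 0.5, 1.0]
pred = 1 if sum(w * f for w, f in zip(weights, feats)) >= 0 else 1 - 1
```

On hardware the expectation values would be estimated from measurement samples rather than computed from the statevector, which is where the paper's sample-complexity analysis comes in.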