Many of you will have seen the news about HSBC’s world-first application of quantum computing in algorithmic bond trading. Today, I’d like to highlight the technical paper that explains the research behind this milestone.

In collaboration with IBM, our teams investigated how quantum feature maps can enhance statistical learning methods for predicting the likelihood that a trade is filled at a quoted price in the European corporate bond market. Using production-scale, real trading data, we ran quantum circuits on IBM quantum computers to generate transformed data representations. These were then used as inputs to established models including logistic regression, gradient boosting, random forest, and neural networks.

The results:
• Up to 34% improvement in predictive performance over classical baselines.
• Demonstrated on real, production-scale trading data, not synthetic datasets.
• Evidence that quantum-enhanced feature representations can capture complex market patterns beyond those typically learned by classical-only methods.

This marks the first known application of quantum-enhanced statistical learning in algorithmic trading. For full technical details, please see our published paper:
📄 Technical paper: https://lnkd.in/eKBqs3Y7
📰 Press release: https://lnkd.in/euMRbbJG

Congratulations to Philip Intallura, Ph.D., Joshua Freeland, and all HSBC colleagues involved — and huge thanks to IBM for their partnership.
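To make the general pattern concrete, here is a minimal sketch of a quantum-feature-map pipeline of the kind the paper describes: classical features are encoded into a small simulated circuit, and per-qubit expectation values feed a standard classifier. Everything here is an illustrative assumption, the circuit layout, angles, and toy data are mine, not HSBC's production circuits or trading data.

```python
# Illustrative sketch only (not HSBC's pipeline): RY data encoding, CZ
# entanglers, a fixed second rotation layer so entanglement affects the
# measurement, then per-qubit Z expectations as features for a classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

N_QUBITS = 3
rng = np.random.default_rng(0)
SECOND_ROUND = rng.uniform(0, np.pi, N_QUBITS)  # fixed, arbitrary angles

def ry(angle):
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def lift(u, q):
    """Embed a 2x2 gate u acting on qubit q into the full register."""
    ops = [np.eye(2)] * N_QUBITS
    ops[q] = u
    full = ops[0]
    for o in ops[1:]:
        full = np.kron(full, o)
    return full

def quantum_features(x):
    psi = np.zeros(2 ** N_QUBITS)
    psi[0] = 1.0                                  # start in |000>
    for q in range(N_QUBITS):                     # RY data-encoding layer
        psi = lift(ry(x[q]), q) @ psi
    for q in range(N_QUBITS - 1):                 # CZ entanglers on neighbours
        for idx in range(2 ** N_QUBITS):
            bits = format(idx, f"0{N_QUBITS}b")
            if bits[q] == "1" and bits[q + 1] == "1":
                psi[idx] *= -1
    for q in range(N_QUBITS):                     # fixed second rotation layer
        psi = lift(ry(SECOND_ROUND[q]), q) @ psi
    probs = psi ** 2                              # measurement probabilities
    # feature q = <Z_q> = P(bit q = 0) - P(bit q = 1)
    return [sum(p * (1.0 if format(i, f"0{N_QUBITS}b")[q] == "0" else -1.0)
                for i, p in enumerate(probs))
            for q in range(N_QUBITS)]

X = rng.uniform(0, np.pi, (200, N_QUBITS))        # toy stand-in features
y = (X.sum(axis=1) > 1.5 * np.pi).astype(int)     # toy "filled at quote" label
X_q = np.array([quantum_features(x) for x in X])
model = LogisticRegression().fit(X_q, y)
print("train accuracy on quantum features:", model.score(X_q, y))
```

The point of the sketch is the division of labour: the quantum circuit only transforms the representation, while the downstream learner stays entirely classical.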
Impact of Quantum States on Algorithm Performance
Summary
The impact of quantum states on algorithm performance refers to how the unique properties of quantum systems—such as entanglement and superposition—can transform the way algorithms solve problems, often allowing quantum computers to outperform traditional ones for certain tasks. This emerging field explores how quantum data and specialized quantum states can unlock new levels of predictive accuracy, speed, and scalability across industries, from finance and chemistry to cryptography and artificial intelligence.
- Explore quantum data: Quantum algorithms are most promising when working directly with quantum-generated information, where classical computers struggle to capture the complexity of these states.
- Use error detection: Integrating error detection and mitigation techniques is crucial for trustworthy results as quantum circuits scale up in size and complexity.
- Match algorithm to resources: Carefully consider the demands of each quantum algorithm—like the number of qubits or state preparation steps—to ensure practical performance gains for your specific application.
The preparation of GHZ states is a common benchmark for quantum processors. These states are not only a test of device-wide entanglement; they are also useful resources in numerous quantum algorithms. Our team recently demonstrated a 120-qubit logical GHZ state on our Heron r2 processors, the largest reported on any hardware. This includes a 60-logical-qubit GHZ state on a single-shot basis (i.e., with no readout error mitigation).

These experiments were enabled by error detection at both the device and circuit level. At the device level, we can use our knowledge of the device architecture to detect if some couplers fail during a particular shot. At the circuit level, we can use symmetries inherent in the GHZ state to detect if certain violations occur.

The state preparation proceeds as follows: we first eliminate edges with bad CZ gates or bad readout (error above a given threshold). Then, starting from a qubit at the center of the remaining graph, we perform a breadth-first search (BFS) to prepare a GHZ state in shallow depth. During the BFS, some nodes are randomly blocked in order to increase the chance of check qubits being found. Afterwards, any node that does not belong to the GHZ state but is adjacent to two of its qubits may act as a check in a ZZ parity measurement. We aim to maximize the "coverage" of checks found through this randomization, while not increasing the depth beyond a given threshold above the best possible depth. The coverage is the number of locations in the circuit whose failure is detected by one of the checks, which we can compute efficiently using Pauli propagation. We can therefore predict exactly how many failures will be detected by our checks, and optimize the layout for them.

These experiments were performed by Ali Javadi and Simon Martiel. They also leverage many of the recent advances made by our team, including improved readout on Herons, characterization of coupler errors, and M3 readout error mitigation. For comparison, the recent demonstrations by Microsoft/Atom with a 24-qubit GHZ, Quantinuum with a 50-qubit GHZ, and Q-CTRL with a 75-qubit GHZ (also on Heron) also relied on error detection.

As we chart the path towards quantum advantage, what really matters is how large a quantum circuit we can run and whether we can trust that the method used gives accurate results. While GHZ states are simple to simulate classically, this work shows that error detection with post-selection is a potentially viable tool to add to error mitigation and sample-based quantum diagonalization for running experiments at the utility scale (100+ qubits), building the set of trusted tools needed to search for quantum advantage on near-term devices. This is why we are pushing near-term methods such as error mitigation and error detection on utility-scale quantum computers.
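For intuition, here is a minimal Qiskit sketch of the BFS state-preparation idea: entangle outward from a root qubit so depth scales with graph radius rather than qubit count. The coupling graph, the use of CX instead of the hardware-native CZ, and the omission of the blocked-node randomization and check qubits are all simplifications of the described method.

```python
# Minimal sketch of shallow GHZ preparation via BFS over a coupling graph.
# Graph, root choice, and gate set are illustrative simplifications.
from collections import deque
from qiskit import QuantumCircuit

def bfs_ghz(num_qubits, edges, root):
    """Entangle qubits layer by layer outward from `root`."""
    adjacency = {q: [] for q in range(num_qubits)}
    for a, b in edges:
        adjacency[a].append(b)
        adjacency[b].append(a)

    circuit = QuantumCircuit(num_qubits)
    circuit.h(root)                    # root in (|0> + |1>)/sqrt(2)

    visited = {root}
    frontier = deque([root])
    while frontier:
        next_frontier = deque()
        for parent in frontier:
            for child in adjacency[parent]:
                if child not in visited:
                    visited.add(child)
                    circuit.cx(parent, child)  # extend the GHZ branch
                    next_frontier.append(child)
        frontier = next_frontier
    return circuit

# Example: a 3x3 grid coupling map, rooted at the central qubit.
grid_edges = [(0, 1), (1, 2), (3, 4), (4, 5), (6, 7), (7, 8),
              (0, 3), (3, 6), (1, 4), (4, 7), (2, 5), (5, 8)]
print(bfs_ghz(9, grid_edges, root=4))
```

In the demonstration described above, nodes left out of the GHZ state but adjacent to two of its qubits would additionally be wired up as ZZ parity checks.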
-
Lockheed and IBM Use Quantum Computing to Solve Chemistry Puzzle Once Thought Impossible

Introduction: Cracking a Chemical Code with Quantum Power
In a breakthrough for quantum chemistry, Lockheed Martin and IBM have successfully used quantum computing to model the complex electronic structure of an “open-shell” molecule—a challenge that has defied classical computing for years. This marks the first application of the sample-based quantum diagonalization (SQD) method to such systems and signals a significant advance in the practical application of quantum computing for scientific research.

Key Highlights from the Collaboration
• The Molecule: Methylene (CH₂)
 • Methylene is an open-shell molecule, meaning it has unpaired electrons that lead to complex quantum behavior.
 • These molecules are notoriously difficult to simulate accurately because electron correlations create exponentially growing complexity for classical algorithms.
• The Innovation: Sample-Based Quantum Diagonalization (SQD)
 • The team used IBM’s quantum processor to implement SQD for the first time in an open-shell system.
 • SQD is a hybrid algorithm that leverages quantum sampling to solve eigenvalue problems in quantum chemistry, reducing computational burdens.
• Why Classical Methods Fall Short
 • Traditional high-performance computing (HPC) platforms struggle with electron correlation in multi-electron systems.
 • Approximation techniques become prohibitively expensive as system size increases, especially for reactive or radical species like methylene.
• Quantum Advantage in Practice
 • Quantum processors can represent electron configurations using entangled qubits, offering more scalable solutions.
 • By simulating the electronic structure directly, quantum methods could help scientists design new materials, catalysts, and pharmaceuticals faster and more efficiently.

Why It Matters: Pushing Past the Limits of Classical Chemistry
• Industrial and Scientific Impact
 • Simulating open-shell systems is vital for battery design, combustion processes, and metalloprotein modeling.
 • The success of SQD opens the door to accurate modeling of previously inaccessible molecules, potentially accelerating innovations in energy, health, and aerospace.
• Defense and Aerospace Relevance
 • Lockheed Martin’s involvement reflects strategic interest in applying quantum computing to defense-grade materials and mission-critical chemistry.
• Quantum Chemistry as a Flagship Use Case
 • This achievement underscores how quantum computing is beginning to deliver real results in scientific domains where classical methods hit their ceiling.
 • As quantum hardware improves, the number of solvable molecular systems will expand exponentially.

Quantum computing just helped humanity take a critical step into the chemical unknown, proving its value not just in theory—but in practice.

Keith King
https://lnkd.in/gHPvUttw
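A rough sketch of the classical half of SQD, under heavy simplification: bitstring configurations sampled from a quantum circuit define a small subspace, the Hamiltonian is projected into that subspace, and the projected matrix is diagonalized classically. Here the "Hamiltonian" is a random Hermitian stand-in and the "samples" are random bitstrings, whereas in the real method they come from a circuit that concentrates on chemically important electron configurations.

```python
# Toy sketch of the classical post-processing step in SQD: project a
# Hamiltonian into the span of sampled computational basis states and
# diagonalize the small projected matrix. All inputs are stand-ins.
import numpy as np

rng = np.random.default_rng(7)

n_qubits = 6
dim = 2 ** n_qubits
ham = rng.standard_normal((dim, dim))
ham = (ham + ham.T) / 2                    # Hermitian stand-in Hamiltonian

# Pretend these bitstrings were measured from a quantum circuit whose
# distribution concentrates on important configurations.
samples = {int("".join(map(str, rng.integers(0, 2, n_qubits))), 2)
           for _ in range(12)}
basis = sorted(samples)

h_sub = ham[np.ix_(basis, basis)]          # project H into the sampled subspace
ground_estimate = np.linalg.eigvalsh(h_sub)[0]
print(f"subspace dim {len(basis)} of {dim}; "
      f"ground-state estimate {ground_estimate:.4f}")
```

The appeal of the scheme is that the exponentially large eigenvalue problem is replaced by one whose size is set by the number of useful samples.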
-
𝗤𝗨𝗔𝗡𝗧𝗨𝗠 𝗔𝗡𝗗 𝗔𝗜: 𝗪𝗵𝗲𝗿𝗲 𝘄𝗲 𝘀𝘁𝗮𝗻𝗱 (𝗡𝗼𝘃. 𝟮𝟬𝟮𝟱)

Yesterday I attended a Dell Technologies/AMD dinner with AI leaders from several companies, and we had time to reflect on AI adoption and its real impact. Naturally, the question came up: what about the connection between quantum and AI?

This was timely because a new primer on quantum machine learning was published recently. It does something rare in this field. Instead of simply listing algorithms, it explains 𝘄𝗵𝘆 many proposed quantum speedups rely on assumptions that do not always hold in practice. The authors take the time to peel back the layers. Many quantum ML results look impressive on paper, but only if you have ideal state preparation, perfect access to data, or a very specific problem structure. When these assumptions are relaxed, the quantum advantage can disappear. The paper goes through this step by step and shows where the real bottlenecks are. It is very clear that encoding classical data into quantum states is often the main cost, and this step alone can cancel out theoretical speedups.

The most convincing results appear in a different setting: learning directly from quantum data. Here the situation changes. Quantum states, processes, and many-body dynamics are things classical models cannot easily represent or access. In this regime, quantum learners can use fewer samples, rely on collective measurements, and leverage entanglement to extract more information per experiment. Some tasks even show provable gaps between what quantum and classical learners can do. This part of the paper feels very grounded because these techniques are already used in labs for tomography, Hamiltonian learning, and experimental optimization.

So the message is not that quantum ML is unrealistic. It is that near-term progress is much more promising on quantum data than on classical datasets. The paper is unusually direct about this, which is why I found it worth sharing.

For comparison, I also looked again at an AI-for-quantum white paper that came out a few months ago. On this side the progress is much more immediate. Machine learning is already improving control, calibration, compilation, sensing, simulation, and even early error-decoding strategies. It is incremental engineering work, but it is happening now and it makes a difference.

Together, the two perspectives fit well. Quantum for AI is still research-heavy and depends on strong conditions. AI for quantum is already helping current devices perform better. Knowing this distinction gives a more realistic picture of where the impact is coming from today.

#AI #QuantumComputing #QML #QuantumAI #DeepTech
-
Shor’s algorithm is possible with as few as 10,000 reconfigurable atomic qubits
by John Preskill (Caltech)
https://lnkd.in/ethGUK8B

Quantum computers have the potential to perform computational tasks beyond the reach of classical machines. A prominent example is Shor's algorithm for integer factorization and discrete logarithms, which is of both fundamental importance and practical relevance to cryptography. However, due to the high overhead of quantum error correction, optimized resource estimates for cryptographically relevant instances of Shor's algorithm require millions of physical qubits. Here, by leveraging advances in high-rate quantum error-correcting codes, efficient logical instruction sets, and circuit design, we show that Shor's algorithm can be executed at cryptographically relevant scales with as few as 10,000 reconfigurable atomic qubits.

Increasing the number of physical qubits improves time efficiency by enabling greater parallelism; under plausible assumptions, the runtime for discrete logarithms on the P-256 elliptic curve could be just a few days for a system with 26,000 physical qubits, while the runtime for factoring RSA-2048 integers is one to two orders of magnitude longer. Recent neutral-atom experiments have demonstrated universal fault-tolerant operations below the error-correction threshold, computation on arrays of hundreds of qubits, and trapping arrays with more than 6,000 highly coherent qubits. Although substantial engineering challenges remain, our theoretical analysis indicates that an appropriately designed neutral-atom architecture could support quantum computation at cryptographically relevant scales. More broadly, these results highlight the capability of neutral atoms for fault-tolerant quantum computing with wide-ranging scientific and technological applications.
-
Quantum computing promises to make LLMs more efficient. And it's already working on real hardware.

Efficient fine-tuning of large language models remains a critical bottleneck in AI development, with most researchers focused on purely classical computing approaches. A new paper from Chinese researchers demonstrates how quantum computing principles can dramatically reduce the parameters needed while improving model performance.

The team introduces the Quantum Weighted Tensor Hybrid Network (QWTHN), which combines quantum neural networks with tensor decomposition techniques to overcome the expressive limitations of traditional Low-Rank Adaptation (LoRA). By leveraging quantum state superposition and entanglement, their approach achieves remarkable efficiency: reducing trainable parameters by 76% while simultaneously improving performance by up to 15% on benchmark datasets.

Most importantly, this isn't just theoretical - they've successfully implemented inference on actual quantum computing hardware. This represents a tangible advancement in making quantum computing practical for AI applications, demonstrating that even current-generation quantum devices can enhance the capabilities of billion-parameter language models.

The integration of quantum techniques into traditional deep learning frameworks might become standard practice for resource-efficient AI development in the future. More on Quantum Hybrid Networks and other AI highlights in this week's LLM Watch:
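For context, here is a minimal numpy sketch of classical LoRA, the baseline QWTHN sets out to improve: a frozen weight matrix gets a trainable low-rank update, shrinking the trainable parameter count from d·d to 2·d·r. Dimensions and initialization are illustrative; the paper's contribution, replacing the low-rank factors with quantum/tensor-network components, is not shown here.

```python
# Minimal classical LoRA sketch (the baseline, not QWTHN itself): the
# frozen weights W are adapted by a trainable low-rank product A @ B.
import numpy as np

d, r = 1024, 8                       # model width and LoRA rank (illustrative)
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))      # frozen pretrained weights
A = rng.standard_normal((d, r)) * 0.01  # trainable down-projection
B = np.zeros((r, d))                 # trainable up-projection (zero init)

x = rng.standard_normal(d)
y = x @ W + x @ A @ B                # adapted forward pass

full, lora = d * d, 2 * d * r
print(f"trainable params: {lora} vs {full} "
      f"({lora / full:.2%} of full fine-tuning)")
```

The QWTHN claim is essentially that quantum superposition and entanglement let an even smaller trainable component match or beat what these low-rank factors can express.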
-
Exciting work from Caltech, Google Quantum AI, MIT, and Oratomic on quantum advantage for classical machine learning.

The long-standing question: can quantum computers offer a rigorous advantage in large-scale classical data processing, not just specialized problems like cryptography or quantum simulation? This paper gives rigorous results for formalized machine learning tasks. In the benchmarks they report, a quantum computer with fewer than 60 logical qubits performs classification and dimension reduction on massive datasets using 4 to 6 orders of magnitude less memory than the classical and QRAM-based baselines in the paper.

The key idea is quantum oracle sketching. Instead of loading an entire dataset into quantum memory, it streams classical samples one at a time, applies small quantum rotations, and discards each sample immediately. These operations coherently build an approximate quantum oracle that can then be used in downstream quantum algorithms. The authors present numerical experiments on IMDb sentiment analysis and single-cell RNA sequencing that are consistent with the theory.

What makes this notable:
- A provable quantum memory advantage for classification and dimension reduction
- The advantage is framed as a theorem under the paper's learning model, not just a conjecture or empirical trend
- The approach is designed to work with streaming, noisy, and time-varying classical data

Read the paper here: https://lnkd.in/g77PuZzQ
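As a loose toy illustration of the streaming idea, my own construction rather than the paper's algorithm: each classical sample is folded into a fixed-size quantum state via a small rotation and then discarded, so the memory footprint is set by the register size, not the stream length.

```python
# Toy illustration (not the paper's quantum oracle sketching procedure):
# stream samples one at a time into a single simulated qubit via small RY
# rotations; memory stays constant no matter how long the stream is.
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

state = np.array([1.0, 0.0])                 # one simulated qubit, |0>
for sample in np.random.default_rng(0).standard_normal(10_000):
    state = ry(0.001 * sample) @ state       # fold sample in, then discard it
print("accumulated state:", state, "| memory: one qubit's amplitudes")
```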
-
Quantum Support Vector Regression (QSVR) on a 27-qubit IBM quantum computer? What could happen?

An interesting recent study, "Quantum Support Vector Regression for Robust Anomaly Detection," analyzes the capabilities and challenges of using Quantum Support Vector Regression (QSVR) for semi-supervised anomaly detection on a NISQ device.

Key takeaways:
* Hardware performance: The QSVR model was benchmarked on a 27-qubit IBM quantum computer. It achieved strong classification performance, with an average AUC a few percentage points lower than the noiseless simulation (0.72 vs. 0.76). Interestingly, the QSVR implemented on hardware surprisingly outperformed its simulated counterpart on two datasets (CC and KDD), which the authors attribute to hardware noise potentially improving generalization.
* Noise robustness: The study investigated the influence of six different noise channels on the QSVR's performance. The QSVR was found to be largely robust against depolarizing, phase damping, phase flip, and bit flip noise. However, amplitude damping noise resulted in the most significant degradation of the model, and miscalibration noise also had the potential to impact performance.
* Vulnerability to adversarial attacks: A critical finding is the high vulnerability of the QSVR to adversarial attacks. Even weak Projected Gradient Descent (PGD) attacks with a strength of ε = 0.01 could reduce the Area Under the ROC Curve (AUC) by up to an order of magnitude on some datasets.
* Noise and adversarial robustness: Introducing quantum noise into the QSVR did not provide a clear beneficial effect on its adversarial robustness. The adversarial attacks were often so powerful that the noisy models transitioned to random classifiers at higher noise levels.
* Adversarial training: Adversarial training, a common strategy to increase robustness in classical ML, was also explored. However, in this semi-supervised setting where only normal samples are used for training, adversarial training did not reliably improve the adversarial robustness of the QSVR.

Read the article here: https://lnkd.in/d7QmmHFN

#quantum #qml #datascience #machinelearning #ml
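For readers unfamiliar with the general recipe, here is a hedged sketch of kernel SVR used for semi-supervised anomaly detection: train only on normal data and flag large regression residuals. The RBF kernel is a classical stand-in for where a quantum fidelity kernel would slot in, and the reconstruction-style target is an illustrative choice, not necessarily the paper's exact setup.

```python
# Illustrative sketch, not the paper's exact QSVR: kernel SVR trained only
# on normal samples predicts one feature from the others; large residuals
# on new points flag anomalies. The RBF kernel stands in for a quantum
# fidelity kernel |<phi(x)|phi(y)>|^2 measured on hardware.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, (80, 4))            # training: normal data only
test = np.vstack([rng.normal(0.0, 1.0, (10, 4)),  # normal test points
                  rng.normal(4.0, 1.0, (10, 4))]) # injected anomalies

def kernel(a, b):
    """RBF stand-in; a quantum kernel would measure state overlaps here."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2)

X_train, y_train = normal[:, 1:], normal[:, 0]    # predict feature 0 from rest
model = SVR(kernel="precomputed").fit(kernel(X_train, X_train), y_train)

residuals = np.abs(model.predict(kernel(test[:, 1:], X_train)) - test[:, 0])
print("mean residual, normal points:   ", residuals[:10].mean().round(3))
print("mean residual, anomalous points:", residuals[10:].mean().round(3))
```

The adversarial-attack findings above then amount to showing that tiny, targeted perturbations of the test inputs can push these residual scores past the decision threshold.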
-
Three "secrets" of amplitude amplification (generalized Grover's algorithm)

First, the quantum benefit comes from measurement, not the computation itself. In fact, the computation performed with gates is much more complicated than its classical counterpart. Making some numbers larger does not help unless they are interpreted by nature as probabilities.

Second, the "inversion about the mean" is not really an inversion about the mean unless you start with a uniform distribution (the original context in Grover's algorithm, before the generalization that actually makes it useful). In the general case, each amplitude is inverted with respect to its own point of inversion. What these inversion points have in common is that they are obtained by scaling the original amplitudes by the same number (the inner product between the original state and the current one).

Third, this inner product is cos(2*j*theta), where j is the number of iterations and theta is defined by the initial combined probability of the desired outcomes, sin^2(theta); after j iterations that probability becomes sin^2((2*j+1)*theta). The quantum implementation will have a minus in front of the cos for an odd number of iterations. This property of the iterate allows us to use it as a rotation and apply phase estimation to it.

We don't need multi-dimensional vector spaces to understand the effect of Grover operators. Presentations typically go to that extreme, or treat only the case where the original state is the uniform distribution. Developers typically don't understand high-level linear algebra... We can use just two dimensions, trigonometry, and binary strings to do quantum computing.

What about a fourth secret: the inversion can be reduced to the inversion of the default state (G = MO = AM_0A^-1O), and that inversion happens to be just multiplication by -1, because the amplitude of 0 is always real in the Grover context. And again, there is a minus in the quantum implementation compared to the theory.

#quantumcomputing #grover #search #quantum #algorithms #geometry
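The two-dimensional picture is easy to check numerically. Here is a small sketch that runs inversion about the mean directly and compares it against the sin^2((2j+1)*theta) prediction, assuming the uniform initial state of textbook Grover (the special case where "inversion about the mean" is literally that).

```python
# Numerical check of the trigonometric picture for textbook Grover: each
# iteration rotates the state by 2*theta, so after j iterations the
# success probability is sin^2((2j+1)*theta).
import numpy as np

N, marked = 8, [3]                           # 3-qubit search, one marked item
amps = np.full(N, 1 / np.sqrt(N))            # uniform initial state
theta = np.arcsin(np.sqrt(len(marked) / N))  # sin^2(theta) = initial success prob

for j in range(1, 4):
    amps[marked] *= -1                       # oracle: flip marked amplitudes
    amps = 2 * amps.mean() - amps            # inversion about the mean
    actual = float((amps[marked] ** 2).sum())
    predicted = np.sin((2 * j + 1) * theta) ** 2
    print(f"iteration {j}: measured {actual:.4f}, predicted {predicted:.4f}")
```

The two columns agree to machine precision, which is exactly the "two dimensions, trigonometry, and binary strings" point of the post.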
-
Prior studies identified efficient parallel quantum algorithms for greater model performance; additional studies using quantum trainable gates were needed to further quantify the machine learning benefits.

(Fig. 3) Using a 10Q2L basis with a fixed number of qubits, parallel layers, and quantum embedding: increasing the number of trainable rotational gates lowered training loss, lowered validation loss, and increased accuracy to 100%. Specifically, using 3 trainable gates yielded the lowest overall validation loss with 100% accuracy, while using 4 trainable gates produced comparable losses but better convergence. Runtime and RAM both increased at similar, predictable rates.

(Fig. 4-5) Using the 10Q2L basis with a fixed number of qubits and quantum embedding, the number of trainable rotational gates and parallel layers (1-8) were varied. The main finding was that each additional parallel layer yielded lower loss. For a given number of layers, a specific number of trainable rotational gates helped in the majority of runs. Runtime and RAM growth increased gradually as layers were added.

(Tables 6/7) Comparing a 10Q60L 1RYe embedding basis with a 10Q8L 8RYe basis: varying the number of trainable rotational gates generally lowered training loss, lowered validation loss, and improved accuracy for both bases. Specifically, adding between 1 and 5 trainable gates to the weaker 1RYe embedding lowered loss values more than two-fold. The stronger 8RYe embedding with the smaller 10Q8L architecture had a trainable-gate configuration that lowered the embedding-only validation loss from 0.0034 to 0.0011 with 100% accuracy. Runtime and RAM were higher for the 10Q60L 1RYe system because over 7x as many parallel quantum algorithms were used.

(Fig. 8-9) Applying a 2Q160L architecture to several traditional algorithms versus a simpler embedding + trainable gate method yielded higher performance with the latter. Adding a single trainable gate to the 8RYe embedding improved training loss 16x, with matching validation loss at 0.0003 and 100% accuracy. The results indicate that embeddings with better loss convergence should be identified first, before adding trainable gates. This experiment set required significantly less RAM than runtime, as only 2 qubits were needed to accommodate the two-qubit CNOT gates and the dataset requirements.

(Tables 10/11) Using a 10Q circuit with a fixed quantum algorithm: increasing the number of parallel quantum algorithms generally lowered loss and increased accuracy on the 'Make moons' dataset with 2x noise. Specifically, 8 layers yielded over 5x better validation loss than the original single circuit, with accuracy increasing by 13 percentage points, from 84.0% to 97.0%.

(Fig. 12) Related quantum algorithm research using NVIDIA GPUs is being conducted by Brookhaven Lab, SandboxAQ, Terra Quantum, Classiq, and BASF.
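As a concrete reference for the embedding-plus-trainable-gates pattern these experiments vary, here is a minimal PennyLane sketch on 2 qubits. The layer count, wiring, and angles are illustrative assumptions, not the study's 10Q2L or 2Q160L architectures.

```python
# Minimal sketch of the pattern under study: an RY data embedding followed
# by layers of trainable rotational gates with a CNOT entangler. The 2-qubit,
# 2-layer configuration here is illustrative only.
import pennylane as qml
import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(x, weights):
    # data embedding: one RY rotation per qubit ("RYe"-style encoding)
    qml.RY(x[0], wires=0)
    qml.RY(x[1], wires=1)
    # repeated layers of trainable rotational gates plus a CNOT entangler
    for layer in weights:
        qml.RY(layer[0], wires=0)
        qml.RY(layer[1], wires=1)
        qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

x = np.array([0.3, 1.1])                       # toy 2-feature input
weights = np.random.uniform(0, np.pi, (2, 2))  # 2 layers x 2 trainable gates
print("model output <Z0>:", circuit(x, weights))
```

In the experiments above, "adding trainable gates" corresponds to widening each layer's rotation block, and "parallel layers" to running several such circuits side by side.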