Many of you will have seen the news about HSBC’s world-first application of quantum computing in algorithmic bond trading. Today, I’d like to highlight the technical paper that explains the research behind this milestone.

In collaboration with IBM, our teams investigated how quantum feature maps can enhance statistical learning methods for predicting the likelihood that a trade is filled at a quoted price in the European corporate bond market. Using production-scale, real trading data, we ran quantum circuits on IBM quantum computers to generate transformed data representations. These were then used as inputs to established models including logistic regression, gradient boosting, random forest, and neural networks.

The results:
• Up to 34% improvement in predictive performance over classical baselines.
• Demonstrated on real, production-scale trading data, not synthetic datasets.
• Evidence that quantum-enhanced feature representations can capture complex market patterns beyond those typically learned by classical-only methods.

This marks the first known application of quantum-enhanced statistical learning in algorithmic trading. For full technical details, please see our published paper:
📄 Technical paper: https://lnkd.in/eKBqs3Y7
📰 Press release: https://lnkd.in/euMRbbJG

Congratulations to Philip Intallura, Ph.D., Joshua Freeland, and all HSBC colleagues involved — and huge thanks to IBM for their partnership.
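To make the "quantum features into classical models" pipeline concrete, here is a minimal sketch, not HSBC/IBM's actual circuit or data: a toy two-qubit angle-encoding feature map simulated in plain NumPy, whose measurement probabilities are fed to a classical logistic regression. All data and circuit choices below are invented for illustration.

```python
# Minimal sketch (not HSBC/IBM's actual circuit): a 2-qubit angle-encoding
# feature map simulated with NumPy, whose output probabilities feed a
# classical logistic regression, mirroring the "quantum features into
# classical model" pipeline described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def ry(theta):
    # Single-qubit Y rotation.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def quantum_features(x):
    # Encode two features as rotation angles, entangle, and return the
    # four measurement probabilities as the transformed representation.
    state = np.kron(ry(x[0]) @ np.array([1.0, 0.0]), ry(x[1]) @ np.array([1.0, 0.0]))
    state = CNOT @ state
    return np.abs(state) ** 2

# Hypothetical toy data standing in for trade features (e.g. size, spread).
rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(200, 2))
y = (np.sin(X[:, 0]) * np.cos(X[:, 1]) > 0.25).astype(int)

Phi = np.array([quantum_features(x) for x in X])
X_tr, X_te, y_tr, y_te = train_test_split(Phi, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("test accuracy on quantum-mapped features:", clf.score(X_te, y_te))
```

The same quantum-derived feature matrix could equally be passed to gradient boosting, random forest, or a neural network, as in the models listed above.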
Applications of Quantum Simulators in Data Analysis
Explore top LinkedIn content from expert professionals.
Summary
Quantum simulators are specialized computers that use the principles of quantum physics to model complex data patterns, offering new possibilities for analyzing and processing large datasets. In data analysis, these simulators can reveal subtle relationships in information, sometimes providing speed or accuracy improvements over traditional computing methods.
- Explore quantum feature mapping: Consider using quantum simulators to transform data into richer, more informative representations, which can reveal hidden trends that might be missed by classical approaches.
- Utilize quantum parallelism: Take advantage of quantum computing's ability to process multiple data samples at once, helping to speed up tasks like classification and pattern discovery in large datasets.
- Embrace novel workflows: Experiment with quantum approaches like oracle sketching or quantum neural networks, especially when working with streaming or unknown data, to uncover new efficiencies in memory usage or predictive accuracy.
-
> Sharing Resource < Interesting benchmark for finance: "Quantum vs. Classical Machine Learning: A Benchmark Study for Financial Prediction" by Rehan Ahmad, Muhammad Kashif, Nouhaila I., Muhammad Shafique

Abstract: In this paper, we present a reproducible benchmarking framework that systematically compares QML models with architecture-matched classical counterparts across three financial tasks: (i) directional return prediction on U.S. and Turkish equities, (ii) live-trading simulation with Quantum LSTMs versus classical LSTMs on the S&P 500, and (iii) realized volatility forecasting using Quantum Support Vector Regression. By standardizing data splits, features, and evaluation metrics, our study provides a fair assessment of when current-generation QML models can match or exceed classical methods. Our results reveal that quantum approaches show performance gains when data structure and circuit design are well aligned. In directional classification, hybrid quantum neural networks surpass the parameter-matched ANN by +3.8 AUC and +3.4 accuracy points on AAPL stock and by +4.9 AUC and +3.6 accuracy points on the Turkish stock KCHOL. In live trading, the QLSTM achieves higher risk-adjusted returns in two of four S&P 500 regimes. For volatility forecasting, an angle-encoded QSVR attains the lowest QLIKE on KCHOL and remains within ~0.02-0.04 QLIKE of the best classical kernels on the S&P 500 and AAPL. Our benchmarking framework clearly identifies the scenarios where current QML architectures offer tangible improvements and where established classical methods continue to dominate.

Link: https://lnkd.in/e4WUdr-n

#quantummachinelearning #machinelearning #research #paper #benchmark #finance
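For readers wondering what "angle-encoded QSVR" looks like in practice, here is a rough sketch under my own assumptions (not the paper's exact setup): an RY angle-encoding fidelity kernel, computed in closed form with NumPy and passed to scikit-learn's SVR as a precomputed kernel, on made-up stand-in data.

```python
# A rough sketch (my own toy construction, not the paper's setup) of an
# angle-encoded quantum kernel for SVR: each sample maps to a product state
# via RY angle encoding, and the kernel is the state fidelity
# |<phi(x)|phi(x')>|^2, computed in closed form and used as a precomputed
# kernel in scikit-learn's SVR.
import numpy as np
from sklearn.svm import SVR

def fidelity_kernel(A, B):
    # For RY product-state encoding, the per-feature overlap is cos((a-b)/2),
    # so the full-state fidelity factorises across features.
    diff = A[:, None, :] - B[None, :, :]
    return np.prod(np.cos(diff / 2) ** 2, axis=-1)

# Hypothetical stand-in for realised-volatility features and targets.
rng = np.random.default_rng(1)
X = rng.uniform(-np.pi, np.pi, size=(120, 3))
y = np.sin(X).sum(axis=1) + 0.1 * rng.normal(size=120)

K_train = fidelity_kernel(X[:100], X[:100])
K_test = fidelity_kernel(X[100:], X[:100])
model = SVR(kernel="precomputed").fit(K_train, y[:100])
print("toy QSVR predictions:", model.predict(K_test)[:5])
```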
-
*How can you use quantum neural networks (QNNs) to gain a quantum advantage on classical data?* We propose to use QNNs (and other quantum algorithms, including quantum signal processing) to process data in quantum sensors.

Attempts over the past 7+ years to find near-term practical applications of quantum neural networks on classical data have faced a variety of challenges, chief among them: if the classical data is small enough to load into a quantum computer, then it has (empirically) always been possible to address the same problem with a classical neural network, and without the downsides of quantum computing with current (noisy) hardware.

Rather than trying to tackle problems in the setting where the classical data originates from a classical computer's memory, we switch the framing of the problem slightly, but in a way that makes a huge difference: what if we use QNNs to perform classification on classical but a priori _unknown_ data? What do we mean by _unknown_ data? A quantum sensor senses a classical signal that is unknown to us, but is ultimately classical. We can use a QNN to help reveal a _trained nonlinear function_ of the unknown classical signal.

One of the examples we have explored shows how you can gain an advantage where both the quantum sensing and quantum computing are performed by a single qubit! If you already knew the classical signal, there would be no hope for a quantum advantage (simulating a single qubit is of course trivial), but in the sensing setting we don't know the signal a priori.

We have been able to show it is possible to gain a quantum computational-sensing advantage using quantum signal processing (QSP) treated as a QNN, versus first using a conventional quantum sensor and then postprocessing to compute the nonlinear classification function classically. By performing an approximation of the nonlinear classification function in the quantum system before measurement, the quantum sampling noise is greatly reduced: measurements of the system yield 0 or 1 with high probability depending on which of two classes the signal was in.

We have a preprint on the arXiv showing various schemes for quantum computational sensing with a small number of qubits and/or bosonic modes, tested on a variety of binary and multiclass classification problems: https://lnkd.in/enQxFDNt

I am optimistic about the prospects for experimental proof-of-concept demonstrations given the modest quantum resources required (down to just a single qubit and a not-particularly-deep circuit). Congratulations to Saeed Khan and Sridhar Prabhu, as well as Logan Wright!
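As a stripped-down illustration of the mechanism (my own toy construction, not the preprint's trained QSP sequence): if the unknown signal rotates a single qubit several times before a single measurement, the outcome probability becomes a much sharper function of the signal than one rotation followed by classical post-processing, which is the intuition behind doing part of the classification coherently before measurement.

```python
# Simplified single-qubit illustration (not the preprint's trained QSP
# sequence): the unknown classical signal theta rotates the qubit `depth`
# times before one measurement, so P(measure 0) = cos(depth*theta/2)^2,
# a much sharper response than a single rotation would give.
import numpy as np

def rx(theta):
    # Single-qubit X rotation.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def p_zero(theta, depth):
    # Apply the unknown-signal rotation `depth` times, then measure.
    state = np.array([1.0, 0.0], dtype=complex)
    for _ in range(depth):
        state = rx(theta) @ state
    return abs(state[0]) ** 2

for theta in (0.1, 0.3, 0.5):
    print(f"theta={theta}: single pass {p_zero(theta, 1):.3f}, "
          f"deep circuit {p_zero(theta, 8):.3f}")
```

In the actual proposal the interleaved rotations are trained, so the final measurement approximates the desired nonlinear classification function of the signal rather than a fixed amplification.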
-
⚛️ Parallel Data Processing in Quantum Machine Learning

🧾 We propose a Quantum Machine Learning (QML) framework that leverages quantum parallelism to process entire training datasets in a single quantum operation, addressing the computational bottleneck of sequential data processing in both classical and quantum settings. Building on the structural analogy between feature extraction in foundational quantum algorithms and parameter optimization in QML, we embed a standard parameterized quantum circuit into an integrated architecture that encodes all training samples into a quantum superposition and applies classification in parallel. This approach reduces the theoretical complexity of loss function evaluation from O(N^2) in conventional QML training to O(N), where N is the dataset size.

Numerical simulations on multiple binary and multi-class classification datasets demonstrate that our method achieves classification accuracy comparable to conventional circuits while offering substantial training time savings. These results highlight the potential of quantum-parallel data processing as a scalable pathway to efficient QML implementations.

ℹ️ Ramezani et al., 2025
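A toy NumPy sketch of the core trick, heavily simplified relative to the paper's architecture: an index register in uniform superposition holds all N one-feature samples at once, one shared trainable rotation acts on the feature qubit, and a single expectation value returns the dataset-averaged prediction, i.e. one superposed evaluation replaces N sequential ones.

```python
# Toy simplification (not Ramezani et al.'s full architecture): encode all
# samples in superposition over an index register, apply one shared trainable
# rotation to the feature qubit, and read a single Z expectation value that
# equals the average of the per-sample predictions.
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def batched_expectation(samples, weight):
    # State: (1/sqrt(N)) * sum_i |i> (x) RY(weight) RY(x_i) |0>.
    n = len(samples)
    branches = np.array([ry(weight) @ ry(x) @ np.array([1.0, 0.0]) for x in samples])
    state = branches.reshape(-1) / np.sqrt(n)          # index (x) feature register
    z_full = np.kron(np.eye(n), np.diag([1.0, -1.0]))  # Z on the feature qubit only
    return float(state @ z_full @ state)

samples = np.array([0.2, 0.9, 1.7, 2.5])
w = 0.4
parallel = batched_expectation(samples, w)
sequential = np.mean([batched_expectation(np.array([x]), w) for x in samples])
print(parallel, sequential)  # identical: one superposed evaluation equals the sample average
```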
-
Exciting work from Caltech, Google Quantum AI, MIT, and Oratomic on quantum advantage for classical machine learning.

The long-standing question: can quantum computers offer a rigorous advantage in large-scale classical data processing, not just specialized problems like cryptography or quantum simulation? This paper gives rigorous results for formalized machine learning tasks. In the benchmarks they report, a quantum computer with fewer than 60 logical qubits performs classification and dimension reduction on massive datasets using 4 to 6 orders of magnitude less memory than the classical and QRAM-based baselines in the paper.

The key idea is quantum oracle sketching. Instead of loading an entire dataset into quantum memory, it streams classical samples one at a time, applies small quantum rotations, and discards each sample immediately. These operations coherently build an approximate quantum oracle that can then be used in downstream quantum algorithms. The authors present numerical experiments on IMDb sentiment analysis and single-cell RNA sequencing that are consistent with the theory.

What makes this notable:
- A provable quantum memory advantage for classification and dimension reduction
- The advantage is framed as a theorem under the paper's learning model, not just a conjecture or empirical trend
- The approach is designed to work with streaming, noisy, and time-varying classical data

Read the paper here: https://lnkd.in/g77PuZzQ
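The quantum construction itself cannot be reproduced in a few lines, so the snippet below substitutes a classical streaming analogue (Oja's rule for streaming PCA) purely to illustrate the memory pattern the post describes: a constant-size sketch, one small update per streamed sample, and nothing retained afterwards, yet the sketch still supports downstream dimension reduction.

```python
# Classical stand-in (Oja's rule), NOT the paper's quantum oracle sketching:
# each streamed sample nudges a constant-size sketch and is then discarded;
# the sketch converges to the data's top principal direction without the
# dataset ever being stored.
import numpy as np

rng = np.random.default_rng(0)
dim, n_samples, lr = 8, 5000, 0.01
true_direction = rng.normal(size=dim)
true_direction /= np.linalg.norm(true_direction)

sketch = rng.normal(size=dim)               # constant-size state, independent of N
sketch /= np.linalg.norm(sketch)

for _ in range(n_samples):
    # One streamed sample: strong variance along true_direction plus noise.
    x = 3.0 * rng.normal() * true_direction + 0.3 * rng.normal(size=dim)
    sketch += lr * (x @ sketch) * x         # small update from this sample
    sketch /= np.linalg.norm(sketch)        # keep the sketch normalised
    # the sample is discarded here: nothing about it is retained

print("alignment with top principal direction:", abs(sketch @ true_direction))
```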
-
In finance, Monte Carlo simulations help us to measure risks like VaR or price derivatives, but they’re often painfully slow because you need to generate millions of scenarios.

Matsakos and Nield suggest something different: they build everything directly into a quantum circuit. Instead of precomputing probability distributions classically, they simulate the future evolution of equity, interest rate, and credit variables inside the quantum computer, including binomial trees for stock prices, models for rates, and credit migration or default models. All that is done within the circuit, and then quantum amplitude estimation is used to extract risk metrics without any offline preprocessing. This means you keep the quadratic speedup of quantum MC while also removing the bottleneck of classical distribution generation.

If you want to explore the topic further, here is the paper: https://lnkd.in/dMHeAGnS

#physics #markets #physicsinfinance #derivativespricing #quant #montecarlo #simulation #finance #quantitativefinance #financialengineering #modeling #quantum
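For context on why amplitude estimation matters here, a purely classical toy (not the paper's circuit): Monte Carlo estimation of a tail probability for a one-step lognormal equity move, whose error shrinks like 1/sqrt(N). Amplitude estimation reaches a target accuracy epsilon with on the order of 1/epsilon oracle calls instead of 1/epsilon^2 samples, which is the quadratic speedup referred to above. All parameters below are hypothetical.

```python
# Classical toy for context (not the paper's quantum circuit): Monte Carlo
# estimate of P(one-step relative loss > threshold) under a lognormal model.
# The empirical error shrinks like 1/sqrt(N); amplitude estimation achieves
# the same accuracy with O(1/epsilon) oracle calls instead of O(1/epsilon^2).
import math
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, loss_threshold = 0.0, 0.2, 0.25   # hypothetical parameters

def mc_tail_probability(n_paths):
    # Simulate one-step lognormal returns and count losses beyond the threshold.
    returns = np.exp(mu - 0.5 * sigma**2 + sigma * rng.normal(size=n_paths))
    return np.mean(1.0 - returns > loss_threshold)

# Analytic reference value for the same tail probability.
exact = 0.5 * (1 + math.erf(
    (math.log(1 - loss_threshold) - (mu - 0.5 * sigma**2)) / (sigma * math.sqrt(2))))

for n in (100, 10_000, 1_000_000):
    err = abs(mc_tail_probability(n) - exact)
    print(f"N={n:>9,d}  MC error ~ {err:.4f}  (1/sqrt(N) = {n**-0.5:.4f})")
```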
-
If you've been doubting whether quantum computers will ever do anything useful beyond breaking encryption, this one's for you.

A quantum computer with fewer than 60 logical qubits can run AI on massive real-world datasets using ten thousand to a million times less memory than any classical machine. Movie review sentiment analysis. Cell type classification from RNA sequencing. Real AI tasks, real data.

This is not a storage trick. The quantum computer runs the full ML pipeline. An algorithm called quantum oracle sketching streams data through the processor one sample at a time. Each sample applies a small quantum rotation, then gets discarded. The accumulated rotations build a compressed quantum model of the entire dataset in a handful of qubits. Quantum algorithms then run classification and dimensionality reduction directly on that model. A readout protocol extracts the results. Data in, model built, inference done, predictions out. All on a tiny quantum chip.

A classical machine matching this provably needs exponentially more memory, and that proof is unconditional. It relies only on quantum superposition being real. It holds even if you give classical machines unlimited time.

Think about what this means for the age of AI. The world generates more data every day than it can store. Every sensor, every device, every interaction. Classical AI has to choose: store less and learn worse, or build bigger data centers and burn more energy. A quantum ML pipeline that learns from streaming data without storing it sidesteps that tradeoff entirely.

But to be clear: this is a theoretical proof validated through numerical simulations. It has not been demonstrated on actual quantum hardware. Yet, fewer than 60 logical qubits is in the range that near-term error-corrected machines are targeting. We are finally getting the use-case evidence this field needed.

📸 Credits: Haimeng Zhao (Caltech), Alexander Zlokapa, Hsin-Yuan (Robert) Huang, John Preskill, Ryan Babbush, Jarrod McClean, Hartmut Neven

Paper on arXiv:2604.07639

Deep dive on this live on X (@drmichaela_e). Newsletter version at 5pm CET today, link on my website.