Quantum-Inspired Methods: Classical Algorithms with a Quantum Twist
The term “quantum-inspired” is appearing more and more frequently in discussions around quantum technologies, but what does it actually mean?
In simple terms, quantum-inspired algorithms are classical computing techniques that borrow ideas from quantum physics to solve problems differently. They mimic quantum features, like superposition or entanglement, using clever math on ordinary computers. Unlike true quantum algorithms that need actual qubits, quantum-inspired methods run on today’s classical hardware. The goal is to capture some of the underlying strength of quantum computations (for example, how quantum systems can encode complex correlations) to achieve speedups or compression beyond standard approaches, without requiring a quantum computer. Equally exciting is that many such algorithms have a clear path to quantum hardware: for instance, a solution expressed as a matrix product state can be translated into an equivalent quantum circuit (the MPS’s tensor factors become gates acting on entangled qubits). This isn’t hype, science fiction, or marketing fluff; it’s a growing set of methods sitting at the intersection of quantum theory and classical algorithm design.
Roots in Many-Body Physics and Tensor Networks
Picture 1: A simplified example of a 4-site MPS (left) and its corresponding quantum circuit (right). Each circle represents an MPS tensor with one physical index (horizontal line, representing a qubit with dimension d = 2) and two vertical bond indices connecting neighboring tensors. The quantum circuit on the right prepares the same quantum state: starting from all qubits initialized in ∣0⟩, each blue rectangle represents a two-qubit unitary gate corresponding to an MPS tensor. The bond dimension D = 2 of the MPS determines how much entanglement can be represented between subsystems. It relates to the number of shared bond qubits n_V between circuit blocks via D = 2^(n_V). In this example, a single bond qubit is reused across the circuit to carry entanglement sequentially along the chain [1].
Where do these ideas actually come from?
Quantum-inspired methods trace their roots to many-body quantum physics and the tensor network techniques developed to simulate it. In quantum physics, describing a system with many particles is hard because the state space grows exponentially with the number of particles. Physicists responded by developing tensor network models, which factor large quantum states into smaller connected tensors. This reduces complexity by limiting the amount of entanglement [1].
A well-known example is the matrix product state (MPS), which expresses a large, high-dimensional vector as a chain of smaller matrices. The key insight is that if a quantum system only shows localized correlations, meaning the entanglement follows an area law, an MPS can approximate it efficiently. By trimming away weak long-range correlations, these methods make it possible to simulate quantum systems that would otherwise be too large to handle.
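To make this concrete, here is a minimal NumPy sketch of exactly this factorization (an illustrative toy, not code from any of the papers discussed here): a length-2^n vector is peeled into a chain of tensors by successive SVDs, and an optional cap on the bond dimension (chi_max, my naming) performs the “trimming” of weak correlations described above.

```python
import numpy as np

def vector_to_mps(psi, d=2, chi_max=None):
    """Factor a length-d**n vector into a chain of MPS tensors by successive
    SVDs, optionally truncating every bond to at most chi_max singular values."""
    n = int(round(np.log(psi.size) / np.log(d)))
    assert d**n == psi.size, "vector length must be a power of d"
    tensors, chi = [], 1
    rest = psi.reshape(1, -1)                  # (current bond, remaining sites)
    for _ in range(n - 1):
        m = rest.reshape(chi * d, -1)          # group (bond, physical) on the left
        u, s, vh = np.linalg.svd(m, full_matrices=False)
        if chi_max is not None:                # trim the weakest correlations
            u, s, vh = u[:, :chi_max], s[:chi_max], vh[:chi_max]
        tensors.append(u.reshape(chi, d, -1))  # shape: (left bond, physical, right bond)
        rest = np.diag(s) @ vh                 # push the remainder to the right
        chi = s.size
    tensors.append(rest.reshape(chi, d, 1))    # last site absorbs what is left
    return tensors

def mps_to_vector(tensors):
    """Contract the chain back into a dense vector (small n only)."""
    out = tensors[0]
    for t in tensors[1:]:
        out = np.tensordot(out, t, axes=([-1], [0]))  # contract the shared bond
    return out.reshape(-1)

# A smooth, structured signal: low entanglement in this encoding.
x = np.linspace(0, 4 * np.pi, 2**8, endpoint=False)
smooth = np.sin(x) + 0.3 * np.sin(3 * x)
smooth /= np.linalg.norm(smooth)
print(np.allclose(mps_to_vector(vector_to_mps(smooth)), smooth))  # True: exact
lossy = mps_to_vector(vector_to_mps(smooth, chi_max=4))
print(np.linalg.norm(lossy - smooth))                             # tiny error
```

Note how the structured signal survives aggressive truncation almost perfectly; a generic random vector would not. That gap, structure versus randomness, is exactly what these methods exploit.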
Quantum-inspired algorithms leverage these same tricks in classical settings. They use tensor networks (and other physics-inspired models) to encode structure and correlations in complex data. The classical algorithm may simulate a “quantum” system internally, for instance, propagating an MPS through some computation, but it ultimately runs on normal silicon. The benefit is that we can import powerful concepts like entanglement (think of it as a way to correlate many variables compactly) into machine learning, optimization, or simulations on classical computers. By mimicking quantum entanglement in a controlled way, these methods achieve impressive compression of information and sometimes improved computation speeds, all without quantum hardware.
A clear introduction to this perspective can be found in Schollwöck (2011), who explained how MPS structures underpin the efficiency of the density matrix renormalization group (DMRG) and paved the way for their use in classical and ML settings (see Schollwöck, Ann. Phys. 326, 96 (2011)).
Let’s explore two real-world examples from recent research that showcase what quantum-inspired techniques can do.
Quantum-Inspired Data Compression for ML (Tensor Networks & Images)
A notable example comes from Dilip et al. (2022), who use tensor networks to compress image data for quantum machine learning. The context here is the challenge of loading large classical datasets (like images) into a small quantum computer. Today’s quantum devices have very few qubits, so feeding in something like a 28×28 pixel image (which naïvely needs 784 qubits if each pixel is one qubit) is impractical. Dilip and colleagues’ idea was to use an MPS to compress the image data before loading it onto qubits. Essentially, the MPS acts as a structured, entanglement-based encoding of the image that uses far fewer qubits and simpler circuits.
Picture 2: A high-dimensional vector is converted into a matrix product state (MPS) by successive singular value decompositions (SVD). Each tensor represents a site in the chain and has three indices: a physical index (e.g., pixel or qubit state) and two bond indices connecting neighboring tensors. The bond dimension limits how much correlation (entanglement) is retained between parts of the system. (From Dilip et al., Phys. Rev. Research 4, 043007 (2022).)
They start by splitting the image into patches and encoding each patch into a small quantum state (using an encoding similar to FRQI, a method for images). Then these patch-states are strung together into a chain (an MPS) with a limited bond dimension χ, which controls how much entanglement (correlation between patches) is allowed. By adjusting χ, they trade off compression vs. fidelity: a low χ means a highly compressed state (less entanglement between parts of the image) while a higher χ can capture more image detail at the cost of needing a deeper circuit or more parameters. Crucially, there is a direct mapping between such an MPS and a quantum circuit: the sequence of matrices in the MPS corresponds to a sequence of quantum gates. So this compressed MPS representation can be directly implemented as a shallow quantum circuit on a small number of qubits [1].
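For intuition, here is a tiny NumPy sketch of an FRQI-style encoding (a simplified illustration; the exact encoding and circuit construction in the paper differ in the details): each pixel intensity becomes a rotation angle on a “color” qubit entangled with “address” qubits that index the pixel.

```python
import numpy as np

def frqi_state(img):
    """FRQI-style statevector for grayscale pixel values in [0, 1]:
    |img> = (1/sqrt(N)) * sum_i |i>(cos(theta_i)|0> + sin(theta_i)|1>),
    with theta_i = (pi/2) * pixel_i. Address qubits |i> index the pixel;
    one extra "color" qubit carries the intensity."""
    p = np.asarray(img, dtype=float).reshape(-1)
    theta = 0.5 * np.pi * p
    state = np.zeros(2 * p.size)
    state[0::2] = np.cos(theta) / np.sqrt(p.size)  # amplitudes with color qubit |0>
    state[1::2] = np.sin(theta) / np.sqrt(p.size)  # amplitudes with color qubit |1>
    return state  # length 2N = 2**(n_address + 1), unit norm

patch = np.array([[0.0, 0.5],
                  [0.25, 1.0]])                        # a toy 2x2 patch
psi = frqi_state(patch)
print(psi.size, np.isclose(np.linalg.norm(psi), 1.0))  # 8 True
```

A state like this, built per patch and then chained into an MPS with bond dimension χ, is the object their classifier then operates on.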
After encoding images this way, the team trained a simple classifier directly on the MPS representations. In their tests on Fashion-MNIST, a quantum circuit using only 11 qubits (thanks to the compression) achieved accuracy competitive with state-of-the-art classical tensor network classifiers. In fact, by tuning the patching and entanglement parameters, they could reach ~88% test accuracy with circuits significantly shallower and using an order of magnitude fewer qubits than a naive encoding. For instance, one setting used just 64 qubits (as virtual qubits in the MPS, or equivalently a 2×4 patch encoding) instead of 784, with only minor loss in accuracy. This showcases a core benefit of quantum-inspired methods: data compression. By exploiting structured correlations (here, treating an image like a quantum state with local entanglement), we can pack information into a much smaller representational space without losing the essence needed for tasks like classification [1]. And because this MPS can be mapped to a quantum circuit, the approach transitions smoothly from a classical simulation to something one could try on a real quantum device when one is available.
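To see why the MPS-to-circuit mapping works, consider the simplest case of bond dimension χ = 2. A left-canonical MPS tensor, reshaped into a matrix, has orthonormal columns (it is an isometry), and any isometry can be completed into a unitary, here a two-qubit gate acting on one bond qubit and one physical qubit. A minimal sketch (the function name and index conventions are mine; references order the indices differently):

```python
import numpy as np
from scipy.linalg import null_space

def mps_tensor_to_gate(A):
    """Embed a left-canonical (2, 2, 2) MPS tensor into a 4x4 two-qubit unitary.
    Reshaped to 4x2, the tensor has orthonormal columns (an isometry); completing
    them to an orthonormal basis of C^4 gives a gate on (bond qubit x physical qubit)."""
    iso = A.reshape(4, 2)             # columns indexed by the bond input
    comp = null_space(iso.conj().T)   # orthonormal complement, shape (4, 2)
    u = np.empty((4, 4), dtype=complex)
    u[:, 0::2] = iso                  # inputs with the physical qubit in |0>: apply the tensor
    u[:, 1::2] = comp                 # remaining inputs: arbitrary orthonormal completion
    assert np.allclose(u.conj().T @ u, np.eye(4))
    return u

# Toy example: a random left-canonical tensor built via QR orthonormalization.
q, _ = np.linalg.qr(np.random.randn(4, 2))
U = mps_tensor_to_gate(q.reshape(2, 2, 2))
print(np.allclose(U.conj().T @ U, np.eye(4)))  # True: a valid two-qubit gate
```

Applying one such gate per MPS tensor, with the bond qubit passed along the chain, reproduces the sequential circuit sketched in Picture 1.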
Picture 3: Quantum-inspired image compression using tensor networks. (Left) The image (from the Fashion-MNIST dataset) is divided into patches and encoded into a quantum state using address qubits (circles) and a color qubit (square). These form a matrix product state representation with limited entanglement (controlled by bond dimension χ). (Right) Example of an input image reconstructed under different compression settings: (c) shows the image encoded with χ = 4 using varying patch sizes (1×1 patch = whole image vs. 2×2 patches, etc.), and (d) shows the image as a single patch with increasing bond dimensions χ = 1, 2, 4, 8. Higher χ (more entanglement) or more patches yield better fidelity to the original. (Figure from Dilip et al., 2022.)
Exploiting Turbulent Flow Structures with Tensor Networks
Our second example comes from fluid dynamics, an unexpected but powerful domain for quantum-inspired methods. Gourianov et al. (2022) introduced a tensor network approach to simulate turbulent flows like jets and vortices. Turbulence is notoriously hard to model because energy transfers across a wide range of scales, from large eddies down to tiny swirls. A full direct numerical simulation (DNS) must resolve all these scales, requiring billions of variables at high Reynolds numbers (a dimensionless quantity that describes the ratio of inertial to viscous forces and indicates whether flow is turbulent).
The researchers drew a parallel to quantum many-body systems, whose states have huge dimensionality but often only limited entanglement. Similarly, turbulence contains vast detail but shows limited correlations between distant scales. To test this, the team applied a method based on the Schmidt decomposition to split the flow into coarse and fine components. The result: most of the interscale coupling could be captured with low effective entanglement, suggesting that an MPS could represent the system efficiently.
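Here is a loose NumPy illustration of that test (a deliberate simplification: the paper works with interleaved scale encodings of 2D and 3D fields, not this naive split). We bipartition a 1D velocity field into coarse blocks versus fine structure and count how many Schmidt modes carry essentially all of the energy.

```python
import numpy as np

# A synthetic 1D "velocity field": large-scale motion, a smaller eddy, weak noise.
n = 12                                      # 2**12 grid points
x = np.linspace(0, 2 * np.pi, 2**n, endpoint=False)
u = np.sin(x) + 0.1 * np.sin(8 * x) + 0.01 * np.random.randn(x.size)

# Split the binary digits of the grid index: leading bits = coarse position,
# trailing bits = fine position within a block. The SVD of this matrix is the
# Schmidt decomposition across the coarse/fine cut.
k = 6                                       # 2**k coarse blocks of 2**(n-k) points
m = u.reshape(2**k, 2**(n - k))
s = np.linalg.svd(m, compute_uv=False)
s /= np.linalg.norm(s)

# How many Schmidt modes capture 99.9% of the energy?
cum = np.cumsum(s**2)
chi_eff = int(np.searchsorted(cum, 0.999) + 1)
print(f"effective bond dimension: {chi_eff} of {min(m.shape)}")  # small vs 64
```

A small effective bond dimension across cuts like this is precisely the license to compress the field into an MPS.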
Building on this, they developed an MPS-based simulator for a 2D jet and a 3D Taylor-Green vortex (a canonical test case in fluid dynamics that models the transition from laminar flow to turbulence). The flow field was compressed into an MPS and evolved in time with a fixed bond dimension controlling how much correlation was preserved. They then compared this compressed simulation to a coarse DNS using the same number of variables.
The difference was striking: in 2D, an MPS using only ~1/49th of the DNS data reproduced the key vortex dynamics, while the coarse DNS failed to do so. In 3D, the MPS compressed the system by ~97%, yet still retained essential vortex structures and matched the energy dissipation curve, something the coarse DNS could not [2].
Picture 4: Comparison of different simulation methods for a 2D turbulent jet: the high-resolution DNS captures fine vortices; MPS-based quantum-inspired simulations (χ = 118, 74, 33) closely match it despite strong compression, while standard coarse-grid methods miss key structures. The right panel shows stress evolution over time, where the MPS tracks the DNS accurately, unlike the coarse simulations. (Source: Gourianov et al., Nature Computational Science, 2022.)
The impact is significant: the MPS approach matched the accuracy of full DNS while using far fewer parameters. In one 3D vortex simulation, they reduced the variable count from 1.8 million to just 110,000, achieving a 16-fold compression. This not only speeds up classical computations, but also points toward future potential. The authors also note that such tensor network methods could one day run on quantum hardware, since MPS algorithms can be translated into quantum circuits. Even today, these techniques act as powerful pre-processing tools, reducing complexity and focusing computation on the most physically relevant features, the coherent structures of turbulence.
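Where do savings of this size come from? A back-of-the-envelope parameter count makes them plausible: an MPS over n sites with physical dimension d and bond dimension χ stores roughly n·d·χ² numbers, versus d^n for the raw field. The sizes below are illustrative, not the paper’s actual grids or bond dimensions.

```python
# MPS parameter count n*d*chi**2 versus the d**n entries of the raw field.
d, n = 2, 21                 # a 2**21 ~ 2.1M point grid, one site per length scale
full = d**n
for chi in (16, 32, 64, 128):
    mps = n * d * chi**2
    print(f"chi={chi:4d}: {mps:>9,} parameters ({mps / full:.2%} of {full:,})")
```

Because the count grows only quadratically in χ, modest bond dimensions already translate into compression ratios in the range reported above.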
Mimicking Entanglement on Classical Hardware: How It Helps in Practice
A key strength of quantum-inspired methods is how they simulate entanglement: not physically, but as a structured way to represent complex correlations in data. Tensor networks like MPS allow classical algorithms to capture these patterns efficiently, enabling compression without losing essential structure.
In image tasks, this means global features like edges are preserved across pixels. In turbulence modeling, interactions between large and small scales remain intact, something standard methods often miss. These entanglement-like structures are what make quantum-inspired approaches so powerful, even without any quantum hardware.
It’s important to note that quantum-inspired doesn’t mean quantum-equivalent. These methods run on classical CPUs and GPUs. They won’t beat a true quantum algorithm in asymptotic speedup (we don’t get exponential gains purely classically), but they often hit a sweet spot of practical efficiency. By borrowing the language of qubits and entanglement, we can redesign classical algorithms to handle information more like a quantum circuit does. This can lead to polynomial speedups or simply huge constant-factor savings (by compressing data size). And critically, it makes certain computations feasible today that would otherwise be out of reach until quantum hardware matures.
Looking Ahead: Bridging Classical Performance and Quantum Possibilities
Quantum-inspired techniques are a fascinating bridge between classical and quantum computing. On one hand, they offer immediate benefits on classical machines, whether it’s faster optimization, compressed machine learning models, or more efficient simulations. On the other hand, they prepare us for the quantum future: algorithms like the ones above can often be reformulated for quantum computers, potentially unlocking even greater performance once hardware catches up. For example, an MPS-based classifier could be the blueprint for a variational quantum circuit classifier. In the turbulence work, the compressed simulation could one day run on a quantum processor with large, entangled states, possibly simulating fluid dynamics in new regimes beyond classical capacity.
In summary, quantum-inspired methods bring the structural advantages of quantum computing into the classical realm. They remind us that many hard problems have hidden structure, like entanglement patterns, that we can exploit. By intuitively combining physics insights with computer science, we get the best of both worlds: algorithms that are inspired by quantum mechanics but don’t wait for quantum hardware.
As these approaches gain traction in fields from finance to materials science, we’re likely to see a growing toolkit of tensor networks, quantum annealing analogs and other hybrids entering the standard repertoire of data scientists and engineers. They won’t replace true quantum algorithms (when we have full-scale quantum computers), but they are extremely valuable right now for any domain where high-dimensional correlations or optimization landscapes are the bottleneck.
I hope this gave you a clearer idea of what “quantum-inspired” really means in practice. It’s a fascinating mix of ideas from physics and computer science, and we’re only at the beginning. If you’re working with these methods or just curious about them (in ML, physics, or any other field), feel free to reach out. Whether you’re into quantum or classical tech, there’s plenty to explore and talk about.