Variational Quantum Circuits: The Engine of QML

Quantum ML Series 2 | Post 04 of 11 | #QFD2

This is Post 4 of the Quantum Machine Learning for Developers series. If you are brand new to quantum computing, start with Series 1 first: the link is at the bottom of this article.


The Part I Kept Skipping Over

When I first started reading about quantum machine learning, I kept encountering the term variational quantum circuit. Every paper, every tutorial, every explainer mentioned it. And every time I read a definition, I nodded along and moved on without really understanding it.

It took me a while to admit that I was skipping over the most important concept in all of QML.

Variational quantum circuits are not a component of quantum machine learning. They are the mechanism by which quantum machine learning actually works.

Everything else, the encoding layers, the measurement, the classical optimizer, is scaffolding around the variational quantum circuit. If you do not understand what a variational quantum circuit is and why it is designed the way it is, you do not really understand QML.

This post is my attempt to explain it the way I wish someone had explained it to me. From a developer's perspective. Honestly, including the parts that are still theoretical and the parts that current hardware genuinely struggles with.


What Variational Actually Means

The word variational comes from the calculus of variations, a branch of mathematics concerned with optimizing functionals rather than functions. In the context of quantum circuits, variational simply means parameterized and optimizable.

A variational quantum circuit is a quantum circuit where some or all of the gates have tunable parameters, specifically rotation angles, that can be adjusted during a training or optimization process.

Think of it this way. A fixed quantum circuit always does the same thing. Given the same input, it always produces the same output. It has no capacity to learn.

A variational quantum circuit is different. It has free parameters, the rotation angles, that you can tune. By changing those parameters you change what the circuit does. Training a quantum machine learning model means finding the parameter values that make the circuit produce the outputs you want.

As a developer, the analogy that helped me most is this. A variational quantum circuit is to QML what a neural network architecture is to classical ML. The circuit structure defines the hypothesis space. The parameters define where in that space you currently are. Training moves you through that space toward better performance.
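To make "parameterized and optimizable" concrete, here is a minimal sketch using PennyLane, the library we set up in Post 5. It is an illustration rather than the classifier we will build: one qubit, one tunable rotation angle. Changing the parameter changes the output, and that is the entire idea.

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(theta):
    qml.RY(theta, wires=0)            # parameterized rotation: the "variational" part
    return qml.expval(qml.PauliZ(0))  # expectation value in [-1, 1]

print(circuit(0.0))   # ~  1.0: qubit left near |0>
print(circuit(3.14))  # ~ -1.0: qubit rotated near |1>
```

Training means searching over theta (and, in real circuits, many more angles) until the outputs match what you want.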


The Anatomy of a Variational Quantum Circuit

A variational quantum circuit has three distinct parts. Understanding each part separately before combining them made this click for me.

The initial state preparation. Every quantum circuit starts with qubits in a known state, typically all set to the zero state. This is the quantum equivalent of initializing your weights before training. The starting state is fixed and does not change during training.

The ansatz. This is the heart of the circuit and the most consequential design decision in the entire QML system. The ansatz is the specific arrangement of parameterized gates that defines the structure of the variational circuit. The word 'ansatz' comes from German and roughly means educated guess or assumed form. In quantum ML it means the assumed structure of the solution.

The ansatz determines several things simultaneously. It determines how many parameters the model has. It determines what kinds of correlations between qubits the model can learn. It determines how deep the circuit is. And it determines how trainable the circuit is, because not all ansatz designs are equally easy to optimize.

Choosing an ansatz is not a purely technical decision. It is a modeling decision that encodes assumptions about the problem structure. A poorly chosen ansatz will fail to solve the problem regardless of how well you train it. This is directly analogous to choosing the wrong neural network architecture for a problem. A fully connected network for image data will struggle in ways that a convolutional network will not, not because of training but because of structure.

The measurement scheme. After the ansatz runs, you measure some or all of the qubits. The specific measurement scheme determines what information you extract from the quantum state. Different measurement choices extract different kinds of information. For a classification problem you typically measure the expectation value of a specific observable and use that as your model output.
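As a rough sketch of how the three parts line up in code, here is a PennyLane skeleton with each part labeled. The placeholder rotations and the choice of observable (PauliZ on the first qubit) are illustrative assumptions, not the only option.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev)
def model(params):
    # (1) initial state preparation: all qubits start in |0> by default
    # (2) ansatz: placeholder rotations standing in for the real parameterized structure
    for wire in range(4):
        qml.RY(params[wire], wires=wire)
    # (3) measurement scheme: expectation of PauliZ on qubit 0, a value in [-1, 1]
    return qml.expval(qml.PauliZ(0))

params = np.array([0.1, 0.2, 0.3, 0.4], requires_grad=True)
output = model(params)
predicted_class = 0 if output >= 0 else 1  # simple sign-based decision rule for classification
```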


What an Ansatz Actually Looks Like

Let me make this concrete. Here is what a simple ansatz structure looks like at the level a developer needs to understand.

Imagine you have four qubits, four horizontal wire lines running left to right. The ansatz consists of a repeating pattern of two types of operations applied in sequence.

The first type is a layer of single qubit rotation gates. You place one rotation gate on each qubit wire. Each rotation gate has a parameter theta that you can tune. These gates rotate the individual qubit states independently.

The second type is a layer of entangling gates. Typically these are CNOT gates or controlled Z gates applied between neighboring qubits. These gates are not parameterized. They are fixed. Their job is to create entanglement between qubits, which is what allows the circuit to learn correlations between different parts of the input data.

This two step pattern, rotation layer followed by entangling layer, is called a circuit layer or a block. You repeat this block multiple times to make the circuit deeper. Each repetition adds more parameters and more expressive power.

The number of times you repeat this block is a hyperparameter of the model, directly analogous to the number of layers in a classical neural network.

A complete ansatz for four qubits with two layers would look like this. Initialize four qubits to zero state. Apply rotation gates to all four qubits with parameters theta 1 through theta 4. Apply CNOT gates between qubit 1 and 2, qubit 2 and 3, qubit 3 and 4. Apply rotation gates again with parameters theta 5 through theta 8. Apply CNOT gates again in the same pattern. Measure the qubits.

That is eight learnable parameters total. Simple by classical ML standards. But scale this same pattern up in qubit count and depth, and you quickly run into the limits of what current NISQ hardware can run reliably.
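Here is roughly how that four-qubit, two-layer, eight-parameter ansatz could be written in PennyLane. Treat it as a sketch: the post only says "rotation gates", so the choice of RY rotations here is an assumption.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
n_layers = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def ansatz(thetas):
    # thetas has shape (n_layers, n_qubits): 2 x 4 = 8 learnable parameters
    for layer in range(n_layers):
        # rotation layer: one parameterized gate per qubit
        for wire in range(n_qubits):
            qml.RY(thetas[layer, wire], wires=wire)
        # entangling layer: fixed CNOTs between neighboring qubits (1-2, 2-3, 3-4)
        for wire in range(n_qubits - 1):
            qml.CNOT(wires=[wire, wire + 1])
    # measurement: PauliZ expectation value on each qubit
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

thetas = np.random.uniform(0, 2 * np.pi, size=(n_layers, n_qubits), requires_grad=True)
print(ansatz(thetas))
```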


The Expressibility and Trainability Tradeoff

This is where things get genuinely difficult and where I want to be careful not to oversimplify.

There are two properties of an ansatz that you want to maximize simultaneously, and they pull against each other. They are called expressibility and trainability.

Expressibility refers to how much of the space of possible quantum states the ansatz can reach. A highly expressible ansatz can represent a wide variety of functions. A low expressibility ansatz is limited to a narrower range of solutions.

Trainability refers to how easy it is to optimize the ansatz parameters using gradient descent. A highly trainable ansatz has gradients that are well-behaved: they are large enough to drive parameter updates in useful directions.

The problem is this. As you increase expressibility by making the circuit deeper and adding more parameters, the gradients tend to become exponentially smaller. This is the barren plateau problem I mentioned in Post 3. In a deep highly expressive quantum circuit, the gradient landscape becomes essentially flat everywhere. The optimizer has no direction to move in. Training stalls.

This is not a solved problem. Researchers are actively working on ansatz designs, initialization strategies and optimization methods that reduce the barren plateau problem. Some progress has been made with problem specific ansatz designs that encode domain knowledge to keep the circuit in a trainable region. But for general purpose deep quantum circuits, barren plateaus are a real and serious constraint today.
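If you want to see what "gradients become small" looks like in practice, the pattern below is one way to inspect gradient magnitudes as you scale a circuit up. A four-qubit toy example on a simulator will not show the exponential decay itself; this is just the measurement harness, using PennyLane's built-in StronglyEntanglingLayers template as a stand-in for a deep hardware-efficient ansatz.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
n_layers = 6  # increase this (and n_qubits) to watch average gradients shrink
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def deep_circuit(weights):
    # generic layered ansatz: rotations plus entanglers, repeated n_layers times
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = np.random.uniform(0, 2 * np.pi, size=shape, requires_grad=True)

grads = qml.grad(deep_circuit)(weights)
print("mean |gradient|:", np.mean(np.abs(grads)))
```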

I want to be direct here. The theoretical expressibility of deep variational circuits is well established mathematically. But running deep circuits on NISQ hardware introduces noise that compounds with circuit depth. And the barren plateau problem means that even if you could run deep circuits cleanly, training them may be practically infeasible. These are operational constraints, not just theoretical ones.


Hardware Realism: What NISQ Actually Limits

I have been referencing NISQ constraints throughout this post. Let me be specific about what those constraints mean for variational quantum circuits in practice.

NISQ stands for Noisy Intermediate Scale Quantum. The noisy part is the critical word here.

Every gate operation on current quantum hardware introduces error. Not a small rounding error. A meaningful probability of producing the wrong quantum state. Current two qubit gate fidelities on the best available hardware are typically in the range of 99 to 99.9 percent. That sounds high. But in a circuit with 50 two qubit gates, even at 99.9 percent fidelity per gate, the probability that all 50 gates executed correctly is around 95 percent. Add more gates and the error accumulates rapidly.
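The arithmetic is worth checking yourself. A two-line sketch:

```python
# 50 two-qubit gates at 99.9% fidelity each
print(0.999 ** 50)  # ~0.951: roughly a 5% chance that at least one gate misfired
print(0.99 ** 50)   # ~0.605 at 99% fidelity: errors compound fast
```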

This means circuit depth is genuinely limited on current hardware. Not by theoretical constraints but by physical noise. Deep ansatz designs that look reasonable on paper become unreliable when run on real devices.

The intermediate scale part means current devices have between roughly 50 and 1000 qubits. That sounds like a lot. But because of noise, only a fraction of those qubits can be used in a connected reliable circuit at any one time. Effective usable qubits for a reliable computation are significantly fewer than the headline qubit count.

This is the honest picture of where variational quantum circuits run today. The theory is ahead of the hardware. The gap is real. It is narrowing as hardware improves, but it has not closed.


Why This Still Matters for What We Are Building

Given everything above, you might reasonably ask: if variational quantum circuits are this constrained by current hardware, why are we building a quantum image classifier with them?

The answer has not changed from what I said in Post 2 and Post 3. We are building this as a learning exercise with full transparency about the limitations. The skills you develop designing and implementing a variational quantum circuit on a small problem transfer directly to the applications where quantum ML will eventually matter: quantum chemistry, materials discovery, quantum optimization.

The architecture patterns are the same regardless of problem scale. The difference is the domain and the data, not the fundamental circuit design approach.

In our quantum image classifier, we will use a shallow ansatz with a small number of qubits and layers. This is both a practical necessity given hardware constraints and an honest reflection of what is actually runnable today. We will not pretend the circuit is deeper or more powerful than it is. And we will benchmark its actual performance against a classical CNN with full transparency.

That benchmark is coming in Post 8. I am genuinely curious how it will go.


What Comes Next

Post 5 is where theory meets code. We set up the development environment, install PennyLane and its dependencies, and write our first quantum circuit in actual code. If you have been waiting for the hands-on part, Post 5 is where it begins.

See you soon!


One More Thing Before You Go

I want to be transparent about something outside the series.

I am a developer with six years of coding experience currently open to new opportunities. My background is in full stack development and I am actively learning and building in quantum computing through this series.

If you are working on something interesting in either space, whether that is full stack engineering, developer tooling, or anything touching quantum computing or quantum ML, I would genuinely love to connect and learn more about what you are building.

No hard sell. Just an honest note from someone who is building in public and looking for the right team to build with.

You can reach me through my LinkedIn profile or drop a comment below.


New to Quantum Computing?

This series assumes basic quantum computing knowledge. If you are brand new, start with Series 1 -> Quantum Computing for Developers:

Series 1 Start Here: linkedin.com/feed/update/urn:li:activity:7436025619882307584

This is Post 4 of the Quantum ML for Developers series. We are building a Quantum Image Classifier from scratch, post by post, and releasing everything on GitHub. If you are new here, Posts 0, 1, 2, and 3 are already live.


Amit | Developer | Learning Quantum ML in Public
Series 2 Index: Coming in the first comment
GitHub Project Repository: Coming in Post 5


Next: Post 05 - Theory Meets Code: PennyLane | Dropping soon | Follow along so you do not miss it!
