On Randomised Compiling
Introduction
Quantum Computing has potential!
Why else would immense amounts of time and money be spent on developing an entirely new paradigm for computing? Quantum techniques have the power to solve problems that might take regular, “classical” computers a lifetime. But there are gargantuan challenges in the way of making quantum computing as useful a reality as some expect, and at the top of the list sits a single word:
Error
What kinds of errors? For instance, quantum computers are sensitive to light, and interactions with the photons in a light beam could destroy any computation being conducted. Or you may wish to perform an operation that flips a bit from 0 to 1, but the operation may be imperfectly implemented, so you flip to 0.97 instead of 1. These errors need to be corrected, and quantum error correction is being actively researched by major players in the field all over the world.
Randomised Compiling is an error-correction protocol that targets a type of error known as coherent error. This error type is particularly harmful to computation, and in essence, the protocol works by converting coherent errors into a more benign type that we know how to deal with efficaciously. The goal of this article is to illustrate how this protocol works in a way that is approachable even to people unfamiliar with the basics of quantum computing. I hope the reader will come to appreciate how such a simple idea can prove so immensely useful. But first, we need to know a few things about errors.
All ‘Bout the Errors
Use your imagination!
You’re in the stands and are about to witness Usain Bolt run the 100 meter. You have a stopwatch, and you prepare yourself as you hear the word “set”. Bolt raises his torso, ready to explode. At the sound of the horn, he takes off like … well … a bolt. His yellow speed suit (a runner’s uniform) complements the use of the simile. You start your stopwatch as well, and you record his time: an astounding 9.92 seconds!
Now, rewind the tape: have Bolt start all over, and record his time again. Does the stopwatch read 9.92 seconds this time? It probably doesn’t, right? Why?
We know from intuition and experience that just because the same event occurs, the measurements pertaining to it won't necessarily yield the same values. In this particular scenario, several things could have altered the recorded time. Perhaps you started the stopwatch ever so slightly after the horn sounded for one measurement, leading to a seemingly faster time. Or perhaps Bolt felt a little less energetic in one run than in another, leading to a slower time. Physicists give this fluctuation in our measured quantity an apt name: "random error". There are three key points here:
(i) It is random because it doesn’t occur in a particular direction:
For instance, it isn’t the case that your reaction time to the starting horn is always delayed: you might start the stopwatch before the signal as well, and that would lead to a slower time as opposed to a faster time. Or if you’re cooking and you need 2.5 cups of rice, you might end up taking 2.6 cups or 2.4 cups of rice. Who knows? If you got more than 2.5 cups, maybe you stopped pouring the rice into the measuring cup a little later than you should have. If you got less than 2.5 cups, you might have stopped pouring the rice too soon. The error can either be a positive contribution or a negative contribution, and this is random. We have very little control over this type of error, and in most cases, it is bound to occur.
(ii) Systematic Errors, on the other hand, occur in one particular direction:
Say you measure your body temperature, but the thermometer hasn't been calibrated well, and at a temperature of 0°C it actually reads 1°C. This means that every measurement you take with that thermometer will be 1°C too high. Or perhaps you wish to measure your weight: if you have a faulty weighing scale that reads −1 kilogram when there is no weight on it, every measurement you make with the scale will be smaller by 1 kilogram, i.e. if you read 75 kg, you actually weigh 76 kg (I realise that reading this probably worries you; try not to worry about this being a problem with your scale. Also, acknowledge that you probably found this more worrisome than the idea of your thermometer being faulty. I do too). This type of error always occurs in one direction, making it predictable: the opposite of the random errors above (the sketch after this list makes the contrast concrete).
(iii) Random or systematic, the word “error” seems to suggest that there is some true, error-free value:
It can be tricky to find out if you have a systematic error, and this true value might therefore evade you. For instance, one could use the faulty thermometer, take a measurement, and never know that they are actually measuring a lower temperature. Even if you repeat the measurement, you will still be off by 1°C, and won’t be able to detect the error itself. On the other hand, as in the Usain Bolt instance, you’ll know your measurements have random errors in them because repeating them won’t yield the same value. There is a traditional technique used extensively to effectively “cancel out” the effect of these random errors, and arrive at a “true” value. This method is simple and an important aspect of the Randomised Compiling procedure.
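To make the contrast concrete, here is a tiny Python sketch. The numbers are invented purely for illustration: a random error scatters readings on both sides of the true value, while a systematic error shifts every reading by the same amount, so repetition alone cannot reveal it.

    import random

    TRUE_TEMPERATURE = 37.0  # the "true", error-free value we are after

    def random_error_reading():
        # (i) Random error: each reading lands a little above or below
        # the true value, with no preferred direction.
        return TRUE_TEMPERATURE + random.uniform(-0.5, 0.5)

    def systematic_error_reading():
        # (ii) Systematic error: a miscalibrated thermometer that always
        # reads exactly one degree too high, every single time.
        return TRUE_TEMPERATURE + 1.0

    print([round(random_error_reading(), 1) for _ in range(5)])
    # e.g. [36.8, 37.3, 36.6, 37.1, 37.4] -- scattered around 37.0
    print([systematic_error_reading() for _ in range(5)])
    # [38.0, 38.0, 38.0, 38.0, 38.0] -- repeating the measurement reveals nothing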
Averaging out the Random Error
Say you don’t have anything better to do, and you replay this Usain Bolt clip ten times. After each instance, the stopwatch reads a certain time. So you throw all the data points onto a set of axes as shown below:
Each blue point on the graph has an x and a y coordinate. The y-coordinate of a point tells you the time that was recorded, and the x-coordinate tells you which replay you recorded that time in. Now observe the graph below, which contains exactly the same data points, but also has a horizontal line drawn roughly through the middle of the data. This line represents the average of the ten times we recorded. Doesn't your gut tell you that we can safely regard this as, generally, the amount of time it takes Bolt to run the 100 meter? This feels like a value we can quote: something close to a true value. Why?
Well, we know that when it comes to random error, each measurement is essentially the true value plus or minus some error. Think of the example with the cups of rice: if you pour 2.6 cups, you poured 2.5 cups + 0.1 cups. Here 2.5 is the true value, and 0.1 is the error. Sometimes the measurements come out smaller, and sometimes larger. This means that our repeated measurements surround some true value; in other words, the true value lies somewhere in between them. The average is simply a method for finding the middle of some data! That's why this technique works. Now, the average isn't necessarily going to be the true value. We could fill a measuring cup with rice ten times, take the average of the amounts, and it wouldn't be exactly 2.5 even though that was the value we were going for. With only ten data points, the middle of the data might be 2.53 or 2.46. But the value would be closer to the truth, because the random error would be reduced, or "averaged out".
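Here is the same idea as a small Python sketch (the true time of 9.9 seconds and the size of the error are, again, invented for illustration):

    import random

    TRUE_TIME = 9.9  # an invented "true" time for the 100 meter

    def stopwatch_reading():
        # One measurement: the true value plus a random error that is
        # equally likely to be positive or negative.
        return TRUE_TIME + random.uniform(-0.15, 0.15)

    readings = [stopwatch_reading() for _ in range(10)]
    average = sum(readings) / len(readings)

    print([round(r, 2) for r in readings])  # ten times scattered around 9.9
    print(round(average, 2))  # typically closer to 9.9 than most single readings

Run it a few times: the individual readings jump around, but the average stays stubbornly close to 9.9.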
A quick summary before we move on!
(i) Random Errors are random because they can occur in either direction: 2.5 cups of rice might actually be 2.4 cups or 2.6 cups.
(ii) Systematic Errors occur in only one direction: the thermometer actually yields a temperature that is a degree higher than the true value.
(iii) The word “error” begs the question: error in what? It seems to suggest there is some true, error-free, value. In the case of systematic errors, it can be tricky to know what this true value is because you may not realise you have a systematic error in your measurements. Random errors are bound to occur, and because they take no particular direction, the various error-prone measurements you make will surround a true value. To find a value close to the true value, you can take the average of your measurements.
So … Randomised Compiling
Armed with these notions surrounding the vast topic of errors, we can understand what the Randomised Compiling technique is essentially doing. Recall that the particular type of error that this technique targets is known as a coherent error. At its essence, this is simply a systematic error: you perform some sort of computation and arrive at some answer that is off by the same amount no matter how many times you repeat it. We ask the question:
I know how to deal with random error. Why can’t I convert my systematic errors into random errors?
Well, the answer lies in the fundamental nature of systematic errors: the same error occurs every time the computation is repeated. Our averaging procedure, on the other hand, relies on the randomness of the error; if the error is the same each time, averaging isn't going to work. This is analogous to the temperature being off by 1°C consistently when using the miscalibrated thermometer: it isn't going to be off by 2°C or −1°C the next time.
But what if it was? What if you repeated the temperature measurement multiple times and obtained a different value for each instance? Then, you have essentially converted your systematic error into a random error. You can then find the “middle” value by taking the average of the erroneous values, and obtain a solid result with reduced error that you can report with some confidence. Enter Randomised Compiling.
Randomised Compiling takes the question above and says, "Why not?" Given a computation that the user wishes to perform, the protocol creates several different ways of performing that same computation! To be quite frank, it is not much different from wanting to perform 2 + 3, but doing 2 + 3, 1 + 4, and 0 + 5 instead! That's how simple it is. Each way of performing the computation aims for the same true value, but introduces a different coherent (systematic) error, because each method is off in a different way. Following the simple addition example, imagine that a computer actually registers 0.97 when the user inputs 1, and 3.03 when the user enters 3. In that case, performing 1 + 4 would yield 4.97, and 2 + 3 would yield 5.03. A different error has been achieved (ironic though it may seem, this is an achievement). Or perhaps you prefer the thermometer example: imagine using two thermometers to measure the temperature of the same body. One could be off by 1°C consistently, and the other by −2°C. If the true temperature is 50°C, the thermometers would read 51°C and 48°C. Each of these errors is systematic on its own; by trying to arrive at the same result in different ways, we have made the error random across the set.
Taking the average of the two temperatures yields 49.5°C, a much better value than either 48°C or 51°C. Randomised Compiling follows suit with quantum computations, averaging over several variants of what is essentially the same computation to achieve a much better result. It is the epitome of error correction: we accept that errors aren't just possible, but inevitable. Results are error-prone even in fields far removed from quantum computing (how many examples have we seen at this point?). We must acknowledge this and develop techniques to fight error; that is what will allow quantum computing to become the reality that many dream of. What I truly enjoy about this technique is its blend of simplicity and efficaciousness: it works phenomenally well, and its workings can be understood in the remarkably easy way illustrated in this article. If you're feeling confident now, give the original journal paper a read.
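And if you'd like to see the whole idea in miniature, here is a toy Python sketch of the addition example above. To be clear, this is not how a quantum compiler actually works; the offsets are invented, and the point is only the logic of trading a systematic error for a random one:

    # Our imaginary faulty computer: it registers 1 as 0.97 and 3 as 3.03;
    # every other number is stored correctly. (Offsets invented for illustration.)
    OFFSETS = {1: -0.03, 3: +0.03}

    def register(x):
        # What the faulty computer actually stores when given x.
        return x + OFFSETS.get(x, 0.0)

    def faulty_add(a, b):
        return register(a) + register(b)

    # Three different "compilations" of the same computation, 2 + 3 = 5.
    # Each aims for the same true value but picks up a different systematic
    # error -- which, across the set, behaves like a random error.
    decompositions = [(2, 3), (1, 4), (0, 5)]
    results = [faulty_add(a, b) for a, b in decompositions]

    print([round(r, 2) for r in results])         # [5.03, 4.97, 5.0]
    print(round(sum(results) / len(results), 2))  # 5.0 -- the errors average out

Three compilations, three different errors, one average that lands on the truth.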
It’s an exciting time to be involved in Quantum Computing, folks! The world lies in wait for this technology to unleash its true potential. Please feel free to shoot me your thoughts, questions, comments, and critiques at abhayagarwal@berkeley.edu. Together, we can participate in the quantum revolution that is taking the world by storm!