122. Overview of the Harrow-Hassidim-Lloyd Algorithm

Let’s continue with another very important quantum algorithm, the Harrow-Hassidim-Lloyd (HHL) algorithm, which is also called quantum matrix inversion. This protocol is used as a fundamental building block in many quantum machine learning algorithms, so it is really important that we understand the logic behind it, but it is a tricky algorithm. So first, let’s go through a quick overview of how it works. The problem that we want to solve is a system of linear equations: assuming that the coefficient matrix A is invertible, we want to calculate the solution x as the inverse of A times b, and we would like to find a quantum algorithm for this. Classically, this takes polynomial time, roughly cubic in the size of the system or maybe a bit better. This algorithm gives you an exponentially faster quantum protocol for doing something very similar.

First, we have to consider state preparation: we have to prepare the b vector as a quantum state in amplitude encoding. You might need a fairly involved circuit to prepare it, which increases your circuit depth. The matrix A corresponds to Hamiltonian encoding, since we can think of it as a Hamiltonian. If your A is not Hermitian, you first have to make it Hermitian by applying a simple transformation; then you can encode it as a Hamiltonian and eventually as a unitary.

The next stage is quantum phase estimation, because we are going to estimate the eigenvalues of the A operator. By estimating the eigenvalues, we get a very simple description of the structure of A that allows us to easily invert the eigenvalues. After quantum phase estimation, we have, roughly, a state where the eigenvalue estimates are written in a register. What we do next is introduce one more ancilla (not a full register, just one additional qubit) and apply a rotation on it controlled by the eigenvalue estimate. For the time being, let’s forget about what the rest of the state looks like, because the important thing is this: by this conditional rotation on the ancilla, you encode some constant times the inverse of the eigenvalue in the probability amplitude of the one state of the ancilla qubit. That is what we are going to use.

But before we can use it, we first have to uncompute the eigenvalue register, which means that everything we did for the quantum phase estimation has to be reversed: a full quantum phase estimation run backwards in the circuit. The reason is that if you start doing any kind of measurement at this point, the registers are entangled, and if you remember the very beginning of this course, if you just trace out or get rid of a part of an entangled system, you end up with some kind of mixed state. We want to avoid that, so we uncompute everything in these registers. We are left with essentially b itself, expanded in the eigenbasis of A, the eigenvalue register back in the vacuum state, and the ancilla untouched by the inverse operations.

Once we do this, we can confidently do rejection sampling, which means that we measure the ancilla. If we get 0, we discard everything and restart the calculation. If we get 1, then whatever we read out of the output register comes with probabilities weighted by the inverse eigenvalues, and that is what we use to estimate observables, or whatever measurable quantity we need, of the solution of the linear system of equations.

The resource requirements of this protocol are steep. If you watched Roger Melko’s guest lecture, he talked about Shor’s algorithm, which has a very similar structure to quantum phase estimation, so it gives us a lower bound on how much resource we need to run this kind of protocol. For 2,048-bit RSA encryption, we would need about 4,000 logical qubits, which puts us in the range of millions of physical qubits once we apply error correction. The gate-model quantum computers we have today have fewer than 100 qubits, so we are very, very far from being able to implement this algorithm in a practical manner. But the algorithm is definitely very important as we move forward and build better and better quantum computers.
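As a point of reference for what the protocol actually computes, here is a minimal classical sketch (plain NumPy, nothing quantum, and not part of the lecture) of the normalized solution state for the 2x2 example used later in this notebook; the matrix and vector are the same ones we will feed to the quantum circuit.

# Classical baseline (illustrative only): the state HHL estimates is the
# normalized vector x proportional to A^-1 b, from which we can only read out
# expectation values, not the full solution.
import numpy as np

A = 0.5 * np.array([[3, 1],
                    [1, 3]])
b = np.array([1, 0])
x = np.linalg.solve(A, b)            # classical solve, roughly cubic scaling
x_state = x / np.linalg.norm(x)      # what the quantum register would encode
print(x_state)                       # ~ [ 0.949, -0.316]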

• This is a deep circuit that requires a large, fault-tolerant quantum computer. Can’t we just do linear regression with the objective \(\min\|Ax-b\|_{2}^{2}\) using quantum optimization? For instance, you could use a quantum annealer or QAOA.

– Yes, although minimizing in the \(l_{2}\) -norm might not solve the original linear equality problem.

• Why do you think we do the conditional rotation after the phase estimation?

– This is how we move the result of the calculation to the ancilla, so we can sample it later ignoring the other registers.

• Shor’s algorithm is just quantum phase estimation in disguise, so it is a good baseline for resource estimation. It is a very different setting from the shallow-circuit algorithms we have seen in Module 3. What are some of the key resource requirements to run this algorithm?

– Coherence time must be long, since we need time to implement millions of gates in a sequence.

– We need millions of physical qubits for error correction.

123. Introduction

The HHL algorithm underlies many quantum machine learning protocols, but it is a highly nontrivial algorithm with lots of conditions. In this notebook, we implement the algorithm to gain a better understanding of how it works and when it works efficiently. The notebook is derived from the computational appendix of the paper Bayesian Deep Learning on a Quantum Computer. We restrict our attention to inverting a \(2\times 2\) matrix, following Pan et al.’s implementation [1] of the algorithm.

import numpy as np
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import execute
from qiskit import BasicAer as Aer
π = np.pi

124. Setting up the problem

We will solve the equation \(Ax=b\) with \(A = \frac{1}{2}\begin{bmatrix}3 & 1 \\1 & 3 \\ \end{bmatrix}\) and \(b =\begin{bmatrix} 1 \\ 0 \\ \end{bmatrix}\). We will encode the \(A\) matrix as a Hamiltonian and \(b\) in a register. With the ancillas, we will need a total of four qubits and one classical bit for post-selection. We add two extra qubits and one extra classical bit to create a swap test that compares our result to the ideal state.

q = QuantumRegister(6)
c = ClassicalRegister(2)
hhl = QuantumCircuit(q, c)
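As a quick classical sanity check (not part of the circuit), we can confirm that \(A\) is Hermitian and that its eigenvalues are \(1\) and \(2\); the fact that they are small powers of two is what will later let us do exact phase estimation with only two qubits.

# Classical sanity check: A is Hermitian with eigenvalues 1 and 2.
A = 0.5 * np.array([[3, 1],
                    [1, 3]])
print(np.allclose(A, A.conj().T))   # True
print(np.linalg.eigvalsh(A))        # [1. 2.]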

125. Quantum Matrix Inversion

Let’s take a second, more detailed look at how the quantum matrix inversion protocol works. We have to start with state preparation: first and foremost, we have to prepare the b state from the b vector in the original equation that we want to solve. This can be a deep circuit, we don’t know, but in principle it can be done. Then, to encode A, we use Hamiltonian simulation. This is one of the early results of quantum computing: yes, it can be done on a quantum computer, but there are terms and conditions that apply. In the original formulation of quantum matrix inversion, the matrix has to be sparse and well-conditioned. Sparse means that it does not have many non-zero entries. Well-conditioned means that it does not have vanishing eigenvalues: the ratio of the largest and the smallest eigenvalue is well behaved, it is not extreme. And if the matrix is not Hermitian, you can easily embed it in a matrix that is twice its size in each direction, where one off-diagonal block is A and the other is the complex conjugate transpose of A. This enlarged matrix is Hermitian by construction, so, in principle, it can be used for Hamiltonian simulation.

The second step is quantum phase estimation, which we explained in much detail. Inside it there is a quantum Fourier transformation, and then you have the controlled unitary applications; this is where the simulation of A as a Hamiltonian comes in.

The next step is the conditional rotation, and this is perhaps the most difficult step to understand. We add one ancilla qubit. After the inverse quantum Fourier transformation, the main register, where the b state was, still holds b, expanded in the eigenbasis. In the eigenvalue register we now have an estimate of the eigenvalue, to some finite precision n, which is decided by how many qubits you want to use to represent the estimated phase. Given this state, we can apply some bit operations to actually get the inverse of the eigenvalue. Once we have that, we add the ancilla, which starts in the zero state, and we apply a conditional rotation on it to create a superposition in the ancilla qubit. The rest of the state stays the same, but the conditional rotation changes the ancilla. The reason we do this is that the inverse of lambda now appears in the amplitude of the one state of the ancilla, and once we have that, we can start estimating the inverse of the operator with rejection sampling. This is also a kind of amplitude encoding: we encoded the information we are interested in in the amplitude of the excited state of the ancilla qubit.

But before we can start pulling out this probability amplitude and estimating it, we have to uncompute everything we have done on all the registers except the ancilla. The reason is that if you just start rejection sampling here, all of these qubits are still entangled, so you would end up with some strange mixed state if you just trace out parts of the system. To avoid that, we have to disentangle the rest of the registers, and the way to do it is to uncompute, to reverse every single step we have done so far except the conditional rotation, which created our desired state. Once we do this, our circuit is immediately twice as deep as it was.

Once we have done that, we can do the rejection sampling. We measure the ancilla and keep the run only when we get one; the probability amplitude of that outcome carries the inverse of the eigenvalue. With that, we can start estimating observables of the solution of this linear equation. That is the essence of it: if you accept that this algorithm has a very large resource requirement, and you meet all the conditions that are required to run this protocol, then it can give you an exponentially faster way of calculating, or rather estimating, the solution of the linear system.
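To make the two preconditions above concrete, here is a short NumPy sketch (illustrative only, with a made-up matrix M) of the Hermitian embedding and of the condition number that "well-conditioned" refers to.

M = np.array([[1.0, 2.0],
              [0.0, 1.0]])                    # hypothetical non-Hermitian matrix

# Hermitian embedding: put M and its conjugate transpose in the off-diagonal
# blocks of a matrix twice the size; the result is Hermitian by construction.
H = np.block([[np.zeros_like(M), M],
              [M.conj().T, np.zeros_like(M)]])
print(np.allclose(H, H.conj().T))             # True

# Condition number: ratio of the largest to the smallest singular value.
# "Well-conditioned" means this stays modest; the cost of HHL grows with it.
print(np.linalg.cond(M))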

• Imagine that you have a matrix A that is not Hermitian. What does it imply for the resource requirements?

– The matrix is embedded in a Hermitian matrix of twice the dimension, so the input register needs roughly one extra qubit.

• Ignoring warnings about the required qubit quality, you implement this algorithm on a current quantum computer, for instance, to invert a \(4\times4\) Hermitian matrix \(A\). You observe a very high success rate in rejection sampling. What does this mean?

– The poor coherence time, noise, and two-qubit gate fidelities make it unlikely that the rejection sampling yields the correct state, so a suspiciously high success rate most likely reflects noise rather than a successful inversion.

• What does n mean in the circuit?

– The precision of the phase estimation.

The vector \(b\) can be encoded as \(\left|b\right\rangle = \sum_{i=0}^N b_i\left|i\right\rangle = \left|0\right\rangle\), so no explicit state preparation circuit is needed for this case (this will not be true in general).
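For a general \(b\), one would have to amplitude-encode the normalized vector explicitly; a minimal sketch with Qiskit's initialize is shown below (illustrative only, it is not needed for our \(\left|b\right\rangle=\left|0\right\rangle\) case, and such state preparation can add considerable circuit depth).

# Illustrative only: amplitude encoding of a generic two-dimensional b.
b_general = np.array([3.0, 1.0])
b_general = b_general / np.linalg.norm(b_general)
prep = QuantumCircuit(1)
prep.initialize(b_general, [0])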

126. Quantum phase estimation

The next step is to encode the eigenvalues of the matrix \(A\) in an additional register. This is done via quantum phase estimation of the evolution described by the Hamiltonian \(A\) during some time \(t_0\), \(\exp(i A t_0)\). The protocol has three steps.

First we prepare the ancilla state \(\left|\psi_0\right\rangle=\frac{1}{\sqrt{T}}\sum_{\tau=0}^{T-1}\left|\tau\right\rangle\). Why this state? It will control the time evolution: it is like a clock, turning on evolution for a certain amount of time. The original HHL algorithm suggests a weighted superposition of all the states \(\tau\) that minimizes errors in the subsequent steps of the algorithm. However, for our implementation, a uniform superposition already gives good results.

Our goal is to create a superposition of \(A\) applied as a Hamiltonian for different durations. Since the eigenvalues of the evolution operator always sit on the complex unit circle, these differently evolved components in the superposition help reveal the eigenstructure. So we apply the conditional Hamiltonian evolution \(\sum_{\tau=0}^{T-1}\left|\tau\right\rangle\left\langle\tau\right|\otimes e^{i A\tau t_0/T}\) on \(\left|\psi_0\right\rangle\otimes\left|b\right\rangle\). This operation evolves the state \(\left|b\right\rangle\) according to the Hamiltonian \(A\) for a time \(\tau\) determined by the state \(\left|\psi_0\right\rangle\). Given that \(\left|\psi_0\right\rangle\) is a superposition of all possible time steps between \(0\) and \(T\), we end up with a superposition of all possible evolutions, and a suitable choice of the number of time steps \(T\) and the total evolution time \(t_0\) allows us to encode binary representations of the eigenvalues.

As a final step, we apply an inverse Fourier transformation that writes the phases (that, recall, encode the eigenvalues of \(A\)) into new registers.

The total circuit for phase estimation is the following:

Quantum phase estimation in the quantum matrix inversion algorithm

In our \(2\times 2\) case, the circuit is massively simplified. Given that the matrix \(A\) has eigenvalues that are powers of \(2\), we can choose \(T=4\), \(t_0=2\pi\) to obtain exact results with just two controlled evolutions.
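We can also verify this choice classically: a small NumPy check (illustrative, not part of the circuit) that with \(T=4\) and \(t_0=2\pi\) the phases read out by the estimation are exactly the eigenvalues \(1\) and \(2\), that is, the two-bit strings \(01\) and \(10\).

# Classical check of the choice T = 4, t0 = 2π: the read-out phases are exact integers.
T, t0 = 4, 2 * π
A = 0.5 * np.array([[3, 1], [1, 3]])
lam, V = np.linalg.eigh(A)
U = V @ np.diag(np.exp(1j * lam * t0 / T)) @ V.conj().T   # one step of exp(i A t0 / T)
phases = np.angle(np.linalg.eigvals(U)) * T / (2 * π)     # value written to the register
print(np.sort(phases % T))                                # ~ [1. 2.] -> binary 01 and 10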

# Superposition
hhl.h(q[1])
hhl.h(q[2])
# Controlled-U0
hhl.cu3(-π / 2, -π / 2, π / 2, q[2], q[3])
hhl.cu1(3 * π / 4, q[2], q[3])
hhl.cx(q[2], q[3])
hhl.cu1(3 * π / 4, q[2], q[3])
hhl.cx(q[2], q[3])
# Controlled-U1
hhl.cx(q[1], q[3])
(The cu3, cu1, and u1 gates used in this notebook are the names from the Qiskit version it was written for; in recent Qiskit releases they have been renamed to cu, cp, and p, respectively, so the calls have to be adjusted to run the notebook on a current version.)

We apply the inverse quantum Fourier transform to write the phase to the register:

hhl.swap(q[1], q[2])
hhl.h(q[2])
hhl.cu1(-π / 2, q[1], q[2])
hhl.h(q[1])
<qiskit.extensions.standard.h.HGate at 0x7fc1ac2680b8>

127. Using Quantum Linear Algebra for Learning

So we looked at the quantum Fourier transformation, the quantum phase estimation, and the quantum matrix inversion algorithms, and we didn’t really say what you can do with them for machine learning. So let’s talk about the kind of algorithms that you can accelerate with these methods. All of these algorithms belong to what you can call quantum linear algebra: quantum variants of the basic linear algebra subroutines. The idea goes back to the correspondence that we started this course with. Quantum states are really just a variant of stochastic vectors, a special variant where you can have complex probability amplitudes. And you can look at quantum computations in two different ways: one is about transforming probability distributions, and the other is just doing linear algebra on these high-dimensional vectors.

In the third module, the learning algorithms that we looked at focused on shallow-circuit algorithms and quantum annealing, which run on devices that cannot sustain coherence for a long time, so the linear algebra you can do is very limited; you can only do very simple things. But if we have a large-scale, robust, fault-tolerant quantum computer, then we can use these algorithms. So far we have seen a couple of them. We looked at the elementary gates, where we basically apply rotations and a couple of other operations. Then we looked at the quantum Fourier transformation, which can be thought of as a signal processing primitive. Then we looked at eigenvalue or phase estimation, and the interesting thing about this algorithm is that what it actually does is change representations. If you go back to the beginning of Module 3, we talked about how you can encode classical information in quantum states: amplitude encoding and basis encoding. What eigenvalue estimation actually does is change from one representation to the other. This is a very useful primitive, because some quantum algorithms operate on amplitudes and others on registers, and depending on which one you need, you might want to change representations; this is the routine to do that.

Then we talked about quantum matrix inversion, which was the first breakthrough algorithm in quantum machine learning, because it was the first proposal to gain an exponential advantage in executing a learning protocol. The very first such algorithm was quantum support vector machines, and since then there have been many, for instance topological data analysis and also Gaussian processes, which is the one we are going to look at in the next lecture. There are many other proposals to use matrix inversion as a primitive for learning protocols. The typical recipe is to take a look at the learning algorithm: if there is a matrix inversion in it, it is a perfect fit and you can apply the routine straight away. If there is not, you might be able to change some conditions and parameters in the algorithm to get a variant that uses matrix inversion. For instance, quantum support vector machines solve a slightly different problem than the most common classical implementations of support vector machines: they use the least-squares formulation, because that boils down to a system of linear equations that you can solve with matrix inversion.

But there are many other linear algebra subroutines that you can accelerate with quantum computers. Another example is preparing a kernel matrix. If you go back to the lecture on kernel methods, you know that we can prepare the matrix whose elements are, say, the dot products of the data instances, and we can do that fast on a quantum computer under certain terms and conditions. Then we can also do quantum principal component analysis. In this case, we want to reveal the eigenstructure of a quantum state, and we can do that when we have some unitary operation; that is what we use quantum phase estimation for. But since the density matrix is also a Hermitian operator, you can think of it as a Hamiltonian, so there is a way of self-exponentiating the quantum state to reveal its own eigenstructure and thus derive a quantum principal component analysis algorithm. There are many other routines: you can think of calculating the Hadamard product, which is the elementwise product of matrices, and there are countless other primitives that use linear algebra on quantum computers.

These are the kinds of algorithms that give you a massive speedup; here we are talking about an exponential speedup compared to the best known classical algorithm. But all of these algorithms come with caveats. First of all, you need a very large-scale and essentially noise-free quantum computer to implement them. And once you have that, you still need something called a QRAM, a quantum random access memory, which is able to give you data points in superposition in an efficient manner. It is not about preparing arbitrary states one by one and putting them into superposition; that would be too slow, and all of your exponential advantage would go away. Here, you want to retrieve data points in a superposition fast. It is unclear how we are going to build a quantum RAM, or even whether it is possible at all. So with these warnings in mind, it is, in principle, possible to give an exponential speedup to machine learning algorithms by using quantum computers.

• We returned to the first metaphor that we introduced in Module 1: quantum states are special probability distributions written as vectors. The quantum basic linear algebra subroutines (QBLAS) give you a toolbox to do linear algebra with these vectors. Would a measurement qualify as such?

– Yes

• Quantum random access memory (QRAM) is a necessary component in many coherent quantum algorithms. What is its purpose?

– To retrieve desired states in a superposition.

The state of the system after the phase estimation is approximately \(\sum _{j{\mathop {=}}1}^{N}\beta _{j}\left|u_{j}\right\rangle \left|\lambda_{j}\right\rangle\), where \(\left|b\right\rangle=\sum _{j{\mathop {=}}1}^{N}\beta _{j}\left|u_{j}\right\rangle\) is the encoding of the vector \(b\) in the eigenbasis of \(A\). Now, there is an often overlooked step that performs bit operations on \(\left|\lambda_{j}\right\rangle\) to actually invert it.

In our case, the inversion of the eigenvalues is easy. The eigenvalues of \(A\) are \(\lambda_1=2=10_2\) and \(\lambda_2=1=01_2\), and their reciprocals are \(\lambda_1^{-1}=1/2\) and \(\lambda_2^{-1}=1\). Noting that \(2\lambda_1^{-1}=01_2\) and \(2\lambda_2^{-1}=10_2\), a swap gate is enough to obtain the state \(\sum _{j{\mathop {=}}1}^{N}\beta _{j}\left|u_{j}\right\rangle \left|2\lambda _{j}^{-1}\right\rangle\), that encodes the reciprocals of the eigenvalues.
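A one-line classical check of this trick (purely illustrative): reversing the two bits of the eigenvalue register maps \(10_2 \leftrightarrow 01_2\), which is exactly \(\lambda \mapsto 2\lambda^{-1}\) for \(\lambda\in\{1,2\}\).

# Check: swapping the two bits maps lambda -> 2/lambda for the eigenvalues 1 and 2.
for lam, bits in [(1, '01'), (2, '10')]:
    print(lam, bits, '->', bits[::-1], '=', int(bits[::-1], 2), '=', 2 / lam)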

hhl.swap(q[1], q[2])
<qiskit.extensions.standard.swap.SwapGate at 0x7fc195c03978>

128. Conditional rotation of ancilla

Next, we perform a conditional rotation to encode the information of the reciprocals of the eigenvalues in the amplitudes of a state, on which we will later post-select. The state we would like to get is \(\sum _{j{\mathop {=}}1}^{N}\beta _{j}\left|u_{j}\right\rangle\left|2\lambda _{j}^{-1}\right\rangle \left(\sqrt{1-\frac{C^2}{\lambda_j^2}}\left|0\right\rangle+\frac{C}{\lambda_j}\left|1\right\rangle \right)\). This is achieved by controlled rotations in the same spirit as the conditional Hamiltonian evolution.
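As a reminder of why a rotation achieves this (a short worked step; the specific angles used in the next cell are those of the reference implementation [1]): acting on the ancilla, \(R_y(\theta)\left|0\right\rangle = \cos\frac{\theta}{2}\left|0\right\rangle + \sin\frac{\theta}{2}\left|1\right\rangle\), so a rotation by \(\theta_j = 2\arcsin\frac{C}{\lambda_j}\), controlled on the eigenvalue register, produces exactly the factor \(\sqrt{1-\frac{C^2}{\lambda_j^2}}\left|0\right\rangle+\frac{C}{\lambda_j}\left|1\right\rangle\) above.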

hhl.cu3(0.392699, 0, 0, q[1], q[0])  # Controlled-RY0
hhl.cu3(0.19634955, 0, 0, q[2], q[0]);  # Controlled-RY1

129. Uncomputing the eigenvalue register

A necessary step when performing quantum computations is to uncompute all operations except those that store the information we want to obtain from the algorithm in the final registers. We need to do this because the remaining registers are still entangled with the result; discarding them without uncomputation would leave us with a mixed state and corrupt the results.

In our case, we must uncompute the phase estimation protocol. After the uncomputation, the state should be \(\sum_{j=1}^N\beta_j\left|u_j\right\rangle\left|0\right\rangle\left(\sqrt{1-\frac{C^2}{\lambda_j^2}}\left|0\right\rangle+\frac{C}{\lambda_j}\left|1\right\rangle \right)\), so we can safely forget about the eigenvalue register.

hhl.swap(q[1], q[2])
hhl.h(q[1])
hhl.cu1(π / 2, q[1], q[2])  # Inverse(Dagger(Controlled-S))
hhl.h(q[2])
hhl.swap(q[2], q[1])
# Inverse(Controlled-U1)
hhl.cx(q[1], q[3])
# Inverse(Controlled-U0)
hhl.cx(q[2], q[3])
hhl.cu1(-3 * π / 4, q[2], q[3])
hhl.cx(q[2], q[3])
hhl.cu1(-3 * π / 4, q[2], q[3])
hhl.cu3(-π / 2, π / 2, -π / 2, q[2], q[3])
# End of Inverse(Controlled-U0)
hhl.h(q[2])
hhl.h(q[1])
<qiskit.extensions.standard.h.HGate at 0x7fc195c32358>

130. Rejection sampling on the ancilla register and a swap test

The state \(\left|x\right\rangle=A^{-1}\left|b\right\rangle\propto\sum_j \beta_j\lambda_j^{-1}\left|u_j\right\rangle\) that contains the solution to \(Ax=b\) is the one obtained when we measure \(1\) on the ancilla. We perform the post-selection by projecting onto the desired \(\left|1\right\rangle\). To check that the solution is the expected one, we prepare the correct output state manually and perform a swap test against the outcome.
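For reference, the statistic we read off the swap test is the standard one: if the post-selected state is \(\left|x\right\rangle\) and the manually prepared target is \(\left|\tilde{x}\right\rangle\), the control qubit of the swap test measures \(0\) with probability \(\frac{1}{2}\left(1+\left|\left\langle \tilde{x}\middle|x\right\rangle\right|^2\right)\), so a perfect match gives success probability \(1\), which is what the simulation at the end of the notebook reports.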

# Target state preparation
hhl.rz(-π, q[4])
hhl.u1(π, q[4])
hhl.h(q[4])
hhl.ry(-0.9311623288419387, q[4])
hhl.rz(π, q[4])
# Swap test
hhl.h(q[5])
hhl.cx(q[4], q[3])
hhl.ccx(q[5], q[3], q[4])
hhl.cx(q[4], q[3])
hhl.h(q[5])

hhl.barrier(q)
hhl.measure(q[0], c[0])
hhl.measure(q[5], c[1])
<qiskit.circuit.measure.Measure at 0x7fc1ae31df98>

131. Quantum-Assisted Gaussian Processes

We talked about quantum linear algebra and a couple of applications in machine learning, and I want to take a look at a very specific one, which is quantum-assisted Gaussian processes. It is a very good fit for the quantum matrix inversion algorithm because the conditions can be fulfilled. So let’s take a look at how it works, but before we look at the actual protocol, which is very simple, let’s look at what Gaussian processes are in the first place. Think of a very simple machine learning problem, perhaps the easiest one: linear regression. You are given a couple of points and you want to fit a line to them. The points live in some high-dimensional space, in this case just two dimensions, with corresponding labels, which here are real values. The data points may follow a completely different distribution than our line, but we, as the machine learning professionals, chose a line to fit, and we do not really get any estimate of how good we are other than the raw error on the data points. Gaussian processes are different: they take a Bayesian approach, which means you always get some estimate of the confidence of a prediction. In this particular case, instead of fitting a specific function to the data points, you get a probability distribution over, in principle, all possible functions. As in any Bayesian approach, you have mainly three components: a prior distribution, a posterior, and something to connect the two, an update rule or some likelihood function. In our case, the prior is over functions and it is a Gaussian distribution with zero mean; our posterior is also a Gaussian with zero mean, but now with the covariance matrix, or kernel matrix, as the variance of this multivariate Gaussian distribution. The covariance matrix is up to us to choose, very much like in kernel methods. For instance, if we choose an RBF kernel, which introduces an exponential decay over the Euclidean distance, points that are far away are discounted much more, so they have much less weight. This choice of metric also ensures that whatever is close in the input space stays reasonably close in the output space. You can think of the choice of the kernel function, or covariance matrix, as a hyperparameter that you can tune to fit your data, and then you run your Gaussian process to get a fit.

The forward pass, which means calculating the value of the model at some point, is the following. You are interested in the value of the function at a query point \(x_*\) given the data, and it also follows a Gaussian distribution, with some mean \(m\) and some variance \(s^2\). This is where the calculation starts to matter. First, define a vector: take the kernel function of the query point \(x_*\) with all the data points \(x_j\) and collect these values in a vector, which I will denote \(k_*\). The mean is then \(m = k_*^T (K+\sigma^2 I)^{-1} y\), where \(y\) is the vector of labels: we multiply \(k_*\) with the inverse of the kernel matrix whose diagonal is shifted by the noise variance of the data. This is one inversion. Then there is a second one when we calculate the variance \(s^2 = k(x_*, x_*) - k_*^T (K+\sigma^2 I)^{-1} k_*\): we take the kernel function of the query point with itself and subtract something that looks very much like an expectation value of the same inverse matrix.

What we can do now is calculate this inverse with the quantum algorithm. All we have to do is plug in the quantum matrix inversion routine, and we get a significant advantage. I mentioned that there are two very important conditions for being able to run quantum matrix inversion: one is sparsity, and the other is that the matrix must be well-conditioned. The good news is that both can be fulfilled in this case. Choosing the covariance function is up to us, it is a hyperparameter, so we can choose it in a way that the resulting covariance matrix is sparse: for instance, we can introduce an exponentially fast decay of the weights and explicitly push small entries down to zero, so we end up with a sparse matrix, in which case we can actually invert it efficiently. Conditioning the matrix is also not so difficult, since we have the extra \(\sigma^2 I\) term: by increasing the noise variance, you increase the values on the diagonal, which improves how well-conditioned the matrix is. So both conditions can be met, and you are free to run the quantum matrix inversion, provided that you have a sufficiently large quantum computer.
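To make the forward pass concrete, here is a minimal classical sketch in NumPy (the data, kernel, and noise level are made up for illustration); the two calls to np.linalg.solve are exactly the inversions that quantum matrix inversion would replace.

import numpy as np

def rbf(x1, x2, length=1.0):
    # RBF kernel on one-dimensional inputs.
    return np.exp(-np.subtract.outer(x1, x2) ** 2 / (2 * length ** 2))

X = np.array([0.0, 1.0, 2.0, 3.0])        # training inputs (made up)
y = np.array([0.1, 0.9, 2.1, 2.9])        # training labels (made up)
x_star = np.array([1.5])                  # query point
sigma2 = 0.1                              # noise variance

K = rbf(X, X) + sigma2 * np.eye(len(X))   # the matrix that has to be inverted
k_star = rbf(x_star, X).ravel()

mean = k_star @ np.linalg.solve(K, y)                                 # m  = k*^T (K + s^2 I)^-1 y
var = rbf(x_star, x_star)[0, 0] - k_star @ np.linalg.solve(K, k_star)  # s^2 = k(x*,x*) - k*^T (K + s^2 I)^-1 k*
print(mean, var)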

• Bayesian approaches pivot on priors, posterior, and how we update the latter based on observations. Why do we like a Gaussian prior and a Gaussian posterior?

– It is easy to derive a convenient update rule.

• The quantum-assisted forward pass has a fair bit of classical pre- and post-processing. Would you consider it a hybrid classical-quantum variational algorithm?

– No, since the quantum circuit has no parameters to optimize.

Note: it is a good exercise to check that the right result is given by the state \(\left|x\right\rangle=0.949\left|0\right\rangle + 0.314\left|1\right\rangle\), which is the state we prepare above.

There are two measurements performed, one of the ancilla register (for doing the post-selection) and another one that gives the result of the swap test. To calculate success probabilities, let us define some helper functions.

def get_psuccess(counts):
    '''Compute the probability that the swap test reports a match,
    conditioned on successful post-selection on the rotation ancilla.

    :param counts: (dict) Measurement statistics returned by the backend.
    :return: (float) The success probability.
    '''
    try:
        succ_rotation_fail_swap = counts['11']
    except KeyError:
        succ_rotation_fail_swap = 0
    try:
        succ_rotation_succ_swap = counts['01']
    except KeyError:
        succ_rotation_succ_swap = 0
    succ_rotation = succ_rotation_succ_swap + succ_rotation_fail_swap
    try:
        prob_swap_test_success = succ_rotation_succ_swap / succ_rotation
    except ZeroDivisionError:
        prob_swap_test_success = 0
    return prob_swap_test_success

Finally we run the circuit on the simulator:

backend = Aer.get_backend('qasm_simulator')
job = execute(hhl, backend, shots=100)
result = job.result() 
counts = result.get_counts(hhl)
print(get_psuccess(counts))
1.0

Running on an actual QPU would yield a much poorer result due to imprecision in the application of the gates and noise from the environment.

132. References

[1] J. Pan, Y. Cao, X. Yao, Z. Li, C. Ju, H. Chen, X. Peng, S. Kais, and J. Du. (2014). Experimental realization of quantum algorithm for solving linear systems of equations. Physical Review A 89:022313.