Lecture 009 - Grover's Algorithm


Grover's Algorithm: solve \text{SAT} in O(2^\frac{n}{2}) time (up to polynomial factors)

Unique SAT: circuit SAT with the promise that there is exactly one satisfying assignment.

Notice that for Bias Busting, we detect whether a truth table is biased toward 1 or toward 0. For Unique SAT, we determine in which row the 1 appears.

The true power is that we can first Had all bits into the uniform superposition, and then compile classical code to quantum gates simply by running a sign-compute on that uniform superposition.
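As a concrete sketch of sign-compute on the uniform superposition: track one unnormalized amplitude per input, run an ordinary classical predicate on every index, and flip the sign wherever the predicate accepts. The predicate f and the 3-bit size below are made-up choices for illustration, not part of the notes.

```python
# Hypothetical classical predicate (assumption for illustration):
# f(x) = 1 iff x == 5, on n = 3 bits.
def f(x):
    return x == 5

n = 3
N = 2 ** n

# Uniform superposition: every basis state has the same amplitude.
# We track unnormalized amplitudes (all +1) to keep the arithmetic exact.
amps = [1] * N

# Sign-compute: "run f on all inputs at once" and negate the amplitude
# of every x with f(x) = 1.
amps = [-a if f(x) else a for x, a in enumerate(amps)]

print(amps)  # row 5, the unique solution, is marked with a minus sign
```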

Reflection Across Mean

In order to understand Grover's algorithm, we first build a helper function: given a bunch of amplitudes, it computes the mean of the amplitudes and reflects every amplitude across that mean.

def reflection_accross_mean():
  Had X1, X2, ..., Xn
  If OR(X1, X2, ..., Xn) then Minus
  Had X1, X2, ..., Xn

Notice the above function takes about 4n instructions, where n is the number of variables.
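Numerically, reflecting across the mean sends each amplitude a to 2\mu - a. A minimal sketch of that effect on a vector of unnormalized amplitudes (plain Python, example values assumed):

```python
# Reflection across the mean: each amplitude a becomes 2*mu - a,
# where mu is the mean of all amplitudes.
def reflect_across_mean(amps):
    mu = sum(amps) / len(amps)
    return [2 * mu - a for a in amps]

# Example: seven +1 amplitudes and one -1 (one marked solution).
# Mean is 6/8 = 0.75, so +1 -> 0.5 and -1 -> 2.5.
amps = [1, 1, 1, 1, -1, 1, 1, 1]
print(reflect_across_mean(amps))
```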

After the first line, we produce a vector whose first element is the mean (up to scaling) and whose remaining entries are some other amplitudes:

H_{all} \cdot \vec{f} = \vec{g} =\begin{bmatrix} \mu\\ \alpha\\ \beta\\ \dots\\ \zeta\\ \end{bmatrix}

After the second line, we add a negative sign to all amplitudes except the first one:

g' = \begin{bmatrix} \mu\\ -\alpha\\ -\beta\\ -\dots\\ -\zeta\\ \end{bmatrix}

After the third line, we get the reflected version of \vec{f}, called \vec{f'}. But why?

Essentially, we need to prove that

H_{all} \cdot \vec{g'} = \vec{f'}

The Proof

To see why, let's write \vec{f} as its mean \mu plus a difference vector \vec{\Delta}:

\vec{f} = \mu \begin{bmatrix} 1\\ 1\\ \dots\\ 1\\ 1\\ \end{bmatrix} + \begin{bmatrix} 0.25\\ 0.25\\ \dots\\ -1.75\\ 0.25\\ \end{bmatrix} = \mu \vec{1} + \vec{\Delta}

Then reflection across the mean just means we want the transformation

\vec{f} = \mu \vec{1} + \vec{\Delta} \to \vec{f'} = \mu \vec{1} - \vec{\Delta}

To see that this really gives \vec{f'}, let's write \vec{g} differently.

\vec{g} = \mu \begin{bmatrix} 1\\ 0\\ \dots\\ 0\\ 0\\ \end{bmatrix} + \begin{bmatrix} 0\\ ?\\ \dots\\ ?\\ ?\\ \end{bmatrix} = \mu \vec{10} + \vec{?}

Then we have the following

\begin{align*}
\vec{f} &= H_{all} \cdot \vec{g} \tag{$H_{all}$ is its own inverse}\\
\mu \vec{1} + \vec{\Delta} &= H_{all} \cdot (\mu \vec{10} + \vec{?}) \tag{by writing $\vec{f}$ and $\vec{g}$ differently}\\
\mu \vec{1} + \vec{\Delta} &= H_{all}(\mu \vec{10}) + H_{all}\vec{?} \tag{expand}\\
\mu \vec{1} + \vec{\Delta} &= \mu \vec{1} + H_{all}\vec{?} \tag{by $H_{all}(\mu \vec{10}) = \mu \vec{1}$}\\
\vec{\Delta} &= H_{all}\vec{?} \tag{by cancellation}\\
-\vec{\Delta} &= -H_{all}\vec{?} \tag{by negation}\\
\mu \vec{1} - \vec{\Delta} &= \mu \vec{1} - H_{all}\vec{?} \tag{by adding back $\mu \vec{1}$}\\
\vec{f'} &= \mu \vec{1} - H_{all}\vec{?} \tag{by $\vec{f'} = \mu \vec{1} - \vec{\Delta}$}\\
\vec{f'} &= H_{all}(\mu \vec{10}) - H_{all}\vec{?} \tag{by $H_{all}(\mu \vec{10}) = \mu \vec{1}$}\\
\vec{f'} &= H_{all}(\mu \vec{10} - \vec{?}) \tag{factor $H_{all}$ out}\\
\vec{f'} &= H_{all} \vec{g'} \tag{by definition of $\vec{g'}$}\\
\end{align*}

Now we have successfully proved the correctness of reflection across the mean.
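The identity can also be checked numerically: the three-line program Had, (If OR then Minus), Had, written out as matrices, acts on any vector exactly as reflection across the mean. A sketch assuming a small n = 3 (the size and random test vector are arbitrary):

```python
import random

n = 3
N = 2 ** n
s = N ** -0.5  # 1 / sqrt(N)

# H_all is the n-fold tensor power of the Hadamard: entry (i, j) equals
# (1/sqrt(N)) * (-1)^(popcount(i AND j)).
def h_all(vec):
    return [s * sum((-1) ** bin(i & j).count("1") * vec[j] for j in range(N))
            for i in range(N)]

# "If OR(X1, ..., Xn) then Minus": negate every amplitude except the first.
def flip_all_but_first(vec):
    return [vec[0]] + [-a for a in vec[1:]]

random.seed(0)
f = [random.uniform(-1, 1) for _ in range(N)]

lhs = h_all(flip_all_but_first(h_all(f)))   # the three-line program
mu = sum(f) / N
rhs = [2 * mu - a for a in f]               # reflection across the mean

print(max(abs(x - y) for x, y in zip(lhs, rhs)))  # ~0, up to float rounding
```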

Actual Algorithm

Problem: given f : \{0, 1\}^n \to \{0, 1\}, find input x so that f(x) = 1.

Think of Had x1, x2, ... followed by a sign-compute as a way to start a parallel computation: brute-force trying all inputs at once, and storing the result in the sign of the amplitude.

@require x1, x2, ... = 0
def solve_sat(f):
  // prepare uniform superposition
  Had x1, x2, ...

  // repeat the following two steps k times
  If f() then Minus // reflection across 0
  reflection_accross_mean()

Let's trace through the states (unnormalized amplitudes, one entry per input; the marked column is the unique solution x^*):

\begin{align*}
[+1, +1, +1, +1, +1, +1, +1] \tag{uniform superposition}\\
[+1, +1, +1, +1, -1, +1, +1] \tag{if $f()$ then Minus}\\
[+1, +1, +1, +1, +3, +1, +1] \tag{reflection across mean}\\
[+1, +1, +1, +1, -3, +1, +1] \tag{if $f()$ then Minus}\\
[+1, +1, +1, +1, +5, +1, +1] \tag{reflection across mean}\\
\dots \tag{as long as the mean is $+1$-ish}\\
\end{align*}
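The trace above idealizes the mean as staying at +1. Replaying it exactly for a small N = 8 (a made-up size, marked item at index 4) shows the marked amplitude growing by roughly 2 per round, and also how the idealization starts to break down as amplitude concentrates:

```python
from fractions import Fraction

N, star = 8, 4
amps = [Fraction(1)] * N            # uniform superposition (unnormalized)

for _ in range(2):                  # two Grover combos
    amps[star] = -amps[star]        # if f() then Minus
    mu = sum(amps) / N
    amps = [2 * mu - a for a in amps]   # reflection across mean

print([float(a) for a in amps])
```

After two rounds the marked amplitude is 2.75 while the others are -0.25, so the success probability is 2.75^2 / 8 ≈ 94.5%, consistent with the optimal k ≈ (π/4)·√8 ≈ 2.2 discussed below.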

For n variables with an f that takes T instructions to compute, we need O(T) instructions for if f() then Minus and O(n) instructions for reflection_accross_mean(). If we repeat k times, the total is O(k(n + T)).

For small enough k \leq 0.01\sqrt{2^n}, the (unnormalized) amplitude of x^* is about 2k + 1, which at k = 0.01\sqrt{2^n} gives probability about \left(\frac{2k+1}{\sqrt{2^n}}\right)^2 \approx (0.02)^2 = \frac{1}{2500}.

To get Pr\{\text{print out } x^*\} \simeq 100\%, we need:

k \approx \sqrt{N} \cdot \frac{\pi}{4} = \sqrt{2^n} \cdot \frac{\pi}{4}
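Using the standard closed form for Grover's success probability with one solution, \sin^2((2k+1)\theta) where \sin\theta = 1/\sqrt{N}, we can check that this choice of k gives probability near 100%. The value n = 10 below is an arbitrary example:

```python
import math

# Success probability of Grover's algorithm after k iterations on N = 2^n
# items with exactly one solution: sin^2((2k+1) * theta), sin(theta) = 1/sqrt(N).
def grover_success(n_bits, k):
    N = 2 ** n_bits
    theta = math.asin(1 / math.sqrt(N))
    return math.sin((2 * k + 1) * theta) ** 2

n = 10
k = round(math.pi / 4 * math.sqrt(2 ** n))   # the recommended iteration count
print(k, grover_success(n, k))               # probability close to 1
```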

(Figure: Grover's algorithm (n = 4, N = 16) — the amplitudes after each of 25 Grover combos (then it repeats). Made by Ryan O'Donnell.)


Aside

The following two computations are equivalent:

H on A
CNOT B A
H on A

H on B
CNOT A B
H on B

Both change the amplitude \alpha|11\rangle to -\alpha|11\rangle. (This can be proved by brute-force computation using grids.)
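The brute-force grid computation can be scripted: build the 4x4 matrices (qubit order |AB\rangle and "CNOT control target" notation are assumptions here) and check that both circuits equal CZ, the gate that negates only |11\rangle:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def kron(X, Y):  # (2x2) tensor (2x2) -> 4x4, first factor = qubit A
    return [[X[i // 2][j // 2] * Y[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

s = 2 ** -0.5
I = [[1, 0], [0, 1]]
H = [[s, s], [s, -s]]

# Basis order |AB> = 00, 01, 10, 11.
# CNOT A B: control A, target B (assumption: "CNOT control target").
CNOT_AB = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
# CNOT B A: control B, target A.
CNOT_BA = [[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]

HA, HB = kron(H, I), kron(I, H)
circuit1 = matmul(HA, matmul(CNOT_BA, HA))   # H on A; CNOT B A; H on A
circuit2 = matmul(HB, matmul(CNOT_AB, HB))   # H on B; CNOT A B; H on B

# CZ: flips the sign of |11> only.
CZ = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]]

err1 = max(abs(circuit1[i][j] - CZ[i][j]) for i in range(4) for j in range(4))
err2 = max(abs(circuit2[i][j] - CZ[i][j]) for i in range(4) for j in range(4))
print(err1, err2)  # both ~0, up to float rounding
```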
