Here is a good review of the intuition behind the Euclidean Algorithm.
This lecture note is heavily modified from the original presentation by Ryan O'Donnell, as I find my version easier for beginners like me to follow.
Actually, factoring shows up in nature. This video shows that factoring is related to the bright pattern in a coffee cup, the Mandelbrot set, cardioid microphones, and the rolling of circles of different sizes.
We claim that once we have the length of the cycle containing node 1, denoted L, we can solve factoring.
Last time, we saw that we can recover the rotation angle of a quantum operation, given a quantum state that lies in the subspace of the rotation. This is helpful for factoring! We need to prepare two things:
Once we have those two, we can estimate the angle of U and therefore learn the cycle length L.
Quantum Operation: simply make a quantum version of U(x) = a \cdot x \mod F, i.e. the map |x\rangle \mapsto |a \cdot x \bmod F\rangle.
Quantum State: simply encode |v_1\rangle, |v_2\rangle, ..., |v_L\rangle as binary numbers.
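Before any quantum machinery, the cycle itself (and the claim that knowing L solves factoring) is easy to illustrate classically, just exponentially slowly, which is the whole problem. A minimal Python sketch, with small illustrative choices of a and F:

```python
from math import gcd

def cycle_length(a, F):
    """Length L of the cycle of node 1 under x -> a*x mod F."""
    x, L = a % F, 1
    while x != 1:
        x, L = (x * a) % F, L + 1
    return L

# Toy example: a = 2, F = 15. Powers of 2 mod 15 are 2, 4, 8, 1, so L = 4.
L = cycle_length(2, 15)
print(L)  # 4

# The claim: knowing L solves factoring. With S = a^(L/2) mod F,
# gcd(S - 1, F) and gcd(S + 1, F) expose the factors of F = 15.
S = pow(2, L // 2, 15)
print(gcd(S - 1, 15), gcd(S + 1, 15))  # 3 5
```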
Since we love quantum, we want to use its power by making superpositions. So we would like to create the following quantum state:
For k = 0, 1, ..., L denote U^k|V\rangle = |V_k\rangle.
However, as you can see in the example above, U acts as the identity: we can no longer track what the operation did to each possible outcome, since |V_0\rangle = |V_1\rangle = ... = |V_L\rangle.
Yes, I am viewing + as "OR", since |V\rangle is a nondeterministic state that can collapse to each |v_i\rangle with equal probability.
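Numerically, the "U does nothing" problem is easy to see. A sketch with a toy cycle length, where a cyclic-shift matrix stands in for U restricted to the cycle:

```python
import numpy as np

L = 4
# U restricted to the cycle is the cyclic shift |v_j> -> |v_{j+1}>
U = np.roll(np.eye(L), 1, axis=0)
V = np.ones(L) / np.sqrt(L)  # uniform superposition |V> over the cycle

# |V_0> = |V_1> = ... : the superposition is a fixed point of U
print(np.allclose(U @ V, V))  # True
```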
So we need to add a "tag" for each state.
Quantum Operation: U'(x, |B\rangle) = (a \cdot x \bmod F, |B\rangle)
Quantum State: |v_1\rangle \otimes b_1, |v_2\rangle \otimes b_2, ..., |v_L\rangle \otimes b_L, with tags b_1, b_2, ..., b_L \in \mathbb{R}^2.
Notice that we need b_1, b_2, ..., b_L to be pairwise distinct in order to tag each state. Because of this, we need |B\rangle (the added qubit) to be entangled with |V\rangle. Therefore we require the superposition state to be:
For k = 0, 1, ..., L denote U'^k|V'\rangle = |V'_k\rangle.
Notice by construction, operation U' has the effect:
Rotates the |V'\rangle vector by \frac{2\pi}{L}, since it loops back after exactly L rotations
These rotations form a 2D subspace, since \text{avg}\{U'U'|V'\rangle, |V'\rangle\} = U'|V'\rangle
Intuition: in our first construction, because |V\rangle lives on the rotational axis of U, we can't measure the angle of rotation even though we know U^L|V\rangle = |V\rangle. But now, since we added a qubit, U'^L|V'\rangle = |V'\rangle still holds, but the rotation is no longer 0^\circ, and therefore we can measure it.
We have done our construction on paper: if we feed U' and |V'\rangle into the Revolver Resolver (a bunch of Hadamard Tests), then we can get the angle \theta and recover L = \frac{2\pi}{\theta}. The problem is that although we can construct U', we can't construct |V'\rangle, since doing so requires knowing |v_1\rangle, ..., |v_L\rangle, which defeats the purpose.
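What the Revolver Resolver buys us can be mimicked classically inside the 2D rotation plane. A sketch with an assumed hidden L, using the fact that a Hadamard test estimates the overlap \langle V'|U'|V'\rangle = \cos\theta:

```python
import numpy as np

L = 5                    # hidden cycle length (assumed for this demo)
theta = 2 * np.pi / L    # rotation angle of U' on the 2D subspace
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
v = np.array([1.0, 0.0])  # |V'>, living inside the rotation plane

# A Hadamard test estimates the overlap <V'|U'|V'> = cos(theta)
theta_est = np.arccos(v @ U @ v)
print(round(2 * np.pi / theta_est))  # recovers L = 5
```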
Well, there is another version of the explanation that doesn't require an additional qubit, but it involves complex numbers. For complex numbers, Ryan suggests reading Visual Complex Analysis. Here is a quote from him.
You will find countless geometric insights in this tome, from the common to the virtually unknown. The author, Tristan Needham, builds a geometric world on the initial observation that complex numbers rotate the 2D plane. With this concrete visual in mind, he explains everything from the infamous quaternions to unitary matrices. Even just reading the first chapter fully explains basic complex number algebra in a geometric light.
This goes above and beyond what's necessary but is an excellent resource for anyone with a curiosity to understand why things are true.
We already have our vector: (we rename everything from index 0 to index L-1 for convenience)
We want to make a vector that is easy to construct and still has a similar effect.
Consider the following vector:
Observe that whenever U' is a \theta rotation on |V^{\times 1}\rangle, U' is a 2\theta rotation on |V^{\times 2}\rangle: the vector sweeps through 2 \cdot 2\pi in total before it rotates back to where it started.
Note that (\forall i \neq j \in \{0, 1, ..., L - 1\})(|V^{\times i}\rangle \perp |V^{\times j}\rangle), which can be proved easily by an inner-product calculation. Also, since the vectors are orthogonal, the subspaces in which they rotate are orthogonal.
Assuming the standard Fourier form |V^{\times k}\rangle = \frac{1}{\sqrt{L}} \sum_{j=0}^{L-1} \omega^{-jk} |v_j\rangle with \omega = e^{2\pi i/L}, the inner product for i \neq j is \langle V^{\times i} | V^{\times j} \rangle = \frac{1}{L} \sum_{m=0}^{L-1} \omega^{mi} \omega^{-mj} = \frac{1}{L} \sum_{m=0}^{L-1} \omega^{m(i-j)} = 0, a geometric series that sums to zero because \omega^{i-j} \neq 1 while (\omega^{i-j})^L = 1.
Notice Revolver Resolver will give us k\theta when we feed in U' and |V^{\times k}\rangle. But the problem is: we still can't make any of |V^{\times k}\rangle.
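The explicit form of |V^{\times k}\rangle is not written out above; assuming the standard Fourier combination |V^{\times k}\rangle = \frac{1}{\sqrt{L}} \sum_j \omega^{-jk} |v_j\rangle with \omega = e^{2\pi i/L}, a small numerical check confirms the claims: each |V^{\times k}\rangle picks up a k-times-faster phase under U', distinct k are orthogonal, and the uniform combination of all of them is just (1, 0, ..., 0).

```python
import numpy as np

L = 6
omega = np.exp(2j * np.pi / L)
U = np.roll(np.eye(L), 1, axis=0)  # cyclic shift |v_j> -> |v_{j+1}>

def Vx(k):
    """Assumed Fourier form of |V^{x k}>."""
    return np.array([omega ** (-j * k) for j in range(L)]) / np.sqrt(L)

# U multiplies |V^{x k}> by omega^k, i.e. a k*theta rotation
print(np.allclose(U @ Vx(2), omega ** 2 * Vx(2)))  # True
# Distinct k give orthogonal vectors
print(abs(np.vdot(Vx(1), Vx(3))) < 1e-12)          # True
# The uniform combination of all |V^{x k}> is (1, 0, ..., 0) = |v_0>
start = sum(Vx(k) for k in range(L)) / np.sqrt(L)
print(np.allclose(start, np.eye(L)[0]))            # True
```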
Consider yet another vector, which now we believe we can make:
Assuming the same Fourier form, \frac{1}{\sqrt{L}} \sum_{k=0}^{L-1} |V^{\times k}\rangle = \frac{1}{L} \sum_j \left( \sum_k \omega^{-jk} \right) |v_j\rangle = |v_0\rangle, since \sum_k \omega^{-jk} equals L when j = 0 and 0 otherwise. In coordinates this is (1, 0, 0, ..., 0): exactly the easy-to-prepare state |v_0\rangle.
If we feed |start\rangle and U' into the Revolver Resolver, we will get the answer as if we had fed in |V^{\times i}\rangle for a uniformly random i \in \{0, 1, 2, ..., L - 1\}. That is, we will get an element of the set \{\theta, 2\theta, ..., L\theta\}. Since L = 2\pi/\theta, our estimate will be \hat{L} \in \{L, \frac{1}{2}L, \frac{1}{3}L, ..., \frac{1}{L-1}L, 1\}.
Let L be the true value, and let \frac{\hat{k}}{\hat{L}} be the fraction returned by the Revolver Resolver.
To convert the measured number into a fraction, we first convert it to an exact representation with \hat{L} = 2^x in binary (or \hat{L} = 10^x in base ten); this fraction is generally not in lowest terms.
It is sufficient to use 10n digits of accuracy: two distinct fractions whose denominators are at most L differ by at least 1/L^2, so roughly 2n digits for an n-digit L already determine the fraction uniquely, and 10n leaves a comfortable margin.
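The conversion step can be sketched with the standard-library `Fraction` type (the toy k, L, and precision here are assumptions): the readout is an integer over 2^x, and `limit_denominator` snaps it to the nearest fraction with a small denominator, playing the role of `Rationalize` in the Mathematica code below.

```python
from fractions import Fraction

k, L = 4, 21          # hidden values (assumed for the demo)
x = 20                # bits of accuracy in the readout
readout = round(k / L * 2 ** x)   # the integer we actually measure

exact = Fraction(readout, 2 ** x)           # exact, but denominator is 2^x
snapped = exact.limit_denominator(2 ** 10)  # nearest small-denominator fraction
print(snapped)  # 4/21
```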
Finding a GCD (and hence an LCM, via \mathrm{lcm}(a, b) = ab/\gcd(a, b)) takes O(\log(\min(a, b))) steps with the Euclidean Algorithm.
Say we got \frac{\hat{k}}{\hat{L}}, not in lowest terms. We at least want to reduce the fraction to lowest terms without factoring (since using factoring here would defeat the purpose). Reducing to lowest terms is just a GCD computation.
Suppose we got \frac{\hat{k}}{\hat{L}}. Since the number of primes in the range [L/2, L] is \geq \frac{1.6}{n} L for an n-digit L, we have \Pr\{\hat{k} > \frac{L}{2} \text{ and } \hat{k} \text{ is prime}\} \gtrsim 1.6/n (not too small, since n is the digit count, exponentially smaller than L). If \hat{k} is prime, the numerator and denominator cannot share any smaller prime factor; the only remaining risk is that \hat{k} itself divides the denominator, and requiring \hat{k} > \frac{L}{2} in addition rules that out.
We could repeat the Revolver Resolver O(n) times to find a fraction in lowest terms; in expectation at least one appears. We can confirm a candidate with a GCD computation.
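Reducing one sample to lowest terms is a single Euclidean GCD; a sketch with a hypothetical sample value:

```python
from math import gcd

k_hat, L_hat = 6, 21           # one hypothetical Revolver Resolver sample
g = gcd(k_hat, L_hat)          # Euclid: O(log min(k_hat, L_hat)) steps
print(k_hat // g, L_hat // g)  # 2 7 -- the denominator is L/3, not L itself
```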
We don't actually need to find the lowest terms. We can just take the least common multiple of the denominators of \frac{\hat{k}_1}{\hat{L}}, \frac{\hat{k}_2}{\hat{L}} (each reduced), and we recover L with high probability.
Since the fractions are in lowest terms, observe that each denominator is a divisor of L.
Note that some of the denominators above may be L/2, L/3, L/4, ...
So you can just take the least common multiple of the denominators of \frac{k_1}{L} and \frac{k_2}{L} (in lowest terms) to recover L.
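The LCM trick in a sketch (the hidden L and sampled k's are assumptions): `Fraction` reduces each sample, and the LCM of the reduced denominators recovers L.

```python
from fractions import Fraction
from math import lcm  # Python 3.9+

L = 36                # hidden order (assumed for the demo)
ks = [7, 10, 15]      # hypothetical numerators from repeated runs
denoms = [Fraction(k, L).denominator for k in ks]
print(denoms)        # [36, 18, 12] -- each divides L
print(lcm(*denoms))  # 36 -> L recovered
```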
F = 387850941396970290534206277605204343359741811271883352319963844051\
2334975267439669826638880447656457799276726253996913347749220404884903\
30684685885180686701127 (* the thing we factor *)
M = 42 (* pick an M *)
fract = Rationalize[
0.01526682248009029195034034975403147290236332528792618573153059121\
8762524979368136164017198481249650018376474843659852818171952961162605\
0700509690570835756871746781962160158532737241555053538313369786621199\
9263171630535362489437077402352074556676261743566026432128949792146984\
3721741140297158743828890592908710648655633540355052515612363795188136\
1815261425976277027765965180568298688837478254112068738930922267128011\
4689493518405082753257557307360461335067110993699080431402199168809443\
0787906029477221];
L = Denominator[fract] (* get L from quantum *)
Solution1 =
If[GCD[M, F] > 1, GCD[M, F], "no easy solution"]
W = ModularInverse[M, F]
Bad1 = If[Mod[L, 2] == 0, "good", "bad M"]
S = PowerMod[M, L/2, F]
Bad2 = If[S == F - 1, "bad M", "good"]
P = GCD[S - 1, F]
Q = GCD[S + 1, F]
The above gives P = 12345678901234567890123456789012345678901234567890123456789012345678901234567997 and Q = 1. Since Q = 1, the second GCD did not produce a factor, but P is a nontrivial factor of F, and the other factor is simply F/P.