QCI: Week 1

In this course we will be learning about quantum computing. We will start by using Jupyter Notebook, an open-source notebook application in which you can run Python (and other programming languages) inline. Useful stuff. See: Installing Jupyter.

Set-up (Linux)

Anaconda

First of all, is Anaconda installed? You can check this by running:

conda --version
#=> conda 4.0.1

# otherwise (quick and dirty; see the heads-up below):
wget -O - https://repo.anaconda.com/archive/Anaconda3-2018.12-Linux-x86_64.sh | bash

Heads up! See: Why it's a bad idea to wget | bash. This is not recommended for production systems; only do it if you really know what you are doing. It's a quick and dirty way of getting it done.

If Anaconda is installed, then Jupyter is already installed too; it comes prepackaged with Anaconda. Second of all, is pip installed? We will want pip for installing Python packages that aren't available through conda.

Pip

pip --version
#=> pip 9.0.1

Qutip

A good way to represent quantum gates in Python is the QuTiP package.

conda install qutip
# if the package is not found on the default channel: conda install -c conda-forge qutip
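As an aside (a minimal sketch assuming the standard QuTiP operators), the Pauli gates we work with below are available in QuTiP directly:

# Minimal QuTiP sketch: the Pauli operators as QuTiP objects
from qutip import qeye, sigmax, sigmaz

I = qeye(2)    # identity
X = sigmax()   # Pauli-X
Z = sigmaz()   # Pauli-Z
print(X * Z)   # the same product we compute with SciPy matrices later on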

Jupyter

Anyway, now we can start Jupyter. The following command starts a notebook server at localhost and opens it in your browser:

jupyter notebook

Scipy

We will be using the SciPy package for our computations. SciPy Docs.

Now that we have our Jupyter environment up and running, we can start by multiplying the matrices with each other and seeing what happens.

#Import libs
import scipy as sp
from scipy import linalg

Pauli Gates
$$I= \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} \qquad X= \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} \qquad Z= \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}$$

Hadamard Gate(H)
$$H= \frac{1}{\sqrt{2}} \cdot \begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix} = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{pmatrix}$$

S Gate
$$S = \begin{pmatrix} 1 & 0\\ 0 & i \end{pmatrix}$$

# Definition and initialization of gates as matrices
# Pauli Gates {I, X, Z}
pauli_i = sp.matrix([[1, 0], [0, 1]])
pauli_x = sp.matrix([[0, 1], [1, 0]])
pauli_z = sp.matrix([[1, 0], [0, -1]])

# Hadamard-Gate 
H = 1 / sp.sqrt(2) * sp.matrix([[1, 1], [1, -1]])

# S-Gate, using 1j notation instead of complex(0,1)
S = sp.matrix([[1, 0], [0, 1j]])
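As a quick cross-check that the code matches the math above, the prefactor form and the explicit-entry form of the Hadamard gate should give the same matrix. This is a small sketch; it assumes sp.allclose is available at the SciPy top level, like sp.matrix and sp.sqrt above.

# Cross-check: both ways of writing H in the Hadamard equation are identical
H_explicit = sp.matrix([[1 / sp.sqrt(2), 1 / sp.sqrt(2)],
                        [1 / sp.sqrt(2), -1 / sp.sqrt(2)]])
print(sp.allclose(H, H_explicit))
# => True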

1.

$$X \cdot Z = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 0 & -1\\ 1 & 0 \end{pmatrix}$$

pauli_x * pauli_z
matrix([[ 0, -1],
        [ 1,  0]])

2.

At first I thought that in order to find the inverse of a matrix, you raise the matrix to the power of -1, as described in the formulas below.

$$X = X^{3} = X^{-1} = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}^{-1} = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}$$

$$Z = Z^{3} = Z^{-1} = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}$$

$$H = H^{3} = H^{-1} = \Bigg( \frac{1}{ \sqrt{2}} \cdot \begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix} \Bigg) ^{-1} = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{pmatrix}^{-1} = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{pmatrix}$$

I'm pretty sure there is a function within the SciPy library that will invert the matrices for me. What am I looking for? Let's check the docs. We can find what we need in the docs.scipy.org/.../linear-algebra#finding-inverse section.

But now I'm not so sure anymore, because I think I need to multiply each matrix with itself three times in order to show that the Pauli {X, Z} and Hadamard gates are self-inverse.

Anyway, I did the following computation, even though I am no longer sure it is correct.

# Useful to know
# Shorthand inverse: pauli_x.I
# Shorthand transpose: pauli_x.T

print("inv(pauli_x):\n", sp.linalg.inv(pauli_x))
print("pauli_x*pauli_x*pauli_x:\n", pauli_x * pauli_x * pauli_x, "\n")

print("inv(pauli_z):\n", sp.linalg.inv(pauli_z))
print("pauli_z*pauli_*pauli_z:\n", pauli_z * pauli_z * pauli_z, "\n")

print("inv(H):\n", sp.linalg.inv(H))
print("H*H*H:\n", H * H * H, "\n")


inv(pauli_x):
 [[-0.  1.]
 [ 1.  0.]]
pauli_x*pauli_x*pauli_x:
 [[0 1]
 [1 0]] 

inv(pauli_z):
 [[ 1.  0.]
 [ 0. -1.]]
pauli_z*pauli_z*pauli_z:
 [[ 1  0]
 [ 0 -1]] 

inv(H):
 [[ 0.70710678  0.70710678]
 [ 0.70710678 -0.70710678]]
H*H*H:
 [[ 0.70710678  0.70710678]
 [ 0.70710678 -0.70710678]] 

Let's check for equality now.

It seems like multiplying each matrix with itself three times and inverting it both give back the original matrix!

The code above roughly shows what we want, but it does not precisely answer the question that was asked. For example, what if we had a huge matrix? Comparing the printed outputs by eye to decide whether the matrices are equal would take forever. We need a faster way to do this.

It would be better to test directly whether X = X⁻¹. One idea was to flatten the output matrices and reduce the elementwise comparison to a single boolean; that way you can come to an answer relatively easily. But there is a method in SciPy that takes care of this for us.

I think it's the following operation? I'm pretty sure I don't fully grasp what they mean by matrix self-inversion. As I understand it, a matrix is self-inverse when it is equal to its own inverse, i.e. X = X⁻¹, which means applying X twice brings you back to where you started: X · X = I.
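To make the definition concrete: if X is self-inverse, applying it twice must give the identity. A tiny check, using the pauli_x matrix defined above:

# Self-inverse means X·X = I
print(pauli_x * pauli_x)
# => the 2x2 identity matrix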

sp.linalg.inv(pauli_x) == pauli_x
sp.equal(sp.linalg.inv(pauli_x), pauli_x)
pauli_x.I == pauli_x
# all of the above output the following: =>
# matrix([[ True, True], [ True, True]])
matrix([[ True,  True],
        [ True,  True]])

The code above doesn't give one clear answer to what we want.
Let's try the following:

print("sp.matrix.all(pauli_x == pauli_x.I): ", sp.matrix.all(pauli_x == pauli_x.I))
# => True
print("sp.matrix.all(pauli_z == pauli_z.I): ", sp.matrix.all(pauli_z == pauli_z.I))
# => True
print("sp.matrix.all(H == H.I): ", sp.matrix.all(H == H.I))
# => True
sp.matrix.all(pauli_x == pauli_x.I):  True
sp.matrix.all(pauli_z == pauli_z.I):  True
sp.matrix.all(H == H.I):  False

Clear and concise for the Pauli gates. The Hadamard check, however, comes back False even though H is self-inverse: computing H.I numerically introduces tiny floating-point rounding errors, so exact elementwise equality fails. Comparing with a tolerance solves this.
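Here is a minimal sketch of the tolerance-based comparison; it assumes sp.allclose is available at the SciPy top level (older SciPy versions re-export NumPy helpers such as allclose, just like sp.matrix and sp.sqrt used above), otherwise numpy.allclose behaves identically.

# Tolerance-aware equality: True if all entries match within a small numerical tolerance
print("allclose(pauli_x, pauli_x.I):", sp.allclose(pauli_x, pauli_x.I))  # => True
print("allclose(pauli_z, pauli_z.I):", sp.allclose(pauli_z, pauli_z.I))  # => True
print("allclose(H, H.I):           ", sp.allclose(H, H.I))               # => True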

3.

$$H \cdot X \cdot H = \frac{1}{ \sqrt{2}} \cdot \begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix} \cdot \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} \cdot \frac{1}{ \sqrt{2}} \cdot \begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} = Z$$

# Pay attention to the order in which you multiply the matrices:
# pauli_x * H * H != H * pauli_x * H
H * pauli_x * H
matrix([[ 1.,  0.],
        [ 0., -1.]])
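The same tolerance-based check (same sp.allclose assumption as above) confirms the result without comparing entries by eye:

# Numerical check that H·X·H equals Z (tolerance-based because of the 1/sqrt(2) factors)
print("H*X*H == Z:", sp.allclose(H * pauli_x * H, pauli_z))
# => True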

4.

If the above proof is true, we should be able to get X by multiplying Z and H in the same manner.
$$H \cdot Z \cdot H = \Bigg( \frac{1}{ \sqrt{2}} \cdot \begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix} \Bigg) \cdot \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} \cdot \Bigg( \frac{1}{ \sqrt{2}} \cdot \begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix} \Bigg) = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} = X$$

# Remember the order of the multiplication of the matrices matters!
print(H * pauli_z * H)
[[0. 1.]
 [1. 0.]]

5.

We can use the following code to get an idea of how to brute force the answer. By analyzing the resulting matrices, we should be able to identify a pattern.

counter = 0
matrix_s = sp.matrix(S)
# Note: matrix_s is multiplied by S before the first print,
# so the matrix printed at step n is S to the power n+1
while counter < 12:
    matrix_s *= S
    counter += 1
    print(counter, ".\n", matrix_s, "\n")
1 .
 [[ 1.+0.j  0.+0.j]
 [ 0.+0.j -1.+0.j]] 

2 .
 [[1.+0.j 0.+0.j]
 [0.+0.j 0.-1.j]] 

3 .
 [[1.+0.j 0.+0.j]
 [0.+0.j 1.+0.j]] 

4 .
 [[1.+0.j 0.+0.j]
 [0.+0.j 0.+1.j]] 

5 .
 [[ 1.+0.j  0.+0.j]
 [ 0.+0.j -1.+0.j]] 

6 .
 [[1.+0.j 0.+0.j]
 [0.+0.j 0.-1.j]] 

7 .
 [[1.+0.j 0.+0.j]
 [0.+0.j 1.+0.j]] 

8 .
 [[1.+0.j 0.+0.j]
 [0.+0.j 0.+1.j]] 

9 .
 [[ 1.+0.j  0.+0.j]
 [ 0.+0.j -1.+0.j]] 

10 .
 [[1.+0.j 0.+0.j]
 [0.+0.j 0.-1.j]] 

11 .
 [[1.+0.j 0.+0.j]
 [0.+0.j 1.+0.j]] 

12 .
 [[1.+0.j 0.+0.j]
 [0.+0.j 0.+1.j]] 

As we can see above, the Pauli-Z matrix shows up at output indices {1, 5, 9, ...}. Since the loop starts from S², the matrix printed at index n is S^(n+1), so these entries correspond to S² = Z, S⁶ = Z, S¹⁰ = Z. In other words S² = Z, and because S⁴ = I (see index 3), four more factors of S bring us back to Z: S^(4k+2) = Z for every k ≥ 0.

$$Z= \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 1.+0.j& 0.+0.j\\ 0.+0.j & -1.+0.j \end{pmatrix}$$
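To express the pattern in code rather than by reading off the printed matrices, here is a small sketch; it relies on the ** operator of sp.matrix (which computes matrix powers) and on sp.allclose as before.

# Which powers of S (up to 12) give the Pauli-Z matrix?
powers_equal_z = [n for n in range(1, 13) if sp.allclose(S**n, pauli_z)]
print(powers_equal_z)
# => [2, 6, 10]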

6.

We can use the output from question 5 to get our answer: the identity matrix appears at output indices {3, 7, 11, ...}, which (remembering the index offset) correspond to S⁴, S⁸, S¹², so S^(4k) = I.
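The same kind of search (same assumptions as the snippet above) confirms this:

# Which powers of S (up to 12) give the identity?
powers_equal_i = [n for n in range(1, 13) if sp.allclose(S**n, pauli_i)]
print(powers_equal_i)
# => [4, 8, 12]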

7.

This question continues the theme of the previous two. We can multiply S with itself a couple of times and compare against its inverse, just like before. My first guess is that the pattern continues here as well, and that the fourth power of S will turn out to be the inverse of S.

print("S.I\n", S.I, "\n")
print("S*S*S: \n", S*S*S, "\n")

print("S.I == S*S*S:\n", sp.matrix.all(S.I == S*S*S))
S.I
 [[ 1.+0.j -0.+0.j]
 [ 0.+0.j  0.-1.j]] 

S*S*S: 
 [[1.+0.j 0.+0.j]
 [0.+0.j 0.-1.j]] 

S.I == S*S*S:
 True

It turns out you only need to multiply it with itself three times: S · S · S = S³ = S⁻¹, which makes sense because S⁴ = I.
