If you thought ML was fun, here comes QML!
A lot of work has been done in the area of Quantum Machine Learning (QML), and this blog is just a short 10-minute intro into the world of QML. So treat it as a fun read, while I plan to write a more systematic & in-depth series of tutorials on QML.
Classical computers are universal in that they can implement any logical operation, whereas quantum computers are universal in the sense that any quantum state over a set of qubits can be converted into any other quantum state. A lot of the early hope in the feasibility of (circuit-based) quantum computing comes from the Solovay–Kitaev theorem, which in very broad terms (a finite set of gates approximating any state transformation, much like a finite set of neurons approximating any function) can be equated to the universal approximation theorem of deep learning.
Ising Model & the Hamiltonian
Simply put, the Ising model is a mathematical model for representing the phase transitions & interactions within a system [1][2]. The energy of such a system at any point is given by its Hamiltonian (equation) at that point.
So let's say in Fig 1. below we have a system given by a 4×4 (lattice) matrix. Each cell is in one of two possible states (say +1, -1), and every cell has at most four adjacent cells (top, left, right, bottom), if present.
So the adjacent cells for cell 6 are 2, 5, 7 & 10.

Now each cell possesses some energy due to the interaction with its neighbors and some due to its own state, and every system wants to reach (transition to) its lowest-energy state.

Eq 1. E(M) = − Σ_<i,j> J_i,j * M_i * M_j − μ * Σ_i h_i * M_i

So Eq 1. above is the energy equation for such a system at any point:
- M: the system/matrix above at any point
- <i,j>: all pairs of adjacent cells i, j
- J_i,j: the interaction energy coefficient between adjacent cells i and j
- M_i: the value (state) of cell i, one of {+1, -1}
- h_i: the (external field) energy coefficient for cell i
- μ: the overall (external) energy coefficient applied to each cell
Now, though the equation above is not accurate enough even for a simple physical magnet (for which it was initially proposed), it does a fair job of modelling the energy of a system due to the state of each particle, its interaction with other particles, and any external influence on each particle.
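To make Eq 1. concrete, here is a minimal, purely illustrative NumPy sketch (my own toy code, not from the notebook) that computes this energy for a small lattice of ±1 cells using the top/left/right/bottom adjacency described above. For simplicity it assumes a single uniform interaction coefficient J instead of per-pair J_i,j values, and the J, h & μ values are arbitrary placeholders.
import numpy as np

def ising_energy(M, J=1.0, h=None, mu=1.0):
    # M: 2D array of +1/-1 cell states; J: uniform interaction coefficient
    # h: per-cell external field coefficients (same shape as M); mu: overall coefficient
    if h is None:
        h = np.zeros_like(M, dtype=float)
    interaction = 0.0
    rows, cols = M.shape
    for i in range(rows):
        for j in range(cols):
            if i + 1 < rows:   # bottom neighbour (each adjacent pair counted once)
                interaction += M[i, j] * M[i + 1, j]
            if j + 1 < cols:   # right neighbour
                interaction += M[i, j] * M[i, j + 1]
    field = np.sum(h * M)
    return -J * interaction - mu * field   # Eq 1. with a uniform J

M = np.random.choice([-1, 1], size=(4, 4))   # a random 4x4 lattice, as in Fig 1.
print(M)
print(ising_energy(M))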




Fig 2. above shows the transition of such a system (200×200) from the initial state (left) to the final state (right).
Ways for Quantum ML
Summarizing all the possible ways to do Quantum ML would not be possible for me, mostly due to my limited understanding of the handful of algorithms I have read about. But in general, any quantum optimization/minimization algorithm can have an equivalent QML use case.
Of the three methods below, we will focus only on the first for the rest of this blog; the other two are included just to give the bigger picture of QML algorithms.
Quantum Circuit/ Gate Model
- Map your classical problem to a quantum algorithm.
- The quantum algorithm is then converted to a quantum circuit.
- The quantum circuit is then run on a quantum processor.
Adiabatic Quantum Computing [ignore for now]
- H1 represents the energy of a system purely due to the individual cells & not their interactions (i.e. J_i,j = 0). (Note: the ground/minimum-energy state here is a superposition over all the qubits; ignore this for now if you do not know what superposition of a qubit is.)
- H2 represents the general energy of our target system, as given by Eq 1. above.
- Now, over time (with each step) we want to transition from the ground state of H1 to the ground state of H2.
- At any time t ∈ [0,1], the Hamiltonian of the system is a convex combination of H1 and H2:
H(t) = (1-t) * H1 + t * H2
- Note that this is a hard problem to solve, as we do not know the correct transition speed (something like selecting a learning rate in gradient descent) for moving between the states. A tiny numerical sketch of the interpolation itself follows below.
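To make just the interpolation step concrete (and only that; real adiabatic evolution is about staying in the ground state while H changes, which this sketch does not attempt), here is a tiny NumPy example that linearly mixes two made-up 2x2 Hamiltonian matrices H1 & H2 and tracks the ground-state (minimum) energy along the way. The matrices are placeholders picked for illustration, not derived from Eq 1.
import numpy as np

H1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])    # toy "mixing" Hamiltonian; its ground state is a superposition
H2 = np.array([[1.0, 0.0],
               [0.0, -1.0]])   # toy "target" Hamiltonian; its ground state is a basis state

for t in np.linspace(0.0, 1.0, 5):
    Ht = (1 - t) * H1 + t * H2                   # H(t) = (1-t) * H1 + t * H2
    ground_energy = np.linalg.eigvalsh(Ht).min()
    print("t = %.2f, ground-state energy = %.3f" % (t, ground_energy))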
Quantum Annealing [ Simplified Adiabatic Quantum Computing]
- This is a more feasible/practical version of Adiabatic Quantum Computing: rather than carefully tracking the ground state of our target system, we go and find many different low-energy states across transitions (run with different transition speeds), and we hope that the lowest-energy state found is the ground state of the target system.
If you want to read more on these, check the pointers in the references at the end.
Basically, all the above methods eventually boil down to the same thing: we obtain an equivalent quantum circuit & solve it on a Quantum Processing Unit (QPU), in combination with our standard CPU.
Going forward in this blog, we will assume we are magically given a quantum circuit for our problem & we want to solve it on a QPU through a programming interface.
Let’s Code
For now, we will build a simple regression-cum-classification problem, provided in the notebook below, and break it down piece by piece here.
AbhishekAshokDubey/quantum-computing-101
Again, we will simply assume a circuit for the problem, mainly driven by the final MNIST classification circuit suggested by Farhi et al. & also used in the Google TFQ MNIST example.
Before we start, we need to install tensorflow-quantum (TFQ) & Cirq on top of tensorflow (TF).
- tensorflow-quantum: serves as the programming interface to any underlying QPU (much like CUDA for a GPU)
- Cirq: serves as the programming language for defining quantum circuits
pip install -q tensorflow==2.1.0
pip install -q tensorflow-quantum
pip install -q cirq
Let’s import all the packages we would need
import numpy as np
import pandas as pd  # used below to pretty-print the generated data
import sympy
# For Quantum ML
import cirq
import tensorflow as tf
import tensorflow_quantum as tfq
# For Visualization
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
Let’s make a dummy toy data for our problem
def get_data(l):
    # Start with all-zero 2-bit data points (np.random.randint(0, 1, ...) only returns 0s)
    data = np.random.randint(0, 1, (l, 2))
    label = np.ones(l)
    # Alternate rows: even rows get x1 = 1, odd rows get x2 = 1
    data[0::2, 0] = 1
    data[1::2, 1] = 1
    # Even rows (x1 = 1) are labelled -1, odd rows (x2 = 1) stay +1
    label[0::2] = -1
    # Shuffle the data points & labels together
    p = np.random.permutation(l)
    return data[p], label[p]
x_train, y_train = get_data(5000)
x_test, y_test = get_data(200)
print(pd.DataFrame(np.concatenate((x_train, np.reshape(y_train, (-1,1))), axis=1) , columns=["x1", "x2", "y"]))
The generated data (the output of the print statement above) looks like the table below. Basically, x1 = 1 → y = -1 and x2 = 1 → y = +1. Do not ask me why, but this looks like way too simple a problem to start with 🙂
x1 x2 y
0 1.0 0.0 -1.0
1 1.0 0.0 -1.0
2 1.0 0.0 -1.0
3 0.0 1.0 1.0
4 1.0 0.0 -1.0
... ... ... ...
4995 1.0 0.0 -1.0
4996 0.0 1.0 1.0
4997 1.0 0.0 -1.0
4998 1.0 0.0 -1.0
4999 1.0 0.0 -1.0
One fun fact about quantum data: it can't be stored; it can only be generated/provided as a quantum circuit itself.
Hence each data point we generated above has to be converted to an equivalent quantum circuit. So basically, for a 0 bit we do nothing, but we pass a 1 bit through a NOT (X) gate. Below is a simple function that does exactly that, converting our 0/1 data into equivalent circuits.
def convert_to_circuit(x):
    # One qubit per input bit, laid out on a 1x2 grid
    qubits = cirq.GridQubit.rect(1, 2)
    circuit = cirq.Circuit()
    # Apply an X (NOT) gate wherever the bit is 1; do nothing for 0
    for i, val in enumerate(x):
        if val:
            circuit.append(cirq.X(qubits[i]))
    return circuit
So, let’s get the equivalent quantum data (circuit) for each data point in our classical dataset.
x_train_circ = [convert_to_circuit(x) for x in x_train]
x_test_circ = [convert_to_circuit(x) for x in x_test]
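As a quick optional sanity check, you can print one classical data point next to its circuit; a 1 in either position should show up as an X gate on the corresponding qubit (the exact sample you see depends on the random shuffle).
print(x_train[0])        # e.g. [1 0]
print(x_train_circ[0])   # the matching circuit, e.g. a single X gate on qubit (0, 0)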
To pass the circuits/data points generated above with Cirq into TFQ, we need to convert each circuit/data point into a TFQ tensor.
x_train_tfcirc = tfq.convert_to_tensor(x_train_circ)
x_test_tfcirc = tfq.convert_to_tensor(x_test_circ)
Now we are ready with our quantum data; all we need now is a quantum circuit for our problem & a way to solve it 🙂
As mentioned above, let's just assume one circuit, even if it's not the best for the problem at hand. We trust that if a similar circuit can solve the simplified binary (3 vs 6) MNIST digit classification, it should easily solve our problem.

Here the (-1, -1) line/qubit is the one that will carry our final output class prediction, while the (0, 0) & (1, 0) qubits carry our input data.
So, let’s build the above circuit in CIRQ.
input_qubits = cirq.GridQubit.rect(2, 1) # 2x1 grid.
readout = cirq.GridQubit(-1, -1) # a qubit at [-1,-1]
model_circuit = cirq.Circuit()
# Prepare the readout qubit in the |-> state
model_circuit.append(cirq.X(readout))
model_circuit.append(cirq.H(readout))
# Parameterized XX interactions between each input qubit & the readout
alpha1 = sympy.Symbol('a1')
model_circuit.append(cirq.XX(input_qubits[0], readout)**alpha1)
alpha2 = sympy.Symbol('a2')
model_circuit.append(cirq.XX(input_qubits[1], readout)**alpha2)
# Parameterized ZZ interactions between each input qubit & the readout
beta1 = sympy.Symbol('b1')
model_circuit.append(cirq.ZZ(input_qubits[0], readout)**beta1)
beta2 = sympy.Symbol('b2')
model_circuit.append(cirq.ZZ(input_qubits[1], readout)**beta2)
# Final Hadamard on the readout; we will measure its Z expectation
model_circuit.append(cirq.H(readout))
model_readout = cirq.Z(readout)
SVGCircuit(model_circuit)
The SVGCircuit(model_circuit) command should draw the circuit diagram inline in your notebook.
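If you also want to sanity-check the circuit numerically (optional, and not part of the training flow), you can simulate it with Cirq for some arbitrary placeholder values of the four symbols; the values below are just made up for illustration.
# Arbitrary placeholder values for a1, a2, b1, b2 -- only to see the circuit run end to end
resolver = cirq.ParamResolver({'a1': 0.25, 'a2': 0.5, 'b1': 0.75, 'b2': 1.0})
result = cirq.Simulator().simulate(model_circuit, param_resolver=resolver)
print(result)   # final state of the three qubits for these parameter values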
And from here on, things work much the same as in classical machine learning.
Much like a Keras Sequential neural network model, we build a model around our problem circuit.
# Build the model.
model = tf.keras.Sequential([
    # The input is the data-circuit, encoded as a tf.string
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    # The PQC layer returns the expected value of the readout gate at (-1, -1)
    tfq.layers.PQC(model_circuit, model_readout),
])
Finally, we would need to define the loss function, optimizer & metric to track.
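Note that the accuracy passed to model.compile below is not a stock Keras string metric: our labels are ±1 and the PQC outputs a real-valued expectation in [-1, 1], so we need a custom metric. Here is one minimal sketch of such a metric, assuming we simply count how often the sign of the prediction matches the label (the notebook may define its own variant).
def accuracy(y_true, y_pred):
    # Fraction of examples where sign(prediction) matches the +1/-1 label
    y_true = tf.squeeze(y_true)
    y_pred = tf.squeeze(y_pred)
    return tf.reduce_mean(
        tf.cast(tf.equal(tf.sign(y_pred), tf.sign(y_true)), tf.float32))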
model.compile(
    loss=tf.keras.losses.MeanSquaredError(),
    optimizer=tf.keras.optimizers.Adam(),
    metrics=[accuracy])
Below is what the model built so far contains (from model.summary()).

Let’s Train our Quantum Circuit model with Quantum Data.
model_history = model.fit(
    x_train_tfcirc, y_train,
    batch_size=200,
    epochs=5,
    verbose=1,
    validation_data=(x_test_tfcirc, y_test))
results = model.evaluate(x_test_tfcirc, y_test)
Below is how your training & validation loss (& accuracy) change with each epoch.
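If you want to recreate such a plot yourself, here is a minimal sketch using the model_history object returned by fit; it assumes the custom metric above was registered under the name accuracy (so the history keys are loss, val_loss, accuracy & val_accuracy).
plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(model_history.history['loss'], label='train loss')
plt.plot(model_history.history['val_loss'], label='validation loss')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(model_history.history['accuracy'], label='train accuracy')
plt.plot(model_history.history['val_accuracy'], label='validation accuracy')
plt.legend()
plt.show()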

Hoping all went well, try predicting values for the test data.
print(list(zip(model.predict(x_test_tfcirc).ravel()[:10], y_test[:10])))
[(-0.7765335, -1.0),
(0.77620333, 1.0),
(0.77620333, 1.0),
(0.77620333, 1.0),
(-0.7765335, -1.0),
(0.77620333, 1.0),
(-0.7765335, -1.0),
(0.77620333, 1.0),
(0.77620333, 1.0),
(0.77620333, 1.0)]
Now, based on the above, you might say we have solved a classical classification problem as a regression problem, so please go ahead & improve it.
Hint: hinge loss?
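For example, one direction (just a sketch, not necessarily the best setup) is to keep the ±1 labels but switch from mean squared error to a hinge loss, which is also what the TFQ MNIST tutorial uses; everything else can stay the same.
# Same model, recompiled with a hinge loss instead of mean squared error
model.compile(
    loss=tf.keras.losses.Hinge(),
    optimizer=tf.keras.optimizers.Adam(),
    metrics=[accuracy])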
Finally, once you understand this, it will be much easier for you to write your own simple MNIST two-class classifier without referring to Google's tutorial for the same 🙂
If you are wondering how a Quantum Processing Unit would be used in combination with our current processing units, and where everything we used above fits into the bigger picture, the architecture diagram below from the Google TFQ team should shed some quick light on this for you.

Now it's time to move from a small toy example to a better toy example; try the ones below.
- Classification with Quantum Neural Networks on Near Term Processors
- MNIST classification | TensorFlow Quantum
Cheers & Happy Learning!!!
References:
- https://www.youtube.com/watch?v=16ZfkPRVf2w
- https://github.com/AbhishekAshokDubey/quantum-computing-101/tree/master/qml-101
- https://www.tensorflow.org/quantum/tutorials/mnist
- https://www.youtube.com/watch?v=-o9AhIz1uvo
- https://medium.com/@adubey40/quantum-computing-101-1ed742540ba2