Quantum Machine Learning 102 — QSVM Using Qiskit

Shubham Agnihotri
5 min read · Sep 23, 2020


As part of the peer learning series, Quantum Computing India held a session on Quantum Machine Learning 102 — QSVM, hosted by me. Here's a quick log of the session; you will find the full video link at the end.

Table of Contents

  • What is a Support Vector Machine, and why do we need a Quantum Support Vector Machine?
  • Quantum Classification
  • Building the Circuit to solve the classification problem
  • Hands On !!!

What is a Support Vector Machine?

  • Simply put, a Support Vector Machine (SVM) is a classification algorithm that falls under supervised learning.
  • In supervised learning we have a labelled dataset, i.e. each data entry belongs to some class, labelled as a category in the dataset.
  • The algorithm tries to learn from these labels and predict which category a new data point belongs to.
SVM

Here in the GIF we can see a binary classification problem, and the dimensionality of the dataset is also low, so it can easily be separated in 3D space. But once we move to a multiclass dataset with higher dimensionality, our classical SVM will behave like this👇.


This is where Quantum Machine Learning comes in to solve the problem, leveraging the quantum properties of entanglement and superposition, which enable parallel computation.

Quantum Classification

In brief, there are three steps required to perform a quantum classification:

  1. Convert the classical data to quantum data
  2. Process the data
  3. Apply measurements to read out the result

Quantum Feature Map

Quantum feature maps V(Φ(𝑥⃗)) convert classical data to quantum data. Here Φ(…) is a classical function applied to the classical data point 𝑥⃗, and V(Φ(𝑥⃗)) is the parameterized circuit that encodes the result into a quantum state.

The reason for choosing a quantum feature map is to obtain a quantum advantage.

Four main factors guide the choice of a feature map:

  • The feature map circuit depth
  • The data map function for encoding the classical data
  • The quantum gate set
  • The order of expansion

Types of Feature Maps:

ZFeatureMap

The first-order Pauli Z-evolution circuit.

A first-order diagonal expansion is implemented using the ZFeatureMap, where |S| = 1. The resulting circuit contains no interactions between features of the encoded data, and therefore no entanglement.

ZFeatureMap circuit

Arguments

  1. feature_dimension: dimensionality of the classical data (equal to the number of required qubits)
  2. reps: number of times the feature map circuit is repeated
  3. data_map_func: function encoding the classical data (see the sketch below)
Default Settings
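As a quick sketch of how these arguments fit together, the snippet below builds a ZFeatureMap using Qiskit's circuit library (note the actual argument names are feature_dimension and data_map_func; the custom encoding function here is made up for illustration):

```python
import numpy as np
from qiskit.circuit.library import ZFeatureMap

# Hypothetical custom encoding function phi; Qiskit's default simply
# returns the feature value itself when |S| = 1.
def custom_data_map(x):
    return np.sum(x) ** 2

# Three features -> three qubits; the circuit is repeated `reps` times.
feature_map = ZFeatureMap(feature_dimension=3, reps=2,
                          data_map_func=custom_data_map)
print(feature_map.decompose().draw())
```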

ZZFeatureMap

  1. Second-order Pauli-Z evolution circuit.
  2. The ZZFeatureMap allows |S| ≤ 2, so interactions in the data will be encoded in the feature map according to the connectivity graph and the classical data map.
  3. Here ϕ is a classical non-linear function
ZZFeatureMap circuit

Arguments

  1. feature_dimension: dimensionality of the classical data (equal to the number of required qubits)
  2. reps: number of times the feature map circuit is repeated
  3. data_map_func: function encoding the classical data
  4. entanglement: connectivity, 'full' or 'linear', or your own entanglement structure (see the sketch below)
ϕ mapping for ZZFeatureMap
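For reference, Qiskit's default data map is ϕ(x_i) = x_i for single features and ϕ(x_i, x_j) = (π − x_i)(π − x_j) for pairs. A minimal construction sketch, under the same circuit-library assumptions as above:

```python
from qiskit.circuit.library import ZZFeatureMap

# entanglement can be 'full', 'linear', or an explicit list of qubit
# pairs such as [[0, 1], [1, 2]].
feature_map = ZZFeatureMap(feature_dimension=2, reps=2,
                           entanglement='linear')
print(feature_map.decompose().draw())
```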

PauliFeatureMap

  1. A more general form of the feature map
  2. It allows the user to create feature maps using different Pauli gates
  3. The default value is ['Z', 'ZZ'], which is equivalent to the ZZFeatureMap.
PauliFeatureMap with paulis = ['Z', 'Y', 'ZZ']

Arguments

  1. feature_dimension: dimensionality of the classical data (equal to the number of required qubits)
  2. reps: number of times the feature map circuit is repeated
  3. data_map_func: function encoding the classical data
  4. entanglement: connectivity, 'full' or 'linear', or your own entanglement structure
  5. paulis: the list of Pauli strings to use (see the sketch below)
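A sketch reproducing the ['Z', 'Y', 'ZZ'] configuration pictured above, again assuming the circuit-library API:

```python
from qiskit.circuit.library import PauliFeatureMap

# paulis=['Z', 'ZZ'] (the default) would reproduce the ZZFeatureMap.
feature_map = PauliFeatureMap(feature_dimension=2, reps=1,
                              paulis=['Z', 'Y', 'ZZ'])
print(feature_map.decompose().draw())
```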

For Multiclass problems

There are various multiclass extensions supported for QSVM:

All Pairs

This extension creates k × (k − 1)/2 binary filters for k classes.

If there are 4 classes, i.e. A, B, C and D, then there will be 6 filters: AB, AC, AD, BC, BD and CD. The final class is predicted based on these 6 filter values.

One Against Rest

In this extension, we create n filters for n classes, i.e. one filter per class. Each filter has to classify whether a data point falls in its particular class or not.

Error Correcting Code (ECC)

In this extension, we create n bits for m classes. These bits can be thought of as filters: while training, the bit values change, but once training is done, each class is assigned a fixed bit string.

ECC Example

These bits act as a code for the class; for example, class 2 in the example above is 100100. At inference time, when a test data point is projected, it generates these bits, and the Euclidean distance to each class code is calculated. The class at the shortest distance is chosen as the class of the test data point.
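To make the decoding step concrete, here is a small sketch; the codeword table is invented for illustration and is not the one from the example image:

```python
import numpy as np

# Hypothetical codeword table: one fixed bit string per class.
codewords = np.array([
    [0, 1, 1, 0, 1, 1],   # class 0
    [1, 0, 0, 1, 0, 0],   # class 1
    [1, 1, 0, 1, 1, 0],   # class 2
])

# Bits produced for one test point by the n trained binary classifiers.
predicted_bits = np.array([1, 0, 0, 1, 1, 0])

# Euclidean distance to every codeword; the nearest class wins.
distances = np.linalg.norm(codewords - predicted_bits, axis=1)
print("Predicted class:", np.argmin(distances))
```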

The Classical Optimizer

Once we get our predictions, a classical optimization routine updates the parameter values of our circuit and repeats the whole process.

The task of this loop is to lower the value of the cost function, resulting in higher accuracy.

There are three optimizers to choose from:

  1. COBYLA (Constrained Optimization BY Linear Approximation)
  2. SPSA (Simultaneous Perturbation Stochastic Approximation)
  3. SLSQP (Sequential Least Squares Programming)

We will cover each of them when we look at VQC in 103.

QSVM uses SPSA in the backend, as it gave the best results, whereas VQC gives you a free hand to choose between different optimizers.

Currently the loss function is fixed and cannot be altered.

Let's Dive into the Code

You can find the whole code here.

ZFeatureMap:

Code for ZFeatureMap
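The embedded gist does not survive in this log, so below is a minimal end-to-end sketch assuming the Qiskit Aqua API that was current in 2020 (Aqua has since been deprecated in favour of qiskit-machine-learning); the dataset sizes and seed are arbitrary choices:

```python
from qiskit import BasicAer
from qiskit.aqua import QuantumInstance, aqua_globals
from qiskit.aqua.algorithms import QSVM
from qiskit.circuit.library import ZFeatureMap
from qiskit.ml.datasets import ad_hoc_data

seed = 10598
aqua_globals.random_seed = seed

# Two-feature binary toy dataset shipped with Aqua.
feature_dim = 2
sample_total, training_input, test_input, class_labels = ad_hoc_data(
    training_size=20, test_size=10, n=feature_dim, gap=0.3
)

# First-order Z feature map: no entanglement between features.
feature_map = ZFeatureMap(feature_dimension=feature_dim, reps=2)

qsvm = QSVM(feature_map, training_input, test_input)
quantum_instance = QuantumInstance(
    BasicAer.get_backend('qasm_simulator'),
    shots=1024, seed_simulator=seed, seed_transpiler=seed
)

result = qsvm.run(quantum_instance)
print("Testing accuracy:", result['testing_accuracy'])
```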

ZZFeatureMap:

Code for ZZFeatureMap
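Again as a sketch continuing the previous snippet (same dataset, backend, and Aqua-era assumptions), only the feature map changes:

```python
from qiskit.circuit.library import ZZFeatureMap

# Same pipeline as above; only the feature map is swapped.
feature_map = ZZFeatureMap(feature_dimension=feature_dim, reps=2,
                           entanglement='linear')
qsvm = QSVM(feature_map, training_input, test_input)
result = qsvm.run(quantum_instance)
print("Testing accuracy:", result['testing_accuracy'])
```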

PauliFeatureMap (['Z', 'X', 'ZY']):

Code for PauliFeatureMap
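The same pipeline once more, swapping in the ['Z', 'X', 'ZY'] gate set from the heading; a sketch under the same assumptions:

```python
from qiskit.circuit.library import PauliFeatureMap

# The ['Z', 'X', 'ZY'] Pauli strings named in the heading above.
feature_map = PauliFeatureMap(feature_dimension=feature_dim, reps=2,
                              paulis=['Z', 'X', 'ZY'])
qsvm = QSVM(feature_map, training_input, test_input)
result = qsvm.run(quantum_instance)
print("Testing accuracy:", result['testing_accuracy'])
```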

Multiclass Dataset:

Code for Multiclass classification
QSVM
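A sketch of the multiclass case, reusing the quantum_instance from the first snippet and assuming the Aqua-era wine dataset helper and multiclass extensions:

```python
from qiskit.aqua.algorithms import QSVM
from qiskit.aqua.components.multiclass_extensions import AllPairs
from qiskit.circuit.library import ZZFeatureMap
from qiskit.ml.datasets import wine

# Three-class wine dataset reduced to two features (Aqua helper).
feature_dim = 2
sample_total, training_input, test_input, class_labels = wine(
    training_size=24, test_size=6, n=feature_dim
)

feature_map = ZZFeatureMap(feature_dimension=feature_dim, reps=2)

# multiclass_extension could equally be OneAgainstRest() or
# ErrorCorrectingCode() from the same module.
qsvm = QSVM(feature_map, training_input, test_input,
            multiclass_extension=AllPairs())
result = qsvm.run(quantum_instance)
print("Testing accuracy:", result['testing_accuracy'])
```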

References

  1. QSVM Documentation
  2. Qiskit tutorials
  3. Qiskit Circuit Library

Quantum Machine Learning Team:
