Vectors and matrices in quantum computing

Some familiarity with linear algebra is essential to understand quantum computing. This article introduces the basic concepts of linear algebra and how to work with vectors and matrices in quantum computing.

Vectors

A column vector, or vector for short, $v$ of dimension (or size) $n$ is a collection of $n$ complex numbers $(v_1,v_2,\ldots,v_n)$ arranged as a column:

$$v =\begin{bmatrix} v_1\\ v_2\\ \vdots\\ v_n \end{bmatrix}$$

The norm of a vector $v$ is defined as $\sqrt{\sum_i |v_i|^2}$. A vector is called a unit vector if its norm is $1$.

The adjoint of a column vector $v$ is a row vector denoted as $v^\dagger$ and is defined as the conjugate transpose of $v$. For a column vector $v$ of dimension $n$, the adjoint is a row vector of dimension $1 \times n$:

$$\begin{bmatrix}v_1 \\ \vdots \\ v_n \end{bmatrix}^\dagger = \begin{bmatrix}v_1^* & \cdots & v_n^* \end{bmatrix}$$

where $v_i^*$ denotes the complex conjugate of $v_i$.
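For concreteness, here's a minimal sketch of the norm and the adjoint using NumPy (an illustrative choice; the vector values are made up for the example):

```python
import numpy as np

# A column vector of dimension 2 with complex entries (values chosen arbitrarily).
v = np.array([[1 + 2j],
              [3 - 1j]])

# The norm: sqrt(sum_i |v_i|^2).
norm = np.linalg.norm(v)          # sqrt(5 + 10) = sqrt(15)

# The adjoint: the conjugate transpose, a 1 x 2 row vector.
v_dagger = v.conj().T             # [[1.-2.j, 3.+1.j]]

print(norm, v_dagger)
```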

Using linear algebra, the state of a qubit $ \psi = a \ket{0} + b \ket{1}$ is described as a quantum state vector $\begin{bmatrix} a \\ b \end{bmatrix}$, where $|a|^2 + |b|^2 = 1$. For more information, see The qubit.
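As a quick check, here's a sketch that verifies a candidate state vector is a unit vector, using the state $(\ket{0} + \ket{1})/\sqrt{2}$ as a stand-in example:

```python
import numpy as np

# The state (|0> + |1>)/sqrt(2) as a quantum state vector [a, b].
a = b = 1 / np.sqrt(2)
psi = np.array([a, b])

# A valid state vector must satisfy |a|^2 + |b|^2 = 1.
print(np.isclose(np.linalg.norm(psi), 1.0))   # True
```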

Scalar product

Two vectors can be multiplied together through the scalar product, also known as dot product or inner product. As the name implies, the result of the scalar product of two vectors is a scalar. The scalar product gives the projection of one vector onto another and is used to express one vector as a sum of other simpler vectors. The scalar product between two column vectors $u$ and $v$ is denoted as $\left\langle u, v\right\rangle = u^\dagger v $ and is defined as

$$ \left\langle u, v\right\rangle = u^\dagger v= \begin{bmatrix}u_1^* & \cdots & u_n^* \end{bmatrix} \begin{bmatrix}v_1\\ \vdots\\ v_n \end{bmatrix} = u_1^* v_1 + \cdots + u_n^* v_n. $$

With the scalar product, the norm of a vector $v$ can be written as $\sqrt{\langle v, v\rangle}$.
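In NumPy, `np.vdot` conjugates its first argument, which matches the definition $\langle u, v\rangle = u^\dagger v$. A short sketch with made-up values:

```python
import numpy as np

u = np.array([1j, 2])
v = np.array([3, 1 - 1j])

# <u, v> = u† v: vdot conjugates its first argument.
inner = np.vdot(u, v)                     # (-1j)*3 + 2*(1 - 1j) = 2 - 5j

# The norm written with the scalar product: sqrt(<v, v>).
norm_v = np.sqrt(np.vdot(v, v).real)      # sqrt(9 + 2)

print(inner, norm_v)
```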

You can multiply a vector by a number $a$ to form a new vector whose entries are each multiplied by $a$. You can also add two vectors $u$ and $v$ to form a new vector whose entries are the sums of the corresponding entries of $u$ and $v$. Combining these operations with two numbers $a$ and $b$ gives a linear combination:

$$\mathrm{If}~u =\begin{bmatrix} u_1\\ u_2\\ \vdots\\ u_n \end{bmatrix}~\mathrm{and}~ v =\begin{bmatrix} v_1\\ v_2\\ \vdots\\ v_n \end{bmatrix},~\mathrm{then}~ au+bv =\begin{bmatrix} au_1+bv_1\\ au_2+bv_2\\ \vdots\\ au_n+bv_n \end{bmatrix} $$
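Entrywise, this is exactly what NumPy's array arithmetic does; a small sketch with arbitrary values:

```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, 5, 6])
a, b = 2, -1

# Scalar multiplication and vector addition act entry by entry.
print(a * u + b * v)                      # [-2 -1  0]
```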

Matrices

A matrix of size $m \times n$ is a collection of $m\cdot n$ complex numbers arranged in $m$ rows and $n$ columns as shown below:

$$M = \begin{bmatrix} M_{11} & M_{12} & \cdots & M_{1n}\\ M_{21} & M_{22} & \cdots & M_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ M_{m1} & M_{m2} & \cdots & M_{mn} \end{bmatrix}$$

Note

Note that a vector of dimension $n$ is simply a matrix of size $n \times 1$.

Quantum operations are represented by square matrices, that is, matrices with an equal number of rows and columns. For example, single-qubit operations are represented by $2 \times 2$ matrices, like the Pauli $X$ operation:

$$X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$
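As an illustration (a NumPy sketch, separate from the Q# `X` operation mentioned in the tip below), applying the Pauli $X$ matrix to a basis state swaps its amplitudes:

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]])

ket0 = np.array([1, 0])                   # |0>
print(X @ ket0)                           # [0 1], that is, |1>
```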

Tip

In Q#, the Pauli $X$ operation is represented by the `X` operation.

As with vectors, you can multiply a matrix by a number $c$ to obtain a new matrix where every entry is multiplied by $c$, and you can add two matrices of the same size to produce a new matrix whose entries are the sums of the respective entries of the two matrices.

Matrix multiplication

You can also multiply a matrix $M$ of dimension $m\times n$ and a matrix $N$ of dimension $n \times p$ to get a new matrix $P$ of dimension $m \times p$ as follows:

$$\begin{bmatrix} M_{11} & M_{12} & \cdots & M_{1n}\\ M_{21} & M_{22} & \cdots & M_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ M_{m1} & M_{m2} & \cdots & M_{mn} \end{bmatrix} \begin{bmatrix} N_{11} & N_{12} & \cdots & N_{1p}\\ N_{21} & N_{22} & \cdots & N_{2p}\\ \vdots & \vdots & \ddots & \vdots\\ N_{n1} & N_{n2} & \cdots & N_{np} \end{bmatrix} = \begin{bmatrix} P_{11} & P_{12} & \cdots & P_{1p}\\ P_{21} & P_{22} & \cdots & P_{2p}\\ \vdots & \vdots & \ddots & \vdots\\ P_{m1} & P_{m2} & \cdots & P_{mp} \end{bmatrix}$$

where the entries of $P$ are $P_{ik} = \sum_j M_{ij}N_{jk}$. For example, the entry $P_{11}$ is the sum of the products of the entries of the first row of $M$ with the corresponding entries of the first column of $N$. Because a vector is simply a special case of a matrix, this definition extends to matrix-vector multiplication.
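A sketch of the dimension rule with NumPy's `@` operator (the entries are arbitrary):

```python
import numpy as np

M = np.array([[1, 2],
              [3, 4],
              [5, 6]])                    # 3 x 2
N = np.array([[1, 0, 2],
              [0, 1, 3]])                 # 2 x 3

# P[i, k] = sum_j M[i, j] * N[j, k]; the result is 3 x 3.
P = M @ N
print(P)
```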

Special types of matrices

One special square matrix is the identity matrix, denoted $\mathbb{I}$, which has all its diagonal elements equal to $1$ and the remaining elements equal to $0$:

$$\mathbb{I}=\begin{bmatrix} 1 & 0 & \cdots & 0\\ 0 & 1 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 1 \end{bmatrix}.$$

For a square matrix $A$, a matrix $B$ is its inverse if $AB = BA = \mathbb{I}$. If a matrix $A$ has an inverse, the inverse matrix is unique and is written as $A^{-1}$.
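A quick numerical sketch of the inverse (the matrix is an arbitrary invertible example):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

A_inv = np.linalg.inv(A)

# AB = BA = I characterizes the inverse.
print(np.allclose(A @ A_inv, np.eye(2)))  # True
print(np.allclose(A_inv @ A, np.eye(2)))  # True
```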

For any matrix $M$, the adjoint or conjugate transpose of $M$ is a matrix $N$ such that $N_{ij} = M_{ji}^*$. The adjoint of $M$ is denoted $M^\dagger$.

A matrix $U$ is unitary if $UU^\dagger = U^\dagger U = \mathbb{I}$ or equivalently, $U^{-1} = U^\dagger$. One important property of unitary matrices is that they preserve the norm of a vector. This happens because

$$\langle v,v \rangle=v^\dagger v = v^\dagger U^{-1} U v = v^\dagger U^\dagger U v = \langle U v, U v\rangle.$$
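You can check both properties numerically; here's a sketch using the Hadamard matrix as a standard example of a unitary:

```python
import numpy as np

# The Hadamard matrix, a well-known unitary.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# Unitarity: H† H = I.
print(np.allclose(H.conj().T @ H, np.eye(2)))   # True

# Norm preservation: ||H v|| = ||v|| for any v.
v = np.array([3.0, 4.0])
print(np.linalg.norm(v), np.linalg.norm(H @ v)) # 5.0 and 5.0
```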

Note

Quantum operations are represented by unitary matrices, which are square matrices whose adjoint is equal to their inverse.

A matrix $M$ is called Hermitian if $M=M^\dagger$.

In quantum computing, there are essentially only two types of matrices that you encounter: Hermitian and unitary.

Tensor product

Another important operation is the tensor product, also called the matrix direct product or Kronecker product.

Consider the two vectors $v=\begin{bmatrix}a \\ b \end{bmatrix}$ and $u =\begin{bmatrix} c \\ d \end{bmatrix}$. Their tensor product is denoted as $v \otimes u$ and results in a vector of dimension $4$, built as a block matrix:

$$ \begin{bmatrix} a \\ b \end{bmatrix} \otimes \begin{bmatrix} c \\ d \end{bmatrix} = \begin{bmatrix} a \begin{bmatrix} c \\ d \end{bmatrix} \\[1.5em] b \begin{bmatrix} c \\ d \end{bmatrix} \end{bmatrix} = \begin{bmatrix} a c \\ a d \\ b c \\ b d \end{bmatrix} $$
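NumPy's `np.kron` computes exactly this block construction; a sketch with arbitrary entries:

```python
import numpy as np

v = np.array([1, 2])                      # [a, b]
u = np.array([3, 4])                      # [c, d]

# [ac, ad, bc, bd]
print(np.kron(v, u))                      # [3 4 6 8]
```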

Note

The tensor product should not be confused with matrix multiplication, which is an entirely different operation.

The tensor product is used to represent the combined state of multiple qubits. The real power of quantum computing comes from leveraging multiple qubits to perform computations. For more, see Operations on multiple qubits.

The tensor product of two square matrices $M$ and $N$ of size $n\times n$ is a larger matrix $P=M\otimes N$ of size $n^2 \times n^2$. For example:

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \otimes \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a\begin{bmatrix} e & f \\ g & h \end{bmatrix} & b\begin{bmatrix} e & f \\ g & h \end{bmatrix} \\[1em] c\begin{bmatrix} e & f \\ g & h \end{bmatrix} & d\begin{bmatrix} e & f \\ g & h \end{bmatrix} \end{bmatrix} = \begin{bmatrix} ae & af & be & bf \\ ag & ah & bg & bh \\ ce & cf & de & df \\ cg & ch & dg & dh \end{bmatrix}.$$
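The same `np.kron` call handles the matrix case; a sketch:

```python
import numpy as np

M = np.array([[1, 2],
              [3, 4]])
N = np.array([[0, 5],
              [6, 7]])

# A 2x2 tensor a 2x2 gives a 4x4 matrix: each entry M[i, j] scales a copy of N.
print(np.kron(M, N))
```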

Eigenvalues and eigenvectors

Consider a square matrix $M$. A nonzero vector $v$ is an eigenvector of $M$ if $Mv = cv$ for some number $c$. The number $c$ is the eigenvalue corresponding to the eigenvector $v$.

In general, a matrix $M$ may transform a vector into any other vector. An eigenvector is special because it's unchanged except for being multiplied by a number. If $v$ is an eigenvector with eigenvalue $c$, then $av$ is also an eigenvector (for any nonzero $a$) with the same eigenvalue. For example, for the identity matrix, every vector $v$ is an eigenvector with eigenvalue $1$.

As another example, consider a diagonal matrix $D$, whose only nonzero entries lie on the diagonal:

$$ \begin{bmatrix} d_1 & 0 & 0 \\ 0 & d_2 & 0 \\ 0 & 0 & d_3 \end{bmatrix}. $$

The vectors

$$\begin{bmatrix}1 \\ 0 \\ 0 \end{bmatrix}, \quad \begin{bmatrix}0 \\ 1 \\ 0\end{bmatrix}, \quad \text{and} \quad \begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix}$$

are eigenvectors of this matrix with eigenvalues $d_1$, $d_2$, and $d_3$, respectively. If $d_1$, $d_2$, and $d_3$ are distinct numbers, then these vectors (and their multiples) are the only eigenvectors of the matrix $D$.

In general, for a diagonal matrix it's easy to read off the eigenvalues and eigenvectors. The eigenvalues are all the numbers appearing on the diagonal, and their respective eigenvectors are the unit vectors with one entry equal to $1$ and the remaining entries equal to $0$.
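A sketch confirming this for a diagonal matrix, using NumPy's eigenvalue solver:

```python
import numpy as np

D = np.diag([1.0, 2.0, 3.0])

# For a diagonal matrix, the eigenvalues are the diagonal entries and the
# eigenvectors are the standard unit vectors (up to scaling).
eigenvalues, eigenvectors = np.linalg.eig(D)
print(eigenvalues)                        # [1. 2. 3.]
print(eigenvectors)                       # columns are the unit eigenvectors
```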

Note in the example that the eigenvectors of $D$ form a basis for $3$-dimensional vectors. A basis is a set of vectors such that any vector can be written as a linear combination of them. More explicitly, $v_1$, $v_2$, and $v_3$ form a basis if any vector $v$ can be written as $v=a_1 v_1 + a_2 v_2 + a_3 v_3$ for some numbers $a_1$, $a_2$, and $a_3$.