Recently, I was trying to get the hang of quantum computing. I found myself in a position where I had forgotten most of the linear algebra I learned in past semesters. So once again, I decided to write things down in the hope that some of the knowledge here will stay in my memory a bit longer.

## General single-qubit Gates

Trying to understand unitary matrices in the context of pure linear algebra is, I must admit, rather boring. Perhaps that is one reason why I brushed them off so quickly and so easily. However, explaining them in the context of quantum computing feels a lot more fun. Maybe it’s because I can associate a unitary matrix with a quantum gate, which is something a bit more concrete, or simply because the term “quantum computing” makes me sound smarter.

Speaking of something concrete, here are two example unitary matrices: the NOT gate (\(X\)) and the Hadamard gate (\(H\)):

\[ X =\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} ;\ H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \]
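To see what these gates do, we can apply them to the basis state \(|0\rangle = (1, 0)^T\). A minimal sketch in Python with NumPy (not part of the original post):

```python
import numpy as np

# The NOT gate X and the Hadamard gate H as NumPy arrays
X = np.array([[0, 1],
              [1, 0]])
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

ket0 = np.array([1, 0])  # |0>

print(X @ ket0)  # X flips |0> to |1>: [0 1]
print(H @ ket0)  # H creates an equal superposition: [0.70710678 0.70710678]
```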

For example, if we take the Hadamard gate (\(H\)) and compute its adjoint \(H^{\dagger}\):

\[ H^{\dagger} = \left( \left( \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \right)^T \right)^{*} \]

We know the transpose of \(H\) is still \(H\), and taking the complex conjugate of \(H^T\) doesn’t do anything since \(H^T\) is a real matrix. Thus, we can verify that \(H^{\dagger}H = I\).
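This is easy to confirm numerically; a quick sketch with NumPy (my addition, not the original post's):

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# Adjoint = conjugate transpose; for a real symmetric H this is H itself
H_dag = H.conj().T

print(np.allclose(H_dag, H))              # True: H equals its own adjoint
print(np.allclose(H_dag @ H, np.eye(2)))  # True: H^dagger H = I, so H is unitary
```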

There are other single-qubit quantum gates, such as the \(Y\) and \(Z\) matrices (Pauli matrices) introduced by physicist Wolfgang Pauli. It’s a good exercise to verify that they are also unitary.
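For reference, the same check works for the Pauli \(Y\) and \(Z\) gates; a sketch with NumPy (not from the original post):

```python
import numpy as np

Y = np.array([[0, -1j],
              [1j,  0]])
Z = np.array([[1,  0],
              [0, -1]])

# A matrix U is unitary iff U^dagger U = I
for name, U in [("Y", Y), ("Z", Z)]:
    print(name, np.allclose(U.conj().T @ U, np.eye(2)))  # both True
```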

## What does it mean for a matrix to be unitary

The most important property of unitary matrices is that they *preserve the length of their inputs*. This means that given a quantum state, represented as a vector \(|\psi\rangle\), it must be that \( \left\lVert U|\psi\rangle \right\rVert = \left\lVert |\psi\rangle \right\rVert \).

Proving that a unitary matrix is length-preserving is straightforward. We want to show that \( \left\lVert U |\psi\rangle \right\rVert_2 = \left\lVert |\psi\rangle \right\rVert_2 \):

\[\begin{aligned} \left\lVert U |\psi\rangle \right\rVert_2^2 &= (U |\psi\rangle)^H(U |\psi\rangle) \\ &= |\psi\rangle^H U^H U |\psi\rangle \\ &=|\psi\rangle^H |\psi\rangle \\ &= \left\lVert |\psi\rangle \right\rVert_2^2 \end{aligned}\]
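We can spot-check length preservation on a random (unnormalized) complex vector; a sketch assuming NumPy, using the Hadamard gate as the unitary:

```python
import numpy as np

rng = np.random.default_rng(0)

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# A random complex vector standing in for a state |psi>
psi = rng.normal(size=2) + 1j * rng.normal(size=2)

# ||H psi||_2 equals ||psi||_2 because H is unitary
print(np.isclose(np.linalg.norm(H @ psi), np.linalg.norm(psi)))  # True
```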

## Why are unitaries the only matrices that preserve length

Previously, we used the *ket* notation for quantum state vectors. We can extend the two-dimensional quantum state vectors to more general vectors, and the properties of unitary matrices will still hold.

Putting our question in formal terms, we want to show that if \(A \in \mathbb{C}^{m \times m}\) preserves length, i.e. \(\left\lVert A x \right\rVert_2 = \left\lVert x \right\rVert_2\) for all \(x \in \mathbb{C}^m\), then \(A\) is unitary.

We first prove that \((Ax)^H(Ay) = x^Hy\) for all \(x\), \(y\) by considering \( \left\lVert x - y \right\rVert_2^2 = \left\lVert A(x - y) \right\rVert_2^2 \). Then we will use the result to evaluate \(e_i^H A^HAe_j\).

Let \(x, y \in \mathbb{C}^m\). Using the inner-product form of the vector 2-norm (i.e. \(\left\lVert y \right\rVert_2^2 = y^Hy\)) on \( \left\lVert x - y \right\rVert_2^2 = \left\lVert A(x - y) \right\rVert_2^2 \),

\[ (x-y)^H(x-y) = (A(x-y))^HA(x-y) \]

Using the Hermitian-transpose rule \((Ax)^H = x^HA^H\), we get

\[ (x-y)^H(x-y) = (x-y)^HA^HA(x-y) \]

Multiplying the above formula out,

\[ x^Hx - y^Hx - x^Hy + y^Hy = x^HA^HAx - y^HA^HAx - x^HA^HAy + y^HA^HAy \]

Since \(y^Hx = \overline{x^Hy}\) (and likewise \(y^HA^HAx = \overline{x^HA^HAy}\)), we can rewrite the above as

\[ x^Hx - (\overline{x^Hy} + x^Hy) + y^Hy = x^HA^HAx - (\overline{x^HA^HAy} + x^HA^HAy) + y^HA^HAy \]

Since \(A\) preserves length, \(x^Hx = x^HA^HAx\) and \(y^Hy = y^HA^HAy\), so those terms cancel. Using the identity \(\frac{\alpha + \overline{\alpha}}{2} = Re(\alpha)\), we can simplify the above formula to:

\[ Re(x^Hy) = Re((Ax)^H(Ay)) \]
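This identity can be spot-checked numerically; a sketch assuming NumPy, where a random unitary \(A\) is built via QR decomposition of a random complex matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# QR decomposition of a random complex matrix yields a unitary Q
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A, _ = np.linalg.qr(M)

x = rng.normal(size=4) + 1j * rng.normal(size=4)
y = rng.normal(size=4) + 1j * rng.normal(size=4)

lhs = (x.conj() @ y).real               # Re(x^H y)
rhs = ((A @ x).conj() @ (A @ y)).real   # Re((Ax)^H (Ay))
print(np.isclose(lhs, rhs))  # True
```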

To show that \(A\) is unitary, we need \(A^HA = I\). We use the fact that the standard basis vectors satisfy

\[ e_i^H e_j = \begin{cases} 1 & \text{if } i = j\\ 0 & \text{otherwise} \end{cases} \]

Therefore, \(e_i^H M e_j\) extracts the \((i, j)\) entry of a matrix \(M\). So we know that

\[ e_i^H A^HA e_i = \left\lVert Ae_i \right\rVert^2 = \left\lVert e_i \right\rVert^2 = 1 \]

We can conclude that all the diagonal elements of \(A^HA\) are \(1\).
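The diagonal claim is easy to verify numerically; a sketch assuming NumPy, again with a random unitary from a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(3)

# A random unitary A (QR of a random complex matrix gives a unitary Q)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A, _ = np.linalg.qr(M)

G = A.conj().T @ A  # the Gram matrix A^H A

# e_i^H (A^H A) e_i picks out a diagonal entry, which must be 1
for i in range(3):
    e_i = np.eye(3)[i]
    print(np.isclose(e_i @ G @ e_i, 1))  # True for every i
```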

A side question remains: how do we prove that all the off-diagonal elements of \(A^HA\) are \(0\)? It turns out to be straightforward to illustrate the process if we go back to the two-dimensional quantum state vectors.

Suppose we have \(|\psi\rangle = |e_i\rangle + |e_j\rangle\). We already know that \(\left\lVert A |\psi\rangle \right\rVert^2 = \left\lVert |\psi\rangle \right\rVert^2 = 1 + 1 = 2\), and expanding \(\left\lVert A |\psi\rangle \right\rVert^2\) gives \(1 + e_i^H A^HA e_j + e_j^H A^HA e_i + 1\), so \(e_i^H A^HA e_j + e_j^H A^HA e_i = 0\).

Then, suppose instead we have \(|\psi\rangle = |e_i\rangle + i|e_j\rangle\). Following the same process, we get \(e_i^H A^HA e_j - e_j^H A^HA e_i = 0\). Combining this with the fact that \(e_i^H A^HA e_j + e_j^H A^HA e_i = 0\), we’ve proven that the off-diagonal elements of \(A^HA\) are all \(0\). We can extend the vector \(|\psi\rangle\) to higher-dimensional vectors and the proof is similar.
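The two-probe argument above can be checked numerically as well; a sketch assuming NumPy, with a random unitary \(A\) from a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(2)

M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A, _ = np.linalg.qr(M)  # random unitary, so A^H A = I by construction

G = A.conj().T @ A  # the Gram matrix A^H A

i, j = 0, 1
e_i, e_j = np.eye(3)[i], np.eye(3)[j]

# The two probe states from the argument above
s1 = (e_i + e_j) @ G @ (e_i + e_j)                    # = 2 + G[i,j] + G[j,i]
s2 = (e_i + 1j * e_j).conj() @ G @ (e_i + 1j * e_j)   # = 2 + i*G[i,j] - i*G[j,i]

print(np.isclose(s1, 2))       # True, so G[i,j] + G[j,i] = 0
print(np.isclose(s2, 2))       # True, so G[i,j] - G[j,i] = 0
print(np.isclose(G[i, j], 0))  # True: the off-diagonal entry vanishes
```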