# Solution of An Introduction to Quantum Computing
$$
% \def\qubit{\@ifstar{\qubitH}{\qubitV}}
\def\qubitV#1{\qubitVV#1&}
\def\qubitVV#1&{\begin{bmatrix}
#1 \\
#2
\end{bmatrix}}
\def\qubitH#1{\qubitHH#1&}
\def\qubitHH#1&{\begin{bmatrix}
#1 & #2
\end{bmatrix}}
% \def\tqubit{\@ifstar{\tqubitH}{\tqubitV}}
\def\tqubitV#1{\tqubitVV#1&}
\def\tqubitVV#1&{\begin{bmatrix}
#1 \\
#2 \\
#3 \\
#4
\end{bmatrix}}
\def\tqubitH#1{\tqubitHH#1&}
\def\tqubitHH#1&{\begin{bmatrix}
#1 & #2 & #3 & #4
\end{bmatrix}}
\def\bmqty#1{\begin{bmatrix} #1 \end{bmatrix}}
\def\f#1{\ff#1/}
\def\ff#1/#2/{\frac{#1}{#2}}
\def\l{\left}
\def\r{\right}
\newcommand{\bra}[1]{\langle #1 |}
\newcommand{\ket}[1]{| #1 \rangle}
\newcommand{\braket}[1]{\langle #1 \rangle}
\newcommand{\ketbra}[2]{| #1 \rangle \langle #2 |}
\def\op{\ketbra}
\def\dyad{\ketbra}
\def\R{\mathbb{R}}
\def\H{\mathcal{H}}
\def\Set#1{\l\{#1\r\}}
\def\set#1{\{#1\}}
\def\Tr#1{\operatorname{Tr} \l( #1 \r)}
% Examples:
% \qubitV{1 & 0}
% \qubitH{0 & 1}
% \tqubitV{\f{1 / 2} & 0 & 0 & \f{1 / 2}}
% \tqubitH{0 & 1 & 0 & 1}
% \bmqty{
% 1 & 2 \\
% 3 & 4 \\
% }
$$
[TOC]
## Contribute
The solutions may not be detailed enough and may contain errors. You are welcome to edit them to make this document better.
## Book Information
- **Authors**: Phillip Kaye, Raymond Laflamme, Michele Mosca
- **Edition**: 1st
- **Publisher**: Oxford University Press
- **Publish Date**: 2007/01/18 (yyyy/mm/dd)
- **ISBN-10**: 019857049X
- **ISBN-13**: 978-0198570493
## Exercise 1.5.1
***Question***
A sequence of $n$ CNOT gates with the target bits all initialized to 0 is the simplest way to copy an $n$-bit string $y$ stored in the control bits. However, more sophisticated copy operations are also possible, such as a circuit that treats a string $y$ as the binary representation of the integer $y_1 + 2y_2 + 4y_3 + \cdots + 2^{n-1}y_n$ and adds $y$ modulo $2^n$ to the copy register (modular arithmetic is defined in Section 7.3.2).
Describe a reversible 4-bit circuit that adds modulo 4 the integer $y \in \{0,1,2,3\}$ represented in binary in the first two bits to the integer $z$ represented in binary in the last two bits.
***Answer***
Not yet
## Exercise 2.3.1
***Question***
Prove that the trace has the cyclic property $\Tr{ABC}=\Tr{BCA}$.
***Answer***
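The answer has not been written up yet; below is a sketch using index notation, where $A_{ij}$ denotes the $(i, j)$ entry of $A$.
$$
\begin{align*}
\Tr{ABC}
&= \sum_i (ABC)_{ii} \\
&= \sum_{i, j, k} A_{ij} B_{jk} C_{ki} \\
&= \sum_{j, k, i} B_{jk} C_{ki} A_{ij}
&& (\text{reorder the scalar factors}) \\
&= \sum_j (BCA)_{jj} \\
&= \Tr{BCA}
\end{align*}
$$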
## Exercise 2.3.2
***Question***
Using the result of the previous exercise, together with the fact that a change of orthonormal basis can be written as a unitary operator, show that $\Tr{A}$ is independent of the basis in which it is expressed. Notice that in the matrix representation, $\Tr{A}$ equals the sum of the diagonal entries of the square matrix
representing $A$.
***Answer***
Let $U$ be the unitary operator whose columns are the new basis vectors expressed in the original basis. Then $A$ transformed into the new basis, denoted $A'$, is
$$
A' = U^{-1}AU
$$
Taking the trace of both sides,
$$
\begin{align*}
\Tr{A'}
&= \Tr{U^{-1}AU} \\
&= \Tr{UU^{-1}A} \quad(\Tr{ABC}=\Tr{BCA}) \\
&= \Tr{A}
\end{align*}
$$
The trace does not change under a change of basis. In fact, the same argument works for any invertible change of basis $S$, not only unitary ones, since $\Tr{S^{-1}AS} = \Tr{SS^{-1}A} = \Tr{A}$.
Q.E.D.
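As a numerical sanity check (not part of the proof), the sketch below verifies the basis-independence of the trace with `numpy`, building a random unitary $U$ from a QR decomposition:

```python
import numpy as np

# Numerical check that Tr(A) is invariant under a unitary change of basis.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# A random unitary U: the Q factor of the QR decomposition of a random matrix.
U, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))

A_prime = U.conj().T @ A @ U  # A in the new basis (U^{-1} = U^dagger)
assert np.isclose(np.trace(A_prime), np.trace(A))
```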
## Exercise 3.2.1
***Question***
Show that (3.2.12) is a solution of the time-independent Schrödinger equation.
$$
\ket{\psi(t_2)}
= e^{\frac{-i}{\hbar} H (t_2 - t_1)}
\ket{\psi(t_1)} \tag{3.2.12}
$$
***Answer***
From the Schrödinger equation
$$
i \hbar \frac{\partial}{\partial t} \ket{\psi(t)} = H \ket{\psi(t)},
$$
the general solution for $\ket{\psi(t)}$ is
$$
\ket{\psi(t)} = e^{-\frac{i}{\hbar} H t} \ket{C},
$$
where $\ket{C}$ is a time-independent state. Evaluating at $t = t_1$ and $t = t_2$, we get
$$
\begin{align*}
\ket{C}
= e^{\frac{i}{\hbar} H t_2} \ket{\psi(t_2)}
&= e^{\frac{i}{\hbar} H t_1} \ket{\psi(t_1)} \\
\ket{\psi(t_2)}
&= e^{\frac{-i}{\hbar} H (t_2 - t_1)}
\ket{\psi(t_1)}
\end{align*}
$$
Q.E.D.
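A numerical check of (3.2.12), assuming $\hbar = 1$ and a randomly chosen Hermitian $H$ (a sketch with `numpy`, not part of the derivation). For Hermitian $H$, $e^{-iHt}$ can be computed from the eigendecomposition $H = V \Lambda V^\dagger$:

```python
import numpy as np

def expm_herm(H, t):
    """exp(-i*H*t) for Hermitian H via eigendecomposition (hbar = 1)."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

rng = np.random.default_rng(1)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
H = (M + M.conj().T) / 2          # a random Hermitian Hamiltonian
psi0 = np.array([1.0, 0.0])       # |psi(0)>
t1, t2 = 0.3, 1.1
psi1 = expm_herm(H, t1) @ psi0    # |psi(t1)>
psi2 = expm_herm(H, t2) @ psi0    # |psi(t2)>

# (3.2.12): |psi(t2)> = exp(-i H (t2 - t1)) |psi(t1)>
assert np.allclose(expm_herm(H, t2 - t1) @ psi1, psi2)
```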
## Exercise 3.3.1
***Question***
Consider the 2-qubit state
$$
\ket{\psi}
= \f{1 / \sqrt 2} \ket{0} \ket{0} +
\f{1 / \sqrt 2} \ket{1} \ket{1}
$$
Show that this state is entangled by proving that there are no possible values $\alpha_0$, $\alpha_1$, $\beta_0$, $\beta_1$ such that
$$
\ket{\psi}
= \l( \alpha_0 \ket{0} + \alpha_1 \ket{1} \r)
\l( \beta_0 \ket{0} + \beta_1 \ket{1} \r).
$$
(Note, the state $\ket{\psi}$ above is called an EPR pair, named for Einstein, Podolsky, and Rosen, who considered such states in their investigations of quantum mechanics.)
***Answer***
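The answer has not been written up yet; here is a sketch of the standard argument. Expanding the hypothetical product form gives
$$
\begin{align*}
\l( \alpha_0 \ket{0} + \alpha_1 \ket{1} \r)
\l( \beta_0 \ket{0} + \beta_1 \ket{1} \r)
= \alpha_0 \beta_0 \ket{0} \ket{0} +
\alpha_0 \beta_1 \ket{0} \ket{1} +
\alpha_1 \beta_0 \ket{1} \ket{0} +
\alpha_1 \beta_1 \ket{1} \ket{1}.
\end{align*}
$$
Matching coefficients with $\ket{\psi}$ requires $\alpha_0 \beta_0 = \alpha_1 \beta_1 = \f{1 / \sqrt 2}$ and $\alpha_0 \beta_1 = \alpha_1 \beta_0 = 0$. But $(\alpha_0 \beta_0)(\alpha_1 \beta_1)$ and $(\alpha_0 \beta_1)(\alpha_1 \beta_0)$ are the same product $\alpha_0 \alpha_1 \beta_0 \beta_1$, yet the first equals $\f{1 / 2}$ and the second equals $0$, a contradiction. Hence no such values exist.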
## Exercise 3.4.1
***Question***
(a) Prove that if the operators $P_i$ satisfy $P_i^\dagger = P_i$ and $P_i^2 = P_i$, then $P_iP_j = 0$ for all $i \ne j$.
(b) Prove that any pure state $\ket{\psi}$ can be decomposed as $\ket{\psi} = \sum_i \alpha_i \ket{\psi_i}$ where $\alpha_i = \sqrt{p(i)}$, $p(i) = \braket{\psi | P_i | \psi}$, and $\ket{\psi_i} = \f{P_i \ket{\psi} / \sqrt{p(i)}}$.
Also prove that $\braket{\psi_i | \psi_j} = \delta_{i,j}$.
(c\) Prove that any decomposition $I = \sum_i P_i$ of the identity operator on a Hilbert space of dimension $N$ into a sum of nonzero projectors $P_i$ can have at most $N$ terms in the sum.
***Answer***
(a) This statement is not true. Here is a counter-example.
$$
\begin{align*}
&P_i = \bmqty{
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
} \;,\;
P_j = \bmqty{
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0 \\
} \\[0.3cm]&\Rightarrow\;\;
P_i P_j
= \bmqty{
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
} \bmqty{
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0 \\
}
= \bmqty{
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0 \\
}
\ne 0
\end{align*}
$$
To make the statement true, we have to add one more condition.
$$
I = P_1 + P_2 + \cdots + P_i + \cdots + P_j + \cdots + P_N \tag{1}
$$
Each $P_k$, where $1 \le k \le N$, is an $N \times N$ matrix and satisfies $P_k^2 = P_k$. Let $S$ denote the space of $N \times 1$ vectors and define
$$
R_k \equiv \Set{P_k v : v \in S}.
$$
In fact, $R_k$ is the [image](https://math.libretexts.org/Bookshelves/Linear_Algebra/A_First_Course_in_Linear_Algebra_(Kuttler)/09%3A_Vector_Spaces/9.08%3A_The_Kernel_and_Image_of_a_Linear_Map) of $P_k$. Because, for every $v \in S$, the sum of the projections of $v$ equals $v$ itself,
$$
\begin{align*}
P_1 v + \cdots + P_N v
&= (P_1 + \cdots + P_N)v \\
&= I v
&& (\text{by (1)})\\
&= v
\end{align*}
$$
the [sum of subspaces](https://www.statlect.com/matrix-algebra/direct-sum#hid2) $R_k$ is $S$.
$$
\begin{align*}
\sum_k R_k
&= \sum_k \l\{ P_kv : v \in S \r\} \\
&= \l\{ \sum_k P_kv : v \in S \r\} \\
&= \l\{ \l(\sum_k P_k \r)v : v \in S \r\} \\
&= \l\{ Iv : v \in S \r\}
&& (\text{by (1)}) \\
&= \l\{ v : v \in S \r\} \\
&= S \tag{2}
\end{align*}
$$
The next step shows that the dimensions of the $R_k$ sum to $N$. Below, $a_k$ denotes the number of zero eigenvalues of $P_k$ and $\lambda_{km}$ denotes the $m$th eigenvalue of $P_k$.
$$
\begin{align*}
\sum_k \operatorname{dim}(R_k)
&= \sum_k N - \operatorname{Nullity}(P_k)
&& (
\href{https://en.wikipedia.org/wiki/Rank%E2%80%93nullity_theorem}
{\text{Rank–nullity theorem}}
) \\
&= \sum_k N -
\operatorname{dim}(
\operatorname{Ker}(P_k)
)
&& (\text{definition of nullity})\\
&= \sum_k N -
\operatorname{dim}(
\l\{v\in S : P_k v = 0\r\}
)
&& (
\href{https://math.libretexts.org/Bookshelves/Linear_Algebra/A_First_Course_in_Linear_Algebra_(Kuttler)/09%3A_Vector_Spaces/9.08%3A_The_Kernel_and_Image_of_a_Linear_Map}
{\text{definition of kernel}}
)\\
&= \sum_k N - a_k
&& (P_k v = 0 \cdot v)^* \\
&= \sum_k \sum_m \lambda_{km}
&& (\text{eigenvalues of projectors are 0 or 1}) \\
&= \sum_k \Tr{P_k}
&& (
\href{https://people.math.harvard.edu/~knill/teaching/math19b_2011/handouts/lecture28.pdf}
{\text{trace is sum of eigenvalues}}
) \\
&= \Tr{\sum_k P_k} \\
&= \Tr{I}
&& (\text{by (1)}) \\
&= N
\end{align*}
$$
*Each linearly independent eigenvector with eigenvalue 0 contributes one dimension to the kernel.
As a result, $\operatorname{dim}\l( \sum_k R_k \r) = \operatorname{dim}(S) = N = \sum_k \operatorname{dim}(R_k)$, which together with (2) implies the sum of the $R_k$ is a [direct sum](https://www.statlect.com/matrix-algebra/direct-sum#hid5):
$$
\bigoplus_k R_k = S
$$
Because the $R_k$ form a direct sum, the decomposition of any $w \in R_j$ over the $R_k$ is unique; since $P_j w = w$, this uniqueness forces $P_i w = 0$ for $i \ne j$ (compare: [the direct sum of the image and the kernel of a projector is the original vector space](https://www.statlect.com/matrix-algebra/projection-matrix#hid9)). Hence
$$
P_j v \in R_j \subset \operatorname{Ker}(P_i)
$$
for every $v \in S$. By the [definition of kernel](https://math.libretexts.org/Bookshelves/Linear_Algebra/A_First_Course_in_Linear_Algebra_(Kuttler)/09%3A_Vector_Spaces/9.08%3A_The_Kernel_and_Image_of_a_Linear_Map), we get
$$
\begin{align*}
P_i P_j v = 0,\ \forall v \in S \;\;
\Rightarrow \;\; P_i P_j = 0
\end{align*}
$$
Q.E.D.
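A numerical illustration of the corrected statement (assuming `numpy`, not part of the proof): two orthogonal projectors built from a random orthonormal basis sum to the identity, and their product vanishes.

```python
import numpy as np

# A random orthonormal basis of C^4: the Q factor of a QR decomposition.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))

# Orthogonal projectors onto the spans of columns {0,1} and {2,3}.
P1 = Q[:, :2] @ Q[:, :2].conj().T
P2 = Q[:, 2:] @ Q[:, 2:].conj().T

assert np.allclose(P1 + P2, np.eye(4))          # I = P1 + P2
assert np.allclose(P1 @ P1, P1)                 # P1^2 = P1
assert np.allclose(P1.conj().T, P1)             # P1^dagger = P1
assert np.allclose(P1 @ P2, np.zeros((4, 4)))   # hence P1 P2 = 0
```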
(b)
The statement of the question is only true if $\alpha_i$ is a real number. Here is a counter-example in which $\alpha_i$ is not real. Let
$$
\begin{align*}
P_i = \bmqty{
1 & 0 \\
0 & 0
}\;,\;\;
\ket{\psi} = \qubitV{i & 0}.
\end{align*}
$$
$P_i$'s normalized eigenvector corresponding to the non-zero eigenvalue is
$$
\ket{\psi_i} = \qubitV{1 & 0}.
$$
We get
$$
\braket{\psi | P_i | \psi} = p(i) = 1
$$
and
$$
\begin{align*}
\hphantom{{}\Rightarrow{}} P_i \ket{\psi} = \alpha_i \ket{\psi_i} \;\;
&\Rightarrow\;\; \bmqty{
1 & 0 \\
0 & 0
}
\qubitV{i & 0} = i \qubitV{1 & 0} \\
&\Rightarrow \;\;\alpha_i = i \ne \sqrt{p(i)} = 1
\end{align*}
$$
We amend the statement of the question: let $|\alpha_i|^2 = p(i)$ and let $\ket{\psi}$ be normalized; then the statement holds. Below is the proof.
$$
\begin{align*}
\hphantom{{}\Rightarrow{}} I = \sum_i P_i
\;\;\Rightarrow \;\;
I \ket{\psi} = \sum_i P_i \ket{\psi}
\;\;\Rightarrow\;\;
\ket{\psi} = \sum_i \alpha_i \ket{\psi_i}
\end{align*}
$$
$\ket{\psi_i}$ is the normalized projection of $\ket{\psi}$ by $P_i$. We now show that $|\alpha_i|^2 = p(i)$ follows from the requirements of the question.
$$
\begin{align*}
&\hphantom{{}\Rightarrow{}} P_i \ket{\psi} = \alpha_i \ket{\psi_i} \\
&\Rightarrow \braket{\psi | P_i^\dagger P_i | \psi}
= \braket{\psi_i | \alpha_i^* \alpha_i | \psi_i} \\
&\Rightarrow \braket{\psi | P_i P_i | \psi}
= \braket{\psi_i | \alpha_i^* \alpha_i | \psi_i}
&& (P_i^\dagger = P_i) \\
&\Rightarrow \braket{\psi | P_i | \psi}
= \braket{\psi_i | \alpha_i^* \alpha_i | \psi_i}
&& (P_i^2 = P_i) \\
&\Rightarrow \braket{\psi | P_i | \psi}
= |\alpha_i|^2 \braket{\psi_i | \psi_i}
&& (\alpha_i \text{ is a number}) \\
&\Rightarrow \braket{\psi | P_i | \psi}
= |\alpha_i|^2
&& (\ket{\psi_i} \text{ is normalized}) \\
&\Rightarrow p(i) = |\alpha_i|^2
&& (\text{Hermitian's expected value is real*})
\end{align*}
$$
*Refer to [Hermitian's expected value is real](#Hermitian-operator’s-expected-value-is-real)
Next, we prove $\braket{\psi_i | \psi_j} = \delta_{i,j}$.
$$
\begin{align*}
\braket{\psi_i | \psi_j}
&= \braket{\psi_i | P_i^\dagger P_j | \psi_j}
&& (\text{definition of } \ket{\psi_i}) \\
&= \braket{\psi_i | P_i P_j | \psi_j}
&& (P_i^\dagger = P_i) \\
&= \braket{\psi_i | 0 | \psi_j}
&& \l(\text{if } i \ne j,\ P_i P_j = 0,\ \text{proved at (a)} \r) \\
&= 0
\end{align*}
$$
When $i = j$, $\braket{\psi_i | \psi_i} = 1$ because $\ket{\psi_i}$ is normalized. Combining these two cases, we get $\braket{\psi_i | \psi_j} = \delta_{i,j}$.
Q.E.D.
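A numerical illustration of the amended statement in part (b) (assuming `numpy`), with $P_0 = \dyad{0}{0}$, $P_1 = \dyad{1}{1}$ and an arbitrary normalized state; note the phase of each component is absorbed into $\ket{\psi_i}$:

```python
import numpy as np

P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # a resolution of the identity
psi = np.array([3.0, 4.0j]) / 5.0                # a normalized state |psi>

p = [np.vdot(psi, Pi @ psi).real for Pi in P]    # p(i) = <psi|P_i|psi>
parts = [Pi @ psi / np.sqrt(pi)                  # |psi_i> = P_i|psi>/sqrt(p(i))
         for Pi, pi in zip(P, p)]
recon = sum(np.sqrt(pi) * psi_i for pi, psi_i in zip(p, parts))

assert np.allclose(recon, psi)                   # |psi> = sum_i sqrt(p(i)) |psi_i>
assert np.isclose(np.vdot(parts[0], parts[1]), 0)  # <psi_0|psi_1> = 0
```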
(c\)
Denote the $N$-dimensional Hilbert space by $S$.
$$
\begin{align*}
\hphantom{{}\Rightarrow{}} I = \sum_{1 \le i \le N} P_i\;\;
&\Rightarrow\;\; I v = \sum_i P_i v
&& (v \in S) \\
&\Rightarrow\;\; v = \sum_i P_i v \\
\end{align*}
$$
$v$ and each $P_i v$ are in $S$. There are at most $N$ linearly independent vectors in an $N$-dimensional Hilbert space. If there were an $(N + 1)$th term in the sum, $P_{N + 1} v$ could be represented as a linear combination of the other $N$ terms.
$$
P_{N + 1} v = a_1 P_1 v + \cdots + a_N P_N v \quad (a_i \in \mathbb{C})
$$
Because $P_{N + 1}$ is nonzero, we may choose $v$ so that $P_{N + 1} v \ne 0$; then at least one term on the right-hand side is nonzero. Assume $a_i P_i v \ne 0$ for some $i$.
$$
\begin{align*}
&\hphantom{{}\Rightarrow{}}
P_{N + 1} v = a_1 P_1 v + \cdots + a_N P_N v \\
&\Rightarrow P_i P_{N + 1} v =
a_1 P_i P_1 v + \cdots +
a_i P_i P_i v + \cdots +
a_N P_i P_N v \\
&\Rightarrow 0 = a_i P_i P_i v
&& \l( P_i P_j = 0 \r) \\
&\Rightarrow 0 = a_i P_i v \ne 0
&& \l( P_i^2 = P_i \r)
\end{align*}
$$
We get a contradiction, so the sum can have at most $N$ terms.
Q.E.D.
## Exercise 3.4.2
***Question***
Show that measuring the observable $\ket{1}\bra{1}$ is equivalent to measuring the observable $Z$ up to a relabelling of the measurement outcomes.
***Answer***
## Exercise 3.4.3
***Question***
Verify that a measurement of the Pauli observable $X$ is equivalent to a complete measurement with respect to the basis $\l\{ \f{1 / \sqrt 2} \l( \ket{0} + \ket{1} \r), \f{1 / \sqrt 2} \l( \ket{0} - \ket{1} \r) \r\}$.
***Answer***
## Exercise 3.4.4
***Question***
(a) Prove that performing a projective measurement with respect to $P_0$ and $P_1$ (defined above) on an $n$-qubit state is equivalent to measuring the observable $Z^{\otimes n}$.
(b) Explain why performing a complete Von Neumann measurement with respect to the computation basis, and then outputting the parity of the resulting string is not equivalent to performing a projective measurement of the parity.
***Answer***
## Exercise 3.4.5
***Question***
The observable formalism gives a convenient way to describe the expected value of a quantum measurement, where the eigenvalues $m_i$ correspond to some relevant physical quantity. Such *expectation values* are particularly relevant for quantum experiments (particularly the early ones) where one does not measure individual quantum systems, but rather an ensemble of many independent and identically prepared systems, and where the measurement apparatus provides only cumulative results (e.g. the net magnetic field induced by an ensemble of nuclear spins). Consider a projective measurement described by projectors $P_i$, and suppose we measure the state $\ket{\psi}$. Show that the expected value of $m_i$ is
$$
E(m_i) = \Tr{M \ket{\psi} \bra{\psi}}
$$
***Answer***
## Exercise 3.5.1
***Question***
Find the density matrices of the following states:
(a) $\Set{
(\ket{0}, \f{1 / 2}),
(\ket{1}, \f{1/ 2})
}$
(b) $\f{1 / \sqrt{2}} \ket{0} + \f{1 / \sqrt{2}} \ket{1}$
(c\) $\Set{(
\f{1 / \sqrt{2}} \ket{0} +
\f{1 / \sqrt{2}} \ket{1},
\f{1 / 2}
),(
\f{1 / \sqrt{2}} \ket{0} -
\f{1 / \sqrt{2}} \ket{1},
\f{1 / 2}
)}$
***Answer***
The density operator of a mixed state $\Set{(\ket{\psi_1}, p_1), (\ket{\psi_2}, p_2), \cdots}$ is $p_1 \dyad{\psi_1}{\psi_1} + p_2 \dyad{\psi_2}{\psi_2} + \cdots$.
(a)
$$
\rho
= \f{1/ 2} \dyad{0}{0} + \f{1/ 2} \dyad{1}{1}
= \bmqty{
\f{1 / 2} & 0 \\
0 & \f{1 / 2}
}
$$
(b)
$$
\begin{align*}
\rho
&= \l( \f{1 / \sqrt{2}} \ket{0} + \f{1 / \sqrt{2}} \ket{1} \r)
\l( \f{1 / \sqrt{2}} \bra{0} + \f{1 / \sqrt{2}} \bra{1} \r) \\
&= \f{1 / 2} \dyad{0}{0} + \f{1 / 2} \dyad{0}{1} +
\f{1 / 2} \dyad{1}{0} + \f{1 / 2} \dyad{1}{1}
&& (\text{expansion}) \\
&= \bmqty{
\f{1 / 2} & \f{1 / 2} \\
\f{1 / 2} & \f{1 / 2}
}
\end{align*}
$$
(c\)
$$
\begin{align*}
\rho
&= \f{1 / 2} \l( \f{1 / \sqrt{2}} \ket{0} + \f{1 / \sqrt{2}} \ket{1} \r)
\l( \f{1 / \sqrt{2}} \bra{0} + \f{1 / \sqrt{2}} \bra{1} \r) +
\f{1 / 2} \l( \f{1 / \sqrt{2}} \ket{0} - \f{1 / \sqrt{2}} \ket{1} \r)
\l( \f{1 / \sqrt{2}} \bra{0} - \f{1 / \sqrt{2}} \bra{1} \r) \\
&= \f{1 / 2} \l(
\f{1 / 2} \dyad{0}{0} + \f{1 / 2} \dyad{0}{1} +
\f{1 / 2} \dyad{1}{0} + \f{1 / 2} \dyad{1}{1}
\r) + \f{1 / 2} \l(
\f{1 / 2} \dyad{0}{0} - \f{1 / 2} \dyad{0}{1} -
\f{1 / 2} \dyad{1}{0} + \f{1 / 2} \dyad{1}{1}
\r)
&& (\text{expansion}) \\
&= \f{1/ 2} \dyad{0}{0} + \f{1/ 2} \dyad{1}{1} \\
&= \bmqty{
\f{1 / 2} & 0 \\
0 & \f{1 / 2}
}
\end{align*}
$$
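The three density matrices can be checked with `numpy` (an illustration; note that (a) and (c) yield the same density matrix even though the ensembles differ, while (b) is a different, pure, state):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def dyad(v, w):
    """|v><w| as an outer product."""
    return np.outer(v, w.conj())

rho_a = 0.5 * dyad(ket0, ket0) + 0.5 * dyad(ket1, ket1)   # (a)
plus  = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)
rho_b = dyad(plus, plus)                                   # (b)
rho_c = 0.5 * dyad(plus, plus) + 0.5 * dyad(minus, minus)  # (c)

assert np.allclose(rho_a, np.eye(2) / 2)  # (a) is the maximally mixed state
assert np.allclose(rho_c, rho_a)          # (c) equals (a)
assert not np.allclose(rho_b, rho_a)      # (b) is a different (pure) state
```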
## Exercise 3.5.2
***Question***
**(a)**
Prove that the density operator $\rho$ for an ensemble of pure states satisfies the following conditions:
(i) $\Tr{\rho} = 1$.
(ii) $\rho$ is a positive operator (i.e. for any $\ket{v}$, $\braket{v | \rho | v}$ is real and non-negative; equivalently, the eigenvalues of $\rho$ are non-negative).
**(b)**
Show that for any matrix $\rho$ satisfying conditions 1 and 2, there exists a finite list of probabilities $p_i$ and pure states $\ket{\psi_i}$ such that $\rho$ is the density matrix of the mixed state
$$
\Set{(\ket{\psi_1}, p_1), (\ket{\psi_2}, p_2), \ldots, (\ket{\psi_k}, p_k)}.
$$
***Answer***
(a)
(i)
[Trace is the sum of eigenvalues](#Trace-is-sum-of-eigenvalues).
$$
\Tr{\rho} = \sum_i^N \lambda_i
$$
$N$ is the dimension of the state, $\set{\lambda_i}$ are the eigenvalues of $\rho$.
By definition, the density operator of a pure state is $\rho \equiv \ketbra{\psi}{\psi}$, where $\ket{\psi}$ is a state of unit length. Let us prove that the eigenvectors of $\rho$ are $\ket{\psi}$ together with $N - 1$ vectors that are orthogonal to $\ket{\psi}$ and to each other. Denote these $N - 1$ orthogonal vectors by $\set{\ket{\psi_i^\perp}\ |\ 1 \le i \le N - 1}$.
First, prove that $\ket{\psi}$ is an eigenvector of $\rho$ with eigenvalue $1$.
$$
\begin{align*}
\rho \ket{\psi}
&= \ket{\psi} \braket{\psi | \psi}
&& (\text{definition of density operator}) \\
&= \ket{\psi} 1
&& (\text{a state is a unit vector}) \\
&= 1 \ket{\psi}
\end{align*}
$$
Second, prove that each $\ket{\psi_i^\perp}$ is an eigenvector of $\rho$ with eigenvalue $0$.
$$
\begin{align*}
\rho \ket{\psi_i^\perp}
&= \ket{\psi} \braket{\psi | \psi_i^\perp}
&& (\text{definition of density operator}) \\
&= \ket{\psi} 0
&& (\text{inner product of orthogonal vectors is zero}) \\
&= 0 \\
&= 0 \ket{\psi_i^\perp}
\end{align*}
$$
Denote the eigenvalue corresponding to $\ket{\psi_i^\perp}$ by $\lambda_i$ and the eigenvalue corresponding to $\ket{\psi}$ by $\lambda_N$. From the proof above, we get $\lambda_i = 0$ for $1 \le i \le N - 1$ and $\lambda_N = 1$.
$$
\begin{align*}
\Tr{\rho}
&= \sum_i^N \lambda_i \\
&= \lambda_N \\
&= 1
\end{align*}
$$
For a general ensemble $\rho = \sum_k p_k \ketbra{\psi_k}{\psi_k}$, the result follows by the linearity of the trace: $\Tr{\rho} = \sum_k p_k \Tr{\ketbra{\psi_k}{\psi_k}} = \sum_k p_k = 1$.
Q.E.D.
(ii)
From (i), we get the eigenvectors of $\rho$ form an orthonormal basis $\set{\ket{\psi_j}\ |\ 1 \le j \le N}$. We can express any vector $\ket{v}$ as
$$
\ket{v} = \sum_j^N a_j \ket{\psi_j},\ a_j \in \mathbb{C}
$$
$$
\begin{align*}
\braket{v | \rho | v}
&= \braket{v | \psi } \braket{\psi | v} \\
&= \l( \sum_j^N a_j^* \braket{\psi_j | \psi} \r)
\l( \sum_j^N a_j \braket{\psi | \psi_j} \r) \\
&= \l( \sum_j^N a_j^* \braket{\psi_j | \psi_N} \r)
\l( \sum_j^N a_j \braket{\psi_N | \psi_j} \r)
&& (\ket{\psi} \text{ is denoted as } \ket{\psi_N} \text{, see (i)}) \\
&= a_N^* \braket{\psi_N | \psi_N} a_N \braket{\psi_N | \psi_N}
&& (\set{\ket{\psi_j}} \text{ are orthogonal to each other}) \\
&= a_N^* a_N
&& (\set{\ket{\psi_j}} \text{ are normalized}) \\
&= |a_N|^2 \in \R
\end{align*}
$$
And in (i), we have proved that the eigenvalues of $\rho$ are $0$ and $1$, which are non-negative.
Q.E.D.
(b)
Positive operators on $\H_\mathbb{C}$ are self-adjoint, hence normal operators. See
- [functional analysis - Show that a positive operator on a complex Hilbert space is self-adjoint - Mathematics Stack Exchange](https://math.stackexchange.com/q/561636/633347)
- [Positive operator (Hilbert space) - Wikipedia](https://en.wikipedia.org/wiki/Positive_operator_(Hilbert_space)#On_HC,_if_A_≥_0_then_A_is_symmetric)

Therefore $\rho$ admits a spectral decomposition with a unitary operator.
$$
\rho = U \Lambda U^{-1} = U \Lambda U^\dagger
$$
The columns of $U$ are $\rho$'s eigenvectors $\set{\ket{\psi_i}}$, and $\Lambda$ is a diagonal matrix composed of the eigenvalues $\set{\lambda_i\ |\ \rho \ket{\psi_i} = \lambda_i \ket{\psi_i}}$ of $\rho$. In the equations below, we use $\ket{e_i}$ to denote the elements of the standard basis.
$$
\begin{align*}
\bra{e_i}
&= \begin{bmatrix}
\smash[b]{\underbrace{0 \dots 0}_{i - 1 \text{ times}}} &
1 &
\smash[b]{\underbrace{0 \dots 0}_{N - i \text{ times}}}
\end{bmatrix} \vphantom{\underbrace{1}_{N}} \\
\ket{e_i}
&= \begin{bmatrix}
\smash[b]{\underbrace{0 \dots 0}_{i - 1 \text{ times}}} &
1 &
\smash[b]{\underbrace{0 \dots 0}_{N - i \text{ times}}}
\end{bmatrix}^T \vphantom{\underbrace{1}_{N}}
\end{align*}
$$
$$
\begin{align*}
\rho
&= U \Lambda U^\dagger \\
&= U \l[ \sum_i^N \lambda_i \ketbra{e_i}{e_i} \r] U^\dagger \\
&= \sum_i^N U \lambda_i \ketbra{e_i}{e_i} U^\dagger\\
&= \sum_i^N \l[
\lambda_i
\l( U \ket{e_i} \r)
\l( \bra{e_i} U^\dagger \r)
\r]\\
&= \sum_i^N \l[
\lambda_i
\l( U \ket{e_i} \r)
\l( U \ket{e_i} \r)^\dagger
\r]
&& ((AB)^\dagger = B^\dagger A^\dagger)\\
&= \sum_i^N \lambda_i \ket{\psi_i} \l( \ket{\psi_i} \r)^\dagger \\
&= \sum_i^N \lambda_i \ketbra{\psi_i}{\psi_i}
\end{align*}
$$
Because $\Tr{\rho} = 1$, $\sum_i^N \lambda_i = 1$; and because $\rho$ is positive, each $\lambda_i \ge 0$, so the $\lambda_i$ are valid probabilities. The sum above gives the finite list we need.
$$
\Set{(\ket{\psi_1}, \lambda_1), (\ket{\psi_2}, \lambda_2), \ldots, (\ket{\psi_N}, \lambda_N)}.
$$
Q.E.D.
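A numerical sketch of part (b) (assuming `numpy`): eigendecomposing a matrix that satisfies conditions (i) and (ii) recovers a finite mixture of pure states.

```python
import numpy as np

# A matrix with trace 1 that is positive semidefinite (conditions (i), (ii)).
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])

w, V = np.linalg.eigh(rho)   # eigenvalues p_i and eigenvectors |psi_i>
assert np.all(w >= -1e-12)   # probabilities are non-negative
assert np.isclose(w.sum(), 1.0)  # and sum to 1 (= Tr(rho))

# Reassemble rho as the mixed state {(|psi_i>, p_i)}.
recon = sum(p * np.outer(v, v.conj()) for p, v in zip(w, V.T))
assert np.allclose(recon, rho)
```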
## Exercise 3.5.3
***Question***
Consider any linear transformation $T$ on a Hilbert space $\H$ of dimension $N$. This linear transformation $T$ induces a transformation $\rho \mapsto T \rho T^\dagger$ on the set of linear operators on the Hilbert space $\H$. Prove that the above transformation is also linear.
***Answer***
To prove $\rho \mapsto T \rho T^\dagger$ is linear, we have to prove
1. Additivity: $T (\rho_1 + \rho_2) T^\dagger = T \rho_1 T^\dagger + T \rho_2 T^\dagger$
2. Homogeneity: $T (a \rho) T^\dagger = a (T \rho T^\dagger)$
**Additivity:**
$$
\begin{align*}
T (\rho_1 + \rho_2) T^\dagger
&= [T (\rho_1 + \rho_2)] T^\dagger
&& (\text{associative}) \\
&= (T \rho_1 + T \rho_2) T^\dagger
&& (T \text{ is linear transformation}) \\
&= \l[ T (T \rho_1 + T \rho_2)^\dagger \r]^\dagger
&& ( (AB)^\dagger = B^\dagger A^\dagger ) \\
&= \l\{
T \l[
(T \rho_1)^\dagger + (T \rho_2)^\dagger
\r]
\r\}^\dagger
&& ((A + B)^\dagger = A^\dagger + B^\dagger) \\
&= \l[ T (T\rho_1)^\dagger + T (T\rho_2)^\dagger \r]^\dagger
&& (T \text{ is linear transformation}) \\
&= \l[ T (T\rho_1)^\dagger \r]^\dagger +
\l[ T (T\rho_2)^\dagger \r]^\dagger
&& ((A + B)^\dagger = A^\dagger + B^\dagger) \\
&= (T \rho_1) T^\dagger + (T \rho_2) T^\dagger
&& ((AB)^\dagger = B^\dagger A^\dagger) \\
&= T \rho_1 T^\dagger + T \rho_2 T^\dagger
&& (\text{associative})
\end{align*}
$$
Q.E.D.
**Homogeneity:**
$$
\begin{align*}
T (a \rho) T^\dagger
&= [T (a \rho)] T^\dagger
&& (\text{associative}) \\
&= (a T \rho) T^\dagger
&& (T \text{ is linear transformation}) \\
&= a (T \rho T^\dagger)
&& (\text{associative})
\end{align*}
$$
Q.E.D.
## Appendix
### Eigenvalues of projectors
If $P^2 = P$ and $P \ket{v} = \lambda \ket{v}$ for some $\ket{v} \ne 0$, then $\lambda^2 \ket{v} = P^2 \ket{v} = P \ket{v} = \lambda \ket{v}$, so $\lambda^2 = \lambda$ and $\lambda \in \set{0, 1}$.
### Hermitian operator's eigenvalues are real
Prove that if
$$
T = T^\dagger
$$
and
$$
T \ket{\psi} = \lambda \ket{\psi},
$$
then $\lambda \in \R$.
Proof:
$$
\begin{align*}
\l( T \ket{\psi} \r)^T \ket{\psi^*}
&= \l( T^* \ket{\psi}^* \r)^\dagger \ket{\psi^*}
&& \l(
A^* B^* = (AB)^*
\r) \\
&= \braket{\psi |^* T^T | \psi^*}
&& \l(
\l( T \ket{\psi} \r)^\dagger = \bra{\psi} T^\dagger
\r) \\
&= \braket{\psi |^* T^* | \psi^*}
&& \l( T = T^\dagger \r) \\
&= \bra{\psi^*} \l( T^* \ket{\psi^*} \r) \\
&= \bra{\psi^*} \l( T \ket{\psi} \r)^*
&& \l(
A^* B^* = (AB)^*
\r) \\
&= \bra{\psi^*} \l( \lambda \ket{\psi} \r)^*
&& \l(T \ket{\psi} = \lambda \ket{\psi} \r) \\
&= \braket{\psi^* | \lambda^* | \psi^*}
&& \l(
A^* B^* = (AB)^*
\r) \\
&= \lambda^* \braket{\psi^* | \psi^*}
\end{align*} \tag{1}
$$
$$
\begin{align*}
\l( T \ket{\psi} \r)^T \ket{\psi^*}
&= \l( \lambda \ket{\psi} \r)^T \ket{\psi^*}
&& \l(T \ket{\psi} = \lambda \ket{\psi} \r) \\
&= \lambda \ket{\psi}^T \ket{\psi^*}
&& \l(\lambda \text{ is a number} \r) \\
&= \lambda \braket{\psi^* | \psi^*}
&& \l( \ket{\psi}^T = \ket{\psi^*}^\dagger = \bra{\psi^*} \r)
\end{align*} \tag{2}
$$
Because $\ket{\psi} \ne 0$, by comparing (1) and (2), we get $\lambda = \lambda^*$, hence $\lambda \in \R$.
### Hermitian operator's expected value is real
$$
\begin{align*}
\braket{\psi | T | \psi}
&= \braket{\psi | T | \psi}^T
&& (\braket{\psi | T | \psi} \text{ is a number}) \\
&= \l( \qubitH{v_1^* & v_2^*} T \qubitV{v_1 & v_2} \r)^T
&& \l( \text{represent } \ket{\psi} \text{ as } \qubitV{v_1 & v_2} \r) \\
&= \qubitH{v_1 & v_2} T^T \qubitV{v_1^* & v_2^*}
&& \l( (ABC)^T = C^T B^T A^T \r) \\
&= \qubitH{v_1 & v_2} T^* \qubitV{v_1^* & v_2^*}
&& \l( T^\dagger = T \r) \\
&= \braket{\psi^* | T^* | \psi^*} \\
&= \braket{\psi | T | \psi}^*
&& \l( A^* B^* = (AB)^* \r) \\
\end{align*}
$$
Since $\braket{\psi | T | \psi} = \braket{\psi | T | \psi}^*$, we conclude $\braket{\psi | T | \psi} \in \R$.
Q.E.D.
### Trace is sum of eigenvalues
Refer to [Shayan Oveis Gharan's lecture note: Lecture 10: Linear Algebra Background](https://courses.cs.washington.edu/courses/cse521/18au/521-lecture-9.pdf) on P.10-3.
###### tags: `Quantum Computing` `Solution`