# Cubes and the Radon Transform
Let $\mathbb{Z}_2^n$ denote the direct product of $\mathbb{Z}_2$ with itself $n$ times, so the elements of $\mathbb{Z}_2^n$ are $n$-tuples $(a_1, \ldots, a_n)$ of 0’s and 1’s, under the operation of component-wise addition.
* $\mathbb{Z}_2^3=\{(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)\}$
Define a graph $C_n$, called the *$n$-cube*, as follows: the vertex set of $C_n$ is given by $V(C_n) = \mathbb{Z}_2^n$, and two vertices $u$ and $v$ are connected by an edge if they differ in exactly one component. Equivalently, $u+v$ has exactly one nonzero component.
* $C_3$ (the $3$-cube)

If we regard $\mathbb{Z}_2^n$ as consisting of real vectors, then these vectors form the set of vertices of an $n$-dimensional cube. Moreover, two vertices of the cube lie on an edge (in the usual geometric sense) if and only if they form an edge of $C_n$. This explains why $C_n$ is called the $n$-cube. We also see that walks in $C_n$ have a nice geometric interpretation—they are simply walks along the edges of an $n$-dimensional cube.
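The definition is easy to experiment with on a computer. Below is a minimal Python sketch (not from the original text; all names are mine) that builds the vertex and edge sets of $C_n$ straight from the definition; for $n=3$ it recovers the $8$ vertices and $12$ edges of the geometric $3$-cube.

```python
from itertools import product

def cube_vertices(n):
    """The 2^n vertices of C_n: all n-tuples of 0's and 1's."""
    return list(product((0, 1), repeat=n))

def cube_edges(n):
    """Unordered pairs {u, v} differing in exactly one component."""
    V = cube_vertices(n)
    return [(u, v) for u in V for v in V
            if u < v and sum(a != b for a, b in zip(u, v)) == 1]

print(len(cube_vertices(3)), len(cube_edges(3)))  # 8 12
```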

We want to determine explicitly the eigenvalues and eigenvectors of $C_n$. We will do this by a somewhat indirect but extremely useful and powerful technique, the finite Radon transform. Let $\mathcal{V}$ denote the set of all functions $f: \mathbb{Z}_2^n \to \mathbb{R}$. Note that $\mathcal{V}$ is a vector space over $\mathbb{R}$ of dimension $2^n$ since $|\mathbb{Z}_2^n|=2^n$.
> The functions $f:S\to\mathbb{R}$ form a vector space of dimension $|S|$.
>
> **Proof.** Let $S=\{s_1, s_2, \cdots , s_{|S|}\}$ and define $f_i(s_j)=\delta_{ij}$. Then $f_1, \cdots, f_{|S|}$ are linearly independent, and any $g:S\to\mathbb{R}$ satisfies $g=a_1f_1+\cdots+a_{|S|}f_{|S|}$ with $a_i=g(s_i)$. Hence $\{f_1, \cdots, f_{|S|}\}$ is a basis. $\square$
If $u = (u_1, \ldots, u_n)$ and $v = (v_1, \ldots, v_n)$ are elements of $\mathbb{Z}_2^n$, then define their dot product by
\begin{equation}
\tag{2.1} u \cdot v = u_1 v_1 + \cdots + u_n v_n
\end{equation}
where the computation is performed modulo 2. Thus we regard $u \cdot v$ as an element of $\mathbb{Z}_2$, that is, $u \cdot v \in \{0, 1\}$. The expression $(-1)^{u \cdot v}$ is defined to be the real number $+1$ or $-1$, depending on whether $u \cdot v = 0$ or $1$, respectively. Since for integers $k$, the value of $(-1)^k$ depends only on $k \pmod{2}$, it follows that we can treat $u$ and $v$ as integer vectors without affecting the value of $(-1)^{u \cdot v}$. Thus, for instance, formulas such as
$$
(-1)^{u \cdot (v+w)} = (-1)^{u \cdot v + u \cdot w} = (-1)^{u \cdot v} (-1)^{u \cdot w}
$$
are well defined and valid.
> $$
\begin{split}
u \cdot (v+w) &= u_1(v_1+w_1)+\cdots+u_n(v_n+w_n) \quad (\bmod 2)\\\\
&= (u_1v_1+\cdots+u_nv_n)+(u_1w_1+\cdots+u_nw_n) \quad (\bmod 2)\\\\
&= u\cdot v +u\cdot w \quad (\bmod 2)
\end{split}
$$
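To see the sign identity concretely, here is a small Python check (an illustration of the remark above, with names of my own choosing): it treats $u, v, w$ as integer vectors, reduces dot products mod $2$, and verifies $(-1)^{u\cdot(v+w)} = (-1)^{u\cdot v}(-1)^{u\cdot w}$ over all of $\mathbb{Z}_2^3$.

```python
from itertools import product

def sign(u, v):
    """The real number (-1)^(u.v), with u.v computed mod 2."""
    return -1 if sum(a * b for a, b in zip(u, v)) % 2 else 1

# Verify (-1)^(u.(v+w)) = (-1)^(u.v) * (-1)^(u.w) for all triples.
for u, v, w in product(product((0, 1), repeat=3), repeat=3):
    vw = tuple((a + b) % 2 for a, b in zip(v, w))  # v + w in Z_2^3
    assert sign(u, vw) == sign(u, v) * sign(u, w)
```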
We now define two important bases of the vector space $\mathcal{V}$. Each basis will contain one element for each $u\in \mathbb{Z}_2^n$. The first basis, denoted $B_1$, has elements $f_u$ defined as follows:
$$
\tag{2.2} f_u(v) = \delta_{uv}
$$
It is easy to see that $B_1$ is a basis, since any $g \in \mathcal{V}$ satisfies
$$
\tag{2.3} g = \sum_{u \in \mathbb{Z}_2^n} g(u) f_u
$$
Hence $B_1$ spans $\mathcal{V}$, so since $|B_1| = \dim \mathcal{V} = 2^n$, it follows that $B_1$ is a basis. The second basis, denoted $B_2$, has elements $\chi_u$ defined as follows:
$$
\chi_u(v) = (-1)^{u \cdot v}
$$
In order to show that $B_2$ is a basis, we will use an inner product on $\mathcal{V}$ (denoted $\langle \cdot, \cdot \rangle$) defined by
$$
\langle f, g \rangle = \sum_{u \in \mathbb{Z}_2^n} f(u) g(u)
$$
**Lemma 2.1.** The set $B_2 = \{\chi_u : u \in \mathbb{Z}_2^n\}$ forms a basis for $\mathcal{V}$.
*Proof.* Since $|B_2| = \dim \mathcal{V} = 2^n$, it suffices to show that $B_2$ is linearly independent. In fact, we will show that the elements of $B_2$ are orthogonal. We have
$$
\begin{split}
\langle \chi_u, \chi_v \rangle = \sum_{w \in \mathbb{Z}_2^n} \chi_u(w) \chi_v(w) &= \sum_{w \in \mathbb{Z}_2^n} (-1)^{u \cdot w+v \cdot w}\\ &= \sum_{w \in \mathbb{Z}_2^n} (-1)^{(u+v) \cdot w}=\begin{cases} 2^n, & \text{if } u+v = (0, 0, \ldots, 0) \\ 0, & \text{otherwise} \end{cases}
\end{split}
$$
> For any $y \in \mathbb{Z}_2^n$, we have
> $$
\sum_{w \in \mathbb{Z}_2^n} (-1)^{y \cdot w} = \begin{cases} 2^n, & \text{if } y = (0, 0, \ldots, 0) \\ 0, & \text{otherwise} \end{cases}
$$
> since if $y=(0, 0, \ldots, 0)$, then $\displaystyle \sum_{w \in \mathbb{Z}_2^n} (-1)^{y \cdot w} = \sum_{w \in \mathbb{Z}_2^n}1=2^n$
>
> For $y\neq (0, 0, \ldots, 0)$, if $y$ has $k$ $1$'s and $n-k$ $0$'s, then note that the number of $w$ such that $y \cdot w=0$ is $$2^{n-k}\times\left[\dbinom{k}{0}+\dbinom{k}{2}+\dbinom{k}{4}+\cdots \right]=2^{n-k}\times2^{k-1}=2^{n-1}$$ so $$\displaystyle \sum_{w \in \mathbb{Z}_2^n} (-1)^{y \cdot w} =2^{n-1}-2^{n-1}=0 $$
Thus ==$\langle \chi_u, \chi_v \rangle \neq 0$== if and only if $u + v = (0, 0, \ldots, 0)$, i.e., $u = v$, so the elements of $B_2$ are orthogonal (and nonzero). Hence they are linearly independent as desired. $\square$
> Let $c_1\chi_{u_1}+ \cdots+ c_{2^n}\chi_{u_{2^n}}=0$, where $u_1, \ldots, u_{2^n}$ are the elements of $\mathbb{Z}_2^n$, and take the inner product of $\chi_{u_j}$ with both sides of this equation. We get $c_j\langle \chi_{u_j}, \chi_{u_j}\rangle=2^nc_j=0$, which forces $c_j=0$. Thus the set is linearly independent.
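The orthogonality relation $\langle \chi_u, \chi_v\rangle = 2^n\delta_{uv}$ is also easy to confirm by brute force; the following sketch (mine, not from the text) checks it for $n=3$.

```python
from itertools import product

n = 3
V = list(product((0, 1), repeat=n))

def chi(u, v):
    """chi_u(v) = (-1)^(u.v)."""
    return (-1) ** (sum(a * b for a, b in zip(u, v)) % 2)

# <chi_u, chi_v> = 2^n if u = v, and 0 otherwise.
for u in V:
    for v in V:
        assert sum(chi(u, w) * chi(v, w) for w in V) == (2 ** n if u == v else 0)
```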
We now come to the key definition of the Radon transform.
Given a subset $\Gamma$ of $\mathbb{Z}_2^n$ and a function $f \in \mathcal{V}$, define a new function $\Phi_\Gamma f \in \mathcal{V}$ by
$$
\Phi_\Gamma f(v) = \sum_{w \in \Gamma} f(v + w).
$$
The function $\Phi_\Gamma f$ is called the (*discrete* or *finite*) *Radon transform* of $f$ (on the group $\mathbb{Z}_2^n$, with respect to the subset $\Gamma$).
We have defined a map $\Phi_\Gamma : \mathcal{V} \to \mathcal{V}$. It is easy to see that $\Phi_\Gamma$ is a linear transformation.
> $\Phi_\Gamma(f+g)(v)=\displaystyle\sum_{w \in \Gamma} (f+g)(v + w)=\displaystyle\sum_{w \in \Gamma} f(v + w)+\displaystyle\sum_{w \in \Gamma} g(v + w)=\Phi_\Gamma f(v)+\Phi_\Gamma g(v)$
>
> $\Phi_\Gamma (cf)(v)=\displaystyle\sum_{w \in \Gamma} (cf)(v + w)=c\displaystyle\sum_{w \in \Gamma} f(v + w)=c\ \Phi_\Gamma f(v)$
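The definition translates directly into code. Here is a sketch (function names are my own) that represents $f \in \mathcal{V}$ as a dictionary indexed by $\mathbb{Z}_2^n$ and computes $\Phi_\Gamma f$:

```python
from itertools import product

def add(u, v):
    """Component-wise addition in Z_2^n."""
    return tuple((a + b) % 2 for a, b in zip(u, v))

def radon(f, Gamma):
    """Phi_Gamma f, where f is a dict mapping Z_2^n -> R."""
    return {v: sum(f[add(v, w)] for w in Gamma) for v in f}

# Example: the transform of the indicator function f_u is the sum of
# the indicators f_{u+w} over w in Gamma (cf. Lemma 2.3 below).
f = {v: int(v == (1, 0, 1)) for v in product((0, 1), repeat=3)}
g = radon(f, [(1, 0, 0), (0, 1, 0)])
print([v for v in g if g[v] == 1])  # [(0, 0, 1), (1, 1, 1)]
```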
We want to compute its eigenvalues and eigenvectors.
**Theorem 2.2.** The eigenvectors of $\Phi_\Gamma$ are the functions $\chi_u$, where $u \in \mathbb{Z}_2^n$. The eigenvalue $\lambda_u$ corresponding to $\chi_u$ (i.e., $\Phi_\Gamma \chi_u = \lambda_u \chi_u$) is given by
$$
\lambda_u = \sum_{w \in \Gamma} (-1)^{u \cdot w}
$$
*Proof.* Let $v \in \mathbb{Z}_2^n$. Then
$$
\begin{split}
\Phi_\Gamma \chi_u(v) &= \sum_{w \in \Gamma} \chi_u(v + w)\\\\
&= \sum_{w \in \Gamma} (-1)^{u \cdot (v + w)}\\\\
&= \left( \sum_{w \in \Gamma} (-1)^{u \cdot w} \right) (-1)^{u \cdot v} \\\\
&=\left( \sum_{w \in \Gamma} (-1)^{u \cdot w} \right) \chi_u(v)
\end{split}
$$
Hence
$$
\Phi_\Gamma \chi_u = \left( \sum_{w \in \Gamma} (-1)^{u \cdot w} \right) \chi_u
$$as desired. $\square$
Note that because the $\chi_{u}$'s form a basis for $\mathcal{V}$ by Lemma 2.1, it follows that Theorem 2.2 yields a complete set of eigenvalues and eigenvectors for $\Phi_\Gamma$. Note also that the eigenvectors $\chi_{u}$ of $\Phi_{\Gamma}$ are independent of $\Gamma$; only the eigenvalues depend on $\Gamma$.
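Theorem 2.2 can be verified numerically for any $\Gamma$. The sketch below (an illustration, not part of the text) checks $\Phi_\Gamma \chi_u = \lambda_u \chi_u$ for every $u$, using an arbitrarily chosen $\Gamma \subseteq \mathbb{Z}_2^4$:

```python
from itertools import product

n = 4
V = list(product((0, 1), repeat=n))
add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))
sign = lambda u, v: (-1) ** (sum(a * b for a, b in zip(u, v)) % 2)

Gamma = [(1, 0, 0, 0), (1, 1, 0, 0), (0, 1, 1, 1)]  # any subset works

for u in V:
    lam = sum(sign(u, w) for w in Gamma)  # eigenvalue predicted by Theorem 2.2
    for v in V:
        # (Phi_Gamma chi_u)(v) should equal lam * chi_u(v).
        assert sum(sign(u, add(v, w)) for w in Gamma) == lam * sign(u, v)
```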
Now we come to the payoff. Let $\Delta=\left\{\delta_1, \ldots, \delta_n\right\}$, where $\delta_i$ is the $i$th unit coordinate vector (i.e., $\delta_i$ has a $1$ in position $i$ and $0$'s elsewhere). Note that the $j$th coordinate of $\delta_i$ is just $\delta_{ij}$ (the Kronecker delta), explaining our notation $\delta_i$. Let $[\Phi_{\Delta}]$ denote the matrix of the linear transformation $\Phi_{\Delta}: \mathcal{V} \rightarrow \mathcal{V}$ with respect to the basis $B_1$ of $\mathcal{V}$ given by (2.2).
**Lemma 2.3.** We have $\left[\Phi_{\Delta}\right]=\boldsymbol{A}\left(C_n\right)$, the adjacency matrix of the $n$-cube.
*Proof.*
> $f_u(v) = \delta_{uv}$
Let $u, v \in \mathbb{Z}_2^n$. We have
$$
\begin{aligned}
\Phi_{\Delta} f_u(v) & =\sum_{w \in \Delta} f_u(v+w) \\\\
& =\sum_{w \in \Delta} f_{u+w}(v)
\end{aligned}
$$since $u=v+w$ if and only if $u+w=v$. There follows
$$
\tag{2.4} \Phi_{\Delta} f_u=\sum_{w \in \Delta} f_{u+w}
$$
Equation (2.4) says that the $(u, v)$-entry (short for $(f_u, f_v)$-entry) of the matrix $[\Phi_{\Delta}]$ is given by
$$
\left(\Phi_{\Delta}\right)_{u v}=\begin{cases}
1, & \text{if } u+v \in \Delta \\
0, & \text{otherwise}
\end{cases}
$$
Now $u+v \in \Delta$ if and only if $u$ and $v$ differ in exactly one coordinate. This is just the condition for $u v$ to be an edge of $C_n$, so $\left[\Phi_{\Delta}\right]=\boldsymbol{A}\left(C_n\right)$. $\square$
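Here is a quick computational confirmation of Lemma 2.3 (a sketch under my own naming): the matrix of $\Phi_\Delta$ in the basis $B_1$, built entry by entry from (2.4), equals the adjacency matrix of $C_n$.

```python
from itertools import product

n = 3
V = list(product((0, 1), repeat=n))
add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))
Delta = [tuple(int(j == i) for j in range(n)) for i in range(n)]

# (u, v)-entry of [Phi_Delta]: 1 if u + v lies in Delta, else 0.
M = [[int(add(u, v) in Delta) for v in V] for u in V]

# Adjacency matrix of C_n: u ~ v iff they differ in exactly one component.
A = [[int(sum(a != b for a, b in zip(u, v)) == 1) for v in V] for u in V]

assert M == A
```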
**Corollary 2.4.** The eigenvectors $E_u$ ($u \in \mathbb{Z}_2^n$) of $\boldsymbol{A}\left(C_n\right)$ are given by
$$
\tag{2.5} E_u=\sum_{v \in \mathbb{Z}_2^n}(-1)^{u\cdot v} v
$$
The eigenvalue $\lambda_u$ corresponding to the eigenvector $E_u$ is given by
$$
\tag{2.6} \lambda_u=n-2 \omega(u)
$$where $\omega(u)$ is the number of $1$'s in $u$. (The integer $\omega(u)$ is called the *Hamming weight* or simply the *weight* of $u$.) Hence $\boldsymbol{A}\left(C_n\right)$ has $\dbinom{n}{i}$ eigenvalues equal to $n-2 i$, for each $0 \leq i \leq n$.
*Proof.*
> (2.3) $g =\displaystyle \sum_{u \in \mathbb{Z}_2^n} g(u) f_u$
>
> $\chi_u(v) = (-1)^{u \cdot v}$
>
> **Theorem 2.2.** $\Phi_\Gamma \chi_u = \lambda_u \chi_u$, where $\lambda_u =\displaystyle \sum_{w \in \Gamma} (-1)^{u \cdot w}$
For any function $g \in \mathcal{V}$ we have by (2.3) that
$$
g=\sum_v g(v) f_v
$$
Applying this equation to $g=\chi_u$ gives
$$
\tag{2.7} \chi_u=\sum_v \chi_u(v) f_v=\sum_v(-1)^{u\cdot v} f_v
$$Equation (2.7) expresses the eigenvector $\chi_u$ of $\Phi_{\Delta}$ as a linear combination of the functions $f_v$. But $\Phi_{\Delta}$ has the same matrix with respect to the basis of the $f_v$'s as $\boldsymbol{A}\left(C_n\right)$ has with respect to the vertices $v$ of $C_n$. Hence the expansion of the eigenvectors of $\Phi_{\Delta}$ in terms of the $f_v$'s has the same coefficients as the expansion of the eigenvectors of $\boldsymbol{A}\left(C_n\right)$ in terms of the $v$'s, so (2.5) follows.
According to Theorem 2.2, the eigenvalue $\lambda_u$ corresponding to the eigenvector $\chi_u$ of $\Phi_{\Delta}$ (or equivalently, the eigenvector $E_u$ of $\boldsymbol{A}\left(C_n\right)$) is given by
$$
\tag{2.8}\lambda_u=\sum_{w \in \Delta}(-1)^{u\cdot w}
$$
Now $\Delta=\left\{\delta_1, \ldots, \delta_n\right\}$, and $\delta_i \cdot u$ is $1$ if $u$ has a $1$ in its $i$th coordinate and is $0$ otherwise. Hence the sum in (2.8) has $n-\omega(u)$ terms equal to $+1$ and $\omega(u)$ terms equal to $-1$, so $\lambda_u=(n-\omega(u))-\omega(u)=n-2 \omega(u)$, as claimed. $\square$
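A NumPy check of Corollary 2.4 (my own illustration, not from the text): for $n=4$ the spectrum of $\boldsymbol{A}(C_4)$ should consist of the eigenvalues $n-2i$ with multiplicities $\binom{n}{i}$, i.e., $4, 2, 0, -2, -4$ with multiplicities $1, 4, 6, 4, 1$.

```python
from collections import Counter
from itertools import product
from math import comb
import numpy as np

n = 4
V = list(product((0, 1), repeat=n))
A = np.array([[int(sum(a != b for a, b in zip(u, v)) == 1) for v in V]
              for u in V])

# Compare the numerical spectrum with the multiplicities of Corollary 2.4.
spectrum = Counter(int(round(x)) for x in np.linalg.eigvalsh(A))
predicted = Counter({n - 2 * i: comb(n, i) for i in range(n + 1)})
assert spectrum == predicted
```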
We have all the information needed to count walks in $C_n$.
**Corollary 2.5.** Let $u, v \in \mathbb{Z}_2^n$, and suppose that $\omega(u+v)=k$ (i.e., $u$ and $v$ disagree in exactly $k$ coordinates). Then the number of walks of length $\ell$ in $C_n$ between $u$ and $v$ is given by
$$
\tag{2.9}\left(\boldsymbol{A}^{\ell}\right)_{u v}=\frac{1}{2^n} \sum_{i=0}^n \sum_{j=0}^k(-1)^j\binom{k}{j}\binom{n-k}{i-j}(n-2 i)^{\ell}
$$where we set $\dbinom{n-k}{i-j}=0$ if $j>i$. In particular,
$$
\tag{2.10}\left(\boldsymbol{A}^{\ell}\right)_{uu}=\frac{1}{2^n} \sum_{i=0}^n\binom{n}{i}(n-2 i)^{\ell}
$$
*Proof.*
> **Corollary 1.2.** Let $\lambda_1, \cdots, \lambda_p$ be the eigenvalues of the adjacency matrix $\boldsymbol{A}(G)$. Then there exist real numbers $c_1, \cdots, c_p$ such that for all $\ell \ge 1$, we have $$(\boldsymbol{A}(G)^{\ell})_{ij}=c_1\lambda_1^{\ell}+\cdots+c_p\lambda_p^{\ell}$$ In fact, if $U=(u_{rs})$ is a real orthogonal matrix such that $U^{-1}\boldsymbol{A}U=\mathrm{diag}(\lambda_1, \cdots, \lambda_p)$, then we have $$c_k=u_{ik}u_{jk}$$
Let $E_u$ and $\lambda_u$ be as in Corollary 2.4. In order to apply Corollary 1.2, we need the eigenvectors to be of unit length (where we regard the $f_v$'s as an orthonormal basis of $\mathcal{V}$ ). By (2.5), we have
$$
\left|E_u\right|^2=\sum_{v \in \mathbb{Z}_2^n}\left((-1)^{u \cdot v}\right)^2=\sum_{v \in \mathbb{Z}_2^n}1=2^n
$$
Hence we should replace $E_u$ by $E_u^{\prime}=\dfrac{1}{2^{n/2}} E_u$ to get an orthonormal basis. According to Corollary 1.2, we thus have
$$
\left(\boldsymbol{A}^{\ell}\right)_{u v}=\frac{1}{2^n} \sum_{w \in \mathbb{Z}_2^n} E_{u w} E_{v w} \lambda_w^{\ell}
$$
Now $E_{u w}$ by definition is the coefficient of $f_w$ in the expansion (2.5), i.e., ==$E_{u w}=(-1)^{u \cdot w}$== (and similarly for $E_v$), while $\lambda_w=n-2 \omega(w)$. Hence
$$\tag{2.11}
\left(\boldsymbol{A}^{\ell}\right)_{u v}=\frac{1}{2^n} \sum_{w \in \mathbb{Z}_2^n}(-1)^{(u+v)\cdot w}(n-2 \omega(w))^{\ell}
$$ The number of vectors $w$ of Hamming weight $i$ that have $j$ $1$'s in common with $u+v$ is $\dbinom{k}{j}\dbinom{n-k}{i-j}$: we can choose the $j$ $1$'s of $u+v$ that agree with $w$ in $\dbinom{k}{j}$ ways, while the remaining $i-j$ $1$'s of $w$ can be inserted in the $n-k$ remaining positions in $\dbinom{n-k}{i-j}$ ways. Since $(u+v) \cdot w \equiv j \ (\bmod 2)$, we have $$\left(\boldsymbol{A}^{\ell}\right)_{u v}=\frac{1}{2^n} \sum_{i=0}^n \sum_{j=0}^k(-1)^j\binom{k}{j}\binom{n-k}{i-j}(n-2 i)^{\ell}$$ as desired. Setting $u=v$ (so that $k=0$, and hence $j=0$) in this equation yields $$\left(\boldsymbol{A}^{\ell}\right)_{uu}=\frac{1}{2^n} \sum_{i=0}^n\binom{n}{i}(n-2 i)^{\ell}$$ completing the proof. $\square$
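Formula (2.9) can be tested directly against matrix powers. The sketch below (not from the text; parameters chosen arbitrarily) compares both sides for every pair of vertices of $C_4$ with $\ell=5$.

```python
from itertools import product
from math import comb
import numpy as np

n, ell = 4, 5
V = list(product((0, 1), repeat=n))
A = np.array([[int(sum(a != b for a, b in zip(u, v)) == 1) for v in V]
              for u in V])
Al = np.linalg.matrix_power(A, ell)  # (A^ell)_{uv} counts walks directly

def rhs(k):
    """Right-hand side of (2.9) when u and v disagree in k coordinates."""
    total = sum((-1) ** j * comb(k, j) * comb(n - k, i - j) * (n - 2 * i) ** ell
                for i in range(n + 1) for j in range(min(i, k) + 1))
    return total // 2 ** n

for a, u in enumerate(V):
    for b, v in enumerate(V):
        k = sum(x != y for x, y in zip(u, v))
        assert Al[a, b] == rhs(k)
```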
**Example 2.6.** Setting $k=1$ in (2.9) yields
$$
\begin{aligned}
\left(\boldsymbol{A}^{\ell}\right)_{u v} & =\frac{1}{2^n} \sum_{i=0}^n\left[\binom{n-1}{i}-\binom{n-1}{i-1}\right](n-2 i)^{\ell} \qquad \text{(correct)}\\\\
& =\frac{1}{2^n} \sum_{i=0}^{n-1}\binom{n-1}{i} \frac{(n-2 i)^{\ell+1}}{n-i} \qquad \text{(incorrect?)}
\end{aligned}
$$
> Take $n=\ell=2$ (so $k=1$); then the first expression gives $$\frac{1}{4} \sum_{i=0}^2\left[\binom{1}{i}-\binom{1}{i-1}\right](2-2 i)^{2}=\dfrac{1}{4}(4-4)=0$$ but the second gives $$\frac{1}{4} \sum_{i=0}^{1}\binom{1}{i} \frac{(2-2 i)^{3}}{2-i}=\dfrac{1}{4}(4)=1$$
I think the second line should instead be
$$
\begin{aligned}
\left(\boldsymbol{A}^{\ell}\right)_{u v} & =\frac{1}{2^n} \sum_{i=0}^n\left[\binom{n-1}{i}-\binom{n-1}{i-1}\right](n-2 i)^{\ell} \\\\
& =\frac{1}{2^n} \sum_{i=0}^{n-1}\binom{n-1}{i}[(n-2i)^\ell-(n-2i-2)^\ell]
\end{aligned}
$$
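For $n=\ell=2$ the corrected right-hand side gives $\frac{1}{4}\left[(4-0)+(0-4)\right]=0$, now agreeing with the first line. The proposed identity can also be checked mechanically over a range of parameters; here is a short sketch (mine) doing so.

```python
from math import comb

def c(a, b):
    """Binomial coefficient with the convention C(a, b) = 0 for b < 0."""
    return comb(a, b) if b >= 0 else 0

def first(n, ell):
    """2^n times the first line: sum of [C(n-1,i) - C(n-1,i-1)] (n-2i)^ell."""
    return sum((c(n - 1, i) - c(n - 1, i - 1)) * (n - 2 * i) ** ell
               for i in range(n + 1))

def second(n, ell):
    """2^n times the proposed correction."""
    return sum(c(n - 1, i) * ((n - 2 * i) ** ell - (n - 2 * i - 2) ** ell)
               for i in range(n))

assert all(first(n, l) == second(n, l)
           for n in range(1, 9) for l in range(9))
```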