# Asset Pricing and Portfolio Choice Theory (Second edition)
author **Kerry E. Back**
This webpage is written by **You Cih, Chen**, drawing partially on the exercises of Capital Market (I)
###### tags: `資本` [:arrow_right:Link to this page](https://hackmd.io/tHDmL8WWQtSvnHUmmCGlEg) [:house_with_garden:Capital Market notes](https://hackmd.io/EzdObDGdQDiX99PN32PnDQ?both)
>This page contains the exercises at the end of each textbook chapter; the few questions without answers are highlighted in yellow.
# Chapter 1
## Exercises 1.1.
1. negative exponential utility
$u(w)=-e^{-\alpha w}$
$\tau(w)=\dfrac{1}{\alpha(w)}$
$\alpha(w)=-\dfrac{u''(w)}{u'(w)}$
$u'(w)=\alpha e^{-\alpha w}$
$u''(w)=-\alpha^2 e^{-\alpha w}$
$\alpha(w)=-\dfrac{-\alpha^2 e^{-\alpha w}}{\alpha e^{-\alpha w}}=\alpha$
$\tau(w)=1/\alpha\ \ \ \blacksquare$
2. power utility
$u(w)=\dfrac{1}{1-\rho}w^{1-\rho}$
$u'(w)=w^{-\rho}$
$u''(w)=-\rho w^{-\rho-1}$
$\alpha(w)=-\dfrac{u''(w)}{u'(w)}=-\dfrac{-\rho w^{-\rho-1}}{w^{-\rho}}=\rho w^{-1}$
$\tau(w)=\dfrac{1}{\alpha(w)}=\dfrac{w}{\rho}\ \ \ \blacksquare$
3. log utility
$u(w)=\log(w)$
$u'(w)=\dfrac{1}{w}=w^{-1}$
$u''(w)=-w^{-2}$
$\alpha(w)=-\dfrac{u''(w)}{u'(w)}=-\dfrac{-w^{-2}}{w^{-1}}=w^{-1}$
$\tau(w)=w\ \ \ \blacksquare$
4. shifted log utility
$u(w)=\log(w-\xi)$
$u'(w)=\dfrac{1}{w-\xi}=(w-\xi)^{-1}$
$u''(w)=-(w-\xi)^{-2}$
$\alpha(w)=-\dfrac{u''(w)}{u'(w)}=-\dfrac{-(w-\xi)^{-2}}{(w-\xi)^{-1}}=(w-\xi)^{-1}$
$\tau(w)=w-\xi\ \ \ \blacksquare$
5. shifted power utility
$u(w)=\dfrac{\rho}{1-\rho}\left(\dfrac{w-\xi}{\rho}\right)^{1-\rho}$
$u'(w)=\dfrac{\rho}{1-\rho}(1-\rho)\left(\dfrac{w-\xi}{\rho}\right)^{1-\rho-1}*\dfrac{1}{\rho}=\left(\dfrac{w-\xi}{\rho}\right)^{-\rho}$
$u''(w)=-\rho\left(\dfrac{w-\xi}{\rho}\right)^{-\rho-1}*\dfrac{1}{\rho}=-\left(\dfrac{w-\xi}{\rho}\right)^{-\rho-1}$
$\alpha(w)=-\dfrac{u''(w)}{u'(w)}=\left(\dfrac{w-\xi}{\rho}\right)^{-\rho-1}/\left(\dfrac{w-\xi}{\rho}\right)^{-\rho}=\left(\dfrac{w-\xi}{\rho}\right)^{-1}$
$\tau(w)=\dfrac{w-\xi}{\rho}\ \ \ \blacksquare$
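As a sanity check, the five risk-tolerance formulas above can be verified with finite differences; the parameter values ($\alpha=2$, $\rho=3$, $\xi=1$) and the evaluation point $w=2$ below are arbitrary choices for illustration.

```python
import math

def risk_tolerance(u, w, h=1e-5):
    """tau(w) = -u'(w)/u''(w), estimated with central finite differences."""
    du  = (u(w + h) - u(w - h)) / (2 * h)
    d2u = (u(w + h) - 2 * u(w) + u(w - h)) / h**2
    return -du / d2u

alpha, rho, xi = 2.0, 3.0, 1.0
utilities = {
    "CARA":          (lambda w: -math.exp(-alpha * w),         1 / alpha),
    "power":         (lambda w: w**(1 - rho) / (1 - rho),      2.0 / rho),
    "log":           (math.log,                                2.0),
    "shifted log":   (lambda w: math.log(w - xi),              2.0 - xi),
    "shifted power": (lambda w: rho / (1 - rho) * ((w - xi) / rho)**(1 - rho),
                      (2.0 - xi) / rho),
}
for name, (u, formula) in utilities.items():
    print(f"{name:14s} tau(2) = {risk_tolerance(u, 2.0):.4f}  (formula: {formula:.4f})")
```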
## Exercises 1.2.
### (a)
$u(w-\pi)=\frac{1}{2}u(w-x)+\frac{1}{2}u(w+x)$
* If u is log utility, $u(\tilde{w})=\log(\tilde{w})$
$\log(w-\pi)=\frac{1}{2}\log(w-x)+\frac{1}{2}\log(w+x)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\frac{1}{2}\log(w^2-x^2)$
$w-\pi=(w^2-x^2)^{\frac{1}{2}}=\sqrt{(w^2-x^2)}$
$\pi=w-\sqrt{(w^2-x^2)}\ \ \ \blacksquare$
* If u is power utility, $u(\tilde{w})=\dfrac{1}{1-\rho}\tilde{w}^{1-\rho}$
$\dfrac{1}{1-\rho}(w-\pi)^{1-\rho}=\dfrac{1}{2(1-\rho)}(w-x)^{1-\rho}+\dfrac{1}{2(1-\rho)}(w+x)^{1-\rho}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\dfrac{1}{2(1-\rho)}\left[(w-x)^{1-\rho}+(w+x)^{1-\rho}\right]$
$(w-\pi)^{1-\rho}=\dfrac{(w-x)^{1-\rho}+(w+x)^{1-\rho}}{2}$
$w-\pi=\left[\dfrac{(w-x)^{1-\rho}+(w+x)^{1-\rho}}{2}\right]^{1/(1-\rho)}$
$\pi=w-\left[\dfrac{(w-x)^{1-\rho}+(w+x)^{1-\rho}}{2}\right]^{1/(1-\rho)}\ \ \ \blacksquare$
### (b)
$u(w)=\frac{1}{2}u(w-x)+\frac{1}{2}u(w+y)$
* If u is log utility, $u(\tilde{w})=\log(\tilde{w})$
$\log(w)=\dfrac{1}{2}\log(w-x)+\dfrac{1}{2}\log(w+y)\\
\ \ \ \ \ \ \ \ \ \ \ =\dfrac{1}{2}\log[(w-x)(w+y)]$
$\dfrac{1}{w}=\dfrac{1}{\sqrt{(w-x)(w+y)}}$
$w=\sqrt{(w-x)(w+y)}$
$w^2=(w-x)(w+y)$
$w-x=\dfrac{w^2}{w+y}$
$x=w-\dfrac{w^2}{w+y}\\
\ \ =\dfrac{wy}{w+y}\ \ \ \blacksquare$
* If u is power utility, $u(\tilde{w})=\dfrac{1}{1-\rho}\tilde{w}^{1-\rho}$
$\dfrac{1}{1-\rho}w^{1-\rho}=\dfrac{1}{2(1-\rho)}(w-x)^{1-\rho}+\dfrac{1}{2(1-\rho)}(w+y)^{1-\rho}$
$2w^{1-\rho}=(w-x)^{1-\rho}+(w+y)^{1-\rho}$
$(w-x)^{1-\rho}=2w^{1-\rho}-(w+y)^{1-\rho}$
$w-x=\large\sqrt[1-\rho]{2w^{1-\rho}-(w+y)^{1-\rho}}$
$x=w-\large\sqrt[1-\rho]{2w^{1-\rho}-(w+y)^{1-\rho}}\ \ \ \blacksquare$
### (c) by homework
Wealth is $\$100{,}000$, and $\rho\in\{0.5,\ 1,\ 2,\ 5,\ 10,\ 40\}$.
The gambles are $x=\$100$, $x=\$1{,}000$, $x=\$10{,}000$, and $x=\$25{,}000$ respectively.
|$\pi$ | $x=\$100$ | $x=\$1{,}000$ | $x=\$10{,}000$ | $x=\$25{,}000$ |
| -------- | -------- | -------- | -------- | -------- |
| $\rho=0.5$ | 0.03 | 3 | 251 | 1588 |
| $\rho=1$ | 0.05 | 5 | 501 | 3175 |
| $\rho=2$ |0.10 | 10 | 1000 | 6250 |
| $\rho=5$ |0.25 | 25 | 2434 | 13486 |
| $\rho=10$ | 0.5 | 50 | 4424 | 19086 |
| $\rho=40$ | 2 | 195 | 8387 | 23655 |
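The table can be reproduced by solving $u(w-\pi)=\frac{1}{2}u(w-x)+\frac{1}{2}u(w+x)$ in closed form for CRRA utility, with $\rho=1$ handled as log utility; a minimal sketch:

```python
import math

def premium(w, x, rho):
    """pi solving u(w - pi) = 0.5*u(w - x) + 0.5*u(w + x) for CRRA utility."""
    if rho == 1:                                   # log utility
        return w - math.sqrt((w - x) * (w + x))
    ce = (0.5 * (w - x)**(1 - rho) + 0.5 * (w + x)**(1 - rho))**(1 / (1 - rho))
    return w - ce

w = 100_000
for rho in (0.5, 1, 2, 5, 10, 40):
    row = "  ".join(f"{premium(w, x, rho):>9.2f}" for x in (100, 1_000, 10_000, 25_000))
    print(f"rho={rho:>4}: {row}")
```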
### (d) :star: weird
The gamble is as in part (b), so $u(w)=\frac{1}{2}u(w-x)+\frac{1}{2}u(w+y)$
Since $\rho>1$, the utility is power utility.
$\dfrac{1}{1-\rho}w^{1-\rho}=\dfrac{1}{2(1-\rho)}(w-x)^{1-\rho}+\dfrac{1}{2(1-\rho)}(w+y)^{1-\rho}$
$2w^{1-\rho}=(w-x)^{1-\rho}+(w+y)^{1-\rho}\tag{1}$
* If $\dfrac{x}{w}\geq1-0.5^{1/(\rho-1)}$
$x\geq w-{0.5}^{1/(\rho-1)}w$
$w-x\leq {0.5}^{1/(\rho-1)}w$
$(w-x)^{1-\rho}\geq 0.5^{\frac{1}{\rho-1}*(1-\rho)}\ w^{1-\rho}=2w^{1-\rho}$ because $\rho>1$, so ${1-\rho}<0$
$(w-x)^{1-\rho}\geq 2w^{1-\rho}\tag{2}$
denote $(w-x)^{1-\rho}=2w^{1-\rho}+\Delta$
$\therefore$ substituting into $eq(1)$: $2w^{1-\rho}=2w^{1-\rho}+\Delta+(w+y)^{1-\rho}$, with $\Delta\geq 0$ by $eq(2)$
$(w+y)^{1-\rho}=-\Delta\leq 0$
But $(w+y)^{1-\rho}>0$ for any $w+y>0$, which is a contradiction.
Therefore, no matter how large $y$ is, no payment in the good state can make the investor indifferent: the investor rejects the gamble. $\ \ \ \blacksquare$
Or put in another way
$2w^{1-\rho}=(w-x)^{1-\rho}+(w+y)^{1-\rho}\tag{1}$
$(w-x)^{1-\rho}-2w^{1-\rho}\geq 0\tag{2}$
$\therefore (w-x)^{1-\rho}-2w^{1-\rho}=-(w+y)^{1-\rho}\geq 0$
$(w+y)^{1-\rho}\leq 0\ \ \ \blacksquare$
therefore, the outcome is the same.
* If $\rho\geq\dfrac{\log(0.5)+\log(1-x/w)}{\log(1-x/w)}$
notice that $\log{(1-x/w)}<0$ because $0<\dfrac{x}{w}<1$
$\rho-1\geq\dfrac{\log(0.5)}{\log(1-x/w)}\\
(\rho-1)\log(1-\tfrac{x}{w})\leq\log(0.5)\ \ (\text{multiplying by }\log(1-\tfrac{x}{w})<0\text{ flips the inequality})\\
(1-\rho)\log(1-\tfrac{x}{w})\geq\log(2)\\
\left(1-\dfrac{x}{w}\right)^{1-\rho}\geq 2\\
(w-x)^{1-\rho}\geq 2w^{1-\rho}$
This is exactly condition $eq(2)$ from the first bullet, so the same contradiction applies:
$(w+y)^{1-\rho}=2w^{1-\rho}-(w-x)^{1-\rho}\leq 0$
is impossible, and the investor rejects the gamble for every $y$. $\ \ \ \blacksquare$
## Exercises 1.3.
### (a)
$u(\tilde{w})=-e^{-\alpha \tilde{w}}$
$\large E[u(\tilde{x})]=E[u(\tilde{x}+\tilde{w}-\text{BID})]\\
\large E[-e^{-\alpha \tilde{x}}]=E[-e^{-\alpha (\tilde{x}+\tilde{w}-\text{BID})}]\\
E[-e^{-\alpha \tilde{x}}]=-E[e^{-\alpha \tilde{x}}]=-\exp(-\alpha\mu_x+\dfrac{1}{2}\alpha^2\sigma^2_x)\\
E[-e^{-\alpha (\tilde{x}+\tilde{w}-\text{BID})}]=-\exp\{-\alpha( \mu_x+\mu_w-\text{BID})+\dfrac{1}{2}\alpha^2(\sigma^2_x+\sigma^2_w+2\rho\sigma_x\sigma_w)\}\\
-\exp(-\alpha\mu_x+\dfrac{1}{2}\alpha^2\sigma^2_x)=-\exp\{-\alpha( \mu_x+\mu_w-\text{BID})+\dfrac{1}{2}\alpha^2(\sigma^2_x+\sigma^2_w+2\rho\sigma_x\sigma_w)\}\\
-\alpha\mu_x+\dfrac{1}{2}\alpha^2\sigma^2_x=-\alpha( \mu_x+\mu_w-\text{BID})+\dfrac{1}{2}\alpha^2(\sigma^2_x+\sigma^2_w+2\rho\sigma_x\sigma_w)\\
0=-\alpha( \mu_w-\text{BID})+\dfrac{1}{2}\alpha^2(\sigma^2_w+2\rho\sigma_x\sigma_w)\\
0=-\alpha\mu_w+\alpha\text{BID}+\dfrac{1}{2}\alpha^2\sigma^2_w+\alpha^2\rho\sigma_x\sigma_w\\
\alpha\text{BID}=\alpha\mu_w-\dfrac{1}{2}\alpha^2\sigma^2_w-\alpha^2\rho\sigma_x\sigma_w\\
\text{BID}=\mu_w-\dfrac{1}{2}\alpha\sigma^2_w-\alpha\rho\sigma_x\sigma_w\ \ \ \blacksquare$
### (b)
$u(\tilde{w})=-e^{-\alpha \tilde{w}}$
$\large E[u(\tilde{x})]=E[u(\tilde{x}-\tilde{w}+\text{ASK})]\\
\large E[-e^{-\alpha \tilde{x}}]=E[-e^{-\alpha (\tilde{x}-\tilde{w}+\text{ASK})}]\\
$-\exp\{-\alpha\mu_x+\dfrac{1}{2}\alpha^2\sigma^2_x\}=-\exp\{-\alpha(\mu_x-\mu_w+\text{ASK})+\dfrac{1}{2}\alpha^2(\sigma^2_x+\sigma^2_w-2\rho\sigma_x\sigma_w)\}\\
-\alpha\mu_x+\dfrac{1}{2}\alpha^2\sigma^2_x=-\alpha(\mu_x-\mu_w+\text{ASK})+\dfrac{1}{2}\alpha^2(\sigma^2_x+\sigma^2_w-2\rho\sigma_x\sigma_w)\\
0=-\alpha(-\mu_w+\text{ASK})+\dfrac{1}{2}\alpha^2(\sigma^2_w-2\rho\sigma_x\sigma_w)\\
0=\alpha\mu_w-\alpha \text{ASK}+\dfrac{1}{2}\alpha^2\sigma^2_w-\alpha^2\rho\sigma_x\sigma_w\\
\alpha \text{ASK}=\alpha\mu_w+\dfrac{1}{2}\alpha^2\sigma^2_w-\alpha^2\rho\sigma_x\sigma_w\\
\text{ASK}=\mu_w+\dfrac{1}{2}\alpha\sigma^2_w-\alpha\rho\sigma_x\sigma_w\ \ \ \blacksquare$
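A Monte Carlo sanity check of the BID formula, with made-up parameter values: at the closed-form bid, the expected utility after buying the asset should match the expected utility without the trade, up to simulation noise.

```python
import math, random

random.seed(0)
alpha, corr = 0.5, 0.4
mu_x, sd_x = 1.0, 0.2      # existing random wealth  x ~ N(mu_x, sd_x^2)
mu_w, sd_w = 1.2, 0.3      # asset being bought      w ~ N(mu_w, sd_w^2)

# closed-form bid from the derivation above
bid = mu_w - 0.5 * alpha * sd_w**2 - alpha * corr * sd_x * sd_w

u = lambda wealth: -math.exp(-alpha * wealth)
n = 200_000
eu_without = eu_with = 0.0
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x = mu_x + sd_x * z1
    w = mu_w + sd_w * (corr * z1 + math.sqrt(1 - corr**2) * z2)
    eu_without += u(x) / n          # E[u(x)]: no trade
    eu_with += u(x + w - bid) / n   # E[u(x + w - BID)]: buy at BID
print(bid, eu_without, eu_with)
```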
## ==Exercises 1.4.==
## Exercises 1.5.
### (a)
* If log utility
$u(\tilde{w})=\log(\tilde{w})$
$u(w(1-\pi))=E[u(w(1+\tilde{\varepsilon}))]\\
\log(w(1-\pi))=E[\log(w(1+\tilde{\varepsilon}))]\\
\log(w)+\log(1-\pi)=E[\log(w)+\log(1+\tilde{\varepsilon})]\\
\log(1-\pi)=E[\log(1+\tilde{\varepsilon})]\\
1-\pi=\exp\{E[\log(1+\tilde{\varepsilon})]\}\\
\pi=1-\exp\{E[\log(1+\tilde{\varepsilon})]\}$
$\therefore \pi$ is independent of $w$ $\ \ \ \blacksquare$
* If power utility
$u(\tilde{w})=\dfrac{1}{1-\rho}\tilde{w}^{1-\rho}$
$u(w(1-\pi))=E[u(w(1+\tilde{\varepsilon}))]\\
\dfrac{1}{1-\rho}(w(1-\pi))^{1-\rho}=\dfrac{1}{1-\rho}E[(w(1+\tilde{\varepsilon}))^{1-\rho}]\\
w^{1-\rho}(1-\pi)^{1-\rho}=E[w^{1-\rho}(1+\tilde{\varepsilon})^{1-\rho}]\\
(1-\pi)^{1-\rho}=E[(1+\tilde{\varepsilon})^{1-\rho}]\\
(1-\pi)=\sqrt[1-\rho]{E[(1+\tilde{\varepsilon})^{1-\rho}]}\\
\pi=1-\sqrt[1-\rho]{E[(1+\tilde{\varepsilon})^{1-\rho}]}\ \ \ \blacksquare$
### (b)
* If log utility $(\rho=1)$
$1+\tilde{\varepsilon}=e^{\tilde{z}}\\
\tilde{z}\sim N(-\dfrac{\sigma^2}{2},\sigma^2)\\
u(w(1-\pi))=E[u(w(1+\tilde{\varepsilon}))]\\
\log(w(1-\pi))=E[\log(w(1+\tilde{\varepsilon}))]\\
\log(w)+\log(1-\pi)=\log(w)+E[\log(1+\tilde{\varepsilon})]\\
\log(1-\pi)=E[\log(1+\tilde{\varepsilon})]\\
1-\pi=\exp\{E[\log(1+\tilde{\varepsilon})]\}$
$\log(1+\tilde{\varepsilon})=\tilde{z}\\
E[\log(1+\tilde{\varepsilon})]=E[\tilde{z}]=-\dfrac{\sigma^2}{2}\\
1-\pi=e^{-\sigma^2/2}\\
\pi=1-e^{-\sigma^2/2}\ \ \ \blacksquare$
* If power utility $(\rho>1)$
$u(w(1-\pi))=E[u(w(1+\tilde{\varepsilon}))]\\
\dfrac{1}{1-\rho}(w(1-\pi))^{1-\rho}=\dfrac{1}{1-\rho}E[(w(1+\tilde{\varepsilon}))^{1-\rho}]\\
(1-\pi)^{1-\rho}=E[(1+\tilde{\varepsilon})^{1-\rho}]\\
(1-\rho)\log(1-\pi)=\log(E[(1+\tilde{\varepsilon})^{1-\rho}])\\
1+\tilde{\varepsilon}=e^{\tilde{z}}\\
(1+\tilde{\varepsilon})^{1-\rho}=e^{(1-\rho)\tilde{z}}\\
E[(1+\tilde{\varepsilon})^{1-\rho}]=E[e^{(1-\rho)\tilde{z}}]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\left(-\dfrac{(1-\rho)\sigma^2}{2}+\dfrac{1}{2}(1-\rho)^2\sigma^2\right)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\left(\dfrac{(1-\rho)^2\sigma^2-(1-\rho)\sigma^2}{2}\right)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\left(\dfrac{(1-\rho)\sigma^2[(1-\rho)-1]}{2}\right)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\left(\dfrac{-\rho(1-\rho)\sigma^2}{2}\right)\\
(1-\rho)\log(1-\pi)=\log(\exp\left(\dfrac{-\rho(1-\rho)\sigma^2}{2}\right))=\dfrac{-\rho(1-\rho)\sigma^2}{2}\\
\log(1-\pi)=\dfrac{-\rho\sigma^2}{2}\\
1-\pi=\exp(-\rho\sigma^2/2)\\
\pi=1-\exp(-\rho\sigma^2/2)\ \ \ \blacksquare$
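A Monte Carlo check of $\pi=1-e^{-\rho\sigma^2/2}$ under the lognormal assumption $1+\tilde{\varepsilon}=e^{\tilde{z}}$; the values of $\rho$ and $\sigma$ are arbitrary.

```python
import math, random

random.seed(1)
rho, sigma = 4.0, 0.25
n = 300_000
# 1 + eps = e^z with z ~ N(-sigma^2/2, sigma^2), so E[eps] = 0
m = sum(math.exp((1 - rho) * random.gauss(-sigma**2 / 2, sigma))
        for _ in range(n)) / n                  # E[(1+eps)^(1-rho)]
pi_mc = 1 - m**(1 / (1 - rho))
pi_formula = 1 - math.exp(-rho * sigma**2 / 2)
print(pi_mc, pi_formula)
```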
## Exercises 1.6.
If $E[\tilde{\varepsilon}|\tilde{y}]=0$
$Cov(\tilde{y},\tilde{\varepsilon})=E[\tilde{y}\tilde{\varepsilon}]-E[\tilde{y}]E[\tilde{\varepsilon}]\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =E[E[\tilde{y}\tilde{\varepsilon}|\tilde{y}]]-E[\tilde{y}]E[E[\tilde{\varepsilon}|\tilde{y}]]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =E[\tilde{y}E[\tilde{\varepsilon}|\tilde{y}]]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =0$
## Exercises 1.7.
$\tilde{y}=e^{\tilde{x}}\\
\tilde{x}\sim N(\mu,\sigma^2)\\
E[\tilde{y}]=E[e^{\tilde{x}}]=e^{\mu+\frac{1}{2}\sigma^2}\\
var(\tilde{y})=E[\tilde{y}^2]-E[\tilde{y}]^2\\
\ \ \ \ \ \ \ \ \ \ \ =E[e^{2\tilde{x}}]-E[e^{\tilde{x}}]^2\\
\ \ \ \ \ \ \ \ \ \ \ =e^{2\mu+2\sigma^2}-e^{2\mu+\sigma^2}\\
\dfrac{\text{stdev}(\tilde{y})}{E[\tilde{y}]}=\dfrac{\sqrt{e^{2\mu+2\sigma^2}-e^{2\mu+\sigma^2}}}{e^{\mu+\frac{1}{2}\sigma^2}}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\dfrac{\sqrt{e^{2\mu+2\sigma^2}-e^{2\mu+\sigma^2}}}{\sqrt{e^{2\mu+\sigma^2}}}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\sqrt{\dfrac{e^{2\mu+2\sigma^2}-e^{2\mu+\sigma^2}}{e^{2\mu+\sigma^2}}}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\sqrt{e^{\sigma^2}-1}\ \ \ \blacksquare$
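The ratio $\sqrt{e^{\sigma^2}-1}$ depends only on $\sigma$, which a quick simulation confirms ($\mu$ and $\sigma$ below are arbitrary):

```python
import math, random

random.seed(2)
mu, sigma = 0.1, 0.4
n = 300_000
ys = [math.exp(random.gauss(mu, sigma)) for _ in range(n)]
mean = sum(ys) / n
var = sum((y - mean)**2 for y in ys) / n
cv_mc = math.sqrt(var) / mean                       # sample stdev / sample mean
cv_formula = math.sqrt(math.exp(sigma**2) - 1)      # depends on sigma only
print(cv_mc, cv_formula)
```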
# Chapter 2
## Exercises 2.1.
$E[R_1]=E[R_2]=E[R_3]=1.1\\
R_f=1.05\\
\Sigma=\begin{pmatrix}0.09 & 0.06 & 0\\
0.06 & 0.09 & 0\\
0 & 0 & 0.09 \end{pmatrix}\\
\tilde{R}\sim N(\mu,\Sigma)\\
\phi=\dfrac{1}{\alpha w_0}\Sigma^{-1}(\mu-R_f\iota)\\
\mu-R_f\iota=\begin{bmatrix}1.1-1.05\\
1.1-1.05\\
1.1-1.05\end{bmatrix}=\begin{bmatrix}0.05\\ 0.05\\ 0.05\end{bmatrix}$
$\det\Sigma=0.09(0.09\times0.09-0)-0.06(0.06\times0.09-0)+0=0.09^3-0.09\times0.06^2=0.000405$
$\Sigma^{-1}=\dfrac{1}{\det\Sigma}\begin{pmatrix}
\begin{vmatrix}0.09 & 0\\0 & 0.09\end{vmatrix} &
\begin{vmatrix}0.06 & 0\\0 & 0.09\end{vmatrix} &
\begin{vmatrix}0.06 & 0.09\\0 & 0\end{vmatrix}\\
\begin{vmatrix}0.06 & 0\\0 & 0.09\end{vmatrix} &
\begin{vmatrix}0.09 & 0\\0 & 0.09\end{vmatrix} &
\begin{vmatrix}0.09 & 0.06\\0 & 0\end{vmatrix}\\
\begin{vmatrix}0.06 & 0\\0.09 & 0\end{vmatrix} &
\begin{vmatrix}0.09 & 0\\0.06 & 0\end{vmatrix} &
\begin{vmatrix}0.09 & 0.06\\0.06 & 0.09\end{vmatrix}\end{pmatrix}^T$
$\ \ \ \ \ \ \ \ =\dfrac{1}{\det\Sigma}\begin{pmatrix}0.09^2 & -(0.06*0.09) & 0\\
-(0.06*0.09) & 0.09^2 & -0\\
0 & -0 & 0.09^2-0.06^2\end{pmatrix}\\
\ \ \ \ \ \ \ \ =\dfrac{1}{0.000405}\begin{pmatrix}0.0081 & -0.0054 & 0\\
-0.0054 & 0.0081 & -0\\
0 & -0 & 0.0045\end{pmatrix}\\
\ \ \ \ \ \ \ \ =\begin{pmatrix}20 & -13.33 & 0\\
-13.33 & 20 & -0\\
0 & -0 & 11.11\end{pmatrix}$
$\alpha w_0=2\\
\phi=\dfrac{1}{2}\begin{pmatrix}20 & -13.33 & 0\\
-13.33 & 20 & -0\\
0 & -0 & 11.11\end{pmatrix}\begin{bmatrix}0.05\\ 0.05\\ 0.05\end{bmatrix}\\
\ \ \ =\dfrac{1}{2}\begin{pmatrix}0.05(20-13.33)\\
0.05(-13.33+20)\\
11.11*0.05\end{pmatrix}=\begin{pmatrix}0.167\\0.167\\0.278\end{pmatrix}\ \ \ \blacksquare$
## Exercises 2.2
### (a)
$w_1=(w_0-c_0-\theta'p)R_f+\theta'\tilde{x}$
$u_1(\tilde{w})=-e^{-\alpha \tilde{w}}\\
\ \ \ \ \ \ \ \ \ =\large-e^{-\alpha[(w_0-c_0-\theta'p)R_f+\theta'\tilde{x}]}$
$E[w_1]=E[(w_0-c_0-\theta'p)R_f+\theta'\tilde{x}]\\
\ \ \ \ \ \ \ \ \ \ =(w_0-c_0-\theta'p)R_f+\theta'E[\tilde{x}]\\
\ \ \ \ \ \ \ \ \ \ =(w_0-c_0-\theta'p)R_f+\theta'\mu_x$
$\text{var}(w_1)=\theta'\Sigma_x\theta$
$E[u_1(w_1)]=-\exp\left(-\alpha[(w_0-c_0-\theta'p)R_f+\theta'\mu_x]+\dfrac{1}{2}\alpha^2\theta'\Sigma_x\theta\right)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-\exp\left(-\alpha\left[(w_0-c_0-\theta'p)R_f+\theta'\mu_x-\dfrac{1}{2}\alpha\theta'\Sigma_x\theta\right]\right)$
$\text{CE}=(w_0-c_0-\theta'p)R_f+\theta'\mu_x-\dfrac{1}{2}\alpha\theta'\Sigma_x\theta\\
\dfrac{\partial\text{CE}}{\partial c_0}=-R_f<0\\
\dfrac{\partial\text{CE}}{\partial \theta}=-pR_f+\mu_x-\alpha\Sigma_x\theta=0$
$\alpha\Sigma_x\theta=\mu_x-pR_f\\
\theta=\dfrac{1}{\alpha}\Sigma_x^{-1}(\mu_x-R_fp)\ \ \ \blacksquare$
### (b)
$\tilde{R}\sim N(\mu,\Sigma)\\
w_1=(w_0-c_0-\theta'p)R_f+\textstyle\sum_i\theta_ip_i\tilde{R}_i$
Define $\phi$ by $\phi_i=\theta_ip_i$, so $\theta'p=\iota'\phi$:
$w_1=(w_0-c_0-\iota'\phi)R_f+\phi'\tilde{R}\\
E[w_1]=(w_0-c_0-\iota'\phi)R_f+\phi'\mu\\
\text{var}(w_1)=\phi'\Sigma\phi\\
\text{CE}=(w_0-c_0-\iota'\phi)R_f+\phi'\mu-\dfrac{1}{2}\alpha \phi'\Sigma\phi$
$\dfrac{\partial\text{CE}}{\partial \phi}=-R_f\iota+\mu-\alpha\Sigma\phi=0\\
\alpha\Sigma\phi=\mu-R_f\iota\\
\phi=\dfrac{1}{\alpha}\Sigma^{-1}(\mu-R_f\iota)\ \ \ \blacksquare$
## Exercises 2.3
### (a)
$\tilde{w_1}=\phi_fR_f+\phi\tilde{R}+\tilde{y}$
$u(w_1)=-\exp\{-\alpha(\phi_fR_f+\phi\tilde{R}+\tilde{y})\}\\
\ \ \ \ \ \ \ \ \ \ =-\exp\{-\alpha((w_0-\phi)R_f+\phi\tilde{R}+\tilde{y})\}\\
\ \ \ \ \ \ \ \ \ \ =-\exp\{-\alpha(w_0R_f+\phi(\tilde{R}-R_f)+\tilde{y})\}$
$E[u(w_1)]=-E[\exp\{-\alpha(w_0R_f+\phi(\tilde{R}-R_f)+\tilde{y})\}]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-E[\exp\{-\alpha(w_0R_f+\phi(\tilde{R}-R_f))\}\exp\{-\alpha\tilde{y}\}]$
Since $\tilde{y}$ is independent of $\tilde{R}$, the expectation factors:
$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-E[\exp\{-\alpha(w_0R_f+\phi(\tilde{R}-R_f))\}]E[\exp\{-\alpha\tilde{y}\}]$
Maximizing over $\phi$, the factor $E[\exp\{-\alpha\tilde{y}\}]$ is a positive constant, so it does not affect the optimal $\phi$. $\ \ \ \blacksquare$
### (b)
$b=\dfrac{Cov(\tilde{y},\tilde{R})}{var(\tilde{R})}\\
a=\dfrac{E[\tilde{y}]-bE[\tilde{R}]}{R_f}\\
\tilde{\varepsilon}=\tilde{y}-aR_f-b\tilde{R}\\
\tilde{y}=\tilde{\varepsilon}+aR_f+b\tilde{R}\ \ \ \blacksquare$
$E[\tilde{\varepsilon}]=E[\tilde{y}]-aR_f-bE[\tilde{R}]\\
\ \ \ \ \ \ \ =aR_f+bE[\tilde{R}]-aR_f-bE[\tilde{R}]=0\ \ \ \blacksquare$
$Cov(\tilde{\varepsilon},\tilde{R})=Cov(\tilde{y}-aR_f-b\tilde{R},\tilde{R})\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =Cov(\tilde{y},\tilde{R})-b\,Cov(\tilde{R},\tilde{R})\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =Cov(\tilde{y},\tilde{R})-\dfrac{Cov(\tilde{y},\tilde{R})}{\text{var}(\tilde{R})}\text{var}(\tilde{R})\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =0$
It shows that $\tilde{\varepsilon}$ is uncorrelated with $\tilde{R}$, and hence $Cov(\tilde{y},\tilde{R})=b\,\text{var}(\tilde{R})\ \ \ \blacksquare$
### ==(c)==
$E[u(w_1)]=-E[\exp\{-\alpha(w_0R_f+\phi(\tilde{R}-R_f)+\tilde{y})\}]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-\exp\left\{-\alpha\left(w_0R_f+\phi(\mu-R_f)+E[\tilde{y}]\right)+\dfrac{1}{2}\alpha^2\left[\phi^2\text{var}(\tilde{R})+\text{var}(\tilde{y})+2\phi\text{Cov}(\tilde{y},\tilde{R})\right]\right\}$
$\text{CE}=w_0R_f+\phi(\mu-R_f)+E[\tilde{y}]-\dfrac{1}{2}\alpha\left[\phi^2\text{var}(\tilde{R})+\text{var}(\tilde{y})+2\phi\text{Cov}(\tilde{y},\tilde{R})\right]$
$\because \text{Cov}(\tilde{y},\tilde{R})=b\text{var}(\tilde{R})$ given $(b)$
$\dfrac{\partial\text{CE}}{\partial\phi}=\mu-R_f-\alpha\phi\text{var}(\tilde{R})-\alpha b\text{var}(\tilde{R})=0\\
\alpha\phi\text{var}(\tilde{R})=\mu-R_f-\alpha b\text{var}(\tilde{R})$
$\phi=\dfrac{\mu-R_f-\alpha b\text{var}(\tilde{R})}{\alpha\text{var}(\tilde{R})}\ \ \ \blacksquare$
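A scalar sanity check of the final $\phi$, with made-up parameter values: under the mean-variance certainty equivalent of $w_0R_f+\phi(\tilde{R}-R_f)+\tilde{y}$, the closed-form $\phi$ should beat nearby portfolios.

```python
alpha, Rf, mu = 2.0, 1.02, 1.08
var_R, var_y, cov_yR, Ey, w0 = 0.04, 0.01, 0.008, 0.5, 10.0
b = cov_yR / var_R

def CE(phi):
    """Certainty equivalent of w0*Rf + phi*(R - Rf) + y under joint normality."""
    mean = w0 * Rf + phi * (mu - Rf) + Ey
    var = phi**2 * var_R + var_y + 2 * phi * cov_yR
    return mean - 0.5 * alpha * var

phi_star = (mu - Rf - alpha * b * var_R) / (alpha * var_R)
print(phi_star, CE(phi_star), CE(phi_star + 0.1), CE(phi_star - 0.1))
```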
## Exercises 2.4
$u(\tilde{w})=-e^{-\alpha\tilde{w}}\\
\tilde{w}=\phi'\tilde{R}\\
u(\tilde{w})=-\exp\{-\alpha\phi'\tilde{R}\}\\
E[u(\tilde{w})]=-E[\exp\{-\alpha\phi'\tilde{R}\}]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ =-\exp\{-\alpha\phi'\mu+\dfrac{1}{2}\alpha^2\phi'\Sigma\phi\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ =-\exp\{-\alpha(\phi'\mu-\dfrac{1}{2}\alpha\phi'\Sigma\phi)\}\\
\therefore\text{CE}=\phi'\mu-\dfrac{1}{2}\alpha\phi'\Sigma\phi\\
\mathcal{L}: CE-\gamma(\iota'\phi-w_0)\\
\dfrac{\partial\mathcal{L}}{\partial\phi}=\mu-\alpha\Sigma\phi-\iota\gamma=0$
$\alpha\Sigma\phi=\mu-\iota\gamma$
$\phi'=\dfrac{1}{\alpha}(\mu-\iota\gamma)'\Sigma^{-1}\tag{1}$
$\because \iota'\phi=w_0\\
\iota'\phi=(\iota'\phi)'=\phi'\iota\\
\left(\dfrac{1}{\alpha}(\mu-\iota\gamma)'\Sigma^{-1}\right)\iota=w_0\\
(\mu-\iota\gamma)'\Sigma^{-1}\iota=\alpha w_0\\
\mu'\Sigma^{-1}\iota-\gamma\iota'\Sigma^{-1}\iota=\alpha w_0\\
\gamma\iota'\Sigma^{-1}\iota=\mu'\Sigma^{-1}\iota-\alpha w_0$
$\gamma=\dfrac{\mu'\Sigma^{-1}\iota-\alpha w_0}{\iota'\Sigma^{-1}\iota}\tag{2}$
$\therefore\phi'=\dfrac{1}{\alpha}\left(\mu-\dfrac{\mu'\Sigma^{-1}\iota-\alpha w_0}{\iota'\Sigma^{-1}\iota}\,\iota\right)'\Sigma^{-1}\\
\ \ \ \ \ \ =\dfrac{1}{\alpha}\mu'\Sigma^{-1}+\left(\dfrac{\alpha w_0-\iota'\Sigma^{-1}\mu}{\alpha\iota'\Sigma^{-1}\iota}\right)\iota'\Sigma^{-1}$
$\ \ \ \ \phi=\dfrac{1}{\alpha}\Sigma^{-1}\mu+\left(\dfrac{\alpha w_0-\iota'\Sigma^{-1}\mu}{\alpha\iota'\Sigma^{-1}\iota}\right)\Sigma^{-1}\iota\ \ \ \blacksquare$
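A numeric check of the constrained optimum, with hypothetical $\mu$ and $\Sigma$: the formula should satisfy the budget constraint $\iota'\phi=w_0$ and the first-order condition $\mu-\alpha\Sigma\phi=\gamma\iota$ (all components of $\mu-\alpha\Sigma\phi$ equal).

```python
import numpy as np

alpha, w0 = 3.0, 1.0
mu = np.array([1.08, 1.12, 1.05])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.03]])
ones = np.ones(3)
Si_mu = np.linalg.solve(Sigma, mu)      # Sigma^{-1} mu
Si_1 = np.linalg.solve(Sigma, ones)     # Sigma^{-1} iota

phi = Si_mu / alpha + (alpha * w0 - ones @ Si_mu) / (alpha * (ones @ Si_1)) * Si_1
g = mu - alpha * (Sigma @ phi)          # should equal gamma * iota at the optimum
print(phi, ones @ phi, g)
```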
## Exercises 2.5
$\tilde{w}=w_0R_f+\phi'(\tilde{R}-R_f\iota)$
With quadratic utility, $E[u(\tilde{w})]=\xi E[\tilde{w}]-\dfrac{1}{2}E[\tilde{w}]^2-\dfrac{1}{2}\text{var}(\tilde{w})$, where
$E[\tilde{w}]=w_0R_f+\phi'(\mu-R_f\iota)\\
\text{var}(\tilde{w})=\phi'\Sigma\phi$
Denote $\phi'(\mu-R_f\iota)=\gamma$
$\dfrac{\partial E[u(\tilde{w})]}{\partial\phi}=\xi(\mu-R_f\iota)-\left[w_0R_f+\gamma\right](\mu-R_f\iota)-\Sigma\phi=0\\
(\xi-w_0R_f-\gamma)(\mu-R_f\iota)=\Sigma\phi$
$\phi=(\xi-w_0R_f-\gamma)\Sigma^{-1}(\mu-R_f\iota)\tag{1}$
Denote $\kappa^2=(\mu-R_f\iota)'\Sigma^{-1}(\mu-R_f\iota)$
$\because \gamma=(\mu-R_f\iota)'\phi=(\xi-w_0R_f-\gamma)\kappa^2\\
\gamma(1+\kappa^2)=(\xi-w_0R_f)\kappa^2$
$\gamma=\dfrac{(\xi-w_0R_f)\kappa^2}{1+\kappa^2}\tag{2}$
$\therefore\xi-w_0R_f-\gamma=\dfrac{\xi-w_0R_f}{1+\kappa^2}$, so
$\phi=\left(\dfrac{\xi-w_0R_f}{1+\kappa^2}\right)\Sigma^{-1}(\mu-R_f\iota)\ \ \ \blacksquare$
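A numeric check, with hypothetical two-asset parameters, that the closed-form $\phi$ satisfies the first-order condition $(\xi-w_0R_f-\gamma)(\mu-R_f\iota)=\Sigma\phi$:

```python
import numpy as np

xi, w0, Rf = 5.0, 1.0, 1.02
mu = np.array([1.10, 1.06])
Sigma = np.array([[0.05, 0.01],
                  [0.01, 0.03]])
excess = mu - Rf
kappa2 = excess @ np.linalg.solve(Sigma, excess)

phi = (xi - w0 * Rf) / (1 + kappa2) * np.linalg.solve(Sigma, excess)
gamma = excess @ phi
# first-order condition: (xi - w0*Rf - gamma)(mu - Rf*iota) = Sigma phi
print(phi, np.allclose((xi - w0 * Rf - gamma) * excess, Sigma @ phi))
```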
## Exercises 2.6
If $v(c_0,c_1)=\frac{1}{1-\rho}c_0^{1-\rho}+\frac{\delta}{1-\rho}c_1^{1-\rho}$
$\text{MRS}(c_0,c_1)=\dfrac{\dfrac{1}{1-\rho}(1-\rho)c_0^{-\rho}}{\dfrac{\delta}{1-\rho}(1-\rho)c_1^{-\rho}}=\dfrac{1}{\delta}\dfrac{c_0^{-\rho}}{c_1^{-\rho}}=\dfrac{1}{\delta}\left(\dfrac{c_1}{c_0}\right)^{\rho}\\
\text{EIS}=\dfrac{d\log(\dfrac{c_1}{c_0})}{d\log\left(\dfrac{1}{\delta}\left(\dfrac{c_1}{c_0}\right)^{\rho}\right)}\\
\ \ \ \ \ \ =\dfrac{d\log\left(\dfrac{c_1}{c_0}\right)}{d\left(-\log(\delta)+\rho\log\left(\dfrac{c_1}{c_0}\right)\right)}\\
\ \ \ \ \ \ =\dfrac{1}{\rho}\ \ \ \blacksquare$
## Exercises 2.7
$(a)$
$\max\limits_{c_0}\frac{1}{1-\rho}c_0^{1-\rho}+\frac{\delta}{1-\rho}c_1^{1-\rho}\ \ s.t.\ \ c_0+\frac{1}{R_f}c_1=w_0$
$c_0+\dfrac{c_1}{R_f}=w_0\\
c_1=w_0R_f-c_0R_f=R_f(w_0-c_0)$
$\max\limits_{\substack{c_0}}\frac{1}{1-\rho}c_0^{1-\rho}+\frac{\delta}{1-\rho}R_f^{1-\rho}(w_0-c_0)^{1-\rho}$
$u(c)=\dfrac{1}{1-\rho}c_0^{1-\rho}+\dfrac{\delta}{1-\rho}R_f^{1-\rho}(w_0-c_0)^{1-\rho}\\
\dfrac{\partial u(c)}{\partial c_0}=\dfrac{1}{1-\rho}(1-\rho)c_0^{1-\rho-1}-\dfrac{\delta}{1-\rho}(1-\rho)R_f^{1-\rho}(w_0-c_0)^{1-\rho-1}=0$
$c_0^{-\rho}=\delta R_f^{1-\rho}(w_0-c_0)^{-\rho}\\
c_0=\delta^{-\frac{1}{\rho}} R_f^{-\frac{1-\rho}{\rho}}(w_0-c_0)$
Denote $\Delta=\delta^{-\frac{1}{\rho}} R_f^{-\frac{1-\rho}{\rho}}$
$c_0+\Delta c_0=\Delta w_0\\
c_0(1+\Delta)=\Delta w_0\\
\dfrac{c_0}{w_0}=\dfrac{\Delta}{1+\Delta}=\dfrac{\delta^{-\frac{1}{\rho}} R_f^{1-\frac{1}{\rho}}}{1+\delta^{-\frac{1}{\rho}} R_f^{1-\frac{1}{\rho}}}$
If $\text{EIS}=\frac{1}{\rho}>1$, then $0<\rho<1$, and ***the optimal consumption-to-wealth ratio*** $\dfrac{c_0}{w_0}$ is a decreasing function of $R_f$ $($because $1-\frac{1}{\rho}<0)$.
If $\text{EIS}<1$, then $\rho>1$, and ***the optimal consumption-to-wealth ratio*** $\dfrac{c_0}{w_0}$ is an increasing function of $R_f$ $($because $1-\frac{1}{\rho}>0)$.$\ \ \ \blacksquare$
$(b)$
For given $c_0$ and $\tilde c_1$, we construct the ratio $\dfrac{c_0}{c_1}$ and express $R_f$ in terms of it to see the relation with the $\text{EIS}$.
Let $\Delta=\delta^{\frac{-1}{\rho}}R_f^{1-\frac{1}{\rho}}$
$c_1=R_f(w_0-c_0)=R_f\left (w_0-\dfrac{\delta^{\frac{-1}{\rho}}R_f^{1-\frac{1}{\rho}}w_0}{1+\delta^{\frac{-1}{\rho}}R_f^{1-\frac{1}{\rho}}}\right)=R_f\left(w_0-\dfrac{\Delta w_0}{1+\Delta}\right)$
$=R_f\left(\dfrac{w_0+\Delta w_0-\Delta w_0}{1+\Delta}\right)=R_f\left(\dfrac{w_0}{1+\Delta}\right)=\dfrac{w_0R_f}{1+\delta^{\frac{-1}{\rho}}R_f^{1-\frac{1}{\rho}}}$
$\dfrac{c_0}{c_1}=\dfrac{\dfrac{\Delta w_0}{1+\Delta}}{R_f\left(\dfrac{w_0}{1+\Delta}\right)}=\dfrac{\Delta}{R_f}=\delta^{\frac{-1}{\rho}}R_f^{1-\frac{1}{\rho}}*R_f^{-1}=\delta^{\frac{-1}{\rho}}R_f^{\frac{-1}{\rho}}$
$\Rightarrow R_f=\left(\dfrac{c_0}{c_1}\right)^{-\rho}\delta^{-1}=\left(\dfrac{c_1}{c_0}\right)^{\rho}\dfrac{1}{\delta}$
We find that when the $\text{EIS}=\frac{1}{\rho}$ is higher ($\rho$ smaller), the equilibrium $R_f$ is lower, given consumption growth $\dfrac{c_1}{c_0}>1$. $\ \ \ \blacksquare$
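The comparative statics of part (a) can be confirmed numerically ($\delta$ and the grid of $R_f$ values below are arbitrary):

```python
def c0_over_w0(Rf, rho, delta=0.97):
    """Optimal consumption-to-wealth ratio Delta/(1+Delta) from part (a)."""
    d = delta**(-1 / rho) * Rf**(1 - 1 / rho)
    return d / (1 + d)

# EIS = 1/rho > 1: the consumption share falls as Rf rises
print([round(c0_over_w0(Rf, 0.5), 4) for Rf in (1.00, 1.02, 1.04)])
# EIS = 1/rho < 1: the consumption share rises as Rf rises
print([round(c0_over_w0(Rf, 2.0), 4) for Rf in (1.00, 1.02, 1.04)])
```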
## Exercises 2.8
$\tilde c_1=(w_0-c_0)R_f+\tilde y$
$\max\limits_{\substack{c_0}}u(c_0)+\delta \text{E}[u((w_0-c_0)R_f+\tilde y)]$
$\text{F.O.C.}\ \ u'(c_0)-\delta R_fE[u'(\tilde c_1)]=0$
$u'(c_0^*)=\delta R_fE[u'((w_0-c_0^*)R_f+\tilde y)]$
Because $u^{'''}>0$ ($u'$ is convex), by ***Jensen's inequality***, we know that
$u'(c_0^*)=\delta R_fE[u'((w_0-c_0^*)R_f+\tilde y)]\geq \delta R_fu'\{E[(w_0-c_0^*)R_f+\tilde y]\}=\delta R_fu'\{(w_0-c_0^*)R_f+E[\tilde y]\}\tag{1}$
If $\tilde y=0$, the optimal $c_0$ satisfies $g(c_0)=0$, where $g(c)\equiv u'(c)-\delta R_fu'((w_0-c)R_f)$ is strictly decreasing in $c$
Given equation $(1)$, if $E[\tilde y]=0$, then at the with-risk optimum $c_0^*$
$u'(c_0^*)\geq \delta R_f u'((w_0-c_0^*)R_f)\\
\Rightarrow g(c_0^*)\geq 0=g(c_0)\\
\therefore c_0^*\leq c_0\ \ \ \blacksquare$
## Exercises 2.9
$(a)$
By #$2.8$, the problem is $\max\limits_{c_0} u(c_0)+\delta \text{E}[u((w_0-c_0)R_f+\tilde y)]$
$\text{F.O.C.}\ u'(c_0)=\delta R_fE[u'((w_0-c_0)R_f+\tilde y)]$
Define $\pi$ so that, with initial wealth $w^*=w_0-\pi$ and $\tilde y=0$, the same $c_0^*$ satisfies the risk-free F.O.C.:
$u'(c_0^*)=\delta R_fu'\{(w^*-c_0^*)R_f\}$
$\Rightarrow c_0^*$ is the optimal consumption of an investor who faces $\tilde y=0$ and has initial wealth $w_0-\pi\ \ \ \blacksquare$
$(b)$
If the utility function is $\text{CARA}\ u(\tilde{w})=-e^{-\alpha \tilde{w}}$, then $u'(w)=\alpha e^{-\alpha w}$
Given $(a)$, $c_0^*$ satisfies both
$u'(c_0^*)=\delta R_fE[u'\{(w_0-c_0^*)R_f+\tilde{y}\}]$ and $u'(c_0^*)=\delta R_fu'\{(w_0-\pi-c_0^*)R_f\}$
Equating the right-hand sides:
$E[\alpha\exp\{-\alpha[(w_0-c_0^*)R_f+\tilde{y}]\}]=\alpha\exp\{-\alpha(w_0-\pi-c_0^*)R_f\}\\
\alpha\exp\{-\alpha(w_0-c_0^*)R_f\}E[\exp\{-\alpha\tilde{y}\}]=\alpha\exp\{-\alpha(w_0-c_0^*)R_f\}\exp\{\alpha\pi R_f\}\\
E[\exp\{-\alpha\tilde{y}\}]=\exp\{\alpha\pi R_f\}\\
\pi=\dfrac{1}{\alpha R_f}\log E[\exp\{-\alpha\tilde{y}\}]$
Therefore, we find that $\pi$ is independent of initial wealth $w_0\ \ \ \blacksquare$
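A numeric check with made-up CARA parameters and $\tilde y\sim N(0,0.04)$ (so $E[e^{-\alpha\tilde y}]=e^{\frac{1}{2}\alpha^2\sigma^2}$): the premium $\pi$ computed from the two first-order conditions is the same at every wealth level.

```python
import math

alpha, delta, Rf = 2.0, 0.97, 1.03
M = math.exp(0.5 * alpha**2 * 0.04)    # E[exp(-alpha*y)] for y ~ N(0, 0.04)

def c0_star(w0, mgf):
    """Solve the CARA FOC exp(-a c0) = delta*Rf*mgf*exp(-a (w0-c0) Rf) for c0."""
    return (alpha * w0 * Rf - math.log(delta * Rf * mgf)) / (alpha * (1 + Rf))

def pi(w0):
    """Wealth reduction making the risk-free problem reproduce c0_star(w0, M)."""
    target = c0_star(w0, M)
    w_star = (alpha * (1 + Rf) * target + math.log(delta * Rf)) / (alpha * Rf)
    return w0 - w_star

print(pi(1.0), pi(10.0), pi(100.0))    # identical: pi does not depend on w0
```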
# Chapter 3
## Exercises 3.1
$(a)$
$\because R_u>R_d$, there are three possibilities:
1. $R_u>R_d\geq R_f$
2. $R_f\geq R_u>R_d$
3. $R_u>R_f>R_d$
* If $R_u>R_d\geq R_f$
$R_u-R_f>0\\
R_d-R_f\geq0$
$\Rightarrow$ Borrowing at the risk-free rate to buy the risky asset pays off $\geq 0$ in every state of the world, and $>0$ in the up state.
In this case, there is an arbitrage opportunity.
* If $R_f\geq R_u>R_d$
$R_f-R_u\geq0\\
R_f-R_d>0$
$\Rightarrow$ Shorting the risky asset and investing at the risk-free rate pays off $\geq 0$ in every state of the world, and $>0$ in the down state.
In this case, there is an arbitrage opportunity.
* If $R_u>R_f>R_d$
$R_u-R_f>0\\
R_d-R_f<0$
(buy one unit of the risky asset, sell one unit of the risk-free asset)
$\Rightarrow$ No arbitrage opportunity: any such strategy gains in one state and loses in the other, so no portfolio pays off in every state of the world. Only in this case is there no arbitrage opportunity.$\ \ \ \blacksquare$
$(b)$
* compute the unique vector of state prices
$R_fq_1+R_fq_2=1\tag{1}$
$R_uq_1+R_dq_2=1\tag{2}$
$\Rightarrow (R_f-R_u)q_1+(R_f-R_d)q_2=0$
$(R_f-R_u)q_1=(-R_f+R_d)q_2$
$q_1=\dfrac{(-R_f+R_d)q_2}{R_f-R_u}\tag{3}$
$\dfrac{R_fq_2(-R_f+R_d)}{R_f-R_u}+\dfrac{R_fq_2(R_f-R_u)}{R_f-R_u}=1$
$R_fq_2(R_d-R_u)=R_f-R_u$
$R_fq_2=\dfrac{R_f-R_u}{R_d-R_u}$
$q_2=\dfrac{R_f-R_u}{R_f(R_d-R_u)}$
$R_fq_1+\dfrac{R_f(R_f-R_u)}{R_f(R_d-R_u)}=1$
$R_fq_1=1-\dfrac{R_f-R_u}{R_d-R_u}$
$q_1=\dfrac{R_d-R_u-R_f+R_u}{R_f(R_d-R_u)}=\dfrac{R_d-R_f}{R_f(R_d-R_u)}\ \ \ \blacksquare$
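With hypothetical returns satisfying $R_d<R_f<R_u$, the closed-form state prices agree with solving the two pricing equations directly:

```python
import numpy as np

Rf, Ru, Rd = 1.02, 1.15, 0.95
# each row: one asset's payoff in (state 1, state 2); each price is 1
X = np.array([[Rf, Rf],
              [Ru, Rd]])
q = np.linalg.solve(X, np.ones(2))

q1 = (Rd - Rf) / (Rf * (Rd - Ru))
q2 = (Rf - Ru) / (Rf * (Rd - Ru))
print(q, (q1, q2))
```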
$(c)$
* compute the unique risk-neutral probabilities of states $\omega_1,\omega_2$
$Q(A)=R_fE[\tilde{m}1_A]$
$Q(\omega_1)=R_fE[\tilde{m}1_{\omega_1}]$ $\because \tilde{m}=\frac{q}{\text{prob}}$
$\ \ \ \ \ \ \ \ \ \ =R_fE[\frac{q_1}{\text{prob}_1}1_{\omega_1}]$
$\ \ \ \ \ \ \ \ \ \ =R_f\left[(\frac{q_1}{\text{prob}_1}*1*\text{prob}_1)+(\frac{q_2}{\text{prob}_2}*0*\text{prob}_2)\right]$
$\ \ \ \ \ \ \ \ \ \ =R_fq_1$
$Q(\omega_2)=R_fE[\tilde{m}1_{\omega_2}]$ $\because \tilde{m}=\frac{q}{\text{prob}}$
$\ \ \ \ \ \ \ \ \ \ =R_fE[\frac{q_2}{\text{prob}_2}1_{\omega_2}]$
$\ \ \ \ \ \ \ \ \ \ =R_f\left[(\frac{q_2}{\text{prob}_2}*1*\text{prob}_2)+(\frac{q_1}{\text{prob}_1}*0*\text{prob}_1)\right]$
$\ \ \ \ \ \ \ \ \ \ =R_fq_2\ \ \ \blacksquare$
$(d)$
The call pays $\max(\tilde{x}(\omega_1)-K,0)$ in state $\omega_1$ and $\max(\tilde{x}(\omega_2)-K,0)$ in state $\omega_2$.
Pricing this payoff with the state prices from $(b)$:
$p=\max(\tilde{x}(\omega_1)-K,0)\,q_1+\max(\tilde{x}(\omega_2)-K,0)\,q_2\\
\ \ =\dfrac{\max(\tilde{x}(\omega_1)-K,0)(R_d-R_f)+\max(\tilde{x}(\omega_2)-K,0)(R_f-R_u)}{R_f(R_d-R_u)}\ \ \ \blacksquare$
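A worked number for the call price, taking the risky asset itself as the underlying ($\tilde{x}=\tilde{R}$) and a hypothetical strike $K$:

```python
Rf, Ru, Rd, K = 1.02, 1.15, 0.95, 1.00
q1 = (Rd - Rf) / (Rf * (Rd - Ru))
q2 = (Rf - Ru) / (Rf * (Rd - Ru))
# state-by-state call payoff weighted by the state prices
call = max(Ru - K, 0) * q1 + max(Rd - K, 0) * q2
print(call)
```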
## Exercises 3.2

$(a)$
Suppose we hold a zero-cost portfolio: $\phi$ units of the risky asset financed at the risk-free rate $\Rightarrow$ payoff: $\phi(\tilde{R}-R_f)$
$\because E[\tilde{m}\tilde{R}]=1$ and $E[\tilde{m}R_f]=R_fE[\tilde{m}]=1$ (the risk-free return is also a return)
$E[\tilde{m}\phi(\tilde{R}-R_f)]=\phi E[\tilde{m}\tilde{R}]-\phi R_fE[\tilde{m}]$
$=\phi-\phi=0\ \ \ \blacksquare$
Under any SDF, every zero-cost portfolio has a zero expected SDF-weighted payoff.
$(b)$
$R_f: q_1+q_2+q_3=1$
$\text{R}: 1.1q_1+q_2+0.9q_3=1$
$\Rightarrow 0.1q_1-0.1q_3=0$
$\Rightarrow 0.1q_1=0.1q_3$
therefore we get
$\begin{cases}q_1=q_3\\q_2=1-2q_1\end{cases}\ \ \ \blacksquare$
$(c)$
$\hat{m}=(m_1,m_2,m_3)$
$\exists\ \hat{m}=(4,-2,4)$
$m_1=\frac{q_1}{\text{prob}_1}=4q_1$
$m_2=\frac{q_2}{\text{prob}_2}=2q_2=2-4q_1$
$m_3=\frac{q_3}{\text{prob}_3}=4q_3=4q_1$
If $q_1=1\Rightarrow\begin{cases}m_1=4\\m_2=-2\\m_3=4\end{cases}$ is an SDF.$\ \ \ \blacksquare$
$(d)$
>$m_p=\bar{m}+\dfrac{Cov(\tilde{m},\tilde{x})}{var(\tilde{x})}(\tilde{x}-\bar{x})$
$\bar{m}=(\frac{1}{4}*4)+(\frac{1}{2}*-2)+(\frac{1}{4}*4)=1$
$\bar{x}=(\frac{1}{4}*1.1)+(\frac{1}{2}*1)+(\frac{1}{4}*0.9)=1$
$Cov(\tilde{m},\tilde{x})=E[(\tilde{m}-\bar{m})(\tilde{x}-\bar{x})]=\sum^n_{i=1}(m_i-\bar{m})(x_i-\bar{x})*\text{prob}_i$
$\ \ \ \ =\left((4-1)*(1.1-1)*\frac{1}{4}\right)+\left((-2-1)*(1-1)*\frac{1}{2}\right)+\left((4-1)*(0.9-1)*\frac{1}{4}\right)$
$\ \ \ \ =(3*0.1*\frac{1}{4})-0+(3*-0.1*\frac{1}{4})$
$\ \ \ \ =0$
$m_p=\bar{m}+\dfrac{Cov(\tilde{m},\tilde{x})}{var(\tilde{x})}(\tilde{x}-\bar{x})$
$\ \ \ \ =1+0*(\tilde{x}-1)=1\ \ \ \blacksquare$
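The computation in $(d)$ takes only a few lines; the sketch below also confirms that the projection $m_p$ still prices the risky payoff ($E[m_p\tilde{x}]=1$):

```python
probs = [0.25, 0.50, 0.25]
m = [4.0, -2.0, 4.0]       # candidate SDF from part (c)
x = [1.10, 1.00, 0.90]     # risky payoff in each state

mbar = sum(p * mi for p, mi in zip(probs, m))
xbar = sum(p * xi for p, xi in zip(probs, x))
cov = sum(p * (mi - mbar) * (xi - xbar) for p, mi, xi in zip(probs, m, x))
var = sum(p * (xi - xbar)**2 for p, xi in zip(probs, x))

m_p = [mbar + cov / var * (xi - xbar) for xi in x]
print(mbar, cov, m_p)
```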
$(e)$
Given the answer in $(d)$, the projected SDF is $m_p=1$ in every state; since $R_f=1$ here, $m_p$ is the payoff of buying one unit of the risk-free asset.$\ \ \ \blacksquare$
## Exercises 3.3
>$Y_p=\bar{Y}+Cov(\tilde{Y},\tilde{X})Cov(\tilde{X})^{-1}(\tilde{X}-\bar{X})$
$\text{def.}$
$\tilde{R}:$ the vector of risky asset returns
$\mu:$ the mean of $\tilde{R}$
$\iota:$ a vector of $1$
$\Sigma:$ the covariance matrix of $\tilde{R}$
Assume that there is a risk-free asset, so the mean of $\tilde{m}$ is $\bar{m}=\dfrac{1}{R_f}$
To obtain the SDF $\tilde{m}_p$, we project $\tilde{m}$ onto $\tilde{R}$. Therefore,
$\because Cov(\tilde{R})=\Sigma$, and $\bar{m}=\dfrac{1}{R_f}$
$\because Cov(\tilde{m},\tilde{R})=E[(\tilde{m}-\bar{m})(\tilde{R}-\mu)']=E[\tilde{m}\tilde{R}]'-\bar{m}\mu'=\iota'-\dfrac{1}{R_f}\mu'=(\iota-\dfrac{1}{R_f}\mu)'$
$\therefore\tilde{m}_p=\bar{m}+Cov(\tilde{m},\tilde{R})Cov(\tilde{R})^{-1}(\tilde{R}-\mu)$
$\ \ \ \ \ =\dfrac{1}{R_f}+(\iota-\dfrac{1}{R_f}\mu)'\Sigma^{-1}(\tilde{R}-\mu)\ \ \ \blacksquare$
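A matrix-algebra check, with hypothetical $\mu$ and $\Sigma$, that this $\tilde{m}_p$ prices every return: $E[\tilde{m}_p\tilde{R}]=\mu/R_f+\Sigma\,\Sigma^{-1}(\iota-\mu/R_f)=\iota$.

```python
import numpy as np

Rf = 1.02
mu = np.array([1.08, 1.12, 1.05])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.03]])
ones = np.ones(3)

# m_p = 1/Rf + b'(R - mu)  with  b = Sigma^{-1} (iota - mu/Rf)
b = np.linalg.solve(Sigma, ones - mu / Rf)
# E[m_p R] = mu/Rf + Cov(R, m_p) = mu/Rf + Sigma b, which should equal iota
pricing = mu / Rf + Sigma @ b
print(pricing)
```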
## Exercises 3.4
$\tilde{Y}=A+B\tilde{X}+\tilde{\varepsilon}\\
\tilde{Y}=(\bar{Y}-B\bar{X})+Cov(\tilde{Y},\tilde{X})Cov(\tilde{X})^{-1}\tilde{X}+\tilde{\varepsilon}\\
A=\bar{Y}-B\bar{X}\\
B=Cov(\tilde{Y},\tilde{X})Cov(\tilde{X})^{-1}\\
E[\tilde{Y}|\tilde{X}]=E[(\bar{Y}-B\bar{X})+Cov(\tilde{Y},\tilde{X})Cov(\tilde{X})^{-1}\tilde{X}+\tilde{\varepsilon}|\tilde{X}]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ =(\bar{Y}-B\bar{X})+Cov(\tilde{Y},\tilde{X})Cov(\tilde{X})^{-1}\tilde{X}+E[\tilde{\varepsilon}|\tilde{X}]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ =(\bar{Y}-B\bar{X})+Cov(\tilde{Y},\tilde{X})Cov(\tilde{X})^{-1}\tilde{X}\ \ (\text{joint normality}\Rightarrow\tilde{\varepsilon}\text{ independent of }\tilde{X},\ E[\tilde{\varepsilon}|\tilde{X}]=E[\tilde{\varepsilon}]=0)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ =\bar{Y}-Cov(\tilde{Y},\tilde{X})Cov(\tilde{X})^{-1}\bar{X}+Cov(\tilde{Y},\tilde{X})Cov(\tilde{X})^{-1}\tilde{X}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ =\bar{Y}+Cov(\tilde{Y},\tilde{X})Cov(\tilde{X})^{-1}(\tilde{X}-\bar{X})\\
\ \ \ \ \ \ \ \ \ \ \ \ \ =Y_p\ \ \ \blacksquare$
## Exercises 3.5
If there is a strictly positive $\text{SDF}$, that is $\exists \ \tilde m>0$
Suppose we have $n$ assets, and denote the following notation:
$p_i:$ price of asset $i$
$\tilde x_i:$ the payoff of asset $i$
$\omega_j,\ j=1,\dots,k:$ the states of the world
$p_i=E[\tilde m\tilde x_i]=\sum_{j=1}^k\tilde m(\omega_j)x_i(\omega_j)\text{prob}_j$
For any portfolio $\theta$, the payoff is $\tilde x=\sum_{i=1}^n\theta_i\tilde x_i$ and the cost is $p'\theta=E[\tilde m\tilde x]$.
* If $\tilde x\geq 0$ in every state, then $\tilde m>0$ implies $p'\theta=E[\tilde m\tilde x]\geq 0$: a nonnegative payoff cannot have a strictly negative cost.
* If moreover $\tilde x>0$ with positive probability, then $p'\theta=E[\tilde m\tilde x]>0$: such a payoff cannot have a nonpositive cost.
Therefore no portfolio satisfies the definition of an arbitrage opportunity.$\ \ \ \blacksquare$
## Exercises 3.6
**The student loan** satisfies both the definition of the **law of one price** and of an **arbitrage opportunity**.
* Law of one price: the banks offer the same borrowing terms to every qualified student in Taiwan, so identical payoffs trade at a single price, consistent with the law of one price.
* Arbitrage opportunity: a qualified student receives a positive amount of money without giving anything up today $($the payoff $\sum^n_{i=1}\theta_i\tilde{x}_i\geq0$ with $\text{prob.}=1)\ \ \ \blacksquare$
# Chapter 4
## Exercise 4.1
$(a)$
From Exercise $2.2$, the optimal vector of share holdings is $\theta^*=\dfrac{1}{\alpha}\Sigma_x^{-1}(\mu_x-R_fp)$
Then $\theta_h=\dfrac{1}{\alpha_h}\Sigma^{-1}(\mu_x-R_fp)$
$\sum_{h=1}^H\theta_h=\left(\sum_{h=1}^H\frac{1}{\alpha_h}\right)\Sigma^{-1}(\mu_x-R_fp)$
Since the market clears (demand $=$ supply), writing $\dfrac{1}{\alpha}=\sum_{h=1}^H\dfrac{1}{\alpha_h}$ for the aggregate risk tolerance,
$\bar{\theta}=\dfrac{1}{\alpha}\Sigma^{-1}(\mu-R_fp)$
$\alpha\Sigma\bar{\theta}-\mu=-R_fp$
$p=\dfrac{1}{R_f}(\mu-\alpha\Sigma\bar{\theta})\ \ \blacksquare$
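A minimal numeric sketch of this pricing formula, with hypothetical payoff moments and supplies, showing that a more risk-averse market pays lower prices:

```python
import numpy as np

# p = (mu - alpha * Sigma @ theta_bar) / Rf with illustrative numbers.
# Doubling aggregate risk aversion lowers every price relative to expected payoff.
mu = np.array([105.0, 110.0])              # expected payoffs (hypothetical)
Sigma = np.array([[100.0, 30.0],
                  [30.0, 225.0]])          # payoff covariance matrix
theta_bar = np.array([1.0, 1.0])           # asset supplies
Rf = 1.02

def price(alpha):
    return (mu - alpha * Sigma @ theta_bar) / Rf

print(price(0.005))   # baseline aggregate risk aversion
print(price(0.010))   # more risk-averse market -> lower prices
```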
$(b)$
$\begin{cases}\alpha:\text{aggregate absolute risk aversion}\\
\Sigma:\text{the covariance matrix of the vector X of asset payoffs}\\
\bar{\theta}:\text{the vector of supplies of the n risky assets}\ \ \bar{\theta}=(\bar{\theta_1},...,\theta_n)'
\end{cases}$
If $\alpha$ :arrow_up:, investors require a larger risk premium for bearing risk, so they will pay a lower price relative to the assets' expected payoffs.
$\Sigma\bar{\theta}=\begin{pmatrix}\sigma_{11}&...&\sigma_{1n}\\.&.&.\\.&.&.\\
\sigma_{n1}&...&\sigma_{nn}
\end{pmatrix}\begin{pmatrix}\bar{\theta}_1\\.\\.\\\bar{\theta}_n\end{pmatrix}=\begin{pmatrix}\sum\theta_i\sigma_{1i}\\
.\\
.\\
\sum\theta_i\sigma_{ni}
\end{pmatrix}$
$\sum_i\bar{\theta}_i\sigma_{1i}=\sum_i\bar{\theta}_iCov(x_1,x_i)=Cov\left(x_1,\sum_i\bar{\theta}_ix_i\right)$, i.e. each asset's price discount depends on the covariance of its payoff with the market payoff.$\ \ \blacksquare$
$(c)$
$u_0(\tilde{c})=-\exp\{-\alpha_h \tilde{c}\}\\
u_1(\tilde{c})=-\delta_h\exp\{-\alpha_h\tilde{c}\}$
Assume that the risk-free asset is in zero net supply.
$\bar{c}_0=\sum^{H}_{h=1}y_{h0}\\
\tilde{c}_1=(w_0-c_0-p'\theta)R_f+\theta'\tilde{x}\\
p=\dfrac{1}{R_f}(\mu-\alpha\Sigma\bar{\theta})$
---
$u_0(c_{h0})+u_1(c_{h1})=-\exp\{-\alpha_h c_{h0}\}-\delta_h\exp\{-\alpha_hc_{h1}\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-\exp\{-\alpha_h c_{h0}\}-\delta_h\exp\{-\alpha_h((w_0-c_{h0}-p'\theta)R_f+\theta'\tilde{x})\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-\exp\{-\alpha_h c_{h0}\}-\delta_h\exp\{-\alpha_h(w_0-c_{h0}-p'\theta)R_f-\alpha_h\theta'\tilde{x}\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-\exp\{-\alpha_h c_{h0}\}-\delta_h\exp\{-\alpha_h(w_0-c_{h0}-p'\theta)R_f\}\exp\{-\alpha_h\theta'\tilde{x}\}$
$E[u_0(c_{h0})+u_1(c_{h1})]=-\exp\{-\alpha_h c_{h0}\}-\delta_h\exp\{-\alpha_h(w_0-c_{h0}-p'\theta)R_f\}E[\exp\{-\alpha_h\theta'\tilde{x}\}]\tag{1}$
* $E[\exp\{-\alpha_h\theta_h'\tilde{x}\}]$
$\theta_h=\dfrac{1}{\alpha_h}\Sigma^{-1}(\mu-R_fp)$ from $(a)$
$p=\dfrac{1}{R_f}(\mu-\alpha\Sigma\bar{\theta})\Rightarrow\mu-R_fp=\alpha\Sigma\bar{\theta}\Rightarrow\theta_h=\dfrac{\alpha}{\alpha_h}\bar{\theta}\\
\tilde{x}\sim N(\mu,\Sigma)$
$E[\exp\{-\alpha_h\theta_h'\tilde{x}\}]=\exp\left\{-\alpha_h\theta_h'\mu+\dfrac{1}{2}\alpha_h^2\theta_h'\Sigma\theta_h\right\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\left\{-\alpha_h\dfrac{\alpha}{\alpha_h}\bar{\theta}'\mu+\dfrac{1}{2}\alpha_h^2\dfrac{\alpha^2}{\alpha_h^2}\bar{\theta}'\Sigma\bar{\theta}\right\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\left\{-\alpha\bar{\theta}'\mu+\dfrac{1}{2}\alpha^2\bar{\theta}'\Sigma\bar{\theta}\right\}$
Substitute this into equation $(1)$:
$E[u_0(c_{h0})+u_1(c_{h1})]=-\exp\{-\alpha_h c_{h0}\}-\delta_h\exp\{-\alpha_h(w_0-c_{h0}-p'\theta_h)R_f\}\exp\left\{-\alpha\bar{\theta}'\mu+\dfrac{1}{2}\alpha^2\bar{\theta}'\Sigma\bar{\theta}\right\}$
* $\underset{c_{h0}}{\max}E[u_0(c_{h0})+u_1(c_{h1})]$
$\dfrac{\partial E[u_0(c_{h0})+u_1(c_{h1})]}{\partial c_{h0}}=\alpha_h\exp\{-\alpha_hc_{h0}\}-\alpha_hR_f\delta_h\exp\left\{-\alpha_h(w_0-c_{h0}-p'\theta_h)R_f-\alpha\bar{\theta}'\mu+\dfrac{1}{2}\alpha^2\bar{\theta}'\Sigma\bar{\theta}\right\}=0$
$\exp\{-\alpha_hc_{h0}\}=R_f\delta_h\exp\left\{-\alpha_h(w_0-c_{h0}-p'\theta_h)R_f-\alpha\bar{\theta}'\mu+\dfrac{1}{2}\alpha^2\bar{\theta}'\Sigma\bar{\theta}\right\}$
$-\alpha_hc_{h0}=\ln(R_f)+\ln(\delta_h)-\alpha_h(w_0-c_{h0}-p'\theta_h)R_f-\alpha\bar{\theta}'\mu+\dfrac{1}{2}\alpha^2\bar{\theta}'\Sigma\bar{\theta}$
$c_{h0}=-\dfrac{\ln(R_f)}{\alpha_h}-\dfrac{\ln(\delta_h)}{\alpha_h}+(w_0-c_{h0}-p'\theta_h)R_f+\dfrac{\alpha}{\alpha_h}\bar{\theta}'\mu-\dfrac{\alpha^2}{2\alpha_h}\bar{\theta}'\Sigma\bar{\theta}$
Sum over $h$, using zero net supply of the risk-free asset $\left(\sum^H_{h=1}(w_0-c_{h0}-p'\theta_h)=0\right)$, $\sum^H_{h=1}\frac{1}{\alpha_h}=\frac{1}{\alpha}$, and $\sum^H_{h=1}\frac{\ln(\delta_h)}{\alpha_h}=\frac{\ln(\delta)}{\alpha}:$
$\sum^H_{h=1}c_{h0}=\bar{c}_0=-\dfrac{\ln(R_f)}{\alpha}-\dfrac{\ln(\delta)}{\alpha}+\bar{\theta}'\mu-\dfrac{1}{2}\alpha\bar{\theta}'\Sigma\bar{\theta}$
* In order to obtain $R_f$
$\alpha\bar{c}_0=-\ln(R_f\delta)+\alpha\bar{\theta}'\mu-\dfrac{1}{2}\alpha^2\bar{\theta}'\Sigma\bar{\theta}$
$\ln(R_f\delta)=-\alpha\bar{c}_0+\alpha\bar{\theta}'\mu-\dfrac{1}{2}\alpha^2\bar{\theta}'\Sigma\bar{\theta}$
$R_f\delta=\exp\{-\alpha\bar{c}_0+\alpha\bar{\theta}'\mu-\dfrac{1}{2}\alpha^2\bar{\theta}'\Sigma\bar{\theta}\}$
$R_f=\dfrac{1}{\delta}\exp\{\alpha(\bar{\theta}'\mu-\bar{c}_0)-\dfrac{1}{2}\alpha^2\bar{\theta}'\Sigma\bar{\theta}\}\ \ \ \blacksquare$
$(d)$
>Explain the relationship between $R_f$ and $\bar{\theta}'\mu,\ \delta,\ \bar{c}_0$ and $\bar{\theta}'\Sigma\bar{\theta}$
$\bar{\theta}'\mu:$ the expected market payoff at date $1$. Higher expected future wealth relative to date-$0$ consumption means higher expected consumption growth, so $R_f$ is higher.
$\delta:$ the discount factor on date-$1$ utility. A higher $\delta$ (more patience) raises the demand for saving, so $R_f$ is lower.
$\bar{c}_0:$ aggregate consumption at date $0$. Higher $\bar{c}_0$, with the date-$1$ payoff fixed, means lower expected consumption growth, so $R_f$ is lower.
$\bar{\theta}'\Sigma\bar{\theta}:$ the variance of the market payoff. A riskier market payoff raises the precautionary demand for the safe asset, so $R_f$ is lower (note the $-\frac{1}{2}\alpha^2\bar{\theta}'\Sigma\bar{\theta}$ term in the formula).$\ \ \ \blacksquare$
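These comparative statics can be checked directly from the formula for $R_f$; all parameter values in this sketch are hypothetical, and only the signs of the comparisons matter.

```python
import numpy as np

# R_f = (1/delta) * exp(alpha*(m - c0) - 0.5 * alpha^2 * v),
# where m = theta'mu and v = theta'Sigma theta (hypothetical values).
def Rf(delta=0.95, alpha=0.005, m=215.0, v=385.0, c0=100.0):
    return np.exp(alpha * (m - c0) - 0.5 * alpha**2 * v) / delta

base = Rf()
print(Rf(m=220.0) > base)      # True: higher expected market payoff -> higher R_f
print(Rf(delta=0.99) < base)   # True: more patience (higher delta)  -> lower R_f
print(Rf(c0=110.0) < base)     # True: more date-0 consumption       -> lower R_f
print(Rf(v=800.0) < base)      # True: riskier market payoff         -> lower R_f
```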
$(e)$
$c_{h0}=-\dfrac{\ln(\delta_h)}{\alpha_h}-\dfrac{\ln(R_f)}{\alpha_h}+(w_0-c_{h0}-p'\theta_h)R_f+\dfrac{\alpha}{\alpha_h}\bar{\theta}'\mu-\dfrac{\alpha^2}{2\alpha_h}\bar{\theta}'\Sigma\bar{\theta}$
If $\delta=\prod^H_{h=1}\delta_h^{\frac{\tau_h}{\tau}}$, and $\sum^H_{h=1}(w_0-c_{h0}-p'\theta_h)=0$ given zero net supply of the risk-free asset,
$\sum^H_{h=1}c_{h0}=\bar{c_0}=-\sum^H_{h=1}\dfrac{\ln(\delta_h)}{\alpha_h}-\sum^H_{h=1}\dfrac{\ln(R_f)}{\alpha_h}+\bar{\theta}'\mu-\dfrac{1}{2}\alpha\bar{\theta}'\Sigma\bar{\theta}\ \ \because\sum^H_{h=1}\dfrac{\alpha}{\alpha_h}=1\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-\sum^H_{h=1}\tau_h\ln(\delta_h)-\ln(R_f)\sum^H_{h=1}\tau_h+\bar{\theta}'\mu-\dfrac{1}{2}\alpha\bar{\theta}'\Sigma\bar{\theta}\ \ \because\tau_h=\dfrac{1}{\alpha_h}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-\sum^H_{h=1}\tau_h\ln(\delta_h)-\tau\ln(R_f)+\bar{\theta}'\mu-\dfrac{1}{2}\alpha\bar{\theta}'\Sigma\bar{\theta}\ \ \ \ \because\sum^H_{h=1}\tau_h=\tau$
$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \dfrac{\bar{c_0}}{\tau}=-\sum^H_{h=1}\dfrac{\tau_h}{\tau}\ln(\delta_h)-\ln(R_f)+\dfrac{1}{\tau}\bar{\theta}'\mu-\dfrac{\alpha}{2\tau}\bar{\theta}'\Sigma\bar{\theta}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-\sum^H_{h=1}\ln\left(\delta_h^{\frac{\tau_h}{\tau}}\right)-\ln(R_f)+\dfrac{1}{\tau}\bar{\theta}'\mu-\dfrac{\alpha}{2\tau}\bar{\theta}'\Sigma\bar{\theta}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-\ln\left(\prod^H_{h=1}\delta_h^{\frac{\tau_h}{\tau}}\right)-\ln(R_f)+\dfrac{1}{\tau}\bar{\theta}'\mu-\dfrac{\alpha}{2\tau}\bar{\theta}'\Sigma\bar{\theta}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-\ln(\delta)-\ln(R_f)+\dfrac{1}{\tau}\bar{\theta}'\mu-\dfrac{\alpha}{2\tau}\bar{\theta}'\Sigma\bar{\theta}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-\ln(R_f\delta)+\dfrac{1}{\tau}\bar{\theta}'\mu-\dfrac{\alpha}{2\tau}\bar{\theta}'\Sigma\bar{\theta}$
$\ln(R_f\delta)=-\dfrac{\bar{c_0}}{\tau}+\dfrac{1}{\tau}\bar{\theta}'\mu-\dfrac{\alpha}{2\tau}\bar{\theta}'\Sigma\bar{\theta}\\
\ \ \ \ \ \ \ \ R_f=\dfrac{1}{\delta}\exp{\left\{\dfrac{1}{\tau}(\bar{\theta}'\mu-\bar{c_0})-\dfrac{\alpha}{2\tau}\bar{\theta}'\Sigma\bar{\theta}\right\}}\\
\ \ \ \ \ \ \ \ \ \ \ \ =\dfrac{1}{\delta}\exp{\left\{\alpha(\bar{\theta}'\mu-\bar{c_0})-\dfrac{1}{2}\alpha^2\bar{\theta}'\Sigma\bar{\theta}\right\}}\ \ \because\dfrac{1}{\tau}=\alpha\ \ \ \blacksquare$
## Exercise 4.3
$(a)\\
u_1(w_1)=\dfrac{1}{1-\rho}w_1^{1-\rho}$
$u_2(w_2)=\dfrac{1}{1-2\rho}w_2^{1-2\rho}$
Given the Pareto-optimal sharing rule, $\lambda_1u_1'(w_1(\omega))=\lambda_2u_2'(w_2(\omega))$
$u_1'(w_1)=\dfrac{1}{1-\rho}(1-\rho)w_1^{1-\rho-1}=w_1^{-\rho}$
$u_2'(w_2)=\dfrac{1}{1-2\rho}(1-2\rho)w_2^{1-2\rho-1}=w_2^{-2\rho}$
$\lambda_1\tilde{w}_1^{-\rho}=\lambda_2\tilde{w}_2^{-2\rho}$
$\tilde{w}_1^{-\rho}=\dfrac{\lambda_2}{\lambda_1}\tilde{w}_2^{-2\rho}$
$\Rightarrow \tilde{w}_1=\left(\dfrac{\lambda_2}{\lambda_1}\right)^{-\frac{1}{\rho}}\tilde{w}_2^2$
According to Pareto-optimal allocation
$\sum_{h=1}^2\tilde{w}_h=\tilde{w}_m\Rightarrow \tilde{w}_1+\tilde{w}_2=\tilde{w}_m\tag{1}$
$\left(\dfrac{\lambda_2}{\lambda_1}\right)^{-\frac{1}{\rho}}\tilde{w}_2^2+\tilde{w}_2=\tilde{w}_m\tag{2}$
Let $A=\left(\dfrac{\lambda_2}{\lambda_1}\right)^{-\frac{1}{\rho}}$, and rewrite the equation $(2)$ as
$A\tilde{w}_2^2+\tilde{w}_2=\tilde{w}_m\tag{3}$
$\Rightarrow\tilde{w}_2^2+\dfrac{1}{A}\tilde{w}_2=\dfrac{1}{A}\tilde{w}_m$
$\Rightarrow\tilde{w}_2^2+\dfrac{1}{A}\tilde{w}_2+\dfrac{1}{4A^2}=\dfrac{1}{A}\tilde{w}_m+\dfrac{1}{4A^2}$
$\Rightarrow (\tilde{w}_2+\dfrac{1}{2A})^2=\dfrac{1}{A}\tilde{w}_m+\dfrac{1}{4A^2}$
$\Rightarrow \tilde{w}_2+\dfrac{1}{2A}=\sqrt{\dfrac{1}{A}\tilde{w}_m+\dfrac{1}{4A^2}}\ \ \text{(taking the positive root, since wealth is positive)}\tag{4}$
Let $\eta=\dfrac{1}{2A}$ and rewrite the equation $(4)$ as
$\tilde{w}_2+\eta=\sqrt{2\eta\tilde{w}_m+\eta^2}$
$\Rightarrow\tilde{w}_2=\sqrt{\eta^2+2\eta\tilde{w}_m}-\eta$
Then given the equation $(1)$, $w_1=\tilde{w}_m-\tilde{w}_2$
$\Rightarrow \tilde{w}_1=\tilde{w}_m+\eta-\sqrt{\eta^2+2\eta\tilde{w}_m}\ \ \ \blacksquare$
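A numeric check of this sharing rule (the weights $\lambda_1,\lambda_2$, $\rho$, and $\tilde{w}_m$ below are hypothetical): the first-order condition $\lambda_1w_1^{-\rho}=\lambda_2w_2^{-2\rho}$ holds exactly at the derived allocation.

```python
import numpy as np

# eta = (1/2)(lam2/lam1)^(1/rho); w2 = sqrt(eta^2 + 2*eta*w_m) - eta; w1 = w_m - w2.
rho, lam1, lam2 = 2.0, 1.0, 3.0
w_m = 10.0
eta = 0.5 * (lam2 / lam1) ** (1 / rho)
w2 = np.sqrt(eta**2 + 2 * eta * w_m) - eta
w1 = w_m - w2
print(lam1 * w1**(-rho), lam2 * w2**(-2 * rho))  # equalized weighted marginal utilities
```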
$(b)$
Denote $b$ as the Lagrange multiplier on the investor's budget constraint (the marginal value of wealth)
$\because E\left[\dfrac{u'(w)}{b}\tilde{R}\right]=1$
$\therefore \dfrac{u'(w)}{b}=\tilde{m}$
Because in a competitive equilibrium each investor optimizes, every investor's marginal-utility ratio defines the same SDF, so we can compute it from investor $2$.
$\Rightarrow u'(w_2)=(\sqrt{\eta^2+2\eta\tilde{w}_m}-\eta)^{-2\rho}$
$\tilde{m}=\dfrac{(\sqrt{\eta^2+2\eta\tilde{w}_m}-\eta)^{-2\rho}}{b}$
$\therefore\gamma=\dfrac{1}{b}$
therefore $\tilde{m}=\gamma(\sqrt{\eta^2+2\eta\tilde{w}_m}-\eta)^{-2\rho}$
## Exercise 4.5
$u(\tilde{w})=-e^{-\alpha\tilde{w}}$ given CARA utility
$\tilde{w}_h=a_h+b_h\tilde{w}_m$, where $b_h=\frac{\tau_h}{\tau},\ \sum^H_{h=1}a_h=0$
Given Pareto optimality, $\lambda_hu_h'(\tilde{w}_h)=\eta$
$u'_h(\tilde{w}_h)=\alpha_he^{-\alpha_h\tilde{w}_h}$
$\therefore \lambda_h\alpha_he^{-\alpha_h\tilde{w}_h}=\eta$
$\exp\{-\alpha_h\tilde{w}_h\}=\dfrac{\eta}{\alpha_h\lambda_h}$
$-\alpha_h\tilde{w}_h=\ln(\eta)-\ln(\alpha_h\lambda_h)$
$\tilde{w}_h=-\dfrac{\ln(\eta)}{\alpha_h}+\dfrac{\ln(\alpha_h\lambda_h)}{\alpha_h}\tag{1}$
$\sum^H_{h=1}\tilde{w}_h=\tilde{w}_m=-\sum^H_{h=1}\dfrac{\ln(\eta)}{\alpha_h}+\sum^H_{h=1}\dfrac{\ln(\alpha_h\lambda_h)}{\alpha_h}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-\ln(\eta)\sum^H_{h=1}\tau_h+\sum^H_{h=1}\tau_h\ln(\alpha_h\lambda_h)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-\tau\ln(\eta)+\sum^H_{h=1}\tau_h\ln(\alpha_h\lambda_h)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \tau\ln(\eta)=-\tilde{w}_m+\sum^H_{h=1}\tau_h\ln(\alpha_h\lambda_h)\\
\ \ \ \ \ \ \ \ \ \ \ \ -\ln(\eta)=\dfrac{1}{\tau}\tilde{w}_m-\dfrac{1}{\tau}\sum^H_{h=1}\tau_h\ln(\alpha_h\lambda_h)\tag{2}$
$\tilde{w}_h=-\dfrac{\ln(\eta)}{\alpha_h}+\dfrac{\ln(\alpha_h\lambda_h)}{\alpha_h}\\
\ \ \ \ \ =\dfrac{1}{\alpha_h\tau}\tilde{w}_m-\dfrac{1}{\alpha_h\tau}\sum^H_{k=1}\tau_k\ln(\alpha_k\lambda_k)+\dfrac{\ln(\alpha_h\lambda_h)}{\alpha_h}\\
\ \ \ \ \ =\dfrac{\tau_h}{\tau}\tilde{w}_m-\dfrac{\tau_h}{\tau}\sum^H_{k=1}\tau_k\ln(\alpha_k\lambda_k)+\tau_h\ln(\alpha_h\lambda_h)$, which is an affine sharing rule $(4.14)$ and solves the social planner's problem.
If $\lambda_h=\tau_he^{\frac{a_h}{\tau_h}}$
Given Pareto optimality, $\lambda_hu_h'(\tilde{w}_h)=\eta$
We can prove that $\lambda_hu_h'(\tilde{w}_h)=\tau_h\alpha_h\exp\{\dfrac{a_h}{\tau_h}\}\exp\{-\alpha_h\tilde{w}_h\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\{\dfrac{a_h}{\tau_h}\}\exp\{-\alpha_h\tilde{w}_h\}\ \ \because\tau_h\alpha_h=1\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\{\dfrac{a_h}{\tau_h}-\alpha_h\tilde{w}_h\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\{\alpha_h(a_h-\tilde{w}_h)\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\{\alpha_h(a_h-\dfrac{\tau_h}{\tau}\tilde{w}_m+\dfrac{\tau_h}{\tau}\sum^H_{k=1}\tau_k\ln(\alpha_k\lambda_k)-\tau_h\ln(\alpha_h\lambda_h))\}$
$\because \tilde{w}_m=-\tau\ln(\eta)+\sum^H_{h=1}\tau_h\ln(\alpha_h\lambda_h)$
$\lambda_hu_h'(\tilde{w}_h)=\exp\{\alpha_h(a_h-\dfrac{\tau_h}{\tau}(-\tau\ln(\eta)+\sum^H_{h=1}\tau_h\ln(\alpha_h\lambda_h))\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\dfrac{\tau_h}{\tau}\sum^H_{k=1}\tau_k\ln(\alpha_k\lambda_k)-\tau_h\ln(\alpha_h\lambda_h))\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\{\alpha_h(a_h+\tau_h\ln(\eta)-\dfrac{\tau_h}{\tau}\sum^H_{h=1}\tau_h\ln(\alpha_h\lambda_h)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\dfrac{\tau_h}{\tau}\sum^H_{k=1}\tau_k\ln(\alpha_k\lambda_k)-\tau_h\ln(\alpha_h\lambda_h))\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\{\alpha_h(a_h+\tau_h\ln(\eta)-\tau_h\ln(\alpha_h\lambda_h))\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\{\alpha_ha_h+\alpha_h\tau_h\ln(\eta)-\alpha_h\tau_h\ln(\alpha_h\lambda_h))\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\{\alpha_ha_h+\ln(\eta)-\ln(\alpha_h\lambda_h))\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\{\alpha_ha_h+\ln(\eta)-\ln(\alpha_h\tau_he^{\frac{a_h}{\tau_h}}))\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\{\alpha_ha_h+\ln(\eta)-\ln(e^{\frac{a_h}{\tau_h}}))\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\{\alpha_ha_h+\ln(\eta)-\frac{a_h}{\tau_h}\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\{\alpha_ha_h+\ln(\eta)-\alpha_ha_h\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\exp\{\ln(\eta)\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\eta\ \ \ \blacksquare$, which confirms Pareto optimality.
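A short numeric verification of this result (risk tolerances, intercepts, and market wealth are hypothetical): with $\lambda_h=\tau_he^{a_h/\tau_h}$ and the affine rule, the weighted marginal utilities come out identical across investors.

```python
import numpy as np

# With lam_h = tau_h * exp(a_h / tau_h) and w_h = a_h + (tau_h / tau) * w_m,
# all lam_h * alpha_h * exp(-alpha_h * w_h) equal exp(-w_m / tau).
tau = np.array([1.0, 2.0, 0.5])       # risk tolerances tau_h = 1/alpha_h
a = np.array([0.3, -0.1, -0.2])       # intercepts, sum to zero
alpha = 1 / tau
tau_agg = tau.sum()
w_m = 5.0
lam = tau * np.exp(a / tau)
w = a + (tau / tau_agg) * w_m
print(lam * alpha * np.exp(-alpha * w))   # identical entries: exp(-w_m / tau_agg)
```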
## Exercise 4.6
$u(\tilde{w}_h)=\dfrac{1}{1-\rho}(\tilde{w}_h-\zeta_h)^{1-\rho}$
Given Pareto optimality, $\lambda_hu_h'(\tilde{w}_h)=\eta$
$u'(\tilde{w}_h)=(\tilde{w}_h-\zeta_h)^{-\rho}$
$\lambda_h(\tilde{w}_h-\zeta_h)^{-\rho}=\eta$
$(\tilde{w}_h-\zeta_h)^{-\rho}=\dfrac{\eta}{\lambda_h}$
$\tilde{w}_h-\zeta_h=\left(\dfrac{\eta}{\lambda_h}\right)^{-\frac{1}{\rho}}=\left(\dfrac{\lambda_h}{\eta}\right)^{\frac{1}{\rho}}$
$\tilde{w}_h=\left(\dfrac{\lambda_h}{\eta}\right)^{\frac{1}{\rho}}+\zeta_h\tag{1}$
$\tilde{w}_m=\sum^H_{h=1}\tilde{w}_h=\sum^H_{h=1}\left(\dfrac{\lambda_h}{\eta}\right)^{\frac{1}{\rho}}+\sum^H_{h=1}\zeta_h\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \tilde{w}_m=\left(\dfrac{1}{\eta}\right)^{\frac{1}{\rho}}\sum^H_{h=1}\lambda_h^{\frac{1}{\rho}}+\zeta\ \ \because \sum^H_{h=1}\zeta_h=\zeta$
$\left(\dfrac{1}{\eta}\right)^{\frac{1}{\rho}}\sum^H_{h=1}\lambda_h^{\frac{1}{\rho}}=\tilde{w}_m-\zeta$
$\left(\dfrac{1}{\eta}\right)^{\frac{1}{\rho}}=\dfrac{\tilde{w}_m-\zeta}{\sum^H_{h=1}\lambda_h^{\frac{1}{\rho}}}\tag{2}$
$\tilde{w}_h=\left(\dfrac{\lambda_h}{\eta}\right)^{\frac{1}{\rho}}+\zeta_h\\
\ \ \ \ \ =\left(\dfrac{\tilde{w}_m-\zeta}{\sum^H_{h=1}\lambda_h^{\frac{1}{\rho}}}\right)\lambda_h^{\frac{1}{\rho}}+\zeta_h\\
\ \ \ \ \ =\left(\dfrac{\lambda_h^{\frac{1}{\rho}}\tilde{w}_m-\lambda_h^{\frac{1}{\rho}}\zeta}{\sum^H_{h=1}\lambda_h^{\frac{1}{\rho}}}\right)+\zeta_h\\
\ \ \ \ \ =\left(\dfrac{\lambda_h^{\frac{1}{\rho}}}{\sum^H_{h=1}\lambda_h^{\frac{1}{\rho}}}\right)\tilde{w}_m-\left(\dfrac{\lambda_h^{\frac{1}{\rho}}}{\sum^H_{h=1}\lambda_h^{\frac{1}{\rho}}}\right)\zeta+\zeta_h\\
\ \ \ \ \ =b_h\tilde{w}_m+(\zeta_h-b_h\zeta)\ \ \text{denote}\ b_h=\left(\dfrac{\lambda_h^{\frac{1}{\rho}}}{\sum^H_{h=1}\lambda_h^{\frac{1}{\rho}}}\right)\\
\ \ \ \ \ =b_h\tilde{w}_m+a_h\ \ \text{denote}\ a_h=\zeta_h-b_h\zeta$, which is an affine sharing rule $(4.15)$
If $\lambda_h=b_h^\rho$
Given Pareto optimality, $\lambda_hu_h'(\tilde{w}_h)=\eta$
$\lambda_h(\tilde{w}_h-\zeta_h)^{-\rho}=b_h^\rho(\tilde{w}_h-\zeta_h)^{-\rho}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =b_h^\rho\left(\left(\dfrac{\lambda_h}{\eta}\right)^{\frac{1}{\rho}}+\zeta_h-\zeta_h\right)^{-\rho}\ \text{given}\ eq(1)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =b_h^\rho\left(\dfrac{\lambda_h}{\eta}\right)^{-1}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =b_h^\rho\left(\dfrac{\eta}{b_h^\rho}\right)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\eta\ \ \blacksquare$, which confirms Pareto optimality.
## Exercise 4.7
>If each investor has shifted power (CRRA) utility with the same coefficient $\rho>0$ and shift $\zeta_h$
$u'(w)=\left(\dfrac{w-\zeta_h}{\rho}\right)^{-\rho}$
$(\forall h)\ \ \lambda_h\left(\dfrac{\tilde{w}_h-\zeta_h}{\rho}\right)^{-\rho}=\tilde{\eta}$
* Solving for $\tilde{w}_h$ to find $\tilde{\eta}$
$\dfrac{\tilde{w}_h-\zeta_h}{\rho}=\left(\dfrac{\tilde{\eta}}{\lambda_h}\right)^{-\frac{1}{\rho}}=\left(\dfrac{\lambda_h}{\tilde{\eta}}\right)^{\frac{1}{\rho}}$
$\therefore\tilde{w}_h=\rho\left(\dfrac{\lambda_h}{\tilde{\eta}}\right)^{\frac{1}{\rho}}+\zeta_h$
* Calculating the market wealth
$\tilde{w}_m=\rho\sum_{j=1}^H\left(\dfrac{\lambda_j}{\tilde{\eta}}\right)^{\frac{1}{\rho}}+\sum_{j=1}^H\zeta_j$
$\ \ \ \ \ =\rho\sum_{j=1}^H\left(\dfrac{\lambda_j}{\tilde{\eta}}\right)^{\frac{1}{\rho}}+\zeta$
$\tilde{w}_m-\zeta=\rho\sum_{j=1}^H\left(\dfrac{\lambda_j}{\tilde{\eta}}\right)^{\frac{1}{\rho}}$
$\dfrac{\tilde{w}_m-\zeta}{\rho}\tilde{\eta}^{\frac{1}{\rho}}=\sum_{j=1}^H\lambda_j^{\frac{1}{\rho}}$
$\Rightarrow\tilde{\eta}^{\frac{1}{\rho}}=\sum_{j=1}^H\lambda_j^{\frac{1}{\rho}}\left(\dfrac{\rho}{\tilde{w}_m-\zeta}\right)$
* Then substitute this back into the sharing rule for $\tilde{w}_h$
$\tilde{w}_h=\rho\left(\dfrac{\lambda_h^{\frac{1}{\rho}}}{\sum_{j=1}^H\lambda_j^{\frac{1}{\rho}}\left(\dfrac{\rho}{\tilde{w}_m-\zeta}\right)}\right)+\zeta_h$
$\ \ \ \ \ =\left(\dfrac{\lambda_h^{\frac{1}{\rho}}(\tilde{w}_m-\zeta)}{\sum_{j=1}^H\lambda_j^{\frac{1}{\rho}}}\right)+\zeta_h$
$\ \ \ \ \ =\left(\dfrac{\lambda_h^{\frac{1}{\rho}}\tilde{w}_m}{\sum_{j=1}^H\lambda_j^{\frac{1}{\rho}}}\right)+\zeta_h-\left(\dfrac{\lambda_h^{\frac{1}{\rho}}\zeta}{\sum_{j=1}^H\lambda_j^{\frac{1}{\rho}}}\right)\ \ \blacksquare$
# Chapter 5
## Exercise 5.1
>$\mu_1=1.08,\ \mu_2=1.16,\ \sigma_1=0.25,\ \sigma_2=0.35$, correlation $=0.3$
$\Sigma=\begin{pmatrix}0.25^2 & 0.25*0.35*0.3\\
0.25*0.35*0.3 & 0.35^2
\end{pmatrix}$
$\ \ \ \ =\begin{pmatrix}0.0625 & 0.02625\\
0.02625 & 0.1225
\end{pmatrix}$
* Calculate the GMV portfolio $(\pi_{gmv})$
$\pi_{gmv}=\dfrac{1}{\iota'\Sigma^{-1}\iota}\Sigma^{-1}\iota$
$\Sigma^{-1}=\frac{1}{\det\Sigma}\begin{bmatrix}0.1225 & -0.02625\\-0.02625 & 0.0625\end{bmatrix}$
$\ \ \ \ \ \ \ \ =\dfrac{1}{(0.0625*0.1225)-0.02625^2}\begin{bmatrix}0.1225 & -0.02625\\-0.02625 & 0.0625\end{bmatrix}$
$\ \ \ \ \ \ \ \ =\dfrac{1}{0.00697}\begin{bmatrix}0.1225 & -0.02625\\-0.02625 & 0.0625\end{bmatrix}$
$\ \ \ \ \ \ \ \ =\begin{bmatrix}17.58 & -3.77\\
-3.77 & 8.97\end{bmatrix}$
$\iota'\Sigma^{-1}\iota=\begin{bmatrix}1 & 1\end{bmatrix}\begin{bmatrix}17.58 & -3.77\\
-3.77 & 8.97\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix}=19.01$
$\Sigma^{-1}\iota=\begin{bmatrix}17.58 & -3.77\\
-3.77 & 8.97\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}13.81\\5.2\end{bmatrix}$
$\therefore\pi_{gmv}=\dfrac{1}{19.01}\begin{bmatrix}13.81\\5.2\end{bmatrix}=\begin{bmatrix}0.73\\0.27\end{bmatrix}\ \ \ \blacksquare$
* Locate $(\sigma(\pi_{gmv}),\ \pi_{gmv}'\mu)$ on Figure $5.1$
$\pi_{gmv}'\mu=\begin{bmatrix}0.73 & 0.27\end{bmatrix}\begin{bmatrix}1.08\\1.16\end{bmatrix}=0.7884+0.3132=1.1016$
$\sigma(\pi_{gmv})=\sqrt{\pi_{gmv}'\Sigma\pi_{gmv}}=\sqrt{(0.73^2)(0.0625)+2(0.73)(0.27)(0.02625)+(0.27^2)(0.1225)}$
$\ \ \ \ \ \ \ \ \ \ =\sqrt{0.0526}=0.229\ \ \ \blacksquare$ (equivalently, $\sigma_{gmv}=\sqrt{1/(\iota'\Sigma^{-1}\iota)}=\sqrt{1/19.01}=0.229$)
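The whole exercise can be reproduced with a few lines of linear algebra (inputs are the exercise's $\mu$, $\sigma$, and correlation $0.3$):

```python
import numpy as np

# GMV portfolio: pi_gmv = Sigma^{-1} iota / (iota' Sigma^{-1} iota),
# then its mean and standard deviation.
mu = np.array([1.08, 1.16])
Sigma = np.array([[0.25**2, 0.25 * 0.35 * 0.3],
                  [0.25 * 0.35 * 0.3, 0.35**2]])
iota = np.ones(2)
w = np.linalg.solve(Sigma, iota)
pi_gmv = w / (iota @ w)                   # ~ [0.73, 0.27]
mean = pi_gmv @ mu                        # ~ 1.102
sd = np.sqrt(pi_gmv @ Sigma @ pi_gmv)     # ~ 0.229
print(pi_gmv, mean, sd)
```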

## Exercise 5.2
$(a)$
>From the result of Exercise $2.5$, the optimal portfolio for the quadratic-utility investor is $\phi=\dfrac{(\zeta-w_0R_f)}{1+\kappa^2}\Sigma^{-1}(\mu-R_f\iota)$, where $\kappa^2=(\mu-R_f\iota)'\Sigma^{-1}(\mu-R_f\iota)$
The frontier portfolios are
$\pi=\dfrac{\mu_{\text{targ}}-R_f}{(\mu-R_f\iota)'\Sigma^{-1}(\mu-R_f\iota)}\Sigma^{-1}(\mu-R_f\iota)$
$\because$ both are scalar multiples of $\Sigma^{-1}(\mu-R_f\iota)$, and $\dfrac{\zeta-w_0R_f}{1+\kappa^2}=\dfrac{\mu_{\text{targ}}-R_f}{\kappa^2}$ for some target $\mu_{\text{targ}}$
Therefore, the portfolio $\phi$ is on the mean-variance frontier. $\ \ \ \blacksquare$
$(b)$
The investor is on the efficient part of the frontier when $(\zeta-w_0R_f)>0$, so the portfolio $\phi$ lies on the Capital Market Line; this requires $\zeta>w_0R_f$ (the bliss level exceeds the payoff of investing all wealth at $R_f$), and the portfolio then earns an expected return above $R_f.\ \ \ \blacksquare$
==## Exercise 5.3==
## Exercise 5.4
$E[\tilde{R}^2]=E[(\tilde{R}_p+b\tilde{e}_p+\tilde{\varepsilon})^2]$
$=E[(\tilde{R}_p+b\tilde{e}_p)^2+2(\tilde{R}_p+b\tilde{e}_p)\tilde{\varepsilon}+\tilde{\varepsilon}^2]$
$=E[\tilde{R}_p^2+2b\tilde{R}_p\tilde{e}_p+b^2\tilde{e}_p^2+\tilde{\varepsilon}^2]\ \ \because E[(\tilde{R}_p+b\tilde{e}_p)\tilde{\varepsilon}]=0$
$=E[\tilde{R}_p^2+b^2\tilde{e}_p^2+\tilde{\varepsilon}^2]\geq E[\tilde{R}_p^2]\ \ \because E[\tilde{R}_p\tilde{e}_p]=0\ \ \ \blacksquare$
* with risk-free asset

* only risky assets

## Exercise 5.5
$\tilde{x}=\dfrac{1-\tilde{e}_p}{E[\tilde{R}_p]}\\
E[\tilde{x}\tilde{R}]=E\left[\dfrac{1-\tilde{e}_p}{E[\tilde{R}_p]}\tilde{R}\right]\\
\ \ \ \ \ \ \ \ \ \ \ =E\left[\dfrac{\tilde{R}}{E[\tilde{R}_p]}-\dfrac{\tilde{e}_p\tilde{R}}{E[\tilde{R}_p]}\right]\\
\ \ \ \ \ \ \ \ \ \ \ =E\left[\dfrac{\tilde{R}_p+b\tilde{e}_p+\tilde{\varepsilon}}{E[\tilde{R}_p]}-\dfrac{\tilde{e}_p(\tilde{R}_p+b\tilde{e}_p+\tilde{\varepsilon})}{E[\tilde{R}_p]}\right]\\
\ \ \ \ \ \ \ \ \ \ \ =E\left[\dfrac{\tilde{R}_p+b\tilde{e}_p+\tilde{\varepsilon}}{E[\tilde{R}_p]}-\dfrac{b\tilde{e}_p^2}{E[\tilde{R}_p]}\right]\\
\ \ \ \ \ \ \ \ \ \ \ =1+\dfrac{bE[\tilde{e}_p]}{E[\tilde{R}_p]}-\dfrac{bE[\tilde{e}_p]}{E[\tilde{R}_p]}\ \ \because E[\tilde{e}_p^2]=E[\tilde{e}_p]\\
\ \ \ \ \ \ \ \ \ \ \ =1\ \ \ \blacksquare$
## Exercise 5.6
$(a)$
$\tilde{R}^*=\tilde{R}_p+b\tilde{e}_p+\tilde{\varepsilon}\\
\text{var}(\tilde{R}^*)=\text{var}(\tilde{R}_p+b\tilde{e}_p)+\text{var}(\tilde{\varepsilon})+2Cov(\tilde{R}_p+b\tilde{e}_p,\ \tilde{\varepsilon})\\
\ \ \ \ \ \ \ \ \ \ \ \ \ =\text{var}(\tilde{R}_p+b\tilde{e}_p)+\text{var}(\tilde{\varepsilon})\\
\ \ \ \ \ \ \ \ \ \ \ \ \ =\text{var}(\tilde{R}_p)+b^2\text{var}(\tilde{e}_p)+2Cov(\tilde{R}_p,\ b\tilde{e}_p)+\text{var}(\tilde{\varepsilon})\\
\ \ \ \ \ \ \ \ \ \ \ \ \ =\text{var}(\tilde{R}_p)+b^2\text{var}(\tilde{e}_p)+2(bE[\tilde{R}_p\tilde{e}_p]-bE[\tilde{R}_p]E[\tilde{e}_p])+\text{var}(\tilde{\varepsilon})\\
\ \ \ \ \ \ \ \ \ \ \ \ \ =\text{var}(\tilde{R}_p)+b^2\text{var}(\tilde{e}_p)-2bE[\tilde{R}_p]E[\tilde{e}_p]+\text{var}(\tilde{\varepsilon})$
$b^2\text{var}(\tilde{e}_p)-2bE[\tilde{R}_p]E[\tilde{e}_p]=\left(b\sqrt{\text{var}(\tilde{e}_p)}-\dfrac{E[\tilde{R}_p]E[\tilde{e}_p]}{\sqrt{\text{var}(\tilde{e}_p)}}\right)^2-\dfrac{E[\tilde{R}_p]^2E[\tilde{e}_p]^2}{\text{var}(\tilde{e}_p)}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\left(b\sqrt{\text{var}(\tilde{e}_p)}-b_m\sqrt{\text{var}(\tilde{e}_p)}\right)^2-b_m^2\text{var}(\tilde{e}_p)$
$\because b_m=\dfrac{E[R_p]}{1-E[e_p]}=\dfrac{E[R_p]E[e_p]}{\text{var}(e_p)}$
$\text{var}(\tilde{R}_p+b_m\tilde{e}_p)=\text{var}(\tilde{R}_p)+b_m^2\text{var}(\tilde{e}_p)+2(b_mE[\tilde{R}_p\tilde{e}_p]-b_mE[\tilde{R}_p]E[\tilde{e}_p])\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\text{var}(\tilde{R}_p)+b_m^2\text{var}(\tilde{e}_p)-2b_mE[\tilde{R}_p]E[\tilde{e}_p]\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\text{var}(\tilde{R}_p)+\left(b_m\sqrt{\text{var}(\tilde{e}_p)}-\dfrac{E[\tilde{R}_p]E[\tilde{e}_p]}{\sqrt{\text{var}(\tilde{e}_p)}}\right)^2-\dfrac{E[\tilde{R}_p]^2E[\tilde{e}_p]^2}{\text{var}(\tilde{e}_p)}\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\text{var}(\tilde{R}_p)+\left(b_m\sqrt{\text{var}(\tilde{e}_p)}-b_m\sqrt{\text{var}(\tilde{e}_p)}\right)^2-b_m^2\text{var}(\tilde{e}_p)\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\text{var}(\tilde{R}_p)-b_m^2\text{var}(\tilde{e}_p)$
If $b=b_m,\ \text{var}(\tilde{R}^*)=\text{var}(\tilde{R}_p)-b_m^2\text{var}(\tilde{e}_p)+\text{var}(\tilde{\varepsilon})\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\text{var}(\tilde{R}_p+b_m\tilde{e}_p)+\text{var}(\tilde{\varepsilon})\\
\therefore \text{var}(\tilde{R}^*)\geq \text{var}(\tilde{R}_p+b_m\tilde{e}_p)\ \ \ \ \ \blacksquare$
$(b)$
$Cov(\tilde{R}_p,\ \tilde{R}_p+b_z\tilde{e}_p)=E[\tilde{R}_p(\tilde{R}_p+b_z\tilde{e}_p)]-E[\tilde{R}_p]E[\tilde{R}_p+b_z\tilde{e}_p]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =E[\tilde{R}_p^2+b_z\tilde{R}_p\tilde{e}_p]-E[\tilde{R}_p]^2-b_zE[\tilde{R}_p]E[\tilde{e}_p]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =(E[\tilde{R}_p^2]-E[\tilde{R}_p]^2)+b_zE[\tilde{R}_p\tilde{e}_p]-b_zE[\tilde{R}_p]E[\tilde{e}_p]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\text{var}(\tilde{R}_p)-b_zE[\tilde{R}_p]E[\tilde{e}_p]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\text{var}(\tilde{R}_p)-\dfrac{\text{var}(\tilde{R}_p)}{E[\tilde{R}_p]E[\tilde{e}_p]}E[\tilde{R}_p]E[\tilde{e}_p]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =0\ \ \ \ \ \blacksquare$
$(c)$ prove equation (5.23)
$\tilde{R}=\tilde{R}_p+b\tilde{e}_p+\tilde{\varepsilon}$, where by construction the components are mutually orthogonal:
$\tilde{R}_p\perp\tilde{e}_p,\ \tilde{R}_p\perp\tilde{\varepsilon},\ \tilde{e}_p\perp \tilde{\varepsilon}$
$E[\tilde{R}(\tilde{R}_p+b_c\tilde{e}_p)]=E[\tilde{R}\tilde{R}_p]+b_cE[\tilde{R}\tilde{e}_p]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =E[(\tilde{R}_p+b\tilde{e}_p+\tilde{\varepsilon})\tilde{R}_p]+b_cE[(\tilde{R}_p+b\tilde{e}_p+\tilde{\varepsilon})\tilde{e}_p]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =E[\tilde{R}_p^2]+bE[\tilde{e}_p\tilde{R}_p]+E[\tilde{\varepsilon}\tilde{R}_p]+b_cE[\tilde{R}_p\tilde{e}_p]+b_cbE[\tilde{e}_p^2]+b_cE[\tilde{\varepsilon}\tilde{e}_p]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =E[\tilde{R}_p^2]+b_cbE[\tilde{e}_p^2]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =E[\tilde{R}_p^2]+b_cbE[\tilde{e}_p]\ \ \because E[\tilde{e}_p^2]=E[\tilde{e}_p]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =E[\tilde{R}_p^2]+\dfrac{bE[\tilde{R}_p^2]}{E[\tilde{R}_p]}E[\tilde{e}_p]\ \ \because b_c=\dfrac{E[\tilde{R}_p^2]}{E[\tilde{R}_p]}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\dfrac{E[\tilde{R}_p^2]E[\tilde{R}_p]+bE[\tilde{R}_p^2]E[\tilde{e}_p]}{E[\tilde{R}_p]}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\dfrac{E[\tilde{R}_p^2](E[\tilde{R}_p]+bE[\tilde{e}_p])}{E[\tilde{R}_p]}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =b_c(E[\tilde{R}_p]+bE[\tilde{e}_p])\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =b_c(E[\tilde{R}_p+b\tilde{e}_p+\tilde{\varepsilon}])\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =b_cE[\tilde{R}]\ \ \ \blacksquare$
## Exercise 5.7
If $\tilde{w}=w_0R_f+\phi'(\tilde{R}-R_f)$
$\tilde{R}=\tilde{R}_p+b\tilde{e}_p+\tilde{\varepsilon}$
$u(\tilde{w})=-\exp \{-\alpha(w_0R_f+\phi'(\tilde{R}-R_f))\}\\
\ \ \ \ \ \ \ \ =-\exp \{-\alpha(w_0R_f+\phi'(\tilde{R}_p+b\tilde{e}_p+\tilde{\varepsilon}-R_f))\}\\
\ \ \ \ \ \ \ \ =-\exp \{-\alpha(w_0R_f-\phi'R_f+\phi'(\tilde{R}_p+b\tilde{e}_p+\tilde{\varepsilon}))\}$
$E[u(\tilde{w})]=-E\left[\exp \{-\alpha(w_0R_f-\phi'R_f+\phi'(\tilde{R}_p+b\tilde{e}_p+\tilde{\varepsilon}))\}\right]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ =-\exp \left\{-\alpha\left(w_0R_f-\phi'R_f+\phi'(E[\tilde{R}_p]+bE[\tilde{e}_p])\right)+\frac{1}{2}\alpha^2\phi'var(\tilde{R})\phi\right\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ =-\exp \left\{-\alpha\left(w_0R_f-\phi'R_f+\phi'(E[\tilde{R}_p]+bE[\tilde{e}_p])\right)\\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{1}{2}\alpha^2\phi'\left(var({\tilde{R}}_p+b\tilde{e}_p)+var(\tilde{\varepsilon})\right)\phi\right\}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ =-\exp \left\{-\alpha\left(w_0R_f-\phi'R_f+\phi'(E[\tilde{R}_p]+bE[\tilde{e}_p])\right)\\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{1}{2}\alpha^2\phi'var({\tilde{R}}_p+b\tilde{e}_p)\phi+\frac{1}{2}\alpha^2\phi'var(\tilde{\varepsilon})\phi\right\}$
$\because var({\tilde{R}}_p+b\tilde{e}_p)=E[(\tilde{R}_p+b\tilde{e}_p)^2]-E[{\tilde{R}}_p+b\tilde{e}_p]^2\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =E[\tilde{R}_p^2+2b\tilde{R}_p\tilde{e}_p+b^2\tilde{e}_p^2]-\left(E[{\tilde{R}}_p]^2+2bE[{\tilde{R}}_p]E[\tilde{e}_p]+b^2E[\tilde{e}_p]^2\right)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =E[\tilde{R}_p^2]+b^2E[\tilde{e}_p^2]-E[{\tilde{R}}_p]^2-2bE[{\tilde{R}}_p]E[\tilde{e}_p]-b^2E[\tilde{e}_p]^2\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =var({\tilde{R}_p})+b^2var(\tilde{e}_p)-2bE[{\tilde{R}}_p]E[\tilde{e}_p]$
$E[u(\tilde{w})]=-\exp \left\{-\alpha\left(w_0R_f-\phi'R_f+\phi'(E[\tilde{R}_p]+bE[\tilde{e}_p])\right)\\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{1}{2}\alpha^2\phi'\left(var({\tilde{R}_p})+b^2var(\tilde{e}_p)-2bE[{\tilde{R}}_p]E[\tilde{e}_p]\right)\phi+\frac{1}{2}\alpha^2\phi'var(\tilde{\varepsilon})\phi\right\}\\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-\exp \left\{-\alpha\left(w_0R_f-\phi'R_f+\phi'(E[\tilde{R}_p]+bE[\tilde{e}_p])\\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\frac{1}{2}\alpha\phi'var({\tilde{R}_p})\phi-\frac{1}{2}b^2\alpha\phi'var(\tilde{e}_p)\phi+b\alpha\phi'E[{\tilde{R}}_p]E[\tilde{e}_p]\phi-\frac{1}{2}\alpha\phi'var(\tilde{\varepsilon})\phi\right)\right\}$
$CE=w_0R_f-\phi'R_f+\phi'(E[\tilde{R}_p]+bE[\tilde{e}_p])-\frac{1}{2}\alpha\phi'var({\tilde{R}_p})\phi-\frac{1}{2}b^2\alpha\phi'var(\tilde{e}_p)\phi\\
\ \ \ \ \ \ +b\alpha\phi'E[{\tilde{R}}_p]E[\tilde{e}_p]\phi-\frac{1}{2}\alpha\phi'var(\tilde{\varepsilon})\phi$
$\max CE:$
$\dfrac{\partial CE}{\partial b}=\phi'E[\tilde{e}_p]-b\alpha\phi'var(\tilde{e}_p)\phi+\alpha\phi'E[{\tilde{R}}_p]E[\tilde{e}_p]\phi=0$
$b\alpha\phi'var(\tilde{e}_p)\phi=\phi'E[\tilde{e}_p]+\alpha\phi'E[{\tilde{R}}_p]E[\tilde{e}_p]\phi$
$b\alpha\phi'(E[\tilde{e}_p^2]-E[\tilde{e}_p]^2)\phi=\phi'E[\tilde{e}_p]+\alpha\phi'E[\tilde{R}_p]E[\tilde{e}_p]\phi$
$b=\dfrac{\phi'E[\tilde{e}_p]+\alpha\phi'E[\tilde{R}_p]E[\tilde{e}_p]\phi}{\alpha\phi'(E[\tilde{e}_p^2]-E[\tilde{e}_p]^2)\phi}\\
\ \ =\dfrac{1+\alpha\phi'E[\tilde{R}_p]}{\alpha\phi'(1-E[\tilde{e}_p])}\ \ \ \because E[\tilde{e}_p^2]=E[\tilde{e}_p]$
$E[\tilde{R}^*]=E[\tilde{R}_p+b^*\tilde{e}_p]\\
\ \ \ \ \ \ \ \ \ \ =E\left[\tilde{R}_p+\left(\dfrac{1+\alpha\phi'E[\tilde{R}_p]}{\alpha\phi'(1-E[\tilde{e}_p])}\right)\tilde{e}_p\right]\\
\ \ \ \ \ \ \ \ \ \ =E\left[\tilde{R}_p+\left(\dfrac{\tilde{e}_p+\alpha\phi'E[\tilde{R}_p]\tilde{e}_p}{\alpha\phi'(1-E[\tilde{e}_p])}\right)\right]\\
\ \ \ \ \ \ \ \ \ \ =E[\tilde{R}_p]+E\left[\dfrac{\tilde{e}_p}{\alpha\phi'(1-E[\tilde{e}_p])}\right]+E\left[\dfrac{\alpha\phi'E[\tilde{R}_p]\tilde{e}_p}{\alpha\phi'(1-E[\tilde{e}_p])}\right]\\
\ \ \ \ \ \ \ \ \ \ =E[\tilde{R}_p]+\dfrac{E[\tilde{e}_p]}{\alpha\phi'(1-E[\tilde{e}_p])}+\dfrac{E[\tilde{R}_p]E[\tilde{e}_p]}{1-E[\tilde{e}_p]}\\
\ \ \ \ \ \ \ \ \ \ =E[\tilde{R}_p]+\dfrac{E[\tilde{e}_p]^2}{\alpha\phi'var[\tilde{e}_p]}+b_mE[\tilde{e}_p]\ \ \ \blacksquare$
## Exercise 5.8
$\tilde{m}_p=\dfrac{1}{R_f}+\left(\iota-\dfrac{1}{R_f}\mu\right)'\Sigma^{-1}(\tilde{R}-\mu)$ based on equation (3.45)
$\tilde{R}_p=\dfrac{\tilde{m}_p}{E[\tilde{m}_p^2]}$
$\pi_\text{tang}=\dfrac{1}{\iota'\Sigma^{-1}(\mu-R_f\iota)}\Sigma^{-1}(\mu-R_f\iota)$
$\tilde{R}_p=\lambda\pi_\text{tang}'\tilde{R}+R_f-\lambda R_f\\
\ \ \ \ \ =\lambda(\pi_\text{tang}'\tilde{R}-R_f)+R_f$
$\lambda=\dfrac{\tilde{R}_p-R_f}{\pi_\text{tang}'\tilde{R}-R_f}\tag{1}$
$\tilde{m}_p^2=\dfrac{1}{R_f^2}+\dfrac{2}{R_f}\left(\iota-\dfrac{1}{R_f}\mu\right)'\Sigma^{-1}(\tilde{R}-\mu)+\left(\iota-\dfrac{1}{R_f}\mu\right)'\Sigma^{-1}(\tilde{R}-\mu)(\tilde{R}-\mu)'\Sigma^{-1}\left(\iota-\dfrac{1}{R_f}\mu\right)$
$E[\tilde{m}_p^2]=\dfrac{1}{R_f^2}+\dfrac{2}{R_f}\left(\iota-\dfrac{1}{R_f}\mu\right)'\Sigma^{-1}(E[\tilde{R}]-\mu)+\left(\iota-\dfrac{1}{R_f}\mu\right)'\Sigma^{-1}E\left[(\tilde{R}-\mu)(\tilde{R}-\mu)'\right]\Sigma^{-1}\left(\iota-\dfrac{1}{R_f}\mu\right)\\
\ \ \ \ \ \ \ \ \ \ =\dfrac{1}{R_f^2}+\left(\iota-\dfrac{1}{R_f}\mu\right)'\Sigma^{-1}\Sigma\Sigma^{-1}\left(\iota-\dfrac{1}{R_f}\mu\right)\ \ \because E[\tilde{R}]=\mu,\ E[(\tilde{R}-\mu)(\tilde{R}-\mu)']=\Sigma\\
\ \ \ \ \ \ \ \ \ \ =\dfrac{1}{R_f^2}+\left(\iota-\dfrac{1}{R_f}\mu\right)'\Sigma^{-1}\left(\iota-\dfrac{1}{R_f}\mu\right)$
$\tilde{R}_p=\dfrac{\tilde{m}_p}{E[\tilde{m}_p^2]}=\dfrac{\dfrac{1}{R_f}+\left(\iota-\dfrac{1}{R_f}\mu\right)'\Sigma^{-1}(\tilde{R}-\mu)}{\dfrac{1}{R_f^2}+\left(\iota-\dfrac{1}{R_f}\mu\right)'\Sigma^{-1}\left(\iota-\dfrac{1}{R_f}\mu\right)}\\
\ \ \ \ \ =\dfrac{R_f-R_f(\mu-R_f\iota)'\Sigma^{-1}(\tilde{R}-\mu)}{1+(\mu-R_f\iota)'\Sigma^{-1}(\mu-R_f\iota)}$
$\tilde{R}_p-R_f=\dfrac{-R_f(\mu-R_f\iota)'\Sigma^{-1}(\tilde{R}-\mu)-R_f(\mu-R_f\iota)'\Sigma^{-1}(\mu-R_f\iota)}{1+(\mu-R_f\iota)'\Sigma^{-1}(\mu-R_f\iota)}\\
\ \ \ \ \ \ \ \ \ =\dfrac{-R_f(\mu-R_f\iota)'\Sigma^{-1}\tilde{R}+R_f(\mu-R_f\iota)'\Sigma^{-1}\mu-R_f(\mu-R_f\iota)'\Sigma^{-1}\mu+R_f(\mu-R_f\iota)'\Sigma^{-1}R_f\iota}{1+(\mu-R_f\iota)'\Sigma^{-1}(\mu-R_f\iota)}\\
\ \ \ \ \ \ \ \ \ =\dfrac{-R_f(\mu-R_f\iota)'\Sigma^{-1}\tilde{R}+R_f(\mu-R_f\iota)'\Sigma^{-1}R_f\iota}{1+(\mu-R_f\iota)'\Sigma^{-1}(\mu-R_f\iota)}\\
\ \ \ \ \ \ \ \ \ =\dfrac{-R_f(\mu-R_f\iota)'\Sigma^{-1}[\tilde{R}-R_f\iota]}{1+(\mu-R_f\iota)'\Sigma^{-1}(\mu-R_f\iota)}$
$\pi_\text{tang}'\tilde{R}-R_f=\dfrac{(\mu-R_f\iota)'\Sigma^{-1}\tilde{R}-R_f\iota'\Sigma^{-1}(\mu-R_f\iota)}{\iota'\Sigma^{-1}(\mu-R_f\iota)}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\dfrac{(\mu-R_f\iota)'\Sigma^{-1}(\tilde{R}-R_f\iota)}{\iota'\Sigma^{-1}(\mu-R_f\iota)}$
Based on equation $(1)$
$\lambda=\dfrac{\tilde{R}_p-R_f}{\pi_\text{tang}'\tilde{R}-R_f}\\
\ \ \ =\dfrac{\dfrac{-R_f(\mu-R_f\iota)'\Sigma^{-1}(\tilde{R}-R_f\iota)}{1+(\mu-R_f\iota)'\Sigma^{-1}(\mu-R_f\iota)}}{\dfrac{(\mu-R_f\iota)'\Sigma^{-1}(\tilde{R}-R_f\iota)}{\iota'\Sigma^{-1}(\mu-R_f\iota)}}\\
\ \ \ =\dfrac{\dfrac{-R_f}{1+(\mu-R_f\iota)'\Sigma^{-1}(\mu-R_f\iota)}}{\dfrac{1}{\iota'\Sigma^{-1}(\mu-R_f\iota)}}\\
\ \ \ =\dfrac{-R_f\iota'\Sigma^{-1}(\mu-R_f\iota)}{1+(\mu-R_f\iota)'\Sigma^{-1}(\mu-R_f\iota)}\ \ \blacksquare$
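The closed form for $\lambda$ can be sanity-checked numerically. The pure-Python sketch below builds a two-asset example ($\mu$, $\Sigma$, and $R_f$ are hypothetical numbers chosen only for illustration) and verifies that $\tilde{m}_p/E[\tilde{m}_p^2]$ and $\lambda\pi_\text{tang}'\tilde{R}+(1-\lambda)R_f$ coincide as affine functions of $\tilde{R}$ when $\lambda$ is taken from the last line.

```python
# Sanity check of the lambda formula above, with two assets.
# mu, Sigma and Rf are hypothetical numbers chosen only for illustration.
mu = [1.08, 1.12]          # expected gross returns E[R~]
Rf = 1.02                  # gross risk-free return
Sigma = [[0.04, 0.01],
         [0.01, 0.09]]     # return covariance matrix

# invert the 2x2 covariance matrix directly
det = Sigma[0][0] * Sigma[1][1] - Sigma[0][1] * Sigma[1][0]
Sinv = [[ Sigma[1][1] / det, -Sigma[0][1] / det],
        [-Sigma[1][0] / det,  Sigma[0][0] / det]]

ex = [m - Rf for m in mu]                                    # mu - Rf*iota
x = [sum(Sinv[i][j] * ex[j] for j in range(2)) for i in range(2)]  # Sinv(mu - Rf*iota)
kappa2 = sum(ex[i] * x[i] for i in range(2))                 # (mu-Rf*iota)' Sinv (mu-Rf*iota)
B = sum(x)                                                   # iota' Sinv (mu-Rf*iota)

lam = -Rf * B / (1 + kappa2)                                 # lambda from the last line
pi_tang = [xi / B for xi in x]                               # tangency portfolio weights

# R~_p = m~_p / E[m~_p^2] written as an affine function c0 + c'R~ of the returns
c = [-Rf * xi / (1 + kappa2) for xi in x]
c0 = (Rf + Rf * sum(x[i] * mu[i] for i in range(2))) / (1 + kappa2)

# lambda * pi_tang'R~ + (1 - lambda) * Rf written the same way
d = [lam * p for p in pi_tang]
d0 = (1 - lam) * Rf

assert all(abs(c[i] - d[i]) < 1e-9 for i in range(2))
assert abs(c0 - d0) < 1e-9
```

Note that $\lambda$ comes out negative, consistent with the SDF-mimicking return sitting on the inefficient part of the frontier.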
# Chapter 6
## Exercise 6.1
$\tilde{R}_*=a+b'\tilde{F}$
Because $\tilde{R}_*$ is on the mean-variance frontier and a risk-free asset exists, every return $\tilde{R}$ satisfies the beta pricing relation
$E[\tilde{R}]=R_f+\dfrac{Cov(\tilde{R},\tilde{R}_*)}{var(\tilde{R}_*)}\left(E[{\tilde{R}_*}]-R_f\right)\\
\ \ \ \ \ \ \ \ =R_f+\dfrac{Cov(\tilde{R},a+b'\tilde{F})}{var(a+b'\tilde{F})}\left(a+b'E[\tilde{F}]-R_f\right)\\
\ \ \ \ \ \ \ \ =R_f+\dfrac{b'Cov(\tilde{R},\tilde{F})}{b'var(\tilde{F})b}\left(a+b'E[\tilde{F}]-R_f\right)$
$\because E[\tilde{R}_*]=E[a+b'\tilde{F}]=a+b'E[\tilde{F}]$
$E[\tilde{R}]=R_f+\dfrac{b'Cov(\tilde{R},\tilde{F})}{b'var(\tilde{F})b}\left(E[\tilde{R}_*]-R_f\right)=R_f+Cov(\tilde{R},\tilde{F})'\lambda,\ \ \text{where }\lambda=\dfrac{E[\tilde{R}_*]-R_f}{b'var(\tilde{F})b}\,b\ \ \ \blacksquare$
which is a factor model with factor risk premia $\lambda$.
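The covariance algebra this proof relies on — for $\tilde{R}_*=a+b'\tilde{F}$, $Cov(\tilde{R},\tilde{R}_*)=b'Cov(\tilde{R},\tilde{F})$ and $var(\tilde{R}_*)=b'var(\tilde{F})b$ — can be verified state by state. The sketch below uses three equally likely states and two factors (all numbers hypothetical).

```python
# Check of the covariance algebra: for R~* = a + b'F~,
# cov(R~, R~*) = b' cov(R~, F~) and var(R~*) = b' var(F~) b.
# Three equally likely states and two factors (all numbers hypothetical).
F = [[0.1, 0.0], [0.0, 0.2], [-0.1, -0.2]]   # factor realizations per state
R = [1.15, 1.05, 0.95]                        # some return, state by state
a, b = 1.0, [0.5, 0.3]
n = len(R)

Rstar = [a + sum(b[k] * F[s][k] for k in range(2)) for s in range(n)]

def mean(x): return sum(x) / len(x)
def cov(x, y): return mean([xi * yi for xi, yi in zip(x, y)]) - mean(x) * mean(y)

# left-hand sides, computed directly from the state-by-state values
lhs_cov = cov(R, Rstar)
lhs_var = cov(Rstar, Rstar)

# right-hand sides, computed from the factor moments
covRF = [cov(R, [F[s][k] for s in range(n)]) for k in range(2)]
SigF = [[cov([F[s][i] for s in range(n)], [F[s][j] for s in range(n)])
         for j in range(2)] for i in range(2)]
rhs_cov = sum(b[k] * covRF[k] for k in range(2))
rhs_var = sum(b[i] * SigF[i][j] * b[j] for i in range(2) for j in range(2))

assert abs(lhs_cov - rhs_cov) < 1e-12
assert abs(lhs_var - rhs_var) < 1e-12
```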
## Exercise 6.4
$\tilde{R}_1-\tilde{R}_b=\alpha+\tilde{\varepsilon}\\
E[u'(w_0\tilde{R}_b)(\tilde{R}_1-\tilde{R}_b)]=E[u'(w_0\tilde{R}_b)(\alpha+\tilde{\varepsilon})]$
* If $u(w)=-\dfrac{1}{2}(w-\zeta)^2$
$u(w)=-\dfrac{1}{2}(w^2-2w\zeta+\zeta^2)\\
u'(w)=-w+\zeta\\
u'(w_0\tilde{R}_b)=-w_0\tilde{R}_b+\zeta>0$ (assuming wealth stays below the bliss point $\zeta$)
$E[(-w_0\tilde{R}_b+\zeta)(\alpha+\tilde{\varepsilon})]=E[\alpha(\zeta-w_0\tilde{R}_b)+\tilde{\varepsilon}(\zeta-w_0\tilde{R}_b)]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\alpha E[\zeta-w_0\tilde{R}_b]+E[\tilde{\varepsilon}\zeta-\tilde{\varepsilon}w_0\tilde{R}_b]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\alpha E[\zeta-w_0\tilde{R}_b]+\zeta E[\tilde{\varepsilon}]-w_0E[\tilde{\varepsilon}\tilde{R}_b]\ \because \tilde{\varepsilon}\perp\tilde{R}_b\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\alpha E[\zeta-w_0\tilde{R}_b]>0\ \text{ for }\alpha>0\ \ \blacksquare$
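The quadratic-utility step can be checked on a small discrete example: whenever $E[\tilde{\varepsilon}]=0$ and $E[\tilde{\varepsilon}\tilde{R}_b]=0$, the expectation collapses to $\alpha E[\zeta-w_0\tilde{R}_b]$. Below is a four-state check in pure Python (all numbers hypothetical).

```python
# Check of the quadratic-utility step: with E[eps] = 0 and E[eps * Rb] = 0,
# E[(zeta - w0*Rb)(alpha + eps)] = alpha * E[zeta - w0*Rb].
# Four equally likely states (hypothetical numbers).
Rb  = [1.00, 1.10, 1.00, 1.10]
eps = [0.02, -0.02, -0.02, 0.02]   # mean zero and uncorrelated with Rb
alpha, zeta, w0 = 0.01, 3.0, 1.0

def mean(x): return sum(x) / len(x)

# confirm the orthogonality conditions the proof uses
assert abs(mean(eps)) < 1e-12
assert abs(mean([e * r for e, r in zip(eps, Rb)])) < 1e-12

lhs = mean([(zeta - w0 * r) * (alpha + e) for r, e in zip(Rb, eps)])
rhs = alpha * mean([zeta - w0 * r for r in Rb])
assert abs(lhs - rhs) < 1e-12
```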
* If $\tilde{R}_1$ and $\tilde{R}_b$ are jointly normally distributed
$\tilde{\varepsilon}=\tilde{R}_1-\tilde{R}_b-\alpha$
$\Rightarrow \tilde{\varepsilon}$ and $\tilde{R}_b$ are jointly normal
$E[u'(w_0\tilde{R}_b)(\tilde{R}_1-\tilde{R}_b)]=E[u'(w_0\tilde{R}_b)(\alpha+\tilde{\varepsilon})]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\alpha E[u'(w_0\tilde{R}_b)]+E[u'(w_0\tilde{R}_b)\tilde{\varepsilon}]$
$Cov(u'(w_0\tilde{R}_b),\ \tilde{\varepsilon})=E[u'(w_0\tilde{R}_b)\tilde{\varepsilon}]-E[u'(w_0\tilde{R}_b)]E[\tilde{\varepsilon}]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =E[u'(w_0\tilde{R}_b)\tilde{\varepsilon}]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =E[u''(w_0\tilde{R}_b)]Cov(w_0\tilde{R}_b,\ \ \tilde{\varepsilon})\ \text{by Stein's lemma}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =E[u''(w_0\tilde{R}_b)]\left(w_0E[\tilde{R}_b\tilde{\varepsilon}]-w_0E[\tilde{R}_b]E[\tilde{\varepsilon}]\right)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =E[u''(w_0\tilde{R}_b)]*0\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =0$
$\Rightarrow E[u'(w_0\tilde{R}_b)(\tilde{R}_1-\tilde{R}_b)]=\alpha E[u'(w_0\tilde{R}_b)]>0\ \text{ for }\alpha>0\ \ \blacksquare$
## Exercise 6.7
$(a)$
$\tilde{R}_1^*=\tilde{R}_1=E[\tilde{R}_1]+\tilde{f}+\tilde{\varepsilon}_1\\
\tilde{R}_2^*=\pi\left(E[\tilde{R}_1]+\tilde{f}+\tilde{\varepsilon}_1\right)+(1-\pi)\left(E[\tilde{R}_2]-\tilde{f}+\tilde{\varepsilon}_2\right)\\
\ \ \ \ =\pi E[\tilde{R}_1]+(1-\pi)E[\tilde{R}_2]+\tilde{f}(2\pi-1)+\pi \tilde{\varepsilon}_1+(1-\pi)\tilde{\varepsilon}_2$
$Cov(\tilde{\varepsilon}_1,\ \pi \tilde{\varepsilon}_1+(1-\pi)\tilde{\varepsilon}_2)=E[\tilde{\varepsilon}_1(\pi \tilde{\varepsilon}_1+(1-\pi)\tilde{\varepsilon}_2)]-E[\tilde{\varepsilon}_1]E[\pi \tilde{\varepsilon}_1+(1-\pi)\tilde{\varepsilon}_2]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\pi E[\tilde{\varepsilon}_1^2]+(1-\pi)E[\tilde{\varepsilon}_1\tilde{\varepsilon}_2]-\pi E[\tilde{\varepsilon}_1]^2-(1-\pi)E[\tilde{\varepsilon}_1]E[\tilde{\varepsilon}_2]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\pi(E[\tilde{\varepsilon}_1^2]-E[\tilde{\varepsilon}_1]^2)+(1-\pi)(E[\tilde{\varepsilon}_1\tilde{\varepsilon}_2]-E[\tilde{\varepsilon}_1]E[\tilde{\varepsilon}_2])\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\pi\text{var}(\tilde{\varepsilon}_1)+(1-\pi)Cov(\tilde{\varepsilon}_1,\ \tilde{\varepsilon}_2)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\pi\sigma^2\not=0$
According to the definition of a statistical factor model, the residuals must satisfy $E[\tilde{\varepsilon}_i\tilde{\varepsilon}_j]=0$ for $i\not=j$, which contradicts the computed $Cov(\tilde{\varepsilon}_1,\ \pi \tilde{\varepsilon}_1+(1-\pi)\tilde{\varepsilon}_2)=\pi\sigma^2\not=0.\ \ \blacksquare$
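Part $(a)$ can also be checked by brute force: modeling $\tilde{\varepsilon}_1,\tilde{\varepsilon}_2$ as independent $\pm\sigma$ coin flips (a hypothetical distribution matching mean zero and variance $\sigma^2$), the covariance of $\tilde{\varepsilon}_1$ with the market residual comes out to exactly $\pi\sigma^2$.

```python
# State-by-state check of part (a): eps1 is correlated with the market
# residual pi*eps1 + (1-pi)*eps2, so this is not a statistical factor model.
# eps1, eps2 are independent +/-sigma coin flips (sigma, pi hypothetical).
from itertools import product

sigma, pi = 0.7, 0.4
states = list(product([sigma, -sigma], [sigma, -sigma]))  # 4 equally likely states

e1s = [e1 for e1, e2 in states]
mkt_resid = [pi * e1 + (1 - pi) * e2 for e1, e2 in states]

def mean(x): return sum(x) / len(x)
cov = mean([a * b for a, b in zip(e1s, mkt_resid)]) - mean(e1s) * mean(mkt_resid)

assert abs(cov - pi * sigma**2) < 1e-12   # equals pi * var(eps1) != 0
```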
$(b)$
$\tilde{R}_1^*=E[\tilde{R}_1^*]+\tilde{\varepsilon}_1^*$
$\ \ \ \ \ =\tilde{R}_1=E[\tilde{R}_1]+\tilde{f}+\tilde{\varepsilon}_1$
$\therefore \tilde{\varepsilon}_1^*=\tilde{f}+\tilde{\varepsilon}_1\tag{1}$
$\tilde{R}_2^*=E[\tilde{R}_2^*]+\tilde{\varepsilon}_2^*\\
\ \ \ \ \ =\pi E[\tilde{R}_1]+(1-\pi)E[\tilde{R}_2]+\tilde{f}(2\pi-1)+\pi \tilde{\varepsilon}_1+(1-\pi)\tilde{\varepsilon}_2$
$\therefore \tilde{\varepsilon}_2^*=\tilde{f}(2\pi-1)+\pi \tilde{\varepsilon}_1+(1-\pi)\tilde{\varepsilon}_2\tag{2}$
1. the expectation of $\tilde{\varepsilon}_1^*$ and $\tilde{\varepsilon}_2^*$
$E[\tilde{\varepsilon}_1^*]=E[\tilde{f}]+E[\tilde{\varepsilon}_1]=0\ \ \blacksquare$
$E[\tilde{\varepsilon}_2^*]=E[\tilde{f}](2\pi-1)+\pi E[\tilde{\varepsilon}_1]+(1-\pi)E[\tilde{\varepsilon}_2]=0\ \ \blacksquare$
2. the covariance of $\tilde{\varepsilon}_1^*$ and $\tilde{\varepsilon}_2^*$
$Cov(\tilde{\varepsilon}_1^*,\ \tilde{\varepsilon}_2^*)=E[\tilde{\varepsilon}_1^*\tilde{\varepsilon}_2^*]-E[\tilde{\varepsilon}_1^*]E[\tilde{\varepsilon}_2^*]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =E\left[(\tilde{f}+\tilde{\varepsilon}_1)\left((2\pi-1)\tilde{f}+\pi \tilde{\varepsilon}_1+(1-\pi)\tilde{\varepsilon}_2\right)\right]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =(2\pi-1)E\left[\tilde{f}^2\right]+\pi E\left[\tilde{f}\tilde{\varepsilon}_1\right]+(1-\pi)E\left[\tilde{f}\tilde{\varepsilon}_2\right]+(2\pi-1)E\left[\tilde{f}\tilde{\varepsilon}_1\right]\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\pi E[\tilde{\varepsilon}_1^2]+(1-\pi)E\left[\tilde{\varepsilon}_1\tilde{\varepsilon}_2\right]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =(2\pi-1)\text{var}(\tilde{f})+E\left[\tilde{f}\tilde{\varepsilon}_1\right](3\pi-1)+E\left[\tilde{f}\tilde{\varepsilon}_2\right](1-\pi)+\pi\text{var}(\tilde{\varepsilon}_1)+(1-\pi)E[\tilde{\varepsilon}_1\tilde{\varepsilon}_2]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =(2\pi-1)+\left(Cov(\tilde{f},\ \tilde{\varepsilon}_1)+E[\tilde{f}]E[\tilde{\varepsilon}_1]\right)(3\pi-1)+\left(Cov(\tilde{f},\ \tilde{\varepsilon}_2)+E[\tilde{f}]E[\tilde{\varepsilon}_2]\right)(1-\pi)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\pi\sigma^2+\left(Cov(\tilde{\varepsilon}_1,\ \tilde{\varepsilon}_2)+E[\tilde{\varepsilon}_1]E[\tilde{\varepsilon}_2]\right)(1-\pi)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =(2\pi-1)+\pi\sigma^2\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\pi(\sigma^2+2)-1\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\dfrac{1}{\sigma^2+2}(\sigma^2+2)-1\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =0\ \ \blacksquare$
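Part $(b)$ can be checked the same way by brute force: modeling $\tilde{f}$ as a $\pm1$ coin flip and $\tilde{\varepsilon}_1,\tilde{\varepsilon}_2$ as independent $\pm\sigma$ coin flips (hypothetical distributions matching the stated moments), $Cov(\tilde{\varepsilon}_1^*,\ \tilde{\varepsilon}_2^*)$ vanishes exactly when $\pi=1/(\sigma^2+2)$.

```python
# Check of part (b): with pi = 1/(sigma^2 + 2), the rewritten residuals
# eps1* = f + eps1 and eps2* = (2*pi-1)*f + pi*eps1 + (1-pi)*eps2 are
# uncorrelated.  f, eps1, eps2 are independent coin flips:
# f = +/-1 so var(f) = 1; eps_i = +/-sigma so var(eps_i) = sigma^2.
from itertools import product

sigma = 0.7                          # hypothetical residual volatility
pi = 1.0 / (sigma**2 + 2.0)

states = list(product([1.0, -1.0], [sigma, -sigma], [sigma, -sigma]))  # 8 states

e1s = [f + e1 for f, e1, e2 in states]
e2s = [(2*pi - 1)*f + pi*e1 + (1 - pi)*e2 for f, e1, e2 in states]

def mean(x): return sum(x) / len(x)
cov = mean([a * b for a, b in zip(e1s, e2s)]) - mean(e1s) * mean(e2s)

assert abs(cov) < 1e-12
```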
$(c)$
With the zero-factor model of part $(b)$, exact APT pricing would require $E[\tilde{R}_i^*]=R_f$ for every asset. For asset 1, the factor covariance is
$Cov(\tilde{R}_1,\ \tilde{f})=Cov\left(E[\tilde{R}_1]+\tilde{f}+\tilde{\varepsilon}_1,\ \tilde{f}\right)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =var(\tilde{f})+Cov(\tilde{\varepsilon}_1,\ \tilde{f})\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =1\ \ \because var(\tilde{f})=1,\ Cov(\tilde{\varepsilon}_1,\ \tilde{f})=0$
$E[\tilde{R}_1]-R_f=\lambda Cov(\tilde{R}_1,\ \tilde{f})=\lambda\not=0$, which contradicts exact APT pricing in the zero-factor model.$\ \ \blacksquare$