# 06 Portfolio optimization: Maximizing return with fixed risk
###### tags: `Portfolio optimization`
In general, for $n$ assets, we can combine them into an overall portfolio return $\mu$ and risk $\sigma$:
$$
\left\{
\begin{array}{rcl}
\mu &=& \boldsymbol{\mu}^T \mathbf{w}\\
\sigma^2 &=& \mathbf{w}^T \Sigma \mathbf{w}
\end{array}
\right.
$$
where $\mathbf{w}=[w_1, \dots, w_n]^T$, $\boldsymbol{\mu}=[\mu_1, \dots, \mu_n]^T$, and $\Sigma$ is the covariance matrix of these $n$ assets.
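As a concrete illustration, here is a minimal NumPy sketch that evaluates these two quantities for a given weight vector; the numbers for $\boldsymbol{\mu}$, $\Sigma$, and $\mathbf{w}$ are made up for this example:
```python
import numpy as np

# Hypothetical inputs: expected returns and covariance of n = 3 assets
mu = np.array([0.10, 0.07, 0.03])           # expected returns mu_i
Sigma = np.array([[0.040, 0.006, 0.002],
                  [0.006, 0.025, 0.004],
                  [0.002, 0.004, 0.010]])    # covariance matrix (symmetric, positive definite)

w = np.array([0.5, 0.3, 0.2])                # portfolio weights, summing to 1

portfolio_return = mu @ w                    # mu^T w
portfolio_risk = np.sqrt(w @ Sigma @ w)      # sigma = sqrt(w^T Sigma w)

print(portfolio_return, portfolio_risk)
```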
Suppose we want to maximize the return for a fixed level of risk, as follows.
$$
\max_{\mathbf{w}} \mu=\boldsymbol{\mu}^T \mathbf{w} \\
\text{s.t.}
\left\{
\begin{array}{l}
\mathbf{w}^T\mathbf{1}=1\\
\mathbf{w}^T \Sigma \mathbf{w}=\sigma_0^2
\end{array}
\right.
$$
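Before deriving the solution analytically, the problem can be checked numerically. Below is a minimal sketch using `scipy.optimize.minimize` with the SLSQP method; the inputs and the risk level $\sigma_0^2$ are the same made-up numbers as above, and the equal-weight starting point is an arbitrary choice:
```python
import numpy as np
from scipy.optimize import minimize

# Same hypothetical inputs as in the first sketch
mu = np.array([0.10, 0.07, 0.03])
Sigma = np.array([[0.040, 0.006, 0.002], [0.006, 0.025, 0.004], [0.002, 0.004, 0.010]])
sigma0_sq = 0.02                              # fixed risk level sigma_0^2

constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},              # w^T 1 = 1
    {"type": "eq", "fun": lambda w: w @ Sigma @ w - sigma0_sq},  # w^T Sigma w = sigma_0^2
]

# Maximize mu^T w by minimizing its negative
res = minimize(lambda w: -(mu @ w), x0=np.ones(3) / 3, method="SLSQP", constraints=constraints)

print(res.x, mu @ res.x, res.x @ Sigma @ res.x)  # weights, achieved return, achieved risk
```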
To solve the above problem, we can use Lagrange multipliers to form a new objective function:
$$
\max_{\mathbf{w}, \lambda_1, \lambda_2} J(\mathbf{w}, \lambda_1, \lambda_2)=\boldsymbol{\mu}^T \mathbf{w} + \lambda_1(\mathbf{1}^T \mathbf{w}-1 )+\lambda_2(\mathbf{w}^T \Sigma \mathbf{w}-\sigma_0^2),
$$
where $\mathbf{1}=[1, \dots, 1]^T$. By taking the gradient and setting it to zero, we have
$$
\left\{
\begin{array}{l}
\boldsymbol{\mu}+\mathbf{1}\lambda_1+2\Sigma \mathbf{w} \lambda_2=\mathbf{0}\\
\mathbf{1}^T \mathbf{w}=1\\
\mathbf{w}^T \Sigma \mathbf{w}=\sigma_0^2
\end{array}
\right.
$$
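This is a square system: $n$ stationarity equations plus the two constraints in the $n+2$ unknowns $(\mathbf{w}, \lambda_1, \lambda_2)$, so a generic root finder can solve it directly. Below is a minimal sketch with the same hypothetical inputs as above; the use of `scipy.optimize.fsolve`, the initial guess, and the sign choice $\lambda_2<0$ (to steer toward the return-maximizing branch) are assumptions of this illustration:
```python
import numpy as np
from scipy.optimize import fsolve

# Same hypothetical inputs as in the earlier sketches
mu = np.array([0.10, 0.07, 0.03])
Sigma = np.array([[0.040, 0.006, 0.002], [0.006, 0.025, 0.004], [0.002, 0.004, 0.010]])
sigma0_sq = 0.02
n = len(mu)

def first_order_system(z):
    """Stack the n stationarity equations and the two constraints."""
    w, lam1, lam2 = z[:n], z[n], z[n + 1]
    stationarity = mu + lam1 * np.ones(n) + 2.0 * lam2 * (Sigma @ w)
    budget = w.sum() - 1.0
    risk = w @ Sigma @ w - sigma0_sq
    return np.concatenate([stationarity, [budget, risk]])

# Initial guess: equal weights, lambda_2 < 0; the system also has a minimum-return
# stationary point, so the starting point matters.
z0 = np.concatenate([np.ones(n) / n, [0.0, -1.0]])
z = fsolve(first_order_system, z0)
w_star, lam1_star, lam2_star = z[:n], z[n], z[n + 1]
print(w_star, lam1_star, lam2_star)
```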
Pre-multiplying the first (stationarity) equation of this system by $\mathbf{w}^T$ and using the two constraints, we have
$$
\mathbf{w}^T \boldsymbol{\mu}+\mathbf{w}^T\mathbf{1}\lambda_1+2\mathbf{w}^T\Sigma \mathbf{w}\lambda_2=0
\Rightarrow
\mathbf{w}^T \boldsymbol{\mu}+\lambda_1+2\sigma_0^2\lambda_2=0
\Rightarrow
\lambda_1=-\mathbf{w}^T \boldsymbol{\mu}-2\sigma_0^2\lambda_2
$$
Plugging this expression into the first equation (and noting that $\mathbf{1}(\mathbf{w}^T\boldsymbol{\mu})=\mathbf{1}\boldsymbol{\mu}^T\mathbf{w}$, since $\mathbf{w}^T\boldsymbol{\mu}$ is a scalar):
$$
\boldsymbol{\mu}+\mathbf{1}(-\mathbf{w}^T \boldsymbol{\mu}-2\sigma_0^2\lambda_2)+2\Sigma \mathbf{w} \lambda_2=\mathbf{0}
\Rightarrow
2\lambda_2\Sigma \mathbf{w} - \mathbf{1}\boldsymbol{\mu}^T\mathbf{w} = 2\sigma_0^2\lambda_2\mathbf{1}-\boldsymbol{\mu}
\\
\Rightarrow
(2\lambda_2\Sigma - \mathbf{1}\boldsymbol{\mu}^T)\mathbf{w} = 2\sigma_0^2\lambda_2\mathbf{1}-\boldsymbol{\mu}
\Rightarrow
\mathbf{w} = (2\lambda_2\Sigma - \mathbf{1}\boldsymbol{\mu}^T)^{-1}(2\sigma_0^2\lambda_2\mathbf{1}-\boldsymbol{\mu})
$$
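As code, this expression is a single linear solve once $\lambda_2$ is known. The sketch below (same hypothetical inputs as before) evaluates $\mathbf{w}(\lambda_2)$ for an arbitrary trial value of $\lambda_2$ and reports how far it is from satisfying the budget constraint, which is exactly the condition used next to pin down $\lambda_2$:
```python
import numpy as np

# Same hypothetical inputs as in the earlier sketches
mu = np.array([0.10, 0.07, 0.03])
Sigma = np.array([[0.040, 0.006, 0.002], [0.006, 0.025, 0.004], [0.002, 0.004, 0.010]])
sigma0_sq = 0.02
ones = np.ones(len(mu))

def w_of_lambda2(lam2):
    """Evaluate w = (2*lam2*Sigma - 1 mu^T)^{-1} (2*sigma0^2*lam2*1 - mu)."""
    M = 2.0 * lam2 * Sigma - np.outer(ones, mu)
    return np.linalg.solve(M, 2.0 * sigma0_sq * lam2 * ones - mu)

w_trial = w_of_lambda2(-1.0)        # arbitrary trial value of lambda_2
print(w_trial.sum() - 1.0)          # budget residual 1^T w - 1 (nonzero for a wrong lambda_2)
```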
Thus $\mathbf{w}$ is expressed in terms of $\lambda_2$ alone. Imposing the remaining condition $\mathbf{1}^T \mathbf{w}=1$ then determines $\lambda_2$, and with it the optimal $\mathbf{w}$. (Pre-multiplying $(2\lambda_2\Sigma-\mathbf{1}\boldsymbol{\mu}^T)\mathbf{w}=2\sigma_0^2\lambda_2\mathbf{1}-\boldsymbol{\mu}$ by $\mathbf{w}^T$ shows that once $\mathbf{1}^T\mathbf{w}=1$ holds and $\lambda_2\neq 0$, the risk constraint $\mathbf{w}^T\Sigma\mathbf{w}=\sigma_0^2$ is satisfied automatically.)
$$
\mathbf{1}^T\mathbf{w}=1
\Rightarrow \mathbf{1}^T(2\lambda_2\Sigma - \mathbf{1}\boldsymbol{\mu}^T)^{-1}(2\sigma_0^2\lambda_2\mathbf{1}-\boldsymbol{\mu})=1,
$$
which is a single scalar equation in the one unknown $\lambda_2$; the inverse can be expanded with the Sherman–Morrison formula, $(A-\mathbf{1}\boldsymbol{\mu}^T)^{-1}=A^{-1}+\frac{A^{-1}\mathbf{1}\boldsymbol{\mu}^T A^{-1}}{1-\boldsymbol{\mu}^T A^{-1}\mathbf{1}}$ with $A=2\lambda_2\Sigma$.
$$
\Rightarrow \lambda_2=\,?
$$
(To be continued.)
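In the meantime, $\lambda_2$ can be obtained numerically: $\mathbf{1}^T\mathbf{w}(\lambda_2)-1$ is a scalar function of a single variable, so a one-dimensional root finder is enough. Below is a minimal sketch reusing `w_of_lambda2` and the hypothetical inputs from the sketch above; the bracketing interval (and the choice of the negative, return-maximizing root) is an assumption that may need adjusting for other inputs:
```python
from scipy.optimize import brentq

def budget_residual(lam2):
    """1^T w(lambda_2) - 1, which vanishes at the correct multiplier."""
    return w_of_lambda2(lam2).sum() - 1.0

# Bracket chosen by inspection for these inputs; widen or shift it if the residual
# does not change sign between the endpoints.
lam2 = brentq(budget_residual, -50.0, -0.5)
w_opt = w_of_lambda2(lam2)

print(lam2, w_opt)
print(mu @ w_opt, w_opt @ Sigma @ w_opt)   # achieved return, and risk (should equal sigma0_sq)
```
If everything is consistent, the weights found here should agree with those from the SLSQP and `fsolve` sketches earlier.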