## 3.9 Maxima and Minima

Recall the following general optimisation problem from last week.

:::success
**Optimisation Problem**
Let $D \subseteq \mathbb R^n$. Given a function $f :D \to \mathbb R$, the general problem of finding the value that minimises $f$ over $D$ is formulated as
$$\min_{x \in D} f(x).$$
In this context, $f$ is the **objective function** and $D$ is the **constraint set**. The input values at which the function is minimised are denoted by
$${\arg \min}_{x \in D} f(x)\,.$$
:::

This week, we will focus on some elementary analytic techniques to solve this problem. Before we look at functions of several variables, let's recall the following from single variable calculus.

:::success
**First Derivative Test**
Suppose $f$ is differentiable on an interval $(a,b)$ and has a **local extremum** at $c \in (a,b)$. Then $f'(c)=0$.
:::

:::success
**Second Derivative Test**
Suppose $f''$ is continuous near $c$.
(a) If $f'(c) = 0$ and $f'' (c) > 0$, then $f$ has a **local minimum** at $c$.
(b) If $f'(c) = 0$ and $f'' (c) < 0$, then $f$ has a **local maximum** at $c$.
:::

:::success
To find the **global maximum and minimum values** of a continuous function $f$ on a closed interval $[a, b]$:
1. Find the values of $f$ at the critical points of $f$ in $(a, b)$.
2. Find the values of $f$ at the end points of the interval.
3. The largest is the global maximum value; the smallest is the global minimum value.
:::

<br>

**Example 1**
Find the maximum and the minimum values of $f(x)=x^3-3x$ on $[0,3]$.

:::spoiler Answer
Note that $f'(x) = 3x^2-3 = 0 \implies x=\pm 1$. Only one critical point, $x=1$, lies inside $(0,3)$, and $f''(1)=6>0$. So, $x=1$ is a local minimum with $f(1)=-2$.
We should also check the function values at $x=0$ and $x=3$: $f(0)=0$ and $f(3)=18$. So, $\min_{[0,3]} f = -2$ and $\max_{[0,3]} f = 18.$
![image](https://hackmd.io/_uploads/ryvqQmUzkg.png)
:::

<br>

Now, we will generalise these ideas to higher dimensions.

### 3.9.1 Local Maxima/Minima

:::info
**Definition**
Let $f$ be a function of $n$ variables that is defined and continuous on an open set containing the point $(a_1,\dots,a_n)$. Then $f$ has a **local minimum** (also called a **relative minimum**) at $(a_1,\dots,a_n)$ if
$$f(a_1,\dots,a_n)\leq f(x_1,\dots,x_n)$$
for all points $(x_1,\dots,x_n)$ within some disc centred at $(a_1,\dots,a_n)$. The number $f(a_1,\dots,a_n)$ is called a **local minimum value**.
If the inequality holds for every point $(x_1,\dots,x_n) \in Domain(f)$, then $f$ has a **global minimum** (also called an **absolute minimum**) at $(a_1,\dots,a_n)$.
:::

We define a **local/global maximum** in the same way with the inequality reversed.

![image](https://hackmd.io/_uploads/ryr2PmUzke.png)

<br>

:::info
**Definition**
Let $f$ be a function of $n$ variables that is defined and continuous on an open disc centred at $(a_1,\dots,a_n)$. Then $(a_1,\dots,a_n)$ is a **critical point** of $f$ if either
$$f_{x_j}(a_1,\dots,a_n)=0\,\,\,\text{for all}\,\,\, j=1,\dots,n\,,$$
or there is a $j$ such that $f_{x_j}$ does not exist at $(a_1,\dots,a_n)$.
:::

Now, we are ready to state the first result.

:::success
**Theorem (First Derivative Test)**
Suppose $f$ is a function of $n$ variables that is defined and continuous on an open disc centred at $(a_1,\dots,a_n)$. If $f$ has a **local extremum** at $(a_1,\dots,a_n)$, then $(a_1,\dots,a_n)$ is a critical point of $f$.
:::
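For a concrete function, the candidate points can also be found with a computer algebra system by solving the system $\nabla f = \vec 0$. Below is a minimal sketch using the `sympy` library; the function $f(x,y)=x^3-3x+y^2$ and all names are our own illustrative choices, not part of the notes.

```python
import sympy as sp

# Illustrative function (our own choice): f(x, y) = x^3 - 3x + y^2.
x, y = sp.symbols('x y', real=True)
f = x**3 - 3*x + y**2

# Critical points: solve f_x = 0 and f_y = 0 simultaneously.
gradient = [sp.diff(f, v) for v in (x, y)]
print(sp.solve(gradient, [x, y], dict=True))
# Expected output: [{x: -1, y: 0}, {x: 1, y: 0}]
```

The first derivative test only tells us that such points are *candidates* for local extrema; classifying them requires second derivative information, which we turn to next.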
However, not all critical points are local extrema!!! For example, there are **saddle points**: in certain directions they look like valleys, and in other directions they look like hills.

**Example 2**
Show that the only critical point of $f(x,y)=y^2-x^2$ is $(0,0)$ and that it is neither a local minimum nor a local maximum.

:::spoiler Answer
Note that $f$ has continuous partial derivatives everywhere on $\mathbb R^2$, given by $f_x = -2x$ and $f_y = 2y$. Therefore, all critical points of $f$ are given by $f_x=-2x=0$ and $f_y=2y=0$. So, the only critical point of $f$ is $(0,0)$.
To see that $(0,0)$ is not a local maximum, note that $f|_{x=0} = y^2$ has a minimum at $(0,0)$. To see that $(0,0)$ is not a local minimum, note that $f|_{y=0} = -x^2$ has a maximum at $(0,0)$.
We can see this graphically too.
![image](https://hackmd.io/_uploads/Sym2BIwG1x.png)
:::

<br>

So, we should be able to determine the nature of critical points. The next theorem provides such a classification.

:::success
**Theorem (Second Derivative Test)**
Suppose $f$ is a function of $n$ variables with continuous second order partial derivatives in a disc centred at a *critical point* $(a_1,\dots,a_n)$ of $f$, and let
$$H_f(a_1,\dots,a_n):=\Big[f_{x_ix_j }(a_1,\dots,a_n)\Big]_{i,j=1,\dots,n}$$
denote the **Hessian matrix** at $(a_1,\dots,a_n)$. If all the eigenvalues of $H_f (a_1,\dots,a_n)$
1. are positive, then $f$ has a local minimum at $(a_1,\dots,a_n)$;
2. are negative, then $f$ has a local maximum at $(a_1,\dots,a_n)$.

If $H_f (a_1,\dots,a_n)$ has both positive and negative eigenvalues, then $f$ has a saddle point at $(a_1,\dots,a_n)$. Otherwise, if $H_f(a_1,\dots,a_n)$ is singular (at least one eigenvalue is zero), then the test is inconclusive.
:::

<br>

- In the $n=1$ case, this reduces to the second derivative test we already know.
- In the $n=2$ case, we have an easier way to state this because
$$\det(H_f) = \lambda_1 \lambda_2$$
is the product of its two eigenvalues $\lambda_1$ and $\lambda_2$.
- Note that if $f(x,y)$ has continuous second order partial derivatives, then
$$H_f=\begin{bmatrix} f_{xx} & f_{xy}\\ f_{yx} & f_{yy} \end{bmatrix}$$
and $\det(H_f)=f_{xx}f_{yy} - f_{xy}f_{yx}=f_{xx}f_{yy} - (f_{xy})^2\,.$

:::success
**Corollary**
Suppose $(a_1,a_2)$ is a critical point of $f$.
1. If $\det(H_f(a_1,a_2))=0$, then $H_f(a_1,a_2)$ is singular and the test is inconclusive.
2. If $\det(H_f(a_1,a_2))<0$, then $H_f(a_1,a_2)$ has two eigenvalues of opposite sign. So, $(a_1,a_2)$ is a saddle point.
3. If $\det(H_f(a_1,a_2))>0$, then $H_f(a_1,a_2)$ has two eigenvalues of the same sign, their sum is $f_{xx}+f_{yy}$, and $f_{xx}f_{yy}>(f_{xy})^2\geq 0$ (so $f_{xx}$ and $f_{yy}$ have the same sign). So,
    - the two eigenvalues are positive, and hence $(a_1,a_2)$ is a local minimum, if $f_{xx}>0$ (or equivalently, if $f_{yy}>0$);
    - the two eigenvalues are negative, and hence $(a_1,a_2)$ is a local maximum, if $f_{xx}<0$ (or equivalently, if $f_{yy}<0$).
:::

<br>
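The corollary can be checked mechanically for Example 2. The following is a minimal sketch (our own illustration, again using `sympy`) that builds the Hessian and classifies the critical point $(0,0)$ of $f(x,y)=y^2-x^2$.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = y**2 - x**2                 # the function from Example 2

H = sp.hessian(f, (x, y))       # matrix of second order partial derivatives
H0 = H.subs({x: 0, y: 0})       # Hessian at the critical point (0, 0)

print(H0)                       # Matrix([[-2, 0], [0, 2]])
print(H0.det())                 # -4 < 0, so (0, 0) is a saddle point
print(H0.eigenvals())           # one negative and one positive eigenvalue
```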
**Example 3**
Let $\alpha>0$. Verify that
$$f(x,y,z) = x^2 + y^2 + z^2 - 2\alpha xyz\,$$
has critical points at $(\alpha^{-1},\alpha^{-1},\alpha^{-1})$ and $(0,0,0)$. Determine the nature of these critical points.

:::spoiler Answer
We note that $f$ has continuous second partial derivatives on $\mathbb R^3$. First, we have to check whether both points satisfy $f_x=0$, $f_y=0$ and $f_z=0$. So, we compute
$$f_x(x,y,z) = 2x-2\alpha yz\,,\,\,\,f_y(x,y,z)=2y-2\alpha xz\,,\,\,\,f_z(x,y,z)=2z-2\alpha xy\,.$$
It is easy to see that $f_x(0,0,0)=f_y(0,0,0)=f_z(0,0,0)=0$. Also, note that
$$f_x(\alpha^{-1},\alpha^{-1},\alpha^{-1}) = 2 \alpha^{-1}-2\alpha\alpha^{-1}\alpha^{-1} = 2\alpha^{-1}-2\alpha^{-1} =0\,.$$
By symmetry, $f_{y}(\alpha^{-1},\alpha^{-1},\alpha^{-1}) =f_z (\alpha^{-1},\alpha^{-1},\alpha^{-1}) =0\,$ as required. Hence, both $(\alpha^{-1},\alpha^{-1},\alpha^{-1})$ and $(0,0,0)$ are critical points of $f$.
Next, to determine their nature, we compute the Hessian:
$$H_f(x,y,z) = \begin{bmatrix} f_{xx} & f_{xy} & f_{xz}\\ f_{xy} & f_{yy} & f_{yz}\\ f_{xz} & f_{yz} & f_{zz} \end{bmatrix} = \begin{bmatrix} 2 & - 2\alpha z & - 2\alpha y\\ - 2\alpha z & 2 & - 2 \alpha x \\ - 2 \alpha y & - 2 \alpha x & 2 \end{bmatrix}\,.$$
Therefore,
$$H_f(0,0,0) = \begin{bmatrix} 2 & 0 & 0\\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}\,$$
with eigenvalues $2,2,2$. So, $(0,0,0)$ is a local minimum for all $\alpha>0$. Similarly,
$$H_f(\alpha^{-1},\alpha^{-1},\alpha^{-1}) = \begin{bmatrix} 2 & -2 & -2\\ -2 & 2 & -2 \\ -2 & -2 & 2 \end{bmatrix}\,$$
with eigenvalues $4,4,-2$. So, $(\alpha^{-1},\alpha^{-1},\alpha^{-1})$ is a saddle point for all $\alpha>0$.
:::

<br>
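For $n\geq 3$ the classification rests on the eigenvalues of the Hessian rather than on a single determinant. As a quick numerical cross-check of Example 3, here is a minimal sketch using `numpy` (our own illustration; note that $\alpha$ has already cancelled out of both matrices).

```python
import numpy as np

# Hessians from Example 3 (the parameter alpha no longer appears in the entries).
H_origin = np.array([[2.0, 0.0, 0.0],
                     [0.0, 2.0, 0.0],
                     [0.0, 0.0, 2.0]])
H_symmetric_point = np.array([[ 2.0, -2.0, -2.0],
                              [-2.0,  2.0, -2.0],
                              [-2.0, -2.0,  2.0]])

print(np.linalg.eigvalsh(H_origin))           # [2. 2. 2.]  -> local minimum
print(np.linalg.eigvalsh(H_symmetric_point))  # [-2. 4. 4.] -> saddle point
```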
**Example 4**
Find all local extrema of $f(x,y) = x^4 +y^4 - 4xy +1$.

:::spoiler Answer
Since $f$ has partial derivatives everywhere, all of its critical points must satisfy $f_x(x,y)=4x^3-4y=0$ and $f_y(x,y)=4y^3-4x=0$. Therefore, $x^3=y$ and $y^3=x$. This gives us $x^9-x=x(x^8-1)=x(x-1)(x+1)(x^2+1)(x^4+1)=0$. So, the critical points are $(0,0), (1,1)$ and $(-1,-1)$.
To classify these points we calculate $\det(H_f)$ at each critical point:
$$ \det(H_f) = \begin{vmatrix} f_{xx} & f_{xy}\\ f_{yx} & f_{yy} \end{vmatrix}=f_{xx} f_{yy} - f^2_{xy}=144x^2y^2-16.$$
$\det(H_f)(1,1)=\det(H_f)(-1,-1)=128>0$ and $f_{xx}(1,1)=f_{xx}(-1,-1)=12>0$. So, both $(1,1)$ and $(-1,-1)$ are local minima. $\det(H_f)(0,0)=-16<0$. So, $(0,0)$ is a saddle point.
![image](https://hackmd.io/_uploads/SkvtMLwGye.png =300x300)![image](https://hackmd.io/_uploads/HkIaGIDGJx.png =300x300)
:::

<br>

Note that in this case the local minimum value of the function is also its global minimum value over the whole domain.

### 3.9.2 Global Maxima/Minima

Below, we discuss a theorem that gives sufficient conditions for the existence of global extrema. Before we state the theorem, we need to define what **closed sets** and **bounded sets** in $\mathbb R^n$ are.

- A bounded set in $\mathbb R^n$ is a set that is contained entirely in a disc of finite radius.
- A closed set in $\mathbb R^n$ is a set whose boundary belongs to the set.

![sets](https://hackmd.io/_uploads/HJH02DDMJe.png =400x350)

<br>

:::success
**Theorem**
Let $D \subset \mathbb R^n$ be a closed and bounded set and $f:D \to \mathbb R$ be a continuous function. Then $f$ attains a global maximum and a global minimum on $D$.
:::

<br>

Now, we can give a general strategy to determine these global extrema.

:::success
To find global extrema,
1. Find critical points in the interior of $D$ and the values of $f$ at those critical points.
2. Find the extreme values that occur on the boundary of $D$.
3. Compare all of those values for the largest and smallest values.
:::

**Example 5**
A company has developed a profit model that depends on the number $x$ of golf balls sold per month (measured in thousands), and the number of hours per month of advertising $y$, according to the function
$$z=f(x,y)=48x+96y-x^2-2xy-9y^2$$
where $z$ is measured in ten-thousands of RMB. The maximum number of golf balls that can be produced and sold is $50{,}000$, and the maximum number of hours of advertising that can be purchased is $25$. Find the values of $x$ and $y$ that maximise profit, and find the maximum profit.

:::spoiler Answer
Note that the domain of the function is the closed rectangular region $D$ given by $0 \leq x \leq 50$ and $0 \leq y \leq 25$.
![download](https://hackmd.io/_uploads/ByyuluwGke.gif)
Because the function is continuous on $D$, it has a global maximum. To find it, we first determine the critical points from
\begin{align*} f_x(x,y)&=48-2x-2y=0 \\ f_y(x,y)&=96-2x-18y=0. \end{align*}
This gives us that $(21,3)$ is the only critical point, and $f(21,3)=648$. Next, we look at the boundary lines.
1. On $x=0$, $f(0,y)=96y-9y^2,\,\,\,0 \leq y \leq 25$. Solving $$f'(0,y)=96-18y=0 \iff y=16/3$$ and $f(0,16/3)=256.$ Also, $f(0,0)=0$ and $f(0,25)=-3225.$
2. On $x=50$, $f(50,y)=-100-4y-9y^2,\,\,\,0 \leq y \leq 25$. Note that $$f'(50,y)=-4-18y<0\,,\,\,\,0 \leq y \leq 25\,.$$ So, the function is decreasing and its maximum on this edge is $f(50,0)=-100.$
3. On $y=0$, $f(x,0)=48x-x^2,\,\,\,0 \leq x \leq 50$. Note that $$f'(x,0)=48-2x=0 \iff x=24$$ and $f(24,0)=576$. Also, $f(0,0)=0$ and $f(50,0)=-100$.
4. On $y=25$, $f(x,25)=-3225-2x-x^2,\,\,\,0 \leq x \leq 50$. Note that $$f'(x,25)=-2-2x<0\,,\,\,\,0 \leq x \leq 50\,.$$ So, the function is decreasing and its maximum on this edge is $f(0,25)=-3225.$

Comparing all the values, we conclude that the global maximum is $648$, which occurs at $(21,3)$. Therefore, a maximum profit of $6{,}480{,}000$ RMB is realised when $21{,}000$ golf balls are sold and $3$ hours of advertising are purchased per month.
:::

<br>
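Because $D$ is a closed and bounded rectangle, the answer above can also be sanity-checked by brute force: evaluate $f$ on a fine grid over $D$ and pick the largest value. A minimal sketch using `numpy` (our own illustration, not part of the notes):

```python
import numpy as np

def profit(x, y):
    return 48*x + 96*y - x**2 - 2*x*y - 9*y**2

# Evaluate the profit on a fine grid over the closed rectangle D = [0, 50] x [0, 25].
xs = np.linspace(0, 50, 501)
ys = np.linspace(0, 25, 251)
X, Y = np.meshgrid(xs, ys)
Z = profit(X, Y)

i = np.unravel_index(np.argmax(Z), Z.shape)
print(X[i], Y[i], Z[i])   # approximately (21, 3) with profit 648
```

Of course, a grid search only approximates the maximiser; the analytic argument above is what locates it exactly.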
There are other existence theorems for global extrema. For example, **convexity** of functions leads to global extrema, but we will not discuss this in this course. In fact, there is a dedicated field of mathematics called *convex optimisation*. Non-convexity of loss functions makes matters worse for practitioners. If you are interested in learning more, follow [this link](https://congma1028.github.io/Teaching/STAT37797/lectures.html) or [this link](https://arxiv.org/pdf/1712.07897) to read about *non-convex optimisation* in Data Science and Machine Learning.

---

**References**

1. *Chapter 14.7* : Stewart, J. (2012). Calculus (8th ed.). Boston: Cengage Learning.
2. *Chapter 13.6* : Strang, G. Calculus (3rd ed.). [https://ocw.mit.edu/](https://ocw.mit.edu/courses/res-18-001-calculus-fall-2023/pages/textbook/)
3. *Chapter 2.5* : Corral, M. (2021). Vector Calculus. [https://www.mecmath.net/](https://www.mecmath.net/)

---

<br>

## 3.10 Constrained optimisation

Consider a more general optimisation problem of the following form.

:::success
**Constrained Optimisation Problem I**
Let $D \subseteq \mathbb R^n$, $f :D \to \mathbb R$ and $g :D \to \mathbb R$.
$$\max_{x \in D} f(x)\,\,\,\text{subject to}\,\,\, g(x)=k\,.$$
In this context, $g(x)=k$ is called a *constraint*.
:::

<br>

![image](https://hackmd.io/_uploads/rJvoEib71x.png)

Let's try to solve a constrained optimisation problem using what we have already discussed.

**Example 6**
What is the largest volume of a box if the total surface area is $24\, cm^2$?

:::spoiler Answer
We have to maximise $V = xyz$ subject to $2xy+2yz+2zx=24$, where $x,y,z$ are positive. Solving the constraint for $z$ gives $z=\dfrac{12-xy}{x+y}$, so we have to maximise
$$V(x,y)=\frac{xy(12-xy)}{x+y}\,.$$
Note that
$$V_x=-\frac{y^2 (-12 + x^2 + 2 x y)}{(x + y)^2}=0\,\,\,\text{and}\,\,\,V_y=-\frac{x^2 (-12 + y^2 + 2 x y)}{(x + y)^2}=0$$
is equivalent to
$$ x^2 + 2 x y = 12\,\,\,\text{and}\,\,\, y^2 + 2 x y =12\,.$$
Subtracting gives $x^2=y^2$, so $x=y$ and $3x^2=12$. Hence, the only critical point is $x=y=2$. Note that by the physical nature of the problem, there should be an absolute maximum, and it should occur at a critical point. So, the maximum volume is
$$2 \times 2 \times 2 = 8\,cm^3,$$
and this happens when the box is a cube!!!
:::

<br>

Next, we will introduce a general method called **Lagrange multipliers** to solve constrained optimisation problems.

### 3.10.1 Method of Lagrange Multipliers

The method of Lagrange multipliers can be summarised as follows.

:::success
Suppose $f$ and $g$ are functions of $n$ variables with continuous first partial derivatives on an open disc containing the level set $g(x)=k$, and $\nabla g \neq \vec 0$ at every point of that level set. Then to find the maxima and minima of $f(x)$ subject to $g(x)=k$:
1. Find the values $x \in \mathbb R^n$ such that $\nabla f (x)= \lambda \nabla g(x)$ and $g(x)=k$.
2. Evaluate $f$ at all these points.
3. The largest value will be the maximum and the smallest value will be the minimum (provided they exist).
:::

The intuition behind this is as follows. $\nabla f$ is the direction you need to move in to increase the value of $f$, and $\nabla g$ is the direction you cannot move in, because that is the direction off the hypersurface $g(x)=k$ (and you need to stay on that surface). When these two directions are parallel, no movement along the surface can improve the value.

<br>

**Example 7**
What is the largest volume of a box if the total surface area is $24\, cm^2$?

:::spoiler Answer
We maximise $V = xyz$ subject to $g=2xy+2yz+2zx=24$, where $x,y,z$ are positive. We have to find points such that $(V_x,V_y,V_z) = \lambda (g_x,g_y,g_z)$ and $g=24$. That is,
$$yz=\lambda(2y+2z)\,,\,\,\,xz=\lambda(2x+2z)\,,\,\,\,xy=\lambda(2x+2y)\,,\,\,\,xy+yz+zx=12\,.$$
Note that $\lambda \ne 0$, because $\lambda=0$ would imply $xy=yz=zx=0$, which is impossible as $g=24$. Next, note that
$$xyz=2\lambda(xy+zx)=2\lambda(xy+yz)=2\lambda(xz+yz)\,.$$
This gives us $xy=yz=xz=4 \implies x=y=z=2\,$ and $\nabla g (2,2,2)\neq \vec{0}\,.$ (We could have simply noted that the equations are symmetric in $x, y,$ and $z$ and hence, $x=y=z$.) The nature of the problem tells us that there should be a maximum, and hence, the maximum volume is $V=8\, cm^3.$
:::

<br>

**Example 8**
Find the extreme values of $f(x,y)=x^2+2y^2$ on $\mathbb R^2$ subject to $g(x,y)=x^2+y^2=1$.

:::spoiler Answer
We have to find points such that $(f_x,f_y) = \lambda (g_x,g_y)$ and $g=1$. That is,
$$2x=2\lambda x\,,\,\,\,4y=2\lambda y\,,\,\,\,x^2+y^2=1\,.$$
From the first equation, $x=0$ or $\lambda = 1$. If $x=0$ then $y = \pm 1$ (from the third equation). If $\lambda =1$ then $y=0$ from the second equation and $x = \pm 1$ (from the third equation). So, we have to consider $(0,1), (0,-1), (1,0)$ and $(-1,0)$. Note that $\nabla g = (2x,2y)$ is not $\vec{0}$ at any of these points. So, we can evaluate $f$ and pick the largest and smallest values.

| Point | $f$ value | Nature |
| ----------- |:---------:| ------ |
| $(0,\pm 1)$ | $2$ | maxima |
| $(\pm1,0)$ | $1$ | minima |

![image](https://hackmd.io/_uploads/H1oy9hW7ye.png)
In fact, one could write $f(x,y)=1+y^2$ provided $x^2+y^2=1$ (implying $-1\leq y \leq 1$) and arrive at the same conclusion easily.
:::

<br>
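Since the Lagrange equations in Example 8 are polynomial, they can also be handed to a computer algebra system. A minimal sketch using `sympy` (our own illustration, not part of the notes):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x**2 + 2*y**2
g = x**2 + y**2

# Lagrange equations grad f = lambda * grad g, together with the constraint g = 1.
equations = [sp.diff(f, x) - lam*sp.diff(g, x),
             sp.diff(f, y) - lam*sp.diff(g, y),
             g - 1]
solutions = sp.solve(equations, [x, y, lam], dict=True)
print(solutions)                        # the four candidate points found above
print([f.subs(s) for s in solutions])   # their f values: 1 at (+-1, 0), 2 at (0, +-1)
```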
The method can be generalised to the case of multiple constraints.

:::success
Suppose $f$ and $g_j\,,$ $j=1,\dots,d\,,$ are functions of $n \geq 3$ variables with continuous first partial derivatives on an open disc containing the level sets $g_j(x)=k_j\,,$ $j=1,\dots,d\,,$ and $\nabla g_j\,,$ $j=1,\dots,d\,,$ are *linearly independent*. Then to find the maxima and minima of $f(x)$ subject to $g_j(x)=k_j\,,\,\,\,j=1,\dots,d$:
1. Find $x \in \mathbb R^n$ such that $\nabla f (x)= \sum_{j=1}^d\lambda_j \nabla g_j(x)$ and $g_j(x)=k_j$ for all $j$.
2. Evaluate $f$ at all these points.
3. The largest value will be the maximum and the smallest value will be the minimum (provided they exist).
:::

<br>

**Example 9**
What is the largest volume of a box if the total surface area is $24\, cm^2$ and the sum of length, width, and height is $6$?

:::spoiler Answer
We maximise $V = xyz$ subject to $g=2xy+2yz+2zx=24$ and $h=x+y+z=6$, where $x,y,z$ are positive. We have to find points such that $(V_x,V_y,V_z) = \lambda_1 (g_x,g_y,g_z)+\lambda_2(h_x,h_y,h_z)$, $g=24$ and $h=6$. That is,
$$yz=\lambda_1(2y+2z)+\lambda_2\,,\,\,\,xz=\lambda_1(2x+2z)+\lambda_2\,,\,\,\,xy=\lambda_1(2x+2y)+\lambda_2\,,$$
$xy+yz+zx=12$ and $x+y+z=6$. By the symmetry of the system, we look for a solution with $x=y=z$. The last two equations then give $x=y=z=2$, so the maximum volume is again $2 \times 2 \times 2 = 8\,cm^3$.
:::

<br>

Finally, let's look at an example where we **cannot** apply the method of Lagrange multipliers directly.

**Example 10**
Find the minimum of $f(x,y)=x$ subject to $g(x,y)=x^3-y^2=0$.

:::spoiler Answer
Note that $\nabla f = (1,0)$ and $\nabla g = (3x^2,-2y)$. So, if $\nabla f = \lambda \nabla g$, then $3\lambda x^2 = 1$ and $-2\lambda y = 0$. The second equation gives $y=0$ (since $\lambda = 0$ would contradict the first equation). Since $g(x,y)=x^3-y^2=0$, we then have $x=0$. But that means $3\lambda x^2 = 0$, so the first equation cannot hold for any $\lambda$! So, the equations in the method of Lagrange multipliers have no solutions.
However, $x^3=y^2 \geq 0$, and therefore, $f(x,y)=x \geq 0$; since $g(0,0)=0$, the solution to the constrained minimisation problem is $0$, attained at $(0,0)$.
:::
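The failure in Example 10 can be reproduced symbolically: handing the Lagrange equations to `sympy` returns an empty solution set (a minimal sketch, our own illustration).

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x
g = x**3 - y**2

equations = [sp.diff(f, x) - lam*sp.diff(g, x),   # 1 - 3*lambda*x**2 = 0
             sp.diff(f, y) - lam*sp.diff(g, y),   # 2*lambda*y = 0
             g]                                   # constraint x**3 - y**2 = 0
print(sp.solve(equations, [x, y, lam], dict=True))   # [] : no solutions
```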
<br>

In the last example, we notice that $\nabla g (0,0) = (0,0) = \vec 0$! So, the condition that $\nabla g \neq \vec 0$ is important for the method to work. In the two-dimensional case, the method of Lagrange multipliers can be extended as follows.

:::success
A maximum or minimum of $f(x,y)$ on the curve $g(x,y) = k$ is EITHER
1. a solution of the *Lagrange equations*:
$$\nabla f (x,y)= \lambda \nabla g(x,y)\,\,\,\text{&}\,\,\, g(x,y)=k\,,$$
OR
2. a critical point of $g$ on the curve $g(x,y)=k$.
:::

<br>

- One can extend the method of Lagrange multipliers to constraints with inequalities. Strang discusses this at the end of Chapter 13.7 in [2]; please have a look if you are interested.
- One direct application of the method of Lagrange multipliers is data density estimation using *Gaussian Mixture Models*. See [this link](https://mbernste.github.io/posts/gmm_em/) or [this link](https://towardsdatascience.com/gaussian-mixture-models-explained-6986aaf5a95) for a discussion of this application.

---

**References**

1. *Chapter 14.8* : Stewart, J. (2012). Calculus (8th ed.). Boston: Cengage Learning.
2. *Chapter 13.7* : Strang, G. Calculus (3rd ed.). [https://ocw.mit.edu/](https://ocw.mit.edu/courses/res-18-001-calculus-fall-2023/pages/textbook/)
3. *Chapter 2.7* : Corral, M. (2021). Vector Calculus. [https://www.mecmath.net/](https://www.mecmath.net/)

---

<br>

:::danger
**Summary**: Now, we can
- Define critical points of functions.
- Determine local extrema of functions with continuous partial derivatives.
- State sufficient conditions for the existence of global extrema.
- Determine global extrema of differentiable functions on closed and bounded sets.
- Use Lagrange multipliers to determine extrema under equality constraints.
- State limitations of the optimisation methods discussed.
:::