# Numerical Optimization
## Not matched to any week
duality theorem
interior point methods
Lagrange multipliers
https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions
Armijo
Newton method
## Week 1
### Lecture notes
http://www.math.uni.wroc.pl/~p-wyk4/opt2023/n1.pdf
Introduction to linear programs, see [MG] Chapters 1, 2.2.
### Problems
LPs, graphical methods, guesswork
equational form
integer program relaxation
weighted vertex cover relaxation
cutting paper rolls
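For machine-checking graphical-method answers, a minimal sketch using `scipy.optimize.linprog` (my example LP, not one from the problem set):

```python
from scipy.optimize import linprog

# maximize x + y  s.t.  x + 2y <= 4,  3x + y <= 6,  x, y >= 0
# linprog minimizes, so we negate the objective.
res = linprog(c=[-1, -1],
              A_ub=[[1, 2], [3, 1]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimum (1.6, 1.2) with value 2.8
```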
## Week 2
### Lecture notes
Linear programming: basic feasible solutions, optimal solutions. [MG] Chapters 2.4, 2.7, 4.1, 4.2
bfs
polytope
brute-force (iterating over vertices?)
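A minimal sketch of the brute-force idea (my code, assuming an LP $\max c^T x$ s.t. $Ax = b$, $x \ge 0$ in equational form with full-row-rank $A$): iterate over all candidate bases, solve for the basic variables, and keep the best feasible one.

```python
# Brute-force over all basic feasible solutions of
#   max c^T x  s.t.  Ax = b, x >= 0   (A is m x n, full row rank).
# Tries all C(n, m) bases, so exponential -- for intuition only.
from itertools import combinations
import numpy as np

def brute_force_lp(A, b, c):
    m, n = A.shape
    best_x, best_val = None, -np.inf
    for B in combinations(range(n), m):          # candidate basis
        A_B = A[:, B]
        if abs(np.linalg.det(A_B)) < 1e-12:      # singular -> not a basis
            continue
        x_B = np.linalg.solve(A_B, b)
        if np.any(x_B < -1e-9):                  # basic but not feasible
            continue
        x = np.zeros(n)
        x[list(B)] = x_B
        if c @ x > best_val:
            best_x, best_val = x, c @ x
    return best_x, best_val

# Example: max x1 + x2 s.t. x1 + 2x2 + s1 = 4, 3x1 + x2 + s2 = 6
A = np.array([[1., 2., 1., 0.], [3., 1., 0., 1.]])
b = np.array([4., 6.]); c = np.array([1., 1., 0., 0.])
print(brute_force_lp(A, b, c))  # x = (1.6, 1.2, 0, 0), value 2.8
```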

### Problems
## Week 3
### Lecture notes
Simplex method, including pivot rules and exceptional cases. [MG] Chapter 5
### Problems
simplex method: pivot steps, ..
## Week 4
### Lecture notes
Convexity of sets and functions. Notes. See also Boyd and Vandenberghe, Section 2.1.5.
### Problems
We consider LPs in equational form (the nonnegativity constraints $x \ge 0$ remain inequalities, of course) for the bfs/simplex method. We assume the rows of $A$ are independent (otherwise we can delete a row)
and that $Ax = b$ has at least one solution (otherwise it is easy to determine that the program is infeasible).
A feasible solution $x \in \mathbb{R}^n$ is *basic* if there is an $m$-element set $B \subseteq \{1, 2, \ldots, n\}$ such that
* the square matrix $A_B$ is nonsingular, i.e. the columns indexed by $B$ are linearly independent,
* $x_j = 0$ for all $j \notin B$.
a bfs must itself be feasible :) (e.g. no negative components)
bfs does not depend on the objective
if optimal solution exists, then there is also an optimal bfs
for a polytope, a point is a vertex iff it is a bfs
simplex method will find out whether the LP is unbounded

Finding bfs:

simplex method example: https://www.math.colostate.edu/~adams/teaching/math510fall2020/LinearProgrammingNotes.pdf page 46/146
Pivoting rules (57/146 Adams), a toy implementation after this list:
* Dantzig's rule (largest coefficient): maximizes the improvement of the objective per unit increase of the entering variable
* largest increase of the objective
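A hedged sketch (my code, not the course's): tableau simplex with Dantzig's rule for $\max c^T x$ s.t. $Ax \le b$, $x \ge 0$ with $b \ge 0$, so the all-slack basis is feasible and no phase I is needed. Note that Dantzig's rule can cycle on degenerate problems; Bland's rule avoids that.

```python
import numpy as np

def simplex(A, b, c, tol=1e-9):
    m, n = A.shape
    # tableau: rows = constraints, last row = reduced costs / objective
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
    T[-1, :n] = -c                              # maximize c^T x
    basis = list(range(n, n + m))               # start from the slack basis
    while True:
        j = int(np.argmin(T[-1, :-1]))          # Dantzig: most negative reduced cost
        if T[-1, j] >= -tol:
            break                               # optimal
        ratios = [T[i, -1] / T[i, j] if T[i, j] > tol else np.inf
                  for i in range(m)]            # ratio test for the leaving row
        i = int(np.argmin(ratios))
        if ratios[i] == np.inf:
            raise ValueError("LP is unbounded")
        T[i] /= T[i, j]                         # pivot on (i, j)
        for k in range(m + 1):
            if k != i:
                T[k] -= T[k, j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    for i, bi in enumerate(basis):
        x[bi] = T[i, -1]
    return x[:n], T[-1, -1]

A = np.array([[1., 2.], [3., 1.]]); b = np.array([4., 6.]); c = np.array([1., 1.])
print(simplex(A, b, c))   # expected optimum (1.6, 1.2) with value 2.8
```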
set convexity
concavity
midpoint-convexity
optimality is no harder than feasibility (TODO)
Maximum flow 112/146
https://theory.stanford.edu/~trevisan/cs261/lecture15.pdf
Minimum cut 114/146 https://www.math.colostate.edu/~adams/teaching/math510fall2020/LinearProgrammingNotes.pdf
https://www.cs.jhu.edu/~mdinitz/classes/ApproxAlgorithms/Spring2015/notes/lec16.pdf
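For reference, the max-flow LP from the linked notes can be written as (my transcription of the standard formulation; its LP dual is the fractional min-cut relaxation, which is integral by max-flow/min-cut):
$$
\begin{aligned}
\max \ & \textstyle\sum_{e \text{ out of } s} f_e - \sum_{e \text{ into } s} f_e \\
\text{s.t. } \ & \textstyle\sum_{e \text{ into } v} f_e = \sum_{e \text{ out of } v} f_e \quad \forall v \neq s, t \\
& 0 \le f_e \le c_e \quad \forall e
\end{aligned}
$$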
Optimal transport
120/146 Adams
Paper rolls 18/146 Adams
## Week 5
### Lecture notes
[Lecture 5](http://www.math.uni.wroc.pl/~p-wyk4/opt2023/n5.pdf)
### Problems
???
## Week 6
### Lecture notes
[Lecture 6](http://www.math.uni.wroc.pl/~p-wyk4/opt2023/n6.pdf)
### Problems
Cholesky method
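A minimal sketch of the factorization (standard Cholesky–Banachiewicz loop; assumption: this matches the lecture's variant only up to notation):

```python
import numpy as np

def cholesky(A):
    """Return lower-triangular L with A = L @ L.T for symmetric positive definite A."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i + 1):
            s = A[i, j] - L[i, :j] @ L[j, :j]   # subtract already-computed terms
            if i == j:
                if s <= 0:
                    raise ValueError("matrix is not positive definite")
                L[i, j] = np.sqrt(s)
            else:
                L[i, j] = s / L[j, j]
    return L

A = np.array([[4., 2.], [2., 3.]])
L = cholesky(A)
print(np.allclose(L @ L.T, A))  # True
```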
## Week 7
### Lecture notes
[Lecture 7](http://www.math.uni.wroc.pl/~p-wyk4/opt2023/n7.pdf)
A convex function $f: \mathbb{R} \to \mathbb{R}$ is self-concordant iff $$|f'''(x)| \leq 2 f''(x)^{3/2}.$$
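A standard sanity-check example (mine): $f(x) = -\log x$ on $x > 0$ satisfies the definition with equality, since
$$f''(x) = \frac{1}{x^2}, \qquad f'''(x) = -\frac{2}{x^3}, \qquad |f'''(x)| = \frac{2}{x^3} = 2\left(f''(x)\right)^{3/2}.$$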
### Problems
self-concordance
## Week 8
### Lecture notes
[Lecture 8](http://www.math.uni.wroc.pl/~p-wyk4/opt2023/n8.pdf)
conjugate gradient
quasi-Newton equation
Gram-Schmidt procedure
steepest descent
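A hedged sketch of conjugate gradient for $Ax = b$ with symmetric positive definite $A$ (standard Hestenes–Stiefel recursion; notation may differ from the lecture):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x          # residual = negative gradient of (1/2)x^T A x - b^T x
    p = r.copy()           # first search direction = steepest descent direction
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)        # exact line search along p
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)  # makes p A-conjugate to previous directions
        p = r_new + beta * p
        r = r_new
    return x

A = np.array([[4., 1.], [1., 3.]]); b = np.array([1., 2.])
print(conjugate_gradient(A, b))  # ~ [0.0909, 0.6364]
```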
### Problems
## Week 9
### Lecture notes
[Lecture 9](http://www.math.uni.wroc.pl/~p-wyk4/opt2023/n9.pdf)
BFGS
LBFGS
Broyden family
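A minimal usage sketch (assumption: SciPy's built-in implementations stand in for the update formulas from the lecture), minimizing the Rosenbrock function with BFGS and L-BFGS:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0])
for method in ("BFGS", "L-BFGS-B"):
    res = minimize(rosen, x0, jac=rosen_der, method=method)
    print(method, res.x, res.nit)   # both should converge to (1, 1)
```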
### Problems
## Week 10
### Lecture notes
Nocedal-Wright, Numerical Optimization, Chapter 12, Sections 12.1 to 12.3.
constrained optimization problems
KKT enumeration
### Problems
LICQ constraint qualification
KKT conditions
LICQ-KKT relation
KKT for LPs
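For the LP $\min c^T x$ s.t. $Ax = b$, $x \ge 0$, the KKT conditions specialize to (standard form, cf. Nocedal–Wright):
$$
\begin{aligned}
A^T \lambda + s &= c && \text{(stationarity)} \\
Ax &= b, \quad x \ge 0 && \text{(primal feasibility)} \\
s &\ge 0 && \text{(dual feasibility)} \\
x_i s_i &= 0 \quad \forall i && \text{(complementary slackness)}
\end{aligned}
$$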
## Week 11
### Lecture notes
### Problems
## Week 12
Duality
### Lecture notes
| Original linear program: | In the dual: |
|:--------------------------- |:------------------------ |
| maximize | minimize |
| $\max\ c^T x$ | $\min\ b^T y$ |
| $m$ constraints | $m$ variables |
| $n$ variables | $n$ constraints |
| the $i$-th constraint is $\le$ | $y_i \ge 0$ |
| the $i$-th constraint is $\ge$ | $y_i \le 0$ |
| the $i$-th constraint is $=$ | $y_i \in \mathbb{R}$ |
| $x_j \ge 0$ | the $j$-th constraint is $\ge$ |
| $x_j \le 0$ | the $j$-th constraint is $\le$ |
| $x_j \in \mathbb{R}$ | the $j$-th constraint is $=$ |
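A worked example of reading the table off (my example): for the primal $\max x_1 + x_2$ s.t. $x_1 + 2x_2 \le 4$, $3x_1 + x_2 \le 6$, $x \ge 0$, the dual is $\min 4y_1 + 6y_2$ s.t. $y_1 + 3y_2 \ge 1$, $2y_1 + y_2 \ge 1$, $y \ge 0$; both have optimal value $14/5$, as strong duality promises.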
The Karush–Kuhn–Tucker (KKT) conditions are first-derivative tests (sometimes called first-order necessary conditions) for a solution of a nonlinear program to be optimal, provided that some regularity conditions are satisfied. // constrained optimization
* gradient of the Lagrangian is zero (necessary for optimality)
  * i.e. $\nabla f(x) + u\, \nabla g(x) = 0$
  * if $x$ is optimal then this holds, but this condition alone does not suffice for optimality (the conditions below are needed too)
* $x$ is feasible
  * i.e. $g(x) \le 0$
* $u \ge 0$
* $u\, g(x) = 0$
  * the $u\,g$ term should not affect the objective value of the original optimization problem
**Interior point method**: solve a modified KKT system, KKT(t), that is much easier to solve.
For KKT(t), $u$ can be computed immediately.
$\min f(x) - t \log(-g(x))$
The minimizer of this automatically satisfies KKT(t): if feasibility were violated, the log term would give $+\infty$, and $u$ comes out automatically positive.
Newton's method can be used.
$t$ controls how strongly we are penalized for approaching the boundary of the feasible region; when the barrier's weight is small it barely affects the objective, but the numerical properties are then poor for e.g. Newton's method, because the log term blows up near the boundary.
We don't want the log term to dominate the objective; in that limit we would get the analytic center of the feasible region (as if the original objective did not exist).
Close to the optimal solution, Newton's method is super fast.
The central path is the curve of minimizers $x(t)$.
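A hedged 1-D sketch of the barrier idea (my toy example, not the lecture's): minimize $f(x) = (x-2)^2$ subject to $g(x) = x - 1 \le 0$, whose true optimum is $x^* = 1$; Newton steps on $F_t(x) = f(x) - t\log(-g(x))$ trace the central path $x(t)$ as $t \to 0$.

```python
def newton_on_barrier(x, t, iters=50):
    for _ in range(iters):
        # F_t'(x)  = 2(x-2) + t/(1-x),    F_t''(x) = 2 + t/(1-x)^2
        g1 = 2*(x - 2) + t/(1 - x)
        g2 = 2 + t/(1 - x)**2
        step = g1 / g2
        while x - step >= 1:       # crude damping to stay strictly feasible (x < 1)
            step /= 2
        x -= step
    return x

x, t = 0.0, 1.0
while t > 1e-8:
    x = newton_on_barrier(x, t)    # point x(t) on the central path
    t /= 10
print(x)  # -> close to 1.0
```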
Duality for convex problems
Lagrangian
Weak duality for convex problems
Conditions when strong duality holds
primal-dual methods (central path)
Neighbourhood $N_{-\infty}(\gamma)$
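In Nocedal–Wright's notation (my transcription), with duality measure $\mu = x^T s / n$ and $\mathcal{F}^0$ the strictly feasible primal-dual points:
$$\mathcal{N}_{-\infty}(\gamma) = \{(x, \lambda, s) \in \mathcal{F}^0 : x_i s_i \ge \gamma \mu \ \ \forall i\}, \qquad \gamma \in (0, 1].$$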
### Problems
verify KKT conditions
Determine the central path C and draw it.
## Week 13
2023-05-31?
### Lecture notes
Legendre transform
proximal gradient algorithm: take a gradient descent step on the smooth part, then apply the proximal operator (e.g. a projection) to get back to the domain
thresholding
optimality condition of the algorithm ($z = (x - \lambda \mathbf{1})_+$?)
convergence (the same rate as for steepest descent)
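A hedged sketch of the proximal gradient method (ISTA) for the lasso $\min_x \tfrac12\|Ax-b\|_2^2 + \lambda\|x\|_1$, where the prox of the $\ell_1$ term is soft thresholding, the same $(\cdot - \lambda\mathbf{1})_+$ pattern as above (my code and example data):

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)       # gradient step on the smooth part...
        x = soft_threshold(x - grad / L, lam / L)  # ...then prox step on the l1 part
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10); x_true[:3] = [3., -2., 1.]
b = A @ x_true
print(np.round(ista(A, b, lam=0.1), 2))  # approximately sparse, close to x_true
```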
### Problems
## Week 14
### Lecture notes
http://www.math.uni.wroc.pl/~p-wyk4/opt2023/n14.pdf
### Problems
http://www.math.uni.wroc.pl/~p-wyk4/opt2023/ap.pdf
proximal operator
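The standard definition (my transcription):
$$\operatorname{prox}_{\lambda f}(v) = \arg\min_x \left( f(x) + \frac{1}{2\lambda}\|x - v\|_2^2 \right);$$
for $f = \|\cdot\|_1$ this is exactly soft thresholding.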
## Week 15
Test