Linear Programming: A Case Study on the Simplex Algorithm

Bryten Ives, Dawn Myers, Matt Oakes

---

# **Introduction**

This project takes a deep dive into the Simplex Algorithm, an important aspect of linear programming. Linear programming is an optimization method that returns an optimal solution to a problem whose objective and constraints are represented by linear relationships. The Simplex Algorithm evaluates the vertices of a feasible region to determine the optimal value of an objective function. Any problem that can be reduced to linear equations can make use of the simplex algorithm to find, for example, a shortest path or a minimum cost.

This paper will outline the history of the method, and the mathematics and theory behind how it works. Next, we will detail a two-dimensional example by hand, explain how to generalize to $n$ dimensions, and discuss limitations of the method. Finally, we will walk through an implementation of the method in Python code, along with a couple of toy examples.

# **History**

Leonid Kantorovich, a Russian mathematician, is considered a founder of linear programming. Linear programming problems were widely used during World War II, in applications such as transportation, scheduling, and shipping, in order to reduce costs to the Army and best allocate limited resources. George Bernard Dantzig was a member of the US Air Force during the war, and developed the Simplex Algorithm to solve these optimization problems and "plan" resources for the Air Force. The method was developed in about a year's time, and it is still a very popular algorithm for solving linear programs. His methods were used particularly for resource management with respect to training, logistics, and fielding. After the war, linear programming was widely adopted in industry for daily planning, the management of resources, and the maximization of profits.

# **Mathematics**

**i. Foundational Linear Algebra:**

To apply the simplex algorithm, one must have a foundational understanding of linear algebra. This entails performing elementary row operations on matrices and forming elementary matrices, denoted $E$, from the identity matrix, denoted $I$. Recall that elementary matrices are invertible, meaning each can be multiplied by its inverse to obtain $I$.

The foundation for our method is solving a system of equations using matrices. Take a matrix of coefficients $A$ and a right-hand-side vector $b$. We want to find values for the variables, $x$, that satisfy the system of equations. We set up our system in the matrix form:
\begin{align*} Ax=b \end{align*}
To solve for $x$, we form an augmented matrix of the form:
\begin{align*}[A|b]\end{align*}
and perform elementary row operations until we obtain:
\begin{align*}[I|\hat{b}]\end{align*}
where $x=\hat{b}$ is the solution to the system.
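To make the row-reduction step concrete, here is a minimal sketch in NumPy (the helper name `gauss_jordan` is our own, not a library routine); it assumes $A$ is square and invertible and that the pivots encountered in order are nonzero, so no row swaps are needed:

```python=
import numpy as np

def gauss_jordan(A, b):
    """Solve Ax = b by reducing the augmented matrix [A|b] to [I|b_hat]."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = A.shape[0]
    for p in range(n):
        M[p, :] /= M[p, p]                    # scale the pivot row so the pivot is 1
        for i in range(n):
            if i != p:
                M[i, :] -= M[i, p] * M[p, :]  # eliminate column p from the other rows
    return M[:, -1]                           # b_hat, i.e. the solution x

A = np.array([[2, 1], [1, 3]])
b = np.array([5, 10])
print(gauss_jordan(A, b))   # [1. 3.]
```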
**ii. Canonical Form:**

By definition, a linear system $Ax=b$ with $\operatorname{rank}(A)=m$ is in *canonical form* if, among the $n$ variables, there are $m$ variables with the property that each appears in only one equation, and its coefficient in that equation is unity. A system can be transformed into canonical form by a sequence of elementary row operations. The canonical representation of $Ax=b$ is:
\begin{align*}
x_1 + y_{1,m+1}x_{m+1}+\cdots+y_{1n}x_n = y_{10} \\
x_2 + y_{2,m+1}x_{m+1}+\cdots+y_{2n}x_n = y_{20} \\
\vdots \\
x_m + y_{m,m+1}x_{m+1}+\cdots+y_{mn}x_n = y_{m0}
\end{align*}
This canonical form can be represented in matrix notation by:
\begin{align*}[I_m, Y_{m,n-m}]x = y_0\end{align*}
The variables $x_1,\ldots,x_m$ are basic variables, corresponding to basic columns of $A$, and can be assumed to be the first $m$ variables in the system. Thus, we can deduce the basic solution to be:
\begin{align*}
x_1 = y_{10} \\
\vdots \\
x_m = y_{m0} \\
x_{m+1} = 0 \\
\vdots \\
x_n = 0
\end{align*}
So, given a system of equations $Ax=b$, we can translate it into the canonical augmented form:
$$[I_m, Y_{m,n-m}, y_0] = \begin{bmatrix} 1 & 0 & \cdots & 0 & y_{1,m+1} & \cdots & y_{1n} & y_{10} \\ 0 & 1 & \cdots & 0 & y_{2,m+1} & \cdots & y_{2n} & y_{20} \\ \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & y_{m,m+1} & \cdots & y_{mn} & y_{m0} \end{bmatrix}$$
This implies that $b = y_{10}a_1 + y_{20}a_2 + \cdots + y_{m0}a_m$, which is to say that the column $\begin{bmatrix} y_{10} \\ \vdots \\ y_{m0} \end{bmatrix}$ is the coordinate vector of $b$ with respect to the basis $\lbrace a_1,\ldots,a_m\rbrace$. For columns $m < j \leq n$ we can likewise deduce that $a_j = y_{1j}a_1 + y_{2j}a_2 + \cdots + y_{mj}a_m$.

**iii. Pivot Equations:**

The canonical augmented matrix of a given system can be updated using pivot equations. The update replaces a basic variable by a nonbasic variable and finds the new canonical matrix for the new basis. For example, take the basis vector $a_p$, where $1 \leq p \leq m$. We will detail how to replace it by another vector $a_q$, where $m < q \leq n$, by first finding the coordinates of the vectors $a_1,\ldots,a_n$ with respect to the new basis. These coordinates form the entries of the updated canonical augmented matrix, denoted $y'_{ij}$, and are given by the formulas:
\begin{align*}
(1) \quad y'_{ij} = y_{ij}-\frac{y_{pj}}{y_{pq}}y_{iq} \quad \text{for } i\neq p \\
(2) \quad y'_{pj} = \frac{y_{pj}}{y_{pq}}
\end{align*}
These two equations are the pivot equations, with $y_{pq}$ being the pivot element, about which the matrix is pivoted.
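To make the pivot equations concrete, here is a minimal sketch (the helper name `pivot_about` and the numbers are our own illustration; NumPy indices are 0-based where the text above is 1-based):

```python=
import numpy as np

def pivot_about(Y, p, q):
    """Pivot the array Y about element (p, q) using the pivot equations:
    equation (2) rescales row p so the pivot element becomes 1, and
    equation (1) subtracts y_iq times the new row p from every other row."""
    Y = Y.astype(float).copy()
    Y[p, :] /= Y[p, q]
    for i in range(Y.shape[0]):
        if i != p:
            Y[i, :] -= Y[i, q] * Y[p, :]
    return Y

# A 2x5 canonical augmented matrix [I_2, Y, y_0]; bring column q = 2 into
# the basis in place of the basic column of row p = 1.
Y = np.array([[1., 0., 2., 1., 4.],
              [0., 1., 3., 2., 6.]])
print(pivot_about(Y, p=1, q=2))
```

After the pivot, column $q$ has become a standard basis vector, so its variable has replaced the old basic variable of row $p$.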
**iv. Linear Programs:**

First, let us define a few elements of a basic linear program. We have *decision variables*, the quantities to be determined; *objective functions*, which represent how each decision variable affects the value to be optimized; and *constraints*, which represent how each decision variable uses the limited resources. Lastly, the *data* quantify the relationships between the objective function and the constraints. Below are two visual examples:

![](https://i.imgur.com/34f6wm7.png)
*(Figure 1) A visual representation of a three-dimensional polytope, where the path of the simplex algorithm is traced - from a starting vertex, along the edges, to an optimal solution vertex.*

![](https://i.imgur.com/YCO1EyS.png)
*(Figure 2) A visual representation of a three-dimensional polytope, where the path of the simplex algorithm is traced in red - from a starting vertex, along the edges, to an optimal solution vertex.*

Decision variables are denoted by $x_1,x_2,\ldots,x_n$. A point $(x_1,x_2,\ldots,x_n)$ that satisfies all the constraints is called a *feasible point*, and the solution to the linear program is a point in the feasible region. This feasible region, defined by the constraints applied to our objective function, is represented geometrically by a *polytope*. The simplex algorithm begins at a vertex of this polytope and moves along its edges until it reaches an optimal solution vertex where the objective function is maximized.

To start, the linear program must be in *standard form*: a function to be optimized, all variables non-negative, all remaining constraints expressed as equality constraints, and a non-negative right-hand-side vector $b$. Before the inequalities are converted, a maximization problem can be expressed in matrix form as:
$$\max\lbrace c^Tx \mid Ax \leq b,\ x \geq 0\rbrace$$
The linear program can then be converted into *augmented form* to apply the simplex method. In other words, we convert the linear inequalities (constraints) into linear equations through the use of *slack* variables. A slack variable, denoted $s_1$, measures the amount of unused resource in the linear program and must be non-negative. The modified constraints and the objective function obtained from introducing the slack variables can be written as an augmented matrix, called the *tableau*:
\begin{align*}\begin{bmatrix} 1 & -c^T & 0 \\ 0 & A & b \end{bmatrix}\end{align*}
which must be formed before the simplex algorithm can begin.
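As a quick sketch of assembling this tableau as a block matrix (using the data of the two-dimensional example introduced below, before slack variables are added):

```python=
import numpy as np

# Build the initial tableau [[1, -c^T, 0], [0, A, b]] from its blocks.
c = np.array([4., 5.])
A = np.array([[1., 1.],
              [1., 3.],
              [2., 1.]])
b = np.array([8., 18., 14.])

m, n = A.shape
top = np.hstack([[1.], -c, [0.]])                          # objective row
body = np.hstack([np.zeros((m, 1)), A, b.reshape(-1, 1)])  # constraint rows
tableau = np.vstack([top, body])
print(tableau)
```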
**v. Simplex Method**

Steps of the simplex algorithm:

1. Form a canonical augmented matrix corresponding to an initial basic feasible solution.
2. Calculate the reduced cost coefficients corresponding to the nonbasic variables.
3. If $r_{j} \geq 0$ for all $j$, stop---the current basic feasible solution is optimal.
4. Select a $q$ such that $r_{q} < 0$.
5. If no $y_{iq} > 0$, stop---the problem is unbounded; else, calculate $p = \arg\min_{i} \{ y_{i0}/y_{iq} : y_{iq} > 0\}$. (If more than one index $i$ minimizes $y_{i0}/y_{iq}$, we let $p$ be the smallest such index.)
6. Update the canonical augmented matrix by pivoting about the ($p,q$)th element.
7. Go to step 2.

# **2-Dimensional Example**

**i. Example 15.3**

A manufacturer produces two different products, $X_{1}$ and $X_{2}$, using three machines: $M_{1}, M_{2}, M_{3}$. Each machine can be used for only a limited amount of time. Production times of each product on each machine are given in the table below. The objective is to maximize the combined time of utilization of all three machines. Every production decision must satisfy constraints on available time. In particular,
\begin{align*}
x_{1} + x_{2} \leq 8\\
x_{1} + 3x_{2} \leq 18\\
2x_{1} + x_{2} \leq 14\\
x_{1},x_{2} \geq 0.
\end{align*}
where $x_{1}$ and $x_{2}$ are production levels. The combined production time of all three machines is:
$$f(x_{1}, x_{2}) = 4x_{1} + 5x_{2}$$

| Machine | $X_1$ (Hours/Unit) | $X_2$ (Hours/Unit) | Available Time (Hours) |
|:-------:|:------------------:|:------------------:|:----------------------:|
| $M_1$   | 1                  | 1                  | 8                      |
| $M_2$   | 1                  | 3                  | 18                     |
| $M_3$   | 2                  | 1                  | 14                     |
| Total   | 4                  | 5                  |                        |

**ii. Graphical Solution**

![](https://i.imgur.com/V3QdTzz.png)
*(Figure 3) Graphical representation of the feasible region*

Because we are solving a two-dimensional problem with only three constraints, we can solve this problem graphically. The function and constraints are illustrated by the graph. We are trying to find a point within or on the boundary of the shaded region: the region defined by the three constraint lines and the coordinate axes, since we are looking for non-negative solutions. A point within or on the boundary of the shaded region is feasible, meaning it satisfies all the constraints.

We initially set the function equal to 0, which gives a line through the origin. The objective function is often called the isoprofit line for maximization problems (or the isocost line for minimization problems). As the graph shows, setting our function equal to zero places it at the lower bound of our feasible region. We are looking for a point or points that maximize our function. To solve this graphically, we draw parallel translates of this line until we find one that just touches the boundary of the feasible region. It took a few moments on the online Desmos graphing calculator (desmos.com) to find the line $x_{2} = \frac{-4}{5}x_{1} + \frac{37}{5}$. We can see from the graph that its point of contact is $(3,5)$. Plugging $x_{1}=3, x_{2} = 5$ into our function, we find that the maximum combined time of all three machines is 37 hours.

Before we run this problem through the simplex method, we note one more piece of information from our graph: the intersections of the constraint lines. These are called extreme points and are of special importance in the simplex method. Starting from the $x_2$-axis, the points are $(0,6)$, $(3,5)$, $(6,2)$, and $(7,0)$, along with the origin. Any linear program that has an optimal solution has an extreme point that is optimal. This is a very important result because it greatly reduces the number of points that may be optimal solutions to the linear program. The entire feasible region contains an infinite number of points, yet it has only five extreme points that can be the optimal solution. Once you find all the extreme points, you can simply plug in your values and determine which point or points lead to the maximum or minimum value of the function, as the following sketch shows.
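A quick check in plain Python, with the extreme points read off the graph above:

```python=
# Evaluate f(x1, x2) = 4*x1 + 5*x2 at each extreme point of the feasible
# region; an optimal solution of an LP is always attained at one of them.
extreme_points = [(0, 0), (0, 6), (3, 5), (6, 2), (7, 0)]
for x1, x2 in extreme_points:
    print((x1, x2), 4 * x1 + 5 * x2)   # (3, 5) attains the maximum, 37
```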
**iii. Algebraic Solution**

One might try to solve this problem algebraically. You can solve for $x_{2}$ in constraint (1), plug that into constraint (2) to find $x_{1}$, and vice versa. However, if you plug the resulting point $(3,5)$ into constraint (3) as an equality, you will see that the three constraint lines do not meet at a single point. We do not need a point at the intersection of all the constraint lines; we merely need a solution within, or rather bounded by, the feasible region. But how do we know that $(3,5)$ is the optimal solution?

**iv. Simplex Method Solution**

We will now show how to solve this simple 2D problem using the simplex method. Consider a linear programming problem in the standard form:

minimize
\begin{align*} \textbf{c}^{T} \textbf{x} \end{align*}
subject to
\begin{align*} \bf{Ax = b}\\ \bf{x} \geq 0 \end{align*}

In our example problem,
$$ A = \begin{bmatrix} 1&1\\ 1&3\\ 2&1 \end{bmatrix}, \quad b = \begin{bmatrix} 8\\ 18\\ 14 \end{bmatrix}, \quad c^{T} = \begin{bmatrix} 4&5 \end{bmatrix} $$

To begin, we transform our example problem into standard form so the simplex method can be applied. If the LP (linear programming) problem is a maximization problem, we multiply $z$ by $(-1)$ to create a minimization problem. We adjust the inequality constraints by adding slack variables or subtracting surplus variables: slack is the leftover of a resource, surplus is the excess of production. Our problem has less-than inequalities, so we add a slack variable for each one. We now have $n = 5$ columns.

minimize
\begin{align*} z = -4x_{1} - 5x_{2} - 0x_{3} - 0x_{4} - 0x_{5} \end{align*}
subject to
\begin{align*}
x_{1} + x_{2} + x_{3} = 8\\
x_{1} + 3x_{2} + x_{4} = 18\\
2x_{1} + x_{2} + x_{5} = 14\\
x_{1},x_{2},x_{3},x_{4},x_{5} \geq 0.
\end{align*}

**Step 1**

Form a canonical augmented matrix corresponding to an initial basic feasible solution, with the last row being $\bf{c}^{T}$, the reduced cost coefficients of our function $z$.

\begin{matrix} a_{1}&a_{2}&a_{3}&a_{4}&a_{5}&b\\ 1&1&1&0&0&8\\ 1&3&0&1&0&18\\ 2&1&0&0&1&14\\ -4&-5&0&0&0&0 \end{matrix}

This tableau is already in canonical form with respect to the basis $\lbrace a_{3}, a_{4}, a_{5} \rbrace$. The initial basic feasible solution to the problem in standard form is
$$\textbf{x} = (0,0,8,18,14)^{T}$$
The value of the objective function corresponding to this basic feasible solution is $z = 0$.

**Step 2**

We calculate the reduced cost coefficients corresponding to the nonbasic variables. Looking at the values along the bottom row, we see that the reduced cost coefficients associated with the nonbasic variables $x_{1}$ and $x_{2}$ are negative.

**Step 3**

Since some $r_{j} < 0$, the current basic feasible solution is not optimal, and we proceed to step 4.

**Step 4**

We find the most negative reduced cost coefficient and see that we want to bring $\textbf{a}_{2}$, i.e. $q = 2$, into the basis. To determine which row $p$ will be our pivot, we compute the ratios $y_{i0}/y_{iq}$ of the $b$ column to the column of the most negative reduced cost coefficient. In our problem we look at three ratios:
$$\dfrac{8}{1}, \dfrac{18}{3}, \dfrac{14}{1}$$
The lowest ratio occurs in row 2, so we pivot about the $(2,2)$ element of our tableau. In the second tableau, we have only one negative reduced cost coefficient ($-\dfrac{7}{3}$), so we bring in $a_{1}$, $q = 1$; the lowest ratio again selects the pivot row, and we pivot about the $(1,1)$th element. We use Gauss-Jordan elimination to complete these elementary row operations.

$$ \begin{matrix} a_{1}&a_{2}&a_{3}&a_{4}&a_{5}&b&Ratio\\ 1&1&1&0&0&8&8\\ 1&3&0&1&0&18&6\\ 2&1&0&0&1&14&14\\ -4&-5&0&0&0&0& \end{matrix} \sim \begin{matrix} a_{1}&a_{2}&a_{3}&a_{4}&a_{5}&b&Ratio\\ \frac{2}{3}&0&1&-\frac{1}{3}&0&2&3\\ \frac{1}{3}&1&0&\frac{1}{3}&0&6&18\\ \frac{5}{3}&0&0&-\frac{1}{3}&1&8&\frac{24}{5}\\ -\frac{7}{3}&0&0&\frac{5}{3}&0&30& \end{matrix} $$

\begin{matrix} a_{1}&a_{2}&a_{3}&a_{4}&a_{5}&b\\ 1&0&\frac{3}{2}&-\frac{1}{2}&0&3\\ 0&1&-\frac{1}{2}&\frac{1}{2}&0&5\\ 0&0&-\frac{5}{2}&\frac{1}{2}&1&3\\ 0&0&\frac{7}{2}&\frac{1}{2}&0&37 \end{matrix}

**Step 5**

Step 5 tells us that if no entry of column $q$ (the column with the most negative reduced cost coefficient) is positive, we stop: the problem is unbounded. That is not the case here.

**Step 6**

We update the canonical augmented matrix by pivoting about the $(p,q)$th element chosen in step 4.

**Step 7**

We return to step 2 once, updating column $a_{1}$. After this, the basic feasible solution is optimal, as all $r_{j} \geq 0$ (the last row of the tableau has no negative elements). The vector $\textbf{x} = (3,5,0,0,0)^{T}$ is optimal. The solution to the original problem is therefore $x_{1} = 3$, $x_{2} = 5$, and the objective function value is 37.
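To double-check the hand computation, here is a brief sketch that replays both pivots on the full tableau (cost row included) with NumPy, reusing the `pivot_about` helper defined in the pivot-equations sketch above; note the indices are 0-based:

```python=
import numpy as np

# The initial tableau: rows 0-2 are the constraints, row 3 is the cost row.
T = np.array([[ 1.,  1., 1., 0., 0.,  8.],
              [ 1.,  3., 0., 1., 0., 18.],
              [ 2.,  1., 0., 0., 1., 14.],
              [-4., -5., 0., 0., 0.,  0.]])
T = pivot_about(T, p=1, q=1)   # first pivot: the (2,2) element, bringing in a2
T = pivot_about(T, p=0, q=0)   # second pivot: the (1,1) element, bringing in a1
print(T)   # last column reads 3, 5, 3, 37, i.e. x1 = 3, x2 = 5, z = 37
```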
**v. Time Complexity and Limitations**

Two problems can occur with the simplex method: degenerate solutions and cycling. Degenerate solutions occur when one of the basic variables in the basic feasible solution is zero, and can result in pivots for which there is no improvement in the objective value. Cycling is the term for when the same set of basic variables occurs twice, at which point the algorithm can loop forever. Cycling is rare, and implementing Bland's rule (always choose the eligible entering and leaving variables of lowest index) prevents it. Bland's rule is rarely used in practice, however, because of the time it takes to converge.

Experimental results have shown that the simplex algorithm devised by Dantzig averages as few as $2m$ iterations, with $m$ being the number of equations (Thie and Keough, Appendix C). "The simplex method is very efficient in practice, generally taking $2m$ to $3m$ iterations at most (where $m$ is the number of equality constraints) and converging in expected polynomial time..." ("Simplex Method," Wolfram MathWorld). Worst-case scenarios have been constructed showing that the complexity of the simplex method as formulated by Dantzig is exponential. Nevertheless, in our research we found that although newer algorithms have been introduced to compete with the simplex method, it is still widely used.

# **Implementation of Simplex Method and Examples**

## **Revised Simplex Method**

Although the simplex method can easily be performed by hand, for very large linear programs it becomes apparent that implementing the algorithm in code is far more efficient in practice. With efficiency in mind, we would like to avoid excess computation. When computing an iteration of the standard simplex method, we are required to update the entire tableau even though we do not use all of the updated columns to calculate the new basic feasible solution. The *revised simplex method* eliminates operations on the columns of $A$ that do not enter the basis and therefore saves computation time.

To get a feel for the revised simplex algorithm, we will walk through an iteration. Let $B$ be the matrix composed of the columns of $A$ that correspond to the current basis, and let $D$ be the matrix composed of the nonbasic columns of $A$. Computing the current basic feasible solution does not require us to compute $B^{-1}D$; we only require the change-of-basis matrix $B^{-1}$. All we need to track are the current basic variables, given by the revised tableau $[B^{-1}, B^{-1}b]$. Compared to the standard simplex tableau of size $m \times (n+1)$, the revised simplex method keeps a tableau of size only $m \times (m+1)$.

Suppose we choose an incoming basic column of $A$, denoted $a_{q}$. Let $y_{q}=B^{-1}a_{q}$, $y_{0}=B^{-1}b$, and $p=\arg\min_{i}\{y_{i0}/y_{iq} : y_{iq} > 0\}$. To update the revised tableau, we form the augmented tableau $[B^{-1}, y_{0}, y_{q}]$ and pivot about the $p$th element of the last column. The pivot can be represented as an elementary matrix $E_{k+1}$, where the pivot operation amounts to multiplying the augmented tableau on the left by $E_{k+1}$, yielding $[E_{k+1}B^{-1}, E_{k+1}y_{0}, e_{p}]$ (since $E_{k+1}y_{q} = e_{p}$). Here $e_{p}$ is the column vector of zeros with a one in the $p$th row.

Here is a summary of the revised simplex method:

1. Form a revised tableau corresponding to an initial basic feasible solution.
2. Calculate the current reduced cost coefficients vector via $$ r_{D}^T = c_{D}^T - \lambda^{T}D $$ where $\lambda^T = c_{B}^TB^{-1}$ (see the sketch after this list).
3. If $r_{j} \geq 0$ for all $j$, stop -- the current basic feasible solution is optimal.
4. Select a $q$ such that $r_{q} < 0$ (i.e., the $q$ corresponding to the most negative $r_{q}$) and compute $$y_{q} = B^{-1}a_{q}$$
5. If no $y_{iq} > 0$, stop -- the problem is unbounded; else compute $p=\arg\min_{i}\{y_{i0}/y_{iq} : y_{iq} > 0\}$.
6. Form the augmented revised tableau $[B^{-1}, y_0, y_q]$ and pivot about the $p$th element of the last column. Form the updated revised tableau by taking the first $m+1$ columns of the resulting augmented revised tableau (remove the last column).
7. Go to step 2.
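As a numeric sanity check of step 2, here is a sketch with NumPy; the basis indices are taken from the hand computation of our 2-D example after its first pivot (the basis there is $\lbrace a_3, a_2, a_5 \rbrace$):

```python=
import numpy as np

A = np.array([[1., 1., 1., 0., 0.],
              [1., 3., 0., 1., 0.],
              [2., 1., 0., 0., 1.]])
c = np.array([-4., -5., 0., 0., 0.])
basic = [2, 1, 4]                      # 0-indexed columns a3, a2, a5
B = A[:, basic]
lam = np.linalg.solve(B.T, c[basic])   # lambda^T = c_B^T B^{-1}
r = c - lam @ A                        # reduced costs for every column
print(r)                               # [-7/3, 0, 0, 5/3, 0]
```

The nonbasic entries of `r` reproduce the reduced costs $-\frac{7}{3}$ and $\frac{5}{3}$ seen in the second hand tableau, as expected.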
Python being a relatively simple language to work with, we show an implementation of the revised simplex algorithm below. The function takes as input the cost coefficients ($c$), the matrix of constraints ($A$), the right-hand side of the constraints ($b$), the indices corresponding to the basic columns ($v$), and the change-of-basis matrix ($Binv$). The function returns a tuple of the optimal $x^{*}$, the indices corresponding to the basic columns, and the change-of-basis matrix.

```python=
import numpy as np

def simplex_method(c, A, b, v, Binv, bland=0):
    '''
    Implementation of the Revised Simplex Method
    --Params--
    c - cost vector (n,)
    A - matrix of constraints
    b - rhs of constraints
    v - indices of basic columns
    Binv - change of basis matrix
    bland (optional): use Bland's rule to avoid cycling; otherwise go in
        the direction that reduces the objective the most, i.e. greedy
    --Output--
    x - x* value
    v - indices of basic columns
    Binv - change of basis matrix
    '''
    n = len(c)
    m = len(b)
    # costs for the basic columns
    cB = c[v]
    # change of basis on the rhs
    y0 = Binv.dot(b)
    # calculate the reduced cost coefficients
    lamT = cB.T.dot(Binv)
    r = c.T - lamT.dot(A)
    # while there are negative reduced costs,
    # move to another basic feasible solution
    while len(np.where(r >= 0)[0]) != n:
        if bland:
            # entering column: first index with a negative reduced cost
            q = 0
            while r[q] >= 0:
                q += 1
        else:
            # entering column: most negative reduced cost
            q = np.argmin(r)
        Aq = np.reshape(A[:, q], (m, 1))
        y0 = np.reshape(y0, (m, 1))
        yq = Binv.dot(Aq)
        # ratio test to select the pivot row p
        p = -1
        min_ratio = np.inf
        for i in range(0, m):
            if yq[i] > 0:
                if y0[i] / yq[i] < min_ratio:
                    min_ratio = y0[i] / yq[i]
                    p = i
        if p == -1:
            print("Unbounded")
            return
        # augmented revised tableau [Binv, y0, yq]
        tbl = np.zeros((m, m + 2))
        tbl[:, 0:m] = Binv
        tbl[:, m:m+1] = y0
        tbl[:, m+1:m+2] = yq
        aug = pivot(tbl, p, m + 1)
        Binv = aug[:, 0:m]
        y0 = aug[:, m:m+1]
        # swap the incoming column into the basis and recompute r
        v[p] = q
        cB = c[v]
        lamT = cB.T.dot(Binv)
        r = (c.T - lamT.dot(A)).squeeze()
    # read off x*: basic variables take the values in y0, the rest are 0
    x = np.zeros((n, 1))
    v = np.reshape(v, (m, 1))
    for i, j in enumerate(v.squeeze()):
        x[j] = y0[i]
    return x, v, Binv
```

Here is our function that computes the pivot operation on the augmented revised tableau.

```python=
def pivot(A, p, q):
    '''
    A - input matrix
    p - pivot row index
    q - pivot (incoming basic) column index
    Output
    A - updated matrix, with column q reduced to the pth standard basis vector
    '''
    for i in range(0, A.shape[0]):
        if i == p:
            # pivot equation (2): normalize the pivot row
            A[p, :] = A[p, :] / A[p, q]
        else:
            # pivot equation (1): eliminate column q from the other rows
            A[i, :] = A[i, :] - A[p, :] * (A[i, q] / A[p, q])
    return A
```
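To see the update in isolation, here is a sketch of the first revised-simplex iteration of our manufacturing example, using the `pivot` helper above (values taken from the hand computation; indices 0-based):

```python=
import numpy as np

# First iteration: initial basis {a3, a4, a5}, so Binv = I; the incoming
# column is a2 and the ratio test picks p = 1 (the 18/3 = 6 row).
Binv = np.eye(3)
b = np.array([[8.], [18.], [14.]])
aq = np.array([[1.], [3.], [1.]])                   # column a2 of A
aug = np.hstack([Binv, Binv.dot(b), Binv.dot(aq)])  # [Binv, y0, yq]
aug = pivot(aug, 1, 4)       # pivot about the pth entry of the last column
print(aug[:, 0:3])           # updated Binv
print(aug[:, 3:4])           # updated y0: (2, 6, 8), matching the hand tableau
```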
Recall our 2D manufacturing example, given by:
$$A = \begin{bmatrix} 1 & 1 \\ 1 & 3 \\ 2 & 1 \end{bmatrix}, \quad b=\begin{bmatrix} 8 \\ 18 \\ 14\end{bmatrix}, \quad c^T=\begin{bmatrix} 4 & 5 \end{bmatrix}$$
We form our inputs as Python numpy arrays, adding slack variables, setting our indices for the basic columns, and setting our change-of-basis matrix.

```python=
A = np.array([[1,1,1,0,0],[1,3,0,1,0],[2,1,0,0,1]])
b = np.array([8,18,14])
c = np.array([-4,-5,0,0,0])
v = np.array([2,3,4])
binv = np.eye(3)
simplex_method(c,A,b,v,binv)
```

The output gives us the optimal $x^*$, the indices of our basic columns, and our change-of-basis matrix, $B^{-1}$:

```python=
(array([[3.],
        [5.],
        [0.],
        [0.],
        [3.]]),
 array([[0],
        [1],
        [4]]),
 array([[ 1.5, -0.5,  0. ],
        [-0.5,  0.5,  0. ],
        [-2.5,  0.5,  1. ]]))
```

Multiplying the cost vector by $x^*$ yields the optimal value of our objective function:

```python=
x = np.array([[3.],[5.],[0.],[0.],[3.]])
c.dot(x)   # [-37.]
```

Looking at another example, from James Jones, with 3 decision variables, defined by:

![](https://i.imgur.com/XiCdag8.png)

![](https://i.imgur.com/gO2GJoD.png)
**Plot of the feasible set in three dimensions**

Let's see if our implementation can reach the same optimal solution of:

![](https://i.imgur.com/PDdUDcp.png)

We form the entries for our `simplex_method` function for the given linear program:

```python=
A = np.array([[3,2,5,1,0,0,0],[2,1,1,0,1,0,0],[1,1,3,0,0,1,0],[5,2,4,0,0,0,1]])
b = np.array([55,26,30,57])
c = np.array([-20,-10,-15,0,0,0,0])
v = np.array([3,4,5,6])
binv = np.eye(4)
simplex_method(c,A,b,v,binv)
```

This results in the following:

```python=
(array([[ 1.8],
        [20.8],
        [ 1.6],
        [ 0. ],
        [ 0. ],
        [ 2.6],
        [ 0. ]]),
 array([[2],
        [1],
        [5],
        [0]]),
 array([[ 0.2, -0.8,  0. ,  0.2],
        [ 0.6,  2.6,  0. , -1.4],
        [-0.8,  0.2,  1. ,  0.2],
        [-0.4, -0.4,  0. ,  0.6]]))
```

```python=
x = np.array([[1.8],[20.8],[1.6],[0.],[0.],[2.6],[0.]])
c.dot(x)   # [-268.]
```

With the simplex method having been around for 70+ years, there are existing implementations available in a variety of programming languages that have gone through years of optimization. For example, SciPy, the Python package for scientific computing, has a function called `linprog` in the `scipy.optimize` namespace that implements the simplex method (and an interior point method) for solving linear programs. Running our first 2D example with `linprog`, we obtain the same results:

```python=
from scipy.optimize import linprog
c = [-4,-5]
A = [[1,1],[1,3],[2,1]]
b = [8,18,14]
x0_bounds = (0, None)
x1_bounds = (0, None)
linprog(c, A_ub=A, b_ub=b, method='simplex', bounds=[x0_bounds,x1_bounds])
```

```python=
     con: array([], dtype=float64)
     fun: -37.0
 message: 'Optimization terminated successfully.'
     nit: 3
   slack: array([0., 0., 3.])
  status: 0
 success: True
       x: array([3., 5.])
```
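As a final cross-check (a sketch assuming the same SciPy version and the same James Jones data used above), the three-variable example can be handed to `linprog` as well:

```python=
from scipy.optimize import linprog

c = [-20, -10, -15]
A = [[3, 2, 5], [2, 1, 1], [1, 1, 3], [5, 2, 4]]
b = [55, 26, 30, 57]
# bounds default to (0, None), i.e. non-negative variables
res = linprog(c, A_ub=A, b_ub=b, method='simplex')
print(res.x, res.fun)   # expect roughly [1.8, 20.8, 1.6] and -268.0
```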
# **Conclusion**

In conclusion, the simplex algorithm is a viable and popular linear programming method that has withstood the test of the last 70+ years. It performs well in practice, though it has drawbacks such as exponential worst-case time complexity. This in particular has inspired other areas of study, such as *smoothed analysis*, which measures the expected performance of an algorithm under random perturbations of the inputs that would otherwise generate these exponential-time results. Another algorithm worth mentioning that can solve this type of problem is the relatively newer *interior point method*. This method also optimizes subject to constraints, but approaches a solution from the interior of the feasible region. Such methods can be more efficient for larger problems, as the linear algebra involved is less expensive. All in all, Python is an efficient way to solve linear programs with many variables and constraints, in order to make informed decisions about complex problems.

# **Sources**

"09.05. Optimization." *Numerical Methods with Applications*, by Autar Kaw et al., Lulu Enterprises, 2009.

"Simplex Method." *Wolfram MathWorld*, 2020, mathworld.wolfram.com/SimplexMethod.html.

"Appendix C: Efficiency of the Simplex Method." *An Introduction to Linear Programming and Game Theory*, by Paul R. Thie and G. E. Keough, Wiley, 2008.

Arsham, Hossein. "The Classical Simplex Method." *Towards the Simplex Method*, 25 Feb. 1994, home.ubalt.edu/ntsbarsh/opre640a/partiv.htm#rsimplexmethod.

Chong, Edwin K. P., and Stanislaw H. Zak. *An Introduction to Optimization*, 4th Edition. John Wiley & Sons, 2013.

The Editors of Encyclopaedia Britannica. "Linear Programming." *Encyclopædia Britannica*, Encyclopædia Britannica, Inc., 18 July 2017, www.britannica.com/science/linear-programming-mathematics.

Jones, James. "Linear Programming: Simplex with 3 Decision Variables." https://people.richland.edu/james/ictcm/2006/3dsimplex.html.

Lewis, Catherine. "Linear Programming: Theory and Applications." 2008, www.whitman.edu/Documents/Academics/Mathematics/lewis.pdf.

Mehlhorn, Kurt. "Implementation of the Simplex Algorithm." Max Planck Institute, 2010, resources.mpi-inf.mpg.de/departments/d1/teaching/ss10/opt/handouts/lecture12-implementation-simplex.pdf.

Wang, Ruye. *The Simplex Algorithm*, 2015, fourier.eng.hmc.edu/e176/lectures/NM/node32.html.