# Chapter 3.4. Linear Programming and Simplex Method

Linear Programming (LP) is a powerful mathematical technique used to achieve the best outcome (such as maximum profit or minimum cost) in a model whose requirements are represented by linear relationships. It is fundamentally an optimization method applicable to a vast range of real-world problems where resources are limited and decisions must be made to satisfy a set of constraints. LP's utility spans numerous fields, including business logistics (e.g., supply chain and transportation planning), manufacturing (e.g., optimal production mixing and scheduling), financial modeling (e.g., portfolio optimization), and resource allocation in general. Because it relies on the geometry of convex polyhedra and the principle that an optimal solution, when one exists, is attained at a vertex, LP provides a robust, systematic framework for making complex decisions under resource scarcity.

### 3.4.1. Mathematical Foundations

==Linear programming== is the problem of optimizing (maximizing or minimizing) a ==linear objective function== subject to a set of linear ==equality and inequality constraints==. The general form is: $$ \begin{array}{cl} \underset{\mathbf{x} \in \mathbb{R}^{n}}{\text { Maximize }} & \mathbf{c}^{T}\mathbf{x} \\ \text { subject to } & \mathbf{Ax} \leq \mathbf{b}, \\ & \mathbf{x} \geq \mathbf{0}, \end{array} $$ where $\mathbf{x} \in \mathbb{R}^{n}$ is the vector of ==decision variables==, $\mathbf{c} \in \mathbb{R}^{n}$ the vector of ==objective coefficients==, $\mathbf{A} \in \mathbb{R}^{m \times n}$ the matrix of constraint coefficients, and $\mathbf{b} \in \mathbb{R}^m$ the right-hand side vector. In this case the constraint set is determined entirely by linear inequalities. The problem may alternatively be expressed as $$\begin{array}{cl}\underset{\hat{\mathbf{x}} \in \mathbb{R}^{n+m}}{\text { Maximize }} & \hat{\mathbf{c}}^{T} \hat{\mathbf{x}} \\ \text { subject to } & \hat{\mathbf{A}}\hat{\mathbf{x}}=\mathbf{b}, \\ & \hat{\mathbf{x}} \geq \mathbf{0}, \end{array}$$ where $\hat{\mathbf{x}} = \left(\begin{array}{c} x_{1} \\ \vdots \\ x_{n} \\ y_{1} \\ \vdots \\ y_{m} \end{array}\right)=\left(\begin{array}{c} \mathbf{x} \\ \mathbf{y} \end{array}\right)$, $\hat{\mathbf{c}}=\left(\begin{array}{c} c_{1} \\ \vdots \\ c_{n} \\ 0 \\ \vdots \\ 0 \end{array}\right)=\left(\begin{array}{c} \mathbf{c} \\ \mathbf{0} \end{array}\right)$, $\hat{\mathbf{A}}=\left(\mathbf{A}, \mathbf{I}\right)=\left(\begin{array}{ccccccc}a_{11} & \cdots & a_{1 n} & 1 & 0 & \cdots & 0 \\ a_{21} & \cdots & a_{2 n} & 0 & 1 & \cdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m 1} & \cdots & a_{m n} & 0 & 0 & \cdots & 1\end{array}\right)$. Here, the $m$-dimensional vector $\mathbf{y}$ is the vector of ==slack variables==; these new nonnegative variables $y_{i}$ convert the inequalities of the LP into equalities. By treating the problem as one in the $n+m$ unknowns $x_1, x_2, \ldots, x_n, y_{1}, y_{2}, \ldots, y_{m}$, the problem takes the standard form. The $m \times(n+m)$ matrix that now describes the linear equality constraints is of the special form $\left(\mathbf{A}, \mathbf{I}\right)$ (that is, its columns can be partitioned into two sets: the first $n$ columns make up the original $\mathbf{A}$ matrix and the last $m$ columns make up an $m \times m$ identity matrix).
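To make the two formulations concrete, here is a minimal sketch (assuming NumPy and SciPy are available; the problem data below are invented for illustration, not taken from the text) that solves a small LP in inequality form and then in the equivalent standard form obtained by appending slack variables. Note that `scipy.optimize.linprog` minimizes, so the objective is negated to maximize.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (not from the text): maximize c^T x  s.t.  A x <= b, x >= 0.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([8.0, 9.0])
c = np.array([3.0, 2.0])

# Inequality form (variables are nonnegative by default in linprog).
res = linprog(-c, A_ub=A, b_ub=b, method="highs")
print("x* =", res.x, " max value =", -res.fun)

# Equivalent standard form: A_hat x_hat = b with x_hat = (x, y) >= 0, c_hat = (c, 0),
# where y are the slack variables and A_hat = (A, I).
m, n = A.shape
A_hat = np.hstack([A, np.eye(m)])
c_hat = np.concatenate([c, np.zeros(m)])
res_std = linprog(-c_hat, A_eq=A_hat, b_eq=b, method="highs")
print("x_hat* =", res_std.x, " max value =", -res_std.fun)
```

Both calls return the same maximum; the second also reports the slack values as the last $m$ components of $\hat{\mathbf{x}}$.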
### 3.4.2. Basic Solutions

Consider the system of equalities $$ \mathbf{Ax}=\mathbf{b}, \quad\quad\quad\quad (3.)$$ where $\mathbf{x}$ is an $n$-vector, $\mathbf{b}$ an $m$-vector, and $\mathbf{A}$ is an $m \times n$ matrix. Suppose that from the $n$ columns of $\mathbf{A}$ we select a set of $m$ linearly independent columns (such a set exists if the rank of $\mathbf{A}$ is $m$). For notational simplicity assume that we select the first $m$ columns of $\mathbf{A}$ and denote the $m \times m$ matrix determined by these columns by $\mathbf{B}$. The matrix $\mathbf{B}$ is then non-singular and we may uniquely solve the equation $$\mathbf{B}\mathbf{x}_{\mathbf{B}}=\mathbf{b}$$ for the $m$-vector $\mathbf{x}_{\mathbf{B}}$. By putting $\mathbf{x}=\left(\mathbf{x}_{\mathbf{B}}, \mathbf{0}\right)$ (that is, setting the first $m$ components of $\mathbf{x}$ equal to those of $\mathbf{x}_{\mathbf{B}}$ and the remaining components equal to zero), we obtain a solution to $\mathbf{A x}=\mathbf{b}$. This leads to the following definition.

<span style="color: lime;">**Definition 3..**</span> Given the set of $m$ simultaneous linear equations in $n$ unknowns (3.), let $\mathbf{B}$ be any non-singular $m \times m$ submatrix made up of columns of $\mathbf{A}$. Then, if all $n-m$ components of $\mathbf{x}$ not associated with columns of $\mathbf{B}$ are set equal to zero, the solution to the resulting set of equations is said to be a ==basic solution== of the system with respect to the basis $\mathbf{B}$. The components of $\mathbf{x}$ associated with columns of $\mathbf{B}$ are called ==basic variables==.

In the above definition we refer to the matrix $\mathbf{B}$ as a basis, since $\mathbf{B}$ consists of $m$ linearly independent columns that can be regarded as a basis for $\mathbb{R}^m$. The basic solution corresponds to an expression for the vector $\mathbf{b}$ as a linear combination of these basis vectors. In general, of course, (3.) may have no basic solutions. However, we may avoid trivialities and difficulties of a nonessential nature by making certain elementary assumptions regarding the structure of the matrix $\mathbf{A}$. First, we usually assume that $n>m$, that is, the number of variables $x_i$ exceeds the number of equality constraints. Second, we usually assume that the rows of $\mathbf{A}$ are linearly independent, corresponding to linear independence of the $m$ equations. A linear dependency among the rows of $\mathbf{A}$ would lead either to contradictory constraints and hence no solutions to (3.), or to a redundancy that could be eliminated. Formally, we explicitly make the following assumption in our development, unless noted otherwise.

**Assumption**. The $m \times n$ matrix $\mathbf{A}$ has $m<n$, and the $m$ rows of $\mathbf{A}$ are linearly independent.

Under the above assumption, the system (3.) will always have a solution and, in fact, it will always have at least one basic solution. The basic variables in a basic solution are not necessarily all nonzero. This is noted by the following definition.

<span style="color: lime;">**Definition 3..**</span> If one or more of the basic variables in a basic solution has value zero, that solution is said to be a ==degenerate basic solution==.

We note that in a nondegenerate basic solution the basic variables, and hence the basis $\mathbf{B}$, can be immediately identified from the nonzero components of the solution. There is ambiguity associated with a degenerate basic solution, however, since the zero-valued basic and nonbasic variables can be interchanged.
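For a small system the definition can be applied directly by brute force. The sketch below (invented data, NumPy assumed) tries every choice of $m$ columns of $\mathbf{A}$, skips singular choices, and solves $\mathbf{B}\mathbf{x}_{\mathbf{B}}=\mathbf{b}$ for each remaining basis:

```python
import numpy as np
from itertools import combinations

# A sketch (invented data) of enumerating the basic solutions of Ax = b:
# pick m linearly independent columns B, solve B x_B = b, and set the other
# n - m components to zero. Basic solutions with all components >= 0 are the
# basic feasible solutions discussed shortly.
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])   # m = 2, n = 4, rank(A) = 2
b = np.array([4.0, 5.0])
m, n = A.shape

for idx in combinations(range(n), m):
    cols = list(idx)
    B = A[:, cols]
    if np.linalg.matrix_rank(B) < m:   # skip singular column choices
        continue
    x = np.zeros(n)
    x[cols] = np.linalg.solve(B, b)
    print(f"basis {tuple(cols)}: basic solution x = {x}")
```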
So far in the discussion of basic solutions we have treated only the equality constraint (3.) and have made no reference to nonnegativity constraints on the variables. Similar definitions apply when these constraints are also considered. Thus, consider now the system of constraints $$ \begin{aligned} \mathbf{A x} & =\mathbf{b} \quad\quad\quad (3.) \\ \mathbf{x} & \geqslant \mathbf{0}, \end{aligned} $$ which represents the constraints of a linear program in standard form.

<span style="color: lime;">**Definition 3..**</span> A vector $\mathbf{x}$ satisfying (3.) is said to be ==feasible== for these constraints. A feasible solution to the constraints (3.) that is also basic is said to be a ==basic feasible solution==; if this solution is also a degenerate basic solution, it is called a ==degenerate basic feasible solution==.

### 3.4.3. The Fundamental Theorem of Linear Programming

In this section, through the fundamental theorem of linear programming, we establish the primary importance of basic feasible solutions in solving linear programs. The method of proof of the theorem is in many respects as important as the result itself, since it represents the beginning of the development of the simplex method. The theorem itself shows that it is necessary only to consider basic feasible solutions when seeking an optimal solution to a linear program because the optimal value is always achieved at such a solution. Corresponding to a linear program in standard form $$ \begin{array}{ll} \operatorname{minimize} & \mathbf{c}^T \mathbf{x} \\ \text { subject to } & \mathbf{A x}=\mathbf{b} \quad\quad\quad (3.) \\ & \mathbf{x} \geqslant \mathbf{0}, \end{array} $$ a feasible solution to the constraints that achieves the minimum value of the objective function subject to those constraints is said to be an ==optimal feasible solution==. If this solution is basic, it is an ==optimal basic feasible solution==.

<span style="color: mediumslateblue;">**Theorem 3..**</span> Given a linear program in standard form (3.) where $\mathbf{A}$ is an $m \times n$ matrix of rank $m$, (i) if there is a feasible solution, there is a basic feasible solution; (ii) if there is an optimal feasible solution, there is an optimal basic feasible solution.

*Proof*. (i) Denote the columns of $\mathbf{A}$ by $\mathbf{a}^1, \mathbf{a}^2, \ldots, \mathbf{a}^n$. Suppose $\mathbf{x}=\left(x_1, x_2, \ldots, x_n\right)$ is a feasible solution. Then, in terms of the columns of $\mathbf{A}$, this solution satisfies $$ x_1 \mathbf{a}^1+x_2 \mathbf{a}^2+\cdots+x_n \mathbf{a}^n=\mathbf{b}.$$ Assume that exactly $p$ of the variables $x_i$ are greater than zero, and for convenience, that they are the first $p$ variables. Thus $$ x_1 \mathbf{a}^1+x_2 \mathbf{a}^2+\cdots+x_p \mathbf{a}^p=\mathbf{b}.$$ There are now two cases, corresponding to whether the set $\mathbf{a}^1, \mathbf{a}^2, \ldots, \mathbf{a}^p$ is linearly independent or linearly dependent.

**Case 1**: Assume $\mathbf{a}^1, \mathbf{a}^2, \ldots, \mathbf{a}^p$ are linearly independent. Then clearly, $p \leq m$. If $p=m$, the solution is basic and the proof is complete. If $p<m$, then, since $\mathbf{A}$ has rank $m$, $m-p$ vectors can be found from the remaining $n-p$ vectors so that the resulting set of $m$ vectors is linearly independent. Assigning the value zero to the corresponding $m-p$ variables yields a (degenerate) basic feasible solution.

**Case 2**: Assume $\mathbf{a}^1, \mathbf{a}^2, \ldots, \mathbf{a}^p$ are linearly dependent.
Then there is a non-trivial linear combination of these vectors that is zero. Thus there are constants $y_1, y_2, \ldots, y_p$, at least one of which can be assumed to be positive, such that $$y_1 \mathbf{a}^1+y_2 \mathbf{a}^2+\cdots+y_p \mathbf{a}^p=\mathbf{0} . $$ Multiplying this equation by a scalar $\varepsilon$ and subtracting it from the expansion $x_1 \mathbf{a}^1+x_2 \mathbf{a}^2+\cdots+x_p \mathbf{a}^p=\mathbf{b}$ above, we obtain $$ \left(x_1-\varepsilon y_1\right) \mathbf{a}^1+\left(x_2-\varepsilon y_2\right) \mathbf{a}^2+\cdots+\left(x_p-\varepsilon y_p\right) \mathbf{a}^p=\mathbf{b}.$$ This equation holds for every $\varepsilon$, and for each $\varepsilon$ the components $x_i-\varepsilon y_i$ correspond to a solution of the linear equalities, although they may violate $x_i-\varepsilon y_i \geqslant 0$. Denoting $\mathbf{y}=\left(y_1, y_2, \ldots, y_p, 0,0, \ldots, 0\right)$, we see that for any $\varepsilon$ $$\mathbf{x}-\varepsilon \mathbf{y}$$ is a solution to the equalities. For $\varepsilon=0$, this reduces to the original feasible solution. As $\varepsilon$ is increased from zero, the various components increase, decrease, or remain constant, depending upon whether the corresponding $y_i$ is negative, positive, or zero. Since we assume at least one $y_i$ is positive, at least one component will decrease as $\varepsilon$ is increased. We increase $\varepsilon$ to the first point where one or more components become zero. Specifically, we set $$ \varepsilon=\min \left\{\frac{x_{i}}{y_{i}}: y_{i}>0\right\}.$$ For this value of $\varepsilon$ the solution $\mathbf{x}-\varepsilon \mathbf{y}$ is feasible and has at most $p-1$ positive variables. Repeating this process if necessary, we can eliminate positive variables until we have a feasible solution with corresponding columns that are linearly independent. At that point Case 1 applies.

(ii) Let $\mathbf{x}=\left(x_1, x_2, \ldots, x_n\right)$ be an optimal feasible solution and, as in the proof of (i) above, suppose there are exactly $p$ positive variables $x_1, x_2, \ldots, x_p$. Again there are two cases; and Case 1, corresponding to linear independence, is exactly the same as before. Case 2 also goes exactly the same as before, but it must be shown that the solution $\mathbf{x}-\varepsilon \mathbf{y}$ obtained in this way is optimal. To show this, note that the value of the solution $\mathbf{x}-\varepsilon \mathbf{y}$ is $$ \mathbf{c}^T \mathbf{x}-\varepsilon \mathbf{c}^T \mathbf{y}.$$ For $\varepsilon$ sufficiently small in magnitude, $\mathbf{x}-\varepsilon \mathbf{y}$ is a feasible solution for positive or negative values of $\varepsilon$. Thus we conclude that $\mathbf{c}^T \mathbf{y}=0$. For, if $\mathbf{c}^T \mathbf{y} \neq 0$, an $\varepsilon$ of small magnitude and proper sign could be determined so as to render $\mathbf{c}^T \mathbf{x}-\varepsilon \mathbf{c}^T \mathbf{y}$ smaller than $\mathbf{c}^T \mathbf{x}$ while maintaining feasibility. This would violate the assumption of optimality of $\mathbf{x}$ and hence we must have $\mathbf{c}^T \mathbf{y}=0$. Having established that the new feasible solution with fewer positive components is also optimal, the remainder of the proof may be completed exactly as in part (i). $\blacksquare$

Our development to this point, including the above proof of the fundamental theorem, has been based only on elementary properties of systems of linear equations. These results, however, have interesting interpretations in terms of the theory of convex sets that can lead not only to an alternative derivation of the fundamental theorem, but also to a clearer geometric understanding of the result.
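The Case 2 reduction in the proof is constructive and easy to run numerically. The sketch below (invented data; NumPy assumed; not a robust production routine) repeatedly finds a dependency $\mathbf{y}$ among the columns carrying positive components of a feasible $\mathbf{x}$ and moves to $\mathbf{x}-\varepsilon\mathbf{y}$ with $\varepsilon=\min\{x_i/y_i : y_i>0\}$, until those columns are linearly independent and Case 1 applies:

```python
import numpy as np

def reduce_to_basic_feasible(A, b, x, tol=1e-10):
    """Sketch of the proof's Case 2 reduction: drive positive components of a
    feasible x to zero, preserving Ax = b and x >= 0, until the columns of A
    carrying the positive components are linearly independent."""
    x = x.astype(float).copy()
    assert np.allclose(A @ x, b) and np.all(x >= -tol), "x must be feasible"
    while True:
        support = np.flatnonzero(x > tol)          # indices with x_i > 0
        cols = A[:, support]
        if np.linalg.matrix_rank(cols) == len(support):
            return x                               # independent: Case 1 applies
        y = np.linalg.svd(cols)[2][-1]             # nontrivial y with cols @ y = 0
        if not np.any(y > tol):                    # ensure some y_i is positive
            y = -y
        eps = np.min(x[support][y > tol] / y[y > tol])
        x[support] -= eps * y                      # at least one component hits zero

# Invented example: x0 = (1, 1, 1) is feasible for Ax = b but not basic,
# since the three columns of A are linearly dependent (rank(A) = 2).
A = np.array([[1.0, 1.0, 2.0],
              [0.0, 1.0, 1.0]])
b = np.array([4.0, 2.0])
x0 = np.array([1.0, 1.0, 1.0])
print(reduce_to_basic_feasible(A, b, x0))          # a basic feasible solution
```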
The main link between the algebraic and geometric theories is the formal relation between basic feasible solutions of a linear program in standard form and extreme points of the corresponding convex polytope. We establish this correspondence as follows.

<span style="color: lime;">**Definition 3..**</span> A point $\mathbf{x}$ in a convex set $C$ is said to be an ==extreme point== of $C$ if there are no two distinct points $\mathbf{x}_1$ and $\mathbf{x}_2$ in $C$ such that $\mathbf{x}=\alpha \mathbf{x}_1+(1-\alpha) \mathbf{x}_2$ for some $\alpha, 0<\alpha<1$.

An extreme point is thus a point that does not lie strictly within a line segment connecting two other points of the set. The extreme points of a triangle, for example, are its three vertices.

<span style="color: mediumslateblue;">**Theorem 3..**</span> (Equivalence of extreme points and basic solutions). Let $\mathbf{A}$ be an $m \times n$ matrix of rank $m$ and $\mathbf{b}$ an $m$-vector. Let $\boldsymbol{K}$ be the convex polytope consisting of all $n$-vectors $\mathbf{x}$ satisfying $$ \begin{aligned} \mathbf{Ax} & =\mathbf{b} \quad\quad\quad (3.) \\ \mathbf{x} & \geqslant \mathbf{0} . \end{aligned} $$ A vector $\mathbf{x}$ is an extreme point of $\boldsymbol{K}$ if and only if $\mathbf{x}$ is a basic feasible solution to (3.).

*Proof*. Suppose first that $\mathbf{x}=\left(x_{1}, x_{2}, \ldots, x_{m}, 0,0, \ldots, 0\right)$ is a basic feasible solution to (3.). Then $$ x_1 \mathbf{a}^1+x_2 \mathbf{a}^2+\cdots+x_m \mathbf{a}^m=\mathbf{b},$$ where $\mathbf{a}^1, \mathbf{a}^2, \ldots, \mathbf{a}^m$, the first $m$ columns of $\mathbf{A}$, are linearly independent. Suppose that $\mathbf{x}$ could be expressed as a convex combination of two other points in $\boldsymbol{K}$; say, $\mathbf{x}=\alpha \mathbf{y}+(1-\alpha) \mathbf{z}$, $0<\alpha<1$, $\mathbf{y} \neq \mathbf{z}$. Since all components of $\mathbf{x}, \mathbf{y}, \mathbf{z}$ are nonnegative and since $0<\alpha<1$, it follows immediately that the last $n-m$ components of $\mathbf{y}$ and $\mathbf{z}$ are zero. Thus, in particular, we have $$ y_1 \mathbf{a}^1+y_2 \mathbf{a}^2+\cdots+y_m \mathbf{a}^m=\mathbf{b}$$ and $$ z_1 \mathbf{a}^1+z_2 \mathbf{a}^2+\cdots+z_m \mathbf{a}^m=\mathbf{b}.$$ Since the vectors $\mathbf{a}^1, \mathbf{a}^2, \ldots, \mathbf{a}^m$ are linearly independent, however, it follows that $\mathbf{x}=\mathbf{y}=\mathbf{z}$, contradicting $\mathbf{y} \neq \mathbf{z}$; hence $\mathbf{x}$ is an extreme point of $\boldsymbol{K}$.

Conversely, assume that $\mathbf{x}$ is an extreme point of $\boldsymbol{K}$. Let us assume that the nonzero components of $\mathbf{x}$ are the first $k$ components. Then $$ x_1 \mathbf{a}^1+x_2 \mathbf{a}^2+\cdots+x_k \mathbf{a}^k=\mathbf{b} $$ with $x_i>0$, $i=1,2, \ldots, k$. To show that $\mathbf{x}$ is a basic feasible solution it must be shown that the vectors $\mathbf{a}^1, \mathbf{a}^2, \ldots, \mathbf{a}^k$ are linearly independent. We do this by contradiction. Suppose $\mathbf{a}^1, \mathbf{a}^2, \ldots, \mathbf{a}^k$ are linearly dependent. Then there is a non-trivial linear combination that is zero: $$ y_1 \mathbf{a}^1+y_2 \mathbf{a}^2+\cdots+y_k \mathbf{a}^k=\mathbf{0} .$$ Define the $n$-vector $\mathbf{y}=\left(y_1, y_2, \ldots, y_k, 0,0, \ldots, 0\right)$. Since $x_i>0$, $1 \leqslant i \leqslant k$, it is possible to select $\varepsilon>0$ such that $$ \mathbf{x}+\varepsilon \mathbf{y} \geqslant \mathbf{0}, \quad \mathbf{x}-\varepsilon \mathbf{y} \geqslant \mathbf{0} .$$ We then have $\mathbf{x}=\frac{1}{2}(\mathbf{x}+\varepsilon \mathbf{y})+\frac{1}{2}(\mathbf{x}-\varepsilon \mathbf{y})$, which expresses $\mathbf{x}$ as a convex combination of two distinct vectors in $\boldsymbol{K}$.
This cannot occur, since $\mathbf{x}$ is an extreme point of $\boldsymbol{K}$. Thus $\mathbf{a}^1, \mathbf{a}^2, \ldots, \mathbf{a}^k$ are linearly independent and $\mathbf{x}$ is a basic feasible solution. (Although if $k<m$, it is a degenerate basic feasible solution.) $\blacksquare$

This correspondence between extreme points and basic feasible solutions enables us to prove certain geometric properties of the convex polytope $\boldsymbol{K}$ defining the constraint set of a linear programming problem.

**Corollary 1.** If the convex set $\boldsymbol{K}$ corresponding to (3.) is nonempty, it has at least one extreme point.

*Proof*. This follows from the first part of the Fundamental Theorem and the Equivalence Theorem above. $\blacksquare$

**Corollary 2.** If there is a finite optimal solution to a linear programming problem, there is a finite optimal solution which is an extreme point of the constraint set.

**Corollary 3.** The constraint set $\boldsymbol{K}$ corresponding to (3.) possesses at most a finite number of extreme points.

*Proof*. There are obviously only a finite number of basic solutions obtained by selecting $m$ basis vectors from the $n$ columns of $\mathbf{A}$. The extreme points of $\boldsymbol{K}$ are a subset of these basic solutions. $\blacksquare$

Finally, we come to the special case which occurs most frequently in practice and which in some sense is characteristic of well-formulated linear programs: the case where the constraint set $\boldsymbol{K}$ is nonempty and bounded. In this case we combine the results of the Equivalence Theorem and Corollary 3 above to obtain the following corollary.

**Corollary 4.** If the convex polytope $\boldsymbol{K}$ corresponding to (3.) is bounded, then $\boldsymbol{K}$ is a convex polyhedron, that is, $\boldsymbol{K}$ consists of points that are convex combinations of a finite number of points.

### 3.4.4. Simplex Method

In this section we assume that we begin with a basic feasible solution and that the tableau corresponding to $\mathbf{A x}=\mathbf{b}$ is in the canonical form for this solution. Methods for obtaining this first basic feasible solution, when one is not obvious, are described in the next section. In addition to beginning with the array $\mathbf{A x}=\mathbf{b}$ expressed in canonical form corresponding to a basic feasible solution, we append a row at the bottom consisting of the relative cost coefficients and the negative of the current cost. The result is a simplex tableau. Thus, if we assume the basic variables are (in order) $x_1, x_2, \ldots, x_m$, the simplex tableau takes the initial form shown in Fig. 3.2. The basic solution corresponding to this tableau is $$x_i= \begin{cases}y_{i 0} & 1 \leqslant i \leqslant m \\ 0 & m+1 \leqslant i \leqslant n,\end{cases}$$ which we have assumed is feasible, that is, $y_{i 0} \geqslant 0$, $i=1,2, \ldots, m$. The corresponding value of the objective function is $z_0$.

$$\begin{array}{|cccc|cccccc|c|}
\hline
\mathbf{a}^1 & \mathbf{a}^2 & \cdots & \mathbf{a}^m & \mathbf{a}^{m+1} & \mathbf{a}^{m+2} & \cdots & \mathbf{a}^j & \cdots & \mathbf{a}^n & \mathbf{b} \\
\hline
1 & 0 & \cdots & 0 & y_{1, m+1} & y_{1, m+2} & \cdots & y_{1 j} & \cdots & y_{1 n} & y_{10} \\
0 & 1 & \cdots & 0 & y_{2, m+1} & y_{2, m+2} & \cdots & y_{2 j} & \cdots & y_{2 n} & y_{20} \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & & \vdots & & \vdots & \vdots \\
0 & 0 & \cdots & 1 & y_{m, m+1} & y_{m, m+2} & \cdots & y_{m j} & \cdots & y_{m n} & y_{m 0} \\
\hline
0 & 0 & \cdots & 0 & r_{m+1} & r_{m+2} & \cdots & r_j & \cdots & r_n & -z_0 \\
\hline
\end{array}$$

*Fig. 3.2. Initial form of the simplex tableau.*

The relative cost coefficients $r_j$ indicate whether the value of the objective will increase or decrease if $x_j$ is pivoted into the solution. If these coefficients are all nonnegative, then the indicated solution is optimal. If some of them are negative, an improvement can be made (assuming nondegeneracy) by bringing the corresponding component into the solution. When more than one of the relative cost coefficients is negative, any one of them may be selected to determine in which column to pivot. Common practice is to select the most negative value.

Some more discussion of the relative cost coefficients and the last row of the tableau is warranted. We may regard $z$ as an additional variable and $$ c_1 x_1+c_2 x_2+\cdots+c_n x_n-z=0$$ as another equation. A basic solution to the augmented system will have $m+1$ basic variables, but we can require that $z$ be one of them. For this reason it is not necessary to add a column corresponding to $z$, since it would always be $(0,0, \ldots, 0,1)$. Thus, initially, a last row consisting of the $c_i$'s and a right-hand side of zero can be appended to the standard array to represent this additional equation. Using standard pivot operations, the elements in this row corresponding to basic variables can be reduced to zero. This is equivalent to transforming the additional equation to the form $$ r_{m+1} x_{m+1}+r_{m+2} x_{m+2}+\cdots+r_n x_n-z=-z_0,$$ which expresses $z$ in terms of the nonbasic variables alone, and hence the $r_j$'s obtained in this way are the relative cost coefficients. Thus, the last row can be treated operationally like any other row: just start with the $c_j$'s and reduce the terms corresponding to basic variables to zero by row operations.

After a column $q$ is selected in which to pivot, the final selection of the pivot element is made by computing the ratio $y_{i 0} / y_{i q}$ for the positive elements $y_{i q}$, $i=1,2, \ldots, m$, of the $q$th column and selecting the index $p$ yielding the minimum ratio. Pivoting on this element will maintain feasibility as well as (assuming nondegeneracy) decrease the value of the objective function. If there are ties, any element yielding the minimum can be used. If there are no positive elements in the column, the problem is unbounded. After updating the entire tableau with $y_{p q}$ as pivot and transforming the last row in the same manner as all other rows (except the pivot row $p$), we obtain a new tableau in canonical form. The new value of the objective function again appears in the lower right-hand corner of the tableau.

The simplex algorithm can be summarized by the following steps:

**Step 0**. Form a tableau as in Fig. 3.2 corresponding to a basic feasible solution. The relative cost coefficients can be found by row reduction.

**Step 1**. If each $r_j \geqslant 0$, stop; the current basic feasible solution is optimal.

**Step 2**. Select $q$ such that $r_q<0$ to determine which nonbasic variable is to become basic.

**Step 3**. Calculate the ratios $y_{i 0} / y_{i q}$ for $y_{i q}>0$, $i=1,2, \ldots, m$. If no $y_{i q}>0$, stop; the problem is unbounded. Otherwise, select $p$ as the index $i$ corresponding to the minimum ratio.

**Step 4**. Pivot on the $pq$th element, updating all rows including the last. Return to Step 1.
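The five steps translate almost line by line into code. The following is a minimal sketch of the tableau algorithm (NumPy assumed; the function name, the dense-tableau representation, and the example data are my own choices, and nondegeneracy is assumed as in the text). Step 0 is implemented with $\mathbf{B}^{-1}$ rather than explicit row reduction, which yields the same canonical tableau.

```python
import numpy as np

def simplex(A, b, c, basis):
    """A minimal sketch of the tableau algorithm above for
    minimize c^T x  subject to  Ax = b, x >= 0,
    starting from a basic feasible solution identified by `basis`
    (column indices). Nondegeneracy is assumed, as in the text."""
    m, n = A.shape
    basis = list(basis)
    # Step 0: canonical form; equivalent to the row reduction described above
    # (basic columns become an identity, their relative costs become zero).
    B_inv = np.linalg.inv(A[:, basis])
    T = np.zeros((m + 1, n + 1))
    T[:m, :n] = B_inv @ A
    T[:m, n] = B_inv @ b
    T[m, :n] = c - c[basis] @ T[:m, :n]   # relative cost coefficients r_j
    T[m, n] = -c[basis] @ T[:m, n]        # lower-right corner holds -z_0
    while True:
        r = T[m, :n]
        if np.all(r >= -1e-9):            # Step 1: optimal
            x = np.zeros(n)
            x[basis] = T[:m, n]
            return x, -T[m, n]
        q = int(np.argmin(r))             # Step 2: most negative r_q enters
        col = T[:m, q]
        if np.all(col <= 1e-9):
            raise ValueError("problem is unbounded")
        ratios = np.full(m, np.inf)       # Step 3: ratio test on y_iq > 0
        ratios[col > 1e-9] = T[:m, n][col > 1e-9] / col[col > 1e-9]
        p = int(np.argmin(ratios))
        T[p, :] /= T[p, q]                # Step 4: pivot on element (p, q),
        for k in range(m + 1):            # updating all rows, last included
            if k != p:
                T[k, :] -= T[k, q] * T[p, :]
        basis[p] = q

# Invented example: maximize x1 + 2*x2 s.t. x1 + x2 <= 4, x1 + 3*x2 <= 6,
# written in standard form with slacks x3, x4 as the starting basis.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])      # negated for minimization
x, z = simplex(A, b, c, basis=[2, 3])
print(x, z)                               # expect x = (3, 1, 0, 0), z = -5
```

On this invented two-constraint problem the routine returns $x=(3,1,0,0)$ with objective $-5$, that is, a maximum of $5$ for the original problem.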
Proof that the algorithm solves the problem (again assuming nondegeneracy) is essentially established by our previous development. The process terminates only if optimality is achieved or unboundedness is discovered. If neither condition is discovered at a given basic solution, then the objective is strictly decreased. Since there are only a finite number of possible basic feasible solutions, and no basis repeats because of the strictly decreasing objective, the algorithm must reach a basis satisfying one of the two terminating conditions.

**Example.** Maximize $3 x_1+x_2+3 x_3$ subject to $$ \begin{aligned} 2 x_1+x_2+x_3 & \leqslant 2 \\ x_1+2 x_2+3 x_3 & \leqslant 5 \\ 2 x_1+2 x_2+x_3 & \leqslant 6 \\ x_1 \geqslant 0, \quad x_2 \geqslant 0, \quad x_3 & \geqslant 0 . \end{aligned} $$

To transform the problem into standard form so that the simplex procedure can be applied, we change the maximization to minimization by multiplying the objective function by minus one, and introduce three nonnegative slack variables $x_4, x_5, x_6$. We then have the initial tableau $$\begin{array}{cccccccc} & \mathbf{a}^1 & \mathbf{a}^2 & \mathbf{a}^3 & \mathbf{a}^4 & \mathbf{a}^5 & \mathbf{a}^6 & \mathbf{b} \\ & \require{enclose}\enclose{circle}{2} & \require{enclose}\enclose{circle}{1} & 1 & 1 & 0 & 0 & 2 \\ & 1 & 2 & \require{enclose}\enclose{circle}{3} & 0 & 1 & 0 & 5 \\ & 2 & 2 & 1 & 0 & 0 & 1 & 6 \\ \mathbf{r}^T & -3 & -1 & -3 & 0 & 0 & 0 & 0 \end{array}$$

The problem is already in canonical form with the three slack variables serving as the basic variables. We have at this point $r_j=c_j-z_j=c_j$, since the costs of the slacks are zero. Application of the criterion for selecting a column in which to pivot shows that any of the first three columns would yield an improved solution. In each of these columns the appropriate pivot element is determined by computing the ratios $y_{i 0} / y_{i j}$ and selecting the smallest positive one. The three allowable pivots are all circled on the tableau. It is only necessary to determine one allowable pivot, and normally we would not bother to calculate them all. For hand calculation on problems of this size, however, we may wish to examine the allowable pivots and select one that will minimize (at least in the short run) the amount of division required. Thus for this example we select $\require{enclose}\enclose{circle}{1}$. $$\begin{array}{rrrrrrr} 2 & 1 & 1 & 1 & 0 & 0 & 2 \\ -3 & 0 & \require{enclose}\enclose{circle}{1} & -2 & 1 & 0 & 1 \\ -2 & 0 & -1 & -2 & 0 & 1 & 2 \\ -1 & 0 & -2 & 1 & 0 & 0 & 2 \end{array}$$

We note that the objective function (we are using the negative of the original one) has decreased from zero to minus two. Again we pivot on $\require{enclose}\enclose{circle}{1}$. $$\begin{array}{rrrrrrr} \require{enclose}\enclose{circle}{5} & 1 & 0 & 3 & -1 & 0 & 1 \\ -3 & 0 & 1 & -2 & 1 & 0 & 1 \\ -5 & 0 & 0 & -4 & 1 & 1 & 3 \\ -7 & 0 & 0 & -3 & 2 & 0 & 4 \end{array}$$

The value of the objective function has now decreased to minus four and we may pivot in either the first or fourth column. We select $\require{enclose}\enclose{circle}{5}$. $$\begin{array}{rrrrrrr} 1 & \frac{1}{5} & 0 & \frac{3}{5} & -\frac{1}{5} & 0 & \frac{1}{5} \\ 0 & \frac{3}{5} & 1 & -\frac{1}{5} & \frac{2}{5} & 0 & \frac{8}{5} \\ 0 & 1 & 0 & -1 & 0 & 1 & 4 \\ 0 & \frac{7}{5} & 0 & \frac{6}{5} & \frac{3}{5} & 0 & \frac{27}{5} \end{array}$$

Since the last row has no negative elements, we conclude that the solution corresponding to the fourth tableau is optimal.
Thus $x_{1}=\frac{1}{5}$, $x_{2}=0$, $x_{3}=\frac{8}{5}$, $x_{4}=0$, $x_{5}=0$, $x_{6}=4$ is the optimal solution with a corresponding value of the (negative) objective of $-\frac{27}{5}$. The optimal value of the original maximization problem is therefore $\frac{27}{5}$, attained at $x_1=\frac{1}{5}$, $x_2=0$, $x_3=\frac{8}{5}$.
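As a quick cross-check of the worked example (SciPy assumed), the same problem can be handed to an off-the-shelf solver in its original maximization form:

```python
import numpy as np
from scipy.optimize import linprog

# Check the worked example: maximize 3*x1 + x2 + 3*x3 subject to the three
# inequality constraints (linprog minimizes, so the objective is negated).
c = np.array([3.0, 1.0, 3.0])
A_ub = np.array([[2.0, 1.0, 1.0],
                 [1.0, 2.0, 3.0],
                 [2.0, 2.0, 1.0]])
b_ub = np.array([2.0, 5.0, 6.0])

res = linprog(-c, A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.x)        # expected: [0.2, 0.0, 1.6]
print(-res.fun)     # expected: 5.4 = 27/5
print(res.slack)    # the slack of the third constraint should be 4
```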