Linear Programming:SIMPLEX METHOD, Simplex Procedure

Operations Research (MTH601)
SIMPLEX METHOD
Basic Properties
The simplex method is based on the following fundamental properties:
Property 1: The collection of feasible solutions constitutes a convex set.
Property 2: If a feasible solution exists, a basic feasible solution exists, where the basic feasible solutions correspond to the extreme points (corner points) of the set of feasible solutions.
Property 3: There exist only a finite number of basic feasible solutions.
Property 4: If the objective function possesses a finite maximum or minimum, then at least one optimal solution is a basic feasible solution.
These properties can be easily verified for their plausibility with reference to the graphical representation.
The reason for these properties can be attributed to the complete linearity of the linear programming model. We shall clarify the hypotheses of properties 2 and 4. A linear programming problem need not have any feasible solution at all; this will be so when the constraints contain inconsistencies or contradictions. For instance, if in the above example the third constraint x + y < 60 is replaced by, say, x + y > 100, the constraints become contradictory and there is no solution space. It is also possible for the objective function to grow to an infinitely high value subject to the restrictions, rather than to some finite number. This is the case when, in example 2.5.1, the second and third constraints are deleted: the solution space and the solution are then unbounded, and the objective function can be pushed to Z = +∞. Suppose, however, that a feasible solution exists and that the optimal value of Z is finite. Properties 2 and 3 then indicate that the number of basic feasible solutions is strictly positive and finite, and from property 4 we infer that only this finite number of solutions, namely the basic feasible solutions or extreme points, needs to be investigated to find an optimal solution. Hence, even though there exist an infinite number of feasible solutions, it is enough to evaluate the objective function at the basic feasible solutions only. Therefore we examine the value of the objective function at each corner point (basic feasible solution) and select the one with the largest value of Z in a maximization problem or the smallest value of Z in a minimization problem.
If we apply the above logic to the first example, the basic feasible solutions and the corresponding values of Z are shown in the table below:

(x, y)                          Value of Z = 4x + 7y
(Basic feasible solution)
(0, 0)                          0
(0, 30)                         210
(30, 30)                        330*
(40, 20)                        300
(40, 0)                         160
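The enumeration in the table can be carried out mechanically; here is a minimal Python sketch (the names `corner_points` and `z` are illustrative, not from the text):

```python
# Evaluate Z = 4x + 7y at each basic feasible solution (corner point)
# from the table above and pick the largest, since this is a maximization problem.
corner_points = [(0, 0), (0, 30), (30, 30), (40, 20), (40, 0)]

def z(x, y):
    return 4 * x + 7 * y

best = max(corner_points, key=lambda p: z(*p))
print(best, z(*best))  # (30, 30) with Z* = 330
```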
We take the largest value of Z as the optimum, denoted by Z* = 330, with the values of the decision variables x = 30, y = 30.
One more interesting fact can be inferred from property 4: an optimal solution need not be a basic feasible solution. This can happen when a number of feasible solutions yield the same maximum value of Z, since property 4 guarantees only that at least one of them will be a basic feasible solution. To illustrate, suppose that the objective function in the above example is changed to Z = 4x + 4y. Then not only the two basic feasible solutions (30, 30) and (40, 20), but also all the infinitely many non-basic feasible solutions lying on the line segment between these two points, would be optimal solutions.
Simplex Procedure
The simplex method is an algebraic procedure involving a well-defined iterative process, which leads progressively to an optimal solution in a finite number of steps. Dantzig introduced the method in 1947, and even today it remains the most versatile and powerful tool for solving linear programming problems. For routine linear programming problems, computer solution packages have been developed. In the next section we describe what the simplex method does and how it solves a linear programming problem.
Example
Consider the problem:
Maximize
Z = 3x + 5y
Subject to
x < 40, y < 30, x + y < 60, x, y > 0.
The first step in the simplex method is to convert the linear programming model involving inequalities into strict equalities by the use of "slack variables". To illustrate the idea, consider the inequality x < 40. How can one replace this inequality by an equivalent equation? The inequality x < 40 means that x can take a value of 40 or less. If the value of x is exactly 40, then this is already the required equation. But since x can also take a value less than 40, it is necessary to add some non-negative value to x to make it up to 40. This additional non-negative variable is called the slack variable, denoted by S1. We can then write
x + S1 = 40
so that the above is an equation. This has been achieved by the addition of the slack variable S1 to the left hand side of the inequality; S1 can take a value between 0 and 40, both inclusive. If S1 = 0, then x = 40, and if S1 = 40, then x = 0. Thus the slack variable is always non-negative.
So we conclude that the addition of a non-negative variable, called a slack variable, converts a 'less than or equal to' constraint into a strict 'equality constraint'.
The second inequality, y < 30 can also be converted into an equation by adding another slack variable S2 to
the left hand side. Thus we have
y + S2 = 30
Similarly the third constraint x + y < 60 is also converted with the addition of another slack variable S3
so that we have,
x + y + S3 = 60
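The three conversions can be summarized in a small Python sketch: given a point (x, y), compute the slack remaining in each constraint; the point is feasible exactly when x, y and all three slacks are non-negative. (The helper names `slacks` and `is_feasible` are illustrative, not from the text.)

```python
# Slack variables for the example constraints x <= 40, y <= 30, x + y <= 60.
def slacks(x, y):
    s1 = 40 - x        # slack of x <= 40
    s2 = 30 - y        # slack of y <= 30
    s3 = 60 - x - y    # slack of x + y <= 60
    return s1, s2, s3

def is_feasible(x, y):
    # Feasibility requires the decision variables and every slack to be >= 0.
    return x >= 0 and y >= 0 and all(s >= 0 for s in slacks(x, y))

print(slacks(30, 30))      # (10, 0, 0): the last two constraints are binding
print(is_feasible(50, 0))  # False: violates x <= 40
```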
Thus the original linear programming model is replaced by the equivalent model:
Maximize
Z = 3x + 5y
Subject to the restrictions,
x + S1           = 40    (1)
y + S2           = 30    (2)
x + y + S3       = 60    (3)
and
x, y, S1, S2, S3 > 0
The above form of representation of a linear programming problem is much more convenient for algebraic
manipulation and for identification of basic feasible solutions.
Referring to the restrictions in the above example, there are five variables x, y, S1, S2 and S3, all of them non-negative, and their values are to be determined to find an optimal solution. We have only three equations, (1), (2) and (3), involving five variables. We cannot find a unique solution with a set of three equations in five unknowns; we would need at least five linearly independent equations to solve for all five variables uniquely. At most we can solve for three of the variables in terms of the remaining two.
In general, if there are m equations in n variables (n > m), a basic solution is obtained by solving for m variables in terms of the remaining variables and setting the other (n - m) variables equal to zero, assuming that such a solution exists and is unique. A basic feasible solution is a basic solution in which all m of these variables are non-negative. A non-degenerate basic solution is a basic solution in which these m variables are strictly positive (i.e. > 0). The m variables chosen from among the n variables are called the "basic" variables, or the variables in the "basis". The remaining (n - m) variables are termed "non-basic" variables, and their values are zero.
Now, considering the example above, we can choose any three variables as the basic variables and the remaining two as non-basic variables. A question may be asked at this juncture as to which three of the five variables to choose. This is a combinatorial problem of selecting three variables from among five, giving 5C3 = 10 combinations. The ten combinations can be listed, the objective function evaluated for each, and the optimal solution chosen. All of them will be basic solutions, but only one or more of them will yield the optimum objective function. However, this exhaustive enumeration process is time consuming.
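The exhaustive enumeration just described can be sketched directly: take every 3-of-5 subset of the variables as a candidate basis, solve the resulting 3x3 system, discard singular or infeasible cases, and keep the best Z. A minimal Python sketch of this idea, with illustrative helper names, using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations

# Constraint rows for  x + S1 = 40,  y + S2 = 30,  x + y + S3 = 60,
# with the variables ordered [x, y, S1, S2, S3].
A = [[1, 0, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [1, 1, 0, 0, 1]]
b = [40, 30, 60]
names = ["x", "y", "S1", "S2", "S3"]

def solve3(cols):
    """Solve the 3x3 system on the chosen basic columns; None if singular."""
    M = [[Fraction(A[r][c]) for c in cols] + [Fraction(b[r])] for r in range(3)]
    for i in range(3):                      # Gauss-Jordan elimination with pivoting
        p = next((r for r in range(i, 3) if M[r][i] != 0), None)
        if p is None:
            return None
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

best = None
for cols in combinations(range(5), 3):      # all 5C3 = 10 choices of basis
    sol = solve3(cols)
    if sol is None or any(v < 0 for v in sol):
        continue                            # singular or infeasible basic solution
    point = dict(zip((names[c] for c in cols), sol))
    z = 3 * point.get("x", 0) + 5 * point.get("y", 0)
    if best is None or z > best[0]:
        best = (z, point)

print(best)  # Z* = 240 at x = 30, y = 30 (with S1 = 10)
```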
In the simplex method, it is customary to select the slack variables, viz. S1, S2 and S3 (in the example cited), to form the initial basic variables, leaving x and y as non-basic variables. We can rewrite the above equations as
S1 = 40 - x
S2 = 30 - y
S3 = 60 - x - y
and the non-basic variables x and y are set equal to zero. Therefore, the initial basic feasible solution is
x=0
y=0
S1 = 40
S2 = 30
S3 = 60
and automatically the value of Z = 3x + 5y becomes 0.
Thus we have an initial solution for the linear programming problem with Z = 0. Since the objective function value is 0, the question now is whether any other solution will be better than the current one. Thus we are led to find the next basic feasible solution, which will give more profit. This requires the selection of a new basic variable to
be included and a variable to leave the basis. The new basic variable to be included is called 'the entering
variable' and the variable we remove from the basis is called the 'leaving variable'.
How are the leaving variable and entering variable selected?
The entering variable is chosen by examining the objective function to estimate the effect of each alternative. In our example the objective function is to maximize Z = 3x + 5y, and if the value of either of the variables x and y is changed from the present value of 0 to a positive value, there is an increase in the objective function (as the coefficients of x and y are positive on the right hand side). Therefore either x or y (or both) would increase Z by entering the basis. Now the question is which to select. Clearly the objective function will increase faster if we choose the variable with the larger positive coefficient. In the example the coefficient of x is 3 and the coefficient of y is 5. Since x increases Z at the rate of 3 per unit increase in x, while y increases Z at the rate of 5 per unit increase in y, y should be the entering variable, even though both are potential entering variables. If the objective function is written as
Z - 3x - 5y = 0
we instead choose the variable whose coefficient is the most negative on the left hand side of the equation.
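The entering-variable rule for the objective row Z - 3x - 5y = 0 amounts to one line of code (a tiny illustrative sketch; the name `obj_row` is hypothetical):

```python
# Entering-variable rule for a maximization problem: with the objective row
# written as Z - 3x - 5y = 0, pick the non-basic variable whose left-hand-side
# coefficient is the most negative.
obj_row = {"x": -3, "y": -5}

entering = min(obj_row, key=obj_row.get)
print(entering)  # 'y', since -5 < -3
```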
Having decided to choose y to enter the basis, we have to select the variable leaving the basis. As we have
only three equations but five unknowns, only three variables can be solved in terms of the other two.
To decide the candidate to be removed from the basis, we choose the variable whose value reaches zero
first as the value of the entering variable is increased.
The equations involving the slack variables may be rewritten as
S1 = 40 - x        (1)
S2 = 30 - y        (2)
S3 = 60 - x - y    (3)
Referring to the above equations, as the value of y is increased, S1 will never become zero, as the first equation does not involve y. This means that S1 will not leave the basis. The second equation indicates that S2 will be zero for y = 30, and from the third equation we infer that S3 will be zero for y = 60. So among the three candidates S1, S2 and S3, S2 will reach zero first as the value of y is increased from zero to 30. Therefore S2 is chosen as the leaving variable.
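This "first to reach zero" criterion is the minimum-ratio test. A minimal sketch, with the rows of the example hard-coded (the name `rows` is illustrative):

```python
# Minimum-ratio test for the leaving variable: for each basic-variable row,
# divide the right-hand side by the (positive) coefficient of the entering
# variable y; the row with the smallest ratio reaches zero first and leaves.
rows = {               # basic variable -> (right-hand side, coefficient of y)
    "S1": (40, 0),     # S1 = 40 - x        (y does not appear)
    "S2": (30, 1),     # S2 = 30 - y
    "S3": (60, 1),     # S3 = 60 - x - y
}

ratios = {var: rhs / coef for var, (rhs, coef) in rows.items() if coef > 0}
leaving = min(ratios, key=ratios.get)
print(ratios, leaving)  # {'S2': 30.0, 'S3': 60.0} -> S2 leaves
```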
Now we have to find the values of the remaining basic variables. This is done by the use of the Gauss-Jordan method of elimination, in which the objective function is expressed only in terms of the non-basic variables. This requires that each basic variable appears in exactly one equation and that this equation contains no other basic variable. Consider the original set of equations, where the basic variables are indicated in bold:
Z - 3x - 5y           =  0    (0)
x         + S1        = 40    (1)
y              + S2   = 30    (2)
x + y             + S3 = 60   (3)
The solution set is Z = 0, S1 = 40, S2 = 30, S3 = 60, x = y = 0.
The variable y enters the basis and S2 leaves the basis, so the variables in the new basis are y, S1 and S3. The new basic variable y has replaced S2 as the basic variable in equation (2). The Gauss elimination process requires that y must now be eliminated from the other equations in which it appears. This is achieved by adding a suitable multiple of equation (2) to the other equations. Thus the new equation (0) is the old equation (0) plus five times equation (2), and the new equation (3) is obtained by multiplying equation (2) by -1 and adding it to the old equation (3). The resulting set is entirely equivalent algebraically to the first set. The second set of equations, obtained after manipulating the original set, is presented below with the basic variables indicated in bold.
Z - 3x       + 5S2       = 150    (0)
x       + S1             =  40    (1)
y            + S2        =  30    (2)
x            - S2 + S3   =  30    (3)
The solution set is:
Z = 150, S1 = 40, y = 30 and S3 = 30, x = S2 = 0
Now we ask whether the above second basic feasible solution is optimal, or whether a still better solution is available. Can we move further up the ladder of maximization? The answer is given by examining the objective function row in the new set of equations; we are no longer using our original objective function. Examination of the objective function row reveals that x and S2 are the two non-basic variables, and between them x has a negative coefficient, so choosing x as the candidate for entering the basis will further increase the objective function value. S2 is not a candidate to enter the basis, as it has a positive coefficient. Hence x is the next candidate to enter the basis. As x is selected to enter the basis, one of the variables in the basis, viz. S1, y, S3, has to leave the basis. We apply the criterion for selecting the leaving variable. As we increase the value of x in equations (1) to (3), we observe that S1 will be zero for x = 40 and that S3 will be zero for x = 30. Since S3 reaches zero earlier as x is increased, S3 is the next leaving variable from the basis. So we need to apply the elimination procedure once again to get a new solution; that is, another iteration is required.
Proceeding once again with the elimination process, we get the set of equations presented below:
Z           + 2S2 + 3S3   = 240    (0)
S1 +  S2 -  S3            =  10    (1)
y  +  S2                  =  30    (2)
x  -  S2 +  S3            =  30    (3)
The solution set is: Z = 240, S1 = 10, y = 30, x = 30, S2 = S3 = 0. Since the objective row now contains no negative coefficients, no entering variable can increase Z further, and this solution is optimal.
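The whole iterative process above can be condensed into a tiny tableau simplex for this example. This is an illustrative sketch, not the course's own program; exact rational arithmetic is used so the tableau entries stay clean:

```python
from fractions import Fraction

# Minimal tableau simplex for:  maximize Z = 3x + 5y
# s.t. x <= 40, y <= 30, x + y <= 60, x, y >= 0.
# Row 0 holds Z - 3x - 5y = 0; rows 1..3 hold the constraints with slacks.
# Columns: x, y, S1, S2, S3, RHS.
F = Fraction
T = [[F(-3), F(-5), F(0), F(0), F(0), F(0)],    # objective row
     [F(1),  F(0),  F(1), F(0), F(0), F(40)],   # x + S1 = 40
     [F(0),  F(1),  F(0), F(1), F(0), F(30)],   # y + S2 = 30
     [F(1),  F(1),  F(0), F(0), F(1), F(60)]]   # x + y + S3 = 60

while min(T[0][:-1]) < 0:
    col = T[0].index(min(T[0][:-1]))                  # entering: most negative
    ratios = [(T[r][-1] / T[r][col], r)
              for r in range(1, 4) if T[r][col] > 0]  # minimum-ratio test
    _, row = min(ratios)                              # leaving row
    piv = T[row][col]
    T[row] = [v / piv for v in T[row]]                # scale pivot row to 1
    for r in range(4):
        if r != row and T[r][col] != 0:               # eliminate pivot column
            f = T[r][col]
            T[r] = [a - f * b for a, b in zip(T[r], T[row])]

print(T[0])  # final objective row: Z + 2S2 + 3S3 = 240
```

Running this reproduces the two iterations worked out above: y enters and S2 leaves, then x enters and S3 leaves, ending with the optimal value Z = 240.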