Simplex algorithm – Initialization phase

As mentioned in the previous post, I'm going to enhance my Simplex algorithm with an initialization phase. I also couldn't resist adding a new category, "Simplex", because I suspect this will be a long-term topic.

In this step I'm going to introduce the initialization phase of the Simplex algorithm using the auxiliary problem, which lets us obtain a feasible dictionary from an infeasible primal problem.

Why do we need the initialization phase? Or better asked: how do we know we need the initialization phase?

Well, the answer is quite simple: our starting dictionary is infeasible. How do we know it is infeasible? The b vector contains at least one entry that is less than zero.

x_{3} = -2 + 2x_{1} - x_{2}
x_{4} = 4 + 0x_{1} - x_{2}
x_{5} = -2 - x_{1} + 2x_{2}
\underline{x_{6} = 4 - x_{1} + 0x_{2}}
z = 0 + x_{1} + 2x_{2}
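To make the check concrete, here is a minimal sketch in Python (the function name and list layout are my own, not taken from the repo): a dictionary is feasible exactly when no entry of the b vector is negative.

```python
from fractions import Fraction

def is_feasible(b):
    """A dictionary is feasible iff every constant term (the b vector) is non-negative."""
    return all(value >= 0 for value in b)

# constants of the basic variables x3, x4, x5, x6 from the example dictionary
b = [Fraction(-2), Fraction(4), Fraction(-2), Fraction(4)]
print(is_feasible(b))  # False: the starting dictionary is infeasible
```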

And before someone gets me wrong: I'm always talking about a dictionary with the slack variables already added, so the dictionary is in standard form, where we have equations between the basic variables and the A-matrix and we try to maximize the objective value. Converting to standard form will come later; for now I'm focusing on the initialization phase.

And why do we need the initialization when our dictionary is infeasible? Because with this phase we can decide whether the problem is feasible at all, and if it is, we can produce an initial feasible dictionary and continue the algorithm where we left off in the previous post.

The auxiliary problem

So the question is: how can we get rid of the negative values in the b vector and obtain a feasible dictionary, provided the problem is feasible? We have to set up the so-called auxiliary problem: we define a new artificial variable x_0, add it to each line of the A-matrix, and set up a new objective function: max -x_{0} (maximizing the value of minus x_{0}).

So what does it mean to change the objective? It is quite simple: instead of solving (in our example) z = 0 + x_{1} + 2x_{2}, we solve the same dictionary with w = 0 - x_{0}.

And because we introduced x_{0} as a non-basic variable, we have to include it in the dictionary's A-matrix: add x_{0} to every line. So our example for the auxiliary problem looks as follows:

x_{3} = -2 + 2x_{1} - x_{2} + x_{0}
x_{4} = 4 + 0x_{1} - x_{2} + x_{0}
x_{5} = -2 - x_{1} + 2x_{2} + x_{0}
\underline{x_{6} = 4 - x_{1} + 0x_{2} + x_{0}}
w = 0 - x_{0}
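The construction can be sketched like this (a toy representation I made up for illustration: each row of A holds the coefficients of the non-basic variables, and z holds the constant followed by the coefficients):

```python
def make_auxiliary(A, z):
    """Build the auxiliary problem: append x0 with coefficient +1 to every
    row of the A-matrix and replace the objective with w = 0 - x0."""
    aux_A = [row + [1] for row in A]
    aux_w = [0] * len(z) + [-1]  # constant 0, zeros for the old variables, -1 for x0
    return aux_A, aux_w

# the example: rows for x3..x6 over (x1, x2), objective z = 0 + x1 + 2*x2
A = [[2, -1], [0, -1], [-1, 2], [-1, 0]]
z = [0, 1, 2]
aux_A, aux_w = make_auxiliary(A, z)
print(aux_A)  # every row gains a trailing +1 for x0
print(aux_w)  # [0, 0, 0, -1], i.e. w = 0 + 0*x1 + 0*x2 - x0
```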

Solving the auxiliary problem

I guess you are questioning this setup (if you have never dealt with the Simplex method before), because the dictionary we just generated already looks final. Well yes, this is true if we recall the simple definition of a final dictionary: all non-basic variables have a non-positive coefficient (negative or zero) in the objective row. So how is it possible to make progress on this dictionary? We have to perform a so-called "magic step".

The magic step

OK, it is not really magic, we only do a step we wouldn't do in a normal case: we choose (or force, if that sounds better) x_{0} as the entering variable, and that's it. Once we choose x_{0}, we look for a leaving variable, and this should be the basic variable with the lowest value, i.e. the one whose constant term in the b vector is the most negative.

I am not a mathemagician, so I will not prove this properly. However, if you think about it: if we choose x_{0} as the entering variable, take the basic variable with the lowest value as the leaving one, and do the pivoting as before, we eliminate all of the negative values in our b vector in a single step.
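The selection of the forced leaving variable is a one-liner; a sketch with names of my own choosing:

```python
def magic_leaving_row(b):
    """For the forced entering variable x0, the row of the basic variable
    with the most negative constant term leaves the basis."""
    return min(range(len(b)), key=lambda i: b[i])

b = [-2, 4, -2, 4]  # constants of x3, x4, x5, x6
print(magic_leaving_row(b))  # 0 -> x3 leaves (ties broken by position)
```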

After the magic step

So, as mentioned before, we do the pivoting exactly as we have done it before. The only magic is choosing x_{0} as the entering variable and then looking for a good leaving one.

OK, there is one more rule of thumb for the pivoting steps: every time we can choose x_{0} as the leaving variable, we have to. This rule brings another benefit: whenever x_{0} leaves, the dictionary is final. That's a great thing, isn't it? If we pivot correctly, we get a final dictionary just as in the normal case. However, it can happen that x_{0} stays a basic variable. This can be a problem, but not in every case.

We have to remember our auxiliary objective: we maximize -x_{0}. So if the optimal value of the auxiliary problem is 0, we are happy, because we can eliminate x_{0} from our dictionary without any problems. If the objective value is less than 0, the initial problem is infeasible.
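Both rules are easy to encode; a minimal sketch (the function names are mine, not the repo's):

```python
def choose_leaving(tied_rows, basic, x0=0):
    """Among the rows that tie in the ratio test, pick the one whose basic
    variable is x0 whenever it is eligible; otherwise take the first."""
    for row in tied_rows:
        if basic[row] == x0:
            return row
    return tied_rows[0]

def phase_one_verdict(w_optimum):
    """Interpret the optimal value of the auxiliary objective w = -x0."""
    return "feasible" if w_optimum == 0 else "infeasible"

print(choose_leaving([1, 3], basic=[2, 0, 4, 6]))  # 1: x0 sits in row 1, so it leaves
print(phase_one_verdict(0))    # feasible
print(phase_one_verdict(-2))   # infeasible
```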

What about positive optimal values for the initialization phase? Again, I am not a mathemagician, but none of the papers I've read mentioned a positive result for maximizing -x_{0}. And it makes sense: since x_{0} \geq 0, the objective -x_{0} can never exceed zero, so a positive optimal value is impossible. I could still set up some problems and check for myself, and I'll come back to this question later, in another post.

For the example above after doing the pivoting steps we get the following final dictionary as result:

x_{1} = 2 + \frac{2}{3}x_{3} + \frac{1}{3}x_{5} - x_{0}
x_{2} = 2 + \frac{1}{3}x_{3} + \frac{2}{3}x_{5} - x_{0}
x_{4} = 2 - \frac{1}{3}x_{3} - \frac{2}{3}x_{5} + 2x_{0}
\underline{x_{6} = 2 - \frac{2}{3}x_{3} - \frac{1}{3}x_{5} + 2x_{0}}
w = 0 + 0x_{3} + 0x_{5} - x_{0}

Get an initial feasible dictionary

So we optimized -x_{0} and got zero as the result. This tells us that the initial problem is feasible. So what? We have to reconstruct our original problem from the auxiliary solution. And it is easier than it seems.

What do we have and know? We have an optimal value of zero for -x_{0}, and we know that we constructed the auxiliary problem from the original one by changing the objective function to max -x_{0} and adding x_{0} to every line of the A-matrix. From this we can simply deduce that we only have to remove x_{0} from the A-matrix, replace the objective function with the original one, and recalculate the original objective function with the current values of the original variables. And why can we do this so simply?

Because the value of x_{0} is zero. Adding or removing zero is a classic mathemagic trick for solving problems (sometimes it appears as adding and subtracting the same term at once, which is zero in the end).
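Restoring the objective is pure substitution: for every original variable that ended up basic, replace it by its row in the final dictionary. A sketch (the representation and names are my own): orig_z is [c0, c1, ...], basic and nonbasic list variable indices, and row i reads x_basic[i] = b[i] + sum_j A[i][j] * x_nonbasic[j], with x0 already removed.

```python
from fractions import Fraction

def restore_objective(orig_z, basic, nonbasic, A, b):
    """Re-express z = c0 + c1*x1 + ... in terms of the final dictionary's
    non-basic variables by substituting the rows of the basic variables."""
    c0 = Fraction(orig_z[0])
    coeffs = [Fraction(0)] * len(nonbasic)
    for var, c in enumerate(orig_z[1:], start=1):
        if c == 0:
            continue
        if var in basic:                  # substitute the variable's row
            i = basic.index(var)
            c0 += c * b[i]
            for j in range(len(nonbasic)):
                coeffs[j] += c * A[i][j]
        else:                             # the variable stayed non-basic
            coeffs[nonbasic.index(var)] += Fraction(c)
    return [c0] + coeffs

# the example's final dictionary: basic x1, x2, x4, x6 over non-basic x3, x5
F = Fraction
basic, nonbasic = [1, 2, 4, 6], [3, 5]
A = [[F(2, 3), F(1, 3)], [F(1, 3), F(2, 3)],
     [F(-1, 3), F(-2, 3)], [F(-2, 3), F(-1, 3)]]
b = [F(2), F(2), F(2), F(2)]
print(restore_objective([0, 1, 2], basic, nonbasic, A, b))
# [Fraction(6, 1), Fraction(4, 3), Fraction(5, 3)]  ->  z = 6 + 4/3*x3 + 5/3*x5
```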

After we reconstructed our dictionary we can do the same steps of pivoting which were described in the previous post.

And naturally the reconstructed dictionary looks as follows:

x_{1} = 2 + \frac{2}{3}x_{3} + \frac{1}{3}x_{5}
x_{2} = 2 + \frac{1}{3}x_{3} + \frac{2}{3}x_{5}
x_{4} = 2 - \frac{1}{3}x_{3} - \frac{2}{3}x_{5}
\underline{x_{6} = 2 - \frac{2}{3}x_{3} - \frac{1}{3}x_{5}}
z = 6 + \frac{4}{3}x_{3} + \frac{5}{3}x_{5}

The algorithm

The algorithm is really simple for the initialization phase.

  1. Make the magic step.
    1. Save the original objective function for later (the current “z” list).
    2. Change all values in the objective function (the “z” list) to zero and append -1 at the end of the list; this entry represents x_{0} .
    3. Append 0 at the end of the list of non-basic variables.
    4. Append 1 at the end of each row of the A-matrix.
    5. Pivot the dictionary with x_{0} as the entering variable and the basic variable with the lowest constant term as the leaving variable.
  2. Pivot the dictionary until it is final.
  3. If the objective value is not zero, report that the original problem is infeasible.
  4. Otherwise remove x_{0} from the matrix.
  5. Replace the objective function with the original one.
  6. Recalculate the objective function with the results of the initialization phase.
  7. Continue with solving a feasible LP.
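The whole list above fits in a compact sketch. This is my own toy implementation, not the repo's code: a dictionary is (basic, nonbasic, A, b) with rows x_basic[i] = b[i] + sum_j A[i][j] * x_nonbasic[j], exact arithmetic via Fraction, and x0 is variable index 0. No anti-cycling rule is included, so degenerate problems could in principle loop.

```python
from fractions import Fraction as F

def pivot(basic, nonbasic, A, b, z, r, c):
    """Let nonbasic[c] enter and basic[r] leave; rewrite all rows in place."""
    p = A[r][c]
    row = [-a / p for a in A[r]]
    row[c] = F(1) / p                  # coefficient of the old basic variable
    const = -b[r] / p
    basic[r], nonbasic[c] = nonbasic[c], basic[r]
    A[r], b[r] = row, const
    for i in range(len(A)):            # substitute into the other rows
        if i == r:
            continue
        coef = A[i][c]
        b[i] += coef * const
        for j in range(len(row)):
            A[i][j] = coef * row[j] if j == c else A[i][j] + coef * row[j]
    coef = z[1 + c]                    # ...and into the objective row
    z[0] += coef * const
    for j in range(len(row)):
        z[1 + j] = coef * row[j] if j == c else z[1 + j] + coef * row[j]

def phase_one(basic, nonbasic, A, b):
    """Initialization phase; returns a feasible dictionary or None."""
    nonbasic = nonbasic + [0]                      # x0 joins as variable 0
    A = [[F(a) for a in row] + [F(1)] for row in A]
    b = [F(v) for v in b]
    basic = list(basic)
    w = [F(0)] * len(nonbasic) + [F(-1)]           # w = 0 - x0
    # the magic step: x0 enters, the lowest basic variable leaves
    pivot(basic, nonbasic, A, b, w,
          min(range(len(b)), key=lambda i: b[i]), len(nonbasic) - 1)
    while True:                                    # ordinary pivoting
        enter = next((j for j, cj in enumerate(w[1:]) if cj > 0), None)
        if enter is None:
            break
        rows = [i for i in range(len(b)) if A[i][enter] < 0]
        best = min(-b[i] / A[i][enter] for i in rows)
        tied = [i for i in rows if -b[i] / A[i][enter] == best]
        leave = next((i for i in tied if basic[i] == 0), tied[0])  # prefer x0
        pivot(basic, nonbasic, A, b, w, leave, enter)
    if w[0] != 0:
        return None                                # original problem infeasible
    j0 = nonbasic.index(0)                         # drop x0 (its value is 0)
    nonbasic.pop(j0)
    for row in A:
        row.pop(j0)
    return basic, nonbasic, A, b

# the example from this post
basic, nonbasic, A, b = phase_one([3, 4, 5, 6], [1, 2],
                                  [[2, -1], [0, -1], [-1, 2], [-1, 0]],
                                  [-2, 4, -2, 4])
print(dict(zip(basic, b)))  # x1, x2, x4, x6 all equal 2 in the feasible point
```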

Future steps

The next step is to turn the solver into an ILP solver, where the variables have to be integers. There are several methods for getting an integer result from a linear program over real numbers: branch-and-bound and the Gomory-Chvatal cutting-plane method. In the first version I'll take the naïve approach and run the initialization phase again on the new dictionary. To be honest, the version with Gomory-Chvatal cuts is already available on GitHub; however, it still has some problems.

After that I'll write some words about duality and convert the naïve version of the ILP algorithm to solve the program using dual dictionaries.

And it is about time to show you how to read out the solution of these problems, because currently we only know the optimal objective value but not the optimal values of the variables.

And as always, the code is available at my GitHub repo.

