In this lesson, I'm going to give a brief introduction to linear programming.

Linear programming is one of

the most important optimization techniques and there's a lot of theory behind it.

There are many books written about it but

for us in this course on approximation algorithms,

we're just going to treat it as a black box.

So, that means we just need to know what a linear program is and that it can be solved efficiently,

but we don't need to know the algorithms that actually solve a linear program.

So, what is a linear program?

Well, a linear program is a mathematical optimization problem,

where we're given a number of variables,

in this example here we just have two variables, X_1 and X_2,

though usually you have many more, and we want

to optimize some function over these variables.

Okay, in this example,

we want to minimize a function of these variables, and we can view this function as a cost function.

Often the term objective function is also used,

and sometimes, instead of minimizing the objective function, we want to maximize it.

Okay? Besides optimizing this function,

we also have to make sure that the values we choose for the variables satisfy certain constraints.

As you can see here in this example,

we have four constraints.

For a linear program,

what is crucial in this formulation is that

both the constraints and the function that we want to optimize are linear.

No quadratic functions or anything like that.

Okay? So, in a linear program,

the objective function and the constraints are linear.
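In general, then, a linear program over variables X_1 through X_n can be written in the following form. This generic template is not spelled out in the lecture, but it matches the description above: a linear objective and linear inequality constraints.

```latex
\begin{aligned}
\text{minimize}\quad   & c_1 x_1 + c_2 x_2 + \dots + c_n x_n \\
\text{subject to}\quad & a_{i1} x_1 + a_{i2} x_2 + \dots + a_{in} x_n \le b_i,
                         \qquad i = 1, \dots, m
\end{aligned}
```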

So, here are some examples of things that are not allowed.

This, for instance, would not be a valid cost function,

because as you can see it has a term that multiplies two of our variables,

so it's no longer linear in the variables.

Here's another example where you again see a term which is not linear.

This linearity requirement holds not only for the cost function but also for the constraints.

So again, something like this is not allowed, and neither is something like this,

which is a combination of two linear constraints with an "or" between them.

What you can have is a number of constraints that should all be satisfied,

but you're not allowed to require that this constraint or that constraint holds.
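To make this concrete, here are representative examples of the disallowed forms just described; the exact formulas on the slide are not in the transcript, so these are stand-ins. A product of variables, a squared variable, and a disjunction of two linear constraints are all forbidden:

```latex
\text{not allowed:}\qquad
x_1 x_2 \le 3,
\qquad
x_1^2 + x_2 \le 4,
\qquad
(x_1 \le 1) \;\lor\; (x_2 \le 1)
```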

Okay. So, that's linear programming, and a good way to think about it is to

visualize it and look at its geometric interpretation.

Again, let me do this for two variables because then I can draw nice pictures.

So, let's look at the following two-dimensional linear program.

So, we want to find values for two variables: X_1 and X_2.

That means we can represent each solution as a point in two-dimensional space.

We have to find an X_1 value and an X_2 value,

and together they define a point in my two-dimensional solution space.

Now, let's look at the constraints that we have.

For instance, if you look at the bottom constraint,

X_2 is at least 0.

Well, that simply means that

all the valid solutions must lie above the line X_2 equals 0.

So, they must lie in this half-plane.

Actually, something similar is true for all of the constraints.

So, if you look at the second constraint,

which is here, then I can draw the line 2X_1 minus X_2 equals 5,

and the constraint says that 2X_1 minus X_2 should be at most 5.

So, that means that our points should lie on a certain side of this line.

Okay? We can do that for all the constraints so

every constraint will give us a half-plane.

So, here you see the third constraint,

the fourth constraint, and so on.

Okay? So, each constraint gives us a half-plane.

If we had more variables, say three, then a solution would be a three-dimensional point

and each constraint would give us a half-space instead of a half-plane,

so in general we get some kind of polytope in a possibly high-dimensional space.

A solution that satisfies all the constraints must lie in the intersection of all these regions.

So, in this case, what you see is that the intersection is here, and this intersection is called

the feasible region, because a point in this solution space is a valid solution,

a feasible solution, if and only if it lies in this feasible region.

Okay? So, this represents you could say all the constraints.
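This feasibility test can be sketched directly in code: a point is feasible exactly when it satisfies every constraint, i.e. lies in every half-plane. Only the constraints X_2 ≥ 0 and 2X_1 − X_2 ≤ 5 are stated explicitly in the lecture; the other two half-planes below are hypothetical stand-ins for the remaining constraints.

```python
# Feasibility check for a small two-variable linear program.
# The first two constraints come from the lecture's example;
# the last two are hypothetical stand-ins.
constraints = [
    lambda x1, x2: x2 >= 0,           # X_2 >= 0
    lambda x1, x2: 2 * x1 - x2 <= 5,  # 2*X_1 - X_2 <= 5
    lambda x1, x2: x1 >= 0,           # hypothetical: X_1 >= 0
    lambda x1, x2: x1 + x2 <= 4,      # hypothetical: X_1 + X_2 <= 4
]

def is_feasible(x1, x2):
    """A point is feasible iff it lies in the intersection of all half-planes."""
    return all(c(x1, x2) for c in constraints)

print(is_feasible(1, 1))   # inside every half-plane -> True
print(is_feasible(1, -1))  # violates X_2 >= 0 -> False
```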

Now let's look at the objective function, or, since we want to minimize,

let's say the cost function.

Well, since the cost function is linear,

minimizing it means that we want to go as far as possible in a certain direction.

As you can see here, what we wanted to minimize was minus X_1 plus 4X_2;

if we switch the signs, because we want to minimize,

we get the vector (1, minus 4), and we want to go as far as possible in that direction.

Okay? But of course our solution should lie inside the feasible region.

So, in this case this would be the optimal solution.

This is the solution to our linear program.
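The whole two-variable example can be sketched end to end. Since the course treats linear programming as a black box, the brute-force vertex enumeration below is only an illustration of the geometry, not a real LP solver: a bounded, feasible 2-D linear program attains its optimum at a vertex of the feasible region, i.e. at an intersection of two constraint boundaries. The objective minus X_1 plus 4X_2 and the constraints X_2 ≥ 0 and 2X_1 − X_2 ≤ 5 come from the lecture; the other two constraint rows are hypothetical, so the optimal point printed here need not match the one on the slide.

```python
from itertools import combinations

# Minimize c . x subject to A x <= b.
c = (-1.0, 4.0)          # lecture's objective: -X_1 + 4*X_2
A = [(0.0, -1.0),        # X_2 >= 0, rewritten as -X_2 <= 0
     (2.0, -1.0),        # 2*X_1 - X_2 <= 5
     (-1.0, 0.0),        # hypothetical: X_1 >= 0
     (1.0, 1.0)]         # hypothetical: X_1 + X_2 <= 4
b = [0.0, 5.0, 0.0, 4.0]

def intersect(i, j):
    """Intersection point of the boundary lines of constraints i and j."""
    (a1, a2), (a3, a4) = A[i], A[j]
    det = a1 * a4 - a2 * a3
    if abs(det) < 1e-12:               # parallel boundaries: no vertex
        return None
    x1 = (b[i] * a4 - a2 * b[j]) / det  # Cramer's rule
    x2 = (a1 * b[j] - b[i] * a3) / det
    return (x1, x2)

def feasible(p):
    """Check the point against every half-plane, with a small tolerance."""
    return all(a1 * p[0] + a2 * p[1] <= bi + 1e-9
               for (a1, a2), bi in zip(A, b))

# Candidate vertices: feasible intersections of two constraint boundaries.
vertices = [p for i, j in combinations(range(len(A)), 2)
            if (p := intersect(i, j)) is not None and feasible(p)]

# Pick the vertex with the smallest objective value.
best = min(vertices, key=lambda p: c[0] * p[0] + c[1] * p[1])
print(best)  # -> (2.5, 0.0) for these four constraints
```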