The main problem of linear programming. General view of the linear programming model

In practice, the constraints in a linear programming problem are often given not by equations but by inequalities.

Let us show how one can pass from a problem with inequality constraints to the main problem of linear programming.

Suppose we have a linear programming problem in which the constraints imposed on the variables have the form of linear inequalities. In some of them the inequality sign may differ from the others (inequalities of the second type are reduced to the first simply by changing the sign of both sides). We therefore write all the inequality constraints in the standard form:

It is required to find a set of non-negative values of the variables that satisfies inequalities (4.1) and, in addition, minimizes the linear function:

From the problem posed in this way it is easy to move on to the main problem of linear programming. Indeed, let us introduce the following notation:

where the newly introduced quantities are variables that we will call "additional". According to conditions (4.1), these additional variables must also be non-negative.

Thus, we arrive at a linear programming problem in the following formulation: find non-negative values of the variables that satisfy the system of equations (4.3) and at the same time minimize the linear function of these variables:

As you can see, we have before us the main problem of linear programming in its pure form. Equations (4.3) are already resolved with respect to the basic variables, which are expressed through the free variables. The total number of variables is now larger: besides the "initial" variables it includes the "additional" ones. The function L is expressed only through the "initial" variables (the coefficients of the "additional" variables in it are zero).

Thus, we have reduced the linear programming problem with inequality constraints to the main problem of linear programming, but with a larger number of variables than the original problem had.
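A sketch of this transformation in generic notation (the symbols a_ij, b_i, c_j and x_{n+i} are ours, chosen to match the later sections, since the numbered formulas use other data): an inequality constraint

a_i1 x_1 + a_i2 x_2 + … + a_in x_n ≤ b_i

is turned into an equation by adding a non-negative additional variable x_{n+i}:

a_i1 x_1 + a_i2 x_2 + … + a_in x_n + x_{n+i} = b_i,   x_{n+i} ≥ 0,   i = 1, 2, …, m,

while the function to be minimized, L = c_1 x_1 + … + c_n x_n, is left unchanged (the additional variables enter it with zero coefficients).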

Example 1. Consider a linear programming problem with inequality constraints: find non-negative values of the variables that satisfy the conditions

and minimize the linear function

It is required to reduce this problem to the form of the main problem of linear programming.

Solution. We reduce inequalities (4.4) to standard form:

We introduce additional variables:

The problem reduces to finding non-negative values of the variables

satisfying equations (4.6) and minimizing the linear function (4.5).

We have shown how to pass from a linear programming problem with inequality constraints to a problem with equality constraints. The reverse transition is also always possible: from the main problem of linear programming to a problem with inequality constraints. If in the first case we increased the number of variables, in the second case we reduce it, eliminating the basic variables and leaving only the free ones.

Example 2. Consider a linear programming problem with equality constraints (the main problem of linear programming):

and the function to be minimized

It is required to write it as a linear programming problem with inequality constraints.

Solution. Since the number of variables exceeds the number of equations by two, we choose some two of the variables as free. Note that not every pair of variables can be chosen as free: some pairs are related by the first of the equations (4.7), so that the value of one of them is completely determined by the value of the other, whereas free variables must be independent.

For the same reason, another pair of variables cannot be selected as free (they are connected by the second equation). Let us choose a suitable pair of free variables and express all the rest through them:

Since all variables must be non-negative, conditions (4.9) can be replaced by the inequalities:

Let us now express the linear function L through the free variables by substituting expressions (4.9) into L in place of the basic variables. We obtain:

3.1. General linear programming problem

Linear programming is the most developed branch of mathematical programming; it is used to analyze and solve extremal problems with linear relationships and constraints.

Linear programming includes a number of heuristic (approximate) solution methods that, under given conditions, allow the best (optimal) solution to a production problem to be chosen from all possible ones. These include the graphical method, the simplex method, the method of potentials (the modified distribution method, MODI), Hitchcock's method, Krekó's method, Vogel's approximation method, and others.

Some of these methods are combined under a common name - the distribution, or transport, method.

The birthplace of linear programming is Russia. The first works on linear programming by the future academician L.V. Kantorovich were published in 1939. In 1975 he received the Nobel Prize in Economics for the development of linear programming methods. Since most of the works of Academician L.V. Kantorovich are devoted to solving transport problems, this Nobel Prize can be regarded as also recognizing the merits of Russian transport science.

In road transport, linear programming methods have been used since the 1960s to solve many important production problems, namely: reducing the distance of cargo transportation; drawing up an optimal transportation scheme; selecting the shortest routes; transporting different but interchangeable goods; shift and daily planning; planning the transportation of small-batch cargo; distributing buses among routes; and others.

The features of the linear programming model are as follows:

The objective function and constraints are expressed by linear dependencies (equalities or inequalities);

The number of dependencies is always less than the number of unknowns (uncertainty condition);

Non-negativity of the sought variables.

The general form of the linear programming model in abbreviated notation is as follows:

Find x_j ≥ 0 (j = 1, 2, …, n) subject to restrictions of the following type:

The sought values of the variables must minimize (or maximize) the objective function

The standard form of a linear programming model is a system of linear equations written in canonical form, i.e., as linear equalities in non-negative variables:

a_11 x_1 + a_12 x_2 + … + a_1n x_n = b_1;

a_21 x_1 + a_22 x_2 + … + a_2n x_n = b_2;        (3.1)

. . . . . . . . . . . . . . . . . . . . . . . .

a_m1 x_1 + a_m2 x_2 + … + a_mn x_n = b_m.

If the model is written in the form of inequalities in non-negative numbers, that is, it has the form

a_11 x_1 + a_12 x_2 + … + a_1n x_n ≤ b_1;

a_21 x_1 + a_22 x_2 + … + a_2n x_n ≤ b_2;        (3.2)

. . . . . . . . . . . . . . . . . . . . . . . .

a_m1 x_1 + a_m2 x_2 + … + a_mn x_n ≤ b_m,

then this notation is reduced to the canonical form (3.1) by introducing additional variables x_{n+i} ≥ 0 (i = 1, 2, …, m) into the left-hand sides of the inequalities (or by subtracting them when the inequality sign is directed the other way). The additional variables make up the basis.

The standard formulation of a linear programming problem is to find a solution of the system of linear equations in non-negative numbers that minimizes the objective function. The objective function in this case has the form

L = c_1 x_1 + c_2 x_2 + … + c_n x_n → min,        (3.3)

where c_1, c_2, …, c_n are the coefficients of the objective function L for the variables x_j.

Additional variables are included in the objective function with zero coefficients.

In the case of maximizing the objective function L, the signs of the objective-function coefficients should be reversed, and we again arrive at a minimization problem; i.e., one problem is reduced to the other by replacing L with −L, or max L = −min(−L).

The basic solution of a system of linear equations (3.1) is a solution in which the non-basic variables are given zero values.

An admissible solution is a basic solution in which the variables included in the basis are non-negative.

An optimal solution is an admissible solution that minimizes (or maximizes) the objective function (3.3).

For every linear programming problem there is a corresponding problem called the dual linear programming problem. The original problem with respect to the dual one is called the direct (primal) problem. The direct and dual problems form a pair, called a dual pair in linear programming. The pair is asymmetric when the direct problem is written in canonical form, and symmetric when the conditions of both problems are written as inequalities.

The rules for constructing a mathematical model of a dual problem are based on the rules of matrix calculus.

The concept of duality is widely used in the analysis of linear programming problems. The property of duality is considered in detail in each specific case.

3.2. Graphic-analytical method

The graphic-analytical method is one of the simplest linear programming methods. It clearly reveals the essence of linear programming and gives it a geometric interpretation. Its disadvantage is that it can only handle problems with 2 or 3 unknowns, i.e., it is applicable to a narrow range of problems. The method is based on the rules of analytical geometry.

A problem with two variables x_1 and x_2, which by the meaning of the problem must be non-negative, is solved in the Cartesian coordinate system. The equations x_1 = 0 and x_2 = 0 are the coordinate axes bounding the first quadrant.

Let's look at the solution method using a specific example.

Example 3.1. There are 300 tons of foam concrete products and 200 tons of steel products in stock. A motor transport company needs to deliver these products to a facility under construction using KamAZ-5320 and ZIL-4314 trucks. In one trip a KamAZ-5320 can deliver 6 tons of foam concrete and 2 tons of steel, and the profit from the trip is 4 thousand rubles. A ZIL-4314 delivers 2 tons of foam concrete and 4 tons of steel in one trip; the profit from the trip is 6 thousand rubles. It is necessary to organize the transportation so as to ensure the greatest profit for the company.

Let us build a mathematical model of the problem. Denote by x_1 the required number of KamAZ-5320 trips and by x_2 the required number of ZIL-4314 trips.

The total amount of foam concrete products transported, in tons, is 6x_1 + 2x_2, and of steel products 2x_1 + 4x_2. The transportation restrictions, based on the quantities available, are 6x_1 + 2x_2 ≤ 300 t for foam concrete and 2x_1 + 4x_2 ≤ 200 t for steel.

The total profit, in thousands of rubles, is expressed by the value 4x_1 + 6x_2, which needs to be maximized and which is the optimality criterion in the problem under consideration. The mathematical model of the problem thus looks as follows. It is necessary to maximize the objective function

L = 4x_1 + 6x_2 → max under the conditions: 6x_1 + 2x_2 ≤ 300; 2x_1 + 4x_2 ≤ 200; x_1 ≥ 0; x_2 ≥ 0.

Consider the equation 6x_1 + 2x_2 = 300. To construct the line described by this equation, we find two points lying on it. At x_1 = 0 the equation gives 2x_2 = 300, whence x_2 = 150. Consequently, the point A with coordinates (0, 150) lies on the desired line. At x_2 = 0 we have 6x_1 = 300, whence x_1 = 50, and the point D with coordinates (50, 0) also lies on the desired line. Draw the straight line AD through these two points (Fig. 3.1).

The linear inequality 6x_1 + 2x_2 ≤ 300 defines a half-plane located on one side of the constructed line 6x_1 + 2x_2 = 300. To find out on which side of this line the points of the desired half-plane lie, we substitute into the inequality 6x_1 + 2x_2 ≤ 300 the coordinates of any point not lying on the boundary line, for example the origin O(0, 0). The inequality holds for it: 6·0 + 2·0 = 0 < 300. This means that the origin lies in the region of admissible values, which is located to the left of the line AD and is shaded in Fig. 3.1.

We also construct the line 2x_1 + 4x_2 = 200 using two points. At x_1 = 0 we have 4x_2 = 200, whence x_2 = 50; the point E with coordinates (0, 50) belongs to the desired line. At x_2 = 0 we have 2x_1 = 200, whence x_1 = 100, and the point C with coordinates (100, 0) lies on the given line. Substituting the coordinates of the origin O(0, 0) into the inequality, we get 2·0 + 4·0 = 0 < 200. Hence the origin lies in the region of admissible values of the inequality 2x_1 + 4x_2 ≤ 200.

The constraint system of the problem requires that the plans (x_1; x_2) satisfy all four inequalities, i.e., the admissible plans, the points (x_1; x_2), must lie in all four half-planes simultaneously. Only the points located inside and on the boundary of the polygon OEKD meet this requirement; this is the polygon of feasible solutions.

The vertices of the polygon of feasible solutions are the points O, E, K, D, and the segments OE, EK, KD, OD are its edges. Any point of the polygon OEKD is a plan of the problem satisfying all of its conditions. The vertices of the polygon are formed by the intersection of two straight lines and correspond to the reference (basic) plans of the problem, among which is the best (optimal) plan. Thus, there are as many reference plans as there are vertices of the polygon of feasible solutions.

A visual geometric representation can also be given to the objective function L = 4x_1 + 6x_2. Let us fix some value of the objective function, for example L = 120. The equation 4x_1 + 6x_2 = 120 defines a line passing through the point B with coordinates (x_1 = 0; x_2 = 20) and the point L with coordinates (x_1 = 30; x_2 = 0). The segment BL lies inside the polygon OEKD. Consequently, for all plans (points) of this segment the value of the objective function is the same and equal to 120. Giving the objective function other values, we obtain parallel lines, which are called level lines of the objective function.

Moving the line BL parallel to itself in one direction, we obtain an increase of the objective function, and in the opposite direction its decrease. In the example under consideration, moving the line BL to the right corresponds to increasing the objective function, which we are maximizing. We continue as long as the line BL still has at least one point in common with the polygon of feasible solutions OEKD. From Fig. 3.1 it follows that the last point of the polygon crossed by a level line of the objective function is the point K. This means that the point K determines the optimal plan of the problem.

The direction perpendicular to the level lines along which the function grows is called the direction of greatest increase of the objective function and determines its maximum growth. This direction can be established without constructing level lines. To do this, segments equal to the coefficients of the objective function are laid off on the axes x_1 and x_2, and these values, used as coordinates, define the vector of greatest increase of the objective function. In mathematics it is called the gradient and is denoted by grad. The gradient of the function L = 4x_1 + 6x_2 is the vector n = (4; 6). For convenience of construction its coordinates may be scaled up, for example 10 times, i.e., n = (40; 60). To construct the gradient of the objective function L, we connect the point with coordinates (40; 60) to the origin. The level lines of the objective function are drawn perpendicular to the direction of the gradient.

So, in one way or another it has been established that the point K determines the optimal plan of the problem, whose variable values are the coordinates of this point. To find these coordinates, it is necessary to solve the system of equations of the lines forming this vertex:

6x_1 + 2x_2 = 300;

2x_1 + 4x_2 = 200.

Let us equalize the coefficients of x_1 by multiplying the second equation by 3 and subtracting the first equation from the result. We get 10x_2 = 300, whence x_2 = 30. Substituting x_2 = 30 into either equation, for example into the first one, we determine the value of x_1:

6x_1 + 2·30 = 300,

whence 6x_1 = 300 − 60 = 240 and, therefore, x_1 = 40.

Thus, to obtain the greatest profit, the motor transport company must make 40 trips with the KamAZ-5320 and 30 trips with the ZIL-4314. The maximum profit will be

L = 4x_1 + 6x_2 = 4·40 + 6·30 = 340 thousand rubles.
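The result can be checked numerically; below is a small sketch using scipy.optimize.linprog (the use of SciPy and the variable names are our own choice, not part of the original example). Since linprog minimizes, the profit coefficients are negated.

```python
from scipy.optimize import linprog

# Maximize L = 4*x1 + 6*x2  ->  minimize -4*x1 - 6*x2
c = [-4, -6]                      # negated profit per trip, thousand rubles
A_ub = [[6, 2],                   # foam concrete: 6*x1 + 2*x2 <= 300
        [2, 4]]                   # steel:         2*x1 + 4*x2 <= 200
b_ub = [300, 200]
bounds = [(0, None), (0, None)]   # x1 >= 0, x2 >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)      # expected: [40. 30.]
print(-res.fun)   # expected: 340.0 (thousand rubles)
```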

Based on the considered example and the geometric interpretation of the optimization problem with two variables, the following conclusions can be drawn:

1) in two-dimensional space, the region of feasible solutions is a polygon;

2) each side of the polygon corresponds to the value of one variable equal to zero;

3) each vertex of the polygon of feasible solutions corresponds to the values ​​of two variables equal to zero;

4) each value of the objective function corresponds to a straight line;

5) the optimal solution to the problem corresponds to the vertex of the polygon at which the objective function acquires the optimal value, and the optimal variables are the coordinates of this vertex.

In the general case, optimization problems have a similar geometric interpretation. The set of plans of a problem is a polyhedron whose vertices correspond to the reference plans. When solving the problem, one passes from one vertex of the polyhedron to another with a better value of the objective function until its optimal value is reached. Note that the efficiency of optimization methods lies precisely in the fact that the vertices are examined (iterated) only in the direction of greatest improvement of the objective function. Therefore, not all vertices, of which there may be a huge number, are considered, but only those approaching the extremum.

When determining the class of an optimization problem and choosing a method for its solution, it is necessary to know whether the set of feasible solutions is convex or non-convex and whether the objective function is linear or nonlinear.

By definition, a set is called convex if for any two of its points the entire segment connecting these points belongs to the set. Examples of convex sets are a line segment (Fig. 3.2a), a plane figure in the form of a circle, a cube, a parallelepiped, as well as polygons lying entirely on one side of each of their sides, etc.

Fig. 3.2b shows non-convex sets. In a non-convex set one can indicate two points A and B such that some points of the segment AB do not belong to the set under consideration.

3.3. Simplex method

The simplex method is a widely used method for solving linear programming problems. The method takes its name from the word "simplex", which denotes the simplest convex polytope, the number of vertices of which is always one more than the dimension of the space. The simplex method was developed in the USA by the mathematician George Dantzig in the late 1940s.

The simplex method involves obtaining a non-negative basic solution of a system of canonical linear equations of type (3.1), then minimizing (maximizing) the objective function (3.3) and in this way finding the optimal values of the sought variables x_1, x_2, …, x_n.

The idea of the simplex method is that during the calculation one passes sequentially from the first basic solution to the second, the third, and so on by means of so-called simplex transformations. The transformations are carried out in the form of simplex tables, which greatly simplifies and speeds up the calculations.

To obtain non-negative basic solutions of a system of linear equations, the elimination of unknowns must be carried out so that the free terms of the equations remain non-negative at every stage of the process. One should be guided by the following rule: any free variable that has at least one positive coefficient is taken as a new basic variable; the variable removed from the basis is the one corresponding to the smallest ratio of the free terms of the equations to the corresponding positive coefficients of the variable being introduced into the basis. Such transformations are called simplex transformations.

This is very important: to find a particular non-negative solution corresponding to the largest possible value of one free variable while the other free variables are zero, instead of determining the range of variation of that variable and substituting its largest possible value into the general solution, it suffices to take this variable as basic and subject the system to a simplex transformation, passing to a new basis, which greatly simplifies the calculations.

It is convenient to perform the calculations using simplex tables. The transition from one table to the next corresponds to one iteration, i.e., a transition from one basis to another, during which the value of the objective function decreases. After a certain number of iterations one arrives at the basis that yields the optimal (minimum or maximum) value of the objective function. Let us consider the simplex method in general form.

The general problem of linear programming is to minimize (maximize) an objective function whose variables are interconnected by a system of linear equations and are subject to the non-negativity condition.

Let it be necessary to minimize the linear form

L = c_1 x_1 + c_2 x_2 + … + c_n x_n

subject to the conditions (for clarity, the zero and unit coefficients in the equations are written out explicitly):

1·x_1 + 0·x_2 + … + 0·x_m + a_{1,m+1} x_{m+1} + … + a_{1n} x_n = b_1;

0·x_1 + 1·x_2 + … + 0·x_m + a_{2,m+1} x_{m+1} + … + a_{2n} x_n = b_2;

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

0·x_1 + 0·x_2 + … + 1·x_m + a_{m,m+1} x_{m+1} + … + a_{mn} x_n = b_m.

This system of equations already has a ready-made basis, since each constraint equation contains an unknown with a coefficient equal to one that does not appear in the other equations; i.e., from the coefficients of the variables x_1, x_2, …, x_m one can form an identity matrix.

Let's solve the equations for the basic variables:

x_1 = b_1 − (a_{1,m+1} x_{m+1} + … + a_{1n} x_n);

x_2 = b_2 − (a_{2,m+1} x_{m+1} + … + a_{2n} x_n);

. . . . . . . . . . . . . . . . . . . . . . . .

x_m = b_m − (a_{m,m+1} x_{m+1} + … + a_{mn} x_n),

and we express the objective function in terms of the free variables by substituting for the basic variables their expressions through the free variables:

L = c_1 b_1 + c_2 b_2 + … + c_m b_m + [c_{m+1} − (c_1 a_{1,m+1} + c_2 a_{2,m+1} + … + c_m a_{m,m+1})] x_{m+1} + … + [c_n − (c_1 a_{1n} + c_2 a_{2n} + … + c_m a_{mn})] x_n.

The variables x_1, x_2, …, x_m, with the help of which the first basic plan was found, are basic, and the rest, x_{m+1}, x_{m+2}, …, x_n, are free. There should always be as many basic variables as there are equations in the system. By the non-negativity condition, the smallest value of a free variable is zero. The resulting basic solution of the system of equations is its initial admissible solution: x_1 = b_1, x_2 = b_2, …, x_m = b_m, x_{m+1} = 0, …, x_n = 0.

This solution corresponds to the value of the objective function

L = c_1 b_1 + c_2 b_2 + … + c_m b_m.

The initial solution is checked for optimality. If it is not optimal, then by introducing free variables into the basis the next feasible solutions are found with smaller values of the objective function. To do this, one determines the free variable to be introduced into the basis and the variable to be removed from the basis. Then one passes from the previous system to the next, equivalent, system. This is done using simplex tables. The solution of the problem continues until the optimal value of the objective function is obtained.

Simplex tables are compiled as follows (see Table 3.1). All the variables x_1, x_2, …, x_n are placed at the top of the table together with the coefficients c_j with which the corresponding variables enter the objective function. The first column, c_i, contains the objective-function coefficients of the variables included in the basis. Then follow the column of basic variables and the free terms of the equations. The elements of the remaining columns of the table are the coefficients with which the variables enter the system of equations. Thus, each row of the table corresponds to an equation of the system solved with respect to a basic variable. The table also shows the variant of the plan that corresponds to the objective function for the given basis.

The bottom row of the table is called the index row. Each of its elements (estimates) Δ_j is determined as

Δ_j = z_j − c_j,

where c_j are the coefficients of the corresponding variables in the objective function, and z_j is the sum of the products of the objective-function coefficients of the basic variables and the corresponding elements of the j-th column of the table.

Table 3.1. Simplex table with the initial admissible basic solution
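To make the tableau procedure concrete, here is a compact sketch of the simplex method for a problem already written in canonical form with a ready-made basis, as in the system above. The function name and the dense-tableau layout are our own; this is a teaching sketch (with no anti-cycling safeguards), not a production solver.

```python
import numpy as np

def simplex_min(c, A, b, basis):
    """Minimize c @ x subject to A @ x = b, x >= 0.

    `basis` lists the indices of the initial basic variables, whose columns of A
    are assumed to form an identity matrix with b >= 0 (a ready-made basis).
    Returns (x, objective value)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    c = np.asarray(c, dtype=float)
    m, n = A.shape
    basis = list(basis)

    while True:
        # Index row: estimates z_j - c_j; a positive estimate means the plan can be improved
        cb = c[basis]
        z_minus_c = cb @ A - c
        j = int(np.argmax(z_minus_c))
        if z_minus_c[j] <= 1e-9:              # no positive estimate: current plan is optimal
            break
        col = A[:, j]
        if np.all(col <= 1e-9):
            raise ValueError("objective is unbounded below")
        # Ratio test: smallest b_i / a_ij over positive a_ij decides which variable leaves
        ratios = np.where(col > 1e-9, b / np.where(col > 1e-9, col, 1.0), np.inf)
        i = int(np.argmin(ratios))
        # One simplex transformation: pivot on element (i, j)
        pivot = A[i, j]
        A[i, :] /= pivot
        b[i] /= pivot
        for r in range(m):
            if r != i:
                factor = A[r, j]
                A[r, :] -= factor * A[i, :]
                b[r] -= factor * b[i]
        basis[i] = j

    x = np.zeros(n)
    x[basis] = b
    return x, float(c @ x)

# Example 3.1 in canonical form: add slack variables x3, x4 and minimize -L
# min -4x1 - 6x2  s.t.  6x1 + 2x2 + x3 = 300,  2x1 + 4x2 + x4 = 200
c = [-4, -6, 0, 0]
A = [[6, 2, 1, 0],
     [2, 4, 0, 1]]
b = [300, 200]
x, val = simplex_min(c, A, b, basis=[2, 3])
print(x[:2], -val)   # expected: [40. 30.] 340.0
```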

Mathematical models form the basis for solving economic problems.

A mathematical model of a problem is a set of mathematical relations that describe the essence of the problem.

Drawing up a mathematical model includes:
  • selection of the problem variables
  • construction of the system of constraints
  • choice of the objective function

The problem variables are the quantities x_1, x_2, …, x_n that completely characterize the economic process. They are usually written as a vector: X = (x_1, x_2, …, x_n).

The system of constraints of a problem is the set of equations and inequalities that describe the limited resources in the problem under consideration.

The objective function of a problem is a function of the problem variables that characterizes the quality of a solution and whose extremum is to be found.

In general, a linear programming problem can be written as follows:

This notation means the following: find the extremum of the objective function (1) and the corresponding variables X = (x_1, x_2, …, x_n), provided that these variables satisfy the system of constraints (2) and the non-negativity conditions (3).
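Written out in the usual notation (the symbols F, c_j, a_ij and b_i are assumed here, since the numbered formulas themselves use the problem's own data), this general form is:

F = c_1 x_1 + c_2 x_2 + … + c_n x_n → max (min),        (1)

a_i1 x_1 + a_i2 x_2 + … + a_in x_n {≤, =, ≥} b_i,  i = 1, 2, …, m,        (2)

x_j ≥ 0,  j = 1, 2, …, n.        (3)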

An admissible solution (plan) of a linear programming problem is any n-dimensional vector X = (x_1, x_2, …, x_n) that satisfies the system of constraints and the non-negativity conditions.

The set of admissible solutions (plans) of the problem forms the region of feasible solutions (ODR).

An optimal solution (plan) of a linear programming problem is an admissible solution (plan) at which the objective function reaches its extremum.

Example of compiling a mathematical model

The problem of using resources (raw materials)

Condition: To produce n types of products, m types of resources are used. Create a mathematical model.

Known:

  • b i (i = 1,2,3,...,m) — reserves of each i-th type of resource;
  • a ij (i = 1,2,3,...,m; j=1,2,3,...,n) - the costs of each i-th type of resource for the production of a unit of volume of the j-th type of product;
  • c j (j = 1,2,3,...,n) - profit from the sale of a unit of volume of the j-th type of product.

It is required to draw up a production plan that ensures maximum profit under given restrictions on resources (raw materials).

Solution:

Let us introduce the vector of variables X = (x_1, x_2, …, x_n), where x_j (j = 1, 2, …, n) is the production volume of the j-th type of product.

The consumption of the i-th resource for producing the volume x_j of the j-th product is a_ij x_j; therefore the restriction on the use of the i-th resource for the production of all types of products has the form

a_i1 x_1 + a_i2 x_2 + … + a_in x_n ≤ b_i  (i = 1, 2, …, m).

The profit from selling the j-th type of product is c_j x_j, so the objective function is

c_1 x_1 + c_2 x_2 + … + c_n x_n → max.

Answer. The mathematical model is: maximize c_1 x_1 + c_2 x_2 + … + c_n x_n subject to a_i1 x_1 + a_i2 x_2 + … + a_in x_n ≤ b_i (i = 1, 2, …, m) and x_j ≥ 0 (j = 1, 2, …, n).
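The following sketch shows how such a model can be assembled and solved numerically with scipy.optimize.linprog; the values of b_i, a_ij and c_j below are made up purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: m = 2 resources, n = 3 products
b = np.array([100, 80])            # b_i: available stock of each resource
a = np.array([[2, 1, 3],           # a_ij: amount of resource i used per unit of product j
              [1, 2, 1]])
c = np.array([5, 4, 6])            # c_j: profit per unit of product j

# Maximize c @ x  ->  minimize -c @ x, subject to a @ x <= b and x >= 0
res = linprog(-c, A_ub=a, b_ub=b, bounds=[(0, None)] * 3)
print("production plan x =", res.x)
print("maximum profit =", -res.fun)
```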

Canonical form of a linear programming problem

In the general case, a linear programming problem is written so that the constraints are both equations and inequalities, and the variables may be either non-negative or allowed to vary arbitrarily.

In the case where all constraints are equations and all variables satisfy the non-negativity condition, the linear programming problem is called canonical.

It can be represented in coordinate, vector and matrix notation.

The canonical linear programming problem in coordinate notation has the form:

F = c_1 x_1 + c_2 x_2 + … + c_n x_n → max;
a_i1 x_1 + a_i2 x_2 + … + a_in x_n = b_i  (i = 1, 2, …, m);
x_j ≥ 0  (j = 1, 2, …, n).

The canonical linear programming problem in matrix notation has the form F = C·X → max, A·X = A_0, X ≥ 0, where C is the row vector of objective-function coefficients and:

  • A is the matrix of coefficients of the system of equations;
  • X is the column vector of the problem variables;
  • A_0 is the column vector of the right-hand sides of the system of constraints.

Linear programming problems in the so-called symmetric form are also frequently used; in matrix notation they have the form F = C·X → max, A·X ≤ A_0, X ≥ 0 (or F = C·X → min, A·X ≥ A_0, X ≥ 0).

Reducing a general linear programming problem to canonical form

Most methods for solving linear programming problems assume that the system of constraints consists of equations together with the natural non-negativity conditions on the variables. However, when models of economic problems are compiled, the constraints are mostly formed as a system of inequalities, so one must be able to pass from a system of inequalities to a system of equations.

This can be done like this:

Take the linear inequality a_1 x_1 + a_2 x_2 + … + a_n x_n ≤ b and add to its left-hand side a quantity x_{n+1} such that the inequality turns into the equality a_1 x_1 + a_2 x_2 + … + a_n x_n + x_{n+1} = b. This quantity x_{n+1} is non-negative.

Let's look at everything using an example.

Example 26.1

Bring the linear programming problem to canonical form:

Solution:
Let's move on to the problem of finding the maximum of the objective function.
To do this, we change the signs of the coefficients of the objective function.
To transform the second and third inequalities of the system of constraints into equations, we introduce non-negative additional variables x_4 and x_5 (in the mathematical model this operation is marked with the letter D).
The variable x_4 is introduced into the left-hand side of the second inequality with a "+" sign, since the inequality has the form "≤".
The variable x_5 is introduced into the left-hand side of the third inequality with a "−" sign, since the inequality has the form "≥".
The variables x_4 and x_5 enter the objective function with coefficients equal to zero.
We write the problem in canonical form.
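A small sketch of this mechanical conversion, applied to illustrative data of the same shape (a minimization objective with one equation, one "≤" inequality and one "≥" inequality); all of the numbers and names below are assumptions, not the data of Example 26.1.

```python
import numpy as np

def to_canonical(c, A, b, senses):
    """Convert  min c@x, A@x (senses) b, x >= 0  to the canonical form
    max c'@x', A'@x' = b', x' >= 0  by flipping the sign of the objective and
    adding one slack (+1 for '<=') or surplus (-1 for '>=') variable per inequality."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    extra = []
    for i, s in enumerate(senses):
        if s == "<=":
            col = np.zeros(m); col[i] = 1.0    # slack variable enters with "+"
            extra.append(col)
        elif s == ">=":
            col = np.zeros(m); col[i] = -1.0   # surplus variable enters with "-"
            extra.append(col)
        elif s != "=":
            raise ValueError(f"unknown sense: {s}")
    A_canon = np.hstack([A] + [e.reshape(-1, 1) for e in extra]) if extra else A
    # max(-c) is equivalent to min(c); added variables get zero objective coefficients
    c_canon = np.concatenate([-np.asarray(c, dtype=float), np.zeros(len(extra))])
    return c_canon, A_canon, np.asarray(b, dtype=float)

# Illustrative data (not from the text): minimize x1 + 2x2 - x3
c = [1, 2, -1]
A = [[1, 1, 1],    # x1 + x2 + x3  =  6   (already an equation)
     [2, 1, 0],    # 2x1 + x2     <= 10   -> + x4
     [0, 1, 3]]    # x2 + 3x3     >=  4   -> - x5
senses = ["=", "<=", ">="]
b = [6, 10, 4]
c2, A2, b2 = to_canonical(c, A, b, senses)
print(A2)   # five columns: x1..x3 plus slack x4 and surplus x5
print(c2)   # objective coefficients for maximization, zeros for the added variables
```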


Special situations in the graphical solution of an LPP

Besides the case when the problem has a unique optimal solution, the following special situations are possible:

1. the problem has an infinite number of optimal solutions: the extremum of the function is reached on a segment (alternative optimum), see Figure 2;

2. the problem is unsolvable because the region of feasible solutions, and the objective function on it, is unbounded, see Figure 3;

3. the ODR consists of a single point A; then the optimal solution coincides with it;

4. the problem is unsolvable if the ODR is an empty set.


Figure 2 Figure 3

If a level line is parallel to a side of the region of feasible solutions, then the extremum is reached at every point of that side. The problem has infinitely many optimal solutions (an alternative optimum). The optimal solutions are found as convex combinations of the two optimal vertex plans X_1 and X_2:

X = t·X_1 + (1 − t)·X_2,

where t is a parameter. For any value of t from 0 to 1 we obtain a point of the segment, at each of which the function takes the same value. Hence the name: alternative optimum.

Example. Solve graphically a linear programming problem (alternative optimum):

Questions for self-control

1. Write down the linear programming problem in general form.

2. Write the linear programming problem in canonical and standard forms.

3. What transformations can be used to move from the general or standard form of a linear programming problem to the canonical one?

4. Define admissible and optimal solutions to a linear programming problem.

5. Which of two admissible solutions is the "better" one for the problem of minimizing the function, given their objective-function values?

6. Which of two admissible solutions is the "better" one for the problem of maximizing the function, given their objective-function values?

7. Write down the standard form of a mathematical model for a linear programming problem with two variables.

8. How do you construct the half-plane defined by a linear inequality in two variables?

9. What is called the solution to a system of linear inequalities in two variables? Construct on the plane a region of admissible solutions of such a system of linear inequalities that:

1) has a unique solution;

2) has an infinite number of solutions;

3) has no solution.

10. Write the gradient vector for a linear function and name the type of its level lines. How are the gradient and the level lines positioned relative to each other?

11. Formulate the algorithm of the graphical method for solving a standard LPP with two variables.

12. How do you find the coordinates of the optimal solution and the corresponding extreme values of the objective function?

13. Construct the region of feasible solutions, the gradient, and the level lines for linear programming problems in which:

1) one extremum is reached at a single point, and the other on a segment of the ODR;

2) one extremum is reached at a single point of the ODR, and the other is not attained.

14. Give a geometric illustration of the LPP if it:

1) has unique optimal solutions for both the maximum and the minimum;

2) has infinitely many optimal solutions for one of the extrema.

Lecture 2

Graphical method for finding the optimal solution

1. Forms of linear mathematical models and their transformation

2. Graphical method for solving a linear programming problem

3. Special situations in the graphical solution of the LPP

4. Graphical solution of economic linear programming problems

Forms of linear mathematical models and their transformation

The mathematical model of a linear programming problem (LPP) can be written in one of three forms.

In the general form of the mathematical model, one seeks the maximum or minimum of the objective function; the system of constraints contains both inequalities and equations; not all variables need to be non-negative.

In the canonical form of the mathematical model, one seeks the maximum of the objective function; the system of constraints consists only of equations; all variables are non-negative.

In the standard form of the mathematical model, one seeks the maximum or minimum of the function; all constraints are inequalities; all variables are non-negative.

A solution of the system of constraints that satisfies the non-negativity conditions on the variables is called an admissible solution of the LPP (a feasible plan).

The set of admissible solutions is called the region of admissible solutions of the LPP.

An admissible solution at which the objective function reaches its extreme value is called an optimal solution of the LPP.

The three forms of writing the LPP are equivalent in the sense that each of them can be reduced to any other by mathematical transformations.

The need to pass from one form of the mathematical model to another is dictated by the solution methods: for example, the simplex method, widely used in linear programming, is applied to a problem written in canonical form, while the graphical method is applied to the standard form of the mathematical model.

Transition to the canonical form of the LPP.

Example.

Let us write the problem in canonical form by introducing into the left-hand side of the first inequality of the constraint system an additional (balance) variable with a "+" sign, and into the left-hand side of the second inequality an additional variable with a "−" sign.

The economic meaning of various additional variables may not be the same: it depends on the economic meaning of the restrictions in which these variables are included.

Thus, in the problem of using raw materials, they show the remaining raw materials, and in the problem of choosing optimal technologies, they show the unused time of the enterprise using a certain technology; in a cutting problem – production of blanks of a given length in excess of the plan, etc.

The simplest models are the so-called linear deterministic models. They are specified as a linear form of the control variables (x):

W = a_0 + a_1 x_1 + … + a_k x_k

under linear constraints of the form

b_1j x_1 + b_2j x_2 + … + b_kj x_k ≥ b_j,  j = 1, …, q_1;

c_1j x_1 + c_2j x_2 + … + c_kj x_k = c_j,  j = 1, …, q_2;

d_1j x_1 + d_2j x_2 + … + d_kj x_k ≤ d_j,  j = 1, …, q_3.

The total number of constraints m = q_1 + q_2 + q_3 may exceed the number of variables (m > k). In addition, the condition that the variables be non-negative (x_i ≥ 0) is usually imposed.

The response surface for a linear model is a hyperplane. As an example, consider a linear model in two variables of the following form:

W = −2x_1 − 3x_2        (2.2)

under the following restrictions

2x_1 + 3x_2 ≤ 18;

x_1 − 4x_2 ≤ 4;        (2.3)

−2x_1 + x_2 ≤ 2;

x_1 ≥ 0;  x_2 ≥ 0.

The region of admissible values (domain of definition) OABCD for model (2.2) is formed by constraints (2.3) (Fig. 2.2). The response surface is the plane polygon OA'B'C'D' (Fig. 2.2b).
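As a sketch, model (2.2)-(2.3) can also be solved numerically with scipy.optimize.linprog (the choice of solver is ours). Since the objective W = −2x_1 − 3x_2 is proportional to the left-hand side of the first constraint, its minimum W = −18 is reached along the whole feasible part of the edge 2x_1 + 3x_2 = 18, and the solver returns one point of that edge.

```python
from scipy.optimize import linprog

# Model (2.2)-(2.3): minimize W = -2*x1 - 3*x2 over the polygon OABCD
c = [-2, -3]
A_ub = [[2, 3],      #  2*x1 + 3*x2 <= 18
        [1, -4],     #   x1 - 4*x2  <= 4
        [-2, 1]]     # -2*x1 + x2   <= 2
b_ub = [18, 4, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # W_min = -18; the optimum is any point of the edge 2*x1 + 3*x2 = 18
```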

For a certain combination of constraints, the set of feasible solutions may be absent (empty). An example of such a set is shown in Fig. 2.3. The lines AC and BC bound the region of admissible values from above, while the third constraint cuts it off below the line AB. Thus, there is no common region satisfying all three constraints.

Linear models are quite simple and therefore, on the one hand, they imply a significant simplification of the problem, and on the other hand, they allow the development of simple and effective solution methods.

When studying DLA, linear models are rarely used and almost exclusively for approximate descriptions of problems.

Linear models can be used for step-by-step approximation of nonlinear models (linearization of the problem). This technique is especially effective when studying small areas of the space under study. The representation of individual sections of a nonlinear response surface by a linear model underlies a large group of optimization methods, the so-called methods with linear tactics.

Studying linear models is not difficult. In particular, the influence of each of the variables on the characteristics of the model of the form

W = a_0 + a_1 x_1 + a_2 x_2 + … + a_k x_k

is given by its coefficients:

∂W/∂x_i = a_i,  i = 1, …, k.

To find the optimum of the linear model W, an effective method, the simplex method, has been developed.

The simplest cost models, considered as a set of costs incurred, are sometimes reduced to linear ones.

An example of such a model is the classic transportation cost model (the transport problem) (Fig. 2.4).

There are k production points (i = 1, …, k) and m consumption points (j = 1, …, m) of some product. The amount of product produced at the i-th production point is a_i; the amount of product required at the j-th consumption point is b_j.

Equality of total production and consumption is assumed:

a_1 + a_2 + … + a_k = b_1 + b_2 + … + b_m.

The quantity of product transported from the i-th production point to the j-th consumption point is x_ij; the cost of transporting a unit of this product is c_ij.

The total transportation cost C_Σ is given by the linear model

C_Σ = c_11 x_11 + c_12 x_12 + … + c_km x_km → min

under the following restrictions:

x_i1 + x_i2 + … + x_im = a_i  (i = 1, …, k);   x_1j + x_2j + … + x_kj = b_j  (j = 1, …, m);   x_ij ≥ 0.
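A sketch of solving such a transport problem with scipy.optimize.linprog by flattening the matrix x_ij into a vector; the supply, demand, and cost numbers below are invented for illustration, with balanced totals as required.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: k = 2 producers, m = 3 consumers, balanced totals (30 = 30)
a = np.array([20, 10])             # a_i: amounts produced
b = np.array([5, 15, 10])          # b_j: amounts required
cost = np.array([[4, 6, 2],        # c_ij: unit transportation costs
                 [3, 5, 7]])
k, m = cost.shape

# Equality constraints: each row of x sums to a_i, each column sums to b_j
A_eq = np.zeros((k + m, k * m))
for i in range(k):
    A_eq[i, i * m:(i + 1) * m] = 1         # sum_j x_ij = a_i
for j in range(m):
    A_eq[k + j, j::m] = 1                  # sum_i x_ij = b_j
b_eq = np.concatenate([a, b])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (k * m))
print(res.x.reshape(k, m))   # optimal shipment plan x_ij
print(res.fun)               # minimal total cost
```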

Linear models also include models in the form of linear differential equations (ordinary or in partial derivatives).

A linear ordinary differential equation of n-th order has the form

The initial conditions are written as

A linear partial differential equation has the form

A model specified as a partial differential equation includes initial and boundary conditions (conditions on the boundary of the domain of definition of the function F(t)).

2.3. Study of the simplest mathematical model of gas turbine engine operation

The gas turbine engine (GTE) is the main power plant of modern aircraft.

The gas turbine engine circuit has the form shown in Fig. 2.5.



Here 1 is the inlet diffuser; 2, the compressor; 3, the combustion chamber; 4, the turbine; 5, the exhaust nozzle.

The gas turbine engine operating cycle includes the following stages:

1) The air flow, moving at the flight speed V, enters the compressor through the diffuser.

2) The compressor, rotating on the same shaft as the turbine, compresses the air that enters the combustion chamber.

3) Fuel (kerosene) is continuously injected into the combustion chamber, where it is mixed with the compressed air.

4) The gas produced by combustion enters the turbine, which accelerates it to the speed W.

5) At this speed, the gas is released into the atmosphere through the nozzle.

Because W > V, a thrust force R is generated, which allows the aircraft to fly in the atmosphere.

The thrust force is changed by changing the rate of fuel injection into the combustion chamber, which is done by moving the throttle (engine control) lever. The lever is deflected through a certain angle δ, either manually by the pilot or by an actuator driven by signals from the automatic control system in flight. Increasing δ increases the force R, and decreasing it decreases this force.

A gas turbine engine is a complex technical system in which a significant number of physical and chemical processes take place. The engine is equipped with various automation devices, systems for turning and cooling the turbine blades, etc. Naturally, a mathematical description of the functioning of the gas turbine engine is also quite cumbersome, including systems of partial differential equations, ordinary differential equations, transcendental functions, and the algorithms of the digital engine control system. Such models are used in the gas turbine engine design process.

To solve flight control problems, a simpler gas turbine engine model is used, which represents the dependence of the thrust force R on the throttle deflection angle δ. The process of change of the thrust force is described by an ordinary differential equation of the form:

τ·dR/dt + R = k·δ,        (2.11)

where τ > 0 is the engine time constant, which, in addition to the design characteristics, also depends on the ambient temperature, humidity, and other external factors, and k [kg/deg] is the proportionality coefficient.

The initial condition for equation (2.11) is written as

R(0) = R_0.        (2.12)

Thus, equation (2.11) together with the initial condition (2.12) is the simplest mathematical model of gas turbine engine operation, written in the form of a first-order ordinary differential equation.

To determine the proportionality coefficient k, calibration graphs of the dependence of thrust on the throttle angle, constructed from experimental data, are used. The tangent of the slope angle of the graph equals the desired coefficient.



Integrating equation (2.11) with the initial condition (2.12) allows us to find out how the thrust force changes over time (Fig. 2.6).

When the throttle is deflected, the thrust R increases and then stabilizes at a certain limiting value, i.e., the gas turbine engine is an inertial object.

The limiting value of the thrust force is obtained from (2.11) by setting the rate of its change to zero:

R_lim = k·δ.        (2.13)

The duration of the rise depends on the value of the engine time constant τ. The process is considered settled at t = t_st, when the thrust enters the so-called five-percent corridor of the limiting value of the thrust force (Fig. 2.6). The greater τ, the more inertial the engine and, therefore, the greater t_st.

Fig. 2.7 shows the behavior of the thrust force as a function of the throttle deflection angle for τ = 0.5.

During takeoff, when the throttle is deflected by 10°, the thrust force settles by the third second at a value of 3390 kg. Ten seconds after takeoff, when the throttle is deflected by 20°, the thrust force settles at 6780 kg, and ten seconds later, when the throttle is deflected by 30°, it settles at 10170 kg. The limiting value of the thrust force is 14270 kg.
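The behaviour described above can be reproduced numerically. The sketch below integrates the model with scipy.integrate.solve_ivp; the equation form τ·dR/dt + R = k·δ, the value k = 339 kg/deg (so that 10° gives 3390 kg in the steady state), and τ = 0.5 s are assumptions inferred from the figures quoted in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

TAU = 0.5     # assumed engine time constant, s
K = 339.0     # assumed proportionality coefficient, kg/deg (10 deg -> 3390 kg steady state)

def delta(t):
    """Staircase throttle input: 10 deg at takeoff, 20 deg after 10 s, 30 deg after 20 s."""
    return 10.0 if t < 10 else (20.0 if t < 20 else 30.0)

def rhs(t, R):
    # Assumed model (2.11): tau * dR/dt + R = k * delta  =>  dR/dt = (k*delta - R) / tau
    return (K * delta(t) - R) / TAU

sol = solve_ivp(rhs, (0.0, 30.0), [0.0], max_step=0.05)   # R(0) = R0 = 0, condition (2.12)
for t_query in (3.0, 19.0, 29.0):
    R = np.interp(t_query, sol.t, sol.y[0])
    print(f"t = {t_query:5.1f} s,  R = {R:7.1f} kg")       # should approach 3390, 6780, 10170 kg
```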


Related information.


Internet