Linear Mathematical Models: Mathematical Programming Cheat Sheet

3.1. General problem of linear programming

Linear programming is the most developed branch of mathematical programming; it is used to analyze and solve extremal problems with linear relationships and constraints.

Linear programming includes a number of solution methods, both exact and heuristic (approximate), that allow one, under given conditions, to choose the best, optimal solution to a production problem from all possible ones. These methods include the graphical method, the simplex method, the method of potentials (the modified distribution method, MODI), the Hitchcock method, the Kreko method, the Vogel approximation method, and others.

Some of these methods are united by a common name - the distribution, or transport, method.

The birthplace of linear programming is Russia. The first works on linear programming by the future academician L.V. Kantorovich were published in 1939. In 1975 he received the Nobel Prize in Economics for the development of linear programming methods. Since most of the works of Academician L.V. Kantorovich are devoted to solving transport problems, the mentioned Nobel Prize can also be considered a recognition of the merits of Russian transport science.

In road transport, linear programming methods have been used since the 1960s to solve a large number of important production problems, namely: reducing cargo transportation distances; drawing up an optimal transportation scheme; selecting the shortest routes; transporting different but interchangeable cargoes; shift-daily planning; planning the transportation of small-batch cargoes; distributing buses among routes; and others.

The features of the linear programming model are as follows:

The objective function and constraints are expressed by linear dependencies (equalities or inequalities);

The number of dependencies is always less than the number of unknowns (uncertainty condition);

Non-negativity of the required variables.

The general form of a linear programming model, in abbreviated notation, is as follows:

Find x_j ≥ 0 (j = 1, 2, …, n) subject to constraints of the following type:

Subject to these constraints, the objective function is minimized (or maximized).

The standard way of writing a linear programming model is a system of linear equations written in canonical form, i.e., as linear equalities in non-negative numbers:

a_11 x_1 + a_12 x_2 + … + a_1n x_n = b_1;

a_21 x_1 + a_22 x_2 + … + a_2n x_n = b_2; (3.1)

……………………………..

a_m1 x_1 + a_m2 x_2 + … + a_mn x_n = b_m.

If the model is written in the form of inequalities in non-negative numbers, i.e., it has the form

a_11 x_1 + a_12 x_2 + … + a_1n x_n ≤ b_1;

a_21 x_1 + a_22 x_2 + … + a_2n x_n ≤ b_2; (3.2)

……………………………..

a_m1 x_1 + a_m2 x_2 + … + a_mn x_n ≤ b_m,

then this notation is reduced to the canonical form (3.1) by introducing additional non-negative variables x_{n+i} ≥ 0 (i = 1, 2, …, m) into the left-hand sides of the inequalities (or by subtracting such variables when the inequality sign is directed the opposite way). The additional variables form the basis.
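As a minimal sketch of this reduction (the function name and data layout are my own, not from the text), the following appends one slack variable per inequality of type (3.2), producing the identity columns that form the initial basis:

```python
# Sketch: reduce "A x <= b" (form 3.2) to canonical equalities (form 3.1)
# by appending one non-negative slack variable per constraint row.

def to_canonical(A, b):
    """Return (A_eq, b): A_eq gains one identity (slack) column per row."""
    m = len(A)
    A_eq = []
    for i, row in enumerate(A):
        slack = [1 if j == i else 0 for j in range(m)]  # column of x_{n+i}
        A_eq.append(list(row) + slack)
    return A_eq, list(b)

# Two illustrative constraints: 3x1 + 2x2 <= 12 and x1 + 2x2 <= 6
A_eq, b = to_canonical([[3, 2], [1, 2]], [12, 6])
# Each row now reads a_i1*x1 + a_i2*x2 + x_{2+i} = b_i, and the slack
# columns form an identity matrix, i.e. a ready-made starting basis.
```
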

Solving a linear programming problem in standard form means finding a solution of the system of linear equations in non-negative numbers that minimizes the objective function. The objective function then has the form

L = c_1 x_1 + c_2 x_2 + … + c_n x_n → min, (3.3)

where c_1, c_2, …, c_n are the coefficients of the objective function L for the variables x_j.

Additional variables enter the objective function with zero coefficients.

To maximize the objective function L, the signs of its coefficients are reversed, which brings us back to a minimization problem; i.e., one problem is reduced to the other by replacing L with −L, since max L = min(−L).

A basic solution of the system of linear equations (3.1) is a solution in which the non-basic (free) variables are set to zero.

A basic solution is called feasible if the variables included in the basis are non-negative.

An admissible solution that maximizes (or minimizes) the objective function (3.3) is called optimal.

Each linear programming problem corresponds to another, called the dual linear programming problem. The original problem is called the primal problem with respect to the dual one. The primal and dual problems form a pair, called a dual pair in linear programming. The pair is asymmetric when the primal problem is written in canonical form, and symmetric when the conditions of both problems are written as inequalities.

The rules for compiling a mathematical model of a dual problem are based on the rules of matrix calculus.

The concept of duality is widely used in the analysis of linear programming problems. The property of duality is considered in detail in each particular case.

3.2. Graph-analytical method

The graph-analytical (graphical) method is one of the simplest methods of linear programming. It clearly reveals the essence of linear programming and gives it a geometric interpretation. Its drawback is that it only allows solving problems with two or three unknowns, i.e., it is applicable to a narrow range of problems. The method is based on the rules of analytical geometry.

A problem with two variables x_1 and x_2, which by the meaning of the problem must be non-negative, is solved in the Cartesian coordinate system. The equations x_1 = 0 and x_2 = 0 are the axes of the coordinate system, bounding the first quadrant.

Let's consider the solution method using a specific example.

Example 3.1. There are 300 tons of foam concrete products and 200 tons of steel products in stock. A motor transport enterprise needs to deliver these products to a construction site. The enterprise has KamAZ-5320 and ZIL-4314 trucks. In one trip a KamAZ-5320 can deliver 6 tons of foam concrete and 2 tons of steel, and the profit from the trip is 4 thousand rubles. A ZIL-4314 delivers 2 tons of foam concrete and 4 tons of steel in one trip, with a profit of 6 thousand rubles per trip. It is necessary to organize the transportation so as to provide the enterprise with the greatest profit.

Let us construct a mathematical model of the problem. Denote by x_1 the required number of KamAZ-5320 trips and by x_2 the required number of ZIL-4314 trips.

The total amount of foam concrete transported, in tons, is 6x_1 + 2x_2, and of steel 2x_1 + 4x_2. Based on the quantities in stock, the transportation limits are 6x_1 + 2x_2 ≤ 300 t for foam concrete and 2x_1 + 4x_2 ≤ 200 t for steel.

The total profit, in thousand rubles, is expressed as 4x_1 + 6x_2; it needs to be maximized and serves as the optimality criterion in the problem under consideration. The mathematical model of the problem thus looks as follows: it is necessary to maximize the objective function

L = 4x_1 + 6x_2 → max under the conditions: 6x_1 + 2x_2 ≤ 300; 2x_1 + 4x_2 ≤ 200; x_1 ≥ 0; x_2 ≥ 0.

Consider the equation 6x_1 + 2x_2 = 300. To construct the line described by this equation, we find two points lying on it. At x_1 = 0 the equation gives 2x_2 = 300, whence x_2 = 150; therefore point A with coordinates (0, 150) lies on the desired line. At x_2 = 0 we have 6x_1 = 300, whence x_1 = 50, and point D with coordinates (50, 0) is also on the desired line. Draw the line AD through these two points (Fig. 3.1).

The linear inequality 6x_1 + 2x_2 ≤ 300 describes a half-plane located on one side of the constructed line 6x_1 + 2x_2 = 300. To find out on which side of this line the points of the desired half-plane lie, we substitute into the inequality 6x_1 + 2x_2 ≤ 300 the coordinates of some point not lying on the boundary line, for example the origin O(0, 0). It satisfies the inequality: 6·0 + 2·0 = 0 < 300. This means that the origin lies in the region of admissible values, which is located to the left of the line AD and is shaded in Fig. 3.1.

The line 2x_1 + 4x_2 = 200 is likewise constructed from two points. At x_1 = 0 we have 4x_2 = 200, whence x_2 = 50; point E with coordinates (0, 50) belongs to the required line. At x_2 = 0 we have 2x_1 = 200, so point C with coordinates (100, 0) lies on this line. Substituting the coordinates of the origin O(0, 0) into the inequality, we get 2·0 + 4·0 = 0 < 200. Hence, the origin lies in the region of admissible values of the line 2x_1 + 4x_2 = 200.

The constraint system of the problem requires that the plans (x_1; x_2) satisfy all four inequalities, i.e., the admissible plans are points (x_1; x_2) lying simultaneously in all four half-planes. This requirement is met only by the points located inside and on the boundary of the polygon OEKD, which is the polygon of admissible solutions.

The vertices of the polygon of feasible solutions are the points O, E, K, D, and the segments OE, EK, KD, OD are its edges. Any point of the polygon OEKD is a plan of the problem satisfying all its conditions. The vertices of the polygon are formed by the intersection of two lines and correspond to the basic plans of the problem, among which the best (optimal) plan is found. Thus, there are as many basic plans as there are vertices of the polygon of feasible solutions.

A visual geometric representation can also be obtained for the objective function L = 4x_1 + 6x_2. Fix some value of the objective function, for example L = 120. The equation 4x_1 + 6x_2 = 120 defines a line passing through the point B with coordinates (x_1 = 0; x_2 = 20) and the point L with coordinates (x_1 = 30; x_2 = 0). The segment BL lies inside the polygon OEKD; hence, for all plans (points) of this segment the value of the objective function is the same and equal to 120. Assigning other values to the objective function, we obtain parallel lines, which are called level lines of the objective function.

Moving a level line parallel to itself in one direction, we obtain an increase of the objective function, and in the opposite direction its decrease. In this example, moving the line BL to the right increases the objective function, which we are maximizing. We do this as long as the line BL still has at least one common point with the polygon of admissible solutions OEKD. From Fig. 3.1 it follows that the last point which a level line of the objective function passes through is the point K. This means that the point K determines the optimal plan of the problem.

The direction perpendicular to the level lines, along which the objective function grows, is called the direction of greatest increase of the objective function, and it determines its maximum gain. This direction can be found without drawing the level lines: it suffices to lay off on the axes x_1 and x_2 segments equal to the coefficients of the objective function and, using them as coordinates, construct the vector of greatest increase of the objective function. In mathematics it is called the gradient and denoted grad. The gradient of the function L = 4x_1 + 6x_2 is the vector n = (4; 6). For convenience of construction, we may scale its coordinates, for example by a factor of 10, i.e., n = (40; 60). To construct the gradient of the objective function L, we connect the point with coordinates (40; 60) to the origin. The level lines of the objective function are drawn perpendicular to the direction of the gradient.

So, one way or another, it is established that the point K determines the optimal plan of the problem, the values of whose variables are the coordinates of this point. To find these coordinates, we solve the system of equations of the two lines forming this vertex:

6x_1 + 2x_2 = 300;

2x_1 + 4x_2 = 200.

We equalize the coefficients of x_1 by multiplying the second equation by 3 and subtracting the first equation from it. We obtain 10x_2 = 300, so x_2 = 30. Substituting x_2 = 30 into either equation, for example the first, we determine the value of x_1:

6x_1 + 2·30 = 300,

whence 6x_1 = 300 − 60 = 240, and therefore x_1 = 40.

Thus, to obtain the greatest profit, the motor transport enterprise must make 40 trips with the KamAZ-5320 and 30 trips with the ZIL-4314. The maximum profit in this case will be

L = 4x_1 + 6x_2 = 4·40 + 6·30 = 340 thousand rubles.
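The graphical solution can also be double-checked numerically. Here is a hypothetical brute-force sketch (not part of the original text): since the optimum of a two-variable LP is attained at a vertex of the feasible polygon, we intersect the boundary lines pairwise, keep the feasible intersection points, and evaluate the profit at each:

```python
from itertools import combinations

# Boundary lines of Example 3.1, each stored as (a1, a2, b) for a1*x1 + a2*x2 = b.
lines = [
    (6, 2, 300),   # foam concrete limit
    (2, 4, 200),   # steel limit
    (1, 0, 0),     # axis x1 = 0
    (0, 1, 0),     # axis x2 = 0
]

def intersect(l1, l2):
    """Intersection point of two lines, or None if they are parallel."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def feasible(p):
    """Check all four constraints (with a small tolerance for rounding)."""
    x1, x2 = p
    return (x1 >= -1e-9 and x2 >= -1e-9 and
            6 * x1 + 2 * x2 <= 300 + 1e-9 and
            2 * x1 + 4 * x2 <= 200 + 1e-9)

vertices = [p for l1, l2 in combinations(lines, 2)
            if (p := intersect(l1, l2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 4 * p[0] + 6 * p[1])
# best == (40.0, 30.0), and 4*40 + 6*30 = 340 (thousand rubles)
```

The feasible vertices found are exactly O(0, 0), E(0, 50), K(40, 30) and D(50, 0), and the profit is largest at K, confirming the plan x_1 = 40, x_2 = 30 with L = 340.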

Based on the considered example and the geometric interpretation of the optimization problem with two variables, we can draw the following conclusions:

1) in two-dimensional space, the region of feasible solutions is a polygon;

2) each side of the polygon corresponds to a zero value of one variable;

3) each vertex of the polygon of admissible solutions corresponds to zero values of two variables;

4) each value of the objective function corresponds to a straight line;

5) the optimal solution of the problem corresponds to the vertex of the polygon at which the objective function attains its optimal value, and the optimal variables are the coordinates of this vertex.

In the general case, optimization problems have a similar geometric interpretation. The set of plans of a problem is a polyhedron whose vertices correspond to the basic plans. When solving the problem, one passes from one vertex of the polyhedron to another with a larger value of the objective function until its optimal value is obtained. Note that the efficiency of optimization methods lies precisely in the fact that the enumeration of vertices (iterations) is carried out only in the direction of the greatest increase of the objective function. Therefore not all vertices, of which there may be a huge number, are considered, but only those closer to the extremal one.

When determining the class of an optimization problem and choosing a method for solving it, it is necessary to know whether the set of feasible solutions is convex or non-convex, and whether the objective function is linear or non-linear.

By definition, a set is called convex if, for any two of its points, the entire segment connecting them belongs to the set. Examples of convex sets are a segment (Fig. 3.2, a), a disk, a cube, a parallelepiped, as well as polygons lying entirely on one side of each of their sides, etc.

Fig. 3.2, b shows non-convex sets. In a non-convex set one can indicate two points such that part of the segment AB connecting them does not belong to the set under consideration.

3.3. Simplex Method

The simplex method is a general method for solving linear programming problems. The method takes its name from the word "simplex", which denotes the simplest convex polytope, whose number of vertices is always one more than the dimension of the space. The simplex method was developed in the USA by the mathematician G. Dantzig in the late 1940s.

The simplex method consists in obtaining a non-negative basic solution of a system of canonical linear equations of type (3.1), then minimizing (maximizing) the objective function (3.3), and in this way finding the optimal values of the required variables x_1, x_2, …, x_n.

The idea of the simplex method is that the calculation passes sequentially from the first basic solution to the second, third, etc., by means of so-called simplex transformations. The transformations are recorded in simplex tables, which greatly simplifies and speeds up the calculations.

To obtain non-negative basic solutions of a system of linear equations, the elimination of unknowns must be carried out so that the free terms of the equations remain non-negative at every stage of the process. The following rule should be observed: any free variable having at least one positive coefficient is taken as a new basic variable; the variable removed from the basis is the one corresponding to the smallest ratio of the free terms of the equations to the corresponding positive coefficients of the variable being introduced into the basis. Such transformations are called simplex transformations.

This is very important: to find a particular non-negative solution corresponding to the largest possible value of one free variable while the other free variables are zero, instead of determining the range of that variable and substituting its largest possible value into the general solution, it suffices to take this variable as basic and subject the system to a simplex transformation, passing to a new basis, which greatly simplifies the calculations.

The calculations are conveniently performed in simplex tables. The transition from one table to the next corresponds to one iteration, i.e., a transition from one basis to another, with the value of the objective function decreasing. After a certain number of iterations one reaches a basis for which the optimal (minimum or maximum) value of the objective function is obtained. Let us consider the simplex method in general form.

The general task of linear programming is to minimize (maximize) the objective function, the variables of which are interconnected by a system of linear equations, subject to the non-negativity condition.

Let it be necessary to minimize the linear form

L = c_1 x_1 + c_2 x_2 + … + c_n x_n

Under the following conditions (for clarity, zero and unit coefficients are kept in the equations):

1x_1 + 0x_2 + … + 0x_m + a_1,m+1 x_m+1 + … + a_1n x_n = b_1;

0x_1 + 1x_2 + … + 0x_m + a_2,m+1 x_m+1 + … + a_2n x_n = b_2;

……………………………………………

0x_1 + 0x_2 + … + 1x_m + a_m,m+1 x_m+1 + … + a_mn x_n = b_m.

In this system of equations there is already a ready-made basis, since each constraint equation contains an unknown with coefficient one that appears in no other equation; i.e., from the coefficients of the variables x_1, x_2, …, x_m one can form an identity matrix.

Let's solve the equations for the basic variables:

x_1 = b_1 − (a_1,m+1 x_m+1 + … + a_1n x_n);

x_2 = b_2 − (a_2,m+1 x_m+1 + … + a_2n x_n);

………………………………

x_m = b_m − (a_m,m+1 x_m+1 + … + a_mn x_n),

and we will express the objective function in terms of free variables, substituting into it, in place of the basic variables, their expressions in terms of free variables:

L = c_1 b_1 + c_2 b_2 + … + c_m b_m − (c_1 a_1,m+1 + c_2 a_2,m+1 + … + c_m a_m,m+1 − c_m+1) x_m+1 − … − (c_1 a_1n + c_2 a_2n + … + c_m a_mn − c_n) x_n.

The variables x_1, x_2, …, x_m, with whose help the first basic plan is found, are basic, and the remaining x_m+1, x_m+2, …, x_n are free. There should always be as many basic variables as there are equations in the system. By the non-negativity condition, the smallest value of a free variable is zero. The resulting basic solution of the system of equations is its initial feasible solution: x_1 = b_1, x_2 = b_2, …, x_m = b_m, x_m+1 = 0, …, x_n = 0.

This solution corresponds to the value of the objective function

L = c_1 b_1 + c_2 b_2 + … + c_m b_m.

The initial solution is checked for optimality. If it is not optimal, then, by introducing free variables into the basis, subsequent feasible solutions with a smaller value of the objective function are found. To do this, one determines the free variable to be entered into the basis and the variable to be removed from it, and then passes from the current system to the next equivalent one. This is done using simplex tables. The solution of the problem continues until the optimal value of the objective function is obtained.

Simplex tables are compiled as follows (see Table 3.1). All the variables x_1, x_2, …, x_n are placed at the top of the table, together with the coefficients c_j with which the corresponding variables enter the objective function. The first column c_i consists of the coefficients of the objective function for the variables included in the basis. Next follow the column of basic variables and the free terms of the equations. The elements of the remaining columns of the table are the coefficients with which the variables enter the system of equations. Thus, each row of the table corresponds to an equation of the system solved with respect to a basic variable. The table also shows the plan that corresponds to the objective function for the given basis.

The bottom row of the table is called the index row. Each of its elements (estimates) Δ_j is defined as

Δ_j = z_j − c_j,

where c_j are the coefficients of the corresponding variables in the objective function, and z_j is the sum of the products of the objective-function coefficients of the basic variables by the corresponding elements of the j-th column of the table.
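As a small illustration of how the index row is filled in (the starting tableau below is an assumed one, built from Example 3.1 with zero-cost slack variables x3 and x4 in the basis; it is not a table from the text):

```python
# Computing the index-row estimates Δ_j = z_j − c_j for an assumed tableau.

c = [4, 6, 0, 0]          # objective coefficients c_j for x1, x2, x3, x4
c_basis = [0, 0]          # c_i of the current basic variables (slacks x3, x4)
columns = {               # a_ij entries of each tableau column, one per basis row
    "x1": [6, 2],
    "x2": [2, 4],
    "x3": [1, 0],
    "x4": [0, 1],
}

deltas = {}
for j, (name, col) in enumerate(columns.items()):
    z_j = sum(cb * a for cb, a in zip(c_basis, col))  # z_j over the basis rows
    deltas[name] = z_j - c[j]                         # estimate Δ_j = z_j − c_j

# With a zero-cost slack basis every z_j is 0, so Δ_j = −c_j; the negative
# estimates for x1 and x2 show the plan is not yet optimal for maximization.
```
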

Table 3.1

Simplex table with the initial feasible solution

T10. Statement of the linear programming problem

A mathematical model of an economic problem is a set of mathematical relationships that describe the economic process.

To compile a mathematical model, it is necessary:

1. select task variables;

2. draw up a system of restrictions;

3. set the objective function.

The task variables are the quantities x_1, x_2, …, x_n that fully characterize the economic process. They are usually written as a vector X = (x_1, x_2, …, x_n).

The task constraint system is the set of equations and inequalities that the variables of the problem satisfy and that follow from the limited resources and other economic conditions, for example the non-negativity of the variables. In general form they look like:

The objective function is the function F(X) = f(x_1, x_2, …, x_n) of the task variables that characterizes the quality of the solution and whose extremum is to be found.

The general problem of mathematical programming is formulated as follows: find values of the task variables x_1, x_2, …, x_n that provide the extremum of the objective function

F(X) = f(x_1, x_2, …, x_n) → max (min) (2)

and satisfy the system of constraints (1).

If the objective function (2) and the constraint system (1) are linear, then the problem of mathematical programming is called a linear programming problem (LPP).

The vector X (the set of task variables) is called a feasible solution, or a plan, of the LPP if it satisfies the constraint system (1). A feasible plan X that provides the extremum of the objective function is called an optimal solution of the LPP.

2. Examples of compiling mathematical models of economic problems

The study of specific production situations leads to LPPs that are interpreted as problems of the optimal use of limited resources.

1. The optimal production plan problem

For the production of two types of products T 1 and T 2, three types of resources S 1 , S 2 , S 3 are used. Stocks of resources, the number of units of resources spent on the manufacture of a unit of production, as well as profit from the sale of a unit of production are shown in the table:

It is required to find such a plan for the production of products in which the profit from its sale will be maximum.


Solution.

Let x_1 and x_2 denote the number of units of products T_1 and T_2, respectively, planned for production. Their manufacture will require (x_1 + x_2) units of resource S_1, (x_1 + 4x_2) units of resource S_2, and x_1 units of resource S_3. The consumption of resources S_1, S_2, S_3 must not exceed their stocks: 8, 20 and 5 units, respectively.

Then the economic-mathematical model of the problem can be formulated as follows:

Find a production plan X = (x_1, x_2) that satisfies the system of restrictions:

and the condition

under which the function takes on the maximum value.

The problem can be easily generalized to the case of producing n types of products using m types of resources.
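As a numerical check of this model: the per-unit profits are not preserved in the surviving text (the data table was lost), so the coefficients c1 = 2 and c2 = 3 below are assumed purely for illustration. Since the optimum of this small instance happens to be integral, a brute-force scan over integer plans finds it:

```python
# Brute-force check of the production-plan model with ASSUMED profits
# c1 = 2, c2 = 3 per unit (the original profit table is lost).

def feasible(x1, x2):
    return (x1 + x2 <= 8 and        # resource S1, stock 8
            x1 + 4 * x2 <= 20 and   # resource S2, stock 20
            x1 <= 5 and             # resource S3, stock 5
            x1 >= 0 and x2 >= 0)

best = max(((x1, x2) for x1 in range(9) for x2 in range(9)
            if feasible(x1, x2)),
           key=lambda p: 2 * p[0] + 3 * p[1])
profit = 2 * best[0] + 3 * best[1]
# With the assumed profits the best plan is x1 = 4, x2 = 4 with profit 20
# (the LP optimum here happens to be integral, so the grid scan finds it).
```

Under these assumed profits, resources S_1 and S_2 are then used in full (4 + 4 = 8 and 4 + 16 = 20), while one unit of S_3 remains unused.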

2. The optimal diet problem

There are two types of food K 1 and K 2 containing nutrients S 1 , S 2 and S 3 . The content of the number of nutrient units in 1 kg of each type of feed, the required minimum of nutrients, as well as the cost of 1 kg of feed are shown in the table:

It is necessary to draw up a daily ration that has a minimum cost, in which the content of each type of nutrient would not be less than the established limit.

Solution.

Let x_1 and x_2 denote the amounts of feeds K_1 and K_2 in the daily ration. Then the ration will include (3x_1 + x_2) units of nutrient S_1, (x_1 + 2x_2) units of nutrient S_2, and (x_1 + 6x_2) units of nutrient S_3. Since the content of nutrients S_1, S_2 and S_3 in the ration must be at least 9, 8 and 12 units respectively, the economic-mathematical model of the problem can be formulated as follows:

Compose a daily ration X = (x_1, x_2) satisfying the system of restrictions:

and the condition

under which the function takes the minimum value.
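A numerical check of the diet model, analogous to the graphical reasoning of §3.2, can be sketched as follows. The feed costs are not preserved in the surviving text (the table was lost), so c1 = 4 and c2 = 6 rubles per kg are assumed for illustration; the vertex enumeration itself uses only the constraints given above:

```python
from itertools import combinations

# Boundary lines of the diet constraints, as (a1, a2, b) for a1*x1 + a2*x2 = b.
lines = [(3, 1, 9), (1, 2, 8), (1, 6, 12), (1, 0, 0), (0, 1, 0)]

def cross(l1, l2):
    """Intersection point of two lines, or None if parallel."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    d = a1 * b2 - a2 * b1
    if d == 0:
        return None
    return ((c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d)

def ok(p):
    """All three nutrient minimums plus non-negativity, with tolerance."""
    x1, x2 = p
    return (x1 >= -1e-9 and x2 >= -1e-9 and
            3 * x1 + x2 >= 9 - 1e-9 and
            x1 + 2 * x2 >= 8 - 1e-9 and
            x1 + 6 * x2 >= 12 - 1e-9)

verts = [p for a, b in combinations(lines, 2)
         if (p := cross(a, b)) is not None and ok(p)]
best = min(verts, key=lambda p: 4 * p[0] + 6 * p[1])  # ASSUMED costs 4 and 6
# best == (2.0, 3.0): 2 kg of K1 and 3 kg of K2, assumed cost 4*2 + 6*3 = 26
```

Under the assumed costs the cheapest admissible ration is x_1 = 2 kg of K_1 and x_2 = 3 kg of K_2; all three nutrient constraints hold, the first two with equality.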

Forms of writing the LPP

In an LPP it is required to find the extremum of a linear objective function:

with restrictions:

and the non-negativity condition

where a_ij, b_i, c_j (i = 1, …, m; j = 1, …, n) are given constants.

This is how the LPP is written in general form. If the constraint system contains only inequalities, the LPP is said to be in standard form. The canonical (main) form of the LPP is the notation in which the constraint system contains only equalities. Thus, the LPPs above are written in standard form.

The general, standard and canonical forms of the LPP are equivalent in the sense that each of them can be rewritten in another form by simple transformations. This means that if there is a way of solving one of these problems, then the optimal plan of any of them can be determined.

In order to pass from one form of the LPP to another, one must be able to pass from inequality constraints to equality constraints and vice versa.

An inequality constraint of the form (≤) can be converted into an equality constraint by adding an additional non-negative variable to its left-hand side, and an inequality constraint of the form (≥) by subtracting an additional non-negative variable from its left-hand side. The number of additional non-negative variables introduced equals the number of inequality constraints transformed.

The additional variables introduced have a definite economic meaning. Thus, if the constraints of the original LPP reflect the consumption and availability of resources, then the value of an additional variable of the LPP in canonical form equals the volume of the corresponding unused resource.

Example 1. Write the LPP in canonical form:

with restrictions:

Solution.

The objective function remains unchanged:

The system of inequalities is transformed into a system of equalities:

When solving an LPP by the graphical method, a transition from the canonical form to the standard form is required.

To bring the LPP to standard form, the Jordan–Gauss method of solving systems of linear algebraic equations is used. In contrast to the Gauss method, in which the augmented matrix of the system is reduced to row echelon form, in the Jordan–Gauss method an identity matrix is formed as part of the augmented matrix, so no back-substitution is required.

To convert the original canonical LPP into an equivalent standard LPP:

a) a non-zero element a_qp is chosen in the augmented matrix of the constraint system. This element is called the resolving element, and the q-th row and p-th column are called the resolving row and resolving column;

b) the resolving row is rewritten without change, and all elements of the resolving column, except for the resolving element itself, are replaced by zeros. The remaining elements of the augmented matrix are determined using the "rectangle rule":

Consider four elements of the augmented matrix: the element a_ij to be transformed, the resolving element a_qp, and the elements a_ip and a_qj. To find the new value of a_ij, subtract from a_ij the product of the elements a_ip and a_qj, located at the opposite vertices of the rectangle, divided by the resolving element a_qp:

a_ij → a_ij − (a_ip · a_qj) / a_qp.

c) the basic unknowns are simultaneously eliminated from the objective function. To do this, the coefficients of the objective function are written in the last row of the augmented matrix. The calculations take into account that the resolving element cannot be chosen in the last row.
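Steps (a)–(c) can be sketched in code. Note one deviation from the description above: this sketch also divides the resolving row by the resolving element, so that repeated steps produce the identity matrix that is the stated goal of the Jordan–Gauss method; all names and the sample system are illustrative:

```python
# One Jordan–Gauss elimination step using the "rectangle rule":
# new a_ij = a_ij − a_ip * a_qj / a_qp; the resolving row is normalized
# (divided by a_qp) and the resolving column becomes a unit column.

def jordan_gauss_step(M, q, p):
    """One step on augmented matrix M with resolving element M[q][p]."""
    piv = M[q][p]
    out = []
    for i, row in enumerate(M):
        if i == q:
            out.append([v / piv for v in row])           # resolving row
        else:
            out.append([row[j] - row[p] * M[q][j] / piv  # rectangle rule
                        for j in range(len(row))])
    return out

M = [[2.0, 1.0, 4.0],    # illustrative system: 2x1 + x2 = 4
     [1.0, 3.0, 7.0]]    #                       x1 + 3x2 = 7
M = jordan_gauss_step(M, 0, 0)
M = jordan_gauss_step(M, 1, 1)
# M is now [[1, 0, 1], [0, 1, 2]], i.e. x1 = 1, x2 = 2
```

Applying the step twice yields the identity part [[1, 0], [0, 1]] on the left and the solution x1 = 1, x2 = 2 in the last column.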

Example 2. Convert to standard form:

Solution.

Using the Jordan–Gauss method, we reduce the system of LPP constraint equations to an equivalent system of inequalities. Let us choose the third element of the first row as the resolving element:

The number −9 obtained in the last column of the last row must be written into the objective function with the opposite sign. As a result of the transformations the LPP takes the form:

Since the variables x_2 and x_3 are non-negative, discarding them, we can write the LPP in symmetric form:

In the canonical form of the LPP the objective function can be either minimized or maximized. To pass from finding the maximum to finding the minimum, or vice versa, it suffices to change the signs of the coefficients of the objective function: F_1 = −F. The resulting problem and the original LPP have the same optimal solution, and the values of the objective functions at this solution differ only in sign.
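A tiny sketch of this sign-flip equivalence on an assumed finite set of plans (a real LPP has infinitely many feasible points, but its corner points are finitely many, so a finite check conveys the idea):

```python
# Sketch: max F = −min(−F) on an illustrative finite set of feasible plans,
# showing that flipping the signs of the objective coefficients turns a
# maximization into a minimization with the same optimal plan.

plans = [(0, 0), (5, 0), (4, 4), (0, 5)]    # illustrative feasible plans
F = lambda p: 2 * p[0] + 3 * p[1]           # illustrative objective

max_plan = max(plans, key=F)
min_plan = min(plans, key=lambda p: -F(p))  # minimize F1 = -F instead
assert max_plan == min_plan                 # same optimal plan
assert F(max_plan) == -min(-F(p) for p in plans)  # values differ only in sign
```
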

Properties of the LPP

1. The set of all feasible solutions of the constraint system of a linear programming problem is convex.

A set of points is called convex if it contains the entire segment connecting any two points of this set.

According to this definition, the polygon in Fig. 1a is a convex set, while the polygon in Fig. 1b is not, because the segment MN between its two points M and N does not completely belong to this polygon.

Convex sets can be not only polygons. Examples of convex sets are circle, sector, segment, cube, pyramid, etc.

2. If the LPP has an optimal solution, then the linear function takes its maximum (minimum) value at one of the corner points of the polyhedron of solutions. If the linear function takes its maximum (minimum) value at more than one corner point, then it takes the same value at any point that is a convex linear combination of these points.

A point X is called a convex linear combination of the points X_1, X_2, …, X_n if the following conditions are met:

X = α_1 X_1 + α_2 X_2 + … + α_n X_n,

α_j ≥ 0, Σ α_j = 1.

It is obvious that in the special case for n = 2 a convex linear combination of two points is a segment connecting them.
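A short sketch of this definition (the helper name is my own): for n = 2, sweeping α_1 from 0 to 1 with α_2 = 1 − α_1 traces exactly the segment between X_1 and X_2:

```python
# Sketch: a convex linear combination X = α1*X1 + ... + αn*Xn with
# α_j >= 0 and Σα_j = 1, evaluated coordinate-wise.

def convex_combination(points, alphas):
    assert all(a >= 0 for a in alphas) and abs(sum(alphas) - 1) < 1e-12
    return tuple(sum(a * p[k] for a, p in zip(alphas, points))
                 for k in range(len(points[0])))

X1, X2 = (0, 150), (50, 0)                       # points A and D from §3.2
midpoint = convex_combination([X1, X2], [0.5, 0.5])
# midpoint == (25.0, 75.0), the middle of the segment X1X2
```

With equal weights α_1 = α_2 = 0.5 the combination gives the midpoint of the segment AD; varying the weights sweeps out the whole segment.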

3. Each feasible basic solution of the canonical LPP constraint system corresponds to a corner point of the polyhedron of solutions and, conversely, each corner point of the polyhedron of solutions corresponds to a feasible basic solution.

It follows from the last two properties that if an LPP has an optimal solution, then it coincides with at least one of its feasible basic solutions.

Thus, the extremum of the linear function of an LPP is to be sought among the finite number of its feasible basic solutions.

FEDERAL AGENCY FOR EDUCATION

FGOU ON "PSKOV COLLEGE OF CONSTRUCTION AND ECONOMICS"

Subject “Mathematical Methods”

Linear programming problem

Course work

Student group 315-PO

Andreev Dmitry Alexandrovich

Coursework Supervisor

Vasilyeva Natalia Anatolievna

Pskov 2009

Introduction

Chapter I Linear Programming

§ 1 General formulation of the linear programming problem

§ 2 Mathematical model of a linear programming problem

§ 3 Canonical form of a linear programming problem

Chapter II Solution of the problem by the simplex method

§ 1 Statement of the problem

§ 2 Drawing up a mathematical model of the problem

§ 3 Algorithms for solving the problem by the simplex method

§ 4 Construction of the initial reference solution by the Gauss method

§ 5 Problem solution

Conclusion

Literature

Introduction

At present, many problems of planning and management in the sectors of the national economy, as well as a large number of particular applied problems, are solved by methods of mathematical programming. The most developed methods for solving optimization problems are those of linear programming. These methods make it possible to describe with sufficient accuracy a wide range of problems of commercial activity, such as: goods-turnover planning; placement of a city's retail trading network; planning the commodity supply of a city or district; attaching trade enterprises to suppliers; organizing rational transportation of goods; assigning trade workers to positions; organizing rational purchases of food products; allocating resources; investment planning; optimizing intersectoral relations; replacing commercial equipment; determining the optimal range of goods for a limited area; establishing a rational mode of operation.

In linear programming problems, the efficiency criterion and functions in the constraint system are linear.

If there is a time variable in a mathematical programming problem, and the efficiency criterion is expressed in terms of equations describing the course of operations in time, then such a problem is a dynamic programming problem.

In many economic models, the relationship between fixed and variable factors can be considered linear.

The use of mathematical programming methods in commercial activity begins with the collection of the necessary information by a businessman, economist, or financier, after which the problem is formulated together with a mathematician. Since the methods of mathematical programming are already implemented on the computer in the form of standard program packages, access to them is usually simple, automated, and poses no particular difficulty.

Operating the model then includes collecting and processing information, entering the processed information into a computer, performing calculations using the developed programs, and, finally, issuing the calculation results (in a form convenient for users) for their use in production activity.

Chapter Ι Linear Programming

§ 1 General formulation of the linear programming problem

Linear programming is a branch of mathematical programming that studies methods for solving extremal problems, which are characterized by a linear relationship between variables and a linear objective function. To solve linear programming problems, a mathematical model of the problem is compiled and a solution method is selected.

The statement of a problem of commercial activity can be represented as a mathematical model of linear programming if the objective function can be written as a linear form and the connection with limited resources can be described by linear equations or inequalities. In addition, an extra restriction is introduced: the values of the variables must be non-negative, since they represent quantities such as turnover, working hours, costs, and other economic indicators.

The geometric interpretation of economic problems makes it possible to visualize their structure, identify their features, and opens up ways to study more complex properties. A linear programming problem with two variables can always be solved graphically. However, already in three-dimensional space such a solution becomes more complicated, and in spaces of dimension greater than three a graphical solution is, generally speaking, impossible. The case of two variables is not of particular practical importance, but its consideration clarifies the properties of linear programming problems, suggests the idea of their solution, and makes the solution methods and the ways of their practical implementation geometrically clear.

§ 2 Mathematical model of a linear programming problem

Before solving the problem, we make its mathematical model.

A mathematical model is a set of relationships consisting of a linear objective function and linear constraints on the variables.

The principle of drawing up a mathematical model.

1. Select the problem variables.

The variables of the problem are the quantities that fully characterize the economic process described in the problem. They are usually written as a vector X = (x 1 , x 2 , …, x n ), where x j ≥ 0 (j = 1, 2, …, n).

2. Compose the system of constraints of the problem.

A system of constraints is a set of equations and inequalities that the variables of the problem must satisfy and that follow from the limited economic conditions of the problem.

In general, the system is written as

a i1 x 1 + a i2 x 2 + … + a in x n {≤, =, ≥} b i (i = 1, 2, …, m).

3. Define the objective function.

The objective function is a function Z(X) that characterizes the quality of the solution of the problem and whose extremum must be found. In general, the objective function is written Z(X) = c 1 x 1 + c 2 x 2 + … + c n x n .

Thus, the mathematical model has the form: find the variables of the problem x 1 , x 2 , …, x n satisfying the system of constraints

a i1 x 1 + a i2 x 2 + … + a in x n {≤, =, ≥} b i (i = 1, 2, …, m)

and the non-negativity condition

x j ≥ 0 (j = 1, 2, …, n), which provide the extremum of the objective function Z(X) = c 1 x 1 + c 2 x 2 + … + c n x n .

A feasible (admissible) solution of a linear programming problem is any set of values of the variables that satisfies the system of constraints and the non-negativity conditions.

The set of admissible solutions forms the feasible region of the problem (the region of admissible solutions).

The optimal solution is a feasible solution to the problem, in which the objective function reaches an extremum.
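Checking these two definitions is purely mechanical; the following Python sketch illustrates them on hypothetical data (the objective and constraints below are illustrative, not from the text): maximize Z = 3x1 + 2x2 subject to x1 + x2 ≤ 4, x1 + 3x2 ≤ 6, x1, x2 ≥ 0.

```python
# Feasibility check for an LP problem: a point is a feasible solution
# if it satisfies every constraint and the non-negativity conditions.

def is_feasible(x, A, b):
    """True if A x <= b componentwise and all variables are non-negative."""
    lhs = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]
    return all(l <= r + 1e-9 for l, r in zip(lhs, b)) and all(v >= -1e-9 for v in x)

A = [[1, 1], [1, 3]]   # hypothetical constraint matrix
b = [4, 6]             # hypothetical right-hand sides

print(is_feasible([1, 1], A, b))   # True: inside the feasible region
print(is_feasible([5, 0], A, b))   # False: violates x1 + x2 <= 4
```

The optimal solution is then the feasible point (in fact, one of finitely many basic feasible points) with the largest objective value.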

§ 3 Canonical form of a linear programming problem

The mathematical model of the problem must have a canonical form.

If the constraint system consists only of equations and all variables satisfy the non-negativity condition, then the problem has canonical form.

If the system contains at least one inequality, or some variable is not bounded by the non-negativity condition, then the problem has standard form. To bring the problem to canonical form, you need to:

pass from inequalities to equations as follows: into the left side of each inequality we introduce an additional variable, with coefficient (+1) for an inequality of the form ≤ and (-1) for an inequality of the form ≥; the additional variables enter the objective function with zero coefficients. If a variable is not subject to the non-negativity condition, it is replaced by the difference of two non-negative variables: x j = x j ' - x j '', where x j ', x j '' ≥ 0.

The general view of the canonical form:

Z = c 1 x 1 + c 2 x 2 + … + c n x n → max (min),
a i1 x 1 + a i2 x 2 + … + a in x n = b i (i = 1, 2, …, m),
x j ≥ 0 (j = 1, 2, …, n).

Chapter ΙΙ Solution of the problem by the simplex method

The simplex method is a method of successive improvement of the plan (solution); it is the most effective one and is used to solve any linear programming problem.

The name of the method comes from the Latin simplex ("simple"), because the initial region of admissible solutions of the problem had the simplest form. The ideas of the method were proposed by the Russian mathematician L.V. Kantorovich in 1939 and were then developed by G. Dantzig in 1949.

The simplex method allows for a finite number of steps to either find the optimal solution or prove that it does not exist.

§ 1 Statement of the problem

The company uses three types of machines, I, II, III, in the production process. At the same time, raw materials and labor resources are consumed, and overhead costs are taken into account.

Any linear programming problem can be reduced to a linear programming problem in canonical form. To do this, in the general case, you need to be able to reduce the problem of maximization to the problem of minimization; move from inequality constraints to equality constraints and replace variables that do not obey the non-negativity condition. The maximization of some function is equivalent to the minimization of the same function taken with the opposite sign, and vice versa.

The rule for reducing a linear programming problem to a canonical form is as follows:

  • if in the original problem it is required to determine the maximum of a linear function, then you should change the sign and look for the minimum of this function;
  • if the right side of the restrictions is negative, then this restriction should be multiplied by -1;
  • if there are inequalities among the constraints, then by introducing additional non-negative variables they are transformed into equalities;
  • if some variable x j has no sign restrictions, then it is replaced (in the objective function and in all restrictions) by the difference between two new non-negative variables:
    x j = x j + - x j - , where x j + , x j - ≥ 0 .
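The reduction rules above can be sketched in code. Everything here (the `to_canonical` helper and the constraint encoding as `(coefficients, sense, rhs)` triples) is an illustrative assumption, not a standard API; the input uses the first two constraints of Example 1 below.

```python
# A sketch of the canonical-form reduction rules for constraints
# a·x {<=, >=, =} b: flip rows with negative right-hand sides, then
# add a slack (+1) or surplus (-1) variable per inequality.

def to_canonical(constraints):
    """Return equality rows (extended_coeffs, rhs) over the original
    variables followed by one extra column per constraint."""
    rows = []
    for coeffs, sense, rhs in constraints:
        coeffs = list(coeffs)
        if rhs < 0:                       # rule: make the rhs non-negative
            coeffs = [-c for c in coeffs]
            rhs = -rhs
            sense = {"<=": ">=", ">=": "<=", "=": "="}[sense]
        rows.append((coeffs, sense, rhs))
    m = len(rows)
    result = []
    for i, (coeffs, sense, rhs) in enumerate(rows):
        extra = [0.0] * m
        if sense == "<=":
            extra[i] = 1.0                # slack variable, coefficient +1
        elif sense == ">=":
            extra[i] = -1.0               # surplus variable, coefficient -1
        result.append((coeffs + extra, rhs))
    return result

# First two constraints of Example 1: 2x2 - x3 <= 5 and x1 + x2 - x3 >= -1
canon = to_canonical([([0, 2, -1], "<=", 5),
                      ([1, 1, -1], ">=", -1)])
print(canon[0])  # ([0, 2, -1, 1.0, 0.0], 5)
print(canon[1])  # ([-1, -1, 1, 0.0, 1.0], 1)
```

Note that the second constraint is first multiplied by -1 (negative right-hand side) and then gets a +1 slack, which reproduces the equation -x1 - x2 + x3 + x5 = 1 derived in the text.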

Example 1. Reduction to the canonical form of a linear programming problem:

min L = 2x 1 + x 2 - x 3 ;
2x 2 - x 3 ≤ 5;
x 1 + x 2 - x 3 ≥ -1;
2x 1 - x 2 ≤ -3;
x 1 is unrestricted in sign; x 2 ≥ 0; x 3 ≥ 0.

Let us introduce balancing variables x 4 , x 5 , x 6 into the inequalities of the constraint system. The system will then be written in the form of equalities: in the first and third constraints the variables x 4 , x 6 are entered into the left side with a "+" sign, and in the second constraint the variable x 5 is entered with a "-" sign.

2x 2 - x 3 + x 4 = 5;
x 1 + x 2 - x 3 - x 5 = -1;
2x 1 - x 2 + x 6 = -3;
x 4 ≥ 0; x 5 ≥ 0; x 6 ≥ 0.

The free terms in the canonical form must be non-negative; to achieve this, we multiply the last two equations by -1:

2x 2 - x 3 + x 4 \u003d 5;
-x 1 - x 2 + x 3 + x 5 = 1;
-2x1 + x2 - x6 = 3.

In the canonical form of a linear programming problem, all variables entering the system of constraints must be non-negative. Let us set x 1 = x 1 ' - x 7 , where x 1 ' ≥ 0, x 7 ≥ 0 .

Substituting this expression into the system of constraints and the objective function and writing the variables in ascending order of the index, we obtain a linear programming problem presented in the canonical form:

L min = 2x 1 ' + x 2 - x 3 - 2x 7 ;
2x 2 - x 3 + x 4 = 5;
-x 1 ' - x 2 + x 3 + x 5 + x 7 = 1;
-2x 1 ' + x 2 - x 6 + 2x 7 = 3;
x 1 ' ≥ 0; x i ≥ 0, i = 2, 3, 4, 5, 6, 7.
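The transformation can be verified numerically: a point feasible for the original problem, mapped to the canonical variables, must satisfy the three equalities and give the same objective value. The test point below is chosen for illustration only.

```python
# Numerical check of the canonical form of Example 1: map a feasible
# point of the original problem to the canonical variables and verify
# the equalities and the objective value.

# Feasible point of the original problem (x1 is unrestricted in sign)
x1, x2, x3 = -2.0, 1.0, 0.0
assert 2*x2 - x3 <= 5 and x1 + x2 - x3 >= -1 and 2*x1 - x2 <= -3

# Substitution x1 = x1p - x7 with x1p, x7 >= 0, plus slack/surplus values
x1p, x7 = max(x1, 0.0), max(-x1, 0.0)
x4 = 5 - (2*x2 - x3)            # slack of the first constraint
x5 = 1 + x1 + x2 - x3           # surplus of the second constraint
x6 = -2*x1 + x2 - 3             # slack of the third constraint

# The three canonical equations and non-negativity
assert 2*x2 - x3 + x4 == 5
assert -x1p - x2 + x3 + x5 + x7 == 1
assert -2*x1p + x2 - x6 + 2*x7 == 3
assert min(x1p, x2, x3, x4, x5, x6, x7) >= 0

# The objective is unchanged: 2x1 + x2 - x3 equals 2x1' + x2 - x3 - 2x7
print(2*x1 + x2 - x3, 2*x1p + x2 - x3 - 2*x7)  # -3.0 -3.0
```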

Optimality condition for the basic design of the canonical LP problem. The simplex method and its convergence.

The simplex method is universal, since it allows solving almost any linear programming problem written in canonical form.

The idea of the simplex method (successive improvement of the plan) is that, starting from some initial reference solution, one moves in a directed way from one reference solution of the problem to another, toward the optimal one.

For maximization problems, the value of the objective function does not decrease during this movement.

Since the number of reference solutions is finite, after a finite number of steps we obtain the optimal reference solution.

A basic non-negative solution is called a support solution.

Simplex Method Algorithm

1. The mathematical model of the problem must be canonical. If it is non-canonical, then it must be reduced to canonical form.

2. We find the initial reference solution and check it for optimality.
To do this, fill in the simplex table 1.
All rows of the table of the 1st step are filled in according to the data of the system of restrictions and the objective function.

The following cases are possible when solving problems on the maximum:

1. If all the coefficients Δ j of the last row of the simplex table satisfy Δ j ≥ 0, then the found solution is optimal.

2. If at least one coefficient Δ j < 0, but the corresponding variable has not a single positive evaluative ratio, then we stop solving the problem, since F(X) → ∞, i.e., the objective function is not bounded on the region of admissible solutions.

3. If at least one coefficient of the last row is negative, and the corresponding variable has at least one positive evaluative ratio, then you need to pass to a new reference solution.

4. If there are several negative coefficients in the last row, then the variable corresponding to the largest negative coefficient in absolute value is introduced into the basic variable (BV) column.

5. If at least one coefficient Δ k < 0, then the k-th column is taken as the leading one.

6. The leading row is the one that corresponds to the minimum ratio of the free terms b i to the positive coefficients of the leading, k-th, column.

7. The element at the intersection of the leading row and the leading column is called the leading element.

Filling in the simplex table of step 2:

· fill the column of the new basic variable with zeros and a one;

· rewrite the leading row, dividing it by the leading element;

· if the leading row contains zeros, then the corresponding columns can be carried over into the next simplex table unchanged;

· the remaining coefficients are found by the "rectangle" rule.

We obtain a new reference solution, which we check for optimality:

If all the coefficients of the last row satisfy Δ j ≥ 0, then the found solution is optimal.

If not, then we fill in the simplex table of the next step, and so on.

If the objective function F(X) requires finding the minimum value, then the optimality criterion of the problem is the non-positivity of the coefficients Δ j for all j = 1, 2, …, n.
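The steps above can be sketched as a small tableau implementation for problems of the form max c·x, Ax ≤ b, x ≥ 0 with b ≥ 0 (so the slack variables give a ready unit basis). This is an illustrative sketch without anti-cycling safeguards, not production code; the sample problem at the end is hypothetical.

```python
# Minimal tableau simplex for: max c·x, A x <= b, x >= 0, with b >= 0.
# The last tableau row holds the estimates Δj (as -c extended by zeros).

def simplex_max(c, A, b):
    m, n = len(A), len(c)
    # Constraint rows with slack columns and the RHS, then the Δ row
    T = [list(map(float, A[i])) + [1.0 if j == i else 0.0 for j in range(m)]
         + [float(b[i])] for i in range(m)]
    T.append([-float(cj) for cj in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))                 # start from the slack basis
    while True:
        # Leading column: the most negative coefficient of the last row
        k = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][k] >= -1e-9:
            break                                 # all Δj >= 0: optimal
        # Leading row: minimum ratio b_i over positive column entries
        ratios = [(T[i][-1] / T[i][k], i) for i in range(m) if T[i][k] > 1e-9]
        if not ratios:
            raise ValueError("objective unbounded on the feasible region")
        _, r = min(ratios)
        piv = T[r][k]
        T[r] = [v / piv for v in T[r]]            # divide the leading row
        for i in range(m + 1):                    # "rectangle" rule
            if i != r and abs(T[i][k]) > 1e-12:
                f = T[i][k]
                T[i] = [v - f * w for v, w in zip(T[i], T[r])]
        basis[r] = k
    x = [0.0] * (n + m)
    for i, bi in enumerate(basis):
        x[bi] = T[i][-1]
    return x[:n], T[-1][-1]                       # solution and max value

# Hypothetical problem: max 2x1 + 3x2, x1 + 2x2 <= 14, 3x1 + x2 <= 22
x, z = simplex_max([2, 3], [[1, 2], [3, 1]], [14, 22])
print(x, z)  # [6.0, 4.0] 24.0
```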

Convergence of the simplex method. Degeneracy in LP problems. The most important property of any computational algorithm is convergence, i.e., the possibility of obtaining the desired results (with a given accuracy) in a finite number of steps (iterations) during its application.

It is easy to see that problems with the convergence of the simplex method can potentially arise at the stage of choosing the leading row, in the case when the same minimum value of the ratio is reached for several rows of the table T(q) at the same time. Then at the next iteration the column b(β(q+1)) will contain zero elements.


Optimal Solution - Problem - Linear Programming


The optimal solution of the linear programming problem is achieved at one of the reference points, where at least n - m variables are equal to zero.

Using the optimal solution of the linear programming problem, it is possible to find admissible changes in the DS for which L still remains constant.

If there is an optimal solution to a linear programming problem, then there is a basic optimal solution.

It is proved that the optimal solution of the linear programming problem is located on the boundary of the region of admissible values ​​of controlled variables, which is a polyhedron in n-dimensional space and defined by a system of linear constraints.

Since z is an optimal solution to a linear programming problem with m constraints, this solution contains at most m strictly positive variables.

It is proved that the optimal solution of the linear programming problem is located on the boundary of the region of admissible values of the controlled variables, which is a polyhedron in n-dimensional space defined by a system of linear constraints. The coordinates of each vertex are determined by solving a system of equations (constraints), and with n controlled variables and m constraints one has to solve a system of m equations for each vertex. The number of such combinations grows very rapidly with increasing m and n, so the search for a solution requires a very large number of calculations, inaccessible even to a computer.

So, in the case of D 1, the optimal solution of the linear programming problem turns out to be automatically integer.

As shown in Part 1, the optimal solution to a linear programming problem is by no means necessarily integer, and at the same time, there are many problems whose nature requires that the solution be integer. Some of these problems are not, at first glance, integer programming problems, but they can be formulated as such.

Obviously, not every basic solution is an optimal solution to a linear programming problem. However, the optimal solution of a nondegenerate problem must always be the basis for the system of equations (VIII, 42), and thus the problem of finding the optimal solution consists in enumeration of only the basic solutions of the system of equations (VIII, 42), among which the optimal one is found.


After performing several iterations in step 3, numerous alternative optimal solutions to the linear programming problem may appear.


Such cycling is sometimes called continuous degeneracy. Unfortunately, this phenomenon often occurs in large LP problems. There are also many examples of low-dimensional problems (no more than 10 variables and equations) that require thousands of iterations to achieve convergence.

In these cases, the simplex method is used: an iterative (step-by-step) procedure for determining the optimal solution to a linear programming problem. Calculations by the simplex method begin with the determination of a feasible solution, after which other feasible solutions are found and checked for possible improvement. The transition from one solution to another continues until no further improvement is possible. Standard computer programs that use the simplex method to solve management problems representable as linear programming problems are widely available.

If the system of linear constraints has a special structure, for example, if it forms a network model, then at step 2, when finding the optimal solution to the linear programming problem, this circumstance can be used.

The idea of proportional distribution was implemented in the form of a two-stage calculation algorithm proposed by I.I. Dikin, which essentially uses the property of the interior-point method to produce a relative interior point of the set of optimal solutions to a linear programming problem. This property means that the boundary values according to the inequality conditions (2.3.2) - (2.3.4) are achieved only for those variables that have these boundary values in every other optimal solution as well.


Graphical Method for Solving a Linear Programming Problem

Consider the LP problem in standard form for the case of two variables:

z = c 1 x 1 + c 2 x 2 → max (min),
a i1 x 1 + a i2 x 2 ≤ b i (i = 1, 2, …, m),    (10)
x 1 ≥ 0, x 2 ≥ 0.

Let the system of inequalities (10) be compatible (have at least one solution). Any inequality of this system geometrically defines a half-plane with boundary line a i1 x 1 + a i2 x 2 = b i . The non-negativity conditions define half-planes with the boundary lines x 1 = 0 and x 2 = 0.

Since the system is consistent, the half-planes, being convex sets, intersect in a common part, which is a convex set consisting of the points whose coordinates are solutions of this system. The totality of all these points is called the solution polygon. It can be a point, a segment, a ray, a straight line, a closed polygon, or an unbounded polygonal region.

Geometrically, solving the LP problem means finding a point of the solution polygon whose coordinates give the objective function its largest (smallest) value, all points of the polygon being admissible solutions.

Consider the so-called level line of the objective function z, i.e. the line along which this function takes the same fixed value c: c 1 x 1 + c 2 x 2 = c.

Algorithm for solving a linear programming problem by the graphical method (the number of variables is two).

1. A polygonal region of feasible solutions corresponding to the constraints is constructed on the plane. Then the gradient vector of the objective function z is constructed at any point of the region of feasible solutions.

2. The straight line (level line of the function z), perpendicular to the gradient vector, is moved parallel to itself in the direction of the gradient vector in the case of a maximum problem (and in the opposite direction in the case of a minimum problem) until it leaves the region of feasible solutions. The limit point (or points) of the region are the optimal points.

3. To find the coordinates of the optimal point, it is necessary to solve a system of equations that corresponds to the straight lines, the intersection of which forms this point.


The value of the objective function at this point will be optimal, and the coordinates of the point themselves will be the solution to the LP problem .

Example. Solve the problem geometrically:

Let us construct the polygon OABCD of all admissible solutions and the direction vector of the objective function (Fig. 1). The direction of the gradient vector indicates the direction of increase of the objective function. Since the problem under consideration is a maximum problem, we move the line perpendicular to this vector, parallel to itself, in the direction of the vector until it leaves the region of admissible solutions. The solution of the problem lies on the boundary of the region, in our case at the point C. The point C lies at the intersection of two boundary lines, so its coordinates are determined by solving the system of the corresponding equations:

whence x 1 = 6, x 2 = 4, i.e. the point C has coordinates (6, 4).

The maximum (the largest value of the objective function) is reached at this point. Answer: at the optimal solution x 1 = 6, x 2 = 4, i.e. the maximum profit can be achieved by producing 6 units of the first and 4 units of the second product.
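Computationally, the graphical method reduces to enumerating the vertices of the solution polygon (intersections of boundary lines that are feasible) and evaluating the objective at each. The sketch below uses assumed data, since the coefficients of this example are not reproduced in the text; the constraints were chosen so that the optimum falls at the point (6, 4).

```python
# Vertex enumeration for a 2-variable LP (the computational form of the
# graphical method). Assumed data: max z = 2x1 + 3x2 subject to
# x1 + 2x2 <= 14, 3x1 + x2 <= 22, x1 >= 0, x2 >= 0.
from itertools import combinations

# Each boundary line as (a1, a2, b), meaning a1*x1 + a2*x2 = b
lines = [(1, 2, 14), (3, 1, 22), (1, 0, 0), (0, 1, 0)]
ineqs = [(1, 2, 14), (3, 1, 22)]           # a1*x1 + a2*x2 <= b

def intersect(l1, l2):
    """Intersection point of two lines by Cramer's rule, or None."""
    (a, b, e), (c, d, f) = l1, l2
    det = a * d - b * c
    if abs(det) < 1e-12:
        return None                         # parallel lines
    return ((e * d - b * f) / det, (a * f - e * c) / det)

vertices = []
for l1, l2 in combinations(lines, 2):
    p = intersect(l1, l2)
    if p is None:
        continue
    x1, x2 = p
    if (x1 >= -1e-9 and x2 >= -1e-9 and
            all(a * x1 + b * x2 <= rhs + 1e-9 for a, b, rhs in ineqs)):
        vertices.append((x1, x2))           # feasible corner point

best = max(vertices, key=lambda p: 2 * p[0] + 3 * p[1])
print(best, 2 * best[0] + 3 * best[1])      # (6.0, 4.0) 24.0
```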

INTRODUCTION

Modern economic theory, both at the micro and macro levels, includes mathematical models and methods as a natural, necessary element. The use of mathematics in economics makes it possible, firstly, to identify and formally describe the most important, essential relationships between economic variables and objects: the study of such a complex object requires a high degree of abstraction. Secondly, from clearly formulated initial data and relations, deduction methods can be used to obtain conclusions that are adequate to the object under study to the same extent as the assumptions made. Thirdly, the methods of mathematics and statistics make it possible to obtain new knowledge about an object in an inductive way: to evaluate the form and parameters of the dependences of its variables, which are most consistent with the available observation. Finally, fourthly, the use of the language of mathematics allows us to accurately and compactly state the provisions of economic theory, to formulate its concepts and conclusions.

Mathematical models used in the economy can be divided into classes according to a number of features related to the features of the modeled object, the purpose of modeling and the tools used: micro- and macroeconomic models, theoretical and equilibrium, statistical and dynamic.

The essence of optimization methods lies in the fact that, based on the availability of certain resources, such a method of their use (distribution) is chosen that provides the maximum (minimum) of the indicator of interest to us.

As optimization methods in the economy, all the main sections of mathematical programming (planning) are used.

The mathematical discipline that deals with the study of extreme (maximum or minimum) problems of management, planning and the development of methods for solving them is called mathematical programming.

In general, the mathematical formulation of the extremal problem consists in determining the largest or smallest value of the objective function f(x 1 , x 2 , …, x n ) under the conditions g i (x 1 , x 2 , …, x n ) ≤ b i (i = 1, 2, …, m), where f and g i are given functions and b i are some real numbers.

Depending on the type of the objective function and the constraints, mathematical programming is divided into linear and non-linear. The most studied section of mathematical programming is linear programming.

Definition.


Linear programming is the science of the methods of investigating and finding the extreme (maximum and minimum) values of a linear function upon whose unknowns linear restrictions are imposed.

Such a linear function is called the objective function, and the restrictions, in the form of equations or inequalities, are called the system of restrictions.

Definition. The mathematical expression of the objective function and its constraints is called mathematical model of the economic problem.

Consider some problems of linear programming (LPP).

1. The problem of the use of resources (problem of production planning).

For the manufacture of various products, the enterprise uses three different types of raw materials. The consumption rates of raw materials for the production of one unit of each product, as well as the total amount of raw materials of each type that can be used by the enterprise, are given in the table.

Draw up a production plan under which the total cost of all products manufactured by the enterprise is maximal.

Let us construct a mathematical model of this problem.

Denote by x 1 the desired output of the first product, by x 2 that of the second, and by x 3 that of the third.

Since the consumption rates of each type of raw material are known, we can find the total consumption of each type of raw material for the manufacture of all products. And since there are restrictions on the raw-material fund, the total consumption of raw materials of each type should be no more than the total available amount, i.e.

we obtain the following system of inequalities:

a 11 x 1 + a 12 x 2 + a 13 x 3 ≤ b 1 ,
a 21 x 1 + a 22 x 2 + a 23 x 3 ≤ b 2 ,    (1)
a 31 x 1 + a 32 x 2 + a 33 x 3 ≤ b 3 .

Economically, the variables can take only non-negative values:

x 1 ≥ 0, x 2 ≥ 0, x 3 ≥ 0.    (2)

The cost of all products of the j-th type will be c j x j . Accordingly, the total cost of the products manufactured by the enterprise will be

Z = c 1 x 1 + c 2 x 2 + c 3 x 3 .    (3)

We need to maximize this function. Thus, among all non-negative solutions of system (1), it is necessary to find one for which function (3) takes the maximum value.

This problem can easily be generalized to the case of producing n types of products using m types of raw materials (resources).

Denote by x j the number of units of product j planned for production, by b i the stock of resources of the i-th type, by a ij the specific consumption of the i-th resource for the manufacture of the j-th product, and by c j the profit from the sale of a unit of product of the j-th type.

Then the economic-mathematical model of the problem of the use of resources in the general setting takes the form: find a production plan X = (x 1 , x 2 , …, x n ) that satisfies the main system of restrictions

a i1 x 1 + a i2 x 2 + … + a in x n ≤ b i (i = 1, 2, …, m),

the additional system of restrictions

x j ≥ 0 (j = 1, 2, …, n),

and for which the objective function

Z = c 1 x 1 + c 2 x 2 + … + c n x n

takes on the maximum value.
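With SciPy's LP solver the general model above can be solved directly. The data below (two products, three resources) are hypothetical, since the text's table is not reproduced; note that `linprog` minimizes, so the profit vector is negated to maximize.

```python
# The general resource-allocation model solved with scipy.optimize.linprog
# on hypothetical data: profits c, consumption rates A, resource stocks b.
from scipy.optimize import linprog

c = [3, 2]                      # profit per unit of products 1 and 2
A = [[1, 1],                    # resource I:   x1 +  x2 <= 4
     [1, 3],                    # resource II:  x1 + 3x2 <= 6
     [1, 0]]                    # resource III: x1       <= 3
b = [4, 6, 3]

# linprog minimizes, so negate c to maximize total profit
res = linprog([-v for v in c], A_ub=A, b_ub=b,
              bounds=[(0, None)] * 2, method="highs")
print(res.x, -res.fun)          # optimal plan and maximum profit
```

For these numbers the optimal plan is x = (3, 1) with maximum profit 11, which can also be checked by hand at the vertices of the feasible polygon.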

Comment. To compose a mathematical model of an LP problem, you need to:

– enter the notation of variables;

- based on the purpose of the economic research, compose the objective function;

- taking into account the limitations on the economic indicators of the problem and their quantitative patterns, write down the system of restrictions.

The solution of linear programming problems is based on the concepts of analytic geometry in n-dimensional vector space.

Reduction of the general LP problem to the canonical form.

The general form of the LP problem is as follows:

Z = c 1 x 1 + c 2 x 2 + … + c n x n → max (min),    (1)

a i1 x 1 + a i2 x 2 + … + a in x n {≤, =, ≥} b i (i = 1, 2, …, m),    (2)

x j ≥ 0 (j = 1, 2, …, n),    (3)

where relation (1) is the objective function, (2) is the system of basic constraints, and (3) is the system of additional constraints.

Relations (2) and (3) form a complete system of restrictions.

The reduction of the system of basic constraints to the canonical form is carried out by introducing additional non-negative variables into the left-hand sides of the inequalities, with coefficient +1 if the inequality is of the form ≤ and -1 if it is of the form ≥. The additional variables enter the objective function with zero coefficients.

Definition. The LP problem is said to be given in the canonical form if its system of basic constraints is represented by equations.

Definition. The LP problem is said to be given in the standard canonical form if the following conditions are met:

1) the system of basic constraints is represented by equations and all of them are linearly independent;

2) the number of equations is less than the number of variables;

3) the problem of minimizing the objective function is solved;

4) the right parts of the system of basic constraints are non-negative;

5) all variables are also non-negative.

In most methods for solving linear programming problems, it is assumed that the system of constraints consists of equations and natural conditions for the non-negativity of variables.

However, when compiling models of many problems, constraints are mainly formed in the form of a system of inequalities, so it is necessary to be able to move from a system of inequalities to a system of equations.

This can be done like this:

Take the linear inequality

a 1 x 1 + a 2 x 2 + … + a n x n ≤ b

and add to its left side a quantity x n+1 such that the inequality turns into the equality

a 1 x 1 + a 2 x 2 + … + a n x n + x n+1 = b.

In this case, this quantity x n+1 is non-negative.

Example

Reduce the linear programming problem to canonical form:

Solution:

Let's move on to the problem of finding the maximum of the objective function.

To do this, we change the signs of the coefficients of the objective function.

To convert the second and third inequalities of the constraint system into equations, we introduce non-negative additional variables x 4 and x 5 .

The variable x 4 is entered on the left side of the second inequality with a "+" sign, since the inequality has the form "≤".

The variable x 5 is entered on the left side of the third inequality with the "-" sign, since the inequality has the form "≥".

The variables x 4 , x 5 enter the objective function with coefficients equal to zero.

We write the problem in canonical form:

SIMPLEX METHOD FOR SOLVING LINEAR PROGRAMMING PROBLEMS

This method is a method of purposeful enumeration of the reference solutions of a linear programming problem. It allows one, in a finite number of steps, either to find an optimal solution or to establish that none exists.

Algorithm of the simplex method for solving linear programming problems

In order to solve the problem by the simplex method, you must do the following:

1. Bring the problem to the canonical form

2. Find an initial reference solution with a "unit basis" (if there is no reference solution, then the problem has no solution because the system of constraints is inconsistent)

3. Calculate estimates of vector expansions in terms of the basis of the reference solution and fill in the table of the simplex method

4. If the criterion for the uniqueness of the optimal solution is satisfied, then the solution of the problem ends

5. If the condition for the existence of a set of optimal solutions is satisfied, then all optimal solutions are found by simple enumeration

An example of solving the problem by the simplex method

Example 1

Solve the problem using the simplex method:

Maximize the function

F = 10x 1 - 4x 3 → max

subject to restrictions in the form of inequalities

We bring the problem to the canonical form.

To do this, we introduce an additional variable x 5 with the coefficient +1 into the left side of the first inequality constraint. The variable x 5 is included in the objective function with a zero coefficient (i.e., it is not included).

We get:

F = 10x 1 - 4x 3 + 0·x 5 → max

In the presence of restrictions in the form of inequalities

We find the initial reference solution. To do this, we set the free (non-basic) variables equal to zero: x 1 = x 3 = 0.

We get the reference solution X1 = (0.0.0.5.9/15.6) with unit basis B1 = (A4, A5, A6).

We calculate the estimates of expansions of the condition vectors in terms of the basis of the reference solution using the formula:

Δ k = C b X k - c k

· C b = (c 1 , c 2 , … , c m ) is the vector of the objective function coefficients at the basic variables;

· X k = (x 1k , x 2k , … , x mk ) is the expansion vector of the corresponding vector A k in the basis of the reference solution;

· c k is the coefficient of the objective function at the variable x k .

Estimates of the vectors included in the basis are always equal to zero.
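The estimate formula can be checked on a toy example. With the initial unit (slack) basis, C b consists of zeros and X k coincides with the original column A k , so Δ k = -c k ; all data below are hypothetical, not taken from the worked example.

```python
# The estimate formula Δk = C_b · X_k − c_k on a small hypothetical
# problem with the initial slack basis.

def delta(C_b, X_k, c_k):
    """Estimate of a condition vector relative to the current basis."""
    return sum(cb * xk for cb, xk in zip(C_b, X_k)) - c_k

# Hypothetical data: objective coefficients for x1, x2 and slacks x3, x4
c = [2, 3, 0, 0]
columns = [[1, 2], [3, 1], [1, 0], [0, 1]]   # expansions X_k in basis (A3, A4)
C_b = [0, 0]                                  # objective coeffs of basic vars

estimates = [delta(C_b, Xk, ck) for Xk, ck in zip(columns, c)]
print(estimates)  # [-2, -3, 0, 0]: the basis vectors get estimate 0
```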

The reference solution, coefficients of expansions and estimates of expansions of condition vectors in terms of the basis of the reference solution are written in a simplex table:

Above the table, for convenience of calculating the estimates, the coefficients of the objective function are written. The first column "B" contains the vectors included in the basis of the reference solution. The order of these vectors corresponds to the numbers of the allowed unknowns in the constraint equations. In the second column of the table, "C b ", the coefficients of the objective function at the basic variables are written in the same order. With the correct arrangement of the coefficients of the objective function in the "C b " column, the estimates of the unit vectors included in the basis are always equal to zero.

In the last row of the table, with the estimates Δ k , the value of the objective function on the reference solution, Z(X 1 ), is written in the column "A 0 ".

The initial reference solution is not optimal, since in the maximum problem the estimates Δ 1 = -2, Δ 3 = -9 for the vectors A 1 and A 3 are negative.

According to the reference solution improvement theorem, if in the maximum problem at least one vector has a negative estimate, then a new reference solution can be found on which the value of the objective function will be greater.

Let us determine which of the two vectors will lead to a larger increment of the objective function.

The increment of the objective function is found by the formula ΔZk = -θ0k·Δk.

We calculate the values of the parameter θ0k for the first and third columns by the formula θ0k = min{ xl0 / xlk : xlk > 0 }, where the minimum is taken over the rows l of the table.

We get θ01 = 6 for l = 1 and θ03 = 3 for l = 1 (table 26.1).

We find the increment of the objective function when the first vector is introduced into the basis

ΔZ1 = -6·(-2) = 12,

and for the third vector ΔZ3 = -3·(-9) = 27.

Therefore, for a faster approach to the optimal solution, the vector A3 should be introduced into the basis of the reference solution in place of the basis vector A6, since the minimum of the parameter θ03 is reached in the first row (l = 1).
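The selection rule above (ratio test plus increment comparison) can be sketched in Python. The data below is hypothetical, chosen only so that the ratio test reproduces θ01 = 6 and θ03 = 3 from the text; it is not the actual table 26.1:

```python
def theta(A0, col):
    """Ratio test: θ = min over rows i with col[i] > 0 of A0[i] / col[i].
    Returns (θ, index of the row where the minimum is reached)."""
    return min((A0[i] / col[i], i) for i in range(len(col)) if col[i] > 0)

def objective_increment(A0, col, delta):
    """Increment ΔZ = -θ·Δ obtained by entering a vector with estimate Δ < 0."""
    t, _ = theta(A0, col)
    return -t * delta

# Hypothetical A0 column and condition-vector columns:
A0 = [6, 12]
col1, col3 = [1, 2], [2, 4]          # made-up expansions of A1 and A3
print(objective_increment(A0, col1, -2))  # 12.0  (θ01 = 6)
print(objective_increment(A0, col3, -9))  # 27.0  (θ03 = 3)
```

The vector with the larger increment (here the third) is chosen to enter the basis, and the row where θ attains its minimum determines the vector that leaves.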

We perform the Jordan transformation with the element x13 = 2 and obtain the second reference solution

X2 = (0, 0, 3, 21, 42, 0)

with basis B2 = (A3, A4, A5).

(table 26.2)
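The Jordan transformation used at this step is an ordinary pivot operation: the pivot row is divided by the pivot element, after which multiples of it are subtracted from the other rows so that the pivot column becomes a unit vector. A minimal sketch with exact arithmetic (the 2×2 table is made up for illustration, not table 26.2):

```python
from fractions import Fraction

def jordan_step(T, r, s):
    """One Jordan (pivot) transformation of table T on element T[r][s]."""
    T = [[Fraction(x) for x in row] for row in T]   # exact arithmetic
    piv = T[r][s]
    T[r] = [x / piv for x in T[r]]                  # normalize pivot row
    for i in range(len(T)):
        if i != r:                                  # zero out column s elsewhere
            f = T[i][s]
            T[i] = [a - f * b for a, b in zip(T[i], T[r])]
    return T

print(jordan_step([[2, 4], [1, 3]], 0, 0))  # [[1, 2], [0, 1]]
```

Repeating this step with each newly chosen pivot element produces the successive simplex tables.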

This solution is not optimal, since the vector A2 has a negative estimate Δ2 = -6.

To improve the solution, it is necessary to introduce the vector A2 into the basis of the reference solution.

We determine the number of the vector to be removed from the basis. To do this, we calculate the parameter θ02 for the second column; it equals 7 for l = 2.

Therefore, the second basis vector, A4, is removed from the basis.

We perform the Jordan transformation with the element x22 = 3 and obtain the third reference solution

X3 = (0, 7, 10, 0, 63, 0)

with basis B3 = (A3, A2, A5) (table 26.3).

This solution is optimal and unique, since the estimates of all vectors not included in the basis are positive:

Δ1 = 7/2, Δ4 = 2, Δ6 = 7/2.

Answer: max Z(X) = 201 at X = (0, 7, 10, 0, 63, 0).


In practice, constraints in a linear programming problem are often given not by equations, but by inequalities.

Let us show how one can pass from a problem with inequality constraints to the main problem of linear programming.

Let there be a linear programming problem with variables x1, x2, …, xn, in which the constraints imposed on the variables have the form of linear inequalities. In some of them the inequality sign may be ≥, and in others ≤ (the second type is reduced to the first by a simple change of sign of both sides). Therefore, we write all inequality constraints in the standard form:

It is required to find a set of non-negative values that satisfies inequalities (4.1) and, in addition, minimizes the linear function:

From the problem posed in this way it is easy to pass to the main problem of linear programming. Indeed, we introduce the notation:

where xn+1, …, xn+m are some new variables, which we will call "additional". According to conditions (4.1), these additional variables must, as required, be non-negative.

Thus, we arrive at a linear programming problem in the following formulation: find non-negative values of the variables that satisfy the system of equations (4.3) and at the same time minimize the linear function of these variables:

As you can see, this is the main problem of linear programming in its pure form. Equations (4.3) are already resolved with respect to the basic variables, which are expressed in terms of the free variables. The function L is expressed only in terms of the "initial" variables (the coefficients of the "additional" variables in it are equal to zero).

Thus, we have reduced the linear programming problem with inequality constraints to the main problem of linear programming, but with a larger number of variables than the problem originally had.
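The passage just described can be sketched in code. The function below is a minimal illustration with hypothetical data, assuming all constraints are already in the standard form A·x ≤ b: one non-negative additional variable is appended per constraint, and the objective is extended with zero coefficients.

```python
def to_equalities(A, b, c):
    """Turn the system A·x ≤ b with objective c·x into equality form by
    appending one non-negative additional (slack) variable per constraint.

    Returns (A_eq, b, c_ext): row i of A gains +1 in column n + i, and the
    additional variables enter the objective with zero coefficients.
    """
    m = len(A)
    A_eq = [list(row) + [1 if j == i else 0 for j in range(m)]
            for i, row in enumerate(A)]
    c_ext = list(c) + [0] * m
    return A_eq, b, c_ext

# Hypothetical 2x2 system: x1 + 2x2 <= 4, 3x1 + x2 <= 6, objective 5x1 + 7x2.
A_eq, b, c_ext = to_equalities([[1, 2], [3, 1]], [4, 6], [5, 7])
print(A_eq)   # [[1, 2, 1, 0], [3, 1, 0, 1]]
print(c_ext)  # [5, 7, 0, 0]
```

The same routine works whether the objective is minimized or maximized, since the additional variables do not change its value.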

Example 1. There is a linear programming problem with inequality constraints: find non-negative values of the variables that satisfy the conditions

and minimizing the linear function

It is required to reduce this problem to the form of the main problem of linear programming.

Solution. We bring inequalities (4.4) to the standard form:

We introduce additional variables:

The task is to find non-negative values ​​of the variables

satisfying equations (4.6) and minimizing the linear function (4.5).

We have shown how to pass from a linear programming problem with inequality constraints to a problem with equality constraints (the main problem). The reverse transition is also always possible: from the main problem to a problem with inequality constraints. If in the first case we increased the number of variables, in the second we decrease it, eliminating the basic variables and keeping only the free ones.

Example 2. There is a linear programming problem with equality constraints (the main problem of linear programming):

and the function to be minimized

It is required to write it as a linear programming problem with inequality constraints.

Solution. Since the number of variables exceeds the number of independent equations by two, we choose some two of the variables as free. Note that the variables related by the first of equations (4.7) cannot both be chosen as free: the value of one of them is completely determined by the value of the other, while the free variables must be independent.

For the same reason, the variables connected by the second equation cannot both be chosen as free. We choose the free variables and express all the rest in terms of them:

Since the basic variables must be non-negative, conditions (4.9) can be replaced by the inequalities:

Let us pass, in the expression for the linear function L, to the free variables. Substituting into L their expressions (4.9), we get:
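The elimination of basic variables described above can be sketched as a small Gaussian-elimination routine. The system below is hypothetical, and we assume the chosen basic columns admit pivoting (nonzero pivots):

```python
from fractions import Fraction

def express_basics(A, b, basic):
    """Solve A·x = b for the variables listed in `basic`, expressing each
    basic variable as  x_basic = const + Σ coef·x_free.

    Returns a dict  basic_index -> (const, {free_index: coef}).
    A sketch for small systems; assumes the basic columns are independent.
    """
    m, n = len(A), len(A[0])
    # augmented matrix with exact arithmetic
    M = [[Fraction(x) for x in row] + [Fraction(bi)]
         for row, bi in zip(A, b)]
    # reduce the basic columns to a unit matrix
    for r, col in enumerate(basic):
        piv = M[r][col]
        M[r] = [x / piv for x in M[r]]
        for i in range(m):
            if i != r:
                f = M[i][col]
                M[i] = [a - f * v for a, v in zip(M[i], M[r])]
    free = [j for j in range(n) if j not in basic]
    return {col: (M[r][n], {j: -M[r][j] for j in free})
            for r, col in enumerate(basic)}

# Hypothetical system: x1 + x3 = 3, x2 + x3 = 5, with x3 free.
print(express_basics([[1, 0, 1], [0, 1, 1]], [3, 5], [0, 1]))
# {0: (3, {2: -1}), 1: (5, {2: -1})}, i.e. x1 = 3 - x3, x2 = 5 - x3
```

Substituting these expressions into L and into the non-negativity conditions of the basic variables yields the equivalent inequality-constrained problem in the free variables alone.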

Mathematical models form the basis for solving economic problems.

The mathematical model of a problem is a set of mathematical relationships that describe the essence of the problem.

Drawing up a mathematical model includes:
  • selection of the problem variables
  • drawing up the system of constraints
  • choice of the objective function

The problem variables are the quantities X1, X2, …, Xn that fully characterize the economic process. They are usually written as a vector: X = (X1, X2, …, Xn).

The system of constraints of a problem is the set of equations and inequalities that describe the limited resources in the problem under consideration.

The objective function of a problem is a function of the problem variables that characterizes the quality of a solution and whose extremum is to be found.

In general, a linear programming problem can be written as follows:

This notation means the following: find the extremum of the objective function (1) and the corresponding variables X = (X1, X2, …, Xn), provided that these variables satisfy the system of constraints (2) and the non-negativity conditions (3).

A feasible solution (plan) of a linear programming problem is any n-dimensional vector X = (X1, X2, …, Xn) that satisfies the system of constraints and the non-negativity conditions.

The set of feasible solutions (plans) of the problem forms the region of feasible solutions (the feasible region).

The optimal solution (plan) of a linear programming problem is a feasible solution (plan) at which the objective function reaches its extremum.

An example of compiling a mathematical model

The task of using resources (raw materials)

Condition: m types of resources are used in the manufacture of n types of products. Draw up a mathematical model.

Known:

  • b i (i = 1,2,3,...,m) are the reserves of each i-th type of resource;
  • a ij (i = 1,2,3,...,m; j=1,2,3,...,n) are the costs of each i-th type of resource for the production of a unit volume of the j-th type of product;
  • c j (j = 1,2,3,...,n) is the profit from the sale of a unit volume of the j-th type of product.

It is required to draw up a plan for the production of products that provides maximum profit with given restrictions on resources (raw materials).

Solution:

We introduce a vector of variables X=(X 1 , X 2 ,...,X n), where x j (j = 1,2,...,n) is the volume of production of the j-th type of product.

The costs of the i-th type of resource for the production of the volume x j of products are equal to a ij x j ; therefore, the restriction on the use of resources for the production of all types of products has the form:

The profit from the sale of the j-th type of product is equal to c j x j , so the objective function is:

Answer. The mathematical model has the form:
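The model above can be checked numerically: a plan x is feasible when it is non-negative and respects every resource stock, and its quality is the total profit. A sketch with made-up data (2 resources, 3 products; all numbers are hypothetical, not from the text):

```python
def plan_profit(c, A, b, x):
    """Objective value Σ cj·xj of plan x, provided that xj >= 0 and the
    resource limits Σ aij·xj <= bi all hold; otherwise returns None."""
    feasible = all(xj >= 0 for xj in x) and all(
        sum(aij * xj for aij, xj in zip(row, x)) <= bi
        for row, bi in zip(A, b))
    return sum(cj * xj for cj, xj in zip(c, x)) if feasible else None

# Hypothetical data:
A = [[1, 2, 1],
     [2, 1, 3]]      # aij: units of resource i per unit of product j
b = [10, 12]         # bi: stock of each resource
c = [4, 5, 3]        # cj: profit per unit of product j

print(plan_profit(c, A, b, [2, 3, 1]))   # 26 (feasible plan)
print(plan_profit(c, A, b, [10, 0, 0]))  # None (exceeds resource 2)
```

An LP solver searches this feasible region for the plan with the largest profit; the function here only evaluates a given candidate plan.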

Canonical form of a linear programming problem

In the general case, a linear programming problem is written in such a way that both equations and inequalities are constraints, and variables can be either non-negative or arbitrarily changing.

In the case when all constraints are equations and all variables satisfy the non-negativity condition, the linear programming problem is called canonical.

It can be represented in coordinate, vector and matrix notation.

The canonical linear programming problem in coordinate notation has the form:

The canonical linear programming problem in matrix notation has the form:

  • A is the matrix of coefficients of the system of equations
  • X is the column matrix of the problem variables
  • A0 is the column matrix of the right-hand sides of the constraint system

Linear programming problems written in the following matrix form are often used; they are called symmetric:

Reduction of a general linear programming problem to canonical form

Most methods for solving linear programming problems assume that the system of constraints consists of equations together with the natural non-negativity conditions on the variables. However, when models of economic problems are compiled, the constraints are usually formulated as a system of inequalities, so one must be able to pass from a system of inequalities to a system of equations.

This can be done like this:

Take a linear inequality a1x1 + a2x2 + ... + anxn ≤ b and add to its left-hand side a quantity xn+1 such that the inequality becomes the equality a1x1 + a2x2 + ... + anxn + xn+1 = b. This quantity xn+1 is non-negative.

Let us illustrate this with an example.

Example 26.1

Reduce the linear programming problem to canonical form:

Solution:
Let's move on to the problem of finding the maximum of the objective function.
To do this, we change the signs of the coefficients of the objective function.
To convert the second and third inequalities of the constraint system into equations, we introduce non-negative additional variables x 4 x 5 (this operation is marked with the letter D on the mathematical model).
The variable x 4 is entered on the left side of the second inequality with a "+" sign, since the inequality has the form "≤".
The variable x 5 is entered on the left side of the third inequality with the "-" sign, since the inequality has the form "≥".
Variables x 4 x 5 are entered into the objective function with a coefficient. equal to zero.
We write the problem in canonical form.
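The steps of example 26.1 can be sketched as a generic conversion routine; the constraint data in the usage lines is hypothetical, not the system of example 26.1:

```python
def to_canonical(c, senses, A, b, minimize=True):
    """Convert an LP to max-form canonical shape: negate the objective if it
    was a minimization, then turn each inequality into an equation with a
    non-negative additional variable (+1 for '<=', -1 for '>=').

    senses -- one of '<=', '>=', '=' per constraint row.
    Returns (c_ext, A_eq, b); additional variables get zero coefficients.
    """
    c = [-cj for cj in c] if minimize else list(c)
    extra = [i for i, s in enumerate(senses) if s != '=']
    A_eq = []
    for i, row in enumerate(A):
        tail = [0] * len(extra)
        if senses[i] != '=':
            tail[extra.index(i)] = 1 if senses[i] == '<=' else -1
        A_eq.append(list(row) + tail)
    return c + [0] * len(extra), A_eq, list(b)

# Hypothetical problem: minimize 2x1 - x2 subject to
#   x1 + x2 = 3,   x1 <= 2,   x2 >= 1.
c_ext, A_eq, b = to_canonical([2, -1], ['=', '<=', '>='],
                              [[1, 1], [1, 0], [0, 1]], [3, 2, 1])
print(c_ext)  # [-2, 1, 0, 0]
print(A_eq)   # [[1, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, -1]]
```

Note the "-1" in the last row: a "≥" constraint receives its additional variable with a minus sign, exactly as in example 26.1.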
