Study guide: Matrices and determinants. Some properties of determinants: if two rows of a matrix are equal, its determinant is zero.

The main numerical characteristic of a square matrix is its determinant. Consider a second-order square matrix

\[A=\left[ \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right].\]

The second-order determinant is the number calculated according to the following rule:

\[\det A=\left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right|=a_{11}a_{22}-a_{12}a_{21}.\]

For example,

\[\left| \begin{matrix} 1 & 2 \\ 3 & 4 \end{matrix} \right|=1\cdot 4-2\cdot 3=-2.\]

Let us now consider a third-order square matrix

\[A=\left[ \begin{matrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{matrix} \right].\]

A third-order determinant is the number calculated according to the following rule:

\[\det A=a_{11}a_{22}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{13}a_{22}a_{31}-a_{12}a_{21}a_{33}-a_{11}a_{23}a_{32}.\]

To memorize the combination of terms in the expression for the third-order determinant, one usually uses Sarrus' rule: the first of the three terms entering the right-hand side with a plus sign is the product of the elements on the main diagonal of the matrix, and each of the other two is the product of the elements lying on a parallel to this diagonal and the element from the opposite corner of the matrix.

The last three terms that enter with a minus sign are defined in a similar way, only with respect to the secondary diagonal.

Example:
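The rule is easy to check in a few lines of Python. A minimal sketch (the test matrix and the function name are mine, chosen only for illustration):

```python
def det3(m):
    """Third-order determinant by the rule above (Sarrus / triangle rule)."""
    return (  m[0][0]*m[1][1]*m[2][2]   # main diagonal
            + m[0][1]*m[1][2]*m[2][0]   # parallel to it, times the opposite corner
            + m[0][2]*m[1][0]*m[2][1]   # the other parallel, times the opposite corner
            - m[0][2]*m[1][1]*m[2][0]   # secondary diagonal
            - m[0][1]*m[1][0]*m[2][2]   # parallel to it, times the opposite corner
            - m[0][0]*m[1][2]*m[2][1])  # the other parallel, times the opposite corner

A = [[2, 0, 0],
     [1, 3, 0],
     [1, 1, 4]]
print(det3(A))  # 24
```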

Basic properties of matrix determinants

1. The value of the determinant does not change when the matrix is transposed.

2. When two rows or columns of the matrix are interchanged, the determinant only changes sign, keeping its absolute value.

3. A determinant containing proportional rows or columns is equal to zero.

4. A common factor of the elements of some row or column can be taken out of the determinant sign.

5. If all elements of some row or column are equal to zero, then the determinant itself is equal to zero.

6. If we add to the elements of one row or column the elements of another row or column multiplied by an arbitrary factor, the value of the determinant does not change.
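These properties are easy to verify numerically. A minimal numpy sketch (the test matrix is arbitrary, chosen by me):

```python
import numpy as np

A = np.array([[2., 1., 3.],
              [0., 4., 1.],
              [5., 2., 0.]])
d = np.linalg.det(A)

# Property 1: transposition does not change the determinant.
assert np.isclose(np.linalg.det(A.T), d)

# Property 2: swapping two rows flips the sign.
B = A[[1, 0, 2], :]
assert np.isclose(np.linalg.det(B), -d)

# Property 4: a common factor of one row comes out of the determinant.
C = A.copy(); C[0] *= 7
assert np.isclose(np.linalg.det(C), 7 * d)

# Property 6: adding a multiple of one row to another changes nothing.
D = A.copy(); D[1] += 2.5 * A[0]
assert np.isclose(np.linalg.det(D), d)
```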

A minor of a matrix is a determinant obtained by deleting the same number of rows and columns from a square matrix.

If all minors of order above $r$ that can be composed from the matrix are equal to zero, while among the minors of order $r$ at least one is nonzero, then the number $r$ is called the rank of the matrix.

The algebraic complement of an element of a determinant of order $n$ is its minor of order $n-1$, obtained by deleting the row and column at whose intersection the element stands, taken with a plus sign if the sum of the element's indices is even and with a minus sign otherwise.

Thus,

\[A_{ij}={\left(-1 \right)}^{i+j}M_{ij},\]

where $M_{ij}$ is the corresponding minor of order $n-1$.

Calculating the determinant of a matrix by expansion along the elements of a row or column

The determinant of a matrix is equal to the sum of the products of the elements of any row (or any column) of the matrix and the corresponding algebraic complements of those elements. When calculating the determinant this way, one should follow this rule: choose the row or column with the largest number of zero elements. This technique can significantly reduce the amount of calculation.

Example: when expanding a determinant along the elements of its first column, any second-order determinant that is multiplied by a zero element need not be calculated at all, since the whole term vanishes.
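As a sketch of this rule, here is a small recursive Python implementation that expands along the first column and skips zero elements outright; all names are mine, not from the text:

```python
def minor(m, i, j):
    """Matrix m with row i and column j deleted."""
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det(m):
    """Determinant by expansion along the first column (j = 0)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for i in range(n):
        if m[i][0] == 0:
            continue                      # zero elements contribute nothing
        total += (-1) ** i * m[i][0] * det(minor(m, i, 0))
    return total

print(det([[1, 2], [3, 4]]))              # -2
```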

Inverse Matrix Calculation

When solving matrix equations, the inverse matrix is widely used. To a certain extent, it replaces the operation of division, which is absent in explicit form in matrix algebra.

Square matrices of the same order whose product is the identity matrix are called mutually inverse. The inverse matrix is denoted $A^{-1}$, and for it $A\cdot A^{-1}=A^{-1}\cdot A=E$.

The inverse matrix can be calculated only for a matrix for which $\det A\ne 0$.

The classical algorithm for calculating the inverse matrix

1. Write down the matrix $A^{T}$ transposed to the matrix $A$.

2. Replace each element of $A^{T}$ with the determinant obtained by deleting the row and column at whose intersection this element is located.

3. This determinant is taken with a plus sign if the sum of the element's indices is even, and with a minus sign otherwise.

4. Divide the resulting matrix by the determinant of the matrix $A$.
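A minimal Python sketch of this classical algorithm (transpose, signed minors, divide by the determinant); the helper names are mine:

```python
def minor(m, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** i * m[i][0] * det(minor(m, i, 0)) for i in range(len(m)))

def inverse(a):
    """Inverse via the transposed cofactor matrix divided by det(a)."""
    d = det(a)
    if d == 0:
        raise ValueError("det A = 0: the inverse matrix does not exist")
    n = len(a)
    t = [[a[j][i] for j in range(n)] for i in range(n)]    # step 1: transpose
    # steps 2-3: replace each element by its minor, with the (-1)^(i+j) sign
    cof = [[(-1) ** (i + j) * det(minor(t, i, j)) for j in range(n)]
           for i in range(n)]
    # step 4: divide by the determinant
    return [[cof[i][j] / d for j in range(n)] for i in range(n)]

print(inverse([[4, 7], [2, 6]]))   # [[0.6, -0.7], [-0.2, 0.4]]
```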

Let there be a table (called a matrix) consisting of four numbers:

\[\left( \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right)\]

The matrix has two rows and two columns. The numbers that make up this matrix are denoted by a letter with two indices. The first index indicates the row number, and the second index indicates the column number in which the given number stands. For example, $a_{12}$ means the number in the first row and second column; $a_{21}$ the number in the second row and first column. These numbers will be called elements of the matrix.

The second-order determinant corresponding to the given matrix is the number obtained as follows: $a_{11}a_{22}-a_{12}a_{21}$.

The determinant is denoted by the symbol

\[\Delta =\left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right|.\]

Thus,

\[\Delta =a_{11}a_{22}-a_{12}a_{21}.\]

The numbers $a_{11}, a_{12}, a_{21}, a_{22}$ are called elements of the determinant.

Let us present the properties of the second-order determinant.

Property 1. The determinant does not change if its rows are interchanged with the corresponding columns, i.e.

\[\left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right|=\left| \begin{matrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{matrix} \right|.\]

Property 2. When two rows (or columns) are interchanged, the determinant changes sign, preserving its absolute value, i.e.

\[\left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right|=-\left| \begin{matrix} a_{21} & a_{22} \\ a_{11} & a_{12} \end{matrix} \right|.\]

Property 3. A determinant with two identical rows (or columns) is equal to zero.

Property 4. A common factor of all elements of a row (or column) can be taken out of the determinant sign:

\[\left| \begin{matrix} ka_{11} & ka_{12} \\ a_{21} & a_{22} \end{matrix} \right|=k\left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right|.\]

Property 5. If all elements of any row (or column) are equal to zero, then the determinant is equal to zero.

Property 6. If to any row (or column) of the determinant we add the corresponding elements of another row (or column) multiplied by the same number $k$, then the determinant does not change its value, i.e.

\[\left| \begin{matrix} a_{11}+ka_{21} & a_{12}+ka_{22} \\ a_{21} & a_{22} \end{matrix} \right|=\left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right|.\]


I remember that before 8th grade I did not like algebra. Didn't like it at all. It infuriated me, because I didn't understand anything.

And then everything changed, because I figured out one key thing:

In mathematics in general (and algebra in particular) everything rests on a competent and consistent system of definitions. If you know the definitions and understand their essence, it will not be difficult to figure out the rest.

That's the topic of today's lesson. We will consider in detail several related questions and definitions, thanks to which you will deal with matrices, determinants, and all their properties once and for all.

Determinants are a central concept in matrix algebra. Like abbreviated multiplication formulas, they will haunt you throughout your advanced mathematics course. Therefore, we read, watch and understand thoroughly. :)

And we will start with the most essential: what is a matrix, and how to work with it.

Correct placement of indexes in the matrix

A matrix is just a table filled with numbers. No Neo here.

One of the key characteristics of a matrix is its dimension, i.e. the number of rows and columns it consists of. A matrix $A$ is usually said to have size $\left[ m\times n \right]$ if it has $m$ rows and $n$ columns. It is written like this:

Or like this:

There are other notations - it all depends on the preferences of the lecturer / seminar teacher / textbook author. But in any case, with all these $\left[ m\times n \right]$ and $a_{ij}$, the same problem arises:

Which index does what? Row number first, then column number? Or vice versa?

When reading lectures and textbooks, the answer will seem obvious. But when there is only a sheet with a task in front of you on the exam, you can get worried and suddenly get confused.

So let's deal with this issue once and for all. First, let's recall the usual coordinate system from the school mathematics course:

Introduction of a coordinate system on a plane

Remember it? It has an origin (the point $O=\left(0;0 \right)$) and the $x$ and $y$ axes, and each point on the plane is uniquely determined by its coordinates: $A=\left(1;2 \right)$, $B=\left(3;1 \right)$, etc.

And now let's take this construction and put it next to the matrix so that the origin is in the upper left corner. Why there? Because when we open a book, we start reading from the upper left corner of the page - it's easy to remember.

But where should the axes point? We will direct them so that our entire virtual "page" is covered by these axes. True, for this we will have to rotate our coordinate system. The only possible option for this arrangement:

Mapping a Coordinate System to a Matrix

Now every cell of the matrix has unambiguous coordinates $x$ and $y$. For example, the entry $a_{24}$ means that we are referring to the element with coordinates $x=2$ and $y=4$. The dimensions of the matrix are also uniquely specified by a pair of numbers:

Defining indexes in a matrix

Just take a close look at this picture. Play around with coordinates (especially when you work with real matrices and determinants) - and very soon you will realize that even in the most complex theorems and definitions you understand perfectly well what is being discussed.

Got it? Well, let's move on to the first step of enlightenment - the geometric definition of the determinant. :)

Geometric definition

First of all, I would like to note that the determinant exists only for square matrices of the form $\left[ n\times n \right]$. The determinant is a number that is calculated according to certain rules and is one of the characteristics of this matrix (there are other characteristics: rank, eigenvectors, but more on that in other lessons).

Well, what is this characteristic? What does it mean? It's simple:

The determinant of a square matrix $A=\left[ n\times n \right]$ is the volume of an $n$-dimensional parallelepiped, which is formed if we consider the rows of the matrix as vectors that form the edges of this parallelepiped.

For example, the determinant of a 2x2 matrix is just the area of a parallelogram, and for a 3x3 matrix it is already the volume of a 3-dimensional parallelepiped - the very one that so infuriates all high school students in stereometry lessons.

At first glance, this definition may seem completely inadequate. But let's not rush to conclusions - let's look at examples. In fact, everything is elementary, Watson:

Task. Find the determinants of the matrices:

\[\left| \begin{matrix} 1 & 0 \\ 0 & 3 \end{matrix} \right|\quad \left| \begin{matrix} 1 & -1 \\ 2 & 2 \end{matrix} \right|\quad \left| \begin{matrix} 2 & 0 & 0 \\ 1 & 3 & 0 \\ 1 & 1 & 4 \end{matrix} \right|\]

Solution. The first two determinants are 2x2. So, these are just the areas of parallelograms. Let's draw them and calculate the area.

The first parallelogram is built on the vectors $v_1=\left(1;0 \right)$ and $v_2=\left(0;3 \right)$:

The 2x2 determinant is the area of ​​the parallelogram

Obviously, this is not just a parallelogram but actually a rectangle. Its area is equal to

\[S=1\cdot 3=3.\]

The second parallelogram is built on the vectors $v_1=\left(1;-1 \right)$ and $v_2=\left(2;2 \right)$. Well, so what? This is also a rectangle:

Another 2x2 determinant

The sides of this rectangle (in fact, the lengths of vectors) are easily calculated using the Pythagorean theorem:

\[\begin{align} & \left| v_1 \right|=\sqrt{1^2+{\left(-1 \right)}^2}=\sqrt{2}; \\ & \left| v_2 \right|=\sqrt{2^2+2^2}=\sqrt{8}=2\sqrt{2}; \\ & S=\left| v_1 \right|\cdot \left| v_2 \right|=\sqrt{2}\cdot 2\sqrt{2}=4. \\ \end{align}\]

It remains to deal with the last determinant - there is already a 3x3 matrix. We'll have to remember the stereometry:


The 3x3 determinant is the volume of the parallelepiped

It looks mind-blowing, but in fact it is enough to recall the formula for the volume of a parallelepiped:

\[V=S\cdot h,\]

where $S$ is the area of the base (in our case, the area of the parallelogram in the $OXY$ plane) and $h$ is the height drawn to this base (in fact, the $z$-coordinate of the vector $v_3$).

The area of the parallelogram (we drew it separately) is also easy to calculate:

\[\begin{align} & S=2\cdot 3=6; \\ & V=S\cdot h=6\cdot 4=24. \\ \end{align}\]

That's all! We write down the answers.

Answer: 3; 4; 24.

A small note about the notation system. Someone will probably not like that I ignore the "arrows" over vectors. Allegedly, this way you can confuse a vector with a point or something else.

But let's be serious: we are already adult boys and girls, so we understand perfectly well from the context when we are talking about a vector, and when we are talking about a point. Arrows only litter the narrative, already stuffed to capacity with mathematical formulas.

And further. In principle, nothing prevents us from considering the determinant of a 1x1 matrix - such a matrix is just one cell, and the number written in this cell will be the determinant. But there is an important note here:

Unlike the classical volume, the determinant gives us the so-called "oriented volume", i.e. a volume that takes into account the order in which the row vectors are considered.

And if you want the volume in the classical sense of the word, you will have to take the modulus of the determinant; but for now don't worry about it - in a few seconds we will learn how to compute any determinant, with any signs, sizes, etc. :)

Algebraic definition

With all the beauty and clarity of the geometric approach, it has a serious drawback: it does not tell us anything about how to calculate this very determinant.

Therefore, now we will analyze an alternative definition - algebraic. To do this, we need a brief theoretical preparation, but at the output we will get a tool that allows us to calculate anything in matrices as we please.

True, there will be a new problem ... but first things first.

Permutations and inversions

Let's write down the numbers from 1 to $n$ in a row. You get something like this:

\[\left(1;2;...;n \right)\]

Now (purely for fun) let's swap a couple of numbers. You can swap neighboring ones:

Or not so neighboring ones:

And you know what? Nothing bad happens! In algebra, this thing is called a permutation. And it has a lot of properties.

Definition. A permutation of length $n$ is a string of $n$ different numbers written in any order. Usually, the first $n$ natural numbers are considered (that is, exactly the numbers 1, 2, ..., $n$), and then they are shuffled to obtain the desired permutation.

Permutations are denoted in the same way as vectors - just a letter and a sequential enumeration of their elements in brackets. For example: $p=\left(1;3;2 \right)$ or $p=\left(2;5;1;4;3 \right)$. The letter can be anything, but let it be $p$. :)

Further, for simplicity of presentation, we will work with permutations of length 5 - they are already serious enough to observe any suspicious effects, but not yet so severe for a fragile brain as permutations of length 6 and more. Here are examples of such permutations:

\[\begin{align} & p_1=\left(1;2;3;4;5 \right) \\ & p_2=\left(1;3;2;5;4 \right) \\ & p_3=\left(5;4;3;2;1 \right) \\ \end{align}\]

Naturally, a permutation of length $n$ can be considered as a function that is defined on the set $\left\{ 1;2;...;n \right\}$ and bijectively maps this set onto itself. Returning to the permutations $p_1$, $p_2$, and $p_3$ we just wrote down, we can legitimately write:

\[p_1\left(1 \right)=1;\quad p_2\left(3 \right)=2;\quad p_3\left(2 \right)=4.\]

The number of different permutations of length $n$ is always finite and equal to $n!$ - an easily provable fact from combinatorics. For example, if we wanted to write out all permutations of length 5, we would be at it for quite a while, since there are $5!=120$ such permutations.

One of the key characteristics of any permutation is the number of inversions in it.

Definition. An inversion in a permutation $p=\left(a_1;a_2;...;a_n \right)$ is any pair $\left(a_i;a_j \right)$ such that $i \lt j$ but $a_i \gt a_j$. Simply put, an inversion is when a larger number stands to the left of a smaller one (not necessarily its neighbor).

We will use $N\left(p \right)$ to denote the number of inversions in the permutation $p$, but be prepared to meet other notation in different textbooks and by different authors - there are no uniform standards here. The topic of inversions is very extensive, and a separate lesson will be devoted to it. Now our task is simply to learn how to count them in real problems.

For example, let's count the number of inversions in the permutation $p=\left(1;4;5;3;2 \right)$:

\[\left(4;3 \right);\ \left(4;2 \right);\ \left(5;3 \right);\ \left(5;2 \right);\ \left(3;2 \right).\]

Thus, $N\left(p \right)=5$. As you can see, there is nothing complicated here. I'll say right away: further on we will be interested not so much in the number $N\left(p \right)$ itself as in its parity. And here we smoothly move on to the key term of today's lesson.
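By the way, this count is trivial to automate. A tiny Python sketch (the function name is mine):

```python
def inversions(p):
    """Number of pairs (i, j) with i < j but p[i] > p[j]."""
    return sum(1
               for i in range(len(p))
               for j in range(i + 1, len(p))
               if p[i] > p[j])

print(inversions((1, 4, 5, 3, 2)))  # 5
```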

What is a determinant

Let $A=\left[ n\times n \right]$ be a square matrix. Then:

Definition. The determinant of the matrix $A=\left[ n\times n \right]$ is the algebraic sum of $n!$ terms composed as follows. Each term is the product of $n$ matrix elements, taken one from each row and each column, multiplied by (−1) to the power of the number of inversions:

\[\left| A \right|=\sum\limits_{p}{{\left(-1 \right)}^{N\left(p \right)}\cdot a_{1;p\left(1 \right)}\cdot a_{2;p\left(2 \right)}\cdot \ldots \cdot a_{n;p\left(n \right)}}\]

The fundamental point in choosing factors for each term in the determinant is the fact that no two factors are in the same row or in the same column.

Due to this, we can assume without loss of generality that the indices $i$ of the factors $a_{ij}$ run through the values $1,...,n$, while the indices $j$ form some permutation of the first ones:

\[j=p\left(i \right),\quad i=1,2,...,n.\]

And when there is a permutation $p$, we can easily calculate the inversions of $N\left(p \right)$ - and the next term of the determinant is ready.
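This definition translates into Python almost word for word. A sketch, deliberately inefficient (it really does sum all $n!$ terms):

```python
from itertools import permutations

def inversions(p):
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
               if p[i] > p[j])

def det(a):
    """Determinant straight from the definition: a sum over all n! permutations."""
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        term = (-1) ** inversions(p)
        for i in range(n):
            term *= a[i][p[i]]           # one factor from each row and column
        total += term
    return total

print(det([[1, -1], [2, 2]]))            # 4
```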

Naturally, no one forbids swapping the factors in any term (or in all of them at once - why fuss over trifles?), and then the first indices will also form some permutation. But in the end nothing changes: the total parity of inversions in the indices $i$ and $j$ is preserved under such shuffling, which is quite consistent with the good old rule:

By rearranging the factors, the product of numbers does not change.

But you don’t need to drag this rule to matrix multiplication - unlike multiplication of numbers, it is not commutative. But I digress. :)

Matrix 2x2

In fact, you can also consider a 1x1 matrix - it will be one cell, and its determinant, as you might guess, is equal to the number written in this cell. Nothing interesting.

So let's consider a 2x2 square matrix:

\[\left[ \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right]\]

Since the number of rows in it is $n=2$, then the determinant will contain $n!=2!=1\cdot 2=2$ terms. Let's write them out:

\[\begin{align} & {\left(-1 \right)}^{N\left(1;2 \right)}\cdot a_{11}\cdot a_{22}={\left(-1 \right)}^{0}\cdot a_{11}\cdot a_{22}=a_{11}a_{22}; \\ & {\left(-1 \right)}^{N\left(2;1 \right)}\cdot a_{12}\cdot a_{21}={\left(-1 \right)}^{1}\cdot a_{12}\cdot a_{21}=-a_{12}a_{21}. \\ \end{align}\]

Obviously, there are no inversions in the permutation $\left(1;2 \right)$, which consists of two elements, so $N\left(1;2 \right)=0$. But in the permutation $\left(2;1 \right)$ there is one inversion (namely, 2 stands before 1), so $N\left(2;1 \right)=1$.

In total, the universal formula for calculating the determinant for a 2x2 matrix looks like this:

\[\left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right|=a_{11}a_{22}-a_{12}a_{21}\]

Graphically, this can be represented as the product of the elements on the main diagonal, minus the product of the elements on the secondary:

2x2 matrix determinant

Let's look at a couple of examples:

\[\left| \begin{matrix} 5 & 6 \\ 8 & 9 \end{matrix} \right|;\quad \left| \begin{matrix} 7 & 12 \\ 14 & 1 \end{matrix} \right|.\]

Solution. Everything is computed in one line. First matrix:

\[5\cdot 9-6\cdot 8=45-48=-3\]

And the second one:

\[7\cdot 1-12\cdot 14=7-168=-161\]

Answer: -3; -161.

However, it was too easy. Let's look at 3x3 matrices - it's already interesting there.

Matrix 3x3

Now consider a 3x3 square matrix:

\[\left[ \begin{matrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{matrix} \right]\]

When calculating its determinant, we get $3!=1\cdot 2\cdot 3=6$ terms - not too much to panic, but enough to start looking for some patterns. First, let's write out all the permutations of the three elements and calculate the inversions in each of them:

\[\begin{align} & p_1=\left(1;2;3 \right)\Rightarrow N\left(p_1 \right)=N\left(1;2;3 \right)=0; \\ & p_2=\left(1;3;2 \right)\Rightarrow N\left(p_2 \right)=N\left(1;3;2 \right)=1; \\ & p_3=\left(2;1;3 \right)\Rightarrow N\left(p_3 \right)=N\left(2;1;3 \right)=1; \\ & p_4=\left(2;3;1 \right)\Rightarrow N\left(p_4 \right)=N\left(2;3;1 \right)=2; \\ & p_5=\left(3;1;2 \right)\Rightarrow N\left(p_5 \right)=N\left(3;1;2 \right)=2; \\ & p_6=\left(3;2;1 \right)\Rightarrow N\left(p_6 \right)=N\left(3;2;1 \right)=3. \\ \end{align}\]

As expected, there are 6 permutations $p_1$, ..., $p_6$ in total (naturally, one could write them out in a different order - that would change nothing), and the number of inversions in them varies from 0 to 3.

In total, we will have three terms with a "plus" (those where $N\left(p \right)$ is even) and three more with a "minus". Overall, the determinant is calculated according to the formula:

\[\left| \begin{matrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{matrix} \right|=\begin{matrix} a_{11}a_{22}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}- \\ -a_{13}a_{22}a_{31}-a_{12}a_{21}a_{33}-a_{11}a_{23}a_{32} \end{matrix}\]

Just don't sit down now and furiously cram all these indexes! Instead of incomprehensible numbers, it is better to remember the following mnemonic rule:

Triangle rule. To find the determinant of a 3x3 matrix, you need to add three products of the elements on the main diagonal and at the vertices of isosceles triangles with a side parallel to this diagonal, and then subtract the same three products, but on the secondary diagonal. Schematically it looks like this:


3x3 Matrix Determinant: Rule of Triangles

It is these triangles (or pentagrams - whichever you prefer) that they love to draw in all sorts of algebra textbooks and manuals. However, let's not talk about sad things. Let's better calculate one such determinant - as a warm-up before the real hardcore. :)

Task. Calculate the determinant:

\[\left| \begin{matrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 1 \end{matrix} \right|\]

Solution. We work according to the rule of triangles. First, let's calculate three terms made up of elements on the main diagonal and parallel to it:

\[\begin{align} & 1\cdot 5\cdot 1+2\cdot 6\cdot 7+3\cdot 4\cdot 8= \\ & =5+84+96=185 \\ \end{align}\]

Now let's deal with the side diagonal:

\[\begin{align} & 3\cdot 5\cdot 7+2\cdot 4\cdot 1+1\cdot 6\cdot 8= \\ & =105+8+48=161 \\ \end{align}\]

It remains only to subtract the second number from the first - and we get the answer:

\[185-161=24\]

That's all!

However, the determinants of 3x3 matrices are not yet the pinnacle of skill. The most interesting is waiting for us further. :)

General scheme for calculating determinants

As we know, as the dimension $n$ of a matrix grows, the number of terms in the determinant, $n!$, grows rapidly. After all, the factorial is a pretty fast-growing function.

Already for 4x4 matrices it becomes unpleasant to compute determinants head-on (i.e., through permutations). About 5x5 and larger I'd rather keep quiet. Therefore, certain properties of the determinant come into play, but understanding them requires a little theoretical preparation.

Ready? Go!

What is a matrix minor

Let an arbitrary matrix $A=\left[ m\times n \right]$ be given. Note: not necessarily square. Unlike determinants, minors are cute things that exist not only in harsh square matrices. We choose several (for example, $k$) rows and columns in this matrix, with $1\le k\le m$ and $1\le k\le n$. Then:

Definition. A minor of order $k$ is the determinant of the square matrix that appears at the intersection of the chosen $k$ rows and $k$ columns. We will also call this new matrix itself a minor.

Such a minor is denoted by $M_k$. Naturally, one matrix can have a whole bunch of minors of order $k$. Here is an example of a minor of order 2 for a $\left[ 5\times 6 \right]$ matrix:

Selecting $k = 2$ columns and rows to form a minor

It is not necessary for the selected rows and columns to be side by side, as in the example above. The main thing is that the number of selected rows and columns be the same (this is the number $k$).

There is another definition. Perhaps someone will like it more:

Definition. Let a rectangular matrix $A=\left[ m\times n \right]$ be given. If, after deleting one or more columns and one or more rows, a square matrix of size $\left[ k\times k \right]$ remains, then its determinant is the minor $M_k$. We will also sometimes call the matrix itself a minor - this will be clear from the context.

As my cat used to say, sometimes it's better to get food from the 11th floor once than to meow while sitting on the balcony.

Example. Let a matrix of size $\left[ 3\times 4 \right]$ be given:

By choosing row 1 and column 2, we get the first-order minor:

\[M_1=\left| 7 \right|=7\]

Selecting rows 2, 3 and columns 3, 4, we get a second-order minor:

\[M_2=\left| \begin{matrix} 5 & 3 \\ 6 & 1 \end{matrix} \right|=5-18=-13\]

And if you select all three rows, as well as columns 1, 2, 4, there will be a minor of the third order:

\[M_3=\left| \begin{matrix} 1 & 7 & 0 \\ 2 & 4 & 3 \\ 3 & 0 & 1 \end{matrix} \right|\]

It will not be difficult for the reader to find other minors of orders 1, 2 or 3. Therefore, we move on.

Algebraic additions

"Well, OK, and what do these minors give us?" you will surely ask. By themselves - nothing. But in square matrices, every minor has a "companion": an additional minor, and also an algebraic complement. And together these two gadgets will let us crack determinants like nuts.

Definition. Let a square matrix $A=\left[ n\times n \right]$ be given, in which a minor $M_k$ is chosen. Then the additional minor for the minor $M_k$ is the piece of the original matrix $A$ that remains after deleting all the rows and columns involved in forming the minor $M_k$:

Additional minor to the minor $M_2$

Let's clarify one point: the additional minor is not just a "piece of the matrix", but the determinant of this piece.

Additional minors are denoted with an asterisk, $M_k^{*}$:

where the operation $A\nabla M_k$ literally means "delete from $A$ the rows and columns included in $M_k$". This operation is not generally accepted in mathematics - I just made it up for the beauty of the story. :)

Complementary minors are rarely used on their own. They are part of a more complex construction - the algebraic addition.

Definition. The algebraic complement of the minor $M_k$ is the additional minor $M_k^{*}$ multiplied by ${\left(-1 \right)}^{S}$, where $S$ is the sum of the numbers of all rows and columns involved in the original minor $M_k$.

As a rule, the algebraic complement of the minor $M_k$ is denoted by $A_k$. Therefore:

\[A_k={\left(-1 \right)}^{S}\cdot M_k^{*}\]

Difficult? At first glance, yes. But not really - it's actually easy. Consider an example:

Example. Given a 4x4 matrix:

\[A=\left[ \begin{matrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \end{matrix} \right]\]

We choose a minor of the second order

\[M_2=\left| \begin{matrix} 3 & 4 \\ 15 & 16 \end{matrix} \right|\]

Captain Obvious hints, as it were, that rows 1 and 4, as well as columns 3 and 4, were involved in forming this minor. We cross them out and get the additional minor:

\[M_2^{*}=\left| \begin{matrix} 5 & 6 \\ 9 & 10 \end{matrix} \right|=50-54=-4\]

It remains to find the number $S$ and get the algebraic complement. Since we know the numbers of the involved rows (1 and 4) and columns (3 and 4), everything is simple:

\[\begin{align} & S=1+4+3+4=12; \\ & A_2={\left(-1 \right)}^{S}\cdot M_2^{*}={\left(-1 \right)}^{12}\cdot \left(-4 \right)=-4 \end{align}\]

Answer: $A_2=-4$

That's all! In fact, the whole difference between an additional minor and an algebraic addition is only in the minus in front, and even then not always.

Laplace's theorem

And so we have come to why, in fact, all these minors and algebraic complements are needed.

Laplace's theorem on the expansion of the determinant. Let $k$ rows (columns) be selected in a matrix of size $\left[ n\times n \right]$, with $1\le k\le n-1$. Then the determinant of this matrix is equal to the sum of all products of the minors of order $k$ contained in the selected rows (columns) and their algebraic complements:

\[\left| A \right|=\sum{M_k\cdot A_k}\]

Moreover, there will be exactly $C_n^k$ such terms.

Okay, okay: about $C_n^k$ I'm showing off - there was nothing like that in the original Laplace theorem. But no one has cancelled combinatorics, and literally a cursory glance at the statement will let you verify for yourself that there will be exactly that many terms. :)

We will not prove it, although this is not particularly difficult - all calculations come down to the good old permutations and even / odd inversions. However, the proof will be presented in a separate paragraph, and today we have a purely practical lesson.

Therefore, we turn to a special case of this theorem, when the minors are separate cells of the matrix.

Row and column expansion of determinant

What we are going to discuss now is precisely the main tool for working with determinants, for the sake of which all this business with permutations, minors, and algebraic complements was started.

Read and enjoy:

Corollary of Laplace's theorem (expansion of the determinant along a row/column). Let one row be selected in a $\left[ n\times n \right]$ matrix. The minors in this row will be $n$ individual cells:

\[M_1=a_{ij},\quad j=1,...,n\]

Additional minors are also easy to compute: just take the original matrix and cross out the row and column containing $a_{ij}$. We call such minors $M_{ij}^{*}$.

For the algebraic complement, the number $S$ is also needed, but in the case of a minor of order 1 it is simply the sum of the "coordinates" of the cell $a_{ij}$: $S=i+j$.

And then the original determinant can be written in terms of the $a_{ij}$ and $M_{ij}^{*}$ according to Laplace's theorem:

\[\left| A \right|=\sum\limits_{j=1}^{n}{a_{ij}\cdot {\left(-1 \right)}^{i+j}\cdot M_{ij}^{*}}\]

This is the row expansion formula. The same is true for columns.

Several conclusions can be drawn from this corollary:

  1. This scheme works equally well for rows and for columns. In practice, the expansion will most often go along columns rather than rows.
  2. The number of terms in the expansion is always exactly $n$. This is much less than $C_n^k$ and all the more less than $n!$.
  3. Instead of a single $\left[ n\times n \right]$ determinant, you will have to count several determinants of size one less: $\left[ \left(n-1 \right)\times \left(n-1 \right) \right]$.

The last fact is especially important. For example, instead of the brutal 4x4 determinant, now it will be enough to count several 3x3 determinants - we will somehow cope with them. :)
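Here is a small Python sketch of that idea: a recursive expansion that each time picks the column with the most zeros. The heuristic is mine, not a prescription from the theorem:

```python
def det(m):
    """Laplace expansion along the column with the most zeros."""
    n = len(m)
    if n == 1:
        return m[0][0]
    # pick the column where the most entries vanish
    j = max(range(n), key=lambda c: sum(m[i][c] == 0 for i in range(n)))
    total = 0
    for i in range(n):
        if m[i][j] == 0:
            continue                      # zero entries are skipped outright
        minor = [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]
        total += (-1) ** (i + j) * m[i][j] * det(minor)
    return total

M = [[0, 1, 1, 0],
     [1, 0, 1, 1],
     [1, 1, 0, 1],
     [1, 1, 1, 0]]
print(det(M))  # -2
```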

Task. Find the determinant:

\[\left| \begin{matrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{matrix} \right|\]

Solution. Let's expand this determinant by the first line:

\[\begin{align} \left| A \right|=1\cdot {\left(-1 \right)}^{1+1}\cdot \left| \begin{matrix} 5 & 6 \\ 8 & 9 \end{matrix} \right|+2\cdot {\left(-1 \right)}^{1+2}\cdot \left| \begin{matrix} 4 & 6 \\ 7 & 9 \end{matrix} \right|+3\cdot {\left(-1 \right)}^{1+3}\cdot \left| \begin{matrix} 4 & 5 \\ 7 & 8 \end{matrix} \right|= \end{align}\]

\[\begin{align} & =1\cdot \left(45-48 \right)-2\cdot \left(36-42 \right)+3\cdot \left(32-35 \right)= \\ & =1\cdot \left(-3 \right)-2\cdot \left(-6 \right)+3\cdot \left(-3 \right)=0. \\ \end{align}\]

Task. Find the determinant:

\[\left| \begin{matrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{matrix} \right|\]

Solution. For a change, let's work with columns this time. For example, in the last column there are two zeros at once - obviously, this will significantly reduce the calculations. Now you will see why.

So, we expand the determinant in the fourth column:

\[\begin{align} \left| \begin{matrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{matrix} \right|=0\cdot {\left(-1 \right)}^{1+4}\cdot \left| \begin{matrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{matrix} \right|+ & \\ +1\cdot {\left(-1 \right)}^{2+4}\cdot \left| \begin{matrix} 0 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{matrix} \right|+ & \\ +1\cdot {\left(-1 \right)}^{3+4}\cdot \left| \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \end{matrix} \right|+ & \\ +0\cdot {\left(-1 \right)}^{4+4}\cdot \left| \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{matrix} \right| & \\ \end{align}\]

And then - oh, miracle! - two terms immediately go down the drain, since they have a factor of "0". We are left with two 3x3 determinants, which we can easily handle:

\[\begin{align} & \left| \begin{matrix} 0 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{matrix} \right|=0+0+1-1-1-0=-1; \\ & \left| \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \end{matrix} \right|=0+1+1-0-0-1=1. \\ \end{align}\]

We return to the original determinant and find the answer:

\[\left| \begin{matrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{matrix} \right|=1\cdot \left(-1 \right)+\left(-1 \right)\cdot 1=-2\]

And that's it. And no $4!=24$ terms had to be counted. :)

Answer: -2

Basic properties of the determinant

In the last problem, we saw how the presence of zeros in the rows (columns) of a matrix drastically simplifies the expansion of the determinant and, in general, all calculations. A natural question arises: is it possible to make these zeros appear even in the matrix where they were not originally there?

The answer is clear: we can. And here the properties of the determinant come to our aid:

  1. If you swap two rows (columns), the determinant changes sign;
  2. If one row (column) is multiplied by the number $k$, the entire determinant is also multiplied by $k$;
  3. If you take one row and add (subtract) it to (from) another any number of times, the determinant will not change;
  4. If two rows of the determinant are identical or proportional, or one of the rows is filled with zeros, then the entire determinant is zero;
  5. All of the above properties hold for columns as well;
  6. Transposing a matrix does not change the determinant;
  7. The determinant of a product of matrices equals the product of the determinants.

Of particular value is the third property: we can subtract one row (column) from another until zeros appear in the right places.

Most often, the calculation comes down to "zeroing out" an entire column except for one element, and then expanding the determinant along this column, obtaining a matrix of size one smaller.
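By the way, taken to its logical end this "zeroing" strategy is just Gaussian elimination. A minimal sketch in Python floats (function and variable names are mine):

```python
def det(m):
    """Determinant by reduction to upper triangular form."""
    a = [row[:] for row in m]             # work on a copy
    n = len(a)
    sign = 1
    for j in range(n):
        # find a nonzero pivot in column j, swapping rows if needed
        pivot = next((i for i in range(j, n) if a[i][j] != 0), None)
        if pivot is None:
            return 0                       # a zero column means a zero determinant
        if pivot != j:
            a[j], a[pivot] = a[pivot], a[j]
            sign = -sign                   # a row swap flips the sign
        for i in range(j + 1, n):
            f = a[i][j] / a[j][j]
            for k in range(j, n):
                a[i][k] -= f * a[j][k]     # adding a multiple of a row changes nothing
    prod = sign
    for j in range(n):
        prod *= a[j][j]                    # det of a triangular matrix
    return prod

M = [[1, 2, 3, 4],
     [4, 1, 2, 3],
     [3, 4, 1, 2],
     [2, 3, 4, 1]]
print(det(M))  # -160.0 (up to float rounding)
```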

Let's see how this works in practice:

Task. Find the determinant:

\[\left| \begin{matrix} 1 & 2 & 3 & 4 \\ 4 & 1 & 2 & 3 \\ 3 & 4 & 1 & 2 \\ 2 & 3 & 4 & 1 \end{matrix} \right|\]

Solution. There are no zeros to be seen here at all, so you can "dig into" any row or column - the amount of computation will be roughly the same. Let's not waste time on trifles and "zero out" the first column: it already has a cell with a one, so we just take the first row and subtract it 4 times from the second, 3 times from the third, and 2 times from the last.

As a result, we will get a new matrix, but its determinant will be the same:

\[\left| \begin{matrix} 1 & 2 & 3 & 4 \\ 4 & 1 & 2 & 3 \\ 3 & 4 & 1 & 2 \\ 2 & 3 & 4 & 1 \end{matrix} \right|\begin{matrix} \downarrow \\ -4 \\ -3 \\ -2 \end{matrix}=\left| \begin{matrix} 1 & 2 & 3 & 4 \\ 4-4\cdot 1 & 1-4\cdot 2 & 2-4\cdot 3 & 3-4\cdot 4 \\ 3-3\cdot 1 & 4-3\cdot 2 & 1-3\cdot 3 & 2-3\cdot 4 \\ 2-2\cdot 1 & 3-2\cdot 2 & 4-2\cdot 3 & 1-2\cdot 4 \end{matrix} \right|=\left| \begin{matrix} 1 & 2 & 3 & 4 \\ 0 & -7 & -10 & -13 \\ 0 & -2 & -8 & -10 \\ 0 & -1 & -2 & -7 \end{matrix} \right|\]

Now, with the equanimity of Piglet, we expand this determinant along the first column:

\[1\cdot {\left(-1 \right)}^{1+1}\cdot \left| \begin{matrix} -7 & -10 & -13 \\ -2 & -8 & -10 \\ -1 & -2 & -7 \end{matrix} \right|+0\cdot {\left(-1 \right)}^{2+1}\cdot \left| \ldots \right|+0\cdot {\left(-1 \right)}^{3+1}\cdot \left| \ldots \right|+0\cdot {\left(-1 \right)}^{4+1}\cdot \left| \ldots \right|\]

It is clear that only the first term will "survive" - for the rest I didn't even write out the determinants, since they are multiplied by zero anyway. The coefficient in front of the determinant is equal to one, i.e. it need not be written.

But we can take the "minuses" out of all three rows of the determinant. In effect, we take out the factor (-1) three times:

\[\left| \begin{matrix} -7 & -10 & -13 \\ -2 & -8 & -10 \\ -1 & -2 & -7 \end{matrix} \right|={\left(-1 \right)}^{3}\cdot \left| \begin{matrix} 7 & 10 & 13 \\ 2 & 8 & 10 \\ 1 & 2 & 7 \end{matrix} \right|=-\left| \begin{matrix} 7 & 10 & 13 \\ 2 & 8 & 10 \\ 1 & 2 & 7 \end{matrix} \right|\]

We got a small 3x3 determinant, which could already be calculated by the triangle rule. But we will try to expand it along the first column - fortunately, the last row proudly contains a one:

\[\begin{align} & \left(-1 \right)\cdot \left| \begin{matrix} 7 & 10 & 13 \\ 2 & 8 & 10 \\ 1 & 2 & 7 \end{matrix} \right|\begin{matrix} -7 \\ -2 \\ \uparrow \end{matrix}=\left(-1 \right)\cdot \left| \begin{matrix} 0 & -4 & -36 \\ 0 & 4 & -4 \\ 1 & 2 & 7 \end{matrix} \right|= \\ & =\left(-1 \right)\cdot {\left(-1 \right)}^{3+1}\cdot \left| \begin{matrix} -4 & -36 \\ 4 & -4 \end{matrix} \right|=\left(-1 \right)\cdot \left| \begin{matrix} -4 & -36 \\ 4 & -4 \end{matrix} \right| \\ \end{align}\]

You could, of course, keep having fun and expand the 2x2 matrix along a row (column), but you and I are adequate people, so let's just compute the answer:

\[\left(-1 \right)\cdot \left| \begin{matrix} -4 & -36 \\ 4 & -4 \end{matrix} \right|=\left(-1 \right)\cdot \left(16+144 \right)=-160\]

This is how dreams are broken. Only -160 in the answer. :)

Answer: -160.

A couple of notes before we move on to the last task:

  1. The original matrix was symmetric with respect to the secondary diagonal. All the minors in the expansion are also symmetric with respect to that same secondary diagonal.
  2. Strictly speaking, we didn't have to expand anything at all: we could simply have reduced the matrix to upper triangular form, with solid zeros under the main diagonal. Then (in exact accordance with the geometric interpretation, by the way) the determinant equals the product $a_{11}a_{22}\cdots a_{nn}$ of the numbers on the main diagonal.

Task. Find the determinant:

\[\left| \begin{matrix} 1 & 1 & 1 & 1 \\ 2 & 4 & 8 & 16 \\ 3 & 9 & 27 & 81 \\ 5 & 25 & 125 & 625 \end{matrix} \right|\]

Solution. Well, here the first row just begs to be "zeroed out". We take the first column and subtract it exactly once from all the others:

\[\begin{align} & \left| \begin{matrix} 1 & 1 & 1 & 1 \\ 2 & 4 & 8 & 16 \\ 3 & 9 & 27 & 81 \\ 5 & 25 & 125 & 625 \end{matrix} \right|=\left| \begin{matrix} 1 & 1-1 & 1-1 & 1-1 \\ 2 & 4-2 & 8-2 & 16-2 \\ 3 & 9-3 & 27-3 & 81-3 \\ 5 & 25-5 & 125-5 & 625-5 \end{matrix} \right|= \\ & =\left| \begin{matrix} 1 & 0 & 0 & 0 \\ 2 & 2 & 6 & 14 \\ 3 & 6 & 24 & 78 \\ 5 & 20 & 120 & 620 \end{matrix} \right| \\ \end{align}\]

Expand along the first row, and then take the common factors out of the remaining rows:

\[1\cdot {\left(-1 \right)}^{1+1}\cdot \left| \begin{matrix} 2 & 6 & 14 \\ 6 & 24 & 78 \\ 20 & 120 & 620 \end{matrix} \right|=2\cdot 6\cdot 20\cdot \left| \begin{matrix} 1 & 3 & 7 \\ 1 & 4 & 13 \\ 1 & 6 & 31 \end{matrix} \right|=240\cdot \left| \begin{matrix} 1 & 3 & 7 \\ 1 & 4 & 13 \\ 1 & 6 & 31 \end{matrix} \right|\]

Again we observe “beautiful” numbers, but already in the first column - we decompose the determinant according to it:

\[\begin{align} & 240\cdot \left| \begin{matrix} 1 & 3 & 7 \\ 1 & 4 & 13 \\ 1 & 6 & 31 \end{matrix} \right|\begin{matrix} \downarrow \\ -1 \\ -1 \end{matrix}=240\cdot \left| \begin{matrix} 1 & 3 & 7 \\ 0 & 1 & 6 \\ 0 & 3 & 24 \end{matrix} \right|= \\ & =240\cdot {\left(-1 \right)}^{1+1}\cdot \left| \begin{matrix} 1 & 6 \\ 3 & 24 \end{matrix} \right|= \\ & =240\cdot 1\cdot \left(24-18 \right)=1440 \\ \end{align}\]

Done. Problem solved.

Answer: 1440

Most mathematical models in economics are described using matrices and matrix calculus.

A matrix is a rectangular table containing numbers, functions, equations, or other mathematical objects arranged in rows and columns.

The objects that make up a matrix are called its elements. Matrices are denoted by capital Latin letters, and their elements by lowercase ones.

The symbol $A=\left( a_{ij} \right)_{m\times n}$ means that the matrix $A$ has $m$ rows and $n$ columns, with the element $a_{ij}$ standing at the intersection of the $i$-th row and the $j$-th column.

Matrix $A$ is said to be equal to matrix $B$, $A=B$, if they have the same structure (that is, the same number of rows and columns) and their corresponding elements are identically equal: $a_{ij}=b_{ij}$ for all $i,j$.

Particular types of matrices

In practice, matrices of a special form are quite often encountered. Some methods also involve transformations of matrices from one type to another. The most common types of matrices are listed below.

square matrix, number of rows n equals the number of columns n

column matrix

matrix-row

lower triangular matrix

upper triangular matrix

null matrix

diagonal matrix

identity matrix $E$ (square, with ones on the main diagonal and zeros elsewhere)

unitary matrix

echelon (step) matrix

Empty matrix

The elements of the matrix whose row and column numbers are equal, that is $a_{ii}$, form the main diagonal of the matrix.

Operations on matrices.



Properties of operations on matrices


Specific properties of matrix operations

If the matrix product $AB$ exists, then the product $BA$ may not exist. Generally speaking, $AB\ne BA$; that is, matrix multiplication is not commutative. If $AB=BA$, then $A$ and $B$ are called commuting. For example, diagonal matrices of the same order commute.

If $AB=0$, it does not follow that $A=0$ or $B=0$. That is, the product of nonzero matrices can give a zero matrix.

The exponentiation operation is defined only for square matrices: if $A$ is square, then

\[A^{m}=\underbrace{A\cdot A\cdot \ldots \cdot A}_{m}.\]

By definition it is assumed that $A^{0}=E$, and it is easy to show that $A^{m}A^{k}=A^{m+k}$ and ${\left( A^{m} \right)}^{k}=A^{mk}$. Note that from $A^{m}=0$ it does not follow that $A=0$.

Element-by-element exponentiation: $A.^{m}=\left( a_{ij}^{m} \right)$, i.e. each element is raised to the power $m$.

The transpose operation replaces the rows of a matrix with its columns: if $A=\left( a_{ij} \right)_{m\times n}$, then $A^{T}=\left( a_{ji} \right)_{n\times m}$. For example,

\[A=\left( \begin{matrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{matrix} \right),\quad A^{T}=\left( \begin{matrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{matrix} \right).\]

Transposition properties:

\[{\left( A^{T} \right)}^{T}=A,\quad {\left( A+B \right)}^{T}=A^{T}+B^{T},\quad {\left( AB \right)}^{T}=B^{T}A^{T},\quad {\left( \lambda A \right)}^{T}=\lambda A^{T}.\]


Determinants and their properties.

For square matrices, the concept of the determinant is often used: a number calculated from the elements of the matrix according to strictly defined rules. This number is an important characteristic of the matrix and is denoted by the symbols $\det A$, $\left| A \right|$, or $\Delta$.

The determinant of a first-order matrix $A=\left( a_{11} \right)$ is its element: $\Delta =a_{11}$.

The determinant of a second-order matrix is calculated according to the rule:

\[\Delta =\left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right|=a_{11}a_{22}-a_{12}a_{21},\]

i.e., the product of the elements of the secondary diagonal is subtracted from the product of the elements of the main diagonal.

To compute determinants of higher order ($n\ge 3$), it is necessary to introduce the concepts of the minor and the algebraic complement of an element.

The minor $M_{ij}$ of the element $a_{ij}$ is the determinant obtained from the matrix by crossing out the $i$-th row and the $j$-th column.

Consider, for example, a matrix of size $3\times 3$; then the minor $M_{12}$ is the second-order determinant obtained by crossing out the first row and the second column.

The algebraic complement $A_{ij}$ of an element is its minor multiplied by ${\left(-1 \right)}^{i+j}$:

\[A_{ij}={\left(-1 \right)}^{i+j}M_{ij}.\]

Laplace's theorem: The determinant of a square matrix is equal to the sum of the products of the elements of any row (column) and their algebraic complements.

For example, expanding along the elements of the first row, we get:

\[\Delta =a_{11}A_{11}+a_{12}A_{12}+a_{13}A_{13}.\]

The last theorem gives a universal way to calculate determinants of any order, starting from the second. As the row (column), one always chooses the one containing the largest number of zeros. For example, suppose it is required to calculate a fourth-order determinant. In this case one can expand the determinant along the first column, or along the last row.

This example also shows that the determinant of an upper triangular matrix is equal to the product of its diagonal elements. It is easy to prove that this conclusion is valid for any triangular and diagonal matrices.

Laplace's theorem makes it possible to reduce the calculation of a determinant of order $n$ to the calculation of determinants of order $n-1$ and, ultimately, to the calculation of second-order determinants.

MATRICES AND DETERMINANTS
Lecture 1. Matrices

1. The concept of a matrix. Matrix types

2. Algebra of matrices

Lecture 2. Determinants

1. Determinants of a square matrix and their properties

2. Laplace's and annihilation theorems

Lecture 3. Inverse matrix

1. The concept of an inverse matrix. Uniqueness of the inverse matrix

2. Algorithm for constructing an inverse matrix. Inverse Matrix Properties

4. Tasks and exercises

4.1. Matrices and actions on them

4.2. Determinants

4.3. inverse matrix

5. Individual tasks

Literature

LECTURE 1. MATRICES

Plan

1. The concept of a matrix. Matrix types.

2. Algebra of matrices.

Key Concepts

Diagonal matrix.

Identity matrix.

Zero matrix.

Symmetric matrix.

Matrix Consistency.

Transposition.

Triangular matrix.

1. THE CONCEPT OF A MATRIX. MATRIX TYPES

A rectangular table

consisting of $m$ rows and $n$ columns, whose elements are real numbers $a_{ij}$, where $i$ is the row number and $j$ the column number at whose intersection the element stands, will be called a numeric matrix of order $m\times n$ and denoted $A=\left( a_{ij} \right)_{m\times n}$.

Consider the main types of matrices:

1. Let $m=n$; then matrix $A$ is a square matrix of order $n$.

The elements $a_{11}, a_{22}, \ldots, a_{nn}$ form the main diagonal; the elements $a_{1n}, a_{2,n-1}, \ldots, a_{n1}$ form the secondary diagonal.

A square matrix is called diagonal if all its elements, except possibly those on the main diagonal, are equal to zero:

\[A=\mathrm{diag}\left( a_{11},a_{22},\ldots ,a_{nn} \right).\]

A diagonal (and therefore square) matrix is called the identity matrix if all elements of its main diagonal are equal to 1:

E = diag (1, 1, 1,…,1).

Note that the identity matrix is the matrix analogue of the number one in the set of real numbers; we also emphasize that the identity matrix is defined only for square matrices.

Here are examples of identity matrices:

Square matrices with all zeros below the main diagonal, or all zeros above it, are called upper and lower triangular, respectively.

2. Let $m=1$; then matrix $A$ is a row matrix, which looks like $A=\left( a_{11}\ a_{12}\ \ldots \ a_{1n} \right)$.

3. Let $n=1$; then matrix $A$ is a column matrix, which looks like:

\[A=\left( \begin{matrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{matrix} \right)\]


4. A zero matrix is a matrix of order $m\times n$, all elements of which are equal to 0:

Note that the zero matrix can be square, a row matrix, or a column matrix. The zero matrix is the matrix analogue of zero in the set of real numbers.

5. A matrix is called the transpose of matrix $A$, denoted $A^{T}$, if its columns are the corresponding rows of matrix $A$.

Example. Let $A=\left( \begin{matrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{matrix} \right)$; then $A^{T}=\left( \begin{matrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{matrix} \right)$.

Note that if matrix $A$ has order $m\times n$, then the transposed matrix has order $n\times m$.

6. Matrix $A$ is called symmetric if $A^{T}=A$, and skew-symmetric if $A^{T}=-A$.

Example. Examine matrices $A$ and $B$ for symmetry.

If computing the transpose gives $A^{T}=A$, the matrix $A$ is symmetric; if $B^{T}=-B$, the matrix $B$ is skew-symmetric.

Note that symmetric and skew-symmetric matrices are always square. Any elements may stand on the main diagonal of a symmetric matrix, while elements symmetric about the main diagonal must be equal, that is, $a_{ij}=a_{ji}$. On the main diagonal of a skew-symmetric matrix there are always zeros, and symmetrically about the main diagonal $a_{ij}=-a_{ji}$.
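A quick numpy illustration of both definitions (the matrices are mine, chosen for demonstration):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 5, 4],
              [3, 4, 9]])          # a_ij == a_ji, anything on the diagonal
B = np.array([[ 0,  2, -1],
              [-2,  0,  4],
              [ 1, -4,  0]])       # a_ij == -a_ji, zeros on the diagonal

print(np.array_equal(A.T, A))      # True  -> A is symmetric
print(np.array_equal(B.T, -B))     # True  -> B is skew-symmetric
```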

2. ALGEBRA OF MATRICES

Let us consider operations on matrices, but first we introduce some new concepts.

Two matrices A and B are called matrices of the same order if they have the same number of rows and the same number of columns.

Example. Two $2\times 3$ matrices are matrices of the same order $2\times 3$; a $2\times 3$ matrix and a $3\times 2$ matrix are of different orders, since $2\times 3\ne 3\times 2$.

The concepts of "greater than" and "less than" are not defined for matrices.

Matrices $A$ and $B$ are called equal if they are of the same order $m\times n$ and $a_{ij}=b_{ij}$, where $i=1,2,3,\ldots ,m$ and $j=1,2,3,\ldots ,n$.

Multiplying a matrix by a number.

Multiplying the matrix $A$ by the number $\lambda$ multiplies each element of the matrix by $\lambda$:

\[\lambda A=\left( \lambda a_{ij} \right),\quad \lambda \in \mathbb{R}.\]


From this definition it follows that the common factor of all elements of the matrix can be taken out of the sign of the matrix.

Example. Let $A=\left( \begin{matrix} 1 & 2 \\ 3 & 4 \end{matrix} \right)$; then $5A=\left( \begin{matrix} 5 & 10 \\ 15 & 20 \end{matrix} \right)$. Conversely, $B=\left( \begin{matrix} 5 & 10 \\ 15 & 20 \end{matrix} \right)=5\left( \begin{matrix} 1 & 2 \\ 3 & 4 \end{matrix} \right)$.

Properties of multiplying a matrix by a number:

1) $1\cdot A=A$;

2) $\left( \lambda \mu \right)A=\lambda \left( \mu A \right)=\mu \left( \lambda A \right)$, where $\lambda ,\mu \in \mathbb{R}$;

3) ${\left( \lambda A \right)}^{T}=\lambda A^{T}$;

Sum (difference) of matrices .

The sum (difference) is defined only for matrices of the same order $m\times n$.

The sum (difference) of two matrices $A$ and $B$ of order $m\times n$ is the matrix $C$ of the same order, where $c_{ij}=a_{ij}\pm b_{ij}$ ($i=1,2,3,\ldots ,m$; $j=1,2,3,\ldots ,n$).

In other words, matrix C consists of elements equal to the sum (difference) of the corresponding elements of matrices A and B.

Example. Find the sum and difference of the matrices $A=\left( \begin{matrix} 1 & 2 \\ 3 & 4 \end{matrix} \right)$ and $B=\left( \begin{matrix} 5 & 6 \\ 7 & 8 \end{matrix} \right)$:

\[A+B=\left( \begin{matrix} 6 & 8 \\ 10 & 12 \end{matrix} \right),\quad A-B=\left( \begin{matrix} -4 & -4 \\ -4 & -4 \end{matrix} \right).\]

If $A$ and $B$ are of different orders, for example $2\times 3$ and $3\times 2$, then $A\pm B$ does not exist.

From the above definitions follow the properties of matrix sums:

1) commutativity A+B=B+A;

2) associativity (A+B)+C=A+(B+C);

3) distributivity of multiplication by a number $\lambda \in \mathbb{R}$: $\lambda \left( A+B \right)=\lambda A+\lambda B$;

4) 0+A=A, where 0 is the zero matrix;

5) A+(–A)=0, where (–A) is the matrix opposite to matrix A;

6) ${\left( A+B \right)}^{T}=A^{T}+B^{T}$.

Product of matrices.

The product operation is defined not for all matrices, but only for consistent ones.

Matrices $A$ and $B$ are called consistent if the number of columns of matrix $A$ equals the number of rows of matrix $B$. So, if $A$ has order $m\times n$ and $B$ has order $n\times k$ with $m\ne k$, then $A$ and $B$ are consistent, while in the reverse order $B$ and $A$ are inconsistent, since $k\ne m$. Square matrices are consistent when they have the same order $n$; then both $A$ and $B$, and $B$ and $A$, are consistent.

The product of two consistent matrices $A$ of order $m\times n$ and $B$ of order $n\times k$

is the matrix $C=A\cdot B$ of order $m\times k$, the elements of which are calculated by the formula:

\[c_{ij}=\sum\limits_{s=1}^{n}{a_{is}b_{sj}}\quad (i=1,2,3,\ldots ,m;\ j=1,2,3,\ldots ,k),\]

that is, the element of the $i$-th row and the $j$-th column of matrix $C$ is equal to the sum of the products of all elements of the $i$-th row of matrix $A$ and the corresponding elements of the $j$-th column of matrix $B$.
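The formula for $c_{ij}$ translates directly into three nested loops. A minimal Python sketch (names are mine):

```python
def matmul(a, b):
    """Product of an m*n matrix a and an n*k matrix b, by the definition."""
    m, n, k = len(a), len(b), len(b[0])
    assert all(len(row) == n for row in a), "matrices are not consistent"
    c = [[0] * k for _ in range(m)]
    for i in range(m):
        for j in range(k):
            # c_ij = sum over s of a_is * b_sj
            c[i][j] = sum(a[i][s] * b[s][j] for s in range(n))
    return c

A = [[1, 2],
     [3, 4],
     [5, 6]]          # 3x2
B = [[7, 8],
     [9, 10]]         # 2x2
print(matmul(A, B))   # [[25, 28], [57, 64], [89, 100]]
```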

Example. Find the product of matrices $A$ and $B$, where $A$ has order $3\times 2$ and $B$ has order $2\times 2$. The product $A\cdot B$ exists and has order $3\times 2$. The product $B\cdot A$ does not exist, since matrices $B$ and $A$ are not consistent: matrix $B$ has order $2\times 2$, and matrix $A$ has order $3\times 2$.

Consider the properties of the matrix product:

1) non-commutativity: $AB\ne BA$ in general, even if both $A$ and $B$, and $B$ and $A$, are consistent. If $AB=BA$, then matrices $A$ and $B$ are called commuting (matrices $A$ and $B$ in this case will necessarily be square).

Example. Take $A=\left( \begin{matrix} 0 & 1 \\ 0 & 0 \end{matrix} \right)$ and $B=\left( \begin{matrix} 0 & 0 \\ 1 & 0 \end{matrix} \right)$; then $AB=\left( \begin{matrix} 1 & 0 \\ 0 & 0 \end{matrix} \right)$ while $BA=\left( \begin{matrix} 0 & 0 \\ 0 & 1 \end{matrix} \right)$.

Conclusion: $AB\ne BA$, although the matrices are of the same order.

2) the identity matrix $E$ commutes with any square matrix $A$ of the same order, and the result is the same matrix $A$, that is, $AE=EA=A$.

Example. For any second-order matrix $A$, one checks directly that $EA=AE=A$.

3) $A\cdot 0=0\cdot A=0$.

4) the product of two matrices can equal the zero matrix even when both $A$ and $B$ are nonzero.

Example. $\left( \begin{matrix} 1 & 0 \\ 0 & 0 \end{matrix} \right)\cdot \left( \begin{matrix} 0 & 0 \\ 0 & 1 \end{matrix} \right)=\left( \begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix} \right)$.

5) associativity: $ABC=A\left( BC \right)=\left( AB \right)C$.

Example. Taking specific matrices $A$, $B$, $C$ and computing both $A\cdot \left( B\cdot C \right)$ and $\left( A\cdot B \right)\cdot C$ element by element gives the same result, illustrating that $A\cdot \left( B\cdot C \right)=\left( A\cdot B \right)\cdot C$.

6) distributivity with respect to addition:

$\left( A+B \right)\cdot C=AC+BC$, $A\cdot \left( B+C \right)=AB+AC$.

7) ${\left( A\cdot B \right)}^{T}=B^{T}\cdot A^{T}$.

Example. For specific $A$ and $B$, computing $AB$ and then transposing gives the same matrix as computing $B^{T}A^{T}$ directly; thus ${\left( A\cdot B \right)}^{T}=B^{T}A^{T}$.

8) $\lambda \left( A\cdot B \right)=\left( \lambda A \right)\cdot B=A\cdot \left( \lambda B \right)$, $\lambda \in \mathbb{R}$.

Let us consider typical examples of operations on matrices: find the sum, difference, and product (when they exist) of two matrices $A$ and $B$.

Example 1.

Solution.

1) the matrices $A$ and $B$ are of the same order, so the sum exists and is computed elementwise;

2) the difference likewise exists and is computed elementwise;

3) the product does not exist, since the matrices $A$ and $B$ are inconsistent; the product in the reverse order does not exist for the same reason.

Example 2.

Solution.

1) the sum of the matrices, as well as their difference, does not exist, since the original matrices are of different orders: matrix $A$ has order $2\times 3$, and matrix $B$ has order $3\times 1$;

2) since matrices $A$ and $B$ are consistent, the product $A\cdot B$ exists and has order $2\times 1$; the product $B\cdot A$ does not exist, since the matrices $B$ and $A$ are inconsistent.

Example 3.

Solution.

1) the sum of the matrices, as well as their difference, does not exist, since the original matrices are of different orders: matrix $A$ has order $3\times 2$, and matrix $B$ has order $2\times 3$;

2) both products $A\cdot B$ and $B\cdot A$ exist, since the matrices are consistent, but the results are of different orders: $A\cdot B$ has order $3\times 3$, while $B\cdot A$ has order $2\times 2$. In this case $AB\ne BA$.

Example 4.

Solution.

1) the sum and 2) the difference exist and are computed elementwise;

3) both products $A\cdot B$ and $B\cdot A$ exist, since the matrices are consistent (square, of the same order), but $A\cdot B\ne B\cdot A$; that is, the matrices $A$ and $B$ do not commute.

Example 5.

Solution.

1) the sum and 2) the difference exist and are computed elementwise;

3) both products $A\cdot B$ and $B\cdot A$ exist, since the matrices are consistent, and here $A\cdot B=B\cdot A$, i.e. these matrices commute.


LECTURE 2. DETERMINANTS

Plan

1. Determinants of a square matrix and their properties.

2. Laplace's and annihilation theorems.

Key Concepts

Algebraic complement of the determinant element.

Minor of the determinant element.

Second order determinant.

Third order determinant.

Arbitrary order determinant.

Laplace's theorem.

Cancellation theorem.

1. DETERMINANTS OF A SQUARE MATRIX AND THEIR PROPERTIES

Let $A$ be a square matrix of order $n$. Each such matrix can be associated with a single real number, called the determinant of the matrix and denoted

\[\det A=\Delta =\left| \begin{matrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{matrix} \right|.\]

Note that the determinant exists only for square matrices.

Consider the rules for calculating determinants and their properties for square matrices of the second and third order, which we will call for brevity the determinants of the second and third order, respectively.

The second-order determinant of a matrix is the number determined by the rule:

\[\Delta =\left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right|=a_{11}a_{22}-a_{12}a_{21},\]

i.e., the second-order determinant is the number equal to the product of the elements of the main diagonal minus the product of the elements of the secondary diagonal.

Example. Let $A=\left( \begin{matrix} 4 & -1 \\ 2 & 3 \end{matrix} \right)$. Then $\Delta =4\cdot 3-\left(-1 \right)\cdot 2=12+2=14$.

It should be remembered that round or square brackets are used to denote matrices, while vertical lines are used for the determinant. A matrix is a table of numbers; a determinant is a number.

From the definition of the second-order determinant, the following properties follow:

1. The determinant does not change when all its rows are replaced by the corresponding columns:

\[\left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right|=\left| \begin{matrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{matrix} \right|.\]

2. The sign of the determinant changes to the opposite when the rows (columns) of the determinant are interchanged:

\[\left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right|=-\left| \begin{matrix} a_{21} & a_{22} \\ a_{11} & a_{12} \end{matrix} \right|.\]

3. A common factor of all elements of a row (column) of the determinant can be taken out of the determinant sign:

\[\left| \begin{matrix} \lambda a_{11} & \lambda a_{12} \\ a_{21} & a_{22} \end{matrix} \right|=\lambda \left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right|.\]

4. If all elements of some row (column) of the determinant are equal to zero, then the determinant is equal to zero.

5. The determinant is equal to zero if the corresponding elements of its rows (columns) are proportional:

\[\left| \begin{matrix} a_{11} & a_{12} \\ \lambda a_{11} & \lambda a_{12} \end{matrix} \right|=0.\]

6. If the elements of one row (column) of the determinant are sums of two terms, then such a determinant is equal to the sum of two determinants:

\[\left| \begin{matrix} a_{11}+b_{11} & a_{12}+b_{12} \\ a_{21} & a_{22} \end{matrix} \right|=\left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right|+\left| \begin{matrix} b_{11} & b_{12} \\ a_{21} & a_{22} \end{matrix} \right|,\]

and similarly for columns.

7. The value of the determinant does not change if to the elements of one row (column) we add (subtract) the corresponding elements of another row (column) multiplied by the same number:

\[\left| \begin{matrix} a_{11}+\lambda a_{21} & a_{12}+\lambda a_{22} \\ a_{21} & a_{22} \end{matrix} \right|=\left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right|+\left| \begin{matrix} \lambda a_{21} & \lambda a_{22} \\ a_{21} & a_{22} \end{matrix} \right|=\left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right|,\]

since the second determinant is equal to 0 by property 5.

The remaining properties of determinants will be considered below.

Let us introduce the concept of a third-order determinant: the third-order determinant of a square matrix is the number

\[\Delta =\det A=\left| \begin{matrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{matrix} \right|=a_{11}a_{22}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{13}a_{22}a_{31}-a_{12}a_{21}a_{33}-a_{11}a_{23}a_{32},\quad (2)\]

i.e., each term in formula (2) is the product of elements of the determinant taken one and only one from each row and each column. To remember which products in formula (2) are taken with a plus sign and which with a minus sign, it is useful to know the triangle rule (Sarrus' rule):



Example. Compute a third-order determinant by applying formula (2) term by term.

It should be noted that the properties of the second-order determinant considered above carry over without change to determinants of any order, including the third.

2. THE LAPLACE AND ANNIHILATION THEOREMS

Consider two more very important properties of determinants.

Let us introduce the concepts of minor and algebraic complement.

The minor of an element of a determinant is the determinant obtained from the original one by deleting the row and the column that contain the given element. The minor of the element $a_{ij}$ is denoted $M_{ij}$.


Example. Let $\Delta = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$.

Then, for example, $M_{11} = \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}$, $M_{23} = \begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix}$.

The algebraic complement of an element of a determinant is its minor taken with the sign $(-1)^{i+j}$. The algebraic complement is denoted $A_{ij}$, that is, $A_{ij} = (-1)^{i+j} M_{ij}$.

For example:

$A_{11} = (-1)^{1+1} M_{11} = M_{11}$, $A_{12} = (-1)^{1+2} M_{12} = -M_{12}$.

Let us return to formula (2). Grouping the elements and taking the common factors out of the brackets, we get:

$\Delta = a_{11}(a_{22}a_{33} - a_{23}a_{32}) + a_{12}(a_{23}a_{31} - a_{21}a_{33}) + a_{13}(a_{21}a_{32} - a_{22}a_{31}) = a_{11}A_{11} + a_{12}A_{12} + a_{13}A_{13}$.


The following equalities are proved similarly:

$\Delta = a_{i1}A_{i1} + a_{i2}A_{i2} + a_{i3}A_{i3}$, $i = 1, 2, 3$; $\Delta = a_{1j}A_{1j} + a_{2j}A_{2j} + a_{3j}A_{3j}$, $j = 1, 2, 3$. (3)

Formulas (3) are called the formulas for expanding the determinant along the elements of the i-th row (j-th column), or Laplace's formulas for the third-order determinant.

Thus we obtain the eighth property of the determinant:

Laplace's theorem. The determinant is equal to the sum of the products of the elements of any row (column) and the corresponding algebraic complements of the elements of this row (column).

Note that this property is nothing other than the definition of a determinant of arbitrary order, and in practice it is the tool used to calculate such determinants. As a rule, before computing a determinant one uses properties 1–7 to make, if possible, all elements of some row (column) equal to zero except one, and then expands along the elements of that row (column).
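As a sketch of how this expansion works as a procedure, here is a small recursive Python implementation (illustrative only; rows with many zeros make the recursion cheap, exactly as the rule above suggests):

```python
def det(M):
    """Determinant via Laplace expansion along the first row (recursive sketch)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        if M[0][j] == 0:
            continue  # zero elements contribute nothing to the expansion
        # Minor: delete the first row and the j-th column.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        # Algebraic complement: the sign (-1)^(1 + (j+1)) equals (-1)^j in 0-based terms.
        total += (-1) ** j * M[0][j] * det(minor)
    return total

print(det([[1, 2, 3],
           [4, 5, 6],
           [7, 8, 0]]))  # 27, matching the triangle-rule computation above
```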

Example. Compute the determinant

== (subtract the first row from the second) =

== (subtract the first row from the third) =

== (expand the determinant along the elements of the third row) = 1· = (subtract the first column from the second) = 1998·0 − 1·2 = −2.

Example.

Consider a fourth-order determinant. To calculate it we use Laplace's theorem, that is, expansion along the elements of a row (column).

== (since the second column contains three zero elements, we expand the determinant along the elements of the second column) = 3· = (subtract the first row multiplied by 3 from the second row, and the first row multiplied by 2 from the third row) =

= (expand the determinant along the elements of the first column) = 3·1· =

The ninth property of the determinant is known as the annihilation theorem:

the sum of the products of the elements of one row (column) of the determinant and the algebraic complements of the corresponding elements of another row (column) is equal to zero, i.e.

$a_{i1}A_{k1} + a_{i2}A_{k2} + a_{i3}A_{k3} = 0$, $i \ne k$.

Example.

= = (expand along the elements of the third row) = $0 \cdot A_{31} + 0 \cdot A_{32} + 1 \cdot A_{33} = -2$.

But, for the same determinant, taking the elements of the third row with the algebraic complements of the second row gives

$0 \cdot A_{21} + 0 \cdot A_{22} + 1 \cdot A_{23} = 0$.
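The same check can be scripted. The sketch below (matrix chosen for illustration) confirms that the elements of one row taken with their own algebraic complements give the determinant, while taken with the complements of another row they annihilate to zero:

```python
import numpy as np

def cofactor(M, i, j):
    """Algebraic complement A_ij = (-1)^(i+j) * M_ij (0-based row/column)."""
    minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 0.0]])

own   = sum(M[2, j] * cofactor(M, 2, j) for j in range(3))  # Laplace: det M
alien = sum(M[2, j] * cofactor(M, 1, j) for j in range(3))  # annihilation: 0

print(round(own), round(alien))  # 27 0
```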

If a determinant of any order has triangular form,

$\Delta = \begin{vmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ 0 & a_{22} & \dots & a_{2n} \\ \dots & \dots & \dots & \dots \\ 0 & 0 & \dots & a_{nn} \end{vmatrix}$, then it is equal to the product of the elements on the main diagonal:

$\Delta = a_{11} \cdot a_{22} \cdot \dots \cdot a_{nn}$. (4)


Example. Calculate the determinant.

=

Sometimes, when calculating the determinant using elementary transformations, it is possible to reduce it to a triangular form, after which formula (4) is applied.
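A sketch of that strategy in Python: property 7 zeroes out the entries below the diagonal without changing the determinant, each row swap reverses the sign by property 2, and formula (4) finishes the job (illustrative code, no special pivoting strategy):

```python
def det_triangular(M, eps=1e-12):
    """Determinant via reduction to upper triangular form (Gaussian elimination)."""
    A = [row[:] for row in M]  # work on a copy
    n = len(A)
    sign = 1.0
    for k in range(n):
        # Find a nonzero pivot in column k; a row swap flips the sign (property 2).
        p = next((r for r in range(k, n) if abs(A[r][k]) > eps), None)
        if p is None:
            return 0.0  # a zero column means the determinant is zero
        if p != k:
            A[k], A[p] = A[p], A[k]
            sign = -sign
        # Property 7: subtracting multiples of row k leaves the determinant unchanged.
        for r in range(k + 1, n):
            m = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= m * A[k][c]
    prod = sign
    for k in range(n):  # formula (4): product of the diagonal elements
        prod *= A[k][k]
    return prod

print(det_triangular([[1, 2, 3], [4, 5, 6], [7, 8, 0]]))  # 27.0
```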

As for the determinant of the product of two square matrices of the same order, it is equal to the product of the determinants of these matrices: $\det(A \cdot B) = \det A \cdot \det B$.


LECTURE 3. INVERSE MATRIX

Plan

1. The concept of an inverse matrix. Uniqueness of the inverse matrix.

2. Algorithm for constructing an inverse matrix.

Properties of the inverse matrix.

Key Concepts

Inverse matrix.

Adjugate matrix.

1. THE CONCEPT OF INVERSE MATRIX.

UNIQUENESS OF THE INVERSE MATRIX

In arithmetic, along with each number $a$, one defines the opposite number $(-a)$, such that $a + (-a) = 0$, and, for $a \ne 0$, the inverse number $a^{-1} = 1/a$, such that $a \cdot a^{-1} = 1$. For example, for the number 5 the opposite is the number $(-5)$, and the inverse is the number $1/5$. Similarly, in the theory of matrices we have already introduced the concept of the opposite matrix, denoted $(-A)$. The inverse matrix of a square matrix A of order n is the matrix $A^{-1}$ satisfying the equalities

$A \cdot A^{-1} = A^{-1} \cdot A = E$, (1)

where E is the identity matrix of order n.

We immediately note that the inverse matrix exists only for square non-singular matrices.

A square matrix is called non-degenerate (non-singular) if det A ≠ 0. If det A = 0, then the matrix A is called degenerate (singular).

Note that a non-singular matrix A has a unique inverse matrix $A^{-1}$. Let us prove this statement.

Suppose the matrix A had two inverse matrices $B_1$ and $B_2$, i.e. $A B_1 = B_1 A = E$ and $A B_2 = B_2 A = E$.

Then $B_1 = B_1 E = B_1 (A B_2) = (B_1 A) B_2 = E B_2 = B_2$.

Q.E.D.

Let us find the determinant of the inverse matrix. Since the determinant of the product of two matrices A and B of the same order equals the product of their determinants, $\det(AB) = \det A \cdot \det B$, the product of two non-singular matrices AB is itself non-singular. Applying this to $A \cdot A^{-1} = E$ gives $\det A \cdot \det A^{-1} = \det E = 1$.

We conclude that the determinant of the inverse matrix is the reciprocal of the determinant of the original matrix: $\det A^{-1} = 1/\det A$.


2. ALGORITHM FOR CONSTRUCTING THE INVERSE MATRIX.

PROPERTIES OF THE INVERSE MATRIX

Let us show that if the matrix A is nonsingular, then there exists an inverse matrix for it, and construct it.

Let us form the matrix of the algebraic complements of the elements of the matrix A:

$\begin{pmatrix} A_{11} & A_{12} & \dots & A_{1n} \\ A_{21} & A_{22} & \dots & A_{2n} \\ \dots & \dots & \dots & \dots \\ A_{n1} & A_{n2} & \dots & A_{nn} \end{pmatrix}$.

Transposing it, we obtain the so-called adjugate matrix:

$\tilde{A} = \begin{pmatrix} A_{11} & A_{21} & \dots & A_{n1} \\ A_{12} & A_{22} & \dots & A_{n2} \\ \dots & \dots & \dots & \dots \\ A_{1n} & A_{2n} & \dots & A_{nn} \end{pmatrix}$.

Let us find the product $A \cdot \tilde{A}$. Taking into account Laplace's theorem and the annihilation theorem,

$A \cdot \tilde{A} = \tilde{A} \cdot A = \begin{pmatrix} \det A & 0 & \dots & 0 \\ 0 & \det A & \dots & 0 \\ \dots & \dots & \dots & \dots \\ 0 & 0 & \dots & \det A \end{pmatrix} = \det A \cdot E$,

whence, for $\det A \ne 0$,

$A^{-1} = \dfrac{1}{\det A} \tilde{A}$. (2)

We conclude:

Algorithm for constructing an inverse matrix.

1) Calculate the determinant of the matrix A. If the determinant is zero, then the inverse matrix does not exist.

2) If the determinant of the matrix is not equal to zero, compose the matrix of the algebraic complements of the corresponding elements of the matrix A.

3) Transposing this matrix, obtain the adjugate matrix $\tilde{A}$.

4) Using formula (2), form the inverse matrix.

5) Using formula (1), check the calculations (a code sketch of these steps is given below).
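A direct transcription of these steps into Python; a minimal sketch via the adjugate (the function name and the test matrix are illustrative, and for large matrices Gaussian elimination or a library routine is preferable):

```python
import numpy as np

def inverse_via_adjugate(A):
    """Steps 1-4 of the algorithm: determinant, complements, adjugate, inverse."""
    A = np.asarray(A, dtype=float)
    d = np.linalg.det(A)                       # step 1
    if np.isclose(d, 0.0):
        raise ValueError("matrix is singular: no inverse exists")
    n = A.shape[0]
    C = np.empty((n, n))                       # step 2: algebraic complements
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    adj = C.T                                  # step 3: adjugate = transpose
    return adj / d                             # step 4: formula (2)

A = [[2.0, 1.0], [5.0, 3.0]]
A_inv = inverse_via_adjugate(A)
print(np.allclose(A_inv @ np.asarray(A), np.eye(2)))  # step 5: check via formula (1)
```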

Example. Find the inverse matrix.

a) Let A = . Since the matrix A has two identical rows, its determinant is zero. Therefore the matrix is degenerate, and it has no inverse.

b) Let A = .

Calculate the determinant of the matrix:

since det A ≠ 0, the inverse matrix exists.

Compose the matrix of the algebraic complements

= = ;

transposing it, we obtain the adjugate matrix

by formula (2) we find the inverse matrix

==.

Let us check the correctness of the calculations:

= = .

Therefore, the inverse matrix has been constructed correctly.

Inverse Matrix Properties

1. $(A^{-1})^{-1} = A$;

2. $(A \cdot B)^{-1} = B^{-1} \cdot A^{-1}$;

3. $(A^{T})^{-1} = (A^{-1})^{T}$.
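The three properties stated above (reconstructed here as the standard ones) are easy to confirm numerically; A and B below are illustrative non-singular matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])
inv = np.linalg.inv

assert np.allclose(inv(inv(A)), A)               # property 1
assert np.allclose(inv(A @ B), inv(B) @ inv(A))  # property 2
assert np.allclose(inv(A.T), inv(A).T)           # property 3
print("all three properties hold for this pair")
```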


4. TASKS AND EXERCISES

4.1 Matrices and operations on them

1. Find the sum, difference, product of two matrices A and B.

a) , ;

b) , ;

c) , ;

d) , ;

e) , ;

f) , ;

g) , ;

h) , ;

i) , .

2. Prove that the matrices A and B are commuting.

a) , ; b) , .

3. Given matrices A, B and C. Show that (AB)·C=A·(BC).

a) , , ;

b) , , .

4. Calculate (3A - 2B) C if

, , .

5. Find if

a) ; b) .


6. Find the matrix X if 3A+2X=B, where

, .

7. Find ABC if

a) , , ;

b) , , .

ANSWERS ON THE TOPIC "MATRIXES AND ACTIONS ON THEM"

1. a) , ;

b) the products AB and BA do not exist;

c) , ;

d) , ;

e) the sum, the difference and the product BA do not exist, ;

f) , ;

g) matrix products do not exist;

h) , ;

i) , .

2. a) ; b) .

3. a) ; b) .

4. .

5. a) ; b) .

6. .

7. a) ; b) .

4.2 Determinants

1. Calculate determinants

a) ; b) ; c) ; d) ; e) ; f) ;

g) ; h) .

2. Using the triangle rule, calculate the determinants

a) ; b) ; c) ; d) .

3. Calculate the determinants of Exercise 2 using Laplace's theorem.

4. Calculate the determinants after first simplifying them:

a) ; b) ; c) ;

d) ; e) ; f) ;

g) .

5. Calculate the determinant by reducing it to triangular form

.

6. Let matrices A and B be given. Prove that:

, .

ANSWERS ON THE TOPIC "DETERMINANTS"

1. a) 10; b) 1; c) 25; d) 16; e) 0; f) -3; g) -6; h) 1.

2. a) -25; b) 168; c) 21; d) 12.

3. a) -25; b) 168; c) 21; d) 12.

4. a) 2; b) 0; c) 0; d) 70; e) 18; f) -66; g) -36.

4.3 Inverse matrix

1. Find the inverse matrix:

a) ; b) ; c) ; d) ;

e) ; f) ; g) ; h) ;

i) ; j) ; k) ;

l) ; m) .


2. Find the inverse matrix and check condition (1):

a) ; b) .

3. Prove the equality:

a) , ; b) , .

4. Prove the equality:

a) ; b) .

ANSWERS ON THE TOPIC "INVERSE MATRIX"

1. a) ; b) ; c) ; d) ;

e) ; f) ; g) ;

h) ; i) ;

j) ; k) ;

l) ; m) .

2. a) ; b) .

3. a) , , =;

b) , ,

=.

4. a) , ,

, ;

b) , ,

, .


5. INDIVIDUAL TASKS

1. Calculate the determinant by expansion

a) along the i-th row;

b) along the j-th column.

1.1. ; 1.2. ; 1.3. ;

i=2, j=3. i=4, j=1. i=3, j=2.

1.4. ; 1.5. ; 1.6. ;

i=3, j=3. i=1, j=4. i=2, j=2.

1.7. ; 1.8. ; 1.9. ;

i=4, j=4. i=2, j=2. i=3, j=2.

1.10. ; 1.11. ; 1.12. ;

i=2, j=1. i=1, j=2. i=3, j=2.


1.13. ; 1.14. ; 1.15. ;

i=2, j=3. i=1, j=3. i=4, j=2.

1.16. ; 1.17. ; 1.18. ;

i=2, j=3. i=2, j=4. i=1, j=3.

1.19. ; 1.20. ; 1.21. ;

i=2, j=2. i=1, j=4. i=3, j=2.

1.22. ; 1.23. ; 1.24. ;

i=1, j=3. i=2, j=1. i=3, j=4.

1.25. ; 1.26. ; 1.27. ;

i=4, j=3. i=3, j=3. i=1, j=2.


1.28. ; 1.29. ; 1.30. .

i=3, j=3. i=2, j=1. i=3, j=2.


