The reader probably already met determinants in calculus or algebra, at least the determinants of $2\times 2$ and $3\times 3$ matrices. For a $2\times 2$ matrix
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
the determinant is simply $\det A = ad - bc$; the determinant of a $3\times 3$ matrix can be found by the “Star of David” rule.
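For readers who like to experiment, here is a minimal numeric sketch of these two explicit rules (our own illustration; the function names det2 and det3 are hypothetical, and the $3\times 3$ formula is the six-term Sarrus expansion):

    # A sketch of the explicit 2x2 and 3x3 rules; det2/det3 are hypothetical
    # helper names, not notation from the text.
    def det2(a, b, c, d):
        # determinant of the 2x2 matrix [[a, b], [c, d]]
        return a * d - b * c

    def det3(m):
        # "Star of David" (Sarrus) rule for a 3x3 matrix m, given as rows
        return (m[0][0] * m[1][1] * m[2][2]
                + m[0][1] * m[1][2] * m[2][0]
                + m[0][2] * m[1][0] * m[2][1]
                - m[0][2] * m[1][1] * m[2][0]
                - m[0][0] * m[1][2] * m[2][1]
                - m[0][1] * m[1][0] * m[2][2])

    print(det2(1, 2, 3, 4))                          # -2
    print(det3([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))   # 24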
In this chapter we would like to introduce determinants for $n\times n$ matrices. I don’t want to just give a formal definition. First I want to give some motivation, and then derive some properties the determinant should have. Then we will see that if we want to have these properties, we do not have any choice, and we arrive at several equivalent definitions of the determinant.
It is more convenient to start not with the determinant of a matrix, but with the determinant of a system of vectors. There is no real difference here, since we can always join the vectors together (say as columns) to form a matrix.
Let us have $n$ vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ in $\mathbb{R}^n$ (notice that the number of vectors coincides with the dimension), and we want to find the $n$-dimensional volume of the parallelepiped determined by these vectors.
The parallelepiped determined by the vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ can be defined as the collection of all vectors $\mathbf{v}$ that can be represented as
$$\mathbf{v} = t_1\mathbf{v}_1 + t_2\mathbf{v}_2 + \cdots + t_n\mathbf{v}_n, \qquad 0 \le t_k \le 1 \ \text{for all } k.$$
It can be easily visualized when $n = 2$ (parallelogram) and $n = 3$ (parallelepiped). So, what is the $n$-dimensional volume?
If $n = 2$ it is area; if $n = 3$ it is indeed the volume. In dimension $1$ it is just the length.
Finally, let us introduce some notation. For a system of vectors (columns) $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ we will denote its determinant (that we are going to construct) as $D(\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n)$. If we join these vectors in a matrix $A$ (column number $k$ of $A$ is $\mathbf{v}_k$), then we will use the notation $\det A$,
$$\det A = D(\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n).$$
Also, for a matrix
$$A = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{pmatrix}$$
its determinant is often denoted by
$$\begin{vmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{vmatrix}.$$
We know that for dimensions 2 and 3 the “volume” of a parallelepiped is determined by the base times height rule: if we pick one vector, then the height is the distance from this vector to the subspace spanned by the remaining vectors, and the base is the $(n-1)$-dimensional volume of the parallelepiped determined by the remaining vectors.
Now let us generalize this idea to higher dimensions. For the moment we do not care about how exactly to determine the height and the base. We will show that if we assume that the base and the height satisfy some natural properties, then we do not have any choice, and the volume (determinant) is uniquely defined.
First of all, if we multiply the vector $\mathbf{v}_1$ by a positive number $a$, then the height (i.e. the distance to the linear span $\mathcal{L}(\mathbf{v}_2, \ldots, \mathbf{v}_n)$) is multiplied by $a$. If we admit negative heights (and negative volumes), then this property holds for all scalars $a$, and so the determinant of the system should satisfy
$$D(a\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n) = a\, D(\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n).$$
Of course, there is nothing special about the vector $\mathbf{v}_1$, so for any index $k$
$$D(\mathbf{v}_1, \ldots, a\mathbf{v}_k, \ldots, \mathbf{v}_n) = a\, D(\mathbf{v}_1, \ldots, \mathbf{v}_k, \ldots, \mathbf{v}_n). \tag{3.2.1}$$
To get the next property, let us notice that if we add 2 vectors, then the “height” of the result should equal the sum of the “heights” of the summands, i.e. that
$$D(\mathbf{v}_1, \ldots, \mathbf{u}_k + \mathbf{w}_k, \ldots, \mathbf{v}_n) = D(\mathbf{v}_1, \ldots, \mathbf{u}_k, \ldots, \mathbf{v}_n) + D(\mathbf{v}_1, \ldots, \mathbf{w}_k, \ldots, \mathbf{v}_n). \tag{3.2.2}$$
In other words, the above two properties say that the determinant of $n$ vectors is linear in each argument (vector), meaning that if we fix $n-1$ vectors and interpret the remaining vector as a variable (argument), we get a linear function.
We already know that linearity is a very nice property that helps in many situations. So, admitting negative heights (and therefore negative volumes) is a very small price to pay to get linearity, since we can always take the absolute value afterwards.
In fact, by admitting negative heights, we did not sacrifice anything! On the contrary, we even gained something, because the sign of the determinant contains some information about the system of vectors (orientation).
If $\mathbf{v}_j = \mathbf{v}_k$ for some $j \ne k$, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ are linearly dependent, so the parallelepiped is degenerate.
But then the volume must be $0$ (for example, because the “height”, i.e. the distance from the vector $\mathbf{v}_k$ to the linear span of the remaining vectors, is $0$). Thus, it is natural to assume that
$$D(\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n) = 0 \quad \text{if } \mathbf{v}_j = \mathbf{v}_k \text{ for some } j \ne k. \tag{3.2.3}$$
In what follows this will be one of the properties of the determinant that we will use.
The next property also seems natural. Namely, if we take a vector, say $\mathbf{v}_j$, and add to it a multiple of another vector $\mathbf{v}_k$, the “height” does not change, so
$$D(\mathbf{v}_1, \ldots, \mathbf{v}_j + a\mathbf{v}_k, \ldots, \mathbf{v}_n) = D(\mathbf{v}_1, \ldots, \mathbf{v}_j, \ldots, \mathbf{v}_n), \qquad j \ne k. \tag{3.2.4}$$
In other words, if we apply the column operation of the third type, the determinant does not change.
The next property the determinant should have is that if we interchange 2 vectors, the determinant changes sign:
$$D(\ldots, \mathbf{v}_k, \ldots, \mathbf{v}_j, \ldots) = -D(\ldots, \mathbf{v}_j, \ldots, \mathbf{v}_k, \ldots). \tag{3.2.5}$$
Functions of several variables that change sign when one interchanges any two arguments are called antisymmetric.
At first sight this property does not look natural, but it can be deduced from the previous ones. Namely, applying property (3.2.4) three times, and then using (3.2.1), we get (writing only the two columns in question)
$$D(\ldots, \mathbf{u}, \ldots, \mathbf{v}, \ldots) = D(\ldots, \mathbf{u}+\mathbf{v}, \ldots, \mathbf{v}, \ldots) = D(\ldots, \mathbf{u}+\mathbf{v}, \ldots, -\mathbf{u}, \ldots) = D(\ldots, \mathbf{v}, \ldots, -\mathbf{u}, \ldots) = -D(\ldots, \mathbf{v}, \ldots, \mathbf{u}, \ldots).$$
Recall that the property (3.2.4) follows from (3.2.3) and the linearity (3.2.1), (3.2.2), so the antisymmetry (3.2.5) also follows from these properties.
The last property is the easiest one. For the standard basis $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n$ in $\mathbb{R}^n$ the corresponding parallelepiped is the $n$-dimensional unit cube, so
$$D(\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n) = 1. \tag{3.2.6}$$
In matrix notation this can be written as
$$\det I = 1.$$
The plan of the game is now as follows: using the properties that, as we decided in Section 3.2, the determinant should have, we derive other properties of the determinant, some of them highly non-trivial. We will show how to use these properties to compute the determinant using our old friend, row reduction.
Later, in Section 3.4, we will show that the determinant, i.e. a function with the desired properties, exists and is unique. After all, we have to be sure that the object we are computing and studying exists.
While our initial geometric motivation for the determinant and its properties came from considering vectors in the real vector space $\mathbb{R}^n$, and so relates only to matrices with real entries, all the constructions below use only algebraic operations (addition, multiplication, division), and are applicable to matrices with complex entries, and even with entries in an arbitrary field.
So in what follows we are constructing the determinant not just for real matrices, but for complex matrices as well (and also for matrices with entries in an arbitrary field). The nice geometric motivation for the properties works only in the real case, but once we have decided on the properties of the determinant (see properties 1–3 below), everything works in the general case.
We will use the following basic properties of the determinant:
Determinant is linear in each column, i.e. in vector notation, for every index $k$
$$D(\mathbf{v}_1, \ldots, \alpha\mathbf{u}_k + \beta\mathbf{w}_k, \ldots, \mathbf{v}_n) = \alpha\, D(\mathbf{v}_1, \ldots, \mathbf{u}_k, \ldots, \mathbf{v}_n) + \beta\, D(\mathbf{v}_1, \ldots, \mathbf{w}_k, \ldots, \mathbf{v}_n)$$
for all scalars $\alpha$, $\beta$.
Determinant is antisymmetric, i.e. if one interchanges two columns, the determinant changes sign.
Normalization property: $\det I = 1$.
All these properties were discussed above in Section 3.2. The first property is just (3.2.1) and (3.2.2) combined. The second one is (3.2.5), and the last one is the normalization property (3.2.6). Note that we did not include properties (3.2.3) and (3.2.4): they can be deduced from the above three. These three properties completely define the determinant!
In what follows, $D = \det$ is a function satisfying properties 1–3 in Section 3.3.1 above.
For a square matrix $A$ the following statements hold:
If $A$ has a zero column, then $\det A = 0$.
If $A$ has two equal columns, then $\det A = 0$;
If one column of $A$ is a multiple of another, then $\det A = 0$;
If the columns of $A$ are linearly dependent, i.e. if the matrix is not invertible, then $\det A = 0$.
Statement 1 follows immediately from linearity. If we multiply the zero column by zero, we do not change the matrix and its determinant. But by property 1 above, the determinant is multiplied by $0$, so we should get $\det A = 0$.
The fact that the determinant is antisymmetric implies statement 2. Indeed, if we interchange two equal columns, we change nothing, so the determinant remains the same. On the other hand, interchanging two columns changes the sign of the determinant, so
$$\det A = -\det A,$$
which is possible only if $\det A = 0$.
Statement 3 is an immediate corollary of statement 2 and linearity.
To prove the last statement, let us first suppose that the first vector $\mathbf{v}_1$ is a linear combination of the other vectors,
$$\mathbf{v}_1 = \alpha_2\mathbf{v}_2 + \alpha_3\mathbf{v}_3 + \cdots + \alpha_n\mathbf{v}_n = \sum_{k=2}^n \alpha_k\mathbf{v}_k.$$
Then by linearity we have (in vector notation)
$$D(\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n) = \sum_{k=2}^n \alpha_k\, D(\mathbf{v}_k, \mathbf{v}_2, \ldots, \mathbf{v}_n),$$
and each determinant in the sum is zero because of two equal columns.
Let us now consider the general case, i.e. let us assume that the system $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ is linearly dependent. Then one of the vectors, say $\mathbf{v}_k$, can be represented as a linear combination of the others. Interchanging this vector with $\mathbf{v}_1$ we arrive at the situation we just treated, and the interchange only changes the sign,
so the determinant in this case is also $0$. ∎
The next proposition generalizes property (3.2.4). As we have already said above, this property can be deduced from the three “basic” properties of the determinant we are using in this section.
The determinant does not change if we add to a column a linear combination of the other columns (leaving the other columns intact). In particular, the determinant is preserved under “column replacement” (column operation of the third type).
Note that adding to a column a multiple of itself is prohibited here. We can only add multiples of the other columns.
Now we are ready to compute the determinant for some important special classes of matrices. The first class is the so-called diagonal matrices. Let us recall that a square matrix $A = \{a_{j,k}\}_{j,k=1}^n$ is called diagonal if all entries off the main diagonal are zero, i.e. if $a_{j,k} = 0$ for all $j \ne k$. We will often use the notation $\operatorname{diag}\{a_1, a_2, \ldots, a_n\}$ for the diagonal matrix
$$\begin{pmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{pmatrix}.$$
Since a diagonal matrix can be obtained from the identity matrix by multiplying column number $k$ by $a_k$, we arrive at the following proposition.
The determinant of a diagonal matrix equals the product of the diagonal entries,
$$\det(\operatorname{diag}\{a_1, a_2, \ldots, a_n\}) = a_1 a_2 \cdots a_n.$$
The next important class is the class of so-called triangular matrices. A square matrix $A = \{a_{j,k}\}_{j,k=1}^n$ is called upper triangular if all entries below the main diagonal are $0$, i.e. if $a_{j,k} = 0$ for all $j > k$. A square matrix is called lower triangular if all entries above the main diagonal are $0$, i.e. if $a_{j,k} = 0$ for all $j < k$. We call a matrix triangular if it is either a lower or an upper triangular matrix.
It is easy to see that
The determinant of a triangular matrix equals the product of the diagonal entries,
$$\det A = a_{1,1}\, a_{2,2} \cdots a_{n,n}.$$
Indeed, if a triangular matrix has zero on the main diagonal, it is not invertible (this can easily be checked by column operations), and therefore both sides equal zero. If all diagonal entries are non-zero, then using column replacement (column operations of the third type) one can transform the matrix into a diagonal one with the same diagonal entries. For an upper triangular matrix one should first subtract appropriate multiples of the first column from the columns number $2, 3, \ldots, n$, “killing” all entries in the first row except $a_{1,1}$, then subtract appropriate multiples of the second column from the columns number $3, 4, \ldots, n$, and so on.
To treat the case of lower triangular matrices one has to do “column reduction” from the right to the left, i.e. first subtract appropriate multiples of the last column from the columns number $1, 2, \ldots, n-1$, and so on.
Now we know how to compute determinants using their properties: one just needs to do column reduction (i.e. row reduction for $A^T$), keeping track of the column operations that change the determinant. Fortunately, the most often used operation, column replacement (i.e. the operation of the third type), does not change the determinant. So we only need to keep track of interchanges of columns and of multiplications of a column by a scalar.
If an echelon form of $A$ does not have pivots in every column (and row), then $A$ is not invertible, so $\det A = 0$. If $A$ is invertible, we arrive at a triangular matrix, and $\det A$ is the product of the diagonal entries times the correction from column interchanges and multiplications.
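To make the bookkeeping concrete, here is a small Python sketch of this algorithm (our own illustration, not part of the original text), assuming exact rational arithmetic via fractions.Fraction to avoid rounding: replacements leave the determinant unchanged, interchanges flip the sign, and at the end we multiply the diagonal entries of the triangular matrix.

    # A sketch of "determinant by reduction", assuming exact Fractions.
    from fractions import Fraction

    def det_by_reduction(matrix):
        a = [[Fraction(x) for x in row] for row in matrix]
        n = len(a)
        sign = 1
        for k in range(n):
            # find a pivot in column k, at or below row k
            pivot = next((r for r in range(k, n) if a[r][k] != 0), None)
            if pivot is None:
                return Fraction(0)        # no pivot: matrix is not invertible
            if pivot != k:
                a[k], a[pivot] = a[pivot], a[k]
                sign = -sign              # an interchange changes the sign
            for r in range(k + 1, n):
                factor = a[r][k] / a[k][k]
                # replacement (third-type operation): determinant unchanged
                a[r] = [a[r][j] - factor * a[k][j] for j in range(n)]
        prod = Fraction(1)
        for k in range(n):
            prod *= a[k][k]               # triangular: product of the diagonal
        return sign * prod

    print(det_by_reduction([[1, 2], [3, 4]]))   # -2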
The above algorithm implies that $\det A$ can be zero only if the matrix $A$ is not invertible. Combining this with the last statement of Proposition 3.3.1 we get
$\det A = 0$ if and only if $A$ is not invertible. An equivalent statement: $\det A \ne 0$ if and only if $A$ is invertible.
Note that although we now know how to compute determinants, the determinant is still not defined. One can ask: why don’t we define it as the result we get from the above algorithm? The problem is that formally this result is not well defined: we did not prove that different sequences of column operations yield the same answer.
In this section we prove two important theorems.
For a square matrix $A$,
$$\det A = \det(A^T).$$
This theorem implies that for every statement about columns we discussed above, the corresponding statement about rows is also true. In particular, determinants behave under row operations the same way they behave under column operations. So, we can use row operations to compute determinants.
For $n \times n$ matrices $A$ and $B$
$$\det(AB) = (\det A)(\det B).$$
In other words,
the determinant of a product equals the product of the determinants.
To prove both theorems we need the following lemma.
For a square matrix $A$ and an elementary matrix $E$ (of the same size)
$$\det(AE) = (\det A)(\det E).$$
The proof can be done just by direct checking: determinants of elementary matrices are easy to compute; right multiplication by an elementary matrix is a column operation, and the effect of column operations on the determinant is well known.
It can look like a lucky coincidence that the determinants of elementary matrices agree with the corresponding column operations, but it is not a coincidence at all.
Namely, for a column operation the corresponding elementary matrix can be obtained from the identity matrix $I$ by this column operation. So, its determinant is $1$ (the determinant of $I$) times the effect of the column operation.
And that is all! It may be hard to realize at first, but the above paragraph is a complete and rigorous proof of the lemma! ∎
Applying Lemma 3.3.6 $N$ times we get the following corollary.
For any matrix $A$ and any sequence of elementary matrices $E_1, E_2, \ldots, E_N$ (all matrices are $n \times n$)
$$\det(A E_1 E_2 \cdots E_N) = (\det A)(\det E_1)(\det E_2)\cdots(\det E_N).$$
Any invertible matrix is a product of elementary matrices.
We know that any invertible matrix is row equivalent to the identity matrix, which is its reduced echelon form. So
$$I = E_N E_{N-1} \cdots E_2 E_1 A,$$
where $E_1, E_2, \ldots, E_N$ are the elementary matrices corresponding to the row operations, and therefore any invertible matrix can be represented as a product of elementary matrices,
$$A = E_1^{-1} E_2^{-1} \cdots E_N^{-1}$$
(the inverse of an elementary matrix is an elementary matrix). ∎
First of all, it can be easily checked that for an elementary matrix $E$ we have $\det(E^T) = \det E$. Notice that it is sufficient to prove the theorem only for invertible matrices $A$, since if $A$ is not invertible then $A^T$ is also not invertible, and both determinants are zero.
By Lemma 3.3.8 the matrix $A$ can be represented as a product of elementary matrices,
$$A = E_1 E_2 \cdots E_N,$$
and by Corollary 3.3.7 the determinant of $A$ is the product of the determinants of the elementary matrices. Since taking the transpose just transposes each elementary matrix and reverses their order, Corollary 3.3.7 implies that $\det(A^T) = \det A$. ∎
Let us first suppose that the matrix $B$ is invertible. Then Lemma 3.3.8 implies that $B$ can be represented as a product of elementary matrices,
$$B = E_1 E_2 \cdots E_N,$$
and so by Corollary 3.3.7
$$\det(AB) = \det(A E_1 E_2 \cdots E_N) = (\det A)(\det E_1)(\det E_2)\cdots(\det E_N) = (\det A)(\det B).$$
If $B$ is not invertible, then the product $AB$ is also not invertible, and the theorem just says that $0 = 0$.
To check that the product $AB$ is not invertible, let us assume that it is invertible. Then multiplying the identity $AB = AB$ by $(AB)^{-1}$ from the left, we get $I = ((AB)^{-1}A)B$, so $(AB)^{-1}A$ is a left inverse of $B$. So $B$ is left invertible, and since it is square, it is invertible. We got a contradiction. ∎
First of all, let us say once more that the determinant is defined only for square matrices! Since we now know that $\det A = \det(A^T)$, the statements that we knew about columns are true for rows too.
Determinant is linear in each row (column) when the other rows (columns) are fixed.
If one interchanges two rows (columns) of a matrix $A$, the determinant changes sign.
For a triangular (in particular, for a diagonal) matrix, its determinant is the product of the diagonal entries. In particular, $\det I = 1$.
If a matrix $A$ has a zero row (or column), then $\det A = 0$.
If a matrix $A$ has two equal rows (columns), then $\det A = 0$.
If one of the rows (columns) of $A$ is a linear combination of the other rows (columns), i.e. if the matrix is not invertible, then $\det A = 0$;
More generally,
$\det A = 0$ if and only if $A$ is not invertible, or equivalently,
$\det A \ne 0$ if and only if $A$ is invertible.
$\det A$ does not change if we add to a row (column) a linear combination of the other rows (columns). In particular, the determinant is preserved under the row (column) replacement, i.e. under the row (column) operation of the third kind.
$\det(A^T) = \det A$.
$\det(AB) = (\det A)(\det B)$.
And finally,
If $A$ is an $n \times n$ matrix, then $\det(aA) = a^n \det A$.
The last property follows from the linearity of the determinant, if we recall that to multiply a matrix $A$ by $a$ we have to multiply each row by $a$, and that each such multiplication multiplies the determinant by $a$.
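A quick numeric sanity check of the last property (a small hypothetical example of ours, with $n = 2$ and $a = 3$):

    # det(aA) = a^n det(A): check for n = 2, a = 3, det(A) = -2
    a = 3
    A = [[1, 2], [3, 4]]
    aA = [[a * x for x in row] for row in A]
    det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    det_aA = aA[0][0] * aA[1][1] - aA[0][1] * aA[1][0]
    print(det_aA, a ** 2 * det_A)   # both -18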
If $A$ is an $n \times n$ matrix, how are the determinants $\det A$ and $\det(cA)$ (for a scalar $c$) related? Remark: $\det(cA) = c\,\det A$ only in the trivial case of $1 \times 1$ matrices.
How are the determinants and related if
Using column or row operations compute the determinants
A square ($n \times n$) matrix $A$ is called skew-symmetric (or antisymmetric) if $A^T = -A$. Prove that if $A$ is skew-symmetric and $n$ is odd, then $\det A = 0$. Is this true for even $n$?
A square matrix $A$ is called nilpotent if $A^k = 0$ for some positive integer $k$. Show that for a nilpotent matrix $A$, $\det A = 0$.
Prove that if the matrices $A$ and $B$ are similar, then $\det A = \det B$.
A real square matrix $U$ is called orthogonal if $U^T U = I$. Prove that if $U$ is an orthogonal matrix then $\det U = \pm 1$.
Show that
$$\begin{vmatrix} 1 & x & x^2 \\ 1 & y & y^2 \\ 1 & z & z^2 \end{vmatrix} = (y - x)(z - x)(z - y).$$
This is a particular case of the so-called Vandermonde determinant.
Let the points $A$, $B$ and $C$ in the plane $\mathbb{R}^2$ have coordinates $(x_1, y_1)$, $(x_2, y_2)$ and $(x_3, y_3)$ respectively. Show that the area of the triangle $ABC$ is the absolute value of
$$\frac{1}{2}\begin{vmatrix} 1 & x_1 & y_1 \\ 1 & x_2 & y_2 \\ 1 & x_3 & y_3 \end{vmatrix}.$$
Hint: use row operations and geometric interpretation of determinants (area).
Let $A$ be a square matrix. Show that the block triangular matrices
$$\begin{pmatrix} A & * \\ 0 & I \end{pmatrix}, \quad \begin{pmatrix} A & 0 \\ * & I \end{pmatrix}, \quad \begin{pmatrix} I & * \\ 0 & A \end{pmatrix}, \quad \begin{pmatrix} I & 0 \\ * & A \end{pmatrix}$$
all have determinant equal to $\det A$. Here $*$ can be anything.
The following problems illustrate the power of block matrix notation.
Use the previous problem to show that if $A$ and $B$ are square matrices, then
$$\det\begin{pmatrix} A & 0 \\ * & B \end{pmatrix} = (\det A)(\det B).$$
Hint: $\begin{pmatrix} A & 0 \\ * & B \end{pmatrix} = \begin{pmatrix} A & 0 \\ * & I \end{pmatrix}\begin{pmatrix} I & 0 \\ 0 & B \end{pmatrix}$.
Let $A$ be an $m \times n$ and $B$ be an $n \times m$ matrix. Prove that
$$\det\begin{pmatrix} 0 & A \\ -B & I \end{pmatrix} = \det(AB).$$
Hint: While it is possible to transform the matrix by row operations to a form where the determinant is easy to compute, the easiest way is to right multiply the matrix by $\begin{pmatrix} I & 0 \\ B & I \end{pmatrix}$.
In this section we arrive at the formal definition of the determinant. We show that a function satisfying the basic properties 1, 2, 3 from Section 3.3 exists, and moreover, that such a function is unique, i.e. we do not have any choice in constructing the determinant.
Consider an $n \times n$ matrix $A = \{a_{j,k}\}_{j,k=1}^n$, and let $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ be its columns, i.e.
$$\mathbf{v}_k = \begin{pmatrix} a_{1,k} \\ a_{2,k} \\ \vdots \\ a_{n,k} \end{pmatrix} = \sum_{j=1}^n a_{j,k}\mathbf{e}_j.$$
Using linearity of the determinant we expand it in the first column $\mathbf{v}_1$:
$$D(\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n) = \sum_{j=1}^n a_{j,1}\, D(\mathbf{e}_j, \mathbf{v}_2, \ldots, \mathbf{v}_n). \tag{3.4.1}$$
Then we expand it in the second column, then in the third, and so on. We get
$$D(\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n) = \sum_{j_1, j_2, \ldots, j_n} a_{j_1,1}\, a_{j_2,2} \cdots a_{j_n,n}\, D(\mathbf{e}_{j_1}, \mathbf{e}_{j_2}, \ldots, \mathbf{e}_{j_n}),$$
where the sum is taken over all possible choices of the indices $j_1, j_2, \ldots, j_n \in \{1, 2, \ldots, n\}$.
Notice that we have to use a different index of summation for each column: we call them $j_1, j_2, \ldots, j_n$; the index $j_1$ here is the same as the index $j$ in (3.4.1).
It is a huge sum, it contains $n^n$ terms. Fortunately, some of the terms are zero. Namely, if any 2 of the indices $j_1, j_2, \ldots, j_n$ coincide, the determinant $D(\mathbf{e}_{j_1}, \mathbf{e}_{j_2}, \ldots, \mathbf{e}_{j_n})$ is zero, because there are two equal columns here.
So, let us rewrite the sum, omitting all zero terms. The most convenient way to do that is using the notion of a permutation. Informally, a permutation of an ordered set $\{1, 2, \ldots, n\}$ is a rearrangement of its elements. A convenient formal way to represent such a rearrangement is by using a function
$$\sigma : \{1, 2, \ldots, n\} \to \{1, 2, \ldots, n\},$$
where $\sigma(1), \sigma(2), \ldots, \sigma(n)$ gives the new order of the set $1, 2, \ldots, n$. In other words, the permutation $\sigma$ rearranges the ordered set $1, 2, \ldots, n$ into $\sigma(1), \sigma(2), \ldots, \sigma(n)$.
Such a function $\sigma$ has to be one-to-one (different values for different arguments) and onto (it assumes all possible values from the target space). Functions which are one-to-one and onto are called bijections, and they give a one-to-one correspondence between the domain and the target space. (There is another canonical way to represent a permutation by a bijection $\tau$, namely, in this representation $\tau(k)$ gives the new position of the element number $k$; in this representation $\tau$ rearranges $1, 2, \ldots, n$ into the tuple whose entry in position $\tau(k)$ is $k$. While in the first representation it is easy to write the function $\sigma$ if you know the rearrangement of the set $1, 2, \ldots, n$, the second one is better adapted to the composition of permutations: it coincides with the composition of functions. Namely, if we first perform the permutation that corresponds to a function $\tau_1$ and then the one that corresponds to $\tau_2$, the resulting permutation will correspond to $\tau_2 \circ \tau_1$.)
Although it is not directly relevant here, let us notice that it is well known in combinatorics that the number of different permutations of the set $\{1, 2, \ldots, n\}$ is exactly $n!$. The set of all permutations of the set $\{1, 2, \ldots, n\}$ will be denoted $\operatorname{Perm}(n)$.
Using the notion of a permutation, we can rewrite the determinant as
$$D(\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n) = \sum_{\sigma \in \operatorname{Perm}(n)} a_{\sigma(1),1}\, a_{\sigma(2),2} \cdots a_{\sigma(n),n}\, D(\mathbf{e}_{\sigma(1)}, \mathbf{e}_{\sigma(2)}, \ldots, \mathbf{e}_{\sigma(n)}).$$
The matrix with columns $\mathbf{e}_{\sigma(1)}, \mathbf{e}_{\sigma(2)}, \ldots, \mathbf{e}_{\sigma(n)}$ can be obtained from the identity matrix by finitely many column interchanges, so the determinant
$$D(\mathbf{e}_{\sigma(1)}, \mathbf{e}_{\sigma(2)}, \ldots, \mathbf{e}_{\sigma(n)})$$
is $1$ or $-1$ depending on the number of column interchanges.
To formalize that, we (informally) define the sign (denoted $\operatorname{sign}\sigma$) of a permutation $\sigma$ to be $1$ if an even number of interchanges is necessary to rearrange the $n$-tuple $1, 2, \ldots, n$ into $\sigma(1), \sigma(2), \ldots, \sigma(n)$, and $-1$ if the number of interchanges is odd.
It is a well-known fact from combinatorics that the sign of a permutation is well defined, i.e. that although there are infinitely many ways to get the $n$-tuple $\sigma(1), \sigma(2), \ldots, \sigma(n)$ from $1, 2, \ldots, n$, the number of interchanges is either always odd or always even.
One of the ways to show that is to introduce an alternative definition. Let $K(\sigma)$ be the number of disorders of $\sigma$, i.e. the number of pairs $(j, k)$, $j < k$, such that $\sigma(j) > \sigma(k)$, and see if the number $K(\sigma)$ is even or odd. We call the permutation $\sigma$ odd if $K(\sigma)$ is odd and even if $K(\sigma)$ is even. Then define $\operatorname{sign}\sigma = (-1)^{K(\sigma)}$; note that this way $\operatorname{sign}\sigma$ is well defined.
We want to show that $\operatorname{sign}\sigma$ can indeed be computed by rearranging the $n$-tuple $1, 2, \ldots, n$ into $\sigma(1), \sigma(2), \ldots, \sigma(n)$ and counting the number of interchanges, as was described above.
If $\sigma(k) = k$ for all $k$, then the number of disorders is $0$, so the sign of this identity permutation is $1$. Note also that any elementary transpose, which interchanges two neighbors, changes the sign of a permutation, because it changes (increases or decreases) the number of disorders by exactly $1$. So, to get from one permutation to another, one always needs an even number of elementary transposes if the permutations have the same sign, and an odd number if the signs are different.
Finally, any interchange of two entries can be achieved by an odd number of elementary transposes. This implies that the sign changes under an interchange of two entries. So, to get from $1, 2, \ldots, n$ to an even permutation (positive sign) one always needs an even number of interchanges, and an odd number of interchanges is needed to get an odd permutation (negative sign).
So, if we want the determinant to satisfy the basic properties 1–3 from Section 3.3, we must define it as
$$\det A = \sum_{\sigma \in \operatorname{Perm}(n)} \operatorname{sign}\sigma\; a_{\sigma(1),1}\, a_{\sigma(2),2} \cdots a_{\sigma(n),n}, \tag{3.4.2}$$
where the sum is taken over all permutations $\sigma$ of the set $\{1, 2, \ldots, n\}$.
If we define the determinant this way, it is easy to check that it satisfies the basic properties 1–3 from Section 3.3. Indeed, it is linear in each column, because for each column every term (product) in the sum contains exactly one entry from this column.
Interchanging two columns of $A$ just adds an extra interchange to each permutation, so the right side in (3.4.2) changes sign. Finally, for the identity matrix $I$, the right side of (3.4.2) is $1$ (it has exactly one non-zero term, corresponding to the identity permutation).
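Formula (3.4.2) translates directly into code. The following sketch (ours, with 0-based indices instead of the text’s 1-based ones) computes $\operatorname{sign}\sigma$ by counting disorders, exactly as above, and then sums over all permutations; it is feasible only for small $n$:

    # Determinant via formula (3.4.2); feasible only for small n.
    from itertools import permutations

    def sign(sigma):
        # sign = (-1)^(number of disorders), as in the text
        n = len(sigma)
        disorders = sum(1 for j in range(n) for k in range(j + 1, n)
                        if sigma[j] > sigma[k])
        return -1 if disorders % 2 else 1

    def det_by_definition(a):
        n = len(a)
        total = 0
        for sigma in permutations(range(n)):
            term = sign(sigma)
            for k in range(n):
                term *= a[sigma[k]][k]    # entry a_{sigma(k), k}
            total += term
        return total

    print(det_by_definition([[1, 2], [3, 4]]))   # -2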
Suppose the permutation takes to .
Find the sign of the permutation;
What does do to ?
What does the inverse permutation do to this tuple?
What is the sign of the inverse permutation?
Let $P$ be a permutation matrix, i.e. an $n \times n$ matrix consisting of zeroes and ones and such that there is exactly one $1$ in every row and every column.
Can you describe the corresponding linear transformation? That will explain the name.
Show that $P$ is invertible. Can you describe $P^{-1}$?
Show that for some $N > 0$
$$P^N = I.$$
Hint: Use the fact that there are only finitely many permutations.
Why is there an even number of permutations of $\{1, 2, \ldots, n\}$ for $n \ge 2$, and why are exactly half of them odd permutations? Hint: This problem can be hard to solve in terms of permutations, but there is a very simple solution using determinants.
If $\sigma$ is an odd permutation, explain why $\sigma^2$ is even but $\sigma^{-1}$ is odd.
How many multiplications and additions are required to compute the determinant of an $n \times n$ matrix using the formal definition (3.4.2) of the determinant? Do not count the operations needed to compute $\operatorname{sign}\sigma$.
For an $n \times n$ matrix $A = \{a_{j,k}\}_{j,k=1}^n$, let $A_{j,k}$ denote the $(n-1) \times (n-1)$ matrix obtained from $A$ by crossing out row number $j$ and column number $k$.
Let $A$ be an $n \times n$ matrix. For each $j$, $1 \le j \le n$, the determinant of $A$ can be expanded in the row number $j$ as
$$\det A = \sum_{k=1}^n (-1)^{j+k}\, a_{j,k}\, \det A_{j,k}.$$
Similarly, for each $k$, $1 \le k \le n$, the determinant can be expanded in the column number $k$,
$$\det A = \sum_{j=1}^n (-1)^{j+k}\, a_{j,k}\, \det A_{j,k}.$$
Let us first prove the formula for the expansion in row number 1. The formula for expansion in row number 2 can then be obtained from it by interchanging rows number 1 and 2. Interchanging then rows number 2 and 3, we get the formula for the expansion in row number 3, and so on.
Since $\det A = \det(A^T)$, the column expansion follows automatically.
Let us first consider the special case when $a_{1,1}$ is the only non-zero entry in the first row. Performing column operations on columns $2, 3, \ldots, n$, we transform $A$ to lower triangular form. The determinant of $A$ then can be computed as the product of the diagonal entries times a correcting factor coming from the column operations (interchanges and multiplications of columns by scalars).
But the product of all diagonal entries except the first one (i.e. without $a_{1,1}$) times the correcting factor is exactly $\det A_{1,1}$, so in this particular case $\det A = a_{1,1}\det A_{1,1}$.
Let us now consider the case when all entries in the first row except $a_{1,2}$ are zeroes. This case can be reduced to the previous one by interchanging columns number 1 and 2, and therefore in this case $\det A = -a_{1,2}\det A_{1,2}$.
The case when $a_{1,3}$ is the only non-zero entry in the first row can be reduced to the previous one by interchanging columns 2 and 3, so in this case $\det A = a_{1,3}\det A_{1,3}$.
Repeating this procedure, we get that in the case when $a_{1,k}$ is the only non-zero entry in the first row, $\det A = (-1)^{1+k} a_{1,k}\det A_{1,k}$. (In this case it may be tempting to exchange columns number 1 and number $k$, to reduce the problem to the case treated first. However, when we exchange columns 1 and $k$ we change the order of the other columns: if we just cross out column number $k$, then column number 1 will be the first of the remaining columns. But if we exchange columns 1 and $k$, and then cross out the first column (which now is the old column number $k$), then the old column number 1 will be column number $k-1$ among the remaining ones. To avoid the complications of keeping track of the order of columns, we can, as we did above, exchange columns number $k-1$ and $k$, reducing everything to the situation we treated on the previous step. Such an operation does not change the order of the rest of the columns.)
In the general case, linearity of the determinant in each row implies that
$$\det A = \det\widetilde{A}_1 + \det\widetilde{A}_2 + \cdots + \det\widetilde{A}_n,$$
where the matrix $\widetilde{A}_k$ is obtained from $A$ by replacing all entries in the first row except $a_{1,k}$ by $0$. As we just discussed above,
$$\det\widetilde{A}_k = (-1)^{1+k} a_{1,k}\det A_{1,k},$$
so
$$\det A = \sum_{k=1}^n (-1)^{1+k} a_{1,k}\det A_{1,k}.$$
To get the cofactor expansion in the second row, we can interchange the first and second rows and apply the above formula. The row exchange changes the sign, so we get
$$\det A = -\sum_{k=1}^n (-1)^{1+k} a_{2,k}\det A_{2,k} = \sum_{k=1}^n (-1)^{2+k} a_{2,k}\det A_{2,k}.$$
Exchanging rows 3 and 2 and expanding in the second row we get the formula
$$\det A = \sum_{k=1}^n (-1)^{3+k} a_{3,k}\det A_{3,k},$$
and so on.
To expand the determinant in a column, one needs to apply the row expansion formula to $A^T$. ∎
The numbers
$$C_{j,k} = (-1)^{j+k}\det A_{j,k}$$
are called cofactors.
Using this notation, the formula for the expansion of the determinant in the row number $j$ can be rewritten as
$$\det A = a_{j,1}C_{j,1} + a_{j,2}C_{j,2} + \cdots + a_{j,n}C_{j,n} = \sum_{k=1}^n a_{j,k}C_{j,k}.$$
Similarly, the expansion in the column number $k$ can be written as
$$\det A = a_{1,k}C_{1,k} + a_{2,k}C_{2,k} + \cdots + a_{n,k}C_{n,k} = \sum_{j=1}^n a_{j,k}C_{j,k}.$$
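In code, the expansion in the first row becomes a short recursion. This is our own sketch, not notation from the text: the helper minor(a, i, j) plays the role of $A_{j,k}$ (crossing out one row and one column), with 0-based indices:

    # Recursive cofactor expansion in the first row (0-based indices).
    def minor(a, i, j):
        # cross out row i and column j
        return [row[:j] + row[j + 1:] for r, row in enumerate(a) if r != i]

    def det_cofactor(a):
        n = len(a)
        if n == 1:
            return a[0][0]
        # det A = sum_k (-1)^k a[0][k] * det A_{0,k}
        return sum((-1) ** k * a[0][k] * det_cofactor(minor(a, 0, k))
                   for k in range(n))

    print(det_cofactor([[1, 2, 0], [3, 4, 0], [0, 0, 5]]))   # -10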
Very often the cofactor expansion formula is used as the definition of the determinant. It is not difficult to show that the quantity given by this formula satisfies the basic properties of the determinant: the normalization property is trivial, and the proof of antisymmetry is easy. However, the proof of linearity is a bit tedious (although not too difficult).
Although it looks very nice, the cofactor expansion formula is not suitable for computing the determinant of matrices bigger than $3 \times 3$.
As one can count, it requires more than $n!$ multiplications (to be precise, approximately $(e-1)\,n! \approx 1.72\, n!$ multiplications), and $n!$ grows very rapidly. For example, the cofactor expansion of a $20 \times 20$ matrix requires more than $20! \approx 2.4 \times 10^{18}$ multiplications. It would take a computer performing a billion multiplications per second over 77 years to perform $20!$ multiplications; performing all the multiplications required for the cofactor expansion of the determinant of a $20 \times 20$ matrix will require more than 132 years. (The reader can check the numbers using, for example, WolframAlpha.)
On the other hand, computing the determinant of an $n \times n$ matrix using row reduction requires about $n^3$ multiplications (and about the same number of additions). It would take a computer performing a million operations per second (very slow by today’s standards) only a fraction of a second to compute the determinant of a $20 \times 20$ matrix by row reduction.
It can only be practical to apply the cofactor expansion formula in higher dimensions if a row (or a column) has a lot of zero entries.
However, the cofactor expansion formula is of great theoretical importance, as the next section shows.
The matrix $C = \{C_{j,k}\}_{j,k=1}^n$ whose entries are the cofactors of a given matrix $A$ is called the cofactor matrix of $A$.
Let $A$ be an invertible matrix and let $C$ be its cofactor matrix. Then
$$A^{-1} = \frac{1}{\det A}\, C^T.$$
Let us find the product $AC^T$. The diagonal entry number $j$ is obtained by multiplying the $j$th row of $A$ by the $j$th column of $C^T$ (i.e. by the $j$th row of $C$), so
$$(AC^T)_{j,j} = \sum_{k=1}^n a_{j,k}C_{j,k} = \det A$$
by the cofactor expansion formula.
To get the off-diagonal terms we need to multiply the $j$th row of $A$ by the $k$th column of $C^T$, $k \ne j$, to get
$$\sum_{m=1}^n a_{j,m}C_{k,m}.$$
It follows from the cofactor expansion formula (expanding in the $k$th row) that this is the determinant of the matrix obtained from $A$ by replacing row number $k$ by the row number $j$ (and leaving all other rows as they were). But the rows $j$ and $k$ of this matrix coincide, so the determinant is $0$. So, all off-diagonal entries of $AC^T$ are zeroes (and all diagonal ones equal $\det A$), thus
$$AC^T = (\det A)\, I.$$
That means that the matrix $\frac{1}{\det A}C^T$ is a right inverse of $A$, and since $A$ is square, it is the inverse. ∎
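The theorem turns into a very inefficient but instructive inversion routine. A sketch of ours, reusing minor and det_cofactor from the sketch above; note the transpose: entry $(j, k)$ of the inverse is $C_{k,j}/\det A$:

    # Inverse via the cofactor matrix: A^{-1} = C^T / det(A).
    from fractions import Fraction

    def inverse_by_cofactors(a):
        n = len(a)
        d = Fraction(det_cofactor(a))       # det_cofactor from the sketch above
        assert d != 0, "matrix is not invertible"
        # entry (j, k) of the inverse is C_{k, j} / det(A)
        return [[(-1) ** (j + k) * det_cofactor(minor(a, k, j)) / d
                 for k in range(n)] for j in range(n)]

    print(inverse_by_cofactors([[1, 2], [3, 4]]))
    # [[Fraction(-2, 1), Fraction(1, 1)], [Fraction(3, 2), Fraction(-1, 2)]]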
Recalling that for an invertible matrix $A$ the equation $A\mathbf{x} = \mathbf{b}$ has a unique solution
$$\mathbf{x} = A^{-1}\mathbf{b} = \frac{1}{\det A}\, C^T\mathbf{b},$$
we get the following corollary of the above theorem.
For an invertible matrix $A$ the entry number $k$ of the solution of the equation $A\mathbf{x} = \mathbf{b}$ is given by the formula
$$x_k = \frac{\det B_k}{\det A},$$
where the matrix $B_k$ is obtained from $A$ by replacing column number $k$ of $A$ by the vector $\mathbf{b}$.
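Cramer’s rule is equally short in code. Again a sketch of ours, reusing det_cofactor from the sketches above; the matrix $B_k$ is built by replacing column number $k$ by $\mathbf{b}$:

    # Cramer's rule: x_k = det(B_k) / det(A).
    from fractions import Fraction

    def cramer_solve(a, b):
        n = len(a)
        d = det_cofactor(a)                 # det_cofactor from the sketch above
        xs = []
        for k in range(n):
            # B_k: replace column k of A by the vector b
            b_k = [row[:k] + [b[r]] + row[k + 1:] for r, row in enumerate(a)]
            xs.append(Fraction(det_cofactor(b_k), d))
        return xs

    print(cramer_solve([[1, 2], [3, 4]], [5, 6]))
    # [Fraction(-4, 1), Fraction(9, 2)]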
The cofactor formula really shines when one needs to invert a $2 \times 2$ matrix
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$
The cofactors here are just entries (determinants of $1 \times 1$ matrices), so the cofactor matrix is
$$C = \begin{pmatrix} d & -c \\ -b & a \end{pmatrix},$$
and the inverse matrix is given by the formula
$$A^{-1} = \frac{1}{\det A}\, C^T = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
While the cofactor formula for the inverse does not look practical for dimensions higher than 3, it has great theoretical value, as the examples below illustrate.
Suppose that we want to construct a matrix with integer entries such that its inverse also has integer entries (inverting such a matrix would make a nice homework problem: no messing with fractions). If $\det A = 1$ and the entries of $A$ are integers, the cofactor formula for inverses implies that $A^{-1}$ also has integer entries.
Note that it is easy to construct an integer matrix $A$ with $\det A = 1$: one should start with a triangular matrix with $1$s on the main diagonal, and then apply several row or column replacements (operations of the third type) to make the matrix look generic.
Another example is to consider a polynomial matrix $A(x)$, i.e. a matrix whose entries are not numbers but polynomials $a_{j,k}(x)$ of the variable $x$. If $\det A(x) \equiv 1$, then the inverse matrix $A(x)^{-1}$ is also a polynomial matrix.
If $\det A(x) = p(x) \not\equiv 0$, it follows from the cofactor expansion that each cofactor of $A(x)$ is a polynomial, so $A(x)^{-1}$ has entries that are rational functions; moreover, $p(x)$ is a multiple of each denominator.
Evaluate the determinants using any method
Use row (column) expansion to evaluate the determinants. Note that you don’t need to use the first row (column): picking a row (column) with many zeroes will simplify your calculations.
For the matrix $A$,
compute $\det(A - \lambda I)$, where $I$ is the identity matrix. You should get a nice expression involving $\lambda$ and the entries of $A$. Row expansion and induction is probably the best way to go.
Using the cofactor formula, compute the inverses of the matrices
Let $D_n$ be the determinant of the $n \times n$ tridiagonal matrix of this problem.
Using cofactor expansion show that $D_n = D_{n-1} + D_{n-2}$. This yields that the sequence $\{D_n\}$ is the Fibonacci sequence.
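Here is a quick numeric check, not a proof, of the recurrence. The exact matrix of this problem is not reproduced in this copy, so the code assumes one common version: $1$s on the main diagonal, $1$s just above it and $-1$s just below it.

    # Numeric check of the Fibonacci recurrence for one common tridiagonal
    # matrix (an assumption: 1s on the diagonal, 1s above, -1s below).
    def tridiag(n):
        return [[1 if j == i else 1 if j == i + 1 else -1 if j == i - 1 else 0
                 for j in range(n)] for i in range(n)]

    def det(a):
        if len(a) == 1:
            return a[0][0]
        return sum((-1) ** k * a[0][k] * det([r[:k] + r[k + 1:] for r in a[1:]])
                   for k in range(len(a)))

    print([det(tridiag(n)) for n in range(1, 8)])   # [1, 2, 3, 5, 8, 13, 21]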
Vandermonde determinant revisited. Our goal is to prove the formula
$$\det\begin{pmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \end{pmatrix} = \prod_{1 \le j < k \le n} (x_k - x_j)$$
for the Vandermonde determinant.
We will apply induction. To do this:
Check that the formula holds for $n = 1$ and $n = 2$.
Call the variable $x_n$ in the last row $x$, and show that the determinant is a polynomial $p(x)$ of degree $n - 1$, with the coefficients depending on $x_1, x_2, \ldots, x_{n-1}$.
Show that the polynomial $p$ has zeroes at $x_1, x_2, \ldots, x_{n-1}$, so it can be represented as $p(x) = a(x - x_1)(x - x_2)\cdots(x - x_{n-1})$, where the leading coefficient $a$ depends on $x_1, x_2, \ldots, x_{n-1}$ as above.
Assuming that the formula for the Vandermonde determinant is true for $n - 1$, compute the coefficient $a$ and prove the formula for $n$.
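Before writing out the induction, one can sanity-check the product formula numerically; the following sketch (ours) compares both sides for one sample choice of the $x_j$:

    # Check det(Vandermonde) = product of (x_k - x_j) over j < k.
    def det(a):
        if len(a) == 1:
            return a[0][0]
        return sum((-1) ** k * a[0][k] * det([r[:k] + r[k + 1:] for r in a[1:]])
                   for k in range(len(a)))

    xs = [1, 2, 4, 7]
    v = [[x ** j for j in range(len(xs))] for x in xs]
    lhs = det(v)
    rhs = 1
    for j in range(len(xs)):
        for k in range(j + 1, len(xs)):
            rhs *= xs[k] - xs[j]
    print(lhs, rhs)   # both 540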
How many multiplications are needed to compute the determinant of an $n \times n$ matrix using the cofactor expansion? Prove the formula.
For an $m \times n$ matrix $A$, let us consider a $k \times k$ submatrix, obtained by taking $k$ rows and $k$ columns of $A$. The determinant of this submatrix is called a minor of order $k$. Note that an $m \times n$ matrix has $\binom{m}{k}\binom{n}{k}$ different $k \times k$ submatrices, and so it has $\binom{m}{k}\binom{n}{k}$ minors of order $k$.
For a non-zero matrix $A$ its rank equals the maximal integer $k$ such that there exists a non-zero minor of order $k$.
Let us first show that if $k > \operatorname{rank} A$, then all minors of order $k$ are $0$. Indeed, since the dimension of the column space is $\operatorname{rank} A < k$, any $k$ columns of $A$ are linearly dependent. Therefore, for any $k \times k$ submatrix of $A$ its columns are linearly dependent, and so all minors of order $k$ are $0$.
To complete the proof we need to show that there exists a non-zero minor of order $k = \operatorname{rank} A$. There can be many such minors, but probably the easiest way to get such a minor is to take pivot rows and pivot columns (i.e. the rows and columns of the original matrix containing a pivot of an echelon form). This submatrix has the same pivots as the original matrix, so it is invertible (a pivot in every column and every row) and its determinant is non-zero. ∎
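The theorem can be checked by brute force on small matrices. In the following sketch of ours we search the orders from the largest down and return the first order with a non-zero minor:

    # Rank as the largest order of a non-zero minor (brute force, small only).
    from itertools import combinations

    def det(a):
        if len(a) == 1:
            return a[0][0]
        return sum((-1) ** k * a[0][k] * det([r[:k] + r[k + 1:] for r in a[1:]])
                   for k in range(len(a)))

    def rank_via_minors(a):
        m, n = len(a), len(a[0])
        for k in range(min(m, n), 0, -1):
            for rows in combinations(range(m), k):
                for cols in combinations(range(n), k):
                    if det([[a[r][c] for c in cols] for r in rows]) != 0:
                        return k
        return 0

    # the third row is the sum of the first two: rank is 2, not 3
    print(rank_via_minors([[1, 2, 3], [4, 5, 6], [5, 7, 9]]))   # 2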
This theorem does not look very useful, because it is much easier to perform row reduction than to compute all minors. However, it is of great theoretical importance, as the following corollary shows.
Let $A(x)$ be an $m \times n$ polynomial matrix (i.e. a matrix whose entries are polynomials of $x$). Then $\operatorname{rank} A(x)$ is constant everywhere, except maybe at finitely many points, where the rank is smaller.
Let $r$ be the largest integer such that $\operatorname{rank} A(x_0) = r$ for some $x_0$. To show that such $r$ exists, we first try $r = \min\{m, n\}$. If there exists $x_0$ such that $\operatorname{rank} A(x_0) = r$, we have found $r$. If not, we replace $r$ by $r - 1$ and try again. After finitely many steps we either stop or hit $0$. So, $r$ exists.
Let $x_0$ be a point such that $\operatorname{rank} A(x_0) = r$, and let $M(x)$ be a minor of order $r$ such that $M(x_0) \ne 0$. Since $A(x)$ is a polynomial matrix, $M(x)$ is a polynomial. Since $M(x_0) \ne 0$, it is not identically zero, so it can be zero only at finitely many points. So, $\operatorname{rank} A(x) \ge r$ everywhere, except maybe at finitely many points. But by the definition of $r$, $\operatorname{rank} A(x) \le r$ for all $x$. ∎
True or false
Determinant is only defined for square matrices.
If two rows or columns of $A$ are identical, then $\det A = 0$.
If $B$ is the matrix obtained from $A$ by interchanging two rows (or columns), then $\det B = -\det A$.
If $B$ is the matrix obtained from $A$ by multiplying a row (column) of $A$ by a scalar $\alpha$, then $\det B = \alpha\det A$.
If $B$ is the matrix obtained from $A$ by adding a multiple of a row to some other row, then $\det B = \det A$.
The determinant of a triangular matrix is the product of its diagonal entries.
$\det(A^T) = \det A$.
$\det(AB) = (\det A)(\det B)$.
A matrix $A$ is invertible if and only if $\det A \ne 0$.
If $A$ is an invertible matrix, then $\det(A^{-1}) = 1/\det A$.
Let $A$ be an $n \times n$ matrix. How are $\det(2A)$, $\det(-A)$ and $\det(A^2)$ related to $\det A$?
If the entries of both $A$ and $A^{-1}$ are integers, is it possible that $\det A = 3$? Hint: what is $\det A \cdot \det(A^{-1})$?
Let $\mathbf{u}, \mathbf{v}$ be vectors in $\mathbb{R}^2$ and let $A$ be the $2 \times 2$ matrix with columns $\mathbf{u}, \mathbf{v}$. Prove that $|\det A|$ is the area of the parallelogram with two sides given by the vectors $\mathbf{u}, \mathbf{v}$.
Consider first the case when $\mathbf{u}$ is a multiple of the vector $\mathbf{e}_1 = (1, 0)^T$. To treat the general case, left multiply $A$ by a rotation matrix that transforms the vector $\mathbf{u}$ into such a multiple of $\mathbf{e}_1$. Hint: what is the determinant of a rotation matrix?
The following problem illustrates the relation between the sign of the determinant and the so-called orientation of a system of vectors.
Let $\mathbf{u}$, $\mathbf{v}$ be vectors in $\mathbb{R}^2$. Show that $D(\mathbf{u}, \mathbf{v}) > 0$ if and only if there exists a rotation $T$ such that the vector $T\mathbf{u}$ is parallel to $\mathbf{e}_1$ (and looking in the same direction), and $T\mathbf{v}$ is in the upper half-plane (the same half-plane as $\mathbf{e}_2$).
Hint: What is the determinant of a rotation matrix?