A vector space V is a (non-empty) collection of objects, called vectors (denoted in this book by lowercase bold letters, like v), along with two operations, addition of vectors and multiplication by a number (scalar), such that the results of both operations are always in V, and the 8 properties below (the so-called axioms of a vector space) hold.

Footnote: We need some visual distinction between vectors and other objects, so in this book we use bold lowercase letters for vectors and regular lowercase letters for numbers (scalars). In some (more advanced) books Latin letters are reserved for vectors, while Greek letters are used for scalars; in even more advanced texts any letter can be used for anything, and the reader must understand from the context what each symbol means. I think it is helpful, especially for a beginner, to have some visual distinction between different objects, so a bold lowercase letter will always denote a vector. On a blackboard an arrow (like in \vec{v}) is used to identify a vector.
The first 4 properties deal with the addition:
Commutativity: u + v = v + u for all u, v ∈ V;
Associativity: (u + v) + w = u + (v + w) for all u, v, w ∈ V;
Zero vector: there exists a special vector, denoted by 0, such that v + 0 = v for all v ∈ V;
Additive inverse: for every vector v ∈ V there exists a vector w ∈ V such that v + w = 0. Such an additive inverse is usually denoted by −v;
The next two properties concern multiplication:
Multiplicative identity: 1v = v for all v ∈ V;
Multiplicative associativity: (αβ)v = α(βv) for all v ∈ V and all scalars α, β;
And finally, two distributive properties, which connect multiplication and addition:
α(u + v) = αu + αv for all u, v ∈ V and all scalars α;
(α + β)v = αv + βv for all v ∈ V and all scalars α, β.
The above properties seem hard to memorize, but it is not necessary to do so. They are simply the familiar rules of algebraic manipulation with numbers that you know from high school. The only new twist here is that you have to understand what operations you can apply to what objects. You can add vectors, and you can multiply a vector by a number (scalar). Of course, you can perform with numbers all the manipulations you have learned before. But you cannot multiply two vectors, or add a number to a vector.
It is easy to prove that the zero vector is unique, and that, for a given vector, its additive inverse is also unique.
It is also not hard to show, using properties 5, 6 and 8, that 0v = 0 for any v ∈ V, and that (−1)v = −v. Note that to do this one still needs to use other properties of a vector space in the proofs, in particular properties 3 and 4.
If the scalars are the usual real numbers, we call the space a real vector space. If the scalars are the complex numbers, i.e. if we can multiply vectors by complex numbers, we call the space a complex vector space.
Note, that any complex vector space is a real vector space as well (if we can multiply by complex numbers, we can multiply by real numbers), but not the other way around.
It is also possible to consider a situation when the scalars are elements of an arbitrary field F (if you do not know what a field is, do not worry, since in this book we consider only the case of real and complex spaces). In this case we say that V is a vector space over the field F. Although many of the constructions in the book (in particular, everything in Chapters 1–3) work for general fields, in this text we are considering only real and complex vector spaces.
If we do not specify the set of scalars, or use a letter for it, then the results are true for both real and complex spaces. If we need to distinguish real and complex cases, we will explicitly say which case we are considering.
Note that in the definition of a vector space over an arbitrary field we require the set of scalars to be a field, so we can always divide (without a remainder) by a non-zero scalar. Thus, it is possible to consider a vector space over the rationals, but not over the integers.
The space ℝ^n consists of all columns of size n,
v = (v_1, v_2, …, v_n)^T,
whose entries are real numbers. Addition and multiplication by a scalar are defined entrywise, i.e. if v has entries v_1, …, v_n and w has entries w_1, …, w_n, then v + w has entries v_1 + w_1, …, v_n + w_n, and αv has entries αv_1, …, αv_n.
The space ℂ^n also consists of columns of size n, only the entries now are complex numbers. Addition and multiplication by a scalar are defined exactly as in the case of ℝ^n; the only difference is that we can now multiply vectors by complex numbers, i.e. ℂ^n is a complex vector space.
Many results in this text are true for both ℝ^n and ℂ^n. In such cases we will use the notation F^n.
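A quick numerical illustration (an addition to the text, written here in Python with NumPy, a choice made only for the sake of the example): in ℝ^n and ℂ^n all operations are carried out entry by entry.

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])   # a vector in R^3
w = np.array([4.0, 5.0, 6.0])

print(v + w)        # entrywise sum: [5. 7. 9.]
print(2.5 * v)      # entrywise multiplication by a scalar: [2.5 5.  7.5]

z = np.array([1 + 2j, 3 - 1j])  # a vector in C^2: complex scalars are allowed
print(1j * z)                   # [-2.+1.j  1.+3.j]
```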
The space M_{m×n} (also denoted as M_{m,n}) of m × n matrices: the addition and multiplication by scalars are defined entrywise. If we allow only real entries (and so multiplication only by reals), then we have a real vector space; if we allow complex entries and multiplication by complex numbers, we then have a complex vector space.
Formally, we have to distinguish between the real and complex cases, i.e. to write something like M^ℝ_{m×n} or M^ℂ_{m×n}. However, in most situations there is no difference between the real and complex cases, and there is no need to specify which case we are considering. If there is a difference, we say explicitly which case we are considering.
As we mentioned above, the axioms of a vector space are just the familiar rules of algebraic manipulations with (real or complex) numbers, so if we put scalars (numbers) for the vectors, all axioms will be satisfied. Thus, the set of real numbers is a real vector space, and the set of complex numbers is a complex vector space.
More importantly, since in the above examples all vector operations (addition and multiplication by a scalar) are performed entrywise, for these examples the axioms of a vector space are automatically satisfied because they are satisfied for scalars (can you see why?). So, we do not have to check the axioms, we get the fact that the above examples are indeed vector spaces for free!
The same applies to the next example: the coefficients of the polynomials play the role of entries there.
The space P_n of polynomials of degree at most n consists of all polynomials of the form
p(t) = a_0 + a_1 t + a_2 t² + … + a_n t^n,
where t is the independent variable. Note that some, or even all, of the coefficients a_k can be 0.
In the case of real coefficients we have a real vector space; complex coefficients give us a complex vector space. Again, we will specify whether we are treating the real or the complex case only when it is essential; otherwise everything applies to both cases.
Question: What are zero vectors in each of the above examples?
An m × n matrix is a rectangular array with m rows and n columns. Elements of the array are called entries of the matrix.
It is often convenient to denote matrix entries by indexed letters: the first index denotes the number of the row where the entry is, and the second one is the number of the column. For example
A = ( a_{1,1} a_{1,2} … a_{1,n} ; a_{2,1} a_{2,2} … a_{2,n} ; … ; a_{m,1} a_{m,2} … a_{m,n} )    (1.1.1)
is a general way to write an m × n matrix.
Very often, for a matrix A, the entry in row number j and column number k is denoted by A_{j,k} or (A)_{j,k}, and sometimes, as in example (1.1.1) above, the same letter but in lowercase is used for the matrix entries.
Given a matrix A, its transpose (or transposed matrix) A^T is defined by transforming the rows of A into the columns. For example
( 1 2 3 ; 4 5 6 )^T = ( 1 4 ; 2 5 ; 3 6 ).
So, the columns of A^T are the rows of A and vice versa, the rows of A^T are the columns of A.
The formal definition is as follows: (A^T)_{j,k} = A_{k,j}, meaning that the entry of A^T in row number j and column number k equals the entry of A in row number k and column number j.
The transpose of a matrix has a very nice interpretation in terms of linear transformations, namely it gives the so-called adjoint transformation. We will study this in detail later, but for now transposition will be just a useful formal operation.
One of the first uses of the transpose is that we can write a column vector v ∈ F^n (recall that F is ℝ or ℂ) as v = (v_1, v_2, …, v_n)^T. If we put the column vertically, it will use significantly more space.
Let , , . Compute , , .
Which of the following sets (with natural addition and multiplication by a scalar) are vector spaces? Justify your answer.
The set of all continuous functions on the interval ;
The set of all non-negative functions on the interval ;
The set of all polynomials of degree exactly ;
The set of all symmetric matrices, i.e. the set of matrices such that .
True or false:
Every vector space contains a zero vector;
A vector space can have more than one zero vector;
An matrix has rows and columns;
If and are polynomials of degree , then is also a polynomial of degree ;
If and are polynomials of degree at most , then is also a polynomial of degree at most
Prove that a zero vector of a vector space is unique.
What matrix is the zero vector of the space ?
Prove that the additive inverse, defined in Axiom 4 of a vector space is unique.
Prove that for any vector .
Prove that for any vector its additive inverse is given by .
Let V be a vector space, and let v_1, v_2, …, v_p ∈ V be a collection of vectors. A linear combination of the vectors v_1, v_2, …, v_p is a sum of the form
α_1 v_1 + α_2 v_2 + … + α_p v_p = Σ_{k=1}^{p} α_k v_k.
A system of vectors v_1, v_2, …, v_n ∈ V is called a basis (for the vector space V) if any vector v ∈ V admits a unique representation as a linear combination
v = α_1 v_1 + α_2 v_2 + … + α_n v_n = Σ_{k=1}^{n} α_k v_k.
The coefficients α_1, α_2, …, α_n are called the coordinates of the vector v (in the basis, or with respect to the basis, v_1, v_2, …, v_n).
Another way to say that v_1, v_2, …, v_n is a basis is to say that for any possible choice of the right side v, the equation x_1 v_1 + x_2 v_2 + … + x_n v_n = v (with unknowns x_k) has a unique solution.
Before discussing any properties of bases (the plural of "basis" is "bases", the same as the plural of "base"), let us give a few examples, showing that such objects exist, and that it makes sense to study them.
In the first example the space is F^n, where F is either ℝ or ℂ. Consider the vectors
e_1 = (1, 0, 0, …, 0)^T, e_2 = (0, 1, 0, …, 0)^T, …, e_n = (0, 0, …, 0, 1)^T
(the vector e_k has all entries 0 except the entry number k, which is 1). The system of vectors e_1, e_2, …, e_n is a basis in F^n. Indeed, any vector
v = (x_1, x_2, …, x_n)^T ∈ F^n
can be represented as the linear combination
v = x_1 e_1 + x_2 e_2 + … + x_n e_n = Σ_{k=1}^{n} x_k e_k,
and this representation is unique. The system e_1, e_2, …, e_n is called the standard basis in F^n.
In this example the space is the space P_n of polynomials of degree at most n. Consider the vectors (polynomials) e_0, e_1, …, e_n ∈ P_n defined by
e_0 := 1, e_1 := t, e_2 := t², …, e_n := t^n.
Clearly, any polynomial p, p(t) = a_0 + a_1 t + a_2 t² + … + a_n t^n, admits a unique representation
p = a_0 e_0 + a_1 e_1 + … + a_n e_n.
So the system e_0, e_1, …, e_n is a basis in P_n. We will call it the standard basis in P_n.
If a vector space V has a basis v_1, v_2, …, v_n, then any vector v ∈ V is uniquely defined by its coefficients α_k in the decomposition v = Σ_{k=1}^{n} α_k v_k. So, if we stack the coefficients α_k in a column, we can operate with them as if they were column vectors, i.e. as with elements of F^n (again here F is either ℝ or ℂ, but everything also works for an abstract field F).
Namely, if v = Σ_{k=1}^{n} α_k v_k and w = Σ_{k=1}^{n} β_k v_k, then
v + w = Σ_{k=1}^{n} α_k v_k + Σ_{k=1}^{n} β_k v_k = Σ_{k=1}^{n} (α_k + β_k) v_k,
i.e. to get the column of coordinates of the sum one just needs to add the columns of coordinates of the summands. Similarly, to get the coordinates of αv we simply need to multiply the column of coordinates of v by α.
This is a very important remark, which will be used throughout the book. It allows us to translate any statement about the standard column space F^n into a corresponding statement about an arbitrary vector space V with a basis v_1, v_2, …, v_n.
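As a small illustration (added here; the basis below is an arbitrary choice, not an example from the text), coordinates with respect to a basis of ℝ² can be computed by solving a linear system, and coordinates of a sum are the sums of the coordinate columns:

```python
import numpy as np

# The columns of B form a basis of R^2 (a hypothetical example basis).
B = np.array([[1.0,  1.0],
              [1.0, -1.0]])

v = np.array([3.0, 1.0])
w = np.array([0.0, 2.0])

# Coordinates of v in this basis: solve B @ alpha = v.
alpha = np.linalg.solve(B, v)
beta = np.linalg.solve(B, w)

# Coordinates of v + w equal the entrywise sum of the coordinate columns.
gamma = np.linalg.solve(B, v + w)
assert np.allclose(gamma, alpha + beta)
print(alpha, beta, gamma)
```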
The definition of a basis says that any vector admits a unique representation as a linear combination. This statement is in fact two statements, namely that the representation exists and that it is unique. Let us analyze these two statements separately.
If we only consider the existence we get the following notion
A system of vectors v_1, v_2, …, v_p ∈ V is called a generating system (also a spanning system, or a complete system) in V if any vector v ∈ V admits a representation as a linear combination
v = α_1 v_1 + α_2 v_2 + … + α_p v_p = Σ_{k=1}^{p} α_k v_k.
The only difference from the definition of a basis is that we do not assume that the representation above is unique.
The words generating, spanning and complete here are synonyms. I personally prefer the term complete, because of my operator theory background. Generating and spanning are more often used in linear algebra textbooks.
Clearly, any basis is a generating (complete) system. Also, if we have a basis, say v_1, v_2, …, v_n, and we add to it several vectors, say v_{n+1}, …, v_p, then the new system will be a generating (complete) system. Indeed, we can represent any vector as a linear combination of the vectors v_1, v_2, …, v_n, and just ignore the new ones (by putting the corresponding coefficients equal to 0).
Now, let us turn our attention to the uniqueness. We do not want to worry about existence, so let us consider the zero vector , which always admits a representation as a linear combination.
A linear combination α_1 v_1 + α_2 v_2 + … + α_p v_p is called trivial if α_k = 0 for all k.
A trivial linear combination is always (for all choices of the vectors v_k) equal to 0, and that is probably the reason for the name.
A system of vectors v_1, v_2, …, v_p ∈ V is called linearly independent if only the trivial linear combination (with α_k = 0 for all k) of the vectors v_1, v_2, …, v_p equals 0.
In other words, the system v_1, v_2, …, v_p is linearly independent iff the equation x_1 v_1 + x_2 v_2 + … + x_p v_p = 0 (with unknowns x_k) has only the trivial solution x_1 = x_2 = … = x_p = 0.
If a system is not linearly independent, it is called linearly dependent. By negating the definition of linear independence, we get the following
A system of vectors v_1, v_2, …, v_p is called linearly dependent if 0 can be represented as a nontrivial linear combination, 0 = α_1 v_1 + α_2 v_2 + … + α_p v_p. Non-trivial here means that at least one of the coefficients is non-zero. This can be (and usually is) written as Σ_{k=1}^{p} |α_k| ≠ 0.
So, restating the definition, we can say that a system is linearly dependent if and only if there exist scalars α_1, α_2, …, α_p, Σ_{k=1}^{p} |α_k| ≠ 0, such that
α_1 v_1 + α_2 v_2 + … + α_p v_p = 0.
An alternative definition (in terms of equations) is that a system is linearly dependent iff the equation
x_1 v_1 + x_2 v_2 + … + x_p v_p = 0
(with unknowns x_k) has a non-trivial solution. Non-trivial, once again, means that at least one of the x_k is different from 0, and it can be written as Σ_{k=1}^{p} |x_k| ≠ 0.
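In F^n this equation is a homogeneous linear system, so linear (in)dependence can be tested numerically. A short sketch (added here for illustration; the vectors are arbitrary):

```python
import numpy as np

# Columns of V are the vectors v_1, v_2, v_3 in R^3.
V = np.array([[1.0, 0.0, 1.0],
              [2.0, 1.0, 3.0],
              [0.0, 1.0, 1.0]])

# The system is linearly independent iff x_1 v_1 + x_2 v_2 + x_3 v_3 = 0
# has only the trivial solution, i.e. iff the rank equals the number of columns.
rank = np.linalg.matrix_rank(V)
print("independent" if rank == V.shape[1] else "dependent")
# Here v_3 = v_1 + v_2, so the system is linearly dependent.
```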
The following proposition gives an alternative description of linearly dependent systems.
A system of vectors v_1, v_2, …, v_p is linearly dependent if and only if one of the vectors can be represented as a linear combination of the other vectors,
v_k = Σ_{j ≠ k} β_j v_j.    (1.2.1)
Suppose the system v_1, v_2, …, v_p is linearly dependent. Then there exist scalars α_k, Σ_{k=1}^{p} |α_k| ≠ 0, such that
α_1 v_1 + α_2 v_2 + … + α_p v_p = 0.
Let k be an index such that α_k ≠ 0. Then, moving all terms except α_k v_k to the right side we get
α_k v_k = −Σ_{j ≠ k} α_j v_j.
Dividing both sides by α_k we get (1.2.1) with β_j = −α_j / α_k.
Conversely, if (1.2.1) holds, then moving everything to one side gives the non-trivial linear combination v_k − Σ_{j ≠ k} β_j v_j = 0 (the coefficient of v_k is 1 ≠ 0), so the system is linearly dependent. ∎
Obviously, any basis is a linearly independent system. Indeed, if a system v_1, v_2, …, v_n is a basis, the zero vector 0 admits a unique representation
0 = α_1 v_1 + α_2 v_2 + … + α_n v_n.
Since the trivial linear combination always gives 0, the trivial linear combination must be the only one giving 0.
So, as we already discussed, if a system is a basis it is a complete (generating) and linearly independent system. The following proposition shows that the converse implication is also true.
A system of vectors is a basis if and only if it is linearly independent and complete (generating).
We already know that a basis is always linearly independent and complete, so in one direction the proposition is already proved.
Let us prove the other direction. Suppose a system v_1, v_2, …, v_n is linearly independent and complete. Take an arbitrary vector v ∈ V. Since the system is complete (generating), v can be represented as
v = α_1 v_1 + α_2 v_2 + … + α_n v_n.
We only need to show that this representation is unique.
Suppose v admits another representation
v = β_1 v_1 + β_2 v_2 + … + β_n v_n.
Then
Σ_{k=1}^{n} (α_k − β_k) v_k = v − v = 0.
Since the system is linearly independent, α_k − β_k = 0 for all k, i.e. α_k = β_k, and thus the representation is unique. ∎
In many textbooks a basis is defined as a complete and linearly independent system (by Proposition 1.2.7 this definition is equivalent to ours). Although this definition is more common than the one presented in this text, I prefer the latter. It emphasizes the main property of a basis, namely that any vector admits a unique representation as a linear combination.
Any (finite) generating system contains a basis.
Suppose v_1, v_2, …, v_p is a generating (complete) set. If it is linearly independent, it is a basis, and we are done.
Suppose it is not linearly independent, i.e. it is linearly dependent. Then there exists a vector v_k which can be represented as a linear combination of the vectors v_j, j ≠ k.
Since v_k can be represented as a linear combination of the vectors v_j, j ≠ k, any linear combination of the vectors v_1, v_2, …, v_p can be represented as a linear combination of the same vectors without v_k (i.e. of the vectors v_j, j ≠ k). So, if we delete the vector v_k, the new system will still be a complete one.
If the new system is linearly independent, we are done. If not, we repeat the procedure.
Repeating this procedure finitely many times we arrive at a linearly independent and complete system, because otherwise we would delete all the vectors and end up with the empty set, which is not a generating system.
So, any finite complete (generating) set contains a complete linearly independent subset, i.e. a basis. ∎
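In F^n the pruning procedure from this proof can be carried out numerically. The following sketch (an illustration added here, not part of the original text) deletes redundant vectors until the remaining system is linearly independent:

```python
import numpy as np

def prune_to_basis(vectors):
    """Remove vectors that are combinations of the others until the system is independent."""
    vecs = [np.asarray(v, dtype=float) for v in vectors]
    changed = True
    while changed:
        changed = False
        for k in range(len(vecs)):
            rest = vecs[:k] + vecs[k + 1:]
            if not rest:
                break
            # v_k depends on the rest iff removing it does not lower the rank.
            if np.linalg.matrix_rank(np.column_stack(rest)) == \
               np.linalg.matrix_rank(np.column_stack(vecs)):
                del vecs[k]
                changed = True
                break
    return vecs

# A generating set of R^2 with one redundant vector.
print(prune_to_basis([[1, 0], [0, 1], [1, 1]]))   # two independent vectors remain
```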
Find a basis in the space of matrices .
True or false:
Any set containing a zero vector is linearly dependent
A basis must contain ;
subsets of linearly dependent sets are linearly dependent;
subsets of linearly independent sets are linearly independent;
If then all scalars are zero;
Recall, that a matrix is called symmetric if . Write down a basis in the space of symmetric matrices (there are many possible answers). How many elements are in the basis?
Write down a basis for the space of
symmetric matrices;
symmetric matrices;
antisymmetric () matrices;
Let a system of vectors be linearly independent but not generating. Show that it is possible to find a vector such that the system is linearly independent. Hint: Take for any vector that cannot be represented as a linear combination and show that the system is linearly independent.
Is it possible that vectors are linearly dependent, but the vectors , and are linearly independent?
A transformation T from a set X to a set Y is a rule that for each argument (input) x ∈ X assigns a value (output) y = T(x) ∈ Y.
The set X is called the domain of T, and the set Y is called the target space or codomain of T.
We write T: X → Y to say that T is a transformation with the domain X and the target space Y.
The words “transformation”, “transform”, “mapping”, “map”, “operator”, “function” all denote the same object.
Let V, W be vector spaces (over the same field F). A transformation T: V → W is called linear if
T(u + v) = T(u) + T(v) for all u, v ∈ V;
T(αv) = αT(v) for all v ∈ V and for all scalars α.
Properties 1 and 2 together are equivalent to the following one:
T(αu + βv) = αT(u) + βT(v) for all u, v ∈ V and for all scalars α, β.
You have dealt with linear transformations before, maybe without even suspecting it, as the examples below show.
Differentiation: Let V = P_n (the set of polynomials of degree at most n), W = P_{n−1}, and let T: P_n → P_{n−1} be the differentiation operator,
T p := p′ for all p ∈ P_n.
Since (f + g)′ = f′ + g′ and (αf)′ = αf′, this is a linear transformation.
Rotation: in this example V = W = ℝ² (the usual coordinate plane), and the transformation T_γ takes a vector in ℝ² and rotates it counterclockwise by γ radians, see Fig. 1.1 below. Since T_γ rotates the plane as a whole, it rotates as a whole the parallelogram used to define the sum of two vectors (parallelogram law). Therefore property 1 of a linear transformation holds. It is also easy to see that property 2 is also true.
Reflection: in this example again V = W = ℝ², and the transformation T: ℝ² → ℝ² is the reflection in the first coordinate axis. It can also be shown geometrically that this transformation is linear, but we will use another way to show that.
Namely, it is easy to write a formula for T,
T( (x_1, x_2)^T ) = (x_1, −x_2)^T,
and from this formula it is easy to check that the transformation is linear.
Let us investigate linear transformations T: F → F. Any such transformation is given by the formula
T(x) = ax, where a = T(1).
Indeed,
T(x) = T(x · 1) = x T(1) = xa = ax.
So, any linear transformation of is just a multiplication by a constant.
It turns out that a linear transformation T: F^n → F^m can also be represented as a multiplication, not by a scalar, but by a matrix.
Let us see how. Let T: F^n → F^m be a linear transformation. What information do we need to compute T(v) for all vectors v ∈ F^n? My claim is that it is sufficient to know how T acts on the standard basis e_1, e_2, …, e_n of F^n. Namely, it is sufficient to know the n vectors in F^m (i.e. the vectors of size m),
a_1 = T(e_1), a_2 = T(e_2), …, a_n = T(e_n).
Indeed, let
v = (x_1, x_2, …, x_n)^T.
Then v = x_1 e_1 + x_2 e_2 + … + x_n e_n = Σ_{k=1}^{n} x_k e_k and
T(v) = T( Σ_{k=1}^{n} x_k e_k ) = Σ_{k=1}^{n} x_k T(e_k) = Σ_{k=1}^{n} x_k a_k.
So, if we join the vectors (columns) a_1, a_2, …, a_n together in a matrix A = [a_1, a_2, …, a_n] (a_k being the kth column of A, k = 1, 2, …, n), this matrix contains all the information about T.
Let us show how one should define the product of a matrix and a vector (column) to represent the transformation T as a product, T(v) = Av. Let
A = [a_1, a_2, …, a_n],   v = (x_1, x_2, …, x_n)^T.
Recall that the column number k of A is the vector a_k = T(e_k), i.e.
a_k = (a_{1,k}, a_{2,k}, …, a_{m,k})^T.
Then if we want T(v) = Av we get
Av = x_1 a_1 + x_2 a_2 + … + x_n a_n = Σ_{k=1}^{n} x_k a_k.
So, the matrix–vector multiplication should be performed by the following column by coordinate rule:
multiply each column of the matrix by the corresponding coordinate of the vector.
The “column by coordinate” rule is very well adapted for parallel computing. It will be also very important in different theoretical constructions later.
However, when doing computations manually, it is more convenient to compute the result one entry at a time. This can be expressed as the following row by column rule:
To get the entry number k of the result, one needs to multiply row number k of the matrix by the vector, that is, if Av = (y_1, y_2, …, y_m)^T, then
y_k = a_{k,1} x_1 + a_{k,2} x_2 + … + a_{k,n} x_n,   k = 1, 2, …, m;
here x_j and y_k are the coordinates of the vectors v and Av respectively, and a_{k,j} are the entries of the matrix A.
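Both rules compute the same product. The following sketch (added here for illustration) evaluates Av by the column by coordinate rule and by the row by column rule, and checks that they agree:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])          # a 2 x 3 matrix
v = np.array([1.0, 0.0, -1.0])

# Column by coordinate: sum of the columns weighted by the coordinates of v.
by_columns = sum(v[k] * A[:, k] for k in range(A.shape[1]))

# Row by column: each entry of the result is (row of A) . v.
by_rows = np.array([A[k, :] @ v for k in range(A.shape[0])])

assert np.allclose(by_columns, by_rows)
assert np.allclose(by_columns, A @ v)    # both agree with the built-in product
print(by_columns)
```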
As we discussed above, a linear transformation T (acting from F^n to F^m) is completely defined by its values on the standard basis in F^n.
The fact that we consider the standard basis is not essential; one can consider any basis, even any generating (spanning) set. Namely,
A linear transformation T: V → W is completely defined by its values on a generating set (in particular, by its values on a basis).
So, if v_1, v_2, …, v_p is a generating set (in particular, if it is a basis) in V, and T, T_1: V → W are linear transformations such that
T v_k = T_1 v_k,   k = 1, 2, …, p,
then T = T_1.
The proof of this statement is trivial and left as an exercise.
To get the matrix of a linear transformation T: F^n → F^m one needs to join the vectors a_k = T e_k (where e_1, e_2, …, e_n is the standard basis in F^n) into a matrix: the kth column of the matrix is a_k, k = 1, 2, …, n.
If the matrix A of the linear transformation T is known, then T v can be found by the matrix–vector multiplication, T v = Av. To perform matrix–vector multiplication one can use either the "column by coordinate" or the "row by column" rule.
The latter seems more appropriate for manual computations. The former is well adapted for parallel computers, and will be used in different theoretical constructions.
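For instance, the sketch below (added for illustration; the representation of polynomials by coefficient columns is an assumption made only for this sketch) builds the matrix of the differentiation operator from the example above by recording its action on the standard basis 1, t, t², t³:

```python
import numpy as np

n = 3  # work with polynomials of degree at most 3

def differentiate(coeffs):
    """Differentiate a_0 + a_1 t + ... + a_n t^n given by its coefficient column."""
    return np.array([k * coeffs[k] for k in range(1, len(coeffs))])

# Column k of the matrix is the image of the k-th standard basis vector e_k = t^k.
D = np.column_stack([differentiate(np.eye(n + 1)[:, k]) for k in range(n + 1)])
print(D)                                     # the 3 x 4 matrix of differentiation P_3 -> P_2
print(D @ np.array([1.0, 2.0, 3.0, 0.0]))    # (1 + 2t + 3t^2)' -> coefficients [2., 6., 0.]
```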
For a linear transformation , its matrix is usually denoted as . However, very often people do not distinguish between a linear transformation and its matrix, and use the same symbol for both. When it does not lead to confusion, we will also use the same symbol for a transformation and its matrix.
Since a linear transformation is essentially a multiplication, the notation T v is often used instead of T(v). We will also use this notation. Note that the usual order of algebraic operations applies, i.e. means , not .
In the matrix–vector multiplication Av the number of columns of the matrix A must coincide with the size of the vector v, i.e. a vector in F^n can only be multiplied by an m × n matrix.
It makes sense, since an m × n matrix defines a linear transformation F^n → F^m, so the vector v must belong to F^n.
The easiest way to remember this is: if, when performing the multiplication, you run out of some elements sooner than the others, then the multiplication is not defined. For example, if using the "row by column" rule you run out of row entries, but still have some unused entries in the vector, the multiplication is not defined. It is also not defined if you run out of the vector's entries, but still have unused entries in the row.
One does not have to restrict oneself to the case of F^n with the standard basis: everything described in this section works for transformations between arbitrary vector spaces, as long as there is a basis in the domain and in the target space. Of course, if one changes a basis, the matrix of the linear transformation will be different. This will be discussed later in Section 2.8 of Chapter 2.
Multiply:
;
;
;
.
Let a linear transformation in be the reflection in the line . Find its matrix.
For each linear transformation below find its matrix
defined by ;
defined by ;
, (find the matrix with respect to the standard basis );
, (again with respect to the standard basis ).
Find matrices representing the transformations of which:
project every vector onto - plane;
reflect every vector through - plane;
rotate the - plane through , leaving -axis alone.
Let be a linear transformation. If is the center of the straight interval , show that is the center of the interval . Hint: What does it mean that is the center of the interval ?
The set ℂ of complex numbers can be canonically identified with the space ℝ² by treating each z = x + iy ∈ ℂ as a column (x, y)^T.
Treating as a complex vector space, show that the multiplication by is a linear transformation in . What is its matrix?
Treating as the real vector space show that the multiplication by defines a linear transformation there. What is its matrix?
Define T(z) = z̄ (complex conjugation). Show that this transformation is not a linear transformation in the complex vector space ℂ, but if we treat ℂ as the real vector space then it is a linear transformation there (i.e. that T is a real linear but not a complex linear transformation).
Find the matrix of the real linear transformation .
Show that any linear transformation in (treated as a complex vector space) is a multiplication by .
What operations can we perform with linear transformations? We can always multiply a linear transformation by a scalar, i.e. if we have a linear transformation T: V → W and a scalar α we can define a new transformation αT by
(αT) v = α(T v)   for all v ∈ V.
It is easy to check that αT is also a linear transformation:
(αT)(u + v) = α(T(u + v)) = α(T u + T v) = (αT) u + (αT) v, and similarly (αT)(βv) = β((αT) v).
If T_1 and T_2 are linear transformations with the same domain and target space (T_1: V → W and T_2: V → W, or in short T_1, T_2: V → W), then we can add these transformations, i.e. define a new transformation T_1 + T_2: V → W by
(T_1 + T_2) v = T_1 v + T_2 v   for all v ∈ V.
It is easy to check that the transformation T_1 + T_2 is a linear one; one just needs to repeat the above reasoning for the linearity of αT.
So, if we fix vector spaces V and W and consider the collection of all linear transformations from V to W (let us denote it by L(V, W)), we can define 2 operations on L(V, W): multiplication by a scalar and addition. It can be easily shown that these operations satisfy the axioms of a vector space, defined in Section 1.1.
This should come as no surprise to the reader, since the axioms of a vector space essentially mean that operations on vectors follow the standard rules of algebra. And the operations on linear transformations are defined so as to satisfy these rules!
As an illustration, let us write down a formal proof of the first distributive law (axiom 7) of a vector space. We want to show that α(T_1 + T_2) = αT_1 + αT_2. For any v ∈ V
(α(T_1 + T_2)) v = α((T_1 + T_2) v)        by the definition of multiplication
                 = α(T_1 v + T_2 v)        by the definition of the sum
                 = α(T_1 v) + α(T_2 v) = (αT_1) v + (αT_2) v = (αT_1 + αT_2) v        by the definition of the sum.
So indeed α(T_1 + T_2) = αT_1 + αT_2.
Linear operations (addition and multiplication by a scalar) on linear transformations correspond to the respective operations on their matrices. Since we know that the set of matrices is a vector space, this immediately implies that is a vector space.
We presented the abstract proof above, first of all, because it works for general spaces, for example, for spaces without a basis, where we cannot work with coordinates. Secondly, reasoning similar to the abstract one presented here is used in many places, so the reader will benefit from understanding it.
And as the reader gains some mathematical sophistication, he/she will see that this abstract reasoning is indeed a very simple one, that can be performed almost automatically.
Knowing matrix–vector multiplication, one can easily guess what the natural way to define the product AB of two matrices is: let us multiply A by each column of B (matrix–vector multiplication) and join the resulting column-vectors into a matrix. Formally,
if b_1, b_2, …, b_r are the columns of B, then A b_1, A b_2, …, A b_r are the columns of the matrix AB.
Recalling the row by column rule for the matrix–vector multiplication, we get the following row by column rule for matrices:
the entry (AB)_{j,k} (the entry in row j and column k) of the product AB is obtained by multiplying row number j of A by column number k of B.
Formally it can be rewritten as
(AB)_{j,k} = Σ_l a_{j,l} b_{l,k},
if a_{j,l} and b_{l,k} are the entries of the matrices A and B respectively.
I intentionally did not speak about the sizes of the matrices A and B, but if we recall the row by column rule for the matrix–vector multiplication, we can see that in order for the multiplication to be defined, the size of a row of A should be equal to the size of a column of B.
In other words, the product AB is defined if and only if A is an m × n matrix and B is an n × r matrix.
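The sketch below (added for illustration) builds AB column by column, exactly as in the definition, and checks the result against the row by column rule:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])          # 3 x 2
B = np.array([[1.0, 0.0, -1.0],
              [2.0, 1.0,  0.0]])    # 2 x 3, so AB is defined and is 3 x 3

# Columns of AB are A times the columns of B.
AB_by_columns = np.column_stack([A @ B[:, k] for k in range(B.shape[1])])

# Row by column rule: (AB)_{jk} = sum_l a_{jl} b_{lk}.
AB_by_entries = np.array([[sum(A[j, l] * B[l, k] for l in range(A.shape[1]))
                           for k in range(B.shape[1])]
                          for j in range(A.shape[0])])

assert np.allclose(AB_by_columns, AB_by_entries)
assert np.allclose(AB_by_columns, A @ B)
print(AB_by_columns)
```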
One may ask here: why are we using such a complicated rule of multiplication? Why don't we just multiply matrices entrywise?
And the answer is, that the multiplication, as it is defined above, arises naturally from the composition of linear transformations.
Suppose we have two linear transformations, T_1: F^n → F^m and T_2: F^r → F^n. Define the composition T_1 T_2 of the transformations T_1, T_2 as
(T_1 T_2)(v) := T_1(T_2(v))   for all v ∈ F^r.
Note that T_2(v) ∈ F^n. Since T_1: F^n → F^m, the expression T_1(T_2(v)) is well defined and the result belongs to F^m. So, T_1 T_2: F^r → F^m.
It is easy to show that T_1 T_2 is a linear transformation (exercise), so it is defined by an m × r matrix. How can one find this matrix, knowing the matrices of T_1 and T_2?
We will usually identify a linear transformation and its matrix, but in the next few paragraphs we will distinguish them
Let A be the matrix of T_1 and B be the matrix of T_2. As we discussed in the previous section, the columns of the matrix of T_1 T_2 are the vectors (T_1 T_2) e_k, where e_1, e_2, …, e_r is the standard basis in F^r. For these vectors we have
(T_1 T_2) e_k = T_1 (T_2 e_k) = T_1 (B e_k) = A (B e_k) = A b_k
(the operators T_1 and T_2 are simply the multiplication by A and B respectively).
So, the columns of the matrix of T_1 T_2 are A b_1, A b_2, …, A b_r, and that is exactly how the matrix AB was defined!
Let us return to identifying again a linear transformation with its matrix. Since the matrix multiplication agrees with the composition, we can (and will) write T_1 T_2 instead of T_1 ∘ T_2, and T_1 T_2 v instead of T_1(T_2(v)).
Note that in the composition T_1 T_2 the transformation T_2 is applied first! The way to remember this is to see that in T_1 T_2 v the transformation T_2 meets v first.
There is another way of checking the dimensions of matrices in a product, different from the row by column rule: for a composition T_1 T_2 to be defined it is necessary that T_2 v belongs to the domain of T_1. If T_1 acts from some space, say F^n, to F^m, then T_2 must act from some space, say F^r, to F^n. So, in order for T_1 T_2 to be defined the matrices of T_1 and T_2 should be of sizes m × n and n × r respectively, the same condition as obtained from the row by column rule.
Let be the reflection in the line . It is a linear transformation, so let us find its matrix. To find the matrix, we need to compute and . However, the direct computation of and involves significantly more trigonometry than a sane person is willing to remember.
An easier way to find the matrix of is to represent it as a composition of simple linear transformations. Namely, let be the angle between the axis and the line , and let be the reflection in the -axis. Then to get the reflection we can first rotate the plane by the angle , moving the line to the -axis, then reflect everything in the axis, and then rotate the plane by , taking everything back. Formally it can be written as
(note the order of terms!), where is the rotation by . The matrix of is easy to compute,
the rotation matrices are known
To compute and take a vector in the line , say a vector . Then
and similarly
Gathering everything together we get
It remains only to perform matrix multiplication here to get the final result. ∎
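As a concrete numerical illustration (added here; the angle below is an arbitrary choice, not the one from the example), one can carry out this composition and compare it with the known closed form of a reflection matrix:

```python
import numpy as np

def rotation(gamma):
    """Counterclockwise rotation by gamma radians."""
    return np.array([[np.cos(gamma), -np.sin(gamma)],
                     [np.sin(gamma),  np.cos(gamma)]])

reflect_x_axis = np.array([[1.0,  0.0],
                           [0.0, -1.0]])

gamma = np.pi / 6   # the line makes a 30 degree angle with the x-axis

# Rotate the line onto the x-axis, reflect, rotate back (note the order!).
T = rotation(gamma) @ reflect_x_axis @ rotation(-gamma)

# Known closed form of the reflection matrix, for comparison.
expected = np.array([[np.cos(2 * gamma),  np.sin(2 * gamma)],
                     [np.sin(2 * gamma), -np.cos(2 * gamma)]])
assert np.allclose(T, expected)
print(T)
```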
Matrix multiplication enjoys a lot of properties, familiar to us from high school algebra:
Associativity: A(BC) = (AB)C, provided that either the left or the right side is well defined; we therefore can (and will) simply write ABC in this case.
Distributivity: A(B + C) = AB + AC, (A + B)C = AC + BC, provided either the left or the right side of each equation is well defined.
One can take scalar multiples out: A(αB) = (αA)B = α(AB).
These properties are easy to prove. One should prove the corresponding properties for linear transformations, and they almost trivially follow from the definitions. The properties of linear transformations then imply the properties for the matrix multiplication.
The new twist here is that the commutativity fails:
matrix multiplication is non-commutative, i.e. generally AB ≠ BA for matrices A and B.
One can easily see that it would be unreasonable to expect commutativity of matrix multiplication. Indeed, let A and B be matrices of sizes m × n and n × r respectively. Then the product AB is well defined, but if m ≠ r, the product BA is not defined.
Even when both products are well defined, for example, when A and B are square n × n matrices, the multiplication is still generally non-commutative. If we just pick the matrices A and B at random, the chances are that AB ≠ BA: we have to be very lucky to get AB = BA.
Given a matrix A, its transpose (or transposed matrix) A^T is defined by transforming the rows of A into the columns. For example
( 1 2 3 ; 4 5 6 )^T = ( 1 4 ; 2 5 ; 3 6 ).
So, the columns of are the rows of and vice versa, the rows of are the columns of .
The formal definition is as follows: (A^T)_{j,k} = A_{k,j}, meaning that the entry of A^T in row number j and column number k equals the entry of A in row number k and column number j.
The transpose of a matrix has a very nice interpretation in terms of linear transformations, namely it gives the so-called adjoint transformation. We will study this in detail later, but for now transposition will be just a useful formal operation.
One of the first uses of the transpose is that we can write a column vector v ∈ F^n as v = (v_1, v_2, …, v_n)^T. If we put the column vertically, it will use significantly more space.
A simple analysis of the row by column rule shows that
(AB)^T = B^T A^T,
i.e. when you take the transpose of the product, you change the order of the terms.
For a square (n × n) matrix A = (a_{j,k}) its trace (denoted by trace A) is the sum of the diagonal entries
trace A = Σ_{k=1}^{n} a_{k,k}.
Let A and B be matrices of size m × n and n × m respectively (so both products AB and BA are well defined). Then
trace(AB) = trace(BA).
We leave the proof of this theorem as an exercise, see Problem 1.5.6 below. There are essentially two ways of proving this theorem. One is to compute the diagonal entries of AB and of BA and compare their sums. This method requires some proficiency in manipulating sums in Σ notation.
If you are not comfortable with algebraic manipulations, there is another way. We can consider two linear transformations, T_1 and T_2, acting from M_{n×m} to the scalars F, defined by
T_1(X) = trace(AX),   T_2(X) = trace(XA).
To prove the theorem it is sufficient to show that T_1 = T_2; the equality for X = B then gives the theorem.
Since a linear transformation is completely defined by its values on a generating system, we need just to check the equality on some simple matrices, for example on the matrices X_{j,k}, which have all entries 0 except the entry 1 in the intersection of the jth column and the kth row.
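A quick numerical sanity check of the theorem (added here; it is of course not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))   # 2 x 3
B = rng.standard_normal((3, 2))   # 3 x 2, so AB is 2 x 2 and BA is 3 x 3

# The two products have different sizes, yet their traces coincide.
print(np.trace(A @ B), np.trace(B @ A))
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```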
Let
Mark all the products that are defined, and give the dimensions of the result: , , , , , , , , .
Compute , , , , .
Let be the matrix of rotation by in . Check by matrix multiplication that
Multiply two rotation matrices and (it is a rare case when the multiplication is commutative, i.e. , so the order is not essential). Deduce formulas for and from here.
Find the matrix of the orthogonal projection in onto the line . Hint: What is the matrix of the projection onto the coordinate axis ?
Find linear transformations such that but .
Prove Theorem 1.5.1, i.e. prove that .
Construct a non-zero matrix such that .
Find the matrix of the reflection through the line . Perform all the multiplications.
Among all linear transformations, there is a special one, the identity transformation (operator) I, defined by I v = v for all v.
To be precise, there are infinitely many identity transformations: for any vector space V, there is the identity transformation I = I_V: V → V, I_V v = v for all v ∈ V. However, when it does not lead to confusion we will use the same symbol I for all identity operators (transformations). We will use the notation I_V only when we want to emphasize in what space the transformation is acting.
Clearly, if I is the identity transformation in F^n, its matrix is the n × n matrix
I = ( 1 0 … 0 ; 0 1 … 0 ; … ; 0 0 … 1 )
(1 on the main diagonal and 0 everywhere else). When we want to emphasize the size of the matrix, we use the notation I_n; otherwise we just use I.
Often, the symbol E is used in linear algebra textbooks for the identity matrix. I prefer I, since it is used in operator theory.
Clearly, for an arbitrary linear transformation A, the equalities
AI = A,   IA = A
hold (whenever the product is defined).
Let A: V → W be a linear transformation. We say that the transformation A is left invertible if there exists a linear transformation B: W → V such that
BA = I   (here I = I_V).
The transformation A is called right invertible if there exists a linear transformation C: W → V such that
AC = I   (here I = I_W).
The transformations B and C are called left and right inverses of A. Note that we did not assume the uniqueness of B or C here, and generally left and right inverses are not unique.
A linear transformation is called invertible if it is both right and left invertible.
If a linear transformation A is invertible, then its left and right inverses B and C are unique and coincide.
A transformation A: V → W is invertible if and only if there exists a unique linear transformation (denoted A^{-1}), such that
A^{-1} A = I,   A A^{-1} = I.    (1.6.1)
The transformation A^{-1} is called the inverse of A.
Very often the existence of a transformation satisfying (1.6.1) is used as the definition of an invertible transformation.
Let BA = I and AC = I. Then
BAC = B(AC) = BI = B.
On the other hand
BAC = (BA)C = IC = C,
and therefore B = C.
Suppose for some transformation B_1 we have B_1 A = I. Repeating the above reasoning with B_1 instead of B we get B_1 = C. Therefore the left inverse is unique. The uniqueness of the right inverse is proved similarly. ∎
A matrix is called invertible (resp. left invertible, right invertible) if the corresponding linear transformation is invertible (resp. left invertible, right invertible).
Theorem 1.6.1 asserts that a matrix A is invertible if there exists a unique matrix A^{-1} such that A^{-1}A = I, AA^{-1} = I. The matrix A^{-1} is called (surprise) the inverse of A.
The identity transformation (matrix) I is invertible, I^{-1} = I;
The rotation R_γ,
R_γ = ( cos γ  −sin γ ; sin γ  cos γ ),
is invertible, and the inverse is given by (R_γ)^{-1} = R_{−γ}. This equality is clear from the geometric description of R_γ, and it can also be checked by matrix multiplication;
The column is left invertible but not right invertible. One of the possible left inverses is the row .
To show that this matrix is not right invertible, we just notice that there is more than one left inverse. Exercise: describe all left inverses of this matrix.
The row is right invertible, but not left invertible. The column is a possible right inverse.
An invertible matrix must be square (n × n). Moreover, if a square matrix A has either a left or a right inverse, it is invertible. So, it is sufficient to check only one of the identities AA^{-1} = I, A^{-1}A = I.
This fact will be proved later. Until we prove this fact, we will not use it. I presented it here only to prevent the reader from trying wrong directions.
If linear transformations A and B are invertible (and such that the product AB is defined), then the product AB is invertible and
(AB)^{-1} = B^{-1} A^{-1}
note the change of the order!
Direct computation shows:
(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I,
and similarly
(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I.
∎
The invertibility of the product AB does not imply the invertibility of the factors A and B (can you think of an example?). However, if one of the factors (either A or B) and the product AB are invertible, then the second factor is also invertible.
We leave the proof of this fact as an exercise.
If a matrix A is invertible, then A^T is also invertible and
(A^T)^{-1} = (A^{-1})^T.
Using (AB)^T = B^T A^T we get
(A^{-1})^T A^T = (A A^{-1})^T = I^T = I,
and similarly
A^T (A^{-1})^T = (A^{-1} A)^T = I^T = I.
∎
And finally, if A is invertible, then A^{-1} is also invertible, (A^{-1})^{-1} = A.
So, let us summarize the main properties of the inverse:
If A is invertible, then A^{-1} is also invertible, and (A^{-1})^{-1} = A;
If A and B are invertible and the product AB is defined, then AB is invertible and (AB)^{-1} = B^{-1}A^{-1};
If A is invertible, then A^T is also invertible and (A^T)^{-1} = (A^{-1})^T.
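All three properties are easy to verify numerically; the sketch below (added for illustration) checks them for randomly chosen matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))   # random square matrices are invertible with probability 1

A_inv = np.linalg.inv(A)
B_inv = np.linalg.inv(B)

assert np.allclose(np.linalg.inv(A_inv), A)               # (A^-1)^-1 = A
assert np.allclose(np.linalg.inv(A @ B), B_inv @ A_inv)   # (AB)^-1 = B^-1 A^-1
assert np.allclose(np.linalg.inv(A.T), A_inv.T)           # (A^T)^-1 = (A^-1)^T
print("all three identities hold (up to rounding)")
```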
An invertible linear transformation is called an isomorphism. We did not introduce anything new here, it is just another name for the object we already studied.
Two vector spaces V and W are called isomorphic (denoted V ≅ W) if there is an isomorphism A: V → W.
Isomorphic spaces can be considered as different representations of the same space, meaning that all properties and constructions involving vector space operations are preserved under isomorphism.
The theorem below illustrates this statement.
Let A: V → W be an isomorphism, and let v_1, v_2, …, v_n be a basis in V. Then the system Av_1, Av_2, …, Av_n is a basis in W.
We leave the proof of the theorem as an exercise.
In the above theorem one can replace “basis” by “linearly independent”, or “generating”, or “linearly dependent”—all these properties are preserved under isomorphisms.
If A is an isomorphism, then so is A^{-1}. Therefore in the above theorem we can state that v_1, v_2, …, v_n is a basis if and only if Av_1, Av_2, …, Av_n is a basis.
The converse to Theorem 1.6.6 is also true.
Let A: V → W be a linear map, and let v_1, v_2, …, v_n and w_1, w_2, …, w_n be bases in V and W respectively. If Av_k = w_k, k = 1, 2, …, n, then A is an isomorphism.
Define the inverse transformation A^{-1} by A^{-1} w_k = v_k, k = 1, 2, …, n (as we know, a linear transformation is defined by its values on a basis). ∎
Let A: P_n → F^{n+1} (P_n is the set of polynomials p(t) = a_0 + a_1 t + … + a_n t^n of degree at most n) be defined by
A p = (a_0, a_1, …, a_n)^T.
By Theorem 1.6.7, A is an isomorphism, so P_n ≅ F^{n+1}.
Let V be a vector space (over F) with a basis v_1, v_2, …, v_n. Define the transformation A: F^n → V by
A e_k = v_k,   k = 1, 2, …, n,
where e_1, e_2, …, e_n is the standard basis in F^n. Again by Theorem 1.6.7, A is an isomorphism, so V ≅ F^n.
The space M_{m×n} of m × n matrices with entries in F is isomorphic to F^{mn};
More generally,
Any real vector space with a basis is isomorphic to ℝ^n (for some n). Similarly, any complex vector space with a basis is isomorphic to ℂ^n.
Let A: V → W be a linear transformation. Then A is invertible if and only if for any right side b ∈ W the equation
Ax = b
has a unique solution x ∈ V.
Suppose A is invertible. Then x = A^{-1}b solves the equation Ax = b. To show that the solution is unique, suppose that for some other vector x_1
Ax_1 = b.
Multiplying this identity by A^{-1} from the left we get
A^{-1}Ax_1 = A^{-1}b,
and therefore x_1 = A^{-1}b = x. Note that both identities, AA^{-1} = I and A^{-1}A = I, were used here.
Let us now suppose that the equation Ax = b has a unique solution x for any b ∈ W. Let us use the symbol y instead of b. We know that given y ∈ W the equation
Ax = y
has a unique solution x ∈ V. Let us call this solution B(y).
Note that B(y) is defined for all y ∈ W, so we defined a transformation B: W → V.
Let us check that B is a linear transformation. We need to show that B(αy_1 + βy_2) = αB(y_1) + βB(y_2). Let x_k := B(y_k), k = 1, 2, i.e. Ax_1 = y_1, Ax_2 = y_2. Then
A(αx_1 + βx_2) = αAx_1 + βAx_2 = αy_1 + βy_2,
which means
B(αy_1 + βy_2) = αx_1 + βx_2 = αB(y_1) + βB(y_2).
And finally, let us show that B is indeed the inverse of A. Take x ∈ V and let y = Ax, so by the definition of B we have By = x. Then for all x ∈ V
BAx = By = x,
so BA = I. Similarly, for arbitrary y ∈ W let x = By, so Ax = y. Then for all y ∈ W
ABy = Ax = y,
so AB = I. ∎
An n × n matrix is invertible if and only if its columns form a basis in F^n.
Prove, that if is an isomorphism (i.e. an invertible linear transformation) and is a basis in , then is a basis in .
Find all right inverses to the matrix (row) . Conclude from here that the row is not left invertible.
Find all left inverses of the column
Is the column right invertible? Justify
Find two matrices A and B such that AB is invertible, but A and B are not. Hint: square matrices A and B would not work. Remark: It is easy to construct such A and B in the case when AB is a 1 × 1 matrix (a scalar). But can you get a 2 × 2 matrix AB? 3 × 3? n × n?
Suppose the product AB is invertible. Show that A is right invertible and B is left invertible. Hint: you can just write formulas for the right and left inverses.
Let and be invertible (assuming that the product is well defined). Prove that is invertible.
Let be matrix. Prove that if then is not invertible
Suppose for some non-zero matrix . Can be invertible? Justify.
Write matrices of the linear transformations and in , defined as follows: interchanges the coordinates and of the vector , and just adds to the coordinate times the coordinate , and does not change other coordinates, i.e.
here is some fixed number.
Show that and are invertible transformations, and write the matrices of the inverses. Hint: it may be simpler, if you first describe the inverse transformation, and then find its matrix, rather than trying to guess (or compute) the inverses of the matrices , .
Find the matrix of the rotation in ℝ³ through the angle around the vector . We assume that the rotation is counterclockwise if we sit at the tip of the vector and look at the origin.
You can present the answer as a product of several matrices: you don’t have to perform the multiplication.
Give examples of matrices (say ) such that:
A + B is not invertible although both A and B are invertible;
A + B is invertible although both A and B are not invertible;
All of A, B and A + B are invertible.
Let A be an invertible symmetric (A^T = A) matrix. Is the inverse of A symmetric? Justify.
A subspace of a vector space V is a non-empty subset V_0 ⊂ V which is closed under vector addition and multiplication by scalars, i.e.
If v ∈ V_0 then αv ∈ V_0 for all scalars α;
For any u, v ∈ V_0 the sum u + v ∈ V_0.
Again, the conditions 1 and 2 can be replaced by the following one: αu + βv ∈ V_0 for all u, v ∈ V_0 and for all scalars α, β.
Note that a subspace V_0, with the operations (vector addition and multiplication by scalars) inherited from V, is a vector space. Indeed, since V_0 is non-empty, it contains at least one vector v, and since 0 = 0v, the above condition 1 implies that the zero vector 0 is in V_0. Also, for any v ∈ V_0 its additive inverse is given by −v = (−1)v, so again by property 1, −v ∈ V_0. The rest of the axioms of a vector space are satisfied because all operations are inherited from the vector space V. The only thing that could possibly go wrong is that the result of some operation does not belong to V_0. But the definition of a subspace prohibits this!
Now let us consider some examples:
Trivial subspaces of a space V, namely V itself and {0} (the subspace consisting only of the zero vector). Note that the empty set is not a vector space, since it does not contain a zero vector, so it is not a subspace.
With each linear transformation A: V → W we can associate the following two subspaces:
The null space, or kernel, of A, which is denoted as Null A or Ker A and consists of all vectors v ∈ V such that Av = 0;
The range Ran A is defined as the set of all vectors w ∈ W which can be represented as w = Av for some v ∈ V.
If A is a matrix, i.e. A: F^n → F^m, then recalling the column by coordinate rule of the matrix–vector multiplication, we can see that any vector w ∈ Ran A can be represented as a linear combination of the columns of the matrix A. That explains why the term column space (and the notation Col A) is often used for the range of the matrix. So, for a matrix A, the notation Col A is often used instead of Ran A.
And now the last example.
Given a system of vectors v_1, v_2, …, v_p ∈ V, its linear span (sometimes called simply span) span{v_1, v_2, …, v_p} is the collection of all vectors v ∈ V that can be represented as a linear combination α_1 v_1 + α_2 v_2 + … + α_p v_p of the vectors v_1, v_2, …, v_p. The notation is also used instead of span{v_1, v_2, …, v_p}.
It is easy to check that in all of these examples we indeed have subspaces. We leave this as an exercise for the reader. Some of the statements will be proved later in the text.
Let V_1 and V_2 be subspaces of a vector space V. Prove that V_1 ∩ V_2 is a subspace of V.
Let V be a vector space. For subsets X, Y ⊂ V the sum X + Y is the collection of all vectors v which can be represented as v = x + y, x ∈ X, y ∈ Y. Show that if X and Y are subspaces of V, then X + Y is also a subspace.
Let V_0 be a subspace of a vector space V, and let v ∈ V_0, w ∉ V_0. Prove that v + w ∉ V_0.
Let V_1 and V_2 be subspaces of a vector space V. Using the previous exercise, show that V_1 ∪ V_2 is a subspace if and only if V_1 ⊂ V_2 or V_2 ⊂ V_1.
What is the smallest subspace of the space of matrices which contains all upper triangular matrices (a_{j,k} = 0 for all j > k), and all symmetric matrices (A = A^T)? What is the largest subspace contained in both of those subspaces?
In this section we give some ideas of how linear algebra is used in computer graphics. We will not go into the details, but just explain some ideas. In particular, we explain why manipulations with 3-dimensional images are reduced to multiplications of 4 × 4 matrices.
The xy-plane (more precisely, a rectangle there) is a good model of a computer monitor. Any object on a monitor is represented as a collection of pixels; each pixel is assigned a specific color. The position of each pixel is determined by its column and row, which play the role of the x and y coordinates on the plane. So a rectangle on a plane with xy coordinates is a good model for a computer screen, and a graphical object is just a collection of points.
There are two types of graphical objects: bitmap objects, where every pixel of an object is described, and vector objects, where we describe only critical points, and the graphics engine connects them to reconstruct the object. A (digital) photo is a good example of a bitmap object: every pixel of it is described. Bitmap objects can contain a lot of points, so manipulations with bitmaps require a lot of computing power. Anybody who has edited digital photos in a bitmap manipulation program, like Adobe Photoshop, knows that one needs quite a powerful computer, and even with modern and powerful computers manipulations can take some time.
That is the reason that most of the objects, appearing on a computer screen are vector ones: the computer only needs to memorize critical points. For example, to describe a polygon, one needs only to give the coordinates of its vertices, and which vertex is connected with which. Of course, not all objects on a computer screen can be represented as polygons, some, like letters, have curved smooth boundaries. But there are standard methods allowing one to draw smooth curves through a collection of points, for example Bezier splines, used in PostScript and Adobe PDF (and in many other formats).
Anyhow, this is the subject of a completely different book, and we will not discuss it here. For us a graphical object will be a collection of points (either wireframe model, or bitmap) and we would like to show how one can perform some manipulations with such objects.
The simplest transformation is a translation (shift), where each point (vector) v is translated by a, i.e. the vector v is replaced by v + a (the notation v ↦ v + a is used for this). Vector addition is very well adapted to computers, so the translation is easy to implement.
Note that the translation is not a linear transformation (if a ≠ 0): while it preserves straight lines, it does not preserve 0.
All other transformations used in computer graphics are linear. The first one that comes to mind is rotation. The rotation by γ around the origin is given by multiplication by the rotation matrix R_γ we discussed above,
R_γ = ( cos γ  −sin γ ; sin γ  cos γ ).
If we want to rotate around a point p, we first need to translate the picture by −p, moving the point p to 0, then rotate around 0 (multiply by R_γ), and then translate everything back by p.
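A small sketch (added for illustration; the point p and the angle below are arbitrary choices) of rotating about a point by composing translate, rotate, translate back:

```python
import numpy as np

def rotation(gamma):
    return np.array([[np.cos(gamma), -np.sin(gamma)],
                     [np.sin(gamma),  np.cos(gamma)]])

def rotate_about(v, p, gamma):
    """Rotate the point v by gamma radians around the point p."""
    return rotation(gamma) @ (v - p) + p   # translate by -p, rotate, translate back

p = np.array([1.0, 1.0])
v = np.array([2.0, 1.0])
print(rotate_about(v, p, np.pi / 2))       # quarter turn about p -> [1., 2.]
```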
Another very useful transformation is scaling, given by a matrix
( a 0 ; 0 b ),   a, b ≥ 0.
If a = b it is a uniform scaling which enlarges (reduces) an object, preserving its shape. If a ≠ b then the x and y coordinates scale differently; the object becomes "taller" or "wider".
Another often used transformation is reflection: for example the matrix
( 1 0 ; 0 −1 )
defines the reflection through the x-axis.
We will show later in the book that any linear transformation in ℝ² can be represented as a composition of scalings, rotations and reflections. However, it is sometimes convenient to consider some different transformations, like the shear transformation, given by the matrix
( 1  tan φ ; 0  1 ).
This transformation makes all objects slanted: horizontal lines remain horizontal, but vertical lines go to lines slanted at the angle φ to the vertical ones.
Three-dimensional graphics is more complicated. First we need to be able to manipulate 3-dimensional objects, and then we need to represent them on a 2-dimensional plane (a monitor).
The manipulations with 3-dimensional objects are pretty straightforward; we have the same basic transformations: translation, reflection through a plane, scaling, rotation. Matrices of these transformations are very similar to the matrices of their 2-dimensional counterparts. For example the matrices
( 1 0 0 ; 0 1 0 ; 0 0 −1 ),   ( a 0 0 ; 0 b 0 ; 0 0 c ),   ( cos γ  −sin γ  0 ; sin γ  cos γ  0 ; 0 0 1 )
represent respectively the reflection through the xy-plane, scaling, and the rotation around the z-axis.
Note that the above rotation is essentially a 2-dimensional transformation: it does not change the z coordinate. Similarly, one can write matrices for the other 2 elementary rotations, around the x-axis and around the y-axis. It will be shown later that a rotation around an arbitrary axis can be represented as a composition of elementary rotations.
So, we know how to manipulate 3-dimensional objects. Let us now discuss how to represent such objects on a 2-dimensional plane. The simplest way is to project them onto a plane, say onto the xy-plane. To perform such a projection one just needs to replace the z coordinate by 0; the matrix of this projection is
( 1 0 0 ; 0 1 0 ; 0 0 0 ).
Such a method is often used in technical illustrations. Rotating an object and then projecting it is equivalent to looking at it from different points. However, this method does not give a very realistic picture, because it does not take into account the perspective, the fact that objects that are further away look smaller.
To get a more realistic picture one needs to use the so-called perspective projection. To define a perspective projection one needs to pick a point (the center of projection or the focal point) and a plane to project onto. Then each point in ℝ³ is projected onto a point on the plane such that the point, its image and the center of the projection lie on the same line, see Fig. 1.2.
This is exactly how a camera works, and it is a reasonable first approximation of how our eyes work.
Let us get a formula for the projection. Assume that the focal point is (0, 0, d)^T and that we are projecting onto the xy-plane, see Fig. 1.3 a). Consider a point v = (x, y, z)^T, and let v* = (x*, y*, 0)^T be its projection. Analyzing similar triangles, see Fig. 1.3 b), we get that
x*/d = x/(d − z),
so
x* = xd/(d − z) = x/(1 − z/d),
and similarly
y* = y/(1 − z/d).
Note that this formula also works if z < 0 and if z > d: you can draw the corresponding similar triangles to check it.
Thus the perspective projection maps a point (x, y, z)^T to the point (x/(1 − z/d), y/(1 − z/d), 0)^T.
This transformation is definitely not linear (because of the z in the denominator). However, it is still possible to represent it as a linear transformation. To do this let us introduce the so-called homogeneous coordinates.
In homogeneous coordinates, every point in ℝ³ is represented by 4 coordinates, the last (4th) coordinate playing the role of a scaling coefficient. Thus, to get the usual 3-dimensional coordinates of a vector from its homogeneous coordinates one needs to divide all entries by the last coordinate and take the first 3 coordinates (if the last coordinate is 0 this recipe does not work, so we assume that this case corresponds to a point at infinity). Note that if we multiply the homogeneous coordinates of a point by a non-zero scalar, we do not change the point; in other words, in homogeneous coordinates a point in ℝ³ is represented by a line through 0 in ℝ⁴.
Thus in homogeneous coordinates the vector v* can be represented as (x, y, 0, 1 − z/d)^T, so in homogeneous coordinates the perspective projection is a linear transformation:
( x ; y ; 0 ; 1 − z/d ) = ( 1 0 0 0 ; 0 1 0 0 ; 0 0 0 0 ; 0 0 −1/d 1 ) ( x ; y ; z ; 1 ).
Note that in homogeneous coordinates the translation by a vector (a_1, a_2, a_3)^T is also a linear transformation:
( x + a_1 ; y + a_2 ; z + a_3 ; 1 ) = ( 1 0 0 a_1 ; 0 1 0 a_2 ; 0 0 1 a_3 ; 0 0 0 1 ) ( x ; y ; z ; 1 ).
But what happens if the center of projection is not the point (0, 0, d)^T but some arbitrary point (a_1, a_2, d)^T? Then we first need to apply the translation by −(a_1, a_2, 0)^T to move the center to (0, 0, d)^T while preserving the xy-plane, apply the projection, and then move everything back, translating it by (a_1, a_2, 0)^T. Similarly, if the plane we project onto is not the xy-plane, we move it to the xy-plane by using rotations and translations, and so on.
All these operations are just multiplications by matrices. That explains why modern graphics cards have matrix operations embedded in the processor.
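The following sketch (added for illustration; it assumes the focal point (0, 0, d) on the z-axis used above) carries out a translation and the perspective projection as multiplications by 4 × 4 matrices in homogeneous coordinates:

```python
import numpy as np

def translation(a):
    """4x4 homogeneous matrix translating by the 3-vector a."""
    T = np.eye(4)
    T[:3, 3] = a
    return T

def perspective(d):
    """4x4 homogeneous matrix of the projection onto the xy-plane with focal point (0, 0, d)."""
    P = np.eye(4)
    P[2, 2] = 0.0
    P[3, 2] = -1.0 / d
    return P

def to_cartesian(h):
    """Divide by the last homogeneous coordinate and drop it."""
    return h[:3] / h[3]

point = np.array([2.0, 4.0, 1.0, 1.0])       # the point (2, 4, 1) in homogeneous coordinates
moved = translation([1.0, 0.0, 0.0]) @ point # shift by (1, 0, 0)
projected = perspective(d=2.0) @ moved       # project with focal point (0, 0, 2)
print(to_cartesian(projected))               # -> [6., 8., 0.]
```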
Of course, here we only touched the mathematics behind -dimensional graphics, there is much more. For example, how to determine which parts of the object are visible and which are hidden, how to make realistic lighting, shades, etc.
What vector in has homogeneous coordinates ?
Show that a rotation through can be represented as a composition of two shear-and-scale transformations
In what order should the transformations be taken?
Multiplication of a 2-vector by an arbitrary matrix usually requires multiplications.
Suppose a matrix contains coordinates of points in . How many multiplications are required to transform these points using 2 arbitrary matrices and . Compare 2 possibilities, and .
Write matrix performing perspective projection to - plane with center .
A transformation in is a rotation about the line in the - plane through an angle . Write a matrix corresponding to this transformation.
You can leave the result as a product of matrices.