Problem # A.1:
(a) Check that the vectors \(\mathbf{v}=(1,2,1)\)
and \(\mathbf{w}=(8,4,-1)\) are solutions to the linear equation
\[
2x_1 - 3x_2 + 4x_3 = 0. \qquad\text{(1)}
\]
(b) The vector \(\mathbf{u}=(-37,-14,8)\) is also a solution to equation (1).
Find real numbers \(a,b\in\mathbf{R}\) so that
\[
\mathbf{u} = a\mathbf{v}+b\mathbf{w}.
\]
(c) Prove the following general result.
If the vector \(\mathbf{z}=(z_1,z_2,z_3)\in\mathbf{R}^3\) is
a solution to equation (1), then there are scalars
\(a,b\in\mathbf{R}\) so that
\[
\mathbf{z} = a\mathbf{v}+b\mathbf{w}. \qquad\text{(2)}
\]
(d) In (c), prove that for a given vector \(\mathbf{z}\),
there is only one choice for \(a\) and \(b\) that makes equation (2) true.
Solution:
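(a)
Substituting the coordinates of each vector into the left-hand side of
equation (1), we get
\[
2(1) - 3(2) + 4(1) = 2-6+4 = 0
\quad\text{and}\quad
2(8) - 3(4) + 4(-1) = 16-12-4 = 0,
\]
so both \(\mathbf{v}\) and \(\mathbf{w}\) are solutions.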
(b)
We need to find \(a\) and \(b\) so that
\[\begin{aligned}
\mathbf{u} &= a \mathbf{v} + b \mathbf{w} \\
(-37,-14,8) &= a(1,2,1) + b(8,4,-1) \\
(-37,-14,8) &= (a+8b,2a+4b,a-b). \\
\end{aligned}
\]
So we need to solve the simultaneous equations
\[
-37=a+8b,\qquad -14=2a+4b,\qquad 8=a-b.
\]
There are lots of ways to do this.
For example, subtracting the third equation from the first one,
we get
\[
\begin{aligned}
-37-8 &= (a+8b)-(a-b) \\
-45 &= 9b \\
-5 &= b. \\
\end{aligned}
\]
Substituting \(b=-5\) into the last equation gives \(a=3\). Then
one should check that \((a,b)=(3,-5)\) is a solution to the three
simultaneous equations. It is, so the solution to (b) is
\[
\mathbf{u} = 3 \mathbf{v} - 5 \mathbf{w}.
\]
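By the way, if you ever want to double-check this kind of computation, here
is a quick numerical sketch using NumPy (not part of the assigned solution;
it assumes NumPy is available). It solves the overdetermined system by least
squares, and the residual comes out numerically zero exactly because
\(\mathbf{u}\) lies in the span of \(\mathbf{v}\) and \(\mathbf{w}\):
```python
import numpy as np

# The columns of M are v = (1, 2, 1) and w = (8, 4, -1);
# we want (a, b) with M @ (a, b) = u.
M = np.array([[1.0, 8.0],
              [2.0, 4.0],
              [1.0, -1.0]])
u = np.array([-37.0, -14.0, 8.0])

# Three equations in two unknowns, so use least squares; the residual
# is numerically zero because u really is in the span of the columns.
(a, b), residual, rank, _ = np.linalg.lstsq(M, u, rcond=None)
print(a, b)  # approximately 3.0 and -5.0
```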
(c)
We are given that \(\mathbf{z}=(z_1,z_2,z_3)\) satisfies equation (1),
so we know that
\[
2z_1 - 3z_2 + 4z_3 = 0.
\]
We want to find \(a\) and \(b\) so that
\(\mathbf{z} = a\mathbf{v}+b\mathbf{w}\). This is more-or-less the same
problem as in (b): we need to solve
\[\begin{aligned}
\mathbf{z} &= a \mathbf{v} + b \mathbf{w} \\
(z_1,z_2,z_3) &= a(1,2,1) + b(8,4,-1) \\
(z_1,z_2,z_3) &= (a+8b,2a+4b,a-b). \\
\end{aligned}
\]
So we need to solve
\[\begin{aligned}
a+8b &= z_1 \\
2a+4b &= z_2 \\
a-b &= z_3.\\
\end{aligned}
\]
Proceeding as in (b), we subtract the third equation from the first to
get
\[
9b = z_1-z_3,
\quad\text{so}\quad
b = \frac{z_1-z_3}{9}.
\]
Then substituting into the first equation gives
\[
a = z_1 - 8b = z_1 - \frac{8z_1-8z_3}{9}
= \frac{z_1+8z_3}{9}.
\]
We now need to check that these values of \(a\) and \(b\) actually work.
(Note that so far, we haven't used the fact that \(\mathbf{z}\) satisfies
equation (1).) So we compute
\[\begin{aligned}
a \mathbf{v} + b \mathbf{w}
&= \frac{z_1+8z_3}{9}(1,2,1) + \frac{z_1-z_3}{9}(8,4,-1) \\
&= \left( z_1, \frac{2z_1+4z_3}{3}, z_3 \right) \\
&= \left( z_1, \frac{3z_2}{3}, z_3 \right)
\quad\text{because \(\mathbf{z}\) satisfies equation (1).} \\
&= (z_1,z_2,z_3) \; \checkmark
\end{aligned}
\]
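As a concrete illustration of these formulas (an example, not part of the
proof): the vector \(\mathbf{z}=(3,2,0)\) satisfies equation (1), since
\(2(3)-3(2)+4(0)=0\), and the formulas give
\[
a = \frac{3+8(0)}{9} = \frac{1}{3},
\qquad
b = \frac{3-0}{9} = \frac{1}{3},
\qquad
\tfrac{1}{3}(1,2,1) + \tfrac{1}{3}(8,4,-1) = (3,2,0). \; \checkmark
\]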
(d)
The computation that we did in (c) essentially shows that the only
possible choices for \(a\) and \(b\) are \(a = \frac{z_1+8z_3}{9}\) and \(b =
\frac{z_1-z_3}{9}\).
Alternatively, here is a direct proof, if we wanted to prove (d) without
having first done (c). We assume that
\[
\mathbf{z} = a\mathbf{v}+b\mathbf{w}
= a'\mathbf{v}+b'\mathbf{w},
\]
and we need to show that \(a=a'\) and \(b=b'\). Subtracting, we have
\[
(a-a')\mathbf{v}+(b-b')\mathbf{w} = \mathbf{z}-\mathbf{z}= \mathbf{0},
\]
so what we really need to do is show that \(\mathbf{v}\)
and \(\mathbf{w}\) are linearly independent. So we suppose that
\[
c_1\mathbf{v}+c_2\mathbf{w} = \mathbf{0},
\]
and we will prove that \(c_1=c_2=0\). Thus
\[\begin{aligned}
c_1\mathbf{v}+c_2\mathbf{w} &= \mathbf{0} \quad\text{by assumption,} \\
c_1(1,2,1)+c_2(8,4,-1) &= (0,0,0) \quad\text{these are the values
of \(\mathbf{v}\) and \(\mathbf{w}\),} \\
(c_1+8c_2,2c_1+4c_2,c_1-c_2) &= (0,0,0).\\
\end{aligned}
\]
So
\[
c_1+8c_2=0,\qquad 2c_1+4c_2=0,\qquad c_1-c_2=0.
\]
Subtracting the third equation from the first gives \(9c_2=0\), so
\(c_2=0\), and then substituting \(c_2=0\) into the first equation gives
\(c_1=0\). This completes the proof that \(\mathbf{v}\) and \(\mathbf{w}\)
are linearly independent, so there is at most one way to write any
given vector \(\mathbf{z}\) as a linear combination of \(\mathbf{v}\) and
\(\mathbf{w}\).
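As in (b), this too can be sanity-checked numerically; here is a minimal
NumPy sketch (an illustration, not part of the solution):
```python
import numpy as np

# Put v and w as the columns of a 3-by-2 matrix; rank 2 means
# the two columns are linearly independent.
M = np.array([[1.0, 8.0],
              [2.0, 4.0],
              [1.0, -1.0]])
print(np.linalg.matrix_rank(M))  # prints 2
```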
Problem # A.2:
An \(m\)-by-\(n\) matrix with coefficients in a field \(\mathbb{F}\)
is defined to be an \(m\)-by-\(n\) array of elements of \(\mathbb{F}\). We
write \(M_{m,n}(\mathbb{F})\) for the set of all such matrices, so an
element \(A\in M_{m,n}(\mathbb{F})\) looks like
\[
A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn} \\
\end{pmatrix}
\]
We make \(M_{m,n}(\mathbb{F})\) into a vector space in
the obvious way:
\[
\begin{pmatrix}
a_{11} & \cdots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{m1} & \cdots & a_{mn} \\
\end{pmatrix}
+
\begin{pmatrix}
b_{11} & \cdots & b_{1n} \\
\vdots & \ddots & \vdots \\
b_{m1} & \cdots & b_{mn} \\
\end{pmatrix}
=
\begin{pmatrix}
a_{11}+b_{11} & \cdots & a_{1n}+b_{1n} \\
\vdots & \ddots & \vdots \\
a_{m1}+b_{m1} & \cdots & a_{mn}+b_{mn} \\
\end{pmatrix}
\]
and
\[
c \begin{pmatrix}
a_{11} & \cdots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{m1} & \cdots & a_{mn} \\
\end{pmatrix}
=
\begin{pmatrix}
ca_{11} & \cdots & ca_{1n} \\
\vdots & \ddots & \vdots \\
ca_{m1} & \cdots & ca_{mn} \\
\end{pmatrix}
\]
(a)
Write down a basis for the \(\mathbb{F}\)-vector space
\(M_{3,2}(\mathbb{F})\) of 3-by-2 matrices. What is the dimension of
\(M_{3,2}(\mathbb{F})\)?
(b)
More generally, what is the dimension of
\(M_{m,n}(\mathbb{F})\)?
Solution:
(a) The following six vectors form a basis for \(M_{3,2}(\mathbb{F})\),
so \(M_{3,2}(\mathbb{F})\) has dimension 6.
\[
\begin{pmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \\ \end{pmatrix},\quad
\begin{pmatrix} 0 & 1 \\ 0 & 0 \\ 0 & 0 \\ \end{pmatrix},\quad
\begin{pmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \\ \end{pmatrix},\quad
\begin{pmatrix} 0 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{pmatrix},\quad
\begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ \end{pmatrix},\quad
\begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 1 \\ \end{pmatrix}.
\]
To see this, note that
\[
\begin{pmatrix} a & b \\ c & d \\ e & f \\ \end{pmatrix}=
a\begin{pmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \\ \end{pmatrix}+
b\begin{pmatrix} 0 & 1 \\ 0 & 0 \\ 0 & 0 \\ \end{pmatrix}+
c\begin{pmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \\ \end{pmatrix}+
d\begin{pmatrix} 0 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{pmatrix}+
e\begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ \end{pmatrix}+
f\begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 1 \\ \end{pmatrix}.
\]
This shows that every matrix in \(M_{3,2}(\mathbb{F})\) is a linear
combination of the given 6 matrices, so the given
6 matrices span. Further, it shows that the only
linear combination of the 6 matrices that equals the zero matrix is the one
with every coefficient equal to 0, so the 6 matrices are linearly independent.
(b)
Let \(E_{ij}\) be the matrix that has a \(1\) for its entry in
the \(i\)'th row and \(j\)'th column, and every other entry is 0.
Then
\[
\{ E_{ij} : 1\le i\le m,\; 1\le j\le n \}
\]
is a basis for \(M_{m,n}(\mathbb{F})\), so \(M_{m,n}(\mathbb{F})\) has
dimension \(mn\). To see that this is a basis, we note that
\[
\sum_{i=1}^m \sum_{j=1}^n a_{ij}E_{ij} =
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn} \\
\end{pmatrix},
\]
so we get every matrix in \(M_{m,n}(\mathbb{F})\) as a linear
combination of the \(E_{ij}\) matrices; hence the \(E_{ij}\) matrices span.
Moreover, the only linear combination of the \(E_{ij}\) matrices
that equals the zero matrix is the one with every
\(a_{ij}\) equal to 0, so the \(E_{ij}\) matrices are linearly independent.
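To make the span computation concrete, here is a short NumPy sketch (my own
illustration, with \(m=3\) and \(n=2\)) that rebuilds a matrix from
the \(E_{ij}\):
```python
import numpy as np

m, n = 3, 2
A = np.arange(1.0, m * n + 1).reshape(m, n)  # any test matrix

# Rebuild A as the linear combination sum_{i,j} a_{ij} E_{ij}.
rebuilt = np.zeros((m, n))
for i in range(m):
    for j in range(n):
        E = np.zeros((m, n))
        E[i, j] = 1.0           # E_{ij}: a single 1 in row i, column j
        rebuilt += A[i, j] * E  # coefficient a_{ij}

assert np.array_equal(rebuilt, A)  # the mn matrices E_{ij} span
```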
Problem # A.3:
The transpose of a (square)
matrix \(A\), denoted \(A^*\), is obtained
by flipping the entries across the main diagonal. So for example
\[
\begin{pmatrix} 1&2&3\\ 4&5&6\\ 7&8&9\\ \end{pmatrix}^*
=
\begin{pmatrix} 1&4&7\\ 2&5&8\\ 3&6&9\\ \end{pmatrix}.
\]
A matrix \(A\) is symmetric if \(A^*=A\)
and it is anti-symmetric if \(A^*=-A\).
(a)
Prove that the set of \(n\)-by-\(n\) symmetric matrices is a
vector subspace of \(M_{n,n}(\mathbb{F})\).
(b)
Find a basis for the space of 2-by-2 symmetric matrices. What
is its dimension?
(c)
Generalize by describing a basis for the space of \(n\)-by-\(n\) symmetric
matrices and computing its dimension. (It might help to start with 3-by-3.)
(d)
Prove that the set of
\(n\)-by-\(n\) anti-symmetric matrices is also a vector subspace
of \(M_{n,n}(\mathbb{F})\),
describe a basis, and compute its dimension.
(e) (Bonus)
Let's write \(M_{n,n}(\mathbb{F})^{\text{sym}}\)
for the space of symmetric matrices and
\(M_{n,n}(\mathbb{F})^{\text{anti-sym}}\)
for the space of anti-symmetric matrices. Prove that
\[
M_{n,n}(\mathbb{F})
= M_{n,n}(\mathbb{F})^{\text{sym}} + M_{n,n}(\mathbb{F})^{\text{anti-sym}}.
\]
Is this a direct sum of vector spaces?
Solution:
What happens if we add two matrices and then take their transpose?
We compute
\[
\begin{aligned}
(A+B)^*
&= \left(\begin{pmatrix}
a_{11} & \cdots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{m1} & \cdots & a_{mn} \\
\end{pmatrix}
+
\begin{pmatrix}
b_{11} & \cdots & b_{1n} \\
\vdots & \ddots & \vdots \\
b_{m1} & \cdots & b_{mn} \\
\end{pmatrix}\right)^* \\
&=
\begin{pmatrix}
a_{11}+b_{11} & \cdots & a_{1n}+b_{1n} \\
\vdots & \ddots & \vdots \\
a_{m1}+b_{m1} & \cdots & a_{mn}+b_{mn} \\
\end{pmatrix}^* \\
&=
\begin{pmatrix}
a_{11}+b_{11} & \cdots & a_{m1}+b_{m1}\\
\vdots & \ddots & \vdots \\
a_{1n}+b_{1n} & \cdots & a_{mn}+b_{mn} \\
\end{pmatrix} \\
&= \left(\begin{pmatrix}
a_{11} & \cdots & a_{m1} \\
\vdots & \ddots & \vdots \\
a_{1n} & \cdots & a_{mn} \\
\end{pmatrix}
+
\begin{pmatrix}
b_{11} & \cdots & b_{m1} \\
\vdots & \ddots & \vdots \\
b_{1n} & \cdots & b_{mn} \\
\end{pmatrix}\right) \\
&= A^* + B^*
\end{aligned}
\]
and similarly \((cA)^* = c A^*\). So we have the useful formulas
\[
(A+B)^*=A^*+B^*\quad\text{and}\quad(cA)^* = c A^*.
\]
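These formulas are also easy to spot-check numerically; here is a quick
sketch (NumPy writes the transpose as \(\texttt{A.T}\), where this write-up
writes \(A^*\)):
```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(3, 3))
B = rng.integers(-5, 6, size=(3, 3))

# (A + B)^* = A^* + B^*  and  (cA)^* = c A^*, here with c = 4.
assert np.array_equal((A + B).T, A.T + B.T)
assert np.array_equal((4 * A).T, 4 * A.T)
print("transpose formulas check out")
```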
(a)
Suppose that \(A\) and \(B\) are
symmetric matrices, so \(A^*=A\) and \(B^*=B\). Then
using the formulas that we just proved, we have
\[
(A+B)^* = A^*+B^* = A+B
\quad\text{and}\quad
(cA)^* = cA^* = cA.
\]
This shows that the set of symmetric
matrices \(M_{n,n}^{\text{sym}}(\mathbb{F})\) is closed under addition
and scalar multiplication, so it is a vector subspace
of \(M_{n,n}(\mathbb{F})\).
(b)
Consider the three matrices
\[
E_{11} = \begin{pmatrix} 1&0\\ 0&0\\ \end{pmatrix},\quad
E_{22} = \begin{pmatrix} 0&0\\ 0&1\\ \end{pmatrix},\quad
F_{12} = \begin{pmatrix} 0&1\\ 1&0\\ \end{pmatrix}.
\]
Then any 2-by-2 symmetric matrix can be written as
\[
\begin{pmatrix} a&b\\ b&d\\ \end{pmatrix}
= \begin{pmatrix} a&0\\ 0&0\\ \end{pmatrix}
+ \begin{pmatrix} 0&0\\ 0&d\\ \end{pmatrix}
+ \begin{pmatrix} 0&b\\ b&0\\ \end{pmatrix}
= aE_{11} + dE_{22} + bF_{12}
\]
in exactly one way, so \(\{E_{11},E_{22},F_{12}\}\) is a basis
for \(M_{2,2}^{\text{sym}}(\mathbb{F})\). In particular,
\(\dim M_{2,2}^{\text{sym}}(\mathbb{F})=3\).
(c)
More generally, as in Problem A.2, let \(E_{ij}\) be the matrix that
has a \(1\) for its entry in the \(i\)'th row and \(j\)'th column, and
every other entry is 0. Also let
\[
F_{ij} = E_{ij} + E_{ji},
\]
so for \(i\ne j\), \(F_{ij}\) has two 1's and the rest of its entries are 0. Since
it's clear that
\[
E_{ij}^* = E_{ji},
\quad\text{we have}\quad
F_{ij}^* = (E_{ij} + E_{ji})^*
= E_{ij}^* + E_{ji}^*
= E_{ji} + E_{ij}
= F_{ij},
\]
so \(F_{ij}\) is a symmetric matrix. It's also clear that
\(E_{ii}\) is a symmetric matrix. Then generalizing the proof
in (b), one sees that
\[
\{E_{ii} : 1\le i\le n\}
\cup
\{F_{ij} : 1\le i\lt j\le n\}
\]
is a basis for \(M_{n,n}^{\text{sym}}(\mathbb{F})\). (Note that
\(F_{ij}=F_{ji}\), so we only need one of them.)
More precisely, since a symmetric matrix has \(a_{ij}=a_{ji}\), we have
\[
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} \\
\end{pmatrix}
= \sum_{i=1}^n a_{ii}E_{ii}
+ \sum_{i=1}^{n-1} \sum_{j=i+1}^n a_{ij}F_{ij}.
\]
Finally,
\[
\dim M_{n,n}^{\text{sym}}(\mathbb{F})
= n + \frac{n(n-1)}{2} = \frac{n^2+n}{2}.
\]
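For example, for \(n=3\) this basis is
\[
\{E_{11},\,E_{22},\,E_{33},\,F_{12},\,F_{13},\,F_{23}\},
\]
six matrices, in agreement with \(\frac{3^2+3}{2}=6\).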
(d)
This is similar, but instead of using the \(E_{ij}\) and
the \(F_{ij}\), use the matrices \(G_{ij}=E_{ij}-E_{ji}\).
Note that
\[
G_{ij}^* = (E_{ij}-E_{ji})^*
= E_{ij}^*-E_{ji}^*
= E_{ji}-E_{ij}
= -G_{ij},
\]
so \(G_{ij}\in M_{n,n}^{\text{anti-sym}}(\mathbb{F})\).
Further, it's not hard to show
that \(\{G_{ij} : 1\le i\lt j\le n\}\) is a basis for
\(M_{n,n}^{\text{anti-sym}}(\mathbb{F})\).
Indeed, a matrix is anti-symmetric if and only if \(a_{ij}=-a_{ji}\)
for all \(i\) and \(j\), which means that the matrix can be written as
\[
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} \\
\end{pmatrix}
= \sum_{i=1}^{n-1} \sum_{j=i+1}^n a_{ij}G_{ij}.
\]
(Notice in particular that diagonal entries satisfy \(a_{ii}=0\),
since they equal their own negatives.) Hence
\[
\dim M_{n,n}^{\text{anti-sym}}(\mathbb{F})= \frac{n^2-n}{2}.
\]
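For example, for \(n=3\) the basis is \(\{G_{12},G_{13},G_{23}\}\), where e.g.
\[
G_{12} = E_{12}-E_{21} = \begin{pmatrix} 0&1&0\\ -1&0&0\\ 0&0&0\\ \end{pmatrix},
\]
in agreement with \(\frac{3^2-3}{2}=3\).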
(e)
Given any matrix \(A\in M_{n,n}(\mathbb{F})\), write \(A\) as
\[
A = \frac{1}{2}(A+A^*) + \frac{1}{2}(A-A^*).
\]
Then it's easy to check that \(\frac{1}{2}(A+A^*)\) is symmetric and
\(\frac{1}{2}(A-A^*)\) is anti-symmetric, which shows the desired sum
formula. In order to show that it is a direct sum, it's enough to
show (by a result in the book) that the only matrix that is both symmetric
and anti-symmetric is the zero matrix. But if \(A\) is both symmetric
and anti-symmetric, then
\[
A = A^* = -A,\quad\text{so}\quad 2A=0,\quad\text{so}\quad A=0.
\]
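The decomposition in (e) is also pleasant to check numerically. A minimal
sketch (an illustration, not part of the proof):
```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

S = (A + A.T) / 2  # the symmetric part (A + A^*)/2
K = (A - A.T) / 2  # the anti-symmetric part (A - A^*)/2

assert np.allclose(S.T, S)    # S is symmetric
assert np.allclose(K.T, -K)   # K is anti-symmetric
assert np.allclose(S + K, A)  # they sum back to A
```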
Problem Page 59 # 2:
Give an example of a function \(f:\mathbf{R}^2\to\mathbf{R}\) such that
\[
f(a v) = af(v)
\]
for all \(a\in\mathbf{R}\) and all \(v\in\mathbf{R}^2\), but \(f\) is not
linear.
Solution:
An interesting observation is that if \(f\) has a Taylor series
expansion around \((0,0)\), then \(f\) has to be linear. (You might
try proving this.) Anyway, it shows that we need to look for a
function whose derivative fails to exist.
Someone suggested the function \(f(x,y)=\sqrt{xy}\). This almost works,
but it has a couple of problems. First, its value isn't in \(\mathbf{R}\)
if \(xy<0\). Second, it satisfies
\[
f(ax,ay) = \sqrt{a^2xy} = |a|f(x,y).
\]
However, this example suggests an approach. The problem with taking square
roots is that \(\sqrt{x}\) is only defined if \(x\ge0\) and the values
all satisfy \(\sqrt{x}\ge0\). But if we take cube roots, then \(\sqrt[3]{x}\)
is defined for all real values of \(x\), and the cube root of \(x^3\)
is always equal to \(x\). So here are some functions that
have the desired property:
\[
f_1(x,y) = \sqrt[3]{x^3+y^3},\qquad
f_2(x,y) = \sqrt[3]{xy^2+x^2y},\qquad
f_3(x,y) = \sqrt[7]{x^7 + xy^6}.
\]
And you can make up many others of a similar
nature. Let's check that \(f_1\) is not linear. We have
\[
f_1\bigl((1,0)+(0,1)\bigr) = f_1(1,1) = \sqrt[3]{2}
\quad\text{and}\quad
f_1(1,0)+f_1(0,1) = \sqrt[3]{1}+\sqrt[3]{1}=2.
\]
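If you want to experiment with \(f_1\), here is a small numerical sketch (my
own check; NumPy's cbrt, unlike fractional powers, accepts negative
arguments):
```python
import numpy as np

def f1(x, y):
    return np.cbrt(x**3 + y**3)

a, x, y = -2.0, 3.0, 5.0
print(np.isclose(f1(a * x, a * y), a * f1(x, y)))  # True: f1(av) = a f1(v)
print(np.isclose(f1(1, 1), f1(1, 0) + f1(0, 1)))   # False: f1 is not additive
```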
One can also create weirder sorts of non-linear functions that have the
desired property. Here's an example:
\[
f(x,y) = \begin{cases}
x &\text{if \(y=0\) or if \(x/y\) is a rational number,} \\
0 &\text{if \(y\ne0\) and \(x/y\) is an irrational number,} \\
\end{cases}
\]
I'll leave it to you to check that \(f(ax,ay)=af(x,y)\). To see that \(f\) is not
linear, we use the fact that \(\pi=3.14159\ldots\) is not rational. Then
\[
f(\pi,1) + f(1-\pi,1) = 0 + 0 = 0
\quad\text{and}\quad
f\bigl((\pi,1)+(1-\pi,1)\bigr) = f(1,2) = 1.
\]
Pretty weird!
Problem Page 59 # 4:
Suppose that \(T\) is a linear map from \(V\) to \(\mathbf{F}\). Prove
that if \(u\in V\) is not in null\((T)\), then
\[
V = \text{null}(T) \oplus \{au : a\in\mathbf{F}\}.
\]
Solution:
The fact that \(T\) is a map from \(V\) to \(\mathbf{F}\) means
that the values of \(T\) are in \(\mathbf{F}\), i.e., they are scalars.
So \(T(u)\in\mathbf{F}\) is a scalar, and since we are given
that \(u\) is not in null\((T)\), that scalar is not \(0\). To emphasize
that it is a scalar, we'll call it \(b\). In other words,
\[
b = T(u) \in \mathbf{F}\quad\text{and}\quad b \ne 0.
\]
Now let \(v\in V\) be any vector. We want to show that \(v\) is the sum
of a vector in null\((T)\) and a vector in \(\{au:a\in\mathbf{F}\}\). The
idea is to try to subtract a multiple of \(u\) from \(v\) to get something
that's in null\((T)\). So we look for a scalar \(t\) so that
\[
T(v-tu) = 0.
\]
By linearity, this is
\[
T(v) - tT(u) = 0.
\]
But remember that \(T(u)=b\) is a non-zero scalar, i.e., \(b\) is a non-zero
number, so it has a multiplicative inverse \(b^{-1}\). So we should
set \(t=b^{-1}T(v)\), where remember that \(T(v)\) is also a scalar.
To emphasize that \(T(v)\) is a scalar, let's call it \(c\), so \(c=T(v)\).
With this notation, we take \(t=b^{-1}c\).
This means that if we write \(v\) as
\[
v = (v - b^{-1}cu) + b^{-1}cu,
\]
then the vector in parentheses is in null\((T)\), since
\[
T(v-b^{-1}cu) = T(v) - b^{-1}cT(u) = c - b^{-1}cb = 0,
\]
while the vector \(b^{-1}cu\) is clearly in \(\{au:a\in\mathbf{F}\}\), since
it is a scalar multiple of \(u\). This proves that every vector
in \(V\) is a sum of a vector in null\((T)\) and a vector in
\(\{au:a\in\mathbf{F}\}\), so this proves that
\[
V = \text{null}(T) + \{au:a\in\mathbf{F}\}.
\]
Finally, to show that it is a direct sum, we need to show that
\[
\text{null}(T) \cap \{au:a\in\mathbf{F}\} = \{0\}.
\]
So let \(v\) be a vector in the intersection. Then \(v=au\) for some
scalar \(a\). But also \(v\) is in null\((T)\), so
\[
0 = T(v) = T(au) = aT(u).
\]
We are given that \(T(u)\ne0\), so we conclude that \(a=0\), and hence
that \(v=au=0\).
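Here is a concrete illustration of the decomposition (an example, not part
of the proof): take \(V=\mathbf{R}^2\), \(T(x,y)=x\), and \(u=(1,1)\), so
\(b=T(u)=1\ne0\). Then any \(v=(x,y)\) splits as
\[
(x,y) = (0,\,y-x) + x(1,1),
\]
where \((0,y-x)\in\text{null}(T)\) and \(x(1,1)\in\{au:a\in\mathbf{R}\}\).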