1.1 Review of Vectors in $\mathbb{R}^n$
A vector, as we typically think of it, is a quantity which has both a magnitude and a direction. This is in
contrast to a scalar, which carries only a magnitude.
Real-valued vectors are extremely useful in just about every aspect of the physical sciences, since just about everything in Newtonian physics is a vector: position, velocity, acceleration, forces, etc. There is also vector calculus (namely, calculus in the context of vector fields), which is typically part of a multivariable calculus course; it has many applications to physics as well.
We often think of vectors geometrically, as a directed line segment (having a starting point and an endpoint).
We denote the $n$-dimensional vector from the origin to the point $(a_1, a_2, \dots, a_n)$ as $\vec v = \langle a_1, a_2, \dots, a_n \rangle$, where the $a_i$ are scalars.
Some vectors: $\langle 4/3,\ e^2,\ 27,\ 3,\ \pi,\ 0,\ 0,\ 1 \rangle$.
Note: I use the angle-bracket notation $\langle \cdot \rangle$ rather than the parenthesis notation $(\cdot)$ in order to distinguish between a vector and the coordinates of a point in space. I also draw arrows above vectors or typeset them in boldface (thus $\vec v$ or $\mathbf{v}$), in order to set them apart from scalars. This notation is not entirely standard.
Note/Warning: Vectors are a little bit different from directed line segments, because we don't care where a vector starts: we only care about the difference between the starting and ending positions. Thus, the directed segment whose start is $(0,0)$ and end is $(1,1)$ and the directed segment starting at $(1,1)$ and ending at $(2,2)$ represent the same vector, $\langle 1, 1 \rangle$.
We can add vectors (provided they are of the same length!) in the obvious way, one component at a time: if $\vec v = \langle a_1, \dots, a_n \rangle$ and $\vec w = \langle b_1, \dots, b_n \rangle$, then $\vec v + \vec w = \langle a_1 + b_1, \dots, a_n + b_n \rangle$.
We can justify this with our geometric idea of what a vector does: $\vec v$ moves us from our starting point to the point $(a_1, \dots, a_n)$ away, and then $\vec w$ moves us further by $\langle b_1, \dots, b_n \rangle$, for a total displacement of $\langle a_1 + b_1, \dots, a_n + b_n \rangle$.
Another way (though it's really the same way) to think of vector addition is via the parallelogram diagram: if we draw the parallelogram whose pairs of parallel sides are $\vec v$ and $\vec w$, then $\vec v + \vec w$ is its diagonal.
We can also scale a vector by a scalar, one component at a time: if $r$ is a scalar, then $r \cdot \vec v = \langle r a_1, \dots, r a_n \rangle$. Again, we can justify this by our geometric idea of what a vector does: if $\vec v$ moves us a given distance in some direction, then $\frac{1}{2}\vec v$ should move us half as far in that direction.
Example: If $\vec v = \langle 1, 2, 2 \rangle$ and $\vec w = \langle 3, 0, 4 \rangle$, then $2\vec w = \langle 6, 0, 8 \rangle$, $\vec v + 2\vec w = \langle 7, 2, 10 \rangle$, and $\vec v - \vec w = \langle -2, 2, -2 \rangle$.

The operations of addition and scalar multiplication in $\mathbb{R}^n$ satisfy several algebraic properties (which follow more or less directly from the definition):
For example, there is a zero vector (namely, the vector with all entries zero), and every vector has an additive inverse.
The two operations of addition and scalar multiplication (and the various algebraic properties they satisfy) are the key properties of vectors in $\mathbb{R}^n$; we use them to define the general notion of a vector space.

Definition: A (real) vector space is a collection $V$ of vectors, together with two operations: addition of vectors ($+$) and scalar multiplication of a vector by a real number ($\cdot$), satisfying the following axioms:
Let $\vec v$, $\vec v_1$, $\vec v_2$, $\vec v_3$ be arbitrary vectors in $V$ and let $\alpha$, $\alpha_1$, $\alpha_2$ be arbitrary scalars.

[A1] Addition is commutative: $\vec v_1 + \vec v_2 = \vec v_2 + \vec v_1$.
[A2] Addition is associative: $(\vec v_1 + \vec v_2) + \vec v_3 = \vec v_1 + (\vec v_2 + \vec v_3)$.
[A3] There is a zero vector $\vec 0$, with $\vec v + \vec 0 = \vec v$.
[A4] Every vector $\vec v$ has an additive inverse $-\vec v$, with $\vec v + (-\vec v) = \vec 0$.
[M1] Scalar multiplication distributes over vector addition: $\alpha \cdot (\vec v_1 + \vec v_2) = \alpha \cdot \vec v_1 + \alpha \cdot \vec v_2$.
[M2] Scalar multiplication distributes over scalar addition: $(\alpha_1 + \alpha_2) \cdot \vec v = \alpha_1 \cdot \vec v + \alpha_2 \cdot \vec v$.
[M3] Scalar multiplication is associative: $\alpha_1 \cdot (\alpha_2 \cdot \vec v) = (\alpha_1 \alpha_2) \cdot \vec v$.
[M4] The scalar 1 acts as the identity: $1 \cdot \vec v = \vec v$.
Important Remark: One may also consider vector spaces where the collection of scalars is something other than the real numbers: for example, there exists an equally important notion of a complex vector space, whose scalars are the complex numbers. (The axioms are the same.)
We will principally consider real vector spaces, in which the scalars are the real numbers.
The most general notion of a vector space involves scalars from a field, which is a collection of numbers possessing addition and multiplication operations which are commutative, associative, and distributive, with an additive identity 0 and a multiplicative identity 1, such that every element has an additive inverse and every nonzero element has a multiplicative inverse.

Aside from the real and complex numbers, another example of a field is the rational numbers (i.e., fractions). One can formulate an equally interesting theory of vector spaces over the rational numbers.
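To illustrate the last two requirements: the integers are not a field, since the nonzero element 2 has no multiplicative inverse inside the integers, whereas in the rational numbers every nonzero fraction $p/q$ does have a multiplicative inverse, namely $q/p$.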
Example: The vectors in $\mathbb{R}^n$, under the addition and scalar multiplication defined above, form a vector space, for any $n > 0$. (Note: for simplicity I will not demonstrate all of the axioms in detail here; each one follows, coordinate by coordinate, from the corresponding property of real-number arithmetic.) In particular, if we take $n = 1$, then we see that the real numbers themselves are a vector space.

Example: The set containing only a zero vector, with the operations $\vec 0 + \vec 0 = \vec 0$ and $\alpha \cdot \vec 0 = \vec 0$ for every $\alpha$, is a vector space. This space is rather boring: since it only contains one element, there's really not much to say about it.
Example: The set of $m \times n$ matrices, for any fixed $m$ and $n$, forms a vector space under the usual matrix addition and scalar multiplication.
The various algebraic properties we know about matrix addition give [A1] and [A2] along with [M1],
[M2], [M3], and [M4].
The zero vector in this vector space is the zero matrix (with all entries zero), and [A3] and [A4]
follow easily.
Note of course that in some cases we can also multiply matrices by other matrices. However, the
requirements for being a vector space don't care that we can multiply matrices by other matrices!
(All we need to be able to do is add them and multiply them by scalars.)
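For concreteness, here is one instance of the two vector-space operations in the $2 \times 2$ case: $\begin{pmatrix} 1 & 2 \\ 0 & -1 \end{pmatrix} + \begin{pmatrix} 3 & 0 \\ 1 & 4 \end{pmatrix} = \begin{pmatrix} 4 & 2 \\ 1 & 3 \end{pmatrix}$ and $3 \cdot \begin{pmatrix} 1 & 2 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 3 & 6 \\ 0 & -3 \end{pmatrix}$.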
Example: The complex numbers, namely the numbers of the form $a + bi$ where $i^2 = -1$, form a vector space. The axioms all follow from the standard properties of complex numbers. As might be expected, the zero vector is $0 = 0 + 0i$.
Again, note that the complex numbers have more structure to them, because we can also multiply
two complex numbers, and the multiplication is also commutative, associative, and distributive over
addition. However, the requirements for being a vector space don't care that the complex numbers
have these additional properties.
Example: The collection of all real-valued functions on any part of the real line (say, on an interval $[a, b]$) is a vector space, where we define the sum of two functions by $(f + g)(x) = f(x) + g(x)$ and scalar multiplication by $(\alpha f)(x) = \alpha \cdot f(x)$, for every $x$.

To illustrate: if $f(x) = x$ and $g(x) = x^2$, then $f + g$ is the function with $(f + g)(x) = x + x^2$, and $2f$ is the function with $(2f)(x) = 2x$.
The axioms follow from the properties of functions and real numbers. The zero vector in this space is the zero function; namely, the function $z$ which has $z(x) = 0$ for every $x$.

For example (just to demonstrate a few of the axioms), for any value $x$ in $[a, b]$ and any functions $f$ and $g$, we have
[A1]: $(f + g)(x) = f(x) + g(x) = g(x) + f(x) = (g + f)(x)$, so that $f + g = g + f$.
[M2]: $((\alpha_1 + \alpha_2) \cdot f)(x) = (\alpha_1 + \alpha_2) f(x) = \alpha_1 f(x) + \alpha_2 f(x) = (\alpha_1 \cdot f + \alpha_2 \cdot f)(x)$, so that $(\alpha_1 + \alpha_2) \cdot f = \alpha_1 \cdot f + \alpha_2 \cdot f$.
There are many simple algebraic properties that can be derived from the axioms (and thus, are true in every
vector space), using some amount of cleverness. For example:
1. Addition is cancellative: if $\vec v + \vec a = \vec v + \vec b$, then $\vec a = \vec b$.
   Idea: Add $-\vec v$ to both sides.
2. The zero vector is unique: for any $\vec v$, if $\vec v + \vec a = \vec v$, then $\vec a = \vec 0$.
   Idea: This is property (1) when $\vec b = \vec 0$.
3. The additive inverse is unique: for any $\vec v$, if $\vec v + \vec a = \vec 0$, then $\vec a = -\vec v$.
   Idea: This is property (1) when $\vec b = -\vec v$.
4. The scalar 0 times any vector gives the zero vector: $0 \cdot \vec v = \vec 0$.
   Idea: Expand $\vec v = (1 + 0) \cdot \vec v = \vec v + 0 \cdot \vec v$, and then use property (2).
5. Any scalar times the zero vector gives the zero vector: $\alpha \cdot \vec 0 = \vec 0$.
   Idea: Expand $\alpha \cdot \vec 0 = \alpha \cdot (\vec 0 + \vec 0) = \alpha \cdot \vec 0 + \alpha \cdot \vec 0$, and cancel.
6. The scalar $-1$ times any vector gives the additive inverse: $(-1) \cdot \vec v = -\vec v$.
   Idea: Expand $\vec 0 = 0 \cdot \vec v = (1 + (-1)) \cdot \vec v = \vec v + (-1) \cdot \vec v$, and use the uniqueness of the additive inverse (3).
7. The additive inverse of the additive inverse is the original vector: $-(-\vec v) = \vec v$.
   Idea: Expand $-(-\vec v) = (-1) \cdot (-1) \cdot \vec v = 1 \cdot \vec v = \vec v$.
1.3 Subspaces
Definition: A subspace $W$ of a vector space $V$ is a subset of $V$ which, under the same addition and scalar multiplication operations as $V$, is itself a vector space.
Very often, if we want to check that something is a vector space, it is much easier to verify that it is a subspace of something else we already know is a vector space.

We will make use of this idea when we talk about the solutions to a homogeneous linear differential equation (see the examples below), and prove that the solutions form a vector space merely by checking that they are a subspace of the set of all functions, rather than going through all of the axioms.
We are aided by the following criterion, which tells us exactly what properties a subspace must satisfy: a subset $W$ of a vector space $V$ is a subspace of $V$ if and only if it has the following three properties:

[S1] $W$ contains the zero vector of $V$.
[S2] $W$ is closed under addition: for any $\vec w_1$ and $\vec w_2$ in $W$, the vector $\vec w_1 + \vec w_2$ is also in $W$.
[S3] $W$ is closed under scalar multiplication: for any scalar $\alpha$ and any $\vec w$ in $W$, the vector $\alpha \cdot \vec w$ is also in $W$.
The reason we don't need to check everything to verify that a collection of vectors forms a subspace is that most of the axioms will automatically be satisfied in $W$ because they hold in $V$: as long as all of the operations are defined, axioms [A1]-[A2] and [M1]-[M4] hold in $W$ because they hold in $V$. But we need to make sure we can always add and scalar-multiply without leaving $W$, which is why we need [S2] and [S3]. Finally, [A3] and [A4] require the zero vector to be in $W$, which is [S1], and require additive inverses to be in $W$, which follows from [S3] since $(-1) \cdot \vec w = -\vec w$.
Remark: Any vector space $V$ automatically has two easy subspaces: the entire space $V$, and the trivial subspace consisting only of the zero vector.
Examples: Here is a rather long list of examples of less trivial subspaces (of vector spaces which are of interest
to us):
The vectors of the form $\langle t, t, t \rangle$ form a subspace of $\mathbb{R}^3$. [This is the line $x = y = z$.]
[S1]: The zero vector is of this form: take $t = 0$.
[S2]: We have $\langle t_1, t_1, t_1 \rangle + \langle t_2, t_2, t_2 \rangle = \langle t_1 + t_2, t_1 + t_2, t_1 + t_2 \rangle$, which is again of the same form if we take $t = t_1 + t_2$.
[S3]: We have $\alpha \cdot \langle t_1, t_1, t_1 \rangle = \langle \alpha t_1, \alpha t_1, \alpha t_1 \rangle$, which is again of the same form if we take $t = \alpha t_1$.

The vectors of the form $\langle s, t, 0 \rangle$ form a subspace of $\mathbb{R}^3$. [This is the $xy$-plane, $z = 0$.]
[S1]: The zero vector is of this form: take $s = t = 0$.
[S2]: We have $\langle s_1, t_1, 0 \rangle + \langle s_2, t_2, 0 \rangle = \langle s_1 + s_2, t_1 + t_2, 0 \rangle$, which is again of the same form if we take $s = s_1 + s_2$ and $t = t_1 + t_2$.
[S3]: We have $\alpha \cdot \langle s_1, t_1, 0 \rangle = \langle \alpha s_1, \alpha t_1, 0 \rangle$, which is again of the same form if we take $s = \alpha s_1$ and $t = \alpha t_1$.
The vectors $\langle x, y, z \rangle$ with $2x - y + z = 0$ form a subspace of $\mathbb{R}^3$.
[S1]: The zero vector is of this form, since $2(0) - 0 + 0 = 0$.
[S2]: If $\langle x_1, y_1, z_1 \rangle$ and $\langle x_2, y_2, z_2 \rangle$ have $2x_1 - y_1 + z_1 = 0$ and $2x_2 - y_2 + z_2 = 0$, then adding the equations shows that the sum $\langle x_1 + x_2, y_1 + y_2, z_1 + z_2 \rangle$ also lies in the space.
[S3]: If $\langle x_1, y_1, z_1 \rangle$ has $2x_1 - y_1 + z_1 = 0$, then scaling the equation by $\alpha$ shows that $\langle \alpha x_1, \alpha y_1, \alpha z_1 \rangle$ also lies in the space.
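For a concrete instance of the closure under addition: $\langle 1, 2, 0 \rangle$ and $\langle 0, 1, 1 \rangle$ both lie in this plane, since $2(1) - 2 + 0 = 0$ and $2(0) - 1 + 1 = 0$, and so does their sum $\langle 1, 3, 1 \rangle$, since $2(1) - 3 + 1 = 0$.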
More generally, the set of solutions $\langle x_1, \dots, x_n \rangle$ to any homogeneous system of linear equations forms a subspace of $\mathbb{R}^n$. It is possible to check this directly by working with the equations, but it is much easier to use matrices: writing the system as $A \vec x = \vec 0$, we see immediately that $\vec x = \vec 0$ is a solution.
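Filling in the matrix computation for the other two properties: if $A \vec x_1 = \vec 0$ and $A \vec x_2 = \vec 0$, then $A(\vec x_1 + \vec x_2) = A \vec x_1 + A \vec x_2 = \vec 0$ and $A(\alpha \vec x_1) = \alpha \, A \vec x_1 = \vec 0$, so sums and scalar multiples of solutions are again solutions, giving [S2] and [S3].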
The collection of $2 \times 2$ matrices of the form $\begin{pmatrix} a & b \\ 0 & a \end{pmatrix}$ forms a subspace of the vector space of all $2 \times 2$ matrices.
[S1]: The zero matrix is of this form, with $a = b = 0$.
[S2]: We have $\begin{pmatrix} a_1 & b_1 \\ 0 & a_1 \end{pmatrix} + \begin{pmatrix} a_2 & b_2 \\ 0 & a_2 \end{pmatrix} = \begin{pmatrix} a_1 + a_2 & b_1 + b_2 \\ 0 & a_1 + a_2 \end{pmatrix}$, which is also of this form.
[S3]: We have $\alpha \cdot \begin{pmatrix} a_1 & b_1 \\ 0 & a_1 \end{pmatrix} = \begin{pmatrix} \alpha a_1 & \alpha b_1 \\ 0 & \alpha a_1 \end{pmatrix}$, which is also of this form.
The collection of functions which are differentiable $n$ times on $[a, b]$ forms a subspace of the vector space of all functions on $[a, b]$: the zero function is differentiable, as are the sum and any scalar multiple of two functions which are differentiable $n$ times.
The collection of all polynomials forms a vector space. Observe that polynomials are functions on the entire real line, so it is sufficient to verify the subspace criteria: the zero function is a polynomial, as is the sum of two polynomials, and any scalar multiple of a polynomial.
The solutions to the differential equation $y'' + 6y' + 5y = 0$ form a vector space.
We show this by verifying that the solutions form a subspace of the space of all functions.
[S1]: The zero function is a solution.
[S2]: If $y_1$ and $y_2$ are solutions, then $y_1'' + 6y_1' + 5y_1 = 0$ and $y_2'' + 6y_2' + 5y_2 = 0$, so adding and using properties of derivatives shows that $(y_1 + y_2)'' + 6(y_1 + y_2)' + 5(y_1 + y_2) = 0$, so $y_1 + y_2$ is also a solution.
[S3]: If $\alpha$ is a scalar and $y_1$ is a solution, then scaling $y_1'' + 6y_1' + 5y_1 = 0$ by $\alpha$ and using properties of derivatives shows that $(\alpha y_1)'' + 6(\alpha y_1)' + 5(\alpha y_1) = 0$, so $\alpha y_1$ is also a solution.
Note: Observe that we can say something about what the set of solutions to this equation looks like, namely that it is a vector space, without actually solving it! (The solutions turn out to be precisely the functions $y = Ae^{-x} + Be^{-5x}$ for arbitrary constants $A$ and $B$. From here, if we wanted to, we could directly verify that such functions form a vector space.)
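As a quick check of that claimed form of the solutions: for $y = e^{-x}$ we get $y'' + 6y' + 5y = e^{-x} - 6e^{-x} + 5e^{-x} = 0$, and for $y = e^{-5x}$ we get $25e^{-5x} - 30e^{-5x} + 5e^{-5x} = 0$; since the solution set is a vector space, every combination $Ae^{-x} + Be^{-5x}$ is then automatically a solution as well.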
The solutions to the $n$th-order homogeneous linear differential equation $y^{(n)} + P_n(x)\, y^{(n-1)} + \cdots + P_2(x)\, y' + P_1(x)\, y = 0$, for any functions $P_1(x), \dots, P_n(x)$, form a vector space. (Note that $y^{(n)}$ means the $n$th derivative of $y$.)

As in the previous example, we show this by verifying that the solutions form a subspace of the space of all functions.
[S1]: The zero function is a solution.
[S2]: If $y_1$ and $y_2$ are solutions, then adding the equations $y_1^{(n)} + P_n(x)\, y_1^{(n-1)} + \cdots + P_1(x)\, y_1 = 0$ and $y_2^{(n)} + P_n(x)\, y_2^{(n-1)} + \cdots + P_1(x)\, y_2 = 0$ and using properties of derivatives shows that $(y_1 + y_2)^{(n)} + P_n(x)\, (y_1 + y_2)^{(n-1)} + \cdots + P_1(x)\, (y_1 + y_2) = 0$, so $y_1 + y_2$ is also a solution.
[S3]: If $\alpha$ is a scalar and $y_1$ is a solution, then scaling $y_1^{(n)} + P_n(x)\, y_1^{(n-1)} + \cdots + P_2(x)\, y_1' + P_1(x)\, y_1 = 0$ by $\alpha$ and using properties of derivatives shows that $(\alpha y_1)^{(n)} + P_n(x)\, (\alpha y_1)^{(n-1)} + \cdots + P_1(x)\, (\alpha y_1) = 0$, so $\alpha y_1$ is also a solution.
Note: This example is a fairly significant part of the reason we are interested in linear algebra (as it relates to differential equations): for arbitrary functions $P_1(x), \dots, P_n(x)$, it is not possible to solve explicitly for $y$, yet we can still say that the set of solutions forms a vector space.
One thing we would like to know, now that we have the definition of a vector space and a subspace, is what else we can say about the elements of a vector space; i.e., we would like to know what kind of structure the elements of a vector space have. In several of the subspace examples in $\mathbb{R}^n$ above, the vectors could be written down in terms of one or more parameters. In order to discuss this idea more precisely, we first need some terminology.
1.4.1 Linear Combinations and Span
Definition: Given vectors $\vec v_1, \dots, \vec v_n$, we say that the vector $\vec w$ is a linear combination of $\vec v_1, \dots, \vec v_n$ if there exist scalars $a_1, \dots, a_n$ such that $\vec w = a_1 \vec v_1 + \cdots + a_n \vec v_n$.

Example: In $\mathbb{R}^2$, the vector $\langle 1, 1 \rangle$ is a linear combination of $\langle 1, 0 \rangle$ and $\langle 0, 1 \rangle$, because $\langle 1, 1 \rangle = 1 \cdot \langle 1, 0 \rangle + 1 \cdot \langle 0, 1 \rangle$.
Non-Example: In $\mathbb{R}^3$, the vector $\langle 0, 0, 1 \rangle$ is not a linear combination of $\langle 1, 1, 0 \rangle$ and $\langle 0, 1, 1 \rangle$, because there exist no scalars $a_1$ and $a_2$ for which $a_1 \langle 1, 1, 0 \rangle + a_2 \langle 0, 1, 1 \rangle = \langle 0, 0, 1 \rangle$: this would require a common solution to the three equations $a_1 = 0$, $a_1 + a_2 = 0$, and $a_2 = 1$, and this system has no solution.
Definition: Given a set of vectors $\vec v_1, \dots, \vec v_n$, their span is the set $W$ of all linear combinations $a_1 \vec v_1 + \cdots + a_n \vec v_n$, where $a_1, \dots, a_n$ are arbitrary scalars.

Remark 1: The span is always a subspace: the zero vector can be written as $0 \cdot \vec v_1 + \cdots + 0 \cdot \vec v_n$, and a sum or scalar multiple of linear combinations of $\vec v_1, \dots, \vec v_n$ is again a linear combination of $\vec v_1, \dots, \vec v_n$.

Remark 2: For technical reasons, we define the span of the empty set to be the zero vector.
Definition: Given a vector space $V$, if the span of the vectors $\vec v_1, \dots, \vec v_n$ is all of $V$, we say that $\vec v_1, \dots, \vec v_n$ are a generating set for $V$ (or that they generate $V$).

Example: The vectors $\langle 1, 0, 0 \rangle$ and $\langle 0, 1, 0 \rangle$ do not generate $\mathbb{R}^3$: their span consists of the vectors $a \langle 1, 0, 0 \rangle + b \langle 0, 1, 0 \rangle = \langle a, b, 0 \rangle$, whose $z$-coordinate is $z = 0$. On the other hand, the vectors $\langle 1, 0, 0 \rangle$, $\langle 0, 1, 0 \rangle$, and $\langle 0, 0, 1 \rangle$ do generate $\mathbb{R}^3$, since we can write any vector $\langle a, b, c \rangle$ as $a \langle 1, 0, 0 \rangle + b \langle 0, 1, 0 \rangle + c \langle 0, 0, 1 \rangle$.

1.4.2 Linear Independence

Definition: We say that vectors $\vec v_1, \dots, \vec v_n$ are linearly independent if $a_1 \vec v_1 + \cdots + a_n \vec v_n = \vec 0$ implies $a_1 = \cdots = a_n = 0$; otherwise, we say they are linearly dependent.
Note: For an infinite set of vectors, we say it is linearly independent if every finite subset is linearly independent (per the definition above); otherwise (if some finite subset displays a dependence) we say it is dependent.
In other words, $\vec v_1, \dots, \vec v_n$ are linearly independent precisely when the only way to form the zero vector as a linear combination of $\vec v_1, \dots, \vec v_n$ is to take all of the scalar coefficients equal to zero. An equivalent way of thinking of linear (in)dependence is that a set is dependent if one of the vectors is a linear combination of the others.

Example: The vectors $\langle 1, 1, 0 \rangle$ and $\langle 2, 2, 0 \rangle$ in $\mathbb{R}^3$ are linearly dependent, because we can write $2 \langle 1, 1, 0 \rangle + (-1) \langle 2, 2, 0 \rangle = \langle 0, 0, 0 \rangle$. Or, in the equivalent formulation, we have $\langle 2, 2, 0 \rangle = 2 \langle 1, 1, 0 \rangle$.
Example: The vectors $\langle 1, 0, 2, 2 \rangle$, $\langle 2, 2, 0, 3 \rangle$, $\langle 0, -3, 3, 1 \rangle$, and $\langle 0, -4, 2, 1 \rangle$ in $\mathbb{R}^4$ are linearly dependent, because $2 \langle 1, 0, 2, 2 \rangle + (-1) \langle 2, 2, 0, 3 \rangle + (-2) \langle 0, -3, 3, 1 \rangle + 1 \langle 0, -4, 2, 1 \rangle = \langle 0, 0, 0, 0 \rangle$.
Proposition: The vectors $\vec v_1, \dots, \vec v_n$ are linearly independent if and only if every vector $\vec w$ in their span can be written as a linear combination $\vec w = a_1 \vec v_1 + \cdots + a_n \vec v_n$ in exactly one way.

For one direction, if decompositions are unique, then in particular the only way to write the zero vector is the trivial way, $0 \cdot \vec v_1 + \cdots + 0 \cdot \vec v_n = \vec 0$; so $a_1 \vec v_1 + \cdots + a_n \vec v_n = \vec 0$ forces $a_1 = \cdots = a_n = 0$, which is linear independence.

For the other direction, suppose we had two different ways of decomposing a vector $\vec w$, say as $\vec w = a_1 \vec v_1 + a_2 \vec v_2 + \cdots + a_n \vec v_n$ and $\vec w = b_1 \vec v_1 + b_2 \vec v_2 + \cdots + b_n \vec v_n$. Then subtracting and rearranging the difference between these two equations yields $\vec 0 = \vec w - \vec w = (a_1 - b_1) \vec v_1 + \cdots + (a_n - b_n) \vec v_n$. Now since $\vec v_1, \dots, \vec v_n$ are linearly independent, all of the coefficients $a_1 - b_1, \dots, a_n - b_n$ are zero; that is, $a_1 = b_1$, $a_2 = b_2$, $\dots$, $a_n = b_n$, so the two decompositions were in fact the same.
1.4.3 Bases and Dimension

Definition: A linearly independent set of vectors which spans $V$ is called a basis of $V$.

Terminology Note: The plural form of the (singular) word basis is bases.
Example: The vectors $\langle 1, 0, 0 \rangle$, $\langle 0, 1, 0 \rangle$, and $\langle 0, 0, 1 \rangle$ generate $\mathbb{R}^3$, as we saw above. They are also linearly independent, since $a \langle 1, 0, 0 \rangle + b \langle 0, 1, 0 \rangle + c \langle 0, 0, 1 \rangle$ is the zero vector only when $a = b = c = 0$. Thus, these three vectors are a basis for $\mathbb{R}^3$.

Example: More generally, in $\mathbb{R}^n$, the standard unit vectors $e_1, e_2, \dots, e_n$ (where $e_j$ has a 1 in the $j$th coordinate and 0s elsewhere) are a basis.

Non-Example: The vectors $\langle 1, 0, 0 \rangle$, $\langle 0, 1, 0 \rangle$, $\langle 0, 0, 1 \rangle$, and $\langle 1, 1, 1 \rangle$ in $\mathbb{R}^3$ are not a basis: although they generate $\mathbb{R}^3$, they are not linearly independent, since $1 \cdot \langle 1, 0, 0 \rangle + 1 \cdot \langle 0, 1, 0 \rangle + 1 \cdot \langle 0, 0, 1 \rangle + (-1) \cdot \langle 1, 1, 1 \rangle = \langle 0, 0, 0 \rangle$.

Example: The polynomials $1, x, x^2, x^3, \dots$ are a basis of the vector space of all polynomials. They span it, since every polynomial is, by definition, a linear combination of $1, x, x^2, \dots$ (this is what it means to be a polynomial), and they are linearly independent, as the following argument shows.
Suppose we had scalars $a_0, a_1, \dots, a_n$ such that $a_0 \cdot 1 + a_1 x + \cdots + a_n x^n = 0$ for every $x$. Then if we take the $n$th derivative of both sides (which is allowable because $a_0 \cdot 1 + a_1 x + \cdots + a_n x^n = 0$ is assumed to be true for all $x$), we obtain $n! \, a_n = 0$, from which we see that $a_n = 0$. Then repeat by taking the $(n-1)$st derivative to see $a_{n-1} = 0$, and so on, until finally we are left with just $a_0 = 0$. Hence the only way to form the zero function as a linear combination of $1, x, x^2, \dots, x^n$ is with all coefficients zero, which says that $1, x, x^2, x^3, \dots$ is a linearly independent set.
Theorem: A collection of $n$ vectors $\vec v_1, \dots, \vec v_n$ in $\mathbb{R}^n$ is a basis if and only if the $n \times n$ matrix $B$, whose columns are the vectors $\vec v_1, \dots, \vec v_n$, is invertible.

The idea behind the theorem is to multiply out and compare coordinates, and then analyze the resulting system of equations. The vectors $\vec v_1, \dots, \vec v_n$ form a basis precisely when every vector $\vec w$ in $\mathbb{R}^n$ can be written in exactly one way as $a_1 \vec v_1 + \cdots + a_n \vec v_n = \vec w$ for scalars $a_1, \dots, a_n$. Expanding this equation in coordinates shows that it is equivalent to the matrix equation $B \vec a = \vec w$, where $B$ is the matrix whose columns are $\vec v_1, \dots, \vec v_n$, $\vec a$ is the column vector with entries $a_1, \dots, a_n$, and $\vec w$ is viewed as a column vector. Saying that $B$ is invertible is precisely the statement that $B \vec a = \vec w$ has a unique solution $\vec a$ for every choice of $\vec w$. But having a unique way to write any vector as a linear combination of vectors in a set is precisely the statement that the set is a basis. So we are done.
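For instance, a quick use of this criterion in $\mathbb{R}^2$: the vectors $\langle 1, 1 \rangle$ and $\langle 2, 2 \rangle$ give $B = \begin{pmatrix} 1 & 2 \\ 1 & 2 \end{pmatrix}$, whose determinant is $1 \cdot 2 - 2 \cdot 1 = 0$, so $B$ is not invertible and the two vectors are not a basis (indeed, $\langle 2, 2 \rangle = 2 \langle 1, 1 \rangle$, so they are dependent). On the other hand, $\langle 1, 1 \rangle$ and $\langle 0, 1 \rangle$ give $B = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$, which has determinant 1, so these two vectors do form a basis of $\mathbb{R}^2$.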
Theorem: Every vector space $V$ has a basis; any two bases of $V$ contain the same number of elements; any generating set of $V$ contains a basis; and any linearly independent set of vectors can be extended to a basis.
Remark: If you only remember one thing about vector spaces, remember that every vector space has a basis!
Remark: That a basis always exists is really, really, really useful. It is without a doubt the most useful
fact about vector spaces: vector spaces in the abstract are very hard to think about, but a vector space
with a basis is something very concrete (since then we know exactly what the elements of the vector
space look like).
To show the first and last parts of the theorem, we show that we can build any set of linearly independent vectors into a basis:

Start with $S$ being some set of linearly independent vectors. (In any vector space, the empty set is linearly independent.)
1. If $S$ spans $V$, then $S$ is a linearly independent generating set, hence a basis.
2. If $S$ does not span $V$, then there is an element $\vec v$ of $V$ which is not in the span of $S$. Then if we put $\vec v$ into $S$, the new set is still linearly independent, and we repeat the process.

Eventually (to justify this statement in general, some fairly technical and advanced machinery may be needed), it can be proven that we will land in case (1). If $V$ has dimension $n$, this happens after at most $n$ steps; it is only when $V$ has infinite dimension that things get tricky and confusing, and the heavier machinery is required.
To show the third part of the theorem, the idea is to imagine going through the list of elements in a
generating set and removing elements until it becomes linearly independent.
This idea is not so easy to formulate with an infinite list, but if we have a finite generating set,
then we can go through the elements of the generating set one at a time, throwing out an element
if it is linearly dependent with the elements that came before it. Then, once we have gotten to the
end of the generating set, the collection of elements which we have not thrown away will still be a
generating set (since removing a dependent element will not change the span), but the collection will
also now be linearly independent (since we threw away elements which were dependent).
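For instance, running this procedure on the small generating set $\{\langle 1, 1, 0 \rangle, \langle 2, 2, 0 \rangle, \langle 0, 0, 1 \rangle\}$ of its own span: we keep $\langle 1, 1, 0 \rangle$, throw out $\langle 2, 2, 0 \rangle$ (it equals $2 \langle 1, 1, 0 \rangle$, so it is dependent on the element before it), and keep $\langle 0, 0, 1 \rangle$. The remaining set $\{\langle 1, 1, 0 \rangle, \langle 0, 0, 1 \rangle\}$ has the same span and is linearly independent, so it is a basis of that span.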
Theorem: Suppose $B = \{b_1, \dots, b_n\}$ is a basis of $V$ with $n$ elements, and $A = \{a_1, \dots, a_m\}$ is a set of $m$ vectors in $V$ with $m > n$. Then $A$ is linearly dependent.

Since $B$ is a basis, each $a_i$ can be written as a linear combination of the basis vectors, say as $a_i = \sum_{j=1}^{n} c_{i,j} \, b_j$ for $1 \le i \le m$, for some scalars $c_{i,j}$. To show that $A$ is linearly dependent, we want to find scalars $d_k$, not all zero, such that $\sum_{k=1}^{m} d_k \, a_k = \vec 0$. Substituting the expressions for the $a_k$ in terms of the $b_j$ turns the left-hand side into a linear combination of $b_1, \dots, b_n$; since the $b_j$ are linearly independent, the coefficient of each $b_j$ in the resulting expression must equal zero. If we tabulate the resulting system, we can check that it is equivalent to the matrix equation $C \vec d = \vec 0$, where $C$ is the $n \times m$ matrix built from the scalars $c_{i,j}$ (its $k$th column holds the coefficients of $a_k$) and $\vec d$ is the $m \times 1$ column vector of the unknowns $d_k$. Now since $C$ is a matrix which has more columns than rows, by the assumption that $m > n$, the homogeneous system $C \vec d = \vec 0$ has a nonzero solution $\vec d$. Any such nonzero solution gives $\sum_{k=1}^{m} d_k \, a_k = \vec 0$, and we see that $A$ is linearly dependent.
Definition: We define the number of elements in any basis of $V$ to be the dimension of $V$. (The theorem above guarantees that any two bases contain the same number of elements, so this is well-defined.)

Example: The dimension of $\mathbb{R}^n$ is $n$, since the $n$ standard unit vectors $e_1, \dots, e_n$ form a basis.
This says that the term dimension is reasonable, since it is the same as our usual notion of dimension.
Example: The dimension of the vector space of $m \times n$ matrices is $mn$, since the matrices $E_{i,j}$ form a basis, where $E_{i,j}$ is the $m \times n$ matrix with a 1 in the $(i,j)$ entry and 0s elsewhere.
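To see what this basis looks like in a small case, for $2 \times 2$ matrices the four basis matrices are $E_{1,1} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$, $E_{1,2} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, $E_{2,1} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$, $E_{2,2} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$, and any matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ equals $a E_{1,1} + b E_{1,2} + c E_{2,1} + d E_{2,2}$, in exactly one way.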
Example: The vector space of all polynomials is infinite-dimensional, since the (infinitely many) polynomials $1, x, x^2, x^3, \dots$ form a basis.
1.5 Linear Transformations

Now that we have a reasonably good idea of what the structure of a vector space is, the next natural question is: what do maps from one vector space to another look like?
It turns out that we don't want to ask about arbitrary functions, but about functions from one vector space
to another which preserve the structure (namely, addition and scalar multiplication) of the vector space.
The analogy to the real numbers is: once we know what the real numbers look like, what can we say
about arbitrary real-valued functions?
The answer is: not much, unless we specify that the functions preserve the structure of the real numbers, which is abstract math-speak for saying that we want to talk about continuous functions, which turn out to behave much more nicely.
This is the idea behind the definition of a linear transformation: it is a map that preserves the structure of a
vector space.
Definition: If $V$ and $W$ are vector spaces, we say a map $T$ from $V$ to $W$ (denoted $T: V \to W$) is a linear transformation if, for any vectors $\vec v, \vec v_1, \vec v_2$ and any scalar $\alpha$, we have the two properties
[T1] The map respects addition of vectors: $T(\vec v_1 + \vec v_2) = T(\vec v_1) + T(\vec v_2)$.
[T2] The map respects scalar multiplication: $T(\alpha \cdot \vec v) = \alpha \cdot T(\vec v)$.

Remark: Like with the definition of a vector space, one can show a few simple algebraic properties of linear transformations: for example, that any linear transformation $T: V \to W$ sends the zero vector (of $V$) to the zero vector (of $W$).
Example: If $V = W = \mathbb{R}^2$, then the map $T$ which sends $\langle x, y \rangle$ to $\langle x, x + y \rangle$ is a linear transformation.

Let $\vec v_1 = \langle x_1, y_1 \rangle$ and $\vec v_2 = \langle x_2, y_2 \rangle$, so that $\vec v_1 + \vec v_2 = \langle x_1 + x_2, y_1 + y_2 \rangle$.
[T1]: We have $T(\vec v_1 + \vec v_2) = \langle x_1 + x_2, (x_1 + x_2) + (y_1 + y_2) \rangle = \langle x_1, x_1 + y_1 \rangle + \langle x_2, x_2 + y_2 \rangle = T(\vec v_1) + T(\vec v_2)$.
[T2]: We have $T(\alpha \cdot \langle x, y \rangle) = \langle \alpha x, \alpha x + \alpha y \rangle = \alpha \cdot \langle x, x + y \rangle = \alpha \cdot T(\langle x, y \rangle)$.
Example: If $V = W = \mathbb{R}^2$ and $a, b, c, d$ are any real numbers, then the map $T$ which sends $\langle x, y \rangle$ to $\langle ax + by, cx + dy \rangle$ is a linear transformation.
Just like in the previous example, we can work out the calculations explicitly.
However, it is cleaner to use matrices: if we write $\langle x, y \rangle$ as the column vector $\begin{pmatrix} x \\ y \end{pmatrix}$, then $T$ sends it to $\begin{pmatrix} ax + by \\ cx + dy \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$. When we think of the map in this way, it is easier to see what is happening: writing $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, the map is simply $T(\vec v) = A \vec v$.
[T1]: We have $T(\vec v_1 + \vec v_2) = A (\vec v_1 + \vec v_2) = A \vec v_1 + A \vec v_2 = T(\vec v_1) + T(\vec v_2)$.
[T2]: Also, $T(\alpha \vec v_1) = A (\alpha \vec v_1) = \alpha \, A \vec v_1 = \alpha \, T(\vec v_1)$.
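For instance, the first example above is the special case $a = 1$, $b = 0$, $c = 1$, $d = 1$ of this matrix description: $T \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ x + y \end{pmatrix}$.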
This last example is very general: in fact, it is so general that every linear transformation from $\mathbb{R}^m$ to $\mathbb{R}^n$ is of this form! Namely, if $T$ is a linear transformation from $\mathbb{R}^m$ to $\mathbb{R}^n$, then there is an $n \times m$ matrix $A$ such that $T$ acts by sending $\vec v$ to $A \vec v$.

The reason is actually very simple, and it is easy to write down what the matrix $A$ is: it is the matrix whose columns are the vectors $T(e_1), \dots, T(e_m)$, where $e_1, \dots, e_m$ are the standard basis elements of $\mathbb{R}^m$ ($e_j$ is the vector with a 1 in the $j$th position and 0s elsewhere). To see this, write any vector $\vec v$ in $\mathbb{R}^m$ as $\vec v = \sum_{j=1}^{m} a_j \, e_j$; then applying $T$ and using the properties of a linear transformation, we obtain $T(\vec v) = \sum_{j=1}^{m} a_j \, T(e_j)$, which is exactly the product of the matrix $A$ with the column vector of coordinates $a_1, \dots, a_m$ of $\vec v$.
This is the reason that linear transformations are so named: they are really just linear functions of the coordinates. For example, if $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, then the linear functions involved are $ax + by$ and $cx + dy$. More generally, any linear transformation from an $m$-dimensional vector space $V$ to an $n$-dimensional vector space $W$ can be described by an $n \times m$ matrix $A$, once we choose a basis for each space.
Remark 1: This result underlines one of the reasons that matrices and vector spaces (which initially seem
like they have almost nothing to do with one another) are actually closely related: because matrices
describe the maps from one vector space to another.
Remark 2: One can also use this relationship between maps on vector spaces and matrices to provide
almost trivial proofs of some of the algebraic properties of matrix multiplication which are hard to prove
by direct computation.
For example: the composition of linear transformations is associative (because linear transformations
are functions, and function composition is associative).
1.5.1 Kernel and Image

Definition: If $T: V \to W$ is a linear transformation, the kernel of $T$, denoted $\ker(T)$, is the set of elements $\vec v$ in $V$ with $T(\vec v) = \vec 0$. The image of $T$ (the range of $T$), denoted $\operatorname{im}(T)$, is the set of elements $\vec w$ in $W$ such that there exists a $\vec v$ in $V$ with $T(\vec v) = \vec w$.

One of the reasons we care about these subspaces is that (for example) the set of solutions to a system of homogeneous linear equations $A \vec x = \vec 0$ is precisely the kernel of the linear transformation given by multiplication by $A$. Another reason is that they will say something about the map $T$ itself: essentially (see below), the kernel measures how far from one-to-one the map $T$ is, and the image measures how far from onto the map $T$ is.
Proposition: For any linear transformation $T: V \to W$, the kernel $\ker(T)$ is a subspace of $V$, and the image $\operatorname{im}(T)$ is a subspace of $W$.

For the kernel:
[S1] We have $T(\vec 0) = \vec 0$ (by the remark above), so $\vec 0$ is in the kernel.
[S2] If $v_1$ and $v_2$ are in the kernel, then $T(v_1 + v_2) = T(v_1) + T(v_2) = \vec 0 + \vec 0 = \vec 0$, so $v_1 + v_2$ is also in the kernel.
[S3] If $\alpha$ is a scalar and $v$ is in the kernel, so that $T(v) = \vec 0$, then $T(\alpha v) = \alpha \, T(v) = \alpha \cdot \vec 0 = \vec 0$, so $\alpha v$ is also in the kernel.

For the image:
[S1] We have $T(\vec 0) = \vec 0$, so $\vec 0$ is in the image.
[S2] If $w_1$ and $w_2$ are in the image, then there exist $v_1$ and $v_2$ such that $T(v_1) = w_1$ and $T(v_2) = w_2$. Then $T(v_1 + v_2) = T(v_1) + T(v_2) = w_1 + w_2$, so $w_1 + w_2$ is also in the image.
[S3] If $\alpha$ is a scalar and $w$ is in the image, with $T(v) = w$, then $T(\alpha v) = \alpha \, T(v) = \alpha w$, so $\alpha w$ is also in the image.
Proposition: The kernel $\ker(T)$ consists of only the zero vector if and only if the map $T$ is one-to-one, and the image $\operatorname{im}(T)$ is all of $W$ if and only if the map $T$ is onto.

The second statement is just the definition of onto. For the first: if $T$ is one-to-one, then only one vector can map to $\vec 0$; since $\vec 0$ maps to $\vec 0$, we get $\ker(T) = \{\vec 0\}$. Conversely, if $\ker(T) = \{\vec 0\}$ and $T(v_1) = T(v_2)$, then $T(v_1 - v_2) = \vec 0$, so $v_1 - v_2$ lies in the kernel, forcing $v_1 = v_2$; hence $T$ is one-to-one.

Theorem: If $T: V \to W$ is a linear transformation and $V$ is finite-dimensional, then $\dim(\ker(T)) + \dim(\operatorname{im}(T)) = \dim(V)$.
The idea behind this theorem is that if we have a basis for $\operatorname{im}(T)$, say $w_1, \dots, w_k$, then there exist $v_1, \dots, v_k$ in $V$ with $T(v_j) = w_j$; and if $a_1, \dots, a_l$ is a basis for $\ker(T)$, the goal is to show that $\{v_1, \dots, v_k, a_1, \dots, a_l\}$ is a basis for $V$.

Given any $v$ in $V$, we can write $T(v) = \sum_{j=1}^{k} \beta_j \, w_j = \sum_{j=1}^{k} \beta_j \, T(v_j) = T\Big(\sum_{j=1}^{k} \beta_j \, v_j\Big)$, where the $\beta_j$ are unique (because $w_1, \dots, w_k$ is a basis of the image). Then subtraction shows that $T\Big(v - \sum_{j=1}^{k} \beta_j \, v_j\Big) = \vec 0$, so that $v - \sum_{j=1}^{k} \beta_j \, v_j$ is in $\ker(T)$, and hence can be written as a sum $\sum_{i=1}^{l} \gamma_i \, a_i$, where the $\gamma_i$ are unique. Therefore $v = \sum_{j=1}^{k} \beta_j \, v_j + \sum_{i=1}^{l} \gamma_i \, a_i$, and using the uniqueness of the $\beta_j$ and $\gamma_i$ one checks that $\{v_1, \dots, v_k, a_1, \dots, a_l\}$ is a basis for $V$. In particular, $\dim(V) = k + l = \dim(\operatorname{im}(T)) + \dim(\ker(T))$.
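One concrete illustration of this dimension count: the projection $T: \mathbb{R}^3 \to \mathbb{R}^3$ sending $\langle x, y, z \rangle$ to $\langle x, y, 0 \rangle$ is a linear transformation whose kernel is the line of vectors $\langle 0, 0, t \rangle$ (dimension 1) and whose image is the $xy$-plane of vectors $\langle s, t, 0 \rangle$ (dimension 2), and indeed $1 + 2 = 3 = \dim(\mathbb{R}^3)$.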
1.5.2 More Examples of Linear Transformations
Example: If $T_1$ and $T_2$ are linear transformations from $V$ to $W$, then $T_1 + T_2$ and $\alpha \, T_1$ (for any scalar $\alpha$) are also linear transformations from $V$ to $W$. These follow from the criteria [T1] and [T2]. (They are somewhat confusing to follow when written down, so I won't bother.)
Example: If $V$ is the vector space of real-valued functions and $W = \mathbb{R}$, then the map $T$ sending a function $f$ to the value $f(0)$ is a linear transformation.
[T1]: We have $T(f_1 + f_2) = (f_1 + f_2)(0) = f_1(0) + f_2(0) = T(f_1) + T(f_2)$.
[T2]: Also, $T(\alpha f) = (\alpha f)(0) = \alpha \, f(0) = \alpha \, T(f)$.

Note of course that being a linear transformation has nothing to do with the fact that we are evaluating at 0. We could just as well evaluate at 1, or at any other real number.
Example: If $V$ and $W$ are both the vector space of real-valued functions and $P(x)$ is any real-valued function, then the map $T$ sending $f(x)$ to the function $P(x) f(x)$ is a linear transformation.
[T1]: We have $T(f_1 + f_2) = P(x) \, [f_1(x) + f_2(x)] = P(x) f_1(x) + P(x) f_2(x) = T(f_1) + T(f_2)$.
[T2]: Also, $T(\alpha f) = P(x) \, [\alpha f(x)] = \alpha \, P(x) f(x) = \alpha \, T(f)$.
Example: If $V$ and $W$ are both the vector space of (sufficiently differentiable) real-valued functions, and $P_1(x), \dots, P_n(x)$ are any functions, then the map $T$ sending $y$ to $y^{(n)} + P_n(x) \, y^{(n-1)} + \cdots + P_2(x) \, y' + P_1(x) \, y$ is a linear transformation.
[T1]: The $n$th derivative of a sum is the sum of the $n$th derivatives, so we have $T(f_1 + f_2) = (f_1 + f_2)^{(n)} + P_n(x) \, (f_1 + f_2)^{(n-1)} + \cdots + P_1(x) \, (f_1 + f_2) = T(f_1) + T(f_2)$.
[T2]: Also, the $n$th derivative of $\alpha f$ is $\alpha$ times the $n$th derivative of $f$, so $T(\alpha f) = \alpha \, T(f)$.

In particular, the kernel of this linear transformation is the collection of all functions $y$ such that $y^{(n)} + P_n(x) \, y^{(n-1)} + \cdots + P_2(x) \, y' + P_1(x) \, y = 0$; that is, the solutions to the homogeneous linear differential equation. Note that since we know the kernel is a vector space (as it is a subspace of $V$), we obtain another proof that the solutions to this differential equation form a vector space. However, it is very useful to be able to think of this linear differential operator, sending $y$ to $y^{(n)} + P_n(x) \, y^{(n-1)} + \cdots + P_2(x) \, y' + P_1(x) \, y$, as a linear transformation.
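For instance, taking $n = 2$, $P_2(x) = 6$, and $P_1(x) = 5$ gives the operator $T(y) = y'' + 6y' + 5y$, and its kernel is exactly the solution space of the equation $y'' + 6y' + 5y = 0$ considered earlier, namely the functions $Ae^{-x} + Be^{-5x}$.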