You are on page 1of 28

Algebraic Geometry

Diane Maclagan
Notes by Florian Bouyer
Copyright (C) Bouyer 2011.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license can be found at http://www.gnu.org/licenses/fdl.html
Contents
1 Introduction and Basic Denitions 2
2 Grobner Bases 3
2.1 The Division Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3 Zariski Topology 7
3.1 Morphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.2 Images of varieties under morphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
4 Sylvester Matrix 11
4.1 Hilberts Nullstellensatz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
5 Irreducible Components 16
5.1 Rational maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
6 Projective Varieties. 21
6.1 Ane Charts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
6.2 Morphisms of projective varieties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.2.1 Veronese Embedding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.2.2 Segre Embedding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
6.2.3 Grassmannian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
7 Dimension and Hilbert Polynomial 26
7.1 Singularities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1
Books:
Hasset: Introduction to Algebraic Geometry
Cox, Little, Oshea Ideals, Varieties and Algorithm
1 Introduction and Basic Denitions
Algebraic geometry starts with the study of solutions to polynomial equations.
e.g.: (x, y) C
2
: y
2
= x
3
2x + 1 (an elliptic curve)
e.g.: (x, y, w, z) C
4
: x +y +z +w = 0, x + 2y + 3z = 0 (Subspace of C
4
)
The goals of this module is to understand solutions to polynomial equations varieties. That is
properties, maps between them, how to compute them and examples of them. Why would we do that?
Because varieties occurs in many dierent parts of mathematics:
e.g.: A robot arm: any movement can be described by polynomial equations (and inequalities)
e.g.: (x, y) ( 0)
2
: x
4
+y
4
= 1 = (by Fermats Last Theorem)
Algebraic geometry seeks to understand these spaces using (commutative) algebra.
Denition 1.1. Let S be the ring of polynomial with coecients in a eld k.
Notation. S = k[x
1
, . . . , x
n
]
Denition 1.2. The ane space is A
n
= (y
!
, . . . , y
n
) : y
i
k. That is k
n
without the vector space
structure.
Denition 1.3. Given polynomial f
1
, . . . f
r
S the ane variety dened by the f
i
is V (f
1
, . . . , f
r
) =
y = (y
1
, . . . , y
n
) A
n
: f
i
(y) = 0 i
Example. V (x
2
+y
2
1) = circle of radius 1
Note. Two dierent sets of polynomials can dene the same varieties.
Example. V (x +y +z, z + 2y) = V (y z, x + 2z) = (2a, a, a) : a k
Recall: The ideal generated by f
1
, . . . , f
r
S is I = f
1
, . . . , f
r
=

r
i=1
h
i
f
i
: h
i
S. It is
closed under addition and multiplication by elements of S.
Lemma 1.4. V (f
1
, . . . , f
r
) = y A
n
: f(y) = 0f f
1
, . . . , f
r
. Thus if f
1
, . . . , f
r
= g
1
, . . . g
s

then V (f
1
, . . . , f
r
) = V (g
1
, . . . , g
s
).
Proof. We show the inclusion both ways:
: Let y V (f
1
, . . . , f
r
). Then f
i
(y) = 0i, so let f =

r
i=1
h
i
f
i
f
1
, . . . , f
r
, then f(y) = 0.
: Conversely if f(y) = 0f f
1
, . . . , f
r
then f
i
(y) = 0 i. Hence y V (f
1
, . . . , f
r
).
Notation. If I = f
1
, . . . , f
r
we write V (I) for V (f
1
, . . . , f
r
).
Denition. Let X A
n
be a set. The ideal of function vanishing on X is I(X) = f S : f(y) =
0y X
Example. X = 0 A
1
. Then I(X) = x.
Note that I I(V (I)). To see this we have f I f(y) = 0 y V (I) f I(V (I)). On the
other hand we dont have always equality.
e.g., I =

x
2
_
k[x], then V (I) = 0 A
n
, so I(V (I)) = x , =

x
2
_
.
e.g., k = 1 and I =

x
2
+ 1
_
. Then V (I) = so I(V (I)) = 1 = 1[x] ,= I
2
.
2
2 Grobner Bases
Question: Given f
1
, . . . , f
r
, f S, how can we decide if f f
1
, . . . , f
r
? That is: given generators for
I(X) how can we decide if f vanishes on X?
Example 2.1. n = 1, k = . Is

x
2
3x + 2, x
2
4x + 4
_
=

x
3
6x
2
+ 12x 8, x
2
5x + 6
_
=
x 2? Yes since we are in a PID so we can use Eulers algorithm to nd the generator. This
is a solved problems
Any n and f
1
, . . . , f
r
are linear
Is y z x +y +z, x + 2y? Yes.
Is 5x
1
+3x
2
7x
4
+8x
5
x
1
+x
2
+x
3
+x
4
+x
5
, 3x
1
7x
4
+ 9x
5
, 2x
1
+ 3x
4
= f
1
, f
2
, f
3
?
If f f
1
, f
2
, f
3
then f = af
1
+bf
2
+cf
3
for a, b, c k. So the question now becomes: is
(5, 3, 0, 7, 8) row
_
_
1 1 1 1 1
3 0 0 7 9
2 0 0 3 0
_
_
?
To solve this we use Gaussian elimination from Linear Algebra
As we seen from the above examples, we need a common generalization. This is the Theory of
Grobner bases.
Denition 2.2. A term order (or monomial order) is a total order on the monomials (polynomial in
one variable) is S = k[x
1
, . . . , x
n
] such that:
1. 1 < x
u
for all u ,= 0
2. x
u
< x
v
x
u+w
< x
v+w
for all w N
n
.
Several term orders:
Lexicographic order X
u
< X
v
if the rst non-zero element of v u is positive.
Example. f = 3x
2
8xz
9
+ 9y
10
. If x > y > z, then x
2
> xz
9
> y
10
(since if v = (2, 0, 0), u =
(1, 0, 9) then v u = (1, 0, 9).
Degreelicographic order X
u
< X
v
if
_
deg(X
u
) < deg(X
v
)
X
u
<
lex
X
v
if deg(X
u
) = deg(X
v
)
.
Example. f = 3x
2
8xz
9
+ 9y
10
. Then xz
9
> y
10
> x
2
.
Reverse lexicographic order (revlex) X
2
< X
v
is
_
deg(X
u
) < deg(X
v
)
the last non-zero entry of v u is negative if deg(X
u
) = deg(X
v
)
Example. f = 3x
2
8xz
9
+ 9y
10
. Then y
10
> xz
9
> z
2
.
Denition 2.3. Given a polynomial f =

c
u
X
u
S and a term order <, the initial term of f is
c
v
X
v
with X
v
> X
u
for all u and c
v
,= 0. This is denoted in
<
(f).
Denition 2.4. The initial ideal of I with respect to < is in
<
(I) = in
<
(f) : f I
Warning: If I = f
1
, . . . , f
r
then in
<
(I) is not necessarily generated by in
<
(f
1
), . . . in
<
(f
r
).
e.g., Let I = x +y +z, x + 2y and let the term ordering be x > y > z. Then in
<
(I) = x, y.
Denition 2.5. A set g
1
, . . . g
s
is a Grobner basis for I if g
1
, . . . , g
s
I and in
<
(I) = in
<
(g
1
), . . . , in
<
(g
s
).
The point of this is that long division by a Grobner basis decides the ideal membership problem,
that is, is f f
1
, . . . , f
r
?
Denition 2.6. A monomial ideal is an ideal I S generated by monomials X
u
.
3
Lemma 2.7. Let I be a monomial ideal, I = X
u
: u A for some A N
n
. Then:
1. X
v
I if and only if X
u
[X
v
for some u A.
2. If f =

c
v
X
v
I then each X
v
with c
v
non-zero is divisible by some X
u
for U A, hence they
lies in I.
Proof. Note that part 1. is a special case of part 2.
Since f I we can write f =

h
u
X
u
with u A, h
u
S and all but nitely many are 0. Let us
expand the RHS as a sum of monomials. Then each term is a multiple of some X
u
so lies in I, hence
the same is true for the terms of f.
Theorem 2.8 (Dicksons Lemma). Let I = X
u
: u A for some set A N
n
, then there exists
a
1
, . . . a
s
A with I = X
a1
, . . . , X
as
.
Proof. The proof is by induction on n.
n = 1: We have I = X
u
for U = minU : U A, this uses the fact that N is well ordered
n > 1: Name the variables of the polynomial ring x
1
, . . . , x
n1
, y.. Let J =

X
u
: j 0 with x
u
y
j
I
_

k[x
1
, . . . , x
n1
]. By induction hypothesis J = X
ai
1
, . . . X
ais
where (a
ij
, m
j
) A for some
m
j
N. Let m = max(m
j
). For 0 l m1, let J
l
=

X
u
: x
u
y
l
I
_
k[x
1
, . . . , x
n1
].
So again by induction we have that J
l
=

x
b
l1
, . . . , x
b
r(l)
_
where b
ls
N
n1
and x
b
ls
y
l
I.
We now claim that I =

x
b
ls
y
l
: 0 l m1, 1 s r(l)
_
+x
aij
y
mj
: 1 j s.
Indeed if x
u
y
j
I, if j < m then x
u
J
j
so x
bjs
[x
u
for some b
js
so x
bjs
y
j
[x
u
y
j
. If j m
then x
u
J, so there is a
i
with X
ai
[X
u
so X
ai
y
mi
[X
u
y
j
. In particular, every monomial
generator of I lies in

x
b
ls
y
l
, x
aij
y
mj
_
so the ideals are equal and I is nitely generated.
For each of the nite number of generators we can nd a
i
A with X
ai
dividing the
generator (using the previous lemma).
Corollary 2.9. A term order is well ordered (every set of monomials has a least element)
Proof. If not, there would be an innite chain X
u1
> X
u2
> . . . . Let I = X
ui
: i 1 k[x
1
, . . . , x
n
],
then by Dicksons lemma I = X
ui
1
, . . . , X
uis
for some i
1
< i
2
< < i
s
. In particular for j i
s
there exists l such that X
ui
l
[X
uj
. Thus X
uj
= X
ui
l
X
w
, but then X
ui
l
< X
uj
because 1 < X
W
. This
is a contradiction.
Corollary 2.10. Let I be an ideal in k[x
1
, . . . , x
n
] then there exists g
1
, . . . g
s
I with in
<
(I) =
in
<
(g
1
), . . . , in
<
(g
s
). Hence a Grobner basis exists.
Proof. By denition in
<
(I) = in
<
(f) : f I. By Discksons lemma, there exists g
1
, . . . , g
s
I with
in
<
(g
1
), . . . , in
<
(g
s
) = in
<
(I).
2.1 The Division Algorithm
Input: f
1
, . . . , f
s
, f S, < the term order
Output: Expression of the form

s
i=1
h
i
f
i
+r where h
i
S and r =

c
u
X
u
with {c
u
,= 0
X
u
is not divisible by in
<
(f
i
) i}, such that if in
<
(f) = c
u
X
u
, in
<
(h
i
f
i
) =
c
vi
X
vi
then X
u
X
vi
i.
Step 1: Initialize h
1
= = h
s
= 0, r = 0, p = f.
Step 2: While p ,= 0 do:
i = 1
Divisionoccured = false
While i s and Divisionoccured = false do:
If in
<
(f
i
)[ in
<
(p) then:
h
i
= h
i
+
in<(p)
in<(fi)
p = p
in<(p)
in<(fi)
f
i
4
Divisionoccured = true
Else:
i = i + 1
If Divisionoccured = false then:
r = r + in
<
(p)
p = p in
<
(p)
Step 3: Output: h
1
, . . . , h
s
, r.
Example 2.11.
Input: f
1
= x +y +z, f
2
= 3x 2y, f = 5y + 3z, < lex (x < y < z)
Step 1: h
1
= 0, h
2
= 0, r = 0, p = 5y + 3z.
Step 2: i = 1
Divisionoccured = false
does in
<
(f
1
)[ in
<
(p)? Yes:
h
1
= 0 + 3
p = 5y + 3z) 3 (x +y +z) = 3x + 2y
Divisionoccured = true
Step 2: i = 1
Divisionoccured = false
does in
<
(f
1
)[ in
<
(p)? No:
i = 2
does in
<
(f
2
)[ in
<
(p)? Yes:
h
2
= 0 +1
p = 3x + 2y + (1) (3x 2y) = 0
Divisionoccured = true
Step 3: Output: h
1
= 3, h
2
= 1, r = 0
Note that the division algorithm depends on the ordering. (In the above example if x > y > z then
the output is h
1
= h
2
= 0 and r = 5y + 3z)
Proposition 2.12. The above algorithm terminates with the correct output .
Proof. As each stage the initial term in
<
(p) decreases with respect to <. Since < is a well-order, this
cannot happen an innite number of times, hence the algorithm must terminate.
At each stage we have f = p+

h
i
f
i
+r, where h
i
f
i
and r satisfy the condition, so when it outputs
with p = 0, the output has the desired correct form.
Proposition 2.13. If g
1
, . . . , g
s
is a Grobner basis for I with respect to <, then f I if and only
if the division algorithm outputs r = 0.
Proof. The division algorithm writes f =

h
i
g
i
+ r, where no monomial in r is divisible by in
<
(g
i
).
Thus f I if and only if r I. Now if r ,= 0 then in
<
(r) / in
<
(I) = in
<
(g
1
), . . . , in
<
(g
s
), so r / I.
Hence r = 0 if and only if r I.
Corollary 2.14. If g
1
, . . . , g
s
is a Grobner basis for I then I = g
1
, . . . , g
s

Proof. We have g
1
, . . . , g
s
I by the denition of Grobner basis. If f I, then we divide f by
g
1
, . . . , g
s
to get f =

h
i
g
i
+r, but r = 0. So we have f g
1
, . . . , g
s
, hence I g
1
, . . . , g
s
.
Corollary 2.15 (Hilbert Basis Theorem). Let I S be an ideal. Then I is nitely generated.
Proof. We know that I has a nite Grobner basis (since monomial ideals are nitely generated). By
the previous corollary, this Grobner basis generates I
Denition 2.16. A ring R is Noetherian if all its ideals are nitely generated.
Hence the Hilbert basis theorem says S is Noetherian. Note that there is a standard algorithm (the
Buchberger algorithm) to compute Grobner bases.
Denition 2.17. A reduced Grobner basis for I with respect to <, is a Grobner basis of I which
satises:
5
1. Coecients of in
<
(g
i
) is 1
2. No in
<
(g
i
) divides any other-way
3. No in
<
(g
i
) divides any other term of g
j
.
Such a reduced Grobner basis exists and is unique. With this we can check whether two ideals are
equal. To do this we x a term order and compute a reduced Grobner basis for I and J.
6
3 Zariski Topology
Recall that a topological space is a set X and a collection = U of subsets of X called open sets,
satisfying:
1.
2. X
3. If U, U

then U U


4. If U

for A, then

.
A set Z is closed if its compliment is open.
Denition 3.1. The Zariski Topology on A
n
has close set V (I) for I S an ideal.
Example. In A
1
, under the Zariski Topology, the closed sets are nite set, A
1
or . (A
1
= V (0) and
= V (S))
Recall: If I, J are ideals in S then I + J = i + j : i I, j J, while IJ = ij : i I, j J. In
terms of generators, if I = f
1
, . . . , f
s
and J = g
1
, . . . , g
r
then I + J = f
1
, . . . , f
s
, g
1
, . . . , g
s
and
IJ = f
i
g
j
: 1 i s, 1 j r.
Proposition 3.2. Let X = V (I) and Y = V (J) be two varieties in A
n
then:
X Y = V (I +J)
X Y = V (I J) = V (IJ)
Proof. Let y X Y . Then f(y) = 0 for all f I and g(y) = 0 for all g J. So (f +g)(y) = 0
for all f I and g J. Hence by denition y V (I +J).
Conversely: let y V (I + J), then h(y) = 0 for all h = f + 0 with f I, hence y V (I).
Similarly h(y) = 0 for all h = 0 +g with g J, hence y V (J). So y X Y .
Let y X Y . Then y X or y Y . If y X then f(y) = 0f I, so f(y) = 0 f I J,
hence y V (I J). Similarly if y Y then g(y) = 0g I, so g(y) = 0 g I J, hence
y V (I J).
Let y V (IJ). Then h(y) = 0h = fg with f I, g J. Thus h(y) = f(y)g(y) f I, g J.
Suppose y / Y , that is there exists g J with g(y) ,= 0, then f(y) = 0 f I, hence y V (I) =
X. Thus we have y X Y . So V (IJ) X Y .
Note that I J IJ so V (I J) V (IJ) (This follows from the general fact I J V (J)
V (J))
We have shown V (I J) V (IJ) X Y V (I J), thus they are all equal.
In fact, if X

: A is a collection of varieties in A
n
with X

= V (I

), then

= V (I

).
Challenge question: What goes wrong with arbitrary union.
Corollary 3.3. The Zariski topology is a topology on A
n
.
Note: This topology is weird compare to the Euclidean topology, for example it is not Haussdorf
and open sets are dense.
3.1 Morphism
Denition 3.4. A morphism is a map : A
n
A
m
with (y
1
, . . . , y
n
) = (
1
(y
1
, . . . , y
n
), . . . ,
m
(y
1
, . . . , y
n
))
where k[x
1
, . . . , x
n
].
Example. : A
2
A
2
dened by (x, y) = (x
2
y
2
, x
2
+ 2xy + 3y
2
).
Morphism plays the role of continuous functions in topology. Questions: are all continuous functions
morphism? No.
7
Example. f(x) =
_
x + 1 x /
x x
. This is a continuous function in the Zariski topology. We dont
want this, hence why we restrict to morphism.
Denition 3.5. For f k[z
1
, . . . , z
m
], : A
n
A
m
, the function f k[x
1
, . . . , x
n
] is called the
pullback if f by .
Note.

f = f .
Recall: a k-algebra is a ring R containing the eld k. A k-algebra homomorphism is a ring
homomorphism with (a) = a a k.
Lemma 3.6. The map

: k[z
1
, . . . , z
m
] k[x
1
, . . . , x
n
] is a k-algebra homomorphism

(1) = 1

(0) = 0

(a) = a a k

(fg) =

(f)

(g)

(f +g) =

(f) +

(g)
Proof. Exercise
Note: The polynomial ring is the ring of morphism from A
n
to A
1
.
Denition 3.7. The coordinate ring k[X] of a variety X = V (I) A
n
is the ring of polynomial
functions from X to A
1
.
Equivalently: k[X] = f k[x
1
, . . . , x
n
]/ where f g if f(y) = g(y) for all y X.
Note. f(y) = g(y) y X if and only if (f g)(y) = 0 y X, that is, if and only if f g I(X). So
k[X] = k[x
1
, . . . , x
n
]/I(X) and in particular k[X] is a ring.
Example. X = V (x
2
+y
2
1) then k[X] = k[x, y]/

x
2
+y
2
1
_
X = V (x
3
) A
1
then k[X] = k[x]/ x

= k.
Denition 3.8. Fix X = V (I) A
n
. Two morphism , : A
n
A
m
are equal in X if the induced
pullback

: k[z
1
, . . . , z
m
] k[Z] = k[x
1
, . . . , x
n
]/I(X) are equal.
Denition 3.9. A morphism : X A
n
is an equivalence class of such morphism.
Example 3.10. Let X = V (x
2
+ y
2
1), : A
2
A
1
dened by (x, y) = x
4
and : A
2
A
1
dened by (x, y) = (y
2
1)
2
. We claim that = on X since

: k[z] k[x, y] is dened by


z x
4
while

: k[z] k[x, y] is dened by z (y


2
1)
2
. But k[X] = k[x, y]/(x
2
+y
2
1), and in
there x
4
= (y
2
1)
2
, hence

.
Lemma 3.11. If , : A
n
A
m
are equal on X then (y) = (y) for all y X.
Proof. If (y) ,= (y) for some y X then they dier in some coordinate i. Then z
i
((y)) ,= z
i
((y)),
so

z
i
(y) ,=

z
i
(y). Hence

z
i

z
i
/ I(X), so the pullback homomorphism

and

are
dierent.
Denition 3.12. Let X A
n
and Y A
m
be varieties. A morphism : X Y is a morphism
: X A
m
with (X) Y .
Example. Let X = A
1
and Y = V (cy y
2
) A
3
and let : A
1
A
3
be dened by (t) = (t, t
2
, t
3
).
Then

: k[x, y, z] k[t] is dened by x t, y t


2
and z t
3
. Since tt
3
(t
2
)
2
= 0, (A
1
) Y ,
so is a morphism from A
1
Y .
Proposition. Let X A
n
, Y A
m
be varieties. Any morphism : X Y induces a k-algebra
homomorphism

: k[Y ] k[X]. Conversely given a k-algebra homomorphism from k[Y ] k[X] is

for some morphism : X Y .


8
Proof. Let : X Y be a morphism. Since (X) Y we have f (x) = 0 x X and
f I(Y ). Hence

f I(X) f I(Y ), therefore the induced map

: k[z
1
, . . . , z
m
] k[X] =
k[x
1
, . . . , x
n
]/I(X) factors through k[Y ]. So given a morphism : X Y we get

: k[Y ] k[X].
Conversely given a k-algebra homomorphism : k[Y ] k[X] it suces to nd a k-algebra homo-
morphism

: k[z
1
, . . . , z
m
] k[x
1
, . . . , x
n
] for which we have a commutating diagram
k[z
1
, . . . , z
m
]

//
i

k[x
1
, . . . , x
n
]
i

k[Y ]

//
k[X]
Then will be a morphism A
n
A
m
with (X) Y . We construct such

as follow. Let
g
i
be any polynomial in k[x
1
, . . . , x
n
] with i

X
(g
i
) = (i

Y
(z
i
)). Set

= g
i
and extend as a k-
algebra homomorphism. (g
i
exists since the map i

X
is surjective). This denes

: k[z
1
, . . . , z
m
]
k[x
1
, . . . , x
n
] and i

(z
1
) = i

Y
(z
i
) by construction, hence the diagram commutes.
Example 3.13. Let

: k[t] k[x, y, z]/(x


2
y, x
3
z). Then

(t) = x and

(t) = x + x
2
y is
the same. This is

for : V (x
2
y, x
3
z) A
1
dened by (x, y, z) = x (or (x, y, z) = x+x
2
y
as while they are dierent morphism they agree on X)
So to sum up: Morphism : X Y are the same as k-algebra homomorphism of the coordinate
rings

: k[Y ] k[X]. note that the homomorphism goes the other way! (contragradient).
Exercise 3.14. If X

Y

Z with X

Z. Then

: k[Z] k[X] is

.
Denition 3.15. An isomorphism of ane varieties is a morphism : X Y for which there is a
morphism
1
: Y X with
1
= id
Y
and
1
= id
X
.
An automorphism of an ane variety is an isomorphism : X X.
WARNING: A morphism that is a bijection needs not be an isomorphism.
3.2 Images of varieties under morphism
That is, given : A
n
A
m
what is (X)?
Warning: (X) needs not to be a variety. For example X = V (xy 1) A
1
and : A
2
A
1
dened (x, y) x. Then (X) = A
1
0. (REMEMBER THIS EXAMPLE!). Notice that the closure
of (X), is (X) = A
1
.
Another question is: Given X A
n
and : A
n
A
m
, how do we compute (X). We use
the following clever trick: let X A
n
, rst we send x (x, (x)), then project unto the last m
coordinates, i.e., (X) is the composition of the inclusion of X into the graph of with the projection
onto the last m coordinates.
This breaks the problem into two parts:
Describe the image of X A
n
A
m
Describe (Y ) for Y A
n
A
m
, where is the projection onto the last m coordinates.
For part 1, the image of X = V (I) is V (I) V (z
i

i
(x)) A
n
A
m
= (x
1
, . . . , x
n
, z
1
, . . . , z
m
)
Example. Let : A
2
A
2
dened by (x, y) = (x + y, x y) and let X = V (x
2
y
2
). Then the
graph of X in A
2
A
2
is V (x
2
y
2
, z
1
z y, z
2
x +y) (x, y, z
1
, z
2
). Then (x, y) = (z
1
, z
2
)
Theorem 3.16. Let X A
n
be a variety and let : A
n
A
m
be the projection onto the last m
coordinates. Then (X) = V (I(X) k[x
nm+1
, . . . , x
n
])
Note. Well soon show that if k = k then we can replace I(X) by I. But it is not true otherwise, for
example, consider k = 1 and X = V (x
2
y
2
+ 1) A
2
and : (x, y) y. Then X = , (X) = and
I(X) = 1. But

x
2
y
2
+ 1
_
k[y] = 0
9
Proof. If f I(X)k[x
nm+1
, . . . , x
m
] then f(y) = 0 y X, so f(y
nm+1
, . . . , y
n
) = 0 (y
nm+1
, . . . , y
n
)
with y X, hence f((y)) = 0 y X and thus (X) V (I(X) k[x
nm+1
, . . . , x
n
]).
Conversely if g I((X)) then g(y
nm+1
, . . . , y
n
) = 0 y = (y
1
, . . . , y
n
) X. So g I(X)
k[x
nm+1
, . . . , x
n
] so I((X)) I(X) k[x
nm+1
, . . . , x
n
]. But since (X) = V (I((X)) this shows
V (I(X) k[x
nm+1
, . . . , x
n
]) (X).
This leaves the question: Given I k[x
1
, . . . , x
n
, z
1
, . . . , z
m
] how can we compute I k[z
1
, . . . , z
m
]?
The answer is to use Grobner basis.
Recall: the lexicographic term order with x
1
> > x
n
> z
1
> > z
m
has x
u
z
v
> x
u

z
v

if
(u u

, v v

) has rst non-zero entry positive.


Proposition 3.17. Let I k[x
1
, . . . , x
n
] = S and let G = g
!
, . . . , g
s
be a lexicographic Grobner basis
for I. Then a lexicographic Grobner basis for I k[x
nm+1
, . . . , x
n
] is given by Gk[x
nm+1
, . . . , x
n
] =
S

, i.e., those elements of G that are polynomials in x


nm+1
, . . . , x
n
.
Proof. GS

is a collection of polynomials in IS

, so we just need to show that in


<lex
(g) : g G S

=
in
<lex
(IS

) S

. Let f IS

. Then in
<lex
(f) in
<lex
(I), so there is g G with in
<lex
(g)[in
<lex
(f).
Since f S

, in
<lex
(g) is not divisible by x
1
, . . . , x
nm
and thus g S

. Hence in
<lex
(f) in
<lex
(g) : g G S

,
so G S

is a Grobner basis for I S

.
The next question is: Given X = V (I), what is I(X)?
Hilberts Nullstellensatz. If k = k, then I(V (I)) =

I, where

I is the radical of I. (Denoted
r(I) in Commutative Algebra)
Proof. This proof will come later in the course.
10
4 Sylvester Matrix
Given f, g k[x], how can we decide if they have a common factor?
Denition. f = 5x
5
+ 6x
4
x
3
+ 2x
2
1 and g = 7x
5
+ 8x
3
3x
2
+ 1.
Or f = ax + b, g = cx + d. In this case we have that f, g has a common factor if and only if

a b
c d

= 0. Notice the analogy with Z, that is, n, m Z have a common factor when there is no a, b
such that an +bm = 1. This naturally leads to the next proposition.
Proposition 4.1. Let f =

l
i=0
a
i
x
i
and g =

m
j=0
b
j
x
j
be two polynomials in k[x]. Then the
following are equivalent.
1. f, g have a common root, i.e., there exists k such that f() = g() = 0
2. f, g have a non-constant common factor h
3. There does not exists A, B k[x] with Af +Bg = 1
4. f, g , = k[x]
5. There exists

A,

B k[x] with deg(

A) m1, deg(

B) l 1 and

Af +

Bg = 0.
Proof. 1 3: If f() = g() = 0 and Af + Bg = 1 then A()f() + B()g() = 1 0 + 0 = 1
which is a contradiction, hence no such A, B exists.
3 4: Suppose f, g = 1 = k[x], then 1 f, g so there exists A, B k[x] with Af +Bg = 1
4 2: If f, g , = k[x] then, since k[x] is a PID, the ideal f, g = h for some h k[x] non-
constant. So f, g h ,that is, f =

fh, g = gh and thus f, g have a non-constant common
factor.
2 5: We write f =

fh, g = gh and set

A = g and

B =

f. Then

Af +

Bg = 0 and

A,

Bsatisfy
the degree bound.
5 2: If

Af +

Bg = 0, then every irreducible factor of g divides

Af, since k[x] is a UFD. Since
deg(g) > deg(

A) at least one irreducible factor must divide f. Hence f and g have a


common factor.
2 1: If f, g have a non-constant common factor h, let be any root of h, then f() = g() = 0.
So f and g have a common root.
Part 5 is the key idea here. Given f =

a
i
x
i
and g =

b
j
x
j
with 0 i l and 0 j m, write

A =

m1
i=0
c
i
x
i
and

B =

l1
j=0
d
j
x
j
where c
i
, d
j
are undeterminate coecients.
0 = (c
m1
x
m1
+ +c
0
)(a
l
x
l
+ +a
0
) + (d
l1
x
l1
+ +d
0
)(b
m
x
m
+ +b
0
)
= (c
m1
a
l
+d
l1
b
m
)x
l+m1
+ (c
m1
a
l1
+c
m2
a
l
+d
l1
b
m1
+d
l2
b
m
)x
l+m2
+ + (c
0
a
0
+d
0
b
0
)
Thus all the coecients of x
j
are zero. Remember that a
i
and b
j
are given, so we have a set of linear
equations in the c and d variables. We can count that we have l + m variables and linear equations.
11
This gives the following matrix
_
_
_
_
_
_
_
_
_
_
_
_
_
_
a
l
0 . . . b
m
0 . . .
a
l1
a
l
. . . b
m1
b
m
. . .
a
l2
a
l1
.
.
. b
m2
b
m1
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
0
b
0
_
_
_
_
_
_
_
_
_
_
_
_
_
_
. .
m
. .
l
There exists non-zero

A,

B if the correct degree with

Af +

Bg = 0 if and only if the determinant of
this matrix is zero.
Denition 4.2. Let f =

l
i=0
a
i
x
i
and g =

m
j=0
b
j
x
j
be polynomials in k[x] with a
l
, b
m
,= 0. The
Sylvester matrix of f, g with respect to x is the (l +m) (l +m) matrix
Syl(f, g, x) =
_
_
_
_
_
_
_
_
_
_
_
_
_
_
a
l
0 . . . b
m
0 . . .
a
l1
a
l
. . . b
m1
b
m
. . .
a
l2
a
l1
.
.
. b
m2
b
m1
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
0
b
0
_
_
_
_
_
_
_
_
_
_
_
_
_
_
. .
m
. .
l
The determinant of Syl(f, g, x) is a polynomial in a
i
, b
i
with integer coecients. This is called the
resultant of f and g and is denoted Res(f, g, x).
Example. Let f = x
2
+ 3x +a and g = x +b. Then
Syl (f,g,x) =
_
_
1 1 0
3 b 1
a 0 b
_
_
so Res(f, g, x) = b
2
(3b a) = b
2
3b +a, so f and g have a common factor if and only if a = 3b b
2
.
Theorem 4.3. Fix f, g k[x], then f, g have a common factor if and only if Res(f, g, x) = 0
Proof. This is what the previous work has been about.
Example. f = x
2
+ 2x + 1, g = x
2
+ 3x + 2
Syl(f, g, x) =
_
_
_
_
1 0 1 0
2 1 3 1
1 2 2 3
0 1 0 2
_
_
_
_
We see that (r
3
r
1
) (r
2
r
1
) r
4
= 0, so Res(f, g, x) = 0. (In fact the common factor is x + 1)
f = ax
2
+bx +c, g = f

= 2ax +b
Syl(f, g, x) =
_
_
a 2a 0
b b 2a
c 0 b
_
_
So Res(f, g, x) = ab
2
2a(b
2
2ac) = ab
2
+ 4a
2
c = a(b
2
4ac)
12
Notice how in the second example we nearly ended up with the discriminant of a quadratic equa-
tions.
Denition 4.4. Let f =

l
i=0
a
i
x
i
. Then the discriminant of f, disc(f) =
(1)
l1
a
l
Res(f, f

, x)
Proposition 4.5. The polynomial disc(f) lies in Z[a
0
, . . . , a
l
]. The polynomial f has a multiple root
if and only if disc(f) = 0.
Proof. Note that the rst row of Syl(f, f

, x) is (a
l
, 0, . . . , 0, la
l
, 0, . . . , 0) so a
l
[Res(f, f

, x) and thus
disc(f) Z[a
0
, . . . , a
l
].
Since deg(f) = l and a
l
,= 0, we have disc(f) = 0 if and only if f and f

have a common root, so


we just need to check that this happens if and only if f has a multiple root. Fix a root of f and
write f = (x )
m
f where

f() ,= 0. Then f

= m(x )
m1
f + (x )
m
f

, so f

() = 0 if m > 1.
If m = 1 then f

() =

f() ,= 0. So is a root of f

if and only if is a multiple root of f.


Generalizations:
1. More variables:
Given f, g k[x
1
, . . . , x
n
], write f =

l
i=0
a
i
x
i
1
and g =

m
j=0
b
j
x
j
1
where a
i
, b
j
k[x
2
, . . . , x
n
]
and a
l
, b
m
,= 0. Then Res(f, g, x
1
) = det(Syl(f, g, x
1
)) k[x
2
, . . . , x
n
].
Note. We can think about f, g as polynomials in k(x
2
, . . . , x
n
)[x
1
] (elds of rational functions).
So this is a special case of the rst one. In particular, either Res(f, g, x
1
) = 0 or there exists
A, B k(x
1
, . . . , x
n
)[x
1
] with Af +Bg = 1.
Example.

A = ARes(f, g, x
1
),

B = BRes(f, g, x
1
) are polynomials in k[x
1
, x
2
, . . . , x
n
] so

Af +

Bg = Res(f, g, x
1
). A and B comes from solution to
Syl(f, g, x
1
)
_
_
_
_
_
_
_
_
_
_
c
m1
.
.
.
c
0
d
l1
.
.
.
d
0
_
_
_
_
_
_
_
_
_
_
=
_
_
_
_
_
_
_
0
.
.
.
1
_
_
_
_
_
_
_
Cramers rule states Ax = b, x
i
=
(1)|Ai|
|A|
where A
i
is A with i
th
column replaced by b. By
Cramers rule, the c
i
and d
j
have the form polynomial is x
2
, . . . x
n
/Res(f, g, x
1
). So ARes(f, g, x
1
)
is a polynomial in k[x
1
, . . . , x
n
]
As a corollary to all of this we have that Res(f, g, x
1
) f, g k[x
2
, . . . , x
n
]. This is a cheaper
way to do elimination/projection.
Proposition 4.6. Fix f, g k[x
1
, . . . , x
n
] for degrees l, m in x
1
respectively. If Res(f, g, x
1
)
k[x
2
, . . . , x
n
] is zero at (c
2
, . . . , c
n
) k
n1
then either a
l
(c
2
, , c
n
) = 0 or b
m
(c
2
, . . . , c
n
) = 0
or c
1
k such that f(c
1
, . . . , c
n
) = g(c
1
, . . . , c
n
) = 0.
Proof. Let f(x
1
, c) = f(x
1
, c
2
, . . . , c
n
) k[x
1
] and similarly let g(x
1
, c) k[x
1
]. If neither a
l
(c),
b
m
(c) = 0 then f(x
1
, c) had degree l and g(x
1
, c) has degree m. So Syl(f(x
1
, c), g(x
1
, c, ), x
1
)
is Syl(f, g, x
1
) with c
2
, . . . , c
n
substituted for x
2
, . . . , x
n
. Thus Res(f(x
1
, c), g(x
1
, c), x
1
) = 0, so
f(x
1
, c) and g(x
1
, c) have a common root c
i
k. Hence f(c
1
, c
2
, . . . , c
n
) = g(c
1
, c
2
, . . . , c
n
) =
0.
2. Resultants of several polynomials.
Given f
1
, . . . , f
s
k[x
1
, . . . , x
n
] we introduce new variables u
2
, . . . , u
s
and let g = u
2
f
2
+. . . u
s
f
s
.
Write Res(f
1
, g, x
1
) =

(x
2
, . . . , x
n
)u

with N
s1
. We call h

k[x
2
, . . . , x
n
] the
generalised resultant.
13
Example. Let f
1
= x
3
+ 3x + 2, f
2
= x + 1, f
3
= x + 5. Then g = u
2
(x + 1) +u
3
(x + 5) and
Syl(f, g, x
1
) =
_
_
1 u
2
+u
3
0
3 u
2
+ 5u
3
u
2
+u
3
2 0 u
2
+ 5u
3
_
_
so Res(f
1
, g, x
1
) = 4u
2
u
3
+ 12u
2
3
. Hence h
1,1
= 4 and h
0,2
= 12.
Lemma 4.7. The polynomial h

lies in f
1
, . . . , f
s
k[x
2
, . . . , x
n
]
Proof. Write Res(f
1
, g, x
1
) = Af
1
+ Bg for A, B k[u
2
, . . . , u
s
, x
1
, . . . , x
n
]. Write A =

and B =

for A

, B

k[x
1
, . . . , x
n
]. Then Res(f
1
, g, x
1
) =

(A

f
1
+

s
i=2
B
ei
f
i
)u

. So h

= A

f
1
+

B
ei
f
i
f
1
, . . . , f
s
. Furthermore h

k[x
2
, . . . , x
n
] by
construction.
4.1 Hilberts Nullstellensatz
Consider : A
n
A
m
projection onto the last mcoordinates. We saw (X) = V (I(X)k[x
nm+1
, . . . , x
n
]).
The question is what do we add then we take the closure? Given y (X) is y (X)?
Theorem 4.8 (Extension Theorem. ). Let k = k. Let X = V (I) A
n
and let : A
n
A
n1
be
projection onto the last n1 coordinates. Write I = f
1
, . . . , f
s
with f
i
= g
i
(x
2
, . . . , x
n
)x
Ni
1
+l.o.t. in x
i
.
Let (c
2
, . . . , c
n
) V (I k[x
2
, . . . , x
n
]). If (c
2
, . . . , c
n
) / V (g
1
, . . . , g
s
) A
n1
then c
1
k with
(c
1
, . . . , c
n
) X.
Example. X = V (xy 1), f
1
= xy 1, g
1
= x. Then the theorem say if c
1
V (0) = A
1
and
c
1
/ V (x) then there exists c
2
with (c
1
, c
2
) V (xy 1). Note that V (0) comes from xy 1 k[x].
Note. I I(X) so I k[x
2
, . . . , x
n
] I(X) k[x
2
, . . . , x
n
] so V (I k[x
2
, . . . , x
n
]) (X). How useful
this is depends on the choice of the generators of I. The theorem talks about I, not I(X), so this
brings us closer to the Nullstellensatz.
Proof. s = 1: In this case f = g
1
(x
2
, . . . , x
n
)x
N
1
+ l.o.t.We have f k[x
2
, . . . , x
n
] = 0 , and
(c
2
, . . . , c
n
) V (f k[x
2
, . . . , x
n
]
Case 1. N ,= 0: g
1
(c
1
, . . . , c
n
) ,= 0, then f(x
1
, c
2
, . . . c
n
) is a polynomial of degree N in
x
1
so has a root c
1
in k.
Case 2. N = 0 then g
1
= f
1
, so if (c
2
, . . . , c
n
) V (f k[x
2
, . . . , x
n
]) = V (f) A
n1
s = 2: The (previous) proposition shows that if g
1
(c
1
, . . . , c
n
) ,= 0 and g
2
(c
2
, . . . , c
n
) ,= 0 then
the desired c
1
exists. Suppose (c
2
, . . . , c
n
) / V (g
1
, g
2
) then without loss of generality
g
1
(c
2
, . . . , c
n
) ,= 0. If g
2
(c
2
, . . . , c
n
) ,= 0 then c
1
exists. Otherwise replace f
2
by f
2
+x
N
1
f
1
for N 0. This does not change the ideal f
1
, f
2
and it does not change (c
2
, . . . , c
n
) /
V (g
1
, g
2
) = V (g
1
, g
1
). Then the proposition implies there exists c
1
with f
1
(c
1
, . . . , c
n
) =
f
2
(c
1
, . . . c
n
) = 0.
s 3: Also assume g
1
(c
2
, . . . , c
n
) ,= 0. Replace f
2
by f
2
+x
N
1
f
1
for N 0 if necessary to guaran-
tee g
2
(c) ,= 0 and deg
x1
(f
2
) > deg
x1
(f
i
) for i > 2. Write Res(f,

s
i=2
u
i
f
i
, x
1
) =

.
Since h

Ik[x
2
, . . . , x
n
] we have h

(c
2
, . . . , c
n
) = 0 . Thus Res(f,

u
i
f
i
, x
1
)(c
2
, . . . , c
n
, u
1
, . . . , u
s
)
is the zero polynomial.
By construction the coecients of the maximal power of x
1
in f
1
and in

u
i
f
i
are g
1
and
g
1
u
1
, so are non-zero are (c
2
, . . . , c
n
). Thus 0 = Res(f,

u
i
f
i
, x
1
)(c
2
, . . . , c
n
, u
1
, . . . , u
s
) =
Res(f
1
(x
1
, c
2
, . . . , c
n
),

u
i
f
i
(x
1
, c
2
, . . . , c
n
), x
1
). Thus there exists F k(u
2
, . . . , u
s
)[x
1
]
with deg
x1
F > 0, F[f
1
(x
1
, c
2
, . . . , c
n
). Write F =

F/g where

F = k[u
2
, . . . , u
s
, x
1
],
g k[u
2
, . . . , u
s
]. Then

F divides f
1
(x
1
, c
2
, . . . , c
n
)g(u
2
, . . . , u
s
). Let F

be an irredu-
cible factor of

F with positive degree in x
1
. Then F

[f
1
(x
1
, c
2
, . . . , c
n
). Thus it does not
contain any u
i
. So F

u
i
f
i
(x
i
, c
2
, . . . , c
n
) but F

k[x
1
] thus F

[f
i
(x
1
, c
2
, . . . , c
n
) for
2 i n. Then F

[f
i
(x
1
, c
2
, . . . , c
n
) for all 1 i s. Then choose a root c
1
of F

. Then
F

(c
1
) = 0 so f
i
(c
1
, . . . , c
n
) = 0 so (c
1
, . . . , c
n
) X
14
Weak Nullstellensatz. Let k = k. Suppose I k[x
1
, . . . , x
n
] satises V (I) = , then I = 1 =
k[x
1
, . . . , x
n
].
Proof. We use an induction proof on n.
n = 1: I = f k[x
1
]. If f / k there exists k with f() = 0 so V (I) ,= . Thus if V (I) = ,
I = f = 1.
n > 1: Let I = f
1
, . . . , f
s
and suppose V (I) = . We may assume deg(f
i
) > 0 for all i. Let the
degree of f
1
be N. Consider the morphism : A
n
A
n
given by

: k[x
1
, . . . , x
m
]
k[z
1
, . . . , z
n
] with

(x
i
) = z
i
+a
i
z
1
with a
1
= 0 and a
i
K for i > 1.
Note:

is an isomorphism, since the matrix is


_
_
_
_
_
_
_
1 0 0 . . . 0
a
2
1 0 . . . 0
a
3
0 1 . . . 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
n
0 0 . . . 1
_
_
_
_
_
_
_
is invertible. (

)
1
(z
i
) = x
i
a
i
x
1
. This means that 1 I if and only if 1

(I), and

1
(X) = V (

(I)) = . (This is because V (

(I)) = y :

f(y) = 0 f I = y :
f (y) = 0 f I = y : (y) V (I). )
Let f
1
=

c
u
x
u
. Note that

(f
1
) = c(a
2
, . . . , a
n
)z
N
1
+ l.o.t in z
1
where c(a
2
, . . . , a
n
) is
the non-zero polynomial in a
2
, . . . , a
n
, i.e., c(a
2
, . . . , a
n
) =

|u|=N
c
u

a
ui
i
. Thus we can
choose (a
2
, . . . , a
n
) k
n1
with c(a
2
, . . . , a
n
) ,= 0. (Exercise: this holds because the eld
is innite)
Then g
1
k, so V (g
1
, . . . , g
s
) = for

1
(f
i
) = g
i
z
Ni
1
+ l.o.t. Let J =

(I) k[z
1
, . . . , z
n
],
then by the extension theorem, if (c
2
, . . . , c
n
) V (J) then there exists c
1
k with
(c
1
, . . . , c
n
) V (

(I)). Thus V (J) = and by induction J = 1, so 1

(I) and
so 1 I.
Note that is 1 I, we can write 1 =

A
i
f
i
with A
i
k[x
1
, . . . , x
n
].
Nullstellensatz. Let k = k. Then I(V (I)) =

I.
Proof. Let f
m
I, then f
m
(x) = 0 x V (I) so f(x) = 0 for all x V (I). Hence f I(V (I)), thus

I I(V (I))
For the reverse inclusion, suppose f I(V (I)) and let I = f
1
, . . . , f
s
and

I = f
1
, . . . , f
s
, 1 yf
k[x
1
, . . . , x
n
, y]. Now that V (

I) = since if f
1
(x
1
, . . . , x
n
) = = f
s
(x
1
, . . . , x
n
) = 0 then f(x
1
, . . . , x
n
) =
0 so 1 yf(x
1
, . . . , x
n
) ,= 0 y. So by the Weak Nullstellensatz we have that 1

I. So there ex-
ists p
1
, . . . , p
s
, q k[x
1
, . . . , x
n
, y] with 1 =

p
i
f
i
+ q(1 yf). Regard this as an expression in
k(x
1
, . . . , x
n
, y) and substitute y =
1
f
, then 1 =

p
i
(x
1
, . . . , x
n
1
f
)f
i
. Choose m > 0 for which
p
i
(x
1
, . . . , x
n
1
f
)f
m
k[x
1
, . . . , x
n
] then f
m
=

(p
i
(x
1
, . . . , x
n
1
f
)f
m
)f
i
, hence f
m
I and thus
f

I.
15
5 Irreducible Components
(There is some cross-over with commutative algebra here, revise both!)
Denition 5.1. A variety X A
n
is reducible if X = X
1
X
2
with X
1
, X
2
non-empty varieties in
A
n
and X
1
, X
2
_ X
X is irreducible if it is not reducible.
Example. X = V (x, y) A
2
then X = V (x) V (y).
V (x
2
+y
2
1) A
2
is irreducible but it is not trivial to prove. We will prove this later.
X A
1
is reducible if and only if 1 < [X[ <
X = V (f) is a hypersurface in A
n
. Let f = cf
1
1
. . . f
r
r
where c k and f
i
are distinct irreducible
polynomials. Then V (f) = V (f
1
) V (f
r
). Claim: If r > 1 then V (f) is reducible. We just
need too show that V (f
i
) ,= , X for all i. Now V (f
i
) ,= since 1 / f
i
. If V (f
i
) = X
then V (f
j
) V (f
i
) for some j ,= i. Hence f
i
I(V (f
j
)) =
_
f
j
= f
j
(exercise). So
f
j
[f
i
which contradicts f
i
, f
j
being distinct irreducible. Actually V (f
i
) are all irreducible, so
X = V (f
1
) V (f
r
) is a decomposition into irreducible.
Theorem 5.2. Let X A
n
be a variety. Then X = X
1
X
r
, where each X
r
is irreducible. This
representation is unique up to permutation provided it is irredundant (i.e., X
i
_ X
j
for any i ,= j)
Proof. For this theorem, we use the fact that k[x
1
, . . . , x
n
] is Noetherian, in particular, that there is
no innite ascending chain I
1
_ I
2
_ I
3
_ . . . of ideals in k[x
1
, . . . , x
n
].
Existence: If X is irreducible then we are done. Otherwise write X = X
1
X
2
where X
1
, X
2
are
proper subvarieties. Again if both are irreducible then we are done. Otherwise we can write X
1
=
X
11
X
12
and X
2
= X
21
X
22
where X
ij
are proper subvarieties of X
i
. Iterate this process. We claim
that this process terminates with X = X
j
(Finite union). If not we have an innite descending chain
X _ X
1
_ X
11
_ X
111
_ . . . . This gives a reverse containment I(X) _ I(X
1
) _ I(X
11
) _ . . . . This
chain must stabilize, so I(X
111...11
) = I(X
111...11111
) but V (I(X
11...111
) = X
11...11
which contradicts
the proper inclusion of varieties. Since V (I(V (I))) = V (I). Hence the decomposition process must
terminates.
Uniqueness: Suppose X = X
1
X
2
X
r
= X

1
X

s
are two irredundant irreducible
decompositions. Consider
X (X

i
) = X

i
= (X
1
X
r
) X

i
= (X
1
(X

i
)) (X
r
(X

i
))
Since X

i
is irreducible, there must be j with X
j
(X

i
) = X

i
, so X

i
X
j
. The same argument shows
that there is k with X
j
X

k
, so we have X

i
X
j
X

k
. Since the decomposition is irredundant,
X

i
= X

k
= X
k
. This construct a bijection between X
j
and X

i
, hence r = s and the decomposition
is unique up to permutation.
Note: This was a topological proof. A topological space with no innite descending chain of closed
set is called Noetherian (note how this is the opposite condition to Noetherian ring). Noetherian
topological spaces have irreducible decompositions.
Theorem 5.3. Let X A
n
be a variety. The following are equivalent:
1. X is irreducible
2. The coordinate ring k[X] is a domain
3. I(X) k[x
1
, . . . , x
n
] is prime.
16
Proof. 2 3: Recall k[X] = k[x
1
, . . . , x
n
]/I(X). So if f, g k[X] satisfy fg = 0 then there are
lifts

f, g k[x
1
, . . . , x
n
] such that

f, g / I(X) and

f g I(X). Same argument works the
other way round.
1 3: Suppose I(X) is not prime, that is, there exists f, g / I(X) with fg I(X). Let X
1
=
V (f) X and X
2
= V (g) X. Since f, g / I(X) then X
1
, X
2
_ X. However X
1
X
2
=
(V (f) X) (V (g) X) = ((V (f) V (g)) X = V (fg) X = X since fg I(X). So X
is reducible.
3 1: Suppose X = X
1
X
2
with X
1
and X
2
proper subvarieties. Then I(X) _ I(X
1
), I(X) _
I(X
2
) (To see this take V (_) of both side then V (I(X
i
)) = X
i
). So we may choose f
I(X
1
)I(X) and g I(X
2
)I(X). Now fg I(X
1
)I(X
2
), so V (I(X
1
)I(X
2
)) V (fg).
But V (I(X
1
) I(X
2
)) = V (I(X
1
)) V (I(X
2
)) = X
1
X
2
= X. So fg I(X) so I(X) is
not prime.
Remark. Some text reserve the word variety for irreducible varieties and call what we call varieties
algebraic sets.
Warning: If X = V (I) is irreducible, this does not imply that I is prime, just that I(X) is. This
about I =

x
2
, xy
2
_
k[x, y].
Theorem 5.4. Let k = k (this condition is unnecessary as there exists a commutative algebra proof
which show this theorem holds for k ,= k. See Commutative Algebra notes, this is the whole theory of
Primary Decomposition). Let I =

I (a radical ideal) in k[x


1
, . . . , x
n
], then I = P
1
P
r
where
each P
i
is prime. This decomposition is unique up to order if irredundant.
Proof. Let X = V (I) and let X = X
1
X
r
be an irredundant irreducible decomposition. Let
P
i
= I(X
i
) which is prime by the previous theorem and let P = P
i
. Then V (P) = V (P
i
) = X
i
=
X. So

P = I(X) = I. If f
m
P for some m > 0 then f
m
P
i
for all i, so f P
i
for all i. Hence
f P
i
= P and thus

P = P. So I = P
i
Uniqueness follows from the uniqueness of primary decomposition.
The next question to come up is how can we determine the P
i
, that is the prime decomposition of
radical ideals.
Denition 5.5. Let I, J be ideals. Then the colon (or quotient) ideal is (I : J) = f k[x
1
, . . . , x
n
] :
fg = I g J I.
Example. Let I =

x
2
, xy
2
_
and J = x. Then (I : J) = f : fg

x
2
, xy
2
_
g x = f : fx

x
2
, xy
2
_
=

x, y
2
_
Theorem 5.6. Let I =

I and let I = P
i
be an irredundant primary decomposition. Then the P
i
are precisely the prime ideals of the form (I : f) for f k[x
1
, . . . , x
n
].
Proof. Notice: (I : f) = (P
i
: f) = (P
i
: f). Now for any prime P we have (P : f) =
_
1 f P
P f / P
.
So (I : f) =
f / Pi
P
i
. Fix P
i
, since P
j
_ P
i
for any j ,= i, we can nd f
j
P
j
P
i
. Let f =

i=j
f
j
then f
j=i
P
j
P
i
. So (I : f) = P
i
.
Conversely if (I : f) = P is prime for some f, then P =
f / Pi
P
i
(as P
i
=

P
i
so P = P
i
for some
i. In more details P P
i
for all i. If P _ P
i
for all i then we can nd f
i
P
i
P, so f =

f
i
P
i
P
which is a contradiction. So P = P
i
for some i)
Example. Let I = xy, xz, yx, then V (I) = union of x, y, z axes = V (x, y) V (x, z) V (y, z). So
I = x, y x, z y, x. We want to see the theorem in action, so notice that (I : z) = x, y ,(I :
y) = x, z and (I : x) = y, z. Warning: (I : x +y) = z, xy (not as obvious.)
Let I =

x
3
xy
2
x
_
. Then (I : x
2
+y
2
1) = x and (I : x) =

x
2
+y
2
1
_
.
Note. If X, Y are varieties in A
n
then (I(X) : I(Y )) = I(XY ). To see this: x x XY , since x / Y
there is g I(Y ) with g(x) ,= 0. So if f (I(X) : I(Y )) then f(x)g(x) = 0, so f(x) = 0 and thus
f I(XY) . Conversely if f I(XY ) and g I(Y ) then fg I(X) so f (I(X) : I(Y )). Hence
(XY ) = V (I(X) : I(Y )).
17
5.1 Rational maps
How can we decide if X is irreducible? This is hard in general! We use the following trick. If : Y X
is surjective and X = X
1
X
2
then Y =
1
(X
1
)
1
(X
2
). Now both sides are closed and proper if
both X
1
and X
2
are. So if X is reducible then so is Y . Or if Y is irreducible then so is X.
Denition 5.7. A morphism : X Y of ane varieties is dominant if (X) = Y
Example. Take : V (xy 1) A
1
dened by (x, y) x. This is not surjective but it is dominant.
Proposition 5.8. A morphism : X Y is dominant if and only if

: k[Y ] k[X] is injective.


Example. k[x] k[x, y]/ xy 1 , x x (linked to the previous exampled) is injective.
Proof. A morphism : X Y induces a homomorphism

: k[Y ] k[X]. Now (X) Z _ Y (Z


a variety) if and only if there exists g I(X)I(Y ) with g((x)) = 0 x X, so

(g(x)) = 0 x X.
Hence

g I(X) and thus the image of g in k[Y ] is non-zero but is mapped to zero by

so

is
not injective.
Proposition 5.9. If : X Y is dominant and X is irreducible then so is Y .
Proof. Since is dominant, the map

: k[Y ] k[X] is injective. Since X is irreducible, we have


k[X] is a domain, and thus so is k[Y ]. Hence Y is also irreducible.
Denition 5.10. A rational map : A
n
A
m
is dened by (x
1
, . . . , x
n
) = (
1
(x
1
, . . . , x
n
), . . . ,
m
(x
1
, . . . x
n
))
with
i
k(x
1
, . . . , x
n
) (the eld of rational functions)
Example. : A
1
A
1
dened by (x) =
1
x
.
Warning: is not necessarily a function dened on all of A
n
. Write
i
=
fi
gi
for f
i
, g
i
k[x
1
, . . . , x
n
]
and let U = x A
n
: g
i
(x) ,=. Then : U A
m
is well dened. Notice that U is an open set
(U = A
n
V (

g
i
)). In the above example is dened on U = x A
1
: x ,= 0.
Note. A rational map induces a k-algebra homomorphism

: k[z
1
, . . . , z
m
] k(x
1
, . . . , x
n
) dened
by z
i

i
. Conversely any such k-algebra homomorphism determines a rational map.
Example. Let : A
1
A
2
be the inverse stereographic projection, that is, dened by (t) =
(
t
2
1
t
2
+1
,
2t
t
2
+1
). It is a rational map from A
1
to V (x
2
+ y
2
1). It is dened on A
1
i and the image
V (x
2
+y
2
1)(1, 0).
Denition 5.11. Let Y A
m
, a rational map : A Y is a rational map : A
n
A
m
with

(I(Y )) = 0, so

: k[Y ] k(x
1
, . . . , x
n
).
Example. Let : A
1
V (x
2
+ y
2
1) be the inverse stereographic projection. Then

(x) =
t
2
1
t
2
+1
,

(y) =
2t
t
2
+1
. Hence

(x
2
+y
2
1) = (
t
4
2t
2
+1+4t
2
t
4
+2t
2
+1
1) = 0, so is indeed a rational map.
What about rational maps : X A
m
? We recall that a morphism X A
m
was an equivalence
class of morphism A
n
A
m
. But we have some problems: consider X = V (x) A
2
and : A
2
A
3
dened by (x, y) = (x
2
,
1
xy
, y
3
). Then is dene on U = (x, y) : x, y ,= 0, is not dened at (x, y)
for any (x, y) X. The solution to this is to allow rational maps that are dened on enough of X.
Denition 5.12. Let R be a commutative ring with identity. An element f R is a zero-divisor if
there exists g R with g ,= 0 such that fg = 0.
Denition 5.13. Let : A
n
A
m
be a rational map with
i
=
fi
gi
where f
i
and g
i
have no common
factors. Then is admissible on X if the image of each g
i
in k[X] is a non-zero divisor.
Example. : A
2
A
3
, (x, y) = (x
2
,
1
xy
, y
3
) is not admissible on V (x).
Let X = V (x
2
y
2
) A
2
and : A
2
A
1
dened by (x, y)
1
x+y
. Then is not admissible
on X as x + y is a zero divisor in k[x, y]/(x
2
y
2
). On the other hand : A
2
A
2
dened by
(x, y) = (
1
x
,
1
y
) is.
18
Denition 5.14. Let U be the set of non-zero divisor on a ring R. Note U ,= 0 since 1 U. The
total quotient ring (ring under obvious multiplication and addition) is as follow
Q(R) = R[U
1
] =

r
s
: r R, s U
r1
s1
=
r2
s2
if r
1
s
2
= r
2
s
1
(This like the localization in Commutative Algebra)
Example. R = Z, U = Z 0 then Q(R) = .
R = k[x
1
, . . . , x
n
] then Q(R) = k(x
1
, . . . , x
n
).
Denition 5.15. If X is a variety, the total quotient ring of k[X] is written k(X) and is called the
ring of rational functions of X.
Note. If R is a domain, U = R0, so Q(R) is the eld of fractions of R. So if X is irreducible, k(X)
is the eld of fractions of k[X].
Proposition 5.16. Let : A
n
A
m
be a rational map admissible on a ane variety X A
n
. Then

induces a k-algebra homomorphism

: k[z
1
, . . . , z
n
] k(X). Conversely each such homomorphism
arise from a rational map.
Proof. Write
i
= f
i
= g
i
with f
i
and g
i
not sharing any irreducible factors. By hypothesis each g
i
is
a non-zero divisor on k[X]. So
fi
gi
is a well dened element of k(X). Thus

: k[z
1
, . . . , z
m
] k(X)
given by

(z
i
) =
fi
gi
is well dened.
Conversely given

: k[z
1
, . . . , z
m
] k(X) write

(z
i
) =
fi
gi
for some f
i
, g
i
k[x
1
, . . . , x
n
] with
g
i
a non-zero divisor on k[X]. Then : A
n
A
m
dened by
i
(x) = f
i
(x)/g
i
(x) is admissible on
X.
Denition 5.17. Let X A
n
be an ane variety. Two rational maps , : A
n
A
m
admissible
on X are said to be equivalent on X if the induced homomorphism

: k[z
1
, . . . , z
m
] k(X) are
equal.
Example. Let X = V (x + y) A
2
. Let : A
2
A
2
be dened by (x, y) = (
3x
2y
2
,
2x
3x+5y
). This is
dened on U

= (x, y) : y
2
,= 0, 3x + 5y ,= 0. Let : A
2
A
2
be dened by (x, y) = (
3
2x
, 1).
This is dened on U

= (x, y) : x ,= 0. These are clearly not the same rational maps but we will
show that they are equivalent on X.

: k[z
1
, z
2
] k(x, y) is dened by

(z
1
) =
3x
2y
2
and

(z
2
) =
2x
3x+5y
. And

: k[z
1
, z
2
] k(x, y)
is dened by

(z
1
) =
3
2x
and

(z
2
) = 1. Now in k(X) = k[x, y]/(x +y) we have
3x
2y
2
=
3x
2x
2
=
3
2x
2x
3x + 5y
=
2x
2y
= 1
So

: k[z
1
, z
2
] k(X) are equal so , are equivalent on X.
(Check = on U

X)
Denition. Let X A
n
and Y A
m
be ane varieties. A rational map : X Y is an equivalence
class of rational maps : A
n
Y admissible on X.
Corollary 5.18. Let X A
n
and Y A
m
be ane varieties. Then there is a one to one correspond-
ence between rational maps X Y and k-algebra homomorphism k[Y ] k(X).
Denition 5.19. A rational map : X Y is dominant if

: k[Y ] k(X) is injective.


Example. Let : A
1
V (x
2
+y
2
1) dened by (t) = (
t
2
1
t
2
+1
,
2t
t
2
+1
). Then is dominant.
Lemma 5.20. If : X Y is dominant and X is irreducible, then so is Y
Proof. Since is dominant we have by denition

: k[Y ] k(X) is injective. Since X is irreducible,


k[X] is a domain, so k(X) is a eld. Hence k[Y ] is also a domain and thus Y is irreducible.
19
Corollary 5.21. V (x
2
+y
2
1) is irreducible.
Denition 5.22. Let Y A
n
be an ane variety. A rational parametrisation of Y is a rational map
: A
n
Y such that Y = im(), i.e., a dominant rational map : A
n
Y . Such Y are called
unirational.
Note. Unirational varieties are irreducible, by the lemma, and we have k(Y ) k(x
1
, . . . , x
n
).
Denition 5.23. A variety X is rational if it admits a rational parametrisation : A
n
X such
that the induced eld extension

: k(X) k(x
1
, . . . , x
n
) is an isomorphism.
Corollary 5.24. X is rational if and only if k(X)

= k(x
1
, . . . , x
n
)
Proof. If X is rational then k(X)

= k(x
1
, . . . , x
n
) by denition, so we just need to show the converse.
Suppose we have

: k(X) k(x
1
, . . . , x
n
). Then

[
k[X]
is injective, so denes a dominant rational
map : A
n
X. Hence X is rational.
Denition. Let X, Y be irreducible varieties. We say X, Y are birational if k(X)

= k(Y ) as k-algbera.
Proposition 5.25. If X, Y are irreducible varieties and k(X)

= k(Y ) then there exists dominant
rational maps X Y and Y X that are inverses.
Proof. If

: k(X)

=
k(Y ), then

[
k[X]
is injective, so the corresponding rational map : Y X
is dominant. Similarly
1
induces a dominant rational map
1
: X Y . By construction


1
= id [
Y
.
20
6 Projective Varieties.
Denition 6.1. Projective Space P
n
over a eld k is (k
n+1
0)/ where v v for k

= k0.
A point in P
n
correspond to a line through the origin in k
n+1
.
Notation. [x
0
: x
1
: : x
n
] is the equivalence class of (x
0
, x
1
, . . . , x
n
) k
n+1
.
Recall: A polynomial f =

c
u
x
u
is homogeneous if [u[ = d for all u with c
u
,= 0 for some d.
Denition 6.2. An ideal I k[x
0
, x
1
, . . . , x
n
] is homogeneous if I = f
1
, . . . , f
s
, where each f
i
is
homogeneous.
Example.

7x
2
0
+ 8x
1
x
2
+ 9x
2
1
, 3x
3
1
+x
3
2
_
is,

x +y
2
, y
2
_
=

x, y
2
_
is.
Denition 6.3. Let f k[x
0
, . . . , x
n
]. Then f =

f
i
where each f
i
is a homogeneous polynomial of
degree i. The f
i
are called the homogeneous components of f.
Example. Let I be a homogeneous ideal and let f I. Then each homogeneous component of f is in
I. Idea: we choose g
1
, . . . , g
s
homogeneous with I = g
1
, . . . , g
s
. Then we can write f =

c
ui
x
ui
f
i
,
where the f
i
could be repeated. Then f
i
=

j:deg(x
u
j
)+deg(gi)=i
c
ui
x
ui
g
i
I.
Denition 6.4. Let I be a homogeneous ideal in k[x
0
, x
1
, . . . , x
n
]. The projective variety dened by
I is V(I) = [x] P
n
: f(x) = 0 for all homogenous f I
Example. Let I = 2x
0
x
1
, 3x
0
x
2
. Then V(I) = [1 : 2 : 3] P
3
.
I =

x
0
x
2
x
2
1
_
. Then V(I) = [1 : t : t
2
] : t k [0 : 0 : 1]
I = x
0
, x
1
, x
2
k[x
0
, x
1
, x
2
]. V(I) = . Note that the weak Nullstellenzatz does not apply
here.
I =

x
0
x
3
x
1
x
2
, x
0
x
2
x
2
1
, x
1
x
3
x
2
2
_
k[x
0
, x
1
, x
2
, x
3
]. Then V(I) = ([1, t, t
2
, t
3
] : t
k [0 : 0 : 0 : 1]. The twisted cubic
Note. Points in V(I) correspond to lines through 0 in V (I) A
n+1
. V (I) is called the ane cover
over I.
Denition 6.5. The Zariski topology on P
n
has closed sets V(I) for I k[x
0
, . . . , x
n
].
6.1 Ane Charts
Denition 6.6. Let U
i
= [u] P
n
: x
i
,= 0. We can write x U
i
uniquely as [x
0
: : 1 : : x
n
]
(1 in i
th
position). U
i
bijection with A
n
. P
n
=
n
i=1
U
i
ane cover of P
n
. We can think of P
n
=
U
0
[x] : x
0
= 0. See that U
0
is a kind of like A
n
while the set is P
n1
sometime called hyperplane
at innity. Fix I homogeneous in k[x
0
, . . . , x
n
] and let X = V(I) P
n
. Let X U
i
= [x] P
n
:
f(x) = 0 f I = [x
0
: : 1 : . . . , x
n
] P
n
: f(x
0
, . . . , 1, . . . , x
n
) = 0 f I = V (I
i
) A
n
where
I
i
= f(x
0
, . . . , 1, . . . , x
n
) : f I = 1[
x1=1
.
X =
n
_
i=0
X U
i
.
Is a union of ane varieties. This is called an ane cover, let X U
i
are called ane charts.
Example. X = V(x
0
x
2
x
2
1
) P
n
. Then:
X U
0
= V (x
2
x
2
1
) = (t, t
2
) : t k.
X U
1
= V (x
0
x
2
1) = (t,
1
t
) : t k.
X U
2
= V (x
0
x
2
1
) = (t
2
, t) : t k
21
Actually X = (X U
0
) (X U
2
) in this case. We can think of X as created by gluing together
three ane varieties X U
0
, X U
1
, X U
2
. This is how abstract varieties are dened (not covered
in this module).
Given an ane variety X A
n
, we can embed it into P
n
by identifying A
n
with U
i
for some i
(normally i = 0).
Denition 6.7. The projective closure of X A
n
in P
n
is the Zariski closure of X U
i
P
n
in P
n
(Assume by default U
0
)
Example. X = V (x
2
x
2
1
) = (t, t
2
) : t k A
2
. The projective closure is the Zariski closure of
[1 : t : t
2
] : t k. This adds [0 : 0 : 1]
Question: Given X how can we compute the projective closure in P
n
?
Denition 6.8. Let f =

c
u
x
u
k[x
1
, . . . , x
n
]. The homogenization of f is

f =

u
x
u
x
d|u|
0
where
d = max [u[
Let I k[x
1
, . . . , x
n
] be an ideal. Its homogenization is

I =
_

f : f I
_
Warning: If I = f
1
, . . . , f
s
then we do not always have

I =
_

f
1
, . . . ,

f
s
_
. For example, consider
I = x
1
1, x
1
k[x
1
]. We have I = 1,

I = 1 , = x
1
x
0
, x
1
= x
1
, x
0

Proposition 6.9. Let k = k, I =

I k[x
1
, . . . , x
n
]. The projective closure of V (I) A
n
via the
identication A
n
= U
0
is V(

I) P
Proof. If x V (I), f(x) = 0 f I then

f(I, x) = 0

f

I. So [1 : x] V(

I). So the projective


closure of V(I) is contained in V(

I)
Conversely, suppose that f k[x
0
, . . . , x
n
] is homogeneous with f([1 : x]) = 0 x V (I). Then
g = f(1, x) I(V (I)) =

I = I. Then f = x
k
0
g for some k 0 so since g I,f

I and thus V(

I) is
contained in the projective closure of V (I).
Question: How can we compute

I? Answer: Let < be any term order with deg(X
u
) > deg(X
v
)
X
u
> X
v
(for example we can revlex). Let G = g
1
, . . . , g
s
be a Grobner basis for I with respect to
<. We claim that

I = g
1
, . . . , g
s
.
Proof of above claim. Extend <to a term order

<on k[x
0
, . . . , x
n
] by setting x
a
0
x
u

<x
b
0
x
v
if
_
x
u
< x
v
x
u
,= x
v
a < b x
u
= x
v
.
Note that in

<
(

f) = in
<
(f). Let F

I be a homogeneous polynomial in k[x
0
, . . . , x
n
]. Then
f =

A
i

f
i
for some f
i
I, A
i
l[x
0
, . . . , x
n
]. Write f(x
1
, . . . , x
n
) = F(1, x
1
, . . . , x
n
) k[x
1
, . . . , x
n
].
Then f =

A
i
(1, x
1
, . . . , x
n
)f
i
so f I. We know that F = x
k
0

f for some k 0 so in

<
(

f) =
x
k
0
in

<
(

f) = x
k
0
in
<
(f). Since Gis a Grobner basis for I with respect to <, we have that in
<
(f
j
)[x
k
0
in
<
(f)
for some g. Hence in

<
(f)

in

<
( g
1
), . . . , in

<
( g
s
)
_
. So g
1
, . . . , g
s
is a Grobner basis for

I, hence it
generates

I.
Proposition 6.10. Let k = k. V(I) = if and only if x
0
, . . . , x
n

I.
Proof. Let X = V (I) A
n+1
. Then V (I) = , implies either X = so 1 I or X = 0 so
x
0
, . . . , x
n

I.
Conversely if x
0
, . . . , x
n

I then V (I) 0, so V(I) = .


Denition 6.11. The ideal x
0
, . . . , x
n
is called the irrelevant ideal.
Let X P
n
. The ideal I(X) is I(X) = homogeneous f k[x
0
, . . . , x
n
] : f([x]) = 0 x X
Homogeneous coordinate ring of X P
n
is k[x
0
, . . . , x
n
]/I(X).
Theorem 6.12 (Projective Nullstellensatz). Let k = k. Let I be a homogeneous ideal in [x
0
, . . . , x
n
]
with x
0
, . . . , x
n
_

I. Then I(V(I)) =

I.
Proof. Let X = V(I) and let Y = V (I) A
n+1
be ane cover of X. Then I(V(I)) = f homogeneous :f([x]) =
0 [x] X = f : f(x) = 0 x Y = I(Y ) =

I
22
Denition 6.13. A projective variety X is reducible if X = X
1
X
2
with X
1
, X
2
_ X and X
1
, X
2
are subvarietes of X.
Exercise: If X is a non-empty irreducible variety then I(X) is prime.
6.2 Morphisms of projective varieties.
A rational map of degree d, : P
n
P
m
is given by ([x
0
: : x
n
]) = [
0
(x
0
, . . . , x
n
) : :

m
(x
0
, . . . , x
n
)] where
i
are homogeneous polynomials of degree d in k[x
0
, . . . , x
n
].
Example. ([s : t]) = [s
2
: st : t
2
] a rational map P
1
P
2
of degree 2. (actually a morphism)
([s : t]) = [s
3
: s
2
: st
2
] a rational map P
1
P
2
of degree 3. This is not dened at [s : t] = [0 : 1]
(not a morphism)
A rational map : P
n
P
m
is a morphism if is dened for all [x] P
n
.
Note. We dont need to use rational functions as we can clear denominators. The polynomials need to
be homogeneous of the same degree to make the map well dened, i.e., independent of representative
of [x] P
n
.
is a morphism if and only if V (
0
, . . . ,
m
) P
n
is empty if and only if x
0
, . . . , x
n

_

0
, . . . ,
m

Definition 6.14. A rational map φ = (φ_i) : P^n ⇢ P^m is linear if deg φ_i = 1 for all i.
In that case φ is determined by an (m + 1) × (n + 1) matrix A = (a_ij). Then φ is a morphism when rank A = n + 1 (or ker(A) = 0). In the case n = m, φ is a morphism if and only if A is invertible.
The set {φ : φ([x]) = [Ax] for an invertible (n + 1) × (n + 1) matrix A} =: Aut(P^n) forms a group under composition. Note that A_1 = (1 0; 0 1) and A_2 = (2 0; 0 2) define the same morphism. In fact Aut(P^n) = GL_{n+1}/k* =: PGL_{n+1}, where k* sits inside GL_{n+1} as the scalar matrices λI.
6.2.1 Veronese Embedding
Definition 6.15. The morphism φ : P^1 → P^d given by φ([x_0 : x_1]) = [x_0^d : x_0^{d-1}x_1 : · · · : x_1^d] is called the d-th Veronese embedding.
Y = im(φ) = V(y_i y_{j+1} − y_{i+1} y_j : 0 ≤ i, j ≤ d − 1). Y is called the rational normal curve of degree d.
Example. There are (n+d choose d) monomials of degree d in x_0, . . . , x_n. To see this, notice that any string of d stars and n bars corresponds uniquely to a monomial of degree d (the stars between consecutive bars record the exponents of x_0, x_1, . . . , x_n in turn). For example, with n = 2 and d = 7, the string ***|*|*** corresponds to x_0^3 x_1 x_2^3, while ||******* corresponds to x_2^7. So the number of monomials is the number of such strings.
The d-th Veronese embedding of P^n is φ : P^n → P^{(n+d choose d) − 1} defined by [x_0 : · · · : x_n] ↦ [x_0^d : x_0^{d-1}x_1 : · · · : x_n^d] (all monomials of degree d). The image of φ is V(z_α z_β − z_γ z_δ : α + β = γ + δ), where the z_α are coordinates on k[z_α : α ∈ N^{n+1}, Σ α_i = d]. This generalizes the coordinates y_i = z_{(d−i, i)} from the case n = 1. We prove all of this in the following proposition.
Proposition 6.16. im(φ) is closed and equals V(z_α z_β − z_γ z_δ : α + β = γ + δ), where the z_α are coordinates on k[z_α : α ∈ N^{n+1}, Σ α_i = d].
Proof. Let Z = V(z_α z_β − z_γ z_δ : α + β = γ + δ). If z = φ(x) then z_α z_β − z_γ z_δ = x^{α+β} − x^{γ+δ} = 0 whenever α + β = γ + δ, so im(φ) ⊆ Z.
Conversely, we consider z ∈ Z and want to find [x] ∈ P^n with φ([x]) = [z]. We first show there is an i with z_{de_i} ≠ 0. To see this, suppose z_α ≠ 0 for some α (there must be some such α). Without loss of generality α_0 > 0. If α_0 < d/2 we write 2α = (2α_0 e_0 + β) + γ, where β_0 = γ_0 = 0 and Σ(2α_0 e_0 + β)_i = Σ γ_i = d. (For example, if d = 5 and α = (2, 2, 1), then 2α = (4, 4, 2) = (4, 1, 0) + (0, 3, 2).) Then z_α^2 = z_{2α_0 e_0 + β} z_γ (here we use the defining relations with α + α = (2α_0 e_0 + β) + γ). So z_α ≠ 0 implies that z_{2α_0 e_0 + β} ≠ 0. So after repeated applications, we may assume α_0 ≥ d/2. Then we write 2α = de_0 + (2α − de_0), so z_α^2 = z_{de_0} z_{2α − de_0}, and thus z_{de_0} ≠ 0. Now set x_i = z_{(d−1)e_0 + e_i}/z_{de_0} for 1 ≤ i ≤ n and x_0 = 1. Set [z'] = φ([x]).
We now show that [z'] = [z]. Since z'_{de_0} = x_0^d = 1, it is enough to show z_{de_0} z'_α = z_α for all α. We do this by induction on Σ_{i=1}^n α_i; the base case α = de_0 holds by construction. For the general case pick i ≥ 1 with α_i > 0. Since α + de_0 = (α − e_i + e_0) + ((d−1)e_0 + e_i), on Z we have z_α z_{de_0} = z_{α−e_i+e_0} z_{(d−1)e_0+e_i}; note that the same relation holds for z'. Hence
z_{de_0} z'_α = z_{de_0} z'_{α−e_i+e_0} x_i = z_{α−e_i+e_0} z_{(d−1)e_0+e_i}/z_{de_0} = z_α z_{de_0}/z_{de_0} = z_α,
using the induction hypothesis and the definition of x_i. So [z'] = [z], and Z ⊆ im(φ).
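The two halves of this proof can be traced on a small case. The following sketch in Python/sympy (our own choice of example, the degree-3 Veronese of P^1) checks that the quadratic relations vanish on the image and recovers the point as in the proof.

from sympy import symbols, simplify

x0, x1 = symbols('x0 x1')

# Degree-3 Veronese of P^1: z_alpha = x^alpha for alpha with alpha_0 + alpha_1 = 3.
z = {(3, 0): x0**3, (2, 1): x0**2*x1, (1, 2): x0*x1**2, (0, 3): x1**3}

# The relations z_a*z_b - z_c*z_d with a + b = c + d vanish on the image.
rels = [z[(3, 0)]*z[(1, 2)] - z[(2, 1)]**2,
        z[(2, 1)]*z[(0, 3)] - z[(1, 2)]**2,
        z[(3, 0)]*z[(0, 3)] - z[(2, 1)]*z[(1, 2)]]
assert all(simplify(r) == 0 for r in rels)

# Recovering x from z as in the proof, on the chart z_{d e_0} != 0:
# x_0 = 1 and x_1 = z_{(d-1)e_0 + e_1} / z_{d e_0}.
print(simplify(z[(2, 1)] / z[(3, 0)]))   # x1/x0, the original point up to scaling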
6.2.2 Segre Embedding
The Segre embedding realizes P^n × P^m as a subvariety of P^{(n+1)(m+1)−1}. We map ([x], [y]) ∈ P^n × P^m to φ([x], [y]) = [x_i y_j : 0 ≤ i ≤ n, 0 ≤ j ≤ m].
Proposition 6.17. im(φ) = V(z_ij z_kl − z_il z_kj : 0 ≤ i, k ≤ n, 0 ≤ j, l ≤ m), where the z_ij are coordinates on k[z_ij : 0 ≤ i ≤ n, 0 ≤ j ≤ m]. (Notice that these are the 2 × 2 minors of a generic (n+1) × (m+1) matrix (z_ij).)
Proof. Let Y = V(z_ij z_kl − z_il z_kj : 0 ≤ i, k ≤ n, 0 ≤ j, l ≤ m). If z = φ([x], [y]) then z_ij z_kl − z_il z_kj = x_i y_j x_k y_l − x_i y_l x_k y_j = 0, so im(φ) ⊆ Y.
Conversely, given [z] ∈ Y, without loss of generality we may assume z_00 ≠ 0. Set x_i = z_i0/z_00 and y_j = z_0j/z_00, with x_0 = y_0 = 1, and let z' = φ([x], [y]). Then the relation z_ij z_00 = z_i0 z_0j on Y implies that [z'] = [z].
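The same check and reconstruction can be carried out mechanically; a sketch in Python/sympy (the case P^1 × P^2 is our illustration):

from sympy import symbols, Matrix, simplify

x0, x1, y0, y1, y2 = symbols('x0 x1 y0 y1 y2')

# Segre embedding of P^1 x P^2: z_ij = x_i*y_j, arranged as a 2 x 3 matrix.
X = [x0, x1]
Y = [y0, y1, y2]
Z = Matrix(2, 3, lambda i, j: X[i]*Y[j])

# All 2 x 2 minors z_ij*z_kl - z_il*z_kj vanish on the image.
minors = [Z[i, j]*Z[k, l] - Z[i, l]*Z[k, j]
          for i in range(2) for k in range(2)
          for j in range(3) for l in range(3)]
assert all(simplify(m) == 0 for m in minors)

# Recovering ([x], [y]) from [z] on the chart z_00 != 0, as in the proof.
print([1, simplify(Z[1, 0]/Z[0, 0])],
      [1, simplify(Z[0, 1]/Z[0, 0]), simplify(Z[0, 2]/Z[0, 0])])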
6.2.3 Grassmannian
The Grassmannian G(d, n) parametrizes all d-dimensional subspaces of k^n.
Example. G(1, n) = P^{n−1} (a one dimensional subspace is a line through 0).
G(n − 1, n) ≅ P^{n−1}.
We'll describe G(d, n) as a projective variety by the Plücker embedding G(d, n) → P^{(n choose d) − 1}. Let V ⊆ k^n be a d-dimensional subspace. Choose a basis v_1, . . . , v_d for V and write A_V for the d × n matrix with rows the v_i. Map V to the vector φ(V) of d × d minors of A_V in P^{(n choose d) − 1}, for example φ : V ↦ (1 : 3 : 4 : 1 : 2 : 2).
We name the coordinates on P^{(n choose d) − 1} x_I, where I ⊆ {1, . . . , n} and |I| = d. Here I indexes the columns of the d × d submatrix of A_V whose determinant is φ(V)_I.
Note. 1. φ(V) is not the zero vector, since rank A_V = d, so A_V has a non-vanishing minor of size d.
2. If we choose a different basis v'_1, . . . , v'_d for V, then A'_V = U A_V where U is a d × d invertible matrix (in fact the change of basis matrix). So the I-th minor of A'_V is det(U) times the I-th minor of A_V. So A_V and A'_V give the same point in P^{(n choose d) − 1}. This means the map φ : V ↦ φ(V) ∈ P^{(n choose d) − 1} is well defined.
3. We can recover V from φ(V).
Example. If φ(V) = [1 : 0 : 0 : 0 : 0 : 0] then V = span{(1, 0, 0, 0), (0, 1, 0, 0)}. Since φ(V)_{12} = 1 we can assume A_V = ( 1 0 0 0 / 0 1 0 0 ): row reduce so the first two columns form the identity; the vanishing of the remaining Plücker coordinates then forces the last two columns to be zero.
In general, let I be an index with φ(V)_I ≠ 0 (this exists by 1). Let B be the d × d submatrix of A_V indexed by I. Then det(B) ≠ 0, so A'_V = B^{-1} A_V has an identity matrix in the columns indexed by I. But then for j ∉ I the entries of the remaining columns of A'_V are, up to sign, the Plücker coordinates φ(V)_{(I\{i})∪{j}} (scaled so that φ(V)_I = 1), so V can be recovered from φ(V).
Question: What does im(φ) look like?
Example. G(2, 4): assume that φ(V)_{12} ≠ 0, so that we can take A_V = ( 1 0 a b / 0 1 c d ). Then φ(V) = [1 : c : d : −a : −b : ad − bc] in the coordinate order [x_12 : x_13 : x_14 : x_23 : x_24 : x_34]. Note φ(V) ∈ V(x_12 x_34 − x_13 x_24 + x_14 x_23). The equation x_12 x_34 − x_13 x_24 + x_14 x_23 is invariant (up to sign) under the S_4 action on the labels, where we set x_21 = −x_12. This says that φ(V) ∈ V(x_12 x_34 − x_13 x_24 + x_14 x_23) for any V. Alternatively, we could check the other row reduced forms.
Conversely, if [z] ∈ V(x_12 x_34 − x_13 x_24 + x_14 x_23) with z_12 ≠ 0, then [z] = φ(V) for V the row span of
( 1 0 −z_23/z_12 −z_24/z_12 / 0 1 z_13/z_12 z_14/z_12 ).
The polynomial x_12 x_34 − x_13 x_24 + x_14 x_23 is called a Plücker relation.
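That this relation holds identically on the image can be checked symbolically; a sketch in Python/sympy (the generic 2 x 4 matrix and the helper p are our illustration):

from sympy import symbols, Matrix, expand

a, b, c, d, e, f, g, h = symbols('a b c d e f g h')

# A generic 2 x 4 matrix; its 2 x 2 minors are the Pluecker coordinates of its row span.
A = Matrix([[a, b, c, d],
            [e, f, g, h]])

def p(i, j):
    # Pluecker coordinate p_ij = determinant of columns i and j (1-indexed).
    return A.extract([0, 1], [i - 1, j - 1]).det()

# The Pluecker relation x_12 x_34 - x_13 x_24 + x_14 x_23 vanishes identically.
rel = p(1, 2)*p(3, 4) - p(1, 3)*p(2, 4) + p(1, 4)*p(2, 3)
assert expand(rel) == 0
print("Pluecker relation verified for a generic 2 x 4 matrix")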
For the embedding G(d, n) → P^{(n choose d) − 1} we get a Plücker relation P_{J_1,J_2} for all J_1, J_2 ⊆ {1, . . . , n} with |J_1| = d − 1, |J_2| = d + 1:
P_{J_1,J_2} = Σ_{j ∈ J_2} (−1)^{sgn(j, J_1)} x_{J_1 ∪ j} x_{J_2 \ j},
where x_{J_1 ∪ j} = 0 if j ∈ J_1, and sgn(j, J_1) = #{i ∈ J_1 : i > j} + #{i ∈ J_2 : i < j}.
Example. n = 4, d = 2, J_1 = {1} and J_2 = {2, 3, 4}. Then P_{J_1,J_2} = x_12 x_34 − x_13 x_24 + x_14 x_23.
Definition 6.18. Let I_{d,n} = ⟨P_{J_1,J_2} : J_1, J_2 ⊆ {1, . . . , n}, |J_1| = d − 1, |J_2| = d + 1⟩ ⊆ k[x_I : |I| = d].
Theorem 6.19. G(d, n) = im(φ) = V(I_{d,n}).
Proof. Assignment sheet.
Question: What are the affine charts for G(d, n)?
Answer: G(d, n) ∩ U_I consists of the V which look like A_V = (I_d | Ã), where I_d occupies the columns indexed by I and Ã fills the other columns (note that Ã is an arbitrary d × (n − d) matrix). So G(d, n) ∩ U_I ≅ A^{d(n−d)}.
Check: G(2, 4) ∩ U_{12} = V(x_12 x_34 − x_13 x_24 + x_14 x_23) ∩ A^5. This is isomorphic to A^4 since, setting x_12 = 1 on this chart,
k[x_13, x_14, x_23, x_24, x_34]/(x_34 − x_13 x_24 + x_14 x_23) ≅ k[x_13, x_14, x_23, x_24] = k[A^4].
We can think of G(d, n) as (n choose d) copies of A^{d(n−d)} glued together. This works over any field; e.g., the real Grassmannian is a manifold of dimension d(n − d). Similarly for C.
7 Dimension and Hilbert Polynomial
Definition 7.1. A ring R is Z-graded if there is a decomposition (as groups) R ≅ ⊕_{i∈Z} R_i with R_i R_j ⊆ R_{i+j}. The R_i are called the graded pieces, and f ∈ R_i is homogeneous of degree i.
A graded k-algebra is a Z-graded ring R that is a k-algebra with cf ∈ R_i for all f ∈ R_i and c ∈ k (so each R_i is a k-vector space). Then k ⊆ R_0 (normally in our examples R_0 = k).
Example. R = k[x_0, . . . , x_n]; R_i is spanned by the monomials of degree i.
S = k[x_0, . . . , x_n], I a homogeneous ideal, R = S/I; then R_i = S_i/I_i.
X ⊆ P^n a projective variety; R(X) := k[x_0, . . . , x_n]/I(X) is the projective coordinate ring of X.
Definition 7.2. Let R be a graded k-algebra with dim_k(R_i) < ∞ for all i. The Hilbert function of R is H_R(d) = dim_k R_d (note H_R : Z → N).
Example. Let R = k[x_0, . . . , x_n]. Then H_R(d) = (n+d choose d) = (n+d choose n), since a basis for R_d is the set of monomials of degree d.
Let S = k[x_0, x_1, x_2] and f homogeneous of degree 3. Let R = S/⟨f⟩.
  i      dim_k(S/⟨f⟩)_i = dim_k S_i − dim_k ⟨f⟩_i
  < 0    0
  0      1
  1      3
  2      6
  3      (2+3 choose 2) − 1 = 9
  4      (2+4 choose 2) − 3 = 12
In general we have the following graded short exact sequence:
0 → S → S → S/⟨f⟩ → 0,
where the first map is g ↦ gf (raising degree by 3) and the second is g ↦ ḡ. So dim_k(S/⟨f⟩)_d = dim_k S_d − dim_k S_{d−3}, where S_{d−3} = 0 for d < 3.
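The table above is exactly this formula; a one-line numerical check (Python, our own helper name):

from math import comb

def dim_S(d, nvars=3):
    # dim_k of the degree-d piece of k[x_0, x_1, x_2]; zero in negative degrees.
    return comb(d + nvars - 1, nvars - 1) if d >= 0 else 0

# For f homogeneous of degree 3: dim (S/<f>)_d = dim S_d - dim S_{d-3}.
print([dim_S(d) - dim_S(d - 3) for d in range(6)])   # [1, 3, 6, 9, 12, 15]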
How can we compute H_R?
Proposition 7.3. Let I ⊆ S = k[x_1, . . . , x_n] and let < be a term order. The (images of the) monomials in S not in the initial ideal of I form a k-basis for S/I. Thus if I is homogeneous,
H_{S/I}(d) = H_{S/in_<(I)}(d).
Proof. Let f be a polynomial in S. Then the remainder of dividing f by a Gröbner basis for I with respect to < is a polynomial g with f − g ∈ I and g = Σ c_u x^u, where c_u ≠ 0 implies x^u ∉ in_<(I). So f = g in S/I and g ∈ span{x^u : x^u ∉ in_<(I)}, so this set spans S/I. If I is homogeneous and f has degree d, then so does g, so the set of monomials of degree d not in in_<(I) spans (S/I)_d. To see that these sets are linearly independent, note that if f = Σ c_u x^u is a linear dependence, then f ≠ 0 but f = 0 in S/I, hence f ∈ I and c_u ≠ 0 implies x^u ∉ in_<(I). Then in_<(f) ∉ in_<(I), which contradicts f ∈ I. So we conclude that {x^u : x^u ∉ in_<(I)} is linearly independent, and so is a basis for S/I. If I is homogeneous then {x^u : deg(x^u) = d, x^u ∉ in_<(I)} is a basis for (S/I)_d, as well as a basis for (S/in_<(I))_d. So the Hilbert functions are equal.
This reduces the question to: how can we compute H_{S/M} for M a monomial ideal? The key point is the following short exact sequence. Let I be a homogeneous ideal and f homogeneous of degree d. Then we have the short exact sequence
0 → S/(I : f) → S/I → S/(I, f) → 0,
where (I : f) = {g ∈ S : fg ∈ I} and (I, f) = I + ⟨f⟩. The first map φ is defined by g ↦ fg, while the second map π is defined by g ↦ g. We check that this sequence is exact.
1. φ is injective: if fg ∈ I then g ∈ (I : f), so g = 0 in S/(I : f).
2. im(φ) = ker(π): for any g ∈ S, fg ∈ I + ⟨f⟩, so im(φ) ⊆ ker(π). Conversely, suppose g ∈ ker(π); then g = i + hf for some i ∈ I. Hence g − hf = i ∈ I, so g = hf in S/I, so g = φ(h) ∈ im(φ).
3. π is surjective since I + ⟨f⟩ ⊇ I.
This short exact sequence is graded, i.e., 0 → (S/(I : f))_m → (S/I)_{m+d} → (S/(I, f))_{m+d} → 0.
Recall: given an exact sequence 0 → U → V → W → 0 of vector spaces we have V ≅ U ⊕ W, so dim V = dim U + dim W. In our case dim_k(S/I)_{m+d} = dim_k(S/(I : f))_m + dim_k(S/(I, f))_{m+d}. Apply this when I is a monomial ideal and f a variable. Then (I : f), (I, f) ⊇ I. If f is chosen carefully we have strict containment, so eventually (I : f) and (I, f) are monomial prime ideals, whose Hilbert functions we know.
Lemma 7.4. Let I = ⟨x_{i_1}, . . . , x_{i_s}⟩ ⊆ S = k[x_0, . . . , x_n] be prime. Then H_{S/I}(m) = (m+n−s choose m) = (m+n−s choose n−s).
Proof. S/I ≅ k[x_j : j ≠ i_k for any k]. This has Hilbert function (n−s+m choose m) = (n−s+m choose n−s). This is a polynomial in m of degree n − s.
Example. Let I = ⟨x_0 x_3, x_0 x_2, x_1 x_3⟩ ⊆ S = k[x_0, x_1, x_2, x_3]. Let us choose f = x_0. Then (I : f) = ⟨x_2, x_3⟩ and (I, f) = ⟨x_0, x_1 x_3⟩. So H_{S/I}(d) = H_{S/⟨x_2,x_3⟩}(d − 1) + H_{S/⟨x_0,x_1x_3⟩}(d). Now take f = x_1. Then (⟨x_0, x_1 x_3⟩ : x_1) = ⟨x_0, x_3⟩ and (⟨x_0, x_1 x_3⟩, x_1) = ⟨x_0, x_1⟩. So H_{S/I}(d) = H_{S/⟨x_2,x_3⟩}(d − 1) + H_{S/⟨x_0,x_3⟩}(d − 1) + H_{S/⟨x_0,x_1⟩}(d) = d + d + (d + 1) = 3d + 1. This is valid for d ≥ 0.
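Since a monomial ideal is its own initial ideal, Proposition 7.3 says H_{S/I}(d) simply counts the degree-d monomials not divisible by any generator. A brute-force check of 3d + 1 in Python (the exponent-vector encoding is ours):

from itertools import combinations_with_replacement

# Generators of I = <x0*x3, x0*x2, x1*x3> as exponent vectors in (x0, x1, x2, x3).
gens = [(1, 0, 0, 1), (1, 0, 1, 0), (0, 1, 0, 1)]

def hilbert_function(d):
    # dim_k (S/I)_d = number of degree-d monomials not divisible by any generator.
    count = 0
    for combo in combinations_with_replacement(range(4), d):
        exps = [combo.count(i) for i in range(4)]
        if not any(all(exps[i] >= g[i] for i in range(4)) for g in gens):
            count += 1
    return count

for d in range(6):
    assert hilbert_function(d) == 3*d + 1
print([hilbert_function(d) for d in range(6)])   # [1, 4, 7, 10, 13, 16]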
Theorem 7.5. Let I be a homogeneous ideal in S = k[x_0, . . . , x_n]. Then there exists a polynomial P ∈ Q[t] such that H_{S/I}(d) = P(d) for d ≫ 0.
Proof. Since H_{S/I} = H_{S/in_<(I)}, we may assume that I is a monomial ideal. The case where I is a monomial prime was the lemma. The proof is by Noetherian induction.
Given a monomial ideal I that is not prime, we may assume that the theorem is true for all monomial ideals properly containing I. Choose a variable x_i properly dividing a generator of I. This must exist since I is not prime. Then (I : x_i), (I, x_i) ⊋ I. By induction there exist P_1, P_2 ∈ Q[t] with H_{S/(I:x_i)}(d) = P_1(d) for d ≫ 0 and H_{S/(I,x_i)}(d) = P_2(d) for d ≫ 0. Then H_{S/I}(d) = P_1(d − 1) + P_2(d) for d ≫ 0, and this is a polynomial in d.
Definition 7.6. Let X ⊆ P^n be a projective variety. The polynomial P := P_X of the theorem for I = I(X) is called the Hilbert polynomial of X.
Let X ⊆ P^n be a projective variety. Then the dimension of X is the degree of the Hilbert polynomial.
Example. 1. If V ⊆ k^{n+1} is a subspace of dimension d + 1, then P(V) ⊆ P^n is a subvariety of dimension d.
2. X = twisted cubic = image of φ : P^1 → P^3 defined by [t_0 : t_1] ↦ [t_0^3 : t_0^2 t_1 : t_0 t_1^2 : t_1^3]. Then X = V(x_0 x_3 − x_1 x_2, x_0 x_2 − x_1^2, x_1 x_3 − x_2^2) = V(I), and in_<(I) = ⟨x_0 x_3, x_0 x_2, x_1 x_3⟩ (where x_0 > x_1 > x_2 > x_3). Then from the previous work we have H_{k[x_0,...,x_3]/in_<(I)}(d) = 3d + 1 for d ≥ 1, so dim(X) = 1.
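The notes do not name the term order here; lex with x_0 > x_1 > x_2 > x_3 is one order for which the three given binomials are already a Gröbner basis and produce exactly this initial ideal. A sketch in Python/sympy:

from sympy import symbols, groebner

x0, x1, x2, x3 = symbols('x0 x1 x2 x3')

# The ideal of the twisted cubic in P^3.
I = [x0*x3 - x1*x2, x0*x2 - x1**2, x1*x3 - x2**2]

# With respect to lex (x0 > x1 > x2 > x3) these binomials are already a Groebner
# basis, so the initial ideal is <x0*x3, x0*x2, x1*x3>, as used above.
G = groebner(I, x0, x1, x2, x3, order='lex')
print(list(G))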
There are many different (equivalent) definitions of dimension. Proving they are equivalent is non-trivial. For example, for X ⊆ A^n we can define dim(X) to be the dimension of the projective closure of X (see Eisenbud, Commutative Algebra, Chapters 8-13).
7.1 Singularities
How close is a variety to being a manifold?
Let X = V(f) ⊆ A^n. Fix a ∈ X. What is the tangent plane to X at a?
Example. n = 2:
X = V(y − f(x)), for example X = V(y − x^2). The tangent line at a = (a_1, a_2) is y − a_2 = (df/dx)|_a (x − a_1).
X = V(f(x, y)), for example X = V(y^2 + x^2 − 1). The slope is dy/dx = −(∂f/∂x)/(∂f/∂y), and the tangent line is y − a_2 = (dy/dx)|_a (x − a_1). So (∂f/∂y)(y − a_2) = −(∂f/∂x)(x − a_1), i.e., (∂f/∂x)(x − a_1) + (∂f/∂y)(y − a_2) = 0.
n = 3: X = V(x^2 + y^2 + z^2 − 1), a = (1, 0, 0). The tangent plane to X at a is spanned by (0, 1, 0) and (0, 0, 1), i.e., it is x = 1. Indeed ∇f(a) = (2x, 2y, 2z)|_{(1,0,0)} = (2, 0, 0), giving the equation 2(x − 1) = 0, i.e., x = 1.
The tangent space to the variety of f at a is T_a(X) = {(y_1, . . . , y_n) : Σ_i (∂f/∂x_i)|_a (y_i − a_i) = 0}. This is a hyperplane with normal vector ∇f(a), unless ∇f(a) = 0.
Example. X = V(x^3 − y^2), a = (0, 0). Then ∇f(a) = (3x^2, −2y)|_a = (0, 0). So T_{(0,0)}(X) = {(y_1, y_2) : 0·y_1 + 0·y_2 = 0} = A^2. If a = (1, 1) then ∇f(a) = (3, −2), so T_{(1,1)}(X) = {(y_1, y_2) : 3y_1 − 2y_2 = 0} = {y_2 = (3/2) y_1}.
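This kind of computation is easily automated; a sketch in Python/sympy for the cuspidal cubic (our choice of example):

from sympy import symbols, diff, solve

x, y = symbols('x y')
f = x**3 - y**2                      # the cuspidal cubic V(x^3 - y^2)

grad = [diff(f, x), diff(f, y)]      # gradient (3*x**2, -2*y)

# Singular points: points of V(f) where the gradient vanishes.
print(solve([f] + grad, [x, y], dict=True))   # [{x: 0, y: 0}], the cusp

# At the smooth point a = (1, 1) the gradient gives the tangent line 3*y1 - 2*y2 = 0.
a = {x: 1, y: 1}
print([g.subs(a) for g in grad])              # [3, -2]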
Definition 7.7. X = V(f) is singular at a point a ∈ X if the tangent space to X at a is not a hyperplane.
Let X ⊆ A^n and fix a ∈ X. The tangent space to X at a is T_a(X) = a + {(y_1, . . . , y_n) : Σ_i (∂f/∂x_i)|_a (y_i − a_i) = 0 for all f ∈ I(X)}, i.e., T_a(X) = ∩_{f : X ⊆ V(f)} T_a(V(f)).
Example. X = V(x^2 − y, x^3 − z) and a = (1, 1, 1). Then T_a(X) = {(y_1, y_2, y_3) : 2y_1 − y_2 = 0, 3y_1 − y_3 = 0} + (1, 1, 1) = (1, 1, 1) + span{(1, 2, 3)}.
X = V(x^2 − y^2, xz, yz) = V(x − y, z) ∪ V(x + y, z) ∪ V(x, y).
a = (1, 1, 0): T_a(X) = {(y_1, y_2, y_3) : 2y_1 − 2y_2 = 0, y_3 = 0} + (1, 1, 0) = (1, 1, 0) + span{(1, 1, 0)}.
a = (1, −1, 0): T_a(X) = {(y_1, y_2, y_3) : 2y_1 + 2y_2 = 0, y_3 = 0} + (1, −1, 0) = (1, −1, 0) + span{(1, −1, 0)}.
a = (0, 0, 1): T_a(X) = (0, 0, 1) + span{(0, 0, 1)}.
a = (0, 0, 0): T_a(X) = A^3.