
Singular Morphisms, Smoothness, and Lifting Lemmas

H.J. Stein
University of California, Berkeley
Berkeley, CA 94702

Chapter 1. Introduction and Setting


Let A and B be commutative rings with unity. Let φ: A → B be a ring homomorphism making B into an A-algebra. We say that φ is smooth if every A-algebra homomorphism ψ: B → C/I of B into a quotient of an A-algebra C can be lifted to C/I^2. If φ is understood, we say that B is smooth over A. We will investigate the degree of liftability embodied in homomorphisms which are not smooth. This work continues along the lines of [Coleman], which extends and unifies that of [Elkik], [Tougeron], and [Greenberg].

Let A_n = A[x_1, ..., x_n], let π: A_n → B be surjective, and let Ker(π) = (g_1, ..., g_m). Let G be the column vector (g_1 ... g_m)^tr (where the superscript tr denotes the transpose), and let (G) denote the ideal generated by the entries of G. For f ∈ A_n, and for a column vector P = (p_1 ... p_s)^tr and a row vector Q = (q_1 ... q_t) (with entries in A_n), let ∂f/∂Q denote the row vector (∂f/∂q_1 ... ∂f/∂q_t), and let ∂P/∂Q denote the matrix whose ith row is ∂p_i/∂Q. We let f' = ∂f/∂X and P' = ∂P/∂X denote ∂f/∂Q and ∂P/∂Q for Q = (x_1 ... x_n). For a matrix M = (m_ij) with entries in A_n, let ∂M/∂f denote the matrix (∂m_ij/∂f). In this notation, the Jacobian matrix of G is G' = ∂G/∂X, the matrix whose rows are the g_i' and whose columns are the vectors ∂G/∂x_i. Let Mat_{m×n}(A) denote the set of m × n matrices whose entries are elements of A. If
This work was supported in part by the NSF, the Max-Planck-Institut für Mathematik, Bonn, Germany, and the Institut des Hautes Études Scientifiques, France.
This thesis was processed using LamS-TeX. Since LamS-TeX uses AMS-TeX, I suppose that I am required to mention that AMS-TeX was used.

m and n are left out, we mean the set of matrices of all sizes. If M is a matrix, and i and j are integers, let M_ij denote the matrix consisting of the first i rows and j columns of M. If I = (I_1 ... I_m) and J = (J_1 ... J_n) are row vectors of integers, let M_IJ denote the matrix (M_{I_i J_j}).

The following definitions are derived from those of [Coleman]. The ideal D also appears in part II, section 2, remark 2.1 (page 89) of [Artin].

1.1 Definition. Let H be a function taking triples (C, c, I), where C is an A-algebra and c and I are ideals of C with c finitely generated, to triples (h_1, h_2, h_3) of ideals of C contained in I such that h_1 ⊆ h_2 and h_3 ⊆ h_2. An ideal b of B is said to have the H-infinitesimal lifting property (or H-lifting property for short) if for any C, c, and I with c finitely generated, and any homomorphism φ: B → C/h_1(C, c, I) such that φ(b)(C/h_1(C, c, I)) ⊇ c/h_1(C, c, I), there exists a homomorphism ψ: B → C/h_3(C, c, I) making the following diagram commute:

    B ──ψ──→ C/h_3(C, c, I)
    │φ                │
    ↓                ↓
    C/h_1(C, c, I) ──→ C/h_2(C, c, I)

If b has the H-lifting property for H(C, c, I) = (cI, I, I^2), we say that b has the strong infinitesimal lifting property (abbreviated SILP). For the case of H(C, c, I) = (cI, I, Ann_C(c/cI^2) ∩ I), we say that b has the weak infinitesimal lifting property (WILP). The H-lifting property for H(C, c, I) = (cI, I, Ann_C((c + I^2)/I^2) ∩ I) is called the very weak infinitesimal lifting property (VWILP). When H(C, c, I) = (c^2I, cI, c^2I^2), the H-lifting property is called the Newtonian infinitesimal lifting property (NILP). We call the latter the Newtonian infinitesimal lifting property because it has the form used in Newton's lemma in [Coleman]. Note that if b has SILP then it has WILP, and if it has WILP, then it

has VWILP. Also, if b has SILP then it has NILP. WILP and SILP were introduced, and VWILP was mentioned, in [Coleman].

Let B = A[X]/(G), where X = {x_1, ..., x_n}, G = (g_1 ... g_m)^tr (the column vector with entries g_i), and g_i ∈ A[X]. Let I denote the identity matrix with m rows and columns.

1.2 Definition.

    Δ_s = Δ_s(B/A, G) = {δ ∈ A[X] | δ(I + M) + G'N ≡ 0 mod (G), for matrices N and M with MG = 0}
    Δ_w = Δ_w(B/A, G) = {δ ∈ A[X] | δ(δI + M + G'N) ≡ 0 mod (G), for matrices N and M with MG = 0}
    Δ = Δ(B/A, G) = {δ ∈ A[X] | δI + M + G'N ≡ 0 mod (G), for matrices N and M with MG = 0}

    D_s = D_s(B/A) = Δ_s(B/A, G) mod (G)
    D_w = D_w(B/A) = Δ_w(B/A, G) mod (G)
    D = D(B/A) = Δ(B/A, G) mod (G)

It is shown in [Coleman] that D_s = {b ∈ B | (b) has SILP}, and that D_w = {b ∈ B | (b) has WILP}. This both shows that D_w and D_s are independent of the representation of B as an A-algebra and determines the principal ideals having WILP and SILP. It is also shown in [Coleman] that D(B/A) is independent of the representation of B as a quotient of a polynomial ring with coefficients in A.
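Membership in Δ_s can be tested mechanically in small cases. The following SymPy sketch is my own illustration, not an example from the text: it takes A = Q, n = m = r = 1, G = (x^2), so B = Q[x]/(x^2), and verifies the congruence δ(I + M) + G'N ≡ 0 mod (G) for δ = x with the hypothetical witnesses M = 0 (forced, since MG = 0 in the domain Q[x]) and N = -1/2.

```python
import sympy as sp

x = sp.symbols('x')
g = x**2                        # G = (g), so B = Q[x]/(x^2); m = n = 1
delta = x                       # candidate element of Delta_s
Gp = sp.diff(g, x)              # Jacobian G' = 2x
M = sp.Integer(0)               # MG = 0 forces M = 0 in the domain Q[x]
N = sp.Rational(-1, 2)          # hypothetical 1 x 1 witness matrix N
lhs = sp.expand(delta*(1 + M) + Gp*N)
remainder = sp.rem(lhs, g, x)   # reduce modulo the ideal (G)
print(remainder)                # 0, so delta = x satisfies the congruence
```

Here the whole check collapses to x + 2x·(-1/2) = 0; over a base ring in which 2 is not invertible this particular witness is unavailable.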

We continue the study of these various lifting properties as well as the study of the mysterious ideal D. We determine necessary and sufficient conditions for an ideal to have SILP, WILP, VWILP, and NILP. We show that WILP and VWILP are equivalent. We show that D_s is the unique maximal ideal of B having SILP, and we use WILP to strengthen some lifting lemmas of [Elkik]. Towards understanding D, we show that if A → B → C are rings, with C/B smooth, then D(C/A) = D(B/A)C. In particular, this means that D localizes, and that the points of B/D are the singular points of B/A. We also compute several examples of D, D_s, and D_w.
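The lifting obstruction behind the definition of smoothness can be seen concretely. The sketch below is an illustration under my own choice of rings, not an example from the text: it takes B = Q[x]/(x^2), C = Q[t], I = (t^2), and the homomorphism ψ: B → C/I sending x to t. A lift to C/I^2 = Q[t]/(t^4) would need an element u ≡ t mod t^2 with u^2 ≡ 0 mod t^4, and the computation shows the residue always retains its t^2 term, so no lift exists and B is not smooth over Q.

```python
import sympy as sp

t, a, b = sp.symbols('t a b')
# A general lift of x |-> t from C/I = Q[t]/(t^2) to C/I^2 = Q[t]/(t^4)
u = t + a*t**2 + b*t**3
r = sp.rem(sp.expand(u**2), t**4, t)   # the image of x^2, which a lift must kill
print(r)
```

The remainder is t^2 + 2a·t^3, whose t^2 coefficient is 1 regardless of a and b, so u^2 is never 0 in Q[t]/(t^4).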

Chapter 2. Determining ideals having ILPs


The setting for this chapter is as follows. A and B are rings as above, with B = A_n/(G), where G = (g_1 ... g_m)^tr is a column vector of polynomials in A_n. Let α = (α_1 ... α_r)^tr be a column vector of entries of B. We will determine when (α) has WILP, SILP, or VWILP. Our procedure is to create a ring R which is, in a sense, universal for (α) to have these properties.

2.1. D_s has SILP

We begin by determining weaker conditions for the ideal (α) to have SILP.

2.1.1 Lemma. The ideal (α) has SILP if and only if for all C, c, and I with c finitely generated and I^2 = 0, and all φ: B → C/cI satisfying φ((α)) ⊇ c/cI, there exists ψ: B → C making the following diagram commute:

    B ──ψ──→ C
    │φ        │
    ↓        ↓
    C/cI ──→ C/I

Proof: Clearly, if (α) has SILP, the conclusion holds. Conversely, suppose the condition of the lemma holds. Let C be an A-algebra, and let c and I be ideals of C such that c is finitely generated. Let φ: B → C/cI be an A-algebra homomorphism satisfying φ((α)) ⊇ c/cI. We must show that there exists a ψ: B → C/I^2 making the following diagram commute:

    B ──ψ──→ C/I^2
    │φ          │
    ↓          ↓
    C/cI ────→ C/I

Let C̄ = C/I^2, and let c̄ and Ī be the ideals generated by the images of c and I in C̄, respectively (c̄ = (c + I^2)/I^2, and Ī = I/I^2). Then we have the canonical homomorphism f: C → C̄, which reduces to f_1: C/cI → C̄/c̄Ī, f_2: C/I → C̄/Ī, and f_3: C/I^2 → C̄. Furthermore, (f_1 ∘ φ)((α)) ⊇ c̄/c̄Ī. Thus, since Ī^2 = 0, the supposition yields a ψ̄: B → C̄ which is congruent to f_1 ∘ φ modulo Ī. These homomorphisms make the following diagram commute:

    B ──ψ̄──→ C̄ ←──f_3── C/I^2
    │φ         │
    ↓         ↓
    C/cI ─f_1→ C̄/c̄Ī ──→ C̄/Ī ←──f_2── C/I

Then ψ = f_3^{-1} ∘ ψ̄ makes the first diagram commute. Thus (α) has SILP. ∎

2.1.2 Lemma. Let d = (d_1 ... d_r)^tr be a column vector of polynomials in A[X] which is congruent to α modulo (G). Then (α) has SILP if and only if for all A-algebras C and all ideals I of C with I^2 = 0, and all homomorphisms φ̃: A[X] → C and φ: B → C/(d̃)I making the following diagram commute (where d̃ = φ̃(d)):

    A[X] ──φ̃──→ C
    │π            │
    ↓            ↓
    B ──φ──→ C/(d̃)I

there exists ψ: B → C making the following diagram commute:

    B ──ψ──→ C
    │φ        │        (1)
    ↓        ↓
    C/(d̃)I ──→ C/I

Proof: Once again, it is clear that the first statement implies the second. As for the converse, assume that the latter holds. Let C be any A-algebra. Let (c) and I be ideals of C, with c = (c_1 ... c_s)^tr a column vector with entries in C. Let φ: B → C/(c)I be a homomorphism such that φ((α)) ⊇ (c)/(c)I. We must show that there exists a ψ making the following diagram commute:

    B ──ψ──→ C/I^2
    │φ          │        (2)
    ↓          ↓
    C/(c)I ──→ C/I

By the previous lemma, we may assume that I^2 = 0. Let φ̃: A[X] → C be any homomorphism making the following commute:

    A[X] ──φ̃──→ C
    │π            │
    ↓            ↓
    B ──φ──→ C/(c)I

Since φ((α)) ⊇ (c)/(c)I, we have that (d̃) + (c)I ⊇ (c) + (c)I (where d̃ = φ̃(d)), so there exist an s × s matrix L with entries in I and an s × r matrix S with entries in C such that Sd̃ = c + Lc. Letting I denote the s × s identity matrix, we have that (I − L)Sd̃ = c − L^2c. But I^2 = 0, so L^2 = 0, and hence there exists a matrix S̃ (namely (I − L)S) such that S̃d̃ = c. Thus (d̃) ⊇ (c), so there is a natural surjection C/(c)I → C/(d̃)I. By assumption, there exists a ψ making (1) commute. The same ψ makes (2) commute. ∎

Let d = (d_1 ... d_r)^tr be a column vector of polynomials in A[X] which is congruent to α modulo (G). Let Y be an m × r matrix of indeterminates. Let R = R^s_{d,G} = A[X, Y]/((Y)^2, Yd − G). Then there exists an A-homomorphism φ: B → R/(d)(Y), and φ((α)) ⊇ (d)/(d)(Y). The ring R^s_{d,G} appears in [Coleman] in the case r = 1. He uses it to determine necessary conditions for principal ideals to have SILP or WILP.
2.1.3 Proposition. Let R = R^s_{d,G}. The ideal (α) has SILP if and only if there exists an A-homomorphism ψ making the following commute:

    B ──ψ──→ R
    │φ        │        (3)
    ↓        ↓
    R/(d)(Y) ──→ R/(Y)

Proof: The "only if" part follows immediately from the SILP property, since in R, (Y)^2 = 0. As for the "if" part, suppose there exists such a ψ. Let C be an A-algebra and let c, I ⊆ C be ideals with c finitely generated. Let φ: B → C/cI be an A-homomorphism such that φ((α)) ⊇ c/cI. We must show that there exists an A-homomorphism making (2) commute. By the previous lemma, we may assume that I^2 = 0, that c = (d̃), and that there exists a φ̃ making the following commute:

    A[X] ──φ̃──→ C
    │π            │
    ↓            ↓
    B ──φ──→ C/cI

Then there exists an m × r matrix L with entries in I such that φ̃(G) = Ld̃. We define θ: A[X, Y] → C by θX = φ̃X and θY = L. Then θ((Y)^2) ⊆ I^2 = 0 and θ(Yd − G) = Ld̃ − φ̃(G) = 0, so θ factors through a homomorphism θ: R → C. Since θ maps (d) into c and (Y) into I, it reduces to homomorphisms θ_1: R/(d)(Y) → C/cI, θ_2: R/(Y) → C/I, and θ_3 = θ: R → C.

Then θ_1 ∘ φ' = φ, where φ': B → R/(d)(Y) is the canonical homomorphism, so we have the following commutative diagram:

    B ──ψ──→ R ──θ_3──→ C
    │φ'                    │
    ↓                      ↓
    R/(d)(Y) ─θ_1→ C/cI ──→ C/I ←──θ_2── R/(Y)

Thus θ_3 ∘ ψ makes (2) commute, and we conclude that (α) has SILP. ∎

2.1.4 Corollary. The ideal (α) has SILP if and only if there exist an m × m matrix M with MG = 0 and n × m matrices N_i (for 1 ≤ i ≤ r) satisfying the following:

    α_i(I + M) + G'N_i ≡ 0 mod (G),        (4)

where I is the m × m identity matrix.

Proof: Let J = G'. We show that (4) has a solution if and only if there exists a ψ making (3) commute. Let the columns of Y be Y_1 through Y_r. Suppose that such a ψ exists. Then there exist matrices N_i (with entries in A[X, Y]) such that

    G(X + Σ_{i=1}^r N_iY_i) ≡ 0 mod ((Y)^2, (Yd − G)).        (5)

Expanding G, we see that

    0 ≡ G(X + Σ_{i=1}^r N_iY_i)   mod (Y)^2 + (Yd − G)        (6)
      ≡ G(X) + J Σ_{i=1}^r N_iY_i  mod (Y)^2 + (Yd − G)        (7)
      ≡ Yd + J Σ_{i=1}^r N_iY_i    mod (Y)^2 + (Yd − G).        (8)

Since Yd = Σ_{i=1}^r Y_id_i, the right-hand side of (8) is congruent to

    Σ_{i=1}^r (d_iI + JN_i)Y_i   mod (Y)^2 + (Yd − G).        (9)

This implies that there exist an m × 1 matrix E with entries in (Y)^2 and an m × m matrix F with entries in A[X, Y] such that

    Σ_{i=1}^r (d_iI + JN_i)Y_i = E + F(Yd − G).        (10)

This equation must hold in A[X, Y]. Write F = F^0 + F^1 + F^2, where F^0 has entries in A[X], F^1 is linear homogeneous in the entries of Y, and F^2 has entries in (Y)^2. Write N_i = N_i^0 + N_i^1, where N_i^0 has entries in A[X] and N_i^1 has entries in (Y). Looking at the terms of equation (10) which are constant with respect to the entries of Y, we see that F^0G = 0. Considering the terms which are linear with respect to the entries of Y, we see that

    Σ_{i=1}^r (d_iI + JN_i^0)Y_i = F^0 Σ_{i=1}^r d_iY_i − F^1G.        (11)

Taking (11) modulo (G) and comparing the like coefficients of the entries of Y, we see that d_iI + JN_i^0 ≡ d_iF^0 mod (G). Taking M = −F^0 and N_i = N_i^0 yields the desired equations. Conversely, suppose that there exist matrices M and N_i with MG = 0 which

satisfy (4). We map A[X] to R by sending X to X + Σ_{i=1}^r N_iY_i. Then G maps to

    G(X + Σ N_iY_i) ≡ G(X) + J Σ N_iY_i   mod (Y)^2 + (Yd − G)
                     ≡ Yd + J Σ N_iY_i     mod (Y)^2 + (Yd − G)
                     ≡ Σ (d_iI + JN_i)Y_i   mod (Y)^2 + (Yd − G)
                     ≡ Σ (−d_iM + E_i)Y_i   mod (Y)^2 + (Yd − G)        (12)

for some m × m matrices E_i with entries in (G). Since G ≡ Yd mod (Yd − G), we may replace each E_i by a matrix Ẽ_i with entries in (Yd), in which case Ẽ_iY_i ≡ 0 mod (Y)^2. Thus the right side of (12) is congruent to

    −Σ d_iMY_i ≡ −MYd ≡ −MG ≡ 0   mod (Y)^2 + (Yd − G).

The homomorphism therefore factors through B = A[X]/(G), and the resulting ψ: B → R makes (3) commute. ∎

The complete picture is given by the following theorem.

2.1.5 Theorem. The set D_s is an ideal and has SILP.

To prove this we first cite the following definition (from [Coleman]).

2.1.6 Definition. Let M be a square matrix such that MG = 0. Define Λ_M = {δ ∈ A_n | δ(I + M) ≡ G'N mod (G) for some n × m matrix N}, and define D_M to be the image of Λ_M in B. It is clear that Λ_M is an ideal and is contained in Δ_s. It is shown in [Coleman] (and it also follows from the above equations) that D_M has SILP.

For convenience we also make the following definition.

2.1.7 Definition. For a square matrix P such that PG = G, let Λ_P = {δ ∈ A_n | δP ≡ G'N mod (G) for some n × m matrix N}. Let D_P be the image of Λ_P in B.

Then Λ_M = Λ_{I+M}. The theorem follows almost immediately from the following lemma.

2.1.8 Lemma. Λ_P + Λ_P̃ ⊆ Λ_{PP̃}.

Proof: If δ ∈ Λ_P and δ̃ ∈ Λ_P̃, then there exist matrices N and Ñ such that δP ≡ G'N mod (G) and δ̃P̃ ≡ G'Ñ mod (G). Then (δ + δ̃)(PP̃) ≡ G'NP̃ + PG'Ñ mod (G). By differentiating the equality PG = G we have that PG' ≡ G' mod (G). Thus (δ + δ̃)(PP̃) ≡ G'NP̃ + G'Ñ = G'(NP̃ + Ñ) mod (G), so δ + δ̃ is in Λ_{PP̃}. ∎

The first part of the theorem is given by the following corollary to the above lemma.

2.1.9 Corollary. The set D_s(B/A) is an ideal.

Proof: If δ and δ̃ map to elements of D_s, then there exist P and P̃ such that δ ∈ Λ_P and δ̃ ∈ Λ_P̃, so δ + δ̃ is in Λ_{PP̃}. But Δ_s contains Λ_{PP̃}, so δ + δ̃ is in Δ_s. The fact that D_s is closed under multiplication follows from the fact that each element of D_s is contained in an ideal of B (namely D_P for some P) which in turn is contained in D_s. ∎

The proof of the above theorem is completed by the following corollary to the above lemma.

2.1.10 Corollary. The ideal D_s(B/A) has SILP.

Proof: Suppose that C is an A-algebra, that c, I ⊆ C are ideals of C, and that c is finitely generated. Suppose that f is an A-algebra homomorphism of B into

C/cI such that (f(D_s)) ⊇ c in C/cI. We must show that there exists an A-algebra homomorphism g of B into C/I^2 such that the following diagram commutes:

    B ──g──→ C/I^2
    │f          │
    ↓          ↓
    C/cI ────→ C/I

Since (f(D_s)) ⊇ c and c is finitely generated, there must exist δ_1, ..., δ_l ∈ Δ_s whose images in C/cI generate c. Then there exist P_i such that δ_i ∈ Λ_{P_i}. Then each δ_i ∈ Λ_{P_1···P_l}, and so (f(D_{P_1···P_l})) ⊇ c in C/cI. Since D_{P_1···P_l} has SILP, we can lift f to g. ∎
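The differentiation step in the proof of Lemma 2.1.8, namely that PG' ≡ G' mod (G) whenever PG = G, can be checked concretely. In the SymPy sketch below the generators are hypothetical choices of mine: G = (x1·x2, x1^2)^tr admits the row syzygy s = (x1, -x2), so M = e_1·s satisfies MG = 0 and P = I + M satisfies PG = G.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
G = sp.Matrix([x1*x2, x1**2])            # hypothetical generators; m = n = 2
Gp = G.jacobian(sp.Matrix([x1, x2]))     # Jacobian G'
s = sp.Matrix([[x1, -x2]])               # row syzygy: s*G = 0
M = sp.Matrix([1, 0]) * s                # MG = 0, with M nonzero
P = sp.eye(2) + M                        # hence PG = G
D = sp.expand(P*Gp - Gp)                 # = MG'; claim: its entries lie in (G)
gb = sp.groebner([x1*x2, x1**2], x1, x2, order='grevlex')
residues = [gb.reduce(e)[1] for e in D if e != 0]
print(residues)                          # [0, 0]
```

The two nonzero entries of PG' − G' = MG' are -x1·x2 and x1^2, visibly in (G), and the Gröbner reduction confirms it.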

2.2. Equations for VWILP

To determine the ideals which have VWILP, we follow the same procedure. We first reduce to the case that I^2 = 0.

2.2.1 Lemma. The ideal (α) has VWILP if and only if for all C, c, and I with c finitely generated and I^2 = 0, and all φ: B → C/cI satisfying φ((α)) ⊇ c/cI, there exists ψ: B → C/Ann_C(c) making the following diagram commute:

    B ──ψ──→ C/Ann_C(c)
    │φ              │
    ↓              ↓
    C/cI ────────→ C/I

Proof: The only difference between the proof here and the proof in the SILP case is that we must check that C/Ann_C((c + I^2)/I^2) = C̄/Ann_C̄(c̄). This is clear. ∎

Next, we reduce to the case that c = (d̃).

2.2.2 Lemma. Let d = (d_1 ... d_r)^tr be a column vector of polynomials in A[X] which is congruent to α modulo (G). Then (α) has VWILP if and only if for all A-algebras C and all ideals I of C with I^2 = 0, and all homomorphisms φ̃: A[X] → C and φ: B → C/(d̃)I making the following diagram commute (where d̃ = φ̃(d)):

    A[X] ──φ̃──→ C
    │π            │
    ↓            ↓
    B ──φ──→ C/(d̃)I

there exists ψ: B → C/Ann_C((d̃)) making this diagram commute:

    B ──ψ──→ C/Ann_C((d̃))
    │φ              │
    ↓              ↓
    C/(d̃)I ──────→ C/I

Proof: Again, the only change to the proof for the SILP case is to note that Ann_C((d̃)) ⊆ Ann_C((c)). ∎

As in the strong case, let d = (d_1 ... d_r)^tr be a column vector of polynomials in A[X] which is congruent to α modulo (G). Let Y be an m × r matrix of indeterminates. Let R = R^v = R^v_{d,G} = A[X, Y]/((Y)^2, Yd − G) (note that R^s = R^v). Then there exists an A-homomorphism φ: B → R/(d)(Y). We are immediately led to the following.

2.2.3 Proposition. The ideal (α) has VWILP if and only if there exists an A-homomorphism ψ making the following commute:

    B ──ψ──→ R/Ann_R((d))
    │φ              │        (13)
    ↓              ↓
    R/(d)(Y) ────→ R/(Y)

It remains to determine the equations for VWILP.

2.2.4 Theorem. The ideal (α) has VWILP if and only if there exist m × m matrices M_i with M_iG = 0 and n × m matrices N_i which for all i and j satisfy the following:

    α_j(α_iI + G'N_i) ≡ α_iM_j mod (G),        (14)

where I is the m × m identity matrix.

Proof: Let J = G'. We show that (14) has a solution if and only if there exists a ψ making (13) commute. Let the columns of Y be Y_1 through Y_r. Giving ψ is the same as giving a homomorphism from A[X] to R/Ann_R((d)) such that G ↦ 0 and which makes (13) commute (after reducing modulo (G)). To make (13) commute, such a homomorphism must map X to something congruent to X modulo (Y); thus it must map X to X + Σ N_iY_i for some n × m matrices N_i. Then G ↦ G(X + Σ N_iY_i), by which is meant composition, not multiplication. Thus, we see that ψ exists

⟺ there exist n × m matrices N_i (with entries in A[X, Y]) such that

    G(X + Σ_{i=1}^r N_iY_i) ≡ 0 mod Ann_R((d))

⟺ there exist n × m matrices N_i (with entries in A[X, Y]) such that for all 1 ≤ j ≤ r

    d_j G(X + Σ_{i=1}^r N_iY_i) ≡ 0 mod (Y)^2 + (Yd − G).        (15)

Expanding G, we have that (15) holds

⟺ there exist n × m matrices N_i (with entries in A[X, Y]) such that for all 1 ≤ j ≤ r

    d_j (G + J Σ_{i=1}^r N_iY_i) ≡ 0 mod (Y)^2 + (Yd − G).        (16)

Replacing G by Yd (since we are working modulo (Yd − G)), we see that (16) holds

⟺ there exist n × m matrices N_i (with entries in A[X, Y]) such that for all 1 ≤ j ≤ r

    d_j (Yd + J Σ_{i=1}^r N_iY_i) ≡ 0 mod (Y)^2 + (Yd − G).        (17)

Saying that (17) holds modulo (Y)^2 + (Yd − G) is equivalent to saying that for all 1 ≤ j ≤ r there exist m × 1 and m × m matrices E_j and F_j respectively, with E_j ≡ 0 mod (Y)^2, such that

    d_j (Yd + J Σ_{i=1}^r N_iY_i) = E_j + F_j(Yd − G).        (18)

Replacing Yd by Σ_{i=1}^r d_iY_i, we see that (18) holds

⟺ there exist n × m, m × 1, and m × m matrices N_i, E_j, and F_j respectively (with entries in A[X, Y]) such that E_j ≡ 0 mod (Y)^2 and

    d_j Σ_{i=1}^r (d_iI + JN_i)Y_i = E_j + F_j(Yd − G).        (19)

Write F_j as F_j^0 + F_j^1 + F_j^2, where F_j^0 ∈ Mat(A[X]), F_j^1 ∈ Mat(A[X, Y]) is linear homogeneous in the entries of Y, and F_j^2 ∈ Mat((Y)^2). Write N_i as N_i^0 + N_i^1, where N_i^0 ∈ Mat(A[X]) and N_i^1 ∈ Mat((Y)). Separating out the terms which are constant with respect to the entries of Y, linear with respect to the entries of Y, and the remaining terms, we see that (19) holds

⟺ there exist matrices N_i, E_j, and F_j as above, with E_j ≡ 0 mod (Y)^2, such that

    0 = F_j^0 G        (20)

    d_j Σ_{i=1}^r (d_iI + JN_i^0)Y_i = F_j^0 Yd − F_j^1 G        (21)

    d_j Σ_{i=1}^r (JN_i^1)Y_i = E_j + F_j^1 Yd + F_j^2 (Yd − G)        (22)

Equations (20) and (21) imply (by taking (21) mod (G)) that there exist n × m matrices N_i^0 (with entries in A[X]) such that for all 1 ≤ j ≤ r there exist m × m matrices F_j^0 ∈ Mat(A[X]) with F_j^0 G = 0 and

    d_j Σ_{i=1}^r (d_iI + JN_i^0)Y_i ≡ F_j^0 Σ_{i=1}^r d_iY_i mod (G).        (23)

By setting like coefficients equal, this implies (14). Thus if (α) has VWILP, then (14) holds.

Conversely, given (14), we take N_i^0 equal to the given N_i, and F_j^0 equal to the given M_j. Then (20) holds. Since (14) holds modulo (G), there must exist matrices F_j^1 which are linear homogeneous in the entries of Y such that (21) holds. Finally, (22) holds by taking N_i^1 = 0, F_j^2 = 0, and E_j = −F_j^1 Yd. Thus if equation (14) holds, then (α) has VWILP. ∎

2.3. VWILP and WILP are equivalent

We now analyze WILP. Unfortunately, we cannot proceed as we did for SILP and VWILP. Although we can (and will) take the analogous first step of reducing to a quotient of C (in this case we may assume cI^2 = 0 instead of I^2 = 0), we cannot replace c by (d̃), because if c ⊆ c̃ are ideals, it need not be the case that Ann_C(c̃/c̃I^2) ⊆ Ann_C(c/cI^2). We solve this problem by replacing the ring R with a family of rings R_t.

We begin with the analogous lemma.

2.3.1 Lemma. The ideal (α) has WILP if and only if for all C, c, and I with c finitely generated and cI^2 = 0, and all φ: B → C/cI satisfying φ((α)) ⊇ c/cI, there exists ψ: B → C/Ann_C(c) making the following diagram commute:

    B ──ψ──→ C/Ann_C(c)
    │φ              │
    ↓              ↓
    C/cI ────────→ C/I

Proof: The proof is the same as those in the previous sections, except that we take C̄ = C/cI^2 and c̄ = (c + cI^2)/cI^2, and must note that C/Ann_C(c/cI^2) = C̄/Ann_C̄(c̄). ∎

Now comes the messy part. Without the second lemma, we must resort to using additional variables to represent the fact that the image of (d) contains c. This forces us to use a set of rings R_t. As in the previous cases, fix d = (d_1 ... d_r)^tr as a column vector of polynomials in A[X] which is congruent to α modulo (G). From here on, though, things will be slightly different. Let t be a positive integer. Let Y be an m × t matrix of indeterminates. Let Z be a t × r matrix of indeterminates. Let R_t = R^w_{t,d,G} = A[X, Y, Z]/((Zd)(Y)^2, YZd − G).

Then for each t there exists an A-homomorphism φ_t: B → R_t/(Zd)(Y), namely the one sending X mod (G) to X mod (Zd)(Y).

2.3.2 Proposition. The ideal (α) has WILP if and only if for all t ∈ Z^+ there exist A-homomorphisms ψ_t making the following commute:

    B ──ψ_t──→ R_t/Ann_{R_t}((Zd))
    │φ_t              │        (24)
    ↓                ↓
    R_t/(Zd)(Y) ──→ R_t/(Y)

Proof: The "only if" part is clear. As for the "if" part, suppose such ψ_t exist. Let C be an A-algebra, let c = (c_1 ... c_t)^tr be a column vector of entries of C, and let I

be an ideal of C. Let φ: B → C/(c)I be such that φ((α)) ⊇ (c)/(c)I. By the previous lemma, we may assume that (c)I^2 = 0. Choose φ̃: A[X] → C to make the following commute:

    A[X] ──φ̃──→ C
    │π            │
    ↓            ↓
    B ──φ──→ C/(c)I

Then there exists an m × t matrix L with entries in I such that φ̃(G) = Lc. Since φ((α)) ⊇ (c)/(c)I, there also exists a t × r matrix E ∈ Mat(C) such that Ed̃ = c, where d̃ = φ̃(d) (E can be corrected as in Lemma 2.1.2, using (c)I^2 = 0). We define θ: A[X, Y, Z] → C by θX = φ̃X, θY = L, and θZ = E. Then θ sends (Zd)(Y)^2 and (YZd − G) to 0 and thus factors through a homomorphism θ: R_t → C. Then θ_1 ∘ φ_t ≡ φ mod (c)I. Since θ sends (Zd) into (c) and (Y) into I, it reduces to homomorphisms θ_1: R_t/(Zd)(Y) → C/(c)I, θ_2: R_t/(Y) → C/I, and θ_3: R_t/Ann_{R_t}((Zd)) → C/Ann_C((c)). Thus, we have the following commutative diagram:

    B ──ψ_t──→ R_t/Ann_{R_t}((Zd)) ──θ_3──→ C/Ann_C((c))
    │φ_t                                         │
    ↓                                            ↓
    R_t/(Zd)(Y) ──θ_1──→ C/(c)I ──→ C/I ←──θ_2── R_t/(Y)

Then θ_3 ∘ ψ_t is the homomorphism which demonstrates that (α) has WILP. ∎

Finally, the analogous corollary detailing the equations for WILP remains to be proved. For simplicity, we will prove it in two steps. First we will determine systems of equations in A[X, Z] mod (G) that must hold, one system for each t. Then we will eliminate t and Z.

2.3.3 Corollary. The ideal (α) has WILP if and only if for all t ∈ Z^+ there exist m × m matrices M_i ∈ Mat(A[X, Z]) such that M_iG = 0, and n × m matrices N_i ∈ Mat(A[X, Z]), such that the following holds:

    (Zd)_j((Zd)_iI + G'N_i) ≡ (Zd)_iM_j mod (G),        (25)

where I is the m × m identity matrix, and (Zd)_i is the ith entry of the column vector Zd.

Proof: Let J = G'. We show that (25) has a solution if and only if there exists a ψ_t making (24) commute.

Let the columns of Y be Y_1 through Y_t, let d' = Zd, and let α' = Zα, so that in particular d'_i = (Zd)_i ≡ α'_i mod (G). Then ψ_t exists

⟺ there exist n × m matrices N_i (with entries in A[X, Y, Z]) such that

    G(X + Σ_{i=1}^t N_iY_i) ≡ 0 mod Ann_{R_t}((d'))

⟺ there exist n × m matrices N_i (with entries in A[X, Y, Z]) such that for all 1 ≤ j ≤ t

    d'_j G(X + Σ_{i=1}^t N_iY_i) ≡ 0 mod (d')(Y)^2 + (Yd' − G).        (26)

Expanding G, we have that (26) holds

⟺ there exist n × m matrices N_i (with entries in A[X, Y, Z]) such that for all 1 ≤ j ≤ t

    d'_j (G + J Σ_{i=1}^t N_iY_i) ≡ 0 mod (d')(Y)^2 + (Yd' − G).        (27)

Since we are working modulo (Yd' − G), we may replace G by Yd', and we see that (27) holds

⟺ there exist n × m matrices N_i (with entries in A[X, Y, Z]) such that for all 1 ≤ j ≤ t

    d'_j (Yd' + J Σ_{i=1}^t N_iY_i) ≡ 0 mod (d')(Y)^2 + (Yd' − G).        (28)

Saying that (28) holds modulo (d')(Y)^2 + (Yd' − G) is equivalent to saying that there exist m × 1 and m × m matrices E_j and F_j respectively, with E_j ≡ 0 mod (d')(Y)^2, such that

    d'_j (Yd' + J Σ_{i=1}^t N_iY_i) = E_j + F_j(Yd' − G).        (29)

Replacing Yd' by Σ_{i=1}^t d'_iY_i, we see that (29) holds

⟺ there exist n × m, m × 1, and m × m matrices N_i, E_j, and F_j respectively (with entries in A[X, Y, Z]) such that E_j ≡ 0 mod (d')(Y)^2 and

    d'_j Σ_{i=1}^t (d'_iI + JN_i)Y_i = E_j + F_j(Yd' − G).        (30)

Write F_j as F_j^0 + F_j^1 + F_j^2, where F_j^k ∈ Mat(A[X, Y, Z]) with F_j^0 constant with respect to the entries of Y, F_j^1 linear homogeneous in the entries of Y, and F_j^2 ≡ 0 mod (Y)^2. Write N_i as N_i^0 + N_i^1, where N_i^k ∈ Mat(A[X, Y, Z]) with N_i^0 constant with respect to the entries of Y and N_i^1 ≡ 0 mod (Y). Then (30) holds

⟺ there exist matrices N_i, E_j, and F_j as above, with E_j ≡ 0 mod (d')(Y)^2, such that

    0 = F_j^0 G        (31)

    d'_j Σ_{i=1}^t (d'_iI + JN_i^0)Y_i = F_j^0 Yd' − F_j^1 G        (32)

    d'_j Σ_{i=1}^t (JN_i^1)Y_i = E_j + F_j^1 Yd' + F_j^2 (Yd' − G)        (33)

Equations (31) and (32) imply (by taking (32) mod (G)) that there exist N_i^0 and F_j^0 (with entries in A[X, Z]) such that F_j^0 G = 0 and

    d'_j Σ_{i=1}^t (d'_iI + JN_i^0)Y_i ≡ F_j^0 Σ_{i=1}^t d'_iY_i mod (G).        (34)

By setting like coecients equal, this implies (25). Conversely, given (25), we take Ni0 equal to the given Ni , and Fj0 equal to the given Mj . Then (31) holds. Since (25) holds modulo G, there must exist matrices Fj1 which are linear homogeneous in Y such that (32) holds. Finally, (33) holds by taking Ni1 = 0, Fj2 = 0, and Ej = Fj1 Y d , which is then 0 mod (d )(Y )2 . Therefore ( ) has WILP. The nal equations for WILP are given by: 2.3.4 Proposition. The ideal ( ) has WILP if and only if there exist m m and n m matrices Mi and Ni respectively with entries in A[X ] such that Mi G = 0 and following holds: j (i I + GNi ) i Mj mod (G), where I is the identity matrix. Proof: By the previous corollary, we must show that (35) holds if and only if (25) holds. Suppose that (35) holds. Then it holds for t = r. Mapping A[X, Z ] to A[X ] by sending X to X and Z to the identity matrix yields equation (25). Conversely, suppose that (35) holds. Fix t. Then Z = (zij ), where i ranges from 1 to t, and j ranges from 1 to r. Let Zi be the ith row of Z . Multiplying (35) by zkj , and summing over j , we get zkj j (i I + GNi ) i
j j

(35)

zkj Mj mod (G)

(36)

22

2. Determining ideals having ILPs


= Letting Mk j zkj Mj ,

we may rewrite (36) as (37)

k (i I + GNi ) i Mk mod (G).

Now multiply by zli , sum over i, and replace (25).

i zli Ni

by Nl and we get equation

Thus WILP and VWILP have the same equations, so 2.3.5 Theorem. The ideal ( ) has WILP if and only if ( ) has VWILP.
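Equation (35) can be tested mechanically in the principal case r = 1. The SymPy sketch below is a toy instance of my own choosing, not one of the text's examples: it takes A = Z, B = Z[x]/(x^2), α = x. The trivial witnesses M = N = 0 already satisfy the WILP congruence α(αI + G'N) ≡ αM mod (G), whereas the SILP congruence α(I + M) + G'N ≡ 0 mod (G) would force 1 + 2N ≡ 0 mod (x), which has no solution over Z.

```python
import sympy as sp

x = sp.symbols('x')
g = x**2                     # G = (g); B = Z[x]/(x^2); m = n = r = 1
alpha = x
Gp = sp.diff(g, x)           # G' = 2x
M = sp.Integer(0)            # MG = 0 forces M = 0
N = sp.Integer(0)
lhs = alpha*(alpha*1 + Gp*N) - alpha*M   # alpha(alpha*I + G'N) - alpha*M
remainder = sp.rem(sp.expand(lhs), g, x)
print(remainder)             # 0: the WILP congruence holds with M = N = 0
```

Here the left side is just x^2, which vanishes modulo (x^2); this is the sense in which WILP is strictly weaker than SILP over a base where 2 is not invertible.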

2.4. Equations for NILP

We now analyze NILP. We will proceed as we did for WILP.

2.4.1 Lemma. The ideal (α) has NILP if and only if for all C, c, and I with c finitely generated and satisfying c^2I^2 = 0, and for all φ: B → C/c^2I satisfying φ((α)) ⊇ c/c^2I, there exists ψ: B → C making the following diagram commute:

    B ──ψ──→ C
    │φ        │
    ↓        ↓
    C/c^2I ──→ C/cI

Proof: We proceed as before, taking C̄ = C/c^2I^2, c̄ = (c + c^2I^2)/c^2I^2, and Ī = I/c^2I^2. ∎

Let d = (d_1 ... d_r)^tr be a column vector of polynomials in A[X] which is congruent to α modulo (G). Let t be a positive integer. Let Y_i be an m × t matrix of indeterminates (for i = 1, ..., t), and let ({Y_i}) denote the ideal generated by the entries of the Y_i's. Let Z be a t × r matrix of indeterminates. Let

    R_t = R^n_{t,d,G} = A[X, {Y_i}, Z] / ((Zd)^2({Y_i})^2, (Σ_{i=1}^t Y_iZd(Zd)_i) − G),

where (Zd)_i denotes the ith entry of the column vector Zd, and the superscript n stands for NILP. Let

φ_t: B → R_t/(Zd)^2({Y_i}) be the A-homomorphism given by sending X mod (G) to X mod (Zd)^2({Y_i}).

2.4.2 Proposition. The ideal (α) has NILP if and only if for all t ∈ Z^+ there exist A-homomorphisms ψ_t making the following commute:

    B ──ψ_t──→ R_t
    │φ_t          │        (38)
    ↓            ↓
    R_t/(Zd)^2({Y_i}) ──→ R_t/(Zd)({Y_i})


exist. Let C Proof: The only if part is clear. As for the if part, suppose such t

be an A-algebra, let c = (c1 . . . ct )tr be a column vector of entries of C , and let I be an ideal of C . Let : B C/(c)2 I be such that ( ) (c)/(c)2 I . By the previous lemma, we may assume that (c)2 I 2 = 0. to make the following commute: Choose A[X ]

C/(c)2 I Li cci . Since

= Then there exist m t matrices Li with entries in I such that G

( ) (c)/(c)2 I , there also exist t r and t t matrices E , E Mat(C ) respectively, = c + E c. Multiplying by I E , (where I is with E 0 mod (c)I such that E d the t t identity matrix), using the fact that c2 I 2 = 0, and letting E = (I E )E , = c. we see that there exists a t r matrix E such that E d , Yi = Li , and Z = E . Then We dene : A[X, {Yi }, Z ] C by X = X sends both (Zd)2 ({Yi })2 and (( Yi Zd(Zd)i ) G) to 0, and thus factors through Rt via a homomorphism : Rt C . Since (Zd) = c and ({Yi }) I , reduces to homomorphisms 1 : Rt /(Zd)2 ({Yi }) C/(c)2 I , 2 : Rt /(Zd)({Yi }) C/(c)I , and 3 : Rt C . Since 1 t mod (c)2 I , these homomorphisms make the following 24

2. Determining ideals having ILPs diagram commute:

C B
t

Rt

t Rt /(Zd) ({Yi })
2

C/(c) I

C/(c)I

Rt/(Zd)({Yi})

mod (c)I , so ( ) has NILP. Then 3 t

Now for the equations. As in the WILP case, we rst determine equations involving Z . 2.4.3 Corollary. The ideal ( ) has NILP if and only if for all t Z+ and 1 i j t, there exist m m and n m matrices M and Nijk respectively, with entries in A[X, Z ] such that M G = 0 and the following holds:
t

(Z )i (Z )j (I + M ) G
k=1

Nijk (Z )k mod (G),

(39)

where I is the m m identity matrix, and (Z )i denotes the ith entry of the column vector Z . Proof: It suces to prove this without the condition that i j because the left side of (39) doesnt change when i and j are switched, so taking Nijk to be Njik for i > j solves the remaining equations. Let J = G. Let the columns of Yi be Yi1 through Yit . Fix t. Let d = Zd, and let = Z so that in particular d i = (Zd)i . We show that (39) has solution if and
making (38) commute. only if there exist t

25

2. Determining ideals having ILPs


exists We see that t

there exist n m matrices Nijk (with entries in A[X, {Yi }, Z ]) such that
t t

G(X +
i,j,k=1

Nijk d k Yij )

0 mod (d ) ({Yi }) +

(
i=1

Yi d d i) G .

(40)

Expanding G, we see that (40) holds there exist n m matrices Nijk (with entries in A[X, {Yi }, Z ]) such that
t t 2 2 Nijk d k Yij 0 mod (d ) ({Yi }) + i,j,k=1

G+J

(
i=1

Yi d d i) G .

(41)

Since were working modulo (( we see that (41) holds

Yi d d i ) G), we may replace G by

Yi d d i and

there exist n m matrices Nijk (with entries in A[X, {Yi }, Z ]) such that
t t t

Yi d d i
i=1

+J
i,j,k=1

Nijk d k Yij

0 mod (d ) ({Yi }) + ((
i=1

Yi d d i ) G). (42)

Saying that (42) holds modulo (d )2 ({Yi })2 + ( 0 mod (d )2 ({Yi })2 such that
t t

Yi d d i G) is equivalent to

saying that there exist m 1 and m m matrices E and F respectively, with E

Yi d d i
i=1

+J
i,j,k=1

Nijk dk Yij = E + F

(
i=1

Yi d d i) G .

(43)
i Yi d di

Replacing Yi d by comes
i,j

t j =1 Yij dj

in the leftmost sum of (43), so that

be-

Yij d i dj we have that (43) holds

there exist n m, m 1, and m m matrices Nijk , E , and F respectively, (with entries in A[X, {Yi }, Z ]) with E 0 mod (Zd)2 ({Yi })2 such that
t (d i dj I i,j =1 t t

+J
k=1

Nijk d k )Yij 26

= E + F ((
i=1

Yi d d i ) G).

(44)

Write F as F^0 + F^1 + F^2, where F^k ∈ Mat(A[X, {Y_i}, Z]) with F^0 constant with respect to the entries of the Y_i, F^1 linear homogeneous in the entries of the Y_i, and F^2 \equiv 0 \bmod (\{Y_i\})^2. Write N_{ijk} as N_{ijk}^0 + N_{ijk}^1, where N_{ijk}^k ∈ Mat(A[X, {Y_i}, Z]) with N_{ijk}^0 constant with respect to the entries of the Y_i, and N_{ijk}^1 \equiv 0 \bmod (\{Y_i\}). Then (44) holds if and only if there exist matrices N_{ijk}, E, and F as above, with E \equiv 0 \bmod (\bar{d})^2(\{Y_i\})^2, such that

    0 = F^0 G,   (45)

    \sum_{i,j=1}^{t} \Bigl(\bar{d}_i\bar{d}_j I + J \sum_{k=1}^{t} N_{ijk}^0\,\bar{d}_k\Bigr) Y_{ij} = F^0 \sum_{i,j=1}^{t} Y_{ij}\bar{d}_i\bar{d}_j - F^1 G,   (46)

    J \sum_{i,j,k=1}^{t} N_{ijk}^1\,\bar{d}_k\,Y_{ij} = E + F^1 \sum_{i,j=1}^{t} Y_{ij}\bar{d}_i\bar{d}_j + F^2\Bigl(\bigl(\sum_{i,j=1}^{t} Y_{ij}\bar{d}_i\bar{d}_j\bigr) - G\Bigr).   (47)

Equations (45) and (46) imply (by taking (46) mod (G)) that there exist matrices N_{ijk}^0 and F^0 (with entries in A[X, Z]) satisfying F^0 G = 0 and

    \sum_{i,j=1}^{t} \Bigl(\bar{d}_i\bar{d}_j I + J \sum_{k=1}^{t} N_{ijk}^0\,\bar{d}_k\Bigr) Y_{ij} \equiv F^0 \sum_{i,j=1}^{t} Y_{ij}\bar{d}_i\bar{d}_j \bmod (G).   (48)

By comparing the like coefficients of the entries of the Y_i, this implies (39). Thus, if (d) has NILP then (39) holds.

Conversely, given (39) we take N_{ijk}^0 = N_{ijk} and F^0 = M. Then (45) holds. Since (39) holds modulo G, there must exist a matrix F^1 which is linear homogeneous in the entries of the Y_i such that (46) holds. Finally, (47) holds by taking N_{ijk}^1 = 0, F^2 = 0, and E = -F^1 \sum_{i,j} Y_{ij}\bar{d}_i\bar{d}_j, which is then \equiv 0 \bmod (\bar{d})^2(\{Y_i\})^2 since F^1 is linear homogeneous in the entries of the Y_i. Thus, if (39) holds then (d) has NILP.
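Explicitly, comparing the coefficient of each entry of Y_{ij} on the two sides of (48) gives, for every pair i, j:

```latex
\bar{d}_i\bar{d}_j\,I + J\sum_{k=1}^{t} N^0_{ijk}\,\bar{d}_k \;\equiv\; F^0\,\bar{d}_i\bar{d}_j \pmod{(G)},
\qquad\text{i.e.}\qquad
\bar{d}_i\bar{d}_j\,(I - F^0) \;\equiv\; -\,J\sum_{k=1}^{t} N^0_{ijk}\,\bar{d}_k \pmod{(G)},
```

which is (39) with M = F^0, after absorbing the sign into the N_{ijk}.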

The final equations for NILP are given by:

2.4.4 Theorem. The ideal (Δ) has NILP if and only if there exist an m × m matrix M ∈ Mat(A[X]) and n × m matrices N_{1i} and N_{2i} with entries in A[X] (for 1 ≤ i ≤ r) such that MG = 0 and for all 1 ≤ i, j ≤ r the following holds:

    \delta_i\delta_j\,(I - M) \equiv \nabla G\,(N_{1i}\delta_j + N_{2j}\delta_i) \bmod (G),   (49)

where I is the m × m identity matrix.

Proof: In the equations of the previous corollary, let M = M^0 + M^1, where M^0 is constant with respect to the entries of Z and M^1 ∈ Mat((Z)). Let N_{ijk} = N_{ijk}^0 + N_{ijk}^1 + N_{ijk}^2, where N_{ijk}^0 is constant with respect to the entries of Z, N_{ijk}^1 is linear homogeneous in the entries of Z, and N_{ijk}^2 ∈ Mat((Z)^2). Let J = ∇G. Then there exist M and N_{ijk} satisfying (39) if and only if there exist M^0, M^1, N_{ijk}^0, N_{ijk}^1, N_{ijk}^2 as above, with M^0 G = M^1 G = 0, satisfying

    0 \equiv J \sum_{k=1}^{t} N_{ijk}^0 (Z\Delta)_k \bmod (G),   (50)

    (Z\Delta)_i (Z\Delta)_j (I - M^0) \equiv J \sum_{k=1}^{t} N_{ijk}^1 (Z\Delta)_k \bmod (G),   (51)

    -(Z\Delta)_i (Z\Delta)_j M^1 \equiv J \sum_{k=1}^{t} N_{ijk}^2 (Z\Delta)_k \bmod (G).   (52)

We first note that equations (50) and (52) are superfluous, since they are satisfied by taking N_{ijk}^2 = M^1 = N_{ijk}^0 = 0. Thus (Δ) has NILP if and only if there exist matrices M^0 ∈ Mat(A[X]) and N_{ijk}^1 ∈ Mat(A[X, Z]) with M^0 G = 0 and N_{ijk}^1 linear homogeneous in the entries of Z, which satisfy (51) for all 1 ≤ i, j ≤ t.

Consider equation (51). Let Z = (z_{uv}). Since N_{ijk}^1 is linear homogeneous in the entries of Z, N_{ijk}^1 = \sum_{u,v} N_{ijkuv}\, z_{uv} for some matrices N_{ijkuv} with entries in A[X].

Also (Z\Delta)_i = \sum_{a=1}^{r} z_{ia}\delta_a, so we may rewrite (51) as

    \sum_{a,b} z_{ia} z_{jb}\,\delta_a\delta_b\,(I - M^0) \equiv J \sum_{k,u,v,w} N_{ijkuv}\, z_{uv} z_{kw}\,\delta_w \bmod (G),   (53)

where k and u run from 1 to t, and a, b, v, and w run from 1 to r. Thus (Δ) has NILP if and only if there exist m × m and n × m matrices M^0 and N_{ijkuv} respectively, satisfying M^0 G = 0 and equation (53).

We fix i and j and set coefficients equal by determining the coefficient of z_{ef} z_{gh} on each side of equation (53). We may assume that e ≤ g and that if e = g then f ≤ h.

Consider the right side of equation (53). If e = g and f = h, the coefficient of z_{ef} z_{gh} is J N_{ijgef}\delta_h. If e ≠ g or f ≠ h, the coefficient of z_{ef} z_{gh} is J N_{ijgef}\delta_h + J N_{ijegh}\delta_f.

The left side is slightly more complicated. The monomial z_{ia} z_{jb} equals z_{ef} z_{gh} if and only if (i, a, j, b) = (e, f, g, h) or (j, b, i, a) = (e, f, g, h). If i ≠ j the latter cannot hold (since i ≤ j and e ≤ g). So first consider the left side when i ≠ j. Then if e ≠ i or g ≠ j the coefficient of z_{ef} z_{gh} is 0, and if e = i and g = j the coefficient of z_{ef} z_{gh} is \delta_f\delta_h\,(I - M^0). There are three possibilities when i = j. If e ≠ i or g ≠ j the coefficient of z_{ef} z_{gh} is 0. If e = i and g = j but f ≠ h, the coefficient of z_{ef} z_{gh} is 2\delta_f\delta_h\,(I - M^0). Finally, if e = i, g = j, and f = h, the coefficient of z_{ef} z_{gh} is \delta_f\delta_h\,(I - M^0).

Reassembling all the cases when i ≠ j, we have that (53) is equivalent to:

    0 = J N_{ijgef}\delta_h   ((e ≠ i or g ≠ j) and (e = g and f = h))   (53a)

    0 = J (N_{ijgef}\delta_h + N_{ijegh}\delta_f)   ((e ≠ i or g ≠ j) and (e ≠ g or f ≠ h))   (53b)

    \delta_f\delta_h\,(I - M^0) = J (N_{ijgef}\delta_h + N_{ijegh}\delta_f)   (e = i and g = j)   (53c)
When i = j, equation (53) is equivalent to:

    0 = J N_{ijgef}\delta_h   ((e ≠ i or g ≠ j) and (e = g and f = h))   (53a′)

    0 = J (N_{ijgef}\delta_h + N_{ijegh}\delta_f)   ((e ≠ i or g ≠ j) and (e ≠ g or f ≠ h))   (53b′)

    2\delta_f\delta_h\,(I - M^0) = J (N_{ijgef}\delta_h + N_{ijegh}\delta_f)   (e = i and g = j and f ≠ h)   (53c′)

    \delta_f\delta_h\,(I - M^0) = J N_{ijgef}\delta_h   (e = i and g = j and f = h)   (53d′)

We may discard equations (53a), (53b), (53a′), and (53b′), since the N_{ijgef} and N_{ijegh} matrices appearing in those equations do not appear in the remaining equations, and (53a), (53b), (53a′), and (53b′) are satisfied by taking those matrices equal to zero.

Equation (53c) is equivalent to the existence of a matrix M^0 annihilating G such that for each f and h there exist matrices N_{1f} = N_{ijjif} and N_{2f} = N_{ijijf} such that

    \delta_f\delta_h\,(I - M^0) = J (N_{1f}\delta_h + N_{2h}\delta_f).   (54)

Equations (53c′) and (53d′) are equivalent to the existence, for each i and j with i = j, of matrices N_{3f} such that for all f, h:

    2\delta_f\delta_h\,(I - M^0) = J (N_{3f}\delta_h + N_{3h}\delta_f)   (f ≠ h)   (55a)

    \delta_f^2\,(I - M^0) = J N_{3f}\delta_f   (f = h)   (55b)

Finally, we note that equations (55a) and (55b) are redundant, since given (54) we may take N_{3f} = N_{1f} + N_{2f}. Thus (54) implies (49), which completes the proof.
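The redundancy of (55a) and (55b) can be seen by substituting N_{3f} = N_{1f} + N_{2f} into (54) twice:

```latex
J\bigl(N_{3f}\delta_h + N_{3h}\delta_f\bigr)
= J\bigl(N_{1f}\delta_h + N_{2h}\delta_f\bigr) + J\bigl(N_{1h}\delta_f + N_{2f}\delta_h\bigr)
= 2\,\delta_f\delta_h\,(I-M^0),
```

which is (55a); and taking f = h in (54) gives \delta_f^2(I - M^0) = J(N_{1f} + N_{2f})\delta_f = J N_{3f}\delta_f, which is (55b).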

Chapter 3. Some Lifting Lemmas

We extend Coleman's lifting lemma to NILP ideals, and extend his version of Elkik's lemma to non-principal WILP ideals. As in the previous chapter, A and B are rings with B = A[X]/(G). Additionally, let I and J be ideals of A. We begin with a definition.

3.1 Definition. Let H be an ideal of A[X]. Suppose that J = (t) is principal, and that A is complete with respect to the J-adic topology. Let I be any ideal of A. Let L be the ideal of annihilators of powers of t. Let k be an integer such that the intersection of L with (t^k) is 0. We will say that H satisfies Elkik's lemma in the principal complete case if for any integers h and n such that n > max(2h, h + k), and for any homomorphism f: A[X] → A inducing a homomorphism g: B → A/J^n I as follows:

    A[X] --f--> A
      |         |
      v         v
      B  --g--> A/J^n I

such that f(H) contains J^h, there exists a lifting of g to A, that is, a homomorphism B → A whose composite with A → A/J^{n-h} I agrees with the reduction of g.

Elkik's lemma states that all subideals of Elkik's ideal satisfy Elkik's lemma in the principal complete case (although it is not phrased this way in [Elkik]). Coleman's

version of Elkik's lemma is that D satisfies Elkik's lemma in the principal complete case. This is an improvement, since D contains Elkik's ideal and is sometimes larger.

3.2 Definition. Let H be any ideal of A[X]. Let J be an ideal of A. Suppose that A is complete with respect to J. We say that H satisfies Elkik's lemma in the complete case if for any integer h there exist integers n and r such that if m > n and we have homomorphisms making the following commute:

    A[X] --f--> A
      |         |
      v         v
      B  --g--> A/J^m

such that f(H) contains J^h, then we can lift g to a homomorphism B → A that agrees with g modulo J^{m-r}.

Elkik proves the following theorem, albeit not stating it in this form.

3.3 Theorem (Elkik). Suppose that H satisfies Elkik's lemma in the principal complete case, and that for any A-algebra C the image of H in B ⊗ C satisfies Elkik's lemma in the principal complete case for B ⊗ C over C, with respect to the image of J. Then H satisfies Elkik's lemma in the complete case.

Since D(B/A) satisfies the conditions given, and since from [Coleman] we know that Elkik's lemma holds for D(B/A), we have

3.4 Corollary. The ideal D satisfies Elkik's lemma in the complete case.

Let (A, J) be a Henselian pair. Let Â be the J-adic completion of A. Let B̂ = B ⊗ Â. Let V be an open subscheme of Spec B which is smooth over Spec A. Then V̂ = V ×_{Spec A} Spec Â is an open subscheme of Spec B̂. Let U be the open neighborhood

of Spec A consisting of the primes of A which do not contain J, and let Û be the open neighborhood of Spec Â consisting of the primes of Â which do not contain JÂ. Then we have the following commutative diagram:

    V̂ ---> Spec B̂        V ---> Spec B
              | ŝ                  | s
              v                    v
    Û --i--> Spec Â      U --i--> Spec A

where the upper horizontal maps are the open immersions, ŝ and s are the structure maps, and i denotes the open immersions of Û and U.

3.5 Definition. We say that H satisfies Elkik's lemma in the Henselian case if H satisfies Elkik's lemma in the complete case with (A, J) merely being a Henselian pair instead of complete.

Elkik proves a quite useful theorem which states:

3.6 Theorem (Elkik). Let f̂ be a section of ŝ such that f̂ ∘ i factors through V̂. Then for any integer n, there exists a section f of s such that f ∘ i factors through V and such that f ≡ f̂ mod J^n.

This immediately implies

3.7 Corollary. The ideal D satisfies Elkik's lemma in the Henselian case.

Alternatively, it is possible to extend Coleman's Newton's lemma to the Henselian case more directly. This has the advantage of only requiring a homomorphism from B to A/a²J instead of to A/J^n for some sufficiently large n. It has the disadvantage, however, of only working for principal ideals with WILP (although any SILP ideal will still work). It also requires additional hypotheses.

First, we extend Coleman's lemma for SILP ideals to NILP ideals.

3.8 Lemma. Let A be complete with respect to an ideal I. Let c be any ideal of A. Let b be an ideal of B having NILP. Let φ: B → A/c²I be an A-homomorphism such that (φ(b)) ⊇ c/c²I. Then there exists an A-homomorphism ψ: B → A making the following commute:

    B --φ--> A/c²I
    |          |
    ψ          v
    v
    A ------> A/cI

Proof: The proof is identical to Coleman's proof. We merely reproduce it here for simplicity. By the NILP property, there exists a φ̃: B → A/c²I² whose composite with the projection A/c²I² → A/c²I is φ. We may then proceed by induction provided that (φ̃(b)) ⊇ c/c²I². It is easy to see that this is the case. Replace b and c by column vectors of their generators, and let the number of their rows be r and s respectively; write b̃ for the image under φ̃ of the vector of generators of b. Then since (φ̃(b)) generates (c) modulo (c)I, there must exist matrices S and L with L ≡ 0 mod I such that S b̃ = c − Lc = (I − L)c, where I denotes the identity matrix. Since I − L is invertible in A (its inverse is I + L + L² + ⋯, which converges because L ≡ 0 mod I), we have that (b̃) ⊇ (c).

Thus, by induction and the completeness of A with respect to I, there exists a ψ making the above diagram commute.

Our extension of Coleman's Newton's lemma to the Henselian case then reads as follows.

3.9 Proposition. Let (A, J) be a Henselian pair. Let H be an ideal of B. Let a be an ideal of A. Let f: B → A/a²J be an A-homomorphism such that (f(H)) ⊇ a/a²J. Suppose that H ⊇ J^r for some r ∈ Z, and that Spec B − V(H) is smooth over A. If

1. H has WILP, a is principal, Ann_A(a) ∩ aJ = (0), and Ann_Â(aÂ) ∩ aJÂ = (0), or

2. H has NILP,

then there exists f̃: B → A satisfying f̃ ≡ f mod aJ.

Proof: From Coleman's Newton's lemma and from our above extension to the NILP case, we know that this holds for A complete with respect to J. We have that aÂ is principal, and NILP is stable under base change [Coleman]. Then since Ann_Â(aÂ) ∩ aJÂ = (0), we may apply Coleman's Newton's lemma to B ⊗_A Â → Â/(a²JÂ), and we get the existence of f̂: B ⊗_A Â → Â such that f̂ ≡ f ⊗ 1 mod aJÂ. Applying Elkik's above theorem to f̂ finishes the proof.

4. About D

Chapter 4. About D
We begin with an example. Let K be a ring, n an integer, and suppose n is invertible in K . Let B = K [x, y ]/(xn+1 , xy n ). Then B is the ane line over k with the origin given multiplicity n. 4.1 Example. D(B/K ) = ((n + 1)xn y n1 , (n + 1)y 2n ).

Proof: Let be the canonical homomorphism from K [x, y ] to B , and let G = xy n )tr . To compute D(B/A) we will compute the image of (B/A, G) in B . (n + 1)xn 0 Let J = G = . Then = (B/A, G) if and only if yn nxy n1 there exist matrices M , N Mat(K [x, y ]) of the necessary sizes such that ( xn+1 I2 + JN + M 0 mod (G) where M G = 0, which is to say that we need to solve this system of equations in B . To determine M such that M G = 0, we must solve axn+1 + bxy n = 0 for a and b. Clearly, a and b satisfy axn+1 + bxy n = 0 i a is a multiple of y n , and b is ky n kxn the negative of the same multiple of xn . Thus M is of the form , k y n k xn a b for k, k K [x, y ]. Let N = . We must solve the following system of four c d 36

4. About D equations in B : (n + 1)xn a + ky n + = 0 (n + 1)xn b kxn = 0 y n a + nxy n1 c + k y n = 0 y n b + nxy n1 d k xn + = 0 Equation (1) holds i = (n + 1)xn a ky n . (1a) (1) (2) (3) (4)

Factoring an xn out of (2), we see that k (n + 1)b mod Ann(xn ) = (x, y n ), where Ann(f ) denotes the annihilator of f in B . Thus, k = (n + 1)b + kx x + ky y n (2a)

for some kx and ky in B , and plugging back, we see that this is necessary and sucient. Note that we now have the solution set for the system consisting of (1) and (2). Factoring y n1 out of (3), we see that ya + nxc + k y 0 mod Ann(y n1 ) = (xy ). Taking this mod (y ) we see that nxc mod (y ) 0. The annihilator of x in B/(y ) is (xn ), so nc mod (xn , y ) 0 so nc = cx xn + cy y (3a)

for some cx and cy in B . Plugging back into equation (3), we see that we must have y n a + y n k = 0, so k a mod Ann(y n ) = (x) so
k = a + kx x

(3b)

. Once again, this is necessary and sucient, and we have now solved (1), for some kx

(2), and (3) simultaneously. 37

4. About D Plugging all our information into (4), equation (4) becomes nxy n1 d nxn a ny n b ky y 2n = 0. Considering this equation modulo x, we nd that y n (nb + ky y n ) 0 mod (x), so nb ky y n mod (x), so b = ky y n /n + bx x (4a)

for some bx in B . Plugging back, we see that were left with nxy n1 d nxn a = 0, so y n1 d xn1 a mod Ann(x) = (xn , y n ), so y n1 d 0 mod (xn1 , y n ), so d = dy y + dx xn1 . (4b)

Plugging back one last time, we see that a y n1 dy mod Ann(xn1 ) = (x2 , y n ), so a = y n1 dy + ax x2 + ay y n for some ax and ay , and lo and behold, were done. Sorting back through the dependencies, we can eventually see that = ((n + 1)xn y n1 , (n + 1)y 2n ) + (G). Coleman proves that D(B/A) is independent of the embedding of B into ane space over A, thus making D a canonical ideal of B associated to the structure morphism of B/A. Thus Spec B/D is a canonical subscheme of Spec B corresponding to the structure morphism of Spec B over Spec A. Coleman also proves that D = (1) i B/A is smooth. We prove here that D localizes nicely. This enables us to generalize the denition of D to arbitrary schemes and to prove that the support of Spec B/D is the closure of the singular points of B over A. It also is the rst step in proving that D also pushes through smooth maps. Let B be an A-algebra, let : An B be surjective, with kernel generated by the m-rowed column vector G. Let J = G. 4.2 Proposition. If f B , D(Bf /A) = Bf D(B/A). 38 (4c)

4. About D Proof: Let f An map to f B . Then : An [z ] Bf , dened by sending X to (X ) and z to 1/f is surjective and has kernel (g1 , . . . , gm , zf 1). Let G = ( g1 ... gm zf 1 )tr . Let J = G . We rst show that a D(B/A) implies that a D(Bf /A). Well, if a D(B/A) then there exist m m and n m matrices M and N Mat(An ) such that aIm JN + M mod (G), and M G = 0 in An . We will show that aIm+1 J N z 2 (f )N 0 za + M 0 0 0 mod (G )

It is clear that this equation holds in all but the last row. As for the last row, in all but the last column we must show that the equation 0 ( zf /X f ) N z 2 f /XN mod (G )

holds. This holds because zf 1 mod (G ). In the last position of the last row, the equation is a f za mod (G ). This holds for the same reason. Thus the above equation holds. M 0 Clearly G = 0 in An [z ], so a D(Bf /A), so D(B/A) D(Bf /A), so 0 0 Bf D(B/A) D(Bf /A). Conversely, suppose a D(Bf /A), and let a (Bf /A, G) such that the image of a is a. Then there exist m m and n m matrices M and N Mat(An [Z ]) satisfying a Im+1 J N + M mod (G ). Consider the map from An [z ] to (An )f which sends An to itself and z to 1/f . Since (G ) contains the kernel of this map, we have An [z ]/(G ) = (An )f /(G), so we can map our relation into (An )f and we get a Im+1 J N + M mod (G) in (An )f . Since M G = 0 in An [z ], the product is also zero in (An )f . But in (An )f , G G = , so if M is the matrix consisting of all rows and columns of M except 0 39

4. About D the last row and column, we have that M G = 0 in (An )f . Letting N be all columns of N except the last one, we get a Im JN + M mod (G) in (An )f . There exists an s > 0, a An , and N , M Mat(An ), such that a = a /f , N = N /f , and M = M /f . Then the congruence above implies that there exists an integer t > 0 such that f a J (f N ) + f M mod (G) in An . This means that f a = f the proof.
t s+t a t t t s s s

(B/A, G), which implies that f

s+t

a D(B/A), which

implies that a Bf D(B/A). Thus, D(Bf /A)/subsetBf D(B/A), which completes

4.3 Proposition. Let f A, and suppose that the structure morphism of B/A factors through Af . Then D(B/Af ) = D(B/A). Proof: Let : An B surjective, G a column vector of generators of the kernel of , and J = G. Let m be the number of rows of G. Then factors through (Af )n . We have An (Af )n B, with the composition being . The kernel of must also be generated by the entries of G since the image of f in B must be invertible. If D(B/A) then there exists A congruent to modulo (G) such that I GN + M mod (G), for n m and m m matrices N and M in Mat(An ) with M G = 0. Then the image of in (Af )n also must satisfy the same equation, so D(B/Af ). Thus D(B/A) D(B/Af ). If D(B/Af ), then there exists (Af )n congruent to modulo (G) such that I GN + M mod (G), for n m and m m matrices N and M in Mat((Af )n ) with M G = 0. Multiplying by a suciently high power of f (say l), insures that all entries are in the image of An under the canonical homomorphism. Then f l (B/A), so 40
can.

4. About D the image of f l is in D(B/A). Since the image of f in B is invertible, we have that D(B/A). Thus D(B/Af ) D(B/A).

4.4 Corollary. Let B be an A-algebra. Let f A. Let f denote the image of f in B . Then D(Bf /Af ) = Bf D(B/A). Proof: D(Bf /Af ) = Bf D(Bf /A) = Bf D(B/A). For now, we assume that were working over the ring A so we let D(B )=D(Spec B) = D(B/A), and we confuse ideals of rings with sheaves of ideals over the spectrum of said rings. 4.5 Corollary. The restriction of D(B ) to an open ane subscheme Spec C of Spec B is D(C ). Proof: The open ane subscheme can be covered by open anes of the form Spec Bf , where Bf = Cf . Then D(C ) Spec Cf = Cf D(C ) = D(Cf ) = D(Bf ) = Bf D(B ) = D(B ) Spec Bf . Thus, D(C ) Spec Bf = D(B ) Spec Bf for each open ane in our cover, and thus D(C ) = D(B ) Spec C .

4.6 Corollary. The ane scheme Spec B/D(B ) is precisely the closure of the set of points of Spec B that arent smooth over A. Proof: Let Sing(B ) be the points of Spec B which arent smooth over A. If p Spec B is not an element of Sing(B ), then there is an open ane neighborhood Spec C of p which doesnt intersect Sing(B ), and therefore C is smooth over A, and thus D(B ) Spec C = D(C ) = (1), because that D(B/A) = (1) i B/A is smooth [Coleman]. Therefore p (as an ideal of B) does not contain D(B ), so p is not in Spec B/D(B ). Thus Spec B Sing(B ). 41

4. About D Conversely, D(B ) restricted to the complement of Spec B/D(B ) is 1, and thus the complement of Spec B/D(B ) is smooth over A. Therefore the complement of Spec B/D(B ) is contained in the complement of Sing(B ), so Spec B/D(B ) Sing(B ). Spec B/D(B ), being closed, then contains Sing(B ). We now understand the geometric structure of Spec B/D. Its algebraic structure is still not well understood. 4.7 Denition. If Y /X are schemes, let D(Y /X ) be the ideal sheaf which is isomorphic to D(V /U ) on open anes V of X which are contained in the inverse image of open anes U of Y . D(Y /X ) is well dened by the previous results. We will now prove that D passes through smooth maps. Let C/B/A be rings, such that C is smooth over B . Let B = An /(G). We want to show that D(C/A) = C D(B/A). 4.8 Lemma. If Spec C is ane space over Spec B , then D(C/A) = C D(B/A). Proof: Let C = B [z1 , ..., zl ]. Let An map onto B surjectively, with kernel generated by the entries of the m-rowed column vector G, and let J = G. Then An+l surjects onto C with kernel also generated by G, and G/ (x1 , ..., xn , z1 , ..., zl ) = J = ( J 0 ). Let (B/A), so that Im JN + M mod (G) for some n m and m m matrices N N and M respectively, with M G = 0. Then Im ( J 0 ) + M mod (G), 0 so D(C/A). Conversely, if D(C/A) then Im J N + M mod (G). Let L = {monomials in z1 , . . . , zl }. Then there exist Z An , and MZ and NZ in Mat(An ) such that =
Z L Z Z ,

N=

Z L NZ Z ,

and M =

Z L MZ Z .

Then

Z Im J NZ MZ 0 mod (G).
Z L

42

4. About D Since the entries of G are in An , J s entries are also in An . Thus we have that Z Im J NZ MZ 0 mod (G), and MZ G = 0 for each Z L. Since J N = J (Nn,m ) we have that Z D(B/A) for each Z , so D(B/A)C .

We now achieve the same result for C etale over B .

4.9 Lemma. If C is etale over B then D(C/A) = C D(B/A). Proof: Let B/A[X ](G), X = (x1 . . . xn ), and G = (g1 . . . gm )tr . Since C is etale over B , its locally of the form (B [y ]/(h))b for some monic polynomial h such that h is invertible once we localize at b [Milne, Thm. I.3.14]. By previous propositions we know that D localizes, so it suces to prove the lemma in the case of C = (B [y ]/(h))h/y = B [y, z ]/(h, zh/y 1), for some monic polynomial h B [y ]. Let h h denote the derivative of h with respect to y . Let H = . Let Y = (y, z ). h z 1 Then D(C/A) i there exists a A[X, y, z ] congruent to mod (G) + (H ), and m m, m 2, 2 m and 2 2 matrices M1 , M2 , M3 , M4 respectively, with M1 G + M2 H = 0, and M3 G + M4 H = 0, and n m, n 2, 2 m and 2 2 matrices N1 , N2 , N3 , N4 respectively, satisfying:
G X H X

0
H Y

N1 N3

N2 N4

M1 M3

M2 M4

mod (G) + (H )

The element satises the above equation i it satises the six sets of equations 43

4. About D following: G N1 + M1 mod (G) + (H ) X G N2 + M2 mod (G) + (H ) 0 X H H 0 N1 + N3 + M3 mod (G) + (H ) X Y H H N2 + N4 + M4 mod (G) + (H ) I X Y 0 = M1 G + M2 H I 0 = M3 G + M4 H The matrix N4 only occurs in equation (4), and
H Y 2

(1) (2) (3) (4) (5) (6)

is invertible mod (G) + (H )

because its determinant is invertible (det( H Y ) = h , and zh 1 mod (G) + (H )).

Thus equation (4) is superuous becuase we may take N4 to be any matrix whose
H reduction mod (G) + (H ) is (I H X N2 M4 ) Y 1

, and then (C/A, (G, H )).

Similarly, equation (3) is also superuous. Thus (C/A, (G, H )) i there exist matrices such that equations (1), (2), (5) and (6) are satised. Reducing equation (5) mod (G), we see that M2 H 0 mod (G). Dierentiating with respect to Y yields that
M2 y H

+ M2 H y 0 mod (G), and


H Y

M2 z H

+ M2 H y

0 mod (G), so M2 H Y 0 mod (G) + (H ). Since equivalent to I

is invertible mod (G) + (H ), we

have that M2 0 mod (G) + (H ). This implies that that equations (1) and (2) are

G N1 + M1 mod (G) + (H ) X G N2 mod (G) + (H ) 0 X

(1 ) (2 )

Neither of these equations involves M3 or M4 , so we may drop equation (6). Equation (2 ) is the only one involving N2 , and is satised by N2 = 0, so we may 44

4. About D drop equation (2 ), and were left with only equations (1 ) and (5) being necessary and sucient. Equation (5) is the only equation left involving M2 , and the equation itself is equivalent to M1 G 0 mod (H ), so were left with the system: G N1 + M1 mod (G) + (H ) X M1 G 0 mod (H ) I (1 ) (2 )

Adjoining z and reducing modulo h z 1 is the same as localizing at h , so solving (1 ) and (2 ) over A[X, y, z ] is the same as solving the following in A[X, y ]h . G N1 + M1 mod (G) + (h) X M1 G 0 mod (h) I (1 ) (2 )

If satises (1 ) and (2 ) for some N1 and M1 , then we can clear denominators, so there exists an integer l, a A[X, y ], and matrices N and M in Mat(A[X, y ]) such that /h = , and satises: G N + M mod (G) + (h) X M G 0 mod (h) I (1 ) (2 )
l

Clearly, the image in C of the set of A[X, y ] satisfying (1 ) and (2 ) generates D(C/A). In particular, this implies that the image of D(B/A) in C lies in D(C/A). We now show that this image generates D(C/A). Since the equations are taken modulo the monic polynomial h, and of degree less than d = deg(h). Then if = M =
d1 i i=0 Mi y , d1 i i=0 i y , G X

and G and

do not involve y , we may assume that , and entries of N , and M are polynomials N =
d1 i i=0 Ni y ,

where the i , and the entries of the Ni and the Mi are elements of

A[X ], we see that there exist matrices such that satises (1 ) and (2 ), i there 45

4. About D exist matrices such that the i satisfy: G Ni + Mi mod (G) X Mi G = 0 i I (A) (B )

This means that the images of the i lie in the image of D(B/A), so D(B/A) generates D(C/A).

4.10 Theorem. If C is a B -algebra, and B is an A-algebra, and C is smooth over B , then D(C/A) = D(B/A)C . Proof: We know that this holds if C is ane over B , or if C is etale over B . The result follows from the fact that if : B C is smooth then locally there exists an integer l such that factors through B [z1 , . . . , zl ], and C is etale over B [z1 , . . . , zl ].

46

5. Examples

Chapter 5. Examples
We compute D, Ds , and Dw for a variety of rings.

5.1. Computational aids Let A be a noetherian ring, and I be an ideal, let A be the I -adic completion of A, and let S be the set of elements of A which are congruent to 1 modulo I . 5.1.1 Lemma. Let J be an ideal of A. The I -adic closure of J in A is {g A | sg J for some s S }. Proof: We have the homomorphisms A S 1 A A, the second of which is faithfully at, so all the ideals of S 1 A are closed. So if J A is an ideal of A, then its closure in the I -adic topology in A, J is Thus, J = {g A | sg J (for some s S )}.
n (J

+ I n ) = (J A) A = (S 1 J ) A.

5.1.2 Proposition. If B is a complete intersection then D(B/A) = Ds (B/A). If B is an integral domain then D(B/A) = Dw (B/A). Proof: Let B be a complete intersection over A. Then there exist n and G such that B = An /(G), and M G = 0 implies that M 0 mod (G). Let J = G. Then D(B/A) i I + M JN mod (G) i I JN mod (G) i (I + M ) JN mod (G). Similarly, Dw (B/A) i (I + JN + M ) 0 i (I + JN + M ) 0 or 0 i D.

47

5. Examples 5.1.3 Proposition. Let B = A[X ]/G, with G = (g1 . . . gr )tr . Let J = G. If 1. ({m1 , . . . , mr | (m1 . . . mr )G = 0})B = (1), = 0, where J denotes 2. there exists a vector v = 0 with entries in B such that v J the image of J in B , and 3. B is a noetherian integral domain, then Ds (B/A) = 0. Proof: Let a = ({m1 , . . . , mr | (m1 . . . mr )G = 0}). Letdenote reduction modulo )= G. Let Ds (B/A). Then there exist M and N such that M G = 0 and (I M N . Item 2 implies that v (I M ) = 0. Let d Z+ . Multiplying on the right by J + + M d1 , we see that v (I M d ) = 0, so the entries of v are in ad for I +M all d. Since a = (1) and B is an integral domain, d ad = 0, so v = 0. Since v = 0, this means that = 0.

5.2. Demonstrative examples 5.2.1 Example. The ideals D and Ds need not be equal. Proof: We show that Ds (B/K ) = 0 for B = K [x, y ]/(xi y j , xk y l ), where i, j, l 1, k 0, i > k, j < l, K is a eld, and i, j , k , and l are nonzero in K when they are nonzero integers. Since we already have that D(B/K ) = ((l + 1)y l xl1 , (l + 1)x2l ) for (i, j, k, l) = (1, l, 0, l + 1) (from the exercise beginning chapter 4), this demonstrates the supposition. We have that Ds i there exist a, b, c, d, m, n B such that 0 0 Let M = = ixi1 y j kxk1 y l my lj ny lj jxi y j 1 lxk y l1 a b c d a + c my lj ny lj mxik nxik =0

mxik nxik

b d and I be the 2 2 identity matrix. Then (I M ) = JN . Multiplying on the 48

, N =

, G = (xi y j , xk y l )tr , J = G,

5. Examples left by I + M + + M s1 yields (I M s ) = JN for all s 1. Thus, I JN mod (y lj , xik )s for all s, so I JN mod s (y lj , xik )s for all s. Since s (y lj , xik )s = s (x, y )s we must have that I JN mod s (x, y )s . The ideal s (x, y )s in B is the image of s (x, y )s + (G) = (G), the (x, y )- adic closure of (G) in K [x, y ]. Let S = {1+g | g (x, y )}. Then by the lemma, (G) = {g | sg (G), s S }. If s S and sg (G) then sg = pxi y j + qxk y l = xk y j (pxik + qy lj ) for some p and q . Since s 1 mod (x, y ), neither x nor y divides s, so xk y j must divide g , so there exists a g such that g = xk y l g . Thus, sg = pxik + qy lj . The ideal (xik , y lj ) is primary in K [x, y ] becuase if (xik , y lj ), then either or must have no constant term. Suppose its . then t (xik , y lj ) for some positive integer t. Thus, since no power of s can be in (xik , y lj ), we must have that g (xik , y lj ), and thus g (G). Thus (G) = G, so s (x, y )s = 0 in B . Therefore I JN mod s (x, y )s implies that I JN . Thus Ds if and only if there exist polynomials a, b, c, d satisfying: = aixi1 y j + bjxi y j 1 0 = akxk1 y l + blxk y l1 = ckxk1 y l + dlxk y l1 0 = cixi1 y j + djxi y j 1 (1) (2) (3) (4)

First assume that k is nonzero in K . Equating equations (1) and (3), we see that we must have that aixi1 y j + bjxi y j 1 = ckxk1 y l + dlxk y l1 , so y j 1 xk1 (aixik y j + bjxik+1 ckxlj +1 y l dlxy lj ) = 0, so aixik y j + bjxik+1 ckxlj +1 y l + dlxy lj mod Ann(xj 1 y k1 ). The annihilator of (xj 1 y k1 ) is (xy lj +1 , xik+1 y ), so considering the last equation modulo y , we see that bjxik+1 = 0, which means that bj 0(y ), so there exists a by such that b = by y . Plugging this into equation (2) we see that akxk1 y l = 0, so ak = 0 mod Ann(xk1 y l ). This annihilator is (x), so there exists an ax such that a = ax x. Plugging the equations for a and b into equation (1) yields that = 0. 49

5. Examples If k = 0, our equations become: = aixi1 y j + bjxi y j 1 0 = bly l1 = dly l1 0 = cixi1 y j + djxi y j 1 (1 ) (2 ) (3 ) (4 )

Equation (4 ) implies that ciy + djx 0 mod Ann(xi1 y j 1 ). This annihilator is (xy, y lj +1 ), so considering the equation modulo (y ) yelds that djx 0(y ), so d = dy y for some dy . Plugging this into equation (3 ) yields that = 0. It is clear from the equations for D and Dw that Dw D { 2 | Dw }. The following example demonstrates that we cannot replace this containment with equality. 5.2.2 Example. The sets D and Dw need not be equal, Dw need not be an ideal, and D need not equal { 2 | Dw }. Not even the set of squares of D need equal { 2 | Dw }. Proof: We compute D(B/A) and Dw (B/A) for B = A[x]/(x3 ). Since B is dened by one equation over A, and this equation has trivial annihilator, D(B/A) = (3x2 ). As for Dw (B/A), Dw if and only if there exists an n B satisfying ( + 3x2 n) 0 mod (x3 ).
2 + 2 x + ( 2 + Let n = n0 + n1 x + n2 x2 , and let = 0 + 1 x + 2 x2 . Then 2 = 0 0 1 1

20 2 )x2 , and 3x2 n = 30 n0 x2 . Thus Dw if and only if there exists n0 A such that
2 2 0 + 20 1 x + (1 + 20 2 )x2 + 30 n0 x2 0 mod (x3 ).

50

5. Examples This is equivalent to


2 0 = 0,

20 1 = 0, and

2 1 + 20 2 + 30 n0 = 0.

Thus,
2 2 Dw = {0 + 1 x + 2 x2 | 0 = 0, 20 1 = 0, and n0 such that 1 + 20 2 = 30 n0 }.

This demonstrates that Dw need not be an ideal, since if A is taken to be k [u, v ]/(u2 , v 2 ), we see that u Dw , and vx Dw , but u + vx Dw (if 2 = 0 in k ). This also demonstrates that D need not equal Dw , since D is always an ideal. To show that D need not be { 2 | Dw } we let A = Q. Then 3x2 D, but it doesnt have a square root in B , let alone a square root lying in Dw . Furthermore, the squares of D need not be the squares of elements of Dw , as is illustrated by taking A = k [u, v ]/(u2 , uv ), where k is any ring in which 3 is invertible and 2 = 0. Let = u + vx + x2 . Then 2 (3x2 ). However there exists no n0 A
2 + 2 = 3 n , i.e. there exists no n A satisfying v 2 + 2u = 3un , satisfying 1 0 2 0 0 0 0

so Dw . Let Dn denote the set of elements of B generating principal ideals having NILP. 5.2.3 Example. The sets Ds and Dn need not be equal. Proof: Let B = A[x]/(x4 ). Then Ds = D = (4x3 ), but x2 Dn , as is evident by glancing at the equations for NILP. 5.2.4 Example. A nonprincipal D, and a maximal nonprincipal SILP ideal. Proof: Take B = k [x, y ]/(x2 y 2 ). Then D = Ds = (2x, 2y ). 5.2.5 Example. The ideal D need not be a maximal WILP ideal, and maximal ideals contained in Dw need not have WILP. Proof: Let B = k [x, y ]/(x2 , y 2 ). Suppose that k is a eld in which 2 = 0. Then D = (xy ) (from the example at the beginning of the chapter about D), but we will 51

show that Dw = (x, y), so (x) and (y) both have WILP. We will also show that (x, y) does not have WILP.

Writing down the equations for (α) having WILP, and letting N = ( n1 n2 ; n3 n4 ), we have that (α) has WILP if and only if there exist n1, ..., n4 ∈ B satisfying:

α^2 + 2x α n1 = 0,    2x α n2 = 0,    (1)
2y α n3 = 0,    α^2 + 2y α n4 = 0.    (2)

We may take n2 = n3 = 0. Let α = α0 + α1 x + α2 y + α3 xy. Then α^2 = α0^2 + 2 α0 α1 x + 2 α0 α2 y + (2 α1 α2 + 2 α0 α3) xy. Let n1 = n̄0 + n̄1 x + n̄2 y + n̄3 xy. Then equation (1) becomes:

α0^2 + 2 α0 α1 x + 2 α0 α2 y + (2 α1 α2 + 2 α0 α3) xy + 2x α0 n̄0 + 2xy α0 n̄2 + 2xy α2 n̄0 = 0    (1′)

Since every expression in equation (1′) except for the leading α0^2 is multiplied by either an x or a y, we must have that α0 = 0, and then (1′) has a solution for all α1, α2 and α3 by letting n̄0 = −α1. By symmetry, the same is true of equation (2), so Dw = (x, y).

To see that Dw does not have WILP, we write down the equations that x and y

would have to satisfy. There would have to exist matrices Nx and Ny such that:

x ( xI + G′ Nx ) = 0
x ( yI + G′ Ny ) = 0
y ( xI + G′ Nx ) = 0
y ( yI + G′ Ny ) = 0

where I is the 2 × 2 identity matrix and G′ = ( 2x 0 ; 0 2y ). However, the second equation above has no solutions, since x annihilates the first row of ( 2x 0 ; 0 2y ). Thus (x, y) does not have WILP.

5.2.6 Example. A maximal nonprincipal WILP ideal.
Proof: Let B = k[x, y]/(xy(y−1)), where k is a field in which 2 ≠ 0. We show that

Dw(B/k) = (y(y−1), x(y−1)) ∪ (y(y−1), x(2y−1)) ∪ (y(y−1), xy),

and that each of the ideals composing this union has WILP. The element α has WILP if and only if there exist n1, n2 ∈ B satisfying:

α ( α + ( y(y−1)  x(2y−1) ) ( n1 ; n2 ) ) = 0

Multiplying out, and utilizing the fact that we're working modulo xy(y−1), we see that the above equation is equivalent to the existence of polynomials n1, n2, n3 ∈ k[x, y] satisfying:

α ( α + n1 y(y−1) + n2 x(2y−1) ) = n3 xy(y−1)    (1)

To solve this equation, we analyze it modulo various ideals, breaking it into cases. Consider the equation modulo (x). We get

α ( α + n1 y(y−1) ) ≡ 0 mod (x)

Thus either α ≡ 0 mod (x) (case A), or α ≡ −n1 y(y−1) mod (x) (case B).

Case A: α ≡ 0 mod (x). Then α = x αx for some αx. Plugging back into equation (1) and cancelling x's, we find that

αx ( x αx + n1 y(y−1) + n2 x(2y−1) ) = n3 y(y−1).    (A)

Considering equation (A) modulo (y), we see that αx ( x αx − n2 x ) ≡ 0 mod (y), so either αx ≡ 0 mod (y) (case A1), or αx ≡ n2 mod (y) (case A2).

Case A1: αx ≡ 0 mod (y). Then αx = y αxy for some αxy. Plugging back into equation (A) and cancelling y's, we see that

αxy ( xy αxy + n1 y(y−1) + n2 x(2y−1) ) = n3 (y−1).    (A1)

Reducing modulo (y−1), we conclude that αxy ( x αxy + n2 x ) ≡ 0 mod (y−1). This means that either αxy ≡ 0 mod (y−1), or αxy ≡ −n2 mod (y−1). The former implies that α = 0 in B. The latter implies that αxy = −n2 + αxyz (y−1) for some αxyz. Plugging back into equation (A1) and cancelling (y−1)'s, we see that

( −n2 + (y−1) αxyz )( x n2 + xy αxyz + n1 y ) = n3.

This equation has a solution for all αxyz and for all n2, so we see that in this case α = x αx = xy αxy = xy( −n2 + αxyz (y−1) ), so α ∈ (xy).

Case A2: αx ≡ n2 mod (y). In this case, we let αx = n2 + y αxy. Plugging into equation (A), we see that

( n2 + y αxy )( x αxy + n1 (y−1) + 2 n2 x ) = n3 (y−1).    (A2)

Considering this equation modulo (y−1), we see that either n2 + αxy ≡ 0 mod (y−1) or αxy + 2 n2 ≡ 0 mod (y−1). Plugging back into equation (A2), it is clear that the former case leads to α ∈ (x(y−1)), and the latter case leads to α ∈ (x(1−2y)). Thus in case A, α ∈ (xy) ∪ (x(y−1)) ∪ (x(2y−1)).

Case B: α ≡ −n1 y(y−1) mod (x). In case B let α = −n1 y(y−1) + x αx for some αx. Plugging into equation (1) and cancelling x's we get that

( −n1 y(y−1) + x αx )( αx + n2 (2y−1) ) = n3 y(y−1).    (B)

Considering equation (B) modulo (y), we see that either αx ≡ 0 mod (y), or αx ≡ n2 mod (y); equivalently, there exists an αxy such that either αx = y αxy (case B1) or αx = n2 + y αxy (case B2).

Case B1: αx = y αxy. Plugging this back into equation (B), cancelling y's, and reducing modulo (y−1), we see that either α ∈ (y(y−1)), or α ∈ (y(y−1), xy).

Case B2: αx = n2 + y αxy. Plugging into equation (B), cancelling y's and reducing modulo (y−1), we see that either α ∈ (y(y−1), x(y−1)) or α ∈ (y(y−1), x(2y−1)).

Summing up all cases, we see that

Dw = (y(y−1), x(y−1)) ∪ (y(y−1), xy) ∪ (y(y−1), x(2y−1)).

We will now show that each of these ideals has WILP.

The ideal (β1, β2) has WILP if and only if there exist n1, n2, n3, n4 ∈ B satisfying:

β1 ( β2 + n1 y(y−1) + n2 x(2y−1) ) = 0    (1)
β1 ( β1 + n3 y(y−1) + n4 x(2y−1) ) = 0    (2)
β2 ( β1 + n3 y(y−1) + n4 x(2y−1) ) = 0    (3)
β2 ( β2 + n1 y(y−1) + n2 x(2y−1) ) = 0    (4)

In each of our cases β1 β2 = 0, so we need only solve the equations (over B):

β1 ( n1 y(y−1) + n2 x(2y−1) ) = 0    (1′)
β1 ( β1 + n3 y(y−1) + n4 x(2y−1) ) = 0    (2′)
β2 ( n3 y(y−1) + n4 x(2y−1) ) = 0    (3′)
β2 ( β2 + n1 y(y−1) + n2 x(2y−1) ) = 0    (4′)

Since each of the ideals is generated by y(y−1) and one other element, we may take β1 = y(y−1). Plugging in, and utilizing the fact that xy(y−1) = 0, we arrive at the equivalent equations:

n1 y^2 (y−1)^2 = 0    (1″)
y^2 (y−1)^2 + n3 y^2 (y−1)^2 = 0    (2″)
β2 ( n3 y(y−1) + n4 x(2y−1) ) = 0    (3″)
β2 ( β2 + n1 y(y−1) + n2 x(2y−1) ) = 0    (4″)

Equations (1″) and (2″) are satisfied if and only if n1 = x n̄1 and n3 = −1 + x n̄3 for some n̄1 and n̄3 in B. Plugging this into equations (3″) and (4″) yields that (y(y−1), β2) has WILP if and only if there exist n̄1, n2, n̄3, and n4 satisfying:

β2 ( −y(y−1) + n4 x(2y−1) ) = 0    (3‴)
β2 ( β2 + n2 x(2y−1) ) = 0    (4‴)

Since x(y−1), xy, and x(2y−1) each annihilate y(y−1), equation (3‴) is satisfied by taking n4 = 0. Thus (y(y−1), β2) has WILP if and only if there exists an n2 ∈ B satisfying:

β2 ( β2 + n2 x(2y−1) ) = 0    (∗)

Then β2 = x(y−1) satisfies (∗) by setting n2 = −1. For β2 = xy, (∗) is satisfied by taking n2 = −1. Finally, when β2 = x(2y−1), we may solve (∗) by setting n2 = −1. Thus, each of the three ideals composing Dw has WILP.
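This last verification is mechanical enough to check by machine. The following Python sketch (mine, not the thesis's LISP program) models B = k[x, y]/(xy(y−1)) over Q with monomial dictionaries, using the normal form x^a y^b = x^a y for a ≥ 1, b ≥ 2, and confirms that each of x(y−1), xy, and x(2y−1) satisfies the final equation with n2 = −1.

```python
# Elements of B = Q[x,y]/(x*y*(y-1)) as {(a, b): coeff} dicts.
# The relation x*y^2 = x*y gives the normal form: any monomial
# x^a y^b with a >= 1, b >= 2 reduces to x^a y.

def reduce_mod(p):
    out = {}
    for (a, b), c in p.items():
        if a >= 1 and b >= 2:
            b = 1
        out[(a, b)] = out.get((a, b), 0) + c
    return {m: c for m, c in out.items() if c != 0}

def add(p, q):
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, 0) + c
    return reduce_mod(out)

def mul(p, q):
    out = {}
    for (a1, b1), c1 in p.items():
        for (a2, b2), c2 in q.items():
            m = (a1 + a2, b1 + b2)
            out[m] = out.get(m, 0) + c1 * c2
    return reduce_mod(out)

def scale(k, p):
    return reduce_mod({m: k * c for m, c in p.items()})

x_y1  = {(1, 1): 1, (1, 0): -1}     # x(y - 1)
xy    = {(1, 1): 1}                 # xy
x_2y1 = {(1, 1): 2, (1, 0): -1}     # x(2y - 1)

for beta2 in (x_y1, xy, x_2y1):
    # check beta2 * (beta2 + n2 * x(2y-1)) = 0 in B with n2 = -1
    val = mul(beta2, add(beta2, scale(-1, x_2y1)))
    assert val == {}, val
print("all three generators satisfy the equation with n2 = -1")
```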

5.3. Easy examples

5.3.1 Example. The twisted cubic along the degenerate elliptic curve.
Let B = k[x, y, z]/((y − z^2)^2 − (x − z^3)^3) (i.e. at each z value, we have the degenerate elliptic curve centered at (z^3, z^2)). Let g = (y − z^2)^2 − (x − z^3)^3. Then

Dw = D(B/k) = Ds(B/k) = (g′) = ( −3(x − z^3)^2, 2(y − z^2), −4z(y − z^2) + 9z^2 (x − z^3)^2 )
   = ( 3(x − z^3)^2, 2(y − z^2) ).

The ideal Dw then has SILP, and therefore WILP.

5.3.2 Example. A bound on Ds for some single point schemes.
Let B = k[X]/(G) be a one point scheme, where k is an algebraically closed field, X = x1, ..., xn, and G = (g1 ... gm)tr. Then (G) contains a power of a maximal ideal, which we may assume is (X). Thus there exist integers ei such that xi^{ei} ∈ (G) and xi^{ei−1} ∉ (G). Suppose that gi = xi^{ei} for i = 1, ..., n, and that xi^{ei} ∉ (g1, ..., g_{i−1}, g_{i+1}, ..., gm) k[X]_{(X)}, that is, xi^{ei} is not in the ideal generated by the

remaining equations in k[X] localized at the maximal ideal (x1, ..., xn). Then

Ds(B/k) ⊆ ∩_{i=1}^{n} ( (ei xi^{ei−1}) + (G) ).

Proof: Let G̃ = (g1 ... gn)tr and let H = (g_{n+1} ... gm)tr. Then α ∈ Ds if and only if there exist α̃ ∈ k[X], and matrices M = (mij) and N = (nij), such that α̃ ≡ α mod (G), MG = 0, and:

α̃ ( I − M ) ≡ ( ∂G̃/∂X ; ∂H/∂X ) N  mod (G).

This implies that for i = 1, ..., n, (1 − mii) α̃ ≡ ei xi^{ei−1} nii mod (G). We are given that Σj mij gj = 0, so by the condition that xi^{ei} ∉ (g1, ..., g_{i−1}, g_{i+1}, ..., gm) k[X]_{(X)}, we know that mii is not invertible in k[X]_{(X)}, and hence is not invertible in B. Thus 1 − mii is invertible in B, so α ∈ (ei xi^{ei−1}) + (G).

5.3.3 Example. If B is a hypersurface over A, B = A[X]/(g), and if g is monic, then B/A is a complete intersection, so by the proposition at the beginning of this chapter, D(B/A) = Ds(B/A) = (∂g/∂X).

5.3.4 Example. The ideal D(B/A) = Ds(B/A) = (i x^{i−1} y^j, j x^i y^{j−1}) for B = A[x, y]/(x^i y^j) (by the same proposition).

5.3.5 Example. The ideal D(B/A) = (x^{i−1} y^{j−1}) for B = A[x, y]/(x^i, y^j), if i and j are invertible in A, and D(B/A) = Ds(B/A).
Proof: The column vector (x^i, y^j)tr has no nonzero annihilators in B, so D = Ds, and α ∈ D if and only if there exists a matrix N such that α satisfies:

α ( 1 0 ; 0 1 ) = ( i x^{i−1}  0 ; 0  j y^{j−1} ) N.

Thus D = (x^{i−1}) ∩ (y^{j−1}) = (x^{i−1} y^{j−1}).

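Example 5.3.5 admits a quick numerical spot check. The Python sketch below (an illustration, not part of the thesis software) works in B = Q[x, y]/(x^3, y^4), so i = 3 and j = 4, and verifies that α = x^2 y^3 solves the matrix equation with the explicit witness N = diag(y^3/3, x^2/4); the choice of N is mine.

```python
from fractions import Fraction

# B = Q[x,y]/(x^3, y^4): elements are dicts {(a, b): coeff};
# monomials with a >= 3 or b >= 4 vanish.
I_EXP, J_EXP = 3, 4

def reduce_mod(p):
    return {(a, b): c for (a, b), c in p.items()
            if a < I_EXP and b < J_EXP and c != 0}

def mul(p, q):
    out = {}
    for (a1, b1), c1 in p.items():
        for (a2, b2), c2 in q.items():
            m = (a1 + a2, b1 + b2)
            out[m] = out.get(m, 0) + c1 * c2
    return reduce_mod(out)

def sub(p, q):
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, 0) - c
    return reduce_mod(out)

alpha = {(2, 3): Fraction(1)}        # x^2 y^3
g1x   = {(2, 0): Fraction(3)}        # d(x^3)/dx = 3x^2
g2y   = {(0, 3): Fraction(4)}        # d(y^4)/dy = 4y^3
n11   = {(0, 3): Fraction(1, 3)}     # y^3 / 3
n22   = {(2, 0): Fraction(1, 4)}     # x^2 / 4

# the diagonal entries of alpha*I - G'N must vanish in B
assert sub(alpha, mul(g1x, n11)) == {}
assert sub(alpha, mul(g2y, n22)) == {}
# and indeed x * alpha = x^3 y^3 = 0 in B
assert mul({(1, 0): Fraction(1)}, alpha) == {}
print("x^2 y^3 lies in D for B = Q[x,y]/(x^3, y^4)")
```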

5.4. A hard example

Let a < b < c ∈ Z+. Let B be the subring of Q[z] generated by z^a, z^b, and z^c (i.e. Q[z^a, z^b, z^c]). Assume the nonredundancy condition that z^b ∉ Q[z^a], and that z^c ∉ Q[z^a, z^b]. Assume that a = 3. We will show that in this case D(B/Q) = (z^{2b}, z^{b+c}, z^{2c}) and that Ds(B/Q) = 0. Note that this nonredundancy condition implies that Z/3Z = {0, b mod 3, c mod 3}. It also implies that 3 < b < c ≤ 2b − 3 and c ≡ 2b mod (3).

5.4.1. Equations for B

We first need to embed B in affine space over Q.

5.4.1.1 Lemma. The ring Q[u, v, w]/( v^2 − u^{(2b−c)/3} w, u^{(b+c)/3} − vw, w^2 − u^{(2c−b)/3} v ) ≅ B by u ↦ z^3, v ↦ z^b, w ↦ z^c.

Proof: Before we begin, we should show that the above expressions are polynomials, by demonstrating that (2b−c)/3, (b+c)/3, and (2c−b)/3 are non-negative integers. Since b and c are the nonzero elements of Z/3Z, we have that 2b ≡ c mod (3), 2c ≡ b mod (3) and b ≡ −c mod (3), so 2b − c, b + c, and 2c − b are all congruent to 0 modulo (3). Thus (2b−c)/3, (b+c)/3, and (2c−b)/3 are integers. They are positive because 3 < b < c ≤ 2b − 3.

We map Q[u, v, w] to B by u ↦ z^3, v ↦ z^b, w ↦ z^c. Then the above three equations map to zero. To see that they generate the kernel, consider a polynomial f which maps to 0. After subtracting off multiples of the first and third equations, we can assume that f is linear in v and w, and after subtracting off multiples of the second equation, we may assume that f contains no monomials divisible by vw, so f = fu + fv v + fw w, where fu, fv, fw ∈ Q[u]. Then fu, fv, and fw all map to polynomials in z^3. Since v and w map to z^b and z^c respectively, and by the nonredundancy condition b ≢ c mod (3) and neither is congruent to 0, the monomials in the images of fu, fv v, and fw w are of different degrees, so the image can only be zero if fu, fv v,

and fw w all map to 0. This can only occur if fu = fv = fw = 0. Thus, we may conclude that the kernel is generated by the claimed equations.

Let g1 = v^2 − u^{(2b−c)/3} w, g2 = u^{(b+c)/3} − vw, g3 = w^2 − u^{(2c−b)/3} v, and G = (g1, g2, g3)tr. To compute D we still need to know the annihilator of G.

5.4.1.2 Lemma. The annihilator of G is generated by the vectors (w, v, u^{(2b−c)/3}) and (u^{(2c−b)/3}, w, v).

Proof: It is easy to verify that (w, v, u^{(2b−c)/3}) and (u^{(2c−b)/3}, w, v) annihilate (v^2 − u^{(2b−c)/3} w, u^{(b+c)/3} − vw, w^2 − u^{(2c−b)/3} v)tr. Conversely, suppose (f, g, h) annihilates the column vector G. By subtracting off multiples of the two annihilators of the lemma, we can reduce (f, g, h) to the point where g is a polynomial in u. But then g (u^{(b+c)/3} − vw) contains monomials purely in u, whereas f and h times the other two polynomials do not. Thus, g must now be zero. Then (f, h) annihilates (g1, g3)tr. Since g1 and g3 are relatively prime, (f, h) must be a multiple of (g3, −g1). Since (g3, 0, −g1) = w (w, v, u^{(2b−c)/3}) − v (u^{(2c−b)/3}, w, v), we see that (w, v, u^{(2b−c)/3}) and (u^{(2c−b)/3}, w, v) do indeed generate the annihilator of G.

We can now begin computing D. We start by determining the system of equations that we must solve.

5.4.1.3 Lemma. Let α ∈ B. Then α ∈ D if and only if there exist ni ∈ B for 1 ≤ i ≤ 9 and aj, bj ∈ B for 1 ≤ j ≤ 3 satisfying the system of equations in figure 1.

Proof: This is the simple matter of writing down the equations for D. α ∈ D if and only if there exists a 3 × 3 matrix N = ( n1 n4 n7 ; n2 n5 n8 ; n3 n6 n9 ) with entries in Q[u, v, w],

and a 3 × 3 matrix M with entries in Q[u, v, w] satisfying MG = 0, such that:

α I + G′ N + M ≡ 0 mod (G),

where I is the 3 × 3 identity matrix.

[Figure 1 belongs here: the 9 × 16 matrix A of monomials in z, applied to the unknown column vector (α, n1, ..., n9, a1, a2, a3, b1, b2, b3)tr and set equal to 0.]

Figure 1. The element α ∈ D if and only if this system has a solution.

Computing the Jacobian matrix of G, we see that

G′ = ( −((2b−c)/3) u^{(2b−c)/3 − 1} w     2v     −u^{(2b−c)/3}
        ((b+c)/3) u^{(b+c)/3 − 1}        −w     −v
       −((2c−b)/3) u^{(2c−b)/3 − 1} v    −u^{(2c−b)/3}     2w )

By the previous lemma, we know that each row of M must be a linear combination of the vectors (w, v, u^{(2b−c)/3}) and (u^{(2c−b)/3}, w, v), so M has the form

M = ( a1 ; a2 ; a3 ) ( w  v  u^{(2b−c)/3} ) + ( b1 ; b2 ; b3 ) ( u^{(2c−b)/3}  w  v )

for some a1, a2, a3, b1, b2, b3 ∈ Q[u, v, w]. Solving these equations in Q[u, v, w]/(G) is the same as solving their image in Q[z^3, z^b, z^c]. Their image is:

α I + ( −((2b−c)/3) z^{2b−3}     2z^b      −z^{2b−c}
         ((b+c)/3) z^{b+c−3}    −z^c      −z^b
         ((b−2c)/3) z^{2c−3}    −z^{2c−b}  2z^c ) ( n1 n4 n7 ; n2 n5 n8 ; n3 n6 n9 )
    + ( a1 ; a2 ; a3 ) ( z^c  z^b  z^{2b−c} ) + ( b1 ; b2 ; b3 ) ( z^{2c−b}  z^c  z^b ) = 0

Rewriting these equations as a matrix system of linear equations in α, n1, ..., n9, a1, a2, a3, b1, b2, b3 yields the equations cited in the lemma.

Let A denote the big matrix in the above lemma. Note that the entries of the matrix are all monomials. Since factoring out or multiplying by powers of z will not change the solution set, we may multiply rows 2, 3 and 6 by z^{c−b}, z^{2c−2b}, and z^{c−b} respectively. We may also factor z^{c−b}, z^{2c−2b}, and z^{c−b} out of rows 4, 7, and

8 respectively. This leaves us with a matrix such that nonzero entries in the same column have the same degree. In particular, this means that there exist matrices S and T such that A = ST, where S is a matrix with entries in Q, and T is the diagonal matrix with diagonal

1, z^{2b−3}, z^b, z^{2b−c}, z^{b+c−3}, z^c, z^b, z^{2c−3}, z^{2c−b}, z^c, z^c, z^b, z^{2b−c}, z^{2c−b}, z^c, z^b.

[S is the 9 × 16 matrix of rational constants obtained from A by deleting these powers of z; its nonzero entries are ±1, ±2, (2b−c)/3, (b+c)/3, and (b−2c)/3.]

We will now prove a lemma to aid in solving equations of this form over the ring B. 5.4.1.4 Lemma. Let S be an m n matrix with entries in Q. Let T be an n n diagonal matrix whose diagonal entries are powers of z . Let T 1 be its inverse (as a matrix over Q(z )). Then KerB n (ST ) = {T 1 z i v | v Qn , T 1 z i v B n , Sv = 0} Proof: Let v B n . Then there exist unique vectors vi Qn such that v = T 1 vi z i . Since each coordinate of v is in B , we must have that the j th entry of vi is 0 if Tj1 z i B , where T 1j denotes the j th diagonal entry of T 1. Conversely, given vectors vi such that the j th entry of vi is 0 if Tj1 z i B , the sum a vector in B n . Therefore if v = T 1 vi z i , then ST v = ST T 1 vi z i = ST T 1 vi z i = Svi z i , so ST v = 0 if and only if Svi = 0 for all i. 63 T 1 vi z i is

5. Examples The statement of the lemma follows immediately. We note that if v B n and T 1 is as in the lemma then T 1 z i v B n if and only if Tj1 z j B whenever vj = 0. We also note that if 0 i < b then z i B if and only if i 0 mod (3), if b i < c then z i B if and only if i 0 mod (3) or i b mod (3), and all z i B for i c. Using the matrix T as cited before the lemma, T 1 is the diagonal matrix whose diagonal is 1, z 2b+3 , z b b, z 2b+c , z bc+3 , z c , z b , z 2c+3 , z 2c+b , z c , z c , z b , z 2b+c , z 2c+b , z c , z b . 5.4.1.5 Lemma. The columns of the matrix in gure 2 form a basis for the kernel of S . Proof: Merely reduce S to row-echelon form over Q[a, b], and write down the kernel. Although this was done with VAXIMA, it is easy to verify that the matrix of gure 1 is the kernel of S . Let K be the matrix of gure 1. Since each column of K contains a 1 in a row which is otherwise zero, the columns of K must be linearly independent. We may also verify that the columns of K are in the kernel of S by matrix multiplication. To show that the columns of K span the kernel of S , it suces to show that the kernel of S has dimension at most 9. Well, it is easy to verify that columns 1, 3, 4, 6, 7, 9, and 10 are linearly independent, so the rank of S is at least 7. Since S is 9 16, this means that the dimension of its kernel is at most 9. Thus the columns of K form a basis for the kernel of S . We are nally in a position to determine D. 5.4.1.6 Proposition. The ideal D(B/Q) is (z 2b , z b+c , z 2c ). Proof: We know that D = {rst coordinates of vectors v | ST v = 0}. Thus D is generated by the rst coordinates of vectors T 1 z i v such that v Qn , T 1 z i v B n , 64

5. Examples 0 0 0 0 0
3 c b c

0 0 0 0 0 0 0
3 c b c

1 0 0 0
1 c b2c 3c

1
1 c 2cb 3c

1
1 c b+c 3c

1 0 0 0
1 c b2c 3c

1
1 c 2cb 3c

1
1 c b+c 3c

3 c b c 1 0 0 0 0 0 0 0 0 0 0 0 0

0 0 0 0
2 c c2b 3c

0
2 c 2bc 3c

0 0 0 0
2 c c2b 3c

1 0 0 0 0 0 0 0 0 0

0
1 c bc 3c

0 0 0 0 0 0 1 0 0 0

0
1 c bc 3c

1 0 0 0 0 0 0

0 1 0 0 0 0 0

0 0 1 0 0 0 0

0 0 0 0 1 0 0

0 0 0 0 0 1 0

0 2 c 2bc 3c 0 0 0 0 0 0 0 0 0 1

Figure 2. The columns of this matrix form a basis for the kernel of S.

and Sv = 0. Since such first coordinates are monomials, we know that D is generated by monomials. If z^i ∈ D, then z^{i+3} ∈ D. Thus it suffices to show that z^{2b}, z^{b+c} and z^{2c} are in D, and that z^{2b−3}, z^{b+c−3} and z^{2c−3} are not in D. The first diagonal element of T^{−1} is 1. Thus z^i ∈ D if and only if there exists a vector v ∈ Ker S whose first coordinate is 1 and satisfying T^{−1} z^i v ∈ B^n.

Let the columns of the matrix in figure 2 be c1 through c9. Let the diagonal entries of T^{−1} be T_1^{−1} through T_16^{−1}. Then T_i^{−1} z^{b+c} ∈ B as long as i is neither 2 nor 8. To illustrate the method used to determine this, we will demonstrate that T_9^{−1} z^{b+c} ∈ B,

whereas T_2^{−1} z^{b+c} may not be in B.

The monomial T_9^{−1} z^{b+c} = z^{−2c+b} z^{b+c} = z^{2b−c}. Since c ≤ 2b − 3, we have that 2b − c ≥ 3. Furthermore, 2b − c ≡ c − c ≡ 0 mod (3), so z^{2b−c} ∈ B.

On the other hand, the monomial T_2^{−1} z^{b+c} = z^{−2b+3} z^{b+c} = z^{c−b+3}. Since c ≤ 2b − 3, we have that c − b + 3 ≤ b. Furthermore, c − b + 3 ≡ c + c ≡ b mod (3), so z^{c−b+3} ∈ B if and only if c = 2b − 3.

In any case, (1/3) c1 − c6 is one in its first coordinate, and is zero in coordinates 2 and 8, so we have that z^{b+c} ∈ D. On the other hand, z^{b+c−3} ∉ D, because T_i^{−1} z^{b+c−3} ∉ B for i ∈ {2, 3, 6, 7, 8, 10, 11, 12, 15, 16}, and there is no linear combination of c1 through c9 which is nonzero in the first coordinate and zero in these coordinates. One may verify this last statement by noting that c2 is the only basis vector with a nonzero entry in row 7, so it may not be used in a linear combination of the vectors c1 through c9 which is zero in row 7. Similarly, c3, c4, c5, c8, and c9 are eliminated. This leaves only c1, c6 and c7 to work with. However, out of these vectors, only c7 is nonzero in row 8, which eliminates c7 from such a linear combination. Finally, c1 is zero in row 6, and c6 is not, so the only linear combination of the two which is zero in row 6 is a multiple of c1. But c1 is zero in the first row, and we need a linear combination which is 1 in the first row, so we can conclude that there is no vector in the kernel of S which is both zero in the rows indicated and nonzero in the first row. Thus, z^{b+c−3} ∉ D.

Consider T_i^{−1} z^{2b}. It is in B if and only if i ∉ {5, 8, 9, 14}. The linear combination (2/3) c2 − c6 is 1 in the first coordinate and zero in coordinates {5, 8, 9, 14}, so z^{2b} ∈ D. However, for T_i^{−1} z^{2b−3}, we see that it is in B if and only if i ∉ {3, 4, 5, 7, 8, 9, 12, 13, 14, 16}, and every element of Ker S which is zero in these coordinates is also zero in coordinate 1. Thus z^{2b−3} ∉ D. As for T_i^{−1} z^{2c}, we see that it is in B if and only if i ∉ {2, 5}. Since (1/3) c2 − c4 is zero in coordinates 2 and 5, but is 1 in coordinate 1, we can conclude that z^{2c} ∈ D. On the

other hand, T_i^{−1} z^{2c−3} is in B if and only if i ∉ {2, 5, 6, 9, 10, 11, 14, 15}. Since every element of Ker S which is zero in these coordinates is also zero in the first coordinate, we see that z^{2c−3} ∉ D. Thus, D = (z^{2b}, z^{b+c}, z^{2c}).

We now compute Ds for the same family of curves.

5.4.1.7 Proposition. Let n ≥ 3 be an integer. Let k be a field. Let e1 through en be positive integers such that z^{ei} ∉ k[z^{e1}, ..., z^{e_{i−1}}]. Let B = k[z^{e1}, ..., z^{en}]. Let K be the kernel of the surjection k[u1, ..., un] → B given by ui ↦ z^{ei}. Let G = (g1 ... gm)tr be a column vector of polynomials generating K. If
1. the entries of G are differences of relatively prime monomials,
2. no proper subset of the entries of G generates K,
3. at least one of the ei is nonzero in k, and
4. the number of entries of G is greater than or equal to n,
then Ds(B/k) = 0.

Proof: We use proposition 5.1.2. Suppose 1, 2, 3, and 4 of this proposition hold. We will show that 1 and 2 imply condition 1 of proposition 5.1.2, and that 1, 3 and 4 imply condition 2 of proposition 5.1.2. It is obvious that condition 3 of proposition 5.1.2 holds.

Let m be the number of rows of G. Let N = { a ∈ k[u1, ..., un]^m | aG = 0 }. Condition 2 implies that there is no a ∈ N having an entry which is a nonzero element of k. Condition 1 implies that N can be generated by vectors whose coordinates are all homogeneous (where deg ui is defined to be ei). Thus (N) ⊆ (u1, ..., un), so the image of (N) in k[z^{e1}, ..., z^{en}] is contained in (z^{e1}, ..., z^{en}). Therefore condition 1 of the proposition holds.

We will denote monomials of k[u1, ..., un] as follows. For a set Δ = {δ1, ..., δr} ⊆ {1, ..., n}, u_Δ^a denotes the monomial u_{δ1}^{a_{δ1}} ··· u_{δr}^{a_{δr}}.

Condition 1 says that each polynomial gi is of the form u_Δ^a − u_Δ̄^a, where Δ ⊆

{1, ..., n}, and Δ̄ denotes its complement. Its image in B is then z^{Σ_{j∈Δ} ej aj} − z^{Σ_{j∈Δ̄} ej aj}, so we must have that Σ_{j∈Δ} ej aj = Σ_{j∈Δ̄} ej aj. Denote this value by di. Then gi′, the image in Q[z] of the derivative of gi with respect to (u1, ..., un), is of the form ( ±a1 z^{di−e1} ... ±an z^{di−en} ), where the +1 occurs if j ∈ Δ and the −1 occurs otherwise. Thus the column vector ( e1 z^{e1} ... en z^{en} )tr annihilates gi′ on the right. Thus it annihilates G′ from the right. By condition 3, it must be nonzero. By condition 4, we know that G′ has at least as many rows as columns, so there exists a nonzero vector v which annihilates G′ from the left too. Thus condition 2 of the proposition holds. Thus by proposition 5.1.2, we conclude that Ds(B/k) = 0.

It is easy to see that the conditions of the proposition are satisfied for k[z^3, z^b, z^c] as a quotient of k[u, v, w] modulo the equations given, so we immediately arrive at the following corollary.

5.4.1.8 Corollary. The ideal Ds(k[z^3, z^b, z^c]/k) = 0.

Lest one think that Ds is always zero for these curves, we supply the following example:

5.4.1.9 Example. Let B = k[z^5, z^6, z^9]. Then Ds(B/k) = D(B/k) = (z^{18}, z^{19}, z^{21}, z^{22}).

Proof: It is easy to verify that B ≅ k[u, v, w]/(u^3 − vw, v^3 − w^2), and that B is thus a complete intersection, so D = Ds.

G′ = ( 3u^2   −w     −v
       0      3v^2   −2w )

so α ∈ D if and only if there exist a1, ..., a6 ∈ B satisfying:

α ( 1 0 ; 0 1 ) + ( 3z^10  −z^9  −z^6 ; 0  3z^12  −2z^9 ) ( a1 a2 ; a3 a4 ; a5 a6 ) = 0

Rewriting these equations in matrix form, multiplying by powers of z so that each column has the same degree, and factoring out powers of z, we see that α ∈ D if and only if there exists a solution to the following equations:

( 1  3  −1  −1  0   0   0
  0  0   0   0  3  −1  −1
  0  0   3  −2  0   0   0
  1  0   0   0  0   3  −2 ) · diag(1, z^10, z^9, z^6, z^13, z^12, z^9) · (α, a1, a3, a5, a2, a4, a6)tr = 0

The kernel of the above matrix of constants is spanned by the columns of:

( −9   5  −3
   3   0   1
   0   2   0
   0   3   0
   1   0   2
   3  −1   3
   0   1   3 )

Let the matrix of constants which appears in the equations be S. Let the above matrix be K. It is easy to check that the columns of K are in the kernel of S. To see that they span the kernel of S, we first note that they are linearly independent, so they will span the kernel of S if we can show that the kernel of S has dimension no greater

than 3. This is easy to see by picking 4 columns of S which are linearly independent, say columns 2, 4, 5, and 6. At this point, it is easy to verify (using the results of the previous section) that z^{18}, z^{19}, z^{21}, and z^{22} are the lowest powers of z, pairwise incongruent modulo (5), which lie in D.
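The membership tests "z^m ∈ B" used throughout this section reduce to membership of m in the numerical semigroup generated by the exponents. A small Python sketch of that criterion (an illustration, not the thesis code):

```python
# z^m lies in B = Q[z^{e1}, ..., z^{en}] iff m is a non-negative integer
# combination of e1, ..., en. Dynamic programming up to a bound:

def semigroup_membership(gens, bound):
    """member[m] is True iff z^m lies in B, for 0 <= m <= bound."""
    member = [False] * (bound + 1)
    member[0] = True
    for m in range(1, bound + 1):
        member[m] = any(m >= g and member[m - g] for g in gens)
    return member

member = semigroup_membership([5, 6, 9], 30)

# the four generators of D found in example 5.4.1.9 all lie in B:
assert all(member[m] for m in (18, 19, 21, 22))
# the gaps of <5, 6, 9>:
gaps = [m for m in range(1, 31) if not member[m]]
print(gaps)   # → [1, 2, 3, 4, 7, 8, 13]
```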



References
[Artin] Artin, M., Lectures on Deformations of Singularities, Tata Institute of Fundamental Research, 1976.
[Coleman] Coleman, R.F., Newton's Lemma, to appear.
[Greenberg] Greenberg, Something Somewhere.
[Elkik] Elkik, R., Équations à coefficients dans un anneau hensélien, Ann. Sci. Éc. Norm. Sup., 6 (1973), 553-603.
[Milne] Milne, J.S., Étale Cohomology, Princeton University Press, 1980.
[Seidenberg] Seidenberg, Constructions in Algebra, Transactions of the American Mathematical Society, 197 (1974), 273-313.
[Tougeron] Tougeron, J.-C., Idéaux de fonctions différentiables (Thèse, Rennes, 1967).


A. Symbolic Calculation

Appendix A. Symbolic Calculation


As may be evident after reading the previous chapter, calculating D by hand can be complicated and tedious. This motivated me to use computers as computational tools. After being unable to find an existing package to solve the types of problems that arise here, I wrote my own software to handle the task.

A.1. Task definition

Let B be an A-algebra. Computing D(B/A) requires solving a system of equations of the form Px = 0, where P is a matrix with entries in B, and x is a column vector of variables. These equations are to be solved over B. If P is m × n, then giving P is equivalent to specifying a B-module homomorphism f : B^n → B^m, so solving Px = 0 is equivalent to determining the kernel of f.

A.2. The utility of computing kernels

Computing kernels is a central task in algebra. If one can compute the kernels of module homomorphisms over a ring B, one can also compute the images of module homomorphisms. After all, if P is a matrix with entries in B, we may compute the image of P by computing the kernel of the matrix [P | I], where I is the m × m identity matrix, and projecting the solutions onto the second set of coordinates. Computing kernels also allows us to compute the intersection of ideals, for to compute (a1, ..., am) ∩ (b1, ..., bn) we need only solve a1 x1 + ··· + am xm + b1 y1 + ··· + bn yn = 0. If {v1, ..., vp} spans the solution set, where vi = (vi1, ..., vi,m+n), then the intersection

is generated by {a1 v11 + ··· + am v1m, ..., a1 vp1 + ··· + am vpm}.

If one can compute the kernels of module homomorphisms over a ring B, one can do so for any quotient of B too. After all, if (g1, ..., gr) is an ideal of B, and P̄ is an m × n matrix over B/(g1, ..., gr), let P be a matrix over B that reduces to P̄. Let N be the block diagonal matrix with (g1 ... gr) as the blocks, having m rows. Let v1, ..., vs generate the kernel of [P | N]. Then the kernel of P̄ is generated by the images of the projections of the vi onto the first n coordinates. Thus, a computer program which determines the kernel of module homomorphisms could be a major part of the engine driving a symbolic algebra system.
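Over a field the reductions just described are easy to exercise. The following Python sketch (an illustration over Q, not the general ring B of the text; all names are mine) computes kernels by Gaussian elimination and then reads the image of P off the kernel of [P | I]:

```python
from fractions import Fraction

def kernel(M, ncols):
    """Basis of the null space of a rational matrix M with `ncols` columns."""
    rows = [list(map(Fraction, r)) for r in M]
    pivots = {}                                  # column -> pivot row
    r = 0
    for c in range(ncols):
        p = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if p is None:
            continue
        rows[r], rows[p] = rows[p], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                rows[i] = [a - rows[i][c] * b for a, b in zip(rows[i], rows[r])]
        pivots[c] = r
        r += 1
    basis = []
    for c in range(ncols):                       # one vector per free column
        if c in pivots:
            continue
        v = [Fraction(0)] * ncols
        v[c] = Fraction(1)
        for pc, pr in pivots.items():
            v[pc] = -rows[pr][c]
        basis.append(v)
    return basis

P = [[1, 2], [2, 4], [0, 1]]                     # 3 x 2 over Q
m, n = 3, 2
aug = [P[i] + [Fraction(int(i == j)) for j in range(m)] for i in range(m)]
ker = kernel(aug, n + m)
# kernel vectors are (x, -P x); their last m coordinates span -image(P),
# which generates the same subspace as image(P):
img = [v[n:] for v in ker]
print(len(img))   # → 2 spanning vectors
```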

A.3. Looking for programs

To solve this problem, I considered using Macaulay, Macsyma, Mathematica, Pari, Cayley, and some other programs. After a cursory review, it seemed that only Macaulay, Macsyma or Mathematica would be capable of computing kernels. After discussing the problem and the programs with people familiar with each program, and after working extensively with Macaulay and Macsyma, it seemed to me that none of them is equipped to compute the kernels of module homomorphisms directly. Although they could probably be programmed to solve such problems, I felt that in the time it would take to learn how to program one of these systems, I could write enough software from scratch to supply the functionality that I needed.

A.4. Writing a program

I wrote a computer program (in LISP) to compute the kernel of a module homomorphism between free modules for various rings. One specifies a ring and a matrix of ring elements, and the computer outputs a basis for the kernel. Specifying a ring consists of defining (in LISP) the functions needed to perform the various ring operations. Namely, one must provide an addition function, a

multiplication function, a subtraction function, the zero element, the unit element, and a zerop predicate which operates on elements of the ring and returns true if and only if the given element is the zero element. To specify a ring with a special property, one must also provide functions which exhibit this property. For example, to specify a PID, one must supply two additional functions. One must provide a PID function which takes two ring elements a and b and returns three elements x, y, and d such that ax + by = d. That is, if one were to use the multiplication, addition, and subtraction functions to compute ax + by − d, then the zerop predicate applied to the result must return true. One must also provide a division function which will take the output of the PID function and either a or b and return a ring element x such that dx = a.

Since I don't know of any method of finding the kernel of a module homomorphism for arbitrary rings, I concerned myself with certain classes of rings. I considered polynomial rings over fields, PIDs and Euclidean domains. After all, this would then enable one to solve the same problem for quotients of such rings, and most rings that appear in algebraic geometry are quotients of polynomial rings over a field.
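In Python, the ring-specification interface just described might look as follows (a sketch only; the names RingSpec and ext_gcd are mine, and the original interface consisted of LISP functions):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class RingSpec:
    add: Callable[[Any, Any], Any]
    mul: Callable[[Any, Any], Any]
    sub: Callable[[Any, Any], Any]
    zero: Any
    one: Any
    zerop: Callable[[Any], bool]
    pid: Callable[[Any, Any], tuple] = None    # (a, b) -> (x, y, d), ax + by = d
    divide: Callable[[Any, Any], Any] = None   # exact division by d

def ext_gcd(a, b):
    """x, y, d with a*x + b*y = d = gcd(a, b)."""
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return x0, y0, a

Z = RingSpec(add=lambda a, b: a + b, mul=lambda a, b: a * b,
             sub=lambda a, b: a - b, zero=0, one=1,
             zerop=lambda a: a == 0,
             pid=lambda a, b: ext_gcd(a, b),
             divide=lambda a, d: a // d)

# the elementary column operation of A.4.2 below: (a b) -> (d 0)
a, b = 12, 18
x, y, d = Z.pid(a, b)
col_op = [[x, -Z.divide(b, d)], [y, Z.divide(a, d)]]   # determinant 1
assert a * col_op[0][0] + b * col_op[1][0] == d
assert a * col_op[0][1] + b * col_op[1][1] == 0
print(d)   # → 6
```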

A.4.1. Polynomial rings over fields

Let B = k[Z] be the polynomial ring over the field k with r variables. Let P = (pij) be an m × n matrix with entries in B. The algorithm we use in this case appears in [Seidenberg]. Since it was only programmed in the case of one variable, we will only discuss this case. We may assume that P has one row, since we can always solve the first equation and substitute back. After rearranging terms, we may assume that the pi are ordered by degree, with p1 being of least degree amongst the pi. To solve the equation Σ pi xi = 0, we note that (0, ..., 0, pj, 0, ..., 0, −pi, 0, ..., 0) is a solution, where pj is in the ith position, and −pi is in the jth position. Thus, after adding the vectors

(pj, 0, ..., 0, −p1, 0, ..., 0) to our set spanning the solution space, we need only search for solutions which are of degree less than the degree of p1 in coordinates 2 through n. However, since Σ pi xi is then of degree less than deg(pn) + deg(p1), we must have that x1 is of degree less than deg(pn). This bounds the degrees of the coordinates of the solutions, and thus allows us to use linear algebra over k to determine the coefficients of such polynomials.

A.4.2. PIDs and Euclidean domains

When B is a PID, we may make use of the structure theorem for modules over a PID to determine matrices U and V which are invertible over B and such that UPV is diagonal. This is done by extended row and column operations. To illustrate, if P = ( a  b ), and ax + by = d with (d) = (a, b), then multiplying on the right by the matrix

( x  −b/d
  y   a/d )

will perform an elementary column operation which reduces P to ( d  0 ). Then if I is the set of integers i such that the ith column of UPV is zero, the kernel of P is generated by the columns {vi}_{i∈I} of V.

When B is a Euclidean domain, we use the same algorithm, except that we make use of the Euclidean algorithm instead of the PID structure. Using the degree function allows us to always start with the smallest entry of the matrix. This allows the algorithm to proceed in a more efficient manner than in the PID case. To enhance the PID algorithm in an analogous way would require factoring the entries of the matrix, and would thus likely be too inefficient.

A.4.3. Special rings

Let {ai} be a set of positive integers such that z^{ai} ∉ Q[z^{a1}, ..., z^{a_{i−1}}]. We also would like to solve equations over rings of the form B = Q[z^{a1}, ..., z^{an}], that is, subrings of Q[z] which are the image of Q[u1, ..., un] in Q[z] under the ring homomorphism

ui ↦ z^ai for some set of integers ai. This case is computed by first solving the system over Q[z] and then computing the linear combinations of solution vectors which land in B. Since there exists n such that m > n implies z^m ∈ B, the latter can be reduced to linear algebra over Q.
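Over Z, the simplest PID, the elementary column operation of A.4.2 can be sketched as follows. This is an illustrative fragment, not part of the original LISP programs; the names `extended_gcd` and `column_reduce_pair` are mine.

```python
def extended_gcd(a, b):
    # For nonnegative integers a, b, return (d, x, y)
    # with a*x + b*y = d = gcd(a, b).
    if b == 0:
        return (a, 1, 0)
    d, x, y = extended_gcd(b, a % b)
    return (d, y, x - (a // b) * y)

def column_reduce_pair(a, b):
    # Build the invertible 2x2 matrix V = ((x, -b/d), (y, a/d)) with
    # (a b) V = (d 0); its determinant is (a*x + b*y)/d = 1.
    d, x, y = extended_gcd(a, b)
    V = [[x, -b // d], [y, a // d]]
    return d, V
```

For example, with (a b) = (12 8) one obtains d = 4, and the second column of V, namely (−b/d, a/d) = (−2, 3), generates the kernel of (a b); the columns of V over the zero columns of UPV generalize this to larger matrices.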

A.5. Results

To compute D(B/Q), with B = Q[z^3, z^b, z^c], we first used Seidenberg's method on Q[z], followed by linear algebra to determine the linear combinations of the solutions which yield solutions lying in B. This method was able, only after many hours, to compute D for Q[z^3, z^4, z^5] and Q[z^3, z^5, z^7] over Q. Larger values of b and c filled all available memory and crashed. This is partially due to my less than perfect coding of Gaussian elimination in LISP. Using the PID method on Q[z] instead of the polynomial method allowed us to compute D for Q[z^3, z^7, z^8], Q[z^3, z^7, z^11], and Q[z^3, z^8, z^10]. Higher values of b and c were not possible for the same reasons. Finally, using the Euclidean algorithm, we were also able to compute D for Q[z^3, z^8, z^13] and Q[z^3, z^10, z^11]. Using this method, all computations ran comparatively quickly; it was able to compute D for Q[z^3, z^5, z^7] in only a couple of minutes.

Before using the computer to solve these equations, I only knew an algorithm for computing equations for the kernel of Q[u, v, w] → Q[z^3, z^b, z^c], and an algorithm for finding the annihilator of the kernel. It was only after seeing that D was always generated by powers of z that I began looking for an explanation of this, and was able first to determine equations for the kernel of Q[u, v, w] → Q[z^3, z^b, z^c], and then to compute D in general for Q[z^3, z^b, z^c] over Q.
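The reduction of A.4.3, and the computations above, depend on knowing which z^m lie in B = Q[z^a1, . . . , z^an], that is, which exponents m are nonnegative integer combinations of the ai (a numerical semigroup). A minimal sketch of that membership computation; the function name is hypothetical and this is not the original code:

```python
def semigroup_members(generators, bound):
    # Exponents m <= bound with z^m in Q[z^g : g in generators]:
    # m is reachable iff m = 0 or m - g is reachable for some generator g.
    reachable = {0}
    for m in range(1, bound + 1):
        if any(m >= g and (m - g) in reachable for g in generators):
            reachable.add(m)
    return reachable
```

For generators (3, 4, 5) every m ≥ 3 is reachable, so the n of A.4.3 may be taken to be 2; for (3, 7, 11) the unreachable exponents are 1, 2, 4, 5, and 8.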

A.6. Verifiability

Whenever a computer is used to do calculations, there is always a question as to the

accuracy of the calculation. How do you check the results? In my programs, I have endeavored to supply proofs of various computations. For example, it is not acceptable to merely reduce a matrix to row-echelon form. One should supply some evidence that this is in fact the row-echelon form of the matrix supplied. This may be done, for example, by supplying the elementary matrix which reduces the given matrix to row-echelon form. A proof may then be supplied as to the correctness of the solution by computing the determinant of the elementary matrix to demonstrate that it is invertible, and by multiplying the elementary matrix by the given matrix and verifying that one does indeed get the row-echelon matrix.

Unfortunately, this is not always possible. In fact, I did have a bug in my programs which led to finding ideals contained in D rather than all of D. It was only after I computed D by hand that I found the error.
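The certificate idea described here can be sketched as follows: row-reduce over Q while accumulating the matrix E of elementary operations, then verify independently that E·A equals the claimed echelon form. This is an illustrative reconstruction in Python, not the original LISP; E is a product of elementary operations, so it is invertible by construction, but its determinant could be computed as an additional check.

```python
from fractions import Fraction

def row_echelon_with_certificate(A):
    # Gaussian elimination over Q that also returns the elementary
    # matrix E with E @ A = R, so the reduction can be verified.
    m, n = len(A), len(A[0])
    R = [[Fraction(x) for x in row] for row in A]
    E = [[Fraction(int(i == j)) for j in range(m)] for i in range(m)]
    pivot_row = 0
    for col in range(n):
        # Find a pivot at or below pivot_row in this column.
        pr = next((r for r in range(pivot_row, m) if R[r][col] != 0), None)
        if pr is None:
            continue
        R[pivot_row], R[pr] = R[pr], R[pivot_row]
        E[pivot_row], E[pr] = E[pr], E[pivot_row]
        p = R[pivot_row][col]
        R[pivot_row] = [x / p for x in R[pivot_row]]
        E[pivot_row] = [x / p for x in E[pivot_row]]
        for r in range(m):
            if r != pivot_row and R[r][col] != 0:
                f = R[r][col]
                R[r] = [a - f * b for a, b in zip(R[r], R[pivot_row])]
                E[r] = [a - f * b for a, b in zip(E[r], E[pivot_row])]
        pivot_row += 1
        if pivot_row == m:
            break
    return R, E

def verify_reduction(A, R, E):
    # Independent check: recompute E @ A and compare with R.
    m, n = len(A), len(A[0])
    prod = [[sum(E[i][k] * Fraction(A[k][j]) for k in range(m))
             for j in range(n)] for i in range(m)]
    return prod == R
```

The verifier needs only matrix multiplication, so a bug in the elimination itself cannot silently corrupt a result that passes `verify_reduction`.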


Table of Contents

A. Appendix - Symbolic Calculation . . . . . . . . . . . . . . . . 72
  A.1. Task definition . . . . . . . . . . . . . . . . . . . . . . 72
  A.2. The utility of computing kernels . . . . . . . . . . . . . 72
  A.3. Looking for programs . . . . . . . . . . . . . . . . . . . 73
  A.4. Writing a program . . . . . . . . . . . . . . . . . . . . . 73
    A.4.1. Polynomial rings over fields . . . . . . . . . . . . . 74
    A.4.2. PIDs and Euclidean domains . . . . . . . . . . . . . . 75
    A.4.3. Special rings . . . . . . . . . . . . . . . . . . . . . 75
  A.5. Results . . . . . . . . . . . . . . . . . . . . . . . . . . 76
  A.6. Verifiability . . . . . . . . . . . . . . . . . . . . . . . 76

Acknowledgements

I would like to thank Robert F. Coleman for introducing me to this subject, and for enabling me to investigate it in Europe. I would like to thank Arthur Ogus for meeting with me on the subject while Professor Coleman was away from Berkeley. I would also like to thank Bas Edixhoven for fruitful discussions about various parts of the subject.

