David A. Grable
Institut für Informatik
Humboldt-Universität zu Berlin
D-10099 Berlin, Germany
Submitted: August 5, 1996; Accepted: February 26, 1997
Abstract
The behaviour of the random greedy algorithm for constructing a maximal packing of edge-disjoint triangles on $n$ points (a maximal partial triple system) is analysed with particular emphasis on the final number of unused edges. It is shown that this number is at most $n^{7/4+o(1)}$, halfway from the previous best-known upper bound $o(n^2)$ to the conjectured value $n^{3/2+o(1)}$.
The more general problem of random greedy packing in hypergraphs is also considered.
1 Introduction
Consider the following simple algorithm for constructing a maximal collection of pair-disjoint triples in a set of $n$ points: repeatedly pick a triple uniformly at random from among all triples which do not share a pair with any previously picked triple, until there are no more candidate triples. It is perhaps mildly surprising that such a simple random greedy procedure almost always results in a collection of triples which cover almost all of the pairs [13, 12]. In this paper we obtain significantly tighter bounds on the number of uncovered pairs. In particular, we show that the number of uncovered pairs is almost always no more than $n^{7/4+o(1)}$, where $o(1)$ is a function going to 0 as $n$ goes to infinity.
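The procedure just described can be sketched directly in code. The following Python simulation is illustrative only (the function name and the brute-force candidate filtering are our own choices, not part of the paper): it picks random triples until no candidate remains and reports the number of uncovered pairs.

```python
import itertools
import random

def random_greedy_triangle_packing(n, seed=0):
    """Pick triples uniformly at random among those sharing no pair with
    any previously picked triple; stop when no candidates remain."""
    rng = random.Random(seed)
    candidates = list(itertools.combinations(range(n), 3))
    used_pairs = set()
    packing = []
    while candidates:
        t = rng.choice(candidates)
        packing.append(t)
        used_pairs.update(itertools.combinations(t, 2))
        # Keep only triples none of whose three pairs is already covered.
        candidates = [c for c in candidates
                      if not any(p in used_pairs
                                 for p in itertools.combinations(c, 2))]
    uncovered = n * (n - 1) // 2 - 3 * len(packing)
    return packing, uncovered

packing, uncovered = random_greedy_triangle_packing(30)
```

For small $n$ this runs in well under a second; the subject of the paper is the asymptotic behaviour of the number of uncovered pairs, conjectured to be $n^{3/2+o(1)}$ and shown here to be at most $n^{7/4+o(1)}$.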
This problem is expressed nicely in the language of design theory. A partial triple system on $n$ points (a PTS($n$) for short) is a collection of 3-element subsets (triples) of $\{1, \ldots, n\}$ such that each 2-element subset (pair) is contained in (covered by) at most one triple. Of considerable interest are partial triple systems in which every pair is covered by exactly one triple. Such systems are called Steiner triple systems. The reader is referred to [3] for more background on design theory.
A partial triple system is maximal if no triple can be added without covering some pair more
than once. It is obvious, but worth noting, that Steiner triple systems are maximal, but they are
$\cdots - \tfrac{5}{2}\varepsilon^3 + O(\varepsilon^4)$.
With this in mind, we define real numbers $D_s := H^{2s} n$ and $E_s := H^s \binom{n}{2}$. When going from $G_{s-1}$ to $G_s$, we will nibble with probability $\varepsilon/D_{s-1}$. Nibbling at this rate will ensure that each edge of $G_s$ is in about $D_s$ triangles and that $G_s$ has about $E_s$ edges.

Just for reference, in the traditional nibble, $\varepsilon$ is a small constant and $e^{-\varepsilon}$ is used in place of $H$. Indeed, $H = e^{-\varepsilon} + O(\varepsilon^2)$, so the difference is relatively insignificant, but this minor difference benefits us in a major way.
Of course, the $D_s$ and $E_s$ don't track the number of triangles containing each edge and the number of edges exactly.

the electronic journal of combinatorics 4 (1997), #R11

Fix a small positive constant $\delta$ and define quantities
$$0 = \gamma_0 \le \gamma_1 \le \cdots \le \gamma_s \le \cdots \le \gamma_t$$
such that for each $0 \le s \le t$ and edge $e \in G_s$,
$$\deg_s(e) = (1 \pm \gamma_s) D_s = (1 \pm \gamma_s) H^{2s} n,$$
where $\deg_s(e)$ is defined to be the number of triangles of $G_s$ containing edge $e$ (the motivation for this seemingly strange concept of degree is explained in the last section along with the connection to packings in hypergraphs, where the edges here will correspond to points and the triangles to hyperedges) and $(1 \pm \gamma_s) D_s$ denotes some value in the interval $[(1 - \gamma_s) D_s, (1 + \gamma_s) D_s]$. By forcing $\gamma_t \le \delta$, we are really forcing $t$ to be not too large. Similarly, define
$$0 = \sigma_0 \le \sigma_1 \le \cdots \le \sigma_s \le \cdots \le \sigma_t$$
such that for each $0 \le s \le t$,
$$|E(G_s)| = (1 \pm \sigma_s) E_s = (1 \pm \sigma_s) H^s \binom{n}{2}.$$
These definitions tell us nothing about how large the $\gamma_s$'s and the $\sigma_s$'s are. In the analyses that follow, our goal will be to find upper bounds on these quantities (as functions of $s$) and therewith determine the maximum $t$ such that $\gamma_t$ and $\sigma_t$ are less than $\delta$. This maximum value of $t$ will then minimize $|E(G_t)| \approx E_t = H^t \binom{n}{2}$. Of course, we can adjust $\delta$ and $\varepsilon$ to suit our convenience.

To simplify some of the computations which we encounter along the way, we will assert now that we will choose $\varepsilon$ so that
$$t \le n \quad\text{and}\quad \varepsilon D_t \ge 1. \tag{1}$$
After choosing $\varepsilon$ and determining $t$ we will verify that these assertions are indeed valid.
3 Simplified Analysis
In this and the next section, we give an analysis which leads to a non-optimal result in order to
illustrate the basic idea. In the following section, we show how to enhance this analysis to improve
the result.
Suppose that we have $G_{s-1}$ and know that $\deg_{s-1}(e) = (1 \pm \gamma_{s-1}) D_{s-1}$ for every edge $e \in G_{s-1}$ and that $|E(G_{s-1})| = (1 \pm \sigma_{s-1}) E_{s-1}$, where $\gamma_{s-1}, \sigma_{s-1} < 1$. What can we say about $\gamma_s$ and $\sigma_s$?

To simplify the notation, we'll systematically drop the subscript $s-1$ and replace the subscript $s$ by $'$, i.e. $G_{s-1}$ becomes $G$, $G_s$ becomes $G'$, and so on. We want to compute the expectation of $|E(G')|$ in order to bound $\sigma'$.
To compute the expectation, we'll need a few more definitions and two lemmas. We say two triangles are adjacent if they share a common edge (but not merely a common vertex). A path is a sequence of adjacent triangles and a cluster is a maximal collection of selected triangles in which every two triangles are joined by a path within the collection.

The ordering of the triangles within a cluster determines which of the triangles of that cluster succeed, but the success of the triangles in one cluster is independent of the orderings of the other clusters.

The first lemma says that with high probability, we will never encounter clusters with more than a large, but fixed, number of triangles.

Lemma 1 With probability $1 - O(n^{-2})$, there are no clusters with more than $m = O(1)$ triangles.
Proof The probability that all $c$ of the triangles of a potential partial (i.e. not necessarily maximal) $c$-cluster are selected is $(\varepsilon/D)^c$ and the number of potential partial $c$-clusters is at most $(ED/3) \cdot 3D \cdot 5D \cdots (2c-1)D < n^2 (2c)^c D^c$. Therefore, the expected number of partial $c$-clusters is at most $(2c)^c \varepsilon^c n^2$.

To guarantee that, with probability $1 - O(n^{-2})$, no cluster with $m$ or more triangles is ever selected, we show that the expected number of partial $m$-clusters is $O(n^{-2})$ and use Markov's inequality. Let $\beta > 0$ be a fixed real number such that $n^{-\beta} \ge \varepsilon$ and take $m \ge 4/\beta$; then the expected number of partial $m$-clusters is at most $(2m)^m n^{2 - \beta m} = O(n^{-2})$.

Lemma 2 For every fixed set $F$ of edges of $G$,
$$\Pr\Big[\bigcap_{f \in F} (f \in E')\Big] = 1 - \frac{\varepsilon}{D} \sum_{f \in F} \deg(f) + O(\varepsilon^2).$$

Note The constant concealed by the $O$-notation is independent of $\varepsilon$, $\gamma$, and $D$, and therefore also of $s$, but does depend on $|F|$. In point of fact, the constant is at most $3|F| + 4|F|^2$, but the exact value is irrelevant.
$T$'s cluster | $\Pr[\text{cluster} \mid T \text{ is selected}]$ | $\Pr[T \text{ is accepted} \mid \text{cluster}]$

$T$ alone | $\big(1 - \tfrac{\varepsilon}{D}\big)^{3D-3} = 1 - 3\varepsilon + \tfrac{9}{2}\varepsilon^2 - O(\varepsilon^3)$ | $1$

$T$ plus one adjacent selected triangle | $3D \cdot \tfrac{\varepsilon}{D}\big(1 - \tfrac{\varepsilon}{D}\big)^{5D-6} = 3\varepsilon - 15\varepsilon^2 + O(\varepsilon^3)$ | $\tfrac{1}{2}$

two further triangles on the same edge of $T$ | $3\binom{D}{2}\big(\tfrac{\varepsilon}{D}\big)^2\big(1 - \tfrac{\varepsilon}{D}\big)^{7D-9} = \tfrac{3}{2}\varepsilon^2 + O(\varepsilon^3)$ | $\tfrac{1}{3}$

path of three triangles, $T$ at an end | $6D^2\big(\tfrac{\varepsilon}{D}\big)^2\big(1 - \tfrac{\varepsilon}{D}\big)^{7D-9} = 6\varepsilon^2 + O(\varepsilon^3)$ | $\tfrac{2}{3}$

path of three triangles, $T$ in the middle | $3D^2\big(\tfrac{\varepsilon}{D}\big)^2\big(1 - \tfrac{\varepsilon}{D}\big)^{7D-9} = 3\varepsilon^2 + O(\varepsilon^3)$ | $\tfrac{1}{3}$

other three-triangle clusters | $O(D)\big(\tfrac{\varepsilon}{D}\big)^2\big(1 - \tfrac{\varepsilon}{D}\big)^{6D-9} = O(\varepsilon^2/D) = O(\varepsilon^3)$ | $\tfrac{1}{3}$

cluster with $c$ triangles | $O(\varepsilon^{c-1})$ | between $\tfrac{1}{c}$ and $1$

Table 1: Small clusters in a graph with $\deg(e) = D$ for every edge $e$
Proof First compute the probability for one edge $e$:
$$\Pr[e \notin E'] = \sum_{T \ni e} \Pr[T \text{ is accepted}],$$
where the sum is over all triangles $T$ containing $e$. Equality follows from the fact that the events "$T$ is accepted" are disjoint.

Fixing $T$, $T$ is accepted only if it is first selected, which happens with probability $\varepsilon/D$. Given that it is selected, it will be accepted depending on what type of cluster it is in:
$$\Pr[T \text{ is accepted} \mid T \text{ is selected}] = \sum_{\text{cluster type } C} \Pr[T \text{ is accepted} \mid \text{cluster type } C] \cdot \Pr[\text{cluster type } C \mid T \text{ is selected}].$$
The small clusters and the relevant probabilities are given in Table 1. In the table we assume that each edge is contained in exactly $D$ triangles; in the present situation each edge is contained in $(1 \pm \gamma)D$ triangles, so we must introduce additional $(1 \pm \gamma)$ factors. Keep in mind, however, that the probability of selection remains exactly $\varepsilon/D$. The first few terms of the sum, corresponding to clusters with 3 or fewer triangles, are
$$\begin{aligned}
&\big[1 - 3(1 \pm \gamma)\varepsilon + \tfrac{9}{2}(1 \pm \gamma)^2 \varepsilon^2 - O(\varepsilon^3)\big] + \tfrac{1}{2}\big[3(1 \pm \gamma)\varepsilon - 15(1 \pm \gamma)^2 \varepsilon^2 + O(\varepsilon^3)\big] \\
&\qquad + \tfrac{1}{3}\big[\tfrac{3}{2}(1 \pm \gamma)^2 \varepsilon^2 + O(\varepsilon^3)\big] + \tfrac{2}{3}\big[6(1 \pm \gamma)^2 \varepsilon^2 + O(\varepsilon^3)\big] + \tfrac{1}{3}\big[3(1 \pm \gamma)^2 \varepsilon^2 + O(\varepsilon^3)\big] + \tfrac{1}{3}\big[O(\varepsilon^3)\big] \\
&\quad = 1 - \tfrac{3}{2}(1 \pm \gamma)\varepsilon + \tfrac{5}{2}(1 \pm \gamma)^2 \varepsilon^2 - O(\varepsilon^3).
\end{aligned}$$
Each cluster type with more than 3 triangles contributes $O(\varepsilon^3)$ to the sum and, since Lemma 1 says that we only have to consider the constantly-many cluster types having at most $m$ triangles, all these cluster types together contribute only $O(\varepsilon^3)$ to the sum. Therefore,
$$\Pr[T \text{ is accepted} \mid T \text{ is selected}] = 1 - \tfrac{3}{2}(1 \pm \gamma)\varepsilon + \tfrac{5}{2}(1 \pm \gamma)^2 \varepsilon^2 - O(\varepsilon^3).$$
Actually, at this point we only need to know that this probability is $1 - O(\varepsilon)$, but we will return to this computation when improving the result. It then follows that
$$\Pr[e \in E'] = 1 - \sum_{T \ni e} \Pr[T \text{ is accepted}] = 1 - \frac{\varepsilon}{D}(1 - O(\varepsilon)) \deg(e) = 1 - \frac{\varepsilon}{D} \deg(e) + O(\varepsilon^2). \tag{2}$$
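As a sanity check on the arithmetic above, the following Python snippet (illustrative only; written for the ideal case $\gamma = 0$) sums the Table 1 terms, weighted by their acceptance probabilities, as polynomials in $\varepsilon$.

```python
from fractions import Fraction as F

# Each entry: (acceptance probability, coefficients [1, eps, eps^2] of
# Pr[cluster | T selected] in the ideal case gamma = 0), as in Table 1.
terms = [
    (F(1),    [F(1), F(-3), F(9, 2)]),   # T alone
    (F(1, 2), [F(0), F(3),  F(-15)]),    # T plus one adjacent triangle
    (F(1, 3), [F(0), F(0),  F(3, 2)]),   # two triangles on one edge of T
    (F(2, 3), [F(0), F(0),  F(6)]),      # path of three, T at an end
    (F(1, 3), [F(0), F(0),  F(3)]),      # path of three, T in the middle
]
# Weighted coefficient sum; should give 1 - (3/2)eps + (5/2)eps^2.
total = [sum(w * coeffs[k] for w, coeffs in terms) for k in range(3)]
```

The result is the coefficient list $[1, -\tfrac{3}{2}, \tfrac{5}{2}]$, matching the expansion $1 - \frac{3}{2}\varepsilon + \frac{5}{2}\varepsilon^2 - O(\varepsilon^3)$ derived above.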
Now consider the general case, when $F = \{e_1, e_2, \ldots, e_f\}$. We use the Bonferroni inequalities to approximate the complementary probability:
$$\sum_{i=1}^{f} \Pr[e_i \notin E'] - \sum_{1 \le i < j \le f} \Pr[e_i, e_j \notin E'] \le \Pr\Big[\bigcup_{i=1}^{f} (e_i \notin E')\Big] \le \sum_{i=1}^{f} \Pr[e_i \notin E'].$$
Looking at the terms of the double sum, we see that
$$\Pr[e_i, e_j \notin E'] = \sum_{T_i \ni e_i} \sum_{T_j \ni e_j} \Big(\frac{\varepsilon}{D}\Big)^2 \Pr[T_i \text{ and } T_j \text{ are accepted} \mid T_i \text{ and } T_j \text{ are selected}] \le (1 + \gamma)^2 \varepsilon^2,$$
since there are at most $(1 + \gamma)D$ choices for each of $T_i$ and $T_j$ and the probability is trivially at most 1. Thus, the double sum is
$$\sum_{1 \le i < j \le f} \Pr[e_i, e_j \notin E'] \le (1 + \gamma)^2 \binom{f}{2} \varepsilon^2 = O(\varepsilon^2)$$
and we can conclude that
$$\Pr\Big[\bigcup_{i=1}^{f} (e_i \notin E')\Big] = \sum_{i=1}^{f} \Pr[e_i \notin E'] - O(\varepsilon^2).$$
To finish up the proof of the lemma, note that
$$\Pr\Big[\bigcap_{i=1}^{f} (e_i \in E')\Big] = 1 - \Pr\Big[\bigcup_{i=1}^{f} (e_i \notin E')\Big] = 1 - \sum_{f \in F} \Pr[f \notin E'] + O(\varepsilon^2) = 1 - \frac{\varepsilon}{D} \sum_{f \in F} \deg(f) + O(\varepsilon^2).$$

Equation (2) also gives us the approximate value of $H$. $H$ was defined as the survival probability of an edge in an ideal graph, where every edge has degree exactly $D$. Equation (2) proves that
$$H = \Pr[e \in E'] = 1 - \varepsilon + O(\varepsilon^2).$$
For a fixed edge $e$, we would like to tightly bound the new degree of $e$. Using Lemma 2, in each case with $|F| = 2$, we see that
$$\begin{aligned}
\operatorname{Ex}[\deg'(e) \mid e \in E'] &= \sum_{T \ni e} \Pr[T \text{ survives} \mid e \in E'] \\
&= \sum_{T = \{e, e_1, e_2\}} \Big(1 - \frac{\varepsilon}{D}(\deg(e_1) + \deg(e_2)) + O(\varepsilon^2)\Big) \\
&= \big(1 - (1 \pm \gamma)2\varepsilon + O(\varepsilon^2)\big) \deg(e) \\
&= \big(1 - 2\varepsilon + O(\varepsilon^2)\big)\big(1 \mp 2\gamma\varepsilon + O(\varepsilon^2)\big)(1 \pm \gamma) D \\
&= \big(1 \pm (\gamma + 2\gamma\varepsilon + 2\gamma^2\varepsilon + O(\varepsilon^2))\big) D'.
\end{aligned}$$
Since we assumed further that there was a constant $\delta > \gamma$, we have shown that
$$\operatorname{Ex}[\deg'(e) \mid e \in E'] = \big(1 \pm (\gamma + (1 + \delta)2\gamma\varepsilon + O(\varepsilon^2))\big) D'. \tag{3}$$
We defer the computation of the deviation to the next section. There we will prove the following.
Again, the exact constant hidden by the O-notation is independent of s.
the electronic journal of combinatorics 4 (1997), #R11 10
Lemma 3 If $\varepsilon D = \Omega(1)$ then, with probability at least $1 - 3n^{-10}$, $\deg'(e) = (1 \pm \gamma')D'$ with
$$\gamma' = \gamma(1 + (1 + \delta)2\varepsilon) + O(\varepsilon^2) + O\Big(\sqrt{(\varepsilon/D) \log n}\Big). \tag{4}$$
We aim to solve this recurrence relation, but first we consider the number of surviving edges and the related error factor $\sigma_s$. Our goal here is to prove a bound on $\sigma_s$ analogous to (4). We compute
$$\begin{aligned}
\operatorname{Ex}[|E(G')|] &= \sum_{e \in E(G)} \Pr[e \text{ survives}] \\
&= \sum_{e \in E(G)} \Big(1 - \frac{\varepsilon}{D} \deg(e) + O(\varepsilon^2)\Big) \\
&= \big(1 - (1 \pm \gamma)\varepsilon + O(\varepsilon^2)\big) |E(G)| \\
&= \big(1 - \varepsilon + O(\varepsilon^2)\big)\big(1 \mp \gamma\varepsilon + O(\varepsilon^2)\big)(1 \pm \sigma) E \\
&= \big(1 \pm (\sigma + \gamma\varepsilon + \sigma\gamma\varepsilon + O(\varepsilon^2))\big) E'.
\end{aligned}$$
Since we assumed further that there was a constant $\delta > \sigma$, we have shown that
$$\operatorname{Ex}[|E(G')|] = \big(1 \pm (\sigma + (1 + \delta)\gamma\varepsilon + O(\varepsilon^2))\big) E'. \tag{5}$$
Again we defer the computation of the deviation to the next section.

Lemma 4 If $\varepsilon E = \Omega(1)$ then, with probability at least $1 - 3n^{-10}$, $|E(G')| = (1 \pm \sigma')E'$, where $\sigma'$ is at most
$$\sigma' = \sigma + (1 + \delta)\gamma\varepsilon + O(\varepsilon^2) + O\Big(\sqrt{(\varepsilon/E) \log n}\Big).$$

We would like to say that this last expression is at most the bound
$$\gamma(1 + (1 + \delta)2\varepsilon) + O(\varepsilon^2) + O\Big(\sqrt{(\varepsilon/D) \log n}\Big)$$
on $\gamma'$, using whichever constants are greater, those given here or those in (4). With this new $\gamma'$, we may safely conclude that $\sigma' \le \gamma'$.
Returning to a broader perspective, (4) (possibly with modified hidden constants) says that $\gamma_s$ is less than the solution to the recurrence
$$\gamma_s = \gamma_{s-1} H^{-(1+\delta)2} + O(\varepsilon^2) + O\Big(\sqrt{\frac{\varepsilon}{n H^{(1+\delta)2s}} \log n}\Big), \tag{6}$$
where, as previously noted, $H = 1 - \varepsilon + O(\varepsilon^2)$.
Recurrences, such as this one, of the form
$$x_s = A^2 x_{s-1} + B + C A^s$$
have the solution
$$x_s = B\, \frac{A^{2s} - 1}{A^2 - 1} + C A^s\, \frac{A^s - 1}{A - 1}.$$
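This closed form is easy to confirm numerically. The following Python snippet is illustrative only (the sample values of $A$, $B$, $C$ and the initial value $x_0 = 0$ are arbitrary choices, not from the paper); it compares the closed form against direct iteration.

```python
# Check the closed form for x_s = A^2 x_{s-1} + B + C A^s with x_0 = 0.
A, B, C = 1.01, 0.5, 0.25

def iterate(s):
    x = 0.0
    for k in range(1, s + 1):
        x = A**2 * x + B + C * A**k
    return x

def closed_form(s):
    return B * (A**(2 * s) - 1) / (A**2 - 1) + C * A**s * (A**s - 1) / (A - 1)

for s in (1, 5, 20):
    assert abs(iterate(s) - closed_form(s)) < 1e-9
```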
For us, this means that
$$\gamma_s = O\Big(\frac{\varepsilon^2 H^{-(1+\delta)2s}}{(1 + \delta)2\varepsilon}\Big) + O\Big(\sqrt{\frac{\varepsilon}{n} \log n}\, \frac{H^{-(1+\delta)2s}}{(1 + \delta)\varepsilon}\Big) = O\big(\varepsilon H^{-(1+\delta)2s}\big) + O\Big(\sqrt{\frac{\log n}{\varepsilon n}}\, H^{-(1+\delta)2s}\Big).$$
We were very careful to point out that the hidden constants are independent of $s$ in order to ensure that $B$ and $C$ are also. This guarantees that this computation is valid.

We would like to take as many steps as possible, always ensuring that $\gamma_s \le \delta$. Since $\gamma_s$ forms an increasing sequence, it is only necessary that
$$\gamma_t = O\big(\varepsilon H^{-(1+\delta)2t}\big) + O\Big(\sqrt{\frac{\log n}{\varepsilon n}}\, H^{-(1+\delta)2t}\Big) \le \delta. \tag{7}$$
This gives us enough information to set $\varepsilon$ and $t$ intelligently. In order that both terms of (7) be roughly equal, set
$$\varepsilon = n^{-1/3+\theta}.$$
Condition (7) is then implied by
$$H^t \ge n^{-1/6+\theta}$$
for sufficiently large $n$. This inequality is easily satisfied with equality by setting
$$t = \frac{(1/6 - \theta) \log n}{\log(1/H)} = \Theta\big(n^{1/3-\theta} \log n\big).$$
This verifies the first part of assertion (1). The second part is valid since
$$\varepsilon D_t = \varepsilon H^{2t} n = n^{1/3+3\theta} \ge 1.$$
Turning finally to the number of edges remaining at this step $t$, we see that
$$|E_t| = O\big(H^t n^2\big) = O\big(n^{11/6+\theta}\big).$$
We can force the number of remaining edges as close to $n^{11/6}$ as we like by fixing $\theta > 0$ sufficiently small. In other words, modulo Lemmas 3 and 4, we've proven

Theorem 5 For any fixed $\theta > 0$, the random greedy triangle packing algorithm almost always leaves fewer than $O\big(n^{11/6+\theta}\big)$ uncovered pairs.
4 Deviations
To bound the deviations of the random variables discussed in the previous section, we'll need the powerful large deviation inequality discussed in full detail in [8]. Here we only give the essential definitions and relevant considerations.
Assume we have a probability space generated by independent random variables $X_i$ (choices), where choice $X_i$ is from the finite set $A_i$, and a function (random variable) $Y = f(X_1, \ldots, X_n)$ on that probability space. We are interested in proving a sharp concentration result on $Y$, that is, to bound $\Pr[|Y - \operatorname{Ex}[Y]| > a]$, for any $a$, as well as we can.
Consider the following query game, the aim of which is to determine the value of $Y$. We can make queries of the form "what is the value of $X_i$?" in any order we want. The questioning can be adaptive, meaning that we can choose the next $X_i$ to be queried as a function of the knowledge gained so far. A querying strategy for $Y$ is an algorithm for determining which queries should be made in which order. We can describe a strategy as a decision tree whose internal nodes designate queries to be made. If a given node represents the query "what is the value of $X_i$?" then it has as many children as there are possible values for $X_i$. You might think of the down edges as being labelled with the values $a \in A_i$. In this fashion, every path from the root to a node going through vertices corresponding to $X_{i_1}, \ldots, X_{i_k}$ defines an assignment $a_1, \ldots, a_k$ for these random variables. We can then think of the $j$th node as storing the value $\operatorname{Ex}[Y \mid X_{i_1} = a_1, \ldots, X_{i_{j-1}} = a_{j-1}]$. In particular, the leaves of the tree store each of the possible values of $Y$.
The idea behind this query-based large deviation inequality is to bound the deviation of $Y$ by carefully bounding the contribution to the deviation made by each $X_i$. A strategy efficiently expresses which $X_i$ are relevant and, more importantly, allows us to estimate the contribution at the time that the query is made. For instance, the result of previous queries may imply that the next query will have a more limited effect than would have been initially anticipated. Formally, define the variance of a query $q$ concerning choice $i$ to be
$$v_q = \sum_{a \in A_i} \Pr[X_i = a]\, \Delta_{q,a}^2,$$
where
$$\Delta_{q,a} = \operatorname{Ex}[Y \mid X_i = a \text{ and previous queries}] - \operatorname{Ex}[Y \mid \text{previous queries}].$$
In words, $\Delta_{q,a}$ measures the amount by which our expectation changes when the answer to query $q$ is revealed to be $a$.

Also define the maximum effect of query $q$ as
$$c_q = \max_{a, b \in A_i} |\Delta_{q,a} - \Delta_{q,b}|.$$
A way to think about $c_q$ is the following. If we consider the children of node $q$, $c_q$ is the maximum difference between any two values $\operatorname{Ex}[Y \mid \text{previous queries}]$ stored at the children. In the sequel, we will always compute an upper bound on $c_q$ by taking the maximum amount by which $Y$ can change if choice $i$ is changed, but all other choices remain the same. That is, we consider the subtree rooted at $q$ and consider the maximum difference between any two values stored in the leaves of this subtree.

When computing $v_q$, we use two elementary but non-trivial upper bounds. For queries with two possible outcomes Yes and No (or two types of equivalent outcomes),
$$v_q \le \Pr[\text{Yes}] \Pr[\text{No}]\, c_q^2.$$
In general,
$$v_q \le c_q^2/4.$$
Turning back to the whole strategy, the variance of a strategy is the maximum cumulative variance
of any sequence of queries which determines Y . In other words, we sum the variances down every
path leading from the root to the leaves of the decision tree and take the maximum sum. By
the way, the use of the term variance is meant to be suggestive: the variance of a strategy for
determining Y is an upper bound on the variance of Y .
There is only one more twist to the story: sometimes it can happen that very unlikely situations
occur which force certain queries to have very large variances driving up the variance of the entire
strategy. We call such rare but ruinous sets of values for the X
i
exceptional outcomes. The Theorem
is able to give tight bounds on the deviation of Y despite the presence of exceptional outcomes, as
long as their probability is not too great.
Theorem 6 Let $m$ and $M$ be, respectively, the minimum and maximum values taken by $Y$ over all possible outcomes. Let $\mathcal{C}$ be a set of exceptional outcomes. Consider a strategy for determining $Y$, assuming that the actual outcome is not in $\mathcal{C}$. If the variance of this strategy is at most $V$ then
$$\Pr\Big[|Y - \operatorname{Ex}[Y]| > 2\sqrt{\lambda\big(V + (M - m)\Pr[\mathcal{C}]\big)}\Big] \le 2e^{-\lambda} + \Pr[\mathcal{C}],$$
for every $0 \le \lambda \le V / \max_q c_q^2$.
Now we can return to the matter at hand, proving Lemmas 3 and 4. To prove Lemma 3, start by fixing an edge $e$ and assuming that it survives. That just means that none of its incident triangles is accepted. But some of them may not survive as triangles in $G'$

$\cdots\, \varepsilon^2 - O(\varepsilon^3)$.

We can continue this to any desired accuracy $O(\varepsilon^b)$ as long as we know how many potential clusters of each type there are. It is for this reason that Lemma 2 is set up to handle any constant number of triangles (we know that we never need to consider clusters with more than a constant number of triangles by Lemma 1).
Continuing, it follows that we can work out $\Pr[e \in E']$ to any accuracy $O(\varepsilon^b)$. For instance,
$$\begin{aligned}
\Pr[e \in E'] &= 1 - \sum_{T \ni e} \Pr[T \text{ is accepted}] \\
&= 1 - \varepsilon \deg(e)/D + \tfrac{3}{2}(1 \pm \gamma)\varepsilon^2 \deg(e)/D - \tfrac{5}{2}(1 \pm \gamma)^2 \varepsilon^3 \deg(e)/D + O(\varepsilon^4).
\end{aligned}$$
In the ideal case where all edges have degree exactly $D$, this probability is
$$H := \Pr[e \in E'] = 1 - \varepsilon + \tfrac{3}{2}\varepsilon^2 - \tfrac{5}{2}\varepsilon^3 + O(\varepsilon^4),$$
so for a real edge $e$ we can conclude that for any $b \ge 1$,
$$\Pr[e \in E'] = H\big(1 \pm \gamma\varepsilon(1 + 4\varepsilon)\big) + O(\varepsilon^b).$$
For any three distinct edges $e_1$, $e_2$, and $e$ one can show that
$$\Pr[e_1, e_2 \in E' \mid e \in E'] = \big(H(1 \pm \gamma\varepsilon(1 + 4\varepsilon))\big)^2 + O(\varepsilon^b) = H^2\big(1 \pm 2\gamma\varepsilon(1 + 5\varepsilon)\big) + O(\varepsilon^b).$$
Thus,
$$\begin{aligned}
\operatorname{Ex}[\deg'(e) \mid e \in E'] &= \sum_{T \ni e} \Pr[T \text{ survives} \mid e \in E'] = \sum_{T = \{e, e_1, e_2\}} \Pr[e_1, e_2 \in E' \mid e \in E'] \\
&= (1 \pm \gamma)\big(H^2(1 \pm 2\gamma\varepsilon(1 + 5\varepsilon)) + O(\varepsilon^b)\big) D \\
&= (1 \pm \gamma)\big(1 \pm 2\gamma\varepsilon(1 + 5\varepsilon) + O(\varepsilon^b)\big) H^2 D \\
&= \big(1 \pm (\gamma + 2\gamma\varepsilon(1 + \delta)(1 + 5\varepsilon) + O(\varepsilon^b))\big) D'.
\end{aligned}$$
This and Lemma 3 give
$$\gamma_s = \gamma_{s-1} H^{-(1+\delta)2} + O(\varepsilon^b) + O\Big(\sqrt{\frac{\varepsilon}{n H^{(1+\delta)2s}} \log n}\Big).$$
This is identical to recurrence equation (6) except that $O(\varepsilon^2)$ is replaced by $O(\varepsilon^b)$. Again, the constants hidden by the $O$-notation are independent of $s$, so we may solve this new recurrence relation and get
$$\gamma_s = O\big(\varepsilon^{b-1} H^{-(1+\delta)2s}\big) + O\Big(\sqrt{\frac{\log n}{\varepsilon n}}\, H^{-(1+\delta)2s}\Big).$$
All we have to do is set $\varepsilon$ and $t$ so that
$$\gamma_t = O\big(\varepsilon^{b-1} H^{-(1+\delta)2t}\big) + O\Big(\sqrt{\frac{\log n}{\varepsilon n}}\, H^{-(1+\delta)2t}\Big) \le \delta. \tag{8}$$
This is accomplished first by setting
$$\varepsilon = n^{-(1-\theta)/(2b-1)}.$$
Condition (8) is then implied by
$$H^t \ge n^{-(1/4)(1-2\theta)(1-1/b)},$$
for sufficiently large $n$. This is satisfied with equality by setting
$$t = \frac{(1/4)(1 - 2\theta)(1 - 1/b) \log n}{\log(1/H)} = \Theta\big(n^{(1-\theta)/(2b-1)} \log n\big) \le \Theta\big(n^{1/3} \log n\big) \le n.$$
This verifies the first part of assertion (1). The second part is again valid since
$$\varepsilon D_t = \varepsilon H^{2t} n = n^{1 - (1-\theta)/(2b-1) - (1/2)(1-2\theta)(1-1/b)} \ge n^{1/6} \ge 1.$$
Finally, the number of edges remaining after the $t$th burst is
$$|E_t| = O\big(H^t n^2\big) = O\big(n^{2 - (1/4)(1-2\theta)(1-1/b)}\big).$$
We can force the exponent to be as close to $7/4$ as we like by setting $\theta$ sufficiently small and $b$ sufficiently large. Therefore, we've proven the following theorem.

Theorem 7 For any $\theta > 0$, the random greedy triangle packing algorithm almost always leaves fewer than $O\big(n^{7/4+\theta}\big)$ uncovered pairs.
6 Packings in Hypergraphs
Actually, everything done in this paper can be done in the more general setting of packings in
simple, k-uniform, D-regular hypergraphs on N points. A hypergraph is simple, k-uniform, and
D-regular if no two edges share two points, if all edges contain exactly k points, and if every point
is contained in D edges. A packing is a collection of disjoint edges.
The random greedy hypergraph packing algorithm picks edges one at a time, uniformly at
random to build up a maximal packing. How many points are not covered by any edge in the
resulting packing? The results of Spencer [13] and Rödl and Thoma [12] apply to this setting and
state that almost always the number of uncovered points is o(N). The corresponding conjecture
is that the number of uncovered points is almost always $N D^{-1/(k-1)+o(1)}$. The techniques of this paper can be used to show that the number of uncovered points is almost always $N D^{-1/(2(k-1))+o(1)}$.
To see how triangle packing is a special case of hypergraph packing, we construct a hypergraph whose packings correspond to partial triple systems. The $N = \binom{n}{2}$ points of the hypergraph are the pairs of points in the triple system and the $\binom{n}{3}$ hyperedges correspond to all of the possible triples. A packing is then a collection of triples no two of which share a pair, that is, a partial triple system. We note that a similar construction gives a hypergraph whose packings correspond to partial designs with larger parameters.
Now it is clear why we called the number of triangles containing a given edge that edge's degree:
in the hypergraph the edge is a point and the triangles which contain it are its incident hyperedges.
Also, in this setting the clusters are simply the connected components of the hypergraph induced
by the selected hyperedges.
Acknowledgements
Special thanks are due to Joel Spencer for suggesting the problem and to him and the anonymous
referee for useful comments on an earlier version of the paper.
References
[1] N. Alon, J.H. Kim, and J. Spencer, Nearly perfect matchings in regular simple hypergraphs, Israel Journal of Mathematics, to appear.
[2] K.T. Balińska and I. Wieczorek, Random triangle deletion process, Technical University Poznań, 1989.
[3] T. Beth, D. Jungnickel, and H. Lenz, Design Theory, Cambridge University Press, Cambridge, 1986.
[4] P. Erdős and H. Hanani, On a limit theorem in combinatorial analysis, Publ. Math. Debrecen 10 (1963), 10–13.
[5] P. Frankl and V. Rödl, Near perfect coverings in graphs and hypergraphs, European Journal of Combinatorics 6 (1985), 317–326.
[6] D. Gordon, G. Kuperberg, O. Patashnik, and J. Spencer, Asymptotically optimal covering designs, Journal of Combinatorial Theory, Series A, to appear.
[7] D.A. Grable, More-than-nearly-perfect packings and partial designs, preprint, 1994.
[8] D.A. Grable, A large deviation inequality for functions of independent, multi-way choices, Combinatorics, Probability, and Computing, to appear.
[9] J. Kahn, Asymptotically good list-colorings, Journal of Combinatorial Theory, Series A 73 (1996), 1–59.
[10] N. Pippenger and J. Spencer, Asymptotic behavior of the chromatic index for hypergraphs, Journal of Combinatorial Theory, Series A 51 (1989), 24–42.
[11] V. Rödl, On a packing and covering problem, European Journal of Combinatorics 5 (1985), 69–78.
[12] V. Rödl and L. Thoma, Asymptotic packing and the random greedy algorithm, Random Structures and Algorithms 8 (1996), 161–177.
[13] J. Spencer, Asymptotic packing via a branching process, Random Structures and Algorithms 7 (1995), 167–172.