Introduction to Minimax
About this ebook

This user-friendly text offers a thorough introduction to the part of optimization theory that lies between approximation theory and mathematical programming, both linear and nonlinear. Written by two distinguished mathematicians, the expert treatment covers the essentials, incorporating important background materials, examples, and extensive notes.
Geared toward advanced undergraduate and graduate students of mathematical programming, the text explores best approximation by algebraic polynomials in both discrete and continuous cases; the discrete problem, with and without constraints; the generalized problem of nonlinear programming; and the continuous minimax problem. Several appendixes discuss algebraic interpolation, convex sets and functions, and other topics. 1974 edition.
Language: English
Release date: March 5, 2014
ISBN: 9780486793283

    Book preview

    Introduction to Minimax - V. F. Dem’yanov


    Chapter I

    BEST APPROXIMATION BY ALGEBRAIC POLYNOMIALS – DISCRETE CASE

    §1. STATEMENT OF THE PROBLEM

    Suppose we are given a table of values for some function:

        y0, y1, ..., yN at the points t0, t1, ..., tN.

    We shall assume throughout this chapter that the values of the independent variable (abscissas) are arranged in increasing order:

        t0 < t1 < ... < tN.

    The goodness of fit of an algebraic polynomial Pn(A, t) of degree at most n (n ≤ N),

        Pn(A, t) = a0 + a1t + ... + ant^n,    A = (a0, a1, ..., an),

    to this table is most naturally characterized by the maximum deviation:

        φ(A) = max |yk – Pn(A, tk)|,    k ∈ [0 : N].

    For fixed n, set

        ρ = inf φ(A).

    A polynomial Pn(A*, t) for which the maximum deviation is minimal, φ(A*) = ρ, is known as a polynomial of best approximation to the original table [and the corresponding deviation (the number ρ) will be called the best fit].
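    The quantity φ(A) is easily computed numerically. The following is a minimal Python sketch (an illustration added here, not from the original text; the name max_deviation and the interface are ours):

        import numpy as np

        def max_deviation(coeffs, t, y):
            """phi(A): the largest absolute residual |y_k - P_n(A, t_k)| over the table."""
            # np.polyval expects coefficients in decreasing order of degree,
            # so reverse A = (a_0, ..., a_n).
            residuals = np.asarray(y, float) - np.polyval(np.asarray(coeffs)[::-1], np.asarray(t, float))
            return float(np.max(np.abs(residuals)))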

    Our subject in this chapter is the actual construction of polynomials of best approximation. Success in this project depends essentially on the relation between n and N.

    Thus, if n = N (the degree of the required polynomial is one less than the number of abscissas in the table), the solution to our problem is clearly given by an interpolating polynomial (see Appendix I, §2). In this case, ρ = 0.

    The first nontrivial (and, as we shall see, fundamental) case is N = n + 1, which leads to what is known as Chebyshev interpolation. Chebyshev interpolation ultimately provides a polynomial of best approximation in the general case N > n + 1.

    §2. CHEBYSHEV INTERPOLATION

    1. Retaining the previous notation and setting N = n + 1, we state and prove the following theorem.

    Theorem 2.1. Let N = n + 1. There exists a polynomial of best approximation, and it is unique. A polynomial Pn(A*, t) is the polynomial of best approximation if and only if, for some h,

        yk – Pn(A*, tk) = (–1)^k h,    k ∈ [0 : n + 1].    (2.1)

    In that case, ρ = |h|.

    The proof will be divided into steps as follows: sufficiency, existence, necessity, uniqueness.

    Sufficiency. Let Pn(A*, t) be a polynomial satisfying (2.1) for some h. We must show that Pn(A*, t) is a polynomial of best approximation.

    Suppose, on the contrary, that ρ < |h|. Then, by the definition of ρ, there is a polynomial Pn(A1, t) such that

        max |yk – Pn(A1, tk)| < |h|,    k ∈ [0 : n + 1].    (2.2)

    Consider the polynomial Qn(t) = Pn(A1, t) – Pn(A*, t); its values at the abscissas tk, k ∈ [0 : n + 1], are

        Qn(tk) = [yk – Pn(A*, tk)] – [yk – Pn(A1, tk)] = (–1)^k h – [yk – Pn(A1, tk)].

    By (2.2), Qn(tk) has the same sign as (–1)^k h.

    Since the abscissas tk, k ∈ [0 : n + 1], are arranged in increasing order, we see that Qn(t) has n + 1 consecutive sign changes and therefore n + 1 zeros. But a nontrivial polynomial of degree n cannot have n + 1 zeros, and so Qn(t) ≡ 0, i. e., Pn(A1, t) ≡ Pn(A*, t), contradicting (2.2). This proves sufficiency, and also that ρ = |h|.

    Existence. The existence problem for a polynomial of best approximation is now reduced to the solvability of system (2.1), which is a system of n + 2 linear equations in the n + 2 unknowns h, a0, ..., an:

        a0 + a1tk + ... + antk^n + (–1)^k h = yk,    k ∈ [0 : n + 1].    (2.1')

    The determinant Δ of this system is

        Δ = det ‖ (–1)^k   1   tk   tk^2   ...   tk^n ‖,    k = 0, 1, ..., n + 1,

    the k-th row containing the coefficients of the k-th equation. Set

        Δk = V(t0, ..., tk–1, tk+1, ..., tn+1),    k ∈ [0 : n + 1],

    the Vandermonde determinant of the abscissas with tk omitted. Expanding the determinant Δ in terms of its first column, we get

        Δ = Δ0 + Δ1 + ... + Δn+1.

    Since the tk are arranged in increasing order, all the terms in this sum are positive. Thus Δ > 0.

    It follows that system (2.1') is always solvable and the solution is unique. Hence the existence of a polynomial of best approximation.
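    As an illustration of this existence argument (our own sketch, not the authors'; the name chebyshev_interpolation is hypothetical), system (2.1') can be solved directly:

        import numpy as np

        def chebyshev_interpolation(t, y):
            """Solve system (2.1'): unknowns (h, a_0, ..., a_n), given n + 2 points."""
            t = np.asarray(t, float)
            y = np.asarray(y, float)
            m = len(t)                                   # m = n + 2
            M = np.zeros((m, m))
            M[:, 0] = [(-1.0) ** k for k in range(m)]    # column of (-1)^k, coefficient of h
            for j in range(m - 1):
                M[:, j + 1] = t ** j                     # columns for a_0, ..., a_n
            sol = np.linalg.solve(M, y)                  # unique solution, since Delta > 0
            return sol[0], sol[1:]                       # h and (a_0, ..., a_n)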

    Necessity. We first develop an explicit expression for h. For all k ∈ [0 : n + 1], set

        αk = Δk / Δ,

    where Δ and Δk are the determinants introduced above. The numbers αk possess the following properties:

    a) αk > 0, k ∈ [0 : n + 1];

    b) α0 + α1 + ... + αn+1 = 1;

    c) for any polynomial Pn(A, t) of degree at most n,

        Σ (–1)^k αk Pn(A, tk) = 0,    the sum over k ∈ [0 : n + 1].

    (Concerning these properties, see Appendix I, §1.)

    Using (2.1), we get the equality

        Σ (–1)^k αk yk = Σ (–1)^k αk [yk – Pn(A*, tk)] = Σ αk h = h

    (the first equality by property (c), the second by (2.1)). Hence it follows that

        h = Σ (–1)^k αk yk,    k ∈ [0 : n + 1].    (2.3)

    We can now take up the necessity proof. Let Pn(A*, t) be some polynomial of best approximation.

    We have to prove that it satisfies (2.1), with h defined by (2.3). Since ρ = |h|, we have

        |yk – Pn(A*, tk)| ≤ |h|,    k ∈ [0 : n + 1].    (2.4)

    We claim that in fact

        yk – Pn(A*, tk) = (–1)^k h,    k ∈ [0 : n + 1].    (2.5)

    We distinguish between the cases h = 0, h > 0, h < 0. If h = 0, the validity of (2.5) is evident from (2.4). Let h < 0 (the treatment of the case h > 0 is analogous), and suppose that at least one of the equalities (2.5) fails to hold. Then, first, by (2.4),

        (–1)^k [yk – Pn(A*, tk)] ≥ h,    k ∈ [0 : n + 1],

    and, second, for some k0,

        (–1)^k0 [yk0 – Pn(A*, tk0)] > h.

    Using (2.3) and properties (a) – (c) of the numbers αk, we obtain

        h = Σ (–1)^k αk yk = Σ (–1)^k αk [yk – Pn(A*, tk)] > h Σ αk = h,

    which is absurd. This proves necessity.

    Uniqueness. That the polynomial of best approximation is unique now follows from the unique solvability of system (2.1) – (2.1'). The proof is complete.

    2. Definition. The construction of a polynomial satisfying (2.1) is known as Chebyshev interpolation; the polynomial itself is called a Chebyshev interpolating polynomial.

    The content of the preceding subsection may therefore be summed up as follows: when N = n + 1, the construction of a polynomial of best approximation reduces to Chebyshev interpolation. Moreover, the value of h is known in advance from (2.3).

    We shall need another expression for h. Let χ be a function defined at the abscissas tk by

        χ(tk) = (–1)^k,    k ∈ [0 : n + 1].

    Recalling the formula for divided differences (Appendix I, §1) and the definition of αk, we rewrite (2.3) as

        h = y[t0, t1, ..., tn+1] / χ[t0, t1, ..., tn+1].    (2.6)

    This is the required expression.

    Proceeding now to construction of the polynomial of best approximation, we first introduce an interpolating polynomial Ln+1(t) of degree at most n + 1, uniquely determined by the conditions

        Ln+1(tk) = wk,    wk = yk – (–1)^k h,    k ∈ [0 : n + 1].    (2.1″)

    Using Newton's formula, we express Ln+1(t) as

        Ln+1(t) = w[t0] + w[t0, t1](t – t0) + ... + w[t0, ..., tn+1](t – t0)(t – t1)...(t – tn),

    and consider the last coefficient

        w[t0, t1, ..., tn+1] = y[t0, ..., tn+1] – h χ[t0, ..., tn+1].

    By (2.6), w[t0, t1, ..., tn+1] = 0, so that Ln+1(t) is indeed a polynomial of degree at most n, satisfying (2.1)–(2.1″). Thus Ln+1(t) ≡ Pn(A*, t), and we finally obtain the following formula:

        Pn(A*, t) = w[t0] + w[t0, t1](t – t0) + ... + w[t0, ..., tn](t – t0)(t – t1)...(t – tn–1).    (2.7)

    Practical use of this formula involves the following steps (a short computational sketch follows the list):

    I. Construct a table of divided differences for the functions y and χ.

    II. Compute h from (2.6).

    III. Compute w[t0] = y0 – h and

        w[t0, ..., tk] = y[t0, ..., tk] – h χ[t0, ..., tk],    k ∈ [1 : n],

    and substitute these values into formula (2.7) to obtain Pn(A*, t).

    IV. Check the validity of the equalities

        yk – Pn(A*, tk) = (–1)^k h,    k ∈ [0 : n + 1].
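    A Python sketch of steps I–IV (our own illustration; the helper names divided_differences and best_approx_newton are not from the book). It builds the leading column of the divided-difference tables for y and χ, computes h by (2.6), and assembles (2.7) in Newton form:

        import numpy as np

        def divided_differences(t, w):
            """Leading column of the divided-difference table: dd[j] = w[t_0, ..., t_j]."""
            t = np.asarray(t, float)
            d = np.asarray(w, float).copy()
            dd = [d[0]]
            for r in range(1, len(t)):
                d = (d[1:] - d[:-1]) / (t[r:] - t[:-r])
                dd.append(d[0])
            return dd

        def best_approx_newton(t, y):
            """Steps I-IV: return h and a callable evaluating Pn(A*, t) via (2.7)."""
            t = np.asarray(t, float)
            y = np.asarray(y, float)
            chi = np.array([(-1.0) ** k for k in range(len(t))])
            h = divided_differences(t, y)[-1] / divided_differences(t, chi)[-1]   # (2.6)
            w = y - chi * h                            # w_k = y_k - (-1)^k h
            coeffs = divided_differences(t, w)[:-1]    # last coefficient vanishes by (2.6)
            def P(x):                                  # Newton form (2.7)
                val, prod = 0.0, 1.0
                for j, c in enumerate(coeffs):
                    val += c * prod
                    prod *= (x - t[j])
                return val
            return h, P

    For step IV one can verify numerically that the residuals y(tk) – P(tk) alternate as (–1)^k h at the abscissas.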

    Example. Consider the function y(t) = |t|, and the following two systems of abscissas in [–1, 1]:

    1)

    2)

    We shall construct polynomials of best approximation for |t| with these abscissas, supposing that N = n + 1; in the first case we shall get a polynomial of degree at most 2, in the second a polynomial of degree at most 3.

    The computations are arranged in accordance with steps I through IV as recommended above.

    1)

    Here there is no need to set up a table of differences for χ, because y[t0, t1, t2, t3] = 0 and so h = 0. We at once find the polynomial of best approximation

    It is readily checked that

    2)

    A direct check shows that

    3*. To conclude this section, we consider a special choice of abscissas:

    In this case we shall be able to derive simpler formulas than (2.6) and (2.7) for h and Pn(A*, t).

    Theorem 2.2. Let the abscissas tk, k ∈ [0 : n + 1], be chosen as above. Then

    and

    where

    Here

        Dn(x) = 1/2 + cos x + cos 2x + ... + cos nx = sin((2n + 1)x/2) / (2 sin(x/2))

    is the Dirichlet kernel.
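    As a numerical sanity check (an added illustration, not from the text; the name dirichlet is ours), the closed form of Dn(x) can be verified in Python:

        import math

        def dirichlet(n, x):
            """D_n(x) = 1/2 + sum over v = 1..n of cos(v x)."""
            return 0.5 + sum(math.cos(v * x) for v in range(1, n + 1))

        n, x = 4, 0.7
        closed_form = math.sin((2 * n + 1) * x / 2) / (2 * math.sin(x / 2))
        assert abs(dirichlet(n, x) - closed_form) < 1e-12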

    Proof. It suffices to check the relations

    We first prove two lemmas.

    Lemma 2.1. The function Tν(t) = cos (ν arccos t) is an algebraic polynomial of degree ν with leading coefficient 2^(ν–1), ν = 1, 2, ...

    Proof. We first note that

        T1(t) = cos (arccos t) = t.

    Next, since

        cos (ν + 1)x + cos (ν – 1)x = 2 cos x cos νx,

    we see, setting x = arccos t in this identity, that the functions Tν(t) obey the following recurrence relations:

        Tν+1(t) = 2t Tν(t) – Tν–1(t),    ν = 1, 2, ...

    (with T0(t) = 1), and the statement of our lemma follows immediately. In particular,

        T2(t) = 2t^2 – 1,    T3(t) = 4t^3 – 3t,    T4(t) = 8t^4 – 8t^2 + 1.

    The polynomials Tν(t) are known as the Chebyshev polynomials.
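    A short sketch (ours; the name chebyshev_T is illustrative) generating the coefficients of Tν(t) from the recurrence of Lemma 2.1, listed in increasing powers of t:

        def chebyshev_T(v):
            """Coefficients of T_v(t) via T_{v+1} = 2t T_v - T_{v-1}."""
            T_prev, T_cur = [1.0], [0.0, 1.0]            # T_0(t) = 1, T_1(t) = t
            if v == 0:
                return T_prev
            for _ in range(v - 1):
                nxt = [0.0] + [2.0 * c for c in T_cur]   # multiply T_v by 2t
                for i, c in enumerate(T_prev):
                    nxt[i] -= c                          # subtract T_{v-1}
                T_prev, T_cur = T_cur, nxt
            return T_cur

        assert chebyshev_T(3) == [0.0, -3.0, 0.0, 4.0]   # T_3(t) = 4t^3 - 3t, leading coeff 2^2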

    Corollary. The functions τs(t), s ∈ [0 : n + 1] defined above are algebraic polynomials of degree n.

    Thus, all the expressions on the right of (2.8) are algebraic polynomials of degree at most n.

    Lemma 2.2. The Dirichlet kernel Dn(x) (n = 0, 1, 2, ...) satisfies the equalities

    Proof. To determine the values of Dn(xs) at the points xs, s = ±1, ..., ±(2n + 1), we observe that

    Therefore

    Q. E. D.

    We now return to the proof of the theorem, writing the following expression for Pn(A*, t) at the abscissas tk, k ∈ [0 : n + 1]:

    Let k = 0. Then, since (–1)^ν = (–1)^(–ν) for any integer ν, we get

    If k = n + 1, then

    Finally, let k ∈ [1 : n]. Then

    Q. E. D.

    Corollary. Let the abscissas tk, k ∈ [0 : n + 1], be as in Theorem 2.2. Then, for any algebraic polynomial Qn(t) of degree at most n, equality (2.10) holds.

    Proof. Set yk = Qn(tk), k ∈ [0 : n + 1]. Carrying out Chebyshev interpolation, we see via (2.6) and property 4 of divided differences (see Appendix I, §1), that

    Formula (2.10) now follows from (2.8).

    §3. GENERAL DISCRETE CASE; DE LA VALLÉE-POUSSIN ALGORITHM

    1. We now consider the general case of the discrete problem, N > n + 1. Set

        φ(A) = max |yk – Pn(A, tk)|,    k ∈ [0 : N];    ρ = min φ(A).

    We wish to construct a polynomial of best approximation Pn(A*, t):

        φ(A*) = ρ.

    We shall show in this section that there is an exact solution to this problem, which is obtained by carrying out finitely many Chebyshev interpolations.

    2. Definition. A basis is any subsystem of n + 2 abscissas

        σ = {ti0, ti1, ..., tin+1},    ti0 < ti1 < ... < tin+1,    (3.1)

    selected from t0, t1, ..., tN. The set of bases {σ} will be denoted by Ξ.

    For each basis σ of type (3.1), we can perform Chebyshev interpolation, i. e., construct the (unique) algebraic polynomial Pn(A(σ), t) such that

        yik – Pn(A(σ), tik) = (–1)^k h(σ),    k ∈ [0 : n + 1],

    where

        h(σ) = y[ti0, ti1, ..., tin+1] / χ[ti0, ti1, ..., tin+1],    (3.2)

    and

        χ(tik) = (–1)^k,    k ∈ [0 : n + 1].

    Let ρ(σ) be the best fit on the basis σ:

        ρ(σ) = max |yik – Pn(A(σ), tik)|,    k ∈ [0 : n + 1].

    As we have seen,

        ρ(σ) = |h(σ)|.

    Finally, we set

        φ(A(σ)) = max |yk – Pn(A(σ), tk)|,    k ∈ [0 : N],

    so that φ(A(σ)) is the maximum deviation of the polynomial Pn(A(σ), t) for the whole system of abscissas.

    It is obvious that for any basis σ ∈ Ξ

        ρ(σ) ≤ φ(A(σ)).

    Lemma 3.1. Suppose that for some basis σ*,

        σ* = {ti0, ti1, ..., tin+1},

    we have φ(A(σ*)) = ρ(σ*); then

        φ(A(σ*)) = ρ,

    i. e., Pn(A(σ*), t) is the required polynomial of best approximation.

    Proof. By the definition of ρ(σ*), we have for any polynomial Pn(A, t)

        max |yik – Pn(A, tik)| ≥ ρ(σ*)    (maximum over k ∈ [0 : n + 1]).    (3.3)

    A fortiori,

        φ(A) ≥ ρ(σ*).    (3.4)

    Since by assumption

        φ(A(σ*)) = ρ(σ*),

    it follows via (3.3) and (3.4) that for any polynomial Pn(A, t):

        φ(A) ≥ φ(A(σ*)).

    Thus (by the definition) Pn(A(σ*), t) is indeed a polynomial of best approximation for the whole system of abscissas.

    3. To summarize: if it turns out that φ(A (σ)) = ρ(σ) for some basis σ, then Pn(A(σ), t) is a polynomial of best approximation in the general discrete case.

    We now consider bases σ ∈ Ξ of type (3.1) for which

        φ(A(σ)) > ρ(σ).    (3.5)

    We shall introduce a standard transformation S that converts any basis σ satisfying (3.5) into a new basis σ1 = Sσ,

        σ1 = {t′i0, t′i1, ..., t′in+1},    t′i0 < t′i1 < ... < t′in+1,

    such that

        ρ(σ1) > ρ(σ).

    This type of transformation will have a major role to play in the sequel.

    Prior to describing the transformation S, we set

        Δσ(tk) = yk – Pn(A(σ), tk),    k ∈ [0 : N],

    and let tk0 be an abscissa at which

        |Δσ(tk0)| = φ(A(σ))

    (if there are several, we choose any one).

    There are three possibilities:

    1) ti0 < tk0 < tin+1;

    2) tk0 < ti0;

    3) tk0 > tin+1.

    The transformation S is defined differently in each case.

    Case 1. A typical situation is illustrated in Figure 1.

    FIGURE 1.

    Let ν be an integer such that

        tiν–1 < tk0 < tiν.

    If sign Δσ(tk0) = sign Δσ(tiν), then σ1 is obtained from σ by replacing tiν by tk0.

    But if this condition fails to hold, then σ1 is obtained from σ by replacing tiν–1 by tk0.

    In either case, the basis σ1 = Sσ is completely defined. In the situation of Figure 1, the basis σ1 is obtained from σ by replacing ti2 by tk0.

    Case 2. If sign Δσ(tk0) = sign Δσ(ti0), we set

        σ1 = {tk0, ti1, ..., tin+1}

    (the abscissa ti0 is replaced by tk0). If the above condition fails to hold, then

        σ1 = {tk0, ti0, ti1, ..., tin}

    (the abscissa tin+1 is discarded).

    Case 3. If sign Δσ(tk0) = sign Δσ(tin+1), then

        σ1 = {ti0, ..., tin, tk0}

    (the abscissa tin+1 is replaced by tk0). Otherwise,

        σ1 = {ti1, ..., tin+1, tk0}

    (the abscissa ti0 is discarded).

    This completes our description of the transformation S.

    The construction and inequality (3.5) imply the following properties of the basis σ1 = Sσ if ρ(σ) > 0:

    I. The signs of Δσ alternate over the abscissas of σ1: sign Δσ(t′ik+1) = – sign Δσ(t′ik), k ∈ [0 : n]; moreover, |Δσ(t′ik)| ≥ ρ(σ) for all k.

    II. If t′ik0 denotes the abscissa in the basis σ1 corresponding to tk0, then

        |Δσ(t′ik0)| = φ(A(σ)) > ρ(σ).

    We can now prove the fundamental property of the basis σ1:

        ρ(σ1) > ρ(σ).

    Proof. By (3.2) and property 4 of divided differences (see Appendix I, §1),

        h(σ1) = Σ (–1)^k αk(σ1) y(t′ik) = Σ (–1)^k αk(σ1) Δσ(t′ik),    the sums over k ∈ [0 : n + 1],

    where the αk(σ1) are the coefficients associated with the basis σ1.

    First let ρ(σ) > 0. Since αk(σ1) > 0, k ∈ [0 : n + 1], it follows via I and II that all the terms of this sum have the same sign, and therefore

        ρ(σ1) = |h(σ1)| > ρ(σ) Σ αk(σ1) = ρ(σ).

    But if ρ(σ) = 0, then Δσ vanishes at every abscissa of σ1 other than tk0, so that

        ρ(σ1) = |h(σ1)| = αk0(σ1) φ(A(σ)) > 0.

    In all cases, then, ρ(σ1) > ρ(σ).

    This proves our assertion.

    The result is summarized in the following lemma.

    Lemma 3.2. Let σ ∈ Ξ be a basis for which φ(A(σ)) > ρ(σ). Then there exists a new basis σ1 = Sσ such that ρ(σ1) > ρ(σ).

    4. Set

        ρ* = max ρ(σ),    σ ∈ Ξ.

    Definition. A basis σ* for which

        ρ(σ*) = ρ*

    is called an extremal basis.

    Since the set Ξ is finite, it is clear that an extremal basis exists, though in general it need not be unique.

    Theorem 3.1. In the general discrete case, there exists a unique polynomial of best approximation. A polynomial Pn(A*, t) is the polynomial of best approximation if and only if it is a Chebyshev interpolating polynomial for some extremal basis σ*, i. e.,

        Pn(A*, t) ≡ Pn(A(σ*), t).

    Finally,

        ρ = ρ* = max ρ(σ),    σ ∈ Ξ,    (3.7)

    so that the best fit for the whole system of abscissas coincides with the maximal best fit over the bases.

    Proof. Sufficiency. Let σ* be some extremal basis. This means that for any σ ∈ Ξ

        ρ(σ*) ≥ ρ(σ).    (3.8)

    We must show that Pn(A(σ*), t) is a polynomial of best approximation, i. e.,

        φ(A(σ*)) = ρ.

    By Lemma 3.1, it will suffice to verify that φ(A(σ*)) = ρ(σ*).

    If this is false, then φ(A(σ*)) > ρ(σ*). By Lemma 3.2, there is a new basis σ1 = Sσ* such that

        ρ(σ1) > ρ(σ*),

    and this contradicts (3.8), proving sufficiency.

    We have at the same time proved (3.7), for

        ρ = φ(A(σ*)) = ρ(σ*) = ρ*.

    The existence of a polynomial of best approximation follows from the existence of an extremal basis.

    Necessity. Fix some extremal basis σ*:

        σ* = {ti0, ti1, ..., tin+1},

    and let Pn(A*, t) be some polynomial of best approximation. Since

        φ(A*) = ρ = ρ(σ*),

    it follows that

        |yik – Pn(A*, tik)| ≤ ρ(σ*),    k ∈ [0 : n + 1].

    Thus Pn(A*, t) is a polynomial of best approximation for σ*. Since the latter is unique, we conclude that

        Pn(A*, t) ≡ Pn(A(σ*), t).

    This proves both necessity and uniqueness, completing the proof.

    Corollary. A basis

        σ* = {ti0, ti1, ..., tin+1}

    is extremal if and only if

        φ(A(σ*)) = ρ(σ*)

    or, equivalently,

        ρ(σ*) = ρ.

    5. The fundamental theorem yields a procedure for constructing polynomials of best approximation in the general (discrete) case. One first examines all bases, and selects a basis σ* for which ρ(σ*) is maximal. The Chebyshev interpolating polynomial Pn(A(σ*), t) will then be the required polynomial.
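    As an illustration (our own sketch, practical only for small N), this exhaustive examination can be coded directly; it reuses chebyshev_interpolation and max_deviation from the earlier sketches:

        from itertools import combinations
        import numpy as np

        def extremal_basis(t, y, n):
            """Examine all bases sigma; return the one maximizing rho(sigma) = |h(sigma)|."""
            t = np.asarray(t, float)
            y = np.asarray(y, float)
            best = None
            for idx in combinations(range(len(t)), n + 2):
                h, coeffs = chebyshev_interpolation(t[list(idx)], y[list(idx)])
                if best is None or abs(h) > best[0]:
                    best = (abs(h), idx, coeffs)
            return best      # (rho(sigma*), indices of sigma*, coefficients A(sigma*))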

    The difficulty here is how to organize examination of the bases in the most rational manner. This is particularly important when N is large.

    Since we are looking for a basis giving the maximal best fit, it is reasonable to demand that we discard a basis σk and adopt a new one σk+1 only when

        ρ(σk+1) > ρ(σk).

    Fulfillment of this condition also guarantees that no basis can come up more than once in any examination.

    The above idea is implemented in the de la Vallée-Poussin algorithm for constructing polynomials of best approximation. This algorithm employs the transformation S described in subsection 3.

    Select an initial basis σ1 arbitrarily, and construct the polynomial Pn(A(σ1), t). If

        φ(A(σ1)) = ρ(σ1),

    then (Lemma 3.1) Pn(A(σ1), t) is the polynomial of best approximation. On the other hand, if

        φ(A(σ1)) > ρ(σ1),

    we proceed to a new basis σ2 = Sσ1. By Lemma 3.2,

        ρ(σ2) > ρ(σ1).

    Suppose we have already constructed a basis σk. Determine Pn(A(σk), t). If

        φ(A(σk)) = ρ(σk),

    then Pn(A(σk), t) is the required polynomial. Otherwise, i. e., if

        φ(A(σk)) > ρ(σk),

    we go on to the basis σk+1 = Sσk.

    This procedure will yield an extremal basis, and hence a polynomial of best approximation, in finitely many trials. That the procedure is indeed finite follows from the fact that the number of bases is finite, while ρ(σ1) < ρ(σ2) < ρ(σ3) < ..., so that no basis can come up twice.
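    A compact sketch of the whole procedure (ours, with hypothetical helper names; it follows the case analysis of subsection 3 and reuses chebyshev_interpolation from above):

        import numpy as np

        def exchange(basis, resid):
            """Transformation S: bring the worst abscissa into the basis,
            keeping the residual signs alternating (cases 1-3 of subsection 3)."""
            k0 = int(np.argmax(np.abs(resid)))       # abscissa of maximum deviation
            s = np.sign(resid[k0])
            b = list(basis)
            if k0 < b[0]:                            # case 2: t_k0 left of the basis
                b = [k0] + b[1:] if np.sign(resid[b[0]]) == s else [k0] + b[:-1]
            elif k0 > b[-1]:                         # case 3: t_k0 right of the basis
                b = b[:-1] + [k0] if np.sign(resid[b[-1]]) == s else b[1:] + [k0]
            else:                                    # case 1: t_k0 interior
                nu = next(j for j, i in enumerate(b) if i > k0)
                if np.sign(resid[b[nu]]) == s:
                    b[nu] = k0
                else:
                    b[nu - 1] = k0
            return sorted(b)

        def de_la_vallee_poussin(t, y, n, tol=1e-12):
            t = np.asarray(t, float)
            y = np.asarray(y, float)
            basis = list(range(n + 2))               # arbitrary initial basis sigma_1
            while True:
                h, coeffs = chebyshev_interpolation(t[basis], y[basis])
                resid = y - np.polyval(coeffs[::-1], t)       # Delta_sigma(t_k)
                if np.max(np.abs(resid)) <= abs(h) + tol:     # phi(A(sigma)) = rho(sigma)
                    return coeffs                    # polynomial of best approximation
                basis = exchange(basis, resid)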
