In mathematics, a power series (in one variable) is an infinite series of the form
f(x) = \sum_{n=0}^{\infty} a_n (x - c)^n = a_0 + a_1 (x - c) + a_2 (x - c)^2 + \cdots,
where a_n is the coefficient of the nth term, c is a constant, and x varies around c. In many situations c is equal to zero, for instance when considering a Maclaurin series. In such cases, the power series takes the simpler form
f(x) = \sum_{n=0}^{\infty} a_n x^n = a_0 + a_1 x + a_2 x^2 + \cdots.
A power series will converge for some values of the variable x and may diverge for others. All power series converge at x = c. If c is not the only point of convergence, then there is always a number r with 0 < r \le \infty such that the series converges whenever |x − c| < r and diverges whenever |x − c| > r. The number r is called the radius of convergence of the power series; in general it is given as
r = \liminf_{n \to \infty} |a_n|^{-1/n},
or, equivalently,
r^{-1} = \limsup_{n \to \infty} |a_n|^{1/n}.
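As a rough numerical illustration (the helper `radius_estimate` and its tail heuristic are invented for this sketch, not a standard routine), the Cauchy–Hadamard limit can be approximated by evaluating |a_n|^{1/n} over a finite tail of coefficients; for ln(1 + x), whose radius of convergence is 1:

```python
from math import inf

def radius_estimate(coeffs):
    """Estimate 1 / limsup |a_n|^(1/n), approximating the limsup by the
    largest |a_n|^(1/n) over the tail of a finite coefficient list."""
    roots = [abs(a) ** (1.0 / n) for n, a in enumerate(coeffs) if n > 0 and a != 0]
    tail = roots[len(roots) // 2:]        # use the tail to approximate the limit
    return 1.0 / max(tail) if tail else inf

# ln(1 + x) = sum_{n>=1} (-1)^(n+1) x^n / n has radius of convergence 1
log_coeffs = [0.0] + [(-1) ** (n + 1) / n for n in range(1, 200)]
print(radius_estimate(log_coeffs))   # close to 1
```

With only 200 coefficients the estimate is slightly above 1, since (1/n)^{1/n} approaches its limit slowly.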
When two functions f and g are decomposed into power series around the same center c, the power series of the sum or difference of the functions can be obtained by termwise addition and subtraction. That is, if
f(x) = \sum_{n=0}^{\infty} a_n (x - c)^n and g(x) = \sum_{n=0}^{\infty} b_n (x - c)^n,
then
f(x) \pm g(x) = \sum_{n=0}^{\infty} (a_n \pm b_n) (x - c)^n.
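The termwise rule can be checked numerically; the following sketch (illustrative code, with exp and cos chosen arbitrarily as examples) adds the Maclaurin coefficients of exp and cos termwise and compares the result with exp(x) + cos(x):

```python
import math

# Maclaurin coefficients (center c = 0) of exp(x) and cos(x)
N = 20
exp_c = [1 / math.factorial(n) for n in range(N)]
cos_c = [((-1) ** (n // 2)) / math.factorial(n) if n % 2 == 0 else 0.0
         for n in range(N)]

# Termwise addition gives the coefficients of exp(x) + cos(x)
sum_c = [a + b for a, b in zip(exp_c, cos_c)]

def evaluate(coeffs, x):
    """Evaluate a truncated power series with the given coefficients."""
    return sum(a * x ** n for n, a in enumerate(coeffs))

x = 0.5
print(evaluate(sum_c, x), math.exp(x) + math.cos(x))  # the two agree closely
```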
Once a function is given as a power series, it is differentiable on the interior of the domain of convergence. It can be differentiated and integrated quite easily, by treating every term separately:
f'(x) = \sum_{n=1}^{\infty} n a_n (x - c)^{n-1},
\int f(x) \, dx = \sum_{n=0}^{\infty} \frac{a_n (x - c)^{n+1}}{n + 1} + C.
Both of these series have the same radius of convergence as the original one.
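A small numerical sketch of termwise differentiation (the geometric series is chosen here as an illustrative example): differentiating 1/(1 − x) = \sum x^n term by term gives 1/(1 − x)^2 = \sum n x^{n-1}, which can be checked inside the radius of convergence:

```python
# Geometric series: 1/(1-x) = sum x^n for |x| < 1.
N = 60
geom = [1.0] * N                             # coefficients a_n = 1

# Termwise derivative: sum n x^(n-1), i.e. coefficient list b_k = k + 1
deriv = [n * geom[n] for n in range(1, N)]

def evaluate(coeffs, x):
    return sum(a * x ** n for n, a in enumerate(coeffs))

x = 0.3
print(evaluate(deriv, x), 1 / (1 - x) ** 2)   # both ≈ 2.0408
```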
If a function is analytic, then it is infinitely often differentiable, but in the real case
the converse is not generally true. For an analytic function, the coefficients a_n can be computed as
a_n = \frac{f^{(n)}(c)}{n!},
where f^{(n)}(c) denotes the nth derivative of f at c, and f^{(0)}(c) = f(c). This means that every
analytic function is locally represented by its Taylor series.
If a power series with radius of convergence r is given, one can consider analytic continuations of the series, i.e. analytic functions f which are defined on larger sets than {x : |x − c| < r} and agree with the given power series on this set. The number r is maximal in the following sense: there always exists a complex number x with |x − c| = r such that no analytic continuation of the series can be defined at x.
The power series expansion of the inverse function of an analytic function can be
determined using the Lagrange inversion theorem.
Power series in several variables
A power series in several variables is defined to be an infinite series of the form
f(x_1, \ldots, x_n) = \sum_{m_1, \ldots, m_n = 0}^{\infty} a_{m_1, \ldots, m_n} \prod_{j=1}^{n} (x_j - c_j)^{m_j},
where m = (m_1, ..., m_n) is a vector of natural numbers, the coefficients a_{(m_1, ..., m_n)} are usually real or complex numbers, and the center c = (c_1, ..., c_n) and argument x = (x_1, ..., x_n) are usually real or complex vectors. In the more convenient multi-index notation this can be written
f(x) = \sum_{\alpha \in \mathbb{N}^n} a_{\alpha} (x - c)^{\alpha}.
The theory of such series is trickier than for single-variable series, with more complicated regions of convergence. For instance, the power series \sum_{n=0}^{\infty} x_1^n x_2^n converges in the set {(x_1, x_2) : |x_1 x_2| < 1}.
Let α be a multi-index for a power series f(x_1, x_2, …, x_n). The order of the power series f is defined to be the least value |α| such that a_α ≠ 0, or ∞ if f ≡ 0. In particular, for a power series f(x) in a single variable x, the order of f is the smallest power of x with a nonzero coefficient. This definition readily extends to Laurent series.
Taylor series
In mathematics, the Taylor series is a representation of a function as an infinite
sum of terms calculated from the values of its derivatives at a single point. The Taylor
series was formally introduced by the English mathematician Brook Taylor in 1715. If
the series is centered at zero, the series is also called a Maclaurin series, named after
the Scottish mathematician Colin Maclaurin who made extensive use of this special
case of Taylor's series in the 18th century. It is common practice to use a finite number
of terms of the series to approximate a function. Any finite number of initial terms of the
series is called a Taylor polynomial, and the value of the Taylor series is the limit of
the Taylor polynomials, provided that the limit exists.
If f(x) is given by a convergent power series in an open disc (or interval in the real line) centered at b, it is said to be analytic in this disc. Thus for x in this disc, f is given by a convergent power series
f(x) = \sum_{n=0}^{\infty} a_n (x - b)^n.
Differentiating by x the above formula n times, then setting x = b, gives:
\frac{f^{(n)}(b)}{n!} = a_n,
and so the power series expansion agrees with the Taylor series. Thus a function is
analytic in an open disc centered at if and only if its Taylor series converges to the
value of the function at each point of the disc.
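A short numerical sketch of the coefficient formula a_n = f^{(n)}(b)/n! (with exp and the center b = 1 chosen arbitrarily as an example; every derivative of exp at 1 equals e):

```python
import math

# Taylor polynomial of exp around b = 1: a_n = e / n!
b = 1.0
coeffs = [math.e / math.factorial(n) for n in range(15)]

def taylor(x):
    """Evaluate the degree-14 Taylor polynomial of exp centered at b."""
    return sum(a * (x - b) ** n for n, a in enumerate(coeffs))

print(taylor(1.8), math.exp(1.8))   # agree to many digits
```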
If f(x) is equal to its Taylor series everywhere it is called entire. The polynomials and the exponential function and the trigonometric functions sine and cosine are examples of entire functions. Examples of functions that are not entire include the logarithm, the trigonometric function tangent, and its inverse arctan. For these functions the Taylor series do not converge if x is far from b. Taylor series can be used to calculate the value of an entire function at every point, if the value of the function, and of all of its derivatives, are known at a single point.
Uses of the Taylor series for analytic functions include:
• The partial sums (the Taylor polynomials) of the series can be used as approximations of the entire function. These approximations are good if sufficiently many terms are included.
• Differentiation and integration of power series can be performed term by term and is hence particularly easy.
• An analytic function is uniquely extended to a holomorphic function on an open disk in the complex plane. This makes the machinery of complex analysis available.
• The (truncated) series can be used to compute function values numerically (often by recasting the polynomial into the Chebyshev form and evaluating it with the Clenshaw algorithm).
• Algebraic operations can be done readily on the power series representation; for instance Euler's formula follows from Taylor series expansions for trigonometric and exponential functions. This result is of fundamental importance in such fields as harmonic analysis.
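The last point can be illustrated numerically: evaluating the partial sums of the exponential series at a purely imaginary argument reproduces cos θ + i sin θ, i.e. Euler's formula (the helper `exp_series` is an illustrative sketch, not a library routine):

```python
import math

def exp_series(z, terms=40):
    """Partial sum of the Taylor series of exp at 0, evaluated at complex z."""
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)   # next term z^(n+1)/(n+1)!
    return total

theta = 0.7
lhs = exp_series(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
print(lhs, rhs)   # Euler's formula: e^(i·theta) = cos(theta) + i·sin(theta)
```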
By contrast, the natural logarithm function log(1 + x) and its Taylor polynomials around x = 0 converge to the function only in the region −1 < x ≤ 1; outside of this region the higher-degree Taylor polynomials are worse approximations for the function. This is similar to Runge's phenomenon.
In general, Taylor series need not be convergent at all. In fact the set of functions with a convergent Taylor series is a meager set in the Fréchet space of smooth functions. Even if the Taylor series of a function f does converge, its limit need not in general be equal to the value of the function f(x). For example, the function
f(x) = e^{-1/x^2} for x ≠ 0, and f(0) = 0,
is infinitely differentiable at x = 0, and has all derivatives zero there. Consequently, the Taylor series of f(x) about x = 0 is identically zero. However, f(x) is not the zero function, and so it is not equal to its Taylor series around the origin.
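This can be seen concretely: near the origin the function is tiny but strictly positive, while its Taylor series at 0 is identically zero (a minimal sketch of the example above):

```python
import math

def f(x):
    # f(x) = exp(-1/x^2) for x != 0, f(0) = 0: all derivatives at 0 vanish,
    # so the Taylor series of f about 0 is identically zero.
    return math.exp(-1.0 / x ** 2) if x != 0 else 0.0

x = 0.2
print(f(x))   # tiny but strictly positive (about 1.4e-11)
print(0.0)    # value of the Taylor series of f at the same point
```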
In real analysis, this example shows that there are infinitely differentiable functions f(x) whose Taylor series are not equal to f(x) even if they converge. By contrast, in complex analysis there are no holomorphic functions f(z) whose Taylor series converges to a value different from f(z). The complex function e^{-1/z^2} does not approach 0 as z approaches 0 along the imaginary axis, hence it is not continuous in the complex plane, and its Taylor series is thus not defined there.
Some functions cannot be written as Taylor series because they have a singularity; in these cases, one can often still achieve a series expansion if one allows also negative powers of the variable x; see Laurent series. For example, f(z) = e^{-1/z^2} can be written as a Laurent series.
There is, however, a generalization[6][7] of the Taylor series that does converge to the value of the function itself for any bounded continuous function on (0, ∞), using the calculus of finite differences. Specifically, one has the following theorem, due to Einar Hille, that for any t > 0,
\lim_{h \to 0^+} \sum_{n=0}^{\infty} \frac{t^n}{n!} \frac{\Delta_h^n f(a)}{h^n} = f(a + t).
Here \Delta_h^n is the nth finite difference operator with step size h. The series is precisely the Taylor series, except that divided differences appear in place of differentiation: the series is formally similar to the Newton series. When the function f is analytic at a, the terms in the series converge to the terms of the Taylor series, and in this sense generalizes the usual Taylor series.
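A numerical sketch of Hille's formula (the helpers below are invented for this illustration; f = exp, a = 0, t = 1 are arbitrary test choices, and only a few terms are used because higher finite differences lose precision in floating point):

```python
import math

def fwd_diff(f, a, n, h):
    """n-th forward finite difference of f at a with step h."""
    return sum((-1) ** (n - k) * math.comb(n, k) * f(a + k * h)
               for k in range(n + 1))

def generalized_taylor(f, a, t, h, terms=8):
    """Partial sum of sum_n (t^n / n!) * (Delta_h^n f)(a) / h^n,
    which tends to f(a + t) as h -> 0+ by Hille's theorem."""
    return sum(t ** n / math.factorial(n) * fwd_diff(f, a, n, h) / h ** n
               for n in range(terms))

approx = generalized_taylor(math.exp, 0.0, 1.0, h=0.01)
print(approx, math.e)   # the finite-difference series is close to e
```

Shrinking h improves the limit in exact arithmetic, but in floating point the cancellation in high-order differences limits how many terms are usable.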
In general, for any infinite sequence a_i, the following power series identity holds:
\sum_{n=0}^{\infty} \frac{u^n}{n!} \Delta^n a_i = e^{-u} \sum_{j=0}^{\infty} \frac{u^j}{j!} a_{i+j}.
So in particular,
f(a + t) = \lim_{h \to 0^+} e^{-t/h} \sum_{j=0}^{\infty} f(a + jh) \frac{(t/h)^j}{j!}.
The series on the right is the expectation value of f(a + X), where X is a Poisson-distributed random variable that takes the value jh with probability e^{-t/h} (t/h)^j / j!. Hence,
f(a + t) = \lim_{h \to 0^+} \mathbb{E}[f(a + X)]. \quad (1)
Taylor series in several variables
The Taylor series may also be generalized to functions of more than one variable with
T(x_1, \ldots, x_d) = \sum_{n_1=0}^{\infty} \cdots \sum_{n_d=0}^{\infty} \frac{(x_1 - a_1)^{n_1} \cdots (x_d - a_d)^{n_d}}{n_1! \cdots n_d!} \left( \frac{\partial^{n_1 + \cdots + n_d} f}{\partial x_1^{n_1} \cdots \partial x_d^{n_d}} \right)(a_1, \ldots, a_d).
For example, for a function that depends on two variables, x and y, the Taylor series to second order about the point (a, b) is:
f(a, b) + f_x(a, b)(x - a) + f_y(a, b)(y - b) + \frac{1}{2} \left[ f_{xx}(a, b)(x - a)^2 + 2 f_{xy}(a, b)(x - a)(y - b) + f_{yy}(a, b)(y - b)^2 \right],
where the subscripts denote the respective partial derivatives.
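A sketch of the second-order formula (the test function exp(x)·sin(y) and the center (0, 0) are arbitrary illustrative choices; its partial derivatives at the origin are written out by hand in the comments):

```python
import math

def f(x, y):
    return math.exp(x) * math.sin(y)

# Second-order Taylor approximation about (a, b) = (0, 0):
# f ≈ f + f_x·dx + f_y·dy + (f_xx·dx^2 + 2·f_xy·dx·dy + f_yy·dy^2)/2.
# For exp(x)·sin(y) at (0, 0): f = 0, f_x = 0, f_y = 1, f_xx = 0,
# f_xy = 1, f_yy = 0.
def taylor2(dx, dy):
    return 0 + 0 * dx + 1 * dy + 0.5 * (0 * dx ** 2 + 2 * 1 * dx * dy + 0 * dy ** 2)

dx, dy = 0.1, 0.1
print(taylor2(dx, dy), f(dx, dy))   # close for small offsets
```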
Maclaurin series
Maclaurin series are a type of series expansion in which all terms are nonnegative integer powers of the variable. Other more general types of series include the Laurent series and the Puiseux series.
List of Maclaurin series of some common functions
Exponential function:
e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots \quad \text{for all } x
Natural logarithm:
\ln(1 + x) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} x^n \quad \text{for } -1 < x \le 1
Square root:
\sqrt{1 + x} = \sum_{n=0}^{\infty} \binom{1/2}{n} x^n = 1 + \frac{x}{2} - \frac{x^2}{8} + \cdots \quad \text{for } |x| \le 1
Binomial series (includes the square root for α = 1/2 and the infinite geometric series for α = −1):
(1 + x)^{\alpha} = \sum_{n=0}^{\infty} \binom{\alpha}{n} x^n \quad \text{for } |x| < 1
Hyperbolic functions:
\sinh x = \sum_{n=0}^{\infty} \frac{x^{2n+1}}{(2n+1)!}, \qquad \cosh x = \sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!} \quad \text{for all } x
Lambert's W function:
W_0(x) = \sum_{n=1}^{\infty} \frac{(-n)^{n-1}}{n!} x^n \quad \text{for } |x| < \frac{1}{e}
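Two of the series above can be checked directly against the library functions (the helper names are illustrative; the truncation lengths are chosen so the remainders are negligible at the test points):

```python
import math

def maclaurin_exp(x, terms=30):
    """Truncated Maclaurin series of exp."""
    return sum(x ** n / math.factorial(n) for n in range(terms))

def maclaurin_log1p(x, terms=200):
    """Truncated series of ln(1 + x), valid for -1 < x <= 1."""
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, terms))

print(maclaurin_exp(1.0), math.e)
print(maclaurin_log1p(0.5), math.log(1.5))
```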
Fourier series
Although the original motivation was to solve the heat equation, it later became obvious that the same techniques could be applied to a wide array of mathematical and physical problems, and especially those involving linear differential equations with constant coefficients, for which the eigensolutions are sinusoids. The Fourier series has many such applications in electrical engineering, vibration analysis, acoustics, optics, signal processing, image processing, quantum mechanics, econometrics, thin-walled shell theory, etc.
From a modern point of view, Fourier's results are somewhat informal, due to the
lack of a precise notion of function and integral in the early nineteenth century.
Later, Dirichlet and Riemann expressed Fourier's results with greater precision and
formality.
In these few lines, which are close to the modern formalism used in Fourier series, Fourier revolutionized both mathematics and physics. Although similar trigonometric series were previously used by Euler, d'Alembert, Daniel Bernoulli and Gauss, Fourier believed that such trigonometric series could represent arbitrary functions. In what sense that is actually true is a somewhat subtle issue and the attempts over many years to clarify this idea have led to important discoveries in the theories of convergence, function spaces, and harmonic analysis.
When Fourier submitted a later competition essay in 1811, the committee (which included Lagrange, Laplace, Malus and Legendre, among others) concluded that the manner in which the author arrived at his equations was not exempt from difficulties, and that his analysis still left something to be desired in terms of generality and even rigour.
In this section, s(x) denotes a function of the real variable x. This function is usually taken to be periodic, of period 2π, which is to say that s(x + 2π) = s(x), for all real numbers x. We will attempt to write such a function as an infinite sum, or series, of simpler 2π-periodic functions. We will start by using an infinite sum of sine and cosine functions on the interval [−π, π], as Fourier did, and we will then discuss different formulations and generalizations.
Fourier's formula for 2π-periodic functions using sines and cosines
For a periodic function s(x) that is integrable on [−π, π], the numbers
a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} s(x) \cos(nx) \, dx, \quad n \ge 0,
and
b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} s(x) \sin(nx) \, dx, \quad n \ge 1,
are called the Fourier coefficients of s. One introduces the partial sums of the Fourier series for s, often denoted by
(S_N s)(x) = \frac{a_0}{2} + \sum_{n=1}^{N} \left[ a_n \cos(nx) + b_n \sin(nx) \right].
The partial sums for s are trigonometric polynomials. One expects that the functions S_N s approximate the function s, and that the approximation improves as N tends to infinity. The infinite sum
\frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos(nx) + b_n \sin(nx) \right]
is called the Fourier series of s.
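A numerical sketch of these formulas (the square-wave test function and the Riemann-sum quadrature are illustrative choices, not from the source): computing a_n and b_n by discretizing the defining integrals and evaluating a partial sum.

```python
import math

def square(x):
    """2π-periodic square wave: +1 where sin(x) >= 0, else -1."""
    return 1.0 if math.sin(x) >= 0 else -1.0

def fourier_coeffs(s, N, samples=4000):
    """Approximate a_n, b_n by Riemann sums of the defining integrals."""
    a = [0.0] * (N + 1)
    b = [0.0] * (N + 1)
    dx = 2 * math.pi / samples
    for k in range(samples):
        x = -math.pi + k * dx
        for n in range(N + 1):
            a[n] += s(x) * math.cos(n * x) * dx / math.pi
            b[n] += s(x) * math.sin(n * x) * dx / math.pi
    return a, b

def partial_sum(a, b, x):
    return a[0] / 2 + sum(a[n] * math.cos(n * x) + b[n] * math.sin(n * x)
                          for n in range(1, len(a)))

a, b = fourier_coeffs(square, N=25)
print(partial_sum(a, b, math.pi / 2))   # approaches square(pi/2) = 1
```

For this square wave the odd-index sine coefficients dominate (b_1 ≈ 4/π), and the partial sums converge slowly near the jumps, exhibiting the Gibbs phenomenon.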
The Fourier series does not always converge, and even when it does converge for a specific value x_0 of x, the sum of the series at x_0 may differ from the value s(x_0) of the function. It is one of the main questions in harmonic analysis to decide when Fourier series converge, and when the sum is equal to the original function. If a function
is square-integrable on the interval [−π, π], then the Fourier series converges to the function at almost every point. In engineering applications, the Fourier series is
generally presumed to converge everywhere except at discontinuities, since the
functions encountered in engineering are more well behaved than the ones that
mathematicians can provide as counter-examples to this presumption. In particular, the
Fourier series converges absolutely and uniformly to s(x) whenever the derivative of s(x) (which may not exist everywhere) is square integrable. It is possible to define Fourier coefficients for more general functions or distributions; in such cases, convergence in norm or weak convergence is usually of interest.
Exponential Fourier series
We can use Euler's formula,
e^{inx} = \cos(nx) + i \sin(nx),
to give a more concise formula for the Fourier series:
s_N(x) = \sum_{n=-N}^{N} c_n e^{inx}, \qquad c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} s(x) e^{-inx} \, dx.
The sum of the Fourier series and s(x) are equal everywhere, except possibly at discontinuities. For a function of period P, the interval of integration can be taken to be [x_0, x_0 + P], where x_0 is an arbitrary choice. Two popular choices are x_0 = 0 and x_0 = −P/2.
Another commonly used frequency domain representation uses the Fourier series coefficients to modulate a Dirac comb:
S(f) = \sum_{n=-\infty}^{\infty} S[n] \cdot \delta\!\left(f - \frac{n}{P}\right),
where the variable f represents a continuous frequency domain. When the variable x has units of seconds, f has units of hertz. The "teeth" of the comb are spaced at multiples (i.e. harmonics) of 1/P, which is called the fundamental frequency. s(x) can be recovered from this representation by an inverse Fourier transform:
s(x) = \int_{-\infty}^{\infty} S(f) \, e^{i 2\pi f x} \, df.
The basic Fourier series result for Hilbert spaces can be written as
s = \sum_{n=-\infty}^{\infty} \langle s, e_n \rangle \, e_n, \qquad e_n(x) = e^{inx},
with the inner product \langle f, g \rangle = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) \overline{g(x)} \, dx. The sines and cosines form an orthogonal set; furthermore, the sines and cosines are orthogonal to the constant function 1. An orthonormal basis for L^2([−π, π]) consisting of real functions is formed by the functions 1, and \sqrt{2} \cos(nx), \sqrt{2} \sin(nx) for n = 1, 2, ... The density of their span is a consequence of the Stone–Weierstrass theorem, but follows also from the properties of classical kernels like the Fejér kernel.
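The orthonormality claims can be verified numerically (the helper `inner` and the particular basis elements tested are illustrative choices): approximating the inner product (1/(2π)) ∫ u v dx by a Riemann sum over one period.

```python
import math

def inner(u, v, samples=20000):
    """Approximate (1/(2π)) ∫_{-π}^{π} u(x) v(x) dx by a Riemann sum; under
    this inner product 1, √2 cos(nx), √2 sin(nx) form an orthonormal system."""
    dx = 2 * math.pi / samples
    return sum(u(-math.pi + k * dx) * v(-math.pi + k * dx)
               for k in range(samples)) * dx / (2 * math.pi)

e1 = lambda x: math.sqrt(2) * math.cos(2 * x)
e2 = lambda x: math.sqrt(2) * math.sin(3 * x)
one = lambda x: 1.0

print(inner(e1, e1), inner(one, one))   # each ≈ 1 (unit norm)
print(inner(e1, e2), inner(e1, one))    # each ≈ 0 (orthogonality)
```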
Properties
The first convolution theorem states that if f and g are in L^1([−π, π]), then the Fourier coefficients of their 2π-periodic convolution satisfy
\widehat{f * g}[n] = \hat{f}[n] \cdot \hat{g}[n],
where f * g denotes the 2π-periodic convolution
(f * g)(x) = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(y) \, g(x - y) \, dy.
(The normalizing factor 1/(2π) is not necessary for 1-periodic functions.)
The second convolution theorem states that the Fourier coefficients of the pointwise product fg are given by the convolution of the coefficient sequences:
\widehat{fg}[n] = (\hat{f} * \hat{g})[n].
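The first convolution theorem can be checked numerically for trigonometric polynomials (the helpers and the particular test functions below are illustrative choices; both integrals are discretized by Riemann sums, which are essentially exact for trigonometric polynomials over a full period):

```python
import cmath
import math

def fourier_coeff(s, n, samples=1000):
    """c_n = (1/(2π)) ∫_{-π}^{π} s(x) e^{-inx} dx, via a Riemann sum."""
    dx = 2 * math.pi / samples
    return sum(s(-math.pi + k * dx) * cmath.exp(-1j * n * (-math.pi + k * dx))
               for k in range(samples)) * dx / (2 * math.pi)

def periodic_conv(f, g, x, samples=1000):
    """2π-periodic convolution (f*g)(x) = (1/(2π)) ∫ f(y) g(x - y) dy."""
    dy = 2 * math.pi / samples
    return sum(f(-math.pi + k * dy) * g(x - (-math.pi + k * dy))
               for k in range(samples)) * dy / (2 * math.pi)

f = math.cos                               # c_1(f) = 1/2
g = lambda x: math.cos(2 * x) + math.sin(x)  # c_1(g) = -i/2
h = lambda x: periodic_conv(f, g, x)

n = 1
print(fourier_coeff(h, n, samples=200),
      fourier_coeff(f, n) * fourier_coeff(g, n))   # both ≈ -0.25i
```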