
Power series

In mathematics, a power series (in one variable) is an infinite series of the form

$$f(x) = \sum_{n=0}^{\infty} a_n (x - c)^n = a_0 + a_1 (x - c) + a_2 (x - c)^2 + \cdots,$$

where $a_n$ represents the coefficient of the $n$th term, $c$ is a constant, and $x$ varies around $c$ (for this reason one sometimes speaks of the series as being centered at $c$). This series usually arises as the Taylor series of some known function; the Taylor series article contains many examples.

In many situations $c$ is equal to zero, for instance when considering a Maclaurin series. In such cases, the power series takes the simpler form

$$f(x) = \sum_{n=0}^{\infty} a_n x^n = a_0 + a_1 x + a_2 x^2 + \cdots.$$
These power series arise primarily in analysis, but also occur in combinatorics (under the name of generating functions) and in electrical engineering (under the name of the Z-transform). The familiar decimal notation for real numbers can also be viewed as an example of a power series, with integer coefficients, but with the argument $x$ fixed at 1/10. In number theory, the concept of p-adic numbers is also closely related to that of a power series.
Radius of convergence


A power series will converge for some values of the variable $x$ and may diverge for others. All power series will converge at $x = c$. If $c$ is not the only convergent point, then there is always a number $r$ with $0 < r \le \infty$ such that the series converges whenever $|x - c| < r$ and diverges whenever $|x - c| > r$. The number $r$ is called the radius of convergence of the power series; in general it is given as

$$r = \liminf_{n \to \infty} |a_n|^{-\frac{1}{n}}$$

or, equivalently,

$$r^{-1} = \limsup_{n \to \infty} |a_n|^{\frac{1}{n}}.$$

A fast way to compute it is

$$r^{-1} = \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right|,$$

if this limit exists.
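As a quick numerical illustration (a minimal Python sketch, not part of the original article; the two coefficient sequences are choices made for this example), the ratio formula can be evaluated at a large index:

```python
from fractions import Fraction
from math import factorial

def radius_ratio(a, n=400):
    """Estimate r = lim |a_n / a_{n+1}| by evaluating the ratio at one large index n."""
    return abs(Fraction(a(n)) / Fraction(a(n + 1)))

# a_n = 1/n! (the exponential series): the ratio equals n + 1, so the estimate
# grows without bound as n does -- the radius of convergence is infinite.
print(float(radius_ratio(lambda n: Fraction(1, factorial(n)))))  # 401.0

# a_n = 2^n: the series sums to 1/(1 - 2x), with radius of convergence 1/2.
print(float(radius_ratio(lambda n: Fraction(2) ** n)))           # 0.5
```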


The series converges absolutely for $|x - c| < r$ and converges uniformly on every compact subset of $\{x : |x - c| < r\}$. That is, the series is absolutely and compactly convergent on the interior of the disc of convergence.

For $|x - c| = r$, we cannot make any general statement on whether the series converges or diverges. However, for the case of real variables, Abel's theorem states that the sum of the series is continuous at $x$ if the series converges at $x$. In the case of complex variables, we can only claim continuity along the line segment starting at $c$ and ending at $x$.

Operations on power series

Addition and subtraction

When two functions $f$ and $g$ are decomposed into power series around the same center $c$, the power series of the sum or difference of the functions can be obtained by termwise addition and subtraction. That is, if

$$f(x) = \sum_{n=0}^{\infty} a_n (x - c)^n \quad \text{and} \quad g(x) = \sum_{n=0}^{\infty} b_n (x - c)^n,$$

then

$$f(x) \pm g(x) = \sum_{n=0}^{\infty} (a_n \pm b_n)(x - c)^n.$$

Multiplication and division


With the same definitions above, the power series of the product and quotient of the functions can be obtained as follows:

$$f(x)\, g(x) = \sum_{n=0}^{\infty} m_n (x - c)^n, \qquad m_n = \sum_{i=0}^{n} a_i b_{n-i}.$$

The sequence $m_n$ is known as the convolution of the sequences $a_n$ and $b_n$.

For division, observe

$$\frac{f(x)}{g(x)} = \sum_{n=0}^{\infty} d_n (x - c)^n \quad \Longrightarrow \quad f(x) = g(x) \sum_{n=0}^{\infty} d_n (x - c)^n,$$

and then use the above, comparing coefficients.
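A minimal Python sketch of the coefficient convolution (multiplying the series for $e^x$ by itself to obtain the series for $e^{2x}$ is an example chosen here, not taken from the original text):

```python
from fractions import Fraction
from math import factorial

def convolve(a, b):
    """Cauchy product of coefficient lists: m_k = sum_{i=0}^{k} a_i * b_{k-i}."""
    return [sum(a[i] * b[k - i] for i in range(k + 1))
            for k in range(min(len(a), len(b)))]

exp_coeffs = [Fraction(1, factorial(n)) for n in range(8)]  # coefficients of e^x
product = convolve(exp_coeffs, exp_coeffs)                  # should give e^{2x}
print(product == [Fraction(2**n, factorial(n)) for n in range(8)])  # True
```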

Differentiation and integration

Once a function is given as a power series, it is differentiable on the interior of the domain of convergence. It can be differentiated and integrated quite easily, by treating every term separately:

$$f'(x) = \sum_{n=1}^{\infty} n\, a_n (x - c)^{n-1},$$

$$\int f(x)\, dx = \sum_{n=0}^{\infty} \frac{a_n (x - c)^{n+1}}{n + 1} + C.$$

Both of these series have the same radius of convergence as the original one.
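Termwise differentiation can be checked symbolically; the following sketch uses SymPy (a tool assumed for this example): differentiating a truncated sine series term by term reproduces the truncated cosine series.

```python
import sympy as sp

x = sp.symbols('x')

# Truncated series for sin(x): sum of (-1)^k x^(2k+1) / (2k+1)!
sin_series = sum((-1)**k * x**(2*k + 1) / sp.factorial(2*k + 1) for k in range(6))

# Differentiate every term separately.
derivative = sp.expand(sp.diff(sin_series, x))

# Truncated series for cos(x) of matching order.
cos_series = sum((-1)**k * x**(2*k) / sp.factorial(2*k) for k in range(6))
print(sp.simplify(derivative - cos_series))  # 0
```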

Analytic functions

A function $f$ defined on some open subset $U$ of $\mathbf{R}$ or $\mathbf{C}$ is called analytic if it is locally given by a convergent power series. This means that every $a \in U$ has an open neighborhood $V \subseteq U$ such that there exists a power series with center $a$ which converges to $f(x)$ for every $x \in V$.

Every power series with a positive radius of convergence is analytic on the interior of its region of convergence. All holomorphic functions are complex-analytic. Sums and products of analytic functions are analytic, as are quotients as long as the denominator is non-zero.

If a function is analytic, then it is infinitely often differentiable, but in the real case the converse is not generally true. For an analytic function, the coefficients $a_n$ can be computed as

$$a_n = \frac{f^{(n)}(c)}{n!},$$

where $f^{(n)}(c)$ denotes the $n$th derivative of $f$ at $c$, and $f^{(0)}(c) = f(c)$. This means that every analytic function is locally represented by its Taylor series.

The global form of an analytic function is completely determined by its local behavior in the following sense: if $f$ and $g$ are two analytic functions defined on the same connected open set $U$, and if there exists an element $c \in U$ such that $f^{(n)}(c) = g^{(n)}(c)$ for all $n \ge 0$, then $f(x) = g(x)$ for all $x \in U$.

If a power series with radius of convergence $r$ is given, one can consider analytic continuations of the series, i.e. analytic functions $f$ which are defined on larger sets than $\{x : |x - c| < r\}$ and agree with the given power series on this set. The number $r$ is maximal in the following sense: there always exists a complex number $x$ with $|x - c| = r$ such that no analytic continuation of the series can be defined at $x$.

The power series expansion of the inverse function of an analytic function can be
determined using the Lagrange inversion theorem.
Power series in several variables

An extension of the theory is necessary for the purposes of multivariable calculus. A power series is here defined to be an infinite series of the form

$$f(x_1, \dots, x_n) = \sum_{m_1, \dots, m_n = 0}^{\infty} a_{m_1, \dots, m_n} \prod_{k=1}^{n} (x_k - c_k)^{m_k},$$

where $m = (m_1, \dots, m_n)$ is a vector of natural numbers, the coefficients $a_{(m_1, \dots, m_n)}$ are usually real or complex numbers, and the center $c = (c_1, \dots, c_n)$ and argument $x = (x_1, \dots, x_n)$ are usually real or complex vectors. In the more convenient multi-index notation this can be written

$$f(x) = \sum_{\alpha \in \mathbf{N}^n} a_\alpha (x - c)^\alpha.$$

The theory of such series is trickier than for single-variable series, with more complicated regions of convergence. For instance, the power series $\sum_{n=0}^{\infty} x_1^n x_2^n$ is absolutely convergent in the set $\{(x_1, x_2) : |x_1 x_2| < 1\}$ between two hyperbolas. (This is an example of a log-convex set, in the sense that the set of points $(\log |x_1|, \log |x_2|)$, where $(x_1, x_2)$ lies in the above region, is a convex set. More generally, one can show that when $c = 0$, the interior of the region of absolute convergence is always a log-convex set in this sense.) On the other hand, in the interior of this region of convergence one may differentiate and integrate under the series sign, just as one may with ordinary power series.

Order of a power series

Let $\alpha$ be a multi-index for a power series $f(x_1, x_2, \dots, x_n)$. The order of the power series $f$ is defined to be the least value $|\alpha|$ such that $a_\alpha \neq 0$, or $0$ if $f \equiv 0$. In particular, for a power series $f(x)$ in a single variable $x$, the order of $f$ is the smallest power of $x$ with a nonzero coefficient. This definition readily extends to Laurent series.

Taylor series
In mathematics, the Taylor series is a representation of a function as an infinite
sum of terms calculated from the values of its derivatives at a single point. The Taylor
series was formally introduced by the English mathematician Brook Taylor in 1715. If
the series is centered at zero, the series is also called a Maclaurin series, named after
the Scottish mathematician Colin Maclaurin who made extensive use of this special
case of Taylor's series in the 18th century. It is common practice to use a finite number
of terms of the series to approximate a function. Any finite number of initial terms of the
series is called a Taylor polynomial, and the value of the Taylor series is the limit of
the Taylor polynomials, provided that the limit exists.

The Taylor series of a real or complex function $f(x)$ that is infinitely differentiable in a neighborhood of a real or complex number $a$ is the power series

$$f(a) + \frac{f'(a)}{1!}(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \frac{f'''(a)}{3!}(x - a)^3 + \cdots,$$

which can be written in the more compact sigma notation as

$$\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\, (x - a)^n,$$

where $n!$ denotes the factorial of $n$ and $f^{(n)}(a)$ denotes the $n$th derivative of $f$ evaluated at the point $a$. The zeroth derivative of $f$ is defined to be $f$ itself, and $(x - a)^0$ and $0!$ are both defined to be 1. In the case that $a = 0$, the series is also called a Maclaurin series.
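As a sketch in plain Python (standard library only; the choice of $f = \exp$ with $a = 0$ is an assumption of this example), a Taylor partial sum can be formed directly from a list of derivative values:

```python
from math import exp, factorial

def taylor_partial_sum(derivs_at_a, a, x):
    """Evaluate sum_n f^(n)(a) (x - a)^n / n! from a list of derivative values."""
    return sum(d * (x - a)**n / factorial(n) for n, d in enumerate(derivs_at_a))

# For f(x) = e^x every derivative at a = 0 equals 1.
print(taylor_partial_sum([1] * 12, a=0.0, x=1.0))  # 2.71828182...
print(exp(1.0))                                    # 2.718281828459045
```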

Analytic functions

If $f(x)$ is given by a convergent power series in an open disc (or interval in the real line) centered at $b$, it is said to be analytic in this disc. Thus for $x$ in this disc, $f$ is given by a convergent power series

$$f(x) = \sum_{n=0}^{\infty} a_n (x - b)^n.$$

Differentiating by $x$ the above formula $n$ times, then setting $x = b$, gives

$$\frac{f^{(n)}(b)}{n!} = a_n,$$

and so the power series expansion agrees with the Taylor series. Thus a function is analytic in an open disc centered at $b$ if and only if its Taylor series converges to the value of the function at each point of the disc.

If $f(x)$ is equal to its Taylor series everywhere it is called entire. The polynomials, the exponential function $e^x$, and the trigonometric functions sine and cosine are examples of entire functions. Examples of functions that are not entire include the logarithm, the trigonometric function tangent, and its inverse arctan. For these functions the Taylor series do not converge if $x$ is far from $a$. Taylor series can be used to calculate the value of an entire function at every point, if the value of the function, and of all of its derivatives, are known at a single point.

Uses of the Taylor series for analytic functions include:

- The partial sums (the Taylor polynomials) of the series can be used as approximations of the entire function. These approximations are good if sufficiently many terms are included (see the numerical sketch after this list).
- Differentiation and integration of power series can be performed term by term and is hence particularly easy.
- An analytic function is uniquely extended to a holomorphic function on an open disk in the complex plane. This makes the machinery of complex analysis available.
- The (truncated) series can be used to compute function values numerically, often by recasting the polynomial into the Chebyshev form and evaluating it with the Clenshaw algorithm.
- Algebraic operations can be done readily on the power series representation; for instance, Euler's formula follows from Taylor series expansions for trigonometric and exponential functions. This result is of fundamental importance in such fields as harmonic analysis.
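Illustrating the first point, a brief Python sketch (plain standard library; the sample points are arbitrary choices for this example) shows how partial sums of the sine series are excellent near the center 0 and degrade far from it:

```python
from math import factorial, sin

def sin_taylor(x, terms=6):
    """Partial sum of the Maclaurin series of sin: sum (-1)^n x^(2n+1)/(2n+1)!."""
    return sum((-1)**n * x**(2*n + 1) / factorial(2*n + 1) for n in range(terms))

for x in (0.5, 3.0, 10.0):
    print(x, abs(sin_taylor(x) - sin(x)))
# With 6 terms the error is ~1e-13 at x = 0.5, ~3e-4 at x = 3.0,
# and enormous at x = 10.0 -- more terms are needed far from the center.
```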

Approximation and convergence

A contrasting example is the natural logarithm function $\log(1 + x)$ and its Taylor polynomials around $a = 0$. These approximations converge to the function only in the region $-1 < x \le 1$; outside of this region the higher-degree Taylor polynomials are worse approximations for the function. This is similar to Runge's phenomenon.

The error incurred in approximating a function by its $n$th-degree Taylor polynomial is called the remainder or residual and is denoted by the function $R_n(x)$. Taylor's theorem can be used to obtain a bound on the size of the remainder.

In general, Taylor series need not be convergent at all. In fact the set of functions with a convergent Taylor series is a meager set in the Fréchet space of smooth functions. Even if the Taylor series of a function $f$ does converge, its limit need not in general be equal to the value of the function $f(x)$. For example, the function

$$f(x) = \begin{cases} e^{-1/x^2} & \text{if } x \neq 0, \\ 0 & \text{if } x = 0 \end{cases}$$

is infinitely differentiable at $x = 0$, and has all derivatives zero there. Consequently, the Taylor series of $f(x)$ about $x = 0$ is identically zero. However, $f(x)$ is not equal to the zero function, and so it is not equal to its Taylor series around the origin.

In real analysis, this example shows that there are infinitely differentiable functions $f(x)$ whose Taylor series are not equal to $f(x)$ even if they converge. By contrast, in complex analysis there are no holomorphic functions $f(z)$ whose Taylor series converges to a value different from $f(z)$. The complex function $e^{-1/z^2}$ does not approach 0 as $z$ approaches 0 along the imaginary axis, and its Taylor series is thus not defined there.

More generally, every sequence of real or complex numbers can appear as coefficients in the Taylor series of an infinitely differentiable function defined on the real line, a consequence of Borel's lemma (see also Non-analytic smooth function#Application to Taylor series). As a result, the radius of convergence of a Taylor series can be zero. There are even infinitely differentiable functions defined on the real line whose Taylor series have a radius of convergence 0 everywhere.[5]

Some functions cannot be written as Taylor series because they have a singularity; in these cases, one can often still achieve a series expansion if one allows also negative powers of the variable $x$; see Laurent series. For example, $f(x) = e^{-1/x^2}$ can be written as a Laurent series.

Generalization

There is, however, a generalization[6][7] of the Taylor series that does converge to the value of the function itself for any bounded continuous function on $(0, \infty)$, using the calculus of finite differences. Specifically, one has the following theorem, due to Einar Hille, that for any $t > 0$,

$$\lim_{h \to 0^+} \sum_{n=0}^{\infty} \frac{t^n}{n!} \frac{\Delta_h^n f(a)}{h^n} = f(a + t).$$

Here $\Delta_h^n$ is the $n$th finite difference operator with step size $h$. The series is precisely the Taylor series, except that divided differences appear in place of differentiation: the series is formally similar to the Newton series. When the function $f$ is analytic at $a$, the terms in the series converge to the terms of the Taylor series, and in this sense this generalizes the usual Taylor series.
In general, for any infinite sequence $a_i$, the following power series identity holds:

$$\sum_{n=0}^{\infty} \frac{u^n}{n!} \Delta^n a_i = e^{-u} \sum_{j=0}^{\infty} \frac{u^j}{j!} a_{i+j}.$$

So in particular,

$$f(a + t) = \lim_{h \to 0^+} e^{-t/h} \sum_{j=0}^{\infty} f(a + jh)\, \frac{(t/h)^j}{j!}.$$

The series on the right is the expectation value of $f(a + X)$, where $X$ is a Poisson-distributed random variable that takes the value $jh$ with probability $e^{-t/h} (t/h)^j / j!$. Hence,

$$f(a + t) = \lim_{h \to 0^+} \int_{-\infty}^{\infty} f(a + x)\, dP_{t/h,\, h}(x).$$

The law of large numbers implies that the identity holds.
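A numerical sketch of the limit above (plain Python; the choice $f = \sin$, $a = 0$, $t = 1$ and the step sizes are assumptions of this example), accumulating the Poisson weights iteratively to avoid overflow:

```python
from math import exp, sin

def hille_sum(f, a, t, h, j_max=2000):
    """e^{-t/h} * sum_j f(a + j h) (t/h)^j / j!  (truncated at j_max terms)."""
    lam = t / h
    weight = exp(-lam)           # Poisson weight for j = 0
    total = 0.0
    for j in range(j_max):
        total += f(a + j * h) * weight
        weight *= lam / (j + 1)  # next Poisson weight
    return total

for h in (0.1, 0.02, 0.005):
    print(h, hille_sum(sin, a=0.0, t=1.0, h=h))
# Tends to sin(1.0) = 0.8414709... as h -> 0+.
```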

A Maclaurin series is a Taylor series expansion of a function about 0,

$$f(x) = f(0) + f'(0)\, x + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 + \cdots. \tag{1}$$

Maclaurin series are named after the Scottish mathematician Colin Maclaurin.


Taylor series in several variables

The Taylor series may also be generalized to functions of more than one variable, with

$$T(x_1, \dots, x_d) = \sum_{n_1=0}^{\infty} \cdots \sum_{n_d=0}^{\infty} \frac{(x_1 - a_1)^{n_1} \cdots (x_d - a_d)^{n_d}}{n_1! \cdots n_d!} \left( \frac{\partial^{n_1 + \cdots + n_d} f}{\partial x_1^{n_1} \cdots \partial x_d^{n_d}} \right)(a_1, \dots, a_d).$$

For example, for a function that depends on two variables, $x$ and $y$, the Taylor series to second order about the point $(a, b)$ is

$$f(a, b) + (x - a)\, f_x(a, b) + (y - b)\, f_y(a, b) + \frac{1}{2!} \left[ (x - a)^2 f_{xx}(a, b) + 2(x - a)(y - b)\, f_{xy}(a, b) + (y - b)^2 f_{yy}(a, b) \right],$$

where the subscripts denote the respective partial derivatives.

A second-order Taylor series expansion of a scalar-valued function of more than one variable can be written compactly as

$$T(\mathbf{x}) = f(\mathbf{a}) + \nabla f(\mathbf{a})^{\mathsf{T}} (\mathbf{x} - \mathbf{a}) + \frac{1}{2} (\mathbf{x} - \mathbf{a})^{\mathsf{T}} H(\mathbf{a}) (\mathbf{x} - \mathbf{a}) + \cdots,$$

where $\nabla f(\mathbf{a})$ is the gradient of $f$ evaluated at $\mathbf{x} = \mathbf{a}$ and $H(\mathbf{a})$ is the Hessian matrix.


Applying the multi-index notation, the Taylor series for several variables becomes

$$T(\mathbf{x}) = \sum_{|\alpha| \ge 0} \frac{(\mathbf{x} - \mathbf{a})^{\alpha}}{\alpha!} (\partial^{\alpha} f)(\mathbf{a}),$$

which is to be understood as a still more abbreviated multi-index version of the first equation of this paragraph, again in full analogy to the single variable case.
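A short SymPy sketch of the two-variable second-order formula (the function $f(x, y) = e^x \sin y$ and the center $(0, 0)$ are assumptions of this example):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x) * sp.sin(y)
at0 = {x: 0, y: 0}

# Second-order Taylor polynomial about (0, 0) built from partial derivatives.
T2 = (f.subs(at0)
      + sp.diff(f, x).subs(at0) * x + sp.diff(f, y).subs(at0) * y
      + sp.Rational(1, 2) * (sp.diff(f, x, 2).subs(at0) * x**2
                             + 2 * sp.diff(f, x, y).subs(at0) * x * y
                             + sp.diff(f, y, 2).subs(at0) * y**2))
print(sp.expand(T2))  # x*y + y
```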

Maclaurin series

The Maclaurin series of a function $f(x)$ up to order $n$ may be found using Series[f, {x, 0, n}]. The $n$th term of a Maclaurin series of a function can be computed in the Wolfram Language using SeriesCoefficient[f, {x, 0, n}] and is given by the inverse Z-transform

$$a_n = \mathcal{Z}^{-1}\!\left[ f\!\left(\frac{1}{x}\right) \right](n). \tag{2}$$

Maclaurin series are a type of series expansion in which all terms are nonnegative integer powers of the variable. Other more general types of series include the Laurent series and the Puiseux series.
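For readers without Mathematica, SymPy provides comparable operations; the following is an analogue of the commands above, not an example from the original text (the function $e^x \cos x$ is an arbitrary choice):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x) * sp.cos(x)

# Analogue of Series[f, {x, 0, n}]: Maclaurin expansion up to order 6.
print(sp.series(f, x, 0, 6))  # 1 + x - x**3/3 - x**4/6 - x**5/30 + O(x**6)

# Analogue of SeriesCoefficient[f, {x, 0, n}]: the coefficient of x^4.
print(sp.series(f, x, 0, 8).removeO().coeff(x, 4))  # -1/6
```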
List of Maclaurin series of some common functions




Exponential function:

$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots \quad \text{for all } x$$

Natural logarithm:

$$\ln(1 - x) = -\sum_{n=1}^{\infty} \frac{x^n}{n} \quad \text{for } |x| < 1$$

$$\ln(1 + x) = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{x^n}{n} \quad \text{for } |x| < 1$$

Finite geometric series:

$$\sum_{n=0}^{m} x^n = \frac{1 - x^{m+1}}{1 - x} \quad \text{for } x \neq 1$$

Infinite geometric series:

$$\sum_{n=0}^{\infty} x^n = \frac{1}{1 - x} \quad \text{for } |x| < 1$$

Variants of the infinite geometric series:

$$\sum_{n=m}^{\infty} x^n = \frac{x^m}{1 - x} \quad \text{for } |x| < 1$$

$$\sum_{n=1}^{\infty} n x^n = \frac{x}{(1 - x)^2} \quad \text{for } |x| < 1$$

Square root:

$$\sqrt{1 + x} = 1 + \tfrac{1}{2} x - \tfrac{1}{8} x^2 + \tfrac{1}{16} x^3 - \tfrac{5}{128} x^4 + \cdots \quad \text{for } |x| < 1$$

Binomial series (includes the square root for $\alpha = 1/2$ and the infinite geometric series for $\alpha = -1$):

$$(1 + x)^{\alpha} = \sum_{n=0}^{\infty} \binom{\alpha}{n} x^n \quad \text{for } |x| < 1 \text{ and all complex } \alpha$$

with generalized binomial coefficients

$$\binom{\alpha}{n} = \prod_{k=1}^{n} \frac{\alpha - k + 1}{k} = \frac{\alpha (\alpha - 1) \cdots (\alpha - n + 1)}{n!}.$$

Trigonometric functions:

$$\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n + 1)!} x^{2n+1} \quad \text{for all } x$$

$$\cos x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!} x^{2n} \quad \text{for all } x$$

$$\tan x = \sum_{n=1}^{\infty} \frac{B_{2n} (-4)^n (1 - 4^n)}{(2n)!} x^{2n-1} = x + \frac{x^3}{3} + \frac{2 x^5}{15} + \cdots \quad \text{for } |x| < \frac{\pi}{2}$$

where the $B_n$ are Bernoulli numbers.

$$\sec x = \sum_{n=0}^{\infty} \frac{(-1)^n E_{2n}}{(2n)!} x^{2n} \quad \text{for } |x| < \frac{\pi}{2}$$

Hyperbolic functions:

$$\sinh x = \sum_{n=0}^{\infty} \frac{x^{2n+1}}{(2n + 1)!} \quad \text{for all } x$$

$$\cosh x = \sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!} \quad \text{for all } x$$

$$\tanh x = \sum_{n=1}^{\infty} \frac{B_{2n}\, 4^n (4^n - 1)}{(2n)!} x^{2n-1} \quad \text{for } |x| < \frac{\pi}{2}$$

Lambert's W function:

$$W_0(x) = \sum_{n=1}^{\infty} \frac{(-n)^{n-1}}{n!} x^n \quad \text{for } |x| < \frac{1}{e}$$

The numbers $B_k$ appearing in the series expansions of $\tan(x)$ and $\tanh(x)$ are the Bernoulli numbers. The $E_k$ in the expansion of $\sec(x)$ are the Euler numbers.
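A quick numeric sanity check of two entries from this list (a plain Python sketch; the sample points are arbitrary):

```python
from math import factorial, log

def maclaurin_exp(x, terms=20):
    """e^x = sum x^n / n!"""
    return sum(x**n / factorial(n) for n in range(terms))

def maclaurin_log1p(x, terms=200):
    """ln(1 + x) = sum_{n>=1} (-1)^(n+1) x^n / n, valid for |x| < 1."""
    return sum((-1)**(n + 1) * x**n / n for n in range(1, terms))

print(maclaurin_exp(1.0))              # 2.718281828... = e
print(maclaurin_log1p(0.5), log(1.5))  # both 0.405465...
```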
Fourier series
In mathematics, a Fourier series decomposes a periodic function or periodic signal into the sum of a (possibly infinite) set of simple oscillating functions, namely sines and cosines (or complex exponentials). The study of Fourier series is a branch of Fourier analysis. Fourier series were introduced by Joseph Fourier (1768–1830) for the purpose of solving the heat equation in a metal plate.

The heat equation is a partial differential equation. Prior to Fourier's work, no


solution to the heat equation was known in the general case, although particular
solutions were known if the heat source behaved in a simple way, in particular, if the
heat source was a sine or cosine wave. These simple solutions are now sometimes
called eigensolutions. Fourier's idea was to model a complicated heat source as a
superposition (or linear combination) of simple sine and cosine waves, and to write the
solution as a superposition of the corresponding eigensolutions. This superposition or
linear combination is called the Fourier series.

Although the original motivation was to solve the heat equation, it later became obvious that the same techniques could be applied to a wide array of mathematical and physical problems, and especially those involving linear differential equations with constant coefficients, for which the eigensolutions are sinusoids. The Fourier series has many such applications in electrical engineering, vibration analysis, acoustics, optics, signal processing, image processing, quantum mechanics, econometrics, thin-walled shell theory, etc.

The Fourier series is named in honour of Joseph Fourier (1768–1830), who made important contributions to the study of trigonometric series, after preliminary investigations by Leonhard Euler, Jean le Rond d'Alembert, and Daniel Bernoulli. He applied this technique to find the solution of the heat equation, publishing his initial results in his 1807 Mémoire sur la propagation de la chaleur dans les corps solides and 1811, and publishing his Théorie analytique de la chaleur in 1822.

From a modern point of view, Fourier's results are somewhat informal, due to the
lack of a precise notion of function and integral in the early nineteenth century.
Later, Dirichlet and Riemann expressed Fourier's results with greater precision and
formality.
Revolutionary article

Multiplying both sides of Fourier's expansion by $\cos\frac{(2k + 1)\pi y}{2}$, and then integrating from $y = -1$ to $y = +1$, yields

$$a_k = \int_{-1}^{1} \varphi(y) \cos\frac{(2k + 1)\pi y}{2}\, dy.$$

This immediately gives any coefficient $a_k$ of the trigonometrical series for $\varphi(y)$, for any function which has such an expansion. It works because all the other terms vanish when integrated from $-1$ to $1$.

In these few lines, which are close to the modern formalism used in Fourier series, Fourier revolutionized both mathematics and physics. Although similar trigonometric series were previously used by Euler, d'Alembert, Daniel Bernoulli and Gauss, Fourier believed that such trigonometric series could represent arbitrary functions. In what sense that is actually true is a somewhat subtle issue and the attempts over many years to clarify this idea have led to important discoveries in the theories of convergence, function spaces, and harmonic analysis.

When Fourier submitted a later competition essay in 1811, the committee (which included Lagrange, Laplace, Malus and Legendre, among others) concluded: ...the manner in which the author arrives at these equations is not exempt of difficulties and... his analysis to integrate them still leaves something to be desired on the score of generality and even rigour.

In this section, $f(x)$ denotes a function of the real variable $x$. This function is usually taken to be periodic, of period $2\pi$, which is to say that $f(x + 2\pi) = f(x)$ for all real numbers $x$. We will attempt to write such a function as an infinite sum, or series, of simpler $2\pi$-periodic functions. We will start by using an infinite sum of sine and cosine functions on the interval $[-\pi, \pi]$, as Fourier did (see the quote above), and we will then discuss different formulations and generalizations.
Fourier's formula for 2π-periodic functions using sines and cosines
For a periodic function $f(x)$ that is integrable on $[-\pi, \pi]$, the numbers

$$a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx)\, dx, \quad n \ge 0,$$

and

$$b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx)\, dx, \quad n \ge 1,$$

are called the Fourier coefficients of $f$. One introduces the partial sums of the Fourier series for $f$, often denoted by

$$(S_N f)(x) = \frac{a_0}{2} + \sum_{n=1}^{N} \left[ a_n \cos(nx) + b_n \sin(nx) \right].$$

The partial sums for $f$ are trigonometric polynomials. One expects that the functions $S_N f$ approximate the function $f$, and that the approximation improves as $N$ tends to infinity. The infinite sum

$$\frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos(nx) + b_n \sin(nx) \right]$$

is called the Fourier series of $f$.
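A numerical sketch of these definitions (NumPy assumed; the square wave is a standard example chosen here, not taken from the article), approximating the integrals by Riemann sums:

```python
import numpy as np

N = 200000
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = 2 * np.pi / N
f = np.sign(x)  # square wave: -1 on (-pi, 0), +1 on [0, pi)

def fourier_coeffs(n):
    """a_n and b_n via Riemann-sum approximations of the defining integrals."""
    a_n = np.sum(f * np.cos(n * x)) * dx / np.pi
    b_n = np.sum(f * np.sin(n * x)) * dx / np.pi
    return a_n, b_n

for n in range(1, 6):
    print(n, fourier_coeffs(n))
# For the square wave: a_n ~ 0 for all n, and b_n ~ 4/(pi n) for odd n
# (1.2732, 0.4244, 0.2546 for n = 1, 3, 5), ~0 for even n.
```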

The Fourier series does not always converge, and even when it does converge for a specific value $x_0$ of $x$, the sum of the series at $x_0$ may differ from the value $f(x_0)$ of the function. It is one of the main questions in harmonic analysis to decide when Fourier series converge, and when the sum is equal to the original function. If a function is square-integrable on the interval $[-\pi, \pi]$, then the Fourier series converges to the function at almost every point. In engineering applications, the Fourier series is generally presumed to converge everywhere except at discontinuities, since the functions encountered in engineering are better behaved than the ones that mathematicians can provide as counter-examples to this presumption. In particular, the Fourier series converges absolutely and uniformly to $f(x)$ whenever the derivative of $f(x)$ (which may not exist everywhere) is square integrable. It is possible to define Fourier coefficients for more general functions or distributions; in such cases convergence in norm or weak convergence is usually of interest.
Exponential Fourier series
We can use Euler's formula,

$$e^{inx} = \cos(nx) + i \sin(nx),$$

where $i$ is the imaginary unit, to give a more concise formula:

$$f(x) \sim \sum_{n=-\infty}^{\infty} c_n\, e^{inx}.$$

The Fourier coefficients are then given by:

$$c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x)\, e^{-inx}\, dx.$$

The Fourier coefficients $a_n$, $b_n$, $c_n$ are related via

$$c_n = \begin{cases} \dfrac{a_0}{2} & n = 0, \\[4pt] \dfrac{a_n - i b_n}{2} & n > 0, \\[4pt] \dfrac{a_{|n|} + i b_{|n|}}{2} & n < 0. \end{cases}$$
The notation $c_n$ is inadequate for discussing the Fourier coefficients of several different functions. Therefore it is customarily replaced by a modified form of the function's name ($f$, in this case), such as $\hat{f}$ or $F$, and functional notation often replaces subscripting. Thus:

$$f(x) \sim \sum_{n=-\infty}^{\infty} \hat{f}(n) \cdot e^{inx}.$$
Fourier series on a general interval [a, a + τ]

The following formula, with appropriate complex-valued coefficients $G[n]$, is a periodic function with period $\tau$ on all of $\mathbf{R}$:

$$g(x) = \sum_{n=-\infty}^{\infty} G[n] \cdot e^{i 2\pi \frac{n}{\tau} x}.$$

If a function is square-integrable in the interval $[a, a + \tau]$, it can be represented in that interval by the formula above, i.e., when the coefficients are derived from a function $g(x)$ as follows:

$$G[n] = \frac{1}{\tau} \int_{a}^{a+\tau} g(x) \cdot e^{-i 2\pi \frac{n}{\tau} x}\, dx,$$

then the representation will equal $g(x)$ in the interval $[a, a + \tau]$. It follows that if $g(x)$ is $\tau$-periodic, then:

- the series and $g(x)$ are equal everywhere, except possibly at discontinuities, and
- $a$ is an arbitrary choice. Two popular choices are $a = 0$ and $a = -\tau/2$.
Another commonly used frequency domain representation uses the Fourier series coefficients to modulate a Dirac comb:

$$G(f) = \sum_{n=-\infty}^{\infty} G[n] \cdot \delta\!\left(f - \frac{n}{\tau}\right),$$

where the variable $f$ represents a continuous frequency domain. When the variable $x$ has units of seconds, $f$ has units of hertz. The "teeth" of the comb are spaced at multiples (i.e. harmonics) of $1/\tau$, which is called the fundamental frequency. $g(x)$ can be recovered from this representation by an inverse Fourier transform:

$$g(x) = \int_{-\infty}^{\infty} G(f) \cdot e^{i 2\pi f x}\, df = \sum_{n=-\infty}^{\infty} G[n] \cdot e^{i 2\pi \frac{n}{\tau} x}.$$

The function $G(f)$ is therefore commonly referred to as a Fourier transform, even though the Fourier integral of a periodic function is not convergent at the harmonic frequencies.

Fourier series on a square


We can also define the Fourier series for functions of two variables $x$ and $y$ in the square $[-\pi, \pi] \times [-\pi, \pi]$:

$$f(x, y) = \sum_{j,k \in \mathbf{Z}} c_{j,k}\, e^{ijx} e^{iky}, \qquad c_{j,k} = \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} f(x, y)\, e^{-ijx} e^{-iky}\, dx\, dy.$$

Aside from being useful for solving partial differential equations such as the heat equation, one notable application of Fourier series on the square is in image compression.

In engineering, particularly when the variable $x$ represents time, the coefficient sequence is called a frequency domain representation. Square brackets are often used to emphasize that the domain of this function is a discrete set of frequencies.

Hilbert space interpretation


In the language of Hilbert spaces, the set of functions $\{e_n = e^{inx} : n \in \mathbf{Z}\}$ is an orthonormal basis for the space $L^2([-\pi, \pi])$ of square-integrable functions on $[-\pi, \pi]$. This space is actually a Hilbert space with an inner product given by:

$$\langle f, g \rangle = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x)\, \overline{g(x)}\, dx.$$

The basic Fourier series result for Hilbert spaces can be written as

$$f = \sum_{n=-\infty}^{\infty} \langle f, e_n \rangle\, e_n.$$

This corresponds exactly to the complex exponential formulation given above.


The version with sines and cosines is also justified with the Hilbert space interpretation.
Indeed, the sines and cosines form an orthogonal set:

(where į is the Kronecker delta), and

furthermore, the sines and cosines are orthogonal to the constant function 1.
Ôn  › › for 2([í, ]) consisting of real functions is formed by the
functions 1, and ¥2 cos( ), ¥2 sin( ) for = 1, 2,... The density of their span is a
consequence of the Stone±-eierstrass theorem, but follows also from the properties of
classical kernels like the Fejér kernel.
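These orthogonality relations can be confirmed numerically; a brief sketch (NumPy assumed), using the inner product with the $1/(2\pi)$ factor defined above:

```python
import numpy as np

N = 100000
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = 2 * np.pi / N

def inner(u, v):
    """<u, v> = (1/2pi) * integral of u * conj(v) over [-pi, pi] (Riemann sum)."""
    return np.sum(u * np.conj(v)) * dx / (2 * np.pi)

e = lambda n: np.exp(1j * n * x)
print(abs(inner(e(3), e(3))))   # ~1.0: e_n is normalized
print(abs(inner(e(3), e(5))))   # ~0.0: distinct e_n are orthogonal
print(inner(np.sqrt(2) * np.cos(2 * x), np.sqrt(2) * np.cos(2 * x)).real)  # ~1.0
print(inner(np.cos(2 * x), np.sin(2 * x)).real)                            # ~0.0
```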

Properties

We say that $f$ belongs to $C^k(\mathbf{T})$ if $f$ is a $2\pi$-periodic function on $\mathbf{R}$ which is $k$ times differentiable, and its $k$th derivative is continuous.

- If $f$ is a $2\pi$-periodic odd function, then $a_n = 0$ for all $n$.
- If $f$ is a $2\pi$-periodic even function, then $b_n = 0$ for all $n$.
- If $f$ is integrable, then $\lim_{|n| \to \infty} \hat{f}(n) = 0$, $\lim_{n \to \infty} a_n = 0$ and $\lim_{n \to \infty} b_n = 0$. This result is known as the Riemann–Lebesgue lemma.
- A doubly infinite sequence $\{a_n\}$ in $c_0(\mathbf{Z})$ is the sequence of Fourier coefficients of a function in $L^1[0, 2\pi]$ if and only if it is a convolution of two sequences in $\ell^2(\mathbf{Z})$.
- If $f \in C^1(\mathbf{T})$, then the Fourier coefficients of the derivative $f'$ can be expressed in terms of the Fourier coefficients of the function $f$, via the formula $\widehat{f'}(n) = in \hat{f}(n)$.
- If $f \in C^k(\mathbf{T})$, then $\widehat{f^{(k)}}(n) = (in)^k \hat{f}(n)$. In particular, since $\widehat{f^{(k)}}(n)$ tends to zero, we have that $|n|^k \hat{f}(n)$ tends to zero, which means that the Fourier coefficients converge to zero faster than the $k$th power of $n$.
- Parseval's theorem. If $f \in L^2([-\pi, \pi])$, then $\sum_{n=-\infty}^{\infty} |\hat{f}(n)|^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} |f(x)|^2\, dx$. (A numerical check follows this list.)
- Plancherel's theorem. If $c_0, c_{\pm 1}, c_{\pm 2}, \dots$ are coefficients with $\sum_{n=-\infty}^{\infty} |c_n|^2 < \infty$, then there is a unique function $f \in L^2([-\pi, \pi])$ such that $\hat{f}(n) = c_n$ for every $n$.
- The first convolution theorem states that if $f$ and $g$ are in $L^1([-\pi, \pi])$, then $\widehat{f * g}(n) = 2\pi\, \hat{f}(n)\, \hat{g}(n)$, where $f * g$ denotes the $2\pi$-periodic convolution of $f$ and $g$. (The factor $2\pi$ is not necessary for 1-periodic functions.)
- The second convolution theorem states that $\widehat{fg}(n) = \sum_{k=-\infty}^{\infty} \hat{f}(k)\, \hat{g}(n - k)$.
- The Poisson summation formula states that the Fourier transform of a function $f$ taken at integer points yields the Fourier series of the periodic summation of $f$: if $g(x) = \sum_{k=-\infty}^{\infty} f(x + 2\pi k)$ and $F$ denotes the Fourier transform of $f$, then $\hat{g}(n) = \frac{1}{2\pi} F(n)$.
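A sketch verifying Parseval's theorem numerically for the square wave used earlier (NumPy assumed; the truncation bounds are arbitrary choices for this example):

```python
import numpy as np

N = 20000
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = 2 * np.pi / N
f = np.sign(x)

def f_hat(n):
    """f_hat(n) = (1/2pi) * integral of f(x) e^{-inx} dx (Riemann sum)."""
    return np.sum(f * np.exp(-1j * n * x)) * dx / (2 * np.pi)

lhs = sum(abs(f_hat(n)) ** 2 for n in range(-500, 501))
rhs = np.sum(np.abs(f) ** 2) * dx / (2 * np.pi)
print(lhs, rhs)  # ~0.9992 vs 1.0; the gap is the truncated tail of the sum
```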
