
CHAPTER 1

Introduction to Fractional
Calculus
1.1 Historical foreword

Fractional calculus is a generalization of ordinary differentiation and integration to arbitrary
(non-integer) order. The subject is as old as the differential calculus itself, going back to the
time when Leibniz and Newton invented the calculus. The most common notations for the β-th order
derivative of a function y(t) defined on (a, b) are $y^{(\beta)}(t)$ and $_aD_t^\beta y(t)$.
Negative values of β correspond to fractional integrals.

In a letter to L'Hôpital in 1695, Leibniz raised the following question: "Can the meaning of
derivatives with integer order be generalized to derivatives with non-integer orders?"
L'Hôpital was somewhat curious about the question and replied with another question to Leibniz:
"What if the order will be 1/2?" Leibniz, in a letter dated September 30, 1695 — the exact
birthday of fractional calculus! — replied: "It will lead to a paradox, from which one day
useful consequences will be drawn." The question raised by Leibniz remained an ongoing topic for
more than 300 years. Many well-known mathematicians contributed to the theory over the years,
among them Liouville, Riemann, Weyl, Fourier, Abel, Lacroix, Leibniz, Grünwald and Letnikov.

Fractional calculus is as old as conventional calculus, yet it is not very popular in the science
and engineering community. The beauty of the subject is that fractional derivatives (and
integrals) are not local (point) quantities: they take into account the history of the function
and non-local, distributed effects. In other words, this subject may translate the reality of
nature better, and making it available to the science and engineering community adds another
dimension for understanding and describing basic phenomena. Perhaps fractional calculus is the
language nature understands, and talking to nature in this language is therefore efficient. For
the past three centuries the subject belonged to mathematicians, and only in recent years has it
been pulled into several applied fields of engineering, science and economics. There are,
however, recent attempts to define the fractional derivative as a local operator, specifically
within fractal science. The next decade will see several applications based on this 300-year-old
"new" subject, which can be thought of as a superset: differintegral calculus, with conventional
integer-order calculus as a special case. Differintegration is an operator that performs
differentiation or integration in a general sense. In this book, the fractional order is limited
to real numbers; complex-order differintegrations are not touched. The applications and
discussions are likewise limited to fixed fractional orders, and variable-order
differintegration is left as a subject for future research. Perhaps fractional calculus will be
the calculus of the twenty-first century. In this book, an attempt is made to keep the topic
application-oriented for everyday science and engineering use, so rigorous mathematics is kept
to a minimum. In this introductory chapter, a tabular list is provided to give readers a feel
for the fractional derivatives of some commonly occurring functions.

Fractional calculus has recently been applied in various areas of engineering, science, finance,
applied mathematics and bioengineering. However, many researchers remain unaware of the field and
often ask: What is a fractional derivative? Is this a new field or an old one? Are there
applications of fractional derivatives, and what are they? In what follows, several definitions
of fractional derivatives are introduced, and examples from different engineering fields
demonstrate that fractional derivatives arise naturally in many applications. A
fractional-derivative-based formulation is presented for the thermal analysis of a disk brake,
and the analytical results are compared with experimental results. A fractional variational
problem is introduced, and it is shown that the fractional Euler-Lagrange equation for this
problem leads to a new class of fractional differential equations with forward and backward
fractional derivatives. A finite element formulation and numerical results for a fractional
variational problem are presented. Finally, possible directions for future research are
discussed.

Fractional calculus is a branch of mathematical analysis that studies the possibility of taking
real-number, or even complex-number, powers of the differentiation operator

$$D = \frac{d}{dx}$$

and of the integration operator J. (Usually J is used in place of I, to avoid confusion with
other I-like identities.)

In this context, powers refer to iterated application or composition, in the same sense that

$$f^2(x) = f(f(x)).$$

For example, one may pose the question of interpreting meaningfully

$$D^{1/2}$$

as a square root of the differentiation operator (an operator half-iterate), i.e., an expression
for some operator that, when applied twice to a function, has the same effect as differentiation.

More generally, one can look at the question of defining

$$D^s$$

for real values of s, in such a way that when s takes an integer value n, the usual n-fold
differentiation is recovered for n > 0, and the (−n)-th power of J is recovered for n < 0.

There are various reasons for looking at this question. One is that in this way the semigroup of
powers $D^n$ in the discrete variable n is seen inside a continuous semigroup (one hopes) with a
real parameter s. Continuous semigroups are prevalent in mathematics and have an interesting
theory. Notice that "fractional" is then a misnomer for the exponent, since it need not be
rational, but the term fractional calculus has become traditional.

1.2 Fractional derivative

As far as the existence of such a theory is concerned, the foundations of the subject were laid by
Liouville in a paper from 1832. The fractional derivative of a function to order a is often now
defined by means of the Fourier or Mellin integral transforms. An important point is that the
fractional derivative at a point x is a local property only when a is an integer; in non-integer
cases we cannot say that the fractional derivative at x of a function f depends only on the graph
of f very near x, in the way that integer-power derivatives certainly do. Therefore it is expected
that the theory involves some sort of boundary conditions, involving information on the function
further out. To use a metaphor, the fractional derivative requires some peripheral vision.

1.3 Heuristics

A fairly natural question to ask is: Does there exist an operator H, or half-derivative, such that

$$H^2 f(x) = D f(x) = \frac{d}{dx} f(x) = f'(x)?$$

It turns out that there is such an operator, and indeed for any a > 0 there exists an operator P
such that

$$(P^a f)(x) = f'(x).$$

Or, to put it another way, the definition of $\frac{d^n y}{dx^n}$ can be extended to all real
values of n.

To delve into a little detail, start with the Gamma function Γ, which extends factorials to
non-integer values:

$$n! = \Gamma(n + 1).$$

Assuming a function f(x) that is well defined for x > 0, we can form the definite integral from
0 to x. Let us call this

$$J f(x) = \int_0^x f(t)\,dt.$$

Repeating the process gives

$$J^2 f(x) = \int_0^x J f(t)\,dt = \int_0^x \int_0^t f(s)\,ds\,dt,$$

and this can be extended arbitrarily.

The Cauchy formula for repeated integration, namely

$$J^n f(x) = \frac{1}{(n-1)!} \int_0^x (x-t)^{n-1} f(t)\,dt,$$

leads in a straightforward way to a generalization for real n. Simply using the Gamma function to
remove the discrete nature of the factorial (recalling that Γ(n + 1) = n!, or equivalently
Γ(n) = (n − 1)!) gives a natural candidate for fractional applications of the integral operator:

$$J^\alpha f(x) = \frac{1}{\Gamma(\alpha)} \int_0^x (x-t)^{\alpha-1} f(t)\,dt.$$

This is in fact a well-defined operator.

It can be shown that the J operator is both commutative and additive; that is,

$$J^\alpha J^\beta f = J^{\alpha+\beta} f = \frac{1}{\Gamma(\alpha+\beta)} \int_0^x (x-t)^{\alpha+\beta-1} f(t)\,dt.$$

This property is called the semigroup property of fractional differintegral operators.
Unfortunately, the comparable process for the derivative operator D is significantly more
complex, and it can be shown that D is in general neither commutative nor additive.
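As a quick numerical sanity check of the semigroup property, the sketch below (Python, with SciPy assumed available; the helper name `J_half` is ours) applies the half-integral $J^{1/2}$ twice to f(t) = t and compares the result with one whole integration, $J^1 f(x) = x^2/2$.

```python
from math import sqrt, pi
from scipy.integrate import quad

# J^(1/2) f(x) = (1/Γ(1/2)) ∫_0^x (x-t)^(-1/2) f(t) dt, computed by quadrature.
# The substitution t = x(1 - w^2) removes the endpoint singularity:
#   ∫_0^x (x-t)^(-1/2) f(t) dt = 2√x ∫_0^1 f(x(1 - w^2)) dw
def J_half(f, x):
    integral, _ = quad(lambda w: f(x * (1.0 - w * w)), 0.0, 1.0)
    return 2.0 * sqrt(x) * integral / sqrt(pi)    # Γ(1/2) = √π

f = lambda t: t
x = 1.0
once = J_half(f, x)                          # J^(1/2) t = 4 t^(3/2)/(3√π)
twice = J_half(lambda t: J_half(f, t), x)    # J^(1/2) J^(1/2) t
print(once, 4.0 / (3.0 * sqrt(pi)))          # both ≈ 0.7523
print(twice, x ** 2 / 2.0)                   # semigroup: equals J^1 t = x^2/2 = 0.5
```

The substitution is only a convenience to keep the quadrature accurate; any scheme that handles the weak kernel singularity would do.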

1.4 Half derivative of a simple function

Figure: the half derivative (maroon curve) of the function y = x (blue curve), together with the
first derivative (red curve).

Let us assume that f(x) is a monomial of the form

$$f(x) = x^k.$$

The first derivative is, as usual,

$$f'(x) = \frac{d}{dx} f(x) = k x^{k-1}.$$

Repeating this gives the more general result

$$\frac{d^a}{dx^a} x^k = \frac{k!}{(k-a)!}\,x^{k-a},$$

which, after replacing the factorials with the Gamma function, leads us to

$$\frac{d^a}{dx^a} x^k = \frac{\Gamma(k+1)}{\Gamma(k-a+1)}\,x^{k-a} \qquad (1.4.1)$$

So, for example, the half-derivative of x is

$$\frac{d^{1/2}}{dx^{1/2}}\,x = \frac{\Gamma(2)}{\Gamma(3/2)}\,x^{1/2} = \frac{2 x^{1/2}}{\sqrt{\pi}} = 2\sqrt{\frac{x}{\pi}}.$$

Repeating this process gives

$$\frac{d^{1/2}}{dx^{1/2}} \left( \frac{2 x^{1/2}}{\sqrt{\pi}} \right) = \frac{2}{\sqrt{\pi}} \cdot \frac{\Gamma(3/2)}{\Gamma(1)}\,x^0 = \frac{2}{\sqrt{\pi}} \cdot \frac{\sqrt{\pi}}{2} = 1,$$

which is indeed the expected result of

$$\frac{d^{1/2}}{dx^{1/2}} \frac{d^{1/2}}{dx^{1/2}}\,x = \frac{d}{dx}\,x = 1.$$
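The computation above can be reproduced mechanically with Python's `math.gamma`; the helper name `frac_deriv_monomial` below is our own illustrative label for formula (1.4.1).

```python
from math import gamma, sqrt, pi

# Equation (1.4.1): d^a/dx^a x^k = Γ(k+1)/Γ(k-a+1) * x^(k-a)
def frac_deriv_monomial(k, a, x):
    return gamma(k + 1.0) / gamma(k - a + 1.0) * x ** (k - a)

x = 1.0
half = frac_deriv_monomial(1.0, 0.5, x)   # half-derivative of x at x = 1: 2√(x/π)
print(half, 2.0 * sqrt(x / pi))           # ≈ 1.1284

# Half-derivative of x is (Γ(2)/Γ(3/2)) x^(1/2); applying the half-derivative
# to that monomial again must reproduce the whole derivative of x, namely 1.
twice = gamma(2.0) / gamma(1.5) * frac_deriv_monomial(0.5, 0.5, x)
print(twice)                              # ≈ 1.0
```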

This extension of the differential operator need not be constrained to real powers. For example,
the (1+i)-th derivative of the (1−i)-th derivative yields the 2nd derivative. Notice also that
setting negative values for a yields integrals.

In some ways the most natural and appealing generalization is based on the exponential function
f(x) = $e^{ax}$, whose n-th derivative is simply $a^n e^{ax}$. This immediately suggests defining
the derivative of order v (not necessarily an integer) as

$$\frac{d^v}{dx^v}\,e^{ax} = a^v e^{ax}.$$

Negative values of v represent integrations, and we can even extend this to allow complex values
of v. Any function expressible as a sum of exponential functions can then be differentiated in
the same way. For example, the generalized derivative of the cosine function according to this
approach is given by

$$\frac{d^v}{dx^v}\cos x = \frac{d^v}{dx^v}\,\frac{e^{ix} + e^{-ix}}{2} = \frac{i^v e^{ix} + (-i)^v e^{-ix}}{2}.$$

Since $(\pm i)^v = e^{\pm i v\pi/2}$, we have the result

$$\frac{d^v}{dx^v}\cos x = \cos\!\left(x + \frac{v\pi}{2}\right).$$
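This phase-shift rule can be checked directly with complex arithmetic; the snippet below (plain Python, our own sketch) evaluates the exponential-splitting formula and compares it with cos(x + vπ/2).

```python
import cmath, math

# The v-th derivative of cos x from the exponential rule d^v/dx^v e^(ix) = i^v e^(ix).
# Python's principal-branch powers give i^v = e^(ivπ/2) and (-i)^v = e^(-ivπ/2).
v, x = 0.5, 1.2
lhs = ((1j ** v) * cmath.exp(1j * x) + ((-1j) ** v) * cmath.exp(-1j * x)) / 2.0
rhs = math.cos(x + v * math.pi / 2.0)   # the predicted phase shift
print(lhs.real, rhs)    # equal
print(abs(lhs.imag))    # ≈ 0 (the two terms are complex conjugates)
```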
Thus the generalized differential operator simply shifts the phase of the cosine function (and
likewise the sine function) in proportion to the order of the differentiation. Needless to say, this
approach can be applied to the exponential Fourier representation of an arbitrary function to
define the generalized derivative and integral. Recall that the exponential Fourier representation
of a function f(x) is defined as

$$f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} g(\alpha)\,e^{-i\alpha x}\,d\alpha
\qquad\text{where}\qquad
g(\alpha) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)\,e^{i\alpha x}\,dx.$$

The functions f and g are Fourier transforms of each other. Given this representation of f(x),
the generalized derivatives (and integrals) are simply

$$\frac{d^v}{dx^v} f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} g(\alpha)\,(-i\alpha)^v\,e^{-i\alpha x}\,d\alpha.$$

Hence if g(α) is the Fourier transform of f(x), then the Fourier transform of the generalized
v-th derivative of f(x) is $(-i\alpha)^v g(\alpha)$.

The exponential approach seems to give a very satisfactory way of defining fractional
derivatives... but we have yet to answer L'Hôpital's question, which was to determine the
half-derivative of f(x) = x. There is no Fourier representation of this open-ended function, so
it has no well-defined spectral decomposition. Of course, we can find the Fourier representation
of x over some finite interval, but which interval should we choose? This ambiguity gives a hint
of why Leibniz considered the subject paradoxical: the fractional derivative is non-local,
because it depends on how the function behaves over the range for which the integration is
performed, not just at a single point. But Leibniz was used to thinking of differentiation as
both unique and local, because whole derivatives happen to possess both of those attributes. The
apparent paradoxes of fractional derivatives arise from the fact that, in general,
differentiation is non-unique and non-local, just as integration is. This shouldn't be
surprising, since the generalization essentially unifies integrals and derivatives into a single
operator. If anything, we ought to be surprised at how this operator takes on uniqueness and
locality for positive integer arguments.

To get a clearer idea of the ambiguity in the concept of a generalized derivative, it is useful
to examine a few other approaches and compare them with the exponential approach described
above. The most fundamental approach may be to begin with the basic definition of the whole
derivative of a function f(x),

$$\frac{d}{dx} f(x) = \lim_{\varepsilon \to 0} \frac{f(x) - f(x-\varepsilon)}{\varepsilon}.$$

Repeated composition of this operation leads to

$$\frac{d^n}{dx^n} f(x) = \lim_{\varepsilon \to 0} \frac{1}{\varepsilon^n} \sum_{j=0}^{n} (-1)^j \binom{n}{j} f(x - \varepsilon j) \qquad (1.4.2)$$

for any positive integer n. To illustrate, this formula gives the second derivative of the
function f(x) = $x^4$ as

$$\frac{d^2}{dx^2}\,x^4 = \lim_{\varepsilon \to 0} \frac{1}{\varepsilon^2} \sum_{j=0}^{2} (-1)^j \binom{2}{j} (x - j\varepsilon)^4$$

$$= \lim_{\varepsilon \to 0} \frac{1}{\varepsilon^2} \left[ x^4 - 2(x-\varepsilon)^4 + (x - 2\varepsilon)^4 \right]$$

$$= \lim_{\varepsilon \to 0} \left[ 12x^2 - 24x\varepsilon + 14\varepsilon^2 \right] = 12x^2.$$
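A minimal numerical sketch of this limit (plain Python):

```python
# Finite-difference check of the limit above for f(x) = x^4:
#   (f(x) - 2 f(x - ε) + f(x - 2ε)) / ε^2 → 12 x^2,
# with leading error term -24 x ε as predicted by the expansion.
f = lambda x: x ** 4
x, eps = 2.0, 1e-4

approx = (f(x) - 2.0 * f(x - eps) + f(x - 2.0 * eps)) / eps ** 2
exact = 12.0 * x ** 2
print(approx, exact)    # 47.995... vs 48.0
print(approx - exact)   # ≈ -24 x ε = -0.0048
```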

We can generalize equation (1.4.2) to non-integer orders, but to do this we must not only
generalize the binomial coefficients, we must also determine the appropriate generalization of
the upper summation limit, which we wrote as n in equation (1.4.2). To clarify the situation,
let us go back and derive "from scratch" the operations of differentiation and integration in a
unified context. Consider an arbitrary smooth function f(x) as shown in the figure below.

In addition to the point at x, we have also marked six other equally spaced values on the
interval from 0 to x, each a distance ε from its neighbors. The number k of these points is
related to the values of x and ε by x = kε. For convenience, we define a shift operator
$\sigma_\varepsilon$ such that $\sigma_\varepsilon f(x) = f(x - \varepsilon)$. Now we consider a
general operator D defined by

$$D^n f(x) = \lim_{\varepsilon \to 0} \left( \frac{1 - \sigma_\varepsilon}{\varepsilon} \right)^n f(x).$$

Recalling the geometric series expansion $1/(1-z) = 1 + z + z^2 + z^3 + \cdots$, the effects of
this operator with n = +1 or −1, in the limit as ε goes to zero, are

$$D^1 f(x) = \frac{1 - \sigma_\varepsilon}{\varepsilon}\,f(x) = \frac{f(x) - f(x-\varepsilon)}{\varepsilon}$$

$$D^{-1} f(x) = \left( \frac{1 - \sigma_\varepsilon}{\varepsilon} \right)^{-1} f(x) = \varepsilon \left[ f(x) + f(x-\varepsilon) + f(x-2\varepsilon) + \cdots + f(0) \right].$$

Thus we see that D is simply the differentiation operator, and its inverse, $D^{-1}$, is the
integration operator. This reproduces the ordinary whole derivatives. For example, the second
derivative of f(x) is

$$D^2 f(x) = \left( \frac{1 - \sigma_\varepsilon}{\varepsilon} \right)^2 f(x) = \frac{f(x) - 2f(x-\varepsilon) + f(x-2\varepsilon)}{\varepsilon^2}$$

in the limit as ε goes to zero, which illustrates how we recover the binomial equation (1.4.2)
for any whole number of differentiations. Strictly speaking, however, this context makes it
clear that we should actually write the second derivative as

$$D^2 f(x) = \frac{1\,f(x) - 2\,f(x-\varepsilon) + 1\,f(x-2\varepsilon) - 0\,f(x-3\varepsilon) + \cdots + 0\,f(0)}{\varepsilon^2}.$$

It just so happens that, if n is a positive integer, all the binomial coefficients after the
first n+1 are identically zero (i.e., we have C(n; j) = 0 for all j greater than n), so we can
truncate the series; but for any negative or fractional positive value of n, the binomial
coefficients are non-terminating, so we must include the entire summation over the specified
range. Consequently, the upper summation limit in (1.4.2) should actually be (x − x0)/ε, where
x0 is the lower bound on the range of evaluation. We often choose x0 = 0 by convention, but it
is actually arbitrary, and we will see below some circumstances in which the lower bound is not
zero. In any case, we can re-write equation (1.4.2) in the more correct form that does not rely
on n being a positive integer:

$$\frac{d^n}{dx^n} f(x) = \lim_{\varepsilon \to 0} \frac{1}{\varepsilon^n} \sum_{j=0}^{(x - x_0)/\varepsilon} (-1)^j \binom{n}{j} f(x - \varepsilon j) \qquad (1.4.2\mathrm{a})$$

To define the binomial coefficient for non-integer values of n, recall that for integer
arguments these coefficients are defined as

$$\binom{n}{j} = \frac{n!}{j!\,(n-j)!}.$$

So we need a way of evaluating the factorial function for non-integer arguments. Notice that for
any positive integer n we have the definite integral

$$\int_{-1}^{1} (1 - x^2)^n\,dx = \frac{2^{2n+1}\,(n!)^2}{(2n+1)!}.$$

Solving for n! gives

$$n! = \left[ \frac{(2n+1)!}{2^{2n+1}} \int_{-1}^{1} (1 - x^2)^n\,dx \right]^{1/2}.$$

The only factorial appearing on the right side has the argument 2n+1, so the right-hand
expression is well defined for every half-integer value of n such that 2n+1 is a non-negative
integer. Hence this is a well-defined expression for the factorial of any such half-integer
argument. For example, setting n = −1/2 we get

$$(-1/2)! = \sqrt{\pi}.$$

Furthermore, now that the factorial of all (positive) half-integers is defined, the above
formula allows us to compute the factorial of any quarter-integer, then of any eighth-integer,
and so on. Hence, using the binary representation of real numbers, and using the identity
(x+1)! = (x+1) x!, we have a well-defined factorial function for any real number. This is
traditionally called the Gamma function, with the argument offset by 1 relative to the factorial
notation, so we have

$$\Gamma(n) = (n-1)!$$

for any positive integer n. The fundamental recurrence formula for the Gamma function is
therefore Γ(x+1) = x Γ(x). Thus we have the following values for half-integer arguments:

$$\Gamma\!\left(-\tfrac{3}{2}\right) = \tfrac{4}{3}\sqrt{\pi},\quad
\Gamma\!\left(-\tfrac{1}{2}\right) = -2\sqrt{\pi},\quad
\Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi},\quad
\Gamma\!\left(\tfrac{3}{2}\right) = \tfrac{1}{2}\sqrt{\pi},\quad
\Gamma\!\left(\tfrac{5}{2}\right) = \tfrac{3}{4}\sqrt{\pi}.$$

Note the reflection relation Γ(x) Γ(1 − x) = π / sin(πx). Now that we have a general way of
expressing "factorials" for non-integers, we can re-write equation (1.4.2a) in generalized form,
replacing each appearance of the integer n with the real number v. This gives

$$\frac{d^v}{dx^v} f(x) = \lim_{\varepsilon \to 0} \frac{1}{\varepsilon^v} \sum_{j=0}^{(x - x_0)/\varepsilon} (-1)^j\,\frac{\Gamma(v+1)}{j!\,\Gamma(v+1-j)}\,f(x - j\varepsilon) \qquad (1.4.3)$$

If  is an integer n, the vanishing of the binomial coefficients for all j greater than n implies that
we don‘t really need to carry the summation beyond j = n, and in the limit as  goes to zero the n
values of f(xk) with non-zero coefficients all converge on x, so the derivative is local.
However, in general, the binomial expansion has infinitely many non-zero coefficients, so the
result depends on the values of x all the way down to x0. We typically choose x0 = 0, so we are
effectively evaluating the ―derivative‖ (which is not the same as the ―slope‖) for the interval
from 0 to x. Thus, as mentioned previously, the generalized derivative is a non-local operation,
just as is integration. The general derivative depends on the value of the function f over the
whole range from x0 to x. This can be seen from the factor f(x – j) in the summation in equation
(1.5.2), showing that as j ranges from zero to (x – x0)/ the argument of f ranges from x down to
zero. It just so happens that this non-locality disappears for positive whole derivatives. (This
simple mathematical fact has an important consequence in the strong form of Huygens‘
Principle, which accounts for the sharp propagation of light and other wave like phenomena in
three dimensional space.)

Choosing x0 = 0 as the low end of our differentiation interval, the formula (1.4.3) for the
general derivative becomes

$$\frac{d^v}{dx^v} f(x) = \lim_{\varepsilon \to 0} \frac{1}{\varepsilon^v} \sum_{j=0}^{x/\varepsilon} (-1)^j\,\frac{\Gamma(v+1)}{j!\,\Gamma(v+1-j)}\,f(x - j\varepsilon) \qquad (1.4.4)$$

With this, we are finally equipped to attempt an answer to L'Hôpital's question. Taking the
function f(x) = x with v = 1/2, signifying the half-derivative, this formula gives

$$\frac{d^{1/2}}{dx^{1/2}}\,x = \lim_{\varepsilon \to 0} \frac{1}{\varepsilon^{1/2}} \left[ x - \frac{1}{2}(x - \varepsilon) - \frac{1}{8}(x - 2\varepsilon) - \frac{1}{16}(x - 3\varepsilon) - \frac{5}{128}(x - 4\varepsilon) - \cdots \right],$$

where the sum runs up to j = x/ε. The coefficients here are understood to come from the
generalized binomial function, with the factorials expressed in terms of the Gamma function; as
explained previously, they are just the coefficients in the binomial expansion of
$(1 - z)^{1/2}$. Evaluating this expression in the limit as ε goes to zero, we find that the
half-derivative of x is

$$\frac{d^{1/2}}{dx^{1/2}}\,x = 2\sqrt{\frac{x}{\pi}} \qquad (1.4.5)$$
The half-derivative of any constant function $c\,x^0$, using (1.4.1), is

$$\frac{d^{1/2}}{dx^{1/2}}\,c = c\,\frac{0!}{\Gamma(1/2)}\,x^{-1/2} = \frac{c}{\sqrt{\pi x}} \qquad (1.4.6)$$

Thus, not only is the half-derivative of a constant (with respect to x) non-zero, it is infinite
at x = 0. Nevertheless, equations (1.4.4) and (1.4.1) are agreeably consistent with each other,
giving some confidence in the significance of this generalization of the derivative. Given this
equivalence, one might wonder about the value of the elaborate derivation of equation (1.4.4)
when it seems so much easier and more direct to arrive at equation (1.4.1). In answer to this,
there are two points to consider. First, equation (1.4.4) applies to fairly arbitrary functions,
whereas equation (1.4.1) applies only to functions expressible as power series. Still, a very
large class of functions can be expressed as power series, so this in itself is not an
overriding factor. More important is the fact that equation (1.4.1) gives no hint of the
non-locality of the generalized derivative, i.e., the dependence on the function over a finite
range rather than just at a single point, and the need to specify (implicitly or explicitly) the
chosen range. The importance of this can be seen in several different ways. Perhaps the most
significant reason for taking care with the differentiation interval is brought to light when we
try to apply equation (1.4.4) or (1.4.1) to the simple exponential function. We previously
proposed that the general v-th derivative of $e^{ax}$ is simply $a^v e^{ax}$, and yet if we
expand the exponential function $e^x$ into a power series

$$e^x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$

and apply equation (1.4.1) to determine the half-derivative term by term, we get

$$\frac{d^{1/2}}{dx^{1/2}}\,e^x = \frac{1}{\sqrt{\pi x}} \left( 1 + 2x + \frac{4}{3}x^2 + \frac{8}{15}x^3 + \frac{16}{105}x^4 + \cdots \right).$$

A plot of this function, along with $e^x$, is shown in the figure below.


Here we see one of the paradoxes that might have intrigued Leibniz. According to a very
reasonable general definition we expect any derivative (including fractional derivatives) of the
exponential function to equal the function itself, and the exponential goes to 1 as x goes to
zero, and yet our carefully derived formula for the half-derivative of the exponential function
goes to infinity at x = 0. Clearly something is wrong. Must we abandon the elegant exponential
approach, along with its beautiful explanation of the trigonometric derivatives as simple phase
shifts? No, we can reconcile our results, provided we recognize that the derivative is
non-local, and therefore depends on the chosen range of differentiation. Consider the two
anti-differentiations shown below:

$$\int_{x_0}^{x} u^3\,du = \frac{x^4}{4} - \frac{x_0^4}{4}, \qquad
\int_{x_0}^{x} e^u\,du = e^x - e^{x_0}.$$

The left-hand integral shows that when we say $x^3$ is the derivative of $x^4/4$ we are
implicitly assuming x0 = 0, which is consistent with our derivation of equation (1.4.4).
However, the right-hand integral shows that, by saying $e^x$ is the derivative of $e^x$, we are
implicitly assuming x0 = −∞. Thus the ranges of integration/differentiation we have tacitly
assumed for these two definitions are different. To get agreement between the interpolated
binomial expansion method and the definition based on exponential functions, we must return to
equation (1.4.3) and replace the condition x0 = 0 with the condition x0 = −∞. This is easy to
do, because it simply amounts to setting the upper summation limit to infinity, i.e., we take
the following formula for our generalized derivative:

$$\frac{d^v}{dx^v} f(x) = \lim_{\varepsilon \to 0} \frac{1}{\varepsilon^v} \sum_{j=0}^{\infty} (-1)^j\,\frac{\Gamma(v+1)}{j!\,\Gamma(v+1-j)}\,f(x - j\varepsilon) \qquad (1.4.7)$$

With this, we do indeed find that the v-th derivative of $e^{ax}$ is simply $a^v e^{ax}$,
consistent with the purely exponential approach.
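Formula (1.4.7) can be tested numerically for f(x) = $e^x$ with v = 1/2; the sketch below (plain Python; `gl_inf` is our illustrative name) truncates the infinite sum once the exponential factor has decayed to nothing.

```python
from math import exp, e

# Equation (1.4.7): with lower limit x0 = -∞ the Grünwald sum runs to j = ∞.
# For f(x) = e^x and v = 1/2 the result should be e^x itself. The weights
# (-1)^j Γ(v+1)/(j! Γ(v+1-j)) are generated by the recurrence
#   w_0 = 1,   w_j = w_{j-1} * (j - 1 - v) / j,
# and the factor e^(x - jε) lets us truncate the sum once jε is large.
def gl_inf(v, x, eps, terms):
    w, total = 1.0, exp(x)
    for j in range(1, terms):
        w *= (j - 1.0 - v) / j
        total += w * exp(x - j * eps)
    return total / eps ** v

approx = gl_inf(0.5, 1.0, 1e-3, 50_000)   # truncate at j*eps = 50
print(approx, e)                           # approx ≈ e
```

The truncation point and step size are our own choices; smaller ε brings the sum closer to e at the cost of more terms.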

We also have Cauchy's expression for repeated integrals, which we can express using the Gamma
function instead of factorials as

$$\frac{d^{-v}}{dx^{-v}} f(x) = \frac{1}{\Gamma(v)} \int_0^x (x-u)^{v-1} f(u)\,du \qquad (1.4.8)$$

The convergence properties of this formula are best when v has a value between 0 and 1. There
are two different ways in which the formula might be applied. For example, if we wish to find
the (7/3)rd derivative of a function, we could begin by differentiating the function three whole
times and then apply the above formula with v = 2/3 to "deduct" two thirds of a differentiation;
or, alternatively, we could begin by applying the above formula with v = 2/3 and then
differentiate the resulting function three whole times. These two alternatives are called the
Right Hand and the Left Hand Definitions, respectively. Although the two definitions give the
same result in many circumstances, they are not entirely equivalent, because (for example) the
half-derivative of a constant is zero by the Right Hand Definition, whereas the Left Hand
Definition gives for the half-derivative of a constant the result stated previously as equation
(1.4.6). In general, the Left Hand Definition is more uniformly consistent with the previous
methods, but the Right Hand Definition has also found some applications.

Equation (1.4.8) highlights (again) the non-local character of fractional operations, because it
explicitly involves an integral, which we have stipulated to range from 0 to x. For any whole
number of differentiations we don't need to invoke this integral, but for a non-integer number
of differentiations we must include its effect, which implies that the result depends not just
on the value of f at x, but on its values over the stipulated range from 0 to x.

To illustrate the use of equation (1.4.8), we will (again) determine the half-derivative of
f(x) = x, as L'Hôpital requested. Using the Left Hand Definition, we first apply half of an
integration to this function using equation (1.4.8) with v = 1/2, giving

$$\frac{d^{-1/2}}{dx^{-1/2}}\,x = \frac{1}{\Gamma(1/2)} \int_0^x (x-u)^{-1/2}\,u\,du = \frac{4}{3\sqrt{\pi}}\,x^{3/2}.$$

Then we apply one whole differentiation to give the net result of a half-derivative,

$$\frac{d^{1/2}}{dx^{1/2}}\,x = \frac{d}{dx}\,\frac{4}{3\sqrt{\pi}}\,x^{3/2} = 2\sqrt{\frac{x}{\pi}},$$

in agreement with equation (1.4.5). In this case the Right Hand Definition gives the same
result.
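The same Left Hand computation can be mimicked numerically: half-integrate with (1.4.8) by quadrature (Python with SciPy assumed; `half_integral` is our illustrative name), then take one numerical derivative.

```python
from math import sqrt, pi
from scipy.integrate import quad

# Left Hand Definition, numerically: half-integrate f(x) = x with (1.4.8), v = 1/2,
# then differentiate once. The substitution u = x(1 - w^2) turns the weakly singular
# kernel (x - u)^(-1/2) into a smooth integrand on [0, 1]:
#   ∫_0^x (x-u)^(-1/2) f(u) du = 2√x ∫_0^1 f(x(1 - w^2)) dw
def half_integral(f, x):
    integral, _ = quad(lambda w: f(x * (1.0 - w * w)), 0.0, 1.0)
    return 2.0 * sqrt(x) * integral / sqrt(pi)    # divide by Γ(1/2) = √π

f = lambda u: u
x, h = 1.0, 1e-5

J = half_integral(f, x)                                               # 4 x^(3/2)/(3√π)
D = (half_integral(f, x + h) - half_integral(f, x - h)) / (2.0 * h)   # central difference
print(J, 4.0 * x ** 1.5 / (3.0 * sqrt(pi)))   # both ≈ 0.7523
print(D, 2.0 * sqrt(x / pi))                  # both ≈ 1.1284
```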

Now suppose we apply this method to the exponential function. Since our definition has been
based on the range from 0 to x, whereas we have seen that the "exponential approach" to
fractional derivatives is essentially based on the range from −∞ to x, we expect to find
disagreement, and indeed for the half-derivative of $e^x$ we get (by applying (1.4.8) with
v = 1/2 and then differentiating one whole time)

$$\frac{d^{1/2}}{dx^{1/2}}\,e^x = e^x\,\mathrm{erf}(\sqrt{x}) + \frac{1}{\sqrt{\pi x}}.$$

This is identical to the half-derivative of $e^x$ obtained from equation (1.4.1), shown in the
plot presented previously (when we only had the series expansion of this function). Again, we
can reconcile this approach with the "exponential approach" by changing the lower limit of the
integration from 0 to −∞. When we make this change, equation (1.4.8) gives

$$\frac{d^{-1/2}}{dx^{-1/2}}\,e^x = \frac{1}{\Gamma(1/2)} \int_{-\infty}^{x} (x-u)^{-1/2}\,e^u\,du = e^x,$$

and of course the whole derivative of this is also $e^x$, so the half-derivative of $e^x$ by
this method is indeed $e^x$, provided we use a suitable range of differentiation.

1.5 The Fractional Derivatives and Integrals

1.5.1 The Fractional Integral

According to the Riemann-Liouville approach to fractional calculus, the notion of a fractional
integral of order α (α > 0) is a natural consequence of the well-known formula (usually
attributed to Cauchy) that reduces the calculation of the n-fold primitive of a function f(t) to
a single integral of convolution type. In our notation the Cauchy formula reads

$$J^n f(t) = f_n(t) = \frac{1}{(n-1)!} \int_0^t (t-\tau)^{n-1} f(\tau)\,d\tau, \qquad t > 0,\ n \in \mathbb{N}, \qquad (1.5.1)$$

where ℕ is the set of positive integers. From this definition we note that $f_n(t)$ vanishes at
t = 0 together with its derivatives of order 1, 2, ..., n − 1.
By convention we require that f(t), and henceforth $f_n(t)$, be a causal function, i.e.
identically vanishing for t < 0. In a natural way one is led to extend the above formula from
positive integer values of the index to any positive real value by using the Gamma function.
Indeed, noting that (n − 1)! = Γ(n), and introducing an arbitrary positive real number α, one
defines the fractional integral of order α > 0:

$$J^\alpha f(t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} f(\tau)\,d\tau, \qquad t > 0,\ \alpha \in \mathbb{R}^+, \qquad (1.5.2)$$

where ℝ+ is the set of positive real numbers. For complementation we define $J^0 = I$ (the
identity operator), i.e. $J^0 f(t) = f(t)$. Furthermore, by $J^\alpha f(0^+)$ we mean the limit
(if it exists) of $J^\alpha f(t)$ for t → 0+; this limit may be infinite.

We note the semigroup property

$$J^\alpha J^\beta = J^{\alpha+\beta}, \qquad \alpha, \beta \ge 0, \qquad (1.5.3)$$

which implies the commutative property $J^\beta J^\alpha = J^\alpha J^\beta$, and the effect of
our operators $J^\alpha$ on the power functions,

$$J^\alpha t^\gamma = \frac{\Gamma(\gamma+1)}{\Gamma(\gamma+1+\alpha)}\,t^{\gamma+\alpha}, \qquad \alpha > 0,\ \gamma > -1,\ t > 0. \qquad (1.5.4)$$

The properties (1.5.3) and (1.5.4) are of course a natural generalization of those known when
the order is a positive integer. The proofs are based on the properties of the two Eulerian
integrals, i.e. the Gamma and Beta functions,

$$\Gamma(z) = \int_0^\infty e^{-u} u^{z-1}\,du, \qquad \Gamma(z+1) = z\,\Gamma(z), \qquad \mathrm{Re}\{z\} > 0, \qquad (1.5.5)$$

$$B(p,q) = \int_0^1 (1-u)^{p-1} u^{q-1}\,du = \frac{\Gamma(p)\,\Gamma(q)}{\Gamma(p+q)} = B(q,p), \qquad \mathrm{Re}\{p\},\ \mathrm{Re}\{q\} > 0. \qquad (1.5.6)$$

It may be convenient to introduce the following causal function:

$$\Phi_\alpha(t) := \frac{t_+^{\alpha-1}}{\Gamma(\alpha)}, \qquad \alpha > 0, \qquad (1.5.7)$$

where the suffix + denotes that the function vanishes for t < 0. Since α > 0, this function
turns out to be locally absolutely integrable in ℝ+. Let us now recall the notion of Laplace
convolution, i.e. the convolution integral of two causal functions, which in standard notation
reads

$$f(t) * g(t) = \int_0^t f(t-\tau)\,g(\tau)\,d\tau = g(t) * f(t).$$

Then we note from (1.5.2) and (1.5.7) that the fractional integral of order α > 0 can be viewed
as the Laplace convolution of $\Phi_\alpha(t)$ and f(t), i.e.

$$J^\alpha f(t) = \Phi_\alpha(t) * f(t), \qquad \alpha > 0. \qquad (1.5.8)$$

Furthermore, based on the Eulerian integrals, one proves the composition rule

$$\Phi_\alpha(t) * \Phi_\beta(t) = \Phi_{\alpha+\beta}(t), \qquad \alpha, \beta > 0, \qquad (1.5.9)$$

which can be used to re-obtain (1.5.3) and (1.5.4).

Introducing the Laplace transform by the notation

$$\mathcal{L}\{f(t)\} = \int_0^\infty e^{-st} f(t)\,dt = \tilde{f}(s), \qquad s \in \mathbb{C},$$

and using the sign ÷ to denote a Laplace transform pair, i.e. $f(t) \div \tilde{f}(s)$, we note
the following rule for the Laplace transform of the fractional integral:

$$J^\alpha f(t) \div \frac{\tilde{f}(s)}{s^\alpha}, \qquad (1.5.10)$$

which is the straightforward generalization of the case of an n-fold repeated integral (α = n).
For the proof it is sufficient to recall the convolution theorem for Laplace transforms and note
the pair $\Phi_\alpha(t) \div 1/s^\alpha$, with α > 0.
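As a one-point sketch of rule (1.5.10) (Python with SciPy assumed; the helper names are ours), take f(t) = $e^{-t}$ and α = 1/2 and compare both sides at s = 2.

```python
from math import sqrt, pi, exp
from scipy.integrate import quad

# Numerical check of rule (1.5.10), L{J^α f}(s) = L{f}(s)/s^α, for f(t) = e^(-t), α = 1/2.
def J_half(f, t):   # fractional integral of order 1/2; u = t(1 - w^2) removes the singularity
    integral, _ = quad(lambda w: f(t * (1.0 - w * w)), 0.0, 1.0)
    return 2.0 * sqrt(t) * integral / sqrt(pi)

def laplace(g, s):
    val, _ = quad(lambda t: exp(-s * t) * g(t), 0.0, 50.0)   # e^(-st) negligible past t = 50
    return val

s = 2.0
lhs = laplace(lambda t: J_half(lambda u: exp(-u), t), s)
rhs = (1.0 / (s + 1.0)) / sqrt(s)    # L{e^(-t)} = 1/(s + 1), divided by s^(1/2)
print(lhs, rhs)                      # both ≈ 0.2357
```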

1.5.2 The Fractional Derivatives

After the notion of the fractional integral, that of the fractional derivative of order α
(α > 0) becomes a natural requirement, and one is tempted to substitute α with −α in the above
formulas. However, this generalization needs some care in order to guarantee the convergence of
the integrals and to preserve the well-known properties of the ordinary derivative of integer
order.

Denoting by $D^n$, with n ∈ ℕ, the operator of the derivative of order n, we first note that

$$D^n J^n = I, \qquad J^n D^n \ne I, \qquad n \in \mathbb{N}, \qquad (1.5.11)$$

i.e. $D^n$ is a left-inverse (and not a right-inverse) of the corresponding integral operator
$J^n$. In fact, from (1.5.1),

$$J^n D^n f(t) = f(t) - \sum_{k=0}^{n-1} f^{(k)}(0^+)\,\frac{t^k}{k!}, \qquad t > 0. \qquad (1.5.12)$$

As a consequence, we require that $D^\alpha$ be defined as a left-inverse of $J^\alpha$. For
this purpose, introducing the positive integer m such that m − 1 < α ≤ m, we define the
fractional derivative of order α > 0 as $D^\alpha f(t) = D^m J^{m-\alpha} f(t)$, namely

$$D^\alpha f(t) =
\begin{cases}
\dfrac{d^m}{dt^m}\left[\dfrac{1}{\Gamma(m-\alpha)} \displaystyle\int_0^t \dfrac{f(\tau)}{(t-\tau)^{\alpha+1-m}}\,d\tau\right], & m-1 < \alpha < m, \\[2ex]
\dfrac{d^m}{dt^m}\,f(t), & \alpha = m.
\end{cases} \qquad (1.5.13)$$
Defining for complementation $D^0 = J^0 = I$, we easily recognize that

$$D^\alpha J^\alpha = I, \qquad \alpha \ge 0, \qquad (1.5.14)$$

and

$$D^\alpha t^\gamma = \frac{\Gamma(\gamma+1)}{\Gamma(\gamma+1-\alpha)}\,t^{\gamma-\alpha}, \qquad \alpha > 0,\ \gamma > -1,\ t > 0. \qquad (1.5.15)$$

Of course, the properties (1.5.14) & (1.5.15) are a natural generalization of those known when
the order is a positive integer. Since in (1.5.15) the argument of the Gamma function in the
denominator can be negative, we need to consider the analytical continuation of Γ(z) in (1.5.5) to
the left half plane.

A remarkable fact is that the fractional derivative $D^\alpha f$ is not zero for the constant
function f(t) ≡ 1 when α ∉ ℕ. In fact, (1.5.15) with γ = 0 teaches us that

$$D^\alpha 1 = \frac{t^{-\alpha}}{\Gamma(1-\alpha)}, \qquad \alpha \ge 0,\ t > 0. \qquad (1.5.16)$$

This is, of course, ≡ 0 for α ∈ ℕ, due to the poles of the Gamma function at the points
0, −1, −2, ....

We now observe that an alternative definition of the fractional derivative, originally
introduced by Caputo and adopted by Caputo and Mainardi in the framework of the theory of linear
viscoelasticity, is the so-called Caputo fractional derivative of order α > 0,
$D_*^\alpha f(t) = J^{m-\alpha} D^m f(t)$ with m − 1 < α ≤ m, namely

$$D_*^\alpha f(t) =
\begin{cases}
\dfrac{1}{\Gamma(m-\alpha)} \displaystyle\int_0^t \dfrac{f^{(m)}(\tau)}{(t-\tau)^{\alpha+1-m}}\,d\tau, & m-1 < \alpha < m, \\[2ex]
\dfrac{d^m}{dt^m}\,f(t), & \alpha = m.
\end{cases} \qquad (1.5.17)$$
This definition is of course more restrictive than (1.5.13), in that it requires the absolute
integrability of the derivative of order m. Whenever we use the operator $D_*^\alpha$, we assume
that this condition is met. We easily recognize that in general

$$D^\alpha f(t) = D^m J^{m-\alpha} f(t) \ne J^{m-\alpha} D^m f(t) = D_*^\alpha f(t), \qquad (1.5.18)$$

unless the function f(t), along with its first m − 1 derivatives, vanishes at t = 0+. In fact,
assuming that the passage of the m-th derivative under the integral is legitimate, one
recognizes that, for m − 1 < α < m and t > 0,

$$D^\alpha f(t) = D_*^\alpha f(t) + \sum_{k=0}^{m-1} \frac{t^{k-\alpha}}{\Gamma(k-\alpha+1)}\,f^{(k)}(0^+), \qquad (1.5.19)$$

and therefore, recalling the fractional derivative of the power functions (1.5.15),

$$D^\alpha f(t) - \sum_{k=0}^{m-1} \frac{t^{k-\alpha}}{\Gamma(k-\alpha+1)}\,f^{(k)}(0^+) = D_*^\alpha f(t). \qquad (1.5.20)$$

The alternative definition (1.5.17) of the fractional derivative thus incorporates the initial
values of the function and of its integer derivatives of lower order.

Remark
The main advantage of the Caputo derivative is that the initial conditions for fractional
differential equations (FDEs) take the same form as for ODEs with integer derivatives. Another
difference is that the Caputo derivative of a constant c is zero, while the Riemann-Liouville
fractional derivative of a constant c is not zero but equals

$$_aD_t^\alpha\,c = \frac{c\,(t-a)^{-\alpha}}{\Gamma(1-\alpha)}.$$

The subtraction of the Taylor polynomial of degree m − 1 at t = 0+ from f(t) amounts to a sort
of regularization of the fractional derivative. In particular, according to this definition, the
relevant property that the fractional derivative of a constant is still zero,

$$D_*^\alpha 1 \equiv 0, \qquad (1.5.21)$$

can be easily recognized.
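The contrast between the two derivatives on a constant can also be seen numerically (Python with SciPy assumed; the helper name `J_half` is ours).

```python
from math import sqrt, pi
from scipy.integrate import quad

# Riemann-Liouville vs Caputo half-derivative of the constant f(t) = 1.
#   RL:     D^(1/2) 1  = d/dt J^(1/2) 1 = t^(-1/2)/Γ(1/2)   (equation (1.5.16))
#   Caputo: D*^(1/2) 1 = J^(1/2) (d/dt 1) = J^(1/2) 0 = 0   (equation (1.5.21))
# J^(1/2) is computed by quadrature; u = t(1 - w^2) removes the kernel singularity.
def J_half(f, t):
    integral, _ = quad(lambda w: f(t * (1.0 - w * w)), 0.0, 1.0)
    return 2.0 * sqrt(t) * integral / sqrt(pi)

t, h = 1.0, 1e-5
rl = (J_half(lambda u: 1.0, t + h) - J_half(lambda u: 1.0, t - h)) / (2.0 * h)
caputo = J_half(lambda u: 0.0, t)    # half-integral of the zero derivative
print(rl, 1.0 / sqrt(pi * t))        # both ≈ 0.5642
print(caputo)                        # 0.0
```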

We now explore the most relevant differences between the two fractional derivatives (1.5.13)
and (1.5.17). We agree to denote (1.5.17) as the Caputo fractional derivative, to distinguish it
from the standard Riemann-Liouville fractional derivative (1.5.13).

We observe, again by looking at (1.5.15), that

$$D^\alpha t^{\alpha-1} \equiv 0, \qquad \alpha > 0,\ t > 0. \qquad (1.5.22)$$

From (1.5.21) and (1.5.22) we thus recognize the following statements about functions which, for
t > 0, admit the same fractional derivative of order α, with m − 1 < α ≤ m, m ∈ ℕ:

$$D^\alpha f(t) = D^\alpha g(t) \iff f(t) = g(t) + \sum_{j=1}^{m} c_j\,t^{\alpha-j}, \qquad (1.5.23)$$

$$D_*^\alpha f(t) = D_*^\alpha g(t) \iff f(t) = g(t) + \sum_{j=1}^{m} c_j\,t^{m-j}. \qquad (1.5.24)$$

In these formulas the coefficients $c_j$ are arbitrary constants.

Incidentally, we note that (1.5.22) provides an instructive example of how $D^\alpha$ is not a
right-inverse of $J^\alpha$, since

$$J^\alpha D^\alpha t^{\alpha-1} \equiv 0 \quad\text{but}\quad D^\alpha J^\alpha t^{\alpha-1} = t^{\alpha-1}, \qquad \alpha > 0,\ t > 0. \qquad (1.5.25)$$

For the two definitions we also note a difference with respect to the formal limit as
α → (m − 1)+. From (1.5.13) and (1.5.17) we obtain, respectively,

$$\alpha \to (m-1)^+ \implies D^\alpha f(t) \to D^m J f(t) = D^{m-1} f(t); \qquad (1.5.26)$$

$$\alpha \to (m-1)^+ \implies D_*^\alpha f(t) \to J D^m f(t) = D^{m-1} f(t) - f^{(m-1)}(0^+). \qquad (1.5.27)$$

We now consider the Laplace transforms of the two fractional derivatives. For the standard
fractional derivative $D^\alpha$, the Laplace transform, assumed to exist, requires the
knowledge of the (bounded) initial values of the fractional integral $J^{m-\alpha} f$ and of its
integer derivatives of order k = 1, 2, ..., m − 1. The corresponding rule reads, in our
notation,

$$D^\alpha f(t) \div s^\alpha \tilde{f}(s) - \sum_{k=0}^{m-1} D^k J^{m-\alpha} f(0^+)\,s^{m-1-k}, \qquad m-1 < \alpha \le m. \qquad (1.5.28)$$

The Caputo fractional derivative appears more suitable for treatment by the Laplace transform
technique, in that it requires the knowledge of the (bounded) initial values of the function and
of its integer derivatives of order k = 1, 2, ..., m − 1, in analogy with the case α = m. In
fact, by using (1.5.10) and noting that

$$J^\alpha D_*^\alpha f(t) = J^\alpha J^{m-\alpha} D^m f(t) = J^m D^m f(t) = f(t) - \sum_{k=0}^{m-1} f^{(k)}(0^+)\,\frac{t^k}{k!}, \qquad (1.5.29)$$

we easily prove the following rule for the Laplace transform:

$$D_*^\alpha f(t) \div s^\alpha \tilde{f}(s) - \sum_{k=0}^{m-1} f^{(k)}(0^+)\,s^{\alpha-1-k}, \qquad m-1 < \alpha \le m. \qquad (1.5.30)$$

Indeed, the result (1.5.30), first stated by Caputo by using the Fubini-Tonelli theorem, appears
as the most "natural" generalization of the corresponding result well known for α = m.
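A one-point numerical check of rule (1.5.30) (Python with SciPy assumed): for f(t) = t and α = 1/2 the Caputo half-derivative is 2√(t/π), and the rule predicts that its transform is $s^{-3/2}$.

```python
from math import sqrt, pi, exp
from scipy.integrate import quad

# Check of rule (1.5.30) at s = 2 for f(t) = t, α = 1/2 (so m = 1):
# the Caputo half-derivative of t is 2√(t/π), and the rule predicts
#   L{D*^(1/2) t}(s) = s^(1/2)·(1/s^2) - f(0+)·s^(-1/2) = s^(-3/2).
s = 2.0
lhs, _ = quad(lambda t: exp(-s * t) * 2.0 * sqrt(t / pi), 0.0, 50.0)  # e^(-st) negligible past t = 50
rhs = s ** (-1.5)
print(lhs, rhs)   # both ≈ 0.3536
```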
