KEITH CONRAD
1. Introduction
The method of differentiation under the integral sign, due originally to Leibniz, concerns integrals depending on a parameter, such as ∫_0^1 x^2 e^{-tx} dx. Here t is the extra parameter. (Since x is the variable of integration, x is not a parameter.) In general, we might write such an integral as

(1.1) ∫_a^b f(x, t) dx,

where f(x, t) is a function of two variables like f(x, t) = x^2 e^{-tx}.
Example 1.1. Let f(x, t) = (2x + t^3)^2. Then

∫_0^1 f(x, t) dx = ∫_0^1 (2x + t^3)^2 dx.

An anti-derivative of (2x + t^3)^2 with respect to x is (1/6)(2x + t^3)^3, so

∫_0^1 (2x + t^3)^2 dx = (2x + t^3)^3/6 |_{x=0}^{x=1}
                      = ((2 + t^3)^3 − t^9)/6
                      = 4/3 + 2t^3 + t^6.
This answer is a function of t, which makes sense since the integrand depends on t. We integrate
over x and are left with something that depends only on t, not x.
An integral like ∫_a^b f(x, t) dx is a function of t, so we can ask about its t-derivative, assuming that f(x, t) is nicely behaved. The rule, called differentiation under the integral sign, is that the t-derivative of the integral of f(x, t) is the integral of the t-derivative of f(x, t):

(1.2) d/dt ∫_a^b f(x, t) dx = ∫_a^b ∂/∂t f(x, t) dx.
If you are used to thinking mostly about functions with one variable, not two, keep in mind that
(1.2) involves integrals and derivatives with respect to separate variables: integration with respect
to x and differentiation with respect to t.
Example 1.2. We saw in Example 1.1 that ∫_0^1 (2x + t^3)^2 dx = 4/3 + 2t^3 + t^6, whose t-derivative is 6t^2 + 6t^5. According to (1.2), we can also compute the t-derivative of the integral like this:

d/dt ∫_0^1 (2x + t^3)^2 dx = ∫_0^1 ∂/∂t (2x + t^3)^2 dx
                           = ∫_0^1 2(2x + t^3)(3t^2) dx
                           = ∫_0^1 (12t^2 x + 6t^5) dx
                           = (6t^2 x^2 + 6t^5 x) |_{x=0}^{x=1}
                           = 6t^2 + 6t^5.
The answer agrees with our first, more direct, calculation.
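As a numerical sanity check of (1.2) (our own addition, not part of the original text; the sample point t = 0.7, the difference-quotient step, and the Simpson helper are arbitrary choices), we can compare a difference quotient of the integral with the integral of the t-partial derivative:

```python
def simpson(g, a, b, n=1000):
    # Composite Simpson's rule on [a, b] with n subintervals (n even).
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def F(t):
    # F(t) = integral from 0 to 1 of (2x + t^3)^2 dx, evaluated numerically.
    return simpson(lambda x: (2 * x + t**3) ** 2, 0.0, 1.0)

t = 0.7
eps = 1e-6
# Left side of (1.2): symmetric difference quotient of F at t.
lhs = (F(t + eps) - F(t - eps)) / (2 * eps)
# Right side of (1.2): integral of the t-partial derivative 2(2x + t^3)(3t^2).
rhs = simpson(lambda x: 2 * (2 * x + t**3) * (3 * t**2), 0.0, 1.0)
# Closed form 6t^2 + 6t^5 from Example 1.2.
exact = 6 * t**2 + 6 * t**5
```

Both sides agree with the closed form to within the expected discretization error.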
We will apply (1.2) to many examples of integrals; in Section 11 we will discuss the justification of this method in our examples, and then we will give some more examples.
so

(2.4) ∫_0^∞ x e^{-tx} dx = 1/t^2.
Differentiate both sides of (2.4) with respect to t, again using (1.2) to handle the left side. We get

∫_0^∞ −x^2 e^{-tx} dx = −2/t^3.

Taking out the sign on both sides,

(2.5) ∫_0^∞ x^2 e^{-tx} dx = 2/t^3.
If we continue to differentiate each new equation with respect to t a few more times, we obtain

∫_0^∞ x^3 e^{-tx} dx = 6/t^4,
∫_0^∞ x^4 e^{-tx} dx = 24/t^5,

and

∫_0^∞ x^5 e^{-tx} dx = 120/t^6.

Do you see the pattern? It is

(2.6) ∫_0^∞ x^n e^{-tx} dx = n!/t^{n+1}.
We have used the presence of the extra variable t to get these equations by repeatedly applying
d/dt. Now specialize t to 1 in (2.6). We obtain
∫_0^∞ x^n e^{-x} dx = n!,
which is our old friend (2.1). Voilà!
The idea that made this work is introducing a parameter t, using calculus on t, and then setting
t to a particular value so it disappears from the final formula. In other words, sometimes to solve
a problem it is useful to solve a more general problem. Compare (2.1) to (2.6).
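A sketch of how (2.6) can be spot-checked numerically (our own addition; the truncation point 80/t, the sample value t = 1.5, and the grid size are arbitrary choices):

```python
import math

def simpson(g, a, b, n=100000):
    # Composite Simpson's rule on [a, b] with n subintervals (n even).
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

t = 1.5
rel_errors = []
for n in range(6):
    # Truncate [0, infinity) at x = 80/t, where the integrand is negligible.
    approx = simpson(lambda x: x**n * math.exp(-t * x), 0.0, 80.0 / t)
    exact = math.factorial(n) / t ** (n + 1)
    rel_errors.append(abs(approx - exact) / exact)
```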
Applying this with a = −t and turning the indefinite integral into a definite integral,

F′(t) = −∫_0^∞ e^{-tx} (sin x) dx = (t sin x + cos x)/(1 + t^2) · e^{-tx} |_{x=0}^{x=∞}.
As x → ∞, t sin x + cos x oscillates a lot, but in a bounded way (since sin x and cos x are bounded
functions), while the term e−tx decays exponentially to 0 since t > 0. So the value at x = ∞ is 0.
Therefore

F′(t) = −∫_0^∞ e^{-tx} (sin x) dx = −1/(1 + t^2).
We know an explicit antiderivative of 1/(1 + t^2), namely arctan t. Since F(t) has the same t-derivative as −arctan t, they differ by a constant: for some number C,

(3.1) ∫_0^∞ e^{-tx} (sin x)/x dx = −arctan t + C for t > 0.
We’ve computed the integral, up to an additive constant, without finding an antiderivative of
e−tx (sin x)/x.
To compute C in (3.1), let t → ∞ on both sides. Since |(sin x)/x| ≤ 1, the absolute value of the integral on the left is bounded from above by ∫_0^∞ e^{-tx} dx = 1/t, so the integral on the left in (3.1) tends to 0 as t → ∞. Since arctan t → π/2 as t → ∞, equation (3.1) as t → ∞ becomes 0 = −π/2 + C, so C = π/2. Feeding this back into (3.1),
(3.2) ∫_0^∞ e^{-tx} (sin x)/x dx = π/2 − arctan t for t > 0.
If we let t → 0+ in (3.2), this equation suggests that
(3.3) ∫_0^∞ (sin x)/x dx = π/2,

which is true, and it is important in signal processing and Fourier analysis. It is a delicate matter to
derive (3.3) from (3.2) since the integral in (3.3) is not absolutely convergent. Details are provided
in an appendix.
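Formula (3.2) is easy to test numerically; the following check is our own addition (the truncation at x = 60 and the sample t = 1 are arbitrary, and the x = 0 endpoint uses the limit (sin x)/x → 1):

```python
import math

def simpson(g, a, b, n=100000):
    # Composite Simpson's rule on [a, b] with n subintervals (n even).
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def integrand(x, t):
    sinc = math.sin(x) / x if x != 0 else 1.0  # (sin x)/x -> 1 as x -> 0
    return math.exp(-t * x) * sinc

t = 1.0
approx = simpson(lambda x: integrand(x, t), 0.0, 60.0)
exact = math.pi / 2 - math.atan(t)
```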
Differentiate both sides of (5.4) with respect to t. After removing a common factor of −1/2 on
both sides, we get
(5.5) ∫_{−∞}^∞ x^4 e^{-tx^2/2} dx = 3√(2π)/t^{5/2}.
Differentiating both sides of (5.5) with respect to t a few more times, we get
∫_{−∞}^∞ x^6 e^{-tx^2/2} dx = 3·5 √(2π)/t^{7/2},
∫_{−∞}^∞ x^8 e^{-tx^2/2} dx = 3·5·7 √(2π)/t^{9/2},

and

∫_{−∞}^∞ x^{10} e^{-tx^2/2} dx = 3·5·7·9 √(2π)/t^{11/2}.
Quite generally, when n is even
∫_{−∞}^∞ x^n e^{-tx^2/2} dx = (1·3·5···(n−1)/t^{n/2}) √(2π/t),
where the numerator is the product of the positive odd integers from 1 to n − 1 (understood to be
the empty product 1 when n = 0).
In particular, taking t = 1 we have computed (5.1):
∫_{−∞}^∞ x^n e^{-x^2/2} dx = 1·3·5···(n−1) √(2π).
As an application of (5.4), we now compute (1/2)! := ∫_0^∞ x^{1/2} e^{-x} dx, where the notation (1/2)! and its definition are inspired by Euler's integral formula (2.1) for n! when n is a nonnegative integer. Using the substitution u = x^{1/2} in ∫_0^∞ x^{1/2} e^{-x} dx, we have

(1/2)! = ∫_0^∞ x^{1/2} e^{-x} dx
       = ∫_0^∞ u e^{-u^2} (2u) du
       = 2 ∫_0^∞ u^2 e^{-u^2} du
       = ∫_{−∞}^∞ u^2 e^{-u^2} du
       = √(2π)/2^{3/2}   (by (5.4) at t = 2)
       = √π/2.
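Python's standard library exposes Euler's Γ-function as math.gamma, and ∫_0^∞ x^{1/2} e^{-x} dx is Γ(3/2) by the definition of the Γ-function, so the value √π/2 can be double-checked directly (our own addition; the truncation at x = 50 is arbitrary):

```python
import math

def simpson(g, a, b, n=100000):
    # Composite Simpson's rule on [a, b] with n subintervals (n even).
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

exact = math.sqrt(math.pi) / 2
# (1/2)! = Gamma(3/2), using the standard-library Gamma function.
half_factorial = math.gamma(1.5)
# Direct numerical evaluation of the defining integral.
approx = simpson(lambda x: math.sqrt(x) * math.exp(-x), 0.0, 50.0)
```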
6. A cosine transform of the Gaussian
We are going to compute

F(t) = ∫_0^∞ cos(tx) e^{-x^2/2} dx
DIFFERENTIATING UNDER THE INTEGRAL SIGN 7
We already know ∫_0^∞ e^{-x^2} dx = √π/2, but how do we find the integral when a factor of log x is inserted into the integrand? Replacing x with √x in the integral,

(7.1) ∫_0^∞ (log x) e^{-x^2} dx = (1/4) ∫_0^∞ (log x)/√x · e^{-x} dx.
To compute this last integral, the key idea is that (d/dt)(x^t) = x^t log x, so we get a factor of log x in an integral after differentiation under the integral sign if the integrand has an exponential parameter: for t > −1 set

F(t) = ∫_0^∞ x^t e^{-x} dx.
(This is integrable for x near 0 since for small x, x^t e^{-x} ≈ x^t, which is integrable near 0 since t > −1.) Differentiating both sides with respect to t,
F′(t) = ∫_0^∞ x^t (log x) e^{-x} dx,
so (7.1) tells us the number we are interested in is F′(−1/2)/4.
The function F(t) is well-known under a different name: for s > 0, the Γ-function at s is defined by

Γ(s) = ∫_0^∞ x^{s−1} e^{-x} dx,
so Γ(s) = F(s − 1). Therefore Γ′(s) = F′(s − 1), so F′(−1/2)/4 = Γ′(1/2)/4. For the rest of this section we work out a formula for Γ′(1/2)/4 using properties of the Γ-function; there is no more differentiation under the integral sign.
We need two standard identities for the Γ-function:
(7.2) Γ(s + 1) = sΓ(s),   Γ(s)Γ(s + 1/2) = 2^{1−2s} √π Γ(2s).
The first identity follows from integration by parts. Since Γ(1) = ∫_0^∞ e^{-x} dx = 1, the first identity implies Γ(n) = (n − 1)! for every positive integer n. The second identity, called the duplication formula, is subtle. For example, at s = 1/2 it says Γ(1/2) = √π. A proof of the duplication formula
can be found in many complex analysis textbooks. (The integral defining Γ(s) makes sense not just
for real s > 0, but also for complex s with Re(s) > 0, and the Γ-function is usually regarded as a
function of a complex, rather than real, variable.)
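Both identities in (7.2) can be spot-checked with math.gamma (a numerical sanity check of our own; the sample point s = 3.7 is arbitrary):

```python
import math

s = 3.7
# Recursion: Gamma(s + 1) = s * Gamma(s).
rec_lhs = math.gamma(s + 1)
rec_rhs = s * math.gamma(s)
# Duplication formula: Gamma(s) Gamma(s + 1/2) = 2^(1 - 2s) sqrt(pi) Gamma(2s).
dup_lhs = math.gamma(s) * math.gamma(s + 0.5)
dup_rhs = 2 ** (1 - 2 * s) * math.sqrt(math.pi) * math.gamma(2 * s)
```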
Differentiating the first identity in (7.2),

(7.3) Γ′(s + 1) = sΓ′(s) + Γ(s),

so at s = 1/2

(7.4) Γ′(3/2) = (1/2)Γ′(1/2) + Γ(1/2) = (1/2)Γ′(1/2) + √π  ⟹  Γ′(1/2) = 2(Γ′(3/2) − √π).
Differentiating the second identity in (7.2),
(7.5) Γ(s)Γ′(s + 1/2) + Γ′(s)Γ(s + 1/2) = 2^{1−2s} (−log 4) √π Γ(2s) + 2^{1−2s} √π · 2Γ′(2s).

Setting s = 1 here and using Γ(1) = Γ(2) = 1,

(7.6) Γ′(3/2) + Γ′(1)Γ(3/2) = (−log 2)√π + √π Γ′(2).
We compute Γ(3/2) by the first identity in (7.2) at s = 1/2: Γ(3/2) = (1/2)Γ(1/2) = √π/2 (we already computed this at the end of Section 5). We compute Γ′(2) by (7.3) at s = 1: Γ′(2) = Γ′(1) + 1. Thus (7.6) says

Γ′(3/2) + Γ′(1) √π/2 = (−log 2)√π + √π (Γ′(1) + 1)  ⟹  Γ′(3/2) = √π (−log 2 + Γ′(1)/2) + √π.
It turns out that Γ′(1) = −γ, where γ ≈ 0.577 is Euler's constant. Thus, at last,

∫_0^∞ (log x) e^{-x^2} dx = Γ′(1/2)/4 = −(√π/4)(2 log 2 + γ).
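The value Γ′(1/2) = −√π (2 log 2 + γ) behind this answer can be checked against a central-difference derivative of math.gamma (our own numerical check; the step size and the hard-coded decimal value of γ are our choices):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler's constant, hard-coded

# Central-difference approximation to Gamma'(1/2).
h = 1e-6
dgamma_half = (math.gamma(0.5 + h) - math.gamma(0.5 - h)) / (2 * h)

# Closed form derived above: Gamma'(1/2) = -sqrt(pi) (2 log 2 + gamma);
# the integral of (log x) e^{-x^2} over [0, infinity) is a quarter of this.
closed = -math.sqrt(math.pi) * (2 * math.log(2) + EULER_GAMMA)
integral_value = closed / 4
```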
for some C. To find C, let t → 0^+. On the right side, log(1 + t) tends to 0. On the left side, the integrand tends to 0: |(x^t − 1)/log x| = |(e^{t log x} − 1)/log x| ≤ t because |e^a − 1| ≤ |a| when a ≤ 0. Therefore the integral on the left tends to 0 as t → 0^+. So C = 0, which implies
(8.1) ∫_0^1 (x^t − 1)/log x dx = log(t + 1)
for all t > 0, and it's obviously also true for t = 0. Another way to compute this integral is to write x^t = e^{t log x} as a power series and integrate term by term, which is valid for −1 < t < 1.
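A direct numerical check of (8.1) (our own addition; the sample t = 2.5 is arbitrary, and the endpoint values use the continuous extensions of the integrand at x = 0 and x = 1):

```python
import math

def simpson(g, a, b, n=100000):
    # Composite Simpson's rule on [a, b] with n subintervals (n even).
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def integrand(x, t):
    if x == 0.0:
        return 0.0  # (x^t - 1)/log x -> 0 as x -> 0+
    if x == 1.0:
        return t    # (x^t - 1)/log x -> t as x -> 1-
    return (x**t - 1) / math.log(x)

t = 2.5
approx = simpson(lambda x: integrand(x, t), 0.0, 1.0)
exact = math.log(t + 1)
```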
Under the change of variables x = e^{−y}, (8.1) becomes

(8.2) ∫_0^∞ (e^{-y} − e^{-(t+1)y}) dy/y = log(t + 1).
This is an upper bound on F′(t). Putting the upper and lower bounds on F′(t) together,

(9.2) 1/(1 − t) < F′(t) ≤ log 2 + 1/(1 − t)
for all t > 1.
We are concerned with the behavior of F(t) as t → 1^+. Let's integrate (9.2) from a to 2, where 1 < a < 2:

∫_a^2 dt/(1 − t) < ∫_a^2 F′(t) dt ≤ ∫_a^2 (log 2 + 1/(1 − t)) dt.
Using the Fundamental Theorem of Calculus,
−log(t − 1) |_a^2 < F(t) |_a^2 ≤ ((log 2)t − log(t − 1)) |_a^2,
so

log(a − 1) < F(2) − F(a) ≤ (log 2)(2 − a) + log(a − 1).
Manipulating to get inequalities on F(a), we have

(log 2)(a − 2) − log(a − 1) + F(2) ≤ F(a) < −log(a − 1) + F(2).

Since a − 2 > −1 for 1 < a < 2, (log 2)(a − 2) is greater than −log 2. This gives the bounds

−log(a − 1) + F(2) − log 2 ≤ F(a) < −log(a − 1) + F(2).
Writing a as t, we get

−log(t − 1) + F(2) − log 2 ≤ F(t) < −log(t − 1) + F(2),

so F(t) is a bounded distance from −log(t − 1) when 1 < t < 2. In particular, F(t) → ∞ as t → 1^+.
series for h(t) at 0 divided by t). Since functions with power series representations around a point
are infinitely differentiable at the point, r(t) is infinitely differentiable at t = 0.
However, this is an incomplete answer to our question about the infinite differentiability of r(t) at t = 0 because we know by the key example of e^{−1/t^2} (at t = 0) that a function can be infinitely differentiable at a point without having a power series representation at the point. How are we going to show r(t) = h(t)/t is infinitely differentiable at t = 0 if we don't have a power series to help us out? Might there actually be a counterexample?
The solution is to write h(t) in a very clever way using differentiation under the integral sign.
Start with
h(t) = ∫_0^t h′(u) du.
(This is correct since h(0) = 0.) For t ≠ 0, introduce the change of variables u = tx, so du = t dx. At the boundary, if u = 0 then x = 0. If u = t then x = 1 (we can divide the equation t = tx by t because t ≠ 0). Therefore
h(t) = ∫_0^1 h′(tx) t dx = t ∫_0^1 h′(tx) dx.
Dividing by t when t ≠ 0, r(t) = h(t)/t = ∫_0^1 h′(tx) dx. The left and right sides don't have any t in the denominator. Are they equal at t = 0 too? The left side at t = 0 is r(0) = h′(0). The right side is ∫_0^1 h′(0) dx = h′(0) too, so

(10.2) r(t) = ∫_0^1 h′(tx) dx

for all t, including t = 0. This is a formula for h(t)/t where there is no longer a t being divided!
Now we’re set to use differentiation under the integral sign. The way we have set things up
here, we want to differentiate with respect to t; the integration variable on the right is x. We can
use differentiation under the integral sign on (10.2) when the integrand is differentiable. Since the
integrand is infinitely differentiable, r(t) is infinitely differentiable!
Explicitly,

r′(t) = ∫_0^1 x h″(tx) dx

and

r″(t) = ∫_0^1 x^2 h‴(tx) dx.
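To make (10.2) concrete, here is a small numerical experiment (entirely our own; the choice h(t) = e^t − 1, which satisfies h(0) = 0, is an arbitrary example). The integral formula reproduces r(0) = h′(0), and r′(0) = ∫_0^1 x h″(0) dx matches a difference quotient of r across t = 0:

```python
import math

def simpson(g, a, b, n=2000):
    # Composite Simpson's rule on [a, b] with n subintervals (n even).
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# Example: h(t) = e^t - 1, so h(0) = 0, h'(u) = h''(u) = e^u,
# and r(t) = h(t)/t = (e^t - 1)/t with r(0) = h'(0) = 1.
def r(t):
    # Formula (10.2): r(t) = integral_0^1 h'(tx) dx, valid even at t = 0.
    return simpson(lambda x: math.exp(t * x), 0.0, 1.0)

# r'(0) by differentiation under the integral sign:
# integral_0^1 x h''(0) dx = integral_0^1 x dx = 1/2.
r_prime_0 = simpson(lambda x: x * 1.0, 0.0, 1.0)

# Symmetric difference quotient of r at t = 0 for comparison.
eps = 1e-5
fd = (r(eps) - r(-eps)) / (2 * eps)
```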
Let

F(t) = ∫_0^1 f(x, t) dx.

For instance, F(0) = ∫_0^1 f(x, 0) dx = ∫_0^1 0 dx = 0. When t ≠ 0,

F(t) = ∫_0^1 x t^3/(x^2 + t^2)^2 dx
     = ∫_{t^2}^{1+t^2} t^3/(2u^2) du   (where u = x^2 + t^2)
     = −t^3/(2u) |_{u=t^2}^{u=1+t^2}
     = −t^3/(2(1 + t^2)) + t^3/(2t^2)
     = t/(2(1 + t^2)).
This formula also works at t = 0, so F(t) = t/(2(1 + t^2)) for all t. Therefore F(t) is differentiable and

F′(t) = (1 − t^2)/(2(1 + t^2)^2)
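The closed form F(t) = t/(2(1 + t^2)) is easy to confirm numerically (our own check; t = 0.3 is an arbitrary sample point):

```python
def simpson(g, a, b, n=20000):
    # Composite Simpson's rule on [a, b] with n subintervals (n even).
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

t = 0.3
approx = simpson(lambda x: x * t**3 / (x**2 + t**2) ** 2, 0.0, 1.0)
exact = t / (2 * (1 + t**2))
```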
Section | f(x, t) | x-interval | t-interval | t we want | A(x) | B(x)
6 | cos(tx) e^{-x^2/2} | [0, ∞) | R | all t | e^{-x^2/2} | |x| e^{-x^2/2}
7 | x^{t−1} e^{-x} | (0, ∞) | 0 < t ≤ c | 1/2, 1, 3/2, 2 | x^{c−1} e^{-x} | x^{c−1} |log x| e^{-x}
8 | (x^t − 1)/log x | (0, 1] | 0 < t < c | 1 | (1 − x^c)/|log x| | 1
9 | 1/(x^t log x) | [2, ∞) | t ≥ c > 1 | t > 1 | 1/(x^c log x) | 1/x^c
10 | x^k h^{(k+1)}(tx) | [0, 1] | |t| < c | 0 | max_{|y|≤c} |h^{(k+1)}(y)| | max_{|y|≤c} |h^{(k+2)}(y)|

Table 1. Summary
• there are α < β and c1 < c2 such that f(x, t) and ∂f/∂t (x, t) are continuous on [α, β] × (c1, c2),
• we have a(t) ∈ [α, β] and b(t) ∈ [α, β] for all t ∈ (c1, c2),
• there are upper bounds |f(x, t)| ≤ A(x) and |∂f/∂t (x, t)| ≤ B(x) for (x, t) ∈ [α, β] × (c1, c2) such that ∫_α^β A(x) dx and ∫_α^β B(x) dx exist.
Proof. This is a consequence of Theorem 11.3 and the chain rule for multivariable functions. Set a function of three variables

I(t, a, b) = ∫_a^b f(x, t) dx

for (t, a, b) ∈ (c1, c2) × [α, β] × [α, β]. (Here a and b are not functions of t.) Then

(11.3) ∂I/∂t (t, a, b) = ∫_a^b ∂f/∂t (x, t) dx,   ∂I/∂a (t, a, b) = −f(a, t),   ∂I/∂b (t, a, b) = f(b, t),
where the first formula follows from Theorem 11.3 (its hypotheses are satisfied for each a and
b in [α, β]) and the second and third formulas are the Fundamental Theorem of Calculus. For
differentiable functions a(t) and b(t) with values in [α, β] for c1 < t < c2 , by the chain rule
d/dt ∫_{a(t)}^{b(t)} f(x, t) dx = d/dt I(t, a(t), b(t))
                               = ∂I/∂t (t, a(t), b(t)) + ∂I/∂a (t, a(t), b(t)) a′(t) + ∂I/∂b (t, a(t), b(t)) b′(t)
                               = ∫_{a(t)}^{b(t)} ∂f/∂t (x, t) dx − f(a(t), t) a′(t) + f(b(t), t) b′(t)   by (11.3).
A version of differentiation under the integral sign for t a complex variable is in [5, pp. 392–393].
Example 11.5. For a parametric integral ∫_a^t f(x, t) dx, where a is fixed, Corollary 11.4 tells us that

(11.4) d/dt ∫_a^t f(x, t) dx = ∫_a^t ∂/∂t f(x, t) dx + f(t, t)
provided the hypotheses of the corollary are satisfied: (i) there are α < β and c1 < c2 such that f
and ∂f /∂t are continuous for (x, t) ∈ [α, β] × (c1 , c2 ), (ii) α ≤ a ≤ β and (c1 , c2 ) ⊂ [α, β], and (iii)
there are bounds |f(x, t)| ≤ A(x) and |∂f/∂t (x, t)| ≤ B(x) for (x, t) ∈ [α, β] × (c1, c2) such that the integrals ∫_α^β A(x) dx and ∫_α^β B(x) dx both exist.
Let’s apply this to the integral
F(t) = ∫_0^t log(1 + tx)/(1 + x^2) dx
where t > 0. Here f(x, t) = log(1 + tx)/(1 + x^2) and ∂f/∂t (x, t) = x/((1 + tx)(1 + x^2)). Fix c > 0 and use α = c1 = 0 and β = c2 = c. Then we can justify (11.4) for (x, t) ∈ [0, c] × (0, c) using A(x) = log(1 + c^2)/(1 + x^2) and B(x) = c/(1 + x^2), so
F′(t) = ∫_0^t x/((1 + tx)(1 + x^2)) dx + log(1 + t^2)/(1 + t^2)
      = ∫_0^t 1/(1 + t^2) · ( −t/(1 + tx) + (t + x)/(1 + x^2) ) dx + log(1 + t^2)/(1 + t^2).
Splitting up the integral into three terms,

F′(t) = −1/(1 + t^2) · log(1 + tx) |_0^t + t/(1 + t^2) · arctan(x) |_0^t + log(1 + x^2)/(2(1 + t^2)) |_0^t + log(1 + t^2)/(1 + t^2)
      = −log(1 + t^2)/(1 + t^2) + t arctan(t)/(1 + t^2) + log(1 + t^2)/(2(1 + t^2)) + log(1 + t^2)/(1 + t^2)
      = t arctan(t)/(1 + t^2) + log(1 + t^2)/(2(1 + t^2)).
Clearly F(0) = 0, so by the Fundamental Theorem of Calculus

F(t) = ∫_0^t F′(y) dy = ∫_0^t ( y arctan(y)/(1 + y^2) + log(1 + y^2)/(2(1 + y^2)) ) dy.
Using integration by parts on the first integrand with u = arctan(y) and dv = y/(1 + y^2) dy,

F(t) = uv |_0^t − ∫_0^t v du + ∫_0^t log(1 + y^2)/(2(1 + y^2)) dy
     = arctan(y) log(1 + y^2)/2 |_0^t − ∫_0^t log(1 + y^2)/(2(1 + y^2)) dy + ∫_0^t log(1 + y^2)/(2(1 + y^2)) dy
     = (1/2) arctan(t) log(1 + t^2).
Thus for 0 < t < c we have
∫_0^t log(1 + tx)/(1 + x^2) dx = (1/2) arctan(t) log(1 + t^2).
Since c was arbitrary, this equation holds for all t > 0 (and trivially at t = 0 too). Setting t = 1,
∫_0^1 log(1 + x)/(1 + x^2) dx = (1/2) arctan(1) log 2 = π log 2 / 8.
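The final value π log 2 / 8 ≈ 0.2722 can be confirmed numerically (our own addition):

```python
import math

def simpson(g, a, b, n=10000):
    # Composite Simpson's rule on [a, b] with n subintervals (n even).
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

approx = simpson(lambda x: math.log(1 + x) / (1 + x**2), 0.0, 1.0)
exact = math.pi * math.log(2) / 8
```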
¹A reader who knows complex analysis can derive (13.1) for t > 0 by the residue theorem, viewing cos(tx) as the real part of e^{itx}.
We want to compute F″(t) using differentiation under the integral sign. For 0 < c < t < c′, the t-partial derivative of the integrand for F′(t) is bounded above in absolute value by a function of y that is independent of t and integrable over R (exercise), so for all t > 0 we have

F″(t) = ∫_R ∂^2/∂t^2 ( t cos y/(t^2 + y^2) ) dy = ∫_R ∂^2/∂t^2 ( t/(t^2 + y^2) ) cos y dy.
It turns out that (∂^2/∂t^2)(t/(t^2 + y^2)) = −(∂^2/∂y^2)(t/(t^2 + y^2)), so

F″(t) = −∫_R ∂^2/∂y^2 ( t/(t^2 + y^2) ) cos y dy.
Using integration by parts on this formula for F″(t) twice (starting with u = −cos y and dv = (∂^2/∂y^2)(t/(t^2 + y^2)) dy), we obtain

F″(t) = −∫_R ∂/∂y ( t/(t^2 + y^2) ) sin y dy = ∫_R t/(t^2 + y^2) cos y dy = F(t).
The equation F″(t) = F(t) is a second order linear ODE whose general solution is ae^t + be^{−t}, so

(13.4) ∫_R cos(tx)/(1 + x^2) dx = ae^t + be^{−t}
for all t > 0 and some real constants a and b. To determine a and b we look at the behavior of the
integral in (13.4) as t → 0+ and as t → ∞.
As t → 0^+, the integrand in (13.4) tends pointwise to 1/(1 + x^2), so we expect the integral tends to ∫_R dx/(1 + x^2) = π as t → 0^+. To justify this, we will bound the absolute value of the difference

| ∫_R cos(tx)/(1 + x^2) dx − ∫_R dx/(1 + x^2) | ≤ ∫_R |cos(tx) − 1|/(1 + x^2) dx
by an expression that is arbitrarily small as t → 0+ . For any N > 0, break up the integral over R
into the regions |x| ≤ N and |x| ≥ N . We have
∫_R |cos(tx) − 1|/(1 + x^2) dx ≤ ∫_{|x|≤N} |cos(tx) − 1|/(1 + x^2) dx + ∫_{|x|≥N} 2/(1 + x^2) dx
                              ≤ ∫_{|x|≤N} t|x|/(1 + x^2) dx + ∫_{|x|≥N} 2/(1 + x^2) dx
                              = t ∫_{|x|≤N} |x|/(1 + x^2) dx + 4(π/2 − arctan N).
Taking N sufficiently large, we can make π/2 − arctan N as small as we wish, and after doing that
we can make the first term as small as we wish by taking t sufficiently small. Returning to (13.4),
letting t → 0+ we obtain π = a + b, so
(13.5) ∫_R cos(tx)/(1 + x^2) dx = ae^t + (π − a)e^{−t}
for all t > 0.
Now let t → ∞ in (13.5). The integral tends to 0 by the Riemann–Lebesgue lemma from Fourier analysis, although we can explain this concretely in our special case: using integration by parts with u = 1/(1 + x^2) and dv = cos(tx) dx, we get

∫_R cos(tx)/(1 + x^2) dx = (1/t) ∫_R 2x sin(tx)/(1 + x^2)^2 dx.
The absolute value of the term on the right is bounded above by a constant divided by t, which
tends to 0 as t → ∞. Therefore aet + (π − a)e−t → 0 as t → ∞. This forces a = 0, which completes
the proof that F (t) = πe−t for t > 0.
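A numerical spot-check of F(t) = πe^{−t} at t = 1 (our own addition; the truncation radius R = 200 is an arbitrary choice, and the loose tolerance reflects the O(1/R^2) tail that truncation discards, per the integration-by-parts bound above):

```python
import math

def simpson(g, a, b, n=200000):
    # Composite Simpson's rule on [a, b] with n subintervals (n even).
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

t = 1.0
R = 200.0  # truncation radius; the discarded tail is O(1/R^2)
# The integrand is even in x, so integrate over [0, R] and double.
approx = 2 * simpson(lambda x: math.cos(t * x) / (1 + x**2), 0.0, R)
exact = math.pi * math.exp(-t)
```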
14. Exercises
1. From the formula ∫_0^∞ e^{-tx} (sin x)/x dx = π/2 − arctan t for t > 0, in Section 3, use a change of variables to obtain a formula for ∫_0^∞ e^{-ax} sin(bx)/x dx when a and b are positive. Then use differentiation under the integral sign with respect to b to find a formula for ∫_0^∞ e^{-ax} cos(bx) dx when a and b are positive. (Differentiation under the integral sign with respect to a will produce a formula for ∫_0^∞ e^{-ax} sin(bx) dx, but that would be circular in our approach since we used that integral in our derivation of the formula for ∫_0^∞ e^{-tx} (sin x)/x dx in Section 3.)
2. From the formula ∫_0^∞ e^{-tx} (sin x)/x dx = π/2 − arctan t for t > 0, the change of variables x = ay with a > 0 implies

∫_0^∞ e^{-tay} sin(ay)/y dy = π/2 − arctan t,

so the integral on the left is independent of a and thus has a-derivative 0. Differentiation under the integral sign, with respect to a, implies

∫_0^∞ e^{-tay} (cos(ay) − t sin(ay)) dy = 0.

Verify that this application of differentiation under the integral sign is valid when a > 0 and t > 0. What happens if t = 0?
3. Show ∫_0^∞ sin(tx)/(x(x^2 + 1)) dx = (π/2)(1 − e^{−t}) for t > 0 by justifying differentiation under the integral sign and using (13.1).
4. Prove that ∫_0^∞ e^{-tx} (cos x − 1)/x dx = log( t/√(1 + t^2) ) for t > 0. What happens to the integral as t → 0^+?
5. Prove that ∫_0^∞ log(1 + t^2 x^2)/(1 + x^2) dx = π log(1 + t) for t > 0 (it is obvious for t = 0). Then deduce

∫_0^∞ log(1 + a^2 x^2)/(b^2 + x^2) dx = π log(1 + ab)/b

for a > 0 and b > 0.
6. Prove that ∫_0^∞ (e^{-x} − e^{-tx}) dx/x = log t for t > 0 by justifying differentiation under the integral sign. This is (8.2) for t > −1. Deduce that ∫_0^∞ (e^{-ax} − e^{-bx}) dx/x = log(b/a) for positive a and b.
7. In calculus textbooks, formulas for the indefinite integrals

∫ x^n sin x dx and ∫ x^n cos x dx

are derived recursively using integration by parts. Find formulas for these integrals when n = 1, 2, 3, 4 using differentiation under the integral sign starting with the formulas

∫ cos(tx) dx = sin(tx)/t,   ∫ sin(tx) dx = −cos(tx)/t

for t > 0.
8. If you are familiar with integration of complex-valued functions, show for y ∈ R that

∫_{−∞}^∞ e^{−(x+iy)^2/2} dx = √(2π).

In other words, show the integral on the left side is independent of y. (Hint: Use differentiation under the integral sign to compute the y-derivative of the left side.)
On the interval [kπ, (k + 1)π], where k is an integer, we can write sin x = (−1)^k |sin x|, so convergence of ∫_0^∞ (sin x)/x dx = lim_{b→∞} ∫_0^b (sin x)/x dx is equivalent to convergence of the series

Σ_{k≥0} ∫_{kπ}^{(k+1)π} (sin x)/x dx = Σ_{k≥0} (−1)^k ∫_{kπ}^{(k+1)π} |sin x|/x dx.

This is an alternating series in which the terms a_k = ∫_{kπ}^{(k+1)π} |sin x|/x dx are monotonically decreasing:

a_{k+1} = ∫_{(k+1)π}^{(k+2)π} |sin x|/x dx = ∫_{kπ}^{(k+1)π} |sin(x + π)|/(x + π) dx = ∫_{kπ}^{(k+1)π} |sin x|/(x + π) dx < a_k.

By a simple estimate a_k ≤ 2/(kπ) for k ≥ 1, so a_k → 0. Thus ∫_0^∞ (sin x)/x dx = Σ_{k≥0} (−1)^k a_k converges.
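The alternating-series argument can be illustrated numerically (our own addition; 2000 terms and the per-interval Simpson rule are arbitrary choices): the a_k decrease, satisfy the simple bound a_k ≤ 2/(kπ) for k ≥ 1, and the alternating partial sums approach π/2.

```python
import math

def simpson(g, a, b, n=200):
    # Composite Simpson's rule on [a, b] with n subintervals (n even).
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def integrand(x):
    return abs(math.sin(x)) / x if x != 0 else 1.0  # limit 1 at x = 0

def a(k):
    # a_k = integral of |sin x|/x over [k pi, (k+1) pi].
    return simpson(integrand, k * math.pi, (k + 1) * math.pi)

terms = [a(k) for k in range(2000)]
decreasing = all(terms[k + 1] < terms[k] for k in range(1999))
bounded = all(terms[k] <= 2 / (k * math.pi) for k in range(1, 2000))
partial_sum = sum((-1) ** k * terms[k] for k in range(2000))
```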
To show the right side of (A.3) tends to 0 as t → 0^+, we write it as an alternating series. Breaking up the interval of integration [0, ∞) into a union of intervals [kπ, (k + 1)π] for k ≥ 0,

(A.4) ∫_0^∞ (1 − e^{-tx}) (sin x)/x dx = Σ_{k≥0} (−1)^k I_k(t), where I_k(t) = ∫_{kπ}^{(k+1)π} (1 − e^{-tx}) |sin x|/x dx.

Since 1 − e^{-tx} > 0 for t > 0 and x > 0, the series Σ_{k≥0} (−1)^k I_k(t) is alternating. The upper bound
References
[1] W. Appel, Mathematics for Physics and Physicists, Princeton Univ. Press, Princeton, 2007.
[2] R. P. Feynman, Surely You’re Joking, Mr. Feynman!, Bantam, New York, 1985.
[3] S. K. Goel and A. J. Zajta, “Parametric Integration Techniques,” Math. Mag. 62 (1989), 318–322.
[4] S. Lang, Undergraduate Analysis, 2nd ed., Springer-Verlag, New York, 1997.
[5] S. Lang, Complex Analysis, 3rd ed., Springer-Verlag, New York, 1993.
[6] M. Rozman, "Evaluate Gaussian integral using differentiation under the integral sign," Course notes for Physics 2400 (UConn), Spring 2017. Online at http://www.phys.uconn.edu/phys2400/downloads/gaussian-integral.pdf.
[7] A. R. Schep, “A Simple Complex Analysis and an Advanced Calculus Proof of the Fundamental Theorem of
Algebra,” Amer. Math. Monthly 116 (2009), 67–68.
[8] E. Talvila, “Some Divergent Trigonometric Integrals,” Amer. Math. Monthly 108 (2001), 432–436.
[9] http://math.stackexchange.com/questions/390850/integrating-int-infty-0-e-x2-dx-using-feynmans-parametrization-trick.