
A very fast and accurate numerical method to price

American options under the Heston’s model

Luca Vincenzo Ballestra a,∗


a Dipartimento di Economia, Seconda Università di Napoli, Corso Gran Priorato di Malta, 81043 Capua, Italy

Abstract

Keywords: American option, Heston’s model, radial basis function, operator splitting, option pricing

∗ Corresponding author. Email: lucavincenzo.ballestra@unina2.it

Preprint submitted to Elsevier April 26, 2013


1. Introduction

The majority of options traded on the financial markets are of American type, i.e. they give the holder the right
to buy or sell the underlying instrument at any time prior to maturity. American options have been
extensively priced using the popular Black-Scholes model, according to which the price of the underlying asset
is described by a geometric Brownian motion with constant drift and volatility. However, many empirical
studies have revealed that the volatility of real asset prices is far from being constant, and therefore some
authors have proposed models alternative to the Black-Scholes model in which the volatility is specified as a
stochastic process. Among these stochastic volatility models, there is the so-called Heston model, which has
received considerable attention from financial researchers and practitioners as it offers a realistic and consistent
description of asset prices.
From the mathematical standpoint, pricing American options under the Heston’s model amounts to solving
a free-boundary partial differential problem in two spatial variables, namely the price and the volatility of
the underlying asset. Such a problem does not have an exact closed-form solution and requires numerical
approximation. In particular, the main approaches to price American options under the Heston model have
been proposed by Clarke and Parrott [9], by Forsyth, Vetzal, and Zvan [17], by Ikonen and Toivanen [34], and by Oosterlee
[?], and are based on finite difference or finite element schemes. Among these numerical methods, the most
computationally efficient is that developed by Ito and Toivanen [35], as is also shown by the simulations and
comparisons presented therein.
In the present manuscript, we follow a direction different from the existing finite difference and finite
element literature, and propose a numerical approach based on radial basis function (RBF) approximation.
In particular, we develop an RBF method to price American options under the Heston model that is extremely
efficient from the computational standpoint, and performs significantly better than the finite difference scheme
presented in [35].
The RBF approximation was originally developed in [36], was further improved in [21], and has
been successively analyzed and applied in several works (see for instance [4], [7], [10], [12], [16], [18], [19],
[27], [28], [30], [37], [38], [40], [43], [45], [46]). Its main advantage is that high levels of accuracy are achieved
using a relatively small number of basis functions. In particular, the RBF approach yields spectral resolution,
and, as shown by [1], [14], [38], in two spatial dimensions it outperforms standard spectral methods based on
Chebyshev polynomials or Fourier series expansions. RBF techniques have also been used in mathematical
finance (see [2], [3], [8], [11], [22], [26], [29], [39], [42], [44]).
The RBF method yields a global approximation (the solution at a given point depends on the solution
at all the points of the computational domain), and thus has the disadvantage of requiring the inversion of
large system matrices. To remedy this inconvenience, in the present paper we employ the approach proposed
in [3], which is based on a suitable operator splitting technique and on the use of radial basis functions of
exponential type. Such a procedure allows us to reduce the original partial differential problem to partial
differential problems in only one space dimension, so that large linear systems are no longer obtained.
In addition, in order to properly take into account the possibility of early exercise, the RBF approach is
combined with an ad-hoc extrapolation method. Precisely, first of all, the American option is approximated by
a set of Bermudan options, i.e. options that can be exercised only at a discrete set of dates. Then an accurate
estimation of the American option price is obtained from the prices of such Bermudan options using a suitable
repeated Richardson extrapolation procedure. This is a very efficient approach for taking into account the
possibility of early exercise. In fact, in contrast to other methods used to price American options, such as the
linear complementarity method (see for example [34], [35]), or the penalty method (see for example [17]), the
Richardson extrapolation technique does not require one to perform a fixed-point iteration procedure.
Numerical experiments are presented showing that the discretization scheme resulting from the combination
of the aforementioned techniques (the operator splitting procedure, the exponential radial basis function, the
Richardson extrapolation) achieves high computational performance. In particular, it performs considerably
better than the method proposed in [35], which, as already mentioned, is currently the most
efficient numerical approach for pricing American options under the Heston model (at least to the best of
our knowledge).
The paper is structured as follows: in Section 2 the partial differential problem which gives the price of an
American option under the Heston model is presented; in Section 3 the novel RBF method is developed; in
Section 4 the numerical results obtained are presented and discussed; finally, in Section 5 some conclusions
are drawn.

2. The mathematical problem

According to the Heston model, the price and the variance of a risky asset, denoted S and v, respectively,
satisfy the following stochastic differential equations:

dS(t) = \mu S(t)\, dt + \sqrt{v(t)}\, S(t)\, dW_1(t), \qquad (1)


dv(t) = \kappa \left( \theta - v(t) \right) dt + \sigma \sqrt{v(t)}\, dW_2(t), \qquad (2)

where µ is a (constant) drift parameter, κ, θ, σ are positive (constant) parameters that describe the dynamics
of the variance (see [25]), and W1 and W2 are correlated standard Wiener processes. In particular, the correlation
between W1 and W2 is assumed to be constant and is hereafter denoted by ρ.
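For readers who wish to experiment with the dynamics (1)-(2), the following sketch simulates them by an Euler-Maruyama scheme. It is not part of the pricing method developed in this paper; the function name and the full-truncation device max(v, 0) (a standard way of coping with the variance becoming slightly negative on a discrete time grid) are illustrative assumptions.

```python
import numpy as np

def simulate_heston_paths(S0, v0, mu, kappa, theta, sigma, rho, T, n_steps, n_paths, seed=0):
    """Euler-Maruyama simulation of the Heston dynamics (1)-(2).

    The max(v, 0) inside the square roots ("full truncation") is a standard
    device, not part of the paper's pricing method.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)
        S = S + mu * S * dt + np.sqrt(v_pos) * S * np.sqrt(dt) * z1
        v = v + kappa * (theta - v_pos) * dt + sigma * np.sqrt(v_pos) * np.sqrt(dt) * z2
    return S, v
```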
Let us consider an American Put option on an underlying asset described by model (1)-(2), with maturity
T and strike price K. The price of such an option is denoted by P(S, v, τ), where τ is the time to maturity:

τ = T − t. (3)

The function P(S, v, τ) satisfies the following linear complementarity problem (see for instance [?]):
\frac{\partial P(S, v, \tau)}{\partial \tau} - \tilde{L} P(S, v, \tau) + r P(S, v, \tau) \ge 0, \qquad (4)

P (S, v, τ ) ≥ g(S), (5)

\left( \frac{\partial P(S, v, \tau)}{\partial \tau} - \tilde{L} P(S, v, \tau) + r P(S, v, \tau) \right) \left( P(S, v, \tau) - g(S) \right) = 0, \qquad (6)
where
\tilde{L} P(S, v, \tau) = \frac{1}{2} v S^2 \frac{\partial^2 P(S, v, \tau)}{\partial S^2} + \rho \sigma S v \frac{\partial^2 P(S, v, \tau)}{\partial S \partial v} + \frac{1}{2} \sigma^2 v \frac{\partial^2 P(S, v, \tau)}{\partial v^2} + r S \frac{\partial P(S, v, \tau)}{\partial S} + \kappa (\theta - v) \frac{\partial P(S, v, \tau)}{\partial v}, \qquad (7)
and g is the payoff function:

g(S) = max(K − S, 0). (8)

Problem (4)-(6) must be solved for (S, v, τ ) ∈ Ω × [0, T ], where

Ω = [0, +∞) × [0, +∞), (9)

with initial condition:

P (S, v, 0) = g(S), (10)

and boundary conditions:

P(0, v, \tau) = K, \qquad \lim_{S \to +\infty} P(S, v, \tau) = 0. \qquad (11)

Note that the boundary conditions have been written only for S = 0 and S → +∞. In fact, at the boundaries
v = 0 and v → +∞ it is not exactly clear which conditions (if any) are to be prescribed. Therefore, in the
present manuscript, following a common approach (see for example [?]), for v = 0 and v → +∞ we do
not apply any boundary condition (in a sense, we let the linear complementarity problem itself impose the
boundary conditions for v = 0 and v → +∞).
In the general case, problem (4)-(6), (10), (11) does not have an exact closed-form solution, and thus requires
numerical approximation.

3. The RBF numerical method

For the sake of clarity, this section is divided into eight subsections. In Subsection 3.1 the Bermudan approximation
of problem (4)-(6), (10), (11) is performed; in Subsection 3.2 the Richardson extrapolation procedure is
described; in Subsection 3.3 the operator splitting scheme is presented; in Subsection 3.4 a change of variables is
introduced which allows for local mesh refinement; in Subsections 3.5, 3.6 and 3.7 the RBF method is described;
finally, in Subsection 3.8 the choice of the shape parameters is discussed.

3.1. Bermudan approximation

We do not directly solve the linear complementarity problem (4)-(6), (10), (11), which would require us
to perform a complex fixed-point iteration procedure. Instead, we compute the American option price by
Richardson extrapolation of the prices of several Bermudan options. This latter approach dates back to [20],
and has been employed, for example, by [5], [6], [31], [41]. Note that [41] reports that American options are
priced “efficiently by applying Richardson extrapolation to the prices of Bermudan options”.
Let us consider a set of Nτ + 1 equally spaced time levels: τ_k = k∆τ, k = 0, 1, . . . , Nτ, where ∆τ = T/Nτ,
and let P^k(S, v; Nτ) denote an approximation of the function P(S, v, τ_k), k = 0, 1, . . . , Nτ (note that the
dependence of P^k on Nτ is explicitly indicated). The functions P^k(S, v; Nτ), k = 0, 1, . . . , Nτ, are computed
using the following recursion procedure. Set

P^0(S, v; N_\tau) = g(S). \qquad (12)

Then, for k = 1, 2, . . . , Nτ, first of all, solve the initial-boundary value partial differential problem:

\frac{\partial \tilde{U}(S, v, \tau)}{\partial \tau} - \tilde{L} \tilde{U}(S, v, \tau) + r \tilde{U}(S, v, \tau) = 0, \qquad \tau \in (\tau_{k-1}, \tau_k], \qquad (13)

\tilde{U}(S, v, \tau_{k-1}) = P^{k-1}(S, v; N_\tau), \qquad (14)

\tilde{U}(0, v, \tau) = K, \qquad \lim_{S \to +\infty} \tilde{U}(S, v, \tau) = 0, \qquad (15)

and then impose the American constraint:


P^k(S, v; N_\tau) = \max\left( \tilde{U}(S, v, \tau_k),\; g(S) \right). \qquad (16)

The above approach amounts to approximating the American option by a Bermudan option that can be
exercised only at the times (to maturity) τ_1, τ_2, . . ., τ_{Nτ}. In fact: 1) problem (13)-(15) is the pricing problem
that holds for a European option (see [25]); 2) the possibility of early exercise, which is accounted for by
the constraint (16), is allowed only at the times τ_1, τ_2, . . ., τ_{Nτ}.
The Bermudan option price P^k(S, v; Nτ) tends to the American option price P(S, v, τ_k), k = 0, 1, . . . , Nτ,
as the number of exercise dates Nτ increases or, equivalently, as the time discretization parameter ∆τ tends
to zero (see [31]). In this paper, in order to enhance the convergence of the Bermudan option price to the
American option price, we employ a Richardson extrapolation procedure, which is described in the sequel.
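The recursion (12)-(16) can be summarized by the following schematic Python sketch. The function solve_pde_step is a placeholder for one step of the European pricing problem (13)-(15), which in this paper is carried out by the operator-splitting RBF solver of Subsections 3.3-3.7; the names and the grid representation are illustrative assumptions.

```python
import numpy as np

def bermudan_price_grid(payoff_grid, solve_pde_step, n_exercise_dates):
    """Recursion (12)-(16): start from P^0 = g, then repeatedly propagate one
    time step with the European pricing operator and apply the early-exercise
    constraint.

    `payoff_grid` holds g(S_i) at the spatial nodes and `solve_pde_step(P)` is
    a placeholder for one implicit step of problem (13)-(15).
    """
    P = payoff_grid.copy()                      # P^0(S, v; N_tau) = g(S)
    for _ in range(n_exercise_dates):           # k = 1, ..., N_tau
        U = solve_pde_step(P)                   # European step over (tau_{k-1}, tau_k]
        P = np.maximum(U, payoff_grid)          # American constraint (16)
    return P                                    # approximation of P(S, v, T)
```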

3.2. Richardson extrapolation

In [31] it is found that the difference between the Bermudan option price and the American option price
tends to zero like c_1 ∆τ + O(∆τ^{3/2}) as ∆τ tends to zero, c_1 being a suitable constant. Rigorously speaking, such
a result has been established under the assumption of constant volatility, and thus might not be applicable
to the case of the Heston model. Nevertheless, to the best of our knowledge, the paper [31] is the only work
available in the literature in which the problem of the convergence of the Bermudan option price to the
American option price is theoretically addressed. Moreover, a rigorous analysis of the error of the Bermudan
approximation is not straightforward and goes far beyond the scope of the present paper. Therefore, we shall
assume that an asymptotic behavior similar to that obtained in [31] holds true also for the Heston model,
and develop the Richardson extrapolation procedure based on the following error representation:
P(S, v, \tau_k) = P^k(S, v; N_\tau) + c_1 \Delta\tau + c_2 \Delta\tau^{3/2} + c_3 \Delta\tau^{2} + c_4 \Delta\tau^{5/2} + \ldots, \qquad \Delta\tau \to 0, \quad k = 1, 2, \ldots, N_\tau. \qquad (17)

We remove the first three leading-order terms in (17) by Richardson extrapolation of the prices of four
Bermudan options. For the sake of simplicity, we are going to show how to do that only for τ = T (or,
equivalently, for t = 0), as for the other times the approach followed is analogous.
First of all we compute the prices of the four Bermudan options with Nτ + 1, 2Nτ + 1, 4Nτ + 1, and
8Nτ + 1 exercise dates. That is, the iterative procedure (12)-(16) is performed four times, with time steps ∆τ,
∆τ/2, ∆τ/4, and ∆τ/8. By doing that we obtain, among all the numerical solution values, also P^{Nτ}(S, v; Nτ),
P^{2Nτ}(S, v; 2Nτ), P^{4Nτ}(S, v; 4Nτ) and P^{8Nτ}(S, v; 8Nτ), which constitute four approximations of the American
option price at τ = T.
Starting from the knowledge of P^{Nτ}(S, v; Nτ), P^{2Nτ}(S, v; 2Nτ), P^{4Nτ}(S, v; 4Nτ) and P^{8Nτ}(S, v; 8Nτ), an
extrapolated solution P^{extr}_{4,4}(S, v, T) is computed as follows (see [23]):

P^{extr}_{l,1}(S, v, T) = P^{2^{l-1} N_\tau}(S, v; 2^{l-1} N_\tau), \qquad l = 1, 2, 3, 4, \qquad (18)

P^{extr}_{l,m+1}(S, v, T) = P^{extr}_{l,m}(S, v, T) + \frac{P^{extr}_{l,m}(S, v, T) - P^{extr}_{l-1,m}(S, v, T)}{2^{\frac{m+1}{2}} - 1}, \qquad l = m+1, m+2, \ldots, 4, \quad m = 1, 2, 3. \qquad (19)

The above procedure allows us to remove all the terms in (17) up to and including the O(∆τ^2) term (see [23]).
Therefore, provided that the power series representation (17) holds true, the extrapolated solution P^{extr}_{4,4}(S, v, T)
constitutes an O(∆τ^{5/2}) approximation of the American option price P(S, v, T).
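A minimal implementation of the extrapolation tableau (18)-(19), written according to the reconstruction above (in particular, the denominators 2^{(m+1)/2} − 1 reflect the half-integer orders assumed in (17)), is sketched below; the function name and the input format are illustrative.

```python
def repeated_richardson(P_bermudan):
    """Repeated Richardson extrapolation (18)-(19).

    `P_bermudan` is the list of the four Bermudan prices obtained with
    N_tau, 2*N_tau, 4*N_tau and 8*N_tau exercise dates (scalars at a fixed
    (S, v), or numpy arrays of nodal values).
    """
    T = [list(P_bermudan)]                       # T[0][l-1] = P^{extr}_{l,1}
    for m in range(1, 4):                        # m = 1, 2, 3
        prev = T[-1]
        denom = 2.0 ** ((m + 1) / 2.0) - 1.0     # 2^((m+1)/2) - 1
        T.append([prev[i + 1] + (prev[i + 1] - prev[i]) / denom
                  for i in range(len(prev) - 1)])
    return T[-1][0]                              # P^{extr}_{4,4}
```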

Remark 1 We point out that the above extrapolation procedure performs extraordinarily well, as shown by
the numerical experiments performed (very high levels of accuracy are obtained using small values of Nτ, see
Section 4). This fact confirms the validity of the asymptotic expansion (17).

It remains to solve the partial differential problems (13)-(15). This is done using an operator splitting
scheme, which is described in the next subsection.

3.3. Operator splitting

The numerical approximation of problem (13)-(15) is identical for every value of k, k = 1, 2, . . . , Nτ .


Therefore, to keep the notation simple, from now on the fact that the integer k runs from 1 to Nτ will be
understood.
The operator \tilde{L} is split as follows:

\tilde{L} = \tilde{L}_1 + \tilde{L}_{12} + \tilde{L}_2, \qquad (20)

where

\tilde{L}_1 P(S, v, \tau) = \tilde{\alpha}_1(S, v) \frac{\partial^2 P(S, v, \tau)}{\partial S^2} + \tilde{\beta}_1(S, v) \frac{\partial P(S, v, \tau)}{\partial S}, \qquad \tilde{L}_2 P(S, v, \tau) = \tilde{\alpha}_2(S, v) \frac{\partial^2 P(S, v, \tau)}{\partial v^2} + \tilde{\beta}_2(S, v) \frac{\partial P(S, v, \tau)}{\partial v}, \qquad (21)

\tilde{L}_{12} P(S, v, \tau) = \tilde{\zeta}(S, v) \frac{\partial^2 P(S, v, \tau)}{\partial S \partial v}, \qquad (22)

\tilde{\alpha}_1(S, v) = \frac{1}{2} v S^2, \qquad \tilde{\alpha}_2(S, v) = \frac{1}{2} \sigma^2 v, \qquad \tilde{\zeta}(S, v) = \rho \sigma S v, \qquad (23)

\tilde{\beta}_1(S, v) = r S, \qquad \tilde{\beta}_2(S, v) = \kappa(\theta - v). \qquad (24)

Problem (13)-(15) is approximated first along the τ variable, and then along the S and v variables. In
particular, for the time discretization we use the following operator splitting scheme:

\frac{\tilde{V}^k(S, v) - P^{k-1}(S, v; N_\tau)}{\Delta\tau} = \tilde{L}_1 \tilde{V}^k(S, v) - r \tilde{V}^k(S, v), \qquad (25)

\tilde{V}^k(0, v) = K, \qquad \lim_{S \to +\infty} \tilde{V}^k(S, v) = 0, \qquad (26)

\frac{\tilde{Z}^k(S, v) - \tilde{V}^k(S, v)}{\Delta\tau} = \tilde{L}_2 \tilde{Z}^k(S, v), \qquad (27)

\tilde{Z}^k(0, v) = K, \qquad \lim_{S \to +\infty} \tilde{Z}^k(S, v) = 0, \qquad (28)

\frac{\tilde{U}^k(S, v) - \tilde{Z}^k(S, v)}{\Delta\tau} = \tilde{L}_{12} \tilde{Z}^k(S, v), \qquad (29)

\tilde{U}^k(0, v) = K, \qquad \lim_{S \to +\infty} \tilde{U}^k(S, v) = 0. \qquad (30)

That is, at the k-th time step, starting from the numerical solution P^{k-1}(S, v; Nτ), first of all we compute
Ṽ^k(S, v) as the solution of the partial differential problem (25)-(26), then we compute Z̃^k(S, v) as the solution
of the partial differential problem (27)-(28), and finally we compute Ũ^k(S, v) using (29)-(30). We observe
that the steps (25)-(26) and (27)-(28) employ the implicit Euler time discretization scheme, whereas the step
(29)-(30) employs the explicit Euler time discretization scheme.
Note that, once Ũ^k(S, v) is obtained, the American constraint is simply imposed as follows:

P^k(S, v; N_\tau) = \max\left( \tilde{U}^k(S, v),\; g(S) \right). \qquad (31)

Remark 2 The time discretization approach (25)-(30), being based on the Euler time stepping, is only first-order
accurate. However, this is not an issue, as the Richardson extrapolation procedure described above allows
us to remove all the error terms up to and including the O(∆τ^2) term. That is, the formal accuracy of the overall
time discretization scheme is O(∆τ^{5/2}).
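Schematically, one time step of the splitting (25)-(31) can be organized as in the sketch below. The three sub-solvers are placeholders for the implicit steps in the S and v directions and for the explicit mixed-derivative correction, which are discretized by RBF collocation in the following subsections; all names are illustrative.

```python
import numpy as np

def split_time_step(P_prev, payoff, step_S, step_v, step_mixed):
    """One time step of the splitting (25)-(31), written schematically.

    `step_S`, `step_v` and `step_mixed` are placeholders for: an implicit Euler
    step in the S direction (25)-(26), an implicit Euler step in the v
    direction (27)-(28), and the explicit Euler correction for the mixed
    derivative (29)-(30).
    """
    V = step_S(P_prev)            # implicit step with L_1 and the -rP term
    Z = step_v(V)                 # implicit step with L_2
    U = step_mixed(Z)             # explicit step with L_12
    return np.maximum(U, payoff)  # American constraint (31)
```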

3.4. Change of variables

From now on we will focus our attention on problems (25)-(26), (27)-(28) and (29)-(30), which are solved
by RBF approximation. First of all, as is customary (see for example [35]), let us replace the infinite domain
Ω with a bounded one, Υ:

Υ = [Smin , Smax ] × [0, vmax ], (32)

where the values Smin , Smax , vmax will be chosen based on standard financial arguments, such that the error
caused by replacing Ω with Υ is negligible.
According to (12), the function P^0(S, v; Nτ), needed in (25) when k = 1, is the payoff function (8), which is
not differentiable at S = K. Therefore, to reduce as much as possible the loss of accuracy due to the non-smoothness
of the initial solution, in the S direction the centers of the radial basis functions are concentrated
in a region close to the strike price K. Analogously, in the v direction the centers of the radial basis functions
are concentrated in a region close to the boundary v = 0, which, as experienced by [35], is beneficial for
the accuracy of the numerical solution (this is also confirmed by our numerical experiments). In the present
paper, in order to concentrate the RBF centers near S = K and v = 0, we employ a grid transformation
procedure which is based on the following change of variables:

x(S) = \frac{\sinh^{-1}\left( \xi_x (S - K) \right) - c_{1,x}}{c_{2,x} - c_{1,x}}, \qquad y(v) = \frac{\sinh^{-1}(\xi_y v)}{c_{2,y}}, \qquad (33)

where

c_{1,x} = -\sinh^{-1}(\xi_x K), \qquad c_{2,x} = \sinh^{-1}\left( \xi_x (S_{max} - K) \right), \qquad c_{2,y} = \sinh^{-1}(\xi_y v_{max}), \qquad (34)

and ξx and ξy are stretching parameters to be chosen appropriately. The above change of variables was
originally proposed in [9] and has further been employed, for example, in [2], [?], [?].
Relations (33) transform the physical domain Υ to the square I = [0, 1] × [0, 1], so that we can easily
choose a set of equally spaced RBF centers in I, and these centers are mapped into points of Υ which tend to
concentrate near S = K and v = 0 (the amount of stretching in the S and v directions is directly proportional
to ξx and ξy , respectively).
Relations (33) can be inverted as follows:

S(x) = \frac{1}{\xi_x} \sinh\left( c_{2,x} x + c_{1,x} (1 - x) \right) + K, \qquad v(y) = \frac{1}{\xi_y} \sinh(c_{2,y} y). \qquad (35)
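As a small sketch of the grid construction, equally spaced nodes are generated in I = [0, 1] × [0, 1] and mapped to the (S, v) domain through (34)-(35), so that they cluster near S = K and v = 0. Function and variable names are illustrative.

```python
import numpy as np

def stretched_grid(K, S_max, v_max, xi_x, xi_y, Nx, Ny):
    """Build equally spaced nodes in I = [0,1]x[0,1] and map them to (S, v)
    through the inverse transformation (35), with the constants of (34)."""
    c1x = -np.arcsinh(xi_x * K)
    c2x = np.arcsinh(xi_x * (S_max - K))
    c2y = np.arcsinh(xi_y * v_max)
    x = np.linspace(0.0, 1.0, Nx + 1)
    y = np.linspace(0.0, 1.0, Ny + 1)
    S = np.sinh(c2x * x + c1x * (1.0 - x)) / xi_x + K   # clusters near S = K
    v = np.sinh(c2y * y) / xi_y                          # clusters near v = 0
    return x, y, S, v
```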

Let us define:

U^k(x, y) = \tilde{U}^k(S(x), v(y)), \qquad V^k(x, y) = \tilde{V}^k(S(x), v(y)), \qquad Z^k(x, y) = \tilde{Z}^k(S(x), v(y)). \qquad (36)

By replacing Ω with Υ, and by using relations (35)-(36), equations (25)-(30) are respectively transformed as
follows:

\frac{V^k(x, y) - P^{k-1}(S(x), v(y); N_\tau)}{\Delta\tau} = L_x V^k(x, y) - r V^k(x, y), \qquad (37)

V^k(0, y) = K, \qquad V^k(1, y) = 0, \qquad (38)

\frac{Z^k(x, y) - V^k(x, y)}{\Delta\tau} = L_y Z^k(x, y), \qquad (39)

Z^k(0, y) = K, \qquad Z^k(1, y) = 0, \qquad (40)

\frac{U^k(x, y) - Z^k(x, y)}{\Delta\tau} = L_{xy} Z^k(x, y), \qquad (41)

U^k(0, y) = K, \qquad U^k(1, y) = 0, \qquad (42)

where (x, y) varies in I, and

L_x = \alpha_1(x, y) \frac{\partial^2}{\partial x^2} + \beta_1(x, y) \frac{\partial}{\partial x}, \qquad L_y = \alpha_2(x, y) \frac{\partial^2}{\partial y^2} + \beta_2(x, y) \frac{\partial}{\partial y}, \qquad L_{xy} = \zeta(x, y) \frac{\partial^2}{\partial x \partial y}, \qquad (43)

\alpha_1(x, y) = \frac{\tilde{\alpha}_1(S(x), v(y))}{\left( \dot{S}(x) \right)^2}, \qquad \alpha_2(x, y) = \frac{\tilde{\alpha}_2(S(x), v(y))}{\left( \dot{v}(y) \right)^2}, \qquad \zeta(x, y) = \frac{\tilde{\zeta}(S(x), v(y))}{\dot{S}(x)\, \dot{v}(y)},

\beta_1(x, y) = \frac{\tilde{\beta}_1(S(x), v(y))}{\dot{S}(x)} - \tilde{\alpha}_1(S(x), v(y)) \frac{\ddot{S}(x)}{\left( \dot{S}(x) \right)^3}, \qquad \beta_2(x, y) = \frac{\tilde{\beta}_2(S(x), v(y))}{\dot{v}(y)} - \tilde{\alpha}_2(S(x), v(y)) \frac{\ddot{v}(y)}{\left( \dot{v}(y) \right)^3}. \qquad (44)
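As a sketch, the coefficients (44) can be evaluated at the mesh nodes as follows. The derivatives of the mapping (35) are computed analytically, and the squared first derivatives in the denominators of α1 and α2 follow the chain-rule reconstruction adopted above; names and array shapes are illustrative assumptions.

```python
import numpy as np

def transformed_coefficients(x, y, K, xi_x, xi_y, c1x, c2x, c2y,
                             r, kappa, theta, sigma, rho):
    """Coefficients (44) of L_x, L_y, L_xy on the tensor grid, obtained from
    (23)-(24) and the chain rule applied to the mapping (35).  Broadcasting is
    assumed: x has shape (Nx+1, 1) and y has shape (1, Ny+1)."""
    arg_x = c2x * x + c1x * (1.0 - x)
    S = np.sinh(arg_x) / xi_x + K
    Sx = (c2x - c1x) * np.cosh(arg_x) / xi_x            # dS/dx
    Sxx = (c2x - c1x) ** 2 * np.sinh(arg_x) / xi_x      # d^2S/dx^2
    v = np.sinh(c2y * y) / xi_y
    vy = c2y * np.cosh(c2y * y) / xi_y                  # dv/dy
    vyy = c2y ** 2 * np.sinh(c2y * y) / xi_y            # d^2v/dy^2

    a1t = 0.5 * v * S ** 2                              # alpha~_1 of (23)
    a2t = 0.5 * sigma ** 2 * v                          # alpha~_2 of (23)
    zt = rho * sigma * S * v                            # zeta~ of (23)
    b1t = r * S                                         # beta~_1 of (24)
    b2t = kappa * (theta - v)                           # beta~_2 of (24)

    alpha1 = a1t / Sx ** 2
    alpha2 = a2t / vy ** 2
    zeta = zt / (Sx * vy)
    beta1 = b1t / Sx - a1t * Sxx / Sx ** 3
    beta2 = b2t / vy - a2t * vyy / vy ** 3
    return alpha1, alpha2, beta1, beta2, zeta
```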

3.5. Radial basis functions

In I let us consider a Cartesian set of equally spaced nodes. Precisely, let us consider Nx + 1 nodes along the
x direction, such that x_i = i∆x, i = 0, 1, . . . , Nx, where ∆x = 1/Nx, and Ny + 1 nodes along the y direction,
such that y_j = j∆y, j = 0, 1, . . . , Ny, where ∆y = 1/Ny. Moreover, let Ψ_{i,j}(x, y) denote the Gaussian radial
basis function centered at x = x_i and y = y_j:

\Psi_{i,j}(x, y) = e^{-\varepsilon_x (x - x_i)^2 - \varepsilon_y (y - y_j)^2}, \qquad i = 0, 1, \ldots, N_x, \quad j = 0, 1, \ldots, N_y, \qquad (45)

where εx and εy are constant parameters (called shape parameters) to be chosen later.
It is important to observe that relation (45) allows us to use separation of variables:

\Psi_{i,j}(x, y) = X_i(x)\, Y_j(y), \qquad i = 0, 1, \ldots, N_x, \quad j = 0, 1, \ldots, N_y, \qquad (46)

where

X_i(x) = e^{-\varepsilon_x (x - x_i)^2}, \qquad Y_j(y) = e^{-\varepsilon_y (y - y_j)^2}, \qquad i = 0, 1, \ldots, N_x, \quad j = 0, 1, \ldots, N_y. \qquad (47)
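Since the basis factorizes as in (46)-(47), the one-dimensional collocation matrices used below (M0, M1, M2 in the x direction, and analogously N0, N1 in the y direction) can be assembled directly from the Gaussian and its first two derivatives. A sketch, with illustrative names, is:

```python
import numpy as np

def gaussian_basis_matrices(nodes, eps):
    """Collocation matrices of (60)-(62): the (l, i) entry is X_i, X_i' or
    X_i'' evaluated at the node x_l (analogous matrices are obtained in the
    y direction)."""
    x = np.asarray(nodes, dtype=float)
    d = x[:, None] - x[None, :]                        # d[l, i] = x_l - x_i
    G = np.exp(-eps * d ** 2)                          # M0: X_i(x_l)
    G1 = -2.0 * eps * d * G                            # M1: X_i'(x_l)
    G2 = (4.0 * eps ** 2 * d ** 2 - 2.0 * eps) * G     # M2: X_i''(x_l)
    return G, G1, G2
```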

The functions U^k(x, y), V^k(x, y), Z^k(x, y) are sought as combinations of radial basis functions:

U^k(x, y) = \sum_{i=0}^{N_x} \sum_{j=0}^{N_y} \varphi^k_{i,j}\, X_i(x)\, Y_j(y), \qquad (48)

V^k(x, y) = \sum_{i=0}^{N_x} \sum_{j=0}^{N_y} \phi^k_{i,j}\, X_i(x)\, Y_j(y), \qquad (49)

Z^k(x, y) = \sum_{i=0}^{N_x} \sum_{j=0}^{N_y} \eta^k_{i,j}\, X_i(x)\, Y_j(y), \qquad (50)

where the coefficients \varphi^k_{i,j}, \phi^k_{i,j} and \eta^k_{i,j}, i = 0, 1, . . . , Nx, j = 0, 1, . . . , Ny, are unknown. However, we are not
going to compute these coefficients. Instead, by exploiting (48)-(50), we will directly obtain the numerical
solutions at all the nodes of the mesh, that is we will directly obtain U^k(x_i, y_j), V^k(x_i, y_j), Z^k(x_i, y_j), i =
0, 1, . . . , Nx, j = 0, 1, . . . , Ny.

3.6. Problem (37), (38)

Equation (37) and the boundary conditions (38) are collocated at the nodes of the mesh:

\frac{V^k(x_i, y_j) - U^{k-1}(x_i, y_j)}{\Delta\tau} = \alpha_1(x_i, y_j) \frac{\partial^2 V^k(x_i, y_j)}{\partial x^2} + \beta_1(x_i, y_j) \frac{\partial V^k(x_i, y_j)}{\partial x}, \qquad i = 1, 2, \ldots, N_x - 1, \quad j = 0, 1, \ldots, N_y. \qquad (51)

Instead, at x = 0 and x = 1 we impose the boundary conditions (38):

V^k(x_0, y_j) = K, \qquad V^k(x_{N_x}, y_j) = 0, \qquad j = 0, 1, \ldots, N_y. \qquad (52)

Let us define:

\omega^k_i(y) = \sum_{j=0}^{N_y} \phi^k_{i,j}\, Y_j(y), \qquad i = 0, 1, \ldots, N_x, \qquad (53)

so that, using (49), we have:

V^k(x, y_j) = \sum_{i=0}^{N_x} \omega^k_i(y_j)\, X_i(x), \qquad j = 0, 1, \ldots, N_y. \qquad (54)

Let us consider the vectors:

V^k_j = \left[ V^k(x_0, y_j), V^k(x_1, y_j), \ldots, V^k(x_{N_x}, y_j) \right]^T, \qquad j = 0, 1, \ldots, N_y, \qquad (55)

V^k_{x,j} = \left[ \frac{\partial V^k(x_0, y_j)}{\partial x}, \frac{\partial V^k(x_1, y_j)}{\partial x}, \ldots, \frac{\partial V^k(x_{N_x}, y_j)}{\partial x} \right]^T, \qquad j = 0, 1, \ldots, N_y, \qquad (56)

V^k_{xx,j} = \left[ \frac{\partial^2 V^k(x_0, y_j)}{\partial x^2}, \frac{\partial^2 V^k(x_1, y_j)}{\partial x^2}, \ldots, \frac{\partial^2 V^k(x_{N_x}, y_j)}{\partial x^2} \right]^T, \qquad j = 0, 1, \ldots, N_y, \qquad (57)

\omega^k_j = \left[ \omega^k_0(y_j), \omega^k_1(y_j), \ldots, \omega^k_{N_x}(y_j) \right]^T, \qquad j = 0, 1, \ldots, N_y. \qquad (58)

By using (54) and by taking partial derivatives with respect to x, we obtain:

V^k_j = M_0\, \omega^k_j, \qquad V^k_{x,j} = M_1\, \omega^k_j, \qquad V^k_{xx,j} = M_2\, \omega^k_j, \qquad j = 0, 1, \ldots, N_y, \qquad (59)

where M_0, M_1 and M_2 are the (N_x + 1) × (N_x + 1) matrices whose (l, i) entries are, respectively, the values at the node x_l of the basis function X_i, of its first derivative and of its second derivative:

M_0 = \left[ X_i(x_l) \right]_{l,i = 0, 1, \ldots, N_x}, \qquad (60)

M_1 = \left[ \dot{X}_i(x_l) \right]_{l,i = 0, 1, \ldots, N_x}, \qquad (61)

M_2 = \left[ \ddot{X}_i(x_l) \right]_{l,i = 0, 1, \ldots, N_x}. \qquad (62)

Moreover, let us define the (N_x + 1) × (N_x + 1) diagonal matrices:

A_{1,j} = \mathrm{diag}\left( 0, \alpha_1(x_1, y_j), \alpha_1(x_2, y_j), \ldots, \alpha_1(x_{N_x - 1}, y_j), 0 \right), \qquad j = 0, 1, \ldots, N_y, \qquad (63)

B_{1,j} = \mathrm{diag}\left( 0, \beta_1(x_1, y_j), \beta_1(x_2, y_j), \ldots, \beta_1(x_{N_x - 1}, y_j), 0 \right), \qquad j = 0, 1, \ldots, N_y, \qquad (64)

and the vectors

U^k_j = \left[ K, U^k(x_1, y_j), U^k(x_2, y_j), \ldots, U^k(x_{N_x - 1}, y_j), 0 \right]^T, \qquad j = 0, 1, \ldots, N_y. \qquad (65)

By substitution of (54)-(65) in (51), (52) we obtain:

\frac{M_0\, \omega^k_j - U^{k-1}_j}{\Delta\tau} = A_{1,j} M_2\, \omega^k_j + B_{1,j} M_1\, \omega^k_j, \qquad j = 0, 1, \ldots, N_y. \qquad (66)
Let us define:

H_j = M_0 - \Delta\tau \left( A_{1,j} M_2 + B_{1,j} M_1 \right), \qquad j = 0, 1, \ldots, N_y. \qquad (67)

Equations (66) are rewritten in the more compact form:

H_j\, \omega^k_j = U^{k-1}_j, \qquad j = 0, 1, \ldots, N_y. \qquad (68)

Relations (68) constitute a set of Ny + 1 systems of Nx + 1 linear equations that allow us to determine
the Ny + 1 unknown vectors ω_j^k, j = 0, 1, . . . , Ny. Once ω_j^k, j = 0, 1, . . . , Ny, are obtained, the vectors
V_j^k, j = 0, 1, . . . , Ny, containing the nodal values V^k(x_i, y_j), i = 0, 1, . . . , Nx, j = 0, 1, . . . , Ny, can easily be
computed by applying the first of relations (59) (which just requires us to perform Ny + 1 matrix-vector
multiplications).
In summary, following the above approach, to obtain the solution values V^k(x_i, y_j), i = 0, 1, . . . , Nx,
j = 0, 1, . . . , Ny, we do not have to solve a large system of (Nx + 1) × (Ny + 1) linear equations, but rather Ny + 1
systems of Nx + 1 linear equations, which is significantly more efficient from the computational standpoint.
The linear systems (68) are solved by Gaussian elimination (the LU factorization of the matrices H_j,
j = 0, 1, . . . , Ny, is performed only once at the beginning of the numerical simulation). Note finally that
the linear systems (68) are independent of each other, and thus the calculation of the vectors ω_j^k,
j = 0, 1, . . . , Ny, could possibly be performed using parallel computing.
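The following sketch puts the pieces of this subsection together: for each j it assembles H_j as in (67) and solves the small system (68) by an LU factorization, in the spirit described above. Variable names, array shapes and the placement of the LU factorization inside the loop are illustrative simplifications (in an actual implementation the factorizations would be computed once and reused at every time step).

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_direction_x(U_prev, M0, M1, M2, alpha1, beta1, dt, left_bc, right_bc):
    """Implicit step (51)-(52) written as the Ny+1 small linear systems (68).

    `U_prev` has shape (Nx+1, Ny+1); `alpha1` and `beta1` hold the coefficients
    at the nodes; the first and last collocation rows enforce the boundary
    values `left_bc` and `right_bc` as in (63)-(65)."""
    Nx1, Ny1 = U_prev.shape
    V = np.empty_like(U_prev)
    for j in range(Ny1):
        A = np.zeros((Nx1, Nx1))
        B = np.zeros((Nx1, Nx1))
        idx = np.arange(1, Nx1 - 1)                    # interior collocation rows
        A[idx, idx] = alpha1[idx, j]
        B[idx, idx] = beta1[idx, j]
        H = M0 - dt * (A @ M2 + B @ M1)                # matrix H_j of (67)
        rhs = U_prev[:, j].copy()
        rhs[0], rhs[-1] = left_bc, right_bc            # vector U_j^{k-1} of (65)
        omega = lu_solve(lu_factor(H), rhs)            # system (68)
        V[:, j] = M0 @ omega                           # nodal values, first of (59)
    return V
```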

3.7. Problem (39), (40) and problem (41), (42)

The discretization of problem (39), (40) is substantially identical to the discretization of problem (37),
(38), and therefore is omitted (the only difference is that the variable x is replaced by the variable y, so we
obtain Nx + 1 systems of Ny + 1 linear equations).
After solving problem (37), (38) and problem (39), (40), we end up with the values Z^k(x_i, y_j), i =
0, 1, . . . , Nx, j = 0, 1, . . . , Ny. Then, let us focus our attention on problem (41), (42). Equation (41) and
the boundary conditions (42) are collocated at the nodes of the mesh:

U^k(x_i, y_j) = Z^k(x_i, y_j) + \Delta\tau\, \zeta(x_i, y_j) \frac{\partial^2 Z^k(x_i, y_j)}{\partial x \partial y}, \qquad i = 1, 2, \ldots, N_x - 1, \quad j = 0, 1, \ldots, N_y, \qquad (69)

U^k(x_0, y_j) = K, \qquad U^k(x_{N_x}, y_j) = 0, \qquad j = 0, 1, \ldots, N_y. \qquad (70)

According to (69) and (70), to compute the unknowns U^k(x_i, y_j), i = 0, 1, . . . , Nx, j = 0, 1, . . . , Ny, we just
have to discretize the mixed partial derivative ∂²Z^k(x_i, y_j)/∂x∂y, i = 1, 2, . . . , Nx − 1, j = 0, 1, . . . , Ny, which is
done below. Let us consider the vectors:

Z^k_j = \left[ Z^k(x_0, y_j), Z^k(x_1, y_j), \ldots, Z^k(x_{N_x}, y_j) \right]^T, \qquad j = 0, 1, \ldots, N_y, \qquad (71)

Z^k_{x,j} = \left[ \frac{\partial Z^k(x_0, y_j)}{\partial x}, \frac{\partial Z^k(x_1, y_j)}{\partial x}, \ldots, \frac{\partial Z^k(x_{N_x}, y_j)}{\partial x} \right]^T, \qquad j = 0, 1, \ldots, N_y, \qquad (72)

\hat{Z}^k_{x,i} = \left[ \frac{\partial Z^k(x_i, y_0)}{\partial x}, \frac{\partial Z^k(x_i, y_1)}{\partial x}, \ldots, \frac{\partial Z^k(x_i, y_{N_y})}{\partial x} \right]^T, \qquad i = 0, 1, \ldots, N_x, \qquad (73)

Z^k_{xy,i} = \left[ \frac{\partial^2 Z^k(x_i, y_0)}{\partial x \partial y}, \frac{\partial^2 Z^k(x_i, y_1)}{\partial x \partial y}, \ldots, \frac{\partial^2 Z^k(x_i, y_{N_y})}{\partial x \partial y} \right]^T, \qquad i = 1, 2, \ldots, N_x - 1. \qquad (74)
The vectors Z^k_{x,j}, j = 0, 1, . . . , Ny, and Ẑ^k_{x,i}, i = 0, 1, . . . , Nx, both contain the same first-order derivatives
with respect to x. Thus, once Z^k_{x,j}, j = 0, 1, . . . , Ny, are obtained, then Ẑ^k_{x,i}, i = 0, 1, . . . , Nx, are known
too. Using an approach analogous to that employed in Subsection 3.6, we can derive relations similar to the
first two of relations (59):

Z^k_j = M_0\, \omega^k_j, \qquad Z^k_{x,j} = M_1\, \omega^k_j, \qquad j = 0, 1, \ldots, N_y. \qquad (75)

From (75), by matrix inversion, we obtain:

Z^k_{x,j} = M_1 M_0^{-1} Z^k_j, \qquad j = 0, 1, \ldots, N_y, \qquad (76)

which yields the first-order partial derivative of Z^k(x, y) with respect to x at all the nodes of the mesh. Note
that the inverse matrix M_0^{-1} always exists, see [15].
An analogous procedure yields:

Z^k_{xy,i} = N_1 N_0^{-1} \hat{Z}^k_{x,i}, \qquad i = 1, 2, \ldots, N_x - 1, \qquad (77)

where N_0 and N_1 are the (N_y + 1) × (N_y + 1) matrices whose (l, j) entries are, respectively, the values at the node y_l of the basis function Y_j and of its first derivative:

N_0 = \left[ Y_j(y_l) \right]_{l,j = 0, 1, \ldots, N_y}, \qquad (78)

N_1 = \left[ \dot{Y}_j(y_l) \right]_{l,j = 0, 1, \ldots, N_y}. \qquad (79)
Relations (77) allow us to compute the mixed second-order derivatives of Z^k(x, y) once the first-order
derivatives with respect to x are known. Thus, in summary, the numerical solution values U^k(x_i, y_j), i =
0, 1, . . . , Nx, j = 0, 1, . . . , Ny, are obtained from Z^k(x_i, y_j), i = 0, 1, . . . , Nx, j = 0, 1, . . . , Ny, as follows:
first of all, by employing (76), the first-order derivatives of Z^k with respect to x are computed at all the
nodes of the mesh (these derivatives are stored in the vectors Ẑ^k_{x,i}, i = 0, 1, . . . , Nx). Then, by employing
(77), the mixed second-order derivatives of Z^k are calculated at all the points of the mesh with the exception
of those located at the boundaries x = 0 and x = 1. Finally, the solution values U^k(x_i, y_j), i = 0, 1, . . . , Nx,
j = 0, 1, . . . , Ny, are calculated according to (69), (70).
Following the approach described above, the most expensive operations required to perform the explicit
step (41) are the computation of M_0^{-1} Z^k_j, j = 0, 1, . . . , Ny, and of N_0^{-1} Ẑ^k_{x,i}, i = 1, 2, . . . , Nx − 1. This is
efficiently done as follows (we just focus on M_0^{-1} Z^k_j, j = 0, 1, . . . , Ny, as the method used for N_0^{-1} Ẑ^k_{x,i},
i = 1, 2, . . . , Nx − 1, is identical). Let us consider the trivial relations:

M_0 \left( M_0^{-1} Z^k_j \right) = Z^k_j, \qquad j = 0, 1, \ldots, N_y. \qquad (80)

The above identities allow us to see the products M_0^{-1} Z^k_j, j = 0, 1, . . . , Ny, as the solutions of Ny + 1 systems of
linear equations, each of which has the same matrix M_0. Thus, the products M_0^{-1} Z^k_j, j = 0, 1, . . . , Ny, are
obtained by solving the linear systems (80), which is done by Gaussian elimination (once again, the LU factorization
of the matrix M_0 is performed only once at the beginning of the simulation). Note finally that the linear systems
(80), as well as the ones that allow us to obtain N_0^{-1} Ẑ^k_{x,i}, i = 1, 2, . . . , Nx − 1, are independent of each
other. Therefore, the calculation of the vectors M_0^{-1} Z^k_j, j = 0, 1, . . . , Ny, and N_0^{-1} Ẑ^k_{x,i}, i = 1, 2, . . . , Nx − 1, could
possibly be done using parallel computing.
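A compact sketch of the explicit step of this subsection follows: the first derivatives in x are obtained as in (76), the mixed derivatives as in (77), and the nodal values of U^k as in (69)-(70); the ∆τ factor follows the reconstruction of (69) from (41). Names and shapes are illustrative, and for brevity the sketch uses a direct solve instead of storing the LU factorizations of M0 and N0 as described above.

```python
import numpy as np

def explicit_mixed_step(Z, M0, M1, N0, N1, zeta, dt, left_bc, right_bc):
    """Explicit step (69)-(70): the mixed derivative of Z^k is recovered by
    differentiating first in x through (76) and then in y through (77).

    `Z` and `zeta` have shape (Nx+1, Ny+1); M0, M1 (N0, N1) are the Gaussian
    collocation matrices in the x (y) direction."""
    Zx = M1 @ np.linalg.solve(M0, Z)                   # (76), all columns at once
    Zxy = (N1 @ np.linalg.solve(N0, Zx.T)).T           # (77), all rows at once
    U = Z + dt * zeta * Zxy                            # interior update (69)
    U[0, :], U[-1, :] = left_bc, right_bc              # boundary values (70)
    return U
```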

3.8. Shape parameters

Now it remains to select the shape parameters εx and εy in (45). As is well known, these quantities can
strongly affect the performance of RBF methods. In particular, when εx and εy decrease, the Gaussian radial
basis functions tend to increase their spatial resolution power, but the matrices M_0 and N_0 tend to
become ill-conditioned, and thus the numerical solution can become unstable. This fact is clearly highlighted
in [40] by means of arbitrary precision computation.
In the technical literature, various rules of thumb or semi-analytical relations have been proposed to find
optimal values of the shape parameters, i.e. values that yield a good compromise between the spatial resolution
of the RBF approximation and the conditioning of the resulting matrices (see for example [4], [7], [12], [18],
[24], [32], [33], [37], [40], [45], [46]). However, to the best of our knowledge, a general and theoretically rigorous
method to optimally choose the shape parameters is still lacking. Therefore, in the present paper εx and εy
are chosen according to an empirical approach. In particular, we employ a procedure analogous to that used
in [3], which has already given very good results in the case of European options.
First of all, we note that the accuracy of the RBF approximation is mainly determined by the numbers
Nx and Ny of mesh points employed. Then a very natural choice is to link εx to Nx and εy to Ny or,
equivalently, to link εx to ∆x and εy to ∆y (we recall that ∆x = 1/Nx and ∆y = 1/Ny). Moreover, looking
at relation (45), we note that εx and εy are both multiplied by the square of a spatial distance. Therefore,
on the basis of a dimensional argument, it appears suitable to choose the shape parameters εx and εy
inversely proportional to the squares of ∆x and ∆y, respectively. Thus we set:

\varepsilon_x = \frac{h_x}{(\Delta x)^2}, \qquad \varepsilon_y = \frac{h_y}{(\Delta y)^2}, \qquad (81)

where hx and hy denote two (positive) constants, which are determined by direct numerical simulation. In
particular, by performing several numerical experiments we have found that it is optimal to choose hx = hy =
0.665. It is interesting to observe that such values of hx and hy are identical to those obtained in
[3] in the case of European options.
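In code, the rule (81) with hx = hy = 0.665 amounts to the following trivial helper (names are illustrative):

```python
def shape_parameters(Nx, Ny, hx=0.665, hy=0.665):
    """Shape parameters (81): eps is inversely proportional to the squared node
    spacing, with the constants h_x = h_y = 0.665 reported in the text."""
    dx, dy = 1.0 / Nx, 1.0 / Ny
    return hx / dx ** 2, hy / dy ** 2
```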
The above approach to select the RBF shape parameters, despite being very simplified and heuristic, has
allowed us to obtain excellent results (see the next section).
Note that, according to formulae (81), the parameters εx and εy are kept constant over the whole computational
domain. Now, as pointed out by [4] and [37], there exist problems in approximation theory for which the
use of constant shape parameters does not yield particularly accurate solutions (for such problems it would
be better to employ solution-dependent shape parameters, see [4] and [37]). Therefore, we acknowledge
that the method based on relations (81) cannot be successfully applied to all cases. Nevertheless, as already
mentioned, for the American option pricing problem considered in the present paper, as well as for the
European option pricing problems dealt with in [3], formulae (81) turn out to provide a very good and practical
criterion for selecting the parameters εx and εy.

4. Numerical results

The numerical method developed in the previous section is tested on a standard benchmark problem, which
is the same as that considered in [9], [17], [35], [?]. Precisely, we consider an American Put option with strike price
K = 10 and maturity T = 0.25. Moreover, the coefficients of the Heston model are chosen as follows: κ = 5,
θ = 0.16, σ = 0.9, ρ = 0.1, r = 0.1. As far as the discretization parameters introduced in Section 3 are
concerned, we set: Smin = 3, Smax = 20, vmax = 1, ξx = 2, ξy = 2.
As done in the previous section, we continue to use P(S, v, τ) to denote the exact American option price.
Moreover, the approximate value of P(S, v, τ) obtained using the RBF method is denoted by P_RBF(S, v, τ).
Following [35], we test the error of the RBF method at time to maturity τ = T (equivalently t = 0) and
for ten different values of price and variance (S_i, v_i), i = 1, 2, . . . , 10, where

S_i = 7 + i, \quad v_i = 0.0625, \qquad i = 1, 2, \ldots, 5,

S_i = 2 + i, \quad v_i = 0.25, \qquad i = 6, 7, \ldots, 10. \qquad (82)

For such values of S, v and τ, very accurate estimations (with about six correct decimal
digits) of P(S, v, τ) have already been obtained in [35] and, for the reader's convenience, are reported in
Table 1.
Table 1
True option values obtained by Ito and Toivanen (Table 6.1 in [35]).

                 v = 0.0625                                                 v = 0.25
S                8         9         10        11        12                8         9         10        11        12
P(S, v, T)       2.000000  1.107621  0.520030  0.213677  0.082044          2.078364  1.333632  0.795977  0.448273  0.242810
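For convenience, the benchmark data of this section and the reference prices of Table 1 can be collected as follows (a plain restatement of the values above; the variable names are illustrative):

```python
# Benchmark of Section 4: American Put under the Heston model, data from the
# text and reference prices from Table 1 (Ito and Toivanen [35]).
heston_params = dict(kappa=5.0, theta=0.16, sigma=0.9, rho=0.1, r=0.1)
option_data = dict(K=10.0, T=0.25)
S_test = [8.0, 9.0, 10.0, 11.0, 12.0]
P_ref = {
    0.0625: [2.000000, 1.107621, 0.520030, 0.213677, 0.082044],
    0.25:   [2.078364, 1.333632, 0.795977, 0.448273, 0.242810],
}
```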

The “almost exact” values of P(S, v, T) obtained in [35] are used to evaluate the accuracy of the RBF
method, as the true values of P(S, v, T) are not available. In particular, as done in [35], we measure the error
on P_RBF(S, v, T) using the following quadratic norm:
EVERYTHING FROM HERE ONWARDS IS TO BE REWRITTEN. DO NOT CONSIDER WHAT FOLLOWS.

4.1. Problem 1. European digital Call option under the Black-Scholes model

We consider the same test-case presented by Zvan et al. (2003), in which the model parameters and the
option’s data are chosen as in Table 1. Moreover we set ξx = 15, ξy = 15, S1,max = 80, S2,max = 80.

Table 1 goes here

For the initial prices (S1 , S2 ) we consider five different points close to (K, K): (S1 = 40, S2 = 36 + 2i),
i = 0, 1, . . . , 4. This choice is motivated by the fact that the option’s maturity is small (T = 0.25 year),
and the only substantial differences between the option price and the option’s payoff are experienced in the
neighborhood of the line S1 = K or in the neighborhood of the line S2 = K (see Figure 1). Therefore values
of (S1 , S2 ) far away from these lines would not be particularly significant from the financial standpoint (note
that Zvan et al. (2003) consider only the point (S1 = 40, S2 = 40)).
Let D(S1 , S2 , 0) denote the exact value of the digital Call option (at time t = 0), and let DRBF (S1 , S2 , 0)
denote the approximation of D(S1 , S2 , 0) obtained using the RBF method proposed in this paper. We measure
the (relative) error on DRBF (S1 , S2 , 0) using the following quadratic mean:
Err_{RBF} = \sqrt{ \frac{1}{5} \sum_{i=0}^{4} \left[ \frac{D_{RBF}(40, 36 + 2i, 0) - D(40, 36 + 2i, 0)}{D(40, 36 + 2i, 0)} \right]^2 }. \qquad (83)

If the point (40, 36 + 2i) does not coincide with a node of the mesh, then DRBF (40, 36 + 2i, 0) is obtained
by cubic interpolation of DRBF (S1 , S2 , 0) at nodes adjacent to (40, 36 + 2i), i = 0, 1, . . . , 4. Moreover in (83)
D(40, 36 + 2i, 0) is computed using the exact analytical formula (??), i = 0, 1, . . . , 4.
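A possible implementation of the quadratic mean of the relative errors used in (83) (and in (84) below) is the following sketch; the function name is illustrative:

```python
import numpy as np

def relative_rms_error(approx, exact):
    """Quadratic mean of the relative errors, as in (83) and (84)."""
    approx = np.asarray(approx, dtype=float)
    exact = np.asarray(exact, dtype=float)
    return np.sqrt(np.mean(((approx - exact) / exact) ** 2))
```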
The values of Err_RBF and CPUTime_RBF obtained in Problem 1 for different choices of Nx, Ny and M are
reported in Table 2. As we may see, in Problem 1 the RBF approach proposed in this paper is very accurate
and fast, as the price of the digital Call option is computed with a relative error of order 10^{-4} in about 1 s,
or with a relative error of order 10^{-6} in about 6 s.
Finally, in Figure 1 the function D_RBF(S_1, S_2, 0) is shown. We may note that the RBF method gives a
smooth and sharp approximation of the option price despite the presence of strong gradients across the
lines S_1 = K and S_2 = K.

Table 2 goes here

Figure 1 goes here

COMPARISON WITH THE FD SCHEME. The error on D(S_1, S_2, 0) obtained using the FD scheme is
measured using a formula analogous to (83). The values of Err_FD and CPUTime_FD obtained in Problem
1 for different choices of Nx, Ny and M are shown in Table 3. By comparing these results
with those reported in Table 2, we can see that the RBF method is significantly more efficient than the FD
approach. For example, using the RBF method the error 6.38 × 10^{-4} is obtained in a time equal to 1.1 s,
while it takes 4.7 s for the FD scheme to achieve the error 1.30 × 10^{-3}. In addition, using the RBF method
an error of order 10^{-6} is obtained in a time equal to 6.0 s, while the error achieved by the FD scheme in
70.9 s is still of order 10^{-4}.
Finally, we may empirically check that the FD scheme is truly second-order accurate in both space and
time (if the values of Nx, Ny and M are multiplied by two, then Err_FD is reduced by a factor of approximately
four).

Table 3 goes here

4.2. Problem 2. European vanilla Call option under the Heston’s model

We consider the same test-case presented in the original paper by Heston (1993), in which the model
parameters and the option’s data are chosen as in Table 4. Moreover we set ξ = 0.5, Smax = 200, vmax = 0.2.
Note that the value ξ = 0.5 is considerably smaller than the values of ξx and ξy employed in Problem 1.
This is due to the fact that, in Problem 2, contrary to what happens in Problem 1, the option’s payoff is a
continuous function and thus it is convenient to use a smaller amount of grid-stretching.

Table 4 goes here

Following an approach analogous to that used by Chiarella et al. (2009) and Chiarella et al. (2010), the initial
values of the price and volatility are chosen as follows: (S = 80 + 10i, v = 0.01), i = 0, 1, . . . , 4 (note that the
value v = 0.01 is also the one considered by Heston (1993)).
Let C(S, v, 0) denote the exact value of the Call option price (at time t = 0), and let CRBF (S, v, 0) denote
the approximation of C(S, v, 0) obtained using the RBF method. As done in Problem 1, the (relative) error
on CRBF (S, v, 0) is measured using the quadratic mean:
Err_{RBF} = \sqrt{ \frac{1}{5} \sum_{i=0}^{4} \left[ \frac{C_{RBF}(80 + 10i, 0.01, 0) - C(80 + 10i, 0.01, 0)}{C(80 + 10i, 0.01, 0)} \right]^2 }. \qquad (84)

If the point (80+10i, 0.01) does not coincide with a node of the mesh, then CRBF (80+10i, 0.01, 0) is obtained
by cubic interpolation of CRBF (S, v, 0) at nodes adjacent to (80 + 10i, 0.01), i = 0, 1, . . . , 4. Moreover in (84)
C(80 + 10i, 0.01, 0) is computed using the exact analytical formula derived by Heston (1993), i = 0, 1, . . . , 4.
The values of Err_RBF and CPUTime_RBF obtained for different choices of Nx, Ny
and M are reported in Table 5 (note that Ny is always chosen smaller than Nx, because along the y direction
or, equivalently, along the corresponding v direction, the option's payoff is an infinitely smooth function). We
may note that also in Problem 2 the RBF approach is rather efficient. In fact, the option price is computed
with error 1.08 × 10^{-3} in 4.4 s, or with error 1.71 × 10^{-4} in 41.9 s.

Table 5 goes here

COMPARISON WITH THE FD SCHEME. The error on C(S, v, 0) obtained using the FD scheme is
measured using a formula analogous to (84). The values of Err_FD and CPUTime_FD obtained in Problem
2 for different choices of Nx, Ny and M are shown in Table 6. By comparing these results
with those reported in Table 5, we can see that the RBF method is much faster than the FD approach (tens
of times faster). For example, using the RBF method the error 1.08 × 10^{-3} is obtained in a time equal to
4.4 s, while it takes 187.8 s for the FD scheme to achieve the error 1.68 × 10^{-3}.
Again we note that the FD scheme is truly second-order accurate in both space and time.

Table 6 goes here

4.3. Problem 3. Barrier Call option under the Heston’s model

We consider the same test-case reported by Chiarella et al. (2010), in which the model parameters and
the option's data are chosen as in Table 7. Moreover, we set ξ = 0.18086, vmax = 0.5. Note that the grid-stretching
parameter ξ has been chosen by trial and error such that: 1) one of the mesh nodes (along the
S direction) coincides with the strike price K (see Appendix B); 2) the discretization error is roughly kept
to a minimum. In this respect, we observe that the value ξ = 0.18086 is smaller than in Problem 2. In fact,
in Problem 3 it is not convenient to use too large values of ξ, as the mesh would be too coarse near the
option's barrier, where the solution has strong spatial variations.
Table 7 goes here

As done by Chiarella et al. (2010), the initial values of the price and volatility are chosen as follows: (S =
80 + 10i, v = 0.1), i = 0, 1, . . . , 4. Let B(S, v, 0) denote the barrier option price, and let B_RBF(S, v, 0) denote
the approximation of B(S, v, 0) obtained using the RBF method. Contrary to what happens in Problem 1
and Problem 2, B(S, v, 0) cannot be computed using an exact analytical formula. Therefore, an almost exact
estimation of B(S, v, 0) is found by numerical approximation. To this aim, the RBF method is employed with
a very refined spatial mesh and a small time step (precisely, we set Nx = 320, Ny = 160, M = 320).
The values of Err_RBF and CPUTime_RBF obtained in Problem 3 for different choices
of Nx, Ny and M are reported in Table 8. Again the results obtained are very satisfactory. In fact, the option
price is computed with error 1.21 × 10^{-3} in 0.69 s, or with error 8.41 × 10^{-5} in 4.4 s.

Table 8 goes here

COMPARISON WITH ITO AND TOIVANEN.


To make the comparison with [35] complete, let us observe what follows. The numerical method proposed
in [35] is theoretically proved to be stable (actually it is designed such that the matrix that needs to be
inverted at each time iteration is an M −matrix, which yields numerical stability). By contrast, for the RBF
approach developed in the present manuscript no theoretical proof of stability has been provided. Therefore,
one may argue that the finite difference scheme presented in [35] enjoys better theoretical properties than the
RBF approach being proposed. However, it should be observed that in [35], in order to achieve stability, the
coefficients of the finite difference approximation are chosen according to a non-conventional approach, such
that the consistency of the method is not a priori guaranteed. Precisely, it is not proven that the truncation
error (given by relation (4.10) in [35]) tends to zero as the computational mesh is refined (as instead
happens for a standard finite difference scheme). By contrast, the RBF approach, being based on a nodal
interpolation, yields a consistent approximation (see [13], [18], [47]). Thus we may conclude that, for the finite
difference method proposed in [35], the stability is a priori guaranteed, but the consistency is not. On the
contrary, for the approach developed in the present manuscript, the consistency is a priori guaranteed, but
the stability is not. Thus, from the theoretical standpoint, the two approaches can be regarded as equivalent
(stability and consistency are both necessary to prove convergence). Finally we emphasize that from the
practical standpoint both the numerical method presented in [35] and that developed in the present paper
have good convergence properties, though the RBF approach being presented is computationally faster.

Table 9 goes here

5. Conclusions

A highly efficient RBF method for derivative pricing is developed, based on an ad-hoc operator splitting
technique and on Gaussian radial basis functions. This algorithm is tested on three different problems which
concern the pricing of options with two stochastic factors under both the Black-Scholes and the Heston
model. The results obtained reveal that the method proposed is very accurate and fast, and performs
significantly better than the FD approach.
We believe that the present paper provides several relevant contributions. First of all, contrary to the
existing RBF numerical schemes, the one presented here allows us to achieve the high levels of accuracy which
are typical of global approximations without inverting large system matrices. Moreover, to the best of
our knowledge, this is the first time that the RBF discretization approach is combined with a spatial operator
splitting technique. Third, again to the best of our knowledge, an RBF method for option pricing models with
stochastic volatility has not yet been proposed in the financial literature.
Finally, the novel RBF scheme is highly parallelizable and, very remarkably, can be applied to a wide
variety of partial differential problems, not only in the area of finance, but also in other fields of science and
engineering.

References

[1] Z. Avazzadeh, M. Heydari, G. B. Loghmani, A comparison between solving two dimensional integral equations by the
traditional collocation method and radial basis functions, Applied Mathematical Sciences 5 (2011) 1145–1152.
[2] L. V. Ballestra, G. Pacelli, Computing the survival probability density function in jump-diffusion models: a new approach
based on radial basis functions, Engineering Analysis with Boundary Elements 35 (2011) 1075–1084.
[3] L. V. Ballestra, G. Pacelli, Pricing options with two stochastic factors: an highly efficient radial basis function approach,
Working Paper.
[4] R. E. Carlson, T. A. Foley, The parameter r2 in multiquadric interpolation, Computers and Mathematics with Applications
21 (1991) 29–42.
[5] C.-C. Chang, S.-L. Chung, R. C. Stapleton, Richardson extrapolation techniques for pricing American-style options, Journal
of Futures Markets 27 (2007) 791–817.
[6] C.-C. Chang, J.-B. Lin, W.-C. Tsai, Y.-H. Wang, Using Richardson extrapolation techniques to price American options
with alternative stochastic processes, Review of Quantitative Finance and Accounting, in press.
[7] A. H.-D. Cheng, M. A. Golberg, E. J. Kansa, G. Zammito, Exponential convergence and h − c multiquadric collocation
method for partial differential equations, Numerical Methods for Partial Differential Equations 19 (2003) 571–594.
[8] S. Choi, M. D. Marcozzi, A numerical approach to American currency option valuation, Journal of Derivatives 9 (2001)
19–29.
[9] N. Clarke, K. Parrott, Multigrid for American option pricing with stochastic volatility, Applied Mathematical Finance 6
(1999) 177–195.
[10] T. A. Driscoll, A. R. H. Heryudono, Adaptive residual subsampling methods for radial basis function interpolation and
collocation problems, Computers and Mathematics with Applications 53 (2007) 927–939.
[11] G. E. Fasshauer, A. Q. M. Khaliq, D. A. Voss, Using meshfree approximation for multi-asset American option problems,
Journal of the Chinese Institute of Engineers 27 (2004) 563–571.
[12] G. E. Fasshauer, J. Zhang, On choosing “optimal” shape parameters for RBF approximation, Numerical Algorithms 45
(2007) 345–368.
[13] B. Fornberg, N. Flyer, Accuracy of radial basis function interpolation and derivative approximations on 1-D infinite grids,
Advances in Computational Mathematics 23 (2005) 5–20.
[14] B. Fornberg, N. Flyer, J. M. Russell, Comparisons between pseudospectral and radial basis function derivative
approximations, IMA Journal of Numerical Analysis 30 (2010) 149–172.
[15] B. Fornberg, C. Piret, On choosing a radial basis function and a shape parameter when solving a convective PDE on a
sphere, Journal of Computational Physics 227 (2008) 2758–2780.
[16] B. Fornberg, J. Zuev, The Runge phenomenon and spatially variable shape parameters in RBF interpolation, Computers
and Mathematics with Applications 34 (2007) 379–398.
[17] P. A. Forsyth, K. R. Vetzal, R. Zvan, A penalty method for American options with stochastic volatility, Journal of
Computational and Applied Mathematics 91 (1998) 199–218.
[18] C. Franke, R. Schaback, Convergence order estimates of meshless collocation methods using radial basis functions, Advances
in Computational Mathematics 8 (1998) 381–399.
[19] C. Franke, R. Schaback, Solving partial differential equations by collocation using radial basis functions, Applied
Mathematics and Computation 93 (1998) 73–82.
[20] R. Geske, H. E. Johnson, The American Put option valued analytically, Journal of Finance 39 (1984) 1511–1524.
[21] M. A. Golberg, C. S. Chen, S. R. Karur, Improved multiquadric approximation for partial differential equations, Engineering
Analysis with Boundary Elements 18 (1996) 9–17.
[22] Y. Goto, Z. Fei, S. Kan, E. Kita, Options valuation by using radial basis function approximation, Engineering Analysis with
Boundary Elements 31 (2007) 836–843.
[23] E. Hairer, S. P. Nørsett, G. Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems, Springer, Berlin, 1993.
[24] R. L. Hardy, Theory and applications of the multiquadrics - biharmonic method (20 years of discovery 1968-1988), Computers
and Mathematics with Applications 19 (1990) 163–208.
[25] S. L. Heston, A closed form solution for options with stochastic volatility with applications to bond and currency options,
The Review of Financial Studies 6 (1993) 327–343.
[26] Y. C. Hon, A quasi-radial basis functions method for American option pricing, Computers and Mathematics with
Applications 43 (2002) 513–524.
[27] Y. C. Hon, M. W. Lu, W. M. Xue, Y. M. Zhu, Multiquadric method for the numerical solution of a biphasic mixture model,
Applied Mathematics and Computation 88 (1997) 153–175.
[28] Y. C. Hon, X. Z. Mao, An efficient numerical scheme for Burgers’ equation, Applied Mathematics and Computation 95
(1998) 37–50.
[29] Y. C. Hon, X. Z. Mao, A radial basis function method for solving option pricing models, Financial Engineering 8 (1999)
31–49.
[30] Y. C. Hon, X. Zhou, A comparison on using various radial basis functions for options pricing, International Journal of
Applied Science and Computations 7 (2000) 29–47.
[31] S. Howison, A matched asymptotic expansions approach to continuity corrections for discretely sampled options. Part 2:
Bermudan options, Applied Mathematical Finance 14 (2007) 91–104.
[32] C.-S. Huang, C.-F. Lee, A. H.-D. Cheng, On the increasingly flat radial basis function and optimal shape parameter for the
solution of elliptic PDEs, Engineering Analysis with Boundary Elements 34 (2010) 802–809.
[33] C.-S. Huang, H.-D. Yen, A. H.-D. Cheng, Error estimate, optimal shape factor, and high precision computation of
multiquadric collocation method, Engineering Analysis with Boundary Elements 31 (2007) 614–623.
[34] S. Ikonen, J. Toivanen, Efficient numerical methods for pricing American options under stochastic volatility, Numerical
Methods for Partial Differential Equations 24 (2008) 104–126.
[35] K. Ito, J. Toivanen, Lagrange multiplier approach with optimized finite difference stencils for pricing American options
under stochastic volatility, SIAM Journal on Scientific Computing 31 (2009) 2646–2664.
[36] E. J. Kansa, Multiquadrics - a scattered data approximation scheme with application to computational fluid-dynamics-i
and ii, Computers and Mathematics with Applications 19 (1990) 127–161.
[37] E. J. Kansa, R. E. Carlson, Improved accuracy of multiquadric interpolation using variable shape parameters, Computers
and Mathematics with Applications 24 (1992) 99–120.
[38] E. Larsson, A numerical study of some radial basis function based solution methods for elliptic PDEs, Computers and
Mathematics with Applications 46 (2003) 891–902.
[39] E. Larsson, K. Ahlander, A. Hall, Multi-dimensional option pricing using radial basis functions and the generalized Fourier
transform, Journal of Computational and Applied Mathematics 222 (2008) 175–192.
[40] J. Li, A. H.-D. Cheng, C.-S. Chen, A comparison of efficiency and error convergence of multiquadric collocation method
and finite element method, Engineering Analysis with Boundary Elements 27 (2003) 251–257.
[41] R. Lord, F. Fang, F. Bervoets, C. W. Oosterlee, A fast and accurate FFT-based method for pricing early-exercise options
under Lévy processes, SIAM Journal on Scientific Computing 30 (2008) 1678–1705.
[42] M. D. Marcozzi, S. Choi, C. S. Chen, On the use of boundary conditions for variational formulations arising in financial
mathematics, Applied Mathematics and Computation 124 (2001) 197–214.
[43] C. Micchelli, Interpolation of scattered data: distance matrices and conditionally positive definite functions, Constructive
Approximation 2 (1986) 11–22.
[44] U. Pettersson, E. Larsson, G. Marcusson, J. Persson, Improved radial basis function methods for multi-dimensional option
pricing, Journal of Computational and Applied Mathematics 222 (2008) 82–93.
[45] S. Rippa, An algorithm for selecting a good value for the parameter c in radial basis function interpolation, Advances in
Computational Mathematics 11 (1999) 193–210.
[46] S. A. Sarra, D. Sturgill, A random variable shape parameter strategy for radial basis function approximation methods,
Engineering Analysis with Boundary Elements 33 (2009) 1239–1245.
[47] Z.-M. Wu, R. Schaback, Local error estimates for radial basis function interpolation of scattered data, IMA Journal of
Numerical Analysis 13 (1993) 13–27.
Table 5

Nx      Ny      M       ErrRBF          CPUTimeRBF

40 20 40 4.52 × 10−3 0.69 s

80 40 80 1.08 × 10−3 4.4 s

160 80 160 1.71 × 10−4 41.9 s


Problem 2. Results obtained using the RBF method.
Table 6

Nx      Ny      M       ErrFD           CPUTimeFD

40 20 40 1.49 × 10−1 0.34 s

80 40 80 2.60 × 10−2 1.7 s

160 80 160 6.71 × 10−3 15.0 s

320 160 320 1.68 × 10−3 187.8 s


Problem 2. Results obtained using the FD Scheme.
Table 7

κ θ σ ρ r q K T Sb

2 0.1 0.1 −0.5 0.03 0.05 100 0.5 130


Problem 3. Parameters and data (expressed in year units).
Table 8

Nx      Ny      M       ErrRBF          CPUTimeRBF

40 20 40 1.21 × 10−3 0.69 s

80 40 80 8.41 × 10−5 4.4 s


Problem 3. Results obtained using the RBF method.
Table 9

Nx      Ny      M       Err3,FD         CPUTimeFD

40 20 40 5.86 × 10−3 0.34 s

80 40 80 1.44 × 10−3 1.7 s

160 80 160 3.60 × 10−4 15.0 s

320 160 320 9.02 × 10−5 187.8 s


Problem 3. Results obtained using the FD Scheme.
