Manfred Gilli
Department of Econometrics
University of Geneva and
Swiss Finance Institute
Spring 2008
Contents

Introduction
    Outline
    References
    Problems
    Examples

Introduction to structured programming
    Notion of a program
    Structured programming (assignment, repetition, selection)
    Matlab syntax for repetition and selection

Example programs
    Mean of the elements of a vector and machine precision
    Maximum of the elements of a vector
    Bubble sort
    Simulating the roll of a die

Introduction to Matlab
    Elements of syntax
    Element-by-element operations and script files
    2D graphics y = x^2 exp(x)
    3D graphics z = x^2 y^2

Programming with Matlab
    Writing a function
    inline functions
    The feval function
    Example of a function (European call)

Computer arithmetic
    Representation of real numbers
    Machine precision
    Example of limitations of floating point arithmetic

Numerical differentiation
    Forward-difference
    Backward-difference
    Central-difference
    Approximating second derivatives
    Partial derivatives
    How to choose h
    An example

Measuring the error
    Absolute and relative error
    Combining absolute and relative error

Numerical instability and ill conditioning
    Numerically unstable algorithm
    Numerically stable algorithm
    Ill-conditioned problem
    Condition of a matrix

Complexity of algorithms
    Order of complexity O() and classification
    Evolution of performance

Differential equations
    Example of a numerical solution
    Classification of differential equations
    The Black-Scholes equation

Finite difference methods
    Definition of the grid
    Formalization of the system of finite difference equations
    Finite difference solution
    LU factorization
    Comparison of FDM
    Coordinate transformation

American options
    Projected SOR (PSOR)
    Explicit payout (EP)
    Example

Monte Carlo methods
    Generation of uniform random variables in [0,1)
    Simulation of the asset price S(t)
    Efficient simulation of price paths
    Valuation of a European call
    Confidence intervals
    Computing confidence intervals with Matlab
    Examples
    Convergence rate
    Variance reduction
    Antithetic variates
    Crude vs antithetic Monte Carlo
    Quasi-Monte Carlo
    Halton sequences
    Box-Muller transformation
    Example for Halton sequences

Lattice methods
    Binomial trees
    Multiplicative binomial tree algorithm
    American put

Portfolio selection
    Mean-variance model (Markowitz (1952))
    Solving the QP with Matlab
    Computing the efficient frontier
Introduction

The aim is to provide an overview of the basic computational tools used by financial engineers. The course combines an introduction to pricing financial derivatives with finite differences, Monte Carlo simulation and lattice-based methods, and to portfolio selection models. Emphasis is on practical implementations and numerical issues.

NMF (4313006) M. Gilli, Spring 2008
Outline

- Introduction to programming
- Programming principles
- Introduction to Matlab
- Caveats of floating point computation
- Lattice methods
- Portfolio selection
References

Brandimarte, P. (2002). Numerical Methods in Finance: A MATLAB-Based Introduction. Wiley Series in Probability and Statistics. Wiley.

Clewlow, L. and C. Strickland (1998). Implementing Derivatives Models. Wiley Series in Financial Engineering. Wiley.

Higham, D. J. and N. J. Higham (2000). Matlab Guide. SIAM.

Hull, J. C. (1993). Options, Futures and Other Derivative Securities. Prentice-Hall.

Prisman, E. Z. (2000). Pricing Derivative Securities: An Interactive Dynamic Environment with Maple V and Matlab. Academic Press.

Tavella, D. and C. Randall (2000). Pricing Financial Instruments: The Finite Difference Method. Wiley Series in Financial Engineering. Wiley, New York.

Wilmott, P. (1998). Derivatives: The Theory and Practice of Financial Engineering. Wiley.

Wilmott, P., J. Dewynne and S. Howison (1993). Option Pricing: Mathematical Models and Computation. Oxford Financial Press.

Wilmott, P., S. Howison and J. Dewynne (1995). The Mathematics of Financial Derivatives: A Student Introduction. Cambridge University Press.
Problems

[Slides 6-7: illustrations of the problems treated in the course (figures).]
Examples

[Slide 8: example figures, including the Hang Seng index, an objective function, an expected-return plot as a function of the number of steps, and the sparsity pattern of a matrix (nz = 46424).]
Introduction to structured programming

Notion of a program

A computer carries out elementary operations:
- addition
- subtraction
- assignment
- comparison

A program, during its execution, evolves dynamically as a function of the state of its variables.
The programming problem

Problem: how to design a program so as to have as clear a view as possible of the dynamic situations it generates during its execution.

The objective of programming is to make the program as readable as possible, so that it can be communicated among programmers and, above all, so that a more or less rigorous demonstration of its validity can be given (proving that it will deliver the desired results).

Question: how best to reach these objectives?
Answer: with adequate programming techniques.
Historical note

Originally, the flow of control was managed with branching instructions (GOTO, IF with branching, etc.):

        START
    100 continue
        read data
        ...
        compute value of I
        if (I) 400, 300, 200
    300 continue
        ...
        compute value of K
        if (K > 0) GOTO 100
    400 continue
        ...
        GOTO 100
    200 continue
        ...
        STOP

Spaghetti code!! It is difficult to imagine the different dynamic states of such a program. Flowcharts do not get around this difficulty.
Structured programming (assignment, repetition, selection)

After several failures (disasters), a school of thought in favour of a different programming style developed:

- Dijkstra (1968): "GOTO Statement Considered Harmful"
- Wilkes (1968): "The Outer and the Inner Syntax of a Programming Language"

These two articles are at the origin of a genuine polemic questioning the use of branching instructions in programming languages. The only control structures that exist in structured programming are:

- sequencing
- repetition
- selection
Sequencing

Sequencing is a simple succession of any number of assignment, modification, input or output instructions:

    read y
    x = sqrt(y)
    ...
    write z
Repetition

The repetition of a block of instructions, also called a loop, can take two forms:

    for i in the set I repeat
        ... (instructions)
    end

    while condition is true repeat
        ... (instructions)
    end

For the first form, the number of repetitions is defined by the number of elements of the set I; for the second form, the repetition continues indefinitely as long as the condition remains true.
Selection

The instructions that allow choosing among different possible blocks of instructions are:

    if condition 1 is true
        ... (instructions)
    elseif condition 2 is true
        ... (instructions)
    else
        ... (instructions)
    end

The instructions following the first condition that is true are executed, and execution then proceeds to the end. If no condition is true, the instructions following the else are executed.
Hierarchical structure

Every program is then composed of a succession of these three structures, executed one after the other. In contrast to a program containing branching instructions, a program designed with the three elements of structured programming can be read hierarchically from top to bottom, which makes it much easier to master intellectually. The use of flowcharts has in fact been abandoned.
Matlab syntax for repetition and selection

Repetition:

    for v = expression
        instructions
    end

    while expression
        instructions
    end

Simple selection:

    if expression
        instructions
    end

Multiple selection:

    if expression1
        instructions
    elseif expression2
        instructions
    elseif expression3
        instructions
    else
        instructions
    end
Example programs

Mean of the elements of a vector and machine precision

    xbar = (1/n) (x_1 + x_2 + ... + x_n) = (1/n) sum_{i=1}^{n} x_i

    s = 0;
    for i = 1:n
        s = s + x(i);
    end
    xbar = s/n;
Maximum of the elements of a vector

    m = -realmax;
    for i = 1:n
        if x(i) > m
            m = x(i);
        end
    end
    m
Bubble sort

Sort the elements of a vector in increasing order.

[Figure: successive states of the vector during the bubble-sort passes, neighbouring elements x(i-1) and x(i) being swapped when out of order.]

    inversion = 1;
    while inversion
        inversion = 0;
        for i = 2:N
            if x(i-1) > x(i)
                inversion = 1;
                t = x(i);
                x(i) = x(i-1);
                x(i-1) = t;
            end
        end
    end

Demonstration: Bubblego
Demo: j16.m shows the effect of the initialization of x.
Introduction to Matlab

Elements of syntax

See the lecture notes. Some elements of the syntax:

- Symbols [ ] ; ,
- variable = expression
- ans
- x = [8 3 7 9]
- x(5) = 12
- x(6) = x(1) + x(4)
- A = [4 5; 7 3]; A(2,1) = A(2,1) - 3;
- Transposition: A'
- Addition, subtraction, multiplication, division, power: + - * / \ ^

Element-by-element operations and script files

- Element-by-element operations (.*, ./, .^)
- Matlab script files (M-files)
2D graphics: y = x^2 exp(x)

    x = linspace(-4,1);
    y = x.^2 .* exp(x);
    plot(x,y);
    xlabel('x');
    ylabel('y');
    title('y = x^2 exp(x)');
    grid on
    ylim([-0.5 2.5]);
    hold on
    d = 2*x.*exp(x) + x.^2.*exp(x);
    plot(x,d,'r:');
3D graphics: z = x^2 y^2

    x = linspace(0,5,6);
    y = linspace(0,5,6);
    [X,Y] = meshgrid(x,y);
    Z = X.^2 .* Y.^2;
    mesh(Z);
    mesh(x,y,Z);

meshgrid produces

    X =                     Y =
    0  1  2  3  4  5        0  0  0  0  0  0
    0  1  2  3  4  5        1  1  1  1  1  1
    0  1  2  3  4  5        2  2  2  2  2  2
    0  1  2  3  4  5        3  3  3  3  3  3
    0  1  2  3  4  5        4  4  4  4  4  4
    0  1  2  3  4  5        5  5  5  5  5  5

[Figure: mesh plot of z = x^2 y^2 over [0,5] x [0,5].]
Programming with Matlab
Writing a function

Call: [r,...,z] = myfunc(x,...,y). Arguments are passed by value!!

The function is defined in the file myfunc.m:

    function [s1,...,sn] = myfunc(e1,...,em)
    s1 = ...
    ...
    sn = ...

where e1,...,em are the input arguments and s1,...,sn the output arguments.
Writing a function (example)

    function N = NVoisins(i,j,M)
    % NVoisins computes the number of nonzero neighbours of M(i,j)
    % NVoisins(i,j,M)  i row index, j column index
    % Version 1.0  30-04-2006
    N = M(i-1,j-1) + M(i-1,j) + M(i-1,j+1) + ...
        M(i  ,j-1)            + M(i  ,j+1) + ...
        M(i+1,j-1) + M(i+1,j) + M(i+1,j+1);

Example of use:

    n = size(A,1);
    NV = zeros(n,n);
    for i = 2:n-1
        for j = 2:n-1
            NV(i,j) = NVoisins(i,j,A);
        end
    end
inline functions

When there is a single output argument and the structure of the function is simple, it is convenient to create the function with inline.

f = inline('expression'): if the function has a single argument there is no ambiguity.
    Ex.: f(x) = x^2 e^x:  f = inline('x^2 * exp(x)');  f(0.75) = 1.1908

f = inline('expression','arg1','arg2'): if there are several arguments.
    Ex.: f(x,y) = 2 sin(x)^2 / log(y):  f = inline('2*sin(x)^2 / log(y)','x','y');  f(1,4) = 1.0215

If there are parameters they must be named P1,...,Pn and their number must be given.
    Ex.: f(x) = a x^c:  f = inline('P1*x^P2',2);
    For x = 2.7, a = 2 and c = 3 one writes f(2.7,2,3) = 39.3660
The feval function

Derivative of f(x):

    f'(x) = lim_{h -> 0} ( f(x+h) - f(x) ) / h

Evaluating this expression for a given h yields a numerical approximation of f'(x).
Ex.: f(x) = x/e^x, with derivative f'(x) = (1-x)/e^x:

    h = 1e-3;
    f = inline('x./exp(x)');
    x = linspace(0,5);
    da = (f(x+h) - f(x)) / h;
    plot(x,f(x),'k'); hold on
    plot(x,da,'r');
    % precision
    d = inline('(1-x)./exp(x)');
    plot(x,abs(d(x)-da)./d(x),'g');

[Figure: f(x), its approximated derivative, and the relative error |d - da| / d.]
The feval function (cont'd)

We write a function that computes the numerical derivative for an arbitrary f:

    function da = Dnum00(f,x)
    h = 1e-3;
    da = ( f(x+h) - f(x) ) / h;

    g = inline('2*x^2 + 3*exp(-x)');
    d = Dnum00(g,0.1)

If the function g has been defined with function (in an M-file), then feval must be used:

    function da = Dnum01(f,x)
    h = 1e-3;
    da = ( feval(f,x+h) - feval(f,x) ) / h;
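The same pattern, passing a function to a generic differentiation routine as Dnum00 does, can be sketched in Python (the helper name `dnum` is ours, not from the course):

```python
import math

def dnum(f, x, h=1e-3):
    """Forward-difference approximation of f'(x), as in Dnum00."""
    return (f(x + h) - f(x)) / h

# the slide's test function g(x) = 2x^2 + 3 exp(-x)
g = lambda x: 2 * x**2 + 3 * math.exp(-x)

approx = dnum(g, 0.1)
exact = 4 * 0.1 - 3 * math.exp(-0.1)   # analytical derivative 4x - 3 e^{-x}
```

With h = 1e-3 the forward difference agrees with the exact derivative to about h/2 times the second derivative, a few parts in a thousand here.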
Example of a function (European call)

Application (BScall is a user-written course function, its definition is not reproduced here):

    vol = linspace(0.1,0.4);
    P = 100; strike = 110; ret = 0.06; exp = 1;
    call = BScall(P,strike,ret,exp,vol);
    plot(vol,call)
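The course's BScall routine is not shown above; as a hedged illustration, the standard Black-Scholes call formula it presumably implements can be sketched in Python (function and argument names are ours):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, T, sigma):
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# same inputs as the slide: price as a function of volatility
prices = [bs_call(100, 110, 0.06, 1, v) for v in (0.1, 0.2, 0.3, 0.4)]
```

As in the slide's plot, the call price increases with volatility.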
Computer arithmetic

The support for all information in a computer is a word constituted by a sequence of bits (generally 32), a bit having value 0 or 1. Bit i carries the weight 2^i:

    2^31  2^30  ...  2^3  2^2  2^1  2^0

This sequence of bits is interpreted as the binary representation of an integer. As a consequence, integers can be represented exactly in a computer. If all computations can be made with integers we face integer arithmetic; such computations are exact.
Representation of real numbers

A real number is represented with a word split into a sign bit, t bits for the mantissa n and s bits for the exponent e:

    x = +/- n * 2^e

Whatever the size of the word, we dispose of a limited number of bits for the representation of the integers n and e. As a result, it will be impossible to represent real numbers exactly.
Consider a small system with t = 3 bits for the mantissa and s = 2 bits for the exponent:

    e in {00, 01, 10, 11} = {0, 1, 2, 3}
    n in {100, 101, 110, 111} = {4, 5, 6, 7}

Normalizing the mantissa, i.e. imposing that the first bit from the left be 1, we have n in {4, 5, 6, 7}, and defining the offset of the exponent as o = 2^(s-1) - 1 we have

    (0 - o) <= e <= (2^s - 1 - o)

i.e. e in {-1, 0, 1, 2}. The real number x = (n / 2^t) 2^e then varies over the mantissa range

    2^(t-1) <= n <= 2^t - 1
The normalized mantissas are

    100 = 2^2 = 4
    101 = 2^2 + 2^0 = 5
    110 = 2^2 + 2^1 = 6
    111 = 2^2 + 2^1 + 2^0 = 7

and the set F of representable numbers is (columns e = -1, 0, 1, 2):

    n = 100 = 4:   1/4    1/2    1     2
    n = 101 = 5:   5/16   5/8    5/4   5/2
    n = 110 = 6:   3/8    3/4    3/2   3
    n = 111 = 7:   7/16   7/8    7/4   7/2

The smallest representable number is 1/4 and the largest is 7/2. We also observe that the representable real numbers are not equally spaced.
A real number such as 1.74 is mapped to an element of F, float(1.74). In this system the result float(1.74 - 0.74) = 0.875 illustrates that, even if the exact result (here 1) is in F, the result computed from the rounded operands might be different.
Machine precision

The precision of a floating point arithmetic is defined as the smallest positive number eps_mach such that

    float(1 + eps_mach) > 1

The machine precision eps_mach can be computed with the following algorithm:

    1: e = 1
    2: while 1 + e > 1 do
    3:     e = e/2
    4: end while
    5: eps_mach = 2e
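A direct transcription of the algorithm above in Python (IEEE double precision) recovers the machine epsilon:

```python
import sys

# Halve e until 1 + e is no longer distinguishable from 1
e = 1.0
while 1.0 + e > 1.0:
    e = e / 2
eps_mach = 2 * e

print(eps_mach, sys.float_info.epsilon)   # both about 2.2e-16 in double precision
```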
22
X
1
=
n
n=1
Computing this sum will, of course, produce a finite result. What might be surprising is that this sum is
certainly inferior to 100. The problem is not generated by an underflow of 1/n or an overflow of the
Pk
partial sum n=1 1/n but by the fact that for n verifying
1/n
n1
X
k=1
1
k
< mach
Spring 2008 42
1
Pn1
1
k=1 k
float (
float (
1
n
100
1/n
P n1
1
k=1 k
Pn1
1
k=1 k
1
Pn1
1
k=1 k
50
0
0
10
6
x 10
23
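The stall can be observed directly in single precision (in double precision it would occur far too late to demonstrate); a sketch that rounds every partial sum to IEEE single:

```python
import struct

def f32(x):
    """Round x to the nearest IEEE single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

s, n = 0.0, 0
while True:
    n += 1
    t = f32(s + f32(1.0 / n))
    if t == s:          # adding 1/n no longer changes the sum
        break
    s = t

print(n, s)   # the single-precision harmonic sum stalls well below 100
```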
Numerical differentiation

Approximation of derivatives

We want to replace (partial) derivatives by numerical approximations. Given f: R -> R a continuous function with f' continuous, for x in R we can write:

    f'(x) = lim_{h -> 0} ( f(x+h) - f(x) ) / h        (1)

If, instead of letting h go to zero, we consider small values for h, we get an approximation for f'(x).
Question: how to choose h to get a good approximation if we work with floating point arithmetic?

Approximation of derivatives (cont'd)

Taylor's expansion:

    f(x+h) = f(x) + h f'(x) + h^2/2 f''(x) + h^3/6 f'''(x) + ... + h^n/n! f^(n)(x) + R_n(x+h)

with the remainder

    R_n(x+h) = h^(n+1)/(n+1)! f^(n+1)(xi),   xi in [x, x+h]

where xi is not known. Thus Taylor series allow us to approximate a function with a degree of precision that equals the remainder.
Forward-difference

Consider Taylor's expansion up to order two,

    f(x+h) = f(x) + h f'(x) + h^2/2 f''(xi),   xi in [x, x+h]

from which

    f'(x) = ( f(x+h) - f(x) ) / h  -  h/2 f''(xi)        (2)
Backward-difference

Choosing h negative we get

    f'(x) = ( f(x) - f(x-h) ) / h  +  h/2 f''(xi)        (3)
Central-difference

Consider the difference f(x+h) - f(x-h), using

    f(x+h) = f(x) + h f'(x) + h^2/2 f''(x) + h^3/6 f'''(xi+),   xi+ in [x, x+h]
    f(x-h) = f(x) - h f'(x) + h^2/2 f''(x) - h^3/6 f'''(xi-),   xi- in [x-h, x]

We have

    f(x+h) - f(x-h) = 2h f'(x) + h^3/6 ( f'''(xi+) + f'''(xi-) )

and, since f'''(xi+) + f'''(xi-) = 2 f'''(xi) for some xi in [x-h, x+h],

    f'(x) = ( f(x+h) - f(x-h) ) / (2h)  -  h^2/6 f'''(xi)

Approximating second derivatives

Adding the two expansions instead of subtracting them gives

    f''(x) = ( f(x+h) - 2 f(x) + f(x-h) ) / h^2  -  h^2/12 f''''(xi),   xi in [x-h, x+h]

There exist several different approximation schemes, also of higher order (see Dennis and Schnabel, 1983).
Partial derivatives

Given a function of two variables f(x,y), the central-difference approximation of the partial derivative with respect to x is

    f_x = ( f(x+hx, y) - f(x-hx, y) ) / (2 hx)

and, applying the scheme in both directions, the cross derivative is

    f_xy = ( f(x+hx, y+hy) - f(x-hx, y+hy) - f(x+hx, y-hy) + f(x-hx, y-hy) ) / (4 hx hy)
How to choose h

How to choose h if we use floating point arithmetic? To reduce the truncation error we would be tempted to choose h as small as possible. However, we must also consider the rounding error!!

If h is too small, we have float(x+h) = float(x); and even if float(x+h) is not equal to float(x), we can have float(f(x+h)) = float(f(x)) if the function varies very slowly.

Denote

    fh'(x) = ( f(x+h) - f(x) ) / h

Given a bound M for |f''| we get, from (2),

    | fh'(x) - f'(x) |  <=  h/2 M

This bound for the truncation error indicates that we can reach any precision, given we choose h sufficiently small. However, in floating point arithmetic it is not possible to evaluate fh'(x) exactly (rounding errors!!).
How to choose h (cont'd)

Consider that the rounding error for evaluating f(x) has the bound

    | float(f(x)) - f(x) |  <=  epsilon

As

    float(fh'(x)) = ( float(f(x+h)) - float(f(x)) ) / h

the rounding error (for finite precision computations) for an approximation of type forward-difference is bounded by

    | float(fh'(x)) - fh'(x) |  <=  2 epsilon / h

Finally, the upper bound for the sum of both sources of errors is

    | float(fh'(x)) - f'(x) |  <=  2 epsilon / h  +  h/2 M
How to choose h (cont'd)

Minimizing the bound 2 epsilon/h + (h/2) M with respect to h,

    d/dh ( 2 epsilon/h + h/2 M ) = -2 epsilon/h^2 + M/2 = 0

gives

    h = 2 sqrt(epsilon/M)        (4)

In practice epsilon corresponds to the machine precision eps_mach, and if we admit that M is of order one, we set M = 1 and

    h = 2 sqrt(eps_mach)

With Matlab we have eps_mach of about 2e-16, hence h of about 1e-8.

For the central-difference approximation the error bound is 2 epsilon/h + (h^2/6) M, where M now bounds |f'''|; setting the derivative -2 epsilon/h^2 + (h/3) M to zero gives

    h = ( 6 epsilon / M )^(1/3)

and thus h of about 1e-5.
An example

To illustrate the influence of the choice of h on the precision of the approximation of the derivative, consider the function

    f(x) = cos(x^x) - sin(e^x)

and its derivative

    f'(x) = -sin(x^x) x^x (log(x) + 1) - cos(e^x) e^x

[Figure: relative error for the approximation of the derivative at x = 1.77, as a function of h in the range 2^0 to 2^-52, corresponding to eps_mach for Matlab.]
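A quick check of the figure's message, using the same f and the derivative given above (double precision; the particular h values are ours):

```python
from math import cos, sin, exp, log

def f(x):
    return cos(x**x) - sin(exp(x))

def fprime(x):
    # analytical derivative given on the slide
    return -sin(x**x) * x**x * (log(x) + 1) - cos(exp(x)) * exp(x)

x = 1.77

def rel_err(h):
    """Relative error of the forward-difference approximation at x."""
    forward = (f(x + h) - f(x)) / h
    return abs(forward - fprime(x)) / abs(fprime(x))

# error near the optimal h (about 1e-8) versus a large and a tiny h
errors = {h: rel_err(h) for h in (1e-2, 1e-8, 1e-14)}
```

The error is smallest near h = 1e-8 and grows both for larger h (truncation) and for much smaller h (rounding), reproducing the V shape of the figure.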
Measuring the error

Absolute and relative error: for an approximation x_hat of x, the absolute error is |x_hat - x| and the relative error is |x_hat - x| / |x| (undefined for x = 0).

Combining absolute and relative error: the measure |x_hat - x| / (|x| + 1) behaves like the relative error for large |x| and like the absolute error for |x| near zero.
Numerical instability and ill conditioning

If the quality of a solution is not acceptable, it is important to distinguish between the following two situations:

- errors are amplified by the algorithm: numerical instability;
- small perturbations of the data generate large changes in the solution: ill-conditioned problem.
Numerically unstable algorithm

The roots of a x^2 + b x + c = 0 are

    x1 = ( -b - sqrt(b^2 - 4ac) ) / (2a),    x2 = ( -b + sqrt(b^2 - 4ac) ) / (2a)

Algorithm (transcription of the analytical solution):

    1: delta = sqrt(b^2 - 4ac)
    2: x1 = (-b - delta) / (2a)
    3: x2 = (-b + delta) / (2a)

For a = 1, c = 2 and a floating point arithmetic with 5 digits of precision, the algorithm produces the following solutions for different values of b:

    b        delta       float(delta)  float(x2)   x2            |float(x2)-x2| / (|x2|+1)
    5.2123   4.378135    4.3781        -0.41708    -0.4170822    1.55e-06
    121.23   121.197     121.20        -0.01500    -0.0164998    1.47e-03
    1212.3   1212.2967   1212.3         0          -0.001649757  1.65e-03

Catastrophic cancellation!
Numerically stable algorithm

    delta = sqrt(b^2 - 4ac)
    if b < 0 then
        x1 = (-b + delta) / (2a)
    else
        x1 = (-b - delta) / (2a)
    end if
    x2 = c / (a x1)

    b        float(delta)  float(x1)   float(x2)    x2
    1212.3   1212.3        -1212.3     -0.0016498   -0.001649757
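The same contrast appears in double precision once b is large enough that -b + delta cancels (the value b = 1e8 is ours); the residual of the quadratic shows which root is trustworthy:

```python
from math import sqrt

a, b, c = 1.0, 1e8, 2.0
delta = sqrt(b * b - 4 * a * c)

# Unstable: -b + delta subtracts two nearly equal numbers
x2_naive = (-b + delta) / (2 * a)

# Stable: compute the large-magnitude root first, then use x1 * x2 = c/a
x1 = (-b - delta) / (2 * a)
x2_stable = c / (a * x1)

def residual(x):
    """How well x satisfies a*x^2 + b*x + c = 0."""
    return a * x * x + b * x + c

print(residual(x2_naive), residual(x2_stable))
```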
Ill-conditioned problem

Consider the linear system A x = b with

    A = | 0.780  0.563 |        b = | 0.217 |
        | 0.913  0.659 |            | 0.254 |

The exact solution is x = (1, -1)'. The two equations define two almost parallel lines, so a tiny perturbation of the data moves their intersection far away.

[Figure: the two lines; zooming in shows that they are almost indistinguishable.]
Condition of a matrix

The condition of a matrix A is defined as

    kappa(A) = ||A^-1|| ||A||

The Matlab function cond computes the condition of a matrix. For our example we have cond(A) = 2.2e6. If cond(A) > 1/sqrt(eps), we should worry about our numerical results!! For eps = 2.2e-16 we have 1/sqrt(eps) = 6.7e7. (We lose half of the significant digits!!!)
Complexity of algorithms

The complexity of an algorithm is measured by counting its elementary operations as a function of the problem size n (counts like n^3/3 arise, for instance, in matrix factorizations).

Order of complexity O() and classification

Definition: a function g(n) is O(f(n)) if there exist constants c0 and n0 such that g(n) is inferior to c0 f(n) for all n > n0.

Example: g(n) = (5/4) n^3 + (1/5) n^2 + 500 is O(n^3), as

    c0 f(n) > g(n)   for n > n0

with f(n) = n^3, c0 = 2 and n0 = 9.

[Figure: g(n) and c0 f(n); beyond n0 the curve g(n) stays below 2 n^3.]

Algorithms are classified into 2 groups (according to the order of the function counting the elementary operations):

- polynomial
- non-polynomial
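The slide's constants c0 = 2 and n0 = 9 can be verified directly:

```python
def g(n):
    return (5 / 4) * n**3 + (1 / 5) * n**2 + 500

# g(n) <= 2*n^3 holds for every n beyond n0 = 9, but not yet at n = 8
holds_beyond_n0 = all(g(n) <= 2 * n**3 for n in range(10, 10000))
fails_before = g(8) > 2 * 8**3

print(holds_beyond_n0, fails_before)
```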
Evolution of performance

    Year   Machine               Mflops
    1985   8086/87 at 8 MHz      0.03
    1985   Cray XMP              33
    1995   Pentium 133 MHz       12
    1997   Pentium II 233 MHz    28
    1999   Pentium III 500 MHz   81
    2003   Pentium IV 2.8 GHz    1000
    2006   Intel U2500 1.2 GHz   1000
    2003   Earth Simulator       30 TFlops

(www.top500.org)

Fantastic increase of cheap computing power!!!
Differential equations

Example of a numerical solution

The equation

    f'(x) = -f(x)/x        (5)

is an ordinary differential equation, as f involves only a single variable. When solving a differential equation, we look for the function f(x) that satisfies (5). One verifies that the general solution is

    f(x) = C/x        (6)

since f'(x) = -C/x^2 and -f(x)/x = -(C/x)/x = -C/x^2.

[Figure: the solution y = 2/x, corresponding to the initial condition f(2) = 1.]
Writing f'(x) = -f(x)/x with a forward difference of step dx,

    ( f(x+dx) - f(x) ) / dx = -f(x)/x

we obtain

    f(x+dx) = f(x) ( 1 - dx/x )

Let us write the program that computes N approximations of the solution on the grid x0, x1, ..., xN:

    f7 = 1;                % offset for indexing
    x0 = 2; y0 = 1;        % initial conditions
    xN = 10; N = 199;
    dx = (xN - x0)/N;
    x = linspace(x0,xN,N+1);
    y(f7+0) = y0;
    for i = 1:N
        y(f7+i) = y(f7+i-1) * ( 1 - dx/x(f7+i-1) );
    end
    plot(x,y,'ro'), hold on
    z = linspace(x0,xN); f = 2./z;
    plot(z,f,'b')

Note: by running this code, one can observe how the precision of the approximations depends on dx.
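The same explicit (Euler) scheme in Python, compared against the exact solution 2/x:

```python
x0, y0, xN, N = 2.0, 1.0, 10.0, 199
dx = (xN - x0) / N

xs, ys = [x0], [y0]
for i in range(N):
    # y' = -y/x discretized as y_new = y * (1 - dx/x)
    ys.append(ys[-1] * (1 - dx / xs[-1]))
    xs.append(xs[-1] + dx)

err = max(abs(y - 2.0 / x) for x, y in zip(xs, ys))
print(err)   # the global error is of order dx
```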
37
0.8
0.8
0.6
0.6
0.4
0.4
0.2
0.2
0
1
10
11
10
11
0
1
10
11
0.8
0.6
0.4
0.2
0
1
Lets choose the optimal value for x . (We have to reduce the interval !!!)
[Figure: approximation errors near x = 2 for step sizes h = 10^-7, 10^-8, 10^-9 and 10^-10]
Note
Demonstration with ODEex1.m of how the accuracy increases when Δx decreases, BUT the number of steps becomes prohibitive !!!
With the central difference

df/dx ≈ ( f(x+Δx) − f(x−Δx) ) / (2Δx)

solving this expression requires two points of the solution, f(x−Δx) and f(x+Δx). We therefore need additional initial conditions.
Writing the scheme y_i = x_i (y_{i−1} − y_{i+1})/(2Δx) at the interior points, e.g.

y3 = x3 (y2 − y4)/(2Δx)
y4 = x4 (y3 − y5)/(2Δx)

gives the tridiagonal system

|  1    c1            | | y1 |   |  c1 y0 |
| −c2   1    c2       | | y2 | = |   0    |
|      −c3   1    c3  | | y3 |   |   0    |
|           −c4   1   | | y4 |   | −c4 y5 |

where c_i = x_i/(2Δx), and which can be solved if y0 and y5 are known.
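The 4×4 system above can be assembled and solved directly; this is a Python sketch under the assumption that the boundary values y0 and y5 come from the exact solution f(x) = 2/x on an equidistant grid.

```python
import numpy as np

# Central-difference system for f'(x) = -f(x)/x with c_i = x_i/(2*dx)
x = np.linspace(2.0, 10.0, 6)          # x0 .. x5
dx = x[1] - x[0]
c = x[1:5] / (2 * dx)                  # c1 .. c4
# rows: -c_i y_{i-1} + y_i + c_i y_{i+1} = 0
A = np.eye(4) - np.diag(c[1:], -1) + np.diag(c[:3], 1)
b = np.array([c[0] * (2 / x[0]), 0.0, 0.0, -c[3] * (2 / x[5])])
y = np.linalg.solve(A, b)
err = np.max(np.abs(y - 2 / x[1:5]))   # deviation from f(x) = 2/x
```

With only four interior points the grid is coarse, so the solution is close to, but visibly different from, 2/x.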
Code /Teaching/MNE/Ex/ODEex2.m:

[Figure: solutions f(x) computed with ODEex2.m for N = 10 (dx = 2.80) and N = 30 (dx = 0.93)]
The spdiags function

spdiags(B,d,m,n)

In the remainder of the course these two numerical methods are presented in detail in the context of the differential equations used for option pricing.
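A Python sketch of the same idea using scipy.sparse.spdiags, which follows the same column-indexed diagonal convention as Matlab's spdiags(B,d,m,n); the numeric diagonal values here are hypothetical, chosen only to show the layout.

```python
import numpy as np
from scipy.sparse import spdiags

# Diagonals are read by column index: for the sub-diagonal the last entry of
# the row is unused, for the super-diagonal the first entry is unused.
B = np.array([[2., 3., 4., 0.],    # sub-diagonal   (d = -1)
              [1., 1., 1., 1.],    # main diagonal  (d =  0)
              [0., 5., 6., 7.]])   # super-diagonal (d =  1)
P = spdiags(B, [-1, 0, 1], 4, 4).toarray()
expected = np.eye(4) + np.diag([2., 3., 4.], -1) + np.diag([5., 6., 7.], 1)
```

This alignment rule is the main pitfall when porting spdiags-based code between Matlab and scipy.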
Nonlinear example:

u_xx + 3 u_xy + u_yy + u_x² u = e^{xy}
Differential equations (2)

Linear PDEs in 2 dimensions and of degree 2:

a u_xx + b u_xy + c u_yy + d u_x + e u_y + f u + g = 0

They are classified according to the value of the discriminant b² − 4ac:

b² − 4ac > 0   hyperbolic equation
b² − 4ac = 0   parabolic equation
b² − 4ac < 0   elliptic equation
Differential equations (3)
Hyperbolic PDEs (time processes), e.g. motion of a wave:

u_xx − u_tt = 0

Parabolic PDEs (time propagation processes), e.g. heat propagation:

u_t − u_xx = 0

Elliptic PDEs (systems in equilibrium), e.g. Laplace's equation:

u_xx + u_yy = 0
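The classification rule is mechanical, so a small hypothetical helper makes it concrete; the three cases exercised in the comments are exactly the examples above.

```python
# Classify a 2nd-order linear PDE a*u_xx + b*u_xy + c*u_yy + ... = 0
def classify(a, b, c):
    disc = b * b - 4 * a * c
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

# u_xx - u_tt = 0  -> a=1,  b=0, c=-1  (wave)
# u_t - u_xx = 0   -> a=-1, b=0, c=0   (heat, in the (x, t) variables)
# u_xx + u_yy = 0  -> a=1,  b=0, c=1   (Laplace)
```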
Differential equations (4)

Three components in a PDE problem:
the PDE itself
the spatial and temporal domain in which the PDE must be satisfied
initial conditions (at time 0) and conditions on the boundary of the domain.
Black-Scholes equation

The Black-Scholes equation is

V_t + (1/2) σ² S² V_SS + (r − q) S V_S − r V = 0

V   value of the option
S   price of the underlying
σ   volatility of the underlying
r   market interest rate
q   dividend yield

Boundary conditions:
call:  lim_{S→∞} V(S, t) = S
put:   V(0, t) = E e^{−r(T−t)},   lim_{S→∞} V(S, t) = 0
[Figure: price surface of a European put as a function of S and t]
Finite difference grid:

Δt = T/M,   t_j = j Δt,   j = 0, ..., M
S_i = S_min + i ΔS,   i = 0, ..., N   (here S_min = 0, S_max = N ΔS)

Numerical approximations of V(S, t) are only computed at grid points:

v_ij = V(S_min + i ΔS, j Δt),   i = 0, ..., N,   j = 0, ..., M

Terminal conditions (j = M): (E − i ΔS)+ for a put, (i ΔS − E)+ for a call.
Boundary conditions: at S_min = 0, E e^{−r(T − jΔt)} for a put; at S_max, (N ΔS − E)+ for a call.
The derivatives are approximated at the grid points by

V_SS |_{S = S_min + iΔS} ≈ ( v_{i+1,j} − 2 v_ij + v_{i−1,j} ) / ΔS²    (central difference)
V_S  |_{S = S_min + iΔS} ≈ ( v_{i+1,j} − v_{i−1,j} ) / (2 ΔS)
V_t  |_{t = jΔt}         ≈ ( v_ij − v_{i,j−1} ) / Δt
44
t 2 Si2
22S
and bi =
t (rq)Si
2S
di
ui
Spring 2008 87
For i = 1, ..., N−1 we obtain the equations

v_{1,j−1}   + d_1 v_{0,j}     + m_1 v_{1,j}     + u_1 v_{2,j}     = 0
v_{2,j−1}   + d_2 v_{1,j}     + m_2 v_{2,j}     + u_2 v_{3,j}     = 0
     ⋮
v_{i,j−1}   + d_i v_{i−1,j}   + m_i v_{i,j}     + u_i v_{i+1,j}   = 0        (7)
     ⋮
v_{N−1,j−1} + d_{N−1} v_{N−2,j} + m_{N−1} v_{N−1,j} + u_{N−1} v_{N,j} = 0

in matrix notation:

P = | d1  m1  u1                              |
    |     d2  m2  u2                          |
    |         d3  .    .                      |
    |             .    .    u_{N−2}           |
    |                 d_{N−1} m_{N−1} u_{N−1} |
With P the tridiagonal matrix above, we partition the unknowns:

v^j    the full vector       v_ij,  i = 0, 1, ..., N
v̄^j   the interior values   v_ij,  i = 1, ..., N−1
v̈^j   the boundary values   v_ij,  i = 0, N

so that

P v^j = A v̄^j + B v̈^j
Note: the columns of P correspond to the rows of the grid !!
Explicit method

With the backward time difference, (7) becomes

v̄^{j−1} − v̄^j + A v̄^j + B v̈^j = 0        (8)

The solution v̄^0 is obtained by computing

v̄^{j−1} = (I + A) v̄^j + B v̈^j    for j = M, M−1, ..., 1
Implicit method

With the forward time difference (v_{i,j+1} − v_ij)/Δt, we obtain

v̄^{j+1} − v̄^j + A v̄^j + B v̈^j = 0        (9)

and solve

(I − A) v̄^j = v̄^{j+1} + B v̈^j    for j = M−1, ..., 1, 0
θ-method

Combine (8) and (9) with a weight θ ∈ [0, 1]:

θ ( v̄^{j+1} − v̄^j + A v̄^j + B v̈^j ) + (1−θ) ( v̄^{j+1} − v̄^j + A v̄^{j+1} + B v̈^{j+1} ) = 0

and rearranging

A_m v̄^j = A_p v̄^{j+1} + θ B v̈^j + (1−θ) B v̈^{j+1}        (10)

with

A_m = I − θ A    and    A_p = I + (1−θ) A

The solution v̄^0 is obtained by solving the linear system (10) for j = M−1, ..., 1, 0.
A_m = I − θ A,   A_p = I + (1−θ) A,   0 ≤ θ ≤ 1

Method           θ     A_m        A_p
Explicit         0     I          I + A
Implicit         1     I − A      I
Crank-Nicolson   1/2   I − ½ A    I + ½ A

Algorithm 1  Computes the solution v̄^0 with the θ-method
1: Define initial and boundary conditions v̄^M and v̈^j, j = 0, ..., M
2: Compute A_m, A_p and B
3: for j = M−1 : −1 : 0 do
4:    Solve A_m v̄^j = A_p v̄^{j+1} + θ B v̈^j + (1−θ) B v̈^{j+1}
5: end for
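Algorithm 1's loop can be illustrated on a simpler problem than Black-Scholes (an assumed example, not from the course): the heat equation u_t = u_xx with zero boundary values, where the B terms vanish and only A_m and A_p remain.

```python
import numpy as np

# theta-method for u_t = u_xx on [0,1], u(0,t) = u(1,t) = 0,
# u(x,0) = sin(pi x); exact solution exp(-pi^2 t) sin(pi x).
N, M, T, theta = 50, 200, 0.1, 0.5                 # theta = 1/2: Crank-Nicolson
dx, dt = 1.0 / N, T / M
x = np.linspace(0.0, 1.0, N + 1)
lam = dt / dx**2
# A approximates dt * u_xx on the interior points
A = lam * (np.diag(-2.0 * np.ones(N - 1))
           + np.diag(np.ones(N - 2), 1) + np.diag(np.ones(N - 2), -1))
Am = np.eye(N - 1) - theta * A
Ap = np.eye(N - 1) + (1 - theta) * A
u = np.sin(np.pi * x[1:-1])                        # initial condition
for _ in range(M):
    u = np.linalg.solve(Am, Ap @ u)                # solve Am u_new = Ap u
err = np.max(np.abs(u - np.exp(-np.pi**2 * T) * np.sin(np.pi * x[1:-1])))
```

Setting theta = 0 here reproduces the explicit scheme (and its stability restriction), theta = 1 the implicit one.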
LU factorization

Two broad categories of methods:

Direct methods resort to a factorization of matrix A (they transform the system into one that is easier to solve). LU factorization is the method of choice if A has no particular structure to exploit.

[Figure: A factorized into a lower triangular L and an upper triangular U]

Note: illustrate complexity with the example n³ + n² + n for n = 100, 1 000, 10 000.
Solution of Ax = b

In Ax = b we replace A by its factorization A = LU and solve the two triangular systems L y = b and U x = y.
ExSL0.m: generate A with Matlab and compute LU.
In the implementation below, the θ-method equation

A_m v̄^j = A_p v̄^{j+1} + θ B v̈^j + (1−θ) B v̈^{j+1}

maps onto the Matlab variables as follows (V0 holds time step j, V1 time step j+1):

v̄^j → V0(1:N-1),   v̄^{j+1} → V1(1:N-1),   v̈^j → V0([0 N]),   v̈^{j+1} → V1([0 N])

and the whole right-hand side is collected in b.
function P = FDM1DEuPut(S,E,r,T,sigma,q,Smin,Smax,M,N,theta)
% Finite difference method for European put
f7 = 1;
[Am,Ap,B,Svec] = GridU1D(r,q,T,sigma,Smin,Smax,M,N,theta);
V0 = max(E - Svec,0);
% Solve linear system for successive time steps
[L,U] = lu(Am);
for j = M-1:-1:0
    V1 = V0;
    V0(f7+0) = (E-Svec(f7+0))*exp(-r*(T-j*T/M));  % dt = T/M
    b = Ap*V1(f7+(1:N-1)) + theta*B*V0(f7+[0 N]) ...
        + (1-theta)*B*V1(f7+[0 N]);
    V0(f7+(1:N-1)) = U\(L\b);
end
P = interp1(Svec,V0,S,'spline');
Example
>> P = BSput(50,50,0.10,5/12,0.40)
P =
4.0760
>> P = FDM1DEuPut(50,50,0.10,5/12,0.40,0,0,150,100,500,1/2)
P =
4.0760
>> P = FDM1DEuPut(50,50,0.10,5/12,0.40,0,0,150,100,500,0)
P =
-2.9365e+154
>> P = FDM1DEuPut(50,50,0.10,5/12,0.40,0,0,150,100,500,1)
P =
4.0695
Example

function P = BSput(S,E,r,T,sigma)
% European put valued with the Black-Scholes formula
d1 = (log(S./E) + (r + sigma.^2 /2) .* T) ./ (sigma .* sqrt(T));
d2 = d1 - sigma .* sqrt(T);
Ee = E .* exp(-r .* T);
P = Ee .* normcdf(-d2) - S .* normcdf(-d1);
>> P = FDM1DEuPut(50,50,0.10,5/12,0.40,0,0,150,1000,100,0)
P =
4.0756
Need to discuss stability !!
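For reference, this is a Python transcription of BSput above (the same closed-form formula), which reproduces the benchmark value 4.0760 used to check the finite difference prices.

```python
from math import log, sqrt, exp, erf

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + erf(z / sqrt(2)))

def bs_put(S, E, r, T, sigma):
    d1 = (log(S / E) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return E * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

P = bs_put(50, 50, 0.10, 5 / 12, 0.40)   # matches the 4.0760 above
```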
Comparison of FDM

function CompareFDM(S,E,r,T,sigma,q,Smin,Smax,M,N)
Ex = FDM1DEuPut(S,E,r,T,sigma,q,Smin,Smax,M,N,0);
Im = FDM1DEuPut(S,E,r,T,sigma,q,Smin,Smax,M,N,1);
CN = FDM1DEuPut(S,E,r,T,sigma,q,Smin,Smax,M,N,1/2);
BS = BSput(S,E,r,T,sigma);
Ex_err = BS - Ex;
Im_err = BS - Im;
CN_err = BS - CN;
figure
plot(S,Ex_err,'b.-','LineWidth',1), hold on
plot(S,Im_err,'mo-','LineWidth',1)
plot(S,CN_err,'r^-','LineWidth',1)
plot([S(1) S(end)],[0 0],'k');
legend('Explicit','Implicit','Crank-Nicolson',2);
title(['N=',int2str(N),' M=',int2str(M),' sigma=',num2str(sigma,2),...
       ' r=',num2str(r,2),' T=',num2str(T,1),' E=',int2str(E)]);
xlabel('S'); ylim([-.005 .045]); grid on
Comparison of FDM
% goCompareFDM.m (without transformation)
clear, close all
E = 10; r = 0.05; T = 6/12; sigma = .2; q = 0;
Smin = 0; Smax = 100;
S = linspace(5,15,30);
%
M = 30; N = 100;
CompareFDM(S,E,r,T,sigma,q,Smin,Smax,M,N)
M = 500;
CompareFDM(S,E,r,T,sigma,q,Smin,Smax,M,N)
M = 30; N = 200;
CompareFDM(S,E,r,T,sigma,q,Smin,Smax,M,N)
M = 30; N = 200; sigma = 0.4;
CompareFDM(S,E,r,T,sigma,q,Smin,Smax,M,N)
M = 30; N = 310; sigma = 0.2;
CompareFDM(S,E,r,T,sigma,q,Smin,Smax,M,N)
M = 30; N = 700; sigma = 0.2;
CompareFDM(S,E,r,T,sigma,q,Smin,Smax,M,N)
M = 30; N = 700; sigma = 0.5;
CompareFDM(S,E,r,T,sigma,q,Smin,Smax,M,N)
Comparison of FDM

Time discretization:

[Figure: errors of the explicit, implicit and Crank-Nicolson methods for N=100, M=30, σ=0.2, r=0.05, T=0.5, E=10, and with M=500]
Comparison of FDM

Increase space discretization with different volatilities:

[Figure: errors for N=200, M=30, σ=0.2 and σ=0.4, r=0.05, T=0.5, E=10]
Comparison of FDM

Increase space discretization:

[Figure: errors for N=310 and N=700, M=30, σ=0.2, r=0.05, T=0.5, E=10]
Comparison of FDM

Higher volatility:

[Figure: errors for N=700, M=30, σ=0.5, r=0.05, T=0.5, E=10]

Conclusions:
errors are larger at-the-money
errors depend on σ and r
only Crank-Nicolson produces higher precision when N is increased (M constant)
this suggests a variable grid (finer around E)
Recall the Black-Scholes equation

V_t + (1/2) σ² S² V_SS + (r − q) S V_S − r V = 0

and consider a transformation S = S(x) with Jacobian J(x) = ∂S/∂x.        (11)

By the chain rule ∂V/∂x = V_S J(x), hence

V_S = V_x / J(x)    and    V_SS = (1/J(x)) ∂/∂x ( V_x / J(x) )

The x-derivatives are approximated by central differences,

V_x ≈ ( V(x+Δx) − V(x−Δx) ) / (2Δx)

and, evaluating J at the midpoints x ± Δx/2,

∂/∂x ( V_x / J ) ≈ ( (V(x+Δx) − V(x)) / (J(x+Δx/2) Δx) − (V(x) − V(x−Δx)) / (J(x−Δx/2) Δx) ) / Δx
Substituting

J_{i+1/2} = (S_{i+1} − S_i)/Δx,   J_{i−1/2} = (S_i − S_{i−1})/Δx,   J_i = (S_{i+1} − S_{i−1})/(2Δx)

into the discretized equation gives

v_{i,j+1} − v_ij + σ² S_i² Δt / (S_{i+1} − S_{i−1}) · ( (v_{i+1,j} − v_ij)/(S_{i+1} − S_i) − (v_ij − v_{i−1,j})/(S_i − S_{i−1}) )
            + (r − q) S_i Δt ( v_{i+1,j} − v_{i−1,j} ) / (S_{i+1} − S_{i−1}) − Δt r v_ij = 0

We denote

a_i = Δt / (S_{i+1} − S_{i−1}),   b_i = σ² S_i² a_i / (S_{i+1} − S_i),   c_i = σ² S_i² a_i / (S_i − S_{i−1}),   e_i = (r − q) S_i a_i

so that the equation becomes

v_{i,j+1} − v_ij + (c_i − e_i) v_{i−1,j} + (−b_i − c_i − Δt r) v_ij + (b_i + e_i) v_{i+1,j} = 0

with d_i = c_i − e_i, m_i = −b_i − c_i − Δt r and u_i = b_i + e_i.
P is again the tridiagonal matrix

P = | d1  m1  u1                              |
    |     d2  m2  u2                          |
    |         d3  .    .                      |
    |             .    .    u_{N−2}           |
    |                 d_{N−1} m_{N−1} u_{N−1} |

We apply the same partition to P as done previously and compute the matrices A_m, A_p and B needed to implement the θ-method.
Grid transformations considered:

S = E e^x,   x ∈ [−I, I]        (log transform)
a sinh-based transform concentrating grid points around E,   x ∈ [0, 1]        (sinh transform)

[Figure: grids on [0, 160] produced by the two transformations]
Comparison of transformations

Errors at-the-money for the sinh and log transformations
(E = 40, r = 0.05, T = 0.25, σ = 0.40, q = 0)

[Figure: error versus N, log-log scale, for the sinh transform and the log transform with I = 2 and I = 3]
American options

Can be exercised at any time → the price must satisfy the PDE inequality

V_t + (1/2) σ² S² V_SS + (r − q) S V_S − r V ≤ 0

and, in order to exclude arbitrage opportunities,

V(S, t) ≥ (E − S)+        (12)

with boundary conditions V(0, t) = E and lim_{S→∞} V(S, t) = 0.
American options

The solution is formalized as a complementarity problem:

L_BS(V) · ( V(S, t) − V(S, T) ) = 0,   with   L_BS(V) ≤ 0   and   V(S, t) − V(S, T) ≥ 0

where L_BS denotes the Black-Scholes operator.
American options

Using the notation introduced for the formalization of the θ-method:

( A_m v̄^j − b^j ) · ( v̄^j − v̄^M ) = 0,   A_m v̄^j ≥ b^j   and   v̄^j ≥ v̄^M   (v̄^M is the payoff)

with

A_m = I − θ A,   A_p = I + (1−θ) A,   b^j = A_p v̄^{j+1} + θ B v̈^j + (1−θ) B v̈^{j+1}

Explanation: we must reason component-wise when explaining the max operation.
Gauss-Seidel iteration for a 3×3 system Ax = b:

x1^(k+1) = ( b1 − a12 x2^(k)   − a13 x3^(k)   ) / a11
x2^(k+1) = ( b2 − a21 x1^(k+1) − a23 x3^(k)   ) / a22
x3^(k+1) = ( b3 − a31 x1^(k+1) − a32 x2^(k+1) ) / a33
In general, for i = 1, ..., n:

x_i^(k+1) = ( b_i − Σ_{j=1}^{i−1} a_ij x_j^(k+1) − Σ_{j=i+1}^{n} a_ij x_j^(k) ) / a_ii

SOR for Ax = b relaxes the Gauss-Seidel update with a relaxation parameter 0 < ω < 2:

x_i^(k+1) = ω x_{GS,i}^(k+1) + (1 − ω) x_i^(k)

where x_{GS,i}^(k+1) is the Gauss-Seidel update above.
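A Python sketch of the (unprojected) SOR iteration on a small hypothetical diagonally dominant system; with omega = 1 it reduces to plain Gauss-Seidel.

```python
import numpy as np

def sor(A, b, omega=1.2, tol=1e-10, maxit=500):
    n = len(b)
    x = np.zeros(n)
    for _ in range(maxit):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel update: new values for j < i, old for j > i
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
            x[i] = omega * gs + (1 - omega) * x[i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

A = np.array([[4., 1., 0.], [1., 4., 1.], [0., 1., 4.]])
b = np.array([1., 2., 3.])
x = sor(A, b)
```

The projected variant used for American options (PSOR below) only adds a max against the payoff after each component update.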
PSOR for Ax = b
function x1 = PSOR(x1,A,b,payoff,omega,tol,maxit)
% Projected SOR for Ax = b
if nargin == 4, tol=1e-6; omega=1.2; maxit=100; end
it = 0; converged = 0; N = length(x1);
while ~converged
x0 = x1;
for i = 1:N-1
x1(i) = omega*(b(i)-A(i,:)*x1)/A(i,i) + x1(i);
x1(i) = max(payoff(i), x1(i));
end
converged = all((abs(x1-x0)./(abs(x0)+1)) < tol );
it=it+1; if it>maxit, error('Maxit reached'), end
end
Algorithm with explicit payoff correction:

v̄^M = (E − S)+
for j = M−1 : −1 : 0 do
    b^j = A_p v̄^{j+1} + θ B v̈^j + (1−θ) B v̈^{j+1}
    Solve A_m v̄^j = b^j
    v̄^j = max( v̄^j, v̄^M )
end for
[Figure: P − payoff near the early exercise boundary, for small and large time to expiration]

Comment
The first panel shows the boundary for small values of τ (near expiration).
The second panel shows the boundary for larger values of τ (far from expiration). If we consider the truncated solution we obtain a step function.
Computation of Sf by interpolation

solunc = U\(L\b);
[V0(f7+(1:N-1)),Imax] = max([Payoff solunc],[],2);
p = find(Imax == 2);
i = p(1) - 1;   % Grid line of last point below payoff
Sf(f7+j) = interp1(solunc(i:i+2) - Payoff(i:i+2), ...
                   Svec(f7+(i:i+2)),0,'cubic');

Example (j = 55, Sf ≈ 39.3323):

[Payoff solunc] =      Imax =
   20.0   15.1            1
   19.0   15.6            1
   18.0   15.8            1
   17.0   16.0            1
   16.0   16.1            2
   15.0   16.2            2
   14.0   16.5            2
Example

S = 50; E = 50; r = 0.10; T = 5/12; sigma = 0.40; q = 0;
Smin = 0; Smax = 100; N = 100; M = 100; theta = 1/2;
tic
[P,Sf] = FDM1DAmPut(S,E,r,T,sigma,q,Smin,Smax,M,N,theta,'PSOR');
fprintf('\n PSOR: P = %8.6f (%i sec)\n',P,ceil(toc));
 PSOR:  P = 4.279683 (2 sec)
tic
[P,Sf] = FDM1DAmPut(S,E,r,T,sigma,q,Smin,Smax,M,N,theta,'EPOUT');
fprintf('\n EPout: P = %8.6f (%i sec)\n',P,ceil(toc));
 EPout: P = 4.277555 (1 sec)
Example

Early exercise boundary

[Figure: early exercise boundary Sf(t), S between 35 and 50, t ∈ [0, 0.5]]

The option is exercised if S is below the boundary.
Example

[Figure: American put value surface as a function of S and t, seen from two angles]
Example

[Figure: payoff, American put and European put values as functions of S, for S between 35 and 55]
Monte Carlo methods

Solution:
Model a complex procedure to ensure randomness. But the absence of a theoretical understanding of the modelled process often leads to disastrous results !!
Observe a random physical phenomenon (particle emission from a radioactive source).
Mathematical model → pseudo-random variables.

Linear congruential generator

Fix an initial value x0, then compute recursively for i ≥ 1

x_i = a x_{i−1}   (mod M)        (13)
Example with a = 2, M = 70 and x0 = 5:

x_i                10       20       40       80       160     ...
x_i (mod 70)       10       20       40       10       20
x_i (mod 70)/70    0.1429   0.2857   0.5714   0.1429   0.2857
68
G
en
erateur lin
eaire congruent
xi = 2 xi1
x0 = 5 M = 70
2M
80
M
M
xi = 5 xi1
M = 70
40
20
10
10 20
40 M.MGilli
80
NMF
(4313006)
Linear congruential generator

Properties as a function of a and M:
cycle length (very important !!)
coverage of the interval (distance between the generated values, granularity).
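The short cycle of the toy generator above is easy to expose in code; this sketch collects the values produced by x_i = a·x_{i−1} mod M until one repeats.

```python
# Toy linear congruential generator from the slide (a = 2, M = 70, x0 = 5)
def lcg_cycle(a, M, x0):
    seen, x = [], x0
    while True:
        x = (a * x) % M
        if x in seen:
            return seen       # the values before the first repetition
        seen.append(x)

print(lcg_cycle(2, 70, 5))    # prints [10, 20, 40]
```

A cycle of length 3 out of 70 possible values illustrates why both the cycle length and the coverage of the interval matter.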
Generation of uniform (0, 1) random variables with Matlab

Period ≈ 2^1492, granularity: all floating point numbers in [0, 1[

rand
rand('seed',n)    n is an arbitrary starting point
rand(n,m)         generates an n × m matrix

x = 0.05:0.1:0.95;
hist(y,x)
h = findobj(gca,'Type','patch');
set(h,'FaceColor','r','EdgeColor','w');
set(gca,'xtick',x);
set(gca,'FontSize',14);

[Figure: histogram of uniform draws over classes centred at 0.05, 0.15, ..., 0.95]
69
dz
drift
volatility
standard Wiener process
(r
(r
2
2
2
2
u N(0, 1)
) t + t u
)T + T u
(r 22 ) t + t u
{z
} | {z }
|
trend
sigdt
function S = MBG00(S0,r,T,sigma,NIncr,NRepl)
% Geometric brownian motion (Simulation of NRepl paths)
% Every path has NIncr+1 points (S(f7+0)=S0)
f7 = 1;
dt = T/NIncr;
trend = (r - sigma^2/2)*dt;
sigdt = sigma * sqrt(dt);
S = repmat(NaN,NRepl,NIncr+1);
S(:,f7+0) = S0;
for i = 1:NRepl
for j = 1:NIncr
S(i,f7+j) = S(i,f7+j-1)*exp(trend+sigdt* randn );
end
end
S = MBG00(50,0.1,5/12,0.4,150,100);

[Figure: 100 simulated geometric Brownian motion paths, S0 = 50, 150 increments]
Vectorization: from

S_{t+Δt} = S_t exp( (r − σ²/2) Δt + σ √Δt u )

the change of variable z_t = log(S_t) gives

z_{t+Δt} = z_t + (r − σ²/2) Δt + σ √Δt u = z_t + h_t

With v = [ v0 v1 v2 ... ] ≡ [ z0 h1 h2 h3 ... ] we have z_t = Σ_{i=0}^t v_i, computed in Matlab with

z = cumsum(v)

→ about 15 times faster.
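The same cumsum trick in Python (a sketch, not the course's MBGV code): all paths and all increments are generated at once, with no explicit loops.

```python
import numpy as np

def gbm_paths(S0, r, T, sigma, n_incr, n_repl, rng):
    dt = T / n_incr
    # increments h_t of z = log(S)
    h = ((r - sigma**2 / 2) * dt
         + sigma * np.sqrt(dt) * rng.standard_normal((n_repl, n_incr)))
    # prepend z0 = log(S0), then z = cumsum(v) and S = exp(z)
    v = np.hstack([np.full((n_repl, 1), np.log(S0)), h])
    return np.exp(np.cumsum(v, axis=1))

rng = np.random.default_rng(0)
S = gbm_paths(50, 0.10, 5/12, 0.40, 150, 100, rng)
```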
[Figure: distribution of the simulated discounted payoffs]

C0^(j) = e^{−rT} C_T^(j)
Confidence intervals

C̄0 is a random variable → we can construct confidence intervals.

C̄0 = (1/N) Σ_{j=1}^N C0^(j)

Estimator of the variance of C0^(j):

σ²(C0^(j)) = 1/(N−1) Σ_{j=1}^N ( C0^(j) − C̄0 )²

Variance of C̄0:

V(C̄0) = V( (1/N) Σ_j C0^(j) ) = (1/N²) · N · V(C0^(j)) = V(C0^(j)) / N

Central limit theorem: for i.i.d. X_i with μ = E(X_i) and σ² = E(X_i − μ)²,

( S_n − n μ ) / ( σ √n ) → N(0, 1)

With X_j ≡ C0^(j), E(C0^(j)) = C0 and S_N = N C̄0:

√N ( C̄0 − C0 ) / √V(C0^(j)) ~ N(0, 1)

For α = 0.05, t_{α/2} = 1.96:

C̄0 − t_{α/2} √( V(C0^(j)) / N ) ≤ C0 ≤ C̄0 + t_{α/2} √( V(C0^(j)) / N )
Given x = [ x1 x2 ... xn ],

xbar = (1/n) Σ_{i=1}^n x_i,   s² = 1/(n−1) Σ_{i=1}^n ( x_i − xbar )²

With Matlab we compute:

[xbar,s,cixbar,cis] = normfit(x,epsilon)

cixbar    confidence interval for xbar
cis       confidence interval for s
epsilon   confidence level (default = .05)
European call with Monte Carlo:
function [P,IC] = MC01(S0,E,r,T,sigma,NIncr,NRepl)
% European call with Monte Carlo (crude)
S = MBGV(S0,r,T,sigma,NIncr,NRepl);
Payoff = max( (S(:,end)-E), 0);
[P,ignore,IC] = normfit( exp(-r*T) * Payoff );
Examples

function dispMC(name,P,IC,t0)
% Output Monte Carlo results
fprintf('\n %9s: P = %6.3f (%5.2f - %5.2f) ',name,P,IC(1),IC(2));
fprintf(' %4.2f (%2i sec)',IC(2)-IC(1),ceil(etime(clock,t0)));

S = 50;
E = 50;
r = 0.10;
T = 5/12;
sigma = 0.40;
P = BScall(S,E,r,T,sigma)
P =
    6.1165
Examples

NIncr = 1; NRepl = 10000;
randn('seed',0); t0 = clock;
[P,IC] = MC01(S,E,r,T,sigma,100,NRepl);
dispMC('MC01',P,IC,t0)
     MC01: P =  6.131 ( 5.95 -  6.31) 0.36 ( 2 sec)
randn('seed',0); t0 = clock;
[P,IC] = MC01(S,E,r,T,sigma,1,NRepl);
dispMC('MC01',P,IC,t0)
     MC01: P =  6.052 ( 5.86 -  6.24) 0.37 ( 1 sec)
randn('seed',0); t0 = clock;
[P,IC] = MC01(S,E,r,T,sigma,NIncr,100000);
dispMC('MC01',P,IC,t0)
     MC01: P =  6.098 ( 6.04 -  6.16) 0.12 ( 1 sec)
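A Python sketch of the crude estimator MC01: the terminal price is sampled in one step, and a 95% interval is built from the sample standard deviation as in the confidence interval slide.

```python
import numpy as np

def mc_call(S0, E, r, T, sigma, n_repl, rng):
    # sample the terminal price directly (one increment)
    ST = S0 * np.exp((r - sigma**2 / 2) * T
                     + sigma * np.sqrt(T) * rng.standard_normal(n_repl))
    disc = np.exp(-r * T) * np.maximum(ST - E, 0)
    half = 1.96 * disc.std(ddof=1) / np.sqrt(n_repl)   # CI half-width
    return disc.mean(), half

rng = np.random.default_rng(0)
P, half = mc_call(50, 50, 0.10, 5/12, 0.40, 100_000, rng)
# P should be close to the Black-Scholes value 6.1165
```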
Convergence rate

[Figure: six panels of Monte Carlo price estimates and confidence bands as functions of NRepl]

Comment
Comment on the different convergence patterns !!!
Variance reduction

Given two random variables X1 and X2 with the same distribution, then

V( (X1 + X2)/2 ) = (1/4) [ V(X1) + V(X2) + 2 cov(X1, X2) ]

Antithetic variates

Consider a portfolio with two (identical) options. The underlyings S1 and S2 satisfy the same stochastic differential equation, but with mirrored noise:

dS1 = r S1 dt + σ S1 dz
dS2 = r S2 dt − σ S2 dz
Antithetic variates
function [S1,S2] = MBGAT(S0,r,T,sigma,NIncr,NRepl)
% Geometric Brownian motion (NRepl antithetic paths)
dt = T/NIncr; NRepl = fix(NRepl/2);
trend = (r - sigma^2/2)*dt;
sigdt = sigma * sqrt(dt);
D = sigdt*randn(NRepl,NIncr);
S1 = exp(cumsum([repmat(log(S0),NRepl,1) trend + D],2));
S2 = exp(cumsum([repmat(log(S0),NRepl,1) trend - D],2));
function [P,IC] = MCAT(S0,E,r,T,sigma,NIncr,NRepl)
% European call with Monte Carlo (antithetic)
[S1,S2] = MBGAT(S0,r,T,sigma,NIncr,NRepl);
Payoff = ( max( (S1(:,end)-E), 0) + max( (S2(:,end)-E), 0) ) / 2;
[P,ignore,IC] = normfit( exp(-r*T) * Payoff );
78
6.052
( 5.86 -
6.24) 0.37
dispMC(MC01,P,IC,t0)
( 1 sec)
randn(seed,0); t0 = clock;
[P,IC] = MCAT(S,E,r,T,sigma,NIncr,NRepl);
MCAT: P =
6.080
( 5.94 -
6.22) 0.20
dispMC(MCAT,P,IC,t0)
( 1 sec)
randn(seed,0); t0 = clock;
[P,IC] = MC01(S,E,r,T,sigma,NIncr,1e6);
MC01: P =
6.123
( 6.10 -
6.14) 0.04
dispMC(MC01,P,IC,t0)
( 5 sec)
randn(seed,0); t0 = clock;
[P,IC] = MCAT(S,E,r,T,sigma,NIncr,1e6);
MCAT: P =
6.115
( 6.10 -
6.13) 0.03
dispMC(MCAT,P,IC,t0)
( 4 sec)
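The antithetic idea in Python (a sketch of MCAT): each normal draw u is reused with its mirror −u, and the two discounted payoffs are averaged path by path.

```python
import numpy as np

def mc_call_at(S0, E, r, T, sigma, n_repl, rng):
    u = rng.standard_normal(n_repl // 2)
    drift = (r - sigma**2 / 2) * T
    ST1 = S0 * np.exp(drift + sigma * np.sqrt(T) * u)   # path with +u
    ST2 = S0 * np.exp(drift - sigma * np.sqrt(T) * u)   # mirrored path
    disc = np.exp(-r * T) * (np.maximum(ST1 - E, 0)
                             + np.maximum(ST2 - E, 0)) / 2
    half = 1.96 * disc.std(ddof=1) / np.sqrt(n_repl // 2)
    return disc.mean(), half

rng = np.random.default_rng(0)
P, half = mc_call_at(50, 50, 0.10, 5/12, 0.40, 100_000, rng)
```

Because the two payoffs are negatively correlated, the averaged variable has a smaller variance than a single crude draw, which is the narrowing seen in the MCAT intervals above.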
[Figure: width of the confidence interval as a function of NRepl, shrinking like 1/√n]

Error proportional to 1/√n !!!

Other variance reduction techniques:
Control variates
Stratified sampling
Importance sampling
...

cf. Jäckel, P. (2002): Monte Carlo Methods in Finance. Wiley.
Quasi-Monte Carlo
Sequence of N random vectors x(1) , x(2) , . . . , x(N ) C m = [0, 1]m
Sequence is uniformly distributed if number of points in any subspace G C m is proportional to volume of
G
Given x = [x1 x2 . . . xm ] and the rectangular subspace Gx
Gx = [0, x1 ) [0, x2 ) [0, xm )
Volume of Gx is given by vol(Gx ) = x1 x2 xm
SN (G) is number of point of sequence in G C m
Definition of discrepancy of a sequence:
D(x(1) , . . . , x(N ) ) = sup |SN (Gx ) N x1 xm |
xC m
Quasi-Monte Carlo
80
Quasi-Monte Carlo

rand('seed',0);
plot(rand(100,1),rand(100,1),'ro');
grid on

[Figure: 100 pseudo-random points in the unit square]
Halton sequences

plot(GetHalton(100,2),GetHalton(100,7),'ko','MarkerSize',8,'MarkerFaceColor','g');
grid on

[Figure: 100 Halton points (bases 2 and 7) in the unit square]
Example in base b = 3:

169 = (20021)_3   →   h = (0.12002)_3 = 1·3^{−1} + 2·3^{−2} + 0·3^{−3} + 0·3^{−4} + 2·3^{−5}

In general, for

n = Σ_{k=0}^m d_k b^k

the radical inverse is

h = Σ_{k=0}^m d_k b^{−(k+1)}

Previous example:

n = 2·3^4 + 0·3^3 + 0·3^2 + 2·3^1 + 1·3^0
h = 2·3^{−5} + 0·3^{−4} + 0·3^{−3} + 2·3^{−2} + 1·3^{−1}
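The radical inverse described above is a few lines of code: peel the base-b digits of n from the low end and accumulate them behind the radix point.

```python
# Radical inverse h(n, b): digit-reverse n in base b behind the radix point
def radical_inverse(n, b):
    h, f = 0.0, 1.0 / b
    while n > 0:
        n, d = divmod(n, b)   # next digit d (least significant first)
        h += d * f
        f /= b
    return h

# h(169, 3) = (0.12002)_3 = 1/3 + 2/9 + 2/243 = 137/243
```

The Halton sequence in base b is simply radical_inverse(1, b), radical_inverse(2, b), ...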
[Figure: coverage of (0, 1) by the sequence; values 0.3077, 0.6154, 0.9231 shown]

Comment
Number of digits needed: Bits = 1 + ⌊log(n)/log(b)⌋, so that b^Bits > n.
Box-Muller transformation

The transformation generates X, Y ~ N(0, 1):

X = √(−2 log U1) cos(2π U2)
Y = √(−2 log U1) sin(2π U2)

Use Halton sequences instead of uniform pseudo-random variables !!
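The transformation as code; u1 and u2 may be pseudo-random uniforms or, as suggested above, Halton points in two different bases.

```python
from math import log, sqrt, cos, sin, pi

def box_muller(u1, u2):
    # maps two uniforms on (0,1) to two independent N(0,1) draws
    rho = sqrt(-2 * log(u1))
    return rho * cos(2 * pi * u2), rho * sin(2 * pi * u2)

x, y = box_muller(0.3, 0.7)
```

By construction x² + y² = −2 log u1, which is a convenient self-check.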
P = 5.1911

P = QMCHalton(50,52,0.10,5/12,0.40,5000,2,7)
P =
    5.1970

P = MC01(50,52,0.10,5/12,0.40,1,5000)
P =
    5.2549
Lattice methods

General idea

The probability distribution of the underlying is approximated by a tree (the leaves represent the different states).
One can proceed in many ways:
first fix the number of branches (many possibilities)
match the first two moments of the (lognormal) distribution
This yields a system of equations with 1 degree of freedom → something must be fixed (several possibilities).
Matching the first two moments over a step Δt:

p u S_t + (1−p) d S_t = e^{rΔt} S_t   ⇒   p u + (1−p) d = e^{rΔt}        (14)

p u² + (1−p) d² = e^{(2r+σ²)Δt}        (15)

Three unknowns in (14) and (15); we fix d = 1/u and get the solutions

u = e^{σ√Δt},       p = ( e^{rΔt} − d ) / ( u − d )
86
Binomial trees
Given a call which expires after t . Possibles prices for underlying and call:
uS
Cu = (u S E)+
S
C
dS
Cd = (d S E)+
Cu Cd
ud
Since the portfolio is riskless we can write
From there we get
S =
u S Cu = d S Cd
u S Cu = ert (S C)
and substituting S = (Cu Cd )/(u d) we get the value for the call
C = ert p Cu + (1 p) Cd
Future payoff discounted under risk neutral probability
NMF (4313006) M. Gilli
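A Python sketch of the multiplicative (CRR) tree with the u, d, p just derived, here for a European put so that it can be checked against the Black-Scholes value 4.0760 used earlier.

```python
from math import exp, sqrt

def binom_put(S, E, r, T, sigma, M):
    dt = T / M
    u = exp(sigma * sqrt(dt)); d = 1 / u
    p = (exp(r * dt) - d) / (u - d)       # risk-neutral probability
    disc = exp(-r * dt)
    # terminal payoffs (E - S u^j d^{M-j})+, j = number of up moves
    C = [max(E - S * u**j * d**(M - j), 0.0) for j in range(M + 1)]
    for _ in range(M):                    # backward recursion (17)
        C = [disc * (p * C[j + 1] + (1 - p) * C[j]) for j in range(len(C) - 1)]
    return C[0]

P = binom_put(50, 50, 0.10, 5/12, 0.40, 500)
# converges to the Black-Scholes value 4.0760 as M grows
```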
Binomial trees

Store the tree in a 2-dimensional array: after i steps the possible prices are u^i S, u^{i−2} S, ..., d^i S, node (i, j) holding S u^j d^{i−j}.

[Figure: recombining tree with nodes (i, j), i = 0, ..., 4, j = 0, ..., i]
Binomial trees

At maturity the payoff is

C_M^j = ( S_M^j − E )+        (16)

At any node in the tree the value of the option corresponds to the discounted expected value of the option one period later:

C_i^j = e^{−rΔt} ( p C_{i+1}^{j+1} + (1−p) C_{i+1}^j )        (17)

Starting from (16) and using (17) we can compute the value of the option for any period i = M−1, ..., 0 in the tree.
This is implemented with the multiplicative binomial tree algorithm.
[Figure: backward recursion through the nodes (i, j) of the tree]
[Figure: empirical distribution of terminal tree prices for increasing M]

!!!! Compare the empirical CDF with the theoretical lognormal (in particular in the tails).
American put

The binomial tree can be used to evaluate American options. To compute an American put we introduce the correction C_i^j = max( C_i^j, (E − S_i^j)+ ):

for i = M−1 : −1 : 0 do
    for j = 0 : i do
        C_j = v ( p C_{j+1} + (1−p) C_j )
        C_j = max( C_j, E − S_i^j )
    end for
end for

As for the array C, it is not necessary to store the whole tree in order to access the values of S. We simply update column i of S:

for i = M−1 : −1 : 0 do
    for j = 0 : i do
        C_j = v ( p C_{j+1} + (1−p) C_j )
        S_j = S_j / d
        C_j = max( C_j, E − S_j )
    end for
end for
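The in-place recursion translates directly to Python; this sketch (function name hypothetical) keeps one price column S and one option column C, updating both as in the pseudo-code above.

```python
from math import exp, sqrt

def binom_am_put(S0, E, r, T, sigma, M):
    dt = T / M
    u = exp(sigma * sqrt(dt)); d = 1 / u
    p = (exp(r * dt) - d) / (u - d)
    v = exp(-r * dt)                           # one-period discount factor
    S = [S0 * u**j * d**(M - j) for j in range(M + 1)]   # column i = M
    C = [max(E - s, 0.0) for s in S]
    for i in range(M - 1, -1, -1):
        for j in range(i + 1):
            C[j] = v * (p * C[j + 1] + (1 - p) * C[j])
            S[j] = S[j] / d                    # price at node (i, j)
            C[j] = max(C[j], E - S[j])         # early exercise correction
    return C[0]

PA = binom_am_put(50, 50, 0.10, 5/12, 0.40, 100)
```

For the example parameters used with the FDM solvers (S = E = 50, r = 0.10, T = 5/12, sigma = 0.40), the result is close to the 4.28 reported there.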
Portfolio selection

Rests on the assumptions of normality of future returns or of quadratic utility.

Formalization and notation:

nA           number of assets
R = [r_tj]   return of asset j in period t
r_P = Σ_{j=1}^{nA} x_j r_j = x'r    (random variable)
Q = cov(R)   estimator of the covariance of returns
Efficient frontier

Generation of 50 000 random portfolios (weights and cardinality generated randomly):

[Figure: scatter of (σ_rP, r̄_P) for the 50 000 random portfolios]

n = size(SMI.Ph,2);
R = diff(log(SMI.Ph),1,1);
r = mean(R);
Q = cov(R);
(cont'd)

N = 50000; RR = zeros(N,2);
h = waitbar(0,'Computing portfolios ...');
for i = 1:N
    nA = unidrnd(n);
    P = randperm(n);
    P = P(1:nA);                 % Indices of the assets in the portfolio
    x = rand(nA,1);
    x = x ./ sum(x);
    RR(i,1) = r(P) * x;          % rP
    RR(i,2) = sqrt(x'*Q(P,P)*x); % sigma_rP
    waitbar(i/N,h)
end, close(h);
plot(RR(:,2),RR(:,1),'k.')
A first problem

Find the portfolio which minimizes risk (variance):

min_x x'Qx
s.t.  Σ_j x_j = 1
      x_j ≥ 0,   j = 1, ..., nA

This is a quadratic program. The Matlab function quadprog solves the problem

min_x (1/2) x'Qx + f'x   s.t.   A x ≤ b   (plus equality constraints)

Here f = 0, while A = −I and b = 0 express the constraints x_j ≥ 0, and Ae = [1 ... 1], be = 1 the budget constraint.
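The same quadratic program in Python; scipy has no quadprog, so SLSQP is used here as a stand-in, with a small hypothetical covariance matrix Q.

```python
import numpy as np
from scipy.optimize import minimize

Q = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])     # toy covariance matrix
n = Q.shape[0]
res = minimize(lambda x: x @ Q @ x,    # portfolio variance
               np.full(n, 1.0 / n),    # start from equal weights
               bounds=[(0.0, None)] * n,
               constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}])
x = res.x
```

As a sanity check, the optimized variance can never exceed that of the equal-weight starting point, which is itself feasible.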
(cont'd)

A = -eye(n);
b = zeros(n,1);
Ae = ones(1,n); be = 1; f = zeros(1,n);
x = quadprog(Q,f,A,b,Ae,be);
rP = r * x;
sigmaP = sqrt(x'*Q*x);
plot(sigmaP,rP,'ro')
xlim([0 0.20])
ylim([0 0.04])

[Figure: minimum-variance portfolio marked on the cloud of random portfolios]
A second problem

Find the portfolio which minimizes risk for a given return:

min_x x'Qx
s.t.  Σ_j x_j r_j ≥ rP        (additional constraint)
      Σ_j x_j = 1
      x_j ≥ 0,   j = 1, ..., nA

A and b become:

A = | −r1 −r2 ... −rnA |        b = | −rP |
    |        −I        |            |  0  |
(cont'd)

rP = 0.02;
f = zeros(1,n);
A = [-r; -eye(n)];
b = [-rP zeros(1,n)];
Ae = ones(1,n);
be = 1;
x = quadprog(Q,f,A,b,Ae,be);
rP = r * x;
sigmaP = sqrt(x'*Q*x);
plot(sigmaP,rP,'ro')

[Figure: minimum-risk portfolio for rP = 0.02, with σ ≈ 0.0679]
Matlab code

f = zeros(1,n);
A = -eye(n);
b = zeros(n,1);
Ae = ones(1,n); be = 1;
npoints = 100;
% On the unit circle
theta = sqrt(1-linspace(0.99,0.05,npoints).^2);
for i = 1:npoints
    H = theta(i)*Q;
    f = -(1-theta(i))*r;
    x = quadprog(H,f,A,b,Ae,be);
    plot(sqrt(x'*Q*x),r*x,'r.')
end

[Figure: efficient frontier traced by varying theta]

Finance toolbox: frontcon, frontier, etc.
Return of a portfolio including a riskless asset with return r0:

r_P = r0 x0 + Σ_{i=1}^{nA} x_i r_i,   with  x0 = 1 − Σ_{i=1}^{nA} x_i

    = r0 ( 1 − Σ_i x_i ) + Σ_i x_i r_i = r0 + Σ_i x_i ( r_i − r0 )

The problem becomes

min_x x'Qx
s.t.  r0 + Σ_j x_j ( r_j − r0 ) ≥ rP
      Σ_j x_j ≤ 1
      x_j ≥ 0,   j = 1, ..., nA
Matlab code

r0 = 0.0025;
i = 0;
f = zeros(1,n); A = [r0-r; -eye(n)];
for rP = [0.010 0.015 0.023 0.030];
    i = i + 1;
    b = [r0 - rP zeros(1,n)];
    x = quadprog(Q,f,A,b);
    x0(i) = 1 - sum(x);
    rP = x0(i)*r0 + r*x;
    sigmaP = sqrt(x'*Q*x);
    plot(sigmaP,rP,'ko')
end
x0 =
    0.6376    0.3960    0.0095   -0.3288

[Figure: portfolios combining the riskless asset with risky portfolios]
With bound constraints the problem is

min_x x'Qx
s.t.  Σ_j x_j r_j ≥ rP
      Σ_j x_j = 1
      x_j^l ≤ x_j ≤ x_j^u
[Figure: unconstrained vs constrained weights for assets 1-19 (sigmaP = 0.0679)]

Asset       #     r̄        σ
Roche        2   0.0110   0.0651
Nestlé       3   0.0203   0.0772
Clariant    12   0.0238   0.1138
Baloise     14   0.0318   0.1309
Swatch      17   0.0051   0.0860
EMS         19   0.0078   0.0659

P   Assets          P    Assets
1   14              6    2, 3, 14, 19
2   14              7    2, 3, 17, 19
3   3, 12, 14       8    2, 3, 17, 19
4   3, 12, 14       9    2, 3, 17, 19
5   2, 3, 14        10   2, 3, 17, 19

[Figure: evolution of the portfolio weights]
Portfolio optimization

SMI data (file SMI.mat): SMI.Ph (prices), SMI.Dates, SMI.Noms (names)

dates = ['28-Jun-1999'
         '29-Jun-1999'];
for i = 1:size(dates,1)
    SMI.Dates(i) = datenum(dates(i,:));   % encoding
end
date = datestr(SMI.Dates(1),1);           % decoding

load SMI;
plot(SMI.Dates,SMI.Ph(:,3),'r')
dateaxis('x',12)
title([SMI.Noms(3,:)])

[Figure: NESTLE R asset price, Jan 97 - Oct 99]
Returns

Discrete: R_t = p_t / p_{t−1} − 1

R = SMI.Ph(2:end,:)./SMI.Ph(1:end-1,:) - 1;

Continuous: r_t = log(p_t) − log(p_{t−1})

R = diff(log(SMI.Ph),1,1);
Dates = SMI.Dates(2:end);
plot(Dates,R(:,3),'b')

[Figure: daily returns of asset 3, Jul 97 - Oct 99]
Normalized distribution of daily returns

function plotR(SMI)
% Plot of the daily returns
R = diff(log(SMI.Ph),1,1);
sig = std(R);
nA = size(R,2); isp = 0; ifig = 0;
for i = 1:nA
    if rem(isp,4)==0, ifig=ifig+1; isp=0; figure(ifig), end
    isp = isp + 1; subplot(4,1,isp)
    epdf(R(:,i)/sig(i),15)
    title([deblank(SMI.Noms(i,:))],'FontSize',8)
    %ylabel([deblank(SMI.Noms(i,:))],'Rotation',90,'FontSize',8)
end

function epdf(x,nbins)
% Plots empirical probability distribution of x
n = length(x); minx = min(x); maxx = max(x);
d = (maxx - minx) / nbins;          % class width
cc = minx + d/2 + d * (0:nbins-1);  % class centres
ec = hist(x,cc);                    % class counts
fr = ec / n;                        % relative frequencies
hc = fr * (1/d);                    % bar height so that total area P = 1
bar(cc,hc,0.8,'y')
xlim([-10 8]); ylim([0 0.6]);
Plots

[Figure: normalized daily return distributions for NOVARTIS R, ROCHE HOLDING GSH., NESTLE R, UBS R, CREDIT SUISSE R, SWISS RE R, ABB PARTICIPATION R, ZURICH ALLIED R, ...]
Returns

Mean return of asset j over N observations:

r̄_j = (1/N) Σ_{i=1}^N R_ij

% Matlab 6.xx

[Figure: additional figures]
References
Dennis, J. E. Jr. and R. B. Schnabel (1983). Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs, NJ.
Hull, J. C. (1993). Options, Futures and Other Derivative Securities. Prentice-Hall.
Prisman, E. Z. (2000). Pricing Derivative Securities: An Interactive Dynamic Environment with Maple V and Matlab. Academic Press.