Computational finance
History
Introduction
Quantitative analysis
History
Quantitative finance started in the U.S. in the 1930s as some astute investors
began using mathematical formulae to price stocks and bonds.
Harry Markowitz's 1952 Ph.D thesis "Portfolio Selection" was one of the first
papers to formally adapt mathematical concepts to finance. Markowitz
formalized
a notion of mean return and covariances for common stocks which allowed him
to
quantify the concept of "diversification" in a market. He showed how to
compute
the mean return and variance for a given portfolio and argued that investors
should hold only those portfolios whose variance is minimal among all
portfolios
with a given mean return. Although the language of finance now involves Itō calculus, minimization of risk in a quantifiable manner underlies much of the modern theory.
In 1969 Robert Merton introduced stochastic calculus into the study of finance. Merton was motivated by the desire to understand how prices are set in financial markets, which is the classical economics question of "equilibrium," and in later papers he used the machinery of stochastic calculus to begin investigation of this issue.
At the same time as Merton's work and with Merton's assistance, Fischer Black and Myron Scholes were developing their option pricing formula, work that led to the 1997 Nobel Prize in Economics (awarded to Scholes and Merton; Black had died in 1995). It provided a solution for a
practical
problem, that of finding a fair price for a European call option, i.e., the right to
buy one share of a given stock at a specified price and time. Such options are
frequently purchased by investors as a risk-hedging device. In 1981, Harrison
and
Pliska used the general theory of continuous-time stochastic processes to put
the
Black-Scholes option pricing formula on a solid theoretical basis, and as a
result,
showed how to price numerous other "derivative" securities.
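To make the option-pricing discussion concrete, here is a minimal sketch of the Black-Scholes formula for a European call. The numerical inputs are illustrative assumptions, not values from the text.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call on a non-dividend-paying stock."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Illustrative inputs: spot 100, strike 100, one year to expiry, 5% rate, 20% volatility.
print(black_scholes_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2))
```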
Fixed Income
Equity
Smart security investing requires in-depth research and analysis. Measuring all
the influencing factors is an essential part of risk management. As a result,
research groups continually create and modify mathematical models to
calculate
stock value, review forecasts, and develop innovative risk strategies.
Equity research groups use the thousands of math and graphics functions in
MathWorks products to access stock data, perform statistical analysis,
determine
derivatives pricing, perform sensitivity analyses, and run Monte Carlo
simulations. The graphics capabilities in MATLAB offer a variety of ways to
review time series data, visualize portfolio risks and returns, and create
forecasting graphs.
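As a rough sketch of the kind of Monte Carlo study described above (written in Python rather than MATLAB, and with made-up parameters), one can simulate equity price paths under geometric Brownian motion and inspect the distribution of terminal prices:

```python
import numpy as np

# Illustrative assumptions: initial price, drift, volatility, one-year horizon.
S0, mu, sigma, T = 100.0, 0.07, 0.20, 1.0
steps, n_paths = 252, 10_000
dt = T / steps

rng = np.random.default_rng(seed=0)
z = rng.standard_normal((n_paths, steps))
# Geometric Brownian motion: accumulate log-returns and exponentiate.
log_returns = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
paths = S0 * np.exp(np.cumsum(log_returns, axis=1))

terminal = paths[:, -1]
print("mean terminal price:", terminal.mean())
print("5th percentile:", np.percentile(terminal, 5))
```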
With MathWorks deployment tools, you can easily compile and integrate your
MATLAB algorithms into your system.
According to Fund of Funds analyst Fred Gehm, "There are two types of
quantitative analysis and, therefore, two types of quants. One type works
primarily with mathematical models and the other primarily with statistical
models. While there is no logical reason why one person can't do both kinds
of
work, this doesn’t seem to happen, perhaps because these types demand
different
skill sets and, much more important, different psychologies."
A typical problem for a numerically oriented quantitative analyst would be
to
develop a model for pricing and managing a complex derivative product.
A typical problem for a statistically oriented quantitative analyst would be to
develop a model for deciding which stocks are relatively expensive and
which
stocks are relatively cheap. The model might include a company's book
value to
price ratio, its trailing earnings to price ratio and other accounting factors.
An investment manager might implement this analysis by buying the underpriced stocks, selling the overpriced stocks, or both.
One of the principal mathematical tools of quantitative finance is stochastic calculus.
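A toy sketch of the relative-value screen described above; the tickers and accounting figures are entirely hypothetical.

```python
# Hypothetical inputs: ticker -> (book value per share, trailing EPS, price).
stocks = {
    "AAA": (30.0, 4.0, 50.0),
    "BBB": (10.0, 2.5, 80.0),
    "CCC": (25.0, 3.5, 40.0),
}

def value_score(book, eps, price):
    # Higher book-to-price and earnings-to-price suggest a relatively cheap stock.
    return book / price + eps / price

ranking = sorted(stocks, key=lambda t: value_score(*stocks[t]), reverse=True)
print("relatively cheap to relatively expensive:", ranking)
```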
According to a July 2008 Aite Group report, today quants often use alpha
generation platforms to help them develop financial models. These software
solutions enable quants to centralize and streamline the alpha generation
process.
Areas of application
Classification of method
Mathematical finance
choose a portfolio with largest mean return subject to acceptable levels of
variance in the return. Simultaneously, William Sharpe developed the
mathematics of determining the correlation between each stock and the
market.
For their pioneering work, Markowitz and Sharpe, along with Merton Miller,
shared the 1990 Nobel Prize in economics, for the first time ever awarded for a
work in finance.
The portfolio-selection work of Markowitz and Sharpe introduced mathematics
to
the “black art” of investment management. With time, the mathematics has
become more sophisticated. Thanks to Robert Merton and Paul Samuelson,
one-
period models were replaced by continuous time, Brownian-motion models,
and
the quadratic utility function implicit in mean–variance optimization was replaced by more general increasing, concave utility functions.
INTRODUCTION:
The mathematical formulation of problems arising in science, engineering,
economics and finance involving rate of change w.r.t. one independent
variable is
governed by ordinary differential equations. Solutions of “real life” problems
often require developing and applying numerical/computational techniques to
model complex physical situations, which otherwise are not possible to solve
by
analytical means. The choice of such a technique depends on how accurate a solution is required, along with several other factors, including computing-time constraints and the stability of the method. Once a suitable numerical
technique has been applied and the problem is transformed into an algorithmic
form, one can use the powerful computational facilities available.
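As a small illustration of the numerical solution of an ordinary differential equation, here is a forward Euler sketch for the test problem y'(t) = -2y(t), y(0) = 1, whose exact solution is e^(-2t); the step size is an arbitrary choice.

```python
import math

def euler(f, y0, t0, t1, h):
    """Forward Euler method for y'(t) = f(t, y) with initial value y(t0) = y0."""
    t, y = t0, y0
    while t < t1 - 1e-12:
        y += h * f(t, y)
        t += h
    return y

approx = euler(lambda t, y: -2.0 * y, y0=1.0, t0=0.0, t1=1.0, h=0.01)
print(approx, math.exp(-2.0))  # numerical vs exact value at t = 1
```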
GOALS:
Applied mathematics
computational engineering, which use high performance computing for
the
simulation of phenomena and solution of problems in the sciences and
engineering. These are often considered interdisciplinary programs.
Utility of applied mathematics
Computer Science
Statistical theory relies on probability and decision theory, and makes
extensive use of scientific computing, analysis, and optimization; for the
design of experiments, statisticians use algebra and combinatorics. Applied
mathematicians and statisticians often work in a department of
mathematical
sciences (particularly at colleges and small universities).
Statisticians have long complained that many mathematics departments
have
assigned mathematicians (without statistical competence) to teach
statistics
courses, effectively giving "double blind" courses. Examining data from 2000,
Schaeffer and Stasny reported
By far the majority of instructors within statistics departments have at least
a
master’s degree in statistics or biostatistics (about 89% for doctoral
departments and about 79% for master’s departments). In doctoral
mathematics departments, however, only about 58% of statistics course
instructors had at least a master’s degree in statistics or biostatistics as
their
highest degree earned. In master’s-level mathematics departments, the
corresponding percentage was near 44%, and in bachelor’s-level
departments
only 19% of statistics course instructors had at least a master’s degree in
statistics or biostatistics as their highest degree earned. As we expected, a
large majority of instructors in statistics departments (83% for doctoral
departments and 62% for master’s departments) held doctoral degrees in
either statistics or biostatistics. The comparable percentages for instructors of statistics in mathematics departments were about 52% and 38%.
This unprofessional conduct violates the "Statement on Professional Ethics" of the American Association of University Professors (which has been affirmed by many colleges and universities in the USA) and the ethical
codes
of the International Statistical Institute and the American Statistical
Association. The principle that statistics-instructors should have statistical
competence has been affirmed by the guidelines of the Mathematical
Association of America, which has been endorsed by the American
Statistical
Association.
Actuarial science
Mathematical tools
Asymptotic analysis
Calculus
Copulas
Differential equation
Ergodic theory
Gaussian copulas
Numerical analysis
Real analysis
Probability
Probability distribution
o Binomial distribution
o Log-normal distribution
Expected value
Value at risk
Risk-neutral measure
Stochastic calculus
o Brownian motion
o Lévy process
Itô's lemma
Fourier transform
Girsanov's theorem
Radon-Nikodym derivative
Monte Carlo method
Quantile function
Partial differential equations
o Heat equation
Martingale representation theorem
Feynman–Kac formula
Stochastic differential equations
Volatility
o ARCH model
o GARCH model
Stochastic volatility
Mathematical model
Numerical method
o Numerical partial differential equations
Crank-Nicolson method
Finite difference method
Derivatives pricing
Areas of application
Computational finance
Quantitative Behavioral Finance
Derivative (finance), list of derivatives topics
Modeling and analysis of financial markets
International Swaps and Derivatives Association
Fundamental financial concepts - topics
Model (economics)
List of finance topics
List of economics topics, List of economists
List of accounting topics
Statistical Finance
Numerical analysis
One of the earliest mathematical writings is the Babylonian tablet YBC 7289,
which gives a sexagesimal numerical approximation of √2, the length of the
diagonal in a unit square. Being able to compute the sides of a triangle (and
hence, being able to compute square roots) is extremely important, for
instance, in
carpentry and construction. In a rectangular wall section that is 2.40 meter by
3.75
meter, a diagonal beam has to be 4.45 meters long.
Numerical analysis continues this long tradition of practical mathematical
calculations. Much like the Babylonian approximation to √2, modern numerical
analysis does not seek exact answers, because exact answers are impossible
to
obtain in practice. Instead, much of numerical analysis is concerned with
obtaining approximate solutions while maintaining reasonable bounds on
errors.
Numerical analysis naturally finds applications in all fields of engineering and
the
physical sciences, but in the 21st century, the life sciences and even the arts
have
adopted elements of scientific computations. Ordinary differential equations
appear in the movement of heavenly bodies (planets, stars and galaxies);
optimization occurs in portfolio management; numerical linear algebra is
essential
to quantitative psychology; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.
Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid-20th century, computers calculate the required functions instead. The interpolation algorithms nevertheless may be used as part of the software for solving differential equations.
General introduction
The overall goal of the field of numerical analysis is the design and analysis of
techniques to give approximate but accurate solutions to hard problems, the
variety of which is suggested by the following.
History
Direct Method (solving 3x³ + 4 = 28 for x):
3x³ + 4 = 28.
Subtract 4: 3x³ = 24.
Divide by 3: x³ = 8.
Take cube roots: x = 2.
(Note: the bisection iteration shown below differs slightly from the textbook description of the method.)
Iterative Method
For the iterative method, apply the bisection method to f(x) = 3x³ - 24. The initial values are a = 0, b = 3, f(a) = -24, f(b) = 57.

a        b        mid      f(mid)
0        3        1.5      -13.875
1.5      3        2.25     10.17...
1.5      2.25     1.875    -4.22...
1.875    2.25     2.0625   2.32...
We conclude from this table that the solution is between 1.875 and 2.0625.
The
algorithm might return any number in that range with an error less than 0.2.
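A minimal Python sketch of the bisection iteration tabulated above (the tolerance is an arbitrary assumption):

```python
def bisect(f, a, b, tol=1e-8):
    """Bisection method; f(a) and f(b) must have opposite signs."""
    fa = f(a)
    while (b - a) / 2.0 > tol:
        mid = (a + b) / 2.0
        fmid = f(mid)
        if fa * fmid <= 0.0:
            b = mid              # the root lies in [a, mid]
        else:
            a, fa = mid, fmid    # the root lies in [mid, b]
    return (a + b) / 2.0

# Root of f(x) = 3x^3 - 24 on [0, 3]; the exact answer is x = 2.
print(bisect(lambda x: 3.0 * x ** 3 - 24.0, 0.0, 3.0))
```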
Discretization and numerical integration
In a two hour race, we have measured the speed of the car at three instants
and
recorded them in the following table.
Time          0:20   1:00   1:40
Speed (km/h)   140    150    180
A discretization would be to say that the speed of the car was constant from
0:00
to 0:40, then from 0:40 to 1:20 and finally from 1:20 to 2:00. For instance, the total distance traveled in the first 40 minutes is approximately (2/3 h × 140 km/h) = 93.3 km. This would allow us to estimate the total distance traveled as
93.3 km + 100 km + 120 km = 313.3 km, which is an example of numerical
integration (see below) using a Riemann sum, because displacement is the
integral of velocity.
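The same Riemann-sum estimate can be written out directly; the two later speed readings (150 and 180 km/h) are inferred from the 100 km and 120 km distances quoted in the text.

```python
# Each speed reading is treated as constant over a 40-minute (2/3 h) interval.
speeds_kmh = [140.0, 150.0, 180.0]   # readings at 0:20, 1:00 and 1:40
dt_hours = 2.0 / 3.0

distance_km = sum(v * dt_hours for v in speeds_kmh)
print(distance_km)   # about 313.3 km, matching the worked example
```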
Ill-conditioned problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and
f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of
nearly
1000. Evaluating f(x) near x = 1 is an ill-conditioned problem.
Well-conditioned problem: By contrast, the function f(x) = √x is continuous and so evaluating it is well-conditioned, at least for x not close to zero.
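A two-line check of the ill-conditioned example above:

```python
f = lambda x: 1.0 / (x - 1.0)
# Moving the input from 1.1 to 1.001 (a change of about 0.1) moves the output
# from 10 to 1000, illustrating the ill-conditioning near x = 1.
print(f(1.1), f(1.001))
```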
method of linear programming. In practice, finite precision is used and the
result
is an approximation of the true solution (assuming stability).
In contrast to direct methods, iterative methods are not expected to terminate
in a
number of steps. Starting from an initial guess, iterative methods form
successive
approximations that converge to the exact solution only in the limit. A
convergence criterion is specified in order to decide when a sufficiently
accurate
solution has (hopefully) been found. Even using infinite precision arithmetic
these
methods would not reach the solution within a finite number of steps (in
general).
Iterative methods are more common than direct methods in numerical analysis. Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.
Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method.
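As a sketch of one of the iterative methods named above, here is Jacobi iteration for a small, diagonally dominant linear system (the matrix and iteration count are arbitrary illustrations):

```python
import numpy as np

def jacobi(A, b, iterations=50):
    """Jacobi iteration for Ax = b; A should be diagonally dominant to converge."""
    x = np.zeros_like(b, dtype=float)
    D = np.diag(A)                 # diagonal entries of A
    R = A - np.diagflat(D)         # off-diagonal part of A
    for _ in range(iterations):
        x = (b - R @ x) / D
    return x

A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(jacobi(A, b), np.linalg.solve(A, b))   # iterative vs direct solution
```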
Discretization
The study of errors forms an important part of numerical analysis. There are
several ways in which error can be introduced in the solution of the problem.
Round-off
[The worked comparison of the Babylonian method and "Method X" for computing a square root is omitted here.]
Observe that the Babylonian method converges fast regardless of the initial
guess,
whereas Method X converges extremely slowly with initial guess 1.4 and
diverges
for initial guess 1.42. Hence, the Babylonian method is numerically stable, while Method X is numerically unstable.
Areas of study
The field of numerical analysis is divided into different disciplines according to
the problem that is to be solved.
Interpolation solves the following problem: given the value of some unknown
function at a number of points, what value does that function have at some
other
point between the given points? A very simple method is to use linear
interpolation, which assumes that the unknown function is linear between
every
pair of successive points. This can be generalized to polynomial interpolation,
which is sometimes more accurate but suffers from Runge's phenomenon.
Other
interpolation methods use localized functions like splines or wavelets.
Extrapolation is very similar to interpolation, except that now we want to find
the value of the unknown function at a point which is outside the given points.
Regression is also similar, but it takes into account that the data is imprecise.
Given some points, and a measurement of the value of some function at these
points (with an error), we want to determine the unknown function. The least-squares method is one popular way to achieve this.
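A minimal regression sketch: fitting a straight line to noisy measurements by least squares (the data points are made up):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])   # noisy observations of an unknown function

# np.polyfit minimizes the sum of squared residuals of a degree-1 polynomial.
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)
```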
Areas of application
Scientific computing
List of numerical analysis topics
Gram-Schmidt process
Numerical differentiation
Symbolic-numeric computation
General
numerical-methods.com
numericalmathematics.com
Numerical Recipes
"Alternatives to Numerical Recipes"
Scientific computing FAQ
Numerical analysis DMOZ category
Numerical Computing Resources on the Internet - maintained by
Indiana
University Stat/Math Center
Numerical Methods Resources
Software
Many computer algebra systems such as Mathematica also benefit from
the
availability of arbitrary precision arithmetic which can provide more
accurate
results.
Also, any spreadsheet software can be used to solve simple problems relating
to
numerical analysis.
Computational intelligence
Artificial intelligence
Perspectives on CI
Thinking machines and artificial beings appear in Greek myths, such as Talos
of
Crete, the golden robots of Hephaestus and Pygmalion's Galatea. Human
likenesses believed to have intelligence were built in many ancient societies;
some of the earliest being the sacred statues worshipped in Egypt and Greece,
and
including the machines of Yan Shi, Hero of Alexandria, Al-Jazari or Wolfgang
von Kempelen. It was widely believed that artificial beings had been created by
Geber, Judah Loew and Paracelsus. Stories of these creatures and their fates
discuss many of the same hopes, fears and ethical concerns that are presented
by
artificial intelligence.
Mary Shelley's Frankenstein considers a key issue in the ethics of artificial intelligence: if a machine can be created that has intelligence, could it also
feel? If
it can feel, does it have the same rights as a human being? The idea also
appears
in modern science fiction: the film Artificial Intelligence: A.I. considers a
machine in the form of a small boy which has been given the ability to feel
human
emotions, including, tragically, the capacity to suffer. This issue, now known as
"robot rights", is currently being considered by, for example, California's
Another
Institute issue explored by both science fiction writers and futurists is the
impact
for the Future, although many critics believe that the discussion is premature.
of artificial intelligence on society. In fiction, AI has appeared as a servant
(R2D2
in Star Wars), a law enforcer (K.I.T.T. "Knight Rider"), a comrade (Lt.
Commander Data in Star Trek), a conqueror (The Matrix), a dictator (With
Folded
Hands), an exterminator (Terminator, Battlestar Galactica), an extension to
human
abilities (Ghost in the Shell) and the saviour of the human race (R. Daneel
Olivaw
in the Foundation Series). Academic sources have considered such consequences as: a decreased demand for human labor, the enhancement of human ability or experience, and a need for redefinition of human identity and basic values.
Several futurists argue that artificial intelligence will transcend the limits of progress and fundamentally transform humanity. Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology
with uncanny accuracy) to calculate that desktop computers will have the
same
processing power as human brains by the year 2029, and that by 2045
artificial
intelligence will reach a point where it is able to improve itself at a rate that far
exceeds anything conceivable in the past, a scenario that science fiction writer
Vernor Vinge named the "technological singularity". Edward Fredkin argues
that
"artificial intelligence is the next stage in evolution," an idea first proposed by
Samuel Butler's "Darwin among the Machines" (1863), and expanded upon by
George Dyson in his book of the same name in 1998. Several futurists and
science
fiction writers have predicted that human beings and machines will merge in
the
future into cyborgs that are more capable and powerful than either. This idea,
called transhumanism, which has roots in Aldous Huxley and Robert Ettinger, is
now associated with robot designer Hans Moravec, cyberneticist Kevin Warwick
and inventor Ray Kurzweil. Transhumanism has been illustrated in fiction as
well,
for example in the manga Ghost in the Shell and the science fiction series
Dune.
Pamela McCorduck writes that these scenarios are expressions of an ancient
human desire to, as she calls it, "forge the gods."
History of CI research
These predictions, and many like them, would not come true. They had failed
to
recognize the difficulty of some of the problems they faced. In 1974, in
response
to the criticism of England's Sir James Lighthill and ongoing pressure from
Congress to fund more productive projects, the U.S. and British governments
cut off all undirected, exploratory research in AI. This was the first AI winter.
In the early 80s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached more than a billion dollars, and governments around the world poured money back into the field. However, just a few years later, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting AI winter began.
In the 90s and early 21st century, AI achieved its greatest successes, albeit
somewhat behind the scenes. Artificial intelligence is used for logistics, data
mining, medical diagnosis and many other areas throughout the technology
industry. The success was due to several factors: the incredible power of
computers today (see Moore's law), a greater emphasis on solving specific
subproblems, the creation of new ties between AI and other fields working on
similar problems, and above all a new commitment by researchers to solid
mathematical methods and rigorous scientific standards.
Philosophy of AI
"A physical symbol system has the necessary and sufficient means of
general
intelligent action." This statement claims that the essence of intelligence is
symbol manipulation. Hubert Dreyfus argued that, on the contrary, human
expertise depends on unconscious instinct rather than conscious symbol
manipulation and on having a "feel" for the situation rather than explicit
symbolic knowledge.
Gödel's incompleteness theorem
A formal system (such as a computer program) can not prove all true
statements. Roger Penrose is among those who claim that Gödel's theorem
limits what machines can do.
"The appropriately programmed computer with the right inputs and outputs
would thereby have a mind in exactly the same sense human beings have
minds." Searle counters this assertion with his Chinese room argument,
which
asks us to look inside the computer and try to find where the "mind" might
be.
The artificial brain argument
The brain can be simulated. Hans Moravec, Ray Kurzweil and others have
argued that it is technologically feasible to copy the brain directly into
hardware and software, and that such a simulation will be essentially
identical
to the original.
CI research
In the 21st century, AI research has become highly specialized and technical. It
is
deeply divided into subfields that often fail to communicate with each other.
Subfields have grown up around particular institutions, the work of particular
researchers, particular problems (listed below), long standing differences of
opinion about how AI should be done (listed as "approaches" below) and the
application of widely differing tools (see tools of AI, below).
Problems of AI
The problem of simulating (or creating) intelligence has been broken down into
a
number of specific sub-problems. These consist of particular traits or
capabilities
that researchers would like an intelligent system to display. The traits
described below have received the most attention.
Deduction, reasoning, problem solving
Early AI researchers developed algorithms that imitated the step-by-step
reasoning that human beings use when they solve puzzles, play board games
or
make logical deductions. By the late 80s and 90s, AI research had also
developed
highly successful methods for dealing with uncertain or incomplete
information,
employing concepts from probability and economics.
For difficult problems, most of these algorithms can require enormous computational resources — most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem solving algorithms is a high priority for AI research.
Human beings solve most of their problems using fast, intuitive judgments
rather
than the conscious, step-by-step deduction that early AI research was able to
model. AI has made some progress at imitating this kind of "sub-symbolic"
problem solving: embodied approaches emphasize the importance of
sensorimotor
skills to higher reasoning; neural net research attempts to simulate the
structures
inside human and animal brains that give rise to this skill.
Knowledge representation
knowledge about the world. Among the things that AI needs to represent are:
objects, properties, categories and relations between objects; situations,
events,
states and time; causes and effects; knowledge about knowledge (what we
know
about what other people know); and many other, less well researched
domains. A
complete representation of "what exists" is an ontology (borrowing a word
from traditional philosophy), of which the most general are called upper ontologies.
Among the most difficult problems in knowledge representation are:
Default reasoning and the qualification problem
Many of the things people know take the form of "working assumptions."
For
example, if a bird comes up in conversation, people typically picture an
animal that is fist sized, sings, and flies. None of these things are true about
all
birds. John McCarthy identified this problem in 1969 as the qualification
problem: for any commonsense rule that AI researchers care to represent,
there tend to be a huge number of exceptions. Almost nothing is simply
true or
false in the way that abstract logic requires. AI research has explored a
number of solutions to this problem.
The breadth of commonsense knowledge
The number of atomic facts that the average person knows is astronomical.
Research projects that attempt to build a complete knowledge base of
commonsense knowledge (e.g., Cyc) require enormous amounts of
laborious
ontological engineering — they must be built, by hand, one complicated
concept at a time.
The subsymbolic form of some commonsense knowledge
Planning
Intelligent agents must be able to set goals and achieve them. They need a
way to
visualize the future (they must have a representation of the state of the world
and
be able to make predictions about how their actions will change it) and be able
to
make choices that maximize the utility (or "value") of the available choices.
In some planning problems, the agent can assume that it is the only thing
acting
on the world and it can be certain what the consequences of its actions may
be.
However, if this is not true, it must periodically check if the world matches its
predictions and it must change its plan as this becomes necessary, requiring
the
agent to reason under uncertainty.
Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.
Learning
Natural language processing gives machines the ability to read and understand
the
languages that the human beings speak. Many researchers hope that a
sufficiently
powerful natural language processing system would be able to acquire
knowledge
on its own, by reading the existing text available over the internet. Some
straightforward applications of natural language processing include information
retrieval (or text mining) and machine translation.
Motion and manipulation
The field of robotics is closely related to AI. Intelligence is required for robots to
be able to handle such tasks as object manipulation and navigation, with sub-
problems of localization (knowing where you are), mapping (learning what is
around you) and motion planning (figuring out how to get there).
Perception
Machine perception is the ability to use input from sensors (such as cameras,
microphones, sonar and others more exotic) to deduce aspects of the world.
Computer vision is the ability to analyze visual input. A few selected
subproblems
are speech recognition, facial recognition and object recognition.
Social intelligence
Emotion and social skills play two roles for an intelligent agent:
Creativity
Most researchers hope that their work will eventually be incorporated into a
machine with general intelligence (known as strong AI), combining all the skills
above and exceeding human abilities at most or all of them. A few believe that
anthropomorphic features like artificial consciousness or an artificial brain may
be
required for such a project.
Many of the problems above are considered AI-complete: to solve one problem,
you must solve them all. For example, even a straightforward, specific task like
machine translation requires that the machine follow the author's argument
(reason), know what it's talking about (knowledge), and faithfully reproduce
the
author's intention (social intelligence). Machine translation, therefore, is
believed
to be AI-complete: it may require strong AI to be done as well as humans can
do
it.
Approaches to CI
Economist Herbert Simon and Allen Newell studied human problem solving
skills and attempted to formalize them, and their work laid the foundations
of
the field of artificial intelligence, as well as cognitive science, operations
research and management science. Their research team performed
psychological experiments to demonstrate the similarities between human
problem solving and the programs (such as their "General Problem Solver")
they were developing. This tradition, centered at Carnegie Mellon University
would eventually culminate in the development of the Soar architecture in
the
middle 80s.
Logical AI
Unlike Newell and Simon, John McCarthy felt that machines did not need to
simulate human thought, but should instead try to find the essence of
abstract
reasoning and problem solving, regardless of whether people used the
same
algorithms. His laboratory at Stanford (SAIL) focused on using formal logic
to solve a wide variety of problems, including knowledge representation,
planning and learning. Logic was also the focus of the work at the University of
Edinburgh and elsewhere in Europe which led to the development of the
programming language Prolog and the science of logic programming.
"Scruffy" symbolic AI
Researchers at MIT (such as Marvin Minsky and Seymour Papert) found that
solving difficult problems in vision and natural language processing
required
ad-hoc solutions – they argued that there was no simple and general
principle
(like logic) that would capture all the aspects of intelligent behavior. Roger
Schank described their "anti-logic" approaches as "scruffy" (as opposed to
the
"neat" paradigms at CMU and Stanford). Commonsense knowledge bases
(such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must
be built by hand, one complicated concept at a time.
Knowledge-based AI
When computers with large memories became available around 1970,
researchers from all three traditions began to build knowledge into AI
applications. This "knowledge revolution" led to the development and
deployment of expert systems (introduced by Edward Feigenbaum), the
first
truly successful form of AI software. The knowledge revolution was also
driven by the realization that enormous amounts of knowledge would be
required by many simple AI applications.
Sub-symbolic AI
Computational Intelligence
Tools of CI research
decision theory, decision analysis, information value theory. These tools
include
models such as Markov decision processes, dynamic decision networks, game
theory and mechanism design.
Classifiers and statistical learning methods
The simplest AI applications can be divided into two types: classifiers ("if shiny
then diamond") and controllers ("if shiny then pick up"). Controllers do
however
also classify conditions before inferring actions, and therefore classification
forms
a central part of many AI systems.
Classifiers are functions that use pattern matching to determine a closest
match.
They can be tuned according to examples, making them very attractive for use
in
AI. These examples are known as observations or patterns. In supervised
learning,
each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience. A classifier can be trained in various ways; there are many statistical and machine learning approaches.
A wide range of classifiers are available, each with its strengths and
weaknesses.
Classifier performance depends greatly on the characteristics of the data to be
classified. There is no single classifier that works best on all given problems;
this
is also referred to as the "no free lunch" theorem. Various empirical tests have
been performed to compare classifier performance and to find the
characteristics
of data that determine classifier performance. Determining a suitable classifier for a given problem is, however, still more an art than a science.
The most widely used classifiers are the neural network, kernel methods such as the support vector machine, the k-nearest neighbour algorithm, the Gaussian mixture model, the naive Bayes classifier, and the decision tree. The performance of these classifiers has been compared over a wide range of classification tasks in order to find data characteristics that determine classifier performance.
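A bare-bones sketch of one classifier from the list above, the k-nearest-neighbour rule, on a hypothetical two-class data set:

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by majority vote among its k nearest training patterns."""
    distances = np.linalg.norm(X_train - x_new, axis=1)
    nearest_labels = y_train[np.argsort(distances)[:k]]
    values, counts = np.unique(nearest_labels, return_counts=True)
    return values[np.argmax(counts)]

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.8, 0.9])))   # expected: class 1
```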
Neural networks
Common network architectures which have been developed include the
feedforward neural network, the radial basis network, the Kohonen self-
organizing map and various recurrent neural networks. Neural networks are
applied to the problem of learning, using such techniques as Hebbian learning,
competitive learning and the relatively new architectures of Hierarchical
Temporal Memory and Deep Belief Networks.
Control theory
Specialized languages
test. This procedure allows almost all the major problems of artificial
intelligence
to be tested. However, it is a very difficult challenge and at present all agents
fail.
Artificial intelligence can also be evaluated on specific problems such as small
problems in chemistry, hand-writing recognition and game-playing. Such tests
have been termed subject matter expert Turing tests. Smaller problems
provide
more achievable goals and there are an ever-increasing number of positive
results.
The broad classes of outcome for an AI test are:
Areas of application
Simulated annealing
Machine learning
Artificial immune systems
Expert systems
Hybrid intelligent systems
Hybrid logic
Simulated reality
Soft computing
Bayesian networks
Chaos theory
Ant colony optimization
Particle swarm optimisation
Cognitive robotics
Developmental robotics
Evolutionary robotics
Intelligent agents
Knowledge-Based Engineering
Type-2 fuzzy sets and systems
Remotely related topics:
Software
Organizations
Computer simulation
Computer simulation was developed hand-in-hand with the rapid growth of the
computer, following its first large-scale deployment during the Manhattan
Project
in World War II to model the process of nuclear detonation. It was a simulation
of
12 hard spheres using a Monte Carlo algorithm. Computer simulation is often
used as an adjunct to, or substitution for, modeling systems for which simple
closed form analytic solutions are not possible. There are many different types
of
computer simulation; the common feature they all share is the attempt to
generate
a sample of representative scenarios for a model in which a complete
enumeration
of all possible states of the model would be prohibitive or impossible. Computer models were initially used as a supplement for other arguments, but their use later became rather widespread.
Data preparation
The data input/output for the simulation can be either through formatted text files or a pre- and postprocessor.
Types
Reservoir simulation for the petroleum engineering to model the
subsurface reservoir
Process Engineering Simulation tools.
Robot simulators for the design of robots and robot control algorithms
Traffic engineering to plan or redesign parts of the street network from
single junctions over cities to a national highway network, see for
example
VISSIM.
modeling car crashes to test safety mechanisms in new vehicle models
The reliability and the trust people put in computer simulations depends on the
validity of the simulation model, therefore verification and validation are of
crucial importance in the development of computer simulations. Another
important aspect of computer simulations is that of reproducibility of the
results,
meaning that a simulation model should not provide a different answer for
each
execution. Although this might seem obvious, this is a special point of attention
in
stochastic simulations, where random numbers should actually be semi-
random
numbers. An exception to reproducibility are human-in-the-loop simulations such as flight simulations and computer games. Here a human is part of the simulation and thus influences the outcome in a way that is hard if not impossible to reproduce exactly.
Vehicle manufacturers make use of computer simulation to test safety features in new designs. By building a copy of a car in a physics simulation environment, they can save the hundreds of thousands of dollars that would otherwise be required to build a unique prototype and test it. Engineers can step through the simulation milliseconds at a time to determine the exact stresses being put upon each section of the prototype.
Computer graphics can be used to display the results of a computer simulation.
Animations can be used to experience a simulation in real-time e.g. in training
simulations. In some cases animations may also be useful in faster than real-
time
or even slower than real-time modes. For example, faster than real-time
animations can be useful in visualizing the buildup of queues in the simulation
of
humans evacuating a building. Furthermore, simulation results are often
aggregated into static images using various ways of scientific visualization.
In debugging, simulating a program execution under test (rather than
executing
natively) can detect far more errors than the hardware itself can detect and, at
the
same time, log useful debugging information such as instruction trace,
memory
alterations and instruction counts. This technique can also detect buffer
overflow
and similar "hard to detect" errors as well as produce performance information
and tuning data.
Pitfalls
Organizations
The Computational Modelling Group at Cambridge University's
Department of Chemical Engineering
Liophant Simulation
United Simulation Team - Genoa University
High Performance Systems Group at the University of Warwick, UK
Education
Examples
Financial Risk
productivity of knowledge workers, decrease cost effectiveness, profitability,
service, quality, reputation, brand value, and earnings quality. Intangible risk
management allows risk management to create immediate value from the
identification and reduction of risks that reduce productivity.
Risk management also faces difficulties allocating resources. This is the idea of
opportunity cost. Resources spent on risk management could have been spent
on
more profitable activities. Again, ideal risk management minimizes spending
while maximizing the reduction of the negative effects of risks.
Principles of risk management
Identification
After establishing the context, the next step in the process of managing risk is
to
identify potential risks. Risks are about events that, when triggered, cause
problems. Hence, risk identification can start with the source of problems, or
with
the problem itself.
Source analysis Risk sources may be internal or external to the system
that is the target of risk management.
When either source or problem is known, the events that a source may trigger
or
the events that can lead to a problem can be investigated. For example:
stakeholders withdrawing during a project may endanger funding of the
project;
privacy information may be stolen by employees even within a closed network;
lightning striking a Boeing 747 during takeoff may make all people onboard
immediate casualties.
The chosen method of identifying risks may depend on culture, industry
practice
and compliance. The identification methods are formed by templates or the
development of templates for identifying source, problem or event. Common
risk
identification methods are:
Objectives-based risk identification Organizations and project teams
have objectives. Any event that may endanger achieving an objective
partly or completely is identified as risk.
Scenario-based risk identification In scenario analysis different
scenarios are created. The scenarios may be the alternative ways to
achieve an objective, or an analysis of the interaction of forces in, for
example, a market or battle. Any event that triggers an undesired
scenario
alternative is identified as risk - see Futures Studies for methodology
used
by Futurists.
Taxonomy-based risk identification The taxonomy in taxonomy-based
risk identification is a breakdown of possible risk sources. Based on the
taxonomy and knowledge of best practices, a questionnaire is compiled.
The answers to the questions reveal risks. Taxonomy-based risk
identification in software industry can be found in CMU/SEI-93-TR-6.
Common-risk checking In several industries lists with known risks are
available. Each risk in the list can be checked for application to a
particular situation. An example of known risks in the software industry
is
the Common Vulnerabilities and Exposures list found at
http://cve.mitre.org.
Risk charting (risk mapping) This method combines the above
approaches by listing Resources at risk, Threats to those resources
Modifying Factors which may increase or decrease the risk and
Consequences it is wished to avoid. Creating a matrix under these
headings enables a variety of approaches. One can begin with resources
and consider the threats they are exposed to and the consequences of
each.
Alternatively one can start with the threats and examine which
resources
they would affect, or one can begin with the consequences and
determine
which combination of threats and resources would be involved to bring
them about.
Assessment
Once risks have been identified, they must then be assessed as to their
potential
severity of loss and to the probability of occurrence. These quantities can be
either
simple to measure, in the case of the value of a lost building, or impossible to
know for sure in the case of the probability of an unlikely event occurring.
Therefore, in the assessment process it is critical to make the best educated
guesses possible in order to properly prioritize the implementation of the risk
management plan.
The fundamental difficulty in risk assessment is determining the rate of
occurrence since statistical information is not available on all kinds of past
incidents. Furthermore, evaluating the severity of the consequences (impact) is
often quite difficult for immaterial assets. Asset valuation is another question
that
needs to be addressed. Thus, best educated opinions and available statistics
are the
primary sources of information. Nevertheless, risk assessment should produce
such information for the management of the organization that the primary
risks
are easy to understand and that the risk management decisions may be
prioritized.
Thus, there have been several theories and attempts to quantify risks.
Numerous
different risk formulae exist, but perhaps the most widely accepted formula for
risk quantification is:
Rate of occurrence multiplied by the impact of the event equals risk
Later research has shown that the financial benefits of risk management are
less
dependent on the formula used but are more dependent on the frequency and
how
risk assessment is performed.
In business it is imperative to be able to present the findings of risk assessments in financial terms. Robert Courtney Jr. (IBM, 1970) proposed a formula for presenting risks in financial terms. The Courtney formula was accepted as the official risk analysis method for the US governmental agencies. The formula proposes calculation of ALE (annualised loss expectancy) and compares the expected loss value to the security control implementation costs (cost-benefit analysis).
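A hedged illustration of the annualised-loss-expectancy comparison described above; all figures are invented.

```python
single_loss_expectancy = 250_000.0   # assumed loss per incident, in dollars
annual_rate_of_occurrence = 0.1      # assumed incidents per year

ale = single_loss_expectancy * annual_rate_of_occurrence
control_cost_per_year = 20_000.0     # assumed cost of the security control

print("ALE:", ale)
print("control economically justified:", control_cost_per_year < ale)
```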
Once risks have been identified and assessed, all techniques to manage the
risk
fall into one or more of these four major categories:
Avoidance (eliminate)
Reduction (mitigate)
Transfer (outsource or insure)
Retention (accept and budget)
Ideal use of these strategies may not be possible. Some of them may involve
trade-offs that are not acceptable to the organization or person making the risk
management decisions. Another source, from the US Department of Defense,
Defense Acquisition University, calls these categories ACAT, for Avoid, Control,
Accept, or Transfer. This use of the ACAT acronym is reminiscent of another
ACAT (for Acquisition Category) used in US Defense industry procurements, in
which Risk Management figures prominently in decision making and planning.
Risk avoidance
Includes not performing an activity that could carry risk. An example would be
not buying a property or business in order to not take on the liability that
comes
with it. Another would be not flying in order to not take the risk that the
airplane
were to be hijacked. Avoidance may seem the answer to all risks, but avoiding
risks also means losing out on the potential gain that accepting (retaining) the
risk
may have allowed. Not entering a business to avoid the risk of loss also avoids
the
possibility of earning profits.
Risk reduction
Involves methods that reduce the severity of the loss or the likelihood of the
loss
from occurring. For example, sprinklers are designed to put out a fire to reduce
the risk of loss by fire. This method may cause a greater loss by water damage
and therefore may not be suitable. Halon fire suppression systems may
mitigate
that risk, but the cost may be prohibitive as a strategy. Risk management may
also
take the form of a set policy, such as only allow the use of secured IM
platforms
(like Brosix) and not allowing personal IM platforms (like AIM) to be used in order to reduce the risk of data leaks.
Modern software development methodologies reduce risk by developing and delivering software incrementally. Early methodologies suffered from the fact that they only delivered software in the final phase of development; any problems encountered in earlier phases meant costly rework and often jeopardized the whole project. By developing in iterations, software projects can limit effort wasted to a single iteration.
Outsourcing could be an example of risk reduction if the outsourcer can demonstrate higher capability at managing or reducing risks. In this case companies outsource only some of their departmental needs. For example, a company may outsource only its software development, the manufacturing of hard goods, or customer support needs to another company, while handling the business management itself. This way, the company can concentrate more on business development without having to worry as much about the manufacturing process, managing the development team, or finding a physical location for a call center.
Risk retention
Involves accepting the loss when it occurs. True self insurance falls in this
category. Risk retention is a viable strategy for small risks where the cost of
insuring against the risk would be greater over time than the total losses
sustained.
All risks that are not avoided or transferred are retained by default. This
includes
risks that are so large or catastrophic that they either cannot be insured
against or
the premiums would be infeasible. War is an example since most property and
risks are not insured against war, so the loss attributed by war is retained by
the
insured. Also any amounts of potential loss (risk) over the amount insured is
retained risk. This may also be acceptable if the chance of a very large loss is
small or if the cost to insure for greater coverage amounts is so great it would hinder the goals of the organization too much.
Risk transfer
In the terminology of practitioners and scholars alike, the purchase of an
insurance contract is often described as a "transfer of risk." However,
technically
speaking, the buyer of the contract generally retains legal responsibility for the
losses "transferred", meaning that insurance may be described more
accurately as
a post-event compensatory mechanism. For example, a personal injuries
insurance
policy does not transfer the risk of a car accident to the insurance company.
The
risk still lies with the policy holder namely the person who has been in the
accident. The insurance policy simply provides that if an accident (the event)
occurs involving the policy holder then some compensation may be payable to
the
policy holder that is commensurate to the suffering/damage.
Some ways of managing risk fall into multiple categories. Risk retention pools are technically retaining the risk for the group, but spreading it over the whole group involves transfer among individual members of the group. This is different from traditional insurance, in that no premium is exchanged between members of the group upfront, but instead losses are assessed to all members of the group.
Create a risk-management plan
Follow all of the planned methods for mitigating the effect of the risks.
Purchase
insurance policies for the risks that have been decided to be transferred to an
insurer, avoid all risks that can be avoided without sacrificing the entity's
goals,
reduce others, and retain the rest.
Review and evaluation of the plan
Initial risk management plans will never be perfect. Practice, experience, and
actual loss results will necessitate changes in the plan and contribute
information
to allow possible different decisions to be made in dealing with the risks being
faced.
Risk analysis results and management plans should be updated periodically.
There are two primary reasons for this:
Limitations
If risks are improperly assessed and prioritized, time can be wasted in dealing
with risk of losses that are not likely to occur. Spending too much time
assessing
and managing unlikely risks can divert resources that could be used more
profitably. Unlikely events do occur but if the risk is unlikely enough to occur it
may be better to simply retain the risk and deal with the result if the loss does
in
fact occur. Qualitative risk assessment is subjective and lacks consistency. The
primary justification for a formal risk assessment process is legal and
bureaucratic.
Prioritizing too highly the risk management processes could keep an
organization
from ever completing a project or even getting started. This is especially true if
other work is suspended until the risk management process is considered
complete.
It is also important to keep in mind the distinction between risk and
uncertainty.
Risk can be measured by impacts x probability.
Areas of risk management
risk, interest rate risk or asset liability management, market risk, and
operational
risk.
In the more general case, every probable risk can have a pre-formulated plan
to
deal with its possible consequences (to ensure contingency if the risk becomes
a
liability).
From the information above and the average cost per employee over time, or cost accrual ratio, a project manager can estimate the following (a brief worked sketch appears after this list):
the cost associated with the risk if it arises, estimated by multiplying
employee costs per unit time by the estimated time lost (cost impact, C
where C = cost accrual ratio * S).
the probable increase in time associated with a risk (schedule variance
due
to risk, Rs where Rs = P * S):
o Sorting on this value puts the highest risks to the schedule first.
This is intended to cause the greatest risks to the project to be
attempted first so that risk is minimized as quickly as possible.
o This is slightly misleading as schedule variances with a large P and
small S and vice versa are not equivalent. (The risk of the RMS
Titanic sinking vs. the passengers' meals being served at slightly
the wrong time).
the probable increase in cost associated with a risk (cost variance due to
risk, Rc where Rc = P*C = P*CAR*S = P*S*CAR)
o sorting on this value puts the highest risks to the budget first.
o see concerns about schedule variance as this is a function of it, as
illustrated in the equation above.
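A brief worked sketch of the quantities above, using a hypothetical two-risk register (probabilities, schedule impacts and the cost accrual ratio are invented):

```python
# Hypothetical risk register: probability P, schedule impact S (days),
# cost accrual ratio CAR (cost per day of schedule slip).
risks = [
    {"name": "key developer leaves", "P": 0.2, "S": 30.0, "CAR": 800.0},
    {"name": "vendor delivers late",  "P": 0.5, "S": 10.0, "CAR": 800.0},
]

for r in risks:
    r["C"] = r["CAR"] * r["S"]   # cost impact if the risk arises
    r["Rs"] = r["P"] * r["S"]    # schedule variance due to risk
    r["Rc"] = r["P"] * r["C"]    # cost variance due to risk

# Highest schedule risk first, as the text suggests.
for r in sorted(risks, key=lambda item: item["Rs"], reverse=True):
    print(r["name"], r["Rs"], r["Rc"])
```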
Planning how risk will be managed in the particular project. Plan should
include risk management tasks, responsibilities, activities and budget.
Assigning a risk officer - a team member other than a project manager
who is responsible for foreseeing potential project problems. Typical
characteristic of risk officer is a healthy skepticism.
Maintaining live project risk database. Each risk should have the
following attributes: opening date, title, short description, probability
and
importance. Optionally a risk may have an assigned person responsible
for
its resolution and a date by which the risk must be resolved.
Creating anonymous risk reporting channel. Each team member should
have possibility to report risk that he foresees in the project.
Preparing mitigation plans for risks that are chosen to be mitigated. The
purpose of the mitigation plan is to describe how this particular risk will
be handled – what, when, by who and how will it be done to avoid it or
minimize consequences if it becomes a liability.
Summarizing planned and faced risks, effectiveness of mitigation
activities, and effort spent for the risk management.
communicating about risks and crises. Risk Communication can also be linked
to
Crisis communication.
Benefits and Barriers of Risk Communication
Areas of application
Risk analysis (engineering)
Conclusion
This topic lays the mathematical foundations for careers in a number of areas in the financial world. In particular, it is suitable for novice quantitative analysts and developers who are working in quantitative finance. This topic is also suitable for IT personnel who wish to develop their mathematical skills.
Computational finance or financial engineering is a cross-disciplinary field which relies on mathematical finance, numerical methods, computational intelligence and computer simulations to make trading, hedging and investment decisions, as well as facilitating the risk management of those decisions. Utilizing various methods, practitioners of computational finance aim to precisely determine the financial risk that certain financial instruments create.
Mathematics
In this part of the topic we discuss a number of concepts and methods that are
concerned with variables, functions and transformations defined on finite or
infinite discrete sets. In particular, linear algebra will be important because of
its
role in numerical analysis in general and quantitative finance in particular. We
also introduce probability theory and statistics as well as a number of sections
on
numerical analysis. The latter group is of particular importance when we
approximate differential equations using the finite difference method.
Numerical Methods
The goal of this part of the topic is to develop robust, efficient and accurate
numerical schemes that allow us to produce algorithms in applications. These
methods lie at the heart of computational finance and a good understanding of
how to use them is vital if you wish to create applications. In general, the
methods
approximate equations and models defined in a continuous, infinite-
dimensional
space by models that are defined on a finite-dimensional space.
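As a tiny example of the finite-difference idea, a central-difference quotient approximates a derivative on a discrete grid; here it is checked against the exact derivative of sin (the step size is an arbitrary assumption):

```python
import math

def central_difference(f, x, h=1e-5):
    """Second-order accurate approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

print(central_difference(math.sin, 1.0), math.cos(1.0))
```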