
Guidelines for Financial Forecasting with Neural Networks

JingTao YAO
Dept of Information Systems
Massey University
Private Bag 11222
Palmerston North, New Zealand
J.T.Yao@massey.ac.nz

Chew Lim TAN
Dept of Computer Science
National University of Singapore
1 Science Drive 2
Singapore 117543, Singapore
tancl@comp.nus.edu.sg

Abstract

Neural networks are good at classification, forecasting and recognition. They are also good candidates for financial forecasting tools. Forecasting is often used in the decision making process. Neural network training is an art, and trading based on neural network outputs, or a trading strategy, is also an art. We discuss a seven-step neural network forecasting model building approach in this article. Pre- and post-processing of data, data sampling, training criteria and model recommendation are also covered.

1. Forecasting with Neural Networks

Forecasting is a process that produces a set of outputs from a given set of variables. The variables are normally historical data. Basically, forecasting assumes that future occurrences are based, at least in part, on presently observable or past events. It assumes that some aspects of past patterns will continue into the future. Past relationships can then be discovered through study and observation. The basic idea of forecasting is to find an approximation of the mapping between the input and output data in order to discover the implicit rules governing the observed movements. For instance, the forecasting of stock prices can be described in this way. Assume that u_i represents today's price and v_i represents the price after ten days. If the prediction of a stock price after ten days can be obtained using today's stock price, then there should be a functional mapping from u_i to v_i, where v_i = Γ_i(u_i). Using all (u_i, v_i) pairs of historical data, a general function Γ() which consists of the Γ_i() can be obtained, that is, v = Γ(u). More generally, a u which carries more information than today's price alone can be used in the function Γ(). As NNs are universal approximators, we can find a NN that simulates this Γ() function. The trained network is then used to predict future movements.

NN based financial forecasting has been explored for about a decade. Many research papers have been published in various international journals and conference proceedings. Some companies and institutions are also claiming or marketing so-called advanced forecasting tools or models. Some research results on financial forecasting can be found in the references, for instance, stock trading systems [4], stock forecasting [6, 22], foreign exchange rate forecasting [15, 24], option prices [25], and advertising and sales volumes [13]. However, Callen et al. [3] claim that NN models are not necessarily superior to linear time series models even when the data are financial, seasonal and non-linear.

2. Towards a Better Robust Financial Forecasting Model

In working towards a more robust financial forecasting model, the following issues are worth examining.

First, instead of emphasizing forecasting accuracy only, other financial criteria should be considered. Current researchers tend to use goodness of fit or similar criteria to judge or train their models in the financial domain. In terms of mathematical calculation, this approach is correct in theory. As we understand, however, a perfect forecast is impossible in reality; no model can achieve such an ideal goal. Under this constraint, seeking a perfect forecast is not our aim. We can only try to optimize our imperfect forecasts and use other yardsticks to obtain the most realistic measure.

Second, there should be adequate organization and processing of forecasting data. Preprocessing and proper sampling of input data can have an impact on forecasting performance. Choosing indicators as inputs through sensitivity analysis can help to eliminate redundant inputs. Furthermore, NN forecasting results should be used wisely and effectively. For example, as the forecast is not perfect, should we compare the NN output with the previous forecast or with the real data, especially when price levels are used as the forecasting targets?
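As a concrete sketch of the (u_i, v_i) pair construction described in Section 1 (the ten-day horizon follows the example in the text, while the price values and function name here are hypothetical), the training pairs for the mapping v_i = Γ_i(u_i) can be built as follows:

```python
def make_pairs(prices, horizon=10):
    """Build (u_i, v_i) training pairs: today's price u_i and the
    price v_i `horizon` trading days later, as in Section 1."""
    return [(prices[i], prices[i + horizon])
            for i in range(len(prices) - horizon)]

# Hypothetical daily closing prices (15 trading days).
closes = [float(100 + i) for i in range(15)]
pairs = make_pairs(closes, horizon=10)
print(pairs[0])  # (u_0, v_0) -> (100.0, 110.0)
```

A NN trained on such pairs approximates the general function Γ(); richer inputs u (more than today's price) simply widen each left-hand element.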
Third, a trading system should be used to decide on the best tool to use. NN is not the only tool that can be used for financial forecasting, and we cannot claim that it is the best forecasting tool. In fact, it is still not known which kinds of time series are the most suitable for NN applications. Conducting post-forecasting analysis allows us to find out the suitability of models and series. We may then conclude that a certain kind of model should be used for a certain kind of time series. Training or building NN models is a trial and error procedure. Some researchers are not willing to run further tests on their data set [14]. If there were a system that could help us formalize these tedious exploratory procedures, it would certainly be of great value to financial forecasting.

Instead of just presenting one successful experiment, a possibility or confidence level can be attached to the outputs. Data are partitioned into several sets to find out the particular knowledge of the time series. As stated by David Wolpert and William Macready in their No-Free-Lunch theorems [27], averaged over all problems, all search algorithms perform equally. By experimenting on just a single data set, a NN model which outperforms other models can be found. However, for another data set, a model which outperforms the NN model can also be found, according to the No-Free-Lunch theorems. To avoid such a case of one model outperforming others, we partition the data set into several sub data sets. The recommended NN models are those that outperform other models for all sub time horizons. In other words, only those models that incorporate enough local knowledge can be used for future forecasting.

It is important and necessary to emphasize these three issues here. Different criteria exist for academics and for industry. In academia, people sometimes seek accuracy approaching 100%, while in industry a guaranteed 60% accuracy is typically aimed for. In addition, profit is the eventual goal of practitioners, so a profit-oriented forecasting model may better fit their needs.

Cohen [5] surveyed 150 papers in the proceedings of the Eighth National Conference on Artificial Intelligence. He discovered that only 42% of the papers reported that a program had run on more than one example; just 30% demonstrated performance in some way; a mere 21% framed hypotheses or made predictions. He concluded that the methodologies used were incomplete with respect to the goals of designing and analyzing AI systems.

In a very large study of over 400 research articles in computer science, Tichy et al. [20] showed that over 40% of the articles about new designs and models completely lacked experimental data. In a recent IEEE Computer article, Tichy also points out 16 excuses used by computer scientists to avoid experimentation [21]. What he says is true and not a joke. Prechelt [14] showed that the situation is no better in the NN literature: out of 190 papers published in well-known journals dedicated to NNs, 29% did not employ even a single realistic or real learning problem, and only 8% of the articles presented results for more than one problem using real world data.

To build a NN forecasting model we need sufficient experiments. Testing on only one market or just one particular time period means nothing; it will not lead to a robust model based on manual, trial-and-error, or ad hoc experiments. A more robust model is needed, not one valid only in one market or for one time period. Because of the lack of industrial models, and because failures in academic research are not published, a single person or even a group of researchers will not gain enough information or experience to build a good forecasting model. It is obvious that an automated system dealing with NN model building is necessary.

3. Steps of NN Forecasting: The Art of NN Training

As NN training is an art, many researchers and practitioners have worked in the field towards successful prediction and classification. For instance, William Remus and Marcus O'Connor [16] suggest the following principles of critical importance for NN prediction and classification in their chapter of Principles of Forecasting: A Handbook for Researchers and Practitioners:
• Clean the data prior to estimating the NN model.
• Scale and deseasonalize data prior to estimating the model.
• Use appropriate methods to choose the right starting point.
• Use specialized methods to avoid local optima.
• Expand the network until there is no significant improvement in fit.
• Use pruning techniques when estimating NNs and use holdout samples when evaluating NNs.
• Take care to obtain software that has built-in features to avoid NN disadvantages.
• Build plausible NNs to gain model acceptance by reducing their size.
• Use more approaches to ensure that the NN model is valid.

Based on the authors' experience and the shared experience of other researchers and practitioners, we propose a seven-step approach to NN financial forecasting model building. The seven steps are the basic components of the automated system and are normally involved in the manual approach as well. Each step deals with an important issue: data preprocessing, input and output selection, sensitivity analysis, data organization, model construction, post analysis and model recommendation.

Step 1. Data Preprocessing
A general format of data is prepared. Depending on the requirement, longer term data, e.g. weekly or monthly data, may also be calculated from more frequently sampled time series. We may think that it makes sense to sample data as frequently as possible for experiments. However, researchers have found that increasing the observation frequency does not always help to improve the accuracy of forecasting [28]. Inspection of the data to find outliers is also important, as outliers make it difficult for NNs and other forecasting models to model the true underlying function. Although NNs have been shown to be universal approximators, it has been found that NNs have difficulty modeling seasonal patterns in time series [11]. When a time series contains significant seasonality, the data need to be deseasonalized.

Before the data are analyzed, basic preprocessing is needed. Where days with no trading exist, the missing data need to be filled in manually. Heinkel and Kraus [9] stated that there are three possible ways of dealing with days with no trading: 1) ignore the days with no trading and use data for trading days only; 2) assign a zero value for the days with no trading; 3) build a linear model which can be used to estimate the data value for the days with no trading. In most cases, the horizontal axis is marked by the market day instead of (or in addition to) the calendar date. Suppose we are forecasting the price at the next time point. If today is a Monday, the next time point is tomorrow, Tuesday. If today is a Friday, however, the next time point is not the next calendar day but the following Monday.

Most of the time, the weekly closing price refers to each Friday's closing price. In the event of Friday being a holiday, the most recently available closing price for the stock is used. Some researchers also pick another fixed day of the week for weekly prices. Normalization is also conducted in this phase. The purpose of normalization is to modify the output levels to a reasonable value. Without such a transformation, the value of the output may be too large for the network to handle, especially when several layers of nodes in the NN are involved. A transformation can occur at the output of each node, or it can be performed at the final output of the network. The original values Y, along with the maximum and minimum values in the input file, are entered into the equation below to scale the data to the range of [-1, +1]:

Nm = (2 * Y - (Max + Min)) / (Max - Min)

Step 2. Selection of Input & Output Variables

Select inputs from the available information. Inputs and targets need to be carefully selected. Traditionally, only changes are processed to predict targets, as returns or changes are the main concerns of fund managers. Three types of changes have been used in previous research: x_t - x_{t-1}, log x_t - log x_{t-1}, and (x_t - x_{t-1}) / x_{t-1}. In addition, pure time series forecasting techniques require a stationary time series, while most raw financial time series are not stationary. Here stationarity refers to a stochastic process whose mean, variances and covariances (first and second order moments) do not change with time. x_t - x_{t-1} is thought to be unit dependent, which makes comparisons between series difficult, so it is less used in the literature. However, now that NNs have been introduced, we can use the original time series as our forecasting targets and let the networks determine the units or patterns from the time series. In fact, the traditional returns are not the exact returns in real life: inflation, at least, is not taken into account. These returns are called nominal returns, ignoring inflation, as inflation cannot be calculated sensibly from daily series. After the aim has been fixed, the NN model will find the relationship between the inputs and the fixed targets. The relationship is discovered from the data rather than according to human expectation.

In addition to using pure time series, the inputs to NNs can also include technical indicators such as moving averages, momentum, RSI, etc. These indicators are in popular use amongst chartists and floor traders. Certain indicators, such as moving averages, are among the oldest technical indicators in existence, and they happen to be among the most useful. In practice, a trader may focus on only one indicator and trade based on certain basic rules. However, he needs other indicators to confirm his findings. For instance, if a short term, say 10 day, moving average crosses over a long term, say 30 day, moving average, and both moving averages are in an upward direction, it is time to go long. If the 10 day moving average crosses below the 30 day moving average and both moving averages are directed downward, most traders will consider this a valid sell signal. With fast calculation capability, more indicators or combined indicators can be used.

Step 3. Sensitivity Analysis

Sensitivity analysis is used to find out which indicators are more sensitive to the outputs. In other words, after a sensitivity analysis, we can easily eliminate the less sensitive variables from the input set. Usually, sensitivity analysis is used to reduce the number of fundamental factors. NNs or other forecasting models are used in forecasting because the forecast target is believed to have relationships with many other series. Sometimes, the input variables may be correlated with each other, and simply using all the available information may not always enhance the forecasting ability. This is the same as the observation that complex models do not always outperform simple ones.
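The elimination idea behind sensitivity analysis can be sketched as follows; the toy model and the indicator names are hypothetical stand-ins for a trained NN and its inputs:

```python
def rank_by_sensitivity(model, baseline, delta=0.01):
    """Perturb each input in turn and rank inputs by how much the
    model output moves; the least sensitive inputs are candidates
    for elimination from the input set (Step 3)."""
    base_out = model(baseline)
    scores = {}
    for name in baseline:
        perturbed = dict(baseline)
        perturbed[name] *= 1.0 + delta
        scores[name] = abs(model(perturbed) - base_out)
    return sorted(scores, key=scores.get, reverse=True)

def toy_model(x):
    # Hypothetical stand-in for a trained NN: responds strongly to
    # the 10-day moving average input, weakly to the volume input.
    return 5.0 * x["ma_10"] + 0.1 * x["volume"]

ranked = rank_by_sensitivity(toy_model, {"ma_10": 1.0, "volume": 1.0})
print(ranked)  # ['ma_10', 'volume']
```

A variable whose perturbation barely moves the output, like volume in this toy setup, would be a candidate for removal.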
Empirical studies have shown that forecasts using econometric models are not necessarily more accurate than those employing time series methods [7]. If the ability to explain economic or business phenomena, which can increase our understanding of the relationships between variables, is not counted, econometric models would be useless. Besides fundamental factors, technical indicators may also be used for sensitivity analysis. The basic idea used here is that several trainings are conducted using different variables as inputs to a NN, and their performance is then compared. If there is not much difference in performance with or without a variable, this variable is said to be of less significance to the target and thus can be deleted from the inputs to the network. Instead of changing the number of input variables, another approach changes the values of a particular variable. Again, several trainings are conducted using perturbed variables. Each time, a positive or negative change is introduced into the original value of a variable. If there is not much difference in performance with or without the changes to a variable, this variable is said to be of less significance.

Overfitting is another major concern in the design of a NN. When there is not enough data available to train the NN and the structure of the NN is too complex, the NN tends to memorize the data rather than to generalize from it. Keeping the NN small is one way to avoid overfitting. One can prune the network to a small size using techniques such as those in [12].

Step 4. Data Organization

The next step is data organization. In the data preprocessing step, we have chosen the prediction goal and the inputs that should be used. The historical data may not necessarily contribute equally to the model building. We know that for certain periods the market is more volatile than others, while some periods are more stable. We can emphasize a certain period of data by feeding it to the network more times, or eliminate some data patterns from unimportant time periods. Under the assumption that volatile periods contribute more, we will sample more from volatile periods, or vice versa. We can only conclude this from experiments on the particular data set.

The basic assumption of time series forecasting is that the patterns found in historical data will hold in the future. Traditional regression forecasting model building uses all the data available. However, the model obtained may not be suitable for the future. When training NNs, we can hold out a set of data, an out-of-sample set, apart from training. After the network is confirmed, we use the out-of-sample data to test its performance. There are tradeoffs between testing and training. One should not say a model is the best unless one has tested it, but once one has tested it, one has not trained enough. In order to train NNs better, all the available data should be used; the problem is that we would then have no data with which to test the "best" model. In order to test the model, we partition the data into three parts. The first two parts are used to train (and validate) the NN while the third part is used to test the model, even though the networks have then not been trained on all available data, as the third part is not used in training. The general partition rule for the training, validation and testing sets is 70%, 20% and 10% respectively, according to the authors' experience.

Step 5. Model Construction

The model construction step deals with the NN architecture, hidden layers and activation function. A backpropagation NN is decided by many factors: the number of layers, the number of nodes in each layer, the weights between nodes, and the activation function. In our study, a hyperbolic tangent function is used as the activation function for the backpropagation network. Similar to the situation with conventional forecasting models, it is not necessarily true that a complex NN, in terms of more nodes and more hidden layers, gives a better prediction. It is important not to have too many nodes in the hidden layer, because this may allow the NN to learn by example only and not to generalize [1]. When building a suitable NN for the financial application, we have to balance between convergence and generalization. We use a one hidden layer network for our experiments. We adopt a simple procedure for deciding the number of hidden nodes, which is determined by the number of nodes in the input or preceding layer. For a single hidden layer NN, the numbers of nodes in the hidden layer to be experimented with are of the order n/2, n/2 ± 1, n/2 ± 2, …, where n is the number of inputs. The minimum number is 1 and the maximum number is the number of inputs, n, plus 1. In the case where a single hidden layer is not satisfactory, an additional hidden layer is added. Then another round of similar experiments is conducted for each of the single layer networks, where n/2 now stands for half the number of nodes in the preceding layer. Besides the architecture itself, the weight change is also quite important: the learning rate and momentum rate can lead to different models.

A crucial point is the choice of the sigmoid activation function of the processing neuron. There are several variations on the standard backpropagation algorithm which aim at speeding up its relatively slow convergence, avoiding local minima, or improving its generalization ability, e.g. the use of activation functions other than the usual sigmoid, the addition of a small positive offset to the derivative of the sigmoid function to avoid saturation at the extremes, or the use of a momentum term in the equation for the weight change. More detailed discussion can be found in [10] and [8]. Although the backpropagation algorithm does not guarantee an optimal solution, Rumelhart et al. [17] observed that solutions obtained from the algorithm came close to the optimal ones in their experiments.
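The hidden-node search order described in Step 5 can be sketched as follows; the exact interleaving of the + and - offsets and the clipping below are our assumptions, as the text only specifies the order n/2, n/2 ± 1, n/2 ± 2, …:

```python
def hidden_node_candidates(n_inputs):
    """Enumerate hidden layer sizes fanning out from n/2, clipped
    to the range [1, n_inputs + 1] given in Step 5."""
    half = max(1, n_inputs // 2)
    seen, order = set(), []
    for k in range(n_inputs + 2):
        for size in ((half,) if k == 0 else (half + k, half - k)):
            if 1 <= size <= n_inputs + 1 and size not in seen:
                seen.add(size)
                order.append(size)
    return order

print(hidden_node_candidates(8))  # [4, 5, 3, 6, 2, 7, 1, 8, 9]
```

Each candidate size would then be trained and compared on the validation set from Step 4, stopping once no candidate improves on the best so far.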
The accuracy of approximation for NNs depends on the selection of proper architecture and weights; however, backpropagation is only a local search algorithm and thus tends to become trapped in local optima. Random selection of initial weights is a common approach. If these initial weights are located on local gradients, the algorithm will likely become trapped at a local optimum. Some researchers have tried to solve this problem by imposing constraints on the search space or by restructuring the architecture of the NN. For example, parameters of the algorithm can be adjusted to affect the momentum of the search so that the search will break out of local optima and move toward the global solution. Another common method for finding the best (perhaps global) solution using backpropagation is to restart the training at many random points. Wang [26] proposes a "fix" for certain classification problems by constraining the NN to approximate only monotonic functions.

Another issue with the backpropagation network is the choice of the number of hidden nodes in the network. While trial-and-error is a common method to determine the number of hidden nodes in a network, genetic algorithms are also often used to find the optimum number [19]. In fact, in recent years, there has been increasing use of genetic algorithms in conjunction with NNs. The application of genetic algorithms to NNs has followed two separate but related paths. First, genetic algorithms have been used to find the optimal network architectures for specific tasks. The second direction involves optimization of the NN using genetic algorithms for search. No matter how sophisticated the NN technology, the design of a neural trading system remains an art. This art, especially in terms of training and configuring NNs for trading, can be simplified through the use of genetic algorithms.

The traditional backpropagation NN training criterion is based on goodness of fit, which is also the most popular criterion for forecasting. However, in the context of financial time series forecasting, we are not only concerned with how well the forecasts fit their target. In order to increase forecastability in terms of profit earning, Yao [23] proposes a profit based adjusted weight factor for backpropagation network training. Instead of using the traditional least squares error, a factor which contains profit, direction and time information is added to the error function. The results show that the new approach does improve the forecastability of NN models in the financial application domain.

Step 6. Post Analysis

In the post analysis step, experimental results are analyzed to find possible relationships, such as the relations between higher profit and data characteristics. According to the performance of each segment, we can decide how long this model can be used; in other words, how often we should retrain the NN model. The knowledge gained from experiments will be used in future practice. A major disadvantage of NNs is that their forecasts seem to come from a black box: one cannot explain why the model made good predictions by examining the model parameters and structures. This makes NN models hard to understand and difficult for some managers to accept. Some work has been done to make these models more understandable [2, 18].

Step 7. Model Recommendation

As we know, certainty can be produced from a large number of uncertainties. The behavior of an individual cannot be forecast with any degree of certainty, but, on the other hand, the behavior of a group of individuals can be forecast with a higher degree of certainty. A single case of success does not mean success in the future. In our approach, we do not just train the network once using one data set. The final NN model we suggest for use in forecasting is not a single network but a group of networks. The networks are amongst the best models we have found using the same data set but different samples, segments and architectures. The model recommendation can be either Best-so-far or Committee. Best-so-far is the best model for the testing data, in the hope that it is also the best model for future new data. As we cannot guarantee that this one model alone is suitable for the future, we recommend a group of models as a committee in our final model. When a forecast is made, instead of relying on one model, it can be concluded from the majority of the committee. As, on average, the committee suggestion for the historical data is superior to that of a single model, the possibility of future correctness is greater.

4. Concluding Remarks

NNs are suitable for financial forecasting and marketing analysis. They can be used for financial time series such as stock exchange indices and foreign exchange rates. Research experiments show that NN models can outperform conventional models in most cases. As beating the markets is still a difficult problem, even if a NN cannot work as an alternative to traditional forecasting and analysis models, it can at least work as a complementary tool.

Some people treat NNs as a panacea. However, there is no cure-all medicine in the world. When applying a NN model in a real application, attention should be paid to every single step. The usage and training of a NN is an art.

One successful experiment says nothing for real application. There is always another model that is superior to the successful model on other data sets. Segmenting the data into several sub sets and training NNs with the same architecture on them will assure that
the model will not just work for a single data set. Furthermore, building a model construction system will free human beings from the tedious trial-and-error procedures.

References

[1] E. B. Baum, D. Haussler, "What Size Net Gives Valid Generalization?", Neural Computation, 1, 1989, pp151-160.
[2] J. M. Benitez, J. L. Castro, I. Requena, "Are artificial neural networks black boxes?", IEEE Transactions on Neural Networks, 8, 1997, pp1156-1164.
[3] J. L. Callen, C. C. Y. Kwan, P. C. Y. Yip, Y. Yuan, "Neural network forecasting of quarterly accounting earnings", International Journal of Forecasting, 12(4), 1996, pp475-482.
[4] A. J. Chapman, "Stock Market Trading Systems Through Neural Networks: Developing a Model", International Journal of Applied Expert Systems, Vol. 2, No. 2, 1994, pp88-100.
[5] P. R. Cohen, "A Survey of the Eighth National Conference on Artificial Intelligence: Pulling Together or Pulling Apart?", AI Magazine, Vol. 12, No. 1, 1992, pp17-41.
[6] R. G. Donaldson, M. Kamstra, "An artificial neural network-GARCH model for international stock return volatility", Journal of Empirical Finance, 4(1), 1997, pp17-46.
[7] R. Fildes, "Quantitative forecasting - the state of the art: Causal models", Journal of the Operational Research Society, Vol. 36, 1985, pp691-710.
[8] R. Hecht-Nielsen, Neurocomputing, Addison-Wesley, 1990.
[9] R. Heinkel, A. Kraus, "Measuring Event Impacts in Thinly Traded Stocks", Journal of Financial and Quantitative Analysis, March 1988.
[10] K. Hornik, M. Stinchcombe, H. White, "Multilayer feedforward networks are universal approximators", Neural Networks, 2(5), 1989, pp359-366.
[11] T. Kolarik, G. Rudorfer, "Time series forecasting using neural networks", APL Quote Quad, 25(1), 1994, pp86-94.
[12] M. C. Mozer, P. Smolensky, "Using relevance to reduce network size automatically", Connection Science, 1(1), 1989, pp3-16.
[13] H.-L. Poh, J. T. Yao, T. Jasic, "Neural Networks for the Analysis and Forecasting of Advertising and Promotion Impact", International Journal of Intelligent Systems in Accounting, Finance and Management, Vol. 7, No. 4, 1998, pp253-268.
[14] L. Prechelt, "A Quantitative Study of Experimental Evaluations of Neural Network Learning Algorithms: Current Research Practice", Neural Networks, 9(3), 1996, pp457-462.
[15] A. N. Refenes, M. Azema-Barac, L. Chen, S. A. Karoussos, "Currency Exchange Rate Prediction and Neural Network Design Strategies", Neural Computing & Applications, No. 1, 1993, pp46-58.
[16] W. Remus, M. O'Connor, "Neural Networks For Time Series Forecasting", in Principles of Forecasting: A Handbook for Researchers and Practitioners, J. Scott Armstrong, editor, Norwell, MA: Kluwer Academic Publishers, 2001.
[17] D. E. Rumelhart, G. E. Hinton, R. J. Williams, "Learning Internal Representations by Error Propagation", in Parallel Distributed Processing, Volume 1, D. E. Rumelhart, J. L. McClelland (Eds.), MIT Press, Cambridge, MA, 1986, pp318-362.
[18] R. Setiono, W. K. Leow, J. Y-L. Thong, "Opening the neural network black box: An algorithm for extracting rules from function approximating neural networks", in Proceedings of the International Conference on Information Systems 2000, Brisbane, Australia, December 10-13, 2000, pp176-186.
[19] R. S. Sexton, R. E. Dorsey, J. D. Johnson, "Toward global optimization of neural networks: A comparison of the genetic algorithm and backpropagation", Decision Support Systems, Vol. 22, No. 2, 1998, pp171-185.
[20] W. F. Tichy, P. Lukowicz, L. Prechelt, A. Heinz, "Experimental Evaluation in Computer Science: A Quantitative Study", Journal of Systems and Software, 28(1), 1995, pp9-18.
[21] W. F. Tichy, "Should Computer Scientists Experiment More? 16 Reasons to Avoid Experimentation", IEEE Computer, 31(5), 1998, pp32-44.
[22] J. T. Yao, C. L. Tan, H.-L. Poh, "Neural Networks for Technical Analysis: A Study on KLCI", International Journal of Theoretical and Applied Finance, Vol. 2, No. 2, 1999, pp221-241.
[23] J. T. Yao, C. L. Tan, "Time dependent Directional Profit Model for Financial Time Series Forecasting", Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, Como, Italy, 24-27 July 2000, Volume V, pp291-296.
[24] J. T. Yao, C. L. Tan, "A case study on using neural networks to perform technical forecasting of forex", Neurocomputing, Vol. 34, No. 1-4, 2000, pp79-98.
[25] J. T. Yao, C. L. Tan, Y. L. Li, "Option Prices Forecasting Using Neural Networks", Omega: The International Journal of Management Science, Vol. 28, No. 4, 2000, pp455-466.
[26] S. Wang, "The Unpredictability of Standard Back Propagation Neural Networks in Classification Applications", Management Science, 41(3), March 1995, pp555-559.
[27] D. H. Wolpert, W. G. Macready, "No Free Lunch Theorems for Search", Technical Report SFI-TR-95-02-010, The Santa Fe Institute, 1996.
[28] B. Zhou, "Estimating the Variance Parameter From Noisy High Frequency Financial Data", MIT Sloan School Working Paper, No. 3739, 1995.
