Abstract: Financial time series prediction is regarded as one of the most challenging jobs because of its inherent complexity, and the hybrid forecasting model incorporating autoregressive integrated moving average (ARIMA) and support vector machine (SVM) has been implemented widely to deal with both the linear and nonlinear patterns in time series data. However, the SVM model does not take into consideration the time correlation knowledge between different data points in a time series, which impacts the learning efficiency of the SVM in real applications. To overcome this restriction, this paper proposes the Taylor Expansion Forecasting model as an alternative to the SVM and develops a novel hybrid methodology via combining autoregressive integrated moving average and Taylor Expansion Forecasting to exploit comprehensive forecasting capacity for financial time series data with noise. Both theoretical proof and empirical results obtained on several commodity future prices demonstrate that the proposed hybrid model greatly improves the forecasting accuracy.

Keywords: financial time series forecasting, ARIMA, SVM, Taylor expansion, tracking differentiator
2016 Wiley Publishing Ltd Expert Systems, October 2016, Vol. 33, No. 5 501
ARIMA and SVM takes advantage of the unique strengths of ARIMA and ANN in linear and nonlinear modelling and obtains promising computational tests on 10 different stocks. Nie et al. (2012) present the hybrid of ARIMA and SVM to forecast the short-term load series and demonstrate that the SVM extracts the sensitive component to correct the deviation of the former forecasting by the ARIMA greatly. Wang et al. (2012) propose a novel approach via combining ARIMA and SVM to forecast the different returns of equities, and experiments show that the SVM can deal with the textual information to improve the financial time series forecasting accuracy greatly. Zhu and Wei (2013) develop a methodology combining ARIMA and least square SVM to predict the carbon prices. Experimental results reveal that the proposed hybrid methodology is suitable for the carbon price forecasting problem.

Although the hybrid models exhibit favourable overall forecasting performance, there are still some limitations affecting the forecasting accuracy of these models. As a kind of special data, the relations between different data points in a time series grow increasingly strong as the time interval decreases; that is to say, one data point in a time series is influenced by its previous or next neighbouring points more easily than by others. However, the SVM model does not take into account this kind of autocorrelation between different points in real time series. Theoretically, the SVM employs a kernel function K(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩ to determine how close an input vector is to each stored data point based on dot products between patterns, and the relationship of the training instances expressed by Euclidean distance in the SVM cannot reflect the real correlations of series data that depend on the time sequence. This limitation influences the learning efficiency of the machine and could result in a dramatic decrease in the generalization performance of the SVM when dealing with time series data in real situations. Furthermore, as a supervised learning machine with multiple inputs X_i = {x_i1, …, x_it, …, x_iN} and one output y_i ∈ R, the dimension N of the input vector greatly affects the generalization performance of the SVM. However, determining the input variable dimension N is a hard job when modelling the nonlinear part by the SVM model; it is usually determined by experience in real situations or by the backtracking steps value p in the ARIMA(p, d, q) model (Zhu & Wei, 2013). And once the number of the input variables is determined, the backtracking steps to the previous data points in the time series are fixed, and the information before the feedback steps will be omitted. Therefore, the SVM method cannot realize a dynamic backward search for the historical correlation knowledge and will influence the absorption of the information hidden in the series data, which further undermines the generalization performance of the SVM in real applications.

In order to overcome the limitations discussed earlier, a new Taylor Expansion Forecasting (TEF) model based on the tracking differentiator is proposed as an alternative to the SVM to forecast the nonlinear components in the time series forecasting problem. In the TEF model, the time series data are supposed to be generated from a function f, and then the time series data can be forecasted using the previous historical data by Taylor expansion, in which the tracking differentiator is used to compute the specified order derivatives of the time series function. The highlight is that the TEF model can be converted into an integration form of the historical data and express the different relationships between different data points in the time series through a weighting function instead of the Euclidean distance in the SVM. Therefore, compared with the SVM, the TEF model not only incorporates the real correlations among the data points of the time series but also realizes dynamic backtracking to the historical information hidden in the time series data points. So a novel hybrid methodology combining the ARIMA and TEF models is developed to forecast the financial time series. In this proposed methodology, the ARIMA and the TEF model are employed to capture the linear and nonlinear components of the financial time series, respectively, and their forecasting values are integrated into the final forecasting results. Finally, the authors evaluate the forecasting performance of the hybrid ARIMA and SVM model and the hybrid ARIMA and TEF model by forecasting several main future prices. The results of the computational tests are found to be promising.

The rest of this paper is organized as follows: in the next section, the individual ARIMA, SVM and TEF models for time series forecasting are described. Section 3 introduces the development of the novel hybrid methodology. Section 4 reports the experimental results, and Section 5 provides conclusions and predicts some future research directions.

2. Individual forecasting model employed in the hybrid model

2.1. Autoregressive integrated moving average model

The ARIMA method introduced by Box and Tiao (1975) has had an enormous impact on the theory and practice of modern time series analysis and forecasting (Gooijer & Hyndman, 2006; Pan et al., 2014). In the ARIMA model, if the transformed time series is stationary, the future value of a variable can be forecasted by a linear combination of past values and past errors. Suppose X = {x_i}_{i=1}^N is a time series; an ARIMA(p, d, q) model can be formulated as follows:

l̂_t = θ₀ + φ₁x_{t−1} + φ₂x_{t−2} + … + φ_p x_{t−p} + ε_t − θ₁ε_{t−1} − θ₂ε_{t−2} − … − θ_q ε_{t−q}   (2.1)

where the parameters p, d and q are non-negative integers that refer to the orders of the autoregressive, integrated and moving average parts of the model, respectively, x_t is the actual value of the time series obtained by differencing d times, l̂_t is the forecast value of x_t, ε_t is the residual term between the actual data and the forecasting value, and φ_i (i = 1, 2, …, p) and θ_j (j = 1, 2, …, q) are the coefficients to be estimated. Having completed the order determination (determining the values of p and q) and the parameter estimation (determining the values of φ and θ), the ARIMA model can be applied to forecast the value of the time series.
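To make the fitting-and-forecasting loop around equation (2.1) concrete, the sketch below uses a heavily simplified stand-in: an ARIMA(1, 1, 0) model whose single autoregressive coefficient is estimated by ordinary least squares on the once-differenced series. The price values and the one-lag choice are illustrative assumptions, not the paper's setup.

```python
# Minimal ARIMA(1,1,0) stand-in: difference once (d = 1), fit one AR
# coefficient by least squares, then forecast one step ahead.

def difference(series):
    """First-order differencing (the 'integrated' part, d = 1)."""
    return [b - a for a, b in zip(series, series[1:])]

def fit_ar1(diffed):
    """Least-squares estimate of phi_1 in x_t = phi_1 * x_{t-1} + eps_t."""
    num = sum(prev * cur for prev, cur in zip(diffed, diffed[1:]))
    den = sum(x * x for x in diffed[:-1])
    return num / den

def forecast_next(series):
    """One-step-ahead ARIMA(1,1,0) forecast: undo the differencing."""
    d = difference(series)
    phi = fit_ar1(d)
    return series[-1] + phi * d[-1]

prices = [100.0, 101.0, 102.5, 103.0, 104.8, 105.5, 107.1, 108.0]
print(round(forecast_next(prices), 3))
```

For a perfectly linear series the estimated coefficient is 1 and the forecast simply extends the trend, which is the behaviour one expects from the linear stage of the hybrid model.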
The major advantage of the ARIMA model is that it is basically a data-oriented approach that models the linear patterns of the time series well and is relatively easy to use. However, being a class of linear model, the ARIMA model can only capture linear patterns in a time series (Zhang, 2003; Wang et al., 2012; Zhu & Wei, 2013). Therefore, the nonlinear components in the time series necessitate a proper nonlinear model.

2.2. Support vector machine model

Support vector machine is a novel neural network classification technique which enjoys good generalization performance and implements global optimization simultaneously (Vapnik, 1995). Based on statistical learning theory, the SVM overcomes the curse of dimensionality and the over-fitting problem.

In a regression problem, the ε-insensitive loss function was introduced into the regression model to obtain support vector regression (SVR), which has been applied to financial time series problems and has shown quite good performance (Tay & Cao, 2002; Kim, 2003; Yuan et al., 2010; Rubio et al., 2011; Duan & Xu, 2012; Wang, 2015). If we use a set of training patterns {x_i, y_i}_{i=1}^N, where x_i ∈ Rⁿ and y_i ∈ R, the target of SV regression is to find a regression hyperplane with an ε-insensitive band from the actually obtained targets y_i for all the training data. Therefore, the objective function and constraints for SVR can be formulated as a convex optimization problem:

min (1/2)‖w‖²
s.t. y_i − ⟨w, x_i⟩ − b ≤ ε, ⟨w, x_i⟩ + b − y_i ≤ ε   (2.2)

The minimization of the norm (1/2)‖w‖² in the previous case is used to ensure the function flatness, and ε is a free parameter that serves as a threshold: all predictions have to be within an ε range of the true targets. The smaller its value, the higher the learning accuracy that is required, and the more support vectors need to be found by the algorithm. Sometimes some errors are to be allowed; therefore, two relaxation factors, ξ_i ≥ 0 and ξ_i* ≥ 0, are introduced into the constraints of the optimization problem, and the optimization problem referred to previously becomes

min (1/2)‖w‖² + (C/N) Σ_{i=1}^N (ξ_i + ξ_i*)
s.t. y_i − ⟨w, x_i⟩ − b ≤ ε + ξ_i, ⟨w, x_i⟩ + b − y_i ≤ ε + ξ_i*, ξ_i, ξ_i* ≥ 0   (2.3)

which can be solved more easily. The optimization problem can be described in the following form by a standard dualization method utilizing Lagrange multipliers:

min (1/2) Σ_{i=1}^N Σ_{j=1}^N (α_i − α_i*)(α_j − α_j*) K(x_i, x_j) − Σ_{i=1}^N (α_i − α_i*) y_i + ε Σ_{i=1}^N (α_i + α_i*)   (2.4)

s.t. Σ_{i=1}^N (α_i − α_i*) = 0, 0 ≤ α_i, α_i* ≤ C/N, i = 1, 2, …, N.   (2.5)

Here, α_i and α_i* are Lagrange multipliers, and the data points with non-zero (α_i, α_i*) pairs are called support vectors. K(x_i, x_j) is called the kernel function and is obtained by K(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩ in the feature space. Some common kernels include the radial basis function (Gaussian), polynomial, spline or even sigmoidal functions (Vapnik, 1998).

2.3. Taylor Expansion Forecasting model based on the tracking differentiator

In fact, at least from the mathematical point of view, there are a large number of financial time series which can be forecasted just by themselves. For example, suppose that {x_i}_{i=1}^N is a financial time series which is generated from a smooth function f(t) with sampling interval h. It follows from the Taylor expansion that, for any t_{i−1} ≥ 0 and t_i > t_{i−1}, i ∈ [1, N],

f(t_i) = f(t_{i−1} + h) = f(t_{i−1}) + f′(t_{i−1})h + (1/2)f″(t_{i−1})h² + (1/6)f‴(t_{i−1} + θh)h³, θ ∈ (0, 1)   (2.6)

where h is the tuning parameter that represents the time interval between t_{i−1} and t_i of the time series. Provided h is small enough, the previous formula implies that

f(t_i) = f(t_{i−1} + h) ≈ f(t_{i−1}) + f′(t_{i−1})h + (1/2)f″(t_{i−1})h²   (2.7)
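A minimal numeric check of the second-order forecast (2.7): given a smooth f with known derivatives at t_{i−1}, the truncated expansion predicts f(t_i) to within the dropped O(h³) remainder of (2.6). The cubic test function and step size below are illustrative choices.

```python
def taylor_forecast(f_val, f_d1, f_d2, h):
    """Second-order Taylor forecast (2.7): f(t+h) ~ f(t) + f'(t)h + f''(t)h^2/2."""
    return f_val + f_d1 * h + 0.5 * f_d2 * h ** 2

# f(t) = t^3 at t = 1, forecast f(1 + h); exact derivatives f' = 3t^2, f'' = 6t.
h = 0.01
pred = taylor_forecast(1.0, 3.0, 6.0, h)
exact = (1.0 + h) ** 3
print(abs(exact - pred))   # the dropped cubic term: (1/6) f''' h^3 = h^3 = 1e-6
```

The observed error equals the truncated third-order term, which is exactly the remainder that Remark 2 later bounds via the F₂ class.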
For any function f ∈ L²(0, T) with T > 0 (f can be neither smooth nor differentiable), where L²(0, T) = {f : ∫₀ᵀ f²(t) dt < ∞}, it follows from the well-known real-analysis theory that there exists a sequence {f_n}_{n≥1} ⊂ C^∞[0, T] such that

f_n → f as n → ∞ in L²(0, T)   (2.9)

That is to say, ∫₀ᵀ (f_n − f)² dt → 0 as n → ∞, and this convergence implies that f_n can be considered as an estimation of f, provided n is large enough. We divide f into two parts:

f(t) = f_n(t) + (f(t) − f_n(t)) := f₁(t) + f₂(t), t ≥ 0   (2.10)

where f₁ ∈ C^∞(0, T) is differentiable and f₂ ∈ L²(0, T) is very small in the L²(0, T) norm for large n. We regard the second term f₂ as the high-frequency disturbance or noise that is useless for the forecasting.

Remark 2. Obviously, the forecasting method (2.6) or (2.7) can be extended to the nth order theoretically. However, owing to the properties of the tracking differentiator (Feng & Li, 2013), the recommended order is n = 2, 3, 4. In this work, without loss of generality, we only consider the second-order forecasting formula (2.7).

For this purpose, we define

F₂ := {f ∈ H³(0, ∞) : ‖f′‖_∞ + ‖f″‖_∞ + ‖f‴‖_∞ < ∞}   (2.11)

where H³(0, ∞) is a Hilbert space and ‖·‖_∞ is defined by

‖f‖_∞ := sup_{t ∈ [0, ∞)} |f(t)|.   (2.12)

Obviously, all the functions in F₂ can be forecasted mathematically in the sense of (2.7) with error (2.8). So we will call F₂ the second-order mathematical-forecasting set.

For any given time series {x_i}_{i=1}^N, we assume that {x_i}_{i=1}^N are the sampling points of a function φ ∈ F₂ with step h. That is,

φ(0) = x₁, φ(h) = x₂, …, φ((N−1)h) = x_N   (2.13)

In order to make a one-step forecast for {x_i}_{i=1}^N (forecast x_{N+1} = φ(Nh)), according to (2.7), we have to estimate φ′((N−1)h) and φ″((N−1)h) (the value of φ((N−1)h) is known in the one-step forecasting model). However, because of the sensitivity to noise, the derivative of the usually rapidly varying noise will drown out the derivative of the signal. Therefore, it is not feasible to use finite differences to approximate the derivatives of φ. Fortunately, the derivatives of φ can be approximated by applying the tracking differentiator, which is used to extract derivatives from corrupted signals (Dabroom & Khalil, 1999; Guo et al., 2002; Ahrens & Khalil, 2009; Feng & Li, 2013). For details, see the appendix. A simulation of the tracking differentiator is plotted in Figure 1.

By using the tracking differentiator, a one-step forecast can be made for {x_i}_{i=1}^N in the sense of (2.7). More specially,

x_{N+1} = φ(Nh) ≈ x_N + z₁h + (1/2)z₂h²   (2.14)

where z₁ and z₂, obtained from the tracking differentiator, are used to approximate φ′((N−1)h) and φ″((N−1)h), respectively. Although this idea is simple, a large number of numerical experiments show that it is quite effective.
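The claim above, that the derivative of a rapidly varying noise drowns out the derivative of the signal, is easy to demonstrate numerically. In the sketch below a tiny high-frequency disturbance barely changes the signal values, yet its finite-difference "derivative" dominates the true one; the signal, noise amplitude and frequency are illustrative choices.

```python
import math

h = 0.001
ts = [i * h for i in range(1001)]
clean = [math.sin(t) for t in ts]                           # phi(t)
noisy = [math.sin(t) + 1e-3 * math.sin(1e4 * t) for t in ts]  # phi(t) + noise

def fd(values, step):
    """Forward finite-difference approximation of the derivative."""
    return [(b - a) / step for a, b in zip(values, values[1:])]

true_deriv = [math.cos(t) for t in ts[:-1]]
err_clean = max(abs(d - g) for d, g in zip(fd(clean, h), true_deriv))
err_noisy = max(abs(d - g) for d, g in zip(fd(noisy, h), true_deriv))
print(err_clean, err_noisy)   # the noise-induced error is orders of magnitude larger
```

Although the disturbance changes the data by at most 0.001, the finite-difference derivative error grows to order one, which is exactly why the tracking differentiator is used instead.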
We call this method the TEF model. Finally, the previous discussion illustrates that, in order to make a forecast for the time series {x_i}_{i=1}^N by (2.14), what we have to do is to obtain z₁ and z₂. Consequently, the key issue is how to obtain z₁ and z₂ by the tracking differentiator.

Next, we introduce some notation. Let X = (x₁, x₂, x₃) ∈ R³. Define

‖X‖₂ = √(x₁² + x₂² + x₃²)   (2.15)

We denote by ‖A‖_F the Frobenius norm of the matrix A. The authors apply the following third-order high-gain tracking differentiator (Dabroom & Khalil, 1999; Feng & Li, 2013):

ż₁(t) = z₂(t) − (3/ε)(z₁(t) − φ(t)),
ż₂(t) = z₃(t) − (6/ε²)(z₁(t) − φ(t)),
ż₃(t) = −(6/ε³)(z₁(t) − φ(t)),
z₁(0) = z₁₀, z₂(0) = z₂₀, z₃(0) = z₃₀,   (2.16)

where (z₁₀, z₂₀, z₃₀) ∈ R³ is the initial value and ε is the tuning parameter. In (2.16), φ(t) is the input and z₁(t), z₂(t), z₃(t) are the outputs, which are used to approximate φ(t), φ′(t), φ″(t), respectively. More specially, we have the following lemma.

Lemma 1. Suppose that φ ∈ F₂. Then, for any initial data (z₁₀, z₂₀, z₃₀) ∈ R³, there exist two positive constants Λ and λ independent of t such that system (2.16) satisfies

|z₁(t) − φ(t)| + |z₂(t) − φ′(t)| + |z₃(t) − φ″(t)| ≤ Λ(ε + e^{−λt/ε}), t ≥ 0.   (2.17)

Proof. If we let η₁ = z₁ − φ, η₂ = z₂ − φ′, η₃ = z₃ − φ″, the error system is governed by the following:

η̇₁(t) = η₂(t) − (3/ε)η₁(t),
η̇₂(t) = η₃(t) − (6/ε²)η₁(t),
η̇₃(t) = −(6/ε³)η₁(t) − φ‴(t),
η₁(0) = η₁₀, η₂(0) = η₂₀, η₃(0) = η₃₀,   (2.18)

where η₁₀ = z₁₀ − φ(0), η₂₀ = z₂₀ − φ′(0), η₃₀ = z₃₀ − φ″(0). (2.18) can be rewritten as follows:

ε³η₁‴(t) + 3ε²η₁″(t) + 6εη̇₁(t) + 6η₁(t) = −ε³φ‴(t)   (2.19)

Let t = εs and v(s) = η₁(εs). Then it follows that

(d³/ds³)v(s) + 3(d²/ds²)v(s) + 6(d/ds)v(s) + 6v(s) = −ε³φ‴(εs)   (2.20)

which can be rewritten as an evolution equation:

(d/ds)V(s) = AV(s) + D(s),   (2.21)

where

A = ( 0   1   0
      0   0   1
     −6  −6  −3 ),   V(s) = ( v(s), (d/ds)v(s), (d²/ds²)v(s) )ᵀ,   D(s) = ( 0, 0, −ε³φ‴(εs) )ᵀ.   (2.22)

The solution of system (2.21) is found to be

V(s) = e^{As}V(0) + ∫₀ˢ e^{A(s−τ)}D(τ) dτ   (2.23)

Because A is Hurwitz, there exist constants λ, L > 0 such that ‖e^{As}‖_F ≤ Le^{−λs}. It follows from (2.22) and (2.23) that

‖V(s)‖₂ ≤ ‖e^{As}‖_F‖V(0)‖₂ + ∫₀ˢ ‖e^{A(s−τ)}‖_F‖D(τ)‖₂ dτ ≤ L‖V(0)‖₂e^{−λs} + (L/λ)ε³‖φ‴‖_∞   (2.24)

Since v(s) = η₁(εs), we have

‖(η₁(t), εη̇₁(t), ε²η̈₁(t))‖₂ = ‖V(t/ε)‖₂ ≤ L‖V(0)‖₂e^{−λt/ε} + (L/λ)ε³‖φ‴‖_∞   (2.25)

On the other hand, it follows from (2.18) that

η₂(t) = η̇₁(t) + (3/ε)η₁(t),
η₃(t) = η̈₁(t) + (3/ε)η̇₁(t) + (6/ε²)η₁(t)   (2.26)

Combining (2.15), (2.25) and (2.26), we are able to conclude (2.17) easily, with Λ determined by L, λ, ε, ‖V(0)‖₂ and ‖φ‴‖_∞. So the proof is complete.

In order to use the forecasting algorithm (2.14), we discretize the system (2.16) by the backward Euler method as the following:

Z₁(i+1) = Z₁(i) + Z₂(i)h + (3/ε)(φ(i) − Z₁(i))h,
Z₂(i+1) = Z₂(i) + Z₃(i)h + (6/ε²)(φ(i) − Z₁(i))h,
Z₃(i+1) = Z₃(i) + (6/ε³)(φ(i) − Z₁(i))h,
Z₁(1) = z₁₀, Z₂(1) = z₂₀, Z₃(1) = z₃₀,   (2.27)

which, together with (2.13) and (2.14), lead easily to our forecasting formula
x_{N+1} = x_N + Z₂(N)h + (1/2)Z₃(N)h²   (2.28)

Theorem 1. Suppose that {x_i}_{i=1}^N is a financial time series sampled from a function φ ∈ F₂ with sampling interval h. That is, φ(0) = x₁, φ(h) = x₂, …, φ((N−1)h) = x_N. Then, for any initial data (z₁₀, z₂₀, z₃₀) ∈ R³, there exists a positive constant C* independent of h such that (2.27) and (2.28) satisfy

lim_{ε→0} |x_{N+1} − φ(Nh)| ≤ C*h² + (1/6)‖φ‴‖_∞h³   (2.29)

Proof. The Taylor expansion of φ at t = (N−1)h is as follows:

φ(Nh) = φ((N−1)h) + φ′((N−1)h)h + (1/2!)φ″((N−1)h)h² + (1/3!)φ‴(ξ)h³   (2.30)

where (N−1)h < ξ < Nh. Combining (2.16), (2.28) and (2.30), we have

|x_{N+1} − φ(Nh)| ≤ |Z₂(N) − φ′((N−1)h)|h + (1/2!)|Z₃(N) − φ″((N−1)h)|h² + (1/3!)|φ‴(ξ)|h³
≤ (|Z₂(N) − z₂((N−1)h)| + |z₂((N−1)h) − φ′((N−1)h)|)h + (1/2!)(|Z₃(N) − z₃((N−1)h)| + |z₃((N−1)h) − φ″((N−1)h)|)h² + (1/3!)‖φ‴‖_∞h³   (2.31)

Because (2.27) is the discretization of (2.16) by the backward Euler method (the convergence of the backward Euler method can be found in Butcher (2003)), there exists a positive constant C* independent of h such that

|Z₂(N) − z₂((N−1)h)| + |Z₃(N) − z₃((N−1)h)| ≤ C*h.   (2.32)

Applying Lemma 1, there exist two positive constants Λ and λ such that

|z₂((N−1)h) − φ′((N−1)h)| + |z₃((N−1)h) − φ″((N−1)h)| ≤ Λ(ε + e^{−λ(N−1)h/ε}).   (2.33)

Combining (2.31), (2.32) and (2.33), we get (2.29). The proof is complete.

Remark 3. Equation (2.29) means that x_{N+1} can be regarded as a one-step forecast for the time series {x_i}_{i=1}^N, provided h is small enough. Consequently, our forecasting strategy (2.14) is valid for financial time series if we choose ε sufficiently small.

Remark 4. Here, we note that the high-gain tracking differentiator (2.16) enjoys a very good property for our forecasting, that is, its insensitivity to small high-frequency noise (Guo et al., 2002; Ahrens & Khalil, 2009). Consequently, the forecasting model (2.7) is still feasible for the following set

F̃₂ := {f | f = φ + ζ, φ ∈ H³(0, ∞), ‖φ′‖_∞ + ‖φ″‖_∞ + ‖φ‴‖_∞ < ∞, ζ is some high-frequency noise}   (2.34)

provided ζ is sufficiently small.

Therefore, our forecasting strategy (2.14) seems still valid for most of the financial time series. The numerical experiments can support our statements.

In fact, system (2.16) can be written in the following abstract form:

(d/dt)Z(t) = A_h Z(t) + V_h f(t), Z(0) = 0,   (2.35)

where Z(t) = (z₁(t), z₂(t), z₃(t))ᵀ and

A_h = ( −3/h   1   0
        −6/h²  0   1
        −6/h³  0   0 ),   V_h = ( 3/h, 6/h², 6/h³ )ᵀ.

We solve equation (2.35) to obtain

Z(t) = ∫₀ᵗ e^{A_h(t−τ)}V_h f(τ) dτ = ∫₀ᵗ e^{A_h τ}V_h f(t−τ) dτ   (2.36)

Because the matrix A_h is Hurwitz for any h > 0, there exist two positive constants L and λ independent of time t such that

‖e^{A_h t}‖ ≤ Le^{−λt} → 0 as t → ∞   (2.37)

It follows from (2.36) that Z(t) is actually a weighted average of f over the interval (0, t), regarding e^{A_h(t−τ)} as the weighting function. If we let 0 < t₁ < t₂ < t, then f(t₂) is more closely related to f(t) than f(t₁) because of (2.37). This expresses the increasingly tight relationship between neighbouring points in the time series as the time interval decreases, and this instinctive ability to incorporate the time correlations would help to improve the forecasting performance of the TEF model in real applications.

In essence, the solving process of f′(t_{i−1}) and f″(t_{i−1}) by (2.36) is the integration of the function e^{A_h(t−τ)}V_h f(τ) before time t. This integration expression shows that the TEF model can realize the dynamic backward search to make full use of the historical knowledge in the time series and avoid the problem of selecting the optimal backtracking steps that serve as the input variables in the SVM model. Furthermore, the one-parameter setting mechanism of the TEF model simplifies its application.
3. The hybrid methodology of autoregressive integrated moving average and Taylor Expansion Forecasting development

Because a real-world time series is complex in nature, there is universal agreement that a hybrid strategy incorporating both linear and nonlinear modelling abilities is a good alternative for this problem. Following the well-established linear and nonlinear modelling framework (Zhang, 2003; Pai & Lin, 2005; Chen, 2011; Lee & Tong, 2011; Wang et al., 2012; Zhu & Wei, 2013; Xiong et al., 2015), this study builds a novel hybrid forecasting model ARIMA–TEF via combining the ARIMA and the TEF model to improve the forecasting accuracy. In the hybrid model, the real-world time series {x_i}_{i=1}^N is assumed to be composed of the sum of the linear and nonlinear parts as follows:

x_i = l_i + nl_i   (3.1)

where l_i and nl_i are the linear and nonlinear parts of the time series data {x_i}_{i=1}^N to be estimated by the ARIMA and the TEF model, respectively.

Firstly, use the ARIMA to forecast the time series and obtain the predicted results. We denote by l̂_i the forecasting value of l_i through the ARIMA model. Let

ε_i = x_i − l̂_i   (3.2)

which is called the residual containing the nonlinear component at time t_i and can be forecasted by the TEF model using the previous residual series. That is,

ε_i = f(ε_{i−1}, …, ε_{i−l}) + Δ_i   (3.3)

where f is the nonlinear function modelled by the TEF and Δ_i is the error. The ε_i can be forecasted to obtain its predicted result n̂l_i. Therefore, we can obtain the hybrid final forecast:

x̂_t = l̂_t + n̂l_t   (3.4)

where l̂_t is the ARIMA forecast value of the linear part l_t and n̂l_t is the TEF model forecast value of the nonlinear part nl_t.

Combined with the previous analysis, a one-step forecast algorithm for any given time series {x_i}_{i=1}^N can be concluded as Algorithm 1:

1. Fit the ARIMA(p, d, q) model to the time series {x_i}_{i=1}^t.
2. Obtain the in-sample estimation result l̂_i.
3. Employing the trained ARIMA model, obtain the one-step-ahead linear forecast value l̂_{t+1} of x_{t+1}.
4. Following equation (3.2), calculate the residual series {ε_i}_{i=1}^t, which represents the nonlinear components in the time series {x_i}_{i=1}^t.
5. Training the TEF model on the residual series {ε_i}_{i=1}^t, select the parameter h minimizing the MAE.
6. Employing the trained TEF model, make the one-step-ahead nonlinear forecast value n̂l_{t+1} of ε_{t+1}.
7. Performing equation (3.4), calculate the final forecasting value x̂_{t+1} = l̂_{t+1} + n̂l_{t+1}.

4. Experimentation design and results

4.1. Data description

Commodity prices are widely believed to influence price levels more broadly and thus are of interest to those whose decisions depend on their expectations of future inflation (Gargano & Timmermann, 2014). The samples employed in this study consist of historical daily closing prices for several representative commodity futures in the United States (oil, wheat, soy bean, corn, gold and silver futures on the Chicago Board of Trade) and in Britain (copper and aluminium futures on the London Metal Exchange), obtained from WindDataBase. All the closing price data sets of the futures were for the nearest expiration contracts, measured in US dollars, between January 19, 2011 and January 18, 2013. Excluding public holidays, the authors learn that the lengths of the time series for the oil, wheat, soy bean, corn, gold and silver futures were all 506, while the data were 505 for the copper and aluminium futures. Figure 2 describes the curve of the daily oil future prices (in US dollars per barrel) and indicates that the future prices are highly uncertain, nonlinear, dynamic and complicated. Furthermore, in Table 1, the authors present brief descriptive statistics of these future price series: the mean, standard deviation, kurtosis, skewness and the Jarque–Bera test, etc. The statistics relating to skewness, kurtosis and the Jarque–Bera test all reveal that these futures prices are non-normal.

Figure 2: Original oil future price series from January 19, 2011 to January 18, 2013.
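The seven steps of Algorithm 1 in Section 3 can be condensed into a short sketch. Here the ARIMA stage is reduced to an ARIMA(1, 1, 0) stand-in fitted by least squares and the TEF stage on the residuals to a first-order extrapolation, so every modelling choice below is an illustrative simplification of the paper's pipeline rather than its actual implementation.

```python
def ar1_coeff(xs):
    """Least-squares AR(1) coefficient for a (roughly zero-mean) series."""
    den = sum(a * a for a in xs[:-1])
    return sum(a * b for a, b in zip(xs, xs[1:])) / den if den else 0.0

def hybrid_one_step(series):
    # Steps 1-3: fit the linear stage and make the linear forecast l_{t+1}.
    diffs = [b - a for a, b in zip(series, series[1:])]
    phi = ar1_coeff(diffs)
    linear_next = series[-1] + phi * diffs[-1]
    # Step 4: residual series eps_i = x_i - l_i (equation 3.2).
    fits = [series[i - 1] + phi * (series[i - 1] - series[i - 2])
            for i in range(2, len(series))]
    resid = [series[i] - f for i, f in zip(range(2, len(series)), fits)]
    # Steps 5-6: one-step nonlinear forecast of the residual (TEF stand-in).
    nl_next = resid[-1] + (resid[-1] - resid[-2]) if len(resid) >= 2 else 0.0
    # Step 7: final forecast x_{t+1} = l_{t+1} + nl_{t+1} (equation 3.4).
    return linear_next + nl_next

print(hybrid_one_step([1.0, 2.0, 3.0, 4.0, 5.0]))   # a perfectly linear series -> 6.0
```

For a purely linear series the residuals vanish and the hybrid forecast reduces to the linear one, matching the decomposition in equation (3.1).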
In order to verify the effectiveness of the proposed algorithm, different ratios (90% and 70%) of these future price datasets are used as in-sample training sets and the remaining values (10% and 30%) as out-of-sample testing sets, respectively. The training data set is used exclusively to develop the forecasting model, and the test sample set is used to evaluate the forecasting ability of the model using different evaluation criteria. The detailed data compositions for the training and testing datasets are given in Table 2.

4.2. Forecasting evaluation criteria

Considering that no single accuracy measurement can capture the distributional features of the errors (Armstrong & …), the mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and Willmott's index of agreement (WIA) are adopted:

MAE = (1/l) Σ_{i=1}^l |a_i − y_i|   (4.1)

RMSE = √((1/l) Σ_{i=1}^l (a_i − y_i)²)   (4.2)

MAPE = (100/l) Σ_{i=1}^l |(y_i − a_i)/a_i|   (4.3)

WIA = 1 − Σ_{i=1}^l (y_i − a_i)² / Σ_{i=1}^l (|y_i − ā| + |a_i − ā|)²   (4.5)

where a_i and y_i are the measured value and the forecasting result, l is the length of the testing set, and ā and ȳ are the respective means.

Table 1: Descriptive statistics (sample size, mean, standard deviation, skewness, kurtosis and Jarque–Bera test) of the oil, wheat, soy bean, corn, gold, silver, copper and aluminium future price series.
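The criteria in (4.1)–(4.5) translate directly into code. The WIA form below follows the standard Willmott index of agreement, since equation (4.5) is only partially legible in the source; the actual/forecast values are illustrative.

```python
import math

def mae(a, y):
    """Mean absolute error, equation (4.1)."""
    return sum(abs(ai - yi) for ai, yi in zip(a, y)) / len(a)

def rmse(a, y):
    """Root mean square error, equation (4.2)."""
    return math.sqrt(sum((ai - yi) ** 2 for ai, yi in zip(a, y)) / len(a))

def mape(a, y):
    """Mean absolute percentage error, equation (4.3)."""
    return 100.0 / len(a) * sum(abs((yi - ai) / ai) for ai, yi in zip(a, y))

def wia(a, y):
    """Willmott index of agreement, reconstructed form of equation (4.5)."""
    a_bar = sum(a) / len(a)
    num = sum((yi - ai) ** 2 for ai, yi in zip(a, y))
    den = sum((abs(yi - a_bar) + abs(ai - a_bar)) ** 2 for ai, yi in zip(a, y))
    return 1.0 - num / den

actual, forecast = [90.0, 92.0, 95.0], [91.0, 92.0, 94.0]
print(mae(actual, forecast), rmse(actual, forecast), wia(actual, forecast))
```

Lower MAE, RMSE and MAPE indicate better level accuracy, while WIA approaches 1 for a perfect forecast.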
Table 2: Sample compositions in four data sets

Series Sample size Training set (size) Test set (size)
Oil 506 Jan. 19, 2011–Nov. 6, 2012 (456) Nov. 7, 2012–Jan. 18, 2013 (50)
 506 Jan. 19, 2011–Jun. 14, 2012 (355) Jun. 15, 2012–Jan. 18, 2013 (151)
Wheat 506 Jan. 19, 2011–Nov. 6, 2012 (456) Nov. 7, 2012–Jan. 18, 2013 (50)
 506 Jan. 19, 2011–Jun. 14, 2012 (355) Jun. 15, 2012–Jan. 18, 2013 (151)
Soy bean 506 Jan. 19, 2011–Nov. 6, 2012 (456) Nov. 7, 2012–Jan. 18, 2013 (50)
 506 Jan. 19, 2011–Jun. 14, 2012 (355) Jun. 15, 2012–Jan. 18, 2013 (151)
Corn 506 Jan. 19, 2011–Nov. 6, 2012 (456) Nov. 7, 2012–Jan. 18, 2013 (50)
 506 Jan. 19, 2011–Jun. 14, 2012 (355) Jun. 15, 2012–Jan. 18, 2013 (151)
Gold 506 Jan. 19, 2011–Nov. 6, 2012 (456) Nov. 7, 2012–Jan. 18, 2013 (50)
 506 Jan. 19, 2011–Jun. 14, 2012 (355) Jun. 15, 2012–Jan. 18, 2013 (151)
Silver 506 Jan. 19, 2011–Nov. 6, 2012 (456) Nov. 7, 2012–Jan. 18, 2013 (50)
 506 Jan. 19, 2011–Jun. 14, 2012 (355) Jun. 15, 2012–Jan. 18, 2013 (151)
Copper 505 Jan. 19, 2011–Nov. 6, 2012 (455) Nov. 7, 2012–Jan. 18, 2013 (50)
 505 Jan. 19, 2011–Jun. 14, 2012 (354) Jun. 15, 2012–Jan. 18, 2013 (151)
Aluminium 505 Jan. 19, 2011–Nov. 6, 2012 (455) Nov. 7, 2012–Jan. 18, 2013 (50)
 505 Jan. 19, 2011–Jun. 14, 2012 (354) Jun. 15, 2012–Jan. 18, 2013 (151)
4.3. The models' parameter determination

The future prices vary with daily changes featuring strong non-stationarity. Because the ARIMA model is only fit to a stationary time series, an initial differencing step (corresponding to the integrated part of the model) can be applied to remove the non-stationarity. The Akaike Information Criterion is used to identify the best model. By trial and error, the optimal ARIMA models generated from the different ratios of the several future price datasets are listed in Tables 3 and 4.

For the SVM, appropriate parameters can improve the generalization of the learning machine. In this study, the poly function is selected as the kernel function, and the parameters C and e of the model for all the experimental future prices are adjusted based on the principle of minimizing the MAE of the model (Díaz-Robles et al., 2008), as presented in Tables 3 and 4. In addition, following the point of view of Zhu and Wei (2013), the authors determine the dimension p of the input data X_t = (x_{t−1}, x_{t−2}, …, x_{t−p}) of the SVM by the autoregressive order in the ARIMA(p, d, q) model.
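Order selection by the Akaike Information Criterion, as described above, can be sketched compactly. For a least-squares fit, AIC can be computed from the residual sum of squares; counting p + d + q as the parameter number is a rough convention here, and the candidate orders and residual sums are illustrative, not the study's actual fits.

```python
import math

def aic(rss, n, k):
    """AIC for a Gaussian least-squares model: n*ln(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

# Pretend two candidate ARIMA orders left these residual sums of squares
# on a 456-point training set (both numbers are made up for illustration):
n = 456
candidates = {(2, 1, 2): 310.0, (3, 1, 3): 305.0}
scores = {order: aic(rss, n, sum(order)) for order, rss in candidates.items()}
best = min(scores, key=scores.get)
print(best)
```

The lower-AIC order wins only when its fit improvement outweighs the 2k penalty for extra parameters, which is exactly the trade-off the criterion formalizes.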
Table 3: The models' parameters

Series ARIMA–SVM ARIMA–TEF
Oil p = 3, d = 1, q = 3, C = 1, e = 1 p = 3, d = 1, q = 3, h = 55
Wheat p = 3, d = 1, q = 2, C = 1, e = 1 p = 3, d = 1, q = 2, h = 60
Soy bean p = 2, d = 1, q = 2, C = 2, e = 1 p = 2, d = 1, q = 2, h = 60
Corn p = 2, d = 1, q = 3, C = 2, e = 1 p = 2, d = 1, q = 3, h = 60
Gold p = 2, d = 1, q = 2, C = 1.2, e = 0.9 p = 2, d = 1, q = 2, h = 65
Silver p = 2, d = 1, q = 1, C = 2, e = 1 p = 2, d = 1, q = 1, h = 60
Copper p = 3, d = 1, q = 3, C = 1, e = 1 p = 3, d = 1, q = 3, h = 55
Aluminium p = 2, d = 1, q = 1, C = 1, e = 1 p = 2, d = 1, q = 1, h = 60

ARIMA–SVM, autoregressive integrated moving average and support vector machine; ARIMA–TEF, autoregressive integrated moving average and Taylor Expansion Forecasting.
Table 4: The models' parameters

Series ARIMA–SVM ARIMA–TEF
Oil p = 2, d = 1, q = 2, C = 1, e = 1 p = 2, d = 1, q = 2, h = 60
Wheat p = 3, d = 1, q = 3, C = 1, e = 1 p = 3, d = 1, q = 3, h = 55
Soy bean p = 3, d = 1, q = 2, C = 2, e = 1.1 p = 3, d = 1, q = 2, h = 60
Corn p = 3, d = 1, q = 2, C = 3, e = 1 p = 3, d = 1, q = 2, h = 55
Gold p = 2, d = 1, q = 2, C = 1, e = 1 p = 2, d = 1, q = 2, h = 60
Silver p = 2, d = 1, q = 2, C = 3, e = 1 p = 2, d = 1, q = 2, h = 60
Copper p = 2, d = 1, q = 2, C = 2, e = 1 p = 2, d = 1, q = 2, h = 55
Aluminium p = 2, d = 1, q = 1, C = 2, e = 1 p = 2, d = 1, q = 1, h = 60

ARIMA–SVM, autoregressive integrated moving average and support vector machine; ARIMA–TEF, autoregressive integrated moving average and Taylor Expansion Forecasting.
Figure 3 presents the MAE values of the hybrid ARIMA–TEF methodology corresponding to the parameter h of the TEF model on different datasets. In statistics, the MAE is used to measure how close forecasts or predictions are to the eventual outcomes. As shown in Figure 3, in the case of the 10% oil future price testing set, the value of MAE decreases rapidly with h increasing before reaching the optimization point of 1/55, and as the value of h continues to increase, the MAE of the ARIMA–TEF forecasting results does not slide further but stays at a certain level with a slight fluctuation. This result indicates that the forecasting model ARIMA–TEF
Figure 3: Mean absolute error (MAE) of the autoregressive integrated moving average and Taylor Expansion Forecasting model with different parameter h.
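The selection of h by out-of-sample MAE, as depicted in Figure 3, amounts to a one-dimensional grid search. In the sketch below the candidate grid and the toy validation-error function are illustrative assumptions; a real run would compute each candidate's errors from actual one-step forecasts.

```python
def mean_abs(errors):
    return sum(abs(e) for e in errors) / len(errors)

def pick_h(candidates, validation_errors_for):
    """Return the candidate h with the smallest validation MAE."""
    return min(candidates, key=lambda h: mean_abs(validation_errors_for(h)))

def errors(h):
    # Toy stand-in: pretend validation errors shrink as h approaches 0.02.
    return [abs(h - 0.02) + 0.001, abs(h - 0.02) + 0.002]

best = pick_h([1/70, 1/55, 1/45, 1/35, 1/25], errors)
print(best)   # 1/55 is closest to the toy optimum
```

The grid mirrors the x-axis of Figure 3, and the winner here is 1/55 by construction of the toy error function.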
Figure 4: Time series and scatter plots of daily original oil future prices with forecasted ones by autoregressive integrated moving average and support vector machine (ARIMA–SVM) and autoregressive integrated moving average and Taylor Expansion Forecasting (ARIMA–TEF). The lines in the scatter plots represent linear regression lines.
can converge rapidly with h and then stays within a relatively stable interval. As discussed previously, h should be small to guarantee the accuracy of the differentiator; however, the differentiator becomes more insensitive to high-frequency noise as h goes to zero. Hence, there is a trade-off between the tracking accuracy and the noise toleration, and the choice of h depends on the experimental data. Some specific settings of the parameter h are listed in Tables 3 and 4.

4.4. Experimental results

This section reports the experimental results. The ARIMA model is established with Eviews (Version 6.0), produced by the Quantitative Micro Software corporation, while the simulation with the SVM is performed using the WEKA 3 toolbox (developed by the University of Waikato, New Zealand), and the Taylor expansion forecasting model is developed in MATLAB, produced by the MathWorks Corporation. The estimated results on the different future price series are similar, and the experimental result of the oil future is employed in the following analysis.

The actual and predicted values provided by the two hybrid models on different ratios of the oil future price dataset are presented in Figure 4. The point-to-point comparisons of the experimental results (Figure 4(a) and (c)) show that the fitting values of the ARIMA–TEF
model are closer to the real data than those of the ARIMA-SVM model for the different testing datasets, and it can be seen clearly from the graph that the proposed ARIMA-TEF model has a greater capability to detect the local extremum points than the ARIMA-SVM model. The main reason is that the ARIMA-TEF model enjoys the property of being insensitive to the noise disturbance in dealing with the complex time series data. In Figure 4(b) and (d), the estimated values of the hybrid ARIMA-TEF model cluster around the perfect coefficient of determination (R2 = 1), while the scatter plot for the ARIMA-SVM is more dispersed. That is to say, more than 88% of the variance between the observed and estimated future datasets in the forecasting phase is captured by the hybrid ARIMA-TEF model, which reveals the powerful analysing capability of the new hybrid model for the future time series data forecasting problem.

The detailed statistical results are listed in Tables 5 and 6. Compared with the ARIMA-SVM, the ARIMA-TEF model performs much better on the different ratios of testing sets by all criteria. For out-of-sample precision comparisons, the MAEs of the ARIMA-SVM model are 1.0090 and 1.4418, whereas those of the ARIMA-TEF are much lower, at 0.8362 and 1.1210, respectively. Similar results are also obtained from the other level prediction indicators. All these statistical indices listed in Tables 5 and 6 indicate that the ARIMA-TEF model boasts an outstanding capability to forecast the future price series.

To evaluate the robustness of the ARIMA-TEF model, the authors test the performance of the different models on several other representative commodity future prices.
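The level criteria reported in Tables 5 and 6 can be computed as follows. This is an illustrative sketch using standard textbook definitions; the paper's exact formulas for SEP and WIA are not restated here, so the forms below (SEP as RMSE relative to the mean actual value, WIA as Willmott's index of agreement) are assumptions.

```python
# Standard forecast-accuracy criteria: MAE, RMSE, MAPE, SEP,
# Willmott's index of agreement (WIA) and coefficient of determination (R2).
import math

def forecast_metrics(actual, predicted):
    n = len(actual)
    mean_a = sum(actual) / n
    errors = [p - a for a, p in zip(actual, predicted)]
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mape = 100.0 * sum(abs(e / a) for a, e in zip(actual, errors)) / n
    sep = 100.0 * rmse / mean_a  # standard error of prediction, in percent
    # Willmott's index of agreement: 1 - SSE / potential error
    pot = sum((abs(p - mean_a) + abs(a - mean_a)) ** 2
              for a, p in zip(actual, predicted))
    wia = 1.0 - sum(e * e for e in errors) / pot
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    r2 = 1.0 - sum(e * e for e in errors) / ss_tot
    return {'MAE': mae, 'RMSE': rmse, 'MAPE': mape,
            'SEP': sep, 'WIA': wia, 'R2': r2}
```

A perfect forecast yields MAE = RMSE = MAPE = SEP = 0 and WIA = R2 = 1, so smaller error values and larger WIA/R2 indicate the better model, which is how Tables 5 and 6 are read.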
Figure 5: Time series and scatter plots of several representative daily future prices (wheat, soy bean, corn, gold, silver, aluminum and copper) with forecasted ones by autoregressive integrated moving average and support vector machine (ARIMA-SVM) and autoregressive integrated moving average and Taylor Expansion Forecasting (ARIMA-TEF) on the 10% testing set. The lines in the scatter plots represent linear regression lines.
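Both hybrids compared in Figures 4-6 share the same two-stage recipe (Zhang, 2003): a linear model captures the ARIMA-style structure and a second model corrects its residuals. The sketch below is a deliberately minimal, library-free stand-in, assuming an AR(1) in place of the full ARIMA and a moving-average residual corrector in place of the SVM or TEF stage; it illustrates the decomposition, not the authors' exact pipeline.

```python
def fit_ar1(series):
    """Least-squares AR(1): x_t ~ c + phi * x_{t-1}."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
    c = my - phi * mx
    return c, phi

def hybrid_one_step(series, window=3):
    """Stage 1: linear AR(1) forecast. Stage 2: correct it with the
    recent mean of the linear model's own in-sample residuals."""
    c, phi = fit_ar1(series)
    residuals = [series[t] - (c + phi * series[t - 1])
                 for t in range(1, len(series))]
    correction = sum(residuals[-window:]) / min(window, len(residuals))
    return (c + phi * series[-1]) + correction
```

On a purely linear series the correction stage is inert; it earns its keep only when the residuals left by the linear stage still carry structure, which is exactly the situation the SVM and TEF stages target.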
Similarly, two relative ratios of 70% and 90% are considered as the training set. The point-to-point comparisons of results and scatter plots are plotted in Figures 5 and 6, and the forecasting performances of the two different models are summarized in Tables 5 and 6. It can be observed again that the ARIMA-TEF model has much better out-of-sample forecasting accuracy because of its smaller MAE, RMSE, MAPE and SEP and larger WIA and R2 values on the different future price datasets, which demonstrates that the ARIMA-TEF model is a preferable hybrid model and can be quite robust in terms of producing more accurate results for the financial time series forecasting problem.

The possible explanations of the ARIMA-TEF model's superiority to the ARIMA-SVM could be twofold. On one hand, besides its nonlinear modelling ability, the TEF model expresses different relationships between neighbouring points with different time intervals in the time series data through the notion of the weighted function, which overcomes the shortcomings of the equivalence relationship expressed by the Euclidean distance in the SVM. This instinctive ability to incorporate the time correlation improves the forecast accuracy of the ARIMA-TEF model. On the other hand, different input variables affect the SVM generalization ability significantly. However, in real application, the scope of the backtracking steps selected as the input variables is difficult to determine, which makes the effort to use the SVM to forecast time series data based on limited information a very difficult task. By contrast, the TEF model can realize dynamic backtracking to the historical data by equation (2.36), which avoids the difficulty of the back search for the best time lags in the time series and makes full use of the historical information hidden in the previous data. Additionally,
Figure 6: Time series and scatter plots of several representative daily future prices (wheat, soy bean, corn, gold, silver, aluminum and copper) with forecasted ones by autoregressive integrated moving average and support vector machine (ARIMA-SVM) and autoregressive integrated moving average and Taylor Expansion Forecasting (ARIMA-TEF) on the 30% testing set. The lines in the scatter plots represent linear regression lines.
compared with other methods, the tracking differentiator is more feasible in extracting the derivatives of the time series data even if the specific model of the target is unknown (Guo et al., 2002; Ibrir, 2004; Davila et al., 2005). Therefore, the hybrid methodology not only exploits the unique strength of the individual models but also develops a more powerful hybrid model that can improve the comprehensive analysing ability for time series data with complex characteristics. Considering the superior performance of the ARIMA-TEF forecasts, investors could develop an effective decision support system based on this forecasting model to improve their investment efficiency.

5. Conclusions

A novel hybrid model, ARIMA-TEF, incorporating the ARIMA and TEF models is proposed to forecast financial time series. The main highlight of our work lies in that
between different financial markets and so on through some specific learning mechanism, should be more reasonable.

Additionally, as a newly proposed model, some problems should be addressed in future research. Firstly, only several commodity future price datasets are used as illustrative examples to evaluate the performance of the ARIMA-TEF model in this study; therefore, using the proposed methodology for other kinds of financial time series is subject to future investigations. Secondly, the problem of making multi-step-ahead forecasts based on the TEF model is still open for both theoretical and applied research. Finally, the tracking differentiator is applied to extract the derivatives from the given series in this paper. However, some recent studies propose alternative approaches, such as the study of Su et al. (2005) and the study of Guo and Zhao (2011). Thus, in the future, a novel TEF model with a new tracking differentiator to approach the derivatives of the time series trajectory could be investigated.
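The linear tracking differentiator of type (4.10) can be simulated in a few lines. The sketch below (an illustration under stated assumptions, not the authors' implementation) uses forward-Euler integration with the test input u(t) = sin t, so that y1 should track u and y2 its derivative cos t; the parameter eps plays the role of h, trading tracking accuracy against noise tolerance.

```python
# Sketch of the linear tracking differentiator
#   y1' = y2,   y2' = -(1/eps^2)*(y1 - u(t)) - (2/eps)*y2,
# integrated by forward Euler. y1 tracks the input u and y2 its
# derivative; smaller eps tracks faster and more accurately but
# amplifies high-frequency noise, mirroring the trade-off in h.
import math

def track(u, t_end=10.0, eps=0.05, dt=5e-4):
    y1, y2, t = 0.0, 0.0, 0.0
    samples = []  # (t, y1, y2) trajectory
    while t < t_end:
        dy2 = -(y1 - u(t)) / eps**2 - 2.0 * y2 / eps
        y1, y2, t = y1 + dt * y2, y2 + dt * dy2, t + dt
        samples.append((t, y1, y2))
    return samples

traj = track(math.sin)
# After the transient, y1 ~ sin(t) and y2 ~ cos(t).
err_y1 = max(abs(y1 - math.sin(t)) for t, y1, _ in traj if t > 2.0)
err_y2 = max(abs(y2 - math.cos(t)) for t, _, y2 in traj if t > 2.0)
```

A one-step forecast in the spirit of the TEF model would then use these derivative estimates in a truncated Taylor series, e.g. u(t + d) ~ y1(t) + d * y2(t).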
More specifically, if we assume that f makes the following free system

    ẏ1 = y2,
    ẏ2 = f(y1, y2)                                          (4.7)

globally stable in the sense that lim t→∞ yi(t) = 0, i = 1, 2, then the system

    ẏ1 = y2,
    ẏ2 = (1/ε²) f(y1 − u, εy2)                              (4.8)

satisfies, for any T > 0 (Su et al., 2005; Guo & Zhao, 2011),

    ∫_0^T (|y1(t) − u(t)| + |y2(t) − u̇(t)|) dt → 0 as ε → 0.   (4.9)

Therefore, y1 and y2 can be regarded as approximations to u and u̇, respectively.

The main advantage of the tracking differentiator lies in the fact that it is noise tolerant. For example, we choose the following linear-form tracking differentiator of type (4.8):

    ẏ1 = y2,
    ẏ2 = −(1/ε²)(y1 − u) − (2/ε)y2                          (4.10)

where the signal u is given by (4.6). The simulated tracking results are plotted in Figure 1 (right). It is easy to find that the tracking differentiator is robust to small white noise.

References

ABU-MOSTAFA, Y. and A. ATIYA (1996) Introduction to financial forecasting, Applied Intelligence, 6, 205–213.
AHRENS, J.H. and H.K. KHALIL (2009) High-gain observers in the presence of measurement noise: a switched-gain approach, Automatica, 45, 936–943.
ARMSTRONG, J.S. and F. COLLOPY (1992) Error measures for generalizing about forecasting methods: empirical comparisons, International Journal of Forecasting, 8, 69–80.
ATSALAKIS, G.S. and K.P. VALAVANIS (2009) Surveying stock market forecasting techniques - Part II: soft computing methods, Expert Systems with Applications, 36(3, Part 2), 5932–5941.
BOX, G.E.P. and G.C. TIAO (1975) Intervention analysis with applications to economic and environmental problems, Journal of the American Statistical Association, 70, 70–79.
BRILLINGER, D.R. (1974) Time Series 45, Holt, Rinehart and Winston, Inc., New York-Montreal-London, 661–712.
BUTCHER, J.C. (2003) Numerical Methods for Ordinary Differential Equations, John Wiley & Sons, New York.
CAO, L.J. (2003) Support vector machines experts for time series forecasting, Neurocomputing, 51, 321–339.
CHEN, K.Y. (2011) Combining linear and nonlinear model in forecasting tourism demand, Expert Systems with Applications, 38, 10368–10376.
CHENG, G. and Y. YANG (2015) Forecast combination with outlier protection, International Journal of Forecasting, 31, 223–237.
DABROOM, A.M. and H.K. KHALIL (1999) Discrete-time implementation of high-gain observers for numerical differentiation, International Journal of Control, 72, 1523–1537.
DAVILA, J., L. FRIDMAN and A. LEVANT (2005) Second-order sliding-modes observer for mechanical systems, IEEE Transactions on Automatic Control, 50, 1785–1789.
DELURGIO, S.A. (1960) Principles and Procedures of Statistics with Special Reference to the Biological Sciences, McGraw-Hill, New York.
DELURGIO, S.A. (1998) Forecasting Principles and Applications, McGraw-Hill, Boston.
DÍAZ-ROBLES, L.A., J.C. ORTEGA, J.S. FU, G.D. REED, J.C. CHOW, J.G. WATSON and J.A. MONCADA-HERRERA (2008) A hybrid ARIMA and artificial neural networks model to forecast particulate matter in urban areas: the case of Temuco, Chile, Atmospheric Environment, 42, 8331–8340.
DUAN, L. and L.D. XU (2012) Business intelligence for enterprise systems: a survey, IEEE Transactions on Industrial Informatics, 8, 679–687.
FENG, H.Y.P. and S.J. LI (2013) A tracking differentiator based on Taylor expansion, Applied Mathematics Letters, 26, 735–740.
GARGANO, A. and A. TIMMERMANN (2014) Forecasting commodity price indexes using macroeconomic and financial predictors, International Journal of Forecasting, 30, 825–843.
GOODWIN, P. and R. LAWTON (1999) On the asymmetry of the symmetric MAPE, International Journal of Forecasting, 15, 405–408.
GOOIJER, J.G.D. and R.J. HYNDMAN (2006) 25 years of time series forecasting, International Journal of Forecasting, 22, 443–473.
GUO, B.Z., J.Q. HAN and F.B. XI (2002) Linear tracking-differentiator and application to online estimation of the frequency of a sinusoidal signal with random noise perturbation, International Journal of Systems Science, 33, 351–358.
GUO, B.Z. and Z.L. ZHAO (2011) On convergence of tracking differentiator, International Journal of Control, 84, 693–701.
GUO, B.Z. and Z.L. ZHAO (2013) Weak convergence of nonlinear high-gain tracking differentiator, IEEE Transactions on Automatic Control, 58, 1074–1080.
GUO, J., S. XU and Z. BI (2014) An integrated cost-based approach for real estate appraisals, Information Technology & Management, 15, 131–139.
HADAVANDI, E., H. SHAVANDI and A. GHANBARI (2010) Integration of genetic fuzzy systems and artificial neural networks for stock price forecasting, Knowledge-Based Systems, 23, 800–808.
HANSEN, J.V., J.B. MCDONALD and R.D. NELSON (2006) Some evidence on forecasting time-series with support vector machines, The Journal of the Operational Research Society, 57, 1053–1063.
HUANG, W., Y. NAKAMORI and S.Y. WANG (2005) Forecasting stock market movement direction with support vector machine, Computers & Operations Research, 32, 2513–2522.
IBRIR, S. (2004) Linear time-derivative trackers, Automatica, 40, 397–405.
KAO, L.J., C.C. CHIU, C.J. LU and J.L. YANG (2013) Integration of nonlinear independent component analysis and support vector regression for stock price forecasting, Neurocomputing, 99, 534–542.
KAUFFMAN, R., J. LIU and D. MA (2015) Technology investment decision-making under uncertainty, Information Technology and Management, 16, 153–172.
KIM, K. (2003) Financial time series forecasting using support vector machines, Neurocomputing, 55, 307–319.
LEE, Y.S. and L.I. TONG (2011) Forecasting time series using a methodology based on autoregressive integrated moving average and genetic programming, Knowledge-Based Systems, 24, 66–72.
LEVANT, A. (1998) Robust exact differentiation via sliding mode technique, Automatica, 34, 379–384.
LU, C.J., T.S. LEE and C.C. CHIU (2009) Financial time series forecasting using independent component analysis and support vector regression, Decision Support Systems, 47, 115–125.
MATHEWS, B.P. and A. DIAMANTOPOULOS (1994) Towards a taxonomy of forecast error measures: a factor-comparative investigation of forecast error dimensions, Journal of Forecasting, 13, 409–416.
NIE, H., G. LIU, X. LIU and Y. WANG (2012) Hybrid of ARIMA and SVMs for short-term load forecasting, Energy Procedia, 16, 1455–1460.
PAI, P.F. and C.S. LIN (2005) A hybrid ARIMA and support vector machines model in stock price forecasting, Omega, 33, 497–505.
PAN, S., L. WANG, K. WANG, Z. BI, S. SHAN and B. XU (2014) A knowledge engineering framework for identifying key impact factors from safety-related accident cases, Systems Research and Behavioral Science, 31, 383–397.
POLIMENIS, V. and I. NEOKOSMIDIS (2014) The global financial crisis and its transmission to Asia Pacific, Journal of Management Analytics, 1, 266–284.
RUBIO, G., H. POMARES, I. ROJAS and L.J. HERRERA (2011) A heuristic method for parameter selection in LS-SVM: application to time series prediction, International Journal of Forecasting, 27, 725–739.
SHI, S., L. XU and B. LIU (1988) Applications of artificial neural networks to the nonlinear combination of forecasts, Expert Systems, 110, 195–201.
SHI, S., L. XU and B. LIU (1999) Improving the accuracy of nonlinear combined forecasting using neural networks, Expert Systems with Applications, 16, 49–54.
SU, Y., C. ZHENG, S. DONG and B. DUAN (2005) A simple nonlinear velocity estimation for high-performance motion control, IEEE Transactions on Industrial Electronics, 52, 1161–1169.
TAY, F.E.H. and L.J. CAO (2001) Application of support vector machines in financial time series forecasting, Omega, 29, 309–317.
TAY, F.E.H. and L.J. CAO (2002) Modified support vector machines in financial time series forecasting, Neurocomputing, 48, 847–861.
VAPNIK, V.N. (1995) The Nature of Statistical Learning Theory, Springer, New York.
VAPNIK, V.N. (1998) Statistical Learning Theory, Wiley, New York.
VENTURA, S., M. SILVA, D. PEREZ-BENDITO and C. HERVAS (1995) Artificial neural networks for estimation of kinetic analytical parameters, Analytical Chemistry, 67, 1521–1525.
WANG, B., H. HUANG and X. WANG (2012) A novel text mining approach to financial time series forecasting, Neurocomputing, 83, 136–145.
WANG, X. (2015) Support vector machine and ROC curves for modeling of aircraft fuel consumption, Journal of Management Analytics, 2, 22–34.
WEI, L., W. ZHANG, X. XIONG and Y. ZHAO (2014) A multi-agent system for policy design of tick size in stock index futures markets, Systems Research and Behavioral Science, 31, 512–526.
XIONG, T., Y. BAO and Z. HU (2013) Beyond one-step-ahead forecasting: evaluation of alternative multi-step-ahead forecasting models for crude oil prices, Energy Economics, 40, 405–415.
XIONG, T., C. LI, Y. BAO, Z. HU and L. ZHANG (2015) A combination method for interval forecasting of agricultural commodity futures prices, Knowledge-Based Systems, 77, 92–102.
YUAN, R., Z. LI, X. GUAN and L. XU (2010) An SVM-based machine learning method for accurate internet traffic classification, Information Systems Frontiers, 12, 149–156.
ZHANG, G.P. (2003) Time series forecasting using a hybrid ARIMA and neural network model, Neurocomputing, 50, 159–175.
ZHU, B.Z. and Y.M. WEI (2013) Carbon price forecasting with a novel hybrid ARIMA and least squares support vector machines methodology, Omega, 41, 517–524.
ZHU, X., H. WANG, L. XU and H. LI (2008) Predicting stock index increments by neural networks: the role of trading volume under different horizons, Expert Systems with Applications, 34, 3043–3054.

The authors

Guisheng Zhang

Guisheng Zhang is currently a lecturer at the School of Economics and Management, Shanxi University, China. He received his Bachelor and Master degrees from the School of Computer and Information Technology, Shanxi University, China, in 2000 and 2007, respectively, and obtained his PhD degree from the School of Economics and Management, Shanxi University, in 2016. His research interests focus on financial time series analysis and forecasting.

Xindong Zhang

Xindong Zhang is currently a Professor at the School of Economics and Management, Shanxi University, China, where she obtained her Bachelor and Master degrees in Mathematics. She received her PhD degree in Management from the Tianjin University of Finance and Economics, China. Her research interests include asset pricing and corporate finance.

Hongyinping Feng

Hongyinping Feng received the BSc, MSc and PhD degrees in mathematics in 2003, 2006 and 2013, respectively, all from Shanxi University, China. He was a visiting scholar at the University of the Witwatersrand, South Africa, from 2013 to 2014. He is currently a Professor in the School of Mathematics Science, Shanxi University, China. His research interests focus on distributed parameter systems control.