
Jurnal Teknik Elektro, No. 2, September 2007: 82 - 87

Parameter Estimation using Least Square Method for MIMO Takagi-Sugeno Neuro-Fuzzy in Time Series Forecasting

Indar Sugiarto, et al.

Institut für Automatisierungstechnik, Universität Bremen
Email: indi@peter.petra.ac.id, natarajan@uni-bremen.de

ABSTRACT

This paper describes a least square estimation (LSE) method for improving the Takagi-Sugeno neuro-fuzzy model for a multi-input, multi-output system, using a set of data (the Mackey-Glass chaotic time series). The performance of the generated model is verified using a certain set of validation/test data. The LSE method is used to compute the consequent parameters of the Takagi-Sugeno neuro-fuzzy model, while the means and variances of the Gaussian membership functions are initially set at certain values and are then updated using the backpropagation algorithm. Simulation using Matlab shows that the developed neuro-fuzzy model is capable of forecasting future values of the chaotic time series and adaptively reduces the amount of error during its training and validation.

Keywords: forecasting, time series, Gaussian membership function, neuro-fuzzy, least square.

INTRODUCTION

When considering fuzzy logic for analyzing and solving an engineering problem that requires computational intelligence, one practically chooses between the Mamdani model and the Takagi-Sugeno model. Of course, other models could be used as well, but the Takagi-Sugeno model is preferred for data-based (or numerically-data-driven) fuzzy modeling [1, 2, 3] as well as for forecasting time series data, where in many cases the system can be seen as a set of locally linear models. Another advantage is that the inference formula of the Takagi-Sugeno model is only a two-step procedure based on a weighted average defuzzifier, whereas the Mamdani type of fuzzy model basically consists of four steps [4, 5].

In this paper, we implement neuro-fuzzy modeling, which combines the ability to reason on a linguistic level and to handle uncertain or imprecise data with the learning and adaptation abilities of an intelligent system. Here we choose the Takagi-Sugeno type of fuzzy model along with differentiable operators and membership functions, and the weighted average defuzzifier for defuzzification of the output data. The corresponding output inference can then be represented in a multilayer feedforward network structure such as described in [4, 5], which is identical to the architecture of ANFIS [6]. Since we use the Takagi-Sugeno fuzzy model with Gaussian Membership Functions (GMFs), we have three types of free parameters: the mean and variance of each GMF, and the rule consequent parameters (we usually name them theta).

Note: Discussion of this paper is expected before December 1st, 2007, and will be published in Jurnal Teknik Elektro, volume 8, number 1, March 2008.

Optimizing these parameters so that convergence can be accelerated has become a topic of special interest in the research field of neuro-fuzzy techniques. This paper concentrates on the optimization of theta by estimation using the least square method (LSE) for a MIMO system. We arrange our presentation as follows. First, we explain the importance of a MIMO Takagi-Sugeno neuro-fuzzy implementation for time series forecasting and the treatment procedure for the data; in this case we use the Mackey-Glass chaotic time series. Next, we explain how we implement the least square method for optimizing the free parameters of the Takagi-Sugeno rule consequent. All important equations, or equations derived directly from their definitions, will be numbered, while intermediate derivation steps will not. Finally, the results of the Matlab simulation will be discussed.

MIMO TAKAGI-SUGENO NEURO-FUZZY FOR FORECASTING CHAOTIC TIME SERIES

Here we consider short-term forecasting as the implementation case of the MIMO Takagi-Sugeno neuro-fuzzy model for chaotic time series. Chaotic time series are generated by deterministic nonlinear systems that are sufficiently complicated to appear random. The chaotic time series we use here is the Mackey-Glass time series, which is available in Matlab. It is generated by solving the Mackey-Glass time-delay differential equation [7]:

dx(t)/dt = 0.2·x(t−τ) / (1 + x^10(t−τ)) − 0.1·x(t)    (1)

where x(0) = 1.2, τ = 17, and x(t) = 0 for t < 0.
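The paper takes the series from Matlab's built-in data set; purely as an illustration, a similar series can be obtained by integrating equation (1) directly. The following NumPy sketch is our own (the function name, Euler scheme, and sub-step count are assumptions, not from the paper), using the stated initial conditions:

```python
import numpy as np

def mackey_glass(n=1200, tau=17, x0=1.2):
    """Euler integration of eq. (1): dx/dt = 0.2*x(t-tau)/(1+x(t-tau)**10) - 0.1*x(t),
    with x(0) = x0 and x(t) = 0 for t < 0; returns one sample per unit of time."""
    sub = 10                                  # Euler sub-steps per time unit
    h = 1.0 / sub
    delay = tau * sub                         # delay expressed in sub-steps
    buf = [0.0] * delay + [x0]                # history for t < 0, plus x(0)
    for _ in range(n * sub):
        x_t, x_lag = buf[-1], buf[-1 - delay]
        buf.append(x_t + h * (0.2 * x_lag / (1.0 + x_lag ** 10) - 0.1 * x_t))
    return np.array(buf[delay::sub][:n])      # samples at t = 0, 1, 2, ...

x = mackey_glass()
```

Because the delayed term is zero for t < τ, the series first decays exponentially and only then develops its chaotic oscillation.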

Parameter Estimation using Least Square Method for MIMO Takagi-Sugeno Neuro-Fuzzy in Time Series Forecasting [Indar Sugiarto, et al.]

Since the data already exist, data preparation for forecasting consists only of structuring the data and determining the training and validation data sets. According to the Matlab documentation, the Mackey-Glass data set consists of 1200 data points. For our experiment, we neglected the first 100 points due to their harmonic nature, then took the next 200 points for training, and finally used the next 800 points for validation. So we have a time series of the form X = {X1, X2, X3, …, Xn}, taken at regular intervals of time t = {1, 2, 3, …, n}. For this forecasting problem, a set of known values of the time series up to a point in time, say t, is usually used to predict the value of the time series at some future point, say (t+P). We then created a mapping from S sample data points, sampled every s units in time, to a predicted future value of the time series, which results in an S-dimensional input vector of the form:

XI(t) = [X(t−(S−1)s), X(t−(S−2)s), …, X(t)]

Palit [3] shows that for predicting the Mackey-Glass time series, the values S = 4 and s = P = 6 are good choices. The output data of the fuzzy predictor correspond to the trajectory prediction, which for the MIMO case can be written as:

XO(t) = [X(t+P), X(t+2P), X(t+3P), …]
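As a sketch of this windowing step (the helper name, the three-output choice, and the array layout are our own assumptions, not from the paper), the XI and XO vectors with S = 4 and s = P = 6 can be built as:

```python
import numpy as np

def make_io_pairs(x, S=4, s=6, P=6, n_out=3):
    """Build XI(t) = [x(t-(S-1)s), ..., x(t-s), x(t)] and
    XO(t) = [x(t+P), x(t+2P), ..., x(t+n_out*P)] for every admissible t."""
    XI, XO = [], []
    t = (S - 1) * s                           # first t with a full history window
    while t + n_out * P < len(x):
        XI.append([x[t - (S - 1 - k) * s] for k in range(S)])
        XO.append([x[t + (m + 1) * P] for m in range(n_out)])
        t += 1
    return np.array(XI), np.array(XO)
```

Applied to an index sequence, the first input row is [x(0), x(6), x(12), x(18)] and its targets are [x(24), x(30), x(36)].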

Next we combine XI and XO into a single matrix, named XIO, which stands for the Input-Output matrix. For our MIMO case we use 4 inputs and 3 outputs, so the XIO matrix can be written in the form:

        | X1      X2      X3      X4      X5      X6      X7  |
XIO =   | X4      X5      X6      X7      X8      X9      X10 |
        | ...     ...     ...     ...     ...     ...     ... |
        | Xn−6    Xn−5    Xn−4    Xn−3    Xn−2    Xn−1    Xn  |

where the first four columns form the XI matrix and the last three columns form the XO matrix.

The neuro-fuzzy structure used here is the same as the one described in [4, 5], which is similar to the MANFIS (Multiple-output ANFIS) model [1, 6]. For convenience, we redraw the neuro-fuzzy model proposed by [5] in comparison with the MANFIS model [1], both for a two-input, two-output system.

[Figure 1. Structure of the Takagi-Sugeno neuro-fuzzy network as proposed by Palit (a) and by Jang (b)]

Referring to Figure 1a, the fuzzy logic system considered here for constructing neuro-fuzzy structures is based on the Takagi-Sugeno fuzzy model with Gaussian membership functions. It uses product inference rules and a weighted average defuzzifier, defined as:

f_j(x^p) = (Σ_{l=1}^{M} y_j^l·z^l) / (Σ_{l=1}^{M} z^l)    (2)

where l = 1, 2, 3, …, M reflects the number of rules (Gaussian membership functions). The output consequent of the l-th rule for each output j is:

y_j^l = θ_{0j}^l + Σ_{i=1}^{n} θ_{ij}^l·x_i    (3)

Then we generate the degree of fulfillment of each rule through the fuzzification procedure:

z^l = Π_{i=1}^{n} G^l(x_i), where G^l(x_i) = exp(−(x_i − c_i^l)² / (σ_i^l)²)    (4)
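The inference of equations (2)-(4) amounts to a single forward pass. The sketch below is in NumPy rather than the paper's Matlab, and the array shapes are our own convention, not the paper's:

```python
import numpy as np

def ts_forward(x, c, sigma, theta):
    """Forward pass of eqs. (2)-(4) for one input vector x of length n.
    c, sigma: (M, n) GMF means and spreads; theta: (M, m, n+1) rule
    consequents, with theta[l, j, 0] playing the role of theta_0j^l."""
    z = np.exp(-((x - c) ** 2) / sigma ** 2).prod(axis=1)   # eq. (4), shape (M,)
    xe = np.concatenate(([1.0], x))                          # 1 prepended for the bias term
    y = theta @ xe                                           # eq. (3), shape (M, m)
    return (z @ y) / z.sum()                                 # eq. (2), shape (m,)
```

With a single rule (M = 1) the normalization cancels and the output reduces to the linear consequent itself, which gives a quick sanity check.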


Each input x_i is associated in the l-th rule with a Gaussian membership function with corresponding mean and variance parameters c_i^l and σ_i^l; we assume that c_i^l ∈ I_i, σ_i^l > 0, and y_j^l ∈ O_j, where I_i and O_j are the input and output universes of discourse, respectively. The corresponding l-th rule of the above fuzzy logic system can then be written in If-Then form.

To find ∂S/∂θ_0^l, write the squared error over the training samples as S = 0.5·E^T·E = 0.5·[e_1² + e_2² + … + e_n²], so that

∂S/∂θ_0^l = (1/2)·[∂e_1²/∂θ_0^l + ∂e_2²/∂θ_0^l + … + ∂e_n²/∂θ_0^l]

With the rule consequent written out in full, y_j^l = θ_{0j}^l + θ_{1j}^l·x_1 + θ_{2j}^l·x_2 + … + θ_{nj}^l·x_n.

Here the y_j^l are the rule output consequents and the f_j are the network's m outputs. If all the parameters of the neuro-fuzzy network are properly selected, the system can correctly approximate any nonlinear system based on the given XIO matrix data. To train this neuro-fuzzy network, which is equivalent to a multi-input multi-output (MIMO) feedforward network, we use the backpropagation algorithm (BPA). Generally, BPA based on the steepest descent gradient rule uses a low learning rate η for network training in order to avoid oscillations in the final phase of the training; that is why BPA needs a large number of epochs (recursive steps) to reach the smallest SSE (Sum of Squared Errors). The steepest descent gradient rule used for neuro-fuzzy network training is based on the recursive expressions:

θ_{0j}^l(k+1) = θ_{0j}^l(k) − η·(∂S/∂θ_{0j}^l)
c_i^l(k+1) = c_i^l(k) − η·(∂S/∂c_i^l)
σ_i^l(k+1) = σ_i^l(k) − η·(∂S/∂σ_i^l)

S_j = 0.5·Σ_{p=1}^{N} (e_j^p)² = 0.5·E_j^T·E_j    (5)

SSE = Σ_{j=1}^{m} S_j    (6)

where e_j is the difference between the real sampled data from the XIO matrix and its prediction, i.e. e_j = (d_j − f_j).


Since f_j = (Σ_{l=1}^{M} y_j^l·z^l) / (Σ_{l=1}^{M} z^l), we can derive ∂f_j/∂θ_{0j}^l as follows. Writing b = Σ_{l=1}^{M} z^l for the sum of the degrees of fulfillment:

∂f_j/∂θ_{0j}^l = ∂/∂θ_{0j}^l [(z^1·y_j^1 + z^2·y_j^2 + … + z^M·y_j^M)/b] = z^l/b    (7)

because ∂y_j^l/∂θ_{0j}^l = 1 while ∂y_j^m/∂θ_{0j}^l = 0 for m ≠ l. Here l = 1, 2, 3, …, M reflects the number of rules (Gaussian membership functions) and i = 1, 2, 3, …, n reflects the number of inputs. In the sense of the BPA, θ, c, and σ are the parameters that will be updated during the training phase. We treat θ_0 separately from θ_i, since it does not multiply any component of the input vector and hence is calculated independently. The SSE for each epoch reflects the training performance function.

With e_j = (d_j − f_j), we have ∂e_j/∂θ_{0j}^l = −∂f_j/∂θ_{0j}^l, and therefore

∂S/∂θ_{0}^l = e_1·(∂e_1/∂θ_{0}^l) + e_2·(∂e_2/∂θ_{0}^l) + … + e_n·(∂e_n/∂θ_{0}^l) = (f_1 − d_1)·(∂f_1/∂θ_{0}^l) + … + (f_n − d_n)·(∂f_n/∂θ_{0}^l)    (8)

so that

∂S/∂θ_{0j}^l = (f_j − d_j)·(z^l/b)    (9)

In the same way, the derivatives for the remaining adjustable parameters can be obtained as:

∂S/∂θ_{ij}^l = (f_j − d_j)·(z^l/b)·x_i
∂S/∂c_i^l = A·{2·(z^l/b)·(x_i − c_i^l) / (σ_i^l)²}
∂S/∂σ_i^l = A·{2·(z^l/b)·(x_i − c_i^l)² / (σ_i^l)³}
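One steepest-descent update using these derivatives might be sketched as follows. This is NumPy rather than the paper's Matlab, and the array shapes and single-sample treatment are our assumptions, not the paper's code:

```python
import numpy as np

def bpa_step(x, d, c, sigma, theta, eta=0.05):
    """One steepest-descent update on a single sample.
    x: (n,) input, d: (m,) target; c, sigma: (M, n) GMF parameters;
    theta: (M, m, n+1) consequents with theta[l, j, 0] = theta_0j^l.
    Implements dS/dtheta = (f-d)(z/b)xe and the c, sigma gradients,
    with A = sum_j (y_j^l - f_j)(f_j - d_j)."""
    z = np.exp(-((x - c) ** 2) / sigma ** 2).prod(axis=1)    # degrees of fulfillment (M,)
    b = z.sum()
    xe = np.concatenate(([1.0], x))                           # bias-extended input (n+1,)
    y = theta @ xe                                            # rule consequents (M, m)
    f = (z @ y) / b                                           # network outputs (m,)
    err = f - d                                               # (f_j - d_j), shape (m,)
    w = z / b                                                 # z^l / b, shape (M,)
    g_theta = w[:, None, None] * err[None, :, None] * xe[None, None, :]
    A = ((y - f) * err).sum(axis=1)                           # per-rule factor (M,)
    g_c = (A * w)[:, None] * 2 * (x - c) / sigma ** 2
    g_sigma = (A * w)[:, None] * 2 * (x - c) ** 2 / sigma ** 3
    return theta - eta * g_theta, c - eta * g_c, sigma - eta * g_sigma, f
```

Repeated calls with a small η should shrink the output error on a fixed sample, which is an easy way to check the gradient signs.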


where

A = Σ_{j=1}^{m} (y_j^l − f_j)·(f_j − d_j) = Σ_{j=1}^{m} (θ_{0j}^l + Σ_{i=1}^{n} θ_{ij}^l·x_i − f_j)·(f_j − d_j)    (10)

The rule base of the Takagi-Sugeno model is:

R^1: If x_1 is G_1^1 and x_2 is G_2^1 and … x_n is G_n^1
     then y_j^1 = θ_{0j}^1 + θ_{1j}^1·x_1 + θ_{2j}^1·x_2 + … + θ_{nj}^1·x_n
…
R^M: If x_1 is G_1^M and x_2 is G_2^M and … x_n is G_n^M
     then y_j^M = θ_{0j}^M + θ_{1j}^M·x_1 + θ_{2j}^M·x_2 + … + θ_{nj}^M·x_n

The Takagi-Sugeno inference is then:

f_j = (Σ_{l=1}^{M} y_j^l·z^l) / (Σ_{l=1}^{M} z^l) = (z^1·y_j^1 + z^2·y_j^2 + … + z^M·y_j^M) / (z^1 + z^2 + … + z^M)

OPTIMIZING THE FREE PARAMETERS OF THE TAKAGI-SUGENO RULE CONSEQUENT

The consequent parameters of the l-th rule can then be collected in the vector:

θ^l = [θ_0^l, θ_1^l, θ_2^l, …, θ_n^l]    (11)

and the degree of fulfillment of the l-th rule, computed for the n-input system using the product operator as in the previous section, is:

z^l = Π_{i=1}^{n} G^l(x_i)    (12)

Normalizing the degree of fulfillment of each rule gives β^l = z^l / Σ_{l=1}^{M} z^l, so that the output for the i-th training sample becomes:

f_i = β_i^1·(θ_0^1 + θ_1^1·x_{1,i} + … + θ_n^1·x_{n,i}) + β_i^2·(θ_0^2 + θ_1^2·x_{1,i} + … + θ_n^2·x_{n,i}) + … + β_i^M·(θ_0^M + θ_1^M·x_{1,i} + … + θ_n^M·x_{n,i})

We can now apply least square estimation that maps the XI matrix onto the target matrix f_j. But first, we append a 1 alongside the n inputs in XI, which takes care of θ_0^l; we call the extended matrix XIe. The matrix form of the corresponding Takagi-Sugeno inference can then be written as:

| f_1 |   | β_1^1·XIe_1   β_1^2·XIe_1   …   β_1^M·XIe_1 |   | θ^1 |
| f_2 | = | β_2^1·XIe_2   β_2^2·XIe_2   …   β_2^M·XIe_2 | · | θ^2 |
|  …  |   |      …             …                 …      |   |  …  |
| f_N |   | β_N^1·XIe_N   β_N^2·XIe_N   …   β_N^M·XIe_N |   | θ^M |

or, compactly, [f] = [XIe][θ]. Multiplying both sides by [XIe]^T gives [XIe]^T[XIe][θ] = [XIe]^T[f], which yields:

[θ] = ([XIe]^T[XIe])^-1 [XIe]^T [f]    (13)

SIMULATION USING MATLAB

All of the equations above are arranged in matrix form, and we use Matlab to simulate them so that we can observe the model's performance. For the first experiment, we generate the matrices c (mean) and σ (variance) of the Gaussian membership functions randomly, and the matrix θ is also generated randomly.

[Figure: simulation result using randomly generated free parameters of the Takagi-Sugeno neuro-fuzzy network]
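A hedged NumPy sketch of this estimation step follows (the paper's Matlab code is not given; the helper name is ours, and `np.linalg.lstsq` stands in for the explicit inverse of (13) for numerical safety):

```python
import numpy as np

def lse_consequents(XI, F, z):
    """Solve [theta] = ([XIe]^T [XIe])^-1 [XIe]^T [f]  (13).
    XI: (N, n) inputs, F: (N, m) targets, z: (N, M) rule firing strengths.
    Returns theta stacked as (M*(n+1), m)."""
    N, n = XI.shape
    beta = z / z.sum(axis=1, keepdims=True)          # normalized degrees of fulfillment
    XIe = np.hstack([np.ones((N, 1)), XI])           # append 1 to handle theta_0^l
    # row i of the regressor = [beta_i^1 * XIe_i, ..., beta_i^M * XIe_i]
    Phi = (beta[:, :, None] * XIe[:, None, :]).reshape(N, -1)
    theta, *_ = np.linalg.lstsq(Phi, F, rcond=None)
    return theta
```

With a single always-firing rule, β = 1 and the estimate reduces to ordinary linear regression on the bias-extended inputs.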


Since the free parameters are generated randomly, each run gives a different result from the figure above. We recorded the best simulation, which yielded SSE = 0.14347.

[Figure: best simulation result with initially random parameters during the validation process]

The next experiment uses LSE to optimize the rule consequent parameters θ. First we modify the equation for θ with a small adaptation step λ, to avoid singularity problems during the matrix calculation, so that it becomes:

[θ] = ([XIe]^T[XIe] + λ·I)^-1 [XIe]^T [f]    (14)
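The modification in (14) amounts to ridge-style regularization of the normal equations. A minimal sketch, assuming a regressor matrix Phi built from the β-weighted, bias-extended inputs as in the previous section:

```python
import numpy as np

def lse_regularized(Phi, F, lam=1e-6):
    """Eq. (14): theta = (Phi^T Phi + lam*I)^-1 Phi^T F,
    where lam is the small adaptation step that keeps Phi^T Phi invertible."""
    k = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(k), Phi.T @ F)
```

For a well-conditioned Phi and tiny λ, the result is indistinguishable from the unregularized solution of (13).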

Here is the optimized shape of the Gaussian membership functions after being updated over 200 iterations (recall that we use the first 200 rows of the XIO matrix for training).

[Figure: Gaussian membership functions after being updated in 200 iterations]

The implementation of the LSE method for the rule consequent parameters reduces the SSE down to 0.0206, which is much smaller than the previous SSE; the performance of the neuro-fuzzy system is improved by a factor of up to 700%. Simulation during the validation process yields the following result.

[Figure: validation result using the LSE method, showing improvement over the previous result]

DISCUSSION AND CONCLUSION

From the simulations above, we can conclude two important remarks here:

a. From the simulation, we can see that the LSE method can enhance the performance of the Takagi-Sugeno neuro-fuzzy model for time series forecasting, although it requires more computational power. Even though the GMFs are not equally distributed along the universe of discourse, the system gives a satisfactory prediction of the Mackey-Glass data.

b. The value of η used for updating the parameters should be chosen properly, since the training is sensitive to it: too high a value makes the program tend to oscillate. The computation also involves a huge amount of data, so one must find an optimization procedure to simplify the program or accelerate the calculation process.


The results of this paper also confirm that the same performance could otherwise only be achieved with a high number of rules, which in turn would require much effort to reduce redundancies and conflicting rules.

REFERENCES

[1] …, Fuzzy Modelling, Proceedings of the International Conference on Neural Networks, 1995.
[2] Min-You Chen, D.A. Linkens, Rule-base self-generation and simplification for data-driven fuzzy models, Fuzzy Sets and Systems, 2003.
[3] Hossein Salehfar, Nagy Bengiamin, Jun Huang, A Systematic Approach to Linguistic Fuzzy Modeling Based on Input-Output Data, Proceedings of the 2000 Winter Simulation Conference, 2000.
[4] … Time Series using Neuro-Fuzzy Approach, Proceedings of IEEE-IJCNN, 1999.
[5] Palit, A.K., Popovic, D., Computational Intelligence in Time Series Forecasting: Theory and Engineering Applications, Springer-Verlag London, 2005.
[6] J.-S. Roger Jang, ANFIS: Adaptive-Network-based Fuzzy Inference Systems, IEEE Transactions on Systems, Man and Cybernetics, 1993.
[7] MATLAB, The Language of Technical Computing, User's Guide and Programming, The MathWorks Inc., 1998.
[8] Felix Pasila, Indar Sugiarto, Mamdani Model for Automatic Rule Generation of a MISO System, Proceedings of Soft Computing, Intelligent Systems and Information Technology (SIIT2005), 2005.
