Ecological Engineering
Institute of Mathematics and Informatics, Szent István University, Páter K. u. 1, Gödöllő, H-2103, Hungary
Department of Physical and Applied Geology, Eötvös Loránd University, Pázmány Péter sétány 1/C, Budapest, H-1117, Hungary
ARTICLE INFO
Article history:
Received 8 August 2016
Received in revised form 8 December 2016
Accepted 16 December 2016
Keywords:
Dissolved oxygen forecasting
General regression neural networks
Multilayer perceptron neural networks
Multivariate linear regression
Radial basis function neural network
ABSTRACT
Dissolved oxygen content is one of the most important parameters in the characterization of surface water conditions. Our goal is to forecast this parameter in Central Europe's most important river from other, easily measurable water quality parameters (pH, temperature, electrical conductivity and runoff) using linear and nonlinear models. We adapt four models for forecasting dissolved oxygen concentration, namely a Multivariate Linear Regression model, a Multilayer Perceptron Neural Network, a Radial Basis Function Neural Network and a General Regression Neural Network model. Data are available for Hungarian sampling locations on the River Danube (Mohács, Fajsz and Győrzámoly) for the period 1998–2003. The analysis was performed with four alternative combinations of locations: the models were formulated using data from the period 1998–2002 and a dissolved oxygen forecast was made for 2003. Evaluating model performance with various statistical measures (root mean square error, mean absolute error, coefficient of determination, and Willmott's index of agreement), we found that nonlinear models gave better results than linear models. In two cases the General Regression Neural Network provided the best performance; in two other cases the Radial Basis Function Neural Network gave the best results. A further goal was to conduct a sensitivity analysis to identify the parameter with the highest influence on the performance of the created models. Sensitivity analysis was performed for the combination of all three sampling locations (4th combination), and for all three neural network models it showed that pH has the most important role in estimating dissolved oxygen content.
© 2016 Elsevier B.V. All rights reserved.
1. Introduction
To have a proper understanding of surface waters it is vital to know the water quality parameters provided by the data of monitoring networks. The operation of a monitoring network can be improved considering various criteria (e.g. cost efficiency), or can be facilitated by estimating certain parameters from other parameters. A common first step in such modelling is to divide the available data into training, validation (where data are not scarce) and test sets. The creation of these sets can be undertaken in various ways. Most mainstream sources suggest random creation of the respective sets (1); in this case the term estimation or modelling should be used (Ahmed, 2014; Antanasijevic et al., 2014;
Basant et al., 2010; Emamgholizadeh et al., 2014; Heddam, 2014; Rankovic et al., 2012, 2010; Talib and Amat, 2012; Wen et al., 2013). Other sources divide the sets according to sampling points (2), assigning the majority of sampling points to the training set and a smaller proportion to the test set; in this case the correct terminology is spatial forecasting (Dogan et al., 2009; He et al., 2011a; Palani et al., 2008). Finally, some sources divide the temporal interval of measurement by assigning multiple initial years to the training set and a couple of final years to the test set (3); in this case temporal forecasting is performed (Antanasijevic et al., 2013; Ay and Kisi, 2012; Csábrági et al., 2015; He et al., 2011b; Singh et al., 2009). This article presents examples of the third case, temporal forecasting.
In the following, some results are presented from temporal forecasting studies. Antanasijevic et al. (2013) compared three Artificial Neural Networks (ANNs), namely a Multilayer Perceptron Neural Network (MLPNN), a General Regression Neural Network (GRNN) and a Recurrent Neural Network, with Multivariate Linear Regression (MLR) for the forecasting of DO in the River Danube at a single location in Serbia, Bezdan. The data from the years 2004–2008 were used as a training data set, and the data from 2009 were applied as the test data set. The authors found that the Recurrent Neural Network performed much better than the others. Singh et al. (2009) developed two MLPNN models to forecast the biological oxygen demand and DO concentration in the River Gomti, India, with the help of 11 input water quality parameters. The entire water quality data set spanning 10 years was divided into three sub-sets: the training set contained data from the first 6 years, the validation set comprised data of the next 2 years, and the test data set consisted of the data from the remaining final 2 years. The authors established that the MLPNN was a powerful predictive tool in the computation of water quality parameters. Ay and Kisi (2012) developed and compared two ANNs (the MLPNN and RBFNN) and MLR for the forecasting of DO concentration by using four parameters (water temperature (WT), pH, electrical conductivity (EC) and runoff (RF)) as input in Foundation Creek, Colorado, USA. The whole data set was collected from upstream and downstream USGS stations, and the training, validation and test data sets were divided by date of the experimental data set. Comparison of the results showed that the Radial Basis Function Neural Network (RBFNN) model performed better than the MLPNN and MLR models, and that the RBFNN model was quite effective without the runoff parameter in DO concentration forecasting. Finally, the downstream DO concentration was successfully forecasted using only water temperature data of the upstream station. He et al. (2011b) applied MLPNN and MLR to forecast the daily DO minimum and the daily DO variation in the Bow River, Canada. The water quality parameters of 2006–2007, recorded at 15 or 30 min intervals at both sampling sites, were used for the training set, and the test set contained the data from 2008. The DO minimum was forecast using water temperature and runoff, and the input parameters for the estimation of daily DO variance were radiation, water temperature and runoff. In both cases the MLPNN model outperformed the linear model.
Our main goal is to aid water quality management using estimation procedures which optimise the operation of monitoring by ensuring cost efficiency and representativity. This may be attained by providing forecasts of DO concentration, which is one of the most important hydrochemical parameters, using easily measurable physical and chemical parameters. We use the mainstream approaches to reach our objective, i.e. MLR and the various ANN methods, and we provide (1) an efficiency ranking of these methods for different combinations of sampling locations (see the details below).
Table 1
Descriptive statistics of input and output data at the three sampling locations (CV: coefficient of variation; SD: standard deviation).

Station | Parameter | Max | Min | Mean | SD | CV
D11 | RF | 5400 | 910 | 2363.19 | 979.18 | 0.41
D11 | WT | 25.9 | 0.2 | 12.84 | 7.42 | 0.58
D11 | pH | 8.75 | 7.8 | 8.24 | 0.22 | 0.03
D11 | EC | 530 | 272 | 377.62 | 58.98 | 0.16
D11 | DO | 17.7 | 6.8 | 11.04 | 1.80 | 0.16
D9 | RF | 5310 | 920 | 2346.74 | 939.77 | 0.40
D9 | WT | 25.2 | 0.2 | 12.49 | 7.50 | 0.60
D9 | pH | 8.85 | 7.75 | 8.26 | 0.26 | 0.03
D9 | EC | 525 | 256 | 371.30 | 57.31 | 0.15
D9 | DO | 15.5 | 7 | 11.20 | 1.60 | 0.14
D2 | RF | 5130 | 618 | 1940.58 | 770.25 | 0.40
D2 | WT | 22.8 | 0 | 11.59 | 6.48 | 0.56
D2 | pH | 8.9 | 7.02 | 8.13 | 0.27 | 0.03
D2 | EC | 560 | 294 | 370.56 | 48.71 | 0.13
D2 | DO | 14.03 | 5.76 | 9.82 | 1.54 | 0.16

The most variable parameters were RF and WT, the most stable parameter was pH, while EC and DO show similar degrees of variance. It can also be seen that there is little difference between sampling locations D11 and D9 (see e.g. Kovács et al., 2015a). On the other hand, D2 significantly differs from the other sampling locations, which can mainly be explained by the large distance (more than 298 rkm) between D2 and D9. The difference is well shown by the average values of runoff and temperature.

The methods presented in sub-chapters 2.3–2.6 were used in all combinations; the learning set consisted of data from 1998 to 2002, and the test set of data from 2003. Finally, sensitivity analysis is presented for the CD combination.

2.3. Multivariate linear regression (MLR)

MLR is used to estimate the linear association between the dependent and one or more independent parameters. MLR is based on least squares (Draper and Smith, 1981); it expresses the value of the dependent parameter as

DO = β0 + Σi βi·xi,   (1)

where xi is the value of the ith predictor parameter, β0 is the regression constant, and βi is the coefficient of the ith predictor parameter.
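As an illustration only (the study itself used MATLAB, and the data values below are invented), a model of the form of Eq. (1) can be fitted by ordinary least squares, e.g. in Python with NumPy:

```python
import numpy as np

# Invented example values for the four predictors (RF, WT, pH, EC) and DO.
X = np.array([[1200.0, 14.2, 8.1, 350.0],
              [2100.0, 18.9, 8.3, 365.0],
              [3000.0,  5.4, 8.0, 410.0],
              [1500.0, 22.1, 8.4, 340.0],
              [2600.0,  9.8, 8.2, 390.0],
              [1800.0, 12.5, 8.2, 375.0]])
DO = np.array([10.9, 9.4, 12.5, 8.8, 11.6, 10.7])  # mg L-1, invented

# Prepend an intercept column so beta[0] plays the role of the
# regression constant beta_0 in Eq. (1).
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, DO, rcond=None)  # least-squares fit
DO_hat = A @ beta                              # fitted DO values
```

The coefficient vector `beta` then contains the constant followed by one coefficient per predictor.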
2.4. Multilayer perceptron neural networks (MLPNN)
ANNs are basically parallel computing systems similar to biological neural networks. Among the various types of ANNs, the MLPNN structure is the most commonly used and is a well-researched basic ANN architecture. The MLPNN generally has three layers: input, output and one or more hidden layer(s), as shown in Fig. 2. Each layer consists of one or more basic element(s) called a neuron or a node (or a processing unit). Nodes are connected to each other by links, and these synapses are characterised by a weight factor denoting the strength of the connection between two nodes (wi,j). Each node in the input and inner layers receives input values, processes them, and passes them to the next layer. This process is conducted using weights (Dogan et al., 2009), meaning that the hidden layer sums the weighted inputs and its own bias values (bk) and uses its own transfer function to create an output value (yk). Typical transfer functions are the linear, sigmoid or hyperbolic tangent functions (Haykin, 1999).
MLPNNs are trained on the input data using an error back-propagation algorithm (Antanasijevic et al., 2013). Back-propagation was proposed by Rumelhart et al. (1986), and it is the most popular algorithm for the training of an MLPNN (Haykin, 1999). The back-propagation algorithm has two steps. The first step is a forward pass, in which the effect of the input is passed forward through the network to reach the output layer and to calculate the output value (yk). After the error is computed (ek), a second step starts backward through the network (Emamgholizadeh et al., 2014) to correct the initially assigned weights in such a way as to minimize the error. This represents one complete cycle, in which all data pass through the network, and is known as an epoch. The term feed-forward means that a node connection only exists from a node in the input layer to nodes in the hidden layer, or from a node in the hidden layer to nodes in the output layer; nodes within a layer are not interconnected with each other, and there are no lateral or feedback connections. An MLPNN using a back-propagation algorithm is sensitive to the randomly assigned initial connection weights (Kim and Kim, 2008). The initialization of weights and bias values for a layer is conducted using the Nguyen-Widrow method in the MATLAB environment (Pavelka and Procházka, 2004), and these initial values are dissimilar on every single run, so after the training process different predicted values are obtained.

In this study, the Levenberg-Marquardt algorithm is applied to adjust the MLPNN weights (Marquardt, 1963), and the number of epochs was 1000. One hidden layer and a hyperbolic tangent sigmoid transfer function were used between the input and the hidden layers, and a linear transfer function was applied between the hidden and output layers. The Neural Network Toolbox of MATLAB was utilized for all three ANNs.
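The forward and backward passes described above can be sketched in a few lines. This is a simplified stand-in, not the authors' setup: it uses plain gradient descent instead of Levenberg-Marquardt and invented data, but it keeps the configuration described above (one tanh hidden layer, linear output):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))  # 50 samples, 4 inputs (standing in for RF, WT, pH, EC)
y = X @ np.array([0.5, -1.0, 2.0, 0.3]) + 0.1 * rng.normal(size=50)  # invented target

n_hidden = 5
W1 = rng.normal(scale=0.5, size=(4, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

lr = 0.01
losses = []
for epoch in range(500):
    # Forward pass: tanh hidden layer, linear output layer.
    H = np.tanh(X @ W1 + b1)
    y_hat = (H @ W2 + b2).ravel()
    err = y_hat - y
    losses.append(float(np.mean(err ** 2)))
    # Backward pass: propagate the error back through the layers.
    g_out = (2.0 / len(y)) * err[:, None]        # dLoss/dy_hat
    gW2 = H.T @ g_out; gb2 = g_out.sum(axis=0)
    g_hidden = (g_out @ W2.T) * (1.0 - H ** 2)   # tanh derivative
    gW1 = X.T @ g_hidden; gb1 = g_hidden.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

Each pass through the loop is one epoch in the sense used above; the loss should shrink as the weights are corrected.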
2.5. Radial basis function neural networks (RBFNN)

RBFNN was first introduced into the neural network literature by Broomhead and Lowe (1988) and Poggio and Girosi (1990). The RBFNN has a feed-forward structure including one input layer, a single hidden layer and one output layer, as shown in Fig. 3; the hidden-layer centres are typically obtained by unsupervised learning. For an input vector (x1, ..., xN), each hidden node j first computes its squared distance from the node's centre (wij):

Dj = Σ(i=1..N) (xi − wij)²,   (2)

then applies a Gaussian transfer function with smoothing factor (spread) σ:

f(Dj) = exp( −Dj / (2σ²) ).   (3)

The network output is the weighted sum of the hidden-node responses plus a bias b:

y = Σ(j=1..m) wj·f(Dj) + b.   (4)
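Eqs. (2)-(4) amount to only a few lines of code. A minimal sketch follows, with hypothetical centres, output weights and σ (not fitted to the study's data):

```python
import numpy as np

def rbf_forward(x, centres, out_weights, bias, sigma):
    """Output of an RBF network for one input vector x (Eqs. (2)-(4))."""
    D = ((x - centres) ** 2).sum(axis=1)    # Eq. (2): squared distances to centres
    f = np.exp(-D / (2.0 * sigma ** 2))     # Eq. (3): Gaussian basis responses
    return out_weights @ f + bias           # Eq. (4): weighted sum plus bias

centres = np.array([[0.0, 0.0], [1.0, 1.0]])   # hypothetical hidden-layer centres
y = rbf_forward(np.array([0.0, 0.0]), centres,
                out_weights=np.array([1.0, 1.0]), bias=0.0, sigma=1.0)
```

For the input at the first centre, the two basis responses are 1 and exp(−1), so the output is their sum.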
2.6. General regression neural networks (GRNN)

In the GRNN (Specht, 1991), the smoothing factor (σ) is an independent parameter in the model, and the number of pattern neurons is equal to the number of data samples. The training between the input layer and the pattern layer corresponds to the learning between the input and the hidden layer of the RBFNN. The number of neurons in the summation layer can be expressed as No + 1, where No is the number of output neurons (Antanasijevic et al., 2014). Since the model has only one output, each pattern-layer unit is connected to the two neurons in the summation layer: the S-summation neuron and the D-summation neuron. The weights between the S-summation neuron and the pattern neurons are equal to the measured values of the output parameter (target data). The S-summation neuron computes the sum of the weighted outputs of the pattern layer (S), while the D-summation neuron calculates the unweighted outputs of the pattern neurons (D).
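The pattern-layer/summation-layer computation just described can be sketched as follows, with invented training data and an arbitrary smoothing factor:

```python
import numpy as np

def grnn_predict(x, train_X, train_y, sigma):
    """GRNN output for one input x: pattern layer plus S/D summation neurons."""
    D = ((x - train_X) ** 2).sum(axis=1)   # distance of x to every training pattern
    f = np.exp(-D / (2.0 * sigma ** 2))    # pattern-layer activations
    S = (train_y * f).sum()                # S-summation: weighted by the target data
    Dsum = f.sum()                         # D-summation: unweighted activations
    return S / Dsum                        # output layer: S divided by D

train_X = np.array([[0.0], [1.0], [2.0]])   # one pattern neuron per sample
train_y = np.array([10.0, 11.0, 12.0])      # invented targets (e.g. DO values)
pred = grnn_predict(np.array([1.0]), train_X, train_y, sigma=0.5)
```

Because the output is a weighted average of the targets, a prediction always lies within the range of the training targets; for the symmetric query above it equals the middle target.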
S = Σ(j=1..K) yj·f(Dj),   (5)

D = Σ(j=1..K) f(Dj).   (6)

Finally, the output layer merely divides the S-summation neuron by the D-summation neuron (Heddam, 2014): y = S/D.

Model performance on the training and test data sets was evaluated with the root mean square error (RMSE), the mean absolute error (MAE), the coefficient of determination (R²) and Willmott's index of agreement (IA):

RMSE = sqrt( (1/n) · Σ(i=1..n) (Oi − Pi)² ),   (7)

MAE = (1/n) · Σ(i=1..n) |Oi − Pi|,   (8)

R² = [ Σ(i=1..n) (Oi − Ō)(Pi − P̄) ]² / [ Σ(i=1..n) (Oi − Ō)² · Σ(i=1..n) (Pi − P̄)² ],   (9)

IA = 1 − Σ(i=1..n) (Oi − Pi)² / Σ(i=1..n) ( |Pi − Ō| + |Oi − Ō| )²,   (10)

where Oi and Pi are the observed and predicted DO values for the ith sample, and Ō and P̄ are their respective means.
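The four measures in Eqs. (7)-(10) translate directly into code; the observed/predicted vectors below are invented:

```python
import numpy as np

def rmse(O, P):   # Eq. (7): root mean square error
    return float(np.sqrt(np.mean((O - P) ** 2)))

def mae(O, P):    # Eq. (8): mean absolute error
    return float(np.mean(np.abs(O - P)))

def r2(O, P):     # Eq. (9): squared Pearson correlation
    num = np.sum((O - O.mean()) * (P - P.mean())) ** 2
    den = np.sum((O - O.mean()) ** 2) * np.sum((P - P.mean()) ** 2)
    return float(num / den)

def ia(O, P):     # Eq. (10): Willmott's index of agreement
    num = np.sum((O - P) ** 2)
    den = np.sum((np.abs(P - O.mean()) + np.abs(O - O.mean())) ** 2)
    return float(1.0 - num / den)

O = np.array([6.8, 9.0, 11.2, 13.5])   # invented observed DO values, mg L-1
P = np.array([7.1, 8.6, 11.0, 13.9])   # invented predictions
```

A perfect prediction gives RMSE = MAE = 0 and R² = IA = 1; IA is bounded by 1 from above.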
3.1. Prediction using multivariate linear regression on CA

In the first case, the MLR1 model with four predictor parameters for the training data set was significant (the result of the F-test was 8.3E-19). The performance of this model is characterised by the least square error (1.13 mg L-1) and the value of R² (0.52). The p-values of RF and EC indicated that these parameters were not significant in the MLR1 model (p-value higher than 0.05, Table 2), and so another model, denoted MLR2, was used without these parameters.

In the second case, without RF and EC, the MLR2 model was also significant (3.25862E-20). The p-values of the two predictor parameters (pH and WT) were under 0.05, so these parameters were acceptable (Table 2). Eq. (11) describes the relationship between the DO value and the two predictor parameters:

DO = -0.18·WT + 5.91·pH - 35.46.   (11)

Applied to the test data set, this equation gave a least square error of 2.03 mg L-1 and an R² of 0.4 (Table 3, first row).
3.2. Prediction using multilayer perceptron neural network on CA
Table 2
Coefficients and errors of the MLR models on CA.

First case (MLR1):
Parameter | Coefficient | Standard error | p-value
(Constant) | -30.251 | 6.228 | 4.85717
RF | 0.000 | 0.000 | 0.339514
WT | -0.224 | 0.036 | 4.78E-09
pH | 5.664 | 0.651 | 1.73E-14
EC | -0.006 | 0.004 | 0.171113

Second case (MLR2):
Parameter | Coefficient | Standard error | p-value
(Constant) | -35.460 | 4.746 | 1.2E-11
WT | -0.183 | 0.018 | 1.6E-18
pH | 5.912 | 0.591 | 1.2E-17
Table 3
Evaluation of model performance on the training and test data sets in all four combinations (RMSE and MAE in mg L-1).

Comb. | Model | RMSE train | RMSE test | MAE train | MAE test | R² train | R² test | IA train | IA test
CA | MLR2 | 1.14 | 2.03 | 0.79 | 1.38 | 0.51 | 0.4 | 0.82 | 0.72
CA | MLPNN | 0.65 | 1.57 | 0.43 | 1.28 | 0.85 | 0.57 | 0.93 | 0.77
CA | RBFNN | 0.84 | 1.65 | 0.62 | 1.25 | 0.74 | 0.59 | 0.92 | 0.78
CA | GRNN | 0.47 | 1.42 | 0.27 | 1.14 | 0.93 | 0.72 | 0.98 | 0.87
CB | MLR2 | 1.00 | 1.94 | 0.75 | 1.41 | 0.49 | 0.44 | 0.81 | 0.76
CB | MLPNN | 0.64 | 1.72 | 0.46 | 1.22 | 0.80 | 0.58 | 0.93 | 0.79
CB | RBFNN | 0.48 | 1.62 | 0.37 | 1.33 | 0.88 | 0.54 | 0.97 | 0.75
CB | GRNN | 0.35 | 1.74 | 0.23 | 1.27 | 0.94 | 0.55 | 0.98 | 0.77
CC | MLR2 | 1.13 | 1.57 | 0.84 | 1.23 | 0.43 | 0.50 | 0.77 | 0.72
CC | MLPNN | 0.97 | 1.46 | 0.74 | 1.21 | 0.58 | 0.59 | 0.84 | 0.72
CC | RBFNN | 0.79 | 1.43 | 0.60 | 1.19 | 0.72 | 0.47 | 0.91 | 0.75
CC | GRNN | 0.77 | 1.36 | 0.56 | 1.09 | 0.75 | 0.60 | 0.91 | 0.75
CD | MLR2 | 1.31 | 1.98 | 0.97 | 1.41 | 0.36 | 0.41 | 0.72 | 0.73
CD | MLPNN | 0.96 | 1.70 | 0.67 | 1.21 | 0.66 | 0.59 | 0.88 | 0.78
CD | RBFNN | 1.11 | 1.63 | 0.81 | 1.17 | 0.54 | 0.59 | 0.83 | 0.77
CD | GRNN | 0.92 | 1.70 | 0.62 | 1.21 | 0.70 | 0.55 | 0.89 | 0.77
Fig. 5. Boxplot diagrams of sixty runs for the test dataset on CA.
Each network was therefore run 60 times with identical settings, the results of these 60 runs were averaged for each sample on both the training and the test set, and statistical indicators were calculated for the data estimated in this manner. We will use this approach and compare the MLPNN with the other neural networks below.
In the case of the MLPNN, we assessed the performance of the network for different neuron numbers, from 2 to 9 (Fig. 6). It was found that the RMSE of the test set decreased monotonically in the range of 2–5 neurons and increased between 5 and 9 neurons, reaching a minimal value at 5 neurons. Thus, for the current set of parameters, the 5-neuron setting provides the most precise MLPNN estimation.
Fig. 6. RMSE of the test set with different neuron numbers using MLPNN on CA.
3.5.3. CD

Finally, all three locations were simultaneously analysed (CD) as a composite system. As usual, two MLR models were formulated, and as not all parameters turned out to be significant in the first model, the EC parameter was excluded from the second MLR model, which then had only significant parameters (Table 3). The MLPNN model was most efficient with 6 neurons; RBFNN model runs showed a smoothing factor of 0.46 as ideal, with 14 neurons in the hidden layer in this case. The best results could be achieved with the GRNN model if a smoothing factor of 0.44 was used as an input parameter. The composite system created in this manner was most efficiently forecasted with the RBFNN model.
3.6. Sensitivity analysis

Sensitivity analysis was performed to identify which of the four parameters is the most important in predicting DO values. We tested all three neural network models by omitting one parameter on each run and examining how this affects model performance. We analysed the RMSE values of the test set and compared them with the RMSE values obtained when the complete parameter range was used for the test set. This allowed us to develop a ranking of parameters, as it was obvious that the omission of the most important parameter would have the highest influence on the RMSE and result in the largest loss of model performance. Sensitivity analysis was performed for CD with all three neural networks (Table 4). The results with each neural network confirmed the importance of the pH parameter, as model performance significantly deteriorated without this parameter.
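The omission procedure can be sketched generically. As a stand-in for the three ANNs, the snippet below uses a simple least-squares model on invented data in which pH is constructed to dominate, so it merely illustrates the ranking logic, not the paper's result:

```python
import numpy as np

rng = np.random.default_rng(1)
names = ["RF", "WT", "pH", "EC"]
X = rng.normal(size=(200, 4))
# Invented target in which column 2 (pH) carries most of the signal.
y = (0.3 * X[:, 0] + 0.5 * X[:, 1] + 3.0 * X[:, 2] + 0.2 * X[:, 3]
     + 0.1 * rng.normal(size=200))
Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]

def fit_rmse(Xtr, ytr, Xte, yte):
    """Fit a linear model on the training part; return test RMSE."""
    A = np.column_stack([np.ones(len(Xtr)), Xtr])
    beta, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    pred = np.column_stack([np.ones(len(Xte)), Xte]) @ beta
    return float(np.sqrt(np.mean((yte - pred) ** 2)))

baseline = fit_rmse(Xtr, ytr, Xte, yte)
loss = {}
for j, name in enumerate(names):
    keep = [k for k in range(4) if k != j]            # skip one parameter per run
    loss[name] = fit_rmse(Xtr[:, keep], ytr, Xte[:, keep], yte) - baseline
most_important = max(loss, key=loss.get)              # largest RMSE increase
```

The parameter whose omission increases the test RMSE the most is ranked as the most important, mirroring the procedure described above.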
4. Discussion
4.1. Validity of multivariate linear regression
The validity of the linear model was successfully established through an F-test. However, when testing for the significance of the individual parameters, it was found that some parameters are not statistically different from zero (e.g. RF and EC in the case of CA). Thus, the inclusion of these parameters does not improve model accuracy. Despite the general practice, which ignores these constraints (Akkoyunlu et al., 2011; Antanasijevic et al., 2013; Ay and Kisi, 2012; He et al., 2011b), we deemed it necessary to construct an MLR2 model without the respective parameters. In this case the model and its parameters are statistically significant.
4.2. Development of the MLPNN
The 60-fold repeated iterations of the MLPNN (Fig. 5) highlighted the fact that, due to the random initialisation, identical model settings can result in highly distinct results. Most of the earlier works in this field ignored this phenomenon (Dogan et al., 2009; Kuo et al., 2007; Rankovic et al., 2010; Singh et al., 2009; Talib and Amat, 2012), though Palani et al. (2008) raise this issue.

In an optimal scenario, the 60-fold reiteration should produce results with zero variance; that is, all of the results should be identical. Throughout the course of the analysis, the opposite was experienced: variances of the estimations of individual observations of the test set ranged between 0.11 and 32.69.

Thus, this repeated-iteration approach turned out to be well founded, as it allowed us to manage the outliers of certain MLPNN runs; without this, extremely incorrect results could randomly be generated and accepted.
(The minimum of the curve in Fig. 7 is at σ = 0.26, with a test RMSE of approximately 1.65 mg L-1.)
Fig. 7. RMSE of the test set with different smoothing factors using RBFNN on CA.
(The minimum of the curve in Fig. 8 is at σ = 0.3, with a test RMSE of approximately 1.4 mg L-1.)
Fig. 8. RMSE of the test set with different smoothing factors using GRNN on CA.
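Sweeps like the ones plotted in Figs. 7 and 8 can be sketched with the GRNN output rule (S/D) on invented data; the σ grid below is illustrative, not the one used in the study:

```python
import numpy as np

rng = np.random.default_rng(2)
# Invented one-dimensional regression problem.
Xtr = rng.uniform(0, 10, size=(80, 1))
ytr = np.sin(Xtr).ravel() + 0.1 * rng.normal(size=80)
Xte = rng.uniform(0, 10, size=(40, 1))
yte = np.sin(Xte).ravel()

def grnn(x, sigma):
    """GRNN prediction for one query point with smoothing factor sigma."""
    f = np.exp(-((x - Xtr) ** 2).sum(axis=1) / (2.0 * sigma ** 2))
    return (ytr * f).sum() / f.sum()

results = {}
for sigma in [0.05, 0.1, 0.2, 0.3, 0.5, 1.0, 2.0]:
    pred = np.array([grnn(x, sigma) for x in Xte])
    results[sigma] = float(np.sqrt(np.mean((yte - pred) ** 2)))
best = min(results, key=results.get)   # sigma with the lowest test RMSE
```

As in the figures, very large σ over-smooths the response and inflates the test RMSE, so the curve has an interior minimum.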
Table 4
Results of the sensitivity analysis on combination CD for the test set.

RMSE | All | Skip pH | Skip WT | Skip EC | Skip RF
MLPNN | 1.70 | 1.88 | 1.84 | 1.77 | 1.81
RBFNN | 1.63 | 3.90 | 1.81 | 1.71 | 1.71
GRNN | 1.70 | 1.90 | 1.81 | 1.78 | 1.74

R² | All | Skip pH | Skip WT | Skip EC | Skip RF
MLPNN | 0.59 | 0.43 | 0.51 | 0.54 | 0.46
RBFNN | 0.59 | 0.29 | 0.45 | 0.52 | 0.51
GRNN | 0.55 | 0.48 | 0.52 | 0.48 | 0.50
REi = (Oi − Pi) / Oi × 100 (%),   (13)

where Oi and Pi are the observed and predicted DO values for the ith element.
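A direct transcription of Eq. (13), with invented values:

```python
import numpy as np

def relative_error(O, P):
    """Per-sample relative error in percent: (observed - predicted) / observed."""
    return (O - P) / O * 100.0

O = np.array([10.0, 8.0])   # invented observed DO values
P = np.array([9.0, 8.4])    # invented predictions
re = relative_error(O, P)   # positive for under-, negative for over-prediction
```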
Based on the RE values, it can be asserted that certain models have dynamic errors (Fig. 9). Our models differ in their degree of accuracy in describing the variance of dissolved oxygen. According to the test set, the smallest errors can be expected in those intervals where the DO variance is small (e.g. fall and winter). In these intervals the RE values of the models are smaller and don't differ significantly.
Fig. 9. The error distribution of the four models for the test data set on CA.
Table 5
Values of RMSE and R² in the respective combinations as a fraction of the MLR model values.

RMSE | CA | CB | CC | CD
MLR | 100% | 100% | 100% | 100%
MLP | 77% | 89% | 93% | 86%
RBNN | 81% | 84% | 91% | 82%
GRNN | 70% | 90% | 87% | 86%

R² | CA | CB | CC | CD
MLR | 100% | 100% | 100% | 100%
MLP | 143% | 133% | 117% | 135%
RBNN | 148% | 123% | 94% | 143%
GRNN | 180% | 126% | 119% | 143%
Kovács, J., Márkus, L., Szalai, J., Kovács, I.S.Z., 2015b. Detection and evaluation of changes induced by the diversion of River Danube in the territorial appearance of latent effects governing shallow-groundwater fluctuations. J. Hydrol. 520, 314–325.
Kuo, J., Hsieh, M., Lung, W., She, N., 2007. Using artificial neural network for reservoir eutrophication prediction. Ecol. Modell. 200, 171–177.
Liang, C., Xin, S., Dongsheng, W., Xiujing, Y., Guodong, J., 2016. The ecological benefit-loss evaluation in a riverine wetland for hydropower projects: a case study of Xiaolangdi reservoir in the Yellow River, China. Ecol. Eng. 96, 34–44.
Liska, I., Wagner, F., Sengl, M., Deutsch, K., Slobodník, J., 2015. Joint Danube Survey 3. International Commission for the Protection of the Danube River, Vienna. ISBN: 978-3-200-03795-3.
Marquardt, D., 1963. An algorithm for least squares estimation of nonlinear parameters. J. Soc. Ind. Appl. Math. 11, 431–441.
Molnár, M., 2002. Possible role of nuclear power in reducing greenhouse-gas emissions in the Hungarian power sector. In: Proceedings of the 4th International Conference on Nuclear Option in Countries with Small and Medium Electricity Grids, Croatian Nuclear Society, Zagreb, pp. 1–7.
Najah, A., El-Shafie, A., Karim, O.A., Jaafar, O., El-Shafie, A.H., 2011. An application of different artificial intelligence techniques for water quality prediction. Int. J. Phys. Sci. 6 (22), 5298–5308. http://dx.doi.org/10.5897/IJPS11.1180.
Onderka, M., Pekárová, P., 2008. Retrieval of suspended particulate matter concentrations in the Danube River from Landsat ETM data. Sci. Total Environ. 397 (1–3), 238–243.
Palani, S., Liong, S., Tkalich, P., 2008. An ANN application for water quality forecasting. Mar. Pollut. Bull. 56, 1586–1597.
Pavelka, A., Procházka, A., 2004. Algorithms for initialization of neural network weights. In: Sborník příspěvků 11. Konference MATLAB, 2, 453–459.
Poggio, T., Girosi, F., 1990. Regularization algorithms for learning that are equivalent to multilayer networks. Science 247 (4945), 978–982.
Rankovic, V., Radulovic, J., Radojevic, I., Ostojic, A., Comic, L., 2010. Neural network modelling of dissolved oxygen in the Gruza reservoir, Serbia. Ecol. Modell. 221, 1239–1244.
Rankovic, V., Radulovic, J., Radojevic, I., Ostojic, A., Comic, L., 2012. Prediction of dissolved oxygen in reservoirs using adaptive network-based fuzzy inference system. J. Hydroinf. 14, 167–179.
Rumelhart, D.E., Hinton, G.E., Williams, R.J., 1986. Learning internal representations by error propagation. In: Rumelhart, D.E., McClelland, J.L. (Eds.), Parallel Distributed Processing. MIT Press, Cambridge, pp. 318–362. ISBN: 9780262680530.
Singh, K., Basant, A., Malik, A., Jain, G., 2009. Artificial neural network modeling of the river water quality: a case study. Ecol. Modell. 220, 888–895.
Sommerhäuser, M., Robert, S., Birk, S., Hering, D., Moog, O., Stubauer, I., Ofenböck, T., 2003. Final report for developing the typology of surface waters and defining the relevant reference conditions. UNDP/GEF Danube Regional Project, Vienna (accessed 1 December 2016). http://www.undp-drp.org/pdf/1.1 River%20Basin%20Management%20-%20Phase%201/1.1 UNDP-DRP Typology%20of%20SW 116 fr.pdf.
Specht, D.F., 1991. A general regression neural network. IEEE Trans. Neural Netw. 2, 568–576.
Talib, A., Amat, M.I., 2012. Prediction of chemical oxygen demand in Dondang River using artificial neural network. Int. J. Inform. Educ. Technol. 2, 259–261.
Turnpenny, A.W.H., Coughlan, J., Ng, B., Crews, P., Bamber, R.N., Rowles, P., 2010. Cooling water options for the new generation of nuclear power stations in the UK. Environment Agency, Bristol (accessed 3 December 2016). https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/291077/scho0610bsot-e-e.pdf.
Venkatesan, P., Anitha, S., 2006. Application of a radial basis function neural network for diagnosis of diabetes mellitus. Curr. Sci. 91, 1195–1199.
Wen, X., Fang, J., Diao, M., Zhang, C., 2013. Artificial neural network modelling of dissolved oxygen in the Heihe River, Northwestern China. Environ. Monit. Assess. 185, 4361–4371.
Wetzel, R.G., 2001. Oxygen. In: Wetzel, R.G., Limnology, third ed. Academic Press, San Diego, pp. 151–168. ISBN: 9780127447605.