
1.0 INTRODUCTION

Our organization, the Road Development Authority, designs structures for roads. Box culverts are the main type of drainage structure we design. The organization maintains typical culvert tables covering different sizes and fill heights, and these tables can be used to train a neural network to predict unknown parameters.

2.0 DATA SET


Input data:
  Box width, w (m)
  Box height, h (m)
  Fill height, H (m)
  Reinforcement provided, A (mm²/m)

Output data:
  Thickness of top slab, t (mm)

Total number of data: 37
2/3 of total data: ≈ 25
Number of training examples: 25
Number of testing examples: 12
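As a small illustration of the 2/3 split described above (a sketch only, not the report's procedure; the report appears to simply take the first 25 rows for training, and the shuffle and seed below are assumptions):

```python
# Minimal sketch of a 2/3 : 1/3 split of the 37 records.
import random

records = list(range(1, 38))    # stand-ins for the 37 (w, h, H, A, t) rows
random.seed(0)                  # fixed seed so the split is repeatable
random.shuffle(records)         # assumption: randomized rather than first-25 split

n_train = round(len(records) * 2 / 3)          # 2/3 of 37 -> 25
train, test = records[:n_train], records[n_train:]
print(len(train), len(test))                   # 25 12
```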

Data set (top slab)

Training examples:

No   w (m)   h (m)   H (m)   A (mm²/m)   t (mm)
 1                             1005        200
 2                             1795        250
 3                             1005        225
 4                             1340        200
 5                             2094        225
 6                             1005        200
 7                             1795        225
 8                             2094        225
 9                             1340        200
10                             1795        225
11                             1608        275
12                             1608        275
13                             1340        275
14                             1608        300
15                             1608        300
16                             2094        350
17                             2094        350
18                             2094        350
19                             2094        350
20                             1340        250
21                             1795        300
22                             1340        300
23                             2094        400
24                             1340        300
25                             2094        400

Testing examples:

No   w (m)   h (m)   H (m)   A (mm²/m)   t (mm)
 1                             1340        200
 2                             2094        225
 3                             1795        225
 4                             1340        200
 5                             1005        200
 6                             2094        225
 7                             1340        275
 8                             2094        300
 9                             2094        350
10                             2094        350
11                             1795        350
12                             1795        350

3.0 NEURAL NETWORK


The WinNN32 software was used to train and test the data set.

The raw data was normalized before training to improve performance. The Eta (learning rate) and Alpha (momentum) values were set to 0.5. A sigmoid transfer function was used, and the target error values were adjusted so that 90% or more good patterns were achieved. Networks with three and with two middle-layer nodes were each tried at two different target errors, giving four distinct predictions.
The same data set was used in a multiple regression analysis, and the test data set was evaluated accordingly. All output data are displayed together with the mean absolute error and |1 − Ratio average|.
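Since the report used the WinNN32 GUI, its internal training loop is not shown here. The following is a minimal NumPy sketch of an equivalent setup under the settings above (min-max normalization, sigmoid transfer function, Eta = 0.5, Alpha = 0.5); the network size, epoch count, and the random stand-in data are assumptions for illustration only.

```python
# Illustrative sketch only (the report used the WinNN32 GUI): a 4:3:1
# sigmoid network trained by backpropagation with Eta = 0.5 (learning
# rate) and Alpha = 0.5 (momentum) on min-max normalized data.
# The input/output values here are random stand-ins, not the culvert table.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform([1.5, 1.5, 0.5, 1005], [3.5, 3.5, 4.0, 2094], (25, 4))
t = rng.uniform(200, 400, (25, 1))

def minmax(a):
    # scale each column to [0, 1], as done to the raw data before training
    return (a - a.min(0)) / (a.max(0) - a.min(0))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

Xn, tn = minmax(X), minmax(t)
W1, W2 = rng.normal(0, 0.5, (4, 3)), rng.normal(0, 0.5, (3, 1))
b1, b2 = np.zeros(3), np.zeros(1)
dW1 = dW2 = 0.0
eta, alpha = 0.5, 0.5

for _ in range(5000):
    h = sigmoid(Xn @ W1 + b1)          # mid layer
    y = sigmoid(h @ W2 + b2)           # output layer
    err = y - tn
    g2 = err * y * (1 - y)             # output-layer delta
    g1 = (g2 @ W2.T) * h * (1 - h)     # mid-layer delta
    dW2 = -eta * (h.T @ g2) / len(Xn) + alpha * dW2   # momentum update
    dW1 = -eta * (Xn.T @ g1) / len(Xn) + alpha * dW1
    W2 += dW2; b2 -= eta * g2.mean(0)
    W1 += dW1; b1 -= eta * g1.mean(0)

print("training MSE:", float((err ** 2).mean()))
```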

3.1 TRAIN 4:3:1 NEURAL NETWORK

Input layer nodes: 4
Mid layer nodes: 3
Output layer node: 1

Number of parameters = (4 x 3) + (3 x 1) weights + 4 biases = 19
Recommended training examples = No. of parameters x 1.5 = 19 x 1.5 = 28.5
Available training examples = 25

Although the recommended 28.5 exceeds the 25 training examples available, it is still possible to train this neural network, since the number of parameters (19) is less than the number of training examples (25).
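The parameter count above can be reproduced with a short sketch. It assumes one bias per mid-layer and output node, which matches the totals of 19 and 13 used in this report.

```python
# Sketch: parameter count for an a:b:1 feed-forward network, assuming one
# bias per mid-layer and output node (this reproduces the 19 and 13 above).
def mlp_params(n_in, n_mid, n_out=1):
    weights = n_in * n_mid + n_mid * n_out   # input->mid and mid->output links
    biases = n_mid + n_out                   # one bias per non-input node
    return weights + biases

for mid in (3, 2):
    p = mlp_params(4, mid)
    print(f"4:{mid}:1 -> {p} parameters; "
          f"recommended examples = {p * 1.5}; available = 25")
# 4:3:1 -> 19 parameters; recommended examples = 28.5; available = 25
# 4:2:1 -> 13 parameters; recommended examples = 19.5; available = 25
```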

Artificial Neural Network (3 middle nodes, target error 0.01) prediction of slab thickness (mm)

No   Network   Target   Absolute error   Ratio (Network/Target)
 1     216       200          16                 1.08
 2     255       225          30                 1.13
 3     219       225           6                 0.97
 4     212       200          12                 1.06
 5     213       200          13                 1.06
 6     241       225          16                 1.07
 7     283       275           8                 1.03
 8     325       300          25                 1.08
 9     351       350           1                 1.00
10     329       350          21                 0.94
11     376       350          26                 1.07
12     364       350          14                 1.04

Mean absolute error = 16
|1 − Ratio average| = 0.05
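For reference, the two measures reported under each table can be computed as in this short sketch, using the Network and Target columns of the table above:

```python
# Sketch: mean absolute error and |1 - average(Network/Target)| for the
# 4:3:1 network with target error 0.01 (values from the table above).
import numpy as np

network = np.array([216, 255, 219, 212, 213, 241, 283, 325, 351, 329, 376, 364])
target  = np.array([200, 225, 225, 200, 200, 225, 275, 300, 350, 350, 350, 350])

mae = np.abs(network - target).mean()                # 15.7 -> reported as 16
one_minus_ravg = abs(1 - (network / target).mean())  # 0.046 -> reported as 0.05

print(round(mae), round(one_minus_ravg, 2))          # 16 0.05
```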

Artificial Neural Network (3 middle nodes, target error 0.005) prediction of slab thickness (mm)

No   Network   Target   Absolute error   Ratio (Network/Target)
 1     229       200          29                 1.14
 2     229       225           4                 1.02
 3     220       225           5                 0.98
 4     205       200           5                 1.03
 5     202       200           2                 1.01
 6     228       225           3                 1.01
 7     283       275           8                 1.03
 8     337       300          37                 1.12
 9     352       350           2                 1.00
10     365       350          15                 1.04
11     392       350          42                 1.12
12     391       350          41                 1.12

Mean absolute error = 16
|1 − Ratio average| = 0.05

3.2 TRAIN 4:2:1 NEURAL NETWORK

Input layer nodes: 4
Mid layer nodes: 2
Output layer node: 1

Number of parameters = (4 x 2) + (2 x 1) weights + 3 biases = 13
Recommended training examples = No. of parameters x 1.5 = 13 x 1.5 = 19.5
Available training examples = 25

Since 19.5 is less than the 25 training examples available, there are sufficient examples to train this network.

Artificial Neural Network (2 middle nodes, target error 0.01) prediction of slab thickness (mm)

No   Network   Target   Absolute error   Ratio (Network/Target)
 1     222       200          22                 1.11
 2     248       225          23                 1.10
 3     224       225           1                 0.99
 4     217       200          17                 1.08
 5     215       200          15                 1.08
 6     238       225          13                 1.06
 7     277       275           2                 1.01
 8     331       300          31                 1.10
 9     343       350           7                 0.98
10     314       350          36                 0.90
11     379       350          29                 1.08
12     379       350          29                 1.08

Mean absolute error = 19
|1 − Ratio average| = 0.05

Artificial Neural Network (2 middle nodes, target error 0.005) prediction of slab thickness (mm)

No   Network   Target   Absolute error   Ratio (Network/Target)
 1     211       200          11                 1.05
 2     233       225           8                 1.03
 3     219       225           6                 0.97
 4     212       200          12                 1.06
 5     209       200           9                 1.05
 6     247       225          22                 1.10
 7     282       275           7                 1.02
 8     325       300          25                 1.08
 9     354       350           4                 1.01
10     399       350          49                 1.14
11     332       350          18                 0.95
12     318       350          32                 0.91

Mean absolute error = 17
|1 − Ratio average| = 0.03

4.0 MULTIPLE REGRESSION ANALYSIS

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.949
R Square            0.901
Adjusted R Square   0.881
Standard Error      21.471
Observations        25

ANOVA
              df      SS       MS      F     Significance F
Regression     4    83680    20920    45     9.27818E-10
Residual      20     9220      461
Total         24    92900

            Coefficients   Standard Error   t Stat   P-value   Lower 95%   Upper 95%
Intercept     138.732          23.581        5.883    0.000      89.542     187.922
w              49.774           6.197        8.032    0.000      36.848      62.700
h              -2.719           8.001       -0.340    0.738     -19.409      13.971
H              18.032           2.712        6.650    0.000      12.376      23.688
A             -0.00000796       0.016       -0.001    1.000      -0.033       0.033

From the above analysis, the slab thickness can be written as

slab thickness t = 138.7 + 49.774w - 2.719h + 18.032H - 0.00000796A
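As a usage sketch, the fitted equation can be wrapped in a small function. The inputs in the example call (w = 2.5, h = 2.5, H = 1.5, A = 2094) are hypothetical values chosen only for illustration:

```python
# Sketch: slab thickness from the fitted multiple-regression equation above.
def slab_thickness(w, h, H, A):
    """t (mm) for box width w (m), height h (m), fill height H (m),
    reinforcement area A (mm^2/m)."""
    return 138.7 + 49.774 * w - 2.719 * h + 18.032 * H - 0.00000796 * A

print(round(slab_thickness(2.5, 2.5, 1.5, 2094)))   # -> 283 mm
```

Note that the coefficient on A is effectively zero (its p-value is 1.000), so the reinforcement area contributes almost nothing to the predicted thickness.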

Multiple Regression prediction of slab thickness (mm)

No   Network   Target   Absolute error   Ratio (Network/Target)
 1     211       200          11                 1.05
 2     260       225          35                 1.16
 3     234       225           9                 1.04
 4     208       200           8                 1.04
 5     182       200          18                 0.91
 6     256       225          31                 1.14
 7     281       275           6                 1.02
 8     307       300           7                 1.02
 9     332       350          18                 0.95
10     341       350           9                 0.97
11     355       350           5                 1.01
12     352       350           2                 1.01

Mean absolute error = 13
|1 − Ratio average| = 0.03

5.0 RESULTS COMPARISON FOR DIFFERENT MODELS

                        3 middle-layer nodes     2 middle-layer nodes     Multiple
                        Err 0.01   Err 0.005     Err 0.01   Err 0.005     Regression
Mean absolute error       16          16            19         17             13
|1 − Ratio average|       0.05        0.05          0.05       0.03           0.03

6.0 DISCUSSION
The results of the four Artificial Neural Networks and the Multiple Regression analysis are summarized in the table above. For both measures, the mean absolute error and |1 − Ratio average|, values close to zero are desired.

Among the Artificial Neural Network results, the minimum mean absolute error of 16 mm was observed with three middle-layer nodes. This may be because the larger number of links tends to fit the training examples more closely. A larger target error also helps the training avoid suboptimal local maxima and minima.

When comparing the Artificial Neural Network and Multiple Regression results, the lowest mean absolute error (13 mm) was given by the Multiple Regression method. This may be due to the limited number of training examples available to the Artificial Neural Networks; in general, an ANN would be expected to give better results than multiple regression.

ANNEXURES
WinNN32 software interface for the 4:2:1 model with target error 0.01

WinNN32 software interface for the 4:2:1 model with target error 0.005

WinNN32 software interface for the 4:3:1 model with target error 0.01

WinNN32 software interface for the 4:3:1 model with target error 0.005
