
Midterm Examination

Artificial Neuro-Fuzzy Theory AT07.24

Time: 13:00-15:00 h.
Marks: 45

February 7, 2001
Open Book

Attempt all questions.


Q.1

In a function approximation problem by the Levenberg-Marquardt method, determine the weights and biases after the first iteration of the 1-1-1 network shown below. Assume the initial weights and biases are

$$w_{11}^1(0) = 0.1,\quad b_1^1(0) = 0.2,\quad w_{11}^2(0) = 0.3,\quad b_1^2(0) = 0.4$$

while the relation between inputs and targets is represented by

$$\{p_1 = 5,\ t_1 = 15\},\ \{p_2 = 10,\ t_2 = 25\},\ \{p_3 = 15,\ t_3 = 25\},\ \{p_4 = 20,\ t_4 = 30\},\ \{p_5 = 35,\ t_5 = 35\}$$

with damping parameter $k = 0.01$ and adjustment factor $\vartheta = 10$.

(20)

[Figure: 1-1-1 network — hidden neuron with weight $w_{11}^1$, bias $b_1^1$, and logsig transfer function; output neuron with weight $w_{11}^2$, bias $b_1^2$, and purelin transfer function.]

Solution

Determine the derivatives of the transfer functions,

$$f^1(n) = (1 + e^{-n})^{-1},\quad \dot{f}^1 = (1 - a_1^1)(a_1^1),\quad f^2(n) = n,\quad \dot{f}^2 = 1$$  (1)

Present $p_1$:

$$p = p_1 = 5,\quad n_1^1 = w_{11}^1 p + b_1^1 = 0.1(5) + 0.2 = 0.7,\quad a_1^1 = (1 + e^{-n_1^1})^{-1} = (1 + e^{-0.7})^{-1} = 0.668$$  (2)

$$n_1^2 = w_{11}^2 a_1^1 + b_1^2 = 0.3(0.668) + 0.4 = 0.600,\quad a_1^2 = a = 0.600,\quad e_1 = t - a = 15 - 0.600 = 14.400$$  (3)

$$\tilde{S}_1^2 = -\dot{f}^2 = -1,\quad \tilde{S}_1^1 = \dot{f}^1 w_{11}^2 \tilde{S}_1^2 = [(1 - 0.668)(0.668)](0.3)(-1) = -0.067$$  (4)

Present $p_2$:

$$p = p_2 = 10,\quad n_2^1 = w_{11}^1 p + b_1^1 = 0.1(10) + 0.2 = 1.2,\quad a_2^1 = (1 + e^{-1.2})^{-1} = 0.769$$  (5)

$$n_2^2 = w_{11}^2 a_2^1 + b_1^2 = 0.3(0.769) + 0.4 = 0.631,\quad a_2^2 = a = 0.631,\quad e_2 = t - a = 25 - 0.631 = 24.369$$  (6)

$$\tilde{S}_2^2 = -\dot{f}^2 = -1,\quad \tilde{S}_2^1 = \dot{f}^1 w_{11}^2 \tilde{S}_2^2 = [(1 - 0.769)(0.769)](0.3)(-1) = -0.053$$  (7)

Present $p_3$:

$$p = p_3 = 15,\quad n_3^1 = w_{11}^1 p + b_1^1 = 0.1(15) + 0.2 = 1.7,\quad a_3^1 = (1 + e^{-1.7})^{-1} = 0.846$$  (8)

$$n_3^2 = w_{11}^2 a_3^1 + b_1^2 = 0.3(0.846) + 0.4 = 0.654,\quad a_3^2 = a = 0.654,\quad e_3 = t - a = 25 - 0.654 = 24.346$$  (9)

$$\tilde{S}_3^2 = -\dot{f}^2 = -1,\quad \tilde{S}_3^1 = \dot{f}^1 w_{11}^2 \tilde{S}_3^2 = [(1 - 0.846)(0.846)](0.3)(-1) = -0.039$$  (10)

Present $p_4$:

$$p = p_4 = 20,\quad n_4^1 = w_{11}^1 p + b_1^1 = 0.1(20) + 0.2 = 2.2,\quad a_4^1 = (1 + e^{-2.2})^{-1} = 0.900$$  (11)

$$n_4^2 = w_{11}^2 a_4^1 + b_1^2 = 0.3(0.900) + 0.4 = 0.670,\quad a_4^2 = a = 0.670,\quad e_4 = t - a = 30 - 0.670 = 29.330$$  (12)

$$\tilde{S}_4^2 = -\dot{f}^2 = -1,\quad \tilde{S}_4^1 = \dot{f}^1 w_{11}^2 \tilde{S}_4^2 = [(1 - 0.900)(0.900)](0.3)(-1) = -0.027$$  (13)

Present $p_5$:

$$p = p_5 = 35,\quad n_5^1 = w_{11}^1 p + b_1^1 = 0.1(35) + 0.2 = 3.7,\quad a_5^1 = (1 + e^{-3.7})^{-1} = 0.976$$  (14)

$$n_5^2 = w_{11}^2 a_5^1 + b_1^2 = 0.3(0.976) + 0.4 = 0.693,\quad a_5^2 = a = 0.693,\quad e_5 = t - a = 35 - 0.693 = 34.307$$  (15)

$$\tilde{S}_5^2 = -\dot{f}^2 = -1,\quad \tilde{S}_5^1 = \dot{f}^1 w_{11}^2 \tilde{S}_5^2 = [(1 - 0.976)(0.976)](0.3)(-1) = -0.007$$  (16)

Determine the sum of squared errors,

$$F(x) = \sum e^2 = 14.400^2 + 24.369^2 + 24.346^2 + 29.330^2 + 34.307^2 = 3431.155$$  (17)
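The forward passes and error sums above can be reproduced with a short script. This is a checking sketch added here (not part of the original solution), using only the network definition from the problem statement; full-precision results match the text to within its three-decimal rounding.

```python
import math

# Initial parameters and training pairs from the problem statement.
w1, b1, w2, b2 = 0.1, 0.2, 0.3, 0.4
samples = [(5, 15), (10, 25), (15, 25), (20, 30), (35, 35)]

errors = []
for p, t in samples:
    a1 = 1.0 / (1.0 + math.exp(-(w1 * p + b1)))  # logsig hidden output
    a2 = w2 * a1 + b2                            # purelin network output
    errors.append(t - a2)                        # e_q = t_q - a_q

sse = sum(e * e for e in errors)                 # F(x) of eq. (17)
print([round(e, 3) for e in errors])
print(round(sse, 3))
```

The small difference from 3431.155 comes from the text squaring errors already rounded to three decimals.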

The Jacobian has one row per error and one column per parameter,

$$J(x) = \begin{bmatrix} \partial e_1/\partial w_{11}^1 & \partial e_1/\partial b_1^1 & \partial e_1/\partial w_{11}^2 & \partial e_1/\partial b_1^2 \\ \partial e_2/\partial w_{11}^1 & \partial e_2/\partial b_1^1 & \partial e_2/\partial w_{11}^2 & \partial e_2/\partial b_1^2 \\ \partial e_3/\partial w_{11}^1 & \partial e_3/\partial b_1^1 & \partial e_3/\partial w_{11}^2 & \partial e_3/\partial b_1^2 \\ \partial e_4/\partial w_{11}^1 & \partial e_4/\partial b_1^1 & \partial e_4/\partial w_{11}^2 & \partial e_4/\partial b_1^2 \\ \partial e_5/\partial w_{11}^1 & \partial e_5/\partial b_1^1 & \partial e_5/\partial w_{11}^2 & \partial e_5/\partial b_1^2 \end{bmatrix}$$  (18)

with entries $\partial e_q/\partial w_{11}^1 = \tilde{S}_q^1 p_q$, $\partial e_q/\partial b_1^1 = \tilde{S}_q^1$, $\partial e_q/\partial w_{11}^2 = \tilde{S}_q^2 a_q^1$, and $\partial e_q/\partial b_1^2 = \tilde{S}_q^2$. Substituting the values from (2)–(16),

$$J(x) = -\begin{bmatrix} 0.335 & 0.067 & 0.668 & 1 \\ 0.530 & 0.053 & 0.769 & 1 \\ 0.585 & 0.039 & 0.846 & 1 \\ 0.540 & 0.027 & 0.900 & 1 \\ 0.245 & 0.007 & 0.976 & 1 \end{bmatrix}$$  (19)

$$J^T J = \begin{bmatrix} 1.087 & 0.090 & 1.851 & 2.235 \\ 0.090 & 0.010 & 0.150 & 0.193 \\ 1.851 & 0.150 & 3.516 & 4.159 \\ 2.235 & 0.193 & 4.159 & 5.000 \end{bmatrix}$$  (20)

$$J^T v = -\begin{bmatrix} 0.335 & 0.530 & 0.585 & 0.540 & 0.245 \\ 0.067 & 0.053 & 0.039 & 0.027 & 0.007 \\ 0.668 & 0.769 & 0.846 & 0.900 & 0.976 \\ 1 & 1 & 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} 14.400 \\ 24.369 \\ 24.346 \\ 29.330 \\ 34.307 \end{bmatrix} = -\begin{bmatrix} 56.225 \\ 4.238 \\ 108.836 \\ 126.752 \end{bmatrix}$$  (21)
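Equations (18)–(21) can be checked numerically. The following NumPy sketch (an addition, not part of the original solution) rebuilds the Jacobian from the Marquardt sensitivities at full precision; entries agree with the text to within its rounding.

```python
import numpy as np

# Initial parameters and data from the problem statement.
w1, b1, w2, b2 = 0.1, 0.2, 0.3, 0.4
p = np.array([5.0, 10.0, 15.0, 20.0, 35.0])
t = np.array([15.0, 25.0, 25.0, 30.0, 35.0])

a1 = 1.0 / (1.0 + np.exp(-(w1 * p + b1)))  # logsig layer outputs
v = t - (w2 * a1 + b2)                     # error vector v

s2 = -np.ones_like(p)                      # output sensitivity: -f2' = -1
s1 = a1 * (1.0 - a1) * w2 * s2             # hidden sensitivity, eq. (4)

# Columns of J: de/dw1, de/db1, de/dw2, de/db2 (eq. 18-19)
J = np.column_stack([s1 * p, s1, s2 * a1, s2])
JtJ = J.T @ J                              # eq. (20)
Jtv = J.T @ v                              # eq. (21)
```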

Since

$$\Delta x = -[J^T J + kI]^{-1} J^T v$$  (22)

Thus

$$\begin{bmatrix} \Delta w_{11}^1 \\ \Delta b_1^1 \\ \Delta w_{11}^2 \\ \Delta b_1^2 \end{bmatrix} = \begin{bmatrix} 1.087+0.01 & 0.090 & 1.851 & 2.235 \\ 0.090 & 0.010+0.01 & 0.150 & 0.193 \\ 1.851 & 0.150 & 3.516+0.01 & 4.159 \\ 2.235 & 0.193 & 4.159 & 5.000+0.01 \end{bmatrix}^{-1} \begin{bmatrix} 56.225 \\ 4.238 \\ 108.836 \\ 126.752 \end{bmatrix} = \begin{bmatrix} -0.899 \\ -11.364 \\ 47.605 \\ -13.380 \end{bmatrix}$$  (23)

$$\begin{bmatrix} w_{11}^1(1) \\ b_1^1(1) \\ w_{11}^2(1) \\ b_1^2(1) \end{bmatrix} = \begin{bmatrix} w_{11}^1(0) \\ b_1^1(0) \\ w_{11}^2(0) \\ b_1^2(0) \end{bmatrix} + \begin{bmatrix} \Delta w_{11}^1 \\ \Delta b_1^1 \\ \Delta w_{11}^2 \\ \Delta b_1^2 \end{bmatrix} = \begin{bmatrix} 0.1 \\ 0.2 \\ 0.3 \\ 0.4 \end{bmatrix} + \begin{bmatrix} -0.899 \\ -11.364 \\ 47.605 \\ -13.380 \end{bmatrix} = \begin{bmatrix} -0.799 \\ -11.164 \\ 47.905 \\ -12.980 \end{bmatrix}$$  (24)
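A sketch of the update step, using the rounded $J$ and $v$ values from the text (an addition for verification; because the inputs are rounded to three decimals and the system is ill-conditioned at small $k$, results agree with (23)–(24) only to about two decimals).

```python
import numpy as np

# Levenberg-Marquardt step: dx = -(J'J + k*I)^-1 J'v, with k = 0.01.
k = 0.01
J = -np.array([
    [0.335, 0.067, 0.668, 1.0],
    [0.530, 0.053, 0.769, 1.0],
    [0.585, 0.039, 0.846, 1.0],
    [0.540, 0.027, 0.900, 1.0],
    [0.245, 0.007, 0.976, 1.0],
])
v = np.array([14.400, 24.369, 24.346, 29.330, 34.307])

dx = -np.linalg.solve(J.T @ J + k * np.eye(4), J.T @ v)
x0 = np.array([0.1, 0.2, 0.3, 0.4])   # [w1, b1, w2, b2] at iteration 0
x1 = x0 + dx                          # parameters after the first iteration
```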

Q.2

Design a network with appropriate parameters which can classify class A, occupying the block shown in the figure below, from class B outside the block.

(15)

[Figure: class A region drawn in x-y-z axes — a block with vertices (−5, 5, 0), (5, 5, 0), (5, −5, 0), (−5, 5, 5), (5, 5, 5), (5, −5, 5), capped by a roof section through (−2, 2, 5), (2, 2, 5), (2, −2, 5) rising to an apex at (0, 0, 10); class B is everywhere outside.]

Solution
(a) Multi-Layer Perceptron is selected.

[Figure: 3-9-2-1 multilayer perceptron — inputs $p_1, p_2, p_3$; layer 1 with nine neurons (weights $w^1$, biases $b_1^1 \ldots b_9^1$); layer 2 with two neurons (weights $w^2$, biases $b_1^2, b_2^2$); layer 3 with one neuron (weight $w^3$, bias $b_1^3$).]

(b)
For layer 1:
Each weight vector w should point into the block and be perpendicular to its decision boundary. Thus, select

$$\mathbf{w}_1^1 = \begin{bmatrix} -1 \\ 0 \\ 0 \end{bmatrix}$$  (1)

$$\mathbf{w}_2^1 = \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix}$$  (2)

$$\mathbf{w}_3^1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$$  (3)

$$\mathbf{w}_4^1 = \begin{bmatrix} 0 \\ 0 \\ -1 \end{bmatrix}$$  (4)

$$\mathbf{w}_5^1 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$  (5)

$$\mathbf{w}_6^1 = \begin{bmatrix} -5 \\ 0 \\ -2 \end{bmatrix}$$  (6)

$$\mathbf{w}_7^1 = \begin{bmatrix} 0 \\ -5 \\ -2 \end{bmatrix}$$  (7)

$$\mathbf{w}_8^1 = \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}$$  (8)

$$\mathbf{w}_9^1 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$  (9)

$$W^1 = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & -1 \\ 0 & 0 & 1 \\ -5 & 0 & -2 \\ 0 & -5 & -2 \\ 1 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$  (10)

Each bias b is determined by setting n = 0 at a point on the corresponding decision boundary and solving for b:

$$n_1^1 = w_{11}^1 p_1 + w_{12}^1 p_2 + w_{13}^1 p_3 + b_1^1 = 0$$  (11)

at $p_1 = 5$, $p_2 = 5$, $p_3 = 0$:

$$-1(5) + 0 + 0 + b_1^1 = 0;\quad b_1^1 = 5$$  (12)

$$n_2^1 = w_{21}^1 p_1 + w_{22}^1 p_2 + w_{23}^1 p_3 + b_2^1 = 0$$  (13)

at $p_1 = 5$, $p_2 = 5$, $p_3 = 0$:

$$0 - 1(5) + 0 + b_2^1 = 0;\quad b_2^1 = 5$$  (14)

$$n_3^1 = w_{31}^1 p_1 + w_{32}^1 p_2 + w_{33}^1 p_3 + b_3^1 = 0$$  (15)

at $p_1 = 5$, $p_2 = -5$, $p_3 = 0$:

$$1(5) + 1(-5) + 0 + b_3^1 = 0;\quad b_3^1 = 0$$  (16)

$$n_4^1 = w_{41}^1 p_1 + w_{42}^1 p_2 + w_{43}^1 p_3 + b_4^1 = 0$$  (17)

at $p_1 = 5$, $p_2 = -5$, $p_3 = 5$:

$$0 + 0 + (-1)(5) + b_4^1 = 0;\quad b_4^1 = 5$$  (18)

$$n_5^1 = w_{51}^1 p_1 + w_{52}^1 p_2 + w_{53}^1 p_3 + b_5^1 = 0$$  (19)

at $p_1 = 5$, $p_2 = -5$, $p_3 = 0$:

$$0 + 0 + 0 + b_5^1 = 0;\quad b_5^1 = 0$$  (20)

$$n_6^1 = w_{61}^1 p_1 + w_{62}^1 p_2 + w_{63}^1 p_3 + b_6^1 = 0$$  (21)

at $p_1 = 0$, $p_2 = 0$, $p_3 = 10$:

$$0 + 0 + (-2)(10) + b_6^1 = 0;\quad b_6^1 = 20$$  (22)

$$n_7^1 = w_{71}^1 p_1 + w_{72}^1 p_2 + w_{73}^1 p_3 + b_7^1 = 0$$  (23)

at $p_1 = 0$, $p_2 = 0$, $p_3 = 10$:

$$0 + 0 + (-2)(10) + b_7^1 = 0;\quad b_7^1 = 20$$  (24)

$$n_8^1 = w_{81}^1 p_1 + w_{82}^1 p_2 + w_{83}^1 p_3 + b_8^1 = 0$$  (25)

at $p_1 = 2$, $p_2 = 2$, $p_3 = 5$:

$$1(2) - 1(2) + 0 + b_8^1 = 0;\quad b_8^1 = 0$$  (26)

$$n_9^1 = w_{91}^1 p_1 + w_{92}^1 p_2 + w_{93}^1 p_3 + b_9^1 = 0$$  (27)

at $p_1 = 2$, $p_2 = 2$, $p_3 = 5$:

$$0 + 0 + 1(5) + b_9^1 = 0;\quad b_9^1 = -5$$  (28)

$$\mathbf{b}^1 = \begin{bmatrix} 5 \\ 5 \\ 0 \\ 5 \\ 0 \\ 20 \\ 20 \\ 0 \\ -5 \end{bmatrix}$$  (29)
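Each layer-1 hyperplane can be verified by substituting its boundary point back into $n = Wp + b$; every point used in the derivation should give exactly $n = 0$. This checking sketch is an addition, not part of the original solution.

```python
import numpy as np

# Layer-1 weights and biases from eqs. (10) and (29).
W1 = np.array([
    [-1,  0,  0],
    [ 0, -1,  0],
    [ 1,  1,  0],
    [ 0,  0, -1],
    [ 0,  0,  1],
    [-5,  0, -2],
    [ 0, -5, -2],
    [ 1, -1,  0],
    [ 0,  0,  1],
], dtype=float)
b1 = np.array([5, 5, 0, 5, 0, 20, 20, 0, -5], dtype=float)

boundary_points = [  # one boundary point per neuron, from eqs. (12)-(28)
    (5, 5, 0), (5, 5, 0), (5, -5, 0), (5, -5, 5), (5, -5, 0),
    (0, 0, 10), (0, 0, 10), (2, 2, 5), (2, 2, 5),
]
for i, pt in enumerate(boundary_points):
    n = W1[i] @ np.array(pt, dtype=float) + b1[i]
    assert n == 0.0  # the point lies exactly on neuron i's boundary
```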

For layer 2:
Since this is the AND layer, select

$$w_{11}^2 = w_{12}^2 = w_{13}^2 = w_{14}^2 = w_{15}^2 = 1;\quad w_{16}^2 = w_{17}^2 = w_{18}^2 = w_{19}^2 = 0$$  (30)

$$w_{21}^2 = w_{22}^2 = w_{23}^2 = w_{24}^2 = w_{25}^2 = 0;\quad w_{26}^2 = w_{27}^2 = w_{28}^2 = w_{29}^2 = 1$$  (31)

$$W^2 = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{bmatrix}$$  (32)

b must be selected such that the output is 1 only when all of the neuron's inputs are 1. Thus, select

$$b_1^2 = -4.5 \quad\text{and}\quad b_2^2 = -3.5$$  (33)

$$\mathbf{b}^2 = \begin{bmatrix} -4.5 \\ -3.5 \end{bmatrix}$$  (34)

For layer 3:
Since this is the OR layer, select

$$w_{11}^3 = w_{12}^3 = 1$$  (35)

$$W^3 = \begin{bmatrix} 1 & 1 \end{bmatrix}$$  (36)

b must be selected such that the output is 1 if even one input is 1. Thus, select

$$b_1^3 = -0.5$$  (37)

$$\mathbf{b}^3 = [-0.5]$$  (38)
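The complete design can be exercised end to end. This sketch assumes hardlim transfer functions in all three layers (consistent with the 0/1 AND/OR logic above); the probe points are chosen here for illustration and are not from the exam.

```python
import numpy as np

hardlim = lambda n: (n >= 0).astype(float)  # 1 if n >= 0, else 0

# Parameters from eqs. (10), (29), (32), (34), (36), (38).
W1 = np.array([[-1, 0, 0], [0, -1, 0], [1, 1, 0], [0, 0, -1], [0, 0, 1],
               [-5, 0, -2], [0, -5, -2], [1, -1, 0], [0, 0, 1]], dtype=float)
b1 = np.array([5, 5, 0, 5, 0, 20, 20, 0, -5], dtype=float)
W2 = np.array([[1, 1, 1, 1, 1, 0, 0, 0, 0],
               [0, 0, 0, 0, 0, 1, 1, 1, 1]], dtype=float)
b2 = np.array([-4.5, -3.5])
W3 = np.array([[1, 1]], dtype=float)
b3 = np.array([-0.5])

def classify(p):
    a1 = hardlim(W1 @ p + b1)   # layer 1: nine half-space tests
    a2 = hardlim(W2 @ a1 + b2)  # layer 2: AND of each region's planes
    a3 = hardlim(W3 @ a2 + b3)  # layer 3: OR of the two regions
    return a3[0]                # 1 -> class A, 0 -> class B

classify(np.array([3.0, 0.0, 2.0]))   # inside the block: class A
classify(np.array([0.0, 0.0, 8.0]))   # inside the roof section: class A
classify(np.array([0.0, 0.0, 20.0]))  # far outside: class B
```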

Q.3

A neural network can be applied as a simple analog signal processor. Assume that there are 3 analog inputs to the processor and 3 analog outputs from the processor. The following operations are needed:

$$\left\{ p_1 = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix},\ t_1 = \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} \right\},\qquad \left\{ p_2 = \begin{bmatrix} 7 \\ 8 \\ 9 \end{bmatrix},\ t_2 = \begin{bmatrix} 10 \\ 11 \\ 12 \end{bmatrix} \right\}$$

(a) Design the network which functions as the required operation.

(2)

(b) Determine all the necessary parameters.

(6)

(c) Test the selected network and parameters by entering each input and see the result.

(2)

Solution
(a) A linear associator with the pseudoinverse rule is selected.

[Figure: linear associator — 3 inputs, 3 outputs, 3×3 weight matrix W.]

(b)
By the pseudoinverse rule,

$$W = TP^+$$  (1)

$$P^+ = (P^T P)^{-1} P^T$$  (2)

where

$$T = \begin{bmatrix} 4 & 10 \\ 5 & 11 \\ 6 & 12 \end{bmatrix},\qquad P = \begin{bmatrix} 1 & 7 \\ 2 & 8 \\ 3 & 9 \end{bmatrix}$$  (3)

and

$$P^T P = \begin{bmatrix} 1 & 2 & 3 \\ 7 & 8 & 9 \end{bmatrix} \begin{bmatrix} 1 & 7 \\ 2 & 8 \\ 3 & 9 \end{bmatrix} = \begin{bmatrix} 14 & 50 \\ 50 & 194 \end{bmatrix}$$  (4)

$$(P^T P)^{-1} = \begin{bmatrix} 14 & 50 \\ 50 & 194 \end{bmatrix}^{-1} = \begin{bmatrix} 0.898 & -0.232 \\ -0.232 & 0.065 \end{bmatrix}$$  (5)

$$(P^T P)^{-1} P^T = \begin{bmatrix} 0.898 & -0.232 \\ -0.232 & 0.065 \end{bmatrix} \begin{bmatrix} 1 & 2 & 3 \\ 7 & 8 & 9 \end{bmatrix} = \begin{bmatrix} -0.722 & -0.056 & 0.611 \\ 0.222 & 0.056 & -0.111 \end{bmatrix}$$  (6)

$$W = TP^+ = \begin{bmatrix} 4 & 10 \\ 5 & 11 \\ 6 & 12 \end{bmatrix} \begin{bmatrix} -0.722 & -0.056 & 0.611 \\ 0.222 & 0.056 & -0.111 \end{bmatrix} = \begin{bmatrix} -0.667 & 0.333 & 1.333 \\ -1.167 & 0.333 & 1.833 \\ -1.667 & 0.333 & 2.333 \end{bmatrix}$$  (7)

(c)
When the input is $p_1$,

$$a = Wp_1 = \begin{bmatrix} -0.667 & 0.333 & 1.333 \\ -1.167 & 0.333 & 1.833 \\ -1.667 & 0.333 & 2.333 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} = \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} = t_1$$  (8)

When the input is $p_2$,

$$a = Wp_2 = \begin{bmatrix} -0.667 & 0.333 & 1.333 \\ -1.167 & 0.333 & 1.833 \\ -1.667 & 0.333 & 2.333 \end{bmatrix} \begin{bmatrix} 7 \\ 8 \\ 9 \end{bmatrix} = \begin{bmatrix} 10 \\ 11 \\ 12 \end{bmatrix} = t_2$$  (9)
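The pseudoinverse rule and the recall test of part (c) can be verified with NumPy (a checking sketch added here; since P has full column rank, `np.linalg.pinv(P)` would give the same P⁺ as the explicit formula).

```python
import numpy as np

# Training pairs from the problem statement: columns of P and T.
P = np.array([[1, 7], [2, 8], [3, 9]], dtype=float)
T = np.array([[4, 10], [5, 11], [6, 12]], dtype=float)

# Pseudoinverse rule: W = T P+, with P+ = (P'P)^-1 P' (eqs. 1-2).
P_plus = np.linalg.inv(P.T @ P) @ P.T
W = T @ P_plus

# Recall test of part (c): each stored input reproduces its target.
print(np.round(W @ P[:, 0], 3))  # recalls t1 = [4, 5, 6]
print(np.round(W @ P[:, 1], 3))  # recalls t2 = [10, 11, 12]
```

Recall is exact here because P has full column rank, so P⁺P = I and WP = T.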
