I. INTRODUCTION
A. Definition

As introduced in the last section, an FCM is a directed graph that depicts a dynamic system. The nodes represent the concepts in the system, and the weight on an edge between two nodes indicates how strongly one concept affects the other. The value of concept $C_i$ at step $t$ is computed from the values of all concepts at the previous step:

$$C_i(t) = f\left(\sum_{j=1}^{n} C_j(t-1) \cdot W_{ji}\right) \qquad (1)$$

The weight matrix can be shown as:

$$W = \begin{pmatrix} W_{11} & W_{12} & \cdots & W_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ W_{n1} & W_{n2} & \cdots & W_{nn} \end{pmatrix} \qquad (2)$$

where $f$ is the sigmoid threshold function:

$$f(x) = \frac{1}{1+e^{-x}} \qquad (3)$$
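A minimal sketch of update rule (1) with the sigmoid threshold (3); the two-concept system and its weights here are illustrative, not taken from the paper's example:

```python
import math

def sigmoid(x):
    # Threshold function (3): squashes the weighted sum into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def fcm_step(W, state):
    # Update rule (1): C_i(t) = f(sum_j C_j(t-1) * W_ji),
    # with W[i][j] holding the weight from concept j into concept i
    n = len(state)
    return [sigmoid(sum(W[i][j] * state[j] for j in range(n)))
            for i in range(n)]

# Illustrative 2-concept system (weights made up for this sketch)
W = [[0.0, 0.5],
     [-0.4, 0.0]]
state = [0.6, 0.3]
state = fcm_step(W, state)
```

Iterating `fcm_step` plays the system forward step by step; a fixed point of this map is a converged state of the FCM.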
TABLE I.

      C1    C2    C3    C4    C5    C6    C7
C1     1     0  -.67     0     0     0     0
C2   .89     0   -.7     0     0     0     0
C3     0   .75     0  -.71     0     0     0
C4     0     0    .9     0  -.31     0   .52
C5     0     0     0   .94     0     0     0
C6     0     0   .89     0   .07     0  -.81
C7     0     0     0   .41     0   .99     0
TABLE II.

t     0     1     2     3     4     5     6     7     8
C1   .6  .598  .574  .559  .557  .557  .557  .557  .557
C2   .3   .58  .554   .54  .539  .539  .539  .539  .539
C3   .3   .45  .502   .49  .487  .487  .487  .487  .487
C4   .6  .603  .641  .645  .644  .644  .643  .643  .643
C5   .7  .637  .638  .646  .647  .647  .647  .647  .647
C6   .7  .438  .466  .488  .484  .482  .482  .483  .483
C7   .7  .719  .664  .674  .679  .678  .677  .677  .677
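If the weight matrix above is read with entry (i, j) as the weight from C_j into C_i (our reading, which reproduces the sample values), the t = 1 column of this sequence follows from rule (1) applied to the t = 0 column; a sketch:

```python
import math

# Weight matrix of the example FCM: W[i][j] = weight from C_{j+1} into C_{i+1}
W = [
    [1,    0, -.67,   0,    0,   0,    0],
    [.89,  0, -.7,    0,    0,   0,    0],
    [0,  .75,  0,   -.71,   0,   0,    0],
    [0,    0,  .9,    0,  -.31,  0,  .52],
    [0,    0,  0,    .94,   0,   0,    0],
    [0,    0, .89,    0,   .07,  0, -.81],
    [0,    0,  0,    .41,   0,  .99,   0],
]

def f(x):
    return 1.0 / (1.0 + math.exp(-x))   # sigmoid threshold (3)

def step(state):
    # Update rule (1) applied to all seven concepts at once
    return [f(sum(W[i][j] * state[j] for j in range(7))) for i in range(7)]

c0 = [.6, .3, .3, .6, .7, .7, .7]     # t = 0 column of the sequence
c1 = [round(v, 3) for v in step(c0)]  # .598, .58, .45, .603, .637, .438, .719
```

The rounded result matches the t = 1 column of the sequence, which supports this reading of the matrix orientation.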
The training algorithm is first run on the initial matrix with the first sample sequence. After the training algorithm converges, the resulting weight matrix is output, as shown by arrow 3 in Figure 2. The user then judges whether the output matrix meets expectations. If not, the output matrix is fed back into the training algorithm for the next round of training, as shown by arrow 4 in Figure 2. This procedure is illustrated in Figure 3.
A. Problem Definition

The training objective is to generate the weight matrix of an FCM from sample sequences. An illustration of the training system is shown in Figure 2.
Input: initial weight matrix
     C1  C2  C3  C4  C5  C6  C7
C1    1   0  -1   0   0   0   0
C2    1   0  -1   0   0   0   0
C3    0   1   0  -1   0   0   0
C4    0   0   1   0  -1   0   1
C5    0   0   0   1   0   0   0
C6    0   0   1   0   1   0  -1
C7    0   0   0   1   0   1   0
[Figure 2: the FCM training system ("FCM Training Algorithm").]

[Figure 3: "Changes of system" — panels plotting the values of concepts C1–C7 against the step number.]
The weights are adjusted by gradient descent on the error between the concept value $C_i(t)$ computed by the FCM and the sample value $S_i(t)$:

$$\Delta W_{ji} = -\alpha \frac{\partial E}{\partial W_{ji}} \qquad (4)$$

$$E_i(t) = C_i(t) - S_i(t) \qquad (5)$$

$$\Delta W_{ji} = -\alpha \left(C_i(t) - S_i(t)\right) \frac{\partial C_i(t)}{\partial W_{ji}} \qquad (6)$$

$$\frac{\partial C_i(t)}{\partial W_{ji}} = \frac{e^{-\sum_{k=1}^{n} W_{ki} S_k(t-1)}}{\left(1 + e^{-\sum_{k=1}^{n} W_{ki} S_k(t-1)}\right)^2} \, S_j(t-1) \qquad (7)$$

Training stops when the total weight change over one pass through the sample sequence falls below a threshold:

$$\sum_{t=1}^{T} \sum_{i=1}^{n} \sum_{j=1}^{n} \left|\Delta W_{ij}(t)\right| < \text{threshold} \qquad (8)$$
EXPERIMENT

A. Experiment 1

TABLE III.

     C1  C2  C3  C4  C5  C6  C7
C1    1   0  -1   0   0   0   0
C2    1   0  -1   0   0   0   0
C3    0   1   0  -1   0   0   0
C4    0   0   1   0  -1   0   1
C5    0   0   0   1   0   0   0
C6    0   0   1   0   1   0  -1
C7    0   0   0   1   0   1   0

TABLE IV.

      C1    C2    C3    C4    C5    C6      C7
C1  .999     0  -.671     0     0     0       0
C2  .881     0  -.687     0     0     0       0
C3     0  .752     0   -.71     0     0       0
C4     0     0   .859     0  -.581     0    .806
C5     0     0     0    .94     0     0       0
C6     0     0   .909     0   .311     0  -1.052
C7     0     0     0   .411     0   .99       0

TABLE V.

t     0     1     2     3     4     5     6     7     8
C1   .6  .598  .573  .559  .557  .557  .557  .557  .557
C2   .3   .58  .554   .54  .539  .539  .539  .539  .539
C3   .3   .45  .502   .49  .487  .487  .487  .487  .487
C4   .6  .602  .645  .645  .643  .643  .643  .643  .643
C5   .7  .637  .638  .647  .647  .647  .647  .647  .647
C6   .7  .439  .463  .489  .485  .482  .483  .483  .483
C7   .7  .719  .664  .673  .679  .678  .677  .678  .678

TABLE VI.

t     0     1     2     3     4     5     6
C1   .3  .525  .552  .557  .558  .558  .558
C2   .6  .514  .534  .538  .539  .539  .539
C3   .3  .471  .483  .486  .487  .487  .487
C4   .8  .639  .642  .641  .643  .643  .643
C5   .2   .68  .646  .646  .646  .647  .647
C6   .6  .429  .472  .484  .483  .483  .483
C7   .7  .715  .665  .675  .677  .677  .677

TABLE VII.

       C1    C2    C3    C4    C5    C6    C7
C1  1.001     0  -.669     0     0     0     0
C2   .881     0   -.69     0     0     0     0
C3      0  .745     0  -.706     0     0     0
C4      0     0   .887     0  -.309     0  .527
C5      0     0     0   .941     0     0     0
C6      0     0   .886     0   .076     0  -.811
C7      0     0     0   .413     0   .983     0
B. Experiment 2

In the second experiment, we show how the initial input weight matrix affects the learning. As in Experiment 1, the sequence in Table II is used as the first training sample. A new initial weight matrix is designed by setting all causal relationships in Table III to -1.

This time the learning algorithm converges in 54,787 cycles, taking 1.46875 seconds. The output matrix is shown in Table VIII. Notice that this output matrix differs from the matrix in Table IV.
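The Experiment 2 initialization described above can be sketched as follows, assuming "setting all causal relationships as -1" means replacing every nonzero entry of Table III with -1 while entries with no relationship stay 0 (our reading; the paper does not spell this out):

```python
# Table III: the initial weight matrix used in Experiment 1
table3 = [
    [1, 0, -1, 0, 0, 0, 0],
    [1, 0, -1, 0, 0, 0, 0],
    [0, 1, 0, -1, 0, 0, 0],
    [0, 0, 1, 0, -1, 0, 1],
    [0, 0, 0, 1, 0, 0, 0],
    [0, 0, 1, 0, 1, 0, -1],
    [0, 0, 0, 1, 0, 1, 0],
]

# Experiment 2 start: every causal relationship forced to -1
init2 = [[-1 if w != 0 else 0 for w in row] for row in table3]
```

This keeps the causal structure (which pairs of concepts interact) but discards all sign information, so the learner must recover the signs as well as the magnitudes.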
TABLE VIII.

      C1    C2    C3    C4    C5    C6     C7
C1  .999     0  -.671     0     0     0      0
C2   .88     0  -.686     0     0     0      0
C3     0  .751     0   -.71     0     0      0
C4     0     0   .925     0  -.053     0   .257
C5     0     0     0    .94     0     0      0
C6     0     0   .844     0  -.235     0  -.488
C7     0     0     0   .409     0   .991     0
TABLE IX.

       C1    C2    C3    C4    C5    C6    C7
C1  1.001     0  -.669     0     0     0     0
C2    .88     0  -.689     0     0     0     0
C3      0  .748     0  -.708     0     0     0
C4      0     0   .899     0  -.305     0  .515
C5      0     0     0   .941     0     0     0
C6      0     0   .877     0   .074     0  -.804
C7      0     0     0   .407     0   .991     0
TABLE X.

     C1   C2   C3   C4   C5   C6   C7
C1   .5   .1    0    0    0    0    0
C2    0    0   .6    0    0    0    0
C3  -.2  -.4    0   .7    0   .3    0
C4    0    0    0    0  -.5    0   .8
C5    0    0    0   .1    0   .7    0
C6    0    0   .2    0    0    0   .3
C7    0    0    0    1    0  -.2    0

TABLE XI.

      C1    C2    C3    C4    C5    C6     C7
C1  .993     0  -.664     0     0     0      0
C2  .875     0   -.68     0     0     0      0
C3     0  .748     0  -.707     0     0      0
C4     0     0   .887     0  -.332     0   .549
C5     0     0     0    .94     0     0      0
C6     0     0    .88     0   .075     0  -.807
C7     0     0     0   .418     0    .98     0