i in the input layer to neuron j in the hidden layer. $w_{kj}$ is the value of a weight from neuron j in the hidden layer to neuron k in the output layer. $\theta_k$ is a bias term in the output layer. $\rho$ is the vigilance threshold, which determines how close an input has to be to a stored pattern in order to match it.

Step 2. Set the target value $t_k$ and present each input datum $x_i$ for training.

Step 3. Calculate the output $o_j$ of the hidden layer:

$$o_j = \sum_{i=0}^{n-1} w_{ji} \, x_i \qquad (3)$$

Step 4. Select a winner node $o_j^*$. The winner node for the input data is the hidden-layer node that maximizes the output value $o_j$.

Step 5. Compare the vigilance threshold $\rho$ with the match between the input data and the pattern stored in the winner node:

$$\text{If } \; \frac{\| T \bullet X \|}{\| X \|} \geq \rho, \; \text{ go to Step 7; otherwise, go to Step 6.}$$

Step 6. Reset the output $o_j^*$ of the winner node to zero and go back to Step 4.

Step 7. Update the connection weights of the winner node between the hidden layer and the input layer:

$$t_{j^*i}(n+1) = t_{j^*i}(n) \, x_i, \qquad w_{j^*i}(n+1) = \frac{t_{j^*i}(n+1) \, x_i}{0.5 + \sum_{i=1}^{m} w_{j^*i} \, x_i} \qquad (5)$$
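To make Steps 3-7 concrete, the following is a minimal NumPy sketch of one presentation of a binary input to the modified-ART1 hidden layer. The function name, array shapes, default vigilance value, and the handling of the case in which every node fails the vigilance test are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def art1_hidden_step(x, w, t, rho=0.8):
    """One presentation of a binary input x to the modified-ART1 hidden layer
    (Steps 3-7).  x: (m,) binary vector; w: (J, m) bottom-up weights; t: (J, m)
    top-down stored patterns; rho: vigilance threshold.  Returns the index of
    the accepted winner node; w and t are updated in place as in Eq. (5)."""
    o = w @ x                                    # Step 3: hidden outputs, Eq. (3)
    while True:
        j = int(np.argmax(o))                    # Step 4: winner node
        if o[j] <= 0:                            # every node rejected: the paper
            raise RuntimeError("no node passed vigilance; a new hidden node "
                               "would be self-generated here")
        if np.sum(t[j] * x) / np.sum(x) >= rho:  # Step 5: ||T.X|| / ||X|| >= rho
            t[j] = t[j] * x                            # Step 7: Eq. (5), top-down
            w[j] = t[j] * x / (0.5 + np.sum(w[j] * x)) # Eq. (5), bottom-up
            return j
        o[j] = 0.0                               # Step 6: suppress node, retry
```

Only the winner node's weights are adapted, which is the winner-take-all behaviour referred to again in Sections 4 and 5.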
Step 8. Calculate the NET value of the output layer using the winner node's output $o_j^*$ in the hidden layer and the connection weights $w_{kj^*}$ between the hidden layer and the output layer, and then calculate the output $o_k$ of the output layer using the max ($\vee$) operator:

$$NET = \{\, o_j^* \circ w_{kj^*} \,\}, \qquad o_k = NET \vee \theta_k \qquad (6)$$

where "$\circ$" denotes max-min composition.

Step 9. Update the connection weights between the output layer and the hidden layer, and the bias term:

$$w_{kj^*}(n+1) = w_{kj^*}(n) + \alpha \, \Delta w_{kj^*}(n+1) + \beta \, \Delta w_{kj^*}(n)$$
$$\theta_k(n+1) = \theta_k(n) + \alpha \, \Delta \theta_k(n+1) + \beta \, \Delta \theta_k(n) \qquad (7)$$

where $\alpha$ is the learning rate and $\beta$ is the momentum, and

$$\Delta w_{kj^*} = \sum_{k=1}^{p} (t_k - o_k) \frac{\partial o_k}{\partial w_{kj^*}}, \qquad \Delta \theta_k = \sum_{k=1}^{p} (t_k - o_k) \frac{\partial o_k}{\partial \theta_k} \qquad (8)$$

$$\frac{\partial o_k}{\partial w_{kj^*}} = \begin{cases} 1, & o_k = w_{kj^*} \\ 0, & \text{otherwise,} \end{cases} \qquad \frac{\partial o_k}{\partial \theta_k} = \begin{cases} 1, & o_k = \theta_k \\ 0, & \text{otherwise.} \end{cases}$$
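Steps 8-9 can be sketched in the same style. With a single winner node, the max-min composition in Eq. (6) reduces to an element-wise min, and the derivatives in Eq. (8) are 1 only for the term that actually attains $o_k$, as stated above. Function and variable names below are illustrative.

```python
import numpy as np

def output_layer_step(o_star, w, theta, targets, j_star,
                      prev_dw, prev_dtheta, alpha=0.5, beta=0.5):
    """Steps 8-9 for one training pattern.
    o_star: output of the winner hidden node j_star; w: (p, J) hidden-to-output
    weights; theta: (p,) biases; targets: (p,) target values t_k; prev_dw,
    prev_dtheta: previous updates, used for the momentum term in Eq. (7)."""
    net = np.minimum(o_star, w[:, j_star])   # Eq. (6): max-min composition
    o = np.maximum(net, theta)               # o_k = NET v theta_k

    err = targets - o
    dw = err * (o == w[:, j_star])           # Eq. (8): derivative 1 where o_k = w_kj*
    dtheta = err * (o == theta)              # Eq. (8): derivative 1 where o_k = theta_k

    w[:, j_star] += alpha * dw + beta * prev_dw          # Eq. (7)
    theta += alpha * dtheta + beta * prev_dtheta
    return o, dw, dtheta
```

On the first pattern, prev_dw and prev_dtheta can simply be zero vectors.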
3. Optimal channel allocation problem

3.1 Cellular system description

We consider the performance model of a single cell in a mobile cellular network. Let $\lambda_{vn}$ be the rate of the Poisson arrival stream of new calls and $\lambda_{vh}$ be the rate of the Poisson stream of handoff arrivals. An ongoing call (new or handoff) completes service at rate $\mu_{vt}$, and the mobile engaged in the call departs the cell at rate $\mu_{vout}$. There is a limited number of channels, S, in the channel pool. When a handoff call arrives and an idle channel is available in the channel pool, the call is accepted and a channel is assigned to it; otherwise, the handoff call is dropped. When a new call arrives, it is accepted provided that g+1 or more idle channels are available in the channel pool; otherwise, the new call is blocked. Here, g is the number of guard channels. We assume that g < S in order not to exclude new calls altogether.
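This admission rule can be stated compactly as a small predicate; the sketch below (with illustrative names) simply restates it.

```python
def admit_call(busy, S, g, is_handoff):
    """Guard-channel admission rule for a single cell.
    busy: channels currently in use, C(t); S: size of the channel pool;
    g: number of guard channels (g < S); is_handoff: True for a handoff
    arrival, False for a new call.  Returns True if the call is accepted."""
    idle = S - busy
    if is_handoff:
        return idle >= 1        # a handoff call only needs some idle channel
    return idle >= g + 1        # a new call needs g+1 or more idle channels
```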
Let C(t) denote the number of busy channels at time t; then $\{C(t),\, t \geq 0\}$ is a birth-death process, as shown in Fig. 2.

[Fig. 2 Markov chain model of mobile cellular handoff: states $0, 1, \dots, S-g, \dots, S$ with arrival rates $\Lambda(n)$ and departure rates $M(n)$]
Let $\lambda = \lambda_{vn} + \lambda_{vh}$ and $\mu = \mu_{vt} + \mu_{vout}$. The state-dependent arrival and departure rates in the birth-death process are given by

$$\Lambda(n) = \begin{cases} \lambda, & n = 0, 1, \dots, S-g-1 \\ \lambda_{vh}, & n = S-g, \dots, S-1; \; g > 0 \end{cases} \qquad (9)$$

$$M(n) = n\mu, \qquad n = 1, \dots, S$$

Because of the structure of the Markov chain, we can easily write down the solution to the steady-state balance equations as follows. Define the steady-state probability

$$p_n = \lim_{t \to \infty} \mathrm{Prob}\,( C(t) = n ), \qquad n = 0, 1, 2, \dots, S \qquad (10)$$

Let $\rho = \lambda / \mu$ and $\rho_1 = \lambda_{vh} / (\mu_{vt} + \mu_{vout})$. Then we have an expression for $p_n$:

$$p_n = p_0 \begin{cases} \dfrac{\rho^n}{n!}, & n \leq S-g \\[1ex] \dfrac{\rho^{S-g}\, \rho_1^{\,n-(S-g)}}{n!}, & n \geq S-g \end{cases} \qquad (11)$$

where

$$p_0 = \frac{1}{\displaystyle \sum_{n=0}^{S-g-1} \frac{\rho^n}{n!} \; + \; \sum_{n=S-g}^{S} \frac{\rho^{S-g}\, \rho_1^{\,n-(S-g)}}{n!}} \qquad (12)$$
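Equations (11)-(12) translate directly into code; the following sketch (the function name is illustrative) computes the full steady-state distribution for given S, g and traffic intensities.

```python
from math import factorial

def steady_state_probs(S, g, rho, rho1):
    """Steady-state probabilities p_0..p_S of the guard-channel birth-death
    chain, Eqs. (11)-(12).  rho = lambda/mu, rho1 = lambda_vh/(mu_vt + mu_vout)."""
    def unnormalized(n):
        if n <= S - g:
            return rho ** n / factorial(n)                            # Eq. (11), n <= S-g
        return rho ** (S - g) * rho1 ** (n - (S - g)) / factorial(n)  # Eq. (11), n >= S-g
    weights = [unnormalized(n) for n in range(S + 1)]
    p0 = 1.0 / sum(weights)                                           # Eq. (12)
    return [p0 * wn for wn in weights]
```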
Now we can write an expression for the dropping probability of handoff calls:

$$P_d(S, g) = p_S = p_0 \, \frac{\rho^{S-g}\, \rho_1^{\,g}}{S!} \qquad (13)$$

Similarly, the expression for the blocking probability of new calls is

$$P_b(S, g) = \sum_{n=S-g}^{S} p_n = p_0 \sum_{n=S-g}^{S} \frac{\rho^{S-g}\, \rho_1^{\,n-(S-g)}}{n!} = p_0 \, \rho^{S-g} \sum_{k=0}^{g} \frac{\rho_1^{\,k}}{(k+S-g)!} \qquad (14)$$

Note that if we set g = 0, then expression (14) reduces to the classical Erlang-B loss formula. In fact, setting g = 0 in expression (13) also yields the Erlang-B loss formula [9].
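Equations (13)-(14) can likewise be evaluated in closed form, and the g = 0 case checked against the Erlang-B formula just mentioned. The helper names and the commented example values are illustrative.

```python
from math import factorial

def dropping_blocking(S, g, rho, rho1):
    """Handoff dropping probability Pd (Eq. 13) and new-call blocking
    probability Pb (Eq. 14) for the guard-channel model."""
    # normalization constant p0, Eq. (12)
    denom = sum(rho ** n / factorial(n) for n in range(S - g)) \
          + sum(rho ** (S - g) * rho1 ** (n - (S - g)) / factorial(n)
                for n in range(S - g, S + 1))
    p0 = 1.0 / denom
    Pd = p0 * rho ** (S - g) * rho1 ** g / factorial(S)               # Eq. (13)
    Pb = p0 * rho ** (S - g) * sum(rho1 ** k / factorial(k + S - g)
                                   for k in range(g + 1))             # Eq. (14)
    return Pd, Pb

def erlang_b(S, rho):
    """Classical Erlang-B loss formula; with g = 0 both Pd and Pb reduce to it."""
    terms = [rho ** n / factorial(n) for n in range(S + 1)]
    return terms[S] / sum(terms)

# Example check (illustrative values):
#   Pd, Pb = dropping_blocking(S=11, g=0, rho=6.0, rho1=2.0)
#   assert abs(Pd - erlang_b(11, 6.0)) < 1e-12 and abs(Pb - Pd) < 1e-12
```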
3.2 Optimization for guard channels

We consider the problem of finding the optimal number g of guard channels such that the GoS is minimized. To solve this optimization problem, the proposed learning algorithm is used.

With different new call arrival rates, the corresponding handoff call arrival rates vary accordingly. To capture this dynamic behavior, a fixed-point iteration scheme is applied to determine the handoff arrival rates [12]. To specify and solve the Markov chain models, the tool SPNP [13] is used.
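As a point of reference for the optimization problem itself (not the neural-network solution proposed in this paper), the optimal g for a single parameter setting can also be found by exhaustive search over g = 0, ..., S-1, reusing dropping_blocking() from the sketch in Section 3.1. Since the exact GoS definition is not repeated in this section, a weighted combination of Pb and Pd is assumed below purely for illustration.

```python
def optimal_guard_channels(S, rho, rho1, omega):
    """Exhaustive search for the g (0 <= g < S) that minimizes GoS.
    Assumes GoS = (Pb + omega * Pd) / (1 + omega) for illustration; substitute
    the paper's actual GoS definition.  Uses dropping_blocking() from above."""
    def gos(g):
        Pd, Pb = dropping_blocking(S, g, rho, rho1)
        return (Pb + omega * Pd) / (1 + omega)
    return min(range(S), key=gos)
```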
4. Experimental results

The experimental procedure includes four stages: (1) as training data, the optimal guard channel numbers under various S, $\omega$, $\lambda_{vn}$ are obtained using SPNP; (2) the proposed learning algorithm is applied to these optimal guard channel numbers; (3) to show the optimality, the result of the neural network is compared with the solution of the Markov chain models for unlearned combinations of S, $\omega$, $\lambda_{vn}$; and (4) finally, the performance results are compared with those of the conventional backpropagation algorithm.

As a first step of our procedure, we find the value of g that minimizes the GoS, according to the results shown in Fig. 3.

[Fig. 3 GoS versus guard channel numbers: GoS plotted against the number of guard channels g, for (s=11, λvn=6, ω=3), (s=11, λvn=7, ω=2), (s=17, λvn=8, ω=3), (s=17, λvn=12, ω=3), (s=21, λvn=12, ω=3), and (s=23, λvn=12, ω=3)]

In the training experiment, 420 optimal guard channel numbers were used. In step 3, to assess the accuracy of the proposed algorithm, 100 random data were tested by comparing them with the Markov chain model.
Table 1 (continued)
11   7   3   2 (0.4996)   3 (0.5046)   0.0050
13   8   3   2 (0.4560)   3 (0.4578)   0.0018

As shown in Table 1, all 420 optimal guard channel numbers were successfully trained by the proposed algorithm, and 96 optimal guard channel numbers were recognized correctly out of a total of 100 test data. Therefore, the recognition accuracy is 96% in this study.

Table 2 compares the number of epochs and the TSS of the proposed method and the conventional backpropagation algorithm.

Table 2: Learning results of each learning method

                                 Epoch number    TSS
Conventional backpropagation     1305            0.089961
The proposed algorithm           701             0.077842

The initial connection weights used for training in each algorithm were set to values between -1 and 1, and the learning rate and the momentum were set to 0.5 and 0.5, respectively, for both recognition algorithms. For the proposed algorithm, the vigilance parameter used for the creation and update of clusters was set empirically via a priority test. Based on the simulation results, the optimum value of the vigilance parameter for the training data was set to 0.8. Table 1 presents the training results with an error limit of 0.09. For the backpropagation algorithm, when we varied the number of hidden-layer nodes from 10 to 30, we found that the case of 18 nodes gave the best performance (fast training time, high convergence); therefore, Table 1 presents the training result for the case of 18 hidden nodes. In the proposed method, because ART1 is applied as the structure between the input and hidden layers, 20 hidden nodes were produced after training.

[Fig. 4 Variance of TSS according to learning methods: TSS plotted against training epoch for the two methods]

Fig. 4 shows the curves of the sum of squared errors for the backpropagation algorithm and the proposed method. As shown in Fig. 4, the proposed method outperforms the conventional method in terms of the speed of initial convergence and training time. From the experimental results, we know that the proposed method spends less training time than the conventional training method and has good convergence ability. This is based on the fact that the winner-take-all method is adopted for the connection weight adaptation, so that only the stored pattern corresponding to a given input pattern gets updated. Moreover, the proposed method reduces the possibility of local minima due to inadequate weights and an insufficient number of hidden nodes.

5. Conclusions

This paper proposes an enhanced supervised learning algorithm using self-organization, which self-generates hidden nodes by means of a compound Max-Min neural network and a modified ART1. From the input layer to the hidden layer, a modified ART1 is used to produce nodes, and the winner-take-all method is adopted for the connection weight adaptation, so that only the stored pattern corresponding to a given input pattern gets updated.

Using the proposed architecture, we construct a neural network algorithm for the optimal channel allocation problem in mobile cellular networks.

The experimental results show that the proposed method is not sensitive to the momentum, has good convergence ability, and requires less training time than the conventional backpropagation algorithm. As future work, we must address the growth in the number of hidden nodes as the vigilance parameter changes.
References
[1] J. A. Freeman and D. M. Skapura, Neural Networks: Algorithms, Applications, and Programming Techniques, Addison-Wesley, 1991.
[2] R. Hecht-Nielsen, "Theory of Backpropagation Neural Networks," Proceedings of IJCNN, Vol. 1, pp. 593-605, 1989.
[3] Y. Hirose, K. Yamashita, and S. Hijiya, "Backpropagation Algorithm Which Varies the Number of Hidden Units," Neural Networks, Vol. 4, pp. 61-66, 1991.
[4] K. B. Kim, M. H. Kang, and E. Y. Cha, "Fuzzy Competitive Backpropagation using Nervous System," Proceedings of WCSS, pp. 188-193, 1997.
[5] S. N. Kavuri and V. Venkatasubramanian, "Solving the Hidden Node Problem in Neural Networks with Ellipsoidal Units and Related Issues," Proceedings of IJCNN, Vol. 1, pp. 775-780, 1992.
[6] M. Georgiopoulos, G. L. Heileman, and J. Huang, "Properties of Learning Related to Pattern Diversity in ART1," Neural Networks, Vol. 4, pp. 751-757, 1991.
[7] K. B. Kim, S. W. Jang, and C. K. Kim, "Recognition of Car License Plate by Using Dynamical Thresholding Method and Enhanced Neural Networks," LNCS 2756, pp. 309-319, 2003.
[8] T. Saito and M. Mukaidono, "A Learning Algorithm for Max-Min Network and its Application to Solve Relation Equations," Proceedings of IFSA, pp. 184-187, 1991.
[9] K. S. Trivedi, "Loss Formulas and Their Application to Optimization for Cellular Networks," IEEE Trans. Vehicular Technology, Vol. 50, pp. 664-673, 2001.
[10] A Guide to DECT Features that Influence the Traffic Capacity and the Maintenance of High Radio Link Transmission Quality, Including the Results of Simulations, http://www.etsi.org, July 1991.
[11] K. S. Trivedi, X. Ma, and S. Dharmaraja, "Performability Modeling of Wireless Communication Systems," International Journal of Communication Systems, pp. 561-577, 2003.
[12] V. Mainkar and K. S. Trivedi, "Sufficient Conditions for Existence of a Fixed Point in Stochastic Reward Net-Based Iterative Models," IEEE Trans. Software Engineering, Vol. 22, pp. 640-653, 1996.
[13] G. Ciardo and K. S. Trivedi, "SPNP Users Manual, Version 6.0," Technical Report, Duke University, 1999.

CheulWoo Ro received his B.S., M.S., and Ph.D. degrees from Sogang, Dongguk, and Sogang University, Seoul, Korea, in 1980, 1982, and 1995, respectively. From 1982 to 1991, he was with ETRI (Electronics and Telecommunications Research Institute) as a senior research member. Since 1991, he has been a professor at Silla University. His research interests include performance analysis of wireless networks, Petri net modeling, and embedded network systems.

KwangEui Lee received his B.S., M.S., and Ph.D. degrees from Sogang University, Seoul, Korea, in 1990, 1992, and 1997, respectively. From 1997 to 2001, he was with ETRI as a senior research member. Since 2001, he has been an assistant professor at Dongeui University. His research interests include computation theory, artificial life, context awareness, and their applications.

KyungMin Kim received her B.S. and M.S. degrees from Silla University, Busan, Korea, in 1993 and 2000, respectively. She is currently a Ph.D. candidate in the Department of Computer Engineering, Silla University. Her research interests include performance analysis of wireless networks, ad-hoc networks, and embedded programming.