\sum_{i=1}^{n}\sum_{j=1}^{n} p_{ij} = 1
and a, b, c and d are real-valued parameters. This form allows us to relax the assumption that the sum of all probabilities equals 1, called the equality constraint. The difficulty with probabilities is that they become very small in large datasets in trying to meet the equality constraint. The equality constraint is relaxed by replacing p_ij with an information source having an associated grade. For more elaboration on the entropy function, the reader may refer to the Appendix. With this constraint relaxed, we have the flexibility of choosing information sources in such a way that the entropy value can exceed 1, thus acquiring discriminating power.
The Hanman-Anirban entropy takes the form

H = \sum_{i=1}^{n}\sum_{j=1}^{n} p_{ij}\, e^{-(a p_{ij}^{3} + b p_{ij}^{2} + c p_{ij} + d)}   (2)

Taking p_ij = I(i,j), a = b = 0, c = 1/I_max and d = -I(ref)/I_max in the exponential gain of Eqn. (2) leads to the exponential function given by

\mu_{ij} = e^{-|I(i,j) - I(ref)|\, f_{h}^{2}}   (3)
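As a concrete illustration of Eq. (2), the entropy of a small window can be computed directly; the 2x2 array and the parameter choice below are hypothetical.

```python
import numpy as np

def hanman_anirban_entropy(p, a=0.0, b=0.0, c=1.0, d=0.0):
    """Eq. (2): H = sum_ij p_ij * exp(-(a*p_ij^3 + b*p_ij^2 + c*p_ij + d))."""
    p = np.asarray(p, dtype=float)
    gain = np.exp(-(a * p ** 3 + b * p ** 2 + c * p + d))
    return float(np.sum(p * gain))

# Hypothetical 2x2 window of "probabilities"; with the equality constraint
# relaxed, the entries need not sum to 1
p = [[0.2, 0.4], [0.1, 0.3]]
H = hanman_anirban_entropy(p)  # a = b = 0, c = 1, d = 0
```

With a = b = c = d = 0 the gain reduces to 1 and H collapses to the plain sum of the p_ij, which shows how the polynomial in the exponent controls the discriminating power.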
The unknown parameter in Eq. (3) is the reference gray level I(ref) in an image, which can be taken as the maximum or the median gray level in the window. Here the fuzzifier f_h^2 is defined as

f_{h}^{2} = \left[\frac{\sum_{i=1}^{W}\sum_{j=1}^{W} \big(I(ref) - I(i,j)\big)^{2}}{\sum_{i=1}^{W}\sum_{j=1}^{W} \big(I(ref) - I(i,j)\big)^{4}}\right]^{1/2}

In view of Eqns. (2) and (3), Eq. (2) can be interpreted as

H = \sum_{i=1}^{n}\sum_{j=1}^{n} I(i,j)\, \mu_{ij}   (4)
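Eqs. (3)-(4) can be sketched on a toy window as follows; the orientation of the fuzzifier ratio is the reconstruction given above and should be treated as an assumption, and the window values are hypothetical.

```python
import numpy as np

def fuzzifier_sq(I, ref):
    """f_h^2 as reconstructed above: ratio of second- to fourth-order
    deviations from the reference gray level, raised to the power 1/2
    (the exact orientation of this ratio is an assumption)."""
    d = ref - np.asarray(I, dtype=float)
    return float((np.sum(d ** 2) / np.sum(d ** 4)) ** 0.5)

def membership(I, ref):
    """Eq. (3): mu_ij = exp(-|I(i,j) - I(ref)| * f_h^2)."""
    I = np.asarray(I, dtype=float)
    return np.exp(-np.abs(I - ref) * fuzzifier_sq(I, ref))

def information_set(I, ref):
    """Eq. (4): H = sum_ij I(i,j) * mu_ij."""
    I = np.asarray(I, dtype=float)
    return float(np.sum(I * membership(I, ref)))

# Hypothetical 3x3 gray-level window; ref taken as the window maximum
win = np.array([[10.0, 20.0, 30.0], [40.0, 50.0, 60.0], [70.0, 80.0, 90.0]])
H = information_set(win, ref=win.max())
```

The pixel equal to the reference gets membership 1, and pixels far from it are discounted, so H always stays below the plain sum of the window.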
Now the information can be represented as the set {I(i,j) \mu_{ij}}. In the parlance of a fuzzy set, each element of the set is a pair consisting of an information source and its membership value, whereas in an information set each element is an information value. Several candidates that serve as information can be derived from (4), which is the basic form. Some of the forms which emanate from information sets are:
\{I(i,j)\mu_{ij}\},\ \{I(i,j)\mu_{ij}^{2}\},\ \{I(i,j)\mu_{ij}^{3}\},\ \{I(i,j)^{2}\mu_{ij}\},\ \{I(i,j)\mu_{ij}^{1/2}\},\ \{f(g(I(i,j)\mu_{ij}))\}
A family of information forms is thus deduced from the
Hanman-Anirban entropy for dealing with different
problems.
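The family of forms can be generated mechanically; the snippet below builds a few of the members listed above, with dictionary keys chosen here purely for illustration.

```python
import numpy as np

def information_forms(I, mu):
    """A few members of the information-form family derived from Eq. (4);
    the names are chosen for illustration only."""
    I = np.asarray(I, dtype=float)
    mu = np.asarray(mu, dtype=float)
    return {
        "basic":     I * mu,            # {I(i,j) mu_ij}
        "energy":    I * mu ** 2,       # {I(i,j) mu_ij^2}
        "cubic":     I * mu ** 3,       # {I(i,j) mu_ij^3}
        "source_sq": I ** 2 * mu,       # {I(i,j)^2 mu_ij}
        "root":      I * np.sqrt(mu),   # {I(i,j) mu_ij^(1/2)}
    }

forms = information_forms([[1.0, 2.0]], [[0.5, 0.25]])
```

Each form pairs the same information sources with a differently transformed membership, which is what gives the family its flexibility across problems.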
3.2 The Hanman Transform
The membership value associated with each information source gives a measure of uncertainty; making it a parameter in the exponential gain function of this entropy gives rise to the information value as the gain. To this end, the parameters of the Hanman-Anirban entropy function in Eq. (2) are chosen as a = b = d = 0 and c = 1/I_max so as to obtain the Hanman transform:
H_t(I) = \sum_{i=1}^{W}\sum_{j=1}^{W} I(i,j)\, e^{-H(i,j)/I_{max}}   (5)

where H(i,j) = I(i,j)\,\mu_{ij}. In the general case, one can take the exponential gain as a function of the information, e^{-f(H(i,j))}.
The motivation behind this development is now elaborated. As can be seen from Eq. (5), the information source is weighted as a function of the information value. One can also see the utility of this transform in a social context: a person (information source) is judged by the opinions (exponential gain) formed about the person (information value), resulting in the judgment (the weighted information source). Just as the Fourier transform sieves the frequency content of a periodic signal, the Hanman transform sieves the uncertainty (information) through the vague information source. The exponential function, being monotonically increasing, has the ability to retrieve things in terms of its gain. As the information values in the gain can assume different forms, the Hanman transform can capture the related properties of the information sources, thus offering immense possibilities to try out.
Alternatively, the Hanman transform of Eq. (5) can also be written in the matrix form

H_t(I) = \sum I \cdot e^{-(I \cdot \mu)/I_{max}}   (6)

where I is the sub-image of the window and I \cdot \mu (the product is taken element-wise) is the corresponding information matrix. The information is obtained as the sum of the matrix elements. It is possible to include a bias in the Hanman transform as follows:
International Journal of EmergingTrends & Technology in Computer Science(IJETTCS)
Web Site: www.ijettcs.org Email: editor@ijettcs.org
Volume 3, Issue 3, May June 2014 ISSN 2278-6856
Volume 3, Issue 3 May June 2014 Page 149
H_t(I) = \sum_{i=1}^{W}\sum_{j=1}^{W} I(i,j)\, e^{-(H(i,j) + H_{0})/I_{max}}
Spatial Variation
If g(k) is the kth feature value representing the spatial variation of the information source, with the corresponding membership value h(k), then

H_s(g) = \sum_{k} h(k)\, g(k)\, e^{-g(k)/g_{max}}
For instance, h(k) vs. g(k) is the histogram of the gray levels of an image. If g(k) = k, i.e., the gray levels vary like natural numbers, then h(k) vs. k is the histogram. Considering h(k) as the membership function of k, the Hanman transform can be written as:
H_s(g) = \sum_{k} k\, e^{-h(k)\, k}
Instead of discrete k, let us now take a continuous variable t such that h(t) is a function of t; the above becomes

H(t) = \int t\, e^{-h(t)\, t}\, dt
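The two discrete spatial forms above can be checked on a toy histogram; the 4-bin histogram here is illustrative only.

```python
import numpy as np

def spatial_hanman(h, g):
    """H_s(g) = sum_k h(k) * g(k) * exp(-g(k)/g_max), the spatial form above."""
    h = np.asarray(h, dtype=float)
    g = np.asarray(g, dtype=float)
    return float(np.sum(h * g * np.exp(-g / g.max())))

def spatial_hanman_histogram(h):
    """H_s = sum_k k * exp(-h(k)*k), taking h(k) as the membership of k."""
    h = np.asarray(h, dtype=float)
    k = np.arange(1, len(h) + 1, dtype=float)
    return float(np.sum(k * np.exp(-h * k)))

# Hypothetical 4-bin normalized histogram over gray levels g(k) = k
hist = np.array([0.4, 0.3, 0.2, 0.1])
Hs = spatial_hanman(hist, np.arange(1.0, 5.0))
Hk = spatial_hanman_histogram(hist)
```

The continuous form is the limit of the second sum as the bins shrink, which is what motivates replacing k with t.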
3.2.1 Time Variation
Let h(t) be a time-varying function and \mu(t) the continuous membership function; then the Hanman transform takes the integral form

H(t) = \int h(t)\, e^{-\mu(t)\, h(t)}\, dt
This transformation is motivated by the fact that any information source (text, image or video) must be weighed as a function of the information. Note that the information results from an agent who gives the information source a grade (membership function value).
3.2.2 Heterogeneous Hanman Transform
If A has the information source I_a and information value H_a, and B has the information source I_b and information value H_b, the heterogeneous Hanman transforms are expressed as

H(A/B) = I_a\, e^{-H_b}

H(B/A) = I_b\, e^{-H_a}
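The cross-modality weighting above is a one-liner in array form; the source and information values for the two modalities below are hypothetical.

```python
import numpy as np

def heterogeneous_hanman(I_src, H_other):
    """H(A/B) = I_a * exp(-H_b): the source of one modality is evaluated
    through the exponential gain of the other's information value."""
    return np.asarray(I_src, dtype=float) * np.exp(-np.asarray(H_other, dtype=float))

# Hypothetical sources and information values for two modalities A and B
hab = heterogeneous_hanman([0.5, 1.0], [0.2, 0.4])  # H(A/B)
hba = heterogeneous_hanman([0.3, 0.6], [0.1, 0.5])  # H(B/A)
```

Because the gain comes from the other modality, a confident B (large H_b) damps A's sources, which is the fusion behavior the transform is after.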
A few applications of this transform include the generation of new features, the evaluation of signal quality, and image and video processing.
Algorithm:
The Hanman transform features are extracted from (5) in the following steps:
1) Compute the membership value associated with each gray level in a window of size W×W.
2) Compute the information as the product of the gray level and its membership function value, divided by the maximum gray level in the window.
3) Take the exponential of the normalized information and multiply it with the gray level.
4) Repeat steps 1-3 on all gray levels in a window and sum the values to obtain a feature.
5) Repeat steps 1-4 on all windows in a face image to get all features.
6) Repeat steps 1-5 for W = 13, 15, 17, 19 for the performance evaluation.
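The steps above can be sketched as follows. The Gaussian membership around the window mean is an assumption made here for illustration (the paper derives its membership from Eqs. (3)-(4)), and the random image is a toy stand-in.

```python
import numpy as np

def window_feature(win):
    """One Hanman-transform feature (steps 1-4) from a WxW gray-level window.
    The Gaussian membership around the window mean is an assumption."""
    win = np.asarray(win, dtype=float)
    Imax = win.max() + 1e-12
    mu = np.exp(-((win - win.mean()) ** 2) / (2.0 * win.var() + 1e-12))  # step 1
    info = win * mu / Imax                                               # step 2
    return float(np.sum(win * np.exp(-info)))                           # steps 3-4

def image_features(img, W):
    """Step 5: one feature per non-overlapping WxW window of the image."""
    img = np.asarray(img, dtype=float)
    return np.array([window_feature(img[r:r + W, c:c + W])
                     for r in range(0, img.shape[0] - W + 1, W)
                     for c in range(0, img.shape[1] - W + 1, W)])

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(26, 26)).astype(float)  # toy stand-in image
feats = image_features(img, W=13)  # step 6 repeats this for W = 15, 17, 19
```

A 26×26 image with W = 13 yields four windows and hence a 4-element feature vector per image.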
4. The Proposed Algorithm
A new classifier is formulated that seeks to accentuate the absolute differences between the training and test samples using t-norms and then evaluates the entropy. Let N_l be the number of users and N_r the number of training samples per user. As we deal with only one test sample, the number of test samples is of no concern. Let the feature vector of the rth training sample of the lth user be denoted by f_{tr,k}(r,l). Similarly, let the feature vector of the test sample t, which may pertain to any user, be denoted by f_{te,k}(t). The absolute errors between the training and test samples are computed from

e_{r,l}(k) = |f_{tr,k}(r,l) - f_{te,k}(t)|,\quad r = 1,\dots,N_r;\ l = 1,\dots,N_l   (7)
All the error vectors (N_r of them) pertaining to a user l contain the information required for matching. In order to utilize this information without resorting to learning, we generate the normed-error vectors by taking the t-norm of all possible pairs of error vectors:

E_{ij}(k) = t(e_{i,l}(k), e_{j,l}(k))   (8)

With i, j = 1, 2, \dots, N_r, the number of products generated is N_p = N_r(N_r - 1)/2. The normed-error vectors act like the support vectors of a Support Vector Machine (SVM) because the t-norms stretch the errors, thus creating a margin.
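Eq. (8) can be sketched with the Frank t-norm (one of the t-norms used in the experiments below); the error vectors are hypothetical.

```python
import numpy as np

def frank_tnorm(x, y, p):
    """Frank t-norm: t(x,y) = log_p(1 + (p^x - 1)(p^y - 1)/(p - 1)), p > 0, p != 1."""
    return np.log1p((p ** x - 1.0) * (p ** y - 1.0) / (p - 1.0)) / np.log(p)

def normed_error_vectors(errors, tnorm, p):
    """Eq. (8): apply the t-norm to all distinct pairs of a user's error vectors."""
    Nr = len(errors)
    return [tnorm(errors[i], errors[j], p)
            for i in range(Nr) for j in range(i + 1, Nr)]

# Hypothetical absolute-error vectors (Eq. 7) for one user, Nr = 3
e = [np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.2, 0.4])]
E = normed_error_vectors(e, frank_tnorm, p=0.1)  # Nr(Nr-1)/2 = 3 vectors
```

Since any t-norm satisfies t(x, y) <= min(x, y), each normed-error component is pushed below both original errors, which is the "stretching" that creates the margin.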
Recalling the Hanman-Anirban entropy function with a = b = 0 and p = E_{ij}(k), we obtain what we call the general Hanman classifier:

h_{ij}(l) = \sum_{k=1}^{M} E_{ij}(k)\, e^{-[c E_{ij}(k) + d]}   (9)
In (9) we need to learn c and d, which we can avoid by taking c = 1 and d = 0. In this case (9) simplifies to the Hanman classifier:

h_{ij}(l) = \sum_{k=1}^{M} E_{ij}(k)\, e^{-E_{ij}(k)}   (10)
The minimum of h_{ij}(l) is the measure of dissimilarity corresponding to the lth user, so we determine

H(l) = \min_{i,j}\{h_{ij}(l)\}   (11)

The identity of the user corresponds to the one for which H(l) is minimum. The normed-error vectors can also be used for classification by ignoring the exponential in (10), as
h_{ij}(l) = \sum_{k=1}^{M} E_{ij}(k)   (12)
It is possible to find the pair from all {hij (l)} which
corresponds to the minimum H (l) for the lth user. This is
repeated for all l.
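The decision rule of Eqs. (10)-(11) can be sketched as follows; the per-user normed-error vectors are hypothetical.

```python
import numpy as np

def hanman_score(E):
    """Eq. (10): h = sum_k E(k) * exp(-E(k)) for one normed-error vector."""
    E = np.asarray(E, dtype=float)
    return float(np.sum(E * np.exp(-E)))

def identify(normed_errors_per_user):
    """Eq. (11): H(l) = min over pairs (i,j) of h_ij(l); the claimed identity
    is the user l with the smallest H(l)."""
    H = [min(hanman_score(E) for E in pairs) for pairs in normed_errors_per_user]
    return int(np.argmin(H)), H

# Hypothetical normed-error vectors for two enrolled users; the genuine user
# (index 0) has small errors, the impostor large ones
user0 = [np.array([0.05, 0.10]), np.array([0.08, 0.04])]
user1 = [np.array([0.60, 0.90]), np.array([0.70, 0.80])]
label, H = identify([user0, user1])
```

Because x e^{-x} is increasing on [0, 1], small normed errors give small scores, so the genuine user attains the minimum H(l).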
4.1 Entropy Neural Network
5. Results
We use three datasets: the FKP (finger knuckle print) database of PolyU, the IRIS Flower dataset from the UCI Repository, and the Ear Database of IITD.
5.1 Without PSO
5.1.1 FKP Dataset
We have taken LBP (Local Binary Pattern) data of the left index knuckles of 165 users. We use various t-norms, including Hamacher, Einstein product, Schweizer & Sklar, Yager and Frank. We have tried various values of p, but only the best results are reported. Table 5.1 shows the results:
Table 5.1 FKP Dataset results using various t-norms

T-norm            | p   | Result
Schweizer & Sklar | 0.9 | 81.82%
Yager             | 0.1 | 86.06%
Yager             | 0.3 | 85.76%
Frank             | 0.1 | 89.09%
Frank             | 0.2 | 88.18%
Frank             | 0.3 | 87.20%
We observe that the Frank t-norm gives the best result among all the t-norms.
5.1.2 IRIS Flower Database
The best results are shown in Table 5.2.
Table 5.2 IRIS Flower Database results using various t-norms

T-norm            | p   | Result
Einstein product  | -   | 85%
Schweizer & Sklar | 0.1 | 90%
Schweizer & Sklar | 0.3 | 91.67%
Yager             | 0.3 | 91.67%
Frank             | 0.1 | 86.67%
5.1.3 Ear Database
The best results are shown in Table 5.3.
Table 5.3 Ear Database results using various t-norms

T-norm           | p   | Result
Einstein product | 0.3 | 86.40%
Hamacher         | 0.1 | 85.60%
Frank            | 0.1 | 87.20%
5.2 By Using PSO
To optimize the error rate, we use the PSO algorithm. We found that the Frank t-norm gives the best results. Using the general Hanman classifier of Eq. (9) with parameters a and b in place of c and d, we find the recognition rate from

h = \sum_{i=1}^{n} t(e_i, e'_i)\, e^{-(a\, t(e_i, e'_i) + b)}

The value of b has no effect, so we learn only the value of a.
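A minimal particle swarm sketch for learning the single parameter a is given below. The objective here is a synthetic stand-in for the error rate (a smooth curve with a minimum placed near the learned value), since the actual error surface comes from the classifier runs.

```python
import numpy as np

def pso_1d(objective, lo, hi, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization for a single parameter."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n_particles)            # particle positions
    v = np.zeros(n_particles)                       # particle velocities
    pbest = x.copy()                                # personal bests
    pbest_f = np.array([objective(xi) for xi in x])
    gbest = pbest[np.argmin(pbest_f)]               # global best
    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and attraction weights
    for _ in range(iters):
        r1 = rng.random(n_particles)
        r2 = rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(xi) for xi in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    return float(gbest)

# Synthetic stand-in for the error rate as a function of a; its minimum is
# placed at a = 3.0 only to mimic the learned value 3.0437 (an assumption)
error_rate = lambda a: (a - 3.0) ** 2 + 0.1
a_star = pso_1d(error_rate, lo=0.0, hi=10.0)
```

In practice the objective would be the misclassification rate of the classifier above evaluated at each candidate a.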
5.2.1 FKP Database
The learned value of a is 3.0437. After learning this weight through PSO, we apply it in the classifier and find that the recognition rate is 90.61%.
Table 5.4 Results of FKP database using the PSO algorithm

Value of a | Recognition rate without PSO | Recognition rate with PSO | Increase in accuracy
3.0437     | 89.09%                       | 90.61%                    | 1.52%
6. Conclusion
We have investigated finger-knuckle- and ear-based authentication using the Hanman classifier. This classifier, derived from t-norms and the entropy function, performs fairly well on both the knuckle and the ear databases. Various t-norms due to Hamacher, Einstein product, Yager, Schweizer and Sklar, and Frank have been explored. This study aims at tapping the potential of t-norms for classification. The approach renders very good performance and is computationally fast.
The entropy neural network is built on the classifier by incorporating an evolutionary learning technique. The evolutionary learning technique, particle swarm optimization, is utilized to learn the parameters so as to improve the machine learning. The experimental results confirm the improvement in classification accuracy obtained by optimally learning the parameters using PSO. The Frank t-norm shows 89.09% on the knuckle database, 89% on the IRIS flower database and 87.2% on the ear database. The experimental results suggest that the Frank t-norm outperforms all the other t-norms.