I. INTRODUCTION
II. RELATED WORK
III. METHODOLOGIES
The methodology comprises four steps:
a) Skin Extraction
b) Feature Extraction
c) Training
d) Recognition
In the training phase, a person provides sample images of his hand gestures so that a reference template model, or database, can be built. The training images are 100x100 pixels; we collected hand gestures for 15 Bangla letters from 22 different people, for a total of 330 sample images. We then detect the skin in each image, extract the features of these 330 images, and save them into a database.
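As a rough sketch of this database-building step (the flattened-pixel feature format and the normalization are our assumptions; the actual feature extraction used is described in the model sections below):

```python
import numpy as np

def build_database(images, labels):
    """Flatten each 100x100 grayscale sample into a normalized
    feature vector and pair it with its letter label."""
    feats = np.stack([img.astype(np.float64).ravel() / 255.0 for img in images])
    return feats, np.asarray(labels)

# 330 samples (22 people x 15 letters) -> a 330 x 10000 feature matrix
images = [np.zeros((100, 100), dtype=np.uint8) for _ in range(330)]
labels = [i % 15 for i in range(330)]
X, y = build_database(images, labels)
```

The resulting matrix and labels could then be persisted, for example with np.savez, as the reference database.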
Y = 0.299R + 0.587G + 0.114B        (5)
Cb = 128 + (-0.169R - 0.331G + 0.5B)        (6)
Cr = 128 + (0.5R - 0.419G - 0.081B)        (7)
Experimental results show that skin pixels have a Cr value of about 100 and a Cb value of about 150. Each pixel is classified as skin or non-skin using Eq. (8).
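Since Eq. (8) is not reproduced above, the following sketch assumes a simple distance test around the reported Cb and Cr skin values; the tolerance of 25 is a placeholder, not the paper's actual threshold:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an HxWx3 uint8 RGB image to Y, Cb, Cr planes (Eqs. 5-7)."""
    r = img[..., 0].astype(np.float64)
    g = img[..., 1].astype(np.float64)
    b = img[..., 2].astype(np.float64)
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 + (-0.169 * r - 0.331 * g + 0.5 * b)
    cr = 128.0 + (0.5 * r - 0.419 * g - 0.081 * b)
    return y, cb, cr

def skin_mask(img, cb_center=150.0, cr_center=100.0, tol=25.0):
    """Flag pixels whose Cb/Cr values lie near the reported skin
    values; the tolerance window is an assumption standing in for
    the classification rule of Eq. (8)."""
    _, cb, cr = rgb_to_ycbcr(img)
    return (np.abs(cb - cb_center) <= tol) & (np.abs(cr - cr_center) <= tol)
```

For example, a pixel with RGB (0, 52, 78) lands near Cb 150 and Cr 100 and is flagged as skin, while pure red is rejected.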
Fig. 1. Overview of the whole procedure of this system, from skin extraction and feature extraction through training to decision making.
Fig. 2. Bangla sign language gestures for Bangla letters (a)-(f).
After converting to a gray-scale image, we can see that our process has successfully extracted the hand gesture from the image. If we convert the background image to gray scale, we can see that there is no skin pixel, and no sign of a hand, in that image: the output image is plain black.
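This check can be expressed as a masking step; a minimal sketch (the function name and the masking form are ours, not the paper's code):

```python
import numpy as np

def extract_hand(gray, mask):
    """Zero out every pixel not flagged as skin; a frame with no
    skin therefore comes out as a plain black image."""
    return np.where(mask, gray, 0)

# a background-only frame with no skin pixels yields plain black
gray = np.full((4, 4), 120, dtype=np.uint8)
assert not extract_hand(gray, np.zeros((4, 4), bool)).any()
```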
XII. RESULT
We collected all the data ourselves, from 22 people, for 15 Bangla alphabets. For each symbol or alphabet, about 17 images were used for training and at least 5 for testing. In total we collected about 330 sets of data, of which 200 are used for training and 65 for testing. A table describing the results of the different methods on these data is given below.
Symbol   Training images   Testing images   Raw Data %   PCA %   LDA %
-        17                -                68           37      100
-        17                -                66           33      100
-        17                -                54           34      100
-        17                -                59           39      100
-        17                -                72           28      100
-        17                -                57           30      100
-        17                -                68           35      100
-        17                -                67           33      100
-        17                -                63           32      100
-        17                -                68           31      100
-        17                -                67           34      100
-        17                -                60           37      100
-        17                -                70           39      100
-        17                -                69           37      100
-        17                -                62           40      100
B. Model 2
This model uses principal component analysis (PCA) to extract the features used to train the system. After training, the validation and testing steps are run as described below. As the table shows, the success rate of the PCA process is not high. Here MSE is the mean squared error and %E is the recognition error.

PCA features, hidden nodes: 50
                          1        2        3        4        Mean
Training (60%-200)   MSE  0.12     0.002    0.109    0.139    0.0618
                     %E   69.44    57.14    70.63    -        36.546
Validation (20%-65)  MSE  0.152    0.141    0.152    0.154    0.1471
                     %E   71.43    67.86    80.95    75       73.689
Testing (20%-65)     MSE  0.15     0.143    0.151    0.156    0.1481
                     %E   72.62    71.43    77.38    72.62    73.572
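The PCA step itself is not spelled out in the text. A standard formulation, which may differ in detail from the authors' implementation, centers the data and keeps the top singular directions; the number of components is an assumed parameter:

```python
import numpy as np

def pca_fit(X, n_components):
    """X: (n_samples, n_features). Returns the mean and the top
    principal directions, computed via SVD of the centered data."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # rows of Vt are the principal directions, ordered by variance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def pca_transform(X, mean, components):
    """Project samples onto the learned principal directions."""
    return (X - mean) @ components.T

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))          # stand-in for 330 x 10000 image vectors
mean, comps = pca_fit(X, n_components=10)
Z = pca_transform(X, mean, comps)      # reduced features fed to the network
```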
A. Model 1
Here we take the 330 raw images, convert them to gray scale, and feed them directly to the neural network for training and testing: 200 sets of data are used for training, 65 for validation, and 65 for testing. The results for the raw data are given below.

Original pics, hidden nodes: 50
                          1        2        3        4        Mean
Training (60%-200)   MSE  0.002    -        -        -        0.0054
                     %E   3.97     -        -        -        2.183
Validation (20%-65)  MSE  0.098    0.086    0.11     -        0.0904
                     %E   38.09    34.52    42.86    -        35.236
Testing (20%-65)     MSE  0.096    0.086    0.074    -        0.0816
                     %E   40.48    35.75    29.75    -        32.855
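The paper specifies only 50 hidden nodes and an MSE criterion, so the following one-hidden-layer network is a sketch under those assumptions (the activation function, learning rate, and epoch count are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, Y, hidden=50, lr=0.1, epochs=500):
    """One-hidden-layer network trained by gradient descent on the
    mean squared error. X: (n, d) inputs; Y: (n, k) one-hot targets."""
    n, d = X.shape
    k = Y.shape[1]
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, k)); b2 = np.zeros(k)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # hidden activations
        P = H @ W2 + b2                   # linear outputs
        G = 2.0 * (P - Y) / (n * k)       # dMSE/dP
        GH = (G @ W2.T) * (1.0 - H ** 2)  # backprop through tanh
        W2 -= lr * (H.T @ G); b2 -= lr * G.sum(0)
        W1 -= lr * (X.T @ GH); b1 -= lr * GH.sum(0)
    return W1, b1, W2, b2

def evaluate(X, Y, params):
    """Return (MSE, %E) in the same sense as the tables above."""
    W1, b1, W2, b2 = params
    P = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((P - Y) ** 2), 100.0 * np.mean(P.argmax(1) != Y.argmax(1))

# toy stand-in data: 3 well-separated classes in 5 dimensions
n_per, k, d = 10, 3, 5
X = np.concatenate([rng.normal(3.0 * np.eye(k, d)[c], 0.3, (n_per, d)) for c in range(k)])
Y = np.repeat(np.eye(k), n_per, axis=0)
params = train_mlp(X, Y)
mse, err = evaluate(X, Y, params)
```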
C. Model 3
In the third model the LDA algorithm is used for feature extraction. The training, validation, and testing results for LDA are summarized below.

LDA features, hidden nodes: 50 (Training: 60%-200; Validation: 20%-65; Testing: 20%-65; MSE and %E per run, with the mean)
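The LDA step can likewise be sketched with the standard Fisher scatter-matrix formulation; the small ridge term is our addition for numerical stability and is not from the paper:

```python
import numpy as np

def lda_fit(X, y, n_components):
    """Fisher LDA: directions maximizing between-class scatter over
    within-class scatter. X: (n, d); y: integer class labels."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d)); Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)             # within-class scatter
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)           # between-class scatter
    # small ridge keeps Sw invertible when d is large relative to n
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_components]]

def lda_transform(X, W):
    return X @ W

# two well-separated classes; the learned direction separates them
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal([0.0, 0.0], 0.2, (20, 2)),
                    rng.normal([4.0, 0.0], 0.2, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
W = lda_fit(X, y, 1)
Z = lda_transform(X, W)
```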
XIV. CONCLUSION
The sole purpose of this paper is to detect and recognize sign language using different techniques and to see which is more accurate; in this case we have applied them to Bangla sign language. Very little work has been done specifically on Bangla sign language, so our goal was to find a better way to detect and recognize it.
Fig. 12. Success rate comparison of three models.