
Web Site: www.ijettcs.org    Email: editor@ijettcs.org, editorijettcs@gmail.com
Volume 2, Issue 3, May-June 2013    ISSN 2278-6856

Recognition of Alphabets & Words using Hand Gestures by ASL


Prof. S. S. Jadhav1, Mrs. A. A. Bamanikar2

1Asst. Professor, Department of Information Technology, College of Engineering, Bharati Vidyapeeth Deemed University
2Student, Department of Information Technology, College of Engineering, Bharati Vidyapeeth Deemed University

Abstract: The Hand Gesture Recognition System project designs and builds a man-machine interface that uses a video camera to interpret the American one-handed sign language alphabet gestures. The advantage is that the user can not only communicate from a distance, but also needs no physical contact with the computer. The amount of computation required to process hand gestures is much greater than that required for mechanical devices such as a keyboard or mouse; however, standard desktop computers with a webcam are now fast enough to make this project feasible. Gesture recognition is an area of active research in computer vision. Body language is an important way of communication among humans, adding emphasis to voice messages or even being a complete message by itself; thus, hand gesture recognition systems can improve human-machine interaction. The way humans interact with computers is constantly evolving, with the general purpose of increasing the efficiency and effectiveness with which interactive tasks are completed. Real-time, static hand gesture recognition affords users the ability to interact with computers in more natural and intuitive ways.

Keywords: Gesture, Interface, HCI, ASL

1. INTRODUCTION
This survey presents an overview of the challenging field of static hand gesture recognition, which mainly consists of the recognition of well-defined signs based on a posture of the hand. Since human beings differ in size and shape, the most challenging problems are the segmentation and the correct classification of the information gathered from the input data, captured by one or more cameras. The aim of this report is to show which techniques have been successfully tested and used to solve these problems, yielding a robust and reliable static hand gesture recognition system. With the development of information technology in our society, we can expect computer systems to be embedded into our environment to an ever larger extent. These environments will impose needs for new types of human-computer interaction, with interfaces that are natural and easy to use. The user interface (UI) of the personal computer has evolved from a text-based command line to a graphical interface with keyboard and mouse inputs; however, these devices are inconvenient and unnatural. The use of hand gestures provides an attractive alternative to such cumbersome interface devices for human-computer interaction (HCI).

Users generally use hand gestures to express their feelings and convey their thoughts. In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. Vision has the potential of carrying a wealth of information in a non-intrusive manner and at a low cost; therefore it constitutes a very attractive sensing modality for hand gesture recognition. Recent research [1, 2] in computer vision has established the importance of gesture recognition systems for human-computer interaction. The primary goal of gesture recognition research is to create a system which can identify specific human gestures and use them to convey information or to control devices. A gesture may be defined as a physical movement of the hands, arms, face, or body with the intent to convey information or meaning. Gesture recognition, then, consists not only of the tracking of human movement, but also of the interpretation of that movement as semantically meaningful commands. The design and development of real-time human-machine interaction by gesture recognition still requires improvement in detecting hand motion information accurately and in matching hand-movement patterns efficiently; in particular, the same gesture varies when performed by different people.

2. AMERICAN SIGN LANGUAGE

American Sign Language is the language of choice for most deaf people in the United States. It is part of deaf culture and includes its own system of puns, inside jokes, etc. However, ASL is only one of the many sign languages of the world. Just as an English speaker would have trouble understanding someone speaking Japanese, a speaker of ASL would have trouble understanding the sign language of Sweden. ASL also has its own grammar, which differs from that of English. ASL consists of approximately 6000 gestures for common words, with finger spelling used to communicate obscure words or proper nouns. Finger spelling uses one hand and 26 gestures to communicate the 26 letters of the alphabet. The signs can be seen in Fig. 2.1.

3. SYSTEM ARCHITECTURE

This system architecture is drawn from the ASL HGRS. It works for both static and dynamic hand patterns. The camera captures the image and passes it as input to the system. The system then calibrates that image for its measurements. The hand tracking algorithm tracks the hand for static or dynamic gestures, and the output is generated accordingly.

4. MATH MODULE
Suppose we are given a set of training patterns. Each pattern is represented by a row matrix G of 1024 pixels, and each of its 16 sub-patterns by a row matrix P_j of 64 pixels, 1 <= j <= 16:

P_j = [P_{j,1}, P_{j,2}, ..., P_{j,64}],   G = [P_1, P_2, ..., P_{16}],

where P_{j,k} is the normalized gray level of the corresponding pixel, i.e. P_{j,k} ∈ {-1, 1}, 1 <= k <= 64, 1 <= j <= 16, with 1 representing black and -1 representing white. For convenience, the input to the sampling sub-pattern node N_{i,j}(dx, dy) is written P_j(dx, dy), and P_j(0, 0) may be abbreviated as P_j. For the sub-pattern node N_{i,j}, its node weight W_{i,j} is defined to be W_{i,j} = [W_{i,j,1}, W_{i,j,2}, ..., W_{i,j,64}], where each W_{i,j,k} ∈ Z, 1 <= k <= 64, is an integer. Suppose an input training pattern G of class C is presented to the network. Each sampling sub-pattern node N_{i,j}(dx, dy) computes its output Out_{i,j}(dx, dy) by

Out_{i,j}(dx, dy) = W_{i,j} · P_j^T(dx, dy) / (a + |W_{i,j}|),

where |W_{i,j}| denotes the magnitude of W_{i,j}, the superscript T stands for matrix transposition, and a > 0. Since each element of P_j(dx, dy) is either 1 or -1, the following relationship holds:

-|W_{i,j}| <= W_{i,j} · P_j^T(dx, dy) <= |W_{i,j}|.

The more P_j(dx, dy) resembles W_{i,j}, the closer Out_{i,j}(dx, dy) is to 1; conversely, the more P_j(dx, dy) differs from W_{i,j}, the closer Out_{i,j}(dx, dy) is to -1. Each sub-pattern node N_{i,j} then takes the maximum of Out_{i,j}(dx, dy) over all its inputs, and the priority index Pr_i for the pattern node N_i is defined from the outputs of its sub-pattern nodes. Using priority indices makes the training procedure more efficient: the pattern nodes are stored in decreasing order of priority and placed in a priority list. If no existing sampling sub-pattern node N_{k,j} responds sufficiently when the input pattern is presented, the training pattern should not be combined into any existing pattern subnet; in this case, a new pattern subnet is created for storing this training pattern.
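As a small illustration of the node output formula (our own sketch in C#, not code from the paper; treating |W_{i,j}| as the 1-norm of the weight vector is an assumption consistent with the bound above):

using System;

class MathModuleSketch
{
    // Out = (W · P^T) / (a + |W|), with |W| taken as the 1-norm so that
    // -1 < Out < 1 whenever every element of P is +1 or -1 and a > 0.
    static double NodeOutput(int[] w, int[] p, double a)
    {
        int dot = 0, norm = 0;
        for (int k = 0; k < w.Length; k++)
        {
            dot += w[k] * p[k];       // W · P^T
            norm += Math.Abs(w[k]);   // |W|
        }
        return dot / (a + norm);
    }

    static void Main()
    {
        int[] w = { 2, -1, 3, 1 };   // toy weight vector (real nodes use 64 entries)
        int[] p = { 1, -1, 1, 1 };   // toy sub-pattern with pixels in {-1, +1}
        Console.WriteLine(NodeOutput(w, p, 0.5));   // ~0.93: P closely resembles W
    }
}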

5. IMPLEMENTATION
The Hand Gesture Recognition System (HGRS) can run on any Windows platform. The prerequisite is that web camera drivers are installed on the computer. When the user runs the application, he has to choose the web camera through which he will give gesture input to the application. There are some conditions the user must follow: the distance between the web camera and the hand should not be more than 1 ft; the background should not be complex; inputs should be provided continuously at particular time intervals (3 s); and the standard ASL gesture format must be followed. The standard database is stored in an SQL database server. When the user provides input, image processing functions such as background removal, gray-scale conversion, edge detection, and blob detection are performed on the input frame. The image is then compared with the standard database images using the exhaustive template matching function provided by the AForge.NET framework. If the input image matches any image in the database, the recognized character is displayed in a text box. This recognized character is then converted to audio using SAPI. The user must provide the next input after a particular time interval (3 s). The user can provide many characters; when he clicks the STOP button the recognition process stops, and he can continue whenever he wants.
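A minimal sketch of the matching-and-speech step described above, assuming AForge.NET's ExhaustiveTemplateMatching class and the System.Speech wrapper over SAPI; the Recognize method, dbImages array, and labels array are our own placeholders, not the project's actual code:

using System;
using System.Drawing;
using System.Speech.Synthesis;
using AForge.Imaging;

class RecognitionSketch
{
    // Compare a preprocessed input frame against the stored gesture templates
    // and display/speak the best-matching character.
    static void Recognize(Bitmap inputFrame, Bitmap[] dbImages, char[] labels)
    {
        // Ignore candidates whose similarity is below 90%.
        ExhaustiveTemplateMatching tm = new ExhaustiveTemplateMatching(0.9f);
        float best = 0f;
        int bestIndex = -1;
        for (int i = 0; i < dbImages.Length; i++)
        {
            TemplateMatch[] matches = tm.ProcessImage(inputFrame, dbImages[i]);
            if (matches.Length > 0 && matches[0].Similarity > best)
            {
                best = matches[0].Similarity;
                bestIndex = i;
            }
        }
        if (bestIndex >= 0)
        {
            Console.WriteLine(labels[bestIndex]);   // shown in the text box
            using (SpeechSynthesizer synth = new SpeechSynthesizer())
                synth.Speak(labels[bestIndex].ToString());   // audio via SAPI
        }
    }
}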

6. TEMPLATE MATCHING ALGORITHM


The template matching method is used as a simple method to track objects or patterns that we want to search for in the input image data. It recognizes the segment with the highest correlation as the target. The concept of this method is similar to that of the Strongest Neighbor Filter (SNF), which regards the measurement with the highest intensity among all measurements as target-originated. The SNF assumes that the strongest neighbor (SN) measurement in the validation gate originates from the target of interest, and it uses the SN in the update step of a standard Kalman filter (SKF). The SNF is widely used along with the nearest neighbor filter (NNF) because of its computational simplicity, despite the inconsistency of treating the SN as if it were the true target. The Probabilistic Strongest Neighbor Filter for m validated measurements (PSNF-m) accounts for the probability that the SN in the validation gate originates from the target, whereas the SNF assumes at all times that the SN measurement is target-originated. The PSNF-m is known to outperform the SNF at the cost of an increased computational load. Reference [4] suggests an image tracking algorithm that combines template matching and the PSNF-m to estimate the states of a tracked target.
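To make the "highest correlation wins" idea concrete, the following is a conceptual sketch of template matching by normalized cross-correlation over gray-scale pixel arrays; it illustrates the principle only and is not the AForge.NET implementation the project actually uses:

using System;

class CorrelationSketch
{
    // Slide the template over the image and return the offset whose
    // normalized cross-correlation with the template is highest.
    static void BestMatch(byte[,] image, byte[,] tmpl, out int bestX, out int bestY)
    {
        int ih = image.GetLength(0), iw = image.GetLength(1);
        int th = tmpl.GetLength(0), tw = tmpl.GetLength(1);
        double best = double.MinValue;
        bestX = 0; bestY = 0;
        for (int y = 0; y <= ih - th; y++)
        {
            for (int x = 0; x <= iw - tw; x++)
            {
                double dot = 0, ni = 0, nt = 0;
                for (int r = 0; r < th; r++)
                {
                    for (int c = 0; c < tw; c++)
                    {
                        double a = image[y + r, x + c], b = tmpl[r, c];
                        dot += a * b; ni += a * a; nt += b * b;
                    }
                }
                // Small epsilon guards against division by zero on blank regions.
                double corr = dot / (Math.Sqrt(ni * nt) + 1e-9);
                if (corr > best) { best = corr; bestX = x; bestY = y; }
            }
        }
    }
}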


7. IMPORTANT MODULES

Image Processing

Fig. 7.1: Image Processing Flow Graph

Input Image
The image captured through any one of the plugged-in cameras (i.e. the integrated camera or an external web camera) is used as the input image to the HGRS system. It is captured by the following image processing function:

System.Drawing.Bitmap image = (Bitmap)Bitmap.FromFile(@"D:\HGRS_Project\HGRS\Db_Images\IP_Image" + i + ".jpg");

Gray Scale Image
The input image captured through the above function is then converted into light black-and-white shades (i.e. gray-scale form) to remove the effect of color variation. This is done through the following image processing function:

// Convert to gray scale using the BT.709 channel weights
Grayscale gc = new Grayscale(0.2125, 0.7154, 0.0721);
using (Bitmap GSampleImage = gc.Apply(image))
{
    // Saving the gray-scale image - provides good quality
    GSampleImage.Save(@"D:\HGRS_Project\HGRS\Db_Images\Gray_Image" + i + ".jpg");
}

Canny Image
In this process the edge points of all the objects in the captured image are mapped through their shapes and converted into a negative image pattern. That image is then input to the bb_image function. Canny image conversion uses the following function:

// Detecting image edges and saving the result
CannyEdgeDetector CED = new CannyEdgeDetector(0, 70);
CED.ApplyInPlace(GSampleImage);
GSampleImage.Save(@"D:\HGRS_Project\HGRS\Db_Images\Cany_Image" + i + ".jpg");
System.Drawing.Bitmap im = (Bitmap)Bitmap.FromFile(@"D:\HGRS_Project\HGRS\Db_Images\Cany_Image" + i + ".jpg");

BB Image (Big Blob Image)
This function uses the Canny image to calibrate the maximum surface area in the frame. That surface area is then cropped from the frame and stored as bb_image (the final big blob image) in the database. This bb_image is also matched against the patterns of the images stored in the database:

ExtractBiggestBlob bb = new ExtractBiggestBlob();
ip_im = (Bitmap)bb.Apply(im);
// Resize the blob to the 60x80 template size
ResizeBicubic filter = new ResizeBicubic(60, 80);
ip_im = filter.Apply(ip_im);
ip_im.Save(@"D:\HGRS_Project\HGRS\Db_Images\bb_Image" + i + ".jpg");
i++;
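Putting the four stages of Fig. 7.1 together, the preprocessing chain can be sketched as a single helper; this simply composes the AForge.NET calls shown above, and the Preprocess method name is our own:

using System.Drawing;
using AForge.Imaging.Filters;

class PipelineSketch
{
    // Gray-scale -> Canny edges -> biggest blob -> resize to the 60x80
    // template size used for database matching.
    static Bitmap Preprocess(Bitmap input)
    {
        Bitmap gray = new Grayscale(0.2125, 0.7154, 0.0721).Apply(input);
        new CannyEdgeDetector(0, 70).ApplyInPlace(gray);
        Bitmap blob = new ExtractBiggestBlob().Apply(gray);
        return new ResizeBicubic(60, 80).Apply(blob);
    }
}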

8. RESULTS


REFERENCES
[1] T. Messer, "Static Hand Gesture Recognition," Department of Informatics, University of Fribourg.
[2] N. D. Binh and T. Ejima, "Hand Gesture Recognition Using Fuzzy Neural Network," GVIP 05 Conference, CICC, Cairo, Egypt, 19-21 December 2005.
[3] A. Malima, E. Özgür, and M. Çetin, "A Fast Algorithm for Vision-Based Hand Gesture Recognition for Robot Control."
[4] J. S. Bae and T. L. Song, "Image Tracking Algorithm using Template Matching and PSNF-m," International Journal of Control, Automation, and Systems, vol. 6, no. 3, pp. 413-423, June 2008.
[5] Ravikiran J, K. Mahesh, S. Mahishi, Dheeraj R, Sudheender S, and N. V. Pujari, "Finger Detection for Sign Language Recognition," Proceedings of the International MultiConference of Engineers and Computer Scientists 2009, Vol. I, IMECS 2009, Hong Kong, 18-20 March 2009.
[6] C. R. P. Dionisio and R. M. Cesar Jr., "A Project for Hand Gesture Recognition."
[7] J. MacLean, R. Herpers, C. Pantofaru, L. Wood, K. Derpanis, D. Topalovic, and J. Tsotsos, "Fast Hand Gesture Recognition for Real-Time Teleconferencing Applications."
[8] S. J. Moon et al., "A Real Time Hand Gesture Recognition Technique by using Embedded Device," International Journal of Advanced Engineering Sciences and Technologies (IJAEST), vol. 2, no. 1, pp. 043-046.
[9] I. Steinberg, T. M. London, and D. Di Castro, "Hand Gesture Recognition in Images and Video."
[10] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Prentice-Hall, 2002.
[11] M. Elmezain, A. Al-Hamadi, and B. Michaelis, "Hand Gesture Recognition Based on Combined Feature Extraction," World Academy of Science, Engineering and Technology, vol. 60, 2009. Institute for Electronics, Signal Processing and Communications (IESK), Otto-von-Guericke University Magdeburg.
[12] V. Ramesh, "A Parallel Algorithm for Real-Time Hand Gesture Recognition," Hamilton High School, Chandler, Arizona, 2010.
[13] S. Mitra and T. Acharya, "Gesture Recognition: A Survey," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 2007.

9. CONCLUSION
We developed a gesture recognition system that proved to be robust for ASL gestures. The system is fully automatic and works in real time against a static background. The advantage of the system lies in its ease of use: users do not need to wear a glove, nor is a uniform background required. Experiments on a single-hand database have been carried out.
