
Review of Related Literature

Real Time Recognition of Filipino Sign Language Using Bayesian Network (Burkett, Diaz, Geroda, & Mallari, 2015)

The study of Burkett, Diaz, Geroda, and Mallari (2015) investigates whether there is a significant difference in the accuracy of recognizing sign language using a Bayesian Network with and without facial expression. The main purpose of the proponents was to answer the question of how facial expression affects the accuracy of sign language recognition. The three parameters used in the study are facial expression, hand position, and hand shape. The proponents used a regular web camera for the experiment and colored gloves for better detection of the hand. However, this approach lessens ease of use for the user from the standpoint of the Human-Computer Interaction discipline.

Another shortcoming of the study is that it focuses only on (1) single-handed signs and (2) static signs with no movement of the hand and face (Burkett et al., 2015), whereas most of Filipino Sign Language is not composed of such signs. Their study is about developing a standalone system that translates Filipino Sign Language into text and voice. The process consists of Facial Expression Recognition, where the proponents used a fast and robust algorithm, the Viola-Jones algorithm, to look for features that are similar to the human face. However, the study disregards the mouth, which is also a vital part of recognizing a sign, and focuses only on the eyes and the eyebrows. Sign Recognition is also part of the process, where the hand is detected by means of contour points on the detected colored gloves and a convex hull that surrounds the contour points.
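As a rough illustration of this pipeline (not the study's actual implementation), the sketch below combines OpenCV's bundled Haar cascade, which implements Viola-Jones face detection, with color-mask, contour, and convex hull steps for the gloved hand. The HSV glove range and camera index are assumed values.

import cv2
import numpy as np

# Viola-Jones face detection using OpenCV's bundled Haar cascade.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # regular web camera, as in the study
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Face detection: scan the grayscale frame for face-like features.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)

    # Hand detection: segment the colored glove (an assumed red range),
    # take its contour points, and wrap them in a convex hull.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)   # largest blob = glove
        hull = cv2.convexHull(hand)                 # hull around contour points
        cv2.drawContours(frame, [hull], -1, (0, 255, 0), 2)

    cv2.imshow("detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()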

After the experiment was conducted, the researchers concluded that sign language recognition with facial expression has higher accuracy than recognition without facial expression (Burkett et al., 2015). The results gathered were 77% accuracy for recognizing sign language without facial expression and 96.10% accuracy for recognizing sign language with facial expression. The data gathered show that there is a significant difference between sign language recognition with and without facial expression.


Filipino Sign Language to Speech: A Translator Using Neural Networks (Basa, Dolon, Geronimo, & Miguel, 2013)

The study of Basa, Dolon, Geronimo, and Miguel (2013) aims to measure the accuracy of recognizing Filipino Sign Language in terms of the illumination of the environment and the distance of the user. The study also aims to gauge the reaction of Filipino Sign Language teachers regarding the functionality and user-friendliness of the developed system. The proponents used Neural Networks to identify the pattern of the input. Figure ___ shows how the network is adjusted based on a comparison of the output and the target until the network output matches the target. The proponents used light bulbs with different ranges of lumens to obtain the optimal lighting for the system to capture the hand gesture correctly. After the experiment was conducted, the researchers concluded that the higher the lumens of the light, the more accurately the system can detect the hand gesture. The researchers also concluded that 4 feet is the optimal distance from the user to the camera for better accuracy when using a light bulb of 800-1000 lumens (Basa, Dolon, Geronimo, & Miguel, 2013).
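The loop described above, adjusting the network until its output matches the target, is a standard supervised training loop. Below is a minimal single-layer sketch of that idea in NumPy; the data, dimensions, learning rate, and stopping tolerance are invented placeholders, not values from the study.

import numpy as np

# Placeholder data standing in for hand-gesture features and labels.
rng = np.random.default_rng(0)
X = rng.random((100, 64))            # 100 samples, 64 features each
T = rng.integers(0, 2, (100, 1))     # binary targets

W = rng.normal(0, 0.1, (64, 1))      # single-layer weights
b = np.zeros((1, 1))

for epoch in range(1000):
    Y = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # forward pass (sigmoid)
    error = Y - T                            # compare output vs. target
    # Adjust the weights in proportion to the error (gradient descent).
    W -= 0.1 * X.T @ (error * Y * (1 - Y)) / len(X)
    b -= 0.1 * np.mean(error * Y * (1 - Y), keepdims=True)
    if np.mean(np.abs(error)) < 0.05:        # stop once output matches target
        break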


Real-Time Filipino Sign Language to Text Translator using Artificial Neural Networks and Viola-Jones Algorithm (Canave & Pelacio, 2016)

The study of Canave and Pelacio (2016) is about developing a system that helps signing people communicate with non-signing people. The system translates Filipino Sign Language to text using Artificial Neural Networks for recognizing hand shape and the Viola-Jones algorithm for face detection, because some signs incorporate a region of the face. The proponents categorized the 50 words that the system recognizes into One Handed Static Sign Language, One Handed Dynamic Sign Language, and Two Handed Static Sign Language. Using the experimental research method, the proponents evaluated and computed the accuracy of each word by conducting 25 trials per word with fixed variables: 400 lumens for the illumination of the environment and a closed room with no red or yellow object in the background (Canave & Pelacio, 2016).
The proponents used a pair of colored gloves to identify the left and right hand: a red glove for the right and a yellow glove for the left. After the experiment was conducted, the researchers concluded that the One Handed Static Sign Language category got 98.59% accuracy, the One Handed Dynamic Sign Language category got 68.57% accuracy, while the Two Handed Static Sign Language category got 100% accuracy. The proponents stated that the use of ANN and the Viola-Jones algorithm in the study is effective in recognizing sign languages and determining their meaning. However, the use of gloves in the study makes it less natural for the deaf to use, because they do not wear gloves when they communicate. The proponents recommended testing the system without hand gloves or any material worn on the hands; they also recommended expanding the system and focusing on palm orientation and non-manual signs, which also have a vital role in recognizing sign language.
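A minimal sketch of how the red/yellow glove scheme above could separate the two hands by color in HSV space follows; the threshold ranges and input file are illustrative assumptions, not the study's calibrated values.

import cv2
import numpy as np

def find_glove(frame_hsv, lower, upper):
    """Return the largest contour inside the given HSV color range."""
    mask = cv2.inRange(frame_hsv, lower, upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None

frame = cv2.imread("frame.png")                  # placeholder input frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Red glove marks the right hand, yellow glove the left (assumed ranges).
right_hand = find_glove(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
left_hand = find_glove(hsv, np.array([20, 120, 70]), np.array([35, 255, 255]))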


Filipino Sign Language Translator using Artificial Neural Network and Depth Data (Boregon & Santos, 2018)

In the study of Boregon and Santos (2018), a system was developed that translates Filipino Sign Language into text using an Artificial Neural Network and depth data. The researchers aim to help deaf people convey their message to non-deaf people. They used an Artificial Neural Network to recognize the hand shape given by the depth data, and a Kinect camera, which produces the depth data without the use of any material worn on the hand. The researchers also used the OpenCV library to extract the hands from the camera feed. After extraction, the hands are classified using the Artificial Neural Network. The system can recognize two-hand Filipino Sign Language with and without movement, covering 26 words and 28 words respectively. The accuracy was evaluated with fixed variables such as the environment, which was a closed room with only one person standing in front of the camera (Boregon & Santos, 2018). The focus of the study is words that use the upper to torso region when delivering sign language. After the experiment was conducted, the researchers gathered results of 86.97% and 86.92% accuracy in recognizing two-hand Filipino Sign Language with and without movement, respectively. However, the study does not cover palm orientation and non-manual signs, which are also vital factors in recognizing Filipino Sign Language.
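A minimal sketch of extracting hands from a depth frame by thresholding the nearest depth band, in the spirit of the Kinect-plus-OpenCV pipeline described above; the depth range and input source are assumptions, not the study's parameters.

import cv2
import numpy as np

# Placeholder 16-bit depth frame (millimeters), such as a Kinect sensor
# produces; here loaded from a file for illustration.
depth = cv2.imread("depth_frame.png", cv2.IMREAD_UNCHANGED)

# Keep only pixels in an assumed "hands" depth band in front of the body.
NEAR_MM, FAR_MM = 500, 900
mask = cv2.inRange(depth, NEAR_MM, FAR_MM)

# Clean the mask and take the two largest blobs as the two hands.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
hands = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
# Each hand region would then be resized and fed to the ANN classifier.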

Continuous Indian Sign Language Gesture Recognition and Sentence Formation (Baranwal, Nandi, & Tripathi, 2015)

The study of Baranwal, Nandi, and Tripathi (2015) focuses on the recognition of Continuous Indian Sign Language. They developed a system that recognizes Continuous Indian Sign Language in which both hands are used for performing gestures. Recognizing individual sign language gestures within continuous gestures is a very challenging research issue (Baranwal, Nandi, & Tripathi, 2015). The researchers solved this problem by using a Gradient Based Key Frame Extraction Method. Key frames are helpful for splitting continuous sign language gestures into individual signs as well as for removing redundant and uninformative frames. After splitting the gestures into individual signs, features of the pre-processed gestures are extracted using an Orientation Histogram with Principal Component Analysis. To find the most efficient and effective classifier for recognizing the signs, the researchers tested Euclidean Distance, Correlation, Manhattan Distance, City Block Distance, Mahalanobis Distance, Chessboard Distance, and Cosine Distance.
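A minimal sketch of nearest-neighbor classification under several of these distance measures, using SciPy; the feature vectors here are invented placeholders standing in for the orientation-histogram-plus-PCA features.

import numpy as np
from scipy.spatial import distance

# Placeholder gallery of known signs: PCA-reduced feature vectors
# (invented data) with their labels.
rng = np.random.default_rng(1)
gallery = rng.random((50, 20))                   # 50 signs, 20 features
labels = [f"sign_{i}" for i in range(50)]

metrics = {
    "euclidean": distance.euclidean,
    "cityblock": distance.cityblock,             # a.k.a. Manhattan
    "chebyshev": distance.chebyshev,             # a.k.a. chessboard
    "cosine": distance.cosine,
    "correlation": distance.correlation,
}

def classify(query, metric_fn):
    """Label of the gallery vector nearest to the query under metric_fn."""
    dists = [metric_fn(query, g) for g in gallery]
    return labels[int(np.argmin(dists))]

query = rng.random(20)
for name, fn in metrics.items():
    print(name, classify(query, fn))

# Mahalanobis needs the inverse covariance of the gallery features.
VI = np.linalg.inv(np.cov(gallery.T))
print("mahalanobis", classify(query, lambda u, v: distance.mahalanobis(u, v, VI)))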

Experiments were conducted on 10 types of sentences, each sentence containing two to four gestures, made up of static and dynamic gestures. The researchers used an ordinary webcam without any tool worn on the hand. The proposed framework for Continuous Indian Sign Language gestures gives satisfactory performance with features obtained using the orientation histogram with PCA, where both hands are used for performing gestures (Baranwal et al., 2015). Experimental results show that Euclidean Distance and Correlation give satisfactory results compared to the other classifiers that were used.

However, the study doesn’t focus on palm orientation which is also a key factor in

recognizing a sign language. In the study, Key Frame Extraction is the foremost step

which helps the proponents for extracting individual signs a continuous gesture has. Also,

Key Frame Extraction method shows how many numbers of frames an individual gesture

will have.
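A minimal sketch of gradient-based key frame selection in the spirit described above: frames where the inter-frame gradient energy changes sharply are kept as boundaries between individual signs. The threshold and video path are illustrative assumptions, not the study's values.

import cv2
import numpy as np

def key_frames(video_path, threshold=12.0):
    """Indices of frames whose gradient energy jumps sharply from the
    previous frame; such jumps approximate sign boundaries, and frames
    between jumps are treated as redundant."""
    cap = cv2.VideoCapture(video_path)
    keys, prev_energy, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Mean gradient magnitude of the frame (Sobel in x and y).
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        energy = float(np.mean(cv2.magnitude(gx, gy)))
        if prev_energy is not None and abs(energy - prev_energy) > threshold:
            keys.append(idx)
        prev_energy, idx = energy, idx + 1
    cap.release()
    return keys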
Preprocessing Step – Review of Key Frame Extraction Techniques for Object Detection in Video (Jevoor, Makandar, & Mulimani, 2015)

The study of Jevoor, Makandar, and Mulimani (2015) focuses on different Key Frame Extraction techniques for object detection in video. Videos are often constructed in a hierarchical fashion: [Frame] -> [Shot] -> [Scene] -> [Video], and as a preprocessing step, frames have to be extracted for object recognition and detection, and then for the tracking process (Jevoor, Makandar, & Mulimani, 2015). For this process to be successful, choosing an efficient technique to extract the key frames from the video is important. The proponents compared different effective Key Frame Extraction methods such as Cluster-Based Analysis, the Generalized Gaussian Density Method, the General-Purpose Graphical Processing Unit, and Histogram Difference. The researchers concluded that among all the older methods, the Histogram Difference and X2 (chi-square) method is the fastest and most valid technique for extracting the frames from any kind of video (Jevoor et al., 2015). The Histogram Difference and X2 (chi-square) method achieved an efficiency of almost 94% to 98%.
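A minimal sketch of the histogram-difference approach with a chi-square comparison, using OpenCV's built-in histogram functions; the bin count, threshold, and video path are illustrative assumptions.

import cv2

def histogram_key_frames(video_path, threshold=0.3):
    """Keep a frame when the chi-square distance between its grayscale
    histogram and the previous kept frame's histogram is large."""
    cap = cv2.VideoCapture(video_path)
    keys, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is None or cv2.compareHist(prev_hist, hist,
                                                cv2.HISTCMP_CHISQR) > threshold:
            keys.append(idx)       # large histogram change -> key frame
            prev_hist = hist
        idx += 1
    cap.release()
    return keys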
