INTRODUCTION
The face is our primary focus of attention in social intercourse, playing a major role in
conveying identity and emotion. We can recognize thousands of faces learned throughout our lifetime
and identify familiar faces at a glance even after years of separation. This skill is quite robust, despite large
changes in the visual stimulus due to viewing conditions, expression, aging, and distractions such as
glasses or changes in hairstyle or facial hair. Computational models of face recognition are particularly
interesting because they can contribute not only to theoretical insights but also to practical
applications. Computers that recognize faces could be applied to a wide variety of problems,
including criminal identification, security systems, image and film processing, and human-computer
interaction [1]. Unfortunately, developing a computational model of face recognition is quite difficult,
because faces are complex, multidimensional, and meaningful visual stimuli. Attention should therefore
be directed toward developing a sort of early, pre-attentive pattern recognition capability that does
not depend on three-dimensional information or detailed geometry, yielding a
computational model of face recognition that is fast, reasonably simple, and accurate.
Automatically learning and later recognizing new faces is practical within this framework.
Recognition under widely varying conditions is achieved by training on a limited number of
characteristic views (e.g. a "straight on" view, a 45-degree view, and a profile view). The approach has
advantages over other face recognition schemes in its speed, simplicity, and learning capacity. Images
of faces, represented as high-dimensional pixel arrays, often belong to a manifold of intrinsically low
dimension. Face recognition, and computer vision research in general, has witnessed a growing
interest in techniques that capitalize on this observation and apply algebraic and statistical tools for
extraction and analysis of the underlying manifold [2]. The eigenface method is a face recognition approach that
can locate and track a subject's head, and then recognize the person by comparing characteristics of
the face to those of known individuals. The computational approach taken in this system is motivated
by both physiology and information theory, as well as by the practical requirements of near-real-time
performance and accuracy. This approach treats face recognition as an intrinsically two-dimensional
(2-D) recognition problem rather than one requiring recovery of three-dimensional geometry,
taking advantage of the fact that faces are normally upright and thus may be described by a small set
of 2-D characteristic views [3].
T.K.M College Of Engineering
1.1 MOTIVATION
Gestures form a significant part of human communication, most of it visual [5]. A prototype for
drowsiness detection is also developed using this technology.
CHAPTER 2
SYSTEM DESIGN AND DEVELOPMENT
2.1. SYSTEM DESIGN
The overall system design is as follows. The user submits a request for identifying the facial
expression, and a processing system accepts the request. An image capturing device captures the
image; in the processing unit, the image is processed and converted into suitable formats by a
processing module. The images are converted into binary format and stored in a byte array. These
images are then given to a face recognition system, which recognizes the face in the captured image.
The captured image is processed in an expression checking module using appearance-based
techniques such as Principal Component Analysis and feature classification techniques such as the
Support Vector Machine. The overall system development diagram is shown in Fig 2.1.
2.2. STAGES
The various stages of the proposed system are shown in Fig 2.2. Facial signals are captured
using the capturing devices; usually a webcam is used for this purpose. The image thus obtained is
given to the image processing system, which provides various functionalities. Using these, the face in
the image frame is located, the expression is checked, and the result is exhibited.
2.3. METHODOLOGY
Different stages in the system are shown in Fig 2.3. Images can either be stored in a database
or captured live using a webcam. An image captured using a webcam is preprocessed
using MATLAB and the region of interest is selected. Using the features extracted, the expression is
recognized. If the image is taken from the database, the same operations are applied to it and the
expression is recognized. Recognized expressions are classified and stored as the knowledge of the
system.
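The capture, preprocess, and region-of-interest steps above can be sketched as follows. This is a minimal numpy illustration standing in for the MATLAB preprocessing stage; the function names, the fixed crop box, and the dummy frame are assumptions for illustration, not the project's actual code.

```python
import numpy as np

def preprocess(frame):
    """Convert an RGB frame to grayscale and normalize it to [0, 1].

    The luminance weights are the standard ITU-R BT.601 coefficients;
    this stands in for the MATLAB preprocessing step in the text.
    """
    gray = frame @ np.array([0.299, 0.587, 0.114])
    return gray / 255.0

def select_roi(gray, top, left, height, width):
    """Crop the region of interest (e.g. a detected face box)."""
    return gray[top:top + height, left:left + width]

# A dummy 4x4 all-white RGB "frame" standing in for a webcam capture.
frame = np.full((4, 4, 3), 255.0)
gray = preprocess(frame)
roi = select_roi(gray, 1, 1, 2, 2)
print(gray.shape, roi.shape, gray.max())   # (4, 4) (2, 2) 1.0
```

In the real system the frame would come from a webcam and the crop box from a face detector; both are replaced by fixed values here so the sketch stays self-contained.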
[Figs 2.1/2.2: use-case diagram residue. Recoverable labels: capturing facial signals, image processing system, use capturing devices, needs technical support.]
[Figs: module navigation diagram residue. Admin module: home, logout, upload image, browse, save image, sleepy, my image, my stored image, train, start, file, stop, check expn, load image, capture image. Train user module: home, logout, upload image, browse, save image, image file, sleepy alarm, my image, my stored image, start, check expn, stop, capture image.]
Separate forms are maintained for login, start, home page, my images, upload, detect, sleepy, and view.
(x1, x2, x3) = Σ_j aj uj    ...(2.1)
The analogy here is between the line and the face space, and between R3 and the image space.
[Fig: form navigation diagram residue. Forms: FRMHOME, FRMLOGIN, FRMNEWUSER, FRMADMINHOME, FRMMYIMAGES, FRMUPLOAD, FRMDETECT, FRMSLEEPY, FRMVIEWEXPRESSION; START, SLEEPY DRIVER.]
Principal component analysis has been called one of the most valuable results from applied
linear algebra. It provides a roadmap for how to reduce a complex data set to a lower dimension to reveal
the hidden, simplified structure that often underlies it. A principal component is computed as

C1 = b11(X1) + b12(X2) + ... + b1p(Xp)

where C1 = the subject's score on principal component 1 (the first component extracted),
b1p = the regression coefficient (or weight) for observed variable p, as used in creating principal
component 1, and Xp = the subject's score on observed variable p.
A principal component was defined as a linear combination of optimally weighted observed variables.
The term "linear combination" refers to the fact that scores on a component are created by adding
together scores on the observed variables being analyzed. "Optimally weighted" refers to the fact that
the observed variables are weighted in such a way that the resulting components account for a
maximal amount of variance in the data set.
Number of components extracted. The preceding section may have created the impression
that, if a principal component analysis were performed on responses to the 7-item
questionnaire, only two components would be created. However, such an impression would not be
entirely correct. In reality, the number of components extracted in a principal component analysis is
equal to the number of observed variables being analyzed. This means that an analysis of your 7-item
questionnaire would actually result in seven components, not two. However, in most analyses, only
the first few components account for meaningful amounts of variance, so only these first few
components are retained, interpreted, and used in subsequent analyses (such as in multiple regression
analyses). For example, in your analysis of the 7-item job satisfaction questionnaire, it is likely that
only the first two components would account for a meaningful amount of variance; therefore only
these would be retained for interpretation. You would assume that the remaining five components
accounted for only trivial amounts of variance. These latter components would therefore not be
retained, interpreted, or further analyzed.
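The retention rule described above can be illustrated numerically. The sketch below is a hedged numpy example: the two-factor 7-item data set is made up for illustration, mirroring the hypothetical questionnaire. It shows that as many components are extracted as there are variables, but the first two account for most of the variance.

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 "subjects" answering a 7-item questionnaire; items 1-4 and 5-7
# are driven by two underlying factors, as in the example in the text.
f1, f2 = rng.normal(size=(2, 100))
X = np.column_stack([f1 + 0.3 * rng.normal(size=100) for _ in range(4)] +
                    [f2 + 0.3 * rng.normal(size=100) for _ in range(3)])

# PCA: eigendecomposition of the covariance matrix of the 7 items.
cov = np.cov(X, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]        # sorted, largest first

print(len(eigvals))                            # 7 components extracted
print(eigvals[:2].sum() / eigvals.sum())       # first two dominate (> 0.8)
```

Seven eigenvalues come out of a 7-variable analysis, but only the first two are large; the remaining five would be discarded as trivial, exactly as described above.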
Characteristics of principal components. The first component extracted in a principal
component analysis accounts for a maximal amount of variance in the observed variables.
Under typical conditions, this means that the first component will be correlated with at least some of
the observed variables. It may be correlated with many. The second component extracted will have
two important characteristics. First, this component will account for a maximal amount of variance in
the data set that was not accounted for by the first component. Again under typical conditions, this
means that the second component will be correlated with some of the observed variables that did not
display strong correlations with component 1.
The second characteristic of the second component is that it will be uncorrelated with the first
component. Literally, if you were to compute the correlation between components 1 and 2, that
correlation would be zero. The remaining components that are extracted in the analysis display the
same two characteristics: each component accounts for a maximal amount of variance in the observed
variables that was not accounted for by the preceding components, and is uncorrelated with all of the
preceding components. A principal component analysis proceeds in this fashion, with each new
component accounting for progressively smaller and smaller amounts of variance (this is why only the
first few components are usually retained and interpreted). When the analysis is complete, the
resulting components will display varying degrees of correlation with the observed variables, but are
completely uncorrelated with one another.
An important and largely unsolved problem in dimensionality reduction is the choice of the
intrinsic dimensionality of the principal manifold. No analytical derivation of this number for a
complex natural visual signal is available to date. To simplify this problem, it is common to assume
that in the noisy embedding of the signal of interest (in our case, a point sampled from the face space)
in a high-dimensional space, the signal-to-noise ratio is high. Statistically, that means that the variance
of the data along the principal modes of the manifold is high compared to the variance within the
complementary space. This assumption relates to the eigenspectrum, the set of the eigenvalues of the
data covariance matrix. The i-th eigenvalue is equal to the variance along the i-th principal
component; thus, a reasonable algorithm for detecting k is to search for the location along the
decreasing eigenspectrum where the value of λi drops significantly. The basis vectors
constructed by principal component analysis of
face images may give insight into the information content of face images, emphasizing the significant
local and global "features". Such features may or may not be directly related to face features such as
eyes, nose, lips, and hair. In the language of information theory, we want to extract the relevant
information in a face image, encode it as efficiently as possible, and compare one face encoding with
a database of models encoded similarly. A simple approach to extracting the information contained in
an image of a face is to somehow capture the variation in a collection of images, independent of any
judgment of features, and use this information to encode and compare individual face images. In
mathematical terms, we wish to find the principal components of the distribution of faces, or the
eigenvectors of the covariance matrix of the set of face images. These
eigenvectors can be thought of as a set of features that together characterize the variation between
face images. Each image location contributes more or less to each eigenvector, so that we can display
the eigenvector as a sort of ghostly face which we call an eigenface. Each individual face can be
represented exactly in terms of a linear combination of the eigenfaces. Each face can also be
approximated using only the "best" eigenfaces, those that have the largest eigenvalues and which
therefore account for the most variance within the set of face images. The best M eigenfaces span an
M-dimensional subspace, the "face space", of all possible images.
This approach to face recognition involves the following initialization operations:
1. Acquire an initial set of face images (the training set).
2. Calculate the eigenfaces from the training set, keeping only the M images that correspond to the
highest eigenvalues. These M images define the face space.
3. Calculate the corresponding distribution in M-dimensional weight space for each known
individual, by projecting his or her face images onto the "face space".
Having initialized the system, the following steps are then used to recognize new face images:
1. Calculate a set of weights based on the input image and the M eigenfaces by projecting the
input image onto each of the eigenfaces.
2. Determine whether the image is a face at all by checking to see if the image is sufficiently close
to "face space".
3. If it is a face, classify the weight pattern as either a known person or as unknown.
4. (Optional) Update the eigenfaces and/or weight patterns.
5. (Optional) If the same unknown face is seen several times, calculate its characteristic weight
pattern and incorporate it into the known faces.
The idea behind the eigenface approach is to find the vectors that best account for the distribution of face
images within the entire image space. These vectors define the subspace of face images, which we
call "face space". Each vector is of length N², describes an N-by-N image, and is a linear
combination of the original face images; because they are face-like in appearance, we refer to them as
"eigenfaces" [6].
Let the training set of face images be Γ1, Γ2, Γ3, ..., ΓM. The average face of the set is
defined by

Ψ = (1/M) Σ_{n=1}^{M} Γn    ...(2.4)

Each face differs from the average by the vector Φi = Γi − Ψ.
An example training set is shown in Fig, with the average face Ψ shown. This set of very large vectors
is then subject to principal component analysis, which seeks a set of M orthonormal vectors un
which best describe the distribution of the data. The k-th vector, uk, is chosen such that
λk = (1/M) Σ_{n=1}^{M} (uk^T Φn)²    ...(2.5)

is a maximum, subject to

ul^T uk = δlk = 1 if l = k, and 0 otherwise.

The vectors uk and scalars λk are the eigenvectors and eigenvalues, respectively, of the covariance
matrix

C = (1/M) Σ_{n=1}^{M} Φn Φn^T = A A^T    ...(2.6)

where the matrix A = [Φ1 Φ2 Φ3 ... ΦM].
The matrix C, however, is N² by N², and determining the N² eigenvectors and eigenvalues is an
intractable task for typical image sizes. We need a computationally feasible method to find these
eigenvectors.
If the number of data points in the image space is less than the dimension of the space (M < N²), there
will be only M − 1, rather than N², meaningful eigenvectors (the remaining eigenvectors will have
associated eigenvalues of zero). Fortunately, we can solve for the N²-dimensional eigenvectors in this case
by first solving for the eigenvectors of an M-by-M matrix, e.g., solving a 16 x 16 matrix rather than
a 16,384 x 16,384 matrix, and then taking appropriate linear combinations of the face images.
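This computational shortcut can be checked numerically. The sketch below is a minimal numpy illustration; the sizes N² = 1024 and M = 8 and the random "face" matrix are arbitrary assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
N2, M = 1024, 8                      # N^2 pixels per image, M training images
A = rng.normal(size=(N2, M))         # columns stand in for mean-subtracted faces

# Small system: eigenvectors v_i of the M x M matrix L = A^T A.
L = A.T @ A
mu, V = np.linalg.eigh(L)            # eigenvalues ascending

# Premultiplying by A maps each v_i to an eigenvector u_i = A v_i of A A^T.
U = A @ V
residual = np.linalg.norm((A @ A.T) @ U[:, -1] - mu[-1] * U[:, -1])
print(residual < 1e-6 * np.linalg.norm(U[:, -1]))   # True: A v_i is an eigenvector
```

Only an 8 x 8 eigenproblem is solved, yet the resulting vectors satisfy the 1024 x 1024 eigenvalue equation to numerical precision, which is exactly the reduction the text describes.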
Consider the eigenvectors vi of A^T A such that

A^T A vi = μi vi    ...(2.7)

Premultiplying both sides by A, we have

A A^T (A vi) = μi (A vi)    ...(2.8)

from which we see that A vi are the eigenvectors of C = A A^T. Following this analysis, we construct
the M-by-M matrix L = A^T A, where Lmn = Φm^T Φn, and find the M eigenvectors vl of L. These
vectors determine linear combinations of the M training set face images to form the eigenfaces ul:

ul = Σ_{k=1}^{M} vlk Φk,  l = 1, ..., M    ...(2.9)
With this analysis the calculations are greatly reduced, from the order of the number of pixels in the
images (N²) to the order of the number of images in the training set (M). The associated eigenvalues allow
us to rank the eigenvectors according to their usefulness in characterizing the variation among the
images. A new face image Γ is transformed into its eigenface components (projected into "face
space") by a simple operation,

ωk = uk^T (Γ − Ψ)    ...(2.10)

for k = 1, ..., M'. This describes a set of point-by-point image multiplications and summations,
operations performed at approximately frame rate on current image processing hardware. The
weights form a vector Ω^T = [ω1 ω2 ... ωM'] that describes the contribution of each
eigenface in representing the input face image, treating the eigenfaces as a basis set for face images.
The vector may then be used in a standard pattern recognition algorithm to find which of a number of
predefined face classes, if any, best describes the face. The simplest method for classifying an input
face image is to find the face class k that minimizes the Euclidian distance

εk² = ||Ω − Ωk||²    ...(2.11)
where Ωk is a vector describing the k-th face class. The face classes Ωk are calculated by
averaging the results of the eigenface representation over a small number of face images of each
individual. A face is classified as belonging to class k when the minimum εk is below some chosen
threshold θε. Otherwise the face is classified as "unknown", and optionally a new face class is created.
Because creating the vector of weights is equivalent to projecting the original face image onto the low-dimensional
face space, many images will project onto a given pattern vector. The distance ε between
the image and the face space is simply the squared distance between the mean-adjusted input image
Φ = Γ − Ψ and its projection onto face space, Φf = Σ_{i=1}^{M'} ωi ui:

ε² = ||Φ − Φf||²    ...(2.12)
Thus there are four possibilities for an input image and pattern vector: near face space and near a face
class; near face space but not near a known face class; distant from face space and near a face class;
and distant from face space and not near a known face class. In the first case, an individual is
recognized and identified. In the second case, an unknown individual is present. The last two cases
indicate that the image is not a face image.
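The four-way decision above can be written down directly. In this sketch the helper name `classify` and the threshold values are illustrative assumptions; the two distances correspond to equations (2.11) and (2.12) in the text.

```python
def classify(eps_space, eps_class, theta_space, theta_class):
    """Map the two eigenface distances to one of the four cases.

    eps_space: distance of the input image from face space (eq. 2.12);
    eps_class: distance to the nearest known face class (eq. 2.11);
    theta_space, theta_class: assumed tuning thresholds.
    """
    near_space = eps_space < theta_space
    near_class = eps_class < theta_class
    if near_space and near_class:
        return "known face"        # case 1: recognized and identified
    if near_space:
        return "unknown face"      # case 2: a face, but no matching class
    return "not a face"            # cases 3 and 4: far from face space

print(classify(0.1, 0.2, 1.0, 1.0))   # known face
print(classify(0.1, 5.0, 1.0, 1.0))   # unknown face
print(classify(9.0, 0.2, 1.0, 1.0))   # not a face
```

Note that being near a known face class while far from face space (the third case) is still rejected, since the pattern match is spurious when the image is not face-like at all.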
Support vector machines are a set of related supervised learning methods that analyze data
and recognize patterns, used for classification and regression analysis [5]. The original support vector
machine algorithm was invented by Vladimir Vapnik, and the current standard incarnation (soft
margin) was proposed by Corinna Cortes and Vladimir Vapnik [1]. The standard support vector
machine takes a set of input data and predicts, for each given input, which of two possible classes the
input is a member of, which makes the support vector machine a non-probabilistic binary linear
classifier. Given a set of training examples, each
marked as belonging to one of two categories, a support vector machine training algorithm builds a
model that assigns new examples into one category or the other. Intuitively, a support vector machine
model is a representation of the examples as points in space, mapped so that the examples of the
separate categories are divided by a clear gap that is as wide as possible. New examples are then
mapped into that same space and predicted to belong to a category based on which side of the gap
they fall on.
More formally, a support vector machine constructs a hyperplane or set of hyperplanes in a
high- or infinite-dimensional space, which can be used for classification, regression, or other tasks.
Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest
training data points of any class (the so-called functional margin), since in general the larger the margin,
the lower the generalization error of the classifier.
Whereas the original problem may be stated in a finite-dimensional space, it often happens
that in that space the sets to be discriminated are not linearly separable. For this reason it was
proposed that the original finite-dimensional space be mapped into a much higher-dimensional space,
presumably making the separation easier in that space. Support vector machine schemes use a
mapping into a larger space such that dot products may be computed easily in terms of the variables in
the original space, keeping the computational load reasonable. The dot products in the larger space
are defined in terms of a kernel function K(x, y) selected to suit the problem. The hyperplanes in the
larger space are defined as the set of points whose inner product with a vector in that space is constant.
The vectors defining the hyperplanes can be chosen to be linear combinations, with parameters αi, of
images of feature vectors that occur in the data base. With this choice of a hyperplane, the points x in
the feature space that are mapped into the hyperplane are defined by the relation:

Σi αi K(xi, x) = constant    ...(2.13)
Each data point xi is a p-dimensional real vector, labeled with yi = 1 or yi = −1 indicating the class to
which xi belongs. We want to find the maximum-margin hyperplane that divides the points
having yi = 1 from those having yi = −1. Any hyperplane can be written as the set of points x
satisfying

w · x − b = 0    ...(2.14)

[Fig: maximum-margin hyperplane and margins for a support vector machine trained with samples
from two classes. Samples on the margin are called the support vectors.]
where · denotes the dot product. The vector w is a normal vector: it is perpendicular to the hyperplane.
The parameter b/||w|| determines the offset of the hyperplane from the origin along the normal
vector w.
We want to choose w and b to maximize the margin, the distance between the parallel hyperplanes
that are as far apart as possible while still separating the data. These hyperplanes can be described by
the equations

w · x − b = 1    ...(2.15)

w · x − b = −1    ...(2.16)
Note that if the training data are linearly separable, we can select the two hyperplanes of the margin
in a way that there are no points between them and then try to maximize their distance. By using
geometry, we find the distance between these two hyperplanes to be 2/||w||, so we want to
minimize ||w||. As we also have to prevent data points from falling into the margin, we add the following
constraint: for each i, either w · xi − b ≥ 1 for xi of the first class, or w · xi − b ≤ −1 for xi of the second.
This can be rewritten as:

yi (w · xi − b) ≥ 1, for all i = 1, 2, ..., n    ...(2.17)

We can put this together to get the optimization problem: minimize ||w|| (in w, b) subject to (2.17).
Primal form: The optimization problem presented in the preceding section is difficult to solve because
it depends on ||w||, the norm of w, which involves a square root. Fortunately it is possible to alter the
equation by substituting ||w|| with (1/2)||w||² (the factor of 1/2 being used for mathematical
convenience) without changing the solution (the minimum of the original and the modified equation
have the same w and b). This is a quadratic programming (QP) optimization problem. More clearly,
minimize (in w, b)

(1/2)||w||²    ...(2.18)

subject to

yi (w · xi − b) ≥ 1    ...(2.19)
One could be tempted to express the previous problem by means of non-negative Lagrange
multipliers αi as

min_{w,b} max_{α≥0} { (1/2)||w||² − Σ_{i=1}^{n} αi [yi (w · xi − b) − 1] }    ...(2.20)

but this would be wrong: suppose we can find a
family of hyperplanes which divide the points; then all yi (w · xi − b) − 1 ≥ 0. Hence we could find
the minimum by sending all αi to +∞, and this minimum would be reached for all the members of
the family, not only for the best one, which can be chosen by solving the original problem.
Nevertheless the previous constrained problem can be expressed as
min_{w,b} max_{α≥0} { (1/2)||w||² − Σ_{i=1}^{n} αi [yi (w · xi − b) − 1] }    ...(2.21)
that is, we look for a saddle point. In doing so, all the points which can be separated as
yi (w · xi − b) − 1 > 0 do not matter, since we must set the corresponding αi to zero. This problem
can now be solved by standard quadratic programming techniques and programs. The solution can be
expressed by terms of linear combination of the training vectors as
w = Σ_{i=1}^{n} αi yi xi    ...(2.22)
Only a few αi will be greater than zero. The corresponding xi are exactly the support vectors, which
lie on the margin and satisfy yi (w · xi − b) = 1. From this one can derive that the support vectors
also satisfy

w · xi − b = 1/yi = yi,  i.e.  b = w · xi − yi    ...(2.23)
which allows one to define the offset b. In practice, it is more robust to average over all NSV support
vectors:

b = (1/NSV) Σ_{i=1}^{NSV} (w · xi − yi)    ...(2.24)
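Equation (2.24) can be checked on a toy problem whose maximum-margin hyperplane is known by hand. The data below are an illustrative assumption: positives lie on the line x2 = 2, the negative on x2 = 0, so the separating hyperplane is x2 = 1 with w = (0, 1) and offset b = 1.

```python
import numpy as np

# Support vectors of a hand-built separable problem: two positives on
# the margin x2 = 2, one negative on the margin x2 = 0.
support_vectors = np.array([[0.0, 2.0], [3.0, 2.0], [1.0, 0.0]])
labels = np.array([1.0, 1.0, -1.0])
w = np.array([0.0, 1.0])           # maximum-margin normal for this data

# Eq. (2.24): average w . x_i - y_i over the N_SV support vectors.
b = np.mean(support_vectors @ w - labels)
print(b)   # 1.0, matching the hyperplane x2 = 1
```

Each support vector individually yields b = 1 via equation (2.23); averaging, as (2.24) recommends, gives the same value here but is more robust under numerical noise.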
2.13 BIASED AND UNBIASED HYPERPLANES
For simplicity, it is sometimes required that the hyperplane pass through the origin
of the coordinate system. Such hyperplanes are called unbiased, whereas general hyperplanes not
necessarily passing through the origin are called biased. An unbiased hyperplane can be enforced by
setting b = 0 in the primal optimization problem. The corresponding dual is identical to the dual given
above without the equality constraint Σi αi yi = 0.
Common kernel functions include:

Polynomial (homogeneous): k(xi, xj) = (xi · xj)^d

Polynomial (inhomogeneous): k(xi, xj) = (xi · xj + 1)^d

Gaussian radial basis function: k(xi, xj) = exp(−γ ||xi − xj||²), for γ > 0, sometimes
parametrized using γ = 1/(2σ²)
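These kernels are straightforward to implement; the sketch below is a minimal numpy version (the function names are illustrative, not from any particular library).

```python
import numpy as np

def poly_kernel(x, y, d, homogeneous=True):
    """Polynomial kernel: (x . y)^d, or (x . y + 1)^d when inhomogeneous."""
    base = np.dot(x, y) + (0.0 if homogeneous else 1.0)
    return base ** d

def rbf_kernel(x, y, sigma):
    """Gaussian RBF kernel with gamma = 1 / (2 sigma^2)."""
    gamma = 1.0 / (2.0 * sigma ** 2)
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

x, y = np.array([1.0, 2.0]), np.array([3.0, 0.0])
print(poly_kernel(x, y, 2))            # (1*3 + 2*0)^2 = 9.0
print(poly_kernel(x, y, 2, False))     # (3 + 1)^2 = 16.0
print(rbf_kernel(x, x, 1.0))           # identical points -> 1.0
```

Note that the RBF kernel of a point with itself is always 1 and decays toward 0 as the points move apart, which is why it behaves as a similarity measure.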
CHAPTER 3
SYSTEM IMPLEMENTATION
The entire sequence of training and testing is sequential and can be broadly classified as consisting of
the following three steps:
Database Preparation
Training
Testing
3.1 DATABASE PREPARATION
The database was prepared for the training and testing phases by taking 4-5 photographs of 10
persons in different expressions and viewing angles but in similar conditions (such as lighting,
background, and distance from the camera) using a low-resolution camera. These images were stored
in the test folder. The steps are shown below in fig 3.1.
3.2 TRAINING
1. Select any one (.bmp) file from the train database using an open file dialog box.
2. Using that path, read all the faces of each person in the train folder.
3. Normalize all the faces.
4. Find the significant eigenvectors of the reduced covariance matrix.
5. From these, calculate the eigenvectors of the covariance matrix.
6. Calculate the Recognizing Pattern Vector (RPV) for each image and the average RPV for each person.
7. For each person, calculate the maximum of the distances of all his image RPVs from the average
RPV of that person.
8. Expression classification is performed after this.
Flowchart is shown in fig 3.2
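Steps 3-7 above can be sketched with numpy on toy data. The array sizes, the random "faces", and the dictionary-based bookkeeping are assumptions for illustration, not the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy "faces": 3 persons x 4 images each, 64 pixels per image.
faces = {p: rng.normal(loc=p, size=(4, 64)) for p in range(3)}

# Steps 3-5: stack and normalize, then get eigenfaces via the reduced
# M x M covariance problem (L = A^T A) rather than the full N^2 x N^2 one.
X = np.vstack(list(faces.values()))              # 12 x 64
mean_face = X.mean(axis=0)
A = (X - mean_face).T                            # 64 x 12, columns Phi_i
mu, V = np.linalg.eigh(A.T @ A)                  # reduced eigenproblem
U = A @ V[:, -3:]                                # top 3 eigenfaces
U /= np.linalg.norm(U, axis=0)

# Step 6: recognizing pattern vectors (projections) and per-person average.
rpv = {p: (imgs - mean_face) @ U for p, imgs in faces.items()}
avg_rpv = {p: r.mean(axis=0) for p, r in rpv.items()}

# Step 7: per-person threshold = max distance of own RPVs from the average.
thresh = {p: np.linalg.norm(rpv[p] - avg_rpv[p], axis=1).max()
          for p in faces}
print(sorted(thresh))   # [0, 1, 2] -> one threshold per person
```

At test time a new image's RPV would be compared against each `avg_rpv`, with the per-person `thresh` acting as the acceptance radius, mirroring the testing procedure in section 3.3.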
3.3 TESTING
Testing is carried out by the following steps, as shown in fig 3.3: find the distance of the input
image's eigenface components from the average eigenface components of all the persons.
3.4 TECHNOLOGY
Microsoft released the .net framework in February 2002.Its biggest initiative since the launch
of windows in 1991. Dot net is a revolutionary Multi language platform that knits various aspects of
application development together with the internet .The framework covers all layers of software
development above the operating system. Several software will be developed by Microsoft to achieve
this goal .It is accepted that every player in the industry, be it a software developer or a device
manufacture adopt .Net so that they can be integrated. The .Net initiative is all about enabling data
transfer between networks, PCs and devices seamlessly, independent of the platforms, architecture
and solutions. Microsoft has taken many of the best ideas in the industry, combined in some ideas of
their own, and bought them all into one coherent package. .Net is Microsofts next generation
platform for building web applications and web service. It is a platform for building web application
and web service. It is a platform for XML web service areas of Microsoft.
Visual Basic .NET is one of the languages directed towards meeting the objectives of
the .NET initiative of creating distributed applications. It has inherited the capability of rapid
application development from its earlier versions and has considerably strengthened the implementation
of object-oriented features. Visual Basic .NET is a powerful object-oriented language that provides
features such as abstraction, encapsulation, inheritance, and polymorphism. In addition, it provides
many other features that did not exist in the earlier versions, such as multithreading and structured
exception handling.
3.4.2.
Some of the key features introduced in Visual Basic .NET are as follows:
Inheritance
It is the ability of a class to derive its characteristics from an existing class. Using Visual
Basic .NET, you can create a class that provides basic functionality so that other classes can inherit its
members. The derived classes can further override the inherited properties and methods to provide
new functionality. VB.NET uses the Inherits keyword to implement inheritance.
Overriding
Methods inherited from the base class can be redefined in the derived class. The methods that can be
overridden by derived classes need to be marked as Overridable in the base class.
Structured Exception Handling
Exceptions are the errors that are generated at runtime as a result of an erroneous statement or
condition or because of some unexpected behavior of the application. For the program to be able to
handle such exceptions VB.NET supports structured exception handling that consists of protected
blocks of code and filters for the possible exceptions that can be raised by the program.
Multithreading:
VB.NET provides full support for creating multithreaded applications. Multithreading enables an
application to contain one or more threads that can share the workload in an application by executing
one at a time.
3.4.3
SQL Server Express Edition (SSE) supports most of the important programming features present in
other SQL Server 2000 editions. In fact, SSE contains the
same database engine that ships with other SQL Server 2000 editions. The SQL Server 2000 database
engine contains support for the networking protocols, T-SQL, and the storage layer. Advanced
features such as .NET support, the XML data type, stored procedures and triggers, and replication
subscription are also present.
SSE supports databases up to 4GB. An application developed using SSE typically works seamlessly
with other editions of SQL Server 2000. There is no limit on the number of user connections to the
database, but performance is limited by the use of a single CPU and 4GB RAM. Typically
applications using SSE can scale to 25 concurrent users. Easy-to-use graphical interfaces provided
with the SQL Server Management Studio Express Edition (SSMS-EE) Graphical User Interface (GUI)
management tool simplify the basic database operations. This tool contains a query editor that enables
you to interactively work with data inside the database.
SQL Server Configuration Manager allows you to configure networking options. The SSE setup
offers extensive graphical interface tools that allow you to configure the installation. Silent installs are
also supported so that you can transparently install SSE with your application. Servicing of SSE is
integrated with Windows Update and is almost automatic for the user. There is deep integration of
SQL Server 2000 Express Edition with all editions of Visual Studio, including Visual Basic Express
and Visual Web Developer 2008 Express. The rich data controls provided automate simple tasks so
that you can develop a forms-based application that uses a SSE database without writing a line of
code. The single-user scenario that is commonly used for desktop clients and web users is simplified
by the Xcopy feature in SSE that enables the database files to be copied and moved like normal
windows files. Xcopy deployment simplifies the deployment of your application so that you can just
zip up your application and database file and email it to the destination user. The recipient copies the
unzipped file to her machine and double-clicks the application to run it.
Ease of deployment: Xcopy deployment allows you to copy, move, and delete database files just like
normal Windows files. There is support for SSE with all Visual Studio editions so that it is possible to
develop simple desktop and web database applications without writing a line of
code. Building, debugging, and deploying your application is possible with a few mouse clicks from
within Visual Basic Express or Visual Web Developer Express. Application deployment becomes very
easy with Xcopy deployment and Visual Studio ClickOnce support.
User instance functionality: SSE supports the Run as Normal User scenario, where a
non-administrator on the local machine can use the functionality of SSE without having to involve the
system administrator. This is enabled using the user instance functionality, which provides for
a private instance of SSE running in each user's context. These user instances are automatically
started up by the application using the database owned by the user. One of the goals for the user
instance is to make the single-user scenario very simple; the application developer need not worry
about the complicated SQL Server Security model. SSE supports a file-based permission model which
means that the read and write permissions on the physical database file are used to assign user
permissions and privileges. SSE can also be used as a server where multiple users can connect to the
server database; the performance characteristics of the server are governed by the limits on the CPU
and memory usage. An instance of SSE can use only one CPU and 4GB RAM.
Security: Much thought was given to making SSE install and run securely on your machine. Only
local machine access is enabled by default because a majority of the SSE use cases are for local data.
SSE runs under a low privilege service account. The user instance feature described earlier ensures
that SSE runs under the context of each user for single-user scenarios. For multi-user scenarios, the
SQL Server security model ensures appropriate access to authenticated users.
Advanced security features including encryption are also included in the product.
Replication and messaging capabilities: SSE supports offline capabilities by supporting replication
subscription. Retail branches can subscribe to central offices with synchronization between the servers
occurring at regular intervals. The SQL Service Broker feature supported by SSE provides
asynchronous messaging capabilities so that SSE can send a message to SQL Server. This is
particularly relevant for B2B web services.
Management tools: The SQL Server Management Studio Express Edition tool, which is available
via web download, offers capabilities to develop and test against SSE. It has a query editor that allows
you to execute arbitrary T-SQL statements. SQL Server Configuration Manager allows you to change
networking protocol settings and the SQL Service options. Rich command line facilities are available
with the SQLCMD command line tool, while the SQL Bulk Copy (BCP) tool provides bulk transfer
features.
Easy setup options: SSE provides a reliable and robust setup user interface that guides you through
the various setup and configuration options. A silent setup option is available where little or no user
interface is shown. In a silent install you have to pass the relevant configuration values as command-line parameters or in setup initialization files. The silent option is typically preferred by ISVs who want to control the user experience completely; for instance, they may want only their own application logo to show on the screen during installation.
3.4.4
This scenario covers a SQL Server based server application whose components are installed in a simpler, cheaper server configuration; such applications may also be web-facing. There are three typical sub-scenarios: evaluation copies, single-user or small-user editions, and low-volume web applications. The
evaluation scenario covers the case of a server application that must be deployed on a single machine
for evaluation or demo purposes. For example, the evaluation edition of a customer service
application can ship with SSE or SQL Server Evaluation edition. The application development
scenario is a subset of the evaluation scenario that involves the usage of SSE only in the design and
development phase of a SQL Server application. ISVs develop applications using free SSE licenses
while relying on their customers to purchase SQL Server licenses for testing and production
deployment. This scenario enables development to proceed on client operating systems on desktops or
laptops. The SQL Server 2008 Evaluation edition can also be used for this purpose.
The low-volume web application scenario typically includes web applications deployed on web
servers with low concurrent usage patterns. However, this includes a model where the server
application stores configuration or other data that does not get directly queried by remote clients, and
hence the SSE use is typically low volume.
SQL Server compatibility (for easy scalability) together with the price point provides the primary attraction of SSE in this scenario. Whereas the previous scenarios involve end-users installing applications on desktop operating systems, the server scenarios generally involve more knowledgeable end-users or even IT staff, and will always include installation on server operating systems. Thus the deployment environment will more closely match that of other editions of SQL Server 2008; however, the end-user will typically still not be as skilled or experienced as the SQL Server administrator in a server environment.
3.4.5
MATLAB 7
MATLAB is a software package for high-performance numerical computation and
visualization. It provides an interactive environment with hundreds of built-in functions for technical
computation, graphics and animation. It also provides easy extensibility with its own high level
programming language. The name MATLAB stands for matrix laboratory.
MATLAB's built-in functions provide excellent tools for linear algebra computations, data analysis, signal processing, optimization, numerical solution of ordinary differential equations, quadrature, and many other types of scientific computation. Most of these functions use state-of-the-art algorithms. There are numerous functions for 2-D and 3-D graphics, as well as for animation. Users are not limited to the built-in functions; they can write their own functions in the MATLAB language, and once written these functions behave just like the built-in ones. The MATLAB language is easy to learn and use.
There are also several optional toolboxes available from the developers. These toolboxes are collections of functions written for special applications such as symbolic computation, image processing, statistics, control system design, and neural networks, and the list of toolboxes keeps growing. The basic building block of MATLAB is the matrix, and the fundamental data type is the array. Vectors, scalars, real matrices, and complex matrices are all handled automatically as special cases of this basic type. The built-in functions are optimized for vector operations; consequently, vectorized commands and code run much faster in MATLAB.
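The speed advantage of vectorized code can be illustrated with a small sketch; Python with NumPy is used here for concreteness, since MATLAB's vectorized built-ins behave analogously:

```python
import time
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# Element-wise loop: one interpreted operation per element.
t0 = time.perf_counter()
loop_sum = 0.0
for v in x:
    loop_sum += v * v
t_loop = time.perf_counter() - t0

# Vectorized: a single call dispatched to optimized native code.
t0 = time.perf_counter()
vec_sum = float(np.dot(x, x))
t_vec = time.perf_counter() - t0

# Both compute the same sum of squares; the vectorized form is
# typically orders of magnitude faster.
```

The same principle holds for MATLAB code: replacing explicit element loops with whole-array operations lets the optimized built-ins do the work.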
3.4.5.1 Basics of MATLAB
MATLAB windows
1. Command history: All commands typed at the MATLAB prompt in the command window get recorded, even across multiple sessions. You can select a command from this window with the mouse and execute it in the command window by double-clicking on it. You can also select a set of commands from this window and create an M-file with a right click of the mouse.
2. Figure window: The output of all graphics commands typed in the command window is sent to the graphics or figure window, a separate gray window with a white background color. The user can create as many figure windows as system memory will allow.
3. Editor window: This is where you write, edit, create, and save your own programs in files called M-files. You can use any text editor to carry out these tasks; on most systems, MATLAB provides its own built-in editor.
A MAT-file stores data in binary (not human-readable) form. In MATLAB, create MAT-files
by using the save command, which writes the arrays currently in memory to a file as a continuous
byte stream. By convention, this file has the filename extension .mat; thus the name MAT-file. The
load command reads the arrays from a MAT-file into the MATLAB workspace. Most MATLAB users
do not need to know the internal format of a MAT-file. Even users who must read and write MAT-files
from C and Fortran programs do not need to know the MAT-file format if they use the MATLAB
Application Program Interface (API). This API shields users from dependence on the details of the
MAT-file format. However, if you need to read or write MAT-files on a system for which the
MATLAB API library is not supported, you must write your own read and write routines. The
MATLAB API is only available for platforms on which MATLAB is supported. This document
provides the details about the MAT-file format you will need to read and write MAT-files on these
systems.
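As a sketch of the round trip described above, the scipy.io module (assumed available) can read and write Level 5 MAT-files from Python, mirroring MATLAB's save and load commands:

```python
import numpy as np
from scipy.io import savemat, loadmat

# Write two named arrays to a MAT-file, as MATLAB's `save` would.
a = np.arange(6, dtype=np.float64).reshape(2, 3)
b = np.array([[3.14]])
savemat("demo.mat", {"a": a, "b": b})

# Read the arrays back into memory, as MATLAB's `load` would.
data = loadmat("demo.mat")
restored = data["a"]
```

This is exactly the situation the MATLAB API is meant to shield users from: the library handles the binary byte-stream format so the caller deals only in named arrays.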
CHAPTER 4
APPLICATION DEVELOPMENT
DROWSY DRIVER DETECTION
Images of the driver's face are captured continuously, and the successive images are compared with each other to detect drowsiness. If the images show that the eyes are closed, the software produces an alarm sound to alert the person.
A set of images with drowsy expressions is trained with administrator privilege. In order to do training and classification, Principal Component Analysis and a Support Vector Machine are used. These images are stored in the system database. The new images captured are compared with the database and the corresponding results are produced. The overall working is given in the flowchart in fig 4.1.
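The training/classification pipeline just described (PCA for dimensionality reduction feeding an SVM classifier) can be sketched with scikit-learn; the data below are random stand-ins, not the project's actual face images or parameters:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for flattened face images: class 0 ("alert")
# and class 1 ("drowsy"), separated by a mean shift. Real images
# from the system database would replace these.
n_per_class, n_pixels = 30, 64 * 64
X = np.vstack([
    rng.normal(0.0, 1.0, (n_per_class, n_pixels)),
    rng.normal(1.0, 1.0, (n_per_class, n_pixels)),
])
y = np.array([0] * n_per_class + [1] * n_per_class)

# PCA reduces each image to a few principal components, and the
# SVM classifies in that low-dimensional space.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
clf.fit(X, y)
train_acc = clf.score(X, y)
```

At run time, each newly captured frame would be flattened, pushed through the same fitted pipeline, and the predicted label would decide whether to sound the alarm.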
T.K.M College Of Engineering
CHAPTER 5
RESULTS AND DISCUSSIONS
The results obtained are excellent. We got 100% recognition for all five principal emotions, namely Angry, Disgust, Happy, Sad and Surprise, along with Neutral. The test image is given as input and its equivalent recognized image is obtained. The training data set is given in table 5.1.
The rejection and acceptance plots for the databases were plotted for their respective training data as shown in fig 5.1. The si value was varied from 0 to 1 and the plots were made. It was observed that a value of si near one resulted in very good acceptance, though the tradeoff is that even an unknown expression is accepted as the closest known one. The effect of a higher si value may be attributed to the smaller number of samples in the database and to the quality of the face images.
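The tradeoff can be sketched numerically. The acceptance rule below (accept a probe when its normalized nearest-neighbour distance falls below si) is an assumed stand-in for the project's actual matching rule, and the distances are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic normalized distances from probe images to their
# nearest training image (0 = identical, near 1 = very far).
distances = rng.uniform(0.0, 1.0, 100)

def acceptance_rate(si, d):
    # Assumed rule: accept a probe when its distance is below si.
    return float(np.mean(d < si))

si_values = np.linspace(0.0, 1.0, 11)
rates = [acceptance_rate(si, distances) for si in si_values]

# As si approaches 1 nearly every probe is accepted, including
# probes far from any training image: the tradeoff noted above.
```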
5.3.
Success of a practical face recognition system with images grabbed live depends on its robustness against environment and data variations. Specifically, the important issues involved are:
Facial size normalization
Non-frontal view of the face (3D pose, head movement)
Table 5.1 Training data set (expression labels): happy (15 images), anger (8), sad (8), sleepy (18).
CHAPTER 6
CONCLUSION
The facial expression recognition system is implemented on the Microsoft Visual Studio platform with MATLAB support. This particular method using Principal Component Analysis is based on information theory, leading to basing face recognition on a small set of image features that best approximates the set of known face images, without requiring that they correspond to our intuitive notions of facial parts and features.
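The idea of approximating faces with a small feature set can be sketched in plain NumPy: the eigenfaces are the leading eigenvectors of the training set's covariance, and each face is described by its projection onto them (random vectors stand in for face images here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for M flattened training face images of N pixels each.
M, N, k = 20, 256, 10
faces = rng.normal(size=(M, N))

# Center the data and take the top-k right singular vectors:
# these are the eigenfaces (eigenvectors of the covariance matrix).
mean_face = faces.mean(axis=0)
A = faces - mean_face
_, _, Vt = np.linalg.svd(A, full_matrices=False)
eigenfaces = Vt[:k]                       # shape (k, N)

# Each face is reduced to just k coefficients (its projection),
# and is approximated by the corresponding reconstruction.
weights = A @ eigenfaces.T                # shape (M, k)
reconstruction = mean_face + weights @ eigenfaces

# Using all components reconstructs exactly; k < M keeps only
# the directions of greatest variance.
err_k = np.linalg.norm(faces - reconstruction)
err_full = np.linalg.norm(faces - (mean_face + (A @ Vt.T) @ Vt))
```

Classification then compares the k-dimensional weight vectors rather than the full pixel arrays, which is what makes the approach fast.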
The eigenface approach provides a practical solution that is well fitted to the problem of facial expression recognition. It is fast, relatively simple, and works well in a constrained environment. Certain issues remain matters of concern: robustness to changes in lighting, head size, and head orientation, and the tradeoff in the number of eigenfaces necessary for unambiguous classification. On the application side, the project implements a drowsy driver detector successfully.
6.1 FUTURE ENHANCEMENTS
This project is based on the eigenface approach, which gives a maximum accuracy of about 92.5%. Adaptive algorithms may be used to obtain an optimum threshold value. There is scope for future improvement of the algorithm by using neural network techniques, which can give better results compared to the eigenface approach; with their help, accuracy can be improved. Instead of having a constant threshold, it could be made adaptive, depending upon the conditions and the database available, so as to maximise accuracy. The whole software is dependent on the database, and the database is dependent on the resolution of the camera, so if a good-resolution digital or analog camera is used, the results could be improved considerably.
REFERENCES
[8] http://www.face-rec.org
[9] http://www.alglib.net
[10] http://math.fullerton.edu/mathews/n2003/JacobiMethodProg.html