
Unstructured

Human Activity
Detection

Under the guidance of


Mr. Chidanada H
Asst. Prof.

Submitted By:
Aruna L (3VC10CS016)
KavyaShree HK (3VC10CS037)
Nanditha U (3VC10CS056)
Neha D (3VC10CS057)

Abstract
Being able to detect and recognize human activities is essential for
several applications. In this project, we perform detection and recognition
of unstructured human activity in unstructured environments. We use an
external web camera as an input device and compute a set of features
based on human pose as well as on the input image. This project
presents a fuzzy relational approach to recognizing human activities from
their actions.

Introduction:
Being able to infer the activity of a person is essential in many
environments. Real daily activities do not happen in structured settings.
In this project, we are interested in reliably detecting daily activities that a
person performs, such as drinking water, reading, writing, and sleeping.

Existing System:
There is a large body of previous work on human activity recognition. One
common approach is to use space-time features to model points of interest.
Another class of techniques used for activity recognition is the
hidden Markov model (HMM). Early work by Brand et al. utilized coupled
HMMs to recognize two-handed activities.
However, this approach is only capable of classifying, rather than detecting,
activities.

Disadvantages:
Interaction is fully manual.
Each and every process must be done manually.
Activity classification has focused on 2D images only.

Proposed System:
We perform activity detection and recognition using an inexpensive
external web camera.
We propose a new system for the detection of human activities.
Our method of human activity detection allows more accurate
information to be extracted.
We consider the problem of detecting and recognizing activities that
humans perform in unstructured environments.
We use an inexpensive external web camera as an input device and capture
an image at a particular angle in the required environment.

MODULES:
Capture image & Detect activity
Human Activity Detection
Activity Image Capture & Display
Activity Detection from Input Image
Segmentation & Determination
Activity Detection

Capture Image and Detect Activity:


The simplest image format is supported by almost all cameras and
servers: plain JPEG.
In order to detect a person's activity, we make use of an external web
camera as an input device.
With the help of the input device, we capture an image of a person's
activity, such as drinking, reading, sleeping, or writing.
The resulting captured image is in JPEG format and is stored in the
database.

The first stage in processing for many image applications is the
segmentation of objects that differ significantly in color and shape from
the background, to determine their location in the image scene.
Where the camera is stationary, a natural approach is to model the
background and detect foreground objects by differencing the current frame
with the background.
All pixels that are not similar to the modeled background are referred to as
the foreground. After capturing the image, we apply a segmentation
algorithm to the captured image for further processing.
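The background-differencing step above can be sketched as follows. The project itself is implemented in C#; this is an illustrative Python/NumPy sketch, and the grey-scale background model and the fixed difference threshold of 30 are assumptions not stated in the text.

```python
import numpy as np

def foreground_mask(frame, background, threshold=30):
    """Label as foreground every pixel whose absolute grey-level
    difference from the modelled background exceeds `threshold`."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Toy example: a static background with one bright "object" in the frame.
background = np.full((4, 4), 100, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200           # the moving object
mask = foreground_mask(frame, background)
```

Pixels marked True in `mask` form the foreground handed to the segmentation algorithm.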

Human Activity Detection:

Human Activity Image Capture & Display:

Our algorithm must be able to distinguish the desired activities from other
random activities that people perform.
To that end, we collected random activities by asking the subject to act in a
manner unlike any of the previously performed activities.
A random activity contains a sequence of random movements, ranging from
a person standing or sitting to a person lifting a hand to drink water.

Activity Detection from Input Image:

For activity detection, we first convert the RGB image into a binary image.
To convert to binary, we calculate the average of the R, G, and B values for
each pixel; if the average is below 110, we replace the pixel with a black
pixel, otherwise with a white pixel.
By this method, we obtain a binary image from the RGB image.
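The RGB-to-binary conversion described above (average the three channels, threshold at 110) can be sketched as follows. The project is implemented in C#; this Python/NumPy version is purely illustrative.

```python
import numpy as np

def rgb_to_binary(image, threshold=110):
    """Average the R, G, B values per pixel; pixels whose average is
    below `threshold` become black (0), the rest white (255)."""
    avg = image.mean(axis=2)                 # per-pixel RGB average
    return np.where(avg < threshold, 0, 255).astype(np.uint8)

# Toy 1x2 RGB image: one dark pixel (avg 40), one bright pixel (avg 210).
img = np.array([[[30, 40, 50], [200, 210, 220]]], dtype=np.uint8)
binary = rgb_to_binary(img)
```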

Segmentation & Determination:

To locate the action region, this module first represents the image in the
L*a*b color space instead of its conventional red-green-blue (RGB) space.
The L*a*b system has the additional benefit of being a
perceptually uniform color space.
The Fuzzy C-means clustering algorithm that we employ to detect the lip
region is supplied with both color and pixel-position information from the
image.
The use of this module in image segmentation in general, and lip-region
segmentation in particular, is a novel area of research.
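The Fuzzy C-means step can be sketched as below. This is a generic textbook FCM implementation in Python/NumPy, not the authors' C# code; the synthetic two-blob data standing in for "region" versus "background" pixels, the fuzzifier m = 2, and the two-dimensional features (the real system would use L*, a*, b* plus pixel position) are all illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal Fuzzy C-means: returns cluster centres and the fuzzy
    membership matrix U (n_samples x c)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        # weighted centres: each centre pulls on every sample by u^m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distance of every sample to every centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)                # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centres, U

# Two well-separated blobs standing in for region vs background pixels.
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(5, 0.1, (20, 2))])
centres, U = fuzzy_c_means(X)
labels = U.argmax(axis=1)                    # hard assignment per pixel
```

Unlike hard k-means, each pixel keeps a graded membership in every cluster, which is why the method is supplied with both color and position features.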

Activity Detection:
For activity detection in an image, we first find the Bezier curve of the
lip.
We then normalize the width of each Bezier curve to 100 and scale the
height according to its width.
If the person's activity information is available in the database, the
program matches which stored height is nearest to the current height and
outputs the nearest activity.
If the person's activity information is not available in the database, the
program calculates the average height for each activity in the database over
all people and then decides according to the average height.
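The nearest-height lookup described above can be sketched as follows. This is an illustrative Python sketch (the project uses C# and SQL Server); the activity names and height values in `db` are hypothetical examples, not figures from the project.

```python
def nearest_activity(current_height, height_db):
    """Return the activity whose stored curve height is nearest to the
    current normalised height. `height_db` maps activity name -> height
    (measured after the Bezier-curve width is normalised to 100)."""
    return min(height_db, key=lambda a: abs(height_db[a] - current_height))

# Hypothetical per-activity heights stored for one person in the database.
db = {"drinking": 42.0, "reading": 27.0, "writing": 18.0, "sleeping": 9.0}
best = nearest_activity(24.5, db)   # 27.0 is the nearest stored height
```

When the person is not in the database, the same lookup runs against per-activity heights averaged over all people.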

Data flow Diagram:

Main page

Class diagram:

Use case diagram:

Activity diagram:

Software Requirements:
Microsoft .NET framework 2.0
Visual Studio 2005
C# .NET
SQL Server 2005

Hardware Requirements:
Processor        : Intel i3 core
Monitor          : SVGA
RAM              : 128 MB (minimum)
Speed            : 500 MHz
Secondary Device : 20 GB

Snapshots:

Conclusion:
If the person's activity information is available in the database, the
program matches which stored height is nearest to the current height and
outputs the nearest activity. If the person's activity information is not
available in the database, the program calculates the average height for
each activity in the database over all people and then decides according to
the average height.

Thank You
