Abstract

In this paper, we present a novel method for human action recognition from an arbitrary-view image sequence that uses the Cartesian components of optical flow velocity and human body silhouette feature vectors. We use principal component analysis (PCA) to reduce the high-dimensional silhouette feature space to a lower-dimensional feature space. The action region in an image frame is represented by a Q-dimensional optical flow feature vector and an R-dimensional silhouette feature vector. We represent each action using a set of hidden Markov models, and we model each action for any viewing direction using the combined (Q + R)-dimensional features at each instant of time. We evaluate the proposed method on the KU gesture database and on manually captured data. Different actions from any viewing direction are correctly classified by our method, which indicates the robustness of our view-independent approach.

1 Introduction

This contribution addresses human action recognition from an arbitrary view of an image sequence, under the assumption that each image sequence contains only one person performing a single activity. Recognition of human actions from image sequences is a popular topic in the computer vision community, with applications in video surveillance and monitoring, human-computer interaction, etc. Since no rigid syntax or well-defined structure is available for human action recognition, it remains a challenging and sophisticated task. Several human action recognition methods have been proposed in the past decades; a detailed and excellent survey can be found in [1]. For recognizing actions, researchers use either explicit human body shape or motion. For example, in [7] and [8], the authors use 2D and 3D shape features for recognizing actions. Motion-based action recognition has also been performed by several researchers; for example, in [9], [10], and [13], the authors use motion features for action recognition. Most of the above techniques depend on the viewing direction, and testing an action using multi-view motion learning remains an unsolved problem. In [11] and [12], the authors presented view-invariant recognition of actions.

In this paper, we recognize human action from an arbitrary view of an image sequence using optical flow and human body shape features. To this end, human action models are created in each viewing direction for some specific actions using hidden Markov models (HMMs). We extract the optical flow feature and a new body silhouette feature, and the feature vectors are then converted into symbols. Then, we learn the features and build HMMs in different viewing directions. Classification is finally achieved by feeding a given (test) sequence in any viewing direction to all the learned HMMs and employing a likelihood measure to declare the action performed in the image sequence, which makes our system view invariant. For training and testing actions, we use the KU gesture database [6] and manually captured data.

This paper is organized as follows: Section 2 describes feature extraction from optical flow and the human body silhouette. Section 3 briefly describes the HMMs for action modeling and recognition. Experimental results and analysis of the selected approaches are presented in Section 4. Finally, conclusions are drawn in Section 5.

2 Feature Extraction

In this section, we first extract the foreground and detect the action region from the image sequence; second, we extract the human body shape feature and the optical flow feature in different viewing directions.

2.1 Foreground extraction

To extract the foreground, we use a simplification of the background subtraction algorithm used in [2], in which the color value of each background pixel is modeled by using a
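The per-pixel background model described above can be sketched as follows. Since the precise model taken from [2] is not fully specified in this excerpt, this is a minimal illustrative version under an assumed design: each background pixel is summarized by its mean color over a set of background frames, and a pixel in a new frame is labeled foreground when its Euclidean color distance from that mean exceeds a threshold. The function names and the threshold value are hypothetical, not from the paper.

```python
# Hypothetical sketch of per-pixel background modeling for foreground
# extraction. Frames are 2-D grids (lists of rows) of (R, G, B) tuples.

def build_background_model(background_frames):
    """Mean color per pixel over a list of background frames."""
    n = len(background_frames)
    h = len(background_frames[0])
    w = len(background_frames[0][0])
    model = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Average each color channel independently across the frames.
            r = sum(f[y][x][0] for f in background_frames) / n
            g = sum(f[y][x][1] for f in background_frames) / n
            b = sum(f[y][x][2] for f in background_frames) / n
            model[y][x] = (r, g, b)
    return model

def extract_foreground(frame, model, threshold=30.0):
    """Binary mask: 1 where a pixel deviates from the background model."""
    mask = []
    for y, row in enumerate(frame):
        mask_row = []
        for x, (r, g, b) in enumerate(row):
            mr, mg, mb = model[y][x]
            # Euclidean distance in RGB space between pixel and model.
            dist = ((r - mr) ** 2 + (g - mg) ** 2 + (b - mb) ** 2) ** 0.5
            mask_row.append(1 if dist > threshold else 0)
        mask.append(mask_row)
    return mask
```

The resulting binary mask is the silhouette from which the action region and shape features would subsequently be derived.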