
Implementation of Human Gait Identification

By Rischan Mafrur
Advisor: Deokjai Choi
December 23, 2014

About Our Dataset

Experiment
  Preprocessing
    Linear Interpolation
    DB6 Level 1~3 De-noising (Noise Removal)
    Gait Segmentation
  Feature Extraction
  Identification

Linear Interpolation
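Linear interpolation here serves to resample the accelerometer stream, whose raw timestamps are irregular, onto a uniform time grid. A minimal sketch with `np.interp` (the function name, sample values, and the 100 Hz target rate are illustrative assumptions, not taken from the experiment):

```python
import numpy as np

def resample_uniform(timestamps, values, rate_hz=100):
    """Resample an irregularly sampled signal onto a uniform time grid
    using linear interpolation (np.interp)."""
    t0, t1 = timestamps[0], timestamps[-1]
    uniform_t = np.arange(t0, t1, 1.0 / rate_hz)
    return uniform_t, np.interp(uniform_t, timestamps, values)

# Irregular accelerometer samples (hypothetical values)
t = np.array([0.00, 0.013, 0.019, 0.031, 0.040])
z = np.array([9.8, 9.6, 10.1, 9.9, 9.7])
tu, zu = resample_uniform(t, z, rate_hz=100)
```

After this step every signal has a constant sampling interval, which the wavelet de-noising and FFT stages below implicitly assume.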

De-Noising a Signal with Multilevel Wavelet Decomposition

[Figure: original signal and its reconstructed approximations at levels 1-4.
Significant de-noising occurs with the level-4 approximation coefficients
(Daubechies wavelets).]
Before and After Linear Interpolation and DB6 noise reduction
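The DB6 de-noising step can be sketched with PyWavelets: decompose the signal with the `db6` wavelet, zero the detail (high-frequency) coefficients, and reconstruct from the approximation. This assumes the `pywt` package; the level and test signal are illustrative:

```python
import numpy as np
import pywt

def db6_denoise(signal, level=3):
    """Keep only the approximation coefficients of a 'db6' wavelet
    decomposition; the detail (high-frequency) coefficients, which
    carry most of the noise, are zeroed before reconstruction."""
    coeffs = pywt.wavedec(signal, 'db6', level=level)
    coeffs[1:] = [np.zeros_like(d) for d in coeffs[1:]]
    return pywt.waverec(coeffs, 'db6')[:len(signal)]
```

Raising `level` removes more frequency bands, trading noise suppression against distortion of the gait waveform, which matches the level 1~3 comparison on the previous slide.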

Gait Segmentation
Use the Z-axis value of the accelerometer to define the gait cycle.
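One simple way to segment gait cycles from the Z-axis signal is to detect prominent local maxima (a stand-in for heel strikes) and cut the signal between consecutive peaks. The threshold rule and distance parameter below are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

def segment_gait_cycles(z, min_distance=30, threshold=None):
    """Split the Z-axis acceleration into gait cycles.
    A cycle boundary is a local maximum above `threshold`, at least
    `min_distance` samples after the previous boundary."""
    if threshold is None:
        threshold = z.mean() + z.std()
    peaks = []
    for i in range(1, len(z) - 1):
        if z[i] > threshold and z[i] >= z[i - 1] and z[i] > z[i + 1]:
            if not peaks or i - peaks[-1] >= min_distance:
                peaks.append(i)
    # Each consecutive pair of peaks bounds one gait cycle
    return [z[a:b] for a, b in zip(peaks, peaks[1:])]
```

On a periodic walking signal this yields one segment per stride, which is the unit the feature extraction below operates on.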

Feature Extraction
Time domain features:
1. Mean from each gait signal (X, Y, Z, M signals)
2. Average maximum acceleration (X, Y, Z, M signals)
3. Average minimum acceleration (X, Y, Z, M signals)
4. Average absolute difference (X, Y, Z, M signals)
5. Standard deviation
6. RMS (Root Mean Square)

Frequency domain features:
- The first 40 FFT coefficients form a feature vector
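The six time-domain features over the four signals (X, Y, Z, and the magnitude M) can be computed per subject as below. The slides do not define "average absolute difference" precisely; here it is interpreted as mean absolute deviation from the mean, which is an assumption:

```python
import numpy as np

def time_domain_features(x, y, z):
    """24 time-domain features: mean, average max, average min,
    average absolute difference, standard deviation, and RMS for
    each of the X, Y, Z and magnitude (M) signals. Each input is a
    list of per-cycle segments so max/min can be averaged across
    gait cycles."""
    feats = []
    m = [np.sqrt(xs**2 + ys**2 + zs**2) for xs, ys, zs in zip(x, y, z)]
    for segments in (x, y, z, m):
        full = np.concatenate(segments)
        feats += [
            full.mean(),                           # Mean
            np.mean([s.max() for s in segments]),  # Avg max acceleration
            np.mean([s.min() for s in segments]),  # Avg min acceleration
            np.mean(np.abs(full - full.mean())),   # Avg absolute difference
            full.std(),                            # Standard deviation
            np.sqrt(np.mean(full**2)),             # RMS
        ]
    return np.array(feats)
```

6 features x 4 signals gives the 24-dimensional time-domain vector used in the results.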

Features
Time Domain Features
Mean, Max, Min, Sd, Abs, and Rms (6 features) over 4 signals (X, Y, Z, and M):
24 time-domain features in total.

FFT Features
The FFT features are the first 40 FFT coefficients from each gait signal. In this
experiment we use 4 accelerometer signals (X, Y, Z, and M), so there are
160 FFT features in total.

All Features
Combining the time-domain and FFT features gives 24 + 160 = 184 features in total.
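The FFT features can be sketched as the magnitudes of the first 40 FFT coefficients of each gait signal (taking magnitudes is an assumption; the slides only say "coefficients"). The 256-sample test signal is hypothetical:

```python
import numpy as np

def fft_features(signal, n_coeffs=40):
    """Magnitudes of the first `n_coeffs` FFT coefficients of a gait
    signal. With 4 signals (X, Y, Z, M) this yields 160 FFT features."""
    spectrum = np.abs(np.fft.fft(signal))
    return spectrum[:n_coeffs]

# One feature vector per signal; hypothetical 256-sample gait signal
sig = np.sin(2 * np.pi * np.arange(256) / 32)
feats = fft_features(sig)
```

Concatenating the four 40-coefficient vectors gives the 160-dimensional FFT feature vector.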

---------------------------------------------------------------------------------------------------------------------------------

Result

Time Domain Features

Features after SFFS: "MeanX", "AbsX", "MeanY", "MinY", "MeanZ", "SdZ"
Best SVM parameters: gamma = 0.5, cost = 10

                  Time Loading   Time Prediction   Accuracy
Original          0.48           0.11              0.7614
SFFS              0.37           0.02              0.8267

FFT Features

Features after SFFS: "FFT1", "FFT13", "FFT71", "FFT81", "FFT82", "FFT121"
Best SVM parameters: cost = 1, gamma = 1

                  Time Loading   Time Prediction   Accuracy
Original          1.73           0.45              0.4821
SFFS              0.87           0.03              0.7178

SFS (Sequential Forward Selection)
SFFS (Sequential Floating Forward Selection)
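The core of SFS can be sketched in a few lines: greedily add the feature that most improves a scoring function. SFFS extends this with a "floating" backward step that can drop a previously chosen feature when doing so improves the score (omitted here for brevity). The least-squares scoring function and synthetic data are illustrative, not the SVM-based criterion used in the experiment:

```python
import numpy as np

def sfs(X, y, score_fn, k):
    """Greedy Sequential Forward Selection: repeatedly add the single
    feature that most improves `score_fn` on the selected subset."""
    selected = []
    remaining = list(range(X.shape[1]))
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in remaining:
            s = score_fn(X[:, selected + [j]], y)
            if s > best_score:
                best_j, best_score = j, s
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Toy criterion: negative squared error of a least-squares fit (hypothetical)
def score_fn(Xs, y):
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return -np.sum((Xs @ coef - y) ** 2)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
y = 3 * X[:, 1] - 2 * X[:, 4]          # only features 1 and 4 matter
picked = sfs(X, y, score_fn, k=2)
```

This greedy search is why the selected subsets above ("MeanX", "AbsX", ...) are small: selection stops once extra features no longer improve the criterion.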

All Features

Features after SFFS: "MeanX", "AbsX", "MeanY", "MinY", "MeanZ", "SdZ"
Best SVM parameters: gamma = 0.5, cost = 10

                  Time Loading   Time Prediction   Accuracy
Original          2.45           0.55              0.4155
SFFS              0.37           0.02              0.8267

Naïve Bayes and Random Forest

                  SVM SFFS   Naïve Bayes   Naïve Bayes with SFFS
Time Loading      0.37       91.36         4.09
Time Prediction   0.02       8.58          0.31
Accuracy          0.8267     0.5297        0.6287

                  SVM SFFS   Random Forest   Random Forest with SFFS
Time Loading      0.37       2.53            0.62
Time Prediction   0.02       0.36            0.23
Accuracy          0.8267     0.848           0.7966

Conclusion
If we want to work with sensor data, we have to think
carefully about the sampling rate.
Feature selection is a very useful method; we can use it
to find the best subset of features.
More features do not guarantee better accuracy, but they
do take more time to load and predict.

Thank you,
