
Sports Technology, 2013
Vol. 6, No. 2, 78-85, http://dx.doi.org/10.1080/19346182.2013.819008

RESEARCH ARTICLE

The potential of the Microsoft Kinect in sports analysis and biomechanics

SIMON CHOPPIN & JONATHAN WHEAT


Sheffield Hallam University, Centre for Sports Engineering Research, Sheffield, UK
(Received 16 November 2012; accepted 13 June 2013)

Abstract
The objective of this study was to assess the suitability of the Microsoft Kinect depth camera as a tool in segment scanning, segment tracking and player tracking. A mannequin was scanned with the Kinect and a laser scanner. The geometries were truncated to create torso segments and compared. Separate shoulder abduction (-100° to 50°) and flexion motions (0° to 100°) were recorded by the Kinect (using free and commercial software) and a Motion Analysis Corporation (MAC) system. Segment angles were compared. A participant's centre of mass (COM) was tracked over a 6 × 3 m floor area using the Kinect and a MAC system and compared. Mean errors with uncertainty of the mass, COM position and principal moments of inertia were -1.9 ± 1.6%, 0.5 ± 0.4% and 3 ± 2.6%, respectively. The commercial software gave the highest accuracy: the maximum and root mean square errors (RMSEs) were 13.85° and 7.59° in abduction and 21.57° and 12.00° in flexion. RMSEs in X, Y and Z COM positions were 0.12, 0.14 and 0.08 m, respectively, although vertical position (Y) was subject to a large systematic bias of 405 mm. The Kinect's low cost and depth camera are an advantage for sports biomechanics and motion analysis. Although segment tracking accuracy is low, the Kinect could potentially be used in coaching and education for all three application areas in this study.

Keywords: Kinect, motion capture, scanning, sports analysis, player tracking

Introduction
The Kinect is a motion sensing device for use in
home entertainment, which captures separate colour
and depth data at 30 Hz, at a resolution of 640 × 480 pixels. The technology achieves this through the
use of two cameras (colour and monochrome) and an
infra-red (IR) projector. A pattern generated by the
projector is imaged by the monochrome camera.
This projected pattern is distorted (from a calibrated
datum) by objects in the scene. The disparities and
deformation are measured and translated into depth
information. The method is thought to be related to
structured light techniques (Scharstein & Szeliski,
2003), although specific details of the methods have
not been published.
Cameras capable of measuring depth are not new; they already exist in the form of laser-based time-of-flight cameras, structured light systems and camera-based triangulation systems. Depth cameras' traditional domain has been accurate scanning applications, machine vision and robotics, with costs of around £80,000. The Kinect is novel because it costs less than £200, opening up the use
of depth cameras to researchers in a wide range of
disciplines.
Within days of its release, researchers and hackers
were using the Kinect for a number of different
applications. The machine vision and robotics
community took a particular interest, but applications have also been developed in computer
animation, enhanced learning and artistic expression.
As a research tool, the Kinect is controlled and
accessed through a computer and driver software.
The majority of software applications to date (written
for the PC) have been custom written for a specific
application and are usually made freely available.
The disadvantage to this is that if existing software
does not meet a specific requirement, the only option
is to program a solution yourself.

Correspondence: S. Choppin, Sheffield Hallam University, Centre for Sports Engineering Research, Sheffield, UK. E-mail: s.choppin@shu.ac.uk
© 2013 Taylor & Francis

The strength of the Kinect is its versatility; through
its depth camera it is able to capture point cloud data
at 30 Hz, effectively scanning a surface as it does so.
Proprietary algorithms developed by PrimeSense and
Microsoft are not only able to use the depth cloud to
recognise human users within the field of view but also
to calculate joint positions and segment angles for the
purposes of gesture recognition and command. The
applications of the Kinect to sports analysis and
biomechanics seem obvious. The ability to scan has
implications for fast body scanning to estimate body
segment inertia parameters using approaches similar
to previous studies which have used scanning
techniques to estimate these parameters (Norton,
Donaldson, & Dekker, 2002; Sheets, Corazza, &
Andriacchi, 2010). In-built and third-party algorithms
able to automatically track skeleton motion could also
be used for simple (and possibly advanced) movement
analysis. Automated object tracking (made easier
through depth measurement) has applications in
notational performance analysis for tactical and
coaching purposes. Metrics such as field/court
position, distance moved and velocity profile can
easily be tracked over the duration of play. The aim of
this paper was to explore the accuracy of the Kinect in
three areas: body scanning and anthropometry,
segment tracking for motion analysis and image
segmentation for coaching and notational analysis.
Methods
In accordance with the Declaration of Helsinki, all
procedures requiring ethical clearance were approved
by the Faculty of Health and Wellbeing's research
ethics committee, Sheffield Hallam University, UK.
Segment scanning
The measurement of body segment inertia parameters
is useful for many applications in sport and exercise
biomechanics (inverse dynamics, for example). However, obtaining accurate values is difficult. Medical imaging techniques (e.g. dual-energy X-ray absorptiometry; Durkin & Dowling, 2006) are costly; regression techniques (e.g. Dempster, 1955; Zatsiorsky & Seluyanov, 1983) are quick but limited and inaccurate; and geometric techniques can be time-consuming and prone to error in certain segments (Wicke
& Dumas, 2010). Wicke and Dumas (2010) recently
suggested that structured light techniques could be
used to obtain body segment parameters by more
accurately measuring their surface and volume.
The depth information given by the Microsoft
Kinect is expressed as point cloud data in a coordinate system fixed to the monochrome camera.
The limited field of view of the camera (approximately 60°) means that only the surface facing the Kinect is captured; to capture a complete body segment, either the camera must move around the segment or the segment must move within the field of view. Scanning using multiple Kinects is possible, but measures must be taken to limit the interference of several IR projections; this causes degradation of
the depth data. A full scan with the Kinect results in a
number of point clouds which must be aligned
correctly in order to create a closed volume. This
point cloud registration is difficult to solve, and
many different approaches are used. Microsoft has
presented a fast algorithm which uses features within
the point cloud to locate and rotate separate
viewpoints (Izadi et al., 2011). Without access to
point registration algorithms, we adopted a hardware-based solution, described below.
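As an illustration of the point cloud data referred to above, the following minimal sketch back-projects a depth image into the camera-fixed coordinate system using a standard pinhole model. It is not the authors' code; the focal length and principal point are nominal placeholder values rather than calibrated Kinect intrinsics.

    import numpy as np

    def depth_to_point_cloud(depth_m, fx=580.0, fy=580.0, cx=319.5, cy=239.5):
        """Back-project a 640 x 480 depth image (in metres) to an N x 3 point cloud.

        fx, fy, cx, cy are nominal placeholder intrinsics, not calibrated Kinect
        parameters. Points are expressed in a coordinate system fixed to the
        depth (monochrome) camera, as described in the text.
        """
        h, w = depth_m.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel column/row indices
        z = depth_m
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]  # drop pixels with no valid depth reading

    # Example: a synthetic depth frame of a flat surface 2 m from the camera
    cloud = depth_to_point_cloud(np.full((480, 640), 2.0))
    print(cloud.shape)  # (307200, 3)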
To assess the accuracy of the Kinect as a scanning device, the lumbar segment of a Choking Charlie mannequin (Leardal, UK; width 415 mm, depth 170 mm, height 770 mm) was scanned using the Kinect and compared with a scan obtained using a ModelMaker D100 (Nikon Metrology, Belgium) non-contact laser scanner. The laser scanner provided a gold-standard estimate of the volume and, after assuming a uniform density of 1000 kg/m³, the inertial parameters of the mannequin.
A Polhemus Liberty electromagnetic tracking system (Polhemus, USA) was attached to the mannequin to determine its position and orientation in relation to the Kinect. To align the coordinate systems of the Kinect and Polhemus, a global coordinate system common to both systems was defined using three intersection points on a checkerboard. The mannequin was rotated and four separate scans were taken to obtain a complete geometry; each point cloud was transformed into the global coordinate
system. Both scans (from the Kinect and ModelMaker
laser scanner) were truncated with horizontal planes
defined by three points at the upper and lower
extremities of the segment (identified using the
Polhemus stylus and a pointer on the ModelMaker)
to represent an anatomical lumbar segment. The
complete point cloud from the Kinect-based system
was post-processed using 1 cm uniform sub-sampling
(chosen by visual inspection as an appropriate
compromise between the number of vertices and
mesh detail) and mesh fitting (a comparison of the
Kinect mesh and ModelMaker mesh is shown in
Figure 1). Subsequently, after assuming a nominal uniform density, the inertia parameters of the lumbar segment (mass, COM location and moments of inertia, I) were calculated using ProEngineer (PTC, Needham, MA, USA). One scan from the ModelMaker laser scanner was analysed and served as ground truth. Twenty-five complete scans from the Kinect-based system were analysed.
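The post-processing steps described above (transformation into the global coordinate system, truncation and 1 cm uniform sub-sampling) can be sketched as follows. This is an illustrative outline only: the function names are ours, the truncation is simplified to horizontal planes at placeholder heights along the global Y axis, and the mesh fitting and inertia calculation performed in ProEngineer are not reproduced.

    import numpy as np

    def to_global(points, R, t):
        """Apply a rigid transform (rotation R, translation t) to an N x 3 point cloud."""
        return points @ R.T + t

    def truncate_between_planes(points, y_lower, y_upper):
        """Keep points between two horizontal planes (heights along the global Y axis)."""
        y = points[:, 1]
        return points[(y >= y_lower) & (y <= y_upper)]

    def subsample_uniform(points, cell=0.01):
        """Uniform sub-sampling on a 1 cm grid: keep one point per occupied cell."""
        keys = np.floor(points / cell).astype(np.int64)
        _, idx = np.unique(keys, axis=0, return_index=True)
        return points[np.sort(idx)]

    # Illustrative use: merge four scans expressed in the global frame, truncate to
    # the segment between two (placeholder) plane heights, then sub-sample at 1 cm.
    scans = [(np.random.rand(1000, 3), np.eye(3), np.zeros(3)) for _ in range(4)]
    merged = np.vstack([to_global(p, R, t) for p, R, t in scans])
    segment = truncate_between_planes(merged, y_lower=0.40, y_upper=0.80)
    segment = subsample_uniform(segment, cell=0.01)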

Figure 1. The 3D meshes of the scanned torso segment. The left-hand mesh shows the geometry obtained by the Kinect, and the right-hand mesh shows the geometry obtained by the ModelMaker laser scanner.

Segment tracking
The position and orientation of body segments are important when assessing performance, injury risk and joint loading. Methods of obtaining joint information range from simple, single-camera two-dimensional methods to sophisticated methods using multi-camera calibrated volumes or spatially sensitive sensors.
Of the three available Kinect drivers, only OpenNI/NITE (www.openni.org) and Kinect for Windows have segment tracking capabilities, and the two approach the problem in significantly different ways.
The PrimeSense (NITE) software registers an initial pose, from which a skeleton tracking algorithm locks onto the participant, allowing tracking in subsequent frames. Microsoft invested considerable resource in developing a method that works in a different manner, requiring only a single frame to capture body pose. This was achieved using machine learning techniques with a large data-set of real and synthesised body position data (Shotton et al., 2013). It is also important to note that, at the time of testing, the OpenNI tracking algorithm gave joint positions and segment orientations, whereas Microsoft's gave only joint positions.
The objectives of a biomechanics analysis are far
removed from those of the typical mass consumer.
Calibration poses can be tolerated and real-time
processing can be sacrificed if accuracy of tracking is
increased. IPI Soft (www.ipisoft.com), a commercial
motion capture package, records the colour and depth
streams from the Kinect and analyses them post-capture. Tracking is not real-time, but the complexity
of the skeleton is increased (e.g. shoulder and feet
segments are included), making this a popular choice
for users in the computer animation community. The
software has also recently added support for dual
Kinect recording, with the claim of increased
accuracy due to a more complete point cloud.
To assess the accuracy of skeleton tracking
methods, the freely available NITE algorithms were
compared with IPI Soft. A 12-camera Motion Analysis Corporation (MAC) system was used to record the movements of a participant simultaneously with Kinect data. Two participants had reflective
markers added to their torso (sternal notch, xiphoid process, 7th cervical vertebra and 8th thoracic vertebra) and right upper arm (anterior shoulder, posterior shoulder, medial and lateral epicondyle of the humerus). They were asked to perform shoulder abduction-adduction and flexion-extension motions. Segment angles were calculated as a projection of the upper arm's position onto the appropriate plane of the torso segment's axes. In shoulder abduction, a vertical plane running from left to right was used. In shoulder flexion, a vertical plane running fore-aft was used. Due to practical reasons (availability of participants and time available for the initial study), separate participants were used with
each software system. Both participants were male, of
similar age, stature, mass and somatotype, and
performed each action at a low, self-determined
speed. The first participant performed the movements, with the NITE software used to record the full skeleton position throughout the motion. The second participant repeated the same movements, and IPI Soft
was used to record their motion. IPI Soft was used in a
dual Kinect configuration. The Kinect and MAC
systems were synchronised using an analog signal
available to both systems. IPI Soft provided full
functionality for exporting the requisite data (in the
form of BVH files, described by Menache (2000)),
and custom software was written to capture these data
when using the NITE tracking algorithms. The magnitude of the difference between the two measurement systems was assessed using root mean square errors (RMSEs). The nature of the difference between the two measurement systems (MAC and Kinect) was assessed using ordinary least products (OLP) regression techniques (Ludbrook, 1997), which gave measurements of systematic and proportional bias with 95% confidence intervals (CIs). The correlation coefficient was also calculated in each case.
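For readers unfamiliar with OLP regression, the sketch below computes the quantities reported in this paper (the systematic bias a and proportional bias b of a least-products, i.e. geometric mean, fit, together with the correlation coefficient and RMSE), following Ludbrook (1997). The confidence intervals quoted in the tables are not reproduced, and the function and variable names are ours, not part of the original analysis code.

    import numpy as np

    def olp_regression(x, y):
        """Ordinary least products (geometric mean) regression of y on x.

        Returns the systematic bias a (intercept) and proportional bias b (slope)
        in the sense of Ludbrook (1997): b = sign(r) * SD(y)/SD(x), a = mean(y) - b*mean(x).
        Confidence intervals are not computed in this sketch.
        """
        x, y = np.asarray(x, float), np.asarray(y, float)
        r = np.corrcoef(x, y)[0, 1]
        b = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
        a = y.mean() - b * x.mean()
        return a, b, r

    def rmse(x, y):
        """Root mean square difference between two measurement series."""
        return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(x)) ** 2)))

    # Illustrative use with synthetic angle traces (degrees):
    mac = np.linspace(0, 100, 200)                           # reference system
    kinect = 1.1 * mac + 4.0 + np.random.normal(0, 2, 200)   # biased, noisy comparison
    a, b, r = olp_regression(mac, kinect)
    print(f"a = {a:.2f} deg, b = {b:.2f}, r = {r:.3f}, RMSE = {rmse(mac, kinect):.2f} deg")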
Player tracking
Notational analysis is a useful technique for tactical and
coaching analysis; player position can be tracked
during a sporting event to obtain metrics such as
distance travelled, velocity profile and time spent in
specific regions of the pitch/court (Hughes & Franks,
2004). However, notational analysis techniques can be
particularly labour intensive when manual processing
is used. In some cases, image processing techniques
can be used to automatically obtain player position by
threshold and differencing techniques (Mauthner,
Koch, Tilp, & Bischof, 2007), but in certain
circumstances, poor visual conditions (background and light levels) can make it difficult to isolate a player
from an image. An image shaded according to depth provides a more robust method of segmenting objects due to a reliable, definite edge (Mirante, Georgiev, & Gotchev, 2011), as illustrated in Figure 2.

Figure 2. Object segmentation from images is easier and more reliable when depth information is used instead of colour. The image on the left was taken with the standard colour camera in the Kinect (image converted to grey scale). The image on the right was constructed from the depth information returned by the Kinect. The plots below each image show the intensity of the values taken along each image at the position of the white line.
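A minimal sketch of depth-based segmentation of this kind is given below; the depth band and function names are illustrative assumptions rather than values used in this study.

    import numpy as np

    def depth_mask(depth_m, near=1.0, far=4.0):
        """Binary mask of pixels whose depth lies in a band of interest.

        Because the player/background boundary is a step change in depth, the mask
        edge stays sharp even under poor lighting; near/far are illustrative values.
        """
        return (depth_m > near) & (depth_m < far)

    def row_profile(image, row):
        """Intensity or depth profile along one image row (cf. the line plots in
        Figure 2): a depth profile shows a clean step at the object boundary where
        a grey-level profile may not."""
        return image[row, :]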
The combination of colour and depth cameras
within the Kinect enhances the functionality of the
device. Player segmentation is possible through
streamed depth data, whereas the colour camera can
help with player identification (using clothing colour
for example). The main limitation of the Kinect with
regard to this type of analysis is the size of the available
tracking space. Although the depth data are available
to around 10 m, automatic object recognition through
OpenNI is limited to 4 m. However, fast and efficient
object segmentation algorithms are available to
segment images according to depth and colour
(Mirante et al., 2011); these could be used to segment
images at depths of over 4 m. Another aspect of the
depth information given by the Kinect is that the
resolution decreases with distance. This is most likely
a result of the structured light system having to
operate over a significant range (0.6 to 10 m).
As a feasibility study, the Kinect was used to capture the movement of a participant performing mock badminton movements within a 6 × 3 m playing area. The motion was recorded with a 12-camera MAC system and the Kinect. For the MAC, a single reflective marker was attached to the sacrum of the participant, which was assumed approximately equal to the participant's COM; pilot testing, in which agreement between whole-body COM and the sacrum position was assessed, indicated that this assumption was appropriate. For the Kinect, the movements were recorded
as both colour and depth images. A custom algorithm
was written to segment the participant within the depth
images and estimate the position of their COM (edge
detection to segment the individual and centroid
calculation of the resulting segmented image). First, a
coordinate system fixed to the floor was defined by
performing principal components analysis (PCA, see
Daffertshofer, Lamoth, Meijer, and Beek (2004) for a
description) on the points in a manually selected region
of the floor. As the floor points were three-dimensional (3D), the PCA returned three orthogonal principal components, the axes of which defined the floor-fixed coordinate system. Principal components 1 and 2 lay within the floor plane, and principal component 3 was normal to the floor, defining the X-axis, Z-axis and Y-axis, respectively. The floor-fixed coordinate system was then aligned with the court markings by translation in the established floor plane and rotation around the Y-axis such that the X-axis lay along the court's width and the Z-axis along the court's length. This was achieved
by projecting representations of the unit vectors of the
floor-fixed coordinate system onto the video image of
the Kinect, facilitating alignment through visual
inspection. 3D points returned by the Kinect that lay within a bounding box coincident with the playing area were assumed to represent the participant; the minimum height of the bounding box was set to 400 mm above the floor to avoid the influence of noisy points from the floor plane at large distances from the
Kinect. Each 3D point was assigned an equal, nominal
mass. The COM of these points was assumed to
represent the COM of the participant. Agreement
between the COM estimates determined using the
MAC and Kinect data was assessed by calculating the
RMS difference. The nature of the difference between
the two measurement systems was also assessed using
OLP regression techniques (Ludbrook, 1997) and by
calculating the correlation coefficient.
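The main steps of this custom algorithm can be sketched as follows, assuming the Kinect points arrive as N x 3 arrays. The sketch is illustrative rather than the authors' implementation: the axis sign convention, the assignment of the 3 m and 6 m extents to width and length, and the omission of the manual court-alignment step are all assumptions.

    import numpy as np

    def floor_fixed_axes(floor_points):
        """PCA on a manually selected patch of floor: components 1 and 2 span the
        floor plane (X and Z), component 3 is the floor normal (Y). Returns a 3 x 3
        matrix whose rows are the X, Y and Z axes, plus the frame origin."""
        origin = floor_points.mean(axis=0)
        _, _, vt = np.linalg.svd(floor_points - origin, full_matrices=False)
        x_axis, z_axis, y_axis = vt[0], vt[1], vt[2]   # rows ordered by variance
        if y_axis[1] < 0:          # force the floor normal to point upwards (assumed sign)
            y_axis = -y_axis
        return np.vstack([x_axis, y_axis, z_axis]), origin

    def com_from_depth_points(points, R, origin, width=3.0, length=6.0, min_height=0.4):
        """Express points in the floor-fixed frame, keep those inside the playing-area
        bounding box and at least 400 mm above the floor, and return their centroid
        as the COM estimate (equal nominal mass per point, as in the text)."""
        local = (points - origin) @ R.T           # coordinates in the floor-fixed axes
        x, y, z = local[:, 0], local[:, 1], local[:, 2]
        keep = (np.abs(x) <= width / 2) & (np.abs(z) <= length / 2) & (y >= min_height)
        return local[keep].mean(axis=0) if keep.any() else None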
Results
Segment scanning
On comparing the inertial properties of torso segments created by the Kinect and laser scanner (Figure 1), the percentage errors in mass, COM, Ixx, Iyy and Izz were -1.9 ± 1.6%, 0.5 ± 0.4%, -3.2 ± 2.7%, 2.8 ± 2.3% and -3.0 ± 2.8%, respectively. Errors are presented as the average error for 25 separate scans with the standard deviation as the stated uncertainty. The largest average errors and uncertainties were observed in the principal moments of inertia, and the smallest average error and uncertainty were observed in the position of the COM.

Segment tracking
Figures 3 and 4 show movement traces for the flexion and abduction movements for the IPI Soft and NITE tracking algorithms. For segment angles captured using NITE, the maximum and RMSEs were 44.07° and 20.28° for the abduction motion and 36.15° and 20.11° for the flexion motion. For segment angles captured using IPI Soft, the maximum and RMSEs were 13.85° and 7.59° for the abduction motion and 19.45° and 12.15° for the flexion motion. The OLP regression measured systematic bias as 4.41° and 3.49° for shoulder flexion using the NITE and IPI Soft algorithms, respectively. Systematic bias was significantly higher (exceeding the 95% CIs) for shoulder abduction: 10.7° and 8.61° for the NITE and IPI Soft algorithms, respectively. Table I shows the full set of results from the OLP analysis.

Player tracking
A comparison between the calculated COM position from the Kinect and MAC systems in the X, Y and Z directions of the 6 × 3 m playing area gave RMSEs in the X, Y and Z directions of 0.12, 0.14 and 0.08 m, respectively (Figure 5). The OLP analysis (Table II) revealed that the quality of tracking in the vertical (Y) direction was considerably worse than along the length (Z) and width (X) of the court. Systematic bias in the Y direction was 405 mm, compared with -74.7 and -161 mm in the X and Z directions, respectively. A poor correlation coefficient of 0.624 also highlights the poor agreement between MAC and Kinect in the Y direction. Table II contains the complete results of the OLP analysis.

Figure 3. A comparison between MAC and Kinect shoulder flexion segment angles. IPI Soft segment tracking is shown on the left and NITE on the right.

Figure 4. A comparison between MAC and Kinect shoulder abduction segment angles. IPI Soft segment tracking is shown on the left and NITE on the right.

Discussion
This paper has explored the viability of the Kinect for use in three distinct sports analysis themes: (1) segment scanning for measurement of inertial parameters, (2) segment tracking and orientation measurement and (3) participant tracking for notational and tactical analysis. It is important to note that the analyses carried out in this paper can be repeated after purchasing a Kinect (less than £200), although custom software is required to access and process the data appropriately.
As a scanner the Kinect performed well, with
errors lower than those reported by Wicke and
Dumas (2010) for geometric models, for example.
The analysis was limited in this study as it assumed a
constant density in all cases, but this approach tests
the volumetric equivalence of the scan given by the
Kinect, and more complex density profiles could be
applied for greater anatomical realism if required.
The methodology of obtaining segment geometry
exhibited in this paper relies on a Polhemus or
equivalent system capable of tracking 3D position
and orientation, which adds to the complexity and
cost of the process. However, combining the Kinect
with such a system allows scanning directly into an
anatomical coordinate system and segmentation to
be carried out on-the-fly. Furthermore, analyses
requiring body segment inertia parameters are rarely
conducted without this type of system (e.g. optoelectronic system).
Segment tracking is an integral aspect of the
Kinect, and various methods are available to give
segment position and orientation. This paper
compared the Kinect and MAC systems during two
simple shoulder movements. The results of this study
suggest that the commercial IPI Soft was able to more accurately calculate joint angles. However, the limited range of movements and use of different participants render this result insignificant. The
RMSEs and bias levels revealed in this study suggest
that the Kinect is not currently accurate enough for
studies requiring high levels of accuracy and
precision. The observed level of systematic bias
could be due to a misalignment of the global/local coordinate sets or a difference in joint centre location.
The Kinect offers an inexpensive solution which
requires no attached markers or calibration. For
illustrations of body motion or analysis of larger-scale
movement patterns, the Kinect provides a means to
conveniently analyse human motion. The use of the
Kinect to capture sporting motion has not been
considered in this study, and the limited frame rate of
the device (30 Hz) is likely to limit its application for
high-speed movements. A more comprehensive
accuracy study is needed to quantitatively assess the
accuracy of the Kinect in segment tracking.
Table I. The results of an ordinary least products regression between the recorded MAC and Kinect segment angles. The table includes the correlation coefficient (r), systematic (a) and proportional (b) biases and their associated CIs.

              r       a (°)   CI (°)       b      CI
IPI Soft
  Flexion     0.984   4.41    3.17/5.59    1.06   1.02/1.09
  Abduction   0.998   10.7    11.2/10.2    1.12   1.11/1.14
NITE
  Flexion     0.993   3.49    3.30/3.68    1.45   1.42/1.49
  Abduction   0.988   8.61    6.55/10.6    1.26   1.22/1.30

Figure 5. A comparison between the MAC system and Kinect in recording participant COM over a 6 × 3 m area. The left plot shows position in the X, Y and Z directions from top to bottom. The right plot shows participant movement as a projection onto the plane of the floor as captured by the Kinect and MAC systems, shown in a 1:1 aspect.

The Kinect provides depth data which are very useful for image segmentation and object tracking.
Figure 5 shows that macro movements can be
captured with acceptable accuracy compared with
previous studies which use image segmentation
(Mauthner et al., 2007). It is feasible that with more
sophisticated tracking techniques, this error could be
reduced. It is noteworthy that the choice of gold
standard in this study is likely to have artificially
increased error. In motions that involve reaching,
such as simulated smashes or swings of the racket, the
centre of area of the player moves towards the
extended limb. This reflects what would happen to
the true COM of the participant but would not
manifest in the position of the sacrum marker. The
poor agreement and high amount of systematic bias
in the Y direction support this hypothesis. The error
seen is due to the inclusion of the racket in assessing
COM for the Kinect, but also due to the error arising
from the use of a single sacrum marker in the MAC
system. Future work should assess the accuracy of
the Kinect in predicting the motion of participants'
true COM through more sophisticated tracking
algorithms and a complete MAC marker set.
Table II. The results of an ordinary least products regression between the recorded MAC and Kinect player positions. The table includes the correlation coefficient (r), systematic (a) and proportional (b) biases and their associated CIs.

             r       a (mm)   CI (mm)     b       CI
X (width)    0.996   -74.7    -100/49.4   1.00    0.997/1.01
Y (height)   0.624   405      366/440     0.587   0.548/0.629
Z (length)   0.991   -161     -154/-167   0.885   0.875/0.896

In the future, it is likely that more sophisticated hardware will be released, which gives increased resolutions, larger capture volumes, more sophisticated tracking techniques and increased sampling rates. This will only increase the suitability of depth cameras for sporting and coaching applications.
There is a need to develop specific software for users
in the sport and coaching community. We hope to
address this need in future research by releasing
software applications under a free licence; visit www.depthbiomechanics.co.uk for more information.
Conclusions
The low cost and automatic tracking capabilities of
depth cameras (such as the Kinect) make them
potentially revolutionary for sports biomechanics
and motion analysis. Accuracy is currently not high
enough for some applications, but there is potential
for its use in coaching and education domains. In
order for the Kinect and future (more advanced)
depth cameras to benefit the sports analysis and
biomechanics community, there is a need for the
development of effective software and future studies
exploring their accuracy in specific domains.

References
Daffertshofer, A., Lamoth, C. J. C., Meijer, O. G., & Beek, P. J. (2004). PCA in studying coordination and variability: A tutorial. Clinical Biomechanics, 19, 415-428.
Dempster, W. T. (1955). Space requirements of the seated operator. WADC Technical Report 55-159 (AD-087-892). Retrieved from http://www.mendeley.com/research/space-requirements-of-the-seated-operator/
Durkin, J. L., & Dowling, J. J. (2006). Body segment parameter estimation of the human lower leg using an elliptical model with validation from DEXA. Annals of Biomedical Engineering, 34(9), 1483-1493.
Hughes, M., & Franks, I. (Eds.). (2004). Notational analysis of sport: Systems for better coaching and performance in sport (2nd ed., p. 320). London: Routledge.
Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe, R., Kohli, P., ... Fitzgibbon, A. (2011). KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera. ACM Symposium on User Interface Software and Technology, Santa Barbara, USA.
Ludbrook, J. (1997). Comparing methods of measurement. Clinical and Experimental Pharmacology and Physiology, 24, 193-203.
Mauthner, T., Koch, C., Tilp, M., & Bischof, H. (2007). Visual tracking of athletes in beach volleyball using a single camera. International Journal of Computer Science in Sport, 6, 21-34. Retrieved from http://nguyendangbinh.org/Proceedings/IACSS/2007/papers PDF/Thomas Mauthner.pdf
Menache, A. (2000). Understanding motion capture for computer animation and video games. San Diego, CA: Academic Press.
Mirante, E., Georgiev, M., & Gotchev, A. (2011). A fast image segmentation algorithm using color and depth map. 2011 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON) (pp. 1-4). IEEE.
Norton, J., Donaldson, N., & Dekker, L. (2002). 3D whole body scanning to determine mass properties of legs. Journal of Biomechanics, 35, 81-86. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/11747886
Scharstein, D., & Szeliski, R. (2003). High-accuracy stereo depth maps using structured light. 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Proceedings (pp. 195-202).
Sheets, A. L., Corazza, S., & Andriacchi, T. P. (2010). An automated image-based method of 3D subject-specific body segment parameter estimation for kinetic analyses of rapid movements. Journal of Biomechanical Engineering, 132, 011004.
Shotton, J., Sharp, T., Kipman, A., Fitzgibbon, A., Finocchio, M., Blake, A., ... Moore, R. (2013). Real-time human pose recognition in parts from single depth images. Communications of the ACM, 56, 116-124.
Wicke, J., & Dumas, G. A. (2010). Influence of the volume and density functions within geometric models for estimating trunk inertial parameters. Journal of Applied Biomechanics, 26, 26-31. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/20147755
Zatsiorsky, V., & Seluyanov, V. (1983). The mass and inertia characteristics of the main segments of the human body. In H. Matsui & K. Kobayashi (Eds.), Biomechanics VIII-B (Vol. 4B, pp. 1152-1159). Human Kinetics.
