Acknowledgements
I would like to thank Professor M. Levesley and Dr A. Jackson for
their guidance throughout this project along with Mr D. Readman for
his help within the control lab.
This project was only made possible thanks to the previous work
completed by the members of the Kinesthesia team: B. Cotter, C.
Norman and D. Clark, as well as the summer project students: R. Coe,
N. Hussain, A. Oladokun, W. Stokes, J. Skyes and S. Yang.
Contents
1.1  Introduction  1
1.2  Aims & Objectives  2
2  Literature Review  3
2.1  Kinect Camera  3
2.2  Physiotherapeutic Robots  5
2.3  Existing Robotic Arm  6
2.4  LabVIEW Interface  7
2.5  Conclusions  8
3  System Development  10
3.1  Motion Capture  10
3.2  Analysis of Original Robotic Arm  12
3.3  Transformation of Kinect Output to Robot Input  14
3.4  Replication of Motion  15
4  Results  18
4.1  Validation  18
4.2  Analysis  21
5  Discussion & Conclusions  22
5.1  Future Work  24
5.1.1  Performance Enhancements for Existing Systems  24
5.1.2  Development of Advanced Applications  26
Appendix 1  28
Appendix 2  29
References  30
1.1 Introduction

… same manner as they normally would during manual …
1.2 Objectives
2 Literature Review
This project spans four main topics: the Microsoft Kinect camera,
LabVIEW as an interface, robotic arms, and upper limb rehabilitation.
The work carried out for the project has established links between
each of these elements. The reasoning behind the hardware and
software choices made is also given.
2.1 Kinect Camera
The full capabilities of the Kinect are not used in this project:
only the right and left wrists are tracked. However,
further controls could be implemented in the motion capture
program. Natural gestures such as nodding and
shaking the head can be recognised by the Kinect using its depth camera with
little computing power (Biswas & Basu, 2011), corresponding to yes
and no commands. The motion tracking program could also be
developed to record lower limb motion; it has been demonstrated that
the Kinect can be used to evaluate foot position (Mentiplay & Clark,
2013).
2.2 Physiotherapeutic Robots

… an improvement over traditional physiotherapeutic …
2.3 Existing Robotic Arm

… arm … medical … the department's current research into
physiotherapeutic robots (Coe et al., 2012). It is a double-jointed,
two-… … to be capable of … from the department's Design
Catalogue. It is controlled by a CompactRIO (National Instruments,
Texas, U.S.), which is programmed in LabVIEW.
[Figure: the robotic arm, with the forearm and upper arm links labelled]
2.4 LabVIEW Interface

… (virtual instruments) which allow the three data feeds (RGB, depth and
audio) from a Kinect camera to be accessed within LabVIEW (Cotter
et al., 2012a). However, in this project audio is not used. The toolkit
was developed using Microsoft's .NET framework and their Kinect SDK.
By using a DLL (dynamic link library), the Kinect's data streams can
be used by multiple programs at once. The toolkit follows National
Instruments' recommended DAQ structure, and its VIs containing the
data streams can simply be dragged into a LabVIEW program.
Although the Kinect is capable of actively tracking two skeletons at a
time (Kinect for Windows Sensor Specifications, 2012), Kinesthesia
tracks just one to improve the program's efficiency.
2.5 Conclusions
Literature has highlighted some of the requirements for the
rehabilitation system proposed in this report, particularly if it is to be
used in a clinical environment as an effective rehabilitation tool.
Patients who trialled various systems noted that it must, of course, be
comfortable and simple to operate (Kwakkel et al., 2008).
Physiotherapists who observed trials of iPAM also mentioned that it
must be portable and quick and easy to set up (Jackson et al., 2009).
3 System Development
3.1 Motion Capture

… been possible. The Microsoft Kinect official Software …
Not all of the VIs provided in the toolkit were used in the
motion capture system for this project: the depth and RGB data
streams were not accessed directly, and only the skeleton data stream
was used to capture the position of the skeletal joints. This does not
mean that the other data is unused. All of the processing of the image
seen by the Kinect's three cameras is completed on board, and the
skeletal data stream, made up of joint positions, is generated on
board the Kinect from the RGB and depth cameras' data.
… and z values are taken, since the robotic arm operates in the
horizontal 2D x-z plane. The output of the Kinect is given in meters
to three decimal places, i.e. a resolution of 1 mm.
Y values for the left hand are also used internally by the
program to start and stop recording the right hand's movement. The
ability to control the program with the user's left hand was
implemented to prevent the program recording the user's movement
as they enter or exit the Kinect's workspace to start or stop
the program. When the user clicks run on the computer, the Kinect
activates, but the program does not begin recording right hand
movement until the user raises then lowers their left hand, allowing
them time to move into the Kinect's line of sight and hold their right
hand at the starting position of the movement. Similarly, during
recording, raising the left hand stops the program without the user
needing to return to the computer.
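The raise-then-lower gating described above can be sketched as a small state machine. The original logic is implemented in LabVIEW, so the following Python sketch is illustrative only; treating "raised" as the left hand being above the left shoulder is an assumption of this sketch, not a detail stated in the report.

```python
# Illustrative sketch (assumed logic) of the left-hand start/stop gate.
# Joint heights are in metres, as the Kinect reports them.

RECORDING_OFF, RECORDING_ON = 0, 1

class HandGate:
    """Toggle recording each time the left hand is raised then lowered."""

    def __init__(self):
        self.state = RECORDING_OFF
        self.hand_was_up = False

    def update(self, left_hand_y, left_shoulder_y):
        # Assumption: "raised" means the hand is above the shoulder.
        hand_up = left_hand_y > left_shoulder_y
        # A completed raise-then-lower gesture toggles the recording state.
        if self.hand_was_up and not hand_up:
            self.state = (RECORDING_ON if self.state == RECORDING_OFF
                          else RECORDING_OFF)
        self.hand_was_up = hand_up
        return self.state == RECORDING_ON
```

Because the toggle fires only on the lowering edge, walking into the Kinect's workspace with both hands down never starts the recording by accident.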
Figure 6. Front panel of the motion capture program, containing a status bar, a 3D
skeleton render with joint positions shown as spheres, and a plot of the recorded right hand motion.
The code that graphs and exports the right hand motion is
contained in a while loop, controlled by a wait function, which
regulates the speed at which the motion is recorded, with each loop
iteration recording one point. This was set to 100 ms, resulting in 10
points per second being recorded. This value was sufficiently
accurate for the motions recorded, but a manual control could easily
be implemented in future work if deemed necessary. The number of
recorded points and the right hand x and z coordinates are written to
two text files.
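The timed recording loop is LabVIEW code in the report; as an illustrative Python analogue (the file names and the stand-in data sources here are assumptions of this sketch), the 100 ms wait and the two text-file outputs might look like this:

```python
import time

SAMPLE_PERIOD_S = 0.1  # 100 ms wait -> 10 points per second, as in the report

def record_motion(get_right_hand_xz, is_recording,
                  points_path="points.txt", count_path="count.txt"):
    """Record one (x, z) point per loop iteration while recording is active.

    `get_right_hand_xz` and `is_recording` stand in for the Kinect data
    feed and the left-hand gate; both are assumptions of this sketch.
    """
    points = []
    while is_recording():
        x, z = get_right_hand_xz()
        points.append((x, z))
        time.sleep(SAMPLE_PERIOD_S)  # regulates the recording rate
    # As in the report, the point count and the coordinates are written
    # to two separate text files.
    with open(count_path, "w") as f:
        f.write(str(len(points)))
    with open(points_path, "w") as f:
        for x, z in points:
            f.write(f"{x:.3f} {z:.3f}\n")  # Kinect output has 1 mm resolution
    return points
```

Making the period a named constant is the hook for the manual rate control the report suggests as future work.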
3.2 Analysis of Original Robotic Arm

The robot was originally … allows small shape sizes to be drawn. The
coordinates for each point were fixed in the LabVIEW code as an
array. The kinematics were …

3.3 Transformation of Kinect Output to Robot Input
… values of the Kinect output become the y values of the robot
input. A LabVIEW program was developed to convert the output of
the Kinect into an input to the robotic arm. The diameter of the
robot's maximum range is 0.7 m, approximately half the equivalent
range of the average human arm. This means that when
transforming the motion the Kinect records into a motion replicated
by the robotic arm, it must be scaled down by a factor of two. This is
done by multiplying the Kinect's output coordinates by 200, which
also converts them from meters into millimeters. Once scaled down,
the LabVIEW program also shifts the coordinates so that the first
position recorded by the Kinect is set, in the robot's coordinate
system, to a datum point (0, 350): the position of the end of the arm
when fully extended. On average, more motions starting from this
central point are likely to fit in the robot's workspace than from any
other point. Successive points of the motion are then transformed by
the same amount as the first.
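The scale-and-shift transformation just described can be sketched as follows. The 200× factor and the (0, 350) datum are taken from the report; the function itself is an illustrative Python stand-in for the LabVIEW conversion program.

```python
SCALE = 200           # factor from the report: scales the motion and
                      # converts metres to millimetres in one step
DATUM = (0.0, 350.0)  # robot coordinates (mm) of the fully extended arm

def kinect_to_robot(points_m):
    """Map Kinect (x, z) points in metres to robot (x, y) points in mm.

    The first recorded point is pinned to the datum (0, 350); every later
    point is shifted by the same offset, preserving the motion's shape.
    """
    if not points_m:
        return []
    scaled = [(x * SCALE, z * SCALE) for x, z in points_m]
    dx = DATUM[0] - scaled[0][0]
    dy = DATUM[1] - scaled[0][1]
    return [(x + dx, y + dy) for x, y in scaled]
```

Because every point receives the same offset as the first, relative distances (and hence the aspect ratio) of the recorded motion are preserved.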
3.4 Replication of Motion
The new angle-finder subVI is used in the new robot control
program. Here the motion coordinates are read from the file output by
the Kinect-to-robot conversion program discussed in the previous
section. However, in order for the robot to access the coordinate files,
they must first be placed on the CompactRIO's internal memory.
This simply consists of dragging and dropping the two files between the …
4 Results

4.1 Validation
[Figure: Motion 2, panels a, b and c]

[Figure: Motion 3, panels a, b and c]
4.2 Analysis
The conversion to the robot coordinate system is successful, with the
starting point of the motion being set to (0, 350) and the following
points being transformed by the same amount, preserving the aspect
ratio. The overall shape of the motion drawn by the robot is relatively
close to that recorded by the Kinect; however, there are
discrepancies. In particular, the robot clearly struggles to
recreate vertical lines. This difficulty was further highlighted when a
set of test coordinates, where x remained constant and y decreased
linearly from 350 to 275, were used as the robot's input. As is clear in
Figure 9 below, the end of the arm judders whilst moving vertically.
Small single-point deviations from the general course of the motion
are also often not registered by the robot, as seen in Figure 10.
5 Discussion & Conclusions
By fulfilling each objective set out in the introduction, the aim of the
project, to develop a control interface between a Kinect camera and
a robotic arm, has largely been met, as can be seen in the results: a
motion is recorded by the Kinect motion capture program and
replicated by the robotic arm.
… when the recorded motion is likely to exceed the range of the robot,
without having to operate the robot to find out.
… fact falls within the tolerance for the previous point, and no further
movement is carried out. This is, however, somewhat beneficial to the
accuracy of the system, since it is fairly likely that such deviations are
the result of an occasional inaccurate Kinect depth calculation. In a
more accurate system, these small deviations would result in a jerky
motion, which is generally undesirable in a physiotherapeutic
application.
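The point-skipping behaviour described above can be sketched as a simple filter over the target list. The 5 mm tolerance value here is a hypothetical assumption (the report does not state the figure), and the function is an illustrative stand-in for the robot control logic.

```python
import math

TOLERANCE_MM = 5.0  # hypothetical value; not stated in the report

def filter_targets(points):
    """Drop points that fall within the tolerance of the last accepted target.

    This mirrors the behaviour observed in the report: a small single-point
    deviation registers as already reached, so no movement is commanded,
    which incidentally smooths occasional Kinect depth errors.
    """
    accepted = []
    for p in points:
        if accepted and math.dist(p, accepted[-1]) <= TOLERANCE_MM:
            continue  # within tolerance of the previous target: skip it
        accepted.append(p)
    return accepted
```

A one-point blip of a few millimetres is absorbed, while any genuine movement larger than the tolerance still produces a new target.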
5.1 Future Work

5.1.1 Performance Enhancements for Existing Systems

… could be made more user-friendly. Defensive … during recording.
Using the unused … functionality …
Appendix 1
Robotic Arm Wiring Diagram
Appendix 2
Kinematics
First, angle m1 is found, which is the angle from the vertical starting
position that the forearm must move through to reach the correct
distance a, the distance between (0, 0) and the target (x, y). If x is
negative for the first point, m1 is made negative (…) … where
L = 175 mm (the length of one arm link). If m1 is negative: …
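The appendix's equations did not survive extraction. A candidate reconstruction, consistent with the surrounding prose (two equal links of L = 175 mm, a the distance from (0, 0) to the target), follows standard two-link inverse kinematics; this is a hedged sketch of what the missing expressions might be, not the report's actual equations.

```python
import math

L = 175.0  # mm, length of one arm link (from the appendix)

def joint_angles(x, y):
    """Candidate reconstruction of the appendix kinematics.

    a is the distance from (0, 0) to the target (x, y). Under the
    assumption of two equal links, the law of cosines gives
    a^2 = 2 * L^2 * (1 + cos(m1)), where m1 is the angle the forearm
    moves through from the fully extended position.
    """
    a = math.hypot(x, y)
    m1 = math.acos(a * a / (2 * L * L) - 1)
    if x < 0:
        m1 = -m1  # as stated in the appendix: m1 is negated when x < 0
    return a, m1
```

As a sanity check, the fully extended datum (0, 350) gives a = 2L = 350 mm and m1 = 0, matching the datum point used in Section 3.3.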
References
Biswas, K. K., & Basu, S. K. (2011). Gesture recognition using
Microsoft Kinect. Automation, Robotics and Applications
(ICARA), 2011 5th International Conference on.
doi:10.1109/ICARA.2011.6144864
Coe, R., Hussain, N., Oladokun, A., Stokes, W., Skyes, J., & Yang,
S. (2012). Robotic Arm Summer Project. University of Leeds.
Cotter, B., Clark, D., & Norman, C. (2012a). Using a Low-Cost
Motion Capture System Based for New Surgical and
Rehabilitation Technologies. University of Leeds.
Cotter, B., Clark, D., & Norman, C. (2012b). Community: Kinesthesia
- A Kinect Based Rehabilitation and Surgical Analysis System,
UK. Retrieved January 6, 2013, from
https://decibel.ni.com/content/docs/DOC-20973
Du, G., Zhang, P., Mai, J., & Li, Z. (2012). Markerless Kinect-based
hand tracking for robot teleoperation. International Journal of
Advanced Robotic Systems, 9. Retrieved from
http://www.scopus.com/inward/record.url?eid=2-s2.0-84868150650&partnerID=40&md5=9f194326250abc6f46d4c44997759756
Henry, P., Krainin, M., Herbst, E., Ren, X., & Fox, D. (2012). RGB-D
mapping: Using Kinect-style depth cameras for dense 3D
modeling of indoor environments. The International Journal of
Robotics Research, 31(5), 647-663.
doi:10.1177/0278364911434148
Jackson, A. E., Makower, S. G., Culmer, P. R., Holt, R. J., Cozens, J.
A., Levesley, M. C., & Bhakta, B. B. (2009). Acceptability of
robot assisted active arm exercise as part of rehabilitation after
stroke. Rehabilitation Robotics, 2009. ICORR 2009. IEEE
International Conference on. doi:10.1109/ICORR.2009.5209549
Khoshelham, K. (2011). Accuracy Analysis of Kinect Depth Data.
ISPRS Journal of Photogrammetry and Remote Sensing, 38,
135. Retrieved from http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XXXVIII-5-W12/133/2011/isprsarchives-XXXVIII-5-W12-133-2011.pdf
Kinect for Windows. (2012). Retrieved December 27, 2012, from
http://www.microsoft.com/en-us/kinectforwindows/