
Human-Robot Interaction Strategies

for Gait Assistance and Rehabilitation

Prof. Dr. Carlos A. Cifuentes


carlos.cifuentes@escuelaing.edu.co

Dept. of Biomedical Engineering


Biomedical Engineering Group (GiBiome)
Escuela Colombiana de Ingeniería Julio Garavito
Content
1. Conditions that Affect Mobility

2. Mobility Assistive Devices

3. Human-Robot Interaction

4. Physiological Motor Control

5. HRI Strategies for Gait Assistance and Rehabilitation (G.A.R.)

1. Exoskeleton

2. Walker

3. Exoskeleton + Walker

4. (Bio-inspired) Exoskeleton + Walker + Environment (Ongoing Research)

Conditions that Affect Mobility

A leading cause of disability in the developed world: reduced gait speed, shortened step length, loss of balance, and frequent falls.

Over 130,000 people each year survive a traumatic SCI and are bound to a wheelchair.

Maximization of user independence & mobility is the main objective.

Conditions that Affect Mobility
Cerebral Palsy (Niño Jesús Hospital, Spain)

- CP is the most common cause of permanent serious physical disability in childhood.
- Survival of children with a severe level of impairment has increased in recent years.

Population over 60 years old (Physical Rehabilitation Center of Espírito Santo, Brazil)

- Worldwide: 2 billion by 2050; in less developed countries: 1.7 billion by 2050.
- Associated conditions include cardiovascular disease, dementia, diabetes, arthritis, osteoporosis and stroke.

Mobility Assistive Devices
Alternative devices: (robotic) wheelchairs & special vehicles (e.g., the UFES Robotic Wheelchair).

- Prolonged use of such devices affects the spine and lower limbs.

Augmentative devices take advantage of the user's remaining motor capabilities:

- Wearable devices (orthoses and prostheses), e.g., Ekso Bionics.
- External devices (canes, crutches and walkers), e.g., the DALi project (EU Project).

Trends in Gait Rehabilitation
(Body-weight support (BWS) systems)

Treadmill-based devices (Lokomat, Lopes):

- Fixed platform and a predetermined gait pattern.
- Effective training in neurorehabilitation.

Overground devices (ZeroG, THERA-Trainer e-go):

- Over-ground walking is considered the most natural gait pattern.
- Allows subjects to participate actively.
Human-Robot Interaction

Socio-economic context:

- Personal assistance
- Caretaking
- Housekeeping

Introducing robots into human environments brings new challenges to robotics. Robot & human have to safely:

- Cooperate & accomplish tasks together.
- Communicate & interact.
- At least co-exist.
Human-Robot Interaction

Cognitive Human-Robot Interaction (cHRI): humans interact with the environment through cognitive processes.

Physical Human-Robot Interaction (pHRI): humans & robots share the same workspace, exchanging forces and possibly cooperating.

Human-Robot Interfaces (HRi): an interface is a hardware & software link that connects robot & human. A natural HRI system should be multimodal.

Cognitive Human-Robot Interfaces (cHRi): supports the flow of information in the cognitive interaction (possibly two-way).

Physical Human-Robot Interfaces (pHRi): based on a set of actuators & a rigid structure that transmits forces to the human musculoskeletal system.

Physiological Motor Control

[Figure: human motor control loop: the CNS acts as the controller, muscle activation drives the skeleton/load plant to produce movement, and sensing returns through vision, skin, vestibular and physiological pathways. Interfaces tap this loop through EEG, EMG, ENG and force/pressure signals.]

Figure 4.6: Human gait phases and periods: stance phase (LRP, MST, TST, PSW) and swing phase (ISW, MSW, TSW) over 0-100% of the gait cycle.
HRI Strategies for Gait Assistance and Rehabilitation

Wearable Robots (Lower-Limbs)

- Natural modalities (visual, auditory)
- cHRI (EEG, ENG, EMG, biomechanical)
- pHRI

WR: designed to compensate gait deficiencies.

Robotic Walkers

- Natural modalities (visual, auditory)
- cHRI (EEG, ENG, EMG, biomechanical)
- pHRI

RW: designed to provide body weight support.
Exoskeleton

Control Strategy: cHRI through Biomechanical Monitoring

- The human and the wearable robot interact in a closed loop of two dynamic control systems: the human motor control and the robot controller (Figure 5.10). The human receptors record the physical state of the body; this sensory information is perceived by the CNS and interpreted by cognitive processes. Similarly, the robot's sensors detect the state of the machine, and the environment is interpreted by the robot's control system.
- A key property of this interaction is the capacity of the two systems for mutual adaptation. Human characteristics may be approximated by a variable impedance model, and exoskeleton controllers typically run impedance control strategies to manage the pHRI: the robot controller adjusts the robot's impedance to the human, depending on the goal of the robot.
- Robot-assisted physical therapy has been shown to aid rehabilitation after neurological injuries. In rehabilitation applications the exoskeleton adopts the role of a therapist during treatment: sensors attached to the exoskeleton assess the forces and movements of the patient, giving the therapist quantitative feedback on the patient's recuperation. Several control algorithms have been developed that automatically adapt the reference trajectory.
- cHRI based on kinetics/kinematics allows the detection of activity modes such as walking, stance phase, swing phase, sitting, standing, and going up/down slopes or stairs.

Figure 5.13: Impedance control schema for a wearable robot for rehabilitation (CNS trajectory, impedance controller, wearable robot and human limb, with force and position feedback through the musculoskeletal system).

Figure 5.10: Human-robot loop interacting dynamically.
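As a rough illustration of the impedance control schema in Figure 5.13, the sketch below (Python) computes the assistive joint torque from the deviation between a reference trajectory and the measured limb state. The gains K and B are hypothetical values, not parameters from the source:

```python
def impedance_force(x_ref, xd_ref, x, xd, K=80.0, B=6.0):
    """Impedance law F = K*(x_ref - x) + B*(xd_ref - xd).

    x_ref, xd_ref: reference joint angle [rad] and velocity [rad/s]
    x, xd:         measured joint angle and velocity
    K, B:          virtual stiffness [Nm/rad] and damping [Nm*s/rad],
                   hypothetical values to be tuned per patient and goal
    """
    return K * (x_ref - x) + B * (xd_ref - xd)

# Example: assistive torque when the knee lags a swing-phase reference
tau = impedance_force(x_ref=0.4, xd_ref=1.2, x=0.3, xd=0.9)  # = 9.8 Nm
```

Lowering K and B softens the interaction (more patient participation); raising them enforces the reference more strictly, which is the adjustable-impedance idea described above.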
Walker Frames
- Standard (the most stable): +217%
- Two-wheeled (reduced dynamic stability): +84%
- Rollator (the most unstable)
- Hands-free (assures the user's safety)

Robotic Walkers or Smart Walkers

Examples: Guido Walker, DALi project (EU Project), JARoW, Sentry Scientifics, Simbiosis Smart Walker.
Objectives

- Design new control strategies to develop a more natural, safer and more adaptable human-walker interaction.
- Study human motion intentions during walker-assisted gait in order to extract human-walker interaction parameters.
- Integrate HRI sensing modalities that promote natural human-walker interaction.
- Design a multimodal interface for testing and validating control strategies for robotic walkers.
- Design and validate a cognitive HRI control strategy for walker-assisted gait.
- Develop a control strategy based on cognitive and physical HRI for walker-assisted gait.
Multimodal Interaction Platform for the Robotic Walker
[Platform diagram, conceptual design and implementation: 3D force sensors (pHRI), PC/104-plus running Matlab Real Time xPC, LRF sensor, ZigBee link to the human IMU sensor (cHRI), legs-detection module (LRF + IMU sensors) for human tracking, walker IMU sensor, DC motors and caster wheel.]

Control Fusion Strategy: Following-in-front Controller + Force Interaction Controller.


Development of a Cognitive Interaction Strategy for Mobile Robot

[Figure: (a) model for cHRI applied to a carrier robot: the human H (cognitive process: reasoning, planning, executing) with velocity vh and orientation θh interacts through sensors and actuators with the robot R (velocity vr, orientation θr, control point C at offset a), separated by distance d at angle φ; (b) velocity components projected onto the human-robot line: vh sin/cos terms against vr cos(φ) and a·ωr sin/cos(φ).]
Development of a Cognitive Interaction Strategy for Mobile Robot

[Figure: simulation of the proposed control strategy: human and robot paths over an 80 s run, with the control errors in distance and angle converging toward zero; block diagram of the proposed controller, in which the desired distance dd is compared with the measured d and a kinematics-based controller generates the robot commands.]
A good real-time implementation of the proposed method relies on robust and precise measurement or estimation of these parameters.
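To make the control idea concrete, here is a minimal Python sketch of one following-in-front control step. It assumes simple proportional laws on the distance error and the angle φ; the published controller is derived from the full kinematic model above, so the gains, sign conventions and the desired distance d_d here are illustrative assumptions only:

```python
def following_in_front_step(d, phi, v_h, d_d=1.0, k_v=1.0, k_w=2.0):
    """One control step for a robot walking in front of a human.

    d, phi: measured human-robot distance [m] and angle [rad]
    v_h:    estimated human linear velocity [m/s]
    d_d:    desired following distance [m] (assumed value)
    k_v, k_w: proportional gains (hypothetical tuning)
    Returns (v_r, w_r): linear [m/s] and angular [rad/s] commands.
    """
    v_r = v_h - k_v * (d - d_d)  # slow down when the human lags (d > d_d)
    w_r = -k_w * phi             # steer so the human stays centred behind
    return v_r, w_r

# Example: human 1.2 m behind and 0.1 rad off-axis, walking at 0.5 m/s
v_r, w_r = following_in_front_step(d=1.2, phi=0.1, v_h=0.5)  # (0.3, -0.2)
```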
Robot and Sensor Integration Setup

[Setup diagram: a ZIMUED sensor (ZigBee end device) streams IMU data, the yaw θh, to the ZigBee coordinator; an LMS200 LRF over USB detects both legs as (d1,a1) and (d2,a2); LRF and robot position data reach the control server over Wi-Fi (evaluation mode) or a serial link (control mode), and the server commands the angular and linear velocities of the Pioneer 3-AT robot.]
Estimation of Control Inputs
(d, φ, vh, θh, ωh)

[Figure: gait cycle and transverse-plane pelvic rotation: right and left heel contacts delimit the stance and swing phases over the gait cycle (GC 0%, 50%, 100%); pelvic rotation is captured by the IMU sensor on the pelvis.]

[Figure: external and internal measurement of gait with the robot following the human in front: (a) pelvic angular velocity (Z-gyroscope); (b) pelvic orientation (IMU yaw and pitch); (c) legs distance (LRF, right/left); (d) legs orientation (LRF, right/left). Gait events (heel strike, foot flat, heel off, toe off) are segmented by detecting zero-crossing points of the pelvic angular velocity.]
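As the caption above notes, gait events are segmented at zero crossings of the pelvic angular velocity. A minimal detection sketch in Python, assuming a uniformly sampled, pre-filtered gyroscope signal (the platform's actual filtering is not specified here):

```python
import numpy as np

def zero_crossings(omega, t):
    """Sub-sample times where the angular velocity changes sign.

    omega: filtered pelvic Z-gyroscope signal [deg/s]
    t:     uniformly spaced time stamps [s]
    """
    s = np.sign(omega)
    idx = np.where(np.diff(s) != 0)[0]    # sign change between i and i+1
    # linear interpolation for the crossing instant within each interval
    tc = t[idx] - omega[idx] * (t[idx + 1] - t[idx]) / (omega[idx + 1] - omega[idx])
    return idx, tc

# Example: synthetic pelvic rotation at a 1 Hz stride frequency
t = np.linspace(0, 4, 401)
omega = 40 * np.sin(2 * np.pi * 1.0 * t)
idx, tc = zero_crossings(omega, t)        # two crossings per stride
```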
Development of a Cognitive Interaction Strategy for Mobile Robot
Three experiments to verify the accuracy of the detection of the HRI parameters.

First and second experiments:

- ROBOT: no motion.
- HUMAN: step length (0.25 m, 0.5 m & 0.75 m) marked with distance intervals; steps paced by a sound indication (every second); vh = 0.25 m/s, 0.5 m/s & 0.75 m/s.

Third experiment:

- ROBOT: vr = 0.15 m/s, ωr = -7°/s; vr = 0.25 m/s, ωr = -11°/s; vr = 0.30 m/s, ωr = -14°/s.
- HUMAN: maintain a constant distance while following the robot.
First experiment
[Setup: (a) LRF and (b) robot static; the human stands at marked reference angles between -20° and 20° at distance d.]

Estimated values of φ and θh versus the reference angles: φ with RMSE 0.6° and bias -0.6°; θh with RMSE 0.2° and bias -0.2°.
Second experiment
[Setup: (a) LRF and (b) robot static, θh = 0; the human walks along marked directions T1-T8 toward the robot.]

[Plots: estimated values of φ and θh per stride for tests T1-T4 and T5-T8 at a human linear velocity of 0.5 m/s; average errors (RMSE) in the estimation of vh at 0.25, 0.50 and 0.75 m/s, and of the angular parameters φ (LRF), θh (IMU) and their fusion (LRF+IMU).]
Third experiment

[Setup: the robot moves (vr, ωr) while the human follows at vh, keeping the distance d: (a) path, (b) kinematic variables.]

Error in linear and angular velocity estimation.
Controller Evaluation (1/3)
Controller Evaluation (2/3)
[Sensor data of the robot following-in-front experiment over ~80 s, with the human path shown as a dashed line: (a) human and robot orientation (θh, θr) from start to end; (b) pelvic angular velocity, raw and filtered; (c) legs orientation, left and right; (d) legs distance, left and right.]
Controller Evaluation (3/3)
[Control data of the robot following-in-front experiment: (a) angular parameters (θr, θh and their error); (b) distances (d and its error around the desired value); (c) linear velocities: commanded vr(C), measured vr(R) and vh; (d) angular velocities: commanded ωr(C), measured ωr(R) and ωh; (e) human and robot trajectories with start/end poses.]


Cognitive HRI for Human Mobility Assistance

[Multimodal-interaction platform: PC/104-plus running Matlab Real Time xPC; LRF sensor and legs-detection module measuring (d1,a1) and (d2,a2) in mm and degrees; human IMU (vh, θh) and walker IMU (θw) linked over ZigBee to the coordinator/receiver; DC motors and caster wheel. The control loop relates the human frame H and walker frame W through the distance d, angle φ and walker velocity vw.]
Cognitive HRI for Human Mobility Assistance
Sensor Readings During Walker-Assisted Gait

[Plots over 10 s: (a) legs distance detection, left and right leg [mm]; (b) legs angle detection, left and right leg [°]; (c) human and walker orientations [°]; (d) human and walker angular velocities [°/s].]
Cognitive HRI for Human Mobility Assistance
Human-Walker Parameters Detection

[Human control parameters detection: the human IMU yields the pelvic orientation θh (filtered) and the walker IMU yields θw; their difference feeds the control strategy. The legs-detection system drives gait cadence (GC) estimation and, from it, the human linear velocity vh. The leg-distance difference LDD = dr - dl (right minus left leg distance) is a periodic signal used for gait detection.]
Calibration of LRF Sensor

[Calibration setup: leg positions measured against a 50 cm reference for users of height 190 cm, 178 cm and 165 cm.]

- Users' heights were between 1.65 and 1.90 m.
- The average of all K values is 1.62.
- Adopting a constant K ratio, the error for these users is close to 0.02 m, which corresponds to 4% (about 4% of the 50 cm reference).
Adaptive Estimation of Gait Components
[Estimation pipeline: the left and right leg distances form the LDD, which is high-pass filtered (second-order Butterworth, 0.2 Hz). A Weighted-Frequency Fourier Linear Combiner (WFLC) estimates the gait cadence; a Fourier Linear Combiner (FLC), fed with the WFLC coefficients, estimates the gait amplitude; cadence and amplitude then yield the human linear velocity.]

[Plots over 20 s: (a) legs position detection, right/left [mm]; (b) LDD [mm]; (c) gait cadence [steps/s]; (d) amplitude [mm]; (e) human linear velocity, raw and adjusted [mm/s].]
Adaptive Estimation of Gait Components

[Diagram: the gait cadence and amplitude coefficients estimated by the WFLC feed an FLC and a gait detector; subtracting the estimated gait oscillation from θh yields an adjusted pelvic orientation.]
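The FLC/WFLC pair named above is a standard adaptive algorithm built on LMS updates. Below is a compact, first-harmonic Python sketch of the WFLC stage; the adaptation gains mu0/mu1 and the initial frequency are illustrative assumptions, not the values used on the platform:

```python
import numpy as np

def wflc(signal, mu0=2e-5, mu1=0.01, w0_init=2 * np.pi * 0.01):
    """Weighted-Frequency Fourier Linear Combiner, first harmonic only.

    Tracks the dominant frequency and amplitude of a quasi-periodic,
    zero-mean signal (e.g. the high-pass-filtered LDD) sample by sample.
    mu0/mu1 (frequency/amplitude gains) and w0_init (initial frequency,
    rad/sample) are illustrative; the bias weight of the full algorithm
    is omitted since the input is assumed zero-mean and normalized.
    """
    w0, phase = w0_init, 0.0
    w = np.zeros(2)                       # sine / cosine weights
    freq, amp = [], []
    for s in signal:
        phase += w0                       # accumulate estimated phase
        x = np.array([np.sin(phase), np.cos(phase)])
        err = s - w @ x
        w0 += 2 * mu0 * err * (w[0] * x[1] - w[1] * x[0])  # frequency LMS
        w += 2 * mu1 * err * x                             # amplitude LMS
        freq.append(w0)
        amp.append(np.hypot(w[0], w[1]))
    return np.array(freq), np.array(amp)

# Example: normalized LDD-like signal at 0.6 Hz, sampled at 100 Hz
fs = 100.0
t = np.arange(0.0, 20.0, 1.0 / fs)
ldd = np.sin(2 * np.pi * 0.6 * t)         # normalize before adapting
freq, amp = wflc(ldd)
cadence_hz = freq[-1] * fs / (2 * np.pi)  # tracked fundamental [Hz]
```

The tracked frequency gives the cadence, and the FLC stage reuses the WFLC frequency and coefficients to re-estimate amplitude, from which step length and the human linear velocity follow.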
Experimental Study
The experimental session focused on evaluating the response of the sensor fusion algorithm (vh follows as SL × GC, e.g. 300 mm × 0.6 steps/s = 180 mm/s).

First experiment (vh constant):

- SL = 300 mm & GC = 0.6 steps/s, vh = 180 mm/s.
- SL = 300 mm & GC = 1 steps/s, vh = 300 mm/s.
- SL = 600 mm & GC = 0.6 steps/s, vh = 360 mm/s.

Second experiment (changing vh):

- GC = 0.6 steps/s constant, SL changing from 300 to 600 mm.
- GC = 1 steps/s constant, SL changing from 300 to 600 mm.
- SL = 600 mm constant, GC changing from 0.6 to 1 steps/s.
- SL = 600 mm constant, GC changing from 1 back to 0.6 steps/s.
Experimental Study
[Plots over 50 s, user performing a straight path with SL = 300 mm and GC = 0.6 steps/s: (a) legs distance detection (dr, dl, d) [mm]; (b) legs angle detection (φr, φl) [°]; (c) LDD with zero-crossing (ZC) and maximum detection [mm]; (d) cadence, actual vs. estimated [steps/s]; (e) step length, actual vs. estimated [mm]; (f) human velocity, actual vs. estimated [mm/s].]
Experimental Study
[Bar plots of average estimation errors (RMSE) at 180, 300 and 360 mm/s: (a) cadence [steps/s], (b) step length [mm], (c) velocity [mm/s].]

Average errors (RMSE) of lower-limb kinematic parameters in experiments with constant step length and cadence performed by the user, and in experiments with a change in the parameters performed by the user.
Cognitive HRI for Human Mobility Assistance
Walker Control System Evaluation


Walker Control System Evaluation


Manual guidance (controller off) vs. controller on. In both cases:

- Parameters present similar behavior.
- The controller allows natural interaction.
- Orientation parameters present differences.
- No physical link to the device is needed (controller on).
Integration of an Upper-limb Interaction Forces System in the Walker Platform
[Platform with upper-limb force sensing added: PC/104-plus running Matlab Real Time xPC; LRF sensor and legs-detection module; human IMU over ZigBee; walker IMU; DC motors and caster wheel; right and left 3D force sensors (Fx, Fy, Fz) read through amplifier modules.]
Multimodal Interface for Human Mobility Assistance
Multimodal Interaction Strategy

[Figure: (a) the user grips both handles, exerting left and right forces Fl, Fr; (b) with Fl ≈ 0 and Fr ≈ 0 ("passenger" role) the walker tracks the human, with θw ≈ θh, φ ≈ 0 and d → dd; (c) in the "locomotor" role the human state (vh, θh) drives the walker (vw, θw) through the measured (d, φ) and the guiding forces.]

Model of human-walker interaction.
Multimodal Interface for Human Mobility Assistance
[Multimodal interface diagram: the right and left 3D force sensors (y- and z-axis components Fry, Frz, Fly, Flz) are low-pass filtered (second-order Butterworth, 3 Hz); an FLC gait-amplitude estimator subtracts the gait-induced component to expose the human movement intention. The ZigBee IMU provides the hip orientation θh, likewise corrected by an FLC amplitude estimate. Lower-limb kinematics from the legs-detection system give the WFLC amplitude coefficients and the gait cadence (GC); the LDD is high-pass filtered (fourth-order Butterworth, 0.2 Hz) and an FLC estimates the step length SL, yielding vh. The leg detections dr, dl and angles φr, φl are each averaged (1/2 + 1/2) into the human position (d, φ) relative to the walker.]

Diagram that illustrates the multimodal interface for online estimation of human interaction
parameters in walker-assisted gait.
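The averaging and filtering steps named in this diagram can be sketched directly. Below is a small Python example using SciPy; the sampling rate fs is an assumption, while the filter orders and cut-offs (second-order 3 Hz low-pass, fourth-order 0.2 Hz high-pass) are the ones stated above:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0  # sampling rate [Hz], an assumption

def human_relative_position(d_r, phi_r, d_l, phi_l):
    """Human position relative to the walker as the midpoint of the two
    leg detections, as in the diagram: d = (dr + dl)/2, phi = (phi_r + phi_l)/2."""
    return 0.5 * (d_r + d_l), 0.5 * (phi_r + phi_l)

# Filters named in the interface: 2nd-order 3 Hz low-pass for the
# guiding forces, 4th-order 0.2 Hz high-pass for the LDD signal.
b_lp, a_lp = butter(2, 3.0 / (fs / 2), btype="low")
b_hp, a_hp = butter(4, 0.2 / (fs / 2), btype="high")

def condition_signals(force, ldd):
    """Zero-phase filtering of a force channel and the LDD trace."""
    return filtfilt(b_lp, a_lp, force), filtfilt(b_hp, a_hp, ldd)

# Example with synthetic traces (1001 samples over 10 s)
t = np.linspace(0, 10, 1001)
force = 5 + 2 * np.sin(2 * np.pi * 1.2 * t)   # Fy with gait ripple
ldd = 50 + 300 * np.sin(2 * np.pi * 0.6 * t)  # LDD with an offset
force_f, ldd_f = condition_signals(force, ldd)
```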
Experimental Study

[User path guiding the walker (dashed line) to evaluate the parameter estimation: (a) performing a straight path; (b) performing an eight-shaped curve (lemniscate).]
First experiment:

- SL = 300 mm & GC = 0.6 steps/s, vh = 180 mm/s.
- SL = 300 mm & GC = 1 steps/s, vh = 300 mm/s.
- SL = 600 mm & GC = 0.6 steps/s, vh = 360 mm/s.
Experimental Study

[Temporal data and frequency spectrum of hip orientation and upper-limb guiding forces while performing a straight path.]
Experimental Study

[Temporal data of hip orientation and upper-limb guiding forces while performing an eight-shaped curve: raw signal (R) and signal after low-pass filtering (LP).]
Multimodal Interface for Human Mobility Assistance
Controller based on pHRI + cHRI

[Block diagram: the cHRI controller compares the measured human-walker distance d with the desired dd (distance error) and, together with vh, commands the walker linear velocity vr; the pHRI controller maps the upper-limb forces Fl, Fr to the walker angular velocity ωr.]
Multimodal Interface for Human Mobility Assistance
Controller based on pHRI + cHRI
[Figure: (a) distance control, with the walker regulating d toward dd ahead of the human; (b) orientation control, where the vertical and lateral handle forces (Fly, Flz, Fry, Frz) steer the walker (vw, θw) as the human turns (vh, θh).]

[Control pipeline for ωr: input conditioning (gain, saturation & dead zone) on Fl and Fr, then a fuzzy controller, then output conditioning (low-pass filtering & gain).]
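The conditioning pipeline above can be sketched as follows in Python. The dead-zone width, saturation limit and gains are hypothetical values, and the fuzzy inference stage of the source is replaced here by a proportional rule on the left/right force difference, so this is illustrative only:

```python
def dead_zone(x, dz):
    """Suppress small handle forces (grip noise) inside +/- dz."""
    if abs(x) < dz:
        return 0.0
    return x - dz if x > 0 else x + dz

def saturate(x, lim):
    """Clip the conditioned force to +/- lim."""
    return max(-lim, min(lim, x))

def steering_command(f_left, f_right, dz=2.0, lim=20.0,
                     gain_in=1.0, gain_out=0.05):
    """Input conditioning -> controller -> output gain.

    A proportional surrogate stands in for the fuzzy controller; the
    output low-pass filtering stage is omitted for brevity.
    Returns the walker angular velocity command [rad/s].
    """
    fl = saturate(dead_zone(gain_in * f_left, dz), lim)
    fr = saturate(dead_zone(gain_in * f_right, dz), lim)
    return gain_out * (fr - fl)

# Example: user pushes harder with the right hand -> turn command
w_r = steering_command(f_left=4.0, f_right=10.0)  # 0.05*(8-2) = 0.3 rad/s
```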
Controller Evaluation

[Course: two 8 m segments with a 2 m separation, walked from start to end at vh.]
Controller Evaluation

Exoskeleton + Walker

[Platform: elevation system supporting the user's body on the NFWalker frame; discharge control system managing the user's weight; PC/104-plus running Matlab Real Time xPC; LRF sensor; DC motors for locomotion.]
Exoskeleton + Walker

[Figure: (a) the LRF sensor splits its field of view into a right-leg zone and a left-leg zone, measuring dl and dr; (b) the leg-distance difference ldd is computed from the two detections.]
Exoskeleton + Walker

[Figure: (a) the walker moves ahead of the user at vr, regulating the distance d toward the desired dd; (b) the distance error drives vr against the human velocity vh.]
Exoskeleton + Walker
[Plots over 10 s: (a) legs distances (dr, dl, d) [m]; (b) ldd [m]; (c) cadence [steps/s]; (d) step length [m]; (e) velocities: commanded vr(C), measured vr(R) and vh [m/s]. A second panel overlays vh, vr(C) and vr(R) for four trials, all within 0-0.5 m/s.]
Exoskeleton + Walker
[Plots for two long trials (~130 s and ~110 s): (a) legs distances (dr, dl, d) [m]; (b) distance error [m], staying within about ±0.1 m; (c) linear velocities: commanded vr(C), measured vr(R) and vh [m/s].]
(Bio-inspired) Exoskeleton + Walker
[Figure: bio-inspired sensing layout across panels (a)-(d), with markers/joints labelled RASIS, EJ_ASIS, SAC, EJ_FR, RFRC, RFE, EJ_KE, RKE and RKRC.]
Eksowalker + Environment

[Sensor suite: LIDAR, LRF, Kinect, IMU, 3D force sensor, external camera, and the walker's motor + encoder.]
Human-Robot Interaction Strategies
for Gait Assistance and Rehabilitation

Prof. Dr. Carlos A. Cifuentes


carlos.cifuentes@escuelaing.edu.co

Dept. of Biomedical Engineering


Biomedical Engineering Group (GiBiome)
Escuela Colombiana de Ingeniería Julio Garavito