
Supported by Coastal Geosciences, Office of Naval Research

Vision-Based Navigation in Riverine Environments

PI: Soon-Jo Chung; Co-PI: Seth Hutchinson; Junho Yang, Ashwin Dani, Aditya Paranjape, Kevin Meier, Xichen Shi; (Undergrad) Sunil Patel, Shubham Gupta, Martin Miller, Simon Peter; University of Illinois at Urbana-Champaign, Urbana, IL

August 27, 2013

Outline

Finished and ongoing work to date


- Review of Prior Work (Mono-vision SLAM)
- Monocular Vision-Based SLAM with IMU and Water Reflection (JFR, GNC 2013)
- Curve-Based SLAM using Stereo Vision (IROS 2012)
- New nonlinear estimator for stochastic systems (IEEE T-AC, IEEE CDC 2012)
- Visual SLAM using Higher-Level Features (IROS 2012, IEEE T-RO)
- Vision-based path planning for agile flight in riverine/forest environments (GNC 2013, IROS 2013, IJRR)
- Experimental validation of the algorithms on UAS platforms

- Publication / Presentation Results
- Concluding Remarks
- References



Overall Project Objectives

Develop novel hybrid vision based simultaneous localization and mapping (SLAM) algorithms and path planning/control strategies that can be implemented in a small unmanned aerial system (UAS) flying in a complex riverine environment.
Design a hybrid vision architecture that hierarchically combines monocular depth perception and feature correlation with stereopsis (Tasks 1-4). Integrate the vision perception principles with 3D feature extraction, probabilistic feature mapping, segmentation, and tracking algorithms (Tasks 2-3).

Develop computationally efficient and stable vision-based SLAM algorithms (Task 5).
Develop path planning, guidance, and control algorithms for complex 3D riverine environments (Task 6).
Experimental validation using UIUC's helicopter and fixed-wing UASs (Task 7).

Review of Prior Results in Vision-based Navigation

J. Yang, D. Rao, S.-J. Chung, and S. Hutchinson, Monocular Vision Based Navigation in GPS-Denied Riverine Environments, AIAA Infotech at Aerospace Conference, St. Louis, MO, Mar. 2011, AIAA-2011-1403.

Objective

Estimate a 3D point cloud map


Use feature image points and their reflections on the river
Overcome the drawback of the inverse-depth parameterization

Generate the trajectory of the UAS
Navigate a UAS inside a riverine environment

Location for navigation experiments (Crystal Lake, Urbana, IL)

River border marked with red and reflections shown on river surface

Approach

Feature Extraction & Depth Perception

Monocular Vision based SLAM


Steps for Navigation in Riverine Environments
- Measure MAV attitude by using epipolar geometry (see the sketch below)
- Extract coplanar features around the river surface
- Measure landmark range and bearing
- Navigate the MAV with the FastSLAM algorithm
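A minimal sketch of the attitude-from-epipolar-geometry step, using generic OpenCV calls rather than the authors' implementation (function and variable names are assumed):

```python
import cv2
import numpy as np

def relative_attitude(pts_prev, pts_curr, K):
    """Estimate the relative camera rotation between two frames from
    tracked feature points via the essential matrix (a generic sketch)."""
    pts_prev = np.asarray(pts_prev, dtype=np.float64)
    pts_curr = np.asarray(pts_curr, dtype=np.float64)
    # Essential matrix with RANSAC outlier rejection
    E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                      method=cv2.RANSAC, threshold=1.0)
    # Decompose into rotation and (unit-norm) translation direction
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)
    return R, t
```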

Results

SLAM Results
Initial camera attitude was determined from the FOE (focus of expansion)
Rotation in later frames was determined relative to this orientation

Results

SLAM at the UIUC Engineering Quad


Utilize the structural commonalities of diverse environments
Estimate a path of an MAV

Recent Results (cont'd)

SLAM at the UIUC Engineering Quad


Compensate angular drift with the FOE
Track map features for consistent measurement

Results (cont'd)

Overlay of Mapping Results with MAV Trajectory

UIUC Engineering Quad

Attitude measurement from vision



Indoor Results

SLAM at the UIUC Beckman Institute


The algorithm works in a diverse range of environments that have a path with a planar surface


Indoor Results (cont'd)

Results from Indoor Environments

Beckman Institute

Attitude measurement from vision



Inertial-Aided Vision-Based Localization and Mapping in a Riverine Environment with Reflection Measurements
Junho Yang, Ashwin Dani, Soon-Jo Chung, and Seth Hutchinson, University of Illinois at Urbana-Champaign, Urbana, IL

ONR Progress Report Meeting


J. Yang, A. Dani, S.-J. Chung, and S. Hutchinson, Nonlinear Observer Design for UAS Navigation in Riverine Environments, Journal of Field Robotics (to be submitted), 2013.
J. Yang, A. Dani, S.-J. Chung, and S. Hutchinson, Inertial-Aided Vision-Based Localization and Mapping in a Riverine Environment with Reflection Measurements, AIAA Guidance, Navigation, and Control Conference, Boston, MA, August 2013.

Outline

Introduction
- Motivation / Contribution
- Related work


Navigation System
- Overview of our system
- Dynamic model / Measurement model
- Reflection matching algorithm
- Observability / Nonlinear estimator
Results
- Numerical simulations
- Experiments
Conclusion

Introduction

Motivation

Reconnaissance and surveillance


Operation in diverse environments with UAVs
Our goal: navigation in riverine environments

Riverine environments
- Reflections: an important aspect of riverine environments
- Forest canopy: GPS might not always be available
- Navigation using local onboard sensing and reflections

http://youtube.com/watch?v=Oqgz1PLRmM8

Introduction

Contribution

Observable system
SLAM system: known to be unobservable
Our system: observable from multiple views using reflections and onboard sensor measurements

Robot-centric mapping
The position of each landmark is estimated w.r.t. the UAV body frame, the frame closest to the landmarks
Normalized coordinates are used to alleviate the nonlinearity

Navigation in riverine environments


Localization and mapping models are derived specifically for GPS-denied riverine environments with lightweight sensors.
To our knowledge, this is the first result of localization and mapping that exploits multiple views with reflections in a riverine environment.

Introduction

Related Work

SLAM with monocular vision


Filter-based SLAM with inverse depth parameterization
J. Civera, A. Davison, and J. M. M. Montiel, Inverse depth parametrization for monocular SLAM, IEEE Transactions on Robotics, vol. 24, no. 5, pp. 932-945, 2008.
J. Sola, T. Vidal-Calleja, J. Civera, and J. M. M. Montiel, Impact of landmark parametrization on monocular EKF-SLAM with points and lines, The International Journal of Computer Vision, vol. 97, no. 3, pp. 339-368, 2012.

Representing landmarks w.r.t. anchor positions

SLAM with higher level features


Curve based localization and mapping
D. Rao, S.-J. Chung, and S. Hutchinson, CurveSLAM: An Approach for Vision-based Navigation without Point Features, IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, Oct. 2012, pp. 4198-4204.

Image moment based localization and mapping


A. Dani, G. Panahandeh, S.-J. Chung, and S. Hutchinson, Image Moments for Higher-Level Feature Based Navigation, IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, November 2013, to appear. (Also to be submitted to the International Journal of Robotics Research.)
A. P. Dani, S.-J. Chung, and S. Hutchinson, Observer Design for Stochastic Nonlinear Systems via Contraction-based Incremental Stability, IEEE Transactions on Automatic Control, conditionally accepted (regular paper).

Introduction

Related Work (Cont'd.)

Position estimation with a mirror


Calibration of a camera with known points
J. Hesch, A. Mourikis, and S. Roumeliotis, Mirror-based extrinsic camera calibration, Springer Tracts in Advanced Robotics, vol. 57, pp. 285-299, 2009.

Structure from motion with epipolar geometry


G. L. Mariottini, S. Scheggi, F. Morbidi, and D. Prattichizzo, Planar mirrors for image-based robot localization and 3-D reconstruction, Mechatronics - Special Issue on Visual Servoing, vol. 22, pp. 398-409, 2012.

Navigation in riverine environments


Monocular vision with planar ground assumption
J. Yang, D. Rao, S.-J. Chung, and S. Hutchinson, Monocular vision based navigation in GPS denied riverine environments, in Proc. AIAA Infotech at Aerospace Conference, AIAA-2011-1403, St. Louis, MO, March 2011.
S. Scherer, J. Rehder, S. Achar, H. Cover, A. Chambers, S. Nuske, and S. Singh, River mapping from a flying robot: state estimation, river detection, and obstacle mapping, Autonomous Robots, vol. 33, no. 1-2, pp. 189-214, 2012.

River mapping with vision, LIDAR, IMU, and GPS.

Navigation System

Overview of Our System

Quadcopter UAV
Sensors: camera, IMU, magnetometer, and an ultrasound altimeter
Dedicated dual-core single-board computer for vision

Localization and Mapping


Locations of the landmarks and the UAV are estimated simultaneously
[Block diagram: magnetometer, IMU, camera, and altimeter measurements feed UAV motion propagation, vision-system propagation, and the measurement update, with results represented in the world frame]

Localization and Mapping

Localization and Mapping Results

[Photo: quadcopter UAV platform with onboard autopilot]

Quadcopter Video

Navigation System

Overview of Our System (Cont'd.)

Motion model
World-centric model for UAV localization
Robot-centric model for landmark mapping

Measurement model
Camera projection of landmarks from multiple views
Altitude measurements from the river surface

Navigation System

Dynamic Model

Localization of the UAV


World-centric model composed of location and linear velocity of the UAV

Mapping of landmarks
Robot-centric model of normalized image coordinates and inverse depth parameterization
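As a sketch of this parameterization (notation assumed here, not taken from the slides), each landmark can be written in the UAV body frame through its normalized image coordinates and inverse depth:

\[
\mathbf{y} = \begin{bmatrix} u & v & \rho \end{bmatrix}^{\top}, \qquad
\mathbf{p}^{B} = \frac{1}{\rho}\begin{bmatrix} u & v & 1 \end{bmatrix}^{\top},
\]

where ρ = 1/d is the inverse of the landmark depth along the optical axis; distant or poorly known depths then stay numerically well behaved.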


Navigation System

Vision Measurements

Current view in the body frame


Camera / unit-sphere projection of the current view
Transformation to the UAV body frame

Initial view in the body frame


Normalized coordinates when the feature first appears
Representation in terms of the UAV and landmark state


Navigation System

Vision Measurements (Cont'd.)

Measurement of reflections
Camera projection of reflections
Representation in terms of the UAV and landmark state
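One illustrative way to write such a reflection measurement, assuming a locally planar river surface and using our own notation: with the river surface taken as the plane z = 0 in the world frame, a landmark at position \(\mathbf{p}\) has the virtual mirror point

\[
\mathbf{p} = \begin{bmatrix} x & y & z \end{bmatrix}^{\top} \;\mapsto\; \mathbf{p}' = \begin{bmatrix} x & y & -z \end{bmatrix}^{\top},
\]

and the reflection measurement is the ordinary camera projection of \(\mathbf{p}'\), expressed in terms of the UAV and landmark states.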


Navigation System

Reflection Matching
Algorithm 1: Matching of Reflections in the River
Input: camera orientation and image data
Output: matching of a real object point and its reflection
1. if no reflection match do
2.   Select features with the Shi-Tomasi corner detector
3.   Slide a flipped image patch and compute a correlation coefficient
4.   Find a location that has high matching probability
5.   Compute a matching region using UAV orientation data
6.   Reject the match using the search region
7.   Track the centers of matched image patches with pyramidal KLT
8. end if

Matching of reflections
Use template matching with correlation coefficients

Region for matching


Reject outliers: set a region across the source image.
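A minimal sketch of steps 2-4 of Algorithm 1 using OpenCV template matching; the patch handling, threshold, and function names are assumptions, not the authors' code:

```python
import cv2
import numpy as np

def match_reflection(image_gray, patch, search_region):
    """Find the reflection of a feature patch by template matching a
    vertically flipped copy of the patch inside a predicted search region."""
    x0, y0, x1, y1 = search_region              # region below the waterline
    roi = image_gray[y0:y1, x0:x1]
    flipped = cv2.flip(patch, 0)                # mirror the patch vertically
    # Normalized cross-correlation coefficient, as on the slide
    scores = cv2.matchTemplate(roi, flipped, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    if max_val < 0.8:                           # reject weak matches (threshold assumed)
        return None
    u = x0 + max_loc[0] + patch.shape[1] // 2   # center of the matched patch
    v = y0 + max_loc[1] + patch.shape[0] // 2
    return (u, v), max_val
```

Matches falling outside the orientation-predicted search region would then be rejected, and the surviving patch centers tracked with pyramidal KLT, as in steps 5-7.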


Navigation System

Observability

Observability of SLAM
- Typically, SLAM is known to be unobservable
- Exploit available measurements from the UAV in a riverine environment: attitude, altitude from the river surface, and the current-view, initial-view, and reflection projections of features
- The observability matrix becomes full rank starting from measurements of only two features and the UAV altitude
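A generic numerical way to check such a rank condition (a sketch only; the slides' analysis uses the specific measurement set listed above):

```python
import numpy as np

def observability_rank(F, H, n_steps=10):
    """Stack H, HF, HF^2, ... for a linearized system and return the rank
    of the resulting observability matrix together with the state dimension."""
    n = F.shape[0]
    blocks, M = [], H.copy()
    for _ in range(n_steps):
        blocks.append(M)
        M = M @ F
    O = np.vstack(blocks)
    return np.linalg.matrix_rank(O), n   # observable (locally) when rank == n
```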


Navigation System

EKF Estimator

Estimated state
Location and linear velocity of the UAV
Normalized coordinates and inverse depth of landmarks

Motion prediction
Prediction of state and covariance estimates


Navigation System

EKF Estimator (Cont'd.)

Measurement update
Update of mean and covariance estimates

Measurement model

Kalman filter gain

Landmark mapping
World-centric representation of estimated landmarks
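For reference, a generic EKF predict/update step of the kind described on these two slides (shapes, names, and the linearization interface are assumptions):

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    """One EKF cycle: f/h are the motion and measurement models,
    F/H their Jacobians evaluated at the current estimate."""
    # Motion prediction: propagate state and covariance estimates
    x_pred = f(x, u)
    Fk = F(x, u)
    P_pred = Fk @ P @ Fk.T + Q
    # Measurement update: Kalman gain, then mean and covariance correction
    Hk = H(x_pred)
    S = Hk @ P_pred @ Hk.T + R
    K = P_pred @ Hk.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new
```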


Navigation System

SDC Estimator

State dependent coefficient (SDC) forms


A nonlinear function can be written in the form f(x) = A(x)x (a state-dependent coefficient form)
Convex optimization combines the predicted covariances using the criterion of minimizing the trace
[Block diagram: sensor measurements feed SDRE estimators 1 through n, whose pdfs are optimally combined]

A. P. Dani, S.-J. Chung, and S. Hutchinson, Observer Design for Stochastic Nonlinear Systems via Contraction-based Incremental Stability, IEEE Trans. Automatic Control, conditionally accepted; also IEEE CDC 2012.

Navigation System

SDC Estimator (Cont'd.)


Motion prediction with multiple SDC forms

Optimal combination of the predicted covariances (see the sketch below)

SDC estimator gain

Measurement update with optimal covariance
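A sketch of the combination criterion referenced above, in our notation: with predicted covariances \(P_i^-\) obtained from the individual SDC forms, the combined prediction is a convex combination whose weights minimize the trace,

\[
P^{-} = \sum_{i=1}^{n} \alpha_i P_i^{-}, \qquad
\min_{\alpha}\ \operatorname{tr}\!\Big(\sum_{i=1}^{n} \alpha_i P_i^{-}\Big)
\quad \text{s.t.} \quad \sum_{i=1}^{n} \alpha_i = 1,\ \ \alpha_i \ge 0.
\]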


Results

Numerical Simulations

Simulation setup
The UAV starts from the origin
Twenty landmarks are randomly distributed
All landmarks and their reflections are measurable

Initial conditions
Location of the UAV
Linear velocity of the UAV
State of the landmarks

Disturbance in motion
Gaussian white noise
Linear acceleration and angular velocity: 0.01
Attitude and altitude: 0.001


Results

Numerical Simulations (Cont'd.)

Localization and mapping with EKF


Results

Numerical Simulations (Cont'd.)

Localization and mapping with EKF


Results

Numerical Simulations (Cont'd.)

Analysis of estimation results


Estimation error: converges towards zero
Uncertainty: bounded since the system is observable

[Plots: estimation errors of the UAV location (x, y, z in m) and linear velocities (m/s) over 100 s, with 3-standard-deviation bounds]

Results

Numerical Simulations (Cont'd.)

Average root mean squared error


Location of the UAV: 0.1435 m
Linear velocity of the UAV: 0.0644 m/s
Inverse depth of landmarks: 0.0019 (1/m)
[Plots: RMSE of the UAV location, linear velocity, and landmark inverse depth; average point-feature estimation errors (normalized coordinates and inverse depth) with 3-standard-deviation bounds over 100 s]

Results

Experiments

Demonstration at a creek
Data collection: Boneyard Creek at the UIUC Engineering Quad, with measurements from our quadcopter UAV
Data processing: data processed offline


Results

Experiments (Cont'd.)

Localization and mapping results


Results

Experiments (Cont'd.)

Localization and mapping results


Results

Experiments (Cont'd.)

Onboard sensor readings


UAV motion model: attitude, angular velocity, linear acceleration, altitude data are used
[Plots: onboard UAV attitude (roll, pitch, yaw), gyroscope, accelerometer, and altimeter readings over 70 s]

Results

Experiments (Cont'd.)

UAV state estimation


Location and linear velocity of the UAV are estimated
Measurement model: altitude and vision measurements

[Plots: estimated UAV location (x, y, z) and linear velocities over 70 s]

Results

Experiments (Cont'd.)

Landmark state estimation


Normalized coordinates: directly measurable
Depth of each landmark: estimated in terms of its inverse

[Plots: estimated normalized coordinates and inverse depth of the 1st and 4th point features vs. ground truth]

Conclusion

- We presented an inertial-aided vision-based localization and mapping algorithm for riverine environments.
- Reflections are an important aspect of riverine environments, which we can exploit to fully constrain the system.
- We made the system observable by deriving a vision measurement model with the projection of features, their initial observations, their reflections, and onboard sensor readings.
- We estimated the position of the features with respect to the UAV body frame, the frame closest to the features.
- To our knowledge, we report the first result of performing localization and mapping by exploiting multiple views with reflections of features in a riverine environment.


Higher Level Structures (Curves) for SLAM

D. Rao, S.-J. Chung, and S. Hutchinson, CurveSLAM: An Approach for Vision-based Navigation without Point Features, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Algarve, Portugal, October 7-12, 2012.
D. Rao, S.-J. Chung, and S. Hutchinson, CurveSLAM: An Approach for Vision-based Navigation without Point Features, International Journal of Robotics Research, to be submitted.

Motivation

Objective

Develop a novel curve-based algorithm to perform SLAM utilizing only the path edge curves from stereo data. Benefits of curve-based SLAM:
- Can represent more structure in the environment
- Much smaller state space and uncluttered map
- More useful semantic information for planning and control

Can we perform SLAM in these environments purely by exploiting the path / river edge structure?


Approach

Overview

Observing a planar world curve in two different images, we can determine the curve parameters and the plane orientation.
This eliminates stereo matching of points; instead, a model is fit to find the curve parameters that minimize the reprojection error.


Approach

Curve Parametrization

We utilize planar cubic Bezier curves, defined by 4 control points \(\mathbf{p}_0, \ldots, \mathbf{p}_3\):

\[
\mathbf{c}(t) = \sum_{i=0}^{3} \binom{3}{i} (1-t)^{3-i}\, t^{i}\, \mathbf{p}_i, \qquad t \in [0, 1]
\]

Affine transformation on the curve is the same as transforming the control points
Projected curve in image is approximately equivalent to projection of control points.

Can project each control point to the image using the stereo projection equations:
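For reference, a standard rectified-stereo projection has the form below (the symbols f, b, c_u, c_v are assumed here; the slide's own equations are not reproduced). A control point (X, Y, Z) expressed in the left-camera frame projects to

\[
u_L = f\,\frac{X}{Z} + c_u, \qquad u_R = f\,\frac{X - b}{Z} + c_u, \qquad v = f\,\frac{Y}{Z} + c_v,
\]

where b is the stereo baseline and f the focal length.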


Approach

Curve Fitting

Nonlinear model fit to estimate the planar curve parameters and the plane orientation in the world frame directly
Levenberg-Marquardt optimization

Iterate:
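Each iteration then takes the standard Levenberg-Marquardt form (generic notation, not the slide's): with stacked reprojection residual r(θ) and Jacobian J,

\[
\theta_{k+1} = \theta_k - \left(J^\top J + \lambda I\right)^{-1} J^\top r(\theta_k),
\]

with the damping parameter λ adapted according to whether the reprojection error decreases.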


Approach

SLAM

State / process model with process noise
Observations of out-of-plane pose and curve control points: EKF-based SLAM
Curve correspondence? Need to find t values and split curves


Approach

Data Association

Curve splitting
Using De Casteljau's algorithm, the control points of a split curve are a linear transformation of the original (see the sketch after this list)

Curve correspondence
Track end points of map curves in images
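A minimal sketch of the curve-splitting step via De Casteljau's algorithm (generic code; variable names are ours):

```python
import numpy as np

def split_bezier(ctrl, t):
    """Split a cubic Bezier curve at parameter t using De Casteljau's
    algorithm. ctrl: (4, 2) array of control points; returns the control
    points of the left and right sub-curves."""
    p0, p1, p2, p3 = ctrl
    a = (1 - t) * p0 + t * p1
    b = (1 - t) * p1 + t * p2
    c = (1 - t) * p2 + t * p3
    d = (1 - t) * a + t * b
    e = (1 - t) * b + t * c
    f = (1 - t) * d + t * e          # point on the curve at parameter t
    left = np.array([p0, a, d, f])
    right = np.array([f, e, c, p3])
    return left, right
```

Both sub-curves' control points are linear combinations of the original four, consistent with the linear-transformation property noted above.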


Results

Vision Results

- Stereo vision data on various paths of length up to 100 m
- SLAM estimate based purely on path edge curves
- The algorithm can also recover from a series of poor curve measurements (below)


Results

Simulation (Consistency) Results


- Simulated two loops of the three environments shown (total lengths of 160 m, 250 m, and 400 m)
- Normalized Estimation Error Squared (NEES) used as a measure of filter consistency (95% confidence interval)


Results

Simulation (Consistency) Results

NEES plots are over 50 Monte Carlo runs; the 95% confidence interval is shown in red (see the sketch below)
Improvement in consistency over previous work
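For reference, the per-step NEES statistic averaged in these plots (a generic sketch with assumed variable names):

```python
import numpy as np

def nees(x_true, x_est, P):
    """Normalized Estimation Error Squared for one time step: e' P^{-1} e,
    where e is the state estimation error and P the filter covariance."""
    e = x_true - x_est
    return float(e @ np.linalg.solve(P, e))

# Averaging nees(...) over the Monte Carlo runs at each time step and
# comparing against the chi-square 95% bounds yields the consistency plots.
```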

Conclusions

Can navigate with fewer landmark structures (smaller state space)

Can operate in environments with few or non-unique point features


Can produce more structured maps of the environment, especially when point features aren't meaningful landmarks
Could provide useful cues for planning / control

More experimentation in different environments needed


A. P. Dani, S.-J. Chung, and S. Hutchinson, Observer Design for Stochastic Nonlinear Systems via Contraction-based Incremental Stability, IEEE Transactions on Automatic Control, conditionally accepted in 2013.
A. P. Dani, S.-J. Chung, and S. Hutchinson, Observer Design for Stochastic Nonlinear Systems using Contraction Analysis, Proc. IEEE Conference on Decision and Control (CDC), Maui, HI, December 2012, pp. 6028-6035.



A. Dani, S.-J. Chung, and S. Hutchinson, SLAM using Higher Level Feature Representation, IEEE Trans. Robotics (to be submitted), 2013.
A. Dani, G. Panahandeh, S.-J. Chung, and S. Hutchinson, Image Moments for Higher-Level Feature Based Navigation, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, November 3-7, 2013, to appear.


Conclusions

We introduced a new vision-based SLAM algorithm which incorporates higher-level structures, such as planar surfaces, into visual navigation by using the variation of the image moments of the planar regions in 2D images. The map is represented by using regions instead of a large number of feature points. We have derived a camera-IMU SLAM formulation which represents the scene using a minimal set of parameters. The RMSE and NEES comparison shows that the proposed estimator outperforms the EKF in terms of accuracy and consistency.


Outline

1. Motivation and problem formulation
2. Numerical optimization
3. Time delay-based motion primitive
4. Closed-loop simulations and experiments


Motivation

(a) Goshawk flight through a forest
(b) Riverine forest

- High-speed flight through a dense, unstructured, unknown field of obstacles
- Collision-free flight guaranteed below a critical speed (Karaman and Frazzoli)
- How do we ensure collision-free flight in a practical setting at high speeds?

Motion Primitives Library


[Diagram of flight modes: forward flight, reversal, aggressive turn, and perching]

- Forward flight: motion primitive continuously parametrized in the control input space
- Perching: only in the event of an absolutely unavoidable collision
- Aggressive Turn Around (ATA): addressed in this paper

1. Paranjape, Chung, and Kim, Novel Dihedral-based Control of Flapping-Wing Aircraft with Application to Perching, IEEE Transactions on Robotics, 2013, to appear. 2. Paranjape, Meier, Shi, Chung, and Hutchinson, Motion Primitives and 3-D Path Planning for Fast Flight through a Forest, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013


ATA Problem Formulation

Objective: reverse the aircraft heading while minimizing the volume required for the reversal.

\[
\min_{u}\ \int_{0}^{t_f} \left( \sigma_x x^2 + \sigma_y y^2 + \sigma_h h^2 + u^\top Q u \right) dt
\]

subject to

\[
|\psi(t_f) - \psi(0)| = \pi \ \text{(turn around)}, \qquad
u \in [u_{\min},\, u_{\max}],\ Q > 0 \ \text{(bounded control inputs)}, \qquad
\gamma(t_f) = 0 \ \text{(recover to level flight)}
\]


Aircraft Model
Define a characteristic length and specific thrust (per unit mass):

\[
k = \frac{\rho S}{2m}, \qquad T = \frac{\text{Thrust}}{m} \tag{1}
\]

Equations of motion:

\[
\dot{x} = V\cos\gamma\cos\psi, \qquad \dot{y} = V\cos\gamma\sin\psi, \qquad \dot{h} = V\sin\gamma
\]
\[
\dot{V} = T\cos\alpha - kV^{2}C_D(\alpha) - g\sin\gamma, \qquad
\dot{\gamma} = \frac{T\sin\alpha + kV^{2}C_L(\alpha)}{V}\cos\mu - \frac{g\cos\gamma}{V}, \qquad
\dot{\psi} = \frac{T\sin\alpha + kV^{2}C_L(\alpha)}{V\cos\gamma}\sin\mu \tag{2}
\]

Control inputs: thrust (T), angle of attack (α), and wind-axis roll angle (μ)
Actuator dynamics:

\[
\dot{T} = a_T(T_c - T), \qquad \dot{\alpha} = a_\alpha(\alpha_c - \alpha), \qquad \dot{\mu} = a_\mu(\mu_c - \mu) \tag{3}
\]


ATA Maneuver Proles


[Figures: Immelman turn (in the vertical plane), variable-altitude turn, level turn (constant altitude)]

- Immelman turn: in the vertical plane (narrow turning volume)
- Level turn: in the horizontal plane (very little altitude change permissible)
- Optimal solutions in this paper will be in the form of 3-D turns

Aircraft Model
Defined two constants:

\[
k = \frac{\rho S}{2m} = 0.35, \qquad T = \frac{\text{Thrust}}{m} \in [0,\, 10]
\]

Actuator dynamics:

\[
\dot{T} = a_T(T_c - T), \qquad \dot{\alpha} = a_\alpha(\alpha_c - \alpha), \qquad \dot{\mu} = a_\mu(\mu_c - \mu), \qquad a_T = 1,\ a_\alpha = a_\mu = 2
\]

Aircraft data for simulations:

\[
C_L = 0.4 + 2.5\alpha, \qquad C_D = 0.035 + 0.36\,C_L^{2} + C_{D,\text{spoil}} \ \text{(after flipping)} \tag{4}
\]


Analytical Solution
Define the Hamiltonian

\[
H = \sigma_x x^{2} + \sigma_y y^{2} + \sigma_h h^{2} + u^{\top} Q u + \lambda^{\top} f(\mathbf{x}, u),
\]

where f denotes the dynamics (1)-(3) and λ the costate vector.

- The control inputs are found by solving ∂H/∂u = 0
- Ignoring the T sin(α)/V term gives α_c = α_stall · sign(·)
- The control inputs are determined numerically (using GPOPS-II)


Numerical Solution: Case 1


Tight constraint on the altitude: σ_x = σ_y = 1 and σ_h = 5

[Plots: (c) 3-D trajectory, (d) wind-axis angles, (e) thrust and speed, (f) heading angle]

Numerical Solution: Case 2


Narrow volume: σ_x = σ_h = 1 and σ_y = 5

[Plots: (g) 3-D trajectory, (h) wind-axis angles, (i) thrust and speed, (j) heading angle]

Time Delay-Based Primitive


What we learn from the numerical optimisation

[Plot: wind-axis angle commands (deg) vs. time, annotated with the pull-up, bank, and recover-to-level-flight segments]

T_c ≈ 5 (a constant) and α_c ≈ α_stall (the limiting value) throughout the turn


The wind-axis roll angle command μ_c can be split into three segments (see the sketch below):
- Segment 1 (0 ≤ t ≤ τ_d): μ_c = 0; the aircraft pulls up and decelerates
- Segment 2 (|ψ - ψ(0)| < 150 deg): μ_c = μ_c,max (rapid turn)
- Segment 3: μ_c = 0 (recovery to level flight)

Design of a motion primitive:
- The time delay τ_d depends on the shape of the turning volume
- A three-segment motion primitive, parametrized by τ_d (up to a critical value), can be approximated analytically
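A compact sketch of the resulting three-segment command logic; the segment boundaries follow the slide, while the symbols and other details below are assumptions:

```python
import numpy as np

def ata_commands(t, psi, psi0, tau_d, mu_max, alpha_stall, T_c=5.0):
    """Three-segment ATA command sketch: hold alpha_c at stall and T_c
    constant; keep mu_c = 0 for the first tau_d seconds (pull up), then
    command mu_max until the heading has reversed by ~150 deg, then roll
    back to level flight."""
    alpha_c = alpha_stall
    if t < tau_d:                                  # Segment 1: pull up, decelerate
        mu_c = 0.0
    elif abs(np.degrees(psi - psi0)) < 150.0:      # Segment 2: rapid turn
        mu_c = mu_max
    else:                                          # Segment 3: recover to level flight
        mu_c = 0.0
    return T_c, alpha_c, mu_c
```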

Simulations of the ATA Primitive


Figure: Plots showing the aggressive-turn trajectory for τ_d ∈ [0, 1] s (dark curves denote a larger time delay) in the x-y and x-z planes.
- Level turns don't usually require α_stall and μ_c,max commands
- Immelman turns require a rather intricate T_c-μ_c schedule

Experiments: Platform

Figure: ParkZone MiniVapor

- No direct roll control device (for μ_c; this increases the time constant)
- Rudder used for lateral-directional and roll control


Control Law Design

- Constant thrust and elevator: T_c = 5 and δ_e = 20 deg (maximum up-position)
- Tracking μ typically involves an interplay between roll and yaw (roll angle + zero sideslip)
- Hierarchical control for μ:

\[
p_c = k_{p,\mu}(\mu_c - \mu) + k_{I,\mu}\int_0^t (\mu_c - \mu)\,dt, \qquad
\delta_r = k_{p,p}(p_c - p) + k_{I,p}\int_0^t (p_c - p)\,dt
\]

- Rely on directional stability to regulate the sideslip
- Recovery segment


Experimental Results
Comparison of simulations and experiments
[Plot: turn radius (m) vs. time delay τ_d (s) for simulations and experiments]
- Optimum time delay for minimum turn radius: 0.2 s (experimental) vs. 0.4 s (simulations)
- Effect of adding a control law

Motion Planning Algorithm

[Diagram: visible cone with the actual and effective trajectories; inaccessible regions lie to either side]

- Motion planning algorithm based on model predictive control
- Each step is a motion primitive employing constant control inputs
- Algebraic formula to calculate the control inputs from the equations of motion
- Finite aircraft agility modelled as a time delay between successive control inputs

Conclusions

- Enable high-speed flight through a forest
  - Forward flight primitives
  - ATA maneuver: retreat from dense pockets
- Numerical optimisation of the ATA maneuver
  - ATA primitive: α_c = α_stall and μ_c = μ_c,max
  - The time delay τ_d between the α_stall and μ_c,max commands depends on the shape of the turning volume
- Experimental demonstration and effect of adding a control law

Unmanned Aerial System Platforms

UAS Platform 1

X-8 UAS Platform Specifications


UAS Platform 1

[Photo: UAS Platform 1 with FPV camera (CCD RGB sensor), RC receiver, and Ardupilot]

UAS Platform 2

On-board Capabilities: - Autopilot (Ardupilot) - Inertial Measurement Unit - 3-axis Magnetometer - Ultrasound altimeter sensor - FPV CCD Camera - GPS (for measuring ground truth data)

Vision Computer


UAS Platform Ground Control


Flight Result

Autonomous Flight Mission


Conclusions

Developing a fixed-wing unmanned aerial platform with onboard autopilot, IMU, GPS sensors, ground control station, and communication channel
Developed an unmanned helicopter with onboard autopilot, IMU, GPS, and magnetometer sensors

Developing a vision-based object tracking algorithm


Publication / Presentation Plan


J. Yang, D. Rao, S.-J. Chung, and S. Hutchinson, Monocular Vision Based Navigation in GPS-Denied Riverine Environments, AIAA Infotech at Aerospace Conference, St. Louis, MO, Mar. 2011, AIAA-2011-1403.
D. Rao, S.-J. Chung, and S. Hutchinson, CurveSLAM: An Approach for Vision-based Navigation without Point Features, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Algarve, Portugal, October 7-12, 2012; in preparation for the International Journal of Robotics Research.
A. P. Dani, S.-J. Chung, and S. Hutchinson, Observer Design for Stochastic Nonlinear Systems using Contraction Analysis, Proc. IEEE Conference on Decision and Control (CDC), Maui, HI, December 2012.
A. P. Dani, S.-J. Chung, and S. Hutchinson, Observer Design for Stochastic Nonlinear Systems via Contraction-based Incremental Stability, IEEE Transactions on Automatic Control, conditionally accepted.
A. A. Paranjape, K. Meier, S.-J. Chung, and S. Hutchinson, Optimum Spatially Constrained Turns for Agile Micro Aerial Vehicles, AIAA Guidance, Navigation, and Control Conference, Boston, MA, August 2013.


Publication / Presentation Plan


J. Yang, A. Dani, S.-J. Chung, and S. Hutchinson, Nonlinear Observer Design for UAS Navigation in Riverine Environments, Journal of Field Robotics (to be submitted), 2013.
J. Yang, A. Dani, S.-J. Chung, and S. Hutchinson, Inertial-Aided Vision-Based Localization and Mapping in a Riverine Environment with Reflection Measurements, AIAA Guidance, Navigation, and Control Conference, Boston, MA, August 2013.
A. Dani, S.-J. Chung, and S. Hutchinson, SLAM using Higher Level Feature Representation, IEEE Transactions on Robotics (to be submitted), 2013.
A. Dani, G. Panahandeh, S.-J. Chung, and S. Hutchinson, Image Moments for Higher-Level Feature Based Navigation, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, November 3-7, 2013, to appear.
A. A. Paranjape, K. C. Meier, X. Shi, S.-J. Chung, and S. Hutchinson, Motion Primitives and 3-D Path Planning for Fast Flight through a Forest, The International Journal of Robotics Research (to be submitted), 2013.
A. A. Paranjape, K. C. Meier, X. Shi, S.-J. Chung, and S. Hutchinson, Motion Primitives and 3-D Path Planning for Fast Flight through a Forest, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, November 3-7, 2013, to appear.

Concluding Remarks

Developed a novel Curve-based SLAM method

Developed a new state estimation method


Developing a hybrid observer-based navigation algorithm for UAS in riverine environments
Developing a motion planning algorithm for agile flight in riverine environments

Developing a vision-based feature tracking algorithm


Our results will play a key role in enhancing the Navy's intelligence, surveillance, and reconnaissance missions in GPS-denied riverine environments.
