Brigham Young University
BYU ScholarsArchive
All Faculty Publications
2017-08-18

Relative Navigation of Autonomous GPS-Degraded Micro Air Vehicles

David O. Wheeler, Department of Electrical Engineering, Brigham Young University, dowheeler@gmail.com
Daniel P. Koch, Department of Mechanical Engineering, Brigham Young University, dkoch89@gmail.com
James S. Jackson, Gary J. Ellingson, Paul W. Nyholm, Timothy W. McLain, and Randal W. Beard

BYU ScholarsArchive Citation:
Wheeler, David O.; Koch, Daniel P.; Jackson, James S.; Ellingson, Gary J.; Nyholm, Paul W.; McLain, Timothy W.; and Beard, Randal W., "Relative Navigation of Autonomous GPS-Degraded Micro Air Vehicles" (2017). All Faculty Publications. 1962. http://scholarsarchive.byu.edu/facpub/1962

This working paper is available at BYU ScholarsArchive: http://scholarsarchive.byu.edu/facpub/1962

Relative Navigation of Autonomous
GPS-Degraded Micro Air Vehicles
David O. Wheeler, Daniel P. Koch,
James S. Jackson, Gary J. Ellingson, Paul W. Nyholm,
Timothy W. McLain, Randal W. Beard

1 Introduction
Economists anticipate that autonomous micro air vehicles (MAVs) will give rise to a handful of billion-dollar markets, including infrastructure inspection, security, precision agriculture, transportation, and delivery [1]. Using MAVs to inspect bridges, dams, chemical plants, and refineries is particularly motivating as it would take the place of dangerous, time-consuming, and expensive human inspections; however, these markets are still largely speculative because autonomous MAV navigation is an active research problem, especially in confined, unknown environments where global positioning system (GPS) measurements are unavailable or degraded.

Figure 1: MAV smoothly navigating through a GPS-degraded environment.

Current MAV navigation approaches rely heavily on GPS for estimation, guidance, and control; however, GPS signals can be spoofed, jammed, or blocked by structures and foliage. GPS measurements can be further degraded by multipath, atmospheric delays, or poor positioning of visible satellites. When GPS is unavailable, the MAV's global position and heading are not observable [2, 3]. As a result, the state estimates eventually drift, leading to filter inconsistency and non-optimal sensor fusion [4, 5]. Significant reliability issues arise when working with respect to a globally-referenced state during prolonged GPS dropout and heading uncertainty [6]. Despite these issues, many current GPS-denied MAV navigation approaches continue to estimate and control with respect to a single, inertial reference frame: either the GPS origin or the MAV's initial pose. While this formulation is convenient, the following scenarios highlight potential issues when a MAV is controlled with respect to its globally estimated position:

- A MAV's GPS receiver drops out for several minutes, requiring the estimator to rely strictly on IMU integration. When GPS is finally reacquired, the state estimate jumps significantly due to new GPS updates, and the jump in state destabilizes the vehicle.
- A multirotor begins indoors with a poor understanding of its global pose. As the vehicle moves outdoors and acquires GPS, the state estimator either incorrectly gates the global GPS measurement, or the vehicle's estimated state jumps due to the correction.
- During a long flight, a MAV's back-end optimizer gets bogged down, causing non-negligible delays in updating the vehicle's global state. The delay has a destabilizing effect on the low-level flight control algorithms.
- A MAV receives degraded GPS measurements or incorrectly registers a loop closure due to poor data association. The global state estimate degrades as a result, impacting stability and control. There is no method to later remove these effects from the estimator.

This paper uses the recently proposed relative navigation (RN) framework [7] as an alternative, observable approach for GPS-denied MAV navigation. By using a view matcher, such as camera-based visual odometry [8, 9] or laser-based scan matching [10, 11], relative navigation estimates the MAV's state with respect to its local environment. The relative state estimator ensures that the state is observable and the uncertainty remains bounded, consistent, and normally distributed [6]. By removing the global-state estimation from the front-end estimation, RN also ensures that large or delayed global-state updates, which come from incorporating loop-closure constraints or eventual global measurements, do not destabilize the flight-critical control and estimation feedback. Rather, the global state is estimated independently using a pose-graph back end where the non-Gaussian uncertainties can be better represented and robust optimization methods can identify and reject gross GPS outliers and false-positive loop closures.

The contributions of this paper are twofold. First, the details necessary to implement the complete RN framework are presented. Specifically, we describe the relative estimator reset operation necessary to maintain observability, and present the relative guidance and control strategy necessary to ensure smooth, stable flight. We discuss how to reconstruct the global state with consistent banana-shaped uncertainty distributions, and describe how to incorporate GPS and loop-closure information to improve the global state estimate. We explain how the high-level path planner facilitates autonomous missions and show how to leverage off-the-shelf algorithms for visual odometry, place recognition, and robust pose-graph optimization.

The second contribution consists of several prolonged hardware flight tests demonstrating the effectiveness of RN for autonomous GPS-degraded MAV navigation in varied, unknown environments, such as that shown in Figure 1. We demonstrate that the relative front end successfully fuses multiple vision sensors, works indoors and outdoors, and results in low drift with no state jumps. We further demonstrate the onboard generation of a globally consistent, metric, and localized map by identifying and incorporating loop-closure constraints and intermittent GPS measurements. Using this map, we demonstrate the fully autonomous completion of mission objectives, including performing a position hold about a global position waypoint while in a GPS-denied environment.

Section 2 reviews current state-of-the-art methods for GPS-degraded MAV navigation and Section 3 overviews the relative navigation framework. Sections 4 and 5 describe the components of the relative front end and global back end of the RN architecture, respectively. In addition to outlining each component's role, the specific algorithms used for the hardware implementation are also presented. Section 6 describes the experimental flight tests, including the hardware and test procedures, while Section 7 describes the flight test results. Finally, Section 8 summarizes the contributions of the paper.

2 Related Work

Because of the many applications of MAVs in GPS-denied and GPS-degraded environments, significant research has been performed in improving the capability and robustness of state estimation in these situations. Much of this work builds upon the simultaneous localization and mapping (SLAM) literature, but is adapted for MAVs. The full SLAM problem involves concurrently estimating the position of surrounding landmarks while reconstructing the vehicle's complete trajectory; however, due to the strict size, weight, power, and timing requirements associated with autonomous MAV operation, the SLAM problem is typically simplified when applied to MAVs, only solving for the current pose of the vehicle and surrounding landmarks.

Early work performed by [10, 12, 13, 14, 15] demonstrated indoor MAV flight and provides approaches for many MAV navigation problems such as mapping, path planning, and control of GPS-denied multirotor platforms for short indoor trajectories. References [12] and [13] present a graph-based SLAM approach to leverage laser scan-matching constraints, while [10] and [14] fuse scan matching data with inertial measurements in an extended Kalman filter (EKF), demonstrating a vision-aided navigation solution. Reference [15] uses an EKF to track the global pose of individual landmarks, demonstrating a successful EKF-SLAM approach.

Some more recent work in this area, such as [16, 17, 18], has focused on improving the consistency of pose estimation without global measurements, extending the length of autonomous trajectories, and diversifying the environments in which MAVs can operate. Shen et al. [16] introduced a method for simultaneously fusing multiple relative view-matchers to increase robustness in difficult environments and demonstrated autonomous flight on a prolonged (440 m) indoor-outdoor flight. They used a stochastic cloning filtering approach [19], which is designed to better propagate uncertainty but allows the global state covariance to grow unbounded in the absence of global measurement updates. Scaramuzza et al. [17] were the first to demonstrate prolonged (350 m) autonomous MAV flight in a GPS-denied environment. Their work used a single monocular camera for onboard stabilization and control. Chowdhary et al. [18] also demonstrated a successful GPS-denied monocular vision-aided INS solution including autonomous landing and takeoff.

Each of these previously mentioned methods ultimately tracks the unobserved global state. As shown by [6], these methods are susceptible to inconsistency and state jumps during prolonged GPS dropout. The value of a relative parameterization is well documented in the full-SLAM literature [4, 20, 21, 22], but has not been fully applied to MAV navigation. Moore et al. [20] noted the limitations of using either a body-fixed or a globally-fixed reference frame for ground vehicles, and proposed using a local frame in which the vehicle moves smoothly. Bailey et al. [4] showed that estimating the vehicle and landmark location with respect to a global coordinate frame results in inconsistency as heading uncertainty increases, and asserted that submapping was the only method at the time of publication for implementing consistent large-scale EKF-SLAM. Relative submapping methods [21, 22] estimate the state of the vehicle and landmarks with respect to a local coordinate frame. These submaps are subsequently fused to form a more consistent global estimate. In essence, relative navigation demonstrates how to apply the relative submapping ideas discussed in the full-SLAM literature on computationally constrained MAV platforms to ensure stable flight in GPS-degraded environments.

3 Relative Navigation Overview

The intuition behind relative navigation is straightforward. An alert driver can safely navigate indefinitely, even if completely lost or disoriented. This is because humans instinctively perceive the world and make decisions with respect to the current local environment, as opposed to working with respect to an arbitrary global reference point.

When a driver is lost, ideally an accompanying passenger looks for landmarks, references a map or GPS unit, plans the optimal global route, and then provides low-frequency, high-level instructions to the driver in the local frame, for example, "turn around when possible" or "make the next right turn." In this way, time- and safety-critical estimation and control decisions are decoupled from potentially erroneous global information.

Figure 2 presents the relative navigation architecture introduced in [7], where the decoupled responsibilities of the relative front end and global back end are analogous to a driver and passenger. Using relative motion measurements, available from a view-based odometry source such as visual odometry or scan matching, the vehicle estimates its pose with respect to its local environment. This observable, relative state is used for flight-critical path planning and control. As a separate process, the global back end uses a pose graph map to combine these relative states into a global map, and to incorporate any available global information such as place recognition constraints or GPS measurements. The only way the global back end influences the front end is through locally-defined guidance objectives.

Figure 2: Relative navigation architecture. Using relative motion measurements, such as those available from visual odometry or scan matching algorithms, the vehicle estimates its local state. These estimates are used for flight-critical path planning and control. As a separate process, the global back end incorporates any available global information. Its only influence on the front end is through locally-defined guidance objectives.

The relative navigation architecture is readily applied to existing systems, as it does not make any assumptions about the vehicle platform or sensor suite. A wide variety of algorithms can be used to implement each component, and due to the modular nature of RN, it is straightforward to interchange the algorithms as needed. The RN framework also allows multiple view-matchers to be used simultaneously for increased robustness in difficult environments. In the next two sections we describe the details of the relative front end and the global back end.

4 Relative Front End

By working with respect to the local environment, the relative front end ensures that the flight-critical estimation, guidance, and control is always observable, allowing smooth, stable flight even when global information is degraded or undergoing large corrections. When relative navigation was first presented in [7], the discussion emphasized a particular choice for a visual odometry algorithm, estimator, path planner, and controller. This section generalizes that discussion by outlining the fundamental nature of each front-end component, highlighting how existing algorithms would need to be adapted to fit into the relative navigation architecture. Specifically, we describe how to incorporate current state-of-the-art view-based odometry algorithms, describe the relative estimator reset operation necessary to maintain observability, and present the relative guidance and control strategy necessary to ensure smooth, stable flight.

4.1 View-Based Odometry

When GPS is not available, MAVs commonly use odometry measurements computed from exteroceptive sensors such as laser scanners or cameras. A variety of odometry algorithms exist, including laser scan matching [10, 11] and visual odometry [8, 9]. While some odometry methods compare consecutive frames (scans or images), others compare the current frame to a recent keyframe. When a keyframe is used, a series of odometry measurements is computed with respect to this keyframe. Generally the keyframe is updated only when there is insufficient overlap to provide a reliable odometry measurement. As a result, keyframe-based odometry reduces temporal drift in the computed odometry as compared to frame-to-frame matching [16, 23]. Many view-based odometry algorithms use bundle adjustment to further improve accuracy [8].

While view-matching algorithms only measure relative motion, implementations of these algorithms commonly concatenate the measurements to output a global odometry estimate. Rather than treating the odometry as a black-box pseudo-global measurement as in [17] and [24], the view-matcher used in the proposed RN architecture directly outputs relative measurements. View-matching algorithms that output pseudo-global measurements can be readily adapted to supply relative updates.
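As a rough illustration of that adaptation, the sketch below (a hypothetical helper, assuming the odometry library reports 4x4 homogeneous transforms in its own fixed odometry frame) recovers the keyframe-to-current relative measurement by composing with the inverse of the keyframe pose:

```python
import numpy as np

def relative_measurement(T_origin_keyframe, T_origin_current):
    """Convert two pseudo-global odometry poses (4x4 homogeneous
    transforms) into the relative transform from keyframe to current."""
    R = T_origin_keyframe[:3, :3]
    t = T_origin_keyframe[:3, 3]
    T_inv = np.eye(4)           # analytic inverse of the keyframe pose
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T @ t
    return T_inv @ T_origin_current

# Example: the keyframe sits 1 m ahead of the odometry origin; the current
# frame is 2 m ahead, 1 m left, and yawed 90 degrees.
T_kf = np.eye(4); T_kf[0, 3] = 1.0
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
T_cur = np.array([[c, -s, 0, 2.0],
                  [s,  c, 0, 1.0],
                  [0,  0, 1, 0.0],
                  [0,  0, 0, 1.0]])
print(relative_measurement(T_kf, T_cur))  # 1 m forward, 1 m left, 90 deg yaw
```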
As shown in Figure 2, the view-matcher is only loosely coupled to the estimator. As such, it is straightforward to accept relative measurements from any source or sensor, such as monocular, stereo, or RGB-D visual odometry or a laser scan matcher. The framework even handles multiple relative sensors, which can increase the robustness of the system in difficult environmental conditions. For example, [25] demonstrates using RN to simultaneously incorporate relative measurements from a laser scanner and an RGB-D camera. While the scan matcher breaks down in long hallways and the visual odometry breaks down in a dark room, the redundant sensing allowed the vehicle to successfully navigate. Many tightly-coupled vision-aided INS estimators, such as [18, 26, 27], could be adapted and treated as a view-based odometry source for a relative estimator. For the results in Section 7 we used DEMO [28] for visual odometry and CSM [29] for scan matching.

4.2 Relative State Estimation

Most MAV navigation approaches continue to estimate the global state, even when GPS dropout makes the global state unobservable. Given an inertial measurement unit, altimeter, and even visual odometry measurements, the global position and heading of a MAV in the horizontal plane cannot be observed [2]. With time, the associated state estimates drift and become inconsistent.

One fundamental advantage of RN is that the front-end state always remains observable. RN maintains observability by defining the state with respect to a local node frame. The node frame is defined as the gravity-aligned coordinate frame that is positioned on the ground exactly under the MAV when the current keyframe was taken. Because each node frame is gravity-aligned and positioned on the ground, the MAV's altitude, roll, and pitch (z, φ, θ) with respect to the node frame are estimated no differently than if defined with respect to a global origin. By referencing the current node frame, however, the horizontal position and yaw states (x, y, ψ) now correspond to the relative position and heading of the MAV with respect to the most recent keyframe. In this way, relative measurements provided by a view-matcher directly measure the MAV's relative position and heading, causing the state to be observable by construction. With regular, direct updates, the uncertainty of the vehicle's relative state remains consistent, bounded, and approximately Gaussian [6].

A variety of estimation techniques are used for MAV navigation and could be adapted to become a relative estimator. Note that a tightly-coupled vision-aided INS estimator can be used either as a view-based odometry source or as the state estimator. The fundamental concept is that the estimator's state and covariance should be reset whenever a new keyframe is declared. Figure 3 illustrates the process of transitioning from one keyframe to the next. The relative estimator tracks the MAV's position and heading (x, y, ψ) relative to the current node frame n_k. Naturally the estimated state will not perfectly align with the true MAV state, but the estimator's covariance should correctly represent the underlying uncertainty. When a new keyframe is declared, the new node frame n_{k+1} is defined at the true, yet globally unknown, MAV position. The current pose and covariance are saved as an edge constraint in the back-end pose graph, and then the MAV's position and heading states and their corresponding covariance terms are reset to zero. In this way, the global uncertainty is removed from the front-end filter and delegated to the global back end.

Figure 3: 2D illustration of the node frame reset operation. (a) The relative estimator tracks the MAV's position and heading (x, y, ψ) with respect to the current node frame n_k. The estimated state (blue) will not perfectly align with the true MAV state (green), but the estimator's covariance (blue oval) should correctly represent the underlying uncertainty. (b) When a new keyframe is declared, the new node frame n_{k+1} is defined at the true, yet globally unknown, MAV pose. The estimated pose (gray) and covariance (gray oval) are saved as an edge constraint in the back-end pose graph, and the MAV's pose states and their corresponding covariance terms are reset to zero.
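The reset itself is mechanically simple. The following sketch is a simplified stand-in for the filter reset (the state layout and helper are hypothetical, not the estimator of [30]): it saves the current relative pose and marginal covariance as a back-end edge, then zeros the x, y, ψ states and every covariance term that involves them:

```python
import numpy as np

def declare_new_keyframe(x, P, edges):
    """Reset the relative states at a new keyframe.

    x: state vector; assume indices 0, 1, 2 hold the relative x, y, psi.
    P: state covariance matrix.
    edges: list collecting (pose, covariance) edge constraints for the back end.
    """
    pose_idx = [0, 1, 2]

    # Save the current relative pose and its marginal covariance as an
    # odometry edge constraint for the back-end pose graph.
    edge_pose = x[pose_idx].copy()
    edge_cov = P[np.ix_(pose_idx, pose_idx)].copy()
    edges.append((edge_pose, edge_cov))

    # Zero the relative position/heading states and every covariance term
    # involving them; roll, pitch, altitude, and biases are untouched.
    x[pose_idx] = 0.0
    P[pose_idx, :] = 0.0
    P[:, pose_idx] = 0.0
    return x, P, edges
```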
Figure 4 shows example state estimates, where the horizontal position and heading states are reset at each new node frame. While the discontinuities in the state estimates may appear concerning from a control perspective, they occur at known times and thus are reliably handled by the relative path planner and controller to produce smooth, stable control. It is important to note that while the front-end filter tracks the full six-degree-of-freedom pose, only the relative states (x, y, ψ) are optimized in the back end.

For the flight results described in Section 7, we used an indirect formulation of the multiplicative extended Kalman filter as presented in [30]. A unit quaternion is used to represent the MAV's attitude while attitude error is propagated using a minimal three-state representation. When a new keyframe is declared, care is taken to only reset the unobserved horizontal position and heading, leaving roll, pitch, altitude, and their associated uncertainties unchanged. Refer to [30] for additional estimator details including the state, dynamics, sensor models, and specific details about the reset step.

Some similarities and differences exist between the RN relative estimator and the popular robocentric estimation approach. As described in [30], a relative estimator can be defined in either an inertial or robocentric representation. A robocentric approach tracks the position of landmarks with respect to the current body pose. While robocentric dynamics are less intuitive, this formulation ensures improved observability and consistency properties for the landmark states, similar to RN. Most robocentric approaches, however, continue to track the global state of the MAV with respect to the current body. After prolonged flight without global information, the global uncertainty is not well represented by a Gaussian distribution in typical Cartesian coordinates, leading to estimator inconsistency [6]. The relative navigation framework provides a method to use either inertial or body-fixed dynamics, produces smooth, observable state estimates for control, and represents the global state consistently.

Figure 4: Typical mid-flight state estimates. The vertical gray lines indicate a new node frame, and the labels indicate the associated node identifier. With each new node frame, the forward, right, and yaw states are reset to zero, while the down, roll, and pitch states are unaffected. The vehicle was yawing from 142 s to 146 s and moving forward at a constant velocity from 150 s to 152 s. While the state estimates are discontinuous, the relative navigation approach facilitates smooth, stable navigation in GPS-degraded environments.

4.3 Relative Path Planning and Control

Within the relative navigation framework, all front-end guidance and control is computed with respect to the current node frame. Many current MAV controllers drive the estimated global state to a desired global state. These same controllers can be directly applied to drive the estimated relative state to a desired relative state. Any control approach can be used as long as care is taken to ensure that the estimator and controller are working with respect to the same reference frame.

Each time the relative estimator resets to a new node frame, the path planner and controller must also update to ensure that they are operating with respect to this new frame. Depending on the chosen control strategy, this update operation may range from updating an entire potential field to requiring no action, as in the case of a body-fixed velocity controller. Let $x^{c}_{a/b}$ represent the state of $a$ with respect to frame $b$, expressed in frame $c$. Using this notation, Figure 5 illustrates the process of updating a position goal that is expressed with respect to the previous node frame $n_{k-1}$ to the current node frame $n_k$. In short, the relative path planner uses the estimated edge constraint between subsequent node frames provided by the relative estimator, $x^{k-1}_{k/k-1}$, to express the previous goal $x^{k-1}_{g/k-1}$ in the new node frame, $x^{k}_{g/k}$. Because each node frame is gravity-aligned and positioned on the ground, any roll, pitch, altitude, or body-fixed velocity components of the goal remain unchanged.

Figure 5: Updating a relative goal when a new node frame is declared. The goal with respect to the previous keyframe, $x^{k-1}_{g/k-1}$, is expressed with respect to the new keyframe, $x^{k}_{g/k}$, using the edge constraint $x^{k-1}_{k/k-1}$ provided by the relative estimator.
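A minimal planar sketch of this goal update is given below (hypothetical function, (x, y, ψ) states only); the edge is the pose of the new node frame expressed in the previous node frame:

```python
import numpy as np

def reexpress_goal(goal_prev, edge):
    """Express a (x, y, psi) goal, given in the previous node frame,
    in the new node frame.

    goal_prev: goal pose in frame n_{k-1}.
    edge: pose of the new node frame n_k in n_{k-1}, i.e. the edge
          constraint saved by the relative estimator at the reset.
    """
    gx, gy, gpsi = goal_prev
    ex, ey, epsi = edge
    c, s = np.cos(epsi), np.sin(epsi)
    # Rotate the position offset into the new frame and shift the heading.
    dx, dy = gx - ex, gy - ey
    return np.array([ c * dx + s * dy,
                     -s * dx + c * dy,
                      gpsi - epsi])

# Example: the new node frame is 2 m ahead of the old one and yawed 90 degrees;
# a goal 3 m ahead of the old frame becomes 1 m to the right of the new frame.
print(reexpress_goal(np.array([3.0, 0.0, 0.0]), np.array([2.0, 0.0, np.pi / 2])))
```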
As a practical note, we recommend that the relative controller incorporate logic to monitor whether the relative estimator's node frame identifier matches the node frame identifier of the current goal. If the node frame identifiers are not in sync or no goal is supplied by the path planner, the MAV is directed to hover in place. While this step is an important safety precaution, the controller did not enter this state during our flight testing. With a careful implementation, the control performance does not degrade due to the relative state reset.

Figure 6 presents the control architecture used to avoid collisions and produce the smooth, flight-critical control needed to safely operate the MAV in unknown, dynamic environments with unpredictable external disturbances. The onboard computer uses its current relative state estimate and a path planning algorithm to calculate a trajectory to the current relative goal. We use the reactive obstacle avoidance plugin framework presented in [31] to use the latest sensor information to modify the current trajectory when needed to avoid a pending collision. Control loops are then closed around this modified trajectory to produce desired accelerations. At this point, the nonlinear model of the MAV dynamics is inverted [32, 33], providing a desired roll, pitch, yaw rate, and thrust command. These attitude setpoints are passed to the autopilot where high-rate attitude feedback control is performed.

Figure 6: Control architecture. The autopilot performs high-rate feedback control about roll, pitch, yaw rate, and thrust commands provided from the onboard computer. Diagram adapted from [34].

For the results in Section 7, the path planner uses position feedback to supply high-rate velocity goals. These velocity goals are then modified using the cushioned extended-periphery obstacle avoidance algorithm from [31]. An LQR feedback controller is closed around the modified velocity setpoints to produce desired accelerations, which are then passed through the model inversion to produce the roll, pitch, yaw rate, and thrust commands that are sent to the autopilot.

5 Global Back End

While the relative front end ensures flight-worthiness, if a MAV is tasked with performing a global mission then a global state estimate is required. This section describes how the global state and its uncertainty are reconstructed. While the overall concept of the RN back end was presented previously in [7], the implementation details presented in this section are unique contributions of this paper.

5.1 Pose Graph Map

Before resetting the state and establishing a new node frame, the front end saves the estimated relative pose and associated uncertainty. Because each node frame is defined to be located at the true (yet globally unknown) position of the MAV, the uncertainty is reset with each node frame. This ensures that the saved pose estimates from one node frame to the next are mutually independent, which facilitates structuring the back end as a pose graph.

A pose graph is a conventional graph where each vertex or node corresponds to the global pose of the vehicle at a certain instant in time, and graph edges represent the relative change in position and attitude from one node to another. Odometry measurements, such as the relative pose estimates from the relative estimator, provide edge constraints between sequential nodes. If a place recognition algorithm detects that the vehicle has returned to a previous pose, an edge constraint between non-consecutive poses, known as a loop closure, is introduced in the graph. The vehicle's global pose can be reconstructed by first traversing the graph from the origin to the current node, compounding each estimated edge in the path, and then incorporating the relative state.

Figure 7: To reconstruct the MAV pose with respect to the origin, each estimated edge (blue line) is compounded, followed by the MAV state with respect to the current node (green line). A loop-closure constraint (red line) in general will not perfectly agree with the odometry constraints, resulting in an over-constrained system.
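For concreteness, the following planar sketch (hypothetical helpers, not the flight code) reconstructs the global pose by compounding the odometry edges from the origin and then appending the current relative state:

```python
import numpy as np

def compound(pose_a, pose_b):
    """Compound an (x, y, psi) pose with a relative (x, y, psi) edge."""
    ax, ay, apsi = pose_a
    bx, by, bpsi = pose_b
    c, s = np.cos(apsi), np.sin(apsi)
    return np.array([ax + c * bx - s * by,
                     ay + s * bx + c * by,
                     apsi + bpsi])

def global_pose(edges, relative_state):
    """Reconstruct the current global pose by compounding every odometry
    edge from the origin node to the current node, then the relative state."""
    pose = np.zeros(3)
    for edge in edges:
        pose = compound(pose, edge)
    return compound(pose, relative_state)

# Example: two 1 m forward edges with a 90 degree turn after the first.
edges = [np.array([1.0, 0.0, np.pi / 2]), np.array([1.0, 0.0, 0.0])]
print(global_pose(edges, np.array([0.5, 0.0, 0.0])))  # -> [1.0, 1.5, pi/2]
```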
When loop closures are added, the graph is over-constrained and multiple paths, and therefore multiple pose estimates, are possible. This is illustrated in Figure 7. A weighted-least-squares optimization can be performed to reconcile these discrepancies, removing accumulated drift.

Other, more involved frameworks leverage the factor graph data structure, which uses Bayesian methods to infer the pose of the MAV over time by representing edge constraints as factors. Factor graph methods have the added benefit of being able to solve for the global uncertainty of each pose and can incorporate other measurements such as range-only or IMU preintegration factors [35, 36]. Both factor-graph and pose-graph formulations are able to solve for the optimal set of poses given odometry and loop-closure edge constraints with associated uncertainty.

Formulating the back-end optimization problem as a pose graph results in the following beneficial properties:

- A variety of well-developed pose-graph optimization frameworks exist to find a consistent global representation of the trajectory after accounting for all constraints [37, 38, 39, 40].
- Robust pose-graph optimization techniques can identify and remove the effect of erroneous constraints such as false-positive loop closures or degraded GPS [41, 42, 43].
- A pose-graph representation provides a straightforward method to consistently represent a MAV's global state uncertainty. When global measurements are unavailable, representing error using the vector space formed by the Lie algebra se(3) produces banana-shaped, Gaussian uncertainty distributions that better parameterize the underlying distribution [6, 44, 45].
- A pose graph provides a minimal representation of a trajectory, ensuring scalability and practicality on resource-constrained platforms or networks. Long trajectories with large numbers of loop closures can benefit from node removal techniques which further reduce the complexity of the optimization problem [46].

Pose graphs are commonly used for MAV back ends; however, many approaches that track the global state in the front end do not provide a clear method to construct independent edge constraints and covariances, an issue addressed explicitly by relative navigation.

5.2 Place Recognition

An important aspect of pose-graph back ends is the ability to remove accumulated drift if the MAV detects that it has returned to a previously visited location. Place recognition algorithms efficiently compare the current image or scan to each previous image or scan. When a strong correspondence is detected, the relative transformation is computed and an edge constraint between non-consecutive nodes, known as a loop closure, is included in the pose graph.

Place recognition is a challenging problem, but a variety of approaches have been successfully demonstrated [47]. To scale well, the method must be fast and efficient. Additionally, the algorithm should correctly detect loop closures when there are partial occlusions, varied viewpoints or lighting conditions, or minor scene changes, but should correctly avoid perceptual aliasing, which is falsely correlating nearly identical, yet non-unique scenes such as a brick wall.

To ensure scalability, many approaches use a bag-of-words approach [48, 49]. Salient features are identified in a representative set of images and are clustered to form a set of common, yet distinct image features. This precomputed set of features, known as the vocabulary, is then used to describe each vehicle image. Using a common vocabulary allows for a sparse representation and facilitates rapid comparison. Commonly, hierarchical trees are also used for quicker comparisons. Some methods use the estimated global uncertainty to limit the set of past images that are compared.
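As a toy illustration of the bag-of-words idea (a generic sketch, not FAB-MAP and not the vocabulary used in our tests), each image is reduced to a normalized visual-word histogram and candidate matches are scored by histogram similarity:

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Quantize image feature descriptors against a visual vocabulary.

    descriptors: (N, D) array of local feature descriptors from one image.
    vocabulary:  (K, D) array of cluster centers (the visual words).
    Returns a normalized K-bin histogram describing the image.
    """
    # Assign each descriptor to its nearest visual word.
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = np.argmin(dists, axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / max(hist.sum(), 1.0)

def similarity(hist_a, hist_b):
    """Cosine similarity between two bag-of-words histograms."""
    denom = np.linalg.norm(hist_a) * np.linalg.norm(hist_b)
    return float(hist_a @ hist_b / denom) if denom > 0 else 0.0

# Example: compare the current keyframe against stored keyframes.
rng = np.random.default_rng(1)
vocab = rng.normal(size=(50, 32))                       # pretrained visual words
current = bow_histogram(rng.normal(size=(200, 32)), vocab)
stored = [bow_histogram(rng.normal(size=(200, 32)), vocab) for _ in range(5)]
scores = [similarity(current, h) for h in stored]
print(max(scores))  # propose a loop closure if this exceeds a threshold
```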
state uncertainty. When global measurements are un- While any place recognition algorithm could be used,
available, representing error using the vector space we use fast, appearance-based mapping (FAB-MAP), which
formed by the Lie algebra se(3) produces banana- uses Bayesian probabilities to infer the likelihood of a match
shaped, Gaussian uncertainty distributions that better while explicitly rejecting perceptual aliasing in the environ-
parameterize the underlying distribution [6, 44, 45]. ment [50, 51]. This linear-complexity approach is compu-
A pose graph provides a minimal representation of a tationally tractable for a MAV and has not yet produced a
trajectory, ensuring scalability and practicality on re- false-positive loop closure during our extensive flight test-
source constrained platforms or networks. Long tra- ing. An example loop closure is shown in Figure 8.
jectories with large number of loop closures can benefit
from node removal techniques which further reduce the
5.3 GPS Integration
complexity of the optimization problem [46].
While loop closures and odometry can be used in a pose
Pose graphs are commonly used for MAV back ends; how- graph formulation to produce a metric map of previous
ever, many approaches that track the global state in the states, globally referenced measurements, such as GPS, can
front end do not provide a clear method to construct in- be used to localize the map in the global frame and further
dependent edge constraints and covariances, an issue ad- improve global-state estimation. Measurements to land-
dressed explicitly by relative navigation. marks with known global positions can also be used to
localize the map globally. For example, while the results
5.2 Place Recognition presented in Section 7 do not use any a priori information,
it is trivial to seed the place recognition algorithm with a
An important aspect of pose-graph back ends is the ability set of geo-located images.
to remove accumulated drift if the MAV detects that it has Many MAV navigation methods estimate the global state
returned to a previously visited location. Place recognition in the front end and can directly fuse global measure-
algorithms efficiently compare the current image or scan to ments. This works well when global information is regu-
each previous image or scan. When a strong correspondence larly available and accurate, but is shown to lead to in-
is detected, the relative transformation is computed and an consistency when the estimates drift during prolonged GPS
edge constraint between non-consecutive nodes, known as a dropout [4, 6]. Furthermore, directly applying a global mea-
loop closure, is included in the pose graph. surement to remove drift induces a large state update, often
Place recognition is a challenging problem, but a variety causing the control effort to jump which can destabilize the
of approaches have been successfully demonstrated [47]. To system [2, 16, 52, 53, 54]. Several methods have been pro-
scale well, the method must be fast and efficient. Addi- posed to address this, such as simultaneously tracking a
tionally, the algorithm should correctly detect loop closures GPS-corrected and odometry-only global trajectory [16] or
when there are partial occlusions, varied viewpoints or light- using a series of measurement gates [52].
ing conditions, or minor scene changes, but should correctly Alternatively, global measurements can be handled exclu-
avoid perceptual aliasing which is falsely correlating nearly sively in the back-end pose graph using a virtual-zero node.
identical, yet non-unique scenes such as a brick wall. First described in [55], the virtual-zero node represents the
To ensure scalability, many approaches use a bag-of- GPS origin. To ensure the pose graph is fully connected, an
words approach [48, 49]. Salient features are identified in arbitrary edge constraint with infinite uncertainty, known
a representative set of images and are clustered to form as the virtual constraint, is applied between the virtual-zero
a set of common, yet distinct image features. This pre- node and the node representing the MAVs origin. For each
computed set of features, known as the vocabulary, is then GPS measurement received, one node and two edge con-

For each GPS measurement received, one node and two edge constraints are added to the pose graph, as shown in Figure 9. A node is added to represent the current vehicle pose. This node is related to the virtual-zero node using the measurement and uncertainty reported from the GPS, and is related to the current node frame using the current relative state estimate. Upon optimizing the pose graph after the first GPS measurement, the virtual constraint will correctly estimate the global position of the MAV's starting point. Incorporating subsequent GPS measurements will refine this position estimate and provide a heading estimate for the MAV's starting point, causing the entire pose graph map to be globally localized. Similar concepts have been used to incorporate multiple agents with unknown initial starting points [22].

Figure 9: Back-end GPS integration method. For each GPS measurement, one node and two edge constraints are added. The new node (green circle) is related to the virtual-zero node using the measurement and uncertainty reported by the GPS receiver (dashed green line) and is related to the current node frame using the current relative state estimate (solid green line). A virtual constraint with maximum uncertainty is added between the first node and the virtual-zero node to ensure connectedness (black line).

In practice, pose graph optimizers are less likely to diverge when constraints are of a similar order of magnitude. GPS constraints are challenging because the GPS origin is generally far away. To address this issue, we save the initial GPS measurement and subtract it from each GPS measurement before adding the edge constraint. As a result, the virtual-zero constraint represents the position of the first node with respect to the first GPS measurement, as opposed to representing the position of the first node with respect to the GPS origin. If it is necessary to express the pose graph in a global coordinate frame, such as for visualizing the graph on an ortho-rectified image, the initial GPS measurement is simply added to each pose.
measurement is simply added to each pose. Gauss-Newton approaches can converge to a local mini-
There are several significant advantages of using pose mum or even diverge, particularly when the initial state
graphs for incorporating GPS measurements. First, due estimate is poor, which is common for drifting MAVs
to the decoupled nature of the relative navigation frame- in GPS-denied environments. Several approaches have
work, global state jumps cannot degrade flight-critical con- been presented to address initialization issues includ-
trol. This also means that processing or networking delays ing [56].
can be tolerated. Second, robust optimization techniques Least-squares optimization is highly sensitive to out-
can be used to detect erroneous GPS measurements. Once liers. While outliers are unlikely for odometry con-
detected, any negative effect is completely removed from straints, false-positive loop-closure constraints or de-
the system. Such a claim is not possible using conventional, graded GPS measurements can significantly impact the
front-end filtering methods. Finally, as few as two global optimization. Switching constraints [41], dynamic co-
measurements can be leveraged to localize the pose graph variance scaling (DCS) [42], max-mixture models [57],
map, a research problem originally motivated in [55]. and the RRR algorithm [43] are all proven methods

8
for detecting outliers and mitigating their effect on the
optimization. GPS/Loop closure
While these and similar methods help prevent the back end from diverging, they do not guarantee convergence, nor do they necessarily provide smooth or timely global-state estimates. This further highlights the importance of decoupling flight-critical processes from global information. For the flight-test results in Section 7 we used g2o [37] with dynamic covariance scaling [42].
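As a concrete illustration of the linearize-solve-update loop described above, the sketch below runs generic Gauss-Newton on a toy one-dimensional pose chain with two odometry edges and one loop closure (an illustrative example only, not the g2o and DCS configuration used for the flight results):

```python
import numpy as np

def gauss_newton(residual_fn, x0, num_iters=10, eps=1e-6):
    """Generic Gauss-Newton: linearize the stacked, information-weighted
    residual about the current estimate, solve the normal equations,
    and apply the update."""
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        r = residual_fn(x)
        # Numerical Jacobian of the residual with respect to x.
        J = np.zeros((r.size, x.size))
        for k in range(x.size):
            dx = np.zeros_like(x)
            dx[k] = eps
            J[:, k] = (residual_fn(x + dx) - r) / eps
        delta = np.linalg.solve(J.T @ J, -J.T @ r)
        x += delta
        if np.linalg.norm(delta) < 1e-9:
            break
    return x

# Toy 1D pose chain: poses p0, p1, p2 with two odometry edges and one
# loop-closure edge from p0 to p2; p0 is held fixed at the origin.
z_odom, z_loop = [1.0, 1.0], 2.3
w_odom, w_loop = 1.0, 1.0            # information (inverse variance) weights

def residuals(x):                     # x = [p1, p2]
    p0, p1, p2 = 0.0, x[0], x[1]
    return np.array([np.sqrt(w_odom) * ((p1 - p0) - z_odom[0]),
                     np.sqrt(w_odom) * ((p2 - p1) - z_odom[1]),
                     np.sqrt(w_loop) * ((p2 - p0) - z_loop)])

print(gauss_newton(residuals, [0.0, 0.0]))  # poses shift toward the loop closure
```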
5.5 Global Path Planning

The role of the global path planner is first to determine the optimal MAV trajectory by assessing relevant global information, and second to transform the plan to be with respect to the current node frame for use in the relative front end. A variety of path planning algorithms could be used depending on the mission objective, including autonomous exploration, mapping, target tracking, waypoint following, cooperative control, or landing. After a plan is determined, the global path planner passes relative goals to the relative path planner. When a new keyframe is declared, these goals are updated to be expressed with respect to the latest node frame. These relative goals are the only way the global back end influences the MAV, which helps isolate the front end from destabilizing or erroneous global information. This idea is illustrated in Figure 10.

Figure 10: Illustrations of how incorporating global information influences vehicle control. The columns respectively represent estimation and planning, and the dashed arrows indicate optional relationships. (a) Introducing global information into a conventional approach causes a global state jump which directly influences control [2, 16, 52, 53, 54]. (b) With the relative navigation approach, a global state jump never affects the relative state estimate. Vehicle control is only influenced as the global path planner provides an updated relative goal to the relative path planner.

A simple global path planner was implemented for the flight test results in Section 7. Since the MAV begins without any global information, a user initially takes the place of the global path planner by supplying a series of position or velocity setpoints. After the MAV travels for some distance and creates a global map, the user specifies a desired global waypoint on the map. At this point, Dijkstra's algorithm is used to search through the back-end pose graph to find the shortest known path to the desired waypoint. The global path planner then supplies velocity setpoints to the relative front end to direct the MAV along the path to the global waypoint. This method is sufficient for autonomous MAV navigation in unknown environments and demonstrates the role of the global path planner, but more sophisticated planners could be implemented for other mission scenarios.
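A sketch of this graph search is given below (hypothetical data layout; in practice the edge lengths would come from the metric odometry and loop-closure edges of the pose graph):

```python
import heapq

def shortest_path(neighbors, lengths, start, goal):
    """Dijkstra's algorithm over a pose graph.

    neighbors: dict mapping node id -> list of adjacent node ids
               (both odometry and loop-closure edges).
    lengths:   dict mapping (i, j) -> metric length of the edge.
    Returns the list of node ids from start to goal, or None.
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in neighbors.get(node, []):
            if nbr not in visited:
                heapq.heappush(queue, (cost + lengths[(node, nbr)], nbr, path + [nbr]))
    return None

# Example: a small graph where a loop-closure edge between nodes 0 and 3
# provides a shortcut back to node 2.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
lengths = {(0, 1): 2.0, (1, 0): 2.0, (1, 2): 2.0, (2, 1): 2.0,
           (2, 3): 2.0, (3, 2): 2.0, (0, 3): 1.0, (3, 0): 1.0}
print(shortest_path(neighbors, lengths, 0, 2))  # -> [0, 3, 2]
```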

6 Experimental Setup

The experimental platform, shown in Figure 11, is a hexacopter with a diameter of 0.69 m through the prop centers and a mass of 4.8 kg. The vehicle carries a 3DR Pixhawk autopilot, onboard computer, IMU, RGB-D camera, planar laser scanner, GPS receiver, and ultrasonic altimeter. The details of the hardware configuration are summarized in Table 1. It is important to note that the purpose of this research is to demonstrate a successful framework for GPS-degraded MAV navigation and not to meet a particular specification or optimally address a specific application. We selected common sensors, processors, and algorithms without any consideration for optimizing the MAV's size, weight, speed, or endurance.

Figure 11: The vehicle used for the flight tests.

The data flow and networking between the various system components are illustrated in Figure 12. The relative navigation framework was implemented entirely on the onboard computer in C++ using the Robot Operating System (ROS) [58] middleware. Attitude control was performed by a 3DR Pixhawk autopilot running a customized version of the PX4 firmware.¹ During fully autonomous sections of flight, a ground station laptop was used to send waypoint commands to the onboard computer over Wi-Fi via the ROS messaging system. During semi-autonomous sections of flight, velocity commands were sent to the onboard computer by a human operator using a wireless Microsoft Xbox controller. At all times, a human safety pilot had a direct RC link to the Pixhawk autopilot to override attitude commands from the onboard computer if necessary. Safety pilot intervention was not required during the flight tests described in this paper.

¹ The PX4 firmware is customized to accept inputs from the onboard computer while also allowing an RC safety pilot to override these commands if necessary. We have subsequently transitioned to using the ROSflight autopilot [59]; see http://rosflight.org.

Figure 12: The data flow and networking between the various system components.

Table 1: Hardware details

Component      Description
Platform       Hexacopter, 4.8 kg, 0.69 m diameter
Autopilot      3DR Pixhawk
RGB-D Camera   ASUS Xtion Pro Live
Laser Scanner  Hokuyo UTM-30LX
IMU            MicroStrain 3DM-GX3-15
Altimeter      I2CXL-MaxSonar-EZ MB1242
GPS            U-blox LEA-6T
Processor      Intel Core i7-2710QE (2.1 GHz × 4)
Memory         8 GB DDR3

The following three flight tests demonstrate autonomous MAV navigation in a variety of challenging unknown environments using the relative navigation framework. All perception, estimation, control, and mapping was performed onboard the vehicle and in real time. Estimation and control were performed at the rate of the IMU measurements, which was 100 Hz. Visual odometry was performed at 15 Hz using the RGB-D camera, and laser-scan matching was performed at 40 Hz. No adjustments or tuning were required to prepare the vehicle for the different scenarios other than choosing between the RGB-D camera and laser scanner, illustrating that the framework does not make environment-specific assumptions. The flight tests are described in the following sections and are summarized in Table 2. A discussion of the results demonstrated by these flight tests is given in Section 7.

6.1 Flight test 1: Outdoor GPS-denied

In the first flight test, the vehicle flew a trajectory around the perimeter of a large building, marked in black in Figure 17. The flight lasted 9 min, and the total distance traveled was 320 m. For this flight test the system obtained visual odometry from the RGB-D camera. A human operator provided velocity setpoints to the vehicle through the Xbox controller. Because the MAV flew within a few meters of the building throughout the flight, reliable GPS measurements were not available. Because the vehicle did not revisit any portion of the flight path, loop closure constraints were also unavailable.

6.2 Flight test 2: Indoor GPS-denied

This flight test was conducted indoors through a series of hallways. The flight path of the vehicle is overlaid on the floorplan of the building in Figure 13. The flight lasted 12 min, and the total distance traveled was 390 m. Visual odometry was obtained using the RGB-D camera.

Table 2: Summary of the flight tests described in this paper

Flight Test  Environment             Distance  Duration  Sensor  GPS           Loop Closures  Nodes  Figures
1 (6.1)      outdoor (dusk)          320 m     9 min     RGB-D   denied        0              491    17
2 (6.2)      indoor                  390 m     12 min    RGB-D   denied        139            659    13, 14, 16, 18
3 (6.3)      indoor/outdoor (night)  240 m     9 min     laser   intermittent  30             891    1, 15, 19

The odometry was of high quality throughout most of the flight, but its accuracy degraded in the southeast corner when the camera was pointed at a blank wall. A human operator provided velocity setpoints to the vehicle using the Xbox controller to guide the vehicle through the hallways. A total of 139 loop closures were detected using the RGB-D camera. This flight test was originally attempted in [7]; however, the trajectory flown was significantly shorter, no loops were closed, and the back-end place recognition, map optimization, and global path planner had not yet been implemented.

Figure 13: Flight test 2. The vehicle started at the blue circle moving clockwise, following the blue path, red path, yellow path, and then purple path. The vehicle flew in the middle of the hallway and was facing the direction of motion except for the path indicated by purple dots, when the vehicle traveled backwards.

Figure 14 shows the vehicle flying down one of the hallways. The hallways were relatively nondescript, with few visually interesting features. Despite this, the odometry and place recognition algorithms performed well. Another challenge presented by the hallways was their narrow width; the hallways ranged between only 1.8 m and 2.5 m wide, as compared to the 1.1 m total diameter of the vehicle. The narrow confines produced significant aerodynamic ground and wall effect. To highlight the significance of this effect, a highly experienced safety pilot attempted to fly the trajectory in attitude-stabilized mode via RC control, and struggled to maintain stability in the wider hallways to the extent that flying in the narrower hallways was unfeasible.

Figure 14: Large MAV smoothly navigating through a tight, nondescript hallway.

6.3 Flight test 3: Indoor/outdoor intermittent GPS

The third flight test consisted of two loops through both indoor and outdoor environments through and near a building. This flight test incorporated loop closures, intermittent degraded GPS, and autonomous path planning and flight into a single experiment. The flight lasted 9 min and traveled a distance of 240 m. The path that the vehicle flew is overlaid on a satellite image of the building in Figure 15. The vehicle started inside the southeast wing of the building, flew through the courtyard into the northeast wing of the building, down the alleyway to the east of the building and back into the southeast wing, then repeated the same path. In all, there were four transitions from indoor to outdoor, and four transitions from outdoor to indoor. These transitions are commonly troublesome for GPS-degraded navigation approaches because odometry algorithms can sometimes degrade and GPS accuracy can vary significantly through the transition.

Figure 15: Flight test 3. The vehicle started at the blue circle, moving clockwise, following the blue path, red path, and then yellow path. The vehicle was facing the direction of motion. Purple indicates regions of autonomous waypoint following and black indicates the doorways.

Odometry was obtained from the laser scanner, while loop closures were obtained using the RGB-D camera. The flight test was conducted at night, so loop closures were obtained only in the well-lit indoor portions. In all, 30 loop closures were detected. Due to the close proximity to the buildings, GPS updates were very limited. GPS measurements were gated until the GPS receiver's self-reported accuracy estimate dropped below a reasonable threshold. As a result, all GPS measurements were gated until the second time the vehicle flew down the alley between buildings. Even then, only ten GPS updates were received, and these updates were biased to the north by about two meters.

During the first loop, the vehicle was guided by velocity setpoints provided by a human operator using the Xbox controller. After the first loop closure constraints were detected and the map was optimized to remove drift, fully autonomous waypoint following was demonstrated. A human operator clicked on a previously visited point on the map, and the vehicle retraced its previous path to arrive at the desired waypoint. Three of these fully autonomous segments were carried out, marked in purple in Figure 15, including one during an outdoor-to-indoor transition.

In addition to the results presented in this paper, this same indoor/outdoor flight was also performed a second time during the day using the RGB-D camera instead of the laser scanner. The alternate odometry source produced comparable front-end estimation and control, introduced 45 loop-closure constraints, successfully incorporated 36 GPS measurements, and performed four autonomous waypoint missions. This helps to highlight the modularity and extensibility of the relative navigation framework. We chose to present the laser scanner results because they demonstrate the use of a different odometry source than that used in the other two flight tests.

7 Results

This section discusses the results from the flight tests described in Section 6 as they relate to various aspects of the relative navigation architecture. In general, these results demonstrate that the proposed architecture runs onboard the vehicle in real time, and that it enables missions involving real vehicles in realistic environments. The results show that the system is able to operate in both indoor and outdoor environments, and handle transitions between them. Notably, no tweaking or tuning of the system was required between the flight tests other than choosing which sensor (the RGB-D camera or laser scanner) would be used for odometry. This demonstrates that the architecture does not make environment-specific assumptions, and that it is not tied to one particular sensor.

Section 7.1 discusses the estimation accuracy and consistency from using the relative navigation approach. Section 7.2 discusses the performance of the pose-graph optimization, and Section 7.3 discusses the capabilities for autonomous flight demonstrated by the tests.

7.1 Estimation Accuracy and Consistency

Figure 16 shows the pose-graph map for the first 130 m of flight test 2. Up to this point no loop closures had been detected, meaning that the pose graph simply compounds the relative edges produced by the front-end estimator to reconstruct the global pose without any additional optimization. The accuracy of this global pose therefore directly reflects the accuracy of the front-end estimator. Figure 16 shows that only 1.8 m of drift were accumulated in the first 130 m of flight, yielding a drift rate of 1.4 percent per distance traveled. For the 139 loop closures in flight test 2, the maximum drift rate was 1.5 percent with an average drift rate of 0.85 percent. For the 30 loop closures in flight test 3, the maximum drift rate was 2.8 percent with an average of 1.8 percent.
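As a quick check, the quoted drift rate is simply the accumulated drift divided by the distance traveled:

\[ \frac{1.8\ \text{m}}{130\ \text{m}} \approx 0.014 \approx 1.4\ \text{percent}. \]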

The overall accuracy of an approach depends on the environment, quality of sensors and calibration, and sophistication of odometry algorithms. These flight tests highlight that RN facilitates good performance with off-the-shelf algorithms and sensors.

Figure 16: Pose-graph map for the first 130 m of flight test 2. At this point the first loop closure (red) was detected and used to improve the global map without affecting local stability. Before optimization, the global pose estimate created by compounding relative edges had accumulated 1.8 m of drift.

Another advantage of the pose-graph representation is that it accurately captures the uncertainty in the global pose of the vehicle. Approaches that estimate the global pose directly in the filter represent the uncertainty as a Gaussian normal distribution characterized by its covariance matrix, which produces an ellipsoidal confidence region. Yet, it has been shown that the true uncertainty distribution produced as a vehicle moves through the environment with uncertainty in its heading is a banana-shaped distribution [60] that is a Gaussian distribution expressed in exponential coordinates [45]. A pose graph represents the global pose as a sequence of short transforms, each with an associated ellipsoidal uncertainty. It was shown in [44] that this series of uncertainties can be combined to produce a total uncertainty estimate that is an excellent approximation to the true banana-shaped distribution. Therefore, the pose-graph representation contains all of the information that is necessary to produce an accurate estimate of the global pose uncertainty. Figure 17 shows the 90 percent confidence regions created from the pose graph at several points using the method in [6]. This method samples from the individual edge covariances in a Monte-Carlo fashion, then fits a Gaussian distribution in exponential coordinates to the resulting distribution of final pose estimates.² As can be seen, the resulting distributions correctly capture the banana shape of the true uncertainty distribution. In addition, at every point along the trajectory, the 90 percent confidence region captures the true location of the vehicle. This demonstrates that the uncertainty estimate in the global pose reconstructed using the pose graph is consistent. More details on the consistency of the relative navigation approach, and how it compares with other state-of-the-art methods, are given in [6].

² Individual edge covariances can also be combined using the fourth-order analytical approximation presented in [44].

Figure 17: Pose-graph map for flight test 1. Heading errors cause the position uncertainty to grow. The global back end compounds the small, Gaussian edge covariances to form banana-shaped uncertainty estimates that correctly represent the underlying uncertainty. The 90 percent confidence regions are shown for several instances throughout the trajectory.
7.2 Map Optimization
Another advantage of the pose-graph representation is
Figure 18 shows the pose-graph optimization results for
that it accurately captures the uncertainty in the global
flight test 2. Figure 18a shows the unoptimized map pro-
pose of the vehicle. Approaches that estimate the global
duced by compounding the relative front-end pose esti-
pose directly in the filter represent the uncertainty as a
mates. These odometry edges are represented by the blue
Gaussian normal distribution characterized by its covari-
lines with keyframes marked as dots, and loop closures de-
ance matrix, which produces an ellipsoidal confidence re-
tected between keyframes are represented by red lines. Over
gion. Yet, it has been shown that the true uncertainty
the course of the 390 m flight, several meters of drift accu-
distribution produced as a vehicle moves through the envi-
mulated so that the resulting map lies outside of confines
ronment with uncertainty in its heading is a banana-shaped
of the hallway where the vehicle actually flew. Figure 18c
distribution [60] that is a Gaussian distribution expressed in
shows that after the map has been optimized, this drift has
exponential coordinates [45]. A pose graph represents the
been removed and the estimates of the vehicles global tra-
global pose as a sequence of short transforms, each with an
jectory lie within the hallways. The complete optimization
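As an illustration of the Monte-Carlo procedure described above (a sketch under simplifying assumptions, not the exact implementation of [6]), each relative edge can be sampled from its own Gaussian and the samples compounded; with heading uncertainty in the edges, the cloud of final poses traces out the banana shape rather than an ellipse. The edge values and noise levels below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)

    def compound(pose, edge):
        # Compose a global pose (x, y, yaw) with a relative edge (dx, dy, dyaw).
        x, y, th = pose
        dx, dy, dth = edge
        return np.array([x + np.cos(th) * dx - np.sin(th) * dy,
                         y + np.sin(th) * dx + np.cos(th) * dy,
                         th + dth])

    def sample_final_poses(edge_means, edge_covs, n_samples=2000):
        # Draw one perturbed copy of every edge per sample and compound them.
        finals = np.zeros((n_samples, 3))
        for i in range(n_samples):
            pose = np.zeros(3)
            for mu, cov in zip(edge_means, edge_covs):
                pose = compound(pose, rng.multivariate_normal(mu, cov))
            finals[i] = pose
        return finals

    # Ten 1 m forward edges with small translational and 2 deg heading noise:
    # the scatter of the returned endpoints bends into the banana shape, and a
    # Gaussian fit in exponential coordinates captures it well.
    edges = [np.array([1.0, 0.0, 0.0])] * 10
    covs = [np.diag([0.01**2, 0.01**2, np.deg2rad(2.0)**2])] * 10
    samples = sample_final_poses(edges, covs)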
7.2 Map Optimization

Figure 18 shows the pose-graph optimization results for flight test 2. Figure 18a shows the unoptimized map produced by compounding the relative front-end pose estimates. These odometry edges are represented by the blue lines with keyframes marked as dots, and loop closures detected between keyframes are represented by red lines. Over the course of the 390 m flight, several meters of drift accumulated, so that the resulting map lies outside the confines of the hallway where the vehicle actually flew. Figure 18c shows that after the map has been optimized, this drift has been removed and the estimates of the vehicle's global trajectory lie within the hallways. The complete optimization took seven iterations to converge and took less than 8 ms running onboard the vehicle during flight.

During flight test 2, the place recognition algorithm did not produce any false-positive loop-closure detections. This is particularly impressive given the fairly uniform appearance of the hallways that the vehicle flew through (see Figures 8 and 14).
Figure 18: Pose-graph map for flight test 2. (a) Unoptimized trajectory: throughout the 390 m flight, 139 loop closures were detected (red) and three false-positive loop closures were artificially introduced (yellow). (b) Non-robust optimization: false-positive loop closures cause a non-robust optimization to diverge. (c) Robust optimization: robust optimization techniques result in a consistent map. The optimization ran onboard and took 8 ms to converge.
To demonstrate the impact that false-positive loop-closure constraints can have, and to demonstrate the ability of the robust optimization algorithm to detect and reject these spurious constraints, three false-positive loop-closure constraints were artificially introduced to the pose graph. These are shown in yellow in Figure 18a. Figure 18b shows the optimized pose graph obtained by a non-robust optimization algorithm that naively incorporates the false-positive constraints. The three false constraints have a drastic impact on the accuracy of the optimized map, even though there are 139 valid loop closures constraining the map. Figure 18c, on the other hand, demonstrates the effectiveness of dynamic covariance scaling in correctly detecting and rejecting the false-positive loop closures to produce a highly accurate optimized map.
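Dynamic covariance scaling, as we understand it from [42], down-weights each loop-closure constraint by a closed-form factor rather than discarding it outright. The sketch below is our own illustration, with an assumed tuning parameter phi and made-up residuals; well-matched closures keep a weight near one, while grossly inconsistent ones are effectively switched off, which is what allows the three artificial false positives to be rejected.

    import numpy as np

    def dcs_weight(residual, information, phi=1.0):
        # Squared Mahalanobis error of the loop-closure constraint.
        chi2 = float(residual @ information @ residual)
        # DCS scaling factor s = min(1, 2*phi / (phi + chi2)); the constraint's
        # information matrix is scaled by s**2 during optimization.
        return min(1.0, 2.0 * phi / (phi + chi2))

    # A consistent closure keeps nearly full weight...
    w_good = dcs_weight(np.array([0.05, 0.02, 0.01]), np.eye(3) * 100.0)
    # ...while a spurious closure is heavily down-weighted.
    w_bad = dcs_weight(np.array([5.0, -3.0, 0.8]), np.eye(3) * 100.0)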
The unoptimized pose-graph map for flight test 3 is shown in Figure 19a. As with flight test 2, the relative edges from the front-end estimator are shown in blue, and the loop-closure constraints are shown in red. Again, no false-positive loop closures were detected during this flight test. In addition to loop-closure constraints, flight test 3 introduces intermittent GPS measurements. The ten valid GPS measurements are plotted as green points in Figure 19a, and the corresponding edges in the graph are represented by green lines. As described in Section 5.3, the GPS constraints were defined with respect to the first GPS measurement, which is plotted at the origin. The final optimized map incorporating both loop closures and GPS measurements is shown in Figure 19b. While truth is not available, the accuracy of the final map can be evaluated by comparing it to the satellite image of the building. The doors of the building that the vehicle flew through are marked as black lines in Figure 19b. Due to the challenging urban canyon environment, all of the GPS measurements were biased to the north by a few meters, and so the resulting map is also biased to the north. Correcting for this bias, however, it can be seen that the optimized trajectory passes through each of the doors and matches the path that the vehicle actually flew.

One important result that this flight test demonstrates is the ability of the relative navigation architecture to perform delayed localization using few GPS points. Before the first GPS measurement is received, the map is metrically consistent with respect to the starting location of the vehicle, but is not localized globally. In other words, the vehicle knows where it is relative to its starting point, but has no knowledge of where it is in the world. This unlocalized map, however, is still sufficient for navigation purposes, and the vehicle was able to fly autonomous waypoints before it received GPS measurements. When the vehicle received the first GPS measurement, however, it was able to pin the map to a location in the world. Subsequent measurements allowed it to orient the map and refine its position estimate. For flight test 3, this localization did not occur until several minutes into the flight. In addition, the localization is accomplished using few (only ten) GPS measurements. This is significant in the context of other GPS-degraded approaches that require GPS for a prolonged initial alignment phase (the first 80 seconds of flight) [17] or have GPS for a majority of their flight [16].
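A small sketch of how intermittent GPS can be folded into the graph in this way follows; it is our illustration of the idea, and the flat-earth conversion and names are assumptions rather than the authors' code. The first accepted fix defines a local origin, and every later fix becomes a position constraint on its keyframe expressed in that origin's frame, so the map can be localized and oriented after the fact.

    import numpy as np

    EARTH_RADIUS = 6378137.0  # meters; spherical-earth approximation for short baselines

    def gps_to_local(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
        # Approximate north/east offset (meters) of a GPS fix from the first
        # accepted fix, which serves as the origin of the GPS constraints.
        d_lat = np.deg2rad(lat_deg - origin_lat_deg)
        d_lon = np.deg2rad(lon_deg - origin_lon_deg)
        north = EARTH_RADIUS * d_lat
        east = EARTH_RADIUS * np.cos(np.deg2rad(origin_lat_deg)) * d_lon
        return np.array([north, east])

    # Each accepted (keyframe, fix) pair then becomes a unary position factor on
    # that keyframe's global pose; the accuracy gating described above decides
    # whether the factor is added at all.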
7.3 Autonomous Flight

A basic requirement for autonomous flight is robust and stable control of the vehicle. While difficult to quantify, the robustness of the relative navigation architecture is demonstrated by the scope of the flight tests presented in this paper. For example, flight test 2 demonstrates smooth, stable flight down narrow hallways that produce significant aerodynamic ground and wall effects. The high-rate feedback control and accurate relative state estimates facilitated missions that would be infeasible for experienced human pilots. In flight test 3, the vehicle smoothly transitions through eight doorways.
Between the three flight tests presented, the platform was flown for almost a kilometer through congested environments without incident. Throughout the flight tests, the control performance did not suffer from the resetting of the relative states.

A unique advantage of relative navigation that is demonstrated by the flight-test results is the architecture's innate ability to handle jumps in the global-state estimate. For example, the pose-graph optimization at the first loop closure in flight test 2 resulted in a global state jump of 1.8 m, and the optimization at the first loop closure in flight test 3 resulted in a jump of 2.3 m. In addition, the first GPS measurements received in flight test 3 caused a large state jump as the map was rotated counterclockwise by 90 deg and translated approximately 28 m when it was first localized globally. Despite these large state jumps, the control of the vehicle did not suffer at all because, as described in Figure 10, control is carried out in the relative frame and insulated from global state jumps by the path planner. Conceptually, this allows the MAV's perception of the local environment to remain fixed while the global map shifts beneath it.
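This insulation can be made concrete with a short sketch (our own illustration, not the flight code): the path planner re-expresses the global goal in the current keyframe's frame whenever the global estimate of that keyframe changes, while the relative state tracked by the controller is left untouched.

    import numpy as np

    def global_goal_to_relative(goal_g, node_position_g, node_yaw_g):
        # Express a global 2-D goal in the current keyframe (node) frame.
        c, s = np.cos(node_yaw_g), np.sin(node_yaw_g)
        diff = np.asarray(goal_g) - np.asarray(node_position_g)
        return np.array([c * diff[0] + s * diff[1],
                         -s * diff[0] + c * diff[1]])

    # When pose-graph optimization (or the first GPS fix) moves the node's global
    # pose, only node_position_g and node_yaw_g change; the controller keeps
    # tracking setpoints in the unchanged relative frame, so it never sees the jump.
    goal_before = global_goal_to_relative([30.0, 5.0], [10.0, 2.0], 0.0)
    goal_after = global_goal_to_relative([30.0, 5.0], [11.8, 2.0], 0.0)  # after a 1.8 m map shift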
In addition to stable local control, flight test 3 also demonstrated autonomous global navigation. After the first loop closures were received and the drift in the map was removed, a waypoint was provided by an operator clicking on a previously visited location on the generated pose graph. The vehicle then autonomously followed the map back to this location. Autonomous waypoint following was demonstrated three times, traveling 35 m through congested environments, including during an outdoor-to-indoor transition. The regions where this took place are highlighted in purple on Figure 15. The final waypoint was selected after GPS measurements were incorporated into the pose-graph map. The user, by selecting a pixel on an ortho-rectified image, was effectively establishing a desired GPS waypoint for the vehicle. Of note, this global waypoint was located indoors. At this point the vehicle autonomously navigated to that global waypoint and stabilized its position. This result is particularly compelling because the vehicle correctly stabilized itself about a global waypoint despite being in a GPS-denied environment.
Figure 19: Pose-graph map for flight test 3. (a) Pre-optimization: trajectory (blue) before incorporating loop closures (red) and GPS measurements (green). For plotting purposes, the GPS is plotted relative to the first received GPS measurement. (b) Post-optimization: after incorporating the ten available GPS measurements (green), the trajectory is globally localized. Black indicates the doorways. Note that because all of the available GPS measurements were slightly biased to the north, the final map is also biased.

8 Conclusion

Developing dependable, autonomous MAV solutions that are robust to GPS degradation is a challenging but highly relevant field of research. This paper demonstrates that the relative navigation framework offers a compelling alternative paradigm for approaching the problem. By decoupling flight-critical estimation, guidance, and control algorithms from unobservable global states that are prone to inconsistency and destabilizing state jumps, relative navigation avoids many issues that plague other state-of-the-art approaches.
This paper presents the details necessary to implement the complete relative navigation framework, including resetting the relative estimator to ensure observability and adapting existing view-matching, path-planning, and control algorithms for reliable, smooth flight. We describe how to leverage pose graphs to opportunistically incorporate loop-closure and GPS constraints, and outline how the high-level path planner facilitates autonomous missions while insulating the vehicle from destabilizing global state jumps.

Through a series of prolonged flight tests, we demonstrated the effectiveness of the relative navigation approach for autonomous GPS-degraded MAV navigation in varied, unknown environments. We showed that the system can utilize a variety of vision sensors, works indoors and outdoors, runs in real time with onboard processing, and does not require special tuning for particular sensors or environments. We demonstrated stable front-end performance with low drift while leveraging off-the-shelf sensors and algorithms. We further demonstrated the onboard generation of a globally consistent, metric, and localized map by identifying and incorporating loop-closure constraints and/or intermittent GPS measurements. With this map, we demonstrated the fully autonomous completion of mission objectives, including performing a position hold about a GPS waypoint while in a GPS-denied environment.

One of the most important aspects of the relative navigation architecture is that it does not make any assumptions about a particular platform, sensor suite, environment, or use case. Many existing systems could be readily modified to fit within the relative navigation framework, and thereby benefit from its theoretical and practical advantages.

Acknowledgments

This work has been funded by the Center for Unmanned Aircraft Systems (C-UAS), a National Science Foundation Industry/University Cooperative Research Center (I/UCRC) under NSF award No. IIP-1161036, along with significant contributions from C-UAS industry members. This work was also supported in part by Air Force Research Laboratory Science and Technology (AFRL S&T) sponsorship. Daniel Koch was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program. The authors would like to thank Kevin Brink of the Air Force Research Laboratory Munitions Directorate (AFRL/RW) for his support of this project and for his valuable insights.

References

[1] PricewaterhouseCoopers, Clarity from above: PwC global report on commercial applications of drone technology. https://www.pwc.pl/pl/pdf/clarity-from-above-pwc.pdf, May 2016. Accessed: 2017-03-30.

[2] S. Weiss, M. W. Achtelik, S. Lynen, M. Chli, and R. Siegwart, Real-time onboard visual-inertial state estimation and self-calibration of MAVs in unknown environments, in IEEE International Conference on Robotics and Automation, pp. 957–964, 2012.

[3] E. Jones, A. Vedaldi, and S. Soatto, Inertial structure from motion with autocalibration, in ICCV Workshop on Dynamical Vision, 2007.

[4] T. Bailey, J. Nieto, J. Guivant, M. Stevens, and E. Nebot, Consistency of the EKF-SLAM algorithm, in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3562–3568, 2006.

[5] Y. Bar-Shalom, T. Kirubarajan, and X.-R. Li, Estimation with Applications to Tracking and Navigation. New York: John Wiley & Sons, Inc., 2002.

[6] D. O. Wheeler, D. P. Koch, J. S. Jackson, T. W. McLain, and R. W. Beard, Relative navigation: an observable approach to GPS-degraded navigation, IEEE Control Systems Magazine, submitted.

[7] R. C. Leishman, T. W. McLain, and R. W. Beard, Relative navigation approach for vision-based aerial GPS-denied navigation, Journal of Intelligent and Robotic Systems, vol. 74, pp. 97–111, May 2014.

[8] D. Scaramuzza and F. Fraundorfer, Visual odometry: Part I – the first 30 years and fundamentals, IEEE Robotics and Automation Magazine, vol. 18, no. 4, pp. 80–92, 2011.

[9] F. Fraundorfer and D. Scaramuzza, Visual odometry: Part II – matching, robustness, and applications, IEEE Robotics and Automation Magazine, vol. 19, no. 2, pp. 78–90, 2012.

[10] A. Bachrach, S. Prentice, and N. Roy, RANGE – Robust autonomous navigation in GPS-denied environments, Journal of Field Robotics, vol. 28, no. 5, pp. 644–666, 2011.

[11] J.-S. Gutmann and C. Schlegel, AMOS: Comparison of scan matching approaches for self-localization in indoor environments, in Proc. 1st Euromicro Workshop on Advanced Mobile Robots, pp. 61–67, 1996.

[12] G. Grisetti, R. Kummerle, C. Stachniss, and W. Burgard, A tutorial on graph-based SLAM, Intelligent Transportation Systems Council, vol. 2, no. 4, pp. 31–43, 2010.

[13] S. Grzonka, G. Grisetti, and W. Burgard, A fully autonomous indoor quadrotor, IEEE Transactions on Robotics, vol. 28, no. 1, pp. 90–100, 2012.

[14] A. Bachrach, R. He, and N. Roy, Autonomous flight in unknown indoor environments, International Journal of Micro Air Vehicles, vol. 1, no. 4, pp. 217–228, 2010.

[15] M. Bloesch, S. Weiss, D. Scaramuzza, and R. Siegwart, Vision based MAV navigation in unknown and unstructured environments, IEEE International Conference on Robotics and Automation, pp. 21–28, 2010.

[16] S. Shen, Y. Mulgaonkar, N. Michael, and V. Kumar, Multi-sensor fusion for robust autonomous flight in indoor and outdoor environments with a rotorcraft MAV, in IEEE International Conference on Robotics and Automation, pp. 4974–4981, May 2014.

[17] D. Scaramuzza, M. C. Achtelik, L. Doitsidis, F. Friedrich, E. Kosmatopoulos, A. Martinelli, M. W. Achtelik, M. Chli, S. Chatzichristofis, L. Kneip, D. Gurdan, L. Heng, G. H. Lee, S. Lynen, M. Pollefeys, A. Renzaglia, R. Siegwart, J. C. Stumpf, P. Tanskanen, C. Troiani, S. Weiss, and L. Meier, Vision-controlled micro flying robots: from system design to autonomous navigation and mapping in GPS-denied environments, IEEE Robotics and Automation Magazine, vol. 21, no. 3, pp. 26–40, 2014.
[18] G. Chowdhary, E. N. Johnson, D. Magree, A. Wu, and A. Shein, GPS-denied indoor and outdoor monocular vision-aided navigation and control of unmanned aircraft, Journal of Field Robotics, vol. 30, pp. 415–438, May 2013.

[19] S. I. Roumeliotis and J. Burdick, Stochastic cloning: a generalized framework for processing relative state measurements, in IEEE International Conference on Robotics and Automation, pp. 1788–1795, 2002.

[20] D. Moore, A. Huang, M. Walter, E. Olson, L. Fletcher, J. Leonard, and S. Teller, Simultaneous local and global state estimation for robotic navigation, in IEEE International Conference on Robotics and Automation, pp. 3794–3799, May 2009.

[21] K. S. Chong and L. Kleeman, Feature-based mapping in real, large scale environments using an ultrasonic array, International Journal of Robotics Research, vol. 18, no. 1, pp. 3–19, 1999.

[22] B. Kim, M. Kaess, L. Fletcher, J. Leonard, A. Bachrach, N. Roy, and S. Teller, Multiple relative pose graphs for robust cooperative mapping, in IEEE International Conference on Robotics and Automation, pp. 3185–3192, 2010.

[23] S. Leutenegger, S. Lynen, M. Bosse, R. Siegwart, and P. Furgale, Keyframe-based visual-inertial odometry using nonlinear optimization, International Journal of Robotics Research, vol. 34, no. 3, pp. 314–334, 2015.

[24] S. Weiss and R. Siegwart, Real-time metric state estimation for modular vision-inertial systems, in IEEE International Conference on Robotics and Automation, pp. 4531–4537, 2011.

[25] D. P. Koch, T. W. McLain, and K. M. Brink, Multi-sensor robust relative estimation framework for GPS-denied multirotor aircraft, in International Conference on Unmanned Aircraft Systems, pp. 589–597, June 2016.

[26] C. Forster, M. Pizzoli, and D. Scaramuzza, SVO: Fast semi-direct monocular visual odometry, in IEEE International Conference on Robotics and Automation, pp. 15–22, May 2014.

[27] M. Bloesch, S. Omari, M. Hutter, and R. Siegwart, Robust visual inertial odometry using a direct EKF-based approach, in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 298–304, 2015.

[28] J. Zhang, M. Kaess, and S. Singh, Real-time depth enhanced monocular odometry, in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4973–4980, 2014.

[29] A. Censi, An ICP variant using a point-to-line metric, in IEEE International Conference on Robotics and Automation, pp. 19–25, May 2008.

[30] D. P. Koch and D. O. Wheeler, Derivation of the relative multiplicative extended Kalman filter, TODO, 2017.

[31] J. Jackson, D. Wheeler, and T. McLain, Cushioned extended-periphery avoidance: A reactive obstacle avoidance plugin, in International Conference on Unmanned Aircraft Systems, pp. 399–405, June 2016.

[32] N. Michael, D. Mellinger, Q. Lindsey, and V. Kumar, The GRASP multiple micro-UAV testbed, IEEE Robotics and Automation Magazine, vol. 17, no. 3, pp. 56–65, 2010.

[33] G. V. Raffo, M. G. Ortega, and F. R. Rubio, An integral predictive/nonlinear H-infinity control structure for a quadrotor helicopter, Automatica, vol. 46, no. 1, pp. 29–39, 2010.

[34] D. O. Wheeler, P. W. Nyholm, D. P. Koch, G. J. Ellingson, T. W. McLain, and R. W. Beard, Relative navigation in GPS-degraded environments, in Encyclopedia of Aerospace Engineering, pp. 1–10, John Wiley & Sons, Ltd, May 2016.

[35] C. Forster, L. Carlone, F. Dellaert, and D. Scaramuzza, IMU preintegration on manifold for efficient visual-inertial maximum-a-posteriori estimation, Robotics: Science and Systems, pp. 6–15, 2015.

[36] L. Carlone, Z. Kira, C. Beall, V. Indelman, and F. Dellaert, Eliminating conditionally independent sets in factor graphs: A unifying perspective based on smart factors, IEEE International Conference on Robotics and Automation, pp. 4290–4297, May 2014.

[37] R. Kummerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, g2o: A general framework for graph optimization, in IEEE International Conference on Robotics and Automation, pp. 3607–3613, May 2011.

[38] M. Kaess, A. Ranganathan, and F. Dellaert, iSAM: Incremental smoothing and mapping, IEEE Transactions on Robotics, vol. 24, no. 6, pp. 1365–1378, 2008.

[39] F. Dellaert and M. Kaess, Square root SAM: Simultaneous localization and mapping via square root information smoothing, International Journal of Robotics Research, vol. 25, no. 12, pp. 1181–1203, 2006.

[40] M. Kaess, H. Johannsson, R. Roberts, V. Ila, J. J. Leonard, and F. Dellaert, iSAM2: Incremental smoothing and mapping using the Bayes tree, International Journal of Robotics Research, vol. 31, no. 2, pp. 216–235, 2012.

[41] N. Sunderhauf and P. Protzel, Switchable constraints vs. max-mixture models vs. RRR – A comparison of three approaches to robust pose graph SLAM, IEEE International Conference on Robotics and Automation, pp. 5198–5203, May 2013.

[42] P. Agarwal, G. D. Tipaldi, L. Spinello, C. Stachniss, and W. Burgard, Robust map optimization using dynamic covariance scaling, in IEEE International Conference on Robotics and Automation, pp. 62–69, May 2013.

[43] Y. Latif, C. Cadena, and J. Neira, Robust loop closing over time for pose graph SLAM, International Journal of Robotics Research, vol. 32, pp. 1611–1626, Dec. 2013.

[44] T. D. Barfoot and P. T. Furgale, Associating uncertainty with three-dimensional poses for use in estimation problems, IEEE Transactions on Robotics, vol. 30, no. 3, pp. 679–693, 2014.

[45] A. Long, K. Wolfe, M. Mashner, and G. Chirikjian, The banana distribution is Gaussian: a localization study with exponential coordinates, in Robotics: Science and Systems, p. 8, July 2012.

[46] N. Carlevaris-Bianco, M. Kaess, and R. M. Eustice, Generic node removal for factor-graph SLAM, IEEE Transactions on Robotics, vol. 30, pp. 1371–1385, Dec. 2014.

[47] S. Lowry, N. Sunderhauf, P. Newman, J. J. Leonard, D. Cox, P. Corke, and M. J. Milford, Visual place recognition: A survey, IEEE Transactions on Robotics, vol. 32, no. 1, pp. 1–19, 2016.
[48] Sivic and Zisserman, Video Google: a text retrieval approach to object matching in videos, in IEEE International Conference on Computer Vision, vol. 2, pp. 1470–1477, 2003.

[49] D. Nister and H. Stewenius, Scalable recognition with a vocabulary tree, in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 2161–2168, 2006.

[50] M. Cummins and P. Newman, Appearance-only SLAM at large scale with FAB-MAP 2.0, International Journal of Robotics Research, vol. 30, pp. 1100–1123, Aug. 2011.

[51] A. Glover, W. Maddern, M. Warren, S. Reid, M. Milford, and G. Wyeth, OpenFABMAP: An open source toolbox for appearance-based loop closure detection, in IEEE International Conference on Robotics and Automation, pp. 4730–4735, May 2012.

[52] A. Chambers, S. Scherer, L. Yoder, S. Jain, S. Nuske, and S. Singh, Robust multi-sensor fusion for micro aerial vehicle navigation in GPS-degraded/denied environments, in American Control Conference, pp. 1892–1899, June 2014.

[53] T. Tomic, K. Schmid, P. Lutz, A. Domel, M. Kassecker, E. Mair, I. Grixa, F. Ruess, M. Suppa, and D. Burschka, Toward a fully autonomous UAV: Research platform for indoor and outdoor urban search and rescue, IEEE Robotics and Automation Magazine, vol. 19, no. 3, pp. 46–56, 2012.

[54] S. A. Scherer, S. Yang, and A. Zell, DCTAM: Drift-corrected tracking and mapping for autonomous micro aerial vehicles, in International Conference on Unmanned Aircraft Systems, pp. 1094–1101, 2015.

[55] J. Rehder, K. Gupta, S. Nuske, and S. Singh, Global pose estimation with limited GPS and long range visual odometry, in IEEE International Conference on Robotics and Automation, pp. 627–633, May 2012.

[56] E. Olson, J. Leonard, and S. Teller, Fast iterative alignment of pose graphs with poor initial estimates, in IEEE International Conference on Robotics and Automation, pp. 2262–2269, May 2006.

[57] E. Olson and P. Agarwal, Inference on networks of mixtures for robust robot mapping, International Journal of Robotics Research, vol. 32, pp. 826–840, June 2013.

[58] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, ROS: an open-source robot operating system, in IEEE International Conference on Robotics and Automation, vol. 3, p. 5, 2009.

[59] J. Jackson, G. Ellingson, and T. McLain, ROSflight: A lightweight, inexpensive MAV research and development tool, in International Conference on Unmanned Aircraft Systems, pp. 758–762, 2016.

[60] S. Thrun, W. Burgard, and D. Fox, A real-time algorithm for mobile robot mapping with applications to multi-robot and 3D mapping, in IEEE International Conference on Robotics and Automation, pp. 321–328, 2000.