1 Introduction 3
6 Summary 33
References 35
A Measurement plans 37
A.1 Measure ω, T, τ, and i for constant supply voltage 37
A.2 Measure ω and i at varying voltage 39
A.3 Measure mass moment of inertia 40
B Measurement data 41
Chapter 1
Introduction
Drones are becoming more readily available and less expensive. Due to technological advancement, microprocessors have decreased in size and price while still providing sufficient processing power to execute significant tasks. This evolution enables the development of smaller and lighter autonomous or remote-controlled aircraft, such as multi-rotors. The applications for these craft seem endless: geo-archaeology (Oczipka et al., 2009), assisting rescue operations after natural disasters (Apvrille, Tanzi, & Dugelay, 2014), monitoring crops in agriculture (Tripicchio, Satler, Dabisias, Ruffaldi, & Avizzano, 2015), surveillance missions against poaching, and many more.
The final goal of this research project is to apply sound imaging technology (Scholte, 2008) using drones. This technology uses an array of microphones in a grid formation to record sound waves that can be visualised using the methods from Scholte (2008). To achieve similar results with drones, the drones need to fly in a grid formation while maintaining a constant distance from each other. The first steps towards this goal are choosing a platform and designing a controller that is able to control a single drone. Later, the connection and communication between drones can be established and, lastly, the microphones need to be added and the (audio) noise created by the flying drones needs to be filtered out. The remaining audio data can then be used to visualise sound from sources below the grid of drones.
This project report covers only part of the complete research project. For this project the Parrot AR.Drone 2.0 is used: a quad-rotor in X-configuration with a flight-controller board that runs a Linux distribution. The drone sets up its own WiFi access point to which a smart device (phone or tablet) can connect in order to control the AR.Drone 2.0 through an application (AR.FreeFlight). In this project report, however, we connect to the drone with a computer in order to disable the built-in flight-control software and upload our own. We design our own flight-controller program in Simulink and run it in external mode. The WiFi link allows us to maintain the connection while the drone is in flight, so no physical tether is required to send and receive data to and from the AR.Drone 2.0.
In the next chapter we give a non-linear model describing the dynamics of a quad-rotor in X-configuration and design non-linear backstepping controllers for stabilisation of attitude and altitude. We also give a simple supervisory controller that is sufficient for testing a single drone. In Chapter 3
we discuss how the AR.Drone 2.0 is connected to and controlled by Simulink. We explain what
Simulink blocks are available and what their functions are. Furthermore, we explain what happens
in the background in Simulink when a model is built for external mode execution and how the designed
controller is uploaded to the AR.Drone 2.0. In Chapter 4 we discuss the properties of the sensors and
actuators that are present in the drone. We convert the raw sensor data to SI units, calibrate for static errors and temperature sensitivity, and estimate the noise profile. We have also conducted measurements that provide information about the rotors and relate the motor PWM commands to the generated thrust, torque, and rotor angular velocity. In Chapter 5, we use the results from previous chapters to
determine the position and orientation of the AR.Drone 2.0 based on the sensor values and calculate
the desired PWM values for each motor to achieve the thrust and torques that the motion controller
desires. We also give models for the sensors, so that the designed controller can be tested on the model from Chapter 2, extended with the sensor models, before it is implemented in the actual AR.Drone 2.0. Finally, we summarise our findings in Chapter 6 and give our recommendations for further research.
Chapter 2
In this chapter we give a non-linear model of a quad-rotor and design non-linear controllers for stabilisation, using the methods from Choi and Ahn (2015). We also give a simple supervisory controller that is sufficient for testing purposes, though we suggest extending the supervisor for safety and performance reasons.
Choi and Ahn (2015) have derived three separate controllers in order to control the craft: an atti-
tude controller, altitude controller, and a position controller. The AR.Drone 2.0, however, does not
feature an absolute position measurement device, such as GPS. Hence, we only use attitude and altitude
control. It might be possible to add external position measurement in the future, but this is outside the
scope of this project report. Besides the motion controllers, a supervisory controller is required. Its main goal is to maintain safety, both for the environment and for the quad-rotor itself.
The Parrot AR.Drone 2.0 is a quad-rotor in X-configuration. It is essentially a rotating and translating body with four actuators: the motor-rotor combinations. Each rotor applies a thrust and a torque to the drone, allowing it to control its position and orientation.
Figure 2.1: Schematic top view of a quad-rotor with body-fixed coordinate frame (x, y, z), showing rotor 1 (CW), rotor 2 (CCW), and the diagonal rotor separation 2l.
Choi and Ahn (2015) have derived a model of a quad-rotor based on Euler-Lagrange equations in terms of translation and rotation, given in (2.1) and (2.2), respectively:

\begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \ddot{z} \end{bmatrix} =
\begin{bmatrix} \cos\phi \sin\theta \cos\psi + \sin\phi \sin\psi \\
\cos\phi \sin\theta \sin\psi - \sin\phi \cos\psi \\
\cos\phi \cos\theta \end{bmatrix} \frac{U_1}{m} +
\begin{bmatrix} 0 \\ 0 \\ -g \end{bmatrix}    (2.1)

\ddot{\eta} = f(\eta, \dot{\eta}, \Omega) + g(\eta) U ,    (2.2)

with \eta = (\phi, \theta, \psi)^T the roll, pitch, and yaw angles. The control inputs U_1, U_2, U_3, and U_4 are used to control the altitude, roll, pitch, and yaw, respectively, and \Omega = \sum_{i=1}^{4} \omega_{r,i} is the sum of the rotor angular velocities.
In our case, where the quad-rotor has an X-configuration, as opposed to the +-configuration in (Choi & Ahn, 2015), the control inputs U_1 and U are defined as:

U_1 = \sum_{i=1}^{4} T_i    (2.3)

U = \begin{bmatrix} U_2 \\ U_3 \\ U_4 \end{bmatrix} =
\begin{bmatrix} l(+T_1 - T_2 - T_3 + T_4) \\ l(-T_1 - T_2 + T_3 + T_4) \\ \tau_{r,1} - \tau_{r,2} + \tau_{r,3} - \tau_{r,4} \end{bmatrix}.    (2.4)
Li (2014) has derived a dynamical model of the AR.Drone 2.0 in his Master's thesis. We shall use his findings in the development of the initial model. For the AR.Drone 2.0 he derived the following values for the moments of inertia around the x-, y-, and z-axes of the drone and the moment of inertia of a rotor around its spinning axis, respectively: I_x = 2.2383 \cdot 10^{-3} [kg m^2], I_y = 2.9858 \cdot 10^{-3} [kg m^2], I_z = 4.8334 \cdot 10^{-3} [kg m^2], J_r = 2.20321 \cdot 10^{-5} [kg m^2]. Also the mass and the distance from rotor centre to centre of mass have been determined: m = 0.429 [kg], L = 2l = 0.1785 [m].
To design the attitude controller with the backstepping method, tracking errors are defined as

e_1 = \phi - \phi_d    (2.5)
e_2 = \dot{\phi} - \dot{\phi}_d + k_1 e_1    (2.6)
e_3 = \theta - \theta_d    (2.7)
e_4 = \dot{\theta} - \dot{\theta}_d + k_3 e_3    (2.8)
e_5 = \psi - \psi_d    (2.9)
e_6 = \dot{\psi} - \dot{\psi}_d + k_5 e_5 ,    (2.10)
leading to the error dynamics:

\dot{e}_1 = e_2 - k_1 e_1    (2.11)
\dot{e}_2 = \ddot{\phi} - \ddot{\phi}_d + k_1 (e_2 - k_1 e_1)    (2.12)
\dot{e}_3 = e_4 - k_3 e_3    (2.13)
\dot{e}_4 = \ddot{\theta} - \ddot{\theta}_d + k_3 (e_4 - k_3 e_3)    (2.14)
\dot{e}_5 = e_6 - k_5 e_5    (2.15)
\dot{e}_6 = \ddot{\psi} - \ddot{\psi}_d + k_5 (e_6 - k_5 e_5) .    (2.16)
Using the Lyapunov candidate function (2.17) with time derivative (2.18), a stabilising controller can be realised with control signals U_2, U_3, and U_4, expressed in (2.19):

V_1 = \frac{1}{2} \left( e_1^2 + e_2^2 + e_3^2 + e_4^2 + e_5^2 + e_6^2 \right)    (2.17)

\dot{V}_1 = e_1 \dot{e}_1 + e_2 \dot{e}_2 + e_3 \dot{e}_3 + e_4 \dot{e}_4 + e_5 \dot{e}_5 + e_6 \dot{e}_6
= e_1 (e_2 - k_1 e_1) + e_2 (\ddot{\phi} - \ddot{\phi}_d + k_1 \dot{e}_1) + e_3 (e_4 - k_3 e_3) + e_4 (\ddot{\theta} - \ddot{\theta}_d + k_3 \dot{e}_3) + e_5 (e_6 - k_5 e_5) + e_6 (\ddot{\psi} - \ddot{\psi}_d + k_5 \dot{e}_5)
= e_1 (e_2 - k_1 e_1) + e_2 \left( \frac{I_y - I_z}{I_x} \dot{\theta} \dot{\psi} - \frac{J_r}{I_x} \dot{\theta} \Omega + \frac{l}{I_x} U_2 - \ddot{\phi}_d + k_1 (e_2 - k_1 e_1) \right)
+ e_3 (e_4 - k_3 e_3) + e_4 \left( \frac{I_z - I_x}{I_y} \dot{\phi} \dot{\psi} + \frac{J_r}{I_y} \dot{\phi} \Omega + \frac{l}{I_y} U_3 - \ddot{\theta}_d + k_3 (e_4 - k_3 e_3) \right)
+ e_5 (e_6 - k_5 e_5) + e_6 \left( \frac{I_x - I_y}{I_z} \dot{\phi} \dot{\theta} + \frac{1}{I_z} U_4 - \ddot{\psi}_d + k_5 (e_6 - k_5 e_5) \right)    (2.18)

U_2 = \frac{I_x}{l} \left( -\frac{I_y - I_z}{I_x} \dot{\theta} \dot{\psi} + \frac{J_r}{I_x} \dot{\theta} \Omega + (k_1^2 - 1) e_1 - (k_1 + k_2) e_2 + \ddot{\phi}_d \right)
U_3 = \frac{I_y}{l} \left( -\frac{I_z - I_x}{I_y} \dot{\phi} \dot{\psi} - \frac{J_r}{I_y} \dot{\phi} \Omega + (k_3^2 - 1) e_3 - (k_3 + k_4) e_4 + \ddot{\theta}_d \right)
U_4 = I_z \left( -\frac{I_x - I_y}{I_z} \dot{\phi} \dot{\theta} + (k_5^2 - 1) e_5 - (k_5 + k_6) e_6 + \ddot{\psi}_d \right)    (2.19)
Herein, k_1, …, k_6 are positive constant control parameters. These should be chosen carefully, since they determine the performance of the non-linear controller. The resulting time derivative of the Lyapunov candidate (2.17) is

\dot{V}_1 = -k_1 e_1^2 - k_2 e_2^2 - k_3 e_3^2 - k_4 e_4^2 - k_5 e_5^2 - k_6 e_6^2 .    (2.20)

Since (2.17) is positive definite and (2.20) is negative definite, the origin of the controlled error dynamics is asymptotically stable.
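As a numerical sanity check on the backstepping law (a plain-Python sketch, not the actual Simulink implementation), the roll axis can be simulated in isolation: with pitch and yaw rates set to zero, the gyroscopic and cross-coupling terms drop out and the roll dynamics reduce to \ddot{\phi} = (l/I_x) U_2. The gains below are illustrative choices; the inertia and arm length come from Li (2014).

```python
# Sketch: roll-axis backstepping law from (2.19) applied to the simplified
# roll dynamics phi'' = (l/Ix)*U2. Gains k1, k2 are illustrative.
Ix = 2.2383e-3     # [kg m^2], from Li (2014)
l = 0.1785 / 2     # [m], half the diagonal rotor separation L = 2l

def simulate_roll(phi0=0.5, phi_d=0.0, k1=2.0, k2=2.0, dt=1e-3, t_end=5.0):
    phi, dphi = phi0, 0.0
    for _ in range(int(t_end / dt)):
        e1 = phi - phi_d
        e2 = dphi + k1 * e1            # phi_d is constant, so dphi_d = 0
        U2 = (Ix / l) * ((k1**2 - 1) * e1 - (k1 + k2) * e2)
        ddphi = (l / Ix) * U2          # simplified roll dynamics
        phi += dt * dphi               # forward-Euler integration
        dphi += dt * ddphi
    return phi

assert abs(simulate_roll()) < 1e-3     # roll error has decayed
```

The closed loop reduces to \dot{e}_1 = e_2 - k_1 e_1, \dot{e}_2 = -e_1 - k_2 e_2, which is exactly what makes (2.20) negative definite.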
For the altitude controller, the tracking errors are defined as

e_7 = z - z_d    (2.21)
e_8 = \dot{z} - \dot{z}_d + k_7 e_7 ,    (2.22)

leading to the error dynamics:

\dot{e}_7 = e_8 - k_7 e_7    (2.23)
\dot{e}_8 = \ddot{z} - \ddot{z}_d + k_7 (e_8 - k_7 e_7) .    (2.24)
Using a Lyapunov candidate function (2.25) with time derivative (2.26), a non-linear stabilising controller
can be realised, expressed in (2.27).
V_2 = \frac{1}{2} \left( e_7^2 + e_8^2 \right)    (2.25)

\dot{V}_2 = e_7 \dot{e}_7 + e_8 \dot{e}_8
= e_7 (e_8 - k_7 e_7) + e_8 (\ddot{z} - \ddot{z}_d + k_7 \dot{e}_7)
= e_7 (e_8 - k_7 e_7) + e_8 \left( \frac{U_1}{m} \cos\phi \cos\theta - g - \ddot{z}_d + k_7 (e_8 - k_7 e_7) \right)    (2.26)

U_1 = \frac{m \left( g + (k_7^2 - 1) e_7 - (k_7 + k_8) e_8 + \ddot{z}_d \right)}{\cos\phi \cos\theta}    (2.27)
Similar to the attitude controller, k_7 and k_8 are positive constant control parameters that should be chosen carefully, since they determine the performance of the controller. This control law leads to the time derivative of the Lyapunov candidate

\dot{V}_2 = -k_7 e_7^2 - k_8 e_8^2 ,    (2.28)

which is negative definite. Since (2.25) is positive definite, we can conclude that the origin of the altitude error dynamics is also asymptotically stable.
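The altitude law (2.27) can be checked numerically in the same way. The sketch below (plain Python, not the Simulink implementation) applies it to \ddot{z} = (U_1/m)\cos\phi\cos\theta - g in level flight (\phi = \theta = 0); the gains and setpoint are illustrative, the mass comes from Li (2014).

```python
# Sketch: altitude backstepping law (2.27) applied to the vertical
# dynamics z'' = U1/m - g (level flight, cos(phi)cos(theta) = 1).
m, g = 0.429, 9.80665      # mass from Li (2014), standard gravity

def simulate_altitude(z0=0.0, zd=1.0, k7=2.0, k8=2.0, dt=1e-3, t_end=6.0):
    z, dz = z0, 0.0
    for _ in range(int(t_end / dt)):
        e7 = z - zd
        e8 = dz + k7 * e7      # zd is constant, so dz_d = 0
        U1 = m * (g + (k7**2 - 1) * e7 - (k7 + k8) * e8)
        ddz = U1 / m - g
        z += dt * dz           # forward-Euler integration
        dz += dt * ddz
    return z

assert abs(simulate_altitude() - 1.0) < 1e-3   # settles at the 1 m setpoint
```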
However, when either the pitch or the roll angle equals \pm\pi/2, the control signal U_1 = \infty, which is undesirable. Hence, we require that |\phi|, |\theta| < \pi/2. Given that \phi = \phi_d + e_1, we require that |\phi_d| < M < \pi/2 for some M. It holds that

|\phi| = |\phi - \phi_d + \phi_d| \leq |\phi - \phi_d| + |\phi_d| < |\phi - \phi_d| + M.    (2.29)

If

|\phi - \phi_d| \leq \frac{\pi}{2} - M,    (2.30)

then

|\phi| < |\phi - \phi_d| + M \leq \frac{\pi}{2} - M + M = \frac{\pi}{2} ,    (2.31)

so it suffices to guarantee that |e_1| \leq \frac{\pi}{2} - M. Grönwall's inequality in differential form states that if \dot{u}(t) \leq \beta(t) u(t), then u(t) \leq u(a) e^{\int_a^t \beta(s)\,ds} on the interval [a, t]. Using the property that \dot{V}_1 is negative definite, we have V_1(t) \leq V_1(0) and hence \|e(t)\| \leq \|e(0)\|. If

\|e(0)\| \leq \frac{\pi}{2} - M,    (2.35)

then

|e_1(t)| \leq \sqrt{e_1^2(t) + \ldots + e_6^2(t)} = \|e(t)\| \leq \|e(0)\| \leq \frac{\pi}{2} - M.    (2.36)
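Grönwall's inequality used above can also be illustrated numerically: any u with \dot{u} \leq \beta u stays below u(0) e^{\beta t}. The sketch below (the constant \beta and the forcing term are arbitrary illustrative choices) integrates such a u and checks the bound along the trajectory.

```python
import math

# Numeric illustration of Groenwall's inequality in differential form:
# if u'(t) <= beta*u(t), then u(t) <= u(0)*exp(beta*t). Here u satisfies
# u' = beta*u - 0.1, which is strictly below the bound.
def groenwall_demo(u0=1.0, beta=0.5, dt=1e-4, t_end=2.0):
    u, t = u0, 0.0
    ok = True
    for _ in range(int(t_end / dt)):
        u += dt * (beta * u - 0.1)   # forward Euler
        t += dt
        ok = ok and u <= u0 * math.exp(beta * t) + 1e-9
    return ok

assert groenwall_demo()
```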
2.4 Supervisor
The supervisory controller is used to engage different control laws and to ensure safety and proper functioning of the quad-rotor. For safe operation, the battery voltage must not drop below a certain value, since this indicates that the battery is nearly drained and safe flight is no longer possible. If the quad-rotor is airborne when this situation occurs, it should engage a manoeuvre to safely land itself and refuse to take off until the battery is either charged or replaced. Naturally, the quad-rotor should not be able to take off at all if the battery voltage is too low.
For safe operation it is also required that the orientation of the quad-rotor stays within limits and that, if (some of) the rotors are blocked, the motors shut down to prevent damage to the source of the blockage and to the craft itself.
The supervisor should also incorporate a function that prevents flyaway situations, in which the quad-rotor gains too much altitude or crosses the border of a predefined safe flying area.
For normal operation, different states should be defined, such as, but not limited to: Initialise, Wait for command, Take-off, Hover, Controlled flight, Landing, and Calibrate. We start, however, with a very simple supervisory controller, since this project is not aimed at designing a controller for fully automated flight and calibration. Figure 2.2 displays a schematic view of this simple supervisor.

Figure 2.2: Schematic view of the simple supervisory controller that is initially implemented in the AR.Drone 2.0.

The Initialise state is the starting state. After initialisation, the Wait state is reached. The supervisory controller waits until a signal is received that the drone should fly. On lift-off, it arms and starts the motors and sends a desired trajectory to the altitude and attitude controllers. When the lift-off procedure is completed,
the Hover or Land states can be reached, depending on the desired state and the battery level. The Hover state sends a constant altitude and zero roll, pitch, and yaw angles to the altitude and attitude controllers for as long as the AR.Drone is hovering. When the battery voltage drops below a threshold, the controller reaches the Land state (regardless of the desired flight state) in order to land the drone safely; this state also sends a desired trajectory to the attitude and altitude controllers. The Terminate state can be reached from any of the flight states if a non-safe situation occurs. It disarms the motors, hence stopping the rotors if anything is blocking them. This state is also reached when the landing phase is completed. When safely landed, the Wait state is reached again; otherwise the external mode is stopped in the Stop state.
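As an illustration, the supervisor of Figure 2.2 can be sketched as a state-transition function. This is a plain-Python sketch, not the Simulink implementation: the state names follow the text, while the battery threshold and the boolean inputs are illustrative assumptions.

```python
# Sketch of the simple supervisor in Figure 2.2. The threshold value and
# the input flags are made-up illustrations.
LOW_BATTERY_V = 10.5  # assumed battery threshold [V]

def step(state, fly_requested, land_requested, battery_v, blocked, landed):
    if blocked and state in ("Lift-off", "Hover", "Land"):
        return "Terminate"              # disarm motors on any blockage
    if state == "Initialise":
        return "Wait"
    if state == "Wait":
        ok = fly_requested and battery_v > LOW_BATTERY_V
        return "Lift-off" if ok else "Wait"
    if state == "Lift-off":
        return "Hover"                  # lift-off procedure completed
    if state == "Hover":
        if battery_v <= LOW_BATTERY_V or land_requested:
            return "Land"               # forced landing on low battery
        return "Hover"
    if state == "Land":
        return "Terminate" if landed else "Land"
    if state == "Terminate":
        return "Wait" if landed else "Stop"
    return state

# Low battery while hovering forces a landing:
assert step("Hover", False, False, 10.0, False, False) == "Land"
```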
2.5 Summary
In this chapter we have given a non-linear model of the dynamics of a quad-rotor and designed non-linear backstepping controllers for stabilisation of attitude and altitude, based upon the methods from Choi and Ahn (2015) and Li (2014). A quad-rotor is a body in space with the ability to translate and rotate in six degrees of freedom. The four actuators (rotors) each provide a force, T_i, and a torque, \tau_i, to this body.
The two backstepping motion controllers require acceleration in position and orientation, and output
a force (summation of the thrust of the four rotors) and three torques (around the x, y, and z-axes with
respect to the body fixed frame). The motion controllers can be tuned by changing the parameter values
9
k_1, …, k_8, which determine the performance of the controllers. Furthermore, we have given requirements on the desired pitch and roll angles, \theta_d and \phi_d, and on the initial condition in order to prevent division by zero in the altitude controller, namely: |\phi - \phi_d| \leq \pi/2 - M and |\theta - \theta_d| \leq \pi/2 - M for some M < \pi/2.
Lastly, we have designed a simple supervisory controller that helps govern the operation of the AR.Drone 2.0. It initialises the controller and waits until it receives a signal to fly; when such a signal is received, it lifts off according to a predetermined desired altitude and hovers until landing is requested or the battery voltage reaches its threshold value due to depletion. Landing is, similar to lift-off, carried out according to a predefined altitude setpoint. From any of the states Lift-off, Land, or Hover, the flight can be terminated in case a blockage occurs at the rotors. In this case, the motors are disarmed instantaneously and the external mode in Simulink is terminated. When landing is completed normally (safely), however, the supervisor returns to its Wait state and is ready for additional flight.
In the next chapter we discuss how the AR.Drone 2.0 is connected to and controlled by Simulink.
We give an explanation for the Simulink blocks that can be used and explain what happens in the
background when a Simulink model is built for external connection to the AR.Drone 2.0.
Chapter 3
In this chapter we discuss how the AR.Drone 2.0 is connected to and controlled by Simulink. A full manual for installing the software and connecting to the drone is given in Appendix C. We describe the Simulink blocks that have been used and their functions, and we describe the compiling and uploading process that happens in the background.
In order to interface with the AR.Drone 2.0, a GitHub project (Daranlee & Slovak194, n.d.) was used. This project is based upon source-code drivers developed by The Paparazzi Project (n.d.), an open-source project encompassing autopilot systems for unmanned aerial vehicles (UAVs). Daranlee and Slovak194 have used The Paparazzi Project to develop an interface between Matlab/Simulink and the AR.Drone 2.0. Their project deploys the Simulink model automatically to the drone, using Embedded Coder C code generation. Over a WiFi link, a Simulink model running in real time on a PC can receive telemetry data and send commands to the drone. While their project is very useful, its documentation is very limited. Hence, we have not used their filter and controller blocks for Simulink; only their framework for deployment and communication has been used, together with their blocks to receive sensor data and send motor commands.
The Inertial Measurement block reads the sensor values from the on-board gyroscope, accelerometer, magnetometer, barometer, and ultrasound altimeter and outputs a bus signal containing these values. For the barometer, accelerometer, and gyroscope it also provides the temperature of these sensors, since they are temperature dependent. The PIC microcontroller that reads the sensor data operates at 200 Hz, according to the description in this block, but it is advised to operate the block at a higher sample rate to prevent data loss. Hence, the Simulink models operate at a sampling frequency of 400 Hz. A checksum flag is output that indicates when the checksum fails, which can be attributed to data loss.
The Motor block accepts four inputs ranging from 0 to 100. These are the throttle settings for the
four motors, with 0 being no throttle and 100 being full throttle. The block converts the four throttle
settings to a 40-bit number, which is fed to the actual motor controller. This block also provides a GPIO_Fault_Pin output signal that indicates whether the motors are blocked. When a blockage occurs, a built-in feature shuts down the motors.
The Battery Voltage Measurement block reads the battery voltage from the ADCIN0 pin on the main board of the AR.Drone 2.0 and outputs the battery potential in decivolts. Hence, the block output is divided by ten in order to obtain the battery voltage in volts.
The quad-rotor is equipped with four LEDs, one at each motor. These LEDs can emit three colours: red, green, and orange (a combination of red and green). The LED block accepts four integer inputs ranging from 0 to 3, where 0 turns the respective LED off, 1 turns it red, 2 turns it green, and 3 turns it orange. Information about the drone, such as its current flight status, can be used to assign a certain colour to the LEDs. The Init_Actuator block must be placed in the Simulink model in order to use the Motor and LED blocks.
Before connecting in external mode, the following processes running on the AR.Drone need to be terminated:
- program.elf
- program.elf.respawner.sh
- any process that has the same name as the Simulink model that will be uploaded.
Then, ssh_download.bat executes commands in a telnet session such that the processes above are terminated. The first process, program.elf, is the factory controller software that starts automatically at boot of the AR.Drone. The second process, program.elf.respawner.sh, is a shell script that relaunches program.elf in case it is terminated, either by a crash or by design. The third process it kills is any process with the same name as the Simulink model that will be uploaded; this ensures that, if something went wrong during a previous instance, the old process is terminated if it was still running. Hence, we can be sure that the program that will run is indeed the Simulink model that has been compiled.
Once these processes are terminated, our compiled executable (New_Controller.elf) is uploaded to the drone using FTP on port 5551. This puts the executable in /update, a temporary folder in the operating system of the AR.Drone that is wiped at every boot. Hence, no traces of the custom controller are left behind that could permanently interfere with the normal operation of the AR.Drone 2.0 (as intended by the manufacturer). Once uploaded, the mode of New_Controller.elf is changed such that every user can read, write, or execute this program, and finally the executable is started. Now it is possible for Simulink to connect in external mode and start the simulation (real-time remote execution of the designed controller) with real-time telemetry feedback. Simulink scopes can be used to view data in real time and to save data to the Matlab workspace. The "To Workspace" and "To File" blocks do not seem to work for this purpose.
After finishing the simulation, the process New_Controller.elf on the AR.Drone 2.0 is terminated. This means that if a new simulation with the same program is required, the Simulink model needs to be rebuilt using the build command in Simulink; simply restarting the executable with /update/New_Controller.elf over telnet to 192.168.1.1:23 is not sufficient, since it lacks the connection to Simulink.
3.3 Summary
In this chapter we have discussed how the AR.Drone 2.0 is connected to and controlled by Simulink. A GitHub project (Daranlee & Slovak194, n.d.) was used to make the connection between Simulink and the AR.Drone 2.0. The Simulink blocks created by this project are able to read all the sensor data and control the motors and LEDs. We have also explained that Simulink models are compiled by an ARM (GNU/Linux) compiler toolchain and that the resulting binaries are uploaded via FTP to /update in the operating system of the AR.Drone 2.0. Furthermore, when the external mode of Simulink is terminated, the model needs to be rebuilt before the simulation can be restarted.
Chapter 4
In this chapter we discuss the sensors and actuators that are present in the AR.Drone 2.0 and that can be accessed from Simulink using the blocks described in the previous chapter. We also investigate the properties of these sensors, such as sensitivity to environmental parameters (temperature) and noise magnitude and distribution. Furthermore, the sensors are calibrated. Lastly, measurements have been conducted on the motors to investigate their properties. The measurement plans are given in Appendix A and their data in Appendix B. Trends have been found that describe the thrust, torque, and rotor angular velocity as functions of the PWM values.
The AR.Drone 2.0 is a quad-rotor with an on-board computer running a Linux distribution. For measuring, it is equipped with a variety of sensors (Pleban, Band, & Creutzburg, 2014). It contains a 3-axis accelerometer with ±50 mg precision, a 3-axis gyroscope with ±2000 °/s range, a 3-axis magnetometer with 6° precision, a barometer with ±10 Pa precision, and an ultrasound sensor with a range of 6 metres. Furthermore, two cameras are present: a front-facing camera with a resolution of 720p at 30 fps with a wide-angle lens (92° diagonal) and a down-facing camera with QVGA (320×240 pixels) resolution (47.5° diagonal) at 60 fps.
While all sensors can be accessed and their data can be read, the sensor data are all integer values without units. Hence, these values need to be converted to meaningful units. Also, the barometer, accelerometer, and gyroscope are equipped with an internal temperature sensor, suggesting that they are temperature sensitive. Hence, we conduct measurements with zero movement and varying temperature to analyse these dependencies and compensate for them (calibration).
4.1 Accelerometer
The accelerometer outputs four signals: ax , ay , az , and Tacc , where the first three signals are the
accelerations in x, y, and z directions respectively and the last signal is the temperature. All of these
signals are unsigned 16 bit integers in Simulink. A 1000 second sensor readout has been taken to
evaluate the data, which is displayed in Figure 4.1. From the data, we can see that the acceleration
values depend on the temperature and are subject to noise. Figure 4.2 displays two graphs. The top
graph shows how the acceleration measurement data depends on the temperature, and also shows a
second order polynomial fit through these measurement points. The fitted polynomials for the three
acceleration signals are given in (4.1).
The bottom graph in Figure 4.2 shows the acceleration measurements after correction using the relation between the measured accelerations and temperatures. The temperature measurements range from 105 to 144 (raw counts). This might seem a significant difference, but in reality this range is quite small; the temperature increase is caused by the internal components heating up after a cold start.
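The temperature compensation described above amounts to an ordinary least-squares fit of a second-degree polynomial. The sketch below (plain Python on synthetic data; the drift coefficients are made up, while the 105-144 temperature range follows the measurement) fits and removes such a drift. Centring the temperature first improves the conditioning of the normal equations.

```python
# Sketch: second-order polynomial temperature compensation, as applied to
# the accelerometer (and gyroscope) signals, on synthetic data.
def polyfit2(x, y):
    # Least squares for y ~ c0 + c1*x + c2*x^2 via the 3x3 normal equations.
    s = [sum(v**k for v in x) for k in range(5)]
    t = [sum(yi * xi**k for xi, yi in zip(x, y)) for k in range(3)]
    A = [[s[0], s[1], s[2], t[0]],
         [s[1], s[2], s[3], t[1]],
         [s[2], s[3], s[4], t[2]]]
    for i in range(3):                            # Gauss-Jordan with pivoting
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(3):
            if r != i:
                f = A[r][i] / A[i][i]
                A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    return [A[i][3] / A[i][i] for i in range(3)]

temps = [105 + 39 * k / 99 for k in range(100)]           # raw counts, 105..144
mean_T = sum(temps) / len(temps)
u = [T - mean_T for T in temps]                           # centred temperature
drift = [2500 + 0.8 * T - 0.002 * T**2 for T in temps]    # made-up drift
c0, c1, c2 = polyfit2(u, drift)
corrected = [d - (c0 + c1 * ui + c2 * ui**2) for ui, d in zip(u, drift)]
assert max(abs(v) for v in corrected) < 1e-6              # drift removed
```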
15
Figure 4.1: Raw accelerometer data from the IMU measurement block. The top graph shows the acceleration signals, the bottom graph displays the temperature signal.
Hence, in order to fully model the temperature dependency of the accelerometer, additional measurements
should be conducted with a greater temperature range. When the raw measurement has been corrected
for temperature, the actual accelerations can be calculated using the property that a measured value
of 512 represents an acceleration of one g (standard gravity, 9.80665 m/s2 ). The actual (body-fixed)
accelerations can be determined by (4.2):

\bar{a} = \frac{a_{acc,corr}}{512} \, g ,    (4.2)

where \bar{a} denotes the body-fixed accelerations and a_{acc,corr} denotes the temperature-corrected accelerometer signals. Note that, whereas a sensitivity of 512 counts per g might suggest a resolution of 1/512 g, the accelerometer actually outputs steps of four, resulting in a lower resolution of 1/128 g.
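The conversion stated above, 512 counts per g, can be expressed as a minimal sketch (plain Python, for illustration only):

```python
# Sketch of the raw-to-SI conversion: 512 counts correspond to one g for
# the temperature-corrected accelerometer signal.
G = 9.80665  # standard gravity [m/s^2]

def acc_to_si(raw_corrected):
    return raw_corrected / 512 * G

assert abs(acc_to_si(512) - G) < 1e-12       # 512 counts = 1 g
assert abs(acc_to_si(4) - G / 128) < 1e-12   # step of 4 counts = g/128
```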
Since the accelerometer signals are subject to noise, and we would like to incorporate this into our
model in order to create a more realistic simulation environment, we analysed the corrected accelerom-
eter signals. In the top graph of Figure 4.3, a normalised histogram is displayed that shows how many
times certain values occur in the signal. Additionally, a normal distribution is displayed. As can be seen,
the noise seems to follow the normal distribution. Hence, we can model the noise of the accelerometer
signals as sources of random numbers that are normally distributed. The means and standard deviations
for the corrected accelerometer signals are
\mu_{acc} = \begin{bmatrix} 4.7382 \cdot 10^{-1} \\ 5.7538 \cdot 10^{-1} \\ 6.2759 \cdot 10^{-1} \end{bmatrix}, \quad
\sigma_{acc} = \begin{bmatrix} 3.6661 \\ 1.9465 \\ 2.2529 \end{bmatrix}.
Lilliefors tests have been conducted using the built-in Matlab function lillietest(x). The corrected data fails this test for any probability tolerance, meaning that the distribution of the noise is almost certainly not a normal distribution. The bottom graph of Figure 4.3 shows the cumulative distribution function (grey dashed line) and the data points. The data points do not exactly follow the cumulative normal distribution line, especially at the lower and upper ends of the graph; hence this is not a normal distribution, confirming the results of the Lilliefors tests. However, the deviation from the normal distribution is limited, so we shall use a normal distribution in the models of the accelerometer signal noise.
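For simulation, this noise model amounts to drawing normally distributed samples. A sketch with the reported mean and standard deviation of the third accelerometer signal (a seeded RNG keeps it reproducible; this is illustrative, not the Simulink noise block):

```python
import random

# Sketch: modelling the corrected accelerometer noise as independent,
# normally distributed samples, using the reported mean and standard
# deviation of the third (z) accelerometer signal.
random.seed(0)                        # fixed seed for reproducibility
mu, sigma = 0.62759, 2.2529

samples = [random.gauss(mu, sigma) for _ in range(100_000)]
mean = sum(samples) / len(samples)
assert abs(mean - mu) < 0.05          # sample mean close to the model mean
```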
16
Figure 4.2: In the top graph we see the acceleration measurement points versus the temperature measurement, indicating that the sensor has a temperature dependency. Alongside is a second-degree polynomial fit that can be used to compensate for the drift caused by the changing temperature. In the bottom graph, the corrected acceleration is displayed.
Figure 4.3: The top graph shows a histogram of the corrected accelerometer signal and a bell curve of the normal distribution with mean μ = 0.6276 and standard deviation σ = 2.2529. The bottom graph shows the cumulative distribution function of the normal distribution (grey dashed) together with the actual measurement points.
4.2 Gyroscope
Similarly to the accelerometer, the gyroscope outputs four signals: v_x, v_y, v_z, and T_gyro, where the first three signals are the angular velocities around the body-fixed x, y, and z axes, respectively, and the last signal is the temperature. The angular velocities are signed 16-bit integer signals, while the temperature is an unsigned 16-bit integer signal. Again, a 1000-second sensor readout has been taken to evaluate the signals, displayed in Figure 4.4. From this figure, we can see that all signals are subject
Figure 4.4: Raw gyroscope data from the IMU measurement block. The top graph displays the angular velocity signals, the bottom graph shows the temperature signal.
to noise and that the angular velocity signals depend on the temperature.

Figure 4.5: The top graph displays the angular velocity measurement points versus the temperature measurement, indicating that the sensor is sensitive to temperature changes. Alongside, a second-degree polynomial fit is displayed, which can be used to compensate for changing temperatures. The bottom graph shows a corrected angular velocity signal.

The top graph of Figure 4.5 shows how the angular velocity measurements depend on the temperature, together with a second-order polynomial fit through these measurement points, similar to the accelerometer case. The
fitted polynomials for the angular velocity signals are given in (4.3).
The bottom graph of Figure 4.5 shows an angular velocity signal corrected for temperature. While the range of the temperature signal of the gyroscope is far greater than that of the accelerometer, the actual temperature range is quite small; it is the same temperature increase, caused by the heating up of the internal components, as previously observed for the accelerometer. The greater range, however, suggests a temperature sensor with a higher resolution. For the gyroscope too, it would be beneficial to conduct additional measurements over a greater temperature range in order to obtain a better relation between the angular velocity signals and the temperature. When the raw measurements have been corrected, the actual angular velocities can be calculated. From Pleban et al. (2014), we learned that the gyroscope has a range of ±2000 degrees per second, i.e. a total range of 4000 degrees per second. Since the signals are 16-bit integers, the resolution of the sensor is 4000/65536 ≈ 0.061 degrees per second per count. Contrary to the accelerometer, the gyroscope outputs signals with steps of one, so the apparent resolution is not reduced. The actual body-fixed angular velocities in radians per second can be determined by (4.4).
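The count-to-rad/s conversion implied above can be sketched as follows (plain Python for illustration; the scale factor is derived from the stated ±2000 °/s range over a signed 16-bit signal):

```python
import math

# Sketch: corrected gyroscope counts to rad/s, assuming a +-2000 deg/s
# full range over a signed 16-bit signal.
DEG_S_PER_COUNT = 4000 / 65536        # ~0.061 deg/s per count

def gyro_to_si(raw_corrected):
    return raw_corrected * DEG_S_PER_COUNT * math.pi / 180.0

# Full positive scale (32768 counts) maps to 2000 deg/s:
assert abs(gyro_to_si(32768) - math.radians(2000)) < 1e-9
```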
Figure 4.6: Histogram of a corrected gyroscope signal and a bell curve of the normal distribution with mean μ = 6.0248 · 10⁻³ and standard deviation σ = 1.2347.
We would like to incorporate the gyroscope noise in our model as well. Hence, we analysed the corrected gyroscope signals in a similar manner as the accelerometer signals. Figure 4.6 displays a normalised histogram that shows how often certain values occur in the signal, together with a normal distribution. As can be seen, similar to the accelerometer, the noise follows this bell curve quite well. Conducting the Lilliefors test, however, indicates that the noise is not normally distributed. As the deviation from the normal distribution is again small, we shall model the noise of the gyroscope signals as sources of normally distributed random numbers. The means and standard deviations of the signal noise are
\mu_{gyro} = \begin{bmatrix} 7.8871 \cdot 10^{-11} \\ 9.7434 \cdot 10^{-12} \\ 2.8300 \cdot 10^{-12} \end{bmatrix}, \quad
\sigma_{gyro} = \begin{bmatrix} 1.3285 \\ 0.98579 \\ 1.2346 \end{bmatrix}.
4.3 Ultrasound
The ultrasound distance sensor outputs twelve signals, of which the first, ultrasound, is used by Daranlee and Slovak194 to determine the altitude of the AR.Drone 2.0. This signal, however, is not as smooth as the previously discussed signals: 25 times per second it outputs a peak, as can be seen in the top graph of Figure 4.7. If we remove these peaks, the resulting signal over a period of 1000 seconds is displayed
Figure 4.7: The top graph displays the raw sensor data over a period of 1 second.
The bottom graph displays the sensor data after removal of the peaks over a period
of 1000 seconds.
in the bottom graph of Figure 4.7. As we see, similarly to the accelerometer and gyroscope, there is a
drift in the altitude signal, even though the AR.Drone 2.0 was stationary during the 1000 second measure-
ment period. This drift might be due to temperature as well; the ultrasound sensor, however, does not
provide a temperature signal, so we cannot correct for it (temperatures between sensors may vary).
This temperature sensitivity, however, seems very small compared to that of the accelerometer and
gyroscope. Moreover, the ultrasound sensor directly measures a distance and not a velocity or acceleration,
making the error induced by varying temperature less significant. If we start to measure the altitude after
the AR.Drone 2.0 has been on for 2000 seconds and the temperature has had time to reach equilibrium,
the altitude signal is more constant. This can be seen in the top graph in Figure 4.8. In the bottom
graph of Figure 4.8, a normalised histogram is displayed that shows the deviation of the signal.
Additionally, a normal distribution is displayed. As can be seen, the noise follows this normal
distribution with a mean μ = 882.47 and standard deviation σ = 0.66426. According to the Lilliefors test,
the noise from the ultrasound sensor is, similarly to the accelerometer and gyroscope noise, not actually
normally distributed. However, we assume again that the deviation is limited and we shall use a normal
distribution for our model.
To calculate the actual altitude in SI units, the altitude signal needs to be converted. From the other
Simulink blocks, provided by the GitHub project, we found that the altitude in metres can be calculated
as follows:
z = (ultrasound − 880) / 265.52.    (4.5)
Ultrasonic sensors are sensitive to where objects are positioned relative to the sensor. In our case, the
orientation of the AR.Drone 2.0 plays a role in the sensor value of the altitude. Since we do not know
the specific ultrasonic transmitter and receiver types, it is not possible to compensate. We shall assume
that the angles are sufficiently small such that the inaccuracy due to these angles can be neglected. For
future research, experiments can be conducted in order to find a relation between the orientation and
the ultrasound signal.
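The peak removal and unit conversion can be sketched as follows; the jump threshold is a hypothetical value chosen for illustration, while the conversion follows (4.5):

```python
def remove_spikes(signal, threshold=1000.0):
    """Replace isolated peaks by the last accepted value.

    The 25 Hz peaks lie far outside the normal signal range, so a
    simple jump threshold suffices. `threshold` is a hypothetical
    value, not taken from the report.
    """
    cleaned = [signal[0]]
    for x in signal[1:]:
        cleaned.append(cleaned[-1] if abs(x - cleaned[-1]) > threshold else x)
    return cleaned

def altitude_m(ultrasound):
    """Convert a cleaned ultrasound value to metres, following (4.5)."""
    return (ultrasound - 880.0) / 265.52
```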
Figure 4.8: The top graph displays the ultrasound signal after removal of the peaks and
after the AR.Drone 2.0 reaches temperature equilibrium. The bottom graph displays
the histogram of the ultrasound signal and a bell curve of the normal distribution with
mean μ = 882.47 and standard deviation σ = 0.66426.
4.4 Magnetometer
The magnetometer outputs three signals, m_x, m_y, and m_z: the components of the geomagnetic field
vector at the location of the AR.Drone 2.0, expressed in the body-fixed coordinate frame. These three
signals are 16-bit integer signals. A 1000 second sensor readout has been taken to evaluate the signals.
These signals are displayed in the top graph of Figure 4.9. Similarly
Figure 4.9: The top graph displays the raw magnetometer signals. The bottom
graph displays the histogram of one of the vector components and a bell curve of the
normal distribution with mean μ = −10.177 and standard deviation σ = 2.8574.
to the previously discussed sensors, the magnetometer is subject to signal noise. The bottom graph
in Figure 4.9 displays a normalised histogram that shows how many times certain values occur in the
signal. Additionally, a normal distribution is displayed. As can be seen, similar to the previous sensors,
the noise follows this bell curve quite well. Hence, we can model the noise of the magnetometer signals
as sources of normally distributed random numbers. The standard deviations of the signal noise are:
σ_mag = (2.8574, 2.8587, 2.8533)ᵀ.
The magnetometer, however, is offset in the x, y, and z directions. When the drone is rotated in 3D space in
different directions, the measurement points of the magnetometer should all lie on the surface of a sphere
with its centre in the origin. In the left graph of Figure 4.10, the x and y components of the magnetometer
signal are displayed. As can be seen, the measurements lie within a circle; the centre of this
circle, however, does not coincide with the origin of the coordinate system. According to Ozyagcilar (2013),
offsets of the magnetometer due to hard-iron effects can be calculated by (4.6), fitting a sphere through
a set of measurement data using the method of least squares,
m̄_mag = (1/2) (β₁, β₂, β₃)ᵀ,    (4.6)
where
β = (XᵀX)⁻¹ XᵀY ∈ ℝ⁴ˣ¹,    (4.7)
with
Y = (m²_x,1 + m²_y,1 + m²_z,1,  m²_x,2 + m²_y,2 + m²_z,2,  …,  m²_x,N + m²_y,N + m²_z,N)ᵀ,
X = [m_x,1  m_y,1  m_z,1  1;  m_x,2  m_y,2  m_z,2  1;  … ;  m_x,N  m_y,N  m_z,N  1],
leading to the corrected magnetometer signals:
m_mag,corr = m_mag − m̄_mag = m_mag − (1/2) (β₁, β₂, β₃)ᵀ.    (4.8)
The corrected x and y components of the magnetometer signal are displayed in the right graph of
Figure 4.10. As can be seen, the center of the circle in which the measurement points are located now
indeed seems to coincide with the origin of the coordinate system. The calculated values for β are:
β = (45.7478, 61.0844, 14.1098, 18.0095·10⁴)ᵀ.
When the AR.Drone 2.0 is positioned level and with the x-axis pointed north, the corrected magnetometer
values are:
m_mag,0 = (14.6055, 4.3001, 71.7110)ᵀ.
Figure 4.10: Uncorrected x and y components of the magnetometer signal (left) and
corrected x and y components of the magnetometer signal (right).
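The least-squares fit of (4.6)–(4.7) can be sketched in a few lines; the usage in the test assumes ideal, noise-free samples on a sphere:

```python
import numpy as np

def hard_iron_offset(m):
    """Estimate the hard-iron offset from raw magnetometer samples.

    m is an (N, 3) array. Following (4.6)-(4.7), a sphere is fitted
    by least squares, beta = (X^T X)^{-1} X^T Y, and the offset is
    half of the first three components of beta.
    """
    m = np.asarray(m, dtype=float)
    Y = np.sum(m ** 2, axis=1)                     # squared norms |m_k|^2
    X = np.hstack([m, np.ones((m.shape[0], 1))])   # (N, 4) design matrix
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return 0.5 * beta[:3]
```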
4.5 Motors
The Parrot AR.Drone 2.0 has four motors that drive four rotors, providing thrust and torque
to the body of the drone. Two rotors spin counter-clockwise (rotors 1 and 3 in Figure 2.1),
and two clockwise (rotors 2 and 4), as viewed from above. The motors can be controlled using
a PWM (pulse width modulation) signal, ranging from 0 to 100, that is fed into the Motors block in
Simulink. The actual motors are controlled by an electronic speed controller (ESC) that receives the
PWM signal and drives the motor to the desired speed. The ESC also receives a back-EMF signal from
the motor windings, which it uses to control the angular velocity of the motor.
Experiments have been conducted in order to find relations between the PWM signal and rotor thrust,
torque and angular velocity. The measurement plan and results are given in Appendix A. Figure 4.11
displays three measurement sets of the angular velocity of the rotors as a function of the PWM setting.
Also, it displays a linear function that is fitted through these points. Each motor-rotor combination is
Figure 4.11: Relation between the PWM settings and the actual rotor velocity, ω_r.
Three measurement series are displayed along with a linear fit.
slightly different, hence the linear functions relating the angular velocity to each PWM setting differ.
The linear relations are given in (4.9), which hold for PWM values greater than 0.2. For smaller values,
the rotor velocity equals 0.
ω_r,1 = 3.7503 · pwm₁ + 132.7387,
ω_r,2 = 3.7123 · pwm₂ + 131.5018,    (4.9)
ω_r,3 = 3.6891 · pwm₃ + 130.7137.
The previously mentioned measurements have been conducted using an external power supply instead
of the provided battery in order to eliminate voltage drop due to the depletion of the battery. The
effects of lower voltage, however, may be interesting for modelling or controlling purposes. Hence, we
have measured the effect of supply voltage on the rotor velocity for different PWM values. The results
are displayed in Figure 4.12. As can be seen, the angular velocity remains constant, regardless of
Figure 4.12: The effects of supply voltage on rotor angular velocity, ω_r, for different
PWM values. Also, a linear fit is displayed that describes the maximum angular
velocity as a function of supply voltage.
the supply voltage, as long as the PWM value is sufficiently low. However, when the PWM value surpasses a
certain threshold for a given voltage, the angular velocity no longer increases. Beyond this threshold,
the ESCs are not capable of delivering sufficient power (current) to the motor to achieve the
desired angular velocity, since the voltage is not high enough to overcome the armature resistance. The
maximum achievable rotor velocity, ω_max,i(V_batt), is given by the linear fit displayed in Figure 4.12.
The actual rotor velocity is the minimum of the desired velocity and the maximum achievable velocity:
ω_r,i(V_batt, pwm_i) = min(ω_max,i(V_batt), ω_r,i(pwm_i)).
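This saturation model can be sketched as follows, using the rotor-1 coefficients of (4.9); the linear fit `omega_max_of_v` is a hypothetical placeholder standing in for the fit shown in Figure 4.12:

```python
def rotor_velocity(pwm, v_batt, omega_max_of_v=lambda v: 80.0 * v - 400.0):
    """Rotor angular velocity [rad/s] for one rotor.

    Uses the rotor-1 coefficients from (4.9). The default
    omega_max_of_v is a hypothetical placeholder for the
    voltage-limited maximum of Figure 4.12.
    """
    if pwm <= 0.2:                      # below the threshold the rotor stands still
        return 0.0
    desired = 3.7503 * pwm + 132.7387   # linear PWM-to-velocity relation (4.9)
    return min(desired, omega_max_of_v(v_batt))
```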
For modelling purposes, the thrust and torque are required as functions of the PWM signals. Hence,
these parameter values were also recorded during the measurement sessions. The results are displayed
in Figure 4.13. As can be seen, the generated thrust and torque both seem to follow quadratic trends.
Hence, quadratic functions are fitted through the measurement points in order to find descriptions relating
the thrust and torque to the PWM values. These functions are given in (4.11) and (4.12) for thrust
and torque, respectively. As stated before, the motor-rotor combinations differ from one another, leading to
different results for each rotor.
Additionally, the relation between thrust and torque directly may be useful, in order to calculate the
required PWM signals from the controller signals. Figure 4.14 displays the thrust and torque in a graph.
As can be seen, the measured points seem to follow a straight line, suggesting a linear relation between
thrust and torque. Hence a linear function, crossing the origin, has been fitted to the measurement
points. The relation is given in (4.13).
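Since torque is (approximately) proportional to thrust, the conversion is a single multiplication; the coefficients below are the three recovered from (4.13):

```python
# Thrust-to-torque coefficients c_tau,i from (4.13); only three of the
# four rotors are listed there, so the fourth is omitted here.
C_TAU = (2.9107e-2, 2.7543e-2, 3.6171e-2)

def rotor_torque(thrust, rotor):
    """Torque [Nm] of rotor `rotor` (0-based index) at a given thrust [N]."""
    return C_TAU[rotor] * thrust
```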
Figure 4.13: Thrust (top) and torque (bottom) measurements for three different
measurement series along with quadratic fits through the data points, describing rela-
tions between PWM and thrust, and PWM and torque.
Figure 4.14: Thrust and torque measurements plotted in a single graph, along with
a linear function following the linear trend of the measured data.
(τ₁, τ₂, τ₃)ᵀ = (c_τ,1 T₁, c_τ,2 T₂, c_τ,3 T₃)ᵀ = (2.9107·10⁻² T₁, 2.7543·10⁻² T₂, 3.6171·10⁻² T₃)ᵀ.    (4.13)
As can be seen in Figures 4.13 and 4.14, the spread in the measurement data is greater for torque than
for thrust. This is caused by the fact that the thrust was measured by a load cell with a sensitivity
of 10 newton per volt. Since the range of the thrust during measurements was around 2 newtons, the
maximum voltage range was around 0.2 volts. The torque sensor used, however, has a sensitivity of 4
Nm per volt. The range of generated torque was in the order of 0.1 Nm during measurements, giving a
maximum voltage range of around 0.025 volt, which is a significantly smaller range. This results in less
accurate measurements and introduces more measurement noise. The measurements can be improved
by using sensors that have a greater sensitivity (smaller load range), such that the noise is reduced and
accuracy is improved.
4.6 Summary
In this chapter we have discussed several of the sensors and actuators that are present in the AR.Drone 2.0
and that can be accessed via the Simulink blocks that were provided by Daranlee and Slovak194 (n.d.).
The discussed sensors are the accelerometer, gyroscope, ultrasound distance sensor, and the magnetome-
ter. There are additional sensors available, such as the barometer and the front and down-facing cameras.
These sensors were not investigated in this report due to time limitations. The Inertial Measurement
Simulink block does provide barometric pressure and temperature signals, which can be used to deter-
mine the altitude if the ground is out of range for the ultrasonic distance sensor. The GitHub project
by Daranlee and Slovak194 (n.d.) also provides Simulink blocks that should be able to access the two
cameras. If and how these blocks work, however, was not investigated due to time limitations.
We have seen that the accelerometer and gyroscope not only provide three-axis linear acceleration
and angular velocity, but also a temperature signal, since both sensors are
sensitive to temperature changes. Using measurement data we have been able to compensate for the
temperature by finding quadratic relations between the sensor temperature signal and the actual sensor
values. Measurements regarding the temperature change, however, have only been conducted using the
temperature rise caused by the heating of the internal components after a cold start. Hence, we
recommend more extended measurements with greater temperature range, in order to more accurately
compensate for temperature changes. The ultrasound distance sensor seems sensitive to temperature
changes as well but does not include a built-in temperature sensor, making it impossible to accurately
compensate (temperatures between sensors may vary). This temperature sensitivity, however, seems very
small compared to that of the accelerometer and gyroscope. The ultrasound sensor directly measures
the distance and not a velocity or acceleration, making the error induced by varying temperature less
significant.
The magnetometer measures the geomagnetic field and is calibrated by rotating the drone around all
axes in order to obtain a cloud of measurement data. All points should be on the surface of a sphere with
its centre at the origin. This is, however, not the case. Hence, a sphere is fitted through the measured
data using the least squares method and the coordinates of the centre of the fitted sphere is subtracted
from the magnetometer signals. The magnetometer values were not converted to SI units. Since this
sensor only is used to determine the orientation of the AR.Drone 2.0, only the direction of the magnetic
field with respect to the orientation of the drone is relevant.
Furthermore, we have found that all sensors are subject to noise. We have conducted Lilliefors tests
on these noise signals that indicate that the noise is not normally distributed. However, a visual compar-
ison shows that the deviation from a normal distribution is limited, hence we use a normal distribution
for modelling purposes.
Lastly, measurements have been conducted regarding the actuators of the AR.Drone 2.0. The rela-
tions between the PWM signals going to the motor block and the rotor angular velocity, thrust, and
torque have been discussed. We found that the angular velocity is, to good approximation, a linear
function of the PWM value and that voltage drop due to battery depletion does not affect the angular
velocity of the rotor other than limiting the maximum achievable rotor speed. Also, we found that thrust
and torque are quadratic functions of PWM (and thus of ω) and that thrust and torque are linearly
dependent on each other, a property that can be used in calculating the motor commands in Chapter 5.
For the measurements with dropping supply voltage, we recommend repeating the measurement for all
motor-rotor combinations, since it has only been done for one. Furthermore, we recommend using sensors
with a measurement range closer to the range of thrusts and torques that are generated.
The maximum thrust that is generated by one rotor is in the order of 2.5N and the maximum absolute
torque is in the order of 0.1Nm. Hence, a force transducer with a range of 10N (the mass of the drone
needs to be considered as well) and a torque transducer with a range of 0.5Nm may yield more accurate
measurement results.
In the next chapter we discuss the final steps that are required to implement the controller. We de-
termine the orientation and position of the drone, based on the sensor values and we calculate the PWM
values that are required to achieve the desired thrusts and forces by the controllers designed in Chapter 2.
Chapter 5
In this chapter we discuss the last steps that are required to implement the controller designed in
Chapter 2 using the method described in Chapter 3. One of the steps is to determine the orientation and
position of the AR.Drone 2.0 based on the sensors as discussed in Chapter 4. Also, the PWM values that
control the motor velocities (using the Motors block in Simulink) need to be determined based upon
the outputs of the altitude and attitude controllers, U1 , . . . , U4 . Finally, the controller can be tested on
a model of a quad-rotor as given in Chapter 2. However, the AR.Drone 2.0 does not directly provide
its orientation and position but outputs sensor values. For testing, these sensors are emulated using the
findings of Chapter 4.
Using these two parameters, the rotation matrix R(α, v⃗) can be determined by (5.6), which is used to
determine the Euler angles:
R(α, v⃗) = I cos α + W sin α + (v⃗ v⃗ᵀ)(1 − cos α),    (5.6)
where I is the 3×3 identity matrix and W is the skew-symmetric matrix
W = [0  −v_z  v_y;  v_z  0  −v_x;  −v_y  v_x  0].
The pitch angle, θ, can be found by
θ₁ = −arcsin(R₃₁),    (5.7)
θ₂ = π + arcsin(R₃₁).    (5.8)
However, since we only consider small rotations, we can neglect the expression (5.8) and only consider
(5.7) for the calculation of the pitch angle. The roll angle, φ, and yaw angle, ψ, can be determined by
(5.9) and (5.10), respectively:
φ = atan2(R₃₂ / cos θ, R₃₃ / cos θ),    (5.9)
ψ = atan2(R₂₁ / cos θ, R₁₁ / cos θ).    (5.10)
Now that the Euler angles are determined, the angular velocities, φ̇, θ̇, and ψ̇, can be calculated. The
body-fixed angular velocities in terms of Euler angles can be expressed as
ω = (φ̇, 0, 0)ᵀ + R_x(φ) (0, θ̇, 0)ᵀ + R_x(φ) R_y(θ) (0, 0, ψ̇)ᵀ
  = [1  0  −sin θ;  0  cos φ  sin φ cos θ;  0  −sin φ  cos φ cos θ] (φ̇, θ̇, ψ̇)ᵀ,    (5.11)
where R_x(φ) is the rotation matrix of a rotation around the x-axis. Hence, the Euler angular velocities
can be calculated by (5.12):
(φ̇, θ̇, ψ̇)ᵀ = [1  0  −sin θ;  0  cos φ  sin φ cos θ;  0  −sin φ  cos φ cos θ]⁻¹ ω.    (5.12)
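Inverting the matrix of (5.11) analytically gives the familiar Euler-rate kinematics; a sketch:

```python
import math

def euler_rates(p, q, r, phi, theta):
    """Convert body-fixed angular velocities (p, q, r) to the Euler
    angle rates (phi_dot, theta_dot, psi_dot) by inverting the matrix
    of (5.11), as expressed in (5.12)."""
    t = math.tan(theta)
    phi_dot = p + q * math.sin(phi) * t + r * math.cos(phi) * t
    theta_dot = q * math.cos(phi) - r * math.sin(phi)
    psi_dot = (q * math.sin(phi) + r * math.cos(phi)) / math.cos(theta)
    return phi_dot, theta_dot, psi_dot
```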
The AR.Drone 2.0 is not equipped with sensors that can determine its absolute position with respect to
some reference, except for its altitude. Hence, we assume that x, y = 0 and we use the ultrasonic distance
sensor to determine the altitude z of the drone according to (4.5). The altitude controller, however,
requires the vertical velocity, ż, in addition to the altitude. Simply taking the (discrete) time derivative of
the altitude signal is not sufficient: since the ultrasound sensor is sampled at a lower rate than the actual
sample rate of the entire Simulink model, the values of the ultrasound sensor are held for a specific time
to compensate for the difference in sample frequency. This is solved by using rate transition blocks in
Simulink. A rate transition block is placed on the ultrasound signal to bring down the sample frequency
to 25 Hz, then a discrete derivative block is used, after which another rate transition block is placed
that brings the sample frequency back up to 400 Hz. A schematic view of this construction is given in
Figure 5.1.
Figure 5.1: Schematic view of the calculation of the time derivative of the altitude,
ż, in the Simulink model, compensating for the different sample rates. Ts is the local
sample time: 1/25 s.
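In an offline setting, the same construction can be sketched in plain code: decimate by 16 from 400 Hz to 25 Hz, difference with Ts = 1/25 s, and hold each derivative value.

```python
def altitude_rate(z_400hz):
    """Approximate the altitude derivative as in Figure 5.1.

    The 400 Hz altitude signal is downsampled to 25 Hz (every 16th
    sample), differentiated with step Ts = 1/25 s, and held back up
    at 400 Hz with a zero-order hold.
    """
    TS = 1.0 / 25.0
    z25 = z_400hz[::16]                                   # 400 Hz -> 25 Hz
    dz25 = [0.0] + [(b - a) / TS for a, b in zip(z25, z25[1:])]
    out = []
    for d in dz25:                                        # hold at 400 Hz
        out.extend([d] * 16)
    return out[:len(z_400hz)]
```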
In Chapter 4, we found how thrust and torque depend on the PWM value and how thrust and torque
are related to each other. Using the property that torque is a linear function of thrust, we can express the
required torque as a required thrust to simplify the calculation of the PWM values. The control signals
can be expressed as:
U₁ = T₁ + T₂ + T₃ + T₄,
U₂ = l (−T₁ − T₂ + T₃ + T₄),
U₃ = l (−T₁ + T₂ + T₃ − T₄),    (5.13)
U₄ = c_τ,1 T₁ − c_τ,2 T₂ + c_τ,3 T₃ − c_τ,4 T₄.
From the required thrust, we can calculate the corresponding PWM values using (4.11) in conjunction
with the quadratic formula:
pwm₁ = [−1.0395·10⁻² + √(1.0395²·10⁻⁴ − 4 (0.13894 − T₁) · 1.5618·10⁻⁴)] / (2 · 1.5618·10⁻⁴),
pwm₂ = [−8.7242·10⁻³ + √(8.7242²·10⁻⁶ − 4 (0.14425 − T₂) · 1.8150·10⁻⁴)] / (2 · 1.8150·10⁻⁴),
pwm₃ = [−7.3295·10⁻³ + √(7.3295²·10⁻⁶ − 4 (0.11698 − T₃) · 1.3478·10⁻⁴)] / (2 · 1.3478·10⁻⁴),    (5.15)
pwm₄ = [−5.7609·10⁻³ + √(5.7609²·10⁻⁶ − 4 (0.13362 − T₄) · 1.4306·10⁻⁴)] / (2 · 1.4306·10⁻⁴).
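As a sketch for rotor 1, using the coefficients that appear in (5.15) for the thrust model T = a·pwm² + b·pwm + c:

```python
import math

# Rotor-1 quadratic thrust model T = a*pwm^2 + b*pwm + c, with the
# coefficients read off from (5.15).
A1, B1, C1 = 1.5618e-4, 1.0395e-2, 0.13894

def thrust_from_pwm(pwm):
    return A1 * pwm ** 2 + B1 * pwm + C1

def pwm_from_thrust(T):
    """Invert the quadratic with the positive root, as in (5.15)."""
    disc = B1 * B1 - 4.0 * A1 * (C1 - T)
    return (-B1 + math.sqrt(disc)) / (2.0 * A1)
```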
Using the position and orientation accelerations, the absolute position, orientation, and angular ve-
locity can be calculated by integration. The sensors only represent the linear body-fixed accelerations
(accelerometer), absolute orientation (magnetometer), absolute altitude (ultrasound sensor), and the an-
gular velocities of the quad-rotor (gyroscope).
The sensors are emulated with additive, normally distributed noise, where N(σ) denotes normally
distributed noise with a standard deviation of σ.
All sensors are sampled at 400 Hz, except for the ultrasound sensor, which is sampled at 25 Hz. Note
that the temperature is not taken into consideration in these models; we assume that the signals are
calibrated and corrected before they are sent to the controller.
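One way to sketch the emulation of a single sensor channel, with the ultrasound channel updated only every 16th base sample:

```python
import random

def emulate_sensor(true_400hz, sigma, decimation=1, seed=0):
    """Emulate one sensor channel at the 400 Hz base rate.

    Adds N(sigma) noise to the true signal; for the ultrasound sensor
    use decimation=16 (25 Hz), holding the value between samples.
    """
    rng = random.Random(seed)
    out, held = [], 0.0
    for i, x in enumerate(true_400hz):
        if i % decimation == 0:
            held = x + rng.gauss(0.0, sigma)
        out.append(held)
    return out
```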
Once a controller has been tested on this model and the behaviour is as desired, it can be uploaded
to the AR.Drone 2.0. However, whereas the controller is stabilising in continuous time, regardless of the
values of k₁, …, k₈, quantisation effects, time delay, and discretisation might cause instabilities in the con-
trolled system. Also, vibrations are introduced by the rotors that have not been taken into consideration
in the model. In order to improve the behaviour of the controller in the actual drone, one might consider
filtering the sensor signals to remove noise and high frequency vibrations induced by the rotors.
5.4 Summary
In this chapter we have discussed the last steps that are required to implement a controller on the
AR.Drone 2.0. We have determined the orientation in Euler angles based on the pitch, roll,
and yaw angles that result from the accelerometer and magnetometer signals. The position (altitude) of
the drone is based on the ultrasound distance sensor, and we have explained how the discrete derivative of
the altitude can be calculated using rate transition blocks in Simulink. Furthermore, we have shown
how the PWM values that control the motors can be calculated from the controller signals by converting
the yaw torques to forces using relations that were found in Chapter 4. Also, we discussed how the
designed controller can be tested on a model and how the sensors are emulated. Lastly, we discussed
that once a controller is tested on a model, the behaviour of the actual drone might differ from the model
and that instabilities can be introduced by quantisation effects, time delay, and discretisation.
Chapter 6
Summary
In this report we have given a non-linear model of the dynamics of a quad-rotor: a body in space with four
actuators, each providing thrust and torque. Also, we designed backstepping controllers for stabilisation
of altitude and orientation of the quad-rotor. These controllers output the combined thrust (altitude
controller) and torques around three orthogonal axes (attitude controller). Furthermore, we have given
bounds on the pitch and roll angles, θ and φ, since the altitude controller divides by cos θ cos φ. Hence,
if either θ or φ equals ±π/2, the controller signal is undefined.
Next, we have discussed how the AR.Drone 2.0 is connected to and controlled by Simulink. The
blocks are able to read all sensor values and can control the motors and LEDs. We explained that the
created Simulink models are compiled by an ARM compiler tool chain, and that the resulting binaries
are uploaded via FTP such that they leave no trace after a reboot of the AR.Drone 2.0.
Furthermore, we have discussed several of the present sensors: the accelerometer, gyroscope, magnetometer,
and ultrasound distance sensor. The accelerometer and gyroscope provide a temperature signal in ad-
dition to their motion signals, implying a sensitivity to varying temperature. Using measurement data
from the stationary AR.Drone 2.0 with a small temperature rise, we have been able to compensate for
temperature changes. Also, we have been able to calibrate the static offset of the magnetometer by fitting a
sphere through measurement data and subtracting the coordinates of its centre, translating the centre of
the sphere of measurement data to the origin of the coordinate system. For the accelerometer, gyroscope,
and distance sensor, we have converted the raw sensor values to SI units. For the magnetometer this is
not required since only the direction of the vector is relevant, not the magnitude. The sensors in the
AR.Drone 2.0 are subject to noise. Whereas this noise technically is not normally distributed according
to Lilliefors tests, the deviation is relatively small.
We have conducted measurements on the actuators of the AR.Drone 2.0: the motor-rotor combina-
tions. We have seen that the rotor angular velocity is linearly dependent on the PWM motor values and
that the thrust and torque are well approximated by quadratic functions of the PWM value. Also, we
related the torque linearly to thrust, which we used to calculate the desired PWM values more easily,
based on the controller signals. Additionally, we have seen that dropping battery voltage only affects the
maximum rotor speed and does not affect the linear relation between the PWM value and angular velocity
in any other way.
We have given models of the sensors, based on measurement data that we have gathered and on the pre-
viously found relations. Designed controllers can be tested on the quad-rotor model, given in Chapter 2,
with emulated sensors for better representation of reality. The actual AR.Drone 2.0, however, is subject
to vibrations due to the imbalance of the rotors, which are visible in the sensor values. This is, however,
not modelled but could impact the performance of designed controllers.
Finally, we determined the position (altitude) and orientation (Euler angles) of the AR.Drone 2.0, and
their time derivatives, based on the (calibrated) sensor signals of the accelerometer, magnetometer, gyro-
scope, and distance sensor. We have calculated the PWM values that the Motors block requires, based
on the thrust and torque signals from the motion controllers, by converting the required torques to corre-
sponding thrusts. Using the quadratic formula and the relations between PWM values and thrusts that
we found, the PWM values can be calculated from the desired thrusts.
Hence, we are able to design a backstepping controller in Simulink and test it on a model that em-
ulates the AR.Drone 2.0 sensors. After testing, the Simulink controller can be compiled using an ARM
compiler tool chain and is uploaded to the AR.Drone 2.0, such that it can be run in external mode.
Recommendations
We have given a simple supervisory controller that is capable of automatic take-off when it is given
the instruction to hover, and automatic landing when either the battery is low or the instruction to land
is given. For autonomous flight (and possibly autonomous calibration of sensors) we recommend a more
extensive supervisor that may also include functions to improve safety.
The sensors (mainly the gyroscope and accelerometer) are sensitive to temperature changes. However,
the compensation measurements have only been conducted over a very small temperature difference.
We recommend conducting experiments with larger changes in temperature to find more accurate rela-
tions between temperature and sensor values, such that compensation is more accurate as well. More
extensive tests might also give insight into how sensitive the other sensors (magnetometer and distance
sensor) are to temperature changes. Also, in this report we assume that the output of the ultrasonic distance
sensor is independent of the AR.Drone 2.0 orientation if the pitch and roll angles are sufficiently small.
In reality, however, this might not be the case. Hence, we recommend conducting measurements that
investigate how the altitude signal depends on the pitch and roll angles, such that the results can be
used to determine the actual altitude using the orientation and the signal of the distance sensor.
The thrust and torque for each rotor have been measured using a force transducer with a range of
50 N and a torque transducer with a range of 20 Nm. The generated thrust per rotor, however,
is in the order of 2.5 N and the torque in the order of 0.1 Nm. Hence, we recommend repeating these
measurements using thrust and torque transducers with ranges closer to these values, since that may lead
to more accurate results. Use, for example, a force transducer with a range of 10 N (the mass of the drone
needs to be considered since this also excites the force transducer) and a torque transducer with a range
of 0.5 Nm. Also, we recommend repeating the measurements that determine the effects of a drop-
ping supply voltage, since this experiment was only conducted once, on one rotor. More measurements
may lead to more accurate results and, additionally, the maximum angular velocity may vary between
motors. If these effects are determined for all motors, the results can be taken into consideration in the
model for testing purposes and possibly in the controller software. Furthermore, the mass moments of
inertia of the AR.Drone 2.0 should be determined experimentally, as well as those of the rotors around
their axes of rotation.
The model on which the controller can be tested assumes that the rotor instantaneously reaches the
desired angular velocity. In reality, however, rotor inertia prevents this, and we recommend determining
this inertia effect, such that it can be included in the testing model.
References
Apvrille, L., Tanzi, T., & Dugelay, J.-L. (2014). Autonomous drones for assisting rescue services within
the context of natural disasters. In General assembly and scientific symposium (ursi gass), 2014
xxxith ursi (pp. 14). doi:10.1109/URSIGASS.2014.6929384
Choi, Y.-C. & Ahn, H.-S. (2015, June). Nonlinear control of quadrotor for point tracking: actual im-
plementation and experimental tests. IEEE-ASME TRANSACTIONS ON MECHATRONICS, 20,
11791192. doi:10.1109/TMECH.2014.2329945
Daranlee & Slovak194. (n.d.). Simulink ar drone target. Retrieved September 1, 2015, from https://
github.com/darenlee/SimulinkARDroneTarget
Jardin, M. R. & Mueller, E. R. (2009, May). Optimized measurements of UAV mass moment of inertia
with a bifilar pendulum. Journal of Aircraft, 46, 763775. doi:10.1006/2000.1345
Li, Q. (2014, August). Grey-box system identification of a quadrotor unmanned aerial vehicle (Masters
thesis, Delft University of Technology).
Oczipka, M., Bemmann, J., Piezonka, H., Munkabayar, J., Ahrens, B., Achtelik, M., & Lehmann, F.
(2009). Small drones for geo-archaeology in the steppe: locating and documenting the archaeological
heritage of the orkhon valley in mongolia. Proc. SPIE. doi:10.1117/12.830404
The Paparazzi Project. (n.d.). Retrieved October 5, 2015, from http://wiki.paparazziuav.org/wiki/
Main_Page
Ozyagcilar, T. (2013). Calibrating an eCompass in the presence of hard and soft-iron interference.
Pleban, J.-S., Band, R., & Creutzburg, R. (2014). Hacking and securing the AR.Drone 2.0 quadcopter
investigations for improving the security of a toy. Proceedings of SPIE, 9030 (90300L). doi:10.
1117/12.2044868
Scholte, R. (2008). Fourier based high-resolution near-field sound imaging (Doctoral dissertation, Eind-
hoven University of Technology).
Stančin, S. & Tomažič, S. (2011). Angle estimation of simultaneous orthogonal rotations from 3D gyro-
scope measurements. Sensors. doi:10.3390/s110908536
Tripicchio, P., Satler, M., Dabisias, G., Ruffaldi, E., & Avizzano, C. (2015). Towards smart farming and
sustainable agriculture with drones. In Intelligent Environments (IE), 2015 International Conference
(pp. 140-143). doi:10.1109/IE.2015.29
Appendix A
Measurement plans
- Windows laptop/PC, equipped with a WiFi adapter, with LabVIEW SignalExpress installed and
  Simulink configured correctly to connect to the AR.Drone 2.0 and control the four rotors individ-
  ually,
- Variable power supply with built-in voltage and current displays (separate voltage and current
  sensors can be added in case the power supply is not equipped with these),
- Sensor interface,
Measurement setup
Figure A.1 shows the test setup. The AR.Drone is mounted upside-down to a bracket in order to reduce
the ground effect, which could cause erroneous results. The bracket is mounted to the force transducer,
which is mounted with an adapter plate to the torque transducer. This whole assembly is then mounted
to the table. The sensor interface is connected to the force and torque transducers in order to read the
sensor voltages. Also, it provides extra power to the amplifier of the torque transducer. The variable
power supply is connected to the battery connector of the AR.Drone 2.0, in place of the battery, such
that it can operate without introducing voltage drop due to battery depletion. Hence, this is a more
stable source of power, with the additional benefit that the voltage can be controlled and the current
draw can be monitored. The angular velocity of the rotors needs to be measured manually using the
optical RPM counter.
Measurement preparation
After setting up, the following steps need to be executed in order to ensure that measurements can be
conducted.
Figure A.1: Picture of the test setup. On the left is the sensor interface with a
variable power supply on top; on the right is the AR.Drone, upside-down, connected to the
force transducer, mounted to the torque transducer, which is mounted to the table.
1. Make sure all sensors are connected, the drone is securely mounted, and that the polarity of the
power supply leads to the battery connector of the drone is correct.
2. Connect the USB cable of the sensor interface to the PC and start LabVIEW SignalExpress. Make
sure that the interface is recognised by the software and power up the interface. The sensors may
need some time to reach a steady state.
3. Set up LabVIEW to read the correct sensor values and average the signal over a period of 5 seconds
in order to determine the mean thrust and torque. The signals may vary due to sensor noise, and
rotor imbalance additionally introduces vibrations in the measured thrust and torque. Also,
set up LabVIEW such that it runs once, not continuously.
4. Dial the voltage knob of the power supply all the way down and the current knob (if applicable)
fully up.
5. Switch on the power supply and increase the voltage to 12.6 volts (the voltage of a fully charged 3S
LiPo battery); the drone should now power on. If the drone does not power on, leave the voltage
of the power supply at 12.6 volts and switch it off and on again.
6. Connect the PC to the WiFi access point of the AR.Drone 2.0 and start a Simulink model that
is equipped with the Init_Actuator block and the Motor block and configured to run in external
mode with the AR.Drone 2.0. Ensure that all motor commands equal 0.
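The signal averaging described in step 3 can be sketched as follows. This is an illustrative Python sketch only, since the actual configuration happens in LabVIEW SignalExpress; the sample rate is an assumed value.

```python
from statistics import mean, stdev

SAMPLE_RATE_HZ = 100  # assumed acquisition rate of the sensor interface
WINDOW_S = 5          # averaging window from step 3

def window_statistics(samples, sample_rate_hz=SAMPLE_RATE_HZ, window_s=WINDOW_S):
    """Mean and standard deviation over the last window_s seconds of samples.

    The mean estimates the steady-state thrust or torque; the standard
    deviation indicates how much noise and rotor-imbalance vibration
    remain in the signal.
    """
    n = int(sample_rate_hz * window_s)
    window = samples[-n:]
    return mean(window), stdev(window)
```

The standard deviation is not strictly needed for the measurement plan, but it is a useful sanity check that the rotors are not vibrating excessively.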
Measurement procedure
The following steps need to be repeated for each rotor.
1. Read and note the current draw of the drone from the power supply while all PWM values are set
to 0.
2. Read the sensor values of the thrust and torque transducers in SignalExpress while all PWM values
are set to 0.
3. Increase the PWM signal of the desired motor in increments of 5 in Simulink (while the
external mode is running).
4. Manually measure the angular velocity of the rotor using the optical RPM counter.
The motor thrust and torque can be calculated by subtracting the measured values at rest from the
measured values at the given PWM value. Then multiply by the sensitivity to obtain the thrust and torque
in newtons and newton metres, respectively (for the above listed sensors these are 10 N/V for the force
transducer and 4 N m/V for the torque transducer). The voltage and current can be read directly from
the power supply in volts and amperes. The RPM counter counts two revolutions for every actual
revolution of the rotor, since the rotors have two blades each. Hence, the angular velocity of the rotors
in radians per second can be deduced from the RPM readings as follows: ω_r,i = π · RPM_r,i / 60.
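The post-processing described above can be sketched in Python. This is an illustrative sketch, not part of the measurement software; the sensitivities are the ones listed in the text, and the function and variable names are our own.

```python
import math

FORCE_SENSITIVITY_N_PER_V = 10.0    # force transducer: 10 N/V
TORQUE_SENSITIVITY_NM_PER_V = 4.0   # torque transducer: 4 N m/V

def thrust_torque(v_force, v_torque, v_force_rest, v_torque_rest):
    """Subtract the readings at rest and apply the sensor sensitivities."""
    thrust = (v_force - v_force_rest) * FORCE_SENSITIVITY_N_PER_V
    torque = (v_torque - v_torque_rest) * TORQUE_SENSITIVITY_NM_PER_V
    return thrust, torque

def rotor_angular_velocity(rpm_reading):
    """Convert an optical RPM counter reading to rad/s.

    The counter registers two counts per revolution (two blades per rotor),
    so the true speed is rpm_reading/2 rev/min, i.e. pi * rpm_reading / 60 rad/s.
    """
    return math.pi * rpm_reading / 60.0
```

For example, a counter reading of 6000 corresponds to 3000 actual rev/min, i.e. about 314 rad/s.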
- Windows laptop/PC, equipped with a WiFi adapter, with Simulink installed and correctly con-
  figured to connect to the AR.Drone 2.0 and control the four rotors individually,
- Variable power supply with built-in voltage and current displays (separate voltage and current
  sensors can be added in case the power supply is not equipped with these),
Measurement setup
The AR.Drone 2.0 is mounted upside-down to a bracket some distance away from the surface in order to
reduce the ground effect, which could cause erroneous results. The variable power supply is connected
to the battery connector of the AR.Drone 2.0, in place of the battery, such that it can operate without
introducing voltage drop due to battery depletion. Hence, this is a more stable and controllable source
of power. The angular velocity of the rotors needs to be measured manually using the optical
RPM counter.
Measurement preparation
After setting up, the following steps need to be executed in order to ensure that measurements can be
conducted.
1. Make sure the drone is securely mounted to the bracket and that the polarity of the power supply
leads to the battery connector of the drone is correct.
2. Dial the voltage knob of the power supply all the way down and the current knob (if applicable) fully up.
3. Switch on the power supply and increase the voltage to 12.6 volts (the voltage of a fully charged 3S
LiPo battery); the drone should power on. If the drone does not power on, leave the voltage of the
power supply at 12.6 volts and switch it off and on again.
4. Connect the PC to the WiFi access point created by the AR.Drone 2.0 and start a Simulink model
that is equipped with the Init_Actuator block and the Motor block and configured to run in external
mode with the AR.Drone 2.0. Ensure that all motor commands equal 0.
5. Build the Simulink model and start the real-time connection.
Measurement procedure
The following steps need to be repeated for each of the four motors to gain insight into the differences
between the motors.
1. Read and note the current draw of the drone from the power supply while all PWM values are set
to 0.
2. Set the PWM signal of the motor to the desired value.
3. Note the current draw of the drone.
4. Measure the angular velocity of the rotor manually using the optical RPM counter.
8. Repeat steps 3-7 until all desired PWM values have been measured.
Appendix B
Measurement data
Table B.1: Measurements with dropping voltage for varying PWM signals.
Table B.2: Measurement data of motor 1 and 2. Measurements conducted with a
supply voltage of 12.6 V.
Table B.3: Measurement data of motor 3 and 4. Measurements conducted with a
supply voltage of 12.6 V.
Appendix C
This manual describes the steps required to operate the AR.Drone 2.0 wirelessly from a PC with
Simulink. This package only works on Windows computers and with Matlab Simulink versions
2014a and 2014b; newer versions do not work. This software package makes use of the GitHub project
resources from Daranlee and Slovak194 (n.d.). The compiler tool chain is provided in the zip
file, but it can also be downloaded directly from:
https://sourcery.mentor.com/sgpp/lite/arm/portal/subscription?@template=lite
A version of the Cyberduck command line interface is also included in the zip file. It is, however,
not necessary to install it unless there are issues uploading the generated binary to the AR.Drone 2.0
(explained later in the manual). If desired, the latest version of the Cyberduck command line interface
can be downloaded directly from:
https://dist.duck.sh/
3. Execute the Matlab script root/install_script.m. This installs the Simulink blocks that can
be used to control the AR.Drone. It also asks for the location of the compiler in an explorer window;
browse to this location and select the folder where arm-none-linux-gnueabi-gcc.exe and similar
files are located (typically: C:/Program Files (x86)/CodeSourcery/Sourcery G++ Lite/bin).
- ControlLaw.slx, this model contains the actual control software that is used to control the
AR.Drone 2.0. This file can be edited to change the behaviour of the non-linear controller.
- ARDrone_External.slx, this model implements the controller from ControlLaw.slx on the actual
drone. It runs in external mode and requires building and compilation. This file can be edited to
change the sensor pre-processing (i.e. filtering and signal selection).
Before starting either one of these models, the .m file ControllerParameters.m needs to be executed.
It contains the controller parameters (k1 , . . . , k8 ) and loads the parameter values for the model and the
controller from Model_Parameters.mat (created by ModelParameters.m). Furthermore, it loads the set-
points for take-off and landing (SetpRef_LiftOff.mat and SetpRef_Landing.mat) that can be created
with ARDrone_Create_Setpoint.slx. Also, ControllerParameters.m loads the calibration parameters
(Calibration_Parameters.mat, created by ARDrone_CalibrationScript) to calibrate the sensors for
temperature and static deviation. Lastly, ControllerParameters.m loads the configuration parameters
for the IMU data bus and the Simulink configuration file.
If a controller has been designed in Simulink (ControlLaw.slx), it can be tested by running the model
ARDrone_Simulation.slx. When the results are satisfactory and the designed controller is ready for
testing on the actual drone, the Simulink model ARDrone_External.slx needs to be opened and built
(ctrl+b). During the build process, the compiler creates a binary that can be run on the AR.Drone 2.0
and automatically uploads it via FTP. A Windows console window opens and gives some information;
do not close this window until the Simulink model is stopped. After building and uploading, click on
connect in Simulink and then on run.
Note that the standard Simulink To File and To Workspace blocks do not seem to work in external
mode. The Simulink scopes, however, can be used to export data to the workspace. Additionally, the
scopes can be opened during the execution of the external mode; real-time data is then
displayed.
If the upload of the generated binary (.elf) file fails, try installing Cyberduck (which can be used to
transfer files via FTP) from root/software/Cyberduck.exe and change the lines
echo open %AR_DRONE_IP_ADDRESS% 5551 > ftpcmd.dat
echo user >> ftpcmd.dat
echo put "%EXE_PATH%%EXE_NAME%" >> ftpcmd.dat
echo disconnect >> ftpcmd.dat
echo quit >> ftpcmd.dat
echo Connecting to FTP and uploading binary
ftp -s:ftpcmd.dat
echo Done Uploading
del ftpcmd.dat
where "C:\Program Files (x86)\Cyberduck CLI\duck.exe" is the install directory of the com-
mand line interface of Cyberduck.