
OBSTACLE DETECTION AND AVOIDANCE ROBOT
(USING CAMERA AND INFRARED SENSOR)
Submitted By:
Siddhant Anand (13105003)
Parag Aggarwal (13105007)
Rajeev Mahajan (13105064)
Kamal Gupta (13105040)
Pulkit Sharma (13105092)
Shubham Garg (13105093)

Submitted To:
Dr. Sukhwinder Singh
OBJECTIVE
To prepare a mobile, autonomous robot that can navigate independently using depth readings from an infrared sensor.

Hence, to prepare an obstacle-avoiding robot using an infrared sensor and LabVIEW.
INTRODUCTION
o An infrared (IR) sensor is used to detect obstacles in front of the robot or to differentiate between colours, depending on the configuration of the sensor. An IR sensor consists of an emitter, a detector, and their associated circuitry.

o The emitter is simply an IR LED, and the detector is simply an infrared photodiode that is sensitive to IR light. When light falls on the photodiode, its resistance and output voltage change in proportion to the magnitude of the IR light received.
COMPONENTS USED
Arduino Uno (Microcontroller)
o Microcontroller board based on the ATmega328
o It has 14 digital I/O pins, 6 analog inputs and a USB connection
o 6 of the digital I/O pins provide PWM output
o Includes 5 power pins, namely:
  VIN
  5V
  3V3
  GND
  IOREF
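As a minimal sketch of how the Uno reads the IR sensor, the following assumes an analog-output sensor wired to A0; the pin choice and threshold are illustrative assumptions, not values from the project.

```cpp
// Minimal sketch, assuming an analog-output IR sensor wired to A0.
// The threshold is illustrative and would be tuned against the
// sensor's 15-50 cm working range.
const int IR_PIN = A0;
const int OBSTACLE_THRESHOLD = 512;   // raw ADC value, 0-1023

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(IR_PIN);   // 0-1023 over 0-5 V
  Serial.print("IR reading: ");
  Serial.print(reading);
  Serial.println(reading > OBSTACLE_THRESHOLD ? "  (obstacle)" : "");
  delay(100);
}
```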
BLUETOOTH MODULE
Permits any microcontroller with a standard RS232 serial port to communicate with any Bluetooth device (e.g. a PC)

o Bluetooth module: ZS040 / HC-06
o Operating voltage: 5 V supply (logic level reduced to 3.3 V)
o Default baud rate: 9600 bps
o Default PIN: 1234
MOTOR DRIVERS
o Required for ensuring proper functioning of
motors

L239D motor driver

Pins 1, 9 and 16 should not


be held at voltage greater
than 5V

IC7805 (Voltage Regulator)


is used for the above
MOTOR (100 RPM)
o There are four input pins on the L293D: 2, 7, 10 and 15
o Two motors can be controlled simultaneously, each in either direction.

Pin 2   Pin 7   Logic
0       0       No rotation
0       1       Anti-clockwise rotation
1       0       Clockwise rotation
1       1       No rotation
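The following sketch drives one motor channel exactly per this truth table; the Arduino pin choices (D4, D5, D6) are assumptions for illustration.

```cpp
// Assumption: L293D input 1 (pin 2) -> Arduino D4, input 2 (pin 7) -> D5,
// and enable 1 (pin 1) -> D6 so the channel can be switched on.
const int IN1 = 4, IN2 = 5, EN1 = 6;

void setup() {
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
  pinMode(EN1, OUTPUT);
  digitalWrite(EN1, HIGH);   // enable the motor channel
}

void clockwise()     { digitalWrite(IN1, HIGH); digitalWrite(IN2, LOW);  }
void antiClockwise() { digitalWrite(IN1, LOW);  digitalWrite(IN2, HIGH); }
void stopMotor()     { digitalWrite(IN1, LOW);  digitalWrite(IN2, LOW);  }

void loop() {
  clockwise();     delay(2000);
  antiClockwise(); delay(2000);
  stopMotor();     delay(1000);
}
```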
INFRARED SENSOR
The specifications for the infrared sensor are shown below:

Operating range: 15 cm to 50 cm
Basic Outline Of The Project
LABVIEW Robotics Module
The new LabVIEW Robotics Module includes all of the software tools needed to design a sophisticated autonomous or semi-autonomous system. Robots are very complex mechatronic systems, so to simplify them we can apply a simple paradigm: sense, think, act. Every autonomous or semi-autonomous robot, in some form or another, must sense its environment, make a decision, and act on the environment. The new LabVIEW Robotics Module provides APIs and example programs for each step of the sense-think-act cycle.
SENSE: Retrieve Data From Sensors

LabVIEW and the NI RIO platform make it easy for developers to connect to any sensor signal. In addition to basic digital and analog I/O, LabVIEW Robotics includes functions and examples for interfacing with signals using low-level protocols such as PWM, I2C and SPI, and high-level protocols such as NMEA and the Joint Architecture for Unmanned Systems (JAUS).

Furthermore, LabVIEW Robotics includes a new set of VIs to configure, control, and retrieve data from the most commonly used sensors.
The current list of sensor drivers is shown below; however, National Instruments is continuously creating new drivers and expanding this list. The FPGA-based sensor drivers and examples can be found in the NI Example Finder under Robotics >> Sensor Drivers.

o LIDAR: Hokuyo URG Series, SICK LMS 2XX Series, Velodyne HDL-64E
o Infrared: Sharp IR GP2D12, Sharp IR GP2Y0D02YK
o Sonar: Devantech SRF02 & SRF05, MaxSonar EZ1
o GPS: Garmin GPS Series, NavCom SF-2050, u-blox 5 Series, Applanix POS LV
o Compass: Devantech CMPS03 (PWM or I2C), PNI Field Force TCM
o Inertial Measurement Unit (IMU): MicroStrain 3DM-GX series, Crossbow NAV440, Ocean Server OS4000
o Camera: Axis M1011 IP camera, analog cameras (MoviMED Analog Frame Grabber for CompactRIO), USB cameras (Windows only), NI Smart Cameras
Process Flow Diagram
In the collision avoidance routine, every time the robot detects an obstacle it reverses, turns to the right, and then tries to return to its original path of travel. It keeps performing this collision avoidance step until it sees a clearance. All movements in this case are time dependent, as sketched below.
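A minimal Arduino sketch of this time-based routine follows. The pin assignments, delays, and IR threshold are illustrative assumptions carried over from the earlier sketches, not the project's actual values.

```cpp
// Time-based reverse-then-turn-right avoidance, per the flow above.
const int IR_PIN = A0;
const int IN1 = 4, IN2 = 5, IN3 = 7, IN4 = 8;  // L293D inputs, left/right motors
const int MOTOR_PINS[] = {IN1, IN2, IN3, IN4};

void setMotors(int l1, int l2, int r1, int r2) {
  digitalWrite(IN1, l1); digitalWrite(IN2, l2);
  digitalWrite(IN3, r1); digitalWrite(IN4, r2);
}

void forward()   { setMotors(HIGH, LOW, HIGH, LOW); }
void reverse()   { setMotors(LOW, HIGH, LOW, HIGH); }
void turnRight() { setMotors(HIGH, LOW, LOW, HIGH); }  // spin in place

bool obstacleDetected() {
  return analogRead(IR_PIN) > 512;   // illustrative threshold for 15-50 cm
}

void setup() {
  for (int p : MOTOR_PINS) pinMode(p, OUTPUT);
}

void loop() {
  while (obstacleDetected()) {   // repeat until a clearance is seen
    reverse();   delay(500);     // back away from the obstacle
    turnRight(); delay(400);     // rotate toward a clear heading
  }
  forward();                     // resume travel on the original path
}
```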
THINK: Obstacle Avoidance Algorithms
Once you are able to acquire data from sensors and control your mobile robot's movement, the next step might be to apply an obstacle avoidance or path planning algorithm. LabVIEW Robotics includes new VIs and examples for avoiding obstacles based on sensor feedback, and for calculating the shortest path between a start location and a goal location.

One such example is the Vector Field Histogram (VFH) obstacle avoidance algorithm, illustrated below.
Motor Control
To drive a wheeled robot at a certain velocity and direction, you must calculate the wheel velocities, which depend on the type of robot.

LabVIEW Robotics includes a set of steering functions to simplify this process. These steering functions have built-in support for Ackermann, differential and mecanum steering, to be used with fixed, steering, caster, mecanum or omnidirectional wheels. A differential-drive example is sketched below.
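The sketch below uses the standard differential-drive kinematics, not the exact interface of the LabVIEW steering VIs; the wheelbase and wheel radius are illustrative values.

```cpp
#include <cstdio>

// Standard differential-drive kinematics: convert a body velocity
// command (linear v, angular omega) into per-wheel angular speeds.
struct WheelSpeeds { float left, right; };  // rad/s

WheelSpeeds differentialDrive(float v, float omega,
                              float wheelBase, float wheelRadius) {
  WheelSpeeds w;
  w.left  = (v - omega * wheelBase / 2.0f) / wheelRadius;  // inner wheel slows
  w.right = (v + omega * wheelBase / 2.0f) / wheelRadius;  // outer wheel speeds up
  return w;
}

int main() {
  // 0.2 m/s forward while turning left at 0.5 rad/s,
  // with a 0.15 m track width and 0.03 m wheel radius (assumed values).
  WheelSpeeds w = differentialDrive(0.2f, 0.5f, 0.15f, 0.03f);
  printf("left %.2f rad/s, right %.2f rad/s\n", w.left, w.right);
  return 0;
}
```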
In the LabVIEW environment, the movement of the bot is controlled by defining a CCW (counter-clockwise) set-point velocity for the servo motors. Depending on whether the CCW velocity is positive or negative, the robot moves forward, in reverse, or rotates.
LabVIEW Simulation
The Microsoft Robotics Developer Studio (MSRDS, formerly known as Microsoft Robotics Studio) is a Windows-based environment for academic, hobbyist and commercial developers to easily create robotics applications across a wide variety of hardware. Key features and benefits of the MSRDS environment include: an end-to-end robotics development platform, a lightweight service-oriented runtime, and a scalable, extensible platform.

To this end, MSRDS provides a development tool integrated with Visual Studio and a simulation tool, so users can develop robot applications without physical robot hardware.
MSRDS Simulation
How It Works
Basically, LabVIEW communicates with the simulated robot in the MSRDS simulation environment as though it were a real robot. As such, it continuously acquires data from the simulated sensors (in this case, a camera, a LIDAR and two bump sensors) and displays it on the front panel.

The user can see the simulated robot from a bird's-eye view in the Main Camera indicator (the large indicator in the middle of the front panel; can you see the tiny red robot?). The user can see what is in front of the robot in the Camera on Robot indicator (the top-right indicator on the front panel). And the user can see what the robot sees/interprets as obstacles in the Laser Range Finder indicator (this indicator, right below Camera on Robot, is particularly useful for debugging).
DRAWBACKS IN IMAGING ALGORITHM

The algorithm is based on the Vector Field Histogram technique, which responds to colour changes in the environment. The major drawback of this algorithm experienced during testing of the code in simulation was that it doesn't give the expected results when the floor environment changes.

In our experiment we found that when the bot moves from a brownish-red carpet onto a wooden floor, it unexpectedly turns around despite not facing any obstacle within the minimum distance set in the algorithm. The basic reason is that the change in colour correspondingly changes the values in the histogram, which leads the bot to believe that an obstacle is in front of it, so it starts to move in the reverse direction.
THANK YOU
