
International Journal of Computer Engineering & Technology (IJCET)

Volume 7, Issue 3, May-June 2016, pp. 93-98, Article ID: IJCET_07_03_008


Available online at
http://www.iaeme.com/IJCET/issues.asp?JType=IJCET&VType=7&IType=3
Journal Impact Factor (2016): 9.3590 (Calculated by GISI) www.jifactor.com
ISSN Print: 0976-6367 and ISSN Online: 0976-6375
IAEME Publication

DEVELOPMENT OF CONTROL SOFTWARE FOR STAIR DETECTION IN A MOBILE ROBOT USING ARTIFICIAL INTELLIGENCE AND IMAGE PROCESSING
Mukul Anand Pathak, Kshitij Kamlakar, Shwetant Mohapatra,
Prof. Uma Nagaraj
MIT AOE, ALANDI, PUNE, MS, India
ABSTRACT
In this paper our main aim is to design and develop the control software for the detection and alignment of stairs by a stair climbing, manually operated robot. The robot platform is a differential drive with a skid steering system, mounted on a rugged chassis. Vision sensors, namely cameras, are mounted on the robot and provide motion images of the robot's surroundings. The application software applies image processing and artificial intelligence techniques to detect stairs in real time and to align the robot at an appropriate distance from the stair. The Canny edge detection method is used to detect the edges of the stairs, smooth the image and remove noise. A neural network is used to detect stairs and faults, and machine learning is used to overcome faults in stairs and act accordingly based on saved experiences. This will be a Linux based application built on the OpenCV API.
General Terms: Pattern Recognition, Algorithms
Key words: Artificial Intelligence, Image Processing, Neural Network, OpenCV API and Mobile Robot
Cite this Article: Mukul Anand Pathak, Kshitij Kamlakar, Shwetant
Mohapatra, Prof. Uma Nagaraj, Development of Control Software For Stair
Detection In A Mobile Robot Using Artificial Intelligence and Image
Processing, International Journal of Computer Engineering and Technology,
7(3), 2016, pp. 93-98.
http://www.iaeme.com/IJCET/issues.asp?JType=IJCET&VType=7&IType=3


INTRODUCTION
Autonomous ground robots have traditionally been restricted to single floors of a
building or outdoor areas free of abrupt elevation changes such as stairs. Although
autonomous traversal of stairways is an active research area for some humanoid and
ground robots, the focus within the vision and sensor community has been on
providing sensor feedback for control of the mechanical aspects of stair traversal, and
on stair detection as a trigger for the initiation of autonomous climbing, rather than on
stair traversability. The restriction to a single floor presents a significant limitation to
real-world applications such as mapping of multi-floor buildings and rescue scenarios.
Our work seeks a solution to this problem and is motivated by the rich potential of an
autonomous ground robot that can climb stairs while exploring a multi-floor building.
A comprehensive indoor exploration system could be capable of autonomously
exploring an environment that contains stairways, locating them and assessing their
traversability, and then engaging a platform-specific climbing routine in order to
traverse any climbable stairways to explore other floors. The physical properties of a
stairway may limit the platforms that are capable of climbing it. For example, a robot
may not be able to climb some stairways due to step height, and a ground robot may
be restricted to stairways with a low pitch due to its weight distribution. Our proposed
approach is an effort to integrate the existing work in autonomous stair climbing with
autonomous exploration: a system to detect and localize stairways in the environment
during the process of exploration. With a map of the environment and estimated
locations and parameters of the stairways, the robot could plan a path that traverses
the stairs in order to explore the frontier at other elevations that were previously
inaccessible. For example, a robot could finish mapping the ground floor of a
building, return to a stairway that it had previously discovered, and ascend to the
second floor to continue exploring if that stairway is of dimensions (i.e. step height,
width, pitch) that are traversable by that particular platform. Our proposed system directly addresses the needs of an exploratory platform for solving the problem defined above. We seek to answer the question "Is this traversable?" when a stairway is discovered during exploration. The questions "When should I traverse that?" and "How do I traverse that?" are left for the path planner and the stair climbing routines, respectively. The system is composed of a stairway detection module that extracts stair edge points using a single camera. The physical properties can then be used by a path planner to determine the traversability of stairs in relation to the specific robotic platform.
Autonomous robots are still under development, and new technologies emerge every day to support this effort. One of the challenges is making a robot climb stairs, and things get more complicated when the robot is heavily loaded with guns and other gear. Through manual operation it is possible to make the robot climb stairs, but this carries the risk of human error, which may result in the robot falling or slipping while climbing. An autonomous control system is therefore desired that eliminates human error by properly managing and judging the stair climbing process. With this technology in use, dangerous missions can be carried out unmanned. We aim for such a technology to help the robots of the Defence Research and Development Organisation's R&DE (E). Such robots will serve the country in the field of defence and national security. An autonomous stair climbing robot will also be helpful to Army and police personnel when missions are dangerous to life. To this end, we implement a neural network with image processing and build control software for an artificially intelligent robot.


1. LITERATURE SURVEY
The research conducted so far on the development of software for mobile robots using image processing and artificial intelligence is discussed in this section. The set of challenges outlined above spans several domains of research, and the majority of the relevant work is reviewed here. Basic concepts are also discussed for a better understanding of the project. The survey of which methodology should be applied considered the following existing algorithms and models.
Mukul Pathak, Shwetant Mohapatra and Kshitij Kamlakar proposed control software for a mobile robot in a paper published in IJAFRC. A mechanism is proposed for filtering the image using an edge detection technique. The Canny edge detection method is argued to be better than others as it is less prone to noise. Methods were also described to calculate the distance between the robot and the stairs.
Mike Fair and David P. Miller presented a stair climbing robot built around the chassis of a Quest Access wheelchair. They replaced the control electronics and added several sensors, including a laser scanner and a set of Sharp range sensors. The original wheelchair hardware was capable of negotiating a staircase, but it required the human operator to detect the staircase, put the chair into stair-climbing mode, align the chair with the staircase, and then manually control the rate of ascent or descent. The sequencing of the chair tilt and the easy-downs (the hydraulic actuators which gently allowed the chair to transition from level ground to inclines) was done automatically.
E. Mihankhah and A. Kalantari presented autonomous staircase detection and stair climbing for a tracked mobile robot using a fuzzy controller. Theoretical analysis and implementation of autonomous staircase detection and stair climbing algorithms on a novel rescue mobile robot are presented in their paper. The main goals are to find the staircase during navigation and to implement a fast, safe and smooth autonomous stair climbing algorithm. Silver is used as the experimental platform. This tracked mobile robot is a teleoperated rescue robot with strong capabilities for climbing obstacles in destroyed areas; its performance has been demonstrated in the rescue robot league of the international RoboCup competitions. A fuzzy controller is applied to direct the robot during stair climbing. The controller inputs are generated by processing range data from two laser range finders which scan the environment, one horizontally and the other vertically.
Daniel M. Helmick, Stergios Roumeliotis and Michael C. McHenry described small, tracked mobile robots for general urban mobility, developed for reconnaissance and/or search and rescue missions in buildings and cities. Autonomous stair climbing is a significant capability required for many of these missions. In their paper they presented the design and implementation of a new set of estimation and control algorithms that increase the speed and effectiveness of stair climbing. They developed (i) a Kalman filter that fuses visual/laser data with inertial measurements and provides attitude estimates of improved accuracy at a high rate, and (ii) a physics-based controller that minimizes the heading error and maximizes the velocity of the vehicle during stair climbing. Experimental results using a tracked vehicle validate the improved performance of this control and estimation scheme over previous approaches.
Sonda Ammar Bouhamed, Imene Khanfir Kallel and Dorra Sellami Masmoudi proposed staircase detection using an ultrasonic sensor in the International Journal of Advanced Computer Science and Applications (IJACSA), Vol. 4, No. 6, 2013.
A new device is proposed to enable users to see the world with their ears. Considering not only system requirements but also technology cost, they used ultrasonic sensors and one monocular camera in the design of their tool, to make the user aware of the presence and nature of potential obstacles encountered. In that paper they are concerned with using only one ultrasonic sensor to detect staircases with an electronic cane. The system was evaluated on a set of ultrasonic signals in which staircases of different shapes occur. Using a multiclass SVM approach, recognition rates of 82.4% were obtained.

2. PROPOSED SYSTEM
2.1. Project Idea

In this project our main aim is to design and develop the control software for the deployment and alignment of the flipper/wheel mechanism of a stair climbing robot. The algorithm is divided into two sections: the first is training and the second is execution. We have used a neural network framework for the process.

2.1.1. Neural Network Training


In this section, a set of images of different stairs was captured from different angles. Each image is taken one by one and the 3-channel image is converted into a 1-channel, or grey scale, image. The grey scale image is then filtered using the Canny edge detection method. The Hough line transform is then applied to the filtered image to
detect straight lines. Throughout the process the image is filtered and the lines are detected on the original image. The size of the image is chosen so that data is not lost. After resizing, the image is converted into binary form using a threshold value: since every pixel has its own intensity, each intensity is compared with the threshold, and if it is less than the threshold the pixel is treated as 0, otherwise as 1. The binary image is then stored in a file, along with the expected output for that image. A neural network is created by initializing the input, output and hidden layers and the number of neurons in each. The network is trained using the binary image file, and the weights between the neurons are adjusted until the error is less than 0.001. The outputs of the hidden neurons as well as the output neurons lie between -1 and 1. The backpropagation algorithm is used to change the weights of the neurons. After the completion of training, a .net file is created which contains the full configuration of the trained network; this file is then used in the execution process.
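As a concrete illustration, the sketch below shows one way this preprocessing and training stage could look in Python with the OpenCV API. The image size, Canny and Hough parameters, network layout, learning rate and file names are assumptions made for illustration, not values taken from the paper, and a minimal NumPy backpropagation loop stands in for whatever neural network library was actually used.

```python
import cv2
import numpy as np

IMG_SIZE = (32, 32)          # assumed network input resolution
THRESHOLD = 127              # assumed binarization threshold

def preprocess(bgr_image):
    """3-channel image -> flat 0/1 vector of detected stair edge lines."""
    grey = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)      # 1-channel grey scale image
    edges = cv2.Canny(grey, 50, 150)                         # Canny edge detection
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,       # Hough line transform
                            minLineLength=40, maxLineGap=5)
    canvas = np.zeros_like(grey)
    if lines is not None:                                    # draw the detected straight lines
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(canvas, (x1, y1), (x2, y2), 255, 1)
    small = cv2.resize(canvas, IMG_SIZE)                     # resize to the fixed input size
    return (small >= THRESHOLD).astype(np.float64).ravel()   # pixel below threshold -> 0, else 1

# Minimal tanh network trained with backpropagation; outputs lie in [-1, 1].
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = IMG_SIZE[0] * IMG_SIZE[1], 32, 1
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, n_out))

def forward(x):
    h = np.tanh(x @ W1)
    return h, np.tanh(h @ W2)

def train(samples, targets, lr=0.01, desired_error=0.001, max_epochs=5000):
    """Adjust the weights until the mean squared error drops below desired_error."""
    for _ in range(max_epochs):
        err = 0.0
        for x, t in zip(samples, targets):
            h, y = forward(x)
            delta_out = (y - t) * (1 - y ** 2)               # tanh derivative at the output
            delta_hid = (delta_out @ W2.T) * (1 - h ** 2)    # error propagated to the hidden layer
            W2 -= lr * np.outer(h, delta_out)
            W1 -= lr * np.outer(x, delta_hid)
            err += float(np.mean((y - t) ** 2))
        if err / len(samples) < desired_error:
            break
    np.savez("stairs_net.npz", W1=W1, W2=W2)                 # stand-in for the .net configuration file
```

In this sketch, training data would pair each flattened binary image with a target of +1 (stair) or -1 (no stair); the saved weight file plays the role of the .net file described above.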

2.1.2. Neural Network Execution


In this section we describe how the algorithm detects the stair using the training. First, video is captured by the camera; since video is a sequence of images, the frames are taken one by one and each 3-channel image is converted into a 1-channel, or grey scale, image. The grey scale image is then filtered using the Canny edge detection method. The Hough line transform is then applied to the filtered image to detect straight lines. Throughout the process the image is filtered and the lines are detected on the original image. The size of the image is chosen so that data is not lost. After resizing, the image is converted into binary form using a threshold value: every pixel's intensity is compared with the threshold, and if it is less than the threshold the pixel is treated as 0, otherwise as 1. The binary image is then stored in an array whose size equals the number of pixels. The neural network is then created from the .net file produced during training. After the network is successfully created, the array is given as input to it. The network processes the data and provides an output between -1 and 1. If there is a stair in front of the camera, the output value will be greater than 0.80.
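A corresponding execution loop, under the same assumptions as the training sketch (the hypothetical preprocess helper, the tanh network and the saved weight file are illustrative, not the paper's actual implementation), might look like this; only the frame-by-frame processing and the 0.80 decision threshold come from the description above.

```python
import cv2
import numpy as np

from stair_training import preprocess      # hypothetical module holding the training sketch

STAIR_THRESHOLD = 0.80                      # output above this is treated as "stair in view"

# Restore the configuration saved at the end of training (stand-in for the .net file).
weights = np.load("stairs_net.npz")
W1, W2 = weights["W1"], weights["W2"]

def classify(frame):
    """Return the network output in [-1, 1] for one video frame."""
    x = preprocess(frame)                   # flat binary vector, as in the training sketch
    h = np.tanh(x @ W1)
    y = np.tanh(h @ W2)
    return float(y[0])

cap = cv2.VideoCapture(0)                   # camera providing the motion images
while True:
    ok, frame = cap.read()                  # the video is handled as a sequence of images
    if not ok:
        break
    if classify(frame) > STAIR_THRESHOLD:
        print("Stair detected in front of the camera")
cap.release()
```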

3. FUTURE WORK
Future work includes extending the automation to descending stairs in real time and extending this feature to all the robots under R&DE (E). We are also focusing on recognizing stairs in dim lighting conditions and in the presence of other entities in the background, and on extending the application to work efficiently and smoothly on other terrains and development platforms/tools.

ACKNOWLEDGMENTS
Our sincere thanks to all those who are guiding us in this project. Our sincere thanks to our guide and mentor:
Prof. Uma Nagaraj, HOD, COMP, MIT-AOE, Pune
and to our external guides:
Mr. Alok Mukherjee, Scientist, R&DE (E), Pune
Mr. Altaf Mirza Baig, Scientist, R&DE (E), Pune
Mr. Debjyoti Das, Hi-Tech Robotic Systemz Ltd., Pune

REFERENCES
[1] Pathak, M., Kamlakar, K. and Mohapatra, S., Development of control software for mobile robot using artificial intelligence and image processing. International Journal of Advance Foundation and Research in Computer (IJAFRC), 2(12), December 2015.
[2] Oßwald, S., Gutmann, J.-S., Hornung, A. and Bennewitz, M., From 3D point clouds to climbing stairs: A comparison of plane segmentation approaches for humanoids. In: Proc. of the IEEE-RAS Int. Conf. on Humanoid Robots (Humanoids), 2011.
[3] Delmerico, J. A., Baran, D., David, P., Ryde, J. and Corso, J. J., Ascending stairway modeling from dense depth imagery for traversability analysis. In: International Conference on Robotics and Automation (ICRA), pp. 2283-2290, 2013.
[4] Johnson, A. M., Hale, M. T., Haynes, G. C. and Koditschek, D. E., Autonomous legged hill and stairwell ascent. In: IEEE International Workshop on Safety, Security, and Rescue Robotics (SSRR), pp. 134-142, November 2011.
[5] Delmerico, J., Corso, J., Baran, D., David, P. and Ryde, J., Toward autonomous multi-floor exploration: Ascending stairway localization and modeling. U.S. Army Research Laboratory, Tech. Rep. ARL-TR, 2012.
[6] Rusu, R. B. and Cousins, S., 3D is here: Point Cloud Library (PCL). In: IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 9-13, 2011.
[7] Leung, T.-S. and Medioni, G., Real-time staircase detection from a wearable stereo system. In: 21st International Conference on Pattern Recognition (ICPR), pp. 3770-3773, November 2012.
[8] Hernandez, D. C. and Jo, K. H., Stairway tracking based on automatic target selection using directional filters. In: Frontiers of Computer Vision (FCV), pp. 1-6, 2011.
[9] Tseng, C. K., Li, I., Chien, Y. H., Chen, M. C. and Wang, W. Y., Autonomous stair detection and climbing systems for a tracked robot. In: International Conference on System Science and Engineering (ICSSE), pp. 201-204, IEEE, 2013.
[10] Cloix, S., Weiss, V., Bologna, G., Pun, T. and Hasler, D., Obstacle and planar object detection using sparse 3D information for a smart walker. In: 9th International Conference on Computer Vision Theory and Applications, vol. 2, pp. 305-312, Lisbon, Portugal, January 2014.
[11] Maha M. Lashin, A Metal Detector Mobile Robot as a New Application of Microcontroller. International Journal of Computer Engineering and Technology, 5(7), 2014, pp. 24-35.
[12] Kabeer Mohammed and Dr. Bhaskara Reddy, Optimized Solution for Image Processing Through Mobile Robots Working as a Team with Designated Team Members and Team Leader. International Journal of Computer Engineering and Technology, 4(3), 2013, pp. 140-148.
[13] Kamaljeet Kaur and Ms. Manpreet Kaur, Case Study of Color Model of Image Processing. International Journal of Computer Engineering and Technology, 6(12), 2015, pp. 60-64.

