
2012 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD 2012)

An Intelligent Control Architecture for Search Robot Based on Orthogonal Perception Information
Guanqun Liu, Haibo Tong, Rubo Zhang
College of Computer Science and Technology, Harbin Engineering University, Harbin, China
Abstract—In order to address the problem of control architecture for Urban Search and Rescue (USAR) robotic systems, a comprehensive autonomous victim search and rescue robotic system based on biomimetic sensing technology is developed, and an intelligent robot control architecture based on orthogonal perception information is proposed. The intelligent control architecture contains a sensory system, data processing, a deliberative layer, a priority decision module, and so on. Experiments indicate that the search and rescue robotic system using the proposed intelligent control architecture searches for and discovers victims promptly and efficiently.

Keywords—Urban Search and Rescue; intelligent control architecture; victims
Figure 1. The Urban Search and Rescue Robotic System

I. INTRODUCTION

With the development of robotic research, mobile robots have become an active field in both academia and industry [1]. Urban search and rescue robots are mobile, autonomous robots that have the potential to aid humans [2] in victim rescue where it is dangerous for human rescue workers, including environments resulting from earthquakes, floods, mine fields, nuclear catastrophes, etc. In both human-caused and natural disasters, a major concern is to build up sufficient awareness and assessment of the cluttered environment so as to search for victims in the debris efficiently.

Current Urban Search and Rescue applications require a human operator to guide the robot remotely from outside the hot zone [3]. Although human operation can be effective and reliable, the unknown nature of the environment makes robot navigation, victim search, and identification difficult. Operators can become heavily stressed, fatigued, and inefficient due to a loss of the general situation of the disaster environment, causing critical errors in control and victim identification [4]. An alternative to human remote control is to develop a fully autonomous controller for the rescue robot, which benefits in part from the development of machine intelligence and artificial intelligence. However, there are a considerable number of issues in deploying a fully autonomous rescue robot in an unknown, cluttered disaster environment. First, rescue workers in general do not fully trust an autonomous rescue robot without human supervision in critical rescue missions. Furthermore, autonomous navigation in cluttered disaster environments

containing large amounts of rubble is a difficult task to achieve [5].

In this paper, we develop a comprehensive autonomous victim search and rescue system, shown in Fig. 1, and present a novel intelligent control architecture for this semi-autonomous search and rescue platform, utilizing orthogonal sensory information provided by many kinds of sensors, such as sound sensors, vision sensors, sonars, a compass, and a CO sensor. The vision sensor, comprising a light camera and an infrared camera (Fig. 2), provides a 3D scene of the disaster environment on which the victim detection algorithm is run. A sound source directional localization algorithm, designed to orient toward victims, uses the data provided by the sound sensors. We also design and develop a unique hierarchical control architecture, comprising the Sensory System, Data Processing, the Deliberative Layer, the Priority Decision Module, and the Robot Actuators. Data processing handles the sensory data from the sound module and the vision module respectively. The deliberative layer allocates a particular priority to every module on the basis of the priority, role, and accuracy of each device; the robot actuators then use the results of the deliberative layer to send low-level commands to the robot: Rotate (left/right) or Move (front/back).

In this paper we investigate the robotic Urban Search and Rescue system and the intelligent control architecture for this system. Experiments indicate that the search and rescue system using the proposed intelligent control architecture can search for and discover victims promptly and efficiently.

This work was supported by the National High Technology Research and Development Program of China (863 Program) (No. 2009AA04Z215) and the National Natural Science Foundation of China (No. 60975071 and No. 61100005).



Figure 2. Light camera and Infrared camera

II. ROBOTIC URBAN SEARCH AND RESCUE SYSTEM


Figure 3. The Graphical User Interface

Finding a victim in a disaster scene is a difficult mission, especially in a cluttered environment. The physical characteristics of a victim that we can detect include sound, body shape, skin color, and clothing texture. In this section, we present the configuration of the urban search and rescue robotic system.

A. Vision Sensor
The vision sensor is the most effective sensor for detecting human presence. Our search and rescue robotic system is equipped with an infrared camera and a light camera. The infrared camera, which images the heat of the environment, helps discriminate between humans and non-humans, so that the light camera only needs to examine the hot regions of the same picture. This makes the detection efficient and accurate; a minimal sketch of this thermal-gating idea is given at the end of this section. The algorithm used to detect and identify victims is introduced in [6].

B. Sound Sensor
Sound is another human characteristic that we can detect and measure in the disaster scene. A fast and efficient sound detection and localization algorithm based on time-delay estimation [7] is used by the search and rescue robot system. The sound detection and localization result guides the robot to move toward nearby victims.

C. Other Sensors
Other sensors include sonars and a CO sensor. Sonar is used to measure the distance from the robot to obstacles in order to implement the robot's obstacle avoidance. The CO sensor measures the CO concentration; when the concentration is high, it endangers both victims and rescue workers.

D. Graphical User Interface
The GUI (Fig. 3) provides four capabilities: display the sensor data transmitted over the wireless network; set the marked points the robot follows or the path planning strategy; switch the control mode between autonomous control and manual control; and operate the robot with the keyboard when it is under manual control.
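As an illustration of the thermal-gating idea of Section II-A, the following is a minimal sketch, assuming OpenCV 4 and an 8-bit thermal image registered to the color image. The threshold, the minimum region area, and the looks_like_human stub are illustrative placeholders of ours, not the detector of [6].

```python
import cv2
import numpy as np

def looks_like_human(roi: np.ndarray) -> bool:
    # Placeholder standing in for the victim identification algorithm of [6].
    return roi.size > 0

def detect_victims(ir_frame: np.ndarray, rgb_frame: np.ndarray,
                   hot_thresh: int = 200) -> list:
    """ir_frame: 8-bit thermal image aligned with rgb_frame."""
    # 1. Keep only pixels warmer than the threshold (candidate body heat).
    _, hot_mask = cv2.threshold(ir_frame, hot_thresh, 255, cv2.THRESH_BINARY)
    # 2. Group hot pixels into connected regions (OpenCV 4 return signature).
    contours, _ = cv2.findContours(hot_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    detections = []
    for c in contours:
        if cv2.contourArea(c) < 100:          # skip small heat specks
            continue
        x, y, w, h = cv2.boundingRect(c)
        # 3. Run the visual detector only inside the hot region.
        if looks_like_human(rgb_frame[y:y + h, x:x + w]):
            detections.append((x, y, w, h))
    return detections
```

Gating the visual detector on the thermal mask is what keeps the detection both efficient (most of the frame is skipped) and accurate (cold clutter cannot trigger it).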

III. DATA PROCESSING AND CONTROL ARCHITECTURE

This section presents the data processing and intelligent control architecture we designed. The data generated by the sound module and the vision module are real-time and isolated. Because the disaster scene is usually noisy, and the raw sensor data do not take historical data into account, they are not completely credible. We propose a data processing algorithm based on cumulative statistics over historical data, particularly for sound source detection and orientation. At the same time, we design a control architecture that guides the search and rescue robot to cover the disaster area and discover victims completely and efficiently.

A. Data Processing
Data processing is the process of gathering information from different sources to provide a robust and complete description of the environment. The processing of data may be complicated because of the characteristics and accuracy of each sensor, and different sensors have different advantages and disadvantages depending on the disaster conditions.
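The sound module's raw input throughout this section is one direction estimate per second, produced by the time-delay estimation algorithm of [7] (Section II-B). As a rough illustration only, not the algorithm of [7], here is a far-field two-microphone sketch; the microphone spacing, the one-second frames, and the sign convention are our assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.2        # m, illustrative mic-pair baseline

def direction_of_arrival(left: np.ndarray, right: np.ndarray, fs: float) -> float:
    """Bearing in degrees from the broadside of the mic pair, from one
    second of samples per microphone at sampling rate fs."""
    # Cross-correlate the two frames to find the lag at which they align best.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)        # delay in samples
    tau = lag / fs                                  # delay in seconds
    # Far-field model: tau = d * sin(theta) / c; clip against noise outliers.
    s = np.clip(tau * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```

As the text below emphasizes, any single such estimate can be badly wrong in a noisy scene, which is what motivates the cumulative-statistics processing.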

1) Vision Module: The vision module uses the light and infrared cameras to take pictures of the disaster environment and executes the body detection algorithm to decide whether victims are present. It records the human label and the distance from the robot to the victim. Using the current status of the robot, including its coordinates and orientation, we can calculate the coordinates of the victim.

2) Sound Module: Because the noisy disaster scene heavily influences the accuracy of sound source detection and localization, this is the most complicated part of this section. The sound source detection and localization algorithm of Section II gives one direction estimate every second, but it does not consider or utilize the historical localization data, resulting in many incorrect estimates. Such mistakes prevent the search and rescue robot from working steadily. In view of this defect, we propose a sound source localization data processing algorithm based on cumulative statistics over historical data. We build a structure, shown in Fig. 4, to store the historical data calculated by the sound source detection and localization module, and calculate the victim orientation using the following process:

a) Receive and Store Data: Two structures, called the Sound Record and the Counter, store the sound source detection and localization data and the counts respectively. Because the next step uses historical data, the Sound Record keeps 3 columns corresponding to the last 3 seconds of data, and every column contains 72 (360/5) rows, one per 5-degree bin. When updating the structure every second, we determine which row the new datum belongs to and overwrite the old datum with the new one. Each row of the Counter stores the number of valid entries in the corresponding row of the Sound Record. A minimal sketch of this storage and update step follows.
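A minimal sketch of the Sound Record and Counter of step a), assuming NumPy; the variable names and the use of NaN for empty cells are our own conventions. Because the buffer is circular, overwriting the oldest column also discards the data stored longest (step c) below).

```python
import numpy as np

N_BINS, N_SECONDS = 72, 3                  # 360/5 rows, last 3 s of data
sound_record = np.full((N_BINS, N_SECONDS), np.nan)   # the Sound Record
counter = np.zeros(N_BINS, dtype=int)      # valid entries per 5-degree row

def store_estimate(t, bearing_deg=None):
    """Called once per second; bearing_deg is None when nothing was heard."""
    col = t % N_SECONDS                    # column holding the oldest second
    # Invalidate whatever that column held and update the per-row counts.
    counter[~np.isnan(sound_record[:, col])] -= 1
    sound_record[:, col] = np.nan
    if bearing_deg is not None:
        row = int(bearing_deg % 360) // 5  # which 5-degree bin the estimate hits
        sound_record[row, col] = bearing_deg
        counter[row] += 1
```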


Figure 5. Intelligent control architecture

Figure 4. The structure to store and calculate historical data

b) Confirm Orientation Based on Cumulative Statistics: Bi is equal to the number of valid data among Ai,0 through Ai,2, i.e., the value stored in row i of the Counter. The following cascade confirms whether there are at least 3 valid sound orientation data:
1. Sum the counts of the three layers in which Bi is the middle layer; if the sum is insufficient,
2. sum the three layers in which Bi is at the bottom; if still insufficient,
3. sum the three layers in which Bi is at the top; if still insufficient,
4. the requirement is not met and no orientation is confirmed.

c) Remove Excess Data: In this step, we remove the excess data that have been stored in the Sound Record for the longest time. A hedged sketch of the confirmation step is given below.
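A sketch of the confirmation cascade of step b), reusing the counter array from the previous sketch. Treating the 72 rows as circular (wrapping across 0/360 degrees) is our assumption, as is the choice of the candidate row i (e.g., the row with the largest count).

```python
import numpy as np

def confirm_orientation(counter, i, need=3):
    """counter[i] is B_i; try the middle, bottom, and top window positions
    in turn and return the confirming rows, or None if no window reaches
    `need` valid estimates."""
    n = len(counter)
    for offsets in ((-1, 0, 1), (0, 1, 2), (-2, -1, 0)):
        rows = [(i + o) % n for o in offsets]     # wrap around 360 degrees
        if counter[np.array(rows)].sum() >= need:
            return rows
    return None
```

The confirmed bearing could then be taken as, for example, the mean of the valid entries stored in the returned rows of the Sound Record.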

3) Other Modules: The other modules include the sonar data and the CO sensor data. We use the sonar data to implement the robot's obstacle avoidance with a fuzzy reasoning obstacle avoidance algorithm. Data from the CO sensor are transmitted directly to the GUI, so that the operator knows the CO concentration of the disaster scene.

B. Control Architecture
In general, robot control architectures can be classified as deliberative, reactive, and hybrid [8]. Deliberative control consists of high-level planning, whereas reactive control executes the results generated and calculated from the sensory data. Traditionally, the reactive architecture has been considered behavior-based control: the overall control of the robot is shared among a set of perception-action units known as behaviors [8]. Our proposed control architecture (Fig. 5) is hierarchical and consists of the following modules:

1) Robot Sensors: There are many kinds of sensors, such as the light and infrared cameras, the three-dimensional digital compass, and other internal/external sensors. Data generated and calculated from the sensory system are the input of the control system. The sensors have different characteristics, advantages, and disadvantages, and reflect the environment from different aspects.

2) Robot Location: The search and rescue system uses the three-dimensional digital compass and internal encoders to calculate the location of the robot in the disaster environment. Robot location is the foundation of the system: only when we know the robot's coordinates can we calculate the victims' locations.

3) Data Processing: In this phase, data from the different sensor modules, such as the sound module, the vision module, and the sonars, are handled with the purpose of gathering effective results, reducing error, and eliminating contradictions and redundancies.

4) Deliberative Layer: In this layer, the orthogonal data are pooled together and processed according to the characteristics and role of each module in the task cycle. Decisions are made by analyzing the results of the preceding data processing phase. It is also in the deliberative layer that the level of autonomy between human control and autonomous control is decided.

5) Priority Decision Module: The vision module is aimed at detecting the disaster environment and identifying victims; in contrast, the sound module is aimed at detecting sound and confirming the orientation of a human, to guide the robot toward victims. The sonar data are used for obstacle avoidance to guarantee the safety of the search robot. Accordingly, the priority decision module gives the highest priority to obstacle avoidance, the middle priority to the vision module, and the lowest priority to the sound module, as sketched after this list.

6) Robot Actuators: The robot actuators module consists of the robot's motors and motor control boards. Appropriate motor signals are sent based on the results of the modules above.
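A minimal sketch of the fixed-priority arbitration described in item 5); the command tuples mirror the Rotate/Move primitives of Section I, while the module interfaces are illustrative assumptions of ours.

```python
def decide_command(sonar_cmd, vision_cmd, sound_cmd):
    """Each argument is a low-level command such as ('rotate', angle_deg)
    or ('move', distance_m), or None when that module has nothing to say."""
    # Fixed priority order: obstacle avoidance > vision > sound.
    for cmd in (sonar_cmd, vision_cmd, sound_cmd):
        if cmd is not None:
            return cmd
    return ('move', 0.0)   # no module fired: hold position

# Example: an obstacle turn overrides the sound module's guidance.
print(decide_command(('rotate', -30.0), None, ('rotate', 10.0)))
# -> ('rotate', -30.0)
```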

IV. EXPERIMENTS

Experiments are designed to verify the performance of the Urban Search and Rescue Robotic System and the proposed intelligent control architecture. The experiments were carried out in a room and in a corridor, under different light conditions, as shown in Figs. 6 and 7. The rubble-like objects within the simulated disaster scene include wood, plastic, and cardboard. In addition, the scene contains three victims at


Figure 7. Experiment scene in the corridor

Figure 6. Experiment environment

random locations within the environment. In the room, the victims hold cardboard that blocks them, so that only a portion of the body, limbs or head, is visible. In the corridor, they hide themselves in the woodpile.

Fig. 7 shows the experiment in the corridor, with the robot in manual control mode. The operator can observe the disaster scene through the images generated by the light camera and the infrared camera, can learn the orientation of potential victims by watching the indicator, and can control the robot with the keyboard.

In the room, the victims sit in a circle and call for help in turn, as in Fig. 8. When the sound module of the search and rescue system detects a voice and orients toward the victims (step 2), it guides the robot to rotate and move toward the potential victims; the robot is under sound navigation (steps 3 and 4). The vision module monitors the environment continuously. When it identifies a victim, it marks the victim, records the coordinates, and informs the GUI by sending the region of the victim on the image; the GUI draws this region on the screen (step 5). Fig. 9 shows the same course in a dark environment.

The probability of discovering victims under different conditions is presented in Table I and is defined here as the probability that the search and rescue system detects and identifies a victim. There are 11 trials in total, with different illumination, backgrounds, and victim poses. The average probability is clearly higher than 90%; a quick recomputation from the table's counts is sketched below.
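As a quick check, the table's per-trial probabilities and their overall average can be recomputed directly from the person-time counts (data copied from Table I):

```python
# Person-time of victims / person-time detected, rows 1-11 of Table I.
totals   = [32, 36, 10, 17, 28, 10, 32, 34, 25, 11, 13]
detected = [30, 34,  9, 16, 26,  9, 30, 31, 23, 10, 12]
for d, t in zip(detected, totals):
    print(f"{d}/{t} = {d / t:.2%}")          # matches the table's last column
print(f"overall: {sum(detected)}/{sum(totals)} = "
      f"{sum(detected) / sum(totals):.2%}")   # 230/248 = 92.74%, above 90%
```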

Figure 8. Experiment scene in light environment

Figure 9. Experiment scene in dark environment


TABLE I. PROBABILITY OF DISCOVERING VICTIMS

| SN | Illumination | Background | Pose of victim         | Person-time of victims | Person-time detected | Probability of discovering victim |
|----|--------------|------------|------------------------|------------------------|----------------------|-----------------------------------|
| 1  | Normal       | Complex    | Stand, blocked partly  | 32                     | 30                   | 93.75%                            |
| 2  | Normal       | Complex    | Stand, blocked partly  | 36                     | 34                   | 94.44%                            |
| 3  | Weak         | Simple     | Sit, non-blocked       | 10                     | 9                    | 90.00%                            |
| 4  | Weak         | Simple     | Sit, non-blocked       | 17                     | 16                   | 94.12%                            |
| 5  | Weak         | Complex    | Stand, blocked         | 28                     | 26                   | 92.86%                            |
| 6  | Weak         | Complex    | Stand, blocked         | 10                     | 9                    | 90.00%                            |
| 7  | Weak         | Complex    | Stand, blocked heavily | 32                     | 30                   | 93.75%                            |
| 8  | Weak         | Complex    | Stand, blocked heavily | 34                     | 31                   | 91.18%                            |
| 9  | Weak         | Complex    | Stand, non-blocked     | 25                     | 23                   | 92.00%                            |
| 10 | Weak         | Complex    | Stand, non-blocked     | 11                     | 10                   | 90.91%                            |
| 11 | Weak         | Complex    | Stand, blocked         | 13                     | 12                   | 92.31%                            |

V. CONCLUSION AND FUTURE WORK

The Urban Search and Rescue robotic system has become an important application in the field of mobile robots. In this paper, we propose a comprehensive urban search and rescue robotic system and an intelligent control architecture. The intelligent control architecture provides the robot system with the ability to decide which task should be executed and which control mode is employed. Using our proposed intelligent control architecture, the search and rescue robotic system can search for and identify victims quickly and efficiently. Future work will consist of performing extensive experiments in a large number of different USAR scenes, and will focus on improving the victim detection and identification algorithm and on robot mission planning in cluttered, unknown environments.

ACKNOWLEDGMENT
The authors would like to especially thank Rubo Zhang, Guanqun Liu, Zhihui Li, Chunyan Shao and Xianglei Zhang for their contributions to the paper and the robot system, and gratefully acknowledge the assistance of Jianxin Wang, Dahai Yu and Zhongqiu Guo in carrying out the experiments.

REFERENCES
[1] Barzin Doroodgar, Maurizio Ficocelli, "The Search for Survivors: Cooperative Human-Robot Interaction in Search and Rescue Environments using Semi-Autonomous Robots," IEEE International Conference on Robotics and Automation, USA, pp. 2858-2863, May 2010.
[2] Benoit Larochelle, Geert-Jan M. Kruijff, Nanja Smets, Tina Mioch, Peter Groenewegen, "Establishing Human Situation Awareness Using a Multi-Modal Operator Control Unit in an Urban Search & Rescue Human-Robot Team," 20th IEEE International Symposium on Robot and Human Interactive Communication, USA, pp. 229-234, July 2011.
[3] Barzin Doroodgar, "A Hierarchical Reinforcement Learning Based Control Architecture for Semi-Autonomous Rescue Robots in Cluttered Environments," 6th Annual IEEE Conference on Automation Science and Engineering, Canada, pp. 948-953, August 2010.
[4] J. Casper, R. Murphy, "Workflow study on human-robot interaction in USAR," IEEE International Conference on Robotics and Automation, USA, pp. 1997-2003, 2002.
[5] R. Murphy, "Activities of the Rescue Robots at the World Trade Center from 11-21 September 2001," IEEE Robotics & Automation Magazine, pp. 50-61, 2004.
[6] Li Zhihui, Shao Chunyan, Liu Yongmei, "Motion estimation based on axis affine model," IEEE International Conference on Mechatronics and Automation, China, pp. 572-576, August 2010.
[7] Liu Guanqun, Zhang Rubo, Xu Dong, "A fast and efficient time-delay estimation algorithm for sound localization," International Conference on Advanced Materials and Computer Science, China, pp. 1201-1204, 2011.
[8] P. Pirjanian, "CAMPOUT: A control architecture for multi-robot planetary outposts," SPIE, The International Society for Optical Engineering, pp. 221-230, 2000.

