3.3 Simulator data input

A scenario file or a 3D environmental model serves as input data for the Simulator (Sensor Simulator Engine + Virtual Sensor Platform). The Simulator also needs some information from the Knowledge DB (KDB). User intervention is needed for object deployment and for the specification of object parameters. These parameters are written in the scenario file, which contains all the information necessary for loading the scene.

3.4 Simulator data output

The Simulator provides five different outputs, which can be exploited by the Fusion System. It provides Sensor Observation Data and related Sensor Observation Ground Truth Data, which represent the synthetic sensor data output. Then it provides Scenario Ground Truth Data, which represent static information about objects in the scene, and Dynamic Ground Truth Data, which represent dynamic information about objects in the scene. Finally, the Simulator provides contextual information about the 3D environment in the OGC standard CityGML [13], which is used for the representation, storage and exchange of virtual 3D city and landscape models. The Scenario Ground Truth Data follow our own XML Schema (XSD) [14], [15], [16]. Sensor Observation Data and Sensor Observation Ground Truth Data can be in a format according to a specific XSD or the Transducer Markup Language XSD [17].

The Sensor Observation Data contain information about the time stamp, sensor identification, detected object position and detected object features. The Scenario Ground Truth Data contain information about sensor position, orientation and field of view, and about features of agents and objects within the scene. The Dynamic GTD Data are sent when parameters of objects in the scenario are changed.

the Sensor Observation GTD Data (1b) and the Sensor Observation Data (2b). The Scenario Environmental Maps (5) contain a 3D environment and provide contextual information in the CityGML format. Both Scenario GTD Data (3) and Dynamic GTD Data (4) represent GTD about agents, sensors and other objects.

3.6 Simulator development environment

The Simulator is based on a modular, three-tier architecture. The Scenario Visualization and the Simulation Visualization stand within the presentation tier. This layer exploits the Ogre Editor Ogitor [18], which is a plug-in based WYSIWYG editor environment for the Object-Oriented Graphics Rendering Engine (OGRE) [19], and uses the Qt toolkit [20] for the graphical user interface (GUI). Ogitor is released under the LGPL and the OUTL license policy. OGRE is a 3D library for OpenGL and/or Direct3D and is released under the MIT License. Qt is released under the GNU LGPL, the GPL or a commercial license. The Blender tool (GNU GPL license) is used for 3D environmental model creation, and the database management system (DBMS) PostgreSQL [21] (license similar to BSD and MIT) is used for database creation and management. The Simulator application layout is depicted in Figure 3.
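The outputs described in section 3.4 are record-oriented; as a minimal sketch, a Sensor Observation Data record could be modeled as below. The field names are assumptions drawn from the description above (the authoritative structure is defined by the specific XSD or the TML XSD [17]), and the sample values are illustrative.

```python
from dataclasses import dataclass, field

# Illustrative record for the Sensor Observation Data of section 3.4:
# time stamp, sensor identification, detected object position and
# detected object features. Field names are assumed, not taken from
# the Simulator's actual schema.
@dataclass
class SensorObservation:
    timestamp: str                      # time stamp of the measurement
    sensor_id: str                      # identification of the sensor
    position: tuple                     # detected object position (x, y, z)
    features: dict = field(default_factory=dict)  # detected object features

obs = SensorObservation(
    timestamp="2011-06-04T14:41:31.01Z",
    sensor_id="SENSOR_01",
    position=(-35.0, 0.0, 70.0),
    features={"class": "Vehicle"},
)
```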
and used for sensor and agent modeling. It is possible to add any needed objects, configure their parameters and specify various parameters specific to the simulation. The objects available for deployment are described in chapter 4.1, Simulation objects. The scenario can be saved and reused at any time in the future.

4.1 Simulation objects

The following objects can be placed into the scene in order to create the scenario.

Agent object: HUMANs, ANIMALs, VEHICLEs, EVENTs and WEAPONs are considered agents. Each type of agent possesses its own parameters. The agents can be controlled or semi-controlled. The goal of controlled agents is specifically determined by the scenario; the goal of semi-controlled agents is chosen randomly from a predefined set. There are several internal parameters assigned to the agents, such as can be carried, can carry agents and is movable.

Area object: Represents a significant area in the simulation. The area can be marked by non-stopping or no-entering flags.

Communication & Communication Network object: These objects allow simulating radio communication among agents within a communication group. Groups of agents are called communication networks, and a particular agent can be part of several communication networks. The agents in a particular communication network use the same frequency and modulation when transmitting to each other. There are active parts of communication, in which agents transmit, and passive parts without transmission. The communication consists of one or more conversations that are divided into speeches. Each communication has several parameters to be set (start time, end time, half-duplex, full-duplex, frequency, modulation, network ID, channel utilization, etc.) and these parameters are stated in the communication object.

Explosion object: Represents a weapon agent with the ability to detonate.

Group object: This object allows grouping agents into groups for various purposes.

Scenario object: The Scenario object allows setting important parameters for the scenario run. The main parameters are Scenario ID, Start Time, Random Seed, False Alarm Rates, the number of automatically deployed agents, etc. A Scenario object has to be present in the scene for the scene to be executable.

Sensor object: This object allows the creation of sensors. Each sensor has position, direction, field of view, accuracy and frame rate parameters. There are eleven types of sensors and each type has specific input parameters for its modeling. The list of supported sensors is as follows:

Acoustic (AC)
Electro-optical (EO)
Communication intelligence (COMINT)
Chemical, biological, radiological, nuclear and meteorological (CBRN/METEO)
Hyperspectral (HSI)
Identification friend or foe (IFF)
Laser range finder (LRF)
Low-end (LE)
Small ground radar (SGR)
Synthetic aperture radar (SAR)

Shooting object: This object specifies an attacking and a defending group of agents, and for each group it specifies loss rates.

Timeline object: A timeline specifies which actions are performed at a specific time by particular agents. Among these actions belong Move, Pick, Drop and Execute. The main timeline has to be present in each scene because it defines the main storyline of the scenario. The consecutive timelines are intended for additional storylines. These timelines are activated by triggers. Triggers define both space and time conditions for the activation of additional timelines. For instance, an improvised explosive device (IED) will explode 10 seconds after Agent1's arrival at Waypoint 5.

Waypoint object: Waypoints represent significant locations in the scenario. Only waypoints can be designated as goals for agent movement. There are special kinds of waypoints through which agents can enter or leave the scene.

Track object: Tracks are represented by waypoints and they determine agent movement. There are several types of tracks: tracks with an exact order of waypoints and tracks with a random order of waypoints.

Trigger object: Triggers evoke the activation of a planned timeline. A trigger can be based on time (timed triggers) or on position (positional triggers). Timed triggers are triggered by absolute time and can fire either once or repeatedly. A positional trigger defines a distance (from the agent to an object or another agent, or the distance among the agents of a group), which can be lower or higher than a defined threshold value. Both kinds of triggers can affect either a single agent or a group of agents.
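The trigger mechanism described above can be sketched in a few lines. The class below is an illustrative model, not the Simulator's actual API, and the arrival time used in the IED example is an assumed value.

```python
class TimedTrigger:
    """Illustrative model of a timed trigger: activates a planned
    timeline at an absolute time, once or repeatedly. Names and layout
    are assumptions, not the Simulator's actual API."""

    def __init__(self, activation_time, repeat=False):
        self.activation_time = activation_time
        self.repeat = repeat
        self.fired = False

    def check(self, now):
        """Return True when the trigger activates its timeline."""
        if self.fired and not self.repeat:
            return False          # an unrepeated trigger fires only once
        if now >= self.activation_time:
            self.fired = True
            return True
        return False

# Example from the text: an IED explodes 10 seconds after Agent1's
# arrival at Waypoint 5. The arrival time (42 s) is an assumed value.
arrival_time = 42.0
ied = TimedTrigger(activation_time=arrival_time + 10.0)
print(ied.check(45.0))  # False: the 10 s delay has not elapsed yet
print(ied.check(52.0))  # True: the timeline (the explosion) activates
```

A positional trigger would follow the same pattern, with `check` comparing a measured distance against the threshold instead of comparing times.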
5 Simulation

The following picture (Figure 4) depicts the main modules of the Simulation Management:

Arrival: this behavior is applied when the agent is stopping. It affects the velocity of the agent, which is why the movement looks natural.

Leader Following: this behavior allows controlling the movement of groups.

Obstacle Avoidance: agents avoid colliding with each other or with solid objects.

Path Following: this behavior allows agents to move within the radius of a poly-line.

Wall Following: this behavior allows agents to move parallel to edges with a defined offset.
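The Arrival behavior lends itself to a short sketch. The behaviors listed here correspond to classic steering behaviors such as those provided by the OpenSteer C++ library [22]; the function below is an illustrative approximation in Python, not OpenSteer code, and its parameter names are assumptions.

```python
def arrival_speed(distance, max_speed, slowing_radius):
    """Desired speed under the Arrival behavior: full speed outside the
    slowing radius, then a linear ramp down to zero at the target. The
    gradual change of velocity is what makes the stopping look natural."""
    if distance >= slowing_radius:
        return max_speed
    return max_speed * distance / slowing_radius

# An agent 20 m from its goal with a 10 m slowing radius still moves at
# full speed; at 5 m it has slowed to half speed.
print(arrival_speed(20.0, 2.0, 10.0))  # 2.0
print(arrival_speed(5.0, 2.0, 10.0))   # 1.0
```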
Figure 6: Sensor modeling

The detection function is a very significant element in sensor modeling, because it contains the logic for the creation of the event data structure. Formula (1) represents the declaration of the detection function (DetectFunction) method.

sensor simulator event = DetectFunction (
        object data,
        environment data,
        input data)                                (1)

The Object data contain the ground truth information about the sensor and the target (type, position, and parameters). The Environment data contain information about the current scene conditions; this information affects the sensor measurement accuracy. The Input data are defined for each type of sensor and contain the prior information about the sensor parameters (e.g. spectral range, spectral accuracy, resolution, etc.). The Sensor simulator event is a list of data structures, which are sent to the SDG and contain the distorted sensor data. A pseudo-random number generator is used within the detection function for the data distortion, in order to be able to repeat the simulation with the same results.

5.3 Error modeling

There are several errors affecting sensor measurements and they fall under two categories:
the errors affecting the position and the features of detected objects within the Sensor Observation Data, and
the errors affecting object track identification.
The first category of errors is modeled within the detection function. Object track identification errors are described further.

Track errors are represented by two types of false alarms: false positives and false negatives. The False Positive Alarms represent tracks that do not belong to any object within the scenario. This is modeled as phantom agents. The phantoms can represent any agent type. They appear at random moments close to the other agents and they move throughout the scene for a few seconds before they disappear. The number of phantom agents is determined by a false positive rate defined within the scenario. They can be detected by only one sensor during this time. The False Negative Alarms include track ID changes, incomplete tracks and track gaps. A Track ID change occurs when two agents are close to each other or when they overlap; in that case, the sensor can swap the target ID or assign a new target ID to the agent. Incomplete tracks are modeled by dropping out some measurements; the probabilities for dropping out a measurement are determined by the scenario. The subsequent sensor measurements have different target IDs, and this creates a perception that one agent disappeared and another one appeared. Track gaps are likewise modeled by dropping out some measurements; in contrast to the incomplete tracks, subsequent measurements have the same target ID. This creates a perception that there is a gap within the track. Errors are controlled by the pseudo-random generators.

5.4 Classification

A classification is evaluated within the detection function of each sensor. It affects the features assessment in the Sensor Observation Data. It is possible to classify the object because all parameters of the detected object are known (its bounding box, distance from sensor, object and sensor parameters). Two thresholds for the classification are used. The first threshold represents the maximum distance for classification. The second threshold represents the distance for 100% classification. These thresholds are stated in the KDB for each sensor type and they are used for a probability function calculation for all known object types. Because the classified object is known, the appropriate probability function is used. Then an "Unknown" class is created as a supplement to 100%. Then other possible classification classes are specified with their probability values within the "Unknown" class probability. For example, in the first step the object is determined as a Vehicle for 50% and as Unknown for 50%. In the second step the object is determined as a Vehicle for 50%, as Unknown for 25%, as a Human for 20% and as an Animal for 5% (the second step can be skipped). Classification is further affected by environmental conditions.

6 Data interchange

The simulator provides an interface which allows sending the sensor simulator output to the sensor fusion system. The solution is based on information interoperability and standardization. Information interoperability can be described as an unambiguous exchange of information and services among automated information systems. Standardization is the main presumption for information interoperability. Standardization means a common agreement on what
information is exchanged, in what format, how the exchange is performed and under what conditions. Interoperability standards are a set of agreements that allow systems to co-operate with other systems by means of exchanging information.

There are various definitions and models of interoperability levels. Our solution is based on the following definition of interoperability layers:
Physical communications layer (e.g. combat radio, Ethernet, etc.).
Protocol layer (a transport protocol, e.g. TCP/IP, UDP/IP, etc.).
Data structure layer (a transport format, e.g. TML, SensorML, etc.) [17], [23].
Semantic understanding of the data layer.

This solution uses TCP/IP or UDP/IP on the protocol layer and the Transducer Markup Language on the data structure layer. The Transducer Markup Language (TML) [17] is an XML-based system for capturing, characterizing and enabling the transport of raw sensor data. It has elements designed to make it a self-contained package, including not only the raw data (dynamic data) but also metadata (static information), which are necessary for processing (fusing) the information later. Even though TML is mainly intended for raw data, TML's transducer element is flexible enough to accommodate any type of data. A TML packet with dynamic data can look like this:

<data clk="2011-06-04T14:41:31.01Z"
      ref="SENSOR_01_OBJECT_POS">
    -35 0 70</data>

TML data are created by the SDS and can be sent via the Data Interface to the Fusion System (Figure 1), or the TML output data can be stored in the database and resent later. The Data Interface is a module providing the transfer/delivery mechanism for data. Currently it uses the transport layer protocol TCP/IP or UDP/IP, but it can easily be extended to use various application protocols (e.g. HTTP, SOAP, etc.).

7 Applications

This sensors simulation environment is being developed within the Distributed and Adaptive multisensor FusioN Engine (DAFNE) project [24], which is a European Defence Agency (EDA) project. This project aims at designing and experimenting with a real-time distributed multi-sensor fusion engine that will combine data from heterogeneous sensors in order to generate reliable estimates about the entities and events in an urban warfare scenario. The DAFNE project focuses mainly on enhancing situational awareness during military operations by means of a fusion engine able to combine data from the sensors employed in an urban warfare scenario.

Further, the sensors simulation environment will be used in a training and simulation center in order to simulate sensor networks for fighting in built-up and open areas and for military operations in urban terrain. It will also be used as a simulator for a system for the communication, processing and storing of sensor data streams [25], [26].

8 Conclusion

This paper describes a sensors simulation environment suitable for sensor data fusion. The environment uses open source technologies and, thanks to its data interface, it can easily be interconnected with sensor fusion systems. Thanks to its universality, it can fulfill many purposes.

It can serve for sensor simulation and for testing based on realistic observations, and it is possible to simulate various types of operations in order to generate reliable data for various types of sensor fusion systems. Due to its modular architecture, it can be easily extended by additional modules providing other functionalities.

The simulation is now based on manually created terrains, but there are plans to implement functionality for loading external terrain data and to improve the simulation so that it works on such surfaces. This involves some changes related to the scene creation phase and the navmesh calculation. Improvements in the reaction to dynamic changes inside the scene also belong to possible future improvements.

Acknowledgements

The authors would like to thank the European Defence Agency (EDA) for financing the DAFNE project and thus this research. Further, we want to thank the partners in the DAFNE project: IDS, University of Udine, TNO, Warsaw University of Technology and IOSB for their collaboration and fruitful discussions.

References

[1] Chandresh Mehta, Govindarajan Srimathveeravalli, and Thenkurussi Kesavadas. An Approach to Design and Development of a Decentralized Data Fusion Simulator. In Proceedings of the 37th Conference on Winter Simulation, December 04-07, 2005, Orlando, Florida. http://www.informs-sim.org/wsc05papers/145.pdf [accessed March 2011]

[2] G. Srimathveeravalli, N. Subramanian, and T. Kesavadas. A Scenario Generation Tool for DDF Simulation Testbeds. In Proceedings of the 36th Conference on Winter Simulation, December 05-08, 2004, Washington, D.C. http://www.informs-sim.org/wsc04papers/225.pdf [accessed March 2011]

[3] M. Carlotto and I. Kadar. A Multisource Report-Level Simulator for Fusion Research. In Proceedings of Signal Processing, Sensor Fusion, and Target Recognition XII, SPIE Vol. 5096, 2003. http://carlotto.us/MJCtechPubs/pdfs/2003spieMRS.pdf [accessed March 2011]

[4] J. W. Joo, J. W. Choi, and D. L. Cho. Ground Target Identification Fusion System. In Proceedings of the Fifth International Conference on Information Fusion, Vol. 2, 2002. isif.org/fusion/proceedings/fusion02CD/pdffiles/papers/P006.pdf [accessed March 2011]

[5] FLAMES Simulation Framework. http://www.ternion.com/

[6] P. Hörling, V. Mojtahed, P. Svensson, and B. Spearing. Adapting a Commercial Simulation Framework to the Needs of Information Fusion Research. In Proceedings of the Fifth International Conference on Information Fusion, pp. 220-227, 2002. http://www.foi.se/fusion/fusion25.pdf [accessed March 2011]

[7] D. Nicholson, C. M. Lloyd, S. J. Julier, and J. K. Uhlmann. Scalable Distributed Data Fusion. Advanced Technology Centre, BAE Systems, Bristol, 2003. http://isif.org/fusion/proceedings/fusion02CD/pdffiles/papers/T2B03.pdf [accessed March 2011]

[8] Thomas Kreitmair and Diana Norgaard. Use of Simulations in Support of the Multi-Sensor Aerospace-Ground Joint ISR Interoperability Coalition (MAJIIC) Project. In Improving M&S Interoperability, Reuse and Efficiency in Support of Current and Future Forces, pp. 7-1 to 7-12, 2007. http://ftp.rta.nato.int/public//PubFullText/RTO/MP/RTO-MP-MSG-056///MP-MSG-056-07.pdf [accessed March 2011]

[11] VR-Forces. http://www.mak.com/products/vrforces.php

[18] Ogitor Scene Builder. http://www.ogitor.org/tiki-index.php

[19] OGRE, Open Source 3D Graphics Engine. http://www.ogre3d.org/

[20] Qt, a cross-platform application and UI framework. http://qt.nokia.com/products/

[21] PostgreSQL. http://www.postgresql.org/docs/current/static/index.html

[22] OpenSteer C++ library. http://opensteer.sourceforge.net/

[23] SensorML. http://www.opengeospatial.org/standards/sensorml

[24] M. Ditzel, S. P. van den Broek, P. Hanckmann, and M. van Iersel. DAFNE: A Distributed and Adaptive Fusion Engine. To be published in the Proceedings of the HAIS Conference 2011, Wroclaw, Poland, 2011.

[25] P. Pohanka. Architecture of Computing Systems and Data Processing in a Heterogeneous Network Environment. Written thesis for the doctoral state examination. Brno, Czech Republic, April 2010.

[26] J. Hrabovsky. Architecture of Information Systems and Data Processing in a Heterogeneous Network Environment. Written thesis for the doctoral state examination. Brno, Czech Republic, April 2010.