
ATLANTIS AMBIENT AND PERVASIVE INTELLIGENCE

VOLUME 1
SERIES EDITOR: ISMAIL KHALIL

Atlantis Ambient and Pervasive Intelligence


Series Editor:
Ismail Khalil, Linz, Austria
(ISSN: 1875-7669)

Aims and scope of the series


The book series Atlantis Ambient and Pervasive Intelligence publishes high-quality titles in the fields of Pervasive Computing, Mixed Reality, Wearable Computing, Location-Aware Computing, Ambient Interfaces, Tangible Interfaces, Smart Environments, Intelligent Interfaces, Software Agents and other related fields. We welcome submissions of book proposals from researchers worldwide who aim at sharing their results in this important research area.
All books in this series are co-published with World Scientific.
For more information on this series and our other book series, please visit our website at:
www.atlantis-press.com/publications/books

AMSTERDAM PARIS

© ATLANTIS PRESS / WORLD SCIENTIFIC




Agent-Based Ubiquitous
Computing

Eleni Mangina, Javier Carbó, José M. Molina


School of Computer Science and Informatics
University College Dublin, Dublin 4, Ireland
University Carlos III of Madrid,
Computer Science Department,
Applied Artificial Intelligence Group (GIAA),
Avda. Universidad Carlos III 22, 28270 Colmenarejo, Spain

AMSTERDAM PARIS

Atlantis Press
29, avenue Laumière
75019 Paris, France
For information on all Atlantis Press publications, visit our website at: www.atlantis-press.com
Copyright
This book, or any parts thereof, may not be reproduced for commercial purposes in any form or by
any means, electronic or mechanical, including photocopying, recording or any information storage
and retrieval system known or to be invented, without prior permission from the Publisher.

ISBN: 978-90-78677-10-9
ISSN: 1875-7669

e-ISBN: 978-94-91216-31-2

© 2009 ATLANTIS PRESS / WORLD SCIENTIFIC




To our children,
Patrick, Sofia, Jose, Javier and Maria.

Preface

Ubiquitous computing names the third wave in computing, which follows the personal computing era and in which technology recedes into the background of our lives. The widespread use of new mobile technology implementing wireless communications, such as personal digital assistants (PDAs) and smart phones, enables a new type of advanced application. In past years, the main focus of research in mobile services has been the anytime-anywhere principle (ubiquitous computing). However, there is more to it. The increasing demand for distributed problem solving led to the development of multi-agent systems. The latter are formed from a collection of independent software entities whose collective skills can be applied in complex and real-time domains. The target of such systems is to demonstrate how goal-directed, robust and optimal behavior can arise from interactions between individual autonomous intelligent software agents. These software entities exhibit characteristics like autonomy, responsiveness, pro-activeness and social ability. Their functionality and effectiveness have proven to be highly dependent on their design and development and on the application domain. In fact, in several cases, the design and development of effective services should take into account the characteristics of the context from which a service is requested. Context is the set of suitable environmental states and settings concerning a user which are relevant for a situation-sensitive application in the process of adapting the services and information offered to the user. Agent technology seems to be the right technology to offer the possibility of exploring the dynamic context of the user in order to provide added-value services or to execute more complex tasks. In this respect, agent-based ubiquitous computing can benefit from marrying agent technology, with its extensive use of distributed functionality, to lightweight devices, combining ubiquity and intelligence in different application areas and challenging the research communities in computer science, artificial intelligence and engineering.
We noticed during the AAMAS workshop we organized on this topic in 2007 that, although a number of books on ubiquitous computing have been published in recent years, none of them has focused on the agent-based perspective. We therefore opened a call for chapters to gather input and feedback concerning the above challenges, through a collection of high-quality contributions that reflect and advance the state of the art in agent-based ubiquitous application systems. It brought together researchers, agent-based software developers, users and practitioners involved in the area of agent-based ubiquitous systems, coming from many disciplines, with the aim of discussing the fundamental principles for the construction and design of agents for specific applications, how they cooperate and communicate, what tasks can be set and how different properties like coordination and communication have been implemented, and the different problems they had to cope with. Existing perspectives on ubiquitous agents within different application domains have been welcome, as well as the different mechanisms for design and cooperation that can be used within different agent-building environments. Specifically, the book focuses on the different disciplines contributing to the design, cooperation, coordination and implementation problems of ubiquitous computing applications and how these can be solved through the utilization of agents.
Thanks are due to all contributors and referees for their kind cooperation and enthusiasm, and to Zeger Karssen (Editorial, Atlantis Press) for his kind advice and help in publishing this volume.

E. Mangina, J. Carbó and J.M. Molina

Contents

Preface   vii

1. Solving Conflicts in Agent-Based Ubiquitous Computing Systems: A Proposal Based on Argumentation   1
Andrés Muñoz Ortega, Juan A. Botía Blaya, Félix J. García Clemente, Gregorio Martínez Pérez and Antonio F. Gómez Skarmeta
   1.1 Introduction   1
   1.2 Classification of authorization policies conflicts   2
   1.3 The basics of argumentation   3
   1.4 Using argumentation to resolve policy conflicts   7
   1.5 Related work   10
   1.6 Conclusions and future work   11
   1.7 Acknowledgments   11

2. Mixed Reality Agent (MiRA) Chameleons   13
Mauro Dragone, Thomas Holz, G.M.P. O'Hare and Michael J. O'Grady
   2.1 Introduction   13
   2.2 Social interface agents   15
   2.3 Ubiquitous robots   16
      2.3.1 Augmented HRI and immersive interfaces   17
   2.4 Ubiquitous agents   18
      2.4.1 Discussion   19
   2.5 Dynamic embodiment   20
      2.5.1 Agent chameleons   21
      2.5.2 Discussion   22
   2.6 MiRA chameleons   23
      2.6.1 Requirements   24
      2.6.2 The socially situated agent architecture (SoSAA)   25
      2.6.3 Implementation   27
      2.6.4 Testbed   30
      2.6.5 Discussion   32
   2.7 Conclusion   33

3. A Generic Architecture for Human-Aware Ambient Computing   35
Tibor Bosse, Mark Hoogendoorn, Michel C.A. Klein and Jan Treur
   3.1 Introduction   35
   3.2 Modelling approach   36
   3.3 Global structure of the agent-based generic model   37
   3.4 Generic ambient agent and world model   40
   3.5 Case study 1: An ambient driver support system   42
   3.6 Case study 2: Ambient aggression handling system   45
   3.7 Case study 3: Ambient system for management of medicine usage   48
   3.8 Specification and verification of dynamic properties   49
   3.9 Discussion   53
   3.10 Appendix 1: Driver case   54
      3.10.1 Driver assessment agent: Domain-specific temporal rules   54
      3.10.2 Cruise control agent: Domain-specific temporal rules   54
      3.10.3 Steering monitoring agent: Domain-specific temporal rules   54
      3.10.4 Steering sensoring agent: Domain-specific temporal rules   55
      3.10.5 Gaze-focus sensoring agent: Domain-specific temporal rules   55
      3.10.6 Alcohol-level monitoring agent: Domain-specific temporal rules   55
      3.10.7 Alcohol sensoring agent: Domain-specific temporal rules   56
      3.10.8 Driver: Domain-specific temporal rules   56
      3.10.9 Car and environment: Domain-specific temporal rules   56
   3.11 Appendix 2: Aggression handling case   56
      3.11.1 Sound analysis agent: Domain-specific temporal rules   56
      3.11.2 Microphone agent: Domain-specific temporal rules   57
      3.11.3 Persons in crowd: Domain-specific temporal rules   58
      3.11.4 Police officer at station: Domain-specific temporal rules   58
      3.11.5 Police officer at street: Domain-specific temporal rules   58
   3.12 Appendix 3: Medicine usage case   59
      3.12.1 Medicine box agent   59
      3.12.2 Usage support agent   60

4. e-Assistance Support by Intelligent Agents over MANETs   63
Eduardo Rodríguez, Juan C. Burguillo and Daniel A. Rodríguez
   4.1 Introduction   63
      4.1.1 Multi agent systems (MAS)   64
      4.1.2 Ubiquitous computing   66
      4.1.3 Case based reasoning   66
      4.1.4 Peer-to-peer   69
      4.1.5 Mobile ad-hoc networks   70
   4.2 System architecture   71
      4.2.1 Reasoning process   72
      4.2.2 Communication process   74
   4.3 A case of study: An intelligent gym   77
   4.4 Conclusions   83

5. The Active Metadata Framework   85
Christopher McCubbin, R. Scott Cost, John Cole, Nicholas Kratzmeier, Markus Dale and Daniel Bankman
   5.1 Introduction   85
      5.1.1 Background: Concepts   86
      5.1.2 Background: The active metadata concept   87
   5.2 SimAMF   89
      5.2.1 Motivation   89
      5.2.2 Related work   89
      5.2.3 Implementation   90
      5.2.4 Simulation visualization   91
      5.2.5 Experiments   92
   5.3 SWARM-AMF   93
      5.3.1 Background   94
      5.3.2 System design   95
      5.3.3 An experiment using some swarming metrics   97
      5.3.4 Experimental design   98
      5.3.5 Results   99
      5.3.6 Conclusions   99
   5.4 List of acronyms   101

6. Coalition of Surveillance Agents: Cooperative Fusion Improvement in Surveillance Systems   103
Federico Castanedo, Miguel A. Patricio, Jesús García and José M. Molina
   6.1 Introduction   103
   6.2 Related works   104
   6.3 Cooperative surveillance agents architecture   105
      6.3.1 Sensor and coalition layer   107
      6.3.2 Coalition protocol   108
   6.4 Information fusion for tracking during coalition maintenance   109
      6.4.1 Time-space alignment   110
      6.4.2 Map correction   110
   6.5 Experiments   111
   6.6 Conclusions and future work   114

7. Designing a Distributed Context-Aware Multi-Agent System   117
Virginia Fuentes, Nayat Sanchez-Pi, Javier Carbó and José M. Molina
   7.1 Introduction   117
   7.2 Context-aware multi-agent framework for heterogeneous domains   118
      7.2.1 Multi-agent architecture   119
   7.3 BDI model   120
      7.3.1 Beliefs   121
      7.3.2 Desires   121
      7.3.3 Intentions   122
   7.4 Gaia methodology   122
      7.4.1 Analysis phase   122
   7.5 Analysis and design using Gaia methodology   123
      7.5.1 The environmental model   124
      7.5.2 The organization structure   124
      7.5.3 Role model   124
      7.5.4 Interaction model   125
      7.5.5 Organizational rules   126
      7.5.6 Agent model   126
      7.5.7 Service model   126
   7.6 Conclusions   126

8. Agent-Based Context-Aware Service in a Smart Space   131
Wan-rong Jih and Jane Yung-jen Hsu
   8.1 Introduction   131
   8.2 Background technology   132
      8.2.1 Context models   132
      8.2.2 Context reasoning   133
   8.3 Smart space infrastructure   134
   8.4 Context-aware service platform   136
      8.4.1 Context-aware reasoning   137
      8.4.2 Service planning   138
      8.4.3 Context knowledge base   139
   8.5 Demonstration scenario   141
      8.5.1 Context-aware reasoning   142
      8.5.2 Service planning   143
   8.6 Related work   145
   8.7 Conclusion   146

9. Prototype for Optimizing Power Plant Operation   147
Christina Athanasopoulou and Vasilis Chatziathanasiou
   9.1 Introduction   147
   9.2 Problem domain   148
      9.2.1 Electricity generation units   148
      9.2.2 Knowledge engineering   149
   9.3 Architecture   150
      9.3.1 Agent programming paradigm   150
      9.3.2 Intelligent Power Plant engineer Assistant MAS (IPPAMAS)   151
   9.4 Implementation   157
      9.4.1 Data mining   157
      9.4.2 Multi-agent system   157
      9.4.3 Wireless transmission   158
      9.4.4 Profiles   159
   9.5 Evaluation   159
      9.5.1 MAS performance   159
      9.5.2 User evaluation   160
   9.6 Concluding remarks and future enhancements   161

10. IUMELA: Intelligent Ubiquitous Modular Education Learning Assistant in Third Level Education   163
Elaine McGovern, Bernard Roche, Rem Collier and Eleni Mangina
   10.1 Introduction   163
   10.2 Related work   164
      10.2.1 Multi-agent systems based learning technologies   164
      10.2.2 The mobile device   167
      10.2.3 Modular education at UCD   169
      10.2.4 Learning styles   170
      10.2.5 Teaching strategies   170
      10.2.6 Evaluation techniques   170
      10.2.7 Presenting modules for selection   171
   10.3 IUMELA: the agent architecture   172
      10.3.1 The assistant agent   173
      10.3.2 The moderator agent   173
      10.3.3 Expert agent technologies   174
   10.4 IUMELA student user interface   174
      10.4.1 Initial registration and login   174
      10.4.2 Personalised welcome screen   175
      10.4.3 Learning journal facility   177
      10.4.4 Student messaging   177
      10.4.5 The module and assistant facilities   178
   10.5 Evaluation   179
      10.5.1 ABITS FIPA messenger in IUMELA   180
   10.6 Discussion   181

Bibliography   183

Chapter 1

Solving Conflicts in Agent-Based Ubiquitous


Computing Systems:
A Proposal Based on Argumentation
Andrés Muñoz Ortega, Juan A. Botía Blaya, Félix J. García Clemente,
Gregorio Martínez Pérez and Antonio F. Gómez Skarmeta
Departamento de Ingeniería de la Información y las Comunicaciones, Facultad de
Informática, University of Murcia, Campus de Espinardo, s/n, 30.071 Murcia, Spain
{amunoz, juanbot, fgarcia, gregorio, skarmeta}@um.es

Abstract
Agent-based ubiquitous computing environments have many sources of complexity. One of the most important derives from the high number of agents which must survive in such systems. In systems with a high population of agents, conflicts are very frequent. Thus, there is a need for highly efficient techniques to solve conflicting situations when they are produced. In this chapter, we propose an approach based on argumentation to solve authorization conflicts. Authorization decisions are taken with authorization policies, and conflicts are solved by the agents themselves by arguing.
Keywords: Pervasive environments, argumentation, conflicts, authorization policies

1.1 Introduction
In pervasive environments, one of the main concerns must be to enable effective coordination mechanisms for devices, services, applications and users [Perich et al. (2004)]. Due to the high number of potential communicating entities in the system, situations in which a drop of productivity may occur are highly probable. More specifically, these situations are produced by different conflicts among entities. In consequence, this kind of system must be prepared to deal with conflicts. Conflicts may arise in coordination tasks within Multi-Agent Systems (MAS) because of conflicting goals or beliefs. When there is a conflict of this type, the mechanism for settling it goes through a negotiation process. Think, for example, of a ubiquitous system which is in charge of managing a building with many offices and humans. The latter are considered users of the system by means of handheld devices, smart phones or any laptop or PC, which are connected to the applications and services of the building through the network. When the system is based on autonomous agents, these probably have a partial and imprecise vision of the world they live in. If, for example, an emergency situation is produced by a fire in the building, the autonomous agents could be responsible for deciding how to manage the security of the building (e.g. preventing access by persons to unauthorized places)
and, at the same time, allowing a quick and effective evacuation. It might be the case that a user needs to get out of the building through a corridor whose access is forbidden to that user. How should the system proceed in this case to preserve the safety of the workers in the building?
Traditional mechanisms for dealing with such problems focus on avoiding conflict occurrence, for example social laws [Shoham and Tennenholtz (1995)], coordination through cooperation when agents are benevolent [Lesser (1999)] and truth maintenance systems [Doyle (1979)], among the most relevant. Social laws are difficult to implement in rather simple environments, and they seem non-applicable in complex environments like those of pervasive systems. Cooperation implies the assumption of benevolent agents. However, since MAS in pervasive scenarios are open systems, some agents may have private interests and, even when willing to cooperate, will pursue their own goals. Hence, it seems hard to define the dynamic cooperation plan needed here. Truth maintenance systems try to explain why conflicts happen by registering the reasons that derive the conflicting conclusions, but they are not able to reason about the conflict itself, i.e., the sources of the conflict are not taken into consideration. On the other hand, negotiation processes [Rosenschein and Zlotkin (1994)] have been shown to be useful in pervasive systems for enabling effective coordination once conflicts have appeared. This kind of negotiation is based on self-interested agents, as a pervasive system, in its general sense, is an open system where agents may not be willing to cooperate at all. In this approach, conflicts are taken as unavoidable; therefore some methods are needed to solve them. Argumentation-based negotiation processes [Parsons et al. (1998)] are a set of specific techniques that rely on argumentative theories in order to solve conflicts, by extending the negotiation proposals with the reasons or foundations associated to each proposal.
Argumentation relates to several fields in knowledge engineering [Carbogim et al. (2000)]. One of the outcomes of this extensive line of work is an abstract and generic framework for reasoning under incomplete and inconsistent information [Prakken and Sartor (1996)]. In this chapter, such a generic framework has been instantiated using the Semantic Web [Berners-Lee et al. (2001)] information representation approach, i.e. OWL ontologies. As a result, all the knowledge managed in the pervasive system, including arguments and rules themselves, is represented by means of Semantic Web ontologies. Hence, we are pursuing an innovative research line focused on automatically creating and attacking arguments during the argumentation process.
The main goal of this chapter is to take the first steps in applying an argumentation system to solve belief conflicts in pervasive environments. As a result, the argumentative approach is introduced into ubiquitous computing systems as a manner of solving the different types of conflicts that are found among autonomous entities. The normal security procedures of most working environments, such as the one mentioned above, are managed by a policy-based framework. In this kind of system, policies are used by the pervasive infrastructure to authorize software entities to perform critical actions, which directly influence the safety of workers when there is an emergency situation. These policies have the form of if-then rules and are manually defined by the system administrator.
The rest of the chapter is structured as follows. Section 1.2 introduces the kinds of conflicts that can be found in the context of distributed systems management through rule-based policies, more specifically through authorization policies. Section 1.3 gives some notions of argumentation theory at a basic level. In section 1.4, the use of the argumentative approach in ubiquitous computing systems is illustrated with a concrete example. Section 1.5 describes some of the most relevant works related to the ideas presented in this chapter. Finally, section 1.6 draws the most important conclusions of this chapter and outlines future work.

1.2 Classification of authorization policies conflicts


Conflicts [Lupu and Sloman (1999); Syukur et al. (2005)] raised when evaluating policy rules may be categorized into two different types, depending on whether or not the application domain is taken into account when evaluating the rules. In the first case, the conflicts obtained are called semantic conflicts, because they are generated by using information related to the current state of the system. They are really difficult to detect, and their appearance is conditioned by the dynamic state of the application domain. In the second case, we have syntactic conflicts, as they can be detected by simply looking at the rule structure. This type of conflict occurs irrespective of the state of the particular application domain and may be the result of specification errors in the policy. However, such conflicts may also be legitimately derived from other rules.
Focusing on authorization policies, modality conflicts are the only syntactic conflicts that can be detected from the rule syntax that defines these policies. Modality conflicts occur when a subject is both authorized and forbidden to perform the same activities on a target. Recalling the building scenario, an example of this type of conflict is observed when two different policies about a user (or role) simultaneously forbid and permit her to enter a specific room.
On the other hand, there are different types of semantic conflicts in authorization policies:

Conflict of Duties. This conflict is a failure to ensure separation of duties. The duties in this case are privileges on the same resource that are defined as conflicting according to the semantics of the application domain. An example is observed when an agent is authorized both to sound and to disable the same alarm. It should be noted that the conflict does not exist if the alarm that should be sounded is not the same as the one that should be disabled.

Conflict of Interests. A conflict of interests occurs because a subject has multiple interests that it must service, i.e. conflicting responsibilities. Such a conflict can arise as a result of various roles with conflicting responsibilities being assigned to the agent. An example is observed when an agent is authorized to sound the fire alarm and to activate the flood alarm, but according to the general rules of the system both privileges cannot be granted to the same agent.

Conflict of Multiple Managers. This occurs when the goals of two subjects are semantically incompatible. An example is observed when one agent is authorized to sound the alarm and another agent is authorized to disable the same alarm, and both privileges cannot be exercised at the same time.

Conflict of Self-Management. In this case, the subject has itself as target in the rule. Role assignment leads to the situation in which an agent effectively spans two or more levels of the authority hierarchy. An example is observed when an agent has the privilege to delegate privileges to itself.
Syntactic conflicts can be analyzed by simply considering the policies: modality conflicts can be detected from the syntax of the policy specification alone, by looking for inconsistencies. Semantic conflicts, however, require a good understanding of the semantics of the system, which is represented by system models. Successful detection of policy conflicts therefore places a number of requirements on the description of the system.
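As an illustration of how the syntactic case can be checked mechanically, the following sketch (ours, not part of the original chapter; all names and the policy encoding are assumptions) scans a set of authorization policies, written as subject/action/target triples with a permit flag, and reports modality conflicts, i.e. pairs that both permit and forbid the same activity:

# Illustrative sketch: syntactic detection of modality conflicts.
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Policy:
    subject: str   # user or role, e.g. "John"
    action: str    # e.g. "Open"
    target: str    # e.g. "DoorA"
    permit: bool   # True = authorization, False = prohibition

def modality_conflicts(policies):
    """Pairs of policies that permit and forbid the same activity."""
    return [(p, q) for p, q in combinations(policies, 2)
            if (p.subject, p.action, p.target) == (q.subject, q.action, q.target)
            and p.permit != q.permit]

rules = [Policy("John", "Open", "DoorA", True),    # e.g. from an evacuation rule
         Policy("John", "Open", "DoorA", False)]   # e.g. from a restricted-room rule
print(modality_conflicts(rules))   # one conflicting pair is reported

Semantic conflicts, by contrast, cannot be found this way: they additionally require a model of the application domain (e.g. knowing that the alarm to be sounded and the alarm to be disabled are the same one).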

1.3 The basics of argumentation


Broadly speaking, argumentation [Kraus et al. (1993); Parsons and Jennings (1996); Sycara (1990)] can be defined as a process that involves two or more autonomous entities (e.g. software agents) in which the entities try to persuade each other about the certainty (or falseness) of some beliefs. In conventional argumentation, beliefs and the derivations used to reach them (facts and rules from the application domain) are expressed using some logic language (e.g. first-order logic). The purpose of argumentation is to provide the necessary tools to resolve controversial situations among entities caused either by conflicting beliefs or by beliefs that could generate conflicting situations even though they are well-founded (e.g. agents' partial beliefs in a MAS). These situations are perfectly plausible in pervasive environments, in which agents have their own (partial and possibly imprecise) beliefs about the environment, themselves and the rest of the agents. Hence, several scenarios may exist in which different and conflicting agent opinions about a concrete issue arise.
Using a symbolic notation, the process of argumentation can be defined by means of the following expression:

Δ ⊢ (φ, G)

where Δ is a set of formulas representing the agent's knowledge base (a.k.a. theory), the pair (φ, G) is called the argument, ⊢ is a consequence relation operator, φ is a logical formula known as the conclusion, and G is the set of ground facts and rules from which φ is inferred, with G ⊆ Δ. Note that the specific meaning of ⊢ will depend on the underlying logic and the type of inference employed. In this work, the argument process is driven by deductive inference. As a result, ⊢ is equivalent to the modus ponens operator.

Example 1.1. Suppose that an agent owns this simple belief base: Δs = {a, a → b, b → c}. Then this agent could construct the argument (c, {a, a → b, b → c}), which concludes c by means of a ground fact, a, and two implications, a → b and b → c.
Hence, an argument consists of the conclusion of a reasoning process, together with the track of the reasoning process from the initial knowledge base to the resulting conclusion. More formally, using the notation proposed by Parsons in [Parsons et al. (1998)], an argument is defined as follows.

Definition 1.1 (Argument). An argument for a formula φ is a pair (φ, G) where G is a set of grounds for φ. Consequently, a set of grounds G for φ is an ordered set ⟨s1, . . . , sn⟩ such that the following three conditions hold:

(1) sn = Γn ⊢dn φ, where dn is the rule or axiom applied in the n-th step to deduce φ from Γn; and
(2) every si, i < n, is either a formula from Δ or si = Γi ⊢di φi; and
(3) every pj ∈ Γi is either an assumed formula in the knowledge base of the entity or a deduced one, φk, k < i.

Every si is called a step in the argument. Observe that the axiom dn associated with ⊢ corresponds to the modus ponens (mp) operator for all the arguments defined in this work, as previously explained.
Example 1.2. Applying Definition 1.1, the argument of Example 1.1 becomes

(c, ⟨s1, s2⟩),

where

s1 = {a, a → b} ⊢mp b,
s2 = {b, b → c} ⊢mp c.
Since the consequence relation axiom (dn) in this approach is determined to be the mp operator, it will be omitted henceforth in order to simplify the expressions. When an agent generates a conclusion which expresses some belief about the environment, other agents sharing the same application domain may then propose arguments supporting that formula or contradicting the whole or a part of it. There are systems which support contradictory beliefs about particular pieces of the application domain; examples of such systems may be found in paraconsistent logics [Priest (2002)]. However, in many real-life applications, an agreement about conflicting beliefs must be reached by all the agents involved in the system. That is when an argumentation process comes into play. Basically, all agents with conflicting beliefs must show their own arguments about those beliefs and commonly determine which one should be accepted by all.


One widely recognized approach to reaching such an agreement is based on establishing a measure of the strength of all the exposed arguments and accepting the strongest one. This is precisely the approach followed in [Parsons et al. (1998)]. In that work, the authors distinguish between several kinds of arguments and different argument strengths. Regarding argument definition, arguments are divided into two basic types: tautologies and non-trivial arguments.

Definition 1.2 (Tautology). Any argument that is obtained by using only a deductive mechanism, without formulae from a theory, is a tautology. These arguments will not be present in conflicts because, in principle, they cannot be in contradiction. They are true in any application domain and any situation (when considering only deductive inference, as in this case).

Definition 1.3 (Non-trivial argument). Any argument that is obtained by using a deductive mechanism from the theory and that does not contain any contradiction in it is a non-trivial argument.

Tautologies are also known as axioms, and they express universal statements that are accepted a priori. On the other hand, the most general class of non-trivial arguments are generated from the ground facts and rules that belong to the domain, with the condition that the steps si that form the argument cannot contradict each other.
Example 1.3. The argument UT = (a → (a ∨ b), ⟨s1⟩), where s1 = {} ⊢A2 a → (a ∨ b) (A2 stands for the second axiom of propositional logic, which is a → (a ∨ b)), is a tautology, since it only uses a deductive axiom. Instead, the argument in Example 1.2 is a non-trivial argument, using facts and implications from Δs. Finally, the argument Utrivial = (d, ⟨s1⟩), where s1 = {¬d, ¬d → d} ⊢ d, can be classified neither as a tautology nor as non-trivial, because s1 contradicts itself.
Before moving on to argument classification, it is necessary to explain the attacks that can be made on an argument. Basically, there are two types of attack on an argument U: rebut, when the conclusion of an argument T is in conflict with U's conclusion; and undercut, when T's conclusion is in conflict with any of U's partial conclusions (from steps si).

Definition 1.4 (Rebut). An argument T = (ψ, F) rebuts an argument U = (φ, G) iff the pair of conclusions (ψ, φ) is in conflict. The most intuitive conflict in propositional logic is (φ, ¬φ). Other, more complex conflicts can be defined depending on the semantics of the application domain (see section 1.2).

Definition 1.5 (Undercut). An argument T = (ψ, F) undercuts an argument U = (φ, G) iff any pair of conclusions (ψ, φi) is in conflict, where the φi are the partial conclusions from each si, i = 1..n−1, such that si ∈ G.

Note that the rebutting attack is symmetric, i.e., if argument T rebuts U, then U also rebuts T. This characteristic does not hold for the undercutting attack.
Example 1.4. Suppose two different agents, Ag1 and Ag2. Suppose that agent Ag1 has ΔAg1 = {a, a → b, b → c} as its belief base. Moreover, suppose that Ag1 builds the following argument (note that, for the sake of readability, we use the notation si,j to express the j-th step in the i-th argument):




U1 = (c, s1,1 , s1,2 ),
s1,1 = {a, a b}  b
s1,2 = {b, b c}  c
This argument is non-trivial. Now, suppose that another Ag2 appears with Ag2 = {a, a d, d
c, a e, e b} and it builds the following argument:


U2 = {b, s2,1 , s2,2 },
s2,1 = {a, a e}  e
s2,2 = {e, e b}  b
Clearly, U2 undercuts U1 since the conclusion of the former, b, attacks step s1,2 of U1 . Finally,
suppose that Ag2 builds another argument U3 :


U3 = {c, s3,1 , s3,2 },
s3,1 = {a, a d}  d
s3,2 = {d, d c}  c
In this case, U3 rebuts U1 , because of both conclusions are contradictories, (c, c).
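The two attack relations can be expressed compactly. In the sketch below (again ours and purely illustrative, with "~p" standing for ¬p), rebuts compares the two final conclusions, while undercuts compares T's conclusion against U's partial conclusions, i.e. all steps but the last, as in Definition 1.5:

# Illustrative sketch of the attack relations of Definitions 1.4-1.5.
def negates(p, q):
    return p == "~" + q or q == "~" + p

def rebuts(t, u):
    """T rebuts U iff their conclusions conflict (a symmetric relation)."""
    return negates(t[0], u[0])

def undercuts(t, u):
    """T undercuts U iff T's conclusion conflicts with a partial conclusion of U."""
    return any(negates(t[0], partial) for _, partial in u[1][:-1])

# The arguments of Example 1.4, written as (conclusion, steps):
u1 = ("c",  [({"a", ("a", "b")}, "b"), ({"b", ("b", "c")}, "c")])
u2 = ("~b", [({"a", ("a", "e")}, "e"), ({"e", ("e", "~b")}, "~b")])
u3 = ("~c", [({"a", ("a", "d")}, "d"), ({"d", ("d", "~c")}, "~c")])
print(undercuts(u2, u1), rebuts(u3, u1))  # True True, as in the example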
The argument classification mechanism is based on classes of acceptability following Definitions 1.2-1.5. This mechanism is important since it provides entities with a way of deciding how to react to an incoming new argument, by relating it to their own arguments and knowledge. There are five classes, ordered according to increasing acceptability or argument strength.

Definition 1.6 (Argument strength). The acceptability of any argument can be divided into classes A1 to A5 depending on its strength:

A1 is the class of all arguments that may be generated from Δ.
A2 is the class of all consistent (non-trivial) arguments that may be generated from Δ.
A3 is the class of all arguments that may be generated from Δ and that could be undercut but not rebutted with other arguments from Δ.
A4 is the class of all arguments that may be made from Δ and that could neither be undercut nor rebutted with other arguments from Δ.
A5 is the class of all tautological arguments that may be made from Δ.

Thus, the higher the class in which an argument is placed, the more acceptable it is, because it is less questionable. For example, if most of the arguments for a proposal are in level A4 and the arguments against do not reach A3, an entity will accept that proposal. Obviously, each entity assigns an acceptability class to an argument depending on its own knowledge.
Example 1.5 (Continuing from Example 1.4). Before receiving arguments from Ag2, agent Ag1 will internally classify U1 as an A4 argument, because it is a non-trivial argument and has no argument attacking it. Consequently, once Ag2 communicates U2, Ag1 will place the argument U1 in A3, while U2 will be classified into level A4 (U2 undercuts U1). Therefore, Ag1 will have to choose argument U2 as the more acceptable. Finally, when Ag1 receives U3, it will set both U3 and U1 into class A2, because they rebut each other.
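A rough sketch of how an agent might assign these classes to a single non-trivial argument, given the arguments it currently knows, is shown below (our simplification, reusing rebuts, undercuts and the arguments u1, u2, u3 from the previous listing; classes A1 and A5 are omitted for brevity):

def classify(arg, known):
    """Place a non-trivial argument into A2, A3 or A4 (simplified)."""
    others = [o for o in known if o is not arg]
    if any(rebuts(o, arg) for o in others):
        return "A2"   # consistent but rebutted
    if any(undercuts(o, arg) for o in others):
        return "A3"   # undercut but not rebutted
    return "A4"       # neither undercut nor rebutted

print(classify(u1, [u1]))           # A4: before any exchange, as Ag1 sees it
print(classify(u1, [u1, u2]))       # A3: once the undercutter U2 arrives
print(classify(u1, [u1, u2, u3]))   # A2: once the rebutter U3 arrives

The three calls reproduce the progression of Example 1.5.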
If several arguments are ranked at the same level, an agent will prefer its own arguments, assuming the rationality of the agent. It may be necessary to determine whether the argumentation process is resolvable or not, that is to say, whether an agent is able to generate an argument that dominates the rest. To this end, a persuasion dialogue needs to be defined, in which a set of protocols controls which utterances can be used at each point and when they are allowed to be divulged (i.e., under which conditions they are appropriate). Furthermore, the protocols regulate the effects of utterances (i.e., what changes they represent in the entity's internal model), identify the end of the dialogue and establish what the outcomes of the dialogue are (i.e., which argument is the most acceptable). Discussion of this topic is beyond the scope of this chapter, but the interested reader is referred to [Prakken (2006)] for an excellent review of persuasion dialogues.

1.4 Using argumentation to resolve policy conflicts


Once a basic introduction into argumentation has been reviewed, this section shows the initial proposal on how this argumentative technique could be applied to resolve policy conflicts as the ones
that were described in section 1.2.

Fig. 1.1 A fire emergency argued in a pervasive environment. (The figure shows the floor plan: two EXITs, the fire alarm and the fire, Door A, a presence detector, the user John in the corridor, and the areas of responsibility of AgentRoom and AgentFloor.)

Suppose the scenario in Figure 1.1. It represents a concrete part of a floor within a building. It has two main spaces: a corridor, running from the bottom to the right, and a room. Observe that there are two sensors, one detecting presence and another detecting possible fire situations. The area of responsibility of the two controller agents on the floor is also represented. AgentRoom is in charge of the security of the room, AdministrationRoom, through Door A, because there is critical equipment there, and AgentFloor is in charge of monitoring the whole set of sensors on the floor. Finally, there is a user, John, located in the corridor, identified by an RFID label and detected by AgentFloor thanks to the presence sensor. Suddenly, an emergency situation occurs: the floor is on fire. When AgentFloor detects the fire through the corresponding sensor, it activates an evacuation procedure by means of visual panels distributed throughout the building. The problem arises when John needs to get out, due to the fact that the only way he is authorized to take, the corridor, is on fire. The room is an alternative exit; however, it is locked, as John is not authorized to enter.
With the aim of modeling this particular scenario and the concepts related to authorization, the agents make use of Semantic Web ontologies [Berners-Lee et al. (2001)]. In particular, this work reuses part of the CIM (Common Information Model) information model. This model is a specification of the elements of interest in a distributed system for management purposes, and it is a standard of the DMTF (Distributed Management Task Force). We have redefined this model in order to use it in a Semantic Web based framework (i.e. by using OWL [Smith et al. (2004)] and SWRL [Garcia-Clemente et al. (2005)]). A partial view of the CIM ontology that describes the application domain can be seen in Figure 1.2. The main relations here (i.e. they are seen as predicates afterward) are AuthorizedSubject


Fig. 1.2 Partial view of the CIM ontology used by the agents to model the scenario. (Classes shown include ManagedElement, MemberOfCollection, AuthorizedSubject, AuthorizedTarget, ProcessIndication, IdentityContext, Collection, Role, Privilege, Identity, Door, AuthorizedPrivilege and AlertIndication.)

and AuthorizedTarget, which refer, respectively, to the entity or person that is authorized to do something and to the entity over which the authorized person will perform the authorized action (for more information on this topic, please see [Martinez et al. (2006)]). Elements of this ontology will be used in the rest of the section as part of the example.
In order to exchange arguments, the agents use the persuasive dialogue framework described in [Amgoud and Parsons (2002)]. In particular, the agents employ the utterances assert(U) and accept(U), where U is an argument, to express a tentative argument that has been locally derived and to accept it, respectively. Moreover, the utterance request(U) allows an agent to ask for the performance of some action that satisfies the intention expressed in U.
Next follows the development of the proposed scenario. Suppose that both AgentFloor and AgentRoom are BDI agents, and that AgentFloor owns the following beliefs:

ΔAF = {type(DoorA, Door), type(John, Identity), IdentityContext(John, DoorA),
type(Open, AuthorizedPrivilege), type(Student, Role), MemberOfCollection(John, Student),
type(FireAlarm, AlertIndication), ProbableCause(FireAlarm, FireDetected)}
Thus, AgentFloor believes that the student John is near Door A (represented by the IdentityContext relation) and also that a fire has been detected on the floor. Moreover, AgentFloor uses the following rule Open_Door:

Open_Door: [type(?x, Identity), type(?d, Door), IdentityContext(?x, ?d)]
→ [AuthorizedTarget(Open, ?d), AuthorizedSubject(Open, ?x)],

which states that any person (the authorized subject, represented by the variable ?x) next to a door (the target, represented by ?d) is given the authorization to open it. Using all this knowledge, AgentFloor builds the argument

U1 = ([AuthorizedTarget(Open, DoorA), AuthorizedSubject(Open, John)], ⟨s1,1⟩),

where

s1,1 = {type(John, Identity), type(DoorA, Door), IdentityContext(John, DoorA), Open_Door}
⊢ AuthorizedTarget(Open, DoorA), AuthorizedSubject(Open, John)


Note that the rule Open_Door is utilized in the argumentation step s1,1. Although it is only indicated by its name, the complete definition of the rule is actually contained in the argument (and analogously for the rest of the rules in the arguments below). Since AgentFloor knows that there is an emergency situation on the floor, its argument is sent to all nearby agents to make them aware of the obligation to open Door A, by means of the communicative act assert(U1). Therefore, argument U1 is received by AgentRoom and then added to its knowledge base. On the other hand, AgentRoom has the following beliefs:

ΔAR = {type(DoorA, Door), type(John, Identity), IdentityContext(John, DoorA),
type(Open, AuthorizedPrivilege), type(Student, Role), type(Admin, Role),
MemberOfCollection(John, Student)}
and the rules Rest_Door and Auth_Door:

Rest_Door: [type(?x, Identity), MemberOfCollection(?x, Student),
type(?d, Door), IdentityContext(?x, ?d)]
→ [AuthorizedTarget(Open, ?d), ¬AuthorizedSubject(Open, ?x)],

Auth_Door: [type(?x, Identity), MemberOfCollection(?x, Admin),
type(?d, Door), IdentityContext(?x, ?d)]
→ [AuthorizedTarget(Open, ?d), AuthorizedSubject(Open, ?x)],

meaning that students are not authorized to open Door A (rule Rest_Door), since the room it opens into is a restricted room, only to be used by administrators (rule Auth_Door). Consequently, AgentRoom manages to bring up an argument that denies opening the door:
 
U2 = ([AuthorizedTarget(Open, DoorA), ¬AuthorizedSubject(Open, John)], ⟨s2,1⟩),

where

s2,1 = {type(John, Identity), MemberOfCollection(John, Student), type(DoorA, Door),
IdentityContext(John, DoorA), Rest_Door}
⊢ AuthorizedTarget(Open, DoorA), ¬AuthorizedSubject(Open, John)
By Definition 1.4, both arguments rebut each other; therefore AgentRoom classifies them into level A2, and a rational agent prefers its own arguments over others' arguments. Hence, AgentRoom will not open Door A. Afterward, AgentRoom sends its argument with assert(U2) to AgentFloor, and the latter then classifies both arguments in the same manner. At this point, a conflicting situation has been reached, in which AgentFloor urges John to pass through the door while AgentRoom does not authorize that action, keeping the door closed.

The simple approach of using the different classes of arguments given in Definition 1.6 does not suffice to solve the conflict posed here by arguments U1 and U2. However, argumentation can be seen inside, or as part of, a negotiation process in which agents make offers (i.e. AgentFloor sends U1 to AgentRoom) and the critics of the offers (i.e. AgentRoom) send their counterarguments to the proponent (argument U2).


Now, suppose that AgentFloor also owns the following rule:

RuleFire: [type(?x, Identity), type(?d, Door), type(?p, AuthorizedPrivilege),
AuthorizedTarget(?p, ?d), type(?a, AlertIndication),
ProbableCause(?a, FireDetected)]
→ remove(¬AuthorizedSubject(?p, ?x)),

that is to say, if a fire is detected through the alert ?a, any subject ?x must be allowed to perform any action that requires privileges. Thus, when AgentFloor receives argument U2, it starts generating new arguments that provide an alternative justification for opening the door. First, it tries to attack the conclusions of argument U2. The agent realizes that the fact ¬AuthorizedSubject(Open, John) can be attacked by the rule RuleFire. Note that if none of the conclusions could be defeated, the agent might instead attack the use of the ground facts in s2,1, or the rule Rest_Door itself by preventing its application when an emergency is detected. Now, AgentFloor builds the argument

U3 = (remove(¬AuthorizedSubject(Open, John)), ⟨s3,1⟩),

where

s3,1 = {type(John, Identity), type(DoorA, Door), type(Open, AuthorizedPrivilege),
AuthorizedTarget(Open, DoorA), type(FireAlarm, AlertIndication),
ProbableCause(FireAlarm, FireDetected), RuleFire}
⊢ remove(¬AuthorizedSubject(Open, John))
Since AgentFloor has the intention that AgentRoom remove part of its knowledge base, it uses a negotiation communicative act, request(U3). AgentRoom receives argument U3 and tries to attack it. Neither the ground facts can be defeated (the types of the elements are the same for both agents, and ProbableCause(FireAlarm, FireDetected) is a new fact that AgentRoom might assume directly, or check with other agents), nor can the rule RuleFire. As a result, AgentRoom no longer believes ¬AuthorizedSubject(Open, John). Thus, argument U3 is placed into class A4 and AgentRoom consents to open Door A. U1 is also accepted by this agent, which is the eventual reason why the door finally opens. As a result, AgentRoom communicates the acceptance of U1 by means of the utterance accept(U1).
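The overall exchange of this section can be summarized as a short control-flow sketch (ours; predicates are plain strings and the knowledge base a plain set, abstracting away the OWL/SWRL machinery actually used by the agents):

# Illustrative trace of the dialogue: assert / assert / request / accept.
AUTH = "AuthorizedSubject(Open, John)"
DENY = "~" + AUTH

agent_room_kb = {DENY}            # derived locally via Rest_Door

def agent_room_on_assert(u1):
    # U1 and the local counterargument U2 rebut each other (both class A2),
    # so AgentRoom keeps its own conclusion and replies with assert(U2).
    return "assert(U2)" if DENY in agent_room_kb else "accept(U1)"

def agent_room_on_request(u3):
    # U3 can be neither undercut nor rebutted from AgentRoom's beliefs,
    # so it is ranked A4: the prohibition is retracted and U1 accepted.
    agent_room_kb.discard(DENY)
    return "accept(U1)"

print(agent_room_on_assert("U1"))                 # assert(U2): the conflict surfaces
print(agent_room_on_request(f"remove({DENY})"))   # accept(U1): Door A opens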

1.5 Related work


Different methodologies can be adopted for solving conflicts in real-world multi-agent systems. One of them resides in adopting fixed synthesis criteria [Malheiro and Oliveira (2000)]. This line is based on the assignment of a unique belief status to every shared proposition through the application of a conjunctive synthesis criterion: a shared proposition is believed if it is believed by every agent in the system. The conflicts are detected by a special agent, which is responsible for managing the system meta-knowledge and establishing the priorities among conflicts. However, the criterion for solving conflicts in this approach is static, and it does not seem to fit well in a pervasive environment, where the criterion for setting priorities is dynamic and no special agent can be assumed to exist.

On the other hand, a dynamic approach such as argumentation has been demonstrated to be an efficient and useful mechanism for solving conflicts in open environments. Several proposals of argumentation frameworks can be found in the related literature. Dung [Dung (1995)] develops an argumentation


theory that is mainly concerned with the acceptability of arguments. His approach is based on a special form of logic programming with negation as failure, where the attacking relation of arguments is defined using a non-monotonic inference process to derive the fact ¬p from failure to derive p. However, the attack of one argument on another is assumed to succeed. This is not always applicable: sometimes it is necessary to decide which argument is more acceptable according to different parameters. In contrast, Kakas [Kakas and Moraitis (2003)] uses a framework of logic programming without negation as failure; thus the attacking relation is realized through monotonic proofs of contrary conclusions and a priority relation on the sentences of the theory that make up these proofs. Our line is more aligned with the latter, since the priority relations can be dynamically changed in order to adapt the agents' behavior to changing environments. In [Prakken and Sartor (1996)], a framework for assessing conflicting arguments is presented. In this case, negation can be both weak and explicit. Arguments that contain weak negation are said to be defeasible, as they can be defeated by other arguments attacking this type of negation. Conflicts between arguments are resolved according to priorities on the rules. Priorities themselves are derived as argumentative conclusions, so they are also defeasible, which in turn means that they are not fixed. Arguments are justified dialectically, i.e., through a dialogue between a proponent and an opponent. We also work in this line, adding a clearer structure to the arguments. Another interesting framework of dialogue is [Amgoud and Parsons (2002)], where the agents' preferences are taken into account in the argumentation process.

1.6 Conclusions and future work


Some initial ideas on using argumentation in negotiation have been presented in this chapter. The aim of this approach is to offer argumentation as a technique for solving the potential conflicts that may appear in ubiquitous environments. These environments are composed of many kinds of entities, among which autonomous agents present particular challenges, because they hold partial and imprecise beliefs that lead to conflicts of goals or opinions. In the particular approach introduced here, information on the application domain is modeled using an ontology based on the Semantic Web language OWL, whereas domain policies are expressed by means of rules defined with SWRL, an abstract rule language that extends OWL's expressivity. As a result, knowledge management in this approach consists of representing a domain model through an ontology, which takes into account the particularities of users, agents, common and individual spaces, devices and so on. Decision mechanisms are specified by means of policies, and these policies are defined with if-then rules. In order to solve the different types of potential conflicts that could arise from these policies, an argumentation process is launched within a persuasion dialogue system.
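As an illustration of the kind of if-then policy involved, the following hypothetical rule is written in SWRL's human-readable syntax; every class and property name here (SharedSpace, operatesIn, and so on) is invented for the example and does not come from the chapter's actual ontology:

    // Hypothetical SWRL policy: devices in a shared room where a
    // meeting is in progress must operate silently. Stored as a Java
    // string for convenience; all names are illustrative only.
    final class PolicyExamples {
        static final String MEETING_POLICY =
            "User(?u) ^ locatedIn(?u, ?room) ^ SharedSpace(?room) ^ "
          + "Device(?d) ^ locatedIn(?d, ?room) ^ MeetingInProgress(?room) "
          + "-> operatesIn(?d, SilentMode)";
    }

Two such policies, e.g. one user's preference for loudspeaker audio against another user's silent-mode rule, are exactly the kind of conflict the argumentation process is meant to resolve.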
Although the approach still requires substantial work, we believe the idea is very promising, as it will enable conflicts to be solved dynamically and will also allow model checking in the early stages of system development. There are many open issues. One of the most important pending tasks is to design a valid mechanism that allows complex decision making when the acceptability classes A1 to A5 are not enough. Other questions to be solved are the complexity of the approach and the completeness and consistency of the solutions it provides. Further interesting topics are coordination and communication issues, as well as the argumentation strategies to be employed by the autonomous agents.

1.7 Acknowledgments
This work has been supported by the Spanish Ministerio de Ciencia e Innovación (MICINN) under grant AP2006-4154 within the framework of the FPU Program. Thanks also go to the Funding Program for Research Groups of Excellence, granted by the Seneca Foundation under code 04552/GERM/06. Finally, we also acknowledge the Spanish Ministry of Education and Science through Research Project TIN-2005-08501-C03-02.

Chapter 2

Mixed Reality Agent (MiRA) Chameleons

Mauro Dragone, Thomas Holz, G.M.P. O'Hare and Michael J. O'Grady


CLARITY Centre for Sensor Web Technologies, School of Computer Science &
Informatics, University College Dublin, Belfield, Dublin 4, Ireland
{mauro.dragone, thomas.holz, gregory.ohare, michael.j.ogrady}@ucd.ie

Abstract
Human-Robot Interaction poses significant research challenges. Recent research suggests that personalisation and individualisation are key factors for establishing lifelong human-robot relationships. This raises difficulties as roboticists seek to incorporate robots into a digital society in which an increasing amount of human activity relies on digital technologies and ubiquitous infrastructures. In essence, a robot may be perceived as either an embedded or a mobile artefact in an arbitrary environment, one that must be interacted with in a seamless and intuitive fashion. This chapter explores some of the alternative ways of achieving these objectives and the associated design issues. Specifically, it describes a new system, which we call Mixed Reality Agent (MiRA) Chameleon, that combines the latest advances in agent-based ubiquitous architectures with mixed reality technology to deliver personalised and ubiquitous robot agents.

2.1 Introduction
The diffusion of robotic platforms into our daily lives involves many new design challenges. The
fact that we are all individuals influences our expectations and design requirements for those tools
and systems with which we interact. Robots are no different in this respect. They need to operate
in our daily life environments, such as hospitals, exhibitions and museums, welfare facilities and
households. Not only will these robots have to deal with various complicated tasks, but they are
also expected to behave in a socially intelligent and individualised manner to meet the diverse requirements of each user [Dautenhahn (1998)]. For these reasons, personalised, custom-made robotic
design is one of the technology strategies advocated by the Japan Robot Association (JARA) for
creating a Robot Society in the 21st Century [JARA (2008)].
The other challenge faced by today's roboticists is the integration of robots into the digital society, as an ever-growing amount of human activity relies on digital technology. Trends such as inexpensive Internet access and the diffusion of wireless computing devices have made ubiquitous or
pervasive computing a practical reality that augments the normal physical environment and supports
the delivery of services to human users anytime and anywhere. Endowing these ubiquitous devices
with intelligent behaviour, and thus creating intelligent environments, is termed Ambient Intelligence
(AmI) [Aarts (2004)].
However, while many robotic initiatives now share the thesis that robots are a compelling instance of the artefacts that comprise and deliver the ambient space, reconciling the personalisation/social aspect with pervasiveness and ubiquity, e.g. through integration with existing ubiquitous infrastructures, still remains a largely unexplored area of research. On both fronts, user interface agents, e.g. acting as virtual assistants to their users, have already been widely adopted as intelligent, adaptive social interfaces to the digital world, e.g. in the form of virtual characters interacting with the user via PCs, PDAs, and the Internet. As such, the experience accumulated in these application domains can be purposefully used to inform robotics research. While in the past software agents and robots have usually been attributed to distinct domains, software and hardware respectively, the modern conception is, in fact, to consider them as particular instances of the same notion of agent: an autonomous entity capable of reactive and pro-active behaviour in the environment it inhabits.
In addition, one of the foremost aids to the design of both modern robot and ubiquitous/pervasive systems comes from developments in agent-oriented software engineering (AOSE). AOSE promotes the design and development of applications in terms of multiple autonomous software agents, exploiting their characteristic ability to deliver context-sensitive, adaptive solutions. As such, multiagent techniques, related software engineering methodologies, and development tools are natural candidates for the implementation of these systems.
The work described in this chapter is representative of the kind of multi-disciplinary effort characterising today's agent research. It describes the MiRA (Mixed Reality Agent) Chameleons project (Figure 2.1), which leverages past experience and agent-related technologies to build socially competent robot agents by combining physical robotic bodies with virtual characters displayed via Mixed Reality (MR) visualisation. Through such an innovative integration approach, a MiRA Chameleon exhibits tangible physical presence while offering rich expressional capabilities and personalisation features that are complex and expensive to realise with purely hardware-based solutions. Crucially, thanks to its AOSE methodology, it also promises to maintain all the expressivity of the MR medium without losing the tremendous opportunities for ubiquity and adaptation associated with its virtual component.

Fig. 2.1 Live demonstration of MiRA Chameleons at the UCD Open Day, December 2007 (actual images displayed in the user's head-mounted display during live experiments and user trials with our application).
Before proceeding with the rest of the chapter, though, it is necessary to reflect further on the
intelligent agent paradigm.

2.2 Social interface agents


AmI was conceived in response to the observation that many embedded artefacts, all competing for the user's attention, would result in environments that were uninhabitable. Agent-based intelligent user interfaces are seen as a key enabling technology for AmI, and as a solution to this interaction issue.
Much of the effectiveness of these interfaces stems from their deliberate exploitation of the fact that end-users consistently assume, if only on a subconscious level, an intentional stance toward computer systems [Friedman (1995)].
Affective Computing is a branch of HCI that aims to endow computer systems with emotional
capabilities (for example, to express, recognize, or have emotions) in order to make them more lifelike and better adapted to interact with humans. The natural and more recent evolution of intelligent
agents has resulted in the social agent paradigm, that is, the exploitation of social and psychological
insights on human social behaviour to create agents that are capable of acting as more equal social
partners, while also developing and maintaining long-term social relationships with humans. The
investigation of this type of human-agent social interaction is becoming increasingly important, as
both software agents and robots are gradually populating the human social space.
Fong et al. [Fong et al. (2003)] use the term socially interactive robots to describe robots for
which social interaction plays a key role. For Fong et al. these robots are important in domains where
robots must exhibit peer-to-peer interaction skills, either because such skills are required for solving
specific tasks, or because the primary function of the robot is to interact socially with people, e.g. as
in robot companions.
Among the requirements for some of these applications is that of being persuasive [Fogg (1999)], that is, changing the behaviour and attitudes of the humans they interact with, and also being capable of building and maintaining long-term relationships with them. The acceptance and success of such a system, and its utility in real-world applications, depend upon the robot's capacity to maintain relationships based upon believability, credibility and trust. In addition to improving their effectiveness in working with humans, robots can also harness social interaction for learning practical skills, for example, through imitation [Billard and Dautenhahn (1999)].
While all these application scenarios suggest different degrees of autonomy and intelligence (skills like mobility, grasping, vacuuming and object recognition), they also open up the common question of what specific capabilities are necessary for a robot to interact socially with humans and what general issues such a new social context imposes. Crucially, the same objectives are addressed in software-only domains, for example, in supporting the development of virtual characters that assist HCI as game opponents or personal assistants (see [Bartneck (2002)] for a survey).
Human-like virtual characters (virtual humans) are being used with success as virtual representatives of human users in virtual conference applications [Slater et al. (1999)], or as fully autonomous agents inhabiting virtual worlds to enhance the user's experience and ease their interaction with the virtual world. Such characters can make the interaction more engaging and make the user pay more attention to the system, e.g. in educational and training applications [Chittaro et al. (2005)]. A much appreciated feature in the latter type of application is that virtual humans can provide pedagogical assistance tailored to the needs and preferences of the learner [Baylor et al. (2006)].
Studies focusing on how the appearance of virtual characters can affect cooperation, attitude change, and user motivation [Rickenberg and Reeves (2000); Zanbaka et al. (2007)] indicate that humans treat them as social partners and, in particular, that many of the rules that apply to human-human interaction carry over to human-agent interaction.

The result is that, despite the technical and methodological differences between the robotic and software/virtual domains, a large number of the issues behind the construction of successful social agents today cross the boundaries of agent species.
What distinguishes all the research in socially intelligent agents is the emphasis given to the role of the human as a social interaction partner of artificial agents and, subsequently, the relevance attributed to aspects of human-style social intelligence in informing and shaping such interactions. The consensus in social agent research is that effective human-agent interaction leverages, to a great extent, the establishment of a human-style social relationship between human and agent.
Effective social interaction ultimately depends upon the recognition of other points of view and the understanding of others' intentions and emotions [Dautenhahn (1997)]. Humans convey this information through explicit communication (natural language) as well as through an array of non-verbal cues [Kidd and Breazeal (2005)], such as tactile interaction, postures, gestures, and facial expressions. The latter are used both to convey emotions and as social signals of intent, e.g. for regulating the flow of dialogue, while gestures are commonly used to clarify speech. Deictic spatial gestures (e.g. gaze, pointing), in particular, can compactly convey geometric information (e.g. location, direction) while also being a major component of joint visual attention between humans [Kidd and Breazeal (2005)].

2.3 Ubiquitous robots


Having reflected on how agents offer a suitable paradigm for realising intelligent user interfaces in general, the problem of effecting an intelligent social interface paradigm in ubiquitous spaces can now be considered. In particular, the questions of integrating robots into ubiquitous spaces and of enabling effective Human-Robot Interaction (HRI) are explored.
Ubiquitous robots, that is, robots embedded in ubiquitous and intelligent environments, are the most recent class of networked robot applications, motivated by the increasing interest raised by ubiquitous computing scenarios. The role of robots in this context is twofold: first, toward the users, the robot is seen as one of the many arms of the environment, providing them with the services they need, anytime and anywhere [Ha et al. (2005)]. Second, from the robot's perspective, this also implies the augmentation of the robot's capabilities, which are extended with the services provided by the ubiquitous environment but also by virtue of sharing an information network with the user.
The Ubiquitous Robotic Companion (URC) project [Ha et al. (2005)] provided one of the first examples of mass deployment, with roughly one thousand URCs distributed to households in and around Seoul, where they guarded the home, cleaned rooms and read to the children. Other notable projects include the UbiBot [Kim (2005)] system, developed at the Robot Intelligence Laboratory, KAIST, Republic of Korea, the Agent Chameleons project [Duffy et al. (2003)], developed at University College Dublin (UCD), Ireland, and the PlantCare system [LaMarca et al. (2002)], developed by IBM's autonomic computing group. Both NEC's robot PaPeRo™ (Partner-Type Personal Robot) [Fujita (2002)] and the iCat robot [van Breemen et al. (2005)], developed by Philips Research, are good examples of robots designed to act as an intelligent interface between users and network appliances deployed in ubiquitous home environments. They can, for instance, check the user's e-mail, tune the TV to the user's favourite channel, and access the Internet to retrieve stories to narrate to children.
In its child-care version, PaPeRo also presents some social attributes thanks to the advanced interaction abilities enabled by an array of sensors, including touch sensors, sonar, directional microphones, and cameras. With these sensors, PaPeRo can act as a teacher by locating and identifying its child students, taking attendance, imparting lessons, and quizzing them.
The iCat robot, a research prototype of an emotionally intelligent robot that can provide an easy-to-use and enjoyable interaction style in ambient intelligence environments, is also used to investigate the social interaction aspect. In particular, iCat can communicate information encoded as coloured light through multi-colour LEDs in its feet and ears, speak natural language, and use facial expressions to give emotional feedback to the user. One of the key research questions investigated within this project is whether these expressions and capabilities can be organised to give the robot a personality, and which type of personality would be most appropriate during interaction with users.
These studies have shown that mechanically-rendered emotions and behaviours can have a significant effect on the way users perceive and interact with robots. Moreover, they have also shown that users prefer interacting with a socially intelligent robot, for a range of applications, over more conventional interaction means.

2.3.1 Augmented HRI and immersive interfaces


Much of the work in ubiquitous robotics focuses on control issues, e.g. by explicitly addressing the interoperability between heterogeneous hardware and software components, and on augmenting traditional HRI capabilities by leveraging wireless networking and wearable interfaces. For instance, wearable RFID tags [Ha et al. (2005); Herianto et al. (2007)] have been proposed in conjunction with ubiquitous robotic systems to aid the detection and location of users in the environment: these devices are sensed by the ubiquitous infrastructure, and the relevant information stored in it (user identity, location, and other information) is communicated to the robots. Notably, such an approach can substitute conventional (and harder to realise) robot-centric perception, as even simple robots (e.g. with no camera) can then effectively recognise and track the humans in their environment.
Direct wireless communication between robots and wearable setups can also be used in this sense, for instance to overcome the limitations of today's speech recognition systems. Users of PaPeRo [Fujita (2002)], for example, employ wireless microphones to cancel the impact of changing background noise, while in [J. and A. (2002)] speech recognition is carried out on the wearable computer, thus overcoming the robot's difficulty in recognising the voices of different users without training.
A particular approach is to use these techniques in conjunction with immersive interfaces. Immersive interfaces are the natural complement to ubiquitous spaces, as they enable the visualisation of virtual elements in the same context as the real world.
The first immersive interfaces to virtual environments were invented by Sutherland [Sutherland (1998)], who built a Head-Mounted Display (HMD), a goggles-like display device that, when worn by the user, replaces the user's normal vision of the world with computer-generated (stereoscopic) 3D imagery. By also availing of positioning devices (e.g. magnetic) to track the 3D pose of the HMD, it is possible to modify the rendering perspective of the computer-generated imagery to match the movement of the user's gaze and, consequently, to give the user the impression of being immersed in a 3D graphical environment.
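As a small worked example of this principle, the sketch below (illustrative Java, not tied to any particular tracking system) derives the view transform applied to all virtual geometry by inverting the rigid head pose reported by the tracker:

    // A rigid head pose is a rotation R (row-major 3x3) and a
    // translation t; its inverse gives the view transform:
    // R_view = R^T and t_view = -R^T t.
    final class ViewFromHeadPose {
        static double[] view(double[] R, double[] t) {
            double[] out = new double[12];   // {R_view (9) | t_view (3)}
            for (int r = 0; r < 3; r++)
                for (int c = 0; c < 3; c++)
                    out[3 * r + c] = R[3 * c + r];          // transpose
            for (int r = 0; r < 3; r++)
                out[9 + r] = -(out[3 * r] * t[0]
                             + out[3 * r + 1] * t[1]
                             + out[3 * r + 2] * t[2]);      // -R^T t
            return out;
        }
    }

Re-evaluating this transform each time the tracker reports a new pose is what keeps the computer-generated imagery locked to the user's gaze.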
More generally, by using see-through HMDs and availing of AR technology, human users can be immersed in a shared space obtained by superimposing virtual elements rendered in the same perspective as the real scene. This form of supplementing the physical space with digital information and augmenting interaction capabilities is what Young & Sharlin call an MR Integrated Environment (MRIE) [Young and Sharlin (2006)].
Traditionally, these techniques have been used in robotics in the context of tele-presence systems
to enhance the users understanding of the remote environment in tele-robotic systems (e.g. [Milgram
et al. (1995)]). More recently, however, there have been a few applications involving humans working
side by side with mobile robots.
For [Collett and MacDonald (2006)], who implemented an AR visualisation toolkit for the robot device server Player [Gerkey et al. (2003)], AR provides an ideal presentation of the robot's worldview, which can be very effective in supporting the development of robot systems. Specifically, by viewing the data (e.g. sensor data such as sonar and laser scans) in context with the real world, the developer is able to compare the robot's world view against the ground truth of the real-world image, thus promoting a better understanding and consequent evaluation of the inner workings of the robot control system. As such, AR/MR visualisation can also be used as a natural complement to hardware-in-the-loop simulations.
Stilman et al. [Stilman et al. (2005)], for example, instrumented a laboratory space with a motion capture system that tracks retro-reflective markers on robot bodies and heads and on other objects such as chairs and tables. By combining this ground truth with virtual artefacts, such as simulated robots or simulated objects, a hybrid real/virtual space is constructed that can be used to support simulated sensing. Through HMDs, researchers can visualise real or simulated robots during walking tests among both real and virtual obstacles. In addition, they can also avail of virtual artefacts, such as the visualisation of the robot's intended footsteps on the floor, to aid their judgment of the robot's sensing and path-planning capabilities.
These applications are often realised with mobile AR systems [Feiner et al. (1997)], which enable the combination of AR immersive visualisation and user mobility. Compared to stationary computer systems, portable solutions enable human operators to understand and evaluate the state and intentions of a robot system while moving around in the same environment the robot is operating in.
Giesler [Giesler et al. (2004)] developed a mobile AR system for the control of an autonomous mobile platform. The system combines robot self-localisation with the localisation of the HMD-wearing user in the same frame of reference. The position and orientation of the user's HMD are found by means of multiple fiducials distributed in the working area, which are recognised and tracked through the ARToolkit software library [Kato and Billinghurst (1999)]. The user is also equipped with a cursor, in the form of a pen terminating in a cube formed by ARToolkit fiducials. Since the orientation of the cube is visible from every angle, the user can point to imaginary spots on the floor, just like moving a mouse on a desktop computer. Giesler's application enables the rapid prototyping and manipulation of topological maps that are interactively defined in the working environment. Consequently, the user can instruct the robot with commands such as "go to this node".
Finally, Daily et al. [Daily et al. (2003)] describe an AR system for enhancing HRI with robotic swarms, e.g. in search and rescue scenarios. Their system is highly specialised toward a very interesting but narrow class of applications (search and rescue), as the particular solution adopted for tracking and data communication (optic-based, with no networking) suffers from obvious bandwidth limitations. However, their application is very interesting, as it shows how AR can enhance a robot's interaction capabilities, in this case by allowing each robot in the swarm to point in the direction of a victim without the need for a physical component (e.g. an arm).

2.4 Ubiquitous agents


Having reviewed the major issues confronted by projects involved in the integration of robots within ubiquitous environments, we can now look at what has been done on the HCI front. In particular, this section focuses on works that have tried to deploy social interface agents in ubiquitous settings.
The availability of agent platforms for lightweight and embedded devices such as PDAs and cellular phones has already provided an opportunity for the deployment of a large number of interface agent applications in the mobile and ubiquitous sector [Hristova et al. (2003)]. Often, the effectiveness of these systems depends on the ability of software agents to take into account the state of the physical environment in which their components are deployed, as well as the state of their human users. Furthermore, their ability to migrate between devices in the network and adapt to different circumstances, e.g. different computational power or different interface capabilities, makes software agents the natural candidates for maintaining the connection with the user throughout his daily activities [Barakonyi et al. (2004)].
However, to date, there are not many examples of virtual characters being used in these applications. The few projects that do use them [Gutierrez et al. (2003)] suffer from limitations inherent in their screen-based representation. It is not possible for the agents to leave the screen and wander freely or to interact with physical objects in the real world. In such a context, and with the divide between their 2D screen embodiment and the 3D physical world, it is also more difficult for them to attract and engage the user. For example, they cannot effectively point (or look) at a 3D physical surrounding. Hence, a different, and more popular, approach is to enable virtual agents to share the same 3D space as humans by employing immersive user interfaces.
The simplest implementations of such systems employ table-top AR platforms, such as Magicbook [Billinghurst et al. (2001)] and TARBoard [Lee et al. (2005)], in which the user's HMD pose is tracked only within a limited range. Other applications instead rely on mobile AR systems. Notable implementations include pervasive game environments, such as ARQuake [Thomas et al. (2000)] or AquaGauntlet [Tamura et al. (2001)], both of which enable multiple players to fight against virtual enemies added to the real environment through AR visualisation.
In both table-top and mobile AR settings, users can interact with virtual conversational agents through speech or tangible user interfaces, e.g. by manipulating objects whose 3D pose is also tracked. These systems constitute suitable frameworks for investigating face-to-face human-agent social interaction. The conversational agent Welbo [Anabuki et al. (2000)], for example, is an interface agent that guides, helps, and serves the users in an MR Living Room, where users can visually simulate the placement of virtual furniture. Pedagogical Embodied Conversational Agents (PECA) [Doswell (2005)] are similar virtual agents that apply proven pedagogical techniques to interact with human learners in various learning scenarios, including outdoor student tours of historical buildings.
For this purpose, an increasing number of these systems employ sophisticated agent control architectures. In particular, the Nexus [O'Hare et al. (2004)] and UbiAgent [Barakonyi et al. (2004)] frameworks demonstrate how virtual agents equipped with BDI (Belief, Desire, Intention) control systems can provide the reasoning apparatus for creating believable characters that are responsive to modifications and stimuli in their environment, but are also proactive and goal-oriented. Both systems demonstrate the capability of creating applications (e.g. supporting collaborative scenarios) with multiple virtual agents and multiple users, where the virtual agents possess a high degree of autonomy and can watch the MR space, in particular by sensing each other's relative positions as well as the movements of other physical objects. In addition to position-tracking devices, these systems may also employ other sensors, such as light and temperature sensors, to gather information about the physical environment in which they are visualised.

2.4.1 Discussion
The review reported in the last two sections helps to highlight a convergence between some of the agent applications produced in the robotic and the virtual domains. From the virtual domain, ubiquitous agents such as those in UbiAgent [Barakonyi et al. (2004)] and Nexus [O'Hare et al. (2004)] (see Section 2.4) help virtual characters escape purely virtual environments and socially engage humans in their physical environments, especially when they employ MR technology. From the robotic domain, ubiquitous robots and augmented HRI systems (with or without AR visualisation) are motivated by the difficulty of creating autonomous robot agents that sense and act in the physical world.
Independently of the genre, robotic or virtual, the construction of agents operating in the ubiquitous space poses some characteristic engineering challenges. For example, their effectiveness requires a geometric correspondence between the virtual and real elements of the ubiquitous space, which can usually be obtained by employing dedicated localisation mechanisms. Notably, however, geometric correspondence represents only a basic requirement in these types of applications. The hybrid model constructed within the hybrid simulation system [Stilman et al. (2005)] reviewed in the previous section is a good example of a ubiquitous space that can be thought of as the result of the superimposition of two sub-spaces, namely the real and the virtual sub-space. A more sophisticated system that allows for scenarios involving tactile interaction would necessarily pose the additional constraint of physical realism (e.g. impenetrability, gravity, etc.), in order to recreate the same robot-environment dynamics as real experiments.
On the other hand, in order to be employable as effective social characters in real environments, virtual characters in systems such as Nexus and UbiAgent need to perceive not only the position of the humans they interact with (for gaze behaviour [Wagner et al. (2006); Prendingera et al. (2004)] and the placement of the virtual agent [Anabuki et al. (2000)]) but also their state [Klein et al. (1999)], e.g. their moods, given, for example, by the state of the conversation and their gestures. Since these agents, in contrast to robots, lack a physical body, they need to cooperate with a ubiquitous infrastructure, e.g. wireless sensor networks including cameras and localisation sensors deployed in the environment where the human-agent interaction takes place.
The source of most of the difficulties in engineering agent systems for the ubiquitous environment, compared, for example, to purely real or purely virtual scenarios, can be explained in terms of agent embodiment, understood as the structural coupling between the agent and its environment (see [Ziemke (2003)]). If we want to embody an agent in such an environment, its virtual and its real sub-space need to be structurally coupled. If this consistency between the two sub-spaces is not properly engineered, the agent's interaction with one sub-space will not be affected by its actions in the other sub-space, which violates the requirement for structural coupling between the agent and its environment.
The other issue to be resolved when embodying social interface agents in ubiquitous settings is to decide the type of their embodiment. While many studies that compare physical and virtual agents credit the robot with better results in many areas (e.g. credibility, social presence, social awareness, enjoyment, engagement, persuasiveness, likeability), others indicate that some of these results could originate from the nature of the task investigated. For example, Yamamoto et al. [Shinozawa et al. (2003)] have compared the influence of a robot and a virtual character on user decisions in a recommendation task. They carried out identical tasks, once with virtual objects on a computer screen and once with physical objects in the real world, and found that the virtual agent scores better in the virtual world and the physical agent scores better in the real world. These experiments suggest that the nature of the task has a big influence on which type of embodiment is appropriate for the agent.
While other works show that an AR virtual character can be the medium through which the user interacts with the AR space, e.g. to move virtual furniture around the real room in Welbo, or architectural 3D models in PECA, an AR character still lacks the capability to physically affect the real world.
Consequently, an agent that wants to be truly ubiquitous and assist the user in both real and virtual tasks needs to be able to assume both physical and virtual embodiment forms. But to maintain an individualised and personal relationship, the agent also has to display the same persona/identity to the user. A possible solution to this conundrum is the concept of dynamic embodiment, which is explored in the next section.

2.5 Dynamic embodiment


As discussed in the previous section, an agent can only be truly ubiquitous if it can assume a physical and a virtual body, depending on the task. The identity problem, on the other hand, demands that the agent projects the same (or, at least, a recognisably similar) persona to the user. So what is really needed is a form of dynamic embodiment, whereby the agent can change its embodiment form to fit the task at hand.
Our idea of dynamic embodiment is closely related to the concepts expressed in the work of Kaplan [Kaplan (2005)], who observes that it is often possible to separate a robot's software system from the robot hardware. Modern robot architectures (see [Dragone (2007)] for a review) already adhere to the common software engineering practice of placing a middleware layer between application and operating system. This middleware layer is mainly concerned with enabling access to the system's hardware (e.g. through interfaces/drivers to sensors and actuators) and computational resources, resolving heterogeneity and distribution, and also automating much of the development of new systems by providing common, re-usable functionalities.
While traditional robot middleware enables the rapid configuration of the software to support new robot platforms or hardware layouts, these are essentially off-line integration features. Kaplan's work, instead, refers to the concept of agent migration, i.e. the ability to move the execution of software agents across the network. Under this perspective, a robot can be considered a software agent in charge of a robotic platform.
Kaplan proposes the term teleportation for migration between two identical bodies (e.g. two different robots of the same model or composed of the same hardware components), and metamorphosis for the transfer of a software agent between two non-identical bodies (e.g. a personal robot and a PDA). Using teleportation and metamorphosis, software agents controlling robots can change their body in order to find the most appropriate form for any given situation or task. Physical robots then become another observation point from which software agents can follow the user in his/her activities and movements.
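The distinction can be sketched in a few lines of illustrative Java (all types and names here are hypothetical, belonging neither to Kaplan's work nor to any agent platform):

    // Teleportation: plain state transfer between identical bodies.
    // Metamorphosis: the state is additionally remapped to a different
    // sensory-motor apparatus before being adopted.
    interface Body {
        String model();                       // e.g. a robot model, or "PDA"
        void adoptState(byte[] agentState);   // resume the agent here
    }

    final class Migration {
        static void migrate(byte[] agentState, Body from, Body to) {
            if (from.model().equals(to.model())) {
                to.adoptState(agentState);                    // teleportation
            } else {
                to.adoptState(remap(agentState, to.model())); // metamorphosis
            }
        }
        // Placeholder for adapting beliefs and skills to the new body.
        private static byte[] remap(byte[] state, String targetModel) {
            return state;
        }
    }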
This also increases the number of possible agent-user interaction scenarios, which can include a variety of real-world situated interactions, supporting, for example, applications in which the agent needs to learn to adapt to the user. A notable example of mutation and metamorphosis in agent-based ubiquitous robots is investigated within the Agent Chameleons project [Duffy et al. (2003); Martin et al. (2005)], which will be considered shortly.

2.5.1 Agent chameleons


The Agent Chameleons project investigates agent teleportation and metamorphosis in the context of personal assistant agents helping the user across different interface devices and even crossing the software/hardware divide.
Agent Chameleons is modelled on the idea of realising a digital friend that evolves a unique individual personality through prolonged interaction with the user. Such a trait tries to ensure agent-person familiarisation over time and across platforms (dimensions/information spaces).
In addition to agent teleportation, aimed at acquiring different operational functionalities, the other concept explored within the Agent Chameleons project is the use of agent mutation (Kaplan's metamorphosis) to alter the agent's external appearance in order to engender a sense of identity. This is motivated by the fact that embodiment is more than just the material body; it also carries a gendered, social, political and economic identity. One of the most common psychological effects of embodiment is the rendering of a sense of presence [Biocca (1997); Sas and O'Hare (2001)]. Biocca and Nowak [Biocca and Nowak (1999b,a)] stressed the importance of the body in achieving a sense of place, of space, and of another entity's presence.
One of the applications considered within the project is the development of personal travel-assistant agents. These can take the form of a virtual character operating on a personal computer that helps the user to find and book a flight to a certain conference, for example, in the style of Microsoft's Office Assistant. The agent may then migrate to the user's PDA so that it can effectively travel with the user, helping with flight connections and attending to tasks such as filtering e-mails or negotiating the user's transit through the airport. Once at the hotel, the agent may then leave its computational environment by migrating from the user's PDA to a computer in control of a service robot graciously provided by the (futuristic) hotel. With its new physical body, the agent will then be able to continue assisting the user, this time also using its physical capabilities. Notably, this scenario is substantially different from one in which the user is simply assigned a service robot on arrival at the hotel. In the scenario foreseen by the Agent Chameleons project, the robot would effectively act as the user's personal assistant. By having access to his personal data, the robot would, for example, be able to serve the perfect coffee without having to query the user about his preferences. Such results could be achieved by having the personal assistant agent communicate the necessary instructions to the robot. However, a solution based upon migration may be more efficient and would also offer the advantage of not having to release private user details. In addition, an Agent Chameleon goes a step further by making sure that the user knows that all the agents are a mutated form of his personal assistant. For instance, the Agent Chameleon will use the same synthetic voice or will display other characteristic personality traits that are independent of its specific bodily form, thus preserving the advantages of the familiar relationship between the user and his assistant.
Experiments have been undertaken that demonstrate the concepts of teleportation and mutation of Agent Chameleons. Figure 2.2 illustrates the layers of the system architecture behind those experiments. At the base is a middleware layer interfacing with the specific environment inhabited by the agent. This may consist either of an API toward the functionalities of a specific application, in the case of an agent inhabiting a purely software environment, or of a hardware-control API, in the case where the agent possesses a robotic body. Above this is the native Java Virtual Machine (JVM). Beyond this resides the Java API, and built on top of that is Agent Factory (AF) [O'Hare et al. (1998); Collier (2001)], a multiagent programming toolkit developed at UCD.

Fig. 2.2 Agent Chameleons system layers.

Fig. 2.3 Proof of concept migration between real and virtual world in Agent Chameleons.

Agent Chameleons realises a proof-of-concept demonstration of the migration of a physical robot from the real world to the virtual world and vice-versa (see Figure 2.3). In the specific demonstrative system, the physical world is extended by a virtual world depicted on a computer screen adjoining the physical world. A small Khepera robot can navigate and explore the desk-mounted world and dock in a robot garage at the edge of the physical world, thus removing the physical robot from vision. Thereafter the robot seamlessly crosses into the virtual world.

2.5.2 Discussion
The implementation of agent mutation in the demonstrative system realised within the Agent Chameleons project is limited in several respects. First of all, it relies upon an existing isomorphism between the entities involved in the mutation, namely the software agents, the robots and their respective environments (simulated vs. real). Second, both systems are kept very simple by using miniature Khepera robots for table-top experiments. As a result, mutation can be achieved simply by migrating the agent in charge of the simulated robot from the simulator to the real robotic platform. Even within such constraints, the main problem with this type of mutation is that the agent is oblivious to the type of environment it is operating in, e.g. ignoring whether its perceptions originate from the simulator or from physical sensing, or whether its actions are carried out by simulated or hardware actuators. This entails the assumption that the same agent is equally able to control both the simulated and the real robot, provided that the connections between its sensory-motor apparatus and the environment are appropriately rewired (to the simulator or to the real robot hardware). Although such an assumption is admissible under the specific conditions posed by the demonstrative systems, it is an exception rather than the norm. A more effective implementation of the Agent Chameleon should be mindful of the differences between the simulated and the real world in order to better adapt to the different scenarios and different robots, but also to benefit from the different possibilities awarded to the agent by each of its distinct forms.
In addition, such basic implementations of agent mutation do not address the more general case in which a generic agent, not a simulated robot, wishes to take control of a robotic platform in order to execute a task in the physical world. In particular, the truly interesting potential of the Agent Chameleon scenario is the ability to support individualised, and possibly social, interaction with the user in both real and virtual settings. However, the Agent Chameleons project lacks a test-bed with which to investigate these ideas.
Although it is possible to investigate the possibilities of agents mutating their form when embodied in a virtual environment [Martin et al. (2005)], they lose this capability in the physical world. A possible solution to these limitations would be to use a screen-based interface, e.g. as in the robot GRACE [Simmons et al. (2003)]. This would give a user's personal assistant agent, which assumes the form of a virtual character on the user's PC and PDA, the possibility of appearing on the robot's screen to signal that the agent has taken control of it. The service robot in the hotel example discussed in Section 2.5.1 would then have the same appearance and behaviour as the service robot used by the user in his own home.
A second option, explored in the remainder of this chapter, is to use mixed reality as an interface between the Agent Chameleon and the end-user.

2.6 MiRA chameleons


Mixed Reality Agents (MiRAs) are an innovative class of applications that combine physical robots with virtual characters displayed through MR technology [Dragone et al. (2006); Young et al. (2007); Shoji et al. (2006)]. By giving a virtual interface to a physical robot and a physical body to a virtual character, the agents in these systems exhibit tangible physical presence while offering rich expressional capabilities and personalisation features that are complex and expensive to realise with purely hardware-based solutions. These systems constitute a characteristic example of dynamic embodiment in which the capabilities of both media are merged into a single agent.
An obvious disadvantage of employing such an augmentation approach is the cumbersome and expensive hardware imposed on each user, which at the moment clearly encumbers the deployment of MiRAs in applications with a high user-to-robot ratio. However, this situation is on the verge of change, as both head-mounted displays and wearable computers are becoming cheaper and less invasive.
Collectively, these systems showcase the advantages of employing MR visualisation to combine physical robot platforms with virtual characters. Among the possibilities, the virtual character can be overlaid as a form of virtual clothing that envelops the physical robot and acts as a visualisation membrane, de facto hiding the robot's hardware [Holz et al. (2006); Shoji et al. (2006)]. Alternatively, the virtual character can be visualised on top of the robot, as a bust protruding from the robot's body, or even figuring as the robot's driver [Holz et al. (2006)]. In every case, in contrast to robots with virtual characters visualised on a screen placed on top of them, such as GRACE [Simmons et al. (2003)], mixed reality characters are visible from all angles and are not subject to diminishing visibility at greater distances.
When they employ simple robots, as in Jeeves [Young et al. (2007)] or Dragone et al.'s MiRA project [Dragone et al. (2006)], these systems are advantageous in applications with a high robot-to-user ratio, as a single wearable interface can augment the interaction capabilities of multiple simple robots (e.g. with no screen, head or arms).
Dragone et al.'s MiRA and Jeeves also take greater advantage of their mixed reality components, as they are free from the engineering effort of realising sophisticated mechanical interfaces. For example, a MiRA can have the ability to point and gaze in 3D by means of virtual limbs without having to construct any physical counterpart. In this manner, it can overcome the inherent limitations of screen-based solutions, as well as provide a rich repertoire of gestures and facial expressions, which can be used to advertise its capabilities and communicate its state (see Fig. 2.4).

Fig. 2.4 Examples of gestures and facial expressions in MiRA Chameleons (actual images displayed in the user's head-mounted display during live experiments and user trials with our application).

Notably, being based on virtual artefacts, the behavioural capabilities of Mixed Reality Agents are not limited to natural human-like forms, but can also include more complex effects involving other virtual objects and cartoon-like animations. Jeeves [Young et al. (2007)], for example, tags real objects with virtual stickers and uses cartoon animation as an intuitive form of social interaction.
The remainder of this chapter will focus on Dragone et al.'s MiRA project, which is renamed MiRA Chameleons here in order to more clearly distinguish it from Jeeves and U-Tsu-Shi-O-Mi [Shoji et al. (2006)]. The other reason for this name is that the MiRA Chameleons project carries over experience accumulated within the Agent Chameleons project by adding an agent-based coordination dimension between the wearable MR user interface and the robot forming a MiRA.
The vision realised within the MiRA Chameleons project is to build a ubiquitous agent that can truly cross the divide between the digital and the physical world by taking advantage of all aspects of dynamic embodiment, and which can also simply mutate its external appearance by acting on its virtual part.

2.6.1 Requirements
In order to create a proper test-bed for such an integrated approach, it is important to support its implementation in a flexible manner that overcomes the limitations of the early implementations of Agent Chameleons and also eases multiple instantiations of the system. For example, rather than ad-hoc coupling of the robot and the user's wearable MR interface, the system should work with heterogeneous robot hardware and computational platforms. Rather than using pre-configured connections between a fixed set of devices, the user should be free to move and choose among the devices that are available.


In order to enable long-term experiments into the personalisation aspect of MiRA Chameleons, any practical implementation of the system should offer a sub-stratum over which the capabilities and the knowledge of a personal user agent may be openly re-configured and transferred across the different devices in the network. In particular, the MiRA Chameleon system can be facilitated by supporting:
• dynamic discovery and management of spontaneous networks between users' MR wearable interfaces, portable devices and robots (a discovery sketch follows below);
• run-time, adaptive configuration of system components;
• portability across different robot platforms and computational devices.
One software framework that incorporates these features is the Socially Situated Agent Architecture (SoSAA).
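As a flavour of the first requirement, the fragment below shows one conventional way for a node to announce itself on a multicast group so that nearby devices can discover it; the group address, port and message format are invented for the example and are not taken from SoSAA:

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;

    // Hypothetical node announcement: robots and wearable interfaces
    // listening on the same group learn of this node without any
    // pre-configured connection.
    final class NodeAnnouncer {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("239.255.10.10");
            try (MulticastSocket socket = new MulticastSocket(4545)) {
                byte[] hello = "NODE robot-1".getBytes("UTF-8");
                socket.send(new DatagramPacket(hello, hello.length, group, 4545));
            }
        }
    }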

2.6.2 The socially situated agent architecture (SoSAA)


SoSAA [Dragone (2007)] is a software framework primarily intended to serve as a foundation on which to build different intelligent ubiquitous applications while fostering the use of AOSE by promoting code re-use and integration with legacy systems and third-party software. In particular, SoSAA leverages existing, well-established research in the Component-Based Software Engineering (CBSE) domain [Clements (2001)]. Here, components are not merely passive, but play an essential role in managing the reactive part of the behaviour of an agent.
Figure 2.5 helps to illustrate the SoSAA integration strategy. SoSAA combines a low-level component-based infrastructure framework with a MAS-based high-level infrastructure framework that augments it with multi-agent organisation and goal-reasoning capabilities.
The low-level component framework in SoSAA can be used to instantiate different component-based systems by imposing clear boundaries between different functional modules (the components), and by stressing the concepts of information hiding and of composition rules that guide developers in assembling these components into the system architecture. In particular, components interact via inter-component communication channels, which, contrary to object calls in object-oriented programming, are explicit architectural elements that are applied across the whole architecture and can be programmatically manipulated.
The low-level component framework in SoSAA provides:
(1) Support for the most important component composition styles, namely connection-driven/procedural interfaces and data-driven interfaces (based on messaging and/or events).
(2) Brokering functionalities to be used by components to find suitable collaboration partners for each of the supported composition styles. This enables indirect collaboration patterns among participating components that are not statically bound at design/compilation time but can be bound either at composition time or at run time.
(3) Container-type functionalities, used to load, unload, configure, activate, de-activate, and query the set of functional components loaded in the system, together with their interface requirements (i.e. in terms of provided and required collaborations).
(4) Binding operations, with which client-side interfaces (e.g. event listener, data consumer, service client) of one component can be programmatically bound to server-side interfaces (e.g. event source, data producer, service provider) of other components (see the sketch after this list).

(5) A run-time engine in charge of the execution and scheduling of activity-type components (e.g. sensor drivers, sensory-motor behaviours, data-processing routines).

Fig. 2.5 SoSAA integration strategy.
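The container and binding operations listed in points (2)-(4) above can be pictured as an interface of roughly the following shape; this is an illustrative condensation, not the actual JMCF API:

    import java.util.List;

    // Hypothetical container interface in the style described above.
    interface ComponentContainer {
        void load(String componentType, String instanceId);    // point (3)
        void unload(String instanceId);
        void activate(String instanceId);
        void deactivate(String instanceId);
        // Point (2): brokering, i.e. finding suitable providers.
        List<String> providersOf(String providedInterface);
        // Point (4): bind a client-side interface of one component to a
        // server-side interface of another, at composition or run time.
        void bind(String clientId, String requiredInterface,
                  String serverId, String providedInterface);
    }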
In addition, an adapter layer in SoSAA provides a set of meta-level operators, which collectively define an interface to the intentional layer. In particular, the adapter layer allows access by multiple intentional agents, called component agents, and provides the meta-level operators and perceptors that collectively define the computational environment shared by the component agents in the intentional layer. Perceptors query the component framework and provide component agents with knowledge about events, the set of installed components, their interfaces, their actual bindings, and their run-time performance, while actuators control the loading, unloading, configuration, activation, de-activation, and binding of components.
In summary, SoSAA requires the wrapping of functional skills within low-level components before they can be administered by component agents in the intentional layer. This enables the adoption of a low-level component model, for example in terms of different component types with specific behavioural and collaboration patterns, which can be oriented toward supporting specific application domains. Low-level functional components in SoSAA react to events according to a particular behaviour until they are instructed to do otherwise by the agents in the SoSAA intentional layer. Additionally, individual components may communicate amongst one another at the sub-symbolic level using inter-component communication channels. This leaves the intentional layer free to concentrate on higher-level reasoning. Furthermore, at the intentional level, SoSAA can leverage ACL communication and dedicated AOSE methodologies, employing an organisational/social view through which the analysis and modelling of inter-agent interaction can be performed. In particular, roles in role-based architectures help to structure a system as a set of more manageable sub-systems by acting as abstract specifications of behavioural patterns. Finally, since the adapter layer is defined in terms of both standard agent capabilities and common features of component models, SoSAA's design facilitates the replacement of different agent platforms and different component-based frameworks.
The separation between component agents and functional components in SoSAA is essential to decouple, for engineering purposes, the agents' mind from their working environment. Under this perspective, functional components provide the agent's body, that is, the medium through which the agent's mind can sense and affect a working environment. While the agent's mind can then be programmed according to different cognitive models, for example BDI, domain- and environment-specific issues can be taken into account in the engineering of the underlying functional components.
There are currently two instantiations of SoSAA, both based on open-source frameworks, namely the Agent Factory (AF) multi-agent toolkit and the Java Modular Component Framework (JMCF)¹. In addition to the standard versions, both frameworks come in versions that address computationally constrained devices, namely AFME (AF Micro Edition) and JMCFME (JMCF Micro Edition). These latter versions lose some of their flexibility in favour of a smaller footprint, for instance by renouncing Java's reflection functionalities. The result, however, is that by combining the respective versions, a SoSAA system can be distributed over different computational nodes on a computer network (SoSAA nodes), each characterised by the presence of the SoSAA infrastructure.

¹ SoSAA, AgentFactory and the JMCF are all available for download at http://www.agentfactory.com

2.6.3 Implementation
By supporting the integration of intentional agents and functional components in open and ubiquitous environments, the SoSAA framework enables the polymorphic combination of robotic and virtual components that characterises MiRA Chameleons.
Figure 2.6 shows how a MiRA Chameleon system is implemented as a collaboration between different types of SoSAA nodes, namely standard SoSAA nodes, which are deployed on the robots and on back-end server-type nodes, and SoSAA-ME nodes, which are deployed on users' portable devices, such as PDAs or mobile phones, as well as on MiRA MR wearable interfaces.

Fig. 2.6 SoSAA nodes for MiRA Chameleons.
In essence, a MiRA Chameleon is the result of a collaboration between a robot and a user node (see Figure 2.7), which communicate over an ad-hoc wireless network link in order to exhibit cohesion and behavioural consistency to the observer. Tracking is achieved simply by placing a cubic marker on top of the robot and tracking its position from the camera associated with the user's HMD. The tracking information is then used to align the image of the virtual character with that of the real robot.
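A minimal sketch of this alignment step, assuming poses are exchanged as 4x4 row-major homogeneous matrices (the representation and names are illustrative, not those of the actual implementation):

    // The character pose is the tracked marker pose composed with a
    // fixed offset, e.g. "hover just above the cube"; the result is
    // handed to the HMD renderer every frame.
    final class CharacterAlignment {
        static double[] characterPose(double[] markerPose, double[] offset) {
            double[] out = new double[16];
            for (int r = 0; r < 4; r++)
                for (int c = 0; c < 4; c++) {
                    double s = 0;
                    for (int k = 0; k < 4; k++)
                        s += markerPose[4 * r + k] * offset[4 * k + c];
                    out[4 * r + c] = s;     // out = markerPose * offset
                }
            return out;
        }
    }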
In order to be a believable component of the mixed reality agent, the behaviour of the virtual character needs to exhibit a degree of awareness of its surroundings comparable to that of a robot physically embodied through an array of physical sensors and actuators. In MiRA Chameleon, the instrument for such situatedness is the update input stream, which notifies the MR interface about the nature and relative position of the obstacles and other objects of interest perceived by the robot, and also about the intentions of the robot. This allows direct control of the deictic and anticipatory animations of the associated virtual character, which can visually represent the robot's intentions (similar to Jeeves's cartoon-like expressions of the robot's state [Young et al. (2007)]).
On the other hand, since the MR overlay of virtual images onto the users field of vision requires
exact knowledge about the position and gaze of the user, this information can also be relayed to the
robot. In doing so, the wearable interface helps the robot to know the position (and the identity) of
1 SoSAA, AgentFactory and the JMCF are all available for download at http://www.agentfactory.com
Fig. 2.6 SoSAA nodes for MiRA Chameleons.

the user, while the user can use his gaze direction to spatially reference objects and way-points in order to influence the robot's behaviour. The communication between the wearable interface and the robot is therefore essential in reinforcing the embodiment of each part of a MiRA (see the discussion in Section 2.4.1) and augmenting the system's HRI capabilities by merging these parts into a single agent.

Fig. 2.7 Collaboration between user and robot node.

The nature of the functional components deployed on each node depends on the node type. Functional components in portable devices wrap application-specific and user interface (e.g. text-based)
functionalities. On a typical robot node, components can range from active components encapsulating behaviour production or data-processing functionalities, to passive data components granting access to either bodies of data (e.g. task- and environment-related information, knowledge bases, configuration) or to a robot's hardware. Finally, functional components in MiRA's MR wearable interfaces are similar to those deployed on robot nodes, although the hardware in this case is constituted by the virtual models of the MiRA's virtual characters, while data-processing components encapsulate AR-tracking functionalities or proxy-type components consuming sensor data transmitted by the robots.
Unlike the disjointed, passive functionalities exported through a low-level API in the original Agent Chameleon, functional components in a SoSAA node are organised into component contexts that are already capable of primitive functionalities. For instance, in isolation and without its intentional layer, the low-level component framework of a robot node will still be capable of reactive behaviours, such as avoiding collisions with approaching obstacles, or responding to simple instructions, such as wander and turn. Similarly, a virtual character in a MiRA MR wearable interface will be able to track the position of the user, exhibit simple base-type autonomous behaviours, such as breathing and eye blinking, and respond to simple instructions, such as gaze-toward-point, and move or rotate joints.
On top of their respective component-based frameworks, each SoSAA node is supervised by a number of basic component agents attending to a number of high-level behaviours and facilitating the integration between the node and the rest of the system. These component agents become aware of the low-level functional capabilities available in the node by querying their component-based framework with the meta-operators defined in the SoSAA adapter. They can thus abstract from the specific implementation details of the functional components by operating at the knowledge level of component types and component signatures (in terms of required and exported interfaces). Subsequently, they can employ domain-specific knowledge, which is expressed at the same level, to harness the functionalities of the node to create high-level behaviours, for instance, by sequencing and conditioning the activation of primitive behaviours within the low-level component framework. At the same time, by publicising their high-level capabilities in a common ontology, these component agents hide the specific details of each node and enable other component agents that are unaware of these details to operate within the same node. For example, both robots equipped with laser range sensors and robots equipped with sonar will publicise that they are capable of moving to a certain room, although they will ultimately employ different plans to achieve the same objective, as the sketch below illustrates.
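To illustrate this mechanism, the following Python sketch (with invented names; the actual SoSAA infrastructure is Java-based and built on Agent Factory and JMCF) shows how two robot nodes can publicise the same high-level capability while binding it to node-specific plans:

# Minimal sketch of capability publication and node-specific plan binding;
# all class, method and plan names are hypothetical.

class RobotNode:
    def __init__(self, name):
        self.name = name
        self.capabilities = {}  # capability advertised in the common ontology

    def publicise(self, capability, plan):
        # Other component agents see only the capability name, not the plan.
        self.capabilities[capability] = plan

    def achieve(self, capability, *args):
        # The node-specific plan is selected and executed in situ.
        return self.capabilities[capability](*args)

def laser_plan(room):
    return "navigate to %s using laser-based localisation" % room

def sonar_plan(room):
    return "navigate to %s using sonar-based wall following" % room

laser_robot = RobotNode("robot1")
sonar_robot = RobotNode("robot2")
laser_robot.publicise("moveToRoom", laser_plan)
sonar_robot.publicise("moveToRoom", sonar_plan)

# Both nodes answer the same high-level request with different plans:
for robot in (laser_robot, sonar_robot):
    print(robot.name, "->", robot.achieve("moveToRoom", "room42"))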
Collectively, these mechanisms give each SoSAA node reflective capabilities both at the functional and at the ACL level, which respond to the requirements of the MiRA Chameleon by enabling a flexible migration of functionalities across heterogeneous SoSAA nodes. In the migration by cloning, as performed in the old prototype of the Agent Chameleon system, an agent's deliberative layer had to find an exact copy of its ties with the environment in each platform it migrated to. In contrast, functionalities in MiRA Chameleon migrate in terms of component agents, typically covering user- and task-specific roles, which then adapt to their new hosting platform by finding, possibly organising, and finally initiating the necessary collaborations with the functionalities available in situ.
The adaptation stage in the migration of component agents in SoSAA is what enables the dynamic integration of MiRA Chameleon functionalities over heterogeneous nodes. In addition, SoSAA facilitates the distribution of these functionalities thanks to a hybrid communication model similar to the one employed within the RETSINA MAS [Sycara et al. (2003)]. Specifically, the low-level frameworks of each node are connected through the distribution mechanisms associated with the different component composition styles supported by SoSAA, that is, RMI (Remote Method Invocation) for component services, and JMS (Java Message Service) for data and event distribution. In addition, as in RETSINA, components can open inter-component back-channels employing different communication mechanisms and network protocols that are specifically designed to cater for specific flows of data, such as video streaming through RTP (Real-time Transport Protocol) or UDP multicasting for peer discovery. On top of that, dedicated component agents (communicators) in the SoSAA intentional layer


employ a set of coordination protocols, based on FIPA-ACL, to supervise the federation of their respective low-level component frameworks, and also to facilitate opening, closing, and controlling the flow of information over inter-component back-channels, and to carry higher-level conversations.
Once aware of each other's presence, e.g. through the UDP peer-discovery service, communicators in the different nodes can manage the migration of component agents from the user's PAA on his portable device to the MiRA MR wearable interface. Similarly, as soon as a robot's communicator starts to collaborate with the user's wearable node, the two can exchange information, e.g. about each other's identity, and also agree on migrating some functionalities from the user node to the robot node before the robot enters the visual field of the user's HMD. After this first connection, and until robot and human depart from each other, the two nodes will collaborate to deliver a MiRA composed of the real robot and a virtual character visualised through the user's HMD.

2.6.4 Testbed
An example will better illustrate how the functionalities of MiRA Chameleon are implemented availing of the SoSAA framework. In particular, in order to drive the implementation of the new MiRA Chameleon system, we implemented a simple application scenario to demonstrate the joint exploitation of gaze tracking and positional awareness [Dragone et al. (2006)], and the expressive capabilities of MiRAs [Dragone et al. (2007)], by asking users to observe and collaborate with a soccer-playing robot. Specifically, the user can ask the robot to find and fetch an orange-coloured ball, and also direct some of the robot's movements to facilitate the successful and speedy execution of the task. A number of application-specific component agents and functional components in each node
complement the generic infrastructure, described in the previous section.

2.6.4.1 Robot node


Additional component agents on the robot supervise functional contexts (sub-systems), such as navigation, localisation, object tracking, etc. In order to enable the soccer game scenario, low-level components implement object-recognition functionalities (based on blob colour-tracking), which are used to recognise the ball, and behaviours for approaching, grabbing and dribbling the ball.
The robot control system also includes an object-tracking manager component agent, used for the definition and maintenance of way-points, which can be associated with objects tracked by the on-board camera or initialised to arbitrary positions registered with the robot positioning system (i.e. its odometry). Subsequently, these way-points can be used as inputs for other behaviours. The mechanism enables behaviour persistence, which allows the robot to keep approaching the ball even when it gets momentarily occluded, and also supports multiple foci of interest, so that the robot can turn toward the user and then return to pursuing the ball.
Notably, within the SoSAA-based implementation, setting up such a demonstrative scenario allows fixing the application-specific ontology describing the high-level capabilities of the robot, e.g. in terms of actions that can be performed (e.g. turn right/move forward) and objectives that can be achieved by the robot (canAchieve(close(?object, ?way_point))).

2.6.4.2 User node


Figure 2.8 roughly illustrates the organisation of part of the wearable SoSAA node in the MiRA
Chameleon system. Within the node, component agents supervise two functional areas, namely the
user interface and the control of the virtual character (avatar) associated with the robot. The user
interface components control the display of text and other 2D graphic overlays in the user's HMD, and process user input. Through them, the user can be informed of details of the task and the state of the robot. These components also process user utterances availing of the IBM ViaVoice™ speech recogniser and the Java Speech API (http://java.sun.com/products/java-media/speech/).


The vocal input may be used to trigger a predefined set of commands in order to issue requests to the robots. To do this, the recognised text is matched and unified with a set of templates and the result transformed into the corresponding ACL directive, e.g. request(?robot, activate-behaviour(turnRight)), request(?robot, achieve-goal(close(ball, user))), request(?robot, achieve-goal(close(robot, user))), as sketched below.
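A possible rendering of this matching step is the following Python sketch; the template set and directive strings are illustrative only, and the actual implementation unifies templates within the agent platform rather than with regular expressions:

# Sketch of matching recognised text against command templates and emitting
# an ACL directive; templates and directive syntax are invented examples.
import re

TEMPLATES = [
    (r"turn right",        "request(?robot, activate-behaviour(turnRight))"),
    (r"fetch the ball",    "request(?robot, achieve-goal(close(ball, user)))"),
    (r"come (here|to me)", "request(?robot, achieve-goal(close(robot, user)))"),
]

def to_acl(recognised_text, robot_id):
    for pattern, directive in TEMPLATES:
        if re.search(pattern, recognised_text.lower()):
            # Unify the ?robot variable with the addressed robot's identifier.
            return directive.replace("?robot", robot_id)
    return None  # utterance does not match any known command

print(to_acl("Fetch the ball!", "robot1"))
# -> request(robot1, achieve-goal(close(ball, user)))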

Fig. 2.8 Organisation of the user's MR node.

The behavioural animations of the virtual character are implemented via OpenVRML (http://www.openvrml.org), an open-source C++ library enabling programmatic access to the VRML model of the avatar. The avatars employed in MiRA Chameleons are also part of the distribution of the Virtual Character Animator software by ParallelGraphics. Some of these avatars are simple caricature characters (such as the one depicted on the left in Figure 2.9), while others are cartoon-type humanoid characters compliant with the H-Anim 1.1 standard (see the right avatar in Figure 2.9). By using the access provided by OpenVRML to the virtual models loaded in the application, the system can automatically form a picture of the structure of each virtual character, e.g. reporting the joints in their models, if they are H-Anim models, and also the presence of pre-defined animation scripts. This information is then exported through the SoSAA adapter and finally represented in the SoSAA intentional layer in the form of belief sets, which are then used by the component agents supervising the avatar.

2.6.4.3 Collaboration between robot and virtual character


Most of the coordination between robot and virtual character is achieved through high-level requests from the robot's agents to the user interface agents (e.g. <greet the user>) or communication of intentions (e.g. <committed turn-right>), rather than low-level directions (e.g. <move arms up and wave 3 times>), which require more data and use up much of the robot's attention. However, in order to execute animations that are coherent with the robot's perception of its environment, once activated, the animation components will also access low-level data, such as sensor readings or other variables used within the robot agent, and the tracking information reporting the position of the observer in the robot's coordinate system.


Fig. 2.9 Different examples of avatars (actual images displayed in the user's head-mounted display during live experiments and user trials with our application).

Let's say, for instance, that the robot wants to greet the user. Since the robot does not have the hardware capabilities to do so, it will request the help of the user node through an ACL request <greet the user>. The user interface agent will forward the request to the avatar agent in charge of the virtual character associated with the robot. The avatar agent will then carry out this high-level request by using the specific capabilities of the virtual character. As a result, the snowman character will greet the user by turning its whole body toward the user and by waving its hat, an animation built into the character's VRML model. In contrast, H-Anim characters will be able to more naturally gaze at the user and wave their hands. In both cases, facing the user is possible because the animation activities know the position and the orientation of the user in relation to the robot-centric frame of reference. A minimal sketch of this forwarding chain is given below.
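The following Python sketch renders this chain with invented names and data structures (the actual system realises it through ACL messages between Agent Factory agents):

# Sketch of forwarding a high-level ACL request to the avatar agent, which
# realises it with character-specific animations; all names are illustrative.

class Avatar:
    def __init__(self, animations):
        self.animations = animations  # taken from the loaded VRML model

    def perform(self, high_level_request, user_position):
        if high_level_request == "greet the user":
            plan = ["face(%s, %s)" % user_position]
            # Choose whatever greeting animation this character supports.
            if "wave_hand" in self.animations:    # e.g. H-Anim humanoid
                plan.append("wave_hand")
            elif "wave_hat" in self.animations:   # e.g. snowman character
                plan.append("wave_hat")
            return plan

class UserInterfaceAgent:
    def __init__(self, avatar):
        self.avatar = avatar

    def on_acl_request(self, request, user_position):
        # Forward to the avatar agent rather than micro-managing joints.
        return self.avatar.perform(request, user_position)

snowman = Avatar(animations={"wave_hat"})
ui_agent = UserInterfaceAgent(snowman)
print(ui_agent.on_acl_request("greet the user", (1.5, 0.3)))
# -> ['face(1.5, 0.3)', 'wave_hat']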

2.6.5 Discussion
In general, through its SoSAA-based implementation, the MiRA Chameleon system can easily adapt to different users, different robots, and different applications, as there is no pre-defined coupling between the robot and the appearance and behaviour of its associated virtual character. Instead, thanks to the agent-based coordination described in the previous section, the system as a whole can take context-sensitive decisions in order to deliver a personalised, adaptive, augmented HRI interface.
In particular, the MiRA Chameleon system is a suitable testbed to experiment with external mutation as envisaged within the Agent Chameleon project. By way of example, the system may utilise the role description for the robot as well as a profile of the observer (e.g. his identity and preferences) in order to personalise the form of the avatar, e.g. to project a snowman avatar while engaging in playful activity and a more serious avatar when teaching.
Such a personalisation of form may be augmented by way of personalisation of behaviour. As the user interface agent resides on the user's wearable computer, it may easily access (or indeed learn) useful personal data about the user, which can consequently be used to negotiate the robot's behaviour to better suit the perceived needs of the user. Notably, an important advantage of having the user interface agent act as an intermediary between the user and the robot is that the robot's behaviour can be influenced without disclosing personal data of the user. For instance, if the user needs to follow the robot, the user interface agent may proactively ask the robot to slow down in order to prevent possible collisions with the user.

2.7 Conclusion
This paper has advocated that robots play a significant role in the digital society. However, their heterogeneous nature, as well as their characteristically rigid and inflexible forms, poses significant barriers to the development of effective interaction modalities. Thus the objective of attaining seamless and intuitive interaction, as per the ubiquitous computing and ambient intelligence visions, remains somewhat distant. One cost-effective and flexible solution is to engage an eclectic mixture of technologies that harmonises human-robot interaction with other interaction modalities.
Dynamic embodiment offers one avenue for realising agents that can traverse technology boundaries and offer a consistent interface to the end-user while taking advantage of the facilities that
individual platforms, of which robots are one instance, offer.
Mixed reality, incorporating virtual characters, offers an intriguing approach for modelling
robotic forms in a flexible and adaptable manner. A combination of dynamic embodiment and MR
enables the robotic entities to be endowed with interfaces that both harmonise with and augment
conventional user interfaces.
Making the leap from proof of concept to practical system is an essential step to investigate the
usability of such systems in everyday applications. This paper describes an important step in such a
direction by presenting our system architecture based on the SoSAA software framework.
Future work will be dedicated to improving the interoperability of our solution, by employing standard representations of the system's functional ontologies, and to creating a stable environment for testing and developing suitable methodologies supporting the adaptability of the system (e.g. through learning).

Chapter 3

A Generic Architecture for Human-Aware Ambient Computing
Tibor Bosse, Mark Hoogendoorn, Michel C.A. Klein, and Jan Treur
Department of Artificial Intelligence, VU University Amsterdam, Amsterdam, the
Netherlands
{tbosse, mhoogen, michel.klein}@cs.vu.nl, treur@few.vu.nl

Abstract
A reusable agent-based generic model is presented for a specific class of Ambient Intelligence applications: those cases addressing human wellbeing and functioning from a human-like understanding.
The model incorporates ontologies, knowledge and dynamic models from human-directed sciences
such as psychology, social science, neuroscience and biomedical sciences. The model has been
formally specified, and it is shown how for specific applications it can be instantiated by application-specific elements, thus providing an executable specification that can be used for prototyping. Moreover, it is shown how dynamic properties can be formally specified and verified against generated
traces.

3.1 Introduction
The environment in which humans operate has an important influence on their wellbeing and performance. For example, a comfortable workspace or an attentive partner may contribute to good performance or prevention of health problems. Recent developments within Ambient Intelligence provide
technological possibilities to contribute to such personal care; cf. [Aarts et al. (2003)], [Aarts et al.
(2001)], [Riva et al. (2005)]. For example, our car may warn us when we are falling asleep while
driving or when we are too drunk to drive. Such applications can be based on possibilities to acquire
sensor information about humans and their functioning, but more substantial applications depend
on the availability of adequate knowledge for analysis of information about human functioning. If
knowledge about human functioning is represented in a formal and computational format in devices
in the environment, these devices can show more human-like understanding, and (re)act accordingly
by undertaking actions in a knowledgeable manner that improve the human's wellbeing and performance. As another example, the workspaces of naval officers may include systems that track their gaze and characteristics of stimuli (e.g., airplanes on a radar screen), and use this information in a computational model that is able to estimate where their attention is focused; cf. [Bosse et al. (2006b)]. When it turns out that an officer neglects parts of a radar screen, such a system can either indicate this to the person (by a warning), or arrange in the background that another person or computer system takes care of this neglected part.
In recent years, human-directed scientific areas such as cognitive science, psychology, neuroscience and biomedical sciences have made substantial progress in providing increased insight into the various physical and mental aspects involved in human functioning. Although much work
still remains to be done, dynamic models have been developed and formalised for a variety of such
aspects and the way in which humans (try to) manage or regulate them. From a biomedical angle, examples of such aspects are (management of) heart functioning, diabetes, eating regulation disorders,
and HIV-infection; e.g., [Bosse et al. (2006a)], [Green (2005)]. From a psychological and social angle, examples are emotion regulation, attention regulation, addiction management, trust management,
stress management, and criminal behaviour management; e.g., [Gross (2007)], [Bosse et al. (2007)],
[Bosse et al. (2008c)].
The focus of this paper is on the class of Ambient Intelligence applications as described, where
the ambient software has context awareness (see, for example, [Schmidt (2005)], [Schmidt et al.
(1999)], [Schmidt et al. (2001)]) about human behaviours and states, and (re)acts on these accordingly. For this class of applications an agent-based generic model is presented, which has been formally specified. For a specific application, this model can be instantiated by case-specific knowledge
to obtain a specific model in the form of executable specifications that can be used for simulation and
analysis. In addition to the naval officer case already mentioned, the generic model has been tested
on a number of other Ambient Intelligence applications of the class indicated. Three of these applications are discussed as an illustration, in Sections 3.5, 3.6 and 3.7, respectively. Section 3.2 describes the modelling approach. In Section 3.3 the global architecture of the generic model is presented. Section 3.4 shows the internal structure of an ambient agent in this model. Section 3.8 shows how overall properties of this type of Ambient Intelligence system can be specified, verified against traces and logically related to properties of the system's subcomponents. Finally, Section 3.9 is a discussion.

3.2 Modelling approach


This section briefly introduces the modelling approach used to specify the generic model. To specify the model conceptually and formally, the agent-oriented perspective is a suitable choice. The
processes in the generic process model can be performed by different types of agents, some human,
some artificial. The modelling approach used is based on the component-based agent design method
DESIRE [Brazier et al. (2002)], and the language TTL for formal specification and verification of
dynamic properties [Bosse et al. (2008b)], [Jonker and Treur (2002)].

Process and Information Aspects Processes are modelled as components. A component can
either be an active process, namely an agent, or a source that can be consulted or manipulated, which
is a world component. In order to enable interaction between components, interaction links between
such components are identified and specified. Ontologies specify interfaces for components, but also
what interactions can take place between components, and the functionalities of components.
Specification Language In order to execute and verify human-like ambience models, the expressive language TTL is used [Bosse et al. (2008b)], [Jonker and Treur (2002)]. This predicate logical language supports formal specification and analysis of dynamic properties, covering both qualitative and quantitative aspects. TTL is built on atoms referring to states, time points and traces. A state of a process for (state) ontology Ont is an assignment of truth values to the set of ground atoms in the ontology. The set of all possible states for ontology Ont is denoted by STATES(Ont). To describe sequences of states, a fixed time frame T is assumed which is linearly ordered. A trace γ over state ontology Ont and time frame T is a mapping γ : T → STATES(Ont), i.e., a sequence of states γ_t (t ∈ T) in STATES(Ont). The set of dynamic properties DYNPROP(Ont) is the set of temporal statements that can be formulated with respect to traces based on the state ontology Ont in the following manner. Given a trace γ over state ontology Ont, the state in γ at time point t is denoted by state(γ, t). These states can be related to state properties via the formally defined satisfaction relation |=, comparable to the Holds-predicate in the Situation Calculus [Reiter (2001)]: state(γ, t) |= p denotes that state property p holds in trace γ at time t. Based on these statements, dynamic properties can be formulated in a sorted first-order predicate logic, using quantifiers over time and traces and the usual first-order logical connectives such as ¬, ∧, ∨, ⇒, ∀, ∃. A special software environment has been developed for TTL, featuring both a Property Editor for building and editing TTL properties and a Checking Tool that enables formal verification of such properties against a set of (simulated or empirical) traces. A toy illustration of these notions is sketched below.
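The following Python sketch (not the actual TTL tooling) models a trace as a mapping from time points to states, reduces the satisfaction relation to set membership, and checks a simple bounded-response property; all atom names are invented for the example:

# Toy illustration of the TTL notions above: a trace maps time points to
# states (sets of true ground atoms); state(trace, t) |= p is membership.
trace = {
    1: {"observation_focus(plane1)"},
    2: {"observation_focus(plane1)", "dangerous(plane2)"},
    3: {"dangerous(plane2)", "warning_issued(plane2)"},
}

def holds(trace, t, p):                 # state(trace, t) |= p
    return p in trace.get(t, set())

# Example dynamic property: whenever a dangerous item appears, a warning
# is issued within `delay` time units (a bounded-response pattern).
def warning_within(trace, item, delay=1):
    return all(
        any(holds(trace, u, "warning_issued(%s)" % item)
            for u in range(t, t + delay + 1))
        for t in trace
        if holds(trace, t, "dangerous(%s)" % item)
    )

print(warning_within(trace, "plane2"))  # -> True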

Executable Format To specify simulation models and to execute these models, the language
LEADSTO, an executable sublanguage of TTL, is used. The basic building blocks of this language
are causal relations of the format α →→_{e, f, g, h} β, which means:

if    state property α holds for a certain time interval with duration g,
then  after some delay (between e and f) state property β will hold for a certain time interval of length h,

where α and β are state properties of the form 'conjunction of literals' (where a literal is an atom or the negation of an atom), and e, f, g, h are non-negative real numbers. A minimal interpreter for a simplified form of this semantics is sketched below.
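The following Python sketch simplifies the delay interval by letting β hold from t+e up to (but excluding) t+f+h whenever α has held for the preceding g steps; all names are illustrative, and the actual LEADSTO simulation tool is considerably more general:

# Minimal sketch of executing one LEADSTO rule alpha ->>_{e,f,g,h} beta.
def run_leadsto(alpha, beta, e, f, g, h, horizon):
    states = {t: set() for t in range(horizon)}
    for t in range(g):                    # make alpha hold for duration g
        states[t].add(alpha)
    for t in range(horizon):
        # Did alpha hold throughout the interval [t-g+1, t]?
        held_for_g = all(alpha in states.get(t - d, set()) for d in range(g))
        if held_for_g:
            # Beta holds after a delay between e and f, for duration h.
            for u in range(t + e, min(t + f + h, horizon)):
                states[u].add(beta)
    return states

states = run_leadsto("sound(loud)", "belief(aggression_in_crowd)",
                     e=1, f=2, g=2, h=3, horizon=10)
for t, s in sorted(states.items()):
    print(t, sorted(s))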

3.3 Global structure of the agent-based generic model


For the global structure of the model, first a distinction is made between those components that are
the subject of the system (e.g., a patient to be taken care of), and those that are ambient, supporting
components. Moreover, from an agent-based perspective (see, for example, [Brazier et al. (2000)],
[Brazier et al. (2002)]), a distinction is made between active, agent components (human or artificial),
and passive, world components (e.g., part of the physical world or a database). Furthermore, within
an agent a mind may be distinguished from a physical body. This yields the types of components shown in Figure 3.1. Here the dotted rectangles depict agents with mind and body
distinguished within them, and the other geometrical shapes denote world components. Given the
distinctions made between components, interactions between such components are of different types
as well. Figure 3.1 depicts a number of possible interactions by the arrows. Table 3.1 shows an
overview of the possible interactions.

Interaction Between Agents Interaction between two agents may be communication or bodily interaction, for example, fighting. When within the agent a distinction is made between mind and body, communication can be modelled as information transfer between one agent's mind and another agent's mind. Whether, for a given application of the generic model, a mind and a body are distinguished within agents depends on the assumptions made about the application domain. If it is assumed that communication is independent of and cannot be affected by other processes in the world, then communication can most efficiently be modelled as information transfer between minds. If, in contrast, it is to be modelled how communication is affected by other processes in the world (e.g., effects on the quality of a channel or network), then it is more adequate to model communication as bodily interaction. Obviously, also in cases where it is to be modelled how agents affect each other's bodies, as in fighting, the latter is the most adequate option.
Fig. 3.1 Different types of components and interactions

Agent-World Interaction Interaction between an agent and a world component can be either observation or action performance; cf. [Brazier et al. (2000)]. An action is generated by an agent, and transfers to a world component to have its effect there. An observation has two directions: the
observation focus is generated by an agent and transfers to a world component (providing access to
a certain aspect of the world), and the provision of the observation result is generated by the world
component and transfers to the agent. Combinations of interactions are possible, such as performing
an action and observing the effect of the action afterwards. When the agent's body is distinguished from its mind, interaction between agent and world can be modelled as transfer between this body and a world component. In addition, interaction between the agent's mind and its body (the vertical arrows in Figure 3.1) can be used to model the effect of mental processes (deciding on actions and observations to undertake) on the agent-world interaction and vice versa (incorporating observation results). Also here, whether for a given application of the generic model interaction between an agent and the world is modelled according to the first or the second option depends on the assumptions made about the application domain. If it is assumed that performance of an intended action generated by the mind has a direct effect on the world and has no relevant effect on an agent's body, then it can most efficiently be modelled according to the first option. If, in contrast, it is to be modelled how actions and observations are also affected by other processes in the body or world, then the second option is more adequate. Also in cases where it is to be modelled how the world affects an agent's body, the second option is obviously the most adequate.

The naval officer example Table 3.2 illustrates the different types of components and interactions for a case concerning a naval officer, as briefly explained in the introduction. The officer keeps track of incoming planes on a radar screen, and acts on those classified as dangerous.
Generic State Ontologies at the Global Level For the information exchanged between components at the global level, generic ontologies have been specified. This has been done in a universal order-sorted predicate logic format that can easily be translated into more specific ontology languages.
Table 3.3 provides an overview of the generic sorts and predicates used in interactions at the global
level. Examples of the use of this ontology will be found in the case studies.
Generic Temporal Relations for Interaction at the Global Level Interaction between
global level components is defined by the following specifications. Note that in such specifications,
for state properties the prefix input, output or internal is used. This is an indexing of the language
elements to indicate that it concerns specific variants of them either present at the input, output or
internally within the agent.

Table 3.1 Different types of interaction (from row component to column component)

from subject agent:
  to subject agent: subject communication; subject body interaction
  to subject world component: subject observation focus; subject action performance; subject body-world interaction
  to ambient agent: subject-ambient communication; subject-ambient body interaction
  to ambient world component: subject-ambient observation focus; subject-ambient action performance; subject-ambient body-world interaction

from subject world component:
  to subject agent: subject observation result; subject world-body interaction
  to subject world component: subject world component interaction
  to ambient agent: subject-ambient observation result; subject-ambient world-body interaction
  to ambient world component: subject-ambient world component interaction

from ambient agent:
  to subject agent: ambient-subject communication; ambient-subject body interaction
  to subject world component: ambient-subject observation focus; ambient-subject action performance; ambient-subject body-world interaction
  to ambient agent: ambient communication; ambient body interaction
  to ambient world component: ambient observation focus; ambient action performance; ambient body-world interaction

from ambient world component:
  to subject agent: ambient-subject observation result; ambient-subject world-body interaction
  to subject world component: ambient-subject world component interaction
  to ambient agent: ambient observation result; ambient world-body interaction
  to ambient world component: ambient world component interaction

Table 3.2 Components and interactions for a naval officer case

subject components:
  subject agents: human naval officer
  subject world components: radar screen with moving planes
subject interactions (observation and action by subject agent):
  naval officer gaze focuses on radar screen with planes, extracts information from radar screen view; naval officer acts on planes that are dangerous
ambient components (ambient agents):
  dynamic task allocation agent (including an eye tracker), task-specific agent
ambient interactions (communication between ambient agents):
  communication between task allocation agent and task-specific agent on task requests
interactions between subject and ambient:
  communication: task allocation agent communicates over-looked dangerous item to naval officer
  observation and action: ambient agent has observation focus on radar screen and naval officer gaze; ambient agent extracts info from views

Action Propagation from Agent to World Component

∀X:AGENT ∀W:WORLD ∀A:ACTION
output(X)|performing_in(A, W) ∧ can_perform_in(X, A, W)
→→ input(W)|performing_in(A, W)

Observation Focus Propagation from Agent to World Component

∀X:AGENT ∀W:WORLD ∀I:INFO_EL
output(X)|observation_focus_in(I, W) ∧ can_observe_in(X, I, W)
→→ input(W)|observation_focus_in(I, W)

Observation Result Propagation from World to Agent

∀X:AGENT ∀W:WORLD ∀I:INFO_EL
output(W)|observation_result_from(I, W) ∧ can_observe_in(X, I, W)
→→ input(X)|observed_result_from(I, W)

Communication Propagation Between Agents

∀X,Y:AGENT ∀I:INFO_EL
output(X)|communication_from_to(I, X, Y) ∧ can_communicate_with_about(X, Y, I)
→→ input(Y)|communicated_from_to(I, X, Y)


Table 3.3 Generic Ontology for Interaction at the Global Level

SORT: Description
ACTION: an action
AGENT: an agent
INFO_EL: an information element, possibly complex (e.g., a conjunction of other info elements)
WORLD: a world component

Predicate: Description
performing_in(A:ACTION, W:WORLD): action A is performed in W
observation_focus_in(I:INFO_EL, W:WORLD): observation focus is I in W
observation_result_from(I:INFO_EL, W:WORLD): observation result from W is I
communication_from_to(I:INFO_EL, X:AGENT, Y:AGENT): information I is communicated by X to Y
communicated_from_to(I:INFO_EL, X:AGENT, Y:AGENT): information I was communicated by X to Y
can_observe_in(X:AGENT, I:INFO_EL, W:WORLD): agent X can observe I within world W
can_perform_in(X:AGENT, A:ACTION, W:WORLD): agent X can perform action A within W
can_communicate_with_about(X:AGENT, Y:AGENT, I:INFO_EL): agent X can communicate with Y about I

3.4 Generic ambient agent and world model

This section focuses on the ambient agents within the generic model. As discussed in Section 3.3, ambient agents can have various types of interactions. Moreover, they are assumed to maintain knowledge about certain aspects of human functioning in the form of internally represented dynamic models, and information about the current state and history of the world and other agents. Based on this knowledge they are able to have a more in-depth understanding of the human processes, and can behave accordingly. This section presents an ambient agent model that incorporates all of these.

Components within the Ambient Agent Model In [Brazier et al. (2000)] the component-based Generic Agent Model (GAM) is presented, formally specified in DESIRE [Brazier et al.
(2002)]. The process control model was combined with this agent model GAM. Within GAM the
component World Interaction Management takes care of interaction with the world, the component
Agent Interaction Management takes care of communication with other agents. Moreover, the component Maintenance of World Information maintains information about the world, and the component
Maintenance of Agent Information maintains information about other agents. In the component Agent
Specific Task, specific tasks can be modelled. Adopting this component-based agent model GAM,
the Ambient Agent Model has been obtained as a refinement, by incorporating components of the
generic process control model described above.
The component Maintenance of Agent Information has three subcomponents. The subcomponent Maintenance of a Dynamic Agent Model maintains the causal and temporal relationships for the subject agent's functioning. For example, this may model the relationship between a naval officer's gaze direction, characteristics of an object at the screen, and the attention level for this object. The subcomponent Maintenance of an Agent State Model maintains a snapshot of the (current) state of the agent. As an example, this may model the gaze direction, or the level of attention for a certain object at the screen. The subcomponent Maintenance of an Agent History Model maintains the history of the (current) state of the agent. This may for instance model the trajectory of the gaze direction, or the level of attention for a certain object at the screen over time.
Similarly, the component Maintenance of World Information has three subcomponents for a dynamic world model, a world state model, and a world history model, respectively. Moreover, the component Agent Specific Task has the following three subcomponents, devoted to the agent's process control task. The subcomponent Simulation Execution extends the information in the agent state model based on the internally represented dynamic agent model for the subject agent's functioning. For example, this may determine the attention level from a naval officer's gaze direction, the characteristics of an object at the screen, and his previous attention level. The subcomponent Process
Analysis assesses the current state of the agent. For instance, this may determine that a dangerous
item has a level of attention that is too low. This component may use different generic methods of
assessment, among which (what-if) simulations and (model-based) diagnostic methods, based on the
dynamic and state models as maintained. The subcomponent Plan Determination determines whether actions have to be undertaken, and, if so, which ones (e.g. to determine that the dangerous item with low attention from the naval officer has to be handled by another agent).
Finally, as in the model GAM, the components World Interaction Management and Agent Interaction Management prepare (based on internally generated information) and receive (and internally
forward) interaction with the world and other agents. Table 3.4 provides an overview of the different
components within the Ambient Agent Model, illustrated for the case of the naval officer.
Table 3.4 Components within the Ambient Agent Model

Maintenance of Agent Information
  maintenance of dynamic models: model relating attention state to human body state and world state
  maintenance of state models: model of attention state and gaze state of the naval officer (subject agent); model of state of radar screen with planes (subject world component)
  maintenance of history models: model of gaze trajectory and attention over time
Maintenance of World Information (similar to Maintenance of Agent Information)
Agent Specific Task
  simulation execution: update the naval officer's attention state from gaze and radar screen state
  process analysis: determine whether a dangerous item is overlooked
  plan determination: determine an option to address overlooked dangerous items (to warn the naval officer, or to allocate another human or ambient agent to this task)
World Interaction Management
  processing received observation results of screen and gaze
Agent Interaction Management
  preparing a warning to the officer; preparing a request to take over a task

Generic State Ontologies within Ambient Agent and World To express the information involved in the agent's internal processes, the ontology shown in Table 3.5 was specified.
Table 3.5 Generic Ontology used within the Ambient Agent Model

Predicate: Description
belief(I:INFO_EL): information I is believed
world_fact(I:INFO_EL): I is a world fact
has_effect(A:ACTION, I:INFO_EL): action A has effect I

Function to INFO_EL: Description
leads_to_after(I:INFO_EL, J:INFO_EL, D:REAL): state property I leads to state property J after duration D
at(I:INFO_EL, T:TIME): state property I holds at time T

As an example, belief(leads_to_after(I:INFO_EL, J:INFO_EL, D:REAL)) is an expression based on this ontology, which represents that the agent has the knowledge that state property I leads to state property J with a certain time delay specified by D. This can provide enhanced context awareness (in addition to information obtained by sensing).

Generic Temporal Relations within an Ambient Agent The temporal relations for the
functionality within the Ambient Agent are as follows.
Belief Generation based on Observation, Communication and Simulation


∀X:AGENT ∀I:INFO_EL ∀W:WORLD
input(X)|observed_from(I, W) ∧ internal(X)|belief(is_reliable_for(W, I))
→→ internal(X)|belief(I)

∀X,Y:AGENT ∀I:INFO_EL
input(X)|communicated_from_to(I, Y, X) ∧ internal(X)|belief(is_reliable_for(Y, I))
→→ internal(X)|belief(I)

∀X:AGENT ∀I,J:INFO_EL ∀D:REAL ∀T:TIME
internal(X)|belief(at(I, T)) ∧ internal(X)|belief(leads_to_after(I, J, D))
→→ internal(X)|belief(at(J, T+D))

Here, the first rule is a generic rule for the component World Interaction Management. Similarly, the second rule is a generic rule for the component Agent Interaction Management. When the sources are assumed to be always reliable, the conditions on reliability can be left out of the first two rules. The last generic rule, within the agent's component Simulation Execution, specifies how a dynamic model that is explicitly represented as part of the agent's knowledge (within its component Maintenance of Dynamic Models) can be used to perform simulation, thus extending the agent's beliefs about the world state at different points in time. This can be considered an internally represented deductive causal reasoning method. As another option, an abductive causal reasoning method can be internally represented in a simplified form as follows.
Belief Generation based on Simple Abduction

∀X:AGENT ∀I,J:INFO_EL ∀D:REAL ∀T:TIME
internal(X)|belief(at(J, T)) ∧ internal(X)|belief(leads_to_after(I, J, D))
→→ internal(X)|belief(at(I, T-D))
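The following Python sketch illustrates how these deductive and abductive rules operate over a simple belief base of at(I, T) facts and a leads_to_after model; it is a toy rendering of the mechanism, not the actual implementation, and the fact names are invented:

# Deduction: belief(at(I,T)) & belief(leads_to_after(I,J,D)) yields
# belief(at(J, T+D)); abduction runs the model backwards to at(I, T-D).
model = {("alcohol_level_high", "driver_assessment_negative"): 2}  # I->J, D

def deduce(beliefs):
    new = set(beliefs)
    for (i, t) in beliefs:
        for (src, dst), d in model.items():
            if src == i:
                new.add((dst, t + d))
    return new

def abduce(beliefs):
    new = set(beliefs)
    for (j, t) in beliefs:
        for (src, dst), d in model.items():
            if dst == j:
                new.add((src, t - d))
    return new

print(deduce({("alcohol_level_high", 5)}))            # effect at time 7
print(abduce({("driver_assessment_negative", 7)}))    # cause at time 5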

Generic Temporal Relations within a World For World Components, the following specifications indicate the actions' effects and how observations provide their results.

Action Execution and Observation Result Generation in the World

∀W:WORLD_COMP ∀A:ACTION ∀I:INFO_EL
input(W)|performing_in(A, W) ∧ internal(W)|has_effect(A, I)
→→ internal(W)|world_fact(I)

∀W:WORLD_COMP ∀I:INFO_EL
input(W)|observation_focus_in(I, W) ∧ internal(W)|world_fact(I)
→→ output(W)|observation_result_from(I, W)

∀W:WORLD_COMP ∀I:INFO_EL
input(W)|observation_focus_in(I, W) ∧ internal(W)|world_fact(not(I))
→→ output(W)|observation_result_from(not(I), W)

3.5 Case study 1: An ambient driver support system


One of the application cases addressed to evaluate the applicability of the generic model is an ambient driver support system (see Table 3.6 and Figure 3.2). This example was inspired by a system that is currently under development by Toyota. It is a fail-safe system for cars that analyses whether a driver is drunk and, in that case, automatically shuts the vehicle down. The system uses sensors that analyse sweat on the palms of the driver's hands to assess the blood alcohol level, and does not allow the vehicle to be started if the reading is above specified safety limits. The system can also kick in if sensors detect abnormal steering operations, or if a special camera shows that the driver's gaze is not focused. The car is then slowed to a halt. The system makes use of a dynamic model of a driver's functioning expressing that a high alcohol level in the blood leads to measurable alcohol in the sweat, and to observable behaviour showing abnormal steering operation and unfocused gaze.
For the ambient driver support case, several domain-specific rules have been identified in addition to the generic rules specified in Sections 3.3 and 3.4. Some of the key rules are expressed below. For all domain-specific rules, see Appendix 3.10. First of all, within the Driver Assessment Agent an explicit representation is present of a dynamic model of the driver's functioning.
In this model it is represented how a high alcohol level in the blood has physiological and behavioural consequences that can be observed: (1) physiological: a high alcohol level in the sweat, (2)


Table 3.6 Components and Interactions of the Ambient Driver Support System

subject components:
  subject agents: human driver
  subject world components: car and environment
subject interactions (observation and action by subject agent in subject world component):
  driver observes car and environment, operates car and gaze
ambient components (ambient agents):
  steering, gaze-focus, and alcohol-level sensoring agents; steering, gaze-focus, and alcohol-level monitoring agents; driver assessment agent; cruise control agent
ambient interactions (communication between ambient agents):
  steering sensoring agent communicates to steering monitoring agent
  gaze-focus sensoring agent communicates gaze focus to gaze-focus monitoring agent
  alcohol-level sensoring agent communicates to alcohol-level monitoring agent
  alcohol-level monitoring agent reports alcohol level to driver assessment agent
  gaze-focus monitoring agent reports unfocused gaze to driver assessment agent
  steering monitoring agent reports abnormal steering to driver assessment agent
  driver assessment agent communicates state of driver to cruise control agent
interactions between subject and ambient (observation and action by ambient agent in subject world component):
  steering sensoring agent observes steering wheel operation
  gaze-focus sensoring agent observes driver body gaze focus
  alcohol-level sensoring agent measures alcohol level in sweat of driver hand palms
  cruise control agent slows down car or stops engine

Fig. 3.2 Case Study: Ambient Driver Support System

behavioural: abnormal steering operation and an unfocused gaze. The dynamic model is represented by the following beliefs in the component Maintenance of Dynamic Models:

internal(driver_assessment_agent)|belief(leads_to_after(alcohol_level_high, driver_assessment(negative), D))
internal(driver_assessment_agent)|belief(leads_to_after(abnormal_steering_operation ∧ unfocused_gaze, driver_assessment(negative), D))


The Driver Assessment Agent receives this observable information from the various monitoring
agents, of which the precise specification has been omitted for the sake of brevity. By the simple
abductive reasoning method specified by the generic temporal rule in Section 3.4, when relevant the
Driver Assessment Agent can derive that the driver has a high alcohol level, from which the agent
concludes that the driver assessment is negative. These are stored as beliefs in the component Maintenance of an Agent State Model and communicated to the Cruise Control Agent. The Cruise Control
Agent takes the appropriate measures. The first temporal rule specifies that if the driver assessment
is negative, and the car is not driving, then the ignition of the car is blocked:
internal(cruise_control_agent)|belief(driver_assessment(negative)) ∧
internal(cruise_control_agent)|belief(car_is_not_driving)
→→ output(cruise_control_agent)|performing_in(block_ignition, car_and_environment)

If the car is already driving, whereas the assessment is negative, the car is slowed down.
internal(cruise_control_agent)|belief(driver_assessment(negative)) ∧
internal(cruise_control_agent)|belief(car_is_driving)
→→ output(cruise_control_agent)|performing_in(slow_down_car, car_and_environment)
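These two plan-determination rules can be rendered as a small Python sketch, a toy version under the assumption that the agent's relevant beliefs are available as a dictionary, with the two driving-state beliefs collapsed into one boolean:

# Sketch of the Cruise Control Agent's plan determination; belief names
# mirror the rules above, but the data structure is illustrative only.
def cruise_control_action(beliefs):
    if beliefs.get("driver_assessment") != "negative":
        return None                      # no intervention needed
    if beliefs.get("car_is_driving"):
        return "performing_in(slow_down_car, car_and_environment)"
    return "performing_in(block_ignition, car_and_environment)"

print(cruise_control_action({"driver_assessment": "negative",
                             "car_is_driving": True}))
# -> performing_in(slow_down_car, car_and_environment)
print(cruise_control_action({"driver_assessment": "negative",
                             "car_is_driving": False}))
# -> performing_in(block_ignition, car_and_environment)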

Based upon such temporal rules, simulation runs of the system have been generated, of which an
example trace is shown in Figure 3.3. In the figure, the left side indicates the atoms that occur during
the simulation whereas the right side indicates a time line where a dark box indicates the atom is true
at that time point and a grey box indicates false.
In the trace, the initial alcohol level in the sweat is 0.4 per mille, which is below the maximum allowed level of 0.5 per mille:

internal(alcohol_level_sensoring_agent)|observed_result_from(alcohol_level(0.4), driver)

The driver starts the car and accelerates, resulting in a driving car.
internal(car_and_environment)|world_fact(car_driving)

Fig. 3.3 Example simulation trace of the ambient driver support system


However, after a while the driver's alcohol level rises to 0.6 per mille, which is classified as high by the Alcohol Level Monitoring Agent, and this is communicated to the Driver Assessment Agent:

input(driver_assessment_agent)|communicated_from_to(alcohol_level_high, alcohol_level_monitoring_agent, driver_assessment_agent)

By the abductive reasoning method this agent assesses the driver as negative, which is communicated to the Cruise Control Agent, which starts to intervene. First it slows down the car, and after it has stopped, the agent blocks the ignition:

output(cruise_control_agent)|performing_in(slow_down_car, car_and_environment)
output(cruise_control_agent)|performing_in(block_ignition, car_and_environment)

A more elaborated description of the model and the simulation results can be found in [Bosse et al.
(2008a)].

3.6 Case study 2: Ambient aggression handling system


This case study is inspired by a system which is operational in the city of Groningen, the Netherlands
(see Table 3.7 and Figure 3.4). It makes use of a camera that is equipped with a microphone, and
mounted at places where aggression could occur, for example in railway stations or near bars in city
centres.
Table 3.7 Components and Interactions of the Ambient Aggression Handling System

subject components (subject agents):
  persons in crowd
ambient components (ambient agents):
  camera, microphone, camera control, and sound analysis agents; police officer at station; police officer at street
ambient interactions (communication between ambient agents):
  camera control agent communicates to camera agent that inspection is needed
  camera agent communicates pictures of scene to officer at police station
  microphone agent communicates sound to sound analysis agent
  sound analysis agent communicates to camera control agent that inspection is needed
  sound analysis agent communicates to police officer at station that inspection is needed, and the sound
  police officer at station communicates to police officer at street that inspection is needed
interactions between subject and ambient (observation and action by ambient agent in subject world component):
  camera agent observes persons
  microphone agent observes sounds
  police officer at street stops aggression of persons

Initially the system only records the sound, which is dynamically analysed by an aggression
detection system. As soon as this system detects that the recorded sound is different (more aggressive)
from the standard background noise, it turns on the camera and warns the officers at the police station.
Subsequently, the police can assess the situation remotely using the camera pictures, and if necessary,
they can send police officers to the place to stop the aggression. Also for the ambient aggression
handling system, a number of domain-specific temporal rules have been established. For a complete
overview of all domain specific rules, see Appendix 3.11. First of all, the component Maintenance of
Dynamic Models within the Sound Analysis Agent contains a representation of a dynamic model of
aggression and consequences thereof. A main consequence considered here is that aggression leads
to sounds and sound levels that deviate from the normal sounds. This is represented as

internal(sound_analysis_agent)|belief(leads_to_after(aggression_in_crowd, sound(loud), D))

stating that aggression in the crowd leads to sounds with a certain frequency (for simplicity represented as sound(loud)).
Fig. 3.4 Case Study: Ambient Aggression Handling System

The latter is observable information, so when this comes in, by a simple abductive reasoning
method the Sound Analysis Agent concludes a belief that there is aggression in the crowd; this
information is transferred to the Camera Control Agent, upon which the latter agent communicates
a request for view to the Camera Agent. This is done via the following rule (which is part of the
component Plan Determination of the Camera Control Agent):
internal(camera_control_agent)|belief(aggression_in_crowd)
→→ output(camera_control_agent)|communication_from_to(inspection_needed, camera_control_agent, camera_agent)

Eventually, when the current sound and the view are perceived, both types of information are
transferred to the police officer at the station. For the simulation, this police officer uses the following
temporal rule (which is part of the component Process Analysis of the police officer) to conclude that
there is probably aggression in the crowd:
∀S:SOUND, V:VIEW
internal(police officer at station)|belief(inspection needed) ∧
internal(police officer at station)|belief(sound(S)) ∧
internal(police officer at station)|belief(view(V)) ∧
internal(police officer at station)|belief(sound view classification(S, V, aggressive))
→ internal(police officer at station)|belief(aggression in crowd)

If this officer concludes the belief that there is aggression in the crowd, the police officer at the
station notifies the police officer at the street that inspection is needed. As a result, this police officer
will go to the location of the aggression to observe the actual situation. He will use a similar rule to
the one above to conclude that there is indeed aggression, and if this is the case, he will perform the
action of stopping the aggression. An example trace that was generated on the basis of these temporal
rules is shown in Figure 3.5.


As seen in this trace, from the start of the simulation, there is aggression in the crowd, which is
indicated by a loud sound and the view of fighting persons:
internal(persons in crowd)|world fact(sound(loud))
internal(persons in crowd)|world fact(view(fighting persons))

The Microphone Agent transfers the sound to the Sound Analysis Agent:
output(microphone agent)|communication from to(sound(loud), microphone agent, sound analysis agent)

By simple abductive reasoning the Sound Analysis Agent generates the belief that there is aggression,
and informs the Camera Control Agent and the police officer at the station:
output(sound analysis agent)|communication from to(inspection needed, sound analysis agent,
camera control agent)
output(sound analysis agent)|communication from to(inspection needed, sound analysis agent,
police officer at station)
output(sound analysis agent)|communication from to(sound(loud), sound analysis agent, police officer at station)

output(microphone_agent)|observation_focus_in(sound(loud), persons_in_crowd)
output(microphone_agent)|observation_focus_in(sound(quiet), persons_in_crowd)
internal(persons_in_crowd)|world_fact(sound(loud))
internal(persons_in_crowd)|world_fact(view(fighting_persons))
input(microphone_agent)|observed_result_from(sound(loud), persons_in_crowd)
internal(microphone_agent)|belief(sound(loud))
output(microphone_agent)|communication_from_to(sound(loud), microphone_agent, sound_analysis_agent)
input(sound_analysis_agent)|communicated_from_to(sound(loud), microphone_agent, sound_analysis_agent)
internal(sound_analysis_agent)|belief(sound(loud))
internal(sound_analysis_agent)|belief(aggression_in_crowd)
output(sound_analysis_agent)|communication_from_to(inspection_needed, sound_analysis_agent, camera_control_agent)
output(sound_analysis_agent)|communication_from_to(inspection_needed, sound_analysis_agent, police_officer_at_station)
output(sound_analysis_agent)|communication_from_to(sound(loud), sound_analysis_agent, police_officer_at_station)
input(camera_control_agent)|communicated_from_to(inspection_needed, sound_analysis_agent, camera_control_agent)
input(police_officer_at_station)|communicated_from_to(inspection_needed, sound_analysis_agent, police_officer_at_station)
input(police_officer_at_station)|communicated_from_to(sound(loud), sound_analysis_agent, police_officer_at_station)
internal(camera_control_agent)|belief(inspection_needed)
internal(police_officer_at_station)|belief(inspection_needed)
internal(police_officer_at_station)|belief(sound(loud))
internal(camera_control_agent)|belief(aggression_in_crowd)
output(camera_control_agent)|communication_from_to(inspection_needed, camera_control_agent, camera_agent)
input(camera_agent)|communicated_from_to(inspection_needed, camera_control_agent, camera_agent)
internal(camera_agent)|belief(inspection_needed)
output(camera_agent)|observation_focus_in(view(calm_persons), persons_in_crowd)
output(camera_agent)|observation_focus_in(view(fighting_persons), persons_in_crowd)
input(camera_agent)|observed_result_from(view(fighting_persons), persons_in_crowd)
internal(camera_agent)|belief(view(fighting_persons))
output(camera_agent)|communication_from_to(view(fighting_persons), camera_agent, police_officer_at_station)
input(police_officer_at_station)|communicated_from_to(view(fighting_persons), camera_agent, police_officer_at_station)
internal(police_officer_at_station)|belief(view(fighting_persons))
internal(police_officer_at_station)|belief(aggression_in_crowd)
output(police_officer_at_station)|communication_from_to(inspection_needed, police_officer_at_station, police_officer_at_street)
input(police_officer_at_street)|communicated_from_to(inspection_needed, police_officer_at_station, police_officer_at_street)
internal(police_officer_at_street)|belief(inspection_needed)
output(police_officer_at_street)|observation_focus_in(sound(loud), persons_in_crowd)
output(police_officer_at_street)|observation_focus_in(sound(quiet), persons_in_crowd)
output(police_officer_at_street)|observation_focus_in(view(calm_persons), persons_in_crowd)
output(police_officer_at_street)|observation_focus_in(view(fighting_persons), persons_in_crowd)
input(police_officer_at_street)|observed_result_from(sound(loud), persons_in_crowd)
input(police_officer_at_street)|observed_result_from(view(fighting_persons), persons_in_crowd)
internal(police_officer_at_street)|belief(sound(loud))
internal(police_officer_at_street)|belief(view(fighting_persons))
internal(police_officer_at_street)|belief(aggression_in_crowd)
output(police_officer_at_street)|performing_in(stop_aggression, persons_in_crowd)
input(persons_in_crowd)|performing_in(stop_aggression, persons_in_crowd)
internal(persons_in_crowd)|world_fact(aggression_stops)
internal(persons_in_crowd)|world_fact(sound(quiet))
internal(persons_in_crowd)|world_fact(view(calm_persons))
Fig. 3.5  Example simulation trace of the ambient aggression handling system (the state atoms listed above are plotted against time; the horizontal axis runs from 0 to 100)

Next, the Camera Control Agent informs the Camera Agent that inspection is needed:
output(camera control agent)|communication from to(inspection needed, camera control agent, camera agent)

The camera agent observes the fighting persons:


input(camera agent)|observed result from(view(fighting persons), persons in crowd)

This information is then transferred to the police officer at the station, who generates the belief that
there is aggression in the crowd:
internal(police officer at station)|belief(aggression in crowd)

After this, the police officer at the station notifies the police officer at the street that (further) inspection is needed, who confirms that there is indeed aggression in the crowd, and undertakes the action
of stopping the aggression (which eventually results in a quiet and calm environment):
output(police officer at street)|performing in(stop aggression, persons in crowd)

3.7 Case study 3: Ambient system for management of medicine usage


Another case used to evaluate the generic model concerns management of medicine usage; e.g.,
[Green (2005)]. Figure 3.6 presents an overview of the entire system as considered. Two world
components are present in this system: the medicine box, and the patient database; the other components are agents. The top right corner shows the patient, who interacts with the medicine box, and
communicates with the patient phone.

Fig. 3.6  Case Study: Ambient System for Management of Medicine Usage (showing the Medicine Box Agent, the Usage Support Agent, the patient, the medicine box, the patient phone, the patient database, the doctor phone, and the doctor)

The (ambient) Usage Support Agent has a dynamic model of the medicine concentration in the
patient. This model is used to estimate the current concentration, which is also communicated to
the (ambient) Medicine Box Agent. The Medicine Box Agent monitors whether medicine is taken
from the box, and the position thereof in the box. In case, for example, the patient intends to take
the medicine too soon after the previous dose, the agent finds out that the medicine should not be taken at
that moment (i.e., the sum of the estimated current medicine level plus a new dose would be too high), and


communicates a warning to the patient by a beep sound. Furthermore, all information obtained by
this agent is passed on to the (ambient) Usage Support Agent. All information about medicine usage
is stored in the patient database by this agent. If the patient tried to take the medicine too early, a
warning SMS with a short explanation is communicated to the cell phone of the patient, in addition
to the beep sound already communicated by the Medicine Box Agent. On the other hand, in case the
Usage Support Agent finds out that the medicine is not taken on time (i.e., the medicine concentration
is estimated too low for the patient and no medicine was taken yet), it can take measures as well. First
of all, it can warn the patient by communicating an SMS to the patient cell phone. This is done soon
after the patient should have taken the medicine. In case the patient still does not take medicine
(for example after a number of hours), the agent can communicate an SMS to the cell phone of the
appropriate doctor. The doctor can look into the patient database to see the medicine usage, and
in case the doctor feels it is necessary to discuss the state of affairs with the patient, he or she can
contact the patient via a call from the doctor cell phone to the patient cell phone. Table 3.8 presents
an overview of the various components and their interactions.
The specification of the interaction between the various components within the medicine box
case is similar to that of the two other cases and has therefore been omitted for the sake of brevity; see
Appendix 3.12 for more details. One major difference, however, is the model the usage support agent
has of the patient. The agent maintains a quantitative model of the medicine level of the patient using
the following knowledge:
internal(usage support agent)|belief(leadsto after(
medicine level(M, C) ∧ usage effect(M, E) ∧ decay(M, G),
medicine level(M, (C+E) - G*(C+E)*D), D))

This model basically specifies that a current medicine level C of medicine M, a known usage
effect E at the current time point, and a decay value G lead to a belief of a new
medicine level (C+E) - G*(C+E)*D after duration D. Figure 3.7 below shows how the medicine
level varies over time when the ambient system supports the medicine usage versus the case where the
system is not active. The minimum medicine level required in the patient's blood is 0.3, whereas the
maximum allowed medicine level is 1.4. As can be seen, the medicine level using the system does
meet these demands, whereas without support the level does not.
A more elaborate description of the model and the simulation results can be found in [Hoogendoorn
et al. (2008)].
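As a concrete illustration of this update rule, the following is a minimal Python sketch of the medicine-level dynamics; the dose, decay value, and dosing schedule are illustrative assumptions, not values from the chapter.

def next_level(c, e, g, d):
    # One model step: level C plus usage effect E, minus decay G over duration D
    return (c + e) - g * (c + e) * d

level, decay, dose, step = 0.0, 0.05, 0.6, 1.0   # hypothetical parameter values
for t in range(48):
    effect = dose if t % 12 == 0 else 0.0        # assume a dose every 12 steps
    level = next_level(level, effect, decay, step)
    assert level <= 1.4                          # stays below the allowed maximum

With these made-up parameters the level stays within the 0.3 to 1.4 band once the first dose has been absorbed, which is the kind of behaviour Figure 3.7 shows for the supported case.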

3.8 Specification and verification of dynamic properties


This section addresses specification and verification of relevant dynamic properties (expressed as
formulae in the TTL language) of the cases considered, for example, requirements imposed on these
systems. In the future, the IFIP properties on user interfaces for AmI applications can also be tested
[Gram and Cockton (1996)].

Properties of the system as a whole A natural property of the Ambient Driver Support System
is that a drunken driver cannot continue driving. A driver is considered drunk if the blood alcohol
level is above threshold a. The global properties (GP) of the presented systems (abbreviated as ADSS,
AAHS and AMUMS respectively) are:
GP1(ADSS) No drunken driver
If the driver's blood alcohol level is above threshold a, then within 30 seconds the car will not drive
and the engine will be off.
∀γ:TRACE, t:TIME, R:REAL
state(γ, t, internal(driver)) |= world fact(alcohol level(R)) & R > a
⇒ ∃t2:TIME < t + 30 [state(γ, t2, internal(car and environment)) |= world fact(car not driving)]



Table 3.8  Components and interactions of the ambient medicine usage management system

subject components
  subject agents: human patient
  subject world components: medicine box

subject interactions
  observation and action by subject agent in subject world component:
    patient takes or puts medicine from a particular compartment in the medicine box

ambient components
  ambient agents: medicine box agent, usage support agent, patient and doctor phone, human doctor
  ambient world components: patient database

ambient interactions
  communication between ambient agents:
    medicine box agent communicates to the medicine usage support agent that a pill has been taken out of or added to a compartment of the box
    medicine usage support agent communicates to the patient cell phone agent a text message that medicine needs to be taken
    medicine usage support agent communicates to the doctor cell phone agent a text message that a certain patient has not taken his medicine for a certain duration
    doctor cell phone communicates to patient cell phone (and vice versa)
    doctor cell phone communicates to the doctor a text message that a certain patient has not taken his medicine for a certain duration
  observation and action by ambient agent in ambient world component:
    usage support agent adds and retrieves information to/from the patient database

interactions between subject and ambient
  communication between ambient agent and subject agent:
    medicine box agent communicates to the patient a warning beep
    patient communicates with the patient phone
    doctor communicates with the doctor phone
  observation by ambient agent in subject world component:
    medicine box agent focuses on the medicine box and receives observation results from the medicine box


For the Ambient Aggression Handling System a similar property is that aggression is stopped as soon
as it occurs. Here for the example a situation is considered aggressive if persons are fighting and there
is a high sound level.
GP1(AAHS) No aggression
If the persons in the crowd are fighting and noisy, then within 35 time steps they will be calm and
quiet.
∀γ:TRACE, t:TIME
state(γ, t, internal(persons in crowd)) |= world fact(view(fighting persons)) &
state(γ, t, internal(persons in crowd)) |= world fact(sound(loud))
⇒ ∃t2:TIME < t + 35 [ state(γ, t2, internal(persons in crowd)) |= world fact(view(calm persons)) &
state(γ, t2, internal(persons in crowd)) |= world fact(sound(quiet)) ]

For the Ambient Medicine Usage Management System, a relevant property is that the medicine concentration is relatively stable, which means that it stays between an upper and lower bound.
GP1(AMUMS) Stable Medicine Concentration
At any time point the medicine concentration is between lower bound M1 and upper bound M2.
∀γ:TRACE, t:TIME, R:REAL
state(γ, t, internal(patient)) |= world fact(medicine level(R)) ⇒ M1 ≤ R & R ≤ M2


Fig. 3.7 Medicine level with (top figure) and without ambient system usage (x-axis denotes time and y-axis
denotes the medicine level, note the different scale)


All three of these properties have been automatically verified (using the TTL checker tool [Bosse
et al. (2008b)]) against the traces shown in the paper. For each of these traces in which the system is
in use, the property GP1 holds.
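To illustrate what such a check involves, the following is a minimal Python sketch of verifying a GP1-style property over a recorded trace; it mimics GP1(AAHS) with its 35-step bound, but the trace encoding and all values are our own illustrative assumptions, not the actual TTL checker.

# A synthetic trace: aggression (loud sound, fighting) until time 20, calm afterwards.
trace = {t: {"sound(loud)" if t <= 20 else "sound(quiet)",
             "view(fighting_persons)" if t <= 20 else "view(calm_persons)"}
         for t in range(101)}

def holds_gp1(trace, bound=35):
    # For every time t with fighting and loud sound, some t2 with t <= t2 < t + bound
    # must show calm persons and quiet sound (cf. GP1(AAHS)).
    times = sorted(trace)
    for t in times:
        if {"sound(loud)", "view(fighting_persons)"} <= trace[t]:
            if not any({"sound(quiet)", "view(calm_persons)"} <= trace[t2]
                       for t2 in times if t <= t2 < t + bound):
                return False
    return True

print(holds_gp1(trace))   # True for this synthetic trace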

Interlevel Relations Between Properties at Different Aggregation Levels Following
[Jonker and Treur (2002)], dynamic properties can be specified at different aggregation levels. Illustrated for the Driver case, three levels are used: properties of the system as a whole, properties of
subsystems, and properties of agents and the world within a subsystem. Table 3.9 shows for
the Ambient Driver Support System how the property at the highest level relates to properties at the
lower levels (see also Figure 2). The lower-level properties in the fourth column are described below.
Table 3.9  Properties and their interlevel relations

subsystems and properties   | components and properties
sensoring            S1     | steering, gaze-focus, alcohol-level sensoring agents    SSA1, GSA1, ASA1
monitoring           M1     | steering, gaze-focus, alcohol-level monitoring agents   SMA1, GMA1, AMA1
assessment           A1     | driver assessment agent                                 DAA1, DAA2
plan determination   P1     | cruise control agent                                    CCA1, CCA2
subject process      SP1    | driver, car and environment                             CE1, CE2

The property GP1 of the system as a whole can be logically related to properties of the subsystems
(shown in the second column in the table) by the following inter level relation:
S1 & M1 & A1 & P1 & SP1 ⇒ GP1
This expresses that the system functions well when all of the subsystems for sensoring, monitoring,
assessment, plan determination and the subject process function well.


Properties of subsystems The properties characterising correct functioning of each of the subsystems are described below.
S1 Sensoring system
If the sensory system receives observation input from the world and driver concerning alcohol level,
gaze focus and steering operation, then it will provide as output this information for the monitoring
system.
M1 Monitoring system
If the monitoring system receives sensor information input concerning alcohol level, gaze-focus and
steering operation from the sensoring system, then it will provide as output monitoring information
concerning qualification of alcohol-level, gaze-focus and steering operation for the assessment system.
A1 Assessment system
If this system receives monitoring information concerning specific qualifications of alcohol-level,
gaze-focus and steering operation, then it will provide as output a qualification of the state.
P1 Plan determination system
If the plan determination system receives an overall qualification of the state, then it will generate as
output actions to be undertaken.
SP1 Subject process
If the subject process receives actions to be undertaken, then it will obtain the effects of these actions.
If the driver's blood alcohol level is above threshold a, then the driver will operate the steering wheel
abnormally and the driver's gaze is unfocused.

Properties of components As indicated in Table 3.9 in the fourth column, each property of a
subsystem is logically related to properties of the components within the subsystem. For example,
the inter level relation
SSA1 & GSA1 & ASA1 ⇒ S1
expresses that the sensoring subsystem functions well when each of the sensoring agents functions
well (similarly, for the monitoring subsystem). Examples of properties characterising correct functioning of components are the following. The properties for the other sensoring and monitoring
agents (GSA1, ASA1, GMA1, AMA1) are similar.
SSA1 Steering Sensoring agent
If the Steering Sensoring agent receives observation results about steering wheel operation, then it
will communicate this information to the Steering Monitoring agent.
SMA1 Steering Monitoring agent
If the Steering Monitoring agent receives observation results about the steering wheel, and this operation is abnormal, then it will communicate to the Driver Assessment Agent that steering operation
is abnormal.
The properties for the Driver Assessment Agent are:
DAA1 Assessment based on alcohol
If the Driver Assessment Agent receives input that the alcohol level is high, then it will generate as
output communication to the Cruise Control agent that the driver state is inadequate.
DAA2 Assessment based on behaviour
If the Driver Assessment Agent receives input that steering operation is abnormal and gaze is unfocused, then it will generate as output communication to the Cruise Control agent that the driver state
is inadequate.
For the Cruise Control Agent the properties are:


CCA1 Slowing down a driving car


If the Cruise Control agent receives communication that the driver state is inadequate, and the car
is driving, then it will slow down the car.
CCA2 Turning engine off for a non driving car
If the Cruise Control agent receives communication that the driver state is inadequate, and the car is
not driving, then it will turn off the engine.
The properties for the Car and Environment are:
CE1 Slowing down stops the car
If the Car and Environment components perform the slowing down action, then within 20 seconds
the car will not drive.
CE2 Turning off the engine makes the engine off
If the Car and Environment components perform the turn off engine action, then within 5 seconds the
engine will be off.

The Use of Interlevel Relations in Fault Diagnosis Sometimes an error might occur in a
component within the system. Therefore, a trace has also been generated whereby the functioning of
the various agents is correct with a certain probability. In the resulting trace, the overall property GP1
does not hold. Therefore, the refined properties have been verified to determine the exact cause of this
failure, and the results thereof show that the alcohol level monitoring agent does not communicate
that the alcohol level is high, whereas the level is in fact too high.
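This diagnostic use of interlevel relations can be mimicked in a few lines. The following is a minimal Python sketch, where the property names follow Table 3.9 but the per-property check functions are stubs standing in for real TTL checks over a trace.

def diagnose(trace, checks):
    # If the overall property holds, report nothing; otherwise descend to the
    # refined properties and report those that fail on the same trace.
    if checks["GP1"](trace):
        return []
    return [name for name, check in checks.items() if name != "GP1" and not check(trace)]

checks = {
    "GP1":  lambda tr: False,    # overall property fails on this (stub) trace
    "SSA1": lambda tr: True,
    "AMA1": lambda tr: False,    # alcohol-level monitoring agent stays silent
    "DAA1": lambda tr: True,
}
print(diagnose(None, checks))    # ['AMA1'], pointing at the faulty component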

3.9 Discussion
The challenge addressed in this paper is to provide a generic model that covers the class of Ambient Intelligence applications that show human-like understanding and supporting behaviour. Here
human-like understanding is defined as understanding in the sense of being able to analyse and estimate what is going on in the human's mind and body (a form of mind/body-reading). Inputs for these
processes are observed information about the human's physiological and behavioural states, and dynamic models of the human's physical and mental processes. For the mental side such a dynamic
model is sometimes called a Theory of Mind (e.g., [Dennett (1987a)], [Gaerdenfors (2003)], [Goldman (2006)]) and may cover concepts such as emotion, attention, intention, and belief. This can be
extended to integration with the human's physical processes, relating, for example, to skin conditions,
heart rates, and levels of blood sugar, insulin, adrenalin, testosterone, serotonin, and specific medication taken. In this class of Ambient Intelligence applications, knowledge from human-directed
disciplines is exploited, in order to take care of (and support in a knowledgeable manner) humans in
their daily living, in medical, psychological and social respects. Thus, an ambience is created that
uses essential knowledge from the human-directed disciplines to provide a more human-like understanding of human functioning, and from this understanding can provide adequate support. This may
concern, for example, elderly people, criminals and psychiatric patients, but also humans in highly
demanding tasks.
The generic model introduced in this paper is a template for the specific class of Ambient Intelligence applications as described. One of the characteristics of this class is that a high level of
human-directed context awareness plays a role; see also [Schmidt (2005)], [Schmidt et al. (1999)],
[Schmidt et al. (2001)]. The ambient software and hardware design is described in an agent-based
manner at a conceptual design level; to support context awareness, it has generic facilities built in to
represent human state models and dynamic process models, and methods for model-based simulation
and analysis on the basis of such models. For a particular application, biomedical, neurological,


psychological and/or social ontologies, knowledge and dynamic models about human functioning can
be specified. The generic model includes slots where such application-specific content can be filled
in to get an executable design for a working system. This specific content, together with the generic
methods to operate on it, enables ambient agents to show human-like understanding of humans and
to react on the basis of this understanding in a knowledgeable manner. The model has been positively
evaluated in three case studies related to existing Ambient Intelligence applications that are already
operational or at an advanced stage of development.

3.10 Appendix 1: Driver case


3.10.1 Driver assessment agent: Domain-specific temporal rules
The Driver Assessment agent will believe that the driver's state is assessed as negative either when
it believes the alcohol level is high, or when it believes that the driver's gaze is unfocused and his
steering operations are abnormal.
internal(driver assessment agent)|belief(leadsto(alcohol level high, driver assessment(negative), D))
internal(driver assessment agent)|belief(leadsto(abnormal steering operation ∧ unfocused gaze,
driver assessment(negative), D))

If the Driver Assessment agent believes that the driver's state is assessed as negative, then it will
communicate this to the Cruise Control agent.
internal(driver assessment agent)|belief(driver assessment(negative))
→ output(driver assessment agent)|communication from to(driver assessment negative, driver assessment agent,
cruise control agent)

3.10.2 Cruise control agent: Domain-specific temporal rules


If the Cruise Control agent believes that the driver's state is assessed as negative, then in case the car
is not driving it will stop the engine, and in case the car is driving, it will slow down the car.
internal(cruise control agent)|belief(driver assessment(negative)) ∧
internal(cruise control agent)|belief(car is not driving)
→ output(cruise control agent)|performing in(block ignition, car and environment)

internal(cruise control agent)|belief(driver assessment(negative)) ∧
internal(cruise control agent)|belief(car is driving)
→ output(cruise control agent)|performing in(slow down car, car and environment)

3.10.3 Steering monitoring agent: Domain-specific temporal rules


When the steering operation is classified as abnormal, the Steering Monitoring agent will believe that
there is abnormal steering operation.
internal(steering monitoring agent)|belief(steering operation classification(S, abnormal))
→ internal(steering monitoring agent)|belief(leadsto(steering operation(S), abnormal steering operation, D))

When the Steering Monitoring agent believes that there is abnormal steering operation, it will communicate this to the Driver Assessment agent.


internal(steering monitoring agent)|belief(abnormal steering operation)
→ output(steering monitoring agent)|communication from to(abnormal steering operation, steering monitoring agent,
driver assessment agent)

3.10.4 Steering sensoring agent: Domain-specific temporal rules


When the steering sensoring agent observes steering operation, it will communicate this to the steering monitoring agent.
input(steering sensoring agent)|observed result(steering operation(S), driver body)
→ output(steering sensoring agent)|communication from to(steering operation(S), steering sensoring agent,
steering monitoring agent)

3.10.5 Gaze-focus monitoring agent: Domain-specific temporal rules


When the gaze focus is classified as unfocused, the Gaze-Focus Monitoring agent will believe that there
is unfocused gaze.
internal(gaze focus monitoring agent)|belief(gaze classification(G, unfocused))
→ internal(gaze focus monitoring agent)|belief(leadsto(gaze focus(G), unfocused gaze, D))

When the Gaze-Focus Monitoring agent believes that there is unfocused gaze, it will communicate this
to the Driver Assessment agent.
internal(gaze focus monitoring agent)|belief(unfocused gaze)
→ output(gaze focus monitoring agent)|communication from to(unfocused gaze, gaze focus monitoring agent,
driver assessment agent)

3.10.6 Gaze-focus sensoring agent: Domain-specific temporal rules


When the Gaze-Focus Sensoring agent observes a gaze focus, it will communicate this to the Gaze-Focus Monitoring agent.
input(gaze focus sensoring agent)|observed result(gaze focus(G), driver body)
→ output(gaze focus sensoring agent)|communication from to(gaze focus(G), gaze focus sensoring agent,
gaze focus monitoring agent)

3.10.7 Alcohol-level monitoring agent: Domain-specific temporal rules


When the alcohol level is classified as high, the Alcohol-Level Monitoring agent will believe that
there is a high alcohol level.
internal(alcohol level monitoring agent)|belief(alcohol level classification(A, high))
→ internal(alcohol level monitoring agent)|belief(leadsto(alcohol level(A), alcohol level high, D))

When the Alcohol-Level Monitoring agent believes that there is a high alcohol level, it will communicate this to the Driver Assessment agent.


internal(alcohol level monitoring agent)|belief(alcohol level high)
→ output(alcohol level monitoring agent)|communication from to(alcohol level high, alcohol level monitoring agent,
driver assessment agent)

3.10.8 Alcohol sensoring agent: Domain-specific temporal rules


When the Alcohol Sensoring agent observes an alcohol level, it will communicate this to the Alcohol-Level Monitoring agent.
input(alcohol sensoring agent)|observed result(alcohol level(A))
→ output(alcohol sensoring agent)|communication from to(alcohol level(A), alcohol sensoring agent,
alcohol level monitoring agent)

3.10.9 Driver: Domain-specific temporal rules


The driver is characterised by the steering operations, the gaze focus and the alcohol level.
output(driver body)|performing in(steering operation(S), car and environment)
output(driver body)|performing in(start engine, car and environment)
output(driver body)|performing in(accelerate, car and environment)
internal(driver body)|world fact(gaze focus(G))
internal(driver body)|world fact(alcohol level(A))

3.10.10 Car and environment: Domain-specific temporal rules


Steering operations can be performed upon the car.
input(car and environment)|performing in(steering operation(S), car and environment)
→ internal(car and environment)|world fact(steering operation(S))

The action of slowing down the car has the effect that the car is not driving anymore; blocking
the ignition has the effect that the engine is off.
internal(car and environment)|has effect(slow down car, car not driving)
internal(car and environment)|has effect(block ignition, engine off)
internal(car and environment)|has effect(¬block ignition ∧ start engine, engine running)
internal(car and environment)|has effect(engine running ∧ accelerate, car driving)

3.11 Appendix 2: Aggression handling case


3.11.1 Sound analysis agent: Domain-specific temporal rules
The Sound Analysis agent believes that aggression in the crowd leads to a loud sound (within duration D).
internal(sound analysis agent)|belief(leadsto(aggression in crowd, sound(loud), D))

When the Sound Analysis agent believes that there is aggression in the crowd, it will communicate
this to the camera control agent and to the police officer at the station, together with the sound.


internal(sound analysis agent)|belief(aggression in crowd ∧ sound(loud))
→ output(sound analysis agent)|communication from to(inspection needed, sound analysis agent,
camera control agent)
output(sound analysis agent)|communication from to(inspection needed, sound analysis agent,
police officer at station)
output(sound analysis agent)|communication from to(sound(loud), sound analysis agent, police officer at station)

3.11.2 Camera control agent: Domain-specific temporal rules


When the Camera Control agent believes that inspection is needed, it will believe that there is aggression in the crowd.
internal(camera control agent)|belief(inspection needed)
→ internal(camera control agent)|belief(aggression in crowd)

When the Camera Control agent believes that there is aggression in the crowd, it will communicate
to the Camera agent that inspection is needed.
internal(camera control agent)|belief(aggression in crowd)
→ output(camera control agent)|communication from to(inspection needed, camera control agent, camera agent)

3.11.3 Microphone agent: Domain-specific temporal rules


The Microphone can observe the sound in the crowd.
output(microphone agent)|observation focus in(sound(S), persons in crowd)

When the Microphone agent believes that there is a certain type of sound, it will communicate this to
the Sound Analysis agent.
internal(microphone agent)|belief(sound(S))
→ output(microphone agent)|communication from to(sound(S), microphone agent, sound analysis agent)

3.11.4 Camera agent: Domain-specific temporal rules


When the Camera agent believes that inspection is needed, it will focus its observation on the view.
internal(camera agent)|belief(inspection needed)
→ output(camera agent)|observation focus in(view(V), persons in crowd)

When the Camera agent believes that there is a certain type of view, it will communicate this to the
Police Officer at the Station.
internal(camera agent)|belief(view(V))
→ output(camera agent)|communication from to(view(V), camera agent, police officer at station)


3.11.5 Persons in crowd: Domain-specific temporal rules


The Persons in the Crowd are characterised by the sound and view (that are generated at random).
internal(persons in crowd)|world fact(sound(S))
internal(persons in crowd)|world fact(view(V))

The action stop aggression has as effect a quiet sound.


internal(persons in crowd)|has effect(stop aggression, sound(quiet))

3.11.6 Police officer at station: Domain-specific temporal rules


The Police Officer at the Station believes that a loud sound combined with a view of fighting persons
is an indication for aggression.
internal(police officer at station)|belief(sound view classification(loud, fighting persons, aggressive))

When Police Officer at the Station believes that inspection is needed, and (s)he classifies the combination of sound and view as aggressive, (s)he will believe that there is aggression.
internal(police officer at station)|belief(inspection needed) ∧
internal(police officer at station)|belief(sound(S)) ∧
internal(police officer at station)|belief(view(V)) ∧
internal(police officer at station)|belief(sound view classification(S, V, aggressive))
→ internal(police officer at station)|belief(aggression in crowd)

When Police Officer at the Station believes that there is aggression, (s)he will communicate to the
Police Officer at the street that inspection is needed.
internal(police officer at station)|belief(aggression in crowd)
→ output(police officer at station)|communication from to(inspection needed, police officer at station,
police officer at street)

3.11.7 Police officer at street: Domain-specific temporal rules


The Police Officer at the Street believes that a loud sound combined with a view of fighting persons
is an indication for aggression.
internal(police officer at street)|belief(sound view classification(loud, fighting persons, aggressive))

When the Police Officer at the Street receives communication from the Police Officer at the Station
that inspection is needed, (s)he will believe that inspection is needed.
input(police officer at street)|communicated from to(inspection needed, police officer at station,
police officer at street)
→ internal(police officer at street)|belief(inspection needed)

When Police Officer at the Street believes that inspection is needed, then (s)he will focus on observing
the sound and view in the situation.
internal(police officer at street)|belief(inspection needed)
→ output(police officer at street)|observation focus in(sound(S), persons in crowd)
output(police officer at street)|observation focus in(view(V), persons in crowd)


When Police Officer at the Street believes that inspection is needed, and (s)he classifies the combination of sound and view as aggressive, (s)he will believe that there is aggression.
internal(police officer at street)|belief(inspection needed) ∧
internal(police officer at street)|belief(sound(S)) ∧
internal(police officer at street)|belief(view(V)) ∧
internal(police officer at street)|belief(sound view classification(S, V, aggressive))
→ internal(police officer at street)|belief(aggression in crowd)

When Police Officer at the Street believes that there is aggression, (s)he will stop the aggression.
internal(police officer at street)|belief(aggression in crowd)
→ output(police officer at street)|performing in(stop aggression, persons in crowd)

3.12 Appendix 3: Medicine usage case


3.12.1 Medicine box agent
The Medicine Box Agent has functionality concerning communication to both the patient and the
Usage Support Agent. First of all, the observed usage of medicine is communicated to the Usage
Support Agent in case the medicine is not taken too early, as specified in MBA1.

3.12.1.1 MBA1: Medicine usage communication


If the Medicine Box Agent has a belief that the patient has taken medicine from a certain position
in the box, and that the particular position contains a certain type of medicine M, and taking the
medicine does not result in a too high medicine concentration of medicine M within the patient, then
the usage of this type of medicine is communicated to the Usage Support Agent.
internal(medicine box agent)|belief(medicine taken from position(x y coordinate(X,Y))) ∧
internal(medicine box agent)|belief(medicine at location(x y coordinate(X, Y), M)) ∧
internal(medicine box agent)|belief(medicine level(M, C)) ∧
max medicine level(maxB) ∧ dose(P) ∧ C + P ≤ maxB
→ output(medicine box agent)|communication from to(
medicine used(M), medicine box agent, usage support agent)

In case medicine is taken out of the box too early, a warning is communicated by a beep and the
information is forwarded to the Usage Support Agent (MBA2 and MBA3).

3.12.1.2 MBA2: Too early medicine usage prevention


If the Medicine Box Agent has the belief that the patient has taken medicine from a certain position in the box, that this position contains a certain type of medicine M, and taking the medicine
results in a too high medicine concentration of medicine M within the patient, then a warning beep is
communicated to the patient.
internal(medicine box agent)|belief(medicine taken from position(x y coordinate(X,Y))) ∧
internal(medicine box agent)|belief(medicine at location(x y coordinate(X, Y), M)) ∧
internal(medicine box agent)|belief(medicine level(M, C)) ∧
max medicine level(maxB) ∧ dose(P) ∧ C + P > maxB
→ output(medicine box agent)|communication from to(sound beep, medicine box agent, patient)

3.12.1.3 MBA3: Early medicine usage communication


If the Medicine Box Agent has a belief that the patient was taking medicine from a certain position
in the box, and that the particular position contains a certain type of medicine M, and taking the
medicine would result in a too high concentration of medicine M within the patient, then this is
communicated to the Usage Support Agent.
internal(medicine box agent)|belief(medicine taken from position(x y coordinate(X,Y))) ∧
internal(medicine box agent)|belief(medicine at location(x y coordinate(X, Y), M)) ∧
internal(medicine box agent)|belief(medicine level(M, C)) ∧
max medicine level(maxB) ∧ dose(P) ∧ C + P > maxB
→ output(medicine box agent)|communication from to(
too early intake intention, medicine box agent, usage support agent)
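Taken together, MBA1-MBA3 amount to a simple guard on each intake event. The following is a minimal Python sketch of that decision logic; the box layout, dose, and maximum level are illustrative assumptions, not values prescribed by the chapter.

MAX_LEVEL, DOSE = 1.4, 0.6          # hypothetical maxB and dose(P)

def on_medicine_taken(position, box_contents, level, send):
    # Forward normal usage (MBA1), or warn on a too-early intake (MBA2 and MBA3)
    medicine = box_contents[position]                # medicine type at (X, Y)
    if level + DOSE <= MAX_LEVEL:                    # MBA1: intake acceptable
        send("usage_support_agent", ("medicine_used", medicine))
    else:                                            # MBA2 + MBA3: too early
        send("patient", "sound_beep")
        send("usage_support_agent", "too_early_intake_intention")

on_medicine_taken((0, 0), {(0, 0): "medicine_m"}, 1.0,
                  lambda to, msg: print(to, msg))    # beeps: 1.0 + 0.6 > 1.4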

3.12.2 Usage support agent


The Usage Support Agent's functionality is described by three sets of temporal rules. First, the agent
maintains a dynamic model for the concentration of medicine in the patient over time in the form of
a belief about a leads to relation.

3.12.2.1 USA1: Maintain dynamic model


The Usage Support Agent believes that if the medicine level for medicine M is C, and the usage
effect of the medicine is E, then after duration D the medicine level of medicine M is C+E minus
G*(C+E)*D with G the decay value.
internal(usage support agent)|belief(leadsto after(medicine level(M, C) ∧
usage effect(M, E) ∧ decay(M, G), medicine level(M, (C+E) - G*(C+E)*D), D))

In order to reason about the usage information, this information is interpreted (USA2), and stored in
the database (USA3).

3.12.2.2 USA2: Interpret usage


If the agent has a belief concerning usage of medicine M and the current time is T, then a belief is
generated that this is the last usage of medicine M, and the intention is generated to store this in the
patient database.
internal(usage support agent)|belief(medicine used(M)) ∧
internal(usage support agent)|belief(current time(T))
→ internal(usage support agent)|belief(last recorded usage(M, T)) ∧
internal(usage support agent)|intention(store usage(M, T))

3.12.2.3 USA3: Store usage in database


If the agent has the intention to store the medicine usage in the patient database, then the agent
performs this action.


internal(usage support agent)|intention(store usage(M, T))
→ output(usage support agent)|performing in(store usage(M, T), patient database)

Finally, temporal rules were specified for taking the appropriate measures. Three types of measures
are possible. First, in case of early intake, a warning SMS is communicated (USA4). Second, in case
the patient is too late with taking medicine, a different SMS is communicated, suggesting to take the
medicine (USA5). Finally, when the patient does not respond to such SMSs, the doctor is informed
by SMS (USA6).

3.12.2.4 USA4: Send early warning SMS


If the agent has the belief that an intention was shown by the patient to take medicine too early, then
an SMS is communicated to the patient cell phone that the medicine should be put back in the box,
and the patient should wait for a new SMS before taking more medicine.
internal(usage support agent)|belief(too early intake intention)
→ output(usage support agent)|communication from to(put medicine back and wait for signal, usage support agent,
patient cell phone)

3.12.2.5 USA5: SMS to patient when medicine not taken on time


If the agent has the belief that the level of medicine M is C at the current time point, and the level is
considered to be too low, and the last message has been communicated before the last usage, and at
the current time point no more medicine will be absorbed by the patient due to previous intake, then
an SMS is sent to the patient cell phone to take the medicine M.
internal(usage support agent)|belief(current time(T3)) ∧
internal(usage support agent)|belief(at(medicine level(M, C), T3)) ∧
min medicine level(minB) ∧ C < minB ∧
internal(usage support agent)|belief(last recorded usage(M, T)) ∧
internal(usage support agent)|belief(last recorded patient message sent(M, T2)) ∧
T2 < T ∧ usage effect duration(UED) ∧ T3 > T + UED
→ output(usage support agent)|communication from to(sms take medicine(M), usage support agent,
patient cell phone)

3.12.2.6 USA6: SMS to doctor when no patient response to SMS


If the agent has the belief that the last SMS to the patient has been communicated at time T, and
the last SMS to the doctor has been communicated before this time point, and furthermore, the last
recorded usage is before the time point at which the SMS has been sent to the patient, and finally, the
current time is later than time T plus a certain delay parameter for informing the doctor, then an SMS
is communicated to the cell phone of the doctor that the patient has not taken medicine M.
internal(usage support agent)|belief(last recorded patient message sent(M, T)) ∧
internal(usage support agent)|belief(last recorded doctor message sent(M, T0)) ∧
internal(usage support agent)|belief(last recorded usage(M, T2)) ∧
internal(usage support agent)|belief(current time(T3)) ∧
T0 < T ∧ T2 < T ∧ max delay after warning(DAW) ∧ T3 > T + DAW
→ output(usage support agent)|communication from to(sms not taken medicine(M), usage support agent,
doctor cell phone)
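The timing conditions in USA5 and USA6 implement a remind-then-escalate policy. The following is a minimal Python sketch of that policy; the thresholds (standing in for minB, UED and DAW) and the state fields are illustrative assumptions.

def choose_message(state, now, min_level=0.3, ued=2, daw=6):
    # USA5: level too low, no reminder since the last usage, absorption finished
    if (state["level"] < min_level
            and state["last_patient_sms"] < state["last_usage"]
            and now > state["last_usage"] + ued):
        return "sms_take_medicine"
    # USA6: patient was reminded, still no usage, and the delay DAW has passed
    if (state["last_doctor_sms"] < state["last_patient_sms"]
            and state["last_usage"] < state["last_patient_sms"]
            and now > state["last_patient_sms"] + daw):
        return "sms_not_taken_medicine"
    return None

state = {"level": 0.2, "last_usage": 10, "last_patient_sms": 4, "last_doctor_sms": 0}
print(choose_message(state, now=13))   # sms_take_medicine (the USA5 case)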

Chapter 4

e-Assistance Support by Intelligent Agents over MANETs

Eduardo Rodríguez, Juan C. Burguillo and Daniel A. Rodríguez
Departamento de Enxeñería Telemática, E.T.S. de Enxeñería de Telecomunicación,
Universidade de Vigo, 36310 Vigo, Spain
rodfer@enigma.det.uvigo.es, {jrial,darguez}@det.uvigo.es

Abstract
Through this chapter we introduce an e-Assistance Support system that combines recent technologies
like Case Based Reasoning, Peer-to-Peer networks and Ambient Intelligence. Case Based Reasoning
is a knowledge paradigm able to use previous experiences to solve new problem situations. Peer-to-Peer networks represent a well-known and proven asset to share resources among the members of
a community. Last, Ambient Intelligence is an up-and-coming technological paradigm able to assist
people and make their lives easier using unobtrusive technologies. We have combined these technologies
in order to build a seamless e-Assistance system. With this system, people have at their disposal a
mobile ad-hoc network able to solve specific problems through spontaneous connections among different nodes. We propose a system to solve daily problems taking advantage of the power of a CBR
multi-agent system (MAS) that exchanges its cases through a P2P mobile ad-hoc network. Finally,
we tested the proposed system in what we call an intelligent gym for physical training.

4.1 Introduction
Ambient Intelligence is an amalgam of technologies able to sense and act over a concrete environment, aiming to achieve some well-known goals. Obviously, these goals are provided by a human
being, who is the main member of that environment and the direct beneficiary of the system. In order
to achieve these goals, the environment needs to be populated with digitally equipped devices able to
carry out computational and communication processes.
Every person facing a situation where he or she does not know how to act can use ubiquitous computing
and ambient intelligence in order to obtain e-assistance. To carry out this assistance, collaboration
among agents within a multi-agent system for complex troubleshooting can be used.
We present here an architecture to assist users facing novel or unknown specific situations. This

architecture consists of a set of intelligent agents that are able to communicate with each other through
an ad-hoc mobile network. We start from the idea that current problems or situations are, at least,
similar to the problems or situations that other users have experienced in the past. This means that
we can reuse the previously acquired experiences. In order to access those experiences we establish
a peer-to-peer network among the agents of the system. Through this network we can exchange the
previously acquired experiences, or cases.
Some studies have been carried out on using centralized Case Based Reasoning intelligent
agents, as well as studies on resource sharing through peer-to-peer networks. We propose a system
that combines both approaches, modeling the Case Based Reasoning agents as mobile peers and using a peer-to-peer network to exchange experiences, i.e., problems and their solutions, among them.

4.1.1 Multi agent systems (MAS)


Agent Theory (AT) and Multi-Agent Systems (MAS) have become an active research area of Artificial Intelligence in recent years. Many definitions, and consequently many defining characteristics,
have been proposed for the concept of agent. We follow the well-known definition given by
Wooldridge and Jennings [Wooldridge and Jennings (1995)], where agents are entities that represent
humans with the following characteristics (see Fig. 4.1):
Autonomy. Agents have to operate without external human control over their actions and decisions.
Reactivity. Agents have to sense the changes of the environment and adapt their behavior to these
changes.
Proactivity. Agents must have the ability to focus their actions and decisions on achieving their own
goals.
Sociability. Agents must have the ability to communicate with humans and other agents through
a predefined protocol.
There exist some other abilities that help to define what an agent is and how it should behave:
Benevolence. Agents must not hide information or their intentions, or refuse to help when they are
able to do so.
Rationality. Agents must base their decisions on the acquired knowledge and modify their behavior
depending on experience.
Veracity. Agents cannot provide wrong or inaccurate information on purpose.
Mobility. Agents should be able to achieve their objectives while being executed across a computing network.
There are tasks or problems that cannot be solved by a single agent due to its lack of knowledge or ability to face a given situation. In such situations, a multi-agent system can be useful to face the
task. A MAS is a system where agents collaborate to solve complex domain problems [Wooldridge
(2002)]. In order to solve these problems they cooperate by exchanging information. Usually, the original task can be divided into several sub-tasks and the different agents of the MAS can individually solve
one or more of these tasks. This way a MAS can be used to solve more complex problems.
There is no unique architecture to develop agents, given that different types of agents exist for
different tasks. For example, some agents can be designed to make fast decisions while others
value more the accuracy of the decision.


Fig. 4.1 Characteristics of an agent.

The architecture depends on the goal, the tasks to be carried out, and the working environment. It describes how the software and hardware modules are interconnected
in order to exhibit the behavior explained in agent theory. Some of the possible architectures are:
Deliberative. Deliberative architectures are based on AI planning theory. Given an initial
state, a set of plans and a final goal, a deliberative agent should know how to concatenate the
steps to achieve its objective.
Reactive. Reactive architectures employ a simple decision process and adapt it depending on the
information received from the environment. These architectures basically look for a quick
decision process based on a stimulus-response system.
Hybrid. Most developers and researchers believe that an architecture should not be only deliberative or only reactive. Some hybrid systems have been proposed consisting of a reactive and a
deliberative subsystem. Other systems consist of three levels: a reactive level situated at the
lowest level and sensing the stimuli from the environment, a knowledge level situated at the
intermediate level containing the knowledge of the medium and a symbolic representation and,
finally, a social level located atop, managing information about the environment, other agents,
desires, intentions, etc.
Logic Based. Logic-based architectures base their decisions on logical deduction. They rely on
the semantic clarity and power of logic. Agents constructed with these architectures are usually
employed in complex production domains.
BDI. Belief-Desire-Intention architectures make decisions through a reasoning process that starts
with the agent's beliefs about the world and the desires it aims to achieve. Its intentions are constructed
as a result of its beliefs and its desires (a minimal deliberation loop is sketched below).
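The following is a minimal Python sketch of one BDI deliberation cycle, under our own simplifying assumptions (desires as named conditions, plans as callables, a single committed intention); it is illustrative only, not a faithful BDI interpreter.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Desire:
    name: str
    condition: Callable[[set], bool]   # is this desire relevant, given the beliefs?

def bdi_step(beliefs, desires, plans, percepts):
    # One cycle: revise beliefs, select a desire whose condition holds,
    # adopt it as the intention, and return the action its plan produces.
    beliefs |= percepts                              # belief revision
    options = [d for d in desires if d.condition(beliefs)]
    if not options:
        return None
    intention = options[0]                           # commitment to one desire
    return plans[intention.name](beliefs)            # means-ends reasoning

desires = [Desire("warn", lambda b: "user_in_trouble" in b)]
plans = {"warn": lambda b: "send_warning"}
print(bdi_step(set(), desires, plans, {"user_in_trouble"}))   # send_warning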


More extensive information about all these topics can be found in [Gonzalez (2008)].

4.1.2 Ubiquitous computing


Ubiquitous Computing (UC) is a term coined by Mark Weiser [Weiser (1991)] to define the integration of computing capacity and information processing into everyday objects. The goal behind this idea
is to offer a new kind of Human Machine Interaction (HMI) where users are not necessarily aware of
the system running behind. Usually, this concept is also referred to as Ambient Intelligence (AmI) or
Pervasive Computing, although there are slight differences.
A ubiquitous computing environment is composed of objects and devices equipped with
small, cheap and powerful processors interconnected through wireless networks. These processors
can support intelligent agents able to understand and reason about the environment. This physical
sensing of the environment enables the system to adapt its behavior to the context or to the preferences of the user.
Ambient Intelligence emerges alongside ubiquitous computing as a framework of technologies and devices able to sense and act over their context. An AmI environment is especially sensitive
to the presence of humans, trying to provide support in common activities in the most unobtrusive
way possible. Sensors and actuators are interconnected by using hidden wireless connections among
themselves and a fixed connection to the Internet.
AmI environments, systems and technologies are defined by some characteristics:
Being context-aware. These devices must recognize the environment, the user and the context.
They also must know the resources available and how to maximize their performance according to
these resources.
Being adaptive. These devices must adapt their behavior and their objectives depending on the situation of the context as well as on the user's presence.
Being embedded. These devices must be powerful, small and integrated into the system as
unobtrusive hardware.
Being anticipative. These devices must anticipate users' intentions according to previous behaviors.
Being personalized. These devices must recognize the user and be specifically oriented to attend to
his or her needs.

4.1.3 Case based reasoning


Aamodt and Plaza [Aamodt and Plaza (1994)] define Case Based Reasoning (CBR in the following)
as a problem solving paradigm that uses specific knowledge acquired from previous experiences to
solve current problems. It relies on the idea that new problem situations, also called cases, are usually
similar to previous ones, so earlier solutions are valid to solve the current case.
We can describe a CBR system as a process model that identifies the main phases of the system.
It needs a database where the previous cases are stored. These cases consist of a problem part and a
solution part. The problem part includes a set of attributes which define the case and determine the
structure of the case base. These attributes can have several formats and display information about
the specification of the problem and about its environment. The information stored in attributes varies
depending on the context and the purpose of the system. Attributes must describe the goals and the
characteristics of the case, as well as the relations between them, to reach its objectives. The solution part consists of a set of guidelines on how to face the problem and some indication of the logic
followed to derive the solution. Solutions can also be shown in different formats and structures.
Depending on the situation, the solution part can incorporate indications about its degree of success in
the previous experience.
The other pillar sustaining this model is a reasoning cycle consisting of four phases: retrieving,
reusing, revising and retaining the case (see Fig. 4.2). The cycle starts with the definition of a new
case, which is compared to the ones stored in the case base during the retrieving phase. Once the system
determines the most similar cases, it reuses the information or the knowledge provided by
their solutions to solve the new problem. It should be recalled that solving a problem does not mean
obtaining a good, accurate solution. The returned solution can be a bad one, depending on the degree of
success it has enjoyed in the past, but we can learn from that experience and modify it properly. Even if
the solution has been successful in a previous case, most of the time problems are not the same, so
the proposed solution may need some adaptation. This is performed in the revising phase. Finally, the
system has to construct a new case, consisting of a problem part and a solution part, and determine
whether this solution is relevant enough to be retained. The retaining phase should be accomplished after
testing the created solution in the real world and seeing how well it worked.
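The following is a minimal Python sketch of this four-phase cycle over attribute-based cases; the case encoding, the similarity measure, and the example data are our own illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Case:
    problem: dict     # attribute -> value pairs defining the case
    solution: str     # guidelines on how to face the problem

def similarity(p, q):
    # Toy measure: fraction of attributes with matching values
    keys = set(p) | set(q)
    return sum(p.get(k) == q.get(k) for k in keys) / len(keys)

def cbr_cycle(case_base, new_problem, revise):
    best = max(case_base, key=lambda c: similarity(c.problem, new_problem))  # retrieve
    proposal = best.solution                                                 # reuse
    revised = revise(new_problem, proposal)                                  # revise
    case_base.append(Case(new_problem, revised))                             # retain
    return revised

base = [Case({"machine": "treadmill", "goal": "endurance"}, "30 min at low pace")]
print(cbr_cycle(base, {"machine": "treadmill", "goal": "speed"},
                lambda p, s: s + ", with intervals"))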

Fig. 4.2 CBR cycle.

There is another model to describe a CBR system, consisting of a task-method structure. It complements the process model of the CBR cycle, giving us a task-oriented view. In this model we can see
how every task of the system corresponds with a method to solve it. With this model every process of
the cycle model is seen as a task and is represented as a node of a tree. These tasks have sub-tasks,
and at the deepest level of the tree there are one or more methods to carry out the task (see Fig. 4.3).

Fig. 4.3  CBR task-method model.

The CBR paradigm has some advantages over other knowledge paradigms, for example:
The knowledge domain is given by the experiences of the past, and this provides the system with
the capacity to find a solution without having a complete knowledge of the domain, or even when
the problem is not perfectly defined. By using previous experiences similar in some degree to the
current problem, the system can provide a useful solution.
It reflects how humans think and reason, which eases the understanding of the proposed solutions
as well as of the process followed to reach them.
Acquiring new knowledge is easy and less time-consuming, because knowledge is provided by
the cases and their solutions.
The size of the case base increases with use. As the system adapts previous solutions,
the new ones can be added to extend the case base. A bigger case base means more specific
solutions of better quality.
It provides a way to detect the errors of the past and to avoid them. As the solution of a case can be
rated, and how the solution was derived is also stored, a CBR system can determine the reasons
for a failure and act in consequence so as not to repeat it.

e-Assistance Support by Intelligent Agents over MANETs

69

Allows the system to make predictions about the success of a proposed solution. The system
stores the grade of success of previous solutions and can establish a comparative between the
achieved success and the expected one.
The solutions are created in a limited amount of time because they do not have to be elaborate
from scratch. They just have to be adapted from previous ones.
On the other hand, it has to deal with some disadvantages:
• Usually, external help is needed in the revising phase.
• A previous case can excessively bias the way the current one is solved.
• It is not easy to manage an appropriate set of cases for several problems.
• Recovering from an incorrect solution can be expensive in time and work.

4.1.4 Peer-to-peer
Peer-to-Peer (P2P) systems are communication systems where network resources are distributed among all their components, called peers. These resources include data, computing power and bandwidth.

Fig. 4.4 Pure P2P system.

P2P systems can be classified into two types depending on how they behave. A pure P2P system does not include a server, or even the concept of a server (see Fig. 4.4). In that situation, peers depend on each other to obtain information and resources or to route requests. Examples of this type of system are Gnutella [Clip (2003)] and Freenet. On the other hand, there exist P2P systems called hybrids. These systems have a server able to obtain certain meta-data about the peers of the network and share it with them (see Fig. 4.5). In these cases, peers start a connection with the server and, once they process its response, they start a connection with the desired peer or peers. Examples of hybrid P2P systems are Napster [Inc (2002)] and Groove.

Fig. 4.5 P2P hybrid system.

Nowadays, we can find mixtures of the above systems. Kazaa [Hemming (2001)] has peers that possess more information and/or computing power, called super-peers. These systems are hierarchical: super-peers communicate freely among themselves and each manages a set of peers.
Nowadays, P2P systems are used in distributed computing applications, file sharing applications and real-time applications like instant messaging, television over IP, voice over IP, etc.

4.1.5 Mobile ad-hoc networks

Mobile networks allow spontaneous connectivity among multiple wireless devices. We can make a distinction among different types of mobile networks:
• Infrastructure-based wireless networks. This type of mobile network includes a base station, working as a bridge, to which nodes can connect in order to establish communications with other nodes, with local networks or with the Internet.
• Infrastructureless mobile networks. Also known as mobile ad-hoc networks (MANETs), they enable dynamic connections between wireless devices without any infrastructure supporting the communication. They are self-configuring networks of mobile nodes.
• Hybrid networks. The mixture of both types of nodes results in a hybrid network in which nodes can connect to mobile nodes, but also to a fixed base station.

MANETs are a vital component of ubiquitous computing. They support the exchange of data among mobile devices that cannot rely on any infrastructure-based wireless network. In these situations, on-the-fly wireless connections such as mobile ad-hoc networks represent the most suitable technology.
In order to maximize the battery life of network devices, it is essential to optimize their power consumption and the efficiency of the communication protocols. Yang [Yang et al. (2006)] proposes a peer discovery service based on multicast requests and multicast responses.
Another aspect that should not be neglected is the routing protocol and the information the nodes have to store in order to forward packets. Broch [Broch et al. (1998)] gives a comparison of different routing protocols.

4.2 System architecture

As mentioned above, this system aims to provide support throughout the decision process in several situations. For example, let us imagine a group of fans going to an unfamiliar stadium. They have seats on the stands but they do not know where those seats are located. They can ask the stadium's staff for help, but they do not know where the office is either. Using the proposed system they can find a quick solution: simply by making a request asking where the seats are, other fans or even stadium staff can provide them with an accurate response. Of course, not every fan will know where those seats are, but if they collaborate in spreading the request, it will eventually reach someone with seats close to the requested ones. This little example shows the two main characteristics of the system. First, it has been deployed on mobile devices in order to make the support process as unintrusive as possible. Second, it relies on the intercommunication among several agents in order to solve common problems.
The proposed system consists of a set of intelligent agents embedded in mobile devices. These intelligent agents exchange knowledge and information through a P2P network and have the ability to reason thanks to a CBR reasoner. They must also be mobile in order to allow the user to move around freely.
We decided to use intelligent agents because they possess the ability to adapt themselves to the environment as well as the ability to learn. These two abilities are crucial for our system because we are working with heterogeneous environments, where the context and the user are in a continuously changing situation.
To provide intelligence to the system, we selected CBR as the reasoning paradigm because it reflects human reasoning and is especially suitable for environments where we have little knowledge of the domain or where that knowledge is difficult to model.
We chose P2P technology to share information among the agents because it represents a distributed way to share resources. In our scenario, every agent holds a potential solution for our problem, so distributed information sharing fits our needs perfectly.
Basically, the system works as follows: whenever an agent needs to solve a new case, it looks in its case base to see if it already has a solution. If it has one that is suitable enough, it takes that solution and adapts it to the current case. If not, the agent disseminates the case among its partners, looking for a solution. These partners seek solutions in their own case bases and send back those solutions considered adequate. From all the received solutions, the originating agent constructs the final solution for the current case. Finally, after being applied, the solution is stored in the case base with annotations about its success. We can see a representation in Fig. 4.6.

Fig. 4.6 System architecture diagram.
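The loop just described can be sketched as follows. The helper methods are hypothetical placeholders for the CBR and P2P machinery detailed in the next two subsections.

// Sketch of an agent's top-level problem-solving loop (hypothetical helpers).
import java.util.List;

abstract class AssistantAgent<P, S> {
    abstract S solveLocally(P problem);          // query the local case base
    abstract boolean isGoodEnough(S solution);   // within the similarity threshold?
    abstract List<S> disseminate(P problem);     // broadcast the case to partners
    abstract S aggregate(List<S> received);      // e.g. the voting scheme of Sec. 4.2.1
    abstract void retain(P problem, S solution); // store it, annotated with its success

    S solve(P problem) {
        S local = solveLocally(problem);
        S chosen = (local != null && isGoodEnough(local))
                ? local                            // adapt the local solution
                : aggregate(disseminate(problem)); // or build one from partners' replies
        retain(problem, chosen);                   // after applying it in practice
        return chosen;
    }
}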

The intelligent agents have two distinct modules. The first one deals with the reasoning process and the second one takes care of the communication process.

4.2.1 Reasoning process

The reasoning process represents the ability to solve problems. In the system, it is performed by the agents' CBR engine and covers everything from the definition of a new problem situation to the revision of the proposed solution for that problem. In the following, we describe the reasoning process, explaining the important design decisions.
We have seen that the process starts when a user faces a new problem situation. Let P be that problem situation and let $A_i$ be the agent trying to solve it. $A_i$ will disseminate the problem along with a threshold. Every available agent of the system will receive the problem P and will decide whether to participate or not. Each of the participating agents will apply a CBR cycle to its local case base looking for similar cases.
There are two questions arising at this point. How are problem P, and the problem part of the cases stored in the memory base, modeled? And how is the case base structured in order to perform the retrieval phase?
Regarding the first question, cases can be represented in different ways depending on factors like the structure of the problem or the algorithms used in the retrieval phase. It is important to note that the way the information is presented in the problem part also defines how the retrieval phase is performed. Cases can be represented using predicates, objects, structures, etc. Some of the classical methods used are:
• Textual representation. Information is represented as a succession of questions and answers.
• Attribute-value pairs. Every attribute contains a value or a range.
• Structures. Cases are represented as a collection of objects. They are suitable for complex domains.
We decided to model the problem part of a case as a set of attribute-value pairs, where each value can be a real number or a nominal value. We chose this approach because structures require complex methods to determine similarity and, although textual representation is more flexible than attribute-value pairs, it requires human supervision.
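Under this design decision, the problem part of a case reduces to a small data structure. The sketch below is one plausible Java encoding (the names are ours); the key point is that a value is either real, nominal, or unknown, which is exactly the case split used by the distance function of Eq. (4.1) below.

// One plausible encoding of the problem part as attribute-value pairs.
import java.util.Map;

sealed interface AttributeValue permits RealValue, NominalValue, UnknownValue {}
record RealValue(double value) implements AttributeValue {}      // e.g. age = 34.0
record NominalValue(String value) implements AttributeValue {}   // e.g. goal = "rehab"
record UnknownValue() implements AttributeValue {}               // missing information

// The problem part is a map from attribute names to values.
record ProblemPart(Map<String, AttributeValue> attributes) {}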
How the case is modeled also defines the structure of the case base. There have been several approaches to structuring the case memory. The main division is between flat memories, where all the cases are at the same level, and structured memories, which rely on generalizations and abstractions of the cases to facilitate the evaluation and to allow indexation control. Some of the more important structured memories are:
• generalization-based hierarchical memories,
• concept-based hierarchical memories,
• decision tree-based memories,
• subsumption-based memories and
• multi-layered memories.
We can find descriptions of these and some others in [Bichindaritz (2006)].
Due to its simplicity, we decided to use a flat memory, where the retrieval phase consists of comparing the new case with all the previous cases stored in the case base.
Resuming the reasoning process, all the agents willing to collaborate apply the retrieval phase, which consists of returning the cases most similar to the current one. So we need a formal definition of similarity. In our case, following [Dasarathy (1991)], we use a distance-weighted k-NN algorithm. The basic idea consists in weighing the contribution of each attribute according to its distance to the query point, giving greater weight to the closer ones.
In this sense, the distance between two cases $C_1$ and $C_2$ over a given set of attributes is calculated using the similarity function:

$$f(C_1, C_2) = \sum_{a=1}^{n} d_a(x, y) \qquad (4.1)$$

where $n$ is the number of attributes used to measure the distance between the two cases, $x$ and $y$ are the values of attribute $a$ in $C_1$ and $C_2$ respectively, and $d_a$ is the distance between those values, defined as: $d_a(x, y) = 1$ if $x$ or $y$ is unknown; $d_a(x, y) = \mathrm{overlap}(x, y)$ if $x$ and $y$ are nominal values; and $d_a(x, y) = (x - y)^2$ if $x$ and $y$ are real values.
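Building on the attribute-value sketch above, Eq. (4.1) translates almost line for line into code. Note that the real-valued branch is an unnormalized squared difference, so in practice attributes with large ranges would dominate unless values are scaled first; the text above leaves this point open.

// Sketch of the similarity function of Eq. (4.1).
import java.util.List;

final class CaseDistance {
    static double distance(ProblemPart c1, ProblemPart c2, List<String> attrs) {
        double sum = 0.0;
        for (String a : attrs) {
            sum += attributeDistance(c1.attributes().get(a), c2.attributes().get(a));
        }
        return sum;   // smaller means more similar
    }

    static double attributeDistance(AttributeValue x, AttributeValue y) {
        if (x == null || y == null
                || x instanceof UnknownValue || y instanceof UnknownValue) {
            return 1.0;                                        // unknown: maximal distance
        }
        if (x instanceof NominalValue nx && y instanceof NominalValue ny) {
            return nx.value().equals(ny.value()) ? 0.0 : 1.0;  // overlap metric
        }
        if (x instanceof RealValue rx && y instanceof RealValue ry) {
            double d = rx.value() - ry.value();
            return d * d;                                      // squared difference
        }
        return 1.0;                                            // type mismatch: treat as unknown
    }
}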

Once the agents have retrieved the most similar cases (those within the given threshold), they start the reuse phase. The collaborating agents try to solve P using their case bases and send back a message to the initiator agent $A_i$ that is either sorry (if every one of their cases has a distance greater than the given threshold) or a set of solutions $\langle \{(S_k, C_k^j)\}, P, A_j \rangle$, where each pair $(S_k, C_k^j)$ means that agent $A_j$ has found $C_k^j$ cases in its case base endorsing solution $S_k$.
The initiator agent $A_i$ has to choose the most adequate solution among all the received ones. In order to do this, a voting scheme is applied. The voting scheme defines the mechanism by which an agent reaches an aggregate solution from a collection of solutions coming from other agents. The principle behind the voting scheme is that the agents vote for solutions depending on the number of cases they found endorsing those solutions.
Following Plaza [Plaza et al. (1996)], we do not want agents with a larger number of endorsing cases to have an unbounded number of votes regardless of the votes of the other agents. Thus, we define a normalization function so that each agent has one vote, which can go to a unique solution or be fractionally assigned to a number of solutions depending on the number of endorsing cases. We denote by $\mathcal{A}$ the set of agents that have submitted their solutions to agent $A_i$ for a problem P. The vote of an agent $A_j$ for solution $S_k$ is:

$$Vote(S_k, A_j) = \frac{C_k^j}{1 + \sum_{r=1}^{K} C_r^j} \qquad (4.2)$$

where $C_k^j$ is the number of cases found in the case base of $A_j$ endorsing solution $S_k$ and $K$ is the number of candidate solutions. It is easy to see that an agent can cast a fractional vote that is always less than 1. Aggregating the votes from the different agents for a solution $S_k$ we have its ballot:

$$Ballot(S_k, \mathcal{A}) = \sum_{A_j \in \mathcal{A}} Vote(S_k, A_j) \qquad (4.3)$$

Therefore the winning solution is:

$$Sol(P, \mathcal{A}) = \arg\max_{k=1,\dots,K} Ballot(S_k, \mathcal{A}) \qquad (4.4)$$
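Eqs. (4.2)-(4.4) combine into a few lines of code. In this sketch, each answering agent is represented simply by its map from candidate solutions to the number of endorsing cases it found; the method names and types are illustrative.

// Sketch of the voting scheme of Eqs. (4.2)-(4.4).
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class Voting {
    static <S> S winningSolution(List<Map<S, Integer>> countsPerAgent) {
        Map<S, Double> ballot = new HashMap<>();
        for (Map<S, Integer> counts : countsPerAgent) {
            // Total endorsing cases of this agent; caps its vote below 1 (Eq. 4.2).
            int total = counts.values().stream().mapToInt(Integer::intValue).sum();
            for (Map.Entry<S, Integer> e : counts.entrySet()) {
                double vote = e.getValue() / (1.0 + total);
                ballot.merge(e.getKey(), vote, Double::sum);   // aggregate (Eq. 4.3)
            }
        }
        // The solution with the largest aggregated ballot wins (Eq. 4.4).
        return ballot.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(null);
    }
}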

Finally, when $A_i$ has selected a solution $S_k$, it applies the final steps of the CBR cycle. In the revision phase the voted solution is revised by the initiator agent $A_i$ to remove unsuitable elements and to correct inconsistent attributes. In the retain phase, $A_i$ has to decide whether the selected solution is added to its case base. To make this decision, the agent evaluates how well the proposed solution has worked.

4.2.2 Communication process


4.2.2.1 Pure distributed communication process
The communication process involves the transmission and reception of information among the agents of the system. From a communication point of view, the system consists of a dynamic, scalable and adaptive P2P network where each agent of the multi-agent system (located within a user's mobile terminal, e.g., a smartphone) is a peer of the network. Peers of the network are mobile and each one contains an agent with accumulated experience. That experience may be useful for another agent, so an ad-hoc topology, where every peer is able to establish a connection with any other peer, is appropriate.
In order to establish the communication we need an ad-hoc network routing protocol. These protocols have to deal with issues like high power consumption, low bandwidth and high error rates. There exist two categories:
• Table-driven protocols. These protocols try to maintain up-to-date routing information for all the nodes of the network. Nodes have to maintain tables with this information and propagate the tables in order to keep routing consistent.
• Source-initiated on-demand driven protocols. These protocols create a route only when needed. Nodes maintain discovered routes for as long as they are needed and the nodes are reachable. Otherwise, they initiate another route discovery process.
We decided to use a simple source-initiated on-demand driven protocol to support communications. The characteristics of these wireless routing protocols fit well with a very highly mobile environment like ours. Some of the principal examples of this type of protocol are:
• Ad-hoc On-Demand Distance Vector Routing (AODV). AODV is considered a pure on-demand route acquisition protocol because only the nodes of a selected path maintain information about the route and participate in table exchanges. The idea is to broadcast a route request packet until it finds a node with a fresh route to the destination, or the destination node itself. To assure a loop-free route that contains only the most recent information, a destination sequence number is used.
• Dynamic Source Routing (DSR). In this protocol, when a source node has to send a packet, it first looks in its cache to see if it has a fresh route to the destination node. If not, it broadcasts a route request packet containing the destination's address, the source's address and a unique identification number. Each node receiving the packet checks whether it has a fresh route to the destination node and, if not, broadcasts the packet adding its own address to the route record. Route request packets are forwarded only if the node's address is not present in the route record. Whenever a node is the destination or knows a valid route to it, it creates a route reply and sends it back across the nodes of the route record.
• Temporally-Ordered Routing Algorithm (TORA). TORA is a protocol that provides multiple routes for every source-destination pair. The key to this protocol is the control messages exchanged among the small group of nodes near a topological change.
More examples and further explanations of these types of protocols can be found in [Royer and Toh (1999)].
Given the characteristics of our system, we opted for a protocol similar to DSR but with slight differences. Our system has to deal with an ever-changing environment where every node moves freely and constantly. In this situation, maintaining routes makes no sense: routes can stop being valid at any moment, and new and better routes can appear at any second.
Our communication process can be summarized as follows (see Fig. 4.7). Whenever a node needs to make a request, it is because its agent needs help with a specific case, so it disseminates a packet with the request using a broadcast frame sent to its direct neighbors. If they decide to collaborate, as we have seen in the previous section, these one-hop neighbors answer the request.
At the same time, these nodes forward the request frame to their own direct neighbors. Every node has its own serial number counter, which is increased whenever a new request is sent. Combining the serial number and the source node's IP address, every packet in the network can be identified uniquely. Nodes need to store in a table the address of the source node, the address of the node from which they received the frame and the unique serial number defined by the source node. This way, possible responses coming from other nodes can be forwarded back to the source node. If a packet is already registered in a node's table, it is discarded, unless the packet came directly from the source node and the previous route was created through a third node.

Fig. 4.7 a) Request dissemination path, b) Path followed by the answer.
As we said, there are two reasons supporting this decision. First, it makes no sense to keep state information in a highly mobile environment, where every node changes its position constantly. Secondly, as every single node is a potential candidate to answer the request, it makes sense to broadcast the request to all the direct neighbors of the community.
Let us now look at the format of the packets exchanged through this protocol (see Fig. 4.8). The first field is the address of the emitting node, then the address of the destination node, then a time-to-live (TTL) field, next the address of the source node and a field with the number of packets generated by that node. These last two fields allow the packet to be identified unequivocally. Finally, the last field is the payload.

Fig. 4.8 Format of the frame exchanged.
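The frame lends itself to a direct encoding; the sketch below follows the field order of Fig. 4.8, with the (source address, counter) pair serving as the network-wide packet identifier described above. Field names are illustrative.

// Illustrative encoding of the frame of Fig. 4.8.
record Frame(
        String emitterAddress,       // node currently transmitting the frame
        String destinationAddress,   // destination of this hop
        int ttl,                     // remaining hops before the frame is discarded
        String sourceAddress,        // node that originated the request
        long packetNumber,           // per-source counter of generated packets
        byte[] payload) {

    // The last two header fields identify the packet unequivocally network-wide.
    String packetId() {
        return sourceAddress + "#" + packetNumber;
    }
}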


Broadcast frames should circulate only for a limited amount of time, so they carry a TTL field with the number of hops allowed. Before forwarding the broadcast frame to its own direct neighbors, every node decreases the TTL field. A node only behaves this way the first time it receives a frame; subsequent times, it discards the frame. Likewise, when a frame's life expires, that is, when the TTL field reaches zero, the frame is discarded. This behavior is intended to avoid the packet explosion problem that Gnutella had to deal with when using a similar protocol.
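Putting the pieces together, the forward-or-discard rule can be sketched as below, building on the frame sketch above. It keeps the duplicate table and TTL handling described in the text but, for brevity, omits the exception for duplicates arriving directly from the source node; the link-layer and agent hooks are placeholders.

// Sketch of the request-forwarding rule: rebroadcast a frame only the first
// time it is seen and only while its TTL has not expired.
import java.util.HashMap;
import java.util.Map;

abstract class RequestForwarder {
    // packetId -> neighbor the frame was received from, kept so that responses
    // can be forwarded back towards the source node.
    private final Map<String, String> seen = new HashMap<>();

    void onRequest(Frame f, String receivedFrom) {
        if (seen.containsKey(f.packetId())) {
            return;                            // duplicate: discard (no packet explosion)
        }
        seen.put(f.packetId(), receivedFrom);
        answerIfPossible(f);                   // collaborate if the local agent can help
        if (f.ttl() > 1) {                     // decrease TTL before rebroadcasting
            broadcastToNeighbors(new Frame(localAddress(), f.destinationAddress(),
                    f.ttl() - 1, f.sourceAddress(), f.packetNumber(), f.payload()));
        }                                      // TTL exhausted: the frame dies here
    }

    abstract void answerIfPossible(Frame f);       // hook into the CBR agent
    abstract void broadcastToNeighbors(Frame f);   // hook into the link layer
    abstract String localAddress();
}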

4.2.2.2 Hybrid communication process

We consider here a variation of the pure distributed communication process scenario. The idea is to deploy a hierarchical structure where some nodes have more experience, greater communication power or greater reasoning capabilities. We call these nodes super-peers [Yang and Garcia-Molina (2003)]. These nodes are intelligent agents too and form an ad-hoc network with the other peers, but they have augmented capabilities in order to increase their functionality. A super-peer periodically collects cases from other agents, which gives it an extensive case base. We also added network communication capabilities to the super-peer node. This allows it to establish a connection via LAN or the Internet with other super-peer case repositories to request the help needed.

4.3 A case study: An intelligent gym

We have designed a scenario to put our approach into practice: an intelligent gym where users have an intelligent mobile device (smartphone) able to complement the trainer's job. These devices know the users' long-term objectives, such as losing weight, rehabilitation or building muscle, and they help users to accomplish their exercises on the gym machines. The machines at the gym have a set of sensors that provide information to the mobile terminals in order to supervise the correct execution and, at the same time, to check how the user performs the exercise.
In this scenario, a problem P is defined by a set of attributes like age, weight, height, user profile, previous training work, the user's objectives and the exercise machines at the particular gym. The solution part S includes a suggested training plan, adapted to each user, as well as information about how to carry it out. It also includes its degree of success in the cited case.
The sequence can be as follows. The user enters the gym and switches on the smartphone, starting the training program. Depending on the day, the training program suggests that the user start with a particular exercise. Let us assume that the first aim is to run a little to warm up. The user then approaches the running machine. The smartphone, using Bluetooth, connects to and recognizes the running machine, sets up the machine for a timed, dynamic-speed run and tells the user that information in advance. While the exercise is being done, the smartphone (together with the body devices that the user wears) monitors the heart rate and the blood pressure and alerts the user if something goes wrong. When the exercise is finished, the smartphone saves the data to keep a record in the user's history profile.
Users' smartphones run a training program, which uses a CBR agent to suggest the exercises and how to perform them. This may include: information about the correct execution of the exercise, the number of sets to perform, the number of repetitions per set and the execution speed. The case used in the CBR cycle is the present state of the user together with the present state of the gym (number and type of machines, number of users, temperature, humidity, etc.). Smartphones may interact among themselves to sequentially organize the occupation of the machines. Besides, when a smartphone does not know how to manage a particular case, it may ask other peers (i.e., smartphones) for suggestions about it.
In this scenario we could consider a non-mobile node acting as a super-peer (see Fig. 4.9). This node is an intelligent CBR agent too and forms an ad-hoc network with the other peers, but it has greater computational and communication capabilities (for instance, it can be a normal PC or a laptop). The main application of this super-peer is to collect data from the smartphones, allowing the trainer to personally supervise the users' training when necessary and, at the same time, keeping an updated and complete history profile per user.

Fig. 4.9 Example of the topology of an intelligent gym.

The super-peer has a global perception of the gym through its interaction with the users' smartphones and helps to organize the resources as a global planner. It also collects/provides new cases from/to the smartphones to create a bigger case base. Moreover, super-peers from several gyms could also share their cases (see Fig. 4.9) to enhance the system through distributed learning.
In order to test our proposal, we used the NetLogo programmable modeling environment [Wilensky et al. (1999)] to simulate a gym consisting of users, machines and a trainer. Users come to the gym randomly and leave when their workout is finished or when they have spent a given amount of time at the gym. Machines are spots where users can do exercises; they have a fixed location and every machine supports a random number of exercises. Finally, we suppose the trainer has a fixed location in the gym and, of course, knows how to carry out all the possible exercises.


Our goal is to show that this system decreases users' average waiting time and, consequently, helps to increase the number of completed training sessions. Assuming users have to wait less time using the system, our next question was: by how much is that time reduced? We also considered it interesting to estimate the number of cases needed to label the system as useful. Finally, we also wondered how much faster a user can learn new exercises using the system.

Fig. 4.10 Flow chart of a gym user.

Fig. 4.10 shows the flow chart of a random user's training session inside the gym. Whenever a user arrives at the gym, he decides what exercise to do. If the user knows how to carry it out, he just goes to the machine and starts the exercise. The time between arriving at the gym and starting the training session is considered waiting time: the amount of time a user spends at the gym without doing exercise. In contrast, if the user does not know how to carry out the selected exercise, he has to find out how to do it. In a scenario with smart mobile devices, users can learn from other users and from the trainer. Otherwise only the trainer can help them. As before, the time spent walking to the trainer is considered waiting time. Once the user is at the trainer's spot, the trainer may be busy attending to other users. In this case, the amount of waiting time will depend on the number of people waiting. Let $t_{wait}$ be the user's waiting time; we can define it formally as the sum of the time spent walking towards a machine $t_{wma}$, the time spent walking towards the trainer $t_{wtr}$, and the time waiting to be attended by the trainer $t_{atr}$:

$$t_{wait} = t_{wma} + t_{wtr} + t_{atr} \qquad (4.5)$$

The performance of the proposed system depends on two main factors: the occupation of the gym and the average knowledge of its users. In order to simulate the occupation of the gym we have defined four parameters: the initial number of users (initial-users), the rate at which new users arrive at the gym (birth-rate), the maximum training time per session (max-time) and, finally, the initial energy of a user (user-energy). A user will keep working out until his energy drops to zero or the maximum training time per session is reached.
In order to simulate the users' knowledge we have defined two parameters: the maximum number of exercises known by a user arriving at the gym (num-cases-known) and the total number of exercises that can be carried out in the gym (num-total-cases). The exercises known by a user arriving at the gym are selected randomly. The number of exercises known can be increased by asking the trainer or, if the user has a smart device, by exchanging information with other users.
The results in Table 4.1 were obtained in a scenario with the following parameter values:
• initial-users = 1. The system starts with one user at the gym.
• birth-rate = 1. The probability of a new arrival in any given minute is 1%.
• max-time = 70. The maximum amount of time for a user at the gym is 70 minutes.
• user-energy = 8000. Every user starts training with 8000 points of energy.
• num-total-cases = 50. The total number of exercises a user can do in the gym is 50.
• num-cases-known varies from 0 to 50.
In this way we obtain a scenario with a stable population where users possess different levels of training knowledge. This configuration results in an average number of users of around 10, with a minimum of 1 and a maximum of 30.
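For reference, the Test 1 configuration can be captured in a single settings object. We express it in Java only for consistency with the other sketches in this chapter (the actual model runs in NetLogo); the field names mirror the simulation parameters above.

// The Test 1 scenario of Table 4.1 as a plain settings object (illustrative).
record GymScenario(int initialUsers, double birthRatePercent, int maxTimeMinutes,
                   int userEnergy, int numTotalCases, int numCasesKnown) {

    // Fixed values from Sec. 4.3; num-cases-known is swept from 0 to 50.
    static GymScenario test1(int numCasesKnown) {
        return new GymScenario(1, 1.0, 70, 8000, 50, numCasesKnown);
    }
}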
First, simulations were carried out in a gym where users do not have any help from the proposed system. After that, the same simulations were performed in a gym where users do have the help provided by their smartphones. All simulations were halted when 300 users had left the gym, and every different test was performed ten times. Table 4.1 shows the average values after the ten executions.
The results confirm our expectations that the proposed system reduces the amount of waiting time as well as increasing the number of users able to complete their training sessions. The average waiting time is reduced, on average, by 45%. Extreme scenarios, where users do not know how to do any single exercise (see Fig. 4.11) or where they potentially know how to do all of them, obtain reductions of 35% and 40% respectively. In terms of absolute time, the first scenario is clearly the one that obtains the greatest benefit, but in relative terms the best results are obtained in medium-knowledge scenarios. The same tendency is followed by the number of uncompleted sessions.


Table 4.1 Results of Test 1

nck/ntc¹   t_wait²   t_wait³   t_train²   t_train³   uncompleted²   uncompleted³   avg. cases²   avg. cases³
           (min)     (min)     (min)      (min)      sessions       sessions       in system     in system
0/50       35.11     22.69     33.01      41.26      276            121            4.71          6.32
10/50      28.96     16.38     38.69      42.98      191            69             13.58         14.63
20/50      23.01     11.55     41.07      44.07      128            15             20.53         21.13
23/50      20.68     9.24      42.03      44.21      101            10             22.24         22.85
25/50      16.84     8.93      43.12      44.20      56             5              23.40         24.05
30/50      14.18     8.27      43.32      44.24      35             5              25.9          26.40
50/50      7.74      4.67      44.28      44.3       3              0              33.84         34.12

¹ nck/ntc stands for num-cases-known/num-total-cases.
² Results for a gym without the proposed system.
³ Results for a gym with the proposed system.

Fig. 4.11 Evolution of $t_{wait}$ and $t_{train}$ when num-cases-known is 0. a) Without the system, b) With the system.

Table 4.1 shows that the scenario where we use our proposed system offers the best results when the maximum number of cases known on arrival is 23. In this situation (see Fig. 4.12) the average waiting time is reduced by 55%, the number of users unable to complete their training sessions is reduced by 90% and, although the maximum number of cases known on arrival is 23, the average number of cases per user in the system stays below that number. We also asked ourselves about the number of cases needed to obtain the best results from the system. This number is clearly 23, but any value between 20 and 25, as we can see in Table 4.1, also provides excellent results regarding the number of uncompleted sessions and the average waiting time.


Fig. 4.12 Evolution of # of users and # of users waiting (top) and $t_{wait}$ and $t_{train}$ (bottom) when num-cases-known is 23. Without the system (left) and with the system (right).
Table 4.2 Test 2 results

nck/ntc¹   t_learn² (min)   t_learn³ (min)
0/50       170              122
10/50      161              121
25/50      145              114
35/50      132              104
50/50      110              91

¹ nck/ntc stands for num-cases-known/num-total-cases.
² Results for a gym without the proposed system.
³ Results for a gym with the proposed system.



By modifying the initial parameter values we have obtained different average waiting times and different numbers of people who cannot complete their training session but, using the proposed system, we have always obtained better results.
We carried out another test, Test 2, to find out how much faster a person can learn a given number of exercises using the smartphone. We simulated a gym with the same initial parameter values as in Test 1 and we created a user without any knowledge about exercises. He does not finish his training session until he learns a given number of new exercises (10 in our test). This means that this user does not lose energy or leave when reaching max-time. We considered different scenarios depending on the average knowledge of the other users.
Table 4.2 shows that using our proposed system reduces the average learning time, independently of the knowledge level of the other users.

4.4 Conclusions
We have proposed an e-Assistance support network of intelligent agents. These agents can move around freely, and this mobility does not interfere with their ability to collaborate in facing diverse problems. We have also detailed the key steps and choices of the design and development process. This includes choosing a knowledge paradigm and a communication protocol, with all their ins and outs. Finally, we transferred that theoretical design to a real-life case study and performed some simulations to test its validity.
The results achieved in the simulations confirm that using the proposed support system improves the performance and reduces the amount of time employed. In our case, gym users were able to reduce their average waiting time and increase the probability of completing their training sessions in a fixed amount of time.
Regarding improvements to the system, the most focused and conscientious work must be applied to the reasoning process. Improvements in this area may include refinements in the case representation and the memory base structure, as well as new techniques to be applied in the revision phase of the CBR cycle.

Chapter 5

The Active Metadata Framework

Christopher McCubbin, R. Scott Cost, John Cole, Nicholas Kratzmeier, Markus Dale,
Daniel Bankman
The Applied Physics Laboratory, Johns Hopkins University, 11100 Johns Hopkins Road,
20723 Laurel, Maryland, USA
{mccubcb1|costrs1|colejg1|kratznm1|daleme1|bankmdj1}@jhuapl.edu

Abstract
We have developed a concept for a ubiquitous, agent-based service-oriented architecture called Active Metadata. Due to the nature of the target environment and the resources to be integrated, it is difficult to easily field and evaluate approaches to metadata distribution. We have therefore developed a simulation environment which allows for the simulation of flexible ubiquitous computing environments, enabling rapid prototyping and metrics collection for new agent behaviors. Building on these results, we have designed and built a SOA which combines the advantages of swarming technology and active metadata, extending SOA and service invocation capability to tactical edge networks. Autonomous mobile nodes within our swarming architecture also have the capability to reconfigure the edge network topology to optimize service response time, while at the same time completing complementary tasks such as area search, even though communications between the swarming components are limited to local collaboration.

5.1 Introduction
The increasing need for more easily composable and adaptable intelligent systems in distributed agile environments has naturally led to the need for Service-Oriented Architectures (SOAs) in the ubiquitous computing domain. Creating a robust SOA in an agile environment poses several difficulties. Because the available information resources may not be known during the design or even the deployment of a future agile network, agile information integration will require run-time rather than static integration. The personnel costs and delays associated with programmer-driven integration make using human integrators at best undesirable and often infeasible for agile command and control infrastructures. We therefore conclude that information resources must be capable of autonomously integrating during the course of an operation. In order to effectively self-integrate, resources must be able to recognize and understand peer resources whose identity and capabilities are not known at
deployment. To support this, each resource must be able to determine the range of specific relevant and available resources. That is, resources must recognize the set of resources that can effectively be used, and that can provide the most relevant information or the most valuable service. One approach is a directory-based approach that supports the registration of resource information with a centralized (or distributed) directory service. This directory service can in turn be queried by resources requiring a match. While this provides an effective means of coordination in some situations, there are drawbacks. In a highly dynamic environment, the information in the directory may not be updated frequently enough to reflect reality. There is also significant overhead associated with maintaining a common directory, especially as the scope of the region or the number of entities increases. Furthermore, this approach assumes that all entities have persistent access to a directory service. In a highly dynamic or hostile environment, this may be too strong an assumption to make. Our emphasis in this work is on environments in which there is a benefit to exploiting significant amounts of autonomy on the part of the framework elements.
An alternative approach assumes that resources share information in a distributed manner. In
contrast to the pull-based directory approach, service providers have a responsibility to make themselves known directly to the device network. This puts more of a burden on the service provider, but
has the advantage that resource information is distributed in advance, and therefore may be available
even if access to resources or directory servers is not. We developed an agent-based distributed metadata distribution framework, called the Active Metadata Framework (AMF), that realized these ideas
using a federation of mobile agent platforms.
One serious issue with highly distributed agent-based computing platforms is the inability to
easily and rapidly prototype new behaviors. Often behaviors that look reasonable in theory show
unexpected or undesirable emergent behavior once deployed. For many agent-based ubiquitous applications there exists the need for a simulation environment where various aspects of the system can
be controlled to allow for realistic testing of agent behaviors.
Faced with these issues during field applications of the AMF, we were motivated to develop
a simulation framework that would allow for rapid behavior prototyping. With this framework we
are able to prototype new active metadata strategies without the issues of distributed timing, data
collection, and physical limitations. The simulation environment allows for simulation of flexible
ubiquitous computing environments while enabling rapid prototyping and metrics collection of new
agent behaviors.
Using lessons learned during the analysis of simulations and experiments, we refined the AMF concept to utilize swarming and Web Service technology to improve the utility and robustness of the concept. Our most recent approach, a modified version of a swarming model using push technology, put the burden on the swarm network by allowing the swarm communications to maintain and distribute metadata. Service activations were also carried out in a similar manner. This provided a scalable, distributed approach to the advertisement and activation of resource information within a domain that was also tolerant of network delays and failures.
In section 5.2, we describe the simulation framework that was designed to test new methods of
distributing metadata in the classic AMF paradigm. In section 5.3, we describe the Swarming Active
Metadata Framework (SWARM-AMF) ideas that were first explored in the simulation environment.

5.1.1 Background: Concepts


Several concepts are important for understanding AMF in an appropriate context.

5.1.1.1 Service-Oriented Architecture


Service-oriented architecture (SOA) is an architectural style of building distributed applications by
distilling required functionality into reusable services. This allows for maximum reuse of existing
infrastructure while accommodating future change. An application can be flexibly composed of services. Services can be reused between applications. Loose coupling, implementation neutrality and
flexible configurability are some of the key elements of SOA [Huhns and Singh (2005)]. Current implementations of SOAs are typically Web Services-based using the HTTP communications protocol,
SOAP [Mitra and Lafon (2007)] for exchanging messages, the Web Services Description Language
(WSDL) [Booth and Liu (2007)] for describing web services, and Universal Description, Discovery
and Integration (UDDI) [Clement et al. (2004)] for registering services.

5.1.1.2 Tactical Networks


The future of tactical networks is varied and complex. According to the DARPA Strategic Technology Office, tactical network-centric operations must be reliable, available, survivable and capable
of distributing large amounts of data quickly and precisely across a wide area. Additionally, these
networks must be able to simultaneously and seamlessly meet the needs of both manned and unmanned systems throughout the strategic and tactical battlespace. [DARPA (2008)]. Certainly these
networks will contain mobile ad-hoc networks, reconfigurable networks, and other networks that are
outside the realm of standard, wired high-bandwidth networking.
Implementing SOAs on mobile ad-hoc networks presents a unique set of problems. Since the
network is highly dynamic, even notions such as point-to-point messaging become an issue. Various
routing protocols have been developed [Clausen and Jacquet (2003); Perkins et al. (2003)], but most
prominent routing protocols only search for routes that exist at the current time and are not delay-tolerant. Some research has been done in delay-tolerant routing. Depending on the ad-hoc operational
concept, a centralized directory of services, popular in pull-style SOA networks, may not be a feasible
option. Several alternatives exist, such as on-demand advertising and directory replication.
The swarming network system and belief network that our system used require only the most rudimentary capabilities from the underlying tactical network. These requirements include the ability to broadcast to local nodes and a connectionless link layer capable of transporting the User Datagram Protocol (UDP).
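These two requirements are modest by design. As a rough illustration (not code from the actual system), local broadcast over UDP takes only a few lines in Java:

// Minimal sketch of local broadcast over a connectionless (UDP) link layer.
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

final class LocalBroadcast {
    static void broadcast(byte[] payload, int port) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);   // allow sending to the broadcast address
            InetAddress local = InetAddress.getByName("255.255.255.255");
            socket.send(new DatagramPacket(payload, payload.length, local, port));
        }
    }
}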

5.1.2 Background: The active metadata concept

We have developed a concept for a ubiquitous, agent-based SOA called Active Metadata. This concept envisions a ubiquitous network entity to which both services and users can attach, and where users can find services that are applicable to their current situation. This is not unlike any SOA. Our concept differs in how the knowledge of the services provided, also known as metadata, is propagated through the network. We have developed two versions of the system. Initially, we developed a system which is referred to as the classic AMF. After identifying some difficulties with this approach, we refined the concept to utilize some concepts of the Johns Hopkins University Applied Physics Laboratory (APL) SWARM architecture in a framework we refer to as SWARM-AMF.

5.1.2.1 Classic active metadata framework

In the classic active metadata concept, the service metadata was encapsulated in a mobile agent structure which then sought out users who would be interested in the service represented by the metadata. Users could query the network for appropriate services which, if the network worked as envisioned, would already have local copies of matching service metadata and could return service information rapidly, even in the face of lower-layer network instability.
We have realized the Active Metadata concept in a high-level design called the AMF. This design
currently has two implementations. Our original implementation was an embedded implementation
running on the JADE agent framework [Bellifemine et al. (2002)]. Since this framework was unwieldy to test for the reasons stated above, we have implemented a streamlined simulation version of
the AMF described in section 5.2.
The AMF consists of a set of nodes running on platforms. These platforms may be connected by a persistent network, such as the Internet, or may be part of a looser network such as a Mobile Ad-hoc NETwork (MANET). The collection of these nodes forms a virtual SOA. Services and consumers may connect to this SOA by connecting to their local AMF node and performing advertisements or queries, respectively. This architecture is like many other SOAs. The unique aspects of AMF include the ways in which service advertisements and metadata are distributed to other nodes, how the service network is formed in the face of unreliable connections, and how queries are analyzed.
In order to advertise a service to the AMF, a service first formed a metadata object representing itself. The service then transmitted this metadata object to the local AMF node. To represent the service, the node created a number of Active Metadata Agents (AMAs). AMAs could also represent an a priori service need created by a consumer. In the presence of highly unreliable network connections, such as MANETs, implementing a centralized service directory was not feasible. The AMAs took the place of the directory service by representing the searchable metadata of the service that created them. These AMAs were pushed to other nodes in an intelligent way, anticipating where queries requiring the service would be made.
Using the information in the node regarding current network conditions and the locations of other AMAs, each AMA made a decision about whether or not to migrate, and if so, where. This decision was influenced by the presence of other AMAs that provided similar services, and of any agents that represented a consumer service need. We envisioned intelligent migration algorithms that would take into account query histories, consumer a priori information, and other information to optimize agent placement in the network.
An AMA also periodically contacted its node of origin to retrieve updated metadata, if that node was reachable through the network.

Fig. 5.1 An active metadata framework node.

An AMF Node could contain supporting data structures, as seen in figure 5.1. The node shown is
from the embedded AMF implementation and contains components to create (instantiator), customize
(detailer), and migrate (deinstantiator and network bridge) AMAs. The simulation implementation
had slightly different components as described in section 5.2.3. We extensively tested this architecture
using a simulation described in section 5.2.


5.1.2.2 Swarming active metadata framework


Due to some difficulties in maintaining synchronization between the services and their proxies in
classic AMF, we refined the concept to use the Johns Hopkins Swarming architecture which included
a communications system relevant to our needs. We called this system the SWARM-AMF. The
architecture and testing of SWARM-AMF are described in detail in section 5.3.

5.2 SimAMF
The classic AMF framework as described in section 5.1.2.1 was difficult to analyze when using the full mobile agent platform that it was originally written on. We created a simulation environment to more effectively test new ideas for the framework. The simulation system and the experiments we performed with it are described in this section.

5.2.1 Motivation
Classic AMF implemented an agent-based approach to dynamic resource integration. Using this
approach, agents representing resources were tasked with targeting and supporting consumers or
partners of that resource, by migrating to and providing information about their home resource, and
if possible, extending proxy functionality. This involved the emplacement of support nodes at various locations throughout the environment (ideally, resident at every site which consumes or provides
resources, such as a sensor installation, control station, vehicle-based workstation or command platform, etc.). Aspects of this work that we explored included representations for knowledge of the
physical domain, and algorithms and protocols governing agent migration and interaction or negotiation. Due to the nature of the target environment and the resources to be integrated, it was difficult
to easily field and evaluate approaches to metadata distribution. With this type of framework, it was
difficult to hypothesize and experiment with various strategies. Previous experiments had involved
instrumentation of ground stations, autonomous and manned vehicle systems in urban and desert environments, and, while extremely valuable exercises, had been labor intensive and time consuming.
To mitigate these issues, we created a simulation framework that allows us to experiment with various
factors, such as distribution protocols and migration schemes, without the need to field actual AMF
nodes. This simulation framework modeled the migration of AMAs among virtual nodes, simulating
both environmental distribution of nodes and the intervening network infrastructure. Applications
interfaced directly with this simulated environment, in the same way that they would interact with
distributed AMF nodes. This facilitated testing of component interaction, while also allowing for
some control over issues in the network layer. The main design goal was to make the simulation environment as realistic as possible with respect to the nodes themselves, and then to extend the range
of control/simulation with respect to the network environment.

5.2.2 Related work

Simulation frameworks for ubiquitous computing are currently being researched in several projects. Reynolds et al. [Reynolds et al. (2006)] have laid out a set of requirements for a generic tool to simulate many types of ubiquitous computing situations. Hewlett Packard's UbiWise [Barton and Vijayaraghavan (2002)] is another simulation environment; it combines a network simulation with a 3-D visualization environment based on the Quake3 engine and is similar to the Tatus simulator [O'Neill et al. (2005)].
The AMF itself addresses ubiquitous computing with a combination of recent advances in agent technology, SOAs, and ubiquitous computing. Thanh describes a SOA as a collection of services which communicate with each other [Thanh and Jorstad (2005)]. A SOA must support three functions: describing and publishing a service, discovering a service, and consuming or interacting with a service. The AMF migration of metadata, explored in section 5.2.3, ties in very closely with these concepts.
Combining agent technology, SOAs, and ubiquitous computing technologies is an active area of
research. Implementing SOAs on ubiquitous computing networks presents a unique set of problems.
Since network connectivity is highly dynamic, even notions such as point-to-point messaging become
an issue. Various routing protocols have been developed, but most prominent routing protocols only
search for routes that exist at the time of search and are not delay-tolerant. Some research has been
done in delay-tolerant routing. Depending on the ad-hoc operational concept, a centralized directory
of services, popular in pull-style SOA networks, may not be a feasible option. Several alternatives
exist, such as on-demand advertising and directory replication.
Moro et al. [Moro et al. (2004)] describe the advantages of using agents in a Peer-To-Peer (P2P) environment to overcome the P2P limitations of message expressiveness, data models, data integration/transformation and routing. In [Smithson et al. (2003)], we see an example of an agent-based P2P system that uses queries (an information-pull paradigm) to discover resources. That system consists of ResultAgents that generate a resource query, SearchAgents that distribute the queries to their peers and ResourceSearchAgents that match the query against ontology-based resource descriptors. The resource descriptors remain stored with the resource that they describe. The AMF, in contrast, uses an information-push paradigm to distribute resource metadata. The K-Trek system [Busetta et al. (2004)] describes a framework where resource information is also statically stored with the resource, in the form of a K-Beacon which stores contextual information about a specific location and periodically broadcasts information about itself. This information source can be discovered by K-Voyager mobile devices based on location proximity. AMAs, by contrast, allow information about a resource to be spread to locations in the network where this information can likely be used, beyond location proximity. The Tuples On The Air (TOTA) system described in [Mamei et al. (2004)] provides a P2P middleware which uses tuples with content and propagation rules to distribute location- and content-based information, which is similar to the Active Metadata system. However, the Active Metadata system also provides proxy services to resources via the instantiated AMAs representing a resource. TOTA also used an emulator environment to analyze propagation in a larger-scale MANET environment.

5.2.3 Implementation
The AMF implementation was written in Java with some specific goals in mind. We wanted the ability to rapidly prototype and test different AMA behavior strategies, as well as the different actions of users of the AMF network. In addition, a flexible, abstract framework was needed to support porting the simulation to field tests: we wanted as much code as possible from a simulation to be portable directly into field tests. These goals were accomplished through the use of an abstract simulation framework, which would apply to all strategies and underlying network representations.

5.2.3.1 Framework structure


Each AMF node was represented as a class which held a user proxy implementation, as well as a listing of all AMAs currently occupying the node. In addition, the node maintained a database of objects that AMAs could query or post to while they inhabited the node.
The abstract user proxy class allowed for varying implementations of user actions throughout execution. An implementing proxy handled an initialization call at the start of execution, which would create and return a list of the user's AMAs. Furthermore, at each timestep, the user proxy would be allowed to execute any additional user action, such as updating the metadata for its AMAs.
Each AMA was represented as a class which held its assigned metadata. The metadata class was abstract, so as to allow several possible types of implementing metadata. In order to implement its behavior strategy, the AMA was assigned two abstract classes, behaviors and abilities. The behaviors class implementation allowed for the implementation of different migration and updating strategies by the AMA, whereas the abilities class implementation provided the basic network abilities to the behaviors class in order to perform its strategies.

Behaviors. The abstract behaviors class declared two functions: an initialization function and a method to execute the AMA strategy. The initialize function was invoked when the AMA was first created by its originating node's user proxy. From then on, the execute-strategy method was invoked on the AMA at each timestep. When calling this function, the behavior was additionally told whether or not this was the first execution timestep since a successful migration, in case, for example, the behavior called for logging to its current node's database upon migration.
Abilities. An abilities implementation was the interface through which the behaviors class interacted with the framework. Functions such as tryToUpdate and tryToMigrate were implemented here according to the underlying framework implementation. For example, these functions might be much more complex if the framework were implemented for field tests, whereas they might be fairly simple for simulations. This flexibility allowed for rapid prototyping and portability of our code across different platforms.

5.2.4 Simulation visualization


In order to test our AMA migration strategies, a Graphical User Interface (GUI) was created to view
the migration and logs of the AMAs as they traversed the network. At any point in the simulation,
the user could click on a node to see a listing of the AMAs currently on it, as well as their current
metadata. In addition, the user could click on the node's database, shown in green, to see a listing of
the objects logged by that node.
To begin the simulation, a user first loaded a configuration file, which described how the user
proxy and AMA behaviors and abilities would be implemented, as well as the structure of the underlying network. The user also had the ability to change the underlying network at given timesteps
through the configuration file. Once this file was loaded, the user clicked 'initialize', which displayed
the network and invoked the initialize method on each node's user proxy, as well as on each AMA
created by the user proxies.
The GUI now allowed the user to step, play and pause the simulation by using the provided
controls. Upon each step, the simulation invoked the execute-behavior methods on each node's user
proxy, as well as on each AMA.

5.2.4.1 Underlying network


Network connections between nodes in this simulated interface were represented by two values: the probability of failure and the type of failure. Once a link had been determined to have failed, by drawing random numbers according to its probability of failure, the link could fail in one of two ways. First, the link could simply bounce all communications, in which case the sending node was aware of the loss of communications: a migrating AMA would be aware of the failure, its sending node would be notified of the loss, and the AMA would be kept on the sending node. The other type of failure caused all communications across a link to be lost without the sending node being made aware of the loss. In the case of migration, the sending node believed the communication to have worked and removed its local copy of the AMA, whereas the receiving node never received the AMA, causing it to be dropped from the simulation.

5.2.5 Experiments
Previous embedded versions of AMF have been tested in live scenarios that were designed to exercise the AMF concept. Since the framework was embedded, it was difficult to test different AMA
behaviors and measure their relative merits. One of the main purposes of the current simulation
framework was to rapidly test and evaluate new concepts for active metadata systems. To determine
if the framework we developed met this criterion, we reimplemented the behaviors of the embedded AMF in the new simulation framework and compared the development time, metrics capabilities,
and visualization of the two systems.

5.2.5.1 Original AMF test design


We conducted several experiments of AMF in the field (Figure 5.2). In the context of this experiment,
we created three AMF nodes. Two of these were ground stations, while the third resided in a vehicle
and traveled in and out of connectivity along the roads surrounding the test site. As the moving
vehicle came in and out of range of the ground stations, AMAs traveled among the three nodes and
dispersed metadata throughout the network. The AMA migration policy was one of balancing, where
AMAs wrote to their local node database when they arrived on or left a node. The database attempted
to keep an approximate notion of which AMAs were on which nodes. This test was conducted on a
wireless ad-hoc network. Each ground station was able to continually supply images from an infrared
camera, while the moving vehicle was a consumer of these camera images.

Fig. 5.2 Satellite image of AMF testing area. Two ground stations are the two dots in one of the right fields,
while the traversable area of the moving vehicle is shown as the dotted line.

For this set of experiments, we allowed each AMF node to create several AMAs which would
disperse through the network. We tested the persistence of our network by starting the moving vehicle
out of communication range of the ground stations, and allowing it to drive slowly past the ground
stations. We allowed the vehicle to be in range of the ground stations for approximately one minute.
Additionally, we started the vehicle within range of the ground stations, and then allowed it to drive
in and out of communication range over several minutes.
This test was implemented on the full-up embedded active metadata framework using real network communications and an agent system based on the JADE agent environment. Development and testing took approximately three man-months.

5.2.5.2 Recreation of AMF test using simulation framework


To exercise and evaluate the simulation framework, we recreated the test conditions from the original
AMF system test in simulation. This involved three steps: writing a behavior class to mimic the
original AMA behavior; writing classes to represent the car and camera; and creating a description of
the network environment present in the test. Due to the simplified nature of the simulation framework,
we were able to reproduce the conditions of the original test in under a week.

5.2.5.3 Results
After we implemented the system test in the Simulation Framework, it was clear that the framework
was a superior way to evaluate the AMA behaviors under consideration, for several reasons. Since
the simulation could be paused or stepped at any point, it was easy to see the effects that the individual
behaviors had on the emergent pattern of the holistic system. It was also invaluable to visualize the
complete state of every node at any point in time, which is very hard with asynchronous distributed
systems. Though the purpose of this exercise was to evaluate the simulation framework, it was
impossible not to immediately identify flaws in the AMA behaviors and to suggest solutions. In fact, we were able to quickly and easily try out several improvements to the AMA behaviors and run 'what-if' scenarios in seconds. We estimate that running these scenarios in the original, embedded
distributed system would have taken hours or days.
It was immediately clear from the testing that failing to replicate enough AMAs led to service
failure in many scenarios. On the other hand, replicating too many AMAs often led to algorithmic difficulties and wasted effort when many AMAs describing the same service became trapped in an isolated part of the network. Great care had to be taken in the migration algorithms so that all AMAs of one service did not attempt to go to the same place, and often these algorithms would have great difficulty
dealing with the limited network connectivity between nodes. We decided to try another alternative
for metadata distribution, which is described in the following section. This new alternative solved
many of the problems found with the simulation and proved to be much more robust and successful
at fulfilling the goals of the framework.

5.3 SWARM-AMF
Testing in the simulation environment and other factors led to a design decision to make the metadata objects as lightweight as possible, and to distribute them as widely as possible with the least
overhead. The agency of the metadata objects was deemed too computationally expensive. As a replacement, we sought a network infrastructure that allowed for a distributed, asynchronous blackboard to share and update the metadata in as efficient a manner as possible. Fortunately,
our previous work on swarming systems had just such an infrastructure built in and readily available. This section describes the new paradigms and implementation of the AMF framework when the
nodes are perceived as members of a swarm and the metadata is pushed into the virtual blackboard
of the swarming network. We call this system SWARM-AMF. A description of the design of the
SWARM-AMF system can be found in section 5.3.2.
We performed a number of simulation runs to tune the SWARM-AMF algorithms. A description
of the simulation setup and some of the results can be found in section 5.3.3. Once our algorithms
were sufficiently tuned, we tested the system with Unmanned Aerial Vehicle (UAV) hardware in the
loop at the Tactical Network Topology (TNT) experiment. The details of this experiment and the results can be found in sections 5.3.4 and 5.3.5, respectively.

5.3.1 Background
In this section we describe some of the technologies that are important for understanding the design
of SWARM-AMF.

5.3.1.1 Swarming and dynamic co-fields


We used an internally developed system called Dynamic Co-Fields (DCF) to generate strong autonomy among our devices. This approach is a form of potential field theory that was extended beyond
previous unmanned vehicle potential field applications by incorporating elements of recent advances
in swarm behavior optimization. These modifications to swarm behavior solved the traditional problems with potential field approaches, generating robust, effective behaviors in diverse, complex environments. The central theme of this approach was the use of stigmergy to achieve effects-based
control of cooperating unmanned vehicles. Stigmergy is defined as cooperative problem solving by
heterarchically organized vehicles that coordinate indirectly by altering the environment and reacting
to the environment as they pass through it. We accomplished stigmergy through the use of locally
executed control policies based upon potential field formulae. These field formulae were used to
coordinate movement, transient acts, and task allocation between cooperating vehicles.
With DCF, a virtual potential field was associated with all germane entities contained within a
locally held model of the vehicle's world. These models typically included peer vehicles and elements
that are impacted by peer vehicles. These fields were used to influence vehicle action, most notably
movement. In these systems a vehicle was considered a point particle with a fixed position in space
at a fixed time. Vehicle movement was determined by the locations of influencing entities that are
likewise fixed in Euclidean space at a fixed time. The force associated with a specific influence may
be attractive, driving the vehicle towards the influence when it is greater than zero, or repulsive, driving the vehicle away from the influence when it is less than zero. Details of the specific
equations used to generate interesting behavior using this approach can be found in the literature
[Chalmers et al. (2004)].
The devices in our network also communicated information, known as beliefs, to each other
through an ad-hoc networking scheme. A networking layer known as the Knowledge Layer performed two functions related to belief communication. First, the Knowledge Layer accepted beliefs
and merged them into the Knowledge Base. Beliefs could be accepted from organic sensors or from transmissions received through the network. In merging the beliefs, the Knowledge Layer had to deconflict contradictory beliefs and reduce the complexity of the knowledge base (as necessary), either
through compression, removal of less important knowledge, or through aggregation. Aggregation
was accomplished through the representation of multiple objects as a weighted centroid. This reduction in the complexity of the stored and transmitted knowledge proportionally decreased bandwidth
used by the vehicle group.
The ad-hoc propagation network required for this system was an addressless communication
mechanism by which vehicles shared knowledge using periodic omnidirectional broadcasts. Transmission occurred without knowledge of the recipient or of the information's utility. Received information was used to update a vehicle's world model without regard for the information's source. By receiving and transmitting in the blind, knowledge was propagated asynchronously across the swarm without the need for a continuous, high-quality-of-service, routed network. Numerous off-the-shelf
link-layer networks were able to accomplish this type of networking effectively.
Both the Dynamic Co-Fields and Propagation Networks have undergone substantial testing in
simulation and in actual hardware. A number of specific behaviors have been developed and evaluated in simulation-based experiments. DCF has been employed to provide highly intelligent high-level command and control capabilities on board the Office of Naval Research (ONR)'s Unmanned Sea Surface Vehicle prototype. In these experiments the DCF algorithms were used to generate operational coordination between Unmanned Sea Surface Vehicles (USSVs) and other cooperating vehicles.

5.3.1.2 Microformats
The purpose of microformats [Allsopp (2007)] is to provide machine-readable semantics in a web
page along with human-readable content. Microformats are designed for humans first and machines
second: they exist as additions to the page markup and do not change or modify the display of information. On the internet, microformats serve as standards for semantic markup so that different
services and software agents can aggregate and interpret microformatted content. Without microformats, complex artificial intelligence is necessary to gather semantics from plain text. Several
microformats are already widely used, such as hCard and hCalendar, and more are being developed
every day. hCard is designed for marking up contact information (for people, places, and things) and
organizes data such as name, location, address, phone number, etc. hCalendar is used for marking up
events and dates, and is currently used by the new online planning system Eventful.
Microformats are not a new language for semantic markup like Resource Description Framework
(RDF); they simply add semantics to regular HyperText Markup Language (HTML) code. In most
cases, microformats are coded as spans and divs around plain text with standardized names. For
example, the hCard microformat uses HTML markup in the form of class names, hidden from human
readers but clearly visible to machine interpreters. By microformatting page content in this way, it becomes directly parsable by web services.
In the AMF application, microformats served as embedded semantics in metadata pages that made the services on the tactical network machine-accessible. This microformat was designed to
provide all of the necessary information that a machine agent needed to construct the same HyperText
Transport Protocol (HTTP) request that the metadata pages used to communicate with the services.

5.3.2 System design


5.3.2.1 Design goals
The goals of the SWARM-AMF system were as follows:
(1) To encode metadata in such a way that service invocation is simple for both humans and machines, and the metadata is efficiently represented.
(2) To distribute metadata about services to potentially interested clients who may be mobile, often
over-the-horizon, and often disconnected from the network.
(3) To allow clients to invoke a service that they discover even if that service is currently unreachable
via the tactical network.

5.3.2.2 Overall system concept


The current design concept can be seen in Figure 5.3. Our concept was based on the Representational State Transfer (REST) architectural principles. All service metadata was encoded
as eXtensible HyperText Markup Language (XHTML). In our architecture, human clients were able
to invoke services just by loading the appropriate web page and submitting a form found on that web
page. Conversely, once a service found out that it had been invoked (through whatever means), it
generated an XHTML response for the client to interpret, usually displayed on the same web page
that invoked the service.

Fig. 5.3 SWARM-AMF block diagram system concept

Now, the question remained: how do clients discover the metadata, and how do they invoke the service? If we were in a fully-connected strategic environment this would be a simple matter, and one that is routinely handled using the internet paradigm. A client wishing to find a service uses some kind
of search engine to find the metadata, also known as the webpage, of an available service. The client
then loads the webpage, which has details on how to invoke the service. For example, the Amazon
webpage has a field for searching its book database. The client fills in the appropriate query and
executes it, using an HTTP POST operation to send data to the (connected) service. The service then
generates an HTML reply and responds to the query, either with a new webpage or by updating the
current page if it is using something like Asynchronous JavaScript and XML (AJAX).
This way of doing things breaks down in several places in our target environment. Most importantly, the network is not assumed to be fully connected. This fact alone implies many difficulties.
There can be no centralized service search engine since we are not guaranteed to be connected to
such a device often, or indeed, ever. Broadcast search, as is done in the case of wireless routing, is
equally ineffective. A further difficulty with this environment is that services cannot be assumed to be directly reachable. Therefore the service itself cannot host the server that serves metadata web pages,
nor can the service be invoked directly using HTTP POST.
The best we could hope for was that, using mobile nodes, there would eventually be a delay-fraught path from a service to a client and back, one that must include bucket-brigading of data through the mobile nodes. Therefore we must use a delay-tolerant networking style if we wish to get data to the
right places in this network. Our system, referred to as SWARM-AMF, leverages the delay-tolerant
technology of the APL SWARM and DCF systems. As described in section 5.3.1.1, the SWARM
system contains a virtual shared blackboard called the belief network. This device allows for delay-tolerant communication of small pieces of data around the swarm.
Using the belief network, we can create a delay-tolerant version of the internet-style service
architecture discussed previously, with some modifications. We use the belief network itself as the
ultimate repository of all relevant metadata. When a service comes online, it creates a metadata
webpage and deposits it in the belief network.
We also move the webserver that handles human client requests to the client (or a machine always
network-connected to the client). On this client-side webserver we have a daemon running that
periodically looks at the belief network and retrieves the metadata present there. This daemon also creates an index of all current metadata on the client-side webserver.


Since the metadata also contains embedded machine-readable data, any machine connected to
the swarm belief network may also read the metadata and programmatically invoke a service.

5.3.3 An experiment using some swarming metrics


It is evident from the system description that a key area to focus optimization on was how the intermediate nodes in the belief network ferry around requests and responses. This section describes the
simulation experiments used to tune the SWARM-AMF system prior to real-world testing.

5.3.3.1 Testing framework


The Tactical SOA swarm algorithms were tested on three spatial configurations of clients and services
in the simulated tactical network. Each of these separate tests was run with three different algorithms
for fulfilling the requests. These algorithms were: fulfilling the oldest requests first, fulfilling the closest requests first, and a mix of the two. For the closest-distance routing algorithm, the main influence
factors included distance to the agent and the time since last communicated. For the oldest routing
algorithm, these factors included the number of invocations the agent was holding, the average wait
time for each invocation, and the time since last communicated. The time since last communicated
was included as a factor for all agents to prevent information starvation. When a number was selected
at random from the distribution, the agent whose area the number fell under was selected as the target
that the UAV would pursue.
Configuration 1 involved a single client and two services, positioned completely out of communications range of each other. Two UAVs were used to route invocation beliefs (requests/responses)
between the client and services. The second configuration involved two clients on opposite edges of
the search grid, with three services positioned in the middle of the grid. Again, all clients and services
were out of range of each other, so the only way for them to communicate was through the two UAVs
running in the test. The final configuration involved four clients and four services, arranged in a circle in an alternating pattern, with three mobile nodes. Again, clients and services could only communicate
through the UAVs.

5.3.3.2 Results
Each algorithm was run 10 times to ensure a good sample size. Each run lasted until all requests (about 50 per client) were fulfilled. The average response time and maximum response time were calculated based on timestamps for each of these tests. When we average these metrics over the tests, we get the results found in Figures 5.4 and 5.5. Figure 5.6 shows the latency (the time for a request to be fulfilled) for one of the runs made in Test 3. The left end of a bar denotes the request being made, while the right end shows when it was fulfilled. As can be seen, some requests were fulfilled faster than others, based on what the algorithm took into account for its probability calculations.
For the maximum response time, the 'closest first' algorithm seemed to be the least suitable: it did not consider the age of the requests when making decisions, so the UAV wouldn't take the time to fly far away to fulfill the more distant, older requests. The 'oldest first' algorithm, as expected, usually performed best under this metric.
When the average response time was calculated, the 'closest first' algorithm was still the worst in all three configurations. Since the clients and services were arranged in a circular pattern, the 'closest first' algorithm could in principle take advantage of coincident routes to minimize average response time, similar to the elevator algorithm for disk access. In practice, however, this resulted in the UAV flying mostly around the circle of clients and services before dropping off responses, leading to higher times.
Fig. 5.4 Average response time

Fig. 5.5 Maximum response time

Fig. 5.6 Latency

As can be seen in this data, even in these simple examples there did not seem to be a clear winner among these algorithms when measured against these metrics. The 'closest first' algorithm
performed the worst in all three configurations, but not by a large enough margin to make the data
conclusive. More study needs to be done to determine which algorithms perform best under which
scenarios.

5.3.4 Experimental design


This section describes the setup and CONcept of OPerations (CONOPS) for the hardware-in-the-loop experiment of the SWARM-AMF system performed at the TNT experiment in February 2008.


5.3.4.1 Scenario
We used a chemical and biological agent monitoring scenario. Figure 5.7 shows a notional scenario
with ground vehicles and UAVs in a tactical edge network environment. The ground vehicles were
equipped with biological and chemical sensors to detect and alert on bio-chemical threats. All mobile
vehicles used wireless communications and the circle around each vehicle indicates the communication range. In the tactical edge network one role of the UAVs is to act as communications links.
The AMF is distributed over all vehicles and proxy services. The UAVs can transport metadata between the ground vehicles (edge-to-edge communications), between the ground vehicles and strategic services (edge-to-strategic communications) and from strategic services to the ground vehicles
(strategic-to-edge communications). The metadata could also be used within the strategic network as information-retrieval proxies (strategic-to-strategic communications). Service invocation was also
performed by transport over the UAV network, when necessary. The tactical network communicated
via the SWARM software belief network. Using SWARMing, the UAV network reconfigured itself
to provide for more optimal metadata distribution and service invocation.

5.3.5 Results
In testing the SWARM-AMF system, our primary objectives were threefold: to have multiple UAVs
self-coordinate to provide wide area communications coverage based on Unattended Ground Sensors
(UGSs), Command Post location, and operator queries; to have multiple UAVs receive requests for
information from an operator control station and change their flight paths to respond to those requests; and to acquire telemetry and Received Signal Strength Intensity (RSSI) data on each platform to analyze the performance of the autonomous vehicles and the mesh network.
APL successfully demonstrated the SWARM-AMF system at the TNT experiment in Camp
Roberts, California in February 2008. A picture of the setup area with distances involved can be
seen in Figure 5.8. Our UAVs were flying at an approximate airspeed of 15 m/s. The communications range imposed on the devices was 150 m. During the demonstration, a user at a Command and
Control (C2) station using a laptop or handheld device was able to request information via a dynamically discovered web page, have the requests routed across UAVs to two ground-based services, and
have the responses routed via the mesh network back to the C2 station, so that the human operator could view the query results. In doing so, we met the first two objectives above. We also captured the following
data from the experiment:
• Time-stamped positions of all APL UAVs
• Time-stamped RSSI information for all network nodes
• Time-stamped sensor readings from Chemical Biological Radiological Nuclear (CBRN) UGSs
From the data we can conclude that the average round-trip time to obtain a response to a query was
166 seconds, compared to the theoretical minimum time of about 140 seconds (if a UAV were already
within comms range of the command station when the request was made). No queries were lost and
all queries were eventually answered. In addition, the system ran quite well during a later test when
there was significant additional traffic on the wireless connections. Overall, the system worked as
designed and there were no significant problems.

5.3.6 Conclusions
In our efforts to support the rapid and dynamic integration of resources in various environments, we
continue to explore various aspects of AMF as an alternative to, or extension of, existing SOA architectures, with an emphasis on node independence and flux in the discovery layer.

Fig. 5.7 AMF-based UAV and ground vehicle nodes in a tactical network with reachback capability to the strategic network and services

The development
of this simulation framework has greatly enhanced our ability to explore different issues relating to the development of such an architecture, while maintaining an easy transition path to fielded implementations. For example, it affords us the ability to explore variations on migration protocols or resource
matching algorithms, while interfacing directly with, and supporting, real applications, but without the
need to field multiple, distributed nodes, thus isolating key aspects of the research. This ability to
turn ideas around quickly, and rapidly drop them into field exercises is important in that it allows
us to acquire feedback from users in realistic, often harsh environments, and to interact with other
research components in real-world exercises, maximizing the value of our time in the field. We will continue to develop the simulation framework and extend its capabilities as a means of furthering this research.

Fig. 5.8 TNT in-experiment display

5.4 List of acronyms


AJAX Asynchronous JavaScript and XML
APL The Johns Hopkins University Applied Physics Laboratory
AMA Active Metadata Agent
AMF Active Metadata Framework
C2 Command and Control
CBRN Chemical Biological Radiological Nuclear
CONOPS CONcept of OPerations
DCF Dynamic Co-Field
GPS Global Positioning System
GUI Graphical User Interface
HTML HyperText Markup Language
HTTP HyperText Transport Protocol
IDD Interface Design Document
IER Information Exchange Requirement
MANET Mobile Ad-hoc NETwork


ONR Office of Naval Research


OTH Over-The-Horizon
P2P Peer-To-Peer
RDF Resource Description Framework
REST Representational State Transfer
RSSI Received Signal Strength Intensity
SOA Service-Oriented Architecture
SWARM-AMF Swarming Active Metadata Framework
TOTA Tuples On The Air
TNT Tactical Network Topology
UAV Unmanned Aerial Vehicle
UDP User Datagram Protocol
UGS Unattended Ground Sensor
USSV Unmanned Sea Surface Vehicle
XHTML eXtensible HyperText Markup Language


Chapter 6

Coalition of Surveillance Agents. Cooperative Fusion Improvement in Surveillance Systems
Federico Castanedo, Miguel A. Patricio, Jesús García and Jose M. Molina
Applied Artificial Intelligence Group (GIAA), Computer Science Department, University
Carlos III of Madrid, Avda. Universidad Carlos III 22, 28270 Colmenarejo, Spain
{fcastane, mpatrici, jgherrer}@inf.uc3m.es, molina@ia.uc3m.es

Abstract
In this chapter we describe Cooperative Surveillance Agents (CSAs), which is a logical framework
of autonomous agents working in sensor network environments. CSAs is a two-layer framework. In
the first layer, called the Sensor Layer, each agent controls and manages individual sensors. Agents in the Sensor Layer have different capabilities depending on their functional complexity and on limitations related to the nature of their specific sensors. An agent may need to cooperate in order to achieve better and more accurate performance, or may need additional capabilities that it doesn't have. This cooperation takes place through coalition formation in the second layer (the Coalition Layer) of our framework. In this chapter we propose the framework architecture of CSAs and protocols for coalition management. The autonomous agents are modeled using the BDI paradigm and have control over their internal state. Cooperative problem solving occurs when a group of autonomous agents choose to work together to achieve a common goal and form a coalition. This emergent cooperative behavior fits well with the multi-agent paradigm. We present an experimental evaluation of CSAs in an environment where agent perception is carried out by visual sensors and each agent is able to track pedestrians in its scene. We show how coalition formation improves system accuracy by tracking
people using cooperative fusion strategies.

6.1 Introduction
Third-generation surveillance systems [Valera and Velastin (2005)] is the term sometimes used in the literature to refer to systems conceived to deal with a large number of cameras, a geographical spread
of resources, many monitoring points, as well as to mirror the hierarchical and distributed nature of
the human process of surveillance. From an image processing point of view, they are based on the
distribution of processing capacities over the network and the use of embedded signal-processing
devices to get the benefits of scalability and potential robustness provided by distributed systems.
Usually surveillance systems are composed of several sensors (camera, radar) to acquire data from
each target in the environment. These systems face two kinds of problems [Manyika and Durrant-Whyte (1994)]: (1) Data Fusion: It is related to the combination of data from different sources in
an optimal way [Waltz and Llinas (1990)]. (2) Multi-sensor Management: It assumes that the previous problem is solved, and it is in charge of optimizing the global management of the joint system
through the application of individual operations in each sensor [Molina et al. (2003)]. This research
is focused on solving third-generation surveillance system problems using the CSAs architecture, an autonomous multi-agent framework based on the BDI agency model. The BDI model is one of the best
known and studied models of practical reasoning [Rao and Georgeff (1995a)]. It is based on a philosophical model of human practical reasoning, originally developed by M. Bratman [Bratman (1987)].
It reduces the explanation for complex human behavior to a motivational stance [Dennett (1987b)].
This means that the causes of actions are always related to human desires, ignoring other facets of human motivation to act. And finally, it also uses, in a consistent way, psychological concepts that
closely correspond to the terms that humans often use to explain their behavior. In the CSAs architecture each agent has its own Beliefs, Desires and Intentions. The agents are therefore autonomous: some of them monitor their environment through a sensor, react to changes that they observe, and maintain their own Beliefs, Desires and Intentions. But they can cooperate with other agents for two different
objectives:
(1) To get better performance or accuracy for a specific surveillance task. In this way, we incorporate complementary information which is combined through data fusion techniques.
(2) To use capabilities of other agents in order to extend system coverage and carry out tasks that
they are not able to achieve alone.
In this sense, the concept of a coalition appears when a group of autonomous agents choose to work together to achieve a temporary common goal [Wooldridge (2000)]. The process of making a coalition is called Coalition Formation (CF). CF has been widely studied [Kahan and Rapoport (1984); Ketchpel (1994); Raiffa (1984); Shechory and Kraus (1995)], but there are few works related to surveillance systems. A CF starts at one moment to achieve one task, and when this task ends the coalition breaks off. In the next section we review some related work; in section 3 we present the CSAs architecture. In section 4 we describe how we fuse the information obtained by each agent during coalition maintenance, and then we show the experimental results of the example. Finally, we include some conclusions and future work.

6.2 Related works


The challenge of extracting useful data from a visual sensor network could become an immense task
if it stretches to a sizeable number of cameras. Current research is focusing on developing surveillance systems that consist of a network of cameras (monocular, stereo, static or PTZ (pan/tilt/zoom))
running vision algorithms, but also using information from neighboring cameras. For example, the
system in [Xu et al. (2004)] consists of eight cameras, eight feature server processes and a multitracker viewer. CCN (co-operative camera network) [Pavlidis and Morellas] is an indoor application
surveillance system that consists of a network of PTZ cameras connected to a PC and a central console to be used by a human operator. A surveillance system for a parking lot application is described
in [Micheloni et al. (2003)]. It uses static camera subsystems (SCS) and active camera subsystems
(ACS). The Mahalanobis distance and Kalman filters are used for data fusion for the multitracker, as
in [Xu et al. (2004)]. In [Yuan et al. (2003)] an intelligent video-based visual surveillance system
(IVSS) is presented. This system aims to enhance security by detecting certain types of intrusion
in dynamic scenes. The system involves object detection and recognition (pedestrians and vehicles)
and tracking. The design architecture of the system is similar to ADVISOR [Siebel and Maybank

(2004)]. In [Besada et al. (2004a)] the authors propose a multisensor airport surface surveillance
video system integrated with aircraft identification. An interesting example of a multi-tracking camera surveillance system for indoor environments is presented in [Nguyen et al. (2003)]. The system
is a network of camera processing modules, each of which consists of a camera connected to a computer, and a control module, which is a PC that maintains the database of the current objects in
the scene. Each camera processing module uses Kalman filters to enact the tracking process. An
algorithm was developed that takes into account occlusions to divide the tracking task among the
cameras by assigning the tracking to the camera that has better visibility of the object. This algorithm
is implemented in the control module. A coordination between two static cameras is implemented
in [Patricio et al. (2007)]. The authors present an indoor scenario where two cameras deployed in
different rooms are able to communicate and improve their performance. In [Molina et al. (2002)]
the authors have described some interesting strategies for coordination between cameras for surveillance. They advocate prioritization among sensing tasks and also touch on the concepts of conflict
management and hand-off. As has been illustrated, a distributed multi-camera surveillance system requires knowledge about the topology of the links between the sensors and devices that make up the system in order to collaborate, for example, in tracking an event that may be captured on one camera and tracking it across other cameras. Our chapter presents a multi-agent framework that employs a totally
deliberative process to represent the cooperation between neighboring sensors and to manage the coordination decision-making in the network. The distributed nature of this type of system supports the sensor-agents' proactivity, and the cooperation required among these agents to accomplish surveillance justifies the sociability of sensor-agents. The intelligence produced by the symbolic internal
model of sensor-agents is based on a deliberation about the state of the outside world (including its
past evolution), and the actions that may take place in the future.

6.3 Cooperative surveillance agents architecture


Cooperative Surveillance Agents is a logical framework of autonomous agents working in sensor
network environments. Let us suppose that j is the number of agents in the multi-agent system and A the set of autonomous agents, A = {A1, A2, ..., Aj}. Each agent Ai has a vector of k possible capabilities, C = {C1, C2, ..., Ck}. In surveillance systems, these capabilities are, for example, tracking, event recognition, recording, calibration, fusion, etc. In order to act rationally, the BDI model internally represents the situation faced and the mental state in the form of beliefs, desires and intentions. Each agent has its own set of beliefs, desires and intentions. The state of the agent at any given moment is a triple (B, D, I), where B ⊆ Beliefs, D ⊆ Desires and I ⊆ Intentions. Next we give a brief description of the different types of autonomous agents belonging to the multi-agent system (see Figure 6.1):
• Surveillance-Sensor Agent: It tracks all the targets and sends data to the fusion agent. It coordinates with other agents in order to improve surveillance quality. It has different roles (individualized agent, object recognition agent, face recognition agent), each with different specific capabilities. It is possible to change roles, but at any given moment an agent can hold only one role.
• Fusion Agent: It integrates the data from all the surveillance-sensor agents. It analyzes the situation in order to manage all the resources, and it coordinates the surveillance-sensor agents during the fusion stage.
• Record Agent: This type of agent is attached to a specific camera and provides only recording features.
• Planning Agent: It has a general vision of the whole scene. It makes inferences about the targets and the situation.

Fig. 6.1 MAS architecture

• Context Agent: It provides context-dependent information about the environment where the monitoring is being carried out.
• Interface Agent: The input/output interface of the multi-agent system. It provides a graphical user interface to the end user.
In the Cooperative Surveillance Agents framework, the surveillance-sensor agent Beliefs represent the knowledge about its own capabilities, the capabilities of neighboring surveillance-sensor agents, and the environment information gathered by its sensor. Let us assume that n is the number of autonomous surveillance-sensor agents in the multi-agent system and S the set of autonomous surveillance-sensor agents, S = {S1, S2, ..., Sn}. We can then represent each surveillance-sensor agent's Beliefs as:
• ∀i (Bel Si Env(t)): the information about the current surveillance-sensor agent's environment at time t.
• ∀i (Bel Si Ci): its own capabilities.
• Let Γi be the neighborhood of a surveillance-sensor agent Si, where (Γi ⊆ S) ∧ (Γi ≠ ∅); then ∀i (Bel Si ∀j ∈ Γi (Bel Sj Cj)): the surveillance-sensor agent Si knows the capabilities of the surveillance-sensor agents in its neighborhood.
Let us assume that k is the number of fusion agents in the multi-agent system and F the set of autonomous fusion agents, F = {F1, F2, ..., Fk}. We can then represent each fusion agent's Beliefs as:
• Let Δi be the subgroup of surveillance-sensor agents coordinated by the fusion agent Fi, where (Δi ⊆ A) ∧ (Δi ≠ ∅); then ∀i (Bel Fi ∀j ∈ Δi (Bel Sj Env(t))): every fusion agent knows the environment of the surveillance-sensor agents that it coordinates.
• ∀i (Bel Fi ∀j ∈ Δi (Bel Sj Cj)): the fusion agent Fi knows the capabilities of all the surveillance-sensor agents that it manages.
Let Ri be a record agent; we can then represent a record agent's Beliefs as:
• ∀i (Bel Ri Env(t)): the information about the current record agent's environment at time t.
Let Pi be a planning agent; we can then represent a planning agent's Beliefs as:
• ∀i (Bel Pi Map(t)): a map of the situation at time t.
Let Xi be a context agent; we can then represent a context agent's Beliefs as:
• ∀i ∀j (Bel Xi Ctxt(Sj)): the context information of all the surveillance-sensor agents.


6.3.1 Sensor and coalition layer


In Figure 6.2 the Cooperative Surveillance Agents architecture is depicted. The architecture has two
layers: the Sensor Layer and the Coalition Layer. In the Sensor Layer, each sensor is controlled by an autonomous
surveillance-sensor agent or by a record agent.

Fig. 6.2 Cooperative Surveillance Agents Architecture

Sometimes the agents in CSAs need to work together in order to perform a specific task. Each agent is able to cooperate with its neighborhood (a subset of agents of the multi-agent system that could form a coalition). Depending on the nature of its sensor and on its abilities, each agent has different capabilities Cl for specific surveillance-sensor tasks. The autonomous agents are able to work together temporarily with neighboring agents in order to cooperate with each other. This cooperation takes place in the Coalition Layer. Let us suppose that O is the set of targets at time t: O = {O_1^t, O_2^t, ..., O_j^t}.

Definition 1. Apply(Ai, Ck, O_j^t) is a function that applies capability k of agent i to target j at time t:

Apply : A × C × O → Boolean    (6.1)

Definition 2. A coalition at time t is a triple Λi = <Coi, Cl, O_k^t>, where Coi ⊆ A is a subset of autonomous agents such that at time t, ∀j ∈ Coi, Apply(Aj, Cl, O_k^t) is true. A coalition is therefore a temporary group of autonomous agents jointly performing a specific action on a particular target.

Desires capture the motivation of the agents. The final goal of each surveillance agent is the permanent surveillance of its environment, so the Desire of our surveillance-sensor agents is:
• ∀i (Des Si Surveillance(Ok)).
The Desire of the fusion agent is to fuse the information received from the surveillance-sensor agents:
• ∀i (Des Fi FuseData(Ok)).
The Desire of the record agent is to record a specific target:
• ∀i (Des Ri Record(Ok)).
The Desire of the planning agent is to make inferences about the targets and the situation:
• ∀i (Des Pi ToPlan(Ok)).
And finally, the Desire of the context agent is to maintain context information:
• ∀i (Des Xi MaintainCtxtInformation(Ok)).


Intentions are the basic steps the agent has chosen to take in order to achieve its Desires. The surveillance-sensor agents' Intentions are:
• ∀i (Int Si ∀j ∈ Γi (MakeCoalition(Sj, O_k^t, Cl))): every surveillance-sensor agent Si has the intention to make a coalition with another surveillance-sensor agent Sj of its neighborhood that involves target Ok at time t, in order to apply the capability Cl.
• ∀i (Int Si ∀j ∈ Γi (AcceptCoalition(Sj, O_k^t, Cl))): every surveillance-sensor agent Si has the intention to accept making a coalition with another surveillance-sensor agent Sj of its neighborhood that involves target Ok at time t, in order to apply the capability Cl.
• ∀i (Int Si ∀j ∈ Γi (DenyCoalition(Sj, O_k^t, Cl))): every surveillance-sensor agent Si has the intention to deny making a coalition with another surveillance-sensor agent Sj of its neighborhood that involves target Ok at time t, in order to apply the capability Cl.
• ∀i (Int Si LeaveCoalition(Si, O_k^t, Cl)): every surveillance-sensor agent Si has the intention to leave the coalition that involves the target Ok at time t and the capability Cl.
• ∃i (Int Si Tracking(Ok)): in the multi-agent system there exists at least one surveillance-sensor agent with the tracking capability.
• ∃i (Int Si Recognition(Ok)): in the multi-agent system there exists at least one surveillance-sensor agent with the recognition capability.
• ∃i (Int Si Calibration): in the multi-agent system there exists at least one surveillance-sensor agent with the calibration capability.
• ∀i (Int Si ∀j ∈ Coi (SendTargetInfo(Aj, O_k^t, Cl))): every surveillance-sensor agent can communicate the information about target Ok at time t to all the other agents Aj belonging to the same coalition.
The Intentions of the fusion agent are similar, but one important Intention of the fusion agent is:
• ∀i (Int Fi ∀j ∈ Coi (FusionTargetInfo(Aj, O_k^t, Cl))): the intention to receive and fuse information about the target Ok at time t from other agents Aj belonging to the same coalition.
The Intentions of the other agents are similar, and we omit them due to space limitations.

6.3.2 Coalition protocol


In [Wooldridge (2000)], the author argues that the first step in a cooperative problem-solving process begins when some agent recognizes the potential for cooperative action. In CSAs it begins when an agent starts applying one capability and sends a message to other autonomous agents in order to establish a coalition. At the initial moment the cooperation exists only in the mental state of the agent that initiates the process; we call this the recognition of cooperation: ∃i (Bel Ai ∃j ≠ i (Bel Aj MakeCoalition(Ai, O_k^t, Cl))), which means that agent Ai believes that another agent Aj exists that may want to make a coalition for the target Ok at time t, applying capability Cl. The agent Ai then sends a message to the other agents, in this case Aj, in order to complete a coalition.
Call-for-Coalition.
<Ai, cfp(Aj, <Aj, MC>, Ref x φ(Aj, O_k^t, Cl))> ≡ <Ai, query-ref(Aj, Ref x ((I Ai Done(<Aj, MC>, φ(Aj, O_k^t, Cl))) ∨ (I Aj Done(<Aj, MC>, φ(Aj, O_k^t, Cl)))))>,
where MC stands for the MakeCoalition action. These messages fit the FIPA standard, which adds a performative to each communicative act. Agent Aj then has two possibilities: to accept or to reject the coalition proposal.

Accept-Coalition.
<Aj, accept-proposal(Ai, <Ai, MC>, φ(Aj, O_k^t, Cl))> ≡ <Aj, inform(Ai, I Aj Done(<Ai, MC>, φ(Aj, O_k^t, Cl)))>
Reject-Coalition.
<Aj, reject-proposal(Ai, <Ai, MC>, φ(Aj, O_k^t, Cl), ψ)> ≡ <Aj, inform(Ai, ¬I Aj Done(<Ai, MC>, φ(Aj, O_k^t, Cl)) ∧ ψ)>
Agent Aj informs agent Ai that, because of proposition ψ, Aj does not have the intention that Ai perform the MakeCoalition act with precondition φ(Aj, O_k^t, Cl).
A corollary of the fact that agents are autonomous is that the coalition formation process may fail. All agents belonging to the same coalition exchange information about the same target. If the coalition formation was successful, the agents belonging to the same coalition must interchange messages about that target:
Inform-Coalition.
<Ai, inform(Aj, φ(Ai, O_k^t, Cl))>
Any agent may leave the coalition; it only needs to send a message to the other agents belonging to that coalition:
Cancel-Coalition.
<Ai, cancel(Aj, MC)> ≡ <Ai, disconfirm(Aj, I Ai Done(MC))>

6.4 Information fusion for tracking during coalition maintenance


In this section we show the improvement of tracking in the surveillance system during coalition
maintenance. We consider a specific example of coalition formation between surveillance-sensor
agents and a fusion agent, but other coalitions are possible. We consider that the visual sensors
(surveillance-sensor agents) are deployed so that their fields of view partially overlap. This redundancy allows smooth transitions across overlapped areas, and may be affordable given the current low cost of equipment and processors. The intersection region between cameras is used to track targets as they transit between different agents'
fields of view to fuse the output and compute the corrections for time-space alignment. The first
intention carried out by every surveillance-sensor agent in the cooperative architecture is to perform the
Calibration capability. Each surveillance-sensor agent is assumed to measure the location of mobile
targets within its field of view with respect to a common reference system. This is a mandatory step for visual sensors, since they must use the same metrics during the cooperative process. Once a surveillance-sensor agent detects a new target in its field of view, it starts to perform the Tracking capability.
In order to establish a coalition among the neighboring surveillance-sensor agents, the fusion agent sends a Call-for-Coalition message. The agent that starts the coalition formation, in this case the fusion agent, is called the initiator. The initiator is looking for cooperation in tracking
the new target. After the coalition is formed, data fusion techniques are needed to combine the
local target information among the agents in the coalition. Let Si be a surveillance-sensor agent in a coalition, such that Apply(Si, Ck, O_j^t) is true, where Ck is the capability of tracking the new target Oj at time t. Si acquires images I(i, j) at a certain frame rate, Vi. The target Oj is bounded by a rectangle (xmin, ymin, xmax, ymax) and is represented by an associated track vector x_j^i[n], containing the numeric description of its attributes and state (location, velocity, dimensions), and an associated error covariance matrix, P_j^i[n]. In a coalition, target location and tracking are measured in pixel coordinates, local to each i-th camera agent view, Si, and n represents the time dimension, tn. Usually, video frame grabbers, with A/D conversion in the case of analog cameras, or directly with digital cameras, provide


a sequence of frames, fn, which can be assigned to a time stamp knowing the initial time of capture, t0, and the grabbing rate, Vi (frames per second):

fn = Vi (tn − t0)    (6.2)

so the time stamp for frame fn is given by:

tn = t0 + fn / Vi    (6.3)
Then, these local estimates (or track vectors) are projected to global coordinates as a result of the calibration process. Although a practical and direct calibration process is done with GPS coordinates (latitude, longitude, altitude), these coordinates are then transformed to conventional Cartesian coordinates with a projection such as the stereographic projection [Collins et al. (2001); Hall and Llinas (2001)], in which objects' kinematic descriptions are expressed in a more natural way. The results of the projection and on-line registration in global coordinates are denoted x_j^c[n], R_j^c[n]. In this on-line registration process, the bias estimation vectors, b_j^c[n], are computed to align the projected tracks. In an analogous way, the associated times are also corrected with a final alignment to detect and remove clock shifts among the agents' local processors: bt. Finally, the vectors provided by the agents in a coalition are sent to the fusion agent in the Inform-Coalition message and are processed with algorithms that integrate this information with available map information to derive the fused representation, x_j^F[n], R_j^F[n]. This process is carried out by the FusionTargetInfo intention of the fusion agent.

6.4.1 Time-space alignment


The process of multi-sensor alignment, or bias removal, as a preliminary step for data fusion is referred to in the sensor fusion literature as multi-sensor registration [Nabaa and Bishop (1999); Zhou et al. (1998); Karniely and Siegelmann (2000); García et al. (2000); Besada et al. (2004b)]. As mentioned above, on-line solutions are needed to estimate the potentially time-variant systematic errors in parallel with target tracking, using the same available data. In a coalition, consistency across multiple camera views (whether with shared or disjoint fields of view) can only be maintained when spatial and time coherence is achieved. Otherwise, biased estimates might produce multiple tracks (splitting) corresponding to views from different agent sensors, instabilities such as a zig-zag effect in different directions, wrongly estimated velocities, etc.

6.4.2 Map correction


The final aspect considered in the fusion algorithm in a coalition is the alignment with map information. Tracking ground targets has specific characteristics distinguishing it from other applications with free 3D motion. There are families of sensor fusion systems explicitly exploiting this constraint, known in the literature as ground target tracking, GTT [Scott (1994); García et al. (2003); Collins and Baird (1989); Kirubarajan et al. (2000)]. The basic idea is to take advantage of available terrain information, such as elevation or road maps. The road (or footpath) restriction is the terrain feature of most interest to us. In this case, it also provides an external reference which is used to align the fused vectors computed from the tracks received from all agents in the coalition, and which is then fed back to all agents to correct their local estimators. The road network in the scenario map will be used as the topographic feature to fuse with the tracking information. This road structure is represented in the simplest way, as a series of segments linking a series of waypoints Pi with associated widths wi. To determine if an observation falls inside a road segment, we need to check whether there is an overlap between the rectangular road segment and the uncertainty region of the measurement, or, in the case

of an available size representation with target bounds, using a geographic ellipse expressed by the
following equation:

(x^F[m] − x, y^F[m] − y) P^{−1} (x^F[m] − x, y^F[m] − y)^T ≤ K^2    (6.4)

where x^F[m], y^F[m] are the local target coordinates sent to the fusion agent in the coalition (Inform-Coalition), P is the associated covariance matrix, and γ is the probability of locating the target inside the uncertainty region. Considering that the covariance P corresponds to an error ellipse with 1σ axes given by σa, σb, the relation between γ and the ellipse size, K, is direct:

γ = ∬ N(x, y) dx dy = 1 − e^{−K²/2}    (6.5)

where the integral is taken over the ellipse centered at 0 with axes Kσa, Kσb.
This is a basic procedure for finding the road segment where the target is moving, considering the transformed local track. It is important to avoid incorrectly locating the target on the map, since that will directly lead to an incorrect transformation of the tracks, with a severe degradation in performance. A direct technique to align the tracks with the path segments where they are located is then a projection onto the road centerline, as suggested in Figure 6.3. With the feedback procedure of fused tracks to align local tracks in the processing architecture, this projection is translated into local corrections that refine the local projections and keep the processors aligned with the ground map information, as a mechanism to avoid shifts between different agent views.

Fig. 6.3 Overlap between road segment and the uncertainty region for local track, and correction

6.5 Experiments
In order to illustrate the capability of our Cooperative Surveillance Agents framework and evaluate its performance in coordinated tracking, we have applied it to a simple practical scenario in which there are only two cooperative surveillance-sensor agents with overlapping fields of view (see Figure 6.4) and a fusion agent. Our system is a prototype distributed surveillance-sensor system for the university campus, deployed both in outdoor and indoor areas. The illustrative scenario analyzed in this work is an outdoor scene in which video cameras cover pedestrians walking along a footpath. Both
surveillance agents and a fusion agent establish a coalition in order to track the same object. In the shared area, the agents track the object simultaneously, which can be used for aligning time-space coordinates and fusing their local tracks during coalition maintenance by the fusion agent. These two surveillance-sensor agents, referred to from now on as analog and digital, use different acquisition technologies.

Fig. 6.4 Scenario of coalition for tracking, with an overlapping zone

The overlapped regions are marked in Figure 6.5, and the reference ground-truth lines identifying the footpath are shown in Figure 6.6.

Fig. 6.5 Sizes and overlapped area of images provided by both agents of the coalition

This illustrative configuration was enough to run experiments in which surveillance-sensor agents provide different views of a common scenario (the footpath), and
their local tracks can be fused into a global representation. Prior to this experimentation, an off-line calibration process was performed over each scene. Each surveillance-sensor agent is assumed to measure the location of a moving target within its field of view with respect to a common reference system. We have chosen the GPS (Global Positioning System) reference to represent object locations, using portable equipment to take the measurements (Garmin™ GPS-18 USB). Thanks to the calibration process, the correspondences between 2D image coordinates (pixels) and their respective GPS world positions can be set up. The overlapped area allows the two surveillance-sensor agents to track the targets simultaneously. Once the right agent has detected a pedestrian, it calculates its size, location and velocity. Based on these data from the overlapped area, the delivered tracks may
be used to align and correct the tracks on the other side of the coalition.

Fig. 6.6 Ground truth lines in both agents of the coalition

We have analyzed ten videos of pedestrians walking at different speeds from right to left through both scenes. In Figure 6.7 we can see the pedestrians' tracked positions, expressed in local image coordinates, for the first recorded video. Every point lies within the calibrated region delimited by the calibration markers at the two sides of the footpath (asterisks).

Fig. 6.7 Local tracking with both agents of the coalition

After the surveillance-sensor agent calibration process (Calibration intention), we were able to map the image coordinates to global coordinates. The results of this transformation for both tracks are depicted in Figure 6.8, in the geodetic coordinates of GPS after calibration: latitude, longitude, altitude. In fact, they are expressed as a relative shift, in thousandths of a minute, over a reference point at North 40° 32', West 4° 0'. Then, a direct projection from
geodetic to Cartesian coordinates is carried out, using the stereographic transformation with the reference mentioned above as tangent point (coordinate 0,0). A detail is depicted in Figure 6.9, where the initialization of the track from the digital agent with noisy velocity can be appreciated, compared with the track coming from the analog agent. The fusion output, carried out by the fusion agent after alignment, is depicted in Figure 6.10, where the transition is much smoother than a direct switch between both tracks. Besides, the alignment correction, applied to the tracks from the digital agent for the rest of the time, allowed achieving a much more coherent fused track. The tracks of all videos after fusion are depicted in Figure 6.11.
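To make the projection step concrete, the following minimal sketch (hypothetical class and method names, spherical-Earth approximation) maps geodetic coordinates onto a tangent plane with a stereographic projection, with the reference point mapping to (0, 0):

```java
/** Stereographic projection of geodetic coordinates onto a tangent plane. */
public final class StereographicProjection {
    private static final double EARTH_RADIUS_M = 6_371_000.0; // mean radius

    /** Projects (latDeg, lonDeg) onto the plane tangent at the reference
     *  point (refLatDeg, refLonDeg); the reference maps to (0, 0).
     *  Returns {x, y} in metres, x pointing east and y pointing north. */
    public static double[] toPlane(double latDeg, double lonDeg,
                                   double refLatDeg, double refLonDeg) {
        double lat = Math.toRadians(latDeg);
        double lon = Math.toRadians(lonDeg);
        double lat0 = Math.toRadians(refLatDeg);
        double lon0 = Math.toRadians(refLonDeg);
        // Stereographic scale factor relative to the tangent point.
        double cosC = Math.sin(lat0) * Math.sin(lat)
                    + Math.cos(lat0) * Math.cos(lat) * Math.cos(lon - lon0);
        double k = 2.0 * EARTH_RADIUS_M / (1.0 + cosC);
        double x = k * Math.cos(lat) * Math.sin(lon - lon0);
        double y = k * (Math.cos(lat0) * Math.sin(lat)
                 - Math.sin(lat0) * Math.cos(lat) * Math.cos(lon - lon0));
        return new double[] {x, y};
    }
}
```

For instance, toPlane(40.5343, -4.0, 40.5333, -4.0) returns a point about 111 m north of the origin (x approximately 0), matching the expected length of 0.001 degree of latitude.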

Fig. 6.8 Calibrated tracks in global coordinates: GPS and stereographic plane

Fig. 6.9 Calibrated tracks in the Cartesian stereographic plane (detail)

6.6 Conclusions and future work


We have described Cooperative Surveillance Agents, an autonomous multi-agent framework for surveillance systems. In the Cooperative Surveillance Agents architecture we use temporal coalitions of autonomous agents that cooperate with each other for two different objectives: (1) to get better performance or accuracy for a specific surveillance task, and (2) to use the capabilities of other agents in order to extend system coverage and carry out tasks that they are not able to achieve alone. As shown, continuous and accurate tracking along the transitions was achieved with the coalition establishment, which included on-line recalibration with the delivered output and alignment with map information. The improved performance was illustrated using a set of recorded videos taken by two surveillance-sensor agents with different technologies, sharing an overlapped field of view in a campus outdoor surveillance configuration. In the future we are interested in the scalability of the Cooperative Surveillance Agents framework, for example to networks of hundreds of surveillance-sensor agents.

Fig. 6.10 Position: X, Y coordinates of original tracks and fusion result

Fig. 6.11 Position: X, Y coordinates of fused tracks for all videos

Acknowledgements
This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB and CAM MADRINET S-0505/TIC/0255.

Chapter 7

Designing a Distributed Context-Aware Multi-Agent System
Virginia Fuentes, Nayat Sanchez-Pi, Javier Carbo and Jose M. Molina
University Carlos III of Madrid, Computer Science Department, Applied Artificial
Intelligence Group (GIAA), Avda. Universidad Carlos III 22, 28270 Colmenarejo, Spain
{virginia.fuentes, nayat.sanchez, javier.carbo}@uc3m.es, molina@ia.uc3m.es

Abstract
New mobile technologies with wireless communications provide the possibility to develop new applications for multiple and dynamic environments. The proposal of this research is to offer a BDI agent system architecture for solving the problem of adapting context-aware information to users depending on the environment, by integrating different domains in a coherent framework. This paper presents an overview of the analysis and design of a context-aware multi-agent system for heterogeneous domains according to the Gaia methodology.

7.1 Introduction
Mobile devices, new wireless technologies and the users' need to receive context information increase the need for context-aware systems that adapt to changing and dynamic domains. Mobility allows users to access information anytime and anywhere, and scenarios can change while they are moving, so context-aware systems have to adapt to new domain information.
In this paper we present the analysis and design of an innovative solution to the problem of adapting context-aware information and offering services to users in different environments. The proposal consists of agents using context information, based on user location and user profile, to customize and recommend different services to other agents. Context knowledge refers to all information relative to the system environment: location, participants, spatial region (buildings, rooms), places, services, information products, devices etc. [Fuentes et al. (2006a)].
Since the chosen internal architecture of agents is based on the BDI model (Belief-Desire-Intention) [Rao and Georgeff (1995b)], which is one of the most widely used deliberative paradigms to model and implement the internal reasoning of agents, part of this knowledge is represented as agents' beliefs. Beliefs, Desires and Intentions are, respectively, the information, motivational and deliberative states of the agents [Pokahr et al. (2003)]. The BDI model allows viewing an agent as a goal-directed entity that acts in a rational way [Rao and Georgeff (1995b)].
Jadex has been selected as the development platform to support the reasoning capacities of the BDI model together with software engineering technologies such as XML and Java [Braubach et al. (2004)]. Jadex, the JADE eXtension, is an implementation of a hybrid (reactive and deliberative) agent architecture for representing mental states in JADE agents following the BDI model. Moreover, the JADE platform focuses on implementing the FIPA reference model, providing the required communication infrastructure and platform services [Pokahr et al. (2003)]. To accomplish their objectives, agents need an interaction model, since they must communicate with each other. Communicative acts between agents are expressed through FIPA-ACL messages (Foundation for Intelligent Physical Agents Agent Communication Language), which allow selecting the content language and ontology inside messages [fou (2002)].
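As a brief illustration, the following minimal sketch (hypothetical agent and ontology names) shows how a JADE agent could compose and send such a FIPA-ACL message, selecting language and ontology:

```java
// A JADE agent that sends a FIPA-ACL REQUEST on startup.
import jade.core.AID;
import jade.core.Agent;
import jade.lang.acl.ACLMessage;

public class RequestingAgent extends Agent {
    @Override
    protected void setup() {
        ACLMessage msg = new ACLMessage(ACLMessage.REQUEST);
        msg.addReceiver(new AID("providerAgent", AID.ISLOCALNAME)); // hypothetical receiver
        msg.setLanguage("fipa-sl");           // content language
        msg.setOntology("context-ontology");  // hypothetical ontology name
        msg.setContent("(request-service :location room-101)");
        send(msg);
    }
}
```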
Several agent-oriented methodologies were considered for the analysis and design of the proposed multi-agent system, such as MAS-CommonKADS [Iglesias et al. (1996)], Tropos [Giunchiglia et al. (2002)], Zeus [Nwana et al. (1999)] or MaSE [Wood and Deloach (2000)]. MAS-CommonKADS is a methodology for knowledge-based systems, Tropos is a requirement-based methodology, Zeus provides an agent platform which facilitates the rapid development of collaborative agent applications, and MaSE is an object-oriented methodology that considers agents as objects with capabilities of coordination and conversation, so each methodology has its own particularities. In this case, there is a need to focus on role analysis and on the activities and protocols associated with roles.
For the process of analyzing and designing agent-oriented systems, we have selected the Gaia methodology. The motivation behind Gaia is the lack of existing methodologies able to represent the autonomous, problem-solving nature of agents, and the way agents perform interactions and create organizations. Using Gaia, software designers can systematically develop an implementation-ready design based on system requirements [Tveit (2001)].
One problem with Gaia could be that it is a very high-level methodology. Gaia only specifies how a society of agents collaborates to reach the goals of the system, and what is needed of each agent to achieve them; after this analysis, Gaia proposes applying classic object-oriented design techniques. However, this high-level view is precisely what is needed in this research. The intention here is to have a clear, high-level definition of the main aspects of the system, to support a correct development of the multi-agent system.
This article describes the analysis and design process of a multi-agent framework to provide users with context-aware information, based on user location and user profile, for heterogeneous, changing and dynamic domains. Section 2 presents the context-aware multi-agent problem for heterogeneous domains that represents the main focus of this research. Section 3 explains the BDI model architecture fundamentals and their application in implementing the multi-agent system. Section 4 defines the Gaia methodology fundamentals. Section 5 illustrates the problem with the application of the Gaia methodology to the proposed context-aware multi-agent system, showing the analysis and design process of the agent architecture. Finally, Section 6 reports some conclusions.

7.2 Context-aware multi-agent framework for heterogeneous domains

The anytime/anywhere principle of ubiquitous computing emerged as the natural result of research and technological advances in wireless and sensor networks, embedded systems, mobile computing, distributed computing, agent technologies, autonomic computing and communication.
The ubiquitous computing paradigm integrates computation into the environment. Context-aware applications are a large and important subset of the overall set of ubiquitous computing applications, and have already demonstrated the advantages gained from the ability to perceive the surrounding environment [Hess et al. (2002); Yau and Karim (2004); Biegel and Cahill (2004)]. There are numerous approaches to context-aware applications, but most of the available applications are designed to work on specific domains. We review some of the related work below.
GAIA [Roman et al. (2002)], for instance, develops an application framework that assists in the creation of generic, loosely coupled, distributed component-based applications. The framework defines three basic components that constitute the building blocks for all applications: the model, the presentation and the controller.
RCSM [View (2002)], a context-sensitive middleware, is another one, which combines the respective strengths of mobile devices and network infrastructures whenever possible. Its dynamic integration mechanism is context-sensitive, and as such the integration between the application software in a mobile device and a network interface is restricted to specific contexts, such as a particular location or a particular time.
CORTEX [Fitzpatrick et al.] is another framework, which builds context-aware applications on the creation and composition of sentient objects, autonomous entities that can sense and respond to the environment. Ubiquitous environments ideally fit the agent paradigm. Multi-agent systems support complex interactions between entities, using high-level semantic languages. Such a feature is essential in Ambient Intelligence environments dealing with heterogeneous information from physical sensors and user preferences. Integration of such data is only possible at a higher level where all kinds of information (about context) are expressed semantically.
Multi-agent systems are adequate for developing applications in dynamic, flexible environments [Fuentes et al. (2006b)]. There are several research efforts on developing multi-agent systems for context-aware services. For instance, SMAUG [Nieto-Carvajal et al. (2004)] is a multi-agent context-aware system that allows tutors and pupils of a university to fully manage their activities. SMAUG offers its users context-aware information from their environment and also gives them a location service to physically locate every user of the system. Another one is BerlinTainment [Wohltorf et al. (2004)], a project that has demonstrated a framework for the provision of activity recommendations based on mobile agents. Similar to this one is AmbieAgents [Lech and Wienhofen (2005)], an agent-based infrastructure for context-based information delivery for mobile users.
The main difference between the proposed multi-agent framework and these works lies in the application domain of the system. Some context-aware proposals focus on specific domains and on user location in the environment. A short overview is given in [Fuentes et al. (2006a,b)].
The main contribution here consists of applying dynamic and changing domains to the multi-agent system, so that the system can provide context-aware services related to any domain where it works: a fairground, an airport, a shopping centre, for instance. The proposed multi-agent system adapts context-aware information to users in heterogeneous and dynamic environments based on user location and on user preferences or profile.
The context of the proposed location-based context-aware multi-agent system is a spatial region with zones, buildings and rooms, where users with their mobile devices, such as PDAs or mobile phones, move through the environment. In this environment, there are some places or points of interest which provide services to users.

7.2.1 Multi-agent architecture

The architecture of the system, shown in Figure 7.1, has three types of agents: a Central Agent, Provider Agents (one per interest point) and User Agents (one per user) [Fuentes et al. (2006b)]. We now explain the tasks developed by each of them, with the goal of joining all the agents in charge of managing knowledge and supporting the different stages of the BDI model we used.
The Central Agent is responsible for deciding whether or not to warn the providers closest to the user. It is in charge of registering and deregistering users and of building an initial profile of them. It also has the goal of locating and identifying the users' movements inside the environment. The Central Agent matches the user profile with the profile of each provider in order to warn providers about the presence of a potentially interested user.

Fig. 7.1 MAS architecture

Provider Agents represent permanent static providers and are spread out over the interest points of the system. They are warned by the Central Agent about any close user with a matched profile and propose customized services to users.
User Agents negotiate services with Provider Agents according to their internal profiles, and they also cooperate with other users by asking for recommendations.
The system thus incorporates an agent in each mobile device, in charge of interfacing with the user and acting as a repository of local contextual information; a provider agent spread out in each point of interest; and a central agent for the management of the system, residing in a network server.

7.3 BDI model


The BDI Model (Belief-Desire-Intention model) was conceived as a theory of human practical reasoning [Bratman (1987)]. The BDI model supports event-based reactive behaviour as well as proactive, goal-directed behaviour, which is one of the reasons for choosing this model to implement the proposed context-aware multi-agent system. The main goal is to facilitate the use of mental concepts in the implementation, where this is regarded as appropriate by the agent developer [Pokahr et al. (2003)]. This model is characterized by three types of attitudes: beliefs, desires and intentions. Beliefs are the informative component of the system state [Rao and Georgeff (1995b)]. Beliefs represent the information that an agent has about the world and about its own internal state [Pokahr et al. (2003)]. Desires represent the motivational state of agents, which are captured in goals. It is necessary that the system also have information about the objectives to be accomplished or, more generally, what priorities are associated with the various current objectives [Rao and Georgeff (1995b)]. Intentions capture the deliberative component of the system [Rao and Georgeff (1995b)]. Plans, which are deliberative attitudes, are the means by which agents achieve their goals. A plan is not just a sequence of basic actions, but may also include sub-goals. Other plans are executed to achieve the sub-goals of a plan, thereby forming a hierarchy of plans [Pokahr et al. (2003)]. The intention is to apply the BDI model to the proposed context-aware multi-agent system. The design and implementation of each concept in the BDI architecture, as seen in Figure 7.2 (beliefs, goals and plans), will be described as follows:

7.3.1 Beliefs
Each agent has a belief base to store the facts that make up the agent's knowledge. These facts can be stored as Java objects in the Jadex platform, and it is possible to capture the semantics of these objects using ontologies [Pokahr et al. (2003)]. In [Fuentes et al. (2006a)], a heterogeneous domain ontology is proposed for the context-aware multi-agent system studied in this research. A high level of abstraction is defined for the ontology, so that it covers dynamic and changing domains. The proposed meta-concepts include all the context information of the multi-agent system, such as location, participants (with their profiles), spatial region (buildings, rooms, etc.), places or points of interest, services, products, etc. In the central agent, beliefs represent the environmental knowledge and the user location, as well as the provider services and locations. In the client agent, the initial belief is the user's private profile.
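As a small illustration of beliefs stored as Java objects, the following minimal sketch (hypothetical class and field names, not taken from the actual system) shows a profile fact that a client agent could keep in its belief base and that the central agent could use for matching:

```java
import java.util.HashSet;
import java.util.Set;

/** A user-profile fact, storable as a plain Java object in a belief base. */
public class UserProfile {
    private final String userId;
    private final Set<String> interests = new HashSet<>();
    private String currentLocation;   // e.g. a zone or room identifier

    public UserProfile(String userId) { this.userId = userId; }

    public void addInterest(String topic)      { interests.add(topic); }
    public void setCurrentLocation(String loc) { currentLocation = loc; }

    /** Simple matching against the topics a provider offers, in the spirit
     *  of the central agent's Filter Providers goal. */
    public boolean matches(Set<String> providerTopics) {
        for (String topic : providerTopics) {
            if (interests.contains(topic)) return true;
        }
        return false;
    }

    public String getUserId()          { return userId; }
    public String getCurrentLocation() { return currentLocation; }
}
```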

7.3.2 Desires
According to [Pokahr et al. (2003)], there are three kinds of goals: achieve, maintain and perform goals. They define, respectively, a desired target state, a state the agent keeps track of, and certain activities that have to be done. In this case, the goals are presented at a high level of abstraction, so there is no distinction between the three types. Using the Gaia methodology, agents' goals and functionalities are studied in more detail in the next section. The goals for each agent are described as follows.
The central agent can reach the following goals:
Detect Users: this goal implies that the central agent has to locate and identify users.
Register Users: the central agent receives a registration request from users and sends the respective agreement back to register them. The goal of deregistering users is similar.
Improve User Shared Profile: the central agent receives the user's shared profile together with the registration request. Furthermore, it can update or improve the user profile with information such as the time users spend in visited places, gathered by sensors, etc.
Filter Providers: the central agent can filter providers by matching the user profile with provider information, and use the result of the matching to warn the closest providers.
Goals in the provider are closely related to dialogues between users and providers. A provider agent can achieve the following objectives:
Communicate Services: provider agents offer services to users according to the result of the matching realized by the central agent.
Reach a Compromise with Clients: provider agents negotiate with users and, after the negotiation phase, they can reach agreements and exchange information with them.
User agents dialogue between themselves, but they also communicate with providers and the central agent, so their goals can be the following:
Negotiation between Users and Providers: users can consult information by sending requests to providers, and can receive services according to their profiles or location. Moreover, they can ask providers for agreements and negotiate with them.
Recommendation between Users: users can recommend information to other users and they can also ask for recommendations.
Trust in Other Agents: for sharing opinions with other agents and for improving the profiles with this information.
Manage Internal Profile: user agents can update their internal profiles with information from other users' recommendations. Moreover, they send a shared part of their profiles to the central agent.

7.3.3 Intentions
The main functionalities of the multi-agent system are decomposed into separate plans. To achieve their goals, agents execute these plans. Initially, there are predefined plans in the plan library and, depending on the current situation, plans are selected in response to occurring events or goals. One main characteristic of the BDI model is that plans are selected automatically: they can be triggered by beliefs or by external messages that generate events [Pokahr et al. (2003)]. In the proposed context-aware multi-agent framework, plans represent the main activities of agents [Fuentes et al. (2006b)]; a minimal plan sketch is given after the following lists. Central Agent plans can be described as follows:
Register Users Plan: includes the detection goal (locate and identify users) and the register/deregister goal.
Provider Warning Plan: this plan dispatches the filter-providers goal.
Manage User Profiles Plan: handles the manage-user-shared-profile goal.
Provider Agent plans:
Dialogue with Users Plan: includes two goals, offering services to users and negotiating with them to reach agreements.
User Agent plans:
Negotiation Plan: users negotiate with providers and make agreements with them.
Dialogue between Users Plan: includes the recommendation goal and the trusting goal.
Manage Profile Plan: concerns the manage-internal-profile goal.
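The following minimal sketch (assuming the Jadex plan API of that generation; the plan, parameter and belief names are hypothetical) illustrates how such a plan could be written, here a central-agent plan reacting to a registration request:

```java
import jadex.runtime.Plan;

/** A central-agent plan handling a user registration request. */
public class RegisterUserPlan extends Plan {
    public void body() {
        // Triggering input: the shared part of the user's profile,
        // delivered with the registration request.
        Object sharedProfile = getParameter("shared_profile").getValue();

        // Store the new fact in the belief base.
        getBeliefbase().getBeliefSet("registered_users").addFact(sharedProfile);

        // A confirmation message back to the user agent (agree-registry)
        // would be sent here; omitted in this sketch.
    }
}
```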

7.4 Gaia methodology

Gaia is a methodology for the design of agent-based systems whose main goal is to obtain a system that maximizes some global quality measure. Gaia helps to go systematically from a few initial requirements to a design whose level of detail is sufficient to be implemented directly. Gaia deals with both the macro (societal) level and the micro (agent) level aspects of design. It represents an advance over previous agent-oriented methodologies in that it is neutral with respect to both the target domain and the agent architecture [Wooldridge et al. (2000)]. When applying Gaia to develop multi-agent systems, the analysis and design process consists of two phases: the analysis phase and the design phase [Wooldridge et al. (2000)].

7.4.1 Analysis phase

The goal of this phase is to develop an understanding of the system and its structure. Gaia's view of an agent system is as an artificial society or collection of roles, with interactions between them. A role is defined by four attributes: responsibilities, permissions, activities, and protocols [Wooldridge et al. (2000)]. This phase is concerned with the collection and organization of the system specification, in particular (i) the organizational structure, (ii) the environment model, (iii) the preliminary role model, (iv) the preliminary interaction model and (v) the organizational rules [Zambonelli et al. (2003)].

Fig. 7.2 Example of BDI model application (plan library, adopted goals, running plans, belief base and event handling).

7.4.2 Design phase

This phase is composed of the architectural and detailed design phases. The aim of design is to transform the analysis models into a sufficiently low level of abstraction that traditional design techniques, including object-oriented techniques, may be applied in order to implement agents [Wooldridge et al. (2000)].
(1) Architectural phase: it includes the definition of the system's organizational structure in terms of its topology and the completion of the preliminary role and interaction models.
(2) Detailed phase: it includes two models, (i) the agent model and (ii) the service model. They identify, respectively, the agent types in the system and the main services that are required to realize the agents' roles [Zambonelli et al. (2003)].

7.5 Analysis and design using the Gaia methodology

There are some previous research works that carry out the analysis and design of agent systems according to the Gaia methodology [Chiang et al. (2005); Ramanujam and Capretz (2004); Tiba and Capretz (2006)]. These works present some differences with this proposal in how they apply the methodology.
One of these studies proposes a multi-agent system to configure autonomic services for telecommunication [Chiang et al. (2005)]. This proposal defines the system roles based on Gaia, and it focuses only on this phase of the methodology. Moreover, the reason for using Gaia as a multi-agent system methodology is not made clear. In contrast to that proposal, all the phases of Gaia are applied to our context-aware multi-agent system.
In [Ramanujam and Capretz (2004)], a multi-agent system for autonomous database administration is defined and the Gaia methodology is used to study the system analysis and design at the best level of detail possible. The main difference with this research is the purpose of the multi-agent system, since the problem here is to analyze and design a context-aware multi-agent system for multiple applications.
In the case of the SIGMA system [Tiba and Capretz (2006)], the Gaia methodology is combined with the AUML methodology to provide a more detailed description of the architecture. The application of Gaia is similar to our proposal, but in this paper Gaia is combined with the BDI model to get a better definition of the entities, characteristics, and functionalities of our multi-agent system. The goal is to extend with the Gaia methodology the functionality obtained with BDI.

7.5.1 The environmental model

Environments are important because a multi-agent system is always situated in some domain; modeling this environment involves determining which entities and resources take part in the multi-agent system in order to reach the organizational goal [Zambonelli et al. (2003)].
For the proposed context-aware multi-agent system, whose main objective is to adapt context information to provide customized services to users based on their location and profile, the environment is represented by multiple, heterogeneous and dynamic domains, as the behaviour of the system is always the same in any domain (an airport, a fairground, a shopping center, etc.): it provides the same functionalities, but with different kinds of information depending on the context. In [Fuentes et al. (2006a)] there is an overview of how the environment for this context-aware system is represented by an ontology, since there is a need to define the context knowledge for the communication process between agents in the system.

7.5.2 The organization structure

The organization structure defines the overall goals and behaviours of the system. As described in Section 2, there are three kinds of agents, central agent, provider agent and client agent, that have to interact with each other to reach the main goal of the system: adapting context-aware information to provide customized services to users. Each of them has to achieve specific goals in the organization and work together to reach the main objective of the system.

7.5.3 Role model

A role model identifies the main roles that agents can play in the system. A role is viewed as a description of the functionality of an agent, and it includes four attributes: permissions (resources used while performing a role), responsibilities (expected behaviours of the role), activities (actions without interaction between roles) and protocols (actions that involve interaction between roles). The following roles have been defined in the proposed multi-agent system:
Provider Discovery: it obtains the closest providers and communicates with them to alert them to the presence of a user.
User Manager: this role is responsible for coordinating all activities concerning users, such as locating, identifying and registering users, and improving their public profiles.

Fig. 7.3 Example of interaction between agents (INFORM, REQUEST and AGREE messages exchanged among the Provider, Central and Client agents).

Service Manager: this role manages the matching between services offered by providers and the user profile.
Service Provider: its main function is to communicate the offered services to users according to user location and profile. Another function is to reach compromises with clients in order to make agreements.
Profile Manager: it deals with the update of the internal user profile. This profile can be updated with information given by other users.
Negotiate Role: users can negotiate with providers and reach agreements.
Recommend Role: users can communicate with other users to recommend places or points of interest and, generally, all kinds of information related to system services.
An example of the role model is described in Table 7.1, according to the Gaia specifications. It shows the roles corresponding to the central agent, provider agent and client agent, with the permissions, responsibilities, protocols and activities (underlined) associated with them:

7.5.4 Interaction model

The interaction model is used to represent the dependencies and relationships between the roles in the multi-agent system, according to the protocol definitions. For the previous roles, several protocols are defined. They are represented in Table 7.2.
Figure 7.3 shows a small example of how the interaction between the three kinds of agents would be: Provider with Central agent, Central agent with Client (User) agent, and Provider with Client agent.


7.5.5 Organizational rules

According to Gaia, the preliminary phases capture the basic system functionalities, characteristics and interactions in a way that is independent of the organizational structure. However, there are some relationships between roles and protocols which can be complemented with organizational rules to capture them better. These organizational rules are responsibilities of the organization expressed in a generic way [Zambonelli et al. (2003)]. In this case, the organizational rules correspond to the responsibilities analyzed in the role model. To show this more clearly, here are some examples of organizational rules defined over roles and protocols. For instance, there is a rule like the following:
(Filter Provider)^n Warn Provider
This means that Filter Provider must precede Warn Provider, and it can be executed n times. Each time Filter Provider is executed, the action Warn Provider will be executed after it. Furthermore, if we add the role, for instance the Provider Discovery role (PD) played by the central agent, the organizational rule becomes:
(Filter Provider(PD))^n Warn Provider(PD)
The rest of the organizational rules can be captured from the role model defined in Section 5.3 in the same way. In this paper we have decided not to extend this section further, because it corresponds closely to the role model, and to focus on other important phases.

7.5.6 Agent model

According to Gaia, agents are entities that play different roles in the system. The definition of the agent model consists of identifying which specific roles agents play and how many instances of each agent have to be instantiated in the actual system. The agent model for the context-aware multi-agent system is shown in Figure 7.4.

7.5.7 Service model

The service model in the Gaia methodology represents all the protocols, activities, responsibilities and liveness properties associated with the roles that agents play in the system. This model is detailed in Table 7.3.

7.6 Conclusions
This paper focuses on the analysis and design of a context-aware multi-agent system for multiple applications such as airports, fairgrounds, universities, shopping centers, etc. First of all, there is an analysis of the system that applies the BDI model to our agents. This analysis offers a general vision of the main functionalities (goals and plans) of the system, and of the agent knowledge (beliefs). Combining the BDI model with the Gaia methodology provides a lower-level definition of the problem than the BDI model alone. The organization of the multi-agent system in roles provides a clear definition of the coordination and integration of agent functionalities, responsibilities, protocols and roles. The application of the Gaia methodology to the main aspects of the system helps us to develop the multi-agent system correctly. An initial version of the multi-agent system has been implemented to check the match between the analysis and design and the implementation of the system. Future work consists of developing the whole system to accomplish the multi-agent system objectives.

Fig. 7.4 Gaia agent model for the context-aware multi-agent system problem (the Central agent plays the Provider Discovery, Service Manager and User Manager roles; the Provider agent plays the Service Provider role; the Client agent plays the Profile Manager, Recommendation Role and Negotiate Role).

Acknowledgements
Partially funded by projects CICYT TSI2005-07344, CICYT TEC2005-07186 and CAM
MADRINET S-0505/TIC/0255.


Table 7.1 Gaia Role Model for the context-aware multi-agent system problem.

Role Schema: UserManager (UM)
Description: This role is responsible for locating, identifying and registering users, as well as improving their public profiles.
Protocols and Activities: Check-user-location, agree-registry, deregister-user, receive-registry-profile, register-user, identify-user, receive-user-sequence, improve-user-profile.
Permissions: Reads user_location, user_profile, sequence. Changes user_registry, user_profile.
Responsibilities:
Liveness: UM = (Check-user-location . Identify-user . Receive-registry-profile . Agree-registry . (Register-user | Deregister-user))^n | (Receive-user-sequence . Improve-user-profile)^n
Safety: it is necessary to assure the connection with the location system and the knowledge base.

Role Schema: ServiceManager (SM)
Description: This role makes the matching between services offered by providers and the user profile.
Protocols and Activities: Match-services-profiles.
Permissions: Reads provider_service, user_profile. Changes matching_result.
Responsibilities:
Liveness: SM = (Match-services-profiles)^n
Safety: the provider_service and user_profile must be available to make the matching.

Role Schema: ProviderDiscovery (PD)
Description: This role obtains the closest providers and communicates with them to alert them to the presence of a user.
Protocols and Activities: Filter-provider, warn-provider.
Permissions: Reads matching_result. Changes communication_information.
Responsibilities:
Liveness: PD = (Filter-provider . Warn-provider)^n
Safety: if there is a successful matching result, it is possible to communicate with the provider.

Role Schema: ServiceProvider (SP)
Description: This role is responsible for informing about services and reaching agreements with users after negotiation.
Protocols and Activities: Offer-service, request-negotiation, agree-negotiation, exchange-information.
Permissions: Reads agree-negotiation, information_exchange. Changes services_offered.
Responsibilities:
Liveness: SP = (Offer-service)^n | (Request-negotiation . Agree-negotiation . Exchange-information)^n . (Offer-service)
Safety: it is necessary to negotiate first in order to reach an agreement and exchange information.

Role Schema: ProfileManager (PM)
Description: This role is responsible for updating internal user profiles and offers the possibility of sending a shared part to the central agent.
Protocols and Activities: Update-internal-profile, send-shared-profile-registry.
Permissions: Reads user_profile, recommend_information. Changes user_profile.
Responsibilities:
Liveness: PM = (Update-internal-profile)^n | (Send-shared-profile-registry)^n
Safety: it is necessary to receive external information, like recommendation information, to improve or update the internal profile.

Role Schema: NegotiateRole (NR)
Description: This role lets agents negotiate and, according to this, receive new or improved services.
Protocols and Activities: Consult-information, receive-services, ask-for-agreements, receive-request-negotiation, exchange-information.
Permissions: Reads negotiate_information. Changes services.
Responsibilities:
Liveness: NR = (Consult-information)^n | (Ask-for-agreements . Receive-request-negotiation . (Exchange-information)^n . Receive-services)^n
Safety: there is a negotiation phase to reach agreements and receive new services.

Role Schema: RecommendedRole (RR)
Description: This role offers the possibility of recommending information between users, and decides whom to trust for sharing opinions.
Protocols and Activities: Recommend, ask-for-recommendations, decide-to-trust, receive-recommendation.
Permissions: Reads recommendations. Changes opinion_to_others, user_profile.
Responsibilities:
Liveness: RR = (Decide-to-trust . Recommend)^n | (Ask-for-recommendations . Receive-recommendation)^n
Safety: it is necessary to assure the connection with the location system and the knowledge base.

Table 7.2 Gaia Interaction Model for the context-aware multi-agent system problem.

Agree-Registry (User Manager to Profile Manager): sends a message to confirm the user registry; the outcome is whether the user is registered or not.
Receive-Registry-Profile (User Manager from Profile Manager): receives a registration request from the user with the shared part of the user profile.
Warn-Provider (Provider Discovery to Service Provider): sends an inform message to the closest provider role, alerting it to the presence of a user.
Offer-Service (Service Provider to Profile Manager): sends offers about services to users, which can be negotiated.
Request-Negotiation (Service Provider to Profile Manager): asks to open a negotiation process from provider to user.
Agree-Negotiation (Profile Manager to Service Provider): sends a message accepting or rejecting the negotiation.
Exchange-Information (Service Provider with Profile Manager): exchanges information for making agreements and improving the offered services.
Consult-Information (Negotiate Role to Service Provider): users can consult information by asking providers.
Receive-Services (Negotiate Role from Service Provider): the user agent receives services, according to its profile and location, from the provider agent.
Ask-for-Agreements (Negotiate Role to Service Provider): the user agent can ask the provider for agreements and negotiate conditions to receive services.
Receive-Request-Negotiation (Profile Manager from Service Provider): the user agent receives a request message for negotiation from the provider.
Send-Shared-Profile-Registry (Profile Manager to User Manager): sends a registration request with the shared part of the user profile.
Recommend (Recommendation Role to Profile Manager): a user agent can recommend to other user agents information about products, places, services, etc.
Ask-for-Recommendations (Profile Manager to Recommendation Role): user agents can ask other agents for recommendations.
Receive-Recommendations (Recommendation Role from Profile Manager): user agents can receive recommendations from other user agents; the recommendation is accepted or rejected.


Table 7.3 Gaia Service Model for the context-aware multi-agent system problem. Each service is listed with its inputs, outputs, pre-condition and post-condition.

Services Schema: UserManager (UM)
Check-user-location. Inputs: user location. Outputs: location checked. Pre-condition: users connected to the wireless network. Post-condition: users located.
Identify-user. Inputs: user location. Outputs: identification checked. Pre-condition: users located. Post-condition: the user is identified.
Request-registry-profile. Inputs: the user sends a request message. Outputs: request for registry and user profile sent. Pre-condition: users connected to the wireless network. Post-condition: the Central Agent receives the request from the user.
Agree-registry. Inputs: request message for registry and user profile. Outputs: agree message and registry done. Pre-condition: the user sends a request message. Post-condition: user registered or not.
Register-user. Inputs: proposed registry. Outputs: requested registry. Pre-condition: locate and identify users. Post-condition: user registered.
Deregister-user. Inputs: proposed deregistration. Outputs: requested deregistration. Pre-condition: locate and identify users, or the user is out of the wireless network. Post-condition: user deregistered.
Receive-user-sequence. Inputs: external information. Outputs: improved profile. Pre-condition: information received from sensors, etc. Post-condition: the profile is improved.
Improve-profile. Inputs: information about user behaviour. Outputs: improved profile. Pre-condition: information about user behaviour received. Post-condition: the profile is improved.

Services Schema: ServicesManager (SM)
Match-services-profiles. Inputs: user profile known by the central agent, and provider information and location. Outputs: matching result. Pre-condition: users connected to the wireless network. Post-condition: users located.

Services Schema: ServicesProvider (SP)
Offer-service. Inputs: closest-provider warning. Outputs: services offered according to location and user profiles. Post-condition: the user receives services about his preferences.
Request-negotiation. Inputs: the provider requests negotiation to the user. Outputs: the process of negotiation is requested.
Agree-negotiation. Inputs: request message for negotiation. Outputs: agree message, or nothing. Pre-condition: request message for negotiation received. Post-condition: the user accepts or rejects the request message.
Exchange-information. Inputs: agree message for negotiation. Outputs: information exchanged between provider and user. Pre-condition: negotiation process initiated. Post-condition: negotiation process finished.

Services Schema: ProfileManager (PM)
Update-internal-profile. Inputs: other users' recommendations. Outputs: profile updated. Pre-condition: other users' recommendations received. Post-condition: the profile is updated.
Send-shared-profile-registry. Inputs: need of registry. Outputs: request message and profile sent to the central agent.

Services Schema: ProviderDiscovery (PD)
Filter-provider. Inputs: result of matching. Outputs: closest providers. Pre-condition: the result of the matching is valid. Post-condition: providers filtered by location and information.
Warn-provider. Inputs: filtered provider. Outputs: the provider receives a closest-provider alert message. Pre-condition: closest provider obtained. Post-condition: the provider is alerted in order to inform the closest users.

Services Schema: NegotiateRole (NR)
Consult-information. Inputs: information to consult. Outputs: proposal of negotiation, agreement or disagreement. Post-condition: agreement reached or failure.
Receive-services. Inputs: request for services, or the result of the matching is valid. Outputs: services received. Pre-condition: matching done by the central agent and the provider is warned to offer services. Post-condition: users obtain customized services and information.
Ask-for-agreements. Inputs: need of agreements. Outputs: request for reaching an agreement with another agent. Post-condition: the user agent asks the provider for an agreement.
Receive-request-negotiation. Inputs: the user agent has requested to reach an agreement. Outputs: the process of negotiation is requested. Post-condition: agreement accepted or rejected.
Exchange-information. Inputs: request for negotiation. Outputs: information exchanged between provider and user. Pre-condition: the negotiation process is open (or not) for exchanging the required information. Post-condition: negotiation process finished.

Services Schema: RecommendRole (RR)
Recommend-users. Inputs: proposed recommendation. Outputs: recommendation accepted or refused. Post-condition: recommendation success or failure.
Ask-for-recommendations. Inputs: need of recommendation. Outputs: recommendations requested. Post-condition: the user agent receives recommendations or not.
Receive-recommendations. Inputs: recommendations requested. Outputs: recommendations received. Pre-condition: recommendations asked for. Post-condition: recommendations received by the user agent.
Decide-to-trust. Outputs: decision to trust. Post-condition: opinions shared with other agents.

Chapter 8

Agent-Based Context-Aware Service in a Smart Space
Wan-rong Jih, Jane Yung-jen Hsu
Department of Computer Science and Information Engineering, National Taiwan
University, 10617 Taipei, Taiwan
jih@agents.csie.ntu.edu.tw, yjhsu@csie.ntu.edu.tw

Abstract
Technologies of ubiquitous computing play a key role in providing contextual information and delivering context-aware services to a smart space. Sensors deployed in a smart space can reflect the changes of the environment and provide contextual information to context-aware systems. Moreover, it is desirable that services react to the rapid change of contextual information and that all the inner computing operations be hidden from the users.
We propose a Context-aware Service Platform, implemented on the JADE agent platform, that utilizes Semantic Web technologies to analyze the ambient contexts and deliver services. We integrate ontology-based and rule-based reasoning to automatically infer high-level contexts and deduce a goal for context-aware services. An AI planner decomposes complex services and establishes the execution plan, and agents perform the specified tasks to accomplish the services. A Smart Alarm Clock scenario demonstrates the detailed functions of each agent and shows how these agents cooperate with each other.

8.1 Introduction
It is obvious that mobile devices, such as smart phones, personal digital assistants (PDAs), and wireless sensors, are increasingly popular. Moreover, many tiny, battery-powered, wireless-enabled devices have been deployed in smart spaces for collecting contextual information about the residents. The Aware Home [Abowd et al. (2000)], Place Lab [Intille (2002)], EasyMeeting [Chen et al. (2004b,c)], and smart vehicles [Look and Shrobe (2004)] provide intelligent and adaptive service environments for assisting the users to concentrate on their specific tasks. Apparently, services in a smart space should have the ability to react and adapt to the dynamic change of context.
Context-awareness is the essential characteristic of a smart space, and using technologies to achieve context-awareness is a form of intelligent computing. Within a richly equipped, networked environment, users need not carry any devices with them; instead, the applications have to adapt the

available resources for delivering services to the vicinity of the users, as well as for tracking the location of users. Cyberguide [Long et al. (1996); Abowd et al. (1997)] uses the users' locations to provide an interactive map service. Active Badge [Want et al. (1992)] was originally developed at Olivetti Research Laboratory. In this system, every user wears a small infrared device, which generates a unique signal and can be used to identify the user. Xerox PARCTab [Want et al. (1995)] is a personal digital assistant that uses an infrared cellular network for communication. Bat Teleporting [Harter et al. (2002)] is an ultrasound indoor location system. PARCTab and Teleporting are similar to Active Badge; they are deployed to determine user identity and location by interpreting distinct signals from the sensors.
Context-aware systems involve not only multiple devices and services, but also software agents. Agent-based architectures can dynamically adapt in rapidly changing environments and hence can support context-aware systems. The Context Toolkit [Salber et al. (1999a)] introduces context widgets that provide applications with access to contextual information while hiding the details of context sensing. Each widget is a software component, a simple agent designated to handle context acquisition. Chen et al. (2004a) propose the CoBrA architecture, which contains a broker that can maintain context knowledge and infer high-level context. E-Wallet [Gandon and Sadeh (2003)] is an agent-based environment for providing context-aware mobile services.
Researchers believe that successful smart spaces must draw computers into our natural world of human daily activities [Hanssens et al. (2002)]. However, many challenges have been encountered while building context-aware systems [Want and Pering (2005)]. In a smart space, augmented appliances, stationary computers, and mobile sensors can be used to capture raw contextual information (e.g. temperature, spatial data, network measurements, and environmental factors), and consequently a context-aware system needs to understand the meaning of a context. Therefore, a model to represent contextual information is the first issue in developing context-aware systems. Context-aware services require a high-level description of the users' states and the environment situations. However, high-level context cannot be directly acquired from sensors: the capability to infer high-level contexts from the existing knowledge is required in context-aware systems. Consequently, how to derive high-level contexts is the second issue. As we know, people may move anywhere at any time, so it is increasingly important that computers develop a sense of location and context in order to appropriately respond to the users' needs. How to deliver the right services to the right places at the right time is the third issue.
In this research, we leverage multi-agent and Semantic Web technologies, providing the means to express context and using abstract representations to derive usable context, in order to proactively deliver context-aware services to the users. We deal with the issues of building context-aware systems and explore the roles of intelligent sensing and of mobile and ubiquitous computing in smart home services. Interactions between the users and services take place through a wide variety of devices. Meanwhile, context-aware services deliver multimedia messages or contents through the smart home devices for assisting users.

8.2 Background technology

8.2.1 Context models
A context model is needed to define and store contextual information, which includes temporal relations, geo-spatial entities and relations, user profiles, personal schedules, and actions taken by people. Developing a model to represent the wide range of contexts is a challenging task. Strang and Linnhoff-popien (2004) summarized the most influential context modeling approaches according to the data structures used for representing and sharing contextual information in context-aware systems.
The key-value model is the simplest context model; contextual information is represented
as data structures or class objects using programming languages [Schilit et al. (1994); Coen (1998); Dey (2000)]. Such a hard-coded representation is intuitive, but lacks expressiveness and extensibility. Using a meta-language (e.g. the Extensible Markup Language (XML1)) to represent contextual information [Capra et al. (2003)] gains more extensibility, but it only provides a syntactic level of representation. Consequently, the markup scheme models are unable to provide adequate support for semantic representation and interrelation, which is essential for knowledge abstraction and context reasoning [Chen et al. (2005)]. The logic-based context models [McCarthy (1993); Akman and Surav (1996)] are based on modeling the contextual information as situations and formulating the changes of contextual information as actions that are applicable to certain situations. In other words, just like in the situation calculus [McCarthy and Hayes (1969)], the changes of contextual information can be formulated as a series of situations resulting from various actions being performed. Though the logic-based models have reasoning capabilities, it is hard to formulate complex logical expressions when the situations are complicated.
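As a tiny illustration of the key-value model's simplicity and its limits, consider the following minimal sketch (hypothetical keys and values): the meaning of every key lives only in the code, so there is no semantic representation to reason over:

```java
import java.util.HashMap;
import java.util.Map;

/** Key-value context model: a plain map with hard-coded interpretation. */
public class KeyValueContext {
    public static void main(String[] args) {
        Map<String, String> context = new HashMap<>();
        context.put("user", "alice");
        context.put("location", "room-101");
        context.put("temperature", "21.5");

        // The application must know what each key and value means.
        if ("room-101".equals(context.get("location"))) {
            System.out.println("Deliver the services configured for room-101");
        }
    }
}
```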
Strang and Linnhoff-popien (2004) concluded that ontologies are the most expressive models. Gruber (1993) defines an ontology as an explicit specification of a conceptualization. An ontology is developed to capture the conceptual understanding of a domain in a generic way and to provide a semantic basis for grounding fine-grained knowledge. COBRA-ONT [Chen et al. (2003)] provides the key requirements for modeling context in a smart meeting application. It defines concepts and relations for physical locations, time, people, software agents, mobile devices, and meeting events. SOUPA [Chen et al. (2004d)] (Standard Ontology for Ubiquitous and Pervasive Applications) is proposed for supporting pervasive computing applications. SOUPA uses some other standard domain ontologies, such as FOAF2 (Friend of A Friend), OpenGIS3, the spatial relations in OpenCyc4, ISO 8601 date and time formats5, and the DAML time ontology [Hobbs and Pan (2004)]. Strang et al. (2003) introduce CoOL, a context ontology language for enabling context interoperability and context-awareness during service discovery and execution. Roman et al. (2002) develop Gaia, a distributed middleware platform for smart spaces that uses ontologies to define the structures and properties of contextual information; furthermore, it can handle various kinds of context reasoning. Clearly, these ontologies provide not only a rich context representation, but also the abilities to reason over and share knowledge.

8.2.2 Context reasoning

The process of context reasoning can infer new contexts from existing contexts. In a smart space, if a system is unable to reason about and share context, the intelligence of the context-aware system will be limited, and the users will abandon systems that are unable to deliver the services or meet their requirements.
The design and implementation of context reasoning can vary depending on the types of contextual information involved. The early context-aware systems [Coen (1997); Wu et al. (2002); Capra et al. (2003)] tightly coded the logic of context reasoning into the behavior of the systems. The implementation for understanding the contextual information is bound into the programs guiding the context-aware behavior of the systems; therefore, the developed applications often have rigid implementations and are difficult to maintain [Salber et al. (1999b)].
Rule-based logical inference can help to develop flexible context-aware systems by separating
high-level context reasoning from low-level system behaviors. However, context modeling languages
1 http://www.w3.org/XML/
2 http://xmlns.com/foaf/spec/
3 http://www.opengeospatial.org/standards
4 http://www.cyc.com/cycdoc/vocab/spatial-vocab.html
5 http://www.w3.org/TR/NOTE-datetime

are used to represent contextual information, while rule languages are used to enable context reasoning. In most cases these two types of languages have different syntaxes and semantic
representations, so effectively integrating them to support
context-aware systems is a challenge. A mechanism to convert between context modeling and reasoning languages is one solution. Gandon and Sadeh (2003, 2004) propose the e-Wallet,
which implements ontologies as context repositories and uses the Jess rule engine [Friedman Hill (2003)]
to invoke the corresponding access control rules. The e-Wallet uses RDF6 triples to represent contextual information and OWL7 to define the context ontology. Contextual information is loaded into the
e-Wallet by using a set of XSLT8 stylesheets that translate OWL input files into Jess assertions and
rules.
Ontology models can represent contextual information and specify the concepts, subconcepts, relations, properties, and facts in a smart space. Moreover, ontology reasoning can use these
relations to infer facts that are not explicitly stated in the knowledge base. Ranganathan et al.
(2004) propose that ontologies can make it easier to develop programs for reasoning about context.
Chen (2004) proposes that the OWL language can provide a unified solution for context representation and reasoning, knowledge sharing, and meta-language definitions. Anagnostopoulos et al.
(2007) adopt Description Logic [Baader et al. (2003)] as the most useful language for expressing
and reasoning over contextual knowledge. OWL DL was designed to support the existing Description
Logic business segment and has desirable computational properties for reasoning systems. A typical ontology-based context-aware application is EasyMeeting, which uses OWL to define the SOUPA
ontology and OWL DL to support context reasoning. Gu et al. (2004) and Wang et al. (2004) propose an OWL-encoded context ontology, CONON, in the Service Oriented Context Aware Middleware
(SOCAM). CONON consists of two layers of ontologies: an upper ontology that captures
general concepts and a domain-specific ontology. Both EasyMeeting and SOCAM use an OWL DL
reasoning engine to check the consistency of contextual information and to reason over
low-level context in order to derive high-level context.

8.3 Smart space infrastructure


Fig. 8.1 shows the infrastructure of a Context-aware Service Platform in a smart space. Context resources can be obtained from software, such as a personal calendar, weather forecasts,
a location tracking system, a personal friend list, and a shopping list, as well as from raw sensor data. Context
Collection Agents obtain raw contexts from software and hardware sensors and convert the raw data
into a semantic representation. The Context-aware Service Platform continuously collects
these contexts, infers the appropriate Service Applications, and then automatically and proactively
delivers the services to the users.
A Context-aware Service Platform contains the following components:
Message Transportation provides a well-defined protocol for maintaining a set of communicative
acts. Moreover, a common message structure is defined for exchanging messages over the Context-aware Service Platform. The common message structure contains the sender, the receiver, the type of
communicative act, the message content, and a description of the content (a minimal sketch of such
a structure is given after this component list). In this platform, the components communicate with
each other through message passing.
Life Cycle Management maintains a White Pages service and the states of the services in order to
control access to and use of the services. A service can be in one of the following states: initiated,
active, waiting, suspended, or deleted. Life Cycle Management reflects the state changes and controls
the state transitions. Consequently, every component is controlled by Life Cycle Management.

Fig. 8.1 A Smart Space Infrastructure

6 http://www.w3.org/TR/rdf-concepts/
7 http://www.w3.org/TR/owl-features/
8 http://www.w3.org/TR/xslt
Rule-based Engine uses IF-THEN rule statements, which are simply patterns; the inference
engine matches new or existing facts against the rules. Like an
OWL DL reasoner, a rule engine can deduce high-level contexts from low-level contexts;
the major difference is that a rule engine can handle massive and complex reasoning tasks that an OWL
DL reasoner cannot. The derived high-level contexts are asserted into the Context Knowledge
Base, which serves as persistent storage for context information. Rules defining which services can
be invoked, and when, form the service invocation knowledge of the Rule-based Engine.
Ontologies are loaded into the OWL DL reasoner to deduce high-level context from low-level
context. The OWL DL reasoner provides inference services that ensure the ontology does not
contain contradictory facts. The class hierarchy of an ontology can be used to answer queries by
checking the subclass relations between classes. Moreover, computing the direct types of every
ontology instance helps to find the most specific class that an instance belongs to.
Yellow Pages Service provides the functions for service registration and discovery. New services
register themselves with the Yellow Pages Service. The Service Deliverer and other services can
search for the desired services and retrieve the results.
AI Planner generates a service composition plan, a sequence that satisfies a given goal. The
Service Deliverer chooses a service to execute from the candidate list returned by the
Yellow Pages Service.
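
As a minimal sketch of the common message structure described under Message Transportation above (the class and field names are illustrative assumptions based on that description, not part of the platform's published interface), the structure could be captured as a simple Java value object:

import java.io.Serializable;

// Hypothetical sketch of the platform's common message structure: sender,
// receiver, type of communicative act, message content, and a description
// of the content. Serializable so it can be passed between components.
public class PlatformMessage implements Serializable {
    public enum CommunicativeAct { INFORM, REQUEST, SUBSCRIBE }

    private final String sender;
    private final String receiver;
    private final CommunicativeAct act;
    private final String content;            // e.g. an RDF triple serialization
    private final String contentDescription; // e.g. the language/ontology of the content

    public PlatformMessage(String sender, String receiver, CommunicativeAct act,
                           String content, String contentDescription) {
        this.sender = sender;
        this.receiver = receiver;
        this.act = act;
        this.content = content;
        this.contentDescription = contentDescription;
    }

    public String getSender() { return sender; }
    public String getReceiver() { return receiver; }
    public CommunicativeAct getAct() { return act; }
    public String getContent() { return content; }
    public String getContentDescription() { return contentDescription; }
}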


8.4 Context-aware service platform


The Foundation for Intelligent Physical Agents (FIPA9) develops software standards to
promote the interoperation of heterogeneous agents and the services that they represent. The
Java Agent DEvelopment Framework (JADE10) is a FIPA-compliant software framework for multi-agent systems, implemented in Java, that comprises several system agents. An Agent Management System
(AMS) controls the agents' life cycles and plays the role of a white pages service; the Directory Facilitator (DF)
provides a yellow pages service to the other agents; the Agent Communication Channel (ACC) is the agent
that provides the path for basic contact between agents; and the Agent Communication Language
(ACL) specifies the message formats, consisting of encoding, semantics, and
parameters.
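
As an illustration of how a platform service can use the JADE facilities just described, the following minimal sketch registers an agent with the Directory Facilitator; the service type name is a hypothetical example, not taken from the chapter:

import jade.core.Agent;
import jade.domain.DFService;
import jade.domain.FIPAException;
import jade.domain.FIPAAgentManagement.DFAgentDescription;
import jade.domain.FIPAAgentManagement.ServiceDescription;

// Minimal sketch of a platform service agent that publishes itself in the
// JADE Directory Facilitator (the FIPA yellow pages service).
public class AlarmServiceAgent extends Agent {
    protected void setup() {
        DFAgentDescription dfd = new DFAgentDescription();
        dfd.setName(getAID());
        ServiceDescription sd = new ServiceDescription();
        sd.setType("alarm-service");           // hypothetical service type
        sd.setName(getLocalName() + "-alarm");
        dfd.addServices(sd);
        try {
            DFService.register(this, dfd);     // publish in the yellow pages
        } catch (FIPAException e) {
            e.printStackTrace();
        }
    }

    protected void takeDown() {
        try {
            DFService.deregister(this);        // clean up on termination
        } catch (FIPAException e) {
            e.printStackTrace();
        }
    }
}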
The design of the Context-aware Service Platform is shown in Fig. 8.2. The top block depicts a smart
space environment.

Fig. 8.2 Functional Flow of Context-aware Service Platform

Context resources are collected by Context Collection Agents, which receive sensor or software
data and deliver it to the context-aware reasoning model. Context Collection
Agents are device-dependent agents; each agent is associated with a different type of device
that provides raw sensor data. The Ontology Agent and the Context Reasoner infer high-level context to
provide goals for service planning. Context information and service descriptions are stored in the Context
Knowledge Base. After service composition, discovery, and delivery have been performed, a context-aware service
is delivered to the specified Service Applications.
9 http://www.fipa.org/
10 http://jade.tilab.com/


8.4.1 Context-aware reasoning

We deploy Jess11 in our Context-aware Service Platform. Jess is a forward-chaining rule engine that uses
the Rete algorithm [Forgy (1982)] to process rules; Rete is a very efficient algorithm for solving the
difficult many-to-many matching problem. We use Pellet12, an open-source OWL Description Logics (OWL
DL) reasoner developed by the Mindswap Lab at the University of Maryland, to infer high-level
contexts. Moreover, Jena13, a Java framework for building Semantic Web applications, is used to
provide a programmatic environment for RDF, RDFS, and OWL.

8.4.1.1 Context aggregator


Initialization : A configuration file declares the types of context that the Context Aggregator
will receive, and each declared context is subscribed to its corresponding Context Collection Agent (refer to
Fig. 8.1).
Input message : There are two types of input context. (1) Raw context refers to data obtained
directly from context sensors or software; for example, bed sensor data and forecast data can be
delivered by a bed sensor and a weather API, respectively. The senders of these low-level contexts
are the Context Collection Agents, and the data is wrapped as RDF triples in the message
content. (2) High-level context is information inferred from raw context; for example, the location of
a piece of furniture and the activity a person currently participates in can be inferred by the ontology reasoner
and the rule-based reasoner, respectively.
Process : The value of a context can change at any time and anywhere. Consequently, the Context
Aggregator must collect contexts and maintain the consistency of the current context. Every raw or
high-level context has a unique type identity and a value; the associated value is replaced
when a new context arrives (a minimal sketch of this behavior follows below).
Output message : When raw contexts are received from the Context Collection Agents, the new context
is immediately stored in the Context Repository and a set of current contexts is delivered to
the Ontology Agent.
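
A minimal sketch of this replace-on-arrival behavior (the class name and the string-typed values are simplifying assumptions):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: keeps exactly one current value per context type identity,
// replacing the old value whenever a newer context of the same type arrives.
public class ContextAggregatorSketch {
    private final Map<String, String> currentContext = new ConcurrentHashMap<>();

    /** Stores the latest value; returns true if the stored value actually changed. */
    public boolean update(String typeIdentity, String value) {
        String previous = currentContext.put(typeIdentity, value);
        return previous == null || !previous.equals(value);
    }

    /** Consistent snapshot of the current context, delivered to the Ontology Agent. */
    public Map<String, String> snapshot() {
        return new HashMap<>(currentContext);
    }
}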

8.4.1.2 Ontology agent


Initialization : An OWL context ontology, which describes the structure of and relations between contexts,
is loaded and parsed into RDF triples by the Jena API. The agent also subscribes to the Context Aggregator
for the contexts that have been declared in the context ontology. In addition to loading the ontology,
it has to start an OWL DL reasoner to support ontology queries.
Input message : The Context Aggregator sends the current state of the contexts whenever any subscribed
context has changed.
Process : Two types of ontology reasoning are performed in the Ontology Agent so that it
can provide high-level context. The first uses the Jena API to deduce high-level context
from the object properties of a context, such as a bed sensor being attached to a bed and the bed being placed
in a bedroom; the relationships between the instances of context objects are defined in the context
ontology. The second uses an OWL DL reasoner, i.e. Pellet, to deduce the inferred superclasses of a class and
to decide whether one class is subsumed by another, for example, that a living room is an indoor
location.
11 http://herzberg.ca.sandia.gov/
12 http://pellet.owldl.com/
13 http://jena.sourceforge.net/

138

Agent-Based Ubiquitous Computing

Output message : If no high-level context can be derived, the input message of current
contexts is redirected to the Context Reasoner. Otherwise, the new high-level contexts are
delivered to the Context Aggregator.
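
A minimal sketch of this initialization, assuming Pellet's Jena binding and a hypothetical ontology file name:

import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.reasoner.Reasoner;
import org.mindswap.pellet.jena.PelletReasonerFactory;

public class OntologyAgentInit {
    public static void main(String[] args) {
        // Load the OWL context ontology and parse it into RDF triples (file name is hypothetical).
        Model ontology = ModelFactory.createDefaultModel();
        ontology.read("file:context-ontology.owl");

        // Attach the Pellet OWL DL reasoner so that subsumption queries can be answered,
        // e.g. that Livingroom is (transitively) a subclass of Location.
        Reasoner pellet = PelletReasonerFactory.theInstance().create();
        InfModel inferred = ModelFactory.createInfModel(pellet, ontology);

        // The inference model also exposes triples that were not explicitly stated.
        System.out.println("Triples after reasoning: " + inferred.size());
    }
}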

8.4.1.3 Context reasoner


Initialization : The Jess API provides the packages for loading the rule-based engine when the Context Reasoner starts up.
Input message : A set of current contexts, the same as the input message of the Ontology
Agent.
Process : The input contexts are wrapped in the Jess fact format and asserted into the rule-based
engine; they may trigger the execution of rules that infer new contexts or derive a
service goal.
Output message : New high-level contexts are sent to the Context Aggregator, whereas a service
goal is delivered to the Service Composition Agent.
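
A minimal sketch of this start-up and assertion step with the Jess API (the rule file name and the fact template are illustrative assumptions):

import jess.JessException;
import jess.Rete;

public class ContextReasonerInit {
    public static void main(String[] args) throws JessException {
        Rete engine = new Rete();
        // Load the context and service invocation rules (file name is hypothetical;
        // the file is assumed to define a (context (type ...) (value ...)) template).
        engine.batch("service-rules.clp");
        // Wrap a current context as a Jess fact and assert it into working memory.
        engine.assertString("(context (type bedSensor-isOn) (value true))");
        // Forward chaining may now fire rules that infer new contexts or a service goal.
        engine.run();
    }
}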

8.4.2 Service planning


The modern trend in software is to build platform-independent architectures that distribute software components over the Internet. New services and functionalities can then be achieved automatically by
selecting and combining a set of available software components.
A service description contains a semantic annotation of what the service does and a functional annotation
of how it behaves. OWL-S14 (formerly DAML-S15, the DARPA Agent Markup Language for Web
Services) is an ontology for services, and it provides three essential types of knowledge about a service:
the service profile, the process model, and the service grounding. An OWL-S service profile states the
preconditions required by the service and the expected effects that result from its execution. A process model describes how services interact and what functionalities they offer that can be
exploited to solve the goals. The role of the service grounding is to provide concrete details of message
formats and protocols.
Based on these semantic annotations, AI planning has been investigated for composing services. Graphplan [Blum and Furst (1997)] is a general-purpose graph-based planner. State
transitions are defined by operators consisting of preconditions and effects. Given initial states, goals, and
operators, the planning system returns a service execution plan, which is a sequence of actions that
starts from the initial states and accomplishes the given goals.
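
As an illustration of this operator representation (a deliberately simplified sketch with a naive forward search, not Graphplan itself and not the authors' implementation):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Simplified sketch of plan operators with preconditions and effects, in the
// style consumed by Graphplan-like planners. All names are illustrative.
public class TinyPlanner {
    record Operator(String name, Set<String> preconditions, Set<String> effects) {}

    // Naive forward search: repeatedly apply any applicable operator until the goal holds.
    static List<String> plan(Set<String> initial, Set<String> goal, List<Operator> ops) {
        Set<String> state = new HashSet<>(initial);
        List<String> steps = new ArrayList<>();
        while (!state.containsAll(goal)) {
            boolean progressed = false;
            for (Operator op : ops) {
                if (state.containsAll(op.preconditions()) && !state.containsAll(op.effects())) {
                    state.addAll(op.effects());   // apply the operator's effects
                    steps.add(op.name());
                    progressed = true;
                    break;
                }
            }
            if (!progressed) return null;         // no plan found
        }
        return steps;
    }

    public static void main(String[] args) {
        List<Operator> ops = List.of(
            new Operator("AudioPlayerProcess", Set.of("isSleeping"), Set.of("alarmDelivered")));
        System.out.println(plan(Set.of("isSleeping"), Set.of("alarmDelivered"), ops));
    }
}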

8.4.2.1 Service composition agent


Initialization : An OWL-S service ontology represents all the available services; each service
profile describes the service goal, preconditions, and effects for the AI planner, i.e. Graphplan. Consequently, the service ontology must be parsed and translated into Graphplan operators.
Input message : A service goal sent by the Context Reasoner.
Process : Using the service operators, Graphplan creates a sequence of operations that
achieves the goal.
Output message : When an operation represents a composite service, the corresponding service
model is delivered to the Service Discovery Agent. Once the execution plan has been generated, a
sequence of services is delivered to the Service Execution Agent.

14 http://www.w3.org/Submission/OWL-S/
15 http://www.daml.org/services/daml-s/0.7/

8.4.2.2 Service discovery agent


Initialization : According to the description of the service model in the service ontology, information about every atomic service is kept in this agent.
Input message : A composite service model received from the Service Composition Agent.
Process : Given the composite service, the Service Discovery Agent searches for the atomic processes
that are available and can carry out the composite service.
Output message : The atomic services that can accomplish the given composite service
are delivered to the Service Composition Agent.

8.4.2.3 Service execution agent


Initialization : The service ontology contains the service grounding descriptions, which specify the details of how an agent can access a service.
Input message : A service list sent by the Service Composition Agent.
Process : Following the control sequence of the services, the corresponding device agents are
invoked to provide each atomic service.
Output message : The input parameters for invoking an atomic service are passed to the device
agent.

8.4.3 Context knowledge base


8.4.3.1 Context repository
The Context Repository contains a consistent set of contexts, including location, time, person, and activity
information. Each context in the repository is represented as an RDF triple: a subject, a predicate, and an
object. The subject is a resource named by a URI with an optional anchor identity, the predicate is a
property of the resource, and the object is the value of the property for the resource. The following
triple represents that Peter is sleeping.
<http://...#Peter>
<http://...#participatesIn>
<http://...#sleeping>
Here Peter is the subject, participatesIn is the predicate, and the object sleeping is an activity. Following the elements of an RDF triple, we use the subject and predicate as the compound key of
the Context Repository.
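
A minimal sketch of building this triple with the Jena API (the namespace is a hypothetical placeholder for the elided URIs above):

import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.Resource;

public class TripleExample {
    public static void main(String[] args) {
        String ns = "http://example.org/smartspace#";   // hypothetical namespace
        Model model = ModelFactory.createDefaultModel();
        Resource peter = model.createResource(ns + "Peter");
        Property participatesIn = model.createProperty(ns + "participatesIn");
        Resource sleeping = model.createResource(ns + "sleeping");
        peter.addProperty(participatesIn, sleeping);    // (Peter, participatesIn, sleeping)
        model.write(System.out, "N-TRIPLE");
    }
}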

8.4.3.2 Ontologies
An ontology is a data model that represents a domain and is used to reason about the objects in that
domain and their relations. We define a context ontology, depicted in Fig. 8.3, as a representation of
common concepts in the smart space environment. Context information is collected from real-world classes (Person, Location, Sensor, Time, HomeEntity) and a conceptual class, Activity. The
class hierarchy represents an is-a relation; an arrow points from a subclass to its superclass.
A class can have subclasses that represent concepts more specific than their superclass. For
example, we can divide the class of all locations into indoor and outdoor locations; that is, Indoor


Fig. 8.3 A Context Ontology

Location and Outdoor Location are two disjoint classes, and both belong to the Location class.
In addition, the subclass relation is transitive; therefore, Livingroom is a subclass of the Location
class, because Livingroom is a subclass of Indoor and Indoor is a subclass of Location.
The relationships between classes are illustrated in Fig. 8.4. The solid arrows describe relations
between subject resources and object resources. For example, isLocatedIn describes the relation
between the instances of Person and Location, where the instances of Person are the subject resources
and the instances of Location are the object resources.
A service ontology defined with OWL-S describes the available services and comprises the service
profile, service model, and service grounding, as stated in Section 8.4.2.

8.4.3.3 Rules
The rules of a rule-based system serve as IF-THEN statements. Context rules can be triggered to infer
high-level context. Following the description of Fig. 8.4, a rule for detecting the location of a user
is shown as follows:

[Person_Location:
(?person isIdentifiedBy ?tag)
(?tag isMoveTo ?room)
->
(?person isLocatedIn ?room )
]

Fig. 8.4 Context Relationship

The patterns before the -> are the conditions matched by the rule, called the left-hand side (LHS) of
the rule. The patterns after the -> are the statements that may be fired, called the right-hand side (RHS) of the rule. If all the LHS conditions are matched, the RHS actions are
executed. An RHS statement can either assert new high-level contexts or deliver a service
goal.
Given that ?person is an instance of class Person, ?tag is an instance of MovableSensor, and ?room
is an instance of Room, the rule Person_Location declares that if a person ?person is identified by a
movable sensor ?tag and this movable sensor moves to a room ?room, we can deduce that ?person
is located in ?room.
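
The rule above follows Jena's rule syntax; a minimal sketch of evaluating it with Jena's generic rule reasoner (the namespace, prefix registration, and instance names are illustrative) might look as follows:

import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.reasoner.rulesys.GenericRuleReasoner;
import com.hp.hpl.jena.reasoner.rulesys.Rule;
import com.hp.hpl.jena.util.PrintUtil;

public class RuleExample {
    public static void main(String[] args) {
        String ns = "http://example.org/smartspace#";   // hypothetical namespace
        PrintUtil.registerPrefix("ex", ns);             // makes ex: usable in rules
        String rules = "[Person_Location: (?p ex:isIdentifiedBy ?t) "
                     + "(?t ex:isMoveTo ?r) -> (?p ex:isLocatedIn ?r)]";

        // Assert the two facts that make up the rule's left-hand side.
        Model facts = ModelFactory.createDefaultModel();
        facts.add(facts.createResource(ns + "Peter"),
                  facts.createProperty(ns + "isIdentifiedBy"),
                  facts.createResource(ns + "tag01"));
        facts.add(facts.createResource(ns + "tag01"),
                  facts.createProperty(ns + "isMoveTo"),
                  facts.createResource(ns + "bedroom"));

        GenericRuleReasoner reasoner = new GenericRuleReasoner(Rule.parseRules(rules));
        InfModel inf = ModelFactory.createInfModel(reasoner, facts);
        // The inferred model now also contains (Peter, isLocatedIn, bedroom).
        inf.write(System.out, "N-TRIPLE");
    }
}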

8.5 Demonstration scenario


We use a simple example to give a detailed picture of the Context-aware Service Platform.
In a smart space, a Smart Alarm Clock can check Peter's schedule and set the alarm one
hour prior to the first task of the day. If Peter does not wake up within a 5-minute period after the
alarm is sent, another alarm sounds at increased volume. If Peter wakes up earlier
than the alarm time, no further alarm is issued.
On the other hand, when the first task event is approaching and the sensors detect that Peter
is still sleeping, an exceptional control should deal with the situation.
In addition to setting the alarm through a traditional alarm clock, the alarm service can be delivered
to the devices in Peter's bedroom, such as a radio, stereo, speaker, or personal mobile
device.


8.5.1 Context-aware reasoning


In order to realize the Smart Alarm Clock, we have to collect Peter's schedule to decide the alarm time
and reason about whether Peter is awake or not. The Google Calendar Data API supplies on-line
schedule information, and position-aware sensors, bed pressure sensors, etc., can detect whether the user
is on the bed or not. RFID technologies [Lin and Hsu (2006)] can be used to recognize and identify
Peter's activities, while a wireless indoor location tracking system can determine Peter's
location with room-level precision [You et al. (2006)].
In order to know whether Peter is sleeping or not, all the related instances form an ontology instance network, shown in Fig. 8.5. A dashed line indicates the connection between a class and its instance.

Fig. 8.5 Instance Network for Detecting Sleeping Activity

A word in a box depicts an instance of its corresponding class; for example, bed is an instance of
the Furniture class. Each solid arrow reflects the direction of an owl:ObjectProperty relationship, from
domain to range, and an inverse property can be declared when specifying the domain and range classes.
For example, the sensor bed sensor is attached to (isAttachedTo) the furniture bed, and the inverse property is hasSensor. A boolean datatype property isOn is associated with the Sensor class for detecting
whether an instance of the class is on or off.
If someone is on the bed and the sensor bed sensor is on, then the value of isOn is true. On the
other hand, when nobody touches the bed, the value of isOn is false, from which it is inferred that Peter
is awake. In that case the system does not need to deliver the alarm service.
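
The User_is_sleeping rule referenced in the next paragraph is not listed in Section 8.4.3.3; a plausible sketch in the same rule syntax, with property names taken from the instance network above (its exact form in the implemented system is an assumption), could be:

[User_is_sleeping:
(?person isLocatedIn ?room)
(?bed isPlacedIn ?room)
(?sensor isAttachedTo ?bed)
(?sensor isOn 'true')
->
(?person participatesIn sleeping)
]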
Suppose that the calendar agent reports that the first event of the day will be held at 8:00am; the
alarm is therefore set to 7:00am. When the time is up, given the location of Peter and the status of the bed
sensor, the rule User_is_sleeping (cf. Section 8.4.3.3 and the sketch above) can deduce whether Peter is sleeping or
not. Assume that there is another rule stating that if Peter is sleeping, the smart alarm
service should be delivered. Consequently, the service goal of the Smart Alarm Clock can be derived and delivered to the Service
Composition Agent.
If Peter does not wake up for the task, then, according to the owner and importance of the task, an
exception-handling rule will be triggered to decide whether to postpone or cancel the upcoming
task. For instance, if Peter has to host a meeting at 8:00am, the task is a high-priority event.
Consequently, a rule will be triggered to postpone the meeting, and an emergency message will
be sent to the corresponding participants to inform them of the situation. On the contrary, if the task
is watching a TV show at 8:00am with low priority, then the context-aware reasoning infers that this
scheduled task should be canceled, and a video recording event will be invoked.

8.5.2 Service planning


Operators for the planner can be derived from the service profile, which gives a brief description of the
service and consists of the service name, preconditions, effects, inputs, and outputs of the service. The
following OWL-S statements show the profile of the Smart Alarm Clock.
<profile:Profile rdf:ID="alarm">
<profile:serviceName
rdf:datatype="http://www.w3.org/...#string">
smart alarm
</profile:serviceName>
<profile:hasPrecondition
rdf:resource="#alarm_precond"/>
<profile:hasInput>
<process:Input rdf:ID="message_stream">
<process:parameterType rdf:datatype=
"http://www.w3.org/...#anyURI">
http://...#MessageStream
</process:parameterType>
</process:Input>
</profile:hasInput>
....
</profile:Profile>
This example shows a service named smart alarm that has the precondition alarm_precond and whose input
parameter belongs to the MessageStream class.
The service model gives a detailed description of the service, and each service is modeled as a process.
There are three types of process: an atomic process is a primary process without any subprocess; a simple
process is used as an element of abstraction and can represent either an atomic or a composite
process; and a composite process consists of subprocesses. A composite process can be decomposed
by using control operators such as sequence, split, split+join, choice, any-order, if-then,
iterate, repeat-until, and repeat-while. Figure 8.6 shows the control flow of the Smart Alarm Clock;
it uses the operator choice to compose the process. TextMessageProcess, VideoPlayerProcess,
and AudioPlayerProcess are atomic processes, and the Smart Alarm Clock can be served by any one
of the three. An example of AudioPlayerProcess follows.
<process:AtomicProcess rdf:ID="AudioPlayerProcess">
<process:hasInput>
<process:Input rdf:ID="audio_stream">
<process:parameterType
rdf:datatype="http://...#anyURI">
http://...#AudioStream
</process:parameterType>
</process:Input>
</process:hasInput>
<process:hasPrecondition>
<expr:KIF-Condition rdf:ID="alarm_precond">
<expr:VariableBinding
rdf:ID="isSleepVariablebinding">
<expr:theObject
rdf:resource="http://...#sleeping"/>
<expr:theVariable rdf:datatype=
"http://...#boolean"> true
</expr:theVariable>
</expr:VariableBinding>
<expr:expressionData
rdf:datatype="http://...#string">
precondition of smart alarm
</expr:expressionData>
</expr:KIF-Condition>
</process:hasPrecondition>
<process:hasResult>
<process:Result rdf:ID="alarm_done"/>
</process:hasResult>
......
</process:AtomicProcess>

Fig. 8.6 Process Graph of Smart Alarm Clock
The description of an atomic process is similar to that of the profile, except that the service model describes
the process in more detail. For example, the input data type of AudioPlayerProcess belongs to the
AudioStream class, whereas the alarm service profile only gives an upper-level data type, MessageStream. Moreover, an atomic process describes detailed expressions of its preconditions; for instance, it
binds the instance sleeping of the Activity class to a boolean variable.
Service grounding specifies the details of how to access the service and deals with the concrete level of specification. Both OWL-S and WSDL are XML-based languages; therefore, an OWL-S service is easy to bind to a WSDL service, for example:


<grounding:WsdlGrounding
rdf:ID="AudioPlayerWSDLgrounding">
<service:supportedBy
rdf:resource="#AudioPlayer"/>
<grounding:hasAtomicProcessGrounding>
<grounding:WsdlAtomicProcessGrounding
rdf:ID="WsdlAtomicProcessGrounding">
<grounding:owlsProcess
rdf:resource="#AudioPlayerProcess"/>
<grounding:wsdlOperation>
<grounding:operation
rdf:datatype="http://...#anyURI">
play
</grounding:operation>
<grounding:portType
rdf:datatype="http://...#anyURI">
audio player port type
</grounding:portType>
</grounding:wsdlOperation>
</grounding:WsdlAtomicProcessGrounding>
</grounding:hasAtomicProcessGrounding>
......
</grounding:WsdlGrounding>
A WSDL service is constructed from type, message, operation, port type, binding, and
service elements. The AudioPlayerWSDLgrounding briefly shows that OWL-S provides operation
and portType mappings. In addition, XSLT can help transform WSDL descriptions into
OWL-S parameters.

8.6 Related work


Smart spaces can be houses, workplaces, cities, or vehicles; such spaces deploy embedded sensors, augmented appliances, stationary computers, and mobile devices to gather the contexts of the user.
Each place has different challenges, but similar technologies and design strategies can be applied. In
order to give the space the capability to respond to the complexities of life, researchers explore
new technologies, materials, and strategies to make the idea possible.
The Department of Architecture research group at the Massachusetts Institute of Technology proposed the
House_n16 research, which includes a living laboratory residential home research facility called the
PlaceLab [Intille (2002)]. Hundreds of sensing components are installed in nearly every part of the
house. Interior conditions of the house are captured by these sensors, which include temperature,
light, humidity, pressure, electrical current, water flow, and gas flow sensors. Eighty wired switches
detect events such as the opening of the refrigerator, the shutting of the linen closet, or the lighting of a stovetop
burner. Cameras and microphones are embedded in the house for recording the residents'
movement. Twenty computers collect all the data streams from these devices and sensors to support
multi-disciplinary research, for example, monitoring the residents' behavior, activity recognition,
and dietary status. The Aware Home17 [Abowd et al. (2000)] was proposed by the Future Computing
Environments Group at the Georgia Institute of Technology. In this house, multi-disciplinary sensors have
been constructed for monitoring the activities of the resident.

16 http://architecture.mit.edu/house_n/
17 http://www.awarehome.gatech.edu/


These smart space projects did not organize the huge amount of sensed data in a formal, structured format.
An independently developed application cannot easily interpret contexts that have no explicitly represented structure. We use the Semantic Web standards Resource Description Framework (RDF)
and Web Ontology Language (OWL) to define context ontologies, which provide a context model that
supports information exchange and the interpretation of contexts. By using Semantic Web technologies to
represent context knowledge, we introduced an infrastructure for inferring higher-level contexts and
providing adaptive services to the user.

8.7 Conclusion
This research presents a context-aware service platform, a prototype system designed to provide
context-aware services in a smart space. It integrates several modern technologies, including context-aware technologies, the Semantic Web, AI planning, and web services. In addition, reasoning approaches
for deriving new contexts and services are adopted in the system.
Ontologies for contexts and services support information sharing and enable the platform to integrate
services. Contexts are represented as RDF triples for exchanging information between agents and
for deducing new high-level contexts. Moreover, the service planner obtains its goal from the context-aware
reasoner, so that the services operate adaptively.
The current design assumes that all context resources provide consistent contexts and that no
conflicting information disturbs the processing of the Context-aware Service Platform. We should address
the fault tolerance problems while still allowing some minor errors to occur.
As a real-world environment has a huge number of contexts and the required tasks are much
more complex, the rule engine and task planner should be able to provide solutions in
reasonable time. Consequently, the concept of a clock timer can be adopted in the reasoning and
planning components. In order to provide a possible solution from partial results, an anytime
algorithm should be taken into account.
We provide a simple scenario to demonstrate the idea of the context-aware service platform.
However, this simple case does not show the power of automated service composition by AI
planning. Designing other scenarios that can illustrate and evaluate the need for service composition
is one of our future directions. Applying this platform to Web Service composition benchmark
tests is another way to evaluate its performance.

Chapter 9

An Agent Based Prototype for Optimizing Power Plant Operation
Christina Athanasopoulou and Vasilis Chatziathanasiou
Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki,
54124 Thessaloniki, Greece
{athanasc, hatziath}@eng.auth.gr

Abstract
This chapter concerns an innovative software application (the Intelligent Power Plant engineer Assistant
Multi Agent System - IPPAMAS) aiming to facilitate the control of a power plant by its personnel.
Recent advances in agent technology, mobile devices and wireless communications enable the provision of the proposed services wherever around the plant the employees actually need them. The
application aspires to form an intelligent system that functions as an engineer assistant by embedding data mining techniques in the agents. The IPPAMAS comprises a main part, the on-line
monitoring, and two off-line procedures: the processing of data, in order to form the application
repository, and the re-training of the system. The MAS is structured in three layers: the Sensor Layer,
responsible for the identification and reconstruction of sensor faults; the Condition Monitoring Layer,
responsible for the safe operation of the Thermal Power Plant and its optimization; and the Engineer Assistant Layer, which distributes the information to the appropriate users at the right time, place and
format. The performance of the application is tested through the simulation of several cases, both in
the laboratory and in a thermal power plant.

9.1 Introduction
Ubiquitous computing embeds computation into the environment, enabling people to interact with
information-processing devices more naturally and casually than they currently do, and in ways that
suit whatever location or context they find themselves in. Pervasive computing (a term used as a synonym
by many) encompasses a wide range of research topics, including distributed computing, mobile
computing, sensor networks, human-computer interaction and artificial intelligence.
Its development is strongly bound up with the evolution of mobile communication devices,
such as mobile phones and Personal Digital Assistants (PDAs). These devices can be used in a
network to exchange information and, moreover, to incorporate an expert system into the routine
of a complicated organization such as a Thermal Power Plant (TPP). The responsibilities and tasks of

the personnel in a TPP oblige them to move quite often around the plant premises. Evidently, an
application functioning only on desktops, which would expect them to stay inside an office, has far
less chance of being accepted than a ubiquitous one.
In this chapter, a theoretical framework for a ubiquitous computing application for power plants
is presented and implementation issues are discussed. Multi-agent technology is combined with data
mining (dm) techniques to form an intelligent system that aspires to function as an engineer assistant.
Although the application was originally designed to address the needs of TPP personnel, the prototype can be equally applied to other industrial domains, as the requirements regarding information flow
are similar.
The current project focuses on providing the power plant personnel with the appropriate
information at the right time, place and format. While the design of the Multi-Agent System (MAS)
ensures the information flow, the use of PDAs permits mobility.
The employees that will benefit from the proposed advanced application can be divided into three
categories: engineers, operators and foremen, and technicians. The operators and foremen work in
the limited area of a control room, so a desktop application suits them. On the contrary, technicians
move continuously, inspecting, reporting and interfering with the machinery of their section. Likewise, engineers might have various tasks to fulfil that demand their presence at different places, from
the head offices to the boiler.
The present state is that employees request or provide information orally whenever they think it
is necessary. This is done over the phone or, in some cases, even requires moving to the spot.
Evidently, the quality of the information flow is poor due to both the procedure and the mechanical noise
of the plant. By applying this system, the information flow will be improved in two respects: it will be
structured according to explicit rules, rather than depending on personal judgement, and it will reach
the recipient in written form.
The proposed novel application permits engineers to execute their duties as usual, wherever
inside the plant this is necessary, and to receive information either when they choose to or whenever
a situation arises that needs their attention. In the latter case a sound signal emphasizes the
arrival of the message. The information flow is properly personalized so as to prevent unnecessary
amounts of information from causing disorientation or even scorn.
The main contribution of the application is that, by ensuring the flow of information to the proper
person in time, critical situations can be avoided and parameters can be regulated promptly. Early
diagnosis is extremely important for maintaining high levels of availability and safety, as malfunctions can even result in unscheduled outages. Tuning the system is a demanding, continual task due
to unstable combustion characteristics (caused by variable lignite quality), load variations and several
other factors, such as boiler fouling. Optimal tuning leads to optimal performance, which constitutes the
desired target of a TPP.
In section 2 the problem domain is outlined. Also, a small part of the knowledge engineering
model, which was utilized for the specification phase, is given. In section 3 the architecture of the
proposed system is described. In section 4 the tools chosen for the implementation are listed. Section
5 includes the description of the simulation experiments that were executed in order to evaluate the
prototype. Finally, section 6 contains some conclusions and some suggestions for future enhancements and extensions.

9.2 Problem domain


9.2.1 Electricity generation units
Today's privatized and deregulated power industry calls for reliable and efficient electric energy production. In addition, power plants must comply with new, tighter environmental regulations. An
optimal mode of operation is required, so that the production cost and the pollutants released into the environment are
minimized. This means that the plants must run much closer to their theoretical operating limits; consequently, the specifications on the precision and effectiveness of control are more demanding than ever
before. Evidently, the need to assist engineers and operators is even more urgent, considering
the fact that a TPP constitutes a complex thermodynamic system.
Control and instrumentation equipment has changed more than any other aspect of power generation in recent decades. Pneumatic and then analogue electrical equipment was replaced
by microprocessors, which in turn were integrated into Distributed Control Systems (DCSs).
Unfortunately, the DCS is often treated as a direct replacement for older stand-alone analogue
or digital controllers. Hence, minimal advantage is taken of its potential; its use usually stops short at
simple data trends or management summary reports. The huge amount of data, up to now
unexploited, can be used by an expert system to improve intelligent maintenance, fault diagnostics
and productivity management.
During TPP operation the operators regulate several parameters in a semi-empirical way. Obviously, this practice has two great disadvantages: 1) the decisions are based on each operator's
personal experience and capabilities, and 2) the best possible performance is not always achieved.
A typical example of this way of acting is the setting of the set point of the automatism that
regulates the combustion air flow. We suggest that this regulation can be optimized if it is done in an
objective way based on rules extracted from previous cases. The importance of such an improvement
is self-evident, as this is a frequent operation that has a direct economic and environmental impact.
Another frequent situation is the production of false alarms, or the failure to recognize an
alarm situation, due to false sensor readings. Both cases can be resolved with the proposed system,
which replaces the recorded values of a measurement instrument with values estimated from models
derived by applying dm algorithms to historical data.
Recently, research efforts have aimed to introduce the latest accomplishments of the information technology area into the electricity generation domain (Flynn, 2003; Mangina, 2003). There are also several
examples of new-generation applications designed towards the modernisation of operation control
that have been experimentally installed in TPPs worldwide (Arranz et al., 2008; Ma et al., 2007).
Unfortunately, practice has proven that in many cases the results failed to meet expectations and
were not substantially adopted by the personnel. An expedience and feasibility study was undertaken
by the authors in order to identify the causes of unsuccessful applications and, hopefully, to overcome
them.
A complete list goes beyond the scope of this book. However, it is worth mentioning that one
of the main requirements, as expressed by the potential users, was the provision of a) applicable and
comprehensible indications and b) timely, clearly displayed information.

9.2.2 Knowledge engineering


A Knowledge Engineering (KE) methodology, CommonKADS, was chosen as a tool to model the
current plant's infrastructure and operation procedures (Schreiber et al., 2000). It offers a predefined
set of models that together provide a comprehensive view of the project under development. For
instance, Table 9.1 comprises part of the Organization model (OM), which focuses on problems
and opportunities as seen in the wider organizational context. This specific worksheet explains the
various aspects to consider in order to ultimately judge opportunities, problems and knowledge-oriented solutions within a broader business perspective. It covers, in a way, the visioning part of
the organization study.
Upon the decision to use agent technology, it was evident that the extension of the CommonKADS methodology, MAS-CommonKADS, would be more appropriate (Iglesias et al., 1998).
MAS-CommonKADS deals with aspects which are relevant to MAS. Its new characteristic is the


introduction of the Coordination model, which models the interaction between agents. In our case
this meant that the only additional task compared to our previous work was the application of the Coordination
model and some necessary modifications to the Agent model, in order to include the characteristic
aspects of intelligent agents.

9.3 Architecture
9.3.1 Agent programming paradigm
In this section the reasons that led to the adoption of the agent-oriented paradigm are listed and the
Intelligent Power Plant engineer Assistant MAS (IPPAMAS) is presented.
As stated in Section 2, the control system of a TPP must meet increasingly demanding requirements stemming from the need to cope with significant degrees of uncertainty. Jennings and Bussmann (2003) suggest that analyzing, designing and implementing such a complex software system
as a collection of interacting, autonomous, flexible components (i.e. as agents) presents several significant advantages over contemporary methods. Agents are well suited for developing complex
and distributed systems, since they provide a more natural abstraction and decomposition of complex,
nearly decomposable systems.
A centralized controller for the domain of TPPs is not a viable option, since it would be nearly
impossible to get a real-time response and to maintain an up-to-date representation of the whole
system's state. A central point of control would probably end up being a bottleneck in the system's
performance and, even worse, it might introduce a single point of failure. On the contrary, soundness
can be achieved by using a MAS. In IPPAMAS, each operating control aspect is represented by an
autonomous agent whose goal is to accomplish its assigned task and thus to contribute to the overall
aim of performance optimization.
Furthermore, the ability to flexibly manage, at runtime, multiple sources of data and multiple
problem-solving perspectives provides enormous robustness to the overall system, because if one of
the agents crashes, the others will still be able to provide some form of solution. For instance, assume
that the steam temperature is used as an input parameter for the decision to start the sootblowing
procedure. This decision is based on a model derived by applying dm algorithms to historical

Table 9.1 Identifying problems and opportunities in the organization: Organization Model, Worksheet 1

Subject: Problems and Opportunities
- The lack of tools for the exploitation of historical data for the optimization of the operation and maintenance of the plants.
- Lack of means for the improvement of the information flow during the process of confirmation and confrontation of alarm signals.

Subject: Organizational context
Objective of the company: more efficient and economic production of electric energy.
Exterior factors:
- Competitiveness - deregulation of the electric energy market
- Kyoto protocol, ecotaxes
- Price/quality of lignite due to the disassociation of the power plants from the mines
- Lignite reserve

Subject: Solutions
The aforementioned objectives can be achieved through the better management and more effective control of the TPPs. This can become feasible by providing the experts with an application that would support them in regular routines as well as in critical situations. This application should, among others, a) exploit knowledge extracted from historical data and b) display the information on mobile devices.


data. If the agent responsible for this variable fails, then the agent in charge of applying the dm models
chooses an alternative solution that does not include this parameter. This backup functionality
flexibly increases the robustness of the application.
In addition, an agent-based approach constitutes a natural representation of the application domain. As an example, the plant employees cooperate in order to handle critical situations. Each one
works towards different objectives, which comprise part of the overall solution. During this procedure they exchange relevant information so as to ensure that a coherent course of action is followed.
The agents are designed to act in a similar way.
Agents are most appropriate for facing personalization issues, as is shown, for instance, by an
empirical study on the interaction issues that an interface agent has to consider (Schiaffino and Amandi, 2004). The particularity of the presented application is that the preferences are not defined per
user, but rather per post, title, section, etc. This is due to the increased reliability and safety requirements of the power plant domain. Thematic preferences include the type of assistant (submissive
or authoritative), the type of assistance actions (alarms, recommendations, and information distribution), whether the agents are permitted to act on the user's behalf, and the tolerance to errors. Apart
from the aforementioned thematic preferences, there are also device characteristics which should be
taken into account. The interface of each device (PC, Palm, etc.) has its own characteristics, from
the point of view of interactivity (screen size, resolution, etc.) and from the technical point of view
(memory, transfer speed, processing capacity, etc.).
A last, but not least, motivation for basing the system upon agents is their suitability for developing context-aware applications (Soldatos et al., 2007). A ubiquitous application running inside a
power plant should also adopt some context-aware features. Throughout a typical working day, plant
engineers frequently find themselves in various situations (e.g. a meeting, routine duty, resolution of
a crisis). In each of these situations, they have different needs with respect to information flow. For instance, assume that a message is sent to a user who is occupied at that moment. There are alternative
scenarios depending on the content and significance of the message and the context of the user: 1)
if it is just routine information, the agent should postpone its display until the user is available; 2) if it
is an urgent matter, the agent should choose between interrupting the user or forwarding the message
to another user. The identification of the user's context is based on plant operation data (e.g. a crisis
due to an imminent system break), explicit user statements (e.g. a meeting) and IPPAMAS data (e.g. other
tasks). A minimal sketch of this decision logic follows.
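
// Hypothetical sketch of the message-handling decision described above; the
// enum values and method names are illustrative, not taken from IPPAMAS.
public class MessageDispatchSketch {
    enum Urgency { ROUTINE, URGENT }
    enum UserState { AVAILABLE, OCCUPIED }

    static String decide(Urgency urgency, UserState state, boolean alternativeUserExists) {
        if (state == UserState.AVAILABLE) return "display immediately";
        if (urgency == Urgency.ROUTINE) return "postpone until the user is available";
        // Urgent message while the user is occupied: interrupt or forward.
        return alternativeUserExists ? "forward to another user" : "interrupt the user";
    }

    public static void main(String[] args) {
        System.out.println(decide(Urgency.URGENT, UserState.OCCUPIED, true));
    }
}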

9.3.2 Intelligent Power Plant engineer Assistant MAS (IPPAMAS)


The IPPAMAS comprises three parts: 1) the off-line creation/update of the application repository, 2)
the on-line monitoring, and 3) the off-line re-training procedure, as depicted in Fig. 9.1. The second one is
designed to run continuously, supporting the users with information and suggestions in time.
The other two ensure that the appropriate models and rules are available and up-to-date, so that
the MAS can use them in order to function properly.

9.3.2.1 Creation of the application repository


For the first part, the steps followed were the ones suggested by Knowledge Discovery in
Databases (KDD), which is a non-trivial procedure of identifying implicit, previously unknown and
potentially useful patterns in large databases (Frawley et al., 1992). Data mining comprises its main
step.
In a large-scale KDD project the role of domain knowledge is characterized as a leading one
(Domingos, 1999; Kopanas et al., 2002). With this in mind, the combined application of the KE
method CommonKADS and the KDD process is proposed in order to achieve the optimum results.
In the current project, CommonKADS was applied not only for specifying the overall application,

152

Agent-Based Ubiquitous Computing

but also for driving the initial phases of KDD (Fig. 9.2). It contributed to the successful outcome
of the KDD by reducing the search space and by eliminating the risk of finding patterns that are statistically
significant but have limited physical meaning.
The creation of the application repository started with the collection and integration of data from
the plant sensors and from handwritten records (Fig. 9.3). The possibility of using several
TPPs belonging to the Public Power Corporation of Greece (PPC) as case studies was investigated in the beginning.
The installed instrumentation and control equipment vary significantly among these plants. As depicted in Fig. 9.3, the input data range from plain raw data to signals that are filtered by microprocessors
or even advanced functions. Finally, most of the datasets were taken from one of the most newly built
plants, the TPP Meliti.
Since the power plant operation diverges according to the conditions, different rules and models
were derived for each case. Each original dataset was divided into subsets concerning start-up, shut-down, lifting-load, lower-load, and steady-state operation. The latter was divided into full and low load.
Then followed the second step of KDD, data preprocessing. This turned out to be a time-consuming and labour-intensive procedure, but also a very important one for deriving the most appropriate models. It involved the cleaning of the data, i.e. dealing with missing and inaccurate data.
Depending on the cause of the missing data, one of the following practices was chosen: 1) omitting the
row of data, 2) value completion by plant experts, 3) use of a general constant (e.g. '?'), and 4) value
completion with the last valid measurement. In the case of inaccurate data the problem was faced with
clustering, regression, and/or combined computer and human inspection. Finally, the data preprocessing step was concluded with data transformation, which mainly involved the normalization of
values by decimal scaling and the creation of new characteristics.
The variables composing each subset had been selected with the basic criterion of creating models
that are as complete as possible. Nevertheless, attribute selection methods were applied in order to
check whether models of equal quality could be derived from a smaller number of input parameters. These
methods combine an attribute subset evaluator with a search method (Witten and Frank, 2005); a
hedged sketch with the Weka toolkit follows below.
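
A sketch of such an evaluator/search combination using the Weka toolkit associated with Witten and Frank (2005); the CfsSubsetEval/BestFirst pair and the dataset file name are illustrative assumptions, not necessarily the exact configuration used in this project:

import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.BestFirst;
import weka.attributeSelection.CfsSubsetEval;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class AttributeSelectionSketch {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("steady_state_full_load.arff"); // hypothetical file
        data.setClassIndex(data.numAttributes() - 1);

        AttributeSelection selector = new AttributeSelection();
        selector.setEvaluator(new CfsSubsetEval()); // attribute subset evaluator
        selector.setSearch(new BestFirst());        // search method
        selector.SelectAttributes(data);            // run the combined selection

        // Indices of the attributes kept after selection.
        System.out.println(java.util.Arrays.toString(selector.selectedAttributes()));
    }
}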
For each variable a second model was chosen among those with good statistical results, with the
main concern being the differentiation of the input parameters. This is rather important for the functionality
of the MAS: in case a model is needed for estimating a variable, there are more chances that one of

Fig. 9.1 The overall IPPAMAS outline

Fig. 9.2 System specification

them will be applicable, i.e. all of its input parameters will be valid at the current moment.
For deriving models with numeric outputs, more than 25 classification algorithms and meta-algorithms were initially applied in a trial-and-error approach (also varying their parameters). Their
performance was evaluated by applying two of the most commonly used methods: the hold-out estimate and cross-validation (Stone, 1974). These methods provide five statistical parameters when
the output variable is numeric. Three of them, i.e. the mean absolute error, the relative absolute error, and the
correlation coefficient, were mainly taken into account.
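
A sketch of computing these three statistics with Weka's cross-validation support; the M5P scheme and file name are illustrative choices (the project applied more than 25 algorithms), not the authors' definitive setup:

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.M5P;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CrossValidationSketch {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("steady_state_full_load.arff"); // hypothetical file
        data.setClassIndex(data.numAttributes() - 1);   // numeric output variable

        M5P model = new M5P();                          // one of many candidate schemes
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(model, data, 10, new Random(1));  // 10-fold cross-validation

        // The three statistics mainly considered in the chapter:
        System.out.println("Mean absolute error:     " + eval.meanAbsoluteError());
        System.out.println("Relative absolute error: " + eval.relativeAbsoluteError() + " %");
        System.out.println("Correlation coefficient: " + eval.correlationCoefficient());
    }
}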
Apart from the performance metrics, the results were also evaluated based on their physical
meaning (correlation of variables). This also involved knowledge interpretation by domain experts.
For this purpose, adequate knowledge representation formalism was used (e.g. rules defined by a
decision tree). One or more of the aforementioned KDD steps were repeated depending on the results.
An interested reader may have a look at previous publications which include analytical results and
some conclusions on the suitability of attribute selection methods and classification algorithms for
the specific case (Athanasopoulou et al., 2007, 2008).
The extracted models are used by the system for estimating the value of a variable. The estimated
values can be used to replace the recorded values of a measurement instrument in the event of a false
reading.
Models were also extracted for advising on an action and for calculating the optimum values of
controllable parameters (in order to tune a subsystem appropriately). In both cases the dataset
used as input comprised only the part of the historical data that reflected the best system
operation, judged according to selected criteria. Since the models are derived from historical TPP
data, it is not expected that they will drive the system to better operation than whatever has already been
achieved. However, the plant statistics indicate that managing to operate the system as well as the
best 10% of past cases can result in great economic and environmental benefits. For instance, it is
estimated that the NOx emissions could decrease by 13-16% (Athanasopoulou et al., 2008).


Fig. 9.3 The KDD procedure for deriving the models


9.3.2.2 On-line monitoring


The second part, the on-line monitoring procedure, consists of agents working towards the overall
goal of assisting the condition monitoring of the power plant and improving its performance. It is based
on a MAS with a layered architecture, as depicted in Fig. 9.4.
The first layer is the Sensor Layer, responsible for the identification and reconstruction of sensor
faults. It ensures that the data entering the application are valid.
The second layer is the Condition Monitoring Layer, responsible for the safe operation of the TPP
and its optimization. In this phase, meaning is assigned to the data to produce the appropriate information,
such as alarm signals and suggestions on handlings.
The third layer is the Engineer Assistant Layer, which distributes the information to the appropriate users. Pieces of information are handled differently depending on the current operating conditions
and the context of the users.
At each layer several agents cooperate to confront the respective task. There are eight basic
agent types:
(1) Variable Agent: identification of faulty sensors
(2) Condition Agent: alarms triggering
(3) Data-Mining Agent: dm models application
(4) Recommender Agent: recommendations on actions
(5) Distribution Agent: information distribution
(6) User-Interaction Agent: information personalization
(7) Trainer Agent: pattern identification
(8) Monitoring Agent: MAS coordination
Monitoring Agent supervises the entire MAS. Trainer Agent is used for the off-line re-training
procedure. These two are designed and deployed as performable agents, whereas the first six are
basic agent types that are extended to form the actual running agents.
Variable and Data-Mining Agents are placed at the first layer; Condition, Data-Mining and Recommender Agents at the second layer; and User-Interaction and Distribution Agents at the third
layer (Fig. 9.4). There are also three auxiliary agents that provide their services to all levels:
(1) Surveillant Agent: identifies the operation mode (so that the other agents apply the appropriate
models, rules, etc.)
(2) Synchronization Agent: handles a singleton flag for the synchronization of the agents
(3) DBApplication Agent: manipulates the database of the application (models, rules, MAS log files)
Each Variable Agent (VA) monitors one specific sensor. There are more than 12,500 sensors
in TPP Meliti, which provided the test cases. The values of approximately 5,000 of them are stored
in a database and are thus available for further exploitation. For each subsystem, a certain number
of variables is selected to be monitored by a VA. The selection is based on each variable's weight
and on its usage as an input parameter for the models resulting from data mining. In case of a
faulty sensor, the VA is responsible for assigning an estimated value, based on redundant data or
on the estimation produced by the Data-Mining Agent.
Data-Mining Agents (DMAs) provide an estimation of a specific sensor value or of whether an
action should be taken. Each DMA handles a different case and is responsible for recognizing which
of the available models is best suited according to the sensors' state and to the power plant operation
mode.
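To make this fault-handling step concrete, the following minimal Java sketch shows how a VA might
substitute an estimated value when a reading fails a plausibility check. All names, limits and the
estimator interface are illustrative assumptions, not taken from the IPPAMAS source:

```java
/**
 * Minimal sketch of a Variable Agent's fault-handling step.
 * All names, limits and the estimator interface are illustrative assumptions.
 */
public class VariableAgentSketch {

    /** Hypothetical estimator backed by a data-mining model or redundant sensors. */
    interface ValueEstimator {
        double estimate();
    }

    private final double lowerLimit;   // physically plausible minimum
    private final double upperLimit;   // physically plausible maximum
    private final ValueEstimator estimator;

    VariableAgentSketch(double lo, double hi, ValueEstimator est) {
        this.lowerLimit = lo;
        this.upperLimit = hi;
        this.estimator = est;
    }

    /** Returns a valid value: the reading itself, or an estimate if the sensor looks faulty. */
    double validate(double reading) {
        boolean faulty = Double.isNaN(reading) || reading < lowerLimit || reading > upperLimit;
        return faulty ? estimator.estimate() : reading;
    }
}
```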

Fig. 9.4 The IPPAMAS architecture

Condition Agents evaluate the current sensor measurements and decide whether an alarm should
be triggered and whether the prerequisites for an action are fulfilled.
Recommender Agents gather the necessary facts and produce advice on whether to perform
an action. Distribution Agents are responsible for the distribution of the produced recommendations
or alarms. They decide which piece of information should be sent to whom, based on the plant
operation conditions and the availability of users.
User-Interaction Agents (UIAs) run either on desktops or on PDAs. They personalize the displayed
information based on the user's profile and the context he/she is in. In certain situations, they are
responsible for accepting or refusing a message on the user's behalf. To facilitate the Distribution
Agents' tasks, the User-Interaction Agents send proper acknowledgement messages (received,
displayed, ignored).
The Monitoring Agent has a global view of the MAS. In general, it is responsible for the
coordination of the MAS and for ensuring its safe and sound operation.
Special attention was given to predicting any problems that could arise and to planning alternative
scenarios. The reliability and robustness of the system depend mainly on its ability to handle as
many abnormal states as possible, e.g. disconnection of a subsystem (such as the database or the
DCS), an unplanned agent death, inability to respond within the predefined time, etc.

9.3.2.3 Re-training procedure


The off-line re-training procedure is initiated periodically by the Trainer Agent (Fig. 9.1). It
searches through the application database and the log files, trying either to recognize interesting
patterns or to identify the need to update part of the rules or the models. The latter is based on
statistical metrics and on feedback entered by the user. In both cases the Trainer Agent proposes
to the administrator to repeat a specific part of the off-line processing of data. The extraction of
new models based on new operation data and their embedding into the MAS ensures the
adaptability and extensibility of the application. In this way, changes that affect the TPP operation
(caused by wear, maintenance, replacement, or even the addition of new mechanical equipment)
are accommodated.

9.4 Implementation
9.4.1 Data mining
As far as the off-line preprocessing and processing of data is concerned, the WEKA data-mining
suite was used for the application of the algorithms (wek, 2008). WEKA (distributed under the
GNU Public License) is a comprehensive workbench for machine learning and data mining. Its
main strengths lie in the classification area, where all current machine-learning approaches, and
quite a few older ones, have been implemented within a clean, object-oriented Java class hierarchy
(Witten and Frank, 2005). WEKA contains a comprehensive set of data pre-processing tools,
including filters for discretization, normalization, resampling, and attribute selection and
transformation. Additionally, it offers visualization and evaluation methods.
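As an illustration of this off-line processing, the sketch below uses the WEKA 3.x Java API to train
a J48 decision tree (a rule-producing classifier of the kind mentioned earlier) on a historical dataset
and to evaluate it with ten-fold cross-validation; the file name is hypothetical:

```java
import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class TrainDecisionTree {
    public static void main(String[] args) throws Exception {
        // Load historical operation data (file name is illustrative).
        Instances data = new DataSource("boiler_history.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1); // last attribute = target variable

        // Train a J48 decision tree, whose branches read as IF-THEN rules.
        J48 tree = new J48();
        tree.buildClassifier(data);

        // Estimate model quality with 10-fold cross-validation.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
        System.out.println(tree); // prints the learned tree
    }
}
```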

9.4.2 Multi-agent system


The models extracted through data mining could only be used for producing sporadic estimations
on user demand. The advantage of embedding them into agents is the translation of the derived
data models into useful information and applicable recommendations. The introduction of
ubiquitous computing is a further step towards the utmost exploitation of information, by reaching
the right user at the right place and time.
The Java Agent DEvelopment framework (JADE) was used for the implementation of the MAS
(jad, 2008). JADE is a middleware for the development and run-time execution of agent-based
applications that can work in both wired and wireless environments (Bellifemine et al., 2001).
Since JADE version 3.0b1, the LEAP (Lightweight Extensible Agent Platform) libraries are
completely integrated, providing a JADE runtime environment that can be deployed on a wide
range of devices, from servers to Java-enabled cell phones with J2ME MIDP. For the presented
work, this feature was important, as it enabled the design and development of a single
User-Interaction Agent type regardless of its final execution environment (desktop or portable
computer).
It should be noted that JADE containers and JADE-LEAP containers cannot be mixed within a
single platform. There is an option to use the J2SE version of JADE-LEAP for running the agents
on desktops, since it is identical to JADE in terms of APIs. Nevertheless, we chose to deploy two
separate platforms, one for the UIAs and one for the rest of the agents. The two platforms
communicated using FIPA-defined protocols.
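For readers unfamiliar with JADE, the following minimal sketch shows the general shape of a
JADE agent: a subclass of jade.core.Agent whose cyclic behaviour receives ACL messages and
reacts to them. The class name and the message-handling logic are illustrative placeholders, not
the IPPAMAS code:

```java
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;

/** Minimal JADE agent sketch; the message-handling logic is an illustrative placeholder. */
public class ConditionAgentSketch extends Agent {

    @Override
    protected void setup() {
        System.out.println(getLocalName() + " ready.");
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage msg = myAgent.receive(); // non-blocking receive
                if (msg != null) {
                    // e.g. a validated measurement forwarded by a Variable Agent
                    System.out.println("Received: " + msg.getContent());
                    // ... evaluate alarm conditions, notify other agents ...
                } else {
                    block(); // suspend until the next message arrives
                }
            }
        });
    }
}
```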

9.4.2.1 JADE advanced features


Fault tolerance is one of the most substantial requirements for an industrial application. Although
JADE is a distributed platform, it relies on a main container to house basic services such as the
Agent Management System (AMS) and the Directory Facilitator (DF). To avoid a potential single
point of failure, JADE offers two significant features: the main-container replication service and
DF persistence (for more details see (Bellifemine et al., 2007)).
Another feature of great importance for the presented application is the split-execution mode.
This is an alternative means of implementing a JADE run-time, i.e. a container split into two parts:
the front-end and the back-end. The front-end resides on mobile devices and provides agents with
the same features as a full container, while the implementation of most of these features is
undertaken by the back-end (hosted by a desktop computer).
The advantages are obvious: optimization of the wireless link and a more lightweight environment
on handheld devices. Furthermore, in case a connection temporarily drops, the agent messages are
automatically buffered in either part and delivered as soon as the connection is restored. The
connection between the two ends is managed in a manner transparent to the developers. However,
there is a way to be notified when a disconnection or a reconnection occurs. This option was used
by the IPPAMAS so that the User-Interaction Agents could act accordingly. For example, if an
urgent message is about to be sent while the PDA is disconnected, the UIA may notify the user to
use other means of communication or take whatever alternative action is necessary. Conversely,
when routine information cannot be delivered and displayed in time to serve its purpose, the UIA
may decide not to display the respective message at all.
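The resulting delivery policy can be captured in a few lines of plain Java. The sketch below is an
illustrative reconstruction; the priorities, time-to-live and action names are assumptions, not the
actual IPPAMAS rules:

```java
/** Illustrative sketch of a UIA's delivery decision; all names are assumptions. */
public class DeliveryPolicySketch {

    enum Priority { ROUTINE, INTERMEDIATE, URGENT }

    static final long ROUTINE_TTL_MS = 60_000; // routine info assumed useless after 1 minute

    /** Decide what to do with a pending message given the connection state. */
    static String decide(Priority priority, long ageMs, boolean connected) {
        if (!connected && priority == Priority.URGENT) {
            return "ALERT_SENDER"; // suggest other means of communication
        }
        if (priority == Priority.ROUTINE && ageMs > ROUTINE_TTL_MS) {
            return "DROP"; // stale routine info: do not display at all
        }
        return connected ? "DISPLAY" : "BUFFER"; // otherwise rely on JADE's buffering
    }

    public static void main(String[] args) {
        System.out.println(decide(Priority.URGENT, 0, false));      // ALERT_SENDER
        System.out.println(decide(Priority.ROUTINE, 90_000, true)); // DROP
    }
}
```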

9.4.2.2 Embedding data-mining models into agents


For deploying the Data-Mining Agents in particular, the Agent Academy platform (version
Reloaded) was used (aca, 2008). Agent Academy is an integrated development framework,
implemented itself as a multi-agent system, which supports, in a single tool, the design of agent
behaviours and reusable agent types, the definition of ontologies, and the instantiation of single
agents or multi-agent communities. Agent Academy could also have been used for the development
of all the other agent types, except for the User-Interaction Agent, since it does not currently
support a lightweight version. A unique functionality offered by Agent Academy is the embedding
of intelligence extracted through data mining into agents. Its latest version provides an interface
through which the developer can readily embed into agents decision models derived from the
application of most of the classification algorithms available in WEKA. As it is implemented on
top of the JADE infrastructure, the deployed agents are naturally compatible with the remaining
ones created with JADE.
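The general idea of embedding a trained model into an agent can be sketched with the standard
WEKA API alone; this deliberately does not depict Agent Academy's own (graphical) interface,
and all names are illustrative. The agent loads a serialized classifier at start-up and delegates the
classification of incoming measurement vectors to it:

```java
import weka.classifiers.Classifier;
import weka.core.DenseInstance;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.SerializationHelper;

/** Sketch of a model wrapper an agent could delegate to; names are illustrative. */
public class EmbeddedModelSketch {

    private final Classifier model;
    private final Instances header; // dataset structure the model was trained on

    EmbeddedModelSketch(String modelFile, Instances header) throws Exception {
        this.model = (Classifier) SerializationHelper.read(modelFile);
        this.header = header;
    }

    /** Classify one vector of sensor readings and return the predicted label. */
    String classify(double[] readings) throws Exception {
        // readings must provide one slot per attribute, including the (unset) class
        Instance inst = new DenseInstance(1.0, readings);
        inst.setDataset(header); // attach attribute information
        double idx = model.classifyInstance(inst);
        return header.classAttribute().value((int) idx);
    }
}
```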

9.4.3 Wireless transmission


For enabling the transmission of information to PDAs there are two main directions: 1) the use of
mobile services offered by mobile providers, and 2) the deployment of a Wireless Local Area
Network (WLAN). The first category includes various 2.5G (e.g. GPRS, EDGE) and 3G mobile
technologies that offer greater coverage but limited data rates, i.e. around 50-100 kbps. The second
category contains several networking technologies, such as Wi-Fi (the IEEE 802.11b standard),
WiMAX (the IEEE 802.16 standard), Bluetooth and HomeRF. The first two support data rates of
up to 11 Mbps and 72 Mbps respectively (in open air). The actual bandwidth for a wireless
Ethernet transmission inside the plant buildings is expected to be considerably lower, but still more
than enough to cover the application prerequisites.
The requirements of IPPAMAS indicated Wi-Fi as the optimum solution in terms of bandwidth
and operation cost. WiMAX also seems a promising solution, but it is not yet as widely used.
The Wi-Fi LAN will operate using the unlicensed spectrum in the 2.4 GHz band. The power plant's
infrastructure does not interfere with the wireless network signals. Problems with noise from
cordless phones or other devices operating at the 2.4 GHz frequency can easily be resolved by
switching them to 900 MHz.
The WLAN is suggested to be deployed using Access Points (APs) that support infrastructure
mode. This mode bridges the WLAN with the existing wired Ethernet LAN.
Security concerns are partly addressed by the fact that the plant territory extends well beyond the
buildings, and thus beyond the network range. Nevertheless, the inherently open nature of wireless
access renders the system vulnerable to malicious attacks. Taking this into account, security issues
should be considered further when developing a commercial version of the presented prototype,
e.g. by adopting standards such as WPA and WPA2, which significantly improve the security of
Wi-Fi systems and include techniques for hiding APs from potential attackers.
As far as the handheld devices are concerned, the minimum requirements were support for Wi-Fi
and Java MIDP 2.0, at least 64 MB of RAM, and a sufficiently fast processor (clock rate higher
than 400 MHz).

9.4.4 Profiles
Regarding the implementation of the profiles, the XML markup syntax was chosen because of its
several advantages, such as enhancing interoperability and providing files that are both
human-readable and machine-processable. The profiles are structured hierarchically, with several
more specific profiles building on a less specific one. The application will maintain three kinds of
profiles: services, users and devices. The first two are comprised of keywords (e.g. boiler/turbine/
mills/chemical laboratory, operation/maintenance/administration, engineer/foreman/operator/
technician/officer, head-of-section/deputy/assistant). The device profile will contain a simple
collection of device characteristics.
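A user profile of this kind might look as follows; the element names and values are purely
illustrative, since the chapter does not publish the actual schema:

```xml
<!-- Hypothetical user profile; element names are illustrative, not the actual schema. -->
<userProfile id="pe-0042">
  <section>boiler</section>
  <department>operation</department>
  <role>engineer</role>
  <rank>head-of-section</rank>
  <device ref="pda-017"/>   <!-- points to a separate device profile -->
</userProfile>
```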

9.5 Evaluation
The evaluation of the IPPAMAS can be approached from two different points of view. The users
face the application as a black box and care only about its performance and whether it is easy to
use. So, from the end-users' perspective, i.e. that of the plant engineers, the IPPAMAS should be
an innovative application that contributes to the performance optimization of the power plant and
fulfils the following four criteria: 1) precision, 2) acceptable response time, 3) reliability, and
4) user-friendliness.
The developers, on the other hand, are more interested in the MAS itself, in features such as
robustness and soundness, and in questions like "what is the benefit compared to other software
engineering technologies?".

9.5.1 MAS performance


The performance of IPPAMAS was tested through the simulation of several cases. Twenty
experiments were carried out in the laboratory without human participation, each one lasting 24
hours, in order to assess several properties of the MAS, such as autonomy, adaptability, robustness,
proactiveness and intelligence. Indeed, the IPPAMAS demonstrated the properties that generally
characterize a MAS, as defined by various researchers (Wooldridge and Jennings, 1995; awg,
2000; Padgham and Winikoff, 2004).
The MAS deployed for the evaluation comprised the following agents:
25 Sensor Agents
3 Subsystem Condition Agents (type Condition Agent)
3 Subsystem Recommender Agents (type Recommender Agent)
25 Sensor DM Agents and 10 Action DM Agents (type Data-Mining Agent)
1 Recommendation Distribution Agent and 1 Alarm Distribution Agent (type Distribution Agent)
6 User-Interaction Agents


1 Monitoring Agent
Auxiliary agents: Surveillant Agent, Synchronization Agent, DBApplication Agent.
The data used for the simulation were different from those used for deriving the models. A
number of real measurements were replaced with erroneous values. Up to 10 erroneous
measurements were introduced simultaneously. The application located 100% of them. In all
cases the estimated values diverged negligibly from the real measurements.
The data were selected so as to contain several cases of the three types of alarms, i.e. low
priority: not particularly dangerous; intermediate priority: immediate handling required; and high
priority: imminent breakdown. Furthermore, the simulation included various cases in which
indications should be provided to users. In addition, in some cases more than one user was
available while in others none, so as to check the adaptability of the Distribution Agents. Besides,
the users were placed in different contexts, in order to test the reaction of the User-Interaction
Agents.
The main aspects tested through the simulation were the following:
Robustness of the IPPAMAS. Up to 78 agents ran simultaneously without causing bottlenecks or
other kinds of problems. Some of them were killed on purpose, so as to check the ability of the
MAS to overcome such problems. Indeed, alternative predefined scenarios were automatically
executed and the best possible outcome was achieved.
Response time. In all cases the IPPAMAS responded successfully within the predefined time
period of 1 minute.
Accuracy. The estimated values were within the acceptable fault tolerance predefined by the
experts. The alarms were fired correctly.
Reliability. The MAS proved that it can handle many simultaneous alarm signals, run continually
and recover from breakdowns, and that it produces the same output for the same input.
Adaptability. When an agent or a user was unreachable within the acceptable time limits for
successive execution cycles, the other agents adapted their behavior accordingly.

9.5.2 User evaluation


User acceptance is an important issue when introducing pervasive information services into an
industrial environment, given that the majority of the personnel are not acquainted with on-line
decision support systems or handheld devices, let alone the emerging context-aware computing
paradigm.
In order to evaluate the IPPAMAS in terms of user acceptance, we performed four simulation
experiments inside the TPP Meliti (each one lasting approximately 3-4 hours). Real measurements
taken from this plant were used and PPC employees participated, in order to reproduce real
conditions as accurately as possible. The researchers configured the scenarios and made
observations related to the user-application interaction. Upon completion of each simulated
scenario, the users were interviewed to gain feedback about the application's usability and its
potential impact, as well as to gather ideas about possible future improvements and enhancements.
The simulation studies revealed several issues with respect to the service prototype. As far as
the user interface is concerned, it was generally perceived as user friendly, and the information
displayed was easily understood. The fact that the design of the interfaces adopted the standard
coding used in TPPs, such as colours designating alarm classification, helped in that direction.
The interfaces were considered comprehensible and easy to interact with, even those displayed on
PDAs and used by people without prior experience of such devices. In addition, the plant experts
thought that pop-ups and various audio sounds emphasize appropriately the arrival of new
information. Nevertheless, the users stated that certain features needed to be emphasized more
strongly. With respect to the content, the indications for actions were found appropriate in the
majority of cases (70%). The remainder were considered unnecessary or overdue by the plant
personnel, leading to the conclusion that either more data are needed for training the system or the
specifications (predefined rules, etc.) are insufficient. However, it was pointed out that the
percentage of unsuccessful indications might be reduced once more subsystems are monitored by
the application, as it will then have an overall picture of the operation (most of the subsystems are
interconnected).
As for the context-awareness, the system responded appropriately in all cases in which the user's
situation could be identified from the input data. Indeed, the employees said that, in a way, the
UIA acts like a secretary or an intelligent answering machine. More specifically, the employees
thought that the UIA taking the initiative to return a message, or to postpone its display when its
receiver is engaged in more important tasks, increases the effectiveness and attractiveness of the
application. In addition, forwarding imperative messages that are not handled in time increases
safety.
However, the users noted that in real circumstances a user's context might be quite different from
what the operation data suggest, e.g. during an accident or a private conversation. Particular
attention is required so that in these cases the persistent notification does not become annoying,
and thus resented.

9.6 Concluding remarks and future enhancements


The drive for competitive electricity prices and environmentally friendly power generation is
leading to significant control challenges. In this direction, tools have been developed to support
the power plant personnel in achieving optimal operation. Practice has proved that desktop
applications can cover only part of this demand. Indeed, what is missing is a ubiquitous system
that can provide services wherever they are required around the plant premises.
A multi-agent system and data mining techniques are combined in order to reproduce effectively
the complex power plant operation, which is difficult to model otherwise. This combination
increases the adaptability and extensibility of the system; the extraction of new data-mining
models based on new operation data is sufficient for capturing changes that concern the TPP
operation (caused by wear, maintenance, replacement or the addition of new mechanical
equipment).
Furthermore, the MAS forms an ideal basis for distributing information to users, regardless of
their location, situation or device. This attribute turned out to be of significant importance for the
users' welcome of the application. In fact, plant employees who participated in the simulation
experiments explicitly stressed that the delivery of information at the place where they actually
needed it (via handheld devices) was one of the most attractive features. They also expressed their
overall satisfaction with the interfaces and the performance of the IPPAMAS, although remarks
and suggestions were also made. Naturally, there is room for further improvement.
The presented prototype supports the power plant personnel, with emphasis on:
(1) Standardization and improvement of information flow
(2) Timely and efficient confrontation of alarms and other problems
(3) Optimum regulation of the various operating parameters
The benefits of the above are self-evident with regard to the safety, availability and efficiency of
the TPP. Furthermore, the plant statistics indicate that significant economic and environmental
gains may be achieved, such as a reduction of NOx emissions by up to 15% and a 10% decrease in
the steam used for the sootblowing procedure (i.e. energy savings).
The short-run plans include the design and configuration of more simulation experiments, which
may reveal more points that call for attention and at the same time give rise to innovative ideas.
Then, the feedback provided by the users and the observations made by the developers will be
taken into consideration in order to develop an advanced version of the application. Emphasis will
be placed on the user interfaces and on context-awareness. Also, the IPPAMAS will be extended
so as to monitor more subsystems of the TPP Meliti.
Finally, as future work is concerned, an interesting topic for research would be the application of
data mining techniques to the application data of the IPPAMAS, in order to further adapt the profiles
and to improve the performance of the agents.

Acknowledgment
This work was partially funded by the 03ED735 research project, implemented within the
Reinforcement Programme of Human Research Manpower framework and co-financed by
National and Community Funds (25% from the Greek Ministry of Development-General
Secretariat of Research and Technology and 75% from the E.U.-European Social Fund).
Special thanks to the engineers and operators of the Public Power Corporation S.A. thermal
power plants of Western Macedonia, Greece, for providing information and data.

Chapter 10

IUMELA: Intelligent Ubiquitous Modular Education Learning Assistant in Third Level Education
Elaine McGovern, Bernard Roche, Rem Collier, Eleni Mangina
School of Computer Science and Informatics, University College Dublin, Dublin 4, Ireland
{elaine.a.mcgovern, bernard.roche, rem.collier, eleni.mangina}@ucd.ie

Abstract
Education at University College Dublin has transitioned from a traditional educational approach
to a modularised education framework, the first of its kind in Ireland. IUMELA is an intelligent
modular-education learning assistant, designed using multi-agent systems (MAS), that assists
students in the module selection decision-making required as part of their degree programme.
Ubiquitously available to third level students via their mobile devices, it answers the call for an
application that can assist students who are unfamiliar with the concepts of modularisation,
ensuring success through specifically tailored module combinations. The communicative
overheads associated with a fully connected multi-agent system have resulted in the search for
increasingly lightweight alternatives, particularly when the MAS resides within a ubiquitous
mobile environment. This paper considers an alternative IUMELA MAS architecture that uses a
significantly more lightweight mobile assistant.

10.1 Introduction
IUMELA is an acronym for Intelligent Ubiquitous Modular Education Learning Assistant. It uses
multi-agent systems (MAS) technologies to create an intelligent learning assistant that is capable
of supporting students in their choice of modules based on their learning preferences, academic
abilities and personal preferences. The ubiquitously available learning assistant applies
expert-systems analysis functionality for the storage, retrieval and analysis of student models that
assist in effective module recommendation. It predicts potential outcomes through the
investigation of the student's learning styles and comparative analysis of similar past students'
achievements. Its conclusions and recommendations are subsequently displayed in a
knowledgeable, yet meaningful manner using Java technologies.
IUMELA has been designed to run using integrated smartphone technologies on the XDA Mini
S. Smartphone technologies are increasingly becoming the de facto standard. Currently, they can
be used as graphical calculators, word processors, databases, test preparation tools and a means for

referencing resources. An initial comparative study has indicated that many smartphone technologies can rival the capabilities of personal digital assistants that are currently available on the market.
Todays third level academic students are frequently seeking increasingly lightweight mobile devices. Unfortunately, PDAs and smartphone technologies are still unable to compete with desktop
and laptop computers based on screen real estate, computational and storage power. The research
into the development of an intelligent assistant that can quickly respond to student issues led to a
lightweight multi-agent system based architecture using comprehensive student modelling facilities
and the inclusion of the ABITS FIPA compliant messaging service (mcgov2006). This fast and efficient architecture would facilitate students by reducing the time taken to interact with and receive
recommendations from the IUMELA assistant.
To fully appreciate the complexities behind the development of an adaptive multi-agent based
architecture that is capable of intelligent human-computer interaction, it is necessary to consider
the evolution of managed learning environments (MLE) and associated intelligent tutoring systems
(ITS). In section 2 we consider the direction of current research in the areas of intelligent
multi-agent systems in intelligent learning applications, mobile learning and the implementation of
these applications on smartphone technologies, and finally we delve into the study of cognitive
strategies applied to the development of the modularised education learning assistant. In section 3,
the structure of the IUMELA MAS and its associated student model are presented. Section 4
considers the IUMELA interface as a suitable medium for intelligent HCI. Section 5 enters into a
debate regarding whether or not the adoption of an ultra-lightweight client side within a distributed
multi-agent system results in an enhanced communicative capacity. Finally, the conclusions of this
research are drawn in section 6.

10.2 Related work


10.2.1 Multi-agent systems based learning technologies
Multi-agent systems (MAS) and their associated intentional agents have been used to further
distributed intelligent educational systems, promising to influence a multitude of application areas.
Designing an intelligent multi-agent system application can be challenging, but the results of the
endeavour can be infinitely more rewarding. Such systems can aid in the implementation of new
training paradigms and can serve as assistive entities for both teachers and students in their
computer-aided learning and teaching processes (mcgov2006).
To understand why multi-agent systems are suitable for the design and implementation of assistive technologies, we must consider the basic structure of a MAS. An agent is a software system
capable of independent action on behalf of its user. It can autonomously determine what needs to be
done in order to satisfy its design objectives, rather than having to be instructed explicitly.
A multi-agent system is one that consists of a number of agents which interact with one another,
typically by exchanging messages through a network infrastructure or through their effectorial
capabilities. In order to interact successfully, agents must be capable of cooperating, coordinating
and negotiating with each other.
MAS have the ability to cope with modern, information-rich, distributed and online information
processing, storage and browsing facilities. Past studies have shown that the introduction of MAS
into information handling facilities can dramatically improve on the capabilities of their
object-oriented counterparts (mcgov2006). A comprehensive list of agent capabilities has ensured
consensus amongst researchers. Wooldridge and Jennings (wool2006) refer to the following as
fundamental characteristics of intelligent agents: reactivity, proactivity and social ability.
Beyond this, intelligent agents are inherently autonomous. They have become enabling
technologies, demonstrating an ability to cope in situations where interdependencies, dynamic
environments and sophisticated control are fundamental to the success of the system (roch1995).
As in the case of IUMELA, they can provide multiple robust representational theories that enable
the student to benefit from a rich and varied learning environment.
Today's techno-savvy third level student is sufficiently capable of navigating most mobile and
desktop devices. Their online browsing and searching strategies are, however, still somewhat
amiss. In fact, with the vast array of course information available, students have difficulty
separating the useful from the redundant. Students have also demonstrated difficulty forming
beneficial and appropriate queries in order to extract suitable module combinations for
consideration.
Multi-agent systems technologies are ideally suited to assisting students in navigating the melee
of learning material required in today's training and educational environments. E-learning systems
are tools that exist because of external teaching and learning requirements. These applications
have been designed to provide a service with minimal difficulty. E-learning tools are intuitively
designed, and the only learning required is that necessary to accomplish the academic task.
In general, the agents of multi-agent systems are constructed specifically to undertake tedious or
mundane tasks. Such systems undertake the task of browsing (lieb1995), sorting email and
filtering newsgroup messages (cyph1991, gold2005, lang1995, lash1994), or finding users who
share similar interests (bala1998, fone1997, kuok1997). They are designed as tools to relieve an
otherwise active user of the burden of repetitive duties.
Sklar and Richards (skla2006) categorise human learning into two congruent fields - training and
education - both of which make use of Managed Learning Environments (MLE) and Intelligent
Tutoring Systems (ITS). They divide the MLE into five main components: domain knowledge,
which encompasses all subjects available through the MLE; the teaching component, an
instructional model detailing how a module should be taught; the student model, which represents
the student's understanding of the subject matter; a user interface, which provides an aesthetically
pleasing mechanism for HCI; and system adaptivity, which ensures that the system can adapt to
changes in students' behaviour and knowledge levels.
Sklar and Richards (skla2006) draw a comparison between a typical interactive multi-agent
system and their e-learning system. The latter is distinctive, however, in that instead of a user
model there is a student model. The pedagogical agent replaces the interface agent type in the
e-learning system architecture. The most compelling addition is the inclusion of a teaching
component to direct student learning.
It should be noted here that agents draw upon speech act theories in order to communicate with
each other. Speech act theory treats communication as action. Austin (aust1962) theorised that a
particular class of natural language utterances has the characteristics of actions. He referred to the
way that these speech acts change the state of the world in a similar way to physical actions.
In the early 1990s, the US-based DARPA-funded Knowledge Sharing Effort was formed, with
the intention of developing protocols for the exchange of represented knowledge between
autonomous information systems (fini1993). In 1995, the Foundation for Intelligent Physical
Agents (FIPA) began its work on developing standards for agent systems (fipa1997). The
centrepiece of this initiative was the development of an Agent Communication Language (ACL).
It is a structured language that is not dissimilar to KQML.
In general, agents can neither force other agents to perform some action, nor write data onto the
internal state of other agents. This does not mean that they cannot communicate, however. They
can perform actions - communicative actions - in order to influence other agents according to
their needs. They do, however, have control over their own beliefs (desires, intentions).
So, agents communicate with each other using specified Agent Communication Languages
(ACL), which allow them to affect their common environment. Humans, on the other hand,
communicate via natural language utterances and, as such, an undergraduate student should never
be required to communicate with the MAS through an agent-based structured language. The MAS
in a HCI system must, therefore, communicate with the user on the user's terms (fini1993).
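To make this concrete, the sketch below shows how a FIPA-ACL "inform" message is assembled
in code, here using the JADE API purely for illustration (IUMELA itself is built with Agent
Factory, and the agent names, ontology label and content are hypothetical):

```java
import jade.core.AID;
import jade.lang.acl.ACLMessage;

/** Illustrative only: assembling a FIPA-ACL "inform" message with the JADE API. */
public class AclExample {
    static ACLMessage buildRecommendation() {
        ACLMessage msg = new ACLMessage(ACLMessage.INFORM);
        msg.setSender(new AID("assistant", AID.ISLOCALNAME));   // hypothetical agent names
        msg.addReceiver(new AID("moderator", AID.ISLOCALNAME));
        msg.setLanguage("fipa-sl");                        // FIPA Semantic Language
        msg.setOntology("iumela-ontology");                // ontology named in the chapter
        msg.setContent("(recommended-module COMP30040)"); // hypothetical content
        return msg; // an agent would pass this to send(...)
    }
}
```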


Sklar and Richards (skla2006) depict a system that assists students when help is required. They
highlight three mechanisms through which this can be done: directly, upon request from the
student; indirectly, through monitoring student interactions; and finally via mixed initiative, which
relies on a combination of the former two.
A key difference between agent systems for human learning and other agent systems is that there
must be a constantly open channel that facilitates communication between the multi-agent system
and the student. A student will be unable to communicate with the MAS using a FIPA-compliant
ACL. As such, a communication channel needs to exist that bridges the human-agent gap.
Oftentimes this takes the form of a personal or interface agent. This agent must be capable of
coping with differences in settings and interfaces affecting the learner, as well as knowing and
understanding the learner in order to further assist them in their educational needs.
Cooperation between organisations to facilitate e-learning is possible with the use of
internationally accepted standards, which aim to define common learning technology
specifications. These include:
(i) the Learning Technology Standards Committee (LTSC) of the Institute of Electrical and
Electronics Engineers (mcgov2007);
(ii) the Alliance of Remote Instructional Authoring and Distribution Networks for Europe,
financed by the EU Commission (aria2008);
(iii) the Instructional Management Systems Project in the United States of America (ieee2008).
These organisations developed technical standards to support the broad deployment of learning
technologies.
The use of MAS to support collaboration in an online context was proposed in (gold1992), and a
collaborative on-line learning system for use in open and distance learning was developed in
(john2004). Mengelle et al. (blik2005) employed MAS strategies to create an intelligent tutoring
system that encapsulates multiple learning strategies. Their research presented core agent
structures that could cope with the implementation of multiple learning strategies by simulating
different roles depending on the current requirements.
The IUMELA multi-agent system can ubiquitously facilitate mediation between multiple student
types and potential third level academic modules through the implementation of student modelling
strategies. It provides the student with a recommendation facility based on the parameters returned
from the student model and the description of the educational content. The modelling services are
provided through specialised expert agents that are dedicated to a number of pedagogical processes.
Every student is identified through a unique user identification number and password combination.
Their current context is updated via the assistant agent upon each login.
Frasson et al. (fras2005) endeavoured to define an agent-based tutoring system composed of the
qualities necessary for an adaptive Intelligent Tutoring System (ITS) in order to enhance learning.
Often, such a system replicates the behaviour of an intelligent human tutor, adapting the subject
matter to best suit the student. More recently, other tutoring strategies were introduced. Enlivened
through the conceptualization of the co-learner and disrupter strategies, their ITS is now capable of
instructing the learner through a variety of mechanisms.
Chan (chan1990) developed a learning companion which simulates the behaviour of a second
learner (the companion) who learns together with the human learner. Various alternatives to this
co-operative approach were conceived (palt1991, vanl1994), including an inverted model of ITS
called learning by teaching, in which the learner teaches the learning companion by giving
structured explanations. Adaptive learning requires a flexible ITS that can manipulate multiple
strategies.
Frasson et al. present an ITS architecture with the following agent types: a curriculum module, a
learner module, and a pedagogical module. Multiple agent types exist in order to encompass the
variety of roles. These adaptive agents are capable of adjusting their perception of situations
(fras2005) and modifying their decisions by choosing new reasoning methods. They adapt their
strategies based on dynamic information flow and varying resource availability. The agents
perceive changes in their common environment and amend their actions accordingly. Their
reasoning tasks are based on dynamically changing local and global objectives. These cognitive
agent types lend themselves to the task of intelligent tutoring. They can learn, ascertain new facts
and enhance their knowledge base.
The actor paradigm, presented by Frasson et al., embodies the reactive, adaptive, instructable and
cognitive requirements of today's ITS. It is capable of reacting to changes in its environment. Its
perception and control modules can adapt based on newly acquired learning strategies and new
learner types. The system is able to learn by experience, can acquire new strategies and, similarly,
can generate new control tasks. This ITS architecture is composed of four modules (perception,
action, control and cognition) and is distributed over three layers (reactive, control and cognitive)
(fras2005).

10.2.2 The mobile device


A wireless device is any form of networked hardware that can communicate with other devices
without being physically attached to them (mcgov2006). IUMELA has been designed to exploit
the smartphone technologies available on the XDA Mini S, whose limitations make it difficult to
display large quantities of text and graphical output.
A smartphone can be considered to be any handheld device that has integrated personal
information management facilities as well as mobile phone capabilities (bala1998). According to
recent findings, while over 83 per cent of third level students have mobile phones, only 23 per cent
have access to another form of mobile computing device. Of that 23 per cent of students, less than
one fifth would regularly bring this device with them to university. Over 90 per cent of students
regularly bring their mobile phones with them to university (ayoo2008). This supports preliminary
IUMELA findings suggesting that providing third level students with ubiquitous access to
modularised education via an intelligent learning assistant ensures that the resources are provided
in an on-demand, anytime, anywhere manner.
IUMELA was installed on the XDA Mini S smartphone. This device provides a substantial array
of smartphone technologies, combined with traditional mobile phone functionality, in a lightweight
and portable package (bala1998). Microsoft Windows Mobile 5.0 drives the XDA Mini (blik2005).
This operating system provides a familiar platform to students, ensuring ease of use. IUMELA
borrows from the research of Frasson et al. in that the MAS can learn from experience and adapt
its theories accordingly.

10.2.2.1 The mobile device in education


IUMELA has been designed to run, using smartphone technologies, on the XDA Mini S.
Limitations of wireless devices that do not currently affect desktop computers to the same degree
include screen size, resolution, colour capabilities and reduced battery life (mcgov2006). These
limitations make it difficult to display large quantities of text and graphical output.
One trade-off in improving a wireless device's display is that the weight of the device will also
increase. Colour screens with high resolution require more power than the older monochrome
equivalents. This results in an increase in battery weight, or less usage time available before the
battery needs to be recharged (mcgov2007). A primary advantage of handheld devices is that they
are lightweight and portable and can be hot-synced to a desktop device. One also has the ability to
download free and inexpensive software to use on them (doej2006). They are inexpensive
compared to laptop computers.
A smartphone is considered to be any handheld device that has integrated personal information
management facilities as well as mobile phone capabilities (bala1998). This can occur by adding
traditional phone functions to an already capable PDA, or by enhancing a mobile phone with smart
capabilities (mcgov2006). According to recent findings, more than 80 per cent of students in
Ireland own a laptop and a mobile phone (ayoo2008). Mobile phone penetration levels have
surpassed fixed-line phone levels for the first time, with mobile penetration in Ireland at 88 per
cent and more than 3.5m mobile subscribers. The number of people with more than one mobile
phone continued to increase last year: penetration stood at 114 per cent in the second quarter of
2007, up 11 per cent year on year and amongst the highest rates in the EU (ayoo2008).
In 2004, Internet penetration levels increased to 46 per cent. This indicates that students possess
the means by which they can access academic information via multiple forms of mobile device.
Furthermore, another study indicates that 220,000 five to nine year old children in Ireland own
mobile phones, and a further 270,000 ten to fourteen year old students own a mobile phone
(ayoo2008). This is indicative of the impending techno-savvy generation of third level students.
A Higher Education Student Laptop Programme was introduced throughout third level academic
institutions in Ireland, enabling thousands of third level students to gain access to laptops and
associated software. UCD is the first Irish third level institution to adopt the Student Laptop
Programme. Currently, 20 per cent of all students on the Belfield campus own their own laptop.
With the introduction of the Higher Education Student Laptop Programme, however, it is expected
that 75 per cent of all students at the university will own their own laptop over the course of the
next three years. There is 100 per cent wireless internet access support on the Belfield campus.
This enables students to use their mobile devices to access information ubiquitously (nola2006).

10.2.2.2 The wireless device as a learning aid in education


Laptop computers are increasingly being used as a central learning tool within third level
educational institutions in Ireland. Today's students use laptops to access their course materials,
search library catalogues, give presentations and browse the internet. Combined with the
unsurpassed growth in laptop ownership among students, Irish universities are rapidly moving
towards digital campuses where free wireless internet access is ubiquitously available.
One example of this is the introduction of the National Digital Learning Repository, a system that
enables the storage, discovery and retrieval of learning materials and their descriptions through
either local or distributed sources (ndlr2008). It is hoped that an agreement brokered between
Intel, Dell, HP, Microsoft, AIB and Vodafone will foster further use of mobile devices by third
level students at university. The Student Laptop Programme offers students an affordable means
by which they can access academic resources.
Significant research in the area of mobile computing devices suggests that PDAs facilitate group
work and the immediate analysis of data, particularly during laboratory exercises or when
conducting scientific investigations in the field rather than in the classroom. They enhance the
performance of third level students when used as a mobile learning tool, as they can augment
collaboration and the sharing of information and software. Studies have demonstrated that this
sharing of, and commenting on, others' work often leads to a superior end product (mcgov2007).
Students can use many classes of mobile device as a graphing calculator, word processor,
database, test preparation tool, and reference resource. Preliminary studies, such as the Multimedia
Portables for Teachers Pilot, have reported high levels of motivation and self-reliance among
teachers, who consider PDAs to be flexible and adaptable in providing a context for teacher
professionalism (mcgov2007). The devices gave students opportunities to connect questions and
investigations to data in a real-time setting, which enhances systematic investigation, critical
thinking and cooperation. Additional research suggests that PDAs facilitate group work and the
immediate analysis of data, particularly during laboratory exercises or when conducting scientific
investigations in the field
rather than in the classroom (mcgov2006). Collaboration and the sharing of information and
software are likewise enhanced by PDAs. PDAs allow users to communicate with email servers,
administrative applications (ayoo2008), and databases, such as those containing grades and other
student information.
PDAs also allow educators to access the Internet via modem, infrared or serial port connections, or
via wireless access (ayoo2008).

10.2.2.3 The XDA Mini S


IUMELA has been designed for use on the XDA Mini S smartphone but could conceivably be
used on any mobile device, since CSS was used to format the web interface. This device provides
a substantial array of smartphone technologies, combined with traditional mobile phone
functionality, in a lightweight and portable package. Previous studies have highlighted that the
XDA class of mobile computing devices provides the flexibility, connectivity, pro-activity,
cost-efficiency and multimedia capabilities that its users have described as essential to the
successful completion of their computing activities (mcgov2006).
Microsoft Windows Mobile 5.0, a scaled-down version of Microsoft Windows specifically
designed for PDAs and smartphones, drives the XDA Mini (mcgov2007). The operating system
provides a familiar platform to students, ensuring ease of use and simple integration with the
Managed Learning Environment (MLE) installed at the university as well as with the student's
home device.
Designed for use in conjunction with Microsoft Exchange Server 2003, it enables students to send
and receive email, make calls and download files and Java-based applications in real time. The TI
OMAP 850 200 MHz processor provides fast mobile processing, ensuring smooth and constant
access to the student's academic resources. The XDA Mini features a 64k-colour touch screen, a
240 x 320 backlit LCD display, Bluetooth, infrared, wireless LAN, mini-USB and mini-SD card
slots, a slide-out QWERTY keyboard, and traditional text, picture, video and instant messaging
(mcgov2006).

10.2.3 Modular education at UCD


UCD Horizons is the flagship of full-time modularised third level education in Ireland. Modular
education at UCD provides a structured, modular, credit-based taught degree programme. The
subjects within a modularised degree are sub-divided into discrete learning modules. These are
combined to make a degree programme, which gives more flexibility in developing new and
interesting subject combinations for the student. As all modules have a credit value, obtaining a
degree is based on the principle of credit accumulation (nola2006).
UCD Horizons has been designed to be more flexible than its traditional counterpart and enables
students to individualise their academic careers. It is student-centric, allowing students greater
choice in degree content. They are required to undertake some core modules and have the
opportunity to elect optional and free-choice modules as well. This, in theory, enables them to
adapt their degree programme to their own study preferences and strengths (nola2006). A primary
motivation behind the development of IUMELA was that, although there is enhanced freedom of
choice in a modularised education, students entering third level education are often poorly
equipped to deal with such freedom. They subsequently make misinformed module choices,
frequently resorting to poor decision-making metrics or enlisting their family and peers to assist in
this fundamental career-shaping process. Thus, by providing the student with a ubiquitously
available intelligent assistant, many of these problems can be overcome.


10.2.4 Learning styles


Psychologists agree that intelligence is an ability. Significant resources have gone into developing
an understanding of how students use these abilities for the purpose of education, otherwise
known as learning styles theory. Learning styles are considered to be preferences for dealing with
intellectual tasks (snow2006). Psychologists agree that a learning style can be considered a
consistent preference over time for perceiving, thinking about and organising information in a
particular way. It is possible to adopt different learning styles as the need arises. Kagan found that
some students seem characteristically impulsive, while others are reflective. Witkin theorised that
individuals can be influenced by their surrounding context and that there are two groups of
learners: field dependent and field independent. Sternberg's styles of mental self-government
theory describes thirteen styles that fall into one of five categories: functions, forms, levels, scope
and leanings. This theory suggests that, by noting the types of instruction that various students
prefer and the test types on which they perform best, students could receive the education most
appropriate to their learning style. This concept supports the belief that IUMELA would assist
students by suggesting appropriate modules based on their preferred learning styles. The student
agent in the IUMELA MAS makes use of learning styles theory in the collection, storage,
presentation and dissemination of pertinent information through intensive student modelling
(snow2006).

10.2.5 Teaching strategies


Educators often use various instructional methodologies to engage any number of learning styles
at one time or another. They are required to use various test formats to measure accurately what
different students have learned. IUMELA measures the classes in which students consistently
participate well through the inclusion of an expert agent. IUMELA's expert agent characterises
each teacher's style based on one of several well-documented behavioural approaches:
constructivist, humanistic and social (snow2006). It was found that teachers commonly use
top-down teaching strategies, such as taxonomies, to stimulate three diverse learning domains:
cognitive, affective and psychomotor. Behavioural teaching strategies lend themselves well to
computer-mediated instruction and assessment, and vicariously to the IUMELA application.
Alternatively, IUMELA can adopt a cognitive approach to teaching by enabling the analysis agent
to minimise the cognitive demands of the module research task and to assist learners in
conceptually formulating patterns of information, subsequently presenting a module overview and
potential recommendation (snow2006). As a learner-centric intelligent interface, the assistant
agent can ubiquitously link concepts to everyday experiences, guide students in their
problem-solving processes and encourage learners to think analytically when reasoning in a
humanistic manner.

10.2.6 Evaluation techniques


Historically, assessment involves measuring how much knowledge and skill a student has, and its
acceptability with respect to the teacher's eventual goals. The summative and formative techniques
are two popular methods of evaluation. Teachers use a variety of means to evaluate, either
summatively or formatively, a student's knowledge or skill level. Both methodologies are
frequently used by the lecturers at UCD, so the expert agent is required to be capable of
incorporating them into its reasoning abilities and knowledge base.


10.2.7 Presenting modules for selection


Choosing the course to undertake at third level is possibly the most life-altering decision an
undergraduate student will have to make. Yet the process of course selection has attracted little
attention. Discussing potential programmes with a qualified counsellor can be expensive and time
consuming. Furthermore, vulnerable students with genuine difficulties do not seek out guidance,
and when they do, the dichotomy between the recruitment and retention functions of course
descriptions means that modules will be presented in such a manner as to encourage enrolment
irrespective of suitability (simp2004).
This decision is an important factor in student retention or dropout rates, according to McGivney
(mcgi1996). A student making an inappropriate choice - either the wrong level or the wrong
course content - is at greater risk of not completing a course than a student who has selected a
module for which they are suited in both level and content.
There is considerable supporting evidence with respect to full time students, where course choice
has been found to be a very important cause of dropout, for example by Yorke (york1999). More
recently, Gibson and Walters (gibs2002) identified inappropriate course choice as one of the four
main reasons for non-completion of courses. Initially, students rely on the course title and
description in choosing programmes to undertake. But there are a number of issues around such
titles and descriptions. If descriptions are short, they may be incomplete or lack
comprehensiveness, especially in the case of long courses covering considerable ground.
Conversely, if the description is long, the potential student may be unable to decipher the
fundamentals of the course.
Oftentimes, the use of vague terms can lead to a variety of interpretations by intending students.
Even where the descriptions highlight the outcomes of a course, these outcomes may not be clear
to students, who will not necessarily understand an outcome stated in terms they have yet to learn.
It may be of little help knowing that a course will give a thorough understanding of vector algebra
if students have only a very hazy idea of what vector algebra actually is.
One route to ensuring suitable course selection is to offer advice from a course adviser. This
advice can be expensive, however, especially when the facility is offered through mass distance
education. Another problem is that potential students may not know the right questions to ask in
order to determine a course's suitability. Course advisers may have been trained in the general
area of course selection, but not in the particulars of course content; again, the example of vector
algebra comes to mind.
Students in distance education are often reluctant to seek advice. Furthermore, evidence shows that when advice is given, it may not always be followed. Once having made a choice, students who are clearly committed to that choice will have difficulty disregarding it, regardless of its suitability. Finally, access to guidance may be difficult for some students.
In past research, students have indicated that reviews of courses by similar past students are a valuable mechanism for determining a particular course's suitability. On the Open University United Kingdom (OU UK) website, a selection of students who had recently taken courses were invited to write reviews advising potential students about the course details. Administrators were initially concerned that comments might be overly critical, negative or simply unfair. This, however, did not appear to be the case. Furthermore, studies have shown that allowing real-time online discussion can further benefit the potential student's decision-making process (simp2004).
The earliest available example of the use of course preview materials is at the National Extension College, which has enabled students to preview courses during the selection process as far back as the 1980s. The original rationale was that students considering a particular course should have the opportunity to peruse the materials to be covered. Short samples of actual course material are often used, as they are representative of course content and are reasonably inexpensive to reproduce. This enables the potential student to gauge the course's suitability. The feedback from similar preview materials at the OU UK reveals that potential students find the packs reassuring rather than off-putting. Their main limitation lies in their inability to tell students whether they have the right background knowledge for a course.

Fig. 10.1 The Agent Architecture of IUMELA

Diagnostic materials have been designed as course advisory tools. They can be divided into two broad categories: generic diagnostic materials that test applicants' suitability for higher education, and course-specific materials that highlight suitability for a particular course. Drawing on social science research, IUMELA attempts to overcome the above issues through the fair representation of potential modules using a combination of the above methodologies.

10.3 IUMELA: the agent architecture


The IUMELA application conforms to FIPA specifications (fipa1997). The multi-agent system (MAS) was developed using the Agent Factory toolkit (coll1995), with Java as the programming language. In particular, the assistant agent runs on an XDA Mini S. The high-level communication protocols have been implemented using ACL messages, whose content refers to the IUMELA ontology. The GAIA methodology was used to identify the agent structures, roles and interactions within the IUMELA MAS (mcgov2007), and these can be seen in figure 10.1.
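To make the message format concrete, the following is a minimal sketch of an inter-agent request rendered in the standard FIPA-ACL string encoding. The performative and parameter names follow the FIPA specification; the agent names, the ontology label and the content expression are illustrative placeholders, since the published IUMELA ontology terms are not reproduced in this chapter.

```java
// Minimal sketch: a FIPA-ACL request in the standard string encoding.
// Agent names, ontology label and content are illustrative placeholders.
public class AclRequestExample {
    static String buildModuleRequest(String sender, String receiver,
                                     String studentId) {
        return "(request\n"
             + "  :sender (agent-identifier :name " + sender + ")\n"
             + "  :receiver (set (agent-identifier :name " + receiver + "))\n"
             + "  :ontology iumela\n"
             + "  :language fipa-sl\n"
             + "  :content \"((action (agent-identifier :name " + receiver
             + ") (recommend-modules :student " + studentId + ")))\")";
    }

    public static void main(String[] args) {
        // The handset's assistant agent asks the server-side moderator for
        // module recommendations on behalf of one student.
        System.out.println(buildModuleRequest(
            "assistant@xda-mini-s", "moderator@iumela-server", "s1234"));
    }
}
```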
IUMELA uses a FIPA-compliant MAS architecture, displayed in figure 10.1, to fulfil the task of an intelligent application capable of autonomous human-computer interaction for communication, event monitoring and the performance of higher-order cognitive tasks. The IUMELA multi-agent system consists of a community of five agent types: assistant, moderator, learning agent, expert agent and analysis agent. They co-operate in order to analyse the student's learning patterns and make an accurate module recommendation in an on-demand manner, at a time and location appropriate to the student.


10.3.1 The assistant agent


The assistant agent resides on the student's client device. It is responsible for the seamless interaction between IUMELA's multi-agent system (MAS) and the student. As an interface technology, it is the assistant's task to be aware of the current student and of the device through which the student is currently accessing the application. The application makes use of the XDA Mini S in a mobile context and is implemented using Java-based servlet technology.
The effectiveness of interface agent technologies as a means of facilitating human-computer interaction ensures that they remain a prevalent research area within the fields of artificial intelligence (AI), human-computer interaction (HCI) and user modelling. The interface agent is considered a mechanism by which mundane and tedious tasks can be automated or delegated. Each of these research domains reflects upon a unique facet of the agent's capabilities, rating its effectiveness in terms of its own requirements. Nevertheless, the task of the assistant agent is unequivocal: to provide a mechanism by which the user can interact with the resources offered by the multi-agent system.
User modelling enables the representation of student information and e-learning system interactions in order to adapt the application to the student's current needs. User modelling techniques have been exploited throughout academic and commercial research as a means of constructing, maintaining and updating user information. Research has shown that the application of these techniques can improve the effectiveness and usability of software applications.
Deploying IUMELA on a mobile device means that the manner and location of the student-agent interaction will differ significantly from that which occurs via the traditional desktop environment. Enabling the student to interact with IUMELA in a ubiquitous manner allows their learning experience to be transformed into a larger context, incorporating it into every aspect of their third level academic career. While advantageous to the student, this presented significant design challenges.
The desktop metaphor affords the multi-agent system a malleable environment through which learning can be facilitated. The IUMELA multi-agent system, by contrast, is required to operate within a rich and varied ubiquitous environment, ensuring that a once traditional course management system can be used in the myriad of situations that can be encountered in everyday studies.
It is the task of the assistant agent to interact with the student and with the other agents within the multi-agent system in order to provide appropriate assistance based on context. Adaptive personalisation was chosen as the mechanism that would best assist students, based on the belief that a simple, uniform interface is not the best way to provide assistance to heterogeneous user groups.

10.3.2 The moderator agent


The mediator agent family is composed of three basic agent patterns: the broker, the matchmaker and the mediator. These act as intermediaries between any number of other agent types. Like the broker and mediator agents, the moderator arbitrates interactions between the other agent types. In addition, it maintains an acquaintance model based on past interactions. Drawing on the abilities of the mediator, the moderator can interpret the requests it receives and, based on a combined analysis of the stored acquaintance model and the current context, act accordingly. The moderator agent acts as a liaison between the other agent types in IUMELA. Its task is to seamlessly provide access to services and communication channels via the specified agent ontology. This agent type was appropriate for use in a ubiquitous environment because the interactions between the other agent types are well defined.
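As an illustration of the acquaintance model just described, the following minimal Java sketch keeps a per-agent record of past interactions that the moderator could consult when routing a request; the class and field names are hypothetical rather than taken from the IUMELA implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a moderator's acquaintance model: a per-agent record of past
// interactions used to decide how to route a request. Names are
// illustrative, not taken from the IUMELA implementation.
public class AcquaintanceModel {
    static class Acquaintance {
        int requestsHandled;   // how many requests this agent has served
        long totalResponseMs;  // cumulative response time observed

        double meanResponseMs() {
            return requestsHandled == 0 ? Double.MAX_VALUE
                                        : (double) totalResponseMs / requestsHandled;
        }
    }

    private final Map<String, Acquaintance> acquaintances = new HashMap<>();

    // Record the outcome of a completed interaction with another agent.
    public void record(String agentName, long responseMs) {
        Acquaintance a = acquaintances.computeIfAbsent(agentName,
                                                       k -> new Acquaintance());
        a.requestsHandled++;
        a.totalResponseMs += responseMs;
    }

    // Pick the known agent with the best observed mean response time.
    public String fastestAgent() {
        return acquaintances.entrySet().stream()
            .min((x, y) -> Double.compare(x.getValue().meanResponseMs(),
                                          y.getValue().meanResponseMs()))
            .map(Map.Entry::getKey)
            .orElse(null);
    }
}
```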


10.3.3 Expert agent technologies


The student agent enables all other agents in IUMELA to access the stored student models. To improve accessibility and enhance performance, each student's stored data has been marked up according to the IUMELA Document Type Definition (DTD). This provides a single, generic method by which other agents can interact with the student data while simultaneously ensuring student confidentiality. The student agent can then match action requests to the appropriate agent role controlling the student's information.
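To illustrate this single generic access path, the sketch below parses a student record using the standard Java XML APIs. The element names and the inline sample record are hypothetical, since the actual IUMELA DTD is not reproduced in this chapter.

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import java.io.StringReader;

public class StudentRecordReader {
    // Hypothetical student record marked up per an assumed IUMELA DTD;
    // the element names are illustrative, not the published DTD.
    private static final String SAMPLE =
        "<?xml version='1.0'?>"
        + "<student id='s1234'>"
        + "  <name>Jane Doe</name>"
        + "  <programme>BSc Computer Science</programme>"
        + "  <learningStyle dimension='visual-verbal' score='7'/>"
        + "</student>";

    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        DocumentBuilder builder = factory.newDocumentBuilder();
        Document doc = builder.parse(new InputSource(new StringReader(SAMPLE)));
        // Other agents read student data only through this generic path,
        // never through direct store access, preserving confidentiality.
        String programme = doc.getElementsByTagName("programme")
                              .item(0).getTextContent();
        System.out.println("Programme: " + programme);
    }
}
```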
IUMELA aims to help students achieve their ultimate academic goals by assisting them in devising competent and attainable intermediate goals while traversing a specially tailored module schema. The student agent enables students to envision, at the click of a button, a potential overview of their academic journey based on the student's current academic profile and previous academic achievements. The student agent comprehensively trawls through the student's assignment and evaluation logs to glean an understanding of their learning preferences and to compare these to the learning preferences of their peers.
The fundamental role of the expert agent is to depict the teaching strategies of the module lecturers in an accurate and current manner. It is also the task of this agent to retrieve all potential evaluation techniques for each module, ensuring that any prediction or recommendation made is based on fully up-to-date information. Within the IUMELA application, the expert agent maintains a knowledge base of all possible teaching strategies used within the university. This knowledge base is then linked to a list of all potential learning strategies within each module offered. It is the task of the expert agent to maintain this directory of all available modules, the lecturer directing each one, and their preferred teaching style and examination technique. This information is displayed via the assistant agent's teaching strategy interface.
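A minimal sketch of such a directory is given below; the fields mirror the attributes just listed (module, lecturer, teaching style and examination technique), while the types and names are assumptions made for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the expert agent's module directory. Each entry links a module
// to its lecturer, preferred teaching style and examination technique, as
// described in the text; names and values are illustrative.
public class ModuleDirectory {
    record ModuleEntry(String code, String title, String lecturer,
                       String teachingStyle, String examTechnique) {}

    private final Map<String, ModuleEntry> byCode = new HashMap<>();

    public void register(ModuleEntry entry) {
        byCode.put(entry.code(), entry);
    }

    public ModuleEntry lookup(String code) {
        return byCode.get(code);
    }

    public static void main(String[] args) {
        ModuleDirectory dir = new ModuleDirectory();
        dir.register(new ModuleEntry("COMP10010", "Introduction to Programming",
                "Dr. Example",           // hypothetical lecturer
                "lecture+lab",           // preferred teaching style
                "continuous-assessment"  // examination technique
        ));
        System.out.println(dir.lookup("COMP10010"));
    }
}
```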
The analysis agent maintains a knowledge base of potential recommendation algorithms for use in determining all plausible academic outcomes based on the information it receives from the student and expert agents. Although it maintains several potential recommendation algorithms, it proactively chooses an appropriate reasoning model based on the student's prior knowledge, their academic history, their chosen degree program and their current level. This agent type is capable of adapting its current reasoning strategies in the hope that IUMELA's recommendations will improve and become more accurate over time.
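One natural way to realise this choice of reasoning model is the strategy pattern. The sketch below is a minimal illustration under the assumption that every recommendation algorithm shares one interface; the profile fields and the selection rule are hypothetical.

```java
import java.util.List;

// Sketch of the analysis agent choosing between recommendation algorithms
// (strategy pattern). Profile fields and the selection rule are
// illustrative assumptions, not the published IUMELA logic.
public class AnalysisAgentSketch {
    record StudentProfile(int level, int completedModules) {}

    interface Recommender {
        List<String> recommend(StudentProfile profile);
    }

    // With little history, fall back on cohort-wide popularity.
    static final Recommender POPULARITY =
        p -> List.of("COMP10010", "MATH10040");

    // With enough history, compare against similar past students.
    static final Recommender COLLABORATIVE =
        p -> List.of("COMP20050", "COMP20010");

    static Recommender choose(StudentProfile p) {
        return (p.completedModules() < 3) ? POPULARITY : COLLABORATIVE;
    }

    public static void main(String[] args) {
        StudentProfile firstYear = new StudentProfile(1, 0);
        System.out.println(choose(firstYear).recommend(firstYear));
    }
}
```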

10.4 IUMELA student user interface


It was decided that an interface would be designed that could be accessed using either a mobile device or a desktop device, to enhance ubiquitous access to student information. This ensures ease of access no matter when or where the student chooses to access IUMELA. In doing so, several design issues were raised. Larger devices tend to have longer battery life, greater screen real estate and improved screen resolution. Mobile devices, however, have benefits that often outweigh those of the desktop device: they are discreet, accessible in a multitude of settings and provide ubiquitous access to online data on the go. The requirements of the interface displayed when accessed on a mobile device are subtly different from those of its desktop equivalent. Once proficient in accessing IUMELA via either variant, the student must be capable of transferring to its counterpart seamlessly; otherwise the student would be required to apply two separate skill sets to the same application, an exercise in futility.

10.4.1 Initial registration and login


The IUMELA registration process was designed so that third level students would be capable of registering online without encountering any hard-copy procedures. As such, the application has an initial login screen that provides access to a registration screen. Many of the student records required by IUMELA are available through other university resources and, for this reason, only the information that is strictly required is requested upon registration.
The initial login screen is minimal and uncomplicated in its design. It is evident from the image below that the initial login screen has remained true to its original design specification. The application employs tabular navigation for ease of use. Studies have shown that a graphical design, or a combination of textual and graphical interface design, is preferable to a purely text-based format.
The tabs enable navigation from the initial login screen, through the registration screen and to the help screen prior to initial registration. The student is capable of returning to the initial login screen at any time. The current tab is always highlighted, while the other two tabs remain in shadow. The IUMELA logo is prominent on all three screens, ensuring the student is always aware of which application they are using. The university insignia is displayed here also. The navigation links displayed highlight the student's location in the application, thus preventing the student becoming lost in hyperspace. Finally, in order to enter the application the student must enter his unique username and password combination, which is made available upon completion of the registration process. When the correct username and password combination is entered and the submit button pressed, the user is navigated to the personalised welcome screen. Here, he will have access to the full array of information available to the mobile user.
The registration screen gathers the minimum information required for the IUMELA MAS to assist the student. This information has been selected to enable the MAS to determine the student's previous academic history, learning preferences, future goals and learning styles. Due to the detailed nature of the registration process, the required information was categorised according to the information being retrieved, and each category was displayed on a separate page. This significantly reduces vertical scrolling. Once the student has successfully completed the registration process, a username and password are allocated to them and a reminder of this combination is emailed to their specified email address.
The IUMELA registration process makes use of the Felder and Solomon Index of Learning Styles that was alluded to in the initial user trial. The results obtained are then used by the IUMELA MAS to help recommend modules of interest to the student when the need arises. The use of screen real estate has been kept minimal to ensure enhanced usability on the smaller, more mobile device.
The Register tab is highlighted to indicate that the student is currently working in the registration page; the navigation links allude to this also. Below these, a series of questions require completion. Following their submission, the student is requested to complete a survey based on Felder and Solomon's Index of Learning Styles. Although this will benefit the student from the first time they log on to use the application, it is not necessary that the student complete it immediately.
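For context, the Felder and Solomon Index of Learning Styles derives each of its four dimension scores from the balance of 'a' and 'b' answers over the eleven questions assigned to that dimension. The sketch below computes one such score; the grouping of questions into dimensions is simplified here for illustration.

```java
// Sketch: scoring one dimension of the Felder-Solomon Index of Learning
// Styles. Each dimension has eleven two-option (a/b) questions; the score
// is the count of 'a' answers minus the count of 'b' answers, giving an
// odd value between -11 and +11. Question-to-dimension assignment is
// simplified for illustration.
public class IlsDimensionScore {
    static int score(char[] answers) {
        int a = 0, b = 0;
        for (char c : answers) {
            if (c == 'a') a++;
            else if (c == 'b') b++;
        }
        return a - b; // e.g. +5 leans toward the 'a' pole of the dimension
    }

    public static void main(String[] args) {
        char[] activeReflective = "aababaaabaa".toCharArray(); // 11 answers
        System.out.println("active/reflective score: "
                           + score(activeReflective));
    }
}
```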
Each student is afforded the opportunity to access the site once a suitable username and password combination has been determined. The student will be made aware, however, that the survey still requires completion. It is beneficial for the student to provide the answers to the survey, as it further assists the MAS in obtaining an appropriate representation of the student's learning capabilities.
The help screen is also accessible prior to logging in. It is, however, limited to the information required to determine whether the application is suitable to the student's needs and how to complete the login and registration information.

10.4.2 Personalised welcome screen


Upon successfully logging in, the student enters the personalised welcome screen. Studies have
shown that a personalised application is conducive to learning and can enhance information retention.


Fig. 10.2 Personalized welcome screen.

It encourages students to interact frequently with the application, and results have shown that students remain on these pages for longer than on their generic counterparts. This has the effect of enhancing knowledge absorption.
The information presented to the student via IUMELA has been specifically chosen to instil a sense of community at the university. It is hoped that the student will not feel like he is working alone in a sterile environment, but rather that he is a member of an interconnected community of students whose primary goal is an improvement in their academic experience at third level. The information displayed on the welcome screen is limited to that which must be instantly accessible at log in. There
were several reasons for strictly adhering to this display. It would:
(i) minimise vertical scrolling
(ii) remove redundant information from the welcome screen
(iii) protect students' privacy
(iv) make best use of the limited screen real estate
To this end, the information displayed includes:
(i) navigation controls
(ii) recently updated information
(iii) university information
(iv) student details
The navigation controls enable students to navigate through the IUMELA screens in order to retrieve the required information. Tasks that a student might wish to complete in order to achieve their academic goals include: obtaining information on all modules available at the university, determining their compatibility with a module of interest, comparing that compatibility with other current students' compatibility, comparing it with similar past students' achievements in the particular module of interest, receiving recommendations of suitable modules for consideration, and exchanging messages with other students, tutors and lecturers on the course. The assistant in the IUMELA application helps students successfully complete their current module selection. It is also capable of assisting the student in exam preparation and future module selection.
Navigation through the IUMELA screens can be achieved by using the tabs available at the top of the screen, through the drop-down list of most commonly used tools, using the navigation history links and, finally, through the highlighted tasks at the bottom of the screen. The tasks highlighted are those that require immediate attention.
The university details have been included in order to enhance the sense of community. Studies have shown that students who feel a sense of belonging at school or university are more likely to achieve higher grades than those who feel disassociated from their academic careers. Students who feel a sense of belonging are also more likely to ask for assistance if they get into difficulty or do not understand aspects of the course. Students who have no one to turn to can become more introverted and reclusive, which often leads to them becoming increasingly lost in their studies. Instilling a sense of community is a fundamental principle of the IUMELA application. Community and belonging play an important role throughout all aspects of the application, in the belief that, where they are absent, a student is unlikely to thrive in the learning environment.

10.4.3 Learning journal facility


The learning journal facility has been created as a dedicated academic journal for each student. It is a private space that enables each student to gather their thoughts and to assemble a personal record of their module experiences from lecture to lecture. It is an outlet for a student to voice their feelings about projects, assignments and tests undertaken during the course of the module. It enables the student to gain a preliminary understanding of their overall academic journey prior to their end-of-term examinations. When revising, it enables the student to analyse their achievements for each completed lecture. Did they think they gained an understanding of the topic, or will more study of the topic be required? Did they find it difficult? Do they think they will struggle when revising?
Any information added to the learning journal remains private to the student, and only their personal agent is permitted to access these details. The learning journal is secure and password-protected. It was designed to be a memory aid at revision time. It allows students to add sentiment to each lecture upon completion; it provides an emotional trigger when reviewing class work and assignments, as well as providing an indicator as to whether or not they should pursue an associated module at a later date.
The MAS uses the learning journal facility to formulate an association between the language in a student's learning journal and the student's personal lecture ratings. A lecture star rating of five is a positive grade, so adjectives and phrases that frequently appear in the learning journal on that day can be considered positive also. This further enables the assistant agent to gain an understanding of the student's strengths and weaknesses.
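A minimal sketch of this association is given below: each word in a journal entry accumulates the star rating of the lecture it was written against, so words with a high mean rating are treated as positive for that student. The tokenisation and the positivity threshold are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: associating journal vocabulary with lecture star ratings.
// Each word accumulates the ratings of the entries it appears in; a high
// mean rating marks the word as positive for this student. The threshold
// and tokenisation are illustrative.
public class JournalSentiment {
    private final Map<String, int[]> stats = new HashMap<>(); // {sum, count}

    public void addEntry(String text, int starRating) {
        for (String word : text.toLowerCase().split("\\W+")) {
            if (word.isEmpty()) continue;
            int[] s = stats.computeIfAbsent(word, k -> new int[2]);
            s[0] += starRating;
            s[1]++;
        }
    }

    public boolean isPositive(String word) {
        int[] s = stats.get(word.toLowerCase());
        return s != null && (double) s[0] / s[1] >= 4.0; // assumed threshold
    }

    public static void main(String[] args) {
        JournalSentiment js = new JournalSentiment();
        js.addEntry("Great lecture, clear and enjoyable", 5);
        js.addEntry("Confusing material, struggled throughout", 2);
        System.out.println(js.isPositive("enjoyable"));  // true
        System.out.println(js.isPositive("confusing"));  // false
    }
}
```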

10.4.4 Student messaging


Continuous communication and student integration appear to be central themes for ensuring improved student participation throughout the academic year. Many EU governance decisions have highlighted the importance of reducing student drop-out rates at universities and colleges; in some instances, for example, subsidies can be reduced if the drop-out rate is not reduced. The prevention of avoidable drop-outs has been an intractable problem in higher education for years. Several interventional methods, such as increased counselling and mentoring, have been introduced to combat the phenomenon. These, however, have proven to be costly.
The University of Ulster in Northern Ireland has had great success in the use of SMS messaging for the reduction of student drop-out. It found that sending SMS messages to students identified as being at risk was a very successful approach for keeping students in the system and for maintaining the government per capita grant. The University of Ulster sent students messages of the type 'Sorry, we missed you today'. The university initially feared that this might be intrusive. On the contrary, the students did not find it intrusive at all; they appreciated it and wanted the university to expand the service to other areas, such as assignment deadlines.
Based on the preliminary user trials, the message inbox was modified to be akin to the standard message inbox available on many low-resolution, small-screen devices such as mobile phones and PDAs. A list of messages is displayed in the inbox. They can be ranked by message type, sender or subject. An icon beside each message indicates whether the message has been read previously.
The information given here contains each sender's name, the time at which each message was sent and the first line of text contained in each message. Upon selecting a message to read, the student is presented with a full screen containing that message. IUMELA offers the student the opportunity to reply to the message, delete it, forward it, move it, mark it as unread, or save the sender's details to the student's contacts folder.
The messaging screen contains a representation of both students communicating. A further instant messaging facility enables students to undertake a real-time chat through video messaging or, if the receiver cannot be contacted, to leave a message in the message drop box for later perusal. Any task alluded to in a message can be immediately stored in the IUMELA task list facility. These tasks can then be ranked in accordance with their perceived importance. In this way, a student can personalise the application to whatever degree they choose.

10.4.5 The module and assistant facilities

Fig. 10.3 Sample Student Interactions with IUMELA.

The most innovative part of the IUMELA application is that of the module and assistant facilities. These are also the facilities that make most use of the MAS intelligence. By clicking the module tab on the IUMELA welcome screen, the student is directed to the module selection screen. These facilities can be used in conjunction with the Student Information System (SIS) to select modules from semester to semester. The module facility has multiple views, all designed with the intention of assisting a student in determining the most suitable module combination for their needs. Multiple views were provided to ensure that a student can browse or search for modules in a manner that suits their current situation. For example, if a student knows the school and faculty they wish to attend but is unsure of the available electives, they can navigate to that school and browse through the available modules by title, lecturer, complexity level or keyword. In another instance, the student may be aware of the subject they are interested in but not sure which module covers it. For this, a student may do a keyword search. The results returned will contain all relevant modules from all faculties. These can then be narrowed according to faculty, school, complexity, semester and time. If a student is unsure of where to begin searching for modules, they can access the assistant agent to help formulate a decision.
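The narrowing described above can be pictured as successive filters over the returned result set, as in the brief sketch below; the module fields, filter criteria and suitability scores are illustrative assumptions rather than the IUMELA schema.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch: narrowing a keyword-search result set by faculty and semester,
// then sorting by a suitability score. Fields and criteria are
// illustrative, not the IUMELA schema.
public class ModuleSearchSketch {
    record Module(String code, String faculty, int semester,
                  double suitability) {}

    static List<Module> narrow(List<Module> results, String faculty,
                               int semester) {
        return results.stream()
            .filter(m -> m.faculty().equals(faculty))
            .filter(m -> m.semester() == semester)
            .sorted((a, b) -> Double.compare(b.suitability(), a.suitability()))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Module> hits = List.of(
            new Module("COMP20010", "Science", 1, 0.82),
            new Module("HIST10110", "Arts", 1, 0.40),
            new Module("COMP30080", "Science", 2, 0.91));
        System.out.println(narrow(hits, "Science", 1)); // [COMP20010 ...]
    }
}
```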
The assistant has access to all past academic records of all modules previously taken by the student. Furthermore, it maintains a directory of all HCI interactions that took place with the student via IUMELA. These interactions include the initial survey taken during completion of the initial registration process, the student's learning journal entries, and the message interactions between the student and their fellow classmates, tutors and lecturers. The results from these interactions are aligned with specific academic preferences: which learning tasks best suit the current student, which style of lecturing best suits the student's learning preference, where their strengths lie, whether they work well in group tasks or are better suited to working alone, whether they prefer continuous assessment or a single examination at the end of the module, which modules their friends are attending, which types of modules they have successfully completed in the past, and so on. Upon correlating these details, the results are ranked according to suitability and then meaningfully displayed.

Fig. 10.4 Presentation of a potential module.

Initially, all the modules that lie within the suitability threshold range are displayed to the student. He is then given options that enable the set of suitable modules to be narrowed, including sorting by faculty, school, lecturer, semester, time and core components. When only those modules that are of interest remain, the student can select a module for consideration. They are then redirected to a module details screen, in which they can obtain a greater understanding of the module under consideration, view a sample lecture, peruse sample presentation slides, read recommended material, see some sample examination questions, communicate directly with the teaching assistants or lecturers, and read feedback given by past students. This provides a level of interactivity that was previously only alluded to.

10.5 Evaluation
The purpose of designing IUMELA was to assist third level students in their selection of module combinations. During the design phase, however, it became evident that mobile devices still lack the promised power and connectivity of their desktop counterparts. Our research therefore led us to question whether the performance of the IUMELA MAS could be improved through the adoption of the ABITS FIPA Messenger. Because IUMELA is a mobile application, this study would be undertaken using a lightweight client side, where the personalised and adaptive assistant agent is located, and the more computationally expensive server side, where the remainder of the MAS agents reside. IUMELA makes use of the XDA Mini S, a mid-range smartphone at the time of writing. It harnesses the power of a TI OMAP 850 200 MHz processor and is equipped with a 64k-colour touch screen and a 240x320 backlit LCD. The server side is maintained on a Dell Dimension E521 with 4 GB of dual-channel DDR2 SDRAM at 533 MHz. The Moodle Managed Learning Environment (MLE) is maintained on the server side; it is an open-source MLE, used for Internet-based course administration within the IUMELA application, and it makes use of the MySQL relational database management system. The IUMELA MAS uses the FIPA-compliant Agent Factory toolkit, with Java as the programming language.

10.5.1 ABITS FIPA messenger in IUMELA


Past studies have demonstrated that maintaining a lightweight client side can ensure faster and more efficient human-computer interaction (HCI). Therefore, by using the ABITS FIPA Messenger, a lightweight Java API that enables unidirectional communication from a Java environment to a FIPA-compliant MAS, processing and communication overheads may be reduced. IUMELA's lightweight assistant agent ensures that the learning assistant can operate at an optimal level. Not only does the IUMELA MAS architecture lend itself well to the premise behind the ABITS FIPA Messenger, but it also displays an ability to cope when there is a significant increase in inter-agent communications. This ensures that the application could initially contain only a few nodes but could easily expand to many more based on network requirements. Merely employing this architecture, however, cannot guarantee that IUMELA will function at an optimal level. Therefore, to further scrutinize the MAS agent communication mechanism, a study was undertaken to determine whether a more lightweight assistant agent could be used that would reduce the processing requirements of the mobile application and increase the efficiency of the inter-agent communications, thus improving HCI.
To achieve this, the inter-agent communications were tested while a simulation of a student undertaking the initial IUMELA survey was under way via the XDA Mini S. The aim of this survey was to obtain an initial knowledge base of the preferred learning styles of a newly enrolled student so that initial recommendations of modular courses could be made. The survey required students to interact with the IUMELA MAS via the assistant agent in order to provide answers to the questions posed. This trial was undertaken via the IUMELA architecture discussed in Section 10.3. The lightweight assistant agent, residing on the XDA Mini S, communicates according to the MAS pull-based agent ontology.
To ensure that meaningful results would be returned and appropriate conclusions could be drawn, two types of interaction were required. The control IUMELA interactions, typical of the Agent Factory MAS, required the assistant agent to have an understanding of all request and inform messages specific to the IUMELA ontology. The second set of IUMELA interactions merely required the assistant agent to obtain messages from the moderator agent, without submitting processed information itself, as these messages would be relayed via the ABITS FIPA Messenger service.
In order that the optimal number of concurrent message interactions could be determined, several key time codes had to be retrieved for each message interaction, regardless of the mechanism: the time at which the first inform message interaction between the assistant agent and the moderator agent occurs; the time at which the message is received from the sender (in the case where the student is logging in for the first time, the sender can be considered to be the assistant agent); the time at which a second inform message is returned to the sender; the time at which the message interaction is completed; and, finally, the overall duration of the message interaction.
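Taken together, these time codes amount to a small per-interaction record, sketched below with hypothetical field names.

```java
// Sketch: the per-interaction time codes gathered during the trial.
// Field names are illustrative; times are epoch milliseconds.
public class InteractionTiming {
    long firstInformSent;   // first inform from assistant to moderator
    long receivedBySender;  // message received back from the sender
    long secondInformSent;  // second inform returned to the sender
    long completed;         // interaction completed

    long overallDurationMs() {
        return completed - firstInformSent;
    }

    public static void main(String[] args) {
        InteractionTiming t = new InteractionTiming();
        t.firstInformSent = System.currentTimeMillis();
        // ... interaction happens here ...
        t.completed = t.firstInformSent + 42; // placeholder measurement
        System.out.println(t.overallDurationMs() + " ms");
    }
}
```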
On average, around 144 message interactions are required for each initial IUMELA survey to be completed by a student. The time codes retrieved enable the multi-agent system to determine if and when a bottleneck is occurring within the IUMELA MAS architecture because the moderator agent's message interactions have surpassed their optimal level. While undertaking the survey, the bottleneck occurred mid-way through completion, at the sixty-first interaction.
Using the ABITS FIPA Messenger, the optimal number of concurrent message interactions occurs at interaction forty-seven. This does not occur until interaction seventy-eight within the control scenario, which is attributed to the assistant agent maintaining a full ontology and undertaking more complicated inter-agent communications. Before this point too few concurrent message interactions are occurring, and after it too many. The results indicate that a bottleneck occurred mid-way through completion of the survey because it is at this time that the greatest number of message interactions occurs between the assistant and moderator agents. The ABITS FIPA Messenger achieved optimality of message interactions at a faster rate than the control scenario due to the inclusion of a Java class that maintained a separate queue of completed surveys to be returned to the server-side moderator for processing. This had the effect of reducing the size of the assistant agent's ontology and thus reduced its processing requirements. Increased efficiency on the client side, however, resulted in the moderator agent becoming a bottleneck sooner than in the control scenario.
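The queueing class just mentioned can be pictured roughly as follows: completed surveys are buffered on the handset and drained to the server-side moderator in batches, sparing the assistant agent the full reply ontology. The class and method names are hypothetical, as the original source is not reproduced here.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of the client-side queue of completed surveys. Completed surveys
// are buffered locally and drained to the server-side moderator, sparing
// the assistant agent from handling the full reply ontology. All names
// are hypothetical.
public class CompletedSurveyQueue {
    private final ConcurrentLinkedQueue<String> pending =
        new ConcurrentLinkedQueue<>();

    // Called by the assistant agent when a survey answer set is complete.
    public void enqueue(String surveyPayload) {
        pending.add(surveyPayload);
    }

    // Drain up to 'max' surveys for relay to the moderator in one batch.
    public List<String> drain(int max) {
        List<String> batch = new ArrayList<>();
        String s;
        while (batch.size() < max && (s = pending.poll()) != null) {
            batch.add(s);
        }
        return batch;
    }
}
```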
The conclusions drawn from this study indicate that, in order to keep message interactions at an optimal level, it must be ensured that the moderator agent does not become a bottleneck. The IUMELA MAS can achieve this by maintaining two disparate thresholds, a maximum and a minimum. For example, the quantity of messages to be relayed via a particular moderator agent can be given a maximum threshold. While the number of messages concurrently being processed and sent remains below the allowable threshold, the moderator will accept further incoming messages. If, however, the threshold is reached, then using the cloning functionality available to all Agent Factory based MAS, a moderator agent with similar functionality is created and registered on the required platform. Subsequently, when a new message is relayed to a moderator agent operating at its maximum level, it replies referencing the platform and port number of the newly created moderator agent.
Because the IUMELA MAS does not exist within a static environment, it would not be plausible to maintain a fixed maximum or minimum threshold. Instead, the moderator agent must use its internal reasoning abilities to determine, based on the messaging service used, past usage logs and its current context, the most appropriate maximum and minimum thresholds.
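A compact sketch of this overflow policy is given below, assuming a counter of in-flight messages and a clone factory; the threshold value, the redirect reply and the cloning call are illustrative placeholders rather than Agent Factory API calls.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the moderator's overflow policy: accept messages while below
// the maximum threshold; once it is reached, spawn a clone and redirect
// senders to it. Thresholds, names and the redirect format are
// illustrative, not the Agent Factory API.
public class ModeratorOverflowSketch {
    private final AtomicInteger inFlight = new AtomicInteger();
    private int maxThreshold = 47; // would be adapted from context, not fixed

    public String onMessage(String message) {
        if (inFlight.incrementAndGet() <= maxThreshold) {
            try {
                return process(message);
            } finally {
                inFlight.decrementAndGet();
            }
        }
        inFlight.decrementAndGet();
        // At capacity: clone a moderator and tell the sender where it lives.
        String cloneAddress = spawnClone();
        return "redirect:" + cloneAddress;
    }

    private String process(String message) {
        return "ok";
    }

    private String spawnClone() {
        // Placeholder for Agent Factory's cloning facility.
        return "iumela-server-2:9000";
    }
}
```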

10.6 Discussion
IUMELA, the Intelligent Ubiquitous Modularised Education Learning Assistant, makes use of multi-agent system (MAS) technologies to create an intelligent learning assistant that can support students in their choice of modules based on their learning preferences, academic abilities and personal preferences. The learning assistant utilises expert-system analysis functionality to recommend and predict potential outcomes through the investigation of the student's learning styles and comparative analysis of similar past students' achievements. Its conclusions and recommendations are subsequently displayed to the student via a mobile device using Java-based servlet technologies. User modelling can result in the recommendation of appropriate modules via the expert agent, delivered ubiquitously via the student's XDA Mini S. Because the mobile devices currently available to the average student are still unable to compete with desktop and laptop computers in terms of screen real estate, computational and processing power, our research has led us to the consideration of a more lightweight client side through the reduction of the processing requirements of the assistant agent that resides on the mobile device. An evaluation of such an adaptation was achieved by undertaking a comparative study between the original IUMELA MAS and the version incorporating the ABITS FIPA Messenger.

Bibliography

(2000). Agent Working Group: Agent Technology Green Paper, Object Management Group, ftp://ftp.omg.org/pub/docs/ec/00-03-01.pdf.
(2002). Foundation for Intelligent Physical Agents.
(2008). Agent Academy, https://sourceforge.net/projects/agentacademy.
(2008). JADE, http://jade.tilab.com.
(2008). Weka, http://www.cs.waikato.ac.nz/~ml/index.html.
Aamodt, A. and Plaza, E. (1994). Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches, AI Communications 7, 1, pp. 39-59.
Aarts, E. (2004). Ambient intelligence: A multimedia perspective, IEEE Multimedia 11, pp. 12-19.
Aarts, E., Collier, R., van Loenen, E. and de Ruyter, B. (2003). Ambient Intelligence, Proceedings of the First European Symposium, Lecture Notes in Computer Science, Vol. 2875 (Springer Verlag).
Aarts, E., Harwig, R. and Schuurmans, M. (2001). Ambient intelligence, in P. Denning (ed.), The Invisible Future (McGraw Hill, New York), pp. 235-250.
Yamato, J., Shinozawa, K., Brooks, R. and Naya, F. (2003). Human-robot dynamic social interaction, NTT Technical Review 1, 6, pp. 37-43.
Abowd, G. D., Atkeson, C. G., Bobick, A. F., Essa, I. A., MacIntyre, B., Mynatt, E. D. and Starner, T. E. (2000). Living laboratories: the future computing environments group at the Georgia Institute of Technology, in Proceedings of Conference on Human Factors in Computing Systems (CHI '00): extended abstracts on Human factors in computing systems (ACM Press, New York, NY, USA), ISBN 1-58113-248-4, pp. 215-216.
Abowd, G. D., Atkeson, C. G., Hong, J., Long, S., Kooper, R. and Pinkerton, M. (1997). Cyberguide: a mobile context-aware tour guide, Wireless Networks 3, 5, pp. 421-433.
Akman, V. and Surav, M. (1996). Steps toward formalizing context, AI Magazine 17, 3, pp. 55-72.
Allsopp, J. (2007). Microformats: Empowering Your Markup for Web 2.0 (Friends of ED).
Amgoud, L. and Parsons, S. (2002). Agent dialogues with conflicting preferences, in ATAL '01: Revised Papers from the 8th International Workshop on Intelligent Agents VIII (Springer-Verlag, London, UK), pp. 190-205.
Anabuki, M., Kakuta, H., Yamamoto, H. and Tamura, H. (2000). Welbo: An embodied conversational agent living in mixed reality space, in Proceedings of the Conference on Human Factors in Computing Systems - CHI '00 (The Hague, The Netherlands), pp. 10-11.
Anagnostopoulos, C. B., Tsounis, A. and Hadjiefthymiades, S. (2007). Context awareness in mobile computing environments, Wireless Personal Communications: An International Journal 42, 3, pp. 445-464.
Arranz, A., Cruz, A., Sanz-Bobi, M., Ruiz, P. and Coutino, J. (2008). Intelligent system for anomaly detection in a combined cycle gas turbine plant, Expert Systems with Applications 34, pp. 2267-2277.
Athanasopoulou, C., Chatziathanasiou, V., Athanasopoulos, G. and Kerasidis, F. (2008). Reduction of NOx emissions by regulating combustion parameters based on models extracted by applying data mining algorithms, in 6th Mediterranean Conference on Power Generation, Transmission and Distribution (Thessaloniki, Greece).
Athanasopoulou, C., Chatziathanasiou, V. and Petridis, I. (2007). Utilizing data mining algorithms for identification and reconstruction of sensor faults: a thermal power plant case study, in 2007 IEEE Power Engineering Society (Lausanne, Switzerland).
Baader, F., Calvanese, D., McGuinness, D. L., Nardi, D. and Patel-Schneider, P. F. (eds.) (2003). The Description Logic Handbook: Theory, Implementation, and Applications (Cambridge University Press), ISBN 0-521-78176-0.
Barakonyi, I., Psik, T. and Schmalstieg, D. (2004). Agents that talk and hit back: animated agents in augmented reality, in Proceedings of the 3rd IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2004), pp. 141-150.
Bartneck, C. (2002). eMuu - An Embodied Emotional Character for the Ambient Intelligent Home, Ph.D. thesis, Eindhoven University of Technology.
Barton, J. and Vijayaraghavan, V. (2002). Ubiwise: A ubiquitous wireless infrastructure simulation environment, Technical Report HPL-2002-303, HP Labs.
Baylor, A. L., Rosenberg-Kima, R. B. and Plant, E. A. (2006). Interface agents as social models: the impact of appearance on females' attitude toward engineering, in CHI '06: CHI '06 extended abstracts on Human factors in computing systems (ACM, New York, NY, USA), pp. 526-531.
Bellifemine, F., Caire, G. and Greenwood, D. (2007). Developing multi-agent systems with JADE (John Wiley and Sons).
Bellifemine, F., Caire, G. and Rimassa, G. (2002). JADE Programmer's Guide, TILab and University of Parma, JADE 2.6.
Bellifemine, F., Poggi, A. and Rimassa, G. (2001). Developing multi-agent systems with a FIPA-compliant agent framework, Software: Practice and Experience 31, pp. 103-128.
Berners-Lee, T., Hendler, J. and Lassila, O. (2001). The semantic web, Scientific American.
Besada, J., Molina, J., García, J., Berlanga, A. and Portillo, J. (2004a). Aircraft identification integrated in an airport surface surveillance video system, Machine Vision and Applications.
Besada, J. A., García, J. and Miguel, G. (2004b). A new approach to on-line optimal estimation of multisensor biases, IEE Proceedings. Radar, Sonar and Navigation 151, 1.
Bichindaritz, I. (2006). Memory Organization as the Missing Link Between Case-Based Reasoning and Information Retrieval in Biomedicine, Computational Intelligence 22, 3-4, pp. 148-160.
Biegel, G. and Cahill, V. (2004). A framework for developing mobile, context-aware applications, in PerCom '04: Proceedings of the Second IEEE International Conference on Pervasive Computing and Communications (PerCom '04) (IEEE Computer Society, Washington, DC, USA), ISBN 0-7695-2090-1, p. 361.
Billard, A. and Dautenhahn, K. (1999). Experiments in learning by imitation: Grounding and use of communication in robotic agents, Adaptive Behaviour 7, 3, pp. 411-434.
Billinghurst, M., Kato, H. and Poupyrev, I. (2001). The MagicBook: a transitional AR interface, Computers and Graphics, pp. 745-753.
Biocca, F. (1997). The cyborg's dilemma: Embodiment in virtual environments, in CT '97: Proceedings of the 2nd International Conference on Cognitive Technology (CT '97) (IEEE Computer Society, Washington, DC, USA), p. 12.
Biocca, F. and Nowak, K. (1999a). Communication and progressive embodiment in virtual environments, Paper presented at the International Communication Association, San Francisco.
Biocca, F. and Nowak, K. (1999b). I feel as if I'm here, inside the computer: Toward a theory of presence in advanced virtual environments, Paper presented at the International Communication Association, San Francisco.


Blum, A. L. and Furst, M. L. (1997). Fast planning through planning graph analysis, Artificial Intelligence 90, 1-2, pp. 281-300.
Booth, D. and Liu, C. K. (2007). Web Services Description Language (WSDL) version 2.0 part 0: Primer, http://www.w3.org/TR/wsdl20-primer/.
Bosse, T., Delfos, M., Jonker, C. and Treur, J. (2006a). Modelling adaptive dynamical systems to analyze eating regulation disorders, Simulation Journal: Transactions of the Society for Modeling and Simulation International 82, 3, pp. 159-171.
Bosse, T., Gerritsen, C. and Treur, J. (2007). Integration of biological, psychological and social aspects in agent-based simulation of a violent psychopath, in Y. Shi, G. van Albada, J. Dongarra and P. Sloot (eds.), Proceedings of the Seventh International Conference on Computational Science, ICCS '07 (Springer Verlag), pp. 888-895.
Bosse, T., Hoogendoorn, M., Klein, M. and Treur, J. (2008a). A component-based agent model for assessment of driving behaviour, in F. Sandnes, M. Burgess and C. Rong (eds.), Proceedings of the Fifth International Conference on Ubiquitous Intelligence and Computing, UIC '08 (Springer Verlag), pp. 229-243.
Bosse, T., Jonker, C., van der Meij, L., Sharpanskykh, A. and Treur, J. (2008b). Specification and verification of dynamics in agent models, International Journal of Cooperative Information Systems.
Bosse, T., Schut, M., Treur, J. and Wendt, D. (2008c). Trust-based inter-temporal decision making: Emergence of altruism in a simulated society, in L. Antunes, M. Paolucci and E. Norling (eds.), Proceedings of the Eighth International Workshop on Multi-Agent-Based Simulation, MABS '07 (Springer Verlag), pp. 96-111.
Bosse, T., van Maanen, P. and Treur, J. (2006b). A cognitive model for visual attention and its application, in T. Nishida, M. Klusch, K. Sycara, M. Yokoo, J. Liu, B. Wah, W. Cheung and Y. Cheung (eds.), Proceedings of the Sixth International Conference on Intelligent Agent Technology, IAT '06 (IEEE Computer Society Press), pp. 255-262.
Bratman, M. (1987). Intentions, Plans and Practical Reasoning (Harvard University Press, Cambridge, Massachusetts).
Braubach, L., Pokahr, A. and Lamersdorf, W. (2004). Jadex: A short overview, in Main Conference Net.ObjectDays 2004, pp. 195-207.
Brazier, F., Jonker, C. and Treur, J. (2000). Compositional design and reuse of a generic agent model, Applied Artificial Intelligence Journal 14, pp. 491-538.
Brazier, F., Jonker, C. and Treur, J. (2002). Principles of component-based design of intelligent agents, Data and Knowledge Engineering 41, pp. 1-28.
Broch, J., Maltz, D., Johnson, D., Hu, Y. and Jetcheva, J. (1998). A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols, Proceedings of the 4th Annual ACM/IEEE International Conference on Mobile Computing and Networking, pp. 85-97.
Busetta, P., Bouquet, P., Adami, G., Bonifacio, M., Palmieri, F., Moro, G., Sartori, C. and Singh, M. P. (2004). K-Trek: a peer-to-peer approach to distribute knowledge in large environments, Agents and Peer-to-Peer Computing. Second International Workshop, AP2PC 2003. Revised and Invited Papers. (Lecture Notes in Artificial Intelligence Vol. 2872) (Springer-Verlag, Berlin, Germany).
Capra, L., Emmerich, W. and Mascolo, C. (2003). CARISMA: Context-aware reflective mIddleware system for mobile applications, IEEE Transactions on Software Engineering 29, 10, pp. 929-945.
Carbogim, D. V., Robertson, D. and Lee, J. (2000). Argument-based applications to knowledge engineering, Knowledge Engineering Review 15, 2, pp. 119-149.
Chalmers, R., Scheidt, D., Neighoff, T., Witwicki, S. and Bamberger, R. (2004). Cooperating unmanned vehicles, in AIAA 1st Intelligent Systems Technical Conference.

Chen, H. (2004). An Intelligent Broker Architecture for Pervasive Context-Aware Systems, Ph.D. thesis, University of Maryland, Baltimore County.
Chen, H., Finin, T. and Joshi, A. (2003). An ontology for context-aware pervasive computing environments, The Knowledge Engineering Review 18, 3, pp. 197-207.
Chen, H., Finin, T. and Joshi, A. (2004a). A context broker for building smart meeting rooms, in C. Schlenoff and M. Uschold (eds.), Proceedings of the Knowledge Representation and Ontology for Autonomous Systems Symposium, 2004 AAAI Spring Symposium, AAAI (AAAI Press, Menlo Park, CA, Stanford, California), pp. 53-60.
Chen, H., Finin, T. and Joshi, A. (2005). Ontologies for Agents: Theory and Experiences, chap. The SOUPA Ontology for Pervasive Computing, Whitestein Series in Software Agent Technologies (Birkhauser Basel), pp. 233-258, doi:10.1007/3-7643-7361-X_10, URL http://www.springerlink.com/content/k127108k44351226/.
Chen, H., Finin, T., Joshi, A., Kagal, L., Perich, F. and Chakraborty, D. (2004b). Intelligent agents meet the semantic web in smart spaces, IEEE Internet Computing 8, 6, pp. 69-79.
Chen, H., Perich, F., Chakraborty, D., Finin, T. and Joshi, A. (2004c). Intelligent agents meet semantic web in a smart meeting room, in Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS '04) (IEEE Computer Society, Los Alamitos, CA, USA), ISBN 1-58113-864-4, pp. 854-861.
Chen, H., Perich, F., Finin, T. and Joshi, A. (2004d). SOUPA: Standard ontology for ubiquitous and pervasive applications, in International Conference on Mobile and Ubiquitous Systems: Networking and Services, pp. 258-267.
Chiang, F., Braun, R. M., Magrath, S., Markovits, S. and Huang, S. (2005). Autonomic service configuration in telecommunication MASs with extended role-based GAIA and JADEX, pp. 1319-1324.
Chittaro, L., Ieronutti, L. and Rigutti, S. (2005). Supporting presentation techniques based on virtual humans in educational virtual worlds, in CW '05: Proceedings of the 2005 International Conference on Cyberworlds (IEEE Computer Society, Washington, DC, USA), pp. 245-252.
Clausen, T. and Jacquet, P. (2003). RFC 3626: Optimized Link State Routing Protocol (OLSR), http://www.ietf.org/rfc/rfc3626.txt.
Clement, L., Hately, A., von Riegen, C. and Rogers, T. (2004). UDDI version 3.0.2, http://uddi.org/pubs/uddi-v3.0.2-20041019.htm.
Clements, P. C. (2001). From subroutines to subsystems: component-based software development, pp. 189-198.
Clip2 (2003). Gnutella Protocol Specification v0.4, http://www.clip2.com/GnutellaProtocol04.pdf.
Coen, M. H. (1997). Building brains for rooms: designing distributed software agents, in Proceedings of the Conference on Innovative Applications of Artificial Intelligence (IAAI '97) (AAAI Press), pp. 971-977.
Coen, M. H. (1998). Design principles for intelligent environments, in Proceedings of the fifteenth national/tenth conference on Artificial intelligence/Innovative applications of artificial intelligence (AAAI '98/IAAI '98) (American Association for Artificial Intelligence, Menlo Park, CA, USA), ISBN 0-262-51098-7, pp. 547-554.
Collett, T. and MacDonald, B. (2006). Augmented reality visualisation for Player, in Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006), pp. 3954-3959.
Collier, R. W. (2001). Agent Factory: A Framework for the Engineering of Agent-Oriented Applications, Ph.D. thesis, University College Dublin, Dublin, Ireland.
Collins, N. and Baird, C. (1989). Terrain aided passive estimation, in Proceedings of the IEEE National Aerospace and Electronics Conference, Vol. 3, pp. 909-916.
Collins, R. T., Lipton, A. J., Fujiyoshi, H. and Kanade, T. (2001). Algorithms for cooperative multisensor surveillance, in Proceedings of the IEEE, Vol. 89 (IEEE).


Daily, M., Cho, Y., Martin, K. and Payton, D. (2003). World embedded interfaces for human-robot interaction, Proceedings of the 36th Annual Hawaii International Conference on System Sciences, 2003.
DARPA (2008). DARPA Strategic Technology Office, http://www.darpa.mil/STO/index.html.
Dasarathy, B. (1991). Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques (Los Alamitos, CA: IEEE Computer Society Press).
Dautenhahn, K. (1997). I could be you: The phenomenological dimension of social understanding, Cybernetics and Systems 28, pp. 417-453.
Dautenhahn, K. (1998). The art of designing socially intelligent agents: Science, fiction, and the human in the loop, Applied Artificial Intelligence 12, 8-9, pp. 573-617.
Dennett, D. (1987a). The Intentional Stance (MIT Press, Cambridge, Mass.).
Dennett, D. (1987b). The Intentional Stance (Bradford Books).
Dey, A. K. (2000). Providing architectural support for building context-aware applications, Ph.D. thesis, Georgia Institute of Technology.
Domingos, P. (1999). The role of Occam's razor in knowledge discovery, Data Mining and Knowledge Discovery 3, 4, pp. 409-425.
Doswell, J. T. (2005). It's virtually pedagogical: Pedagogical agents in mixed reality learning environments, in Proceedings of the Thirty-second International Conference on Computer Graphics and Interactive Techniques - SIGGRAPH 2005 - Educators Program (Los Angeles, California), p. 25.
Doyle, J. (1979). A truth maintenance system, Artificial Intelligence 12, pp. 231-272.
Dragone, M. (2007). SoSAA: An agent-based robot software framework, Ph.D. thesis, School of Computer Science & Informatics, University College Dublin, Dublin, Ireland.
Dragone, M., Holz, T. and O'Hare, G. M. P. (2006). Mixing robotic realities, in Proceedings of the 11th International Conference on Intelligent User Interfaces (IUI 2006) (ACM Press, New York, NY, USA), pp. 261-263.
Dragone, M., Holz, T. and O'Hare, G. M. P. (2007). Using mixed reality agents as social interfaces for robots, in RO-MAN '07: Proceedings of the 16th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Press, Jeju Island, Korea).
Duffy, B. R., O'Hare, G. M. P., Martin, A. N., Bradley, J. F. and Schon, B. (2003). Agent chameleons: agent minds and bodies, in Proceedings of the 16th International Conference on Computer Animation and Social Agents (CASA 2003), pp. 118-125.
Dung, P. M. (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games, Artificial Intelligence 77, 2, pp. 321-358.
Feiner, S., MacIntyre, B., Hollerer, T. and Webster, A. (1997). A touring machine: Prototyping 3D mobile augmented reality systems for exploring the urban environment, in Proceedings of the 1st IEEE International Symposium on Wearable Computers (ISWC '97) (IEEE Computer Society, Washington, DC, USA), p. 74.
Fitzpatrick, A., Biegel, G., Clarke, S. and Cahill, V. (????). Towards a Sentient Object Model, Workshop on Engineering Context-Aware Object Oriented Systems and Environments (ECOOSE).
Flynn, D. (2003). Thermal Power Plant Simulation and Control (IEEE Press).
Fogg, B. J. (1999). Persuasive technologies, Communications of the ACM 42, 5, pp. 26-29.
Fong, T., Nourbakhsh, I. and Dautenhahn, K. (2003). A survey of socially interactive robots, Robotics and Autonomous Systems 42, 3-4, pp. 143-166.
Forgy, C. L. (1982). Rete: A fast algorithm for the many pattern/many object pattern match problem, Artificial Intelligence 19, 1, pp. 17-37.
Frawley, W., Piatetsky-Shapiro, G. and Matheus, C. (1992). Knowledge discovery in databases: An overview, AI Magazine 13, 3, pp. 57-70.
Friedman, B. (1995). It's the computer's fault: reasoning about computers as moral agents, in CHI '95: Conference companion on Human factors in computing systems (ACM, New York, NY, USA), pp. 226-227.
Friedman Hill, E. (2003). Jess in Action: Java Rule-Based Systems (Manning Publications Co.,
Greenwich, CT, USA), ISBN 1930110898.
Fuentes, V., Carbo, J. and Molina, J. M. (2006a). Heterogeneous domain ontology for location based
information system in a multi-agent framework, in IDEAL, pp. 11991206.
Fuentes, V., Pi, N. S., Carbo, J. and Molina, J. M. (2006b). Reputation in user profiling for a contextaware multiagent system, in EUMAS.
Fujita, Y. (2002). Personal robot PaPeRo, Journal of Robotics and Mechatronics 14, 1, pp. 6063.
Gaerdenfors, P. (2003). How Homo Became Sapiens: On The Evolution Of Thinking (Oxford University Press).
Gandon, F. and Sadeh, N. M. (2003). A semantic e-wallet to reconcile privacy and context awareness,
Lecture Notes in Computer Science: The SemanticWeb - ISWC 2003 2870, pp. 385401.
Gandon, F. L. and Sadeh, N. M. (2004). Semantic web technologies to reconcile privacy and context
awareness, Journal of Web Semantics 1, 3, pp. 241260.
Garca, J., Besada, J. A. and Casar, J. R. (2003). Use of map information for tracking targets on
airport surface, IEEE Transactions on Aerospace and Electronic Systems 39, 2, pp. 675694.
Garca, J., Besada, J. A., Jimenez, F. J. and Casar, J. R. (2000). A block processing method for on-line
multisensor registration with its application to airport surface data fusion, in Proceedings of
the IEEE International Radar Conference (Washington-DC, EE. UU.).
Garcia-Clemente, F., Martinez, G., Botia, J. A. and Gomez-Skarmeta, A. (2005). On the application of the semantic web rule language in the definition of policies for system security management, in Workshop AWeSOMe'05.
Gerkey, B. P., Vaughan, R. T. and Howard, A. (2003). The Player/Stage project: Tools for multi-robot and distributed sensor systems, in Proceedings of the International Conference on Advanced Robotics (ICAR '03) (Coimbra, Portugal), pp. 317–323.
Giesler, B., Salb, T., Steinhaus, P. and Dillmann, R. (2004). Using augmented reality to interact
with an autonomous mobile platform, in Proceedings of the IEEE International Conference on
Robotics and Automation - ICRA 2004 (New Orleans, Louisiana, USA).
Giunchiglia, F., Mylopoulos, J. and Perini, A. (2002). The Tropos software development methodology: Processes, models and diagrams, in Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems (ACM Press), pp. 63–74.
Goldman, A. (2006). Simulating Minds: The Philosophy, Psychology and Neuroscience of Mind
Reading (Oxford University Press).
Gonzalez, C. (2008). Contribuciones al diseño de sistemas tutores inteligentes usando razonamiento basado en casos [Contributions to the design of intelligent tutoring systems using case-based reasoning], Ph.D. thesis, University of Vigo, Spain.
Gram, C. and Cockton, G. (1996). Design Principles for Interactive Software (Chapman & Hall,
London).
Green, D. (2005). Realtime compliance management using a wireless realtime pill bottle – a report on the pilot study of SimPill, in Proceedings of the International Conference for eHealth, Telemedicine and Health, Med-e-Tel '05.
Gross, J. (2007). Handbook of Emotion Regulation (Guilford Press, New York).
Gruber, T. R. (1993). A translation approach to portable ontology specifications, Knowledge Acquisition 5, 2, pp. 199–220.
Gu, T., Wang, X. H., Pung, H. K. and Zhang, D. Q. (2004). An ontology-based context model in intelligent environments, in Proceedings of Communication Networks and Distributed Systems Modeling and Simulation Conference, pp. 270–275.
Gutierrez, M., Vexo, F. and Thalmann, D. (2003). Controlling virtual humans using PDAs, in Proceedings of the 9th International Conference on Multimedia Modelling (MMM '03), Taiwan.
Ha, Y.-G., Sohn, J.-C., Cho, Y.-J. and Yoon, H. (2005). Towards a ubiquitous robotic companion: Design and implementation of ubiquitous robotic service framework, ETRI Journal 27, 6, pp. 666–676.
Hall, D. L. and Llinas, J. (2001). Handbook of MultiSensor Data Fusion (CRC Press, Boca Raton).
Hanssens, N., Kulkarni, A., Tuchida, R. and Horton, T. (2002). Building agent-based intelligent workspaces, in Proceedings of the International Conference on Internet Computing (IC2002) (CSREA Press, Las Vegas, Nevada, USA), pp. 675–681.
Harter, A., Hopper, A., Steggles, P., Ward, A. and Webster, P. (2002). The anatomy of a context-aware application, Wireless Networks 8, 2–3, pp. 187–197.
Hemming, N. (2001). KaZaA, web site: www.kazaa.com.
Herianto, Sakakibara, T. and Kurabayashi, D. (2007). Artificial pheromone system using RFID for navigation of autonomous robots, Journal of Bionic Engineering 4, 4, pp. 245–253.
Hess, C. K., Roman, M. and Campbell, R. H. (2002). Building applications for ubiquitous computing environments, in Pervasive, pp. 16–29.
Hobbs, J. R. and Pan, F. (2004). An ontology of time for the semantic web, ACM Transactions on Asian Language Information Processing (TALIP) 3, 1, pp. 66–85.
Holz, T., Dragone, M., O'Hare, G. M. P., Martin, A. and Duffy, B. R. (2006). Mixed reality agents as museum guides, in ABSHL '06: Agent-Based Systems for Human Learning, AAMAS 2006 Workshop (ACM Press, New York, NY, USA).
Hoogendoorn, M., Klein, M. and Treur, J. (2008). Formal design and simulation of an ambient multi-agent system model for medicine usage management, in M. Muehlhaeuser, A. Ferscha and E. Aitenbichler (eds.), Constructing Ambient Intelligence: AmI-07 Workshop Proceedings (Springer Verlag), pp. 207–217.
Hristova, N., O'Hare, G. M. P. and Lowen, T. (2003). Agent-based ubiquitous systems: 9 lessons learnt, in Workshop on System Support for Ubiquitous Computing (UbiSys '03), 5th International Conference on Ubiquitous Computing (UbiComp 2003).
Huhns, M. and Singh, M. (2005). Service-oriented computing: Key concepts and principles, IEEE Internet Computing 9, 1, pp. 75–81.
Iglesias, C., Garijo, M., Gonzalez, J. and Velasco, J. (1996). A methodological proposal for multiagent systems development extending CommonKADS, in Proceedings of the Tenth Knowledge Acquisition for Knowledge-Based Systems Workshop.
Iglesias, C., Garijo, M., Gonzalez, J. and Velasco, J. (1998). Analysis and design of multiagent systems using MAS-CommonKADS, in Intelligent Agents IV: Agent Theories, Architectures and Languages (München, Germany), pp. 313–327.
Inc, N. (2002). The Napster homepage, online: http://www.napster.com.
Intille, S. S. (2002). Designing a home of the future, IEEE Pervasive Computing 1, 2, pp. 76–82.
J., S. and A., H. (2002). Novel interactive control interface for centaur-like service robot, in Proceedings of the 15th IFAC World Congress on Automatic Control (Barcelona, Spain).
JARA (2008). Japan Robot Association, http://www.jara.jp/.
Jennings, N. and Bussmann, S. (2003). Agent-based control systems, IEEE Control Systems Magazine 23, 3, pp. 61–74.
Jonker, C. and Treur, J. (2002). Compositional verification of multi-agent systems: a formal analysis of pro-activeness and reactiveness, International Journal of Cooperative Information Systems 11, pp. 51–92.
Kahan, J. and Rapoport, A. (1984). Theories of Coalition Formation (Lawrence Erlbaum Associates Publishers).
Kakas, A. and Moraitis, P. (2003). Argumentation based decision making for autonomous agents, in AAMAS '03: Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems (ACM Press, New York, NY, USA), ISBN 1-58113-683-8, pp. 883–890.
Kaplan, F. (2005). Everyday robotics: robots as everyday objects, in sOc-EUSAI '05: Proceedings of the 2005 Joint Conference on Smart Objects and Ambient Intelligence (ACM, New York, NY, USA), pp. 59–64.
Karniely, H. and Siegelmann, H. T. (2000). Sensor registration using neural networks, IEEE Transactions on Aerospace and Electronic Systems 36, 1, pp. 85–101.
Kato, H. and Billinghurst, M. (1999). Marker tracking and HMD calibration for a video-based augmented reality conferencing system, in Proceedings of the Second International IEEE Workshop on Augmented Reality – IWAR '99 (San Francisco, California, USA).
Ketchpel, S. (1994). Forming coalitions in the face of uncertain rewards, in Proceedings of the National Conference on Artificial Intelligence (Seattle, WA), pp. 414–419.
Kidd, C. and Breazeal, C. (2005). Sociable robot systems for real-world problems, in Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication (ROMAN 2005), pp. 353–358.
Kim, J.-H. (2005). Ubiquitous robot, in Computational Intelligence, Theory and Applications, Vol. 33
(Springer Berlin / Heidelberg).
Kirubarajan, T., Bar-Shalom, Y., Pattipati, K. R. and Kadar, I. (2000). Ground target tracking with variable structure IMM estimator, IEEE Transactions on Aerospace and Electronic Systems 36, 1, pp. 392–400.
Klein, J., Moon, Y. and Picard, R. W. (1999). This computer responds to user frustration, in CHI '99: CHI '99 Extended Abstracts on Human Factors in Computing Systems (ACM, New York, NY, USA), pp. 242–243.
Kopanas, I., Avouris, N. and Daskalaki, S. (2002). The role of domain knowledge in a large scale data mining project, Lecture Notes in Artificial Intelligence 2308, pp. 288–299.
Kraus, S., Nirkhe, M. and Sycara, K. P. (1993). Reaching agreements through argumentation: a logical model (preliminary report), in Proceedings of the 12th International Workshop on Distributed Artificial Intelligence (Hidden Valley, Pennsylvania), pp. 233–247.
LaMarca, A., Brunette, W., Koizumi, D., Lease, M., Sigurdsson, S. B., Sikorski, K., Fox, D. and Borriello, G. (2002). PlantCare: An investigation in practical ubiquitous systems, in UbiComp '02: Proceedings of the 4th International Conference on Ubiquitous Computing (Springer-Verlag, London, UK), pp. 316–332.
Lech, T. C. and Wienhofen, L. W. M. (2005). AmbieAgents: a scalable infrastructure for mobile and context-aware information services, in AAMAS '05: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems (ACM, New York, NY, USA), ISBN 1-59593-093-0, pp. 625–631.
Lee, W., Lee, J. and Woo, W. (2005). TARBoard: Tangible augmented reality system for tabletop game environment, in Proceedings of PerGames 2005, 2nd International Workshop on
Pervasive Gaming Applications.
Lesser, V. (1999). Cooperative multiagent systems: A personal view of the state of the art, IEEE
Transactions on Knowledge and Data Engineering 11, 1.
Lin, C.-y. and Hsu, J. Y.-j. (2006). IPARS: Intelligent portable activity recognition system via everyday objects, human movements, and activity duration, in Modeling Others from Observations (MOO 2006): Papers from the 2006 AAAI Workshop (AAAI Press, Boston, Massachusetts, USA), pp. 44–52.
Long, S., Aust, D., Abowd, G. and Atkeson, C. (1996). Cyberguide: prototyping context-aware mobile applications, in Conference Companion on Human Factors in Computing Systems (CHI '96) (ACM Press, Vancouver, British Columbia, Canada), ISBN 0-89791-832-0, pp. 293–294.
Look, G. and Shrobe, H. (2004). A plan-based mission control center for autonomous vehicles, in IUI '04: Proceedings of the 9th International Conference on Intelligent User Interfaces (ACM Press, New York, NY, USA), ISBN 1-58113-815-6, pp. 277–279.
Lupu, E. and Sloman, M. (1999). Conflicts in policy-based distributed systems management, IEEE Trans. Software Eng. 25, 6, pp. 852–869.
Ma, Z., Iman, F., Lu, P., Sears, R., Kong, L., Rokanuzzaman, A., McCollor, D. and Benson, S. (2007). A comprehensive slagging and fouling prediction tool for coal-fired boilers and its validation/application, Fuel Process. Technol. 88, pp. 1035–1043.
Malheiro, B. and Oliveira, E. (2000). Solving conflicting beliefs with a distributed belief revision approach, in IBERAMIA-SBIA '00: Proceedings of the International Joint Conference, 7th Ibero-American Conference on AI (Springer-Verlag, London, UK), ISBN 3-540-41276-X, pp. 146–155.
Mamei, M., Zambonelli, F., Moro, G., Sartori, C. and Singh, M. P. (2004). Location-based and content-based information access in mobile peer-to-peer computing: the TOTA approach, in Agents and Peer-to-Peer Computing. Second International Workshop, AP2PC 2003. Revised and Invited Papers (Lecture Notes in Artificial Intelligence Vol. 2872) (Springer-Verlag, Berlin, Germany).
Mangina, E. (2003). Application of intelligent agents in power industry: promises and complex issues, Lecture Notes in Artificial Intelligence 2691, pp. 564–573.
Manyika, J. and Durrant-Whyte, H. (1994). Data Fusion and Sensor Management: A Decentralized Information-Theoretic Approach (Ellis Horwood).
Martin, A., O'Hare, G. M. P., Duffy, B. R., Schon, B. and Bradley, J. F. (2005). Maintaining the identity of dynamically embodied agents, in Proceedings of the 5th International Working Conference on Intelligent Virtual Agents (IVA 2005) (Springer-Verlag, London, UK), pp. 454–465.
Martinez, G., Garcia, F. and Gomez-Skarmeta, A. (2006). Web and Information Security, chap. Policy
based Management of Web Information Systems Security: an Emerging Technology (Idea
Group).
McCarthy, J. (1993). Notes on formalizing context, in Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI '93), Vol. 1 (Morgan Kaufmann), pp. 555–560.
McCarthy, J. and Hayes, P. J. (1969). Some philosophical problems from the standpoint of artificial intelligence, Machine Intelligence 4, pp. 463–502.
Micheloni, C., Foresti, G. L. and Snidaro, L. (2003). A co-operative multicamera system for video surveillance of parking lots, in Intelligent Surveillance Systems Symposium by the IEE (London), pp. 21–24.
Milgram, P., Rastogi, A. and Grodski, J. J. (1995). Telerobotic control using augmented reality, in Proceedings of the 4th IEEE International Workshop on Robot and Human Communication (RO-MAN 1995), pp. 21–29.
Mitra, N. and Lafon, Y. (2007). SOAP Version 1.2 Part 0: Primer (Second Edition), http://www.w3.org/TR/soap12-part0/.
Molina, J., Garcia, J., Jimenez, F. and Casar, J. (2002). Surveillance multisensor management with fuzzy evaluation of sensor task priorities, Engineering Applications of Artificial Intelligence.
Molina, J., García, J., Jimenez, F. and Casar, J. (2003). Cooperative management of a net of intelligent surveillance agent sensors, International Journal of Intelligent Systems 3, 18, pp. 279–307.
Moro, G., Sartori, C. and Singh, M. P. (2004). Agents and Peer-to-Peer Computing. Second International Workshop, AP2PC 2003. Revised and Invited Papers (Lecture Notes in Artificial Intelligence Vol. 2872) (Springer-Verlag, Berlin, Germany).
Nabaa, N. and Bishop, R. H. (1999). Solution to a multisensor tracking problem with sensor registration errors, IEEE Transactions on Aerospace and Electronic Systems 35, 1, pp. 354–363.
Nguyen, N. T., Venkatesh, S., West, G. and Bui, H. H. (2003). Multiple camera coordination in a surveillance system, Acta Automatica Sinica 29, 3, pp. 408–421.
Nieto-Carvajal, I., Botia, J. A., Ruiz, P. M. and Gomez-Skarmeta, A. F. (2004). Implementation and evaluation of a location-aware wireless multi-agent system, in EUC, pp. 528–537.
Nwana, H. S., Ndumu, D. T., Lee, L. C., Collis, J. C. and Re, I. I. (1999). Zeus: A tool-kit for building distributed multi-agent systems, Applied Artificial Intelligence Journal 13, pp. 129–186.
O'Hare, G. M. P., Collier, R., Conlon, J. and Abbas, S. (1998). Agent Factory: An environment for constructing and visualising agent communities, in Proceedings of the Ninth Irish Conference on Artificial Intelligence and Cognitive Science – AICS '98 (Dublin, Ireland), pp. 249–261.
O'Hare, G. M. P., Duffy, B. R. and Campbell, A. G. (2004). NeXuS: Mixed reality experiments with embodied intentional agents, in Proceedings of the Seventeenth International Conference on Computer Animation and Social Agents – CASA 2004 (Geneva, Switzerland).
O'Neill, E., Klepal, M., Lewis, D., O'Donnell, T., O'Sullivan, D. and Pesch, D. (2005). A testbed for evaluating human interaction with ubiquitous computing environments, in Proceedings of the First International Conference on Testbeds and Research Infrastructures for the DEvelopment of NeTworks and COMmunities (TRIDENTCOM '05) (IEEE Computer Society, Washington, DC, USA), pp. 60–69.
Padgham, L. and Winikoff, M. (2004). Developing intelligent agent systems: a practical guide (John
Wiley and Sons).
Parsons, S., Sierra, C. and Jennings, N. R. (1998). Agents that reason and negotiate by arguing, Journal of Logic and Computation 8, 3, pp. 261–292.
Parsons, S. D. and Jennings, N. R. (1996). Negotiation through argumentation – a preliminary report, in Proceedings of the Second International Conference on Multi-Agent Systems (ICMAS '96) (Kyoto, Japan), pp. 267–274.
Patricio, M., Carbo, J., Perez, O., Garcia, J. and Molina, J. (2007). Multi-agent framework in visual sensor networks, EURASIP Journal on Advances in Signal Processing, Article ID 98639, 21 pages.
Paulidis, I. and Morellas, V. (2002). Two examples of indoor and outdoor surveillance systems (Kluwer Academic Publishers, Boston).
Perich, F., Joshi, A., Finin, T. and Yesha, Y. (2004). On data management in pervasive computing environments, IEEE Transactions on Knowledge and Data Engineering.
Perkins, C., Belding-Royer, E. and Das, S. (2003). RFC 3561: Ad hoc on-demand distance vector
(AODV) routing, http://www.ietf.org/rfc/rfc3561.txt.
Plaza, E., Arcos, J. and Martín, F. (1996). Cooperation Modes among Case-Based Reasoning Agents, in Proc. ECAI '96.
Pokahr, A., Braubach, L. and Lamersdorf, W. (2003). Jadex: Implementing a BDI-infrastructure for JADE agents, EXP – In Search of Innovation (Special Issue on JADE) 3, 3, pp. 76–85.
Prakken, H. (2006). Formal systems for persuasion dialogue, Knowledge Engineering Review 21, 2, pp. 163–188.
Prakken, H. and Sartor, G. (1996). A dialectical model of assessing conflicting arguments in legal reasoning, Artificial Intelligence and Law 4, 3–4, pp. 331–368.
Prendinger, H., Dohi, H., Wang, H., Mayer, D. and Ishizuka, M. (2004). Empathic embodied interfaces: Addressing users' affective state, in Proceedings of the Tutorial and Research Workshop on Affective Dialogue Systems (Springer Verlag), pp. 53–64.
Priest, G. (2002). Paraconsistent Logic, in D. M. Gabbay (ed.), Handbook of Philosophical Logic
Volume 6, 2nd edn. (Kluwer Academic Pub.).
Raiffa, H. (1984). The Art and Science of Negotiation (Harvard Univ. Press, Cambridge, Massachusetts).
Ramanujam, S. and Capretz, M. A. M. (2004). Design of a multi-agent system for autonomous database administration, International Journal of Intelligent Systems, pp. 1167–1170.
Ranganathan, A., Al-Muhtadi, J. and Campbell, R. H. (2004). Reasoning about uncertain contexts in pervasive computing environments, IEEE Pervasive Computing 3, 2, pp. 62–70.
Rao, A. and Georgeff, M. (1995a). BDI agents: from theory to practice, in Proceedings of the First International Conference on Multi-Agent Systems (ICMAS '95) (The MIT Press, Cambridge, MA, USA), pp. 312–319.
Rao, A. S. and Georgeff, M. P. (1995b). BDI agents: From theory to practice, in ICMAS, pp. 312–319.
Reiter, R. (2001). Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems (MIT Press).
Reynolds, V., Cahill, V. and Senart, A. (2006). Requirements for an ubiquitous computing simulation
and emulation environment, in First International Conference on Integrated Internet Ad hoc
and Sensor Networks (InterSense 2006) (OCP Science).
Rickenberg, R. and Reeves, B. (2000). The effects of animated characters on anxiety, task performance, and evaluations of user interfaces, in CHI '00: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (ACM, New York, NY, USA), pp. 49–56.
Riva, G., Vatalaro, F., Davide, F. and Alcaniz, M. (2005). Ambient Intelligence (IOS Press).
Roman, M., Hess, C., Cerqueira, R., Ranganathan, A., Campbell, R. and Nahrstedt, K. (2002). Gaia: A Middleware Infrastructure to Enable Active Spaces, IEEE Pervasive Computing 1, 4, pp. 74–83.
Roman, M., Hess, C., Cerqueira, R., Ranganathan, A., Campbell, R. H. and Nahrstedt, K. (2002). Gaia: a middleware platform for active spaces, ACM SIGMOBILE Mobile Computing and Communications Review 6, 4, pp. 65–67.
Rosenschein, J. S. and Zlotkin, G. (1994). Rules of Encounter. Designing Conventions for Automated
Negotiation among Computers (MIT Press).
Royer, E. and Toh, C. (1999). A Review of Current Routing Protocols for Ad Hoc Mobile Wireless Networks, Personal Communications, IEEE [see also IEEE Wireless Communications] 6, 2, pp. 46–55.
Salber, D., Dey, A. K. and Abowd, G. D. (1999a). The context toolkit: aiding the development of context-enabled applications, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '99) (ACM Press, New York, NY, USA), ISBN 0-201-48559-1, pp. 434–441.
Salber, D., Dey, A. K., Orr, R. J. and Abowd, G. D. (1999b). Designing for ubiquitous computing: A
case study in context sensing, Tech. Rep. GIT-GVU-99-29, Georgia Institute of Technology.
Sas, C. and O'Hare, G. M. P. (2001). Defining and measuring presence in non-virtual environments: An experimental study, in Proceedings of the 4th International Workshop on Presence (Philadelphia, PA, USA).
Schiaffino, S. and Amandi, A. (2004). User interface agent interaction: personalization issues, Int. J. Human-Computer Studies 60, pp. 129–148.
Schilit, B., Adams, N. and Want, R. (1994). Context-aware computing applications, in Proceedings of IEEE Workshop on Mobile Computing Systems and Applications (Santa Cruz, CA, USA), pp. 85–90.
Schmidt, A. (2005). Interactive context-aware systems – interacting with ambient intelligence, in G. Riva, F. Vatalaro, F. Davide and M. Alcaniz (eds.), Ambient Intelligence (IOS Press), pp. 159–178.
Schmidt, A., Beigl, M. and Gellersen, H. (1999). There is more to context than location, Computers & Graphics Journal 23, 19, pp. 893–902.
Schmidt, A., Kortuem, G., Morse, D. and Dey, A. (2001). Situated interaction and context-aware computing, Personal and Ubiquitous Computing 5, 1, pp. 1–3.
Scott, C. (1994). Improved GPS positioning for motor vehicles through map matching, in Proceedings of ION GPS-94: 7th International Technical Meeting of the Satellite Division of the Institute of Navigation, pp. 1391–1400.
Shehory, O. and Kraus, S. (1995). Feasible formation of stable coalitions among autonomous agents in general environments, Computational Intelligence Journal.
Shoham, Y. and Tennenholtz, M. (1995). On social laws for artificial agent societies: off-line design, Artificial Intelligence 73, pp. 231–252.
Shoji, M., Miura, K. and Konno, A. (2006). U-Tsu-Shi-O-Mi: The virtual humanoid you can reach, in Proceedings of the Thirty-third International Conference on Computer Graphics and Interactive Techniques – SIGGRAPH 2006 (Boston, Massachusetts, USA).
Schreiber, G., Akkermans, H., Anjewierden, A., de Hoog, R., Shadbolt, N., de Velde, W. V. and Wielinga, B. (2000). Knowledge Engineering and Management: the CommonKADS Methodology (MIT Press).
Siebel, N. and Maybank, S. (2004). The ADVISOR visual surveillance system, in ECCV 2004 Workshop on Applications of Computer Vision (ACV).
Simmons, R., Goldberg, D., Goode, A., Montemerlo, M., Roy, N., Sellner, B., Urmson, C., Schultz, A., Abramson, M., Adams, W., Atrash, A., Bugajska, M., Coblenz, M., MacMahon, M., Perzanowski, D., Horswill, I., Zubek, R., Kortenkamp, D., Wolfe, B., Milman, T. and Maxwell, B. (2003). GRACE: An autonomous robot for the AAAI robot challenge, AI Magazine 24, pp. 51–72.
Slater, M., Pertaub, D.-P. and Steed, A. (1999). Public speaking in virtual reality: facing an audience of avatars, Computer Graphics and Applications, IEEE 19, 2, pp. 6–9.
Smith, M. K., Welty, C. and McGuinness, D. L. (2004). OWL web ontology language guide, W3C recommendation, Tech. rep., W3C.
Smithson, A., Moreau, L., Moro, G. and Koubarakis, M. (2003). Engineering an agent-based peer-to-peer resource discovery system, in Agents and Peer-to-Peer Computing. First International Workshop, AP2PC 2002. Revised and Invited Papers (Lecture Notes in Artificial Intelligence Vol. 2530) (Springer-Verlag, Berlin, Germany).
Soldatos, J., Pandis, I., Stamatis, K., Polymenakos, L. and Crowley, J. (2007). Agent based middleware infrastructure for autonomous context-aware ubiquitous computing services, Computer Communications 30, pp. 577–591.
Stilman, M., Chestnutt, J., Michel, P., Nishiwaki, K., Kagami, S. and Kuffner, J. (2005). Augmented
reality for robot development and experimentation, Tech. Rep. CMU-RI-TR-05-55, Robotics
Institute, Carnegie Mellon University.
Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions, Journal of the Royal Statistical Society, Series B 36, 2, pp. 111–147.
Strang, T. and Linnhoff-Popien, C. (2004). A context modeling survey, in Workshop on Advanced Context Modelling, Reasoning and Management at the Sixth International Conference on Ubiquitous Computing (UbiComp 2004) (Nottingham, England).
Strang, T., Linnhoff-Popien, C. and Frank, K. (2003). CoOL: A context ontology language to enable contextual interoperability, in Proceedings of 4th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems (DAIS 2003) (Springer Verlag), pp. 236–247.
Sutherland, I. E. (1998). A head-mounted three dimensional display, pp. 295–302.
Sycara, K. (1990). Persuasive argumentation in negotiation, Theory and Decision 28, 3, pp. 203–242.
Sycara, K., Paolucci, M., Velsen, M. V. and Giampapa, J. (2003). The RETSINA MAS infrastructure, Autonomous Agents and Multi-Agent Systems 7, 1–2, pp. 29–48.
Syukur, E., Loke, S. and Stanski, P. (2005). Methods for policy conflict detection and resolution in
pervasive computing environments, in Policy Management for Web workshop in conjunction
with WWW2005 Conference (Chiba, Japan).
Tamura, H., Yamamoto, H. and Katayama, A. (2001). Mixed reality: future dreams seen at the border between real and virtual worlds, Computer Graphics and Applications, IEEE 21, 6, pp. 64–70.
Thanh, D. V. and Jorstad, I. (2005). A service-oriented architecture framework for mobile services, in Telecommunications, 2005. Advanced Industrial Conference on Telecommunications / Service Assurance with Partial and Intermittent Resources Conference / E-Learning on Telecommunications Workshop. AICT/SAPIR/ELETE 2005. Proceedings, pp. 65–70.
Thomas, B., Close, B., Donoghue, J., Squires, J., De Bondi, P., Morris, M. and Piekarski, W. (2000). ARQuake: an outdoor/indoor augmented reality first person application, in Wearable Computers, 2000. The Fourth International Symposium on, pp. 139–146.
Tiba, F. and Capretz, M. (2006). An overview of the analysis and design of SIGMA: Supervisory Intelligent Multi-Agent System Architecture, in Information and Communication Technologies, 2006. ICTTA '06. 2nd, Vol. 2, pp. 3052–3057.
Tveit, A. (2001). A survey of agent-oriented software engineering, in Proc. of the First NTNU CSGS Conference (http://www.amundt.org), URL http://www.abiody.com/jfipa/publications/AgentOrientedSoftwareEngineering/.
Valera, M. and Velastin, S. (2005). Intelligent distributed surveillance systems: a review, 152, pp. 192–204.
van Breemen, A., Yan, X. and Meerbeek, B. (2005). iCat: an animated user-interface robot with personality, in Proceedings of the 4th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2005) (ACM, New York, NY, USA), pp. 143–144.
View, T. (2002). Reconfigurable context-sensitive middleware for pervasive computing, Pervasive Computing, IEEE 1, 3, pp. 33–40.
Wagner, D., Billinghurst, M. and Schmalstieg, D. (2006). How real should virtual characters be? in ACE '06: Proceedings of the 2006 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology (ACM, New York, NY, USA), p. 57.
Waltz, E. and Llinas, J. (1990). Multisensor Data Fusion (Artech House Inc, Norwood, Massachusetts, USA).
Wang, X. H., Zhang, D. Q., Gu, T. and Pung, H. K. (2004). Ontology based context modeling and reasoning using OWL, in Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications Workshops (PERCOMW '04) (IEEE Computer Society, Washington, DC, USA), ISBN 0-7695-2106-1, p. 18.
Want, R., Hopper, A., Falcao, V. and Gibbons, J. (1992). The active badge location system, ACM Transactions on Information Systems (TOIS) 10, 1, pp. 91–102.
Want, R. and Pering, T. (2005). System challenges for ubiquitous & pervasive computing, in Proceedings of the 27th International Conference on Software Engineering (ICSE '05) (ACM Press, New York, NY, USA), ISBN 1-59593-963-2, pp. 9–14.
Want, R., Schilit, B. N., Adams, N. I., Gold, R., Petersen, K., Goldberg, D., Ellis, J. R. and Weiser, M. (1995). An overview of the PARCTAB ubiquitous computing experiment, Personal Communications 2, 6, pp. 28–43.
Weiser, M. (1991). The Computer for the Twenty-First Century, Scientific American 265, 3, pp. 94–104.
Wilensky, U. et al. (1999). NetLogo, Evanston, IL.
Witten, I. and Frank, E. (2005). Data Mining: Practical Machine Learning Tools and Techniques
(Morgan Kaufmann Publishers).
Wohltorf, J., Cissee, R., Rieger, A. and Scheunemann, H. (2004). Berlintainment: An agent-based
serviceware framework for context-aware services, in Proceedings of 1st International Symposium on Wireless Communication Systems - ISWCS 2004.
Wood, M. F. and DeLoach, S. A. (2000). An overview of the multiagent systems engineering methodology, in The First International Workshop on Agent-Oriented Software Engineering (AOSE 2000), pp. 207–221.
Wooldridge, M. (2000). Reasoning about Rational Agents (The MIT Press, Cambridge, Massachusetts).
Wooldridge, M. (2002). An Introduction to Multiagent Systems (John Wiley and Sons Ltd, West Sussex, England), 348 pp.
Wooldridge, M. and Jennings, N. (1995). Intelligent agents: Theory and practice, Knowledge Engineering Review 10, pp. 115–152.
Wooldridge, M., Jennings, N. R. and Kinny, D. (2000). The Gaia methodology for agent-oriented analysis and design, Journal of Autonomous Agents and Multi-Agent Systems 3, pp. 285–312.
Wu, H., Siegel, M. and Ablay, S. (2002). Sensor fusion for context understanding, in Proceedings of
IEEE Instrumentation and Measurement Technology Conference (Anchorage, AK, USA).
Xu, M., Lowey, L. and Orwell, J. (2004). Architecture and algorithms for tracking football players with multiple cameras, in IEEE Workshop on Intelligent Distributed Surveillance Systems (London), pp. 51–56.
Yang, B. and Garcia-Molina, H. (2003). Designing a Super-Peer Network, in Proceedings of the 19th International Conference on Data Engineering (ICDE), p. 49.
Yang, Y., Hassanein, H. and Mawji, A. (2006). Efficient Service Discovery for Wireless Mobile Ad Hoc Networks, in 4th ACS/IEEE International Conference on Computer Systems and Applications, pp. 571–578.
Yau, S. S. and Karim, F. (2004). A context-sensitive middleware for dynamic integration of mobile devices with network infrastructures, J. Parallel Distrib. Comput. 64, 2, pp. 301–317.
You, C.-w., Chen, Y.-C., Chiang, J.-R., Huang, P., Chu, H.-h. and Lau, S.-Y. (2006). Sensor-enhanced mobility prediction for energy-efficient localization, in Proceedings of Third Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON 2006), Vol. 2 (Reston, VA, USA), ISBN 1-58113-793-1, pp. 565–574.
Young, J. E. and Sharlin, E. (2006). Sharing spaces with robots: An integrated environment for human-robot interaction, in Proceedings of the First International Symposium on Intelligent Environments – ISIE '06 (Cambridge, England).
Young, J. E., Xin, M. and Sharlin, E. (2007). Robot expressionism through cartooning, in Proceedings
of the 2007 ACM/IEEE International Conference on Human-Robot Interaction (Arlington,
Virginia, USA).
Yuan, X., Sun, Z., Varol, Y. and Bebis, G. (2003). A distributed visual surveillance system, in IEEE Conf. on Advanced Video and Signal based Surveillance (Florida), pp. 199–205.
Zambonelli, F., Jennings, N. R. and Wooldridge, M. (2003). Developing multiagent systems: The Gaia methodology, ACM Transactions on Software Engineering and Methodology 12, pp. 317–370.
Zanbaka, C. A., Ulinski, A. C., Goolkasian, P. and Hodges, L. F. (2007). Social responses to virtual humans: implications for future interface design, in CHI '07: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (ACM, New York, NY, USA), pp. 1561–1570.
Zhou, Y., Leung, H. and Chan, K. (1998). A two-step extended Kalman filter fusion approach for misaligned sensors, in Proceedings of the FUSION '98 International Conference (Las Vegas), pp. 54–58.
Ziemke, T. (2003). What's that thing called embodiment? in Proceedings of the 25th Annual Meeting of the Cognitive Science Society.
