
UNU-IIST

International Institute for Software Technology

An Ethical Principle for Ubiquitous Communication
G M Reed and J W Sanders
May 2007

UNU-IIST Report No. 373 T


UNU-IIST and UNU-IIST Reports

UNU-IIST (United Nations University International Institute for Software Technology) is a Research and
Training Centre of the United Nations University (UNU). It is based in Macao, and was founded in
1991. It started operations in July 1992. UNU-IIST is jointly funded by the government of Macao and
the governments of the People’s Republic of China and Portugal through a contribution to the UNU
Endowment Fund. As well as providing two-thirds of the endowment fund, the Macao authorities also
supply UNU-IIST with its office premises and furniture and subsidise fellow accommodation.

The mission of UNU-IIST is to assist developing countries in the application and development of software
technology.

UNU-IIST contributes through its programmatic activities:

1. Advanced development projects, in which software techniques supported by tools are applied,
2. Research projects, in which new techniques for software development are investigated,
3. Curriculum development projects, in which courses of software technology for universities in devel-
oping countries are developed,
4. University development projects, which complement the curriculum development projects by aiming
to strengthen all aspects of computer science teaching in universities in developing countries,
5. Schools and Courses, which typically teach advanced software development techniques,
6. Events, in which conferences and workshops are organised or supported by UNU-IIST, and
7. Dissemination, in which UNU-IIST regularly distributes to developing countries information on
international progress of software technology.

Fellows, who are young scientists and engineers from developing countries, are invited to participate actively in all these projects; it is through this work that they are trained.

At present, the technical focus of UNU-IIST is on formal methods for software development. UNU-IIST is an internationally recognised centre in the area of formal methods. However, no software technique is universally applicable, and we are prepared to choose complementary techniques for our projects where necessary.

UNU-IIST produces a report series. Reports are either Research (R), Technical (T), Compendia (C) or Administrative (A). They are records of UNU-IIST activities and of research and development achievements. Many of the reports are also published in conference proceedings and journals.

Please write to UNU-IIST at P.O. Box 3058, Macao or visit UNU-IIST’s home page: http://www.iist.unu.edu,
if you would like to know more about UNU-IIST and its report series.

G. M. Reed, Director
UNU-IIST
International Institute for
Software Technology

P.O. Box 3058
Macao

Abstract
This paper introduces a normative principle for the behaviour of contemporary computing and
communication systems and considers some of its consequences. The principle, named the prin-
ciple of distribution, says that in a distributed multi-agent system control resides as much as
possible with the individuals constituting the system, rather than in centralised agents; and
when that is infeasible or becomes inappropriate due to environmental changes, control evolves
upwards from the individuals to an appropriate intermediate level rather than being imposed
from above.
The setting for the work is the dynamically changing global space resulting from ubiquitous com-
munication. Accordingly the paper begins by determining the characteristics of the distributed
multi-agent space it spans. It then fleshes out the principle of distribution, with examples from
daily life as well as from Computer Science. The case is made for the principle of distribution
to work at various levels of abstraction of system behaviour: to inform the high-level discussion
that ought to precede the more low-level concerns of technology, protocols and standardisation
but also to facilitate those lower levels.
Of the more substantial applications of the principle of distribution, a technical example concerns the design of secure ad hoc networks of mobile devices, achievable without any form of centralised authentication or identification, but in a solely distributed manner. Here the context is how the principle can be used to provide new and provably secure protocols for genuinely ubiquitous communication. A second—more managerial—example concerns the distributed production and management of open source software, and a third investigates some pertinent questions involving the dynamic restructuring of control in distributed systems, important in times of disaster or malevolence.

Report No. 373, May 2007 UNU-IIST, P.O. Box 3058, Macao
Mike Reed is the Director of UNU-IIST. He is an Emeritus Fellow of St Edmund Hall, Oxford
University, where from 1986 to 2005, he was the General Electric Company Fellow in Computa-
tion. His previous experience includes terms as a Senior Research Associate at NASA Goddard
Space Flight Center and the US Naval Research Laboratory, and as a Manager of Postdoc-
toral Programs for the US National Science Foundation. He is a former Research Fellow of the
American Mathematical Society and former Professor of Mathematics and Computer Science
at Ohio University, where he was also the Associate Director of the Institute for Medicine and
Mathematics. On three occasions in the 1970’s, he was an Exchange Scholar to Eastern Europe
(Poland and Czechoslovakia) for the US National Academy of Sciences. He has been a Visiting
Professor at the University of Maryland, the US Naval Academy, Tulane University, and the
University of Paris. He has given over two hundred research presentations at Universities, Re-
search laboratories, and International research meetings. In addition, he has been the organizer
of several international conferences in both Mathematics and in Computer Science. His current
work includes the design and analysis of fault-tolerant embedded systems, and automated sup-
port for reasoning about computer security. He holds a Doctorate in Pure Mathematics from
Auburn University (USA) and a Doctorate in Computation from Oxford University (UK).
Jeff Sanders is Senior Research Fellow at UNU-IIST, having recently joined from the Program-
ming Research Group at Oxford. His interests lie largely in Formal Methods.

Copyright © 2007 by UNU-IIST
Contents

1 Introduction
2 Distributed multi-agent systems
3 The principle of distribution
4 Discussion and brief examples
5 Application of the principle
5.1 Human-centric computing and FORWARD
5.2 Open source
5.3 Response to adversity
6 Conclusions and future research
7 Acknowledgements
8 Notes


1 Introduction

We live in an age in which information and communications technologies span the globe, provid-
ing users with mobile and real-time access to information, services and each other. Increasingly
the services offered are becoming not mere luxury but an established part of our everyday lives;
a typical example is provided by the growing importance of e-services like e-government.1 The
resulting structure goes under a multitude of names;2 here we refer to ubiquitous communication
in the comsphere. By using ‘ubiquitous communication’ we mean to emphasise the importance
of both synchronous and asynchronous communications and the growing mobility of the devices;
and by using ‘comsphere’ (rather than the more accepted ‘cyberspace’) we mean to emphasise the
difference that ubiquitous communication brings to the internet: the dynamic reconfigurability
not only of communications but also of actions.

The twin features of globality3 and mobility provide distinct opportunities, but also reveal
distinct difficulties. The former enables a global distribution of resources, but regardless of
boundaries and perhaps therefore of propriety; the latter empowers users in remote or transient
locations, but with an increased risk of insecurity. As has been stressed by the International
Telecommunication Union at its World Summit on the Information Society [20, 21, 22, 23], means
are needed to increase globality by increasing the penetration of ubiquitous communication
in developing nations whilst simultaneously making the comsphere more secure. In order to
address those specific points we introduce a normative, or ethical, principle that extends to
the comsphere the style of reasoning by now established in the various fields of applied ethics.
But since the principle has application far beyond those specific topics, and for that matter far
outside Computer Science, it is introduced from first principles in a general setting.

Important features of any principle like that introduced here are its consistency with the standard
normative principles of ethics and the breadth and depth of its applicability. We take care to
address both points (in Sections 3 and 5, respectively).

In Section 2 we summarise the kind of system that abstracts the important features of the
comsphere: a distributed multi-agent system able dynamically to reconfigure its actions in
response to external change. Then, in Section 3, we introduce the general but novel ethical
principle, the principle of distribution, in the context of the principles of (standard) Ethics and
of applied ethics; discussion of the principle is enhanced by examples from everyday life in
Section 4. Section 5 contains a discussion of the principle applied to more dynamic systems and to the two major examples mentioned above—security in the comsphere and making software more openly available—in spite of their apparent incompatibility.

2 Distributed multi-agent systems

The kind of system facilitated by ubiquitous communication in particular, and by contemporary Information and Communication Technology in general, is composed of (typically many) spatially-distributed agents able dynamically to configure their communications and the way
actions are performed. Distributed multi-agent systems, and the notions on which they are
based, have been studied in some depth in Computer Science5 though with more emphasis to
date on systems whose design (in particular the way in which actions are executed) remains
static. For the purposes of the present paper it suffices to settle on a system composed of agents
that interact with each other dynamically, either pairwise or in larger groups. If a system action
is performed autonomously by the agents then it is said to be (fully) distributed; if it requires
coordination through one particular agent then it is said to be (fully) centralised. Evidently
those are extremes in a spectrum of possibilities. If a system supports one primary action (with
others being components in its execution), as is the case for the applications in this paper, then
we say that the system is distributed or centralised according to whether or not that primary
action is.
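The distinction can be made concrete with a toy model (the representation and names below are ours, purely for illustration, not a formalism from this report): an action is modelled by the set of pairwise interactions its execution requires, and is classified as centralised exactly when some single agent mediates every interaction.

```python
def coordinators(interactions):
    """Agents that take part in every interaction required by the action."""
    agents = set().union(*interactions)
    return {a for a in agents if all(a in pair for pair in interactions)}

def classify(interactions):
    # Centralised iff some single agent mediates every interaction.
    return "centralised" if coordinators(interactions) else "distributed"

# A star topology: every interaction passes through the agent 'hub'.
star = [{"hub", "a"}, {"hub", "b"}, {"hub", "c"}]
# A ring of peers: no single agent mediates everything.
ring = [{"a", "b"}, {"b", "c"}, {"c", "a"}]

print(classify(star))  # centralised
print(classify(ring))  # distributed
```

Real systems of course occupy intermediate points of the spectrum; the model only marks its extremes.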

Such a system is normally subject to environmental (or external) influences, which we model as
the setting of system parameters whose values lie, though within determined limits, beyond the
influence of the system itself. It is in response to such influences that the system configures itself
dynamically. How it does so is not our concern here; we may think of the agents, individually
and in groups, as having strategies that enable them to optimise their own private concerns
in the face of external adversity and competition from other agents. Instead we are concerned
with the system-wide principles behind such strategies, and in the next section introduce a
normative principle for them. Whilst each agent might be thought of as being guided by the
normative principles of (standard) Ethics, the new principle acts at the system level, offering a
range of behaviours and analyses that would not be possible were the system to be modelled as
an individual agent guided by (standard) Ethics.

To appreciate the difference between distribution and centralisation—the extremes of control in a distributed multi-agent system—consider the games of soccer and baseball. In each case the
system under consideration consists of a side whose players constitute agents. The opponent
side forms part of its environment, as do weather and other conditions. In each case the primary
system action is to score, something that is achieved by the agents performing component
actions (like delivering the ball—whether by kicking, pitching or batting, as appropriate—to
a certain position on the field with a certain trajectory and speed). Soccer might be said to
be more distributed because no central agent is responsible for a team’s play from moment to
moment: the ball is passed between the distributed players following no centralised ‘algorithm’
but according to decisions made ‘locally’ by individual players, in spite of the ‘global’ aim of
scoring goals. Indeed therein lies much of the interest of the game: how can such local decisions
reach a globally desirable event? (Observe that the use of plays-in-a-down in American football
imposes partial centralisation on such distribution.) By comparison in baseball there is far less
scope for distributed decision-making: the game evolves on the basis of centralised decisions
(except for double plays, and the routine decision by a fielder where to return the ball, and by
a runner on base whether or not to run for the next base, and if so how to do so).

It is interesting to note the effect a malicious team member would have in each style of game.
In the centralised game of baseball, were the pitcher or catcher in collusion with the opposition, the result would be disastrous. However, in the distributed game of soccer, a malevolent team member would be gradually marginalised (the most difficult case being the goalkeeper, although defenders could to some extent compensate).

We thus see that a major concern with centralised control is its fragility: if a central agent
becomes corrupted or fails then recovery of the entire system may be extremely difficult or even
impossible. There is also a concern of inefficiency of centralised systems: if each individual
in the system has to coordinate its activities with the central agent (as is typically the case)
then many communications may be required and bottlenecks may cripple the system. In spite
of those disadvantages, an advantage of centralised systems is that they are often conceptually
simpler to design and maintain. Recent examples in which distributed control has played an
essential rôle are (a) from the East: the use of cellphones in responding to the Tsunami disaster
and in organising demonstrations in the face of centralised resistance [44, 38] and (b) from the
West: in Pentagon defense4 [30].

As seen from the sport example, the centralised—distributed spectrum is important for quali-
fying forms of control in a multi-agent system. Its use was begun by Wiener [43] for ‘control
systems’ with only a single agent but it is in the distributed algorithms of Computer Science
that it finds its richest expression to date [6, 9]. We highlight the need for further research into
systems able dynamically to reconfigure themselves, a possibility now offered by the comsphere
and required of any system that is expected to respond to environmental changes, whether
routinely or in adversity. This is the setting in which we present the principle of distribution.

3 The principle of distribution

We subscribe to the view, promoted by Moor [29] two decades ago, that problems in Computer
Ethics arise from a policy vacuum concerning the use of new technology and, moreover, that the
standard normative ethical principles are incomplete for reasoning about such problems. Moor
expressed that latter point simply, though without justification (loc. cit.):

Applied ethics is not simply ethics applied.

Although that incompleteness has been affirmed many times in the intervening two decades,
notably by Johnson [25], and a decade ago named (slightly ‘uniquely’) the uniqueness problem
by Maner [28], there remain no convincing arguments or accepted examples to substantiate it.

In this section we propose a novel normative principle of applied ethics, here interpreted in the
realm of Information Ethics, and argue that since it does not follow from any of the princi-
ples of (standard) Ethics, it establishes incompleteness. We take the view that the normative
principles of (standard) Ethics have been proposed and developed with the aim of enlightening
the individual in analysing his or her rôle in an individual-centred ethically-loaded situation. It
is therefore scarcely surprising that for systems in which an individual is merely one of many
components bearing ethical responsibility, as in multi-agent systems, such ethical principles are by themselves insufficient. In a (fully) centralised system the central agent is sometimes able to
play the part of the individual and so provide a vehicle for the application of standard Ethics.
But otherwise something more than uni-agent Ethics is needed to express classes (or properties) of agent-based strategies that achieve the dynamic execution of the system action.
The standard view corresponds to normative behaviour of each agent in isolation; the view
proposed here is that more coordinated, system-wide views are required.

The ethical principle of distributed multi-agent systems is not a consequence of the normative
principles of (standard) Ethics for precisely that reason: it is not individual-centred. It is,
consequently, more involved than the standard principles. That seems to reflect the fact that
distributed systems are comprehensively more complex than centralised systems, in exactly the
same way that societies are comprehensively more complex than individuals.

Principle of distribution: A multi-agent (distributed) system satisfies the principle of distribution if control for its primary action resides, as much as feasible, with the individual agents
constituting the system; and if, in dynamically reconfiguring execution of that action in response
to environmental factors, control arises from its individual agents (rather than being imposed
centrally).

Notice that the principle involves two conditions: the first is ‘static’, pertaining to the system
in its steady state whilst the second is ‘dynamic’, pertaining to the system as it responds to
external influence.

The usual normative principles from Ethics, which till now have been the only tools available for
use in Information Ethics, include: consequentialism (teleologism); utilitarianism (greatest good;
Bentham and Mill); deontologism (duty); virtue ethics (Aristotle); universal law (Kant); contractualism (Plato, Hobbes, Rousseau, Rawls, game theory); and particularism (Crisp, Dancy).
The principle of distribution relates most closely to utilitarianism and contractualism; the for-
mer in so far as it might be used to support the Marxist-Leninist distribution of wealth and
control across the population at large instead of investing it in a minority; the latter because of
the dynamic nature of the evolution of control arising—in real time and from the bottom up—as
in game theory with each node performing its local strategy and the system as a whole evolving
as a result.

It is important to observe that the principle of distribution is entirely free of anthropomorphic association. Whilst the standard normative principles of Ethics depend upon the responsible
agent having free will, and hence being restricted essentially to humans (in fact to adults of
sound mind), the principle of distribution makes no such assumption. It can thus be applied
to systems composed of artificial agents [16] or of any combination of artificial and sentient
agents.

Of course, like any normative principle, the principle of distribution seldom holds unequivocally and, when it does hold, that fact seldom provides the whole answer to the matter under analysis. It is an ideal—a guiding principle—to be used in conjunction with others in resolving complex issues. It has, as might be expected and as will be substantiated later in the paper,
implications for the design of protocols, the management of software development, education
and policy. Let us continue discussion of the principle with the consideration of two brief but
typical examples.

4 Discussion and brief examples

Many families find themselves confronted with the problem of what access to allow their children
to the web.6 A centralised or ‘top down’ solution would involve system-based control (perhaps
at the national level) of undesirable sites. (Here we identify the agents with individual users
of the web, grouped by computers which they use, and we identify the system action as that
of accessing the web.) But one difficulty with that centralised policy is: who has the right to
make a choice for all, particularly in the context of the web (whose design supports democratic
access)? By comparison the principle of distribution leads us to consider ‘bottom up’ solutions,
empowering individual homes or communities. For instance, each household could filter access
to the web using software chosen and configured by the guardians of the household.7 (Ideally,
free open-source software would be available online.)

The direct empowerment of each user is not feasible in this case since that would be to ignore
the problem. The choice of the family or community as a small group of agents upon which to
impose control reflects the natural structure of the system.

It is interesting to compare this example with that of spam. A fully distributed solution results in
a system in which each user decides individually what he or she regards as spam by configuring a
filter program (typically composed of ‘black’ and ‘white’ lists of sending addresses). A centralised
solution results in a system in which email from certain addresses is simply deleted from the
system. The former solution now turns out to be naive [23], as it is in practice not strong enough to
deal with the massive quantities of spam; whilst the latter solution suffers the same flaw as that
of centralised screening of web sites. So in practice an intermediate ‘bipartite’ design is currently
adopted: in addition to each user maintaining a filter, certain key servers suppress email from
particular addresses.
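A minimal sketch of this bipartite design (the addresses and policy below are invented for illustration) combines a centralised stage, in which key servers suppress known-bad senders, with a distributed stage in which each user's own black and white lists are applied:

```python
# Centralised stage: key servers suppress email from particular addresses.
SERVER_BLOCKLIST = {"bulk@spam.example"}

class UserFilter:
    """Distributed stage: each user's own 'black' and 'white' lists."""
    def __init__(self, black=(), white=()):
        self.black, self.white = set(black), set(white)

    def accepts(self, sender):
        if sender in self.white:    # the user's white list wins locally
            return True
        return sender not in self.black

def deliver(sender, user_filter):
    if sender in SERVER_BLOCKLIST:      # partially-centralised control
        return False
    return user_filter.accepts(sender)  # individual, distributed control

alice = UserFilter(black={"ads@shop.example"}, white={"mum@home.example"})
print(deliver("mum@home.example", alice))   # True
print(deliver("ads@shop.example", alice))   # False: Alice's own black list
print(deliver("bulk@spam.example", alice))  # False: suppressed centrally
```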

This example demonstrates how in response to an increase in spam the fully distributed,
individual-based, design is refined to one that also incorporates partially-centralised control.
Both the static and the dynamic conditions of the principle are needed.

The dynamic bottom-up imposition of control takes time and is not appropriate in every sit-
uation. Indeed there are some situations in which a distributed solution does not exist. For
example if each agent behaves deterministically and identically and all agents start in the same
state, then no matter what communications they exchange and what internal decisions they
reach, they will be unable to reach a state in which one of them differs from the others. This
demonstrates the need for the qualification ‘feasible’ in the statement of the principle. As a practical example, it is far from clear at what level malicious minority groups are best controlled; the obvious solution is highly centralised, but that exhibits the usual
problems involving misuse. It may well be that a hybrid design proves most acceptable. It is
interesting to compare with current practice for neighbourhood security in some western coun-
tries, which combines a centralised police force with partially-centralised security services (for
businesses) and a distributed neighbourhood watch (for individual homes). For such difficult
situations the principle of distribution at least provides a framework and body of concepts to
facilitate discussion.
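The symmetry argument above—identical deterministic agents, started identically and exchanging identical messages, can never come to differ—can be illustrated directly. The transition rule below is arbitrary and ours alone; any deterministic rule yields the same conclusion.

```python
def step(state, received):
    # Deterministic: the next state depends only on the current state and
    # the messages received; every agent applies the same rule.
    return (state + sum(received)) % 7

def run(n_agents, rounds, initial=0):
    states = [initial] * n_agents      # identical starting states
    for _ in range(rounds):
        msgs = list(states)            # each agent broadcasts its state
        states = [step(s, msgs) for s in states]
    return states

final = run(n_agents=5, rounds=100)
print(len(set(final)))  # 1: however many rounds pass, the agents coincide
```

Randomisation is precisely what escapes this argument, as the text goes on to discuss.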

In view of the argument above that fully distributed symmetrical deterministic solutions do not exist, it is important as well as interesting to consider how close we can get
to constructing such systems. By weakening the hypothesis ‘deterministic’ in the situation in
which an average-case bound on system efficiency is acceptable, it suffices to permit each agent
the extra capability of coin tossing (or, equivalently, of random number generation). This is a
technique which can be expected to find substantial use in the comsphere; it is already used
in computation-intensive simulations. We illustrate with a remarkably successful distributed
protocol from Computer Science: Rabin’s distributed algorithm for coordinating choice between
two alternatives [33] which, without going into details of the protocol, can be appreciated in our
terms like this.

A coach-load of tourists arrives in a new city and is to decide, by the end of the day, at which of
two places to meet: inside a certain church or inside a certain hotel. The agents of our system
thus consist of the tourists, and the system action consists of their gathering in a single location
by the end of the day. There is no central agent (like a tour guide) and so the tourists are unable
to communicate (centrally) as a group: the tourists function as members of a (fully) distributed
multi-agent system. Rabin’s algorithm shows that merely with a noticeboard at each location
and a coin (to toss) for each tourist, by alternately visiting each location and following a
certain rule the tourists all end up choosing the same location in, with high probability, a small
number of visits. A centralised goal has been reached on the basis of distributed decisions. Thus
the principle of distribution supports the design of realistic, efficient multi-agent protocols.
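A toy randomised sketch in the spirit of the tourists example runs as follows. This is NOT Rabin's actual choice-coordination protocol (whose details the text deliberately omits); it merely illustrates how private coin tosses let otherwise identical agents converge, with high probability quickly, on a single choice without a central agent.

```python
import random

def coordinate(n_agents, rng, max_rounds=10_000):
    options = ["church", "hotel"]
    choices = [rng.choice(options) for _ in range(n_agents)]
    rounds = 1
    while len(set(choices)) > 1 and rounds < max_rounds:
        # Each agent keeps its current choice with probability 1/2 and
        # otherwise re-tosses; agreement, once reached, persists.
        choices = [c if rng.random() < 0.5 else rng.choice(options)
                   for c in choices]
        rounds += 1
    return choices, rounds

choices, rounds = coordinate(n_agents=4, rng=random.Random(2007))
print(set(choices))  # a single location: all tourists meet in one place
```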

In part, the richness of the principle of distribution derives from the fact that it may be applied at
many levels of abstraction: one for each level of abstraction at which the system under analysis
is considered [15]. If matters of policy are being analysed, a rarefied level of abstraction will
be chosen in which much of the system detail is abstracted. If matters of system design are
under consideration, a much lower level of abstraction will be chosen, revealing relevant details
of exactly how the system behaves. The level of abstraction at which the system is considered
and at which the principle is applied are determined by the kind of analysis sought.

5 Application of the principle

In this section we return to the context of the Introduction to illustrate the principle of distribu-
tion with two more-substantial examples, followed by a third to emphasise the dynamic nature of the comsphere.

5.1 Human-centric computing and FORWARD

At present the multifarious applications of ubiquitous communication remain largely untapped, due partly to the increased opportunity that ubiquity in general, and mobility in particular,
offer for malevolence. It appears vital that mobile users be able to generate spontaneously a
secure network. So for our first case study we consider the important topic of security in the
comsphere.

We begin by adapting Kizza’s definition [26] to incorporate Schneier’s point [39] that security is
a dynamic process rather than a static product, and interpret security to consist of the process
of maintaining:

confidentiality: information is available only to those authorised to have it;

integrity: data may be manipulated only by those authorised to do so;

availability: information systems are accessible to all those authorised to access them.

The notion of authorised access underpins each requirement: access is permitted only if authority
has been validated. Thus we concentrate on authorised access: the system under consideration
comprises all users of the comsphere, determined in this case by attribute rather than identity,
and the system action consists of authorised access.

Traditionally it has been identities (of either users or devices) that are authenticated. But in the
context of the comsphere it has been argued by Creese et al. [12, 13] that it is attributes, and not
identities, that must be authorised. Attributes include a device’s location, name, manufacturer,
internal state, service history and so on. Attributes appropriate to a given situation must
be authenticated, and must be chosen to provide assurance not only about which devices are
interacting but also about what they can do.
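A minimal sketch of attribute-based authorisation (the attribute names and policy below are invented for illustration) grants access exactly when every attribute demanded by the policy has an acceptable value, identity playing no role:

```python
def authorised(device_attributes, policy):
    """Grant access iff every attribute demanded by the policy is present
    with an acceptable value; the device's identity plays no role."""
    return all(device_attributes.get(attr) in accepted
               for attr, accepted in policy.items())

# Policy: a device may join if it is physically in this room, made by a
# trusted manufacturer, and running sufficiently recent firmware.
policy = {
    "location": {"room-101"},
    "manufacturer": {"acme", "globex"},
    "firmware": {"2.1", "2.2"},
}

pda = {"location": "room-101", "manufacturer": "acme", "firmware": "2.2"}
intruder = {"location": "car-park", "manufacturer": "acme", "firmware": "2.2"}

print(authorised(pda, policy))       # True
print(authorised(intruder, policy))  # False: wrong location attribute
```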

Centralised implementations ensuring authorised access (and hence also the requirements for
security) are straightforward and rely on maintaining a central trusted list which is consulted
to validate authentication. But in line with the principle of distribution it is preferable to use
instead distributed authorisation (if feasible). This provokes the quest for new protocols; we
report here the recent work of Creese et al., expressed in our context.

Imagine that a group of you, not necessarily previously known to each other, meet (perhaps
it is parents’ night at the local school) and wish to form—spontaneously and in real time—a
network with your wireless PDAs and cellphones. You cannot assume that your devices have
unique identifiers or that any such identifiers are known in advance; and of course you wish
to ensure that the network is established in a distributed manner, contains only those devices you want it to contain (those present), and that messages sent between you are secure within the network. You must assume, naturally, that none of you is malevolent. It is perhaps not obvious
that those requirements can be met; but Creese et al. have provided and verified a protocol
[10, 11] which meets them. Its verification is achieved by weakening the accepted (Dolev-Yao)
model of security to take account of a second kind of channel. The normal insecure channel
with high bandwidth over which it is desired to send data securely is augmented by a second,
secure but low bandwidth, channel like that established by empirical engagement (for instance
permitted by physical proximity in the example of the group meeting). The protocol (for whose
details we refer to [10] and more recently [36] for the extension showing that device identifiers
are not required) uses the low-bandwidth channel to ‘bootstrap security’ on the high-bandwidth
channel. In the case of the group meeting that would be achieved by comparing the post-communication values displayed on each other's devices, which in particular enables participants
to compare the number of devices in communication with the number present in the group (an
empirical engagement that is extremely secure) to ensure that only they are included in their
network. The formalism used by Creese et al. for verification of the protocol is that of automated
Communicating Sequential Processes [37].
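The two-channel structure can be sketched schematically (this is our own illustration of the idea, not the verified protocol of Creese et al., and the key material is invented): public data travels over the insecure high-bandwidth channel, and a short digest of it is compared over the empirical low-bandwidth channel.

```python
import hashlib

def short_digest(received):
    # Every device hashes the same agreed ordering of all the public data
    # it received; a few hex digits suffice for humans to compare aloud.
    data = "|".join(sorted(received)).encode()
    return hashlib.sha256(data).hexdigest()[:6]

# High-bandwidth, insecure exchange: each device broadcasts its public data.
group = ["pubkey-alice", "pubkey-bob", "pubkey-carol"]

honest_view = short_digest(group)                         # shown by honest devices
attacked = short_digest(group[:-1] + ["pubkey-mallory"])  # one key substituted

# If an attacker tampered with the broadcast, the displayed digests differ,
# and the participants (who can also count the devices present) notice.
print(honest_view, attacked)
```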

That work forms part of the FORWARD programme [17], begun in January 2003 under the
United Kingdom’s Department of Trade and Industry’s initiative into Next Wave Technologies.
Part of the thrust of that programme has been the use of ubiquitous communication and com-
putation to support human-centric goals, like providing information in a form and at a time
that is appropriate to the human user, and exploiting the human user’s (empirical) senses to
complement digital bandwidth. We mention this area (of Computer Science) as one in which
further work is required.

5.2 Open source

We turn to address the issue of making software available, open source, particularly to devel-
oping nations. The productivity and management processes appropriate to such novel modes
of production yield unusual consequences for the assurance that open source software meets its
requirements: it appears to be very difficult to certify such software. We are thus left with a
divide between freely-available, reconfigurable (open source) software that is potentially of huge
benefit in developing countries but for which authentication is difficult, and verified authen-
ticated (closed source) software that is necessary in a growing number of secure applications.
Evidently a balance between both types of software is required. Accordingly, in this section we
view software itself in the light of the principle of distribution.

Commercial ‘shrink wrapped’ software may be seen to be the result of a centralised process:
the producer retains all rights and, whilst allowing the user to use the code, does not provide
direct access to it. The user is thus unable to modify the code in any way. By comparison, open source software may be seen as the result of a distributed process: it is typically available freely over the web and the user may take a copy to which he or she then has
complete access. The differences between the two processes—the cathedral versus the bazaar—have been vividly documented by Raymond: for an exposition of the different business models appropriate to commercial software and open source, see [34]. The resulting difference
is important, because having access to the source enables software to be adapted to its context,
for example so that an interface appears with locally-appropriate features (at the very least,
linguistic). It also promotes local software productivity and so, eventually, promotes commerce.
Perhaps it will one day produce a third-world Bill Gates.

But also of interest to us here is the process underlying open source. In the standard model of software production, software is produced with some degree of assurance (varying with the use and style of the software) that it meets its requirements. The extreme case is formally spec-
ified and verified code (like the protocols reported in the previous section). But of open source,
what guarantees are there that a module downloaded from the web meets its requirements; and
what protection is there against malevolent contributors to an open source project?

One response is to appreciate that a different model is involved. The production of open source, a typical example of distributed control, is managed dynamically by feedback, with some degree of conformance but also with attrition. Important (kernel) code is checked before release
by one of a small number of agreed individuals. For less critical software, poor code suffers
an ‘evolutionary disadvantage’ and is gradually superseded. This may seem strange from the
traditional viewpoint based on the concern that even a single bug may lead to program mal-
function. The conclusion, however, is simple. Open source and fully authenticated code lie at
opposite ends of a spectrum, the whole range of which has a place in the comsphere. Fly-by-
wire software, for example, with a huge cost of error, would traditionally be produced by a more
centralised process; non-critical applications software could be open source and so produced by a
more distributed process. There remains the difficult issue of how much trust to place in any
copy of a piece of software, whether downloaded or on disk, regardless of the claims that are
made of it; but that is a topic of current research. We highlight the case of open source as being
particularly important.
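The simplest ingredient of such trust is integrity checking against a published digest. A minimal sketch using only the standard library (the module contents and digest here are hypothetical; real distributions would publish cryptographically signed digests):

```python
import hashlib

def sha256_of(blob: bytes) -> str:
    """Hex digest of a blob of software, e.g. a downloaded module."""
    return hashlib.sha256(blob).hexdigest()

# A digest published alongside the release (hypothetical contents):
release = b"print('hello from an open source module')"
published_digest = sha256_of(release)

# The user recomputes the digest locally before trusting a copy.
downloaded = b"print('hello from an open source module')"
tampered = b"print('hello'); import os  # malicious addition"

assert sha256_of(downloaded) == published_digest  # copy is intact
assert sha256_of(tampered) != published_digest    # tampering detected
```

Of course this only shifts the trust to wherever the digest was obtained, which is why digital signatures, and ultimately the research mentioned above, are needed.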

At the United Nations University’s International Institute for Software Technology (UNU-IIST)
in Macau, an Open Computing Initiative has recently been launched. The idea is to train
representatives from third-world nations in the development of open source, thereby at once expanding the applications available in open source and empowering third-world programmers.
Together with the fact that Negroponte’s $100 laptop (Note 8), set to make a huge impact on the
underdeveloped nations, will contain only open source software, we can expect a swing in the
accepted style of software, from almost entirely centralised, commercial software to a more
balanced hybrid of the two styles.

But the principle of distribution may be used for a deeper analysis of what privacy measures
the $100 laptop should exhibit. It is quite conceivable that, with ill-chosen software allowing
centralised control, the laptop could become a powerful weapon in the hands of an oppressive
regime or militant splinter group. In line with the principle of distribution one might reason that the laptop should have (ignoring economic feasibility) robust encryption built in, perhaps at the hardware level, to ensure secure communication and data storage. It would otherwise
be difficult to avoid misuse and the usual resulting insecurities like eavesdropping, intrusion, impersonation, and so on.

It is to be appreciated that many of the existing structures on which ubiquitous communication is based are already partially centralised. An extreme case is US control of encryption (Note 9), which is not easily reconciled with the principle of distribution. It could be argued, and indeed has been by many, that precisely that conflict is a flaw in US policy. One response is that without national US control no realistic control is possible. Perhaps, guided by the principle of distribution, other alternatives can be considered in other countries. For it does seem that misuse of the centralised platforms implementing ubiquitous communication can be even more iniquitous, in the presence of the novel functionality of ubiquitous communication, than the more ‘time-honoured’ misuse of centralised agencies.

5.3 Response to adversity

Most examples considered so far function in their ‘steady state’: they continue to behave as they
were originally conceived to do. But the principle of distribution provides important insight into
systems which respond to duress by reconfiguring their (primary) system action. To return to
less technical examples, we reconsider team sports: this time the Tour de France.

The agents are the cyclists and the action in which each agent repeatedly engages is that of
cycling, with the aim of achieving the system action of a win (for the team, or individual,
depending on the agent; thus whilst Lance Armstrong and a novice rider both perform the same
actions, their strategies are vastly different).

Before a race begins, each agent performs an entirely autonomous warm-up action (ignoring
warm-up by team). At this point the system is behaving in a fully distributed manner, with no
coordination between agents as they cycle to warm up for the event.

For the bulk of each daily race, the cyclists typically act in teams within the peloton. The
mechanics of cycling are such that wind resistance (an external influence on the agents) is of
paramount concern, with the result that riders help each other in order to overcome it. Thus
within a team cyclists take turns to lead, dropping back to benefit from ‘slip streaming’ after
having expended energy in the lead. At this stage, much (but not all) of the control for an
agent’s cycling action resides with the team. It is complicated, in fact, by extra-team individual
behaviour and by the teams or individuals jockeying for position within the peloton. At this
point the system is behaving in a slightly more centralised manner.

The final interesting agent behaviour concerns pursuit, when one or more agents leave the peloton
to catch the leaders. At this point the cyclist’s action takes account of the state of the leaders
and of his immediate neighbours in pursuit: the action returns to being less centralised, being
controlled by fewer other agents.

Thus the different phases in the Tour are helpfully expressed in terms of the degree of distribution of control with which each agent performs its action. The principle of distribution expresses
the autonomy of each cyclist whilst recognising the infeasibility of each continuing to remain
autonomous in the face of external conditions. Consistent with it is the practice of team be-
haviour, which provides an intermediate level of control to manage the inherent decentralisation
of the system.
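One crude way to make the ‘degree of distribution’ concrete (our own toy metric, offered purely as an illustration, not something defined in this paper) is to record, for each agent, the set of agents influencing its action and average the fraction of control each agent retains over itself:

```python
# Each agent's action is influenced by a set of controllers (which
# always includes the agent itself).  The metric below averages the
# fraction of its own control that each agent retains: 1.0 means full
# autonomy, smaller values mean more centralised control.
def degree_of_distribution(controllers_per_agent):
    return (sum(1 / len(c) for c in controllers_per_agent)
            / len(controllers_per_agent))

agents = ["a", "b", "c", "d"]

warm_up = [{a} for a in agents]                   # fully autonomous
peloton = [set(agents) for _ in agents]           # the team controls all
pursuit = [{"a", "b"}, {"a", "b"}, {"c"}, {"d"}]  # two riders break away

phases = {name: degree_of_distribution(cfg)
          for name, cfg in [("warm-up", warm_up),
                            ("peloton", peloton),
                            ("pursuit", pursuit)]}

# Warm-up is fully distributed; the peloton is the most centralised;
# pursuit sits in between, as described above.
assert phases["warm-up"] == 1.0
assert phases["warm-up"] > phases["pursuit"] > phases["peloton"]
```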

More technical examples of systems that respond to duress, and to which the principle of dis-
tribution applies, come from Computer Science, Economics, Sociology and Ecology. From the
world around us, any community with ubiquitous communication has the capability to organise,
in real time, events whose organisation would otherwise require a form of centralised control (which may simply be impractical for geographic, economic or political reasons). The design
and study of systems that reconfigure themselves dynamically is of vital current interest.

6 Conclusions and future research

This paper has presented the principle of distribution as playing a part in the resolution of issues ranging from high-level matters of policy, through standards, down to implementation concerns of protocol design. It has been presented as a novel principle of Information Ethics (though its application is much broader) that is not a consequence of standard Ethics. The examples presented seem to indicate its promise.

By introducing a new name, the comsphere, we have emphasised the important new character-
istics of ubiquitous communication, to be exploited if progress is to be made.

The main purpose of this paper has been to introduce the principle of distribution and indicate
its use. We have here merely scratched the surface; much work remains. The principle of
distribution promises much for the work of the Information Ethics Group [19], one of whose main interests is the investigation of the ethics of information and the extent to which it is dissociated from human-centric values.

Important technical work remains in designing security protocols that take into account the
various characteristics of the comsphere; we have mentioned but one kind of example in Section
5.

Overcoming the digital divide is of global importance. The promotion, particularly in developing countries, of software safeguarded against misuse by militant minorities is key. Here the $100 laptop of Negroponte and its open source code are expected to make a huge contribution. More generally the promulgation of open source is important, including the management of its production, until now largely fully distributed (what are the appropriate partially-centralised management structures?), and the provision of measures and guarantees of its authenticity and conformity.

Perhaps the most exciting topic demanding further work concerns the concepts underpinning systems that reconfigure the way they execute actions in response to external influence. Such study should include a rigorous development of the theory [41] and discovery of appropriate algorithms.

It will be interesting to see the extent to which the principle of distribution will be useful in this work, ranging from the discussion of policy to specific matters of system design.

7 Acknowledgements

The principle of distribution was introduced by the present authors as ‘the principle of dis-
tributed ethics’ in [35], written in response to the proposal ‘Ethical strategies for human secu-
rity’, by Elisabeth Porter (research director for INCORE, the centre for International Conflict
Resolution, a joint initiative between the United Nations University and the University of Ul-
ster), circulated following CONDIR 29, 4–5 April, 2005, Bonn. The authors are thus grateful to
her for the chance to provide an early draft of their views.

They also thank their colleagues Sadie Creese (Systems Assurance Group, QinetiQ, Malvern
Technology Centre, UK) and Scott McNeil (UNU-IIST, Macau) for suggestions made in the writing of this paper, which extends the presentation [14] to the International Telecommunication Union’s World Summit Thematic Meeting on Cybersecurity.

8 Notes

1. See for example the international survey [32]. To quote a specific instance, the aim of
the recent u-Japan project is to make 80% of citizens feel comfortable with ICT and to
appreciate its rôle in resolving issues, by the year 2010 [1].

2. By comparison with comsphere, cyberspace is usually interpreted as comprising networked, static users. It is vital for progress that mobility be acknowledged and catered for, par-
ticularly in the context of security. Cell phones provide point-to-point synchronous com-
munication either by voice, text or image. Computers and personal digital assistants,
increasingly by wireless link, provide asynchronous access to vast repositories of data as
well as to email. Interactive (digital) televisions (with memory) increasingly resemble
networked computers. Embedded chips (for example with radio-frequency identification),
screens [42], surveillance systems, global-positioning systems, and ‘smart’ communicating
devices [18] are changing our work, domestic and even communal environments. Commu-
nications are ‘anytime, anywhere, by anything and anyone’. Just a few alternative terms
for the activity of computing on the comsphere are:

mobile computing: IEEE Transactions on Mobile Computing, founded 2002;

pervasive computing: IEEE Pervasive Computing: Mobile and Ubiquitous Systems, founded 2002;

ubiquitous computing: Weiser [42];

personal computing: a term apparently coined by IBM and now interpreted more generally to mean individual local access to information facilities;

ubiquitous network societies: for the International Telecommunication Union’s Workshop on Ubiquitous Network Societies, for example, see [3];

ubiquitous communication itself, interpreted to describe wearable systems: the UbiCom project, the Faculty of Information Technology and Systems at the University of Delft, led by R. L. Lagendijk.

3. The comsphere is not restricted to just the developed countries. In China, the largest
cellphone market in the world, more than a quarter of the population owns a cellphone
and about 100 million text messages are sent daily, although just under a tenth of the
population uses the internet [44]. In Singapore 80% of the population has a cellphone; in
Malaysia just under half the population does. For comparison, in Australia, for example,
more than half the population uses the internet and about three quarters use cell phones
[2]; and Japan has moved from its e-Japan project to u-Japan to reflect developments
in ubiquitous communication [1]. The ease of achieving mobile point-to-point connection
is matched only by the empowerment it provides through the applications it finds. Evidently
ubiquitous communication and the comsphere constitute a global phenomenon.

4. For an account of how a genuinely distributed system (investing as much control as possible
in its distributed components) averted disaster when American Airlines flight 77 hit the
Pentagon on September 11, 2001, see the article [30], which concludes:

. . . the system also remained functional even though a large part of it had been
destroyed. . . . In addition to playing well in large complex systems, they are
able to autonomously perform actions that previously required a connection to
a central control system.

5. The field is so young that many of the important textbooks study the topic using their
own notation, which unfortunately makes them relatively inaccessible. For a quite general
textbook see [9], and for a slightly more representative text see [6].

6. From [27]:

Two-point-five million use [America Online]. That’s like a city. Parents wouldn’t
let their kids go wandering in a city of 2.5 million people without them, or
without knowing what they’re going to be doing.

7. This is an established commercial enterprise. Off-the-shelf programs include CyberSitter, SurfWatch and NetNanny; see [4].

8. See [31]. A similar, but established and successful, project is the Jhai Foundation’s PC, used in particular to provide internet access to villages in Laos without electricity; see [24].

9. See the Communications Assistance for Law Enforcement Act, CALEA, 1994, [8]. In
summary, the US government, with appropriate authority, should be able to


intercept all wire and electronic communications originating from or coming to a particular subscriber;

intercept communications to and from mobile users, for example people using portable phones or portable computers;

obtain call-identifying information, including the phone number from which a call originates and the phone number of the destination;

have the intercepted communications and call-identifying information transmitted to a location specified by the government.

References

[1] http://www.nri.co.jp/english/opinion/papers/2003/np200366.html.

[2] http://www.cia.gov/cia/publications/factbook/geos.

[3] http://www.itu.int/ubiquitous.

[4] http://safety.ngfl.gov.uk/?sec=9&cat=99&clear=y.

[5] http://getpopfile.org.

[6] Attiya, H., & Welch, J. (1998). Distributed Computing: Fundamentals, simulations and
advanced topics. McGraw Hill.

[7] Baase, S. (1997). A Gift of Fire: Social, Legal and Ethical Issues in Computing. Prentice-Hall International.

[8] http://www.askcalea.net.

[9] Coulouris, G., Dollimore, J., & Kindberg, T. (2001). Distributed Systems: Concepts and
Design, third edition. Addison-Wesley.

[10] Creese, S. J., Goldsmith, M. H., Roscoe, A. W., & Zakiuddin, I. (2003). The attacker in
ubiquitous computing environments: Formalising the threat model. In T. Dimitrakos &
F. Martinelli (Eds.) Formal Aspects of Security and Trust, Pisa, Italy, September 2003.
IIT-CNR Technical Report.

[11] Creese, S. J., Goldsmith, M. H., Harrison, R., Roscoe, A. W., Whittaker, P., & Zakiuddin,
I. (2005). Exploiting empirical engagement in authentication protocol design. In D. Hutter
& M. Ullmann (Eds.), Proceedings of the 2nd International Conference on Security in
Pervasive Computing (SPC ’05), (pp. 119–133). Springer LNCS 3450.

[12] Creese, S. J., Goldsmith, M. H., Roscoe, A. W., & Zakiuddin, I. (2003). Authentication in
pervasive computing. In D. Hutter & M. Ullmann (Eds.), First International Conference
on Security in Pervasive Computing, Boppard. Springer LNCS.


[13] Creese, S. J., Goldsmith, M. H., Roscoe, A. W., & Zakiuddin, I. (2004). Security properties
and mechanisms in human-centric computing. In P. Robinson, H. Vogt & W. Wagealla
(Eds.), Privacy, Security and Trust within the Context of Pervasive Computing, Kluwer
International Series in Engineering and Computer Science. Proceedings of Workshop on
Security and Privacy in Pervasive Computing, Wien, April 2004.

[14] Creese, S. J., Reed, G. M., Roscoe, A. W., & Sanders, J. W. (2005). Security and trust
for ubiquitous communication. ITU WSIS Thematic Meeting on Cybersecurity, 28 June–1
July, Geneva. http://www.itu.int/osg/spu/cybersecurity/
contributions/UNU-IIST contribution.pdf.

[15] Floridi, L., & Sanders, J. W. (2004). The method of abstraction. In M. Negrotti (Ed.),
Yearbook of the Artificial. Nature, Culture and Technology. Models in Contemporary Sci-
ences, (pp. 177–220). Peter Lang, Bern.

[16] Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and
Machines, 14(3):349–379.

[17] The FORWARD project; see www.forward-project.org.uk.

[18] Gershenfeld, N. (2000). When Things Start to Think. Owl Books.

[19] Information Ethics Group. See http://web.comlab.ox.ac.uk/oucl/research/areas/ieg/.

[20] ITU, WSIS Declaration of Principles (2003). Document WSIS-03/GENEVA/DOC/4-E, 12 December.

[21] ITU, WSIS Plan of Action (2003). Document WSIS-03/GENEVA/DOC/5-E, 12 December.

[22] ITU, WSIS Declaration of Principles (2003). Document WSIS-03/GENEVA/DOC/4-E, 12 December.

[23] ITU, WSIS Thematic Meeting on Cybersecurity (2005). http://www.itu.int/osg/spu/cybersecurity/index.phtml.

[24] http://www.jhai.org/jhai remoteIT.htm.

[25] Johnson, D. G. (1994). Computer Ethics, second edition. Prentice-Hall.

[26] Kizza, J. M. (1998). Ethical and Social Issues in the Information Age. Springer Verlag.

[27] McGraw, P. (1995). See http://www.cybertoday.com/v1n4/runaway.html.

[28] Maner, W. (1996). Unique Ethical Problems in Information Technology. In T. W. Bynum & S. Rogerson (Eds.), Global Information Ethics, (pp. 137–52). Opragen Publications, (the April 1996 issue of Science and Engineering Ethics).

[29] Moor, J. H. (1985). What Is Computer Ethics? In T. W. Bynum (Ed.), Computers &
Ethics, (pp. 266–275). Basil Blackwell.


[30] Needleman, R. (2005). Disaster response: distributed building management proves itself
under critical circumstances. http://www.microsoft.com/
business/executivecircle/content/page.aspx?cID=979&subcatID=1.

[31] http://laptop.media.mit.edu.

[32] Ojo, A., Janowski, T., & Estevez, E. (2005). Global Survey of e-Government. e-Macau
task 2 report, March.

[33] Rabin, M. O. (1982). The choice-coordination problem. Acta Informatica, 17(2):121–134.

[34] Raymond, E. S. (2001). The Cathedral and the Bazaar. O’Reilly. See in particular the
article after which the book is titled, pp. 19–63, and ‘The magic cauldron’, pp. 113–166.

[35] Reed, G. M., & Sanders, J. W. (2005). Ethical principles for secure ubiquitous communi-
cation. Draft of May 25, UNU/IIST.

[36] Roscoe, A. W. (2005). New protocols for bootstrapping security in ad hoc networks. Draft, November 15. At http://web.comlab.ox.ac.uk/oucl/work/bill.roscoe/publications/113.pdf.

[37] Ryan, P. Y. A., Schneider, S. A., Goldsmith, M. H., Lowe, G., & Roscoe, A. W. (2001). The
Modelling and Analysis of Security Protocols: the CSP Approach. Addison-Wesley.

[38] Sang-Hun, C. (2005, 9 May). Article, International Herald Tribune.

[39] Schneier, B. (2000). Secrets and Lies: Digital Security in a Networked World. John Wiley
and Sons.

[40] Tavani, H. T. (2004). Ethics and Technology: Ethical Issues in an Age of Information and
Communication Technology, John Wiley and Sons.

[41] Turilli, M. (2006). DPhil. thesis, in preparation. Oxford University Computing Laboratory.

[42] Weiser, M. (1993). Hot Topics: Ubiquitous Computing. IEEE Computer.

[43] Wiener, N. (1948). Cybernetics: or Control and Communication in the Animal and the
Machine. Technology Press.

[44] Yardley, J. (2005). A Hundred Cellphones Bloom, and Chinese Take to the Streets. Article,
April 25, 2005. Available at: http://www.nytimes.com/2005/
04/25/international/asia/25china.html?pagewanted=1.
