AgentSpeak(L): BDI Agents Speak Out in a Logical Computable Language
Anand S. Rao
1 Introduction
2 Agent Programs
In this section, we introduce the language for writing agent programs. The alphabet of
the formal language consists of variables, constants, function symbols, predicate symbols,
action symbols, connectives, quantifiers, and punctuation symbols. Apart from first-order
connectives, we also use ! (for achievement), ? (for test), ; (for sequencing), and
← (for implication)¹. Standard first-order definitions of terms, first-order formulas, closed
formulas, and free and bound occurrences of variables are used.
Definition 1. If b is a predicate symbol and t1, ..., tn are terms, then b(t1, ..., tn) or b(t) is
a belief atom. If b(t) and c(s) are belief atoms, then b(t) ∧ c(s) and ¬b(t) are beliefs. A belief
atom or its negation will be referred to as a belief literal. A ground belief atom will be
called a base belief.
For example, let us consider a traffic-world simulation, where there are four adjacent
lanes and cars can appear in any lane and move in the same lane from north to south.
Waste paper can appear on any of the lanes and a robot has to pick up the waste paper
and place it in the bin. While doing this the robot must not be in the same lane as the
car, as it runs the risk of getting run over by the car. Consider that we are writing agent
programs for such a robot.
The beliefs of such an agent represent the configuration of the lanes and the locations
of the robot, cars, waste, and the bin (e.g., adjacent(X,Y), location(robot,X),
location(car,X)). The base beliefs of such an agent are ground instances
of belief atoms (e.g., adjacent(a,b), location(robot,a)).
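For illustration, such base beliefs can be pictured as a set of ground tuples. The following minimal Python sketch shows one possible representation; the class name, the tuple encoding, and the case convention for variables are assumptions of this illustration, not the paper's implementation.

class BeliefBase:
    # A set of base beliefs, i.e., ground belief atoms encoded as flat
    # tuples such as ("adjacent", "a", "b"); this encoding is assumed.

    def __init__(self):
        self.facts: set[tuple] = set()

    @staticmethod
    def is_ground(atom: tuple) -> bool:
        # by the paper's convention, variables are upper-case and
        # constants are lower-case
        return all(arg[0].islower() for arg in atom[1:])

    def add(self, atom: tuple) -> None:
        assert self.is_ground(atom), "base beliefs must be ground"
        self.facts.add(atom)

beliefs = BeliefBase()
beliefs.add(("adjacent", "a", "b"))       # adjacent(a,b)
beliefs.add(("location", "robot", "a"))   # location(robot,a)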
A goal² is a state of the system which the agent wants to bring about. We consider
two types of goals: an achievement goal and a test goal. An achievement goal, written as
!g(t), states that the agent wants to achieve a state where g(t) is a true belief. A test goal,
written as ?g(t), states that the agent wants to test whether the formula g(t) is a true belief or not.
In our example, clearing the waste on a particular lane can be stated as an achievement
goal, i.e., !cleared(b), and checking whether the car is in a particular lane can be stated as a
test goal, i.e., ?location(car,b).
Definition 2. If g is a predicate symbol and t1, ..., tn are terms, then !g(t1, ..., tn) (or !g(t))
and ?g(t1, ..., tn) (or ?g(t)) are goals.
¹ In the agent programs we use & for ∧, not for ¬, and <- for ←. Also, as in PROLOG, we require
that all negations be ground when evaluated. We use the convention that variables are written
in upper-case and constants in lower-case.
² In this paper, we discuss only goals, and not desires. Goals can be viewed as adopted desires.
When an agent acquires a new goal or notices a change in its environment, it may
trigger additions or deletions to its goals or beliefs. We refer to these events as triggering
events. We consider the addition/deletion of beliefs/goals as the four triggering events.
Addition is denoted by the operator + and deletion is denoted by the operator -. In our
example, noticing the waste in a certain lane X, written as +location(waste,X),
or acquiring the goal to clear the lane X, written as +!cleared(X), are examples of two
triggering events.
Definition 3. If b(t) is a belief atom and !g(t) and ?g(t) are goals, then +b(t), -b(t), +!g(t),
+?g(t), -!g(t), and -?g(t) are triggering events.
The purpose of an agent is to observe the environment, and based on its observation
and its goals, execute certain actions. These actions may change the state of the environ-
ment. For example, if move is an action symbol, the robot moving from lane X to lane
Y, written as move(X,Y), is an action. This action results in an environmental state
where the robot is in lane Y and is no longer in lane X.
Definition 4. If a is an action symbol and t1, ..., tn are first-order terms, then a(t1, ..., tn)
or a(t) is an action.
An agent has plans which specify the means by which an agent should satisfy an end.
A plan consists of a head and a body. The head of a plan consists of a triggering event and
a context, separated by a “:”. The triggering event specifies why the plan was triggered,
i.e., the addition or deletion of a belief or goal. The context of a plan specifies those
beliefs that should hold in the agent’s set of base beliefs, when the plan is triggered. The
body of a plan is a sequence of goals or actions. It specifies the goals the agent should
achieve or test, and the actions the agent should execute. For example, we want to write
a plan that gets triggered when some waste appears on a particular lane. If the robot is in
the same lane as the waste, it will perform the action of picking up the waste, followed
by achieving the goal of reaching the bin location, followed by performing the primitive
action of putting it in the bin. This plan can be written as:
+location(waste,X) : location(robot,X) &
                     location(bin,Y)
   <- pick(waste);
      !location(robot,Y);
      drop(waste).                                (P1)
Consider the plans for the robot to change locations. If it has acquired the goal to move
to a location X and it is already in location X, it does not have to do anything, and hence
the body is true:

+!location(robot,X) : location(robot,X)
   <- true.                                       (P2)

If the context is such that it is not at the desired location, then it needs
to find an adjacent lane with no cars in it, and then move to that lane:

+!location(robot,X) : location(robot,Y) &
                      (not (X = Y)) &
                      adjacent(Y,Z) &
                      (not (location(car,Z)))
   <- move(Y,Z);
      +!location(robot,X).                        (P3)
Definition 5. If e is a triggering event, b1, ..., bm are belief literals, and h1, ..., hn are goals
or actions, then e : b1 ∧ ... ∧ bm ← h1; ...; hn is a plan. The expression to the left of the
arrow is referred to as the head of the plan and the expression to the right of the arrow
is referred to as the body of the plan. The expression to the right of the colon in the head
of a plan is referred to as the context. For convenience, we shall rewrite an empty body
with the expression true.
Plans differ from the rules of a pure logic program in a number of ways:
– In a pure logic program there is no difference between a goal in the body of a rule
and the head of a rule. In an agent program the head consists of a triggering event,
rather than a goal. This allows for a more expressive invocation of plans by allowing
both data-directed (using addition/deletion of beliefs) and goal-directed (using
addition/deletion of goals) invocations.
– Rules in a pure logic program are not context-sensitive in the way plans are.
– Rules execute successfully by returning a binding for unbound variables; the execution
of plans, however, generates a sequence of ground actions that affect the environment.
– While a goal is being queried in a logic program, the execution of that query cannot
be interrupted; the plans of an agent program, in contrast, can be interrupted.
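As a concrete (and purely illustrative) reading of Definition 5, a plan can be viewed as a record with a triggering event, a context, and a body. In the Python sketch below, the dataclass name and the string-based encoding of triggers, literals, goals, and actions are assumptions of this illustration, not the paper's implementation.

from dataclasses import dataclass

@dataclass
class PlanClause:
    trigger: str        # triggering event, e.g. "+location(waste,X)"
    context: list[str]  # belief literals that must follow from the base beliefs
    body: list[str]     # sequence of goals ("!...", "?...") and actions

# Plan P1 from the text, transcribed into this representation:
p1 = PlanClause(trigger="+location(waste,X)",
                context=["location(robot,X)", "location(bin,Y)"],
                body=["pick(waste)", "!location(robot,Y)", "drop(waste)"])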
3 Operational Semantics
Informally, an agent consists of a set of base beliefs, B, a set of plans, P, a set of events,
E, a set of actions, A, a set of intentions, I, and three selection functions, SE, SO, and SI.
When the agent notices a change in the environment or an external user has asked the
system to adopt a goal, an appropriate triggering event is generated. These events cor-
respond to external events. An agent can also generate internal events. Events, internal
or external, are asynchronously added to the set of events E. The selection function SE
selects an event to process from the set of events E. This event is removed from E and is
used to unify with the triggering events of the plans in the set P. The plans whose trigger-
ing events so unify are called relevant plans and the unifier is called the relevant unifier.
Next, the relevant unifier is applied to the context condition and a correct answer sub-
stitution is obtained for the context, such that the context is a logical consequence of the
set of base beliefs, B. Such plans are called applicable plans or options and the compo-
sition of the relevant unifier with the correct answer substitution is called the applicable
unifier.
For each event there may be many applicable plans or options. The selection func-
tion SO chooses one of these plans. Applying the applicable unifier to the chosen option
yields the intended means of responding to the triggering event. Each intention is a stack
of partially instantiated plans or intention frames. In the case of an external event the in-
tended means is used to create a new intention, which is added to the set of intentions I.
In the case of an internal event to add a goal the intended means is pushed on top of an
existing intention that triggered the internal event.
Next, the selection function SI selects an intention to execute. When the agent ex-
ecutes an intention, it executes the first goal or action of the body of the top of the in-
tention. Executing an achievement goal is equivalent to generating an internal event to
add the goal to the current intention. Executing a test goal is equivalent to finding a sub-
stitution for the goal which makes it a logical consequence of the base beliefs. If such a
substitution is found the test goal is removed from the body of the top of the intention
and the substitution is applied to the rest of the body of the top of the intention. Exe-
cuting an action results in the action being added to the set of actions, A, and it being
removed from the body of the top of the intention.
The agent now goes to the set of events, E, and the whole cycle continues until there
are no events in E or there is no runnable intention. Now we formalize the above process³.
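Before the formal definitions, the following Python skeleton summarizes the cycle just described. It is a hedged sketch: the data layout is an assumption, and the three selection functions are passed in as parameters precisely because the paper leaves them unspecified.

from dataclasses import dataclass, field

@dataclass
class Event:
    trigger: str                    # e.g. "+!location(robot,b)"
    intention: list | None = None   # None marks an external event

@dataclass
class AgentState:
    events: list = field(default_factory=list)       # E
    plans: list = field(default_factory=list)        # P
    intentions: list = field(default_factory=list)   # I: stacks of plan frames
    actions: list = field(default_factory=list)      # A

def run(agent, select_event, select_option, select_intention,
        options_for, execute_step):
    # select_event, select_option, select_intention play the roles of
    # SE, SO, SI; options_for computes the applicable plans for an event.
    while agent.events:
        event = select_event(agent.events)            # SE picks an event
        agent.events.remove(event)
        options = options_for(event, agent)           # relevant and applicable
        if options:
            chosen = select_option(options)           # SO picks one option
            if event.intention is None:               # external event:
                agent.intentions.append([chosen])     #   new intention stack
            else:                                     # internal event:
                event.intention.append(chosen)        #   push intended means
        if agent.intentions:
            execute_step(select_intention(agent.intentions), agent)  # SI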
The state of an agent at any instant of time can be formally defined as follows:
Definition 6. An agent is given by a tuple <E, B, P, I, A, SE, SO, SI>, where E is a set of
events, B is a set of base beliefs, P is a set of plans, I is a set of intentions, and A is a set of
actions. The selection function SE selects an event from the set E; the selection function
SO selects an option or an applicable plan (see Definition 10) from a set of applicable
plans; and SI selects an intention from the set I.
The sets B, P, and A are as defined before and are relatively straightforward. Here
we describe the sets E and I.
Definition 7. The set I is a set of intentions. Each intention is a stack of partially instantiated
plans, i.e., plans where some of the variables have been instantiated. An intention
is denoted by [p1 ‡ ... ‡ pz], where p1 is the bottom of the stack and pz is the top of the
stack. The elements of the stack are delimited by ‡. For convenience, we shall refer to
the intention [+!true : true <- true] as the true intention and denote it by T.
Definition 8. The set E consists of events. Each event is a tuple <e, i>, where e is a
triggering event and i is an intention. If the intention i is the true intention, the event is
called an external event; otherwise it is an internal event.
Now we can formally define the notion of relevant and applicable plans and unifiers.
As we saw earlier, a triggering event d from the set of events, E, is to be unified with the
triggering events of all the plans in the set P. The most general unifier (mgu) that unifies
these two events is called the relevant unifier. The intention i could be either the true
intention or an existing intention which triggered this event. More formally,
³ The reader can refer to the Appendix for some basic definitions from first-order logic and Horn
clause logic.
Definition 9. Let SE(E) = ε = <d, i> and let p be e : b1 ∧ ... ∧ bm ← h1; ...; hn. The
plan p is a relevant plan with respect to an event ε iff there exists a most general unifier
σ such that dσ = eσ. σ is called the relevant unifier for ε.
For example, assume that the triggering event of the event selected from E is
+!location(robot,b).
The two plans P2 and P3 are relevant for this event, with the relevant unifier being {X/b}.
A relevant plan is also applicable if there exists a substitution which, when composed
with the relevant unifier and applied to the context, makes the context a logical consequence
of the set of base beliefs B. In other words, the context condition of a relevant plan needs to
be a logical consequence of B for it to be an applicable plan. More formally,

Definition 10. Let σ be the relevant unifier for an event ε and a plan p with context
b1 ∧ ... ∧ bm. The plan p is an applicable plan with respect to ε iff there exists a substitution
θ such that ∀((b1 ∧ ... ∧ bm)σθ) is a logical consequence of B. The composition σθ is called
the applicable unifier for ε.
Continuing with the same example, consider that the set of base beliefs is given by
adjacent(a,b).
adjacent(b,c).
adjacent(c,d).
location(robot,a).
location(waste,b).
location(bin,d).
The applicable unifier is {X/b, Y/a, Z/b} and only plan P3 is applicable.
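This computation can be reproduced with a short, self-contained Python sketch. The flat-tuple encoding, the negation-as-failure treatment of not, and the omission of the (not (X = Y)) inequality test (which holds here, since X/b and Y/a) are simplifying assumptions of this illustration.

BASE = {("adjacent", "a", "b"), ("adjacent", "b", "c"), ("adjacent", "c", "d"),
        ("location", "robot", "a"), ("location", "waste", "b"),
        ("location", "bin", "d")}

def unify(pattern, fact, subst):
    # Match a flat atom (which may contain upper-case variables) against
    # a ground fact, extending the given substitution.
    if pattern[0] != fact[0] or len(pattern) != len(fact):
        return None
    s = dict(subst)
    for p, f in zip(pattern[1:], fact[1:]):
        p = s.get(p, p)            # apply bindings accumulated so far
        if p[0].isupper():         # unbound variable: bind it
            s[p] = f
        elif p != f:               # constants must agree
            return None
    return s

def ground(atom, s):
    return (atom[0],) + tuple(s.get(a, a) for a in atom[1:])

def solve(context, s):
    # Backtrack over context literals, yielding extended substitutions.
    if not context:
        yield s
        return
    lit, rest = context[0], context[1:]
    if lit[0] == "not":            # negation as failure on ground atoms
        if ground(lit[1], s) not in BASE:
            yield from solve(rest, s)
    else:
        for fact in BASE:
            s2 = unify(lit, fact, s)
            if s2 is not None:
                yield from solve(rest, s2)

# Context of plan P3, starting from the relevant unifier {X/b}:
ctx = [("location", "robot", "Y"),
       ("adjacent", "Y", "Z"),
       ("not", ("location", "car", "Z"))]
print(next(solve(ctx, {"X": "b"})))   # {'X': 'b', 'Y': 'a', 'Z': 'b'}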
Depending on the type of the event (i.e., internal or external), the intention will be
different. In the case of external events, the intended means is obtained by first selecting
an applicable plan for that event and then applying the applicable unifier to the body of
the plan. This intended means is used to create a new intention which is added to the set
of intentions I.
Definition 11. Let SO(O) = p, where O is the set of all applicable plans or options for
the event ε = <d, i> and p is e : b1 ∧ ... ∧ bm ← h1; ...; hn. The plan p is intended with
respect to an event ε, where i is the true intention, iff there exists an applicable unifier σ
such that [+!true : true ← true ‡ (e : b1 ∧ ... ∧ bm ← h1; ...; hn)σ] ∈ I.
In our example, the only applicable plan P3 will be intended, with the resulting intention
being

[+!location(robot,b) : location(robot,a) &
                       not(b = a) &
                       adjacent(a,b) &
                       not(location(car,b))
   <- move(a,b);
      +!location(robot,b)].
In the case of internal events the intended means for the achievement goal is pushed
on top of the existing intention that triggered the internal event.
Definition 12. Let SO(O) = p, where O is the set of all applicable plans or options for
the event ε = <d, [p1 ‡ ... ‡ f : c1 ∧ ... ∧ cy ← !g(t); h2; ...; hn]>, and p is +!g(s) : b1 ∧
... ∧ bm ← k1; ...; kj. The plan p is intended with respect to an event ε iff there exists
an applicable unifier σ such that [p1 ‡ ... ‡ f : c1 ∧ ... ∧ cy ← !g(t); h2; ...; hn ‡ (+!g(s) :
b1 ∧ ... ∧ bm ← k1; ...; kj)σ] ∈ I.
The above definition is very similar to SLD-resolution of logic programming languages.
However, the primary difference between the two is that the goal g is called indirectly
by generating an event. This gives the agent better real-time control as it can change its
focus of attention, if needed, by adopting and executing a different intention. Thus, one
can view agent programs as multi-threaded interruptible logic programming clauses.
When an intention is selected and executed, the first formula in the body of the top of
the intention can be: (a) an achievement goal; (b) a test goal; (c) an action; or (d) true.
In the case of an achievement goal, the system executes it by generating an event; in the
case of a test goal, it looks for an mgu that will unify the goal with the set of base beliefs
of the agent and, if such an mgu exists, applies it to the rest of the intended means; in the case
of an action, the system adds it to the set of actions A; and in the last case the top of the
intention and the achievement goal that was satisfied are removed, and the substitution
is applied to the rest of the body of that intention.
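One possible realization of these four cases, in the style of the cycle sketch from Section 3 (and reusing its Event class), might look as follows; agent.query and agent.substitute are assumed helpers, not the paper's API.

def execute_step(intention, agent):
    frame = intention[-1]                     # top of the intention stack
    if not frame.body:                        # (d) body is true/empty:
        intention.pop()                       #     discard the finished frame
        return
    step = frame.body[0]
    if step.startswith("!"):                  # (a) achievement goal: post an
        agent.events.append(Event("+" + step, intention))  # internal event
    elif step.startswith("?"):                # (b) test goal: look for an mgu
        subst = agent.query(step[1:])         #     against the base beliefs
        if subst is not None:                 #     and apply it to the rest
            frame.body = [agent.substitute(subst, h) for h in frame.body[1:]]
    else:                                     # (c) action: record it and
        agent.actions.append(step)            #     drop it from the body
        frame.body = frame.body[1:]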
Definition 13. Let SI(I) = i, where i is [p1 ‡ ... ‡ f : c1 ∧ ... ∧ cy ← !g(t); h2; ...; hn].
The intention i is said to have been executed iff <+!g(t), i> ∈ E.

Definition 14. Let SI(I) = i, where i is [p1 ‡ ... ‡ pz] and pz is e : b1 ∧ ... ∧ bx ←
?g(t); h2; ...; hn. The intention i is said to have been executed iff there exists a substitution
σ such that g(t)σ = g(s) for some base belief g(s), and i is replaced
by [p1 ‡ ... ‡ pz-1 ‡ (e : b1 ∧ ... ∧ bx ← (h2; ...; hn)σ)].
4 Proof Theory
The interpreter described informally in Section 3 can be stated as follows:

while E ≠ ∅ do
    ε = <d, i> = SE(E);
    E = E − {ε};
    O = {p | there exists an applicable unifier θ for event ε and plan p};
    if external-event(ε) then I = I ∪ [SO(O)θ];
    else push(SO(O)θ, i), where θ is an applicable unifier for ε;
    case first(body(top(SI(I)))) = true:
        x = pop(SI(I));
        push(head(top(SI(I))) ← rest(body(top(SI(I))))σ, SI(I)),
            where σ is an mgu such that xσ = head(top(SI(I)))σ;
    case first(body(top(SI(I)))) = !g(t):
        E = E ∪ {<+!g(t), SI(I)>};
    case first(body(top(SI(I)))) = ?g(t):
        pop(SI(I));
        push(head(top(SI(I))) ← rest(body(top(SI(I))))σ, SI(I)),
            where σ is the correct answer substitution;
    case first(body(top(SI(I)))) = a(t):
        pop(SI(I));
        push(head(top(SI(I))) ← rest(body(top(SI(I)))), SI(I));
        A = A ∪ {a(t)};
endwhile.
The proof rule IntendMeans captures Definition 12: in response to an internal event, the
intended means is pushed on top of the intention that generated it.

IntendMeans:
    <Ei, Bi, {..., [p1 ‡ ... ‡ pz], ...}, Ai>  →  <Ei − {ε}, Bi, {..., [p1 ‡ ... ‡ pz ‡ pσ], ...}, Ai>

where pz = f : c1 ∧ ... ∧ cy ← !g(t); h2; ...; hn, p = +!g(s) : b1 ∧ ... ∧ bm ← k1; ...; kx,
SE(Ei) = ε = <+!g(t), j>, j is [p1 ‡ ... ‡ pz], g(t)σ = g(s)σ, and ∀((b1 ∧ ... ∧ bm)σ)
is a logical consequence of Bi.
Next, we have four proof rules for execution. The four proof rules are based on the
type of the goal or action that appears as the first formula of the body of the top of an
intention chosen to be executed by the function SI. We give the execution proof rule
for achievement goals, ExecAch; the other proof rules can be written analogously.
ExecAch:
    <Ei, Bi, {..., i, ...}, Ai>  →  <Ei ∪ {<+!g(t), i>}, Bi, {..., i, ...}, Ai>

where SI(Ii) = i and i is [p1 ‡ ... ‡ f : c1 ∧ ... ∧ cy ← !g(t); h2; ...; hn] (cf. Definition 13).
Appendix
Definition 20. An atom of the form s = t, where s and t are terms, is called an equation.
Definition 21. A substitution is a finite set {x1/t1, ..., xn/tn}, where x1, ..., xn are distinct
variables and t1, ..., tn are terms such that xi ≠ ti for any i from 1..n.
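As a small illustration of Definition 21 (under the same assumed tuple encoding used in the earlier sketches), applying a substitution to a flat atom replaces each bound variable by its term:

def apply_subst(subst: dict, atom: tuple) -> tuple:
    # subst maps variable names to terms, e.g. {"X": "b"}
    return (atom[0],) + tuple(subst.get(arg, arg) for arg in atom[1:])

assert apply_subst({"X": "b"}, ("location", "robot", "X")) \
       == ("location", "robot", "b")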