
9-25-2013

Prof. Illah R. Nourbakhsh


Professor of Robotics
The Robotics Institute
Carnegie-Mellon University

Eyes of Science: our explorations in gigapixel imaging, space-time exploration and citizen-engaged big data for scientific empowerment
Thursday, 9/26/13, 2:30 pm, SC354

Knowledge Agents (Agents that reason)


Formal Logic
Entailment
Inference Procedure

Propositional Logic
Read AIMA Chapter 7: Logical Agents
Project Proposal due today
HW#4 due Thursday, 10/3

[Diagram: an Original Problem and an Analogous Problem related through a shared Model]

A representation is not just a way of encoding the knowledge of a problem:
it determines what means are available for working on the problem
it determines how expensive processing is
it may determine whether or not all relevant knowledge can be encoded

One day an old monk decides to leave his monastery at precisely 6:00 am to climb to the top
of a mountain so he can enjoy its solitude. He
travels at various speeds and takes several rests
before arriving at the mountain peak at precisely
5:00 pm. He spends the night in prayer and
meditation, and starts back down the mountain
using the same trail the next day at precisely 6:00
am, again traveling at different speeds and stopping
often for periods of rest. At precisely 5:00 pm he
reaches his starting point back at the monastery. Is
there some point along the mountain trail that he
passes at precisely the same time each day?

Knowledge base = set of sentences in a formal language
Declarative approach to building an agent (or other system): Tell it what it needs to know
Then it can Ask itself what to do; answers should follow from the KB
Agents can be viewed at the knowledge level,
i.e., what they know, regardless of how implemented
Or at the implementation level, i.e., data structures in the KB and algorithms that manipulate them
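To make the Tell/Ask idea concrete, here is a minimal sketch of a generic knowledge-based agent loop; the class and method names (tell, ask, and the sentence formats) are illustrative placeholders, not the course's code.

# Sketch of the declarative Tell/Ask agent loop (illustrative names only).
class KnowledgeBasedAgent:
    def __init__(self, kb):
        self.kb = kb     # any KB object exposing tell(sentence) and ask(query)
        self.t = 0       # time step

    def step(self, percept):
        # Tell the KB what the agent perceives at time t.
        self.kb.tell(f"Percept({percept}, {self.t})")
        # Ask the KB what action follows from what it now knows.
        action = self.kb.ask(f"BestAction({self.t})")
        # Record the chosen action so later reasoning can refer to it.
        self.kb.tell(f"Action({action}, {self.t})")
        self.t += 1
        return action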

The agent must be able to:

Represent states, actions, etc.


Incorporate new percepts
Update internal representations of the world
Deduce hidden properties of the world
Deduce appropriate actions

Goal: for the agent to find the gold without falling into a pit or being eaten by the Wumpus

Performance measure
gold +1000, death -1000
-1 per step, -10 for using the arrow
Environment
Squares adjacent to wumpus are smelly
Squares adjacent to pit are breezy
Glitter iff gold is in the same square
Shooting kills wumpus if you are facing it
Shooting uses up the only arrow
Grabbing picks up gold if in same square
Releasing drops the gold in same square
Sensors: Stench, Breeze, Glitter, Bump, Scream
Actuators: Left turn, Right turn, Forward, Grab,
Release, Shoot
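Purely for illustration, the PEAS description above can be written down as a small configuration; the dictionary below and its key names are my own, not part of the assignment.

# Illustrative encoding of the Wumpus-world PEAS description (names are mine).
WUMPUS_WORLD = {
    "performance": {"gold": +1000, "death": -1000, "per_step": -1, "arrow": -10},
    "sensors": ["Stench", "Breeze", "Glitter", "Bump", "Scream"],
    "actuators": ["TurnLeft", "TurnRight", "Forward", "Grab", "Release", "Shoot"],
    "environment": [
        "squares adjacent to the wumpus are smelly",
        "squares adjacent to a pit are breezy",
        "glitter iff gold is in the same square",
        "shooting kills the wumpus if you are facing it and uses the only arrow",
        "grabbing picks up the gold if in the same square",
        "releasing drops the gold in the same square",
    ],
}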

Fully observable? No: only local perception
Deterministic? Yes: outcomes exactly specified
Episodic? No: sequential at the level of actions
Static? Yes: the Wumpus and pits do not move
Discrete? Yes
Single-agent? Yes: the Wumpus is essentially a natural feature

A study of correct inference (truth-preserving)

Truth-preserving inference
If there is a potato in the tailpipe, the car will not start.
There is a potato in the tailpipe.
Therefore, the car will not start.

Non-Truth-preserving inference
If there is a potato in the tailpipe, the car will not start.
My car will not start.
Therefore, there is a potato in the tailpipe.
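A quick way to see the difference is to enumerate the four possible worlds for the two propositions; the sketch below (my own, with P = "potato in tailpipe" and S = "car starts") finds no counterexample to the first argument form but does find one for the second.

# Enumerate all models of P ("potato in tailpipe") and S ("car starts").
from itertools import product

valid_modus_ponens = True    # premises: P => not S, P;      conclusion: not S
valid_affirm_conseq = True   # premises: P => not S, not S;  conclusion: P
for P, S in product([True, False], repeat=2):
    implies = (not P) or (not S)         # "if P then the car will not start"
    if implies and P and S:              # premises hold, conclusion (not S) fails
        valid_modus_ponens = False
    if implies and (not S) and (not P):  # premises hold, conclusion (P) fails
        valid_affirm_conseq = False
print(valid_modus_ponens, valid_affirm_conseq)   # True False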

Three components:
The syntax of a formal language.
i.e. what constitutes a well-formed sentence.

The semantics of a formal language.
What are the meanings of the well-formed sentences; i.e., under what conditions is a sentence true?

A proof theory
A formal specification of what constitutes correct
inference; i.e. a set of axioms and a set of
inference rules.

Consider the predicate: P(x, y)

[Figure: blocks stacked vertically, A on top of B on top of C]

Suppose that P(x, y) means "x is above y"
so, P(A, C) is true (for that interpretation)

Suppose that P(x, y) means "x is directly above and touching y"
so, P(A, C) is false (for that interpretation)

[Diagram: the Representation level (KB of sentences) and the World level (facts). Sentences entail sentences; facts follow from facts; semantics connects each sentence to the fact it represents.]

Logical inference generates new sentences that are entailed by existing sentences.

"KB entails α" is denoted by KB ⊨ α

Entailment means that one thing follows from another:
KB ⊨ α
Knowledge base KB entails sentence α if and only if α is true in all worlds where KB is true
e.g., the KB containing "the Patriots won" and "the Eagles won" entails "Either the Patriots won or the Eagles won"
e.g., x + y = 4 entails 4 = x + y

Entailment is a relationship between sentences (i.e., syntax) that is based on semantics

Logicians typically think in terms of models, which are formally structured worlds with respect to which truth can be evaluated
We say m is a model of a sentence α if α is true in m
M(α) is the set of all models of α
Then KB ⊨ α iff M(KB) ⊆ M(α)
e.g. KB = { Patriots won and Eagles won }
α = Patriots won
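As a tiny illustration (my own sketch, not from the slides), the subset test M(KB) ⊆ M(α) can be checked directly for the Patriots/Eagles example:

# Check M(KB) ⊆ M(alpha) by enumerating all models of two symbols.
from itertools import product

symbols = ["PatriotsWon", "EaglesWon"]
models = [dict(zip(symbols, vals)) for vals in product([True, False], repeat=2)]

kb = lambda m: m["PatriotsWon"] and m["EaglesWon"]   # KB: Patriots won and Eagles won
alpha = lambda m: m["PatriotsWon"]                   # alpha: Patriots won

M_kb = [m for m in models if kb(m)]
M_alpha = [m for m in models if alpha(m)]
print(all(m in M_alpha for m in M_kb))   # True, so KB |= alpha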

Situation after detecting nothing in [1,1], moving right, breeze in [2,1]
Consider possible models for KB assuming only pits
3 Boolean choices ⇒ 8 possible models

KB = wumpus-world rules + observations

KB = wumpus-world rules + observations
α1 = "[1,2] is safe", KB ⊨ α1, proved by model checking

KB = wumpus-world rules + observations
α2 = "[2,2] is safe", KB ⊭ α2

KB ⊢i α = sentence α can be derived from KB by procedure i
Soundness: i is sound if whenever KB ⊢i α, it is also true that KB ⊨ α
Completeness: i is complete if whenever KB ⊨ α, it is also true that KB ⊢i α
Preview: we will define a logic (first-order logic) which is expressive enough to say almost anything of interest, and for which there exists a sound and complete inference procedure.
That is, the procedure will answer any question whose answer follows from what is known by the KB.

An inference rule is complete if, given a set S of sentences, it can infer every sentence that logically follows from S.
1. given KB, infer all sentences entailed from it
2. given α, discover whether or not KB ⊨ α

1. store facts (memory)


want the KB to be well-formed, consistent
2. retrieve knowledge (explicit knowledge)
yes/no
Is Socrates a person?
return a list of all known people
3. inference (implicit knowledge)
Is Socrates mortal?
Who is mortal?
other operations: remove facts, modify facts, etc.
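A toy sketch of such a KB (entirely my own construction; the class and method names are made up) supporting the three operations on the Socrates example:

# A toy KB: store facts, retrieve them, and infer with simple "if X then Y" rules.
class TinyKB:
    def __init__(self):
        self.facts = set()              # e.g. ("person", "Socrates")
        self.rules = []                 # ("person", "mortal") means person(x) => mortal(x)

    def tell_fact(self, pred, arg):     # 1. store facts
        self.facts.add((pred, arg))

    def tell_rule(self, premise, conclusion):
        self.rules.append((premise, conclusion))

    def ask(self, pred, arg):           # 2./3. yes-no query, explicit or inferred
        return (pred, arg) in self._closure()

    def ask_all(self, pred):            # "Who is mortal?" style query
        return sorted(arg for p, arg in self._closure() if p == pred)

    def _closure(self):                 # naive forward chaining to a fixed point
        known = set(self.facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in self.rules:
                for p, arg in list(known):
                    if p == premise and (conclusion, arg) not in known:
                        known.add((conclusion, arg))
                        changed = True
        return known

kb = TinyKB()
kb.tell_fact("person", "Socrates")
kb.tell_rule("person", "mortal")        # all people are mortal
print(kb.ask("person", "Socrates"))     # True  (explicit knowledge)
print(kb.ask("mortal", "Socrates"))     # True  (implicit knowledge, inferred)
print(kb.ask_all("mortal"))             # ['Socrates']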

Declarative: "just the facts, ma'am"
knowledge is specified, how to use it is not
adv.: flexible, modular, easy to add facts

Procedural: how to do something (e.g. make a cup of tea)
control info required to use the knowledge is embedded in the knowledge itself
adv.: control of the search for answers

Tacit knowledge: not expressed in language (how to move a hand)
Causal knowledge: cause and effect

others?

Three components:
The syntax of a formal language.
i.e. what constitutes a well-formed sentence.
The semantics of a formal language.
What are the meanings of the well-formed sentences; i.e., under what conditions is a sentence true?
A proof theory
A formal specification of what constitutes correct
inference; i.e. a set of axioms and a set of
inference rules.

Done in class

KB ⊢i α = sentence α can be derived from KB by procedure i
Soundness: i is sound if whenever KB ⊢i α, it is also true that KB ⊨ α
Completeness: i is complete if whenever KB ⊨ α, it is also true that KB ⊢i α
An inference rule is complete if, given a set S of sentences, it can infer every sentence that logically follows from S.
1. given KB, infer all sentences entailed from it
2. given α, discover whether or not KB ⊨ α

Propositional logic is the simplest logic: it illustrates the basic ideas

The proposition symbols P1, P2, etc. are sentences
If S is a sentence, ¬S is a sentence (negation)
If S1 and S2 are sentences:
S1 ∧ S2 is a sentence (conjunction)
S1 ∨ S2 is a sentence (disjunction)
S1 ⇒ S2 is a sentence (implication)
S1 ⇔ S2 is a sentence (biconditional)

Each model specifies true/false for each proposition symbol
e.g.  P1,2 = false,  P2,2 = true,  P3,1 = false
With these symbols, 8 possible models, can be enumerated automatically.
Rules for evaluating truth with respect to a model m:
¬S is true iff S is false
S1 ∧ S2 is true iff S1 is true and S2 is true
S1 ∨ S2 is true iff S1 is true or S2 is true
S1 ⇒ S2 is true iff S1 is false or S2 is true
(i.e., it is false iff S1 is true and S2 is false)
S1 ⇔ S2 is true iff S1 ⇒ S2 is true and S2 ⇒ S1 is true

Simple recursive process evaluates an arbitrary sentence, e.g.
¬P1,2 ∧ (P2,2 ∨ P3,1) = true ∧ (true ∨ false) = true ∧ true = true
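The evaluation itself is a short recursion; below is a sketch using a tuple encoding of sentences chosen purely for illustration (the name pl_true and the operator tags are not a prescribed representation).

# Recursive truth evaluation of a propositional sentence in a model.
def pl_true(sentence, model):
    """Evaluate a sentence in a model mapping symbols to True/False."""
    if isinstance(sentence, str):                 # proposition symbol
        return model[sentence]
    op, *args = sentence
    if op == "not":  return not pl_true(args[0], model)
    if op == "and":  return pl_true(args[0], model) and pl_true(args[1], model)
    if op == "or":   return pl_true(args[0], model) or pl_true(args[1], model)
    if op == "=>":   return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == "<=>":  return pl_true(args[0], model) == pl_true(args[1], model)
    raise ValueError(f"unknown operator {op!r}")

# The slide's example: ¬P1,2 ∧ (P2,2 ∨ P3,1) in the model shown above.
model = {"P12": False, "P22": True, "P31": False}
print(pl_true(("and", ("not", "P12"), ("or", "P22", "P31")), model))  # True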

Let Pi,j be true if there is a pit in [i, j].
Let Bi,j be true if there is a breeze in [i, j].
¬P1,1
¬B1,1
B2,1

"Pits cause breezes in adjacent squares"
B1,1 ⇔ (P1,2 ∨ P2,1)
B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)

Depth-first enumeration of all models is sound & complete
For n symbols, time complexity is O(2^n) and space complexity is O(n)
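A minimal sketch of that enumeration, assuming sentences are represented as Python predicates over a model dictionary; it checks the two Wumpus queries from the previous slides (symbol names like P12 are my own shorthand).

# Truth-table entailment check: enumerate all 2^n models of the symbols.
from itertools import product

def tt_entails(kb, alpha, symbols):
    """Return True iff alpha is true in every model in which kb is true."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False        # found a model of KB in which alpha is false
    return True

# Wumpus example (pit/breeze symbols only), matching the KB above.
symbols = ["P11", "P12", "P21", "P22", "P31", "B11", "B21"]
kb = lambda m: (not m["P11"] and not m["B11"] and m["B21"]
                and m["B11"] == (m["P12"] or m["P21"])
                and m["B21"] == (m["P11"] or m["P22"] or m["P31"]))
alpha1 = lambda m: not m["P12"]     # "[1,2] is safe" (no pit there)
alpha2 = lambda m: not m["P22"]     # "[2,2] is safe"
print(tt_entails(kb, alpha1, symbols))   # True:  KB |= alpha1
print(tt_entails(kb, alpha2, symbols))   # False: KB does not entail alpha2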

Two sentences are logically equivalent iff true in the same models:
α ≡ β iff α ⊨ β and β ⊨ α

Proof methods divide into (roughly) two kinds:

Application of inference rules
Sound generation of new sentences from old
Proof = a sequence of inference rule applications
Can use inference rules as operators in a standard search algorithm
Typically require transformation of sentences into a normal form

Model checking
truth table enumeration (always exponential in n)
improved backtracking, e.g., Davis-Putnam-Logemann-Loveland (DPLL)
heuristic search in model space (sound but incomplete)
e.g., min-conflicts-like hill-climbing algorithms
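For flavor, here is a bare-bones backtracking satisfiability check in the spirit of DPLL (my own sketch; it splits on symbols but omits the unit-clause and pure-symbol heuristics that real DPLL uses). Clauses are lists of (symbol, is_positive) literals in CNF, and entailment reduces to it: KB ⊨ α iff KB ∧ ¬α is unsatisfiable.

# Backtracking CNF satisfiability (DPLL without its heuristics); a sketch.
def satisfiable(clauses, symbols, model):
    """Return True iff some extension of model satisfies every clause."""
    remaining = []
    for clause in clauses:
        if any(s in model and model[s] == pos for s, pos in clause):
            continue                                  # clause already satisfied
        open_literals = [(s, pos) for s, pos in clause if s not in model]
        if not open_literals:
            return False                              # clause falsified
        remaining.append(open_literals)
    if not remaining:
        return True                                   # every clause satisfied
    p, rest = symbols[0], symbols[1:]                 # split on the next symbol
    return (satisfiable(remaining, rest, {**model, p: True})
            or satisfiable(remaining, rest, {**model, p: False}))

# Example: P, P => Q, not Q is unsatisfiable, so {P, P => Q} entails Q.
clauses = [[("P", True)], [("P", False), ("Q", True)], [("Q", False)]]
print(satisfiable(clauses, ["P", "Q"], {}))   # False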
