
Logical Agents 2

Propositional Logic

Outline: Propositional logic


Syntax of propositional logic: defines the allowable sentences
Semantics: the way in which the truth of sentences is determined
Entailment: the relation between a sentence and another sentence
Algorithms for logical inference

Propositional logic
Propositional logic is the simplest logic; it illustrates the basic ideas. Definition: a proposition is a statement that can be either true or false; it must be one or the other, and it cannot be both. The following are propositions: the reactor is on; the wing-flaps are up. The following are not: are you going out somewhere?; 2+3. A good test for a proposition is to ask "Is it true that ...?". If that makes sense, it is a proposition.

Propositional logic : Syntax


Propositional logic consists of:
The logical values true and false (T and F)
Propositions: sentences, which are either
atomic (that is, they must be treated as indivisible units, with no internal structure) and have a single logical value, either true or false, or
complex sentences
Operators, both unary and binary, which when applied to logical values yield logical values

A BNF (Backus-Naur Form) grammar of sentences in propositional logic

The atomic sentences consist of a single proposition symbol that can be true or false. Symbols start with an uppercase letter and may contain other letters or subscripts. Ex: P, Q, R, W1,3
Symbol → P | Q | R | ...

A BNF (Backus-Naur Form) grammar of sentences in propositional logic

Complex sentences are constructed from simpler sentences, using parentheses and logical connectives

Propositional logic: Syntax


Five Logical connectives
The proposition symbols S1, S2, etc. are sentences.
If S is a sentence, ¬S is a sentence (negation).
If S1 and S2 are sentences, S1 ∧ S2 is a sentence (conjunction).

If S1 and S2 are sentences, S1 ∨ S2 is a sentence (disjunction).


If S1 and S2 are sentences, S1 ⇒ S2 is a sentence (implication); its premise or antecedent is S1 and its conclusion or consequent is S2. Implications are also known as rules or if-then statements.
If S1 and S2 are sentences, S1 ⇔ S2 is a sentence (biconditional); biconditional means "if and only if".

Operator precedences from highest to lowest


Operator precedences from highest to lowest: ¬, ∧, ∨, ⇒, ⇔. ∧ and ∨ are associative, but ⇒ and ⇔ are not.

Semantics
The semantics defines the rules for determining the truth of a sentence with respect to a particular model. In propositional logic, a model simply fixes the truth value, true or false, for every proposition symbol. Ex: if the sentences in the knowledge base make use of the proposition symbols P1,2, P2,2, and P3,1,
then one possible model is m1 = {P1,2 = false, P2,2 = false, P3,1 = true}. With three proposition symbols, there are 2³ = 8 possible models.

Note: Models are purely mathematical objects with no connection to wumpus worlds. P1,2 might mean "there is a pit in [1,2]" or "I'm in AI class now"

Semantics
The semantics for propositional logic must specify how to compute the truth value of any sentence, given a model. How to compute the truth of atomic sentences:

a. True is true in every model and False is false in every model. b. The truth value of every proposition symbol must be specified directly in the model.
Ex: In the model m1, P1,2 is false.
10

Semantics
How to compute the truth of complex sentences: five rules, which hold for any sub-sentences P and Q in any model m (here "iff" means "if and only if"):

11

Propositional logic: Semantics


Rules for evaluating truth with respect to a model m:
1. ¬P is true iff P is false in m.
2. P ∧ Q is true iff both P and Q are true in m.
3. P ∨ Q is true iff P is true or Q is true in m.
4. P ⇒ Q is true iff P is false or Q is true in m; i.e., it is false iff P is true and Q is false in m.
5. P ⇔ Q is true iff P ⇒ Q is true and Q ⇒ P is true in m; i.e., iff P and Q are both true or both false in m.
12

Propositional logic: Semantics


Truth Tables

The rules can also be expressed with truth tables that specify the truth value of a complex sentence for each possible assignment of truth values to its components. Truth tables for the five connectives are given.

13

Truth tables for five logical connectives

14

Propositional logic: Semantics


Truth Tables From these tables, the truth value of any sentence s can be computed with respect to any model m by a simple recursive evaluation

A simple recursive process evaluates an arbitrary sentence. Ex: the sentence ¬P1,2 ∧ (P2,2 ∨ P3,1) is evaluated in m1 as
¬P1,2 ∧ (P2,2 ∨ P3,1) = true ∧ (false ∨ true) = true ∧ true = true
Here m1 = {P1,2 = false, P2,2 = false, P3,1 = true}
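A minimal sketch of this recursive evaluation in Python; the nested-tuple sentence representation and the symbol names (P12, P22, P31) are my own illustrative choices, not the textbook's code.

```python
def pl_true(sentence, model):
    """Recursively evaluate a propositional sentence in a model (dict: symbol -> bool)."""
    if isinstance(sentence, str):                 # atomic proposition symbol
        return model[sentence]
    op, *args = sentence
    if op == 'not':
        return not pl_true(args[0], model)
    if op == 'and':
        return all(pl_true(a, model) for a in args)
    if op == 'or':
        return any(pl_true(a, model) for a in args)
    if op == 'implies':
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == 'iff':
        return pl_true(args[0], model) == pl_true(args[1], model)
    raise ValueError(f"unknown operator: {op}")

m1 = {'P12': False, 'P22': False, 'P31': True}
sentence = ('and', ('not', 'P12'), ('or', 'P22', 'P31'))   # ¬P1,2 ∧ (P2,2 ∨ P3,1)
print(pl_true(sentence, m1))                               # True
```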

15

TT for Implication
The truth table for "P implies Q" or "if P then Q" defines material implication: the antecedent does not have to be relevant to, or a cause of, the consequent. Ex1: the sentence "5 is odd implies Tokyo is the capital of Japan" is a true sentence of propositional logic, even though it is an odd sentence of English. Any implication is true whenever its antecedent is false. Ex2: "5 is even implies Sam is smart" is true regardless of whether Sam is smart. So we should read P ⇒ Q as "If P is true, then I am claiming that Q is true. Otherwise I am making no claim." The only way for this sentence to be false is if P is true but Q is false.

16

TT for Biconditional
The truth table for "P if and only if Q": the biconditional P ⇔ Q is true whenever both P ⇒ Q and Q ⇒ P are true. Many of the rules of the wumpus world are best written using ⇔. Ex: a square is breezy if a neighboring square has a pit, and a square is breezy only if a neighboring square has a pit.
Implication: the one-way implication B1,1 ⇒ (P1,2 ∨ P2,1) is true in the wumpus world, but incomplete. It does not rule out models in which B1,1 is false and P1,2 is true, which would violate the rules of the wumpus world.
Biconditional: B1,1 ⇔ (P1,2 ∨ P2,1), where B1,1 means that there is a breeze in [1,1].
Implication vs. biconditional: the implication requires the presence of pits if there is a breeze, whereas the biconditional also requires the absence of pits if there is no breeze. So we need a biconditional here.
17

Knowledge base
It is known that a knowledge base consists of a set of sentences. We can now see that a knowledge base is a conjunction of those sentences.
That is, if we start with an empty KB and do TELL(KB, S1) ... TELL(KB, Sn), then we have KB = S1 ∧ ... ∧ Sn.

This means that we can treat knowledge bases and sentences interchangeably.

18

Constructing a knowledge base for the wumpus world

19

Constructing a knowledge base for the wumpus world


We will deal only with pits (the wumpus itself is left as an exercise). Steps:
1. First, choose the vocabulary of proposition symbols and construct a knowledge base.
2. Provide enough knowledge in step 1 so that inference can be carried out; provide an inference procedure.

20

1. A simple Knowledge base


Assumptions: we focus first on the immutable aspects of the wumpus world (the mutable aspects are dealt with later). We need the following symbols for each location [x, y]:
Px,y is true if there is a pit in [x, y].
Wx,y is true if there is a wumpus in [x, y], dead or alive.
Bx,y is true if the agent perceives a breeze in [x, y].
Sx,y is true if the agent perceives a stench in [x, y].
Observations: a breeze in [2,1] and nothing in [1,1]
Rules: a breeze is perceived in squares directly adjacent to a pit
Problem: find the relevant KB after starting in [1,1] and moving to [2,1].
21

A simple Knowledge base


Derive ¬P1,2: there is no pit in [1,2]
(note: derive the same way as done informally using logic before)

Label each sentence Ri so that we can refer to it:

1. There is no pit in [1,1]:
R1: ¬P1,1
2. A square is breezy if and only if there is a pit in a neighboring square.
This has to be stated for each square. For now, we include just the relevant squares:

R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)

The preceding sentences are true in all wumpus worlds.

3. Now include the breeze percepts for the first two squares visited in the specific world the agent is in:
R4: ¬B1,1
R5: B2,1
22

A simple Knowledge base


The knowledge base, then, consists of sentences R1 through R5. It can also be considered as a single sentence:
the conjunction R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5, because it asserts that all the individual sentences are true.
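For concreteness, R1 through R5 could be written down with the nested-tuple representation used in the earlier evaluation sketch; the symbol names (P11, B11, ...) are my own shorthand, not the book's notation.

```python
# Hypothetical encoding of R1..R5; 'and' and 'or' are treated as n-ary here.
R1 = ('not', 'P11')                                # no pit in [1,1]
R2 = ('iff', 'B11', ('or', 'P12', 'P21'))          # [1,1] breezy iff pit in a neighbour
R3 = ('iff', 'B21', ('or', 'P11', 'P22', 'P31'))   # same rule for [2,1]
R4 = ('not', 'B11')                                # percept: no breeze in [1,1]
R5 = 'B21'                                         # percept: breeze in [2,1]
KB = ('and', R1, R2, R3, R4, R5)                   # the KB as a single conjunction
```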

23

2. A simple Inference Procedure


Recall that the aim of logical inference is to decide whether KB ⊨ α for some sentence α. Ex: is ¬P2,2 entailed? Our first algorithm for inference will be a direct implementation of the definition of entailment:
enumerate the models, and check that α is true in every model in which KB is true. For propositional logic, models are assignments of true or false to every proposition symbol.

24

2. A simple Inference Procedure


Ex: is ¬P2,2 entailed? The relevant proposition symbols are
B1,1, B2,1, P1,1, P1,2, P2,1, P2,2, and P3,1. With seven symbols, there are 2⁷ = 128 possible models;

in three of these, KB is true. In those three models, ¬P1,2 is true, hence there is no pit in [1,2]. On the other hand, ¬P2,2 is true in two of the three models and false in one, so we cannot yet tell whether there is a pit in [2,2].

25

Truth tables for inference - KB

Here KB |= B2,1

26

Truth tables for inference: KB and α

27

Algorithm for Entailment


TT-ENTAILS?(KB, α) returns true or false. PL-TRUE?(s, model) returns true if the sentence s holds within the model. The variable model represents a partial model, an assignment to some of the symbols. The keyword "and" is used here as a logical operation on its two arguments, returning true or false.

28

Algorithm for Entailment


TT-CHECK-ALL(KB, α, symbols, model): TT-CHECK-ALL checks that, for each model (each possible assignment of true or false to the symbols) that is consistent with the KB, the query evaluates to true. For each model: PL-TRUE?(KB, model) ⇒ PL-TRUE?(query, model).
When passing the KB we would not pass "R1 and R2 and ... and R5" but the sentences themselves, such as ¬P1,1 ∧ (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ... ∧ B2,1.

Essentially it enumerates a truth table, checking that whenever KB is true, α is also true.

29

Algorithm for Entailment


TT-CHECK-ALL(KB, α, symbols, []) returns true
if KB is false in the model, or if KB is true and α is true in the model. Recall that when checking whether KB entails α, if KB is false then α may be true or false; α must be true only when KB is true.

PL-TRUE?(KB, model) returns true if KB is true in the model. PL-TRUE?(α, model) returns true if α is true in the model.

30

Inference by enumeration
A recursive enumeration of a finite space of assignments to symbols.
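A minimal Python sketch of this enumeration in the spirit of TT-ENTAILS?/TT-CHECK-ALL; it reuses the informal tuple representation from the earlier sketches (the evaluator is repeated so the block runs on its own), and all names are my own.

```python
def pl_true(s, model):
    """Truth value of sentence s in a complete model (dict: symbol -> bool)."""
    if isinstance(s, str):
        return model[s]
    op, *args = s
    if op == 'not':     return not pl_true(args[0], model)
    if op == 'and':     return all(pl_true(a, model) for a in args)
    if op == 'or':      return any(pl_true(a, model) for a in args)
    if op == 'implies': return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == 'iff':     return pl_true(args[0], model) == pl_true(args[1], model)

def tt_entails(kb, alpha, symbols):
    """True iff alpha is true in every model in which kb is true."""
    return tt_check_all(kb, alpha, list(symbols), {})

def tt_check_all(kb, alpha, symbols, model):
    if not symbols:                                    # every symbol is assigned
        if pl_true(kb, model):
            return pl_true(alpha, model)               # KB true: alpha must be true
        return True                                    # KB false: this row is irrelevant
    first, rest = symbols[0], symbols[1:]
    return (tt_check_all(kb, alpha, rest, {**model, first: True}) and
            tt_check_all(kb, alpha, rest, {**model, first: False}))

kb = ('and', 'P', ('implies', 'P', 'Q'))
print(tt_entails(kb, 'Q', ['P', 'Q']))                 # True
```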

31

Algorithm Trace
The first part of TT-CHECK-ALL is executed when all the symbols have been given a value in the model. It checks whether the given model, e.g. [P=true, Q=false], is consistent with the knowledge base (PL-TRUE?(KB, model)). These models correspond to the lines in the truth table that have true in the KB column. For those, the algorithm then checks whether the query evaluates to true (PL-TRUE?(query, model)). All other models, which are inconsistent with the KB in the first place, are not considered; returning true for them works because true is the neutral element of conjunction.
32

Algorithm Trace
The else part of TT-CHECK-ALL recursively constructs a huge conjunction over all the possible assignments to the symbols occurring in the knowledge base and the query. Ex: TT-CHECK-ALL(KB, α, [P, Q], []) will evaluate to
TT-CHECK-ALL(KB, α, [], [P=true, Q=true]) and
TT-CHECK-ALL(KB, α, [], [P=true, Q=false]) and
TT-CHECK-ALL(KB, α, [], [P=false, Q=true]) and
TT-CHECK-ALL(KB, α, [], [P=false, Q=false])

33

Evaluation of Algorithm
Time complexity
Space complexity
Soundness
Completeness

34

Inference by enumeration Evaluation of Algorithm

For n symbols, the time complexity is O(2ⁿ) and the space complexity is O(n).


35

Evaluation of Algorithm
Soundness: the algorithm is sound because it implements directly the definition of entailment.
Completeness: it is complete because it works for any KB and query α and always terminates; there are only finitely many models to examine. If KB and α contain n symbols in all, then there are 2ⁿ models. The time complexity of the algorithm is O(2ⁿ). The space complexity is only O(n) because of depth-first enumeration.
Note: propositional entailment is co-NP-complete (i.e., probably no easier than NP-complete). Every known inference algorithm for propositional logic has a worst-case complexity that is exponential in the size of the input.
36

Summary so far:
Logical agents apply inference to a knowledge base to derive new information and make decisions Basic concepts of logic:
syntax: formal structure of sentences
semantics: truth of sentences with respect to models
entailment: necessary truth of one sentence given another
inference: deriving sentences from other sentences
soundness: derivations produce only entailed sentences
completeness: derivations can produce all entailed sentences

37

Truth Tables for inference


1. Recall that KB ⊨ α if and only if M(KB) ⊆ M(α).
We illustrated this graphically for the Wumpus World in Logic in General.

2. We can determine whether M(KB) ⊆ M(α) using truth tables: write out every possible combination of truth values for the atomic propositions.
Enumerate all the models.

If KB is true in a row, check that α is also true. If this is the case in every such row, then M(KB) ⊆ M(α) and so KB ⊨ α.

38

Proposition Logic Sentences

39

Propositional Theorem Proving

40

Inference by Theorem Proving


Entailment by model checking: so far, we have shown how to determine entailment by model checking, enumerating models and showing that the sentence must hold in all models.
Entailment by theorem proving: applying rules of inference directly to the sentences in our knowledge base to construct a proof of the desired sentence without consulting models. If the number of models is large but the length of the proof is short, then theorem proving can be more efficient than model checking.
41

Inference by Theorem Proving


Given:
A set of sentences in the KB: the premises
A sentence to be proved: the conclusion
A set of sound inference rules

Apply a rule to an appropriate sentence. Add the resulting sentence to the KB. Stop if we have added the conclusion to the KB. This is essentially a search problem.
Better than model checking when there are many models but short proofs.
42

Inference by Theorem Proving


Rather than enumerate all possible models, we can apply inference rules to the current sentences in the KB to derive new sentences. A proof consists of a chain of rules beginning with known sentences (premises) and ending with the sentence we want to prove (conclusion).

43

Additional Concepts: properties of logical systems
1. Logical Equivalence

2. Validity
Deduction theorem

3. Satisfiability

44

1. Logical Equivalence
Two sentences α and β are logically equivalent if they are true in the same set of models. We write this as α ≡ β. Ex: we can easily show (using truth tables) that P ∧ Q and Q ∧ P are logically equivalent. Equivalences play the same role in logic as arithmetic identities do in ordinary mathematics. An alternative definition of equivalence: any two sentences α and β are equivalent if and only if each of them entails the other: α ≡ β iff α ⊨ β and β ⊨ α.

45

Logical equivalence
Two sentences are logically equivalent iff they are true in the same models: α ≡ β iff α ⊨ β and β ⊨ α.

Note: the contrapositive of α ⇒ β has its antecedent and consequent negated and swapped: it is ¬β ⇒ ¬α. The inverse is ¬α ⇒ ¬β. The converse of α ⇒ β is β ⇒ α.
46

2. Validity
A sentence is valid if it is true in all models.
Ex1: the sentence P ∨ ¬P is valid. Ex2: True. Ex3: A ⇒ A. Ex4: (A ∧ (A ⇒ B)) ⇒ B

Valid sentences are also known as tautologies: they are necessarily true. Because the sentence True is true in all models, every valid sentence is logically equivalent to True.

Ex: the standard logical equivalences are universal tautologies; they will always be true in all possible models. Note: the above sentences are true solely because of their own logical form, regardless of how the world(s) happen to be.
47

Validity
Validity is connected to inference via the Deduction Theorem: for any sentences KB and α, KB ⊨ α if and only if the sentence (KB ⇒ α) is valid. Hence, we can decide whether KB ⊨ α by checking that (KB ⇒ α) is true in every model. Conversely, the deduction theorem states that every valid implication sentence describes a legitimate inference. Note: recall the inference algorithm TT-ENTAILS?(KB, α), which essentially does the same, or by proving that (KB ⇒ α) is equivalent to True.
48

3. Satisfiability
A sentence is satisfiable if it is true in, or satisfied by, some model. Ex1: KB = R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5 is satisfiable because there are three models in which it is true. Ex2: A ∨ B. Satisfiability can be checked by enumerating the possible models until one is found that satisfies the sentence.
The problem of determining the satisfiability of sentences in propositional logic is known as the SAT problem. SAT was the first problem proved to be NP-complete. Many problems in computer science are really satisfiability problems.

A sentence is unsatisfiable if it is true in no models. Ex: A ∧ ¬A


Note: SAT as a search problem: try to assign true or false to the symbols in α in such a way that α becomes true.
49

Validity and satisfiability


Validity and satisfiability are connected:
α is valid iff ¬α is unsatisfiable; contrapositively, α is satisfiable iff ¬α is not valid.

Satisfiability is connected to inference via the following: KB ⊨ α if and only if (KB ∧ ¬α) is unsatisfiable. Proving α from KB by checking the unsatisfiability of (KB ∧ ¬α) corresponds exactly to the standard mathematical proof technique of proof by refutation or proof by contradiction.
Assume the sentence α to be false and show that this leads to a contradiction with the known axioms KB.

This contradiction is exactly what is meant by saying that the sentence (KB ∧ ¬α) is unsatisfiable.
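As a rough illustration of this refutation idea (my own sketch, not the text's algorithm), a model enumerator can simply look for a satisfying model of KB ∧ ¬α; pl_true refers to the evaluator sketched earlier and is passed in as an argument so this block stands on its own.

```python
from itertools import product

def find_model(sentence, symbols, pl_true):
    """Return a model (dict) that satisfies `sentence`, or None if it is unsatisfiable."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if pl_true(sentence, model):
            return model
    return None

def entails_by_refutation(kb, alpha, symbols, pl_true):
    """KB entails alpha exactly when KB ∧ ¬alpha has no satisfying model."""
    return find_model(('and', kb, ('not', alpha)), symbols, pl_true) is None
```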
50

7.5.1 Inference and proofs

51

Inference
One can use equivalence to convert one formula into another. Equivalence alone is not strong enough to prove (some) new statements. Inference rules can generate new statements.

52

Syntax of Inference Rules

The premises above the line list all that must hold before this rule can be applied. The conclusion below the line gives what can then be inferred. An inference rule can be read as: "if I have already inferred these premises, then I can infer this conclusion too", or "if the premises are proved, then the conclusion is proved".

53

Inference rules
Inference rules can be applied to derive a proof: a chain of conclusions that leads to the desired goal. 1. The best-known rule is called Modus Ponens (Latin for "mode that affirms"), also known as Implication Elimination.

The notation means that, whenever any sentences of the form α ⇒ β and α are given, then the sentence β can be inferred. Ex: if I know "If the Jets won, they qualified for the playoffs", and I learn "The Jets won", then I can conclude "The Jets qualified for the playoffs".
54

Inference rules
2. Another useful inference rule is And-Elimination, which says that, from a conjunction, any of the conjuncts can be inferred

Ex: if we are told "The Jets won and the Giants won", then we know "The Jets won" and we know "The Giants won". Note: by considering the possible truth values of α and β, it is possible to show that Modus Ponens and And-Elimination are sound. These rules can then be used to generate sound inferences without the need for enumerating models.
55

Inference rules
3. All of the logical equivalences can be used as inference rules. Ex: the equivalence for biconditional elimination yields the two inference rules

56

Monotonicity
The set of entailed sentences can only increase as information is added to the knowledge base. For any sentences α and β, if KB ⊨ α then KB ∧ β ⊨ α. Ex: suppose the knowledge base contains α and the additional assertion β is added, where
α = "There is no pit in [1,2]"
β = "There are exactly eight pits in the world"
This knowledge might help the agent draw additional conclusions, but it cannot invalidate any conclusion already inferred, such as the conclusion that there is no pit in [1,2].
56

Inference rules - Problem


Starting with the knowledge base containing R1 through R5, show how to prove ¬P1,2 (there is no pit in [1,2]) using the technique of inference rules. What are R1 ... R5? Choose the background sentence R2: B1,1 ⇔ (P1,2 ∨ P2,1)

58

A simple Knowledge base


Derive ¬P1,2: there is no pit in [1,2]
(note: derive the same way as done informally using logic before)

Label each sentence Ri so that we can refer to it:

1. There is no pit in [1,1]:
R1: ¬P1,1
2. A square is breezy if and only if there is a pit in a neighboring square.
This has to be stated for each square. For now, we include just the relevant squares:

R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)

The preceding sentences are true in all wumpus worlds.

3. Now include the breeze percepts for the first two squares visited in the specific world the agent is in:
R4: ¬B1,1
R5: B2,1
59

Inference rules - Problem


Starting with the knowledge base containing R1 through R5, show how to prove ¬P1,2 (there is no pit in [1,2]) using inference rules.
Background sentence R2: B1,1 ⇔ (P1,2 ∨ P2,1)
Apply ⇔-elimination to R2: R6: (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
Apply ∧-elimination to R6: R7: (P1,2 ∨ P2,1) ⇒ B1,1
Apply the logical equivalence for contraposition to R7: R8: ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)
Apply Modus Ponens to R8 and the percept R4 (¬B1,1): R9: ¬(P1,2 ∨ P2,1)
Apply De Morgan's rule to R9: R10: ¬P1,2 ∧ ¬P2,1
So [1,2] and [2,1] have no pits.
60

Inference rules - Problem


Proof by hand: starting with the knowledge base containing R1 through R5, we have shown how to prove ¬P1,2: there is no pit in [1,2]. Instead of the proof-by-hand method, any of the search algorithms (Chapter 3) can be applied to find a sequence of steps that constitutes a proof. How do we define the proof problem?

61

Proofs
Proof: a sequence of applications of inference rules. Finding a proof is a search problem. Initial state? Goal statement? Result (or successor function)? Actions?

62

Inference rules - Problem


Defining a proof problem:
INITIAL STATE: the initial knowledge base.
ACTIONS: the set of actions consists of all the inference rules applied to all the sentences that match the top half of the inference rule.
RESULT: the result of an action is to add the sentence in the bottom half of the inference rule.
GOAL: the goal is a state that contains the sentence we are trying to prove.

63

Inference rules
Proof vs. enumerating models: finding a proof can be more efficient because the proof can ignore irrelevant propositions.
Ex: the proof leading to ¬P1,2 ∧ ¬P2,1 does not mention the propositions B2,1, P1,1, P2,2, or P3,1. They can be ignored because the goal proposition, P1,2, appears only in sentence R2; the other propositions in R2 appear only in R4 and R2; so R1, R3, and R5 have no bearing on the proof. The same would hold even if there were a million more sentences in the knowledge base;

the truth-table algorithm, on the other hand, would suffer an
exponential explosion of models.
64

7.5.2 Proof by resolution

65

Proof by resolution
The inference rules shown so far are sound. What about completeness of the inference algorithms that use them? Completeness for search algorithms: search algorithms are complete if they will find any reachable goal. Completeness for inference algorithms: if the available inference rules are inadequate, then the goal is not reachable. Ex: if the biconditional elimination rule is removed, then the previous proof fails.

66

Proof by resolution
Resolution: definition: a single inference rule that yields a complete inference algorithm when coupled with any complete search algorithm. Observation: the agent returns from [2,1] to [1,1] and then goes to [1,2], where it perceives a stench, but no breeze. We add the following facts to the knowledge base:
R11: ¬B1,2
R12: B1,2 ⇔ (P1,1 ∨ P2,2 ∨ P1,3)
Problem: from R12, derive that there is no pit at [1,3] or [2,2].
Solution: by the same process that led to R10, we can derive from R12 that there is no pit at [1,3] or [2,2] (we already have, by R1, that [1,1] is pitless):
R13: ¬P2,2
R14: ¬P1,3
R1: ¬P1,1

67

Proof by resolution
Prove by resolution that there is a pit in [3,1]. What is the background sentence? Choose R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)

68

A simple Knowledge base


Derive ¬P1,2: there is no pit in [1,2]
(note: derive the same way as done informally using logic before)

Label each sentence Ri so that we can refer to it:

1. There is no pit in [1,1]:
R1: ¬P1,1
2. A square is breezy if and only if there is a pit in a neighboring square.
This has to be stated for each square. For now, we include just the relevant squares:

R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)

The preceding sentences are true in all wumpus worlds.

3. Now include the breeze percepts for the first two squares visited in the specific world the agent is in:
R4: ¬B1,1
R5: B2,1
69

The general Idea of Resolution- Example


Background sentence R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
Apply ⇔-elimination to R3: (B2,1 ⇒ (P1,1 ∨ P2,2 ∨ P3,1)) ∧ ((P1,1 ∨ P2,2 ∨ P3,1) ⇒ B2,1)
Apply ∧-elimination: B2,1 ⇒ (P1,1 ∨ P2,2 ∨ P3,1)

Modus Ponens with R5 (B2,1): P1,1 ∨ P2,2 ∨ P3,1
Resolution with R1 (¬P1,1): P2,2 ∨ P3,1
Resolution with R13 (¬P2,2): P3,1
So there is a pit at [3,1].
70

The general Idea of Resolution

Using a proof system P with many rules is fine for humans.


For a computer, the fewer rules P has, the simpler the corresponding search algorithm gets. Just one rule, called resolution, turns out to be enough if we

1. first rewrite our KB into a certain fixed form 2. then seek refutation proofs
71

The general Idea of Resolution

If the same disjunct occurs both

positively (i.e., outside a ¬) in one disjunction and negatively (i.e., inside a ¬) in another disjunction,

then we can combine these two disjunctions into one resolvent and drop their common disjunct.

72

The general Idea of Resolution


In both of these two resolution steps, the common disjunct is just a single symbol.
We are going to rewrite our KB so that this will always be the case. The reason is that an unmodified KB would not contain very many possible choices for the common disjunct.

The form into which we are going to rewrite our KB can be defined as follows:
A literal is a Symbol or ¬Symbol. A clause is a (possibly empty) disjunction of literals.

Then a KB is in Conjunctive Normal Form (CNF) if it is a conjunction of clauses.

73

The general rule of Resolution


The full resolution rule takes the form: you can form the resolvent of two clauses if the same symbol X occurs positively in one of them and negatively in the other.

By the associativity and commutativity of ∨, this common symbol X can appear anywhere inside these two clauses.
74

Conjunctive normal form

75

CNF
The resolution rule applies only to clauses (that is, disjunctions of literals) So it is relevant to knowledge bases and queries consisting of clauses. How, then, can it lead to a complete inference procedure for all of propositional logic? The answer is that every sentence of propositional logic is logically equivalent to a conjunction of clauses. A sentence expressed as a conjunction of clauses is said to be in conjunctive normal form or CNF

76

Conversion to CNF for R2: B1,1 ⇔ (P1,2 ∨ P2,1)


A given formula can be converted into CNF with the following 4 steps:
1. Replace each occurrence of ⇔ with the corresponding two occurrences of ⇒:
(B1,1 ⇒ P1,2 ∨ P2,1) ∧ (P1,2 ∨ P2,1 ⇒ B1,1)
2. Replace each occurrence of α ⇒ β with the equivalent ¬α ∨ β:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)

3. Move each ¬ towards the symbols:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
77

Conversion to CNF for R2: B1,1 ⇔ (P1,2 ∨ P2,1)


4. Finally, move each ∨ from under any ∧ by using distributivity, which permits replacing α ∨ (β ∧ γ) with (α ∨ β) ∧ (α ∨ γ):
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
If we now undo step 2, we see that the above sentence does indeed say the same thing as the original sentence, but in a different way:

(B1,1 ⇒ P1,2 ∨ P2,1) ∧ (P1,2 ⇒ B1,1) ∧ (P2,1 ⇒ B1,1)


The CNF form is now used as input to a resolution procedure.
78

Conversion to CNF Summary


B1,1 ⇔ (P1,2 ∨ P2,1)
1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α):
(B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β:

(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)


3. Move ¬ inwards using de Morgan's rules and double negation:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
4. Apply the distributivity law (∨ over ∧) and flatten:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
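The four steps can be sketched over the same nested-tuple representation; this is an illustrative converter of my own (it assumes ⇔ and ⇒ are eliminated first and it does not flatten nested ∧/∨), not the book's CONVERT-TO-CNF.

```python
def eliminate_iff_and_implies(s):
    """Steps 1 and 2: rewrite ⇔ and ⇒ in terms of ¬, ∧, ∨."""
    if isinstance(s, str):
        return s
    op, *args = s
    args = [eliminate_iff_and_implies(a) for a in args]
    if op == 'iff':
        a, b = args
        return ('and', ('or', ('not', a), b), ('or', ('not', b), a))
    if op == 'implies':
        a, b = args
        return ('or', ('not', a), b)
    return (op, *args)

def move_not_inwards(s):
    """Step 3: push ¬ down to the symbols (De Morgan, double negation)."""
    if isinstance(s, str):
        return s
    op, *args = s
    if op == 'not':
        a = args[0]
        if isinstance(a, str):
            return s                                         # already a literal
        if a[0] == 'not':
            return move_not_inwards(a[1])                    # ¬¬x -> x
        if a[0] == 'and':                                    # ¬(x ∧ y) -> ¬x ∨ ¬y
            return ('or', *[move_not_inwards(('not', x)) for x in a[1:]])
        if a[0] == 'or':                                     # ¬(x ∨ y) -> ¬x ∧ ¬y
            return ('and', *[move_not_inwards(('not', x)) for x in a[1:]])
    return (op, *[move_not_inwards(a) for a in args])

def distribute_or_over_and(s):
    """Step 4: distribute ∨ over ∧ so the result is a conjunction of clauses."""
    if isinstance(s, str) or s[0] == 'not':
        return s
    op, *args = s
    args = [distribute_or_over_and(a) for a in args]
    if op == 'or':
        for i, a in enumerate(args):
            if not isinstance(a, str) and a[0] == 'and':     # (x ∧ y) ∨ rest
                rest = args[:i] + args[i + 1:]
                return distribute_or_over_and(
                    ('and', *[('or', conj, *rest) for conj in a[1:]]))
        return ('or', *args)
    return (op, *args)

def to_cnf(s):
    return distribute_or_over_and(move_not_inwards(eliminate_iff_and_implies(s)))

r2 = ('iff', 'B11', ('or', 'P12', 'P21'))
print(to_cnf(r2))   # ≡ (¬B11 ∨ P12 ∨ P21) ∧ (¬P12 ∨ B11) ∧ (¬P21 ∨ B11), modulo flattening
```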
79

Resolution Rule Summary


Conjunctive Normal Form (CNF): a conjunction of disjunctions of literals
(clauses). E.g., (A ∨ ¬B) ∧ (B ∨ ¬C ∨ ¬D)

Resolution inference rule (for CNF): from the clauses

l1 ∨ ... ∨ lk and m1 ∨ ... ∨ mn,

where li and mj are complementary literals, infer the resolvent
l1 ∨ ... ∨ li−1 ∨ li+1 ∨ ... ∨ lk ∨ m1 ∨ ... ∨ mj−1 ∨ mj+1 ∨ ... ∨ mn
E.g., from P1,3 ∨ P2,2 and ¬P2,2, infer P1,3.
Resolution is sound and complete for propositional logic.
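A small sketch of the resolution rule itself, on clauses represented as frozensets of (symbol, positive?) literals; the encoding is my own illustrative choice, not the book's notation.

```python
def resolvents(ci, cj):
    """All clauses obtained by resolving ci and cj on one complementary pair of literals."""
    out = []
    for (sym, positive) in ci:
        if (sym, not positive) in cj:
            out.append(frozenset((ci - {(sym, positive)}) |
                                 (cj - {(sym, not positive)})))
    return out

# Resolve (P1,3 ∨ P2,2) with ¬P2,2 to obtain P1,3:
c1 = frozenset({('P13', True), ('P22', True)})
c2 = frozenset({('P22', False)})
print(resolvents(c1, c2))   # [frozenset({('P13', True)})]
```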
80

A grammar for CNF, Horn clauses and definite clauses

81

A resolution algorithm

82

A resolution algorithm
Inference procedures based on resolution work by using the principle of proof by contradiction. That is, to show that KB ⊨ α, we show that KB ∧ ¬α is unsatisfiable. We do this by deriving a contradiction.

83

Resolution algorithm
Proof by contradiction, i.e., show KB unsatisfiable

84

Resolution algorithm Trace for Example


KB = R2 ∧ R4, where R2 = B1,1 ⇔ (P1,2 ∨ P2,1) and R4 = ¬B1,1; the query is α = ¬P1,2. Method: we can derive α using proof by contradiction, i.e., show KB ∧ ¬α unsatisfiable. Steps: 1. Convert KB ∧ ¬α into CNF. 2. Take two clauses at a time and resolve them as per the resolution algorithm.
85

Resolution algorithm Trace for Example


Then, the resolution rule is applied to the resulting clauses. Each pair that contains complementary literals is resolved to produce a new clause
which is added to the set if it is not already present.

The process continues until one of two things happens:


there are no new clauses that can be added, in which case KB does not entail α; or, two clauses resolve to yield the empty clause, in which case KB entails α

The empty clause is a disjunction of no disjuncts


and is equivalent to False, because a disjunction is true only if at least one of its disjuncts is true. Another way to see that an empty clause represents a contradiction: it arises only from resolving two complementary unit clauses such as P and ¬P.
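Putting the loop together, here is a sketch of the PL-RESOLUTION idea over the same frozenset clause encoding; the resolvents helper is repeated from the previous sketch so this block runs on its own, and it is an illustration rather than the book's pseudocode.

```python
from itertools import combinations

def resolvents(ci, cj):
    """All resolvents of two clauses, each a frozenset of (symbol, positive?) literals."""
    out = []
    for (sym, positive) in ci:
        if (sym, not positive) in cj:                  # complementary pair found
            out.append(frozenset((ci - {(sym, positive)}) |
                                 (cj - {(sym, not positive)})))
    return out

def pl_resolution(clauses):
    """clauses: CNF of KB ∧ ¬α. True iff the empty clause is derivable (KB entails α)."""
    clauses = set(clauses)
    while True:
        new = set()
        for ci, cj in combinations(clauses, 2):
            for r in resolvents(ci, cj):
                if not r:                              # empty clause: contradiction
                    return True
                new.add(r)
        if new <= clauses:                             # nothing new: KB does not entail α
            return False
        clauses |= new

# Example from this slide: KB = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1 and α = ¬P1,2.
cnf = [frozenset({('B11', False), ('P12', True), ('P21', True)}),   # ¬B11 ∨ P12 ∨ P21
       frozenset({('P12', False), ('B11', True)}),                  # ¬P12 ∨ B11
       frozenset({('P21', False), ('B11', True)}),                  # ¬P21 ∨ B11
       frozenset({('B11', False)}),                                 # ¬B11
       frozenset({('P12', True)})]                                  # ¬α = P12
print(pl_resolution(cnf))                                           # True
```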
86

Resolution algorithm Trace for Example


KB = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1, and α = ¬P1,2

Top row:

clauses after CNF conversion, denoted as clauses in the algorithm


Second row:

clauses obtained by resolving pairs in the first row, denoted as new in the algorithm

87

Resolution algorithm Trace for Example


Inspection of the trace reveals that many resolution steps are pointless. Ex: the clause B1,1 ∨ ¬B1,1 ∨ P1,2 is equivalent to True ∨ P1,2, which is equivalent to True. Deducing that True is true is not very helpful. Therefore, any clause in which two complementary literals appear can be discarded.

88

Completeness of resolution
Why is PL-RESOLUTION complete? To show this, introduce the resolution closure RC(S) of a set of clauses S: the set of all clauses derivable by repeated application of the resolution rule to clauses in S or their derivatives. The resolution closure is what PL-RESOLUTION computes as the final value of the variable clauses. It is easy to see that RC(S) must be finite, because there are only finitely many distinct clauses that can be constructed out of the symbols P1, ..., Pk that appear in S. (Notice that the factoring step removes multiple copies of literals.) Hence, PL-RESOLUTION always terminates.
89

Completeness of resolution
why PL-RESOLUTION is complete The completeness theorem for resolution in propositional logic is called the ground resolution theorem: If a set of clauses is unsatisfiable, then the resolution closure of those clauses contains the empty clause.

90

7.5.3 Horn clauses and definite clauses

91

Horn clauses and definite clauses


The completeness of resolution makes it a very important inference method. In many practical situations, however, the full power of resolution is not needed. Some real-world knowledge bases satisfy certain restrictions on the form of sentences they contain
which enables them to use a more restricted and efficient inference algorithm.

Horn clauses and definite clauses: restricted clause forms

92

Horn clauses and definite clauses


Definite clause: one such restricted form is the definite clause, which is a disjunction of literals of which exactly one is positive. Ex: the clause (¬L1,1 ∨ ¬Breeze ∨ B1,1) is a definite clause; (¬B1,1 ∨ P1,2 ∨ P2,1) is not.
Horn clause: a slightly more general form is the Horn clause, which is a disjunction of literals of which at most one is positive. So all definite clauses are Horn clauses, as are clauses with no positive literals; these are called goal clauses.
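A small helper of my own that makes the distinction concrete on the frozenset-of-literals encoding used in the resolution sketches.

```python
def classify(clause):
    """Classify a clause given as a frozenset of (symbol, positive?) literals."""
    positives = sum(1 for _sym, positive in clause if positive)
    if positives == 1:
        return 'definite clause (hence Horn)'
    if positives == 0:
        return 'goal clause (hence Horn)'
    return 'not a Horn clause'

# (¬L1,1 ∨ ¬Breeze ∨ B1,1) is definite; (¬B1,1 ∨ P1,2 ∨ P2,1) is not Horn.
print(classify(frozenset({('L11', False), ('Breeze', False), ('B11', True)})))
print(classify(frozenset({('B11', False), ('P12', True), ('P21', True)})))
```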
93

A grammar for CNF, Horn clauses and definite clauses

94

7.5.4 Forward and backward chaining

95

Forward chaining (FC)


Given a problem
Fire any rule whose premises are satisfied in the KB and add its conclusion to the KB, until the query is resolved
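A minimal forward-chaining sketch over definite clauses written as (premises, conclusion) pairs; the rule format and names are my own assumptions, loosely in the spirit of PL-FC-ENTAILS?.

```python
from collections import deque

def fc_entails(rules, facts, query):
    """rules: list of (set_of_premises, conclusion); facts: known true symbols."""
    count = {i: len(prem) for i, (prem, _) in enumerate(rules)}   # unmet premises per rule
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(rules):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:                 # all premises satisfied: fire the rule
                    agenda.append(conclusion)
    return False

rules = [({'P', 'Q'}, 'R'), ({'R'}, 'S')]
print(fc_entails(rules, ['P', 'Q'], 'S'))          # True
```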

96

Backward chaining (BC)


Idea: work backwards from the query To answer / prove query
Is q already known in the KB? Otherwise, prove by BC all the premises of some rule that concludes q. Avoid loops by checking whether the subgoal is already on the goal stack.
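A matching backward-chaining sketch over the same (premises, conclusion) rule format as the forward-chaining sketch above; again an illustrative sketch of my own rather than a standard implementation.

```python
def bc_entails(rules, facts, query, path=frozenset()):
    """Work backwards from the query; `path` holds the subgoals currently being pursued."""
    if query in facts:
        return True
    if query in path:                              # subgoal already on this path: loop
        return False
    for premises, conclusion in rules:
        if conclusion == query and all(
                bc_entails(rules, facts, p, path | {query}) for p in premises):
            return True
    return False

rules = [({'P', 'Q'}, 'R'), ({'R'}, 'S')]
print(bc_entails(rules, {'P', 'Q'}, 'S'))          # True
```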

97

Forward vs. backward chaining


FC is data-driven, automatic, unconscious processing,
e.g., object recognition, routine decisions.

It may do lots of work that is irrelevant to the goal.
BC is goal-driven, appropriate for problem-solving,
e.g., Where are my keys? How do I get into a PhD program?

The complexity of BC can be much less than linear in the size of the KB.

98
