
A knowledge-based agent must be able to: represent states, actions, etc.; incorporate new percepts; update internal representations of the world; deduce hidden properties of the world; and deduce appropriate actions.

Entailment means that one thing follows from another: KB ⊨ α.
Logicians typically think in terms of models, which are formally structured worlds with respect to which truth can be evaluated. We say m is a model of a sentence α if α is true in m. M(α) is the set of all models of α. Then KB ⊨ α if and only if M(KB) ⊆ M(α). E.g., KB = "Giants won and Reds won", α = "Giants won".

Inference: KB ⊢i α means that sentence α can be derived from KB by procedure i. The consequences of KB are a haystack; α is a needle. Entailment = needle in haystack; inference = finding it. Soundness: i is sound if, whenever KB ⊢i α, it is also true that KB ⊨ α. Completeness: i is complete if, whenever KB ⊨ α, it is also true that KB ⊢i α. Preview: we will define a logic (first-order logic) that is expressive enough to say almost anything of interest, and for which there exists a sound and complete inference procedure. That is, the procedure will answer any question whose answer follows from what is known by the KB.
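The model-checking view of entailment above can be sketched by truth-table enumeration, which is O(2^n) in the number of symbols n. Representing sentences as Python predicates over a model dict is an illustrative assumption, not a fixed API:

```python
from itertools import product

def tt_entails(kb, alpha, symbols):
    """Check KB |= alpha by enumerating every model (truth-table method).

    kb and alpha are predicates taking a model (dict symbol -> bool).
    Returns True iff alpha is true in every model of KB, i.e. M(KB) is a
    subset of M(alpha)."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False  # found a model of KB in which alpha is false
    return True  # alpha holds in every model of KB

# KB = "Giants won and Reds won", alpha = "Giants won"
kb = lambda m: m["GiantsWon"] and m["RedsWon"]
alpha = lambda m: m["GiantsWon"]
print(tt_entails(kb, alpha, ["GiantsWon", "RedsWon"]))  # True
```

Note the asymmetry: "Giants won" alone does not entail the conjunction, so swapping the arguments returns False.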
divide into (roughly) two kinds: Application of inference rules, Legitimate (sound) generation of new sentences from old, Proof = a
sequence of inference rule applications, Can use inference rules as operators in a standard search alg. Typically require translation of
sentences into a normal form, Model checking, truth table enumeration (always exponential in n) improved backtracking, e.g., DavisPutnam-Logemann-Loveland heuristic search in model space (sound but incomplete) e.g., min-conicts-like hill-climbing
Proof of completeness: forward chaining (FC) derives every atomic sentence that is entailed by KB.
1. FC reaches a fixed point where no new atomic sentences are derived.
2. Consider the final state as a model m, assigning true/false to symbols.
3. Every clause in the original KB is true in m. Proof: suppose a clause a1 ∧ … ∧ ak ⇒ b is false in m. Then a1 ∧ … ∧ ak is true in m and b is false in m. But then the algorithm has not reached a fixed point!
4. Hence m is a model of KB.
5. If KB ⊨ q, then q is true in every model of KB, including m.
General idea: construct any model of KB by sound inference, then check α.
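The forward-chaining procedure whose completeness is proved above can be sketched for a Horn-clause KB. The (premises, conclusion) encoding of clauses is an illustrative assumption:

```python
from collections import deque

def forward_chain(clauses, facts, query):
    """Forward chaining over a Horn-clause KB.

    clauses: list of (premises, conclusion) pairs, e.g. (["A", "B"], "L")
    facts: known atomic sentences. Returns True iff query is derived;
    termination at the fixed point is guaranteed because each sentence
    is inferred at most once."""
    count = [len(p) for p, _ in clauses]   # unsatisfied premises per clause
    inferred = set(facts)
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        for i, (premises, conclusion) in enumerate(clauses):
            if p in premises:
                count[i] -= 1
                if count[i] == 0 and conclusion not in inferred:
                    inferred.add(conclusion)  # clause fires: add its head
                    agenda.append(conclusion)
    return query in inferred

clauses = [(["A", "B"], "L"), (["L"], "M"), (["M", "B"], "Q")]
print(forward_chain(clauses, ["A", "B"], "Q"))  # True
```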
Syntax: formal structure of sentences. Semantics: truth of sentences with respect to models. Entailment: necessary truth of one sentence given another. Inference: deriving sentences from other sentences. Soundness: derivations produce only entailed sentences. Completeness: derivations can produce all entailed sentences.

Resource limits: the standard approach is to use a Cutoff-Test instead of the Terminal-Test (e.g., a depth limit) and to use Eval (an evaluation function) instead of Utility.
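The Cutoff-Test/Eval idea can be sketched as depth-limited minimax; the `game`/`eval_fn` interface names below are illustrative assumptions, not a fixed API:

```python
def minimax_cutoff(state, depth, game, eval_fn, maximizing=True):
    """Depth-limited minimax: a Cutoff-Test (depth limit) replaces the
    Terminal-Test, and eval_fn replaces Utility at the cutoff."""
    if game.is_terminal(state) or depth == 0:
        return eval_fn(state)
    values = [minimax_cutoff(s, depth - 1, game, eval_fn, not maximizing)
              for s in game.successors(state)]
    return max(values) if maximizing else min(values)

class TinyGame:
    # A toy game tree: dict from state to its successor states.
    tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
    def successors(self, s): return self.tree.get(s, [])
    def is_terminal(self, s): return s not in self.tree

scores = {"a1": 3, "a2": 12, "b1": 2, "b2": 8}
ev = lambda s: scores.get(s, 0)  # leaf score, 0 for internal states
print(minimax_cutoff("root", 2, TinyGame(), ev))  # 3: max(min(3,12), min(2,8))
```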
CSPs: a state is defined by variables Xi with values from a domain Di; the goal test is a set of constraints specifying allowable combinations of values for subsets of variables. Varieties of CSPs: discrete variables with finite domains of size d ⇒ O(d^n) complete assignments (e.g., Boolean CSPs); infinite discrete domains (e.g., job scheduling): linear constraints solvable, nonlinear undecidable; continuous variables (e.g., start/end times for Hubble Telescope observations): linear constraints solvable in polynomial time by LP methods. Varieties of constraints: unary, binary, higher-order; preferences.
Standard search formulation: states are defined by the values assigned so far. Initial state: the empty assignment, {}. Successor function: assign a value to an unassigned variable that does not conflict with the current assignment ⇒ fail if there are no legal assignments (not fixable!). Goal test: the current assignment is complete.
1) This is the same for all CSPs!
2) Every solution appears at depth n with n variables ⇒ use depth-first search.
3) The path is irrelevant, so we can also use a complete-state formulation.
4) b = (n − l)d at depth l, hence n!·d^n leaves!
Backtracking search: variable assignments are commutative, so we only need to consider assignments to a single variable at each node. Depth-first search for CSPs with single-variable assignments is called backtracking search.

Forward checking: keep track of the remaining legal values for unassigned variables; terminate search when any variable has no legal values.
Arc consistency: the simplest form of propagation makes each arc consistent. X → Y is consistent iff for every value x of X there is some allowed y. Arc consistency detects failure earlier than forward checking and can be run as a preprocessor or after each assignment.
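One standard realization of this propagation is the AC-3 algorithm; a sketch under the same assumed CSP interface as above:

```python
from collections import deque

def ac3(domains, neighbors, constraint_ok):
    """AC-3 arc consistency for a binary CSP.

    Makes every arc X -> Y consistent; returns False if some domain is
    wiped out (failure detected), True otherwise. Mutates domains."""
    queue = deque((x, y) for x in neighbors for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        # revise: remove values of X that have no allowed value in Y
        removed = False
        for vx in list(domains[x]):
            if not any(constraint_ok(x, vx, y, vy) for vy in domains[y]):
                domains[x].remove(vx)
                removed = True
        if removed:
            if not domains[x]:
                return False
            for z in neighbors[x]:   # re-check arcs into X
                if z != y:
                    queue.append((z, x))
    return True

# X < Y < Z over {1,2,3}: AC-3 prunes the domains to singletons
doms = {"X": {1, 2, 3}, "Y": {1, 2, 3}, "Z": {1, 2, 3}}
nbrs = {"X": ["Y"], "Y": ["X", "Z"], "Z": ["Y"]}
lt = {"XY", "YZ"}
ok = lambda a, va, b, vb: (va < vb) if a + b in lt else (vb < va)
print(ac3(doms, nbrs, ok), doms)  # True {'X': {1}, 'Y': {2}, 'Z': {3}}
```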
Tree-structured CSPs: if the constraint graph has no loops, the CSP can be solved in O(n·d^2) time, compared with general CSPs, where the worst-case time is O(d^n). This property also applies to logical and probabilistic reasoning: it is an important example of the relation between syntactic restrictions and the complexity of reasoning. Algorithm: 1) choose a variable as root and order the variables from root to leaves such that every node's parent precedes it in the ordering; 2) for j from n down to 2, apply RemoveInconsistent(Parent(Xj), Xj); 3) for j from 1 to n, assign Xj consistently with Parent(Xj).
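The three-step algorithm above can be sketched as follows; the root-first `order`/`parent` encoding is an illustrative assumption:

```python
def solve_tree_csp(order, parent, domains, constraint_ok):
    """Solve a tree-structured binary CSP in O(n d^2).

    order: variables listed root-first (every parent precedes its children);
    parent: dict child -> parent (the root has no entry). Mutates domains."""
    # Backward pass: make each arc Parent(Xj) -> Xj consistent
    for child in reversed(order[1:]):
        p = parent[child]
        domains[p] = {vp for vp in domains[p]
                      if any(constraint_ok(p, vp, child, vc)
                             for vc in domains[child])}
        if not domains[p]:
            return None  # failure detected during propagation
    # Forward pass: assign each variable consistently with its parent
    assignment = {order[0]: next(iter(domains[order[0]]))}
    for child in order[1:]:
        p = parent[child]
        assignment[child] = next(vc for vc in domains[child]
                                 if constraint_ok(p, assignment[p], child, vc))
    return assignment

# Chain A - B - C with "different values" constraints over {1, 2}
doms = {v: {1, 2} for v in "ABC"}
par = {"B": "A", "C": "B"}
diff = lambda x, vx, y, vy: vx != vy
print(solve_tree_csp(["A", "B", "C"], par, doms, diff))
```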
instantiate (in all ways) a set of variables such that the remaining constraint graph is a tree Cutset size c => runtime O(d^c* (n c)d^2), very fast for small cLocal beam search: idea: keep k state instead of 1; choose top k all their successors, Not the same as k
searches run in parallel!, searched that find good states recruit other searched to join them, Problem: quite often, all k states end up
on same local hill, Idea: choose k successors randomly, biased towards good one Observe the close analogy to natural selection!...
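The stochastic variant just described (k successors chosen randomly, biased towards good ones) can be sketched as follows; `successors` and `value` are assumed names, and `value` must be positive to serve as a sampling weight:

```python
import random

def local_beam_search(initial_states, successors, value, steps=100):
    """Stochastic local beam search: keep k states; sample k successors of
    the whole pool at random, biased towards good ones (the
    natural-selection analogy from the text)."""
    beam = list(initial_states)
    k = len(beam)
    for _ in range(steps):
        pool = [s for state in beam for s in successors(state)]
        if not pool:
            break
        weights = [value(s) for s in pool]          # fitness-proportional
        beam = random.choices(pool, weights=weights, k=k)
    return max(beam, key=value)

# Toy 1-D landscape with a single peak at x = 7 (illustrative)
succ = lambda x: [x - 1, x + 1]
val = lambda x: 1.0 / (1 + abs(x - 7))
random.seed(0)
print(local_beam_search([0, 20, 40], succ, val, steps=200))
```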
Discretization methods turn a continuous space into a discrete space.

Relaxed problems: admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem. If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution. If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution. Key point: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem. A minimum spanning tree can be computed in O(n^2).
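The two 8-puzzle relaxations above correspond to the misplaced-tiles heuristic (h1) and the Manhattan-distance heuristic (h2). Encoding states as 9-tuples with 0 for the blank is an illustrative assumption:

```python
def h1(state, goal):
    """Misplaced tiles: exact cost if a tile could move anywhere."""
    return sum(1 for s, g in zip(state, goal) if s != g and s != 0)

def h2(state, goal):
    """Manhattan distance: exact cost if a tile could move to any
    adjacent square. States are 9-tuples, 0 = blank."""
    total = 0
    for tile in range(1, 9):
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # tiles 7 and 8 shifted right by one
print(h1(start, goal), h2(start, goal))  # 2 2
```

Both values lower-bound the true solution cost, which is what makes them admissible.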
Problem types: 1) Deterministic, fully observable ⇒ single-state problem: the agent knows exactly which state it will be in; the solution is a sequence. 2) Non-observable ⇒ conformant problem: the agent may have no idea where it is; the solution (if any) is a sequence. 3) Nondeterministic and/or partially observable ⇒ contingency problem: percepts provide new information about the current state; the solution is a contingent plan or a policy; often interleave search and execution. 4) Unknown state space ⇒ exploration problem ("online").
A state is a (representation of) a physical configuration; a node is a data structure constituting part of a search tree.

Agents include humans, robots, softbots, thermostats, etc. The agent function maps from percept histories to actions: f : P* → A. The agent program runs on the physical architecture to produce f. Types: simple reflex agents, reflex agents with state, goal-based agents, utility-based agents.
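A simple reflex agent, the first type listed, can be sketched as a condition-action rule table; the rules dict is an illustrative assumption, and note that it conditions only on the current percept, not the full history P*:

```python
def make_reflex_agent(rules):
    """A simple reflex agent: maps the current percept to an action via
    condition-action rules."""
    def agent_program(percept):
        return rules.get(percept, "NoOp")  # default when no rule matches
    return agent_program

# Vacuum-world style rules (illustrative)
rules = {("A", "Dirty"): "Suck", ("B", "Dirty"): "Suck",
         ("A", "Clean"): "Right", ("B", "Clean"): "Left"}
agent = make_reflex_agent(rules)
print(agent(("A", "Dirty")))  # Suck
```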
