
CS-271, Intro to A.I.

— Mid-term Exam — Spring Quarter, 2010


1. (2 pts) NAME AND EMAIL ADDRESS:

YOUR ID: ID TO RIGHT: ROW: NO. FROM RIGHT:

2. (12 pts total, -1 each wrong answer, but not negative) For each of the following terms
on the left, write in the letter corresponding to the best answer or the correct definition
on the right. The first one is done for you as an example.

A  Agent  A  Perceives environment by sensors, acts by actuators 
Z  Percept  B   States reachable from the initial state by a sequence of actions 
U  Percept Sequence  C   Guaranteed to find a solution if one is accessible 
K  Agent Function  D   Never over‐estimates cost of cheapest path to a goal state 
P  Performance Measure  E   Maximum number of successors of any node 
L  Rational Agent  F   Set of all leaf nodes available for expansion at any given time 
M  Fully Observable  G   Estimates cost of cheapest path from current state to goal state
Q  Deterministic  H   Sequence of states connected by a sequence of actions 
W  Dynamic Environment  I   Represents a state in the state space 
N  Atomic representation  J   Guaranteed to find lowest cost among all solutions 
R  Factored representation  K   Maps any given percept sequence to an action 
B  State Space  L   Agent that acts to maximize its expected performance measure 
I  Search Node  M   Sensors give the complete state of environment at each time 
V  Link between nodes  N   Each state of the world is indivisible, with no internal structure 
H  Path  O   A node with no children in the tree 
Y  Abstraction  P   Evaluates any given sequence of environment states for utility 
J  Optimal Search  Q   Next state of environment is fixed by current state & action 
C  Complete Search  R   A state is a fixed set of variables or attributes, each with a value 
X  Expand a state  S   How a search algorithm chooses which node to expand next 
O  Leaf node  T   For n’ a successor of n from action a, h(n) ≤ cost(n, a, n’) + h(n’) 
F  Frontier  U   Complete history of everything agent has perceived 
S  Search Strategy  V   Represents an action in the state space 
E  Branching Factor  W   Environment can change while agent is deliberating 
G  Heuristic Function  X   Apply each legal action to state, generating a new set of states 
D  Admissible Heuristic  Y   Process of removing detail from a representation 
T  Consistent Heuristic  Z   Agent’s perceptual inputs at any given instant 

3. (4 pts total, 1 pt each) Your book defines a task environment as a set of four things,
with the acronym PEAS. Fill in the blanks with the names for the PEAS components.

Performance measure, Environment, Actuators, Sensors


4. (15 pts total, -1 for each wrong answer, but not negative) Fill in the values of the four
evaluation criteria for each search strategy shown. Assume a tree search where b is
the finite branching factor; d is the depth to the shallowest goal node; m is the maximum
depth of the search tree; l is the depth limit; step costs are identical and equal to some
positive ε; and in Bidirectional search both directions use breadth-first search.
Criterion             Complete?  Time complexity        Space complexity       Optimal?
Breadth-First         Yes        O(b^d)                 O(b^d)                 Yes
Uniform-Cost          Yes        O(b^(1+floor(C*/ε)))   O(b^(1+floor(C*/ε)))   Yes
Depth-First           No         O(b^m)                 O(bm)                  No
Depth-Limited         No         O(b^l)                 O(bl)                  No
Iterative Deepening   Yes        O(b^d)                 O(bd)                  Yes
Bidirectional         Yes        O(b^(d/2))             O(b^(d/2))             Yes
(if applicable)
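The depth-limited and iterative-deepening rows can be made concrete. The following is a minimal sketch (not part of the exam; the binary tree over integers is a hypothetical example) showing why iterative deepening is complete and uses only O(bd) space — the recursion stack holds one path at a time:

```python
def depth_limited(node, goal, children, limit):
    """Depth-limited DFS (row 'Depth-Limited'): the recursion stack holds at
    most one root-to-leaf path, so space is O(bl)."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in children(node):
        path = depth_limited(child, goal, children, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening(start, goal, children, max_depth=50):
    """Iterative deepening (row 'Iterative Deepening'): repeat depth-limited
    search with a growing limit; finds the shallowest goal, so it is optimal
    when step costs are identical."""
    for limit in range(max_depth + 1):
        path = depth_limited(start, goal, children, limit)
        if path is not None:
            return path
    return None

# Hypothetical binary tree: node n has children 2n and 2n+1 (cut off for finiteness).
children = lambda n: [2 * n, 2 * n + 1] if n < 32 else []
print(iterative_deepening(1, 5, children))  # [1, 2, 5]
```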

5. (4 pts total, 1 pt each) The heuristic path algorithm is a best-first-style search in which
the objective function is f(n) = (2−w)*g(n)+w*h(n) and the node of lowest f is expanded
next. You may assume that: the branching factor is finite; the search space contains
loops and cycles; h(n) is consistent; w ≥ 0; and each step cost ≥ ε > 0.

5a. For what values of w is this algorithm guaranteed to be optimal? 0≤w≤1

What kind of search does this perform:

5b. When w = 0? Uniform-cost

5c. When w = 1? A*

5d. When w = 2? Greedy best-first
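The single objective f(n) = (2−w)·g(n) + w·h(n) covers all three answers above. A minimal sketch (the three-node graph and heuristic values are our example, not from the exam) shows w = 1 behaving as A* (optimal) and w = 2 as greedy best-first (not optimal):

```python
import heapq

def heuristic_path_search(graph, h, start, goal, w):
    """Best-first search expanding the node of lowest
    f(n) = (2 - w) * g(n) + w * h(n), as in problem 5.
    `graph` maps node -> {neighbor: step_cost}."""
    frontier = [((2 - w) * 0 + w * h[start], 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue  # already expanded with a cheaper path
        best_g[node] = g
        for nbr, cost in graph[node].items():
            g2 = g + cost
            heapq.heappush(frontier,
                           ((2 - w) * g2 + w * h[nbr], g2, nbr, path + [nbr]))
    return None

# Hypothetical graph: S -> A -> G costs 1+1 = 2; the direct edge S -> G costs 3.
graph = {'S': {'A': 1, 'G': 3}, 'A': {'G': 1}, 'G': {}}
h = {'S': 2, 'A': 1, 'G': 0}  # consistent heuristic
print(heuristic_path_search(graph, h, 'S', 'G', w=1))  # (2, ['S', 'A', 'G'])
print(heuristic_path_search(graph, h, 'S', 'G', w=2))  # (3, ['S', 'G'])
```

With w = 2 the g term vanishes, so the search chases low h and commits to the costlier direct edge; with 0 ≤ w ≤ 1 the ordering is equivalent to A* with a scaled-down (hence still admissible) heuristic, which is why optimality holds there.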

6. (5 pts total) Suppose you are given the following data about plant diseases:
LEAF       INSECTS   CLASS
moldy      none      healthy
wrinkled   none      healthy
green      none      healthy
moldy      present   diseased
wrinkled   present   diseased
green      present   healthy
6.a. (3 pts) Which choice of root node has the highest information gain resulting from
that choice? (LEAF, INSECTS) INSECTS

6.b. (2 pts) Using the root node that you chose in 6.a., draw a decision tree that
correctly predicts CLASS.

INSECTS
├─ none → HEALTHY
└─ present → LEAF
     ├─ moldy → DISEASED
     ├─ wrinkled → DISEASED
     └─ green → HEALTHY
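The choice in 6.a can be checked numerically. A short sketch (function names are ours) computing entropy-based information gain over the table in problem 6:

```python
from collections import Counter
from math import log2

# (LEAF, INSECTS, CLASS) rows from problem 6.
data = [
    ('moldy', 'none', 'healthy'), ('wrinkled', 'none', 'healthy'),
    ('green', 'none', 'healthy'), ('moldy', 'present', 'diseased'),
    ('wrinkled', 'present', 'diseased'), ('green', 'present', 'healthy'),
]

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    return -sum(c / total * log2(c / total)
                for c in Counter(labels).values())

def info_gain(attr_index):
    """Entropy of CLASS minus the expected entropy after splitting on the
    attribute in column `attr_index` (0 = LEAF, 1 = INSECTS)."""
    labels = [row[2] for row in data]
    gain = entropy(labels)
    for value in set(row[attr_index] for row in data):
        subset = [row[2] for row in data if row[attr_index] == value]
        gain -= len(subset) / len(data) * entropy(subset)
    return gain

print(info_gain(0), info_gain(1))  # LEAF ~0.252 bits, INSECTS ~0.459 bits
```

INSECTS wins because the "none" branch is pure (all healthy), so splitting on it removes half of the remaining uncertainty in one step.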
7. (8 pts total, 1 pt each) The sliding-tile puzzle consists of three black tiles (B), three
white tiles (W), and an empty space (blank). The starting state is:
B B B W W W
The goal is to have all the white tiles to the left of all the
black tiles; the position of the blank is not important.

The puzzle has two legal moves with associated costs:


(1) A tile may move into an adjacent empty location. This has a cost of 1.
(2) A tile may hop over one or two other tiles into the empty location. This has a cost
equal to the number of tiles jumped over.

7a. What is the branching factor? 6

7b. Does the search space have loops (cycles)? (Y=yes, N=no) Y

7c. Is breadth-first search optimal? (“Y" = yes, “N" = no) N

7d. Is uniform-cost search optimal? (“Y" = yes, “N" = no) Y

7e. Consider a heuristic function h1(n) = the number of black tiles to the left of the left-
most white tile. Is this heuristic admissible? (“Y" = yes, “N" = no) Y

7f. Consider a heuristic function h2(n) = the number of black tiles to the left of the right-
most white tile. Is this heuristic admissible? (“Y" = yes, “N" = no) Y

7g. Consider a heuristic function h3(n) = the number of black tiles to the left of the right-
most white tile plus the number of white tiles to the right of the left-most black tile. Is this
heuristic admissible? (“Y" = yes, “N" = no) N

7h. Consider a heuristic function h4(n) = h3(n) / 2. Is this heuristic admissible? (“Y" =
yes, “N" = no) Y

8. (2 pts total, 1 pt each) Suppose that there is no good evaluation function for a
problem (no cost function g), but there is a good comparison method: a way to tell
whether one node is better than another, but not to assign numerical values to either.
Answer Y (= yes) or N (= no).

8a. Is this enough to do a greedy best-first search? Y

8b. Suppose you also have a consistent heuristic, h(n). Is this enough to do A* search
and guarantee an optimal solution? N
9. (15 pts total, 1 pt each) Label the following as T (= True) or F (= False).

9.a. T An admissible heuristic never over-estimates the remaining cost (or
distance) to the goal.

9.b. T A* search is both complete and optimal when the heuristic is consistent,
step costs ≥ ε > 0, and the total cost estimate is monotonic increasing.

9.c. F Most search effort is expended while examining the interior branch nodes
of a search tree.

9.d. F Uniform-cost search is both complete and optimal when the path cost
never decreases.

9.e. F Greedy best-first search is both complete and optimal when the heuristic
is optimal.

9.f. F Beam search uses O(bd) space and O(bd) time.

9.g. F A* search with a consistent heuristic and step costs ≥ ε > 0 can fail to find
the optimal solution if the search space contains loops.

9.h. F If the search space contains only a single local maximum (i.e., the global
maximum = the only local maximum), then hill-climbing is guaranteed to climb that
single hill and will find the global maximum.

9.i. T Taking two (or more) steps at once may help hill-climbing escape from
the ridge problem.

9.j. F Hill-climbing has very attractive space properties because it uses only
O(bd) space.

9.k. F Local beam search is like Depth-first search except that it keeps the N
best candidates at each level, NOT all of them.

9.l. F Simulated annealing accepts a move that makes things better with
probability exp(ΔVALUE/T).

9.m. F Simulated annealing will accept more moves at low temperature than at
high temperature.

9.n. F The simulated annealing temperature increases as the search
progresses.

9.o. T Simulated annealing uses constant space and can escape local optima.
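The acceptance rule behind 9.l–9.o can be written in a few lines. A sketch, using the maximization convention (`anneal_step` is a hypothetical helper, not from any library):

```python
import math
import random

def anneal_step(value, new_value, T, rng=random.random):
    """Simulated-annealing acceptance rule: an improving move is always
    accepted (so 9.l is false as stated); a worsening move is accepted with
    probability exp(delta / T), where delta = new_value - value < 0."""
    delta = new_value - value
    if delta > 0:
        return True
    return rng() < math.exp(delta / T)

# A worsening move of -1 is accepted often at high temperature and almost
# never at low temperature, which is why 9.m is false.
print(math.exp(-1 / 10.0))  # ~0.905
print(math.exp(-1 / 0.1))   # ~4.5e-5
```

Only the current state and one candidate are kept between steps, which is the constant-space property asserted in 9.o.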
10. (10 pts total) Basic definitions and expressions.

10a. (2 pts) Write down the definition of P(H | D) in terms of P(H), P(D), P(H and D), and
P(H or D).

P(H | D) = P(H and D) / P(D)

10b. (2 pts) Write the expression that results from applying Bayes’ Rule to P(H | D).

P(H | D) = P(D | H) P(H) / P(D)


(P(H | D) ∝ P(D | H) P(H) is also acceptable)

10c. (2 pts) Write the definition of P(A or B) in terms of P(A), P(B), and P(A and B).

P(A or B) = P(A) + P(B) – P(A and B)

10d. (1 pt) T P(A and B) = P(A)P(B) if and only if A and B are independent.

10e. (1 pt) F P(A or B) = P(A) + P(B) if and only if A and B are independent.

10f. (1 pt) F P(A and B) = P(A)P(B) if and only if A and B are disjoint (i.e., do
not intersect, or do not occur together).

10g. (1 pt) T P(A or B) = P(A) + P(B) if and only if A and B are disjoint (i.e., do
not intersect, or do not occur together).
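The identities in 10c–10g can be checked mechanically on a toy sample space. A sketch using exact rational arithmetic (the two-coin-flip space is our example, not the exam's):

```python
from fractions import Fraction

# Sample space: two fair coin flips; an event is a set of outcomes.
omega = {('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')}
P = lambda event: Fraction(len(event), len(omega))

A = {o for o in omega if o[0] == 'H'}  # first flip heads
B = {o for o in omega if o[1] == 'H'}  # second flip heads

# 10c: inclusion-exclusion holds for any A, B.
assert P(A | B) == P(A) + P(B) - P(A & B)
# 10d: here A and B are independent, so P(A and B) = P(A) P(B).
assert P(A & B) == P(A) * P(B)
# 10a/10b: conditional probability and Bayes' rule.
cond = lambda X, Y: P(X & Y) / P(Y)
assert cond(A, B) == cond(B, A) * P(A) / P(B)
print("identities verified")
```

Note that A and B here are independent but not disjoint (both contain ('H', 'H')), which is why 10e is false while 10d is true.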

11. (18 pts total, 3 pts each) Label the following as Y (= yes) or N (= no) depending on
whether a perceptron with a “hard” decision boundary (step transfer function) can
correctly classify the examples shown. If your answer is Y (= yes), fill in a set of weights
that correctly classifies them. Use w0 as the threshold and wi as the weight for input xi.
All perceptrons have three Boolean inputs, x1, x2, and x3, and a “dummy” input, x0,
which is always equal to one. They all compute the decision function ∑ wi xi > 0.
You may not transform the input space, i.e., they operate on the stated inputs.

11.a. (3 pts) “At least two inputs are 1.”


Correctly classifiable? Y
If yes, weights are w0 = -1.5 ; w1 = 1 ; w2 = 1 ; w3 = 1

11.b. (3 pts) “Exactly two inputs are 1.”


Correctly classifiable? N
If yes, weights are w0 = ; w1 = ; w2 = ; w3 =

11.c. (3 pts) “At most two inputs are 1.”


Correctly classifiable? Y
If yes, weights are w0 = 2.5 ; w1 = -1 ; w2 = -1 ; w3 = -1
11.d. (3 pts) “Input x1 = 1, input x2 = 0, input x3 = anything.”
Correctly classifiable? Y
If yes, weights are w0 = -0.5 ; w1 = 1 ; w2 = -1 ; w3 = 0

11.e. (3 pts) “IF input x1 = 1 THEN input x2 = 0 ELSE input x2 = 1.”


Correctly classifiable? N
If yes, weights are w0 = ; w1 = ; w2 = ; w3 =

11.f. (3 pts) “Input x1 = input x2.”


Correctly classifiable? N
If yes, weights are w0 = ; w1 = ; w2 = ; w3 =
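Since there are only 8 Boolean inputs, the answers to problem 11 can be verified exhaustively. A minimal sketch (helper names are ours) using the weights from the answers above:

```python
from itertools import product

def perceptron(weights, x):
    """Hard-threshold unit from problem 11: fires iff sum(w_i * x_i) > 0,
    with the dummy input x0 = 1 prepended."""
    return sum(w * xi for w, xi in zip(weights, (1,) + x)) > 0

def classifies(weights, concept):
    """True iff the perceptron agrees with `concept` on all 8 inputs."""
    return all(perceptron(weights, x) == concept(x)
               for x in product((0, 1), repeat=3))

# 11.a: "at least two inputs are 1", with the weights given above.
print(classifies((-1.5, 1, 1, 1), lambda x: sum(x) >= 2))    # True
# 11.c: "at most two inputs are 1".
print(classifies((2.5, -1, -1, -1), lambda x: sum(x) <= 2))  # True
# 11.b: "exactly two inputs are 1" is not linearly separable; for example
# the 11.a weights fire on (1, 1, 1), where the concept is false.
print(classifies((-1.5, 1, 1, 1), lambda x: sum(x) == 2))    # False
```

For 11.b, 11.e (XNOR of x1 and x2), and 11.f (equality of x1 and x2), no weight vector works: each requires separating a class that is not convex in the input cube, the classic linear-separability failure.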

12. (5 pts total) Consider the following Bayesian Network over Boolean variables
A–D, with arcs A → C, B → C, B → D, C → D:

P(A=true) = 0.2        P(B=true) = 0.7

A      B      P(C=true | A, B)        B      C      P(D=true | B, C)
false  false  0.1                     false  false  0.8
false  true   0.5                     false  true   0.6
true   false  0.4                     true   false  0.3
true   true   0.9                     true   true   0.1

12.a. (1 pt) Use the chain rule to factor the full joint probability distribution over these
variables into a product of conditional probabilities, ignoring conditional independence
from the figure. Factor out the conditional probability of D first, C second, etc.

P(A, B, C, D) = P(D | C, B, A) P(C | B, A) P(B | A) P(A)

12.b. (3 pts) Use the structure of the network to eliminate irrelevant variables from 12.a
based on conditional independence, giving the minimum equivalent expression.

P(A, B, C, D) = P(D | C, B) P(C | B, A) P(B) P(A)

12.c. (1 pt) Substitute probabilities from the network into your equation 12.b to answer
the query: What is the probability that all four of these Boolean variables are false?

P(-a, -b, -c, -d) = P(-d | -c, -b) P(-c | -b, -a) P(-b) P(-a) = 0.2 * 0.9 * 0.3 * 0.8

= 0.0432
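The computation in 12.c can be reproduced directly from the CPTs. A short sketch (the dictionary encoding of the tables is ours):

```python
# CPTs from problem 12: probability each variable is true, given its parents.
P_A, P_B = 0.2, 0.7
P_C = {(False, False): 0.1, (False, True): 0.5,
       (True, False): 0.4, (True, True): 0.9}   # keyed by (A, B)
P_D = {(False, False): 0.8, (False, True): 0.6,
       (True, False): 0.3, (True, True): 0.1}   # keyed by (B, C)

def joint(a, b, c, d):
    """P(A=a, B=b, C=c, D=d) = P(D | B, C) P(C | A, B) P(B) P(A), the
    factorization from 12.b."""
    p = (P_A if a else 1 - P_A) * (P_B if b else 1 - P_B)
    p *= P_C[(a, b)] if c else 1 - P_C[(a, b)]
    p *= P_D[(b, c)] if d else 1 - P_D[(b, c)]
    return p

print(joint(False, False, False, False))  # 0.8 * 0.3 * 0.9 * 0.2 = 0.0432
```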
