
CSCI 373

Artificial Intelligence
Midterm Exam
Spring 2004

Question   Points   Score
   1         16
   2          4
   3       10+10
   4         12
   5       12+8
   6         12
   7         16
 Total      100

Instructions.
You may use only your own class notes, the text, and the course lecture notes
available on-line. The only other source you may use is me. Do not discuss any
aspect of the exam with anyone other than me. Even giving another student your
impression of the difficulty of a problem is to be avoided. You have up to 2 hours to
take this exam. Please return your exam to me (or to Lorraine Robinson) immediately
after you have completed it. All exams must be completed by 4:00 PM on Friday,
April 23. Good luck!

A Note on Writing Solutions.


You should attempt to solve all problems. Partial credit will be given. Use
common sense in writing your solutions, but when in doubt, more detail is probably
better than less.

I have neither given nor received aid on this examination.

Problem 1 - Basic Search Strategies (16 points)
Consider the state-space graph shown below, with initial and goal states as indicated.
Note that arcs are directed. For each of the search strategies listed below, indicate
which goal state is reached (if any) and list, in order, the states evaluated to reach
that goal. Here “evaluation” of a state is defined as having the goal test applied to
it.
You may assume that g(n) = Depth(n) for all nodes n.
You may assume that h(n) = the number shown adjacent to (and to the right of) node n in the figure.
When all else is equal, nodes should be expanded in alphabetical order.

[Figure: a directed state-space graph on nodes A through U, arranged in levels A; B, C, D; E, F, G, H, I, J, K; L, M, N, O, P, Q, R; S, T, U. The number printed to the right of each node is its h value: A 4; B 3, C 1, D 2; E 2, F 1, G 9, H 9, I 4, J 9, K 0; L 9, M 0, N 9, O 9, P 9, Q 1, R 9; S 9, T 0, U 9.]

initial state = A
goal states = gray states

Breadth First
Goal state reached: States expanded:
Depth First
Goal state reached: States expanded:
Iterative Deepening, with a start depth of 0
Goal state reached: States expanded:
Best First (with f=g+h)
Goal state reached: States expanded:
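For reference, here is a minimal sketch (in Python) of the evaluation order for the f = g + h strategy, assuming a hypothetical successors(n) function that returns a node's children in alphabetical order and ignoring repeated-state checking; replacing the priority queue with a FIFO queue or a stack gives the breadth-first and depth-first orders respectively.

    import heapq

    def best_first(start, successors, g, h, is_goal):
        """Apply the goal test to nodes in order of f = g + h; return (goal, evaluation order)."""
        frontier = [(g(start) + h(start), start)]
        evaluated = []                       # states in the order the goal test is applied
        while frontier:
            f, node = heapq.heappop(frontier)
            evaluated.append(node)
            if is_goal(node):                # "evaluation" = applying the goal test
                return node, evaluated
            for child in successors(node):   # children assumed to come back in alphabetical order
                heapq.heappush(frontier, (g(child) + h(child), child))
        return None, evaluated

Note that with equal f values, the (f, node) tuples in the heap break ties alphabetically on the node name, matching the tie-breaking rule above.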

Problem 2 - Heuristic Functions (4 points)
Consider the figure below, which represents a world filled with polygonal obstacles.
A robot must find a shortest path from a start position, S, to a goal position, G,
avoiding the obstacles. (The robot can walk alongside the obstacles, but it can’t walk
through them.)

If the path is to be found using A* search, a reasonable heuristic (i.e., h function) is the “straight-line” (Euclidean) distance from the current position to the goal.
Would such a heuristic still be admissible if the true cost of a diagonal move (any
move other than directly N, S, E, or W) were 10 times that of a non-diagonal
move? Briefly explain your answer.
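The two quantities being compared can be sketched as follows (hypothetical grid coordinates; the 10x diagonal cost is the one described above):

    import math

    def euclidean_h(pos, goal):
        """Straight-line (Euclidean) distance from pos to goal."""
        return math.hypot(goal[0] - pos[0], goal[1] - pos[1])

    def step_cost(move, diagonal_cost=10):
        """True cost of one step: diagonal moves cost 10 times a N/S/E/W move."""
        dx, dy = move
        return diagonal_cost if dx != 0 and dy != 0 else 1

    # Example values for a single diagonal step from (0, 0) toward (1, 1):
    print(euclidean_h((0, 0), (1, 1)))   # about 1.41
    print(step_cost((1, 1)))             # 10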

Problem 3 - Problem Formulation as Search (20 points)
This question asks you to formulate a problem as state space search. Imagine that
you are given a string of n letters. Your task is to unscramble the letters to make a
word of length n. You may assume that you have an oracle that will tell you, given
an n-letter string, whether that is the goal word.

Describe how you would represent and/or define each of the following. You need not
write any code to answer any of the following questions. A combination of pictures
and English descriptions will do. Just be sure to give enough detail to make your
answers clear.

States

The Initial State

The Goal Test

Operators

Which search algorithm would you apply to this problem? Justify your answer.
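Purely as an illustration of what a formulation might look like (one possibility, not the expected answer), states could be n-letter strings, with operators that swap two letter positions and the oracle serving as the goal test:

    def successors(state):
        """All strings reachable from state by swapping two letter positions."""
        n = len(state)
        for i in range(n):
            for j in range(i + 1, n):
                letters = list(state)
                letters[i], letters[j] = letters[j], letters[i]
                yield "".join(letters)

    def goal_test(state, oracle):
        """The oracle answers whether state is the goal word."""
        return oracle(state)

    initial_state = "ttersel"   # hypothetical scrambled input of length n = 7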

Now imagine that you have an oracle that will tell you which of the letters in an
n-letter string are correctly placed. Formulate the problem as a hill-climbing search.
In particular, describe the following:

States

The Initial State

The Goal Test

Operators (i.e., how successors are generated)

The Evaluation Function
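Again as one hedged illustration (assuming a hypothetical correct_positions(s) oracle that returns the set of correctly placed indices), the hill-climbing loop might look like:

    def hill_climb(state, correct_positions, successors):
        """Greedy ascent on the number of correctly placed letters."""
        def value(s):
            return len(correct_positions(s))     # evaluation function
        current = state
        while True:
            neighbors = list(successors(current))
            best = max(neighbors, key=value, default=current)
            if value(best) <= value(current):    # no uphill neighbor: stop at a local maximum
                return current
            current = best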

Problem 4 - Propositional Logic (12 points)

P ⇔ Q is defined as being equivalent to (P ⇒ Q) ∧ (Q ⇒ P). Based on this definition,
show that P ⇔ Q is logically equivalent to (P ∨ Q) ⇒ (P ∧ Q).

a. By using truth tables.

b. By using the following identities:

¬(¬P) = P
(P ⇒ Q) = (¬P ∨ Q)
the contrapositive law: (P ⇒ Q) = (¬Q ⇒ ¬P)
De Morgan's laws: ¬(P ∨ Q) = (¬P ∧ ¬Q) and ¬(P ∧ Q) = (¬P ∨ ¬Q)
the commutative laws: (P ∧ Q) = (Q ∧ P) and (P ∨ Q) = (Q ∨ P)
the associative law: ((P ∧ Q) ∧ R) = (P ∧ (Q ∧ R))
the associative law: ((P ∨ Q) ∨ R) = (P ∨ (Q ∨ R))
the distributive law: (P ∨ (Q ∧ R)) = (P ∨ Q) ∧ (P ∨ R)
the distributive law: (P ∧ (Q ∨ R)) = (P ∧ Q) ∨ (P ∧ R)
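Part (a) can be double-checked mechanically with a short Python enumeration of the four truth assignments (this only confirms the result; part (b) still requires a derivation from the identities listed above):

    from itertools import product

    lhs = lambda p, q: (not p or q) and (not q or p)   # (P => Q) and (Q => P), i.e. P <=> Q
    rhs = lambda p, q: (not (p or q)) or (p and q)     # (P or Q) => (P and Q)

    print(all(lhs(p, q) == rhs(p, q) for p, q in product([True, False], repeat=2)))   # True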

Problem 5 - First Order Logic (20 points)

Translate each of the following predicate calculus statements into colloquial English.
∀x Woman(x) ∧ ComputerScientist(x) ⇒ Likes(x, ThaiFood)

Woman(Ursula) ∧ ComputerScientist(Ursula)

Vegetarian(Ursula)

∀x Vegetarian(x) ∧ EatsAt(x, ThaiGarden) ⇒ Eats(x, VegPadThai)

∀x Person(x) ∧ Likes(x, ThaiFood) ∧ Visits(x, Williamstown) ⇒ EatsAt(x, ThaiGarden)

∀x Woman(x) ⇒ Person(x)

Ursula is in Williamstown visiting her friend Andrea. That is,
Visits(Ursula, Williamstown)

Using the rules of inference for first order logic, prove that Ursula will be eating
Vegetable Pad Thai. That is,

Eats(Ursula, VegPadThai)

For each step in the proof, specify the rule of inference applied.
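As a reminder of the shape of the rules involved (stated here with generic predicates, not those of the problem): from ∀x P(x) ⇒ Q(x), Universal Elimination yields P(A) ⇒ Q(A) for a particular constant A, and from P(A) ⇒ Q(A) together with P(A), Modus Ponens yields Q(A).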

Problem 6 - Miscellaneous Short Answers (12 points)

Provide brief answers to the following questions.

Genetic Algorithms take their inspiration from Darwinian evolution. Specifically,
these algorithms are based on two important notions. One of these is inheritance
with variation (implemented in the crossover and mutation operators that generate
offspring). Briefly describe one other notion from Darwinian evolution that is
implemented in Genetic Algorithms.

Describe one hill-climbing pathology that Genetic Algorithms are able to overcome.

The minimax algorithm constructs a search tree for each move from scratch. Discuss
(briefly) the advantages and disadvantages of retaining the search tree from one move
to the next and extending the appropriate portion. How would tree retention interact
with the use of alpha-beta pruning to examine “useful” branches of the tree?
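For reference, a minimal alpha-beta sketch (generic Python, assuming hypothetical successors and evaluate functions); note that which branches are pruned depends on the α and β bounds accumulated along the current path, which bears on the tree-retention question above.

    def alphabeta(state, depth, alpha, beta, maximizing, successors, evaluate):
        """Minimax value of state with alpha-beta pruning."""
        children = list(successors(state))
        if depth == 0 or not children:
            return evaluate(state)
        if maximizing:
            value = float("-inf")
            for child in children:
                value = max(value, alphabeta(child, depth - 1, alpha, beta, False, successors, evaluate))
                alpha = max(alpha, value)
                if alpha >= beta:        # remaining siblings cannot change the result
                    break
            return value
        else:
            value = float("inf")
            for child in children:
                value = min(value, alphabeta(child, depth - 1, alpha, beta, True, successors, evaluate))
                beta = min(beta, value)
                if alpha >= beta:
                    break
            return value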

Problem 7 - Bayesian Networks (16 points)
Consider the Bayesian network shown below.

[Figure: Bayesian network. AR and DH are root nodes; TE has parent AR; AH has parents AR and DH; AE has parents TE and AR.]

P(ar) = 0.001
P(dh) = 0.01

AR | P(te)
t  | 0.0001
f  | 0.01

AR DH | P(ah)
t  t  | 0.1
t  f  | 0.99
f  t  | 0.99
f  f  | 0.00001

TE AR | P(ae)
t  t  | 0.1
t  f  | 0.99
f  t  | 0.99
f  f  | 0.00001

ar − patient has arthritis
dh − patient has dishpan hands
te − patient has tennis elbow
ae − patient’s elbow aches
ah − patient’s hand aches

Use the information in the Bayesian network to answer the following query: If we
know that a patient has pain in their elbow and pain in their hand, what is the
probability that they have arthritis? Please show all work.
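(As a hint at the form of the computation only, using the network structure above: with TE and DH unobserved, P(ar | ae, ah) = α Σ_te Σ_dh P(ar) P(dh) P(te | ar) P(ae | te, ar) P(ah | ar, dh), where α normalizes over ar ∈ {t, f}.)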

