
Lecture 6: Local Search

Dr. Muhammad Adnan Hashmi


1 January 2013

In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution. The state space is a set of configurations, and the task is to find a configuration that satisfies the constraints, e.g., n-queens. In such cases we can use local search algorithms: keep a single "current" state and shift between states, but don't keep track of paths. They use very limited memory and can find reasonable solutions in large state spaces.
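As a concrete illustration (a minimal sketch, not from the slides), an n-queens configuration can be represented as a tuple of row indices, one per column, and a neighbor is produced by moving a single queen within its column; no path is stored, only the current state. The helper names random_state and neighbors are assumptions.

```python
import random

def random_state(n=8):
    # One queen per column, placed in a random row.
    return tuple(random.randrange(n) for _ in range(n))

def neighbors(state):
    # All configurations reachable by moving one queen within its column.
    n = len(state)
    for col in range(n):
        for row in range(n):
            if row != state[col]:
                yield state[:col] + (row,) + state[col + 1:]
```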

Local search algorithms are useful for solving optimization problems: find the best possible state according to a given objective function. For example, optimize the number of products purchased by an e-commerce user, where a state is an action taken by the user plus the resulting page view. No track is kept of path costs between states; all that matters is whether the user is buying more products (or not).
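Staying with the n-queens example rather than the e-commerce one, a hedged sketch of an objective function might simply count attacking pairs of queens, so that lower is better and 0 means all constraints are satisfied. The helper name conflicts is an assumption.

```python
from itertools import combinations

def conflicts(state):
    # state[c] is the row of the queen in column c; count attacking pairs.
    return sum(
        1
        for (c1, r1), (c2, r2) in combinations(enumerate(state), 2)
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2)  # same row or same diagonal
    )
```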

"Like climbing Everest in thick fog with amnesia A loop that continually moves in the direction of increasing value, i.e., uphill Terminates when it reaches a peak where no neighbor has a higher value Fog with Amnesia: Doesnt look ahead beyond the immediate neighbors of the current state.


1. Pick a random point in the search space.
2. Consider all the neighbors of the current state.
3. Choose the neighbor with the best quality and move to that state.
4. Repeat steps 2 and 3 until all the neighboring states are of lower quality.
5. Return the current state as the solution state.
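A minimal Python sketch of this loop for a maximization objective; the parameter names and the use of the earlier n-queens helpers (e.g., value = lambda s: -conflicts(s)) are assumptions, not part of the slides. Step 1 is the caller supplying a random start state.

```python
def hill_climb(start, neighbors, value):
    current = start
    while True:
        # Steps 2-3: consider all neighbors and pick the best one.
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current   # Step 5: no neighbor is better, so return the peak.
        current = best       # Step 4: move uphill and repeat.

# Example use with the earlier n-queens sketches (an assumption, not from the slides):
# solution = hill_climb(random_state(8), neighbors, lambda s: -conflicts(s))
```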


Hill climbing is a greedy local search: it grabs a good neighbor state without thinking about where to go next. Greedy algorithms do generally make good progress towards the solution. Unfortunately, hill climbing can get stuck in local maxima, on ridges (a series of local maxima that occur close together), and on plateaux (flat areas in the state-space landscape). A plateau is a shoulder if the flat area rises uphill later on, and a flat local maximum if no uphill rise exists.

Stochastic hill climbing: chooses at random from among the uphill moves, based on a probability distribution. First-choice hill climbing: implements stochastic hill climbing by generating successors randomly until one is generated that is better than the current state. Random-restart hill climbing: runs a series of hill-climbing searches from randomly selected initial states until a solution is found.
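Hedged sketches of two of these variants, reusing the hypothetical helpers from the earlier sketches (random_state, neighbors, value, hill_climb); the good_enough threshold is an assumption (0 matches value = -conflicts, where zero conflicts means solved).

```python
import random

def first_choice_step(current, neighbors, value):
    # First-choice HC: sample successors in random order and take the first
    # one that improves on the current state (None if none does).
    options = list(neighbors(current))
    random.shuffle(options)
    for candidate in options:
        if value(candidate) > value(current):
            return candidate
    return None

def random_restart(random_state, climb, neighbors, value, good_enough=0):
    # Random-restart HC: keep climbing from fresh random initial states
    # until a state scoring at least good_enough is reached.
    while True:
        result = climb(random_state(), neighbors, value)
        if value(result) >= good_enough:
            return result
```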

Simulated annealing idea: escape local maxima by allowing some "bad" moves, but gradually decrease their frequency.

A move to a worse state is accepted with probability e^(ΔE/T), where ΔE = Value[Next] − Value[Current] is negative and T is the current temperature. If Value[Next] is close to Value[Current], the move is more likely to be accepted. If the temperature is high, the exponent is close to zero and the probability is close to 1. As the temperature approaches zero, the exponent approaches −∞ and the probability approaches zero.

One can prove: if T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1. Simulated annealing is widely used in VLSI layout, airline scheduling, etc.
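A minimal simulated-annealing sketch matching the acceptance rule described above: always take an uphill move, and take a downhill move with probability e^(ΔE/T). The geometric cooling schedule (t *= 0.99), the starting temperature, and the helper names (neighbors, value) are assumptions, not from the slides.

```python
import math
import random

def simulated_annealing(start, neighbors, value, t=1.0, t_min=1e-3, cooling=0.99):
    # Accept uphill moves always; accept a downhill move with probability
    # exp(delta / T), which shrinks as the temperature T cools towards zero.
    current = start
    while t > t_min:
        candidates = list(neighbors(current))
        if not candidates:
            break
        nxt = random.choice(candidates)
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling
    return current
```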
