# Hill Descent with Random Uphill Steps

Rook Jumping Maze Instructions: Starting at the circled cell in the upper-left corner, find a path to the goal cell marked “G”.  From each numbered cell, one may move that exact number of cells horizontally or vertically in a straight line.  How many moves does the shortest path have?

Solution (select to reveal): [19 moves: RDLRUDRLRLUDLDRRUDD]
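The shortest path for such a maze can be computed with breadth-first search, treating each legal rook jump as an edge between cells.  Here is a minimal Python sketch (the function name is illustrative; the example grid and expected move counts are taken from the first sample transcript below):

```python
from collections import deque

def shortest_path_lengths(grid):
    """BFS from the upper-left start cell.  Each cell's number is the exact
    distance one may jump horizontally or vertically from that cell."""
    n = len(grid)
    dist = [[-1] * n for _ in range(n)]   # -1 marks cells not yet reached
    dist[0][0] = 0
    queue = deque([(0, 0)])
    while queue:
        r, c = queue.popleft()
        jump = grid[r][c]                 # goal cell holds 0: no moves out
        for nr, nc in ((r + jump, c), (r - jump, c), (r, c + jump), (r, c - jump)):
            if 0 <= nr < n and 0 <= nc < n and dist[nr][nc] == -1:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

# Maze from the first sample transcript; the goal (value 0) is at lower right.
maze = [[1, 4, 3, 1, 1],
        [3, 2, 3, 2, 1],
        [2, 2, 1, 3, 1],
        [2, 1, 3, 2, 1],
        [1, 4, 1, 4, 0]]
```

Calling `shortest_path_lengths(maze)` reproduces the "Moves from start" grid of the first transcript, with 17 moves to reach the goal cell.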

Problem:  Using a form of stochastic local search called Hill Descent with Random Uphill Steps, search the space of 5-by-5 Rook Jumping Mazes (RJMs) for a given number of iterations and print the best maze evaluated according to the objective function specified by Rook Jumping Maze Evaluation.

One problem with pure hill descent is that stochastic local search may become trapped in a local minimum, where every local step is uphill, i.e. makes the evaluation worse.  One escape strategy allows uphill steps with some small probability.  Thus, with some probability, any local minimum can be escaped.  In this exercise, you will modify hill descent to allow uphill steps with a given fixed probability p.

Using the definitions of hill descent, we may describe our modified algorithm as follows:

• Let j be chosen at random, and jbest ← j.
• For a given number of iterations:
  • j' ← step(j)
  • If e(j') ≤ e(j), or with probability p, j ← j'
  • If e(j) ≤ e(jbest), jbest ← j
• Return jbest

Note: With p = 0, this degenerates to pure hill descent.  With p = 1, this degenerates to a random walk.
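The pseudocode above can be sketched in Python as follows.  This is a minimal sketch, not a reference implementation: it assumes a 5-by-5 maze represented as a list of lists, a step operation that changes one random non-goal cell to a different legal jump value, and an evaluation that negates the shortest path length (returning a large positive value when the goal is unreachable) as a stand-in for the Rook Jumping Maze Evaluation specification.  All function names are illustrative.

```python
import random
from collections import deque

N = 5  # maze side length

def legal_jumps(r, c):
    """Legal jump values for cell (r, c): 1 up to the farthest board edge."""
    return range(1, max(r, N - 1 - r, c, N - 1 - c) + 1)

def random_maze():
    """Random initial state j: legal jumps everywhere, 0 at the goal."""
    maze = [[random.choice(legal_jumps(r, c)) for c in range(N)] for r in range(N)]
    maze[N - 1][N - 1] = 0
    return maze

def evaluate(maze):
    """Stand-in objective e(j): negated shortest path length to the goal,
    or a large penalty when the goal is unreachable."""
    dist = [[-1] * N for _ in range(N)]
    dist[0][0] = 0
    queue = deque([(0, 0)])
    while queue:
        r, c = queue.popleft()
        j = maze[r][c]
        for nr, nc in ((r + j, c), (r - j, c), (r, c + j), (r, c - j)):
            if 0 <= nr < N and 0 <= nc < N and dist[nr][nc] == -1:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return -dist[N - 1][N - 1] if dist[N - 1][N - 1] != -1 else 1000000

def step(maze):
    """Neighbor j': change one random non-goal cell to a different legal jump."""
    new = [row[:] for row in maze]
    while True:
        r, c = random.randrange(N), random.randrange(N)
        if (r, c) != (N - 1, N - 1):
            break
    new[r][c] = random.choice([v for v in legal_jumps(r, c) if v != maze[r][c]])
    return new

def descend(iterations, p):
    """Hill descent with random uphill steps, mirroring the pseudocode."""
    j = random_maze()
    e_j = evaluate(j)
    j_best, e_best = j, e_j
    for _ in range(iterations):
        j2 = step(j)
        e2 = evaluate(j2)
        # Accept downhill/sideways steps always, uphill steps with probability p.
        if e2 <= e_j or random.random() < p:
            j, e_j = j2, e2
        if e_j <= e_best:
            j_best, e_best = j, e_j
    return j_best, e_best
```

Note that the best state found so far is tracked separately from the current state, so occasional uphill steps can never lose the best maze seen.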

• Russell and Norvig. Artificial Intelligence: A Modern Approach, 3rd edition, Chapter 4 through section 4.1.1 (hill-climbing search).  Here we implement stochastic hill descent (the reverse of stochastic hill climbing) allowing sideways moves.  Additionally, we accept all neighboring uphill steps with some small probability p.
• Hoos & Stützle. Stochastic Local Search: Foundations and Applications, section 1.5 (Stochastic Local Search).  The escape strategy used here is to accept all neighboring uphill steps with some small probability p.  Note that an uphill step does not imply that we have reached a local minimum.
• Wikipedia article on "Hill climbing", where we here focus on stochastic hill descent (the reverse of stochastic hill climbing).  Here, we accept all neighboring, non-uphill steps, and accept all neighboring uphill steps with some small probability p.

Input Format:

• a positive integer number of iterations for hill descent optimization
• a probability for the acceptance of an uphill step

Output Format:

• Initially prompt the user with "Iterations? " and "Uphill step probability? ".
• After the hill descent iterations, print the RJM with the best (minimum) objective function evaluation according to the Rook Jumping Maze Evaluation output specification.

Sample Transcripts (input underlined):

```
Iterations? 20000
Uphill step probability? 0
1 4 3 1 1
3 2 3 2 1
2 2 1 3 1
2 1 3 2 1
1 4 1 4 0

Moves from start:
0  1  5 12 13
1  3  9  2 14
7  5  8  6 15
3  4  4  3 16
2  2 10 11 17

-17
```

```
Iterations? 20000
Uphill step probability? .01
3 3 2 2 3
2 2 2 2 4
4 1 1 3 1
4 3 3 1 4
2 3 1 3 0

Moves from start:
0  2  9  1  3
6 12  7 13  5
3 11 10  2  4
1  3  8 14  2
16 18 17 15 19

-19
```

```
Iterations? 20000
Uphill step probability? 1
3 4 2 4 2
1 3 3 3 4
1 1 1 2 2
3 3 3 3 4
3 1 1 3 0

Moves from start:
0  8  9  1 10
4  5  8  3  6
5  6  7  8 11
1  7  8  2  8
3  6  7  2 12

-12
```

Questions:

1. Empirically, how does the objective function evaluation of the average generated maze vary with (1) the number of iterations, and (2) the probability of accepting uphill steps?
2. Is the probability of taking an uphill step the same as the probability of escaping a local minimum?  Why or why not?

Todd Neller