SE250:lab-X:Maintainers report

Introduction

This lab was supposedly one of the more fun labs to do. It involved getting your head around the different search strategies in state-space searching: depth first search, breadth first search and iterative deepening search.

Overview

Thankfully, rwan064 provided instructions on how to get the package to compile.

  • Delete the file "malloc.h"
  • Then build the project - type "make" in Cygwin.

What's more, it was done at 1:35am ... talk about dedication!

However, search.exe still failed to run properly because it had been coded for a 4x4 puzzle when, in fact, we were dealing with a 3x3 puzzle. So in search.c we had to change the inputs to the init_States function call to exclude 9, a, b, c, d, e and f.

Task1

Calculate how many different states exist for the 8-piece puzzle. Estimate the number of edges in the complete state space graph. Explain how you made these calculations in your report.

Task 1 was performed pretty well by the class. Almost everyone attempted it and got it correct.


Part(a)

The number of different states that exist for the 8-piece puzzle is:

9! = 362880

The question "How many states" is simply asking how many ways are there to arrange 9 different puzzle pieces (remember the empty space is also considered a puzzle piece). To answer the question we simply do a permutation:

9P9 = 9!


Part(b)

The number of edges is 483840. An edge is a transition from one state to another, so the question is really asking how many different transitions can be made between states. Consider the puzzle: the number of legal moves depends on where the empty space is, and some positions allow more legal moves than others.

e.g.: with the empty space in the centre there are 4 legal moves; in a corner there are only 2.


Here is a representation of the number of legal moves for each position of the empty space:

2 3 2
3 4 3
2 3 2

We can now calculate the average number of legal moves per state: (4*3 + 4 + 4*2)/9 = 24/9

Therefore, the count so far is: total number of states * 24/9

But this counts every edge twice, because each move has a reverse move that takes us straight back to the node we have just come from:

1 2 3                 1 2 3                1 2 3
4 5 6        --->     4 5 6       --->     4 5 6
7 8 .                 7 . 8                7 8 .

Hence to counter this we divide by 2.

Total number of edges = 9! * 24/18 = 483840
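
If you want to double-check the arithmetic, a throwaway C program like this (nothing to do with the lab code, just the sums above) prints both numbers:

  #include <stdio.h>

  int main(void) {
      /* 9! distinct arrangements of the 8 tiles plus the blank */
      long states = 1;
      for (int i = 2; i <= 9; i++)
          states *= i;

      /* legal moves per blank position: 4 corners * 2 + 4 sides * 3 + centre * 4 = 24,
         averaged over the 9 positions, then halved because each edge joins two states */
      long edges = states * 24 / 9 / 2;

      printf("states = %ld\n", states);   /* 362880 */
      printf("edges  = %ld\n", edges);    /* 483840 */
      return 0;
  }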

Task2

The main search code can be found in search.c. In your own words, write a summary of the search algorithm in your report.

Some people had problems with task 2 and many decided to skip it altogether, most likely because of the complex nature of the code provided. It was definitely difficult to sort through the structure and appreciate what the code was actually doing. Another major problem was that while the handout stated throughout that the search was depth first, it was in fact a best first search.

The basic structure of the search was as follows:

  1. While the ordered state list is not empty:
  2. Pop the best state off the ordered state list.
  3. If that state is the goal state, stop.
  4. Otherwise, push its child states onto the ordered state list.
  5. Repeat.

Which state was popped next, however, was determined by the Manhattan distance code provided with the 8-puzzle.
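
For anyone who wants to see the shape of that loop without wading through search.c, here is a toy best-first search in C. The states are just integers and the heuristic is the distance to a goal number, so none of this is the real lab code - it only illustrates the pop-the-best / goal-test / push-children cycle from the list above:

  #include <stdio.h>
  #include <stdlib.h>

  #define CAPACITY 1000

  /* Toy best-first search: states are plain integers, the children of n are
     n + 1 and n * 2, and the stand-in for the Manhattan distance is simply
     |goal - n|.  The frontier is a plain array and we always expand the
     entry with the smallest heuristic value. */
  int main(void) {
      int goal = 37;
      int frontier[CAPACITY];
      int size = 0;

      frontier[size++] = 1;                        /* start state */

      while (size > 0) {
          /* pop the state with the smallest heuristic value */
          int best = 0;
          for (int i = 1; i < size; i++)
              if (abs(goal - frontier[i]) < abs(goal - frontier[best]))
                  best = i;
          int state = frontier[best];
          frontier[best] = frontier[--size];

          printf("expanding %d\n", state);
          if (state == goal) {                     /* goal test */
              printf("reached the goal\n");
              break;
          }

          /* push the children onto the ordered list */
          if (size + 2 <= CAPACITY) {
              frontier[size++] = state + 1;
              frontier[size++] = state * 2;
          }
      }
      return 0;
  }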

Task3

An important step in state-search is the generation of child states. Child states are linked to their parent state by a single legal move or transformation. In the 8-puzzle, a legal move is sliding a tile into the empty space. Knowing the start state and the algorithm for generating child states is sufficient to define the entire state graph. By hand, draw a graph of the region of the state space within two consecutive moves of the initial state shown. Highlight the start and goal states, and the solution path from the start to the goal. Check your graph using the graph.c program.
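
As a rough illustration of what "generating child states" means - this is not the code from search.c, just a sketch using the same nine-character state strings with '.' for the empty space:

  #include <stdio.h>
  #include <string.h>

  /* States are 9-character strings read row by row, '.' marking the blank,
     e.g. "12345678." is the solved puzzle.  Each child is reached by sliding
     one neighbouring tile into the blank (i.e. swapping it with the blank). */
  static void print_children(const char *state) {
      int blank = strchr(state, '.') - state;
      int row = blank / 3, col = blank % 3;
      int dr[] = { -1, 1, 0, 0 };          /* up, down, left, right */
      int dc[] = { 0, 0, -1, 1 };

      for (int i = 0; i < 4; i++) {
          int r = row + dr[i], c = col + dc[i];
          if (r < 0 || r > 2 || c < 0 || c > 2)
              continue;                    /* move would fall off the board */
          char child[10];
          strcpy(child, state);
          child[blank] = child[r * 3 + c]; /* slide the tile into the blank */
          child[r * 3 + c] = '.';
          printf("%s\n", child);
      }
  }

  int main(void) {
      /* blank in a corner, so only 2 children; a centre blank would give 4 */
      print_children("12345678.");
      return 0;
  }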

A lot of people drew their diagram by hand, some typed it up on the wiki as ASCII art and others drew something up in Paint.

The basic answer was given to us using the graph.exe program. But many people failed to "highlight" the initial and goal states. The only person who seems to have done this is rwan064. So here is his diagram:

<html><img src="http://studwww.cs.auckland.ac.nz/~rwan064/lab10/task3.png" border="0" /></html>

The people who drew up something in paint labeled the initial and goal states except for one who failed to label the goal state.

Task4

The search strategy used in search.c is depth first search. Use the print state function to print the first few states the search code checks, compare this to your graph and describe in your own words how this search strategy traverses the search space.

Several people completed this task by providing an explanation of how a depth first search works. Basically, depth first search goes all the way down one path before backtracking to the last intersection and then going down the next path from there.

A small number of people went to the effort of finding out exactly how the search function worked. This is what they found:

The function takes a state and adds its neighbours to a binary search tree. The states are ordered according to the Manhattan distance. The Manhattan distance is a non-negative number which is smaller when the state is closer to the solution (from a certain point of view). The search function was always given the minimum node on the tree, which would correspond to the state with the smallest Manhattan distance, hence the state which has a higher probability of being close to the solution. This is not in fact a depth first search, but rather a best first search, as Mark explained in the lecture. It uses the Manhattan distance as a guiding heuristic to hopefully reach the solution faster. For more information on best first searching see http://en.wikipedia.org/wiki/Best-first_search
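
For reference, the Manhattan distance itself is easy to compute. This is just a sketch using the nine-character state strings, not the actual function from the 8-puzzle code:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Sum, over every tile, of the horizontal plus vertical distance between
     where the tile is and where it belongs in the goal.  The blank ('.') is
     not counted.  States are 9-character strings such as "12345678.". */
  static int manhattan(const char *state, const char *goal) {
      int total = 0;
      for (int i = 0; i < 9; i++) {
          if (state[i] == '.')
              continue;
          int g = strchr(goal, state[i]) - goal;    /* tile's goal position */
          total += abs(i / 3 - g / 3) + abs(i % 3 - g % 3);
      }
      return total;
  }

  int main(void) {
      /* a smaller value means (hopefully) closer to the goal */
      printf("%d\n", manhattan("12345678.", "12345678."));  /* 0 */
      printf("%d\n", manhattan("5674.8321", "12345678."));  /* 20 -- task 7's start against the default goal */
      return 0;
  }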

Many people found the rigour of investigating the search function too much, and I don’t blame them. It seemed to require knowledge of function pointers, and I’m pretty sure we haven’t learnt about those yet.

Task5

Run the search code. What do you notice about the solution compared with the solution in your state graph?

Everyone who attempted task 5 got the same results from the search program: 2 moves, 5 states visited. What they failed to realise was that those 5 states correspond to the state diagram we drew in task 3. However, there are 6 states in the tree after the initial state (root/initial state at level 0, 2 states at level 1, so 4 at level 2), which means the search found the goal before visiting the 4th state at level 2. So the questions which arise are: 'Which state got skipped?' and 'Why is it visiting the states in that order?'

Task6

Breadth first search will check all the children of a state before checking any of the children's children. Make a one-line change to the code in search.c to convert the algorithm from depth-first to breadth-first search. What do you notice about the breadth-first solution compared with the depth-first one?

Very few people got this far. Of those who did attempt it, some studied the code hard enough to find that we had actually been provided with a best first search, while others remained unaware of this piece of information. (As discussed in the "meeting" earlier today - 4/6/08)

The people who did not pick up on this thought that you had to change the stack to a queue. Most realised that this is not possible in one line of code.
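
The stack-versus-queue intuition is still the right one, even if it is not a one-line change here. This toy C program (nothing to do with search.c) walks a small binary tree using the same frontier array either as a stack or as a queue, which is exactly the difference between depth first and breadth first:

  #include <stdio.h>

  #define N 7   /* nodes 0..6 form a complete binary tree: children of i are 2i+1, 2i+2 */

  /* Visit every node, taking the next node either from the back of the
     frontier (stack -> depth first) or from the front (queue -> breadth first). */
  static void traverse(int use_queue) {
      int frontier[64];
      int head = 0, tail = 0;

      frontier[tail++] = 0;                         /* start at the root */
      while (head < tail) {
          int node = use_queue ? frontier[head++]   /* FIFO: breadth first */
                               : frontier[--tail];  /* LIFO: depth first   */
          printf("%d ", node);

          if (2 * node + 1 < N) frontier[tail++] = 2 * node + 1;
          if (2 * node + 2 < N) frontier[tail++] = 2 * node + 2;
      }
      printf("\n");
  }

  int main(void) {
      printf("depth first   : "); traverse(0);   /* 0 2 6 5 1 4 3 */
      printf("breadth first : "); traverse(1);   /* 0 1 2 3 4 5 6 */
      return 0;
  }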

Because no-one actually managed to finish this task, tasks 7 and 8 could not be completed either.

Task7

Conduct searches from the initial state to the goal state below for both breadth first and depth first search. How many states have to be examined before the solution is found in each case? Compare the path returned for both solutions. Initial state = "5674.8321" and Goal state = "1238.4765"

Not many people even gave this task a thought, and those who did were unable to complete it, although this is probably because of the problem with task six. It seems that from the initial state of ‘5674.8321’ the search algorithm visited about 35139 different states. I am guessing that the search algorithm took quite a random path since the goal state was supposed to be '1238.4765' but the Manhattan distance function assumes that the goal is '12345678.'

If anyone is interested in finding out which of BFS and DFS would have solved this problem with the fewest visited states, they would need to code up a queue and a stack and then use each in turn, instead of the tree, as the primary data structure for the search. I am not that interested.

Task8

What are the obvious benefits of using each of these two different search strategies? Under what circumstances would we prefer one over the other? Suggest some features of the state-space that will affect the relative performance of the algorithms.

Very few people reached this task. The ideas people had were that depth first uses less memory but requires a bit of luck in a large state space, while breadth first is good when the goal is close to the start state and has the advantage of being guaranteed to find the shortest path.

See this page for a discussion of the two methods. http://www.macs.hw.ac.uk/~alison/ai3notes/paragraph2_6_2_1_0_1.html

Task9

How are infinite loops (such as moving a piece back and forth) prevented in the supplied code? What data structures are used?

Personally I thought this question was one of the easiest, although most people had given up long before reaching it. The simple answer is that a hashtable is used to ensure that no state is visited twice.
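
The idea is simple enough to sketch in a few lines of C. This is not the hashtable from the lab code, just an illustration of checking whether a nine-character state string has been seen before:

  #include <stdio.h>
  #include <string.h>

  #define TABLE_SIZE 1024              /* toy size; the real code will differ */

  static char table[TABLE_SIZE][10];   /* each slot holds one 9-character state */

  /* Simple polynomial hash of a 9-character state string such as "1238.4765". */
  static unsigned hash_state(const char *state) {
      unsigned h = 0;
      for (int i = 0; state[i] != '\0'; i++)
          h = h * 31 + (unsigned char)state[i];
      return h % TABLE_SIZE;
  }

  /* Returns 1 if the state has been seen before, otherwise records it and
     returns 0.  Linear probing resolves collisions. */
  static int seen_before(const char *state) {
      unsigned i = hash_state(state);
      while (table[i][0] != '\0') {
          if (strcmp(table[i], state) == 0)
              return 1;                       /* already visited */
          i = (i + 1) % TABLE_SIZE;
      }
      strcpy(table[i], state);                /* first visit: remember it */
      return 0;
  }

  int main(void) {
      printf("%d\n", seen_before("1238.4765"));   /* 0 -- new state */
      printf("%d\n", seen_before("1238.4765"));   /* 1 -- revisit rejected */
      return 0;
  }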

Task10

A third search strategy is iterative deepening. This performs consecutive depth first searches with a depth bound, so each time the search will not look for children beyond a certain depth. If no solution is found, the depth bound is increased and the process repeated. Implement iterative deepening search. Test your code with the initial and goal states used in 7, and comment on its performance.

No-one attempted this task.
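
For anyone who does want to try it, here is a rough sketch of the idea in C. The start state below is a made-up one only two moves from the task 7 goal, so this toy actually finishes quickly; the real task 7 start state would need a far deeper search:

  #include <stdio.h>
  #include <string.h>

  /* Depth-limited depth first search over 9-character puzzle states
     ('.' is the blank).  Returns 1 if the goal is reachable within
     'limit' moves.  'prev_blank' stops us immediately undoing the last
     move, which keeps this toy example from thrashing. */
  static int depth_limited(const char *state, const char *goal,
                           int limit, int prev_blank) {
      if (strcmp(state, goal) == 0)
          return 1;
      if (limit == 0)
          return 0;

      int blank = strchr(state, '.') - state;
      int row = blank / 3, col = blank % 3;
      int dr[] = { -1, 1, 0, 0 }, dc[] = { 0, 0, -1, 1 };

      for (int i = 0; i < 4; i++) {
          int r = row + dr[i], c = col + dc[i];
          if (r < 0 || r > 2 || c < 0 || c > 2 || r * 3 + c == prev_blank)
              continue;
          char child[10];
          strcpy(child, state);
          child[blank] = child[r * 3 + c];
          child[r * 3 + c] = '.';
          if (depth_limited(child, goal, limit - 1, blank))
              return 1;
      }
      return 0;
  }

  int main(void) {
      /* hypothetical start state two moves from the task 7 goal */
      const char *start = ".13824765", *goal = "1238.4765";

      /* consecutive depth first searches with an increasing depth bound */
      for (int bound = 0; ; bound++) {
          printf("trying depth bound %d\n", bound);
          if (depth_limited(start, goal, bound, -1)) {
              printf("solution found within %d moves\n", bound);
              break;
          }
      }
      return 0;
  }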