
2 Algorithms to Perform Search

With the ability to generate and evaluate a possible solution, a random search strategy can be defined. A random search method repeatedly generates and evaluates candidate solutions, using no knowledge of previously generated solutions, and typically stops when a solution meets a particular criterion, such as matching the goal state or having quality above a threshold.
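
As an illustration, the following sketch implements this strategy for a maximising quality measure; the generate and evaluate callables, the threshold stopping criterion and the bit-string example are assumptions chosen for illustration, not taken from the text.

    import random

    def random_search(generate, evaluate, threshold, max_iters=10000):
        # Repeatedly generate and evaluate solutions; the search itself
        # uses no knowledge of previously generated solutions, only the
        # best solution seen so far is recorded for reporting.
        best, best_fitness = None, float("-inf")
        for _ in range(max_iters):
            candidate = generate()
            fitness = evaluate(candidate)
            if fitness > best_fitness:
                best, best_fitness = candidate, fitness
            if best_fitness >= threshold:  # stopping criterion
                break
        return best, best_fitness

    # Example: look for a length-10 bit string with at least 9 ones.
    solution, quality = random_search(
        generate=lambda: [random.randint(0, 1) for _ in range(10)],
        evaluate=sum,
        threshold=9)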

Transformation operators map the current solution to one or more new solutions, depending on the number and type of operators. In this way, the neighbourhood of the current solution is generated: the set of all solutions that can be produced by applying the operators to the current solution.
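
For example, with bit-string solutions and a single bit-flip operator, the neighbourhood can be enumerated directly; the representation and function name below are hypothetical choices for the example.

    def bit_flip_neighbourhood(solution):
        # Every solution reachable by applying the single-bit-flip
        # operator once, at each possible position.
        neighbours = []
        for i in range(len(solution)):
            neighbour = list(solution)
            neighbour[i] = 1 - neighbour[i]
            neighbours.append(neighbour)
        return neighbours

    print(bit_flip_neighbourhood([0, 1, 0]))
    # [[1, 1, 0], [0, 0, 0], [0, 1, 1]]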

Search strategies control the operators used, the size of the neighbourhood searched and which solutions are selected to continue the search from. In traditional search, where typically a goal state is defined, the operators can define a search tree. A node in the tree represents a solution, and its successors, or children, are defined by the operators. Strategies can then be defined to search the tree. Two such blind search strategies are depth-first and breadth-first search.

Breadth-first search generates all the successors of the root node in the search tree. Next, it generates all the successors of those nodes, and continues until a solution is found. Depth-first search generates a single successor of the root node. Next, it generates one successor of that node, and continues until a maximum depth is reached. At this point, it backtracks to the parent node and, if possible, generates another successor.
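
The two strategies can be sketched as follows, assuming a successors function that applies the operators to a node and an is_goal test; both names, and the depth limit, are assumptions for the example. A search tree is assumed, so visited nodes are not tracked.

    from collections import deque

    def breadth_first(root, successors, is_goal):
        # Expand the tree level by level until a goal node is found.
        frontier = deque([root])
        while frontier:
            node = frontier.popleft()
            if is_goal(node):
                return node
            frontier.extend(successors(node))
        return None

    def depth_first(node, successors, is_goal, depth=0, max_depth=10):
        # Follow a single branch; backtrack when the depth limit is hit.
        if is_goal(node):
            return node
        if depth >= max_depth:
            return None
        for child in successors(node):
            found = depth_first(child, successors, is_goal,
                                depth + 1, max_depth)
            if found is not None:
                return found
        return None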

For problems with large search trees, blind search methods become inefficient and impractical. This is particularly true when a search problem is cast as a decision problem (e.g. whether or not a solution exists with quality above a threshold) that cannot be decided in time polynomial in the size of the problem instance. That is, if the decision problem is $\mathnormal{NP}$-complete, the corresponding optimisation or search problem must be at least as hard [Papadimitriou and Steiglitz, 1982].

Moreover, blind search methods typically do not use any knowledge of the problem domain, even though such knowledge is often readily available. Because problem instances commonly grow intractably large, approximate methods called heuristics are typically used instead. Heuristics define rules-of-thumb about the problem structure, such as a way to generate a feasible solution to a combinatorial optimisation problem or a search strategy that finds good solutions in a reasonable amount of time [Reeves, 1995]. Heuristics can improve search efficiency by considering only a subset of all possible solutions or by generating only solutions that are likely to be better than the current one.
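
As a concrete example of a rule-of-thumb that quickly builds a feasible solution, the sketch below gives a nearest-neighbour construction for the travelling salesman problem; the problem choice, representation and function names are illustrative assumptions.

    import math

    def nearest_neighbour_tour(cities, start=0):
        # Rule-of-thumb: always visit the closest unvisited city next.
        # Produces a feasible tour quickly, with no optimality guarantee.
        unvisited = set(range(len(cities))) - {start}
        tour = [start]
        while unvisited:
            last = cities[tour[-1]]
            nearest = min(unvisited,
                          key=lambda c: math.dist(last, cities[c]))
            tour.append(nearest)
            unvisited.remove(nearest)
        return tour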

A simple heuristic search method is hill-climbing. Hill-climbing (or, for minimisation, steepest descent) begins with a randomly generated solution and produces neighbours until a better solution is found. This new solution becomes the current solution and the algorithm begins producing its neighbours in turn. If there are no improving solutions in the current neighbourhood, the method stops; the current solution may or may not be a global optimum at this point. Stochastic hill-climbing is a variant that proceeds in the same way but can also accept neighbours whose quality is equal to that of the current solution. The multi-start hill-climbing method attempts to improve performance by performing several trials, or runs, of hill-climbing, each starting from a different random initial solution.

The operator used in these heuristic techniques typically makes a small random change to the current solution. This type of operator, often called mutation, requires little or no domain knowledge. If all the neighbours of the current solution are created before the best is selected, the method is called neighbourhood search. Variable neighbourhood search adaptively increases the size of the neighbourhood when no improving solution is found.
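
A minimal sketch of hill-climbing with a bit-flip mutation operator, together with a multi-start wrapper; the bit-string representation and the parameter values are assumptions for illustration.

    import random

    def mutate(solution):
        # Mutation operator: a small random change (flip one bit).
        i = random.randrange(len(solution))
        neighbour = list(solution)
        neighbour[i] = 1 - neighbour[i]
        return neighbour

    def hill_climb(initial, evaluate, max_tries=1000):
        # Accept the first improving neighbour; stop when none is
        # found within max_tries attempts.
        current, current_fitness = initial, evaluate(initial)
        improved = True
        while improved:
            improved = False
            for _ in range(max_tries):
                neighbour = mutate(current)
                fitness = evaluate(neighbour)
                if fitness > current_fitness:
                    current, current_fitness = neighbour, fitness
                    improved = True
                    break
        return current, current_fitness

    def multi_start(n, evaluate, trials=10):
        # Repeat hill-climbing from different random initial solutions
        # and keep the best result over all trials.
        starts = ([random.randint(0, 1) for _ in range(n)]
                  for _ in range(trials))
        return max((hill_climb(s, evaluate) for s in starts),
                   key=lambda result: result[1])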

Another simple heuristic search method is best-first search. This method generates all the neighbours of a solution, selects the best one, generates all of its neighbours, and so on. When no better solution is found in a neighbourhood, the search returns to the previous neighbourhood, selects the second best, and continues until no improvements can be found. Beam search is similar to best-first search but does not remember the complete previous neighbourhood. Instead, beam search remembers a fraction of the best solutions from all previous neighbourhoods to return to when no improvements are found. These methods are also called local search methods, as they only accept improving solutions that lie in the local neighbourhood of the current solution.
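
A simplified beam-search sketch in the local-search sense used here: at each step only the beam_width best candidates are retained. The neighbourhood and evaluate callables are assumed interfaces, and the parameter values are illustrative.

    def beam_search(initial, neighbourhood, evaluate,
                    beam_width=3, max_steps=100):
        # Keep only the beam_width best solutions at each step instead
        # of remembering complete previous neighbourhoods.
        beam = [initial]
        best, best_fitness = initial, evaluate(initial)
        for _ in range(max_steps):
            candidates = [n for s in beam for n in neighbourhood(s)]
            if not candidates:
                break
            candidates.sort(key=evaluate, reverse=True)
            beam = candidates[:beam_width]
            if evaluate(beam[0]) <= best_fitness:
                break  # no improvement among the retained solutions
            best, best_fitness = beam[0], evaluate(beam[0])
        return best, best_fitness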

By using knowledge of the domain, heuristic evaluation functions can be defined to estimate the quality of the current solution or the likelihood that the search will continue toward a goal state. Heuristic evaluation functions are also called cost functions, objective functions or fitness functions, and the estimated solution quality they report is often called the fitness of the solution. This measure of solution quality gives the search strategy additional information with which to guide search. Metaheuristic methods are search strategies that use heuristic evaluation functions in this way. They are iterative improvement algorithms that repeatedly transform solutions, selecting better or worse new solutions in order to improve the overall fitness of the final solution obtained. Metaheuristic methods assume that the heuristic evaluation function, representation and operators combine to lead search on a path from the current solution to a better one, hopefully to the goal state or a global optimum.
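
For instance, a heuristic evaluation (cost) function for the travelling salesman problem can simply be the total tour length; the tour and city representations below are assumptions for illustration.

    import math

    def tour_length(tour, cities):
        # Cost (objective) function: total Euclidean length of the
        # closed tour. Lower cost means higher solution quality.
        total = 0.0
        for i in range(len(tour)):
            x1, y1 = cities[tour[i]]
            x2, y2 = cities[tour[(i + 1) % len(tour)]]
            total += math.hypot(x2 - x1, y2 - y1)
        return total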

When operators induce small changes to the solution that result in small changes to solution quality, the fitness landscape is said to be smooth. The term strong causality describes the property where small changes to the solution result in small changes to its fitness; more generally, it indicates a correlation between the size of a solution transformation and the size of the resulting change in solution quality, for large changes as well as small ones. Weak causality describes the case where there is little or no correlation between the size of the changes in solutions and the changes in fitness values. Fitness landscapes were suggested by Wright (1932) and later studied by Kauffman (1993) using the $\mathnormal{NK}$ family of correlated landscapes. When the landscape is not smooth, local search methods are likely to become stuck at sub-optimal solutions.
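
One way to probe causality empirically is to correlate the size of a random change with the size of the resulting fitness change; the sketch below does this for bit strings and is an illustrative assumption, not a method from the text. It assumes len(solution) >= max_flips.

    import random

    def causality_estimate(solution, evaluate, max_flips=5, samples=200):
        # Correlate the number of bits flipped (change size) with the
        # absolute change in fitness. Values near 1 suggest strong
        # causality; values near 0 suggest weak causality.
        base = evaluate(solution)
        sizes, deltas = [], []
        for _ in range(samples):
            k = random.randint(1, max_flips)
            neighbour = list(solution)
            for i in random.sample(range(len(solution)), k):
                neighbour[i] = 1 - neighbour[i]
            sizes.append(k)
            deltas.append(abs(evaluate(neighbour) - base))
        # Pearson correlation, computed directly to stay dependency-free.
        n = len(sizes)
        mean_s, mean_d = sum(sizes) / n, sum(deltas) / n
        cov = sum((s - mean_s) * (d - mean_d)
                  for s, d in zip(sizes, deltas))
        sd_s = sum((s - mean_s) ** 2 for s in sizes) ** 0.5
        sd_d = sum((d - mean_d) ** 2 for d in deltas) ** 0.5
        return cov / (sd_s * sd_d) if sd_s and sd_d else 0.0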

In order to avoid sub-optimal solutions, metaheuristics use a variety of techniques to be robust to non-smooth, or rugged, fitness landscapes. Most metaheuristic search strategies allow non-improving solutions to be selected during search. Two popular methods are simulated annealing [Kirkpatrick et al., 1983] and tabu search [Glover, 1986; Glover and Laguna, 1997]. Simulated annealing resembles hill-climbing and likewise begins with a random current solution. However, in simulated annealing, a parameter called the temperature defines the likelihood that a worse-scoring solution is accepted. Over time the temperature is reduced, so that worse solutions are accepted with ever smaller probability.
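
A minimal simulated annealing sketch with geometric cooling and the classical exp(-delta / temperature) acceptance rule; the mutate and evaluate callables and the cooling schedule parameters are assumptions for the example.

    import math
    import random

    def simulated_annealing(initial, mutate, evaluate,
                            temperature=1.0, cooling=0.99,
                            min_temperature=1e-3):
        # A worse neighbour is accepted with probability
        # exp(-delta / temperature); cooling lowers this over time.
        current, current_fitness = initial, evaluate(initial)
        best, best_fitness = current, current_fitness
        while temperature > min_temperature:
            neighbour = mutate(current)
            fitness = evaluate(neighbour)
            delta = current_fitness - fitness  # > 0 if neighbour is worse
            if delta <= 0 or random.random() < math.exp(-delta / temperature):
                current, current_fitness = neighbour, fitness
            if current_fitness > best_fitness:
                best, best_fitness = current, current_fitness
            temperature *= cooling
        return best, best_fitness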

Tabu search generates the neighbourhood of the current solution and considers any of the new solutions that are not on a list of recently visited ones, the tabu list. The search is thus temporarily banned from re-visiting solutions, which prevents cycling. In this way, if all improving neighbours are on the tabu list, the algorithm can also visit non-improving solutions. Tabu search can also contain explicit methods for directing the search toward similar or dissimilar new solutions, called intensification and diversification respectively. Both simulated annealing and tabu search have been applied to a wide range of real-world problems [Pardalos and Resende, 2002; Glover and Kochenberger, 2003].
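
A short tabu-search sketch using a fixed-length list of recently visited solutions; the neighbourhood and evaluate callables, the tenure and the step limit are assumptions for the example, and intensification and diversification are omitted.

    from collections import deque

    def tabu_search(initial, neighbourhood, evaluate,
                    tenure=20, max_steps=200):
        # Move to the best neighbour not on the tabu list, even if it
        # is worse; recently visited solutions are temporarily banned.
        current = initial
        best, best_fitness = initial, evaluate(initial)
        tabu = deque([initial], maxlen=tenure)
        for _ in range(max_steps):
            candidates = [n for n in neighbourhood(current)
                          if n not in tabu]
            if not candidates:
                break
            current = max(candidates, key=evaluate)
            tabu.append(current)
            if evaluate(current) > best_fitness:
                best, best_fitness = current, evaluate(current)
        return best, best_fitness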

Another class of metaheuristic search strategies is based on the theory of Natural Selection [Darwin, 1859]; these are described in the following section.