
2 A Model of Difficulty

To assess these results without the biases introduced by a specific problem domain and fitness function, the following model is constructed:
  1. Trees contain only `dummy' functions and terminals which are not used to calculate fitness.
  2. Fitness is assigned independently of tree shape and content. All individuals have an initial fitness of 1.0 (0.0 is the best fitness).
  3. An offspring which is the same size or larger than its parent is awarded a small fitness improvement; an offspring which is smaller than its parent is penalised by a small amount. This incorporates the theories of code growth and their corresponding empirical evidence.

The model consists of binary trees that are initially full with 7 nodes. The algorithm is presented below (population size 100, tournament selection of size 3, and subtree crossover for recombination):
\begin{algorithm}
\begin{algorithmic}[0]
\STATE Generate initial population and assign each individual a fitness of $1.0$
\FOR{each generation}
\FOR{each new individual}
\STATE Select parents by tournament selection and create an offspring with subtree crossover
\IF{the offspring is the same size or larger than its parent}
\STATE Improve its fitness by a small amount, with a minimum of $0.0$
\ELSE
\STATE Worsen its fitness by a small amount
\ENDIF
\ENDFOR
\ENDFOR
\end{algorithmic}\end{algorithm}
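
A minimal Python sketch of this model follows. It is illustrative only, not the thesis implementation: tree contents are ignored and trees are represented by their size alone, crossover is approximated at the size level, the constant DELTA stands in for the unspecified `small amount', and P is assumed to be the probability that an offspring is assigned the ideal fitness of 0.0 independently of its size (consistent with the note below that the assignment of ideal fitness is independent of size).

\begin{verbatim}
import random

POP_SIZE, TOURN_SIZE, GENS = 100, 3, 50
DELTA = 0.01  # hypothetical "small amount" of fitness change

def random_subtree_size(size):
    # Stand-in: draw an odd subtree size uniformly from the tree.
    return random.choice(range(1, size + 1, 2))

def crossover(size1, size2):
    # Offspring size: parent 1 minus one of its subtrees, plus a
    # subtree taken from parent 2 (sizes only, contents are dummies).
    return size1 - random_subtree_size(size1) + random_subtree_size(size2)

def tournament(pop):
    # Best (lowest) fitness among TOURN_SIZE random entrants.
    return min(random.sample(pop, TOURN_SIZE), key=lambda ind: ind[1])

def run(p):
    # Initial population: full binary trees of 7 nodes, fitness 1.0.
    pop = [(7, 1.0) for _ in range(POP_SIZE)]
    for _ in range(GENS):
        new_pop = []
        for _ in range(POP_SIZE):
            (s1, f1), (s2, _f2) = tournament(pop), tournament(pop)
            child = crossover(s1, s2)
            if random.random() < p:   # good solution found by chance
                fit = 0.0             # ideal fitness, independent of size
            elif child >= s1:         # same size or larger: reward
                fit = max(f1 - DELTA, 0.0)
            else:                     # smaller: penalise
                fit = f1 + DELTA
            new_pop.append((child, fit))
        pop = new_pop
    return sum(size for size, _ in pop) / POP_SIZE  # mean tree size

print(run(0.2), run(0.005))  # easy versus hard instance
\end{verbatim}

Representing trees by size alone suffices here because fitness in the model never inspects tree contents, only the size relation between offspring and parent.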

Figure 6.10: The model of difficulty and growth experiment results, with values of the probability term P of 0.2, 0.1, 0.05, 0.01 and 0.005.
[Figure: two plots, average fitness and edit distance, one curve per value of P.]

The value of P indicates the hardness of the problem. A small value means that good solutions are found only rarely, and by few individuals (a difficult problem). A higher probability corresponds to the situation where many early individuals can easily represent good solutions, and where in each generation more new individuals with the same good fitness are found (an easy problem). Rewarding offspring that are the same size or larger than their parents reflects the general consensus in the community that growth accompanies improved performance in the canonical algorithm. Note that it is the rate of growth which is investigated here, and that the assignment of ideal fitness is independent of size.
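
As a concrete illustration (assuming, as in the sketch above, that P is applied independently to each of the 100 new individuals in a generation), the expected number of offspring assigned ideal fitness per generation is $100P$: about 20 when $P=0.2$, but roughly one every two generations when $P=0.005$.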

Across the range of P values, Figure 6.10 shows a marked difference between easier and harder instances. Easier instances (P values of 0.2 and 0.1) induce lower entropy and more diversity, both of which contribute to a slower rate of code growth. This effect is similar to the results on the symbolic regression instances presented earlier. More difficult instances induce higher entropy and lower diversity, which jointly contribute to faster code growth. These results support the earlier conjecture that reducing instance difficulty in this way lowers selection pressure (reflected in entropy) and increases diversity (a consequence of the lower selection pressure); these factors cause more dissimilar programs to be recombined, which produces less code growth.
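
For concreteness, one plausible form of the entropy measure over a population's fitness values is sketched below. The thesis defines its own entropy and edit-distance measures; this Shannon-entropy formulation is an assumption, written against the (size, fitness) pairs produced by the model sketch above.

\begin{verbatim}
from collections import Counter
from math import log2

def fitness_entropy(pop):
    # Shannon entropy of the fitness distribution over (size, fitness)
    # pairs. Low entropy: many individuals share a fitness value (e.g.
    # the ideal 0.0 on easy instances). High entropy: fitness values
    # are spread across the population.
    counts = Counter(round(fit, 6) for _size, fit in pop)
    n = len(pop)
    return -sum((c / n) * log2(c / n) for c in counts.values())
\end{verbatim}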

