The entropy traces in Figures 6.5 and 6.6 show that the harder instances contain longer periods of high (Binomial-3) or increasing (polynomials) entropy. Since entropy measures how many distinct fitness values are present in the population and how they are distributed, lower entropy leaves selection with fewer fitness values to discriminate between, making it more random. In the extreme of low entropy, where the population contains identical fitness values, selection becomes purely random. Thus the lower or decreasing entropy in the easier instances induces a lower selection pressure.
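The entropy measure described above can be sketched as the Shannon entropy of the population's fitness distribution. The function below is a minimal illustration, not the exact formulation used in the experiments; the name `fitness_entropy` and the example populations are hypothetical.

```python
import math
from collections import Counter

def fitness_entropy(fitnesses):
    """Shannon entropy (bits) of the distribution of fitness values.

    A population in which every individual has the same fitness value
    has entropy 0; selection then has nothing to discriminate on and
    becomes purely random, as described in the text.
    """
    n = len(fitnesses)
    counts = Counter(fitnesses)
    # p * log2(1/p) summed over each distinct fitness value
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# Hypothetical populations:
uniform = [1.0] * 10          # identical fitnesses -> entropy 0
varied = [1, 2, 3, 4, 5]      # five equally frequent values -> log2(5) bits
```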
Figures 6.7 and 6.8 show the evolution of the Spearman correlation coefficient between the average size of a population and its best fitness (raw fitness, where lower is better), its entropy and its diversity, respectively. The correlation is calculated at every 5th generation up to generation 50, and at every 10th generation thereafter. In general, size is negatively correlated with fitness (large size with low, i.e. good, fitness), negatively correlated with edit-distance diversity (large size with low edit distance) and positively correlated with entropy (large size with high entropy). There is a clear trend of larger individuals in populations with higher entropy and lower edit distance.
The 7-degree polynomial is the exception, with an erratic correlation between edit distance and size, and appears to combine aspects of the 3-degree and 11-degree polynomials. For the 7-degree polynomial there are very few good individuals initially, and these are subsequently over-selected (as with the 11-degree polynomial). After the initial period of code growth, optimal solutions are easily represented by many different programs, leading to reduced entropy and code growth (as with the 3-degree polynomial), but to lower diversity following the initial over-selection.