ABSTRACT: The Learnable Evolution Model (LEM) involves alternating periods of optimization and learning, performs extremely well on
a range of problems, and specialises in achieving good results in relatively few function evaluations. LEM implementations
tend to use sophisticated learning strategies. Here we continue an exploration of alternative and simpler learning strategies,
and try Entropy-based Discretization (ED), whereby, for each parameter in the search space, we infer from recently evaluated
samples what seems to be a ‘good’ interval. We find that LEM(ED) provides significant advantages in both solution speed and
quality over the unadorned evolutionary algorithm, and is usually superior to CMA-ES when the number of evaluations is limited.
It is interesting to see such improvement gained from an easily implemented approach. LEM(ED) can be tentatively recommended
for trial on problems where good results are needed in relatively few fitness evaluations, and it remains open to several routes
of extension and further sophistication. Finally, results reported here are not based on a modern function optimization suite,
but ongoing work confirms that our findings remain valid for non-separable functions.
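The abstract describes the ED step only at a high level; as a rough illustration of the idea, the sketch below labels evaluated samples 'good' or 'bad' (e.g. by above-median fitness) and, for a single parameter, places one entropy-minimising cut to infer a 'good' interval. The function names, the binary labelling, and the single-cut simplification are our own assumptions, not the authors' implementation.

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a list of 0/1 'good'/'bad' labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def good_interval(values, labels):
    """For one parameter, place a single cut minimising the weighted
    entropy of labels on each side, then return the (min, max) range
    of the side richer in 'good' samples.  A simplified stand-in for
    the paper's entropy-based discretization."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_cut, best_score = 1, float("inf")
    for i in range(1, n):
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        score = (len(left) * entropy(left) + len(right) * entropy(right)) / n
        if score < best_score:
            best_score, best_cut = score, i
    left, right = pairs[:best_cut], pairs[best_cut:]
    frac = lambda side: sum(lab for _, lab in side) / len(side)
    side = left if frac(left) >= frac(right) else right
    vals = [v for v, _ in side]
    return min(vals), max(vals)

# Toy use: 'good' samples (above-median fitness) cluster at high
# parameter values, so the inferred interval should bracket them.
values = [0.1, 0.2, 0.3, 0.6, 0.7, 0.8]
labels = [0, 0, 0, 1, 1, 1]   # 1 = 'good'
print(good_interval(values, labels))  # (0.6, 0.8)
```

In a full LEM(ED)-style loop, such an interval would be computed per parameter from the recent population and the next generation sampled from within those intervals, alternating with ordinary evolutionary search.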