Paintings, Polygons and Plant Propagation

Misha Paauw and Daan van den Berg

Institute for Informatics, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands
Abstract. It is possible to approximate artistic images from a limited
number of stacked semi-transparent colored polygons. To match the tar-
get image as closely as possible, the locations of the vertices, the drawing
order of the polygons and the RGBA color values must be optimized for
the entire set at once. Because of the vast combinatorial space, the rel-
atively simple constraints and the well-defined objective function, these
optimization problems appear to be well suited for nature-inspired opti-
mization algorithms.
In this pioneering study, we start off with sets of randomized poly-
gons and try to find optimal arrangements for several well-known paint-
ings using three iterative optimization algorithms: stochastic hillclimb-
ing, simulated annealing and the plant propagation algorithm. We discuss
the performance of the algorithms, relate the found objective values to
the polygonal invariants and supply a challenge to the community.
Keywords: Paintings · Polygons · Plant propagation algorithm · Simulated annealing · Stochastic hillclimbing
1 Introduction
Since 2008, Roger Johansson has been using “Genetic Programming” to repro-
duce famous paintings by optimally arranging a limited set of semi-transparent
partially overlapping colored polygons [1]. The polygon constellation is rendered
to a bitmap image which is then compared to a target bitmap image, usually a
photograph of a famous painting. By iteratively making small mutations in the
polygon constellation, the target bitmap is being matched ever closer, resulting
in an ‘approximate famous painting’ of overlapping polygons.
Johansson’s program works as follows: a run gets initialized by creating a
black canvas with a single random 3-vertex polygon. At every iteration, one or
more randomly chosen mutations are applied: adding a new 3-vertex polygon to
the constellation, deleting a polygon from the constellation, changing a polygon’s
RGBA-color, changing a polygon’s place in the drawing index, adding a vertex
to an existing polygon, removing a vertex from a polygon and moving a vertex to
a new location. After one such mutation, the bitmap gets re-rendered from the
Springer Nature Switzerland AG 2019
A. Ekárt et al. (Eds.): EvoMUSART 2019, LNCS 11453, pp. 84–97, 2019.
constellation, compared to the target bitmap by calculating the total squared
error over each pixel's RGB color channels. If the error decreases, the mutation is retained; otherwise, the change is reverted. We suspect the program is biased towards increasing numbers of polygons and vertices, but Johansson limits the total number of vertices per run to 1500, the total number of polygons to 250, and the number of vertices per polygon to between 3 and 10. Finally, although a polygon's RGBA-color can take any value in the program, its alpha channel (opaqueness) is restricted to between 30 and 60.¹
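Stripped of the rendering details, Johansson's accept-or-revert loop can be sketched as follows (a minimal sketch; the function names and signatures are ours, not Johansson's, and `render`, `error` and `mutate` stand in for the real routines):

```python
import random

def hillclimb_step(constellation, target, render, error, mutate):
    """One accept-if-improved step: apply one random mutation, re-render,
    and keep the result only if the error decreased."""
    candidate = mutate(constellation)
    if error(render(candidate), target) < error(render(constellation), target):
        return candidate   # improvement: retain the mutation
    return constellation   # no improvement: revert
```

Any of the seven mutation types can be plugged in as `mutate`, as long as it returns a new candidate rather than modifying the current one in place.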
Although the related FAQ honestly enough debates whether his algorithm is correctly dubbed 'genetic programming' or might actually be better considered a stochastic hillclimber (we believe it is), the optimization algorist cannot be insensitive to the unique properties of the problem at hand. The lack of hard
constraints, the vastness of the state space and the interdependency of param-
eters make it a very interesting case for testing (new) optimization algorithms.
But exploring ground in this direction might also reveal some interesting properties about the artworks themselves. Are some artworks more easily approximated
than others? Which algorithms are more suitable for the problem? Are there
any invariants to be found across different artworks?
Today, we present a first set of results. Deviating slightly from Johansson’s
original program, we freeze the numbers of vertices and polygons for each run,
but allow the other variables to mutate, producing some first structural insights.
We consistently use three well-known algorithms with default parameter settings
in order to open up the lanes of scientific discussion, and hopefully create an entry
point for other research teams to follow suit.
2 Related Work
Nature-inspired algorithms come in a baffling variety, ranging in their metaphoric
inheritance from monkeys, lions and elephants to the inner workings of DNA [3]
(for a wide overview, see [4]). There has been some abrasive debate recently
about the novelty and applicability of many of these algorithms [5], which is a
sometimes painful but necessary process, as the community appears to be in transition from being explorative artists to rigorous scientists. Some nature-inspired
algorithms however, such as the Cavity Method [6,7] or simulated annealing [8],
have a firm foothold in classical physics that stretches far beyond the metaphor
alone, challenging the very relationship between physics and informatics. Others
are just practically useful and although many of the industrial designs generated
by nature-inspired or evolutionary algorithms will never transcend their digital
existence, some heroes in the field make the effort of actually building them
out [9–11]. We need more of that.
The relatively recently minted plant propagation algorithm has also demon-
strated its practical worth, being deployed for the Traveling Salesman Prob-
lem [12,13], the University Course Timetabling Problem [14] and parametric
¹ Details were extracted from Johansson's source code [2] wherever necessary.
optimization in a chemical plant [12]. It also shows good performance on bench-
mark functions for continuous optimization [12].
The algorithm revolves around the idea that a given strawberry plant off-
shoots many runners in close proximity if it is in a good spot, and few runners
far away if it is in a bad spot. Applying these principles to a population of can-
didate solutions (“individuals”) provides a good balance between exploitation
and exploration of the state space. The algorithm is relatively easily imple-
mented, and does not require the procedures to repair a candidate solution from
crossovers that accidentally violated its hard constraints. A possible drawback
however, is that desirable features among individuals are less likely to be com-
bined in a single individual, but these details still await experimental and theo-
retical verification.
Recent history has seen various initiatives on the intersection between nature-
inspired algorithms and art, ranging from evolutionary image transition between
bitmaps [15] to artistic emergent patterns based on the feeding behavior of sand-
bubbler crabs [16] and non-photorealistic rendering of images based on digital
ant colonies [17]. One of the most remarkable applications is the interactive
online evolutionary platform ‘DarwinTunes’ [18], in which evaluation by public
choice provided the selection pressure on a population of musical phrases that
‘mate’ and mutate, resulting in surprisingly catchy melodies.
3 Paintings from Polygons
For this study, we used seven 240 ×180-pixel target bitmaps in portrait or
landscape orientation of seven famous paintings (Fig.1): Mona Lisa (1503) by
Leonardo da Vinci, The Starry Night (1889) by Vincent van Gogh, The Kiss
(1908) by Gustav Klimt, Composition with Red, Yellow and Blue (1930) by Piet
Mondriaan, The Persistence of Memory (1931) by Salvador Dalí, Convergence
(1952) by Jackson Pollock, and the only known portrait of Leipzig-based composer Johann Sebastian Bach (1746) by Elias Gottlob Haussmann. Together, they
span a wide range of ages, countries and artistic styles which makes them suit-
able for a pioneering study. Target bitmaps come from the public domain, are
slightly cropped or rescaled if necessary, and are comprised of 8-bit RGB-pixels.
Every polygon in the pre-rendering constellation has four byte sized RGBA-
values: a red, green, blue and an alpha channel for opaqueness ranging from 0
to 255. The total number of vertices v ∈ {20, 100, 300, 500, 700, 1000} is fixed for each run, as is the number of polygons p = v/4. All polygons have at least 3 vertices and at most v/4 + 3 vertices, as the exact distribution of vertices over the polygons can vary during a run. Vertices in a polygon have coordinate values in
the range from 0 to the maximum of the respective dimension, which is either
180 or 240. Finally, every polygon has a drawing index: in rendering the bitmap
from a constellation, the polygons are drawn one by one, starting from drawing
index 0, so a higher indexed polygon can overlap a lower indexed polygon, but
not the other way around. The polygon constellation is rendered to an RGB-
bitmap by the Python Image Library [19]. This library, and all the programmatic
Fig. 1. The paintings used in this study, from a diverse range of countries, eras and
artistic styles. From top left, clockwise: Mona Lisa (1503, Leonardo da Vinci), Com-
position with Red, Yellow, and Blue (1930, Piet Mondriaan), The Kiss (1908, Gustav
Klimt), Portrait of J.S. Bach (1746, Elias Gottlob Haussmann), The Persistence of
Memory (1931, Salvador Dalí), Convergence (1952, Jackson Pollock), and The Starry
Night (1889, Vincent van Gogh).
resources we use, including the algorithms, are programmed in Python 3.6.5 and
are publicly available [20].
After rendering, the proximity of the rendered bitmap to the target bitmap, which is the mean squared error (MSE) per RGB-channel, is calculated:

MSE = ( Σ_i (Rendered_i − Target_i)² ) / (180 · 240)    (1)

in which Rendered_i is a Red, Green or Blue channel in a pixel of the rendered bitmap, Target_i is the corresponding channel in the target bitmap's pixel, and i runs over all 3 · 180 · 240 channel values. It follows that the best possible MSE for a rendered bitmap is 0, and is
only reached when each pixel of the rendered bitmap is identical to the target
bitmap. The worst possible fitness is 2552·3 = 195075, which corresponds to the
situation of a target bitmap containing only ‘extreme pixels’ with each of the
three RGB-color registers at either 0 or 255, and the rendered bitmap having
the exact opposite for each corresponding color register.
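Eq. 1 is straightforward in code, here on flat sequences of channel values (a sketch with our own naming; the actual implementation [20] renders with the Python Image Library first):

```python
def mse(rendered, target, width=240, height=180):
    """Eq. 1: sum of squared differences over every pixel's R, G and B
    channels, divided by the number of pixels (not by the number of channel
    values), so the worst possible value is 3 * 255**2 = 195075.
    Both bitmaps are flat sequences of 8-bit channel values."""
    total = sum((r - t) ** 2 for r, t in zip(rendered, target))
    return total / (width * height)
```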
Although it is a rare luxury to know the exact maximal and minimal objective
values (in this case the MSE) for any optimization problem, they’re hardly usable
due to the sheer vastness of the state space |S|of possible polygon constellations.
It takes some special tricks to enable computers to perform even basic operations
on numbers of this magnitude, but nonetheless, some useful bounds can be given
|S| = α · (240 · 180)^v · (256⁴)^(v/4) · (v/4)!    (2)

in which (240 · 180)^v is the combination space for vertex placement, (256⁴)^(v/4) represents all possible polygon colorings and (v/4)! is the number of possible drawing orders. The variable α reflects the number of ways the vertices can be distributed over the polygons. In our case, 3v/4 vertices are allocated beforehand to assert that every polygon has at least three vertices, after which the remaining v/4 vertices are randomly distributed over the v/4 polygons. This means that for these specifics, α = P(v/4), in which P is the integer partition function, and |S| can be calculated exactly from the number of vertices v.
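For small v, Eq. 2 can be evaluated exactly with Python's arbitrary-precision integers. The partition function below is a textbook dynamic program; the function names are ours:

```python
from math import factorial

def partitions(n: int) -> int:
    """Integer partition function P(n), by dynamic programming."""
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p[n]

def state_space_size(v: int, width: int = 240, height: int = 180) -> int:
    """|S| per Eq. 2: alpha * (width*height)^v * (256^4)^(v/4) * (v/4)!,
    with alpha = P(v/4), the ways to spread the v/4 spare vertices."""
    p = v // 4  # number of polygons
    return partitions(p) * (width * height) ** v * (256 ** 4) ** p * factorial(p)
```

Even the smallest instance used here, v = 20, already has a state space of 144 decimal digits.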
The state space size of the target bitmaps is also known. It is, neglecting
symmetry and rotation, given by
256^(3 · 180 · 240) ≈ 7.93 · 10^312,107    (3)

which reflects the three RGB-values for each pixel in the bitmap. From Eq. 2, we can derive that a constellation of 39,328 vertices could render to a total of 1.81 · 10^312,109 different bitmap images. These numbers are incomprehensibly large, but nonetheless establish an exact numerical lower bound on the number of vertices we require to render every possible bitmap image of 180 × 240 pixels.
A feasible upper bound can be practically inferred: if we assign a single square
polygon with four vertices to each pixel, we can create every possible bitmap of
aforementioned dimensions with 180 ·240 ·4 = 172,800 vertices.
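The two magnitudes above can be checked in floating point: `lgamma` gives ln(n!), and the Hardy–Ramanujan asymptotic stands in for the exact partition count P(v/4), which is an approximation on our part (the names are ours):

```python
from math import lgamma, log, log10, pi, sqrt

LN10 = log(10)

def log10_bitmaps(w=240, h=180):
    """log10 of Eq. 3: 256^(3*w*h) possible target bitmaps."""
    return 3 * w * h * log10(256)

def log10_state_space(v, w=240, h=180):
    """log10 of Eq. 2, with Hardy-Ramanujan approximating log10(P(v/4))."""
    p = v // 4
    log10_alpha = (pi * sqrt(2 * p / 3) - log(4 * p * sqrt(3))) / LN10
    return (log10_alpha + v * log10(w * h)
            + p * 4 * log10(256) + lgamma(p + 1) / LN10)

assert int(log10_bitmaps()) == 312107          # ~7.93e312,107 bitmaps
assert int(log10_state_space(39328)) == 312109  # ~1.81e312,109 constellations
```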
Upholding these bounds and assumptions, we would need somewhere between 39,328 and 172,800 vertices to enable the rendering of every possible target bitmap and thereby guarantee the theoretical reachability of a perfect MSE of 0. These numbers are nowhere near practically calculable though, and therefore it cannot be said whether the perfect MSE is reachable for vertex numbers lower than 172,800, or even what the best reachable value is. But even if a perfect MSE were reachable, it is unclear what a complete algorithm for this task would look like, let alone whether it could do the job within any reasonable amount of time. What we can do, however, is compare the performance of three good heuristic algorithms, which we will describe in the next section.
What we can do however, is compare the performance of three good heuristic
algorithms, which we will describe in the next section.
4 Three Algorithms for Optimal Polygon Arrangement
We use a stochastic hillclimber, simulated annealing, and the newly developed
plant propagation algorithm to optimize parameter values for the polygon con-
stellations of the seven paintings (Fig.2). The mutation operators are identical
for all algorithms, as is the initialization procedure:
4.1 Random Initialization
Both the stochastic hillclimber and simulated annealing start off with a single randomly initialized polygon constellation, whereas the plant propagation
algorithm has a population of M= 30 randomly initialized constellations (or
individuals). Initialization first creates a black canvas with the dimensions of
the target bitmap. Then, v vertices and p = v/4 polygons are created, each with the necessary minimum of three vertices, and the remaining v/4 vertices randomly distributed over the polygons. All vertices in all polygons are randomly placed on the canvas with 0 ≤ x < X_max and 0 ≤ y < Y_max. Finally, all polygons are
assigned the four values for RGBA by randomly choosing values between 0 and 255. The only variables which are not actively initialized are the drawing indices i, but the shrewd reader might have already figured out that this is unnecessary, as all other properties are randomly assigned throughout the initialization procedure.
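The initialization procedure can be sketched as follows (the dict representation and names are our own; the paper's actual code is available at [20]):

```python
import random

def init_constellation(v, width=240, height=180):
    """Randomly initialized constellation: v/4 polygons sharing v vertices.
    Each polygon starts with 3 vertices; the v/4 spare ones are spread
    randomly. The list position doubles as the drawing index."""
    p = v // 4
    sizes = [3] * p
    for _ in range(p):                      # distribute the v/4 spare vertices
        sizes[random.randrange(p)] += 1
    constellation = []
    for size in sizes:
        constellation.append({
            "vertices": [(random.uniform(0, width), random.uniform(0, height))
                         for _ in range(size)],
            "rgba": tuple(random.randint(0, 255) for _ in range(4)),
        })
    return constellation
```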
4.2 Mutations
All three algorithms hinge critically on the concept of a mutation. To maximize
both generality and diversity, we define four types of mutation, all of which are
available in each of the three algorithms:
1. Move Vertex: randomly selects a polygon, and from that polygon a random single vertex, and assigns it a new randomly chosen location with 0 ≤ x < X_max and 0 ≤ y < Y_max.
2. Transfer Vertex: randomly selects two different polygons p1 and p2, in which p1 is not a triangle. A random vertex is deleted from p1 and inserted in p2 on a random place between two other vertices of p2. Because of this property, the shape of p2 does not change, though it might change in later iterations because of operation 1. Note that this operation enables diversification of the polygon types but keeps the numbers of polygons and vertices constant, facilitating quantitative comparison of the end results.
3. Change Color: randomly chooses either the red, green, blue or alpha channel of a randomly chosen polygon and assigns it a new value 0 ≤ q ≤ 255.
4. Change Drawing Index: randomly selects a polygon and assigns it a new index in the drawing order. If the new index is lower, all subsequent polygons get their index increased by one ("moved to the right"). If the new index is higher, all previous polygons get their index decreased by one ("moved to the left").
It follows that for any constellation at any point, there are v possible move vertex operators, v/4 possible transfer vertex operators, 3v/4 possible change color operators, and v/4 possible change drawing index operators. As such, the total number of possible mutation operators for a polygon constellation of v vertices is 9v/4.
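Assuming a constellation represented as a list of polygon dicts with 'vertices' (coordinate pairs) and 'rgba' keys, ordered by drawing index (our representation, not the paper's), the four mutation types might be sketched as:

```python
import random

def move_vertex(con, width=240, height=180):
    """Mutation 1: relocate one random vertex of one random polygon."""
    poly = random.choice(con)
    i = random.randrange(len(poly["vertices"]))
    poly["vertices"][i] = (random.uniform(0, width), random.uniform(0, height))

def transfer_vertex(con):
    """Mutation 2: move a vertex from a non-triangle p1 into p2's outline,
    keeping the total numbers of polygons and vertices constant."""
    donors = [p for p in con if len(p["vertices"]) > 3]
    if not donors:
        return
    p1 = random.choice(donors)
    p2 = random.choice([p for p in con if p is not p1])
    vertex = p1["vertices"].pop(random.randrange(len(p1["vertices"])))
    p2["vertices"].insert(random.randrange(1, len(p2["vertices"])), vertex)

def change_color(con):
    """Mutation 3: give one random RGBA channel a new value in 0..255."""
    poly = random.choice(con)
    rgba = list(poly["rgba"])
    rgba[random.randrange(4)] = random.randint(0, 255)
    poly["rgba"] = tuple(rgba)

def change_drawing_index(con):
    """Mutation 4: move one polygon to a new position in the drawing order."""
    poly = con.pop(random.randrange(len(con)))
    con.insert(random.randrange(len(con) + 1), poly)
```

Note how `transfer_vertex` preserves the invariants stated above: no polygon ever drops below three vertices, and the totals stay fixed.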
4.3 Stochastic Hillclimber and Simulated Annealing
A run of the stochastic hillclimber works as follows: first, it initializes a random constellation of v vertices and v/4 polygons as described in Sect. 4.1. Then, it renders the polygon constellation to a bitmap and calculates its MSE with respect to the target bitmap. Each iteration, it selects one of the four mutation types with probability 1/4, and randomly selects appropriate operands for the mutation
type with equal probability. Then it re-renders the bitmap; if the MSE increases
(which is undesirable), the change is reverted. If not, it proceeds to the next
iteration unless the prespecified maximum number of iterations is reached, in
which case the algorithm halts.
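The loop just described, abstracted over the representation (a hedged sketch; in the real program the state is a polygon constellation and `mse` involves rendering):

```python
import copy
import random

def hillclimb(state, mse, mutations, iterations, rng=random):
    """Stochastic hillclimber: each iteration picks one mutation type
    uniformly at random (probability 1/4 when there are four types),
    applies it to a copy, and keeps the copy only if the MSE does not
    increase. `mutations` is a list of in-place mutation functions."""
    best_score = mse(state)
    for _ in range(iterations):
        candidate = copy.deepcopy(state)
        rng.choice(mutations)(candidate)
        score = mse(candidate)
        if score <= best_score:        # not worse: keep the mutation
            state, best_score = candidate, score
    return state, best_score
```

Plugging in the four constellation mutations and the Eq. 1 error would give the hillclimber used here.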
Simulated annealing [8] works in the exact same way as the stochastic hill-
climber, accepting random improvements, but has one important added feature:
whenever a random mutation increases the error on the rendered bitmap, there
is still a chance the mutation gets accepted. This chance is equal to

e^(−ΔMSE / T)    (4)

in which ΔMSE is the increase in error and T is the 'temperature', a variable depending on the iteration number i. There are many ways of lowering the temperature, but we use the cooling scheme by Geman and Geman [21]:

T = c / ln(1 + i)    (5)

This scheme is the only one proven to be optimal [22], meaning that it is guaranteed to find the global minimum of the MSE as i goes to infinity, given that the constant c is "the highest possible energy barrier to be traversed". To understand
what this means, and correctly implement it, we need to examine the roots of
simulated annealing.
In condensed matter physics, e^(−E/kT) is known as "Boltzmann's factor". It
reflects the chance that a system is in a higher state of energy E, relative to
the temperature. In metallurgic annealing, this translates to the chance of dis-
the temperature. In metallurgic annealing, this translates to the chance of dis-
located atoms moving to vacant sites in the lattice. If such a move occurs, the
system crosses a ‘barrier’ of higher energy which corresponds to the distance the
atom traverses. But whenever the atom reaches a vacant site, the system drops
to a lower energetic state, from which it is unlikelier to escape. More importantly:
it removes a weakness from the lattice and therefore the process of annealing sig-
nificantly improves the quality of the metal lattice. The translation from energy
state to the MSE in combinatorial optimization shows simulated annealing truly
fills the gap between physics and informatics: for the algorithm to escape from
a local minimum, the MSE should be allowed to increase, to eventually find a
better minimum.
So what's left is to quantify "the highest possible energy barrier to be traversed" for the algorithm to guarantee its optimality. As we have calculated the upper bound for the MSE in Sect. 3, we can set c = 195075 for our 180 × 240 paintings, thereby guaranteeing the optimal solution (eventually). The savvy reader might feel some suspicion about the claim of optimality, especially considering the open question of whether P = NP, but there is no real contradiction here. The snag is in i going to infinity; we simply do not have that much time, and if we did, we might just as well run a brute-force algorithm, or even just do stochastic sampling to find the global minimum. In infinite time, everything is easy.
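In code, the acceptance rule and the Geman & Geman schedule with c = 195075 look roughly like this (a fragment, not the full run loop; only the c value comes from the text, the names are ours):

```python
import math
import random

C = 195075  # upper bound on the MSE, used as the "highest energy barrier"

def temperature(i: int, c: float = C) -> float:
    """Geman & Geman cooling scheme: T = c / ln(1 + i)."""
    return c / math.log(1 + i)

def accept(delta_mse: float, T: float, rng=random) -> bool:
    """Metropolis acceptance: improvements always pass; a worsening
    mutation passes with probability exp(-delta_mse / T)."""
    if delta_mse <= 0:
        return True
    return rng.random() < math.exp(-delta_mse / T)
```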
Fig. 2. Typical runs for the stochastic hillclimber (top), the plant propagation algo-
rithm (middle) and simulated annealing (bottom). Polyptychs of rendered bitmaps
illustrate the visual improvement from left to right, while the in-between graphs show
the corresponding decrease in MSE throughout the iterative improvement process of
the respective algorithm.
4.4 Plant Propagation
The plant propagation algorithm (PPA) is a relatively new member of the optimization family and the only population-based algorithm we use. First, fitness values are assigned, normalized to (0,1) within the current population; this 'relative normalized fitness assignment' might be beneficial for these kinds of optimization problems, in which the absolute values of the MSE are (usually) practically unknown. Each individual will then produce between 1 and nMax = 5 offspring, proportional to its fitness, which will be mutated inversely proportional to the normalized (relative) fitness. So fitter individuals produce more offspring with small mutations, and unfitter individuals produce fewer offspring with large mutations. The notions of 'large' and 'small' mutations, however, need some careful consideration.
In the seminal work on PPA, mutations are done on real-valued dimensions
of continuous benchmark functions [12], which has the obvious advantage that
the size of the mutation can be scaled directly proportional to the (real-valued)
input dimension. An adaptation to a non-numerical domain has also been made
when PPA was deployed to solve the Traveling Salesman Problem [13]. In this
case, the size of the mutation is reflected in the number of successive 2-opts
which are applied to the TSP-tour. It does not, however, take into account that multiple k-opts might overlap, or how the effect of the mutation might differ from edge to edge, as some edges are longer than others.
So where does this leave us? In this experiment, a largely mutated offspring could either mean 'having many mutations' or 'having large mutations'. Consulting the authors, we tried to stick as closely as possible to the idea of (inverse) proportionality and to find middle ground in the developments so far. In the benchmark paper, all
dimensions are mutated by about 50% in the worst-case fitness. In the TSP-
paper, the largest mutations vary between 7% and 27% of the edges, depending
on the instance size. We chose to keep the range of mutations maximal, while
assigning the number of mutations inversely proportional to the fitness, resulting
in the following PPA-implementation:
1. Initialization: create a population of M = 30 individuals (randomly initialized polygon constellations).
2. Assign Fitness Values: first, calculate the MSE_i for all individuals i in the population, and normalize the fitness for each individual:

f_i = (MSE_max − MSE_i) / (MSE_max − MSE_min)    (6)

N_i = (1/2) · (tanh(4 · f_i − 2) + 1)    (7)

in which MSE_max and MSE_min are the maximum and minimum MSE in the population, f_i is an individual's relative fitness and N_i is an individual's normalized relative fitness.
3. Sort Population on fitness: keep the fittest M individuals and discard the rest.
4. Create Offspring: each individual i in the population creates n_r new individuals as

n_r = n_max · N_i · r_1    (8)

with m_r mutations as

m_r = (9v/4) · (1 − N_i) · r_2    (9)

in which 9v/4 is the number of possible mutation operations for a constellation of v vertices and r_1 and r_2 are random numbers from (0,1).
5. Return to step 2 unless the predetermined maximum number of function
evaluations is exceeded, in which case the algorithm terminates. In the exper-
iments, this number is identical to the number of iterations for the other two
algorithms, that both perform exactly one evaluation each iteration.
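Steps 2 and 4 can be sketched as follows (we use the standard PPA tanh normalization; taking the ceiling so a count never rounds to nothing, and the names, are our assumptions):

```python
import math
import random

def normalized_fitness(mses):
    """Steps toward Eqs. 6-7: map each MSE to a relative fitness f in [0,1]
    (lower MSE -> higher f), then squash with tanh to get N."""
    lo, hi = min(mses), max(mses)
    f = [(hi - m) / (hi - lo) if hi > lo else 0.5 for m in mses]
    return [0.5 * (math.tanh(4 * fi - 2) + 1) for fi in f]

def offspring_counts(N, v, n_max=5, rng=random):
    """Eqs. 8-9: fitter individuals get more offspring (n_r) carrying
    fewer mutations (m_r); 9v/4 is the number of mutation operators."""
    r1, r2 = rng.random(), rng.random()
    n_r = math.ceil(n_max * N * r1)
    m_r = math.ceil((9 * v / 4) * (1 - N) * r2)
    return n_r, m_r
```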
5 Experiments and Results
We ran the stochastic hillclimber and simulated annealing for 10^6 iterations and completed five runs for each painting and each number of v ∈ {20, 100, 300, 500, 700, 1000}. Simulated annealing was parametrized as described in Sect. 4.3 and we continually retained the best candidate solution so far, as the MSE occasionally increases during a run. The plant propagation algorithm was parametrized as stated in Sect. 4.4 and also completed five runs, also until 10^6 function evaluations had occurred. After completing a set of five runs, we averaged the last (and therefore best) values (Fig. 3).
For almost all paintings with all numbers of v, the stochastic hillclimber is by
far the superior algorithm. It outperformed the plant propagation algorithm on
48 out of 49 vertex-painting combinations with an average MSE improvement of
766 and a maximum improvement of 3177, on the Jackson Pollock with 1000 ver-
tices. Only for the Mona Lisa with 20 vertices, PPA outperformed the hillclimber
by 37 MSE. Plant propagation was the second-best algorithm, better than simulated annealing on all painting-vertex combinations by an average improvement of 17666 MSE. The smallest improvement of 4532 MSE was achieved on Bach with 20 vertices and the largest improvement of 46228 MSE on the Mondriaan with 20 vertices. A somewhat surprising observation is that for both the stochastic
hillclimber and plant propagation, end results improved with increasing vertex
numbers, but only up to about v= 500, after which the quality of the rendered
bitmaps largely leveled out. This pattern also largely applies to the end results
of simulated annealing, with the exception of Bach, which actually got worse
with increasing numbers of v. This surprising and somewhat counterintuitive
phenomenon might be explained from the fact that the algorithm does not perform very well, and the fact that Haussmann's painting is largely black, which is the canvas' default color. When initialized with more vertices (and therefore more
polygons), a larger area of the black canvas is covered with colored polygons,
increasing the default MSE-distance from Bach to black and back.
Fig. 3. Best results of five runs for the three algorithms on all seven paintings for all
numbers of vertices. Remarkably enough, results often do not improve significantly with
numbers of vertices over 500. Note that results for the hillclimber and plant propagation
are depicted on a logarithmic scale.
6 Discussion, Future Work, and Potential Applications
In this work, we set out to compare three different algorithms to optimize a con-
stellation of semi-transparent, partially overlapping polygons to approximate a
set of famous paintings. It is rather surprising that in this exploration, the simplest algorithm performs best. As the hillclimber is extremely susceptible to getting trapped in local minima, this might indicate that the projection of the objective function from the state space to the MSE is relatively convex when neighbourized with mutation types such as ours. Another explanation might be the presence of many high-quality solutions despite the vastness of the state space.
Plant propagation is not very vulnerable to local minima, but it does not
perform very well considering its track record on previous problems. This might
be due to the mutation operators, or the algorithm’s parametrization, but also
to the vastness of the state space; even if the algorithm does avoid local minima, their attractive basins might be so far apart that the maximum distance our mutations are likely to traverse is still (far) too small.
The rather poor performance of simulated annealing can be easily explained from its high temperature. The logarithmic Geman & Geman scheme in itself cools down rather slowly, even over a run of a million iterations. But our choice of c-value might also be too high. The canvas is not comprised of 'extreme colors', and neither are the target bitmaps, so the maximum energy barrier (increase in MSE) might in practice be much lower than our upper bound of 195075. An easy way forward might be to introduce an 'artificial Boltzmann constant', effectively facilitating a faster temperature decrease. This method, however, does raise some theoretical objections, as the principle of 'cooling down slowly' is a central dogma in both simulated and metallurgic annealing.
An interesting observation can be made in Fig. 3. Some paintings appear to
be harder to approximate with a limited number of polygons than others, leading
to a ‘polygonicity ranking’ which is consistently shared between the hillclimber
and plant propagation. The essentially grid based work of Mondriaan seems
to be more suitable for polygonization than Van Gogh’s Starry Starry Night,
which consists of more rounded shapes, or Jackson Pollock’s work which has no
apparent structure at all. To quantitatively assert these observations, it would
be interesting to investigate whether a predictive data analytic could be found
to a priori estimate the approximability of a painting. This however, requires a
lot more theoretical foundation and experimental data.
Fig. 4. Best results for various paintings and various algorithms. From top left, clock-
wise: 1: Mona Lisa, SA, v= 1000. 2: Mondriaan, HC, v= 1000. 3: Klimt, PPA,
v= 1000. 4: Bach, HC v= 1000. 5: Dali, HC, v= 1000. 6: Jackson Pollock, SA,
v= 1000. 7: Starry Night, PPA, v= 1000. This Mondriaan is the best approximation
in the entire experiment; when looking closely, even details like irregularities in the
painting’s canvas and shades from the art gallery’s photograph can be made out.
7 Challenge
We hereby challenge all our colleagues in the field of combinatorial optimization
to come up with better results than ours (Fig. 4). Students are encouraged to
get involved too. Better results include, but are not limited to:
1. Better instance results: finding better MSE-values for values of v and p identical to ours, regardless of the method involved. If better MSE-values are found for smaller v and p than ours, they are considered stronger.
2. Better algorithmic results: finding algorithms that either significantly improve
MSE-results, or achieve similar results in fewer function evaluations.
3. Better parameter settings: finding parameter settings for our algorithms that either significantly improve MSE-results, or achieve similar results in fewer function evaluations. We expect this is quite possible.
4. Finding multiple minima: it is unclear whether any minimum value of the MSE can have multiple different polygon constellations, apart from symmetrical ones.
For resources, as well as improved results, refer to our online page [20]. We intend to keep a high-score list.
Acknowledgements. We would like to thank Abdellah Salhi (University of Essex)
and Eric Fraga (University College London) for their unrelenting willingness to discuss
and explain the plant propagation algorithm. A big thanks also goes to Arnoud Visser
(University of Amsterdam) for providing some much-needed computing power towards
the end of the project, and to Jelle van Assema for helping with the big numbers.
1. Roger Johansson blog: Genetic programming: Evolution of Mona Lisa. https://
2. Genetic programming: Mona Lisa source code and binaries. https://
3. Eiben, A.E., Smith, J.E.: Introduction to Evolutionary Computing, 1st edn.
Springer, Heidelberg (2003).
4. Fister Jr., I., Yang, X.S., Fister, I., Brest, J., Fister, D.: A brief review of nature-
inspired algorithms for optimization (2013). arXiv preprint: arXiv:1307.4186
5. Sörensen, K.: Metaheuristics—the metaphor exposed. Int. Trans. Oper. Res. 22,
1–16 (2015)
6. Mézard, M., Parisi, G., Zecchina, R.: Analytic and algorithmic solution of random
satisfiability problems. Science 297(5582), 812–815 (2002)
7. Mézard, M., Parisi, G.: The cavity method at zero temperature. J. Stat. Phys.
111(1–2), 1–34 (2003)
8. Kirkpatrick, S., Gelatt, C.D., Vecchi, M.: Optimization by simulated annealing.
Science 220(4598), 671–680 (1983)
9. Hornby, G., Globus, A., Linden, D., Lohn, J.: Automated antenna design with
evolutionary algorithms. In: Space, p. 7242 (2006)
10. Moshrefi-Torbati, M., Keane, A.J., Elliott, S.J., Brennan, M.J., Rogers, E.: Passive
vibration control of a satellite boom structure by geometric optimization using
genetic algorithm. J. Sound Vibr. 267(4), 879–892 (2003)
11. Jelisavcic, M., et al.: Real-world evolution of robot morphologies: a proof of concept. Artif. Life 23(2), 206–235 (2017)
12. Salhi, A., Fraga, E.: Nature-inspired optimisation approaches and the new plant
propagation algorithm. In: Proceeding of the International Conference on Numer-
ical Analysis and Optimization (ICeMATH 2011), Yogyakarta, Indonesia (2011)
13. Selamoğlu, B.İ., Salhi, A.: The plant propagation algorithm for discrete optimisation: the case of the travelling salesman problem. In: Yang, X.-S. (ed.) Nature-Inspired Computation in Engineering. SCI, vol. 637, pp. 43–61. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-30235-5_3
14. Cheraita, M., Haddadi, S., Salhi, A.: Hybridizing plant propagation and
local search for uncapacitated exam scheduling problems. Int. J. Serv. Oper.
Manag. (in press).
15. Neumann, A., Alexander, B., Neumann, F.: Evolutionary image transition using random walks. In: Correia, J., Ciesielski, V., Liapis, A. (eds.) EvoMUSART 2017. LNCS, vol. 10198, pp. 230–245. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-55750-2_16
16. Richter, H.: Visual art inspired by the collective feeding behavior of sand-bubbler crabs. In: Liapis, A., Romero Cardalda, J.J., Ekárt, A. (eds.) EvoMUSART 2018. LNCS, vol. 10783, pp. 1–17. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-77583-8_1
17. Semet, Y., O'Reilly, U.-M., Durand, F.: An interactive artificial ant approach to non-photorealistic rendering. In: Deb, K. (ed.) GECCO 2004, Part I. LNCS, vol. 3102, pp. 188–200. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24854-5_17
18. MacCallum, R.M., Mauch, M., Burt, A., Leroi, A.M.: Evolution of music by public
choice. Proc. Natl. Acad. Sci. 109(30), 12081–12086 (2012)
19. Python Imaging Library (PIL), version 5.1.0.
20. Paintings, polygons, and plant propagation. http://heuristieken.nl/wiki/index.php?title=Paintings_from_Polygons
21. Geman, S., Geman, D.: Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 6, 721–741 (1984)
22. Nourani, Y., Andresen, B.: A comparison of simulated annealing cooling strategies.
J. Phys. A Math. Gen. 31, 8373–8385 (1998)