
Paintings, Polygons and Plant Propagation

Misha Paauw and Daan van den Berg

Institute for Informatics, University of Amsterdam, Science Park 904,
1098 XH Amsterdam, The Netherlands
mishapaauw@gmail.com, d.vandenberg@uva.nl

Abstract. It is possible to approximate artistic images from a limited number of stacked semi-transparent colored polygons. To match the target image as closely as possible, the locations of the vertices, the drawing order of the polygons and the RGBA color values must be optimized for the entire set at once. Because of the vast combinatorial space, the relatively simple constraints and the well-defined objective function, these optimization problems appear to be well suited for nature-inspired optimization algorithms.

In this pioneering study, we start off with sets of randomized polygons and try to find optimal arrangements for several well-known paintings using three iterative optimization algorithms: stochastic hillclimbing, simulated annealing and the plant propagation algorithm. We discuss the performance of the algorithms, relate the found objective values to the polygonal invariants and supply a challenge to the community.

Keywords: Paintings · Polygons · Plant propagation algorithm · Simulated annealing · Stochastic hillclimbing

1 Introduction

Since 2008, Roger Johansson has been using "Genetic Programming" to reproduce famous paintings by optimally arranging a limited set of semi-transparent, partially overlapping colored polygons [1]. The polygon constellation is rendered to a bitmap image which is then compared to a target bitmap image, usually a photograph of a famous painting. By iteratively making small mutations in the polygon constellation, the target bitmap is matched ever more closely, resulting in an 'approximate famous painting' of overlapping polygons.

Johansson's program works as follows: a run gets initialized by creating a black canvas with a single random 3-vertex polygon. At every iteration, one or more randomly chosen mutations are applied: adding a new 3-vertex polygon to the constellation, deleting a polygon from the constellation, changing a polygon's RGBA color, changing a polygon's place in the drawing index, adding a vertex to an existing polygon, removing a vertex from a polygon, or moving a vertex to a new location. After one such mutation, the bitmap gets re-rendered from the

© Springer Nature Switzerland AG 2019
A. Ekárt et al. (Eds.): EvoMUSART 2019, LNCS 11453, pp. 84–97, 2019.
https://doi.org/10.1007/978-3-030-16667-0_6


constellation, compared to the target bitmap by calculating the total squared error over each pixel's RGB color channels. If the error improves, the mutation is retained; otherwise the change is reverted. We suspect the program is biased towards increasing numbers of polygons and vertices, but Johansson limits the total number of vertices per run to 1500, the total number of polygons to 250 and the number of vertices per polygon to between 3 and 10. Finally, although a polygon's RGBA color can take any value in the program, its alpha channel (opaqueness) is restricted to between 30 and 60.¹

Although the related FAQ honestly enough debates whether his algorithm is correctly dubbed 'genetic programming' or might actually be better considered a stochastic hillclimber (we believe it is), the optimization algorist cannot be insensitive to the unique properties of the problem at hand. The lack of hard constraints, the vastness of the state space and the interdependency of parameters make it a very interesting case for testing (new) optimization algorithms. But exploring ground in this direction might also reveal some interesting properties about the artworks themselves. Are some artworks more easily approximated than others? Which algorithms are more suitable for the problem? Are there any invariants to be found across different artworks?

Today, we present a first set of results. Deviating slightly from Johansson's original program, we freeze the numbers of vertices and polygons for each run, but allow the other variables to mutate, producing some first structural insights. We consistently use three well-known algorithms with default parameter settings in order to open up the lanes of scientific discussion, and hopefully create an entry point for other research teams to follow suit.

2 Related Work

Nature-inspired algorithms come in a baffling variety, ranging in their metaphoric inheritance from monkeys, lions and elephants to the inner workings of DNA [3] (for a wide overview, see [4]). There has been some abrasive debate recently about the novelty and applicability of many of these algorithms [5], which is a sometimes painful but necessary process, as the community appears to be in transition from being explorative artists to rigorous scientists. Some nature-inspired algorithms however, such as the Cavity Method [6,7] or simulated annealing [8], have a firm foothold in classical physics that stretches far beyond the metaphor alone, challenging the very relationship between physics and informatics. Others are just practically useful, and although many of the industrial designs generated by nature-inspired or evolutionary algorithms will never transcend their digital existence, some heroes in the field make the effort of actually building them out [9–11]. We need more of that.

The relatively recently minted plant propagation algorithm has also demonstrated its practical worth, being deployed for the Traveling Salesman Problem [12,13], the University Course Timetabling Problem [14] and parametric optimization in a chemical plant [12]. It also shows good performance on benchmark functions for continuous optimization [12].

¹ Details were extracted from Johansson's source code [2] wherever necessary.

The algorithm revolves around the idea that a given strawberry plant offshoots many runners in close proximity if it is in a good spot, and few runners far away if it is in a bad spot. Applying these principles to a population of candidate solutions ("individuals") provides a good balance between exploitation and exploration of the state space. The algorithm is relatively easily implemented, and does not require procedures to repair a candidate solution from crossovers that accidentally violated its hard constraints. A possible drawback, however, is that desirable features among individuals are less likely to be combined in a single individual, but these details still await experimental and theoretical verification.

Recent history has seen various initiatives on the intersection between nature-inspired algorithms and art, ranging from evolutionary image transition between bitmaps [15] to artistic emergent patterns based on the feeding behavior of sand-bubbler crabs [16] and non-photorealistic rendering of images based on digital ant colonies [17]. One of the most remarkable applications is the interactive online evolutionary platform 'DarwinTunes' [18], in which evaluation by public choice provided the selection pressure on a population of musical phrases that 'mate' and mutate, resulting in surprisingly catchy melodies.

3 Paintings from Polygons

For this study, we used seven 240×180-pixel target bitmaps in portrait or landscape orientation of seven famous paintings (Fig. 1): Mona Lisa (1503) by Leonardo da Vinci, The Starry Night (1889) by Vincent van Gogh, The Kiss (1908) by Gustav Klimt, Composition with Red, Yellow and Blue (1930) by Piet Mondriaan, The Persistence of Memory (1931) by Salvador Dali, Convergence (1952) by Jackson Pollock, and the only known portrait of Leipzig-based composer Johann Sebastian Bach (1746) by Elias Gottlieb Haussmann. Together, they span a wide range of ages, countries and artistic styles, which makes them suitable for a pioneering study. Target bitmaps come from the public domain, are slightly cropped or rescaled if necessary, and are comprised of 8-bit RGB pixels.

Every polygon in the pre-rendering constellation has four byte-sized RGBA values: a red, green, blue and an alpha channel for opaqueness, each ranging from 0 to 255. The total number of vertices v ∈ {20, 100, 300, 500, 700, 1000} is fixed for each run, as is the number of polygons p = v/4. All polygons have at least 3 vertices and at most v/4 + 3 vertices, as the exact distribution of vertices over the polygons can vary during a run. Vertices in a polygon have coordinate values in the range from 0 to the maximum of the respective dimension, which is either 180 or 240. Finally, every polygon has a drawing index: in rendering the bitmap from a constellation, the polygons are drawn one by one, starting from drawing index 0, so a higher-indexed polygon can overlap a lower-indexed polygon, but not the other way around. The polygon constellation is rendered to an RGB bitmap by the Python Image Library [19]. This library, and all the programmatic


Fig. 1. The paintings used in this study, from a diverse range of countries, eras and artistic styles. From top left, clockwise: Mona Lisa (1503, Leonardo da Vinci), Composition with Red, Yellow, and Blue (1930, Piet Mondriaan), The Kiss (1908, Gustav Klimt), Portrait of J.S. Bach (1746, Elias Gottlieb Haussmann), The Persistence of Memory (1931, Salvador Dali), Convergence (1952, Jackson Pollock), and The Starry Night (1889, Vincent van Gogh).

resources we use, including the algorithms, are programmed in Python 3.6.5 and are publicly available [20].

After rendering, the proximity of the rendered bitmap to the target bitmap, which is the mean squared error (MSE) per RGB channel, is calculated as

\[ \mathrm{MSE} = \frac{\sum_{i=1}^{180\cdot 240\cdot 3}\left(\mathrm{Rendered}_i - \mathrm{Target}_i\right)^2}{180\cdot 240} \tag{1} \]

in which Rendered_i is a red, green or blue channel value of a pixel in the rendered bitmap and Target_i is the corresponding channel in the target bitmap. It follows that the best possible MSE for a rendered bitmap is 0, which is only reached when each pixel of the rendered bitmap is identical to the target bitmap. The worst possible MSE is 255² · 3 = 195075, which corresponds to the situation of a target bitmap containing only 'extreme pixels', with each of the three RGB color registers at either 0 or 255, and the rendered bitmap having the exact opposite for each corresponding color register.
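As a concrete check of Eq. 1 and these extreme values, the error measure can be sketched in a few lines of Python. This is a simplified stand-in: images are modeled here as flat lists of channel values rather than the PIL bitmaps the paper actually renders.

```python
# Sketch of Eq. 1: squared error summed over all RGB channels of all pixels,
# divided by the number of pixels (180*240). Flat lists of 8-bit channel
# values are a simplifying assumption for illustration only.

W, H = 240, 180  # canvas dimensions used throughout the paper

def mse(rendered, target, pixels=W * H):
    """Mean squared error per pixel, summed over the 3 RGB channels (Eq. 1)."""
    assert len(rendered) == len(target) == pixels * 3
    return sum((r - t) ** 2 for r, t in zip(rendered, target)) / pixels

# The best possible value is 0 (identical bitmaps); the worst is reached when
# every channel is 'flipped' between 0 and 255: 255**2 * 3 = 195075.
black = [0] * (W * H * 3)
white = [255] * (W * H * 3)
print(mse(black, black))  # 0.0
print(mse(black, white))  # 195075.0
```

The two prints reproduce exactly the best and worst possible MSE values derived in the text.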

Although it is a rare luxury to know the exact maximal and minimal objective values (in this case the MSE) for an optimization problem, they are hardly usable due to the sheer vastness of the state space |S| of possible polygon constellations. It takes some special tricks to enable computers to perform even basic operations on numbers of this magnitude, but nonetheless, some useful bounds can be given by


\[ |S| = \alpha \cdot (240\cdot 180)^{v} \cdot \left(256^{4}\right)^{v/4} \cdot (v/4)! \tag{2} \]

in which (240·180)^v is the combination space for vertex placement, (256^4)^{v/4} represents all possible polygon colorings and (v/4)! is the number of possible drawing orders. The variable α reflects the number of ways the vertices can be distributed over the polygons. In our case, 3v/4 vertices are allocated beforehand to assert that every polygon has at least three vertices, after which the remaining v/4 vertices are randomly distributed over the v/4 polygons. This means that for these specifics, α = P(v/4), in which P is the integer partition function, and |S| can be calculated exactly from the number of vertices v.
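Since Python's integers have arbitrary precision, Eq. 2 can in fact be evaluated directly. The sketch below is our own illustration, with P(n) computed by a standard dynamic program:

```python
from math import factorial

def partitions(n):
    """Integer partition function P(n), via the classic coin-change DP."""
    p = [1] + [0] * n
    for k in range(1, n + 1):          # allow parts of size k...
        for m in range(k, n + 1):
            p[m] += p[m - k]           # ...in partitions of every m >= k
    return p[n]

def state_space_size(v):
    """|S| of Eq. 2: alpha * (240*180)^v * (256^4)^(v/4) * (v/4)!."""
    p = v // 4                          # number of polygons
    return partitions(p) * (240 * 180) ** v * (256 ** 4) ** p * factorial(p)

# Even the smallest instance (v = 20) already has well over 100 decimal digits.
print(len(str(state_space_size(20))))
```

Such a direct evaluation makes the "special tricks" for big numbers unnecessary in Python, although rendering every constellation obviously remains infeasible.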

The state space size of the target bitmaps is also known. It is, neglecting symmetry and rotation, given by

\[ 256^{3\cdot(180\cdot 240)} \approx 7.93\cdot 10^{312{,}107} \tag{3} \]

which reflects the three RGB values for each pixel in the bitmap. From Eq. 2, we can derive that a constellation of 39,328 vertices could render to a total of ≈1.81·10^312,109 different bitmap images. These numbers are incomprehensibly large, but nonetheless establish an exact numerical lower bound on the number of vertices we require to render every possible bitmap image of 180×240 pixels. A feasible upper bound can be practically inferred: if we assign a single square polygon with four vertices to each pixel, we can create every possible bitmap of the aforementioned dimensions with 180·240·4 = 172,800 vertices.

Upholding these bounds and assumptions, we would need somewhere between 39,328 and 172,800 vertices to enable the rendering of every possible target bitmap and thereby guarantee the theoretical reachability of a perfect MSE of 0. These numbers are nowhere near practically calculable though, and therefore it cannot be said whether the perfect MSE is reachable for vertex numbers lower than 172,800, or even what the best reachable value is. But even if a perfect MSE were reachable, it is unclear what a complete algorithm for this task would look like, let alone whether it could do the job within any reasonable amount of time. What we can do, however, is compare the performance of three good heuristic algorithms, which we describe in the next section.

4 Three Algorithms for Optimal Polygon Arrangement

We use a stochastic hillclimber, simulated annealing, and the newly developed plant propagation algorithm to optimize parameter values for the polygon constellations of the seven paintings (Fig. 2). The mutation operators are identical for all algorithms, as is the initialization procedure:

4.1 Random Initialization

Both the stochastic hillclimber and simulated annealing start off with a single randomly initialized polygon constellation, whereas the plant propagation


algorithm has a population of M = 30 randomly initialized constellations (or individuals). Initialization first creates a black canvas with the dimensions of the target bitmap. Then, v vertices and p = v/4 polygons are created, each polygon with the necessary minimum of three vertices, and the remaining v/4 vertices randomly distributed over the polygons. All vertices in all polygons are randomly placed on the canvas with 0 ≤ x < X_max and 0 ≤ y < Y_max. Finally, all polygons are assigned the four RGBA values by randomly choosing values between 0 and 255. The only variables which are not actively initialized are the drawing indices i, but the shrewd reader might have already figured out that this is unnecessary, as all other properties are randomly assigned throughout the initialization process.
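A minimal sketch of this initialization procedure, under the assumption that a constellation is simply a list of polygon dictionaries (the actual data structures in the published code [20] may differ):

```python
import random

W, H = 240, 180  # canvas dimensions, as in the paper

def init_constellation(v):
    """Random constellation of p = v/4 polygons over v vertices (Sect. 4.1)."""
    p = v // 4
    sizes = [3] * p                     # every polygon gets its minimum of 3
    for _ in range(v - 3 * p):          # remaining v/4 vertices spread randomly
        sizes[random.randrange(p)] += 1
    return [{
        "vertices": [(random.randrange(W), random.randrange(H))
                     for _ in range(s)],
        "rgba": tuple(random.randrange(256) for _ in range(4)),
    } for s in sizes]

constellation = init_constellation(100)
assert len(constellation) == 25
assert sum(len(poly["vertices"]) for poly in constellation) == 100
```

The list order doubles as the drawing index, which is why no explicit index initialization is needed, just as the text observes.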

4.2 Mutations

All three algorithms hinge critically on the concept of a mutation. To maximize both generality and diversity, we define four types of mutation, all of which are available in each of the three algorithms:

1. Move Vertex: randomly selects a polygon, and from that polygon a single random vertex, and assigns it a new randomly chosen location with 0 ≤ x < X_max and 0 ≤ y < Y_max.
2. Transfer Vertex: randomly selects two different polygons p1 and p2, in which p1 is not a triangle. A random vertex is deleted from p1 and inserted in p2 at a random place between two other vertices of p2. Because of this property, the shape of p2 does not change, though it might change in later iterations because of operation 1. Note that this operation enables diversification of the polygon types but keeps the numbers of polygons and vertices constant, facilitating quantitative comparison of the end results.
3. Change Color: randomly chooses either the red, green, blue or alpha channel of a randomly chosen polygon and assigns it a new value 0 ≤ q ≤ 255.
4. Change Drawing Index: randomly selects a polygon and assigns it a new index in the drawing order. If the new index is lower, all subsequent polygons get their index increased by one ("moved to the right"). If the new index is higher, all previous polygons get their index decreased by one ("moved to the left").

It follows that for any constellation at any point, there are v possible move vertex operators, v/4 possible transfer vertex operators, 3v/4 possible change color operators, and v/4 possible change drawing index operators. As such, the total number of possible mutation operators for a polygon constellation of v vertices is 9v/4.
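Of these four operators, the drawing-index change is the one with a non-obvious side effect: moving one polygon shifts all indices between the old and new position by one. With a constellation stored as a Python list ordered by drawing index (an assumption of this sketch), a plain pop/insert reproduces exactly the behavior described above:

```python
def move_index(polygons, i, j):
    """Core of mutation 4: the polygon at drawing index i gets new index j.
    pop/insert shifts every polygon between i and j by one position,
    matching the "moved to the right"/"moved to the left" description."""
    polygons.insert(j, polygons.pop(i))
    return polygons

# Moving the last polygon to index 0: the first three shift "to the right".
print(move_index(["p0", "p1", "p2", "p3"], 3, 0))  # ['p3', 'p0', 'p1', 'p2']
# Moving the first polygon to the end: the others shift "to the left".
print(move_index(["p0", "p1", "p2", "p3"], 0, 3))  # ['p1', 'p2', 'p3', 'p0']
```

In an actual run, i and j would simply be drawn uniformly at random, as with the operands of the other three mutation types.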

4.3 Stochastic Hillclimber and Simulated Annealing

A run of the stochastic hillclimber works as follows: first, it initializes a random constellation of v vertices and v/4 polygons as described in Sect. 4.1. Then, it renders the polygon constellation to a bitmap and calculates its MSE with respect to the target bitmap. Each iteration, it selects one of the four mutation types with probability 1/4, and randomly selects appropriate operands for the mutation type with equal probability. Then it re-renders the bitmap; if the MSE increases (which is undesirable), the change is reverted; otherwise it is kept. It then proceeds to the next iteration unless the prespecified maximum number of iterations is reached, in which case the algorithm halts.

Simulated annealing [8] works in the exact same way as the stochastic hillclimber, accepting random improvements, but has one important added feature: whenever a random mutation increases the error on the rendered bitmap, there is still a chance the mutation gets accepted. This chance is equal to

\[ e^{-\Delta \mathrm{MSE}/T} \tag{4} \]

in which ΔMSE is the increase in error and T is the 'temperature', a variable depending on the iteration number i. There are many ways of lowering the temperature, but we use the cooling scheme by Geman and Geman [21]:

\[ T = \frac{c}{\ln(i+1)}. \tag{5} \]

This scheme is the only one proven to be optimal [22], meaning that it is guaranteed to find the global minimum of the MSE as i goes to infinity, given that the constant c is "the highest possible energy barrier to be traversed". To understand what this means, and to implement it correctly, we need to examine the roots of simulated annealing.

In condensed matter physics, e^{−E/kT} is known as the Boltzmann factor. It reflects the chance that a system is in a higher state of energy E, relative to the temperature. In metallurgic annealing, this translates to the chance of dislocated atoms moving to vacant sites in the lattice. If such a move occurs, the system crosses a 'barrier' of higher energy which corresponds to the distance the atom traverses. But whenever the atom reaches a vacant site, the system drops to a lower energetic state, from which it is less likely to escape. More importantly, it removes a weakness from the lattice, and therefore the process of annealing significantly improves the quality of the metal lattice. The translation from energy state to MSE in combinatorial optimization shows that simulated annealing truly fills the gap between physics and informatics: for the algorithm to escape from a local minimum, the MSE should be allowed to increase, to eventually find a better minimum.

So what is left is to quantify "the highest possible energy barrier to be traversed" for the algorithm to guarantee its optimality. As we calculated the upper bound for the MSE in Sect. 3, we can set c = 195075 for our 180×240 paintings, thereby guaranteeing the optimal solution (eventually). The savvy reader might feel some suspicion about this claim of optimality, especially considering the open P = NP question, but there is no real contradiction here. The snag is in i going to infinity; we simply do not have that much time, and if we did, we might just as well run a brute-force algorithm, or even just do stochastic sampling to find the global minimum. In infinite time, everything is easy.
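The acceptance rule of Eqs. 4–5 can be sketched compactly. Note that the iteration counter starts at i = 1 here to keep ln(i + 1) positive, a small indexing assumption of this sketch:

```python
import math
import random

C = 195075  # the paper's c: the highest possible energy barrier (max MSE)

def acceptance_probability(delta_mse, i, c=C):
    """Probability of accepting a mutation at iteration i (Eqs. 4-5).
    Improvements (delta_mse <= 0) are always accepted; deteriorations are
    accepted with probability exp(-delta_mse / T), T = c / ln(i + 1)."""
    if delta_mse <= 0:
        return 1.0
    T = c / math.log(i + 1)
    return math.exp(-delta_mse / T)

def accept(delta_mse, i):
    """Stochastic accept/reject step of simulated annealing."""
    return random.random() < acceptance_probability(delta_mse, i)

# At i = 1, T = c/ln 2, so a deterioration of exactly c is accepted with
# probability exp(-ln 2), i.e. about 0.5; the hillclimber is the T -> 0 limit.
print(acceptance_probability(C, 1))
```

Setting the probability of deteriorations to zero recovers the stochastic hillclimber, which is why the two algorithms could share all other machinery.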


Fig. 2. Typical runs for the stochastic hillclimber (top), the plant propagation algorithm (middle) and simulated annealing (bottom). Polyptychs of rendered bitmaps illustrate the visual improvement from left to right, while the in-between graphs show the corresponding decrease in MSE throughout the iterative improvement process of the respective algorithm.


4.4 Plant Propagation

The plant propagation algorithm (PPA) is a relatively new member of the optimization family and the only population-based algorithm we use. First, fitness values are assigned. Normalized to (0,1) within the current population, this 'relative normalized fitness assignment' might be beneficial for these kinds of optimization problems, in which the absolute extremes of the MSE are usually practically unknown. Each individual will then produce between 1 and n_max = 5 offspring, proportional to its fitness, and these offspring are mutated inversely proportional to the normalized (relative) fitness. So fitter individuals produce more offspring with small mutations, and unfitter individuals produce fewer offspring with large mutations. The notions of 'large' and 'small' mutations, however, need some careful consideration.

In the seminal work on PPA, mutations are done on real-valued dimensions of continuous benchmark functions [12], which has the obvious advantage that the size of the mutation can be scaled directly proportional to the (real-valued) input dimension. An adaptation to a non-numerical domain has also been made when PPA was deployed to solve the Traveling Salesman Problem [13]. In that case, the size of the mutation is reflected in the number of successive 2-opts applied to the TSP tour. It does not, however, take into account that multiple 2-opts might overlap, or how the effect of the mutation might differ from edge to edge, as some edges are longer than others.

So where does this leave us? In this experiment, a largely mutated offspring could either mean 'having many mutations' or 'having large mutations'. Consulting the authors, we tried to stick as closely as possible to the idea of (inverse) proportionality and find middle ground in the developments so far. In the benchmark paper, all dimensions are mutated by about 50% in the worst-case fitness. In the TSP paper, the largest mutations vary between 7% and 27% of the edges, depending on the instance size. We chose to keep the range of mutations maximal, while assigning the number of mutations inversely proportional to the fitness, resulting in the following PPA implementation:

1. Initialization: create a population of M = 30 individuals (randomly initialized polygon constellations).
2. Assign Fitness Values: first, calculate MSE_i for all individuals i in the population, and normalize the fitness for each individual:

\[ f_i = \frac{\mathrm{MSE}_{\max} - \mathrm{MSE}_i}{\mathrm{MSE}_{\max} - \mathrm{MSE}_{\min}} \tag{6} \]

\[ N_i = \tfrac{1}{2}\left(\tanh\left(4\cdot(1 - f_i) - 2\right) + 1\right) \tag{7} \]

in which MSE_max and MSE_min are the maximum and minimum MSE in the population, f_i is an individual's relative fitness and N_i is an individual's normalized relative fitness.
3. Sort Population on fitness, keep the fittest M individuals and discard the rest.


4. Create Offspring: each individual i in the population creates n_r new individuals as

\[ n_r = n_{\max}\cdot N_i\cdot r_1 \tag{8} \]

with m_r mutations each, as

\[ m_r = \frac{9v}{4}\cdot\frac{1}{n_{\max}}\cdot(1 - N_i)\cdot r_2 \tag{9} \]

in which 9v/4 is the number of possible mutation operations for a constellation of v vertices and r_1 and r_2 are random numbers from (0,1).
5. Return to step 2 unless the predetermined maximum number of function evaluations is exceeded, in which case the algorithm terminates. In the experiments, this number is identical to the number of iterations for the other two algorithms, which both perform exactly one evaluation per iteration.
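The fitness mapping of Eqs. 6–7 and the offspring rules of Eqs. 8–9 can be sketched as follows. The equations are reproduced as printed; rounding the real-valued n_r and m_r up to whole counts of at least 1 is our own assumption, since the text does not state how fractional values are handled:

```python
import math
import random

def normalized_fitness(mse_i, mse_min, mse_max):
    """Eqs. 6-7 as printed: relative fitness f_i, then N_i via tanh."""
    f = (mse_max - mse_i) / (mse_max - mse_min)       # Eq. 6
    return 0.5 * (math.tanh(4 * (1 - f) - 2) + 1)     # Eq. 7

def offspring(N_i, v, n_max=5):
    """Eqs. 8-9: number of offspring n_r and mutations per offspring m_r.
    The ceiling-to-at-least-1 is an assumption of this sketch."""
    n_r = max(1, math.ceil(n_max * N_i * random.random()))                    # Eq. 8
    m_r = max(1, math.ceil(9 * v / 4 / n_max * (1 - N_i) * random.random()))  # Eq. 9
    return n_r, m_r

# The tanh squashes N_i into (0, 1): the two population extremes land close
# to the endpoints 0.5*(tanh(-2)+1) and 0.5*(tanh(2)+1).
print(round(normalized_fitness(100, 100, 500), 3))  # best MSE in population: 0.018
print(round(normalized_fitness(500, 100, 500), 3))  # worst MSE in population: 0.982
```

Note that the mutation budget 9v/4 is exactly the operator count derived in Sect. 4.2, so the maximum-sized mutation spans the entire mutation repertoire once, spread over n_max offspring.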

5 Experiments and Results

We ran the stochastic hillclimber and simulated annealing for 10^6 iterations and completed five runs for each painting and each number of v ∈ {20, 100, 300, 500, 700, 1000}. Simulated annealing was parametrized as described in Sect. 4.3, and we continually retained the best candidate solution so far, as the MSE occasionally increases during a run. The plant propagation algorithm was parametrized as stated in Sect. 4.4 and also completed five runs, also until 10^6 function evaluations had occurred. After completing a set of five runs, we averaged the last (and therefore best) values (Fig. 3).

For almost all paintings with all numbers of v, the stochastic hillclimber is by far the superior algorithm. It outperformed the plant propagation algorithm on 48 out of 49 vertex-painting combinations, with an average MSE improvement of 766 and a maximum improvement of 3177, on the Jackson Pollock with 1000 vertices. Only for the Mona Lisa with 20 vertices did PPA outperform the hillclimber, by 37 MSE. Plant propagation was the second-best algorithm, better than simulated annealing on all painting-vertex combinations by an average improvement of 17666 MSE. The smallest improvement, of 4532 MSE, was achieved on Bach with 20 vertices, and the largest improvement, of 46228 MSE, on the Mondriaan with 20 vertices. A somewhat surprising observation is that for both the stochastic hillclimber and plant propagation, end results improved with increasing vertex numbers, but only up to about v = 500, after which the quality of the rendered bitmaps largely leveled out. This pattern also largely applies to the end results of simulated annealing, with the exception of Bach, which actually got worse with increasing numbers of v. This surprising and somewhat counterintuitive phenomenon might be explained by the fact that the algorithm does not perform very well, and the fact that Haussmann's painting is largely black, which is the canvas' default color. When initialized with more vertices (and therefore more polygons), a larger area of the black canvas is covered with colored polygons, increasing the default MSE distance from Bach to black and back.


Fig. 3. Best results of five runs for the three algorithms on all seven paintings for all numbers of vertices. Remarkably enough, results often do not improve significantly with numbers of vertices over 500. Note that results for the hillclimber and plant propagation are depicted on a logarithmic scale.

6 Discussion, Future Work, and Potential Applications

In this work, we set out to compare three different algorithms that optimize a constellation of semi-transparent, partially overlapping polygons to approximate a set of famous paintings. It is rather surprising that in this exploration, the simplest algorithm performs best. As the hillclimber is extremely susceptible to getting trapped in local minima, this might indicate that the projection of the objective function from the state space to the MSE is relatively convex when neighbourized with mutation types such as ours. Another explanation might be the presence of many high-quality solutions despite the vastness of the state space.

Plant propagation is not very vulnerable to local minima, but it does not perform very well considering its track record on previous problems. This might be due to the mutation operators, or the algorithm's parametrization, but also to the vastness of the state space; even if the algorithm does avoid local minima, their attractive basins might be so far apart that the maximum distance our mutations are likely to traverse is still (far) too small.

The rather poor performance of simulated annealing can be easily explained by its high temperature. The logarithmic Geman & Geman scheme in itself cools down rather slowly, even over a run of a million iterations. But our choice of c-value might also be too high. The canvas is not comprised of 'extreme colors', and neither are the target bitmaps, so the maximum energy barrier (increase in MSE) might in practice be much lower than our upper bound of 195075. An easy way forward might be to introduce an 'artificial Boltzmann constant', effectively facilitating a faster temperature decrease. This method, however, does raise some theoretical objections, as the principle of 'cooling down slowly' is a central dogma in both simulated and metallurgic annealing.


An interesting observation can be made in Fig. 3. Some paintings appear to be harder to approximate with a limited number of polygons than others, leading to a 'polygonicity ranking' which is consistently shared between the hillclimber and plant propagation. The essentially grid-based work of Mondriaan seems to be more suitable for polygonization than Van Gogh's The Starry Night, which consists of more rounded shapes, or Jackson Pollock's work, which has no apparent structure at all. To quantitatively assess these observations, it would be interesting to investigate whether a predictive data analytic could be found to a priori estimate the approximability of a painting. This, however, requires a lot more theoretical foundation and experimental data.

Fig. 4. Best results for various paintings and various algorithms. From top left, clockwise: 1: Mona Lisa, SA, v = 1000. 2: Mondriaan, HC, v = 1000. 3: Klimt, PPA, v = 1000. 4: Bach, HC, v = 1000. 5: Dali, HC, v = 1000. 6: Jackson Pollock, SA, v = 1000. 7: Starry Night, PPA, v = 1000. This Mondriaan is the best approximation in the entire experiment; when looking closely, even details like irregularities in the painting's canvas and shades from the art gallery's photograph can be made out.

7 Challenge

We hereby challenge all our colleagues in the field of combinatorial optimization to come up with better results than ours (Fig. 4). Students are encouraged to get involved too. Better results include, but are not limited to:

1. Better instance results: finding better MSE values for values of v and p identical to ours, regardless of the method involved. If better MSE values are found for smaller v and p than ours, they are considered stronger.


2. Better algorithmic results: finding algorithms that either significantly improve MSE results, or achieve similar results in fewer function evaluations.
3. Better parameter settings: finding parameter settings for our algorithms that either significantly improve MSE results, or achieve similar results in fewer function evaluations. We expect this to be quite possible.
4. Finding multiple minima: it is unclear whether any minimum value of the MSE can have multiple different polygon constellations, apart from symmetric results.

For resources, as well as improved results, refer to our online page [20]. We intend to keep a high-score list.

Acknowledgements. We would like to thank Abdellah Salhi (University of Essex) and Eric Fraga (University College London) for their unrelenting willingness to discuss and explain the plant propagation algorithm. A big thanks also goes to Arnoud Visser (University of Amsterdam) for providing some much-needed computing power towards the end of the project, and to Jelle van Assema for helping with the big numbers.

References

1. Roger Johansson blog: Genetic programming: evolution of Mona Lisa. https://rogerjohansson.blog/2008/12/07/genetic-programming-evolution-of-mona-lisa/
2. Genetic programming: Mona Lisa source code and binaries. https://rogerjohansson.blog/2008/12/11/genetic-programming-mona-lisa-source-code-and-binaries/
3. Eiben, A.E., Smith, J.E.: Introduction to Evolutionary Computing, 1st edn. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-662-05094-1
4. Fister Jr., I., Yang, X.S., Fister, I., Brest, J., Fister, D.: A brief review of nature-inspired algorithms for optimization (2013). arXiv preprint arXiv:1307.4186
5. Sörensen, K.: Metaheuristics—the metaphor exposed. Int. Trans. Oper. Res. 22, 1–16 (2015)
6. Mézard, M., Parisi, G., Zecchina, R.: Analytic and algorithmic solution of random satisfiability problems. Science 297(5582), 812–815 (2002)
7. Mézard, M., Parisi, G.: The cavity method at zero temperature. J. Stat. Phys. 111(1–2), 1–34 (2003)
8. Kirkpatrick, S., Gelatt, C.D., Vecchi, M.: Optimization by simulated annealing. Science 220(4598), 671–680 (1983)
9. Hornby, G., Globus, A., Linden, D., Lohn, J.: Automated antenna design with evolutionary algorithms. In: Space 2006, p. 7242 (2006)
10. Moshrefi-Torbati, M., Keane, A.J., Elliott, S.J., Brennan, M.J., Rogers, E.: Passive vibration control of a satellite boom structure by geometric optimization using genetic algorithm. J. Sound Vibr. 267(4), 879–892 (2003)
11. Jelisavcic, M., et al.: Real-world evolution of robot morphologies: a proof of concept. Artif. Life 23(2), 206–235 (2017)
12. Salhi, A., Fraga, E.: Nature-inspired optimisation approaches and the new plant propagation algorithm. In: Proceedings of the International Conference on Numerical Analysis and Optimization (ICeMATH 2011), Yogyakarta, Indonesia (2011)
13. Selamoğlu, B.İ., Salhi, A.: The plant propagation algorithm for discrete optimisation: the case of the travelling salesman problem. In: Yang, X.-S. (ed.) Nature-Inspired Computation in Engineering. SCI, vol. 637, pp. 43–61. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-30235-5_3
14. Cheraita, M., Haddadi, S., Salhi, A.: Hybridizing plant propagation and local search for uncapacitated exam scheduling problems. Int. J. Serv. Oper. Manag. (in press). http://www.inderscience.com/info/ingeneral/forthcoming.php?jcode=ijsom
15. Neumann, A., Alexander, B., Neumann, F.: Evolutionary image transition using random walks. In: Correia, J., Ciesielski, V., Liapis, A. (eds.) EvoMUSART 2017. LNCS, vol. 10198, pp. 230–245. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-55750-2_16
16. Richter, H.: Visual art inspired by the collective feeding behavior of sand-bubbler crabs. In: Liapis, A., Romero Cardalda, J.J., Ekárt, A. (eds.) EvoMUSART 2018. LNCS, vol. 10783, pp. 1–17. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-77583-8_1
17. Semet, Y., O'Reilly, U.-M., Durand, F.: An interactive artificial ant approach to non-photorealistic rendering. In: Deb, K. (ed.) GECCO 2004, Part I. LNCS, vol. 3102, pp. 188–200. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24854-5_17
18. MacCallum, R.M., Mauch, M., Burt, A., Leroi, A.M.: Evolution of music by public choice. Proc. Natl. Acad. Sci. 109(30), 12081–12086 (2012)
19. Python Image Library 5.1.0. https://pillow.readthedocs.io/en/5.1.x/
20. Heuristieken.nl: Paintings, polygons, and plant propagation. http://heuristieken.nl/wiki/index.php?title=Paintings_from_Polygons
21. Geman, S., Geman, D.: Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 6, 721–741 (1984)
22. Nourani, Y., Andresen, B.: A comparison of simulated annealing cooling strategies. J. Phys. A: Math. Gen. 31, 8373–8385 (1998)