The Painting Fool Sees! New Projects with the Automated Painter
Simon Colton1,2, Jakob Halskov3, Dan Ventura4,
Ian Gouldstone2, Michael Cook2 and Blanca Pérez-Ferrer1
1MetaMakers Institute, Academy for Innovation and Research, Falmouth University, UK
2Computational Creativity Group, Department of Computing, Goldsmiths, University of London, UK
3UBIC INC, Tokyo, Japan
4Computer Science Department, Brigham Young University, USA
Abstract
In The Painting Fool project, we aim to build an au-
tomated painter which is taken seriously as a creative
artist in its own right, one day. We report here the
most recent advances, where we have integrated ma-
chine vision capabilities from the DARCI system into
The Painting Fool, to enhance its abilities before, dur-
ing and after the painting process. These advances have
enabled new art projects, including a commission from
an Artificial Intelligence company, and we report on this
collaboration, which is one of the first instances in Com-
putational Creativity research where creative software
has been commissioned directly. The new projects have
advanced The Painting Fool as an independent artist
able to produce more diverse styles which break away
from simulating natural media. The projects have also
raised a philosophical question about whether software
artists need to see in the same way as people, which we
briefly discuss.
Paper type: Cultural applications paper
Introduction
The Painting Fool (www.thepaintingfool.com) is software that we hope will be taken
seriously as a creative artist in its own right, one day. It is a
well established project, with an emphasis on implement-
ing processes which could be described as artistic and/or
creative, rather than merely producing images which look
like they may have been painted by a person, as with many
graphics packages (Strothotte and Schlechtweg 2002), such
as Adobe’s Illustrator. Many technical details of the project
and discussions of the outreach activities performed with
The Painting Fool are given in (Colton 2012b).
Progress with the project is usually along two axes: tech-
nical and societal, and the work presented here addresses
both aspects. On the technical side, we report how we have
enabled The Painting Fool to use machine vision techniques
before, during and after the painting process, in order to
take more creative responsibility, produce more interesting
pieces and provide better framing information. This has in-
volved integrating aspects of the machine vision abilities
of the DARCI system (Norton, Heath, and Ventura 2013;
Heath, Norton, and Ventura 2014). In addition to being
used in art generation itself (Norton, Heath, and Ventura
2011), DARCI has been used as an artificial art critic (Norton,
Heath, and Ventura 2010), which makes it the perfect
complement to The Painting Fool. Implementing such syn-
ergies is rare in Computational Creativity research, with a
few notable exceptions, such as the combination of parts
of the MEXICA, Curveship and GRIOT programs into the
Slant storytelling system (Montfort et al. 2013).
On the societal side, to get The Painting Fool accepted as
an artist, we engage the public, journalists and members of
the art world (artists, art students, art educators, critics, cura-
tors, gallery owners, etc.), as stakeholders in the question of
whether software can be creative or not. Further exploration
of some of the stakeholder issues in Computational Cre-
ativity is provided in (Colton et al. 2015). To this end, we
describe here three new art projects where The Painting Fool
has used its new visual capabilities to produce interesting art
and experiences for audiences. These include a mood-based
portraiture demonstration, where the visual processing was
used to express intent; The Painting Fool’s first art commis-
sion for a third party; and a private art project.
The collaboration with DARCI and the projects this en-
abled have progressed The Painting Fool project along a
number of axes. Firstly, the machine vision abilities mean
it is now able to analyse – albeit simplistically – the work
that it produces, and that of others, making it more appre-
ciative. This can be used to motivate and assess art projects,
and can be used during the painting process, for sketching
purposes. Importantly, the task of choosing rendering styles
can be taken from the people in charge of the art projects
and taken on by the software itself, which has added a great
deal of autonomy. An added benefit of this has been that the
paintings now no longer only resemble those produced in
traditional ways by people: the software is using the digital
medium more fully in interesting new styles.
This paper is organised as follows. In the next section,
we describe aspects of The Painting Fool and DARCI used
in the collaboration, followed by a discussion of how as-
sociation networks from DARCI were used by The Paint-
ing Fool in increasing levels of sophistication. We then
present the three new art projects enabled by this collabora-
tion, and put these into the context of related work. We con-
clude with a discussion of the advances made in The Paint-
ing Fool project, and we briefly question whether software
artists need to see in the same way as people.
Figure 1: You Can’t Know My Mind exhibit workflow.
Background
The Painting Fool: Workflows
There is no single way in which The Painting Fool produces
artworks, but rather a set of tasks it can achieve through
performing certain behaviours, and workflows which com-
bine these into art-producing processes. The behaviours
make use of various AI techniques including natural lan-
guage processing (Krzeczkowska et al. 2010), constraint
solving (Colton 2008b), evolutionary search (Colton 2008a),
design grammars (Colton and Pérez-Ferrer 2012) and ma-
chine learning (Colton 2012a). The workflows are con-
structed through a teaching interface currently consisting of
24 screens. An example workflow, for the You Can’t Know
My Mind exhibit (described below) is given in figure 1. This
highlights that the vision system is used both at the start of
the process and towards the end (the ‘AN evaluation’ node).
Before the work described here, The Painting Fool had
a very rudimentary visual analysis system that was able to
evaluate features of an image such as texture, colour vari-
ance and symmetry. It is also able to segment a given digital
photograph into a set of colour regions, using a threshold-
based neighbourhood construction method, path-finding for
edge rationalisation and edge abstraction methods. A way-
point in every workflow is the construction of such a set of
colour regions, which can be achieved using this segmenta-
tion process, via design grammars, variation of hand-drawn
scenes and/or constraint solvers placing rectangles onto the
canvas. The colour regions direct the rendering process,
whereby each region is either filled-in or outlined via the
simulation of natural media such as paints and implements
such as paintbrushes. The rendering of each region can in-
clude multiple fill/outline passes, and the rendering of the
entire segmentation of colour regions can be done repeat-
edly, building up a layered image.
The segmentation and rendering methods are highly pa-
rameterised, requiring 14 and 57 parameters to be set re-
spectively, as described in (Colton 2012b). Choosing from
the space of possible segmentation and rendering methods
constitutes a large part of the creative responsibility taken
on in an art project, along with choosing and arranging sub-
ject matter, etc. We show below how the software now takes
on the responsibility of choosing the rendering settings.
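To make the scale of this decision space concrete, the sketch below models a handful of rendering settings as a simple data structure and samples a random style from it. The parameter names and ranges are illustrative stand-ins only, not the actual 14 segmentation and 57 rendering parameters documented in (Colton 2012b).

```python
import random
from dataclasses import dataclass

@dataclass
class RenderingSettings:
    """Illustrative subset of a rendering style; the real system has 57 such parameters."""
    paint_wetness: float     # simulation of the medium
    brush_size: int          # simulation of the implement
    canvas_roughness: float  # simulation of the support
    outline_passes: int      # style: how many times an outline is drawn

def sample_rendering_settings(rng: random.Random) -> RenderingSettings:
    """Draw one point from this (toy) space of rendering styles."""
    return RenderingSettings(
        paint_wetness=rng.uniform(0.0, 1.0),
        brush_size=rng.randint(1, 40),
        canvas_roughness=rng.uniform(0.0, 1.0),
        outline_passes=rng.randint(1, 5),
    )

if __name__ == "__main__":
    print(sample_rendering_settings(random.Random(0)))
```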
DARCI: Association Networks
For the combined system to possess a more sophisticated
sense of appreciation for the artefacts it produces and some
level of intentionality, we decided it should have a cognitive
model that is perceptually grounded, i.e., it must possess an
ability to associate visual stimuli with linguistic concepts.
That ability was realized by borrowing a piece of the DARCI
system, a visuo-linguistic association approach, which con-
sists of a set of neural networks that perform a mapping from
low-level computer vision features to adjectival linguistic
concepts, learned from a corpus of human-labeled images.
These images come from a continuously growing dataset
obtained via a public facing website (darci.cs.byu.edu) that solicits volunteer
labeling of random images. Volunteers are allowed to la-
bel images with any and all adjectives they think describe
the image, and as a result, images can be described by their
emotional effects, most of their aesthetic qualities, many of
their possible associations and meanings, and even, to some
extent, by their subject. Furthermore, through additional la-
beling exercises, volunteers can specify labels that explic-
itly do not describe the image, allowing the collection of
explicit negative labels as well as positive ones. The result
is a rich, challenging, dynamic dataset. A recent snapshot of
the data reveals 17,004 positive labels and 16,125 negative
labels using 2,463 unique adjectives associated with 2,562
unique images, an average of approximately 12 unique la-
bels per image, and 110 adjectives with at least 30 positive
and 30 negative image associations.
Images are perceived by the system as a vector of 102
low-level computer vision features extracted from the im-
age using the DISCOVIR system (appsrv.cse.cuhk.edu.hk/~miplab/discovir). This level of image per-
ception does not admit significant semantic understanding,
but it does allow appreciation of concepts that can be ade-
quately expressed with global, abstract features dealing with
characteristics of the image’s color, lighting, texture, and
shape. Given training data in the form of (image feature
vector, adjectival label) pairs, a mapping is learned using
a set of artificial neural networks that we call association
networks. Since learning image-to-concept associations is
a multi-label classification problem, and we cannot assume
implicit negativity, the only association networks trained
on a particular image are those whose associated concept is
explicitly labeled (positively or negatively) for that image. Each
adjectival concept is learned by a unique association net-
work, which is trained using standard backpropagation and
outputs a single real value, between 0 and 1, indicating the
degree to which an input image can be described by the net-
work’s associated adjectival concept.
Figure 2: Seventeen painting styles along with layering scheme and partial visual profile.
Implementing Vision-Enhanced Painting
From the DARCI system, The Painting Fool inherited a set
of 236 association networks (ANs), and a method of turning
a given image I into the numerical inputs to the ANs. Each
AN corresponds to a particular adjective, i.e., the higher the
output from the AN for adjective A when given input values
for I, the more likely (the AN predicts) that a viewer will use
A to describe I. We first determined which of the adjectival
ANs were suitable for dealing with The Painting Fool’s out-
put. To do this, we ran each AN over hundreds of painterly
images from The Painting Fool and recorded the range of
the numerical outputs. We found that for the majority of the
ANs, the output range was so low that we couldn’t meaning-
fully claim that it was differentiating between images based
on visual properties. We selected all ANs where the range
of outputs was 0.05 or greater, and then performed a sanity
check on those remaining, removing any which described
images in a particularly counter-intuitive way, e.g., the AN
for ‘red’ outputting a higher score for a patently green image
than for a patently red image.
This left a selection of 65 usable ANs, for which we imple-
mented an interface in The Painting Fool. For each selected
AN, we recorded the highest and lowest outputs over the
hundreds of images mentioned above, and when output from
a new image is calculated, it is normalised between these
extremes. As described below, the ANs have been used in a
number of new workflow behaviours for The Painting Fool.
The simplest of these is to allow the software to frame its
output (Charnley, Pease, and Colton 2012) by describing it
visually. It can also compare and contrast images in terms
of a particular adjective, or in terms of a profile of multiple
adjectives. It can also employ the ANs during the painting
process, as described in the following subsections.
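The following sketch shows how such a filtering and normalisation step could look, assuming the raw AN outputs over a corpus of painterly images have already been collected; the manual sanity check against counter-intuitive networks is omitted.

```python
import numpy as np

def select_usable_networks(outputs_by_adjective, min_range=0.05):
    """outputs_by_adjective: dict mapping adjective -> array of raw AN outputs
    over a corpus of painterly images. Keep adjectives whose output range is
    at least min_range, remembering the extremes for later normalisation."""
    extremes = {}
    for adjective, outputs in outputs_by_adjective.items():
        lo, hi = float(np.min(outputs)), float(np.max(outputs))
        if hi - lo >= min_range:
            extremes[adjective] = (lo, hi)
    return extremes

def normalise(raw_output, lo, hi):
    """Map a raw AN output for a new image onto [0, 1] between the recorded extremes."""
    return float(np.clip((raw_output - lo) / (hi - lo), 0.0, 1.0))

if __name__ == "__main__":
    data = {"red": np.array([0.2, 0.4, 0.9]), "vivid": np.array([0.50, 0.51])}
    usable = select_usable_networks(data)   # 'vivid' is dropped (range < 0.05)
    lo, hi = usable["red"]
    print(usable, normalise(0.65, lo, hi))
```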
A Space of Simulated Visual Art Implements/Styles
Given an image segmentation of colour regions as described
above, The Painting Fool produces a non-photorealistic ren-
dering of it in a series of whole-segmentation layers, during
which each region itself is rendered in multiple layers. Dur-
ing the rendering of each layer, which can either be filled
in, or outlined, the software simulates natural media such
as paints, and the usage of implements such as brushes in
outline/fill styles such as hatching, as described in (Colton
2012b). The rendering of a layer is determined by a set of
57 parameters, which cover the simulation of the media it-
self (e.g., wetness of paint), the implement (e.g., brush size),
the support (e.g., canvas roughness) and the style (e.g., num-
ber of times to draw an outline).
We defined a space of painting styles by fixing the ren-
dering to a single whole-segmentation layer during which
a scheme of up to five rendering layers per region was al-
lowed. The region layering scheme was represented as a
string with letters A, B, C, a, b or c. Upper case letters rep-
resent a fill layer with lower case letters representing outline
layers. Where upper and lower case letters correspond (e.g.,
A=a), all the other settings are the same, hence they repre-
sent the simulation of the same natural media in roughly the
same way, but one produces an outline, the other produces
a fill. For instance, ABCab represents three fill layers and
two outline layers, with all the settings of the first two fill
layers exactly as for the two outline layers. We found that it
increased visual coherence if the fill layers corresponded to
the outline layers in this way. After some initial experimen-
tation, we constrained the search space to include only five
layering schemes: aB, Ba, ABab, Aab and ABCab.
We generated 1,200 painting styles by randomly sampling
the space of rendering styles with each of the 57 parame-
ters set randomly to a value in its appropriate range, and
then assigning each style one of the five layering
schemes above, also chosen randomly. For each style, we
used The Painting Fool to render a given segmentation of
an abstract flower. Then, for each of the 65 selected ANs
described above, we calculated the normalised output for
each of the 1,200 flower paintings, thus creating a visual
profile for each style. Example painting styles, along with
the layering scheme and part of their visual profile are given
in figure 2. The seventeen pictures demonstrate somewhat
the diversity in the painting styles within this space. The
partial profiles indicate that while the AN outputs have a rel-
atively small range, it is sufficient for a choice of painting
style based on these values to be meaningful.
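A compressed sketch of this profiling step is given below; the style sampler, the renderer and the AN profiler are passed in as stand-in callables, since the real Painting Fool and DARCI internals are not reproduced here.

```python
import random
import numpy as np

LAYERING_SCHEMES = ["aB", "Ba", "ABab", "Aab", "ABCab"]

def build_style_profiles(n_styles, sample_style, render_flower, visual_profile, rng):
    """Generate n_styles random painting styles and compute a visual profile
    for each by rendering the fixed abstract flower segmentation and running
    the selected ANs over the result."""
    profiles = []
    for _ in range(n_styles):
        style = sample_style(rng)                       # random values for the rendering parameters
        style["layering"] = rng.choice(LAYERING_SCHEMES)
        image = render_flower(style)                    # paint the abstract flower in this style
        profiles.append((style, np.asarray(visual_profile(image), dtype=float)))
    return profiles

if __name__ == "__main__":
    rng = random.Random(0)
    sample = lambda r: {"brush_size": r.randint(1, 40)}          # placeholder parameters
    render = lambda style: style                                 # no real rendering here
    profile = lambda image: [random.random() for _ in range(65)] # placeholder AN outputs
    print(len(build_style_profiles(1200, sample, render, profile, rng)))
```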
Employing Vision During Painting
To recap, we supplied The Painting Fool with 1,200 differ-
ent painting styles, each with a visual profile derived from
applying association networks. As described above, there
are various workflows for producing images with The Paint-
ing Fool. When the workflow starts with a digital photo-
graph, images are segmented into a certain number of colour
regions, with more regions usually leading to more photo-
realism in the final paintings. Each colour region corre-
sponds, therefore, to a region of the original photograph, and
this photo-region can be interrogated in order to choose an
appropriate painting style. To do this, The Painting Fool ex-
tracts the photo region onto a transparent image, then applies
all 65 adjectival ANs to the extract, to compile a profile. The
Euclidean distance of this photo-extract profile from the vi-
sual profiles of the 1,200 painting styles is used to order the
styles by increasing distance. The distance can be interpreted
as an appropriateness of the painting style to the underlying
photo extract. That is, the style with least distance will ren-
der the region in a way that is most similar in nature to the
original photograph (according to the ANs).
The new workflow for The Painting Fool which uses ma-
chine vision during painting is as follows: it takes a photo-
graph and segments it into colour regions. For each colour
region, a photo-extract profile is produced using the ANs,
and this is used to order the painting styles in The Painting
Fool’s database, in terms of how appropriate they are to the
photo extract. From the top ten most appropriate styles, one
is chosen randomly and used to paint the region in question.
Choosing from the top ten in this fashion means that each
time The Painting Fool paints from the same photograph,
it produces a different image, yet each time, each painting
style is appropriate to the region it is used to paint. We have
enhanced this workflow by enabling a sketching mechanism.
That is, The Painting Fool tries all of the ten most appropri-
ate painting styles in situ, then produces a visual profile
of the resulting region of the painting, and chooses the one
where this profile is closest to the photo-region profile. This
reduces the reliance on the initial flower experiments some-
what, as The Painting Fool can see what each style looks like
actually in the painting, before committing to one in partic-
ular. It also opens up the potential for The Painting Fool to
produce a sketchbook to accompany each painting as fram-
ing information, which we mention later.
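The region-by-region style choice and the in situ sketching refinement amount to a nearest-profile search; a minimal sketch follows, with paint_region and visual_profile again standing in for system internals not shown here.

```python
import numpy as np

def rank_styles(photo_profile, style_profiles):
    """Order (style, profile) pairs by Euclidean distance from the photo-extract profile."""
    photo = np.asarray(photo_profile, dtype=float)
    return sorted(style_profiles,
                  key=lambda sp: float(np.linalg.norm(np.asarray(sp[1]) - photo)))

def choose_style_randomly(photo_profile, style_profiles, rng, top_k=10):
    """Basic workflow: pick one of the top_k most appropriate styles at random,
    so repeated paintings from the same photograph differ."""
    return rng.choice(rank_styles(photo_profile, style_profiles)[:top_k])[0]

def choose_style_by_sketching(photo_profile, style_profiles, paint_region,
                              visual_profile, top_k=10):
    """Sketching workflow: try each of the top_k styles in situ and keep the one
    whose painted region has the visual profile closest to the photo extract."""
    photo = np.asarray(photo_profile, dtype=float)
    best_style, best_dist = None, float("inf")
    for style, _ in rank_styles(photo_profile, style_profiles)[:top_k]:
        region_profile = np.asarray(visual_profile(paint_region(style)), dtype=float)
        dist = float(np.linalg.norm(region_profile - photo))
        if dist < best_dist:
            best_style, best_dist = style, dist
    return best_style
```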
Cultural Applications
In the subsections below, we describe new cultural applica-
tion projects with The Painting Fool which have been en-
abled by its access to a vision system. These span the kinds
of public, private and commissioned art projects that an artist
might expect to undertake as part of their general activities.
The ‘You Can’t Know My Mind’ Exhibition
For the You Can’t Know My Mind exhibit reported in (Colton
and Ventura 2014), we focused on the question of intention-
ality in creative software. As software is programmed di-
rectly, it is fair criticism to highlight that in most Compu-
tational Creativity projects, the intention for the production
of artefacts comes from the software's author and/or user. For the
You Can’t Know My Mind project, we raised our intentions
to the meta-level, i.e., we intended for the software to pro-
duce portraits and entertain sitters in order to learn about its
own painting styles. However, the aim of each artefact pro-
duction session was determined by The Painting Fool itself,
in order for it to exhibit behaviours that unbiased observers
might project the word ‘intentionality’ onto.
An eight-point description of how The Painting Fool oper-
ated in this project is given in (Colton and Ventura 2014). Of
note here, we used the machine vision system from DARCI
offline, to prepare the software for portraiture sessions. That
is, for each of 1,000 abstract art images produced by the
Elvira sub-module (Colton, Cook, and Raad 2011), and for
each of 1,000 image filters produced by the Filter Feast sub-
module (Torres, Colton, and Rueger 2008), the output of all
of the adjective ANs in the vision system were calculated.
This meant that the software could choose from the most ap-
propriate abstract backdrops and the most appropriate filters
for an adjective, A, chosen to fit a mood, in order to pro-
duce a sketch conception to aim for with each portrait. The
‘background image’ and ‘filtered image conception’ nodes
in figure 1 correspond to these.
Figure 3: Example comparisons for conceptions/portraits.
Under the assumption that viewing the sketch will lead people
to project certain adjectives onto the image,
the sketch conception has aspects which The Painting Fool
aspires to achieve in its painting. The conception image is
segmented into colour regions, and simulations of various
painting media (paints, pastels and pencils) are used in one
of eight styles, to produce a portrait. At the end of each por-
traiture session, The Painting Fool uses the vision system to
compare the level of adjective projection in the portrait to
that of the sketch. To do this (indicated by the ‘AN evalua-
tion' node in figure 1), it applies the adjectival AN for A to
the sketch conception and to the final portrait, and compares
the output. If the portrait output is within 95% to 105% of
the output of the AN for the conception, this is recorded as
satisfactory. If it is higher than 105%, this is recorded as
a success, and if it is higher than 110%, this is recorded as
a great achievement, with failures similarly recorded. Three
example comparisons of conception and portrait are given
in figure 3. The level of achievement/failure is used to up-
date a probability distribution that The Painting Fool can use
to choose painting styles later to achieve an image with max-
imal output with respect to a given adjectival AN.
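The bookkeeping of satisfaction, success and failure can be sketched as a simple ratio test; the thresholds on the failure side are assumed here to mirror those on the success side, which the text only states implicitly.

```python
def record_achievement(conception_score, portrait_score):
    """Compare the adjectival AN output for the final portrait against that of
    the sketch conception and classify the outcome."""
    ratio = portrait_score / conception_score
    if ratio > 1.10:
        return "great achievement"
    if ratio > 1.05:
        return "success"
    if ratio >= 0.95:
        return "satisfactory"
    if ratio >= 0.90:          # assumed mirror of the success thresholds
        return "failure"
    return "great failure"

print(record_achievement(0.60, 0.68))  # ratio ~1.13 -> 'great achievement'
```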
Figure 4: Front page and excerpt from the Japanese version of the essay for the ‘I Can See Unclearly Now’ commission. Third
image: an early photograph of artwork hung in the Behaviour Informatics Laboratories of UBIC.
The ‘I Can See Unclearly Now’ Commission
UBIC (www.ubicna.com) is a behavioural information data analysis company
based in Tokyo. In early August of 2014, UBIC’s CTO,
Mr. Hideki Takeda came across The Painting Fool's website
while exploring recent advances in Artificial Intelligence re-
search on the web. At that time, UBIC’s Behavior Infor-
matics Laboratories (B.I.L.) in Shinagawa, Tokyo, was im-
plementing a complete office renovation scheme reflecting
the company’s reorientation from eDiscovery vendor to sup-
plier of in-house Big Data Analytics solutions powered by
an AI engine called the Virtual Data Scientist. The new of-
fice concept of the B.I.L. can be summed up as: “Shaking the
boundaries between the virtual and the real so as to stimu-
late the senses and promote intelligence and creativity”. For
example, the office now features both real bamboo and bam-
boo imprinted on a glass wall. The choice of bamboo is not
arbitrary, but motivated by the fact that this plant plays a
prominent role in traditional Japanese culture. It is highly
symbolic and associated with, for example, Noh theatre
(en.wikipedia.org/wiki/Noh), in which the protagonists are often
ghosts from another plane of existence who nonetheless appear in the real world.
Mr. Takeda decided to commission artworks from The
Painting Fool, as this would fit very well with the blurring
of virtual and real spaces in the B.I.L. The first author of
this paper – who is the lead researcher in The Painting Fool
project – was contacted by the second author acting on be-
half of UBIC, and ultimately three series of images were
commissioned, along with an essay highlighting how the
machine vision system was used in increasingly sophisti-
cated ways from the first to the third series. Constraints were
put on the commission: (i) to include a portrait from a live
sitting, and (ii) to include a piece involving Alan Turing, as
an AI pioneer. Moreover, it was agreed that the commission
would involve an element of research and implementation,
driving The Painting Fool project forward. Example images
(with details) from the three series are given in figure 6, and
details from the essay, along with an early photograph of one
of the pieces hung in the B.I.L. are given in figure 4. The ti-
tle of the commission was chosen to highlight The Painting
Fool’s new usage of machine vision techniques, while indi-
cating that the system is far from perfect.
To tie the three series of images together, the same style of
backdrop was used, consisting of 10,000 adjectives rendered
in a handwritten way in varying shades of greyscale pencil,
onto dark backgrounds. In all the pieces, the mass of ad-
jectives opens up in multiple places into which red handwrit-
ten adjectives are strategically placed. For the first series,
StarFlowers, paintings of the abstract flowers used for as-
sessing painting styles were placed using a constraint solver
to avoid overlap, as per (Colton 2008b), with slightly dif-
fering sizes. Before placement, each flower image was as-
sessed by the 65 adjective ANs, and from the top ten highest
scoring adjectives, two were chosen to appear alongside the
flower in the piece, in red handwriting. The pairs were cho-
sen so that no flower had the same two adjectives next to
it. For instance, in the detail of figure 6, the first flower is
annotated with ‘peaceful’ and ‘warm’.
In the second series, Good Day, Bad Day, two pho-
tographs of the second author seated, posing firstly in a
good mood, and secondly in a bad mood were used. The
65 adjectives were split into positive, neutral and negative
valence categories, e.g., happy, glazed, bleary respectively.
The painting style with the highest average AN output over
the positive adjectives was chosen to paint the first pose,
and the most negative style was similarly chosen to paint the
second pose. Each portrait was annotated at its edges with
red handwritten adjectives appropriate to the painting at that
edge point. In the third series, Dynamic Portraits: Alan Tur-
ing, a photograph of Turing was hand annotated with lines
to pick out his features. We then used the method of arbi-
trarily choosing from the top ten most appropriate painting
styles for each colour region described above, to produce a
number of portraits, with the annotated lines being painted
on at the end, to gain a likeness. The rendered painting was
analysed with the 65 ANs and the 17 most appropriate ad-
jectives were scattered around the backdrop of the image, in
a non-overlapping way, as usual in red handwriting.
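For the Good Day, Bad Day series, the style choice reduces to maximising the mean AN output over a hand-picked valence category; a sketch under that reading is given below, with the adjective-to-index mapping and the valence categories supplied by hand as in the text.

```python
import numpy as np

def choose_style_by_valence(style_profiles, adjective_index, valence_adjectives):
    """style_profiles: list of (style, 65-dim profile) pairs.
    adjective_index: maps each adjective to its position in the profile.
    valence_adjectives: e.g. the hand-assigned positive adjectives.
    Returns the style whose mean AN output over those adjectives is highest."""
    idx = [adjective_index[a] for a in valence_adjectives]
    best = max(style_profiles,
               key=lambda sp: float(np.mean(np.asarray(sp[1])[idx])))
    return best[0]
```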
Dozens of images from the three series were sent to UBIC
to choose from for the B.I.L., with very little curation from
the first author. UBIC representatives confirmed that the
commission achieved the brief of producing pieces which
blur the line between the real (i.e., painted by a person) and
the virtual (i.e., painted by a computer), and were very happy
with the commission. They produced a translated version of
the essay for visitors to the lab, and hung an example from
each series in the B.I.L.
Figure 5: Portrait of Geraint Wiggins, with detail.
The Portrait of Geraint Wiggins
In (rather belated) celebration of a milestone birthday, we
used the vision-based sketching approach described above,
to produce a portrait. Given an original image, hand-
annotated with lines picking out facial features, The Paint-
ing Fool segmented it into 150 colour regions/lines, and for
each, chose the top ten most appropriate painting styles,
as described above. For each of the ten, it painted the re-
gion, calculated the visual profile of the region of the paint-
ing that resulted, and finally chose the style with minimal
distance between its visual profile and that of the original
photo-extract. In this way, the painting process was deter-
ministic, but not predictable, and produced a striking portrait
with painterly and distinctly non-painterly effects. To add a
physical uniqueness, the image was printed onto 300 4cm
by 4cm squares which were composed into the final piece
in an overlapping formation, as per the Dancing Salesman
Problem piece described in (Colton and Pérez-Ferrer 2012).
The portrait is shown in figure 5, along with a detail from it.
Related Work
It is commonplace for an artist to be commissioned to work
with a bespoke piece of software, or even to develop new
code, to produce artwork, with the person using the software
as a tool. However, it is much less common for a commis-
sion to be made specifically because the software will take
on many of the creative responsibilities.
The ANGELINA system (Cook and Colton 2014) has
been commissioned to produce games for the New Scien-
tist, Wired and PC Gamer Magazines. In the former, AN-
GELINA designed a game as normal, but its designer pro-
vided custom visual theming, drawing new sprites and cre-
ating sound effects for Space Station Invaders, since AN-
GELINA was not capable of this. The commissions for
Wired and PC Gamer came much later, when ANGELINA
had more independence and could produce full games, given
just an initial theme of a short phrase, proposed by the jour-
nalist. For the PC Gamer game, NBA Mesquite Volume 2,
ANGELINA used a database of labelled textures compiled
from social media mining, for the first time in a released
game. This happened because the theme chosen, ‘avocado’,
matched a label in the database for the first time since the
database had been added. This created an additional talk-
ing point for the article, and in general the games were well
received and drove up online viewing figures.
The Paul drawing robot by Patrick Tresset (Tresset and
Fol Leymarie 2012) has much in common with The Painting
Fool, in that it uses a camera and machine vision techniques
to capture an image, then automatically draws a portrait: in
this case, physically, using a robotic arm and a pen. It also
simulates looking while it draws, but this is only for enter-
tainment purposes, i.e., after the initial photograph is taken,
the vision system is not used again. Paul has been commis-
sioned on a number of occasions, most notably for a week-
long workshop at the Centre Pompidou in late 2013. Tres-
set has also found success in selling versions of the robot
painter to art museums. Another robotic painter, which does
use machine vision during painting and has also been com-
missioned for art is the eDavid system, as described by (Lin-
demeier, Pirk, and Deussen 2013). Here, a camera is used
to photograph the canvas after a series of paint strokes have
been applied, with a vision system employed to optimise the
placement of future strokes based on the visual feedback.
It is beyond the scope of this paper to perform a sur-
vey of commissions where software creators rather than
artists controlling software have produced artworks. How-
ever, we can tentatively introduce some metrics for compar-
ing projects/software/programmers to begin to characterise
such commissions. For instance, one could compare the do-
main specific training of the programmer, e.g., comparing
the commissions of artist Harold Cohen (who represented
the UK in the Venice Biennale) and his AARON system
(McCorduck 1991) with Oliver Deussen (who has no artis-
tic training) and his eDavid system mentioned above, as this
may indicate more autonomy in the software (but doesn’t
necessarily). Other measures could include how much cura-
tion takes place, i.e., how much of the software’s output is
usable; what amount of hand-finishing of output takes place;
and how much extra coding is required for each project.
Conclusions and Future Work
Through the above projects, The Painting Fool has advanced
as an artist in three major ways. Firstly, the creative re-
sponsibility of choosing a painting style has been handed
to the software. With the You Can’t Know My Mind project,
it learned a probability distribution used to choose one of
eight painterly rendering styles, to produce an
image which people will probably describe using an adjec-
tive, chosen intentionally to express a mood. With the I Can
See Unclearly Now project, the software gained the abil-
ity to choose between 1,200 painting styles for each colour
region dynamically during painting. With the Portrait of
Geraint Wiggins project, it can go further: performing in
situ sketches to see which painting style is best in the con-
text of the painting at hand. Hence, the decision making
involved in determining rendering styles is now undertaken
by the software, which is a major advance.
Secondly, as we can see from close inspection of the
pieces in figures 5 and 6, while the images produced still
retain a painterly style somewhat, there are aspects which
simply couldn’t be produced with natural media. This is be-
cause the painting styles in its database include ones which
simulate the ground in-between natural media such as paints
and pastels, and others which have no analogue in the phys-
ical world. This means that – for the first time – The Paint-
ing Fool can produce images using a much broader range
of pixel manipulations, which we call Painting with Pixels
in the essay for the commission described above, and can
thus produce styles which have little grounding in traditional
painting, which we also see as a major advance.
The third advance will be expressed more in future work
than in the projects presented here. Through the mapping
of visual stimuli to linguistic concepts, The Painting Fool is
able to project adjectives onto images, and we plan to en-
hance this with the ability to similarly project nouns. This
will increase its capacity to appreciate its own work and that
of others, enabling it to provide more sophisticated com-
mentaries about what it has produced, and we touched on
this with the output in the You Can’t Know My Mind project,
where the conceived and rendered images are compared vi-
sually. We plan to take this framing further, with The Paint-
ing Fool keeping a sketch book for each project, adding
value, and helping audiences to understand its processes.
It's clear from figure 2 that the visuo-linguistic system
does not yet match that of people perfectly, e.g., we might
disagree with the system about which flower is more colour-
ful/textured, etc. This raises a philosophical question: is it
important that an automated artist has a visual system sim-
ilar to ours? We will tackle this question elsewhere, but
we can hint here at discussion points. Firstly, for commu-
nication/framing value, it might be preferable for the soft-
ware’s visual judgements to match ours as closely as pos-
sible. However, as illustrated by the recent internet storm
about the colours in a dress (Rogers 2015), we all have dif-
ferent visual perception systems, and notions of beauty dif-
fer from generation to generation and person to person. As
art is driven forward by such differences, it may be more in-
teresting and important artistically for us to learn The Paint-
ing Fool’s visual system, rather than it learning ours.
References
Charnley, J.; Pease, A.; and Colton, S. 2012. On the notion of fram-
ing in computational creativity. In Proceedings of the 3rd ICCC.
Colton, S., and Pérez-Ferrer, B. 2012. No photos harmed/growing
paths from seed – an exhibition. In Proceedings of Non-
Photorealistic Animation and Rendering.
Colton, S., and Ventura, D. 2014. You Can’t Know My Mind: A
festival of computational creativity. In Proc. of the 5th ICCC.
Colton, S.; Pease, A.; Corneli, J.; Cook, M.; Hepworth, R.; and
Ventura, D. 2015. Stakeholder groups in computational creativ-
ity research and practice. In Besold, T.; Schorlemmer, M.; and
Smaill, A., eds., Computational Creativity Research: Towards Cre-
ative Machines. Springer.
Colton, S.; Cook, M.; and Raad, A. 2011. Ludic considerations of
tablet-based evo-art. In Proceedings of EvoMusArt.
Colton, S. 2008a. Automatic invention of fitness functions with
application to scene generation. In Proceedings of EvoMusArt.
Colton, S. 2008b. Experiments in constraint based automated scene
generation. In Proceedings of the fifth international workshop on
Computational Creativity.
Colton, S. 2012a. Evolving a library of artistic scene descriptors.
In Proceedings of EvoMusArt.
Colton, S. 2012b. The Painting Fool: Stories from building an
automated painter. In McCormack, J., and d’Inverno, M., eds.,
Computers and Creativity, 3–38. Springer.
Cook, M., and Colton, S. 2014. Ludus ex machina: Building a 3D
game designer that competes alongside humans. In Proceedings of
the 5th ICCC.
Heath, D.; Norton, D.; and Ventura, D. 2014. Conveying semantics
through visual metaphor. ACM Transactions on Intelligent Systems
and Technology 5:31.
Krzeczkowska, A.; El-Hage, J.; Colton, S.; and Clark, S. 2010.
Automated collage generation – with intent. In Proceedings of the
1st ICCC.
Lindemeier, T.; Pirk, S.; and Deussen, O. 2013. Image styliza-
tion with a painting machine using semantic hints. Computers and
Graphics 37(5):293–301.
McCorduck, P. 1991. AARON’s Code: Meta-Art, Artificial Intelli-
gence, and the Work of Harold Cohen. W. H. Freeman & Co.
Montfort, N.; Pérez y Pérez, R.; Harrell, F.; and Campana, A. 2013.
Slant: A blackboard system to generate plot, figuration, and narra-
tive discourse aspects of stories. In Proceedings of the 4th ICCC.
Norton, D.; Heath, D.; and Ventura, D. 2010. Establishing appre-
ciation in a creative system. In Proceedings of the 1st ICCC.
Norton, D.; Heath, D.; and Ventura, D. 2011. Autonomously cre-
ating quality images. In Proceedings of the 2nd ICCC.
Norton, D.; Heath, D.; and Ventura, D. 2013. Finding creativity in
an artificial artist. Journal of Creative Behavior 47(2).
Rogers, A. 2015. The science of why no one agrees on the colour
of this dress. Wired, Science Section, 26th Feb.
Strothotte, T., and Schlechtweg, S. 2002. Non-Photorealistic Com-
puter Graphics. Morgan Kaufmann.
Torres, P.; Colton, S.; and Rueger, S. 2008. Experiments in exam-
ple based image filter retrieval. In Proceedings of the Workshop on
Cross-Media Information Analysis, Extraction and Management.
Tresset, P., and Fol Leymarie, F. 2012. Sketches by Paul the robot.
In Proceedings of the 8th Annual Symposium on Computational
Aesthetics in Graphics, Visualization, and Imaging.
Figure 6: Example images, each with detail, from the ‘I Can See Unclearly Now’ commission. First pair: from the Star Flowers
series. Second pair: from the Good Day, Bad Day series. Third pair: from the Dynamic Portraits: Alan Turing series.