CITAR Journal, Volume 11, No. 2 · Special Issue: xCoAx 2019
Uncertainties in the Algorithmic Image
Rosemary Lee
REAL group, Center for Computer Games Research,
Department of Digital Design,
IT-University of Copenhagen, Copenhagen, DK
-----
rosl@itu.dk
-----
ABSTRACT
The incorporation of algorithmic procedures into the
automation of image production has been gradual,
but has reached critical mass over the past century,
especially with the advent of photography, the
introduction of digital computers and the use of
artificial intelligence (AI) and machine learning (ML).
Due to the increasingly significant influence
algorithmic processes have on visual media, there
has been an expansion of the possibilities as to how
images may behave, and a consequent struggle to
define them. This algorithmic turn highlights inner
tensions within existing notions of the image, namely
raising questions regarding the autonomy of
machines, authorship and viewership, and the veracity
of representations. In this sense, algorithmic images
hover uncertainly between human and machine as
producers and interpreters of visual information,
between representational and non-representational,
and between visible surface and the processes
behind it. This paper gives an introduction to
fundamental internal discrepancies which arise within
algorithmically produced images, examined through a
selection of relevant artistic examples. Focusing on
the theme of uncertainty, this investigation considers
how algorithmic images contain aspects which
conflict with the certitude of computation, and how
this contributes to a difficulty in defining images.
KEYWORDS
Algorithmic Media; Image; Artificial Intelligence;
Machine Learning; Art; Aesthetics
1 | INTRODUCTION
Images are increasingly governed by algorithmic
procedures, which disrupts conceptions of the image
as the product of human creativity, as a way of evidencing reality and, above all, as visible. By
contrast, algorithmic images are derivative of “a set of
modular or autonomous instructions in execution”
(Bianco, 2018). An algorithmic image may be
transcoded as a text, executed by a computer, and
may or may not become visible in a form humans
would recognise as an image. It thus becomes
difficult to rely on earlier notions concerning what
makes an image, as they often neglect defining
features of visual media, such as digital aspects and
the potential of machines to semi-autonomously
generate and interpret visual information. The internal
tension between algorithmic images and historical
tendencies regarding what an image has been or
should be surfaces in conflicts regarding the role
played by machines in the processing of visual
information, forms of representation, and the
importance of process. This paper aims to develop a
better understanding of how the algorithmic
production of images contributes to the establishment
of new tendencies in visual media and ultimately to
new formulations of the image. It introduces the
central issues regarding the incorporation of
algorithmic processes into images and contextualises
them in reference to current artistic and technical
examples, as well as theories.
2 | ALGORITHMIC IMAGE
The algorithmic aspect of images entails that they are
constituted as part of the performance of operations,
rather than solely the product of those processes.
Harun Farocki’s operational image (2004) has been
influential in reframing the image in terms of the
execution of formal procedures, especially by
machines. This kind of image champions procedure
(Carvalhais, 2016) over other qualities formerly held
in high regard, such as resolution (Steyerl, 2009),
realism and being the product of human creativity.
There have arguably been precursors to current
algorithmic images in the much earlier production of
images according to analogue algorithmic processes.
Hoelzl and Marie point to similar behaviour at work in
the production of images according to ancient
representational canons governing the internal
proportional relations within a given image, as well as
in the transcription of maps as sets of coordinates
(2015). The use of systems of instructions has been
a recurring theme in several avant-garde art
movements in the 20th century, notably the
Surrealists’ engagement with the notion of
automatism. They approached the mechanisation of
art by advocating that artists relinquish conscious
control over the artistic process so as to arrive at art
produced by the subconscious mind. Automatic
writing, drawing, and painting led artists to develop
methodologies seeking to elude their own
consciousness, often by employing highly
systematised, rule-based techniques to surrender
creative control by engaging with serendipity and
randomness. In many instances, artists expressly
sought to hand over agency, intentionality, or control
to a process, machine or system. Aleatory processes,
such as rolling dice, or other techniques of
randomisation became popular methods for artistic
creation. Employing randomness or other processes
beyond the artist’s control enabled artists to work in
new ways by bringing in external influences.
Instructional and aleatory approaches have been
used by numerous artists including Vera Molnár,
Brion Gysin, Sol LeWitt, Yoko Ono and John Cage to
name a few. In the case of Molnár, the artist took on
the conceptual role of a computer, one which, or
whom, computes, performing tasks based on a set of
predefined rules (Broeckmann, 2016). Her early
drawings were performed in such a manner that they
may be considered examples of computer art, being
the product of computation, regardless of whether
they were produced using a computer, as such.
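By way of illustration, the following is a minimal sketch of the kind of rule-based drawing instruction described above. It is not a reconstruction of Molnár's actual procedure; the grid size, marks and rule are invented for the example.

```python
# A rule-based drawing procedure in the spirit described above (illustrative
# only): a grid of slash marks whose orientation is decided by a predefined
# rule combined with controlled randomness.
import random

random.seed(1)
ROWS, COLS = 8, 8

def choose_mark(row):
    # Rule: a backslash by default, flipped to a forward slash with a
    # probability that grows toward the lower rows, introducing controlled
    # disorder into an otherwise regular grid.
    return "/" if random.random() < row / ROWS else "\\"

for row in range(ROWS):
    print(" ".join(choose_mark(row) for _ in range(COLS)))
```

Executed by hand or by machine, the same instructions yield an image that is the product of computation in the sense discussed above.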
3 | NONHUMAN IMAGE
But the human eye perhaps finds itself in a
moment of misapprehension. The machine
constructs the image and we construct
another image out of what we think we are
seeing. (Pohflepp, 2017)
Given the degree to which machines participate in the
interpretation and creation of visual information, the
intimate interrelation between human and nonhuman
vision is embedded into algorithmic images. This can
be seen in forms of what has been referred to as nonhuman photography (Zylinska, 2017): autonomous image production indifferent to the human gaze. As
they are finely attuned to the parameters of human
vision, yet function in vastly different ways,
algorithmically produced images have a tendency to
reveal discrepancies between human vision and the
visual processes performed by computers.
Tracing the boundaries between human and
computer vision, adversarial examples [1] often rely
on the inherent differences between biological and
machine vision in order to cause errors in ML
systems. A common form of adversarial image is the
fooling image, generated with the intention of causing a computer to misclassify it, often while remaining legible to human viewers. Many
adversarial approaches aim to trigger an error while
being as undetectable as possible to humans, such
as in the case of the one-pixel attack (Su et al., 2019),
in which it was shown to be possible to trigger the misclassification of an image by modifying only one of its pixels. This kind of strategy exploits tasks which are easily performed by humans but which are challenging for current computers, as epitomised by CAPTCHA, the Completely Automated Public Turing test to tell Computers and Humans Apart (von Ahn et al., 2008). Humans can easily read distorted text or identify objects in images, yet deep neural networks may be easily fooled (Nguyen et al., 2014) into making classification errors while assigning a high degree of confidence to their results, as they interpret images through the analysis of relations between pixel values, not through visual resemblance to objects in the world, as we do.
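As a rough illustration of the single-pixel strategy mentioned above, the following sketch uses random search (rather than the differential evolution of Su et al., 2019) against a deliberately simplistic stand-in classifier; the model, image and threshold are all invented for the example.

```python
# Illustrative one-pixel "fooling" search against a toy classifier.
import random

random.seed(0)
W = H = 8

def toy_classify(image):
    # Stand-in for a trained network: decides purely on mean pixel value.
    mean = sum(sum(row) for row in image) / (W * H)
    return "bright" if mean > 0.5 else "dark"

# A uniform image sitting just above the classifier's decision boundary.
image = [[0.505 for _ in range(W)] for _ in range(H)]
original_label = toy_classify(image)

for _ in range(1000):
    x, y, value = random.randrange(W), random.randrange(H), random.random()
    candidate = [row[:] for row in image]
    candidate[y][x] = value  # modify a single pixel
    if toy_classify(candidate) != original_label:
        print(f"pixel ({x}, {y}) set to {value:.2f} flips "
              f"'{original_label}' to '{toy_classify(candidate)}'")
        break
```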
If the difference between various categories may be
a matter of a single pixel for a computer, it bears
consideration whether our own aesthetic frameworks
are equally flimsy. In spite of the lack of a consistent human metric against which to compare, there is a persistent inclination to consider the human the
measure of machines. The tradition of producing
fooling images, whether the audience is human or
computer, stems from the desire to be ourselves
fooled by images. Pre-cinema (Mannoni, 2000) is
filled with optical tricks, techniques and devices which
aim to deceive at the same time as to delight, and
serves as a reminder that ultimately, the power of the
image rests in illusion. In similar fashion to optical
tricks used to fool the human eye into seeing two
images in one depending on how one looks (Figure
1), algorithmically produced images may also
function on two levels: meeting our ways of seeing
(Berger, 1973) with ways of machine seeing (Cox,
2016), vacillating between conceptual categories
for us and for computers. The chihuahua or muffin
meme (Figure 2), for example, points to the fact that
certain ML algorithms have misclassified images of
muffins and chihuahuas interchangeably. There is
thus a tension between images’ human-readability
and their legibility to machines. This kind of uncertain
image (Ekman et al., 2017) shows that, although there
is a degree of visual similarity between some
blueberry muffins and chihuahua faces, the machinic
interpretation of these images has very little to do with
what we understand as vision and as representation.
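The following toy sketch, with entirely invented values, gestures at this gap: a "classifier" that attends only to crude pixel statistics cannot distinguish a picture from a scrambled version of itself, even though one depicts something and the other depicts nothing a human would recognise.

```python
# Illustrative only: classification by pixel statistics, blind to depiction.
import numpy as np

rng = np.random.default_rng(0)

def features(img):
    return np.array([img.mean(), img.std()])   # all this "classifier" sees

def classify(img, prototypes):
    f = features(img)
    return min(prototypes, key=lambda k: np.linalg.norm(prototypes[k] - f))

# Invented stand-ins for the statistics of reference images of each class.
prototypes = {
    "muffin": np.array([0.65, 0.15]),
    "chihuahua": np.array([0.45, 0.25]),
}

picture = rng.normal(0.65, 0.15, (8, 8)).clip(0, 1)         # stands in for a photo
scrambled = rng.permutation(picture.ravel()).reshape(8, 8)  # unrecognisable to a human

# Both receive the same label, since scrambling preserves the statistics.
print(classify(picture, prototypes), classify(scrambled, prototypes))
```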
Figure 1 | attributed to Charles Allan Gilbert, n.d.
4 | AUTOMATED IMAGE
The photographic paradigm brought with it a notion of
the image as a factual representation of reality as
mediated by an impartial machine, but this conflicts
with several aspects of current image production,
particularly that the processes behind algorithmically
produced images are neither truly autonomous nor
neutral, while also being fairly estranged from the
realities they represent. In this regard, artistic
authorship and the relation between the image and
reality become primary issues.
Figure 2 | chihuahua or muffin meme
The artistic validity of art produced by machines, and the notion of autonomy therein, has been a contentious issue for several decades and continues
to stir heated debate. The myth of the machine as
artist, as Broeckmann refers to it (2019), remains a
central element in the mythology surrounding art and
AI. Recent excitement around AI and art has
famously included the sale of collective Obvious’s AI-
generated portrait at Christie’s auction house. The
algorithm used to create the image was inscribed in
the lower right-hand corner, as though it were the
algorithm’s signature on its creation, stoking disputes
around authorship in addition to the fact that the
collective who produced the work were using
borrowed code in the first place (Simonite, 2018).
Other projects, such as Ian Cheng’s BOB (Bag of
Beliefs) (2018-2019), Memo Akten and Jennifer
Walshe’s ULTRACHUNK (2018), Holly Herndon’s
PROTO (2019) and Actress’s Young Paint (2019)
variously frame artistic authorship with AI in terms of
coevolution, giving birth, or collaborative artistic
production, often resting heavily on the idea of the AI
as a character in a narrative about the work. One of
the best-known precedents in this vein is Harold
Cohen’s explorations with his program AARON. From
the late 1960s until his death in 2016, he sought to
create an AI which could in turn produce art. Of
relevance here is the anecdote that his relationship to
the AI is said to have become strained when Cohen
perceived AARON’s creations as having eclipsed his
own role as an artist, to which Cohen responded by
colouring on top of AARON’s drawings (Reichardt,
2018).
The truthfulness of algorithmic images also comes into question when it is possible to generate
believable likenesses of reality. Looking closely at
thispersondoesnotexist.com (Wang, 2019), for
example, we see images which have face-like
qualities, but which have more to do with statistics
than resemblance. The faces represented in these
images, while highly realistic, are not windows into
the interior world of the person whose face stares out at us, as traditional portraiture has often aimed toward; they are simulacra (Baudrillard, 2010),
computational portraits without sitters. The
algorithmically generated face may be thought of as
a functional approximation of what a human may take
to be the face of another human, rather than a
representation of how computers interpret humans to
be or to appear. At the same time, an algorithmically
generated image bearing a resemblance to a face is
no less a depiction of a face than traditional forms of
images, such as photography, painting or drawing,
which have indirect (Harman, 2017) relationships with
the objects they are meant to depict.
Figure 3 | Mosaic Virus, Anna Ridler, 2019.
With the potential to generate innumerable images
of things which do not necessarily point to any
corresponding objects in the real world, there arises
an issue as to what to make of images which are not
referential, but which appear to be so. Anna Ridler’s
Mosaic Virus is an interesting example of this curious
relationship between real-world objects and images
generated by deep neural networks. The process of
creating the work involved meticulously
photographing 10,000 actual tulips, which functioned
as a dataset with which to train an algorithm. From
this, a video work was produced in which the visual appearance of the generated tulips is influenced by fluctuations in the value of bitcoin.
it echoes 17th century Dutch still life flower
paintings which, despite their realism, are
“botanical impossibilities” and imagined as all
the flowers in them could never bloom at the
same time. (Ridler, 2019)
The likenesses of flowers in the work are believable,
yet they are not representations of specific flowers.
They partially correspond to the real, being
amalgamations derived from thousands of images of
actual flowers, while actually being simulacra [2]
(Baudrillard, 2010).
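A minimal sketch of how an external signal might steer such generated imagery, in the spirit of Mosaic Virus but with every name and number invented for illustration: the signal is mapped onto a position between two latent vectors, so the pictured "flower" drifts as the value fluctuates.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 64

def generate_image(z):
    # Placeholder for a trained generator network; returns a toy "image".
    return np.tanh(z).reshape(8, 8)

z_a, z_b = rng.normal(size=LATENT_DIM), rng.normal(size=LATENT_DIM)  # two latent "flowers"

price_series = [30.0, 32.5, 28.0, 41.0, 39.5]       # invented external signal
low, high = min(price_series), max(price_series)

frames = []
for price in price_series:
    t = (price - low) / (high - low)                # normalise the signal to [0, 1]
    z = (1 - t) * z_a + t * z_b                     # interpolate in latent space
    frames.append(generate_image(z))

print(len(frames), frames[0].shape)                 # a short sequence of 8x8 "frames"
```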
This sort of bricolage, often employed in ML, is related to the cut-up method (Burroughs, 2003) of creating new artworks from the recombination of existing material. Hito Steyerl’s This is the Future similarly creates composite images of flowers by combining existing ones, within a larger critique of the friction between the predictive intentions of ML and its reliance on past data (Steyerl, 2019). It is significant that although algorithmic approaches are able to produce new visual content, they do so by making conjectures from
databases of existing material. This means that while
they have a degree of novelty, it is restricted,
effectively, to projecting the future from what has
occurred in the past. ML itself hovers
between the goal of prediction and its basis in
previous data, meaning that the images created are
in some aspects new, while also being reiterations of
existing patterns.
5 | PROCEDURAL IMAGE
In the creation of images using generative adversarial
networks (GANs) [3], the process begins with noise. In
this case, visual noise, or rather random pixel values,
amounts to guessing. Effectively, the closer the
generator gets to producing a believable image, the
higher the score it will receive from the discriminator.
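To make the roles of the two networks concrete, the following is a minimal sketch of a GAN training loop on toy one-dimensional data rather than images, assuming PyTorch is available; it is not the procedure behind any of the works discussed here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # "Real" data the generator should learn to imitate: a 1-D Gaussian.
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # sample -> score

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator: learn to score real samples high and generated ones low.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: the more believable its output, the higher the score it
    # receives from the discriminator, and the lower its own loss.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

samples = G(torch.randn(5, 8)).detach().squeeze()
print(samples)  # values should drift toward the real data's range (around 2.0)
```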
Pierre Huyghe’s UUmwelt (2018) focuses heavily on
the image as process-based and transcendent of
media-specificity. Huyghe conveyed a mental image of particular objects to individuals through speech alone, then asked them to think of their respective object while using functional magnetic resonance imaging (fMRI) [4] to record their brain activity. Next, the fMRI
data was interpreted by a GAN in order to render
images from the recorded neural activity. The resulting
images are finally animated on video screens,
displaying differently based on the presence or
absence of viewers in the exhibition space. The
transient images in this work take on various forms, from mental image to verbal image to coded image, finally taking the form of a digital image displayed on a
screen. There are moments of latency, when an image
may or may not be visible, but nonetheless maintains
its consistency as an image.
The work does not need the public. It’s not
made for us. It’s not addressed to us. It doesn’t
need the gaze to exist. It can live its life as a
work without that need. (Huyghe, 2018)
This understanding of the image as process-based
and not primarily visual is aided by Farocki’s
operational image. In his seminal essay Phantom
Images (2004) and the associated trio of video works Eye / Machine I-III (2001), Farocki highlighted the fact that
the automation of image processes had already
reached a critical mass by the 1990s in the military
and governmental use of intelligent machines and
surveillance technologies to automate visual
processing tasks. This results in the production of
operational images, which, Farocki explains, “are
images that do not represent an object, but rather are
part of an operation.” This kind of image is connected
to the real through the enactment of a procedure
instead of representing something other than itself. In
an operational image, what is visible (displayed on a
screen or otherwise) is merely a by-product of the
performance of an operation, not the explicit end of
that performance. Farocki's work on operational
images has been described as an exploration of how
to see like a machine, and it offers a useful
perspective on the human interpretation of images
intended for computers, which he describes as
possessing a “sightless vision” reliant on
computational processes such as the programmed
navigation of robots and drones. The operational
image is a central concept for understanding
algorithmically produced visual media, because it
diverges from previous notions of the image which
have tended to prioritise visual attributes of objects
over processes.
6 | CONCLUSIONS
Traditional criteria for the evaluation of images have
tended to prioritise human perception and ability, a
direct symbolic relationship between image and the
real, and the permanence, objecthood and visibility of
images. These ideas fall short of adequately judging images created using current
technologies, as they fail to address the extent to
which algorithmic media have augmented the
character of what may be understood as an image.
Not only may an image exist outside the perceptual
capacities of humans, but it may be created with little
human intervention, at that. Additionally, producing
images acts as a way of mediating our reality, and
visual technologies intercede in that mediation of
reality. Far from faithfully representing reality in an
impartial manner, visual media participate in the
production of new realities through appearances. The
interchangeability between operations and visual
processes which occurs within algorithmic media
changes the image from a fixed, physical and primarily
visual entity into the performance of a spatial
operation. These disparities between current visual
media and existing notions of the image demonstrate
how algorithmic processes contribute to new
modalities of the image. They also point to a growing
area of potential difficulties regarding not only
aesthetic and cultural concerns, but also what
measure of truth can be expected from images now.
ENDNOTES
[1] Adversarial images are inputs designed to cause
errors in ML systems, either with the intention to harm
the system or to test and to improve it.
[2] Echoing OOO’s indirect relations. See (Harman,
2017).
[3] GANs are a generative form of ML which involves
two distinct parts: a generator and a discriminator,
which compete with one another. The images
produced by the generator can appear strikingly
similar to actual digital photographs.
[4] fMRI is a technique for estimating neural activity in the brain by measuring blood flow, the two being coupled.
REFERENCES
Actress. (2019). Actress + Young Paint (Live AI/AV)
[Performance].
Akten, M., & Walshe, J. (2018). ULTRACHUNK
[Performance].
von Ahn, L., Maurer, B., McMillen, C., Abraham, D.,
& Blum, M. (2008). reCAPTCHA: Human-Based Character Recognition via Web Security Measures. Science, 321, 1465–1468.
Baudrillard, J. (2010). Simulacra and Simulation. Ann
Arbor: University of Michigan.
Berger, J. (1973). Ways of Seeing. London: BBC,
Penguin Books.
Bianco, J. “Skye.” (2018). Algorithm. In R. Braidotti & M.
Hlavajova (Eds.), Posthuman Glossary (pp. 23–26). London, New York: Bloomsbury Academic.
Broeckmann, A. (2016). Image Machine. In Machine
Art in the Twentieth Century (pp. 123–164).
Cambridge: MIT Press.
Broeckmann, A. (2019). The Machine as Artist as
Myth. Arts, 8(1), 25.
https://doi.org/10.3390/arts8010025
Burroughs, W. S. (2003). The Cut-Up Method of Brion
Gysin. In N. Wardrip-Fruin & N. Montfort (Eds.),
The New Media Reader. Cambridge, London:
MIT Press.
Carvalhais, M. (2016). Procedural Practices. In
Artificial Aesthetics: Creative Practices in
Computational Art and Design (pp. 145–178).
Porto: U. Porto Edições.
Cheng, I. (2018). BOB (Bag of Beliefs) [Artificial
lifeform].
Cox, G. (2016). Ways of Machine Seeing. Unthinking
Photography.
https://unthinking.photography/articles/ways-of-
machine-seeing
Ekman, U., Agostinho, D., Bonde Thylstrup, N., &
Veel, K. (2017). The Uncertainty of the Uncertain
Image. Digital Creativity, 28(4), 255–264.
Farocki, H. (2001). Eye / Machine I-III [Video].
Farocki, H. (2004). Phantom Images (B. Poole,
Trans.). PUBLIC, 29, 12–22.
Harman, G. (2017). Aesthetics is the Root of All
Philosophy. In Object Oriented Ontology: A New
Theory of Everything (pp. 61–102). London:
Pelican Books.
Herndon, H. (2019). PROTO [Album].
Hoelzl, I., & Marie, R. (2015). Softimage: Towards a
New Theory of the Digital Image. Bristol: Intellect
Ltd.
Huyghe, P. (2018). UUmwelt [Exhibition].
Huyghe, P., & Obrist, H. U. (2018). Pierre Huyghe in conversation with Hans Ulrich Obrist.
https://www.youtube.com/watch?v=emYOOVRz
G8E
Mannoni, L. (2000). The Great Art Of Light And
Shadow: Archaeology of the Cinema. Exeter,
Devon: University of Exeter Press.
Obvious. (2018). Edmond De Belamy [Print].
Pohflepp, S. (2017). Spacewalk [Installation].
https://pohflepp.net/Work/Spacewalk
Ridler, A. (2019). Mosaic Virus [Video-installation].
http://annaridler.com/mosaic-virus
Simonite, T. (2018, November 20). How a Teenager’s
Code Spawned a $432,500 Piece of Art.
https://www.wired.com/story/teenagers-code-
spawned-dollar-432500-piece-of-art/
Steyerl, H. (2019). This is the Future [Video-
installation].
Steyerl, H. (2009). In Defense of the Poor Image. e-flux Journal, 10.
Su, J., Vargas, D. V., & Sakurai, K. (2019). One pixel attack for fooling deep neural networks. arXiv:1710.08864v5 [cs.LG].
Wang, P. (2019). This Person Does Not Exist.
https://thispersondoesnotexist.com
Zylinska, J. (2017). Nonhuman Photography.
Cambridge, London: MIT Press.
BIOGRAPHICAL INFORMATION
Rosemary Lee is an artist and PhD fellow at the IT-
University of Copenhagen. In her PhD project, Seeing
with Machines, she researches how notions of the
image are impacted by algorithmic media, analysing
and contextualising artistic and technical examples in
terms of their earlier precursors, and considering
what this means for what an image is today. Lee’s
research and artistic work have been shown
internationally in contexts including the exhibition and
symposium SCREENSHOTS: desire and automated
image (Galleri Image, 2019), a new we (Kunsthall
Trondheim, 2017), and Obsessive Sensing (LEAP,
2014). Her project Molten Media (2013-2018) was
exhibited in machines will watch us die (The Holden
Gallery, 2018), Hybrid Matters (Nikolaj Kunsthal,
2016), Pitch Drop (Science Friction, 2013), and
resulted in the publication of a book in the context of
the transmediale Vilém Flusser Archive Residency for
Artistic Research (2014).