Project

Machine Vision in Everyday Life: Playful Interactions with Visual Technologies in Digital Art, Games, Narratives and Social Media

Goal: This five-year, ERC-funded project (2018-2023), led by Professor Jill Walker Rettberg, explores how new algorithmic images are affecting us as a society and as individuals. The Machine Vision team will study theories and histories of visual technologies and current machine vision, analyse digital art, computer games and narrative fictions that use machine vision as theme or interface, and examine the experiences of users and developers of consumer-grade machine vision apps. Three main research questions are woven through all the approaches, addressing 1) new kinds of agency and subjectivity; 2) visual data as malleable; 3) values and biases.

This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771800).

Date: 1 August 2018 - 31 July 2022


Project log

Jill Walker Rettberg
added a research item
This commentary tests a methodology proposed by Munk et al. (2022) for using failed predictions in machine learning as a method to identify ambiguous and rich cases for qualitative analysis. Using a dataset describing actions performed by fictional characters interacting with machine vision technologies in 500 artworks, movies, novels and videogames, I trained a simple machine learning algorithm (using the kNN algorithm in R) to predict whether an action was active or passive using only information about the fictional characters. Predictable actions were generally unemotional and unambiguous activities where machine vision technologies were treated as simple tools. Unpredictable actions, that is, actions that the algorithm could not correctly predict, were more ambivalent and emotionally loaded, with more complex power relationships between characters and technologies. The results thus support Munk et al.'s theory that failed predictions can be productively used to identify rich cases for qualitative analysis. This test goes beyond simply replicating Munk et al.'s results by demonstrating that the method can be applied to a broader humanities domain, and that it does not require complex neural networks but can also work with a simpler machine learning algorithm. Further research is needed to develop an understanding of what kinds of data the method is useful for and which kinds of machine learning are most generative. To support this, the R code required to produce the results is included so the test can be replicated. The code can also be reused or adapted to test the method on other datasets.
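The workflow described above can be sketched in miniature. The published analysis uses the kNN algorithm in R on the project's real dataset; the sketch below is a stand-in in Python with invented toy records, showing the core move: classify each action with leave-one-out kNN and collect the misclassifications as candidates for qualitative close reading.

```python
from collections import Counter
import math

# Toy records: (numeric character features, action label). The two features
# stand in for encodings such as species (0 = human, 1 = robot) and an
# invented trait score; they are not fields from the actual dataset.
data = [
    ((0.0, 0.9), "active"),
    ((0.0, 0.8), "active"),
    ((1.0, 0.2), "passive"),
    ((1.0, 0.1), "passive"),
    ((0.0, 0.2), "passive"),
    ((1.0, 0.9), "active"),
    ((0.0, 0.5), "active"),
    ((1.0, 0.5), "passive"),
]

def knn_predict(train, x, k=3):
    """Return the majority label among the k nearest training records."""
    nearest = sorted(train, key=lambda rec: math.dist(rec[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Leave-one-out evaluation: predict each record from all the others and
# keep the ones the classifier gets wrong.
failed = []
for i, (x, label) in enumerate(data):
    train = data[:i] + data[i + 1:]
    if knn_predict(train, x) != label:
        failed.append((x, label))

# The "failed predictions" are the candidates for qualitative analysis.
print(failed)
```

In this toy run the misclassified records are the ones whose labels run against their neighbours, which is the kind of ambivalence the method is meant to surface.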
Ragnhild Solberg
added a research item
As the increasingly ubiquitous field of surveillance has transformed how we interact with each other and the world around us, surveillance interactions with virtual others in virtual worlds have gone largely unnoticed. This article examines representations of digital games’ diegetic surveillance cameras and their relation to the player character and player. Building on a dataset of forty-one titles and in-depth analyses of two 2020 digital games that present embodied surveillance camera perspectives, Final Fantasy VII Remake (Square Enix 2020) and Watch Dogs: Legion (Ubisoft Toronto 2020), I demonstrate that the camera is crucial in how we organize, understand, and maneuver the fictional environment and its inhabitants. These digital games reveal how both surveillance power fantasies and their critique can coexist within a space of play. Moreover, digital games often present a perspective that blurs the boundaries between the physical and the technically mediated through a flattening of the player’s “camera” screen and in-game surveillance cameras. Embodied surveillance cameras in digital games make the camera metaphor explicit as an aesthetic, narrative, and mechanical preoccupation. We think and play with and through cameras, drawing attention to and problematizing the partial perspectives with which worlds are viewed. I propose the term cyborg vision to account for this simultaneously human and nonhuman vision that is both pluralistic and situated, and argue that, through cyborg vision, digital games offer an embodied experience of surveillance that will be increasingly relevant in the future.
Ragnhild Solberg
added a research item
This data paper documents a dataset that captures cultural attitudes towards machine vision technologies as they are expressed in art, games and narratives. The dataset includes records of 500 creative works (including 77 digital games, 190 digital artworks and 233 movies, novels and other narratives) that use or represent machine vision technologies like facial recognition, deepfakes, and augmented reality. The dataset is divided into three main tables, relating to the works, to specific situations in each work involving machine vision technologies, and to the characters that interact with the technologies. Data about each work includes title, author, year and country of publication; types of machine vision technologies featured; topics the work addresses, and sentiments associated with that machine vision usage in the work. In the various works we identified 874 specific situations where machine vision is central. The dataset includes detailed data about each of these situations that describes the actions of human and non-human agents, including machine vision technologies. The dataset is the product of a digital humanities project and can also be viewed as a database at http://machine-vision.no. Data was collected by a team of topic experts who followed an analytical model developed to explore relationships between humans and technologies, inspired by posthumanist and feminist new materialist theories. The dataset as well as the more detailed database can be viewed, searched, extracted, or otherwise used or reused, and is considered particularly useful for humanities and social science scholars interested in the relationship between technology and culture, and for designers, artists, and scientists developing machine vision technologies.
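The three-table structure described above (works, situations, characters) might be modelled by a reuser roughly as follows. Every field name and value in this sketch is invented for illustration; the actual schema is documented in the data paper and at http://machine-vision.no.

```python
# Illustrative sketch of the dataset's three linked tables: works, situations
# within works, and characters interacting with the technologies. All field
# names and values are invented; consult the data paper for the real schema.
works = {
    "w1": {
        "title": "Example Game",
        "year": 2020,
        "country": "Norway",
        "technologies": ["Surveillance cameras", "Facial recognition"],
        "topics": ["Surveillance"],
        "sentiments": ["Critical"],
    },
}

situations = [
    {"work_id": "w1", "situation": "Guard watches the player through CCTV",
     "agents": ["Guard", "Surveillance camera"]},
]

characters = [
    {"work_id": "w1", "name": "Guard", "species": "Human"},
]

# Joining situations back to their parent works, as a dataset reuser might:
joined = [(works[s["work_id"]]["title"], s["situation"]) for s in situations]
print(joined)
```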
Jill Walker Rettberg
added 2 research items
This dataset captures cultural attitudes towards machine vision technologies as they are expressed in art, games and narratives. The dataset includes records of 500 creative works (including 77 digital games, 191 digital artworks and 236 movies, novels and other narratives) that use or represent machine vision technologies like facial recognition, deepfakes, and augmented reality. The dataset is divided into three main tables, relating to the works, to specific situations in each work involving machine vision technologies, and to the characters that interact with the technologies. Data about each work includes title, author, year and country of publication; types of machine vision technologies featured; topics the work addresses, and sentiments associated with that machine vision usage in the work. In the various works we identified 884 specific situations where machine vision is central. The dataset includes detailed data about each of these situations that describes the actions of human and non-human agents, including machine vision technologies. The dataset is the product of a digital humanities project and can also be viewed as a database at http://machine-vision.no. Data was collected by a team of topic experts who followed an analytical model developed to explore relationships between humans and technologies, inspired by posthumanist and feminist new materialist theories. The project team identified relevant works by searching databases, visiting exhibitions and conferences, reading scholarship, and consulting other experts.
The inclusion criteria were creative works (art, games, narratives (movies, novels, etc.)) where one of the following machine vision technologies was used in or represented by the work: 3D scans, AI, Augmented reality, Biometrics, Body scans, Camera, Cameraphone, Deepfake, Drones, Emotion recognition, Facial recognition, Filtering, Holograms, Image generation, Interactive panoramas, Machine learning, MicroscopeOrTelescope, Motion tracking, Non-Visible Spectrum, Object recognition, Ocular implant, Satellite images, Surveillance cameras, UGV, Virtual reality, and Webcams. The dataset as well as the more detailed database can be viewed, searched, extracted, or otherwise used or reused, and is considered particularly useful for humanities and social science scholars interested in the relationship between technology and culture, and for designers, artists, and scientists developing machine vision technologies.
R scripts for analysing "A Dataset Documenting Representations of Machine Vision Technologies in Artworks, Games and Narratives". Scripts are included to provide general import and reformatting of the data as well as scripts for plotting figures showing geographical distribution of the works in dataset, and showing distribution in time.
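The scripts themselves are in R; as a language-agnostic sketch of the aggregation step behind such figures (counting works per country and per publication year before plotting), one might write something like the following, with invented records rather than rows from the actual dataset:

```python
# Sketch of the aggregation behind the distribution figures: counting works
# per country and per publication year. The records below are invented
# examples, not entries from the actual dataset.
from collections import Counter

works = [
    {"title": "Example Artwork", "country": "Germany", "year": 2016},
    {"title": "Example Novel", "country": "USA", "year": 2017},
    {"title": "Example Game", "country": "USA", "year": 2020},
    {"title": "Example Film", "country": "Japan", "year": 2017},
]

# Tally geographical and temporal distributions; these counts would then be
# passed to a plotting library to produce the figures.
by_country = Counter(w["country"] for w in works)
by_year = Counter(w["year"] for w in works)

print(by_country.most_common())
print(sorted(by_year))
```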
Jill Walker Rettberg
added an update
We spent more than two years playing video games, exploring digital art and watching and reading scifi movies and novels and more, and look, we have data to share! This preprint of our data paper describes the dataset, which you can see here: https://doi.org/10.18710/2G0XKN. It documents 500 creative works where machine vision technologies like facial recognition, image generation, holograms and more are used or represented, including 77 video games, 190 digital artworks and 233 narratives (movies, novels etc). In these works we identified 884 situations where human and non-human agents interact with machine vision technologies.
We're currently hard at work analysing the data, but want to share it too, both because we think it can be useful for other researchers and because the data paper helped us think through the whole process of collecting, analysing and sharing data in a really productive way. The process of writing up and publishing the data has been quite transformative for how our team thinks about data in the humanities - in the future, we'll be thinking as much about what data we want to produce and why and how we would want to share it and what other researchers might want to use it for as we will about publications and other research outcomes.
We've submitted the data paper to a journal, and any feedback on the preprint will help us make it better before it's finally published, so please do tell us what you think!
 
Jill Walker Rettberg
added a research item
This data paper documents a dataset that captures cultural attitudes towards machine vision technologies as they are expressed in art, games and narratives. The dataset includes records of 500 creative works (including 77 digital games, 190 digital artworks and 233 movies, novels and other narratives) that use or represent machine vision technologies like facial recognition, deepfakes, and augmented reality. The dataset is divided into three main tables, relating to the works, to specific situations in each work involving machine vision technologies, and to the characters that interact with the technologies. Data about each work includes title, author, year and country of publication; types of machine vision technologies featured; topics the work addresses, and sentiments associated with that machine vision usage in the work. In the various works we identified 874 specific situations where machine vision is central. The dataset includes detailed data about each of these situations that describes the actions of human and non-human agents, including machine vision technologies. The dataset is the product of a digital humanities project and can also be viewed as a database at http://machine-vision.no. Data was collected by a team of topic experts who followed an analytical model developed to explore relationships between humans and technologies, inspired by posthumanist and feminist new materialist theories. The dataset as well as the more detailed database can be viewed, searched, extracted, or otherwise used or reused, and is considered particularly useful for humanities and social science scholars interested in the relationship between technology and culture, and for designers, artists, and scientists developing machine vision technologies.
Ragnhild Solberg
added a research item
[English translation of full text available: https://doi.org/10.33767/osf.io/zd284] Holograms are common background features conveying a science fiction mood. Digital games allow us to experience worlds where holograms are positioned as actors with functions beyond being atmospheric objects. This article tracks a broad cultural understanding of the hologram and identifies holographic representations in 24 digital games. This is followed by a close reading of holograms in the video game Horizon Zero Dawn (Guerrilla Games, 2017). These holograms provide access to forgotten knowledge and place players and player characters in actively observing positions while the past is replayed in navigable cutscenes. I argue that the holograms’ aesthetic, narrative, and mechanical functions challenge binary conceptualizations of presence and agency. This happens diegetically in the virtual environment but is also mirrored between player and game. Digital game holograms mediate thematically and formally between human and nonhuman actors, which helps us see how machines and humans are connected through agency in complex posthuman assemblages.
Gabriele de Seta
added a research item
The APAIC Report on the Holocode Crisis is a short story that imagines the future of machine-readable data encodings. In this story, I speculate about the next stage in the development of data encoding patterns: after barcodes and QR codes, the invention of “holocodes” will make it possible to store unprecedented amounts of data in a minuscule physical surface. As a collage of nested fictional materials (including ethnographic fieldnotes, interview transcripts, OCR scans, and intelligence reports) this story builds on the historical role of barcodes in supporting consumer logistics and the ongoing deployment of QR codes as anchors for the platform economy, concluding that the geopolitical future of optical governance is tied to unassuming technical standards such as those formalizing machine-readable representations of data.
Gabriele de Seta
added a research item
In China, deepfakes are commonly known as huanlian, which literally means “changing faces.” Huanlian content, including face-swapped images and video reenactments, has been circulating in China since at least 2018, at first through amateur users experimenting with machine learning models and then through the popularization of audiovisual synthesis technologies offered by digital platforms. Informed by a wealth of interdisciplinary research on media manipulation, this article aims at historicizing, contextualizing, and disaggregating huanlian in order to understand how synthetic media is domesticated in China. After briefly summarizing the global emergence of deepfakes and the local history of huanlian, I discuss three specific aspects of their development: the launch of the ZAO app in 2019 with its societal backlash and regulatory response; the commercialization of deepfakes across formal and informal markets; and the communities of practice emerging around audiovisual synthesis on platforms like Bilibili. Drawing on these three cases, the conclusion argues for the importance of situating specific applications of deep learning in their local contexts.
Jill Walker Rettberg
added an update
Gabriele de Seta’s new paper is out in Convergence! Through three case studies, Gabriele shows how deepfakes have been received differently in China than in the US and Europe. Perhaps this is caused by, or reflected in, the Chinese term huanlian, which means “changing faces”. While the Western response to deepfakes has focused on the danger of fakes and what this could do to trust, the Chinese response emphasised more practical points like fraud risks, image rights, ethical imbalances, economic profit and regulation.
 
Ragnhild Solberg
added a research item
Machine vision – the registration, analysis, and representation of visual information by machines and algorithms (Rettberg 2017) – is currently hiding behind videogames’ playful exterior. However, machine vision technologies such as night vision overlays, facial recognition systems, and surveillance cameras have been represented within virtual environments for decades. To bring this technology into the light and acknowledge it as an important agent, I build on theorizations of videogames as assemblages of multiple agents (Taylor 2009) and of posthuman interrelated agency (Hayles 2017; Braidotti 2013). This study thus provides an overview of diegetic representations of machine vision in videogames in order to begin an analysis of distributed agency between human and nonhuman agents.
Jill Walker Rettberg
added an update
Jill Walker Rettberg's new paper proposes situated data analysis as a new method for analysing social media platforms and digital apps. Watch a brief animation here for the key points! https://www.youtube.com/watch?v=WRkahyMyy5I&feature=youtu.be
 
Jill Walker Rettberg
added a research item
This paper proposes situated data analysis as a new method for analysing social media platforms and digital apps. An analysis of the fitness tracking app Strava is used as a case study to develop and illustrate the method. Building upon Haraway’s concept of situated knowledge and recent research on algorithmic bias, situated data analysis allows researchers to analyse how data is constructed, framed and processed for different audiences and purposes. Situated data analysis recognises that data is always partial and situated, and it gives scholars tools to analyse how it is situated, and what effects this may have. Situated data analysis examines representations of data, like data visualisations, which are meant for humans, and operations with data, which occur when personal or aggregate data is processed algorithmically by machines, for instance to predict behaviour patterns, adjust services or recommend content. The continuum between representational and operational uses of data is connected to different power relationships between platforms, users and society, ranging from normative disciplinary power and technologies of the self to environmental power. Environmental power, a concept that has begun to be developed in analyses of digital media, is embedded in the environment, making certain actions easier or more difficult, and thus remains external to the subject, in contrast to disciplinary power, which is internalised. Situated data analysis can be applied to the aggregation, representation and operationalization of personal data in social media platforms like Facebook or YouTube, or by companies like Google or Amazon, and gives researchers more nuanced tools for analysing power relationships between companies, platforms and users.
Gabriele de Seta
added a research item
In the second half of the 2010s, AI has become a major hype across Chinese tech industries, venture capital investment, and government policy. The BAT national champions (Baidu, Alibaba and Tencent) have heavily invested in AI research and development, opening research centers in China and abroad to attract global talent, while thousands of startups have jumped on the AI hype to attract investment and reap the benefits of generous government funding. In a wave of innovation rhetoric closely resembling the previous hypes around Web 2.0 and Big Data, AI has become the most recurring buzzword in Chinese tech: besides its more predictable applications (industrial automation, self-driving cars, natural language processing and computer vision), almost everything in China – from e-commerce platforms to public utilities – is revamped as an ostensibly ‘AI-powered’ service. Drawing on research into the development of artificial intelligence technologies and products, this chapter charts China’s AI hype via its representation across government policy documents, industry advertisement, commercial products and popular culture. As trends and catchphrases travel between corporate boardrooms and policy think tanks to propaganda materials and music videos, the interplay between technical innovation and planned development reveals how AI is constructed, in real time, at the intersection of sociotechnical constraints and national imaginations.
Marianne Gunderson
added a research item
Machine vision technologies are increasingly ubiquitous in society and have become part of everyday life. However, the rapid adoption has led to ethical concerns relating to privacy, bias and accuracy. This paper presents the methodology and some preliminary results from a digital humanities project that is mapping and categorising references to and uses of machine vision in digital art, narratives and games in order to find patterns that may help us understand the broader cultural understandings of machine vision in society. Understanding the cultural significance and valence of machine vision is crucial for developers of machine vision technologies, so that new technologies are designed to meet general needs and ethical concerns, and ultimately contribute to a better, more just society.
Jill Walker Rettberg
added a research item
Machine vision technologies are increasingly ubiquitous in society and have become part of everyday life. However, the rapid adoption has led to ethical concerns relating to privacy, agency, bias and accuracy. This paper presents the methodology and preliminary results from a digital humanities project that maps and categorises references to and uses of machine vision in digital art, narratives and games in order to find patterns to help us analyse broader cultural understandings of machine vision in society. Understanding the cultural significance and valence of machine vision is crucial for developers of machine vision technologies, so that new technologies are designed to meet general needs and ethical concerns, and ultimately contribute to a better, more just society.
Jill Walker Rettberg
added a research item
Algorithms increasingly govern visual media, not only by sorting and ranking images on social media, but also by determining which images are taken in the first place, through algorithms for aesthetic inference that are built into our cameras. This article explores how these algorithms work, and what kinds of aesthetic criteria are programmed into these algorithmic judgements of taste. We do not have direct access to the algorithms, so instead I analyse three groups of Instagram images (the Instagram account @Insta_repeat, NRK's hashtag campaign #nrksommer, and the twenty most popular images on Instagram) to understand the different ways "a good picture" can be defined in the interplay between algorithms and humans. In addition, I analyse computer science articles on aesthetic inference algorithms, and I make a historical comparison between the aesthetics of the camera clubs of the 19th and 20th centuries and the datasets today's algorithms are trained on. The main argument is that we are programming algorithms that will give us ever more uniform photographs, and that these algorithms are driven by a commercial logic that aims to make us consume more.
Jill Walker Rettberg
added an update
Pierre Huyghe's current exhibition at the Serpentine Gallery in London is fascinating from a machine vision point of view, because it uses technology developed by the Kamitani Lab (https://twitter.com/ykamit/status/948807195205840896) that uses neural networks to reconstruct images people see from fMRI scans. I visited the exhibition last weekend, and spent the next couple of days reading up on the technology (e.g. here: ). My biggest question is what it really means to "see" here - not just for the machine, but how do Kamitani and his colleagues conceptualise human sight? As something that can be completely represented by brainwaves? Here's my blog post about it: http://jilltxt.net/?p=4795 --Jill
 
Jill Walker Rettberg
added an update
In our first project workshop, we brainstormed the past and future of machine vision and wrote fictional blurbs of and bibliographies for the books that will need to have been written for us to understand this technological shift. Amazingly generative discussions with a creative and very inspiring group of scholars, artists and designers. Lots more to come!
 
Jill Walker Rettberg
added an update
We are hiring three PhD fellows to do aesthetic/cultural research on machine vision in digital art, narrative fiction and computer games. An MA in a relevant discipline (e.g. digital culture, media studies, comparative literature, art history, game studies) is required, and the application deadline is June 20. Annual salary is about €45,000 (USD 55,000), and the successful applicants will be eligible for all Norwegian welfare benefits, such as parental leave and universal health care.
 
Jill Walker Rettberg
added an update
We set up a Facebook page (https://www.facebook.com/machinevisionresearch/) and a Twitter account (https://twitter.com/machvisionERC) for sharing links to the almost-daily stories about new ways in which machine vision is being used in art and society. Sometimes there'll be links to blog posts too, like today, when I realised that my iPhone's image recognition algorithms thought the most salient feature of one of my selfies was "brassiere" (https://medium.com/@jilltxt/best-guess-for-this-image-brassiere-6fea27f90a53).
(Sorry about the URLs, but bizarrely enough, ResearchGate won't allow links in the updates!)
 
Jill Walker Rettberg
added an update
Soon we'll be advertising the three PhD fellowships for the ERC project. Here's a preview from the advertisement text:
MACHINE VISION aims to develop a theory of how everyday machine vision (e.g. facial recognition algorithms, selfie filters, image manipulation, drone cameras and home surveillance systems) affects the way ordinary people understand themselves and their world. The PhD fellows will work with project leader Professor Jill Walker Rettberg to analyse video games, digital art, and fictional narratives (science fiction movies, novels, electronic literature) that either use machine vision in their interfaces or where machine vision is an important theme. Led by Professor Rettberg, the three fellows will map relevant artworks, games and narratives using digital humanities methodologies, and each fellow will then select key works for detailed analysis using methodologies based in literary or visual studies, ludology and other related disciplines.
The positions will be at the University of Bergen in Norway. The fellowships in games and digital art are three-year research-only fellowships, while the fellowship in narratives is four years with 25% teaching. Annual pay starts at NOK 435,100 (about €45,700 or USD 56,000) and fellows will become members of the Norwegian university health and welfare system.
The formal advertisements will be published soon, here and at JobbNorge.no, and the application deadline will probably be in May 2018. The starting date is 1 January 2019.
 
Jill Walker Rettberg
added an update
Today the ERC announced the projects that will be funded in the 2017 ERC Consolidator call, and MACHINE VISION is one of the projects! I am sharing the five page summary of the project, and will of course be sharing more here and elsewhere as we get the project started!
 
Jill Walker Rettberg
added a research item
The project summary part of B1 in my successful ERC-CoG application, which was awarded €2 million. The project will run from August 2018-July 2023.
Jill Walker Rettberg
added an update
My ERC Consolidator proposal for this project made it to the second round, so in October I will be travelling to Brussels for an interview! I called the project Machine Vision in Everyday Life: Playful Interactions with Visual Technologies in Digital Art, Games, Narratives and Social Media. My goal is to develop a theory of how everyday machine vision affects the way ordinary people understand themselves and their world through 1) analyses of digital art, games and narratives that use machine vision as theme or interface, and 2) ethnographic studies of users of consumer-grade machine vision apps in social media and personal communication. Three main research questions address 1) new kinds of agency and subjectivity; 2) visual data as malleable; 3) values and biases.
If you have any good advice for how to prepare for an ERC interview, please let me know!
 
Jill Walker Rettberg
added a research item
Machines are an important audience for any selfie today. This chapter discusses how our selfies are treated as data rather than as human communication. Rettberg looks at how facial recognition algorithms analyse our selfies for surveillance, authentication of identity and better-customised commercial services, and relates this to understandings of machine vision as post-optical and non-representational. Through examples ranging from Erica Scourti's video art to Snapchat's selfie lenses, the chapter explores how our expectation of machine vision affects the selfies we take, and how it may be locking down our identity as biometric citizens.
Jill Walker Rettberg
added an update
Last week I presented a paper at the Post-Screen Festival in Lisbon on how three works of art explore machine vision. Here is the paper!
Here is my Snapchat Research Story from my first day at the conference: https://www.youtube.com/watch?v=1GAV5NGi91g. It includes some bits of two interesting talks, by Tracy Piper-Wright and Robert Tovey, and videos of some of the artworks in the Post-Screen Festival's art exhibition, which I really enjoyed. Rafael Lozano-Hemmer's "Levels of Confidence" (2015) and Gary Hill's SELF ( ) series (2016) were the two artworks most clearly about machine vision, and they're both in the video. Because the video was made in Snapchat, the still images last for ten seconds, and because you're watching on YouTube, not Snapchat, you can't just tap the still to go to the next snap. So the stills are rather slow, but you can fast forward.
 
Jill Walker Rettberg
added a project goal
This five-year, ERC-funded project (2018-2023), led by Professor Jill Walker Rettberg, explores how new algorithmic images are affecting us as a society and as individuals. The Machine Vision team will study theories and histories of visual technologies and current machine vision, analyse digital art, computer games and narrative fictions that use machine vision as theme or interface, and examine the experiences of users and developers of consumer-grade machine vision apps. Three main research questions are woven through all the approaches, addressing 1) new kinds of agency and subjectivity; 2) visual data as malleable; 3) values and biases.
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771800).