Project
Machine Vision in Everyday Life: Playful Interactions with Visual Technologies in Digital Art, Games, Narratives and Social Media
Project log
Full text available at: https://hdl.handle.net/11250/3039103
This commentary tests a methodology proposed by Munk et al. (2022) for using failed predictions in machine learning to identify ambiguous and rich cases for qualitative analysis. Using a dataset describing actions performed by fictional characters interacting with machine vision technologies in 500 artworks, movies, novels and videogames, I trained a simple machine learning algorithm (the kNN algorithm in R) to predict whether an action was active or passive using only information about the fictional characters. Predictable actions were generally unemotional and unambiguous activities where machine vision technologies were treated as simple tools. Unpredictable actions, that is, actions that the algorithm could not correctly predict, were more ambivalent and emotionally loaded, with more complex power relationships between characters and technologies. The results thus support Munk et al.'s theory that failed predictions can be productively used to identify rich cases for qualitative analysis. This test goes beyond simply replicating Munk et al.'s results by demonstrating that the method can be applied to a broader humanities domain, and that it does not require complex neural networks but also works with a simpler machine learning algorithm. Further research is needed to develop an understanding of what kinds of data the method is useful for and which kinds of machine learning are most generative. To support this, the R code required to produce the results is included so the test can be replicated. The code can also be reused or adapted to test the method on other datasets.
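The commentary's analysis was done with kNN in R; purely as an illustration of the core move — train a simple classifier, then flag the items it misclassifies as candidates for qualitative reading — here is a minimal Python sketch with invented toy features and labels (not the project's data):

```python
import math
from collections import Counter

# Toy stand-in data: each row pairs invented numeric character features
# with a label for whether the character's action was active or passive.
data = [
    ((0.0, 1.0), "active"),
    ((0.1, 0.9), "active"),
    ((0.2, 0.8), "active"),
    ((1.0, 0.0), "passive"),
    ((0.9, 0.1), "passive"),
    ((0.8, 0.2), "passive"),
    ((0.55, 0.45), "active"),  # ambiguous case near the class boundary
]

def knn_predict(train, point, k=3):
    """Predict a label by majority vote among the k nearest neighbours."""
    nearest = sorted(train, key=lambda row: math.dist(row[0], point))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Leave-one-out evaluation: the misclassified ("unpredictable") items
# are exactly the cases flagged for closer qualitative analysis.
unpredictable = [
    (features, label)
    for i, (features, label) in enumerate(data)
    if knn_predict(data[:i] + data[i + 1:], features) != label
]
print(unpredictable)  # → [((0.55, 0.45), 'active')]
```

As in the commentary's findings, the clear-cut cases are predicted correctly, while the ambiguous boundary case is the one the classifier fails on.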
As increasingly ubiquitous surveillance has transformed how we interact with each other and the world around us, surveillance interactions with virtual others in virtual worlds have gone largely unnoticed. This article examines representations of digital games’ diegetic surveillance cameras and their relation to the player character and player. Building on a dataset of forty-one titles and in-depth analyses of two 2020 digital games that present embodied surveillance camera perspectives, Final Fantasy VII Remake (Square Enix 2020) and Watch Dogs: Legion (Ubisoft Toronto 2020), I demonstrate that the camera is crucial in how we organize, understand, and maneuver the fictional environment and its inhabitants. These digital games reveal how both surveillance power fantasies and their critique can coexist within a space of play. Moreover, digital games often present a perspective that blurs the boundaries between the physical and the technically mediated through a flattening of the player’s “camera” screen and in-game surveillance cameras. Embodied surveillance cameras in digital games make the camera metaphor explicit as an aesthetic, narrative, and mechanical preoccupation. We think and play with and through cameras, drawing attention to and problematizing the partial perspectives with which worlds are viewed. I propose the term cyborg vision to account for this simultaneously human and nonhuman vision that is both pluralistic and situated, and argue that, through cyborg vision, digital games offer an embodied experience of surveillance that will be increasingly relevant in the future.
This data paper documents a dataset that captures cultural attitudes towards machine vision technologies as they are expressed in art, games and narratives. The dataset includes records of 500 creative works (including 77 digital games, 190 digital artworks and 233 movies, novels and other narratives) that use or represent machine vision technologies like facial recognition, deepfakes, and augmented reality. The dataset is divided into three main tables, relating to the works, to specific situations in each work involving machine vision technologies, and to the characters that interact with the technologies. Data about each work includes title, author, year and country of publication; types of machine vision technologies featured; topics the work addresses, and sentiments associated with that machine vision usage in the work. In the various works we identified 874 specific situations where machine vision is central. The dataset includes detailed data about each of these situations that describes the actions of human and non-human agents, including machine vision technologies. The dataset is the product of a digital humanities project and can also be viewed as a database at http://machine-vision.no. Data was collected by a team of topic experts who followed an analytical model developed to explore relationships between humans and technologies, inspired by posthumanist and feminist new materialist theories. The dataset as well as the more detailed database can be viewed, searched, extracted, or otherwise used or reused and is considered particularly useful for humanities and social science scholars interested in the relationship between technology and culture, and for designers, artists, and scientists developing machine vision technologies.
This dataset captures cultural attitudes towards machine vision technologies as they are expressed in art, games and narratives. The dataset includes records of 500 creative works (including 77 digital games, 191 digital artworks and 236 movies, novels and other narratives) that use or represent machine vision technologies like facial recognition, deepfakes, and augmented reality. The dataset is divided into three main tables, relating to the works, to specific situations in each work involving machine vision technologies, and to the characters that interact with the technologies. Data about each work includes title, author, year and country of publication; types of machine vision technologies featured; topics the work addresses, and sentiments associated with that machine vision usage in the work. In the various works we identified 884 specific situations where machine vision is central. The dataset includes detailed data about each of these situations that describes the actions of human and non-human agents, including machine vision technologies. The dataset is the product of a digital humanities project and can also be viewed as a database at http://machine-vision.no.
Data was collected by a team of topic experts who followed an analytical model developed to explore relationships between humans and technologies, inspired by posthumanist and feminist new materialist theories. The project team identified relevant works by searching databases, visiting exhibitions and conferences, reading scholarship, and consulting other experts. The inclusion criteria were creative works (art, games, and narratives such as movies and novels) in which one of the following machine vision technologies was used or represented: 3D scans, AI, Augmented reality, Biometrics, Body scans, Camera, Cameraphone, Deepfake, Drones, Emotion recognition, Facial recognition, Filtering, Holograms, Image generation, Interactive panoramas, Machine learning, MicroscopeOrTelescope, Motion tracking, Non-Visible Spectrum, Object recognition, Ocular implant, Satellite images, Surveillance cameras, UGV, Virtual reality, and Webcams.
The dataset as well as the more detailed database can be viewed, searched, extracted, or otherwise used or reused and is considered particularly useful for humanities and social science scholars interested in the relationship between technology and culture, and for designers, artists, and scientists developing machine vision technologies.
R scripts for analysing "A Dataset Documenting Representations of Machine Vision Technologies in Artworks, Games and Narratives". Scripts are included for general import and reformatting of the data, as well as for plotting figures showing the geographical distribution of the works in the dataset and their distribution in time.
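The scripts themselves are in R; as a rough, testable illustration of the time-distribution tally they perform, here is a minimal Python sketch using invented column names ("Title", "Year") and toy rows rather than the actual dataset:

```python
import csv
import io
from collections import Counter

# Toy stand-in for the works table; the real dataset's column names
# and contents may differ.
works_csv = io.StringIO(
    "Title,Year\n"
    "Horizon Zero Dawn,2017\n"
    "Blade Runner 2049,2017\n"
    "Watch Dogs: Legion,2020\n"
)

# Count how many works were published in each year.
works_per_year = Counter(row["Year"] for row in csv.DictReader(works_csv))
print(sorted(works_per_year.items()))  # → [('2017', 2), ('2020', 1)]
```

The per-year counts produced this way are what a plotting script would then render as a distribution-over-time figure.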
[English translation of full text available: https://doi.org/10.33767/osf.io/zd284]
Holograms are common background features conveying a science fiction mood. Digital games allow us to experience worlds where holograms are positioned as actors with functions beyond being atmospheric objects. This article tracks a broad cultural understanding of the hologram and identifies holographic representations in 24 digital games. This is followed by a close reading of holograms in the video game Horizon Zero Dawn (Guerrilla Games, 2017). These holograms provide access to forgotten knowledge and place players and player characters in actively observing positions while the past is replayed in navigable cutscenes. I argue that the holograms’ aesthetic, narrative, and mechanical functions challenge binary conceptualizations of presence and agency. This happens diegetically in the virtual environment but is also mirrored between player and game. Digital game holograms mediate thematically and formally between human and nonhuman actors, which helps us see how machines and humans are connected through agency in complex posthuman assemblages.
The APAIC Report on the Holocode Crisis is a short story that imagines the future of machine-readable data encodings. In this story, I speculate about the next stage in the development of data encoding patterns: after barcodes and QR codes, the invention of “holocodes” will make it possible to store unprecedented amounts of data on a minuscule physical surface. As a collage of nested fictional materials (including ethnographic fieldnotes, interview transcripts, OCR scans, and intelligence reports), this story builds on the historical role of barcodes in supporting consumer logistics and the ongoing deployment of QR codes as anchors for the platform economy, concluding that the geopolitical future of optical governance is tied to unassuming technical standards such as those formalizing machine-readable representations of data.
In China, deepfakes are commonly known as huanlian, which literally means “changing faces.” Huanlian content, including face-swapped images and video reenactments, has been circulating in China since at least 2018, at first through amateur users experimenting with machine learning models and then through the popularization of audiovisual synthesis technologies offered by digital platforms. Informed by a wealth of interdisciplinary research on media manipulation, this article aims at historicizing, contextualizing, and disaggregating huanlian in order to understand how synthetic media is domesticated in China. After briefly summarizing the global emergence of deepfakes and the local history of huanlian, I discuss three specific aspects of their development: the launch of the ZAO app in 2019 with its societal backlash and regulatory response; the commercialization of deepfakes across formal and informal markets; and the communities of practice emerging around audiovisual synthesis on platforms like Bilibili. Drawing on these three cases, the conclusion argues for the importance of situating specific applications of deep learning in their local contexts.
Machine vision – the registration, analysis, and representation of visual information by machines and algorithms (Rettberg 2017) – is currently hiding behind videogames’ playful exterior. However, machine vision technologies such as night vision overlays, facial recognition systems, and surveillance cameras have been represented within virtual environments for decades. To bring this technology to light and acknowledge it as an important agent, I build on theorizations of videogames as assemblages of multiple agents (Taylor 2009) and of posthuman interrelated agency (Hayles 2017; Braidotti 2013). This study thus provides an overview of diegetic representations of machine vision in videogames in order to begin an analysis of distributed agency between human and nonhuman agents.
This paper proposes situated data analysis as a new method for analysing social media platforms and digital apps. An analysis of the fitness tracking app Strava is used as a case study to develop and illustrate the method. Building upon Haraway’s concept of situated knowledge and recent research on algorithmic bias, situated data analysis allows researchers to analyse how data is constructed, framed and processed for different audiences and purposes. Situated data analysis recognises that data is always partial and situated, and it gives scholars tools to analyse how it is situated, and what effects this may have. Situated data analysis examines representations of data, like data visualisations, which are meant for humans, and operations with data, which occur when personal or aggregate data is processed algorithmically by machines, for instance to predict behaviour patterns, adjust services or recommend content. The continuum between representational and operational uses of data is connected to different power relationships between platforms, users and society, ranging from normative disciplinary power and technologies of the self to environmental power, a concept that has begun to be developed in analyses of digital media as a power that is embedded in the environment, making certain actions easier or more difficult, and thus remaining external to the subject, in contrast to disciplinary power which is internalised. Situated data analysis can be applied to the aggregation, representation and operationalization of personal data in social media platforms like Facebook or YouTube, or by companies like Google or Amazon, and gives researchers more nuanced tools for analysing power relationships between companies, platforms and users.
In the second half of the 2010s, AI has become a major hype across Chinese tech industries, venture capital investment, and government policy. The BAT national champions (Baidu, Alibaba and Tencent) have heavily invested in AI research and development, opening research centers in China and abroad to attract global talent, while thousands of startups have jumped on the AI hype to attract investment and reap the benefits of generous government funding. In a wave of innovation rhetoric closely resembling the previous hypes around Web 2.0 and Big Data, AI has become the most recurring buzzword in Chinese tech: besides its more predictable applications (industrial automation, self-driving cars, natural language processing and computer vision), almost everything in China – from e-commerce platforms to public utilities – is revamped as an ostensibly ‘AI-powered’ service. Drawing on research into the development of artificial intelligence technologies and products, this chapter charts China’s AI hype via its representation across government policy documents, industry advertisement, commercial products and popular culture. As trends and catchphrases travel between corporate boardrooms and policy think tanks to propaganda materials and music videos, the interplay between technical innovation and planned development reveals how AI is constructed, in real time, at the intersection of sociotechnical constraints and national imaginations.
Machine vision technologies are increasingly ubiquitous in society and have become part of everyday life. However, the rapid adoption has led to ethical concerns relating to privacy, bias and accuracy. This paper presents the methodology and some preliminary results from a digital humanities project that is mapping and categorising references to and uses of machine vision in digital art, narratives and games in order to find patterns that may help us understand the broader cultural understandings of machine vision in society. Understanding the cultural significance and valence of machine vision is crucial for developers of machine vision technologies, so that new technologies are designed to meet general needs and ethical concerns, and ultimately contribute to a better, more just society.
Machine vision technologies are increasingly ubiquitous in society and have become part of everyday life. However, the rapid adoption has led to ethical concerns relating to privacy, agency, bias and accuracy. This paper presents the methodology and preliminary results from a digital humanities project that maps and categorises references to and uses of machine vision in digital art, narratives and games in order to find patterns to help us analyse broader cultural understandings of machine vision in society. Understanding the cultural significance and valence of machine vision is crucial for developers of machine vision technologies, so that new technologies are designed to meet general needs and ethical concerns, and ultimately contribute to a better, more just society.
Algorithms increasingly govern visual media, not only by sorting and ranking images on social media, but also by determining which images are taken at all, through algorithms for aesthetic inference embedded in our cameras. This article explores how these algorithms work and what kinds of aesthetic criteria are programmed into these algorithmic judgements of taste. We do not have direct access to the algorithms, so instead I analyse three groups of Instagram images: the Instagram account @Insta_repeat, NRK’s hashtag campaign #nrksommer, and the twenty most popular images on Instagram, in order to understand the different ways “a good image” can be defined in the interplay between algorithms and humans. In addition, I analyse computer science papers on aesthetic inference algorithms, and I make a historical comparison between the aesthetics of the camera clubs of the 19th and 20th centuries and the datasets today’s algorithms are trained on. The main argument is that we are programming algorithms that will give us ever more uniform photographs, and that these algorithms are driven by a commercial logic that aims to make us consume more.
The project summary (part B1) of my successful ERC-CoG application, which was awarded €2 million. The project will run from August 2018 to July 2023.
Machines are an important audience for any selfie today. This chapter discusses how our selfies are treated as data rather than as human communication. Rettberg looks at how facial recognition algorithms analyse our selfies for surveillance, authentication of identity and better-customised commercial services, and relates this to understandings of machine vision as post-optical and non-representational. Through examples ranging from Erica Scourti's video art to Snapchat's selfie lenses, the chapter explores how our expectation of machine vision affects the selfies we take, and how it may be locking down our identity as biometric citizens.