Digital Culture Research Group
Institution: University of Bergen
About the lab
The Digital Culture Research Group gathers humanities researchers at UiB who share an interest in studying how technology and culture interact. Current research addresses topics including intercultural uses of technology, haptic interfaces, self-representation in social media, critical digital editions, the use of machine learning and chatbots in health, and the cultural implications of machine vision.
Featured projects (4)
This five-year, ERC-funded project (2018-2023), led by Professor Jill Walker Rettberg, explores how new algorithmic images are affecting us as a society and as individuals. The Machine Vision team will study theories and histories of visual technologies and current machine vision, analyse digital art, computer games and narrative fictions that use machine vision as theme or interface, and examine the experiences of users and developers of consumer-grade machine vision apps. Three main research questions are woven through all the approaches, addressing 1) new kinds of agency and subjectivity; 2) visual data as malleable; 3) values and biases.
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771800).
ABSTRACT
While our somesthetic apparatus plays a significant role in human perception, it is currently underemployed in our everyday interaction with computers. Our senses do not operate separately or independently of each other, but rather simultaneously; however, the way that the different elements of our sensory apparatus are juxtaposed and overlapping is undertheorized. Equally undertheorized are the cognitive effects of employing cross-modal stimulation and response in interface design, although a range of interactive artworks explore and experiment with cross-modality. The aim of this dissertation is to investigate the impact of the somesthetic senses (touch and proprioception) on our cognitive faculty through new digital and sensor technology, and their employment in interface design. More specifically, I aim to identify and organize interactive scenarios that mediate between the visual and the somesthetic, including mediations where the haptic and proprioceptive senses are addressed uniquely to recognize tasks most often presented to the visual sense. I will develop an argument for cross-modal interfaces that emphasize the somesthetic apparatus through an exploration of the pervasive screen interface (ch. 1), the plasticity of the senses and the body schema (ch. 2), the extension of sensory perception through technology (ch. 3), and the identification of cross-modal interactions that mediate between the haptic (touch and, by extension, proprioceptive) sense and the visual sense in interfaces (ch. 4), before finally presenting three distinct interface scenarios (ch. 5). One of the main outcomes of the dissertation is a proposed model for cross-modal interaction that encompasses a phenomenological perspective on multi-modal experience as well as the material effects of the machine platform or hardware medium.
My theoretical framework is rooted in literature from several research fields, ranging from phenomenology and embodied cognition to selected neuroscientific research, and is reflected in critical analysis of a series of multi-modal interactive artworks.
Meta-research on references and citation practices; more specifically, how digital tools (word processing, EndNote, Google Scholar, database connections) and the digital distribution of articles (PDF, online journals, library cross-linking) have changed the way scholars use references and citations.
The goal of the project is to design and implement a conversational agent as an intervention for persons struggling with chronic illness, through patient participation and service design processes, and to obtain qualitative findings regarding the uptake of the intervention.
Featured research (10)
This commentary tests a methodology proposed by Munk et al. (2022) for using failed predictions in machine learning as a method to identify ambiguous and rich cases for qualitative analysis. Using a dataset describing actions performed by fictional characters interacting with machine vision technologies in 500 artworks, movies, novels and videogames, I trained a simple machine learning algorithm (using the kNN algorithm in R) to predict whether an action was active or passive using only information about the fictional characters. Predictable actions were generally unemotional and unambiguous activities where machine vision technologies were treated as simple tools. Unpredictable actions, that is, actions that the algorithm could not correctly predict, were more ambivalent and emotionally loaded, with more complex power relationships between characters and technologies. The results thus support Munk et al.'s theory that failed predictions can be productively used to identify rich cases for qualitative analysis. This test goes beyond simply replicating Munk et al.'s results by demonstrating that the method can be applied to a broader humanities domain, and that it does not require complex neural networks but can also work with a simpler machine learning algorithm. Further research is needed to develop an understanding of what kinds of data the method is useful for and which kinds of machine learning are most generative. To support this, the R code required to produce the results is included so the test can be replicated. The code can also be reused or adapted to test the method on other datasets.
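The failed-predictions workflow the commentary describes can be sketched in a few lines. The sketch below is a minimal illustration, not the project's published R code: it implements kNN from scratch in Python on invented toy data, and the numeric character "features", labels and action names are stand-ins for the dataset's real variables.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Predict a label by majority vote among the k nearest training points."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

def failed_predictions(data, k=3):
    """Leave-one-out evaluation: return the actions the classifier gets wrong.
    Per Munk et al., these failures are candidates for close qualitative reading."""
    failures = []
    for i, (features, label, action) in enumerate(data):
        train = [(f, l) for j, (f, l, _) in enumerate(data) if j != i]
        if knn_predict(train, features, k) != label:
            failures.append(action)
    return failures

# Toy data: (invented feature vector, active/passive label, action name).
# Passive actions cluster in one region, active in another; one action
# sits between the clusters and is deliberately hard to predict.
data = [
    ((0.1, 0.2), "passive", "is watched"),
    ((0.2, 0.1), "passive", "is scanned"),
    ((0.2, 0.3), "passive", "is recorded"),
    ((0.9, 0.8), "active", "hacks camera"),
    ((0.8, 0.9), "active", "deletes footage"),
    ((0.45, 0.5), "active", "confronts drone"),  # ambiguous, emotionally loaded case
]
print(failed_predictions(data))  # the ambiguous action is the one kNN misses
```

Only the in-between action fails prediction, which is the point of the method: the classifier's errors flag exactly the ambivalent cases worth qualitative attention.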
This data paper documents a dataset that captures cultural attitudes towards machine vision technologies as they are expressed in art, games and narratives. The dataset includes records of 500 creative works (including 77 digital games, 190 digital artworks and 233 movies, novels and other narratives) that use or represent machine vision technologies like facial recognition, deepfakes, and augmented reality. The dataset is divided into three main tables, relating to the works, to specific situations in each work involving machine vision technologies, and to the characters that interact with the technologies. Data about each work includes title, author, year and country of publication; types of machine vision technologies featured; topics the work addresses, and sentiments associated with that machine vision usage in the work. In the various works we identified 874 specific situations where machine vision is central. The dataset includes detailed data about each of these situations that describes the actions of human and non-human agents, including machine vision technologies. The dataset is the product of a digital humanities project and can also be viewed as a database at http://machine-vision.no. Data was collected by a team of topic experts who followed an analytical model developed to explore relationships between humans and technologies, inspired by posthumanist and feminist new materialist theories. The dataset as well as the more detailed database can be viewed, searched, extracted, or otherwise used or reused and is considered particularly useful for humanities and social science scholars interested in the relationship between technology and culture, and for designers, artists, and scientists developing machine vision technologies.
This dataset captures cultural attitudes towards machine vision technologies as they are expressed in art, games and narratives. The dataset includes records of 500 creative works (including 77 digital games, 191 digital artworks and 236 movies, novels and other narratives) that use or represent machine vision technologies like facial recognition, deepfakes, and augmented reality. The dataset is divided into three main tables, relating to the works, to specific situations in each work involving machine vision technologies, and to the characters that interact with the technologies. Data about each work includes title, author, year and country of publication; types of machine vision technologies featured; topics the work addresses, and sentiments associated with that machine vision usage in the work. In the various works we identified 884 specific situations where machine vision is central. The dataset includes detailed data about each of these situations that describes the actions of human and non-human agents, including machine vision technologies. The dataset is the product of a digital humanities project and can also be viewed as a database at http://machine-vision.no.
Data was collected by a team of topic experts who followed an analytical model developed to explore relationships between humans and technologies, inspired by posthumanist and feminist new materialist theories. The project team identified relevant works by searching databases, visiting exhibitions and conferences, reading scholarship, and consulting other experts. The inclusion criteria were creative works (art, games, and narratives such as movies and novels) where one of the following machine vision technologies was used in or represented by the work: 3D scans, AI, Augmented reality, Biometrics, Body scans, Camera, Cameraphone, Deepfake, Drones, Emotion recognition, Facial recognition, Filtering, Holograms, Image generation, Interactive panoramas, Machine learning, Microscope or telescope, Motion tracking, Non-visible spectrum, Object recognition, Ocular implant, Satellite images, Surveillance cameras, UGV, Virtual reality, and Webcams.
The dataset as well as the more detailed database can be viewed, searched, extracted, or otherwise used or reused and is considered particularly useful for humanities and social science scholars interested in the relationship between technology and culture, and for designers, artists, and scientists developing machine vision technologies.
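The three-table structure described above can be illustrated with a miniature mock-up. The field names and records below are hypothetical stand-ins chosen for illustration, not the dataset's actual schema; the authoritative structure is documented in the data paper and at http://machine-vision.no.

```python
# Miniature of the dataset's three related tables: works, situations
# (each situation belongs to one work), and characters (each character
# appears in one or more situations). All names and values are invented.
works = {
    "W1": {"title": "Example Film", "year": 2019,
           "technologies": ["Facial recognition"], "topics": ["surveillance"]},
}
situations = {
    "S1": {"work": "W1", "label": "Airport scan", "actions": ["is scanned"]},
    "S2": {"work": "W1", "label": "Control room", "actions": ["watches feed"]},
}
characters = {
    "C1": {"name": "Protagonist", "situations": ["S1"]},
}

def situations_for_work(work_id):
    """Join situations back to their parent work via the foreign-key field."""
    return [s for s in situations.values() if s["work"] == work_id]

print([s["label"] for s in situations_for_work("W1")])
```

The point of the sketch is simply the relational shape: situations reference works, and characters reference situations, so analyses can move between the levels (work-level sentiment, situation-level actions, character-level agency) by joining on those keys.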
This paper analyses the way that the popular Norwegian television and web series SKAM (2015-17) included viewers in the narration. There has been a recent interest in narratives that use the first person plural voice (“we-narratives”) in print literature. This paper takes this narrative research and connects it to transmedia and social media storytelling, arguing that the we-narrative is characteristic of the digital vernacular of social media, which has accustomed us to narratives that are collective in nature. In SKAM, the narrative we is established by visual and narrative emphasis on the group, and the audience is included in this we through a storyworld that mirrors the audience’s world, using social media and temporality, and explicitly in the focalisation of the final scene of the show.
Lab head
Department
- Department of Linguistic, Literary and Aesthetic Studies
About Jill Walker Rettberg
- My current research is on how machine vision changes the way we understand the world, and my ERC project MACHINE VISION runs from August 2018 to July 2023. I've researched digital culture for twenty years, starting with hypertext fiction, electronic literature, and digital art, moving through games and narratives, and stories and self-representations in social media, to the convergences between visual, narrative and data-based forms of representation in my book Seeing Ourselves Through Technology. Please cite me as Rettberg, Jill Walker.
Members (16)
Scott Rettberg

Ragnhild Solberg