Everyday Automation: Experiencing and Anticipating Emerging Technologies
... The presence of AI is not always immediately apparent: in fact, it has been noted that the terms AI, automated decision-making, and algorithms are invoked by researchers in various ways and often defined contextually (Pink et al. 2022). The situation is even more complex: 'The line between AI proper and other forms of technology can be blurred, rendering AI invisible: if AI systems are embedded within technology we tend not to notice them.' (Coeckelbergh 2020: 16) Indeed, this invisibility may be an explicit goal of the technology developers, aiming for ubiquity (Weiser 1991). ...
... Not only is AI sometimes invisible, but AI is understood and instantiated in different ways by the people involved at every stage of its ideation, design, development, and deployment; this applies to how AI is discursively constructed (Eynon and Young 2021) and how AI is deployed and taken up by humans (Pink et al. 2022). The same applies to how researchers study and describe AI in their work; Seaver, discussing algorithms, makes a point that could be made about AI: 'If we understand algorithms as enacted by the practices used to engage with them, then the stakes of our own methods change. ...
... Not only can it be deployed rhetorically in various forms; AI is also one actor among others and is party to a reconfiguration of roles and responsibilities (Parker and Grote 2022), as researchers in group 3 elaborate. This requires attending to the everyday aspects of algorithms in these contexts, beyond what are perceived as the immediate impacts or the most affected people (Pink et al. 2022). ...
Lifelong learning is a current policy focus in many countries, with AI technologies promoted both as a motivation for lifelong learning (due to their assumed role in social change) and as an important way to ‘deliver’ learning across the life course. Such policies tend to be instrumental and technologically deterministic, and there is a need to properly theorize the relationships between AI and lifelong learning to better inform policy and practice. In this paper, we examine the ways that academic communities conceptualize AI and lifelong learning, based on a thematic analysis of existing academic literature in contexts beyond formal education. We identify three groups of research, which vary according to their engagement with theories of learning and AI technology and how AI ‘works’. In group 1 (working AI), AI is assumed to contribute to increased efficiency of humans and learning; in group 2 (working with AI), AI is implemented and conceptualized as a peer or colleague; and in group 3 (reconfiguring AI), AI is viewed as part of a wider reconfiguration of humans and their contexts. This latter group, though least well represented in the literature, holds promise in advancing a postdigital research agenda that focuses not solely on how AI works to increase efficiency, but on how people are increasingly working, learning, and living with AI, thus moving beyond exclusively instrumental, economic, and technologically deterministic concerns.
... Given the societal concern caused by the COVID-19 crisis, we might have expected more research on this topic to come out in 2021, although it cannot be ruled out that it will emerge as a more prominent topic in the research pipeline of the coming years. There is also little of what we, with reference to Pink et al. (2022), might dub "everyday ADM" research, including research on self-tracking, content moderation systems, and chatbots. The dataset includes 29 methodological articles, which deal with big data analytics (e.g., Ibrahim et al., 2020) and data visualization (e.g., Hunt & McKelvey, 2019), as well as 39 articles that describe the development of applied methods for ADM (e.g., Andreeva & Matuszyk, 2019). ...
... It has been observed before that it is rarely made explicit what we talk about when we talk about ADM, both in basic academic research (Pink et al., 2022) and in practical use cases, as Kaun has shown in her study of the Trelleborg model (Kaun, 2021). We examined the dataset for explicit definitions and definitional struggles over ADM. ...
... While the study of technological systems in the broad sense has historically been outside the scope of media and communication research, media and communication studies has much to offer, both in terms of concepts for illuminating relations between technological systems and the people they implicate ("what media do to people"), and in terms of an empirical sensitivity to the contextual appropriation of said systems ("what people still do with media") to fit organizational and social agendas and pursuits. Hence, we suggest a communication-based conceptualization of ADM, which also implies an urgently needed rehumanization of ADM (Pink et al., 2022), reflecting how humans and ADM systems shape one another. At the most basic level, then, ADM can be understood as a communicative sequence with feedback loops: it emerges around the encoding of data as information in the system, the interpretation of these data by processing and socially ordering the information, and output decisions, which must then be decoded, or made sense of, by case workers and other stakeholders in context, with the potential to modify future data inputs and interpretations (Lomborg & Kapsch, 2020). ...
Recently, automated decision-making (ADM) has been increasingly introduced in, for example, the public sector, with the potential to ensure efficiency and more just decision-making. This increasing use of ADM has been reflected in growing scholarly interest. While initially mainly researchers within law and the computer sciences engaged with ADM, there has also been growing engagement by researchers oriented toward the social sciences and humanities. This article traces the emergence and evolution of ADM research beyond computer science and engineering, with a specific focus on the social sciences and humanities, by identifying central concerns and methods while outlining a stable baseline for future research. Based on a systematic mapping of publications, we outline the contours of ADM as an area of research engaging with an emerging empirical phenomenon. Drawing on findings from the mapping, we discuss ways ahead for ADM research as part of the subfield of digital sociology and suggest that sociological media and communication studies have a crucial role to play in developing future research avenues. Drawing on advances made in audience research, we suggest a radically contextualized and people-centered approach to ADM. Such an approach would help to develop ADM research and ground it in people's divergent capabilities and contextual arrangements.
... It is always already about creating spaces for inherently political and affective sociotechnical future relations (Light and Akama 2014). These can point towards 'big futures', i.e. radical ruptures and epochal change, or 'little futures', emergent processes in mundane, everyday practices (Michael 2017; Pink et al. 2022). Beginning with these assumptions, this commentary identifies key issues for concern at the nexus of futures, education, and design in the postdigital condition, in which digital technologies are embedded throughout educational spaces, but no longer conceived as a panacea for socio-economic-ecological ills. ...
... Our priorities in this commentary suggest that these new stories look beyond the dominant grand techno-solutionist narratives about universal high-tech solutions, global demographic trends, or illusions of efficiency and progress. Instead, they tell powerful stories encompassing the locally situated values, worldviews, institutions, structures, and practices by which people want to live (Pink et al. 2022; see also Von Stackelberg and McDowell 2015; Machado de Oliveira 2021). These new narratives include attending to careful design, redesigning institutions to build on community solidarity, and reflecting on the undesignable. ...
... This approach is rooted in the importance of anticipating social challenges and collaboratively developing responsible governance mechanisms within the research community, focusing on the changes researchers can make in their daily work toward responsible innovation. In discussions around emerging technologies, discourses often emphasize worst-case scenarios and hype-fear debates, overshadowing the more ordinary and mundane realities that most people encounter, as well as the realities of those creating the technologies (see Pink et al. 2022). Shifting our focus to connecting discussions to everyday practices helped ground the workshop discussions in situated contexts, which in turn facilitated our grasp of their perspectives. ...
The implementation of Responsible Research and Innovation (RRI) in research projects has increased the need for interdisciplinary collaboration. This article presents our RRI approach within the Horizon 2020 ‘In Silico World' project, which aims to accelerate the adoption of in silico medicine through computer modeling and simulation tools in healthcare. To address the shortcomings of the ‘checklist approach' for integrating ethics and the risk of becoming checkboxes ourselves, we introduce the term ‘RRI brokers.’ It serves as a lens for evaluating the project's RRI activities and the dynamics that Social Science and Humanities (SSH) scholars can face, and as a means to acquire agency in our own positioning. We suggest that to strengthen RRI, more consideration is needed of how we present our expertise in these collaborations, and awareness of how we, as RRI brokers, move between translating, balancing, and shaping worlds, affecting both what we broker and ourselves.
... This does not mean that the future is an object of inquiry that is vague or perhaps even impossible to grasp, but rather that the future, as it unfolds through events and practices, becomes a guiding resource that invites us to play with and learn from it. Doing so requires acknowledging the agency of individuals and communities in shaping these futures and recognising the iterative nature of technological and societal change (Raats, 2023; Pink, 2022) in a way that resonates with how people lead their lives with and through everyday automation technologies (Pink et al., 2022c). These matters are particularly evident when it comes to relating to emerging automation technologies that partake in shaping the everyday of which they are or will become part. ...
... This second example is from a publication written with colleagues during an ethnographic project with a secondary school class in Germany (Wagener-Böck et al., 2023). Our focus was on 'everyday automation' (Pink et al., 2022). The paper's main argument was that phenomena often termed 'automated' are mutually generated (i.e., 'co-produced') across teacher, student, hardware and app. ...
... Bellon & Velkovska 2023; Collins 2018), connected also with the formation of new forms of social inequality (Joyce et al. 2021; Lupač 2018). The growing ubiquity of AI-based devices in everyday life (Pink et al. 2022; Pilling et al. 2022) provokes newly motivated critical interest (Sormani 2020; Wallach & Marchant 2019), as well as efforts to describe how AI becomes part of an already ordered human world (Brooker et al. 2019; Mair et al. 2021). For these and other reasons, the study of AI is becoming one of the key sociological tasks of our time. ...
This paper discusses how sociology can adequately approach 'artificial intelligence' (AI). It considers some of the discipline's conceptual tools and also indicates inherent limitations that sociology may encounter. One of the main pitfalls is that AI is commonly conceptualized (even within sociology) on the basis of cognitivist analogies: mechanical and digital processes are metaphorically ascribed some variant of a 'mind' and its manifestations. However, as Wittgensteinian philosophy and the approaches building on it have shown, the 'mind' and its correlates can be sociologically and meaningfully grasped as observable, publicly accessible phenomena anchored in social interaction. This approach is illustrated with an example of interaction with the language model ChatGPT and a recording from a research project examining the use of AI in a medical setting. The text moves toward outlining the foundations of an alternative conceptualization of AI as an essentially sociological problem and a social phenomenon produced in real time through the actions of members of society. From this perspective, AI is not hidden 'inside machines' but arises from the specific orderliness of the details of situated action.
... By defining or rendering in/visible certain vulnerabilities, data-driven systems configure identity positions ascribed to data subjects (Klostermann et al., 2022). Critical data studies and related disciplines widely focus on the interrelation between identities of people and their experiences with and imaginaries of data (Kennedy, 2018; Lupton, 2020a; Newman-Griffis et al., 2023; Pink et al., 2022). The lens of care draws attention to the authorising mechanisms (e.g., paternalism) which move individual vulnerabilities, and care for them, into focus. ...
In this special issue, we ask: What do we see when we look at datafied societies through the lens of care? Following in the footsteps of feminist writers, activists, and academics who take care as a vantage point for scrutinising and reimagining technoscientific societies, this special issue brings together scholars from critical data studies who explore what we might learn (and see) when we apply care ethics to the study of datafication. To develop a view of datafied societies informed by ethics, concepts, and practices of care, we propose a move from critique to care in social studies of data-driven technologies. We specifically identify five moves in which a care lens provides a new perspective when studying datafication and datafied societies: (1) a move from data-driven technologies to socio-digital care arrangements, (2) a move from data science to data work and care, (3) a move from technical to situated modes of knowledge production, (4) a move from studying harms of datafication to the politics of vulnerability, and (5) a move towards building communities of care. Discussing how critical data studies and care ethics can mutually contribute to each other, this collection explores how this way of thinking can inform new ways of seeing datafied societies and of imagining living and being well in more-than-human worlds nurtured by care.
... The phenomenon of everydayness extends into a large number of areas in the current research field (Williams, 2022; Pink et al., 2022; Cruickshank et al., 2023), and yet it is extraordinarily demanding from an educational perspective, because its implementation in education requires a combination of specific knowledge, skills, and attitudes, as well as the ability to perceive certain situations or phenomena emotionally and aesthetically. For this reason, we chose the method of a non-linear story with a significant narrative line, which meets these conditions. ...
... One can ask what, in practice, constitutes decision-making with the use of data, as we will do here. This attention is particularly important as digital data and algorithms play an ever greater role in the decisions made in school contexts, and where such processes can be invisible (Ruckenstein, Lupton, Berg & Pink, 2022; Selwyn, 2022). ...
This article examines professional ethos in reading guidance in the Danish public school (folkeskolen), with a particular focus on data use. We argue that data use is not merely a "technical" feat but also involves the exercise of professional ethos. Based on findings from an ethnographic study of reading counsellors' work in the folkeskole, the article zooms in on a case in which a reading counsellor, at an online meeting during the corona pandemic in 2020, informs parents that test data show their child is dyslexic. We analyse how the reading counsellor exercises professional responsibility in this situation, drawing on a distinction between truthfulness, justice, and care as three central professional-ethical points of orientation. We discuss the consequences this attention may have for professional guidance specifically and for data use in the folkeskole in general.
... In this terrain, scholars in education need to question the research assumptions embedded in current glorifying narratives on automation, because it is not only edtech or educational policies that contribute to such narratives but also research (cf. Pink et al. 2022). ...
Emerging automated decision-making (ADM) technologies invite scholars to engage with future points in time and contexts that have not yet arisen. This particular state of not-yet-knowing implies the methodological challenge of examining images of the future and how such images will materialize in practice. In this respect, we ask the following: what are appropriate research methods for studying emerging ADM technologies in education? How do researchers explore sociotechnical practices that are in the making? Guided by these questions, we investigate the increasing adoption of ADM in teachers' assessment practices. This constitutes a case in point for reflecting on the research methods applied to address the future of assessment in education. In this context, we distinguish between representational methods oriented to recounting past experiences and future(s) methods oriented to making futures. Studying the literature on speculative methods in digital education, we illustrate four categories of future(s)-oriented methods and reflect on their characteristics through a backcasting workshop conducted with teachers. We conclude by discussing the need to reconsider the methodological choices made for studying emerging technologies in critical assessment practices and to generate new knowledge on methods that can contribute to alternative imaginaries of automation in education.
... Yet at the same time, a research programme into the 'scrappy realities' (Selwyn and Jandrić 2020) of automation in daily life, which are anything but friction-free, is emerging that seeks to redefine the analytical focus. Writing about 'everyday automation', for instance, Pink et al. (2022) suggest that critical data and algorithm studies tend to adopt a universalising and techno-centric approach similar to that of the systems or narratives they analyse. These studies 'become complicit in making and sustaining the very paradigms and logics that they critique if they do not acknowledge the situatedness of processes of power' (Pink et al. 2022: 8). ...
The work of automation in education is not automatic but needs to be ‘done’. Grounded in an ethnographic study which followed a Grade 9/10 class through their daily activities in a ‘regular’ high school for a year, this paper asks how automation is enacted by students and teachers, and what these practices imply for forms of knowledge and relationality. Inspired by feminist technoscience, and drawing on recent work on everyday automation, the paper suggests that the ‘auto-’ of automation in practice is very often more of a ‘sym-’, a ‘with’, in which students and machines co-produce something that looks like automation. Rather than ‘automation’, observing practices in classrooms shows practices of ‘symmation’. The paper elaborates on symmation scenes of realigning, revising and reworking relations. Automation is, in these scenes, deeply embedded in social relations, involving the processing of ability, difference and hierarchy. Rather than the industry hype of automation, these sets of socio-technical practices alert us to the messy, non-linear, contested, warm realities of education (and not just learning) in schools today. The paper identifies specific aspects of how these socio-technical realities impact knowledge and teacher-student relations.
... To sketch the methodological contours of digital technography, I use my recent research on wearable self-tracking devices (Fors et al., 2020), digital food technologies (Boztepe & Berg, 2020), and work automation systems (Berg, 2022; Pink et al., 2022a). The examples I have selected involve technologies that are often presented as solutions to problems that do not yet exist, and in some cases may never exist. ...
This article introduces “digital technography” as a methodology to interrogate and voice emerging digital technologies and their anticipated futures. I demonstrate, with reference to recent research on wearable self-tracking devices, digital food technologies, and platforms for work automation, how one can gain an understanding of these technologies by attending to the materials in which they are promoted; and actively engaging with them imaginatively and self-reflexively as a social scientist. This article outlines a digital technographic methodology centered around the three conceptual anchors of specification, valorization, and anticipation, all of which pertain to how a digital technology aims and perhaps even aspires to become a part of everyday life.
Abstract
The article aims to contribute to the digital future (or, more precisely, various possible digital futures) being incorporated more systematically than before into research on the refiguration of societies. A social-scientific engagement with digital futures should be part of a media and communication research of emergence. To make this connection tangible, the article first examines the extent to which previous research on the global transformation of societies through digital media and their infrastructures has been dominated by a focus on 'consequence' rather than 'emergence'. If, however, as argued in a second step, one broadens the view to emergence, one is inevitably confronted with questions of the future, since the future is a fundamental aspect of human agency as such. In a third step, this argument leads to a closer consideration of figurations of digital futures, understood in a double sense: on the one hand as social figurations in which digital futures are brought forth, and on the other hand as the figurations of human coexistence imagined for the digital future. This conceptual clarification is finally condensed into a sketch of a media and communication research of emergence.
A social-scientific engagement with digital futures should be part of a communication and media research of emergence. This means communication and media research whose focus lies not on the effects or consequences of media and their infrastructures once they are established, but which directs attention to how they come into being and to the associated imaginations of the future. To make this tangible, this chapter argues in the following steps: First, it shows that incorporating the future into communication and media research means engaging more intensively with the social figurations in which digital futures are brought forth. Second, it is equally a matter of including in research the imagined figurations of human coexistence in each envisioned digital future. It is such a double perspective that provides the foundations for a communication and media research of emergence.
Knowledge and power are intricately and closely related to each other. Knowledge creates power, and power in turn creates its own knowledge; thus, they mutually reinforce each other. Further, power and knowledge are also closely associated with wealth: knowledge leads to wealth and power, and wealth often leads to power.
Automated decision-making (ADM) systems can be worn in and on the body for various purposes, such as for tracking and managing chronic conditions. One case in point is do-it-yourself open-source artificial pancreas systems, through which users engage in what is referred to as “looping”; combining continuous glucose monitors and insulin pumps placed on the body with digital communication technologies to develop an ADM system for personal diabetes management. The idea behind these personalized systems is to delegate decision-making regarding insulin to an algorithm that can make autonomous decisions. Based on interviews and photo diaries with Danish “loopers”, this paper highlights two interrelated narratives of how users have to care for the loop by maintaining a stable communication circuit between body and ADM system, and by modifying the loop through analysis and reflection. It shows how the human takes turns with the ADM system through practical doings and anticipation to safeguard continuous management of chronic disease.
Emerging technologies of artificial intelligence (AI) and automated decision‐making (ADM) promise to advance many industries. Healthcare is a key locus for new developments, where operational improvements are magnified by the bigger‐picture promise of improved care and outcomes for patients. Forming the zeitgeist of contemporary sociotechnical innovation in healthcare, media portrayals of these technologies can shape how they are implemented, experienced and understood across healthcare systems. This article identifies current applications of AI and ADM within Australian healthcare contexts and analyses how these technologies are being portrayed within news and industry media. It offers a categorisation of leading applications of AI and ADM: monitoring and tracking, data management and analysis, cloud computing, and robotics. Discussing how AI and ADM are depicted in relation to health and care practices, it examines the sense of promise that is enlivened in these representations. The article concludes by considering the implications of promissory discourses for how technologies are understood and integrated into practices and sites of healthcare.
This chapter discusses how upper secondary school teachers in Sweden became concerned with futures of education in the aftermath of the recently launched ChatGPT. To address the uncertainties of tomorrow’s education, the school initiated a Futures Day project, engaging both teachers and students in producing futures imaginaries. The study offers insights into value enactments in the teachers’ practical preparatory work for the Futures Day and contributes knowledge about the benefits and challenges of introducing futures into K-12 education, by asking the following key questions: How did teachers frame futures through actions leading up to the Futures Day event? Which values did teachers enact in planning and carrying out their framings of futures? The analysis of the collected data draws on Wenger-Trayner and Wenger-Trayner’s value creation framework to understand how teachers’ actions during the preparatory process enacted values in different ways that contributed to the final delivery of the Futures Day. Our findings indicate that the challenges in engaging with futures emerged as threefold, underscoring (1) temporalities that are not commonly acknowledged in a school context, (2) a research area to explore from different disciplinary perspectives, and (3) the importance of an awareness of how specific actions and practices frame futures.
In this chapter, we analyse the practices and experiences of people with diabetes who develop, use and share open-source, non-regulated “recipes” for automating insulin delivery with personal digital health technology. These algorithmic systems are known as Open-Source Artificial Pancreas Systems, and the algorithm-enabled activity that these people engage in is often referred to as “looping”. Through empirical accounts from the rich and complex practice of using open-source algorithms in diabetes self-management, we explore how this concept of looping may hold the potential to critically explore and discuss more general issues related to human-algorithm relations in digital health. We suggest three ways in which looping holds general insights about the potential for more generous human-algorithm relations. First, looping as an active delegation of control given an existing burden of self-care contingent on the acquisition of new skills; second, looping as a collective and recursive engagement with (material) politics of care and data; and third, looping as the ability to opt out, partly or totally, of toxic intimate entanglements with algorithmic technologies and of extractivist algorithmic assemblages.
Despite its elusiveness as a concept, ‘artificial intelligence’ (AI) is becoming part of everyday life, and a range of empirical and methodological approaches to social studies of AI now span many disciplines. This article reviews the scope of ethnomethodological and conversation analytic (EM/CA) approaches that treat AI as a phenomenon emerging in and through the situated organization of social interaction. Although this approach has been very influential in the field of computational technology since the 1980s, AI has only recently become such a pervasive part of daily life as to warrant a sustained empirical focus in EM/CA. Reviewing over 50 peer-reviewed publications, we find that the studies focus on various social and group activities such as task-oriented situations, semi-experimental setups, play, and everyday interactions. They also involve a range of participant categories including children, older participants, and people with disabilities. Most of the reviewed studies apply CA’s conceptual apparatus, its approach to data analysis, and core topics such as turn-taking and repair. We find that across this corpus, studies center on three key themes: opening and closing the interaction, miscommunication, and non-verbal aspects of interaction. In the discussion, we reflect on EM studies that differ from those in our corpus by focusing on praxeological respecifications of AI-related phenomena. Concurrently, we offer a critical reflection on the work of literature reviewing, and explore the tortuous relationship between EM and CA in the area of research on AI.
In this article, we explore Quick Response (QR) codes (machine-readable optical labels that link to information) and how, after a period of having fallen out of favor, they have been reactivated and have come to underpin COVID-19 automation and contact-tracing efforts. During the pandemic, they were used especially for “safe entry” and other kinds of check-in to locations to facilitate contact tracing. In this context, QR codes facilitate automated decision-making in relation to infectious disease surveillance and disease outbreak control. However, the use of QR codes for contact-tracing purposes has enjoyed mixed success and its implementation has encountered several challenges, as we illustrate through a case study that explores QR codes and COVID contact tracing in Singapore and Australia.
Despite decades of research and development, digitalization remains a key challenge for the Swedish district heating sector. Business model innovation is believed to be necessary to capitalize on digitalization, yet it is especially challenging for municipal companies. This study aims to identify the potential impact of digitalization on the business models of Swedish district heating companies and to analyze the barriers that exist for digital business model innovation. Through case studies of eight municipal district heating companies, this study demonstrates how the entire business model is potentially impacted by digitalization. This study also identifies the barriers to digital business model innovation that are linked to two conflicting views (restrictive versus comprehensive) on digitalization. The restrictive view diminishes the importance of business model innovation, outsourcing innovation to minimize both costs and risks for the company. In contrast, the comprehensive view embraces digital business model innovation through trial-and-error and opens the innovation process to stakeholder influence. These two perspectives are motivated by different beliefs about the need for digitalization to secure future business opportunities, as well as differences in owners’ risk appetite. The implications for industry outlooks and the design of policy support for the digitalization of district heating are discussed.
This paper applies the concept of ‘lived experiences’ to understand people's subjective and everyday encounters with automated systems. We reflect on how qualitative longitudinal research methods are useful for capturing the affective and emotional dimensions of these experiences; these flexible methods also allow for iterative changes that can react to new findings and participant feedback. Using our empirical study on Universal Credit (UC), the UK's largest social security payment, we demonstrate how studying lived experiences via qualitative longitudinal research helps us reflect on both the topic of the research and our position as researchers in relation to study participants. We argue that the lived experience framework is extremely valuable for understanding the consequences of automated decisions for users of these systems and for redressing the uneven power dynamics of representing the voices of those sharing these encounters.
The World Yearbook of Education 2024 contends with the digitalisation and datafication of education associated with the arrival of big data, algorithms, AI and automated digital technologies. Contemporary digitalisation and datafication in education have emerged from five intertwined trends: the production of shared imaginaries of a digital future, the emergence of educational data science as a model of knowledge production, a political turn to data-driven policy and governance, transformations in the digital data economy, and the rapid growth of the edtech industry. The chapters foreground four analytical approaches as an agenda for research on digitalisation and datafication in education. Focusing on sociotechnical foundations, research interrogates the social, scientific and historical factors involved in the development and deployment of new technologies and practices. Research on the political economy of digitalisation foregrounds the complex relations between locally enacted forms of digitalisation and global economic trends in the technology industry. The dynamics of digitalisation and datafication underpin the ways contemporary education systems can be monitored, controlled and governed, such as through digital surveillance techniques and automated data-driven decision-making. In turn, research investigates consequences like bias and discrimination, inequality and environmental impact, and explores alternative models like technical democracy and design justice approaches.
This preview of the introductory chapter of the collection is also available from https://www.taylorfrancis.com/books/edit/10.4324/9781003359722/world-yearbook-education-2024-ben-williamson-janja-komljenovic-kalervo-gulson
The ways in which we describe processes of automation, the digital society and the technology companies that deliver many of its services carry implicit and sometimes contradictory values and ideas about the society envisioned. In this paper, we are interested in unfolding some of the metaphors that guide political discourses on digitalisation in Denmark, particularly those related to the nexus between the welfare state and the market. We propose that metaphorical analysis of policy documents serves to tease out and confront the implicit values and tensions related to how welfare ideologies are reconciled with market logics. This carries important messages about the Danish government’s imaginary of digitalisation and citizens, such as which role citizens are expected to play vis-à-vis digital services and welfare provisions. This paper argues that in contrast to the EU’s declared goal of human-centric digitalisation, the Danish government relies on metaphors that are technology-centric rather than human-centric.
Students’ interaction in postdigital practices is complex, mediated, and unpredictable. As postdigital practices are constantly evolving and changing, it is challenging to research the interactional work that students do. If we want to get an understanding of the complex ways students work together using different mediational resources, it is important that researchers expand their focus and interest towards the interactional and volumetric scenographies that students build and inhabit in and through their interaction. In this chapter, we introduce immersive qualitative digital research as an environment that facilitates a more qualitative, immersive, and emancipated way of working with audio-visual data. Immersive qualitative digital research is closely linked to emergent technologies such as Virtual Reality and 360° video, which create a new environment for researching immersive and volumetric scenographies in the postdigital age. Using three examples of digital software (AVA360VR, CAVA360VR and SQUIVE), we present and discuss how researchers can come to a better and more nuanced understanding of the thickness of lived postdigital practice.
Keywords: Scenography; Postdigital; Volumetric data; Immersive qualitative research; Digital research environments; BigSoftVideo
The pandemic affected more than 1.5 billion students and youth, and the most vulnerable learners were hit hardest, making digital inequality in educational settings impossible to overlook. Given this reality, we, all educators, came together to find ways to understand and address some of these inequalities. As a product of this collaboration, we propose a methodological toolkit: a theoretical kaleidoscope to examine and critique the constitutive elements and dimensions of digital inequalities. We argue that such a tool is helpful when a critical attitude to examine ‘the ideology of digitalism’, its concomitant inequalities, and the huge losses it entails for human flourishing seems urgent. In the paper, we describe different theoretical approaches that can be used for the kaleidoscope. We give relevant examples of each theory. We argue that the postdigital does not mean that the digital is over, rather that it has mutated into new power structures that are less evident but no less insidious as they continue to govern socio-technical infrastructures, geopolitics, and markets. In this sense, it is vital to find tools that allow us to shed light on such invisible and pervasive power structures and the consequences in the daily lives of so many.
Keywords: Theoretical kaleidoscope; Toolkit; Methodology; Digital inequalities; Postdigital; Collaborative writing
Schools produce multiple products and digitization articulates with them in different ways. In this paper we expand the frame for analyzing instructional automation by examining its implications for three scholastic products – embodied learning, grades and test scores, and the narratives that connect the two. We draw on data from interviews with 47 teachers in four full-time virtual elementary and secondary schools in the US to argue that at present most of the actual work of automation in virtual schools is focused on the production of marks and grades, and that the narrative products of digitization efforts – routinization discourses, potentials discourse, and ‘live’ teaching discourse – play key roles in shaping how we understand the connections between those products and student learning.
This paper details ethnographic methods, experiences, and insights from an ethnographer and an industry-engaged complex systems engineer on how to study resilience in blockchain‐based DAOs as a novel field site. Amidst digitization of numerous elements of government, work, and everyday life, ‘Decentralized Autonomous Organizations’ (DAOs) provide a field site for the generation of ethnographic insights into opportunities and limitations in organizational resilience in human‐machine assemblages. As a broad organizational form, DAOs aim to enable people to coordinate and govern themselves through automated rules deployed on a public blockchain (Hassan & De Filippi, 2021). DAOs are an experiment in ‘computer aided governance’. These adaptive, socio‐technical infrastructures are envisioned as capable of restructuring the foundations of governance in human societies (Merkle, 2016; Koletsi, 2019; Garrod, 2016). Ethnography provides a qualitative tool to elicit the social dynamics of governance, adaptability, and resilience in a context of algorithmic governance and automation. By foregrounding the social dynamics of organizational adaptability and resilience, our resilience framework and vulnerabilities mapping tools help us to operationalize complex domains to de‐mystify and re‐humanize algorithmic systems.
Emerging digital, automated and connected systems, devices, data, and algorithms are increasingly part of our lives. They participate in the everyday realities of participants in qualitative research and they are embedded in the infrastructures through which we research and share our scholarship, research, and practice. This article introduces this new landscape and its implications for qualitative research. I argue that we need to engage not only with questions of how automated futures are imagined by others, but with how to engage with such futures as qualitative researchers.
In this paper we draw on the findings on teens’ transmedia practices from the research project Transmedia Literacy carried out in eight countries from Europe, Latin America and Australia between 2015 and 2018. An ethnographic approach that combined different research methods, including questionnaires, participatory-creative workshops, interviews, media diaries and online community observation, was used to explore what teens are doing with media. In this article we focus on how teens perform their digital identity on Instagram. This social network is notably popular among young people, and the practice of taking, editing, selecting, hiding and sharing photos and videos through it is part of teenagers’ everyday lives and online interactions. We argue that this curating process encompasses several aspects that are central to teenagers creating a digital persona on Instagram, including content creation, validation through ‘likes’ and ‘followers’ and socio-technical automation. In certain profiles, this curation can lead to a professionalized use of the platform, whereby the self becomes an object of marketing and promotion for career and business purposes.
Ever since the outbreak of the COVID-19 pandemic, questions of whom or what to trust have become paramount. This article examines the public debates surrounding the initial development of the German Corona-Warn-App in 2020 as a case study to analyse such questions at the intersection of trust and trustworthiness in technology development, design and oversight. Providing some insights into the nature and dynamics of trust and trustworthiness, we argue that (a) trust is only desirable and justified if placed well, that is, if directed at those being trustworthy; that (b) trust and trustworthiness come in degrees and have both epistemic and moral components; and that (c) such a normatively demanding understanding of trust excludes technologies as proper objects of trust and requires that trust is directed at socio-technical assemblages consisting of both humans and artefacts. We conclude with some lessons learned from our case study, highlighting the epistemic and moral demands for trustworthy technology development as well as for public debates about such technologies, which ultimately requires attributing epistemic and moral duties to all actors involved.
This article explores the ways in which prisons are imagined as sites of technology development. By attending to expos that showcase prison technologies and constitute “live theatres of technology” (L. Cornfeld, 2018), we carve out ambivalent sociotechnical imaginaries of technological backwardness that are combined with the idea of radical technological innovation to reform the justice system. In doing so, we highlight the prison as a site of technology development, and technology trade shows catering to the prison and security sector as platforms for technological mediators that range from corporate prison tech companies to educators and representatives of the criminal justice system. The expos emerge as sites where technological development is negotiated through performative sociotechnical imaginaries of prison tech.
Artificial Intelligence-as-a-Service (AIaaS) empowers individuals and organisations to access AI on-demand, in either tailored or 'off-the-shelf' forms. However, institutional separation between development, training and deployment can lead to critical opacities, such as obscuring the level of human effort necessary to produce and train AI services. Details about how, where, and for whom AI services have been produced are valuable secrets, which vendors strategically disclose to clients depending on commercial interests. This article provides a critical analysis of how AIaaS vendors manipulate the visibility of human labour in AI production based on whether the vendor relies on paid or unpaid labour to fill interstitial gaps. Where vendors are able to occlude human labour in the organisational 'backstage,' such as in data preparation, validation or impersonation, they do so regularly, further contributing to ongoing techno-utopian narratives of AI hype. Yet, when vendors must co-produce the AI service with the client, such as through localised AI training, they must 'lift the curtain', resulting in a paradoxical situation of needing to both perpetuate dominant AI hype narratives while emphasising AI's mundane limitations.
A transit trip involves travel to and from transit stops or stations. The quality of what are commonly known as first and last mile connections (regardless of their length) can have an important impact on transit ridership. Transit agencies throughout the world are developing innovative approaches to improving first and last mile connections, for example, by partnering with ride-hailing and other emerging mobility services. A small but growing number of transit agencies in the U.S. have adopted first and last mile (FLM) plans with the goal of increasing ridership. As this is a relatively new practice by transit agencies, a review of these plans can inform other transit agencies and assist them in preparing their own. Four FLM plans were selected from diverse geographic contexts for review: Los Angeles County Metropolitan Transportation Authority (LA Metro), Riverside (CA) Transit Agency (RTA), Denver Regional Transit District (RTD), and the City of Richmond, CA. Based on the literature, we developed a framework with an emphasis on transportation equity to examine these plans. We identified five common approaches to addressing the FLM issue: spatial gap analysis with a focus on socio-demographics and locational characteristics, incorporation of emerging mobility services, innovative funding approaches for plan implementation, equity and transportation remedies for marginalized communities, and development of pedestrian and bicycle infrastructures surrounding transit stations. Strategies in three of the plans are aligned with regional goals for emissions reductions. LA Metro and Riverside Transit incorporate detailed design guidelines for the improvement of transit stations. As these plans are still relatively new, it will take time to evaluate their impact on ridership and their communities’ overall transit experience.
Artificial intelligence (AI) is often discussed as something extraordinary, a dream—or a nightmare—that awakens metaphysical questions on human life. Yet far from a distant technology of the future, the true power of AI lies in its subtle revolution of ordinary life. From voice assistants like Siri to natural language processors, AI technologies use cultural biases and modern psychology to fit specific characteristics of how users perceive and navigate the external world, thereby projecting the illusion of intelligence.
Integrating media studies, science and technology studies, and social psychology, Deceitful Media examines the rise of artificial intelligence throughout history and exposes the very human fallacies behind this technology. Focusing specifically on communicative AIs, Natale argues that what we call “AI” is not a form of intelligence but rather a reflection of the human user. Using the term “banal deception,” he reveals that deception forms the basis of all human-computer interactions rooted in AI technologies, as technologies like voice assistants utilize the dynamics of projection and stereotyping as a means for aligning with our existing habits and social conventions. By exploiting the human instinct to connect, AI reveals our collective vulnerabilities to deception, showing that what machines are primarily changing is not other technology but ourselves as humans.
Deceitful Media illustrates how AI has continued a tradition of technologies that mobilize our liability to deception and shows that only by better understanding our vulnerabilities to deception can we become more sophisticated consumers of interactive media.
It has become trivial to point out that algorithmic systems increasingly pervade the social sphere. Improved efficiency—the hallmark of these systems—drives their mass integration into day-to-day life. However, as a robust body of research in the area of algorithmic injustice shows, algorithmic systems, especially when used to sort and predict social outcomes, are not only inadequate but also perpetuate harm. In particular, a persistent and recurrent trend within the literature indicates that society's most vulnerable are disproportionally impacted. When algorithmic injustice and harm are brought to the fore, most of the solutions on offer (1) revolve around technical solutions and (2) do not center disproportionally impacted communities. This paper proposes a fundamental shift—from rational to relational—in thinking about personhood, data, justice, and everything in between, and places ethics as something that goes above and beyond technical solutions. Outlining the idea of ethics built on the foundations of relationality, this paper calls for a rethinking of justice and ethics as a set of broad, contingent, and fluid concepts and down-to-earth practices that are best viewed as a habit and not a mere methodology for data science. As such, this paper mainly offers critical examinations and reflection and not “solutions.”
Based on empirical material from Swedish reformist labour movement associations, this article illustrates how digital technology has been described as a problem (and sometimes a solution) at different points in time. Most significant, for this article, is the role that non-formal adult education has played in solving these problems. Computer education has repeatedly been described as a measure not only to increase technical knowledge, but also to construe desirable (digital) citizens for the future. Problematisations of the digital have changed over time, and these discursive reconceptualisations can be described as existing on a spectrum between techno-utopian visions, where adaptation of the human is seen as a task for education, and techno-dystopian forecasts, where education is needed to mobilise democratic control over threatening machines. As such, the goal for education has been one of political control—either to adapt people to machines, or to adapt machines to people.
Objectives
Medical image analysis practices face challenges that can potentially be addressed with algorithm-based segmentation tools. In this study, we map the field of automatic MR brain lesion segmentation to understand the clinical applicability of prevalent methods and study designs, as well as challenges and limitations in the field.
Design
Scoping review.
Setting
Three databases (PubMed, IEEE Xplore and Scopus) were searched with tailored queries. Studies were included based on predefined criteria. Emerging themes during consecutive title, abstract, methods and whole-text screening were identified. The full-text analysis focused on materials, preprocessing, performance evaluation and comparison.
Results
Out of 2990 unique articles identified through the search, 441 articles met the eligibility criteria, with an estimated growth rate of 10% per year. We present a general overview and trends in the field with regard to publication sources, segmentation principles used and types of lesions. Algorithms are predominantly evaluated by measuring the agreement of segmentation results with a trusted reference. Few articles describe measures of clinical validity.
Conclusions
The observed reporting practices leave room for improvement with a view to studying replication, method comparison and clinical applicability. To promote this improvement, we propose a list of recommendations for future studies in the field.
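The "agreement with a trusted reference" that this review describes is commonly quantified with voxel-overlap scores; the abstract does not name a specific measure, so the Dice coefficient below is only one illustrative choice, sketched in plain Python:

```python
def dice_coefficient(pred, ref):
    """Dice similarity between two binary segmentation masks,
    given as equal-length flat sequences of 0/1 voxel labels.
    Returns 1.0 when both masks are empty (a common convention)."""
    assert len(pred) == len(ref), "masks must cover the same voxels"
    intersection = sum(p and r for p, r in zip(pred, ref))
    total = sum(pred) + sum(ref)
    return 1.0 if total == 0 else 2.0 * intersection / total

# Toy 8-voxel example: 3 overlapping voxels, 4 labeled in each mask.
pred = [1, 1, 1, 0, 0, 0, 1, 0]
ref  = [1, 1, 0, 0, 0, 1, 1, 0]
print(dice_coefficient(pred, ref))  # → 0.75
```

A score of 1.0 means perfect agreement with the reference and 0.0 means no overlap; the empty-mask convention shown here varies between studies, which is itself one of the reporting inconsistencies such reviews flag.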
As energy transitions advance through the introduction of renewable energy production and new types of energy demands, expectations for more flexible electricity consumption have risen on agendas among system designers and scholars. Social scientists have followed this development through studies of technological visions and users of new flexibility techniques (e.g. demand-side management, pricing, storage). Based on interviews with electricity systems developers and householders in Norway, this article complements this body of scholarship and relates it to emerging themes in sustainability transitions research. We focus on end-user flexibility and operationalize the new concept of flexibility capital, developed within energy justice literature, to examine different framings of flexibility. The research examines how some householders have more capability of being flexible than others. Furthermore, we show how consumer understandings of flexibility are embedded in everyday life and differ from those of systems developers, who primarily understand flexibility as acting economically rationally and making cost-conscious decisions.
Quantification is particularly seductive in times of global uncertainty. Not surprisingly, numbers, indicators, categorizations, and comparisons are central to governmental and popular response to the COVID-19 pandemic. This essay draws insights from critical data studies, sociology of quantification and decolonial thinking, with occasional excursions into the biomedical domain, to investigate the role and social consequences of counting broadly defined as a way of knowing about the virus. It takes a critical look at two domains of human activity that play a central role in the fight against the virus outbreak, namely medical sciences and technological innovation. It analyzes their efforts to craft solutions for their user base and explores the unwanted social costs of these operations. The essay argues that the over-reliance of biomedical research on “whiteness” for lab testing and the techno-solutionism of the consumer infrastructure devised to curb the social costs of the pandemic are rooted in a distorted idea of a “standard human” based on a partial and exclusive vision of society and its components, which tends to overlook alterity and inequality. It contends that to design our way out of the pandemic, we ought to make space for distinct ways of being and knowing, acknowledging plurality and thinking in terms of social relations, alterity, and interdependence.
Are we reaching the limits of what human-centered and user-centered design can cope with? Developing new design methodologies and tools to unlock the potentials of data technologies such as the Internet of Things, Machine Learning and Artificial Intelligence for the everyday job of design is necessary but not sufficient. There is now a need to fundamentally question what happens when human-centered design is unable to effectively give form to technology, why this might be the case, and where we could look for alternatives.
Traditionally, professionals such as medical doctors, lawyers, and academics are protected. They work within well-defined jurisdictions, belong to specialized segments, have been granted autonomy, and have discretionary spaces. In this way, they can be socialized, trained, and supervised, case-related considerations and decisions can be substantive (instead of commercial), and decisions can be taken independently. Ideally, these decisions are authoritative and accepted by clients as well as by society (stakeholders), who trust professional services. This ideal-typical but also ‘ideal’ imagery always had its flaws; nowadays, shortcomings are increasingly clear. ‘Protective professionalism’ is becoming outdated. Due to heterogeneity and fragmentation within professional fields, the interweaving of professional fields, and dependencies of professional actions on outside worlds, professionals can no longer isolate themselves from others and outsiders. At first sight, this leads to a ‘decline’, ‘withering away’, or ‘hollowing out’ of professionalism. Or it leads to attempts to ‘reinstall’, ‘reinvent’, or ‘return to’ professional values and spaces. In this article, we avoid such ‘all or nothing’ perspectives on changing professionalism and explore the ‘reconfiguration’ of professionalism. Professional identities and actions can be adapted and might become ‘hybrid’, ‘organized’, and ‘connected’. Professional and organizational logics might be interrelated; professionals might see organizational (or organizing) duties as belonging to their work; and professional fields might open up to outside worlds. We particularly explore connective professionalism, arguing that we need more fundamental reflections and redefinitions of what professionalism means and what professionals are. We focus on the question of how professional action can be related to others and outsiders and remain ‘knowledgeable’, ‘autonomous’, and ‘authoritative’ at the same time.
This can no longer be a matter of expertise, autonomy, and authority as fixed and closed entities. These crucial dimensions of professional action become relational and processual. They have to be enacted on a continuous basis, backed by mechanisms that make professionalism knowledgeable, independent, and authoritative in the eyes of others.
The digitization of society creates both challenges and opportunities for prisons. Previous studies show that prisons’ digitization affects interaction among incarcerated people and prison culture, and reduces recidivism; however, it also poses security risks. In this study, we ask how barriers to digital inclusion appear among incarcerated people in the prison context, and how they perceive whether face-to-face interactions with employees can be replaced by digital services. The analytical starting points of the study are rhetorical analysis and Goffman's micro-sociological analysis. The research material consists of interviews with 26 incarcerated people from different parts of Finland. The results show that gaps in digital skills and access to the internet are key barriers to digital inclusion in prisons. The question of whether digital services can replace face-to-face encounters elicited conflicting comments. Interviewees emphasized the importance of social interaction in their desistance, but also the benefits of digitization, such as the possibility of anonymity. In addition, the research highlights the tense nature of prison culture, as well as prisoners’ differing aspirations. The pursuit of digital agency can also manifest itself in various secondary adjustments. The digitization of prisons entails a change in prison employees’ roles and ways of working.
The theoretical concept of trust has been identified as highly important to the successful design of intelligent technologies such as autonomous vehicles (AVs). In human-centred transport research this has resulted in a focus on trust in the technical design of future AVs and has raised the question of how the conditions that form trust change as technologies become more intelligent. In this article we discuss the first stage of an interdisciplinary project that brought together ethnographic and experimental user studies into trust in intelligent cars. This stage focused on the development of an interdisciplinary methodological framework for the user studies, through a review of 258 empirical HCI research articles on trust in automation and AVs. The review investigated the following research questions: a) what are the key themes in HCI methodologies used to research trust in automation and AVs; b) how do they account for trust in AVs as part of wider contexts; and c) how can these methodologies be developed to include more than momentary and individual human-machine interactions. We found that while theoretical understandings of trust in automated technologies acknowledge the relevance of the wider context in which the interaction occurs, existing methodologies predominantly involve experimental studies in simulated environments with a focus on reliance related aspects of trust. We identified that ethnographic user studies can potentially contribute to new connections between theoretical understandings and conventional experimental methods. Therefore, we propose a framework for an interdisciplinary approach that combines experimental and ethnographic methodologies to investigate trust in AVs.
High costs of owning fully-automated or autonomous vehicles (AVs) will fuel the demand for shared mobility, with zero driver costs. Although sharing sounds good for the transport system, congestion can easily rise without adequate policy measures. Many or all public transit lines will continue to exist, and carefully-designed policies can be implemented to make good use of fixed public assets, like commuter- and light-rail lines. In this study, a shared AV (SAV) fleet is analyzed as a potential solution to the first-mile-last-mile (FMLM) problem for access to and from public transit. Essentially, SAVs are analyzed as collector-distributor systems for these mass-movers and compared with a door-to-door (D2D) service. Results from an agent-based simulation of Austin, Texas, show that SAVs have the potential to help solve FMLM transit problems when fare benefits are provided to transit users. Restricting SAV use for FMLM trips increases transit coverage, lowers average access and egress walking distance, and shifts demand away from park-and-ride and long walk trips. When SAVs are available for both D2D use and FMLM trips, high SAV fares help maintain transit demand, without which the transit demand may decrease significantly, affecting the transit supply and the overall system reliability. Policy makers and planners should be wary of this shift away from transit and may be able to increase transit usage using policies tested in this study.
Until recently the main effect of technology on professional or knowledge-based work has been to augment and expand it, partly as described in Autor, Levy and Murnane's 2003 analysis. There are now increasing instances of knowledge-based work being automated and substituted, developments that are more familiar from factory and basic administrative settings. Two widely-quoted studies, by Frey and Osborne (2013) and Susskind and Susskind (2015), point towards significant technology-driven job losses including in professional fields. Subsequent analyses indicate that while some occupations will disappear or be deskilled, others will be created. The argument made here is that the most significant effect will be occupational transformation, necessitating different types of skills in a net movement towards work that is more digitally-oriented but also complex, creative and value-based. These changes have implications that are already beginning to affect the way that professions are organised and how practitioners are educated and trained.
Many people are involved in making large-scale data, but only some of these tasks get attention from researchers or recognition from managers re-organizing the data-driven workplace. New occupations like ‘data analyst’ and ‘data scientist’ have emerged in recent years, yet much of the work that makes data analysis, interpretation and responsible use possible happens in administrative or clerical jobs. As a result, this work is often not recognized as vital to producing good-quality data. New kinds of data and new uses of data mean that people in traditional roles are working with data in new ways, requiring new skills and knowledge. But these tasks and competencies in existing occupations have been undervalued and slow to come to scholars’ attention.
Advances in machine learning (ML) and artificial intelligence (AI) present an opportunity to build better tools and solutions to help address some of the world’s most pressing challenges, and deliver positive social impact in accordance with the priorities outlined in the United Nations’ 17 Sustainable Development Goals (SDGs). The AI for Social Good (AI4SG) movement aims to establish interdisciplinary partnerships centred around AI applications towards SDGs. We provide a set of guidelines for establishing successful long-term collaborations between AI researchers and application-domain experts, relate them to existing AI4SG projects and identify key opportunities for future AI applications targeted towards social good.
The rapid growth of the platform economy has provoked scholarly discussion of its consequences for the nature of work and employment. We identify four major themes in the literature on platform work and the underlying metaphors associated with each. Platforms are seen as entrepreneurial incubators, digital cages, accelerants of precarity, and chameleons adapting to their environments. Each of these devices has limitations, which leads us to introduce an alternative image of platforms: as permissive potentates that externalize responsibility and control over economic transactions while still exercising concentrated power. As a consequence, platforms represent a distinct type of governance mechanism, different from markets, hierarchies, or networks, and therefore pose a unique set of problems for regulators, workers, and their competitors in the conventional economy. Reflecting the instability of the platform structure, struggles over regulatory regimes are dynamic and difficult to predict, but they are sure to gain in prominence as the platform economy grows.
In an age defined by computational innovation, testing seems to have become ubiquitous, and tests are routinely deployed as a form of governance, a marketing device, an instrument for political intervention, and an everyday practice to evaluate the self. This essay argues that something more radical is happening here than simply attempts to move tests from the laboratory into social settings. The challenge that a new sociology of testing must address is that ubiquitous testing changes the relations between science, engineering, and sociology: Engineering is today in the very stuff of where society happens. It is not that the tests of 21st‐century engineering occur within a social context but that it is the very fabric of the social that is being put to the test. To understand how testing and the social relate today, we must investigate how testing operates on social life, through the modification of its settings. One way to clarify the difference is to say that the new forms of testing can be captured neither within the logic of the field test nor that of the controlled experiment. Whereas tests once happened inside social environments, today’s tests directly and deliberately modify the social environment.
Technologies change users' existing social, cultural, and material practices by providing new opportunities for reflecting on and managing their lives. As technological advancements pervade our private and professional lives, users are tempted to see them as "magic bullets" that can help them become more organized and efficient. In this paper, we introduce the term "time hacking" to capture the various ways technologies mediate users' time perception and perspective. We will use the examples of virtual assistants like Siri and Alexa and the Quantified Self Movement to illustrate how people feel that they are capable of hacking time by using devices and programs. Imagining tools as neutral entities that help them better manage their lives in a world that seems increasingly sped up, users are often blind to the multifarious ways these technologies, and the companies that produce them, shape what they attend to and how they make sense of information. The concept of time hacking helps us examine what narratives users construct and share about timesaving tools and how users' perception of and perspective about time changes in response to emerging technologies. Most importantly, time hacking can help to explain the allure of timesaving technologies, why users might be enthusiastic about taking them up and integrating them into their lives.
Incarcerated individuals have long contributed to crucial societal infrastructures. From being a leased workforce building the railways in the United States to constructing canal systems in Sweden, prisoners’ labor has been widespread as an important part of value production. Part of the labor conducted by incarcerated people is related to the production, repair, and maintenance of media devices and media infrastructures, constituting what we call prison media work. In this article, we trace the changing logics of prison media work historically since the inception of the modern prison at the turn of the 20th century. Based on archival material, interviews, and field observations, we outline a shift from physical manual labor toward the work of being tracked that is constitutive of surveillance capitalism in- and outside of the prison. We argue that prison media work holds an ambiguous position combining elements of exploitation and rehabilitation, but most importantly it is a dystopian magnifying glass of media work under surveillance capitalism.
Purpose
Previous studies repeatedly claim that social media challenge and even disrupt organizational boundaries conditioning discretionary work. The purpose of this paper is to investigate how police officers, drawing on institutionalized value logics, actively shape their awareness of how to use social media with discretion.
Design/methodology/approach
Drawing on semi-structured interviews with police officers from Sweden, the analysis explores similarities and variations in how they assess their discretionary awareness of how to manage social media potentials across different police practices. Supporting documents have been analyzed to put interviews into context.
Findings
The analysis shows how police officers justify their awareness of how to manage two social media potentials providing communicative efficiency and networking opportunities, by applying two justificatory modalities of momentary reconciliation. Contributing to previous research, findings show how these modalities accommodate tensions between different value logics, urging officers to engage in situated problem solving or moderation of the intensity in different connections. By drawing on discretionary awareness about enduring value tensions, police officers maintain legitimate claims on social media discretion. The study also complements previous research depicting digital communication and discretion as mutually exclusive. Findings suggest that web-based digitalization like social media raises new demands for awareness of a connected discretion.
Originality/value
Previous research rarely analyses officers’ awareness of how to manage idiosyncratic social media challenges. By introducing the concept of discretionary awareness, this study illuminates how arrangements of institutionalized value logics guide police officers in applying “good judgment” in day-to-day use of social media.
When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number of recent cases involving automated, or algorithmic, decision-systems. We apply our conception of agency laundering to a series of examples, including Facebook’s automated advertising suggestions, Uber’s driver interfaces, algorithmic evaluation of K-12 teachers, and risk assessment in criminal sentencing. We distinguish agency laundering from several other critiques of information technology, including the so-called “responsibility gap,” “bias laundering,” and masking.
Discretion is a pervasive phenomenon in legal systems. It is of concern to lawyers because it can be a force for justice or injustice: at once a means of advancing the broad purposes of law and of subventing them. For social scientists the discretion exercised by legal actors is an important form of decision-making behaviour, in which legal rules are merely one force in a field of pressures and constraints that push towards certain courses of action or inaction. This book presents a variety of analyses of legal discretion by lawyers and social scientists (drawn from both sides of the Atlantic), who have made discretion and its uses a central part of their scholarly concerns.
This chapter focuses on how Automated Decision-Making (ADM) and Artificial Intelligence (AI) are being imagined by stakeholders in the energy industry and policy, as part of a new emerging and future infrastructure. We discuss how people and resources are framed in these discourses, and question the extent to which these framings create plausible future visions. Drawing on ethnographic examples, we demonstrate how everyday life in the present and its future imaginaries complicate the scenarios these discourses imply. In doing so, through this example, we outline how emerging technologies, and the automated systems they are associated with, are embedded in imaginaries of future infrastructures, why they might be flawed and why we have an ethical responsibility to engage in the debates surrounding them.
Saunders’ chapter focuses on political advertising. What, he asks, is it about the microtargeting of political advertising on social media in the Brexit campaign and the Donald Trump election (as spearheaded by Cambridge Analytica) that is so wrong? His response is that, like dog whistle politics, sending different advertising to different targets can obscure the open public discussion of policy that is critical to the democratic process. As such, dark advertising stands as a real threat to democracy.
The home is an ever-changing assemblage of technologies that shapes the organisation and division of housework and supports certain models of what that work entails, who does it and for what purposes. This paper analyses core tensions arising through the ways smart homes are embedding logics of digital capitalism into home life and labour. As a critical way of understanding these techno-political shifts in the means of social reproduction, we advance the concept of Big Mother – a system that, under the guise of maternal care, seeks to manage, monitor and marketise domestic spaces and practices. We identify three tensions arising in the relationships between care and control as they are mediated through the Big Mother system: (a) outsourcing autonomy through enhanced control and choice, (b) increased monitoring for efficient management and (c) revaluation of care through optimisation of housework. For each area, we explore how emerging technological capacities promise to enhance our abilities to care for our homes, families and selves. Yet, at the same time, these innovations also empower Big Mother to enrol people into new techniques of surveillance, new forms of automation and new markets of data. Our purpose in this paper is to push back against the influential ideas of smart homes based on luxury surveillance and caring systems by showing that they exist in constant relation with a supposedly antithetical version of the smart home represented by Big Mother.
This article sets an agenda for and outlines a sensuous futures scholarship. Its aim is to suggest a starting point for this practice and to invite scholars and researchers to engage in its advancement. By way of example, I examine how the anticipatory concept of trust can be re-worked theoretically and ethnographically through a sensuous approach to scholarship articulated through design anthropology and futures anthropology. I thus argue for a sensuous scholarship that participates in both academic debate and in designing for ethical futures.
This essay explores how public reception of, and individual resistance to, public health mandates have reinforced agentic notions of bodily management in the COVID-19 era. Our collective approach to the pandemic continues to secure prevalent understandings of human agency over disease and illness by reifying the concept of personal choice. Notions of risk and shame shape these performances but do little to dislodge cultural frames that reify notions of individualism and the entrepreneurial subject. The wide circulation of viral videos highlighting the defiance of mask mandates is one site where choice and personal autonomy animate these debates. These confrontational acts are not easily segmented from the other cultural apparatuses where the privatization of risk is marshalled for political ends.
The first and last mile (FLM) problem, namely the poor connection between trip origins or destinations and public transport stations, is a significant obstacle to sustainable transportation as it is likely to encourage the use of cars for FLM travel, if not for the entire trip. This study examines the role of modality style and built environment in FLM mode choice behaviour, in order to identify the key features that might induce a travel mode shift from cars to more sustainable travel options for both mandatory and discretionary trips. More specifically, this study draws on disaggregate data from the South East Queensland household travel survey and presents a latent class choice model to unravel modality style groups. Results reveal two distinct individual-level modality style groups: (1) driving and walking oriented; (2) multimodal travellers. Individuals in the second modality style group were found to be relatively inelastic to FLM travel time for mandatory trips, while individuals in the first group were largely unaffected by built environment characteristics and highly habitual in their mode choice behaviour for both mandatory and discretionary trips. Home residence environments with high road intersection density and public transport accessibility, and home residence environments with diverse land use mix, respectively encourage individuals within the second modality style to walk for mandatory trips and discretionary trips. To this end, when place-based policies seek to change certain built environment features, individuals in the second modality style are more likely to shift their preference from cars to more sustainable modes. Finally, our findings have practical planning implications for targeting mode shift, highlighting the importance of considering the intersection of individual modality style and mode choice behaviour in a given locale.
More specifically, our findings advocate for place-based policies that target particular locales where the prevailing modality style is more predisposed to adopting a mode shift.
Artificial intelligence and algorithmic decision-making processes are increasingly criticized for their black-box nature. Explainable AI approaches that trace human-interpretable decision processes from algorithms have been explored. Yet, little is known about algorithmic explainability from a human-factors perspective. From the perspective of user interpretability and understandability, this study examines the effect of explainability in AI on user trust and attitudes toward AI. It conceptualizes causability as an antecedent of explainability and as a key cue of an algorithm, and examines both in relation to trust by testing how they affect user-perceived performance of AI-driven services. The results show the dual roles of causability and explainability in terms of their underlying links to trust and subsequent user behaviors. Explanations of why certain news articles are recommended generate users' trust, whereas causability, the extent to which users can understand those explanations, affords them emotional confidence. Causability lends justification for what should be explained and how, as it determines the relative importance of the properties of explainability. The results have implications for the inclusion of causability and explanatory cues in AI systems, which help to increase trust and help users to assess the quality of explanations. Causable explainable AI will help people understand the decision-making process of AI algorithms by bringing transparency and accountability into AI systems.
Meet the Smart Wife—at your service, an eclectic collection of feminized AI, robotic, and smart devices. This digital assistant is friendly and sometimes flirty, docile and efficient, occasionally glitchy but perpetually available. She might go by Siri, or Alexa, or inhabit Google Home. She can keep us company, order groceries, vacuum the floor, turn out the lights. A Japanese digital voice assistant—a virtual anime hologram named Hikari Azuma—sends her “master” helpful messages during the day; an American sexbot named Roxxxy takes on other kinds of household chores. In The Smart Wife, Yolande Strengers and Jenny Kennedy examine the emergence of digital devices that carry out “wifework”—domestic responsibilities that have traditionally fallen to (human) wives. They show that the principal prototype for these virtual helpers—designed in male-dominated industries—is the 1950s housewife: white, middle class, heteronormative, and nurturing, with a spick-and-span home. It's time, they say, to give the Smart Wife a reboot.
What's wrong with preferring domestic assistants with feminine personalities? We like our assistants to conform to gender stereotypes—so what? For one thing, Strengers and Kennedy remind us, the design of gendered devices re-inscribes those outdated and unfounded stereotypes. Advanced technology is taking us backwards on gender equity. Strengers and Kennedy offer a Smart Wife “manifesta,” proposing a rebooted Smart Wife that would promote a revaluing of femininity in society in all her glorious diversity.
Amazon’s projects for future automation contribute to anxieties about the marginalization of living labor in warehousing. Yet, a systematic analysis of patents owned by Amazon suggests that workers are not about to disappear from the warehouse floor. Many patents portray machines that increase worker surveillance and work rhythms. Others aim at incorporating workers’ activities into machinery to rationalize the labor process in an ever more pervasive form of digital Taylorism. Patents materialize the company’s desire for a technological future in which workers act and sense on behalf of machinery, becoming its living and sensing appendages. In this new relationship, humans extend machinery and its reach. Through the work-in-progress process of reaching increasing levels of automation, Amazon develops new technical foundations that consolidate its power in the digital workplace.
In this article we demonstrate how design anthropology theory, methodology and practice can be mobilised to create interventions in how possible human futures with emerging technologies are understood and imagined. Drawing on our research into Human Experience and Expectations of Autonomous Driving (AD) cars we show how: we engaged ethnographic insights to redefine concepts of trust and sharing which contest dominant problem-solution narratives; and we mobilised these insights in applied contexts, through our AD Futures cards which employ ethnographic quotes and examples to disrupt common assumptions, align stakeholders with everyday experience, and pose new questions.
Societies are responding to the COVID-19 pandemic at breathtaking speed. Many of these ad hoc responses will have long-lasting consequences, and we must make sure that today’s efforts do not threaten our future wellbeing.
The most consequential transformations may come from new health surveillance technologies that use machine learning and automated decision making to parse people’s digital footprints, identify those who are potentially infected, trace their contacts, and enforce social distancing. Some have argued that such digital contact tracing could be more effective in controlling the epidemic than mass quarantine.
Smart technology is everywhere: smart umbrellas that light up when rain is in the forecast; smart cars that relieve drivers of the drudgery of driving; smart toothbrushes that send your dental hygiene details to the cloud. Nothing is safe from smartification. In Too Smart, Jathan Sadowski looks at the proliferation of smart stuff in our lives and asks whether the tradeoff—exchanging our personal data for convenience and connectivity—is worth it. Who benefits from smart technology?
Sadowski explains how data, once the purview of researchers and policy wonks, has become a form of capital. Smart technology, he argues, is driven by the dual imperatives of digital capitalism: extracting data from, and expanding control over, everything and everybody. He looks at three domains colonized by smart technologies' collection and control systems: the smart self, the smart home, and the smart city. The smart self involves more than self-tracking of steps walked and calories burned; it raises questions about what others do with our data and how they direct our behavior—whether or not we want them to. The smart home collects data about our habits that offer business a window into our domestic spaces. And the smart city, where these systems have space to grow, offers military-grade surveillance capabilities to local authorities.
Technology gets smart from our data. We may enjoy the conveniences we get in return (the refrigerator says we're out of milk!), but, Sadowski argues, smart technology advances the interests of corporate technocratic power—and will continue to do so unless we demand oversight and ownership of our data.
This article discusses the cyclical nature of automation anxiety and examines ways of thinking about the recurrence of automation debates in culture, particularly with reference to the 1950s, 1960s and today. It draws on the concept of topos, developed by Erkki Huhtamo, to explore the return of automation anxieties (and fevers) and the relationship between material formations and technological imaginaries. We focus in particular on recent left thinking where automation is used to invoke a postcapitalist utopia. Examples include Nick Srnicek and Alex Williams's Inventing the Future: Postcapitalism and a World Without Work (2015) and Aaron Bastani's Fully Automated Luxury Communism: A Manifesto (2018). This strand of contemporary thinking is re-framed through our return to early automation scares emerging in the late 1960s. We explore engagements between labour, civil rights, left public intellectuals, and emerging industrial figures, over questions of automation and work. We pay particular attention to questions of 'who benefits and when?' These are germane to the question of utopian futures or non-reformist reformism as it recurs today. What interests us here is the concept of revived salience: not only how the tropes evident in these debates are revived and re-embedded today, but how they find their force, and what they imply.
This article argues that our intimate entanglement with digital technologies is challenging the foundations of current HCI research and practice. Our relationships to virtual realities, artificial intelligence, neuro-implants or pervasive, cyberphysical systems generate ontological uncertainties, epistemological diffusion and ethical conundrums that require us to consider evolving the current research paradigm. I look to post-humanism and relational ontologies to sketch what I call Entanglement HCI in response. I review selected theories—Actor-Network Theory, Post-Phenomenology, Object-Oriented Ontology, Agential Realism—and their existing influences on HCI literature. Against this background, I develop Entanglement HCI from the following four perspectives: (a) the performative relationship between humans and technology; (b) the re-framing of knowledge generation processes around phenomena; (c) the tracing of accountabilities, responsibilities and ethical encounters; and (d) the practices of design and mattering that move beyond user-centred design.
The enormous financial success of online advertising platforms is partially due to the precise targeting features they offer. Although researchers and journalists have found many ways that advertisers can target---or exclude---particular groups of users seeing their ads, comparatively little attention has been paid to the implications of the platform's ad delivery process, comprised of the platform's choices about which users see which ads. It has been hypothesized that this process can "skew" ad delivery in ways that the advertisers do not intend, making some users less likely than others to see particular ads based on their demographic characteristics. In this paper, we demonstrate that such skewed delivery occurs on Facebook, due to market and financial optimization effects as well as the platform's own predictions about the "relevance" of ads to different groups of users. We find that both the advertiser's budget and the content of the ad each significantly contribute to the skew of Facebook's ad delivery. Critically, we observe significant skew in delivery along gender and racial lines for "real" ads for employment and housing opportunities despite neutral targeting parameters. Our results demonstrate previously unknown mechanisms that can lead to potentially discriminatory ad delivery, even when advertisers set their targeting parameters to be highly inclusive. This underscores the need for policymakers and platforms to carefully consider the role of the ad delivery optimization run by ad platforms themselves---and not just the targeting choices of advertisers---in preventing discrimination in digital advertising.