Laura Sartori’s research while affiliated with University of Bologna and other places


Publications (2)


Minding the gap(s): public perceptions of AI and socio-technical imaginaries
  • Article
  • Full-text available

March 2022 · 483 Reads · 111 Citations · AI & SOCIETY
Laura Sartori · Giulia Bocca

Examining the social side of AI in depth is an emerging requirement within the AI community. Future research should invest in an “AI for people”, going beyond the undoubtedly much-needed efforts on ethics, explainability and responsible AI. The article addresses this challenge by problematizing the discussion around AI, shifting attention to individuals and their awareness, knowledge and emotional response to AI. First, we outline our main argument on the need for a socio-technical perspective in the study of AI’s social implications. Then, we illustrate the main existing narratives of hopes and fears associated with AI and robots. As building blocks of broader “sociotechnical imaginaries”, narratives are powerful tools that shape how society sees, interprets and organizes technology. An original empirical study at the University of Bologna collected data to examine levels of awareness, knowledge and emotional response towards AI, revealing insights to build on in future research. Replete with exaggerations, both utopian and dystopian narratives are analysed with respect to relevant socio-demographic variables (gender, generation and competence). Finally, focusing on two issues, the state of AI anxiety and the point of view of non-experts, opens the floor to problematizing the discourse around AI, sustaining the need for a sociological perspective in the field of AI and discussing future comparative research.


A sociotechnical perspective for the future of AI: narratives, inequalities, and human control

March 2022 · 835 Reads · 143 Citations · Ethics and Information Technology

Different people have different perceptions of artificial intelligence (AI). It is extremely important to bring together the alternative frames of thinking of the various communities (developers, researchers, business leaders, policymakers, and citizens) to properly start acknowledging AI. This article highlights the ‘fruitful collaboration’ that sociology and AI could develop in both social and technical terms. We discuss how biases and unfairness are among the major challenges to be addressed in such a sociotechnical perspective. First, as intelligent machines reveal their nature as ‘magnifying glasses’ in the automation of existing inequalities, we show how the AI technical community is calling for transparency and explainability, accountability and contestability. While none of these is a panacea, they all contribute to ensuring human control in novel practices that include requirement, design and development methodologies for a fairer AI. Second, we elaborate on the mounting attention to technological narratives as technology is recognized as a social practice within a specific institutional context. Not only do narratives reflect organizing visions for society, but they are also a tangible sign of the traditional lines of social, economic, and political inequalities. We conclude with a call for a diverse approach within the AI community and a richer knowledge of narratives, as they help in better addressing future technical developments, public debate, and policy. AI practice is interdisciplinary by nature and will benefit from a socio-technical perspective.

Citations (2)


... Dystopian narratives reinforce a near hopelessness in future projections while still being grounded in present day fact (Potter, 2012). These narratives have been examined within fiction and nonfiction text (Cave and Dihal, 2019), and their emotive responses have been surveyed as well (Sartori and Bocca, 2023; Wang et al., 2023). We define dystopian in the context of AI as threatening, or broadly problematic (Marčetić and Nolin, 2023), while following boyd and Crawford's (2012) depiction of dystopia imaginaries regarding big data and its effects on privacy invasion and decreased civic freedoms. ...

Reference: The dystopian imaginaries of ChatGPT: A designed cycle of fear
Cited work: Minding the gap(s): public perceptions of AI and socio-technical imaginaries (AI & SOCIETY)

... Privacy concerns are paramount as AI systems collect and process vast amounts of personal data, raising questions about consent, data ownership, and surveillance (Adobor & Yawson, 2023; Floridi & Cowls, 2019). Autonomous decision-making by AI systems introduces questions about human agency and control, particularly in high-stakes domains like healthcare, criminal justice, and finance (Sartori & Theodorou, 2022). ...

Cited work: A sociotechnical perspective for the future of AI: narratives, inequalities, and human control (Ethics and Information Technology)