Article

Data Civics: A Response to the “Ethical Turn”


Abstract

In addition to the recent proliferation of approaches, programs, and research centers devoted to ethical data and Artificial Intelligence, it is becoming increasingly clear that we need to directly address the political question. Ethics, while crucial, comprises only an indirect response to recent concerns about the political uses and misuses of data mining, AI, and automated processes. If we are concerned about the impact of digital media on democracy, it will be important to consider what it might mean to foster democratic arrangements for the collection and use of data, and for the institutions that perform these tasks. This essay considers what it might mean to supplement ethical concerns with political ones. It argues for the importance of considering the tensions between civic life and the wholesale commercialization of news, information, and entertainment platforms, and how these are exacerbated by the dominant economic model of data-driven hyper-customization.


... There have been calls for studies of the embedding of artificial intelligence (AI) in organisational practice for quite some time (Orlikowski, 2016; Andrejevic, 2020; Hafermalz and Huysman, 2021). This raises questions about the way users are configured (Woolgar, 1990) and how organisational knowledge changes (Hafermalz and Huysman, 2021). ...
... Taking the normative concepts of "Trustworthy AI", including explainability (European Commission, 2019, 2020, 2021), as a point of departure, we ask how our organisational context looks, who the users of our future AI system are, what other actors have an important role to play, and how the concept of explainability, with its systems-in-use and process-of-development aspects, intersects with the organisational context. We investigate this by viewing the development and use of AI as structuration of practices (Giddens, 1984). ...
Conference Paper
Full-text available
Artificial intelligence (AI) needs a framework that balances the opportunities it represents with its risks. But while there is broad consensus on this, and public regulatory initiatives have been taken, there is far less knowledge about how these dilemmas, opportunities, and risks play out in practice. The interest in ethics in organisations, driven by the discourse on "Trustworthy AI", makes us ask whether an ethical approach to AI in organisations is purposeful or needs modification. We investigate this by viewing the development and use of AI as structuration of practices. The empirical material is our own development of an AI system. Drawing on studies of ethics in engineering design, we find that AI is a question of structuration processes with unintended consequences: a "slide" from an ethics of virtue to an ethics of benefit, corroborated by engineers and designers referring ethical dilemmas to managers and politicians. The EU framework of Trustworthy AI for designing and using more accountable AI systems, which considers ethics, human autonomy, harm prevention, fairness, etc., conflicts with contemporary construction organisations. We propose an extension of the EU guidelines.
... Consequently, the status and even viability of the continuation of diverse ethnocultural projections of human being is, indeed, one of the characteristic stakes of digital technological production, and therefore of creative making with computational automation and AI. This is all too apparent from, for example, the controversies over the increasingly evident biases in AI systems which 'learn' from big data that record and operationalise various specific cultural values and perspectives, and the so-called 'ethical turn' in IT and related social science disciplines which attempts to 'correct' these (Andrejevic, 2020). The overarching consequence of this post-war globalization is its acceleration of the ecological and biological impacts of global industrialisation and the imminent threat this poses to the maintenance of human and other organic forms of life in their current variety and condition. ...
Article
This essay considers the nature and stakes of creative making with computational automation technologies. I will argue that Bernard Stiegler’s organological approach to the human as “technical life” takes care of the question of the nature of creative making, and the pharmacological critical practice that it mandates takes care of the question of the stakes. I say “takes care” to emphasise that Stiegler’s theoretical enterprise is dedicated to a “therapeutics” of contemporary technocultural transformation, because culture is best understood as a taking care of the technical pharmakon – both poison and cure – that is our irreducible technical supplementarity. After providing an assessment of Stiegler’s thinking on organology and pharmacological critique, I will discuss the work of some creative makers I have worked with or was able to interview as part of the South West Creative Technologies Network’s Automation Fellowship programme in 2019-2020. The goal is to interpret their work pharmacologically and so to elaborate and extend Stiegler’s work on contemporary technocultural becoming. Digital automation and AI are powerful drivers of the so-called Silicon Valley era of disruptive “creative destruction”. This means that the stakes of creative making and its possibilities for taking care of the future cannot be higher today.
... Building data literacy at several fronts through (public) education, critical news reporting, and transparent communication by ethical data-driven organizations could lead to the formation of data civics that are capable of challenging current power imbalances politically (Andrejevic, 2020). While strong arguments have been made for formally including data literacy into educational policies (Knaus, 2020), it is for the European context largely unclear whether, where and how exactly the issue is integrated in school-and university curricula. ...
Article
Full-text available
This conceptual paper explores the role of communication around data practices of Big Tech companies. By critiquing communication practices, we argue that Big Tech platforms shape users into data subjects through framing, influencing behaviour, and the black-boxing of algorithms. We approach communication about data from three perspectives: (1) current data communication constructs reductive data identities for users and contributes to the colonization of daily routines; (2) by strategically deploying the black box metaphor, tech companies try to legitimize abuses of power in datafication processes; (3) the logic in which communication is mediated through the interfaces of Big Tech platforms is normalizing this subjectification. We argue that critical data literacy can foster individual resilience and allows users to resist exploitative practices, but this depends on transparent communication. The opposite seems standard among tech companies that obfuscate their data practices. Current commercial appropriations of data ethics need to be critically assessed against the background of increasing competition in the digital economy.
... Ethical problems have received considerable attention in these policies. Critics have argued that 'AI ethics' are toothless, easy for the tech industry (i.e., companies engaged in digital technological innovation and growth) to manipulate, and unable to ensure compliance [18,19]. The HLEG on AI's 'Ethical guidelines for trustworthy AI' [20] have been criticised for serving as industrial 'ethics washing' and for being 'watered down' as a result of compromises between the public interests of sustainability and 'the common good' and the tech industry's interests of boosting industrial capacity and growth [21,22]. ...
Article
Full-text available
Artificial intelligence (AI) and digitalisation have become an integral part of public governance. While digital technology is expected to enhance neutrality and accuracy in decision-making, it raises concerns about the status of public values and democratic principles. Guided by the theoretical concepts of input, throughput and output democracy, this article analyses how democratic principles have been interpreted and defended in EU policy formulations relating to digital technology over the last decade. The emergence of AI policy has changed the conditions for democratic input and throughput legitimacy, which is an expression of a shift in power and influence between public and private sectors. Democratic input values in AI production are promoted by ethical guidelines directed towards the industry, while democratic throughput, e.g., accountability and transparency, receive less attention in EU AI policy. This indicates future political implications for the ability of citizens to influence technological change and pass judgement on accountable actors.
Article
Purpose This paper aims to analyze the relationships between human resource supply chain management (HRSCM), corporate culture (CC) and the code of business ethics (CBE) in the MENA region. Design/methodology/approach In this study, the author adopted a quantitative approach through an online Google Form survey for the data-gathering process. All questionnaires were distributed to the manufacturing and service firms that are listed in the Chambers of the Industries of Jordan, Saudi Arabia, Morocco and Egypt in the MENA region using a simple random sampling method. About 567 usable and valid responses were retrieved out of 2,077 for analysis, representing a 27.3% response rate. The sample unit for analysis included all middle- and senior-level managers and employees within manufacturing and service firms. The conceptual model was tested using a hypothesis-testing deductive approach. The findings are based on covariance-based analysis and structural equation modeling (SEM) using PLS-SEM software. The author performed convergent validity and discriminant validity tests, and bootstrapping was also applied. Findings The empirical results display a significant and positive association between HRSCM and the CBE. The CC and the CBE tend to be positively and significantly related. Therefore, HRSCM can play a key role in boosting and applying the CBE in firms. To achieve the firm's purposes, more attention should be paid to HR personnel in implementing the CBE. The high importance of the CBE becomes necessary for both the department and the firm. Practical implications Such results can provide insightful information for HR personnel, managers and leaders to encourage them to develop and maintain an effective corporate code of conduct within their organizations. Originality/value This paper explores the linkages between HRSCM, CC and CBE in the Middle East region, given the lack of available research analyzing the relationships between them; it also offers important implications for Middle Eastern businesses.
Article
“Machine listening” is one common term for a fast-growing interdisciplinary field of science and engineering that “uses signal processing and machine learning to extract useful information from sound”. This article contributes to the critical literature on machine listening by presenting some of its history as a field. From the 1940s to the 1990s, work on artificial intelligence and audio developed along two streams. There was work on speech recognition/understanding, and work in computer music. In the early 1990s, another stream began to emerge. At institutions such as MIT Media Lab and Stanford’s CCRMA, researchers started turning towards “more fundamental problems of audition”. Propelled by work being done by and alongside musicians, speech and music would increasingly be understood by computer scientists as particular sounds within a broader “auditory scene”. Researchers began to develop machine listening systems for a more diverse range of sounds and classification tasks: often in the service of speech recognition, but also increasingly for their own sake. The soundscape itself was becoming an object of computational concern. Today, the ambition is “to cover all possible sounds”. That is the aspiration with which we must now contend politically, and which this article sets out to historicise and understand.
Article
Full-text available
Conspirituality refers to the confluence of New Age spirituality and conspiracism that frames reality through holistic thinking—connecting events and energies, the inner self to the outer world in unseen ways. Conspirituality has thrived online: between the pleasure of the weekly horoscope and the obsession with the QAnon drop is a mode of causal promiscuity in which, as Q puts it, “future proves past.” This panel traces forms of conspirituality from MAGA mystics to New Age influencers, from technolibertarian imageboards to Silicon Valley vision quests. While conspirituality marks an online psychographic segmentation, it also traces a formal quality that organizes ways of navigating, knowing, and critiquing the internet, which is undergirded by New Age spirituality’s perennialism: a belief that different spiritual traditions are equally valid, because they all essentially worship the same divine source that emanates throughout the cosmos and the human body. The internet supercharges perennialism, providing a connective medium for the New Age ideology of manifesting: the belief that we create our own reality. As users trawl the internet for snippets and statistics to feed their confirmation bias and populate their vision boards, the connective medium of the internet manifests toxicity and misinformation at scale. The papers in this panel develop a line of research on the coevolution of spirituality and technology from organized to new religious movements. Instead of demystification, we use ethnographic, textual, and hermeneutic approaches—examining internet users, governance, genealogies, and internet studies itself—to politicize networked conspirituality as vernacular theories of power and powerlessness.
Article
Full-text available
In the present age AI (artificial intelligence) emerges as both a medium to and message about (or even from) the future, eclipsing all other possible prospects. Discussing how AI succeeds in presenting itself as an arrival on the human horizon at the end times, this theoretical essay scrutinizes the ‘inevitability’ of AI-driven abstract futures and probes how such imaginaries become living myths, attending to how the technology is embedded in broader appropriations of the future tense. Reclaiming anticipation existentially, by drawing and expanding on the philosophy of Karl Jaspers – and his concept of the limit situation – I offer an invitation beyond the prospects and limits of ‘the new AI Era’ of predictive modelling, exploitation and dataism. I submit that the present moment of technological transformation, and of escalating multi-faceted and interrelated global crises, is a digital limit situation in which there are entrenched existential and politico-ethical stakes of anticipatory media. Attending to them as a ‘future present’ (Adam and Groves 2007, 2011) and taking responsible action constitutes our utmost capability and task. The essay concludes that precisely here lies the assignment ahead for pursuing a post-disciplinary, integrative and generative form of Humanities and Social Sciences as a method of hope, one that engages AI designers in the pursuit of an inclusive and open future of existential and ecological sustainability.
Article
Television & New Media commemorates its 20th year anniversary with this diverse collection of short reflection pieces on the “intellectual and institutional turbulence” facing media studies and the ways our colleagues have taken up these challenges in their work. Our introduction to the anniversary issue specifically addresses the role of media and media studies in the COVID-19 pandemic moment. On the one hand, our discipline has the opportunity to reinforce and reflect on its long-held arguments as we see how the pandemic reveals key insights of the field with uncanny clarity. On the other hand, for some, there is the nagging sensation we will have to do more and better if we are to adequately account for all the features of the current crisis.
Book
As seen in Wired and Time A revealing look at how negative biases against women of color are embedded in search engine results and algorithms Run a Google search for “black girls”—what will you find? “Big Booty” and other sexually explicit terms are likely to come up as top search terms. But, if you type in “white girls,” the results are radically different. The suggested porn sites and un-moderated discussions about “why black women are so sassy” or “why black women are so angry” present a disturbing portrait of black womanhood in modern society. In Algorithms of Oppression, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color. Through an analysis of textual and media searches as well as extensive research on paid online advertising, Noble exposes a culture of racism and sexism in the way discoverability is created online. As search engines and their related companies grow in importance—operating as a source for email, a major vehicle for primary and secondary school learning, and beyond—understanding and reversing these disquieting trends and discriminatory practices is of utmost importance. An original, surprising and, at times, disturbing account of bias on the internet, Algorithms of Oppression contributes to our understanding of how racism is created, maintained, and disseminated in the 21st century.
  • J. M. Bernstein, “The Very Angry Tea Party.” The New York Times
  • Jessica Fjeld, Hannah Hilligoss, Nele Achten, Maia Daniel, Joshua Feldman, Sally Kagay, Principled Artificial Intelligence: A Map of Ethical and Rights-Based Approaches
  • Eillie Anzilotti, “This Plan for an AI-based Direct Democracy Outsources Votes to a Predictive Algorithm”
  • Nellie Bowles, “Silicon Valley Came to Kansas Schools. That Started a Rebellion”
  • Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor
  • Mark Savage, “BBC Building ‘Public Service’ Algorithm”
  • Julia Angwin, Jeff Larson, Surya Mattu, Lauren Kirchner, “Machine Bias.” ProPublica