Article
Publisher preview available

Caring in an Algorithmic World: Ethical Perspectives for Designers and Developers in Building AI Algorithms to Fight Fake News

Authors: Galit Wellner · Dmytro Mykhailov

Abstract

This article suggests several design principles intended to assist in the development of ethical algorithms, exemplified by the task of fighting fake news. Although numerous algorithmic solutions have been proposed, fake news remains a wicked socio-technical problem that demands not only engineering but also ethical consideration. We suggest employing insights from the ethics of care while maintaining its speculative stance, asking how algorithms and design processes would differ if they generated care while fighting fake news. After reviewing the major characteristics of the ethics of care and the phases of care, we offer four algorithmic design principles. The first highlights the need for software designers to develop a strategy for dealing with fake news. The second calls for involving various stakeholders in the design process in order to increase the chances of successfully fighting fake news. The third suggests allowing end-users to report fake news. The fourth proposes keeping end-users updated on the treatment of suspected news items. Implementing these principles as care practices can render the development process more ethically oriented as well as improve the ability to fight fake news.
Science and Engineering Ethics (2023) 29:30
https://doi.org/10.1007/s11948-023-00450-4
ORIGINAL RESEARCH/SCHOLARSHIP
Caring inanAlgorithmic World: Ethical Perspectives
forDesigners andDevelopers inBuilding AI Algorithms
toFight Fake News
GalitWellner1,3 · DmytroMykhailov2
Received: 18 August 2022 / Accepted: 6 July 2023 / Published online: 9 August 2023
© The Author(s), under exclusive licence to Springer Nature B.V. 2023
Keywords Ethics of care· Fake news· Algorithmic design· Philosophy of
technology· Stakeholders· Human involvement
* Galit Wellner
galitw@HIT.ac.il
1 The Interdisciplinary Program in Humanities, Tel Aviv University, Tel Aviv, Israel
2 School of Humanities, Southeast University, Nanjing, China
3 Present Address: School of Multi-Disciplinary Studies, Holon Institute of Technology (HIT), Holon, Israel
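To make the third and fourth principles from the abstract concrete, here is a minimal sketch, not taken from the article itself; every class, method, and notification channel below is a hypothetical illustration of a report-intake flow that lets end-users flag suspected items (principle 3) and keeps each reporter updated as the item's treatment progresses (principle 4):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Callable


class ReviewStatus(Enum):
    RECEIVED = "received"            # report logged, awaiting triage
    UNDER_REVIEW = "under_review"    # classifier and/or fact-checkers engaged
    CONFIRMED_FAKE = "confirmed_fake"
    CLEARED = "cleared"


@dataclass
class FakeNewsReport:
    item_url: str
    reporter_id: str
    reason: str
    status: ReviewStatus = ReviewStatus.RECEIVED
    history: list = field(default_factory=list)


class ReportDesk:
    """Hypothetical component: collects end-user reports (principle 3)
    and pushes every status change back to the reporter (principle 4)."""

    def __init__(self, notify: Callable[[str, str], None]):
        self._notify = notify  # e.g. an email or in-app messaging hook
        self._reports: list[FakeNewsReport] = []

    def submit(self, item_url: str, reporter_id: str, reason: str) -> FakeNewsReport:
        report = FakeNewsReport(item_url, reporter_id, reason)
        self._reports.append(report)
        self._notify(reporter_id, f"Report received for {item_url}.")
        return report

    def update(self, report: FakeNewsReport, status: ReviewStatus) -> None:
        report.status = status
        report.history.append((datetime.now(timezone.utc), status))
        self._notify(report.reporter_id,
                     f"{report.item_url}: status is now '{status.value}'.")


# Usage: wire the desk to any messaging channel and walk one report
# through the review states.
desk = ReportDesk(notify=lambda user, msg: print(f"[to {user}] {msg}"))
r = desk.submit("https://example.com/story", "user-42", "misleading headline")
desk.update(r, ReviewStatus.UNDER_REVIEW)
desk.update(r, ReviewStatus.CONFIRMED_FAKE)
```

In a production system the notify callback would be an actual messaging service and the status transitions would be driven by the fact-checking pipeline; the point of the sketch is only that user reporting and user notification are designed in from the start, as the principles require.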
... Ethical considerations surrounding the use of AI to combat disinformation are discussed in studies [12,64,68,83,91,92]. These studies stress the importance of principles such as transparency, privacy protection, bias mitigation, and accountability in guiding the development and deployment of AI systems. ...
... Ethical considerations are paramount when deploying AI technologies to combat disinformation. Studies emphasize the importance of transparency, privacy protection, and accountability in AI systems [12, 64, 68, 82–84]. For instance, Jobin et al. [104] highlighted that without explicit ethical guidelines, organizations risk creating systems that exacerbate biases rather than mitigate them. ...
Article
Full-text available
In the rapidly evolving digital age, the proliferation of disinformation and misinformation poses significant challenges to societal trust and information integrity. Recognizing the urgency of addressing this issue, this systematic review explores the role of artificial intelligence (AI) in combating the spread of false information. The study provides a comprehensive analysis of how AI technologies were utilized from 2014 to 2024 to detect, analyze, and mitigate the impact of misinformation across various platforms. The research utilized an exhaustive search across prominent databases such as ProQuest, IEEE Xplore, Web of Science, and Scopus. Articles published within the specified timeframe were screened, resulting in the identification of 8103 studies. After eliminating duplicates and screening based on title, abstract, and full-text review, this vast pool was distilled to 76 studies that met the eligibility criteria. Key findings from the review emphasize the advancements and challenges in AI applications for combating misinformation. These findings highlight AI's capacity to enhance information verification through sophisticated algorithms and natural language processing. They further emphasize that the integration of human oversight and continual algorithm refinement is pivotal in augmenting AI's effectiveness in discerning and countering misinformation. By fostering collaboration across sectors and leveraging the insights gleaned from this study, researchers can propel the development of ethical and effective AI solutions.
... Rather, we should evaluate whether and how a technology should be introduced into our societies on a case-specific, context-dependent basis. Contributing to Technology Assessment (TA) is indeed one of the most fruitful assets of the postphenomenological approach (e.g., de Boer et al., 2018;Kudina & de Boer, 2021;Morrison, 2020;Mykhailov, 2023;Wellner & Mykhailov, 2023). I think that it could be rendered even more consistent by appreciating how technology shapes human evolution. ...
Article
Full-text available
In this paper, I aim to assess whether postphenomenology’s ontological framework is suitable for making sense of the most recent technoscientific developments, with special reference to the case of AI-based technologies. First, I will argue that we may feel diminished by those technologies seemingly replicating our higher-order cognitive processes only insofar as we regard technology as playing no role in the constitution of our core features. Secondly, I will highlight the epistemological tension underlying the account of this dynamic submitted by postphenomenology. On the one hand, postphenomenology’s general framework prompts us to conceive of humans and technologies as mutually constituting one another. On the other, the postphenomenological analyses of particular human-technology relations, which Peter-Paul Verbeek calls cyborg relations and hybrid intentionality, seem to postulate the existence of something exclusively human that technology would only subsequently mediate. Thirdly, I will conclude by proposing that postphenomenology could incorporate into its ontology insights coming from other approaches to the study of technology, which I label as human constitutive technicity in the wake of Peter Sloterdijk’s and Bernard Stiegler’s philosophies. By doing so, I believe, postphenomenology could better account for how developments in AI prompt and possibly even force us to revise our self-representation. From this viewpoint, I will advocate for a constitutive role of technology in shaping the human lifeform not only in the phenomenological-existential sense of articulating our relation to the world but also in the onto-anthropological sense of influencing our evolution.
... AI algorithms have been shown to be useful in detecting fake news or misinformation that may interfere with efficiency and optimization [60,61]. Proponents of using AI in the detection of fake news suggest that certain principles need to be followed, including the development of strategies by software designers to combat fake news, enabling users to report fake news when detected, and keeping users informed of the handling of suspected fake news [62]. For example, deep learning, machine learning, and natural language processing can extract text- or image-based cues to train models that aid in predicting the authenticity of news [2,63]. ...
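As a rough illustration of the text-cue approach the passage above mentions, here is a toy baseline, not any of the cited systems; the headlines and labels are fabricated for the example. TF-IDF turns each headline into a vector of word weights, and a logistic regression learns which cues predict "fake":

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled headlines: 1 = fake, 0 = authentic (illustrative only).
headlines = [
    "Miracle cure erases all disease overnight, doctors stunned",
    "City council approves budget for new public library",
    "Secret moon base hidden from public for decades, insider says",
    "Central bank holds interest rates steady at 4.5 percent",
]
labels = [1, 0, 1, 0]

# Vectorize with unigrams and bigrams, then fit the classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Probability that an unseen headline is fake, per the toy model.
print(model.predict_proba(["Scientists stunned by miracle weight loss trick"])[:, 1])
```

Real detection systems train far richer models on large annotated corpora and combine textual cues with image, network, and metadata signals; the sketch only shows the basic shape of the pipeline.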
Article
Full-text available
In the digital age, where information is a cornerstone of decision-making, social media's largely unregulated environment has intensified the prevalence of fake news, with significant implications for both individuals and societies. This study employs a bibliometric analysis of a large corpus of 9678 publications spanning 2013-2022 to scrutinize the evolution of fake news research, identifying leading authors, institutions, and nations. Three thematic clusters emerge: disinformation on social media, COVID-19-induced infodemics, and techno-scientific advancements in auto-detection. This work introduces three novel contributions: 1) a pioneering mapping of fake news research to the Sustainable Development Goals (SDGs), indicating its influence on areas like health (SDG 3), peace (SDG 16), and industry (SDG 9); 2) the utilization of Prominence percentile metrics to discern critical and economically prioritized research areas, such as misinformation and object detection in deep learning; and 3) an evaluation of generative AI's role in the propagation and realism of fake news, raising pressing ethical concerns. These contributions collectively provide a comprehensive overview of the current state and future trajectories of fake news research, offering valuable insights for academia, policymakers, and industry.
Article
This article contends that the responsible artificial intelligence (AI) approach—the dominant ethics approach underlying most regulatory and ethical guidance—falls short because it overlooks the impact of AI on human relationships. Focusing only on responsible AI principles reinforces a narrow concept of the accountability and responsibility of companies developing AI. This article proposes that applying the ethics of care approach to AI regulation can offer a more comprehensive regulatory and ethical framework that addresses AI's impact on human relationships. This dual approach is essential for the effective regulation of AI in the domain of mental health care. The article delves into the emergence of a new "therapeutic" area facilitated by AI-based bots, which operate without a therapist. It highlights the difficulties involved, chiefly the absence of a defined duty of care toward users, and shows how implementing the ethics of care can establish clear responsibilities for developers. It also sheds light on the potential for emotional manipulation and the risks involved. In conclusion, the article proposes a series of considerations grounded in the ethics of care for the development of AI-powered therapeutic tools.
Chapter
This research evaluates articles published from 2018 to 2023, focusing on the deep learning issues that have arisen in the last decade. Deep learning is a popular approach in news research, especially in the classification or detection of news. Moreover, in Artificial Intelligence (AI), a number of applications have been invented to help journalists optimize their work. On the other hand, AI has a dark side if used without wisdom. We used the bibliometric method to extract a total of N = 69 documents for analysis, examining parameters such as the scholarly landscape, keyword-plus themes, co-networking, and the evolution of research themes. The result of this research is a matrix of research directions for future work, which suggests that news classification and detection research should be observed closely. Since large language models were introduced, news production has changed, influencing journalism practices.
Article
Full-text available
We present a novel model of individual people, online posts, and media platforms to explain the online spread of epistemically toxic content such as fake news and suggest possible responses. We argue that a combination of technical features, such as the algorithmically curated feed structure, and social features, such as the absence of stable social-epistemic norms of posting and sharing in social media, is largely responsible for the unchecked spread of epistemically toxic content online. Sharing constitutes a distinctive communicative act, governed by a dedicated norm and motivated to a large extent by social identity maintenance. But confusion about this norm and its lack of inherent epistemic checks lead readers to misunderstand posts, attribute excess or insufficient credibility to posts, and allow posters to evade epistemic accountability—all contributing to the spread of epistemically toxic content online. This spread can be effectively addressed if (1) people and platforms add significantly more context to shared posts and (2) platforms nudge people to develop and follow recognized epistemic norms of posting and sharing.
Article
Full-text available
In the present paper, I take findings from the postphenomenological variation of instrumental realism to develop an 'environmental framework' that provides a philosophical answer to the 'problem of representation.' The framework focuses on three elements of the representational environment: image-making technology, the image as a representational device, and the scientific hermeneutic strategies occurring within the image-interpretation process in the laboratory set-up. The central idea is that scientific images do not produce meaning without their instrumental environment; in other words, an image becomes representational through the interplay of the three framework elements. In the second part of the paper, I apply the framework to contemporary debates on fMRI imaging. I show that fMRI images receive meaning not in isolation but within a complex instrumental environment.
Article
Full-text available
This paper examines the 'life' of computer technologies to understand what kind of 'technological intentionality' is present in computers, based upon the phenomenological elements constituting objects in general. Such a study can better explain the effects of new digital technologies on our society and highlight the role of digital technologies by focusing on their activities. Even if Husserlian phenomenology rarely talks about technologies, some of its aspects can be used to address the actions performed by digital technologies by focusing on objects' inner 'life' through the analysis of passive synthesis and phenomenological horizons in objects. These elements can be applied to computer technologies to show how digital objects are 'alive.' This paper focuses on programs developed through high-level languages like C++ and unsupervised learning techniques like the 'Generative Adversarial Model.' The phenomenological analysis reveals the computer's autonomy within the programming stages. At the same time, the conceptual inquiry into the digital system's learning ability shows the alive and changeable nature of the technological object itself.
Chapter
Full-text available
To date, there is no comprehensive linguistic description of fake news. This chapter surveys a range of fake news detection research, focusing specifically on that which adopts a linguistic approach as a whole or as part of an integrated approach. Areas where linguistics can support fake news characterisation and detection are identified, namely, in the adoption of more systematic data selection procedures as found in corpus linguistics, in the recognition of fake news as a probabilistic outcome in classification techniques, and in the proposal for integrating linguistics in hybrid approaches to fake news detection. Drawing on the research of linguist Douglas Biber, it is suggested that fake news detection might operate along dimensions of extracted linguistic features.
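A toy sketch of what 'extracted linguistic features' can mean in practice; this is illustrative only, the feature set is far smaller than the dimensions used in Biber-style analyses, and the function name is hypothetical:

```python
import re

def linguistic_cues(text: str) -> dict[str, float]:
    """Compute a few toy register features of the kind a dimensional
    analysis might aggregate (real analyses use many more)."""
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    n = max(len(tokens), 1)
    first_person = {"i", "we", "me", "us", "my", "our"}
    return {
        "type_token_ratio": len(set(tokens)) / n,   # lexical diversity
        "first_person_rate": sum(t in first_person for t in tokens) / n,
        "exclamation_rate": text.count("!") / max(len(text), 1),
        "mean_word_length": sum(map(len, tokens)) / n,
    }

print(linguistic_cues("We can't believe it! You won't either!!"))
```

Consistent with the chapter's framing, a classifier would treat such features as probabilistic evidence contributing to a fake/not-fake score, not as proof on their own.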
Article
Full-text available
Ever since Achterhuis designated American philosophy of technology "empirical" there has been a Continental "push-back" defending the first generation of European—mostly Heidegger's essentialistic "transcendental"—philosophy of technology. While I prefer a "concrete" turn—to avoid confusion with British "empiricism"—in the belief that particular technologies are different from others, this is a quibble. I admit I was very taken by Richard Rorty's "anti-essentialism" and "non-foundationalism" in his version of pragmatism, and have adapted much of that stance into postphenomenology. In this contribution I reply to the comments of Lars Botin and Robert Rosenberger.
Article
THE ETHICAL ALGORITHM: The Science of Socially Aware Algorithm Design by Michael Kearns and Aaron Roth. New York: Oxford University Press, 2019. 232 pages. Hardcover; $24.95. ISBN: 9780190948207.

Can an algorithm be ethical? That question appears to be similar to asking if a hammer can be ethical. Isn't the ethics solely related to how the hammer is used? Using it to build a house seems ethical; using it to harm another person would be immoral.

That line of thinking would be appropriate if the algorithm were something as simple as a sorting routine. If we sort the list of names in a wedding guest book so that the thank-you cards can be sent more systematically, its use would be acceptable; sorting a list of email addresses by education level in order to target people with a scam would be immoral.

The algorithms under consideration in The Ethical Algorithm are of a different nature, and the ethical issues are more complex. These algorithms are of fairly recent origin. They arise as we try to make use of vast collections of data to make more-accurate decisions: for example, using income, credit history, current debt level, and education level to approve or disapprove a loan application. A second example would be the use of high school GPA, ACT or SAT scores, and extra-curricular activities to determine college admissions.

The algorithms under consideration use machine-learning techniques (a branch of artificial intelligence) to look at the success rates of past student admissions and instruct the machine-learning algorithm to determine a set of criteria that successfully distinguish (with minimal errors) between those past students who graduated and those who didn't. That set of criteria (called a "model") can then be used to predict the success of future applicants.

The ethical component is important because such machine-learning algorithms optimize with particular goals as targets. And there tend to be unintended consequences--such as higher rates of rejection of applicants of color who would actually have succeeded. The solution to this problem requires more than just adding social equity goals as part of what is to be optimized--although that is an important step.

The authors advocate the development of precise definitions of the social goals we seek, and then the development of algorithmic techniques that help produce those goals. One important example is the social goal of privacy. What follows leaves out many important ideas found in the book, but illustrates the key points. Kearns and Roth cite the release in the mid-1990s of a dataset containing medical records for all state employees of Massachusetts. The dataset was intended for the use of medical researchers. The governor assured the employees that identifying information had been removed--names, social security numbers, and addresses. Two weeks later, Latanya Sweeney, a PhD student at MIT, sent the governor his medical records from that dataset. It cost her $20 to legally purchase the voter rolls for the city of Cambridge, MA. She then correlated that with other publicly available information to eliminate every other person from the medical dataset other than the governor himself.

Achieving data privacy is not as simple as was originally thought. To make progress, a good definition of privacy is needed. One useful definition is the notion of differential privacy: "nothing about an individual should be learnable from a dataset that cannot be learned from the same dataset but with the individual's data removed" (p. 36). This needs to also prevent identification by merging multiple datasets (for example, the medical records from several hospitals from which we might be able to identify an individual by looking for intersections on a few key attributes such as age, gender, and illness). One way to achieve this goal is to add randomness to the data. This can be done in a manner in which the probability of determining an individual changes very little by adding or removing that person's data to/from the dataset.

A very clever technique for adding this random noise can be found in a randomized response, an idea introduced in the 1960s to get accurate information in polls about sensitive topics (such as, "have you cheated on your taxes?"). The respondent is told to flip a coin. If it is a head, answer truthfully. If it is a tail, flip a second time and answer "yes" if it is a head and "no" if it is a tail. Suppose the true proportion of people who cheat on their taxes is p. Some pretty simple math shows that with a sufficiently large sample size (larger than needed for surveys that are less sensitive), the measured proportion, m, of "yes" responses will be close to m = ¼ + ½p. We can then approximate p as 2m - ½, and still give individuals reasonable deniability. If I answer "yes" and a hacker finds my record, there is still a 25% chance that my true answer is "no." My privacy has been effectively protected. So we can achieve reasonable privacy at the cost of needing a larger dataset.

This short book discusses privacy, fairness, multiplayer games (such as using apps to direct your morning commute), pitfalls in scientific research, accountability, the singularity (a future time when machines might become "smarter" than humans), and more. Sufficient detail is given so that the reader can understand the ideas and the fundamental aspects of the algorithms without requiring a degree in mathematics or computer science.

One of the fundamental issues driving the need for ethical algorithms is the unintended consequences that result from well-intended choices. This is not a new phenomenon--Lot made a choice based on the data he had available: "Lot looked about him, and saw that the plain of the Jordan was well watered everywhere like the garden of the Lord, like the land of Egypt ..." Genesis 13:10 (NRSV). But by choosing that apparently desirable location, Lot brought harm to his family.

I have often pondered the command of Jesus in Matthew 10:16 where he instructs us to "be wise as serpents and innocent as doves." Perhaps one way to apply this command is to be wise as we are devising algorithms to make sure that they do no harm. We should be willing to give up some efficiency in order to achieve more equitable results.

Reviewed by Eric Gossett, Department of Mathematics and Computer Science, Bethel University, St. Paul, MN 55112.
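The randomized-response arithmetic in the review is easy to check by simulation; here is a quick sketch, not from the book, that recovers p from the measured proportion m:

```python
import random

def randomized_response(truth: bool) -> bool:
    """First flip: heads -> answer truthfully; tails -> a second flip
    answers 'yes' on heads and 'no' on tails."""
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5  # random 'yes' with probability 1/2

p = 0.30                 # true (hidden) proportion of "cheaters"
n = 200_000
answers = [randomized_response(random.random() < p) for _ in range(n)]
m = sum(answers) / n     # measured 'yes' proportion, approximately 1/4 + p/2
print(f"measured m = {m:.3f}, recovered p = {2 * m - 0.5:.3f}")  # ~0.30
```

Because P(yes) = ½p + ¼, inverting gives p ≈ 2m - ½, exactly as the review states, while any individual "yes" still carries a 25% chance of being a forced random answer.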
Article
This paper presents a case of severe uncertainty in the development of autonomous and intelligent systems in Artificial Intelligence and autonomous robotics. After discussing how uncertainty emerges from the complexity of the systems and their interaction with unknown environments, the paper describes the novel framework of explorative experiments. This framework provides a suitable context in which many of the issues relating to uncertainty in this field, at both the epistemological and the ethical level, should be reframed. The case of autonomous robot systems for search and rescue is used to make the discussion more concrete.
Chapter
Modeling information diffusion on social media has gained tremendous research attention in the last decade due to its importance for understanding how news content spreads through network links such as followers, friends, etc. Fake stories that gain quick visibility are deployed on social media strategically in order to create maximum impact. In this context, the selection of initiators, the time of deployment, the estimation of the reach of the news, etc. play a decisive role in modeling the spread appropriately. In this chapter, we start by defining the problem of fake news diffusion and addressing the challenges involved. We then model information cascades in various ways, such as diffusion trees, and present a series of traditional and recent approaches that attempt to model the spread of fake news on social media.
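As a minimal illustration of the diffusion-tree idea, here is a bare-bones independent-cascade simulation; the chapter's models are considerably richer, and the network, names, and resharing probability below are hypothetical:

```python
import random

def independent_cascade(graph: dict, seeds: set, prob: float = 0.2) -> dict:
    """Simulate one spread of a story from seed accounts.
    Returns a diffusion tree: node -> the node that passed it the story."""
    tree = {s: None for s in seeds}   # seeds (initiators) have no parent
    frontier = list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for follower in graph.get(node, []):
                # each newly exposed follower reshares independently
                if follower not in tree and random.random() < prob:
                    tree[follower] = node
                    nxt.append(follower)
        frontier = nxt
    return tree

# Toy follower network: account -> accounts that see its posts.
network = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": ["f"], "e": []}
print(independent_cascade(network, seeds={"a"}, prob=0.5))
```

The choice of seeds and the per-edge resharing probability correspond to the chapter's points about initiator selection and estimated reach: rerunning the simulation over many trials gives an estimate of how far a strategically seeded story travels.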