
Computational Philosophy

Hubert Etienne
Facebook AI Research
Paris, France
hae@.com
ABSTRACT
The critical value of interdisciplinarity is increasingly accepted not only for promoting the responsible use of machine learning models, but also for increasing their performance. Social scientists can then help engineers better understand the populations they gather data from. To be successful, however, interdisciplinary collaborations require more than just gathering researchers from various fields around a table. They call for addressing challenges such as developing a common language, understanding different ways of reasoning, and settling epistemic controversies to agree on shared criteria against which the validity of the co-produced knowledge can be assessed. Controversies bring about relevant epistemic questions that promote an ongoing reflection on scientific methods. In contrast, when a discipline leverages its own methods to approach a topic traditionally associated with another discipline, the unsolicited interference is often followed by a backlash which undermines the possibility of collaboration (e.g., [1], [2], [3]).
New environments are emerging in academic centers willing to welcome interdisciplinary research, and a key challenge is creating the practices and methodologies that enable such research. Computational philosophy is the approach I develop to serve this goal. I present here two examples of such collaborations, which I led as a philosopher alongside machine learning engineers. These collaborations produced strong results and significantly advanced the state of the art in the domain of online social interactions.
e rst example is an empirical study on misinformation based
on an analysis of user-generated reports from Facebook and
Instagram [4]. e mixed approach I developed with Onur Çelebi
allowed us to identify meaningful variations in the volumes and
types of false news, as well as in the manipulative strategies
developed to spread misinformation among countries and
platforms. anks to an original typology we created to classify
content, we were able to identify four distinct types of behaviors
for users reporting content to moderators. is allowed us to
propose explanations for up to 55% of the inaccuracy in user
reports, suggest solutions to improve the overall signal by taking
action on the dierent sources of inaccuracy, and build a classier
capable of distinguishing credible user reports from others to
support misinformation detection.
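
To make this concrete, the sketch below shows what a minimal report-credibility classifier of this kind could look like. It is a hypothetical illustration, not the classifier built in [4]: the example reports, labels, and the choice of TF-IDF features with logistic regression (via scikit-learn) are all assumptions made for the sake of the sketch.

```python
# Hypothetical sketch of a report-credibility classifier: given the text of a
# user report, predict whether moderators are likely to confirm it. The data,
# labels, and model choice are illustrative, not those of the study in [4].
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: report texts and whether the report proved accurate
# (1 = the reported content was indeed violating, 0 = inaccurate report).
reports = [
    "this article invents a quote from the minister",
    "fake cure, the study it cites does not exist",
    "I just don't like this account",
    "reported by mistake, meant to hide the post",
]
confirmed = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression: a simple baseline for
# separating credible reports from the sources of inaccuracy that a
# typology of reporting behaviors would identify.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, confirmed)

# Score a new, unseen report.
print(model.predict_proba(["the photo is doctored, this event never happened"]))
```

A classifier like this would be trained on reports whose outcomes moderators have already adjudicated, and its scores could then be used to prioritize which reported content is reviewed first.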
The second example is an empirical study of social influencers on Instagram [5]. François Charton and I developed an original typology to classify Instagram influencers based on their source of legitimacy. This allowed us to identify the different kinds of audiences characteristic of each type of influencer, which we analyzed through René Girard's mimetic desire theory [6]. This research enriched Girard's philosophical theory while helping us advance the understanding of social interactions between influencers and their followers, by superimposing onto them a communication system inspired by Marshall McLuhan's media theory [7]. Leveraging different signals, we were then able to identify, for each category of influencer, which kinds of posts were most likely to generate a positive response or negative feedback.
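
As an illustration of this last step, the hypothetical sketch below compares feedback across influencer types and post kinds. The typology labels, post kinds, and engagement counts are placeholders invented for the example; they are not the categories or signals used in [5].

```python
# Hypothetical sketch: for each influencer type, find the kind of post with
# the best ratio of positive to negative feedback. All labels and numbers
# here are illustrative placeholders, not the study's actual data.
import pandas as pd

posts = pd.DataFrame({
    "influencer_type": ["expert", "expert", "celebrity", "celebrity"],
    "post_kind":       ["tutorial", "sponsored", "lifestyle", "sponsored"],
    "positive":        [420, 80, 900, 150],  # e.g., likes, supportive comments
    "negative":        [5, 60, 40, 300],     # e.g., critical comments
})

# Ratio of positive to negative feedback per post.
posts["ratio"] = posts["positive"] / posts["negative"]

# Keep, for each influencer type, the post kind with the highest ratio.
best = posts.loc[posts.groupby("influencer_type")["ratio"].idxmax()]
print(best[["influencer_type", "post_kind", "ratio"]])
```

Run over real engagement signals, the same aggregation would surface, per category of influencer, the kinds of posts that audiences reward or sanction.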
I am now expanding my approach to hate speech, drawing on the psychological mechanisms of hate conceptualized by Girard [8] to better understand the manifestations of hate on Instagram. These three blocks should then allow me to present a holistic approach to online interactions, upon which an ethical system for the moderation of problematic online interactions could be built.
KEYWORDS: Philosophy of AI, AI Ethics, Content Moderation, Behavioral Sciences, Misinformation
ACM Reference format:
Hubert Etienne. 2022. Computational philosophy. In Proceedings of the 2022 AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES '22), August 1–3, 2022, Oxford, United Kingdom. ACM, New York, NY, USA. DOI: https://doi.org/10.1145/3514094.3539562

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author(s).
AIES '22, August 1–3, 2022, Oxford, United Kingdom.
© 2022 Copyright is held by the owner/author(s).
ACM ISBN 978-1-4503-9247-1/22/08. https://doi.org/10.1145/3514094.3539562
REFERENCES
[1] Hubert Etienne. 2021. The dark side of the 'Moral Machine' and the fallacy of computational ethical decision-making for autonomous vehicles. Law, Innovation and Technology 13, 1 (2021), 85–107. DOI: https://doi.org/10.1080/17579961.2021.1898310
[2] John Harris. 2020. The immoral machine. Cambridge Quarterly of Healthcare Ethics 29, 1 (2020), 71–79.
[3] Zeerak Talat, Hagen Blix, Josef Valvoda, Maya Indira Ganesh, Ryan Cotterell, and Adina Williams. 2021. A Word on Machine Ethics: A Response to Jiang et al. arXiv:2111.04158 (2021).
[4] Hubert Etienne and Onur Celebi. 2022. Listen to what they say: better understand and detect misinformation with user feedback. In review.
[5] Hubert Etienne and François Charton. 2022. Computational philosophy enlightens social influencers on Instagram. In review.
[6] René Girard. 1961. Mensonge romantique et vérité romanesque. Grasset, Paris.
[7] Marshall McLuhan. 1994. Understanding Media: The Extensions of Man. MIT Press, Cambridge.
[8] René Girard. 1972. La Violence et le sacré. Grasset, Paris.