![Michele Loi](https://i1.rgstatic.net/ii/profile.image/692539275345928-1542125638943_Q128/Michele-Loi-2.jpg)
About
108 Publications · 57,019 Reads · 1,607 Citations
Introduction
Additional affiliations
September 2008 - March 2012
September 2008 - March 2012
July 2012 - March 2013
Publications (108)
Recent epidemiological reports of associations between socioeconomic status and epigenetic markers that predict vulnerability to diseases are bringing to light substantial biological effects of social inequalities. Here, we start the discussion of the moral consequences of these findings. We firstly highlight their explanatory importance in the con...
This paper argues that human germ-line biomedical enhancements (such as those that may be enabled by the application of CRISPR/CAS9 technology to human embryos) should be regulated in the name of preventing the inter-generational transmission and amplification of extreme socio-economic inequality. The protection of a background condition of "rough...
This paper discusses the concept of “human disenhancement”, i.e. the worsening of human individual abilities and expectations through technology. The goal is provoking ethical reflection on technological innovation outside the biomedical realm, in particular the substitution of human work with computer-driven automation. According to some widely ac...
In this article, we defend a normative theory of prenatal equality of opportunity, based on a critical revision of Rawls's principle of fair equality of opportunity (FEO). We argue that if natural endowments are defined as biological properties possessed at birth and the distribution of natural endowments is seen as beyond the scope of justice, Raw...
Recent evidence of intergenerational epigenetic programming of disease risk broadens the scope of public health preventive interventions to future generations, i.e. non-existing people. Due to the transmission of epigenetic predispositions, lifestyles such as smoking or unhealthy diet might affect the health of populations across several generation...
In the field of algorithmic fairness, many fairness criteria have been proposed. Oftentimes, their proposal is only accompanied by a loose link to ideas from moral philosophy -- which makes it difficult to understand when the proposed criteria should be used to evaluate the fairness of a decision-making system. More recently, researchers have thus...
By combining the philosophical literature on statistical evidence and the inter-disciplinary literature on algorithmic fairness, we revisit recent objections against classification parity in light of causal analyses of algorithmic fairness and the distinction between predictive and diagnostic evidence. We focus on trial proceedings as a black-box c...
This article presents a fairness principle for evaluating decision-making based on predictions: a decision rule is unfair when the individuals directly impacted by the decisions who are equal with respect to the features that justify inequalities in outcomes do not have the same statistical prospects of being benefited or harmed by them, irrespecti...
Algorithmic predictions are promising for insurance companies to develop personalized risk models for determining premiums. In this context, issues of fairness, discrimination, and social injustice might arise: Algorithms for estimating the risk based on personal data may be biased towards specific social groups, leading to systematic disadvantages...
Trust and monitoring are traditionally antithetical concepts. Describing trust as a property of a relationship of reliance, we introduce a theory of trust and monitoring, which uses mathematical models based on two classes of functions, including q-exponentials, and relates the levels of trust to the costs of monitoring. As opposed to several accou...
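For readers unfamiliar with the function class mentioned in this abstract, the following is a minimal illustrative sketch of the Tsallis q-exponential and of one toy way a monitoring cost could be related to a trust level. The link between the two and all parameter values are hypothetical assumptions for illustration; the paper's actual model is not reproduced here.

```python
# Illustrative only: the Tsallis q-exponential, one of the function classes
# mentioned in the abstract. The way it is tied to trust and monitoring costs
# below is a toy assumption, not the paper's actual model.
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential: reduces to the ordinary exp(x) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

# Hypothetical relation: the monitoring cost a trustor is willing to bear
# decays with the trust level t (parameters chosen only for illustration).
trust_levels = np.linspace(0.0, 1.0, 5)
monitoring_cost = q_exp(-3.0 * trust_levels, q=1.5)
for t, c in zip(trust_levels, monitoring_cost):
    print(f"trust={t:.2f} -> relative monitoring cost={c:.3f}")
```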
The widespread use of algorithms for prediction-based decisions urges us to consider the question of what it means for a given act or practice to be discriminatory. Building upon work by Kusner and colleagues in the field of machine learning, we propose a counterfactual condition as a necessary requirement on discrimination. To demonstrate the phil...
Holm (Res Publica, 2022. https://link.springer.com/article/10.1007/s11158-022-09546-3) argues that a class of algorithmic fairness measures, that he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is bec...
Counterfactual explanations are a prominent example of post-hoc interpretability methods in the explainable Artificial Intelligence (AI) research domain. Unlike other explanation methods, they offer the possibility of recourse against unfavourable outcomes computed by machine learning models. However, in this paper we show that retra...
In a recent paper, Brian Hedden has argued that most of the group fairness constraints discussed in the machine learning literature are not necessary conditions for the fairness of predictions, and hence that there are no genuine fairness metrics. This is proven by discussing a special case of a fair prediction. In our paper, we show that Hedden’s...
In prediction-based decision-making systems, different perspectives can be at odds: The short-term business goals of the decision makers are often in conflict with the decision subjects' wish to be treated fairly. Balancing these two perspectives is a question of values. We provide a framework to make these value-laden choices clearly visible. For...
Group fairness metrics are an established way of assessing the fairness of prediction-based decision-making systems. However, these metrics are still insufficiently linked to philosophical theories, and their moral meaning is often unclear. We propose a general framework for analyzing the fairness of decision systems based on theories of distributi...
In this paper, we provide a moral analysis of two criteria of statistical fairness debated in the machine learning literature: 1) calibration between groups and 2) equality of false positive and false negative rates between groups. In our paper, we focus on moral arguments in support of either measure. The conflict between group calibration vs. fal...
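As a point of reference for the two criteria contrasted in this abstract, here is a minimal sketch of how they can be computed for a binary risk score. The synthetic data, the 0.5 threshold, and the crude calibration proxy are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: per-group calibration proxy and error rates for a
# binary risk score. Data and threshold are synthetic assumptions.
import numpy as np

def group_metrics(y_true, y_score, group, threshold=0.5):
    """Return a crude calibration gap, FPR, and FNR for each group."""
    y_pred = (y_score >= threshold).astype(int)
    results = {}
    for g in np.unique(group):
        m = group == g
        yt, yp, ys = y_true[m], y_pred[m], y_score[m]
        pos = yp == 1
        # Crude calibration proxy: observed outcome rate vs. mean score
        # among the group's predicted positives.
        calib_gap = abs(yt[pos].mean() - ys[pos].mean()) if pos.any() else float("nan")
        fpr = ((yp == 1) & (yt == 0)).sum() / max((yt == 0).sum(), 1)
        fnr = ((yp == 0) & (yt == 1)).sum() / max((yt == 1).sum(), 1)
        results[int(g)] = {"calibration_gap": calib_gap, "fpr": fpr, "fnr": fnr}
    return results

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)          # two synthetic groups
y_score = rng.uniform(size=1000)               # synthetic risk scores
y_true = (rng.uniform(size=1000) < y_score).astype(int)
print(group_metrics(y_true, y_score, group))
```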
A recent paper (Hedden 2021) has argued that most of the group fairness constraints discussed in the machine learning literature are not necessary conditions for the fairness of predictions, and hence that there are no genuine fairness metrics. This is proven by discussing a special case of a fair prediction. In our paper, we show that Hedden's ar...
We argue that an imperfect criminal law procedure cannot be group-fair, if 'group fairness' involves ensuring the same chances of acquittal or convictions to all innocent defendants independently of their morally arbitrary features. We show mathematically that only a perfect procedure (involving no mistake), a non-deterministic one, or a degenerate...
Clients may feel trapped into sharing their private digital data with insurance companies to get a desired insurance product or premium. However, private insurance must collect some data to offer products and premiums appropriate to the client’s level of risk. This situation creates tension between the value of privacy and common insurance business...
The initial online publication contained a typesetting mistake in the author information. The original article has been corrected.
We provide a philosophical explanation of the relation between artificial intelligence (AI) explainability and trust in AI, providing a case for expressions, such as “explainability fosters trust in AI,” that commonly appear in the literature. This explanation relates the justification of the trustworthiness of an AI with the need to monitor it dur...
The ELSI White Paper is the final achievement of the ELSI Task Force for the National Research Programme “Big Data” (NRP 75). It is an informational document that provides an overview of the key ethical, legal, and social challenges of big data and provides guidance for the collection, use, and sharing of big data. The document aims to bring togeth...
Here, we provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context. This addresses the need for ethical guidance of data-science experts, business managers, and regulators, proposing a framework of moral reasoning behind the choice of fai...
The traditional business model of insurance is going through a disruptive change. InsurTech and predictive analytics are raising high expectations because they can offer personalized policy premiums adapted to the individual level of risk. Whereas the personalization of premium setting represents an opportunity, however, it could also become a thre...
We provide a philosophical explanation of the relation between artificial intelligence (AI) explainability and trust in AI, providing a case for expressions, such as “explainability fosters trust in AI,” that commonly appear in the literature. This explanation considers the justification of the trustworthiness of an AI with the need to monitor it d...
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3404040
In this paper we argue that transparency, just as explanation, can be defined at different levels of abstraction. We criticize recent attempts to identify the explanation of black box algorithms with making their decisions (post-hoc) interpretable. These approaches simplify the real natur...
This paper is a reply to "On Statistical Criteria of Algorithmic Fairness," by Brian Hedden. We question the significance of arguing that many group fairness criteria discussed in the machine learning literature are not necessary conditions for the fairness of predictions or decisions based on them. We show that it may be true, in general, that F i...
We argue that the phenomena of distributed responsibility, induced acceptance, and acceptance through ignorance constitute instances of imperfect delegation when tasks are delegated to computationally-driven systems. Imperfect delegation challenges human accountability. We hold that both direct public accountability via public transparency and indi...
To appear in the Encyclopedia of Technology & Politics, Edward Elgar Publishing.
Full-text available on SSRN: http://dx.doi.org/10.2139/ssrn.3817377
A crucial but often neglected aspect of algorithmic fairness is the question of how we justify enforcing a certain fairness metric from a moral perspective. When fairness metrics are proposed, they are typically argued for by highlighting their mathematical properties. Rarely are the moral assumptions beneath the metric explained. Our aim in this p...
A crucial but often neglected aspect of algorithmic fairness is the question of how we justify enforcing a certain fairness metric from a moral perspective. When fairness metrics are defined, they are typically argued for by highlighting their mathematical properties. Rarely are the moral assumptions beneath the metric explained. Our aim in this pa...
In his recent article 'Limits of trust in medical AI,' Hatherley argues that, if we believe that the motivations that are usually recognised as relevant for interpersonal trust have to be applied to interactions between humans and medical artificial intelligence, then these systems do not appear to be the appropriate objects of trust. In this respo...
In this paper we argue that transparency of machine learning algorithms, just as explanation, can be defined at different levels of abstraction. We criticize recent attempts to identify the explanation of black box algorithms with making their decisions (post-hoc) interpretable, focusing our discussion on counterfactual explanations. These approach...
Counterfactual explanations are a prominent example of post-hoc interpretability methods in the explainable Artificial Intelligence research domain. They provide individuals with alternative scenarios and a set of recommendations to achieve a sought-after machine learning model outcome. Recently, the literature has identified desiderata of counterf...
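As background for readers, a minimal sketch of the basic idea behind a counterfactual explanation follows: search for a small change to an input that flips a model's decision. The toy linear model, the L2 grid search, and all parameter values are hypothetical assumptions, not the methods evaluated in the paper.

```python
# Hypothetical illustration of a counterfactual explanation: find a small
# input change that flips the model's decision. Model and search are toys.
import numpy as np

def predict(x, w=np.array([1.5, -2.0]), b=-0.25):
    """Toy linear classifier: approve (1) iff w.x + b >= 0."""
    return int(np.dot(w, x) + b >= 0)

def counterfactual(x, step=0.05, max_radius=2.0):
    """Grid-search an (approximately) smallest L2 perturbation that flips the prediction."""
    original = predict(x)
    radii = np.arange(step, max_radius + step, step)
    angles = np.linspace(0, 2 * np.pi, 72, endpoint=False)
    for r in radii:                          # grow the search radius outwards
        for a in angles:
            delta = r * np.array([np.cos(a), np.sin(a)])
            if predict(x + delta) != original:
                return x + delta             # first flip found at the smallest radius tried
    return None                              # no counterfactual within max_radius

x = np.array([0.2, 0.4])                     # e.g. a rejected applicant's two features
print("original decision:", predict(x))
print("counterfactual point:", counterfactual(x))
```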
Machine learning (ML) models and algorithms, the real engines of the artificial intelligence (AI) revolution, are nowadays embedded in many services and products around us. As a society, we argue it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs to define acti...
In the utilitarian tradition, well-being is conceived as what is ultimately (non-instrumentally) good for a person. Since the right is defined as a function of the good, and well-being is conceived as the good, well-being is also considered as an input to moral theory that is not itself shaped by prior moral assumptions and provides reasons to act...
In this paper, we provide a philosophical account of the value of creative systems for individuals and society. We characterize creativity in very broad philosophical terms, encompassing natural, existential, and social creative processes, such as natural evolution and entrepreneurship, and explain why creativity understood in this way is instrumen...
Digital apps using Bluetooth to log proximity events (henceforth, digital contact tracing) are increasingly supported by technologists and governments. By and large, the public debate on this matter focuses on privacy, with experts from both law and technology offering very concrete proposals and participating in a lively debate. Far less attention...
With this paper, we aim to put an issue on the agenda of AI ethics that in our view is overlooked in the current discourse. The current discussions are dominated by topics such as trustworthiness and bias, whereas the issue we would like to focus on runs counter to the debate on trustworthiness. We fear that the overuse of currently dominant AI systems tha...
This paper supports the personal data platform cooperative as a means of bringing about John Rawls’s favoured institutional realisation of a just society, the property-owning democracy. It describes personal data platform cooperatives and applies Rawls’s political philosophy to analyse the institutional forms of a just society in relation to the ec...
Luciano Floridi was not the first to discuss the idea of group privacy, but he was perhaps the first to discuss it in relation to the insights derived from big data analytics. He has argued that it is important to investigate the possibility that groups have rights to privacy that are not reducible to the privacy of individuals forming such groups....
To date, more than 80 codes exist for handling ethical risks of artificial intelligence and big data. In this paper, we analyse where those codes converge and where they differ. Based on an in-depth analysis of 20 guidelines, we identify three procedural action types (1. control and document, 2. inform, 3. assign responsibility) as well as four...
This open access book provides the first comprehensive collection of papers that provide an integrative view on cybersecurity. It discusses theories, problems and solutions on the relevant ethical issues involved. This work is sorely needed in a world where cybersecurity has become indispensable to protect trust and confidence in the digital infras...
The “Code of Ethics for Data-Based Value Creation” is aimed at companies and organizations that offer services or products based on data. Its purpose is to systematically address the ethical issues that arise in the creation or use of such products and services. To this end, concrete recommendations are made, based on three ethical and three proced...
The goal of this chapter is to provide a conceptual analysis of ethical hacking, comprising history, common usage and the attempt to provide a systematic classification that is both compatible with common usage and normatively adequate. Subsequently, the article identifies a tension between common usage and a normatively adequate nomenclature. 'Et...
This chapter presents several ethical frameworks that are useful for analysing ethical questions of cybersecurity. It begins with two frameworks that are important in practice: the principlist framework employed in the Menlo Report on cybersecurity research and the rights-based principle that is influential in the law, in particular EU law. It is a...
This chapter provides a political and philosophical analysis of the values at stake in ensuring cybersecurity for critical infrastructures. It presents a review of the boundaries of cybersecurity in national security, with a focus on the ethics of surveillance for protecting critical infrastructures and the use of AI. A bibliographic analysis of th...
Data-ethics code for companies, written by the ethics working group of the Swiss Alliance for Data-Intensive Services.
We are collecting feedback on the website, but you can also provide your feedback here. Please mention the section of the paper on which you provide your feedback.
Purpose: Cybersecurity in healthcare has become an urgent matter in recent years due to various malicious attacks on hospitals and other parts of the healthcare infrastructure. The purpose of this paper is to provide an outline of how core values of the health systems, such as the principles of biomedical ethics, are in a supportive or conflicting...
Privacy self-management conferring absolute control to the data subject leads to cognitive overload in most people. In order to promote autonomy, it must be focused and choices have to be simplified. This is a conceptual contribution to Taylor’s unified framework to data justice (Taylor 2017). We focus on two questions: 1. What nudges (default rule...
In this paper, we outline the structure and content of a code of ethics for companies engaged in data-based business, i.e. companies whose value propositions strongly depends on using data. The code provides an ethical reference for all people in the organization who are responsible for activities around data. It is primarily targeting private indu...
The concept of the digital phenotype has been used to refer to digital data prognostic or diagnostic of disease conditions. Medical conditions may be inferred from the time pattern in an insomniac’s tweets, the Facebook posts of a depressed individual, or the web searches of a hypochondriac. This paper conceptualizes digital data as an extended phe...
In this chapter, we analyze questions pertaining to the use of predictive models based on big data from the point of view of Allen Buchanan's 'morality of inclusion'. Here we discuss potential ethical risks of big data that are normally not discussed in the literature. We illustrate ethical threats of big data for the morality of inclusion with...
We map the recently proposed notions of algorithmic fairness to economic models of Equality of opportunity (EOP), an extensively studied ideal of fairness in political philosophy. We formally show that through our conceptual mapping, many existing definitions of algorithmic fairness, such as predictive value parity and equality of odds, can be inte...
In this paper we argue that transparency, just as explanation, can be defined at different levels of abstraction. We criticize recent attempts to identify the explanation of black box algorithms with making their decisions (post-hoc) interpretable. These approaches simplify the real nature of the black boxes and risk misleading the public about the...
Here we provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context. This addresses the need for ethical guidance of data-science experts and business managers. The reference to private insurance as a business practice is essential in our a...
Definitive (and shortened) version published in Italian as a book chapter in "Per cosa lottare. Le frontiere del progressismo", edited by Enrico Biale and Corrado Fumagalli, Fondazione Giangiacomo Feltrinelli, Milano
Paper is open access: https://ercim-news.ercim.eu/en116/r-s/how-to-include-ethics-in-machine-learning-research
Equality of opportunity (EOP) is an extensively studied conception of fairness in political philosophy. In this work, we map recently proposed notions of algorithmic fairness to economic models of EOP. We formally show that through our proposed mapping, many existing definitions of algorithmic fairness, such as predictive value parity and equality o...
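For concreteness, here is a small sketch of the two named criteria computed from per-group confusion matrices; the counts are made up for the example and do not come from the paper.

```python
# Hypothetical counts: equalized odds compares TPR and FPR across groups,
# predictive value parity compares PPV across groups.
def rates(tp, fp, fn, tn):
    return {
        "tpr": tp / (tp + fn),   # true positive rate (sensitivity)
        "fpr": fp / (fp + tn),   # false positive rate
        "ppv": tp / (tp + fp),   # positive predictive value
    }

group_a = rates(tp=40, fp=10, fn=20, tn=130)   # made-up confusion matrix, group A
group_b = rates(tp=25, fp=15, fn=25, tn=135)   # made-up confusion matrix, group B

for metric in ("tpr", "fpr", "ppv"):
    gap = abs(group_a[metric] - group_b[metric])
    print(f"{metric}: A={group_a[metric]:.2f}  B={group_b[metric]:.2f}  gap={gap:.2f}")
```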
Nowadays there is hardly a worse accusation than that of engaging in discriminatory practices. No manager, scientist, or politician would want to admit publicly that they would treat individuals arbitrarily on the basis of their gender or ethnicity. Yet today our intolerance of discrimination is...
Relationship of cybersecurity and ethics in the health domain
The introduction of Web 2.0 technology, along with a population increasingly proficient in Information and Communications Technology (ICT), coupled with the rapid advancements in genetic testing methods, has seen an increase in the presence of participant-centred research initiatives. Such initiatives, aided by the centrality of ICT interconnection...
This White Paper outlines how the ethical discourse on cybersecurity has developed in the scientific literature, which ethical issues gained interest, which value conflicts are discussed, and where the “blind spots” in the current ethical discourse on cybersecurity are located. The White Paper is based on an extensive literature with a focus on thr...
Intensified and extensive data production and data storage are characteristics of contemporary western societies. Health data sharing is increasing with the growth of Information and Communication Technology (ICT) platforms devoted to the collection of personal health and genomic data. However, the sensitive and personal nature of health data poses...
1. I am grateful to the respondents (and the journal editors) for the opportunity provided, to clarify the concept of a libertarian right to test (LRT in what follows) and its normative implications. To sum up, I concede that genomes have a normatively salient informational aspect, that exercising the LRT may cause informational harm and violate ri...
To say that “data is the new oil” is already an abused commonplace. What are the implications for justice? In our contribution, we argue that if data is the new oil, the extraction of value from data is unjust if it is achieved by means of dominant internet platforms that are unjust. We define the concept of “dominant internet platforms” and explai...
Again a piece driven by political engagement and recent emotions. (Needless to say, this opinion piece is politically partisan. I have not tried to be detached and objective as a social scientist ought to be.) I am trying to explain (to Italians, but also foreigners) how I understand the key strengths of the Italian Five Star Movement. I do this b...
This chapter summarizes non-paternalist arguments for state interference with individual diet and food choice, based on the scientific hypothesis that, through epigenetic and other biological phenomena, parental diet affects the disease propensity of future generation individuals. According to the developmental theory of health and disease (DOHAD),...
I sketch a libertarian argument for the right to test in the context of 'direct to consumer' (DTC) genetic testing. A libertarian right to genetic tests, as defined here, relies on the idea of a moral right to self-ownership. I show how a libertarian right to test can be inferred from this general libertarian premise, at least as a prima facie right,...
Many people believe that individuals have a right not to know their genetic disease risk. Here it is argued that, if this is correct, individuals also have a right not to know their diet-related disease risk. Reasons to remain ignorant are analogous in the case of risk related to diet and genetic susceptibilities. It follows that any policy to prom...
Axel Gosseries introduces his view on intergenerational justice by means of a comparison between Rawls's view and his own. In debating his carefully argued position, I will highlight what I see as important differences between the two views. I will argue that Gosseries's view is less liberal because it involves a prohibition of saving in the "steady...