Jean-Marie John-Mathews

Institut Mines-Télécom | telecom-sudparis.eu · SHS

PhD

About

18
Publications
2,216
Reads
8
Citations
Introduction
I work on AI ethics, particularly on fair ML and XAI (Explainable Artificial Intelligence), using quantitative and experimental techniques as well as theories from philosophy and sociology. I am currently working on the relation between fairness and interpretability in AI.

Publications

Publications (18)
Article
Full-text available
Fairness of Artificial Intelligence (AI) decisions has become a major challenge for governments, companies, and societies. We offer a theoretical contribution that considers AI ethics outside of high-level, top-down approaches, based on Luc Boltanski's distinction between “reality” and “world”. To do so, we provide a new perspective on the deb...
Article
We consider two fundamental and related issues currently facing the development of Artificial Intelligence (AI): the lack of ethics, and the interpretability of AI decisions. Can interpretable AI decisions help to address the issue of ethics in AI? Using a randomized study, we experimentally show that the empirical and liberal turn of the productio...
Conference Paper
This paper provides empirical concerns about post-hoc explanations of black-box ML models, one of the major trends in AI explainability (XAI), by showing its lack of interpretability and societal consequences. Using a representative consumer panel to test our assumptions, we report three main findings. First, we show that post-hoc explanations of b...
Thesis
Full-text available
While Artificial Intelligence (AI) is raising more and more ethical issues (fairness, explainability, privacy, etc.), a set of regulatory tools and methods have emerged over the past few years, such as fairness metrics, explanation procedures, anonymization methods, and so on. When data are granular, voluminous and behavioral, these “responsible to...
Preprint
Full-text available
While the number of reported Artificial Intelligence (AI) ethical incidents of different kinds (discrimination, opacity, lack of privacy, etc.) increases worldwide, many actors seek to produce technical and legal tools to regulate AI. These methods of “responsible AI”, which generally rely on technical metrics and high-level principles, often fail a...
Preprint
This paper provides empirical concerns about post-hoc explanations of black-box ML models, one of the major trends in AI explainability (XAI), by showing its lack of interpretability and societal consequences. Using a representative consumer panel to test our assumptions, we report three main findings. First, we show that post-hoc explanations of b...
Preprint
Full-text available
We consider two fundamental and related issues currently faced by Artificial Intelligence (AI) development: the lack of ethics and interpretability of AI decisions. Can interpretable AI decisions help to address ethics in AI? Using a randomized study, we experimentally show that the empirical and liberal turn of the production of explanations tends...
Poster
Full-text available
Decisions produced by supervised learning algorithms are fitted from a history of examples. One of the major ethical problems raised by machine learning algorithms is the fairness of decisions with respect to certain groups of the population.
Conference Paper
Full-text available
Machine learning algorithms, and particularly deep neural networks, have in recent years achieved strong predictive performance in many domains such as image recognition and textual or speech analysis. However, these good predictive results are generally accompanied by a difficulty in...
Thesis
Full-text available
The aim of this study is to analyze the epistemological foundations of the “black box” phenomenon that currently seems to affect artificial intelligence algorithms, in particular deep neural networks (deep learning). To do so, we show how the shift from linear regression to the probit, logit, perc...
Technical Report
Full-text available
In this work, we show that the discipline of artificial intelligence has a historical structure marked by hibernations and periods of strong enthusiasm. The first period of enthusiasm was driven by researchers' disproportionate expectations as well as by large defense research programs funded in a...
Technical Report
Full-text available
The causal Markov condition is a fundamental condition of causal networks for modeling causal relations. It bridges the gap, so heavily criticized in the social sciences, between probabilistic dependence (correlation) and causality. Bayesian networks, that is, causal networks satisfying the Markov condition, are thus...
Technical Report
Full-text available
The article by Mayrhofer and Waldmann, “Agents and Causes: Dispositional Intuitions As a Guide to Causal Structure”, published in the journal Cognitive Science in 2014, is an interesting attempt to integrate intuitions from the dispositional approach into Bayesian networks. According to the authors, there is a genuine dichotomy in the...
Technical Report
Full-text available
This work offers a reflection on explaining the results of machine learning algorithms, and more particularly of algorithms based on deep neural networks, namely deep learning. This reflection follows the current debates concerning the “black box” character of the machine learning algorithms currently in place...
Thesis
Full-text available
In this work, the traditional endogeneity problem with instrumental variables is addressed in a high-dimensional context, where the number of explanatory variables or instrumental variables can be much larger than the sample size. The high-dimensional model and its assumptions, namely sparsity and the restricted eigenvalue condition, a...

Questions

Question (1)
Question
Looking for literature review if possible

Network

Cited By

Projects