Marko Tesic, PhD
University of Cambridge · Leverhulme Centre for the Future of Intelligence
About
14 Publications · 1,665 Reads · 48 Citations
Introduction
I am a Research Associate at the Leverhulme Centre for the Future of Intelligence, University of Cambridge. I currently explore the capabilities of AI systems and how these capabilities map onto specific demands in the human workforce. This research is carried out in collaboration with the OECD and experts in occupational psychology.
Publications (14)
Bayesian reasoning and decision making are widely considered normative because they minimize prediction error in a coherent way. However, it is often difficult to apply Bayesian principles to complex real-world problems, which typically have many unknowns and interconnected variables. Bayesian network modeling techniques make it possible to model suc...
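As a concrete illustration of the kind of modeling this abstract gestures at, here is a minimal Bayesian network sketch in Python, using the textbook "sprinkler" network and brute-force enumeration. The structure and all probabilities are illustrative, not taken from the publication:

```python
# Minimal Bayesian network inference by enumeration (illustrative numbers).
# Network: Cloudy -> Sprinkler, Cloudy -> Rain, (Sprinkler, Rain) -> WetGrass.
from itertools import product

P_C = {True: 0.5, False: 0.5}                       # P(C = c)
P_S = {True: {True: 0.1, False: 0.9},               # P_S[c][s] = P(S = s | C = c)
       False: {True: 0.5, False: 0.5}}
P_R = {True: {True: 0.8, False: 0.2},               # P_R[c][r] = P(R = r | C = c)
       False: {True: 0.2, False: 0.8}}
P_W = {(True, True): 0.99, (True, False): 0.90,     # P(W = True | S, R)
       (False, True): 0.90, (False, False): 0.0}

def joint(c, s, r, w):
    """P(C=c, S=s, R=r, W=w) via the network's factorisation."""
    pw = P_W[(s, r)] if w else 1 - P_W[(s, r)]
    return P_C[c] * P_S[c][s] * P_R[c][r] * pw

# Query: P(Rain = True | WetGrass = True), summing out the other variables.
num = sum(joint(c, s, True, True) for c, s in product([True, False], repeat=2))
den = sum(joint(c, s, r, True) for c, s, r in product([True, False], repeat=3))
print(f"P(Rain | WetGrass) = {num / den:.3f}")      # ~0.708 with these numbers
```

Even this four-variable toy shows the point of the technique: the joint distribution is specified compactly through local conditional tables, and any query is answered by summing the factorised joint.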
As AI systems come to permeate human society, there is an increasing need for such systems to explain their actions, conclusions, or decisions. This is presently fuelling a surge in interest in machine-generated explanation in the field of explainable AI. In this chapter, we will consider work on explanations in areas ranging from AI to philosophy,...
Counterfactual (CF) explanations have been employed as one of the modes of explainability in explainable artificial intelligence (AI)—both to increase the transparency of AI systems and to provide recourse. Cognitive science and psychology have pointed out that people regularly use CFs to express causal relationships. Most AI systems, however, are...
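For illustration only, here is a toy sketch of the counterfactual-explanation idea: given a hypothetical linear credit model, search for the smallest single-feature change that flips the decision. The model, weights, feature names, and search procedure are all invented for this sketch and are not the method proposed in the publication:

```python
# Hedged sketch: nearest single-feature counterfactual for a toy linear model.
WEIGHTS = {"income": 0.6, "debt": -0.8}   # hypothetical credit-scoring weights
BIAS = -0.1

def approve(x):
    """Toy decision rule: approve if the linear score is positive."""
    return BIAS + sum(WEIGHTS[f] * x[f] for f in WEIGHTS) > 0

def counterfactual(x, step=0.01, max_delta=2.0):
    """Greedy search: smallest change to one feature that flips the decision.
    Returns (feature, new_value, change) or None if no flip is found."""
    original = approve(x)
    best = None
    for f in WEIGHTS:
        for sign in (+1, -1):
            delta = step
            while delta <= max_delta:
                cf = dict(x, **{f: round(x[f] + sign * delta, 4)})
                if approve(cf) != original:
                    if best is None or delta < best[2]:
                        best = (f, cf[f], delta)
                    break
                delta += step
    return best

applicant = {"income": 0.4, "debt": 0.5}
print(approve(applicant))         # False: score = -0.1 + 0.24 - 0.40 = -0.26
print(counterfactual(applicant))  # ('debt', 0.17, 0.33): lowering debt flips it
```

The output reads as a CF explanation of the kind the abstract describes: "had your debt been 0.17 instead of 0.5, the application would have been approved." Note that this toy search is purely associational, which is exactly the limitation the abstract raises.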
Providing an explanation is a communicative act. It involves an explainee, a person who receives an explanation, and an explainer, a person (or sometimes a machine) who provides an explanation. The majority of research on explanation has focused on how explanations alter explainees' beliefs. However, one general feature of communicative acts is tha...
In this paper, we bring together two closely related, but distinct, notions: argument and explanation. We clarify their relationship. We then provide an integrative review of relevant research on these notions, drawn both from the cognitive science and the artificial intelligence (AI) literatures. We then use this material to identify key direction...
Providing an explanation is a communicative act. It includes an explainee, a person who is receiving an explanation, and an explainer, a person (or sometimes a machine) who provides an explanation. The majority of research on explanation has focused on how explanations alter explainees’ beliefs. However, one general feature of communicative acts is...
Counterfactual (CF) explanations have been employed as one of the modes of explainability in explainable AI, both to increase the transparency of AI systems and to provide recourse. Cognitive science and psychology, however, have pointed out that people regularly use CFs to express causal relationships. Most AI systems are only able to capture assoc...
In real-world contexts of reasoning about evidence, that evidence frequently arrives sequentially. Moreover, we often cannot anticipate in advance what kinds of evidence we will eventually encounter. This raises the question of what we do to our existing models when we encounter new variables to consider. The standard normative framework for probab...
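One standard answer, sketched below with invented numbers, is to extend the model conservatively: the newly encountered variable gets a conditional distribution chosen so that marginalising it out recovers the original model exactly. This is a generic illustration of that requirement, not necessarily the paper's own proposal:

```python
# Conservative extension sketch: add a new variable X to a model over H
# such that summing X out returns the old distribution (numbers illustrative).
P_H = {"h": 0.3, "not_h": 0.7}           # existing model over hypothesis H
P_X_given_H = {                          # new variable X, specified via P(X | H)
    "h":     {"x": 0.9, "not_x": 0.1},
    "not_h": {"x": 0.2, "not_x": 0.8},
}

# Extended joint: P(H, X) = P(H) * P(X | H)
joint = {(h, x): P_H[h] * P_X_given_H[h][x]
         for h in P_H for x in ("x", "not_x")}

# Marginalising the new variable out leaves the old model unchanged.
for h in P_H:
    marginal = sum(joint[(h, x)] for x in ("x", "not_x"))
    assert abs(marginal - P_H[h]) < 1e-12
print("extension is conservative: old marginals preserved")
```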
In their 2010 (Erkenntnis 73:393–412) paper, Dizadji-Bahmani, Frigg, and Hartmann (henceforth ‘DFH’) argue that the generalized version of the Nagel–Schaffner model that they have developed (henceforth ‘the GNS’) is the right one for intertheoretic reduction, i.e. the kind of reduction that involves theories with largely overlapping domains of appl...
Recent research suggests that people do not perform well on some of the most crucial components of causal reasoning: probabilistic independence, diagnostic reasoning, and explaining away. Despite this, it remains unclear what contexts would affect people's reasoning in these domains. In the present study we investigated the influence of manipulatin...
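A small numeric illustration of explaining away, one of the components named in this abstract: learning that one cause of an observed effect is present lowers the probability of a rival cause. The two-cause alarm network and all probabilities below are invented for the example:

```python
# Explaining away: Burglary and Earthquake are independent causes of Alarm.
from itertools import product

P_B, P_E = 0.01, 0.02                                # priors on each cause
P_A = {(True, True): 0.95, (True, False): 0.94,      # P(Alarm = True | B, E)
       (False, True): 0.29, (False, False): 0.001}

def joint(b, e, a):
    """P(B=b, E=e, A=a) under the two-cause factorisation."""
    pb = P_B if b else 1 - P_B
    pe = P_E if e else 1 - P_E
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    return pb * pe * pa

def prob_B(given_e=None):
    """P(B = True | Alarm = True), optionally also conditioning on E."""
    es = [given_e] if given_e is not None else [True, False]
    num = sum(joint(True, e, True) for e in es)
    den = sum(joint(b, e, True) for b, e in product([True, False], es))
    return num / den

print(f"P(B | alarm)        = {prob_B():.3f}")              # ~0.583
print(f"P(B | alarm, quake) = {prob_B(given_e=True):.3f}")  # ~0.032: explained away
```

The drop from roughly 0.58 to 0.03 is the normatively correct pattern that, per the abstract, people often fail to produce.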
We provide a novel Bayesian justification of inference to the best explanation (IBE). More specifically, we present conditions under which explanatory considerations can provide a significant confirmatory boost for hypotheses that provide the best explanation of the relevant evidence. Furthermore, we show that the proposed Bayesian model of IBE is...
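As a generic illustration only (the paper's specific conditions are not recoverable from this snippet): if the best-explaining hypothesis assigns the evidence a higher likelihood than its rival, ordinary Bayesian conditioning already delivers a confirmatory boost. All numbers below are invented:

```python
# Toy confirmatory boost: H1 explains the evidence E better than H2,
# i.e. P(E | H1) > P(E | H2), so conditioning on E raises H1's posterior.
prior = {"H1": 0.5, "H2": 0.5}
likelihood = {"H1": 0.8, "H2": 0.3}      # P(E | H)

marginal = sum(prior[h] * likelihood[h] for h in prior)            # P(E)
posterior = {h: prior[h] * likelihood[h] / marginal for h in prior}

for h in prior:
    boost = posterior[h] - prior[h]
    print(f"{h}: prior {prior[h]:.2f} -> posterior {posterior[h]:.3f} "
          f"(boost {boost:+.3f})")       # H1 gains, H2 loses
```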