Liz Sonenberg’s research while affiliated with University of Melbourne and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (155)


Towards the New XAI: A Hypothesis-Driven Approach to Decision Support Using Evidence
  • Chapter

October 2024 · 18 Reads · 3 Citations

Liz Sonenberg

Prior research on AI-assisted human decision-making has explored several different explainable AI (XAI) approaches. A recent paper has proposed a paradigm shift calling for hypothesis-driven XAI through a conceptual framework called evaluative AI that gives people evidence that supports or refutes hypotheses without necessarily giving a decision-aid recommendation. In this paper, we describe and evaluate an approach for hypothesis-driven XAI based on the Weight of Evidence (WoE) framework, which generates both positive and negative evidence for a given hypothesis. Through human behavioural experiments, we show that our hypothesis-driven approach increases decision accuracy and reduces reliance compared to a recommendation-driven approach and an AI-explanation-only baseline, but with a small increase in under-reliance compared to the recommendation-driven approach. Further, we show that participants used our hypothesis-driven approach in a materially different way to the two baselines.
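
The Weight of Evidence framing used here is conventionally the log-likelihood ratio that a piece of evidence contributes for a hypothesis versus its alternatives. As a rough, illustrative sketch only (the observation names and probabilities below are invented, not taken from the paper's experiments):

    import math

    def weight_of_evidence(p_e_given_h, p_e_given_not_h):
        # Log-likelihood ratio log[P(e|h) / P(e|not h)]:
        # positive values support hypothesis h, negative values refute it.
        return math.log(p_e_given_h / p_e_given_not_h)

    # Invented likelihoods for three observations under one hypothesis.
    evidence = {
        "observation_a": (0.80, 0.20),  # supports h
        "observation_b": (0.30, 0.60),  # counts against h
        "observation_c": (0.50, 0.45),  # roughly neutral
    }

    for name, (p_h, p_not_h) in evidence.items():
        print(f"{name}: WoE = {weight_of_evidence(p_h, p_not_h):+.2f}")

Presenting signed per-observation contributions like these, rather than a single recommendation, is what lets a user weigh the evidence for or against each hypothesis themselves.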



Fig. 1 (figure preview). Study procedure: the two studies run in parallel, with different sets of participants allocated to each.
An Actionability Assessment Tool for Explainable AI
  • Preprint
  • File available

June 2024 · 34 Reads

In this paper, we introduce and evaluate a tool for researchers and practitioners to assess the actionability of information provided to users to support algorithmic recourse. While there are clear benefits of recourse from the user's perspective, the notion of actionability in explainable AI research remains vague, and claims of 'actionable' explainability techniques are based on the researchers' intuition. Inspired by definitions and instruments for assessing actionability in other domains, we construct a seven-question tool and evaluate its effectiveness through two user studies. We show that the tool discriminates actionability across explanation types and that the distinctions align with human judgements. We show the impact of context on actionability assessments, suggesting that domain-specific tool adaptations may foster more human-centred algorithmic systems. This is a significant step forward for research and practice in actionable explainability and algorithmic recourse, providing the first clear human-centred definition and tool for assessing actionability in explainable AI.
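
Since the abstract describes a seven-question instrument, a minimal sketch of how such a questionnaire might be administered and scored is given below; the item wording, the 1-5 Likert scale, and the mean aggregation are all assumptions for illustration, not the paper's actual items.

    from statistics import mean

    # Placeholder items; the real tool's seven questions are not reproduced here.
    ITEMS = [
        "The explanation tells me what I could change.",
        "The suggested changes are within my control.",
        "The effort the changes require is clear to me.",
        "The explanation is specific to my situation.",
        "I understand why the suggested changes would alter the outcome.",
        "The explanation helps me plan concrete next steps.",
        "I could act on this explanation without further help.",
    ]

    def actionability_score(responses):
        # responses: one rating per item, 1 (strongly disagree) to 5 (strongly agree).
        assert len(responses) == len(ITEMS)
        return mean(responses)

    print(actionability_score([4, 5, 3, 4, 4, 5, 3]))  # average rating for one participant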


Visual Evaluative AI: A Hypothesis-Driven Tool with Concept-Based Explanations and Weight of Evidence

May 2024 · 3 Reads

This paper presents Visual Evaluative AI, a decision aid that provides positive and negative evidence from image data for a given hypothesis. This tool finds high-level human concepts in an image and generates the Weight of Evidence (WoE) for each hypothesis in the decision-making process. We apply and evaluate this tool in the skin cancer domain by building a web-based application that allows users to upload a dermatoscopic image, select a hypothesis and analyse their decisions by evaluating the provided evidence. Further, we demonstrate the effectiveness of Visual Evaluative AI on different concept-based explanation approaches.
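
A rough sketch of the flow the abstract describes (detect human-level concepts in an image, then show signed evidence per hypothesis) appears below; the concept names, detector output, and likelihood values are invented for illustration and are not taken from the tool.

    import math

    def woe(p_concept_given_h, p_concept_given_not_h):
        # Signed weight of evidence a detected concept contributes to a hypothesis.
        return math.log(p_concept_given_h / p_concept_given_not_h)

    # Pretend output of a concept detector for one uploaded image.
    detected_concepts = ["irregular_border", "blue_white_veil"]

    # Invented concept likelihoods under each candidate hypothesis.
    likelihoods = {
        "melanoma": {"irregular_border": (0.70, 0.25), "blue_white_veil": (0.55, 0.10)},
        "benign_nevus": {"irregular_border": (0.20, 0.55), "blue_white_veil": (0.05, 0.35)},
    }

    for hypothesis, table in likelihoods.items():
        contributions = {c: round(woe(*table[c]), 2) for c in detected_concepts}
        print(hypothesis, contributions, "total:", round(sum(contributions.values()), 2))

The user still makes the final call; the tool only lays out which detected concepts support or refute each hypothesis, and by how much.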


Explaining Model Confidence Using Counterfactuals

June 2023 · 20 Reads · 3 Citations

Proceedings of the AAAI Conference on Artificial Intelligence

Displaying confidence scores in human-AI interaction has been shown to help build trust between humans and AI systems. However, most existing research uses only the confidence score as a form of communication. As confidence scores are just another model output, users may want to understand why the algorithm is confident to determine whether to accept the confidence score. In this paper, we show that counterfactual explanations of confidence scores help study participants to better understand and better trust a machine learning model's prediction. We present two methods for understanding model confidence using counterfactual explanation: (1) based on counterfactual examples; and (2) based on visualisation of the counterfactual space. Both increase understanding and trust for study participants over a baseline of no explanation, but qualitative results show that they are used quite differently, leading to recommendations on when to use each one and directions for designing better explanations.
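
For the first method (counterfactual examples for a confidence score), a toy sketch of the underlying idea is shown below: search for a small feature change under which the model would have been more confident. The stand-in logistic model, the feature names, and the greedy single-feature search are all assumptions for illustration, not the paper's method.

    import math

    def confidence(x):
        # Stand-in classifier: confidence in the positive class for a loan-style example.
        z = 1.5 * x["income"] - 2.0 * x["debt"]
        return 1.0 / (1.0 + math.exp(-z))

    def confidence_counterfactual(x, target=0.9, feature="income", step=0.1, max_steps=100):
        # Greedily nudge one feature until the confidence would reach the target.
        cf = dict(x)
        for _ in range(max_steps):
            if confidence(cf) >= target:
                return cf
            cf[feature] = cf[feature] + step
        return None

    x = {"income": 1.0, "debt": 0.8}
    print(f"confidence now: {confidence(x):.2f}")
    cf = confidence_counterfactual(x)
    if cf is not None:
        print(f"with income = {cf['income']:.1f}, confidence would reach {confidence(cf):.2f}")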



Figure previews: FIGURE 1. Matching the Cards, created by Dai Vernon (1894-1992) [44, 45]. FIGURE 3. Basic BDI cycle, adapted from [49]. FIGURE 4. Abstract BDI-interpreter [109].
Logics and collaboration

May 2023 · 127 Reads

Logic Journal of IGPL

Since the early days of artificial intelligence (AI), many logics have been explored as tools for knowledge representation and reasoning. In the spirit of the Crossley Festschrift and recognizing John Crossley’s diverse interests and his legacy in both mathematical logic and computer science, I discuss examples from my own research that sit in the overlap of logic and AI, with a focus on supporting human–AI interactions.


Explaining Model Confidence Using Counterfactuals

March 2023 · 14 Reads

Displaying confidence scores in human-AI interaction has been shown to help build trust between humans and AI systems. However, most existing research uses only the confidence score as a form of communication. As confidence scores are just another model output, users may want to understand why the algorithm is confident to determine whether to accept the confidence score. In this paper, we show that counterfactual explanations of confidence scores help study participants to better understand and better trust a machine learning model's prediction. We present two methods for understanding model confidence using counterfactual explanation: (1) based on counterfactual examples; and (2) based on visualisation of the counterfactual space. Both increase understanding and trust for study participants over a baseline of no explanation, but qualitative results show that they are used quite differently, leading to recommendations on when to use each one and directions for designing better explanations.


Directive Explanations for Actionable Explainability in Machine Learning Applications

January 2023 · 31 Reads · 29 Citations

ACM Transactions on Interactive Intelligent Systems

In this paper, we show that explanations of decisions made by machine learning systems can be improved by not only explaining why a decision was made but also by explaining how an individual could obtain their desired outcome. We formally define the concept of directive explanations (those that offer specific actions an individual could take to achieve their desired outcome), introduce two forms of directive explanations (directive-specific and directive-generic), and describe how these can be generated computationally. We investigate people’s preference for and perception towards directive explanations through two online studies, one quantitative and the other qualitative, each covering two domains (the credit scoring domain and the employee satisfaction domain). We find a significant preference for both forms of directive explanations compared to non-directive counterfactual explanations. However, we also find that preferences are affected by many aspects, including individual preferences and social factors. We conclude that deciding what type of explanation to provide requires information about the recipients and other contextual information. This reinforces the need for a human-centred and context-specific approach to explainable AI.
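
A minimal sketch of the distinction the abstract draws: a counterfactual explanation lists feature values that would change the outcome, while a directive explanation re-phrases the actionable ones as actions the person could take. The feature names, the actionable-feature list, and the phrasing templates below are invented for illustration.

    # Features the person could plausibly act on (invented for this sketch).
    ACTIONABLE = {"income", "existing_debt"}

    ACTION_TEMPLATES = {
        "income": "increase your annual income to {target}",
        "existing_debt": "reduce your existing debt to {target}",
    }

    def directive_explanation(counterfactual_changes):
        # Keep only actionable feature changes and phrase them as directives.
        return [
            ACTION_TEMPLATES[feature].format(target=target)
            for feature, target in counterfactual_changes.items()
            if feature in ACTIONABLE
        ]

    # A counterfactual says these values would have led to approval; 'age' is not actionable.
    changes = {"income": 65000, "existing_debt": 2000, "age": 30}
    for action in directive_explanation(changes):
        print("To obtain the desired outcome, you could", action)

A directive-specific variant would go further and suggest a concrete way to achieve each change; the sketch above covers only the generic form.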


Citations (70)


... During the development of the present study, an empirical evaluation by Le et al. (2024a) was conducted, comparing a hypothesis-driven approach with recommendation-driven and explanation-only methods. They found that the hypothesis-driven approach improved decision quality without increasing decision time, and participants cognitively engaged with the evidence, thereby considering the uncertainty of the underlying models. ...

Reference:

An Empirical Examination of the Evaluative AI Framework
Towards the New XAI: A Hypothesis-Driven Approach to Decision Support Using Evidence
  • Citing Chapter
  • October 2024

... Given that AI is not infallible and often makes better decisions than humans (Mnih et al., 2015; Nori et al., 2023), a calibrated level of trust is essential for a trade-off that encourages users to rely more on AI, while avoiding blind trust (Vered et al., 2023; Wischnewski et al., 2023). To address the issue of overreliance, various strategies have been developed, such as cognitive forcing functions (Buçinca et al., 2021) and user-adapted, selective explanations (Lai et al., 2023b). ...

The Effects of Explanations on Automation Bias
  • Citing Article
  • June 2023

Artificial Intelligence

... [62] note that sequential counterfactuals are an active area of research. [55] demonstrate, through a user study, that end-users generally strongly prefer such directive explanations. However, they note that social factors can affect these preferences. ...

Directive Explanations for Actionable Explainability in Machine Learning Applications
  • Citing Article
  • January 2023

ACM Transactions on Interactive Intelligent Systems

... That means that the language does not cover any form of incomplete knowledge or disjunctions (Horn clauses) and so very limited forms of inference were possible. Although an extension to knowing-whether was later proposed in work by Miller et al. (2016), it still lacks arbitrary disjunctions, which our framework can handle fully. In a non-modal setting, Lakemeyer and Levesque (2004) proposed a tractable reasoning framework for disjunctive information. ...

'Knowing Whether' in Proper Epistemic Knowledge Bases
  • Citing Article
  • February 2016

Proceedings of the AAAI Conference on Artificial Intelligence

... Moreover, a formalisation such as ours lends itself to various types of implementations. For example, the synthesis of (epistemic) programs and plans (Wang and Zhang 2005; Baral et al. 2017; Muise et al. 2015; McIlraith and Son 2002) that achieve goals in socio-technical applications in a fair manner is a worthwhile research agenda. Likewise, enforcing fairness constraints while factoring for the relationships between individuals in social networks (Farnadi et al. 2018), or otherwise contextualising attributes against other concepts in a relational knowledge base (Aziz et al. 2018; Fu et al. 2020), are also worthwhile. ...

Planning Over Multi-Agent Epistemic States: A Classical Planning Approach
  • Citing Article
  • March 2015

Proceedings of the AAAI Conference on Artificial Intelligence

... Similarly, Chakraborti and Kambhampati [11] observe that the apparent outcome of embedding models of mental states of human users into AI programs is that it opens up the possibility of manipulation. Masters et al. [39] provide a taxonomy of computer deception forms, including imitating, obfuscating, tricking, calculating, and reframing. More recently, the phenomenon of hallucination in Large Language Models (LLMs) has been discussed as a bug but also as an integral feature of this technology [19]. By advancing a different and broader application of the notion of deception in HCI, these and other interventions resonate with perspectives that have been recently developed in areas such as philosophy and cognitive sciences. ...

Characterising Deception in AI: A Survey
  • Citing Chapter
  • January 2021

Communications in Computer and Information Science

... Within our previously mentioned project on using the science of magic to influence the design of a computational theory of strategic deception [124], was the development of a computational model of human-like goal recognition [88,89] exploiting principles underlying the 'fallibility' of human memory and belief, principles that have been empirically demonstrated through magic tricks [71,125]. ...

The Role of Environments in Affording Deceptive Behaviour: Some Preliminary Insights from Stage Magic
  • Citing Chapter
  • January 2021

Communications in Computer and Information Science

... Integrating ToM into epistemic planning allows agents to anticipate and respond to the knowledge and beliefs of other agents, leading to more effective coordination and decision-making in multi-agent systems. The authors in [9] and [10] show that nested beliefs and reasoning in multiagent planning can better equip agents to work in teams and show that this integration is crucial for applications requiring sophisticated interaction and collaboration among multiple intelligent agents. This paper characterizes ToM for a multirobot system similar to [11] in that we employ epistemic planning as a logical mechanism to account for the system's knowledge and beliefs. ...

Efficient multi-agent epistemic planning: Teaching planners about nested belief
  • Citing Article
  • January 2022

Artificial Intelligence

... Therefore, we define the mental model of agents as the planning neural network in the "perception-planning" layer. Then, [1] mentioned that the updating conditions of the SMM of multiple agents are as follows: (1) agents observe the common state, and (2) agents have common knowledge about the common state; therefore, the planning neural network only accepts common information (information that all agents know) from multiple agents, and it is renamed as the shared planning neural network. SMM denotes that all the agents' shared planning neural networks have the same parameters. ...

Modeling communication of collaborative multiagent system under epistemic planning