About
26 Publications
16,295 Reads
618 Citations (since 2017)
Introduction
I am a computer scientist and philosopher by training and a technologist by trade. I care about people first, technology second.
With formal backgrounds in philosophy and engineering, my research spans the domains of Human-Computer Interaction (HCI), Artificial Intelligence (AI), Computational Creativity (CC), Machine Ethics, and ICTD. The common denominator in my work is the human-centered approach: I design and create technologies that are explainable, ethical, and encultured, motivated by complex technical problems analyzed through their sociotechnical and cultural dimensions.
Publications (26)
Explainable AI (XAI) systems are sociotechnical in nature; thus, they are subject to the sociotechnical gap--the divide between the technical affordances and the social needs. However, charting this gap is challenging. In the context of XAI, we argue that charting the gap improves our problem understanding, which can reflexively provide actionable insi...
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps. While black-boxing AI systems can make the user experience seamless, hiding the seams risks disempowering users to mitigate fallouts from AI mistakes. While Explainable AI (XAI) has predominantly tackled algorithmic opaqueness, we propose that se...
There is a growing frustration amongst researchers and developers in Explainable AI (XAI) around the lack of consensus around what is meant by 'explainability'. Do we need one definition of explainability to rule them all? In this paper, we argue why a singular definition of XAI is neither feasible nor desirable at this stage of XAI's development....
When algorithmic harms emerge, a reasonable response is to stop using the algorithm to resolve concerns related to fairness, accountability, transparency, and ethics (FATE). However, just because an algorithm is removed does not imply its FATE-related issues cease to exist. In this paper, we introduce the notion of the "algorithmic imprint" to illu...
To make Explainable AI (XAI) systems trustworthy, understanding harmful effects is just as important as producing well-designed explanations. In this paper, we address an important yet unarticulated type of negative effect in XAI. We introduce explainability pitfalls (EPs), unanticipated negative downstream effects from AI explanations manifesting e...
Explainability of AI systems is critical for users to take informed actions and hold systems accountable. While "opening the opaque box" is important, understanding who opens the box can govern whether the Human-AI interaction is effective. In this paper, we conduct a mixed-methods study of how two different groups of whos: people with and without a back...
As AI-powered systems increasingly mediate consequential decision-making, their explainability is critical for end-users to take informed and accountable actions. Explanations in human-human interactions are socially-situated. AI systems are often socio-organizationally embedded. However, Explainable AI (XAI) approaches have been predominantly algo...
Several social factors impact how people respond to AI explanations used to justify AI decisions affecting them personally. In this position paper, we define a framework called the layers of explanation (LEx), a lens through which we can assess the appropriateness of different types of explanations. The framework uses the notions of...
Explanations—a form of post-hoc interpretability—play an instrumental role in making systems accessible as AI continues to proliferate complex and sensitive sociotechnical systems. In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design. It develops a holistic understan...
While algorithm-centered explainability in AI systems has made commendable progress, human-centered approaches are crucial for fair and accountable use of consequential AI systems. In this paper, we highlight the socially situated nature of AI systems and advocate for a sociotechnical approach to Human-centered Explainable AI (HCXAI). We outline th...
Explanations--a form of post-hoc interpretability--play an instrumental role in making systems accessible as AI continues to proliferate complex and sensitive sociotechnical systems. In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design. It develops a holistic understa...
Explanations--a form of post-hoc interpretability--play an instrumental role in making systems accessible as AI continues to proliferate complex and sensitive sociotechnical systems. In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design. It develops a holistic underst...
As AI systems become ubiquitous in our lives, the human side of the equation needs careful investigation. The challenges of designing and evaluating "black-boxed" AI systems depend crucially on who the human is in the loop. Explanations, viewed as a form of post-hoc interpretability, can help establish rapport, confidence, and understanding betwee...
Automated rationale generation is an approach for real-time explanation generation whereby a computational model learns to translate an autonomous agent's internal state and action data representations into natural language. Training on human explanation data can enable agents to learn to generate human-like explanations for their behavior. In this...
Autism Spectrum Disorder (ASD) is a critical problem worldwide; however, low and middle-income countries (LMICs) often suffer more from it due to the lack of contextual research and effective care infrastructure. Moreover, ASD in LMICs offers unique challenges as cultural misperceptions and social practices often impede effective care there. Howeve...
We introduce AI rationalization, an approach for generating explanations of autonomous system behavior as if a human had performed the behavior. We describe a rationalization technique that uses neural machine translation to translate internal state-action representations of an autonomous agent into natural language. We evaluate our technique in th...
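As a rough illustration of the translation idea described in this line of work, the following minimal encoder-decoder sketch maps an agent's discretized state-action tokens to natural-language rationale tokens. It is an assumption-laden sketch, not the paper's implementation: the RationaleGenerator name, the GRU architecture, and all vocabulary sizes and dimensions are invented for illustration.

```python
# Minimal sketch (not the paper's implementation) of translation-based rationale
# generation: an encoder-decoder that "translates" an agent's state-action tokens
# into natural-language rationale tokens. Vocabulary sizes, dimensions, and the
# GRU architecture are assumptions made for this illustration.
import torch
import torch.nn as nn

class RationaleGenerator(nn.Module):
    def __init__(self, state_vocab=256, text_vocab=5000, emb=128, hidden=256):
        super().__init__()
        self.src_embed = nn.Embedding(state_vocab, emb)   # state-action tokens
        self.tgt_embed = nn.Embedding(text_vocab, emb)    # rationale word tokens
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, text_vocab)

    def forward(self, state_actions, rationale_in):
        # Encode the episode's state-action sequence into a context vector,
        # then decode a rationale conditioned on it (teacher forcing).
        _, context = self.encoder(self.src_embed(state_actions))
        dec_out, _ = self.decoder(self.tgt_embed(rationale_in), context)
        return self.out(dec_out)  # logits over the rationale vocabulary

# Toy usage: 2 episodes of 10 state-action tokens, paired with 9-token rationales.
model = RationaleGenerator()
states = torch.randint(0, 256, (2, 10))
rationale = torch.randint(0, 5000, (2, 9))
logits = model(states, rationale[:, :-1])  # predict the next rationale token
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5000), rationale[:, 1:].reshape(-1))
```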
In this work we present a technique to use natural language to help reinforcement learning generalize to unseen environments. This technique uses neural machine translation to learn associations between natural language behavior descriptions and state-action information. We then use this learned model to guide agent exploration to make it more effe...
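To give a flavor of how language guidance could bias exploration, here is a hedged Python sketch. The `relevance` scoring function, the epsilon-greedy setup, and all parameter names are hypothetical stand-ins invented for this illustration; the paper's actual guidance mechanism may differ.

```python
# Illustrative sketch only: epsilon-greedy action selection whose exploration is
# biased by a (hypothetical) language-grounding model relevance(state, action,
# description) that scores how well an action matches a natural-language
# behavior description learned from description / state-action pairs.
import random

def guided_action(q_values, state, actions, description, relevance,
                  epsilon=0.1, bonus_weight=0.5):
    """Choose an action from Q-values plus a language-relevance exploration bonus."""
    if random.random() < epsilon:
        # Explore, but prefer actions the language model deems on-description.
        return max(actions, key=lambda a: relevance(state, a, description))
    # Exploit: blend the learned value estimate with the language-relevance bonus.
    return max(actions,
               key=lambda a: q_values.get((state, a), 0.0)
               + bonus_weight * relevance(state, a, description))
```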
Parents' engagement in their children's education is key to children's academic success and social development. For many parents in the U.S., engagement is still a struggle partly due to a lack of communication and community-building tools that support the broader ecology of parenting, or parental ecology. Although current technologies have the potential to...
The rhetoric of world leaders has considerable influence on civic engagement and policy. Twitter, in particular, has become a consequential means of communication for politicians. However, the mechanisms by which these politicians use Twitter to communicate with the public are not well-understood from a computational perspective. This paper describ...
Parents' engagement in their children's education is key to children's academic success and social development. For many parents in the U.S., engagement is still a struggle partly due to a lack of communication and community-building tools that support the broader ecology of parenting, or parental ecology. Although current technologies have the po...