Upol Ehsan
Georgia Institute of Technology · School of Interactive Computing

About

26 Publications
16,295 Reads
618 Citations
[Chart: citations per year since 2017 (2017–2023); 26 research items, 618 citations]
Introduction
I am a computer scientist and philosopher by training and a technologist by trade. I care about people first, technology second. With formal backgrounds in philosophy and engineering, my research spans the domains of Human-Computer Interaction (HCI), Artificial Intelligence (AI), Computational Creativity (CC), Machine Ethics, and ICTD. The common denominator in my work is the human-centered approach: I design and create technology that is explainable, ethical, and encultured, motivated by complex technical problems analyzed through their sociotechnical and cultural dimensions.

Publications (26)
Preprint
Full-text available
Explainable AI (XAI) systems are sociotechnical in nature; thus, they are subject to the sociotechnical gap: the divide between technical affordances and social needs. However, charting this gap is challenging. In the context of XAI, we argue that charting the gap improves our problem understanding, which can reflexively provide actionable insi...
Preprint
Full-text available
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps. While black-boxing AI systems can make the user experience seamless, hiding the seams risks disempowering users to mitigate fallouts from AI mistakes. While Explainable AI (XAI) has predominantly tackled algorithmic opaqueness, we propose that se...
Preprint
Full-text available
There is growing frustration amongst researchers and developers in Explainable AI (XAI) over the lack of consensus on what is meant by 'explainability'. Do we need one definition of explainability to rule them all? In this paper, we argue why a singular definition of XAI is neither feasible nor desirable at this stage of XAI's development....
Preprint
Full-text available
When algorithmic harms emerge, a reasonable response is to stop using the algorithm to resolve concerns related to fairness, accountability, transparency, and ethics (FATE). However, just because an algorithm is removed does not imply its FATE-related issues cease to exist. In this paper, we introduce the notion of the "algorithmic imprint" to illu...
Preprint
Full-text available
To make Explainable AI (XAI) systems trustworthy, understanding harmful effects is just as important as producing well-designed explanations. In this paper, we address an important yet unarticulated type of negative effect in XAI. We introduce explainability pitfalls (EPs), unanticipated negative downstream effects from AI explanations manifesting e...
Article
Full-text available
Explainability of AI systems is critical for users to take informed actions and hold systems accountable. While "opening the opaque box" is important, understanding who opens the box can govern whether the Human-AI interaction is effective. In this paper, we conduct a mixed-methods study of how two different groups of whos: people with and without a back...
Conference Paper
Full-text available
As AI-powered systems increasingly mediate consequential decision-making, their explainability is critical for end-users to take informed and accountable actions. Explanations in human-human interactions are socially-situated. AI systems are often socio-organizationally embedded. However, Explainable AI (XAI) approaches have been predominantly algo...
Preprint
Full-text available
Several social factors impact how people respond to AI explanations used to justify AI decisions affecting them personally. In this position paper, we define a framework called the layers of explanation (LEx), a lens through which we can assess the appropriateness of different types of explanations. The framework uses the notions of...
Preprint
Full-text available
As AI-powered systems increasingly mediate consequential decision-making, their explainability is critical for end-users to take informed and accountable actions. Explanations in human-human interactions are socially-situated. AI systems are often socio-organizationally embedded. However, Explainable AI (XAI) approaches have been predominantly algo...
Chapter
Full-text available
Explanations—a form of post-hoc interpretability—play an instrumental role in making systems accessible as AI continues to proliferate complex and sensitive sociotechnical systems. In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design. It develops a holistic understan...
Preprint
Full-text available
While algorithm-centered explainability in AI systems has made commendable progress, human-centered approaches are crucial for fair and accountable use of consequential AI systems. In this paper, we highlight the socially situated nature of AI systems and advocate for a sociotechnical approach to Human-centered Explainable AI (HCXAI). We outline th...
Conference Paper
Full-text available
Explanations--a form of post-hoc interpretability--play an instrumental role in making systems accessible as AI continues to proliferate complex and sensitive sociotechnical systems. In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design. It develops a holistic understa...
Preprint
Full-text available
Explanations--a form of post-hoc interpretability--play an instrumental role in making systems accessible as AI continues to proliferate complex and sensitive sociotechnical systems. In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design. It develops a holistic underst...
Conference Paper
Full-text available
As AI systems become ubiquitous in our lives, the human side of the equation needs careful investigation. The challenges of designing and evaluating "black-boxed" AI systems depend crucially on who the human is in the loop. Explanations, viewed as a form of post-hoc interpretability, can help establish rapport, confidence, and understanding betwee...
Conference Paper
Full-text available
Automated rationale generation is an approach for real-time explanation generation whereby a computational model learns to translate an autonomous agent's internal state and action data representations into natural language. Training on human explanation data can enable agents to learn to generate human-like explanations for their behavior. In this...
Preprint
Full-text available
Automated rationale generation is an approach for real-time explanation generation whereby a computational model learns to translate an autonomous agent's internal state and action data representations into natural language. Training on human explanation data can enable agents to learn to generate human-like explanations for their behavior. In this...
Conference Paper
Full-text available
Autism Spectrum Disorder (ASD) is a critical problem worldwide; however, low and middle-income countries (LMICs) often suffer more from it due to the lack of contextual research and effective care infrastructure. Moreover, ASD in LMICs offers unique challenges as cultural misperceptions and social practices often impede effective care there. Howeve...
Conference Paper
Full-text available
We introduce AI rationalization, an approach for generating explanations of autonomous system behavior as if a human had performed the behavior. We describe a rationalization technique that uses neural machine translation to translate internal state-action representations of an autonomous agent into natural language. We evaluate our technique in th...
Conference Paper
Full-text available
In this work we present a technique to use natural language to help reinforcement learning generalize to unseen environments. This technique uses neural machine translation to learn associations between natural language behavior descriptions and state-action information. We then use this learned model to guide agent exploration to make it more effe...
Conference Paper
Full-text available
Parents' engagement in their children's education is key to children's academic success and social development. For many parents in the U.S., engagement is still a struggle partly due to a lack of communication and community-building tools that support the broader ecology of parenting, or parental ecology. Although current technologies have the potential to...
Conference Paper
Full-text available
The rhetoric of world leaders has considerable influence on civic engagement and policy. Twitter, in particular, has become a consequential means of communication for politicians. However, the mechanisms by which these politicians use Twitter to communicate with the public are not well-understood from a computational perspective. This paper describ...
Conference Paper
Full-text available
Parents' engagement in their children's education is key to children's academic success and social development. For many parents in the U.S., engagement is still a struggle partly due to a lack of communication and community-building tools that support the broader ecology of parenting, or parental ecology. Although current technologies have the po...
