Shreyas Bhat
University of Michigan · Department of Industrial and Operations Engineering

About

8 Publications · 483 Reads · 43 Citations

Publications (8)
Chapter
With the advent of AI technologies, humans and robots are increasingly teaming up to perform collaborative tasks. To enable smooth and effective collaboration, the topic of value alignment (operationalized herein as the degree of dynamic goal alignment within a task) between the robot and the human is gaining increasing research attention. Prior li...
Preprint
Full-text available
With the advent of AI technologies, humans and robots are increasingly teaming up to perform collaborative tasks. To enable smooth and effective collaboration, the topic of value alignment (operationalized herein as the degree of dynamic goal alignment within a task) between the robot and the human is gaining increasing research attention. Prior li...
Article
Full-text available
We present the effect of adapting to human preferences on trust in a human-robot teaming task. The team performs a task in which the robot acts as an action recommender to the human. It is assumed that the behavior of the human and the robot is based on some reward function they try to optimize. We use a new human trust-behavior model that enables...
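The abstract above describes a human trust-behavior model driven by team rewards. As a minimal illustrative sketch (not the model from the paper itself), a common form in the trust literature updates trust upward after positive outcomes and decays it after negative ones, with asymmetric gain/loss rates; the function name, rates, and reward sequence below are all assumptions for illustration.

```python
# Illustrative reward-driven trust update (an assumed generic model,
# not necessarily the one used in the publication above).

def update_trust(trust: float, reward: float,
                 gain: float = 0.1, loss: float = 0.2) -> float:
    """Trust in [0, 1]; rises toward 1 on success, decays toward 0 on failure.
    Asymmetric gain/loss rates are a common modeling assumption."""
    if reward > 0:
        trust += gain * (1.0 - trust)   # move a fraction of the gap toward 1
    else:
        trust -= loss * trust           # proportional decay toward 0
    return min(1.0, max(0.0, trust))

t = 0.5
for r in [1.0, 1.0, -1.0, 1.0]:  # hypothetical per-step team rewards
    t = update_trust(t, r)
print(round(t, 3))
```

With the hypothetical reward sequence shown, trust climbs on the two successes, drops sharply on the failure, and partially recovers, reflecting the asymmetry built into the update.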
Preprint
Full-text available
We present the effect of adapting to human preferences on trust in a human-robot teaming task. The team performs a task in which the robot acts as an action recommender to the human. It is assumed that the behavior of the human and the robot is based on some reward function they try to optimize. We use a new human trust-behavior model that enables...
Article
In this paper, we present a framework for trust-aware sequential decision-making in a human-robot team wherein the human agent’s trust in the robotic agent depends on the reward obtained by the team. We model the problem as a finite-horizon Markov Decision Process with the human’s trust in the robot as a state variable. We develop a rewar...
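The abstract above frames the problem as a finite-horizon MDP with trust as a state variable, solved against a reward function. A minimal sketch of that setup, under loud assumptions (trust discretized into a few levels, random illustrative transition dynamics, an invented reward table), can be solved by standard backward induction; none of the numbers or names below come from the paper.

```python
import numpy as np

# Illustrative finite-horizon MDP with discretized trust as the state.
# All dynamics and rewards here are assumptions for demonstration only.
N_TRUST = 5   # trust levels 0..4
ACTIONS = 2   # e.g. 0 = conservative recommendation, 1 = aggressive
HORIZON = 10

# P[a][t, t']: probability trust moves from level t to t' under action a
# (random stochastic matrices stand in for a learned trust-dynamics model).
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(N_TRUST), size=(ACTIONS, N_TRUST))

# R[t, a]: team reward; here higher trust yields higher reward (assumption).
R = np.array([[t * 1.0, t * 1.5 - 1.0] for t in range(N_TRUST)])

# Backward induction over the finite horizon.
V = np.zeros(N_TRUST)
policy = np.zeros((HORIZON, N_TRUST), dtype=int)
for k in reversed(range(HORIZON)):
    Q = R + np.stack([P[a] @ V for a in range(ACTIONS)], axis=1)
    policy[k] = Q.argmax(axis=1)   # trust-aware recommendation per state
    V = Q.max(axis=1)

print(V)          # expected cumulative reward from each trust level
print(policy[0])  # recommended action at the first step, per trust level
```

The trust-aware aspect shows up in the policy being indexed by the trust state: the recommender can choose different actions at different trust levels, which is the structural idea the abstract describes.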
Preprint
Full-text available
In this paper, we present a framework for trust-aware sequential decision-making in a human-robot team. We model the problem as a finite-horizon Markov Decision Process with a reward-based performance metric, allowing the robotic agent to make trust-aware recommendations. Results of a human-subject experiment show that the proposed trust update mod...
