Figure 6
Number of agreements between the recommendation from the robot and the human's action choice, mean and standard error.

Source publication
Article
We present the effect of adapting to human preferences on trust in a human-robot teaming task. The team performs a task in which the robot acts as an action recommender to the human. It is assumed that the behavior of the human and the robot is based on some reward function they try to optimize. We use a new human trust-behavior model that enables...
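The abstract is truncated above. As a rough, hypothetical sketch of the setup it describes (a reward-optimizing robot recommender whose advice the human follows to a degree governed by trust), the following Python fragment may help; the action names, reward values, and the trust-to-compliance rule are all invented for illustration and are not the paper's trust-behavior model.

```python
import random

# Hypothetical per-action rewards as estimated by the robot (invented numbers).
robot_rewards = {"proceed_with_scan": 0.8, "skip_scan": 0.3}

def robot_recommendation(rewards):
    """The robot recommends the action that maximizes its reward estimate."""
    return max(rewards, key=rewards.get)

def human_action(recommendation, trust, human_rewards):
    """Toy trust-behavior rule: with probability `trust` the human complies
    with the recommendation; otherwise they act on their own reward estimates.
    The paper's actual trust-behavior model is more elaborate than this."""
    if random.random() < trust:
        return recommendation
    return max(human_rewards, key=human_rewards.get)

rec = robot_recommendation(robot_rewards)
act = human_action(rec, trust=0.7,
                   human_rewards={"proceed_with_scan": 0.5, "skip_scan": 0.6})
print("agreement" if act == rec else "disagreement")
```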

Contexts in source publication

Context 1
... the non-adaptive-learner and adaptive-learner strategies (p = 0.057). This trend could reach significance with more data, which we are currently collecting. The end-of-mission trust rating should be a stable trust rating, since the participants have had enough interactions with the intelligent agent to have a good sense of their trust in it. Fig. 6 shows the number of agreements between the recommendation from the intelligent agent and the participant's action selection. We expect a positive correlation between the number of agreements and the trust reported by the participants. A repeated measures ANOVA shows a significant difference between the three strategies (F (2, 22) = ...
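For readers who want to run the same style of analysis on their own data, here is a minimal sketch of a repeated-measures ANOVA over per-participant agreement counts using statsmodels' AnovaRM. The data, the participant count, and any strategy labels beyond those named in the text are placeholders, not the study's dataset.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Placeholder data: agreement counts per participant under each of the
# three recommendation strategies (long format, one row per cell).
df = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "strategy": ["non-adaptive", "non-adaptive-learner", "adaptive-learner"] * 3,
    "agreements": [10, 12, 14, 9, 13, 15, 11, 12, 16],
})

# Repeated-measures ANOVA: strategy is the within-subject factor.
result = AnovaRM(data=df, depvar="agreements",
                 subject="participant", within=["strategy"]).fit()
print(result)  # with 3 subjects this reports F(2, 4); the paper reports F(2, 22)
```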

Citations

... Previous studies have used Bayesian methods to represent the flow of information in a schedule and how that information varies with the location and perception of humans and robots. Based on this model, a computational model can be derived to understand where and how a human communicates schedule information to a robot [10]. Another applicable algorithm is an evolutionary algorithm, which has been shown to maintain several solutions in each iteration and to work well on problems with noise [11]. ...
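The snippet above points to an evolutionary algorithm that keeps several candidate solutions per iteration and tolerates noisy evaluations [11]. The following is a generic sketch of that population-based principle, not the cited algorithm; the objective function and all constants are invented.

```python
import random

def noisy_fitness(x):
    """Objective with evaluation noise; a population-based search averages
    the noise out across generations. The optimum here is x = 5 (toy example)."""
    return -(x - 5.0) ** 2 + random.gauss(0, 0.5)

population = [random.uniform(0, 10) for _ in range(20)]
for _ in range(100):
    # Keep several solutions each iteration: select the top half...
    population.sort(key=noisy_fitness, reverse=True)
    survivors = population[:10]
    # ...and refill the population with mutated copies of the survivors.
    population = survivors + [x + random.gauss(0, 0.3) for x in survivors]

print(max(population, key=noisy_fitness))  # approaches 5 despite the noise
```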
Article
Abstract. Because of the growing popularity of online classes, flexible scheduling solutions that accommodate a wide range of user schedules and preferences are required. When schedule flexibility is one of the reasons a person chooses online classes, good scheduling algorithms become a basic requirement for users to get the flexibility they seek. The goal of this research is to develop a scheduling algorithm based on a multi-agent temporal network that resolves the scheduling constraints of teachers and students in both classroom and remote settings. The study uses a multi-agent temporal network method to create algorithms that offer independent scheduling solutions for teachers and students. These algorithms take a variety of constraints into account, producing successful schedules within and outside typical classroom situations. The proposed approach demonstrates effective scheduling outcomes, giving both students and teachers freedom. The multi-agent strategy effectively handles numerous constraints, providing customizable scheduling solutions for a wide range of user requirements. Teachers and students can independently form scheduling solutions with algorithms that resolve all of their internal and external constraints.
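The abstract does not spell out the algorithm itself. As background, multi-agent temporal scheduling problems are commonly encoded as simple temporal networks (STNs), whose consistency reduces to checking a distance graph for negative cycles. Below is a minimal sketch of that standard check via Bellman-Ford relaxation; the events and time bounds are hypothetical and not taken from the paper.

```python
# Minimal STN consistency check via Bellman-Ford negative-cycle detection.
# Edge (u, v, w) encodes the constraint time[v] - time[u] <= w.
# The events and bounds below are hypothetical, not from the paper.
edges = [
    ("start", "teacher_free", 60),   # teacher available within 60 min
    ("teacher_free", "start", 0),    # ...but not before the start
    ("start", "student_free", 90),   # student available within 90 min
    ("student_free", "start", -30),  # student busy for the first 30 min
]

def stn_consistent(edges):
    nodes = {u for u, _, _ in edges} | {v for _, v, _ in edges}
    dist = {n: 0 for n in nodes}  # virtual source at distance 0 to all nodes
    for _ in range(len(nodes) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more relaxation pass: any further improvement means a negative
    # cycle, i.e., the constraint set is infeasible.
    return all(dist[u] + w >= dist[v] for u, v, w in edges)

print(stn_consistent(edges))  # True: a joint teacher/student schedule exists
```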
Chapter
With the advent of AI technologies, humans and robots are increasingly teaming up to perform collaborative tasks. To enable smooth and effective collaboration, the topic of value alignment (operationalized herein as the degree of dynamic goal alignment within a task) between the robot and the human is gaining increasing research attention. Prior literature on value alignment makes an inherent assumption that aligning the values of the robot with those of the human benefits the team. This assumption, however, has not been empirically verified. Moreover, prior literature does not account for the human's trust in the robot when analyzing human-robot value alignment. Thus, a research gap needs to be bridged by answering two questions: How does alignment of values affect trust? Is it always beneficial to align the robot's values with those of the human? We present a simulation study and a human-subject study to answer these questions. Results from the simulation study show that alignment of values is important for trust when the overall risk level of the task is high. We also present an adaptive strategy for the robot that uses Inverse Reinforcement Learning (IRL) to match the values of the robot with those of the human during interaction. Our simulations suggest that such an adaptive strategy is able to maintain trust across the full spectrum of human values. We also present results from an empirical study that validate these findings from simulation. Results indicate that real-time personalized value alignment is beneficial to trust and perceived performance by the human when the robot does not have a good prior on the human's values.
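The chapter's IRL formulation is not reproduced here. As one common way to realize such value matching, the sketch below performs a Bayesian-style reweighting of candidate reward weights by the likelihood of an observed human choice under a softmax (Boltzmann-rational) choice model; the weight grid, rewards, and actions are all illustrative assumptions.

```python
import math

# Candidate value weights (e.g., weight on "safety" vs "speed"); hypothetical grid.
candidates = [0.0, 0.25, 0.5, 0.75, 1.0]
belief = [1 / len(candidates)] * len(candidates)  # uniform prior

def choice_likelihood(w, chosen, other):
    """Softmax (Boltzmann-rational) likelihood that a human with safety
    weight w picks `chosen` over `other`; the rewards are toy numbers."""
    def reward(a):
        return w * a["safety"] + (1 - w) * a["speed"]
    ec, eo = math.exp(reward(chosen)), math.exp(reward(other))
    return ec / (ec + eo)

# One observed interaction: the human preferred the safer, slower action.
chosen = {"safety": 0.9, "speed": 0.2}
other = {"safety": 0.3, "speed": 0.8}

posterior = [b * choice_likelihood(w, chosen, other)
             for w, b in zip(candidates, belief)]
total = sum(posterior)
belief = [p / total for p in posterior]

# The robot can then act using the expected weight under its updated belief.
w_hat = sum(w * b for w, b in zip(candidates, belief))
print(f"estimated human safety weight ~ {w_hat:.2f}")  # shifts above 0.5
```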