
Myrthe Tielman
Assistant Professor at Delft University of Technology
About
49 Publications
6,550 Reads
693 Citations
Publications (49)
As intelligent systems become more integrated into people’s daily life, systems designed to facilitate lifestyle and behavior change for health and well-being have also become more common. Previous work has identified challenges in the development and deployment of such AI-based support for diabetes lifestyle management and shown that it is necessa...
As machines' autonomy increases, the possibilities for collaboration between a human and a machine also increase. In particular, tasks may be performed with varying levels of interdependence, i.e. from independent to joint actions. The feasibility of each type of interdependence depends on factors that contribute to contextual trustworthiness, such...
As machines' autonomy increases, their capacity to learn and adapt to humans in collaborative scenarios increases too. In particular, machines can use artificial trust (AT) to make decisions, such as task and role allocation/selection. However, the outcome of such decisions and the way these are communicated can affect the human's trust, which in t...
Appropriate Trust in Artificial Intelligence (AI) systems has rapidly become an important area of focus for both researchers and practitioners. Various approaches have been used to achieve it, such as confidence scores, explanations, trustworthiness cues, or uncertainty communication. However, a comprehensive understanding of the field is lacking d...
Child helplines offer a safe and private space for children to share their thoughts and feelings with volunteers. However, training these volunteers to help can be both expensive and time-consuming. In this demo, we present Lilobot, a conversational agent designed to train volunteers for child helplines. Lilobot's reasoning is based on the Belief-D...
Agent-based training systems can enhance people's social skills. The effective development of these systems needs a comprehensive architecture that outlines their components and relationships. Such an architecture can pinpoint improvement areas and future outlooks. This paper presents ARTES: a general architecture illustrating how components of age...
In human-machine teams, the strengths and weaknesses of both team members result in dependencies, opportunities, and requirements to collaborate. Managing these interdependence relationships is crucial for teamwork, as it is argued that they facilitate accurate trust calibration. Unfortunately, empirical research on the influence of interdependence...
Introduction: Humans and robots are increasingly collaborating on complex tasks such as firefighting. As robots are becoming more autonomous, collaboration in human-robot teams should be combined with meaningful human control. Variable autonomy approaches can ensure meaningful human control over robots by satisfying accountability, responsibility,...
In teams composed of humans, we use trust in others to make decisions, such as what to do next, who to help and who to ask for help. When a team member is artificial, they should also be able to assess whether a human teammate is trustworthy for a certain task. We see trustworthiness as the combination of (1) whether someone will do a task and (2)...
As human-machine teams become a more common scenario, we need to ensure mutual trust between humans and machines. More important than having trust, we need all teammates to trust each other appropriately. This means that they should not overtrust or undertrust each other, avoiding risks and inefficiencies, respectively. We usually think of natural...
Introduction: Collaboration in teams composed of both humans and automation has an interdependent nature, which demands calibrated trust among all the team members. For building suitable autonomous teammates, we need to study how trust and trustworthiness function in such teams. In particular, automation occasionally fails to do its job, which lead...
Appropriate trust is an important component of the interaction between people and AI systems, in that ‘inappropriate’ trust can cause disuse, misuse or abuse of AI. To foster appropriate trust in AI, we need to understand how AI systems can elicit appropriate levels of trust from their users. Out of the aspects that influence trust, this paper focu...
With the increasing adoption of AI as a crucial component of business strategy, the challenge of establishing trust between human and AI teammates remains a key issue. The project "We are in this together" highlights current theories on team trust in human-AI teams and proposes a research model that integrates insights from Industrial and Organizat...
Human-AI teams count on both humans and artificial agents to work together collaboratively. In human-human teams, we use trust to make decisions. Similarly, our work explores how an AI can use trust (in human teammates) to make decisions while ensuring the team's goal and mitigating risks for the humans involved. We present the several steps and ch...
For personal assistive technologies to effectively support users, they need a user model that records information about the user, such as their goals, values, and context. Knowledge-based techniques can model the relationships between these concepts, enabling the support agent to act in accordance with the user’s values. However, user models requir...
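A minimal sketch of how such a knowledge-based user model could be represented, assuming a simple graph of typed relations between goals, values, and context; all class, relation, and concept names below are illustrative and not taken from the paper.

```python
from collections import defaultdict

class UserModel:
    """Illustrative knowledge-based user model: concepts (goals, values,
    context facts) connected by typed relations such as 'promotes'."""

    def __init__(self):
        # relation name -> set of (subject, object) pairs
        self.relations = defaultdict(set)

    def add(self, subject: str, relation: str, obj: str) -> None:
        self.relations[relation].add((subject, obj))

    def related(self, relation: str, obj: str) -> set[str]:
        """All subjects that stand in `relation` to `obj`."""
        return {s for (s, o) in self.relations[relation] if o == obj}


# Usage sketch: which of the user's goals promote the value "health"?
model = UserModel()
model.add("goal:walk_daily", "promotes", "value:health")
model.add("goal:call_family", "promotes", "value:social_connection")
model.add("context:rainy_day", "hinders", "goal:walk_daily")
print(model.related("promotes", "value:health"))  # {'goal:walk_daily'}
```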
Changing one’s behavior is difficult, so many people look towards technology for help. However, most current behavior change support systems are inflexible in that they support one type of behavior change and do not reason about how that behavior is embedded in larger behavior patterns. To allow users to flexibly decide what they desire to change,...
Establishing an appropriate level of trust between people and AI systems is crucial to avoid the misuse, disuse, or abuse of AI. Understanding how AI systems can generate appropriate levels of trust among users is necessary to achieve this goal. This study focuses on the impact of displaying integrity, which is one of the factors that influence tru...
Intelligent systems are increasingly entering the workplace, gradually moving away from technologies supporting work processes to artificially intelligent (AI) agents becoming team members. Therefore, a deep understanding of effective human-AI collaboration within the team context is required. Both psychology and computer science literature emphasi...
AI alignment is about ensuring AI systems only pursue goals and activities that are beneficial to humans. Most of the current approach to AI alignment is to learn what humans value from their behavioural data. This paper proposes a different way of looking at the notion of alignment, namely by introducing AI Alignment Dialogues: dialogues with whic...
The rapid development of Artificial Intelligence (AI) requires developers and designers of AI systems to focus on the collaboration between humans and machines. AI explanations of system behavior and reasoning are vital for effective collaboration by fostering appropriate trust, ensuring understanding, and addressing issues of fairness and bias. Ho...
Although well-established therapies exist for post-traumatic stress disorder (PTSD), barriers to seeking mental health care are high. Technology-based interventions may play a role in improving the reach of treatment efforts, especially when therapist availability is low. The goal of the current randomized controlled trial was to pilot the efficacy of...
Humans and robots are increasingly working together in human-robot teams. Teamwork requires communication, especially when interdependence between team members is high. In previous work, we identified a conceptual difference between sharing what you are doing (i.e., being transparent) and why you are doing it (i.e., being explainable). Although the...
As intelligent agents are becoming humans' teammates, not only do humans need to trust intelligent agents, but an intelligent agent should also be able to form artificial trust, i.e. a belief regarding a human's trustworthiness. We see artificial trust as the beliefs of competence and willingness, and we study which internal factors (krypta) of the h...
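A minimal sketch of how such an artificial trust belief could be represented, assuming a simple two-component model; the names TrustBelief, competence, willingness, and krypta as used below are illustrative, and the way the two beliefs are combined is an assumption, not the paper's model.

```python
from dataclasses import dataclass, field

@dataclass
class TrustBelief:
    """Illustrative artificial-trust belief an agent holds about a human teammate.

    Trustworthiness is split into two components, as in the abstract:
    a belief about competence and a belief about willingness, both in [0, 1].
    """
    competence: float = 0.5
    willingness: float = 0.5
    # Hypothetical evidence about internal factors ("krypta"),
    # e.g. {"domain_knowledge": 0.8, "availability": 0.4}.
    krypta: dict[str, float] = field(default_factory=dict)

    def trustworthiness(self) -> float:
        """Combine both beliefs; a product is one simple (assumed) choice,
        since a teammate must be both able and willing to do the task."""
        return self.competence * self.willingness


# Usage sketch: an agent deciding whether to delegate a task.
belief = TrustBelief(competence=0.8, willingness=0.6)
if belief.trustworthiness() > 0.5:
    print("Delegate the task to the human teammate")
else:
    print("Keep the task or ask another teammate")
```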
Disabled people can benefit greatly from assistive digital technologies. However, this increased human-machine symbiosis makes it important that systems are personalized and transparent to users. Existing work often uses data-oriented approaches. However, these approaches lack transparency and make it hard to influence the system’s behavior. In thi...
Mutual trust is considered a required coordinating mechanism for achieving effective teamwork in human teams. However, it is still a challenge to implement such mechanisms in teams composed of both humans and AI (human-AI teams), even though these are becoming increasingly prevalent. Agents in such teams should not only be trustworthy and promote a...
Because of recent and rapid developments in Artificial Intelligence (AI), humans and AI-systems increasingly work together in human-agent teams. However, in order to effectively leverage the capabilities of both, AI-systems need to be understandable to their human teammates. The branch of eXplainable AI (XAI) aspires to make AI-systems more underst...
As AI systems are increasingly involved in decision making, it also becomes important that they elicit appropriate levels of trust from their users. To achieve this, it is first important to understand which factors influence trust in AI. We identify that a research gap exists regarding the role of personal values in trust in AI. Therefore, this pa...
In human-agent teams, how one teammate trusts another teammate should correspond to the latter's actual trustworthiness, creating what we would call appropriate mutual trust. Although this sounds obvious, the notion of appropriate mutual trust for human-agent teamwork lacks a formal definition. In this article, we propose a formalization which repr...
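The article proposes its own formalization; purely as a hedged illustration of the underlying idea (trust should correspond to actual trustworthiness, in both directions), one could write something like the following, where the symbols T, W, tau, and epsilon are introduced here for illustration only and may differ from the paper's notation.

```latex
% Illustrative only: T_{i \to j}(\tau) is i's trust in j for task \tau,
% W_j(\tau) is j's actual trustworthiness for \tau, and \varepsilon a tolerance.
\[
\text{appropriate mutual trust} \iff
\forall \tau \in \mathcal{T}:\;
\lvert T_{H \to A}(\tau) - W_A(\tau) \rvert \le \varepsilon
\;\wedge\;
\lvert T_{A \to H}(\tau) - W_H(\tau) \rvert \le \varepsilon
\]
```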
Personal assistant agents have been developed to help people in their daily lives with tasks such as agenda management. In order to provide better support, they should not only model the user’s internal aspects, but also their social situation. Current research on social context tackles this by modelling the social aspects of a situation from an ob...
We develop a taxonomy that categorizes HRI failure types and their impact on trust to structure the broad range of knowledge contributions. We further identify research gaps in order to support fellow researchers in the development of trustworthy robots. Studying trust repair in HRI has only recently received more attention, and we propose a taxono...
Introduction: Virtual reality (VR)-based interventions, wearable technology and text mining hold promising potential for advancing the way in which military and Veteran mental health conditions are diagnosed and treated. They have the ability to improve treatment protocol adherence, assist in the detection of mental health conditions, enhance resil...
Background: Systems incorporating virtual agents can play a major role in electronic-mental (e-mental) health care, as barriers to care still prevent some patients from receiving the help they need. To properly assist the users of these systems, a virtual agent needs to promote motivation. This can be done by offering motivational messages. Objecti...
Background
Digital health interventions can fill gaps in mental healthcare provision. However, autonomous e-mental health (AEMH) systems also present challenges for effective risk management. To balance autonomy and safety, AEMH systems need to detect risk situations and act on these appropriately. One option is sending automatic alerts to carers,...
Behavior support technology is increasingly used to assist people in daily life activities. To do this properly, it is important that the technology understands what really motivates people: what values underlie their actions, but also the influence of context, and how this can be translated to norms which govern behavior. In this paper, we expand...
Background and objective: With the rise of autonomous e-mental health applications, virtual agents can play a major role in improving trustworthiness, therapy outcome and adherence. In these applications, it is important that patients adhere in the sense that they perform the tasks, but also that they adhere to the specific recommendations on how...
Motivating users is an important task for virtual agents in behaviour change support systems. In this study we present a system which generates motivational statements based on situation type, aimed at a virtual agent for Post-Traumatic Stress Disorder therapy. Using input from experts (n=13), we built a database containing what categories of motiv...
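A minimal sketch of such a situation-driven selection step, assuming a simple mapping from situation type to categories of motivational statements; the situation types, categories, and example sentences below are invented for illustration and do not come from the expert-built database described in the paper.

```python
import random

# Illustrative mapping from situation type to suitable statement categories.
CATEGORIES_BY_SITUATION = {
    "missed_session": ["normalizing", "encouragement"],
    "completed_session": ["praise", "progress_feedback"],
}

# Illustrative statements per category (a real system would draw on an expert-built database).
STATEMENTS = {
    "normalizing": ["It is common to find some sessions harder than others."],
    "encouragement": ["You can pick this up again whenever you feel ready."],
    "praise": ["Well done for completing this session."],
    "progress_feedback": ["You are steadily working through the programme."],
}

def motivational_statement(situation_type: str) -> str:
    """Pick a statement from a category that fits the current situation type."""
    category = random.choice(CATEGORIES_BY_SITUATION[situation_type])
    return random.choice(STATEMENTS[category])

print(motivational_statement("completed_session"))
```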
Although post-traumatic stress disorder (PTSD) is well treatable, many people do not get the desired treatment due to barriers to care (such as stigma and cost). This paper presents a system that bridges this gap by enabling patients to follow therapy at home. A therapist is only involved remotely, to monitor progress and serve as a safety net. Wit...
Internet-based guided self-therapy systems provide a novel method for Post-Traumatic Stress Disorder patients to follow therapy at home with the assistance of a virtual coach. One of the main challenges for such a coach is assisting patients with recollecting their traumatic memories, a vital part of therapy. In this paper, an ontology-based questi...
One domain in which intelligent virtual agents are becoming more popular is the health domain. With the changing demography in the western world, health-care costs are expected to increase. Fewer health-care professionals will be available for more "care-needy persons". Virtual health agents could play several roles to address part of the inc...
Patients with Post Traumatic Stress Disorder (PTSD) often need to specify and relive their traumatic memories in therapy to relieve their disorder, which can be a very painful process. One new development is an internet-based guided self-therapy system (IBGST), where people work at home and a therapist is remotely involved. We propose to enrich an...
Expressive behaviour is a vital aspect of human interaction. A model for adaptive emotion expression was developed for the Nao robot. The robot has an internal arousal and valence value, which are influenced by the emotional state of its interaction partner and emotional occurrences such as winning a game. It expresses these emotions through its vo...
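A minimal sketch of such an adaptive emotion-expression loop, assuming a simple linear drift of arousal and valence toward the partner's state plus event-driven offsets; the update rule, class, and parameter names below are illustrative, not the model developed in the paper.

```python
class AdaptiveEmotionModel:
    """Illustrative internal emotion state for a robot such as the Nao.

    Arousal and valence live in [-1, 1], drift toward the interaction
    partner's perceived emotional state, and receive an immediate offset
    from discrete emotional occurrences (e.g. winning a game).
    """

    def __init__(self, adaptation_rate: float = 0.2):
        self.arousal = 0.0
        self.valence = 0.0
        self.adaptation_rate = adaptation_rate

    @staticmethod
    def _clip(x: float) -> float:
        return max(-1.0, min(1.0, x))

    def observe_partner(self, partner_arousal: float, partner_valence: float) -> None:
        """Drift toward the partner's emotional state."""
        self.arousal = self._clip(
            self.arousal + self.adaptation_rate * (partner_arousal - self.arousal))
        self.valence = self._clip(
            self.valence + self.adaptation_rate * (partner_valence - self.valence))

    def emotional_occurrence(self, arousal_delta: float, valence_delta: float) -> None:
        """Apply the impact of an event such as winning or losing a game."""
        self.arousal = self._clip(self.arousal + arousal_delta)
        self.valence = self._clip(self.valence + valence_delta)


# Usage sketch: the robot wins a game while facing a cheerful partner.
model = AdaptiveEmotionModel()
model.observe_partner(partner_arousal=0.6, partner_valence=0.7)
model.emotional_occurrence(arousal_delta=0.3, valence_delta=0.4)
print(model.arousal, model.valence)  # values an expression layer would map to output behaviour
```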