Article

Abstract

Today, intelligent machines interact and collaborate with humans in a way that demands a greater level of trust between human and machine. A first step towards building intelligent machines that are capable of building and maintaining trust with humans is the design of a sensor that will enable machines to estimate human trust level in real-time. In this paper, two approaches for developing classifier-based empirical trust sensor models are presented that specifically use electroencephalography (EEG) and galvanic skin response (GSR) measurements. Human subject data collected from 45 participants is used for feature extraction, feature selection, classifier training, and model validation. The first approach considers a general set of psychophysiological features across all participants as the input variables and trains a classifier-based model for each participant, resulting in a trust sensor model based on the general feature set (i.e., a "general trust sensor model"). The second approach considers a customized feature set for each individual and trains a classifier-based model using that feature set, resulting in improved mean accuracy but at the expense of an increase in training time. This work represents the first use of real-time psychophysiological measurements for the development of a human trust sensor. Implications of the work, in the context of trust management algorithm design for intelligent machines, are also discussed.
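The two modeling approaches described in the abstract can be sketched as follows. This is a minimal illustration on synthetic data, with a random-forest classifier and a simple correlation-filter feature selector standing in for whatever classifier and selection procedure the paper actually used; all names and numbers here are placeholders, not the authors' method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-participant psychophysiological features:
# rows = trials, columns = EEG/GSR-derived features, labels = trust/distrust.
def make_participant(n_trials=60, n_features=10):
    X = rng.normal(size=(n_trials, n_features))
    # make the first few features weakly informative about the label
    y = (X[:, :3].sum(axis=1) + rng.normal(scale=1.0, size=n_trials) > 0).astype(int)
    return X, y

participants = [make_participant() for _ in range(5)]

# Approach 1: one common ("general") feature set for every participant.
general_idx = [0, 1, 2, 3]
general_acc = np.mean([
    cross_val_score(RandomForestClassifier(random_state=0),
                    X[:, general_idx], y, cv=3).mean()
    for X, y in participants
])

# Approach 2: a customized feature subset per participant (here: the
# features most correlated with the label, a simple filter-style selection).
def top_features(X, y, k=4):
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(corr)[-k:]

custom_acc = np.mean([
    cross_val_score(RandomForestClassifier(random_state=0),
                    X[:, top_features(X, y)], y, cv=3).mean()
    for X, y in participants
])
print(general_acc, custom_acc)
```

The customized approach re-runs feature selection and training per participant, which is where the abstract's "increase in training time" comes from.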


... Yi et al. [100] use a multi-modal feature fusion network to predict trust from physiological signals like galvanic skin response and heart rate variability. Other studies have used XGBoost [8] and discriminant classification [5] to accomplish similar ends. ...
... Strongly Disagree to Strongly Agree (1)(2)(3)(4)(5). [30] AV Feasibility: The AV and the infrastructure necessary to use the AV are practically feasible. ...
Preprint
Full-text available
Low trust remains a significant barrier to Autonomous Vehicle (AV) adoption. To design trustworthy AVs, we need to better understand the individual traits, attitudes, and experiences that impact people's trust judgements. We use machine learning to understand the most important factors that contribute to young adult trust based on a comprehensive set of personal factors gathered via survey (n = 1457). Factors ranged from psychosocial and cognitive attributes to driving style, experiences, and perceived AV risks and benefits. Using the explainable AI technique SHAP, we found that perceptions of AV risks and benefits, attitudes toward feasibility and usability, institutional trust, prior experience, and a person's mental model are the most important predictors. Surprisingly, psychosocial and many technology- and driving-specific factors were not strong predictors. Results highlight the importance of individual differences for designing trustworthy AVs for diverse groups and lead to key implications for future design and research.
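The feature-importance analysis described in this abstract used SHAP; as a rough sketch of the same idea with standard tooling, scikit-learn's permutation importance can rank hypothetical survey predictors of AV trust. The feature names, model, and data below are invented for illustration and are not the study's variables.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical survey-derived predictors of AV trust (names illustrative).
names = ["perceived_risk", "perceived_benefit", "institutional_trust",
         "prior_av_experience", "driving_style", "age"]
n = 500
X = rng.normal(size=(n, len(names)))
# In this toy setup, trust is driven mainly by risk/benefit perceptions.
y = (X[:, 1] - X[:, 0] + 0.5 * X[:, 2]
     + rng.normal(scale=0.8, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: accuracy drop when each feature is shuffled.
imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
ranking = sorted(zip(names, imp.importances_mean), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name:22s} {score:+.3f}")
```

Unlike SHAP, permutation importance gives a single global score per feature rather than per-prediction attributions, but it captures the same "which predictors matter most" question.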
... Some of the most used stimuli are musical videos [18], [48], [63]-[65], visual stimuli [23], [40], [41], [45], [53], [54], [67], images [31], [33], [36], [46], [50], [51], [68], [69], audio [29], [34], [37], [52], task-based stimuli [24], [25], [27], [38], [39], [42]-[44], [48], [55]-[57], [58], [62], [70]-[75], film clips [76], [77], and normal video clips [26], [62], [78]. In task-based stimuli, subjects are instructed to perform either a mental task (mathematics-related problems, memorizing, computer-based gaming, and reading) or a physical task (cold-pressor test, rope skipping, surgical task, and fatigue exercise). ...
... Physical stress is induced by the cold-pressor task [70], [71], rope skipping [25], fatiguing exercise [22], [56], and the handgrip task [44], [75]. Cognition is evaluated using the go/no-go task [45], surgical task [74], audio stimuli [29], images [67], [69], visual stimuli [41], film clips [77], reading comic strips [73], audiovisual stimuli [43], gaming [57], and a deceptive task [28]. Sleep-based tasks are also used in some cases, namely [21], [27], [30], [34], [35], [49], [59], [60], [66]. ...
... For EEG, the following time-domain features are extracted: N550 latency and amplitude [34], mean [18], [50], [57], [69], maximum [18], minimum [18], standard deviation [50], [57], [76], skewness [50], [76], kurtosis [50], [76], variance [57], [69], peak-to-peak amplitude [57], [69], mean of the absolute values of the first difference of the raw signal [18], [50], and mean of the absolute values of the first difference of the normalized EEG signal [50]. Some have also used the power spectrum of one-second EEG epochs [55], approximate entropy [46], corrected conditional entropy (CCE) [38], duration of overall NREM sleep [21], number of slow-oscillation events [21], global power [36], the correlation coefficient between two channels [69], the root-mean-square value and energy of the EEG signal [69], and the median frequency (MF) and mean power frequency (MPF) of resting-state EEG. ...
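Most of the time-domain EEG features listed in this snippet are one-liners over a single epoch. A hedged sketch follows (feature names track the snippet; the sampling rate and signal are made up):

```python
import numpy as np
from scipy import stats

def time_domain_features(sig):
    """Common time-domain EEG features from the literature surveyed above."""
    z = (sig - sig.mean()) / sig.std()     # normalized signal
    return {
        "mean": sig.mean(),
        "max": sig.max(),
        "min": sig.min(),
        "std": sig.std(),
        "variance": sig.var(),
        "skewness": stats.skew(sig),
        "kurtosis": stats.kurtosis(sig),
        "peak_to_peak": np.ptp(sig),
        # mean absolute first difference, raw and normalized
        "mean_abs_first_diff": np.mean(np.abs(np.diff(sig))),
        "mean_abs_first_diff_norm": np.mean(np.abs(np.diff(z))),
        "rms": np.sqrt(np.mean(sig ** 2)),
        "energy": np.sum(sig ** 2),
    }

rng = np.random.default_rng(2)
epoch = rng.normal(size=256)  # one 1 s epoch at a hypothetical 256 Hz
feats = time_domain_features(epoch)
```

Features such as N550 latency, entropy measures, and inter-channel correlation need event markers or multiple channels and are deliberately omitted here.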
Article
Full-text available
The interaction between the central nervous system (CNS) and peripheral nervous system (PNS) governs various physiological functions and influences cognitive processes and emotional states. Unraveling the mechanisms governing the interaction between the brain and the body is necessary to enhance our understanding of physical and mental well-being. Neuro-ergonomics-based human-computer interaction can be improved by comprehending the intricate interrelation between the CNS and PNS. Various studies have used diverse methodologies to study CNS-PNS interaction in specific psychophysiological states, such as emotion, stress, or cognitive tasks. However, there is a need for a thorough, extensive, and systematic review covering diverse interaction forms, applications, and assessments. In this work, an attempt has been made to perform a systematic review that examines the interaction between the CNS and PNS across diverse psychophysiological states, focusing on varied physiological signals. For this, scientific repositories, namely Scopus, PubMed, Association for Computing Machinery, and Web of Science, are accessed. In total, 61 articles published between January 2008 and April 2023 have been identified for systematic review. The selected research articles are analyzed based on factors, namely subject information, stimulation modality, types of interactions between the brain and other organs, feature extraction techniques, classification methods, and statistical approaches. The evaluation of the existing literature indicates a scarcity of publicly available databases for CNS-PNS interaction and limited application of advanced machine learning and deep learning tools. Furthermore, this review underscores the urgent need for enhancements in several key areas, including the development of a more refined psychophysiological model, improved analysis techniques, and better electrode-surface interface technology. Additionally, there is a need for more research involving daily life activities, female-oriented studies, and privacy considerations. This review contributes to standardizing protocols, improving the diagnostic relevance of various instruments, and extracting more reliable biomarkers. The novelty of this study lies in guiding researchers toward various issues and potential solutions for future research in the field of bio-signal-based CNS-PNS interaction.
... The common psychophysiological methods used in the selected trust-related studies include electroencephalogram (EEG) (e.g., Akash et al. 2018; Gupta et al. 2020; Hu et al. 2016), electrodermal activity (EDA) (e.g., Cominelli et al. 2021), galvanic skin response (GSR) (e.g., Akash et al. 2018; Gupta et al. 2020; Hu et al. 2016), heart rate (e.g., Gupta et al. 2020; Kunze et al. 2019), and eye-tracking (e.g., Kunze et al. 2019; Lu and Sarter 2020). For instance, signals from central regions of the brain (e.g., C3 and C4) collected by EEG sensors were discovered to be related to human trust. ...
... For instance, signals from central regions of the brain (e.g., C3 and C4) collected by EEG sensors were discovered to be related to human trust. In an experiment, participants were required to determine whether to trust the detection results provided by an obstacle detection system, and EEG sensors identified trust metrics (Akash et al. 2018). Also, one study measured human trust using eye-tracking sensors in a multitasking experiment (i.e., a searching task assigned to a drone and a tracking task assigned to participants). ...
Article
Full-text available
With the construction sector primed to incorporate such advanced technologies as artificial intelligence (AI), robots, and machines, these advanced tools will require a deep understanding of human-robot trust dynamics to support safety and productivity. Although other disciplines have broadly investigated human trust-building with robots, the discussion within the construction domain is still nascent, raising concerns because construction workers are increasingly expected to work alongside robots or cobots, and to communicate and interact with drones. Without a better understanding of how construction workers can appropriately develop and calibrate their trust in their robotic counterparts, the implementation of advanced technologies may raise safety and productivity issues within these already-hazardous jobsites. Consequently, this study conducted a systematic review of the human-robot trust literature to (1) understand human-robot trust-building in construction and other domains; and (2) establish a roadmap for investigating and fostering worker-robot trust in the construction industry. The proposed worker-robot trust-building roadmap includes three phases: static trust based on the factors related to workers, robots, and construction sites; dynamic trust understood via measuring, modeling, and interpreting real-time trust behaviors; and adaptive trust, wherein adaptive calibration strategies and adaptive training facilitate appropriate trust-building. This roadmap sheds light on a progressive procedure to uncover the appropriate trust-building between workers and robots in the construction industry.
... However, excessive trust in automation will lead to problems, such as people's inability to respond quickly in emergency situations and the deterioration of manual driving ability over time (Lee & See, 2004). At the same time, without trust in the automated system, automation cannot fulfill its potential and reduce human workload effectively (Akash et al., 2018). Understanding the human-machine trust relationship in automatic driving systems and ensuring that drivers maintain their ability to respond to problems and operate systems with the assistance of automation are key safety issues that cannot be ignored in automated high-speed train operation. ...
... When a driver is in a state of high trust, alpha and beta waves are more active in the frontal lobe (Hirshfield et al., 2011). When a driver is in a state of distrust, pupil diameter changes significantly (He et al., 2022) and heart rate increases (Akash et al., 2018). Many studies have shown that there is a correlation between trust and related physiological information. ...
... Although overall accuracy tends to increase as modalities are added, this trend is more pronounced for modalities that already have good classification performance. Scholars have now started to pay attention to multimodal information (Akash et al., 2018). Similarly, Gupta et al. (2019) used a multi-sensory approach of EEG, galvanic skin response (GSR), and heart rate variability (HRV) to measure trust in virtual agents and explore the relationship between trust and cognitive load, confirming the results of this study. ...
Article
Full-text available
With the development of intelligent transportation, it has become mainstream for drivers and automated systems to cooperate to complete train driving tasks. Human-machine trust has become one of the biggest challenges in achieving safe and effective human-machine cooperative driving. Accurate evaluation of human-machine trust is of great significance to calibrate human-machine trust, realize trust management, reduce safety accidents caused by trust bias, and achieve performance and safety goals. Based on typical driving scenarios of high-speed trains, this paper designs a train fault judgment experiment. By adjusting the machine's reliability, the driver's trust is cultivated to form their cognition of the machine. When the driver's cognition is stable, data from the Trust in Automation (TIA) scale and several modes of physiological information, including electrodermal activity (EDA), electrocardiogram (ECG), respiration (RSP), and functional near-infrared spectroscopy (fNIRS), are collected during the fault judgment experiment. Based on analysis of this multi-modal physiological information, a human-machine trust classification model for high-speed train drivers is proposed. The results show that when all four modes of physiological information are used as input, the random forest classification model is most accurate, reaching 93.14%. This indicates that the driver's level of human-machine trust can be accurately represented by physiological information: inputting the driver's physiological information into the classification model yields their level of human-machine trust. The human-machine trust classification model for high-speed train drivers built in this paper from multi-modal physiological information establishes a correspondence between physiological signals and human-machine trust level. Trust level is thus characterized through physiological monitoring, which provides support for the dynamic management of trust.
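The modality-by-modality comparison described in this abstract can be mimicked on synthetic data: concatenate feature blocks for EDA, ECG, RSP, and fNIRS one at a time and score a random-forest classifier, as below. The data, block sizes, and classifier settings are illustrative only and do not reproduce the paper's 93.14% result.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 200

# Synthetic feature blocks for four modalities; each block carries some
# independent, noisy information about the binary trust label.
latent = rng.normal(size=n)
y = (latent + rng.normal(scale=0.7, size=n) > 0).astype(int)
blocks = {
    m: latent[:, None] * rng.normal(size=(1, 4))
       + rng.normal(scale=2.0, size=(n, 4))
    for m in ["EDA", "ECG", "RSP", "fNIRS"]
}

# Add one modality at a time and track cross-validated accuracy.
used, scores = [], []
for m in ["EDA", "ECG", "RSP", "fNIRS"]:
    used.append(m)
    X = np.hstack([blocks[k] for k in used])
    acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
    scores.append(acc)
    print(f"{'+'.join(used):20s} acc={acc:.3f}")
```

Because each synthetic block contains an independent noisy view of the same latent state, accuracy generally (though not strictly monotonically) improves as modalities are fused, mirroring the trend the abstract reports.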
... However, due to the arrival of automatic systems and the decreased cost of acquiring and analysing psychophysiological signals, focus has shifted towards examining these types of signals in response to specific stimuli in a bid to lower the subjectivity and potential biases associated with questionnaire-based approaches. Recent studies, like [9], [32]-[34], have been centred on the usage of psychophysiological measurements in the study of human trust. ...
... A common pattern has emerged from studies in which EEG is the most used signal to measure central nervous system activity, with fMRI close behind, the latter being more extensively used in the context of interpersonal trust [35]. Additionally, attempts have been made to study trust through EEG measurements which only look at event-related potentials (ERPs), but ERP has proven to be unsuitable for real-time trust level sensing during human-machine interaction due to difficulty in identifying triggers [33,36]. ...
... GSR, a classic psychophysiological signal that captures arousal based on the conductivity of the skin's surface, is not under conscious control but is modulated by the sympathetic nervous system; it has seen use in measuring stress, anxiety, and cognitive load [37]. Some research revealed that the net phasic component, as well as the maximum value of phasic activity in GSR, might play a critical role in trust detection [33]. ...
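The two GSR features named in this snippet (net phasic component and maximum phasic activity) presuppose a tonic/phasic decomposition. A minimal sketch follows, using a moving-median baseline as the tonic estimate; real work typically uses dedicated EDA decomposition methods, and the sampling rate, window, and signal here are assumptions.

```python
import numpy as np

def phasic_features(gsr, fs=4.0, win_s=8.0):
    # Moving-median baseline as a crude tonic estimate; residual = phasic.
    w = int(win_s * fs) | 1                      # force odd window length
    pad = np.pad(gsr, w // 2, mode="edge")
    tonic = np.array([np.median(pad[i:i + w]) for i in range(len(gsr))])
    phasic = gsr - tonic
    return {
        # "net phasic component": area of the positive phasic residual
        "net_phasic": float(np.sum(np.clip(phasic, 0.0, None)) / fs),
        # "maximum value of phasic activity"
        "max_phasic": float(phasic.max()),
    }

rng = np.random.default_rng(4)
t = np.arange(0.0, 30.0, 1.0 / 4.0)
# Drifting tonic level plus one transient skin-conductance response at t = 10 s.
gsr = (5.0 + 0.02 * t
       + 0.6 * np.exp(-0.5 * ((t - 10.0) / 1.5) ** 2)
       + rng.normal(scale=0.02, size=t.size))
f = phasic_features(gsr)
```

The median baseline tracks the slow drift while letting the transient response stand out in the phasic residual, which is what both features measure.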
Preprint
Full-text available
Recognizing trust as a pivotal element for success within Human-Robot Collaboration (HRC) environments, this article examines its nature, exploring the different dimensions of trust, analysing the factors affecting each of them, and proposing alternatives for trust measurement. To do so, we designed an experimental procedure involving 50 participants interacting with a modified 'Inspector game' while we monitor their brain, electrodermal, respiratory, and ocular activities. This procedure allowed us to map dispositional (static individual baseline) and learned (dynamic, based on prior interactions) dimensions of trust considering both demographic and psychophysiological aspects. Our findings challenge traditional assumptions regarding the dispositional dimension of trust and establish clear evidence that the first interactions are critical for the trust-building process and the temporal evolution of trust. By identifying more significant psychophysiological features for trust detection and underscoring the importance of individualized trust assessment, this research contributes to understanding the nature of trust in HRC. Such insights are crucial for enabling more seamless human-robot interaction in collaborative environments.
... However, due to the arrival of automatic systems and the decreased cost of acquiring and analysing psychophysiological signals, focus has shifted towards examining these types of signals in response to specific stimuli in a bid to lower the subjectivity and potential biases associated with questionnaire-based approaches. Recent studies, like [9], [32]-[34], have been centred on the usage of psychophysiological measurements in the study of human trust. ...
... EEG analysis is increasingly being utilized in human-robot interaction evaluation and brain-computer interfaces [36], as it provides the means to create real-time non-interruptive evaluation systems enabling the assessment of human mental states such as attention, workload, and fatigue during interaction [37][38][39]. Additionally, attempts have been made to study trust through EEG measurements which only look at event-related potentials (ERPs), but ERP has proven to be unsuitable for real-time trust level sensing during human-machine interaction due to the difficulty in identifying triggers [33,39]. ...
... GSR, a classic psychophysiological signal that captures arousal based on the conductivity of the skin's surface, not under conscious control but instead modulated by the sympathetic nervous system, has seen use in measuring stress, anxiety, and cognitive load [40]. Some research revealed that the net phasic component, as well as the maximum value of phasic activity in GSR, might play a critical role in trust detection [33]. ...
Article
Full-text available
Recognizing trust as a pivotal element for success within Human–Robot Collaboration (HRC) environments, this article examines its nature, exploring the different dimensions of trust, analysing the factors affecting each of them, and proposing alternatives for trust measurement. To do so, we designed an experimental procedure involving 50 participants interacting with a modified ‘Inspector game’ while we monitored their brain, electrodermal, respiratory, and ocular activities. This procedure allowed us to map dispositional (static individual baseline) and learned (dynamic, based on prior interactions) dimensions of trust, considering both demographic and psychophysiological aspects. Our findings challenge traditional assumptions regarding the dispositional dimension of trust and establish clear evidence that the first interactions are critical for the trust-building process and the temporal evolution of trust. By identifying more significant psychophysiological features for trust detection and underscoring the importance of individualized trust assessment, this research contributes to understanding the nature of trust in HRC. Such insights are crucial for enabling more seamless human–robot interaction in collaborative environments.
... Other time-domain measures, such as EEG signal mean amplitude, variance, or pairwise correlations between signals, are noise-prone and thus less reliable. Nevertheless, studies showed that they can be effectively used to objectively classify the level of trust of a driver in a vehicle equipped with an automated obstacle detection system [59]. The authors showed that time-domain measures recorded over frontal, central, and parieto-occipital regions, in conjunction with frequency-domain features and electrodermal activity measures, can be used in a binary classification context (trust-distrust), with an accuracy of about 70%. ...
... Power spectral measures, estimated using the discrete wavelet transform, have also been used in [59] to develop a multimodal (EEG and electrodermal signals) classifier model for determining human trust in an automated driving system. [Table 1 caption: Overview of studies investigating neural correlates of trust in AV contexts. ERP = event-related potentials; PSD = power spectral density; wPLI = weighted phase lag index; PLI = phase lag index; VLPC = ventrolateral prefrontal cortex; DLPC = dorsolateral prefrontal cortex; VMPC = ventromedial prefrontal cortex; AV = autonomous vehicle; HV = human-driven vehicle] ...
... However, research in this area can still make use of the vast body of research in the trust in automation area, as well as human-machine interaction, both in terms of rigorous theoretical models of trust, as well as in terms of neuroimaging findings and related neural mechanisms [24,25]. So far, findings of AV studies confirmed previously known neural mechanisms of trust, as well as behavioral aspects from cognitive neuropsychology research, such as: (i) the relevance of the approach versus withdrawal motivation and decision-making mechanisms [55]; (ii) the role played by cognitive load and affect on trust [59]; and (iii) attentional monitoring and working memory [68]. With respect to these, neural activity in the theta, alpha, beta and gamma bands was reported to be highly correlated with different levels of trust in AV, as overviewed in Sect. ...
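Band-limited spectral power of the kind discussed in these snippets can be estimated in a few lines. The cited study used the discrete wavelet transform; the sketch below substitutes a Welch PSD integrated over conventional band limits, which is a simpler stand-in rather than the authors' method, and the toy signal is invented.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs=256):
    # Welch PSD, then integrate over each conventional band.
    f, psd = welch(eeg, fs=fs, nperseg=fs)   # 1 Hz resolution with nperseg=fs
    df = f[1] - f[0]
    return {name: float(psd[(f >= lo) & (f < hi)].sum() * df)
            for name, (lo, hi) in BANDS.items()}

rng = np.random.default_rng(5)
fs = 256
t = np.arange(0, 4, 1 / fs)
# Alpha-dominant toy EEG: a 10 Hz sinusoid buried in broadband noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)
bp = band_powers(eeg, fs)
```

For a signal dominated by a 10 Hz component, the alpha band (8-13 Hz) should carry most of the power, which is the kind of band-level correlate the surveyed trust studies report.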
Conference Paper
Poor mental states, such as fatigue, low vigilance, and low trust in automation, have been known to interfere with the appropriate use of and interaction with vehicular automation. This has spurred strong interest in driver state monitoring systems (DSMS) that support adaptive interfacing between human drivers and automated driving systems to enhance road safety and driver experience. While there have been thriving developments in fatigue and vigilance monitoring, research on trust monitoring is still in its infancy. Trust in automation has predominantly been measured subjectively via self-report measures, with fewer studies attempting to measure trust objectively owing to the difficulties in capturing this relatively abstract mental state. Nevertheless, recent progress has unveiled promising potential for objective trust monitoring that can be implemented in future intelligent vehicles. This review presents a framework for understanding the cognitive, affective, and behavioural components of driver trust, and surveys current approaches and developments in objective trust measurement in autonomous vehicle contexts using behavioural and brain-based techniques. Approaches are evaluated for strengths and limitations in their conceptual validity in capturing trust-relevant information, their measurement reliability, and their practical value in real-world driving settings. Future directions for improving trust monitoring towards practical implementation are also discussed.
... When a team's goal is to develop or train a model, the research question gives insight into the data to be obtained. For example, the goal of the paper by Akash et al. (2018) was to train a model that captured the relationship between trust level (the dependent variable) and participant experience with a certain sensor (the independent variable) in a driving context. The research question that stemmed from this goal was "How can trust be measured in real-time during an automation-assisted task?" ...
... When physiological signals are used to infer mental activity, they are referred to as psychophysiological signals. GSR "is a classical psychophysiological signal that captures arousal based upon the conductivity of the surface of the skin" (Akash et al. 2018) and has been used in measuring stress, trust, and cognitive load (Nikula 1991;Jacobs et al. 1994;Khawaji et al. 2015). Other common psychophysiological signals are the electrocardiogram (ECG) and functional near-infrared spectroscopy (fNIRS) (Fairclough and Gilleade 2014;Sibi et al. 2016;Verdière et al. 2018;Causse et al. 2017). ...
... Similar to the hypothesis testing approach, the model training approach requires a study scenario or context to be specified. For example, Akash et al. (2018) created a scenario where participants needed to respond to feedback provided by an image processing sensor during a simulated driving task. Details were given to the participants about the sensor's algorithm which detects obstacles in front of the car and asks them (the participants) to respond to the algorithm's report. ...
... These results suggest that during a low cognitive load scenario, GSR can be employed to measure trust (Khawaji et al., 2015). Akash et al. (2018) aimed to identify whether physiological metrics could be utilized to develop a classifier-based empirical trust model. Across 100 trials, participants engaged with a driving simulator equipped with an obstacle detection sensor that would identify objects and generate a report for the participant to evaluate. ...
... Finally, physiological measures including cardiovascular and GSR measures, provide extremely high levels of temporal resolution as they can be captured second-by-second, and in a manner fairly unobtrusive to the task (although some argue that some of the sensors can be physically obtrusive). They also have the potential to be fairly sensitive, as has been demonstrated in their ability to capture small changes in workload (Akash et al., 2018), and sensor technology is becoming increasingly reliable. The biggest disadvantage of physiological measures is that sensor technology can be extremely expensive, especially if there is a need for software to support analysis by a non-expert, and resource intensive to learn to administer and analyze. ...
Conference Paper
Key to studying and assessing trust and other team emergent states in human-agent teams (HATs) is the ability to measure trust, which has predominantly been assessed through self-report survey methodologies. However, on their own, self-report measures are limited by issues such as social desirability (e.g., Arnold & Feldman, 1981; Taylor, 1961), inaccuracies due to retrospective assessments of abstract concepts (Podsakoff & Organ, 1986), the assessment of trust as static rather than a dynamically emerging state (Kozlowski, 2015), and the impracticality of asking team members to pause tasks to complete surveys. There is a clear need for innovative approaches to better capture trust for both research and applied purposes. Recently, researchers have recommended and begun incorporating more unobtrusive measurement methodologies such as physiological measures, event-based behavioral assessments, and analysis of language/communication (Azevedo-Sa et al., 2021; Hill et al., 2014; Marathe et al., 2020; Waldman et al., 2015). Unobtrusive measures offer many benefits beyond self-report measures, including being more objective, more predictive, more dynamic and real-time, and interfering less with taskwork and teamwork. Meanwhile, behavioral measures of trust, such as allocating tasks to autonomous agents and manually controlling agents, are readily available and also correlated with trust (Schaefer et al., 2021; Khalid et al., 2021). On their own, none of these approaches comprehensively measure trust across a variety of HAT domains and interactions. By evaluating and mapping out known measures of trust to use cases, this paper presents a review of the literature in this field and proposes a theoretically grounded Integrative Measurement Framework of Trust Dynamics in HATs that will more accurately, effectively, and practically capture trust in HATs by combining traditional and contemporary measurement approaches.
... As robots become seamlessly integrated with workplaces across industries, the understanding and prediction of human trust have created considerable value for ensuring the efficiency and safety of human-robot interaction. In the existing literature, the fusion of machine learning (ML) with psychophysiological measurements has emerged as a powerful tool for trust prediction [16][17][18][19]. ...
... For instance, in the study examining the relationship between personalization and privacy, the findings indicated that personalized model creation would trigger users' privacy concerns [15]. Moreover, there is an intuitive inference that the development of personalized models requires a significant investment of time and computational resources [17]. Notably, the construction industry presents a unique challenge because of the substantial workforce and frequent changes. ...
... In contrast to a single-turn, a multi-turn setting typically maintains the conversational context (e.g., co-reference resolution) in a back-and-forth information exchange with the user [137]. Some advantages of multi-turn CIS include alleviating the cognitive burden on the user by breaking down the information, assisting with information need formulation, or providing highly personalized information for a given context [124]. ...
... Multi-disciplinary expertise is needed to interpret the bio-mechanism behind such patterns [107]. To this end, machine learning models with a large number of extracted features are commonly used [3,34,49,106]. Deep learning models that decode entire signals can also reveal more intricate details [42,130,131], but risk making the analysis less interpretable. ...
Preprint
Full-text available
Instruments such as eye-tracking devices have contributed to understanding how users interact with screen-based search engines. However, user-system interactions in audio-only channels -- as is the case for Spoken Conversational Search (SCS) -- are harder to characterize, given the lack of instruments to effectively and precisely capture interactions. Furthermore, in this era of information overload, cognitive bias can significantly impact how we seek and consume information -- especially in the context of controversial topics or multiple viewpoints. This paper draws upon insights from multiple disciplines (including information seeking, psychology, cognitive science, and wearable sensors) to provoke novel conversations in the community. To this end, we discuss future opportunities and propose a framework including multimodal instruments and methods for experimental designs and settings. We demonstrate preliminary results as an example. We also outline the challenges and offer suggestions for adopting this multimodal approach, including ethical considerations, to assist future researchers and practitioners in exploring cognitive biases in SCS.
... That is why trust in a human-machine context is considered a dynamic component that can be altered within a moment during live performance in a practical situation. A study investigated this dynamic nature of trust with a live-sensing method, using EEG and GSR (galvanic skin response), and found that customized (individual-based) features provide more accuracy than general features (Akash et al. 2018). This supports the statement that a large portion of trust variance arises from individual human characteristics, which are reflected not only in neural signals but also in physiological (in this case, skin conductance) features. Another study found that, even though machine characteristics remain constant, individual human perceptions of those characteristics can differ (Merritt and Ilgen 2008); they found that 52% of trust variance is influenced by an individual's perception, and that trust can vary over time. ...
... These enhancements come with one challenge, efficiency: presenting too much information can increase workload. One study even found that higher transparency increases human trust when trust is low, but decreases it when the operator's trust in the machine is already high (Akash et al. 2018). ...
Conference Paper
Full-text available
This study aims to sense trust and distrust in a real-time-inspired scenario through the classification of brain signals. A word elicitation study is used to invoke the mental states associated with trust and distrust toward a machine. Participants think of any event or experience that comes to mind when they observe the word, recalling it without deliberately filtering out any cognitive or affective mental state; we consider this a replica of a real-life scenario in which all kinds of mental states and emotions may co-exist alongside trust or distrust. While participants think of or recall such events, electroencephalography data are recorded from their scalp and analyzed through machine learning approaches with several classification algorithms. The study develops an approach to sense whether a human is experiencing trust or distrust and compares different methods to discuss their efficacy in different scenarios. Both individualistic and generalistic approaches are examined, and individualistic approaches are found to provide better accuracy in sensing the trust or distrust state of the human brain. The study also explores ways to increase the efficiency of the method by reducing the number of channels, comparing model performance by the loss of accuracy the reduction causes. The K-Nearest Neighbor and/or Random Forest classifiers provide the best results using raw data with the individualistic approach in most scenarios, achieving up to 100% average accuracy.
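As a rough illustration of the individualistic approach described above, here is a minimal k-nearest-neighbor sketch in pure Python with one toy model per participant. The feature values and participant IDs are hypothetical stand-ins for extracted EEG features, not data from the study:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest
    training points (Euclidean distance). `train` is a list of
    (feature_vector, label) pairs."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy per-participant ("individualistic") training sets: synthetic
# 2-D features standing in for EEG band powers (hypothetical values).
participant_data = {
    "P01": [((0.2, 0.9), "trust"), ((0.3, 0.8), "trust"),
            ((0.9, 0.1), "distrust"), ((0.8, 0.2), "distrust")],
    "P02": [((0.7, 0.7), "trust"), ((0.6, 0.8), "trust"),
            ((0.1, 0.2), "distrust"), ((0.2, 0.1), "distrust")],
}

# Each participant gets their own model, mirroring the finding that
# individual-specific models classify more accurately than a shared one.
for pid, data in participant_data.items():
    print(pid, knn_predict(data, (0.25, 0.85), k=3))
```

Note that the same query point can land in different classes for different participants, which is exactly why a per-individual model can outperform a general one.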
... While a system could routinely ask its user to evaluate his/her trust, there are potential methodological concerns (e.g., anchoring and response bias), in addition to concerns over whether such a redundant request may be seen as annoying behavior that could affect use of the system (Segura et al., 2012). In an effort to develop a real-time assessment of trust, research has examined the utility of physiological and behavioral measures such as heart rate (Khalid et al., 2016; Mitkidis et al., 2015; Perello-March et al., 2022; Tolston et al., 2018), galvanic skin response (Akash et al., 2018; Chen et al., 2015), interventions (Tenhundfeld et al., 2020), monitoring behaviors (Bahner et al., 2008; Bailey & Scerbo, 2007; Banks et al., 2018; Endsley, 2017), and eye-tracking (Hergeth et al., 2015; Lu & Sarter, 2019). ...
... As such, we chose to use subjective self-report, behavioral, and physiological data to assess trust, each of which has a body of literature detailing its use for assessment of trust (Ajenaghughrure et al., 2020; Akash et al., 2018; Banks et al., 2018; Mitkidis et al., 2015; Sauer et al., 2016; Schwarz et al., 2019; Thayer et al., 2012; Tolston et al., 2018; Wang et al., 2018). [Figure 8: Scatterplots for physiological measures.] ...
Article
Full-text available
As automated and autonomous systems become more widely available, the ability to integrate them into environments seamlessly becomes more important. One cognitive construct that can predict the use, misuse, and disuse of automated and autonomous systems is trust that a user has in the system. The literature has explored not only the predictive nature of trust but also the ways in which it can be evaluated. As a result, various measures, such as physiological and behavioral measures, have been proposed as ways to evaluate trust in real-time. However, inherent differences in the measurement approaches (e.g., task dependencies and timescales) raise questions about whether the use of these approaches will converge upon each other. If they do, then the selection of any given proven approach to trust assessment may not matter. However, if they do not converge, it raises questions about the ability of these measures to assess trust equally and whether discrepancies are attributable to discriminant validity or other factors. The present study used various trust assessment techniques for passengers in a self-driving golf-cart. We find little to no convergence across measures, raising questions that need to be addressed in future research.
... These behaviours reflect an individual's emotional and cognitive states and can provide valuable insights into their trust-related responses during interactions [5]. Several studies have explored the use of PBs in human-robot trust research such as [6,26,32,33]. ...
... H2 acknowledges the potential influence of repeated interactions on PBs, as trust development is a dynamic process, and trust levels may change over time [36]. H3 is supported by previous work in various domains, demonstrating the effectiveness of machine learning classifiers in analyzing and predicting human behaviour based on physiological measures [3,6]. ...
... In addition to this complex artificial cognitive system, there are important aspects to guarantee efficiency in HRI, e.g. eliciting trust [10, 30-32] and acceptance in the human co-worker [33] through situation awareness [8]. ...
... Such combination might be useful either to re-evaluate or to augment the previously harvested information decoded from anticipatory activity. Furthermore, it could be useful to combine these single-trial measures with long-term measures of human trust towards a machine/robot during interaction which have been shown to be decodeable from fMRI activity [30,90] and more recently also from EEG signals [10,31,32,91]. ...
Article
Full-text available
Human-robot interaction (HRI) describes scenarios in which both human and robot work as partners, sharing the same environment or complementing each other on a joint task. HRI is characterized by the need for high adaptability and flexibility of robotic systems toward their human interaction partners. One of the major challenges in HRI is task planning with dynamic subtask assignment, which is particularly challenging when subtask choices of the human are not readily accessible by the robot. In the present work, we explore the feasibility of using electroencephalogram (EEG) based neuro-cognitive measures for online robot learning of dynamic subtask assignment. To this end, we demonstrate in an experimental human subject study, featuring a joint HRI task with a UR10 robotic manipulator, the presence of EEG measures indicative of a human partner anticipating a takeover situation from human to robot or vice-versa. The present work further proposes a reinforcement learning based algorithm employing these measures as a neuronal feedback signal from the human to the robot for dynamic learning of subtask-assignment. The efficacy of this algorithm is validated in a simulation-based study. The simulation results reveal that even with relatively low decoding accuracies, successful robot learning of subtask-assignment is feasible, with around 80% choice accuracy among four subtasks within 17 minutes of collaboration. The simulation results further reveal that scalability to more subtasks is feasible and mainly accompanied with longer robot learning times. These findings demonstrate the usability of EEG-based neuro-cognitive measures to mediate the complex and largely unsolved problem of human-robot collaborative task planning.
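The abstract's finding that robot learning succeeds even with low decoding accuracy can be illustrated with a toy epsilon-greedy bandit (not the paper's actual algorithm): the binary approval signal is flipped with a probability matching a hypothetical decoder error rate, yet the value estimates still separate the preferred subtask from the rest. All constants below are invented for the sketch:

```python
import random

random.seed(0)

N_SUBTASKS = 4
TRUE_PREFERRED = 2        # subtask the human actually wants the robot to take
DECODER_ACCURACY = 0.7    # EEG feedback is only 70% reliable (hypothetical)

def noisy_feedback(choice):
    """Binary approval signal, flipped with prob. 1 - DECODER_ACCURACY,
    emulating an imperfect EEG decoder."""
    correct = 1 if choice == TRUE_PREFERRED else 0
    return correct if random.random() < DECODER_ACCURACY else 1 - correct

# Epsilon-greedy value estimates over subtask choices.
values = [0.0] * N_SUBTASKS
counts = [0] * N_SUBTASKS
for t in range(2000):
    if random.random() < 0.1:                      # explore
        a = random.randrange(N_SUBTASKS)
    else:                                          # exploit
        a = max(range(N_SUBTASKS), key=lambda i: values[i])
    r = noisy_feedback(a)
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]       # incremental mean

best = max(range(N_SUBTASKS), key=lambda i: values[i])
print("learned preferred subtask:", best)
```

Because the expected reward for the preferred subtask (0.7) exceeds that of the others (0.3), the running means separate after enough trials despite the 30% flip rate, mirroring the qualitative result in the simulation study.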
... In contrast, other studies have assessed trust based solely on a single psycho-physiological signal from the peripheral nervous system. For instance, [176], [177] analysed how the electrodermal response is significantly affected by variations in trust. Regarding pupillometry and eye behaviour, [178] revealed that humans trust partners with dilating pupils and withhold trust from partners with constricting pupils. ...
... Therefore, most studies tend to combine signals from the central nervous system (mostly brain activity) with those from the peripheral nervous system. Works such as [176], [184] employed brain and electrodermal responses to obtain real-time trust assessment while [185] included cardiovascular responses. However, owing to the limited temporal resolution of signals from the peripheral nervous system, it is infrequent to combine more than three different psycho-physiological responses for trust assessment [186]. ...
Article
Full-text available
The purpose of this study is to explore the measurement of human factors in the workplace that can provide critical insights into workers’ well-being. Human factors refer to physical, cognitive, and psychological states that can impact the efficiency, effectiveness, and mental health of workers. The article identifies six human factors that are particularly crucial in today’s workplaces: physical fatigue, attention, mental workload, stress, trust, and emotional state. Each of these factors alters the human physiological response in a unique way, affecting the human brain, cardiovascular, electrodermal, muscular, respiratory, and ocular reactions. This paper provides an overview of these human factors and their specific influence on psycho-physiological responses, along with suitable technologies to measure them in working environments and the currently available commercial solutions to do so. By understanding the importance of these human factors, employers can make informed decisions to create a better work environment that leads to improved worker well-being and productivity.
... However, these models make linear assumptions about human trust behavior. Given these limitations, machine learning models have become popular for predicting driver trust, including the support vector machine (SVM), K-nearest neighbor (KNN), and quadratic discriminant analysis (QDA) (33-36). For instance, Akash et al. (33) applied electroencephalography (EEG) and GSR to identify users' trust in an obstacle detection system (as used in AVs) via QDA. ...
... Given these limitations, machine learning models have become popular for predicting driver trust, including the support vector machine (SVM), K-nearest neighbor (KNN), and quadratic discriminant analysis (QDA) (33-36). For instance, Akash et al. (33) applied electroencephalography (EEG) and GSR to identify users' trust in an obstacle detection system (as used in AVs) via QDA. Ayoub et al. (36) established a prediction model of dispositional trust using extreme gradient boosting, based on many factors affecting trust such as perception of risk, feelings, and knowledge of AVs. ...
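As a sketch of how QDA-style classification works in this setting, here is a 1-D Gaussian discriminant with a per-class mean and variance (illustrative only, not the cited implementation; the "GSR feature" values are invented):

```python
import math

def fit_gaussian(samples):
    """Per-class mean and variance (MLE) for a 1-D feature."""
    mu = sum(samples) / len(samples)
    var = sum((s - mu) ** 2 for s in samples) / len(samples)
    return mu, var

def qda_predict(models, x):
    """Quadratic discriminant: pick the class maximizing the
    Gaussian log-likelihood (equal priors assumed)."""
    def loglik(mu, var):
        return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
    return max(models, key=lambda c: loglik(*models[c]))

# Hypothetical 1-D GSR feature (e.g., mean skin-conductance level)
# per trial, labelled by self-reported trust state.
train = {
    "trust":    [0.31, 0.28, 0.35, 0.30, 0.26],
    "distrust": [0.55, 0.62, 0.49, 0.58, 0.66],
}
models = {label: fit_gaussian(xs) for label, xs in train.items()}

print(qda_predict(models, 0.29))
print(qda_predict(models, 0.60))
```

Unlike a linear discriminant, each class here keeps its own variance, so the decision boundary is quadratic in the feature; full QDA generalizes this to per-class covariance matrices.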
Article
Full-text available
Trust calibration is essential to prevent misuse and disuse of automated vehicles (AVs). Accurate measurement and real-time identification of driver trust is an important prerequisite for achieving trust calibration. Currently, in conditionally automated driving, most researchers utilize self-reported ratings as the ground truth to evaluate driver trust and explore objective trust indicators. However, inconsistencies between the subjective rating and objective behaviors were reported, indicating that trust measurements cannot rely solely on self-reported ratings. To fill this research gap, a method of subjective and objective combination was proposed to measure and identify driver trust in AVs. Thirty-four drivers were involved in a sequence of takeover events. Monitoring ratio and subjective trusting ratings were collected, and combined to measure driver trust levels (i.e., higher and lower trust). Compared with the subjective measurement, the hybrid measurement can more reliably evaluate driver trust in AVs. More importantly, we established a real-time driver trust recognition model for AVs using label smoothing-based convolutional neural network and long short-term memory network fusing multimodal physiological signals (i.e., galvanic skin response and electrocardiogram) and interactive experiences (i.e., takeover-related lead time, takeover frequencies and system usage time). Four common models were developed to compare with the proposed model: Gaussian naive Bayes, support vector machine, convolutional neural network, and long short-term memory network. The comparison results suggest that the performance of our model outperforms others with an F1-score of 75.3% and an area under curve value of 0.812. These findings could have implications for the development of trust monitoring systems in conditionally automated driving.
... One could also use nonverbal cues as a basis for computational models that predict trust-related outcomes [15]. There is also work on psychophysiological approaches to devising a predictive model of trust, using skin responses (primarily galvanic skin response data) and neural measures such as EEG and heart rate variability (HRV) [1,10]. ...
... The robot should pick up the coffee powder from the topmost shelf and the cup from the lower one to make coffee. ...
Preprint
Handling trust is one of the core requirements for facilitating effective interaction between the human and the AI agent. Thus, any decision-making framework designed to work with humans must possess the ability to estimate and leverage human trust. In this paper, we propose a mental model based theory of trust that not only can be used to infer trust, thus providing an alternative to psychological or behavioral trust inference methods, but also can be used as a foundation for any trust-aware decision-making frameworks. First, we introduce what trust means according to our theory and then use the theory to define trust evolution, human reliance and decision making, and a formalization of the appropriate level of trust in the agent. Using human subject studies, we compare our theory against one of the most common trust scales (Muir scale) to evaluate 1) whether the observations from the human studies match our proposed theory and 2) what aspects of trust are more aligned with our proposed theory.
... And in the frequency domain, the power distribution over the different frequency bands helps to understand various brain functions. Neural signals can be grouped into five frequency bands: delta (1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz), beta (13-25 Hz), and gamma (> 25 Hz) [6]. Power spectral density over these bands shows associations with different mental activities and/or states. ...
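The band-power decomposition described above can be sketched with a naive DFT periodogram in pure Python (illustrative; real EEG pipelines typically use Welch's method plus artifact rejection, and the band edges below follow the excerpt's convention):

```python
import math

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (13, 25), "gamma": (25, 45)}

def band_powers(signal, fs):
    """Power per EEG band from a naive DFT periodogram.
    `fs` is the sampling rate in Hz."""
    n = len(signal)
    powers = {b: 0.0 for b in BANDS}
    for k in range(1, n // 2):           # positive-frequency bins only
        freq = k * fs / n
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(-signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = (re * re + im * im) / n
        for band, (lo, hi) in BANDS.items():
            if lo <= freq < hi:
                powers[band] += p
    return powers

# Synthetic one-second epoch: a pure 10 Hz (alpha-band) oscillation.
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
p = band_powers(sig, fs)
print(max(p, key=p.get))   # the alpha band dominates
```

With a one-second window the frequency resolution is 1 Hz, so the 10 Hz tone lands exactly in one alpha-band bin; in practice, longer windows and averaging trade resolution against variance.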
... So we can say that trust is dynamic. This dynamic characteristic of trust is addressed in the context of trust in machines in a study using EEG and galvanic skin response (GSR) [14], which found that customized features provide more accuracy than general features. This indicates that the neural and psychological features of trust and mistrust vary significantly among individuals. ...
... Such a measurement frequency would not suffice for responsive interaction in dynamically changing conditions. Among physiological measurements, electrodermal activity and heart rate, both positively correlated with stress, anxiety, and cognitive workload (Caplan and Jones, 1975; Payne and Rick, 1986; Jacobs et al., 1994), emerge as valuable indicators of trust (Waytz et al., 2014; Akash et al., 2018). Since affective touch has soothing effects, e.g., acting as a stress buffer (Morrison, 2016) and reducing anxiety levels and autonomic responses under certain conditions (Mazza et al., 2023), as explained in Section 1, we believe it can improve a person's trust in a semi-autonomous system. ...
Article
Full-text available
In this paper, we discuss the potential contribution of affective touch to the user experience and robot performance in human-robot interaction, with an in-depth look into upper-limb prosthesis use as a well-suited example. Research on providing haptic feedback in human-robot interaction has worked to relay discriminative information during functional activities of daily living, like grasping a cup of tea. However, this approach neglects to recognize the affective information our bodies give and receive during social activities of daily living, like shaking hands. The discussion covers the emotional dimensions of affective touch and its role in conveying distinct emotions. In this work, we provide a human needs-centered approach to human-robot interaction design and argue for an equal emphasis to be placed on providing affective haptic feedback channels to meet the social tactile needs and interactions of human agents. We suggest incorporating affective touch to enhance user experience when interacting with and through semi-autonomous systems such as prosthetic limbs, particularly in fostering trust. Real-time analysis of trust as a dynamic phenomenon can pave the way towards adaptive shared autonomy strategies and consequently enhance the acceptance of prosthetic limbs. Here we highlight certain feasibility considerations, emphasizing practical designs and multi-sensory approaches for the effective implementation of affective touch interfaces.
... In the automotive industry, Körber et al. (2018) used response time to measure trust and found that participants with high levels of trust spent less time on driving tasks in a driving simulator using an automated vehicle. Akash et al. (2018) used EEG and EDA as psychophysiological measures for real-time measurement of trust in an obstacle detection sensor in a car driving task. De Visser et al. (2018) also used EEG to identify the relationship between trust in automated algorithms and brain activity in an error detection task and found that the occipital and frontal areas of the human brain are those predominantly correlated with trust. ...
Article
Human–robot collaboration (HRC) has emerged as a promising frontier within the construction industry, offering the potential to enhance productivity, safety, and efficiency. The effectiveness of HRC critically depends on the degree of trust that workers place in their robots, and establishing a seamless level of trust in robots is essential to realize the full benefits of HRC. Despite the extensive exploration of trust dynamics in various industries, there is a notable research gap with regard to trust in construction robots, which possess distinctive characteristics in terms of appearance, capabilities, and interaction compared to robots in other sectors. Therefore, in this study, we analyzed trust dynamics within the context of HRC during construction tasks. Both subjective survey data and objective psychophysiological data—including heart rate variability (HRV), electrodermal activity (EDA), and electroencephalogram (EEG)-based emotional valence and arousal—were employed as human trust measures. We conducted experiments for bricklaying tasks in an immersive virtual construction environment and analyzed multifaceted robot factors—including workspace environment, level of interaction, and robot speed, proximity, and angle of approach—and their relationships with human trust measures using statistical analysis, such as the t-test, two-way ANOVA, Spearman's rank correlation, and moderation analysis. The results indicated that workspace environment and level of interaction were the most significant robot factors affecting human trust. EDA exhibited the most sensitivity to variations in robot factors. It was also observed that the effects of speed, proximity, and angle of approach depended on the level of interaction and the type of workspace environment. There was a significant positive correlation between proximity and perceived trust.
The findings of this study contribute to the optimization of robot design and interaction protocols for construction tasks, fostering greater worker trust, and enhancing project productivity and efficiency.
... An intriguing avenue for future research involves exploring additional trust measures that can distinguish between the gain/loss of moral and performance trust in human-robot interaction (HRI). Examining alternative measures, such as physiological measurements, which have been increasingly utilized by researchers to measure and model trust in HRI [1,21,23,24,44], may offer more profound insights into this subject. In recent years, researchers have also employed other types of measurements for assessing trust in HRI. ...
Article
The increasing use of robots in social applications requires further research on human-robot trust. The research on human-robot trust needs to go beyond the conventional definition that mainly focuses on how human-robot relations are influenced by robot performance. The emerging field of social robotics considers optimizing a robot’s personality a critical factor in user perceptions of experienced human-robot interaction (HRI). Researchers have developed trust scales that account for different dimensions of trust in HRI. These trust scales consider one performance aspect (i.e., the trust in an agent’s competence to perform a given task and their proficiency in executing the task accurately) and one moral aspect (i.e., trust in an agent’s honesty in fulfilling their stated commitments or promises) for human-robot trust. The question that arises here is to what extent do these trust aspects affect human trust in a robot? The main goal of this study is to investigate whether a robot’s undesirable behavior due to the performance trust violation would affect human trust differently than another similar undesirable behavior due to a moral trust violation. We designed and implemented an online human-robot collaborative search task that allows distinguishing between performance and moral trust violations by a robot. We ran these experiments on Prolific and recruited 100 participants for this study. Our results showed that a moral trust violation by a robot affects human trust more severely than a performance trust violation with the same magnitude and consequences.
... The Galvanic Skin Response (GSR; detailed description in: Akash et al., 2018) was recorded using two passive Nihon Kohden electrodes placed on the fingertips of the index and middle fingers of participants' non-dominant hand. These signals were also sampled by the BioSemi system and were synchronized to the EEG data. ...
Preprint
Full-text available
Many real-life situations can be extremely noisy. Psychoacoustic studies have shown that background noise can have a detrimental effect on the ability to process and understand speech. However, most studies use stimuli and task designs that are highly artificial, limiting their generalization to more realistic contexts. Moreover, to date, we do not fully understand the neurophysiological consequences of trying to pay attention to speech in a noisy place. To address this lab-to-real-life gap and increase the ecological validity of speech-in-noise research, here we introduce a novel audiovisual Virtual Reality (VR) experimental platform. Combined with neurophysiological measurements of neural activity (EEG), eye-gaze and skin conductance (GSR), we studied the effects of background noise in a realistic context where the ability to process and understand continuous speech is especially important: a VR Classroom. Participants (n=32) sat in a VR Classroom and were told to pay attention to mini-lecture segments by a virtual teacher. Trials were either Quiet or contained background construction noise, emitted from outside the classroom window, which was either Continuous (drilling) or Intermittent (air hammers). Results show that background noise had a detrimental effect on learning outcomes, which was also accompanied by reduced neural tracking of the teacher's speech. Comparison of the two noise types showed that the intermittent construction noise was more disruptive than continuous noise, as indexed by both behavioral and neural measures, and it also elicited higher skin-conductance levels, reflecting heightened arousal. Interestingly, eye-gaze dynamics were not affected by the presence of noise. This study advances our understanding of the neurophysiological effects of background noise and extends it to more ecologically relevant contexts.
It also emphasizes the role that temporal dynamics play in processing speech in noise, highlighting the need to consider the features of realistic noises as we expand speech-in-noise research to increasingly realistic circumstances.
... It can be used before or after an experiment to capture the attitude of trust and the underlying beliefs. Behavioral and physiological measures, such as electroencephalography (EEG) [23], [24], provide more objective data but can be influenced by factors unrelated to trust and can require specialized equipment. They may be used during the experiment and provide online data not only on behavior but also on attitude and intention. ...
Article
Full-text available
Understanding how human trust in AI evolves over time is essential to identify the limits of each party and provide solutions for optimal collaboration. With this goal in mind, we examine the factors that directly or indirectly influence trust, whether they come from humans, AI, or the environment. We then propose a summary of methods for measuring trust, whether subjective or objective, to show which ones are best suited for longitudinal studies. We then focus on the main driving force behind the evolution of trust: feedback. We justify how learning feedback can be transposed to trust and what types of feedback can be applied to impact the evolution of trust over time. After understanding the factors that influence trust and how to measure it, we propose an application example on a maritime surveillance tool with an AI-based decision aid.
... Moreover, physiological and neural measures aim to capture the results of complex cognitive processes related to trust, which has proven promising for measuring trust in real time (Kohn et al., 2021). These common physiological and neural approaches, including eye gaze tracking (Elkins and Derrick, 2013), heart rate change (Waytz et al., 2014), electrodermal activity (EDA), and electroencephalogram (EEG) (Akash et al., 2018), provide objective evidence of trust, with high resolution into the temporal aspects of trust, which enables reliable trust measurement in real time (Hopko and Mehta, 2021). However, all these methods suffer from some unique drawbacks, requiring extensive expertise and planning to apply correctly (Körber et al., 2018). ...
Article
Trust in an automated vehicle (AV) system can impact the experience and safety of drivers and passengers. This work investigates the use of speech to measure drivers' trust in AVs. Seventy-five participants were randomly assigned to a high-trust group (an AV with 100% correctness, no crashes, and 4 system messages with visual-auditory TORs) or a low-trust group (an AV with 60% correctness, a 40% crash rate, and 2 system messages with visual-only TORs). Voice interaction tasks were used to collect speech during the driving process. The results revealed that our settings successfully induced trust and distrust states. The extracted speech features of the two trust groups were used to train a back-propagation neural network, which was evaluated for its ability to accurately predict the trust classification. The highest classification accuracy of trust was 90.80%. This study proposes a method for accurately measuring trust in automated vehicles using voice recognition.
... Akash et al. [3] conducted a study in which two approaches were explored to develop a "classifier-based empirical trust-sensor model" that uses electroencephalography and galvanic skin response measurements. The first approach is a generalized one. ...
Thesis
Full-text available
Embodied Virtual Agents (EVAs) are human-like computer agents which can serve as assistants and companions in different tasks. They have numerous applications such as interfaces for social robots, educational tutors, game counterparts, medical assistants, and companions for the elderly and/or individuals with psychological or behavioral conditions. Forming a reliable and trustworthy interaction is critical to the success and acceptability of this new type of user interface. This dissertation explores the interaction between humans and EVAs in cooperative and uncooperative conditions to increase understanding of how trust operates in these interactions. It also investigates how interactions with one agent influence the perception of other agents. In addition to participants achieving significantly higher performance and having higher trust for the cooperative agent, we found that participants' trust for the cooperative agent was significantly higher if they interacted with an uncooperative agent in one of the sets, compared to working with cooperative agents in both sets. The results suggest that trust for an EVA is relative and is dependent on agent behavior and the user's history of interaction with different agents. We found that biases, such as primacy bias, can contribute to humans trusting one agent over another even if the agents look similar and serve the same purpose. Primacy bias can also be responsible for higher trust in the first agent when working with multiple cooperative agents having the same behavior and performing the same task. We also observed that working with one agent has a significant effect on users' initial trust for other agents within the same system, even before collaborating with the agent in an actual task.
Based on lessons learnt through conducting the experiments, specifically through users’ personal reflections on their interactions with EVAs, we discuss ethical issues that arise in interactions with virtual worlds. Based on the experimental results obtained in the user experiments, and the findings in previous literature in the field of trust between humans and virtual agents, we suggest guidelines for trust-adaptive virtual agents. We provide justifications for each guideline to increase transparency and provide additional resources to researchers and developers who are interested in these suggestions. The results of this dissertation provide insights into interaction between humans and virtual agents in scenarios which require the collaboration of humans and computers under uncertainty in a timely and efficient way. It also provides directions for future research to use EVAs as primary user interfaces due to the similarity of interaction with such agents to natural human-human interaction and possibility of building high-level, resilient trust toward them.
... The different measurement scales from participants can directly affect the estimation result of model parameters and the resultant preferable path. In the future, we will investigate objective measurements, such as psychophysical signals, of the trust value that can reduce the bias [43]. ...
Article
Full-text available
In this paper, we seek to develop a computational human to multi-robot system (MRS) trust model to encode human intention into the MRS motion tasks in offroad environments. Our computational trust model builds a linear state-space equation to capture the influence of environmental attributes on human trust in an MRS. Bayesian inference is used to derive the posterior distribution of the trust model parameters. Because the posterior distributions are computationally intractable, we develop a Markov Chain Monte Carlo sampling algorithm by integrating the Gibbs sampler with forward-filtering-backward-sampling to approximate the distributions. A Bayesian optimization based experimental design (BOED) is proposed to sequentially learn the human-MRS trust model parameters. Inspired by decision field theory, we develop a human-preference-based acquisition function for the BOED to explore the MRS motion path and collect data for the trust model in an efficient way. A case study on a human-MRS collaborative bounding overwatch task is deployed, a multi-robot motion task traditionally used in offroad environments that requires a heavy cognitive workload for the human collaborating with the MRS. Trials using simulated human agents and human subjects collaborating with an MRS are conducted in the ROS Gazebo simulator. The trials with simulated human agents show that the BOED can correctly estimate the trust model parameters. The human subject tests demonstrate the capability of our computational trust model in capturing the human's trust dynamics, as reflected in goodness-of-fit metrics. The tests also show statistically significant results when comparing the BOED with a benchmark experimental design approach. The BOED resulted in fewer collisions with obstacles, lower frequency of contact loss between robots, lower operator workload, and higher system usability.
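The linear state-space formulation used above can be illustrated, much more simply, with a scalar Kalman filter over a hypothetical trust state (the dynamics coefficient and noise variances below are invented for the sketch, not taken from the paper, which instead infers such parameters via MCMC):

```python
def kalman_step(mu, var, z, a=0.95, q=0.01, r=0.05):
    """One predict/update cycle for a scalar linear-Gaussian
    trust state:  x_t = a * x_{t-1} + w,  z_t = x_t + v,
    with process-noise variance q and measurement-noise variance r."""
    # Predict: propagate the trust estimate through the dynamics.
    mu_pred = a * mu
    var_pred = a * a * var + q
    # Update with a trust measurement z (e.g., a self-report or sensor proxy).
    k = var_pred / (var_pred + r)          # Kalman gain
    mu_new = mu_pred + k * (z - mu_pred)
    var_new = (1 - k) * var_pred
    return mu_new, var_new

# Hypothetical sequence of noisy trust measurements drifting upward.
mu, var = 0.5, 1.0
for z in [0.4, 0.5, 0.6, 0.7, 0.8]:
    mu, var = kalman_step(mu, var, z)
print(round(mu, 2), round(var, 3))
```

Each update shrinks the posterior variance and pulls the estimate toward the measurements; with known parameters this filtering step is exact, whereas the cited work must additionally sample the unknown parameters, hence the Gibbs/forward-filtering-backward-sampling machinery.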
... Recent studies have attempted to classify trust in automated agents via neurophysiological signals [2,31,81]. However, many such efforts use rigid task environments and complex machine learning techniques that are poorly interpretable and prone to overfitting, thus failing to generalize to ecologically valid scenarios. ...
Article
As collaborative technologies evolve from supportive tools to interactive teammates, there is a growing need to understand how trust and team processes develop in human-agent teams. To contribute effectively, these systems must be able to support human teammates in a task without disrupting the delicate interpersonal states and team processes that govern successful collaboration. In order to break down the complexity of monitoring multiple actors in human-agent collaborations, there is a need to identify interpretable, generalizable measures that can monitor the emergence of interpersonal and team-level processes that underlie effective teaming. We address this gap by using functional Near-Infrared Spectroscopy to concurrently measure brain activity of two individuals in a human-human-agent team during a complex, ecologically valid collaborative task, with a goal of identifying quantitative markers of cognition- and affect-based trust alongside team processes of coordination, strategy formulation, and affect management. Two multidimensional extensions of recurrence quantification analysis, a nonlinear method based in dynamical systems theory, are presented to quantify interpersonal coupling and team-level regularity as reflected in the hemodynamics of three cortical regions across multiple time-scales. Mixed-effects regressions reveal that neural recurrence between individuals uniquely reflects changes in self-reported trust, while team-level neural regularity inversely predicts self-reported team processes. Additionally, we show that recurrence metrics capture temporal dynamics of affect-based trust consistent with existing theory, showcasing the interpretability and specificity of these metrics for disentangling complex team states and processes. This paper presents a novel, interpretable, and computationally efficient model-free method capable of differentiating between latent trust and team processes in a complex, naturalistic task setting.
We discuss the potential applications of this technique for continuous monitoring of team states, providing clear targets for the future development of adaptive human-agent teaming systems.
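The recurrence metrics above build on a simple core idea; a minimal sketch of a cross-recurrence rate between two scalar signals (no embedding, illustrative threshold, not the paper's multidimensional extension) is:

```python
def recurrence_rate(x, y, eps):
    """Fraction of time pairs (i, j) whose states are within eps of each other."""
    hits = sum(1 for xi in x for yj in y if abs(xi - yj) <= eps)
    return hits / (len(x) * len(y))

# Two identical ramps recur only where their values coincide,
# i.e. 3 of the 9 possible (i, j) pairs.
rr = recurrence_rate([0, 1, 2], [0, 1, 2], eps=0.5)
```

Full RQA additionally quantifies the structure of the recurrence plot (diagonal and vertical line distributions), which is what lets it separate coupling from mere similarity.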
... However, retrieving human self-reported behavior continually for use in a feedback control system is impractical. Although these measurements have been linked to human trust levels [9], they have not been investigated in the context of real-time trust sensing. This research work offers a model to predict user trust based on collected datasets of users' interaction with artificial intelligence (AI) enabled machines and devices. ...
Article
Full-text available
User trust in technology is an essential factor for the usage of a system or machine. AI-enabled technologies such as virtual digital assistants simplify many processes for humans, ranging from simple search to more complex actions like home automation and the completion of some transactions, notably with Amazon's Alexa. Can humans actually trust these AI-enabled technologies? Hence, this research applied an adaptive boosting (AdaBoost) ensemble learning approach to predict users' trust in virtual assistants. A technology trust dataset was obtained from figshare.com and engineered before training the AdaBoost algorithm to learn the trends and patterns. The result of the study showed that AdaBoost achieved an accuracy of 94.31% on the testing set.
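The boosting idea the study applies can be sketched in pure Python with one-dimensional decision stumps; this is an illustrative toy on made-up data, not the study's pipeline or dataset:

```python
import math

def stump_predict(threshold, polarity, x):
    """A one-dimensional decision stump: the sign flips at the threshold."""
    return polarity if x >= threshold else -polarity

def train_adaboost(xs, ys, rounds):
    """Boost decision stumps on labels in {-1, +1} (toy implementation)."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        # Exhaustively pick the stump with the lowest weighted error.
        best = None
        for t in sorted(set(xs)):
            for pol in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(t, pol, x) != y)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = max(err, 1e-10)                    # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)  # stump weight
        ensemble.append((alpha, t, pol))
        # Re-weight samples: mistakes get heavier, then normalize.
        w = [wi * math.exp(-alpha * y * stump_predict(t, pol, x))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(t, pol, x) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy data: low feature values -> distrust (-1), high -> trust (+1).
model = train_adaboost([1, 2, 3, 4], [-1, -1, 1, 1], rounds=3)
```

A library implementation such as the one the study used would operate on the full multi-feature trust dataset rather than a single feature.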
Article
For robots to seamlessly interact with humans, we first need to make sure that humans and robots understand one another. Diverse algorithms have been developed to enable robots to learn from humans (i.e., transferring information from humans to robots). In parallel, visual, haptic, and auditory communication interfaces have been designed to convey the robot’s internal state to the human (i.e., transferring information from robots to humans). Prior research often separates these two directions of information transfer, and focuses primarily on either learning algorithms or communication interfaces. By contrast, in this survey we take an interdisciplinary approach to identify common themes and emerging trends that close the loop between learning and communication. Specifically, we survey state-of-the-art methods and outcomes for communicating a robot’s learning back to the human teacher during human-robot interaction. This discussion connects human-in-the-loop learning methods and explainable robot learning with multimodal feedback systems and measures of human-robot interaction. We find that—when learning and communication are developed together—the resulting closed-loop system can lead to improved human teaching, increased human trust, and human-robot co-adaptation. The paper includes a perspective on several of the interdisciplinary research themes and open questions that could advance how future robots communicate their learning to everyday operators. Finally, we implement a selection of the reviewed methods in a case study where participants kinesthetically teach a robot arm. This case study documents and tests an integrated approach for learning in ways that can be communicated, conveying this learning across multimodal interfaces, and measuring the resulting changes in human and robot behavior.
Article
In the ever-evolving construction industry, grappling with challenges such as labor shortages and workplace hazards, human-robot collaboration (HRC) has emerged as a transformative solution. However, the industry faces hurdles in comprehending the intricacies of trust dynamics within the domain of HRC, which exert considerable influence on both productivity and safety in the construction sector. To address this issue, the paper proposes machine learning-based models to predict and enhance human trust in construction robots using psychophysiological data. Through a virtual reality bricklaying task across varied construction settings, this study collected psychophysiological data from participants and predicted trust scores. Results indicated that electrodermal activity and skin temperature were two significant standalone variables for trust prediction. With similar R-squared values of 0.98, the XGBoost and random forest models displayed superior predictive accuracy, with minor standard deviations of 0.003 and 0.004, respectively. This study contributes valuable insights into trust dynamics, paving the way for more dependable and secure HRC in construction, optimizing workflows and ensuring industry-wide advancements.
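The R-squared values reported above follow the standard coefficient-of-determination formula, which is easy to reproduce for any regressor:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    ss_tot = sum((yt - mean_y) ** 2 for yt in y_true)
    return 1 - ss_res / ss_tot
```

A perfect fit gives 1.0; predicting the mean everywhere gives 0.0, and values can go negative for fits worse than the mean.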
Article
Although trust plays a vital role in human-robot interaction, there is currently a dearth of literature examining the effect of users' openness personality on trust in actual interaction. This study aims to investigate the interaction effects of users' openness and robot reliability on trust. We designed a voice-based walking task and collected subjective trust ratings, task metrics, eye-tracking data, and fNIRS signals from users with different openness to unravel the psychological intentions, task performance, visual behaviours, and cerebral activations underlying trust. The results showed significant interaction effects. Users with low openness exhibited lower subjective trust, more fixations, and higher activation of the rTPJ in the highly reliable condition than those with high openness. The results suggested that users with low openness might be more cautious and suspicious about the highly reliable robot and allocate more visual attention and neural processing to monitor and infer robot status than users with high openness. PRACTITIONER SUMMARY: The study could deepen practitioners' understanding of the effect of openness on trust in robots by examining the psychological intentions, task performance, visual behaviours, and physiological activations. Moreover, the interaction effect could provide guidelines for designing robots adaptive to users' personalities, and the multimodal method would be practical for measuring trust in interaction.
Article
Smoking remains a worldwide public health issue, with significant consequences for personal well-being and for society as a whole. Traditional approaches to understanding and combatting smoking have their limitations, and in recent years, machine learning has become a viable instrument to tackle this problem. This review article provides a comprehensive overview of predictive modeling to understand and combat smoking using machine learning. We delve into the diverse data sources and preprocessing techniques, feature engineering approaches, and machine learning models employed in the context of smoking prediction. The review categorizes studies into smoking initiation and smoking cessation prediction, shedding light on the methodologies, results, and challenges in each domain. Furthermore, we explore the real-world applications of predictive modeling in smoking control, emphasizing their impact on public health policy and awareness campaigns. Ethical considerations and challenges related to bias, privacy, and model interpretability are also discussed. The paper concludes by suggesting future research directions and emphasizing the crucial role of machine learning in comprehensively addressing the smoking epidemic.
Article
Full-text available
Trust model is a topic that first gained interest in organizational studies and then human factors in automation. Thanks to recent advances in human-robot interaction (HRI) and human-autonomy teaming, human trust in robots has gained growing interest among researchers and practitioners. This article focuses on a survey of computational models of human-robot trust and their applications in robotics and robot controls. The motivation is to provide an overview of the state-of-the-art computational methods to quantify trust so as to provide feedback and situational awareness in HRI. Different from other existing survey papers on human-robot trust models, we seek to provide in-depth coverage of the trust model categorization, formulation, and analysis, with a focus on their utilization in robotics and robot controls. The paper starts with a discussion of the difference between human-robot trust with general agent-agent trust, interpersonal trust, and human trust in automation and machines. A list of impacting factors for human-robot trust and different trust measurement approaches, and their corresponding scales are summarized. We then review existing computational human-robot trust models and discuss the pros and cons of each category of models. These include performance-centric algebraic, time-series, Markov decision process (MDP)/Partially Observable MDP (POMDP)-based, Gaussian-based, and dynamic Bayesian network (DBN)-based trust models. Following the summary of each computational human-robot trust model, we examine its utilization in robot control applications, if any. We also enumerate the main limitations and open questions in this field and discuss potential future research directions.
Chapter
The role of trust in human-robot interaction (HRI) is becoming increasingly important for effective collaboration. Insufficient trust may result in disuse, regardless of the robot's capabilities, whereas excessive trust can lead to safety issues. While most studies of trust in HRI are based on questionnaires, this work explores how participants' trust levels can be recognized based on electroencephalogram (EEG) signals. A social scenario was developed in which the participants played a guessing game with a robot. Data collection was carried out with subsequent statistical analysis and selection of features as input for different machine learning models. Based on the highest achieved accuracy of 72.64%, the findings indicate the existence of a correlation between trust levels and the EEG data, thus offering a promising avenue for real-time trust assessment during interactions and reducing the reliance on retrospective questionnaires.
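EEG features for such classifiers often include spectral band power; a minimal sketch using a direct DFT (illustrative only, not the chapter's exact feature set) is:

```python
import math

def band_power(signal, fs, lo, hi):
    """Power in the DFT bins whose frequency falls within [lo, hi] Hz.

    A direct O(n^2) DFT: fine for short EEG windows, illustrative only.
    """
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                     for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                      for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

# A pure 10 Hz tone sampled at 100 Hz shows up in the alpha band
# (8-13 Hz) and contributes essentially nothing to the beta band.
sig = [math.sin(2 * math.pi * 10 * t / 100) for t in range(100)]
alpha = band_power(sig, fs=100, lo=8, hi=13)
beta = band_power(sig, fs=100, lo=14, hi=30)
```

In practice one would use an FFT (e.g. Welch's method) over sliding windows per electrode, then feed the per-band powers to the classifier.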
Conference Paper
Biosignals are an illustration of various physiological phenomena in the body. Changes in physiological parameters provide essential information about one's internal status that can be analyzed and interpreted for use in different domains. Frequently analyzed biosignals are the electromyogram (EMG), electrocardiogram (ECG), electrooculogram (EOG), and electroencephalogram (EEG). An electromyogram is generated by the action potentials of the body's muscles. The electrocardiogram refers to the biosignal generated by the functioning of the heart. The electrooculogram measures the cornea-retinal standing potential. The electroencephalogram, in turn, reflects brain waves and functions. Integrating signal processing and analysis techniques with interaction design techniques is a promising field that offers a significant area for research and product development. The relevance can be seen in designing interactive devices, where the biosignals obtained from EEG can inform products that follow the emotional expectations of potential consumers. Likewise, integrating EMG and EOG can help develop products and concepts that protect cognitive and physical freedom and ease. ECG has already established a significant place and can be utilized in the future to provide health and fitness devices that ensure better living. In conclusion, integrating biosignal acquisition and analysis techniques with design holds the vision of delivering better devices and products that can enrich people's experience and ensure better physical, cognitive, and emotional well-being. This paper provides insights into such domains and explores possibilities for developing such interactive experiences.
Article
Trust calibration poses a significant challenge in the interaction between drivers and automated vehicles (AVs) in the context of human-automation collaboration. To effectively calibrate trust, it becomes crucial to accurately measure drivers' trust levels in real time, allowing for timely interventions or adjustments in the automated driving. One viable approach involves employing machine learning models and physiological measures to model the dynamic changes in trust. This study introduces a technique that leverages machine learning models to predict drivers' real-time dynamic trust in conditional AVs using physiological measurements. We conducted the study in a driving simulator where participants were requested to take over control from automated driving in three conditions: a control condition, a false alarm condition, and a miss condition. Each condition had eight takeover requests (TORs) in different scenarios. Drivers' physiological measures were recorded during the experiment, including galvanic skin response (GSR), heart rate (HR) indices, and eye-tracking metrics. Among five machine learning models, we found that eXtreme Gradient Boosting (XGBoost) performed best, predicting drivers' trust in real time with an f1-score of 89.1%, compared with 84.5% for a K-nearest neighbor baseline. Our findings have implications for the design of in-vehicle trust monitoring systems that calibrate drivers' trust and facilitate real-time interaction between the driver and the AV.
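The model comparison above hinges on the F1 score; a minimal binary version (the study's exact averaging over trust classes may differ) is:

```python
def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for binary labels in {0, 1}."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Unlike raw accuracy, F1 stays informative when trust/distrust labels are imbalanced, which is common in takeover-request data.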
Article
Full-text available
As Artificial Intelligence (AI) proliferates across various sectors such as healthcare, transportation, energy, and military applications, the collaboration between human-AI teams is becoming increasingly critical. Understanding the interrelationships between system elements - humans and AI - is vital to achieving the best outcomes within individual team members' capabilities. This is also crucial in designing better AI algorithms and finding favored scenarios for joint AI-human missions that capitalize on the unique capabilities of both elements. In this conceptual study, we introduce Intentional Behavioral Synchrony (IBS) as a synchronization mechanism between humans and AI to set up a trusting relationship without compromising mission goals. IBS aims to create a sense of similarity between AI decisions and human expectations, drawing on psychological concepts that can be integrated into AI algorithms. We also discuss the potential of using multimodal fusion to set up a feedback loop between the two partners. Our aim with this work is to start a research trend centered on exploring innovative ways of deploying synchrony between teams of non-human members. Our goal is to foster a better sense of collaboration and trust between humans and AI, resulting in more effective joint missions.
Article
Autonomous systems that can assist humans with increasingly complex tasks are becoming ubiquitous. Moreover, it has been established that a human’s decision to rely on such systems is a function of both their trust in the system and their own self-confidence as it relates to executing the task of interest. Given that both under- and over-reliance on automation can pose significant risks to humans, there is motivation for developing autonomous systems that could appropriately calibrate a human’s trust or self-confidence to achieve proper reliance behavior. In this paper, a computational model of coupled human trust and self-confidence dynamics is proposed. The dynamics are modeled as a partially observable Markov decision process without a reward function (POMDP/R) that leverages behavioral and self-report data as observations for estimation of these cognitive states. The model is trained and validated using data collected from 340 participants. Analysis of the transition probabilities shows that the proposed model captures the probabilistic relationship between trust, self-confidence, and reliance for all discrete combinations of high and low trust and self-confidence. The use of the proposed model to design an optimal policy to facilitate trust and self-confidence calibration is a goal of future work.
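The POMDP/R estimation described above rests on a discrete Bayesian belief update over hidden cognitive states; a minimal sketch with illustrative (not the paper's fitted) matrices:

```python
def bayes_update(belief, transition, likelihood):
    """One predict-then-correct step of a discrete Bayes filter."""
    n = len(belief)
    # Predict: propagate the belief through the transition matrix.
    predicted = [sum(belief[i] * transition[i][j] for i in range(n))
                 for j in range(n)]
    # Correct: weight by the observation likelihood and renormalize.
    posterior = [predicted[j] * likelihood[j] for j in range(n)]
    z = sum(posterior)
    return [p / z for p in posterior]

# Hypothetical states: (low trust, high trust). Observing "relies on
# the automation" is assumed more likely under high trust.
T = [[0.8, 0.2],
     [0.1, 0.9]]
L = [0.2, 0.7]            # P(observation | state), illustrative
belief = bayes_update([0.5, 0.5], T, L)
```

The paper's model tracks the joint of trust and self-confidence, so its state space is the product of both variables, but each observation is folded in with exactly this kind of update.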
Article
Full-text available
Background Human experiences are key considerations in design research and practice. Neuroscience techniques allow quantitative measurement of underlying human neurophysiological responses to design. However, despite the importance of EEG in performing such quantification, design experiments have not widely applied EEG, limiting the insights that design researchers can produce. Thus, this paper describes the use of EEG in experimentation in various design fields and suggests its integration into design research. Methods This study systematically reviewed experimental design research that utilized EEG in various design domains, such as product design or architecture. Twenty-nine papers were selected using the preferred reporting items for systematic reviews and meta-analyses (PRISMA) method. The selected papers were published in peer-reviewed journals between 2012 and 2022, written in English, and were analyzed for their design, variables, EEG tools and indicators, stimuli, experimental settings, analysis methods, and findings. Analysis was applied through a framework, population, intervention, control, outcome, and setting (PICOS) methodology. Results This paper analyzed EEG-based experiments according to PICOS to provide information about how EEG is used in experimental design research, shedding light on the application of EEG methodology in various design fields, including product design, interior (or architecture) design, and service design. The results show that neuroscience techniques can be used to collect brain data for design research. EEG has been used in various experimental design research fields to explore how an individual user reacts to specific design elements and experience. Conclusions Neurophysiological data retrieved from experiments can be used to develop evidence-based design strategies to improve the design process and design decision-making. 
The findings in this study contribute to our understanding of cognitive, emotional, and behavioral responses to design.
Article
Full-text available
Artificial intelligence (AI) refers to technologies which support the execution of tasks normally requiring human intelligence (e.g., visual perception, speech recognition, or decision-making). Examples for AI systems are chatbots, robots, or autonomous vehicles, all of which have become an important phenomenon in the economy and society. Determining which AI system to trust and which not to trust is critical, because such systems carry out tasks autonomously and influence human-decision making. This growing importance of trust in AI systems has paralleled another trend: the increasing understanding that user personality is related to trust, thereby affecting the acceptance and adoption of AI systems. We developed a framework of user personality and trust in AI systems which distinguishes universal personality traits (e.g., Big Five), specific personality traits (e.g., propensity to trust), general behavioral tendencies (e.g., trust in a specific AI system), and specific behaviors (e.g., adherence to the recommendation of an AI system in a decision-making context). Based on this framework, we reviewed the scientific literature. We analyzed N = 58 empirical studies published in various scientific disciplines and developed a “big picture” view, revealing significant relationships between personality traits and trust in AI systems. However, our review also shows several unexplored research areas. In particular, it was found that prescriptive knowledge about how to design trustworthy AI systems as a function of user personality lags far behind descriptive knowledge about the use and trust effects of AI systems. Based on these findings, we discuss possible directions for future research, including adaptive systems as focus of future design science research.
Book
Full-text available
This book provides an overview of recent research developments in the automation and control of robotic systems that collaborate with humans. A measure of human collaboration being necessary for the optimal operation of any robotic system, the contributors exploit a broad selection of such systems to demonstrate the importance of the subject, particularly where the environment is prone to uncertainty or complexity. They show how such human strengths as high-level decision-making, flexibility, and dexterity can be combined with robotic precision, and ability to perform task repetitively or in a dangerous environment. The book focuses on quantitative methods and control design for guaranteed robot performance and balanced human experience. Its contributions develop and expand upon material presented at various international conferences. They are organized into three parts covering: • one-human–one-robot collaboration; • one-human–multiple-robot collaboration; and • human–swarm collaboration. Individual topic areas include resource optimization (human and robotic), safety in collaboration, abstraction of swarm systems to make them suitable for human control, modeling and control of internal force interactions for collaborative manipulation, and the sharing of control between human and automated systems, etc. Control and decision algorithms feature prominently in the text, importantly within the context of human factors and the constraints they impose. Applications such as assistive technology, driverless vehicles, cooperative mobile robots, and swarm robots are considered. Illustrative figures and tables are provided throughout the book. Researchers and students working in controls, and the interaction of humans and robots will learn new methods for human–robot collaboration from this book and will find the cutting edge of the subject described in depth.
Conference Paper
Full-text available
In an increasingly automated world, trust between humans and autonomous systems is critical for successful integration of these systems into our daily lives. In particular, for autonomous systems to work cooperatively with humans, they must be able to sense and respond to the trust of the human. This inherently requires a control-oriented model of dynamic human trust behavior. In this paper, we describe a gray-box modeling approach for a linear third-order model that captures the dynamic variations of human trust in an obstacle detection sensor. The model is parameterized based on data collected from 581 human subjects, and the goodness of fit is approximately 80% for a general population. We also discuss the effect of demographics, such as national culture and gender, on trust behavior by re-parameterizing our model for subpopulations of data. These demographic-based models can be used to help autonomous systems further predict variations in human trust dynamics.
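A linear third-order model of the kind described can be simulated directly; the coefficients below are placeholders, not the parameters identified from the 581-subject data:

```python
def simulate(a, b, u_seq, x0=(0.0, 0.0, 0.0)):
    """Iterate y_{k+1} = a1*y_k + a2*y_{k-1} + a3*y_{k-2} + b*u_k."""
    y2, y1, y0 = x0          # (y_{k-2}, y_{k-1}, y_k)
    out = []
    for u in u_seq:
        y_next = a[0] * y0 + a[1] * y1 + a[2] * y2 + b * u
        y2, y1, y0 = y1, y0, y_next
        out.append(y_next)
    return out

# An impulse in the input (e.g., a sensor-reliability event) produces a
# decaying trust response under these illustrative coefficients.
y = simulate(a=(0.5, 0.3, 0.1), b=1.0, u_seq=[1.0, 0.0])
```

Gray-box identification, as in the paper, would fix this structure and fit the coefficients to the measured trust responses of each subpopulation.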
Article
Full-text available
Human trust in automation plays an important role in successful interactions between humans and machines. To design intelligent machines that can respond to changes in human trust, real-time sensing of trust level is needed. In this paper, we describe an empirical trust sensor model that maps psychophysiological measurements to human trust level. The use of psychophysiological measurements is motivated by their ability to capture a human's response in real time. An exhaustive feature set is considered, and a rigorous statistical approach is used to determine a reduced set of ten features. Multiple classification methods are considered for mapping the reduced feature set to the categorical trust level. The results show that psychophysiological measurements can be used to sense trust in real-time. Moreover, a mean accuracy of 71.57% is achieved using a combination of classifiers to model trust level in each human subject. Future work will consider the effect of human demographics on feature selection and modeling.
Conference Paper
Full-text available
Exchanging text messages via software on smart phones and computers has recently become one of the most popular ways for people to communicate and accomplish their tasks. However, there are negative aspects to using this kind of software, for example, it has been found that people communicating in the text-chat environment may experience a lack of trust and may face different levels of cognitive load [1, 11]. This study examines a novel way to measure interpersonal trust and cognitive load when they overlap with each other in the text-chat environment. We used Galvanic Skin Response (GSR), a physiological measurement, to collect data from twenty-eight subjects at four gradients and overlapping conditions between trust and cognitive load. The findings show that the GSR signals were significantly affected by both trust and cognitive load and provide promising evidence that GSR can be used as a tool for measuring interpersonal trust when cognitive load is low and also for measuring cognitive load when trust is high.
Article
Full-text available
In this paper we review classification algorithms used to design brain–computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.
Article
Full-text available
Human-agent collectives (HAC) offers a new science for exploring the computational and human aspects of society. They are a new class of socio-technical systems in which humans and smart software (agents) engage in flexible relationships in order to achieve both their individual and collective goals. Sometimes the humans take the lead, sometimes the computer does and this relationship can vary dynamically. HACs are fundamentally socio-technical systems. Relationships between users and autonomous software systems will be driven as much by user-focused issues as technical ones. Humans and agents will form short-lived teams in HACs and coordinate their activities to achieve the various individual and joint goals present in the system before disbanding. This will be a continual process as new goals, opportunities and actors arrive. The novel approaches to HAC formation and operation must also address the needs of the humans within the system. Users will have to negotiate with software agents regarding the structure of the coalitions they will collectively form, and then coordinate their activities within the resulting coalition. The ways in which HACs operate requires us to reconsider some of the prevailing assumptions of provenance work. HACs need to understand and respond to the behavior of people and how this human activity is captured, processed, and managed raises significant ethical and privacy concerns.
Article
Full-text available
This article presents a framework of adaptive, measurable decision making for Multiple Attribute Decision Making (MADM) by varying decision factors in their types, numbers, and values. Under this framework, decision making is measured using physiological sensors such as Galvanic Skin Response (GSR) and eye-tracking while users are subjected to varying decision quality and difficulty levels. Following this quantifiable decision making, users are allowed to refine several decision factors in order to make decisions of high quality and with low difficulty levels. A case study of driving route selection is used to set up an experiment to test our hypotheses. In this study, GSR features exhibit the best performance in indexing decision quality. These results can be used to guide the design of intelligent user interfaces for decision-related applications in HCI that can adapt to user behavior and decision-making performance.
Article
Full-text available
Objective: We systematically review recent empirical research on factors that influence trust in automation to present a three-layered trust model that synthesizes existing knowledge. Background: Much of the existing research on factors that guide human-automation interaction is centered around trust, a variable that often determines the willingness of human operators to rely on automation. Studies have utilized a variety of different automated systems in diverse experimental paradigms to identify factors that impact operators’ trust. Method: We performed a systematic review of empirical research on trust in automation from January 2002 to June 2013. Papers were deemed eligible only if they reported the results of a human-subjects experiment in which humans interacted with an automated system in order to achieve a goal. Additionally, a relationship between trust (or a trust-related behavior) and another variable had to be measured. All together, 101 total papers, containing 127 eligible studies, were included in the review. Results: Our analysis revealed three layers of variability in human–automation trust (dispositional trust, situational trust, and learned trust), which we organize into a model. We propose design recommendations for creating trustworthy automation and identify environmental conditions that can affect the strength of the relationship between trust and reliance. Future research directions are also discussed for each layer of trust. Conclusion: Our three-layered trust model provides a new lens for conceptualizing the variability of trust in automation. Its structure can be applied to help guide future research and develop training interventions and design procedures that encourage appropriate trust.
Article
Full-text available
Promise is one of the most powerful tools producing trust and facilitating cooperation, and sticking to a promise is deemed a key social norm in social interactions. The present study explored the extent to which promise would influence investors' decision-making in the trust game, where promise had no predictive value regarding trustees' reciprocation. In addition, we examined the neural underpinnings of the investors' outcome processing related to the trustees' promise keeping and promise breaking. Consistent with our hypothesis, behavioral results indicated that promise could effectively increase the investment frequency of investors. Electrophysiological results showed that promise induced larger differentiated feedback-related negativity (FRN) responses to the reward and non-reward discrepancy. Taken together, these results suggested that promise would promote cooperative behavior, while breach of promise would be regarded as a violation of the social norm, corroborating the vital role of non-enforceable commitment in social decision making.
Article
Full-text available
High cognitive load arises from complex time and safety-critical tasks, for example, mapping out flight paths, monitoring traffic, or even managing nuclear reactors, causing stress, errors, and lowered performance. Over the last five years, our research has focused on using the multimodal interaction paradigm to detect fluctuations in cognitive load in user behavior during system interaction. Cognitive load variations have been found to impact interactive behavior: by monitoring variations in specific modal input features executed in tasks of varying complexity, we gain an understanding of the communicative changes that occur when cognitive load is high. So far, we have identified specific changes in: speech, namely acoustic, prosodic, and linguistic changes; interactive gesture; and digital pen input, both interactive and freeform. As ground-truth measurements, galvanic skin response, subjective, and performance ratings have been used to verify task complexity. The data suggest that it is feasible to use features extracted from behavioral changes in multiple modal inputs as indices of cognitive load. The speech-based indicators of load, based on data collected from user studies in a variety of domains, have shown considerable promise. Scenarios include single-user and team-based tasks; think-aloud and interactive speech; and single-word, reading, and conversational speech, among others. Pen-based cognitive load indices have also been tested with some success, specifically with pen-gesture, handwriting, and freeform pen input, including diagraming. After examining some of the properties of these measurements, we present a multimodal fusion model, which is illustrated with quantitative examples from a case study. The feasibility of employing user input and behavior patterns as indices of cognitive load is supported by experimental evidence. 
Moreover, symptomatic cues of cognitive load derived from user behavior, such as acoustic speech signals, transcribed text, and digital pen trajectories of handwriting and shapes, can be supported by well-established theoretical frameworks, including O'Donnell and Eggemeier's workload measurement [1986], Sweller's Cognitive Load Theory [Chandler and Sweller 1991], and Baddeley's model of modal working memory [1992], as well as McKinstry et al.'s [2008] and Rosenbaum's [2005] action dynamics work. The benefit of using this approach to determine the user's cognitive load in real time is that the data can be collected implicitly, that is, during day-to-day use of intelligent interactive systems; this overcomes problems of intrusiveness and increases applicability in real-world environments, while allowing a dynamic computer interface to adapt information selection and presentation with reference to load.
Article
Full-text available
As automated controllers supplant human intervention in controlling complex systems, the operators' role often changes from that of an active controller to that of a supervisory controller. Acting as supervisors, operators can choose between automatic and manual control. Improperly allocating function between automatic and manual control can have negative consequences for the performance of a system. Previous research suggests that the decision to perform the job manually or automatically depends, in part, upon the trust the operators invest in the automatic controllers. This paper reports an experiment to characterize the changes in operators' trust during an interaction with a semi-automatic pasteurization plant, and investigates the relationship between changes in operators' control strategies and trust. A regression model identifies the causes of changes in trust, and a ‘trust transfer function’ is developed using time series analysis to describe the dynamics of trust. Based on a detailed analysis of operators' strategies in response to system faults, we suggest a model for the choice between manual and automatic control, based on trust in automatic controllers and self-confidence in the ability to control the system manually.
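The dynamics captured by such a trust transfer function can be illustrated with a toy first-order model. The form and coefficients below are illustrative assumptions only, not the time-series model fitted in the cited experiment:

```python
def simulate_trust(performance, faults, a=0.8, b=0.2, c=0.4, t0=0.5):
    """Toy first-order trust dynamics, clipped to [0, 1].

    T[t] = a*T[t-1] + b*performance[t] - c*faults[t]

    Coefficients a, b, c and the clipping are illustrative assumptions;
    the cited paper fits its transfer function to empirical data.
    """
    trust, t = [], t0
    for p, f in zip(performance, faults):
        t = min(1.0, max(0.0, a * t + b * p - c * f))
        trust.append(t)
    return trust

# Trust climbs under good performance, drops sharply after a fault,
# then slowly recovers.
trace = simulate_trust([1.0] * 10, [0, 0, 0, 0, 1, 0, 0, 0, 0, 0])
```

A model of this shape reproduces the qualitative asymmetry often reported: trust is lost quickly after a fault and regained only gradually.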
Article
Full-text available
Automation does not mean humans are replaced; quite the opposite. Increasingly, humans are asked to interact with automation in complex and typically large-scale systems, including aircraft and air traffic control, nuclear power, manufacturing plants, military systems, homes, and hospitals. This is not an easy or error-free task for either the system designer or the human operator/automation supervisor, especially as computer technology becomes ever more sophisticated. This review outlines recent research and challenges in the area, including taxonomies and qualitative models of human-automation interaction; descriptions of automation-related accidents and studies of adaptive automation; and social, political, and ethical issues.
Article
Full-text available
Trust is among the most important factors in human life, as it pervades almost all domains of society. Although behavioral research has revealed a number of insights into the nature of trust, as well as its antecedents and consequences, an increasing number of scholars have begun to investigate the topic from a biological perspective to gain a deeper understanding. These biological investigations into trust have been carried out on three levels of analysis: genes, endocrinology, and the brain. Based on these three levels, we present a review of the literature on the biology of trust. Moreover, we integrate our findings into a conceptual framework which unifies the three levels of analysis, and we also link the biological levels to trust behavior. The results show that trust behavior is at least moderately genetically predetermined. Moreover, trust behavior is associated with specific hormones, in particular oxytocin, as well as specific brain structures, which are located in the basal ganglia, limbic system, and the frontal cortex. Based on these results, we discuss both methodological and thematic implications. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Current inductive machine learning algorithms typically use greedy search with limited lookahead. This prevents them from detecting significant conditional dependencies between the attributes that describe training objects. Instead of myopic impurity functions and lookahead, we propose to use RELIEFF, an extension of RELIEF developed by Kira and Rendell [10, 11], for heuristic guidance of inductive learning algorithms. We have reimplemented Assistant, a system for top-down induction of decision trees, using RELIEFF as an estimator of attributes at each selection step. The algorithm is tested on several artificial and several real-world problems, and the results are compared with some other well-known machine learning algorithms. Excellent results on artificial data sets and two real-world problems show the advantage of the presented approach to inductive learning.
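The core idea can be seen in the basic RELIEF weighting scheme that RELIEFF extends (RELIEFF adds k nearest hits/misses and multi-class handling; this sketch shows only the two-class base algorithm):

```python
import numpy as np

def relief(X, y, passes=1):
    """Basic two-class RELIEF feature weighting (a minimal sketch).

    For each instance, find its nearest neighbor of the same class
    (the "hit") and of the other class (the "miss"). Features that
    differ on the miss but agree on the hit gain weight.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n = len(y)
    w = np.zeros(X.shape[1])
    for _ in range(passes):
        for i in range(n):
            dists = np.abs(X - X[i]).sum(axis=1)  # L1 distances
            dists[i] = np.inf                     # exclude self
            hit = np.argmin(np.where(y == y[i], dists, np.inf))
            miss = np.argmin(np.where(y != y[i], dists, np.inf))
            w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n
    return w

# Feature 0 separates the classes; feature 1 is noise, so feature 0
# should receive the larger weight.
X = np.array([[0.0, 0.3], [0.1, 0.9], [1.0, 0.4], [0.9, 0.8]])
y = np.array([0, 0, 1, 1])
w = relief(X, y)
print(w[0] > w[1])  # True
```

Because the estimate is neighborhood-based rather than per-attribute impurity, it can detect conditionally dependent attributes that greedy impurity measures miss.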
Article
Full-text available
In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a feature subset selection method should consider how the algorithm and the training set interact. We explore the relation between optimal feature subset selection and relevance. Our wrapper method searches for an optimal feature subset tailored to a particular algorithm and a domain. We study the strengths and weaknesses of the wrapper approach and show a series of improved designs. We compare the wrapper approach to induction without feature subset selection and to Relief, a filter approach to feature subset selection. Significant improvement in accuracy is achieved for some datasets for the two families of induction algorithms used: decision trees and Naive-Bayes.
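A minimal wrapper can be sketched as exhaustive search over small subsets, scored by the induction algorithm's own cross-validated accuracy (here a 1-nearest-neighbor classifier with leave-one-out evaluation, chosen only for brevity):

```python
import numpy as np
from itertools import combinations

def loo_accuracy(X, y):
    """Leave-one-out accuracy of a 1-nearest-neighbor classifier."""
    n = len(y)
    correct = 0
    for i in range(n):
        d = np.abs(X - X[i]).sum(axis=1)
        d[i] = np.inf  # exclude the held-out point itself
        correct += y[np.argmin(d)] == y[i]
    return correct / n

def wrapper_select(X, y, max_size=2):
    """Exhaustive wrapper feature selection (a sketch).

    Scores every feature subset up to `max_size` by the learner's own
    cross-validated accuracy, as the wrapper approach prescribes, and
    returns the best subset (ties go to the smallest, earliest subset).
    """
    feats = range(X.shape[1])
    subsets = (c for k in range(1, max_size + 1)
               for c in combinations(feats, k))
    return max(subsets, key=lambda c: loo_accuracy(X[:, list(c)], y))

# Feature 0 separates the classes; features 1 and 2 are noise.
X = np.array([[0.0, 0.5, 0.9],
              [0.1, 0.2, 0.1],
              [1.0, 0.4, 0.8],
              [0.9, 0.1, 0.2]])
y = np.array([0, 0, 1, 1])
print(wrapper_select(X, y))  # (0,)
```

Exhaustive search is exponential in the number of features; the paper's point is that even heuristic wrapper search pays off because the score reflects how the specific learner and training set interact.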
Article
Full-text available
Electrodermal activity is characterized by the superposition of what appear to be single distinct skin conductance responses (SCRs). Classic trough-to-peak analysis of these responses is impeded by their apparent superposition. A deconvolution approach is proposed, which separates SC data into continuous signals of tonic and phasic activity. The resulting phasic activity shows a zero baseline, and overlapping SCRs are represented by predominantly distinct, compact impulses showing an average duration of less than 2 s. A time integration of the continuous measure of phasic activity is proposed as a straightforward indicator of event-related sympathetic activity. The quality and benefit of the proposed measure is demonstrated in an experiment with short interstimulus intervals as well as by means of a simulation study. The advances compared to previous decomposition methods are discussed.
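For contrast with the cited deconvolution method, a crude tonic/phasic split can be written in a few lines. This is NOT the paper's approach; it is a simple moving-minimum baseline, useful only as a point of comparison:

```python
import numpy as np

def tonic_phasic_split(sc, fs, window_s=8.0):
    """Crude tonic/phasic decomposition of a skin-conductance signal.

    The tonic level is estimated as a moving minimum smoothed by a
    moving average; the phasic component is the remainder. Unlike the
    cited deconvolution method, this cannot separate overlapping SCRs.
    """
    w = int(window_s * fs)
    n = len(sc)
    tonic = np.array([sc[max(0, i - w):i + w + 1].min() for i in range(n)])
    tonic = np.convolve(tonic, np.ones(w) / w, mode="same")
    return tonic, sc - tonic

# A flat 2-muS baseline with one brief phasic response.
fs = 4.0
sc = np.full(100, 2.0)
sc[50:53] += 1.0
tonic, phasic = tonic_phasic_split(sc, fs)
```

On this toy signal the tonic estimate stays at the baseline and the phasic trace isolates the response; on real data with superposed SCRs, the deconvolution approach of the paper is needed.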
Article
Full-text available
In this paper we review classification algorithms used to design brain–computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.
Article
Full-text available
To address the neurocognitive mechanisms that underlie choices made after receiving information from an anonymous individual, reaction times (Experiment 1) and event-related brain potentials (Experiment 2) were recorded as participants played three variants of the coin toss game. In this game, participants guess the outcomes of unseen coin tosses after a person in another room (dubbed 'the reporter') observes the coin toss outcomes and then sends reports (which may or may not be truthful) to participants about whether the coins landed on heads or tails. Participants knew that the reporter's interests were aligned with their own (common interests), opposed to their own (conflicting interests) or opposed to their own, but that the reporter was penalized every time he or she sent a false report about the coin toss outcome (penalty for lying). In the common interests and penalty for lying conditions, participants followed the reporter's reports over 90% of the time, in contrast to <59% of the time in the conflicting interests condition. Reaction time results indicated that participants took similar amounts of time to respond in the common interests and penalty for lying conditions and that they were reliably faster than in the conflicting interests condition. Event-related potentials timelocked to the reporter's reports revealed a larger P2, P3 and late positive complex response in the common interests condition than in the other two, suggesting that participants' brains processed the reporter's reports differently in the common interests condition relative to the other two conditions. Results suggest that even when people behave as if they trust information, they consider communicative efforts of individuals whose interests are aligned with their own to be slightly more informative than those of individuals who are made trustworthy by an institution, such as a penalty for lying.
Article
Full-text available
Previous research has indicated that the frequency of skin conductance responses without external stimulation or motor activity is a reliable indicator of psychophysiological states and traits. Some authors have suggested that cognitions elicit nonspecific skin conductance responses. These cognitions may resemble the stimuli that evoke a specific skin conductance response. In a within subjects design (n = 31 graduate students) the onset of nonspecific skin conductance responses triggered a signal for the subject to rate cognitions on several indices. These ratings ("absent" to "fully present") were compared with samples in the absence of phasic electrodermal activity. The subjects' current concerns, negative emotion, subjective arousal, and inner speech were rated to be significantly more intense at the time of nonspecific skin conductance responses compared to electrodermal nonresponding periods. Cognitive processes seem to be concomitants of nonspecific skin conductance responses.
Article
Full-text available
Mental stress testing is used to study the cardiovascular changes caused by psychologic stress. To examine the effects of cardiac drugs on mental stress-induced changes, it is useful to attain a degree of arousal that can be replicated in serial studies. Skin conductance level, a cholinergically mediated index of arousal, was assessed for its stability in serial studies and under conditions of beta-blockade. In normal subjects, skin conductance increased in response to mental stress (p < 0.001) and was stable across three sessions. In patients with mild hypertension, skin conductance was elevated during mental stress during both placebo and nadolol therapy (p < 0.001). As expected, nadolol reduced baseline and stress-induced peak arterial pressure and heart rate but had no significant effect on skin conductance. Thus skin conductance level can serve as a stable and useful index of autonomic arousal in clinical trials, even in patients using beta-blocking medications.
Article
Full-text available
We present an overview of our research into brain-computer interfacing (BCI). This comprises an offline study of the effect of motor imagery on EEG and an online study that uses pattern classifiers incorporating parameter uncertainty and temporal information to discriminate between different cognitive tasks in real-time.
Article
Full-text available
The ability to continuously and unobtrusively monitor levels of task engagement and mental workload in an operational environment could be useful in identifying more accurate and efficient methods for humans to interact with technology. This information could also be used to optimize the design of safer, more efficient work environments that increase motivation and productivity. The present study explored the feasibility of monitoring electroencephalographic (EEG) indices of engagement and workload acquired unobtrusively and quantified during performance of cognitive tests. EEG was acquired from 80 healthy participants with a wireless sensor headset (F3-F4, C3-C4, Cz-POz, F3-Cz, Fz-C3, Fz-POz) during tasks including: multi-level forward/backward digit span, grid recall, trails, mental addition, a 20-min 3-choice vigilance test, and image-learning and memory tests. EEG metrics for engagement and workload were calculated for each 1-s epoch of EEG. Across participants, engagement but not workload decreased over the 20-min vigilance test. Engagement and workload were significantly increased during the encoding period of verbal and image-learning and memory tests when compared with the recognition/recall period. Workload but not engagement increased linearly as the level of difficulty increased in the forward and backward digit span, grid recall, and mental addition tests. EEG measures correlated with both subjective and objective performance metrics. These data, in combination with previous studies, suggest that EEG engagement reflects information gathering, visual processing, and allocation of attention. EEG workload increases with increasing working memory load and during problem solving, integration of information, and analytical reasoning, and may be more reflective of executive functions.
Inspection of EEG on a second-by-second timescale revealed associations between workload and engagement levels when aligned with specific task events providing preliminary evidence that second-by-second classifications reflect parameters of task performance.
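One widely used engagement formulation of this kind is the band-power ratio beta / (alpha + theta) (after Pope et al.); the cited study's proprietary metrics differ, so the sketch below is illustrative only:

```python
import numpy as np

def engagement_index(eeg, fs):
    """EEG engagement index beta / (alpha + theta).

    Band powers are estimated from the FFT power spectrum:
    theta 4-8 Hz, alpha 8-13 Hz, beta 13-30 Hz. This is one common
    formulation, not the cited study's metric.
    """
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2

    def band(lo, hi):
        return power[(freqs >= lo) & (freqs < hi)].sum()

    return band(13, 30) / (band(8, 13) + band(4, 8))

# A beta-dominated signal yields a high index; an alpha-dominated
# (relaxed, eyes-closed-like) signal yields a low one.
t = np.arange(256) / 128.0
high = engagement_index(np.sin(2 * np.pi * 20 * t), 128)
low = engagement_index(np.sin(2 * np.pi * 10 * t), 128)
```

Computing such an index per 1-s epoch gives the kind of second-by-second engagement trace the study aligns with task events.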
Article
Full-text available
The overall aim of this research is to develop an EEG-based computer interface for use by people with severe physical disabilities. The work comprises an `offline' study and an `online' study, the offline study establishing principles of interface design and the online study putting those principles into practice. The work focuses on using EEG signals to drive one-dimensional cursor movements on a computer screen, and our approach is characterised by our emphasis on pattern recognition methods rather than on biofeedback training. Two key technical features further define our approach: firstly, we use dynamic rather than static pattern recognition algorithms and, secondly, we infer not just the parameters of our classifier but also the uncertainty on those parameters. Both of these features result in more robust cursor control. The ultimate aim of this research is to develop an EEG-based computer interface for use by people with severe physical disabilities. This would, ...
Conference Paper
Full-text available
The aim of this paper is to analyse and formalise the dynamics of trust in the light of experiences. A formal framework is introduced for the analysis and specification of models for trust evolution and trust update. Different properties of these models are formally defined. Trust is the attitude an agent has with respect to the dependability/capabilities of some other agent (maybe itself) or with respect to the turn of events. The agent might for example trust that the statements made by another agent are true. The agent might trust the commitment of another agent with respect to a certain (joint) goal. The agent might trust that another agent is capable of performing certain tasks. The agent might trust itself to be able to perform some tasks. The agent might trust that the current state of affairs will lead to a state of affairs that is agreeable to its own intentions, goals, commitments, or desires. In [1], [2] the importance of the notion of trust is shown for agent...
Article
Sequential search methods characterized by a dynamically changing number of features included or eliminated at each step, henceforth “floating” methods, are presented. They are shown to give very good results and to be computationally more effective than the branch and bound method.
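The forward variant of these floating methods can be sketched as follows; the scoring function here is a toy stand-in for whatever subset criterion (e.g. classifier accuracy) is used in practice:

```python
def sffs(features, score, k):
    """Sequential forward floating selection (a minimal sketch).

    Forward step: greedily add the feature that most improves
    score(subset). Floating step: remove any feature (other than the
    one just added) whose exclusion improves the score. Stop once the
    subset reaches size k. `score` maps a set of features to a number;
    higher is better.
    """
    selected = set()
    while len(selected) < k:
        best = max(features - selected, key=lambda f: score(selected | {f}))
        selected.add(best)
        improved = True
        while improved and len(selected) > 1:
            improved = False
            for f in list(selected):
                if f != best and score(selected - {f}) > score(selected):
                    selected.remove(f)  # backtrack: this feature now hurts
                    improved = True
    return selected

# Toy score: reward members of a hidden target set, penalize the rest.
target = {2, 3}
score = lambda s: len(s & target) - 0.5 * len(s - target)
print(sffs({1, 2, 3, 4, 5}, score, 2))  # {2, 3}
```

The floating (conditional removal) step is what distinguishes these methods from plain sequential forward selection: a feature added early can be discarded later once better companions are found, at far lower cost than branch and bound.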
Book
During the past decade there has been an explosion in computation and information technology. With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It should be a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and boosting---the first comprehensive treatment of this topic in any book. This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression & path algorithms for the lasso, non-negative matrix factorization, and spectral clustering. There is also a chapter on methods for "wide" data (p bigger than n), including multiple testing and false discovery rates. Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie co-developed much of the statistical modeling software and environment in R/S-PLUS and invented principal curves and surfaces. Tibshirani proposed the lasso and is co-author of the very successful An Introduction to the Bootstrap. 
Friedman is the co-inventor of many data-mining tools including CART, MARS, projection pursuit and gradient boosting.
Book
Provides easy learning and understanding of DWT from a signal processing point of view. Presents DWT from a digital signal processing point of view, in contrast to the usual mathematical approach, making it highly accessible. Offers comprehensive coverage of related topics, including convolution and correlation, the Fourier transform, FIR filters, and orthogonal and biorthogonal filters. Organized systematically, starting from the fundamentals of signal processing and progressing to the more advanced topics of the DWT and Discrete Wavelet Packet Transform. Written in a clear and concise manner with abundant examples, figures, and detailed explanations. Features a companion website that has several MATLAB programs for the implementation of the DWT with commonly used filters.
Article
This paper describes a discrete wavelet transform-based feature extraction scheme for the classification of EEG signals. In this scheme, the discrete wavelet transform is applied on EEG signals and the relative wavelet energy is calculated in terms of detailed coefficients and the approximation coefficients of the last decomposition level. The extracted relative wavelet energy features are passed to classifiers for the classification purpose. The EEG dataset employed for the validation of the proposed method consisted of two classes: (1) the EEG signals recorded during the complex cognitive task-Raven's advance progressive metric test and (2) the EEG signals recorded in rest condition-eyes open. The performance of four different classifiers was evaluated with four performance measures, i.e., accuracy, sensitivity, specificity and precision values. The accuracy was achieved above 98 % by the support vector machine, multi-layer perceptron and the K-nearest neighbor classifiers with approximation (A4) and detailed coefficients (D4), which represent the frequency range of 0.53-3.06 and 3.06-6.12 Hz, respectively. The findings of this study demonstrated that the proposed feature extraction approach has the potential to classify the EEG signals recorded during a complex cognitive task by achieving a high accuracy rate.
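The relative-wavelet-energy scheme can be sketched compactly. This version uses the Haar wavelet for self-containment rather than the filters of the cited paper, so the sub-band frequency ranges differ from those reported:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the orthonormal Haar DWT: (approximation, detail)."""
    s = np.asarray(signal, dtype=float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    d = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return a, d

def relative_wavelet_energy(signal, level=4):
    """Relative wavelet energy features (a sketch of the cited scheme).

    Decomposes the signal to `level` levels and returns the energy of
    each detail band D1..D_level plus the final approximation A_level,
    normalized by total energy so the features sum to 1.
    """
    a = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(level):
        a, d = haar_dwt(a)
        energies.append(np.sum(d ** 2))  # detail-band energy
    energies.append(np.sum(a ** 2))      # final approximation energy
    energies = np.array(energies)
    return energies / energies.sum()

# The orthonormal transform preserves energy, so the features form a
# distribution over sub-bands (signal length should be divisible by 2**level).
x = np.sin(np.linspace(0, 8 * np.pi, 256))
features = relative_wavelet_energy(x)  # sums to 1, up to floating point
```

The resulting fixed-length, scale-normalized feature vector is what gets passed to the SVM, MLP, or K-nearest-neighbor classifiers in the paper's pipeline.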
Book
The study of event-related potentials (ERPs) -- signal-averaged EEG recordings that are time-locked to perceptual, cognitive, and motor events -- has increased dramatically in recent years, but until now there has been no comprehensive guide to ERP methodology comparable to those available for fMRI techniques. Event-Related Potentials meets the need for a practical and concise handbook of ERP methods that is suitable for both the novice user of an ERP system and a researcher more experienced in cognitive electrophysiology.The chapters in the first section discuss the design of ERP experiments, providing a practical foundation for understanding the design of ERP experiments and interpreting ERP data. Topics covered include quantification of ERP data and theoretical and practical aspects of ANOVAs as applied to ERP datasets. The second section presents a variety of approaches to ERP data analysis and includes chapters on digital filtering, artifact removal, source localization, and wavelet analysis. The chapters in the final section of the book cover the use of ERPs in relation to such specific participant populations as children and neuropsychological patients and the ways in which ERPs can be combined with related methodologies, including intracranial ERPs and hemodynamic imaging.
Article
A survey revealed that researchers still seem to encounter difficulties to cope with outliers. Detecting outliers by determining an interval spanning over the mean plus/minus three standard deviations remains a common practice. However, since both the mean and the standard deviation are particularly sensitive to outliers, this method is problematic. We highlight the disadvantages of this method and present the median absolute deviation, an alternative and more robust measure of dispersion that is easy to implement. We also explain the procedures for calculating this indicator in SPSS and R software.
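The recommended MAD-based detection is short enough to show in full (the 1.4826 consistency constant and the threshold of 3 follow common practice; thresholds of 2, 2.5, or 3 are all used in the literature):

```python
import statistics

def mad_outliers(data, threshold=3.0):
    """Flag outliers using the median absolute deviation (MAD).

    A value x is an outlier if |x - median| / (1.4826 * MAD) exceeds
    `threshold`. The constant 1.4826 makes MAD consistent with the
    standard deviation under normality. Unlike the mean +/- 3 SD rule,
    both the median and MAD are robust to the outliers themselves.
    """
    med = statistics.median(data)
    mad = statistics.median(abs(x - med) for x in data)
    if mad == 0:
        return []  # degenerate case: no dispersion to scale by
    scaled = 1.4826 * mad
    return [x for x in data if abs(x - med) / scaled > threshold]

data = [2.1, 2.3, 2.2, 2.4, 2.2, 9.9]
print(mad_outliers(data))  # [9.9]
```

Note that the mean +/- 3 SD rule applied to the same data can fail to flag 9.9 at all, because the extreme value inflates both the mean and the standard deviation used to judge it.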
Article
A problem in the design of decision aids is how to design them so that decision makers will trust them and therefore use them appropriately. This problem is approached in this paper by taking models of trust between humans as a starting point, and extending these to the human-machine relationship. A definition and model of human-machine trust are proposed, and the dynamics of trust between humans and machines are examined. Based upon this analysis, recommendations are made for calibrating users' trust in decision aids.