Figure - uploaded by Iryna Pentina
Source publication
Purpose
This study aims to investigate the differences in consumers’ perceptions of trust, performance expectancy and intention to hire between human financial advisors with high/low expertise and robo-advisors.
Design/methodology/approach
Three experiments were conducted. The respondents were randomly assigned to human advisors with high/low expe...
Context in source publication
Context 1
... rated Jennifer in the high financial expertise vignette as having higher financial expertise than in the low expertise vignette (M high = 5.11; M low = 3.87) [p < 0.001]. Table 3 provides descriptive summaries for perception ratings across the three stimuli. MANCOVA was used to assess subjects' perceptions of the financial advisors/robo-advisor. ...
Similar publications
Positive emotion regulation (ER) strategies may contribute to the development and maintenance of generalized anxiety disorder (GAD) and depression; nonetheless, the underlying and transdiagnostic mechanisms are still unknown. This study aimed to examine: 1) the mediating role of positive ER strategies in the relationship between ER deficits and experiential avoidan...
Citations
... These societal views create an unfavorable environment for algorithm use. Furthermore, feedback from current or previous users and insights into how algorithmic decisions have impacted their performance play a crucial role in determining an individual's willingness to trust and adopt algorithms [84,85]. ...
... The review also highlights that in risky and volatile DM environments, managers tend to reject algorithms, even when they offer optimal solutions. Studies in areas like high-risk financial advice, medical DM, and demand forecasting show a clear preference for humans over algorithmic advisors due to concerns over uncertain outcomes and their consequences [76,85,93]. ...
... These anthropomorphic features create a sense of social presence, which enhances the perceived relatability and trustworthiness of algorithms, ultimately increasing their acceptance among managers. However, the anthropomorphic design must be user-friendly to obtain the desired effect [85,103,104]. ...
Introduction - Decision making (DM) is a fundamental responsibility for managers, with significant implications for organizational performance and strategic direction. The increasing complexity of modern business environments, along with the recognition of human reasoning limitations related to cognitive and emotional biases, has led to a heightened interest in harnessing emerging technologies like Artificial Intelligence (AI) to enhance DM processes. However, a notable disparity exists between the potential of AI and its actual adoption within organizations, revealing skepticism and practical challenges associated with integrating AI into complex managerial DM scenarios. This systematic literature review aims to address this gap by examining the factors that influence managers' adoption of AI in DM.
Methods - This study adhered to the PRISMA guidelines. Articles from 2010 to 2024 were selected from the Scopus database using specific keywords. Eligible studies were included after rigorous screening and quality assessment using checklist tools.
Results - From 202 articles screened, a data synthesis of 16 eligible studies revealed seven major interconnected factors acting as key facilitators or barriers to AI integration within organizations. These factors - Managers' Perceptions of AI, Ethical Factors, Psychological and Individual Factors, Social and Psychosocial Factors, Organizational Factors, External Factors, and Technical and Design Characteristics of AI - were then organized into a complex analytical framework informed by existing theoretical constructs.
Discussion - This contribution provides valuable insights into how managers perceive and interact with AI systems, as well as the conditions necessary for successful integration into organizational DM processes.
... When algorithms are observed to promote replacing humans rather than merely assisting them, many individuals distance themselves from algorithms. For them, humans are regarded as the dominant entity and are expected to make the ultimate decision (Zhang et al., 2021). For instance, people actively avoid machine choices in highly uncertain environments such as medical decision-making (Dietvorst et al., 2015). ...
The integration of algorithmic decision-making into daily life gives rise to a need to understand public attitudes toward this phenomenon. This study uses online experiments to explore how decision scenarios and roles influence public preferences for algorithms. In-depth interviews were conducted to examine interpretations of algorithmic fairness. The findings indicate a preference for algorithms, yet a stronger preference for human decision-making in ethically complex scenarios. Decision-makers demonstrate greater acceptance of algorithms. Participants perceive algorithmic fairness from social and technical perspectives, emphasizing autonomy and transparency. Despite a general preference for algorithms, concerns persist, revealing a nuanced view of algorithmic fairness as a form of societal power. Keywords: algorithmic decision-making, algorithmic preference, human-machine relations, online experiment, public decision-making. Economics & Politics, 2024, 1-22. wileyonlinelibrary.com/journal/ecpo
... Recent research has primarily focused on assessing their performance and efficiency; risk-adjusted returns, portfolio diversification, and investment outcomes achieved by robo-advisors compared to traditional investment approaches; and effectiveness of different algorithmic trading strategies in terms of market timing, price discovery, and transaction costs (Brenner & Meyll, 2020;D'Acunto et al., 2019;Uhl & Rohner, 2018;Zhang et al., 2021). ...
Finance, as a multidimensional field, is constantly evolving, driven by economic, technological, and regulatory shifts. The dynamic nature of the field of finance research necessitates a continual identification of the latest areas of investigation for researchers to embark upon their future research projects. This study delves into the latest literature to uncover major trends in finance research, presenting a content analysis of 160 influential publications. Utilizing a systematic approach, six broad areas are identified, namely, behavioural finance, FinTech and digital finance, sustainable finance, financial risk management, financial econometrics, and asset pricing and portfolio management. Within these six areas, 14 prominent areas have been recognized, thoroughly discussed, and accompanied by relevant citations. The findings provide researchers with an overview of the current state of finance research and directions for future research. Future research will focus mainly on the application of machine learning, AI tools, big data, and quantum computing techniques for predictive analytics, fraud detection, cryptographic security, financial risk management, and improvement of financial services. Sustainable finance may be combined with ESG issues into FinTech, thereby promoting green investing and reporting. Embedded finance may be developed to incorporate financial services into non-financial platforms. The study contributes to new knowledge creation by identifying the emerging trends of finance research. The study's focus on the most recent financial trends facilitates innovation-driven economic development and societal progress. The study leads to a better understanding of the impact of financial innovations on larger economic systems.
... The latter are measured against their perceived level of performance, the trust in the advisor and the financial institution, as well as the ability to provide accurate investment advice. In contrast, the credibility of robo-advisors is also determined by the level of trust toward the technology itself (Johnson and Grayson, 2005;Zhang et al., 2021). Yet, for the customer segment aged 18-35, the technology's data-driven nature has been found to favor the credibility of robo-advisors. ...
... Yet, for the customer segment aged 18-35, the technology's data-driven nature has been found to favor the credibility of robo-advisors. Older customer segments prefer the apparent interpersonal cues of the human financial advisor (Chua et al., 2023;Wu and Wang, 2011;Zhang et al., 2021). ...
The mini review assesses the value propositions of robo-advisors through the lens of behavioral finance. Despite their promise of data-driven, rational investment strategies, robo-advisors may not fully replicate the personalized service of human financial advisors or eliminate human biases in decision-making. A content analysis of 80 peer-reviewed articles and publications was conducted, focusing on the intersection of financial technology and behavioral finance. Literature was retrieved using The Chicago School University Library's OneSearch and the EBSCOhost database, with key terms including “robo-advisor,” “investment behavior,” “risk tolerance,” “financial literacy,” and “affective trust.” The review identifies four key limitations of robo-advisors: (1) their inability to replicate the service-relationship of human advisors; (2) the presence of human bias in supposedly rational algorithms; (3) the inability to minimize market risk; and (4) their limited impact on improving users' financial literacy. Instead, robo-advisors temporarily compensate for a lack of financial knowledge through passive investment strategies. The findings suggest that integrating behavioral finance principles could enhance the predictive power of robo-advisors, though this would introduce additional complexities. The review calls for further research and regulatory measures to ensure that these technologies prioritize investor protection and financial literacy as they continue to evolve.
... This capability is crucially aligned with the increasing focus on simulating Affective Empathy in human-AI interactions (Welivita, Xie, and Pu 2021). In light of this, there is growing research interest in studying affective aspects of trust in AI (Glikson and Woolley 2020;Granatyr et al. 2017;Kyung and Kwon 2022;Zhang, Pentina, and Fan 2021;Guerdan, Raymond, and Gunes 2021). However, a critical gap exists in the lack of generalizable and accurate specialized measurement tools for assessing affective trust in the context of AI, especially with the enhanced and nuanced capabilities of LLMs. ...
... There is growing research interest in exploring the role of affective trust in the use of AI technologies. A few recent works have highlighted that affect-based trust plays a decisive role in people's acceptance of AI-based technology in preventative health interventions (Kyung and Kwon 2022) and financial services robo-advising (Zhang, Pentina, and Fan 2021). Research in explainable AI (XAI) has also shown that people's affective responses to explanations are crucial in improving personalization and increasing trust in AI systems (Guerdan, Raymond, and Gunes 2021). ...
Trust is not just a cognitive issue but also an emotional one, yet the research in human-AI interactions has primarily focused on the cognitive route of trust development. Recent work has highlighted the importance of studying affective trust towards AI, especially in the context of emerging human-like LLM-powered conversational agents. However, there is a lack of validated and generalizable measures for the two-dimensional construct of trust in AI agents. To address this gap, we developed and validated a set of 27-item semantic differential scales for affective and cognitive trust through a scenario-based survey study. We then further validated and applied the scale through an experiment study. Our empirical findings showed how the emotional and cognitive aspects of trust interact with each other and collectively shape a person's overall trust in AI agents. Our study methodology and findings also provide insights into the capability of state-of-the-art LLMs to foster trust through different routes.
... Some individuals are naturally hostile to algorithmic decisions (Kawaguchi 2021;Önkal et al. 2009;Prahl and Van Swol 2017). Individual causes of this aversion include a lack of trust in algorithms (Zhang et al. 2021); the perception that algorithms are less competent and empathetic than humans when it comes to informing and making decisions (Luo et al. 2019); an egocentric bias that makes individuals prefer their own decisions over not only those of other humans but also those of algorithms (Sutherland et al. 2016); feeling responsible for the consequences of a decision (van Dongen and van Maanen 2013); or the motivation to follow algorithmic decisions (Mahmud et al. 2022, p. 11). ...
This study investigates the factors influencing the aversion of Swiss HRM departments to algorithmic decision-making in the hiring process. Based on a survey of 324 private and public HR professionals, it explores how privacy concerns, general attitude toward AI, perceived threat, personal development concerns, and personal well-being concerns, as well as control variables such as gender, age, time with organization, and hierarchical position, influence their algorithmic aversion. Its aim is to understand the algorithmic aversion of HR employees in the private and public sectors. The following article is based on three PLS-SEM structural equation models. Its main findings are that privacy concerns are generally important in explaining aversion to algorithmic decision-making in the hiring process, especially in the private sector. Positive and negative general attitudes toward AI are also very important, especially in the public sector. Perceived threat also has a positive impact on algorithmic aversion among private and public sector respondents. While personal development concerns explain algorithmic aversion in general, they are most important for public actors. Finally, personal well-being concerns explain algorithmic aversion in both the private and public sectors, but more so in the latter, while our control variables were never statistically significant. That said, this article makes a significant contribution to explaining the causes of the aversion of HR departments to recruitment decision-making algorithms. This can enable practitioners to anticipate these various points in order to minimize the reluctance of HR professionals when considering the implementation of this type of tool.
... Prior literature suggests that task-related factors influence these preferences. For instance, consumers prefer robots for simple, objective, and standard tasks (Castelo, Bos, and Lehmann 2019;Xu et al. 2020), while they favor human agents for complex, subjective, and personalized tasks (Longoni, Bonezzi, and Morewedge 2019;Zhang, Pentina, and Fan 2021). The literature also largely attributes consumers' preferences to their perceptions of the functional capabilities of service robots. ...
... Service robots provide distinct interactive experiences, which shape consumers' expected and actual psychological experiences differently compared to human service agents (Jago and Laurin 2022;Zhang, Pentina, and Fan 2021). High and low SES consumers have different concerns and aspirations in social interactions . ...
... For instance, consumers favor service robots for tasks that are simple, objective, and do not require personalization (Castelo, Bos, and Lehmann 2019;Hayashi et al. 2011;Xu et al. 2020). Conversely, consumers prefer human agents for tasks that are complex, involve subjective judgment, and require attention to an individual's unique characteristics (Granulo, Fuchs, and Puntoni 2021;Longoni, Bonezzi, and Morewedge 2019;Zhang, Pentina, and Fan 2021). ...
Service industries are increasingly utilizing service robots to substitute or collaborate with human service providers. Extant literature has mainly focused on the usability of service robots and found that consumers with high socioeconomic status (SES) have an advantage in adopting new technology, given their high educational level and abundant resources. However, little research has paid attention to the psychological preference of low SES consumers when facing the choice of service robots and human service agents. This research investigates how consumers' SES influences their concerns and expectations when facing interpersonal interactions in services and, in turn, affects their preferences for service agents (robot vs. human). Across four studies, we found that low SES consumers are more concerned about being evaluated by human service agents in luxury shopping contexts, leading to the preference for interacting with service robots. In contrast, high SES consumers display a higher expectation of receiving preferential treatment from human service agents, but it does not increase high SES consumers' preference for human service agents over service robots. Furthermore, we found that varying the service environment (i.e., a store located in a neighborhood that matches low SES consumers' status) attenuated low SES consumers' preference for service robots. This research offers novel insights for marketers' use of service robots to promote consumer experience and well‐being.
... The vast majority of the 128 most cited articles include a single study (N = 108, 84.4%) with a one-time measurement of trust in AI (N = 88, 68.6%). Exceptions include the work by Zhang et al. (2021) who conducted three experiments to compare perceptions of human vs. robo-advisors in the context of financial services, or the work by Buçinca et al. (2020), who conducted three experiments to study the misleading nature of proxy tasks in evaluating explainable AI systems. Furthermore, quite a few studies develop their own questionnaire items (N = 35, 27.3%). ...
Trust is widely regarded as a critical component to building artificial intelligence (AI) systems that people will use and safely rely upon. As research in this area continues to evolve, it becomes imperative that the research community synchronizes its empirical efforts and aligns on the path toward effective knowledge creation. To lay the groundwork toward achieving this objective, we performed a comprehensive bibliometric analysis, supplemented with a qualitative content analysis of over two decades of empirical research measuring trust in AI, comprising 1’156 core articles and 36’306 cited articles across multiple disciplines. Our analysis reveals several “elephants in the room” pertaining to missing perspectives in global discussions on trust in AI, a lack of contextualized theoretical models and a reliance on exploratory methodologies. We highlight strategies for the empirical research community that are aimed at fostering an in-depth understanding of trust in AI.
... Or consider, again, the spread of automated advice in the financial field - namely the figure of the so-called robo-advisor, a kind of intelligent financial consultant supporting the personalization of services and the management of financial technology (FinTech) - which led IOSCO (International Organization of Securities Commissions, 2017) to propose the development of dedicated Guidelines for mapping the risks associated with the spread of such tools (Di Porto, 2017). In any case, robo-advisory - web-based automated investment advice ( transparency and a general inability or reluctance to engage with investment matters (Cheng, Guo, Chen et al., 2019;Jung, Dorner, Weinhardt et al., 2018;Morana, Gnewuch, Jung et al., 2020;Oehler, Horn & Wendt, 2022;Tokic, 2018;Zhang, Pentina, & Fan, 2021). ...
The "Big Data Governance and Legal Aspects" booklet delves into the governance challenges and legal implications surrounding Big Data in an era marked by the rapid evolution of Emerging and Disruptive Technologies (EDTs). It highlights the essential processes for managing data throughout its lifecycle, emphasizing collection, storage, and analysis while ensuring data security and ethical usage. The text also navigates the balance between privacy and national security, exploring the necessity for ethical and legal frameworks that can address these evolving threats.
The publication investigates the dual potential of Big Data: maximizing value for national security and minimizing privacy risks. It discusses the complexity of reconciling privacy with national security, particularly in the context of CBRNe threats. The book includes a comprehensive examination of Open Source Intelligence (OSINT) methods and the deployment of demonstrators to monitor global asymmetric threats. By analyzing regulatory landscapes and presenting case studies, it offers an integrated approach to understanding Big Data's role in contemporary security and defense, providing a valuable resource for policy makers, researchers, and security professionals.
... Their concerns about the process and outcomes of automated AI-enabled advisory services could also be intensified (Huang and Rust 2021), making technology anxiety emerge as an important influence that could restrict trust and intention to use. Furthermore, as clients typically prefer human financial advisors with higher perceived proficiency to robo-advisors (Jung et al. 2018;Zhang, Pentina, and Fan 2021), their satisfaction with current financial advisory services (i.e., with human financial advisors) is relevant when considering robo-advisors adoption as an added investment method. Prior studies investigate satisfaction as clients' evaluation of robo-advisors (e.g., Bai 2024). ...
... Clients' satisfaction with current financial advisory services or current investment methods (e.g., human financial advisors), that is, satisfaction with the financial performance, customer service, and security, directly impedes them from using robo-advisors. Such clients may prefer proficient human advisors to robo-advisors (Zhang, Pentina, and Fan 2021) or become less likely to shift to new advisors (Hsu 2014). Nonetheless, these clients could have the intention to use robo-advisors when they gain trust in the robo-advisors' competence. ...
... We demonstrate that clients' level of satisfaction with their current (non-robo-advisors) services received is significant when they consider using robo-advisors as an added investment method. Our results support prior research (e.g., Jung et al. 2018;Zhang, Pentina, and Fan 2021), in that satisfaction with the current services directly lessens the intention to use robo-advisors. ...
The COVID-19 pandemic accelerated clients' adoption of robo-advisors as a digital financial advisory platform in Thailand. Drawing upon relationship marketing theory, this study examines the influence of trust, technology anxiety, and satisfaction with current financial advisory services on intention to use. It also explores the moderating effects of switching costs and the attractiveness of alternatives. Utilizing data from 401 Thai investors, structural equation modelling (SEM) is employed for data analysis. Results indicate that trust not only directly influences the intention to use robo-advisors but also mediates the influence of technology anxiety and current service satisfaction on usage intention. Attractiveness of alternatives strengthens the effect of trust while switching costs do not show a significant impact. This study broadens the services marketing and relationship marketing literature, highlighting a key mediating role of trust in navigating robo-advisors usage during and after the pandemic.