Thesis (PDF available)

La influencia de los algoritmos en las decisiones y juicios humanos. Experimentos en contextos de política, citas y arte [The influence of algorithms on human decisions and judgments: Experiments in the contexts of politics, dating, and art]


Abstract

Artificial intelligence is already part of our everyday lives, and we are often unaware of it. Artificial-intelligence algorithms recommend which book to read, which products to buy, which new series to watch, where to stay or eat, and whom to date. Their widespread reach has sparked a debate about the extent to which their presence may be influencing our decisions. So far, however, this debate has not been widely reflected in empirical research. In this work we therefore test whether algorithms can influence decisions through different types of recommendations (explicit and covert), in decision contexts that matter to people, such as politics and romantic dating. In addition, we explore how people judge the performance of algorithms in a domain where interaction with them is less common: the field of art. Our results, across nine experiments, show that the mere recommendation of a supposed algorithm can influence human decisions, and that the performance of artificial intelligence in the artistic domain is undervalued when the audience knows its authorship. A better understanding of how human judgments and decisions are affected by interaction with algorithmic systems is essential to avoid underestimating the effect of algorithmic recommendation and the presence of algorithms in our lives.
Article (full-text available)
Since 2016, when the Facebook/Cambridge Analytica scandal began to emerge, public concern has grown around the threat of “online manipulation”. While these worries are familiar to privacy researchers, this paper aims to make them more salient to policymakers—first, by defining “online manipulation”, thus enabling identification of manipulative practices; and second, by drawing attention to the specific harms online manipulation threatens. We argue that online manipulation is the use of information technology to covertly influence another person’s decision-making, by targeting and exploiting their decision-making vulnerabilities. Engaging in such practices can harm individuals by diminishing their economic interests, but its deeper, more insidious harm is its challenge to individual autonomy. We explore this autonomy harm, emphasising its implications for both individuals and society, and we briefly outline some strategies for combating online manipulation and strengthening autonomy in an increasingly digital world.
Article (full-text available)
Owing to advancements in artificial intelligence (AI) and specifically in machine learning, information technology (IT) systems can support humans in an increasing number of tasks. Yet, previous research indicates that people often prefer human support to support by an IT system, even if the latter provides superior performance – a phenomenon called algorithm aversion. A possible cause of algorithm aversion put forward in literature is that users lose trust in IT systems they become familiar with and perceive to err, for example, making forecasts that turn out to deviate from the actual value. Therefore, this paper evaluates the effectiveness of demonstrating an AI-based system’s ability to learn as a potential countermeasure against algorithm aversion in an incentive-compatible online experiment. The experiment reveals how the nature of an erring advisor (i.e., human vs. algorithmic), its familiarity to the user (i.e., unfamiliar vs. familiar), and its ability to learn (i.e., non-learning vs. learning) influence a decision maker’s reliance on the advisor’s judgement for an objective and non-personal decision task. The results reveal no difference in the reliance on unfamiliar human and algorithmic advisors, but differences in the reliance on familiar human and algorithmic advisors that err. Demonstrating an advisor’s ability to learn, however, offsets the effect of familiarity. Therefore, this study contributes to an enhanced understanding of algorithm aversion and is one of the first to examine how users perceive whether an IT system is able to learn. The findings provide theoretical and practical implications for the employment and design of AI-based systems.
Article (full-text available)
Rapid development and adoption of applications of AI, machine learning, and natural language processing challenge managers and policy-makers to harness these transformative technologies. In this context, we provide evidence of a novel word-of-machine effect, the phenomenon by which utilitarian/hedonic attribute trade-offs determine preference for, or resistance to, AI-based recommendations compared to traditional word-of-mouth, or human-based, recommendations. The word-of-machine effect stems from a lay belief that AI recommenders are more competent than human recommenders in the utilitarian realm, and less competent than human recommenders in the hedonic realm. As a consequence, the importance or salience of utilitarian attributes determines preference for AI recommenders over human ones, and the importance or salience of hedonic attributes determines resistance to AI recommenders over human ones (studies 1-4). The word-of-machine effect is robust to attribute complexity, number of options considered, and transaction costs. The word-of-machine effect reverses for utilitarian goals if a recommendation needs matching to a person’s unique preferences (study 5), and is eliminated in the case of human-AI hybrid decision making (i.e., augmented rather than artificial intelligence; study 6). An intervention based on the consider-the-opposite protocol attenuates the word-of-machine effect (studies 7A-7B).
Article
De Neys (this issue) argues that the debate between single- and dual-process theorists of thought has become both empirically intractable and scientifically inconsequential. I argue that this is true only under the traditional framing of the debate: when single- and dual-process theories are understood as claims about whether thought processes share the same defining properties (e.g., making mathematical judgments) or have two different defining properties (e.g., making mathematical judgments autonomously versus via access to a central working memory capacity), respectively. But if single- and dual-process theories are understood in cognitive modeling terms as claims about whether thought processes function to implement one or two broad types of algorithms, respectively, then the debate becomes scientifically consequential and, presumably, empirically tractable. So, I argue, the correct response to the current state of the debate is not to abandon it, as De Neys suggests, but to reframe it as a debate about cognitive models.
Article
Popular dual-process models of thinking have long conceived intuition and deliberation as two qualitatively different processes. Single-process-model proponents claim that the difference is a matter of degree and not of kind. Psychologists have been debating the dual-process/single-process question for at least 30 years. In the present article, I argue that it is time to leave the debate behind. I present a critical evaluation of the key arguments and critiques and show that—contra both dual- and single-model proponents—there is currently no good evidence that allows one to decide the debate. Moreover, I clarify that even if the debate were to be solved, it would be irrelevant for psychologists because it does not advance the understanding of the processing mechanisms underlying human thinking.
Article
Advances in personalization algorithms and other applications of machine learning have vastly enhanced the ease and convenience of our media and communication experiences, but they have also raised significant concerns about privacy, transparency of technologies and human control over their operations. Going forth, reconciling such tensions between machine agency and human agency will be important in the era of artificial intelligence (AI), as machines get more agentic and media experiences become increasingly determined by algorithms. Theory and research should be geared toward a deeper understanding of the human experience of algorithms in general and the psychology of Human–AI interaction (HAII) in particular. This article proposes some directions by applying the dual-process framework of the Theory of Interactive Media Effects (TIME) for studying the symbolic and enabling effects of the affordances of AI-driven media on user perceptions and experiences.