Conference Paper

Exploring Mental Models for Transparent and Controllable Recommender Systems: A Qualitative Study

Authors: Ngo, Kunkel, and Ziegler

Abstract

While online content is personalized to an increasing degree, e.g., using recommender systems (RS), the rationale behind personalization and how users can adjust it typically remains opaque. This has often been observed to have negative effects on the user experience and the perceived quality of RS. As a result, research has increasingly taken user-centric aspects such as transparency and control of an RS into account when assessing its quality. However, we argue that too little of this research has investigated the users' perception and understanding of RS in their entirety. In this paper, we explore the users' mental models of RS. More specifically, we followed the qualitative grounded theory methodology and conducted 10 semi-structured face-to-face interviews with typical and regular Netflix users. During the interviews, participants expressed high levels of uncertainty and confusion about the RS in Netflix. Consequently, we found a broad range of different mental models. Nevertheless, we also identified a general structure underlying all of these models, consisting of four steps: data acquisition, inference of user profile, comparison of user profiles or items, and generation of recommendations. Based on our findings, we discuss implications for designing more transparent, controllable, and user-friendly RS in the future.
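To make the reported four-step structure concrete, here is a minimal, illustrative sketch (not the paper's implementation): every data structure and the genre-overlap heuristic are assumptions chosen only to show how data acquisition, profile inference, comparison, and recommendation generation could fit together.

```python
# Hypothetical sketch of the four-step general model described in the abstract.
from collections import Counter

def acquire_data(watch_events):
    """Step 1 - data acquisition: collect implicit feedback, e.g. watched titles."""
    return [e["title"] for e in watch_events]

def infer_profile(watched, catalog):
    """Step 2 - inference of a user profile: summarize the user as genre counts."""
    return Counter(g for title in watched for g in catalog[title])

def compare(profile, catalog, watched):
    """Step 3 - comparison: score unseen items by overlap with the profile."""
    return {title: sum(profile.get(g, 0) for g in genres)
            for title, genres in catalog.items() if title not in watched}

def recommend(scores, k=2):
    """Step 4 - generation of recommendations: return the top-k scored items."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

catalog = {"A": ["drama"], "B": ["drama", "crime"], "C": ["comedy"], "D": ["crime"]}
watched = acquire_data([{"title": "A"}, {"title": "B"}])
profile = infer_profile(watched, catalog)
print(recommend(compare(profile, catalog, watched)))  # -> ['D', 'C']
```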


... This understanding is particularly important due to past research showing that moral tensions exist: users feel a trade-off between the benefits of personalisation and their own data disclosure impacting their privacy [47], or report perceived "creepiness" resulting from overly accurate recommendations [44]. Other research has explored users' understanding of the operation of such systems [30,31], and how it links to subsequent attitudes to systems [17]. Thus, it is well-reported that engaging with systems that rely heavily on our personal data can result in a behaviour-intention rift, as users trade suboptimal data sharing for immediacy and ease, conflicting with reported preferences. ...
... Ngo et al. aimed to learn about users' understanding of recommender systems through studying participants' mental models of how these systems worked. They found that even users with little technical knowledge were able to construct a four-step process required for recommender systems to operate, consisting of data acquisition, inference of a user profile, comparison of items or users, and generation of recommendations [31]. In another study, Ngo et al. compared lay users' meaning-making of the mechanisms and elements of algorithmic systems to that of experts. ...
... Paradoxically, feeling overwhelmed can diminish users' perceived control and understanding of the system, resulting in less agency and transparency [19]. In line with this, it is also important to understand that transparency and agency are interdependent: A lack of transparency can lead to a "gulf of execution" where participants are not aware of their possibilities to interact with the recommendation system and assert their agency [31]. ...
Conference Paper
Full-text available
Providing personalised recommendations has become standard practice across social and streaming services, online news aggregators, and various other media platforms. While success metrics usually paint a picture of user satisfaction and steer development towards further personalisation, these do not directly articulate users’ experiences of and opinions towards personalised content. This paper presents a mixed-methods investigation into the benefits, harms, and comfort levels regarding personalised media perceived by 211 people in the UK. Overall, participants believe that the benefits of personalisation outweigh the harms. However, they reveal conflicted feelings in relation to their comfort levels. Participants advocated for more agency, and provider transparency, including around data collection and handling. Given the high likelihood of accelerating media personalisation, we conclude that it is imperative to emphasise user-centric design in personalisation development and provide a set of design recommendations.
... Jin [78] conducted a critical examination of plausibility as a common XAI criterion and emphasized the need for explainability-specific evaluation objectives in XAI. Schoonderwoerd et al. [30] examined a case study on a human-centered design approach for AI-generated explanations in clinical decision support systems and developed design patterns for explanations. Weitz et al. [76] investigated end-users' preferences for explanation styles and content for stress monitoring in mobile health apps and created user personas to guide human-centered XAI design. ...
... Most studies [15,27,28,29,30,32,[33][34][35][36][37][38][39][40][41]42,44,45,49,50,53,[54][55][56][58][59][60][66][67][68][69][70][71]76,77,82,83,84] in our sample are driven by the research on explanation models and their representation. The objective is to comprehend what constitutes a high-quality explanation that increases human cognition and decision-making. ...
... Explainable recommender systems have also been studied with regard to media contexts, including news, movies, music, books, gaming, and art [30,98,99], [73,100,101], [31,39,53], [29,56,71]. These systems have much in common with product recommendation systems, yet there is a significant difference. ...
Preprint
Full-text available
Recent advances in technology have propelled Artificial Intelligence (AI) into a crucial role in everyday life, enhancing human performance through sophisticated models and algorithms. However, the focus on predictive accuracy has often resulted in opaque, black box models that lack transparency in decision-making. To address this issue, significant efforts have been made to develop explainable AI (XAI) systems that make outcomes comprehensible to users. Various approaches, including new concepts, models, and user interfaces, aim to improve explainability, build user trust, enhance satisfaction, and increase task performance. Evaluation research has emerged to define and measure the quality of these explanations, differentiating between formal evaluation methods and empirical approaches that utilize techniques from psychology and human-computer interaction. Despite the importance of empirical studies, evaluations remain underutilized, with literature reviews indicating a lack of rigorous evaluations from the user perspective. This review aims to guide researchers and practitioners in conducting effective empirical user-centered evaluations by analyzing several studies, categorizing their objectives, scope, and evaluation metrics, and offering an orientation map for research design and metric measurement.
... To facilitate such effective contribution, designers require resources that aid in shaping human-AI interactions [15]. Currently, however, designers feel that research is lacking in volume, in accessibility, and in transferability to their projects at hand [15], despite the ongoing efforts of the pioneer researchers in this domain (see, for instance, [13,[16][17][18][19][20][21]). ...
... In other works, these interactions have been dubbed 'user control mechanisms' [11,16,19,37]. It is exactly because of these algorithmic affordances that AI and humans meet, and it is through these interactions that the user's experience of qualities such as transparency, control and trust instantiate [10,20,[38][39][40][41]. ...
... Among other aspects, in these smaller studies we were interested in the formation and development of users' mental model of the AI. We consider a mental model to be a subjective representation of the algorithm, including how it reasons, what its sources are and how it weighs those sources [20,45]. Mental models are relevant for the user's interaction with an AI system, since a user's trust and willingness to comply with its outcomes will generally be higher when the outcomes are consistent with the user's mental model [45,46]. ...
Chapter
In this paper, we argue that the creation of Responsible AI over the past four decades has predominantly relied on two approaches: contextual and technical. While both are indispensable, we contend that a third, equally vital approach, focusing on human-AI interaction design, has been relatively neglected, despite the ongoing efforts of pioneers in the field. Through the presentation of four case studies of real-world AI systems, we illustrate, however, how small design choices impact an AI platform's responsibleness independent of the technical or contextual level. We advocate, therefore, for a larger role for design in the creation of Responsible AI, and we call for increased research efforts in this area to advance our understanding of human-AI interaction design in Responsible AI. This includes both smaller case studies, such as those presented in this paper, that replicate or swiftly test earlier findings in different contexts, as well as larger, more comprehensive studies that lay the foundations for a framework.
... She further observed that all mental models were either visualized as a decision tree, a network, or a storm. The work by Ngo et al. [31] demonstrated that people hold very diverse mental models of recommender systems, which adhere to a basic structure (labeled as general model). Their participants also expressed high amounts of uncertainty and confusion about the inner working of a recommender system which may explain the wide diversity of the mental models. ...
... The aim of this study was to find out to what extent users are aware of how intelligent voice assistant systems work (RQ1). Similar to Ngo et al. [31], we found a broad range of diverse mental models, which followed a basic structure of input - processing - output. A large proportion of our participants showed a very simplistic mental model which we categorized as basic awareness. ...
... This could also be evaluated as a sign of uncertainty and confusion about the inner working of a voice assistant. In previous research, this was expressed by users of a recommender system which also displayed diverse mental models and agreed only on a very basic underlying structure [31]. However, as argued by DeVito [9,11], there are interesting differences in the low-level and rather "abstract" understandings. ...
... Recommender systems are known to perpetuate various types of algorithmic harms and effects. While a number of automatic methods to quantify and mitigate such biases have been proposed, studies [4,15,21,22] indicate that their impacts can vary significantly among different users. For instance, in career recommendations [21], users tend not to perceive gender stereotypes as harmful as long as recommendations are effective for them. ...
... For instance, in career recommendations [21], users tend not to perceive gender stereotypes as harmful as long as recommendations are effective for them. [15] has shown that users develop their own mental models very differently based on experiences and background, resulting in different perception of algorithmic harms. ...
Preprint
Full-text available
Recommender systems have become integral to digital experiences, shaping user interactions and preferences across various platforms. Despite their widespread use, these systems often suffer from algorithmic biases that can lead to unfair and unsatisfactory user experiences. This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems. By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration, stereotypes, and filter bubbles affect their recommendations. Informed by in-depth user interviews, this tool benefits both general users and researchers by increasing transparency and offering personalized impact assessments, ultimately fostering a better understanding of algorithmic biases and contributing to more equitable recommendation outcomes. This work provides valuable insights for future research and practical applications in mitigating bias and enhancing fairness in machine learning algorithms.
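The tool's abstract mentions miscalibration as one of the explored harms; a small, hypothetical sketch of such a measure is shown below. It compares the genre distribution of a user's history with that of their recommendations via a smoothed KL divergence; the data and the exact formula are assumptions, not the tool's actual method.

```python
# Toy miscalibration measure: divergence between history and recommendation genres.
from collections import Counter
import math

def genre_distribution(items):
    counts = Counter(g for item in items for g in item["genres"])
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def miscalibration(history, recommendations, eps=1e-6):
    """Smoothed KL divergence; larger values mean recommendations drift from history."""
    p = genre_distribution(history)          # what the user actually consumed
    q = genre_distribution(recommendations)  # what the system recommends
    return sum(p[g] * math.log(p[g] / (q.get(g, 0.0) + eps)) for g in p)

history = [{"genres": ["drama"]}, {"genres": ["drama", "romance"]}, {"genres": ["comedy"]}]
recs = [{"genres": ["action"]}, {"genres": ["action", "thriller"]}, {"genres": ["drama"]}]
print(round(miscalibration(history, recs), 3))
```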
... When the focus expanded and also started including qualities such as novelty, serendipity, control, autonomy and relatedness, this inspired a host of studies that evaluated the dynamics of the interaction between user and algorithm. Studies into user control mechanisms [14][15][16][17][18][19] or algorithmic affordances [1,2,20,21] examined how different affordances impacted various interaction qualities or experiential goals (the terms are used interchangeably) [22,23]. In those studies, a recommender system is no longer framed as a monologue, where the source of knowledge just presents the list of recommendations and the user can choose to consider or ignore that list. ...
... The second paper, by Smits et al. [4] tackles a second fundamental challenge. It has been overwhelmingly demonstrated that a user's experience with a recommender system is significantly shaped by the dynamics of user-algorithm interaction [5,17,[36][37][38]. In order to build a systematic pattern library, it is, therefore, necessary to also find means to systematically evaluate the impact of algorithmic affordances on these experiences. ...
Chapter
Algorithmic affordances are defined as user interaction mechanisms that allow users tangible control over AI algorithms, such as recommender systems. Designing such algorithmic affordances, including assessing their impact, is not straightforward, and practitioners state that they lack resources to design adequately for interfaces of AI systems. This could be amended by creating a comprehensive pattern library of algorithmic affordances. This library should provide easy access to patterns, supported by live examples and research on their experiential impact and limitations of use. The Algorithmic Affordances in Recommender Interfaces workshop aimed to address key challenges related to building such a pattern library, including pattern identification features, a framework for systematic impact evaluation, and understanding the interaction between algorithmic affordances and their context of use, especially in education or with users with a low algorithmic literacy. Preliminary solutions were proposed for these challenges.
... 37,38 For media and entertainment, the use of XAI involves personalized recommendation systems based on collected personal information. [39][40][41][42] Uses of XAI in education encompass smart tutoring systems, university admission decision making, and grade estimation systems. [43][44][45] The transportation domain includes navigation systems, applications for autonomous cars and flight planning for the aviation industry. ...
... sub-tours (28), require that a vehicle that arrives at a center also departs from it (29), mandate the distribution of all available waste to the appropriate center (30), limit the total demand in a particular route to the vehicle capacity (31), dictate that vehicles start and end at the warehouse (32)(33), regulate the start time of service (33)(34)(35)(36), ensure that vehicles respect time windows of the centers and warehouse (37)(38)(39), limit the number of vehicles to those available (40), and specify the types of variables used (41)(42)(43). ...
Article
Full-text available
Background The management of medical waste is a complex task that necessitates effective strategies to mitigate health risks, comply with regulations, and minimize environmental impact. In this study, a novel approach based on collaboration and technological advancements is proposed. Methods By utilizing colored bags with identification tags, smart containers with sensors, object recognition sensors, air and soil control sensors, vehicles with Global Positioning System (GPS) and temperature humidity sensors, and outsourced waste treatment, the system optimizes waste sorting, storage, and treatment operations. Additionally, the incorporation of explainable artificial intelligence (XAI) technology, leveraging scikit-learn, xgboost, catboost, lightgbm, and skorch, provides real-time insights and data analytics, facilitating informed decision-making and process optimization. Results The integration of these cutting-edge technologies forms the foundation of an efficient and intelligent medical waste management system. Furthermore, the article highlights the use of genetic algorithms (GA) to solve vehicle routing models, optimizing waste collection routes and minimizing transportation time to treatment centers. Conclusions Overall, the combination of advanced technologies, optimization algorithms, and XAI contributes to improved waste management practices, ultimately benefiting both public health and the environment.
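To give a sense of the genetic-algorithm routing step mentioned in the abstract, the sketch below evolves a visiting order over a few hypothetical centers, a deliberately simplified travelling-salesman-style toy rather than the study's full model (which also covers capacities, time windows, and multiple vehicles); coordinates, operators, and parameters are invented.

```python
# Toy genetic algorithm for ordering waste-collection stops (single vehicle, no constraints).
import random

CENTERS = {"A": (0, 0), "B": (4, 1), "C": (2, 5), "D": (6, 3)}   # hypothetical locations
DEPOT = (1, 1)

def route_length(order):
    points = [DEPOT] + [CENTERS[c] for c in order] + [DEPOT]      # start and end at the depot
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

def crossover(p1, p2):
    cut = random.randrange(1, len(p1))
    head = p1[:cut]
    return head + [c for c in p2 if c not in head]                # order-preserving crossover

def mutate(route, rate=0.2):
    route = route[:]
    if random.random() < rate:
        i, j = random.sample(range(len(route)), 2)
        route[i], route[j] = route[j], route[i]                   # swap two stops
    return route

def genetic_route(pop_size=20, generations=100):
    population = [random.sample(list(CENTERS), len(CENTERS)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=route_length)
        parents = population[:pop_size // 2]                      # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=route_length)

best = genetic_route()
print(best, round(route_length(best), 2))
```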
... Users can control the recommender system input (i.e., user profile), process (i.e., algorithm parameters), and/or output (i.e., recommendations) [34,42]. Previous work shows that interactive recommender systems allow users to build better mental models [24,82] and can increase transparency [109,110], trust [32,112], as well as perceived effectiveness and user satisfaction [37,46,92]. ...
... In some cases, opening the black-box of the recommender system to users by providing explanations for system-generated recommendations has the potential to help users achieve a better understanding of the system's functionality; thus increasing user-perceived transparency [27,124]. Moreover, to give feedback and exert control over the system's recommendations effectively, users need insights into the system's reasoning, which can be achieved through explanation [24,82,109]. Thus, in this paper, we go beyond interactive recommendation and rather focus on studies that combine explanation with visualization techniques to support users' understanding of and interaction with the recommendation process. ...
Preprint
Full-text available
Providing system-generated explanations for recommendations represents an important step towards transparent and trustworthy recommender systems. Explainable recommender systems provide a human-understandable rationale for their outputs. Over the last two decades, explainable recommendation has attracted much attention in the recommender systems research community. This paper aims to provide a comprehensive review of research efforts on visual explanation in recommender systems. More concretely, we systematically review the literature on explanations in recommender systems based on four dimensions, namely explanation goal, explanation scope, explanation style, and explanation format. Recognizing the importance of visualization, we approach the recommender system literature from the angle of explanatory visualizations, that is using visualizations as a display style of explanation. As a result, we derive a set of guidelines that might be constructive for designing explanatory visualizations in recommender systems and identify perspectives for future work in this field. The aim of this review is to help recommendation researchers and practitioners better understand the potential of visually explainable recommendation research and to support them in the systematic design of visual explanations in current and future recommender systems.
... In this context, trust and transparency are often linked, following the intuition that you will more likely trust a system that you can understand than one that is a black box to you. Transparency is often linked to users' understanding of the RS's inner logic and supports the user in building an accurate mental model of how the system works [32,26]. Moreover, providing transparency could enhance users' trust in the system [27,38,50]. ...
... Given that trust is characterized by depending on another actor [30], it makes sense to assume that correct mental models lead to (appropriate) trust. The reason for this is that a correct mental model enables a user to correctly predict the behavior of a system and therefore enables the user to trust a system in the future [32]. ...
Preprint
Full-text available
Trust is long recognized to be an important factor in Recommender Systems (RS). However, there are different perspectives on trust and different ways to evaluate it. Moreover, a link between trust and transparency is often assumed but not always further investigated. In this paper we first go through different understandings and measurements of trust in the AI and RS community, such as demonstrated and perceived trust. We then review the relationships between trust and transparency, as well as mental models, and investigate different strategies to achieve transparency in RS such as explanation, exploration and exploranation (i.e., a combination of exploration and explanation). We identify a need for further studies to explore these concepts as well as the relationships between them.
... Several studies have experimented with the capabilities of LLMs by employing techniques like parameter-efficient tuning or instruction-based tuning to tailor recommendations. Some researchers have also transformed various recommendation scenarios into unified tasks of natural language generation, optimizing these models through multi-task learning frameworks such as P6 (Ngo et al. 2020). A notable development is the TIGER method (Rajput et al. 2024), which utilizes an RQ-VAE for constructing generative identification descriptors, followed by employing encoder-decoder based transformers for sequential recommendation. ...
Preprint
Full-text available
In this study, we introduce Convolutional Transformer Neural Collaborative Filtering (CTNCF), a novel approach aimed at enhancing recommendation systems by effectively capturing high-order structural information in user-item interactions. CTNCF represents a significant advancement over the traditional Neural Collaborative Filtering (NCF) model by seamlessly integrating Convolutional Neural Networks (CNNs) and Transformer layers. This sophisticated integration enables the model to adeptly capture and understand complex interaction patterns inherent in recommendation systems. Specifically, CNNs are employed to extract local features from user and item embeddings, allowing the model to capture intricate spatial dependencies within the data. Furthermore, the utilization of Transformer layers enables the model to capture long-range dependencies and interactions among user and item features, thereby enhancing its ability to understand the underlying relationships in the data. To validate the effectiveness of our proposed CTNCF framework, we conduct extensive experiments on two real-world datasets. The results demonstrate that CTNCF significantly outperforms state-of-the-art approaches, highlighting its efficacy in improving recommendation system performance.
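A rough sketch of the described layering follows: user and item embeddings pass through a convolutional stage for local features and then a Transformer encoder for longer-range interactions before a scoring head. It assumes PyTorch, and the layer sizes, pooling, and head are illustrative guesses rather than the authors' configuration.

```python
# Illustrative CTNCF-style model: embeddings -> Conv1d -> TransformerEncoder -> MLP score.
import torch
import torch.nn as nn

class CTNCFSketch(nn.Module):
    def __init__(self, n_users, n_items, dim=64, n_heads=4):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        # CNN over the short (user, item) embedding sequence to extract local features
        self.conv = nn.Conv1d(in_channels=dim, out_channels=dim, kernel_size=2, padding=1)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)   # longer-range interactions
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, user_ids, item_ids):
        u = self.user_emb(user_ids)                              # (batch, dim)
        v = self.item_emb(item_ids)                              # (batch, dim)
        seq = torch.stack([u, v], dim=1)                         # (batch, 2, dim)
        local = self.conv(seq.transpose(1, 2)).transpose(1, 2)   # (batch, L, dim)
        ctx = self.encoder(local)                                # (batch, L, dim)
        return torch.sigmoid(self.head(ctx.mean(dim=1))).squeeze(-1)

model = CTNCFSketch(n_users=1000, n_items=5000)
print(model(torch.tensor([1, 2]), torch.tensor([10, 20])))  # predicted interaction scores
```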
... This trade-off is further supported by Torkamaan et al. who found that overly accurate recommendations resulted in systems being perceived as "creepy" [69]. A growing body of literature has also investigated users' understandings of recommender systems, which have pointed at a diversity of views, attitudes, and perceptions as well as gaps and inconsistencies in the understanding of recommender systems [28,53,54]. Considering this variety of user standpoints on top of the internal trade-offs of perceived benefits and harms, the question becomes if there may be a behaviour-intention rift at play. ...
... The impact of algorithmic affordances on interaction qualities, such as controllability, autonomy, transparency, interpretability, fun, etc., has been demonstrated overwhelmingly in literature [3,8,11,20,[26][27][28][29][30][31][32][33][34]. However, currently, no systematic overview exists in which the individual studies and their impact on the various interaction qualities have been brought together. ...
Chapter
Full-text available
The user's experience with a recommender system is significantly shaped by the dynamics of user-algorithm interactions. These interactions are often evaluated using interaction qualities, such as controllability, trust, and autonomy, to gauge their impact. As part of our effort to systematically categorize these evaluations, we explored the suitability of the interaction qualities framework as proposed by Lenz, Dieffenbach and Hassenzahl. During this examination, we uncovered four challenges within the framework itself, and an additional external challenge. In studies examining the interaction between user control options and interaction qualities, interdependencies between concepts, inconsistent terminology, and the entity perspective (is it a user's trust or a system's trustworthiness) often hinder a systematic inventory of the findings. Additionally, our discussion underscored the crucial role of the decision context in evaluating the relation of algorithmic affordances and interaction qualities. We propose dimensions of decision contexts (such as 'reversibility of the decision', or 'time pressure'). They could aid in establishing a systematic three-way relationship between context attributes, attributes of user control mechanisms, and experiential goals, and as such they warrant further research. In sum, while the interaction qualities framework serves as a foundational structure for organizing research on evaluating the impact of algorithmic affordances, challenges related to interdependencies and context-specific influences remain. These challenges necessitate further investigation and subsequent refinement and expansion of the framework.
... An important design parameter that should be considered with regard to explainability in AI-empowered systems is the users' ability and familiarity with such technologies (Miller, 2019). Thus, a combination of scientific language and simpler approaches should be contemplated (Arrieta et al., 2020; Ngo, Kunkel, & Ziegler, 2020; Tsai & Carroll, 2022). Responsible AI (RAI) extends the concept of XAI further and refers to the deployment of AI-empowered systems where transparency and alignment with the basic human values of ethics and accountability is of most importance (Floridi et al., 2018). ...
Article
The significant proliferation of AI-empowered systems and machine learning (ML) across various examined domains underscores the vital necessity for comprehensive and customised explainability frameworks to lead to usable and trustworthy systems. Especially in the medical domain, where validation of methodologies and outcomes is as important as the adoption rate of such systems, the requirements of the depth and the level of abstraction of the explainability are particularly important and necessitate a systemic approach to ensure a proper definition. Explainability and interpretability are important usability and trustworthiness properties of AI-empowered systems and, as such, constitute important factors for technology acceptance. In this paper, we propose a novel framework for explainability requirements in AI-empowered systems using the Technology Acceptance Model (TAM). This framework employs targeted ML (hierarchical clustering, k-means or other) to acquire a user model for personalised, multi-layered explainability. Our novel framework integrates a rule-based system, which guides the degree of trustworthiness to be achieved based on user perception and AI literacy level. We test this methodology in the case of AI-empowered medical systems to (1) assess and quantify the doctors' abilities and familiarisation with technology and AI, (2) generate layers of personalised explainability based on user ability and user needs in terms of trustworthiness and (3) provide the necessary environment for transparency and validation. To assess and quantify the doctors' abilities, we have considered the Rapid Estimate of Adult Literacy in Medicine (REALM), a tool commonly used in the medical domain to bridge the communication gap between patients and doctors.
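The clustering step described here can be pictured with a small sketch: group users by literacy-style scores and map each cluster to an explanation layer through simple rules. The scores, thresholds, and layer labels below are invented and assume scikit-learn; they are not the framework's actual rules.

```python
# Hypothetical user clustering for personalised, multi-layered explainability.
import numpy as np
from sklearn.cluster import KMeans

# columns: [REALM-like literacy score, self-reported AI familiarity 0-10] (made-up data)
users = np.array([[60, 2], [62, 3], [55, 1], [30, 8], [33, 9], [28, 7]])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(users)

def explanation_layer(center):
    """Rule-based mapping from a cluster profile to the depth of explanation."""
    _, ai_familiarity = center
    if ai_familiarity >= 5:
        return "technical layer: feature attributions, model confidence"
    return "plain-language layer: short textual rationale, visual cues"

for label, center in enumerate(kmeans.cluster_centers_):
    members = np.where(kmeans.labels_ == label)[0].tolist()
    print(f"cluster {label}: users {members} -> {explanation_layer(center)}")
```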
... Ensuring users comprehend the characteristics of utilized models and providing a transparent representation of the deployed algorithms are crucial [17]. The predominant issue pertains to establishing user trust in machine learning (ML) models for predictions and decision support, while developers must furnish adequate descriptions and decision-making roadmaps [20]. ...
Chapter
Full-text available
Climate change and energy production and consumption are two inextricably linked concrete concepts of great concern. In an attempt to guarantee our future, the European Union (EU) has prioritized the addressing of both concepts, creating a new social contract between its citizens and the environment. The dazzling progress in its methodologies and applications during the recent years and the familiarization of the public with its abilities indicate Artificial Intelligence (AI) as a potential and powerful tool towards addressing important threats that climate change imposes. However, when using AI as a tool, it is vital to do so responsibly and transparently. Explainable Artificial Intelligence (xAI) has been coined as the term that describes the route of responsibility when implementing AI-driven systems. In this paper, we expand applications that have been previously built to address the problem of energy production and consumption. Specifically, (i) we conduct a survey to key stakeholders of the energy sector in the EU, (ii) we analyse the survey to define the required depth of AI explainability and (iii) we implement the outcomes of our analysis by developing a useful xAI framework that can guarantee higher adoption rates for our AI system and a more responsible and safe space for that system to be deployed.
... Ensuring users comprehend the characteristics of utilized models and providing a transparent representation of the deployed algorithms are crucial [17]. The predominant issue pertains to establishing user trust in machine learning (ML) models for predictions and decision support, while developers must furnish adequate descriptions and decision-making roadmaps [20]. ...
Article
Full-text available
This paper presents a novel development methodology for artificial intelligence (AI) analytics in energy management that focuses on tailored explainability to overcome the “black box” issue associated with AI analytics. Our approach addresses the fact that any given analytic service is to be used by different stakeholders, with different backgrounds, preferences, abilities, skills, and goals. Our methodology is aligned with the explainable artificial intelligence (XAI) paradigm and aims to enhance the interpretability of AI-empowered decision support systems (DSSs). Specifically, a clustering-based approach is adopted to customize the depth of explainability based on the specific needs of different user groups. This approach improves the accuracy and effectiveness of energy management analytics while promoting transparency and trust in the decision-making process. The methodology is structured around an iterative development lifecycle for an intelligent decision support system and includes several steps, such as stakeholder identification, an empirical study on usability and explainability, user clustering analysis, and the implementation of an XAI framework. The XAI framework comprises XAI clusters and local and global XAI, which facilitate higher adoption rates of the AI system and ensure responsible and safe deployment. The methodology is tested on a stacked neural network for an analytics service, which estimates energy savings from renovations, and aims to increase adoption rates and benefit the circular economy.
... Well-designed systems should implicitly induce effective conceptual models, as Norman stated [33,35], but explicit information may become essential as the systems increase in complexity. Indeed, some studies suggested that providing detailed information on the operations and functioning of the system helped users create accurate conceptual models, resulting in more effective interactions and precise outcome predictions (e.g., [27,28,36,37]). ...
Conference Paper
Enabling individuals to personalize and control the functionality of their smart devices is essential, particularly in educational environments where customized curricula are necessary for each student. To achieve this goal, End-User Programming through trigger-action rules seems to be a promising approach to empower teachers. However, to handle complex scenarios, End-User Programming systems must support naive users in adopting effective reasoning strategies and mental models. My Ph.D. research intends to explore specific linguistic aspects that can guide teachers in creating and debugging trigger-action rules for programming their smart educational devices, supporting them in adopting effective mental models and reasoning strategies.
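For readers unfamiliar with trigger-action programming, a generic sketch of the rule structure (event trigger, optional condition, action) is shown below; it is not tied to this thesis or to any particular educational platform, and all event and field names are hypothetical.

```python
# Generic trigger-action rules: fire an action when an event matches and its condition holds.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class Rule:
    trigger: str                                   # event name, e.g. "quiz_submitted"
    condition: Callable[[Dict[str, Any]], bool]    # extra check on the event context
    action: Callable[[Dict[str, Any]], None]       # what the smart device should do

rules: List[Rule] = [
    Rule("quiz_submitted",
         lambda ctx: ctx["score"] < 50,
         lambda ctx: print(f"Assign revision exercise to {ctx['student']}")),
    Rule("quiz_submitted",
         lambda ctx: ctx["score"] >= 90,
         lambda ctx: print(f"Unlock bonus material for {ctx['student']}")),
]

def dispatch(event: str, context: Dict[str, Any]) -> None:
    """Evaluate every rule whose trigger matches the incoming event."""
    for rule in rules:
        if rule.trigger == event and rule.condition(context):
            rule.action(context)

dispatch("quiz_submitted", {"student": "Ada", "score": 42})
```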
... This problem leads users to perceive RS as a "black box". Ngo et al. (2020) conducted a qualitative study using drawing to investigate users' mental models of RS, and their findings emphasized the necessity of designing transparent and controllable RS. To address this, researchers have attempted to help users better understand the inner workings of the system, for example by explaining why this content was recommended (Cramer et al., 2008; Rader et al., 2018), or have proposed interaction concepts in which users actively teach the RS how it learns about them (Kim & Lim, 2021). ...
... However, I would contend that the objective of such studies should not be to determine user perceptions of the relationship between two purposes, as previous studies have consistently shown that the functioning of digital systems might not be sufficiently transparent to users (e.g. Rader and Gray, 2015;Eslami et al., 2016;Ngo, Kunkel, and Ziegler, 2020). Instead, the focus of the study could be on evaluating the level of perceived system behavior change when data is used for the new purpose, such as a noticeable improvement or a degradation in performance, the perception of emergent system capabilities (Steinhardt, 2022), or qualitative changes in the results received. ...
Preprint
Reuse of data in new contexts beyond the purposes for which it was originally collected has contributed to technological innovation and reducing the consent burden on data subjects. One of the legal mechanisms that makes such reuse possible is purpose compatibility assessment. In this paper, I offer an in-depth analysis of this mechanism through a computational lens. I moreover consider what should qualify as repurposing apart from using data for a completely new task, and argue that typical purpose formulations are an impediment to meaningful repurposing. Overall, the paper positions compatibility assessment as a constructive practice beyond an ineffective standard.
... They are widely used in various domains, including streaming services, web shops, dating apps, journey planners, and professional decision support systems [4,5]. As recommenders can significantly impact people's choices, their omnipresence has societal implications [6,7]. Therefore, it is crucial that their interfaces -the part of the system that facilitates communication between the user and the algorithm or, from a grander perspective, between society and algorithms -are well-designed, user-friendly, and transparent. ...
Chapter
Recommenders play a significant role in our daily lives, making decisions for users on a regular basis. Their widespread adoption necessitates a thorough examination of how users interact with recommenders and the algorithms that drive them. An important form of interaction in these systems is algorithmic affordances: means that provide users with perceptible control over the algorithm by, for instance, providing context (‘find a movie for this profile’), weighing criteria (‘most important is the main actor’), or evaluating results (‘loved this movie’). The assumption is that these algorithmic affordances impact interaction qualities such as transparency, trust, autonomy, and serendipity, and as a result, they impact the user experience. Currently, the precise nature of the relation between algorithmic affordances, their specific implementations in the interface, interaction qualities, and user experience remains unclear. Subjects that will be discussed during the workshop, therefore, include but are not limited to the impact of algorithmic affordances and their implementations on interaction qualities, balances between cognitive overload and transparency in recommender interfaces containing algorithmic affordances; and reasons why research into these types of interfaces sometimes fails to cross the research-practice gap and is not landing in the design practice. As a potential solution, the workshop committee proposes a library of examples of algorithmic affordances design patterns and their implementations in recommender interfaces enriched with academic research concerning their impact. The final part of the workshop will be dedicated to formulating guiding principles for such a library. Keywords: User Interface Design, Recommender Systems, Algorithmic Affordances, Example Library
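The three affordances quoted in the abstract can be pictured with a toy sketch: a chosen profile provides context, user-set weights say which criterion matters most, and a 'loved it' signal nudges future scores. All data, weights, and the scoring rule are invented for illustration.

```python
# Toy algorithmic affordances: context (profile), criteria weights, and result evaluation.
movies = [
    {"title": "Heat", "actor_match": 1.0, "genre_match": 0.4},
    {"title": "The Piano", "actor_match": 0.0, "genre_match": 0.9},
    {"title": "Ronin", "actor_match": 0.8, "genre_match": 0.5},
]

profile = {"name": "family profile", "loved": set()}      # 'find a movie for this profile'
weights = {"actor_match": 0.7, "genre_match": 0.3}        # 'most important is the main actor'

def score(movie):
    base = sum(weights[k] * movie[k] for k in weights)
    return base + (0.2 if movie["title"] in profile["loved"] else 0.0)

def love(title):                                          # 'loved this movie'
    profile["loved"].add(title)

love("The Piano")
for m in sorted(movies, key=score, reverse=True):
    print(m["title"], round(score(m), 2))
```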
... In addition to navigation spaces (see for more examples [44][45][46][47][48]), other aspects of smart interface design that have been examined in various studies include onboarding and cold starts [14] and how different interface designs evoke different mental models [5,49,50]. Studies in the "compare and contrast" category meticulously examine the impact of different design components on the user experience, making them a valuable resource for both researchers and practitioners. However, as noted by Beel and Dixon [2,51], most studies in this domain, including their own, suffer from two critical limitations. ...
Chapter
Brian Shackel Award winner for most outstanding contribution with international impact in the field of human interaction with, and human use of, computers and information technology. Reviewers' Choice winner.
The design of recommender systems' graphical user interfaces (GUIs) is critical for a user's experience with these systems. However, most research into recommenders focuses on algorithms, overlooking the design of their interfaces. Additionally, the studies on the design of recommender interfaces that do exist do not always manage to cross the research-practice gap. This disconnect may be due to a lack of alignment between academic focus and the most pressing needs of practitioners, as well as the way research findings are communicated. To address these issues, this paper presents the results of a comprehensive study involving 215 designers worldwide, aiming to identify the primary challenges in designing recommender GUIs and the resources practitioners need to tackle those challenges. Building on these findings, this paper proposes a practice-led research agenda for the human-computer interaction community on designing recommender interfaces and suggestions for more accessible and actionable ways of disseminating research results in this domain. Keywords: Recommender System, Interface Design, Research-Practice Gap, Algorithmic Affordances
... We argue, however, that these techniques could be useful for obtaining insights into users' actual needs with respect to the interaction with such a system, in particular, when it comes to the relation to other decision aids. In this context, it is worth noting that it has been found only recently that users' mental models do not necessarily correspond to the implementations of recommender systems, and are subject to large inter-individual differences [31]. However, identifying the understanding users have of the system behavior is considered highly important for evaluating the impact of a recommender and improving it [32]. ...
Conference Paper
Full-text available
Thus far, in most of the user experiments conducted in the area of recommender systems, the respective system is considered as an isolated component, i.e., participants can only interact with the recommender that is under investigation. This fails to recognize the situation of users in real-world settings, where the recommender usually represents only one part of a greater system, with many other options for users to find suitable items than using the mechanisms that are part of the recommender, e.g., liking, rating, or critiquing. For example, in current web applications, users can often choose from a wide range of decision aids, from text-based search over faceted filtering to intelligent conversational agents. This variety of methods, which may equally support users in their decision making, raises the question of whether the current practice in recommender evaluation is sufficient to fully capture the user experience. In this position paper, we discuss the need to take a broader perspective in future evaluations of recommender systems, and raise awareness for evaluation methods which we think may help to achieve this goal, but have not yet gained the attention they deserve.
... For example, a streaming service may explain why a certain show or movie is recommended based on the user's preferences, ratings, or viewing history, or an ecommerce site may disclose how sponsored products are ranked or selected. [54,62,69] • Dumb it down: This pattern involves providing users with clear and understandable explanations of how their data is processed or used by an AI system. These explanations can be visual, personalized, and even counterfactual. ...
Preprint
Full-text available
User experience designers are facing increasing scrutiny and criticism for creating harmful technologies, leading to a pushback against unethical design practices. While clear-cut harmful practices such as dark patterns have received attention, trends towards automation, personalization, and recommendation present more ambiguous ethical challenges. To address potential harm in these "gray" instances, we propose the concept of "bright patterns" - persuasive design solutions that prioritize user goals and well-being over their desires and business objectives. The ambition of this paper is threefold: to define the term "bright patterns", to provide examples of such patterns, and to advocate for the adoption of bright patterns through policymaking.
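One of the patterns excerpted in the citation context above, explaining why an item was recommended, can be illustrated with a toy sketch that derives a short rationale from the overlap between a recommendation and the viewing history. Titles, genres, and the sentence template are placeholders, not any provider's actual logic.

```python
# Toy 'why recommended' explanation based on genre overlap with the viewing history.
history = [
    {"title": "Chef's Table", "genres": {"documentary", "food"}},
    {"title": "Street Food", "genres": {"documentary", "food"}},
]
recommendation = {"title": "Salt Fat Acid Heat", "genres": {"documentary", "food"}}

def explain(rec, history):
    overlaps = [(h["title"], rec["genres"] & h["genres"]) for h in history]
    overlaps = [(title, shared) for title, shared in overlaps if shared]
    if not overlaps:
        return f"Recommended because it is popular right now: {rec['title']}"
    source, shared = max(overlaps, key=lambda pair: len(pair[1]))
    return (f"Because you watched {source}: {rec['title']} "
            f"(shared themes: {', '.join(sorted(shared))})")

print(explain(recommendation, history))
```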
Article
Investigating digital privacy behavior requires consideration of its contextual nuances and the underlying social norms. This study delves into users' joint articulation of such norms by probing their implicit assumptions and "common sense" surrounding privacy conventions. To achieve this, we introduce Privacy Taboo, a card game designed to serve as a playful breaching interview method, fostering discourse on unwritten privacy rules. Through nine interviews involving pairs of participants (n=18), we explore the decision-making and collective negotiation of privacy's vagueness. Our findings demonstrate individuals' ability to articulate their information needs when consenting to fictive data requests, even when contextual cues are limited. By shedding light on the social construction of privacy, this research contributes to a more comprehensive understanding of usable privacy, thereby facilitating the development of democratic privacy frameworks. Moreover, we posit Privacy Taboo as a versatile tool adaptable to diverse domains of application and research.
Article
Full-text available
Recent advances in technology have propelled Artificial Intelligence (AI) into a crucial role in everyday life, enhancing human performance through sophisticated models and algorithms. However, the focus on predictive accuracy has often resulted in opaque black-box models that lack transparency in decision-making. To address this issue, significant efforts have been made to develop explainable AI (XAI) systems that make outcomes comprehensible to users. Various approaches, including new concepts, models, and user interfaces, aim to improve explainability, build user trust, enhance satisfaction, and increase task performance. Evaluation research has emerged to define and measure the quality of these explanations, differentiating between formal evaluation methods and empirical approaches that utilize techniques from psychology and human–computer interaction. Despite the importance of empirical studies, evaluations remain underutilized, with literature reviews indicating a lack of rigorous evaluations from the user perspective. This review aims to guide researchers and practitioners in conducting effective empirical user-centered evaluations by analyzing several studies; categorizing their objectives, scope, and evaluation metrics; and offering an orientation map for research design and metric measurement.
Chapter
This chapter investigates regulatory activities and policies in the European Union, the USA, and—considering its highly different cultural and political background—China, where regulation of AI and IRRSs has quite different aims than in the Western world.
Chapter
This chapter reviews existing regulations on discrimination, provides a multifaceted categorization of biases and fairness criteria, discusses techniques to measure popularity and demographic biases, presents methods to mitigate harmful biases, and concludes with a reflection on open challenges.
Chapter
This chapter discusses recent regulatory and ethical aspects of transparency, clarifies the relevant terminology, and highlights the benefits, challenges, and barriers to transparency in IRRSs. It then discusses techniques for enhancing transparency in IRRSs, outlines how transparency can be evaluated and achieved via documentation, and presents how algorithmic auditing helps improve the transparency of IRRSs. The chapter concludes with highlighting open challenges and further related work.
Chapter
This chapter outlines potential privacy and security risks inherent in IRRSs. It explores how personal data and models can be protected and discusses relevant regulations and corresponding technical solutions. The chapter closes by discussing open challenges related to privacy and security in IRRSs and points to additional related work.
Chapter
This chapter briefly synthesizes the key themes discussed throughout this work and outlines a roadmap for future research directions in this critical field.
Conference Paper
Full-text available
Many design schools struggle with questions of how recent AI advancements should be integrated into their curriculum. This is especially challenging for curricula with a substantial digital design component, such as media design or interaction design. Undoubtedly, curricula must include the aspect of designing 'with' AI, teaching students how to responsibly and ethically use AI in their design process. More importantly, programs should also integrate the concept of designing 'for' AI. While designing for emerging technologies, such as mobile, immersive, and social technologies, has been a constant challenge over the past decades, designing for AI is distinct from these challenges, since interaction design must adapt now, not to a new device, but to a new agent. This paper examines four different perspectives on how designing for AI alters interaction design education and the scale of its impact. Firstly, as mentioned above, future digital designers will be working with tools that are partially AI-based, including generative AI tools and decision aids. Secondly, their work context will undergo changes, as they assume different roles at different types of companies. Thirdly, they will need to address vastly different design challenges as they will work on an entirely new type of applications. Finally, the design of intelligent systems demands a new solution repertoire for designers. This paper will sketch the challenges for all these perspectives but will primarily focus on the last two: equipping students for designing ‘for’ AI. For these last two challenges the educational debate centers around a ‘lightweight’ approach versus a ‘heavyweight’ approach to designing for AI. The lightweight approach prioritizes a solution repertoire associated with the front end of AI applications, with a focus on user interfaces, the user-AI interactions that need to be designed, and their immediate impact on user experience. We will argue that this is a deceptively novel area where students need to get adept at designing for shaky mental models and assume responsibility in creating ethical applications. Designing the front-end of AI presents fresh challenges in education, which, contrary to common beliefs among educators, are largely disconnected from a deep understanding of the underlying technology. The heavyweight choice for digital design curricula entails a focus on the conceptual design of AI applications. This encompasses challenges such as involving users in the design of applications with AI, altering the AI design processes, facilitating communication between data scientists and designers and fostering responsible design practices. These challenges do require a basic understanding of the technology, although the level of specific declarative and experiential knowledge required by students to excel in this domain remains uncertain. In this paper, we compare these approaches and discuss their complementarity. Specifically, we explore whether it is advantageous for students to begin with the lightweight approach - grasping practical applications and user-facing aspects of AI and then gradually transitioning to a heavyweight approach - exploring technical intricacies, and learning how to innovate and improve AI technologies. Finally, we draw conclusions regarding the broader transformation of the design field resulting from the influence of AI.
Chapter
Explainable Artificial Intelligence (XAI) is transforming artificial intelligence by boosting end-user trust in technology. The literature still lacks a well-organized and comprehensive survey on the use of XAI in healthcare. We are attempting to address this need by emphasizing the capabilities of XAI frameworks in healthcare to achieve accountability, transparency, result tracing, and model improvement in the healthcare sector, affecting a wide range of fields and disciplines of research. Some of them call for a high standard of transparency and responsibility in the medical industry. Therefore, machine choices and predictions need explanations to support their veracity. Greater interpretability is necessary for this, which frequently requires knowledge of the algorithms' underlying mechanisms. In this chapter, we present the well-known XAI methods, services, and applications. The technique in XAI is discussed as being used to analyze and diagnose health data utilizing AI-based technologies. As a result, we may gain a better understanding of how the healthcare industry functions as a whole by studying how diverse inputs interact. The various categories show diverse aspects of interpretability research, ranging from methods that produce information that is “obviously” interpretable to investigations of complex patterns. It is hoped that by categorizing interpretability in medical research similarly to other types of research, (1) clinicians and practitioners will be able to approach these techniques with caution, (2) new understandings of interpretability will emerge with greater consideration for medical practices, and (3) initiatives to promote data-driven, mathematically and technically sound medical education will be supported.
Article
Providing system-generated explanations for recommendations represents an important step towards transparent and trustworthy recommender systems. Explainable recommender systems provide a human-understandable rationale for their outputs. Over the past two decades, explainable recommendation has attracted much attention in the recommender systems research community. This paper aims to provide a comprehensive review of research efforts on visual explanation in recommender systems. More concretely, we systematically review the literature on explanations in recommender systems based on four dimensions, namely explanation aim, explanation scope, explanation method, and explanation format. Recognizing the importance of visualization, we approach the recommender system literature from the angle of explanatory visualizations, that is using visualizations as a display style of explanation. As a result, we derive a set of guidelines that might be constructive for designing explanatory visualizations in recommender systems and identify perspectives for future work in this field. The aim of this review is to help recommendation researchers and practitioners better understand the potential of visually explainable recommendation research and to support them in the systematic design of visual explanations in current and future recommender systems.
Article
Machine learning and artificial intelligence produce algorithms that appear to be able to make "intelligent" decisions similar to those of humans but function differently from human thinking. To make decisions based on machine suggestions, humans should be able to understand the background of these suggestions. However, since humans are oriented to understand human intelligence, it is not yet fully clear whether humans can truly understand the "thinking" generated by machine learning, or whether they merely transfer human-like cognitive processes to machines. In addition, media representations of artificial intelligence show higher capabilities and greater human likeness than they currently have. In our daily lives, we increasingly encounter assistance systems that are designed to facilitate human tasks and decisions based on intelligent algorithms. These algorithms are predominantly based on machine learning technologies, which make it possible to discover previously unknown correlations and patterns by analyzing large amounts of data. One example is the machine analysis of thousands of X-ray images of sick and healthy people. This requires identifying the patterns by which images labeled as "healthy" can be distinguished from those labeled as "sick" and finding an algorithm that identifies the latter. In the meantime, "trained" algorithms created in this way are used in various fields of application, not only for medical diagnoses but also in the pre-selection of applicants for a job advertisement or in communication with the help of voice assistants. These voice assistants are enabled by intelligent algorithms to offer internet services through short commands. Harald Lesch, referring to his book Unpredictable, written together with Thomas Schwarz, says the development of artificial intelligence can be compared to bringing aliens to Earth. With machine learning, a previously unknown form of non-human intelligence has been created. This chapter discusses whether forms of artificial intelligence, as they are currently being publicly discussed, differ substantially from human thinking. Furthermore, it will be discussed to what extent humans can comprehend the functioning of artificial intelligence that has been created through machine learning when interacting with them. Finally, the risks and opportunities will be weighed and discussed.
Book
In today's fast-paced technological landscape, companies are continuously seeking innovation to stay competitive. Artificial Intelligence (AI) has become ubiquitous across various sectors, notably impacting the realm of e-commerce (EC). AI applications like recommendation systems, fake filters, and fraud detection have revolutionized the EC industry. However, a persistent challenge is understanding and explaining the outcomes produced by AI algorithms, which affects their trustworthiness. To address this concern, there is a growing focus on the ethics and privacy implications of AI, prompting additional research efforts. The goal is to improve the trustworthiness and ethical integrity of AI systems, leading to the resurgence of Explainable AI (XAI). XAI is dedicated to making AI results more understandable to users. However, a major challenge persists: existing technologies often struggle to provide detailed explanations of how algorithms arrive at specific results or recommendations. In e-commerce, where decisions often require swift action, integrating XAI systems becomes vital, as they aim to offer immediate justifications and thus bridge this explanatory gap.
Chapter
Businesses across industries have changed how they operate as a result of the introduction and adoption of technology. Importantly, significant technological advancements in e-commerce try to persuade consumers to purchase particular goods and brands. AI is increasingly used as a vital new tool for personalization and product customization to meet specific needs. Explainable AI (XAI) is the branch of machine learning that studies and strives to understand the models and processes behind the "black box" decisions produced by AI systems; it provides insights into the decision-making criteria, factors, and data required to generate a recommendation. In order to deploy explainable systems, this study suggests that ML models need to be improved so that they are easier to comprehend and interpret. This paper addresses the issue by examining and analyzing recent work on XAI methodologies, needs, principles, applications, and case studies. We introduce a novel XAI approach that facilitates the development of explainable models while maintaining a high level of learning performance.
Article
Recommender systems (RS), serving at the forefront of Human-centered AI, are widely deployed in almost every corner of the web and facilitate the human decision-making process. However, despite their enormous capabilities and potential, RS may also lead to undesired effects on users, items, producers, platforms, or even society at large, such as compromised user trust due to non-transparency, unfair treatment of different consumers or producers, or privacy concerns due to extensive use of users' private data for personalization, to name just a few. All of these create an urgent need for Trustworthy Recommender Systems (TRS) so as to mitigate or avoid such adverse impacts and risks. In this survey, we introduce techniques related to trustworthy recommendation, including but not limited to explainable recommendation, fairness in recommendation, privacy-aware recommendation, robustness in recommendation, and user-controllable recommendation, as well as the relationships between these different perspectives on trustworthy recommendation. Through this survey, we hope to provide readers with a comprehensive view of the research area and to draw the community's attention to the importance, existing research achievements, and future research directions of trustworthy recommendation.
Chapter
The pervasive use of artificial intelligence (AI) in processing users' data is well documented, and AI is believed to profoundly change users' way of life in the near future. However, there still exists a sense of mistrust among users who engage with AI systems, some of it stemming from a lack of transparency, including users failing to understand what AI is, what it can do, and its impact on society. From this, the discipline of explainable artificial intelligence (XAI) has emerged: a method of designing and developing AI in which a system's decisions, processes, and outputs are explained and understood by the end user. It has been argued that designing AI systems, especially for XAI, poses a unique set of challenges, as AI systems are often complex, opaque, and difficult to visualise and interpret, especially for those unfamiliar with their inner workings. For this reason, visual interpretations that match users' mental models of their understanding of AI are a necessary step in the development of XAI solutions. Our research examines the inclusion of designers in an early-stage analysis of an AI recruitment system, taking a design thinking approach in the form of three workshops. We discovered that workshops that included designers yielded more visual interpretations of big ideas related to AI systems, and that the inclusion of designers encouraged more visual interpretations from non-designers and those not typically used to employing drawing as a method to express mental models.
Conference Paper
Full-text available
Informal caregivers play an essential role in caring for persons who require assistance and in managing the health of their loved ones. Unfortunately, they often lack time for their own health, leisure, and relaxation. Nature interaction is one of many kinds of self-care intervention. It has long been regarded as a refreshing break from stressful routines, and research suggests that exposure to nature interventions improves the quality of life of caregivers. Despite not being the real thing, technology offers alternatives that can still have some beneficial effects. In this preliminary study, we explore the benefits of natural environment videos for informal caregivers as an alternative to exposure to nature. Specifically, we are interested in the effects of their own choices versus a random video. We found that natural environment videos improve the well-being of informal caregivers in at least three key areas: valence, arousal, and negative affect. Furthermore, the effect increases when they choose the video they want to watch instead of receiving a random video. This matters for the studied group because they often lack the time and energy to visit real natural environments. Keywords: Informal caregivers, Self-care, Well-being, Nature videos
Chapter
Exploring end-users' understanding of Artificial Intelligence (AI) systems' behaviours and outputs is crucial in developing accessible Explainable Artificial Intelligence (XAI) solutions. Investigating mental models of AI systems is core to understanding and explaining the often opaque, complex, and unpredictable nature of AI. Researchers employ surveys, interviews, and observations of software systems, yielding useful evaluations. However, an evaluation gulf still exists, primarily around comprehending end-users' understanding of AI systems. It has been argued that exploring theories of human decision-making from psychology, philosophy, and human-computer interaction (HCI), in a people-centric rather than product- or technology-centric approach, can result in initial XAI solutions with great potential. Our work presents the results of a design thinking workshop with 14 cross-collaborative participants with backgrounds in philosophy, psychology, computer science, AI systems development, and HCI. Participants undertook design thinking activities to ideate how AI system behaviours may be explained to end-users to bridge the explanation gulf of AI systems. We reflect on design thinking as a methodology for exploring end-users' perceptions and mental models of AI systems with a view to creating effective, useful, and accessible XAI. Keywords: Artificial Intelligence, Explainable Artificial Intelligence, Human Computer Interaction, Design Thinking
Chapter
This paper presents a qualitative study that investigates the effects of some language choices in expressing the trigger part of a trigger-action rule on the users’ mental models. Specifically, we explored how 11 non-programmer participants articulated the definition of trigger-action rules in different contexts by choosing among alternative conjunctions, verbal structures, and order of primitives. Our study shed some new light on how lexical choices influence the users’ mental models in End-User Development tasks. Specifically, the conjunction “as soon as” clearly supports the idea of instantaneousness, and the conjunction “while” the idea of protractedness of an event; the most commonly used “if” and “when”, instead, are prone to create ambiguity in the mental representation of events. The order of rule elements helps participants to construct accurate mental models. Usually, individuals are facilitated in comprehension when the trigger is displayed at the beginning of the rule, even though sometimes the reverse order (with the action first) is preferred as it conveys the central element of the rule. Our findings suggest that improving and implementing these linguistic aspects in designing End-User Development tools will allow naive users to engage in more effective and expressive interactions with their systems. Keywords: End-User Programming, Trigger-Action Paradigm, Mental Models, Language
Conference Paper
Full-text available
Recommender systems have become a mainstay of modern internet applications. They help users identify products to purchase on Amazon, movies to watch on Netflix and songs to enjoy on Pandora. Indeed, they have become so commonplace that users, through years of interactions with these systems, have developed an inherent understanding of how recommender systems function, what their objectives are, and how the user might manipulate them. We describe this understanding as the Theory of the Recommender. In this pilot study, we design and administer a survey to 25 users familiar with recommender systems. Our detailed analysis of their responses demonstrates that they possess an awareness of how recommender systems profile the user, build representations for items, and ultimately construct recommendations. The success of this pilot study provides support for a larger user study and the development of a grounded theory to describe the user's cognitive model of how recommender systems function.
Conference Paper
Full-text available
Trust in a Recommender System (RS) is crucial for its overall success. However, it remains underexplored whether users trust personal recommendation sources (i.e. other humans) more than impersonal sources (i.e. conventional RS), and, if they do, whether the perceived quality of the explanations provided accounts for the difference. We conducted an empirical study in which we compared these two sources of recommendations and explanations. Human advisors were asked to explain movies they recommended in short texts while the RS created explanations based on item similarity. Our experiment comprised two rounds of recommending. Over both rounds the quality of explanations provided by users was assessed higher than the quality of the system's explanations. Moreover, explanation quality significantly influenced perceived recommendation quality as well as trust in the recommendation source. Consequently, we suggest that RS should provide richer explanations in order to increase their perceived recommendation quality and trustworthiness.
Conference Paper
Full-text available
Hybrid social recommender systems use social relevance from multiple sources to recommend relevant items or people to users. To make hybrid recommendations more transparent and controllable, several researchers have explored interactive hybrid recommender interfaces, which allow for a user-driven fusion of recommendation sources. In this field of work, the intelligent user interface has been investigated as an approach to increase transparency and improve the user experience. In this paper, we attempt to further promote the transparency of recommendations by augmenting an interactive hybrid recommender interface with several types of explanations. We evaluate user behavior patterns and subjective feedback in a within-subject study (N=33). Results from the evaluation show the effectiveness of the proposed explanation models. The results of the post-treatment survey indicate a significant improvement in the perception of explainability, but such improvement comes with a lower degree of perceived controllability.
Conference Paper
Full-text available
Recommender systems relying on latent factor models often appear as black boxes to their users. Semantic descriptions for the factors might help to mitigate this problem. Achieving this automatically is, however, a non-straightforward task due to the models' statistical nature. We present an output-agreement game that represents factors by means of sample items and motivates players to create such descriptions. A user study shows that the collected output actually reflects real-world characteristics of the factors.
Conference Paper
Full-text available
A user's trust in recommendations plays a central role in the acceptance or rejection of a recommendation. One factor that influences trust is the source of the recommendations. In this paper we describe an empirical study that investigates the trust-related influence of social presence arising in two scenarios: human-generated recommendations and automated recommending. We further compare visual cues indicating the expertise of a human recommendation source and its similarity with the target user, and evaluate their influence on trust. Our analysis indicates that even subtle visual cues can signal expertise and similarity effectively, thus influencing a user's trust in recommendations. These findings suggest that automated recommender systems could benefit from the inclusion of social components, especially when conveying characteristics of the recommendation source. Thus, more informative and persuasive recommendation interfaces may be designed using such a mixed approach.
Conference Paper
Full-text available
Intelligent systems, which are on their way to becoming mainstream in everyday products, make recommendations and decisions for users based on complex computations. Researchers and policy makers increasingly raise concerns regarding the lack of transparency and comprehensibility of these computations from the user perspective. Our aim is to advance existing UI guidelines for more transparency in complex real-world design scenarios involving multiple stakeholders. To this end, we contribute a stage-based participatory process for designing transparent interfaces incorporating perspectives of users, designers, and providers, which we developed and validated with a commercial intelligent fitness coach. With our work, we hope to provide guidance to practitioners and to pave the way for a pragmatic approach to transparency in intelligent systems.
Article
Full-text available
In recent years many accurate decision support systems have been constructed as black boxes, that is as systems that hide their internal logic from the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, delineates explicitly or implicitly its own definition of interpretability and explanation. The aim of this paper is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help researchers find the proposals most useful for their own work. The proposed classification of approaches to opening black box models should also be useful for putting the many open research questions in perspective.
Article
Full-text available
This project was carried out in the context of recent major developments in botics and the more widespread usage of virtual agents in the personal and professional sphere. The general purpose of the experiment was to thoroughly examine the character of the human–non-human interaction process. Thus, in the paper, we present a study of human–chatbot interaction, focusing on the affective responses of users to different types of interfaces with which they interact. The experiment consisted of two parts: measurement of psychophysiological reactions of chatbot users and a detailed questionnaire that focused on assessing interactions and willingness to collaborate with a bot. In the first, quantitative stage, participants interacted with a chatbot, either with a simple text chatbot (control group) or an avatar reading its responses aloud in addition to presenting them on the screen (experimental group). We gathered the following psychophysiological data from participants: electromyography (EMG), respirometer (RSP), electrocardiography (ECG), and electrodermal activity (EDA). In the last, declarative stage, participants filled out a series of questionnaires related to the experience of interacting with (chat)bots and to the overall human–(chat)bot collaboration assessment. The theory of planned behavior survey investigated attitudes towards cooperation with chatbots in the future. The social presence survey checked how much the chatbot was considered to be a “real” person. The anthropomorphism scale measured the extent to which the chatbot seems humanlike. Our particular focus was on the so-called uncanny valley effect, consisting of the feeling of eeriness and discomfort towards a given medium or technology that frequently appears in various kinds of human–machine interactions. Our results show that participants experienced weaker uncanny valley effects and less negative affect when cooperating with the simpler text chatbot than with the more complex, animated avatar chatbot. The simple chatbot also induced less intense psychophysiological reactions. Despite major developments in botics, users' affective responses towards bots have frequently been neglected. In our view, understanding the user's side may be crucial for designing better chatbots in the future and, thus, can contribute to advancing the field of human–computer interaction. L. Ciechanowski, A. Przegalinska, and M. Magnuski contributed equally to the article.
Article
Full-text available
Although youth are increasingly going online to fulfill their needs for information, many youth struggle with information and digital literacy skills, such as the abilities to conduct a search and assess the credibility of online information. Ideally, these skills encompass an accurate and comprehensive understanding of the ways in which a system, such as a Web search engine, functions. In order to investigate youths’ conceptions of the Google search engine, a drawing activity was conducted with 26 HackHealth after-school program participants to elicit their mental models of Google. The findings revealed that many participants personified Google and emphasized anthropomorphic elements, computing equipment, and/or connections (such as cables, satellites and antennas) in their drawings. Far fewer participants focused their drawings on the actual Google interface or on computer code. Overall, their drawings suggest a limited understanding of Google and the ways in which it actually works. However, an understanding of youths’ conceptions of Google can enable educators to better tailor their digital literacy instruction efforts and can inform search engine developers and search engine interface designers in making the inner workings of the engine more transparent and their output more trustworthy to young users. With a better understanding of how Google works, young users will be better able to construct effective queries, assess search results, and ultimately find relevant and trustworthy information that will be of use to them.
Conference Paper
Full-text available
What does a user need to know to productively work with an intelligent agent? Intelligent agents and recommender systems are gaining widespread use, potentially creating a need for end users to understand how these systems operate in order to fix their agent's personalized behavior. This paper explores the effects of mental model soundness on such personalization by providing structural knowledge of a music recommender system in an empirical study. Our findings show that participants were able to quickly build sound mental models of the recommender system's reasoning, and that participants who most improved their mental models during the study were significantly more likely to make the recommender operate to their satisfaction. These results suggest that by helping end users understand a system's reasoning, intelligent agents may elicit more and better feedback, thus more closely aligning their output with each user's intentions.
Article
Full-text available
This paper presents an interactive hybrid recommendation system that generates item predictions from multiple social and semantic web resources, such as Wikipedia, Facebook, and Twitter. The system employs hybrid techniques from traditional recommender system literature, in addition to a novel interactive interface which serves to explain the recommendation process and elicit preferences from the end user. We present an evaluation that compares different interactive and non-interactive hybrid strategies for computing recommendations across diverse social and semantic web APIs. Results of the study indicate that explanation and interaction with a visual representation of the hybrid system increase user satisfaction and relevance of predicted content.
Conference Paper
Full-text available
Collaborative filtering (CF) has been successfully deployed over the years to compute predictions on items based on a user's correlation with a set of peers. The black-box nature of most CF applications leaves the user wondering how the system arrived at its recommendation. This note introduces PeerChooser, a collaborative recommender system with an interactive graphical explanation interface. Users are provided with a visual explanation of the CF process and the opportunity to manipulate their neighborhood at varying levels of granularity to reflect aspects of their current requirements. In this manner we overcome the problem of redundant profile information in CF systems, in addition to providing an explanation interface. Our layout algorithm produces an exact, noiseless graph representation of the underlying correlations between users. PeerChooser's prediction component uses this graph directly to yield the same results as the benchmark. Users then improve on these predictions by tweaking the graph to their current requirements. We present a user survey in which PeerChooser compares favorably against a benchmark CF algorithm. ACM Classification: H.5.4 Information Interfaces and Presentation (e.g., HCI).
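As a rough illustration of the kind of user-user correlation data such a graph interface exposes, the following Python sketch computes Pearson correlations between users of a toy rating matrix and makes a mean-centred neighbourhood prediction. The matrix, the neighbourhood size k, and all values are invented for the example; this is not the PeerChooser implementation.

```python
# Minimal sketch of user-user Pearson correlations and a neighbourhood-weighted
# prediction, the kind of data a graph-based CF explanation could be built on.
import numpy as np

# Toy rating matrix: rows = users, columns = items, 0 = unrated.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def pearson(u, v):
    """Pearson correlation over the items both users have rated."""
    mask = (u > 0) & (v > 0)
    if mask.sum() < 2:
        return 0.0
    cu, cv = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
    denom = np.sqrt((cu ** 2).sum() * (cv ** 2).sum())
    return float((cu * cv).sum() / denom) if denom else 0.0

def user_mean(u):
    """Mean of a user's observed ratings."""
    rated = u[u > 0]
    return float(rated.mean()) if rated.size else 0.0

def predict(user, item, k=2):
    """Mean-centred prediction from the k most correlated neighbours who rated the item."""
    sims = [(pearson(R[user], R[v]), v) for v in range(len(R))
            if v != user and R[v, item] > 0]
    sims.sort(reverse=True)
    top = [(s, v) for s, v in sims[:k] if s != 0.0]
    if not top:
        return user_mean(R[user])
    num = sum(s * (R[v, item] - user_mean(R[v])) for s, v in top)
    den = sum(abs(s) for s, _ in top)
    return user_mean(R[user]) + num / den

# The correlations above are exactly what a graph layout could expose, and editing
# the neighbourhood amounts to filtering the list built inside predict().
print(round(predict(user=0, item=2), 2))
```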
Conference Paper
Full-text available
Recommender systems have shown great potential to help users find interesting and relevant items from within a large information space. Most research up to this point has focused on improving the accuracy of recommender systems. We believe that not only has this narrow focus been misguided, but has even been detrimental to the field. The recommendations that are most accurate according to the standard metrics are sometimes not the recommendations that are most useful to users. In this paper, we propose informal arguments that the recommender community should move beyond the conventional accuracy metrics and their associated experimental methodologies. We propose new user-centric directions for evaluating recommender systems.
Conference Paper
Full-text available
Netflix.com uses star ratings, Digg.com uses up/down votes and Facebook uses a "like" but not a "dislike" button. Despite the popularity and diversity of these rating scales, research offers little guidance for designers choosing between them. This paper compares four different rating scales: unary ("like it"), binary (thumbs up / thumbs down), five-star, and a 100-point slider. Our analysis draws upon 12,847 movie and product review ratings collected from 348 users through an online survey. We a) measure the time and cognitive load required by each scale, b) study how rating time varies with the rating value assigned by a user, and c) survey users' satisfaction with each scale. Overall, users work harder with more granular rating scales, but these effects are moderated by item domain (product reviews or movies). Given a particular scale, users' rating times vary significantly for items they like and dislike. Our findings about users' rating effort and satisfaction suggest guidelines for designers choosing between rating scales.
Conference Paper
Full-text available
This research was motivated by our interest in understanding the criteria for measuring the success of a recommender system from the users' point of view. Even though existing work has suggested a wide range of criteria, the consistency and validity of the combined criteria have not been tested. In this paper, we describe a unifying evaluation framework, called ResQue (Recommender systems' Quality of user experience), which aims at measuring the qualities of the recommended items, the system's usability, usefulness, interface and interaction qualities, users' satisfaction with the systems, and the influence of these qualities on users' behavioral intentions, including their intention to purchase the products recommended to them and return to the system. We also show the results of applying psychometric methods to validate the combined criteria using data collected from a large user survey. The outcomes of the validation are able to 1) support the consistency, validity and reliability of the selected criteria; and 2) explain the quality of user experience and the key determinants motivating users to adopt the recommender technology. The final model consists of thirty-two questions and fifteen constructs, defining the essential qualities of an effective and satisfying recommender system, as well as providing practitioners and scholars with a cost-effective way to evaluate the success of a recommender system and identify important areas in which to invest development resources.
Article
Full-text available
The increasing availability of (digital) cultural heritage artefacts offers great potential for increased access to art content, but also necessitates tools to help users deal with such abundance of information. User-adaptive art recommender systems aim to present their users with art content tailored to their interests. These systems try to adapt to the user based on feedback from the user on which artworks he or she finds interesting. Users need to be able to depend on the system to competently adapt to their feedback and find the artworks that are most interesting to them. This paper investigates the influence of transparency on user trust in and acceptance of content-based recommender systems. A between-subject experiment (N = 60) evaluated interaction with three versions of a content-based art recommender in the cultural heritage domain. This recommender system provides users with artworks that are of interest to them, based on their ratings of other artworks. Version 1 was not transparent, version 2 explained to the user why a recommendation had been made and version 3 showed a rating of how certain the system was that a recommendation would be of interest to the user. Results show that explaining to the user why a recommendation was made increased acceptance of the recommendations. Trust in the system itself was not improved by transparency. Showing how certain the system was of a recommendation did not influence trust and acceptance. A number of guidelines for design of recommender systems in the cultural heritage domain have been derived from the study’s results.
Article
Full-text available
Recommender systems have become valuable resources for users seeking intelligent ways to search through the enormous volume of information available to them. One crucial unsolved problem for recommender systems is how best to learn about a new user. In this paper we study six techniques that collaborative filtering recommender systems can use to learn about new users. These techniques select a sequence of items for the collaborative filtering system to present to each new user for rating. The techniques include the use of information theory to select the items that will give the most value to the recommender system, aggregate statistics to select the items the user is most likely to have an opinion about, balanced techniques that seek to maximize the expected number of bits learned per presented item, and personalized techniques that predict which items a user will have an opinion about. We study the techniques through offline experiments with a large preexisting user data set, and through a live experiment with over 300 users. We show that the choice of learning technique significantly affects the user experience, in both the user effort and the accuracy of the resulting predictions.
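As an illustration of the information-theoretic flavour of such signup techniques, the sketch below ranks candidate items by the entropy of their community rating distribution, optionally weighted by log-popularity. The data, the weighting, and the function names are assumptions made for this example, not the exact strategies evaluated in the paper.

```python
# Minimal sketch of an information-theoretic signup strategy: rank candidate items
# by the entropy of their rating distribution, optionally weighted by popularity
# (a high-entropy but obscure item is rarely rateable by a new user).
import numpy as np

def rating_entropy(ratings, levels=(1, 2, 3, 4, 5)):
    """Shannon entropy (bits) of one item's observed rating distribution."""
    counts = np.array([np.sum(ratings == r) for r in levels], dtype=float)
    if counts.sum() == 0:
        return 0.0
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def signup_order(item_ratings, popularity_weight=True):
    """Return item ids ordered by (optionally log-popularity-weighted) entropy."""
    scores = {}
    for item, ratings in item_ratings.items():
        score = rating_entropy(np.asarray(ratings))
        if popularity_weight:
            score *= np.log1p(len(ratings))
        scores[item] = score
    return sorted(scores, key=scores.get, reverse=True)

# Toy data: item -> ratings observed from the existing community.
item_ratings = {
    "item_a": [5, 5, 5, 5, 5],           # popular but uninformative (everyone agrees)
    "item_b": [1, 5, 2, 5, 1, 4, 5, 1],  # divisive -> high entropy, high information value
    "item_c": [3],                       # almost no data
}
print(signup_order(item_ratings))
```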
Article
Full-text available
Recommender Systems act as personalized decision guides, aiding users in decisions on matters related to personal taste. Most previous research on Recommender Systems has focused on the statistical accuracy of the algorithms driving the systems, with little emphasis on interface issues and the user's perspective. The goal of this research was to examine the role of transparency (user understanding of why a particular recommendation was made) in Recommender Systems. To explore this issue, we conducted a user study of five music Recommender Systems. Preliminary results indicate that users like and feel more confident about recommendations that they perceive as transparent.
Article
Full-text available
Automated collaborative filtering (ACF) systems predict a person's affinity for items or information by connecting that person's recorded interests with the recorded interests of a community of people and sharing ratings between like-minded persons. However, current recommender systems are black boxes, providing no transparency into the working of the recommendation. Explanations provide that transparency, exposing the reasoning and data behind a recommendation. In this paper, we address explanation interfaces for ACF systems -- how they should be implemented and why they should be implemented. To explore how, we present a model for explanations based on the user's conceptual model of the recommendation process. We then present experimental results demonstrating what components of an explanation are the most compelling. To address why, we present experimental evidence that shows that providing explanations can improve the acceptance of ACF systems. We also describe some initial explorations.
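A minimal sketch of one explanation style in this spirit, a histogram of how a user's nearest neighbours rated the recommended item, is given below. The item, the ratings, and the wording are illustrative and do not reproduce the paper's interfaces.

```python
# Minimal sketch of a neighbour-histogram explanation for a collaborative filtering
# recommendation: expose how similar users rated the recommended item.
from collections import Counter

def explain_recommendation(item_title, neighbour_ratings):
    """Build a short textual explanation from the neighbours' ratings of one item."""
    counts = Counter(neighbour_ratings)
    total = len(neighbour_ratings)
    positive = sum(c for r, c in counts.items() if r >= 4)
    lines = [f"'{item_title}' was recommended because "
             f"{positive} of your {total} most similar users rated it 4 stars or higher:"]
    for rating in sorted(counts, reverse=True):
        bar = "#" * counts[rating]
        lines.append(f"  {rating} stars | {bar} ({counts[rating]})")
    return "\n".join(lines)

# Toy neighbour ratings for an invented example item.
print(explain_recommendation("The Matrix", [5, 5, 4, 4, 4, 3, 2]))
```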
Conference Paper
Recommender systems (RS) often use implicit user preferences extracted from behavioral and contextual data, in addition to traditional rating-based preference elicitation, to increase the quality and accuracy of personalized recommendations. However, these approaches may harm user experience by causing mixed emotions, such as fear, anxiety, surprise, discomfort, or creepiness. RS should consider users' feelings, expectations, and reactions that result from being shown personalized recommendations. This paper investigates the creepiness of recommendations using an online experiment in three domains: movies, hotels, and health. We define the feeling of creepiness caused by recommendations and find that it is already familiar to users of RS. We further find that the perception of creepiness varies across domains and depends on recommendation features, like causal ambiguity and accuracy. By uncovering possible consequences of creepy recommendations, we also learn that creepiness can have a negative influence on brand and platform attitudes, purchase or consumption intention, user experience, and users' expectations of--and their trust in--RS.
Conference Paper
When automating tasks using some form of artificial intelligence, some inaccuracy in the result is virtually unavoidable. In many cases, the user must decide whether to try the automated method again, or fix it themselves using the available user interface. We argue this decision is influenced by both perceived automation accuracy and degree of task "controllability" (how easily and to what extent an automated result can be manually modified). This relationship between accuracy and controllability is investigated in a 750-participant crowdsourced experiment using a controlled, gamified task. With high controllability, self-reported satisfaction remained constant even under very low accuracy conditions, and overall, a strong preference was observed for using manual control rather than automation, despite much slower performance and regardless of very poor controllability.
Article
We introduce TagMF, a model-based Collaborative Filtering method that aims at increasing transparency and offering richer interaction possibilities in current Recommender Systems. Model-based Collaborative Filtering is currently the most popular method that predominantly uses Matrix Factorization: This technique achieves high accuracy in recommending interesting items to individual users by learning latent factors from implicit feedback or ratings the community of users provided for the items. However, the model learned and the resulting recommendations can neither be explained, nor can users be enabled to influence the recommendation process except by rating (more) items. In TagMF, we enhance a latent factor model with additional content information, specifically tags users provided for the items. The main contributions of our method are to use this integrated model to elucidate the hidden semantics of the latent factors and to let users interactively control recommendations by changing the influence of the factors through easily comprehensible tags: Users can express their interests, interactively manipulate results, and critique recommended items—at cold-start when no historical data is yet available for a new user, as well as in case a long-term profile representing the current user’s preferences already exists. To validate our method, we performed offline experiments and conducted two empirical user studies where we compared a recommender that employs TagMF against two established baselines, standard Matrix Factorization based on ratings, and a purely tag-based interactive approach. This user-centric evaluation confirmed that enhancing a model-based method with additional information positively affects perceived recommendation quality. Moreover, recommendations were considered more transparent and users were more satisfied with their final choice. Overall, learning an integrated model and implementing the interactive features that become possible as an extension to contemporary systems with TagMF appears beneficial for the subjective assessment of several system aspects, the level of control users are able to exert over the recommendation process, as well as user experience in general.
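The following toy sketch conveys the general idea of steering a latent factor model through tags, in the spirit of, but not identical to, TagMF: each tag is associated with a direction in factor space, and boosting a tag shifts the user vector before items are re-scored. All matrices, item names, and weights are invented for the example; in a real system they would be learned from ratings and tag assignments.

```python
# Hand-wavy sketch of tag-based control over a latent factor model (illustrative
# toy values only, not the TagMF training procedure or model).
import numpy as np

item_factors = {                          # item -> latent factor vector (toy values)
    "Drama A":  np.array([0.9, 0.1, 0.2]),
    "Comedy B": np.array([0.1, 0.8, 0.3]),
    "SciFi C":  np.array([0.2, 0.2, 0.9]),
}
tag_directions = {                        # tag -> direction in factor space (toy values)
    "funny":      np.array([0.0, 1.0, 0.0]),
    "futuristic": np.array([0.0, 0.0, 1.0]),
}
user_vector = np.array([0.6, 0.2, 0.2])   # learned user profile (toy value)

def recommend(user_vec, boosts, top_n=3):
    """Re-score all items after shifting the user vector along boosted tag directions."""
    adjusted = user_vec.copy()
    for tag, weight in boosts.items():
        adjusted = adjusted + weight * tag_directions[tag]
    scores = {item: float(adjusted @ q) for item, q in item_factors.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(recommend(user_vector, boosts={}))               # baseline ranking
print(recommend(user_vector, boosts={"funny": 1.0}))   # user pushes the "funny" tag
```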
Conference Paper
While conventional Recommender Systems perform well in automatically generating personalized suggestions, it is often difficult for users to understand why certain items are recommended and which parts of the item space are covered by the recommendations. Also, the available means to influence the process of generating results are usually very limited. To alleviate these problems, we suggest a 3D map-based visualization of the entire item space in which we position and present sample items along with recommendations. The map is produced by mapping latent factors obtained from Collaborative Filtering data onto a 2D surface through Multidimensional Scaling. Then, areas that contain items relevant with respect to the current user's preferences are shown as elevations on the map, areas of low interest as valleys. In addition to the presentation of his or her preferences, the user may interactively manipulate the underlying profile by raising or lowering parts of the landscape, also at cold-start. Each change may lead to an immediate update of the recommendations. Using a demonstrator, we conducted a user study that, among others, yielded promising results regarding the usefulness of our approach.
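The two basic ingredients of such a map, a 2D projection of latent item factors and a per-item "elevation" derived from the current user's predicted preference, can be sketched in a few lines of Python. The random factor matrix and the use of scikit-learn's MDS are assumptions for illustration only, not the system described in the abstract.

```python
# Minimal sketch: project latent item factors to 2-D with Multidimensional Scaling,
# then derive an "elevation" per item from the user's predicted preference.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
item_factors = rng.normal(size=(50, 8))   # 50 items, 8 latent factors (toy data)
user_vector = rng.normal(size=8)          # current user's latent profile (toy data)

# 1) 2-D layout of the item space from the latent factors.
xy = MDS(n_components=2, random_state=0).fit_transform(item_factors)

# 2) Elevation = predicted relevance; peaks mark map regions the user should like.
elevation = item_factors @ user_vector

for i in np.argsort(elevation)[-3:][::-1]:
    print(f"item {i:2d} at ({xy[i, 0]:+.2f}, {xy[i, 1]:+.2f}), elevation {elevation[i]:+.2f}")
```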
Conference Paper
How can you discover something new that matches your interests? Recommender systems have been studied since the 1990s. Their benefit comes from guiding a user through the density of the information jungle to useful knowledge clearings. Early research on recommender systems focused on algorithms and their evaluation to improve recommendation accuracy using F-measures and other methodologies from signal-detection theory. Present research includes other aspects such as human factors that affect the user experience and interactive visualization techniques to support transparency of results and user control. In this paper, we analyze all publications on recommender systems from the Scopus database, and particularly papers with such an HCI focus. Based on an analysis of these papers, future topics for recommender systems research are identified, which include more advanced support for user control, adaptive interfaces, affective computing and applications in high-risk domains.
Conference Paper
To achieve high quality initial personalization, recommender systems must provide an efficient and effective process for new users to express their preferences. We propose that this goal is best served not by the classical method where users begin by expressing preferences for individual items; this process is an inefficient way to convert a user's effort into improved personalization. Rather, we propose that new users can begin by expressing their preferences for groups of items. We test this idea by designing and evaluating an interactive process where users express preferences across groups of items that are automatically generated by clustering algorithms. We contribute a strategy for recommending items based on these preferences that is generalizable to any collaborative filtering-based system. We evaluate our process with both offline simulation methods and an online user experiment. We find that, as compared with a baseline rate-15-items interface, (a) users are able to complete the preference elicitation process in less than half the time, and (b) users are more satisfied with the resulting recommended items. Our evaluation reveals several advantages and other trade-offs involved in moving from item-based preference elicitation to group-based preference elicitation.
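A toy sketch of group-based preference elicitation along these lines is shown below: items are clustered in a latent factor space with k-means, the new user rates whole clusters, and an initial profile is formed from the preference-weighted centroids. The data, the number of clusters, and the weighting scheme are illustrative assumptions, not the paper's exact strategy.

```python
# Minimal sketch of group-based preference elicitation under toy assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
item_factors = rng.normal(size=(200, 10))    # 200 items, 10 latent factors (toy data)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=1).fit(item_factors)

# The interface would show a few representative items per cluster; the new user
# rates each *group* once, e.g. on a 1-5 scale (values below are illustrative).
group_ratings = np.array([5, 1, 3, 4, 2], dtype=float)

# Initial user profile: cluster centroids weighted by mean-centred group ratings.
weights = group_ratings - group_ratings.mean()
user_vector = (weights[:, None] * kmeans.cluster_centers_).sum(axis=0)

# Recommend the items that score highest against the bootstrapped profile.
scores = item_factors @ user_vector
print("first recommendations:", np.argsort(scores)[-5:][::-1])
```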
Chapter
This chapter gives an overview of the area of explanations in recommender systems. We approach the literature from the angle of evaluation: that is, we are interested in what makes an explanation “good”. The chapter starts by describing how explanations can be affected by how recommendations are presented, and the role the interaction with the recommender system plays w.r.t. explanations. Next, we introduce a number of explanation styles, and how they are related to the underlying algorithms. We identify seven benefits that explanations may contribute to a recommender system, and relate them to criteria used in evaluations of explanations in existing recommender systems. We conclude the chapter with outstanding research questions and future work, including current recommender systems topics such as social recommendations and serendipity. Examples of explanations in existing systems are mentioned throughout.
Conference Paper
We present an approach to interactive recommending that combines the advantages of algorithmic techniques with the benefits of user-controlled, interactive exploration in a novel manner. The method extracts latent factors from a matrix of user rating data as commonly used in Collaborative Filtering, and generates dialogs in which the user iteratively chooses between two sets of sample items. Samples are chosen by the system for low and high values of each latent factor considered. The method positions the user in the latent factor space with few interaction steps, and finally selects items near the user position as recommendations. In a user study, we compare the system with three alternative approaches including manual search and automatic recommending. The results show significant advantages of our approach over the three competing alternatives in 15 out of 24 possible parameter comparisons, in particular with respect to item fit, interaction effort and user control. The findings corroborate our assumption that the proposed method achieves a good trade-off between automated and interactive functions in recommender systems.
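The core loop of such a choice-based dialog can be sketched as follows: for every latent factor the user picks between a low-value and a high-value sample set, the choice moves a point in factor space, and the nearest items become recommendations. Factor values and the simulated answers are toy assumptions, not the study's implementation.

```python
# Minimal sketch of a choice-based dialog over latent factors (toy data only).
import numpy as np

rng = np.random.default_rng(2)
item_factors = rng.normal(size=(100, 4))          # 100 items, 4 latent factors (toy data)

def sample_sets(factor, n=3):
    """Items with the lowest / highest values on one latent factor."""
    order = np.argsort(item_factors[:, factor])
    return order[:n], order[-n:]

user_position = np.zeros(item_factors.shape[1])
simulated_answers = ["high", "low", "high", "high"]   # stand-in for real user clicks

for factor, answer in enumerate(simulated_answers):
    low_items, high_items = sample_sets(factor)
    chosen = high_items if answer == "high" else low_items
    # Move the user towards the mean factor value of the chosen sample set.
    user_position[factor] = item_factors[chosen, factor].mean()

# Items closest to the final position in latent space become the recommendations.
distances = np.linalg.norm(item_factors - user_position, axis=1)
print("recommended items:", np.argsort(distances)[:5])
```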
Conference Paper
In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as well as the user's level of interest in wizardry. User feedback is required to discover these latent product and user dimensions. Such feedback often comes in the form of a numeric rating accompanied by review text. However, traditional methods often discard review text, which makes user and product latent dimensions difficult to interpret, since they ignore the very text that justifies a user's rating. In this paper, we aim to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA). Our approach has several advantages. Firstly, we obtain highly interpretable textual labels for latent rating dimensions, which helps us to 'justify' ratings with text. Secondly, our approach more accurately predicts product ratings by harnessing the information present in review text; this is especially true for new products and users, who may have too few ratings to model their latent factors, yet may still provide substantial information from the text of even a single review. Thirdly, our discovered topics can be used to facilitate other tasks such as automated genre discovery, and to identify useful and representative reviews.
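A heavily simplified sketch of the interpretability idea, not the paper's joint model, is given below: LDA topics fitted on review text serve as interpretable item features in a per-user linear model, and the top words of the most influential topic "justify" the predictions. The corpus, ratings, and hyperparameters are toy assumptions.

```python
# Very simplified alternative to the joint rating-topic model: review topics as
# interpretable item features for a per-user rating predictor (toy data only).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import Ridge

reviews = [                               # one (concatenated) review text per item
    "wizard magic school spells dragons",
    "romance love wedding heartbreak",
    "space ship aliens wizard laser",
    "love letters romance paris",
]
ratings_by_user = np.array([5.0, 2.0, 4.0, 1.0])   # one user's ratings of the 4 items

vectorizer = CountVectorizer().fit(reviews)
word_counts = vectorizer.transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(word_counts)
item_topics = lda.transform(word_counts)            # items x topics

model = Ridge(alpha=0.1).fit(item_topics, ratings_by_user)
top_topic = int(np.argmax(model.coef_))
top_word_ids = np.argsort(lda.components_[top_topic])[-3:][::-1]
vocab = vectorizer.get_feature_names_out()
print("this user's ratings are best explained by topic words:",
      [vocab[i] for i in top_word_ids])
```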
Article
Gives an overview of the origins, purposes, uses, and contributions of grounded theory methodology. Grounded theory is a general methodology for developing theory that is grounded in data systematically gathered and analyzed.
Article
Since their introduction in the early 1990s, automated recommender systems have revolutionized the marketing and delivery of commerce and content by providing personalized recommendations and predictions over a variety of large and complex product offerings. In this article, we review the key advances in collaborative filtering recommender systems, focusing on the evolution from research concentrated purely on algorithms to research concentrated on the rich set of questions around the user experience with the recommender. We show through examples that the embedding of the algorithm in the user experience dramatically affects the value to the user of the recommender. We argue that evaluating the user experience of a recommender requires a broader set of measures than have been commonly used, and suggest additional measures that have proven effective. Based on our analysis of the state of the field, we identify the most important open research problems, and outline key challenges slowing the advance of the state of the art, and in some cases limiting the relevance of research to real-world applications.
Article
In online shopping environments, the product-advising function originally performed by salespeople is being increasingly taken over by software-based product recommendation agents (PRAs). However, the literature has mostly focused on the functionality design and utilitarian value of such decision support systems, mostly ignoring the potential social influence they could exert on their users. The objective of this study is to apply a social relationship perspective to the design of interfaces for PRAs. We investigate the effects of applying anthropomorphic interfaces, namely humanoid embodiment and voice output, on users' perceived social relationship with a technological and software-based artifact designed for electronic commerce contexts. The findings from a laboratory experiment indicate that using humanoid embodiment and human voice-based communication significantly influences users' perceptions of social presence, which in turn enhances users' trusting beliefs, perceptions of enjoyment, and ultimately, their intentions to use the agent as a decision aid. These results extend the applicability of theories concerning traditional shopper-salesperson relationships to customers' interactions with technological artifacts residing on Web sites, that is, the recommendation agent software, and provide practitioners with guidelines on how to design Internet stores with the goal of building social relationships with online shoppers and enhancing their overall shopping experiences.