December 2022
·
56 Reads
·
12 Citations
International Journal of Human-Computer Studies
August 2021
·
43 Reads
·
6 Citations
Lecture Notes in Computer Science
How users interact with an intelligent system is determined by their subjective mental model of the system's inner workings. In this paper, we present a novel method based on card sorting to quantitatively identify such mental models of recommender systems. Using this method, we conducted an online study. Applying hierarchical clustering to the results revealed distinct user groups and their respective mental models. Independent of the recommender system used, some participants held a strictly procedural-based mental model, others a concept-based one. Additionally, mental models can be characterized as either technical or humanized. While procedural-based mental models were positively related to transparency perception, humanized models might influence the perception of system trust. Based on these findings, we derive three implications for considering user-specific mental models in the design of transparent intelligent systems.
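As a rough illustration of the quantitative pipeline the abstract describes, the following Python sketch clusters hypothetical card-sort results with SciPy's agglomerative hierarchical clustering. The pile encoding and the pairwise-disagreement distance are assumptions made for illustration, not the authors' exact procedure.

```python
# Minimal sketch: clustering card-sort results to find groups of
# participants with similar mental models (hypothetical data layout,
# not the study's actual pipeline).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Each row is one participant; each column is one card (a statement about
# the system), with the value giving the pile the card was sorted into.
sorts = np.array([
    [0, 0, 1, 1, 2],   # participant 1
    [0, 0, 1, 2, 2],   # participant 2
    [1, 0, 0, 2, 2],   # participant 3
    [2, 2, 1, 0, 0],   # participant 4
])

# Dissimilarity between two participants: fraction of card pairs on which
# they disagree about whether the two cards belong in the same pile.
def card_sort_distance(a, b):
    n = len(a)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    disagree = sum((a[i] == a[j]) != (b[i] == b[j]) for i, j in pairs)
    return disagree / len(pairs)

dist = pdist(sorts, metric=card_sort_distance)
tree = linkage(dist, method="average")          # agglomerative clustering
groups = fcluster(tree, t=2, criterion="maxclust")
print(groups)                                   # one cluster label per participant
```

Cutting the dendrogram at different heights then yields candidate user groups whose shared sorting patterns can be interpreted as distinct mental models.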
July 2020
·
213 Reads
·
93 Citations
While online content is personalized to an increasing degree, e.g., using recommender systems (RS), the rationale behind personalization and how users can adjust it typically remains opaque. This has often been observed to have negative effects on the user experience and the perceived quality of RS. As a result, research has increasingly taken user-centric aspects such as transparency and control of an RS into account when assessing its quality. However, we argue that too little of this research has investigated the users' perception and understanding of RS in their entirety. In this paper, we explore the users' mental models of RS. More specifically, we followed the qualitative grounded theory methodology and conducted 10 semi-structured face-to-face interviews with typical and regular Netflix users. During the interviews, participants expressed high levels of uncertainty and confusion about the RS in Netflix. Consequently, we found a broad range of different mental models. Nevertheless, we also identified a general structure underlying all of these models, consisting of four steps: data acquisition, inference of user profile, comparison of user profiles or items, and generation of recommendations. Based on our findings, we discuss implications for designing more transparent, controllable, and user-friendly RS in the future.
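The four-step structure identified in the interviews maps naturally onto a toy collaborative-filtering pipeline. The sketch below, with synthetic ratings and cosine similarity as the comparison step, is purely illustrative of that structure, not of Netflix's actual system.

```python
# Toy sketch of the four-step structure the interviews surfaced
# (data acquisition -> user profile -> comparison -> recommendations);
# all data is made up for illustration.
import numpy as np

# 1. Data acquisition: ratings per user (rows) and item (columns).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
], dtype=float)

# 2. Inference of a user profile: here simply the user's rating vector.
target = ratings[0]

# 3. Comparison of user profiles via cosine similarity.
def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

sims = np.array([cosine(target, other) for other in ratings])
sims[0] = 0.0                      # ignore the target user itself

# 4. Generation of recommendations: similarity-weighted scores for
#    items the target user has not rated yet.
scores = sims @ ratings
scores[target > 0] = -np.inf       # mask already-rated items
print("recommend item", int(np.argmax(scores)))
```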
July 2020
·
42 Reads
·
16 Citations
September 2019
·
33 Reads
·
1 Citation
Recommender systems (RS) are very common tools designed to help users choose items from a large number of alternatives. While their algorithms are already quite mature in terms of precision, RS cannot unfold their full potential due to a lack of transparency and means of control. In this paper, we introduce a method aimed at creating recommendations that are comprehensible and controllable by users while providing an overview of the item domain. To achieve this, the entire item space of a domain is visualized using a map-like interface. Within this map, users can express their preferences, to which the RS reacts with matching recommendations. To change the recommendations, users can alter the preferences they expressed, creating a continuous feedback loop between user and RS. We demonstrate our general method using two prototype applications, located in different item domains and utilizing different forms of visualization and interaction modalities. Empirical user studies with both prototypes show great potential of our method to increase overview, transparency, and control in RS.
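To make the map idea concrete, here is a minimal sketch: project item vectors into 2-D (here via scikit-learn's MDS, an assumption; the prototypes may use other projections) and recommend the items nearest a user-placed preference marker. Moving the marker re-ranks the list, which is the feedback loop described above.

```python
# Minimal sketch of the map idea: project an item space to 2-D and
# recommend items close to a user-placed preference point. The item
# vectors and the MDS projection are stand-ins, not the prototypes'
# actual visualizations.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
items = rng.normal(size=(50, 8))            # 50 items, 8 latent features

# Lay the items out as a 2-D "map" preserving pairwise distances.
coords = MDS(n_components=2, random_state=0).fit_transform(items)

def recommend(preference_xy, k=5):
    """Return indices of the k items nearest the preference marker."""
    d = np.linalg.norm(coords - preference_xy, axis=1)
    return np.argsort(d)[:k]

# The user drops a marker on the map; moving it re-ranks the
# recommendations, forming the continuous feedback loop.
print(recommend(np.array([0.0, 0.0])))
```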
September 2019
·
18 Reads
Recommender systems that generate recommendations using latent factor models are highly accurate and accordingly widespread. However, because the recommendations are computed through statistical analysis of user ratings, it is difficult to explain them to the user. The systems are therefore often perceived as opaque and frequently cannot unfold their full potential. Initial approaches show, however, that the latent factors of such models reflect semantic properties of the items. It has so far been unclear whether the sometimes very complex parameterization, which determines, for example, the number of factors, affects semantic comprehensibility. Since this strongly depends on subjective perception, we present LittleMissFits, an online game that allows the consistency of the latent factors to be examined via crowdsourcing. The results of a user study with this game show that a higher number of factors makes the model appear less comprehensible. Furthermore, we found differences within the factor models regarding the comprehensibility of individual factors. Taken together, these results provide a valuable basis for increasing the transparency of such recommender systems in the future.
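To illustrate the parameterization question, the following sketch derives latent item factors of different rank from a synthetic rating matrix and lists the top-loading items per factor, the kind of factor sample whose comprehensibility the game lets players judge. Plain SVD is a simplification here: it makes the lower-rank model a truncation of the higher-rank one, unlike independently trained factor models.

```python
# Sketch: derive latent item factors of different rank k from a rating
# matrix and list the top-loading items per factor. The rating matrix
# is synthetic; the study's models were learned from real rating data.
import numpy as np

rng = np.random.default_rng(1)
R = rng.integers(1, 6, size=(100, 40)).astype(float)   # users x items

# Factorize the mean-centered rating matrix; Vt rows are item factors.
U, s, Vt = np.linalg.svd(R - R.mean(), full_matrices=False)

for k in (3, 10):                                      # compare factor counts
    for f in range(k):
        top = np.argsort(-np.abs(Vt[f]))[:3]           # strongest loadings
        print(f"rank {k}, factor {f}: top items {top}")
```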
April 2019
·
1,276 Reads
·
176 Citations
Trust in a Recommender System (RS) is crucial for its overall success. However, it remains underexplored whether users trust personal recommendation sources (i.e., other humans) more than impersonal sources (i.e., conventional RS), and, if they do, whether the perceived quality of the explanations provided accounts for the difference. We conducted an empirical study in which we compared these two sources of recommendations and explanations. Human advisors were asked to explain the movies they recommended in short texts, while the RS created explanations based on item similarity. Our experiment comprised two rounds of recommending. Across both rounds, the quality of the explanations provided by the human advisors was rated higher than the quality of the system's explanations. Moreover, explanation quality significantly influenced perceived recommendation quality as well as trust in the recommendation source. Consequently, we suggest that RS should provide richer explanations in order to increase their perceived recommendation quality and trustworthiness.
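A minimal sketch of the item-similarity explanation style used in the system condition: justify a candidate by the most similar item the user already liked. Titles and feature vectors are made up for illustration.

```python
# Sketch of an item-similarity explanation: recommend a movie and
# explain it via its most similar liked movie. Data is hypothetical.
import numpy as np

items = {
    "The Matrix":   np.array([0.9, 0.1, 0.8]),
    "Inception":    np.array([0.8, 0.2, 0.9]),
    "Notting Hill": np.array([0.1, 0.9, 0.1]),
}
liked = ["The Matrix"]
candidate = "Inception"

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Pick the liked item most similar to the candidate as the anchor.
best = max(liked, key=lambda t: cosine(items[t], items[candidate]))
print(f"We recommend '{candidate}' because it is similar to "
      f"'{best}', which you rated highly.")
```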
October 2018
·
15 Reads
Recommender systems relying on latent factor models often appear as black boxes to their users. Semantic descriptions of the factors might help to mitigate this problem. Deriving such descriptions automatically is, however, not straightforward due to the models' statistical nature. We present an output-agreement game that represents factors by means of sample items and motivates players to create such descriptions. A user study shows that the collected output actually reflects real-world characteristics of the factors.
October 2018
·
73 Reads
·
7 Citations
Recommender systems relying on latent factor models often appear as black boxes to their users. Semantic descriptions of the factors might help to mitigate this problem. Deriving such descriptions automatically is, however, not straightforward due to the models' statistical nature. We present an output-agreement game that represents factors by means of sample items and motivates players to create such descriptions. A user study shows that the collected output actually reflects real-world characteristics of the factors.
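A sketch of representing a latent factor through sample items, as the game does: pick the items loading most strongly on each end of the factor axis. The factor matrix here is random; in the game it would come from a trained model, and the sampling strategy is an assumption for illustration.

```python
# Sketch: present a latent factor through its most strongly loading
# items, the material players describe in the output-agreement game.
# Factor matrix and titles are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
item_factors = rng.normal(size=(200, 12))      # items x latent factors
titles = [f"item_{i}" for i in range(200)]     # placeholder titles

def samples_for_factor(f, n=4):
    """Items loading most strongly on factor f, from both ends of the axis."""
    order = np.argsort(item_factors[:, f])
    low, high = order[:n], order[-n:][::-1]
    return [titles[i] for i in high], [titles[i] for i in low]

high, low = samples_for_factor(0)
print("strongly positive:", high)
print("strongly negative:", low)
```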
September 2018
·
17 Reads
·
3 Citations
Recommender systems based on latent factor models are known to generate very accurate suggestions. However, users often perceive these systems as opaque. Semantic descriptions of the latent factors could help mitigate this problem. Deriving such descriptions automatically is difficult, though, because the factors are statistically derived from numerical rating data. In this paper, we present an output-agreement game that motivates players to create descriptions of the factors based on representative items. A user study shows that the game is very enjoyable and that the collected descriptions reflect real-world characteristics of the factors.
... We continued our analysis by assessing the visual clutter of the tested idioms. Similar to recent research in InfoVis (e.g., Flittner and Gabbard, 2021; Locoro et al., 2023; Kunkel and Ziegler, 2023), we measured the visual clutter through the feature congestion algorithm developed by Rosenholtz et al. (2007). The algorithm runs on arbitrary images (Rosenholtz and Jin, 2005), and its analogy is that the more cluttered a display is, the more difficult it is to introduce a visually salient object. ...
December 2022
International Journal of Human-Computer Studies
... Technical Expertise: In this work, technical expertise (TE) refers to users' knowledge about artificial intelligence and recommender systems. To measure TE, this study adopted the scale used in the work by Kunkel et al. [19]. ...
August 2021
Lecture Notes in Computer Science
... Harambam et al., 2019). This can create distrust (Kunkel et al., 2020). It also makes it difficult for users to understand the trade-offs involved in any form of recommender system. ...
July 2020
... Several studies have experimented with the capabilities of LLMs by employing techniques like parameter-efficient tuning or instruction-based tuning to tailor recommendations. Some researchers have also transformed various recommendation scenarios into unified tasks of natural language generation, optimizing these models through multi-task learning frameworks such as P6 (Ngo et al. 2020). A notable development is the TIGER method (Rajput et al. 2024), which utilizes an RQ-VAE for constructing generative identification descriptors, followed by employing encoder-decoder based transformers for sequential recommendation. ...
July 2020
... For example, the factor values for individual items can be visualized [16], or entire item spaces can be displayed on the basis of the latent factors [2,10]. Equally positive experiences have been made with crowd-based approaches in which users identify the meaning behind the latent factors [11,12]. The use of games with a purpose (GWAP) seems particularly well suited for this. ...
September 2018
... Map-based interfaces can resolve some of the mentioned visualization challenges, including transparency, explorability, and context-awareness [1,7]. [7] proposes a method to make recommendations more comprehensible and controllable even in application areas with items that are not location-based (e.g. ...
September 2019
... Explanations may be less effective in helping users detect AI errors [44], partly due to increased information load [43], which can reduce appropriate reliance and task performance [50,55]. Beyond the AI-generated response, the perceived quality, credibility, and informativeness of explanations influence trust and reliance on AI [11,23]. ...
April 2019
... This does not mean that the RS has to solely rely on content-based filtering though. There is some research on how to combine collaborative filtering with content data [20,21,23], which could be used to make systems based on collaborative filtering more transparent using the content of items. When communicating the relation of preferences and recommendations adequately, it can also be used to exert control over recommendations (see, e.g., [1,19]). ...
October 2018
... In other works, these interactions have been dubbed 'user control mechanisms' [11,16,19,37]. It is exactly because of these algorithmic affordances that AI and humans meet, and it is through these interactions that the user's experience of qualities such as transparency, control and trust instantiate [10, 20, 38-41]. ...
March 2018
... Programming-by-example tools enable users to provide input-output examples, with the system generating a function that fits these examples [8,73,80]. Similarly, recommender systems allow users to steer outputs through limited feedback [5,44,56], such as adjusting a 2D plane to influence movie recommendations [31]. Teachable Machines also offer an interactive approach, allowing users to train ML models by supplying labeled examples, with real-time feedback facilitating iterative refinement [7]. ...
March 2017