Recommender Systems Handbook
Abstract
The explosive growth of e-commerce and online environments has made the issue of information search and selection increasingly serious; users are overloaded by options to consider and they may not have the time or knowledge to personally evaluate these options. Recommender systems have proven to be a valuable way for online users to cope with information overload and have become one of the most powerful and popular tools in electronic commerce. Correspondingly, various techniques for recommendation generation have been proposed. During the last decade, many of them have also been successfully deployed in commercial environments. Recommender Systems Handbook, an edited volume, is a multi-disciplinary effort that involves world-wide experts from diverse fields, such as artificial intelligence, human-computer interaction, information technology, data mining, statistics, adaptive user interfaces, decision support systems, marketing, and consumer behavior. Theoreticians and practitioners from these fields continually seek techniques for more efficient, cost-effective and accurate recommender systems. This handbook aims to impose a degree of order on this diversity by presenting a coherent and unified repository of recommender systems' major concepts, theories, methodologies, trends, challenges and applications. Extensive artificial intelligence techniques, a variety of real-world applications, and detailed case studies are included. Recommender Systems Handbook illustrates how this technology can support the user in decision-making, planning and purchasing processes, and how it is used by well-known corporations such as Amazon, Google, Microsoft and AT&T. This handbook is suitable as a reference for researchers and advanced-level students in computer science.
Chapters (25)
Recommender Systems (RSs) are software tools and techniques providing suggestions for items to be of use to a user. In this
introductory chapter we briefly discuss basic RS ideas and concepts. Our main goal is to delineate, in a coherent and structured
way, the chapters included in this handbook and to help the reader navigate the extremely rich and detailed content that the
handbook offers.
In this chapter, we give an overview of the main Data Mining techniques used in the context of Recommender Systems. We first
describe common preprocessing methods such as sampling or dimensionality reduction. Next, we review the most important classification
techniques, including Bayesian Networks and Support Vector Machines. We describe the k-means clustering algorithm and discuss several alternatives. We also present association rules and related algorithms for
an efficient training process. In addition to introducing these techniques, we survey their uses in Recommender Systems and
present cases where they have been successfully applied.
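To make the clustering discussion concrete, the following is a minimal, self-contained Python sketch (not taken from the chapter) of k-means applied to a toy user-rating matrix; the data, the choice of k, and the stopping criterion are illustrative assumptions only.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Minimal k-means: cluster the rows of X into k groups."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned points.
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Toy example: cluster users by their (dense) rating vectors.
ratings = np.array([[5, 4, 0, 1], [4, 5, 1, 0],
                    [0, 1, 5, 4], [1, 0, 4, 5]], dtype=float)
labels, _ = kmeans(ratings, k=2)
print(labels)  # two user neighbourhoods, e.g. [0 0 1 1]
```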
Recommender systems have the effect of guiding users in a personalized way to interesting objects in a large space of possible
options. Content-based recommendation systems try to recommend items similar to those a given user has liked in the past. Indeed, the basic process
performed by a content-based recommender consists in matching up the attributes of a user profile in which preferences and
interests are stored, with the attributes of a content object (item), in order to recommend to the user new interesting items.
This chapter provides an overview of content-based recommender systems, with the aim of imposing a degree of order on the
diversity of the different aspects involved in their design and implementation. The first part of the chapter presents the
basic concepts and terminology of content-based recommender systems, a high level architecture, and their main advantages and
drawbacks. The second part of the chapter provides a review of the state of the art of systems adopted in several application
domains, by thoroughly describing both classical and advanced techniques for representing items and user profiles. The most
widely adopted techniques for learning user profiles are also presented. The last part of the chapter discusses trends and
future research which might lead towards the next generation of systems, by describing the role of User Generated Content
as a way for taking into account evolving vocabularies, and the challenge of feeding users with serendipitous recommendations,
that is to say surprisingly interesting items that they might not have otherwise discovered.
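As an illustration of the matching step described above (not the chapter's own implementation), the following Python sketch builds a TF-IDF profile from a hypothetical user's liked items and ranks the remaining items by cosine similarity; the catalogue and the "mean of liked items" profile are assumptions made for the example.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical item descriptions and a user's liked items (illustrative only).
items = {
    "i1": "space opera science fiction adventure",
    "i2": "romantic comedy set in paris",
    "i3": "hard science fiction about artificial intelligence",
    "i4": "historical drama about the french revolution",
}
liked = ["i1", "i3"]  # items the user rated positively

vec = TfidfVectorizer()
item_matrix = vec.fit_transform(items.values())
idx = {iid: n for n, iid in enumerate(items)}

# The user profile is the mean TF-IDF vector of the liked items.
profile = np.asarray(item_matrix[[idx[i] for i in liked]].mean(axis=0))

# Rank unseen items by cosine similarity between the profile and item vectors.
scores = cosine_similarity(profile, item_matrix).ravel()
ranking = sorted((iid for iid in items if iid not in liked),
                 key=lambda iid: scores[idx[iid]], reverse=True)
print(ranking)  # items most similar to what the user has liked come first
```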
Among collaborative recommendation approaches, methods based on nearest-neighbors still enjoy a huge amount of popularity,
due to their simplicity, their efficiency, and their ability to produce accurate and personalized recommendations. This chapter
presents a comprehensive survey of neighborhood-based methods for the item recommendation problem. In particular, the main
benefits of such methods, as well as their principal characteristics, are described. Furthermore, this document addresses
the essential decisions that are required while implementing a neighborhood-based recommender system, and gives practical
information on how to make such decisions. Finally, the problems of sparsity and limited coverage, often observed in large
commercial recommender systems, are discussed, and a few solutions to overcome these problems are presented.
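A minimal Python sketch of the user-based neighborhood idea is given below; it is an illustrative baseline rather than the chapter's reference implementation, and the toy matrix, the Pearson-style similarity, and the choice of k are assumptions for the example.

```python
import numpy as np

# Toy user-item rating matrix (0 = unknown rating).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 4, 4],
], dtype=float)

def predict_user_based(R, u, i, k=2):
    """Predict R[u, i] from the k most similar users who rated item i."""
    mask = R > 0
    means = np.array([R[v][mask[v]].mean() for v in range(len(R))])
    sims = []
    for v in range(len(R)):
        if v == u or not mask[v, i]:
            continue
        common = mask[u] & mask[v]          # items rated by both users
        if common.sum() < 2:
            continue
        a, b = R[u, common] - means[u], R[v, common] - means[v]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom > 0:
            sims.append((a @ b / denom, v))  # Pearson-style similarity
    sims.sort(reverse=True)
    top = sims[:k]
    if not top:
        return means[u]
    num = sum(s * (R[v, i] - means[v]) for s, v in top)
    den = sum(abs(s) for s, _ in top)
    return means[u] + num / den              # mean-centered weighted average

print(round(predict_user_based(R, u=1, i=2), 2))
```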
The collaborative filtering (CF) approach to recommenders has recently enjoyed much interest and progress. The fact that it
played a central role within the recently completed Netflix competition has contributed to its popularity. This chapter surveys
the recent progress in the field. Matrix factorization techniques, which became a first choice for implementing CF, are described
together with recent innovations. We also describe several extensions that bring competitive accuracy into neighborhood methods,
which used to dominate the field. The chapter demonstrates how to utilize temporal models and implicit feedback to improve model accuracy. In passing, we include detailed descriptions of some of the central methods developed for tackling the challenge of the Netflix Prize competition.
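The following short Python sketch shows one common way to fit such a factorization with stochastic gradient descent on observed ratings; it is an illustrative toy version (the hyperparameters and data are assumptions), not the methods developed for the Netflix Prize.

```python
import numpy as np

def factorize(ratings, n_factors=2, n_epochs=200, lr=0.01, reg=0.05, seed=0):
    """Fit R ~ P @ Q.T on observed (user, item, rating) triples via SGD."""
    rng = np.random.default_rng(seed)
    n_users = max(u for u, _, _ in ratings) + 1
    n_items = max(i for _, i, _ in ratings) + 1
    P = 0.1 * rng.standard_normal((n_users, n_factors))   # user factors
    Q = 0.1 * rng.standard_normal((n_items, n_factors))   # item factors
    for _ in range(n_epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]                          # prediction error
            pu = P[u].copy()
            P[u] += lr * (err * Q[i] - reg * P[u])         # regularized updates
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 1), (2, 2, 5)]
P, Q = factorize(ratings)
print(round(P[0] @ Q[2], 2))  # predicted rating of user 0 for item 2
```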
Traditional recommendation approaches (content-based filtering [48] and collaborative filtering[40]) are well-suited for the
recommendation of quality & taste products such as books, movies, or news. However, especially in the context of products such as cars, computers, apartments, or financial services, those approaches are not the best choice (see also Chapter 11). For
example, apartments are not bought very frequently which makes it rather infeasible to collect numerous ratings for one specific
item (exactly such ratings are required by collaborative recommendation algorithms). Furthermore, users of recommender applications
would not be satisfied with recommendations based on years-old item preferences (exactly such preferences would be exploited
in this context by content-based filtering algorithms).
The importance of contextual information has been recognized by researchers and practitioners in many disciplines, including
e-commerce personalization, information retrieval, ubiquitous and mobile computing, data mining, marketing, and management.
While a substantial amount of research has already been performed in the area of recommender systems, most existing approaches
focus on recommending the most relevant items to users without taking into account any additional contextual information,
such as time, location, or the company of other people (e.g., for watching movies or dining out). In this chapter we argue
that relevant contextual information does matter in recommender systems and that it is important to take this information
into account when providing recommendations. We discuss the general notion of context and how it can be modeled in recommender
systems. Furthermore, we introduce three different algorithmic paradigms – contextual pre-filtering, post-filtering, and modeling – for incorporating contextual information into the recommendation process, discuss the possibilities of combining several context-aware recommendation techniques into a single unifying approach, and provide a case study of one such combined approach.
Finally, we present additional capabilities for context-aware recommenders and discuss important and promising directions
for future research.
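As a rough illustration of the pre-filtering paradigm only (the chapter covers all three), the sketch below keeps the ratings that match the target context and then applies an ordinary, context-free predictor to the reduced data; the rating log and the trivial item-average predictor are assumptions for the example.

```python
# Hypothetical rating log: (user, item, rating, context) tuples.
log = [
    ("alice", "m1", 5, "weekend"), ("alice", "m2", 2, "weekday"),
    ("bob",   "m1", 4, "weekend"), ("bob",   "m3", 5, "weekend"),
    ("carol", "m3", 4, "weekday"),
]

def prefilter(log, context):
    """Contextual pre-filtering: keep only ratings from the target context,
    then hand the reduced 2D (user, item, rating) data to any standard
    recommender (here, a trivial item-average predictor)."""
    filtered = [(u, i, r) for u, i, r, c in log if c == context]
    averages = {}
    for _, i, r in filtered:
        averages.setdefault(i, []).append(r)
    return {i: sum(rs) / len(rs) for i, rs in averages.items()}

print(prefilter(log, "weekend"))  # context-specific item scores
```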
Recommender systems are now popular both commercially and in the research community, where many approaches have been suggested for providing
recommendations. In many cases a system designer who wishes to employ a recommendation system must choose between a set of
candidate approaches. A first step towards selecting an appropriate algorithm is to decide which properties of the application
to focus upon when making this choice. Indeed, recommendation systems have a variety of properties that may affect user experience,
such as accuracy, robustness, scalability, and so forth. In this chapter we discuss how to compare recommenders based on a set
of properties that are relevant for the application. We focus on comparative studies, where a few algorithms are compared
using some evaluation metric, rather than absolute benchmarking of algorithms. We describe experimental settings appropriate
for making choices between algorithms. We review three types of experiments, starting with an offline setting, where recommendation
approaches are compared without user interaction, then reviewing user studies, where a small group of subjects experiment
with the system and report on the experience, and finally describing large-scale online experiments, where real user populations
interact with the system. In each of these cases we describe types of questions that can be answered, and suggest protocols
for experimentation. We also discuss how to draw trustworthy conclusions from the conducted experiments. We then review a
large set of properties, and explain how to evaluate systems given relevant properties. We also survey a large set of evaluation
metrics in the context of the properties that they evaluate.
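Two of the most common offline measures used in such evaluations can be computed in a few lines; the following Python sketch of RMSE and precision-at-k is purely illustrative and uses made-up predictions and relevance judgments.

```python
import math

def rmse(predicted, actual):
    """Root mean squared error over rating predictions (offline accuracy)."""
    errs = [(p - a) ** 2 for p, a in zip(predicted, actual)]
    return math.sqrt(sum(errs) / len(errs))

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are actually relevant."""
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / k

print(rmse([3.5, 4.0, 2.0], [4, 4, 1]))                   # lower is better
print(precision_at_k(["a", "b", "c"], {"a", "c"}, k=3))   # higher is better
```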
In this chapter we describe the integration of a recommender system into the production environment of Fastweb, one of the
largest European IP Television (IPTV) providers. The recommender system implements both collaborative and content-based techniques,
suitably tailored to the specific requirements of an IPTV architecture, such as the limited screen definition, the reduced
navigation capabilities, and the strict time constraints. The algorithms are extensively analyzed by means of off-line and
on-line tests, showing the effectiveness of the recommender system: up to 30% of the recommendations are followed by a purchase,
with an estimated lift factor (increase in sales) of 15%.
A personalised system is a complex system made of many interacting parts, from data ingestion to presenting the results to
the users. A plethora of methods, tools, algorithms and approaches exist for each piece of such a system: many data and metadata
processing methods, many user models, many filtering techniques, many accuracy metrics, many personalisation levels. In addition,
a real-world recommender is a piece of an even larger and more complex environment over which there is little control: often
the recommender is part of a larger application introducing constraints for the design of the recommender, e.g. the data may
not be in a suitable format, or the environment may impose some architectural or privacy constraints. This can make the task
of building such a recommender system daunting, and it is easy to make errors. Based on the experience of the authors and
the study of other works, this chapter intends to be a guide on the design, implementation and evaluation of personalised
systems. It presents the different aspects that must be studied before the design is even started, and how to avoid pitfalls,
in a hands-on approach. The chapter presents the main factors to take into account to design a recommender system, and illustrates
them through case studies of existing systems to help navigate in the many and complex choices that have to be faced.
Recommender systems form an extremely diverse body of technologies and approaches. The chapter aims to assist researchers
and developers to identify the recommendation technologies that are most likely to be applicable to different domains of recommendation.
Unlike other taxonomies of recommender systems, our approach is centered on the question of knowledge: what knowledge does
a recommender system need in order to function, and where does that knowledge come from? Different recommendation domains
(books vs condominiums, for example) provide different opportunities for the gathering and application of knowledge. These
considerations give rise to a mapping between domain characteristics and recommendation technologies.
Technology enhanced learning (TEL) aims to design, develop and test socio-technical innovations that will support and enhance
learning practices of both individuals and organisations. It is therefore an application domain that generally covers technologies
that support all forms of teaching and learning activities. Since information retrieval (in terms of searching for relevant
learning resources to support teachers or learners) is a pivotal activity in TEL, the deployment of recommender systems has
attracted increased interest. This chapter attempts to provide an introduction to recommender systems for TEL settings, as
well as to highlight their particularities compared to recommender systems for other application domains.
Over the past decade a significant amount of recommender systems research has demonstrated the benefits of conversational
architectures that employ critique-based interfacing (e.g., “Show me more like item A, but cheaper”). The critiquing phenomenon has attracted great interest in line with the growing need for more sophisticated decision/recommendation
support systems to assist online users who are overwhelmed by multiple product alternatives. Originally proposed as a powerful
yet practical solution to the preference elicitation problem central to many conversational recommenders, critiquing has proved
to be a popular topic in a variety of related areas (e.g., group recommendation, mixed-initiative recommendation, adaptive
user interfacing, recommendation explanation). This chapter aims to provide a comprehensive, yet concise, source of reference
for researchers and practitioners starting out in this area. Specifically, we present a deliberately non-technical overview
of the critiquing research which has been covered in recent years.
Whether users are likely to accept the recommendations provided by a recommender system is of utmost importance to system
designers and the marketers who implement them. By conceptualizing the advice seeking and giving relationship as a fundamentally
social process, important avenues for understanding the persuasiveness of recommender systems open up. Specifically, research
regarding the influence of source characteristics, which is abundant in the context of human-human relationships, can provide
an important framework for identifying potential influence factors. This chapter reviews the existing literature on source
characteristics in the context of human-human, human-computer, and human-recommender system interactions. It concludes that
many social cues that have been identified as influential in other contexts have yet to be implemented and tested with respect
to recommender systems. Implications for recommender system research and design are discussed.
This chapter gives an overview of the area of explanations in recommender systems. We approach the literature from the angle
of evaluation: that is, we are interested in what makes an explanation “good”, and suggest guidelines for how best to evaluate
this. We identify seven benefits that explanations may contribute to a recommender system, and relate them to criteria used
in evaluations of explanations in existing systems, and how these relate to evaluations with live recommender systems. We
also discuss how explanations can be affected by how recommendations are presented, and the role the interaction with the
recommender system plays w.r.t. explanations. Finally, we describe a number of explanation styles, and how they may be related
to the underlying algorithms. Examples of explanations in existing systems are mentioned throughout.
Over the past decade, our group has developed a suite of decision tools based on example critiquing to help users find their preferred products in e-commerce environments. In this chapter, we survey important usability research work relative to example critiquing and summarize the major results by deriving a set of usability guidelines. Our survey is focused on three key interaction activities between the user and the system: the initial preference elicitation process, the preference revision process, and the presentation of the system’s recommendation results. To provide a basis for the derivation of the guidelines, we developed a multi-objective framework of three interacting criteria: accuracy, confidence, and effort (ACE). We use this framework to analyze our past work and provide a specific context for each guideline: when the system should maximize its ability to increase users’ decision accuracy, when to increase user confidence, and when to minimize the interaction effort for the users. Due to the general nature of this multi-criteria model, the set of guidelines that we propose can be used to ease the usability engineering process of other recommender systems, especially those used in e-commerce environments. The ACE framework presented here is also the first in the field to evaluate the performance of preference-based recommenders from a user-centric point of view.
Designers can use these guidelines for the implementation of an effective and successful product recommender.
Traditionally, recommender systems present recommendations in ranked lists to the user. In content- and knowledge-based recommender
systems, these lists are often sorted on some notion of similarity with a query, ideal product specification, or sample product.
However, a lot of information is lost in this way, since two products with the same similarity to a query can differ from
this query on a completely different set of product characteristics. When using a two-dimensional map-based visualization
of the recommendations, it is possible to retain part of this information. In the map, we can then position recommendations
that are similar to each other in the same area of the map.
Web search engines are the primary means by which millions of users access information every day, and the sheer scale and success
of the leading search engines is a testimony to the scientific and engineering progress that has been made over the last ten
years. However, mainstream search engines continue to deliver largely one-size-fits-all services to their user-base, ultimately limiting the relevance of their result-lists. In this chapter we will explore recent
research that is seeking to make Web search a more personal and collaborative experience as we look towards a new breed of
more social search engines.
The new generation of Web applications known as Social Tagging Systems (STS) is successfully established and poised for continued growth. STS are
open and inherently social; features that have been proven to encourage participation. But while STS bring new opportunities,
they revive old problems, such as information overload. Recommender Systems are well known applications for increasing the
level of relevant content over the “noise” that continuously grows as more and more content becomes available online. In STS
however, we face new challenges. Users are interested in finding not only content, but also tags and even other users. Moreover,
while traditional recommender systems usually operate over 2-way data arrays, STS data is represented as a third-order tensor
or a hypergraph with hyperedges denoting (user, resource, tag) triples. In this chapter, we survey the most recent and state-of-the-art
work on a whole new generation of recommender systems built to serve STS. We describe (a) novel facets of recommenders for
STS, such as user, resource, and tag recommenders, (b) new approaches and algorithms for dealing with the ternary nature of
STS data, and (c) recommender systems deployed in real world STS. Moreover, a concise comparison between existing works is
presented, through which we identify and point out new research directions.
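As a toy illustration of working directly with (user, resource, tag) triples — far simpler than the tensor and hypergraph models the chapter surveys — the following Python sketch recommends tags by mixing a resource's popular tags with the target user's own vocabulary; the data and the 0.5 weighting are assumptions made for the example.

```python
from collections import Counter

# Hypothetical (user, resource, tag) assignments from a social tagging system.
triples = [
    ("u1", "r1", "python"), ("u1", "r1", "tutorial"),
    ("u2", "r1", "python"), ("u2", "r2", "ml"),
    ("u3", "r2", "ml"),     ("u3", "r2", "python"),
]

def recommend_tags(triples, user, resource, k=2):
    """Suggest tags for (user, resource) by mixing the resource's most
    frequent tags with the user's own tagging vocabulary -- a simple
    baseline over the ternary data, before resorting to tensor models."""
    resource_tags = Counter(t for u, r, t in triples if r == resource and u != user)
    user_tags = Counter(t for u, r, t in triples if u == user)
    scores = Counter()
    for t, c in resource_tags.items():
        scores[t] += c            # tags other users gave this resource
    for t, c in user_tags.items():
        scores[t] += 0.5 * c      # tags this user tends to reuse
    return [t for t, _ in scores.most_common(k)]

print(recommend_tags(triples, "u3", "r1"))
```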
Recommendation technologies and trust metrics constitute the two pillars of trust-enhanced recommender systems. We discuss
and illustrate the basic trust concepts such as trust and distrust modeling, propagation and aggregation. These concepts are
needed to fully grasp the rationale behind the trust-enhanced recommender techniques that are discussed in the central part
of the chapter, which focuses on the application of trust metrics and their operators in recommender systems. We explain the
benefits of using trust in recommender algorithms and give an overview of state-of-the-art approaches for trust-enhanced recommender
systems. Furthermore, we explain the details of three well-known trust-based systems and provide a comparative analysis of
their performance. We conclude with a discussion of some recent developments and open challenges, such as visualizing trust
relationships in a recommender system, alleviating the cold start problem in a trust network of a recommender system, studying
the effect of involving distrust in the recommendation process, and investigating the potential of other types of social relationships.
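A hedged sketch of the trust-weighted prediction idea is shown below; it is not any of the three systems analyzed in the chapter, and the one-step propagation rule, the decay factor, and the toy trust statements are assumptions made for illustration.

```python
# Hypothetical direct trust statements and ratings.
trust = {("alice", "bob"): 0.9, ("alice", "carol"): 0.4, ("bob", "dave"): 0.8}
ratings = {("bob", "item1"): 4, ("carol", "item1"): 2, ("dave", "item1"): 5}

def propagate(trust, source, target, decay=0.9):
    """One-step trust propagation: if no direct statement exists, chain
    trust through a common neighbour and damp it by a decay factor."""
    if (source, target) in trust:
        return trust[(source, target)]
    chained = [trust[(source, m)] * trust[(m, target)] * decay
               for (s, m) in trust if s == source and (m, target) in trust]
    return max(chained) if chained else 0.0

def trust_weighted_rating(user, item, trust, ratings):
    """Predict a rating as the trust-weighted mean of the raters' scores."""
    num = den = 0.0
    for (v, i), r in ratings.items():
        if i != item:
            continue
        w = propagate(trust, user, v)
        num, den = num + w * r, den + w
    return num / den if den else None

print(round(trust_weighted_rating("alice", "item1", trust, ratings), 2))
```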
This chapter shows how a system can recommend to a group of users by aggregating information from individual user models and
modelling the users’ affective state. It summarizes results from previous research in this area. It also shows how group recommendation
techniques can be applied when recommending to individuals, in particular for solving the cold-start problem and dealing with
multiple criteria.
This chapter gives an overview of aggregation functions toward their use in recommender systems. Simple aggregation functions
such as the arithmetic mean are often employed to aggregate user features, item ratings, measures of similarity, etc.; however,
many other aggregation functions exist which could deliver increased accuracy and flexibility to many systems. We provide
definitions of some important families and properties, sophisticated methods of construction, and various examples of aggregation
functions in the domain of recommender systems.
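For concreteness, the following Python snippet contrasts a few aggregation functions of the kind discussed above (arithmetic mean, weighted mean, OWA, and a power mean) on a toy score vector; the weights and scores are arbitrary assumptions.

```python
import numpy as np

scores = np.array([4.0, 2.0, 5.0])     # e.g. ratings to aggregate
weights = np.array([0.5, 0.3, 0.2])    # importance weights (sum to 1)

arithmetic_mean = scores.mean()
weighted_mean = scores @ weights

# Ordered weighted average (OWA): the weights apply to the *sorted* scores,
# so the same weight vector can express optimistic or pessimistic behaviour.
owa = np.sort(scores)[::-1] @ weights

def power_mean(x, p):
    """Power means interpolate between min-like and max-like behaviour via p."""
    return (np.mean(np.asarray(x, dtype=float) ** p)) ** (1.0 / p)

print(arithmetic_mean, weighted_mean, owa, power_mean(scores, p=-1))  # p=-1: harmonic mean
```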
Recommender Systems (RSs) are often assumed to present items to users for one reason – to recommend items a user will likely be interested in. Of course RSs do recommend, but this assumption, reinforced by the name itself, is biased towards the “recommending” the system will do. There is another reason for presenting an item to the user: to learn
more about his/her preferences, or his/her likes and dislikes. This is where Active Learning (AL) comes in. Augmenting RSs with AL helps the user become more self-aware of their own likes/dislikes
while at the same time providing new information to the system that it can analyze for subsequent recommendations. In essence,
applying AL to RSs allows for personalization of the recommending process, a concept that makes sense as recommending is inherently
geared towards personalization. This is accomplished by letting the system actively influence which items the user is exposed
to (e.g. the items displayed to the user during sign-up or during regular use), and letting the user explore his/her interests
freely.
This chapter aims to provide an overview of the class of multi-criteria recommender systems. First, it defines the recommendation
problem as a multi-criteria decision making (MCDM) problem, and reviews MCDM methods and techniques that can support the implementation
of multi-criteria recommenders. Then, it focuses on the category of multi-criteria rating recommenders – techniques that provide recommendations by modelling a user’s utility for an item as a vector of ratings along several
criteria. A review of current algorithms that use multi-criteria ratings for calculating predictions and generating recommendations
is provided. Finally, the chapter concludes with a discussion on open issues and future challenges for the class of multi-criteria
rating recommenders.
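A minimal illustration of the aggregation-function flavour of multi-criteria rating recommenders is sketched below; the criteria, weights, and ratings are invented for the example, and real systems would typically learn the aggregation rather than fix it by hand.

```python
# Hypothetical multi-criteria ratings: a user scores an item on several
# criteria (e.g. a hotel on rooms, service, location).
criteria_weights = {"rooms": 0.4, "service": 0.35, "location": 0.25}
ratings = {"rooms": 4, "service": 5, "location": 3}

# Simplest multi-criteria rating recommender: predict each criterion rating
# separately (here they are given) and aggregate them into a single utility
# that is then used to rank candidate items.
overall = sum(criteria_weights[c] * ratings[c] for c in criteria_weights)
print(round(overall, 2))
```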
Collaborative recommender systems are vulnerable to malicious users who seek to bias their output, causing them to recommend
(or not recommend) particular items. This problem has been an active research topic since 2002. Researchers have found that
the most widely-studied memory-based algorithms have significant vulnerabilities to attacks that can be fairly easily mounted.
This chapter discusses these findings and the responses that have been investigated, especially detection of attack profiles
and the implementation of robust recommendation algorithms.
http://josquin.cti.depaul.edu/~rburke/pubs/burke-etal-handbook10.pdf
... Notable examples encompass metrics like the Cosine and Pearson similarities. Further details on these similarity and correlation functions can be found in prior works, such as [36] and [37]. Additionally, for the determination of w(A, k), supplementary extensions, including inverse user frequency and case amplification, as elucidated in [36], can be considered. ...
... This process encompasses three distinct categories of social network trust (SNT) techniques. The initial category is characterized by the trust weighted prediction method [37], followed by the Bayesian inference-based prediction method as the second category [41], and the random walk-based method as the third category [42]. It is essential to note that the SNT technique fundamentally relies on trust-aware methodologies, which will be comprehensively expounded upon in the subsequent Sect. ...
... This category employs unsupervised machine learning techniques as a key component of data mining research. In the work by [37], these methodologies are extensively elucidated. A predominant focus within this category pertains to diverse clustering methodologies, which aim to group data entities based on the similarity of their features. ...
This paper provides a thorough review of recommendation methods from academic literature, offering a taxonomy that classifies recommender systems (RSs) into categories like collaborative filtering, content-based systems, and hybrid systems. It examines the effectiveness and challenges of these systems, such as filter bubbles, the "cold start" issue, and the reliance on collaborative filtering and content-based approaches. We trace the development of RSs, emphasizing the role of machine learning and deep learning models in overcoming these challenges and delivering more accurate, personalized, and context-aware recommendations. We also highlight the increasing significance of ethical considerations, including fairness, transparency, and trust, in the design of RSs. The paper presents a structured literature review, discussing various aspects of RSs, such as collaborative filtering, personalized recommender systems, and strategies to improve system robustness. It also points out the limitations of the existing approaches and suggests promising research directions for the future. In summary, this paper offers a comprehensive analysis of RSs, focusing on their evolution, challenges, and potential future improvements, particularly in enhancing accuracy, diversity, and ethical practices in recommendations.
... As a prominent class of recommendation techniques, collaborative filtering (CF) methods [1] recommend items based on the preferences of similar items or like-minded users, and include the neighborhood method and the matrix factorization model [2]. The former intends to predict a user's rating on an item from like-minded users or similar items. ...
... Firstly, the rating scales of different users towards various items may differ, so it is necessary to convert individual ratings to a universal scale. Z-score normalization [25] is the most popular rating normalization scheme and considers the spread in the individual rating scales [2]. In user-based methods, the z-score normalization of r_ui divides the user-mean-centered rating by the standard deviation σ_u of the ratings given by user u [2]: ...
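The formula truncated in the snippet above is, in the standard user-based formulation, the z-score normalization together with the corresponding prediction rule; it is reconstructed here for readability, with \bar{r}_u and \sigma_u denoting the mean and standard deviation of user u's ratings, N_i(u) the neighbours of u who rated item i, and w_{uv} the user-user similarity:

```latex
h(r_{ui}) = \frac{r_{ui} - \bar{r}_u}{\sigma_u},
\qquad
\hat{r}_{ui} = \bar{r}_u + \sigma_u \,
  \frac{\sum_{v \in N_i(u)} w_{uv}\, \dfrac{r_{vi} - \bar{r}_v}{\sigma_v}}
       {\sum_{v \in N_i(u)} \lvert w_{uv} \rvert}
```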
Recommender systems (RSs) have been a widely exploited approach to solving the information overload problem. However, the performance is still limited due to the extreme sparsity of the rating data. With the popularity of Web 2.0, the social tagging system provides more external information to improve recommendation accuracy. Although some existing approaches combine the matrix factorization models with co-occurrence properties and context of tags, they neglect the issue of tag sparsity without the commonly associated tags problem that would also result in inaccurate recommendations. Consequently, in this paper, we propose a novel hybrid collaborative filtering model named WUDiff_RMF, which improves Regularized Matrix Factorization (RMF) model by integrating Weighted User-Diffusion-based CF algorithm(WUDiff) that obtains the information of similar users from the weighted tripartite user-item-tag graph. This model aims to capture the degree correlation of the user-item-tag tripartite network to enhance the performance of recommendation. Experiments conducted on four real-world datasets demonstrate that our approach significantly performs better than already widely used methods in the accuracy of recommendation. Moreover, results show that WUDiff_RMF can alleviate the data sparsity, especially in the circumstance that users have made few ratings and few tags.
... Recommender systems, as an essential component of modern e-commerce websites, try to predict the most suitable products or services for users, based on the users' preferences [30]. With the development of e-commerce, a massive amount of customer interactions (e.g., browse, click, collect, cart, purchase) have been logged, which imply rich consumption patterns. ...
... Recommender systems play an essential role in many Internet-based services [30], such as e-commerce, and have aroused great attention from both industry and academia. The relevant works of this study can be grouped into two main paradigms: General Recommenders and Sequential Recommenders. ...
... Cold start is a common problem of recommender systems, in which new users or items have not yet gathered sufficient information to recommend or be recommended [30]. As we removed users from the test set that are not in the training set in the above experiments (shown in Figure 3), here we focus on these users and examine the performance of our model on the cold-start problem for new users. ...
In the modern e-commerce, the behaviors of customers contain rich information, e.g., consumption habits, the dynamics of preferences. Recently, session-based recommendations are becoming popular to explore the temporal characteristics of customers' interactive behaviors. However, existing works mainly exploit the short-term behaviors without fully taking the customers' long-term stable preferences and evolutions into account. In this paper, we propose a novel Behavior-Intensive Neural Network (BINN) for next-item recommendation by incorporating both users' historical stable preferences and present consumption motivations. Specifically, BINN contains two main components, i.e., Neural Item Embedding, and Discriminative Behaviors Learning. Firstly, a novel item embedding method based on user interactions is developed for obtaining a unified representation for each item. Then, with the embedded items and the interactive behaviors over item sequences, BINN discriminatively learns the historical preferences and present motivations of the target users. Thus, BINN could better perform recommendations of the next items for the target users. Finally, for evaluating the performance of BINN, we conduct extensive experiments on two real-world datasets, i.e., Tianchi and JD. The experimental results clearly demonstrate the effectiveness of BINN compared with several state-of-the-art methods.
... Due to the variety of artificial intelligence systems used in educational systems in the past years [30,31], a definition of requirements for our use case is needed. First, due to the lack of available datasets about German-speaking cochlear implant recipients, a system design was needed to overcome a so-called "cold start problem" [31]. Second, due to ethical concerns, we were looking for an algorithm in the context of explainable artificial intelligence [25,30]. ...
... We assumed that a well-selected preconfigured competence profile would reduce the number of tasks a cochlear implant recipient has to solve until the system reaches a point where it properly approximates the recipient's real competence level. As such, this addresses the so-called cold start problem [31]. This procedure also mimics an existing routine in face-to-face settings, where the therapist gets a first impression of a new patient by asking multiple basic questions. ...
Background
Cochlear implants are implanted hearing devices; instead of amplifying sounds like common hearing aids, this technology delivers preprocessed sound information directly to the hearing (ie, auditory) nerves. After surgery and the first cochlear implant activation, patients must practice interpreting the new auditory sensations, especially for language comprehension. This rehabilitation process is accompanied by hearing therapy through face-to-face training with a therapist, self-directed training, and computer-based auditory training.
Objective
In general, self-directed, computer-based auditory training tasks have already shown advantages. However, compliance of cochlear implant recipients is still a major factor, especially for self-directed training at home. Hence, we aimed to explore the combination of 2 techniques to enhance learner motivation in this context: adaptive learning (in the form of an intelligent tutoring system) and game-based learning (in the form of a serious game).
Methods
Following the suggestions of the evidence-centered design framework, a domain analysis of hearing therapy was conducted, allowing us to partially describe human hearing skill as a probabilistic competence model (Bayesian network). We developed an algorithm that uses such a model to estimate the current competence level of a patient and create training recommendations. For training, our developed task system was based on 7 language comprehension task types that act as a blueprint for generating tasks of diverse difficulty automatically. To achieve this, 1053 audio assets with meta-information labels were created. We embedded the adaptive task system into a graphic novel–like mobile serious game. German-speaking cochlear implant recipients used the system during a feasibility study for 4 weeks.
Results
The 23 adult participants (20 women; 3 men) fulfilled 2259 tasks. In total, 2004 (90.5%) tasks were solved correctly, and 255 (9.5%) tasks were solved incorrectly. A generalized additive model analysis of these tasks indicated that the system adapted to the estimated competency levels of the cochlear implant recipients more quickly in the beginning than at the end. Compared with a uniform distribution of all task types, the recommended task types differed (χ²(6)=86.713; P<.001), indicating that the system selected specific task types for each patient. This is underlined by the identified categories for the error proportions of the task types.
Conclusions
This contribution demonstrates the feasibility of combining an intelligent tutoring system with a serious game in cochlear implant rehabilitation therapies. The findings presented here could lead to further advances in cochlear implant care and aural rehabilitation in general.
Trial Registration
German Clinical Trials Register (DRKS) DRKS00022860; https://drks.de/search/en/trial/DRKS00022860
... A RS suggests items that users are likely to find relevant, addressing information overload when users lack the knowledge or time to explore alternatives (Xv et al., 2022). It predicts user preferences by analyzing data about items, users, and their interactions (Shapira et al., 2022;Xv et al., 2022). Unlike search engines that use text strings, RSs rely on behavior data to provide personalized recommendations (Burke, 2007). ...
... Examples of items include songs (X. Wang et al., 2023), political parties (Nagarjan & Mohamed, 2019), news (Shapira et al., 2022), products (Grbovic et al., 2015;Ma et al., 2020), and job applicants (Shapira et al., 2022). Users of a RS may be individuals, businesses, institutions or even other software (Pawlicka et al., 2021). ...
This study examines the relationship between intrinsic biases in Large Language Models (LLMs) and extrinsic biases in recommendations generated by Recommender Systems (RSs) using these LLMs. The research focuses on gender and race-related biases in research grant recommendations, utilizing data from the National Institutes of Health (NIH). The methodology involves 19 BERT-family base models, each fine-tuned on three datasets with varying gender and race balance, and trained with and without Principal Investigator (PI) names. This process results in 114 fine-tuned models. The study employs name-based classification models for gender and race labeling due to the absence of explicit labels in the NIH data. Research questions address the presence and extent of extrinsic bias in recommendations, the relationship between intrinsic and extrinsic biases, the effects of including PI names in fine-tuning data, and the impact of gender and race balance in fine-tuning datasets. Findings reveal significant variations in miscalibration and recommended grant values across models and configurations. White PIs generally received lower-valued recommendations compared to their actual grants, while Female Black PIs received the most overvalued recommendations. Asian and Hispanic PIs consistently received lower-valued recommendations than White PIs when not compared to actual grant values. Fine-tuning with balanced datasets reduced both extrinsic bias and its correlation with intrinsic bias metrics. The inclusion of PI names in fine-tuning data significantly impacted extrinsic bias, particularly disadvantaging Black PIs in unbalanced datasets. The research faced limitations including the use of name-based models for gender and race labeling, potential errors in grant value extraction, and challenges related to dataset size and balance. The study also highlighted the complexity of defining and measuring bias in AI systems, particularly the distinction between descriptive and normative aspects of extrinsic bias. This work contributes to efforts to create more equitable recommendation algorithms and addresses the need for guidelines in the use of AI in high-stakes decision-making processes. It suggests that fine-tuning with balanced datasets may lead to more equitable grant value recommendations across gender and race, though correlations with intrinsic bias metrics often remain ambiguous.
... Recommender systems (RS) have traditionally been a promising research area, not only because of their ability to handle the problem of information overload but also because of the huge number of applications that use them, particularly in marketing and ecommerce [1,2]. Initially, RS were used to make recommendations of potentially interesting items to individuals. ...
... However, satisfying all group members uniformly remains a challenge. Most traditional approaches make use of aggregation techniques [2,36]. Although they are widely used in many domains, the limitations of aggregation techniques mean that, in many cases, the generated recommendations fail at satisfying all group members uniformly. ...
Recommender systems aim to predict the preferences of users and suggest items of interest to them in various domains. While traditional recommendation techniques consider users as individuals, some approaches aim to satisfy the needs of a group of people. Multi-agent systems can be used to develop such recommendations, where multiple intelligent agents interact with each other to achieve a common goal, i.e., deciding which item to recommend. Particularly, negotiation techniques can be used to find a decision that aims at maximizing the satisfaction of all group members. The proposed approach introduces a multi-agent recommender system for a group of users by considering their personality traits, relationships and social interactions during the negotiation process that leads to the generation of recommendations. While traditional recommendation techniques do not take into account the effects of personality traits and relationships between individuals, our approach demonstrates that personality traits, especially personality types in the context of conflict management, and social relationships can significantly impact group recommendation. The results indicate that the opinion of an individual can be influenced when she is part of a group that cooperates towards a shared goal. Overall, the proposed approach shows that recommender systems can benefit from considering these factors. This work contributes to understanding the impact of personality traits and social relationships on group recommendations and suggests potential directions for future research.
... The rapid growth of online education platforms has led to an unprecedented abundance of learning resources available to students worldwide (Ricci et al., 2015). However, this wealth of options often presents a challenge: how can learners efficiently navigate through thousands of courses to find those that best match their interests, goals, and learning styles? ...
... Online course recommendation systems aim to analyze user behavior, preferences, and course characteristics to suggest personalized learning paths for individual students (Bobadilla et al., 2013). These systems not only enhance the learning experience by providing tailored suggestions but also contribute to increased engagement and completion rates on educational platforms (Ricci et al., 2015). ...
The proliferation of e-learning platforms has created a need for sophisticated course recommendation systems. This paper presents an innovative online course recommendation system using Neural Collaborative Filtering (NCF), a deep learning technique designed to surpass traditional methods in accuracy and personalization. Our system employs a hybrid NCF architecture, integrating matrix factorization with multi-layer perceptron to capture complex user-course interactions. The proposed NCF-based recommendation system aims to address key challenges in the e-learning domain, such as diverse user preferences, varying course content, and evolving learning patterns. By leveraging the power of neural networks, our approach seeks to provide more relevant and personalized course suggestions to learners. Our research contributes to the intersection of deep learning and educational technology, offering new insights into how advanced machine learning techniques can be applied to improve online learning experiences. The proposed system has the potential to enhance the quality of course recommendations, leading to more effective learning pathways for users. This work has important implications for e-learning platforms, educational institutions, and lifelong learners navigating the vast landscape of online courses. By improving the match between learners and courses, we aim to increase engagement, completion rates, and overall satisfaction in online education. Future work will explore the long-term impact of such personalized recommendations on learning outcomes and skill development.
... They help users make their decisions by providing lists of items that should be most likely of interest to them. The design of recommendation engines has thus been the subject of numerous scientific studies, compiled in several books [2,3]. ...
... Another difficulty inherent in recommender systems based on implicit ratings is the evaluation of these systems. The estimation of explicit ratings is a clear goal and is discussed in the literature [3]. It can be evaluated directly. ...
The consumption of music has its specificities in comparison with other media, especially in relation to listening durations and replays. Music recommendation can take these properties into account in order to predict the behaviours of the users. Their impact is investigated in this paper. A large database was thus created using logs collected on a streaming platform, notably collecting the listening times. The proposed study shows that a high proportion of the listening events implies a skip action, which may indicate that the user did not appreciate the track listened. Implicit like and dislike can be deduced from this information of durations and replays and can be taken into account for music recommendation and for the evaluation of music recommendation engines. A quantitative study as usually found in the literature confirms that neighborhood-based systems considering binary data give the best results in terms of MAP@k. However, a more qualitative evaluation of the recommended tracks shows that many tracks recommended, usually evaluated in a positive way, lead to skips or thus are actually not appreciated. We propose the consideration of implicit like/dislike as recommendation engine inputs. Evaluations show that neighbourhood-based engines remain the most precise, but filtering inputs according to durations and/or replays have a significant positive impact on the objective of the recommendation engine. The recommendation process can thus be improved by taking account of listening durations and replays. We also study the possibility of post-filtering a list of recommended tracks so as to limit the number of tracks that will be unpleasantly listened (skip and implicit dislike) and to increase the proportion of tracks appreciated (implicit like). Several simple algorithms show that this post-filtering operation leads to an improvement of the quality of the music recommendations.
... One of the types of IR is recommender systems. Hence, we can define a recommender system (RS) as a subcategory of an information filtering system that calculates the most accurate rating a user would provide for an item [2]. RS as a software solution has its roots in the most basic human tendency of asking for suggestions or recommendations before trying out any new experience or object, or even before making friends. ...
... The recommendations presented are designed to assist individuals in making informed decisions across a range of contexts. This means that the primary objective of these systems is to provide personalized recommendations, which is the major difference between recommender systems and information retrieval search engines [2]. Recommender systems have emerged as essential tools in electronic commerce, providing effective solutions for online users grappling with information overload. ...
... In the practical setting of offline MBO, the standard procedure is to select the top-k designs (e.g., k = 128), which maximize the surrogate model's predictions, for evaluation (Trabucco et al., 2022). Thus, we introduce a novel metric, Area Under the Precision-Recall Curve (AUPRC) in Definition 1 (Raghavan et al., 1989;Davis & Goadrich, 2006;Ricci et al., 2010), for offline MBO to assess the model's capability in identifying the top-k ones from a set of candidate designs. Although AUPRC is traditionally associated with imbalanced classification tasks, it can be interpreted as a listwise ranking metric (Wen et al., 2024), which enables us to employ AUPRC for evaluating a model's ability to select the top-k designs within the context of offline MBO. ...
... As noted in prior works and this link, this is a bug for the implementation of Hopper in Design-Bench. For the ChEMBL task, we exclude it because almost all methods produce the same results, as shown in (Krishnamoorthy et al., 2023a;b), which is not suitable for comparison. We also exclude NAS due to its high computation cost for exact evaluation over multiple seeds, which is beyond our budget. ...
Offline model-based optimization (MBO) aims to identify a design that maximizes a black-box function using only a fixed, pre-collected dataset of designs and their corresponding scores. A common approach in offline MBO is to train a regression-based surrogate model by minimizing mean squared error (MSE) and then find the best design within this surrogate model by different optimizers (e.g., gradient ascent). However, a critical challenge is the risk of out-of-distribution errors, i.e., the surrogate model may typically overestimate the scores and mislead the optimizers into suboptimal regions. Prior works have attempted to address this issue in various ways, such as using regularization techniques and ensemble learning to enhance the robustness of the model, but it still remains. In this paper, we argue that regression models trained with MSE are not well-aligned with the primary goal of offline MBO, which is to select promising designs rather than to predict their scores precisely. Notably, if a surrogate model can maintain the order of candidate designs based on their relative score relationships, it can produce the best designs even without precise predictions. To validate it, we conduct experiments to compare the relationship between the quality of the final designs and MSE, finding that the correlation is really very weak. In contrast, a metric that measures order-maintaining quality shows a significantly stronger correlation. Based on this observation, we propose learning a ranking-based model that leverages learning to rank techniques to prioritize promising designs based on their relative scores. We show that the generalization error on ranking loss can be well bounded. Empirical results across diverse tasks demonstrate the superior performance of our proposed ranking-based models than twenty existing methods.
... This kind of profile not only helps the system understand the user's interests and requirements, but also supports tailored content and services during interactions. Ricci and Rokach [16] provided an introduction to the basic principles and developments of recommender systems and discussed how to build user portraits through user behavior analysis to improve the accuracy of personalized recommendations. This approach significantly improves the user experience on e-commerce, social networks and other platforms, allowing users to receive recommendations and services that better match their interests. ...
As artificial intelligence technology continues to progress, human-computer interaction has emerged as a pivotal method for humans to engage with intelligent systems. This article delves into the evolutionary journey of human-computer interaction, highlighting cutting-edge technologies that merge human-computer interaction with artificial intelligence. This paper also addresses the current challenges that demand immediate resolution and potential issues that may arise in the future. The path to seamlessly integrating human-computer interaction with artificial intelligence is fraught with complexities, indicating that there is a substantial journey ahead.
... interests, knowledge levels, learning styles and feedback) [16]. Besides course recommendation, there is extensive work on recommender systems for assisting users with finding desirable products or services [17][18][19]. However, several unique features of course sequence recommendation make these approaches unsuitable for the considered problem. ...
Given the variability in student learning it is becoming increasingly important to tailor courses as well as course sequences to student needs. This paper presents a systematic methodology for offering personalized course sequence recommendations to students. First, a forward-search backward-induction algorithm is developed that can optimally select course sequences to decrease the time required for a student to graduate. The algorithm accounts for prerequisite requirements (typically present in higher level education) and course availability. Second, using the tools of multi-armed bandits, an algorithm is developed that can optimally recommend a course sequence that both reduces the time to graduate while also increasing the overall GPA of the student. The algorithm dynamically learns how students with different contextual backgrounds perform for given course sequences and then recommends an optimal course sequence for new students. Using real-world student data from the UCLA Mechanical and Aerospace Engineering department, we illustrate how the proposed algorithms outperform other methods that do not include student contextual information when making course sequence recommendations.
... In practical applications, users in general only rate a small portion of items, but accurate recommendations are expected for the cold users who rate only a few items. This raises two inherent obstacles to obtain satisfactory recommending quality, namely data sparsity and cold start [8][9][10][11]. In principle, this is caused by the lack of sufficient and reliable elements in U and/or I to calculate Eqns.(1) and (2). A possible solution to get around this is to incorporate trust relationships into the CF framework, resulting in the trust based or trust-aware CF (TaCF) [8][9][10][12][13][14]. ...
The growth of Internet commerce has stimulated the use of collaborative filtering (CF) algorithms as recommender systems. A collaborative filtering (CF) algorithm recommends items of interest to the target user by leveraging the votes given by other similar users. In a standard CF framework, it is assumed that the credibility of every voting user is exactly the same with respect to the target user. This assumption is not satisfied and thus may lead to misleading recommendations in many practical applications. A natural countermeasure is to design a trust-aware CF (TaCF) algorithm, which can take account of the difference in the credibilities of the voting users when performing CF. To this end, this paper presents a trust inference approach, which can predict the implicit trust of the target user on every voting user from a sparse explicit trust matrix. Then an improved CF algorithm termed iTrace is proposed, which takes advantage of both the explicit and the predicted implicit trust to provide recommendations with the CF framework. An empirical evaluation on a public dataset demonstrates that the proposed algorithm provides a significant improvement in recommendation quality in terms of mean absolute error (MAE).
... Conditional preference networks (CP-nets) as described by Boutilier et al. (2004a) are structures for modelling a person's conditional preferences over a set of discrete variables. Representing and reasoning with a person's preferences is an area of interest in AI with applications in automated decision making (Nunes et al., 2015), recommender systems (Ricci et al., 2011), and product configuration (Alanazi and Mouhoub, 2014). CP-nets represent preferences in a compact manner that is easily interpreted. ...
Conditional preference networks (CP-nets) are a graphical representation of a person's (conditional) preferences over a set of discrete variables. In this paper, we introduce a novel method of quantifying preference for any given outcome based on a CP-net representation of a user's preferences. We demonstrate that these values are useful for reasoning about user preferences. In particular, they allow us to order (any subset of) the possible outcomes in accordance with the user's preferences. Further, these values can be used to improve the efficiency of outcome dominance testing. That is, given a pair of outcomes, we can determine which the user prefers more efficiently. Through experimental results, we show that this method is more effective than existing techniques for improving dominance testing efficiency. We show that the above results also hold for CP-nets that express indifference between variable values.
... Collaborative filtering has been successfully used for recommendation systems (see, e.g., [17]). A typical approach to using collaborative filtering for recommendation systems is to consider all the observed ratings given by a set of users to a set of products as elements in a matrix, where the rows and columns of this matrix correspond to users and products, respectively. ...
Recent work has shown that collaborative filter-based recommender systems can be improved by incorporating side information, such as natural language reviews, as a way of regularizing the derived product representations. Motivated by the success of this approach, we introduce two different models of reviews and study their effect on collaborative filtering performance. While the previous state-of-the-art approach is based on a latent Dirichlet allocation (LDA) model of reviews, the models we explore are neural network based: a bag-of-words product-of-experts model and a recurrent neural network. We demonstrate that the increased flexibility offered by the product-of-experts model allowed it to achieve state-of-the-art performance on the Amazon review dataset, outperforming the LDA-based approach. However, interestingly, the greater modeling power offered by the recurrent neural network appears to undermine the model's ability to act as a regularizer of the product representations.
... The objective of Recommender Systems is to recommend new products or items to users based on their history [1]. There are two major approaches to creating a Recommender System. ...
Matrix factorization is one of the best approaches for collaborative filtering, because of its high accuracy in presenting users and items latent factors. The main disadvantages of matrix factorization are its complexity, and being very hard to be parallelized, specially with very large matrices. In this paper, we introduce a new method for collaborative filtering based on Matrix Factorization by combining simulated annealing with levy distribution. By using this method, good solutions are achieved in acceptable time with low computations, compared to other methods like stochastic gradient descent, alternating least squares, and weighted non-negative matrix factorization.
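For reference, the stochastic gradient descent baseline mentioned in the abstract can be sketched as follows; this is a minimal, generic matrix factorization, not the simulated-annealing/Levy-distribution method proposed by the authors:

```python
import numpy as np

def mf_sgd(R, k=10, lr=0.01, reg=0.05, epochs=20, seed=0):
    """Minimal SGD matrix factorization over the observed entries of R (0 = unrated)."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = 0.1 * rng.standard_normal((n_users, k))    # user latent factors
    Q = 0.1 * rng.standard_normal((n_items, k))    # item latent factors
    users, items = np.nonzero(R)
    for _ in range(epochs):
        for u, i in zip(users, items):
            pu, qi = P[u].copy(), Q[i].copy()
            err = R[u, i] - pu @ qi                # prediction error on one rating
            P[u] += lr * (err * qi - reg * pu)     # gradient step for the user factors
            Q[i] += lr * (err * pu - reg * qi)     # gradient step for the item factors
    return P, Q                                    # predicted matrix is P @ Q.T
```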
... The tremendous rise in information on the World Wide Web and the rapid development of e-services mean that users now have many options, which often makes decision-making more challenging. The main purpose of recommender systems is to help those who lack expertise or understanding navigate the wide range of options available to them [1]. In order to forecast user preferences for items of interest, recommender systems leverage multiple information sources [2]. ...
Utilizing past behaviour data and forecasting future actions, recommender systems offer users individualized service support. Recommender systems have naturally incorporated artificial intelligence (AI), particularly computational intelligence and machine learning techniques and algorithms, to improve prediction accuracy and tackle problems with data sparsity and cold start. This position paper provides in-depth discussions of recommender system fundamentals and current practices, along with an analysis of how AI might advance the field's techniques and applications. The study highlights fresh research paths and addresses current research difficulties, in addition to reviewing recent theoretical and practical achievements. The work thoroughly examines a range of concerns pertaining to AI-based recommender systems. Additionally, it assesses the progress made in these systems through the use of AI methods such as neural networks and deep learning, fuzzy approaches, transfer learning, genetic algorithms, evolutionary algorithms, and active learning. Keywords: Generative AI, Recommendation Systems, Content-based recommendation systems, Reinforcement Learning, Deep Neural Network.
... While pairwise representations, which incorporate knowledge about the pair they are making a prediction for, are able to contextualize the prediction, one would need to generate pair-wise representations for all possible user-item pairs, which is infeasible due to its quadratic complexity. Alternatively, one can pre-filter the set of candidate pairs, e.g., via content-based filtering or collaborative filtering Ricci et al. (2010); Campana & Delmastro (2017), but the model's capabilities are then limited by the recall of the candidate generation procedure. ...
Recommendation systems predominantly utilize two-tower architectures, which evaluate user-item rankings through the inner product of their respective embeddings. However, one key limitation of two-tower models is that they learn a pair-agnostic representation of users and items. In contrast, pair-wise representations either scale poorly due to their quadratic complexity or are too restrictive on the candidate pairs to rank. To address these issues, we introduce Context-based Graph Neural Networks (ContextGNNs), a novel deep learning architecture for link prediction in recommendation systems. The method employs a pair-wise representation technique for familiar items situated within a user's local subgraph, while leveraging two-tower representations to facilitate the recommendation of exploratory items. A final network then predicts how to fuse both pair-wise and two-tower recommendations into a single ranking of items. We demonstrate that ContextGNN is able to adapt to different data characteristics and outperforms existing methods, both traditional and GNN-based, on a diverse set of practical recommendation tasks, improving performance by 20% on average.
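To make the contrast drawn in the abstract concrete, the sketch below shows only the plain two-tower scoring step (inner products of pair-agnostic embeddings); the pair-wise and fusion components of ContextGNN are described in the paper and not reproduced here.

```python
import numpy as np

def two_tower_scores(user_emb, item_emb, k=10):
    """Score all user-item pairs as inner products of independently learned embeddings
    and return the top-k item indices per user. A pair-wise model would instead need a
    forward pass per (user, item) pair, which is the quadratic cost noted in the abstract.

    user_emb : (n_users, d), item_emb : (n_items, d)
    """
    scores = user_emb @ item_emb.T                 # (n_users, n_items) ranking scores
    return np.argsort(-scores, axis=1)[:, :k]      # top-k recommendations per user
```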
... However, achieving personalization at scale presents a significant challenge, as it requires balancing the customization of AI experiences with the scalability demands of large-scale systems. Techniques such as collaborative filtering, content-based filtering, and reinforcement learning can be used to personalize AI systems, but these approaches must be carefully managed to avoid issues such as filter bubbles, overfitting, and unintended biases [96,113]. Moreover, personalization efforts must be transparent and give users control over how their data is used and how their AI experiences are shaped. ...
The rapid growth of interest in Artificial Intelligence (AI) has been a significant driver of research and business activities in recent years. This raises new critical issues, particularly concerning interaction with AI systems. This article first presents a survey that identifies the primary issues addressed in Human-Centered AI (HCAI), focusing on the interaction with AI systems. The survey outcomes make it possible to clarify the disciplines, concepts, and terms around HCAI, solutions for designing and evaluating HCAI systems, and the emerging challenges; these are all discussed with the aim of supporting researchers in identifying more pertinent approaches to creating HCAI systems. Another main finding emerging from the survey is the need to create Symbiotic AI (SAI) systems. Definitions of both HCAI systems and SAI systems are provided. To illustrate and frame SAI more clearly, we focus on medical applications, discussing two case studies of SAI systems.
... For a broad discussion of modern recommendation systems, see the handbook edited by Ricci et al. (2015), and, for an early account of collaborative filtering methods, see the work of Sarwar et al. (2001). As examples of works on movie recommendations, we mention (a) the paper of Ghosh et al. (1999), who described a movie recommendation system using Black's voting rule with weighted user preferences, (b) the paper of Azaria et al. (2013), who focused on maximizing the revenue of the recommender, (c) the paper of Choi et al. (2012), who discussed recommendations based on movie genres, and (d) the paper of Phonexay et al. (2018), who adapted some techniques from social networks to recommendation systems. ...
We show a prototype of a system that uses multiwinner voting to suggest resources (such as movies) related to a given query set (such as a movie that one enjoys). Depending on the voting rule used, the system can either provide resources very closely related to the query set or a broader spectrum of options. We show how this ability can be interpreted as a way of controlling the diversity of the results. We test our system both on synthetic data and on the real-life collection of movie ratings from the MovieLens dataset. We also present a visual comparison of the search results corresponding to selected diversity levels.
... They expressed their opinions about the features included, the features excluded, and their importance in the evaluation procedure. Our framework testing method is inspired by the evaluation procedure used to evaluate context-based recommendation systems [76]. The proposed model is evaluated using three important metrics: accuracy, precision, and recall. ...
Internet of Things (IoT) platforms have become the building blocks of any automated system, but they are even more important in industrial systems, where sensitive data are captured and handled by the information system. Therefore, it is imperative to deploy the right IoT platform to perform computational and operational tasks effectively. During the last few years, an array of IoT technologies and platforms with different capabilities and features was introduced to the market. This abrupt rise created selection and decision-making issues, particularly for network engineers, designers, and industrial managers, due to a lack of technical understanding and skill in this area. Therefore, we present an integrated assessment model focused on evaluating and ranking IoT platforms in the industrial environment. The proposed model encompasses multiple methods: it leverages the well-known Delphi technique for collecting data related to the criteria features, adopts the Analytic Hierarchy Process (AHP) for assigning weights to the criteria features, and applies the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) for the evaluation of the top twenty (20) Industrial IoT (IIoT) platform alternatives according to the proposed criteria. It selects the most rational choice of IoT platform for deployment in an Industry 4.0 setting. The proposed integrated assessment model produces accurate and consistent outcomes. Hence, it is believed that it can be used as a guideline by different stakeholders such as researchers, developers, network engineers, and policymakers for the assessment and deployment of IoT platforms in the industrial environment. To our knowledge, it is the first multi-method integrated assessment model for the assessment, decision-making, and prioritization of IoT technologies in the Industry 4.0 domain.
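As an illustration of the ranking step, the standard TOPSIS procedure the authors apply can be sketched as follows; the specific criteria, AHP-derived weights, and platform data used in the paper are not reproduced here.

```python
import numpy as np

def topsis(X, weights, benefit):
    """Standard TOPSIS ranking.

    X       : (n_alternatives, n_criteria) decision matrix
    weights : criteria weights summing to 1 (e.g. obtained via AHP)
    benefit : boolean mask, True for benefit criteria, False for cost criteria
    """
    V = (X / np.linalg.norm(X, axis=0)) * weights              # normalize, then weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))    # positive ideal solution
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))     # negative ideal solution
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)                        # higher is better
    return np.argsort(-closeness), closeness                   # ranking and scores
```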
... Model performance evaluation is conducted using Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). RMSE is calculated to measure the deviation between predicted and actual ratings, imposing a greater penalty on larger errors [44], while MAE is employed as a complementary metric to provide a more intuitive perspective on the average absolute error in predictions [45]. This comprehensive methodological process is designed to ensure scientific rigor and reproducibility while leveraging state-of-the-art techniques in the field of collaborative filtering-based recommendation systems. ...
This research presents a new Demographic-Enhanced Cosine-KNN method for collaborative filtering in recommender systems. Our method demonstrates superior performance compared to state-of-the-art techniques across various datasets, indicating substantial enhancements in recommendation accuracy. Assessments of the MovieLens 100K and 1M datasets demonstrate significant improvements in RMSE and MAE metrics relative to traditional KNN-Basic and advanced ExtKNNCF algorithms. The proposed method demonstrates improvements of up to 17.1% in RMSE and 14.4% in MAE compared to KNN-Basic, while consistently exceeding ExtKNNCF by margins ranging from 2.0% to 10.1%. Our method demonstrates significant improvement compared to the standard Cosine-KNN approach, achieving enhancements of 1.9% in RMSE and 2.4% in MAE for the 100K dataset, and 0.7% in RMSE and 1.9% in MAE for the 1M dataset. The consistent gains observed across various sample sizes indicate the stability and scalability of the strategy employed. The results highlight the efficacy of our demographic-enhanced strategy in overcoming the limitations of current collaborative filtering methods, providing a scalable and robust solution for enhancing recommendation accuracy across various application contexts.
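For completeness, the RMSE and MAE metrics used throughout these evaluations are simple to compute:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Squared Error: penalizes large rating errors more heavily."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean Absolute Error: average absolute deviation of predictions from ratings."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))
```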
... Adaptive technologies have long been established in fields such as e-learning and adaptive textbooks [40,41], recommender systems [42,43], personalized information retrieval [44,45], and the adaptive Web [46], to name just a few. Most relevant to ontology visualization, user adaptive systems in the field of InfoViz date back several decades. ...
The current research landscape in ontology visualization has largely focused on tool development, yielding an extensive array of visualization tools. Although many existing solutions provide multiple ontology visualization layouts, there is limited research in adapting to an individual user’s performance, despite successful applications of adaptive technologies in related fields, including information visualization. In an effort to innovate beyond traditional one-size-fits-all visualizations, this paper contributes one step towards realizing user adaptive visualization by recognizing timely moments when users may potentially need intervention, as real-time adaptation can only occur if it is possible to correctly predict user success and failure during an interaction in the first place. In addition, an open-source, reusable, and extensible software package, Beach Environment for the Analytics of Human Gaze (BEACH-Gaze), is made available to the broader scientific community interested in descriptive and predictive gaze analytics. Building on a wealth of research in eye tracking, this paper compares four approaches to predictive gaze analytics through a series of experiments that utilize scheduled gaze digests, irregular gaze events, the last known gaze status, as well as all gaze captured for a user at a given moment in time. The results from a set of experimental trials suggest that irregular gaze events are most informative for early predictions of user performance, whereas cognitive workload appears to be most indicative of overall user performance in the task scenario presented in this paper. These empirical findings highlight the importance of the analytical approach taken to gaze for user predictions and indicate that careful consideration is needed when applying gaze-based prediction.
This research aims to explore the impact of machine learning (ML) on the evolution and efficacy of recommendation systems (RS), particularly in the context of their growing significance in commercial business environments. Methodologically, the study delves into the role of ML in crafting and refining these systems, focusing on aspects such as data sourcing, feature engineering, and the importance of evaluation metrics, thereby highlighting the iterative nature of enhancing recommendation algorithms. The deployment of recommendation engines (RE), driven by advanced algorithms and data analytics, is explored across various domains, showcasing their significant impact on user experience and decision-making processes. These engines not only streamline information discovery and enhance collaboration but also accelerate knowledge acquisition, proving vital in navigating the digital landscape for businesses. They contribute significantly to sales, revenue, and the competitive edge of enterprises by offering improved recommendations that align with individual customer needs. The research identifies the increasing expectation of users for a seamless, intuitive online experience, where content is personalized and dynamically adapted to changing preferences. Future research directions include exploring advancements in deep learning models, ethical considerations in the deployment of RS, and addressing scalability challenges. This study emphasizes the indispensability of comprehending and leveraging ML in RS for researchers and practitioners, to tap into the full potential of personalized recommendation in commercial business prospects.
This survey is intended to inform non-expert readers about the field of recommender systems, particularly collaborative filtering, through the lens of the impactful Netflix Prize competition. Readers will quickly be brought up to speed on pivotal recommender systems advances through the Netflix Prize, informing their prospective state-of-the-art research with meaningful historic artifacts. We begin with the pivotal FunkSVD approach early in the competition. We then discuss Probabilistic Matrix Factorization and the importance and extensibility of the model. We examine the strategies of the Netflix Prize winner, providing comparisons to the Probabilistic Matrix Factorization framework as well as commentary as to why one approach became extensively used in research while another did not. Collectively, these models help to understand the progression of collaborative filtering through the Netflix Prize era. In each topic, we include ample discussion of results and background information. Finally, we highlight major veins of research following the competition.
Accommodating the stakeholders’ preferences and mitigating their conflicts is a critical aspect of the Multi-Stakeholder Recommendation Systems (MSRS). Existing MSRS methods have addressed this by evaluating stakeholders’ utility gains. However, these methods fail to personalize the stakeholders’ goals, including providers’ aim to recommend their novel items and systems’ preference for fair recommendations, resulting in inequitable and unfair recommendations. Also, the optimization techniques of these methods neglect the stakeholders’ and item features while generating new recommendation solutions. Resolving these issues and optimizing the integration of conflicting multi-stakeholder utilities motivates this study to develop personalized objective functions focusing on relevance-visibility-based consumer preferences and novelty-fairness-based provider and system preferences. Next, we propose the Multi-Stakeholder Features-based Crossover (MSFCross) method to conserve the stakeholders’ preferences while generating new offspring by evaluating the cumulative probability of the gene and parent chromosome population. Finally, this study develops the MSFCross-based Preference Optimization (MSFCrossPO) framework to establish a balanced trade-off recommendation environment by optimizing the conflicting multi-stakeholder objectives. Comparative evaluation across five benchmark datasets validates the performance of MSFCrossPO employing MSFCross crossover scheme over baseline methods with the minimum improvement of 8.92 % in precision, 20.17 % in exposure, and 4.32 % and 10.94 % in consumer and provider satisfaction scores.
Travel Recommender Systems (TRSs) have been proposed to ease the burden of choice in the travel domain, by providing valuable suggestions based on user preferences. Despite the broad similarities in functionalities and data provided by TRSs, these systems are significantly influenced by the diverse and heterogeneous contexts in which they operate. This plays a crucial role in determining the accuracy and appropriateness of the travel recommendations they deliver. For instance, in contexts like smart cities and natural parks, diverse runtime information—such as traffic conditions and trail status respectively—should be utilized to ensure the delivery of pertinent recommendations, aligned with user preferences within the specific context. However, there is a trend to build TRSs from scratch for different contexts, rather than supporting developers with configuration approaches that promote reuse, minimize errors, and accelerate time-to-market. To illustrate this gap, in this paper, we conduct a systematic mapping study to examine the extent to which existing TRSs are configurable for different contexts. The conducted analysis reveals the lack of configuration support assisting TRSs providers in developing TRSs closely tied to their operational context. Our findings shed light on uncovered challenges in the domain, thus fostering future research focused on providing new methodologies enabling providers to handle TRSs configurations.
Board game players often face difficulties in finding suitable games, finding people to play board games with, and finding places to play. This research presents a new website that addresses these problems by combining a board game recommendation system with a community platform for board game players. The researchers developed and evaluated several recommendation models, namely content-based filtering, user-based collaborative filtering, item-based collaborative filtering, and deep learning. While the deep learning model showed the highest performance, item-based collaborative filtering proved to be the most practical model due to its effectiveness and feasibility of implementation. This research demonstrates the effectiveness of item-based collaborative filtering models in recommending suitable board games to users. The website provides a board game recommendation system and a community space for board game players, enhancing the board game experience for players.
Collaborative filtering recommender systems aid users in finding relevant items. Collaborative filtering is the most popular approach for building recommender systems due to its superior performance; however, the approach has several inherent problems, including data sparsity. This research work presents an enhanced approach to creating a personalized movie recommendation system that leverages the power of collaborative filtering. The work employs a Singular Value Decomposition (SVD) algorithm to capture user preferences and item characteristics from a large movie rating dataset, ensuring the accuracy of movie recommendations. It incorporates user profiling to describe user preferences in a more interpretable fashion and Principal Component Analysis (PCA) to visualize users in a reduced 2D space. It further applies K-Means clustering to categorize users into distinct segments based on their movie preferences, facilitating targeted recommendations and analysis. The fusion of these techniques results in a sophisticated recommendation system, demonstrated through a practical implementation, offering both accurate movie suggestions and insights into user clusters. The experimental evaluation revealed that the proposed system outperformed the existing system in terms of both Mean Absolute Error (MAE) and Mean Squared Error (MSE).
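A minimal sketch of the described pipeline, assuming a dense user-item rating matrix and scikit-learn (the abstract does not give the paper's actual implementation details):

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD, PCA
from sklearn.cluster import KMeans

def profile_and_cluster(R, n_factors=20, n_clusters=5, seed=0):
    """SVD latent factors -> PCA projection to 2D -> K-Means user segments.

    R : (n_users, n_items) rating matrix with 0 for unrated entries.
    """
    svd = TruncatedSVD(n_components=n_factors, random_state=seed)
    user_factors = svd.fit_transform(R)                        # latent user preferences
    user_2d = PCA(n_components=2).fit_transform(user_factors)  # 2D view for visualization
    segments = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=seed).fit_predict(user_factors)
    return user_factors, user_2d, segments
```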
The increasing complexity of healthcare and the growing volume of available medications necessitate the development of efficient medicine recommendation systems to assist healthcare providers and patients in making informed decisions. This paper presents the design and implementation of a web- based medicine recommendation system aimed at improving medication selection and adherence. The system leverages advanced machine learning algorithms, such as collaborative filtering and content-based filtering, to provide personalized medicine recommendations based on user profiles, medical history, and specific health conditions. The architecture of the system comprises a user-friendly frontend developed using React.js, which allows for seamless interaction and visualization of recommendations. The backend is powered by Flask, facilitating the handling of user requests, database interactions, and machine learning model deployment. A PostgreSQL database is employed to securely store user data, medication details, and historical interactions, ensuring data integrity and security
In today's age of information overload, companies employ a number of strategies to help people make smart choices in various areas on the Internet, including what to buy, how to buy it, how to perform certain tasks, how to spend their leisure time, and even whom to date. Companies develop recommender systems for these reasons, so that they can offer people personal, affordable and high-quality recommendations. For example, Google makes use of a recommender system to maximize its targeted ads revenue. Many e-commerce websites use recommender systems to suggest to customers that "one who bought this can buy this also". Facebook has developed a recommender system to suggest people to tag in pictures. Various recommender system algorithms have been proposed so far to provide affordable, personal and efficient recommendations. Through this paper we try to shed light on the research aspects of the recommender system and then analyse a possible way to design a recommender system algorithm. I. INTRODUCTION In this information age, where the amount of information on the World Wide Web (WWW) is rising significantly along with the number of Internet users, companies find it increasingly important to search, map and help users get the relevant information according to their preferences, interests, and tastes. Companies use various recommender systems in their respective fields to filter, prioritize and deliver significant information efficiently so that the problem of information overload can be alleviated. Recommender systems are capable of searching efficiently through large volumes of dynamically generated information to provide personalized services and content to users. A recommender system is basically an information system that performs information filtering and is generally used to predict the "preference" or "rating" a user would give to a product. It has become quite popular in recent times and is used in myriad areas including music, movies, books, news, social tags, search queries, research articles, and products. Understanding the significance of recommender systems in today's era of overgrowing data, and of filtering that data to support good decisions in social, commercial and educational settings, we present this paper to illustrate the research aspects of the recommender system and then analyse a possible way to design a recommender system algorithm. Section II of this paper describes the types of recommender systems, Section III discusses the objectives of designing a recommender system algorithm, Section IV reveals the underlying principle behind designing a recommender system algorithm, Section V analyses a possible way to design a recommender system algorithm, Section VI examines the challenges in designing a recommender system algorithm, and Section VII wraps up the paper with a set of conclusions.
It is the month of October, and in Bengaluru, India, most schools have completed mid-term examinations. Leks has two kids, and they are looking forward to a short vacation nearby. After online research and discussions, they converged on Pondicherry (southern part of India and not very far from Bangalore or Chennai) for the trip. Recommendations are at the heart of every modern e-commerce platform, from travel booking sites to fashion marketplaces. In this chapter, the author delves into the evolution of recommendation algorithms, tracing their journey from basic methodologies to advanced artificial intelligence-driven systems. Along the way, conceptual nuances are explored, offering readers a clear understanding of how these algorithms work and why they matter. The narrative highlights how advancements in AI have transformed recommendation systems, presenting a vivid progression of algorithms and the significant improvements they bring to customer engagement. With practical use cases from e-commerce, the chapter provides valuable insights and examples, equipping readers with ideas to design and deploy recommendation systems across diverse scenarios. This chapter not only showcases the magic of delivering personalized customer experiences but also demonstrates how effective recommendations drive substantial business impact, making them indispensable for any e-commerce platform.
The Congresso Brasileiro de Pesquisa e Desenvolvimento em Design (P&D Design) stands out among the most relevant and traditional scientific events in the field in Brazil. Held biennially, the conference rotates among different regions and universities of the country. The 15th edition took place in Manaus, AM, hosted by the Universidade Federal do Amazonas (UFAM) and coordinated and organized by the PPGD, the graduate program in Design of the Faculty of Technology. The proceedings of this edition contain 568 papers addressing relevant topics in the field, organized into the following research tracks: Design and Education; Design and Human Factors; Design and Technology; Design, Society and Sustainability; Design: History and Theory; Design: Methodologies and Processes; Design: Research, Development and Innovation (RD&I) and Entrepreneurship; Design Practices and Tools. All published papers are original work by research groups from every region of Brazil as well as from other countries.
With the shifting focus of organizations and governments towards digitization of academic and technical documents, there has been an increasing need to use this reserve of scholarly documents for developing applications that can facilitate and aid in better management of research. In addition to this, the evolving nature of research problems has made them essentially interdisciplinary. As a result, there is a growing need for scholarly applications like collaborator discovery, expert finding and research recommendation systems. This research paper reviews the current trends and identifies the challenges existing in the architecture, services and applications of big scholarly data platform with a specific focus on directions for future research.
Traditional recommendation systems, which rely on static user profiles and historical interaction data, frequently face difficulties in adapting to the rapid changes in user preferences that are typical of dynamic environments. In contrast, recommendation algorithms based on deep reinforcement learning are capable of dynamically adjusting their strategies to accommodate real-time fluctuations in user preferences. However, current deep reinforcement learning recommendation algorithms encounter several challenges, including the oversight of item features associated with high long-term rewards that reflect users’ enduring interests, as well as a lack of significant relevance between user attributes and item characteristics. This leads to an inadequate extraction of personalized information. To address these issues, this study presents a novel recommendation system known as the Multi-Level Hierarchical Attention Mechanism Deep Reinforcement Recommendation (MHDRR), which is fundamentally grounded in a multi-layer attention mechanism. This mechanism consists of a local attention layer, a global attention layer, and a Transformer layer, allowing for a detailed analysis of individual attributes and interactions within short-term preferred items, while also exploring users’ long-term interests. This methodology promotes a comprehensive understanding of users’ immediate and enduring preferences, thereby improving the overall effectiveness of the system over time. Experimental results obtained from three publicly available datasets validate the effectiveness of the proposed model.
The object of this research is the process of recommending items to the system user and improving the recommendation results. The principle of the recommendation system and the stages of creating recommendations are considered. The article describes various methods of filtering information to generate personalized recommendations, and the advantages and disadvantages of each method are identified. Taking into account both theoretical and practical aspects, it is therefore important to conduct research to correct the limitations characteristic of the various filtering methods. A compact hybrid user model is proposed for the collaborative filtering method. This model overcomes limitations often encountered with traditional approaches and allows more efficient generation of personalized recommendations. A movie recommender system is used as an illustrative example of the described approach. The model combines user ratings with descriptions of the content of objects and uses the concept of a genre interest rate, obtained by deriving formulas for calculating a hybrid feature, i.e., an indicator of how much the user is interested in a particular genre. This facilitates the formation of a set of similar users (neighbours) for an active user. The proposed approach aims to ensure high accuracy of recommendations, reduce the requirements for computing resources, and make effective use of information about the content of objects. The hybrid model combines the benefits of memory-based collaborative filtering and model-based recommender systems, providing accuracy and scalability. Several examples with calculation results are given. A conclusion is drawn regarding the potential improvement of the quality of recommendations, taking into account the capabilities of the developed algorithm.
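The abstract does not give the genre interest rate formula; purely for illustration, the sketch below uses one plausible definition (the user's mean rating on items of a genre relative to their overall mean rating). The paper's actual hybrid feature may be defined differently.

```python
import numpy as np

def genre_interest(ratings, item_genres, user, genre):
    """Hypothetical genre interest rate: mean rating the user gave to items of a genre,
    normalized by the user's overall mean rating. Illustrative only, not the paper's formula.

    ratings     : (n_users, n_items) matrix, 0 = unrated
    item_genres : dict mapping item index -> set of genre names
    """
    rated = np.flatnonzero(ratings[user])
    if rated.size == 0:
        return 0.0
    in_genre = [i for i in rated if genre in item_genres.get(i, set())]
    if not in_genre:
        return 0.0
    return float(ratings[user, in_genre].mean() / ratings[user, rated].mean())
```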
The company's kitchen set ordering system still uses a manual process, so when a customer wants to order a kitchen set, the customer has to come to the company. Another problem arises when the customer wants to browse sample photos of the kitchen set they wish to order: this information is not available. This research produces a website-based system using the Item-Based Collaborative Filtering method for calculating recommendations of example kitchen set photos, so that the system can recommend example kitchen set photos to customers. The system aims to make the ordering process easier for customers and admins, and to allow orders to be placed anywhere and at any time.
In recent years, the study of recommendation systems has become crucial, capturing the interest of scientists and academics worldwide. Music, books, movies, news, conferences, courses, and learning materials are some examples of using the recommender system. Among the various strategies employed, collaborative filtering stands out as one of the most common and effective approaches. This method identifies similar active users to make item recommendations. However, collaborative filtering has two major challenges: sparsity and gray sheep. Inspired by the remarkable success of deep learning across a multitude of application areas, we have integrated deep learning techniques into our proposed method to effectively address the aforementioned challenges. In this paper, we present a new method called Enriched_AE, focused on autoencoder, a well-regarded unsupervised deep learning technique renowned for its superior ability in data dimensionality reduction, feature extraction, and data reconstruction, with an augmented rating matrix. This matrix not only includes real users but also incorporates virtual users inferred from opposing ratings given by real users. By doing so, we aim to enhance the accuracy of predictions, thus enabling more effective recommendation generation. Through experimental analysis of the MovieLens 100K dataset, we observe that our method achieves notable reductions in both RMSE (Root Mean Squared Error) and MAE (Mean Absolute Error), underscoring its superiority over the state-of-the-art collaborative filtering models.
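One plausible reading of the "virtual users inferred from opposing ratings" is mirroring each real user's ratings around the rating-scale midpoint; a sketch of that construction (an assumption on our part, since the exact rule is given in the paper) follows.

```python
import numpy as np

def augment_with_virtual_users(R, r_min=1, r_max=5):
    """Append one virtual user per real user whose ratings oppose the real ones
    (mirrored around the scale midpoint); unrated entries stay 0. This is an
    illustrative reading of the abstract, not necessarily the paper's exact rule.
    """
    virtual = np.where(R > 0, (r_min + r_max) - R, 0)
    return np.vstack([R, virtual])   # augmented matrix fed to the autoencoder
```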
Nowadays, recommender systems are used in many online applications to filter information and help users select what is relevant to them, preventing users from becoming overwhelmed by the massive number of possible options. To provide efficient and accurate personalized recommendations, such systems require a large amount of personal data, obtained by collecting privacy-sensitive data from users such as ratings, consumption histories, and personal profiles. However, the privacy risks in gathering and processing personal data are often underestimated or ignored. The common privacy risks associated with recommender systems stem from the lack of adequate implementation of privacy protection principles. This review article aims to evaluate the privacy risks in recommender systems. The paper first discusses recommender systems and privacy concepts. It then gives an overview of the data used in recommender systems and examines the associated risks to data privacy. After that, the paper discusses relevant research areas for privacy-protection techniques and their applicability to recommender systems. The paper covers various aspects of user privacy, in both technical and non-technical environments, as well as privacy design strategies and privacy engineering approaches for developing a privacy-friendly recommender system. Finally, the paper concludes with a discussion on applying and combining different privacy-protection techniques. The results indicate that better user privacy can be achieved if privacy is considered by design and by default. Moreover, prediction accuracy is not limited by better user privacy when privacy by architecture is considered alongside privacy by design. Keywords: Recommender systems, Privacy risk, Privacy design strategies.
One of the most typical examples of interactive multi-criteria decision making (MCDM) is the modeling of predictions by collaborative filtering recommender systems (CFRS), a class of methods that recommends items to users (customers) on the basis of the preferences of other users for these items. Based on the preferences over items expressed by users in the past, a CFRS generates a class of users similar to the targeted user and then recommends those items which were approved by the users from the generated class. In many cases, existing similarity relations cannot provide reliable prediction of item recommendations in CFRS applications, which is due not only to users' interest in objects of complex structure, but also to the lack of subjective information related to users' social status, in short, their profile data. In many CFRS models it is necessary to consider the interactions of model criteria and their individual degrees of dominance and influence on possible predictive decisions. The idea of our approach is based on the use of possibility theory, where criteria importance levels and criteria pair interaction indexes in the model environment are evaluated by the decision maker or by experts, the people involved in the assessments. In our modeling scheme, the generated possibility degrees of influence of the criteria on the alternatives take into account the values of the criteria pairwise interaction indexes. Based on the maximum principle for the Shapley entropy determined on the criteria, a mathematical programming problem is formulated whose solution is the possibility measure distribution generated on the set of criteria. We use the generated possibility measure in the definition of extensions of aggregation operators such as the ordered weighted averaging (OWA) and finite Choquet averaging (CA) operators. The confidence discrimination q-rung picture linguistic fuzzy (CD-q-RPLF) environment of expert evaluations is considered as the aggregation arguments. The newly constructed confidence q-rung picture linguistic fuzzy ordered weighted averaging (C-q-RPLFOWA) and confidence q-rung picture linguistic fuzzy Choquet averaging (C-q-RPLFCA) operators are used in the evaluation of predictions of the CFRS. We develop a CD-q-RPLF CFRS methodology, in which users' profile data are aggregated by the constructed operators in a new users' similarity measure under the q-RPLF environment. New extensions of discrimination values are included in the construction of users' similarity measures, and the classical Jaccard index is transformed into discrimination q-RPLFNs. The main goal of the results illustration was to aggregate users' profile data by the constructed operators in the similarity measure under confidence discrimination q-RPLF environments. To illustrate the results obtained, the prediction problem of the constructed collaborative filtering recommender system is considered.
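For orientation, the classical ordered weighted averaging (OWA) operator that these fuzzy operators extend is simply a weighted sum over the arguments sorted in descending order; only this classical form is sketched below, not the C-q-RPLFOWA or Choquet extensions from the paper.

```python
import numpy as np

def owa(values, weights):
    """Classical OWA: sort the arguments in descending order and take the weighted sum."""
    values = np.sort(np.asarray(values, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    assert values.shape == weights.shape and np.isclose(weights.sum(), 1.0)
    return float(values @ weights)

# Example: with weights (0.5, 0.3, 0.2), owa([0.4, 0.9, 0.6], [0.5, 0.3, 0.2])
# = 0.5*0.9 + 0.3*0.6 + 0.2*0.4 = 0.71
```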