Article

Recommender Systems for Evaluating Computer Messages

Abstract

Many people read online newsgroups, but their evaluations of messages are not collected. These people would continue to read and evaluate if a centralized entry and collection system were put in place. But such a system would create incentives to avoid the burden of reading unhelpful messages by free-riding on the evaluations provided by others. Free-riding leads to too few evaluations, and those evaluations come from an unrepresentative group, so they may be misleading. Three centralized mechanisms could improve the provision of evaluations: subscription services, transaction-based compensation, and exclusion. In subscription services, some readers would pay a regular fee to receive the evaluations of individuals acting as professional evaluators. In transaction-based compensation, the system pays cash to those who provide early evaluations; those who evaluate the most messages would reap a surplus, while those who evaluate the least would have to pay. In exclusion, the threat of excluding readers from the group receiving evaluations could induce them to evaluate. This mechanism provides incentives without explicit payments, but may waste resources if low-quality evaluators make costly efforts that yield little value.
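The transaction-based compensation mechanism described above can be illustrated with a small, hypothetical settlement model (the payment rule and numbers are illustrative, not taken from the paper): each early evaluation earns a fixed payment, funded by an equal fee charged to every participant, so heavy evaluators net a surplus while free-riders pay in.

```python
def settle(evaluations, payment_per_eval=1.0):
    """Toy settlement for transaction-based compensation.

    evaluations: dict mapping user -> number of early evaluations provided.
    Each evaluation earns `payment_per_eval`; the total payout is funded by
    an equal fee charged to every participant, so the budget balances.
    Returns net transfers (positive = surplus, negative = pays in).
    """
    total_payout = payment_per_eval * sum(evaluations.values())
    fee = total_payout / len(evaluations)  # equal share of the funding burden
    return {u: payment_per_eval * n - fee for u, n in evaluations.items()}

balances = settle({"alice": 5, "bob": 1, "carol": 0})
# alice evaluates most and nets a surplus; carol free-rides and pays
```

The transfers sum to zero by construction, mirroring the paper's point that those who evaluate least end up paying those who evaluate most.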

... Figure 2.1 gives a general overview of a recommendation system and its main components. In general, an RS first collects the historical preferences a user has expressed, either explicitly or implicitly [11] [49]. Then it finds other users with similar patterns, or items similar to those in the user's history. ...
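The collect-then-match pipeline the excerpt describes can be sketched as user-based collaborative filtering over a tiny ratings matrix (a minimal illustration, not any particular system's algorithm):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two users' rating dicts (item -> rating)."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm

def recommend(target, ratings, k=2):
    """Score items unseen by `target` via the k most similar users."""
    sims = sorted(((cosine(ratings[target], r), u)
                   for u, r in ratings.items() if u != target),
                  reverse=True)[:k]
    scores = {}
    for sim, u in sims:
        for item, rating in ratings[u].items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

ratings = {
    "u1": {"a": 5, "b": 4},
    "u2": {"a": 5, "b": 5, "c": 4},  # similar taste to u1
    "u3": {"x": 1, "y": 2},          # no overlap with u1
}
suggestions = recommend("u1", ratings)  # unseen items, best first
```

Here "u2" has similar historical patterns to "u1", so u2's unseen item "c" is ranked first.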
... If distance(q, q_i) ≠ 0 and s_qi = 1 such that q_j ∈ H_v, then user u sends q to user v ...
... End If / End For; if q.TTL is not equal to zero, then for each useful friend v ∈ friend(u), user u computes useful(q, v) ...
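The two pseudocode fragments above appear to describe TTL-bounded forwarding of a query to "useful" friends in a P2P overlay. A hedged reconstruction of that idea (the function names, the `useful` predicate, and the traversal order are assumptions for illustration, not the cited paper's exact algorithm) might look like:

```python
def forward_query(query, origin, friends, useful, ttl):
    """Forward `query` hop by hop to friends judged useful, bounded by a TTL.

    friends: dict mapping user -> list of friends.
    useful(query, v): predicate deciding whether v should receive the query.
    Returns the set of users the query reached. Illustrative sketch only.
    """
    reached = set()
    frontier = [(origin, ttl)]
    while frontier:
        u, t = frontier.pop()
        if t == 0:          # TTL exhausted: stop forwarding on this branch
            continue
        for v in friends.get(u, []):
            if v not in reached and useful(query, v):
                reached.add(v)
                frontier.append((v, t - 1))
    return reached

friends = {"u": ["v", "w"], "v": ["x"], "w": []}
hits = forward_query("q", "u", friends, lambda q, v: True, ttl=2)
```

With TTL = 2 the query reaches direct friends and friends-of-friends, but no further.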
Article
Recommendation systems (RS) and P2P are both complementary in easing large-scale data sharing: RS to filter and personalize users' demands, and P2P to build decentralized large-scale data sharing systems. However, many challenges need to be overcome when building scalable, reliable and efficient RS atop P2P. In this work, we focus on large-scale communities, where users rate the contents they explore, and store in their local workspace high quality content related to their topics of interest. Our goal then is to provide a novel and efficient P2P-RS for this context. We exploit users' topics of interest (automatically extracted from users' contents and ratings) and social data (friendship and trust) as parameters to construct and maintain a social P2P overlay, and generate recommendations. The thesis addresses several related issues. First, we focus on the design of a scalable P2P-RS, called P2Prec, by leveraging collaborative- and content-based filtering recommendation approaches. We then propose the construction and maintenance of a P2P dynamic overlay using different gossip protocols. Our performance experimentation results show that P2Prec has the ability to get good recall with acceptable query processing load and network traffic. Second, we consider a more complex infrastructure in order to build and maintain a social P2P overlay, called F2Frec, which exploits social relationships between users. In this new infrastructure, we leverage content- and social-based filtering, in order to get a scalable P2P-RS that yields high quality and reliable recommendation results. Based on our extensive performance evaluation, we show that F2Frec increases recall, and the trust and confidence of the results with acceptable overhead. Finally, we describe our prototype of P2P-RS, which we developed to validate our proposal based on P2Prec and F2Frec.
... Because the user has to examine the item and then rank it on the rating scale, it imposes a cognitive cost on the user, which might lead to several adverse effects: lowered motivation and incentives for evaluators [Avery and Zeckhauser, 1997], biased evaluators [Palme, 1997], the free-riding problem, and difficulty in achieving a critical mass of users. In order to solve this problem, researchers started to look at other ways to gather user preferences, which are referred to as implicit ratings. ...
... One of the elements that brings together many of the tactics mentioned above is the ability "to manage to change one's opinion, thanks to the arguments used and to the psychological and emotional reasons transmitted" [Artal, 2003]. This means that the customer generally does not acquire the product itself, but the perception or reassurance that the product is very useful for them [Artal, 2003; Cámara and Sanz, 2001; Hills, 2000; Chapman, 1992]. Therefore, the objective is to create that positive image, to convince with arguments, and to create a pleasant atmosphere during the selling process. ...
... If the data is sensitive personal information (as in medical applications), revealing it with high precision entails a privacy cost that might incentivize individuals to decrease the disclosure precision [47,19]. Also, producing high-precision data may require a certain amount of effort (possibly monetary): this is the case in crowdsourcing [16] or recommender systems [21,4,29] where providing content or feedback requires effort, or in applications where the data is produced by costly computations. ...
... It is a property of the estimator and it is a classical proxy for assessing its quality. In particular, in the linear regression setting, it does not depend on the realization of the values ỹ_i but only on the independent variables x_i and on the precisions of the response variables ỹ_i (unlike the empirical mean squared error). ...
Preprint
Full-text available
We consider the problem of linear regression from strategic data sources with a public good component, i.e., when data is provided by strategic agents who seek to minimize an individual provision cost for increasing their data's precision while benefiting from the model's overall precision. In contrast to previous works, our model tackles the case where there is uncertainty on the attributes characterizing the agents' data -- a critical aspect of the problem when the number of agents is large. We provide a characterization of the game's equilibrium, which reveals an interesting connection with optimal design. Subsequently, we focus on the asymptotic behavior of the covariance of the linear regression parameters estimated via generalized least squares as the number of data sources becomes large. We provide upper and lower bounds for this covariance matrix and we show that, when the agents' provision costs are superlinear, the model's covariance converges to zero but at a slower rate relative to virtually all learning problems with exogenous data. On the other hand, if the agents' provision costs are linear, this covariance fails to converge. This shows that even the basic property of consistency of generalized least squares estimators is compromised when the data sources are strategic.
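The covariance behavior discussed above is a property of generalized least squares. For independent observations with known per-source precisions, GLS reduces to precision-weighted averaging; here is a minimal sketch for estimating a common scalar mean (illustrative only, not the paper's full regression model):

```python
def wls_mean(values, precisions):
    """Precision-weighted estimate of a common mean, and its variance.

    With independent observations y_i of variance 1/p_i, the GLS estimator
    is sum(p_i * y_i) / sum(p_i), and its variance is 1 / sum(p_i):
    higher-precision (more effortful) data shrinks the estimator's variance,
    which is why agents' precision choices drive the model's consistency.
    """
    total_precision = sum(precisions)
    estimate = sum(p * y for p, y in zip(precisions, values)) / total_precision
    return estimate, 1.0 / total_precision

est, var = wls_mean([1.0, 3.0], [1.0, 3.0])
# the precision-3 observation dominates the estimate; variance is 1/4
```

Note, as in the excerpt, that the estimator's variance depends only on the precisions, not on the realized values.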
... This tendency leads to a situation of too few evaluations for a recommender system to make effective recommendations (Avery and Zeckhauser 1999). ...
... Other than relying on the altruism of users to rate items for the benefit of others, incentives may play a role in encouraging users to provide ratings. Several suggestions have been offered in the literature to deal with the "free-ride" issue, including 1) subscription services, 2) pay-per-use, 3) compensation for ratings, and 4) exclusion from recommendations (Avery and Zeckhauser 1999; Resnick and Varian 1997). ...
Article
Full-text available
Scholarly research has extensively examined a number of issues and challenges affecting recommender systems (e.g. ‘cold-start’, ‘scrutability’, ‘trust’, ‘context’, etc.). However, a comprehensive knowledge classification of the issues involved with recommender systems research has yet to be developed. A holistic knowledge representation of the issues affecting a domain is critical for research advancement. The aim of this study is to advance scholarly research within the domain of recommender systems through formal knowledge classification of issues and their relationships to one another within recommender systems research literature. In this study, we employ a rigorous ontology engineering process for development of a recommender system issues ontology. This ontology provides a formal specification of the issues affecting recommender systems research and development. The ontology answers such questions as, “What issues are associated with ‘trust’ in recommender systems research?”, “What are issues associated with improving and evaluating the ‘performance’ of a recommender system?” or “What ‘contextual’ factors might a recommender systems developer wish to consider in order to improve the relevancy and usefulness of recommendations?” Additionally, as an intermediate representation step in the ontology acquisition process, a concept map of recommender systems issues has been developed to provide conceptual visualization of the issues so that researchers may discern broad themes as well as relationships between concepts. These knowledge representations may aid future researchers wishing to take an integrated approach to addressing the challenges and limitations associated with current recommender systems research.
... Users are generally unwilling to take extra actions if these do not bring benefits that they instantly perceive [7]. This often leads them to refrain from expressing opinions on browsed products [8]. ...
... Implicit techniques avoid the grave disadvantages of explicit techniques, since preference discovery takes place in a transparent way, invisible to the user, who is not distracted or asked to perform extra tasks. Unobtrusive user monitoring does not require the special motivation that explicit methods do [8,10,11], and can be performed continuously. ...
Article
Full-text available
The purpose of this paper is to investigate how a study group consisting of 85 participants interact with selected online stores and how the interactions correlate with interest in products for each store. This work uses a quantitative research methodology involving a dedicated tool for implicit monitoring of human-website interaction, instrumented for selected stores and registering product interest for the benefit of recommender systems. One of the findings was that to predict product interest it seems a good idea to start with monitoring scrolling activities, mouse usage and time spent on a website and its sections. Rich product information played a crucial role in shaping user interest. For all stores in the study, a misclassification rate of 28.7% was achieved in a CART model, while modeling for particular stores, it varied from 39.3% to 24.8%, and we feel that it reflected different page layouts in stores. Models built to represent individual behavior patterns of most active study participants varied in terms of misclassification rates from 17% to 26%, and the analysis suggested that individual preference modeling could be considered for recommender systems, in particular for key customers or customer groups. The study leads to insights into online store user behavior and product interest prediction, and as a result to possible implications for recommender systems design, including ergonomics optimization and interaction personalization.
... Zeckhauser [13] demonstrate this by showing that the payoffs of the users of a recommender system may resemble the payoffs in the famous Prisoner's Dilemma game. Biased Recommendations. ...
... (On February 18, 2000 for 4184 information products of a total of 7875 in the VU recommendations were available.) This seems to indicate that for unobtrusive recommender services like the VU the argument of Avery and Zeckhauser [13] does not apply. The myVU recommender services work on a tit-for-tat basis. ...
Article
Full-text available
In this article we investigate the role of recommender systems and their potential in the educational and scientific environment of a Virtual University. The key idea is to use the information aggregation capabilities of a recommender system to improve the tutoring and consulting services of a Virtual University in an automated way, and thus scale tutoring and consulting in a personalized way to a mass audience. We describe the recommender services of myVU, the collection of personalized services of the Virtual University (VU) of the Vienna University of Economics and Business Administration, which are based on observed user behavior and self-assignment of experience and are currently being field-tested. We show how the usual mechanism design problems inherent to recommender systems are addressed in this prototype.
... al. [12] argue that current recommender systems depend on the altruism of a subset of their users who are willing to rate items without having received recommendations. Economists have reasoned that even if rating items required no effort at all, many users would prefer to wait for other users to rate those items first [13]. It is therefore necessary to establish means of encouraging users to rate the items available. ...
Conference Paper
Seeing that it is nowadays technologically feasible to store huge volumes of data at ever lower cost, current e-commerce systems usually offer a large number of products to their users. Therefore, there is a constant need for personalization in these systems. In this context, recommender systems try to bring such personalization by making suggestions and providing information about the items available. However, such systems still present numerous limitations. In this work, we describe the main drawbacks these systems present nowadays and the problems they cause, which motivate the current main research challenges of the area. Moreover, we describe some scientific efforts made to reduce, or even eliminate, such drawbacks. These efforts consist of a diversity of methods, including data mining and agent-based techniques, which were adapted to be employed in recommender systems. We also discern, according to the type of method employed, how they are commonly divided in the literature. In general, each drawback is related to one type of method. At the end of this work, we present the conclusions drawn from this bibliographic review.
... Similarly, new items must receive ratings before they can be recommended by an RS. This early rater issue arises because users who provide the first ratings for new items receive little benefit (Avery and Zeckhauser 1997). As a subclass of CF approaches, MF approaches suffer equally from both new user and new item problems. ...
Article
Many online retailers, such as Amazon, use automated product recommender systems to encourage customer loyalty and cross-sell products. Despite significant improvements to the predictive accuracy of contemporary recommender system algorithms, they remain prone to errors. Erroneous recommendations pose potential threats to online retailers in particular, because they diminish customers’ trust in, acceptance of, satisfaction with, and loyalty to a recommender system. Explanations of the reasoning that leads to recommendations might mitigate these negative effects. That is, a recommendation algorithm ideally would provide both accurate recommendations and explanations of the reasoning for those recommendations. This article proposes a novel method to balance these concurrent objectives. The application of this method, using a combination of content-based and collaborative filtering, to two real-world data sets with more than 100 million product ratings reveals that the proposed method outperforms established recommender approaches in terms of predictive accuracy (more than five percent better than the Netflix Prize winner algorithm according to normalized root mean squared error) and its ability to provide actionable explanations, which is also an ethical requirement of artificial intelligence systems.
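The normalized root mean squared error used above for comparison against the Netflix Prize winner can be computed as RMSE divided by the rating range; this is one common normalization, and the paper's exact variant is not specified here:

```python
from math import sqrt

def nrmse(predicted, actual, lo=1.0, hi=5.0):
    """RMSE of rating predictions, normalized by the rating range [lo, hi].

    A lower value means predictions sit closer to the true ratings,
    expressed as a fraction of the full scale.
    """
    mse = sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
    return sqrt(mse) / (hi - lo)

err = nrmse([3.5, 4.0, 2.0], [4.0, 4.0, 1.0])  # on a 1-5 rating scale
```

Normalizing by the range makes error figures comparable across datasets with different rating scales.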
... Furthermore, during fast Internet browsing, some parts of a webpage can attract a lot of unconscious attention. Therefore, unobtrusive implicit measures are better suited for the purpose of the study, allowing the monitored subjects to focus normally on the tasks performed, not causing extraneous cognitive load and not requiring special motivation to continuously provide explicit ratings [24,25,26,27]. ...
Chapter
The abundance of advertising in e-commerce results in limited user attention to marketing-related content on websites. As far as recommender systems are concerned, presenting recommendation items in a particular manner becomes equally relevant as the underlying product selection algorithms. To enhance content presentation effectiveness, marketers experiment with layout and visual intensity of website elements. The presented research investigates those aspects for a recommending interface. It uses a quantitative research methodology involving gaze tracking for implicit monitoring of human-website interaction in an experiment instrumented for a simple-structure recommending interface. The experimental results are discussed from the perspective of the attention attracted by recommended items in various areas of the website and with varying intensity, while the main goal is to provide advice on the most viable solutions.
... This incorporates some issues already discussed, such as the time and effort needed to provide a rating, and provides a framework for reasoning about how users are motivated to interact with recommender systems. Avery and Zeckhauser [11] argue that external incentives are needed to provide an optimal set of recommendations and that market-based systems or social norms can provide a framework for promoting user contribution to the rating data. ...
Article
Full-text available
Nowadays, there are several repositories of educational resources which provide support to students in the teaching-learning process. However, one of the problems students face when searching for resources across different repositories is that they obtain a large number of results and either lose too much time selecting a resource or cannot find what they need. This paper presents EmoRemSys, an educational recommender system based on affective computing techniques for locating educational resources by using emotion detection. The recommendation and emotion detection processes are described in detail. Results show a high precision in the recommendations provided by EmoRemSys.
... To evaluate the movie recommendation engine three different metrics are used. These metrics are precision, recall and F-measure (Billsus and Pazzani, 1998; Avery and Zeckhauser, 1997; Karypis, 2001). The three metrics are popular in information retrieval. ...
Article
Over the last decade, there has been a burgeoning of data due to social media, e-commerce and overall digitisation of enterprises. The data is exploited to make informed choices, predict marketplace trends and patterns in consumer preferences. Recommendation systems have become ubiquitous after the penetration of internet services among the masses. The idea is to make use of filtering and clustering techniques to suggest items of interest to users. For a media commodity like movies, suggestions are made to users by finding user profiles of individuals with similar tastes. Initially, user preference is obtained by letting them rate movies of their choice. Upon usage, the recommender system will be able to understand the user better and suggest movies that are more likely to be rated higher. The experiment results on the MovieLens dataset provide a reliable model which is precise and generates more personalised movie recommendations compared to other models.
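The three retrieval metrics named in the excerpt above can be computed directly from the sets of recommended and relevant items; these are the standard definitions, not tied to the paper's dataset:

```python
def precision_recall_f1(recommended, relevant):
    """Standard IR metrics over item sets.

    precision = |rec ∩ rel| / |rec|; recall = |rec ∩ rel| / |rel|;
    F-measure = harmonic mean of precision and recall.
    """
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# 4 movies recommended, 3 actually relevant, 2 overlap
p, r, f = precision_recall_f1(["m1", "m2", "m3", "m4"], ["m1", "m2", "m5"])
```

For this toy case precision is 0.5 and recall is 2/3; the F-measure balances the two.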
... Implicit measures are generally less accurate than explicit ones [1], but they are usually available in large quantities and can potentially be acquired without any extra time or effort from the user. Moreover, unobtrusive implicit monitoring allows users to focus normally on the tasks performed and does not require special motivation to continuously provide explicit ratings [3,4,5], even when the possible benefits, e.g. a personalized interface in the future, are clear. ...
Article
The convenience of online shopping is an attractive benefit for customers. At the same time, the online purchase process is often complicated. As a result, some customers have difficulty with, or even fail to complete, the process. This article presents a tool for detailed monitoring of users’ interaction with shopping websites. The data collected can be used for many purposes, including interface and content adaptation. By means of personalization, a website can automatically adapt to suit the needs of a particular user, thus vastly improving human-media interaction and its efficiency. In this article, the human-website interaction monitoring tool ECPM is presented and sample results based on selected B2C stores are discussed.
... Sarwar et al. [34] affirm that current recommender systems depend on the altruism of a set of users who are willing to rate many items without receiving many recommendations. Economists have speculated that even if rating required no effort at all, many users would choose to delay considering items to wait for their neighbors to provide them with recommendations [4]. Thus, it is necessary to find a way to encourage users to make evaluations about items available in the system. ...
... This problem shows up in domains such as news articles where there is a constant stream of new items and each user only rates a few. It is also known as the "early rater" problem, since the first person to rate an item gets little benefit from doing so: such early ratings do not improve a user's ability to match against others (Avery and Zeckhauser, 1997). This makes it necessary for recommender systems to provide other incentives to encourage users to provide ratings. ...
Article
Full-text available
If one knows the purchasing behavior of one's customers in online retail, software solutions can derive personalized recommendations from it. Online retailers have been doing this for many years. Transaction data from online sales, rating data and, more recently, context information are collected and processed with sophisticated algorithms to compute product recommendations, sometimes almost in real time. The further development of recommendation algorithms and systems is advancing both in research and in practice in the course of the next e-commerce generation. Today, the software solutions used by large online retailers are mostly highly customized, and their details are withheld from the public. To improve the options available to smaller online providers, the Competence Center Wirtschaftsinformatik of the Hochschule München (CCWI), together with industry partners, developed a lightweight, platform-independent solution based on open-source technologies such as Apache Mahout that can be deployed with reasonable effort. In addition to a web-service interface for easy integration of the recommendation service into one's own application, a web-based application was also developed with which the recommendation algorithms used can be configured and tested, in order to better understand the effects of parameter changes on recommendation generation.
... This method is called the implicit approach, since it has no effect whatsoever on the user's behavior. The user does not have to enter any explicit data manually; in a study of users' expectations toward blogs, manual entry was rejected as unacceptable [AZ97]. A further advantage of the implicit method over the explicit one is that it is harder to manipulate. ...
... As research on product recommendation progressed, it gradually developed into research on personalized product recommendation. Personalized product recommendation is divided, according to the recommendation algorithm, into content-based filtering (CBF) and collaborative filtering (CF). The former analyzes a consumer's past purchase history and measures the similarity between products in order to derive product preferences and make recommendations (Avery & Zeckhauser, 1997; Basu et al., 1998), while the latter is a word-of-mouth type of recommendation that uses the product preferences of consumers whose purchase histories are similar to the target consumer's (Hill et al., 1995; Xiao & Benbasat, 2007). Such product recommendation services have served as a representative means of helping consumers purchase the products they want in online shopping malls, either by selecting and providing only the information that best matches users' needs or by providing information that influences consumer choice (Cho et al., 2005; Park & Kim, 2012; Senecal & Nantel, 2004). ...
Article
Full-text available
This study examined the effects of consumers' usefulness and the hedonic perception of their willingness to provide information and cooperation intention in the use of location-context based mobile product recommendation services for fashion stores. We examined the influence of consumers' beliefs regarding marketer's information practices on their perceptions of provided services. In addition, the moderating effects of consumers' epistemic curiosity and information control level were investigated. A total of 400 smartphone users were included as participants for the present study. The results showed that consumers who perceived information services as more hedonic and useful are more likely to provide personal information and cooperate with marketers. The findings of the study suggest that fashion retailers who plan to introduce mobile product recommendation services should pay attention to the hedonic aspects of the services. In addition, the effects of usefulness and hedonic perception of the two dependent variables were different according to the level of epistemic curiosity and information control.
... (Basu et al., 1998). For this type, the limitations of deriving preferences from a customer's past purchase history, and the difficulty of applying it to new customers with no purchase history, have been pointed out (Avery & Zeckhauser, 1997). Collaborative filtering (CF) makes recommendations using the product preferences of neighboring customers whose purchase histories resemble the target customer's, and can be regarded as word-of-mouth recommendation (Hill et al., 1995; Xiao & Benbasat, 2007). ...
Article
Full-text available
This study examined the effects of product recommendation services as an atmosphere for online mass customization shopping sites on consumers' cognitive and affective responses. We conducted a between-subject experimental study using a convenience sample of college students. A total of 196 participants provided usable responses for structural equation modeling analysis. The findings of the study support the S-O-R model for a product recommendation system as an element of the shopping environment with an influence on OMC product evaluations and arousal. The results showed that OMC product recommendation service positively affected cognitive and affective responses. The findings of the study suggest that OMC retailers might pay attention to the affective and cognitive responses of consumers through product recommendation services that can enhance product evaluations and OMC usage intentions.
... This problem shows up in domains such as news articles where there is a constant stream of new items and each user only rates a few. It is also known as the "early rater" problem, since the first person to rate an item gets little benefit from doing so: such early ratings do not improve a user's ability to match against others (Avery and Zeckhauser, 1997). This makes it necessary for recommender systems to provide other incentives to encourage users to provide ratings. ...
Article
Full-text available
Recommender systems represent user preferences for the purpose of suggesting items to purchase or examine. They have become fundamental applications in electronic commerce and information access, providing suggestions that effectively prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendation, including content-based, collaborative, knowledge-based and other techniques. To improve performance, these methods have sometimes been combined in hybrid recommenders. This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, EntreeC, a system that combines knowledge-based recommendation and collaborative filtering to recommend restaurants. Further, we show that semantic ratings obtained from the knowledge-based part of the system enhance the effectiveness of collaborative filtering.
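One of the simplest combination strategies such surveys discuss is a weighted hybrid, which blends the scores of two component recommenders; a minimal sketch (the weights and scores below are illustrative, and this is not EntreeC's specific method):

```python
def weighted_hybrid(content_scores, collab_scores, w=0.5):
    """Blend two recommenders' item scores: w*content + (1-w)*collaborative.

    Items missing from one component default to a score of 0.
    Returns a dict of blended scores. Illustrative sketch only.
    """
    items = set(content_scores) | set(collab_scores)
    return {i: w * content_scores.get(i, 0.0)
               + (1 - w) * collab_scores.get(i, 0.0)
            for i in items}

# "b" is moderately liked by both components and wins the blend
blended = weighted_hybrid({"a": 0.9, "b": 0.2}, {"b": 0.8, "c": 0.6}, w=0.5)
best = max(blended, key=blended.get)
```

Tuning `w` shifts the hybrid between purely content-based and purely collaborative behavior, which is one way hybrids mitigate each component's weaknesses.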
... First, the Ocean Registry provides a Resource Rating API that allows users to explicitly rate a given Resource by providing a URI and an associated rating value (e.g., using a 5-point Likert scale). Second, as the effort required to generate adequate numbers of explicit ratings often dissuades participation [25], Ocean also provides an implicit rating scheme whereby ratings are inferred by observing Resource selections (i.e., clickthroughs) made by the Ocean applications during runtime. ...
Conference Paper
Context-awareness is becoming an important foundation of adaptive mobile systems; however, techniques for discovering contextually relevant Web content and Smart Devices (i.e., Smart Resources) remain consigned to small-scale deployments. To address this limitation, this paper introduces Ambient Ocean, a Web search engine for context-aware Smart Resource discovery. Ocean provides scalable mechanisms for supplementing Resources with expressive contextual metadata as a means of facilitating in-situ discovery and composition. Ocean supports queries based on arbitrary contextual data, such as location, biometric details, telemetry data, situational cues, sensor information, etc. Ocean utilizes a combination of crowd-sourcing, context-enhanced query expansion and personalization techniques to continually optimize query results over time. This paper presents Ocean’s conceptual foundations, its reference implementation, and a preliminary evaluation that demonstrates significantly improved Smart Resource discovery results in real-world environments.
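An implicit-rating scheme of the kind the excerpt attributes to Ocean can be sketched by mapping observed clickthrough counts onto the same 5-point scale used for explicit ratings; the linear mapping below is an assumption for illustration, not Ocean's documented formula:

```python
def implicit_rating(clicks, max_clicks, scale=5):
    """Map a resource's clickthrough count onto a 1..scale rating.

    clicks: selections observed for this resource;
    max_clicks: the most clicks any resource received.
    Linear mapping from relative popularity to the rating scale.
    """
    if max_clicks == 0:
        return 1
    return 1 + round((scale - 1) * clicks / max_clicks)

# hypothetical URIs and click counts observed at runtime
observed = {"res/a": 8, "res/b": 4, "res/c": 0}
ratings = {uri: implicit_rating(c, max(observed.values()))
           for uri, c in observed.items()}
```

This lets implicit signals feed the same rating pipeline as explicit Likert input, without asking users for extra effort.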
... By contrast, in CF approaches, new items must receive ratings before they can be recommended by an RS. The new item problem is also known as the "early rater" problem because users who provide the first ratings for new items receive little benefit from this process (given that these early ratings do not increase the user's ability to be matched with other users) (Avery and Zeckhauser 1997). Thus, CF systems must provide other incentives to encourage users to provide ratings (Burke 2002). ...
Book
Full-text available
Recommender systems (RS) are intended to assist consumers in making choices from a large range of items. By recommending items with a high likelihood of suiting a consumer’s needs or preferences, they can considerably mitigate the information overload problem on the user’s side, thus increasing users' trust in, satisfaction with, and loyalty to RS providers, such as online shops, internet music catalogs, and online DVD rental services. However, recommendations are prone to errors and often fail to address consumers’ context-specific needs. Explanations of the underlying reasons behind recommendations can allow users to handle algorithmic errors in recommendations and to better judge their suitability for the users’ current decision contexts, thus increasing choice efficiency and effectiveness. The latter, in turn, increases users’ acceptance of and satisfaction with RS, and positively affects consumers’ trust in, loyalty to, and the credibility of RS providers. However, in order for these benefits of explanation facilities to surface, they should explain the recommendations in the terms consumers themselves use when evaluating their choices. The latter places restrictions upon recommendation algorithms, constraining how recommendations should be produced and what information they should rely on. This interaction between RS and explanation facilities, however, was not covered by recent research on RS. Therefore, the aim of the current thesis is to narrow this gap and to develop a recommendation technique that accounts for the concurrent objectives of RS, i.e., a method capable of providing both accurately predicted recommendations and actionable explanations of the reasons behind them, so that the recommendation process is aligned with user preference structures.
... Sarwar et al. [42] affirm that current recommender systems depend on the altruism of a set of users who are willing to rate many items without receiving many recommendations. Economists have speculated that even if rating required no effort at all, many users would choose to delay considering items to wait for their neighbors to provide them with recommendations [5]. ...
Chapter
Full-text available
CRM (Customer Relationship Management) is one important area of Business Intelligence (BI) in which information is strategically used to maximize the value of each customer in a company. Recommender systems constitute a suitable context for applying CRM strategies. Systems of this kind are becoming indispensable in the e-commerce environment since they represent a way of increasing customer satisfaction and of gaining position in the competitive market of electronic business activities. They are used in many application domains to predict consumer preferences and to assist web users in the search for products or services. There is a wide variety of methods for making recommendations; however, in spite of methodological advances, recommender systems still present some important drawbacks that prevent them from fully satisfying their users. This chapter presents one of the most promising approaches, which consists of combining data mining and fuzzy logic.
... Explicit rating systems also pose other problems: use of appropriate scales, motivation and incentives for evaluators (Avery and Zeckhauser, 1997), avoiding the free-riding problem, achieving a critical mass of users (Oard and Marchionini, 1996), etc. ...
Article
Abstract This project studied the correlation between implicit ratings and the explicit rating for a single Web page. A browser was developed to record user actions (implicit ratings) and the explicit rating of a page. Using the data collected by the browser, the individual implicit ratings and the combined implicit ratings were analyzed and compared with the explicit rating. We found that time spent on a page and scrolling time had a strong correlation with the explicit rating.
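The correlation analysis this abstract describes can be sketched in a few lines. This is an illustrative computation of Pearson's r between one implicit indicator (time on page) and an explicit 1–5 rating; the sample data is invented purely for illustration and is not from the study.

```python
# Sketch of the kind of analysis described above: Pearson's r between an
# implicit indicator (time spent on a page) and the explicit rating.
# The sample data is invented purely for illustration.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

dwell_seconds   = [5, 12, 30, 45, 60, 90]   # implicit: time on page
explicit_rating = [1,  2,  3,  3,  4,  5]   # explicit: 1-5 scale

r = pearson(dwell_seconds, explicit_rating)
print(round(r, 3))  # strong positive correlation
```

A strong positive r on data like this is what would justify using dwell time as a stand-in for explicit ratings.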
... These systems adapt their explanations and teaching strategies to the individual needs of users in terms of their knowledge level and learning progress. Another area dealing with personalisation issues, but emphasising more the adaptation of an application's content, is that of information filtering and recommender systems such as those found in [Aver97,Loeb92]. The goal of these systems is to go through large volumes of dynamically generated textual information and present to the user those items which are likely to satisfy his/her information requirements. ...
Article
Full-text available
Ubiquitous web applications adhering to the anytime/anywhere/anymedia paradigm are required to be customisable, meaning the adaptation of their services towards a certain context. Several approaches for customising ubiquitous web applications have already been proposed, each of them having different origins and pursuing different goals for dealing with the unique characteristics of ubiquity. This paper compares some of these proposals, trying to identify their strengths and shortcomings. As a prerequisite, an evaluation framework is suggested which categorises the major characteristics of customisation into different dimensions. On the basis of this framework, customisation approaches are surveyed and compared to each other, pointing the way to next-generation customisation approaches. Keywords: ubiquity, web applications, customisation, context, adaptation
... This domain is still considered to be immature (Resnick and Varian, 1997): systems are mostly incomparable in their performance, lacking standard metrics and datasets. Developers of recommender systems approach the task of helping a user to find preferred items mainly by using an algorithm known as the 'top-N neighbors' (Karypis, 2000) that selects the 'neighbors group' (Terveen and Hill, 2001), namely the population whose tastes and preferences are highly correlated with those of the user, and this group is considered to be most 'qualified' to serve as recommendation providers for the user (Avery, Resnick and Zeckhauser, 1999;Avery and Zeckhauser, 1997;Resnick, Zeckhauser, Friedman and Kuwabara, 2000;Miller, 1982). ...
Article
Full-text available
Our research integrated computerized social collaborative systems known as Recommender Systems with distance learning. The relationships among collaborative filtering and scientific literature, distant learners, and organizational learning are quite new, applying 'high risk', knowledge-intensive domains to the "next generation" of recommender systems. Noting that computer-based learning, like other learning methodologies, does not present equal benefits for everyone, we conducted a longitudinal field study in which, for two years, our research tool, QSIA (at: http://www.qsia.org), a web-based Java-programmed collaborative system for collection, management, sharing, and assignment of knowledge items for learning, was free for use on the web and was adopted by various institutions and classes across heterogeneous learning domains. The option of testing and online assessment at any time and in any site is a flexible method to monitor progress and pinpoint difficulties in learning, along with analysis of online computerized tasks. The system enables creating and editing of the knowledge items and conducting online educational tasks, and includes a recommendation module that assists students and teachers in filtering relevant information. QSIA was implemented in over ten courses of academic institutions. At the time of this analysis, QSIA's database and logs comprised approximately 31,000 records of item-seeking actions, 3,000 users (mostly students), and 10,000 items (mainly in the field of medical pathology). The main findings of this study were that users acquire a tendency to seek recommendations from 'friends groups', and there was a significant positive difference in the acceptance level of recommendations by users when they asked for 'friends groups' recommendations. The choice of one's own group was the most important characteristic for users to assign to the advising group members.
We also noted that the majority of users sought recommendations from teachers rather than from students. Users chose participants with either higher or equal grades to their own to populate the advising group.
Thesis
In this thesis, we consider the problem of learning when data are strategically produced. This challenges the widely used assumption in machine learning that test data are independent from training data, an assumption that has been shown to fail in many applications where the result of the learning problem is of strategic interest to some agents. We study the two ubiquitous problems of classification and linear regression and focus on fundamental learning properties of these problems compared to the classical setting where data are not strategically produced. We first consider the problem of finding optimal classifiers in an adversarial setting where the class-1 data is generated by an attacker whose objective is not known to the defender, an aspect that is key to realistic applications but has so far been overlooked in the literature. To model this situation, we propose a Bayesian game framework where the defender chooses a classifier with no a priori restriction on the set of possible classifiers. The key difficulty in the proposed framework is that the set of possible classifiers is exponential in the set of possible data, which is itself exponential in the number of features used for classification. To counter this, we first show that Bayesian Nash equilibria can be characterized completely via functional threshold classifiers with a small number of parameters. We then show that this low-dimensional characterization enables us to develop a training method to compute provably approximately optimal classifiers in a scalable manner, and to develop a learning algorithm for the online setting with low regret (both independent of the dimension of the set of possible data). We then consider the problem of linear regression from strategic data sources.
In the classical setting where the precision of each data point is fixed, the famous Aitken/Gauss-Markov theorem in statistics states that generalized least squares (GLS) is a so-called "Best Linear Unbiased Estimator" (BLUE) and is consistent (the model is perfectly learned when the amount of data grows). In modern data science, however, one often faces strategic data sources, namely, individuals who incur a cost for providing high-precision data. We model this as learning from strategic data sources with a public good component, i.e., when data is provided by strategic agents who seek to minimize an individual provision cost for increasing their data's precision while benefiting from the model's overall precision. Our model tackles the case where there is uncertainty on the attributes characterizing the agents' data, a critical aspect of the problem when the number of agents is large. We show that, in general, Aitken's theorem does not hold under strategic data sources, though it does hold if individuals have identical provision costs (up to a multiplicative factor). When individuals have non-identical costs, we derive a bound on the improvement of the equilibrium estimation cost that can be achieved by deviating from GLS, under mild assumptions on the provision cost functions and on the possible deviations from GLS. We also provide a characterization of the game's equilibrium, which reveals an interesting connection with optimal design. Subsequently, we focus on the asymptotic behavior of the covariance of the linear regression parameters estimated via generalized least squares as the number of data sources becomes large. We provide upper and lower bounds for this covariance matrix and we show that, when the agents' provision costs are superlinear, the model's covariance converges to zero but at a slower rate relative to virtually all learning problems with exogenous data.
On the other hand, if the agents' provision costs are linear, this covariance fails to converge. This shows that even the basic property of consistency of generalized least squares estimators is compromised when the data sources are strategic.
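The GLS estimator this thesis builds on can be illustrated numerically. Below is a minimal sketch of GLS with a diagonal error covariance for the one-parameter model y = beta * x: each data point i is weighted by 1/variance_i, so noisier (lower-precision) points count for less. The data are synthetic and purely illustrative.

```python
# Minimal GLS sketch for the model y = beta * x with a diagonal error
# covariance: each point is weighted by the inverse of its variance.
# Synthetic data, purely illustrative.

def gls_slope(xs, ys, variances):
    """Closed-form GLS estimate of beta for y = beta * x."""
    w = [1.0 / v for v in variances]
    num = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    den = sum(wi * x * x for wi, x in zip(w, xs))
    return num / den

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.0, 8.5]          # roughly y = 2x plus noise
variances = [0.1, 0.1, 1.0, 4.0]   # later points are noisier

print(round(gls_slope(xs, ys, variances), 3))
```

With strategic data sources, the variances themselves become choices made by cost-minimizing agents, which is exactly where the classical BLUE guarantees start to break down.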
Article
When the value of a product or service is uncertain, outcomes can be inefficient. A market for evaluations can theoretically increase efficiency by voluntarily eliciting an evaluation that would otherwise not be provided. This paper uses a controlled laboratory experiment to test the performance of four market mechanisms for providing product evaluations. The mechanisms considered are derived from the oft-studied uniform price sealed bid, discriminatory price sealed bid, English clock auction, and Dutch clock auction. Our results indicate for this nonrivalrous product that (i) each of these institutions improves social welfare and (ii) the performances of the four mechanisms are equivalent. This second point is particularly noteworthy given that differing behavior is routinely observed in traditional private value auctions.
Article
Purpose Exponential growth in online video content makes viewing choice and video promotion increasingly challenging. While explicit recommendation systems have value, they inherently distract the user from normal behaviour and are open to numerous biases. To enhance user interest evaluation accuracy, the purpose of this paper is to comprehensively examine the relationship between implicit feedback and online video content, and reviews gender differentials in the interest indicated by a comprehensive set of viewer responses. Design/methodology/approach This paper includes 200 useable observations based on an experiment of user interaction with the Youku platform (one of the largest video-hosting websites in China). Logistic regression was employed for its simple interpretation to test the proposed hypotheses. Findings The findings demonstrate gender differentials in cursor movement behaviour, explainable via well-studied splits in personality, biological factors, primitive behaviour and emotion management. This work offers a solution to the sparsity of work on implicit feedback, contributing to the literature that combines explicit and implicit feedback. Practical implications This study offers a launch point for further work on human–computer interaction, and highlights the importance of looking beyond individual metrics to embrace wider human traits in video site design and implementation. Originality/value This paper links implicit feedback to online video content for the first time, and demonstrates its value as an interest capturing tool. By reviewing gender differentials in the interest indicated by a comprehensive set of viewer responses, this paper indicates how user characteristics remain critical. Consequently, this work signposts highly fruitful directions for both practitioners and researchers.
Chapter
The objective is neural-based feature selection in intelligent recommender systems. In particular, a hybrid neural genetic architecture is modeled based on human nature, interactions, and behaviour. The main contribution of this chapter is the development of a novel genetic algorithm based on human nature, interactions, and behaviour. The novel genetic algorithm, termed “Buabin Algorithm”, is fully integrated with a hybrid neural classifier to form a Hybrid Neural Genetic Architecture. The research presents GA in a more attractive manner and opens up the various components of a GA for active research. Although no scientific experiment is conducted to compare network performance with standard approaches, the engaged techniques reveal drastic reductions in genetic operator operations. For illustration purposes, the UCI Molecular Biology (Splice Junction) dataset is used. Overall, “Buabin Algorithm” seeks to integrate human-related interactions into genetic algorithms, to imitate human genetics in recommender systems design, and to understand underlying datasets explicitly.
Chapter
Recently, with the abundance of information and the emergence of many programs, sites, and companies that provide items to customers, such as Amazon for products or Netflix for movies …, it has become necessary to exploit these data to achieve a quantum leap in the world of technology and, especially, not to leave the customer confused about which item to choose among a huge number of options. Many sciences interested in the field of Big Data and in using large volumes of information to meet users' needs, such as data science and machine learning, have therefore intervened to improve recommendation. One solution for giving suggestions to customers is recommender systems. A recommender system is a useful information-filtering tool for guiding users, in a personalized way, toward products or services they might be interested in from a large space of possible options. It predicts users' interests and makes recommendations according to users' interest models. On the one hand, traditional recommender systems recommend items based on different criteria of users or items, such as item price or user profile …; on the other hand, recommender systems using deep learning techniques have not yet been well explored. In this article, we first introduce the different kinds of the most famous categories of recommender systems, then focus on one type to make movie recommendations, and finally present a quantitative comparison.
Chapter
Generally speaking, the reason people could be interested in using a recommender system is that they have so many items to choose from, in a limited period of time, that they cannot evaluate all the possible options. A recommender should be able to select and filter all this information for the user. Nowadays, the most successful recommender systems have been built for entertainment content domains, such as movies, music, or books.
Chapter
We propose an online hybrid recommender strategy named content-boosted collaborative filtering with dynamic fuzzy clustering (CBCF_dfc), based on the content-boosted collaborative filtering algorithm, which aims to improve prediction accuracy and efficiency. CBCF_dfc combines content-based and collaborative characteristics to address problems like sparsity, new items, and over-specialization. CBCF_dfc uses fuzzy clustering to keep a certain level of prediction accuracy while decreasing online prediction time. We compare CBCF_dfc with pure content-based filtering (PCB), pure collaborative filtering (PCF), and content-boosted collaborative filtering (CBCF) according to prediction accuracy metrics, and with online CBCF without clustering (CBCF_onl) according to online recommendation time. Test results showed that CBCF_dfc performs better than the other approaches in most cases. We also evaluate the effect of user-specified parameters on prediction accuracy and efficiency. According to the test results, we determine optimal values for these parameters. In addition to experiments on simulated data, we also perform a user study and evaluate users' opinions about the recommended movies. The user evaluation results are satisfactory. As a result, the proposed system can be regarded as an accurate and efficient hybrid online movie recommender.
Article
The large amount of information resources that are available to users imposes new requirements on the software systems that handle the information. This chapter provides a survey of approaches to designing recommenders that address the problems caused by information overload.
Article
Full-text available
Information filtering systems retrieve documents from document streams according to their users' long-term information interests represented by so-called profiles. The Profile Editor proposed in this article allows the interactive, direct-manipulative construction of profiles. It takes a set of ranked queries and compiles them into a single profile by cropping and re-ranking the queries' results. The approach of manual profile generation is expected to lead to two advantages: a) profile generation is expected to be much faster than feedback-based automatic profile generation, and b) users' confidence in their profiles should be higher because they are in control of their profiles. The Profile Editor is currently being implemented in the context of an Internet TV program guide, in which it will be evaluated during the next months.
Article
Knowledge of users’ preferences is of high value for every e-commerce website. It can be used to improve customers’ loyalty by presenting personalized product recommendations. A user’s interest in a particular product can be estimated by observing his or her behavior. Implicit methods are less accurate than explicit ones, but implicit observation works without interrupting users to ask for ratings of viewed items. This article presents the results of a study on identifying e-commerce customers’ preferences. During the study, the author’s extension for the Firefox browser was used to collect participants’ behavior and preference data. Based on these data, over thirty implicit indicators were calculated. As a final result, a decision-tree model for predicting e-customers’ product preferences was built.
Article
Full-text available
Nowadays, various new items are available, but limited search effort makes it difficult for customers to find the new items they want to purchase. Therefore, new-item providers and customers need recommendation systems that recommend the right items to the right customers. In this research, we focus on the new-item recommendation issue and suggest preference-boundary-based procedures that extend the traditional content-based algorithm. We introduce the concept of a preference boundary in a feature space to recommend new items. To find the preference boundary of a target customer, we suggest heuristic algorithms to find the centroid and the radius of the preference boundary. To evaluate the performance of the suggested procedures, we have conducted several experiments using real mobile transaction data and analyzed the results. Some discussion of our experimental results is also given, along with a further research area.
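The preference-boundary idea described in this abstract can be sketched geometrically. Below is a hedged, purely illustrative version (the paper's actual heuristics may differ): take the feature vectors of items a target customer liked, use their centroid and the maximum distance to the centroid as a spherical boundary, and recommend a new item if its features fall inside. All feature values are invented.

```python
# Illustrative preference-boundary sketch: a centroid plus a radius over
# the liked items' feature vectors defines a spherical acceptance region
# for new items. Feature values are invented for illustration.
import math

def centroid(points):
    dims = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(dims)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def preference_boundary(liked):
    """Return (centroid, radius) covering all liked items."""
    c = centroid(liked)
    r = max(dist(c, p) for p in liked)
    return c, r

liked = [[1.0, 1.0], [2.0, 1.0], [1.0, 2.0]]  # liked items' features
c, r = preference_boundary(liked)

new_item = [1.5, 1.5]
print(dist(c, new_item) <= r)  # inside the boundary -> recommend
```

Because the boundary is built only from item features, it can score brand-new items immediately, which is exactly the property the content-based extension is after.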
Article
In this paper, we present the tour planning system TOURSPLAN, along with a new lightweight user modelling (UM) process intended to work as a tourism recommendation system in a commercial environment. The new process tackles issues like cold start, grey sheep, and over-specialisation through a rich user model and the application of a gradual forgetting function to the collected user action history. Significant performance improvements were also achieved over the previously proposed UM process.
Article
Effective knowledge integration plays a very important role in knowledge engineering and knowledge-based machine learning. The combination of Bayesian networks (BNs) has shown a promising technique in knowledge fusion and the way of combining BNs remains a challenging research topic. An effective method of BNs combination should not impose any particular constraints on the underlying BNs such that the method is applicable to a variety of knowledge engineering scenarios. In general, a sound method of BNs combination should satisfy three fundamental criteria, that is, avoiding cycles, preserving the conditional independencies of BN structures, and preserving the characteristics of individual BN parameters, respectively. However, none of the existing BNs combination method satisfies all the aforementioned criteria. Accordingly, there are only marginal theoretical contributions and limited practical values of previous research on BNs combination. In this paper, following the approach adopted by existing BNs combination methods, we assume that there is an ancestral ordering shared by individual BNs that helps avoid cycles. We first design and develop a novel BNs combination method that focuses on the following two aspects: (1) a generic method for combining BNs that does not impose any particular constraints on the underlying BNs, and (2) an effective approach ensuring that the last two criteria of BNs combination are satisfied. Further through a formal analysis, we compare the properties of the proposed method and that of three classical BNs combination methods, and hence to demonstrate the distinctive advantages of the proposed BNs combination method. Finally, we apply the proposed method in recommender systems for estimating users' ratings based on their implicit preferences, bank direct marketing for predicting clients' willingness of deposit subscription, and disease diagnosis for assessing patients' breast cancer risk.
Article
Full-text available
cBN/NiCrAl nanocomposite coatings were deposited by cold spraying using mechanically alloyed composite powders. To examine their thermal stability, the nanocomposite coatings were annealed at different temperatures up to 1000 °C. The microstructure of composite coatings was characterized by x-ray diffraction, scanning electron microscopy, and transmission electron microscopy. The results showed that the nanostructure can be retained when the annealing temperature is not higher than 825 °C, which is 0.7 times of the melting point of the NiCrAl matrix. The dislocation density was significantly reduced when the annealing temperature was higher than 750 °C. The reaction between cBN particles and the NiCrAl matrix became noticeable when the annealing temperature was higher than 825 °C. The effects of grain refinement and work-hardening strengthening mechanisms were quantitatively estimated as a function of annealing temperature. The influence of annealing temperature on the contribution of different strengthening mechanisms to coating hardness was discussed.
Article
The paper examines the current approaches employed to improve the quality of online learning resources, including the current state of the field and the typical evaluation approaches adopted in major learning resource repositories. It then proposes a new approach for providing personalized recommendations of online learning resources via a recommender system.
Article
As new items are frequently released nowadays, item providers and customers need a recommender system specialized in recommending new items. Because most previous approaches to recommender systems rely on customers' usage history, collaborative filtering is not directly applicable to the new-item problem. Researchers have therefore suggested content-based recommender systems using the feature values of new items. However, this is not sufficient for recommending new items. This research aims to suggest hybrid recommendation procedures based on the preference boundary of a target customer. We suggest the TC, BC, and NC algorithms to determine the preference boundary. TC is an algorithm developed from content-based filtering, whereas BC and NC are algorithms based on collaborative filtering, which incorporate neighbors, i.e., customers similar to a target customer. We evaluate the performance of the suggested algorithms on a real mobile image transaction data set. Experimental results show that the performance of BC and NC is better than that of TC, which means the suggested hybrid procedures are more effective than the content-based approach.
Article
We live in the information overload age. Don’t believe that? Here is some evidence: The world’s total yearly production of print, film, optical, and magnetic content would require roughly 1.5 billion gigabytes of storage. This is the equivalent of 250 megabytes per person for each man, woman, and child on earth. (Lyman and Varian, 2000) The massive amount of content produced each day is changing the way each of us lives our life. Historically, society has coped with the problem of too much information by employing editors, reviewers, and publishers to separate the signal from the noise. The problem is that we do not have enough editors, publishers, and reviewers to keep up with the volume of new content. One solution to this problem is to use technology to allow each of us to act as an editor, publisher, and reviewer for some subset of the rest of society. The technology that enables us to work together to solve the information overload problem for each other is called collaborative filtering.
Conference Paper
Full-text available
In this contribution, two different scales for carrying out participant evaluations in idea competitions are examined for their validity. To this end, they are compared with an independent, validated expert evaluation. Based on cross-tabulations, it is shown that a simple binary scale (“Go” or “No Go”) has higher concurrent validity than a complex, multidimensional evaluation form. Based on this study, concrete design recommendations for practice are derived and future research needs are identified.