MeSH term explosion and author rank improve expert recommendations

School of Information Sciences
AMIA Annual Symposium Proceedings 11/2010; 2010:412-6.
Source: PubMed


Information overload is an often-cited phenomenon that reduces the productivity, efficiency, and efficacy of scientists. One challenge for scientists is finding appropriate collaborators for their research. The literature describes various solutions to the problem of expertise location, but most current approaches do not appear to be well suited to expert recommendation in biomedical research. In this study, we present the development and initial evaluation of a vector space model-based algorithm to calculate researcher similarity using four inputs: 1) MeSH terms of publications; 2) MeSH terms and author rank; 3) exploded MeSH terms; and 4) exploded MeSH terms and author rank. We developed and evaluated the algorithm using a data set of 17,525 authors and their 22,542 papers. On average, our algorithms correctly predicted 2.5 of the top 5/10 coauthors of individual scientists. Exploded MeSH and author rank outperformed all other algorithms in accuracy, followed closely by MeSH and author rank. Our results show that the accuracy of MeSH term-based matching can be enhanced with other metadata such as author rank.
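The similarity computation described above can be sketched as cosine similarity over weighted MeSH-term vectors. The sketch below is illustrative only: the `1 / author_rank` weighting and the `profile`/`cosine` helpers are assumptions for demonstration, not the paper's exact formulation.

```python
from collections import Counter
from math import sqrt

def profile(papers):
    """Build a researcher's MeSH-term vector from their papers.

    papers: list of (mesh_terms, author_rank) pairs. Weighting a term
    by 1 / author_rank (so first authors contribute most) is an
    illustrative assumption, not the paper's published formula.
    """
    vec = Counter()
    for mesh_terms, rank in papers:
        for term in mesh_terms:
            vec[term] += 1.0 / rank
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

MeSH explosion would, before building the vectors, expand each term with its descendants in the MeSH tree, so that two researchers sharing only a broader category still overlap.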



Available from: Danielle Lee, Jan 21, 2014
    ABSTRACT: Unlike expertise location systems which users query actively when looking for an expert, expert recommender systems suggest individuals without the context of a specific problem. An interesting research question is whether expert recommender systems should consider a users' social context when recommending potential research collaborators. One may argue that it might be easier for scientists to collaborate with colleagues in their social network, because initiating collaboration with socially unconnected researchers is burdensome and fraught with risk, despite potentially relevant expertise. However, many scientists also initiate collaborations outside of their social network when they seek to work with individuals possessing relevant expertise or acknowledged experts. In this paper, we studied how well content-based, social and hybrid recommendation algorithms predicted co-author relationships among a random sample of 17,525 biomedical scientists. To generate recommendations, we used authors' research expertise inferred from publication metadata and their professional social networks derived from their co-authorship history. We used 80% of our data set (articles published before 2007) as our training set, and the remaining data as our test set (articles published in 2007 or later). Our results show that a hybrid algorithm combining expertise and social network information outperformed all other algorithms with regards to Top 10 and Top 20 recommendations. For the Top 2 and Top 5 recommendations, social network-based information alone generated the most useful recommendations. Our study provides evidence that integrating social network information in expert recommendations may outperform a purely expertise-based approach.
    Proceedings of the American Society for Information Science and Technology 10/2011; 48(1). DOI:10.1002/meet.2011.14504801025
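The hybrid approach in the abstract above can be sketched as a linear blend of an expertise (content) similarity and a social-network similarity. The Jaccard overlap of co-author sets and the `alpha` mixing weight below are illustrative assumptions; the paper does not specify its exact combination method here.

```python
def jaccard(a, b):
    """Social similarity: Jaccard overlap of two co-author sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def hybrid_score(content_sim, social_sim, alpha=0.5):
    """Blend expertise-based and social-network-based similarity.

    alpha is a hypothetical mixing weight: 1.0 reduces to a purely
    content-based recommender, 0.0 to a purely social one.
    """
    return alpha * content_sim + (1 - alpha) * social_sim
```

Ranking candidate collaborators by `hybrid_score` and taking the Top-k list mirrors the Top 2/5/10/20 evaluation described in the abstract.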