Measuring the Impact of Medical Research: Moving From Outputs to Outcomes

Department of Psychiatry, MGH-East, Charlestown, MA 02129, USA.
American Journal of Psychiatry (Impact Factor: 13.56). 03/2007; 164(2):206-14. DOI: 10.1176/appi.ajp.164.2.206
Source: PubMed

ABSTRACT: Billions of dollars are spent every year to support medical research, with a substantial percentage coming from charitable foundations. To justify these expenditures, some measure of the return on investment would be useful, particularly one aligned with the intended ultimate outcome of this scientific effort: the amelioration of disease. The current mode of reporting on the success of medical research is output-based, with an emphasis on measurable productivity. This approach falls short in many respects and may be contributing to the well-described efficacy-effectiveness gap in clinical care. The author argues for an outcomes-based approach and describes the steps involved, using an adaptation of the logic model. A shift in focus to the outcomes of our work would provide our funders with clearer mission-central return-on-investment feedback, would make explicit the benefits of science to an increasingly skeptical public, and would serve as a compass to guide the scientific community in playing a more prominent role in reducing the efficacy-effectiveness gap. While acknowledging the enormous complexity involved in implementing this approach on a large scale, the author hopes that this essay will encourage some initial steps toward this aim and stimulate further discussion of the concept.

  • Source
    ABSTRACT: The real-world impact of research is gaining much attention across the international Higher Education sector. Funding agencies, government organisations and community groups are seeking evidence that research initiatives are delivering impact beyond contributions to academia. Researchers, practitioners, educators, learning designers and developers require a good understanding of research impact, and associated terminology, to articulate the real-world benefits of technology-enabled initiatives. There are three good reasons to understand research impact in a Higher Education context. Firstly, comprehending the language of research impact facilitates meaningful discussion with research stakeholders. Secondly, recognising and communicating the real-world impact of an initiative affirms the 'so what' factor of a research project. And thirdly, demonstrating research impact, rather than reporting research outputs, is becoming more important in funding applications and project documentation. This paper concludes with a brief review of assessment frameworks developed to evaluate the real-world impact of Higher Education research.
    Rhetoric and Reality: Critical perspectives on educational technology, ascilite 2014, Dunedin, New Zealand; 11/2014
  • Source
    ABSTRACT: Faculty of 1000 (F1000) is a post-publication peer review web site where experts evaluate and rate biomedical publications. F1000 reviewers also assign labels to each paper from a standard list of article types. This research examines the relationship between article types, citation counts and F1000 article factors (FFa). For this purpose, a random sample of F1000 medical articles from the years 2007 and 2008 was studied. In seven of the nine cases, there were no significant differences between the article types in terms of citation counts and FFa scores. Citation counts and FFa scores were, however, significantly different for two article types, “New finding” and “Changes clinical practice”: FFa scores value the appropriateness of medical research for clinical practice, while “New finding” articles are more highly cited. It seems that highlighting key features of medical articles alongside ratings by Faculty members of F1000 could help to reveal the hidden value of some medical papers.
    Scientometrics 11/2013; 97(2). DOI:10.1007/s11192-013-0993-9 · 2.27 Impact Factor
  • Source
    ABSTRACT: Practice development (PD) and knowledge translation (KT) have emerged recently as methodologies that assist advancement in gathering and using evidence in practice. For nursing to benefit from these methodologies, there is a need to advance the dialogue between academia and the service sector concerning their use and further development, as well as how we create the most effective partnerships between academia and practice. To advance this dialogue and to gain insights into the similarities and differences between KT and PD and between the academic and service sectors, four conversations with leaders in these sectors have been gathered and are presented here. These four discrete narratives are presented to showcase the diversity of sector contexts in relation to PD and KT methodologies. Narrative One focuses on some of the theoretical and policy issues related to creating partnerships between traditional "knowledge creation systems" (universities) and "knowledge utilization systems". Narrative Two discusses how a large school of nursing responded to the challenge of creating partnerships for practice development in an attempt to bridge the academic/service divide and produce benefits for both organisations. Narratives Three and Four describe the view of practice development from the service side. The final section of the paper presents an agenda for discussion and action based on the emerging set of principles.
    Collegian Journal of the Royal College of Nursing Australia 06/2012; 19(2):67-75. DOI:10.1016/j.colegn.2012.02.001 · 0.84 Impact Factor