The Canadian Journal of Program Evaluation, Vol. 22, No. 1, Pages 49–73
ISSN 0834-1516 Copyright © 2007 Canadian Evaluation Society
DEVELOPING A TOOL TO MEASURE
KNOWLEDGE EXCHANGE OUTCOMES
Kelly Skinner
University of Waterloo
Waterloo, Ontario
Abstract: This article describes the process of developing measures to assess knowledge exchange outcomes, using the dissemination of a best practices in type 2 diabetes document as a specific example. A best practices model consists of knowledge synthesis, knowledge exchange (dissemination/adoption), and evaluation stages. Best practices are required at each stage. An extensive literature review found no previous knowledge syntheses of concrete tools and models for evaluating dissemination or exchange strategies. This project developed a practical and usable tool to measure the reach and uptake of disseminated innovations. The instrument itself facilitates an opportunity for knowledge exchange to occur between producers and adopters. At this point the tool has a strong theoretical basis. Initial pilot-testing has begun; however, the accumulation of evidence of validity and reliability is only in the planning stages. The instrument described here can be adapted to other areas of population health and evaluation research.

Corresponding author: Kelly Skinner, Department of Health Studies and Gerontology, University of Waterloo, 200 University Ave. W., Waterloo, ON N2L 3G1; <kskinner@ahsmail.uwaterloo.ca>
A three-phase project related to Best Practices in Type 2
Diabetes Prevention was conducted for Health Canada, beginning in
2002. In Phase 1, a systematic review of literature and a nominated
practices scan identified interventions for the primary prevention of
type 2 diabetes (Hanning, Manske, Skinner, McGrath, & Heipel, 2004;
Hanning, Skinner, et al., 2004). These interventions were evaluated
using effectiveness and plausibility criteria to suggest which could
be considered “best” or “promising” practices. They were then docu-
mented in detail and ready to be disseminated. In Phase 2, Dubois,
Wilkerson, and Hall (2003) developed a dissemination plan and frame-
work for the practices. One limitation of the Phase 2 work was that
it did not describe measurement tools for assessing the knowledge
exchanged following dissemination. This article represents Phase 3,
which follows systematic search methodology to identify literature to
support the development of a tool to assess the reach and uptake of
the dissemination of the Best Practices in Type 2 Diabetes Prevention
project. It describes:
- a search for literature that conceptualizes dissemination/diffusion and adoption of health interventions
- a search for approaches for measuring knowledge exchange in the health field
- a search for methods of measuring and/or evaluating the usage of disseminated health information
- development of a usable tool to assess outcomes of knowledge exchange for best practices
In this article, dissemination of the Best Practices in Type 2 Diabetes
Prevention project is used as an example in which the knowledge
exchange tool could be applied. However, the tool described here is
not exclusive to the area of type 2 diabetes prevention. Many areas of population health and evaluation research may find it a valuable tool for measuring the outcomes of their dissemination strategies and for creating an opportunity for interaction (i.e., exchange) between researchers and users.
BEST (BETTER) PRACTICES
The area of Best Practices (BP) (also called Better Practices — see,
e.g., Moyer, Maule, Cameron, & Manske, 2002; Program Training and Consultation Centre, 2005) attempts to close the research–practice gap by developing a system for assessing and evaluating community-based practices to determine their effectiveness in certain contexts. It also provides a foundation for knowledge synthesis (creating a clearer understanding of what we know) and knowledge exchange (KE) by supporting the use of the recommended practices. The BP field has evolved because of a crucial barrier between science and
practice: a lack of appropriate methods to measure the effectiveness
of community programs (Cameron, Jolin, Walker, McDermott, &
Gough, 2001).
The idea of “best practices” in health promotion has evolved over the
past decade. Numerous groups have attempted to clarify the evidence-based or BP movement by constructing definitions, frameworks for evaluation, and guidelines for developing a best practice (Cameron et al., 2001; Centers for Disease Control and Prevention, 2003; Center for Substance Abuse Prevention, 2003; Dubois et al., 2003; Green, 2001; Kahan & Goodstadt, 2001; Nova Scotia Group, 2002). Each group has developed a model that defines what qualifies as evidence and evaluates that evidence with different emphases. Each has done so to provide guidance to health promotion and population health practitioners who are being pressed, in the name of greater accountability, to adopt BP interventions.
THE EVALUATION OF COMMUNITY PROGRAMS
While large-scale, multi-centre research interventions often receive
thorough evaluations to determine their effectiveness, the practices
they evaluate may not be easily adopted by or even appropriate for
health promotion practitioners (Green & Glasgow, 2006). Some of
the difficulty in identifying and determining which interventions
can be considered best practice is the lack of appropriate evaluation
of small community-driven grassroots initiatives. Although these
interventions are often the most plausible and practical for public
health practitioners to adopt, it is unclear whether a given program
meets another key criterion of BP, effectiveness. The development of
assessment criteria for these initiatives can facilitate the selection of
BPs that are plausible and practical for health promoters to adopt.
Phase 1 of the Best Practices in Type 2 Diabetes Prevention project
addressed the need for appropriate methods to measure effectiveness
in population health. This phase resulted in 16 best and 71 promis-
ing practices for chronic disease prevention (Heart Health Resource
Centre, 2006). Phase 2 of the full project recommended a plan to get
the practices disseminated and used (Heart Health Resource Centre,
2005).
KNOWLEDGE EXCHANGE
Research literature uses various terms (e.g., diffusion, dissemination,
knowledge exchange, knowledge transfer, knowledge translation,
knowledge utilization, etc.) to describe similar processes (Garcia,
2006; Graham et al., 2006). For consistency this article uses the
term KE to describe these concepts. A number of the terms appear
unidirectional, and the term KE is preferred as it implies a two-way
dialogue for information exchange (Garcia, 2006; Gravois Lee &
Garvin, 2003).
Appropriate and effective dissemination is one strategy often con-
sidered to facilitate a link between science and practice (Cameron,
Brown, & Best, 1996). Another key factor to encourage knowledge
diffusion and utilization is for researchers to acknowledge and involve
end users. Early and ongoing collaboration with potential knowledge
users has been shown to enhance research utilization (Vingilis et al.,
2003). An existing relationship between academics and practitioners can better support knowledge translation and thus successful diffusion of knowledge (Nutbeam, 1996). Rosenfield (2000) differentiates
between information and usable knowledge and suggests there is a
need for the development of a “consumer mindset” by researchers.
Jacobson, Butterill, and Goering (2003) echo this recommendation
and claim that knowledge translation requires an understanding of
user context.
Lavis, Robertson, Woodside, McLeod, and Abelson (2003) differentiate
producer-push, user-pull, and exchange models of knowledge transfer
for getting knowledge used. Exchange models emphasize interac-
tive, mutually respectful, collaborative approaches driven jointly by
researchers and practitioners (including policy makers). An exchange
model fits most closely with the framework developed by Dubois et
al. (2003) for Phase 2 of this project.
A key question is how to measure the extent of KE that has occurred,
so that exchange efforts can be improved. We can gain precision by
examining different levels of outcome of KE processes. For example,
Rogers (1995) defines five stages in the exchange (diffusion) process: awareness, interest, evaluation, trial, and adoption. To simplify, for presentation's sake, we can condense these components into two stages: reach and uptake. Reach is an important component because, without exposure to materials, no further knowledge use can occur. Uptake reflects behavioural efforts to use the materials. This article reports on Phase 3, which uses reach and uptake as proxy measures for KE.

The goal was to search for quantitative models or scales that could be drawn upon in the development of a tool to measure outcomes of KE, specifically reach and use. A number of researchers in this field identify uptake and use as indicators for knowledge exchange and research utilization (Dobbins, Ciliska, Cockerill, Barnsley, & DiCenso, 2002; Dobbins, Cockerill, & Barnsley, 2001; Knott & Wildavsky, 1980; Landry, Lamari, & Amara, 2001a, 2001b, 2003). The uptake tool questions developed during Phase 3 of this project include the use of innovations as well as other components along the continuum of KE outcomes. Reach was included in the tool because it is the first step before uptake and use can occur. Reach alone is not a reflection of KE, but innovations must be disseminated and "reach" target users for uptake and utilization to be possible. The "exchange" occurs during the interaction between knowledge producers and knowledge users toward joint actions (Davies, Nutley, & Walter, 2005).
METHOD
Development of the proposed tool began with a systematic search
for published, unpublished, and grey literature related to measuring
outcomes of efforts for KE (i.e., knowledge use). An annotated bib-
liography of relevant literature was generated. This facilitated the
author’s ability to conceptualize measurement in the context of KE.
Several key articles and reports were chosen for their applicability to
developing a tool to measure KE. The measurement models in these
sources were compared for overlapping concepts. Key ideas emerged
and were adapted to design specific questions and scales to assess
reach and uptake following KE efforts of BP documents.
Because the knowledge exchange domain covers many contexts, it
was decided to explore sources within and beyond those typically
used in systematic literature reviews in health. Therefore, to con-
duct a thorough investigation, four different search strategies were
used. Initially a search of peer-reviewed, published literature was
performed using eight database search engines (CINAHL, Communication Studies, ERIC, MEDLINE (via PubMed), PsycINFO, Social Services Abstracts, Sociological Abstracts, and Web of Science). All searches were restricted to published articles from 1970 to December 2004 in English-language journals. Search strings of many
combinations were created using the following key words: knowledge,
exchange, evaluat*, measure*, disseminat*, diffusion, knowledge ex-
change, knowledge translation, knowledge transfer, model, process,
outcome, program, intervention, adoption, reach, and uptake. The
terms health and diabetes were used to focus the results. Then a table
of contents search of 12 accessible electronic journals was conducted.
These journals had the potential to yield informative articles due
to their titles, which included the terms knowledge, evaluation, or
measurement. Journals that frequently emerged from the database
search and were accessible electronically were also included. Third,
an Internet-based search engine (Google) was used to access grey
literature using the same keyword search strings as the database
search. The last strategy included a review of the reference lists from
articles retrieved.
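The exact search strings are not reproduced in the article. Purely as an illustration, a short script could compose candidate Boolean queries from the reported keywords; the term groupings and query syntax below are assumptions made for this sketch, not the author's documented strategy.

```python
from itertools import product

# Core terms reported in the text; the asterisk is standard database
# truncation syntax (e.g., evaluat* matches "evaluate", "evaluation").
concept_terms = ['"knowledge exchange"', '"knowledge translation"',
                 '"knowledge transfer"', "disseminat*", "diffusion"]
measure_terms = ["measure*", "evaluat*", "outcome"]
focus_terms = ["health", "diabetes"]

# One candidate query per combination of concept, measurement, and focus term.
queries = [f"({c}) AND ({m}) AND ({f})"
           for c, m, f in product(concept_terms, measure_terms, focus_terms)]

print(len(queries), "candidate search strings, for example:")
print(queries[0])  # ("knowledge exchange") AND (measure*) AND (health)
```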
Articles and resources were retrieved if they fit the following inclusion criteria:

- addressed at least one outcome of KE (dissemination/diffusion, adoption, reach, uptake, utilization, transfer, translation)
- either pertained specifically to KE within health or could be adapted to the health domain.
The inclusion criteria remained very broad so that a variety of re-
sources could be considered in the creation of the tool.
RESULTS
The database search yielded 4,023 hits (duplicates removed), of which
413 titles were selected for further review. Abstracts corresponding
to these titles were read and 103 papers fitting the selection criteria
were then retrieved. Nine relevant articles (not overlapping with the
database search) were retained from the table of contents search. The
Internet search led to 6 unpublished and 2 published documents.
The review of reference lists provided 12 additional publications, the
majority being articles prior to 1980.
Similar to the findings of Dubois et al. (2003), the four search strate-
gies located numerous models and strategies for effective dissemi-
nation. A wealth of information on knowledge utilization was also
retrieved. However, the goal of this search was to find specific tools and literature relevant to a sub-component of dissemination, that is, how to measure outcomes of KE efforts. Although the 130 resources retrieved fit the inclusion criteria and were peripherally related to KE outcomes, very few of the articles identified dealt specifically with the measurement of KE outcomes, and none of the papers displayed actual measurement tools. Much of the literature that exists on measuring knowledge use was published between 1975 and 1983 (Dunn, 1983).
More recently, research has been conducted on research utilization
(Dobbins et al., 2001, 2002; Estabrooks, 1999; Estabrooks, Floyd,
Scott-Findlay, O’Leary, & Gushta, 2003; Landry et al., 2001a, 2001b,
2003) and evaluating team knowledge (Castka, 2003; Cooke, Kiekel,
Salas, & Stout, 2003; Mohammed & Dumville, 2001).
DEVELOPMENT OF A TOOL TO MEASURE OUTCOMES OF
KNOWLEDGE EXCHANGE
Measuring Reach
The dissemination report and framework by Dubois et al. (2003) is a
valuable tool for organizations that are interested in disseminating
BP material for population health. Although developed with type 2
diabetes prevention in mind, the framework can be used to guide dis-
semination within a wide variety of public health areas. The purpose
of the Dubois project was to develop a dissemination framework to
support the uptake or use of the BP material available to practitioners
in heart health in Ontario. The framework was developed following a
literature review, key informant interviews (experts in dissemination
and diabetes), and practitioner interviews (diabetes prevention prac-
titioners). A draft framework was piloted and additional feedback was
received from knowledge developers, knowledge brokers, and adop-
ters. The final dissemination framework and dissemination report
by Dubois et al. (2003) has guided the development of the proposed
tool for measuring reach.
Reach can be quantified as the number of
intended users who were aware of the project, giving them the op-
portunity to adopt it. Thus, it is important to identify who the end
users are and what distribution points they contact to become aware
of and access resources. Dubois and colleagues (2003) concluded their
report with a list of recommended actions for dissemination of the
BP resource. One of these actions was to disseminate the resource
through various distribution points. These particular distribution
points (expanded in Table 1) were selected based on their ability to
reach target users/adopters of the resource. They were chosen by key
informants (end users) interviewed by Dubois. Thus, for the specific
example of the Best Practices in Type 2 Diabetes Prevention docu-
ment, the process for evaluating its diffusion or “reach” following
dissemination could be conducted as follows:
1. Identify target users of the best practices
After dissemination has occurred, obtain lists of potential users (prac-
titioners) to act as key informants and contact a sample of them via e-
mail, telephone, and mail to complete the uptake questions (Appendix
A). Answering “yes” to question 1, “I am aware of the document” (in
this case, Best Practices in Type 2 Diabetes Prevention), indicates the
dissemination effort has reached the intended user. The respondent
could then continue on with the uptake questionnaire to determine
their use of the BPs.
2. Identify distribution points for the best practices to reach users
Huge potential exists for reaching target adopters and beyond this
population by dissemination through commonly accessed distribu-
tion sites. Distribution points could be considered intermediaries
for potential end users, and Table 1 is an example of the range of
possible distribution points for examining reach. After dissemination has occurred, relevant distribution points can be scanned to see whether the organization lists the Best Practices document as a resource. Examining reach in this way determines whether key distribution points have been captured. Distribution points include people, networks, listservs, websites, databases, and coordinating bodies, and therefore need to be contacted using a variety of methods, including phone calls to the people involved and scans of listservs, websites, and databases.
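As a rough sketch of how the web-based distribution points in Table 1 might be scanned, the following assumes the document title appears verbatim somewhere on a site's landing page; in practice, site-specific search functions or a manual scan would be more reliable, and listservs and personal contacts would need phone or email follow-up. The URLs are taken from Table 1; the helper function is hypothetical.

```python
import urllib.request

# Title of the disseminated resource to look for.
DOC_TITLE = "Best Practices in Type 2 Diabetes Prevention"

# Web-based distribution points (a subset of Table 1).
distribution_points = {
    "Canadian Diabetes Association": "http://www.diabetes.ca",
    "Chronic Disease Prevention Alliance": "http://www.cdpac.ca",
    "Lifestyle Information Network": "http://www.lin.ca",
}

def lists_document(url: str) -> bool:
    """Fetch the landing page and check whether it mentions the title."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        return DOC_TITLE.lower() in html.lower()
    except OSError:
        return False  # an unreachable site counts as "not captured"

captured = {name: lists_document(url)
            for name, url in distribution_points.items()}
print(captured)
```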
Measuring Uptake
Development of the uptake tool began by summarizing potentially relevant literature into an annotated bibliography. This was an effective way to gain a better understanding of the literature content. The exercise assisted in grasping the ideas central to evaluation and measurement, most importantly with specific reference to KE. Several key articles and reports were chosen for their applicability to the development of a tool to measure KE, as they exhibited specific scales that could be adapted into a framework. Articles were reviewed to identify the knowledge outcomes considered and how various authors' conceptualizations of KE outcomes mapped onto each other.
Table 1
Examples of Distribution Points to be Considered for the Diabetes Best Practices Document

Type of Source(s) | Specific Organization | Acronym | Website
Website (resource centres across Canada) | Canadian Diabetes Association | CDA | www.diabetes.ca
Website | Chronic Disease Prevention Alliance | CDPAC | www.cdpac.ca
Website | Canadian Health Network | CHN | www.canadian-health-network.ca
Website, Listserv | Nutrition Resource Centre | NRC | www.nutritionrc.ca
Website, Listserv | Activitalk | | www.active2010.ca
Listserv, Email-Bulletin Feature | Ontario Health Promotion E-mail Bulletin | OHPE | www.ohpe.ca
Listserv, Database | Lifestyle Information Network | LIN | www.lin.ca
Database | G7/G8 Heart Health | | www.med.mun.ca/g8hearthealth
Coordinating Committee | For Diabetes Strategy | n/a |
Other | Dietitians of Canada | DC | www.dietitians.ca
Other | Registered diabetes educators | n/a |
Other | Banting and Best Diabetes Centre | BBDC | www.bbdc.org
Other | Provincial Chronic Disease Prevention partnerships | | e.g., www.opha.on.ca/projects/ocdpa.html
Other | Alberta Centre for Active Living | | www.centre4activeliving.ca

Note. Expanded from Dubois, Wilkerson, and Hall (2003).
Five key resources (Hall, George, & Rutherford, 1979; Hall, Loucks, Rutherford, & Newlove, 1975; Johnson, 1980; Larson, 1982; Pelz & Horsley, 1981) contained measurement models that, while differing in terminology, significantly overlapped in the concepts measured. These resources consisted of scales and indices described by Dunn (1983). A figure of the selected scales was created in order to compare their indices to each other (Figure 1). A thematic analysis of the components of each scale determined that the index that corresponded to most of the categories in the other scales was the Level of Use (LoU) Scale (Hall et al., 1975). In Figure 1, themes corresponding to each other are indicated by connecting lines. This scale also had the most comprehensive chart, with detailed notes for each level. Key ideas emerged and were adapted to design a specific set of questions to assess uptake and knowledge exchanged after the dissemination of BP documents (Appendix A).

Figure 1
Measuring Knowledge Use: Mapping of Selective Scales and Indices
[Figure not reproduced: a side-by-side mapping of the categories/stages/questions of five scales and indices (the Information Utilization Scale, Larson, 1982; the Stages of Concern Scale, Hall et al., 1979; the Levels of Use Scale, Hall et al., 1975; the Research Utilization Index, Pelz & Horsley, 1981; and the Evaluation Utilization Scale, Johnson, 1980), with connecting lines indicating corresponding themes across scales.]
The uptake tool was constructed with a combination of the stages
from the Seven Standards of Utilization (Knott & Wildavsky, 1980)
and the categories of the LoU Scale (Hall et al., 1975), and designed
as a questionnaire with questions similar to those used by Landry
et al. (2001a, 2001b) and Estabrooks (1999). Sections of questions
in the uptake tool were guided by the seven categories of knowledge
utilization from Knott and Wildavsky (1980) (Table 2).
Table 2
Stages/Standards of Knowledge Utilization

Stage | Category | Description
1 | Reception | Receiving information/information(a) is within reach
2 | Cognition | Read, digest, and understand information
3 | Discussion | Altering frames of reference to the new information
4 | Reference | Information influences action/adoption of information
5 | Adoption | Influences outcomes and results/effort to favour information
6 | Implementation | Adopted information becomes practice
7 | Impact | Tangible benefits of information

Note. Summarized from Knott and Wildavsky (1980).
(a) The term "information" could be substituted by project, program, intervention, innovation, practice, policy, research, knowledge, document, evaluation, etc.
Knowledge uptake is strongly related to the context within which it
is delivered (Davies et al., 2005), and it is critical to capture reasons
behind the non-adoption of a disseminated innovation, especially if
the practitioner was aware of and had access to it. Therefore, ques-
tions were added for deliberate non-use (or non-adoption) in Section
2 and were guided by Dobbins et al. (2002).
Landry et al. (2001a, 2001b) adapted the Knott and Wildavsky (1980)
standards into six stages and used them to measure the extent of
utilization of university research in public administration. They
developed a series of questions that corresponded to each stage and
requested answers using a Likert scale (1 = “never,” 2 = “rarely,” 3 =
“sometimes,” 4 = “usually,” 5 = “always”). Likert scales are often used
to measure opinions, beliefs, and attitudes (DeVellis, 2003). A Likert
scale was not chosen for this tool for a number of reasons: the tool was
not intended to measure opinions, beliefs, or attitudes; respondents may not be able to discriminate meaningfully among five or six response options (DeVellis, 2003); Likert-style response options did not follow logically as answers to many of the questions; and a primarily binary scale reduces the burden placed on the respondents
(DeVellis, 2003). The uptake tool outcomes and levels of use (Table 3)
were based on the LoU Scale by Hall et al. (1975). The LoU dimen-
sion intends to describe behaviours of innovation users and does not
focus on attitudinal, motivational, or other affective characteristics of
the user (Hall et al., 1975), thus further supporting binary response
options instead of a Likert scale.
UTILIZING THE TOOL
Using the proposed tool should not require extensive planning by the group interested in measuring the utilization of their BPs. Initial
planning would entail identifying target users (as key informants)
and distribution points, and determining a reasonable time frame
from the time of dissemination.
It is expected that reach will be determined first, and then the uptake
questionnaire can be used as an extension of the measurement of
reach. When using the uptake questionnaire, the term “document” can
be substituted by project, program, intervention, innovation, practice,
research, knowledge, information, evaluation, policy, and so on.
Outcomes
The reach and uptake tools are primarily descriptive measures of
outcomes. Reach is measured and described in two parts. First, after
a list of target users has been generated, a random sample could be
contacted to calculate a percentage of target users that are aware of
the document (or innovation, program, etc.). Second, the number and
type of distribution points captured can be described and compared
to a list of relevant distribution points that were to be used for dis-
semination. It is beneficial to collect information on the type of users
aware of the document and the type of distribution points that were
captured, because this can inform future dissemination efforts.
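A minimal sketch of this two-part reach summary, using invented sample data, might look as follows.

```python
# Part 1: awareness among a random sample of target users
# (True = answered "yes" to uptake question 1, "I am aware of the document").
sample_aware = [True, False, True, True, False, True, False, True]
pct_aware = 100 * sum(sample_aware) / len(sample_aware)
print(f"Aware of document: {pct_aware:.0f}% of sampled target users")

# Part 2: distribution points captured, compared against the planned list
# (acronyms follow Table 1; the planned/captured sets are invented).
planned_points = {"CDA", "CDPAC", "CHN", "NRC", "LIN", "OHPE"}
captured_points = {"CDA", "NRC", "OHPE"}
coverage = 100 * len(captured_points & planned_points) / len(planned_points)
print(f"Distribution points captured: {coverage:.0f}%")
print("Missed:", sorted(planned_points - captured_points))
```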
The uptake tool intends to capture the thought process and actions
of the user as well as provide valuable feedback for the dissemina-
tor. Thus, it is not informative to have a numerical score. Previous
experience will inform users’ current practices, and they will integrate
their existing knowledge as they adopt and implement new innova-
tions. This will alter their progressive responses while completing
the uptake questions and affect their level of uptake.
After questionnaires for uptake have been completed, the level of use
for each individual user can be determined (Table 3). The levels are
determined based on the responses to the uptake questions.
Table 3
Uptake Outcomes and Levels of Use (LoU)

Scale point definitions (levels of use of the innovation), each followed by the responses that determine the level:

NON-USE: State in which the user has little or no knowledge of the innovation, no involvement with the innovation, and is doing nothing toward becoming involved. (End here if No to Q 2 or 5, or ended at Q 9.)

Decision Point A: Takes action to learn more detailed information about the innovation.

ORIENTATION: State in which the user has acquired or is acquiring information about the innovation and/or has explored or is exploring its value orientation and its demands upon user and user system. (Yes, Maybe, or Sometimes/Often to any of Q 5, 6, 7, 8, 10, 11, 12; end here if No to Q 8.)

Decision Point B: Makes a decision to use the innovation by establishing a time to begin.

PREPARATION: State in which the user is preparing for first use of the innovation. (Fully/Partially to Q 26 and Yes to Q 27; end here if Not at all/Not sure to Q 25 and 26.)

Decision Point C: Begins first use of the innovation.

MECHANICAL USE: State in which the user focuses most effort on the short-term, day-to-day use of the innovation with little time for reflection. Changes in use are made more to meet user needs than client needs. The user is primarily engaged in a stepwise attempt to master the tasks required to use the innovation, often resulting in disjointed and superficial use. (Yes to any of Q 25, 32, 33, 34; end here if No to all of Q 32, 33, 34, 36.)

Decision Point D-1: A routine pattern of use is established.

ROUTINE: Use of the innovation is stabilized. Few if any changes are being made in ongoing use. Little preparation or thought is being given to improving innovation use or its consequences. (Yes to Q 36; end here if No to Q 37.)

Decision Point D-2: Changes use of the innovation based on formal or informal evaluation in order to increase client outcomes.

REFINEMENT: State in which the user varies the use of the innovation to increase the impact on clients within the immediate sphere of influence. Variations are based on knowledge of both short- and long-term consequences for clients. (Yes to Q 37; end here if No to Q 38 and 39.)

Decision Point E: Initiates changes in use of innovation based on input of and in coordination with what colleagues are doing.

INTEGRATION: State in which the user is combining own efforts to use the innovation with related activities of colleagues to achieve a collective impact on clients within their common sphere of influence. (Yes to Q 38 or 39; end here if No to Q 40.)

Decision Point F: Begins exploring alternatives to or major modifications of the innovation presently in use.

RENEWAL: State in which the user evaluates the quality of use of the innovation, seeks major modifications of or alternatives to the present innovation to achieve increased impact on clients, examines new developments in the field, and explores new goals for self and the system. (Yes to Q 40.)

Note. Definitions of levels of use and decision points are from Hall et al. (1975).

Interpreting the level of use or knowledge exchanged is not necessarily meant to be a continuous measure with a definitive endpoint (i.e., a Guttman-type scale). For example, in Table 3, refinement is not
necessary for integration to occur. If practitioners are satisfied with a practice that they have adopted and have found to be effective, there is no motivation to change the intervention, and this does not prohibit the potential for collaboration and integration of the innovation with other organizations. Likewise, renewal (the last level) may not be the goal following adoption, as the innovation is likely already evaluated and does not require major modifications. However, an increased LoU or knowledge uptake occurs as a user moves toward higher levels and potentially passes through decision points (Table 3). The user does not need to complete consecutive levels before moving higher up on the scale, but can skip over a level. Skipping levels may be an indication of a user's previous knowledge and experience. Along with the completion of the uptake questions, it is important to gather qualitative information from the user. This will provide context for the uptake of the innovation. The final question in Section 1 allows for this dialogue to occur. In most cases, the goal of the disseminator will be for the user to reach the levels of routine, refinement, and integration. The outcome for Section 2 is a level of non-use, and it provides feedback on deliberate non-use of an innovation. This information is valuable for disseminators to consider in their future work when tailoring their innovations to end users.
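The "determining level" rules in Table 3 amount to a decision procedure, and one possible encoding is sketched below. Responses are assumed to be keyed by Appendix A question numbers; testing levels from the top down mirrors the table's "end here if No" logic and naturally allows skipped levels. The ordering is this illustration's reading of the table, not a scoring rule published with the tool.

```python
# Answers treated as affirmative across the questionnaire's response sets.
AFFIRMATIVE = {"YES", "MAYBE", "FULLY", "PARTIALLY", "SOMETIMES", "OFTEN"}

def level_of_use(responses: dict) -> str:
    """Map Appendix A responses (question number -> answer) to a LoU level."""
    def yes(q: int) -> bool:
        return responses.get(q) in AFFIRMATIVE

    # Highest level first: a user can skip levels, so test from the top down.
    if yes(40):
        return "RENEWAL"
    if yes(38) or yes(39):
        return "INTEGRATION"
    if yes(37):
        return "REFINEMENT"
    if yes(36):
        return "ROUTINE"
    if any(yes(q) for q in (25, 32, 33, 34)):
        return "MECHANICAL USE"
    if yes(26) and yes(27):
        return "PREPARATION"
    if any(yes(q) for q in (5, 6, 7, 8, 10, 11, 12)):
        return "ORIENTATION"
    return "NON-USE"

# A respondent who adopted a practice and routinized it, but has not
# varied its use (Q 37) or collaborated on it (Q 38/39):
print(level_of_use({25: "FULLY", 36: "YES", 37: "NO"}))  # ROUTINE
```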
DISCUSSION
As the title of this article suggests, this tool is in the development stage. This article describes the process used during tool development, in particular the theoretical basis on which the tool rests. While the tool appears to have face validity, the main limitation is that reliability and validity testing have not yet been conducted; use of the tool in practice should be approached with caution until further evidence is available.
Initial pilot-testing of the uptake tool has begun, and it appears to
have a strong practical application. As part of a larger research study,
the uptake tool has been adapted and used to facilitate and study
the KE processes intended to enhance evidence-informed practice
in public health. Thus far, it has been used to interview 20 Ontario
health unit practitioners who received data and reports on smoking
and physical activity in secondary school students in their region (E.
Bonin, personal communication, September 29, 2006). The knowledge
broker conducting these interviews has found the tool to be quick
and easy to use (e.g., it takes about 10 minutes to conduct a phone
interview). The knowledge broker made minor suggestions to improve
the readability of the questions, and these have been incorporated.
The KE research study demonstrates that the tool can be adapted to
domains outside of BP documents, even toward measuring KE proc-
esses when the disseminated product is data.
Canada’s premier health research funding agency has impact on
health as part of its parliamentary mandate. Similarly, policy makers
and practitioners are looking to be good stewards of resources. Satis-
factory systems for KE are critical to meet these goals. Unfortunately,
the system lacks adequate practical measurement of outcomes. KE
measurement may be underdeveloped due to the lack of clarity in
quantifying knowledge and poor tangibility of knowledge outcomes.
Both the reach and uptake components of the proposed tool are based
on theory in the published and grey literature, which gives the tool
some credibility even though its utility has not yet been assessed.
Measuring knowledge utilization (and exchange) requires the exami-
nation of a process of several events including (a) information pick-
up, (b) information processing, and (c) information application (Rich,
1997). This tool attempts to capture that process in addition to collect-
ing information on the reasons why a document or practice may not
be adopted. Specifically, this tool can facilitate efforts to measure the extent to which BPs have reached and are used by practitioners.
However, the tool is not exclusive to BPs and is valuable for many
areas of population health and evaluation. Potential users of this tool
include evaluators, researchers, policy and program decision-makers,
as well as any person, group, or organization wanting to facilitate
and measure KE processes. One of the important contributions of this
tool is that when it is implemented, the tool creates an opportunity
for knowledge to be exchanged. The uptake tool itself initiates or
continues a connection between the end user of the innovation and
the developer and/or disseminator of the innovation.
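Because Section 2 of Appendix A records why practitioners deliberately did not adopt an innovation, a disseminator can aggregate that feedback across respondents. A small sketch, with invented response data and abbreviated reason labels standing in for the full checklist wording, might tally non-adoption reasons by category:

```python
from collections import Counter

# Section 2 reasons mapped to their categories (labels abbreviated here;
# the full wording appears in the Appendix A checklist).
CATEGORY = {
    "equivalent program in place": "Relative Advantage",
    "not consistent with values": "Compatibility",
    "too difficult to implement": "Complexity",
    "no small-scale trial possible": "Trialability",
    "never seen implemented": "Observability",
    "insufficient financial resources": "Size and Resources",
    "insufficient time": "Individual Characteristics",
}

# Each respondent's checked reasons (invented example data).
responses = [
    ["equivalent program in place", "insufficient time"],
    ["insufficient financial resources"],
    ["too difficult to implement", "insufficient financial resources"],
]

tally = Counter(CATEGORY[reason] for checked in responses for reason in checked)
for category, count in tally.most_common():
    print(f"{category}: {count}")
```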
Future validation efforts are required to move this tool forward
from the development stage. Qualitative data should be gathered to
validate the tool, and steps outlined by DeVellis (2003) on scale de-
velopment should be followed. Initially, several experts in the field of
knowledge exchange should review the tool for content validity, and
items determined to be ambiguous should be revised or removed. The
tool should then be administered to a sample of participants selected
from two groups: (a) intended users of the reach and uptake tools,
and (b) target adopters/non-adopters who might be completing the
uptake questions. Item responses from these two groups should be
evaluated using factor analysis. Concurrent validity could be tested by
collecting semi-structured qualitative data and analyzing the results
to assess consistency of individual responses to the tool. Validation of
this tool will add to its current utility as a quick, easy to implement,
and theoretically grounded measure of KE outcomes.
The key contribution of this article is the synthesis of evidence on KE into a practical and usable questionnaire that works toward filling the gap in measuring KE outcomes. The significant value of this tool is that it engages both the producer and the user, from both sides of the KE equation, to exchange knowledge with each other.
ACKNOWLEDGEMENTS
This article was awarded honourable mention in the 2005 Annual
Student Paper Contest of the Canadian Evaluation Society. I would
like to thank Steve Manske for his thoughts and comments through-
out the development of this tool and paper and Elissa Bonin for her
work toward adapting and pilot-testing the tool.
REFERENCES
Cameron, R., Brown, K.S., & Best, J.A. (1996). The dissemination of chronic
disease prevention programs: Linking science and practice. Canadian
Journal of Public Health, 87(Suppl. 2), 50–53.
Cameron, R., Jolin, M., Walker, R., McDermott, N., & Gough, M. (2001).
Linking science and practice: Toward a system for enabling commu-
nities to adopt best practices for chronic disease prevention. Health
Promotion Practice, 2, 35–42.
Castka, P. (2003). Measuring teamwork culture: The use of a modified EFQM model. Journal of Management Development, 22(2), 149–170.
Center for Substance Abuse Prevention (CSAP). (2003). Building a successful
prevention program. University of Nevada, Reno. Retrieved January
21, 2004, from <http://casat.unr.edu/bestpractices/bestprac.htm>.
Centers for Disease Control and Prevention. (2003). Promising practices for
chronic disease prevention and control: A public health framework for
action. Atlanta, GA: U.S. Department of Health and Human Services.
Cooke, N.J., Kiekel, P.A., Salas, E., & Stout, R. (2003). Measuring team
knowledge: A window to the cognitive underpinnings of team per-
formance. Group Dynamics: Theory, Research and Practice, 7(3),
179–199.
Davies, H., Nutley, S., & Walter, I. (2005). Approaches to assessing the non-
academic impact of social science research. Report of the ESRC sym-
posium on assessing the non-academic impact of research, Research
Unit for Research Utilisation, University of St Andrews. Retrieved
May 4, 2006, from <http://www.st-andrews.ac.uk/%7Eruru/publica-
tions.htm>.
DeVellis, R.F. (2003). Scale development: Theory and applications (2nd ed.).
Thousand Oaks, CA: Sage.
Dobbins, M., Ciliska, D., Cockerill, R., Barnsley, J., & DiCenso, A. (2002).
A framework for the dissemination and utilization of research for
health-care policy and practice. The Online Journal of Knowledge
Synthesis for Nursing, 9(7). <http://www.blackwell-synergy.com/doi/
abs/10.1111/j.1524-475X.2002.00149.x> (subscription only).
Dobbins, M., Cockerill, R., & Barnsley, J. (2001). Factors affecting the uti-
lization of systematic reviews. International Journal of Technology
Assessment in Health Care, 17(2), 203–214.
Dubois, N., Wilkerson, T., & Hall, C. (2003). Addendum to international best practices in diabetes prevention: Identification and national dissemination. Toronto: Heart Health Resource Centre. Retrieved October 2, 2006, from <http://www.hhrc.net/skills/bpt/pubs/addendum.pdf> and <http://www.hhrc.net/pubs/skills/illustration.pdf>.
Dunn, W.N. (1983). Measuring knowledge use. Knowledge: Creation, Diffu-
sion, Utilization, 5(1), 120–133.
Estabrooks, C.A. (1999). The conceptual structure of research utilization.
Research in Nursing and Health, 22, 203–216.
Estabrooks, C.A., Floyd, J.A., Scott-Findlay, S., O’Leary, K.A., & Gushta, M.
(2003). Individual determinants of research utilization: A systematic
review. Journal of Advanced Nursing, 43(5), 506–520.
Garcia, J.M. (2006). Toward a science of knowledge exchange for cancer con-
trol. Unpublished comprehensive examination paper “A”, University
of Waterloo, Waterloo, ON.
Graham, I.D., Logan, J., Harrison, M.B., Straus, S.E., Tetroe, J., Caswell, W.,
et al. (2006). Lost in knowledge translation: Time for a map? Journal
of Continuing Education in the Health Professions, 26(1), 13–24.
Gravois Lee, R., & Garvin, T. (2003). Moving from information transfer to
information exchange in health and health care. Social Science &
Medicine, 56, 449–464.
Green, L. (2001). From research to “best practices” in other settings and
populations. American Journal of Health Behavior, 25, 165–178.
Green, L.W., & Glasgow, R.E. (2006). Evaluating the relevance, generaliza-
tion, and applicability of research: Issues in external validation and
translation methodology. Evaluation and the Health Professions,
29(1), 126–153.
Hall, G.E., George, A., & Rutherford, W. (1979). Measuring stages of concern
about the innovation: A manual for use of the SoC Questionnaire.
Austin: Research and Development Center for Teacher Education,
University of Texas.
Hall, G.E., Loucks, S.F., Rutherford, W.L., & Newlove, B.W. (1975). Levels
of use of the innovation: A framework for analyzing innovation adop-
tion. Journal of Teacher Education, 26(1), 52–56.
Hanning, R.M., Manske, S., Skinner, K., McGrath, H., & Heipel, R. (2004). International best practices in type 2 diabetes prevention: Project final report and appendices. Waterloo, ON: Health Behaviour Research Group, University of Waterloo.

Hanning, R.M., Skinner, K., Manske, S., Lessio, A., Howson, T., May, H., et al. (2004, November). Test driving a process for identification and assessment of community-based "Best Practices" in chronic disease prevention. Paper presented at the first national conference on Integrated Chronic Disease Prevention: Getting It Together, hosted by the Chronic Disease Prevention Alliance of Canada (CDPAC), Ottawa, ON.
Heart Health Resource Centre. (2005). Towards evidence-informed practice
for chronic disease prevention in Ontario. Toronto: Heart Health
Resource Centre and Ontario Public Health Association. Retrieved
June 20, 2006, from <http://www.hhrc.net/skills/bpt/pubs/logicModel.
pdf>.
Heart Health Resource Centre. (2006). Towards evidence-informed practice
(TEIP): A project in chronic disease prevention. Toronto: Heart Health
Resource Centre and Ontario Public Health Association. Retrieved
October 3, 2006, from <http://teip.hhrc.net/tools/eipc/index.cfm>.
Jacobson, N., Butterill, D., & Goering, P. (2003). Development of a frame-
work for knowledge translation: Understanding user context. Journal
of Health Services Research and Policy, 8(2), 94–99.
Johnson, K.W. (1980). Stimulating evaluation use by integrating academia
and practice. Knowledge: Creation, Diffusion, Utilization, 2(2), 237–
262.
Kahan, B., & Goodstadt, M. (2001). The interactive domain model of best
practices in health promotion: Developing and implementing a best
practices approach to health promotion. Health Promotion Practice,
2, 43–67.
Knott, J., & Wildavsky, A. (1980). If dissemination is the solution, what
is the problem? Knowledge: Creation, Diffusion, Utilization, 1(4),
537–578.
Landry, R., Lamari, M., & Amara, N. (2001a). Extent and determinants of
utilization of university research in public administration. Retrieved
February 17, 2004, from <http://kuuc.chair.ulaval.ca/english/pdf/
apropos/publication1.pdf>.
Landry, R., Lamari, M., & Amara, N. (2001b). Utilization of social science
research knowledge in Canada. Research Policy, 30(2), 333–349.
Landry, R., Lamari, M., & Amara, N. (2003). The extent and determinants of
the utilization of university research in government agencies. Public
Administration Review, 63(2), 192–205.
Larson, J.K. (1982). Information utilization and non-utilization. Palo Alto,
CA: American Institutes for Research in the Behavioral Sciences.
Lavis, J.N., Robertson, D., Woodside, J.M., McLeod, C.B., & Abelson, J.
(2003). The Knowledge Transfer Study Group. How can research
organizations more effectively transfer research knowledge to deci-
sion-makers? Milbank Quarterly, 81(2), 221–248.
Mohammed, S., & Dumville, B.C. (2001). Team mental models in a team
knowledge framework: Expanding theory and measurement across
disciplinary boundaries. Journal of Organizational Behavior, 22,
89–106.
Moyer, C., Maule, C., Cameron, R., & Manske, S. (2002). Better solutions for complex problems: Description of a model to support better practices for health. Toronto: Canadian Tobacco Control Research Initiative. Retrieved May 14, 2004, from <http://www.ctcri.ca/files/BETTER%20SOLUTIONS%2012_02.pdf>.
Nova Scotia Group. (2002). Best practices in health promotion. Health Promo-
tion Clearinghouse. White Point, Nova Scotia. Retrieved January 21,
2004, from <http://www.hpclearinghouse.ca/initiatives/other.asp>.
Nutbeam, D. (1996). Improving the fit between research and practice in health promotion: Overcoming structural barriers. Canadian Journal of Public Health, 87(Suppl. 2), 18–23.
Pelz, D.C., & Horsley, J. (1981). Measuring utilization of nursing research.
In J.A. Ciarlo (Ed.), Utilizing evaluation (pp. 125–149). Thousand
Oaks, CA: Sage.
Program Training and Consultation Centre. (2005). PTCC’s better practices
toolkit in tobacco control. Retrieved June 27, 2006, from <http://www.
ptcc-cfc.on.ca/bpt/bpt.cfm>.
Rich, R.F. (1997). Measuring knowledge utilization: Processes and outcomes.
Knowledge and Policy: The International Journal of Knowledge Trans-
fer and Utilization, 10(3), 11–24.
Rogers, E.M. (1995). Diffusion of innovations (4th ed.). Toronto: Free Press.
Rosenfield, S. (2000). Crafting usable knowledge. American Psychologist, 55, 1347–1355.
Vingilis, E., Hartford, K., Schrecker, T., Mitchell, B., Lent, B., & Bishop,
J. (2003). Integrating knowledge generation with knowledge dif-
fusion and utilization. Canadian Journal of Public Health, 94(6),
468–471.
Appendix A
Uptake Questions: Questions for the Dissemination of Best Practices
SECTION 1
Awareness (I know the document exists)
1 Are you aware of the BP document?
YES (go to question 3)
NO (go to question 2)
2 Would you like to learn more about this document?
YES (discontinue questions and distribute information)
NO (discontinue questions)
Reception (I have a copy of the document OR know how to access the document)
3 Have you received a copy of the document?
YES (go to question 6)
NO (go to question 4)
4 Did you retrieve a copy of the document on your own?
YES (go to question 6)
NO (go to question 5)
5 Do you plan to access the document some time in the future?
YES
MAYBE
NO (discontinue questions)
DON’T KNOW
6 Even before reading it, did you think the document might be useful?
YES
MAYBE
NO
DON’T KNOW
Cognition (read, digest, and understand the document)
7 Have you read the document?
FULLY (go to question 10)
PARTIALLY (go to question 10)
NOT AT ALL (go to question 8)
8 Do you plan to read the document?
YES (go to question 13)
MAYBE (go to question 13)
NO (go to question 9)
9 Do you have the intention of reading the document in the future?
YES (discontinue questions)
NO (discontinue questions)
10 Was the material in the document presented in a way you could understand?
YES
NO
11 Did you understand the material presented in the document?
YES
NO
DON’T KNOW
12 Have you thought about the contents of the document since you read it?
NEVER
RARELY
SOMETIMES
OFTEN
Discussion (altering frames of reference to the new information)
13 Have you made other colleague(s) aware of this document?
YES
NO
DON’T KNOW
14 Have you discussed the document with colleagues within your organization?
YES (go to question 16)
NO (go to question 15)
15 Do you plan to discuss the document with colleagues within your organization?
YES
MAYBE
NO
16 Have you discussed the document with colleague(s) outside of your organiza-
tion?
YES (go to question 18)
NO (go to question 17)
17 Do you plan to discuss the document with colleague(s) outside of your organiza-
tion?
YES
MAYBE
NO
18 Have you sought the opinion(s) of other(s) who have used this document (e.g.,
through discussions, visits, or workshops)?
YES
NO
Reference (document influences action/adoption of information)
19 Have you cited this document in your own reports or documents?
YES (go to question 21)
NO (go to question 20)
20 Do you plan to cite this document in your own reports?
YES
MAYBE
NO
DON’T KNOW
21 Has this document introduced you to a new idea/way of thinking for a currently
used practice (i.e., not a practice adopted from the document)?
YES
NO
22 Has this document changed your beliefs about a particular approach to prac-
tice?
YES
NO
Effort (efforts made to favour information)
23 Have you favoured the results in this document over other document(s)/sources
of information?
YES
NO
24 Have you favoured using this document over other document(s)/sources of infor-
mation?
YES
NO
Adoption (document influences adoption of a practice/practice adopted from document)
25 Have you adopted a practice outlined in the document?
FULLY (go to question 28)
PARTIALLY (go to question 28)
NOT AT ALL (go to question 26)
26 Do you plan to adopt a practice outlined in the document?
FULLY (go to question 27)
PARTIALLY (go to question 27)
NOT AT ALL (discontinue questions)
NOT SURE (discontinue questions)
If answered NOT AT ALL or NOT SURE to Question 26 proceed to Section 2.
27 Do you know when you will begin to use the practice you plan to adopt?
YES (discontinue questions)
NO (discontinue questions)
28 a) Was the practice you adopted a Best Practice (as defined by the document/
source)?
YES (go to question 30)
NO (go to question 29)
b) Was the practice you adopted a Promising Practice (as defined by the document/
source)?
YES
NO
29 Have you stopped a non-recommended practice?
YES
NO
NOT APPLICABLE
30 Have you combined the components of more than one practice?
YES
NO
Implementation (adopted information becomes practice)
31 Overall, in the past 1 (6, 12, 18) month(s), how fully have you used a practice
recommended in the document?
NOT AT ALL
A LITTLE
A LOT
A LOT, BUT ADAPTED FROM THE ORIGINAL
32 Have you employed short-term strategies for using this practice?
YES
NO
33 Do you know the short-term effects (outcomes) from using this practice?
YES
NO
34 Do you spend your time managing the activities of the practice?
YES
NO
35 Do you know the long-term requirements to using this practice?
YES
NO
36 Has using this practice become routine (i.e., practice runs smoothly with minimal
management problems)?
YES
NO
37 Have you varied your use (i.e., made modifications) of the practice to increase
its impact on your target population?
YES
NO
38 Have you collaborated with colleagues and/or other organizations targeting the
same population to implement this practice?
YES (go to question 40)
NO (go to question 39)
39 Do you plan to collaborate with colleagues and/or other organizations targeting
the same population to implement this practice?
YES
MAYBE
NO
40 Have you explored other practices that could be used in combination with, or in
place of, the current practice to improve effectiveness?
YES
NO
Impact
41 Has this practice made an impact on your target population?
YES
MAYBE
NO
DON’T KNOW
42 Has your use of this document changed a current practice or routine in your
work?
YES
MAYBE
NO
DON’T KNOW
43 Have you encouraged a colleague(s) to adopt this practice?
YES
NO
44 Have you persuaded a colleague(s) to adopt this practice?
YES
NO
Additional Comments
Are there any additional comments you would like to make about the document or practice?
(Your comments do not need to be related to an adopted or implemented practice.)
SECTION 2: Deliberate Non-use
This section only applies to answers NOT AT ALL or NOT SURE to Question 26.
Please indicate ALL of the following reasons why you chose not to adopt this new source of information/document/practice/intervention/innovation.
Innovation Characteristics
Relative Advantage
I have an equivalent program already in place
The innovation was not perceived to be better than the current program
The innovation did not show any economic advantage from adopting it
The innovation was more time-consuming and required more effort than the current program
Compatibility
The innovation was not consistent with the current values of my program or organization
The innovation did not meet the needs of my program or organization
Complexity
The innovation was too difficult to understand
The innovation was too difficult to implement or use
Trialability
The innovation could not be implemented on a small scale to determine its advantages or disadvantages
I have not heard of any other organization(s) related to mine that have adopted this innovation
Observability
I have not seen this innovation successfully implemented
Organizational Characteristics
Size and Resources
My organization is too small or too large to adopt this innovation
My organization does not have enough personnel resources (staff) to adopt this innovation
My organization does not have enough financial resources to adopt this innovation
Location
My organization was not in an appropriate location to adopt or implement this innovation
Hierarchy
I do not have enough decision-making authority in my position to decide to adopt this innovation
I was not able to prove to my supervisor that this was an important innovation to adopt
Formalization
This innovation did not follow the rules and procedures of my organization
There was not enough research evidence that this innovation would be effective or successful
Environmental Characteristics
There is not enough collaboration or potential for networking with other organizations to be able to
adopt and implement this innovation
Individual Characteristics
This innovation did not seem relevant to my practice
It is not an appropriate time to be adopting this innovation
This innovation does not coincide with my values or beliefs about what is effective
I have insufficient time to adopt and implement a new innovation
Other
Other reasons not mentioned above have resulted in non-adoption of this innovation
These other reasons are:
Kelly Skinner is a Ph.D. candidate in the Department of Health
Studies and Gerontology at the University of Waterloo. She currently
holds a doctoral research award from the Institute of Population and
Public Health of the Canadian Institutes of Health Research. Her
main academic interests include chronic disease prevention in Abo-
riginal communities, nutrition in youth, and knowledge exchange.
In this article, the development of a system for collecting and assessing best community-based health promotion practices for dissemination is described. The key system components are (a) a protocol for identifying meritorious practices, (b) criteria for assessing those practices, and (c) an assessment procedure. A key informant process was used to identify interventions, and interviews were conducted to acquire detailed information on them. Categories of criteria pertaining to (a) effectiveness, (b) plausibility, and (c) practicality were developed for assessing practices. Application of the criteria led to selected practices’ being designated as “best,”“promising,” or “to be tracked.”