The Logic Model
for Program Planning and Evaluation
CIS 1097
Paul F. McCawley
Associate Director
University of Idaho Extension
What is the Logic Model?
The Logic Model process is a tool that has been used for
more than 20 years by program managers and evaluators
to describe the effectiveness of their programs. The model
describes logical linkages among program resources, activ-
ities, outputs, audiences, and short-, intermediate-, and
long-term outcomes related to a specific problem or situ-
ation. Once a program has been described in terms of the
logic model, critical measures of performance can be identified.¹
Logic models are narrative or graphical depictions of
processes in real life that communicate the underlying
assumptions upon which an activity is expected to lead to
a specific result. Logic models illustrate a sequence of
cause-and-effect relationships—a systems approach to
communicate the path toward a desired result.²
A common concern in impact measurement is limited control over complex outcomes. Establishing desired long-term outcomes, such as improved financial security or reduced teenage violence, is tenuous because we may have limited influence over the target audience and because environmental variables are complex and uncontrolled. Logic
models address this issue because they describe the con-
cepts that need to be considered when we seek such out-
comes. Logic models link the problem (situation) to the
intervention (our inputs and outputs), and the impact
(outcome). Further, the model helps to identify partner-
ships critical to enhancing our performance.
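
Because the model is, at heart, a structured chain from situation to outcomes, it can be recorded in a simple data structure. The Python sketch below is illustrative only (the class and the example entries are hypothetical, not part of this bulletin); it shows one way to keep each element of the chain in its own labeled slot:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """One program's logic model: situation -> inputs -> outputs -> outcomes."""
    situation: str
    inputs: list = field(default_factory=list)        # what we invest
    outputs: list = field(default_factory=list)       # what we do and who we reach
    short_term: list = field(default_factory=list)    # changes in knowledge, skills, attitude
    intermediate: list = field(default_factory=list)  # changes in behaviors, practices, policies
    long_term: list = field(default_factory=list)     # changes in social/economic/environmental conditions

# Hypothetical example entries, loosely modeled on the workshop example in Figure 2.
model = LogicModel(
    situation="Small processors are losing yield to outdated fermentation practices",
    inputs=["faculty time", "grant funds", "research-based curriculum"],
    outputs=["3-day workshop", "1-day follow-up workshop"],
    short_term=["participants understand proper fermentation techniques"],
    intermediate=["participants install timing equipment"],
    long_term=["participants increase product yield"],
)
print(model.situation)
```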
Planning Process
The logic model was characterized initially by program
evaluators as a tool for identifying performance measures.
Since that time, the tool has been adapted to program
planning, as well. The application of the logic model as a
planning tool allows precise communication about the
purposes of a project, the components of a project, and
the sequence of activities and accomplishments. Further, a
project originally designed with assessment in mind is
much more likely to yield beneficial data, should evalua-
tion be desired.
In the past, our strategy to justify a particular program
often has been to explain what we are doing from the per-
spective of an insider, beginning with why we invest allo-
cated resources. Our traditional justification includes the
following sequence:
1) We invest this time/money so that we can generate this activity/product.
2) The activity/product is needed so people will learn how to do this.
3) People need to learn that so they can apply their knowledge to this practice.
4) When that practice is applied, the effect will be to change this condition.
5) When that condition changes, we will no longer be in this situation.

Figure 1. The basic logic model.³ The situation leads to inputs (what we invest: time, money, partners, equipment, facilities), which lead to outputs (what we do: workshops, publications, field days, equipment demonstrations; who we reach: customers, participants), which lead to outcomes (short-term changes in knowledge, skills, attitude, motivation, and awareness; medium-term changes in behaviors, practices, policies, and procedures; long-term changes in environmental, social, economic, and political conditions), all within external influences, the environment, and related programs.
The logic model process has been used successfully following the above sequence. However, according to Millar et al.,² logic models that begin with the inputs and work
through to the desired outcomes may reflect a natural
tendency to limit one’s thinking to existing activities, pro-
grams, and research questions. Starting with the inputs
tends to foster a defense of the status quo rather than cre-
ate a forum for new ideas or concepts. To help us think
“outside the box,” Millar suggests that the planning
sequence be inverted, thereby focusing on the outcomes
to be achieved. In such a reversed process, we ask our-
selves “what needs to be done?” rather than “what is
being done?” Following the advice of the authors, we
might begin building our logic model by asking questions
in the following sequence.
1) What is the current situation that we intend to
impact?
2) What will it look like when we achieve the desired
situation or outcome?
3) What behaviors need to change for that outcome to
be achieved?
4) What knowledge or skills do people need before the
behavior will change?
5) What activities need to be performed to cause the
necessary learning?
6) What resources will be required to achieve the
desired outcome?
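
A minimal sketch of this reversed sequence, with wording taken from the questions above (the function and field names are hypothetical), might walk a planner from the desired outcome back to the required inputs:

```python
# The six questions above, ordered outcomes-first per Millar et al.
REVERSED_QUESTIONS = [
    ("situation", "What is the current situation that we intend to impact?"),
    ("long_term", "What will it look like when we achieve the desired situation or outcome?"),
    ("intermediate", "What behaviors need to change for that outcome to be achieved?"),
    ("short_term", "What knowledge or skills do people need before the behavior will change?"),
    ("outputs", "What activities need to be performed to cause the necessary learning?"),
    ("inputs", "What resources will be required to achieve the desired outcome?"),
]

def plan_outcomes_first():
    """Prompt for each element, outcomes before inputs; return a draft model."""
    return {element: input(question + " ") for element, question in REVERSED_QUESTIONS}
```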
One more point before we begin planning a program using
the logic model: It is recognized that we are using a lin-
ear model to simulate a multi-dimensional process. Often,
learning is sequential and teaching must reflect that, but
the model becomes too complicated if we try to communi-
cate that reality (Figure 2). Similarly, the output from one
effort becomes the input for the next effort, as building a
coalition may be required before the “group” can sponsor
a needed workshop. Keep in mind that the logic model is
a simple communication device. We should avoid complications by choosing a single category for each item (i.e., inputs, outputs, or outcomes). Details of
order and timing then need to be addressed within the
framework of the model, just as with other action planning
processes.
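
As a concrete illustration of that rule, a small check like the one below (hypothetical, not part of the bulletin) can flag an item that has accidentally been entered under more than one category:

```python
from collections import Counter

def duplicate_items(model):
    """Return items listed under more than one logic-model category."""
    counts = Counter(item for items in model.values() for item in items)
    return [item for item, n in counts.items() if n > 1]

draft = {
    "inputs": ["time", "money", "42-page curriculum"],
    "outputs": ["42-page curriculum", "3-day workshop"],  # curriculum listed twice
    "outcomes": ["increased knowledge of fermentation techniques"],
}
print(duplicate_items(draft))  # -> ['42-page curriculum']
```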
Planning Elements
Using the logic model as a planning tool is most valuable
when we focus on what it is that we want to communicate
to others. Figure 3 illustrates the building blocks of
accountability that we can incorporate into our program
plans (adapted from Ladewig, 1998⁴). According to Howard
Ladewig, there are certain characteristics of programs that
inspire others to value and support what we do. By
describing the characteristics of our programs that com-
municate relevance, quality, and impact, we foster buy-in
from our stakeholders and audience. By including these
characteristics within the various elements of the logic model, we communicate to others why our programs are important to them. The elements of accountability are further described in the context of the logic model, below.

Figure 2. Over-complicated, multi-dimensional planning model: a multi-step example in which a research base, four weeks of time, and editing and printing funds produce a 42-page curriculum, a 3-day workshop for 20 participants, and a 1-day follow-up workshop; these lead to increased knowledge of proper fermentation techniques, installation of timing equipment, and ultimately a 15% increase in product yield for 60% of participants, with the outputs of each step serving as inputs to the next.

Figure 3. Structure of accountability: relevance, quality, and impact build toward stakeholder buy-in.
Situation
The situation statement provides an opportunity to com-
municate the relevance of the project. Characteristics that
illustrate the relevance to others include:
• A statement of the problem (What are the causes? What are the social, economic, and/or environmental symptoms of the problem? What are the likely consequences if nothing is done to resolve the problem? What are the actual or projected costs?);
• A description of who is affected by the prob-
lem (Where do they live, work, and shop? How
are they important to the community? Who
depends on them–families, employees, organ-
izations?);
• Who else is interested in the problem? Who are
the stakeholders? What other projects address
this problem?
The situation statement establishes a baseline for compari-
son at the close of a program. A description of the problem
and its symptoms provides a way to determine whether
change has occurred. Describing who is affected by the prob-
lem allows assessment of who has benefited. Identifying
other stakeholders and programs builds a platform to meas-
ure our overall contribution, including increased awareness
and activity, or reduced concern and cost.
Inputs
Inputs include those things that we invest in a program or
that we bring to bear on a program, such as knowledge,
skills, or expertise. Describing the inputs needed for a pro-
gram provides an opportunity to communicate the quality
of the program. Inputs that communicate to others that
the program is of high quality include:
• human resources, such as time invested by
faculty, staff, volunteers, partners, and local
people;
• fiscal resources, including appropriated funds,
special grants, donations, and user fees;
• other inputs required to support the program,
such as facilities and equipment;
• knowledge base for the program, including teaching materials, curricula, research results, and certification or learning standards;
• involvement of collaborators: local, state, and national agencies and organizations involved in planning, delivery, and evaluation.
Projects involving credible partners, built on knowledge
gained from research and delivered via tested and proven
curricula, are readily communicated as quality programs.
Assessing the effectiveness of a program also is made eas-
ier when planned inputs are adequately described. By com-
paring actual investments with planned investments, eval-
uation can be used to improve future programs, justify
budgets, and establish priorities.
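
As a minimal sketch of that comparison (the item names and numbers are invented for illustration), planned and actual investments can be tabulated and their variance reported:

```python
# Hypothetical planned vs. actual inputs; the variance informs future budgets.
planned_inputs = {"faculty days": 20, "grant funds ($)": 5000, "volunteers": 4}
actual_inputs = {"faculty days": 26, "grant funds ($)": 5000, "volunteers": 2}

for item, planned in planned_inputs.items():
    actual = actual_inputs.get(item, 0)
    print(f"{item}: planned {planned}, actual {actual}, variance {actual - planned:+}")
```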
Outputs
Outputs are those things that we do (providing products,
goods, and services to program customers) and the people
we reach (informed consumers, knowledgeable decision
makers). Describing our outputs allows us to establish
linkages between the problem (situation) and the impact
of the program (intended outcomes). Outputs that help
link what we do with program impact include:
• publications such as articles, bulletins, fact
sheets, CISs, handbooks, web pages;
• decision aids such as software, worksheets,
models;
• teaching events such as workshops, field days,
tours, short courses;
• discovery and application activities, such as
research plots, demonstration plots, and prod-
uct trials.
The people we reach also are outputs of the program and
need to be the center of our model. They constitute a
bridge between the problem and the impact. Information
about the people who participated and what they were
taught can include:
• their characteristics or behaviors;
• the proportion or number of people in the tar-
get group that were reached;
• learner objectives for program participants;
• number of sessions or activities attended by
participants;
• level of satisfaction participants express for
the program.
Outcomes
Program outcomes can be short-term, intermediate-term,
or long-term. Outcomes answer the question “What hap-
pened as a result of the program?” and are useful to com-
municate the impacts of our investment.
Short-term outcomes of educational programs may include
changes in:
• awareness–customers recognize the problem
or issue;
• knowledge–customers understand the causes
and potential solutions;
• skills–customers possess the skills needed to
resolve the situation;
• motivation–customers have the desire to
effect change;
• attitude–customers believe their actions can
make a difference.
Intermediate-term outcomes include changes that follow
the short-term outcomes, such as changes in:
• practices used by participants;
• behaviors exhibited by people or organizations;
• policies adopted by businesses, governments,
or organizations;
• technologies employed by end users;
• management strategies implemented by indi-
viduals or groups.
Long-term outcomes follow intermediate-term outcomes
when changed behaviors result in changed conditions, such
as:
• improved economic conditions–increased
income or financial stability;
• improved social conditions–reduced violence or
improved cooperation;
• improved environmental conditions–improved
air quality or reduced runoff;
• improved political conditions–improved partic-
ipation or opportunity.
External Influences
Institutional, community, and public policies may have
either supporting or antagonistic effects on many of our
programs. At the institutional level, schools may influence
healthy eating habits in ways that are beyond our control
but that may lead to social change.⁵ Classes in health education may introduce children to the food pyramid and to
the concept of proportional intake, while the cafeteria may
serve pizza on Wednesdays and steak fingers on Thursdays.
The community also can influence eating habits through
availability of fast-food restaurants or produce markets.
Even public policies that provide support (food bank, food
stamps) to acquire some items but not others might impact
healthy eating habits.
Documenting the social, physical, political, and institution-
al environments that can influence outcomes helps to
improve the program planning process by answering the fol-
lowing:
• Who are important partners/collaborators for
the program?
• Which part(s) of the issue can this project real-
istically influence?
• What evaluation measures will accurately
reflect project outcomes?
• What other needs must be met in order to
address this issue?
Evaluation Planning
Development of an evaluation plan to assess the program
can be superimposed, using the logic model format. The
evaluation plan should include alternatives to assess the
processes used in planning the program. Process indicators
should be designed to provide a measurable response to
questions such as:
• Were specific inputs made as planned, in terms
of the amount of input, timing, and quality of
input?
• Were specific activities conducted as planned,
in terms of content, timing, location, format,
quality?
• Was the desired level of participation achieved,
in terms of numbers and characteristics of par-
ticipants?
• Did customers express the degree of customer
satisfaction expected?
The evaluation plan also should identify indicators appropriate to the desired outcomes, including short-, medium-, and long-term outcomes. Outcome indicators also should be measurable and designed to answer questions such as the following (a sketch after the list shows one way to make indicators measurable):
• Did participants demonstrate the desired level
of knowledge increase, enhanced awareness, or
motivation?
• Were improved management practices adopted,
behaviors modified, or policies altered to the
extent expected for the program?
• To what extent were social, economic, political,
or environmental conditions affected by the
program?
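
One way to keep both process and outcome indicators measurable, sketched below with hypothetical names and targets, is to pair each indicator with a planned target and a measured result:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """A measurable indicator: a question, its planned target, and the measured value."""
    question: str
    planned: float
    actual: float = 0.0

    def met(self):
        return self.actual >= self.planned

# Hypothetical process and outcome indicators for a workshop program.
indicators = [
    Indicator("Was the desired participation achieved (count)?", planned=20, actual=18),
    Indicator("Did participants adopt the practice (%)?", planned=50, actual=60),
]
for ind in indicators:
    print(ind.question, "met" if ind.met() else "not met")
```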
Conclusion
Developing appropriate and measurable indicators during
the planning phase is the key to a sound evaluation. Early
identification of indicators allows the program
manager/team to learn what baseline data already may be
available to help evaluate the project, or to design a process
to collect baseline data before the program is initiated. The
logic model is useful for identifying elements of the program
that are most likely to yield useful evaluation data, and to
identify an appropriate sequence for collecting data and
measuring progress. In most cases, however, more work on
a project will be required before indicators are finalized.
Outcome indicators to measure learning should be based on
specific learner objectives that are described as part of the
curriculum. Indicators to measure behavioral change should
specify which behaviors are targeted by the program.
Indicators of changed conditions may require a significant investment of time to link medium-term outcomes to expected long-term outcomes through a targeted study or a relevant research base.
Figure 4. Insertion of the evaluation plan into the logic model: the basic model (situation, inputs, outputs, and short-, medium-, and long-term outcomes) is overlaid with an evaluation study that measures process indicators and outcome indicators.
¹McLaughlin, J.A., and G.B. Jordan. 1999. Logic models: a tool for telling your program’s performance story. Evaluation and Program Planning 22:65-72.
²Millar, A., R.S. Simeone, and J.T. Carnevale. 2001. Logic models: a systems tool for performance management. Evaluation and Program Planning 24:73-81.
³Adapted from Taylor-Powell, E. 1999. Providing leadership for program evaluation. University of Wisconsin Extension, Madison.
⁴Ladewig, Howard. 1998-1999. Personal communication during sessions on “building a framework for accountability” with the ECOP Program Leadership Committee (Tannersville, PA, 1998) and the Association of Extension Directors/ECOP (New Orleans, LA, 2000). Dr. Ladewig was a professor at Texas A&M University at the time of communication; he is now at the University of Florida.
⁵Glanz, K., and B.K. Rimer. 1995. Theory at a glance: a guide for health promotion practice. NIH pub. 95-3896. National Institutes of Health, National Cancer Institute, Bethesda, MD.
Issued in furtherance of cooperative extension work in agriculture and home economics, Acts of May 8 and June 30, 1914, in cooperation with the U.S.
Department of Agriculture, A. Larry Branen, Acting Director of Cooperative Extension, University of Idaho, Moscow, Idaho 83844. The University of
Idaho provides equal opportunity in education and employment on the basis of race, color, religion, national origin, age, gender, disability, or status
as a Vietnam-era veteran, as required by state and federal laws.
400 10-01 © University of Idaho