Environment and Planning B: Planning and Design, 1989, volume 16, pages 127-140
Planning and plan implementation: notes on evaluation criteria

E R Alexander ¶
Department of Urban Planning, University of Wisconsin-Milwaukee, Milwaukee, WI 53201, USA
A Faludi
Planologisch en Demografisch Instituut, Universiteit van Amsterdam, 1011 NH Amsterdam, The Netherlands
Received 15 November 1988
Abstract. This paper concerns the distinction between 'good' and 'bad' planning. Three views of the planning process are distinguished, with their associated criteria of the quality of plans: planning as control of the future, implying that plans not implemented indicate failure; planning as a process of decisionmaking under conditions of uncertainty, where implementation ceases to be a criterion of success, but where it becomes difficult, therefore, to give stringent criteria of the quality of a plan; and a view holding the middle ground, where implementation is still important but where, as long as outcomes are beneficial, departures from plans are viewed with equanimity. Similar distinctions are drawn in the implementation literature and in the literature on programme evaluation. The authors seek to develop a rigorous approach to evaluation under conditions of uncertainty. For this purpose, the authors draw on the policy-plan/programme-implementation-process (PPIP) model developed by Alexander and give five criteria for comprehensive evaluation: conformity, rational process, optimality ex ante, optimality ex post, and utilisation. The procedure is outlined in considerable detail, by means of tables and flowcharts. The framework confronts the dilemma that, although policy and planning must face uncertainty, we must at the same time be able to judge policies, plans, and their effects.
A question which must naturally interest us as planners is: What is 'good' or 'bad' planning? This is of course intimately linked to another issue which has been the subject of some discussion over the years: What is planning? In some views perhaps both these questions are trivial; after all, Vickers (1968) simply said "planning is what planners do", and evaluating the effectiveness of planning may be just as obvious.

This paper is presented on the premise that the answers to these questions are neither obvious nor simple, and that they have important implications for how we view and practise planning. We suggest that ideas on what planning is and how it should be evaluated are changing. Established views are fading, and alternative models of the planning process are proposed as replacements.

The conventional planning model also implied a set of criteria for 'good' and 'bad' planning. These criteria had, and probably still have, a strong influence on how planning and planning efforts are regarded both by planning practitioners and by others. We will suggest that perhaps these criteria were never realistic to begin with, and that other criteria should replace them. At the same time, if planning is to have any credibility as a discipline or a profession, evaluation criteria must enable a real judgment of planning effectiveness: good planning must be distinguishable from bad.
First, the relationship will be explored between different definitions of planning that have been proposed, and various perspectives on planning evaluation. Next, we will discuss the link between planning and plan evaluation and implementation assessment, which has also been the subject of a growing literature. Last, we will suggest some criteria for evaluating planning processes, plans, and their outcomes, criteria that respond to the shortcomings in previous evaluation approaches.

¶ This paper is based on discussions held while Professor Alexander was a visiting professor at the Institute of Planning and Demography of the University of Amsterdam in the fall term of 1987.
Planning definitions and evaluation
In his paper "If planning is everything, maybe it's nothing", Wildavsky (1973) showed the link between the definition and evaluation of planning. He defined planning as control of the future, and suggested that, since uncertainty makes control of the future impossible, the question 'what is good planning?' is unanswerable. If Wildavsky's premises are accepted, his conclusion is irrefutable: planning cannot be evaluated, and is, in essence, an act of faith.

One of Wildavsky's implied axioms is, indeed, incontrovertible. A human activity, if it is not undertaken solely as a symbolic ritual, must be capable of evaluation, so that practice can learn the lessons of experience, and success can be distinguished from failure. The link between planning and action, and between plans, implementation, and results, is today universally accepted.
But defining planning as control of the future implies that planning is not successful if there is anything less than total conformity. Less extreme definitions of planning than Wildavsky's have been proposed, which would make evaluation possible without making demands that are impossible to meet.

Responding to Wildavsky, Alexander (1981) suggested that planning is not everything. He defined planning as the societal activity of developing optimal strategies to attain desired goals, linked to the intention and power to implement. This definition limits planning and excludes many areas of important social and individual activity. At the same time, it suggests some criteria for evaluating plans and planning processes.

These criteria are still focused on implementation, but they link the quality of planning and plans to the optimality of the strategies that were devised. In this view, a plan that was implemented, and where expected positive outcomes significantly outweigh unanticipated undesired effects, is effective. In retrospect, it seems easy to judge such a strategy as successful.
Unfortunately, evaluation is rarely that simple. What of planning efforts which were not, or were only partly, implemented? Are these total failures? In the Wildavsky view they would be. But our evaluation begins to be more complex. Rationality, as a model for justifying decisions (Faludi, 1986a, page 84), becomes an important evaluative criterion, in addition to outcomes as compared with intentions. Here, rationality means the superiority of a proposed course of action over its alternatives. Demonstrating rationality may require analyses, predictions, and evaluations in support of the proposals.

Did the planning process conform to the requirements of rational decisionmaking? Given the information available to planners and decisionmakers at the time, could the chosen strategies reasonably be judged to be feasible and optimal? These are not easy questions to answer.
Answering these questions demands an ex post reconstruction of the decisionmakers' ex ante perception of their situation. Such reconstructions are difficult but not impossible: exercises in historical interpretation. Like the historian, the analyst has to stick rigorously to what the decisionmakers knew about the situation, and to their motives and the context of their actions.

But if the scores on the rationality test are positive, then any shortfalls between plan and reality cannot be attributable to the planners or plans, unless planning is expected to be superhuman. Rather, planning failure here must be the result of changes that could not be anticipated.
This is very important, because such changes are intrinsic to our human and social condition: the power of anticipation is limited by uncertainty. Uncertainties include uncertainty about the decision environment: what are future trends going to be?; uncertainty about goals: for what values (our own and those of future 'consumers' of our plans' results) should we plan?; and uncertainty about related areas of choice: what decisions and choices are going to be made in areas related to the subject of current policy or planning efforts, for example, national economic policy, pending environmental legislation, etc? (Friend and Jessop, 1977, pages 88-89; Hall, 1980, pages 4-11).
Uncertainty is a central element in another definition of planning that has recently been proposed. Faludi's "decision-centred" view of planning (1987, pages 116-137) abandons the direct link to action that has been suggested by observers of the planning process (Friedmann, 1969; 1987, pages 44-46; Gross, 1971). Instead, he defines planning as a process of creating a frame of reference for operational decisions: those decisions which represent the commitment to action by the decisionmaking agent or through which the decision agent deploys other organisations or units in planning or implementation activities.

Faludi breaks this link not to deprecate the importance of action. On the contrary, decisions on action to be taken here and now are so important that decisionmakers cannot be overconcerned with following some plan. Plans are only there to be helpful, when some form of advance structuring of decision situations is needed. But the structuring devices are secondary in importance. What are of primary importance are decisions.(1)
It is for this reason that flexibility is incorporated into the decision-centred view of planning from the start. In this view, change in decision situations is likely between planning and operational decisionmaking, so nonconformity of outcomes or nonimplementation of plans are not necessarily failures. If plans were used in operational decisionmaking, then they served their purpose, even if operational decisions and their outcomes prove to be quite different from those prescribed.

This approach sees plans as prior investments which help to improve the operational decisionmakers' grasp of their situation. As long as decisionmakers avail themselves of plans, the plans fulfil their purpose. So, to come to a positive conclusion about a plan, it is not necessary for it to be followed strictly; indeed, it need not be followed at all. All that is required in this view for the plan to be effective is that it be used.
In overview, we can recognise three different approaches to uncertainty, each conforming to one of the three above definitions of planning. Wildavsky's planning is a 'straw man' who has to eliminate uncertainty if he is to be conceded the right to exist. Alexander's definition recognises uncertainty, which planned strategies have to incorporate if they are to be effective, and which plan evaluation must take into account in assessing implementation. Faludi's definition embraces uncertainty, to the extent that the link between planning and outcomes is broken, and implementation conformity becomes ultimately irrelevant to the evaluation of planning.

Arraying the three definitions on a continuum, we find Wildavsky at one pole where plans not implemented always indicate failure, and Faludi at the other where implementation ceases to be a criterion of success. Alexander holds the middle ground where implementation is still important but where, as long as outcomes are beneficial, departures from plans are viewed with equanimity.
(1) This is even true where plans carry legal force: if they do not fit the exigencies of the operational decision, they are ignored as a matter of course.
Implementation and plan evaluation
Related fields may offer some illuminating parallels to the evolution in views of planning that has been reviewed above. These are the study and assessment of implementation and programme evaluation. Observers and analysts of implementation recognised the importance of uncertainty from the beginning, for example Pressman and Wildavsky's (1973) classic study of the Oakland Office of Economic Opportunity project, which can almost be said to have launched this field.

Nevertheless, the approaches to implementation and implementation assessment that developed in the early to mid-1970s can be characterised as 'linear' (Alexander, 1985, pages 407-408; Faludi, 1987) or 'top-down' (Sabatier, 1986). The authors concerned assume that policies or plans are complete at a given point in time, and evaluate implementation by the degree to which outcomes conform to policy. This approach is best presented in the work of Mazmanian and Sabatier (1981; 1983).
Subsequent views of the policy-implementation process modified this approach considerably, and saw the transformation of ideas into action as much more interactive. The process was variously described as 'circular', 'reflexive', or, finally, as a 'negotiative process' (Alexander, 1985, pages 408-409). Clearly, such views have implications for implementation assessment: evaluation can no longer simply compare the conformity of the outcomes with the policy or plan. Instead, implementation itself becomes the object of evaluation.
Approaches to programme evaluation went through a similar transformation. Originally, programme evaluation was presented as an objective, almost scientific, undertaking. A programme's success could be ascertained by measuring its impacts, using one of a variety of more or less rigorous experimental designs (Campbell and Stanley, 1966; Hatry et al, 1973; Tripodi et al, 1971). Gradually, the objectivity of programme evaluation came under question (House, 1980; Weiss, 1972; Wholey, 1979), and recognition of the 'politics' of evaluation became the order of the day (Williams, 1975; Weiss, 1978).
As a result, accepted styles of programme evaluation changed. Programmes were no longer expected to deliver outputs, or to generate recognisable impacts. Instead, programmes also became the objects of 'process'-type evaluations, in which the delivery of the programme itself rather than its product became the focus of attention (Alterman et al, 1984; Madsen, 1983; Patton, 1980; Rutman, 1977). In these evaluations, relevant criteria were no longer only the presence of positive impacts that had been planned and were attributable to the programme intervention, but rather process characteristics such as client involvement, organisational interaction, or sense of accomplishment.
Thus, evaluation of planning and plans, implementation assessment, and programme evaluation have evolved through the last two decades in ways which reflect a common problem. All began with models that implied a relatively determinate relationship between intention and outcome, where accomplishment was measured by assessing conformity between policies, plans, and programme objectives, and actual outcomes and impacts. All have substituted a consciousness of process for that preoccupation with product, and have recognised the fallibility of understanding, the ubiquity of uncertainty, and the socially constructed nature of 'objective' knowledge.
The problem is now: how can we evaluate? How can we distinguish between success and failure, between effective planning and incompetent or misguided efforts? This problem must be confronted, if we are not to succumb in a sea of relativism which makes us vulnerable to our harshest critics, who should be, and often are, ourselves. Evaluation, in each of these fields, is a challenge that must be met so that learning can be possible. Learning from experience can only be accumulated and transformed into knowledge through systematic evaluation, generalisation, and development of new theories and norms for practice.

(2) This summary review focuses on the US scene; for a valuable international comparison, see Levine et al (1981).
Evaluating plans and planning
Unquestionably the evolution described above represents progress: from simplicity to complexity. The decision-centred view of planning, like the interactive or negotiative model of implementation, and the process-oriented approach to programme evaluation, are improvements on their predecessors, because they are more realistic and incorporate uncertainty and change as facts of life.

But here we want to address the problem that this complexity raises: the fact that it has apparently become impossible, if we take these approaches at their face value, to undertake any evaluation. This warrants some explanation.
According to Popper (1959), in empirical enquiry, only falsifiable propositions should be called scientific. This can be applied to evaluation of planning or plans as well, in the sense that the objects of the assessment must be able to fail any of the tests involved. At the end of the day, it must be possible to give a 'thumbs down' and, furthermore, to convey the reasons for one's negative judgment to others. In other words, evaluation is unworthy of that name unless there are criteria for the evaluator to recognise the 'good' and distinguish it from the 'bad'. Thus, using a term borrowed from Popper and his school, we may say that plans are fallible, and that evaluation must relate to their fallibility.
Of the three modes discussed, the decision-centred model seems to be particularly vulnerable to this type of critique. This is because it deliberately breaks the link between plans and outcomes 'on the ground'. If this were to mean that all planning is 'good' planning, how could planning be evaluated?(3) And if it cannot be evaluated, how can any claim be made for planning as an activity that contributes something to society and humankind? For a view of planning which again makes the evaluation of planning, plans, and plan outcomes possible, we must see planning in its larger context: planning as part of the social deliberative and interactive process which links aims to action, and which transforms ideas into realities.
This process is recursive (Mack, 1973, pages 135-139), and essentially hierarchical in its progression from broad abstraction to concrete and case-specific reality. This is not to suggest that the flow through the stages is necessarily top-down: depending on the stimulus and context, it can be bottom-up (Elmore, 1979/80), or begin at any intermediate level, as shown in figure 1 (see over). This process has been called the PPIP: 'policy-plan/programme-implementation process' (Alexander, 1985).
The PPIP model offers a view of planning that allows us to integrate policy, planning, projects, and programmes, operational decisions, implementation and implementation decisions, and the outputs, outcomes, and impacts of plans and their implementation. First we must define all these and relate them to one another; this is done below, with the relationships between these elements shown in table 1.

(3) Postuma (1987) shows that this is not necessarily so in his evaluation of the 1935 General Extension Plan of Amsterdam. This plan provided a framework for housing-related decisions until long after World War 2, but with respect to port developments it failed to give meaningful guidance. Thus, a plan, or parts of it, can be shown not to have worked.
A policy or a plan can be defined as a set of "instructions ... that spell out both goals and the means for achieving those goals" (Nakamura and Smallwood, 1980, page 31). Policies and plans may be distinguishable from one another by their respective scope and range, and their relative degrees of abstraction or concreteness and specificity.(4)

Programmes and projects are specific interventions to achieve defined objectives, discrete 'chunks' of solutions, as it were, to specific problems (Wildavsky, 1979, pages 391-393). The programme delivers services or initiates some course of action, such as regulation, reorganisation, etc.(5) The project produces a concrete product: a facility, construction, infrastructure, etc. A useful distinction is between 'strategic projects', that is, projects undertaken by higher level authorities as part of their broad mandate (for example, facilities or infrastructure of national or regional importance such as airports, harbours, or major highways) and other projects implemented by local jurisdictions and the private sector (Faludi, 1986b, page 260).
Operational decisions are those decisions made in the context of the deliberative process that commit the decision agent to action. Reversal of an operational decision entails costs. Operational decisions can be likened to output. In a manner of speaking, they are whatever leaves the planning agency in terms of stated intentions, persuasive statements, etc. Operational decisions need not, however, be implementation decisions; they can also be decisions affecting lower-level or other agencies or organisations: regulatory approvals, funding allocations, etc. But they are distinct, in their association with commitment, from planning decisions. Plans reflect commitments that are easily suspended or reversed by merely substituting one form of words for another (Faludi, 1987, pages 116-117).
Implementation and implementation decisions here refer to action and operations in the field. Indeed, if we adopt current perspectives on implementation, the division between policy, planning, and implementation is fuzzy, and the definition of implementation will vary relative to the level of organisation or government concerned (Alexander, 1985, pages 409-410).
[Figure 1, not reproduced: a flow diagram in which a stimulus may enter, and the process may stop, at any stage, with links running between policy, plan, programme, and implementation.]
Figure 1. The policy-plan/programme-implementation process.
(4) Key terms need to be defined for the purposes of discussion because they are sometimes used in different senses (compare Williams, 1976, pages 272-273). There are other usages, like that of Friend and Jessop (1977, page 111), which define policies as forms of expression to be used within plans, the other forms being programmatic statements.
(5) Again, other usages of these terms exist (for example, see Friend and Jessop, 1977). Williams (1976) defines a programme as a cluster of activities (by implication, with spatial extension, for example, nationwide) and a project as a single activity within such a cluster.
But viewing the PPIP as a whole we can distinguish implementation as action and operations in the field designed to achieve change 'on the ground'. Implementation decisions are a special class of operational decisions, therefore: those decisions which produce the final outputs of a programme or a project, and which impact directly upon the client, the organisational, or physical environment. Such decisions include the application of regulations, disbursement of funds, contracting and procurement, personnel actions and management, service delivery, etc (see table 1).
In table 1 we see all these elements of the process which transforms ideas into realities arrayed on the dimension which relates to their essential difference: the degree of abstraction and generality, or concreteness and particularity. We can now review the alternative evaluation criteria for planning which have evolved as a result of the alternative definitions that have been discussed above.
Three distinct evaluation approaches can be identified. Traditional or conventional 'objective' assessment of policy or planning effectiveness, success in implementation, and programme accomplishment, ignores uncertainty, as we have seen. It demands conformity of operational decisions, implementation processes, and concrete results with the intentions expressed in policies and plans.

Table 1. The policy-plan/programme-implementation process: elements and relationships. (Elements are arrayed from the abstract, general, and broad to the concrete, particular, and specific.)

Deliberative process: policy; plan(s); programme(s); project(s).
  Agent: government(s), organisations, institutions, agencies.
Decisions: operational decisions elaborating or implementing policy; operational decisions elaborating or implementing plans; operational or implementation decisions and actions.
  Agent: agencies implementing policy, plans, or programmes and projects.
Actions and outputs: plan(s), programme(s), (strategic) projects; programme(s), strategic and other projects; legislation, personnel actions, contracts and procurement, resource allocations, disbursements, etc; construction and development projects; service delivery (service programmes); administrative action (managerial and reorganisational programmes); application of legislation and regulations.
Object of change(s): elaboration, development, or implementation of policy, plan(s), strategic project(s), programme(s), project(s).
Results and impacts: the physical, built, and socioeconomic environment; other organisations, agencies, firms, households, individuals.
'Subjective' evaluation takes uncertainty into account. Uncertainty is incorporated by evaluating the planning process and assessing the optimality of the resulting strategies. This must be done in the light of the actual planners' or decisionmakers' ex ante knowledge and information and their perceived and actual constraints. This is different from 'objective' assessment. So, we may expect some plans to fail one test, but pass the other. We may conclude that the tests are complementary.
'Decision-centred' evaluation examines the use of the policy or plan as a frame of reference for operational decisions. Acceptance of uncertainty is integral to this evaluation approach: changes in the perceived decision situation (which is the cognitive context in which policies and plans are developed and operational decisions are taken) are a sufficient reason for nonconformity between operational decisions and their frame of reference.(6)
Each of these approaches has its strengths and weaknesses. 'Objective' evaluation has the advantage of being concrete and intuitively acceptable. Its weakness is its failure to allow for unavoidable and irreducible uncertainty: thus such evaluations (and they are common) have made demands for performance which have been impossible to fulfil.
'Subjective' evaluation has the advantage of allowing for uncertainty, and allowing for planners', programme designers', and implementors' fallibility by making judgments based on their perceived decision situations. Including rationality and optimality criteria still enables positive and negative evaluations to be made. This approach meets the requirement, set out previously, that the test must be constructed so that a plan could fail it. Its weaknesses are its complexity and the difficulty of reconstructing the ex ante decision situation. But neither difficulty is uncommon in social research.
The strengths of 'decision-centred' evaluation are in its logical consistency. Embracing uncertainty, it absolves policy and planning from responsibility for subsequent operational decisions and implementation. But this is also its apparent weakness: in severing the link between policies, plans, and outcomes, it seems to have lost the essential ingredient that any evaluation must have, namely that outcomes could be negative as well as positive. This concern, however, is alleviated if we specify the conditions under which we would regard a plan as useful to operational decisionmakers.
A proposed framework for policy-plan-implementation evaluation
Here a framework for evaluating policy and plan-implementation is presented. This combines the three evaluation approaches which we see as, in effect, complementary. The framework lists criteria in a programmed sequence of questions to be applied to the policy, plan, or planning process under consideration, as well as to its outcomes. Depending on the responses to this sequence, evaluation can be positive, neutral, or negative.
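The programmed sequence lends itself to being read as a small decision procedure. The following sketch, in which the function name, argument names, and returned phrases are ours rather than the authors', illustrates the conformity branch of the sequence (compare table 2):

```python
# A minimal sketch (our construction, not the authors'): the conformity
# branch of the programmed evaluation sequence. Each argument answers one
# of the yes/no questions of table 2; the returned string states the
# rating reached, or which criterion to apply next.

def evaluate_conformity(outcomes_conform: bool,
                        complete: bool = False,
                        significant_partial: bool = False,
                        negligible_partial: bool = False,
                        directive_function: bool = False) -> str:
    """Walk questions 1.1-1.2 of the evaluation sequence."""
    if not outcomes_conform:                       # 1.1: no conformity at all
        return "go to utilisation criterion (2)"
    if not complete and not significant_partial:   # 1.1.1 and 1.1.2
        if negligible_partial:                     # 1.1.3: almost negligible
            return "negative; go to utilisation criterion (2)"
        return "disaggregate into conforming and nonconforming parts; restart"
    # Complete (or significantly conforming) outcomes: 1.2, the
    # directive-function test.
    if directive_function:
        return "positive; still evaluate rationality and optimality (3)"
    return "negative: no directive function despite conformity"

# For example, a plan whose outcomes conform fully, but which merely
# projects trends that would have occurred anyway, rates negative:
print(evaluate_conformity(outcomes_conform=True, complete=True,
                          directive_function=False))
```

Note that, as in table 2, a plan with significant partial conformity proceeds to the directive-function test just as a fully conforming one does.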
(6) By an extension of the same logic, decision-centred plans must deal with contingencies, if in no other way than by allowing for future adaptations (Faludi, 1987). This approach has been applied empirically by Postuma (1987). An elaboration specifies four conditions which in part foreshadow the complementary evaluation proposed below: (1) conformity with reference to the plan; (2) deliberate (that is, reasoned) departure from the plan; (3) reference to the plan in analysing the consequences of nonconforming operational decisions; (4) regenerative capacity of the plan, that is, systematic review and amendment using the plan as frame of reference (Wallagh, 1988, pages 122-123).
This evaluation framework sequentially applies criteria from each of the three evaluation approaches discussed above.

(1) Conformity. This intuitive question is taken over from the conventional evaluation approach. It asks: "To what degree do operational decisions, implementation decisions, and actual outputs, outcomes, and impacts conform to the goals, objectives, intentions, and instructions expressed in the policy, plan, or programme being evaluated?" This test concerns two questions, therefore: (a) Was the plan followed, or is it being implemented? (b) Are its effects as desired?

But, unlike in the conventional evaluation approach, conformity is not the sole criterion of success. Implementation or results of policies or plans which do not conform, in some degree or other, do not automatically elicit a negative evaluation of the policies or plans 'responsible'. Rather, additional criteria are sequentially applied. To the degree that conformity exists, the policy, plan, or programme has met one condition for a positive evaluation. Other conditions involve additional criteria which are presented below.
(2) Rational process. A rational approach to the planning and decisionmaking process is another criterion that is applied, whether or not operational decisions and outcomes are found to be conforming to plan or policy requirements. A rational process here means conforming to certain normative requirements in process and method. These essentially consist of the following general conditions (the more specific ones associated with formal rationality in a narrower sense of the word are discussed below under ex ante optimality):
(a) Completeness: Reasonable acquisition and use of available knowledge and information, and the 'design' [search for, or development of, options (Alexander, 1982)] and evaluation of alternative courses of action; applying this requirement means an assessment of the ex ante decision situation.
(b) Consistency: Logical consistency in the data, methods used in their analysis and synthesis, and strategies presented in the conclusions and recommendations; adoption and implementation of recommended strategy; examination of policy or plan documents can illuminate the consistency of policies and plans.
(c) Participation: Involvement in policy or plan development of relevant affected parties, and their participation in critical decisions; the values reflected in the goals and objectives of a policy or plan must be a weighted aggregation of these interests. This criterion reflects the aspiration toward uninhibited communication and consensus of critical rationality (Habermas, 1984). Legislative, policy, and plan documents, and interpretive reconstruction of the planning process may be necessary to assess the degree to which this requirement has been met, and at best this remains an essentially ideological, political, or subjective evaluation.
(3) Optimality ex ante, or rationality in the narrow sense. Could the strategy or the courses of action prescribed in the policy or plan under assessment be considered optimal? Determining optimality involves assessing relationships between aims and means. When this happens ex ante, obviously we are talking about such relationships as perceived by the decisionmakers in the course of taking their decisions.

(4) Optimality ex post. Was the strategy or were the courses of action prescribed in the policy or plan under assessment in fact optimal? As against the evaluation of the plan under (2) and (3) above, this is ex post assessment of the goals and objectives of the undertaking that has been implemented. It also goes beyond the test proposed under (1) above, where one question was whether the effects were the ones the plan aimed for. But, even if they were, with hindsight it is possible to conclude that these effects were not, in fact, optimal; this is why a separate evaluation is necessary.
Table 2. Evaluation questions, with conditional responses and/or evaluations.

1 Conformity
1.1 Do policy-plan-programme-project (PPPP) outcomes or impacts conform to PPPP instructions or projections? (If yes, go to 1.1.1; if no, go to 2.)
1.1.1 Is conformity complete or partial? (If complete, go to 1.2; if partial, go to 1.1.2.)
1.1.2 Is degree of partial conformity significant in terms of impact on the relevant (socioeconomic, physical, built) environment? (If yes, go to 1.2; if no, go to 1.1.3.)
1.1.3 Is partial conformity so limited as to be almost negligible? (If yes, PPPP rates negative; go to 2. If no, disaggregate policy or plan evaluation into more conforming and less conforming parts and go to start for each separately.)
1.2 Does PPPP have a significant directive function (that is, is it more than a projection of practices, procedures, or trends that would have occurred without the respective PPPP, and is it more than a collage of other PPPPs)? (If yes, PPPP rates positive; assume that PPPP has been used, but it can still be evaluated for rationality and optimality; go to 3. If no, PPPP rates negative, in spite of conformity, due to absence of directive function.)

2 Utilisation
2.1 Was the PPPP used or consulted in making operational decisions involved in the development or implementation of this or other PPPPs?
2.2 What was (were) reason(s) for nonconformance or nonutilisation?
2.2.1 Change in decisionmakers?
2.2.2 Could this change have been anticipated, or could the PPPP have incorporated flexibility or adaptability to respond to such a change?
2.3 Change in decision situation?
2.3.1 Caused by (a) objective changes in environment, phenomena, trends? (b) perceived changes in environment, phenomena, trends? (c) changes in societal or organisational values, goals, objectives? (d) changes in available means, resources, strategies, technologies?
2.3.2 Could the change(s) in the decision situation have been anticipated or allowed for in the PPPP (for example, through prediction, flexibility, adaptability, potential for revisions, etc)?

Since response to 1 indicates nonconformance, explore reasons for nonconformance with utilisation or