

ABSTRACT. In a recent review of the literature on integrating evaluative inquiry into
organizational culture, Cousins, Goh, Clark and Lee [Cousins, J.B., Goh, S.,
Clark, S. & Lee, L. (2004). Canadian Journal of Program Evaluation 19(2), 99–144]
suggest that there is a link between evaluative inquiry and organizational learning in
schools. However, there have been no published studies examining the views, per-
ceptions and importance teachers and administrators attach to these practices and
activities in their schools. This article reports results from a survey of 970 educators
about their views on both of these topics – organizational learning and evaluation.
Teachers and school administrators in 41 middle and secondary schools in Mani-
toba, Canada, responded to questions about current evaluation practices, attitudes
towards evaluation and experience with systematic inquiry, as well as organizational
learning capacity, school support structures and their readiness for evaluation and
change. The survey results suggest that educators perceive their schools to have a
moderate capacity for organizational learning. Similarly, respondents indicated that
a moderate to low level of evaluation activity is currently taking place in their
schools. Some implications for change in building a learning capacity and an eval-
uative inquiry culture in schools and suggestions for further research are discussed.
KEY WORDS: educators, evaluation, evaluative inquiry, organizational learning,
readiness for change, schools
1. Introduction
Looking back over the last century in education, school reform has
been a constant theme. Whether masquerading under the guise of
improved student outcomes or better teaching practices, operating
efficiencies (‘‘doing more with less’’), accountability, or change in
government policy, schools need to continually adapt to meet the
needs (and demands) of their many stakeholders. There is increasing
evidence to suggest that this pace of change is intensifying and the
teaching environment is becoming more complex (Conner, 1992;
Fullan, 1995, 2001). This climate of constant change has prompted
growing support for the importance of organizational learning in
schools (Fullan, 1995; Leithwood, Leonard & Sharratt, 1998; Leith-
wood & Louis, 1999; Stevenson, 2001). Teachers, educational
administrators and researchers alike are interested in ‘‘reculturing’’
schools in order to build their capacity for organizational learning
and change (Marks & Printy, 2003).

Journal of Educational Change (2006) 7:289–318 © Springer 2006
DOI 10.1007/s10833-005-5033-y
In a recent review of the literature on integrating evaluative inquiry
into organizational culture, Cousins, Goh, Clark and Lee (2004)
suggest that there is a link between evaluative inquiry and organi-
zational learning in schools. However, there have been no published
studies examining the views, perceptions and importance teachers
and administrators attach to these practices and activities in their
schools. Since organizational learning capacity (OLC) is becoming
increasingly important in schools and evaluative inquiry represents a
potentially viable way to enhance it, a direct empirical investigation
of these constructs is warranted. The purpose of this paper is to
report on a study in which we surveyed 970 Manitoba teachers and
school administrators on their views and perceptions of OLC, their
readiness for change and the use of evaluative inquiry in their
schools. We also assessed the extent to which these same schools
possess organizational learning attributes. Other factors such as
school support structures, and attitudes towards evaluation were also
explored. Our intention is to present data from the survey at a more
detailed and descriptive level than has previously been the case. Our
aim is to provide rich insights into the views and perceptions of the
educators on these issues and the extent to which organizational
learning and evaluative inquiry are present or practiced in schools.
To begin with, the paper will provide a brief review of the literature
on organizational learning and evaluative inquiry as they relate to
schools, and of why evaluative inquiry can play an important role in
organizational learning.
2. Background Literature
Popularized by Senge (1990), the concept of the learning organization
was originally targeted at business leaders, with the promise
that corporations would realize increased competitive advantage
and sustained success in the marketplace by adopting his ‘‘five
disciplines’’. Since then, the concept has continued to generate sig-
nificant interest among researchers and practitioners, in a variety of
domains. In educational administration, a significant body of
research exists, linking organizational learning to school reform,
leadership, school change, and evaluation (Cousins, 1996; Fullan,
1995; Leithwood, Aitken & Jantzi, 2001; Scribner, Cockrell,
Cockrell & Valentine, 1999; Silins, Mulford & Zarins, 2002;
Stevenson, 2001).
What exactly then, is a learning organization? There are a variety
of definitions, reflecting the extensive body of literature and the broad
range of perspectives and paradigmatic assumptions, which have
been applied to the subject. Garvin (1993) attempted to synthesize
these definitions and proposed the following, which we have adopted
for this study: ‘‘a learning organization is one which is skilled at
creating, acquiring, and transferring knowledge and at modifying its
behavior to reflect new knowledge and insights’’ (1993, p. 80).
The literature on evaluation has also suggested that evaluative
inquiry can play an important role in encouraging organizational
learning. In fact, a number of authors have documented their interest
in evaluation as an organizational learning system over the last
decade or more (Cousins & Earl, 1992; Forss, Cracknell & Samset,
1994; Owen & Lambert, 1995; Preskill, 1994). Evaluative inquiry can
contribute to organizational learning in several ways: first, the results
of an evaluation can help measure the success of a particular project
or program, provide feedback from which the school staff can learn,
and create knowledge for decision-making; second, the process of
conducting the evaluation can in itself act as a learning system.
Patton coined the term ‘process use’ (1997, 1998) to describe
this phenomenon. By virtue of their proximity to the evaluation,
organization members or other stakeholders may develop in ways
that are quite independent of the findings or substantive knowledge
emerging from the inquiry. Individuals learn to see organizational
phenomena differently and to question basic assumptions. And when
these learnings are integrated into the organizational culture, organi-
zational learning capability can be enhanced (Cousins, 1996, 1999;
Owen & Lambert, 1995; Preskill & Torres, 1999; Torres & Preskill).
Several other authors have also reported that learning may stay at
the incremental, single-loop level if it is not incorporated into orga-
nizational culture (Cousins & Earl, 1995; Forss et al., 1994). Cousins,
Goh, Clark and Lee (2004) found that penetrating organizational
effects cannot reasonably be expected until evaluative inquiry
becomes an organizational norm, and this depends heavily upon
organizational support structures, leadership, relevance and
perceived utility of evaluation.
3. Constructs and Questions Guiding this Study
The objective of this study is to gain further insight into educators’
perceptions of organizational learning and evaluative inquiry in
schools. First, we wanted to address the following questions about
evaluation: (1) To what extent do educators believe that they are
currently incorporating evaluative activities in their schools? (2) What
are their opinions about evaluation as a method of systematic inquiry
to assist decision-making? With respect to organizational learning, our
research questions were the following: (3) To what degree do edu-
cators believe that their schools exhibit the attributes of a learning
organization and have the support structures necessary to enable
learning to occur? (4) What are their perceptions of their school’s
level of readiness for evaluation and change?
In order to address these questions, the following constructs were
operationalized for teachers and administrators, so that their views
and perceptions could be captured.
3.1. Organizational Learning Capacity
For the first construct, OLC, we adopted Goh’s (1998) conceptuali-
zation of a learning organization. According to Goh (1998), those
organizations with a higher OLC are more likely to demonstrate five
learning organization characteristics or building blocks, as defined in
this model: Clarity and support for mission and vision, leadership
that supports learning, an experimenting organizational culture, the
ability to transfer knowledge effectively, and teamwork and cooper-
ation. These attributes are mutually supportive conditions which
foster a learning organization. The strength of each of these attributes
is measured using the Organizational Learning Survey developed by
Goh and Richards (1997). The more educators see their schools as
having these attributes, the greater the degree to which the school
will be seen as having OLC.
3.2. Evaluative Inquiry
Evaluative inquiry refers to a range of activities, from needs assess-
ment, program evaluation and outcome monitoring to environmental
scanning. Evaluative activities vary in their intent, their objects, the
type of information collected and methodology employed. They may
be ‘‘summative’’ in nature, providing judgment of the merit or worth
of a program, or ‘‘formative’’, providing information for decision
making. Typically, a summative evaluation is performed upon com-
pletion of a program, for the purposes of accountability, certification,
or selection (Nevo, 1983). In contrast, a formative evaluation is
aimed at program improvement, and is typically conducted while the
activity is ongoing or the program is being developed. It may be performed
by external evaluators, internal personnel, or a combination of both.
A recent trend has been to increase the participation of stakeholders
in evaluation activities, with the primary intent of enhancing the
utilization of results; but within this ‘‘collaborative’’ approach, there
is some disagreement among authors on the issues of participant
selection, depth/type of involvement, and control of the evaluation
technical decision making (Cousins, Donohue & Bloom, 1996;
Turnbull, 1999).
Our own conceptualization of evaluation emphasizes the positive,
formative aspects of evaluation, as described earlier. We believe that
evaluative inquiry can assist decision makers in making informed
decisions about an object such as a school improvement program or
project (Cousins & Earl, 1995) and this was the definition we pro-
vided to respondents in our research. Questions related to this con-
struct assess the degree to which teachers and administrators see
evidence of a wide range of evaluation activities in their schools and
have active participation in such activities.
3.3. Organizational Readiness for Change
An important factor that can influence the implementation of eval-
uative inquiry and organizational learning is organizational readiness
for change. This construct draws from the literature on change
management, which suggests that an organization with a culture that
supports openness and flexibility can influence the degree to which its
members are adaptable and generally open to new ideas and change,
and thereby able to benefit from evaluative inquiry. Authors such as
Preskill and Torres (1999) and Seiden (1999) have argued that
organizational readiness for change is a critical factor in under-
standing more about the ability of an institution to respond to and
learn from evaluative information. This construct measures the
degree of openness of educators to change: for example, whether there
is resistance to change or to the introduction of new ideas into
schools, or whether educators are generally receptive to innovations.
3.4. Attitudes Towards Evaluation Processes
This construct refers to a specific receptivity to evaluative activities.
We believe that the degree to which an organization and its members
view evaluation activities as positive or negative will also have an
influence on its capacity to learn. In other words, the educators’
opinions and attitudes about evaluation will influence their willing-
ness to become involved in evaluative activities, the extent to which
they will view the results as meaningful and relevant and therefore
will be interested in utilizing them within the school or classroom.
Support for this contention is available in the literature (Cousins,
1999; Cousins, Goh & Lee, 2003; Lysyk, 2000). The questions
measuring this construct ask teachers and administrators whether
they support evaluation processes, have the time to be involved in such
activities, and see them as having a positive effect on their schools.
3.5. School Support Structure
Organizational support structures also influence an organization’s
capacity to learn. Goh (1998) contends that the five organizational
building blocks mentioned above – mission and vision, leadership,
experimentation, transfer of knowledge, teamwork and cooperation –
are dependent upon two supporting foundations. The first, organiza-
tion design, refers to the fact that organizational learning is enhanced
when an organization has a flat non-hierarchical structure with few
formalized controls over employees’ work. The second, employee skills
and competencies, refers to the need for organizations to invest in
professional development by building team competencies through
shared learning experiences. Rather than focusing only on individual
job-based skills, learning organizations emphasize the development of
general behaviors such as critical thinking and collective problem
solving skills (Goh, 1998). Other organizational support structures
which enhance OLC include communication systems to improve knowledge
dissemination and appropriate recognition programs to reward and
motivate employees. Since these may be critical to enhancing learning
activities, we asked educators to indicate the degree to which they
perceive these support structures to be present in their schools.
The next two sections of this paper will present the methodology
and results of this survey study. This is a cross-sectional study, a
snapshot of the views and perceptions of educators in schools, in a
particular location, at a particular point in time. Using descriptive
statistics, we will focus our discussion on the questionnaire results for
each construct. Clearly there are potential relational issues between
the constructs that have been described. However, we will not focus
on relational issues in this paper; that is the focus of
a companion paper, where we use multi-level analyses to establish the
influences of evaluative inquiry on organizational learning as a
between-school phenomenon.
4. Method
4.1. Sampling Procedures
Our sample was selected from public schools in the province of Manitoba,
Canada. This site was chosen because of the province’s favorable attitude
towards evaluation and school improvement, as evidenced by the Manitoba
School Improvement Program (MSIP). A multi-year program, the MSIP’s
aim is to encourage teaching innovations in provincial schools; however,
one of the requirements to receive funding was the implementation of a self-
evaluation. Therefore, we knew that at least those schools involved in the
MSIP would be employing evaluative practices. To balance this sample
group of schools, we also included non-MSIP schools and ensured diversity
in geographic location (urban/rural) and school panel (secondary/middle)
similar to MSIP schools.
Initially, 80 secondary and middle schools in 22 school divisions (districts)
were identified and 41 agreed to participate (representing 15 school
divisions). Of those which declined, 31 requests were denied by principals on
behalf of their staff, usually citing competing time demands as the rationale.
In the case of the other 8 schools, senior administration denied permission at
the divisional level. To the 41 participating schools, we sent out 1900 ques-
tionnaires with a cover letter, which explained the purpose of the study and
indicated that participation was voluntary. Principals at each school
distributed and collected the questionnaires and returned them to us in the
respondents’ sealed envelopes.
Overall, we received 970 usable responses, representing a response rate of
just over 50%. The average number of questionnaires returned per school was
19, with a high of 51 and a low of 2. The response rate varied considerably
between schools, ranging from a high of 80% to a low of 9%. A comparative
analysis was performed to identify any difference between participating and
non-participating schools in variables such as enrollment, MSIP participa-
tion, location (urban/rural) or panel (middle/secondary), but none were
found to be significant.
4.2. Instrument
The instrument used for this study was a hybrid survey tool, combining
items from different questionnaires that have been shown to be reliable
and valid in previous research. The survey was organized in 7 sections, 2 of
which focused on basic demographic data/background information and 5
which measured the dimensions of the conceptual framework – OLC,
organizational readiness for evaluation, school support structures, attitude
towards evaluation processes, and evaluative inquiry. The first construct,
OLC, was measured with the 21-item Organizational Learning Survey (Goh
& Richards, 1997). Organizational readiness for evaluation (collaborative
culture, tolerance for change) was measured using items from instruments
developed by Seiden (1999) and Preskill and Torres (1999). Organizational
systems and support structures (work formalization, open and accessible
work environment, employee training and competencies, and rewards and
recognition systems and practices) were taken from Goh and Richards (1997)
and Preskill, Torres and Martinez-Papponi (1999). Evaluative inquiry
(intensity, frequency, and type) and receptivity to evaluation processes
(control, staff involvement, depth of participation) were measured using
items developed by Cousins et al. (1996) and Turnbull (1999). Each of the
studies from which items were drawn provides evidence of suitable reliability
and validity. To further confidence in data arising from the hybrid instru-
ment, we pilot tested it on a sample of 50 Master of Education students
(mostly school-based personnel) and refined it to ensure face validity and
internal consistency of scale measures.
4.3. Who Responded to the Survey?
Eighty-three percent of the respondents were teachers. Of the remaining
17%, the majority were educational/teaching assistants (12%), while 5%
were administrative staff (Principals/Vice-Principals). The split between
genders was almost equal, with 54.5% female and 45.5% male. In terms of
tenure, 50% of the respondents had more than 15 years of experience as an
educator. Only 3% were in their first year of teaching. When asked how long
they had been at their current school, we found a fairly wide distribution:
while the largest cluster was 1–5 years (36%), the next largest group was
15+ years (21%), and the smallest was 11–15 years (12%). The respondents’
background and experience with research methods and evaluative inquiry
were also of prime interest to us in this study, as we felt that this might
influence their receptiveness to evaluation. The majority had not completed
any formal training in research methods, either at a university, college or at
an in-service course (64%). Of the 35% who had completed such courses,
17% felt that they were highly valuable, 63% that they were moderately
valuable, and 5% that they were not.
5. Results
Descriptive statistics were calculated using SPSS. The results will be
discussed according to each construct measured by the survey.
5.1. Organizational Learning Capacity
The first section of the questionnaire employed Goh and Richards’
(1997) Organizational Learning Survey to assess the OLC of partic-
ipating schools. It consists of 21 statements to which respondents
were asked to indicate their level of agreement on a 4-point Likert
scale. These statements reflect the five dimensions attributed to
learning organizations: mission and vision, leadership, experimenta-
tion, transfer of knowledge, teamwork and cooperation. Table I
displays the survey results. Overall, there was a modest tendency for
schools to see themselves as having a well-developed OLC. While
there was some variability across schools on this measure, the average
school score was 2.8, which is only slightly lower than 3, ‘agree’.
The scores of separate items provided us with more insight into the
respondents’ perceptions of organizational learning attributes. First,
we examined the rank order of scores and how they related to the 5
OLC dimensions above. An interesting pattern emerged. Scores
related to knowledge transfer and experimentation were fairly evenly
split at either extreme, represented either at the far upper end (i.e.,
higher ranked scores) or lower end (lowest scores). In contrast, scores
related to mission and vision and leadership tended slightly more
towards the middle. Teamwork ratings were centered on the mid-
point. What does this tell us? First, there is not one single dimension
that is particularly dominant in terms of contribution to OLC;
therefore, we can conclude that schools have strengths and weak-
nesses across all of these areas. However, if we examine the state-
ments more closely we can see some common themes. The highest
scores are related to sharing new ideas or success stories, or being
encouraged to try out new work processes. For example, 85% of staff
agreed or strongly agreed that they can ‘‘often bring new ideas into
the school’’; and 82% agreed or strongly agreed that they ‘‘have an
opportunity’’ to share success stories with colleagues and are
encouraged by administrators to ‘‘experiment in order to improve
work processes’’. In contrast, the lowest average scores were related
to accepting criticism and discussing failures or mistakes. Only 41%
of respondents, for example, agreed that new staff were ‘‘encouraged
to question the way things are done’’; and only 57% thought that
‘‘failures are constructively discussed’’.

Table I. Perspectives on OLC

1. I have an opportunity to talk to the other staff about successful
   programs or work activities in order to understand why they succeed.
   n=955; SD 2.7%, D 15.1%, A 57.7%, SA 24.5%; mean 3.04 (0.71); rank 2
2. There is widespread support and acceptance of the school’s mission.
   n=908; SD 5.1%, D 20.5%, A 62.4%, SA 12.0%; mean 2.81 (0.70); rank 9
3. I can often bring new ideas into the school.
   n=953; SD 2.4%, D 13.1%, A 59.2%, SA 25.3%; mean 3.07 (0.69); rank 1
4. Failures are constructively discussed in our school.
   n=944; SD 9.2%, D 34.1%, A 49.7%, SA 7.0%; mean 2.54 (0.76); rank 20
5. Current school practice encourages staff to solve problems together
   before discussing them with a supervisor.
   n=949; SD 6.2%, D 24.3%, A 57.7%, SA 11.7%; mean 2.75 (0.74); rank 10
6. People who are new to the school are encouraged to question the way
   things are done.
   n=936; SD 13.5%, D 45.9%, A 34.8%, SA 5.8%; mean 2.33 (0.78); rank 21
7. Administrators accept change and are not afraid of new ideas.
   n=949; SD 6.1%, D 19.2%, A 54.8%, SA 19.9%; mean 2.89 (0.78); rank 4.5
8. Administrators encourage staff to experiment in order to improve
   work processes.
   n=947; SD 3.6%, D 14.6%, A 59.5%, SA 22.4%; mean 3.01 (0.72); rank 3
9. New work processes that may be useful to the school as a whole are
   usually shared with all staff.
   n=947; SD 3.6%, D 21.6%, A 59.9%, SA 14.9%; mean 2.86 (0.70); rank 6
10. Innovative ideas that work are often rewarded by administration.
    n=922; SD 6.8%, D 35.5%, A 47.6%, SA 10.1%; mean 2.61 (0.76); rank 17
11. Administrators and staff share a common vision of what our work
    should accomplish.
    n=946; SD 8.8%, D 26.7%, A 54.4%, SA 10.0%; mean 2.66 (0.78); rank 16
12. In my experience, new ideas from staff are treated seriously by the
    administration.
    n=944; SD 4.8%, D 19.6%, A 61.3%, SA 14.2%; mean 2.85 (0.71); rank 7
13. Administrators in this school frequently involve staff in important
    decisions.
    n=946; SD 8.6%, D 27.1%, A 48.3%, SA 16.1%; mean 2.72 (0.83); rank 13
14. We can usually form informal groups to solve school problems.
    n=942; SD 3.7%, D 28.2%, A 58.8%, SA 9.4%; mean 2.73 (0.68); rank 11.5
15. Administrators can accept criticism without becoming overly
    defensive.
    n=928; SD 10.9%, D 29.7%, A 53.0%, SA 8.8%; mean 2.60 (0.79); rank 18
16. Administrators have a system that allows us to learn successful
    practices from other schools.
    n=939; SD 7.0%, D 38.3%, A 47.2%, SA 7.3%; mean 2.55 (0.74); rank 19
17. Administrators often provide feedback that helps to identify
    potential problems and opportunities.
    n=941; SD 5.6%, D 27.0%, A 58.6%, SA 8.7%; mean 2.70 (0.71); rank 14
18. I understand how the school’s mission is to be achieved.
    n=890; SD 6.0%, D 27.8%, A 57.8%, SA 8.5%; mean 2.69 (0.71); rank 15
19. We have opportunities for self-assessment with respect to goal
    attainment.
    n=930; SD 2.9%, D 18.4%, A 65.8%, SA 12.9%; mean 2.89 (0.65); rank 4.5
20. The school’s mission statement identifies values to which all staff
    must conform.
    n=871; SD 3.3%, D 20.3%, A 65.6%, SA 10.8%; mean 2.84 (0.65); rank 8
21. Most problem-solving groups feature staff from a variety of
    functional areas or divisions.
    n=942; SD 4.4%, D 28.0%, A 58.0%, SA 9.7%; mean 2.73 (0.69); rank 11.5

SD=strongly disagree (1); D=disagree (2); A=agree (3); SA=strongly
agree (4); the standard deviation of each item is given in parentheses.
The ranking scale is the following: 1=highest agreement to statement
(mean score); 21=lowest agreement to statement.
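The tabulated means and standard deviations in Table I can be reproduced directly from the reported response distributions. The following is a minimal sketch of that arithmetic (the function name `likert_stats` is ours, not from the study; the authors used SPSS), shown for the top-ranked item, ‘‘I can often bring new ideas into the school’’:

```python
# Recompute the mean and standard deviation of a 4-point Likert item
# from its reported response-distribution percentages (SD, D, A, SA).

def likert_stats(pcts):
    """pcts: percentages for scale points 1..4; returns (mean, sd)."""
    total = sum(pcts)
    probs = [p / total for p in pcts]  # normalize, absorbing rounding error
    points = (1, 2, 3, 4)
    mean = sum(x * p for x, p in zip(points, probs))
    var = sum(p * (x - mean) ** 2 for x, p in zip(points, probs))
    return mean, var ** 0.5

# Top-ranked item: SD 2.4%, D 13.1%, A 59.2%, SA 25.3%
mean, sd = likert_stats([2.4, 13.1, 59.2, 25.3])
print(round(mean, 2), round(sd, 2))  # matches the tabled 3.07 and 0.69
```

The same check applies to the five-point frequency items of Table II by extending the scale points to (1, 2, 3, 4, 5).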
These results seem to indicate that while some experimentation
and knowledge sharing is encouraged, only the ‘‘good news’’ is
shared. There appears to be some fear around questioning the status
quo, making mistakes and taking risks with innovative ideas. In
general, these results paint a picture of the school as a somewhat
insular community, one that will accept new ideas or positive changes
that are proposed by accepted members, but does not necessarily seek
novel ideas, best practices, or constructive criticism, particularly from
‘‘outsiders’’ to the school.
5.2. Evaluative Inquiry
This section of the questionnaire elicited educators’ opinions about
the degree, type, and frequency of evaluation activities taking place in
their school in the ‘‘last two years’’. Educators responded to 26
statements by indicating how frequently each activity occurred. On average,
the respondents reported that a moderately low amount of evaluation
activity is taking place in the school. The average school score was
between 2 ‘rarely’ and 3 ‘sometimes’ (2.84) and there was little vari-
ation across schools on this measure (SD=0.33). When the respon-
ses, summarized in Table II, are examined in more detail it becomes
evident that the highest ratings were given to general statements
about administrator support for evaluation, the perceived benefits of
evaluation, and the fact that basic data collection and analysis steps
of the evaluative process are performed. This indicates that while
administrators and teachers believe in the concept of evaluations and
‘sometimes’ do collect and analyze data, they do not typically inter-
pret the information once it has been collected – to promote more
meaningful discussions, action planning or decision-making. Their
efforts tend to remain at the information-processing stage.
In terms of the participants’ degree of participation, it appears to
be relatively low, particularly in certain stages of the evaluation
process. While respondents did report being involved in data
collection activities, they indicated that they ‘rarely’ or ‘seldom’
influenced ‘‘technical’’ design decisions (i.e., how the evaluation is
carried out, how data are collected, what criteria are used). And two
of the lowest-scoring items indicate that educators are not involved
in the interpretation-of-results stage. Furthermore, they reported
that evaluation results are not displayed in interesting visual ways
(e.g., using charts or graphs), which could assist in highlighting
emergent patterns or trends. There appears to be little knowledge
sharing as well. The participants reported that evaluation results
are ‘rarely’ or ‘seldom’ shared. Similarly, during the data collection
phase, educators did not search elsewhere for data collection tools
which could be leveraged (and modified) for their own needs. Even
within the school, existing data sources were not exhausted (e.g.,
files and records).

Table II. Evaluative inquiry: views on activities and practices

1. Administrators encourage us to evaluate our
   n=940; N 3.6%, R 12.6%, S 39.6%, F 34.8%, A 9.5%; mean 3.34 (0.94); rank 1
2. Staff are allowed the time to be involved in evaluation activities.
   n=937; N 6.2%, R 30.3%, S 40.3%, F 18.8%, A 4.4%; mean 2.85 (0.94); rank 11
3. Planning meetings are held to decide on evaluation activities.
   n=936; N 10.9%, R 32.6%, S 35.8%, F 17.1%, A 3.6%; mean 2.70 (0.99); rank 21.5
4. Staff influence how evaluation activities are carried out.
   n=931; N 10.4%, R 28.8%, S 37.6%, F 18.8%, A 4.4%; mean 2.78 (1.01); rank 16
5. Staff influence decisions about what criteria are used to evaluate
   programs.
   n=929; N 10.7%, R 29.2%, S 38.9%, F 17.7%, A 3.7%; mean 2.74 (0.99); rank 18
6. There are discussions among staff about what information would be
   useful to collect.
   n=932; N 8.2%, R 29.4%, S 40.0%, F 19.5%, A 2.9%; mean 2.80 (0.94); rank 13
7. Staff influence decisions about how the evaluation evidence is
   gathered.
   n=926; N 10.3%, R 30.6%, S 41.1%, F 14.5%, A 3.6%; mean 2.71 (0.96); rank 20
8. Existing files are reviewed for information that is relevant for
   decision making.
   n=882; N 10.0%, R 30.2%, S 42.0%, F 15.4%, A 2.5%; mean 2.70 (0.93); rank 21.5
9. Decisions are made about from whom to collect information.
   n=886; N 8.1%, R 25.3%, S 44.0%, F 19.9%, A 2.7%; mean 2.84 (0.93); rank 12
10. Decisions are made about how to collect information.
    n=882; N 7.5%, R 25.5%, S 43.9%, F 20.1%, A 3.1%; mean 2.86 (0.93); rank 10
11. Existing questionnaires or data collection tools are examined for
    local use.
    n=873; N 11.2%, R 27.6%, S 41.5%, F 16.8%, A 2.9%; mean 2.73 (0.97); rank 19
12. Questionnaires or data collection tools are developed locally.
    n=866; N 11.1%, R 24.2%, S 43.8%, F 18.9%, A 2.0%; mean 2.76 (0.95); rank 17
13. Searches are conducted for projects where data collection tools
    were used.
    n=849; N 13.9%, R 33.8%, S 38.5%, F 11.7%, A 2.1%; mean 2.54 (0.94); rank 25
14. Information is gathered from students for evaluation purposes.
    n=898; N 7.1%, R 22.5%, S 43.7%, F 21.9%, A 4.8%; mean 2.95 (0.96); rank 6
15. Information is gathered from parents for evaluation purposes.
    n=896; N 8.7%, R 25.3%, S 44.5%, F 18.6%, A 2.8%; mean 2.81 (0.93); rank 14.5
16. Information is gathered from staff for evaluation purposes.
    n=899; N 4.9%, R 19.0%, S 48.2%, F 23.0%, A 4.9%; mean 3.04 (0.90); rank 5
17. Collected information is processed.
    n=876; N 4.8%, R 15.3%, S 40.6%, F 29.9%, A 9.4%; mean 3.24 (0.98); rank 2
18. School staff ensure the accuracy of collected information.
    n=873; N 8.9%, R 24.2%, S 39.2%, F 22.6%, A 5.2%; mean 2.91 (1.01); rank 7
19. Information collected for evaluation is analyzed.
    n=873; N 5.4%, R 17.9%, S 40.5%, F 28.6%, A 7.6%; mean 3.15 (0.98); rank 3.5
20. Tables and graphs are constructed to show patterns in analyzed
    information.
    n=882; N 20.2%, R 30.7%, S 31.9%, F 13.9%, A 3.4%; mean 2.50 (1.07); rank 26
21. Staff interpret analyzed evaluation information.
    n=877; N 13.7%, R 28.4%, S 37.9%, F 16.3%, A 3.8%; mean 2.68 (1.02); rank 23
22. Evaluation reports are compiled.
    n=874; N 10.4%, R 24.1%, S 39.1%, F 20.7%, A 5.6%; mean 2.87 (1.04); rank 8.5
23. Presentations about evaluation results are made.
    n=881; N 11.6%, R 25.3%, S 37.8%, F 21.6%, A 3.7%; mean 2.81 (1.02); rank 14.5
24. Staff meetings are held to discuss evaluation results.
    n=893; N 10.2%, R 23.3%, S 40.0%, F 22.4%, A 4.1%; mean 2.87 (1.01); rank 8.5
25. Evaluation results are shared outside of the school community.
    n=861; N 15.3%, R 31.1%, S 36.1%, F 14.3%, A 3.1%; mean 2.59 (1.01); rank 24
26. Evaluation helps us provide better programs, processes, products,
    and services.
    n=883; N 6.9%, R 15.4%, S 42.7%, F 25.5%, A 9.5%; mean 3.15 (1.02); rank 3.5

N=never (1); R=rarely (2); S=sometimes (3); F=frequently (4);
A=always (5); the standard deviation of each item is given in parentheses.
The ranking scale is the following: 1=highest mean frequency;
26=lowest mean frequency.
In summary, it appears that a moderately low level of evaluation
activity is occurring at schools. What is occurring appears to be fairly
limited in design. Data are mainly collected from staff (as opposed to
students or parents); they may be analyzed and/or interpreted by an
external entity (or possibly administrative staff) and presented in
standard reports which are usually discussed at staff meetings. It
appears that there could be room for increased educator involvement
in terms of the depth of participation in the upfront design as well as
the interpretation of results. In addition, there are alternative data
sources which could be tapped into, some of which are simple to find
and easy to implement. Finally, educators could foster linkages to the
external community in order to share common practices and identify
innovative approaches to conducting evaluations.
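Several of the rank values in the tables are fractional (e.g., 3.5, 8.5, 14.5), which reflects the standard convention for ties: items with the same mean score receive the average of the rank positions they would otherwise occupy. A minimal sketch of that convention (the four scores below are a small illustrative subset, not the full survey data):

```python
def rank_with_ties(scores):
    """Rank scores from highest (rank 1) to lowest, averaging the
    positions of tied scores, as in the tables' Rank column."""
    ordered = sorted(scores, reverse=True)
    rank_of = {}
    for s in set(scores):
        # 1-based positions this score occupies in the descending order.
        positions = [i + 1 for i, v in enumerate(ordered) if v == s]
        rank_of[s] = sum(positions) / len(positions)
    return [rank_of[s] for s in scores]

# Two items tied on a mean of 2.87 share positions 2 and 3,
# so each receives the averaged rank 2.5.
print(rank_with_ties([3.24, 2.87, 2.87, 2.50]))  # [1.0, 2.5, 2.5, 4.0]
```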
5.3. Organizational Readiness for Change
To what extent do schools exhibit a culture of change? Are schools
adaptable and open to new ideas such as integrating evaluative
inquiry into their practices? The 17 questions in this section of the
questionnaire attempted to address these attributes by measuring the
organization’s degree of readiness. Looking first at the overall mean
rating of 3.4 (midway between 3, ‘sometimes’, and 4, ‘frequently’), it
appears that educators see schools as being ‘‘moderately’’ ready for
change. That is, on average, schools exhibit most of the character-
istics of a culture of change on a moderately consistent basis. How-
ever, the scores for individual responses indicate that some
organizational attributes are more prevalent than others. As shown in
Table III, the highest scores indicate that, in general, school staff
work collaboratively with one another in a spirit of cooperation and
mutual respect seeking out each other’s opinions about work-related
matters. On the other hand, they perceive that bureaucracy hinders
the school’s ability to bring about change. This score is significantly
lower than the others, with 43% of respondents saying that ‘never’ or
‘rarely’ does the ‘‘degree of bureaucracy’’ make it easy to implement
TABLE III
Views on school readiness for change

Item | N | N (%) | R (%) | S (%) | F (%) | A (%) | Mean | SD | Rank
Staff respect each other’s perspectives and opinions | 949 | 0.7 | 3.5 | 31.3 | 54.1 | 10.4 | 3.70 | 0.73 | 5
Staff ask each other for information about work issues and activities | 948 | 0 | 3.0 | 26.4 | 56.5 | 14.1 | 3.82 | 0.70 | 1
Staff look for ways to improve processes, products, and services | 942 | 0.5 | 4.0 | 29.1 | 52.5 | 13.8 | 3.75 | 0.76 | 3
Staff are provided opportunities to think about and reflect on their work | 941 | 2.2 | 18.0 | 40.8 | 30.7 | 8.3 | 3.25 | 0.92 | 12
Staff stop to talk about the pressing work issues we are facing | 942 | 1.7 | 8.2 | 39.8 | 39.8 | 10.5 | 3.49 | 0.85 | 6
When trying to solve problems, staff use a process of working through the problem before identifying solutions | 925 | 2.1 | 16.5 | 49.9 | 26.7 | 4.8 | 3.16 | 0.83 | 15
Staff do not compete for recognition or rewards | 918 | 5.9 | 21.8 | 27.6 | 28.4 | 16.3 | 3.28 | 1.15 | 11
Staff operate from a spirit of cooperation, rather than competition | 942 | 1.7 | 7.1 | 24.0 | 48.1 | 19.1 | 3.76 | 0.90 | 2
Staff tend to work collaboratively with each other | 941 | 0.6 | 4.8 | 28.4 | 52.5 | 13.7 | 3.74 | 0.78 | 4
Staff are more concerned about how their work contributes to the success of the school than they are about their individual success | 944 | 1.5 | 10.7 | 36.5 | 40.7 | 10.6 | 3.48 | 0.88 | 7
Staff face conflict over work issues in productive ways | 940 | 0.9 | 9.4 | 51.6 | 32.4 | 5.7 | 3.33 | 0.76 | 10
Staff view problems or issues as opportunities to learn | 934 | 1.7 | 15.0 | 49.9 | 28.2 | 5.2 | 3.20 | 0.82 | 14
The degree of bureaucracy makes it easy to bring about school change | 938 | 10.4 | 32.2 | 35.0 | 18.1 | 4.3 | 2.74 | 1.01 | 17
When someone tries to do something different, s/he is met with support | 933 | 1.8 | 11.8 | 43.5 | 36.4 | 6.4 | 3.34 | 0.84 | 9
Administration does not oppose needed school changes | 930 | 2.2 | 13.4 | 33.9 | 38.1 | 12.5 | 3.45 | 0.95 | 8
Administration has an open mind toward any negative data about the school | 938 | 6.0 | 17.0 | 34.5 | 33.6 | 9.0 | 3.23 | 1.03 | 13
Staff members do not resist change | 943 | 3.0 | 19.5 | 53.7 | 20.6 | 3.3 | 3.02 | 0.81 | 16
N=never; R=rarely; S=sometimes; F=frequently; A=always.
The ranking scale is the following: 1=highest agreement to statement (mean score); 17=lowest agreement to statement.
change. Other low scores are in the area of problem solving (using an
effective process and having a positive attitude) and general staff
resistance to change.
These results suggest that school staff generally work well together
on a daily basis to solve ongoing classroom issues and make incre-
mental school improvements. Nonetheless, there is a perception that
larger school change is hampered by a high degree of bureaucracy. As
well, staff is ‘sometimes’ resistant to change and the administration
and staff are not always open-minded towards new ideas. These
responses indicate that the school culture may be only moderately
conducive to change, including the adoption of systematic inquiry.
One challenge in particular may be the interpretation of evaluation
results – building a process for joint problem solving, avoiding
defensive routines,
and facilitating team learning.
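For readers who wish to check the tabulated statistics, each item’s mean and standard deviation can be recovered (to rounding) from its published response distribution, treating the percentages as weights on the scale values 1 (‘never’) through 5 (‘always’). A quick sketch using the first row of Table III; note this reconstruction uses the population form of the SD, which with N near 950 differs negligibly from the sample form:

```python
# Response distribution (%) for "Staff respect each other's
# perspectives and opinions" (Table III): N, R, S, F, A.
pct = [0.7, 3.5, 31.3, 54.1, 10.4]
values = [1, 2, 3, 4, 5]  # never=1 ... always=5

# Percentage-weighted mean and (population) standard deviation.
mean = sum(v * p for v, p in zip(values, pct)) / 100
variance = sum(v**2 * p for v, p in zip(values, pct)) / 100 - mean**2
sd = variance**0.5

print(round(mean, 2), round(sd, 2))  # matches the reported 3.70 and 0.73
```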
5.4. School Support Structures
Participants’ opinions about school support structures are presented
in Table IV. In this part of the survey, respondents were asked to rate
their organization with respect to organizational support mechanisms
such as communication, training and professional development,
knowledge sharing, rewards and recognition, and flexibility of orga-
nization design. Viewed in totality, the average response to this sec-
tion of the survey was not overwhelmingly positive, but indicates that
staff perceive support structures to be present to some extent in most
schools. Schools scored, on average, between 3, ‘sometimes’, and 4,
‘frequently’, in terms of staff perceptions of the degree to which the
organization has support structures in place to enhance OLC.
Respondents gave the highest marks to skills training and educa-
tion, reporting that school staff is encouraged to continuously
develop work-related skills. For example, 57% scored the following
statements as ‘frequently’ or ‘always’: learning ‘‘that increases my
work skills is encouraged’’ and school staff are ‘‘encouraged to
continuously upgrade and increase their knowledge and education
level’’. Worthy of note, however, is that the actual provision of
training was rated less favorably, as was support for team learning
and sharing. Similarly, average scores for recognition of team per-
formance were the lowest overall, with an average score of ‘some-
times’ (M=2.98). (65% of respondents reported that ‘‘the current
performance appraisal system recognizes, in some way, team learning
TABLE IV
Perceptions of school support for learning and skill development

Item | N | N (%) | R (%) | S (%) | F (%) | A (%) | Mean | SD | Rank
There is no excess of bureaucratic ‘red tape’ when trying to do something new or different | 903 | 4.4 | 23.3 | 45.1 | 21.5 | 5.8 | 3.01 | 0.93 | 15
Workspaces are designed to allow for easy communication with each other | 940 | 7.0 | 24.8 | 35.3 | 27.1 | 5.7 | 3.00 | 1.02 | 16
There are no ‘boundaries’ between departments/units that keep staff from working together | 890 | 4.8 | 25.2 | 34.5 | 25.4 | 10.1 | 3.11 | 1.05 | 14
Staff are available (i.e., not out of the school or otherwise too busy) to participate in meetings | 943 | 1.8 | 13.7 | 31.1 | 43.2 | 10.3 | 3.46 | 0.92 | 6
Staff are recognized for learning new knowledge and skills | 941 | 2.6 | 17.3 | 39.1 | 33.8 | 7.2 | 3.26 | 0.92 | 12
Staff are recognized for helping solve school problems | 940 | 1.9 | 15.6 | 41.7 | 33.3 | 7.4 | 3.29 | 0.88 | 9.5
The current performance appraisal system recognizes, in some way, team learning and performance | 871 | 7.7 | 22.4 | 38.5 | 27.0 | 4.5 | 2.98 | 0.99 | 17
Staff are recognized for experimenting with new ideas | 942 | 3.3 | 17.6 | 44.2 | 29.6 | 5.3 | 3.16 | 0.89 | 13
Staff in this school are provided with work-related skill training | 942 | 1.8 | 15.9 | 38.5 | 35.1 | 8.6 | 3.33 | 0.91 | 8
The skill training I receive can be applied to improve my work | 934 | 1.9 | 11.9 | 33.4 | 40.8 | 12.0 | 3.49 | 0.92 | 5
Staff in this school are encouraged to continuously upgrade and increase their knowledge and education level | 944 | 2.0 | 12.5 | 28.1 | 39.6 | 17.8 | 3.59 | 0.99 | 3
We support the development of skills such as leadership, coaching, and teambuilding among staff | 946 | 1.7 | 9.8 | 30.5 | 42.8 | 15.1 | 3.60 | 0.92 | 2
Learning that increases my work skills and knowledge is encouraged | 942 | 1.4 | 9.8 | 32.1 | 37.7 | 19.1 | 3.63 | 0.95 | 1
Staff training is emphasized equally at all levels | 933 | 5.4 | 17.1 | 34.7 | 28.4 | 14.4 | 3.29 | 1.08 | 9.5
Training is done in teams where … | 937 | 2.0 | 12.8 | 39.8 | 35.8 | 9.6 | 3.38 | 0.89 | 7
Training is relevant to my work | 936 | 1.8 | 9.5 | 37.7 | 34.1 | 16.9 | 3.55 | 0.94 | 4
I have opportunities to share my knowledge/skills learned with other … | 945 | 2.9 | 16.7 | 41.5 | 28.8 | 10.2 | 3.27 | 0.95 | 11
N=never; R=rarely; S=sometimes; F=frequently; A=always.
The ranking scale is the following: 1=highest agreement to statement (mean score); 17=lowest agreement to statement.
and performance’’ either ‘never’, ‘rarely’, or ‘sometimes’.) Although
this result is not surprising, given the somewhat individualistic nature
of public school teaching, (separate classroom, infrequent contact
with other teachers, departmental structure), it is counter to much of
the literature on organizational learning, which emphasizes the
importance of teamwork and collaboration (Cousins, 1996; Senge,
1990). One of Senge’s five disciplines, for example, is team learning,
which he feels is a cornerstone of organizational learning (1990).
A related point is the nature of the workplace. Educators indicated
that workspaces are not designed ‘‘to allow for easy communication
with each other’’. This serves to reinforce the independent nature of
teaching rather than facilitating collaboration, knowledge sharing
and organizational learning. Bureaucratic ‘red tape’ was also noted as
an inhibitor to innovative ideas; and staff is only ‘‘sometimes’’ rec-
ognized for ‘‘experimenting with new ideas’’. When viewed together,
it appears that there is an opportunity for school staff to look for new
ways to facilitate communication across departmental boundaries –
despite restrictive physical spaces – by diminishing unnecessary
bureaucratic procedures and thinking about creative means of
encouraging and rewarding risk-taking and innovative behaviors.
5.5. Attitudes towards Evaluation
Educators’ viewpoints about school-based evaluation activities were
captured in this last section of the questionnaire using a four-point
Likert scale. Respondents indicated the extent to which they agreed
with 13 statements about educational evaluation – the process, potential
benefits, and utility. When the overall score was tabulated, the
average agreement rating was only slightly above the mid-point of the
agree-disagree scale (M=2.7), with little variation between schools.
This result indicates that school staff is somewhat indifferent to
evaluation activities being performed in their schools. However, when
individual items were rank ordered, the top three scores were all
associated with perceived benefits: 87.1% of participants agreed (or
strongly agreed) that their programs ‘‘could use evaluation to learn
how to be even more effective’’; 86.5% felt that an evaluation would
provide new information; and 84.4% agreed that evaluation would
‘‘pave the way for better teaching and learning’’ for students (see
Table V). The educators appeared to be slightly less interested in the
possible political benefits of an evaluation or in enhancing the quality
of decision-making (ranked 4th and 5th, with 75% agreement).
In contrast, the lowest level of agreement was associated with the
realities of conducting an evaluation. Sixty-two percent of participants
did not agree (disagreed or strongly disagreed) that the ‘‘arguments
for conducting an evaluation are clear’’. Furthermore, they did not
believe that they are allocated the time to be involved in evaluation
activities (58.1%); neither did they feel that there would be support
among staff to participate in evaluation work (54.8%). Taken toge-
ther, these results explain the somewhat indifferent (even ambivalent)
opinion expressed by school staff towards evaluative inquiry.
Although they realize some of the potential advantages – improved
programs, more information and possibly even improved classroom
practice – they remain unconvinced that they have the time to actually
participate in an evaluation unless they see tangible proof. Essentially,
they appear to be asking the practical question: ‘‘What’s in it for me?’’
6. Discussion and Conclusions
Data from the present survey need to be interpreted with a number of
caveats in mind. First, although a definition of evaluation was pro-
vided for the respondents, there is a wide range of evaluation meth-
odologies which can be employed and with which the participants
may be familiar. Therefore, when responding to questions about
evaluation – the activities currently taking place in their schools, their
receptivity to evaluation, and experience with/attitude towards eval-
uation – educators’ responses may be colored according to their own
personal ‘mental map’ of what evaluation means to them.
Second, there may be sample bias associated with those respon-
dents who did not agree to participate in the survey (or for whom the
administration declined on their behalf). This could cause the sample
participants to be slightly more positively disposed towards evalua-
tive inquiry and/or feel more supported in participating in research.
Third, in the present research, we have drawn some general
conclusions from the data and identified some opportunities for
improvement or further investigation. However, in order to answer
the important questions about how to build an organizational
learning and/or evaluative capacity, controlled intervention studies
would be needed.
Despite these cautionary notes, we believe that the results of this
study warrant serious consideration. To summarize some of the main
TABLE V
Attitudes towards Evaluation Practice in Schools

Item | N | SD (%) | D (%) | A (%) | SA (%) | Mean | SD | Rank
Evaluation would pave the way for better teaching and learning for our students | 952 | 1.5 | 14.1 | 64.7 | 19.7 | 3.03 | 0.63 | 2
An evaluation would give us new information | 951 | 1.1 | 12.5 | 71.1 | 15.4 | 3.01 | 0.57 | 3
Our programs could use evaluation to learn how to be even more effective | 946 | 0.1 | 12.8 | 68.4 | 18.7 | 3.06 | 0.56 | 1
Implementing an evaluation would enhance our stature as a … | 933 | 3.6 | 31.3 | 51.7 | 13.4 | 2.75 | 0.73 | 6
An evaluation would make it easier to convince administration of needed changes | 933 | 2.3 | 22.7 | 58.7 | 16.3 | 2.89 | 0.69 | 4
The integration of evaluation activities into our work would enhance the quality of decision making | 941 | 1.2 | 23.7 | 64.7 | 10.4 | 2.84 | 0.60 | 5
It would be worthwhile to integrate evaluation activities into our daily work practices | 935 | 3.4 | 29.5 | 57.6 | 9.4 | 2.73 | 0.67 | 7
There would be support among staff if we tried to do evaluation | 936 | 7.4 | 47.4 | 41.6 | 3.6 | 2.41 | 0.68 | 11
This would be a good time to begin efforts to conduct evaluations | 923 | 4.7 | 35.8 | 52.9 | 6.6 | 2.62 | 0.69 | 8
There are evaluation processes in place that enable staff to review how well changes we make are working | 934 | 6.7 | 46.8 | 43.1 | 3.3 | 2.43 | 0.67 | 10
We are allowed the time to be involved in evaluation activities | 937 | 12.1 | 46.0 | 39.1 | 2.9 | 2.33 | 0.72 | 12
The arguments for conducting an evaluation are clear to staff members | 934 | 9.5 | 52.1 | 36.4 | 1.9 | 2.31 | 0.67 | 13
We have the expertise to conduct an evaluation | 935 | 8.0 | 36.7 | 51.0 | 4.3 | 2.52 | 0.70 | 9
SD=strongly disagree; D=disagree; A=agree; SA=strongly agree.
The ranking scale is the following: 1=highest agreement to statement (mean score); 13=lowest agreement to statement.
findings, let us first turn to the two main constructs, evaluative
inquiry and organizational learning. In terms of evaluative inquiry, it
appears that schools have, on average, a fairly low level of evaluation
activity taking place. Evaluations that are taking place seem to be
somewhat limited in terms of the involvement of educators in the
initial design decisions and the interpretation of results. For example,
the lower scoring items indicate a pattern of not analyzing evaluation
data in great detail and not interpreting the information they have.
The implication may be that schools are seen as encouraging evalu-
ation activities but it is limited to gathering data from teachers and
students, who do not have a great deal of influence or participation in
the process.
This result is similar to OLC scores. There was a modest tendency
for schools to see themselves as having a well-developed OLC (the
average school score was 2.8, slightly lower than 3, ‘agree’). The
highest ranking scores indicate that while educators are encouraged
to explore new ideas and share knowledge with their colleagues,
constructive criticism and questioning ‘‘the way we do things around
here’’ or discussing failures is not supported. Taken together, these
results suggest that the schools in our sample, on average, have a
moderate to low level of OLC and evaluative inquiry activities.
The participants’ responses to questions about the other con-
structs also provide some interesting insights. With respect to the
schools’ readiness for change, respondents indicated that school
culture is moderately conducive to change, although there is a rela-
tively high degree of bureaucracy and staff is ‘‘sometimes’’ resistant to
new ideas. Interestingly, the average score for school support struc-
tures is almost identical (3.4 for organizational readiness and 3.3 for
support structures, using the same scoring scale).
This indicates that organizational support structures are present to
some extent across the 41 schools. Whereas participants gave the
highest marks to support for skills training and education, the least
favorable scores included support and/or recognition for team
learning, collaboration and horizontal communication. Bureaucracy
was also mentioned again as ‘‘sometimes’’ a barrier to experimenta-
tion. When respondents were questioned about their attitudes
towards evaluation (receptivity to evaluative inquiry), they appeared
to be somewhat indifferent. In this section of the questionnaire, once
again, the results were ‘‘moderate’’. While participants ‘agreed’ with
some of the potential benefits (increased program effectiveness, new
information, improved student learning), they seemed skeptical of the
practical realities (e.g., time available, clear arguments to support it).
In general, these results indicate that the adoption of evaluative
inquiry and the presence of organizational learning attributes in these
Manitoba schools are not strong. On the other hand, these findings
present significant opportunities for development. We believe that,
like Fullan (1995), schools need to be recultured to become learning
organizations and that evaluators, teachers and administrators can
play an active role as ‘‘change agents’’. Preskill and Torres (1999)
suggest that a learning approach to evaluative inquiry ‘‘blends
organization development with evaluation, and evaluation with
program work’’ (p. 393). Some effective change interventions can
include increasing teacher and student participation in deciding on
evaluation projects (including how the data is to be collected), more
sophisticated analysis of the data, and feedback and action based on
the results. Mechanisms can also be introduced to reduce the level of
defensiveness that limits learning capacity, such as discussing
failures in a ‘‘safe’’ environment and admitting to mistakes,
but using that experience to encourage a search for new behaviors
and innovative solutions. Our data show that many teachers and
administrators can improve learning capacity if there are clear
rewards and recognition for innovation, trying new things and
working together. Time needs to be set aside for reflection about
work issues and concerns. In addition, there has to be a culture in
schools that is not resistant but open to change.
As this is a cross-sectional study, a natural follow-up would be a
more in-depth participatory action research study, conducted (pref-
erably at one or more of these Manitoba schools), to further explore
what actions and practices can result in building a capacity for
evaluative inquiry and organizational learning in schools. This would
be a longer term, in-depth intervention study, whereby the research
team would work with school staff in a collaborative manner. Some
suggested steps could include: (1) a collective diagnosis stage whereby
researchers and educators assess the key areas for attention; (2) col-
laborative inquiry to determine the intervention strategies; (3)
implementation of prioritized action steps; (4) continual dialogue and
reflection to evaluate progress, determine next steps, and identify
improvements; and (5) a strategy for ensuring that local capacity has
been built. For this to succeed, there is a need to foster an environ-
ment of trust and collaboration, where true dialogue and double-loop
learning can occur, risk-taking is rewarded, and educators have the
courage to challenge the status quo. Commitment at the senior levels
(within the school, division, and even provincial levels) is also critical,
to ensure that the appropriate time and resources are allocated. It is
our hope that a participative intervention of this type would meet a
number of goals simultaneously: build organizational learning
capability, increase evaluative practice, and enable us, as researchers,
to participate in, document and learn from the process itself.
Notes

We will use the terms ‘‘learning organization’’ and ‘‘organizational learning’’
interchangeably, as the former describes the characteristics of a particular type of
organization (the ‘‘what’’) and the latter describes the process whereby an
organization becomes a learning organization (the ‘‘how’’). Therefore, if an
organization is good at organizational learning, then it is a learning organization.
Defensive routines refer to those habits of behavior that we all adopt to protect
ourselves from the embarrassment and threat that comes with exposing our
thinking (Senge, 1990). Drawing heavily from Argyris (1985), Senge (1990)
describes how ‘defensive routines’ can severely limit team learning by covering up
the underlying assumptions and mental models that guide our behavior. We are
thereby limited to single-loop learning, which can only be incremental in nature. In
order to promote true innovation and change (double-loop learning), we need to
challenge our underlying paradigms and develop a deeper level of communication,
called ‘dialogue’. According to Senge (1990), these are some of the key building
blocks of organizational learning.
Acknowledgements

Research for this paper was supported by a grant (#410-2001-0653)
from the Social Sciences and Humanities Research Council of Canada.
References

Argyris, C. (1985). Strategy, Change, and Defensive Routines. Boston: Pitman.
Conner, D. (1992). Managing at the Speed of Change. NY: Random House.
Cousins, J.B. (1996). Understanding organizational learning for educational
leadership and school reform. In K.A. Leithwood (ed), International Handbook
of Educational Leadership and Administration. (pp. 575–640). Boston: Kluwer.
Cousins, J.B. (1999). Organizational consequences of participatory evaluation. In
K.A. Leithwood & K.S. Louis (eds), Communities of Learning and Learning Schools:
New Directions for School Reform. (pp. 127–142). Amsterdam: Swets & Zeitlinger.
Cousins, J.B., Donohue, J.J. & Bloom, G.A. (1996). Collaborative evaluation in
North America: Evaluators’ self-reported opinions, practices and consequences.
Evaluation Practice 17(3), 207–226.
Cousins, J.B. & Earl, L.M. (1992). The case for participatory evaluation. Educational
Evaluation and Policy Analysis 14(4), 397–418.
Cousins, J.B. & Earl, L.M. (eds) (1995). Participatory Evaluation in Education:
Studies in Evaluation Use and Organizational Learning. London: Falmer Press.
Cousins, J.B., Goh, S., Clark, S. & Lee, L. (2004). Integrating evaluative inquiry into
the organizational culture: a review and synthesis of the knowledge base. Canadian
Journal of Program Evaluation 19(2), 99–141.
Cousins, J.B., Goh, S. & Lee, L. (2003). The Learning Capacity of Schools as a
Function of Evaluative Inquiry: An Empirical Study, Paper presented at ‘The
Learning Conference 2003: What Learning Means’, Institute of Education,
University of London. (Currently under peer review for publication).
Forss, K., Cracknell, B. & Samset, K. (1994). Can evaluation help organization to
learn? Evaluation Review 18(5), 574–591.
Fullan, M. (1995). The school as a learning organization: distant dreams. Theory
Into Practice 34(4), 230–235.
Fullan, M. (2001). Leading in a Culture of Change. San Francisco, CA: Jossey-Bass.
Garvin, D.A. (1993). Building a learning organization. Harvard Business Review
July–August, 78–91.
Goh, S. & Richards, G. (1997). Benchmarking the learning capability of
organizations. European Management Journal 15(5), 575–583.
Goh, S. (1998). Towards a learning organization: the strategic building blocks. SAM
Advanced Management Journal 63(2), 15–22.
Leithwood, K., Aitken, R. & Jantzi, D. (2001). Making Schools Smarter: A System
for Monitoring School and District Progress (2nd edition). Thousand Oaks: Sage.
Leithwood, K., Leonard, L. & Sharratt, L. (1998). Conditions fostering organiza-
tional learning in schools. Educational Administration Quarterly 34(2), 243–276.
Leithwood, K.A. & Louis, K.S. (eds) (1999). Communities of Learning and Learning
Schools: New Directions for School Reform. Amsterdam: Swets & Zeitlinger.
Lysyk, M. (2000). Organizational consequences of evaluation as a function of strategic
planning. Unpublished Master of Arts (Education) thesis. Ottawa: University of Ottawa.
Marks, H. & Printy, S.M. (2003). Organizational learning in high stakes account-
ability environments: lessons from an urban school district. In W. Hoy & C.
Miskel (eds), Theory and Research in Educational Administration (pp. 1–39).
Greenwich, CT: Information Age Publishing.
Nevo, D. (1983). The conceptualization of educational evaluation: an analytical
review of the literature. Review of Educational Research 53(1), 117–128.
Owen, J.M. & Lambert, F.C. (1995). Roles for evaluation in learning organizations.
Evaluation 1(2), 237–250.
Patton, M.Q. (1998). Discovering process use. Evaluation 4(2), 225–233.
Patton, M.Q. (1997). Utilization-focused Evaluation: The New Century Text (3rd
edition). Newbury Park, CA: Sage.
Preskill, H. (1994). Evaluation’s role in enhancing organizational learning. Evalu-
ation and Program Planning 17(3), 291–297.
Preskill, H. & Torres, R. (1999). Evaluative Inquiry for Organizational Learning.
Thousand Oaks, CA: Sage.
Preskill, H., Torres, R. & Martinez-Papponi, B. (1999). Assessing an Organization’s
Readiness for Learning from Evaluative Inquiry. Paper presented at the American
Evaluation Association Annual Conference, Orlando, Florida, November 1999.
Scribner, J., Cockrell, K., Cockrell, D. & Valentine, J. (1999). Creating professional
communities in schools through organizational learning: an evaluation of a school
improvement process. Educational Administration Quarterly 35(1), 130–160.
Seiden, K. (1999). Organizational Readiness for Evaluation Survey Instrument: A
Preliminary Factor Analysis. Paper presented at the annual meeting of the
American Evaluation Association, Orlando, Florida, November 1999.
Senge, P.M. (1990). The Fifth Discipline: The Art and Practice of the Learning
Organization. New York: Doubleday.
Silins, H., Mulford, W. & Zarins, S. (2002). Organizational learning and school
change. Educational Administration Quarterly 38(5), 613–642.
Stevenson, R.B. (2001). Shared decision making and core school values: a case study
of organizational learning. The International Journal of Educational Management
15(2), 103.
Torres, R.T. & Preskill, H. (2001). Evaluation and organizational learning: past,
present and future. American Journal of Evaluation 22(3), 387–395.
Turnbull, B. (1999). The mediating effect of participation efficacy on evaluation use.
Evaluation and Program Planning 22(2), 131–140.
School of Management
University of Ottawa
136 Jean-Jacques Lussier,
K1N 6N5,
Faculty of Education
University of Ottawa
... In this area, as a consequence, the tourism industry needs to be proactive to threats of this nature and needs to promote an organizational culture with the ability to incorporate new knowledge, work processes, and warning systems that better anticipate these risks (Goh, Quon, & Cousins, 2007). Nevertheless, it is clear that tourism operators cannot deal with this kind of risk on their own, but in close collaboration with public authorities. ...
... The integration and transfer of knowledge gained from experience are key dimensions of organizational learning. In this field, we can identify two types of actions that would allow the company to be better prepared for an uncertain future and to enrich the review phase of the tourism disaster management planning process presented by Faulkner (2001): -Knowledge transfer at the firm level, to avoid information silos and integrate past experiences into the stock of knowledge that can be shared with its members (Jerez-Gómez et al., 2005;Goh et al., 2007). -Establishment of systems of networking to enable and ease learning from the knowledge and experience of other organizations, that is, systems that allow learning from successful practices at other organizations (Goh et al., 2007). ...
... In this field, we can identify two types of actions that would allow the company to be better prepared for an uncertain future and to enrich the review phase of the tourism disaster management planning process presented by Faulkner (2001): -Knowledge transfer at the firm level, to avoid information silos and integrate past experiences into the stock of knowledge that can be shared with its members (Jerez-Gómez et al., 2005;Goh et al., 2007). -Establishment of systems of networking to enable and ease learning from the knowledge and experience of other organizations, that is, systems that allow learning from successful practices at other organizations (Goh et al., 2007). The tourism sector consists mostly of Small and Medium-sized Enterprises (SMEs), so its resilience is based on a complex set of networks (Scott & Laws, 2005). ...
... Subscriber: OUP-Reference Gratis Access; date: 26 January 2020 and Cousins 2007;Yang, Watkins, and Marsick 2004). Some of the measures have also been adapted and refined by others (Mavondo, Chimhanzi, and Stewart 2005). ...
... The survey instrument reported high reliability scores and also showed discriminant and convergent validity, having a negative correlation with formalization and a positive corre lation with job satisfaction. In the original survey there were twenty-one items; however, a subsequent analysis with a larger data set resulted, after confirmatory factor analysis (CFA), in a reduced eighteen-item scale (Goh et al. 2007). ...
... All the authors of these measures report approaches to construct validity in their develop ment through the use of CFA to establish the underlying dimensions of the LO construct (Goh et al. 2007;Yang et al. 2004). With acceptable internal consistency estimates of reli ability being reported, it can be concluded that based on the published data, the devel oped LO construct measures can be considered to have very good psychometric proper ties as organizational research instruments. ...
Full-text available
This chapter explores and reviews the development of survey research instruments to measure the learning organization construct. Some examples of such measures are presented and discussed to illustrate the approach used by researchers to establish the reliability and construct validity of these instruments. The contribution in the use of such measures to empirical research in linking the learning organization to outcomes such as organizational performance is also reviewed. A critical perspective is provided as to some of the potential issues for research in the use and further development of such survey instruments. Lastly, some suggested future research directions are proposed on how, using such measures, the field can advance our knowledge of the learning organization through more novel research methods and approaches.
... Several authors have argued that organizations should be appropriately structured and managed for effective learning to occur. Goh and Richards [56] and Goh et al. [57] identified five major underlying organizational characteristics and management practices that are key conditions for learning to take place in an organization. These are the following: clarity of mission and vision, leadership (commitment and empowerment), experimentation, transfer of knowledge, and teamwork and group problem-solving. ...
... For this purpose, we built a fifteen-item open-ended questionnaire. The questionnaire used in this research was based on the scales of Goh and Richards [56] and Goh et al. [57], who developed an organizational learning survey to measure learning capability from a managerial perspective. The interview questions were taken from the major underlying organizational characteristics and management practices that are key conditions for learning to take place in an organization. ...
Full-text available
The COVID-19 crisis has encouraged a major shift towards greater environmental awareness and sustainable consumption. However, in times of severe crisis, SMEs primarily look to return to normalcy and ensure their own survival rather than implementing a sustainable agenda. This paper aims to contribute to the understanding of the learning problems faced by small tourism enterprises in a crisis such as the COVID-19 pandemic. This paper explores the learning capacity of SMEs and the importance of establishing mechanisms that provide SMEs with the keys to organizational learning as a source of continuous knowledge. Open-ended semi-structured interviews with 39 tourism SME managers in Galicia (Spain) were conducted during the toughest months of the COVID-19 pandemic. The results show that SMEs have not been fully involved in the learning process, which is mainly related to knowledge transfer and integration. DMOs can act as promoters of knowledge management for organizational preparedness by providing SMEs with learning mechanisms and strategies to go beyond simply solving problems as they arise.
... It took nearly two years for me to complete this process before I was ready to begin writing a number of papers about it for peer review. Those who are interested can read my papers that explain the developmental process and how the instrument was finally validated and used for my research (Goh and Richards, 1997; Goh, 2001, 2007). ...
... This fueled our research, and we began a long collaborative research journey that has lasted since that serendipitous meeting I had with his graduate student. For those who are interested, I have listed some of the papers we have written together (Cousins et al., 2004; Goh et al., 2006; Goh et al., 2007). This was a very important development in my career, as it allowed for a cross-appointment to the Faculty of Education, another set of colleagues to interact with, and invitations to sit on a number of doctoral thesis advisory committees at the Faculty of Education. ...
Full-text available
Purpose In this paper, the author explores his research journey into the learning organization and its impact on his academic career. This paper describes how Peter Senge’s book The Fifth Discipline: The Art and Practice of The Learning Organization (1990) was the spark that led to the author’s focus on empirical research in the field. Design/methodology/approach This paper provides the author’s personal reflections on how this decision put him on a path to a variety of serendipitous experiences and exciting research areas, and also enabled him to engage in productive collaborative research with many of his colleagues. Findings The findings conclude with a discussion of what the author sees as new challenges and perspectives for advancing research into the learning organization. Originality/value This paper provides a unique perspective on how The Fifth Discipline by Peter Senge has influenced an academic career. It presents a personal reflection of a research journey into the learning organization that spans over 30 years.
... Furthermore, the scholarly work of Chakrabarty and Rogé (2002) examined the dimensionality of organizational learning using a CFA approach in LISREL and established the unidimensionality of the construct. Unidimensionality was also assessed by Goh et al. (2007). In view of this evidence, the current study has used organizational learning as a unidimensional construct (Chakrabarty & Rogé, 2002). ...
Full-text available
The theme of the current research is to assess the link between entrepreneurial orientation and organizational learning with organizational performance using structural equation modeling. The current work was based on a quantitative approach using a cross-sectional survey methodology. The study examined the effect of entrepreneurial orientation (EO) on organizational learning (OL) and organizational performance (OP), and of organizational learning (OL) on organizational performance (OP). Moreover, the study also examined the mediating effect of organizational learning (OL) in the relationship between entrepreneurial orientation (EO) and organizational performance (OP). Data were collected from managers and owners of manufacturing pharmaceutical SMEs in Indonesia. A sample of 340 was selected using the purposive sampling technique. The results support the current empirical model, confirming all the direct hypothesized relationships. The study results also revealed mediation of organizational learning in the entrepreneurial orientation and organizational performance relationship. Considering the association of the factors studied in the current research, the authors advise managers and owners to develop a learning-oriented environment. The implications section provides further details based on the results.
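The mediation logic in this abstract can be illustrated with a simple product-of-coefficients sketch. The data, effect sizes, and the two-regression approach below are illustrative assumptions, not the study's actual model, which used full structural equation modeling:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 340  # matches the study's sample size; the data themselves are simulated

eo = rng.normal(size=n)                          # entrepreneurial orientation
ol = 0.5 * eo + rng.normal(size=n)               # organizational learning (mediator)
op = 0.3 * eo + 0.4 * ol + rng.normal(size=n)    # organizational performance

def ols_coefs(y, X):
    """Least-squares slopes for y ~ X (intercept fitted, then dropped)."""
    X = np.column_stack([np.ones(len(y)), np.atleast_2d(X).reshape(len(y), -1)])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols_coefs(ol, eo)[0]                               # path EO -> OL
c_prime, b = ols_coefs(op, np.column_stack([eo, ol]))  # direct EO -> OP and OL -> OP
indirect = a * b                                       # mediated (indirect) effect
```

A nonzero `indirect` alongside a reduced but still significant direct path `c_prime` corresponds to the partial mediation pattern the study reports.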
... The ORC includes 115 items rated on a 5-point scale. Organizational learning climate was assessed using the organizational learning survey (OLS) which includes 18 items rated on a 7-point scale [45]. Clinician readiness for change was assessed using the brief individual readiness for change scale (BIRCS) which includes five items rated on a 5-point scale [46]. ...
In this study, we evaluated a blended implementation approach with teams learning to provide family-based treatment (FBT) to adolescents with eating disorders. Four sites participated in a sequential mixed method pre–post study to evaluate the implementation of FBT in their clinical settings. The implementation approach included: (a) preparatory site visits; (b) the establishment of implementation teams; (c) a training workshop; (d) monthly clinical consultation; (e) monthly implementation consultation; and (f) fidelity assessment. Quantitative measures examining attitudes toward evidence-based practice, organizational learning environment and organizational readiness for change, as well as individual readiness for change, were delivered pre- and postimplementation. Correlational analyses were used to examine associations between baseline variables and therapist fidelity to FBT. Fundamental qualitative description guided the sampling and data collection for the qualitative interviews performed at the conclusion of the study. Seventeen individuals participated in this study (nine therapists, four medical practitioners, and four administrators). The predetermined threshold of implementation success of 80% fidelity in every FBT session was achieved by only one therapist. However, mean fidelity scores were similar to those reported in other studies. Participant attitudes, readiness, and self-efficacy were not associated with fidelity and did not change significantly from pre- to postimplementation. In qualitative interviews, all participants reported that the implementation intervention was helpful in adopting FBT. Our blended implementation approach was well received by participants. A larger trial is needed to determine which implementation factors predict FBT fidelity and impact patient outcomes.
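The three instruments above use different response formats (5-point ORC and BIRCS, 7-point OLS), so scores are sometimes rescaled to a common 0–1 metric before comparison. A minimal sketch; the function and the example scale means are hypothetical, not values from the study:

```python
import numpy as np

def rescale_unit(scores, low, high):
    """Map Likert-scale scores from [low, high] onto [0, 1] so that
    instruments with different response formats share a common metric."""
    return (np.asarray(scores, dtype=float) - low) / (high - low)

# Hypothetical scale means for one respondent on each instrument:
orc = rescale_unit(3.2, 1, 5)    # 115-item ORC, 5-point scale
ols = rescale_unit(4.9, 1, 7)    # 18-item OLS, 7-point scale
bircs = rescale_unit(3.8, 1, 5)  # 5-item BIRCS, 5-point scale
```

Rescaling preserves rank order within each instrument; it does not make the constructs themselves directly comparable, only their numeric ranges.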
... According to Nilsen, classic theories, such as the Theory of Diffusion of Innovations [16] and various social cognitive, social network or organisational theories, have been developed in psychology, sociology or organisational sciences with a scope that goes beyond implementation but can usefully explain selected aspects of implementation [10]. On the other hand, various implementation theories, such as the Implementation Climate Theory [17] or the Absorptive Capacity Theory [18], are directly concerned with implementation and address behaviours that accompany or facilitate the implementation of evidence on an individual or community level [19][20][21][22][23][24][25][26]. These theories have been developed within the comparatively new field of implementation sciences, either de novo or by modifying existing theories [10]. ...
Full-text available
Background The effectiveness of complex interventions, as well as their success in reaching relevant populations, is critically influenced by their implementation in a given context. Current conceptual frameworks often fail to address context and implementation in an integrated way and, where addressed, they tend to focus on organisational context and are mostly concerned with specific health fields. Our objective was to develop a framework to facilitate the structured and comprehensive conceptualisation and assessment of context and implementation of complex interventions. Methods The Context and Implementation of Complex Interventions (CICI) framework was developed in an iterative manner and underwent extensive application. An initial framework based on a scoping review was tested in rapid assessments, revealing inconsistencies with respect to the underlying concepts. Thus, pragmatic utility concept analysis was undertaken to advance the concepts of context and implementation. Based on these findings, the framework was revised and applied in several systematic reviews, one health technology assessment (HTA) and one applicability assessment of very different complex interventions. Lessons learnt from these applications and from peer review were incorporated, resulting in the CICI framework. Results The CICI framework comprises three dimensions—context, implementation and setting—which interact with one another and with the intervention dimension. Context comprises seven domains (i.e., geographical, epidemiological, socio-cultural, socio-economic, ethical, legal, political); implementation consists of five domains (i.e., implementation theory, process, strategies, agents and outcomes); setting refers to the specific physical location in which the intervention is put into practice. The intervention and the way it is implemented in a given setting and context can occur on a micro, meso and macro level.
Tools to operationalise the framework comprise a checklist, data extraction tools for qualitative and quantitative reviews and a consultation guide for applicability assessments. Conclusions The CICI framework addresses and graphically presents context, implementation and setting in an integrated way. It aims at simplifying and structuring complexity in order to advance our understanding of whether and how interventions work. The framework can be applied in systematic reviews and HTA as well as primary research and facilitate communication among teams of researchers and with various stakeholders.
... Classic theories include the Theory of Diffusion (Rogers, 2003) as well as the whole range of social cognitive theories, social network theories, organizational theories, etc. In addition, various theories have been published regarding the behaviours accompanying and/or facilitating the implementation of evidence, on an individual as well as on a community level (Duckers et al., 2008; Fineout-Overholt & Melnyk, 2006; Goh et al., 2007; Good & Nelson, 1971; Helfrich et al., 2007b; Lubomski et al., 2008; Marsick & Watkins, 2003; Mathisen et al., 2004). Implementation theories aim to provide understanding and/or explanation of aspects of implementation, as do the Implementation Climate Theory (Klein & Sorra, 1996) and the Absorptive Capacity Theory (Zahra & George, 2002). ...
Full-text available
Purpose – Organizational democracy is the new model of organizational design for a Democratic Age, and out of this new model grows a freedom-centered and healthy climate. Democratic management is a key to greater organization success and a necessity to gain higher levels of performance and innovation. The purpose of this paper is to explore the antecedents and consequences of organizational democracy in an Iranian context. Design/methodology/approach – The statistical population includes the employees of the Gas Company of Isfahan Province. For data analysis, 263 accurately completed questionnaires are used. Structural equation modeling is applied to investigate the relationships between the research variables. Findings – The findings showed that some types of organizational culture (i.e. self-criticism, team, and participatory culture) (β = 0.33) and some dimensions of organizational structure (i.e. decentralization, flat hierarchy, and less formalization) (β = 0.55), as antecedent variables, have a significant direct effect on organizational democracy. Also, organizational democracy has a significant direct effect on human resources outcomes consisting of organizational commitment, self-efficacy, and improved work relationships (β = 0.64), and on organizational outcomes consisting of organizational learning and organizational agility (β = 0.96). Originality/value – Despite years of encouragement from consultants and theorists, managers have generally shown little interest in democratic processes as a system of decision making and management in organizations. This study proposes a comprehensive model for identifying the antecedents and consequences of organizational democracy. Most studies in this field are theoretical rather than empirical, but in this research the proposed relationships are examined empirically. Keywords Organizational culture, Organizational structure, Human resources outcomes, Organizational democracy, Organizational outcomes
Full-text available
The literature review reveals relationships between organizational learning, organizational commitment, job satisfaction and work performance. However, the integrated relationships among these variables do not appear to have been reported. Hence, we examine the relationships among these variables using a sample of public service managers in Malaysia. Organizational learning was found to be positively related to organizational commitment, job satisfaction, and work performance. Organizational commitment and job satisfaction are also positively related to work performance, and these variables partially mediate the relationship between organizational learning and work performance. Implications of the study and suggestions for future research are discussed in this paper.
A learning organization is an organization skilled at creating, acquiring, and transferring knowledge, and at modifying its behavior to reflect new knowledge and insights. This paper discusses the essentials of building a learning organization. It also suggests that beyond high philosophy and grand themes, building a learning organization requires the gritty details of practice.
This article presents an analysis of the potential for a school improvement process to foster professional community in three rural middle schools through the processes of organizational learning. The findings of this 2-year qualitative case study demonstrate the tensions schools must negotiate between bureaucracy and professional community and suggest that four organizational factors influence the establishment of professional community: principal leadership, organizational history, organizational priorities, and organization of teacher work. The findings further suggest that double-loop learning is invaluable to sustain professional community.