Evaluation Capacity Building as a Means
to Improving Policy Making and Public
Service in Higher Education
Nicolae Toderaş and Ana-Maria Stăvaru

National University of Political Studies and Public Administration (SNSPA), Bucharest, Romania
e-mail: nicktod@yahoo.com; anamaria.stavaru@gmail.com
Keywords: Capacity building for evaluation · Organizational learning · Sustainable evaluation practice · Evaluation use in the Romanian higher education system
1 The Need for Evaluation Capacity Building
In order to strengthen the processes of organizational learning and to improve the
policy making and implementation process in various public sector areas, organi-
zations have been searching for means of putting evaluation into practice at the
organizational level and offering their staff and management opportunities to learn
about evaluation and to include evaluative thinking and acting in their day-to-day
routine. In this respect, evaluation capacity building has become a very prolific
topic of discussion and writing in the evaluation field since the year 2000 (Compton
et al. 2001; Preskill and Boyle 2008). In the higher education system, as in other fields such as public health or social policy, evaluation capacity building has to deal with various stakeholders' interests and values and to find a way of integrating evaluation as part of the system rather than as an intrusive, external activity carried out merely to comply with external or internal pressures.
Throughout this analysis, the higher education system is seen as consisting of at least two types of organizations: those which provide educational services (such as universities) and those with responsibilities in decision making, policy planning and implementation, regulation, control or purely executive functions (such as the Ministry of National Education, the Romanian Agency for Quality Assurance in Higher Education (ARACIS), the National Authority for Qualifica-
tions, the Executive Agency for Higher Education, Research, Development and
Innovation Funding (UEFISCDI)). The following evaluation capacity building
framework is addressed to the second type of organizations, in order to discuss a set
of elements which can facilitate their organizational learning and improve their
policy making functions. This choice was made because of the important part that
the second type of organizations plays in the decision-making process.
There is a growing need in the Romanian higher education system for identi-
fying mechanisms for improving public policies in the field of higher education
and, implicitly, of other services which are complementary to the educational
process, such as the impact of scholarships on improving the access to higher
education and the quality of the educational process; the implementation of systems
for acknowledging and validating qualifications and the related consequent com-
petences; improving and increasing access to counselling and professional orien-
tation services, among others. Evaluation is becoming increasingly visible and used as one means of improving policies, programs and/or organizations. Evaluation can be used not only as a step in the public policy making process, but also as a stand-alone process for collecting, analyzing and interpreting the facts necessary for grounding, improving, legitimating, correcting and adapting policies, or even for developing capacities at the organizational level in general, and especially at the level of expert teams within the organization. In spite of the fact that evaluation has been promoted in the Romanian public sector as part of the public policy cycle, in practice the use of evaluation at this stage is relatively infrequent, as there is no strategy for developing evaluation expertise or for funding the evaluation step of the policy making process; thus, evaluation rarely lies at the foundation of a new public policy.
The practice of evaluation started to develop in Romania only after the second
half of the 1990s, and one of the factors which led to the institutionalization of this
practice was the conditionality and expectations linked to different financing
opportunities from external sources (Perianu 2008; Cerkez 2009a,b). During this
period, the best examples of this are represented by the financing which was offered
by international financial institutions and external donors. Even though the use of joint evaluation was encouraged as a means of contributing to the development of an evaluation culture in partner countries, Romania, as a recipient country and partner in the evaluation process, was not able to create its own evaluation capacity, partly because the evaluations were centred on the needs of the external donors and followed their planning and programming cycle rather than that of the partner country, a common issue among those experimenting with joint evaluations (Dabelstein 2003). The educational field was one of the first benefi-
ciaries of these development instruments which were accessed by Romania. For
example, on the 31st of May 1996 the Romanian Government and the EU signed
the Financial Memorandum for implementing the Phare Programme RO9601
Universitas 2000, which consisted of activities for evaluating components of the system in order to accomplish structural changes in the Romanian higher education system. Thus, in association with the Reform of Higher Education and
Research Project (Loan 4096-RO), financed by the World Bank and implemented
by the Romanian Government, a series of exercises were conducted during the
1997–2001 period for evaluating procedures, methodologies and organizations
within the national higher education and research system in order to improve the
public policy making processes in this sector. In addition, a large-scale exercise in
the field of evaluation took place between 2000 and 2001 as part of the Education
Reform Project RO 3742 which was financed by the Romanian Government and the
World Bank, its objective being to evaluate the implementation of the curricular
reform for compulsory schools in the 1990–2000 decade (Vlasceanu et al. 2002).
After this period, once Romania's participation in European programs in the field of education and higher education began to increase, it became clear that financing would critically depend on the evaluation of the programs and
projects which were implemented. What is more, with the advance of the reform in
the central and local administrative sectors, the need for evaluation intensified both
in terms of evaluating projects, programs and policy, but also in terms of evaluating
organizations in order to increase their performance. For example, in the case of the higher education system, quality assurance was established as a compulsory process (Law 87/2006). It was intended to lead to the implementation of a national quality assurance system based on periodical internal and external evaluations.
This entails the continuous evaluation of the educational process as a whole, as well
as the organizational performance of higher education institutions which are subject
to periodical evaluations. As a consequence, based on the experience which was
accumulated during a policy cycle of quality assurance in higher education it
became possible to carry out a national exercise for collecting data and information for
evaluating universities and study programs in order to classify universities and rank
study programs. On the one hand, this exercise offered an overview of Romanian
higher education institutions, as well as a series of data for grounding a new cycle
of policies regarding higher education financing, quality assurance, developing
research programs etc. On the other hand, this exercise demonstrated a level of institutional maturity of the actors within the system as regards the use of evaluation as a useful instrument. In this case, universities, as actors which are part of the system, used quality assurance as a guide for increasing their performance, adaptability and friendliness, as well as a means of public accountability.
Evaluation capacity building as a means for improving organizational perfor-
mance and public policies and programs is an aspect which has not been studied
extensively in Romania. Also, its practical use for organizational learning is limited. On the one hand, this subject is approached by few authors in the Romanian specialized literature, in spite of the fact that at the international level the interest in developing evaluation capacity as an element of organizational change, together with its causes, motivations, influences, results and uses, has a long tradition. On the other hand, universities' superficial approaches to evaluations prove that neither evaluation nor organizational learning is understood and perceived as an instrument capable of generating knowledge and reducing the time needed to find solutions. Though they could lead to organizational development towards finding more efficient, flexible and lasting solutions, universities tend to neglect them. They mimic achieving the stan-
dards, replicate the behaviour of older organizations or accomplish only the
minimum of what is demanded through standards and indicators in order to obtain
formal recognition or financing, proving once again the lack of prospective thinking.
Organizational learning thus occurs in an unstructured manner, with significant
losses regarding the accumulated experience and with weak emphasis on vision.
Practices such as reforms which dismantle everything that was built through the previous reform, without thinking strategically and selecting elements which can be reused or further developed, are another indicator of the lack of continuity in policy thinking and of the insufficient use of organizational learning. For example, between 2009 and 2011, through the Quality and Leadership for Romanian Higher Education Project, UEFISCDI performed an exercise of systemic foresight aimed at developing the anticipation and leadership capacities of higher education policy makers. This approach was based on learning by example and on participation in the elaboration of strategic documents such as the Green Paper and the White Paper (Andreescu et al. 2012). Although the exercise involved broad participation, the universities and policy makers did not implement the institutional recommendations set out in the White Paper.
Evaluation can also be used for adapting policies and organizations, thus con-
tributing to saving time and increasing the probability of identifying an adequate
alternative. Thus, while a policy, a program or an organization develops, on-going or intermediary evaluations can point out potential problematic aspects, difficulties, reluctance, unfavourable conditions, unintended effects (positive or negative), alternative ways of handling problems, as well as opportunities arising along the way that could be exploited in order to increase the impact of the development process. This allows reflection and the timely finding of solutions for improving implementation and for getting closer to the intended results or effects. An anticipative adaptation approach offers the possibility of diminishing periods of uncertainty and risk and of informing debate and decision making, thus ensuring the continuity of the programs' implementation or of the organizations' activity. Understanding as early as possible which aspects can be improved also increases flexibility, allowing measures to be taken before an activity has advanced too far for changes to be made. Furthermore, costs are reduced because activities are prevented from unfolding towards deficient outcomes, inefficiencies can be fixed as they appear, and resources can be redirected to aspects which deserve or need additional support.
2 Developing a Logical Framework for Evaluation
Capacity Building in the Romanian Higher Education
System
Although the technical assistance programs of the EU pre-accession period enabled the development of initiatives aimed at generating a culture of evaluation, these initiatives, which were expected to gradually lead to the full-scale use of evaluation practices in order to improve the public policy making process and to a solid culture of evaluation, have not been fully successful at the system level,
including in the higher education system. In spite of this fact, in recent years some
ex-ante and intermediary implementation evaluations have been conducted
regarding the operational programs for implementing structural and cohesion funds,
some of which targeted components of the higher education system. However, these
evaluations were rather meant to point out the needs within the system which could
be addressed through the use of structural and cohesion funds, without directly
targeting the improvement of the policy making process in the field of higher
education through evaluation exercises.
The focus on the internalization of quality assurance, which was sustained by ARACIS, has led some organizations to the view that the methodology and instruments used by ARACIS are the only possible approach. This could be seen as an aspect of coercive isomorphism (Păunescu et al. 2012), without learning through evaluation what it would mean to diversify and particularize evaluation approaches, models, methods and instruments. However, the methodological framework used by ARACIS does not oblige universities to conduct deeper evaluations aimed at understanding the way in which their established objectives are accomplished, estimating the social impact of the evaluated programs, or comparing the evaluated programs with each other (Cerkez 2010). The use of specific methods of evaluation capacity building would have facilitated the enhancement of institutional responsibility for quality.
Even though regulatory and executive higher education agencies supported such a process of diversifying evaluation approaches, models and methods in order to increase the quality of services offered by actors within the system, they have not had the logistical capacity or the expertise necessary to sustain this process. Consequently, because of the lack of an organizational culture of evaluation, the regulatory and executive higher education agencies within the system adopt a resistant behaviour when comprehensive system evaluations are being conducted, whether with quantitative or qualitative methods. For example, in the April–August 2011 period, when the first exercise for conducting the primary evaluation of universities and the evaluation of study programs was carried out in order to accomplish the classification of universities and the ranking of study programs, evaluators noticed the hostility with which the personnel and the management staff reacted to such a normal process of collecting the evidence necessary for this exercise of evaluating the system's status. Such behaviour can be explained by the
fact that until that moment there was no institutionalization or routine for collecting, processing and using evidence at the systemic level in order to evaluate the organizations within the system and to improve the services they offer, and such a necessity was not perceived and treated as a priority either at the institutional or at the national level. What is more, because of the lack of exercises such as this, public policies in the field of higher education have frequently been based only on statistical data supplied by the National Statistics Institute, which are rather scattered and frequently irrelevant, rather than on systematically collected, processed and interpreted evidence which would allow the evaluation of the actual state of different aspects of the system. For example, it did not make it possible to assess the efficiency and impact of the policy for increasing
access to and retention within the system of students of Roma ethnicity, or the degree of active
participation of students who are over 34 years old. This lack of evidence-based
policies has led to policies and programs that do not respond directly to the needs,
capacities and availability of the main actors, but rather to momentary political
desires. The National Student Enrolment Registry, which was designed as an
electronic database for registering all the students in Romania in state and private
universities which are either accredited or authorized to operate, has proven to be a
difficult instrument to implement. There are several difficulties in ensuring that all the functions for which it was designed are working properly, even though it should already be in place, as the National Education Law (Art. 201) stipulates that this instrument had to be fully functional within a maximum of 2 years after the law was passed, that is, by February 2013. The implementation of a program or
policy should be seen as an open system, which is sensitive, to a certain degree, to interference (Chen 2005). At the same time, the dynamics of transforming an initial state into a desired state through the implementation of a program or policy depends, among other factors, on the dynamics of the internal and external organizational pressures. Evaluations regarding the organizational development of the
actors within the higher education system, including the quality assurance evaluations specific to the suppliers of higher education programs, can be seen as a practice for improving both the actual services that they are offering and the policies which they are implementing. From these evaluation exercises, organizations in the higher education system can learn from each other how to better accomplish the mission which they have undertaken, how to better implement their strategies, how to improve their practices etc. Learning through evaluation
means that the evaluation process does not end when the final results are identified. Instead, it includes prospective thinking about the next period of programming and implementation, making use of the knowledge and experience which have been gained, and, ultimately, restarting the evaluation cycle. This is a circular process, as can be seen in Fig. 1, made up of four steps, each step offering explanations for the situations identified in the subsequent steps.
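Viewed schematically, the cycle can be thought of as a simple state machine that always returns to its starting point. The short Python sketch below is purely illustrative; the stage names and the next_stage helper are our own shorthand for the four steps and are not part of any existing tool or methodology.

from enum import Enum

class ECBStage(Enum):
    """Stages of the evaluation capacity building cycle (cf. Fig. 1)."""
    SHAPE_PRIORITIES_AND_STRUCTURES = 1  # steps 1.a and 1.b
    SELECT_EVALUATION_MODEL = 2
    TRAIN_EVALUATION_SKILLS = 3
    ROUTINISE_EVALUATION = 4

def next_stage(stage: ECBStage) -> ECBStage:
    """Return the stage that follows; after routinisation the cycle restarts,
    since reshaping priorities reopens the loop (the process is circular)."""
    order = list(ECBStage)
    return order[(order.index(stage) + 1) % len(order)]

# One full turn of the cycle, starting from shaping priorities.
stage = ECBStage.SHAPE_PRIORITIES_AND_STRUCTURES
for _ in range(len(ECBStage)):
    print(stage.name)
    stage = next_stage(stage)

The only property the sketch encodes is circularity: after routinisation, the cycle restarts with the reshaping of priorities.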
2.1 Stage 1: Shaping Evaluation Priorities and Creating
Institutionalised Evaluation Structures
Evaluation knowledge and practice become better understood and increasingly used
in organizations which resort to the implementation of intentional ECB strategies
(Bourgeois and Cousins 2013). Shaping evaluation priorities at the organizational level implies developing processes such as: (a) identifying topics that are important for the organization's mission and objectives; (b) analysing the topics and revealing the logical connections between them; and (c) arranging them by previously established criteria and selecting priorities. Even if the Evaluation Capacity Building
strategies rely on the creation of specific internal structures (such as evaluation
departments or units) within the organization, they can have a broader impact and
be more effective in the process of getting staff members used to addressing eval-
uation needs and using specific toolkits as a day-to-day routine. From the ECB
perspective these structures have the role of ensuring the continuous evaluation and
monitoring component, including the component for evaluating the projects which
have been implemented by the respective organizations, by planning and con-
ducting periodical evaluations regarding the adequacy of institutional arrangements
(institutional blockages, necessary time etc.), efficiency and effectiveness, rele-
vance, usefulness, the performance of implementing policies, programs, strategies
and/or supplying services, administrative capacity etc. For example, within the
Ministry of National Education this function is performed by the Public Policy Unit,
and within UEFISCDI evaluation is treated as an on-going process for the programs
and system strategic projects which are implemented. In the case of UEFISCDI, this
approach has been institutionalized, strengthened and perpetuated through the
implementation of the Phare Universitas 2000 Program between 1996 and 2002, as
well as the Higher Education and Research Reform Program RO-4096, programs which can be considered as the basis for learning through evaluation at a
systemic level. Apart from the functions which were presented earlier, these
structures which have a role in evaluation could also serve as communication
channels with beneficiaries and interested actors by generating a framework for
participatory debate, thus involving them in the evaluation process, as well as increasing the evaluations' degree of responsiveness to the needs of the community
which it serves (Bărbulescu et al. 2012). Thus, this could lead to increased orga-
nizational learning, which can be understood as “the vehicle for utilizing past
experiences, adapting to environmental changes and enabling future options”
(Berends et al. 2003).

[Fig. 1 Evaluation capacity building framework (authors): a circular diagram linking step 1.a, shaping evaluation priorities, and step 1.b, developing evaluation structures (through participative debate), with step 2, selecting the evaluation model, step 3, training evaluation skills (guidelines, presentations, workshops, simulations, pilot evaluations, communities of practice), and step 4, routinising evaluation (collecting evidence, implementing, evaluating), tied together by expert team learning, organizational learning (diffusing evaluation practice, reshaping evaluation priorities, strengthening leadership) and system learning towards other organizations in the HE system.]

The learning process can use different means, such as dia-
logue, reflection, asking questions, identifying and clarifying values, beliefs,
assumptions and knowledge (Preskill and Torres 1999), but in order for participants
to become involved, they need to have the proper motivation to learn about eval-
uation and use it. In addition to motivation, participants need the organization to
offer them “leadership, support, resources, and the necessary learning climate”
(Taylor-Ritzler et al. 2013) so that the impact of ECB becomes visible.
At the level of organizations within the higher education system ECB can be
undertaken both through internal and through external means. For
example, in order to gain the status of European Association for Quality Assurance
in Higher Education (ENQA) member, a process which represents one of the main
factors which has led to strengthening the position of ARACIS within the config-
uration of the national institutional environment, ARACIS needed to develop its own
organizational evaluation capacity. The process of becoming an ENQA member was
long and was carried out through both categories (internal and external) of ECB-specific means, which required examining the extent to which the standards ENQA required of candidates had been achieved.
the consolidation of ECB within ARACIS, the agency established a set of internal
procedures and instruments through which it carried out a self-evaluation exercise
which represented the base for all the subsequent activities for applying to become
an ENQA member. From the external perspective of the ECB consolidation within
ARACIS, between 2007 and 2008 the European University Association (EUA) led the process of evaluating ARACIS, recommending at the end of the process the inclusion of ARACIS in the European Quality Assurance Register (EQAR),
which was a significant step in gaining the status of ENQA membership, which
occurred in 2009. ARACIS was considered to meet the ENQA criteria in terms of its
activities, its official statute, independence and other aspects, while it did not fully
meet the following criteria: the processes for external quality assurance, resources,
the declaration of principles, the criteria and processes for external quality assurance
used by members and the responsibility procedures (European Association for
Quality Assurance in Higher Education 2009, pp. 46–47). Regarding the latter,
ENQA recommended that ARACIS continue its efforts in these directions in order to
achieve full conformity as fast as possible. Taking into consideration this example, it
can be concluded that the process of shaping evaluation priorities and improving or
adapting institutionalized evaluation structures is continuous and plays a role in the
process of institutional strengthening.
2.2 Stage 2: Using a Participative Approach for Deciding
the Appropriate Evaluation Model
Developing evaluation capacity at the level of public systems implies the need to think from an evaluative point of view and to improve organizational and system learning processes through a participative approach. Introducing evaluative
activities in the usual practice of organizations requires the adoption of evaluation
models, able to transpose this practice into a set of systematic activities, with a clear
methodology and a useful purpose. Thus, the development of the evaluation
capacity ensures “the design and implementation of teaching and learning strategies
to help individuals, groups, and organizations, learn about what constitutes effec-
tive, useful, and professional evaluation practice”, the final purpose being to create
a “sustainable evaluation practice—where members continuously ask questions that matter, collect, analyze, and interpret data, and use evaluation findings for decision-making and action” (Preskill and Boyle 2008). To this end, the analysis of several evaluation models and approaches from the specialized literature can be useful for helping organizations select elements to be included in their own model. What is more, various checklists have been designed especially to facilitate the wider use of evaluative practices, such as “A Checklist for Building Organizational Evaluation Capacity” (Volkov and King 2007) and the “Institutionalizing Evaluation Checklist” (Stufflebeam 2002).
A criticism that can be raised concerning the way in which the practice of
evaluation has been introduced in the education field is that the choice of evaluation
approaches, models and methods often ignores the opinions of those who are part of
the organization where this process is taking place. For example, faculty members, in the case of universities, or experts, in the case of agencies with responsibilities in the educational policy making process, are often excluded from the decision-making process regarding the undertaking of an evaluation. This can result in a
certain degree of rejection from these groups as a consequence of the insufficient
relevance of the chosen approaches in relation to their role in the educational
process (Nevo 2006). Continuing this line of thinking, the activities which are
specific to evaluation can seem foreign or external to the agencies' field of activity if the experts within them are not consulted while choosing these activities and if they have
nothing to say regarding the way in which evaluation activities will be integrated
within their usual, day-to-day routine. For these reasons, but also in order to choose an evaluation model adapted as closely as possible to the organization's particularities and to the needs of the individuals and teams which form it, it is fundamental that the choice of an evaluation model be based on a wide and informed debate at the organization's level. This allows for the integration of the different
needs of individuals, but also for them to become more easily accustomed to the
new routine. In the case of the higher education system, however, routines can also
become an impediment in the way of improving organizational performance and
adapting to a dynamic environment. For example, in the case of universities, the
Quality Barometers, which were conducted by ARACIS in order to present a
subjective status analysis, show that the internal evaluation of quality is a ritualistic and conformist routine, mostly decoupled from the management processes within the university. This leads to the mimicking of standards, the dysfunctional transposition of norms into practices, the weak professionalization of internal evaluation commissions and a focus on input values rather than on effectively
increasing quality (Vlăsceanu et al. 2011;Păunescu et al. 2011). On the other hand,
routines can generate a framework for comparing the evolution of different policies
and programs which leads some agencies within the system to establish their own
evaluation model and customize specific instruments according to the characteris-
tics of the implemented programs. For example, in the case of the National Agency for Community Programs for Education and Professional Development (ANPCDEFP is the acronym in the Romanian language), program and project evaluations highly depend on the approach, methods and practices used by the European Commission, DG Education and Culture, which are transposed to the agency's level by the expert evaluators it has selected.
Still, what are the fundamental elements that form the basis for constructing or adapting an evaluation model? Which are the most frequent evaluation questions that agencies in the field of higher education should consider including in their own model, in order to develop their own evaluation capacity and be able to respond to evaluation needs and priorities? Different meta-approaches
to evaluation tend to assign increased importance to different functions of the evaluative process: formative or summative evaluation (Scriven 1967),
responsive evaluation (Stake 1975), illuminative evaluation (Parlett and Hamilton
1977), utilization focused evaluation (Patton 1986), systematic evaluation (Rossi
and Freeman 1985), constructivist evaluation (Guba and Lincoln 1989), goal-free
evaluation (Scriven 1991), empowerment evaluation (Fetterman 1994), realist
evaluation (Pawson and Tilley 1997), developmental evaluation, etc. All these
approaches propose various selections of concepts, instruments, elements of design
and roles and stances of the evaluator in order to achieve the emphasized func-
tion. But how can regulatory and executive higher education agencies distinguish
and choose between all of these, in order to use an adequate evaluation model,
which takes into account the system’s constraints and conditions such as: quality
assurance standards, the needs and values of the various stakeholders, scarce resources, and the institutional and organizational context? Given the sector's dynamics and
the multidirectional influences to which it is subjected (external conditions, the
coherence with national and international strategic and reform lines, changes which
are diffused from other sectors etc.), it is difficult for a single evaluation model to
offer all of the most appropriate answers when evaluating an educational program,
the effects of a reform package, organizational accountability or responsiveness etc.
Thus, for each situation agencies can choose from a large number of combinations
of different elements, dimensions and values which are useful for the evaluator,
different methodological approaches, quantitative (surveys, large-scale analysis)
and/or qualitative (in-depth interviews, public consultation, focus-groups). Though
in some cases there will be an obvious predisposition towards choosing a certain
method or a certain type of methods, the process of choosing or establishing the
most adequate evaluation model might seem very confusing and stressful and could
attract contradictory discussions, as well as resistance to change in the case of some
organizations. However, this debate at the level of each agency about the way in which it develops its evaluation capacity and chooses an evaluation model to be integrated in the agency's current activities can also be very productive. Ultimately, it can build the strategy that the agency is going to follow in a
practical manner and form the basis for the decision on the evaluation model to be chosen, having both an informative and a formative role for the experts who take part in the debate. The choices that are going to be made, or which may be favoured by those who participate in the debate, involve a certain degree of subjectivity connected to various factors such as:
• values and preferences for certain approaches or methods;
• competences, skills, education;
• the way in which they interact at the intra- and inter-organizational levels;
• the formulation of judgments regarding the quality of a policy or program;
• other elements which shape their decision and will guide them subsequently in the practice of evaluation activities.
This process is also connected with the way in which their own activity, the activity of the organization they are part of, or that of other organizations within the system with which they interact could be improved. All these elements help them understand
why a certain decision has been made, why a certain alternative was implemented,
why a certain action generated a specific effect, why an organization took a certain
course of changing or adapting, why the response behaviour of an organization to a
certain situation followed one pattern and not another. Ultimately, it could enable them to identify generalizable models of effectiveness and the meaning of an intervention.
As a concluding remark, regulatory and executive higher education agencies have
to ensure an open and participative environment for the most adequate evaluation
approaches, methods and instruments to be chosen. Also, the choices made should be representative of the various stakeholders' needs and values.
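One way of keeping such a participative choice transparent, sketched below under our own assumptions, is to have each stakeholder group score the candidate approaches against jointly agreed criteria and then aggregate the scores with explicit weights; the candidate models, criteria, weights and scores in the example are invented for illustration and do not describe any agency's actual practice.

# Illustrative multi-criteria aggregation of stakeholder preferences over
# candidate evaluation models. All names and numbers are hypothetical.
criteria_weights = {
    "fit_with_qa_standards": 0.4,
    "resource_requirements": 0.3,
    "stakeholder_relevance": 0.3,
}

# scores[stakeholder_group][model][criterion], on a 1-5 scale
scores = {
    "faculty_members": {
        "utilization_focused": {"fit_with_qa_standards": 4, "resource_requirements": 3, "stakeholder_relevance": 5},
        "responsive":          {"fit_with_qa_standards": 3, "resource_requirements": 4, "stakeholder_relevance": 4},
    },
    "agency_experts": {
        "utilization_focused": {"fit_with_qa_standards": 5, "resource_requirements": 3, "stakeholder_relevance": 4},
        "responsive":          {"fit_with_qa_standards": 3, "resource_requirements": 3, "stakeholder_relevance": 4},
    },
}

def weighted_score(model: str) -> float:
    """Average, over stakeholder groups, of the weighted criterion scores."""
    per_group = [
        sum(weight * scores[group][model][criterion]
            for criterion, weight in criteria_weights.items())
        for group in scores
    ]
    return sum(per_group) / len(per_group)

models = {model for group_scores in scores.values() for model in group_scores}
for model in sorted(models, key=weighted_score, reverse=True):
    print(f"{model}: {weighted_score(model):.2f}")

The value of such an exercise lies less in the final ranking than in the fact that the weights and scores have to be argued for openly, which is precisely the kind of informed debate advocated above.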
2.3 Stage 3: Training Evaluation Skills
How can evaluation skills develop in bureaucratic routine-embedded systems? The
need for training that agencies have to address consists in making managers and
staff aware of the importance of using evaluations and also in giving them the
practical tools for doing it. Training and capacity building are essential, but putting the training process into practice is itself a challenge when the aim is to introduce a new “routine” into the normal schedule of experts within an organization. This is
why it is important that attending the training not be considered a boring and
imposed activity. A major part is played by the two steps which were previously
described, which offer, on the one hand, an institutionalized function of the agency, which will become part of every staff member's current activity, and, on the other hand, familiarization with elements specific to the practice of evaluation, together with the consultation and integration of their own needs, values and opinions
within the new activity. The various means through which evaluation skills can be
formed at the level of organizations, both for staff and management, include:
Elaboration of agencies' own evaluation guidelines: offering written, explana-
tory and exemplifying materials which support training activities and which guide
evaluation activities by taking into consideration the particularities of the activities
which the organization carries out within the higher education system. In the
context of deepening European integration, the instruments which are elaborated/
adapted by organizations at the system level should lead to the adaptation to the
multiannual financial programming principles, taking decisions according to
Romania’s needs, generating coherence with the EU’s priorities, ensuring consul-
tation in order to rank national priorities, initiatives for adapting and aligning the
legislative support and instruments, strengthening the relevance of programming,
stimulating risk awareness, initiatives that ensure the coherence of the institutional
and normative system.
Brief presentations of concepts, guidelines and standards: theorists and practi-
tioners can share their expertise with the organization members by presenting
different approaches, concepts, models and methods which are specific to evalua-
tion, adapting these to the organizations' evaluation priorities and to the changes
that are taking place within the higher education system.
Interactive courses: discussions about the expectations regarding the results and
the use of evaluation processes.
Workshops: interactive sessions during which participants are offered an
extended participatory framework for dialogue and learning by carrying out team
activities regarding the way in which evaluation instruments relate to the educa-
tional policies, programs or reforms implementation and their day-to-day activities.
Evaluation simulations: undertaking, in an organized environment, all the steps of an evaluation cycle (contracting,
preparation, evaluation design, implementation of the evaluation and reporting of
the results) related to an educational program, policy or organization within the
higher education system.
Pilot evaluation implementation and reporting: carrying out a pilot evaluation in a narrower geographical or thematic area within the higher education system and discussing it with decision makers in those areas. In the case of the higher
education system in Romania in the 2006–2013 period most emphasis was placed
on organizational evaluations from the perspective of the quality assurance
dimension and these entailed conducting several national pilot evaluative exercises.
As a consequence of these exercises in the year 2011 a comprehensive national
exercise was conducted for establishing the hierarchy of study programs and for
classifying higher education institutions. Furthermore, as a consequence of these
evaluations at the system level, in the period 2012–2013 the European University Association conducted a longitudinal institutional evaluation of 42 universities using its own methodology, which was adapted and particularized to the specific characteristics of the higher education system in Romania. Of course, this
latest national evaluation exercise could not have been successfully (efficiently and
efficaciously) implemented if pilot and common exercises/learning activities had
not been undertaken before the year 2011 regarding quality assurance. An inter-
esting aspect is that after the exercise of ranking study programs, alternative
methodologies have been developed in order to create a comparative framework for
the official hierarchy (Vîiu et al. 2012), thus diversifying the perspectives which are
taken into account when such evaluation exercises are conducted.
Consultations regarding improving the agencies' own evaluation guidelines:
regular initiatives to improve the channels for public consultation regarding the
design of evaluations which should be undertaken both for organizational evalua-
tion, as well as for the evaluation of the programs which have been implemented.
For example, both UEFISCDI and ANPCDEFP periodically carry out
activities for increasing the awareness of the beneficiary and interested public
regarding the achievement of specific objectives and the contribution towards
achieving policies’objectives. They disseminate information regarding the eco-
nomic and social impact and the coherence with the directions which are stated in
strategic documents. They also organize public debate on results from the per-
spective of the contribution to the accomplishment of priorities.
Collaboration with universities for professional Master programs: developing
specialized study programs and curricula for evaluating policies, programs and organizations, and adapting them to the students' profile. At present no public or
private higher education institution offers a Master’s program dedicated to higher
education management which studies organizational evaluation in the higher edu-
cation system. This component is instead treated as a subject in related programs
such as: management and governance, educational management, the evaluation of
public policies and programs, public policies, European integration etc. Further-
more, this subject is discussed in continuous professional training activities, which
have taken place in the last few years as part of projects which were financed by the
Human Resources Development Sectoral Operational Program 2007–2013. It is expected that in the coming years universities which offer master's programs connected to the field of organizational evaluation will extend this framework of mutual cooperation and learning towards the specialized professional environment (professional associations, consortia and companies which offer services for evaluating public policies and programs).
Apart from becoming familiar with specific elements from literature and the
practice of evaluation, the training of experts within the agency should include the
strengthening of their competencies in the use of social research models for eval-
uation activities. Thus, depending on the approaches that will be chosen and the
selection of quantitative and/or qualitative methods, they can practice in workshops,
simulations or pilot evaluations quantitative research activities such as social
inquiries, surveys, etc. or qualitative research activities such as undertaking
observations regarding the ways in which individuals work within the target
organization, conducting in-depth interviews with decision makers who are
responsible for the management and implementation of programs, document
analysis, content analysis, root cause analysis etc. In the case of pilot evaluations,
experts will have the possibility to approach evaluation results in an integrated
manner and to validate them by soliciting feedback from the other organizations
with which they will interact during the evaluation process. As a concluding
remark, for the training process to be efficient and relevant to the training needs,
regulatory and executive higher education agencies could also consider drawing on the expertise of independent evaluators or training staff from other agencies within the system.
2.4 Stage 4: Routinising Evaluation and Continuously
Reshaping Priorities
The routinization and redefinition of priorities in ECB entails the formation of a critical mass of individuals who will support the use and dissemination of evaluation practices, the reconceptualization of problems and of the solutions which are proposed, the analysis of the implementation's fluidity, of the consistency, relevance and plausibility of changes, and of the persistence of problems in programming and implementation (aspects which have to be maintained versus aspects which need to be modified), and the utilization of the experience and lessons learned for new policies and programs in higher education. What is more, these steps should provide an answer to the following question: what happens to the evaluation skills when the training, simulations and the pilot evaluations end? If the involvement of
management and staff is reduced to short term engagements during training
activities and they are not offered continuity through their involvement in on-going
evaluation activities, it is very likely that the impact of evaluation capacity building
strategies will be minimal, and that the new competencies which have been formed
will not be used in the normal routines of the agencies. This is why the organization
needs to offer its management and staff opportunities to practice evaluation by
“developing tools, processes, and understandings about how new knowledge and
skills are transferred to the everyday work of program staff and leaders”(Preskill
2013). Monitoring the degree of routinization of organizational evaluation can be
achieved by using a matrix like the one presented in Table 1. The matrix is structured on four levels of intensity regarding understanding, use and learning transfer.
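Purely as an illustration of how the matrix could support a self-assessment routine, the sketch below encodes an abridged version of the criteria and assigns an organization the highest level for which all listed criteria are met; both the abridged wording and this scoring rule are our own assumptions rather than part of the matrix itself.

# Abridged, illustrative encoding of the routinisation matrix (Table 1).
# The criterion wording is shortened and the "highest level with all
# criteria met" rule is an assumption made for this sketch.
ROUTINISATION_LEVELS = [  # ordered from highest to lowest
    ("high routinisation", [
        "functional unit carrying out evaluation systematically",
        "comprehensive understanding of evaluation",
        "skills periodically assessed and updated",
        "evaluation integrated in everyday work",
        "findings used to improve current activities",
        "stable, dedicated evaluation budget",
    ]),
    ("intermediate routinisation", [
        "functional unit carrying out evaluation periodically",
        "general understanding of evaluation",
        "periodic contact with evaluation activities",
        "findings partially used",
        "budget folded into wider allocations",
    ]),
    ("in-progress routinisation", [
        "unit formally created but not yet fully active",
        "minimal understanding of evaluation",
        "occasional, project-driven contact with evaluation",
        "findings minimally used",
        "occasional, project-based budget",
    ]),
    ("low routinisation", [
        "no specialised evaluation structure",
    ]),
]

def assess(criteria_met: set) -> str:
    """Return the highest routinisation level whose criteria are all met."""
    for level, criteria in ROUTINISATION_LEVELS:
        if all(criterion in criteria_met for criterion in criteria):
            return level
    return "low routinisation"

# Example self-assessment of a hypothetical agency.
print(assess({
    "unit formally created but not yet fully active",
    "minimal understanding of evaluation",
    "occasional, project-driven contact with evaluation",
    "findings minimally used",
    "occasional, project-based budget",
}))  # -> "in-progress routinisation"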
The essence of the ECB approach is that in the case of organizations which have
already internalized a culture of evaluation, transforming evaluation into a routine
and continuously reshaping priorities involves considering the readiness of par-
ticipants, their motivations and expectations, organizational conditions, opportu-
nities they may or may not have to use their new evaluation knowledge and skills,
and the extent to which leaders encourage, coach, support, and resource their
evaluation activities (Preskill and Boyle 2008). Thus, their staff adopt a proactive
behaviour when undertaking activities with an evaluative character. What is more,
through the experience that they accumulate, managers and staff contribute to the
dissemination of experience to other actors within the system, both through institutional transfer mechanisms and through opportunities for becoming
independent evaluators, as in the case of the Phare Universitas 2000 Program, as
well as the Higher Education and Research Reform Program RO-4096. As a
conclusion for this stage, the process of transforming evaluation into a routine involves different levels of awareness and practices related to the most relevant topics that managers and staff have to deal with in order to continuously improve their activity.

Table 1 Levels of evaluation routinisation in organizations (authors)

High routinisation
• There is a functional evaluation unit within the organization, which systematically and actively carries out evaluation activities and whose members are open to continuous professional development opportunities related to their work
• Staff and management members have a comprehensive understanding of evaluation concepts, models, methods, uses and functions; they have access to internal learning resources and share common knowledge and skills with those within the organization and with experts from other organizations in the system
• Skills among management and staff are periodically assessed and continuously updated
• The contact of organization members with activities related to evaluation is frequent, being an integrated part of their work
• Evaluation activities are generally well conducted and implemented, and the difficulties which appear along the way are handled efficiently
• The evaluation findings are used for improving current activities of the organization, such as the implementation of policies or programs
• There is a stable evaluation budget at the organizational level, clearly delimited in the budgetary allocation, conceived on the basis of evaluation priorities and adequate for covering the costs implied by evaluation activities

Intermediate routinisation
• There is a functional evaluation unit within the organization, which periodically undertakes evaluation activities
• Staff and management members have a general understanding of evaluation concepts, models, methods, uses and functions
• The contact of organization members with activities related to evaluation is periodical, in order to respond to the major evaluation priorities
• Evaluation activities are implemented without any major problems, and the difficulties which appear along the way are generally well handled
• Evaluation findings are partially used for improving current activities of the organization
• The budget allocation for evaluation activities is included in the budgetary allocation for a wider range of activities within the organization

In-progress routinisation
• At a formal level an evaluation unit has been created within the organization and visible efforts are being made for it to become functional and active
• Staff and management members have a minimal understanding of evaluation concepts, models, methods, uses and functions
• The contact of organization members with activities related to evaluation is occasional, depending on the projects that will be implemented and which include an evaluation component
• The implementation of evaluation activities is faced with some problems which are more difficult to handle
• The evaluation findings are minimally used for improving current activities of the organization
• The budget allocation for evaluation activities is occasional, depending on the budgets of projects that will be implemented and which include an evaluation component

Low routinisation
• There is no structure specialized in evaluation within the organization
• Staff and management members have a poor understanding of evaluation concepts, models, methods, uses and functions
• The contact of organization members with activities related to evaluation is short and sporadic
• The implementation of evaluation activities is fractured, and major problems appear along the way
• The evaluation findings are not used for improving current activities of the organization
• There is no budget allocation for evaluation activities
3 Conclusions
There is a growing need in the Romanian higher education system to identify mechanisms for improving public policies and the way agencies exercise their responsibilities in decision making, policy planning and implementation, regulation, control or purely executive functions. Introducing evaluative activities into the usual practice of organizations requires reference points such as evaluation models, which are able to
transpose this practice into a set of systematic activities, with a clear methodology
and a useful purpose. We thus propose a logical framework for evaluation capacity
building based on a cyclical model of shaping evaluation priorities and developing
evaluation structures, selecting evaluation models, training evaluation skills, trans-
forming evaluation into a routine, and reshaping evaluation priorities. The frame-
work relies on the way in which evaluation practice can become a routine at the
micro level (within the organization) through expert team learning and organiza-
tional learning processes and diffuses at macro level (within the system) through
system learning and interactions at the system level. In spite of the fact that, in the case of organizations within the Romanian higher education system, ECB is not yet institutionalized as a current practice for improving the way in which programs are implemented, assumed objectives are reached and services are offered, several relevant examples were offered while presenting the steps of the logical framework which prove that in certain regulatory and executive higher education agencies the practices specific to ECB are routinized and
are even gradually diffused towards other agencies and consultative bodies
within the system, offering at the same time a context for mutual learning. Learning
through evaluation means that the evaluation process does not end when the final
results are identified, implying, instead, prospective thinking about the next period of
programming and implementation making use of the knowledge and experience
which have been gained, and, ultimately, restarting the evaluation cycle.
Open Access This chapter is distributed under the terms of the Creative Commons Attribution
Noncommercial License, which permits any noncommercial use, distribution, and reproduction in
any medium, provided the original author(s) and source are credited.
References
Andreescu, L., Gheorghiu, R., Zulean M., & Curaj, A. (2012). Systemic foresight for Romanian
higher education. In A. Curaj, P. Scott, L. Vlăsceanu, & L. Wilson (Eds.), European higher education at the crossroads: Between the Bologna process and national reforms (pp. 995–1017). London: Springer.
Bărbulescu, I. G., Toderaş, N., & Ion, O. A. (2012). Purposes and criteria for evaluating the way in
which the responsiveness principle is implemented within public organizations. Case-study:
Romanian universities. Quality Assurance Review for Higher Education, 4(2), 99–108.
Berends, H., Boersma, K., & Weggeman, M. (2003). The structuration of organizational learning.
Human Relations, 56, 1035–1056.
Bourgeois, I., & Cousins, J. B. (2013). Understanding dimensions of organizational evaluation
capacity. American Journal of Evaluation, 34, 299. First published online 2 May 2013. doi:10.1177/1098214013477235.
Cerkez, M. (2009a). Introducere în teoria şi practica evaluării programelor şi politicilor publice. In
M. Cerkez (Ed.), Evaluarea programelor şi politicilor publice (pp. 17–53). Polirom: Iaşi.
Cerkez, Ş. A. (2009b). Construirea capacității de evaluare la nivelul sectorului public din România.
In M. Cerkez (Ed.), Evaluarea programelor şi politicilor publice (pp. 117–141). Polirom: Iaşi.
Cerkez, M. (2010). Defining quality in higher education–practical implications. Quality Assurance
Review for Higher Education, 2(2), 109–119.
Chen, H. T. (2005). Practical program evaluation: Assess and improve program planning,
implementation, and effectiveness. Thousand Oaks, CA: Sage.
Compton, D., Baizerman, M., Preskill, H., Rieker, P., & Miner, K. (2001). Developing evaluation
capacity while improving evaluation training in public health: The American Cancer Society’s
Collaborative Evaluation Fellows Project. Evaluation and Program Planning, 24, 33–40.
Dabelstein, N. (2003). Evaluation capacity development: Lessons learned. Evaluation, 9(3),
365–369.
Fetterman, D. M. (1994). Empowerment evaluation. Evaluation Practice, 15(1), 1–15.
Guba, E. G., & Lincoln, Y. S. (1989) Fourth generation evaluation. Newbury Park, CA: Sage.
Nevo, D. (2006). Evaluation in education. In I. F. Shaw, J. C. Greene, & M. M. Mark (Eds.),
Handbook of evaluation. Policies, programs and practices (pp. 441–460). London: Sage.
Parlett, M., & Hamilton, D. (1977). Evaluation as illumination: A new approach to the study of
innovatory programmes. In D. Hamilton, et al. (Eds.), Beyond the numbers game: a reader in
educational evaluation (pp. 6–22). London: Macmillan.
Patton, M. Q. (1986) Utilization-focused evaluation (2nd ed.). Newbury Park, CA: Sage.
Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: Sage.
Păunescu, M., Miroiu, A., & Vlăsceanu, L. (Eds.). (2011). Calitatea învăţământului superior
românesc (pp. 24–42). Iaşi: Polirom.
Evaluation Capacity Building …103
Păunescu, M., Florian, B., & Hâncean, G. M. (2012). Internalizing quality assurance in higher
education: Challenges of transition in enhancing the institutional responsibility for quality. In A.
Curaj, P. Scott, L. Vlăsceanu, & L. Wilson (Eds.), European higher education at the crossroads:
between the Bologna process and national reforms (pp. 317–337). London: Springer.
Perianu, E. (2008). Politicile publice în România. De la cultura raportării la cultura evaluării. In C.
Crăciun & P. E. Collins (Eds.), Managementul Politicilor Publice: Transformări şi perspective
(pp. 267–288). Iaşi: Polirom.
Preskill, H., & Torres, R. T. (1999). Building capacity for organizational learning through
evaluative inquiry. Evaluation, 5(1), 42–60.
Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building.
American Journal of Evaluation, 29, 443–459.
Preskill, H. (2013) Now for the hard stuff: Next steps in ECB research and practice. American
Journal of Evaluation, published online 9 August 2013. DOI: 10.1177/1098214013499439.
Rossi, P. H., & Freeman, H. E. (1985). Evaluation: A systematic approach. Beverly Hills, CA:
Sage.
Scriven, M. (1967). The methodology of evaluation. In R. W. Tyler, R. M. Gagne, & M. Scriven
(Eds.), Perspectives of curriculum evaluation (pp. 39–83). Chicago, IL: Rand McNally.
Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.
Stake, R. E. (1975). To evaluate an arts program. In R. E. Stake (Ed.), Evaluating the arts in
education: A responsive approach (pp. 13–31). Colombus, OH: Merrill.
Taylor-Ritzler, T., Suarez-Balcazar, Y., Garcia-Iriarte, E., Henry, D. B., & Balcazar, F. (2013).
Understanding and measuring evaluation capacity: A model and instrument validation study.
American Journal of Evaluation, 34, 190–206.
Vlasceanu, L., Neculau, A., Miroiu, A., Mărginean, I., Potolea D. (Eds.). (2002) Şcoala la
răscruce. Schimbare şi continuitate în curriculumul învăţământului obligatoriu. Studiu de
impact (Vol. 1 şi 2). Iaşi: Polirom.
Stufflebeam, D. L. (2002) Institutionalizing evaluation checklist. Evaluation Checklists Project,
Western Michigan University, The Evaluation Center, www.wmich.edu/evalctr/checklists
Vîiu, G. A., Vlăsceanu, M., & Miroiu, A. (2012). Ranking political science departments: The case
of Romania’.Quality Assurance Review for Higher Education, 4(2), 79–97.
Vlăsceanu, L., Miroiu, A., Păunescu, M., & Hâncean, M. G. (2011). Barometrul Calităţii 2010. Starea calităţii în învăţământul superior din România. Braşov: Editura Universităţii Transilvania din Braşov.
Volkov, B. B. & King, J. A. (2007). A checklist for building organizational evaluation capacity,
Evaluation Checklists Project, Western Michigan University, The Evaluation Center, www.
wmich.edu/evalctr/checklists.
The Law no. 87/2006 on Quality of Education.
The Law no. 1/2011 on National Education.