The Quest for Quality in Higher
Education: Is There Any Place Left
for Equity and Access?
Gabriel-Alexandru Vîiu and Adrian Miroiu
Keywords: Quality assessment · Accreditation · Funding policy · Principal-agent theory · Equity and access
1 Introduction
Over the past two decades Romanian higher education has been the subject of
numerous policy reforms aiming to increase the overall quality[1] and performance of
higher education institutions (HEIs): two comprehensive laws on education and
several second-order legislative acts intended to transform the sphere of higher
education into a modern system of teaching and research. Funding was one of the
most important mechanisms used to stimulate universities in the direction of
improved performance. Throughout the article elements of student equity and
access which operate within the framework of quality assessment are highlighted
and the impact of this framework on the funding process is evaluated. The article
mainly focuses on public HEIs because, unlike private institutions, which do not rely on state funding, public universities are particularly sensitive to shifts in funding policy.
Following a brief section outlining the theoretical framework used in the paper,
two distinct but related subjects are discussed. First, the early efforts undertaken by
G.-A. Vîiu (✉) · A. Miroiu
Department of Political Science, National University of Political Studies and Public Administration (SNSPA), Bucharest, Romania
e-mail: gabiviiu@yahoo.com
A. Miroiu
e-mail: admiroiu@snspa.ro; ad_miroiu@yahoo.com
[1] In this article we adopt a broad definition of quality: it represents "a multi-dimensional, multi-level, and dynamic concept that relates to the contextual settings of an educational model, to the institutional mission and objectives, as well as to specific standards within a given system, institution, programme, or discipline" (Vlăsceanu et al. 2007).
© The Author(s) 2015
A. Curaj et al. (eds.), Higher Education Reforms in Romania,
DOI 10.1007/978-3-319-08054-3_9
the government in the nineties to implement a system of accreditation for all higher
education institutions and the subsequent transformation of the initial accreditation
scheme into a broader process of quality assurance are analysed. Second, the efforts
of policymakers to define and integrate aspects of quality in the distribution of state
funds to Romanian public universities are described with reference to the recent
implications of the national process of university classification and study pro-
gramme ranking for issues of quality, funding and equity. This approach is meant to
reflect the general process through which the study programmes in the Romanian
public HEIs come to operate: first they must be accredited (a process which, inter alia, secures financial support from the state); second, they must meet further per-
formance and quality requirements which determine the financial allocations they
can secure for their subsequent activities. Throughout the article the evolution of
accreditation, quality assurance and funding is deconstructed using the framework
of principal-agent theory in order to illustrate specific problems typical in the
governance of higher education.
2 Theoretical Considerations
As summarized by Moe (1984), the principal-agent model is a theoretical tool
initially developed in the field of economics that postulates a contractual relation-
ship between two parties: a principal who is interested in providing certain out-
comes and an agent that the principal entrusts with the operational tasks needed to
achieve these desired outcomes. The model assumes that the parties are rational.
Therefore, it must take into account the fact that the agent has his own interests
(which may be different from the principal’s interests), and so he pursues the
principal’s objectives only to the extent that the incentive structure imposed in their
contract renders such behaviour advantageous. The principal’s chief dilemma is
therefore that of defining the incentive structure in such a way that the agent is
compelled to pursue the preferences of the principal, i.e. to provide the outcomes
specified in the contract.
However, this problem is further compounded by a specific feature of the
relationship between the principal and the agent, namely asymmetric information
manifest in the fact that agents possess information that the principal does not have
(or that could only be acquired at great and unfeasible cost). Asymmetric infor-
mation brings about two important problems (Lane and Ersson 2000): ex ante
adverse selection of agents resulting from hidden information, and ex post moral
hazard resulting from hidden actions taken by the agent without the knowledge of
the principal. In the first case, the principal may decide to enter a contract with an
agent he may only later find is not suited to accomplish his desired outcomes; in the
second case, even if adverse selection has been avoided, the principal may find he is
confronted with an agent that does not strictly adhere to the terms of the initial
contract. The main concern of principal-agent theory is therefore to find solutions to both adverse selection and moral hazard.[2] Because it tends to view the
behaviour of the agent as primarily opportunistic and self-interested, the theory
identifies various instruments needed to counteract the potentially opportunistic
behaviour of the agent. Three common instruments are available to the principal: monitoring or surveillance, risk-sharing contracts, and retaliation (Lane 2008).
Two separate traditions in the application of principal-agent models can be
discerned (Miller 2005; Lane and Kivistö 2008): on the one hand there are the canonical economic versions of principal-agent theory; on the other hand there is a distinct political science perspective which has relaxed some of the more rigid assumptions of the economic version. According to this perspective, the contract between the parties is implicit in nature; the perspective focuses on both agent and principal, considers that all actors (i.e. all principals and agents) are motivated by economic utility as well as political power, and acknowledges the existence of multiple and collective principals, as well as the possibility that intermediary agents and prin-
cipals can exist between a primary principal and agent. In addition, the political
science-driven principal-agent model considers that a social/political contract is the
principal’s primary mode of control; it also recognizes that the output of the con-
tractual relationship is a public (rather than a private) good, and, lastly, admits that
shirking (the agent's wilful neglect of his responsibilities toward the principal) need not be a consequence of individual action alone, but may also result from structural considerations, especially in cases involving multiple principals and
agents where information is not properly communicated.
In its most general form, the principal-agent model can be used in political
science to represent the relationship between the population of a given country (the
principal) and its government (the agent that has to provide specific public goods
and services). However, the model has a much wider range of application. In
particular, this article explores how adverse selection, moral hazard and information
asymmetry have had direct implications for the operation of Romanian HEIs over
the past two decades and how they have shaped governmental policies in the field
of quality assurance and funding of the higher education system.[3] Throughout this
paper accreditation is considered as a specific screening device that governments employ in order to select which universities they support from the state budget; the funding policy makes up the reward rules that frame the interactions of government and accredited HEIs; and periodic quality assessment of universities acts as a monitoring device. In such a setting the government is the
principal and HEIs are agents[4] entrusted with specific outcomes (creation and
[2] Note that the principal-agent model is a particularly useful tool in discussing both the screening devices employed by principals to select an agent prior to entering a contract and the subsequent reward rules that govern the relationship between the principal and the agent (Petersen 1993).
[3] See also Kivistö (2005, 2007) for another application of principal-agent theory to the investigation of educational policies.
[4] To be more precise, the government is the primary principal and HEIs are the primary agents; various intermediary principals and agents may exist between the primary ones. For example, the national agencies responsible for accreditation, quality assurance and funding may be viewed as intermediary principals (in their relation to individual HEIs), but also as intermediary agents (in their relation to the government).
dissemination of knowledge, preparation and training of skilled individuals for the
labour market, etc.). Accreditation thus becomes an instrument in solving the
problem of adverse selection, while differential (quality or performance-based)
funding and monitoring through periodic assessment are solutions to the problem of
moral hazard.
3 Approaches to Quality Assurance in Romanian Higher Education
Consistent with the overall pattern in Central and Eastern Europe where, at least
initially, the “predominant approach to assuring quality in higher education has
been accreditation by a state-established agency" (Kahoutek 2009), early concep-
tions of quality assurance processes in the Romanian higher education system seem
to have been very narrowly identified with the process of accreditation. In general,
accreditation has at least two crucially important financial implications for HEIs
(Schwarz and Westerheijden 2007): first, it may function as a prerequisite for
funding; second, it makes institutions and programmes that have accreditation
status more attractive to students and can therefore indirectly increase institutional
funding in systems where funding depends on student numbers. Both implications apply to Romania. A discussion of accreditation is therefore important, both in itself and for its bearing on student equity and access.
As Scott points out, issues of quality assessment, accreditation and evaluation
became common themes in Central and Eastern European higher education fol-
lowing the collapse of communism, with most quality systems in the region being
adapted from West European or American models (Scott 2000). Consistent with
this depiction, the issue of quality in Romanian higher education began to emerge
as a pressing concern during the early nineties when the country embarked on the
difficult transition from a centralized socialist system to a democratic society. Like
most other Central and East European countries, following the economic and
political liberalization brought about by the fall of communist rule Romania wit-
nessed a rapid expansion of private higher education suppliers[5] which eventually
demanded a governmental response. The main problem in this turbulent period was
that numerous corporations started declaring themselves as suppliers of higher
education services in a volatile setting where “no criteria or standards existed for the
coordination of private initiative in the field of higher education" (Korka 2009).
From an agency perspective, the unchecked proliferation of private HEIs put early
[5] There is no consensus regarding the exact number of private HEIs operating during the initial years of transition, but estimates range from a few dozen to more than 250.
governments in a position where they were faced with a typical adverse selection
problem in that they could not know which agents (HEIs and other new suppliers)
offered quality educational services.[6] A corollary of this situation was that gov-
ernment had difficulties in deciding how to distribute state funds to support the
newly-established higher education providers.
In order to solve the increasing problem of adverse selection, the government’s
initial response, enacted through the Law no 88/1993 regarding the accreditation of
higher education institutions and the recognition of diplomas, sought to establish
firm rules regarding what type of entities were officially sanctioned to provide
higher education services. The law established a state supported National Council
for Academic Evaluation and Accreditation (CNEAA) which was invested with the
task of temporarily authorizing and then accrediting institutions and study pro-
grammes which met certain minimum standards regarding teaching staff and other
input criteria.[7]
The law, however, made no explicit reference to the notion of quality; it was
strictly concerned with the process of accreditation and, to a lesser extent, with a
distinct process labelled “periodic academic evaluation”, a process which was
tantamount to (periodic) external quality assessment.[8] The law's main positive contribution was its remarkable success in combating the chaos of the early post-communist higher education landscape (Miroiu 1998); nonetheless, it also
suffered from several shortcomings. Korka (2009) mentions three of them: the
neglect of mechanisms of internal quality control; apparent quality homogeneity of
study programmes;[9] the lack of any substantial difference between initial
[6] In this article we limit the application of agency theory to a top-down approach, where HEIs are the agents of government. But if we change the perspective to a bottom-up approach, the population of prospective students is the principal and HEIs are its agents. This principal is also confronted with the same pressing problem of adverse selection. When searching for adequate agents to meet their desired outcomes, principals may appeal to accreditation: in a rapidly changing environment it is an efficient signalling device employed by universities to communicate good quality to prospective students (Batteau 2006).
[7] It is important to note that CNEAA could not grant authorization or accreditation itself but, based on its evaluations, could only make proposals; the formal power to temporarily authorize an institution remained in the hands of the government, which was also in charge of elaborating proposals to Parliament for the accreditation of new institutions. CNEAA's successor, ARACIS, is in a similar situation.
[8] The law therefore also incorporated an element of monitoring, but it is important to note that it openly discriminated against newly established (private) institutions, as they were the only ones obliged to go through the procedures of institutional accreditation. The (public) universities already operating before the regime change of 1989 were only subject to monitoring through the periodic evaluation of their study programmes, which was to be conducted at five-year intervals. However, all HEIs were on an equal footing whenever a new study programme was initiated.
[9] In a logic of path dependence, this apparent homogeneity initially triggered by accreditation procedures can be seen as a first expression of a later phenomenon already well documented by Romanian scholars (Miroiu and Andreescu 2010; Vlăsceanu et al. 2010; Păunescu et al. 2011; Florian 2011), namely structural isomorphism, whereby HEIs mimic each other in terms of the study programmes they offer.
accreditation and subsequent periodic evaluation.[10] All of these were further
compounded by the fact that quality evaluation was virtually neglected in practice
because of the more pressing tasks of authorization and accreditation (Vlăsceanu
2005). Put differently, throughout the first decade of transition the
government was more preoccupied with the problem of adverse selection (which it
did solve with the aid of CNEAA) than with recurring issues of moral hazard.
Quality in this stage was defined strictly in terms of compliance with a set of
minimum standards to be attained in order to secure entry into the market of higher
education providers. As noted by other authors, Law no 88/1993 “appeared rather
as a response to the market evolution than as part of higher education policy”
(Nicolescu 2007). Subsumed under this narrow interpretation of accreditation, quality
had no nuances and functioned solely on the dichotomous logic of approval
(authorization/accreditation) and rejection: those institutions meeting the minimum
requirements were accredited (and therefore considered to be of quality), while
those that did not were excluded from the system.
The need to reconfigure the normative framework regarding quality assessment
began to emerge as an important concern following Romania's accession to the
Bologna Process, given the specific objective of establishing a European dimension
in quality assurance. Only a month after the Bologna Declaration was signed, Law
no 88/1993 was amended by Law no 144/1999 which, although offering virtually
no conceptual elaboration, introduced the notion of "quality assurance" as such in
the legal framework governing higher education. Following the amendments made
through Law no 144/1999 quality assurance came to be an objective of CNEAA,
although evaluation and accreditation remained the Council’s main focus. It would
take another 6 years, however, for a more substantial conception of quality
assurance to be implemented by Romanian policymakers.
Following the European drive for convergence of higher education systems,
alongside a number of other structural reforms[11] meant to implement the Bologna
objectives, a Cabinet Emergency Ordinance issued in 2005 (and subsequently
endorsed by Parliament and enacted as Law no 87/2006) was specifically devoted
to the issue of quality assurance in education. The new law marked, at least in
formal terms, a visible turn in the process of quality assurance: it made a firmer
distinction between accreditation and quality assurance (accreditation was now
explicitly defined as “a component of quality assurance”); it differentiated between
internal and external quality assurance following the Standards and Guidelines for
Quality Assurance in the European Higher Education Area; it outlined a meth-
odology for quality assurance and explicitly listed the domains and criteria
[10] Although mentioned in the law as separate processes, both accreditation and periodic academic evaluation relied on the same standards and criteria. Moreover, the periodic evaluation only entailed verification that the standards set for initial accreditation were still met by a particular study programme in a HEI several years after accreditation had been secured.
[11] These included, for example, the introduction of the ECTS system; the implementation of the 3-2-3 system for bachelor, master and doctoral studies; and the introduction of the Diploma Supplement.
encompassed by this methodology; it instituted the obligation of HEIs to create a
commission responsible for internal evaluation and quality assurance; and it created the Romanian Agency for Quality Assurance in Higher Education (ARACIS), which was to supersede CNEAA[12] and which would operate as an autonomous institution.[13] The law also contained a provision of meta-accreditation because it required
ARACIS itself to submit to a periodic process of international evaluation.
From a structural point of view, the new methodology[14] for external evaluation comprised three broad domains (institutional capacity, educational efficacy and quality management), each with distinct criteria to which standards and corre-
sponding performance indicators were attached. A total of 43 distinct performance
indicators were specified (10 for institutional capacity; 16 for educational efficacy
and 17 for quality management). The methodology made a further distinction
between minimum (obligatory) standards and reference (optimal) standards. In
order to secure authorization or accreditation an institution had to meet the mini-
mum level for all standards. Failure to comply with the minimum level for even a single performance indicator precluded authorization or accreditation.
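As a rough illustration of this all-or-nothing rule, the hedged sketch below treats accreditation as a check that every indicator reaches its minimum level; the indicator names and values are hypothetical, not taken from the ARACIS methodology itself.

```python
# Hedged sketch of the minimum-standards rule: every indicator must reach its
# minimum level; a single failure blocks authorization/accreditation.
# Indicator names and values are hypothetical illustrations.

def meets_minimum_standards(scores: dict, minimums: dict) -> bool:
    """Return True only if every performance indicator reaches its minimum level."""
    return all(scores.get(indicator, 0.0) >= floor for indicator, floor in minimums.items())

minimums = {"teaching_staff_ratio": 1.0, "student_services": 1.0}
print(meets_minimum_standards({"teaching_staff_ratio": 1.2, "student_services": 0.9}, minimums))  # False
print(meets_minimum_standards({"teaching_staff_ratio": 1.2, "student_services": 1.1}, minimums))  # True
```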
In the context of this paper it is worth noting that the methodology of ARACIS
includes indicators specifically associated with elements that could be considered as
part of a broader concern with student equity and access, in that they specify
general student facilities and various types of services which must be provided by
HEIs. It should also be mentioned that these indicators are among the few explicit
(albeit indirect) constraints imposed by law on HEIs with respect to equity and
access issues: (1) the system of scholarship allocation and other forms of financial
aid for students. As a minimum standard, this indicator requires the existence and
consistent application of clear regulations for awarding scholarships; as a standard
of reference, however, the indicator outlines as desirable that at least 10–20 % of
the institution’s resources be devoted explicitly to a scholarship fund. Another
relevant indicator is (2) incentive and remediation programmes which, as a mini-
mum, specifies that a university should have programmes that further encourage
students with high performance but, additionally, that it also have programmes to
support those with difficulties in learning;[15] as a desirable standard of reference, the
[12] According to the new law ARACIS has two distinct departments: one for accreditation and one for external evaluation; the department for accreditation took over the attributions of CNEAA.
[13] CNEAA had previously been subordinated to the Romanian Parliament.
[14] See the Methodology for External Evaluation, Standards, Standards of Reference, and List of Performance Indicators of The Romanian Agency for Quality Assurance in Higher Education, available at http://www.aracis.ro/fileadmin/ARACIS/Proceduri/Methodology_for_External_Evaluation.pdf.
[15] The methodology does not give any explanation regarding what "students with learning difficulties" represent; they could simply be students with lower levels of performance who need extra guidance to reach the standards set by the faculty, or they could be students with certain disabilities; in the latter case a certain level of affirmative action could be implied in the use of the indicator.
methodology mentions the existence of supplementary tutorial programmes. A final
indicator we wish to note is (3) student services which, as a minimum, states that
universities are required to have social, cultural and sport services; a particularly
noteworthy fact is the explicit provision that the university must offer (again as a
minimum) housing for at least 10 % of its students.
These indicators show that, within the context of quality assessment, there are at least some elements of potential relevance to equity and access which universities must take into account. However, without comprehensive
data regarding the individual universities' actual attainment of specific values for
the performance indicators (especially in terms of reference standards), no systemic
judgement can be made as to whether or not universities provide sufficient services and support for all the students who require them.
Since it began its activities, ARACIS has analysed more than three thousand bachelor and master study programmes and has also completed the institutional external evaluation of more than ninety universities. Equally important, its annual activity reports indicate that it has also undertaken the task of periodic evaluation (of both institutions and their study programmes). This signals that, unlike its predecessor CNEAA, which was mostly concerned with the problem of adverse selection, ARACIS is also preoccupied with issues of moral hazard, which can arise when universities or their study programmes fail to continuously meet the initial standards which served as the basis for their accreditation. However, the efforts of
ARACIS to instil a quality culture in Romanian HEIs seem to have met with limited
success, as evidenced, for example, by the finding that the institutional commis-
sions for internal evaluation and quality assurance only have a discontinuous, quasi-
formal activity (Vlăsceanu et al. 2010), an element which points to shirking on the
part of HEIs. Overall, despite the intentions of policymakers, compliance with the minimum standards specified in the methodology of ARACIS is still the prevalent form of quality assurance, which thus remains "preponderantly administrative, decoupled from (organic) processes of learning and teaching" (Păunescu et al. 2011).
4 Quality and Funding
However important for purposes of evaluation and accreditation, the methodology
devised by ARACIS has not been the sole instrument of assessing (or indeed
rewarding) quality in Romanian higher education. In order to present a more
complete picture of quality assessment, special attention must also be paid to a
second aspect: the way in which (public) universities have actually been financed
by the government. In this context, our paper focuses on only one feature of the
evolution of the Romanian funding mechanism, namely the quality components
used by the National Higher Education Funding Council (CNFIS) to distribute
basic funding to universities.
The term basic funding was introduced in 1999 alongside a separate notion, that
of complementary funding,[16] through a policy that marked the transition from an
approach whereby public universities received funding “according to principles
more or less inherited from the socialist period" (Miroiu and Aligica 2003) to a new
mechanism of formula-based funding. With the introduction of formula-based
funding the number of enrolled students became central to the funding scheme: the
amount of funding received by a university became contingent on the number of
physical students it enrolled, following a formula which attached different equivalence coefficients to each programme level (bachelor, PhD, etc.) and different cost indicators to each field of study (medical, technical, economic, etc.). The funding
formula in effect translated the physical students a university had: first into
equivalent students and then into unitary equivalent students; these could then be
used to determine funding for each distinct university.
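Purely as an illustration of this two-step conversion, the hedged sketch below converts physical students into equivalent and then unitary equivalent students and allocates a budget in proportion to the result; all coefficient and cost-indicator values are invented for illustration and are not the actual CNFIS figures.

```python
# Hypothetical sketch of formula-based funding: physical students are converted into
# equivalent students (per programme level) and then into unitary equivalent students
# (per field of study); funding is allocated in strict proportion to the latter.
# All coefficient values below are invented for illustration.

EQUIVALENCE = {"bachelor": 1.0, "master": 2.0, "phd": 3.0}              # per programme level
COST_INDICATOR = {"economic": 1.0, "technical": 1.75, "medical": 2.25}   # per field of study

def unitary_equivalent_students(enrolments):
    """enrolments: list of (level, field, physical_students) tuples for one university."""
    total = 0.0
    for level, field, physical in enrolments:
        equivalent = physical * EQUIVALENCE[level]        # step 1: equivalent students
        total += equivalent * COST_INDICATOR[field]       # step 2: unitary equivalent students
    return total

def allocate(budget, universities):
    """Distribute a budget in proportion to each university's unitary equivalent students."""
    ues = {name: unitary_equivalent_students(e) for name, e in universities.items()}
    grand_total = sum(ues.values())
    return {name: budget * value / grand_total for name, value in ues.items()}

universities = {
    "U1": [("bachelor", "economic", 3000), ("master", "economic", 500)],
    "U2": [("bachelor", "medical", 1500), ("phd", "medical", 200)],
}
print(allocate(100_000_000, universities))
```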
Although a remarkable break from previous practices, the initial formula for
allocation of funds took a strictly quantitative approach, relying on the single dimension of physical students, with the consequence that universities
received funding in strict proportion to their number of (unitary equivalent) students
(CNFIS 2007). The formula-based funding mechanism had two important conse-
quences for universities (Vlăsceanu and Miroiu 2012): it put universities in a
position to autonomously use their budget and it stimulated them to reduce oper-
ating costs; however, most universities reduced costs by decreasing the amount and
the quality of facilities offered to students and by increasing the student/staff ratio
(instead of developing a more responsible scheme for cost control). In this context
the following potential access and equity paradox can be noted: since a university
received funding in accordance with its number of students, it had direct and
powerful incentives to enrol as many students as possible to ensure its survival;
however, the more students it enrolled, the less it was able to provide them with
adequate facilities and services.
Aware of this danger, policymakers began experimenting with a way to directly
build into the funding formula a series of quality measures: starting in 2003 the
formula incorporated several quality indicators which were meant to stimulate
differential funding based on measurable aspects of institutional performance. After their introduction in 2003 the number and complexity of the indicators grew continually, as did, more importantly, the amount of funding determined through their use. Between 2003 and 2011 the number of indicators increased from 4 to 17 (some having a complex structure determined by numerous sub-indicators). At the same time, the share of basic funding these quality indicators determined expanded from 12.7 % to 30 %.
[16] Basic funding (which still represents the better part of the public financial support received by universities) covered the salaries of university personnel and various material costs, while complementary funding referred to subsidies for students, funds for equipment and major renovation, and funds allocated on a competitive basis for scientific research. The two notions appeared in a major amendment of the Law of education no 84/1995, which was passed by Parliament in June 1999.
Starting in 2003 the total amount of basic funding was thus divided into two
distinct components: a quantitative component relying on the number of students
and a qualitative component influenced by the universities' individual level of
performance. The quality indicators were grouped into categories mainly dealing
with the following issues: (1) teaching staff; (2) research; (3) material resources; and (4) academic and administrative management of the university. Table 1 below
provides a detailed list of these indicators and their individual weight in the process
of allocating funding during three distinct years: 2006, 2009 and 2011; this is a
period when the overall structure of the methodology used by CNFIS stabilized and
yearly revisions focused more on the individual weights attached to the indicators,
rather than on their content. Although an exhaustive description and treatment of
these indicators is outside the scope of this paper, it is important to emphasize
several aspects.
To begin with, it is obvious from the development of the indicators and their
growing significance in funding allocation that there is a clear trend toward
increased quality assessment leading to greater competition between universities.
This competition is not only the result of monetary rewards (which need not always
be substantial) but may also appear due to added legitimacy associated with higher
scores which can serve as a powerful motivator for universities to improve their
performance (Miroiu and Andreescu 2010). From an agency theory perspective,
however, incorporation of such performance-oriented funding is a direct expression
of concern with moral hazard problems resulting from a stable setting in which
public universities, once accredited, receive funding in accordance with their
number of students and therefore have no incentive to improve their performance.
Thus, changes in the funding mechanism are actually equivalent to a restructuring
of the incentive system devised by the principal in order to assure accountability of
the agents and greater competition among them.
A second aspect that merits attention is the nature of the distribution implied by
the funding formula once the qualitative indicators were introduced: funding partly
became a zero-sum game in which losses of one university with low scores on
quality indicators were gains to another that had superior performance. However,
because within the funding mechanism it was necessary to avoid treating universities as "a-dimensional entities" (Ţeca 2011), the number of students (the quantitative component, which already determined the better part of the total amount
of basic funding) also had a powerful indirect influence on the qualitative side of the
funding formula. In other words, within the framework of the zero-sum game
determined by quality indicators, the quantitative aspect still played an important
role, in effect determining the size of the reward (or penalty) for each university.[17]
[17] It is important to note that funding received by any individual university was not based on the absolute value (actual score) of its quality indicators, but on their relative value; to determine this relative value the absolute scores of each university were compared to those of all other universities within a formula that factored in the dimension of the university, expressed as its total number of unitary equivalent students. Therefore two universities with very similar scores on a quality indicator could receive very different funding because of their different number of students; in effect small universities could win or lose much less than larger universities.
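Footnote 17 describes the general logic rather than the exact CNFIS formula; purely as a hedged illustration of that logic, the sketch below distributes a fixed quality "pot" in proportion to each university's indicator score weighted by its size, so that identical scores translate into very different sums for small and large institutions. The function name and all figures are invented for illustration.

```python
# Illustrative sketch only; the actual CNFIS formula was more elaborate.
# Each university's share of the quality pot is proportional to (indicator score x size),
# where size is its number of unitary equivalent students.

def quality_allocation(quality_pot, universities):
    """universities: {name: (indicator_score, unitary_equivalent_students)}"""
    weights = {name: score * size for name, (score, size) in universities.items()}
    total = sum(weights.values())
    return {name: quality_pot * weight / total for name, weight in weights.items()}

# Two universities with identical scores but different sizes receive very different amounts.
print(quality_allocation(10_000_000, {"small": (0.8, 2_000), "large": (0.8, 20_000)}))
```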
Table 1 Quality indicators and the per cent of total basic funding they determined in 2006, 2009 and 2011 (values given as 2006 / 2009 / 2011, in %)

1. Teaching staff
IC1 Ratio between full-time teaching staff and students: 3.50 / 3.00 / 3.00
IC2 Ratio between professors and students: – / 1.00 / 1.00
IC3 Ratio between associate professors and students: – / 1.00 / 1.00
IC4 Ratio between teaching staff having a PhD title and students: 1.00 / 1.50 / 1.50
IC5 Ratio between teaching staff below 35 years of age and students: 1.50 / 2.00 / 2.00

2. Impact of research on the teaching process
IC6 Level of performance in scientific research (complex structure): 3.00 / 7.00 / 7.00
IC7 Percent of students in master and doctorate programmes within the total number of students: – / 1.00 / 1.00
IC8 Percent ratio between the value of research contracts and the university's total income: 0.50 / 1.00 / 1.00

3. Material resources
IC9 Ratio between expenses with endowments and investment and the number of physical students: 1.00 / 1.50 / 1.00
IC10 Ratio between material expenses and the number of physical students: 1.00 / 1.00 / 1.00
IC11 Ratio between expenses with books, journals and manuals and the number of physical students: 0.50 / 1.00 / 1.00

4. University management
IC12 Percent of investment expenses within the total budgetary allocation received by universities for this purpose: 0.50 / 0.50 / 0.00
IC13 Overall quality of academic and administrative management (complex structure): 3.00 / 3.00 / 3.75
IC14 Percent of income gained from sources other than budgetary allocation within the total income of the university: 1.50 / 2.00 / 2.00
IC15 Percent of income gained from sources other than budgetary allocation utilized for institutional development within the total income of the university: 1.00 / 1.50 / 1.50
IC16 Quality of social and administrative student services (complex structure): 2.00 / 2.00 / 2.00

Total per cent of basic funding determined: 20.00 / 30.00 / 30.00

Source: CNFIS (2006), (2009) and (2011)
Notes: For quality indicators 1 through 7, "students" should be read as "unitary equivalent students". Quality indicators 1, 2 and 7 were present in the 2006 proposal of CNFIS but were not used in funding allocations, following consultations with representatives of the Ministry of Education. Quality indicator 12 for the year 2006 referred to the number of computers owned by the university per 1000 full-time students, not to investment expenses. In 2011, 0.25 % of the total 30 % allotted to the quality component was distributed following a newly introduced indicator (IC17) regarding lifelong learning.
A final aspect worthy of mention is a certain shift in emphasis noticeable
towards the end of the period during which quality indicators were used: although
most indicators maintained a relatively constant weight throughout the entire period
(see in particular quality of social and administrative student services which
determined 2 % of the total amount of basic funding and which was mainly con-
cerned with student dormitories), one indicator (the level of performance in sci-
entific research) more than doubled in size. It had a complex structure, meaning it
was actually made up of many other sub-indicators dealing with items such as the
number of articles or books published by university staff and, compared to all other
indicators, it was responsible for the largest amount of funding distributed on the
grounds of quality assessment.
Although research played an important role in the funding allocations, starting in
2012 it came to have an even more prominent role in the higher education land-
scape following the introduction of the new comprehensive law on education (Law
no 1/2011). This law required all universities to be classified into three distinct
categories and all study programmes to be ranked according to their performance.
Following a thorough evaluation, a university could be classified as focused on
teaching, as focused on teaching and research, or as a research intensive university.
In addition, all individual study programmes of accredited HEIs were ranked into
five distinct categories ranging from A (best performance) to E (lowest perfor-
mance).
18
The methodology
19
used in the process of university classification and
[18] Unlike the classification of universities, which was intended to be functional and non-hierarchical in nature, the ranking of study programmes was expressly intended to be hierarchical, in order to differentiate between the best programmes and those that had lower levels of performance.
[19] See OMECTS 5212/2011.
study programmes ranking relied on more than 60 distinct indicators grouped into
four main criteria: (1) research performance; (2) teaching; (3) relation to the external
environment; and (4) institutional capacity. Research was particularly important as
it had a global weight ranging from 40 % (in the case of arts and humanities and
certain social sciences) to 60 % (for mathematics, engineering and biomedical
sciences).
In accordance with the new law, a first (and for the time being only) compre-
hensive evaluation of the universities and their study programmes was conducted in
2011.[20] The general structure of public funding devoted to universities also chan-
ged: in addition to basic and complementary funding a new category of supple-
mentary funding was introduced (equivalent to 30.5 % of the basic funding),
together with a distinct institutional development fund. Supplementary funding was
further divided among three major components: (1) supplementary funding for
excellence which accounted for 25 % of basic funding and which can be seen as a
successor to the previous idea of distributing funds based on quality indicators; (2)
preferential funding for master and PhD programmes in advanced science and
technology, for programmes taught in foreign languages and for jointly supervised
PhD programmes; and (3) a fund to support HEIs with an active local or regional
role.
Since 2012, the former quality indicators used between 2003 and 2011 are no
longer in operation, but quality constraints are instead incorporated into the funding
mechanism through the use of the results of the national evaluation of universities
and their study programmes.[21] This can be seen as "a recent preoccupation for unifying the different existing approaches to quality" (CNFIS 2013) because CNFIS
replaced its own indicators with the results of the national evaluation. Operation-
alization of this idea entailed the use of certain excellence indices which became
multiplication factors in the allocation of supplementary funding for excellence.
The excellence indices reflect the results of the national ranking of study pro-
grammes. For example, at the bachelor level, a study programme belonging to class
A (best performance) translated into an excellence index of 3, but 0 if the pro-
gramme was ranked in class D or E (low performance). For master level studies,
programmes ranked in class A received an excellence index of 4, those in B an
index of 1 and those in C, D, and E received 0.[22]
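A hedged sketch of how these multiplication factors operate is given below; it encodes only the index values explicitly mentioned in the text (bachelor-level classes B and C are omitted because their values are not stated here), and the base amount is an invented illustration rather than an actual CNFIS allocation unit.

```python
# Sketch of the excellence-index logic described above (illustrative values only).
EXCELLENCE_INDEX = {
    "bachelor": {"A": 3, "D": 0, "E": 0},                  # B and C not specified in the text
    "master": {"A": 4, "B": 1, "C": 0, "D": 0, "E": 0},
}

def excellence_funding(base_amount, level, ranking_class):
    """Apply the excellence index as a multiplication factor to a base allocation unit."""
    index = EXCELLENCE_INDEX[level].get(ranking_class)
    if index is None:
        raise ValueError(f"No index value given in the text for {level} class {ranking_class}")
    return base_amount * index

print(excellence_funding(1_000, "master", "A"))    # 4000
print(excellence_funding(1_000, "bachelor", "E"))  # 0
```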
Access and equity elements within the methodology used for the process of
university classification and study programmes ranking included several indicators.
Under relation to external environment one can find the following three indicators:
[20] Although the law requires that the evaluation be done yearly, no such efforts were made in 2012 or up to the present moment in 2013. The Ministry of Education is currently defending itself in a lawsuit with a university which contested the results of the evaluation process.
[21] For full results of the study programmes ranking see http://chestionar.uefiscdi.ro/docs/programe_de_studii.pdf.
[22] In its proposed methodology for 2013, CNFIS has operated some adjustments to these indices that tend not to penalize less competitive programmes as much as the 2012 ones did, but the methodology has not yet been adopted by the Ministry of Education.
students from lower socio-economic groups, mature students (defined as aged
30 years or more), and students with disabilities. Under institutional capacity one
can find several other indicators dealing with student cafeterias, dormitories, personnel responsible for medical services, infrastructure devoted to students with
disabilities, and personnel specifically employed to support students with
disabilities.
Although the methodology used in the process of university classification and
study programmes ranking thus seems to have more indicators dealing with equity
and access issues, it remains doubtful whether these had any significant impact on
the final results of classification and ranking and, following these processes, on the
funding universities received in 2012. This claim may be supported by studying the
methodology itself, the individual weights of the indicators and the aggregate
weights of the criteria it used. To begin with, it should be noted that the method-
ology had several intermediate levels of aggregation: at the lowest level were
individual indicators that were then aggregated into composite (intermediary)
indicators,[23] which, finally, were further aggregated into the four criteria listed in the
previous paragraphs, namely research performance, teaching, relation to the
external environment and institutional capacity. A natural consequence of such a
hierarchical structure that uses multiple layers of indicators is that the overall impact
of any one individual indicator tends to become diluted. With respect to the indi-
cators dealing with equity and access elements this is particularly evident because at
the most general level of aggregation, both relation to external environment and
institutional capacity had, without exception, the smallest weight of all four criteria
used by the Ministry (ranging between 5 and 20 %) but also had the largest number
of individual indicators (more than 20 in each case).
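As a rough worked example of this dilution, multiplying the nested weights cited in footnote 23 (an indicator weighted 0.25 inside a composite indicator weighted 0.05, inside a criterion weighted 0.20) shows how small the effective contribution becomes:

```python
# Worked illustration of weight dilution using the figures given in footnote 23.
effective_weight = 0.25 * 0.05 * 0.20
print(effective_weight)  # 0.0025, i.e. a quarter of one per cent of the final result
```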
However, the new methodology used by CNFIS starting in 2012 (CNFIS 2012) also included a different component that can account for access and equity. Based on the pro-
visions of the Law no 1/2011, a special fund for stimulating the universities which
develop policies addressed to students from disadvantaged groups was created (i.e.
the fund to support HEIs with an active local or regional role mentioned above).
Disadvantaged groups can be ethnic minorities (e.g. Roma), or people living in
certain areas (rural areas, small towns, etc.).[24] In 2012 the funding for this com-
ponent represented 3 % of the total allocations for universities that were distributed
by CNFIS. Funds were allocated by the Ministry of Education mainly to universities located in small towns which had study programmes aimed at satisfying local needs (CNFIS 2013).
[23] For example, the three individual indicators "students from lower socio-economic groups, mature students, and students with disabilities" were grouped under a composite indicator (relation with social environment), which itself had a weight of 0.05 within the larger frame of relation to external environment. It is highly doubtful whether 0.25 (the weight of the indicator dealing with student disabilities, for example) within 0.05, within yet another, final, 0.20 could have any substantial impact on the final results of ranking and classification.
[24] Nonetheless, these categories have not been very clearly defined and no systematic study has yet been carried out to identify the needs of these groups.
5 Conclusions
Over the past two decades quality assessment and quality constraints have become a
central feature in the process of policymaking for Romanian higher education. This
article has illustrated how problems of adverse selection and moral hazard typical of
principal-agent models have spurred Romanian governments to develop specific
solutions in the form of normative constraints limiting the potentially opportunistic
behaviour of universities. Prior to 2012 such quality constraints took two distinct forms: one was the process of accreditation (together with its corollary, periodic academic evaluation), while the other was the set of specific indicators used to determine the level of funding for each public university. In both cases the
complexity and number of indicators used for overall quality assessment increased
over time. However, starting in 2012, quality indicators are no longer in use; quality
is instead incorporated into the funding mechanism through a proxy measure: excellence indices derived from the results of the national process of study programmes ranking, which relied heavily on research aspects.
In terms of aspects that promote equity and access, all methodologies pertaining
to quality assessment discussed in this article can be found to incorporate only a
limited number of indicators devoted to such issues. In addition, rather than dealing
with targeted measures for specific (potentially more vulnerable) groups of students,
most of these indicators only concern themselves with material resources and
minimal facilities and services for all students in general. The scope and importance
of these indicators varies between the distinct methodologies under discussion:
within the methodologies used by CNFIS between 2003 and 2011 such indicators
generally accounted for 2 % of the basic funding allocated to universities and
mainly dealt with student dormitories and general administrative services; within
the methodology for accreditation used by ARACIS the three indicators we iden-
tified also deal with input aspects related to the universities' distribution of material
resources and services provided to students. A more comprehensive list of indi-
cators sensitive to equity and access issues can be found in the methodology used to
assess universities and their study programmes in 2011 but, paradoxically, the effect of these indicators is diluted by the existence of dozens of other indicators
and by the presence of intermediary levels of aggregation to which the indicators
contribute only to a negligible degree.
Overall, based on these methodologies and their evolution we may conclude that
general quality considerations play an increasingly important role for higher edu-
cation institutions and their funding, but equity and access elements do not act as
important factors within quality assessment processes themselves. This does not
mean, however, that equity and access have no impact on funding itself. On the contrary, although such elements are limited within the various frameworks of
quality and performance evaluation, they have also been recently included in the
funding mechanism in a more direct manner, through the provision of a distinct
component within the newly-introduced supplementary funding. Therefore, the
impact of equity and access elements for Romanian HEIs is now twofold: on the one hand this impact is indirect (and limited), mediated by the processes of accreditation
and performance assessment which have their distinct leverage on funding; on the
other hand, however, the impact is also taking a more explicit form through
specification of a distinct component geared towards equity and access issues in the
funding scheme. The inclusion of this distinct component may indicate a growing
importance assigned by policymakers to equity and access in general but, because
objective criteria for distribution of these earmarked funds have yet to be clearly
formulated, it remains to be seen what substantial consequences this policy will
have for HEIs and their students.
Open Access This chapter is distributed under the terms of the Creative Commons Attribution
Noncommercial License, which permits any noncommercial use, distribution, and reproduction in
any medium, provided the original author(s) and source are credited.
References
Batteau, P. (2006). Aspects of evaluation and accreditation in higher education in France. In C. Orsingher (Ed.), Assessing quality in European higher education institutions. Dissemination, methods and procedures (pp. 147–166). Heidelberg: Physica-Verlag.
CNFIS. (2006). Metodologia de repartizare pe instituţii de învăţământ superior a alocaţiilor bugetare pentru finanţarea de bază în anul 2006 – Sinteză. Retrieved from http://vechi.cnfis.ro/fb2006/MetodologieFB2006.pdf.
CNFIS. (2007). Finanţarea învăţământului superior în România – punct de vedere al CNFIS. Retrieved from http://vechi.cnfis.ro/documente/FinantareIS.pdf.
CNFIS. (2009). Metodologia de repartizare pe instituţii de învăţământ superior a alocaţiilor bugetare pentru finanţarea de bază în anul 2009. Retrieved from http://vechi.cnfis.ro/fb2009/OMECI-MetodologieFB2009.pdf.
CNFIS. (2011). Metodologia de repartizare pe instituţii de învăţământ superior a alocaţiilor bugetare pentru finanţarea de bază în anul 2011. Retrieved from http://vechi.cnfis.ro/fb2011/MetodologieFB2011.pdf.
CNFIS. (2012). Metodologie de alocare a fondurilor bugetare pentru finanţarea de bază şi finanţarea suplimentară a instituţiilor de învăţământ superior de stat din România pentru anul 2012. Retrieved from http://www.cnfis.ro/wp-content/uploads/2012/08/Metodologie-Finantare-2012.pdf.
CNFIS. (2013). Raport privind starea finanţării învăţământului superior şi măsurile de optimizare ce se impun. Retrieved from http://www.cnfis.ro/wp-content/uploads/2013/04/Raport%20CNFIS%202012%20-%20Starea%20finantarii%20invatamantului%20superior..pdf.
Florian, B. (2011). Analiza instituţională a evoluţiei sistemului de asigurare a calităţii în învăţământul superior din România (1993–2011). In M. Păunescu, L. Vlăsceanu, & A. Miroiu (Eds.), Calitatea învăţământului superior din România: o analiză instituţională a tendinţelor actuale (pp. 43–58). Iaşi: Polirom.
Kahoutek, J. (2009). Setting the stage: Quality assurance, policy change, and implementation. In J. Kahoutek (Ed.), Implementation of the standards and guidelines for quality assurance in higher education in the Central and East-European countries – agenda ahead (pp. 11–19). Bucharest: UNESCO-CEPES.
Kivistö, J. (2005). An assessment of agency theory as a framework for the government–university relationship. Journal of Higher Education Policy and Management, 30(4), 339–350.
Kivistö, J. (2007). Agency theory as a framework for the government–university relationship. Tampere: Tampere University Press.
Korka, M. (2009). Contextul European şi internaţional al unei noi culturi a calităţii în managementul învăţământului superior din România. In M. Korka, Educaţie de calitate pentru piaţa muncii (pp. 9–30). Bucureşti: Editura Universităţii Bucureşti.
Lane, J.-E. (2008). The principal-agent perspective. London: Routledge.
Lane, J. E., & Ersson, S. (2000). The new institutional politics. Performance and outcomes.
London: Routledge.
Lane, J. E., & Kivistö, J. A. (2008). Interests, information, and incentives in higher education:
Principal-agent theory and its potential applications to the study of higher education
governance. In J. C. Smart (Ed.), Higher education: Handbook of theory and research
(Vol. XXIII, pp. 141–179). New York: Springer.
Miller, G. J. (2005). The political evolution of principal-agent models. Annual Review of Political
Science, 8, 203–225.
Miroiu, A. (1998). Managementul universitar. In A. Miroiu (Ed.), Învăţământul românesc azi (pp. 117–147). Iaşi: Polirom.
Miroiu, A., & Aligica, P. D. (2003). Public higher education financing: A comparison of the historical and formula-based mechanism. Working paper. Retrieved from http://unpan1.un.org/intradoc/groups/public/documents/NISPAcee/UNPAN009148.pdf.
Miroiu, A., & Andreescu, L. (2010). Goals and instruments of diversification in higher education.
Quality Assurance Review, 2(2), 89–101.
Moe, T. M. (1984). The new economics of organization. American Journal of Political Science, 28(4), 739–777.
Nicolescu, L. (2007). Institutional efforts for legislative recognition and market acceptance:
Romanian private higher education. In S. Slantcheva, & D. C. Levy (Eds.), Private higher
education in post-communist Europe: In search of legitimacy (pp. 201–222). New York:
Palgrave Macmillan.
Păunescu, M., Vlăsceanu, L., & Miroiu, A. (2011). Calitatea învăţământului superior românesc. O analiză instituţională. In M. Păunescu, L. Vlăsceanu, & A. Miroiu (Eds.), Calitatea învăţământului superior din România: o analiză instituţională a tendinţelor actuale (pp. 24–42). Iaşi: Polirom.
Petersen, T. (1993). Recent developments in the economics of organization: The principal-agent relationship. Acta Sociologica, 36(3), 277–293.
Schwarz, S., & Westerheijden, D. (2007). Accreditation in the framework of evaluation activities:
A comparative study in the European higher education area. In S. Schwarz & D. Westerheijden
(Eds.), Accreditation and evaluation in the European higher education area (pp. 1–42).
Dordrecht: Springer.
Scott, P. (2000). Higher education in Central and Eastern Europe: An analytical report. In L. C. Barrows (Ed.), Ten years after and looking ahead: A review of the transformations of higher education in Central and Eastern Europe (pp. 341–407). Bucharest: UNESCO-CEPES.
Ţeca, M. (2011). Viziune de ansamblu asupra modelului matematic de construcţie şi utilizare a indicatorilor relativi de calitate în finanţarea învăţământului superior utilizat în perioada 2003–2011. Quality Assurance Review, 3(1), 81–92.
Vlăsceanu, L. (2005). Asigurarea calităţii în educaţie (pp. 2–15). Bucureşti: UNESCO-CEPES. Retrieved from http://www.ad-astra.ro/library/papers/vlasceanu.pdf.
Vlăsceanu, L., Grünberg, L., & Pârlea, D. (Eds.). (2007). Quality assurance and accreditation: a
glossary of basic terms and definitions. Bucharest: UNESCO-CEPES.
Vlăsceanu, L., Miroiu, A., Păunescu, M., & Hâncean, M. G. (Eds.). (2010). Barometrul calităţii – 2010. Starea calităţii în învăţământul superior din România. ARACIS. Retrieved from http://www.aracis.ro/fileadmin/ARACIS/Publicatii_Aracis/Publicatii_ARACIS/Romana/barometru-final.pdf.
Vlăsceanu, L., & Miroiu, A. (2012). Relating quality and funding. The Romanian case. In A. Curaj, P. Scott, L. Vlăsceanu, & L. Wilson (Eds.), European higher education at the crossroads: Between the Bologna process and national reforms (pp. 791–807). Dordrecht: Springer.