Universities Can Regain the Public’s Trust
Frans van Vught
1 Introduction
Public trust in universities appears to be decreasing. In this age of “fake news” and
even “fake science”, the esteem of academic institutions is diminishing. In the eyes
of the general public, universities may still be respectable institutions, but they are
also seen to be relatively self-centred and to have an insatiable hunger for (public)
resources. Furthermore, doubts are being raised about the self-organising capacities
of autonomous academic institutions to assure and protect the quality, relevance and
efficiency of their activities. Stakeholders ask for more information about costs and
benefits. And for greater accountability.
There are several reasons underlying this growing demand for information and
accountability. First, the financial contributions made by students, taxpayers and
others to higher education are rising. Second, the growing number and variety of higher education providers and the (degree and non-degree) programmes they offer make it increasingly difficult for (prospective) students to decide where and what to
study. Similarly, employers and governments wish to be assured that higher education
providers deliver the quality education and research services that are needed for
their labour markets, their businesses, and their communities. Third, our society is
increasingly characterised by mass individualisation, where the different clients of
universities (in particular, their students) demand services that are customised to their
needs, plans and abilities.
The result is an increasing demand for transparency tools: instruments that aim to provide stakeholders with information about the profiles and performances of universities. From the perspective of students, employers, public authorities and the general public, the need for tools that make information about the services and performances of universities more readily available and more useful is growing.
For more than three decades, several tools have been (re-)designed to increase the
transparency of the activities and performances of universities across their different
missions: education, research, knowledge transfer and community engagement. In
this chapter, I will address two higher education transparency tools: accreditation
and rankings. I will present these tools in a brief theoretical context and will argue
that the need for transparency can be seen as a new challenge for universities and the
IAU, but also as an opportunity to regain the public’s trust.
2 Information Asymmetry
The basic theoretical notion underlying the increasing interest in transparency in
higher education stems from an (economic) understanding of higher education as
an experience good. An experience good is a good or service whose quality can
only be judged after consuming it. This contrasts with the textbook case of “search
goods”, whose quality can be judged by consumers in advance. Experience goods
are typically purchased based upon reputation and recommendation since physical
examination of the good is of little use in evaluating its quality. It might even be argued
that higher education is a credence good: a product whose utility consumers do not
know even after consumption. The fact that higher education is an experience or even a credence good underscores the importance of trust.
From the perspective of the provider, academics may argue that they know better
than any other stakeholder what it takes to deliver high-quality higher education;
and surely, they have a case. At the same time, this view implicitly perpetuates—
and justifies—information asymmetry between client and provider. According to the
principal–agent theory, information asymmetry might tempt academics and universities not to maximise the quality of their educational services. For instance, universities might (and do) exploit information asymmetries to cross-subsidise research activity using resources intended for teaching.
In principal–agent theory, several policy tools are suggested to protect clients and
society against the possible abuse of information asymmetries. All of these tools are
designed to affect the behaviour of the providers of higher education and research.
Influencing the behaviour of universities—by governments, independent agencies or
by the providers themselves—may take different forms. Firstly, it may involve regulation: rules on service quality, standards for teaching, qualifications frameworks, quality assurance requirements, or conditions imposed on providers. Secondly, (financial) incentives may be developed to reward desirable behaviour and sanction undesirable behaviour. Thirdly, influencing the behaviour of universities may aim to alleviate information asymmetry by focusing on the provision of information; this is the intention behind the use of transparency tools.
3 Accreditation
Accreditation is the most common form of external quality assurance in higher education. The distinguishing characteristic of accreditation is that external quality assessment leads to a summary judgment (pass/fail, or graded) that has consequences for
the official status of the institution or programme. Often, accreditation is a condition
for the recognition of degrees and their public funding. Accreditation is the simplest
form that quality assurance can take. However, the transparency function of quality
assurance appears to be only an additional aim—its primary aim is to assure that
quality standards are met.
When accreditation and other forms of external quality assurance were introduced,
their focus was on what higher education institutions were offering, measured by
input indicators such as the numbers and qualifications of teaching staff, the size
of libraries, or staff–student ratios. However, the relevance of input indicators for
making the quality of the teaching and learning experience more transparent, or for
demonstrating the quality of outputs (e.g. degree completions) and outcomes (e.g.
graduate employment) was questioned.
Therefore, accreditation standards first came to include measures of institutional educational performance, such as drop-out or time-to-degree indicators. More recently, accreditation has also come to emphasise achieved learning outcomes. The degree to which study programmes succeed in enabling students to learn what the curriculum intends is argued to present a more transparent, more pertinent and more locally differentiated picture of quality.
The emphasis on achieved learning outcomes redirects accreditation more towards
the diversified information needs of stakeholders, i.e. more on higher education’s
public value; in this way, it aims to enhance transparency. However, this is only the
case if the assessment of learning outcomes is comparative in nature, preferably on
an international scale, and the results are made public.
Admittedly, whether stakeholders are interested in measures of achieved learning
outcomes is another matter. For instance, even if students behave as rationally as
policy would have it, they would not only be interested in outcomes in the distant
(uncertain) future but also in characteristics of the educational process and its context.
Potential students (and others) are likely also to be interested in current students’ satisfaction with such factors, allowing them to benchmark satisfaction scores across
different institutions and thus to make proxy assessments of programme quality.
However, in accreditation systems, such information is often hard to find. Unlocking this information is one of the challenges in further redesigning accreditation
mechanisms as stronger transparency tools.
4 Rankings
Whereas quality assurance and accreditation were introduced mainly on the initiative of governments, university rankings have appeared mostly through private (media) initiatives. Rankings emerged in reaction to the binary (pass/fail recognition) information resulting from accreditation. They intend to address a need for
more fine-grained distinctions in a context where many institutions and programmes
pass the basic accreditation threshold.
It is widely recognised that, although current global rankings are controversial, they are here to stay, and that global university league tables in particular have a considerable impact on decision-makers worldwide, including those in universities. Yet major concerns persist about the rankings’ methodological underpinnings and their drive towards stratification rather than diversification.
The following problems with the familiar global rankings can be distinguished. First, traditional university rankings do not differentiate between their various users’ information needs but provide a single, fixed ranking for all. Second, they ignore intra-institutional diversity, presenting universities as a whole, while research and education are “produced” in faculties, hospitals, laboratories, etc., each of which may exhibit quite different qualities. Third, rankings tend to use available information on a narrow set of dimensions only, overemphasising research. This suggests to lay users that a larger volume of more frequently cited research publications is an indication of high-quality educational programmes. Fourth, the bibliometric databases used for the underlying information on research output and impact on peer researchers mostly contain journal articles, a form of scientific communication that is central to many natural science and medical disciplines but less so to fields such as engineering, the humanities, law and the social sciences. In addition, the journals included in these databases are mostly English-language journals, largely disregarding publications in other languages. Fifth, the diverse types of information and indicators that underlie the rankings are weighted by the ranking producers and consolidated into a single composite value for each university, usually presented in a league table with a ratio scale. This is done without any explicit, let alone empirically corroborated, theory of the relative importance and priorities of the indicators, and without a sound methodological basis for the league-table scale.
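To make the fifth point concrete, the sketch below shows, with entirely hypothetical universities, indicator scores and weights, how a single composite league-table value is typically computed as a weighted sum of indicator scores. It is not the method of any actual ranking; it merely illustrates that the resulting order depends on the weights the ranking producer happens to choose.

# Minimal sketch (hypothetical data and weights, not any ranker's actual method):
# a composite league-table score computed as a weighted sum of indicator scores,
# showing that the ordering depends on the chosen weights.

universities = {
    "University A": {"citations": 90, "reputation": 60, "staff_student_ratio": 40},
    "University B": {"citations": 55, "reputation": 80, "staff_student_ratio": 85},
    "University C": {"citations": 70, "reputation": 70, "staff_student_ratio": 65},
}

def composite_score(scores, weights):
    # Weighted sum of indicator scores, the usual basis of single-value league tables.
    return sum(weights[indicator] * scores[indicator] for indicator in weights)

def league_table(weights):
    # Sort universities by composite score, highest first.
    return sorted(universities,
                  key=lambda u: composite_score(universities[u], weights),
                  reverse=True)

# Two equally arbitrary weighting schemes yield different orderings.
research_heavy = {"citations": 0.6, "reputation": 0.3, "staff_student_ratio": 0.1}
teaching_heavy = {"citations": 0.2, "reputation": 0.3, "staff_student_ratio": 0.5}

print(league_table(research_heavy))  # ['University A', 'University C', 'University B']
print(league_table(teaching_heavy))  # ['University B', 'University C', 'University A']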
Given these criticisms, some analysts (including this chapter’s author) have endeavoured to construct alternative rankings, and in recent years, partly due to these efforts, not only have innovative rankings appeared but the methodology of traditional global rankings has also improved: information on individual areas (fields, disciplines) has been added to the global rankings, and the range of dimensions covered by the data has been broadened.
In particular, U-Multirank has addressed the shortcomings of the traditional global
rankings. As a transparency tool, this ranking is very different from its competitors.
Firstly, U-Multirank has adopted a multi-dimensional view of university performance; when comparing universities, it provides information about the different activities an institution engages in: teaching and learning, research, knowledge
transfer, international orientation and regional engagement. Secondly, U-Multirank invites its users to compare institutions with similar profiles, thus enabling comparisons of “like with like”, rather than “comparing apples with oranges”. Thirdly,
parisons of “like with like”, rather than “comparing apples with oranges”. Thirdly,
U-Multirank is interactive and stakeholder-focused; it allows users to choose from
a menu of performance indicators and to select indicators according to their own
preferences. Fourthly, U-Multirank does not create league tables; it does not force
its users to combine indicators into a weighted score or a numbered league table
position. Fifth, U-Multirank allows universities to analyse and communicate their
own specific “profiles” and hence to emphasise their individual strengths. Sixth,
U-Multirank assigns scores on individual indicators using five broad performance
groups (“very good” to “weak”) to compensate for the imperfect comparability of
information. Finally, U-Multirank complements institutional information pertinent
to the whole institution with a large set of disciplinary (field-based) performance profiles, focusing on particular academic disciplines or groups of programmes, using
indicators specifically relevant to the different subjects.
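To illustrate the sixth point, the sketch below assigns each institution’s score on one indicator to one of five broad performance groups using a simple quantile rule. The institutions and graduation rates are hypothetical and the actual statistical procedure used by U-Multirank may well differ; the sketch only shows the contrast with a weighted league table, since each indicator is reported separately and no composite score is computed.

# Minimal sketch (hypothetical data; not U-Multirank's actual algorithm):
# scoring one indicator in five broad performance groups rather than folding
# all indicators into a single weighted league-table score.

GROUPS = ["very good", "good", "average", "below average", "weak"]

def performance_group(value, all_values):
    # Rank the value within the indicator's distribution across institutions
    # and map its relative position onto one of the five groups.
    ranked = sorted(all_values, reverse=True)
    position = ranked.index(value) / max(len(ranked) - 1, 1)  # 0 = best, 1 = worst
    return GROUPS[min(int(position * 5), 4)]

# Hypothetical graduation rates (percent) for five institutions.
graduation_rate = {"A": 92, "B": 75, "C": 81, "D": 60, "E": 88}

for institution, value in graduation_rate.items():
    print(institution, performance_group(value, list(graduation_rate.values())))
# Prints: A very good, B below average, C average, D weak, E good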
In general, rankings provide information to the different stakeholders of universities. From this perspective, they can be seen as transparency tools. However, not all rankings are sufficiently well developed methodologically to offer relevant, custom-made information and to assist clients and other stakeholders in making choices. As such, many global rankings are still relatively weak in their transparency function.
5 Conclusion
From the perspective of the need to increase the transparency of the performance of
universities, the conclusions regarding the two transparency tools discussed are as
follows.
Accreditation remains a crude transparency instrument, providing little information of value to clients beyond the basic, though crucial, protection against substandard provision. The public value-oriented refinement of focusing accreditation on achieved learning outcomes would make accreditation more directly relevant to (prospective) students, but it cannot overcome this basic crudeness.
Moreover, designing such apparently more relevant accreditation schemes remains
a challenge, also given academics’ resistance to their intrusiveness and the effort
needed to design and incorporate sensible indicators of learning outcomes.
Regarding rankings, some recent initiatives—in particular U-Multirank—appear
to have been designed to overcome the drawbacks of traditional global rankings. The
basic characteristics of U-Multirank empower stakeholders to compensate for their
asymmetrical information position vis-à-vis higher education providers, while at the
same time assisting these higher education providers in communicating their specific
profiles. Multi-dimensional, user-driven rankings have the potential to function as
rich transparency tools, as client-driven and diversity-oriented instruments. However, such a transparency tool is only as useful as the information it offers to users.
Specifically, the underlying data on the higher education institutions’ value added
in terms of education performance (e.g. learning outcomes, societal engagement of
higher education institutions) needs further elaboration.
Universities themselves can do much to improve both accreditation and rankings. Both sets of transparency tools will profit from a stronger commitment by universities to making them better serve stakeholders’ information needs. For the universities, these tools offer the possibility of stronger accountability and better public visibility.
This is where the IAU can play a major role. As a well-respected global association
of universities, the IAU can take a leading role in assisting its members to show their
profiles and communicate their specific strengths, while at the same time creating a
more open and transparent attitude about their performances. Building such an open
attitude may well be the best way to regain the public’s trust.
Frans van Vught was Rector and President of the University of Twente, the Netherlands. He
currently acts as an international ‘higher education consultant’ and is the co-project leader of U-
Multirank, the only user-driven multi-dimensional global ranking system. Internationally he was
a member of the board of the European University Association (EUA), president of the European
Centre for Strategic Management of Universities, a member of the Universities Grants Committee
of Hong Kong, and a member of the board of the European Institute of Technology Foundation.
He published widely on higher education and holds several honorary doctorates.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropri-
ate credit to the original author(s) and the source, provide a link to the Creative Commons license
and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly
from the copyright holder.