The TIHP Framework – An Instrument for
Measuring Quality of Hybrid Services
Completed Research Paper
Fabian Walke
University of Hagen
Hagen, Germany
fabian.walke@fernuni-hagen.de
Till J. Winkler
University of Hagen, Hagen, Germany
and Copenhagen Business School,
Frederiksberg, Denmark
till.winkler@fernuni-hagen.de
Abstract
Although services are often provided as hybrids containing digital (e.g., online-registration) and physical (e.g., on-site appointment) service components, we lack appropriate instruments that could measure the quality of hybrid services across different types of organizations. Building on prior literature in service quality measurement, we develop and validate an instrument to measure service recipients' perceptions of quality in hybrid services that can be used by organizations in the public
and private sector. The key contribution of this paper is the Technology, Information,
Human, Process framework (TIHP), a measurement instrument with four quality
dimensions and 28 quality attributes. Our empirical validation with answers from 121
service recipients supports the psychometric validity of this instrument. We discuss how
the TIHP framework can be useful to governments, practitioners, and researchers alike.
Keywords: hybrid service quality, quality framework, measurement scale, public sector,
private sector, organizations, context-independency, cross-industry, cross-authority
Introduction
Public services are often provided as hybrid services containing digital (e.g., online-registration) and
physical (e.g., on-site appointment) service components. Due to changing governmental regulations during the Covid-19 pandemic and because of an increasing customer demand for digital services (McKinsey, 2020), the private sector was also forced to provide more digital and hybrid services. The Covid-19 pandemic brought to light the weaknesses of public and private sector services; especially the public service quality of health, security, and regulatory authorities has been stress-tested by the pandemic.
With the change in the form of service from physical to digital or hybrid, the characteristics by which service quality can be measured change as well. In previous decades, the SERVQUAL instrument (Parasuraman et al., 1988) was frequently used to measure the quality of traditional physical services (Hartwig & Billert, 2018; Ladhari, 2009). Many models and scales have been developed to measure digital service interactions (Ladhari, 2010), also in the public sector (e.g., Papadomichelaki & Mentzas, 2012). The digital transformation of organizations, which has been accelerated by the Covid-19 pandemic (McKinsey, 2020), led to changed requirements for service providers and for the measurement models of service quality.
A literature review of service quality measurements by Hartwig and Billert (2018) shows that less research has been done on hybrid service quality in the private sector, compared to traditional physical and digital services. Additionally, Hartwig and Billert (2018) called for more context-independency to measure service quality across different industries, since a research gap was found regarding context-independent measurement models. The conditions regarding hybrid services and the different types of organizations are
particularly given in the public sector, where governments must manage various types of services provided by different types of public organizations (e.g., universities, police departments, vaccination centers, residents' registration offices). Context-independency in the public sector means that different services are provided across a wide range of different types of departments or authorities. Different levels of maturity in the digitalization of public organizations can make the quality management of hybrid services in different cases of application particularly difficult. Besides that, citizens as service recipients have high demands on public services, even during a global pandemic (Bagheri Lankarani et al., 2021). Hence, there is a need for a comprehensive instrument to measure quality in hybrid services across different types of organizations, in order to obtain information about potential quality improvements, with the long-term objective of maintaining equivalent quality standards.
The objective of this study is to develop and validate a framework and instrument to measure service
recipients’ perceptions of quality in hybrid services across different types of organizations. This leads to the
research question: How should a framework and measurement scale be designed to measure quality in hybrid services across different types of organizations? This paper shows the development and empirical
validation of a framework and measurement scale in multiple steps. The first step consisted of an extensive
literature review to show the research gap, the development of the framework and the identification of
multiple quality dimensions and attributes related to service quality. As a next step, the measurement scale
was designed as a questionnaire based on these criteria. In the last step, an empirical validation of the
framework and measurement scale was performed. We collected 121 responses via an online survey that
took place from October to December of 2021. To understand the dimensionality of the acquired data, an
exploratory factor analysis was performed. Additionally, the reliability of the scale was assessed using
standard psychometric quality measures.
The key contribution of this paper is a measurement instrument with four quality dimensions, Technology, Information, Human, and Process (TIHP), and 28 quality attributes in these dimensions. The TIHP instrument
can be useful to governments, practitioners and researchers interested in measuring quality of hybrid
services across different types of organizations. The analytical results support the psychometric adequacy
of our instrument and show the different influences of the quality dimensions on overall perceived service
quality.
Literature Review of Service Quality Measurements
Our starting point was the literature review from Hartwig and Billert (2018) which provides an overview of
41 service quality measurements in the private sector. Hartwig and Billert (2018) only considered literature
regarding the private sector. Therefore, we performed a structured literature review considering additional
service quality measurements related to the public sector. We followed the guidelines described by vom
Brocke et al. (2015) and applied a sequential search process, a representative coverage, and techniques of
keyword search, backward and forward search in different sources (citation indexing services, bibliographic
databases, publications). The distinction between private and public sector was made on the basis of the
legal form of the service organization (corporation under public law versus privately managed). As a result,
we identified 23 service quality measurements related to the public sector, so that in total 64 service quality measurements have been considered for this review.
It is worth noting that we found no contributions which compared the literature of service quality measurements in the private and the public sector. We found different typologies through which services and service quality models and measurements can be clustered. Three prominent typologies are:
General service quality, e-service quality, and IT/IS service quality (Hartwig & Billert, 2018)
Routine-, contact-, knowledge-, and technology-intensive services (Jaakkola et al., 2017)
Physical (traditional) services, digital services, and hybrid services (Hartwig & Billert, 2018)
The focus of this paper is on the latter, the three-categorical typology of services, which distinguishes services by their form into physical, digital, and hybrid service categories. Table 1 shows a classification of the 64 given service quality measurement models. Considering only the public sector, the results show that research related to public sector service quality can be divided into digital E-Government services (20 contributions; clustered in digital services in Table 1) and traditional physical services of local governments (4 contributions; clustered in physical services in Table 1).
Considering both the private and the public sector, the analysis of our literature review shows that 20% of the service quality models and measurements are related to physical traditional services, 72% to digital services, and 8% to hybrid services, consisting of both physical and digital services (Table 1). This shows that research lacks consideration of hybrid services, compared to purely physical and digital services. Particularly noteworthy is that we found no contributions which measure quality in hybrid services in the public sector.
Public Sector
  Physical (20%): SERVQUAL (Donnelly et al., 1995); MM (Scott & Shieff, 1993); SERVQUAL (Wisniewski, 1996); SERVQUAL (McFadyen et al., 2001)
  Hybrid (8%): None (research gap)
  Digital (72%): SQ mGov (Shareef et al., 2014); E-Government SERVQUAL (Huai, 2011); E-GovQual (Papadomichelaki & Mentzas, 2012); E-Govqual (Shanshan, 2014); EGOSQ (Agrawal, 2009); WEQ (Elling et al., 2012); COBRA (Osman et al., 2014); MM (Sá et al., 2016); MM (Sá et al., 2017); MM (Stiglingh, 2014); MM (Bikfalvi et al., 2013); MM (Hien, 2014); MM (Kaisara & Pather, 2011); MM (Henriksson et al., 2007); MM (Bhattacharya et al., 2012); MM (Sigwejo, 2015); MM (Balushi & Ali, 2016); MM (Tan et al., 2013); MM (Osei-Kojo, 2017); MM (Connolly et al., 2010)
Private Sector
  Physical: SERVPERF (Cronin & Taylor, 1992); Retail Service Quality Scale (Dabholkar et al., 1996); Service quality model (Grönroos, 1984); P-E-SQ gap concept (Parasuraman et al., 1985); SERVQUAL (Parasuraman et al., 1988); SERVQUAL NQ and EP (Teas, 1993)
  Hybrid: IS SERVQUAL (Jiang et al., 2002); SERVQUAL (L. F. Pitt et al., 1995); SERVQUAL (L. Pitt et al.); ASP model (Ma et al., 2005); ASP-Qual (Sigala, 2004); IT service climate (Q. R. Jia et al., 2008; R. Jia & Reich, 2013); SSTQUAL (Lin & Hsieh, 2011); IS ZOT SERVQUAL (Kettinger & Lee, 2005)
  Digital: MM (Aladwani & Palvia, 2002); WebQual (S. Barnes & Vidgen, 2000; S. J. Barnes & Vidgen, 2002; S. J. Barnes & Vidgen, R., 2001; S. J. Barnes & Vidgen, R. T., 2001); MM (Hidayanto et al., 2013); WebQualTM (Loiacono et al., 2002); SITEQUAL (H. Webb & Webb, 2001; H. W. Webb & Webb, 2004); SiteQual (Yoo & Donthu, 2001); MM (Gounaris & Dimitriadis, 2003); MM (Yang et al., 2005); PeSQ (Cristobal et al., 2007); MM (Collier & Bienstock, 2006); IRSQ scale (Janda et al., 2002); E-S-Qual / E-RecS-Qual (Parasuraman et al., 2005); E-tail SQ (Rolland & Freeman, 2010); MM (Santos, 2003); MM (Swaid & Wigand, 2009); etailQ (Wolfinbarger & Gilly, 2003); e-SQ (Zeithaml et al., 2002); PeSQ (Ho & Lin, 2010); MM (Wu et al., 2012); MM (Yang et al., 2004); M-S-Qual (Huang et al., 2015); MS-Qual (Hosseini et al., 2013); MM (Lu et al., 2009); SaaS-Qual (Benlian et al., 2011)
Table 1. Classification of Service Quality Measurement Models (MM)
(adapted from Hartwig & Billert, 2018, by adding the public sector category)
Previously published models and measurements in the service quality literature in the private sector are 1) context-dependent, 2) most often related to SERVQUAL (Parasuraman et al., 1988), 3) based on an external perspective of service quality, and 4) focused on B2C relationships (Hartwig & Billert, 2018). These previous models and measurements are context-dependent inasmuch as they have been developed for a specific industry and/or geographical context. According to Hartwig and Billert (2018), context-independency can be defined as the extent of cross-industry or cross-country conditions of a measurement model. Therefore, context-independency in the public sector can be defined as the extent of cross-authority and/or cross-country conditions of a measurement model and will be referred to in this paper as the applicability across a wide range of public sector organizations. The conditions regarding hybrid services and the wide range of organizations with different types of services (e.g., police departments and vaccination centers) are particularly given in the public sector. In summary, there is a research gap regarding hybrid cases of service quality measurement models, especially, but not exclusively, in the public sector, which we address in this paper. In addition, we consider a wide range of different types of organizations to take a step towards context-independency.
Methodology
The objective of this study is to develop and validate a framework and instrument to measure service recipients' perceptions of quality in hybrid services across different types of organizations. The development and validation is methodically divided into seven steps. First, a comprehensive literature review was carried out to determine the research gap (see previous section). In a second step, and based on the guidelines for scale development by Rossiter (2002), the literature review was continued to deductively derive suitable dimensions and attributes for the desired construct of service quality to be measured in hybrid cases across different types of organizations. In order to consider the service quality phenomenon holistically, literature was taken into account which refers to public and private sector service quality including physical, digital and hybrid services (Table 1), general customer satisfaction (Smith et al., 1999; Tax, 1993), service success factors (Wood-Harper et al., 2004), service disruptions, failures and recoveries (Abdullah et al., 2016; Hogreve et al., 2019; Michel, 2001; Tan et al., 2016) and public sector benchmarking (Dorsch & Yasin, 1998). In a third step, the dimensions and attributes obtained from the literature were translated into concrete items, as proposed by MacKenzie et al. (2011), which can be used as measures in a survey instrument. In addition, we measured whether a service case is physical, digital or hybrid by a 7-point Likert scale with five additional items regarding the degree of digitalization of the service. Choosing the highest point value in all five items counts as a completely digital service and the lowest possible point value in all five as a completely physical service; all cases in between are hybrid services, which have both components. The 7-point Likert scale to measure the degree of digitalization is described in the section “Measurement Scale and Items”.
In a fourth step, the developed measurement scale was empirically validated in a data collection with n=121 service recipients in Germany and Austria, following the guidelines by MacKenzie et al. (2011). The survey was administered as an online survey and took place from October to December of 2021 as a part of a research seminar at our university. The requirement for participation in the survey was that participants had a recent service experience in the public sector and reported on this experience as the unit of analysis. In a fifth step, an exploratory factor analysis was performed to understand the dimensionality of the acquired data, and following the guidelines by Churchill (1979) and MacKenzie et al. (2011), the reliability of the measurement was demonstrated through standard psychometric quality measures. In the sixth step, and following the guidelines from MacKenzie et al. (2011), we included alternative measures of the same constructs as a part of the empirical validation and confirmatory evaluation of the instrument: “To assess convergent validity, alternative measures of the same construct should be included as part of this data gathering effort” (MacKenzie et al., 2011). These alternative measures used a 7-point scale and are described in the section “Measurement Scale and Items”. In the last step, the final design of the TIHP framework is presented. The following sections will lead through each step in more detail.
Development of the TIHP Framework
The literature review was continued to find suitable quality dimensions and attributes for the framework
that measure the phenomenon of service quality and can be used in hybrid services across different types
of organizations. Since the largest research gap is in hybrid services and in the public sector, the focus is on
these two areas.
Wood-Harper et al. (2004) describe three main dimensions that can contribute to the success or failure of electronic services: process, people and systems. Abdullah et al. (2016) describe that the literature on service disruptions and failures is often related to processes, technologies and humans. Tan et al. (2016) use a systems dimension and an information dimension for their theoretical service model. Michel (2001) also uses the dimensions systems, information, process and human (advice). Looking at these service-related dimensions in the literature, the terms systems and technology are used equivalently and both target technology-based systems. Dorsch and Yasin (1998) suggest the dimensions technology, people and procedures when benchmarking different public organizations. It can be summarized that in the organization-related service sector, the dimensions technology, information, human and process were used in different publications as relevant categories that determine the success or failure of services. We hence adopted the dimensions technology, information, human and process for our TIHP framework.
The dimensions technology, information, human and process are sufficiently generic to cover several
application areas in the public sector (e.g., universities, police departments, and registry offices). In
addition, these four dimensions represent the core components of a hybrid service case, as these dimensions
can be measured in a combined case with digital and physical service components. In these dimensions, a user-oriented (service recipient) and a service-oriented quality perspective are present. The user-oriented quality perspective is the ability of a service to satisfy human needs and is equivalent to the customer's contentment with service attributes (Forker, 1991; Garvin, 1984; Tan et al., 2013; Teas, 1993). The service-oriented perspective is a function of the discrepancy between actual and ideal attributes of a service that determines its desirability (Forker, 1991; Garvin, 1984; Hauser & Clausing, 1988; Tan et al., 2013).
In a next step, quality attributes from the literature need to be determined for the proposed quality dimensions technology, information, human and process. The quality attributes represent the causes of changes in the quality of a dimension, in the sense of a formative construct: each quality dimension construct is designed as an explanatory combination of its indicators (the quality attributes).
Technology Quality Dimension and Attributes
The technology quality dimension describes all technology-based systems that provide services both digitally (e.g., from appointment booking to service fulfillment) and physically on site in an organization (e.g., a physical service desk). Tan et al. (2013) proposed four suitable technology-related service quality attributes called accessibility, adaptability, interactivity and navigability. These attributes can be adapted to hybrid services. Accessibility has the goal to provide assurance of universal accessibility of services in the face of diverse needs and technical capabilities, and to run flawlessly. We define accessibility as the extent to which the service technologies are compatible with the technologies used by the service recipient. Navigability has the goal to categorize and present hybrid services in a clear and uncluttered format to ensure the maximum level of ease and comfort for service recipients. We define it as the extent to which the navigational structure of the service technologies can be used easily and in a user-friendly manner by the service recipient. Adaptability has the goal to accommodate unpredictable usage demand patterns due to the diversity in the lifestyles and needs within a population. We define adaptability as the extent to which a service technology accommodates fluctuations in service recipients' use patterns. Interactivity has the goal to reward service recipients with an engaging experience with the service provider during service transactions. We define interactivity as the extent to which a service technology proactively engages the service recipient during transactions. Additionally, we propose the quality attributes quickness, fit and sovereignty for the technology dimension. Quickness has the goal to enable the service technology to be used quickly and without delay. We define quickness as the speed with which the service technologies react when used. Fit is inspired by the Organization System Fit Framework (Strong & Volkoff, 2010). It has the goal to provide need-oriented service technologies. We define fit as the extent to which a service technology is tailored to the needs of the service recipient. Sovereignty is an attribute related to digital technologies which has already been discussed in the literature (Couture & Toupin, 2019; Pohle & Thiel, 2021). The notion of sovereignty is increasingly used to describe various forms of independence, control, and autonomy over digital technologies and data (Couture & Toupin, 2019). The aim of sovereignty as a quality attribute is to ensure a high degree of privacy and control over the technologies by the service recipients themselves. We define sovereignty as the extent of self-determination of the service recipient related to the control over technologies and personal data.
Information Quality Dimension and Attributes
The information quality dimension describes information that has been provided by the service organization both digitally (e.g., on a website) and physically (e.g., in an information brochure). Tan et al. (2016) proposed, in a service-failure classification system, four suitable informational quality attributes called inaccurate, incomplete, irrelevant and untimely information. We reverse these attributes from a negative to a positive direction and chose the labels accuracy, completeness, timeliness and relevance. Accuracy can be defined as the extent of correctness of the information provided by the service. Completeness stands for the extent of fullness of information regarding the service recipient's concern. Timeliness is the extent of up-to-dateness of the information of the service. Relevance is the extent of importance of the information regarding the service recipient's concern. Additionally, we propose the quality attributes conciseness, intelligibility and findability for the information quality dimension due to their relevance to capture additional characteristics of information quality. Due to the increasing availability of information in society and information systems, conciseness is suggested as an attribute. Conciseness is defined as the extent of succinctness of the service's information. Since the language, especially legal language in the public sector, can be difficult to understand, the attribute intelligibility is introduced. Intelligibility is defined as the extent of easiness regarding the understanding of the information of the service. Since the search possibilities for information play an increasingly important role in times of frequently used online search engines, the findability attribute is suggested. Findability is defined as the extent of easiness with which information for the service recipient's concern can be found.
Human Quality Dimension and Attributes
The human quality dimension describes the human interaction between the service recipient and the employees, as well as aspects of the employees' work quality in the service organization. The interaction can take place digitally (e.g., via e-mail or video chats) and physically (in front of a physical service desk). Ahearne et al. (2007) used the attribute diligence when analyzing salesperson service behavior. This attribute is highly human-related and is defined as the extent of carefulness of the service employees' work. Parasuraman et al. (1988) introduced with their SERVQUAL measurement model the quality attribute empathy, which can be defined as the extent of caring and individualized attention provided to the service recipient, and which is also suitable for this quality dimension. Michel (2001) already used the attributes not-attainable and incompetent, which can be reversed from a negative to a positive direction, yielding attainable and expertise. Attainable is defined as the extent of easy reachability of the service employees via the offered communication channels. Expertise is defined as the extent of sufficiency regarding the skills of the service employees related to the service recipient's concern. In analyzing customer satisfaction, Smith et al. (1999) and Tax (1993) used items to measure the appropriateness of the employees' communication and the employees' effort in resolving the service recipient's concern. Both attributes are highly human-related. Appropriateness of communication is defined as the extent of adequacy regarding the communication of the service employees with the service recipient. Effort is defined as the extent of energy which was invested by the service employees in the service recipient's concern. According to the classic Shannon-Weaver transmitter-receiver model (Shannon, 1948; Shannon & Weaver, 1949), interference can occur when two people communicate. This can have the consequence, for example, that what was said by the sender (e.g., service recipient) is received differently by the recipient (e.g., service employee). This possible interference is covered by the quality attribute transmission. It is defined as the extent to which the service employees correctly understand what the service recipient said.
Process Quality Dimension and Attributes
The process quality dimension describes all processes that have been carried out by the service organization both digitally (e.g., during online-registration) and physically (e.g., on-site check-in). Since processes always include communication, i.e., the exchange of information from A to B, this attribute is listed in the process dimension. It can be defined as the extent to which the service process was characterized by good and effective communication from the service provider. Parasuraman et al. (1988) proposed a process-related quality attribute called reliability. We define reliability as the extent of meeting deadlines in the service process. Transparency is an important attribute during service processes and acts as an important signal of quality (Hogreve et al., 2019). In the public and private sectors, work processes and the status of processing can be opaque to service recipients. Transparency is defined as the extent of clear and
observable steps of the service process from the perspective of the service recipient. Prompt service is a quality attribute which was used by Michel (2001) when analyzing service failures and recoveries. Named immediacy in the process dimension, it is defined as the extent of promptness of the service processes. Efficiency, as one of the main characteristics examined in business and public administration, has been used among others by Zeithaml et al. (2002) and Connolly et al. (2010). Efficiency is defined as the extent of little effort with which the service recipient's concern can be completed. Additionally, we propose the quality attributes order and logic for this dimension. Due to bureaucracy or specifications of the service provider, illogical steps can take place in the process from the service recipient's point of view. Logic is defined as the extent of consequentialness regarding the sequence of the service process steps. Since single process steps can also be disordered from the point of view of the service recipient, the process order is added as a quality attribute. Order is defined as the extent of expediency regarding the arrangement of the service processes.
Measurement Scale and Items
In a next step, the dimensions and attributes were transferred into concrete survey items, see Table 2. The original items were developed in German and translated into English for the purpose of presentation in this paper.
Technology
Accessibility: The technologies used in the service were accessible and worked flawlessly.
Quickness: The technologies used in the service executed quickly.
Adaptability: The technologies used in the service were adapted to my personal usage behavior.
Navigability: The technologies used in the service were easy to navigate.
Interactivity: The technologies used in the service proactively asked for my personal data and stored it digitally.
Fit: The technologies used in the service were tailored to my needs.
Sovereignty: The technologies used in the service gave me complete control over my personal data and self-determination was guaranteed.
Information
Accuracy: The information of the service was free of errors.
Timeliness: The information of the service was up to date.
Relevance: The information of the service was relevant for my concern.
Completeness: The information of the service was sufficient to help me with my concern.
Conciseness: The information of the service was concisely worded and not too extensive.
Intelligibility: The information of the service was easy to understand.
Findability: Information for my concern was easy to find.
Human
Attainable: The service employees were easy to reach via the offered communication channels.
Appropriateness: The service employees communicated appropriately with me.
Diligence: The service employees worked meticulously and diligently.
Transmission: The service employees correctly understood what was said.
Expertise: The service employees had sufficient expertise to deal with my concern.
Effort: The service employees have made enough effort with my concern.
Empathy: The service employees cared about my concern.
Process
Reliability: In the service process, agreed dates were met.
Immediacy: The service process was immediate and without delay.
Order: The order of the service process was expedient and made sense.
Communication: The service process was characterized by good and effective communication from the service provider.
Transparency: The individual steps of the service process were designed clearly and transparently.
Logic: The individual steps of the service process were structured in a logical sequence.
Efficiency: The service process was efficiently designed so that my concern could be completed with little effort.
Table 2. Items of the quality dimensions and quality attributes
We chose 7-point Likert scales, ranging from “completely agree” to “completely disagree”, for each item. The middle point of the Likert scale, “neither nor”, can be selected if the item is not suitable for the respondent. Oaster (1989) shows that a 7-point Likert scale has the highest test-retest reliability compared to lower and higher numbers of alternatives per choice.
The degree of digitalization was measured by whether (1) the information, (2) the initiation and (3) the completion of the service process, (4) the service result and (5) the communication with the service provider were completely digital and not analogue. The middle point of this Likert scale was “neutral”.
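To make this classification rule concrete, the following minimal sketch (in Python; the function name and item coding are our own illustrative assumptions, not part of the study's materials) labels a service case from the five digitalization items:

```python
# Sketch of the classification rule described above (assumed item coding:
# 1 = completely analogue ... 7 = completely digital, for each of the five items).

def classify_service_form(items):
    """items: five 7-point ratings for information, initiation, completion,
    result, and communication of the service process."""
    if all(v == 7 for v in items):
        return "digital"    # highest point value in all five items
    if all(v == 1 for v in items):
        return "physical"   # lowest possible point value in all five items
    return "hybrid"         # everything in between has both components

print(classify_service_form([7, 6, 7, 5, 7]))  # -> hybrid
```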
To assess convergent validity, four additional single items were included as alternative quality measures for the four quality dimensions, covering the service (1) technologies, (2) information, (3) human interaction, and (4) processes, following a disconfirmed-expectancy approach (Zeithaml et al., 1993). The scale of these items ranges from “much better than expected” to “much worse than expected” with “as expected” as the middle point of the scale.
Empirical Validation and Analytical Results
The sample of service recipients was recruited from the personal networks of students during a research seminar at our university. The students themselves were not participants in the survey and were only responsible for recruiting survey participants. We required that participants had a recent experience with a public sector service. Although this sampling approach favored certain age groups and levels of education, we regard it as sufficient for the purpose of evaluating our measurement scale. The descriptive statistics of the sample are provided in Table 3.
Gender: Male 56.2%; Female 42.9%; Diverse 0.8%
Age: <20: 3.3%; 20-29: 31.4%; 30-39: 32.2%; 40-49: 9.9%; 50-59: 15.7%; >=60: 7.4%
Education: No school degree 0.8%; Lower secondary school qualification 0.8%; Higher secondary school certificate 12.4%; Technical college entrance qualification 9.1%; General University Entrance Qualification 11.6%; Bachelor degree 24.0%; Master degree 38.8%; Doctoral degree 2.5%
Collected in: Germany and Austria
Table 3. Descriptive Statistics of the Sample (n=121)
Table 4 shows the percentage distribution of measured service experiences with different public service organizations, documenting a wide range of service providers across the public sector in the collected sample. Service experiences which have both physical and digital components are categorized as hybrid services. 5.0% of the sample's experiences were physical services, 7.4% digital services and 87.6% hybrid services.
Administration (38.8%): Office for Residents and Registration 5.8%; Office for Foreigners 7.4%; Citizens and Regulatory Office 16.5%; Registry Office 3.3%; City Administration 5.8%
Police, Justice & Fire Protection (6.6%): Fire department 3.3%; Police 0.8%; District court 1.7%; Office of Justice 0.8%
Education (9.1%): University 5.0%; Student Union 0.8%; Office for Educational Support 1.7%; Ministry of Education and Research 1.7%
Finance and Economics (5.0%): Tax office 3.3%; Office of Economics 0.8%; Office for Trade Control 0.8%
Health (12.4%): Association of Statutory Health Insurance 1.7%; Vaccination Centre 1.7%; Central facility for disease monitoring 5.8%; Department of Health 2.5%; Ministry of Health 0.8%
Work & Social (7.4%): Office for Supply and Social Affairs 0.8%; Office for Work 5.0%; Ministry of Social Affairs and Integration 1.7%
Housing & Construction (2.5%): Office for Construction 1.7%; Office for Housing 0.8%
Traffic & Infrastructure (13.2%): Driver's License Office 0.8%; Vehicle registration office 10.7%; Road Traffic Office 0.8%; Network agency 0.8%
Social Insurance (5.0%): Pension insurance 1.7%; Statutory health insurance 3.3%
Table 4. Distribution of Measured Service Experiences with Different Service Organizations Across the Public Sector (n=121)
Dimension    Attribute         Loading   Weight
Technology   Accessibility     .639      .154**
Technology   Quickness         .678      .256**
Technology   Adaptability      .581      .253**
Technology   Navigability      .718      .260**
Technology   Interactivity     .355      .166**
Technology   Fit               .456      .251**
Technology   Sovereignty       .574      .210**
Information  Accuracy          .652      .189**
Information  Timeliness        .716      .212**
Information  Relevance         .682      .199**
Information  Completeness      .752      .201**
Information  Conciseness       .719      .194**
Information  Intelligibility   .692      .203**
Information  Findability       .598      .189**
Human        Attainable        .400      .101**
Human        Appropriateness   .800      .193**
Human        Diligence         .850      .195**
Human        Transmission      .779      .188**
Human        Expertise         .800      .189**
Human        Effort            .882      .205**
Human        Empathy           .722      .177**
Process      Reliability       .482      .145**
Process      Immediacy         .576      .178**
Process      Order             .719      .227**
Process      Communication     .641      .194**
Process      Transparency      .733      .208**
Process      Logic             .634      .215**
Process      Efficiency        .736      .226**
* SPSS software was used; KMO value is .818 and Bartlett test p<.001; extraction method is principal component analysis; rotation method is Varimax with Kaiser normalization; rotation converged in 6 iterations. Each attribute's loading is shown on its own dimension's factor (Factors 1-4 in the original matrix); cross-loadings <.45 are omitted; substantive loadings <.45 (Interactivity, Attainable) were set in italics in the original. ** p<.001
Table 5. Rotated Component Matrix* and Final Attribute Weights
In a next step, an exploratory factor analysis was performed to understand the dimensionality of the data acquired with the measurement instrument. The results are shown in Table 5 and reveal a clear separation of four individual factors, which represent the developed quality dimensions.
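For readers who wish to reproduce this step with open-source tools, a sketch of an analogous analysis with the Python factor_analyzer package follows (the study itself used SPSS; the file name and item matrix are hypothetical):

```python
# Sketch of the EFA reported in Table 5: principal component extraction with
# Varimax rotation and four factors, plus KMO and Bartlett diagnostics.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

responses = pd.read_csv("tihp_items.csv")  # hypothetical 121 x 28 item matrix

chi2, p = calculate_bartlett_sphericity(responses)   # paper: p < .001
_, kmo_overall = calculate_kmo(responses)            # paper: KMO = .818

fa = FactorAnalyzer(n_factors=4, method="principal", rotation="varimax")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns,
                        columns=["Factor 1", "Factor 2", "Factor 3", "Factor 4"])
print(loadings.round(3))  # compare with the rotated component matrix in Table 5
```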
The attributes attainable and interactivity do not exceed the threshold of 0.45. Attainable seems to be an attribute that, in hybrid cases, has less impact on the perceived quality of service recipients in relation to the work of service employees. Interactivity as a technological quality attribute may have less impact, since service recipients rarely notice that the technologies proactively request them to enter personal data, or since the technologies do not have these functions to a sufficient extent. Since the framework is a formative construct, the two attributes attainable and interactivity, which load less than 0.45, are kept in the framework.
In a next step, we calculated the factor scores for the technology, information, human and process quality dimensions. Factor scores are created from the substantive quality attributes of each factor using the regression method (DiStefano et al., 2009). The final weights of the quality attributes are shown in Table 5. Since a formative model is present, the variance inflation factor (VIF) is reported (Weiber & Mühlhaus, 2014). The correlation matrix of the four quality dimensions and the VIFs is shown in Table 6. Similar to the four dimensions, a factor for overall service quality was created from the four factor scores for technology, information, human and process quality using the regression method (DiStefano et al., 2009). The weights in Table 6 represent the regression coefficients.
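Continuing the previous sketch, one way such regression-method factor scores and second-order weights could be computed looks as follows (a simplified approximation under our own assumptions; the exact SPSS procedure may differ in detail):

```python
# Regression-method factor scores for the four dimensions, then a second-order
# overall service quality factor built from those scores.
import numpy as np

scores = pd.DataFrame(fa.transform(responses),  # factor scores per respondent
                      columns=["Technology", "Information", "Human", "Process"])

fa2 = FactorAnalyzer(n_factors=1, method="principal", rotation=None)
fa2.fit(scores)
service_quality = fa2.transform(scores).ravel()  # second-order factor scores

# The weights of the dimensions on the service quality factor are the
# coefficients of the linear combination that yields those factor scores.
weights, *_ = np.linalg.lstsq(scores.values, service_quality, rcond=None)
print(dict(zip(scores.columns, weights.round(3))))  # compare with Table 6
```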
For the collinearity diagnosis of the indicators of a formative construct, the variance inflation factor (VIF) is suitable. It indicates by which factor the variance of a parameter is increased in the presence of multicollinearity (Weiber & Mühlhaus, 2014). The VIF should be close to 1, which indicates no multicollinearity, and should not exceed 10 (Diamantopoulos & Winklhofer, 2001).
Each VIF calculated (Table 6) for the four quality dimensions has a value close to 1, so none of the factors
are redundant. Each factor can stand on its own and is suitable to represent the service quality factor.
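As a sketch, the same collinearity check can be run with statsmodels on the dimension scores from the previous sketches:

```python
# Collinearity diagnosis for the formative indicators: one VIF per dimension.
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = sm.add_constant(scores)  # constant term so the VIFs are not distorted
vifs = {col: round(variance_inflation_factor(X.values, i), 3)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)  # paper: all values between 1.178 and 1.494, i.e., close to 1
```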
Dimension     Technology   Information   Human    Weights   VIF
Technology    -            -             -        .355*     1.335
Information   .357         -             -        .341*     1.273
Human         .255         .249          -        .299*     1.178
Process       .461         .416          .366     .390*     1.494
(lower triangle: correlations among the quality dimensions)
Table 6. Correlations, Variance Inflation Factors and Weights of the Quality Dimensions related to the Service Quality Factor (* p<.001)
In a next step, as part of a confirmatory evaluation of our instrument, the convergent validity is assessed. For formative measurements, convergent validity can be assessed by the extent to which the formative construct is positively correlated with an alternative measure using different indicators for the same construct (Joseph F. Hair et al., 2017; Joe F. Hair et al., 2020). The relationship between the formative construct and the alternative measure is typically examined using correlation or regression (Cheah et al., 2018; Joe F. Hair et al., 2020).
In this paper, we consider the correlation of the constructs with the four single items that provide alternative quality measures of the service (1) technologies, (2) information, (3) human interaction and (4) processes (designated with the suffix “alt.” in Table 7). Additionally, an alternative score of overall service quality was calculated from the four alt. measures (designated “ServiceQ alt.” in Table 7) using the regression method (DiStefano et al., 2009). The second-order service quality factor, which was created out of the four formative factors for technology, information, human and process, is designated “ServiceQuality” in Table 7.
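A sketch of this convergent-validity check, assuming the alternative single-item measures are available as columns named with an “_alt” suffix (our naming, not the study's), and continuing the variable names from the earlier sketches:

```python
# Pairwise Pearson correlations between formative scores and alternative items.
from scipy.stats import pearsonr

alt = pd.read_csv("tihp_alt_items.csv")  # hypothetical alternative measures
for dim in ["Technology", "Information", "Human", "Process"]:
    r, p = pearsonr(scores[dim], alt[dim + "_alt"])
    print(f"{dim}: r = {r:.3f}, p = {p:.4f}")  # compare with Table 7
```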
Dimension        Technology alt.  Information alt.  Human alt.  Process alt.  ServiceQ alt.
Technology       .478**           .331**            .175        .169          .475**
Information      .276**           .455**            .218*       .188*         .417**
Human            .046             .177              .706**      .359**        .414**
Process          .257**           .273**            .275**      .504**        .479**
ServiceQuality   .378**           .432**            .455**      .478**        .622**
** The correlation is significant at the 0.01 level (2-tailed); * The correlation is significant at the 0.05 level (2-tailed)
Table 7. Correlations of the formative factors with alternative single-item measures
Figure 1. Analytical model of TIHP Service Quality (including Alternative Measure)
The correlations of the formative and alternative measures in Table 7 show that the highest correlation is present between the formative and the alternative single-item measure of the same construct (e.g., “Technology” and “Technology alt.”). This supports the convergent validity of our formative first-order quality dimensions.
The formative second-order construct for overall service quality also has the highest correlation with its alternative measurement (“ServiceQuality” and “ServiceQ alt.” in Table 7). This supports the convergent validity of the second-order formative service quality construct. Figure 1 shows the analytical model of the formative and alternative measures.
Discussion
In this paper, we followed an instrument development approach to measure the quality of hybrid services across different types of organizations. One goal of this instrument is to be able to measure service quality across different types of organizations; therefore, we considered a wide range of different types of organizations across the public sector. We validated this in a survey with n=121 service recipients.
Table 4 shows the percentage distribution of measured service experiences with 32 different public service organizations in 9 different service areas. A second goal of this instrument is to be able to measure quality of hybrid services. The distinction whether a service case is physical, digital or hybrid was measured by the degree of digitalization of the service. The analytical results show that over 87% of the service experiences in our sample contained both physical and digital components, which underlines the applicability of our instrument for hybrid services.
The data analysis in Table 5 shows a clear separation of the individual factors, which represent the four quality dimensions. This indicates that the quality dimensions are suitable as independent quality dimensions when surveying service quality in hybrid and context-independent cases of application. The results also show that the attributes process order, human effort, information timeliness and technology navigability have the highest impact on their quality dimension.
Although the two attributes attainable and interactivity have lower loadings, they are kept in the framework because of the formative measurement model, in which both are important from a content validity perspective. Additionally, both show positive and significant weights.
Our results show that process quality has the highest impact on service quality (Table 6). The highest correlation among the quality dimensions themselves is between technology and process, and the lowest between human and information. This shows that technology has a high impact on service processes and that there is a low relation between the provided information of the service and the service employees' work.
Designing the Framework
The research question “How should a framework and measurement scale be designed to measure quality in hybrid services across different types of organizations?” is addressed by the fact that quality dimensions are used for this framework which can be applied to different cases of application and which contain quality attributes applicable in both digital and physical services.
As a result of the research process, 28 quality attributes, sorted by weights and classified under four quality dimensions, have been developed: technology, information, human and process (TIHP). Attributes with low weights are placed towards the outer border and attributes with higher weights towards the inner border (Figure 2). This circular-centric arrangement aims to illustrate the importance of the individual attributes in relation to the overall service quality construct.
Figure 2. The TIHP Framework
Implications for Theory and Practice
The factor analysis shows that the dimensions technology, information, human and process, derived from service success factors (Wood-Harper et al., 2004), service disruptions, failures and recoveries (Abdullah et al., 2016; Hogreve et al., 2019; Michel, 2001; Tan et al., 2016) and public sector benchmarking (Dorsch & Yasin, 1998), can be clearly separated when measuring quality of hybrid services across the public sector. Therefore, theory and practice should take all four dimensions into account when measuring quality of services in hybrid and context-independent cases of application. Since the highest correlation is present between technology and process and both dimensions have the highest impact on service quality, those dimensions should receive special consideration when analyzing service quality. Furthermore, the analysis shows that the quality attributes process order, human effort, timeliness of information and navigability of technologies have the highest impact in their given quality dimension. This means that these high-impact quality attributes should have a particularly high priority when designing hybrid services. The TIHP instrument can be useful to governments, practitioners and researchers interested in measuring quality of hybrid services across different types of organizations, in order to obtain information about potential quality improvements.
Limitations and Future Research
The following limitations merit consideration. The study includes a sample of 121 selected subjects. Even if the sample appears sufficient for an initial validation of the framework and the measurement scale, an increased sample size could be useful for future investigations. In addition, the study is limited to specific age groups with a high level of education. Other values may arise in other age groups with lower educational qualifications. Since the sample was collected in the public sector, further investigations must
follow to validate the instrument in the private sector. Furthermore, the data collection took place only in Germany and Austria. In order to map context-independency not only in relation to a wide range of different types of organizations, but also to expand it to a cross-country level, studies in other countries would have to be conducted. In addition, the quality attributes that do not have such a high weight would have to be subjected to further investigation with additional analyses.
The characteristics of hybrid services are addressed in this paper by using quality dimensions, attributes and items that are as generic as possible in order to cover both hybrid services and different types of organizations. Our approach determined the degree of hybridity based on the degree of digitalization of the service. There may also be other ways to address hybridity in more detail. In further research, the specific characteristics of hybrid services could be examined more closely, for example in specific application areas and with different measurement approaches.
The focus of future research should also consider the internal perception of service quality and organizational quality. The newly developed TIHP framework could also be suitable for the internal perspective on organizational quality from the employees' point of view, by reversing the items to the working conditions of the employees. Since many quality-related decisions are systemic (e.g., legal requirements), especially in large companies in the private sector and in complex hierarchical structures in the public sector, an expansion of the framework could be useful by considering additional quality dimensions that take into account the causal structures for the respective management setup of the given organizations.
References
Abdullah, N. A. S., Noor, N. L. M. & Ibrahim, E. N. M. (2016). Contributing factors to E-government service disruptions. Transforming Government: People, Process and Policy, 10(1), 120–138.
Agrawal, A. (2009). Assessing E-Governance Online-Service Quality (EGOSQ). In G. P. Sahu, Y. K. Dwivedi & V. Weerakkody (Eds.), E-Government Development and Diffusion (pp. 133–148). IGI Global. https://doi.org/10.4018/978-1-60566-713-3.ch009
Ahearne, M., Jelinek, R. & Jones, E. (2007). Examining the effect of salesperson service behavior in a competitive context. Journal of the Academy of Marketing Science, 35(4), 603–616.
Aladwani, A. M. & Palvia, P. C. (2002). Developing and validating an instrument for measuring user-perceived web quality. Information & Management, 39(6), 467–476. https://doi.org/10.1016/S0378-7206(01)00113-6
Bagheri Lankarani, K., Honarvar, B., Kalateh Sadati, A. & Rahmanian Haghighi, M. R. (2021). Citizens' Opinion on Governmental Response to COVID-19 Outbreak: A Qualitative Study from Iran. Inquiry: The Journal of Medical Care Organization, Provision and Financing, 58, 469580211024906. https://doi.org/10.1177/00469580211024906
Balushi, T. A. & Ali, S. (2016). Exploring the Dimensions of Electronic Government Service Quality. In 28th International Conference on Software Engineering and Knowledge Engineering (pp. 341–344). KSI Research Inc. and Knowledge Systems Institute Graduate School. https://doi.org/10.18293/seke2016-061
Barnes, S. & Vidgen, R. (2000). WebQual: An exploration of web-site quality. In Proceedings of the 8th European Conference on Information Systems, Vienna, Austria.
Barnes, S. J. & Vidgen, R. T. (2001). Assessing the quality of auction web sites. In Proceedings of the 34th Annual Hawaii International Conference on System Sciences. IEEE. https://doi.org/10.1109/HICSS.2001.927087
Barnes, S. J. & Vidgen, R. (2001). An evaluation of cyber-bookshops: the WebQual method. International Journal of Electronic Commerce, 6(1), 11–30.
Barnes, S. J. & Vidgen, R. T. (2002). An integrative approach to the assessment of e-commerce quality. Journal of Electronic Commerce Research, 3(3), 114–127.
Benlian, A., Koufaris, M. & Hess, T. (2011). Service quality in software-as-a-service: Developing the SaaS-Qual measure and examining its role in usage continuance. Journal of Management Information Systems, 28(3), 85–126.
Bhattacharya, D., Gulla, U. & Gupta, M. P. (2012). E-service quality model for Indian government portals: citizens' perspective. Journal of Enterprise Information Management, 25(3), 246–271. https://doi.org/10.1108/17410391211224408
Bikfalvi, A., Rosa, J. L. & Keefe, T. N. (2013). E-Government Service Evaluation: A Multiple-item Scale for Assessing Information Quality. In M. A. Wimmer, M. Janssen, A. Macintosh, H. J. Scholl & E. Tambouris (Eds.), Electronic Government and Electronic Participation - Joint Proceedings of Ongoing Research of IFIP EGOV and IFIP ePart 2017 (pp. 54–61). Gesellschaft für Informatik e.V.
Cheah, J.-H., Sarstedt, M., Ringle, C. M., Ramayah, T. & Ting, H. (2018). Convergent validity assessment of formatively measured constructs in PLS-SEM. International Journal of Contemporary Hospitality Management, 30(11), 3192–3210. https://doi.org/10.1108/IJCHM-10-2017-0649
Churchill, G. A. (1979). A Paradigm for Developing Better Measures of Marketing Constructs. Journal of Marketing Research, 16(1), 64. https://doi.org/10.2307/3150876
Collier, J. E. & Bienstock, C. C. (2006). Measuring service quality in e-retailing. Journal of Service Research, 8(3), 260–275.
Connolly, R., Bannister, F. & Kearney, A. (2010). Government website service quality: a study of the Irish revenue online service. European Journal of Information Systems, 19(6), 649–667. https://doi.org/10.1057/ejis.2010.45
Couture, S. & Toupin, S. (2019). What does the notion of “sovereignty” mean when referring to the digital? New Media & Society, 21(10), 2305–2322. https://doi.org/10.1177/1461444819865984
Cristobal, E., Flavian, C. & Guinaliu, M. (2007). Perceived e-service quality (PeSQ): Measurement validation and effects on consumer satisfaction and web site loyalty. Managing Service Quality: An International Journal, 17(3), 317–340. https://doi.org/10.1108/09604520710744326
Cronin, J. J. & Taylor, S. A. (1992). Measuring Service Quality: A Reexamination and Extension. Journal of Marketing, 56(3), 55–68. https://doi.org/10.1177/002224299205600304
Dabholkar, P. A., Thorpe, D. I. & Rentz, J. O. (1996). A measure of service quality for retail stores: Scale development and validation. Journal of the Academy of Marketing Science, 24(1), 3–16. https://doi.org/10.1007/BF02893933
Diamantopoulos, A. & Winklhofer, H. M. (2001). Index construction with formative indicators: An alternative to scale development. Journal of Marketing Research, 38(2), 269–277.
DiStefano, C., Zhu, M. & Mindrila, D. (2009). Understanding and using factor scores: Considerations for the applied researcher. Practical Assessment, Research, and Evaluation, 14(1), Article 20.
Donnelly, M., Wisniewski, M., Dalrymple, J. F. & Curry, A. C. (1995). Measuring service quality in local government: the SERVQUAL approach. International Journal of Public Sector Management, 8(7), 15–20. https://doi.org/10.1108/09513559510103157
Dorsch, J. J. & Yasin, M. M. (1998). A framework for benchmarking in the public sector. International Journal of Public Sector Management, 11(2/3), 91–115. https://doi.org/10.1108/09513559810216410
Elling, S., Lentz, L., Jong, M. de & van den Bergh, H. (2012). Measuring the quality of governmental websites in a controlled versus an online setting with the ‘Website Evaluation Questionnaire’. Government Information Quarterly, 29(3), 383–393. https://doi.org/10.1016/j.giq.2011.11.004
Forker, L. B. (1991). Quality: American, Japanese, and Soviet perspectives. Academy of Management Perspectives, 5(4), 63–74. https://doi.org/10.5465/ame.1991.4274751
Garvin, D. A. (1984). What Does Product Quality Really Mean? Sloan Management Review, 26(1), 25–43.
Gounaris, S. & Dimitriadis, S. (2003). Assessing service quality on the web: evidence from business-to-consumer portals. Journal of Services Marketing, 17(5), 529–548. https://doi.org/10.1108/08876040310486302
Grönroos, C. (1984). A Service Quality Model and its Marketing Implications. European Journal of Marketing, 18(4), 36–44. https://doi.org/10.1108/EUM0000000004784
Hair, J. F. [Joe F.], Howard, M. C. & Nitzl, C. (2020). Assessing measurement model quality in PLS-SEM using confirmatory composite analysis. Journal of Business Research, 109, 101–110. https://doi.org/10.1016/j.jbusres.2019.11.069
Hair, J. F. [Joseph F.], Hult, G. T. M., Ringle, C. M. & Sarstedt, M. (2017). A primer on partial least squares structural equation modeling (PLS-SEM) (2nd ed.). Sage Publications.
Hartwig, K. & Billert, M. (2018). Measuring Service Quality: A Systematic Literature Review. In 26th European Conference on Information Systems (ECIS 2018), Portsmouth, UK. https://aisel.aisnet.org/ecis2018_rp/108
Hauser, J. R. & Clausing, D. (1988). The house of quality. Harvard Business Review, May 1988, 63–73.
Henriksson, A., Yi, Y., Frost, B. & Middleton, M. (2007). Evaluation instrument for e-government websites. Electronic Government, an International Journal, 4(2), 204–226.
Hidayanto, A. N., Mukhodim, W. M., Kom, F. M. & Junus, K. M. (2013). A study of service quality and important features of property websites in Indonesia. Pacific Asia Journal of the Association for Information Systems, 5(3), Article 2.
Hien, N. M. (2014). A study on evaluation of e-government service quality. International Journal of Humanities and Social Sciences, 8(1), 16–19.
Ho, C.-T. B. & Lin, W.-C. (2010). Measuring the service quality of internet banking: scale development and validation. European Business Review, 22(1), 5–24. https://doi.org/10.1108/09555341011008981
Hogreve, J., Bilstein, N. & Hoerner, K. (2019). Service Recovery on Stage: Effects of Social Media Recovery on Virtually Present Others. Journal of Service Research, 22(4), 421–439. https://doi.org/10.1177/1094670519851871
Hosseini, S. Y., Zadeh, M. B. & Bideh, A. Z. (2013). Providing a multidimensional measurement model for assessing mobile telecommunication service quality (MS-qual). Iranian Journal of Management Studies, 6, 7–29.
Huai, J. (2011). Quality evaluation of e-government public service. In 2011 International Conference on Management and Service Science (pp. 1–4). IEEE. https://doi.org/10.1109/ICMSS.2011.5999011
Huang, E. Y., Lin, S.-W. & Fan, Y.-C. (2015). MS-QUAL: Mobile service quality measurement. Electronic Commerce Research and Applications, 14(2), 126–142.
Jaakkola, E., Meiren, T., Witell, L., Edvardsson, B., Schäfer, A., Reynoso, J., Sebastiani, R. & Weitlaner, D. (2017). Does one size fit all? New service development across different types of services. Journal of Service Management, 28(2), 329–347. https://doi.org/10.1108/JOSM-11-2015-0370
Janda, S., Trocchia, P. J. & Gwinner, K. P. (2002). Consumer perceptions of Internet retail service quality. International Journal of Service Industry Management, 13(5), 412–431. https://doi.org/10.1108/09564230210447913
Jia, Q. R., Reich, B. H. & Pearson, J. M. (2008). IT service climate: an extension to IT service quality research. Journal of the Association for Information Systems, 9(5).
Jia, R. & Reich, B. H. (2013). IT service climate, antecedents and IT service quality outcomes: Some initial evidence. The Journal of Strategic Information Systems, 22(1), 51–69.
Jiang, J. J., Klein, G. & Carr, C. L. (2002). Measuring Information System Service Quality: SERVQUAL from the Other Side. MIS Quarterly, 26(2), 145–166. https://doi.org/10.2307/4132324
Kaisara, G. & Pather, S. (2011). The e-Government evaluation challenge: A South African Batho Pele-aligned service quality approach. Government Information Quarterly, 28(2), 211–221. https://doi.org/10.1016/j.giq.2010.07.008
Kettinger, W. J. & Lee, C. C. (2005). Zones of tolerance: Alternative scales for measuring information systems service quality. MIS Quarterly, 29(4), 607–623.
Ladhari, R. (2009). A review of twenty years of SERVQUAL research. International Journal of Quality and Service Sciences, 1(2), 172–198. https://doi.org/10.1108/17566690910971445
Ladhari, R. (2010). Developing e-service quality scales: A literature review. Journal of Retailing and Consumer Services, 17(6), 464–477. https://doi.org/10.1016/j.jretconser.2010.06.003
Lin, J.-S. C. & Hsieh, P.-L. (2011). Assessing the self-service technology encounters: development and validation of SSTQUAL scale. Journal of Retailing, 87(2), 194–206.
Loiacono, E. T., Watson, R. T. & Goodhue, D. L. (2002). WebQual: A measure of website quality. Marketing Theory and Applications, 13(3), 432–438.
Lu, Y., Zhang, L. & Wang, B. (2009). A multidimensional and hierarchical model of mobile service quality. Electronic Commerce Research and Applications, 8(5), 228–240.
Ma, Q., Pearson, J. M. & Tadisina, S. (2005). An exploratory study into factors of service quality for application service providers. Information & Management, 42(8), 1067–1080.
MacKenzie, S. B., Podsakoff, P. M. & Podsakoff, N. P. (2011). Construct Measurement and Validation Procedures in MIS and Behavioral Research: Integrating New and Existing Techniques. MIS Quarterly, 35(2), 293–334. https://doi.org/10.2307/23044045
McFadyen, K., Harrison, J. L., Kelly, S. J. & Scott, D. (2001). Measuring Service Quality in a Corporatised Public Sector Environment. Journal of Nonprofit & Public Sector Marketing, 9(3), 35–51. https://doi.org/10.1300/J054v09n03_03
McKinsey. (2020). How COVID-19 has pushed companies over the technology tipping point – and transformed business forever. https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/how-covid-19-has-pushed-companies-over-the-technology-tipping-point-and-transformed-business-forever
Michel, S. (2001). Analyzing service failures and recoveries: a process approach. International Journal of Service Industry Management, 12(1), 20–33. https://doi.org/10.1108/09564230110382754
Oaster, T. R. (1989). Number of alternatives per choice point and stability of Likert-type scales. Perceptual and Motor Skills, 68(2), 549–550.
Osei-Kojo, A. (2017). E-government and public service quality in Ghana. Journal of Public Affairs, 17(3), e1620. https://doi.org/10.1002/pa.1620
Osman, I. H., Anouze, A. L., Irani, Z., Al-Ayoubi, B., Lee, H., Balcı, A., Medeni, T. D. & Weerakkody, V. (2014). COBRA framework to evaluate e-government services: A citizen-centric perspective. Government Information Quarterly, 31(2), 243–256. https://doi.org/10.1016/j.giq.2013.10.009
Papadomichelaki, X. & Mentzas, G. (2012). e-GovQual: A multiple-item scale for assessing e-government service quality. Government Information Quarterly, 29(1), 98–109. https://doi.org/10.1016/j.giq.2011.08.011
Parasuraman, A., Zeithaml, V. A. & Berry, L. L. (1985). A Conceptual Model of Service Quality and Its Implications for Future Research. Journal of Marketing, 49(4), 41–50. https://doi.org/10.1177/002224298504900403
Parasuraman, A., Zeithaml, V. A. & Berry, L. L. (1988). SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64(1), 12–40.
Parasuraman, A., Zeithaml, V. A. & Malhotra, A. (2005). E-S-QUAL: A multiple-item scale for assessing electronic service quality. Journal of Service Research, 7(3), 213–233.
Pitt, L., Watson, R., King, R., Hartman, A., Hartzel, K., Papageorgiou, E. & Gerwing, T. (1994). Longitudinal measurement of service quality in information systems: a case study. In ICIS 1994 Proceedings.
Pitt, L. F., Watson, R. T. & Kavan, C. B. (1995). Service Quality: A Measure of Information Systems Effectiveness. MIS Quarterly, 19(2), 173–187. https://doi.org/10.2307/249687
Pohle, J. & Thiel, T. (2021). Digital sovereignty. In Practicing Sovereignty: Digital Involvement in Times of Crises (pp. 47–67). Transcript Verlag.
Rolland, S. & Freeman, I. (2010). A new measure of e-service quality in France. International Journal of Retail & Distribution Management, 38(7), 497–517. https://doi.org/10.1108/09590551011052106
Rossiter, J. R. (2002). The C-OAR-SE procedure for scale development in marketing. International Journal of Research in Marketing, 19(4), 305–335. https://doi.org/10.1016/S0167-8116(02)00097-6
Sá, F., Rocha, Á. & Cota, M. P. (2016). Potential dimensions for a local e-Government services quality model. Telematics and Informatics, 33(2), 270–276. https://doi.org/10.1016/j.tele.2015.08.005
Sá, F., Rocha, Á., Gonçalves, J. & Cota, M. P. (2017). Model for the quality of local government online services. Telematics and Informatics, 34(5), 413–421. https://doi.org/10.1016/j.tele.2016.09.002
Santos, J. (2003). E-service quality: a model of virtual service quality dimensions. Managing Service Quality: An International Journal, 13(3), 233–246. https://doi.org/10.1108/09604520310476490
Scott, D. & Shieff, D. (1993). Service Quality Components and Group Criteria in Local Government. International Journal of Service Industry Management, 4(4), 42–53. https://doi.org/10.1108/09564239310044280
Shannon, C. E. & Weaver, W. (1949). The Mathematical Theory of Communication. Illini Books. University of Illinois Press. https://books.google.de/books?id=dk0n_eGcqsUC
Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3), 379–423. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x
Shanshan, S. (2014). Assessment of E-government Service Quality under User Satisfaction Orientation: The Establishment of E-Govqual Model. Asian Journal of Business Management, 6(2), 111–117. https://doi.org/10.19026/ajbm.6.5335
Shareef, M. A., Dwivedi, Y. K., Stamati, T. & Williams, M. D. (2014). SQ mGov: A Comprehensive Service-Quality Paradigm for Mobile Government. Information Systems Management, 31(2), 126–142. https://doi.org/10.1080/10580530.2014.890432
Sigala, M. (2004). The ASPQual model: measuring ASP service quality in Greece. Managing Service Quality: An International Journal, 14(1), 103–114. https://doi.org/10.1108/09604520410513703
Sigwejo, A. O. (2015). Evaluating e-government services: a citizen-centric framework [Thesis]. Cape Peninsula University of Technology. http://ir.cput.ac.za/handle/20.500.11838/2285
Smith, A. K., Bolton, R. N. & Wagner, J. (1999). A model of customer satisfaction with service encounters involving failure and recovery. Journal of Marketing Research, 36(3), 356–372.
Stiglingh, M. (2014). A measuring instrument to evaluate e-service quality in a revenue authority setting. Public Relations Review, 40(2), 216–225. https://doi.org/10.1016/j.pubrev.2013.12.001
Strong, D. M. & Volkoff, O. (2010). Understanding Organization–Enterprise System Fit: A Path to Theorizing the Information Technology Artifact. MIS Quarterly, 34(4), 731–756. https://doi.org/10.2307/25750703
Swaid, S. I. & Wigand, R. T. (2009). Measuring the quality of e-service: Scale development and initial validation. Journal of Electronic Commerce Research, 10(1), 13–28.
Tan, C.-W., Benbasat, I. & Cenfetelli, R. T. (2013). IT-Mediated Customer Service Content and Delivery in Electronic Governments: An Empirical Investigation of the Antecedents of Service Quality. MIS Quarterly, 37(1), 77–109. http://www.jstor.org/stable/43825938
Tan, C.-W., Benbasat, I. & Cenfetelli, R. T. (2016). An Exploratory Study of the Formation and Impact of Electronic Service Failures. MIS Quarterly, 40(1), 1–29.
Tax, S. S. (1993). The role of perceived justice in complaint resolutions: implications for services and relationship marketing [Doctoral dissertation, Arizona State University].
Teas, R. K. (1993). Expectations, Performance Evaluation, and Consumers’ Perceptions of Quality. Journal of Marketing, 57(4), 18–34. https://doi.org/10.1177/002224299305700402
vom Brocke, J., Simons, A., Riemer, K., Niehaves, B., Plattfaut, R. & Cleven, A. (2015). Standing on the shoulders of giants: Challenges and recommendations of literature search in information systems research. Communications of the Association for Information Systems, 37(1).
Webb, H. & Webb, L. (2001). Business to consumer electronic commerce Website quality: integrating information and service dimensions. AMCIS 2001 Proceedings, 111.
Webb, H. W. & Webb, L. A. (2004). SiteQual: an integrated measure of Web site quality. Journal of Enterprise Information Management, 17(6), 430–440. https://doi.org/10.1108/17410390410566724
Weiber, R. & Mühlhaus, D. (2014). Strukturgleichungsmodellierung: Eine anwendungsorientierte Einführung in die Kausalanalyse mit Hilfe von AMOS, SmartPLS und SPSS [Structural equation modeling: An application-oriented introduction to causal analysis using AMOS, SmartPLS and SPSS] (2nd ed.). Berlin: Springer Gabler.
Wisniewski, M. (1996). Measuring service quality in the public sector: The potential for SERVQUAL. Total Quality Management, 7(4), 357–366. https://doi.org/10.1080/09544129650034710
Wolfinbarger, M. & Gilly, M. C. (2003). eTailQ: dimensionalizing, measuring and predicting etail quality. Journal of Retailing, 79(3), 183–198.
Wood-Harper, T., Ibrahim, O. & Ithnin, N. (2004). An interconnected success factor approach for service functional in Malaysian electronic government. In Proceedings of the 6th International Conference on Electronic Commerce - ICEC '04 (pp. 446–450). ACM Press. https://doi.org/10.1145/1052220.1052277
Wu, Y.-L., Tao, Y.-H. & Yang, P.-C. (2012). Learning from the past and present: measuring Internet banking service quality. The Service Industries Journal, 32(3), 477–497.
Yang, Z., Cai, S., Zhou, Z. & Zhou, N. (2005). Development and validation of an instrument to measure user perceived service quality of information presenting web portals. Information & Management, 42(4), 575–589.
Yang, Z., Jun, M. & Peterson, R. T. (2004). Measuring customer perceived online service quality: scale development and managerial implications. International Journal of Operations & Production Management, 24(11), 1149–1174.
Yoo, B. & Donthu, N. (2001). Developing a scale to measure the perceived quality of an Internet shopping site (SITEQUAL). Quarterly Journal of Electronic Commerce, 2(1), 31–45.
Zeithaml, V. A., Berry, L. L. & Parasuraman, A. (1993). The Nature and Determinants of Customer Expectations of Service. Journal of the Academy of Marketing Science, 21(1), 1–12. https://doi.org/10.1177/0092070393211001
Zeithaml, V. A., Parasuraman, A. & Malhotra, A. (2002). Service quality delivery through web sites: a critical review of extant knowledge. Journal of the Academy of Marketing Science, 30(4), 362–375.