Twenty-fourth Americas Conference on Information Systems, New Orleans, 2018
Examining Technology Use Factors of
Privacy-Enhancing Technologies:
The Role of Perceived Anonymity and Trust
Completed Research
David Harborth
Chair of Mobile Business &
Multilateral Security
Goethe University Frankfurt
Sebastian Pape
Chair of Mobile Business &
Multilateral Security
Goethe University Frankfurt

Abstract
Today's environment of data-driven business models relies heavily on collecting as much personal data as
possible. This is one of the main causes for the importance of privacy-enhancing technologies (PETs) to
protect internet users' privacy. Still, PETs are rather a niche product used by relatively few users on the
internet. We undertake a first step towards understanding the use behavior of such technologies. For that
purpose, we conducted an online survey with 141 users of the anonymity service "JonDonym". We use the
technology acceptance model as a theoretical starting point and extend it with the constructs perceived
anonymity and trust in the service. Our model explains almost half of the variance of the behavioral
intention to use JonDonym and the actual use behavior. In addition, the results indicate that both added
variables are highly relevant factors in the path model.
Keywords: Privacy-enhancing technologies (PETs), technology use, technology acceptance, perceived anonymity, trust, privacy, structural equation model.
Introduction

John Perry Barlow (Ball 2012) states: “The internet is the most liberating tool for humanity ever invented, and
also the best for surveillance. It's not one or the other. It's both.” One of the reasons for surveilling users is
a rising economic interest in the internet (Bédard 2016). However, users who have privacy concerns and
feel a strong need to protect their privacy are not helpless, they can make use of privacy-enhancing
technologies (PETs). PETs allow users to improve their privacy by eliminating or minimizing personal data
disclosure to prevent unnecessary or unwanted processing of personal data (van Blarkom et al. 2003). PETs
have a property that is not characteristic for many other technology types. They usually serve not only the
primary goals of the users, but also their secondary goals (Cranor and Garfinkel 2008). It is important to
understand that in many cases PET users make use of the PET while they pursue another goal like browsing
the internet or using instant messengers. These aims become more indistinct if the PET is integrated in the
regular service (e.g. anonymous credentials (Benenson et al. 2015)). In contrast to PETs integrated in
services, standalone PETs (e.g. overlay networks like Tor (The Tor Project 2018)) are not integrated into a
specific service and can be used for several purposes.
In this paper, we investigate how the users’ main goal (privacy, or more specifically anonymity) and their trust in
the service influence the intention to use the PET. In order to focus on the PET itself and not to interfere
with possible other goals, we choose a standalone PET as object for investigation. This allows us to focus on
the usefulness of the PET with regard to privacy protection and avoids confounders due to other goals of
the user. Therefore, we conducted a survey of the users of the anonymity service JonDonym. JonDonym is a proxy client that forwards the traffic of the users’ internet applications in encrypted form through mix cascades in order to hide the users’ IP addresses (JonDos Gmbh 2018).
To determine the use factors of this PET, we focused on perceived anonymity and trust: Since most users
do not base their decisions on any kind of formal (technical or mathematical) anonymity measurement, we
decided to measure the perceived anonymity. The resulting research question is:
RQ1: Does perceived anonymity influence the behavioral intention to use a PET?
However, perceived anonymity is a subjective perception of each user. Since we assume that most users
will not dig into mathematical proofs of the assured anonymity or challenge the implementation of the
service provider, we conclude that it is important to also consider the trust in the service provider and the
service itself:
RQ2: Does trust in the PET influence the behavioral intention to use it?
We further refine the two research questions and in particular the connection between perceived
anonymity, perceived usefulness and trust in the service (JonDonym) in section 3. This allows us to
integrate them into a technology acceptance model (TAM), which we then use to answer the research questions.
The remainder of the paper is structured as follows: Section 2 briefly introduces the JonDonym
anonymization service and lists related work on PETs and technology acceptance. In section 3, we present
the research hypotheses and describe the questionnaire and the data collection process. We assess the
quality of our empirical results with regard to reliability and validity in section 4. In section 5, we discuss
the implications of the results, elaborate on limitations of our work and conclude the paper with suggestions
for future work.
Background and Related Work
Privacy-Enhancing Technologies (PETs) is an umbrella term for different privacy protecting technologies.
Borking and Raab define PETs as a “coherent system of ICT measures that protects privacy [...] by
eliminating or reducing personal data or by preventing unnecessary and/or undesired processing of
personal data; all without losing the functionality of the data system” (Borking and Raab 2001, p. 1).
In this paper, we investigate the role of perceived anonymity and trust in the context of a technology
acceptance model for the case of the anonymity service JonDonym (JonDos Gmbh 2018). Comparable to
Tor (The Tor Project 2018), JonDonym is an anonymity service and a PET. However, unlike Tor, it is a
proxy system based on mix cascades. It is available for free with several limitations, like a restricted
maximum download speed. In addition, there are different premium rates without these limitations that
differ with regard to duration and included data volume. Thus, JonDonym offers several different tariffs
and is not based on donations like Tor. The actual number of users is unknown since the service does not keep track of it. JonDonym was also the focus of an earlier user study on user characteristics of privacy services (Spiekermann 2005). However, that study is rather descriptive and does not focus on users’ beliefs and concerns.
Previous non-technical work on PETs considers mainly usability studies and does not primarily focus on
privacy concerns and related trust and risk beliefs of PET users. For example, Lee et al. (2017) assess the
usability of the Tor Launcher and propose recommendations to overcome the found usability issues.
Comparable studies to ours are the ones by Benenson et al. (2014, 2015) and Krontiris et al. (2015), who
investigate acceptance factors for an anonymous credentials service. However, in their case the anonymous
credential service is integrated into an evaluation system. Thus, the users of their anonymous credential
service had a clearly defined primary task (evaluation of the course system) and a secondary task (ensure
privacy protection). Benenson et al. (2014) focused on the measurement of the perceived usefulness of the
anonymous credential system (the secondary goal), but state that considering the perceived usefulness for
the primary goals as well may change the relationship between the variables in their model. In contrast to
their study, we examine a standalone PET, and thus can focus on privacy protection as the primary goal of
the users with respect to the PET.
We base our research on the well-known technology acceptance model (TAM) by Davis (1985, 1989). For
analyzing the cause-effect relationships between the latent variables, we use structural equation modelling
(SEM). There are two main approaches for SEM, namely covariance-based SEM (CB-SEM) and partial least
squares SEM (PLS-SEM). Since our research goal is to predict the target construct actual use behavior of
JonDonym, we use PLS-SEM for our analysis (Hair et al. 2011). In the following subsections, we discuss the
research model and hypotheses based on the extended TAM, the questionnaire and the data collection
process. The demographic questions were not mandatory to fill out. This was done on purpose since we
assumed that most of the participants are highly sensitive with respect to their personal data. Therefore, we
refrain from discussing the demographics in our research context. This decision is backed by Singh and Hill, who found no statistically significant differences across gender, income groups, educational levels, or political affiliation in the desire to protect one’s privacy (Singh and Hill 2003).
Research Model and Hypotheses
PETs are structurally different from formerly investigated technologies in the job context or hedonic
information systems. In general, it is obvious to users what a certain technology does. For example, if users
employ a spreadsheet program in their job environment, they will see the immediate result of their action
when the program provides them a calculation. The same holds for hedonic technologies which provide an
immediate feedback to the user during the interaction. However, this interaction and feedback structure is
different with PETs. The main impact a user can achieve by using JonDonym is anonymity. However, most
PETs are designed not to harm the user experience. Apart from some negative side effects, such as a loss of speed while browsing the internet or an increased occurrence of captchas (Chirgwin 2016), the user may not be able to detect the PET at all. The direct effects of the increased anonymity generally go undetected, since they consist of long-term consequences, e.g. different advertisements, unless the user visits special websites that test anonymity or display the internet address of the request.
Therefore, perceptions about the achieved impact of using the technology should be specifically
incorporated in any model dealing with drivers of use behavior. This matches the observation that most
users do not base their decisions on any kind of formal (technical or mathematical) anonymity
measurement. Thus, we adapted a formerly validated construct named "perceived anonymity" to the case
of JonDonym (Benenson et al. 2015). The construct mainly asks for the perceptions of users about their
level of anonymity achieved by the use of the PET. Due to the natural importance of anonymity for a PET,
we argue that these perceptions will have an important effect on the trust in the technology. Thus, the more
users think that the PET will create anonymity during their online activities, the more they will trust the
PET (H1a). Creating anonymity for its users is the main purpose of a PET. Thus, we hypothesize that perceived anonymity also has a positive effect on the perceived usefulness of the PET to protect the user's privacy (H1b).
H1a: Perceived anonymity achieved by using JonDonym has a positive effect on trust in JonDonym.
H1b: Perceived anonymity achieved by using JonDonym has a positive effect on the perceived
usefulness of JonDonym to protect the user's privacy.
Trust is a diverse concept integrated in several models in the IS domain. It is shown that different trust
relationships exist in the context of technology adoption of information systems (Söllner et al. 2016). Trust
can refer to the technology (in our case JonDonym) itself as well as to the service provider (in our case
JonDos). However, JonDonym is the company’s main product. Therefore, we argue that it is rather difficult
for users to distinguish which label refers to the technology itself and which refers to the company. Thus,
we decided to ask for trust in the service (JonDonym), assuming that the difference to ask for trust in the
company is negligible. The items for measuring trust and the effects of trust on other variables of the
technology acceptance model are adapted from Pavlou (2003). Thus, we hypothesize that trust influences
behavioral intention, perceived usefulness and perceived ease of use positively.
H2a: Trust in JonDonym has a positive effect on the behavioral intention to use the technology.
H2b: Trust in JonDonym has a positive effect on the perceived usefulness of JonDonym to protect
the user's privacy.
H2c: Trust in JonDonym has a positive effect on the perceived ease of use of JonDonym.
The theoretical underpinning of hypotheses H3, H4a, H4b and H5 can be adapted from the original work on TAM by Davis (1985, 1989) since PETs do not differ from other technologies with regard to the relationships of perceived usefulness, perceived ease of use, behavioral intention to use and actual use behavior. However,
perceived usefulness refers explicitly to privacy protection as it is the sole purpose of the technology. Thus,
we hypothesize:
H3: The perceived usefulness of protecting the user's privacy has a positive effect on the behavioral
intention to use the technology.
H4a: Perceived ease of use has a positive effect on the behavioral intention to use the technology.
H4b: Perceived ease of use has a positive effect on the perceived usefulness of JonDonym to protect
the user's privacy.
H5: The behavioral intention to use JonDonym has a positive effect on the actual use behavior.
Questionnaire Composition and Data Collection Procedure
The questionnaire constructs are adapted from different sources. The constructs Perceived ease of use
(PEOU) and perceived usefulness are adapted from Venkatesh and Davis (2000), behavioral intention (BI)
is adapted from Venkatesh et al. (2012), trust in the PET service is adapted from Pavlou (2003) and
perceived anonymity is adapted from Benenson et al. (2015). The actual use behavior is measured with a
ten-item frequency scale (Rosen et al. 2013). We conducted the study with German- and English-speaking
JonDonym users. Thus, we administered two questionnaires. All items for the German questionnaire had
to be translated into German since all of the constructs are adapted from English literature.
To ensure content validity of the translation, we followed a rigorous translation process. First, we translated
the English questionnaire into German with the help of a certified translator (translators are standardized
following the DIN EN 15038 norm). The German version was then given to a second independent certified
translator who retranslated the questionnaire to English. This step was done to ensure the equivalence of
the translation. Third, a group of five academic colleagues checked the two English versions with regard to
this equivalence. All items were found to be equivalent. The items can be found in Table 1.
Since we investigate the drivers of the use behavior of JonDonym, we collected data from actual users of
the PET. We installed the surveys on a university server and managed them with the survey software
LimeSurvey (version 2.63.1) (Schmitz 2015). The links to the English and German version were distributed
with the beta version of the JonDonym browser and published on the official JonDonym homepage. This
made it possible to address the actual users of the PET in the most efficient manner. In sum, 416
participants started the questionnaire (173 for the English version and 243 for the German version). Of
those 416 approached participants, 141 (53 for the English version and 88 for the German version) remained after deleting unfinished sets and all participants who answered a test question in the middle of the survey incorrectly.
We tested the model using SmartPLS version 3.2.7 (Ringle et al. 2015). Before looking at the result of the
structural model and discussing its implications, we discuss the measurement model, and check for the
reliability and validity of our results. This is a precondition of being able to interpret the results of the
structural model. Furthermore, it is recommended to report the computational settings. For the PLS
algorithm, we choose the path weighting scheme with a maximum of 300 iterations and a stop criterion of 10^-7. For the bootstrapping procedure, we use 5000 bootstrap subsamples and no sign changes as the
method for handling sign changes during the iterations of the bootstrapping procedure. In addition, it is
relevant to mention that we met the suggested minimum sample size with 141 datasets considering the
threshold of ten times the number of structural paths headed towards a latent construct in the model (Hair
et al. 2011).
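To make the sample-size criterion concrete, the following sketch applies the ten-times rule to the path structure implied by hypotheses H1a–H5. The code and path counts are our illustration, not part of the original analysis.

```python
# Illustrative check of the "ten times rule" for PLS-SEM minimum sample size
# (Hair et al. 2011): the sample must be at least ten times the largest
# number of structural paths pointing at any latent construct.
# Path counts below follow the hypothesized model (H1a-H5).

inbound_paths = {
    "Trust": 1,                  # PA -> Trust (H1a)
    "Perceived usefulness": 3,   # PA, Trust, PEOU -> PU (H1b, H2b, H4b)
    "Perceived ease of use": 1,  # Trust -> PEOU (H2c)
    "Behavioral intention": 3,   # Trust, PU, PEOU -> BI (H2a, H3, H4a)
    "Actual use": 1,             # BI -> USE (H5)
}

minimum_n = 10 * max(inbound_paths.values())
print(minimum_n)         # 30
print(141 >= minimum_n)  # True: the sample of 141 meets the threshold
```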
Measurement Model Assessment
As the model is measured solely reflectively, we need to evaluate the internal consistency reliability,
convergent validity and discriminant validity to assess the measurement model properly (Hair et al. 2011).
Internal consistency reliability (ICR) measurements indicate how well certain indicators of a construct
measure the same latent phenomenon. Two standard approaches for assessing ICR are Cronbach’s α and
the composite reliability. The values of both measures should be between 0.7 and 0.95 for research that
builds upon accepted models. Values of Cronbach’s α are seen as a lower bound and values of the composite
reliability as an upper bound of the assessment (Hair et al. 2017). Table 1 includes the ICR of the variables
in the last two rows. It can be seen that all values for Cronbach’s α and the composite reliability are above
the lower threshold of 0.7 and no value is above 0.95. In sum, ICR is established for our variables.
Behavioral intention (BI)
BI1. I intend to continue using JonDonym in the future.
BI2. I will always try to use JonDonym in my daily life.
BI3. I plan to continue to use JonDonym frequently.
Perceived ease of use (PEOU)
PEOU1. My interaction with JonDonym is clear and understandable.
PEOU2. Interacting with JonDonym does not require a lot of my mental effort.
PEOU3. I find JonDonym to be easy to use.
PEOU4. I find it easy to get JonDonym to do what I want it to do.
Perceived anonymity (PA)
PA1. JonDonym is able to protect my anonymity during my online activities.
PA2. With JonDonym I obtain a sense of anonymity in my online activities.
PA3. JonDonym can prevent threats to my anonymity when being online.
Trust
Trust1. JonDonym is trustworthy.
Trust2. JonDonym keeps promises and commitments.
Trust3. I trust JonDonym because they keep my best interests in mind.
Perceived usefulness (PU)
PU1. Using JonDonym improves the performance of my privacy protection.
PU2. Using JonDonym increases my level of privacy.
PU3. Using JonDonym enhances the effectiveness of my privacy protection.
PU4. I find JonDonym to be useful in protecting my privacy.
Table 1. Loadings and Cross-Loadings of the Reflective Items and ICR Measures
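The two ICR measures can be computed from raw item scores and standardized outer loadings as sketched below. The loadings in the example are hypothetical placeholders, not the study's values.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha from an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings) -> float:
    """Composite reliability from standardized outer loadings:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

# Hypothetical loadings for a three-item construct (not the paper's values):
print(round(composite_reliability([0.85, 0.80, 0.90]), 3))  # 0.887
```

Both values should fall between 0.7 and 0.95 for research building on accepted models, with Cronbach's α read as the lower and composite reliability as the upper bound (Hair et al. 2017).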
In a next step, we assess the convergent validity to determine the degree to which indicators of a certain
reflective construct are explained by that construct. For that, we calculate the outer loadings of the
indicators of the constructs (indicator reliability) and evaluate the average variance extracted (AVE) (Hair
et al. 2017). Loadings above 0.7 imply that the indicators have much in common, which is desirable for
reflective measurement models. Table 1 shows the outer loadings in bold on the diagonal. All loadings are
higher than 0.7. Convergent validity for the construct is assessed by the AVE. AVE is equal to the sum of the
squared loadings divided by the number of indicators. A threshold of 0.5 is acceptable, indicating that the
construct explains at least half of the variance of the indicators. The first column of Table 2 presents the
AVE of the constructs. All values are well above 0.5, demonstrating convergent validity.
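The AVE computation described above can be sketched in a few lines; the loadings used here are again hypothetical.

```python
# AVE = sum of squared standardized loadings / number of indicators
# (Hair et al. 2017). Values above 0.5 mean the construct explains at
# least half of its indicators' variance.
def average_variance_extracted(loadings) -> float:
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical loadings (not the paper's values):
print(round(average_variance_extracted([0.85, 0.80, 0.90]), 3))  # 0.724
```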
The next step for assessing the measurement model is the evaluation of discriminant validity. It measures
the degree of uniqueness of a construct compared to other constructs. Comparable to the convergent
validity assessment, two approaches are used for investigating discriminant validity. The first approach,
assessing cross-loadings, is dealing with single indicators. All outer loadings of a certain construct should
be larger than its cross-loadings with other constructs (Hair et al. 2017). Table 1 illustrates the cross-
loadings as off-diagonal elements. All cross-loadings are smaller than the outer loadings, fulfilling the first
assessment approach of discriminant validity. The second approach is on the construct level and compares
the square root of the constructs’ AVE with the correlations with other constructs. The square root of the
AVE of a single construct should be larger than the correlation with other constructs (Fornell-Larcker
criterion). Table 2 contains the square root of the AVE as on-diagonal values. All values are larger than the
correlations with other constructs, indicating discriminant validity.
Table 2. Discriminant Validity with AVEs and Construct Correlations
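The Fornell-Larcker check on the construct level can be sketched as follows. The AVEs and correlations below are hypothetical placeholders, not the values from Table 2.

```python
import numpy as np

def fornell_larcker_ok(ave, corr) -> bool:
    """Fornell-Larcker criterion: the square root of each construct's AVE
    must exceed that construct's correlations with all other constructs."""
    sqrt_ave = np.sqrt(np.asarray(ave, dtype=float))
    corr = np.asarray(corr, dtype=float)
    off_diag = corr - np.diag(np.diag(corr))  # zero out the diagonal
    return bool(np.all(sqrt_ave[:, None] > np.abs(off_diag)))

# Hypothetical AVEs and construct correlations (not the paper's values):
ave = [0.72, 0.68, 0.75]
corr = [[1.00, 0.55, 0.40],
        [0.55, 1.00, 0.50],
        [0.40, 0.50, 1.00]]
print(fornell_larcker_ok(ave, corr))  # True
```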
The last step of the measurement model assessment is the check for common method bias (CMB). CMB can
occur if data is gathered with a self-reported survey at one point in time in one questionnaire (Malhotra et
al. 2006). Since this is the case in our research design, the need to test for CMB arises. An unrotated
principal component factor analysis is performed with the software package STATA 14.0 to conduct Harman’s single-factor test to address the issue of CMB (Podsakoff et al. 2003). The assumption of the test is that CMB is not an issue if no single factor results from the factor analysis or if the first factor does not account for the majority of the total variance. The test shows that four factors have
eigenvalues larger than 1 which account for 75.48% of the total variance. The first factor explains 45.35% of
the total variance. Thus, no single factor emerged and the first factor does not explain the majority of the
variance. Hence, we argue that CMB is not likely to be an issue in the data set.
Structural Model Assessment
We first test for possible collinearity problems before discussing the results of the structural model.
Collinearity is present if two predictor variables are highly correlated with each other. This is important
since collinearity can otherwise bias the results heavily. To address this issue, we assess the inner variance
inflation factor (inner VIF). VIF values above 5 indicate that collinearity between constructs is present
(Hair et al. 2017). For our model, the highest VIF is 1.688. Thus, collinearity is apparently not an issue.
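The VIF computation can be sketched as follows: each predictor's latent variable scores are regressed on the remaining predictors, and VIF = 1 / (1 - R²). This is an illustration with synthetic scores, not the model's data.

```python
import numpy as np

def inner_vifs(scores):
    """Variance inflation factor for each column of an (n x k) matrix of
    predictor scores: regress each predictor on the others via least squares,
    then VIF = 1 / (1 - R^2). Values above 5 signal collinearity."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    vifs = []
    for j in range(k):
        y = scores[:, j]
        X = np.column_stack([np.ones(n), np.delete(scores, j, axis=1)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r2 = 1 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        vifs.append(1.0 / (1.0 - r2))
    return vifs
```

With a highest inner VIF of 1.688, as reported above, all constructs would pass this check comfortably.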
Figure 1 presents the results of the path estimations and the R²-values of the target variables behavioral intention and actual use behavior. In addition, we provide the R²-values for trust, perceived ease of use and perceived usefulness. R²-values are considered weak at around 0.25, moderate at around 0.50 and substantial at around 0.75 (Hair et al. 2011). Based on this classification, the R²-values for behavioral intention and actual use are
rather moderate in size. Thus, our model explains 42.9% of the variance in the behavioral intention to use
the PET and 46.1% of the variance of the actual use behavior. This result is very good considering the
parsimonious measurement model. In addition, the explained variance of perceived usefulness is 54.7%,
indicating that the three variables, perceived anonymity, trust and perceived ease of use explain more than
half of the variance of this construct.
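One way to operationalize the benchmark reading applied above is to map each R² value to its nearest benchmark. This nearest-benchmark rule is our interpretation choice, not a procedure prescribed by the paper.

```python
def classify_r2(r2: float) -> str:
    """Read an R^2 value against the nearest PLS-SEM benchmark
    (Hair et al. 2011): ~0.25 weak, ~0.50 moderate, ~0.75 substantial."""
    benchmarks = {0.25: "weak", 0.50: "moderate", 0.75: "substantial"}
    return min(benchmarks.items(), key=lambda kv: abs(kv[0] - r2))[1]

# The paper's reported values all sit closest to the "moderate" benchmark:
for name, r2 in {"BI": 0.429, "USE": 0.461, "PU": 0.547}.items():
    print(name, classify_r2(r2))
```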
Thus, we identified three major drivers of users' perceptions with regard to the usefulness of a privacy-
enhancing technology. The strongest effect is exerted by the users' perceived anonymity provided by the
service (H1b confirmed). This result is not surprising considering that providing anonymity is the main goal
of a PET. In addition, perceived anonymity has a strong and statistically significant effect on trust (H1a
confirmed). Thus, users' trust in the PET is mainly driven by their perceptions that the service can create anonymity.
As hypothesized in H2a - H2c, trust has a significant positive effect on the behavioral intention to use the
PET, the perceived usefulness and the perceived ease of use. Therefore, trust emerges as a highly relevant
concept when determining the drivers of users' use behavior of PETs. It has the strongest effect size (0.416)
on behavioral intention. As discussed earlier, hypotheses H3 - H5 are adapted from the original work on
TAM (Davis 1985, 1989) and can be confirmed for the case of PETs.
Figure 1. Path Estimates and Adjusted R²-values of the Structural Model
Since the effects of perceived anonymity and trust on behavioral intention and the actual use behavior are
partially indirect, we determine and analyze the total effects for these variables (cf. Table 3). It can be seen
that all total effects are relatively large and highly statistically significant. Thus, perceived anonymity and
trust strongly influence the target variables BI and USE.
Table 3. Total Effects for the Variables Perceived Anonymity and Trust
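The total-effect computation behind Table 3 can be sketched as the direct effect plus the sum over indirect paths of the products of their coefficients. Only the Trust → BI direct path (0.416) is reported in the text; all coefficients below are hypothetical placeholders, not the paper's estimates.

```python
# Total effect = direct effect + sum over indirect paths of the products of
# their path coefficients. For Trust -> BI in this model, the indirect routes
# run through perceived usefulness (PU) and perceived ease of use (PEOU).
# All coefficients here are hypothetical placeholders for illustration.

paths = {
    ("Trust", "BI"): 0.42,
    ("Trust", "PU"): 0.25,
    ("Trust", "PEOU"): 0.30,
    ("PU", "BI"): 0.20,
    ("PEOU", "BI"): 0.15,
    ("PEOU", "PU"): 0.20,
}

direct = paths[("Trust", "BI")]
indirect = (paths[("Trust", "PU")] * paths[("PU", "BI")]
            + paths[("Trust", "PEOU")] * paths[("PEOU", "BI")]
            + paths[("Trust", "PEOU")] * paths[("PEOU", "PU")] * paths[("PU", "BI")])
total_effect = direct + indirect
print(round(total_effect, 3))  # 0.527
```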
As a next step, we assessed the predictive relevance of the two added variables for behavioral intention and actual use behavior. A simple measure for the relevance of perceived anonymity and trust is to delete both variables and run the model again. The results show that the R²-value for behavioral intention decreases to 31.9% (eleven percentage points less). Thus, without the two new variables, the explained variance for behavioral intention decreases by roughly a quarter (25.64%). A more advanced measure for predictive relevance is the Q² measure. It indicates the out-of-sample predictive relevance of the structural model with regard to the endogenous latent variables based on a blindfolding procedure (Hair et al. 2017). We used an omission distance d=7; recommended values for d are between five and ten. Furthermore, we report the Q² values of the cross-validated redundancy approach, since this approach is based on the results of both the measurement model and the structural model. Detailed information about the calculation cannot be provided due to space limitations; for further information see Chin (1998). For our model, Q² is calculated for behavioral intention and use behavior. Values above 0 indicate that the model has the property of predictive relevance. Omitting both new variables leads to a decrease of Q² for behavioral intention from 0.304 to 0.223. Neither R² nor Q² changed for actual use when deleting the new variables, since there is no direct relation from these constructs to the actual use construct and behavioral intention alone explains a large share of the variance in use.
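The core of the Q² statistic is the ratio of squared prediction errors to squared deviations from the mean. The sketch below shows this formula with hypothetical observed values and blindfolded predictions; a full blindfolding procedure with omission distance d is omitted for brevity.

```python
import numpy as np

def q_squared(observed: np.ndarray, predicted: np.ndarray) -> float:
    """Q^2 = 1 - SSE/SSO: sum of squared prediction errors for the omitted
    data points over the sum of squared deviations from the mean.
    Values above 0 indicate predictive relevance (Hair et al. 2017)."""
    sse = np.sum((observed - predicted) ** 2)
    sso = np.sum((observed - observed.mean()) ** 2)
    return 1 - sse / sso

# Hypothetical indicator values and blindfolded predictions
# (not the study's data; the paper used omission distance d = 7):
obs = np.array([3.0, 4.0, 5.0, 4.0, 3.0])
pred = np.array([3.2, 3.8, 4.7, 4.1, 3.3])
print(round(q_squared(obs, pred), 3))  # 0.904
```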
Discussion and Conclusion
Research on privacy-enhancing technologies has so far mainly focused on the technical aspects of the technologies. However, a successful implementation and adoption of PETs requires a profound understanding of the perceptions and behaviors of actual and potential users of the technologies. The IS domain has the proper methods and knowledge to tackle such questions. Thus, with this paper we investigated actual users of an existing PET as a first step to address this research problem. Our results indicate that the basic rationale of technology use models holds for privacy-enhancing technologies. However, the newly introduced variables perceived anonymity and trust strongly improved the explanatory power of the structural model for the case of a PET and should be considered for comparable research problems in future work.
Although we checked for several reliability and validity issues, certain limitations might impact our results.
First, the sample size of 141 participants is relatively small for a quantitative study. However, since we
reached the suggested minimum sample size for the applied method, we argue that our results are still valid.
In addition, it is very difficult to gather data from actual users of PETs since they form a comparably small population. It is also relevant to mention that we did not offer any financial rewards for participation. A second limitation concerns possible self-report biases (e.g. social desirability). We addressed this possible issue by collecting the data fully anonymously. Furthermore, demographic questions
were not mandatory to fill out. Third, mixing results of the German and English questionnaire could be a
source of errors. On the one hand, this procedure was necessary to achieve the minimum sample size. On the other hand, we followed a very thorough translation procedure to ensure the highest possible level of equivalence. Thus, we argue that this limitation did not affect the results. Lastly, we did not control for the
participants' actual or former use of different standalone PETs. This experience might have an impact on
their assessments of JonDonym.
We found strong effects for the influence of the perceived anonymity on the behavioral intention to use a
PET (RQ1). In contrast to the findings of Benenson et al. (2015), who found that trust in the PET has no
statistically significant impact on the intention to use the service, we also found a strong effect of trust in
the PET on the behavioral intention to use it (RQ2). One reason for the difference might be that the trust in
the service and the trust in the service provider were very likely equivalent in our use case. However, to
adequately address the difference further research is needed. From a practical point of view, our results
indicate that PET providers should aim to establish a trustworthy service with a high level of transparency
in order to increase the perceived anonymity of users.
Future work can build on the proposed relationships and extensions of our model to investigate the
acceptance and use of PETs in more detail. We could explain almost half of the variance in the target
constructs, behavioral intention and actual use behavior, with a rather parsimonious model. Thus, the
current model provides a good starting point for investigating other comparable PETs, such as Tor or a
VPN service. In addition, new privacy-specific or technology-specific variables could be added to
strengthen our understanding of PET usage. Based on our findings, future work could also investigate the
identified relationships in more detail with a qualitative research approach. In a next step, it would be
interesting to investigate non-users' perceptions of PETs and compare the findings to those of actual
users. This would enable developers and marketers to specifically address the issues hindering a broader
diffusion of PETs, which would be a real contribution to strengthening the personal right to privacy in
times of ever-increasing personal data collection on the internet.
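The statement that the model explains almost half of the variance in the target constructs refers to the R² statistic of the endogenous variables. As a rough, self-contained illustration of what "explained variance" means here, the following sketch computes R² for an ordinary least-squares regression of a behavioral-intention score on two predictors. This is not the paper's actual PLS-SEM estimation (which additionally measures latent constructs via multiple indicators), and the data, coefficients, and variable names are entirely synthetic.

```python
import random

def gauss_solve(M, v):
    """Solve M b = v by Gaussian elimination with partial pivoting."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    b = [0.0] * n
    for r in range(n - 1, -1, -1):
        b[r] = (A[r][n] - sum(A[r][c] * b[c] for c in range(r + 1, n))) / A[r][r]
    return b

def ols_r2(X, y):
    """Fit y on X (with intercept) by least squares and return R^2,
    the share of variance in y explained by the predictors."""
    n = len(y)
    A = [[1.0] + list(row) for row in X]  # prepend intercept column
    k = len(A[0])
    # Normal equations: (A^T A) b = A^T y
    ata = [[sum(A[i][p] * A[i][q] for i in range(n)) for q in range(k)]
           for p in range(k)]
    aty = [sum(A[i][p] * y[i] for i in range(n)) for p in range(k)]
    b = gauss_solve(ata, aty)
    yhat = [sum(b[p] * A[i][p] for p in range(k)) for i in range(n)]
    ybar = sum(y) / n
    ss_res = sum((y[i] - yhat[i]) ** 2 for i in range(n))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Synthetic 7-point-scale data for n = 141 respondents (purely illustrative,
# not the study's survey data; coefficients 0.5 and 0.4 are made up).
random.seed(42)
anonymity = [random.uniform(1, 7) for _ in range(141)]
trust = [random.uniform(1, 7) for _ in range(141)]
intention = [0.5 * a + 0.4 * t + random.gauss(0, 1.0)
             for a, t in zip(anonymity, trust)]

r2 = ols_r2(list(zip(anonymity, trust)), intention)
print(f"R^2 = {r2:.2f}")  # share of explained variance, between 0 and 1
```

In a PLS-SEM path model the construct scores are themselves estimated from indicator items, but the interpretation of R² for the structural part is the same: the proportion of variance in an endogenous construct accounted for by its predecessors in the path model.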
Acknowledgments
This research was partly funded by the German Federal Ministry of Education and Research (BMBF)
under grant number 16KIS0371. In addition, we thank Rolf Wendolsky (JonDos GmbH) for his help
during the data collection process.
References
Ball, J. 2012. “Hacktivists in the Frontline Battle for the Internet,” The Guardian. (, accessed
February 26, 2018).
Bédard, M. 2016. “The Underestimated Economic Benefits of the Internet,” in Regulation Series, The
Montreal Economic Institute.
Benenson, Z., Girard, A., and Krontiris, I. 2015. “User Acceptance Factors for Anonymous Credentials: An
Empirical Investigation,” 14th Annual Workshop on the Economics of Information Security (WEIS),
pp. 133.
Benenson, Z., Girard, A., Krontiris, I., Liagkou, V., Rannenberg, K., and Stamatiou, Y. C. 2014. “User
Acceptance of Privacy-ABCs: An Exploratory Study,” in Human–Computer Interaction, pp. 375–386.
van Blarkom, G. W., Borking, J. J., and Olk, J. G. E. 2003. “PET,” in Handbook of Privacy and Privacy-
Enhancing Technologies.
Borking, J. J., and Raab, C. 2001. “Laws, PETs and Other Technologies for Privacy Protection,” Journal of
Information, Law and Technology (1), pp. 1–14.
Chin, W. W. 1998. “The Partial Least Squares Approach to Structural Equation Modeling,” in Modern
Methods for Business Research, G. A. Marcoulides (ed.), Mahwah, NJ: Lawrence Erlbaum, pp. 295–336.
Chirgwin, R. 2016. “CloudFlare Shows Tor Users the Way out of CAPTCHA Hell,” The Register.
(, accessed February 23, 2018).
Cranor, L. F., and Garfinkel, S. 2008. Security and Usability: Designing Secure Systems That People Can
Use, Farnham: O’Reilly.
Davis, F. D. 1985. “A Technology Acceptance Model for Empirically Testing New End-User Information
Systems: Theory and Results,” Massachusetts Institute of Technology.
Davis, F. D. 1989. “Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information
Technology,” MIS Quarterly (13:3), pp. 319–340.
Hair, J., Hult, G. T. M., Ringle, C. M., and Sarstedt, M. 2017. A Primer on Partial Least Squares Structural
Equation Modeling (PLS-SEM), SAGE Publications.
Hair, J., Ringle, C. M., and Sarstedt, M. 2011. “PLS-SEM: Indeed a Silver Bullet,” The Journal of Marketing
Theory and Practice (19:2), pp. 139–152.
JonDos GmbH. 2018. “Official Homepage of JonDonym.” (, accessed January 16, 2018).
Krontiris, I., Benenson, Z., Girard, A., Sabouri, A., Rannenberg, K., and Schoo, P. 2015. “Privacy-ABCs as a
Case for Studying the Adoption of PETs by Users and Service Providers,” in APF, pp. 104–123.
Lee, L., Fifield, D., Malkin, N., Iyer, G., Egelman, S., and Wagner, D. 2017. “A Usability Evaluation of Tor
Launcher,” Proceedings on Privacy Enhancing Technologies (3), pp. 90–109.
Malhotra, N. K., Kim, S. S., and Patil, A. 2006. “Common Method Variance in IS Research: A Comparison
of Alternative Approaches and a Reanalysis of Past Research,” Management Science (52:12), pp. 1865–1883.
Pavlou, P. A. 2003. “Consumer Acceptance of Electronic Commerce: Integrating Trust and Risk with the
Technology Acceptance Model,” International Journal of Electronic Commerce (7:3), pp. 101134.
Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., and Podsakoff, N. P. 2003. “Common Method Biases in
Behavioral Research: A Critical Review of the Literature and Recommended Remedies,” Journal of
Applied Psychology (88:5), pp. 879–903.
Ringle, C. M., Wende, S., and Becker, J. M. 2015. SmartPLS 3, Boenningstedt: SmartPLS GmbH.
Rosen, L. D., Whaling, K., Carrier, L. M., Cheever, N. A., and Rokkum, J. 2013. “The Media and Technology
Usage and Attitudes Scale: An Empirical Investigation,” Computers in Human Behavior (29:6), pp. 2501–2511.
Schmitz, C. 2015. LimeSurvey: An Open Source Survey Tool, Hamburg, Germany: LimeSurvey Project Team.
Singh, T., and Hill, M. E. 2003. “Consumer Privacy and the Internet in Europe: A View from Germany,”
Journal of Consumer Marketing (20:7), pp. 634–651.
Söllner, M., Hoffmann, A., and Leimeister, J. M. 2016. “Why Different Trust Relationships Matter for
Information Systems Users,” European Journal of Information Systems (25:3), pp. 274–287.
Spiekermann, S. 2005. “The Desire for Privacy: Insights into the Views and Nature of the Early Adopters of
Privacy Services,” International Journal of Technology and Human Interaction (1:1), pp. 74–83.
The Tor Project. 2018. “Tor.” (, accessed February 20, 2018).
Venkatesh, V., and Davis, F. D. 2000. “A Theoretical Extension of the Technology Acceptance Model: Four
Longitudinal Field Studies,” Management Science (46:2), pp. 186–204.
Venkatesh, V., Thong, J., and Xu, X. 2012. “Consumer Acceptance and Use of Information Technology:
Extending the Unified Theory of Acceptance and Use of Technology,” MIS Quarterly (36:1), pp. 157–178.