How Privacy Concerns and Trust and Risk Beliefs Influence Users’
Intentions to Use Privacy-Enhancing Technologies - The Case of Tor
David Harborth
Chair of Mobile Business and Multilateral Security
Goethe University Frankfurt am Main
Sebastian Pape
Chair of Mobile Business and Multilateral Security
Goethe University Frankfurt am Main
Due to an increasing collection of personal data by internet companies and several data breaches, research related to privacy has gained importance in recent years in the information systems domain. Privacy concerns can strongly influence users’ decision to use a service. The Internet Users Information Privacy Concerns (IUIPC) construct is one operationalization to measure the impact of privacy concerns on the use of technologies. However, when applied to a privacy-enhancing technology (PET) such as an anonymization service, the original rationales do not hold anymore. In particular, an inverted impact of trusting and risk beliefs on behavioral intentions can be expected. We show that the IUIPC model needs to be adapted for the case of PETs. In addition, we extend the original causal model by including trusting beliefs in the anonymization service itself. A survey among 124 users of the anonymization service Tor shows that these trusting beliefs have a significant effect on the actual use behavior of the PET.
1. Introduction
“Surveillance is the business model of the internet.
Everyone is under constant surveillance by many
companies, ranging from social networks like Facebook
to cellphone providers.” [ ]. Privacy and the related concerns have been discussed since the very beginning of computer sharing [ ]. Due to a rising economic interest in personal data in recent years [ ], privacy is gaining importance in individuals’ everyday lives.
The majority of internet users has privacy concerns and
feels a strong need to protect their privacy [4].
A popular model for measuring and explaining privacy concerns of online users is the one focusing on the Internet Users Information Privacy Concerns (IUIPC) construct by Malhotra et al. [ ]. Their research involves a theoretical framework and an instrument for operationalizing privacy concerns, as well as a causal model for this construct including trust and risk beliefs about the online companies’ data handling of personal information. (This research was partly funded by the German Federal Ministry of Education and Research (BMBF) with grant number 16KIS0371.) The IUIPC construct has been used in various contexts, e.g. the Internet of Things [ ], internet transactions [ ] and mobile apps [ ]. Originally,
the IUIPC instrument was applied to use cases for
individuals’ decisions to disclose personal information
to service providers. However, for privacy enhancing
technologies (PETs) the primary purpose is to help
users to protect personal information when using regular
internet services. As a consequence, it is necessary to
reconsider the impact of trust and risk beliefs within IUIPC’s causal model with respect to PETs. We expect this impact to be inverted, and thus the trust model needs to be adapted for the investigation of PETs. In addition, trust in the PET itself is an important factor to consider, since Tor is used by a diverse group of people whose lives might be endangered if their identity is revealed (e.g. whistleblowers, opposition supporters, etc. [ ]). To the best of our knowledge, the IUIPC construct has never been applied to a PET. Thus, we address the following research questions:

1. What influence do privacy concerns and the associated trust and risk beliefs have on the behavioral intention to use Tor and on its actual use?

2. What influence does trust in Tor itself have on the behavioral intention and the actual use?
For that purpose, we conducted an online survey with users of one of the most widely used anonymization services, Tor, which has approximately 2,000,000 regular users [ ]. We collected 124 complete questionnaires out
of 314 participants for the empirical analysis. Our results
contribute to the understanding of users’ perceptions
about PETs and indicate how privacy concerns and trust
and risk beliefs influence the use behavior of PETs.
The remainder of the paper is structured as follows: Sect. 2 introduces Tor and lists related work on PETs. In Sect. 3,
we present research hypotheses and the data collection
process. We assess the reliability and validity of our
results in Sect. 4. In Sect. 5, we discuss the implications
and limitations of our work and suggest future work.
2. Background and Related Work
Privacy-Enhancing Technologies (PETs) is an umbrella term for different privacy-protecting technologies. PETs can be defined as a “coherent system of ICT measures that protects privacy [...] by eliminating or reducing personal data or by preventing unnecessary and/or undesired processing of personal data; all without losing the functionality of the data system” [10, p. 1].

In this paper, we investigate the privacy, trust and risk beliefs associated with PETs for the case of the anonymity service Tor [ ]. Tor is a free-to-use anonymity service based on the onion routing principle.
Everybody can operate a server (relay) over which the
encrypted traffic is routed. The routing occurs randomly
over several different servers distributed world-wide. Tor
aims to protect against an adversary who can observe or
control some fraction of network traffic, but it does not
protect against a global passive adversary, which means
an adversary who can observe all network connections.
Among the available PETs, Tor has one of the largest user bases, with approximately 2,000,000 active users [ ].
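The layered-encryption idea behind the onion routing principle described above can be illustrated with a minimal sketch. The XOR “cipher” and the three fixed relay keys are hypothetical stand-ins chosen for brevity; real Tor builds telescoping circuits with TLS and per-hop symmetric keys.

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher: XOR with a repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap_onion(message: bytes, relay_keys: list[bytes]) -> bytes:
    # The client adds one encryption layer per relay: innermost layer
    # for the exit relay, outermost for the entry (guard) relay.
    for key in reversed(relay_keys):
        message = xor_bytes(message, key)
    return message

def route(onion: bytes, relay_keys: list[bytes]) -> bytes:
    # Each relay peels exactly one layer with its own key; no single
    # relay sees both the sender and the final plaintext request.
    for key in relay_keys:
        onion = xor_bytes(onion, key)
    return onion

keys = [b"guard-key-000001", b"middle-key-00002", b"exit-key-0000003"]
cell = wrap_onion(b"GET /", keys)
assert cell != b"GET /"               # the guard relay never sees plaintext
assert route(cell, keys) == b"GET /"  # all three layers peeled off
```

The randomness of Tor's path selection over world-wide relays is what prevents any single observer from linking source and destination; the sketch only shows the layering.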
Related work on PETs considers mainly usability
studies and does not primarily focus on privacy concerns
and related trust and risk beliefs of PET users. For
example, Lee et al. [ ] assess the usability of the Tor Launcher and propose recommendations to overcome the usability issues they found. Benenson et al. [ ] investigate
acceptance factors for anonymous credentials. Among
other things, they find that trust in the PET has no
statistically significant impact on the intention to use
the service. This result is relevant for our study since
we hypothesize that trust in Tor has a positive effect on
the actual use of the service (see Section 3.1). Another
highly relevant study for our research is the one by
Brecht et al. [ ], who investigate acceptance factors
of anonymization services. Among other variables, they
hypothesize a positive influence of privacy concerns on
the intention to use such a service. Although they find
a statistically significant effect, the effect is relatively
small (effect size of 0.061) compared to other variables
like perceived usefulness or internet privacy awareness.
In contrast to our study, Brecht et al. [ ] use a different operationalization of privacy concerns (the one by Dinev and Hart [ ]), and they do not investigate it in the nomological network with trust and risk beliefs.
3. Methodology
We base our research on the Internet Users Information Privacy Concerns (IUIPC) model by Malhotra et al. [ ]. The original research on this
model investigates the role of users’ information privacy
concerns in the context of releasing personal information
to a marketing service provider. Since we are focusing
on the role of privacy concerns, trust and risk beliefs
for the case of a PET (i.e. Tor), we adapt the original
model according to the following logic. Originally, the
service in question can be seen as the attacker (from a
privacy point of view). If we apply the model to a service
with the opposite goal, namely protecting the privacy of
its users, certain relationships need to change. We will
elaborate on the detailed changes in the next section. In addition to this, we extend the original model by trusting beliefs in the PET itself. We argue that the level of trust
in a PET is a crucial factor determining the use decision.
For analyzing the cause-effect relationships between
the latent (unobserved) variables, we use structural
equation modelling (SEM). Since our research goal is
to predict the target constructs behavioral intention and
actual use behavior of Tor, we use partial least squares
SEM (PLS-SEM) for our analysis [ ] rather than covariance-based SEM. In the following subsections, we discuss the hypotheses based on the IUIPC model [ ], the questionnaire and the data collection process.
3.1. Research Hypotheses
The structural model contains several relationships between exogenous and endogenous variables (cf. Fig. 1). We develop our research hypotheses for these relationships along the hypotheses of the IUIPC model [ ]. IUIPC is operationalized as a second-order construct of the sub-constructs collection (COLL), awareness (AWA) and control (CONTROL). Thus, the users’ privacy concerns are determined by their concerns about “[...] individual-specific data possessed by others relative to the value of benefits received” [ , p. 338], the control they have over their own data (i.e. possibilities to change or opt out) and the “[...] degree to which a consumer is concerned about his/her awareness of organizational information privacy practices” [ , p. 339]. (Due to space limitations, we do not elaborate on second-order constructs in more detail; for an extensive discussion see Steward [ ].)

The effect of IUIPC on the behavioral intention is mediated by trusting beliefs and risk beliefs. Trusting beliefs are users’ perceptions about the behavior of online firms to protect the users’ personal information. In contrast, risk beliefs represent users’ perceptions about losses associated with providing personal data to online firms [ ]. Thus, the higher the privacy concerns of a user, the lower are his or her trusting beliefs and the higher are his or her risk beliefs. In addition, a higher level of trust is assumed to decrease the risk beliefs. Thus, we hypothesize:

H1: Internet Users Information Privacy Concerns (IUIPC) have a negative effect on Trusting Beliefs (TB).

H2: Internet Users Information Privacy Concerns (IUIPC) have a positive effect on Risk Beliefs (RB).

H3: Trusting Beliefs (TB) have a negative effect on Risk Beliefs (RB).
Since we investigate the use of a specific PET, we extend the model by trust in Tor itself, using the adapted trust construct by Pavlou [ ]. In order to protect their privacy, users with higher privacy concerns are assumed to place more trust in the privacy-enhancing technology than in online firms which process personal data. This holds especially because we surveyed users of a PET, who are assumed to take great care of their privacy. Therefore, we hypothesize:

H4: Internet Users Information Privacy Concerns (IUIPC) have a positive effect on the trusting beliefs in Tor (TBTor).
Trust is an important factor in users’ acceptance decisions [ ]. Mcknight et al. [ ] show that trust in a specific technology positively affects individuals’ intention to explore the technology and to use more of its features in a post-adoption context. Especially for the case of privacy protection, we assume that trust in the technology is a major factor for the intention to use it. For a further discussion of the concept of trust in a technology, we refer to Lankton et al. [20]. We hypothesize:

H5: Trusting beliefs in Tor (TBTor) have a positive effect on the behavioral intention to use Tor (BI).
Trusting beliefs plausibly have a positive effect, and risk beliefs a negative effect, on releasing data and thus on the intention to use a regular service. However, for the use of a PET, we assume these effects reverse: the higher the trusting beliefs in online firms, the lower the use frequency of Tor, since the protection of data becomes less important. Following this rationale, a higher degree of risk beliefs about the data processing of online firms leads to a higher degree of use. Thus, we hypothesize:

H6: Trusting beliefs (TB) have a negative effect on the behavioral intention to use Tor (BI).

H7: Risk beliefs (RB) have a positive effect on the behavioral intention to use Tor (BI).
Research on the relationship between behavioral intention and use behavior goes back to Fishbein et al. [ ]. Later research indicates a positive link between the two constructs [22]. Thus, we hypothesize:

H8: The behavioral intention to use Tor (BI) has a positive effect on the actual use behavior (USE).
3.2. Data Collection
The questionnaire constructs are adapted from the original IUIPC paper [ ]. We conducted the study with German- and English-speaking Tor users. Thus, we administered two questionnaires. All items for the
we administered two questionnaires. All items for the
German questionnaire had to be translated into German
since all of the constructs are adapted from English
literature. To ensure content validity of the translation,
we followed a rigorous translation process. First, we
translated the English questionnaire into German with the
help of a certified translator (translators are standardized
following the DIN EN 15038 norm). Second, the German version was given to another independent certified translator, who retranslated the questionnaire into English; this step was done to ensure the equivalence of the translation. Third, a group of five academic colleagues checked the two English versions with regard to this equivalence. All items were found to be equivalent. The
items of the English version can be found in Appendix B.
Since we investigate the effect of privacy concerns,
trust and risk beliefs on the use of Tor, we collected data from actual users. We installed the surveys on a university server and managed them with the survey software LimeSurvey (version 2.72.6) [ ]. The links to the English and German versions were distributed over
multiple channels on the internet. Although there are
approximately 2,000,000 active users of the service, it
was relatively difficult to gather the necessary number
of complete answers for a valid and reliable quantitative
analysis. Thus, to foster future research about Tor users,
we provide an overview of every distribution channel
in the Appendix A. In sum, 314 participants started
the questionnaire (245 for the English version, 40 for
the English version posted in hidden service forums and
29 for the German version). Of those 314 approached
participants, 135 (105 for the English version, 13 for the
English version posted in hidden service forums and 17
for the German version) filled out the questionnaires
completely. After deleting all sets from participants
who answered a test question in the middle of the
survey incorrectly, 124 usable data sets remained for
the following analysis.
The demographic questions were not mandatory. This was done on purpose, since we assumed that most of the participants are highly sensitive with respect to their personal data. Therefore, we had to refrain from discussing the demographics in our research context. This decision is backed up by Singh and Hill,
who found no statistically significant differences across
gender, income groups, educational levels, or political
affiliation in the desire to protect one’s privacy [4].
4. Results
We tested the model using SmartPLS version 3.2.7 [ ]. Before looking at the results of the structural
model and discussing its implications, we discuss the
measurement model, and check for the reliability and
validity of our results. This is a precondition of
being able to interpret the results of the structural
model. Furthermore, it is recommended to report the
computational settings. For the PLS algorithm, we choose the path weighting scheme with a maximum of 300 iterations and a stop criterion of [ ]. For
the bootstrapping procedure, we use 5000 bootstrap
subsamples and no sign changes as the method for
handling sign changes during the iterations of the
bootstrapping procedure.
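The percentile bootstrap underlying these settings can be sketched as follows. The sketch resamples with replacement and recomputes a statistic on each resample; for brevity it uses the sample mean rather than PLS path coefficients, and the `bootstrap_ci` helper and its parameters are our own illustration, not SmartPLS’s implementation.

```python
import random

def bootstrap_ci(sample, statistic, n_boot=5000, alpha=0.05, seed=1):
    # Draw n_boot resamples with replacement, recompute the statistic on
    # each, and return the (1 - alpha) percentile confidence interval.
    rng = random.Random(seed)
    n = len(sample)
    stats = sorted(
        statistic([sample[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

mean = lambda xs: sum(xs) / len(xs)
data = [2.0, 3.0, 5.0, 4.0, 6.0, 3.0, 4.0]
lo, hi = bootstrap_ci(data, mean)
assert lo <= mean(data) <= hi  # the point estimate lies inside the CI
```

A path estimate whose bootstrap confidence interval excludes zero is judged statistically significant; the 5,000 subsamples reported above follow the same resampling logic.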
4.1. Assessment of the Measurement Model
As the model is measured solely reflectively, we
need to evaluate the internal consistency reliability,
convergent validity and discriminant validity to assess
the measurement model properly [15].
Internal Consistency Reliability
Internal consistency reliability (ICR) measurements indicate how well certain indicators of a construct measure the same latent phenomenon. Two standard approaches for assessing ICR are Cronbach’s α and the composite reliability. The values of both measures should be between 0.7 and 0.95 for research that builds upon accepted models. Values of Cronbach’s α are seen as a lower bound and values of the composite reliability as an upper bound of the assessment [ ]. Table 1 includes the ICR of the variables in the last two rows. It can be seen that all values for Cronbach’s α are above the lower threshold of 0.7 except for RB. However, the composite reliability value for RB is higher than 0.7. Therefore, we argue that ICR is not a major issue for this variable. For all variables, no value is above 0.95. Values above that upper threshold would indicate that the indicators measure the same dimension of the latent variable, which is not optimal with regard to validity [ ]. In sum, ICR is established for our variables. Since IUIPC and USE are single-item constructs, they have ICR values of 1.
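Both ICR measures can be computed directly from item scores and standardized loadings. A minimal sketch (the helper names are ours; the composite-reliability formula assumes standardized indicators with error variance 1 - λ²):

```python
def cronbach_alpha(items):
    # items: one equally long list of scores per indicator.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    k = len(items)
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum)^2 + sum of error variances),
    # with error variance 1 - loading^2 for standardized indicators.
    s = sum(loadings)
    err = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + err)

# Using the three AWA loadings reported in Table 1:
cr_awa = composite_reliability([0.911, 0.923, 0.891])
assert 0.93 < cr_awa < 0.94  # matches the reported 0.934
```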
Convergent Validity
Convergent validity determines the degree to which the indicators of a certain reflective construct are explained by that construct. It is assessed by calculating the outer loadings of the indicators of the constructs (indicator reliability) and by looking at the average variance extracted (AVE) [ ]. Loadings above 0.7 imply that the indicators have much in common, which is desirable for reflective measurement models [ ]. Table 1 shows the outer loadings in bold on the diagonal. All loadings were higher than 0.7, except for TRUST4 with a value of 0.275; therefore, we dropped this item after an initial analysis. Convergent validity at the construct level is assessed by the AVE, which is equal to the sum of the squared loadings divided by the number of indicators. A threshold of 0.5 is acceptable, indicating that the construct explains at least half of the variance of its indicators [ ]. Table 2 presents the AVE of our constructs in parentheses in the first column. All values are well above 0.5, demonstrating convergent validity.
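The AVE calculation described above is a one-liner; a sketch using the four COLL loadings reported in Table 1 (the helper name is ours):

```python
def ave(loadings):
    # AVE = sum of squared standardized loadings / number of indicators.
    return sum(l ** 2 for l in loadings) / len(loadings)

coll = [0.888, 0.812, 0.906, 0.850]          # COLL loadings from Table 1
assert abs(ave(coll) - 0.748) < 0.001        # matches Table 2: COLL (0.748)
assert abs(ave(coll) ** 0.5 - 0.865) < 0.001 # sqrt(AVE), Table 2 diagonal
```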
Discriminant Validity
Discriminant validity measures the degree of uniqueness of a construct compared to other constructs. Comparable to the convergent validity assessment, two approaches are used for investigating discriminant validity. The first approach, assessing cross-loadings, deals with single indicators: all outer loadings of a certain construct should be larger than its cross-loadings with other constructs [ ]. Table 1 illustrates the cross-loadings as off-diagonal elements. All cross-loadings are smaller than the outer loadings, fulfilling the first assessment approach of discriminant validity. The second approach operates at the construct level and compares the square root of a construct’s AVE with its correlations with other constructs. The square root of the AVE of a single construct should be larger than its correlation with other constructs (Fornell-Larcker criterion) [ ]. Table 2 contains the square root of the AVE on the diagonal. All values are larger than the correlations with other constructs, indicating discriminant validity. Since there can be problems in determining discriminant validity with both approaches, researchers propose the heterotrait-monotrait ratio (HTMT) as a superior approach for assessing discriminant validity [ ]. HTMT divides between-trait correlations by within-trait correlations, thereby providing an estimate of what the true correlation of two constructs would be if the measurement were flawless [ ]. Values close to 1 for HTMT indicate a lack of discriminant validity; a conservative threshold is 0.85 [ ]. Table 3 contains the values for HTMT, and no value except the one between IUIPC and COLL (0.888) is above the suggested threshold of 0.85. To assess whether the HTMT statistics are significantly different from 1, we conducted a bootstrapping procedure with 5,000 subsamples to obtain the confidence interval in which the true HTMT value lies with a 95% chance. The HTMT measure requires that no confidence interval contains the value 1. The conducted analysis shows that this is the case, and thus discriminant validity is established for our model.
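The HTMT computation can be sketched from raw item scores as follows. This is a simplified reading of the heterotrait-monotrait ratio (mean absolute between-construct item correlation over the geometric mean of the within-construct means); helper names and toy data are ours:

```python
def mean(xs):
    return sum(xs) / len(xs)

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def htmt(items_a, items_b):
    # Mean absolute heterotrait (between-construct) item correlation,
    # divided by the geometric mean of the mean monotrait
    # (within-construct) item correlations.
    hetero = mean([abs(pearson(i, j)) for i in items_a for j in items_b])
    def mono(items):
        k = len(items)
        return mean([abs(pearson(items[i], items[j]))
                     for i in range(k) for j in range(i + 1, k)])
    return hetero / (mono(items_a) * mono(items_b)) ** 0.5

# Two indicator sets measuring the same trait should give HTMT near 1,
# i.e. a clear lack of discriminant validity:
a = [[1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]]
assert abs(htmt(a, a) - 1.0) < 1e-9
```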
Common Method Bias
The common method bias (CMB) can occur if data is gathered with a self-reported survey at one point in time in one questionnaire [ ]. Since this is the case in our research design, the need to
Table 1. Loadings and Cross-Loadings of the Reflective Items and Internal Consistency Reliability

Item | AWA | CONTROL | COLL | RB | TB | TBTor | BI | IUIPC | USE
AWA1 | 0.911 | 0.234 | 0.302 | 0.223 | -0.136 | 0.066 | 0.202 | 0.630 | -0.124
AWA2 | 0.923 | 0.230 | 0.219 | 0.136 | -0.155 | 0.072 | 0.198 | 0.586 | -0.171
AWA3 | 0.891 | 0.323 | 0.315 | 0.221 | -0.103 | 0.066 | 0.250 | 0.660 | -0.059
CONTROL1 | 0.095 | 0.825 | 0.271 | 0.106 | -0.167 | 0.137 | 0.215 | 0.475 | -0.021
CONTROL2 | 0.405 | 0.821 | 0.226 | 0.245 | -0.156 | 0.132 | 0.237 | 0.577 | -0.033
CONTROL3 | 0.174 | 0.756 | 0.438 | 0.214 | -0.345 | 0.098 | 0.099 | 0.578 | 0.068
COLL1 | 0.264 | 0.358 | 0.888 | 0.547 | -0.468 | 0.176 | 0.301 | 0.742 | 0.045
COLL2 | 0.206 | 0.332 | 0.812 | 0.205 | -0.335 | 0.232 | 0.376 | 0.665 | 0.042
COLL3 | 0.292 | 0.359 | 0.906 | 0.444 | -0.446 | 0.272 | 0.376 | 0.764 | 0.071
COLL4 | 0.304 | 0.309 | 0.850 | 0.467 | -0.403 | 0.182 | 0.316 | 0.720 | 0.091
RB1 | 0.196 | 0.200 | 0.487 | 0.880 | -0.453 | 0.217 | 0.258 | 0.429 | -0.015
RB2 | 0.170 | 0.160 | 0.326 | 0.831 | -0.298 | 0.156 | 0.233 | 0.312 | 0.015
RB3 | 0.155 | 0.252 | 0.364 | 0.857 | -0.354 | 0.233 | 0.221 | 0.359 | 0.007
RB4 | 0.245 | 0.231 | 0.374 | 0.827 | -0.260 | 0.257 | 0.326 | 0.396 | 0.042
RB5 | -0.105 | -0.145 | -0.427 | -0.702 | 0.401 | -0.004 | -0.144 | -0.339 | 0.003
TB1 | -0.149 | -0.261 | -0.455 | -0.417 | 0.898 | -0.097 | -0.265 | -0.412 | -0.050
TB2 | -0.118 | -0.186 | -0.410 | -0.377 | 0.887 | -0.033 | -0.194 | -0.347 | -0.109
TB3 | -0.107 | -0.339 | -0.397 | -0.395 | 0.775 | -0.131 | -0.155 | -0.387 | -0.007
TB5 | -0.069 | -0.009 | -0.219 | -0.070 | 0.663 | -0.109 | -0.169 | -0.158 | -0.007
TBTor1 | 0.064 | 0.149 | 0.257 | 0.159 | -0.087 | 0.879 | 0.561 | 0.225 | -0.050
TBTor2 | 0.077 | 0.121 | 0.236 | 0.244 | -0.124 | 0.925 | 0.554 | 0.209 | -0.020
TBTor3 | 0.059 | 0.138 | 0.169 | 0.178 | -0.079 | 0.883 | 0.488 | 0.169 | 0.002
BI1 | 0.236 | 0.240 | 0.355 | 0.228 | -0.249 | 0.586 | 0.865 | 0.384 | 0.166
BI2 | 0.262 | 0.202 | 0.322 | 0.319 | -0.152 | 0.465 | 0.859 | 0.363 | 0.075
BI3 | 0.143 | 0.158 | 0.363 | 0.234 | -0.233 | 0.522 | 0.923 | 0.323 | 0.216
IUIPC | 0.691 | 0.685 | 0.837 | 0.451 | -0.431 | 0.226 | 0.404 | 1.000 | -0.009
USE | -0.128 | 0.008 | 0.073 | 0.010 | -0.059 | -0.026 | 0.177 | -0.009 | 1.000
Cronbach’s α | 0.894 | 0.722 | 0.887 | 0.567 | 0.831 | 0.877 | 0.859 | 1.000 | 1.000
Comp. Reliability | 0.934 | 0.843 | 0.922 | 0.817 | 0.884 | 0.924 | 0.914 | 1.000 | 1.000
test for CMB arises. An unrotated principal component factor analysis was performed with the software package STATA 14.0 to conduct Harman’s single-factor test to address the issue of CMB [ ]. The assumption of the test is that CMB is not an issue if no single factor results from the factor analysis or if the first factor does not account for the majority of the total variance [ ]. The test shows that seven factors have eigenvalues larger than 1, which together account for 75.35% of the total variance; the first factor explains 30.29% of the total variance. Based on the results of previous literature [ ], we argue that CMB is not likely to be an issue in the data set.
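The logic of Harman’s single-factor test, checking whether the first unrotated factor dominates, can be approximated by the largest eigenvalue of the item correlation matrix. A standard-library-only sketch using power iteration (the toy correlation matrix is hypothetical, not our survey data):

```python
def harman_first_factor_share(corr):
    # Power iteration for the largest eigenvalue of the item correlation
    # matrix; its share of the total variance (= number of items, since
    # standardized items have unit variance) approximates the variance
    # explained by the first unrotated factor.
    p = len(corr)
    v = [1.0] * p
    for _ in range(200):
        w = [sum(corr[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(corr[i][j] * v[j] for j in range(p))
              for i in range(p))
    return lam / p

# Hypothetical correlations: two clean 2-item blocks, uncorrelated, so
# no single factor accounts for the majority of the variance.
corr = [[1.0, 0.9, 0.0, 0.0],
        [0.9, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.9],
        [0.0, 0.0, 0.9, 1.0]]
share = harman_first_factor_share(corr)
assert share < 0.5  # first factor below the majority threshold
```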
4.2. Assessment and Results of the Structural Model

To assess the structural model, we follow the steps proposed by Hair et al. [ ], which include an assessment of possible collinearity problems, of the path coefficients, of the level of R², of the effect size f², of the predictive relevance Q² and of the effect size q². We address these evaluation steps to ensure the predictive power of the model with regard to the target constructs.
Collinearity
Collinearity is present if two predictor variables are highly correlated with each other. To address this issue, we assess the inner variance inflation factors (VIF). VIF values above 5 indicate that collinearity between constructs is present. For our model, the highest VIF is 1.278; thus, collinearity is not an issue.
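For the two-predictor case, the VIF reduces to 1/(1 - r²), where r is the correlation between the predictors; a minimal sketch (the helper name and toy data are ours):

```python
def vif_two_predictors(x1, x2):
    # With only two predictors, the R^2 of regressing one on the other
    # equals the squared Pearson correlation, so VIF = 1 / (1 - r^2).
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1 = sum((a - m1) ** 2 for a in x1) ** 0.5
    s2 = sum((b - m2) ** 2 for b in x2) ** 0.5
    r = cov / (s1 * s2)
    return 1 / (1 - r ** 2)

# Moderately correlated toy predictors stay well below the threshold of 5:
assert vif_two_predictors([1.0, 2.0, 3.0, 4.0], [1.0, 3.0, 2.0, 5.0]) < 5
```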
Significance and Relevance of Model Relationships
Figure 1 shows the results of the path estimations and the R² values of the endogenous variables BI and USE. The R² is 0.400 for BI and 0.031 for USE. Thus, our model explains 40% of the variance of BI and 3.1% of the variance of USE. There are different proposals for interpreting the size of this value. We choose the very conservative thresholds proposed by Hair et al. [ ], where R² values around 0.25 are weak, around 0.50 moderate and around 0.75 substantial. Based on this classification, the R² value for BI is weak to moderate and the value for USE is very weak. For use behavior
Table 2. Discriminant Validity with AVEs and Construct Correlations

Construct (AVE) | AWA | BI | COLL | CONTROL | IUIPC | RB | TB | TBTor | USE
AWA (0.825) | 0.908
BI (0.780) | 0.240 | 0.883
COLL (0.748) | 0.309 | 0.395 | 0.865
CONTROL (0.642) | 0.291 | 0.228 | 0.393 | 0.801
IUIPC (1.000) | 0.691 | 0.404 | 0.837 | 0.685 | 1.000
RB (0.675) | 0.215 | 0.291 | 0.486 | 0.242 | 0.451 | 0.822
TB (0.658) | -0.143 | -0.244 | -0.480 | -0.283 | -0.431 | -0.434 | 0.811
TBTor (0.803) | 0.075 | 0.599 | 0.249 | 0.152 | 0.226 | 0.216 | -0.109 | 0.896
USE (1.000) | -0.128 | 0.177 | 0.073 | 0.008 | -0.009 | 0.010 | -0.059 | -0.026 | 1.000

Note: AVEs in parentheses in the first column. Square roots of the AVEs are shown on the diagonal; construct correlations are the off-diagonal elements.
Table 3. Heterotrait-Monotrait Ratio (HTMT)

Construct | AWA | BI | COLL | CONTROL | IUIPC | RB | TB | TBTor
BI | 0.274
COLL | 0.343 | 0.452
CONTROL | 0.346 | 0.290 | 0.486
IUIPC | 0.728 | 0.436 | 0.888 | 0.798
RB | 0.238 | 0.337 | 0.541 | 0.294 | 0.478
TB | 0.159 | 0.278 | 0.528 | 0.336 | 0.439 | 0.449
TBTor | 0.084 | 0.681 | 0.280 | 0.192 | 0.240 | 0.244 | 0.131
USE | 0.138 | 0.186 | 0.077 | 0.060 | 0.009 | 0.021 | 0.058 | 0.029
several participants answered that they never use Tor (21 participants answered “never”), although they stated that they have used the service for several years (answers to the question “How many years have you been using Tor?” with a median of 6 years and an average of 6.87 years on a seven-point Likert scale). The correlation coefficient between the years of using Tor and the use frequency is very small, negative and statistically insignificant, with -0.0222 and a p-value of 0.8066. These 21 answers massively bias the results for the relationship between behavioral intention and actual use behavior (the median value of use frequency is 5). However, we cannot explain why the participants answered like this. They either misunderstood the question, answered it intentionally like this to disguise their activity with Tor, or found the scale for use behavior inappropriate. The latter might be due to the fact that the scale only contains “once a month” as the lowest use frequency besides “never”. It is possible that these 21 users use Tor only a few times per year, or that they used Tor some years ago and have not used it since; they might therefore have chosen “never” as an answer. We used an established scale to measure use behavior [ ], but recommend considering this issue in future research studies with a similar context.
The path coefficients are presented on the arrows
connecting the exogenous and endogenous constructs in
Figure 1. Statistical significance is indicated by asterisks, ranging from three asterisks for p-values smaller than 0.01 to one asterisk for p-values smaller than 0.10. We chose this p-value range since p-values tend to be larger when the sample size is comparably small, and we also wanted to capture significant effects at the 10% level. The p-value indicates the probability of obtaining the observed estimate if no true effect exists; thus, the lower the p-value, the stronger the evidence that the given relationship exists. The relevance of the path coefficients is shown by the relative size of each coefficient compared to the other explanatory variables [16].
It can be seen that IUIPC has a relatively large
statistically significant negative effect on trusting beliefs
and a positive effect on risk beliefs. The effect of
IUIPC on trusting beliefs in Tor is significant, positive
and relatively weak compared to the other significant
effects in the model. The construct trusting beliefs has
a statistically significant medium-sized negative effect
on risk beliefs. The effects of trusting beliefs and risk beliefs on behavioral intention are not statistically significant (for both constructs). In contrast, the effect of trusting beliefs in Tor on behavioral intention is highly statistically significant, positive and large with 0.560.
Effect Sizes f²
The f² effect size measures the impact of a construct on an endogenous variable by omitting it from the analysis and assessing the resulting change in the R² value [ ]. The values are assessed based on thresholds by Cohen [ ], who defines effects as small, medium and large for values of 0.02, 0.15 and 0.35, respectively. Table 4 shows the results of the evaluation. Values in italics indicate small effects, values
Figure 1. Path Estimates and Adjusted R² Values of the Structural Model
Table 4. f² and q² Effect Size Assessment Values

Exogenous Variable | f² (BI) | q² (BI)
RB | 0.016 | 0.018
TB | 0.025 | 0.025
TBTor | 0.499 | 0.766
in bold indicate medium effects, and values in bold italics indicate large effects. All other values have no substantial effect. The results correspond to those of the previous analysis of the path coefficients: trusting beliefs have a small effect on the behavioral intention to use Tor, whereas, as the path estimates have shown, trust in Tor has a large effect on the behavioral intention.
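The f² effect size follows the standard formula f² = (R²_incl - R²_excl) / (1 - R²_incl), and substituting Q² values gives q². A sketch (the excluded-R² value of 0.101 is back-computed for illustration from the reported R² = 0.400 for BI and f² of 0.499 for TBTor, not a reported figure):

```python
def f_squared(r2_included, r2_excluded):
    # f^2 = (R^2_incl - R^2_excl) / (1 - R^2_incl); substituting Q^2
    # values for R^2 yields q^2, judged by the same Cohen thresholds.
    return (r2_included - r2_excluded) / (1 - r2_included)

# With R^2 = 0.400 for BI, an f^2 of about 0.499 corresponds to a
# hypothetical drop to R^2 = 0.101 once TBTor is omitted:
assert abs(f_squared(0.400, 0.101) - 0.499) < 0.01
assert f_squared(0.400, 0.101) > 0.35  # a large effect per Cohen
```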
Predictive Relevance Q²
The Q² measure indicates the out-of-sample predictive relevance of the structural model with regard to the endogenous latent variables, based on a blindfolding procedure [ ]. We used an omission distance of d = 7; recommended values for d are between five and ten [ ]. Furthermore, we report the Q² values of the cross-validated redundancy approach, since this approach is based on the results of both the measurement model and the structural model [ ]. Detailed information about the calculation cannot be provided due to space limitations; for further information see Chin [ ]. Values above 0 indicate that the model has predictive relevance. In our case, the Q² value is equal to 0.278 for BI and 0.002 for USE. Since both values are larger than zero, predictive relevance of the model is established.
Effect Sizes q²
The assessment of q² follows the same logic as that of f²: it is based on the Q² values of the endogenous variables and calculates the individual predictive power of the exogenous variables by omitting them and comparing the change in Q² [ ]. All individual values for q² are calculated with an omission distance d of seven. The results are shown in Table 4. The thresholds for the f² interpretation can be applied here, too [ ]. Values in italics indicate small effects and values in bold indicate medium effects. All other values have no substantial effect. As before, only the trust in Tor has a large effect, implying the highest predictive power of all included exogenous variables.
5. Discussion and Conclusion
Based on our results, hypotheses H1 to H5 and H8 can be confirmed, whereas H6 and H7 cannot (cf. Table 5). The results for H6 and H7 are surprising, considering that they contrast with the rationale explained in Sect. 3.1 and with the results from previous literature [...]. However, when effect sizes are rather small, the relatively small sample size of 124 may lead to statistical non-significance. Thus, we cannot rule out that the effects of risk beliefs and trusting beliefs on behavioral intention would be significant with a larger sample. In our data, only the degree of trust in the PET (Tor) has a direct significant effect on the intention to use the PET. This result shows that a reputation of being trustworthy is crucial for a PET provider. The trusting beliefs in the PET itself are positively influenced by the users' information privacy concerns. Thus, the results imply that users with a higher level of privacy concerns tend to place more trust in a PET.
Table 5. Summary of the Results
H1: Internet Users Information Privacy Concerns (IUIPC) have a negative effect on Trusting Beliefs (TB) (confirmed)
H2: Internet Users Information Privacy Concerns (IUIPC) have a positive effect on Risk Beliefs (RB) (confirmed)
H3: Trusting Beliefs (TB) have a negative effect on Risk Beliefs (RB) (confirmed)
H4: Internet Users Information Privacy Concerns (IUIPC) have a positive effect on the trusting beliefs in Tor (TBTor) (confirmed)
H5: Trusting beliefs in Tor (TBTor) have a positive effect on the behavioral intention to use Tor (BI) (confirmed)
H6: Trusting beliefs (TB) have a negative effect on the behavioral intention to use Tor (BI) (not confirmed)
H7: Risk beliefs (RB) have a positive effect on the behavioral intention to use Tor (BI) (not confirmed)
H8: The behavioral intention to use Tor (BI) has a positive effect on the actual use behavior (USE) (confirmed)

The limitations of the study primarily concern the sample composition and size. First, a larger sample would have been beneficial. However, in general, a
sample of 124 participants is acceptable for our kind of statistical analysis [...], and active users of a PET are hard to recruit for a relatively long online questionnaire. This is especially the case if, as in our study, there is no financial reward, and if participants are highly privacy-sensitive, which might deter them from disclosing any kind of information (even anonymously). Second, combining the results of the German and the English questionnaire is a potential source of errors: German participants might have understood questions differently than the English participants. We argue that we achieved equivalence of meaning through a thorough translation process, thereby limiting this potential source of error to the largest extent possible. In addition, combining the data was necessary from a pragmatic point of view to obtain as large a sample as possible for the statistical analysis. Lastly, possible self-report biases (e.g., social desirability) might exist. We addressed this issue by gathering the data fully anonymously. As discussed earlier, we had issues with certain participants' data sets with regard to actual use behavior (cf. Sect. 4.2). Although in certain settings it might be preferable to use actual use behavior as the sole target variable, we decided to include behavioral intention as an antecedent because of these issues.
Further work is required to investigate the specific determinants of use decisions for or against PETs and to break down the interrelationships between the associated antecedents. In particular, it would be interesting to investigate the relationship between trusting beliefs in online companies and trust in the PET itself. A theoretical underpinning is required to include this relationship in our structural equation model.
In this paper, we contributed to the research on privacy-enhancing technologies and users' privacy by assessing the specific relationships between information privacy concerns, trusting beliefs in online firms and in a privacy-enhancing technology (in our case Tor), risk beliefs associated with online firms' data processing, and the actual use behavior of Tor. By adapting and extending the IUIPC model by Malhotra et al. [5], we could show that several of the assumptions for regular online services do not hold for PETs.
References

[1] L. Mineo, "On internet privacy, be very afraid (Interview with Bruce Schneier)," ...-be-very-afraid-analyst-suggests/, 08 2017.
[2] E. E. David and R. M. Fano, "Some thoughts about the social implications of accessible computing," in Proceedings 1965 Fall Joint Computer Conference, 1965.
[3] M. Bédard, "The underestimated economic benefits of the internet," Regulation Series, Economic Notes, The Montreal Economic Institute, 2016.
[4] T. Singh and M. E. Hill, "Consumer privacy and the Internet in Europe: a view from Germany," Journal of Consumer Marketing, vol. 20, no. 7, pp. 634-651, 2003.
[5] N. K. Malhotra, S. S. Kim, and J. Agarwal, "Internet users' information privacy concerns (IUIPC): The construct, the scale, and a causal model," Information Systems Research, vol. 15, pp. 336-355, Dec. 2004.
[6] P. E. Naeini, S. Bhagavatula, H. Habib, M. Degeling, L. Bauer, L. Cranor, and N. Sadeh, "Privacy expectations and preferences in an IoT world," in Symposium on Usable Privacy and Security (SOUPS), 2017.
[7] J. Heales, S. Cockcroft, and V.-H. Trieu, "The influence of privacy, trust, and national culture on internet transactions," in Social Computing and Social Media. Human Behavior (G. Meiselwitz, ed.), (Cham), pp. 159-176, Springer International Publishing, 2017.
[8] F. Raber and A. Krueger, "Towards understanding the influence of personality on mobile app permission settings," in IFIP Conference on Human-Computer Interaction, pp. 62-82, 2017.
[9] The Tor Project, 2018.
[10] J. J. Borking and C. Raab, "Laws, PETs and other technologies for privacy protection," Journal of Information, Law and Technology, vol. 1, pp. 1-14, 2001.
[11] L. Lee, D. Fifield, N. Malkin, G. Iyer, S. Egelman, and D. Wagner, "A usability evaluation of Tor Launcher," Proceedings on Privacy Enhancing Technologies, no. 3, pp. 90-109, 2017.
[12] Z. Benenson, A. Girard, and I. Krontiris, "User acceptance factors for anonymous credentials: An empirical investigation," 14th Annual Workshop on the Economics of Information Security (WEIS), pp. 1-33, 2015.
[13] F. Brecht, B. Fabian, S. Kunz, and S. Mueller, "Are you willing to wait longer for internet privacy?," in ECIS 2011 Proceedings, 2011.
[14] T. Dinev and P. Hart, "An extended privacy calculus model for e-commerce transactions," Information Systems Research, vol. 17, no. 1, pp. 61-80, 2006.
[15] J. Hair, C. M. Ringle, and M. Sarstedt, "PLS-SEM: Indeed a silver bullet," The Journal of Marketing Theory and Practice, vol. 19, no. 2, pp. 139-152, 2011.
[16] J. Hair, G. T. M. Hult, C. M. Ringle, and M. Sarstedt, A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM). SAGE Publications, 2017.
[17] K. A. Stewart and A. H. Segars, "An empirical examination of the concern for information privacy instrument," Information Systems Research, vol. 13, no. 1, pp. 36-49, 2002.
[18] P. A. Pavlou, "Consumer acceptance of electronic commerce: Integrating trust and risk with the technology acceptance model," International Journal of Electronic Commerce, vol. 7, no. 3, pp. 101-134, 2003.
[19] D. H. McKnight, M. Carter, J. B. Thatcher, and P. F. Clay, "Trust in a specific technology: An investigation of its components and measures," ACM Transactions on Management Information Systems (TMIS), vol. 2, no. 2, p. 12, 2011.
[20] N. K. Lankton, D. H. McKnight, and J. Tripp, "Technology, humanness, and trust: Rethinking trust in technology," Journal of the Association for Information Systems, vol. 16, no. 10, p. 880, 2015.
[21] M. Fishbein and I. Ajzen, Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research. Reading, MA: Addison-Wesley, 1975.
[22] B. H. Sheppard, J. Hartwick, and P. R. Warshaw, "The theory of reasoned action: A meta-analysis of past research with recommendations for modifications and future research," Journal of Consumer Research, vol. 15, no. 3, pp. 325-343, 1988.
[23] C. Schmitz, "LimeSurvey Project Team," 2015.
[24] C. M. Ringle, S. Wende, and J. M. Becker, "SmartPLS 3," 2015.
[25] J. Henseler, C. M. Ringle, and M. Sarstedt, "A new criterion for assessing discriminant validity in variance-based structural equation modeling," Journal of the Academy of Marketing Science, vol. 43, no. 1, pp. 115-135, 2015.
[26] N. K. Malhotra, S. S. Kim, and A. Patil, "Common method variance in IS research: A comparison of alternative approaches and a reanalysis of past research," Management Science, vol. 52, no. 12, pp. 1865-1883, 2006.
[27] P. M. Podsakoff, S. B. MacKenzie, J. Y. Lee, and N. P. Podsakoff, "Common method biases in behavioral research: a critical review of the literature and recommended remedies," Journal of Applied Psychology, vol. 88, no. 5, pp. 879-903, 2003.
[28] C. Blome and A. Paulraj, "Ethical climate and purchasing social responsibility: A benevolence focus," Journal of Business Ethics, vol. 116, no. 3, pp. 567-585, 2013.
[29] L. Rosen, K. Whaling, L. Carrier, N. Cheever, and J. Rokkum, "The Media and Technology Usage and Attitudes Scale: An empirical investigation," Computers in Human Behavior, vol. 29, no. 6, pp. 2501-2511, 2013.
[30] J. Cohen, Statistical Power Analysis for the Behavioral Sciences. 1988.
[31] W. W. Chin, "The partial least squares approach to structural equation modeling," in Modern Methods for Business Research (G. A. Marcoulides, ed.), pp. 295-336, Mahwah, NJ: Lawrence Erlbaum, 1998.
A. Distribution Channels of the Tor Online Survey

1. Mailing lists:
(a) tor-talk
(b) liberationtech
(c) IFIP TC 11
(d) FOSAD
(e) GI PET
2. Twitter with #tor and #privacy
3. Boards:
(a) reddit (sub-reddits: r/TOR, r/onions, r/privacy)
4. Tor Hidden Service Boards, sections posted into: Darknet Avengers (Off Topic), The Hub, Onion Land (Off Topic), 8chan (/tech/), Unverified Users, Code Green, Atlayo (Posting)
5. Personal Announcements at Workshops
B. Questionnaire
The following items are measured with a seven-point
Likert scale from ”strongly disagree” to ”strongly agree”.
Trusting Beliefs (TB)
1. Online companies are trustworthy in handling information.
2. Online companies tell the truth and fulfill promises related to information provided by me.
3. I trust that online companies would keep my best interests in mind when dealing with information.
4. Online companies are in general predictable and consistent regarding the usage of information.
5. Online companies are always honest with customers when it comes to using the provided information.
Trusting Beliefs in Tor (TBTor)
1. Tor is trustworthy.
2. Tor keeps promises and commitments.
3. I trust Tor because they keep my best interests in mind.
Risk Beliefs (RB)
1. In general, it would be risky to give information to online companies.
2. There would be high potential for loss associated with giving information to online firms.
3. There would be too much uncertainty associated with giving information to online firms.
4. Providing online firms with information would involve many unexpected problems.
5. I would feel safe giving information to online companies.
Awareness (AWA)
1. Companies seeking information online should disclose the way the data are collected, processed, and used.
2. A good consumer online privacy policy should have a clear and conspicuous disclosure.
3. It is very important to me that I am aware and knowledgeable about how my personal information will be used.
Collection (COLL)
1. It usually bothers me when online companies ask me for personal information.
2. When online companies ask me for personal information, I sometimes think twice before providing it.
3. It bothers me to give personal information to so many online companies.
4. I'm concerned that online companies are collecting too much personal information about me.
Control (CONTROL)
1. Consumer online privacy is really a matter of consumers' right to exercise control and autonomy over decisions about how their information is collected, used, and shared.
2. Consumer control of personal information lies at the heart of consumer privacy.
3. I believe that online privacy is invaded when control is lost or unwillingly reduced as a result of a marketing transaction.
Behavioral Intention (BI)
1. I intend to continue using Tor in the future.
2. I will always try to use Tor in my daily life.
3. I plan to continue to use Tor frequently.
Use Behavior (USE)
1. Please choose your usage frequency for Tor:
Once a month
Several times a month
Once a week
Several times a week
Once a day
Several times a day
Once an hour
Several times an hour
All the time
The frequency scale is adapted from Rosen et al. [29].
... Another specific factor that impacts user acceptance of technology as well as privacy that is generally unique among intelligence professionals is compliance with information collection processes regarding the collection and use of information created by or about (Harborth & Pape, 2019). A study by Karwatzki et al. (2018) also examined the concept of risk and the impact on behavior intention, developing a nomological network model focusing on the antecedents of privacy experience and familiarity affecting privacy risks, which is represented by a seven-dimensional construct of the various ways privacy invasions affect individuals, such as physical, social or psychological effects. ...
... Within DOD Manual 5240.01, it specifically states that if information is publicly available regarding United States Persons, there are no restrictions(Carter, 2016). However, this broad exemption is frequently limited by subordinate organizations (H.Williams & Blum, 2018), and may affect the perception of risk experienced by members of the study population.Privacy and risk are increasingly important aspects in understanding the causal and indirect factors affecting the selection, use, and discontinuation of technology in all its forms, including hardware, operating systems and applications(Harborth & Pape, 2019;Ho et al., 2017). A study undertaken by Harborth and Pape(2019)examined what "…influence have privacy concerns and associated trust and risk beliefs on the behavioral intention and actual use of Tor?" and "What influence does trust in Tor itself have on the behavioral intention and the actual use?" (p. ...
Full-text available
Information technology security policies are designed explicitly to protect IT systems. However, overly restrictive information security policies may be inadvertently creating an unforeseen information risk by encouraging users to bypass protected systems in favor of personal devices, where the potential loss of organizational intellectual property is greater. Current models regarding the acceptance and use of technology, Technology Acceptance Model Version 3 (TAM3) and the Unified Theory of Acceptance and Use of Technology Version 2 (UTAUT2), address the use of technology in organizations and by consumers, but little research has been done to identify an appropriate model to begin to understand what factors would influence users that can choose between using their own personal device and using organizational IT assets, separate and distinct from “bring your own device” constructs. There are few organizations with radical demarcations between organizational assets and personal devices. One such organization, the United States Intelligence Community (USIC), provides a controlled environment where personal devices are expressly forbidden in workspaces and therefore provides a uniquely situated organizational milieu in that the use of personal devices would have to occur outside of the organizational environment. This research aims to bridge the divide between these choices by identifying the factors that influence users to select their own devices to overcome organizational restrictions in order to conduct open-source research. The research model was amalgamated from the two primary theoretical frameworks, TAM3 and UTAUT2, and is the first to integrate these theories as they relate to the intention to use personal or organizational systems to address the choices employees make when choosing between personal and organizational assets to accomplish work related tasks. 
Using survey data collected from a sample of 240 employees of the USIC, Partial Least Squares Structural Equation Modeling (PLS-SEM) statistical techniques were used to evaluate and test the model, estimate the path relationships, and provide reliability and validity checks. The results indicated that the Perception of Risk in the Enterprise (PoRE) significantly increased the Intention to Use Private Internet and decreased the Intention to Use Enterprise devices, as well as increasing the Perceived Ease of Use of Private Internet (PEUPI). The results of this study provide support to the concept that organizations must do more to balance threats to information systems with threats to information security. The imposition of safeguards to protect networks and systems, as well as employee misuse of information technology resources, may unwittingly incentivize users to use their own Internet and devices instead, where enterprise safeguards and protections are absent. This incentive is particularly pronounced when organizations increase the perceived threat of risk to users, whether intentional or inadvertent, and when the perception of the ease of use and usefulness of private Internet devices is high.
... That changed with a series of papers investigating reasons for the (non-)adoption of Tor [20] and JonDonym [17]. Based on the construct of internet users' information privacy concerns [42,43] Harborth and Pape found that trust beliefs in the anonymization service played a huge role for the adoption [18,19]. Further work [21] indicates that the providers' reputation, aka trust in the provider, played also a major role in the users' willingness to pay for or donate to these services. ...
Users report that they have regretted accidentally sharing personal information on social media. There have been proposals to help protect the privacy of these users, by providing tools which analyze text or images and detect personal information or privacy disclosure with the objective to alert the user of a privacy risk and transform the content. However, these proposals rely on having access to users' data and users have reported that they have privacy concerns about the tools themselves. In this study, we investigate whether these privacy concerns are unique to privacy tools or whether they are comparable to privacy concerns about non-privacy tools that also process personal information. We conduct a user experiment to compare the level of privacy concern towards privacy tools and non-privacy tools for text and image content, qualitatively analyze the reason for those privacy concerns, and evaluate which assurances are perceived to reduce that concern. The results show privacy tools are at a disadvantage: participants have a higher level of privacy concern about being surveilled by the privacy tools, and the same level concern about intrusion and secondary use of their personal information compared to non-privacy tools. In addition, the reasons for these concerns and assurances that are perceived to reduce privacy concern are also similar. We discuss what these results mean for the development of privacy tools that process user content.
... Some scholars refer to PET as technology including, for example, end-to-end secure messaging tools, virtual private networks (VPNs), and anti-tracking software (Heurix et al. 2015;Mangiò et al. 2020). Other studies specify PET as digital solutions that incorporate privacy protection features (Harborth and Pape 2019). Deng et al. (2011) differentiate PET into hard privacy (e.g. ...
Conference Paper
Full-text available
Privacy assurances (PA) describe organizational measures that provide users with assurances about privacy protection. They act as instruments to resolve the tension field between users and providers of digital services resulting from diverging interests regarding the collection and processing of data. Building on the agency theory, we propose a framework to illustrate the mechanisms of PA in the user-provider relationship, which is marked by asymmetric information. Although PA are thematized over a decade in Information Systems (IS) research, there is no consensus about their conceptualization. We intend to shed light on the academic discussion, aggregating the status quo of literature by conducting a systematic literature analysis. The results indicate that three overarching categories of PA have emerged in IS literature. We further discuss two significant areas of future research. The findings emphasize the relevance of PA as tool to align the conflicting interests of users and providers of digital services.
... In particular for privacy enhancing technologies, this is not new, as for Tor 11 and Jondonym 12 , two tools safeguarding against mass surveillance, trust in the technology has been shown to be one of the major drivers [37,38,39,41]. The trust in the technology was driven by online privacy literacy [40] supporting Schoentgen and Wilkinsons' theory. In accordance with Schoentgen and Wilkinson [77] is also the result of a study [35] investigating incentives and barriers for the implementation of privacy enhancing technologies from a corporate view where ethics and reputation of the company were among the named incentives. ...
Enabling cybersecurity and protecting personal data are crucial challenges in the development and provision of digital service chains. Data and information are the key ingredients in the creation process of new digital services and products. While legal and technical problems are frequently discussed in academia, ethical issues of digital service chains and the commercialization of data are seldom investigated. Thus, based on outcomes of the Horizon2020 PANELFIT project, this work discusses current ethical issues related to cybersecurity. Utilizing expert workshops and encounters as well as a scientific literature review, ethical issues are mapped on individual steps of digital service chains. Not surprisingly, the results demonstrate that ethical challenges cannot be resolved in a general way, but need to be discussed individually and with respect to the ethical principles that are violated in the specific step of the service chain. Nevertheless, our results support practitioners by providing and discussing a list of ethical challenges to enable legally compliant as well as ethically acceptable solutions in the future.
... Or do knowing users have less privacy concerns and therefore tend to more likely use the app? In particular, in the context of privacy enhancing technologies, trust in the software or the provider has shown to have a significant impact on the users decision to use a certain technology [11,12,14]. Thus, In future work we also aim to consider the users' perceived benefits of the app and the user's trust into the health system (cf. ...
The German Corona-Warn-App (CWA) is one of the most controversial tools to mitigate the Corona virus spread with roughly 25 million users. In this study, we investigate individuals’ knowledge about the CWA and associated privacy concerns alongside different demographic factors. For that purpose, we conducted a study with 1752 participants in Germany to investigate knowledge and privacy concerns of users and non-users of the German CWA. We investigate the relationship between knowledge and privacy concerns and analyze the demographic effects on both.
Augmented reality (AR) has found application in online games, social media, interior design, and other services since the success of the smartphone game Pokémon Go in 2016. With recent news on the metaverse and the AR cloud, the contexts in which the technology is used become more and more ubiquitous. This is problematic, since AR requires various different sensors gathering real-time, context-specific personal information about the users, causing more severe and new privacy threats compared to other technologies. These threats can have adverse consequences on information self-determination and the freedom of choice and, thus, need to be investigated as long as AR is still shapeable. This communication paper takes on a bird’s eye perspective and considers the ethical concept of autonomy as the core principle to derive recommendations and measures to ensure autonomy. These principles are supposed to guide future work on AR suggested in this article, which is strongly needed in order to end up with privacy-friendly AR technologies in the future.
Full-text available
This book presents the main scientific results from the GUARD project. It aims at filling the current technological gap between software management paradigms and cybersecurity models, the latter still lacking orchestration and agility to effectively address the dynamicity of the former. This book provides a comprehensive review of the main concepts, architectures, algorithms, and non-technical aspects developed during three years of investigation.
Detection of unknown attacks is challenging due to the lack of exemplary attack vectors. However, previously unknown attacks are a significant danger for systems due to a lack of tools for protecting systems against them, especially in fast-evolving Internet of Things (IoT) technology. The most widely used approach for malicious behaviour of the monitored system is detecting anomalies. The vicious behaviour might result from an attack (both known and unknown) or accidental breakdown. We present a Net Anomaly Detector (NAD) system that uses one-class classification Machine Learning techniques to detect anomalies in the network traffic. The highly modular architecture allows the system to be expanded with adapters for various types of networks. We propose and discuss multiple approaches for increasing detection quality and easing the component deployment in unknown networks by known attacks emulation, exhaustive feature extraction, hyperparameter tuning, detection threshold adaptation and ensemble models strategies. Furthermore, we present both centralized and decentralized deployment schemes and present preliminary results of experiments for the TCP/IP network traffic conducted on the CIC-IDS2017 dataset.
Full-text available
For many years signature-based intrusion detection has been applied to discover known malware and attack vectors. However, with the advent of malware toolboxes, obfuscation techniques and the rapid discovery of new vulnerabilities, novel approaches for intrusion detection are required. System behavior analysis is a cornerstone to recognizing adversarial actions on endpoints in computer networks that are not known in advance. Logs are incrementally produced textual data that reflect events and their impact on technical systems. Their efficient analysis is key for operational cyber security. We investigate approaches beyond applying simple regular expressions, and provide insights into novel machine learning mechanisms for parsing and analyzing log data for online anomaly detection. The AMiner is an open source implementation of a pipeline that implements many machine learning algorithms that are feasible for deeper analysis of system behavior, recognizing deviations from learned models and thus spotting a wide variety of even unknown attacks.
This paper aims to investigate the competencies that citizens should hold to protect own information privacy and personal data. Based on conceptual analysis, this study examines theoretical frameworks on competency models (e.g., the Iceberg Competency Model) and proposes a roadmap for developing the first information privacy competency model in the information systems literature. The study conducts a systematic analysis to reveal the lack of information privacy competency models in the literature and derive any reported information privacy competencies. In sequence, synthesizes the results into a preliminary information privacy competency model comprising attributes that citizens should hold to be competent to protect own information privacy and personal data, including knowledge, skills, attitudes, values, etc. The results of this work can be valuable for information privacy researchers, online service providers, policy makers and educators.
Conference Paper
Full-text available
With the rapid deployment of Internet of Things (IoT) technologies and the variety of ways in which IoT-connected sensors collect and use personal data, there is a need for transparency, control, and new tools to ensure that individual privacy requirements are met. To develop these tools, it is important to better understand how people feel about the privacy implications of IoT and the situations in which they prefer to be notified about data collection. We report on a 1,007-participant vignette study focusing on privacy expectations and preferences as they pertain to a set of 380 IoT data collection and use scenarios. Participants were presented with 14 scenarios that varied across eight categorical factors, including the type of data collected (e.g. location, biometrics, temperature), how the data is used (e.g., whether it is shared, and for what purpose), and other attributes such as the data retention period. Our findings show that privacy preferences are diverse and context dependent; participants were more comfortable with data being collected in public settings rather than in private places, and are more likely to consent to data being collected for uses they find beneficial. They are less comfortable with the collection of biometrics (e.g. fingerprints) than environmental data (e.g. room temperature, physical presence). We also find that participants are more likely to want to be notified about data practices that they are uncomfortable with. Finally, our study suggests that after observing individual decisions in just three data-collection scenarios, it is possible to predict their preferences for the remaining scenarios, with our model achieving an average accuracy of up to 86%.
Conference Paper
Full-text available
A privacy paradox still exists between consumers’ willingness to transact online and their stated Information privacy concerns. MIS research has the capacity to contribute to societal research in this area (Dinev 2014) and cultural differences are one important area of investigation. The global nature of e-commerce makes cultural factors likely to have a significant impact on this concern. Building on work done in the area of culture and privacy, and also trust and privacy, we explore the three way relationship between culture, privacy and trust. Emerge. A key originality of this work is the use of the GLOBE variables to measure culture. These provide a more contemporary measure of culture and overcome some of the criticisms levelled at the much used Hofstede variables. Since the late 1990s scholars have been exploring ways of measuring Privacy. Whilst attitudinal measures around concern for information privacy are only one proxy for privacy itself, such measures have evolved in sophistication. Smith et al. developed the Global Information Privacy Scale which evolved into the 15 question parsimonious CFIP scale (Smith 1996) Leading on from this Malhotra developed the internet users information privacy concerns (IUIPC) which takes into account individuals differing perceptions of fairness and justice using social contract theory. We present the results of an exploratory empirical study that uses both GLOBE and IUIPC via a set of scenarios to determine the strength of national culture as an antecedent to IUIPC and the concomitant effect of IUIPC on trust and risk.
Full-text available
Provides a nontechnical introduction to the partial least squares (PLS) approach. As a logical base for comparison, the PLS approach for structural path estimation is contrasted to the covariance-based approach. In so doing, a set of considerations are then provided with the goal of helping the reader understand the conditions under which it might be reasonable or even more appropriate to employ this technique. This chapter builds up from various simple 2 latent variable models to a more complex one. The formal PLS model is provided along with a discussion of the properties of its estimates. An empirical example is provided as a basis for highlighting the various analytic considerations when using PLS and the set of tests that one can employ is assessing the validity of a PLS-based model. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Information systems (IS) research has demonstrated that humans can and do trust technology. The current trust in technology literature employs two different types of trust in technology constructs. Some researchers use human-like trust constructs (e.g., benevolence, integrity, and ability), while other researchers use system-like trust constructs (e.g., helpfulness, reliability, and functionality). Interestingly, past research shows that both sets of measures influence important dependent variables, but the literature does not explain when one type should be used instead of the other type. In this paper, we use trust, social presence, and affordance theories to shed light on this research problem. We report on two studies. In study 1, we argue first that technologies vary in their perceived “humanness”. Second, we argue that, because users perceive two technologies to differ in humanness, they will develop trust in each technology differently (i.e., along more human-like criteria or more system-like criteria). We study two technologies that vary in humanness to explore these differences theoretically and empirically. We demonstrate that, when the trust construct used aligns well with how human the technology is, it produces stronger effects on selected outcome variables than does a misaligned trust construct. In study 2, we assess whether these technologies differ in humanness based on social presence, social affordances, and affordances for sociality. We find that these factors do distinguish whether technology is more human-like or system-like. We provide implications for trust-in-technology research.
Conference Paper
In this paper we investigate whether users’ personalities are good predictors of the privacy-related permissions they grant to apps installed on their mobile devices. We report on the results of a large online study (n = 100) which reveals a significant correlation between users’ personalities, as measured by the Big Five personality scores or the IUIPC questionnaire, and the app permission settings they have chosen. We used machine learning techniques to predict user privacy settings based on personality and consequently introduce a novel strategy that simplifies the process of granting permissions to apps.
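The kind of prediction task the study describes can be sketched as follows. This is a hypothetical, self-contained example on synthetic data: the sample size mirrors the study's n = 100, but the traits, the "deny location permission" target, and the rule linking them are our own assumptions, not the study's data or model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100  # matches the study's reported sample size

# Synthetic Big Five scores on a 1..5 scale (columns: openness,
# conscientiousness, extraversion, agreeableness, neuroticism).
traits = rng.uniform(1.0, 5.0, size=(n, 5))

# Hypothetical ground truth for illustration only: higher neuroticism
# makes a user more likely to deny an app's location permission.
deny_location = (traits[:, 4] + rng.normal(0.0, 0.5, n) > 3.0).astype(float)

# Minimal logistic regression fitted by gradient descent on the log-loss
X = np.column_stack([np.ones(n), (traits - traits.mean(0)) / traits.std(0)])
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
    w -= 0.1 * X.T @ (p - deny_location) / n  # gradient step

# Training accuracy of the fitted classifier
accuracy = ((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == deny_location).mean()
```

In this toy setup the learned weight for the neuroticism column comes out positive, recovering the synthetic rule; a real replication would of course use the study's measured traits and permission choices, plus held-out evaluation.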
Interest in the problem of method biases has a long history in the behavioral sciences. Despite this, a comprehensive summary of the potential sources of method biases and how to control for them does not exist. Therefore, the purpose of this article is to examine the extent to which method biases influence behavioral research results, identify potential sources of method biases, discuss the cognitive processes through which method biases influence responses to measures, evaluate the many different procedural and statistical techniques that can be used to control method biases, and provide recommendations for how to select appropriate procedural and statistical remedies for different types of research settings.
Conference Paper
We describe the theoretical development of a user acceptance model for anonymous credentials and its evaluation in a real-world trial. Although anonymous credentials and other advanced privacy-enhancing technologies (PETs) have reached technical maturity, they are not widely adopted so far, so understanding user adoption factors is one of the most important goals on the way to better privacy management with the help of PETs. Our model integrates the Technology Acceptance Model (TAM) with considerations specific to security- and privacy-enhancing technologies, in particular with their “secondary goal” property: these technologies are expected to work in the background, facilitating the execution of users’ primary, functional goals. We introduce five new constructs into the TAM: Perceived Usefulness for the Primary Task (PU1), Perceived Usefulness for the Secondary Task (PU2), Situation Awareness, Perceived Anonymity, and Understanding of the PET. We evaluate our model in the concrete scenario of a university course evaluation. Although the sample size (30 participants) is prohibitively small for deeper statistical analysis such as multiple regression or structural equation modeling, we are still able to derive useful conclusions from a correlation analysis of the constructs in our model. In particular, PU1 is the most important factor of user adoption, outweighing the usability and the usefulness of the deployed PET (PU2). Moreover, correct Understanding of the underlying PET seems to play a much less important role than a user interface that clearly conveys to the user which data are transmitted when and to which party (Situation Awareness).
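The correlation analysis referred to above can be sketched in a few lines. This is a hypothetical illustration on synthetic construct scores: the sample size matches the trial's 30 participants, but the scores and the dependency structure are assumptions of ours, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30  # matches the trial's small sample size

# Synthetic 7-point-scale construct scores; the relationship below
# (intention driven mainly by PU1) is assumed for illustration.
pu1 = rng.uniform(1, 7, n)                            # usefulness, primary task
pu2 = rng.uniform(1, 7, n)                            # usefulness, secondary task
intention = 0.7 * pu1 + 0.2 * pu2 + rng.normal(0, 1, n)

# Pairwise Pearson correlation matrix of the three constructs
r = np.corrcoef(np.vstack([pu1, pu2, intention]))
r_pu1 = r[0, 2]  # correlation of PU1 with behavioral intention
r_pu2 = r[1, 2]  # correlation of PU2 with behavioral intention
```

With so few participants, correlations of this kind come with wide confidence intervals, which is why the authors stop short of regression or structural equation modeling.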