Proposing A Unified Concept of Information Privacy:
An Actor/Action-Oriented Approach
Truong (Jack) Luu
University of Cincinnati
luutp@mail.uc.edu
Andrew Harrison
University of Cincinnati
andrew.harrison@uc.edu
Michael Jones
University of Cincinnati
m.jones@uc.edu
Binny M. Samuel
University of Cincinnati
samuelby@uc.edu
Abstract
This conceptual paper introduces a contemporary
conceptualization of information privacy that reflects
its multifaceted nature amid rapid technological
advancement and increasingly entangled stakeholder
perspectives. We propose a unified approach
emphasizing the dynamic nature of information
privacy as it interacts with the evolving digital
landscape. This new, encompassing conceptualization
integrates theoretical perspectives from existing
research on infrastructure, institutions, regulatory
systems, and individuals as actors, and it expands
actors' actions beyond data flows alone to include data
inference. This conceptualization supports a deeper
understanding and mitigation of the entangled,
multilayered nature of information privacy, laying the
groundwork for future research and practical
applications. By incorporating
action- and actor-oriented perspectives, our unified
conceptualization offers a robust framework for
assessing and managing privacy risks in an
increasingly complex digital ecosystem.
Keywords: conceptual paper, information privacy,
data flows, data inference, multilevel privacy
1. Introduction
Information privacy has been a central topic in IS
research; however, the rapid advancement of
innovative technologies is not always reflected in its
existing conceptualizations. This paper focuses on two
main issues affecting modern privacy research: (1) the
proliferation of new technologies that collect,
combine, manage, and analyze data, and (2) the
expansion of personal data collection and use beyond
traditional consumer-business relationships. We aim
to update the conceptualization of information privacy
by synthesizing previous research with contemporary
studies on relevant actors and actions in data systems.
The rise of advanced data processing, analytics, and
AI/ML models blurs the lines of data ownership,
necessitating a reevaluation of privacy in our complex
digital ecosystem. While privacy is widely recognized
as crucial for preserving dignity, autonomy, and
freedom in the digital age (Richards, 2021; Westin,
1968), institutions often accumulate, combine, and
analyze vast amounts of user data, frequently
exceeding informed consent and fair business
practices—an issue exacerbated by evolving
technologies and sophisticated predictive models.
Accordingly, the current approach of focusing on
data flows—i.e., data collection and data management
(e.g., error and misuse)—has lagged in capturing the
dynamics of technological advancement. Specifically,
there is an urgent need to consider how data is used for
analysis and inference in addition to the conventional
data flows approach (Xu & Dinev, 2022). Indeed, the
inference component is critical because institutions
can make improper or wrong decisions about
individuals by analyzing data, creating power
inequality (Richards, 2021). The consequences of
these inferences can lead to serious problems, such as
discrimination in employment, marketing, and credit
scoring (Favaretto et al., 2019).
Additionally, "information privacy is a multilevel
concept, but rarely studied as such" (Bélanger &
Crossler, 2011, p. 1017). Indeed, this oversight in
recognizing information privacy as a multilevel
concept could be problematic as most current
conceptualizations primarily examine the
relationships between institutions and their users. This
approach may overlook crucial elements such as the
role of regulatory frameworks, peer dynamics, and
technological advancements. Technological
advancement poses a challenge to individual
information privacy. While the early days of the
Internet focused on e-commerce and simple
information exchange (Web 1.0), the rise of personal
webpages and social platforms like Facebook and X
(Web 2.0) amplified online social interaction. Then,
Web 3.0, with the ability to read, write, and own,
emerged as a more decentralized network manifesting
through blockchain technology.
While this shift was intended to give individuals
more control over their personal information,
widespread automated data collection and advanced
analytical tools have undermined many
protections meant to secure personal data.

Proceedings of the 58th Hawaii International Conference on System Sciences | 2025 | Page 4552
URI: https://hdl.handle.net/10125/109392 | 978-0-9981331-8-8 | (CC BY-NC-ND 4.0)

The current "single level" concept of information
privacy hinders theoretical and empirical progress in
studying information privacy. Theoretically, this lack
of a multilevel perspective prevents building a
consistent, holistic theoretical framework. Similarly,
in empirical aspects, research
might neglect other levels when hypothesizing about
and estimating effect sizes for information privacy
phenomena. The different levels of
information privacy have been either mentioned or
discussed but have not been adequately
conceptualized and measured. We incorporate
multilevel privacy by studying a user's concerns when
interacting with various individuals, organizations, IT
infrastructure, and regulatory systems. We do so by
disentangling the relevant actions and actors when
formulating information privacy.
Specifically, we identify how combinations of
actions (data collection, data integration, data
management, data analysis, and outcome use) and
actors (infrastructure, institutions, regulatory systems,
and individuals) manifest through information privacy
concerns, a proxy of information privacy.
Our conceptualization offers a foundation for
relevant privacy stakeholders to assess privacy risks
and evaluate the effectiveness of interventions across
a broad range of real-life scenarios. Notably, this new
conceptualization allows us to integrate the influence
of advancements in infrastructure and practices (e.g.,
AI/ML and advanced analytics) on privacy research.
This paper will first briefly present the existing
conceptualization of information privacy. Next, we
will outline a direction for updating the concept of
information privacy in the modern era. Finally, we
propose a new unified conceptual framework for
examining information privacy through its proxy,
information privacy concerns, and discuss how this
updated conceptualization can address current issues
in empirical research on information privacy.
2. Theoretical foundations
2.1. Challenges in defining and
conceptualizing information privacy
Many have tried to define information privacy.
One of the first attempts was made by Westin (1968,
p. 7), who defined information privacy as "the claim
of individuals, groups, or institutions to determine for
themselves when, how, and to what extent information
about them is communicated to others." This
definition is impactful because it positions privacy as
a form of control. For example, Bélanger and Crossler
(2011, p. 1017) defined information privacy as "the
desire of individuals to control or have some influence
over data about themselves." While these definitions
have served our society reasonably well, technological
advancements, including the development of AI
systems, have blurred their clarity. Consider AI-driven
surveillance cameras in public spaces. These systems
can automatically recognize and track individuals
without consent or control, challenging the traditional
notion of privacy as control over one's information.
This illustrates how technologies complicate existing
privacy definitions, highlighting the need for updated
frameworks.
Moreover, using advanced statistical tools, actors
such as organizations and individuals can gather data
from different sources and infer many novel insights.
This makes the actors with access to this data
extremely powerful (Richards, 2021). For instance,
Target used data science to analyze purchasing
patterns and predict when customers were likely
pregnant, allowing them to send targeted baby product
ads and coupons. More importantly, as data is often
combined from various sources, defining data
ownership becomes more challenging. If ownership
cannot be determined, it becomes difficult for
individuals to exercise control, as one cannot control
an asset they do not own. Given the complexity of the
term “privacy” and the dynamic nature of our world,
conceptualizing it effectively is critical for researchers
studying information privacy-related behaviors.
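To make the inference risk concrete, the following is a minimal, purely illustrative Python sketch, using hypothetical data and made-up weights (not drawn from the Target case or any real model), of how records combined from two separate sources can support a sensitive prediction that neither source supports alone:

```python
# Toy sketch (hypothetical data): inferring a sensitive attribute from
# innocuous purchase records combined across sources. The weights are
# invented for illustration, not learned from real data.

# Purchase logs held by two separate "sources" for the same pseudonymous user.
loyalty_card = {"u42": ["unscented lotion", "cotton balls"]}
web_orders = {"u42": ["zinc supplements", "magnesium supplements"]}

# Illustrative weights an analyst might assign to pregnancy-predictive items.
SIGNAL_WEIGHTS = {
    "unscented lotion": 0.3,
    "cotton balls": 0.2,
    "zinc supplements": 0.25,
    "magnesium supplements": 0.25,
}

def infer_score(user_id):
    # Combining the two data flows enables an inference that neither
    # source supports on its own.
    items = loyalty_card.get(user_id, []) + web_orders.get(user_id, [])
    return sum(SIGNAL_WEIGHTS.get(item, 0.0) for item in items)

print(infer_score("u42"))  # combined score: a strong sensitive inference
```

The point of the sketch is structural: the sensitive inference emerges only once the two flows are joined, which is why control over any single source is an insufficient safeguard.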
2.2. Using privacy concerns as a proxy for
information privacy
Given the inherent difficulty in fully
conceptualizing information privacy, researchers have
used proxies that are easier to define and measure.
Information privacy concerns are often used as a
practical measure of information privacy, aligning
with Percy Bridgman's operationalism. This
philosophical approach defines scientific concepts
strictly in terms of specific operations or procedures
used to measure them (Bridgman, 1927). Thus, the
meaning of a concept is entirely determined by the
empirical methods used to observe and measure it,
emphasizing the practical application and avoiding
metaphysical assumptions. Since direct measurement
of information privacy is complex, privacy concerns
provide a quantifiable and observable proxy, making
the abstract concept of information privacy accessible
and empirically manageable. As Smith et al. (2011)
noted, "Because of the near impossibility of measuring
privacy itself, and also because the salient
relationships depend more on cognitions and
perceptions than on rational assessments, almost all
empirical privacy research in the social sciences relies
on the measurement of a privacy-related proxy of
sorts." This reliance on proxies indicates the
challenges in directly quantifying privacy but enables
a more manageable approach to studying it in
empirical settings. However, the current proxies tend
to focus on specific aspects, like data collection
practices or user perceptions, which may not
encompass broader privacy issues or the evolving
nature of technology and outcome use (Pavlou, 2011).
2.3. Overlooking broader contexts in privacy
research
Because information privacy concerns are still
arduous to measure, most instruments created to gauge
them are psychometric assessments, typically
self-reported on Likert scales. However, the majority
of these scales capture
individual concerns about organizational practices
related to data collection and management (e.g.,
secondary use, errors, and awareness). Many popular
scales exist for measuring information privacy
concerns, including individuals' concerns about
organizational information privacy practices (Smith et
al., 1996), concern for information privacy (Stewart &
Segars, 2002), internet users' information privacy
concerns (IUIPC) (Malhotra et al., 2004), internet
privacy concerns (IPC) (Dinev & Hart, 2005), mobile
users' concerns for information privacy (MUIPC) (Xu
et al., 2012), and internet privacy concerns (IPC)
(Hong & Thong, 2013). These measurements
effectively capture different nuances of how
institutions collect and manage user data and how
essential control is to preserving one's privacy.
Overall, the aforementioned scales address users’
concerns about overcollection, unauthorized release,
or misuse, which Xu and Dinev (2022) refer to as a
data flows focus. Notably, there are instances where
researchers propose the unforeseen consequences of
institutional practices, such as perceived surveillance
and perceived intrusion (Xu et al., 2012), indicating a
transition from the third era (the rise of the Internet,
1990-2010) to the fourth era (the rise of AI and
machine learning (AI/ML), 2011-present). We
observe a shift in focus from data flows to the impact
of advanced learning algorithms on individual well-
being. This evolving landscape highlights the need to
update the concept of information privacy for the new
era, as the literature shows that advanced techniques
such as differential privacy are not the end-all solution
(Domingo-Ferrer et al., 2021).
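For readers unfamiliar with the technique, a minimal sketch of differential privacy's Laplace mechanism (the data and epsilon values below are illustrative assumptions) makes the trade-off at issue visible: smaller epsilon means stronger privacy but noisier, less useful answers, one reason such techniques are not an end-all solution:

```python
import math
import random

def dp_count(records, predicate, epsilon):
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1, so noise is drawn from
    Laplace(0, 1/epsilon) using inverse-CDF sampling.
    """
    true_count = sum(1 for r in records if predicate(r))
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

ages = [23, 31, 45, 52, 29, 38]  # hypothetical records; true count over 30 is 4
random.seed(0)
print(dp_count(ages, lambda a: a > 30, epsilon=0.1))   # heavy noise: strong privacy
print(dp_count(ages, lambda a: a > 30, epsilon=10.0))  # light noise: weak privacy
```

The sketch shows only the mechanism, not the harder questions (budget composition across queries, inference from auxiliary data) that motivate the "not an end-all solution" caveat.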
2.4. Updating information privacy concepts
for the fourth era
With technological advancements, it is crucial to
reexamine the concept of information privacy to tackle
the challenges and seize the opportunities presented by
the AI/ML age. This reevaluation should aim to
establish a unified and comprehensive framework for
information privacy that keeps pace with swiftly
evolving data inference practices. Indeed, to better
study privacy, "one needs to be attentive to not only
the characteristics of an individual but also the
complex, socially and culturally dependent,
commonsense background, for which our
understanding is still nascent" (Xu & Zhang, 2023, p.
2). Therefore, the new conceptualization needs to
account for how entities other than traditional
institutions handle personal information and should
include elements such as the legal framework and the
advancement of technology. Despite the significant
development of empirical literature on information
privacy over the past several decades—reflecting
changes in technological capabilities, psychological
impacts, and regulatory frameworks (Xu & Dinev,
2022) — researchers have noted that most current
studies still primarily focus on the second party (the
institution/vendor with which users do business) (see
Yun et al. (2019) for a review). Furthermore, legal
scholars concur that the current conceptualization of
privacy "…fails to provide a useful framework for
determining what constitutes a privacy problem and,
as a consequence, has begun to disserve the
community" (Angel & Calo, 2024, p. 521).
In the following sections, we will first discuss a
direction to reconceptualize information privacy and
propose a unified conceptualization of privacy and the
progress of empirical studies related to the multilevel
view. Next, we will discuss (a) the need to study
information privacy from a multilevel perspective
(Bélanger & Crossler, 2011), (b) moving past the data
flows-oriented perspective to the inference perspective
(Xu & Dinev, 2022), and (c) considering the
interactions between actions and actors.
3. Directions to reconceptualize
information privacy
3.1. Moving beyond a focus on data flows by
including emphasis on inference
The data flows perspective on information
privacy emphasizes controlling the overcollection,
unauthorized release, or misuse of data (Xu & Dinev,
2022). This approach seeks to manage privacy by
regulating how personal information moves through
checks and compliance measures. However, as data
flows become increasingly complex and difficult to
track, this perspective faces significant challenges.
Data collected for one purpose can be repurposed
without additional consent, and technological
advancements continually find new ways to mine,
combine, analyze, and utilize data in ways that bypass
traditional privacy protections, potentially violating
certain rights, including those protected by the U.S.
Fourth Amendment (Slobogin, 2008).
Shifting to an inference perspective can better
address these challenges by focusing on the
consequences of data integration and analysis and on
the use of inference outcomes, rather than merely on
how data is handled. This perspective highlights the risks posed
by anonymized or aggregated data, which can still lead
to sensitive inferences about individuals, affecting
their autonomy (Cormode et al., 2013). It also
addresses individual concerns from algorithmic
decisions like credit scoring, job screening, and law
enforcement profiling (Favaretto et al., 2019). As large
language models increasingly draw on vast public
databases, incorporating an inference component into
the concept of information privacy is crucial for
studying privacy concerns in the age of AI/ML.
Incorporating inference into information privacy
would open new research avenues and enhance legal
protections. This approach would focus on limiting
harmful inferences and setting boundaries on
permissible data inferences, regardless of how data is
collected or processed. Current data protection laws
are inadequate against the challenges posed by big
data and AI, highlighting the urgent need for legal
revisions to better safeguard individuals from privacy
risks associated with inferential analytics (Richards,
2021; Wachter & Mittelstadt, 2019). This data
inference approach aligns with Daniel Solove's
privacy taxonomy, categorizing privacy into
information collection, processing, dissemination, and
invasion (Solove, 2005). While the first two categories
focus on how data is gathered and processed, the latter
two address its dissemination and the resulting
intrusions on individuals.
More importantly, explicitly extending the
concept of privacy to include inference will advance
studies on accountability and transparency in
algorithmic decision-making, ensuring individuals can
understand decisions made about them. Moreover, it
will promote the development of ethical AI that
respects privacy by design and inherently limits
intrusive inferences (Mingers & Walsham, 2010).
In conclusion, integrating an inference
component—encompassing data integration, analysis,
and outcome use—into the concept of privacy better
addresses the complexities of modern data
environments and the evolving threats to personal
privacy. This approach offers a more effective and
forward-looking defense of individual rights in the
digital age by focusing on the implications of
intelligence extracted from data rather than just its
movement through databases.
3.2. Moving beyond individual-vendor
relationships by including a multilevel
perspective
Research on information privacy has traditionally
focused on direct interactions between individuals and
vendors (Yun et al., 2019). However, in today's
interconnected digital environment, there is a need for
multilevel and multiparty perspectives. To assess the
current scope of this research, we conducted a targeted
literature review via Web of Science, using keywords
such as "information privacy," "privacy concerns,"
"internet privacy," and "information privacy
concerns." These keywords were used because they
encapsulate the core concepts and variations in
terminology prevalent in the literature (e.g., privacy
concerns are often used to measure information
privacy and internet privacy in the literature - Pavlou
(2011)), allowing us to capture a broad spectrum of
studies relevant to both the general and specific
aspects of privacy in the digital age. This review
targeted three leading information systems journals:
Management Information Systems Quarterly (MISQ),
Information Systems Research (ISR), and the Journal
of Management Information Systems (JMIS). The
search yielded 49 papers published between 1990 and
2023. An evaluation of these articles revealed that,
although various levels of information privacy are
mentioned or discussed, they have not been
sufficiently conceptualized or measured.
Table 1 below provides an overview of the articles
addressing information privacy. Among these, the
theoretical work by Bélanger and James (2020) stands
out. Their paper introduces a two-level theory of
information privacy, coining the terms "we-privacy"
and "I-privacy." Their theory highlights that
information privacy extends beyond individuals
managing their data with institutions and encompasses
collective privacy considerations.
Table 1. Information privacy is mentioned across
various levels in the IS literature

Level mentioned        # of articles
Infrastructure         23
Institution-specific   40
Regulatory systems     7
Individuals            10
Building on the foundational concepts of "we-
privacy" and "I-privacy," it is logical to extend the
focus to include broader elements critical to IS
research, such as regulatory systems, IT infrastructure,
and individual actors like peers and hackers, to
develop a comprehensive understanding of privacy
issues.
First, regulatory systems are critical in shaping
privacy norms and practices by enforcing standards
and penalties, directly influencing privacy
stakeholders' behavior (Bellia, 2009). Additionally, IT
infrastructure—including hardware (e.g., IoT devices,
CCTVs), software (e.g., AI/ML and statistical
models), internet networks, and databases—is vital
because it stores, transmits, combines, and analyzes
personal information, creating potential
vulnerabilities. As Quach et al. (2022, p. 1308) warn:
"Algorithms can extract sensitive information such as
people’s political opinions, sexual orientation, and
medical conditions from less sensitive information."
Additionally, individual-level actors, such
as peers, influence privacy through norms around
sharing and receiving information, while hackers pose
continuous threats to data security (Liu & Wang, 2018).
Moreover, digital stalking has emerged as a significant
concern, where malicious actors exploit personal
information and online behavior to track and harass
individuals, further exacerbating privacy risks
(Stevens et al., 2021).
By incorporating these layers of actors, privacy
scholars can develop more effective
conceptualizations for managing and protecting
privacy within this complex ecosystem.
3.3. Incorporating the interaction between
data flows, data inference, and multilevel
focus
Reconceptualizing information privacy through
the lens of data flows—including overcollection and
management (e.g., unauthorized release)—combined
with data inference (covering data integration,
analysis, and outcome use) and involving multilevel
actors (such as infrastructure, institutions, regulatory
systems, and individuals) offers a nuanced
understanding of privacy. This integrated approach
considers all aspects of data handling, providing a
comprehensive view of data manipulation and
associated privacy risks at each stage. Engaging
stakeholders across various levels, from IT
infrastructure to policy regulators, promotes a
collaborative approach that balances individual rights
with societal needs. This alignment is crucial for
developing more effective and adaptable privacy
frameworks to address and mitigate emerging threats
in our data-driven world.
This reconceptualization facilitates a dynamic and
proactive response to privacy challenges, keeping pace
with technological advancements and evolving
regulatory landscapes. The interplay between
individual privacy concerns, advancements in IT
infrastructure (e.g., generative AI and deepfakes), and
the dynamics of regulatory systems (such as GDPR in
Europe, HIPAA, and ADPPA in the US), along with
the rise of peer-to-peer business models, opens new
avenues for exploration. The scope of information
privacy stakeholders has expanded beyond the
traditional dyad of service providers and customers,
especially as AI models increasingly predict user
behavior in real time, leading to more intricate
relationships.
More importantly, a multi-actor perspective
enables discussions of information privacy across
various contexts, thereby extending Helen
Nissenbaum's model of privacy as "contextual
integrity" (Nissenbaum, 2004). This model stresses
that privacy norms should align with specific contexts,
with information flow rules tied to the norms of each
particular context (e.g., the same information may
raise different privacy concerns when dealing with
institutions compared to peers or AI).
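Contextual integrity lends itself to a simple operational sketch: represent each context's norms as the set of information flows deemed appropriate there, and judge a flow against them. The contexts, roles, and norms below are illustrative assumptions, not drawn from Nissenbaum's work or this paper:

```python
# Sketch of contextual integrity: a flow (sender, recipient, info_type) is
# appropriate only if the norms of its context permit it. All entries are
# hypothetical examples.
CONTEXT_NORMS = {
    "healthcare": {("patient", "physician", "medical_history")},
    "commerce":   {("customer", "vendor", "shipping_address")},
}

def respects_contextual_integrity(context, sender, recipient, info_type):
    # A flow violates contextual integrity when it does not match any
    # norm attached to the context in which it occurs.
    return (sender, recipient, info_type) in CONTEXT_NORMS.get(context, set())

# The same information type triggers different judgments in different contexts.
print(respects_contextual_integrity("healthcare", "patient", "physician",
                                    "medical_history"))  # True
print(respects_contextual_integrity("commerce", "customer", "vendor",
                                    "medical_history"))  # False
```

The design choice mirrors the text: appropriateness is a property of the (actor, action, context) combination, not of the information type alone.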
Following this exploration of the expanded focus
on inference in information privacy, the next section
introduces a broader conceptual
framework. This framework integrates the interactions
between data flows, data inference, and multilevel
stakeholder perspectives, paving the way for a
comprehensive approach to addressing the evolving
challenges in data governance and privacy protection
in the digital age.
4. Proposing a unified conceptualization
of information privacy
This reconceptualization of information privacy is
based on two essential foundations: (1) following the
established practice of using information privacy
concerns as a proxy for information privacy and (2)
employing operationalism as a framework to measure
information privacy concerns.
First, privacy concerns serve as a reliable proxy
for information privacy because they encapsulate the
complex nature of privacy in the digital age. Initially
defined as "the right to be let alone" (Warren &
Brandeis, 1890, p. 193), privacy has evolved from a
rights-based concept to one centered on consent and
control over personal information (Westin, 1968).
This evolution aligns with the modern understanding
of information privacy, involving data collection,
improper access, and unauthorized secondary use
(Bélanger & Crossler, 2011; Smith et al., 1996). Given
the challenges in directly measuring information
privacy, information privacy concerns have emerged
as a practical proxy (Smith et al., 1996). These
concerns reflect individuals' beliefs, attitudes, and
perceptions about the control and fairness of their
information privacy, effectively operationalizing the
concept (Malhotra et al., 2004). Defined as a
dispositional belief that reflects the loss of control over
personal information, privacy concerns significantly
influence privacy-related decisions (Alashoor et al.,
2023). As information systems and data analytics have
advanced, research on privacy concerns has shifted
from secure data flows to the implications of data
output in knowledge inference, emphasizing the
growing importance of privacy concerns in behavioral
research models (Xu & Dinev, 2022). This shift
underscores the role of privacy concerns as a robust
and practical means of assessing information privacy,
especially as firms increasingly use large datasets to
train AI and machine learning models.
Second, by adopting the operationalism
perspective, which defines scientific concepts through
the specific procedures used to measure them
(Bridgman, 1927), we propose a unified framework
for studying information privacy through privacy
concerns. This approach focuses on measurement and
tackles the challenge of defining information privacy
concerns through a bottom-up method by examining
how data reflects individual concerns at the
operational level—integrating data flows (collection
and management), inference (integration, analysis,
and outcomes), and multilevel stakeholder
perspectives. This method fills a gap in the literature
regarding the lack of focus on inference and multilevel
perspectives (Bélanger & Crossler, 2011). As
information privacy concerns manifest at the
operational level, in the following section, we will
discuss each subcomponent in detail, illustrating how
they can enhance the current understanding of
information privacy by addressing individuals'
concerns toward each dimension of action and the
associated actors.
4.1. Actor-oriented focus
While existing conceptualizations have focused
on the roles of institutions (Hong & Thong, 2013) and,
more recently, the psychological concerns of peers
(Zhang et al., 2022) in information privacy, privacy
literature often overlooks the role of infrastructure and
regulatory systems as critical actors. For instance,
traditional approaches to privacy concerns may fall
short in scenarios where AI integrates data from
multiple unknown sources and employs machine
learning processes that users do not fully understand.
The use of AI as a digital agent, with its "black box"
operations, provides an opportunity to evaluate the
effectiveness of our expanded privacy
conceptualization in emerging contexts where existing
models show gaps. AI's influence is not necessarily
tied to a specific organization or vendor, but users'
concerns may arise from the unpredictable power of
the AI model itself. Technological infrastructure plays
a crucial role in driving information privacy concerns.
For instance, the Internet of Things (IoT) generates
countless data points, which are centralized in cloud
storage and analyzed by powerful algorithms.
Everyday devices like thermostats, smartphones, and
facial recognition systems collect extensive data,
tracking movements, learning routines, and scanning
faces. This infrastructure—comprising hardware,
platforms, and applications—stores, transmits, and
processes human digital footprints. While privacy
features like anonymization and encryption can be
built into these technologies, data breaches remain
risky due to system vulnerabilities, human error, or
sophisticated cyberattacks.
Additionally, even anonymized data can be re-
identified through advanced analytical techniques,
compromising privacy (Majeed & Lee, 2020). The
protocols governing data transmission and security
within these systems are often opaque, leading to
distrust and uncertainty among users. Individuals
typically have limited control over how these systems
use and analyze their data, fostering feelings of
powerlessness and privacy violations. In conclusion,
the evolution of IT infrastructure is transforming our
world and reshaping how individuals perceive privacy.
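The re-identification risk noted above can be illustrated with a toy linkage attack (all records below are hypothetical): a release stripped of names is joined to a public roster on quasi-identifiers, following the classic zip code/birth year/sex pattern:

```python
# Toy linkage attack (hypothetical data): an "anonymized" release with names
# removed is re-identified by joining on quasi-identifiers shared with a
# public roster such as a voter roll.
anonymized_health = [
    {"zip": "45221", "birth_year": 1984, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "45219", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]
public_roster = [
    {"name": "Alice", "zip": "45221", "birth_year": 1984, "sex": "F"},
    {"name": "Bob",   "zip": "45202", "birth_year": 1975, "sex": "M"},
]

def reidentify(release, roster):
    # Match every released record against every roster entry on the
    # quasi-identifier columns; a unique match re-identifies the record.
    keys = ("zip", "birth_year", "sex")
    hits = []
    for rec in release:
        for person in roster:
            if all(rec[k] == person[k] for k in keys):
                hits.append((person["name"], rec["diagnosis"]))
    return hits

print(reidentify(anonymized_health, public_roster))  # [('Alice', 'diabetes')]
```

No single dataset here is sensitive on its own; the privacy harm arises from the join, which is exactly the inference-stage risk the proposed conceptualization foregrounds.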
Regulatory systems play a crucial role in setting
the rules for data handling, often balancing business
interests, technological innovation, and individual
privacy. However, the relationship between law and
privacy is complex and influenced by several factors.
First, regulations often lag behind rapid technological
advancements, leaving legal frameworks struggling to
keep pace. As data becomes a valuable commodity
moving quickly through networks, emerging data
practices can outpace legal protections, creating
uncertainty and vulnerability. Additionally, cultural
and political differences shape diverse legal
landscapes worldwide. For example, the EU's
stringent General Data Protection Regulation (GDPR)
contrasts with the more fragmented approach in the
US, reflecting differing values and priorities that
influence how individuals interact with their data and
institutions. In state-owned systems, the potential for
government-business collusion heightens surveillance
and data misuse concerns. Conversely, independent
market economies may offer more individual control
but still face challenges related to corporate data
collection practices. Given the diverse regulatory
systems, legal scholar Strahilevitz argues for the
necessity of a unified framework for privacy law,
stating, "It is time to move aggressively toward the
reunification" (Strahilevitz, 2010, p. 2009).
While information privacy concerns among peers
have been studied, particularly in psychological
contexts like virtual territory and communication
privacy (Zhang et al., 2022), the current
conceptualization overlooks how peers leverage data
flows and inference to impact privacy. Individuals
collecting, integrating, and analyzing peers' data for
secondary or malicious purposes introduce significant
privacy concerns in the digital age. This goes beyond
traditional breaches involving unauthorized or
unforeseen use of personal information by
acquaintances, colleagues, or even strangers in shared
digital spaces. The rise of social media, online forums,
and digital platforms has made it easier for individuals
to collect personal information about peers, often
without their consent. This data can be analyzed to
reveal patterns, preferences, and behaviors, which can
be used for anything from targeted advertising to more
harmful activities like identity theft, stalking, or
manipulation (Stevens et al., 2021). These practices
underscore the urgent need for solid digital privacy
protections and careful consideration of the
information shared online, as data misuse can have
lasting consequences for personal privacy and
security. As technology continues to evolve and
become more integrated into daily life, there is an
increasing need to expand our understanding of
privacy issues, especially as legal frameworks struggle
to keep up with rapid technological advancements.
4.2. Action-oriented focus
The action-oriented approach to information
privacy emphasizes the dynamic management of data
from collection to outcome use. By integrating data
flows and inference, this perspective highlights that
privacy in the digital age depends on how data is
actively handled. Addressing concerns about data
overcollection and focusing on inference through
integration, analysis, and outcome use, this approach
offers a comprehensive framework aligned with the
fluid nature of digital information. It redefines the
privacy paradigm by emphasizing the active phases of
the data lifecycle, yielding a more relevant and
practical basis for protecting privacy.
4.3. A unified conceptualization of information
privacy
Having established the relevant actions and actors,
the next step is to expand the existing framework so
that each action interacts with each class of actor:
infrastructure, institutions, regulatory systems, and
individuals. Table 2 illustrates the comprehensiveness
of the unified model, categorizing privacy concerns
across four levels of actors: infrastructure, institution-
specific, regulatory systems, and individuals. The
table provides a detailed breakdown and definition of
privacy concerns at each level, highlighting the
multifaceted nature of these issues and the
interconnectedness of the factors that influence them.
This comprehensive perspective is essential for
developing effective strategies to protect and promote
privacy in the digital age.
5. Discussion
This unified framework builds on existing
conceptualizations by encompassing critical
components of information privacy, such as data
collection, management (e.g., errors, unauthorized
secondary use), surveillance, and intrusion, as
discussed by Smith et al. (1996), Malhotra et al.
(2004), Hong and Thong (2013), Dinev and Hart
(2004, 2006), and Xu et al. (2012). Moreover, it allows
researchers to extend beyond the individual-institution
relationship by incorporating other actors. This makes
the framework more compatible with Nissenbaum's
(2004) model of privacy as contextual integrity, which
emphasizes context-specific norms in governing the
flow of information: privacy is maintained when data
is shared and used in ways consistent with those norms.
Since it accounts for data inference, this unified
conceptualization can also serve as a strong predictor
for the three essential aspects of privacy: identity,
freedom, and protection (Richards, 2021).
This conceptualization allows for a nuanced
understanding of privacy concerns by considering how
different actions and actors interact to influence
privacy outcomes. For instance, integrating data from
multiple sources and using advanced analytics can
lead to new privacy risks that are not adequately
addressed by traditional privacy frameworks. The
reconceptualization of information privacy presented
in this paper addresses several critical concerns in the
existing literature, particularly the lack of focus on the
consequences of how learning algorithms can infer
knowledge from massive data collections (Xu &
Dinev, 2022). Contributing to the multilevel privacy
movement, which distinguishes notions such as "I-privacy"
and "we-privacy" (Bélanger & James, 2020), this
approach highlights the need to consider the various
stakeholders involved in the privacy ecosystem. This
includes individuals, institutions, regulatory
frameworks, and technological infrastructures that
shape privacy practices. By examining privacy
concerns at these multiple levels, we provide a more
holistic view of the challenges and opportunities in
managing information privacy in the digital age.
Furthermore, our framework underscores the
importance of regulatory systems in shaping privacy
norms and practices while providing tangible
attributes of information privacy for evaluating and
updating regulations to address the challenges posed
by new data practices.
In terms of practical implications, this approach to
conceptualizing information privacy extends beyond
traditional data-flow considerations to include data
inferences and multilevel perspectives. It allows an
institution's privacy governance committee or public
board members to have a comprehensive approach to
reviewing and monitoring organizational practices,
which can help "prevent harmful impacts of learning
algorithms on individuals' autonomy and agency" (Xu
& Dinev, 2022, p. 7). Institutions that directly collect,
integrate, analyze, and use the outcomes of data should
implement robust privacy measures that address not
only the collection and management of data but also
the potential inferences and insights that can be drawn
from it. This involves communicating to users how
their data will be used, including any inferences that
might be made, and employing advanced privacy-
preserving technologies such as differential privacy,
federated learning, and secure multiparty computation
to safeguard data during its lifecycle—from collection
to analysis and inference. Practitioners should adhere
Table 2. Proposed unified conceptualization of information privacy
Action
Data flows
Data inference
Collection
Management
Integration
Analysis
Outcome Use
Concern that
extensive amounts of
data are being
collected.
Concern about how
data is managed.
Concern about
how data from
different
applications, data
storages, and
systems are being
combined.
Concern that
extensive
amounts of
personal data are
being analyzed
and inferred.
Concern about
the outcome of
the nefarious use
of data and
information.
Actor
Infrastructure
Individuals' concerns
about how IT
infrastructure is
used to collect
personal data.
Individuals'
concerns toward the
ability to govern
data of the existing
IT infrastructure.
Individuals'
concerns about IT
infrastructure's
capability in
combining data.
Individuals are
concerned about
the IT
infrastructure's
ability to analyze
their data to infer
new information.
Individuals'
concerns about
the vulnerable
outcomes caused
by the IT
infrastructure.
Institution
Individuals are
concerned that
organizations are
collecting and
storing an extensive
amount of personally
identifiable
information in
databases.
Individuals are
concerned that
organizations are
not adequately
managing/governing
the data and not
complying with the
laws and
regulations.
Individuals are
concerned about
how much data is
combined or
merged by
organizations.
Individuals are
concerned about
organizations
analyzing
extensive data to
infer new
information.
Individuals are
concerned about
the vulnerable
outcomes of data
analyses from
organizations
such as
surveillance,
manipulation,
and propaganda.
Regulatory
systems
Individuals are
concerned that the
regulatory system is
ineffective in
regulating data
collection.
Individuals are
concerned that the
regulatory system is
ineffective in
regulating data
management.
Individuals are
concerned that the
regulatory system
is ineffective in
regulating data
integration.
Individuals are
concerned that
the regulatory
system is
ineffective in
regulating data
analyses.
Individuals are
concerned that
the regulatory
system is
ineffective in
regulating data
or data outcome
use.
Individuals
Individuals are
concerned that
others, such as peers
or hackers, are
collecting and
storing extensive
data about them.
Individuals are
concerned that
others are not
adequately
managing/governing
the shared data.
Individuals are
concerned about
the amount of data
combined or
merged by others.
Individuals are
concerned that
others can
analyze and infer
extensive
amounts of
information about
them.
Individuals are
concerned about
the vulnerable
outcomes when
other people
analyze their
data—
vulnerabilities
such as
cyberstalking,
misjudgment,
bias, and privacy
invasion.
to current regulatory frameworks while advocating for
updated laws that address the complexities of modern
data practices. Moreover, because this framework
provides a multifaceted view of information privacy,
it can guide the design and development of privacy-
enhancing technologies (PETs), which can help
mitigate privacy risks, enhance data security, and
empower individuals with greater control over their
personal information.
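To make the inference-stage safeguards concrete, the sketch below applies the classic Laplace mechanism for differential privacy to a simple counting query. This is a minimal illustration under stated assumptions, not a production implementation; the data and the epsilon value are hypothetical.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records: list, epsilon: float) -> float:
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1 (one person joining or leaving
    the dataset changes the count by at most 1), so Laplace noise with
    scale 1/epsilon suffices.
    """
    return sum(records) + laplace_noise(1.0 / epsilon)

# Hypothetical example: noisy count of users who opted in to sharing.
opted_in = [True, False, True, True, False, True]  # true count = 4
noisy = dp_count(opted_in, epsilon=0.5)  # 4 plus Laplace(0, 2) noise
```

A smaller epsilon adds more noise and thus stronger protection, which is exactly the kind of tunable trade-off a privacy governance committee could review.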
In conclusion, this work contributes to the
ongoing discourse on information privacy by
proposing a comprehensive and forward-looking
framework that addresses the multifaceted nature of
privacy concerns in the digital age. Our unified
conceptualization provides a robust foundation for
future research and policy development, ultimately
aiming to enhance the protection of individual privacy
in an increasingly interconnected and data-driven
world.
6. Future research directions
Future research can account for the sophisticated
ways in which data is combined, analyzed, managed,
and put to outcome use by moving beyond a focus on
data flows to include data inference. This shift is particularly
relevant in emerging technologies such as artificial
intelligence and machine learning, which can
potentially generate significant privacy risks through
inferential analytics (Xu & Dinev, 2022).
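As a toy illustration of how inference turns innocuous data flows into privacy risk, the sketch below infers an attribute a user never disclosed from their disclosed "likes". The training profiles, labels, and scoring rule are hypothetical; real inferential analytics apply the same principle at scale with far richer models.

```python
from collections import Counter

# Hypothetical training data: publicly disclosed page "likes" paired
# with a sensitive attribute the users never shared directly.
training = [
    ({"gadgets", "esports"}, "under_30"),
    ({"gardening", "news"}, "over_30"),
    ({"esports", "memes"}, "under_30"),
    ({"news", "gardening", "cooking"}, "over_30"),
]

def infer_attribute(likes: set) -> str:
    """Score each attribute value by overlap with training profiles.

    A crude nearest-profile inference; logistic regression over
    millions of likes works on the same principle at scale.
    """
    scores = Counter()
    for profile, label in training:
        scores[label] += len(likes & profile)
    return scores.most_common(1)[0][0]

# The attribute is never collected, yet it is inferred from data flows.
print(infer_attribute({"esports", "memes", "gadgets"}))  # → under_30
```

The privacy concern here sits in the analysis and outcome-use cells of the framework, not in collection: every individual datum was voluntarily shared.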
Our research opens up several avenues for future
empirical work. By providing a comprehensive
conceptual model of information privacy concerns, we
offer a valuable tool for privacy scholars to investigate
privacy actors that have been understudied in the
literature (Yun et al., 2019). Due to the framework's
extensible and modular nature, it can be employed to
assess the impact of novel technological
advancements and regulatory interventions on privacy
outcomes. More importantly, this unified framework
aids future research in creating comprehensive
measurement items to capture information privacy
concerns more cohesively and holistically. In fact, this
unified conceptualization of information privacy
concerns can help answer the call to develop “more
common measurements to be used across studies”
(Bélanger & Crossler, 2011, p. 1035).
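As one illustration of the framework's extensible, modular nature, the actor/action grid of Table 2 can be enumerated programmatically to generate candidate measurement-item stems. The item wording below is purely illustrative and hypothetical, not validated scale content.

```python
# Actor and action labels follow the unified framework of Table 2.
ACTORS = ["infrastructure", "institution", "regulatory system", "peers"]
ACTIONS = {
    "collection": "collects extensive data about me",
    "management": "fails to adequately manage my data",
    "integration": "combines my data from different sources",
    "analysis": "analyzes my data to infer new information",
    "outcome use": "uses inferred information in harmful ways",
}

def item_stems() -> list:
    """One candidate concern-item stem per (actor, action) cell."""
    return [
        f"I am concerned that the {actor} {wording}."
        for actor in ACTORS
        for wording in ACTIONS.values()
    ]

stems = item_stems()
print(len(stems))  # 4 actors x 5 actions = 20 cells
```

Because new actors or actions are added by extending a list rather than rewriting the instrument, the grid supports the kind of common, reusable measurement the literature calls for.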
As design science in information systems focuses
on creating and evaluating innovative artifacts to solve
real-world problems, a unified framework for
examining information privacy, encompassing both
actors and actions, can significantly enhance this
approach by providing a comprehensive lens to better
identify and evaluate privacy-related features in
system development. This framework facilitates the
systematic identification of privacy issues across
different interaction levels and data-handling phases,
guiding the development of IT artifacts precisely
tailored to these concerns. At the systems design and
analysis level, this framework can guide system
analysts and development teams in designing and
evaluating information systems that effectively
alleviate user privacy concerns. Legal scholars have
suggested that a better conceptualization of privacy
could become more relevant to policy by offering
guidance on weighing and ordering various privacy
interests when difficult choices arise (Pozen, 2016).
This unified conceptualization of information
privacy provides a comprehensive framework for legal
scholarship and regulatory decision-making. Using
this actor/action approach, lawmakers can develop
targeted regulations that address each aspect of
privacy, ensuring vulnerabilities are thoroughly
considered and effectively mitigated in legislation.
7. Conclusion
This paper explores information privacy's
complex and evolving nature by proposing a unified
conceptual framework that integrates data flows, data
inference, and multilevel stakeholder perspectives.
Our research emphasizes the need to update traditional
privacy frameworks in response to technological
advancements. By incorporating an inference
component and adopting a multilevel approach, we
provide a more comprehensive understanding of
privacy concerns that align with contemporary data
practices. The proposed framework highlights the
dynamic interplay between actions and actors, offering
a multifaceted perspective that enhances the
theoretical understanding of information privacy. It
also provides tools for assessing and mitigating
privacy risks in real-world scenarios. Our work
underscores the importance of considering both the
flow and inferential use of data, addressing gaps in the
existing literature, and paving the way for future
research and policy development in information
systems.
Acknowledgements: Truong (Jack) Luu
acknowledges support from the University of
Cincinnati Digital Futures Graduate Student Fellows
Program.
8. References
Alashoor, T., Keil, M., Smith, J., & McConnell, A. R.
(2023). Too Tired and in Too Good of a Mood to Worry
About Privacy: Explaining the Privacy Paradox
Through the Lens of Effort Level in Information
Processing. Information Systems Research, 34(4),
1415-1436.
Angel, M. P., & Calo, R. (2024). Distinguishing privacy law:
a critique of privacy as social taxonomy. Columbia Law
Review, 124(2), 507-562.
Bélanger, F., & Crossler, R. E. (2011). Privacy in the digital
age: a review of information privacy research in
information systems. MIS Quarterly, 35(4), 1017-1041.
Bélanger, F., & James, T. L. (2020). A Theory of Multilevel
Information Privacy Management for the Digital Era.
Information Systems Research, 31(2), 510-536.
Bellia, P. L. (2009). Federalization in Information Privacy
Law. Yale Law Journal, 118(5), 868-900.
Bridgman, P. W. (1927). The logic of modern physics (Vol.
3). Macmillan.
Cormode, G., Procopiuc, C. M., Shen, E., Srivastava, D., &
Yu, T. (2013). Empirical privacy and empirical utility
of anonymized data. 2013 IEEE 29th International
Conference on Data Engineering Workshops
(ICDEW), Australia.
Dinev, T., & Hart, P. (2004). Internet privacy concerns and
their antecedents-measurement validity and a
regression model. Behaviour & Information
Technology, 23(6), 413-422.
Dinev, T., & Hart, P. (2005). Internet privacy concerns and
social awareness as determinants of intention to
transact. International Journal of Electronic
Commerce, 10(2), 7-29.
Dinev, T., & Hart, P. (2006). An extended privacy calculus
model for e-commerce transactions. Information
Systems Research, 17(1), 61-80.
Domingo-Ferrer, J., Sánchez, D., & Blanco-Justicia, A.
(2021). The limits of differential privacy (and its
misuse in data release and machine learning).
Communications of the ACM, 64(7), 33-35.
Favaretto, M., De Clercq, E., & Elger, B. S. (2019). Big Data
and discrimination: perils, promises and solutions. A
systematic review. Journal of Big Data, 6(1), 1-27.
Hong, W., & Thong, J. Y. (2013). Internet privacy concerns:
An integrated conceptualization and four empirical
studies. MIS Quarterly, 37(1), 275-298.
Liu, Z. L., & Wang, X. Q. (2018). How to regulate
individuals' privacy boundaries on social network sites:
A cross-cultural comparison. Information &
Management, 55(8), 1005-1023.
Majeed, A., & Lee, S. (2020). Anonymization techniques for
privacy preserving data publishing: A comprehensive
survey. IEEE Access, 9, 8512-8545.
Malhotra, N. K., Kim, S. S., & Agarwal, J. (2004). Internet
users' information privacy concerns (IUIPC): The
construct, the scale, and a causal model. Information
Systems Research, 15(4), 336-355.
Mingers, J., & Walsham, G. (2010). Toward ethical
information systems: The contribution of discourse
ethics. MIS Quarterly, 34(4), 833-854.
Nissenbaum, H. (2004). Privacy as contextual integrity.
Washington Law Review, 79, 119.
Pavlou, P. A. (2011). State of the information privacy
literature: Where are we now and where should we go?
MIS Quarterly, 35(4), 977-988.
Pozen, D. E. (2016). Privacy-Privacy Tradeoffs. University
of Chicago Law Review, 83(1), 221-247.
Quach, S., Thaichon, P., Martin, K. D., Weaven, S., &
Palmatier, R. W. (2022). Digital technologies: tensions
in privacy and data. Journal of the Academy of
Marketing Science, 50(6), 1299-1323.
Richards, N. (2021). Why privacy matters. Oxford
University Press, Oxford.
Slobogin, C. (2008). Government data mining and the fourth
amendment. The University of Chicago Law Review,
75(1), 317-341.
Smith, H. J., Dinev, T., & Xu, H. (2011). Information
privacy research: an interdisciplinary review. MIS
Quarterly, 35(4), 989-1015.
Smith, H. J., Milberg, S. J., & Burke, S. J. (1996).
Information privacy: Measuring individuals' concerns
about organizational practices. MIS Quarterly, 20(2),
167-196.
Solove, D. J. (2005). A taxonomy of privacy. University of
Pennsylvania Law Review, 154, 477.
Stevens, F., Nurse, J. R., & Arief, B. (2021). Cyber stalking,
cyber harassment, and adult mental health: A
systematic review. Cyberpsychology, Behavior, and
Social Networking, 24(6), 367-376.
Stewart, K. A., & Segars, A. H. (2002). An empirical
examination of the concern for information privacy
instrument. Information Systems Research, 13(1), 36-
49.
Strahilevitz, L. J. (2010). Reunifying Privacy Law.
California Law Review, 98(6), 2007-2048.
Wachter, S., & Mittelstadt, B. (2019). A right to reasonable
inferences: re-thinking data protection law in the age of
big data and AI. Columbia Business Law Review, 494.
Warren, S. D., & Brandeis, L. D. (1890). The Right to
Privacy. Harvard Law Review, 4(5), 193-220.
Westin, A. F. (1968). Privacy and Freedom. Atheneum,
New York.
Xu, H., & Dinev, T. (2022). Guest Editorial: Reflections on
the 2021 Impact Award: Why Privacy Still Matters.
MIS Quarterly, 46(4), 1-
13.
Xu, H., Gupta, S., Rosson, M. B., & Carroll, J. M. (2012).
Measuring mobile users' concerns for information
privacy. Thirty Third International Conference on
Information Systems, Orlando.
Xu, H., & Zhang, N. (2023). An Onto-Epistemological
Analysis of Information Privacy Research. Information
Systems Research (Articles in Advance), 1-13.
Yun, H., Lee, G., & Kim, D. J. (2019). A chronological
review of empirical research on personal information
privacy concerns: An analysis of contexts and research
constructs. Information & Management, 56(4), 570-
601.
Zhang, N., Wang, C., Karahanna, E., & Xu, Y. (2022). Peer
privacy concern: conceptualization and measurement.
MIS Quarterly, 46(1), 491-530.