Proposing A Unified Concept of Information Privacy:
An Actor/Action-Oriented Approach
Truong (Jack) Luu
University of Cincinnati
luutp@mail.uc.edu
Andrew Harrison
University of Cincinnati
andrew.harrison@uc.edu
Michael Jones
University of Cincinnati
m.jones@uc.edu
Binny M. Samuel
University of Cincinnati
samuelby@uc.edu
Abstract
This conceptual paper introduces a contemporary
conceptualization of information privacy to align with
the reality of its multifaceted nature amid rapid
technological advancements and enmeshed diverse
perspectives. We propose a unified approach
emphasizing the dynamic nature of information
privacy as it interacts with the evolving digital
landscape. This new encompassing conceptualization
integrates theoretical perspectives from existing
research on infrastructure, institutional, regulatory
systems, and individuals as actors, and expands actors'
actions beyond data flows alone to encompass data
inference. This conceptualization enables a deeper
understanding and mitigation of the entangled,
multilayered nature of information privacy, laying the
groundwork for future research and practical
applications in privacy research. By incorporating
action- and actor-oriented perspectives, our unified
conceptualization offers a robust framework for
assessing and managing privacy risks in an
increasingly complex digital ecosystem.
Keywords: conceptual paper, information privacy,
data flows, data inference, multilevel privacy
1. Introduction
Information privacy has been a central topic in IS
research; however, the rapid advancement of
innovative technologies is not always reflected in its
existing conceptualizations. This paper focuses on two
main issues affecting modern privacy research: (1) the
proliferation of new technologies that collect,
combine, manage, and analyze data, and (2) the
expansion of personal data collection and use beyond
traditional consumer-business relationships. We aim
to update the conceptualization of information privacy
by synthesizing previous research with contemporary
studies on relevant actors and actions in data systems.
The rise of advanced data processing, analytics, and
AI/ML models blurs the lines of data ownership,
necessitating a reevaluation of privacy in our complex
digital ecosystem. While privacy is widely recognized
as crucial for preserving dignity, autonomy, and
freedom in the digital age (Richards, 2021; Westin,
1968), institutions often accumulate, combine, and
analyze vast amounts of user data, frequently
exceeding informed consent and fair business
practices, an issue exacerbated by evolving
technologies and sophisticated predictive models.
Accordingly, the current approach of focusing on
data flows, i.e., data collection and data management
(e.g., error and misuse), has lagged in capturing the
dynamics of technological advancement. Specifically,
there is an urgent need to consider how data is used for
analysis and inference in addition to the conventional
data flows approach (Xu & Dinev, 2022). Indeed, the
inference component is critical because institutions
can make improper or wrong decisions about
individuals by analyzing data, creating power
inequality (Richards, 2021). The consequences of
these inferences can lead to serious problems, such as
discrimination in employment, marketing, and credit
scoring (Favaretto et al., 2019).
Additionally, "information privacy is a multilevel
concept, but rarely studied as such" (Bélanger &
Crossler, 2011, p. 1017). Indeed, this oversight in
recognizing information privacy as a multilevel
concept could be problematic as most current
conceptualizations primarily examine the
relationships between institutions and their users. This
approach may overlook crucial elements such as the
role of regulatory frameworks, peer dynamics, and
technological advancements. Technological
advancement poses a challenge to individual
information privacy. While the early days of the
Internet focused on e-commerce and simple
information exchange (Web 1.0), the rise of personal
webpages and social platforms like Facebook and X
(Web 2.0) amplified online social interaction. Then,
Web 3.0, with the ability to read, write, and own,
emerged as a more decentralized network manifesting
through blockchain technology.
While this shift was intended to empower
individuals to have more control over their personal
information, the widespread automated data collection
and advanced analytical tools have undermined many
protections meant to secure personal data. The current
"single level" concept of information privacy hinders
the evolving progress of theoretical and empirical
aspects of studying information privacy.
Proceedings of the 58th Hawaii International Conference on System Sciences | 2025
URI: https://hdl.handle.net/10125/109392
ISBN 978-0-9981331-8-8 (CC BY-NC-ND 4.0)

Theoretically, this lack of a multilevel perspective
prevents building a consistent, holistic theoretical
framework. Similarly, in empirical aspects, research
might neglect to quantify the effects of other levels in
terms of hypothesizing and quantifying effect sizes
regarding information privacy. The different levels of
information privacy have been either mentioned or
discussed but have not been adequately
conceptualized and measured. We incorporate
multilevel privacy by studying a user's concerns when
interacting with various individuals, organizations, IT
infrastructure, and regulatory systems. We do so by
disentangling the relevant actions and actors when
formulating information privacy.
Specifically, we identify how combinations of
actions (data collection, data integration, data
management, data analysis, and outcome use) and
actors (infrastructure, institutions, regulatory systems,
and individuals) manifest through information privacy
concerns, a proxy of information privacy.
Our conceptualization offers a foundation for
relevant privacy stakeholders to assess privacy risks
and evaluate the effectiveness of interventions across
a broad range of real-life scenarios. Notably, this new
conceptualization allows us to integrate the influence
of advancements in infrastructure and practices (e.g.,
AI/ML and advanced analytics) on privacy research.
This paper will first briefly present the existing
conceptualization of information privacy. Next, we
will outline a direction for updating the concept of
information privacy in the modern era. Finally, we
propose a new unified conceptual framework for
examining information privacy through its proxy,
information privacy concerns, and discuss how this
updated conceptualization can address current issues
in empirical research on information privacy.
2. Theoretical foundations
2.1. Challenges in defining and
conceptualizing information privacy
Many have tried to define information privacy.
One of the first attempts was made by Westin (1968,
p. 7), who defined information privacy as "the claim
of individuals, groups, or institutions to determine for
themselves when, how, and to what extent information
about them is communicated to others." This
definition is impactful because it positions privacy as
a form of control. For example, Bélanger and Crossler
(2011, p. 1017) defined information privacy as "the
desire of individuals to control or have some influence
over data about themselves." While these definitions
have served our society reasonably well, technological
advancements, including the development of AI
systems, have blurred their clarity. Consider AI-driven
surveillance cameras in public spaces. These systems
can automatically recognize and track individuals
without consent or control, challenging the traditional
notion of privacy as control over one's information.
This illustrates how technologies complicate existing
privacy definitions, highlighting the need for updated
frameworks.
Moreover, using advanced statistical tools, actors
such as organizations and individuals can gather data
from different sources and infer many novel insights.
This makes the actors with access to this data
extremely powerful (Richards, 2021). For instance,
Target used data science to analyze purchasing
patterns and predict when customers were likely
pregnant, allowing them to send targeted baby product
ads and coupons. More importantly, as data is often
combined from various sources, defining data
ownership becomes more challenging. If ownership
cannot be determined, it becomes difficult for
individuals to exercise control, as one cannot control
an asset they do not own. Given the complexity of the
term privacy and the dynamic nature of our world,
conceptualizing it effectively is critical for researchers
studying information privacy-related behaviors.
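The inference dynamic illustrated by the Target example above can be sketched in a few lines. This is a deliberately simplified illustration, not a description of any real system: the product names, weights, and threshold below are entirely invented.

```python
# Hypothetical sketch of attribute inference from purchase co-occurrence:
# a retailer scores how likely a shopper belongs to a sensitive segment.
# All product names and signal weights are invented for illustration; real
# systems use far richer models and far more data.

SIGNAL_WEIGHTS = {
    "unscented lotion": 0.3,
    "mineral supplements": 0.25,
    "cotton balls": 0.15,
    "large tote bag": 0.1,
}

def segment_score(basket):
    """Sum the signal weights of items in a shopper's basket, capped at 1.0."""
    return min(1.0, sum(SIGNAL_WEIGHTS.get(item, 0.0) for item in basket))

basket = ["unscented lotion", "mineral supplements", "cotton balls"]
print(round(segment_score(basket), 2))  # → 0.7, enough to trigger targeted ads
```

The privacy-relevant point is that no single purchase is sensitive; the inference emerges only from combining innocuous data points.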
2.2. Using privacy concerns as a proxy for
information privacy
Given the inherent difficulty in fully
conceptualizing information privacy, researchers have
used proxies that are easier to define and measure.
Information privacy concerns are often used as a
practical measure of information privacy, aligning
with Percy Bridgman's operationalism. This
philosophical approach defines scientific concepts
strictly in terms of specific operations or procedures
used to measure them (Bridgman, 1927). Thus, the
meaning of a concept is entirely determined by the
empirical methods used to observe and measure it,
emphasizing the practical application and avoiding
metaphysical assumptions. Since direct measurement
of information privacy is complex, privacy concerns
provide a quantifiable and observable proxy, making
the abstract concept of information privacy accessible
and empirically manageable. As Smith et al. (2011)
noted, "Because of the near impossibility of measuring
privacy itself, and also because the salient
relationships depend more on cognitions and
perceptions than on rational assessments, almost all
empirical privacy research in the social sciences relies
on the measurement of a privacy-related proxy of
sorts." This reliance on proxies indicates the
challenges in directly quantifying privacy but enables
a more manageable approach to studying it in
empirical settings. However, the current proxies tend
to focus on specific aspects, like data collection
practices or user perceptions, which may not
encompass broader privacy issues or the evolving
nature of technology and outcome use (Pavlou, 2011).
2.3. Overlooking broader contexts in privacy
research
Because information privacy concerns remain difficult
to measure directly, most instruments developed to
gauge them are psychometric assessments, typically
self-reported Likert scales. However, the majority of
these scales capture
individual concerns about organizational practices
related to data collection and management (e.g.,
secondary use, errors, and awareness). Many popular
scales exist for measuring information privacy
concerns, including individuals' concerns about
organizational information privacy practices (Smith et
al., 1996), concern for information privacy (Stewart &
Segars, 2002), internet users' information privacy
concerns (IUIPC) (Malhotra et al., 2004), internet
privacy concerns (IPC) (Dinev & Hart, 2005), mobile
users' concerns for information privacy (MUIPC) (Xu
et al., 2012), and internet privacy concerns (IPC)
(Hong & Thong, 2013). These measurements
effectively capture different nuances of how
institutions collect and manage user data and
underscore that control is essential to preserving one's
privacy.
Overall, the aforementioned scales address users’
concerns about overcollection, unauthorized release,
or misuse, which Xu and Dinev (2022) refer to as a
data flows focus. Notably, there are instances where
researchers propose the unforeseen consequences of
institutional practices, such as perceived surveillance
and perceived intrusion (Xu et al., 2012), indicating a
transition from the third era (the rise of the Internet,
1990-2010) to the fourth era (the rise of AI and
machine learning (AI/ML), 2011-present). We
observe a shift in focus from data flows to the impact
of advanced learning algorithms on individual well-
being. This evolving landscape highlights the need to
update the concept of information privacy for the new
era, as the literature shows that advanced techniques
such as differential privacy are not the end-all solution
(Domingo-Ferrer et al., 2021).
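For readers unfamiliar with such techniques, the following is a minimal sketch of the Laplace mechanism, the canonical differential-privacy primitive, applied to a count query. The epsilon value and data are invented for illustration; production deployments require careful sensitivity and privacy-budget accounting, which is part of why such techniques are not an end-all solution.

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) as the difference of two exponential variates."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon):
    """Differentially private count: true count plus Laplace(1/epsilon) noise.

    A count query has sensitivity 1 (adding or removing one record changes
    the count by at most 1), so the noise scale is 1/epsilon.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 67, 31]
# Smaller epsilon -> stronger privacy guarantee, but a noisier answer.
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))
```

Each released answer consumes part of a finite privacy budget, so repeated queries either degrade the guarantee or force ever noisier answers, a tradeoff the cited literature examines in depth.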
2.4. Updating information privacy concepts
for the fourth era
With technological advancements, it is crucial to
reexamine the concept of information privacy to tackle
the challenges and seize the opportunities presented by
the AI/ML age. This reevaluation should aim to
establish a unified and comprehensive framework for
information privacy that keeps pace with swiftly
evolving data inference practices. Indeed, to better
study privacy, "one needs to be attentive to not only
the characteristics of an individual but also the
complex, socially and culturally dependent,
commonsense background, for which our
understanding is still nascent" (Xu & Zhang, 2023, p.
2). Therefore, the new conceptualization needs to
account for how entities other than traditional
institutions handle personal information and should
include elements such as the legal framework and the
advancement of technology. Despite the significant
development of empirical literature on information
privacy over the past several decades, reflecting
changes in technological capabilities, psychological
impacts, and regulatory frameworks (Xu & Dinev,
2022), researchers have noted that most current
studies still primarily focus on the second party (the
institution/vendor with which users do business) (see
Yun et al. (2019) for a review). Furthermore, legal
scholars concur that the current conceptualization of
privacy "fails to provide a useful framework for
determining what constitutes a privacy problem and,
as a consequence, has begun to disserve the
community" (Angel & Calo, 2024, p. 521).
In the following sections, we will first discuss a
direction to reconceptualize information privacy and
propose a unified conceptualization of privacy and the
progress of empirical studies related to the multilevel
view. Next, we will discuss (a) the need to study
information privacy from a multilevel perspective
(Bélanger & Crossler, 2011), (b) moving past the data
flows-oriented perspective to the inference perspective
(Xu & Dinev, 2022), and (c) considering the
interactions between actions and actors.
3. Directions to reconceptualize
information privacy
3.1. Moving beyond a focus on data flows by
including emphasis on inference
The data flows perspective on information
privacy emphasizes controlling the overcollection,
unauthorized release, or misuse of data (Xu & Dinev,
2022). This approach seeks to manage privacy by
regulating how personal information moves through
checks and compliance measures. However, as data
flows become increasingly complex and difficult to
track, this perspective faces significant challenges.
Data collected for one purpose can be repurposed
without additional consent, and technological
advancements continually find new ways to mine,
combine, analyze, and utilize data in ways that bypass
traditional privacy protections, potentially violating
certain rights, including those protected by the U.S.
Fourth Amendment (Slobogin, 2008).
Shifting to an inference perspective can better
address these challenges by focusing on the
consequences of data integration, analysis, and the use
of inference outcomes rather than merely on how data
is handled. This perspective highlights the risks posed
by anonymized or aggregated data, which can still lead
to sensitive inferences about individuals, affecting
their autonomy (Cormode et al., 2013). It also
addresses individual concerns from algorithmic
decisions like credit scoring, job screening, and law
enforcement profiling (Favaretto et al., 2019). As large
language models increasingly draw on vast public
databases, incorporating an inference component into
the concept of information privacy is crucial for
studying privacy concerns in the age of AI/ML.
Incorporating inference into information privacy
would open new research avenues and enhance legal
protections. This approach would focus on limiting
harmful inferences and setting boundaries on
permissible data inferences, regardless of how data is
collected or processed. Current data protection laws
are inadequate against the challenges posed by big
data and AI, highlighting the urgent need for legal
revisions to better safeguard individuals from privacy
risks associated with inferential analytics (Richards,
2021; Wachter & Mittelstadt, 2019). This data
inference approach aligns with Daniel Solove's
privacy taxonomy, categorizing privacy into
information collection, processing, dissemination, and
invasion (Solove, 2005). While the first two categories
focus on data handling, the latter two address data
management and its outcomes.
More importantly, explicitly extending the
concept of privacy to include inference will advance
studies on accountability and transparency in
algorithmic decision-making, ensuring individuals can
understand decisions made about them. Moreover, it
will promote the development of ethical AI that
respects privacy by design and inherently limits
intrusive inferences (Mingers & Walsham, 2010).
In conclusion, integrating an inference
component (encompassing data integration, analysis,
and outcome use) into the concept of privacy better
addresses the complexities of modern data
environments and the evolving threats to personal
privacy. This approach offers a more effective and
forward-looking defense of individual rights in the
digital age by focusing on the implications of
intelligence extracted from data rather than just its
movement through databases.
3.2. Moving beyond individual-vendor
relationships by including a multilevel
perspective
Research on information privacy has traditionally
focused on direct interactions between individuals and
vendors (Yun et al., 2019). However, in today's
interconnected digital environment, there is a need for
multilevel and multiparty perspectives. To assess the
current scope of this research, we conducted a targeted
literature review via Web of Science, using keywords
such as "information privacy," "privacy concerns,"
"internet privacy," and "information privacy
concerns." These keywords were used because they
encapsulate the core concepts and variations in
terminology prevalent in the literature (e.g., privacy
concerns are often used to measure information
privacy and internet privacy; see Pavlou, 2011),
allowing us to capture a broad spectrum of
studies relevant to both the general and specific
aspects of privacy in the digital age. This review
targeted three leading information systems journals:
Management Information Systems Quarterly (MISQ),
Information Systems Research (ISR), and the Journal
of Management Information Systems (JMIS). The
search yielded 49 papers published between 1990 and
2023. An evaluation of these articles revealed that,
although various levels of information privacy are
mentioned or discussed, they have not been
sufficiently conceptualized or measured.
Table 1 below provides an overview of the articles
addressing information privacy. Among these, the
theoretical work by Bélanger and James (2020) stands
out. Their paper introduces a two-level theory of
information privacy, coining the terms "we-privacy"
and "I-privacy." This review highlights that
information privacy extends beyond individuals
managing their data with institutions and encompasses
collective privacy considerations.
Table 1. Information privacy is mentioned across
various levels in the IS literature

Level mentioned          # of articles
Infrastructure           23
Institution-specific     40
Regulatory systems       7
Individuals              10
Building on the foundational concepts of "we-
privacy" and "I-privacy," it is logical to extend the
focus to include broader elements critical to IS
research, such as regulatory systems, IT infrastructure,
and individual actors like peers and hackers, to
develop a comprehensive understanding of privacy
issues.
First, regulatory systems are critical in shaping
privacy norms and practices by enforcing standards
and penalties, directly influencing privacy
stakeholders' behavior (Bellia, 2009). Additionally, IT
infrastructure, including hardware (e.g., IoT devices,
CCTVs), software (e.g., AI/ML and statistical
models), internet networks, and databases, is vital
because it stores, transmits, combines, and analyzes
personal information, creating potential
vulnerabilities. As Quach et al. (2022, p. 1308) warn:
"Algorithms can extract sensitive information such as
people’s political opinions, sexual orientation, and
medical conditions from less sensitive information."
Additionally, individual-level actors, such
as peers, influence privacy through social sharing and
receiving norms, while hackers pose continuous
threats to data security (Liu & Wang, 2018).
Moreover, digital stalking has emerged as a significant
concern, where malicious actors exploit personal
information and online behavior to track and harass
individuals, further exacerbating privacy risks
(Stevens et al., 2021).
By incorporating these layers of actors, privacy
scholars can develop more effective
conceptualizations for managing and protecting
privacy within this complex ecosystem.
3.3. Incorporating the interaction between
data flows, data inference, and multilevel
focus
Reconceptualizing information privacy through
the lens of data flows (including overcollection and
management, e.g., unauthorized release), combined
with data inference (covering data integration,
analysis, and outcome use) and involving multilevel
actors (such as infrastructure, institutions, regulatory
systems, and individuals) offers a nuanced
understanding of privacy. This integrated approach
considers all aspects of data handling, providing a
comprehensive view of data manipulation and
associated privacy risks at each stage. Engaging
stakeholders across various levels, from IT
infrastructure to policy regulators, promotes a
collaborative approach that balances individual rights
with societal needs. This alignment is crucial for
developing more effective and adaptable privacy
frameworks to address and mitigate emerging threats
in our data-driven world.
This reconceptualization facilitates a dynamic and
proactive response to privacy challenges, keeping pace
with technological advancements and evolving
regulatory landscapes. The interplay between
individual privacy concerns, advancements in IT
infrastructure (e.g., generative AI and deepfakes), and
the dynamics of regulatory systems (such as GDPR in
Europe, HIPAA, and ADPPA in the US), along with
the rise of peer-to-peer business models, opens new
avenues for exploration. The scope of information
privacy stakeholders has expanded beyond the
traditional dyad of service providers and customers,
especially as AI models increasingly predict user
behavior in real time, leading to more intricate
relationships.
More importantly, a multi-actor perspective
enables discussions of information privacy across
various contexts, thereby extending Helen
Nissenbaum's model of privacy as "contextual
integrity" (Nissenbaum, 2004). This model stresses
that privacy norms should align with specific contexts,
with information flow rules tied to the norms of each
particular context (e.g., the same information may
raise different privacy concerns when dealing with
institutions compared to peers or AI).
Following the detailed exploration of the
expanded focus on inference in information privacy,
the following section introduces a broader conceptual
framework. This framework integrates the interactions
between data flows, data inference, and multilevel
stakeholder perspectives, paving the way for a
comprehensive approach to addressing the evolving
challenges in data governance and privacy protection
in the digital age.
4. Proposing a unified conceptualization
of information privacy
This reconceptualization of information privacy is
based on two essential foundations: (1) following the
established practice of using information privacy
concerns as a proxy for information privacy and (2)
employing operationalism as a framework to measure
information privacy concerns.
First, privacy concerns serve as a reliable proxy
for information privacy because they encapsulate the
complex nature of privacy in the digital age. Initially
defined as "the right to be let alone" (Warren &
Brandeis, 1890, p. 193), privacy has evolved from a
rights-based concept to one centered on consent and
control over personal information (Westin, 1968).
This evolution aligns with the modern understanding
of information privacy, involving data collection,
improper access, and unauthorized secondary use
(Bélanger & Crossler, 2011; Smith et al., 1996). Given
the challenges in directly measuring information
privacy, information privacy concerns have emerged
as a practical proxy (Smith et al., 1996). These
concerns reflect individuals' beliefs, attitudes, and
perceptions about the control and fairness of their
information privacy, effectively operationalizing the
concept (Malhotra et al., 2004). Defined as a
dispositional belief that reflects the loss of control over
personal information, privacy concerns significantly
influence privacy-related decisions (Alashoor et al.,
2023). As information systems and data analytics have
advanced, research on privacy concerns has shifted
from secure data flows to the implications of data
output in knowledge inference, emphasizing the
growing importance of privacy concerns in behavioral
research models (Xu & Dinev, 2022). This shift
underscores the role of privacy concerns as a robust
and practical means of assessing information privacy,
especially as firms increasingly use large datasets to
train AI and machine learning models.
Second, by adopting the operationalism
perspective, which defines scientific concepts through
the specific procedures used to measure them
(Bridgman, 1927), we propose a unified framework
for studying information privacy through privacy
concerns. This approach focuses on measurement and
tackles the challenge of defining information privacy
concerns through a bottom-up method by examining
how data reflects individual concerns at the
operational level, integrating data flows (collection
and management), inference (integration, analysis,
and outcomes), and multilevel stakeholder
perspectives. This method fills a gap in the literature
regarding the lack of focus on inference and multilevel
perspectives (Bélanger & Crossler, 2011). As
information privacy concerns manifest at the
operational level, in the following section, we will
discuss each subcomponent in detail, illustrating how
they can enhance the current understanding of
information privacy by addressing individuals'
concerns toward each dimension of action and the
associated actors.
4.1. Actor-oriented focus
While existing conceptualizations have focused
on the roles of institutions (Hong & Thong, 2013) and,
more recently, the psychological concerns of peers
(Zhang et al., 2022) in information privacy, privacy
literature often overlooks the role of infrastructure and
regulatory systems as critical actors. For instance,
traditional approaches to privacy concerns may fall
short in scenarios where AI integrates data from
multiple unknown sources and employs machine
learning processes that users do not fully understand.
The use of AI as a digital agent, with its "black box"
operations, provides an opportunity to evaluate the
effectiveness of our expanded privacy
conceptualization in emerging contexts where existing
models show gaps. AI's influence is not necessarily
tied to a specific organization or vendor, but users'
concerns may arise from the unpredictable power of
the AI model itself. Technological infrastructure plays
a crucial role in driving information privacy concerns.
For instance, the Internet of Things (IoT) generates
countless data points, which are centralized in cloud
storage and analyzed by powerful algorithms.
Everyday devices like thermostats, smartphones, and
facial recognition systems collect extensive data,
tracking movements, learning routines, and scanning
faces. This infrastructure (comprising hardware,
platforms, and applications) stores, transmits, and
processes human digital footprints. While privacy
features like anonymization and encryption can be
built into these technologies, data breaches remain a
risk due to system vulnerabilities, human error, or
sophisticated cyberattacks.
Additionally, even anonymized data can be re-
identified through advanced analytical techniques,
compromising privacy (Majeed & Lee, 2020). The
protocols governing data transmission and security
within these systems are often opaque, leading to
distrust and uncertainty among users. Individuals
typically have limited control over how these systems
use and analyze their data, fostering feelings of
powerlessness and privacy violations. In conclusion,
the evolution of IT infrastructure is transforming our
world and reshaping how individuals perceive privacy.
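As a concrete illustration of the re-identification risk noted above, the following is a minimal linkage-attack sketch in the style of classic quasi-identifier attacks: an "anonymized" release with names removed is joined to a public auxiliary dataset on shared attributes. All records and field names are invented.

```python
# Minimal linkage (re-identification) attack sketch. Invented data only.
anonymized_release = [
    {"zip": "45220", "birth_year": 1990, "sex": "F", "diagnosis": "asthma"},
    {"zip": "45221", "birth_year": 1985, "sex": "M", "diagnosis": "diabetes"},
]

public_roster = [  # e.g., a voter roll or social profile with names attached
    {"name": "A. Doe", "zip": "45220", "birth_year": 1990, "sex": "F"},
    {"name": "B. Roe", "zip": "45219", "birth_year": 1970, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(release, roster):
    """Link records that agree on every quasi-identifier value."""
    matches = []
    for rec in release:
        for person in roster:
            if all(rec[q] == person[q] for q in QUASI_IDENTIFIERS):
                matches.append((person["name"], rec["diagnosis"]))
    return matches

print(reidentify(anonymized_release, public_roster))
# → [('A. Doe', 'asthma')]: the "anonymous" record is re-identified
```

The attack requires no access to the original identified data, only an auxiliary dataset sharing a few quasi-identifiers, which is why removing names alone offers weak protection.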
Regulatory systems play a crucial role in setting
the rules for data handling, often balancing business
interests, technological innovation, and individual
privacy. However, the relationship between law and
privacy is complex and influenced by several factors.
First, regulations often lag behind rapid technological
advancements, leaving legal frameworks struggling to
keep pace. As data becomes a valuable commodity
moving quickly through networks, emerging data
practices can outpace legal protections, creating
uncertainty and vulnerability. Additionally, cultural
and political differences shape diverse legal
landscapes worldwide. For example, the EU's
stringent General Data Protection Regulation (GDPR)
contrasts with the more fragmented approach in the
US, reflecting differing values and priorities that
influence how individuals interact with their data and
institutions. In state-owned systems, the potential for
government-business collusion heightens surveillance
and data misuse concerns. Conversely, independent
market economies may offer more individual control
but still face challenges related to corporate data
collection practices. Given the diverse regulatory
systems, legal scholar Strahilevitz argues for the
necessity of a unified framework for privacy law,
stating, "It is time to move aggressively toward the
reunification" (Strahilevitz, 2010, p. 2009).
While information privacy concerns among peers
have been studied, particularly in psychological
contexts like virtual territory and communication
privacy (Zhang et al., 2022), the current
conceptualization overlooks how peers leverage data
flows and inference to impact privacy. Individuals
collecting, integrating, and analyzing peers' data for
secondary or malicious purposes introduce significant
privacy concerns in the digital age. This goes beyond
traditional breaches involving unauthorized or
unforeseen use of personal information by
acquaintances, colleagues, or even strangers in shared
digital spaces. The rise of social media, online forums,
and digital platforms has made it easier for individuals
to collect personal information about peers, often
without their consent. This data can be analyzed to
reveal patterns, preferences, and behaviors, which can
be used for anything from targeted advertising to more
harmful activities like identity theft, stalking, or
manipulation (Stevens et al., 2021). These practices
underscore the urgent need for solid digital privacy
protections and careful consideration of the
information shared online, as data misuse can have
lasting consequences for personal privacy and
security. As technology continues to evolve and
become more integrated into daily life, there is an
increasing need to expand our understanding of
privacy issues, especially as legal frameworks struggle
to keep up with rapid technological advancements.
4.2. Action-oriented focus
The action-oriented approach to information
privacy emphasizes the dynamic management of data
from collection to outcome use. By integrating data
flows and inference, this perspective highlights that
privacy in the digital age depends on how data is
actively handled. Addressing concerns about data
overcollection and focusing on inference through
integration, analysis, and outcome use, this approach offers a comprehensive framework aligned with the fluid nature of digital information. By emphasizing the active phases of the data lifecycle, it redefines the privacy paradigm and provides a more relevant and practical basis for protecting privacy in the digital age.
4.3. A unified conceptualization of information
privacy
Having identified the relevant actions and actors, the next step is to expand the existing framework so that each action intersects with each class of actor: infrastructure, institutions, regulatory systems, and individuals. Table 2 illustrates the comprehensiveness
of the unified model, categorizing privacy concerns
across four levels of actors: infrastructure, institution-
specific, regulatory systems, and individuals. The
table provides a detailed breakdown and definition of
privacy concerns at each level, highlighting the
multifaceted nature of these issues and the
interconnectedness of the factors that influence them.
This comprehensive perspective is essential for
developing effective strategies to protect and promote
privacy in the digital age.
5. Discussion
This unified framework builds on existing
conceptualizations by encompassing critical
components of information privacy, such as data
collection, management (e.g., errors, unauthorized
secondary use), surveillance, and intrusion, as
discussed by Smith et al. (1996), Malhotra et al.
(2004), Hong and Thong (2013), Dinev and Hart
(2004, 2006), and Xu et al. (2012). Moreover, it allows
researchers to extend beyond the individual-institution
relationship by incorporating other actors into
consideration, making it more compatible with Nissenbaum's (2004) model of privacy as contextual integrity,
which emphasizes the importance of context-specific
norms in governing the flow of information and
maintaining privacy by ensuring that data is shared
and used in ways that are consistent with these norms.
Since it accounts for data inference, this unified
conceptualization can also serve as a strong predictor
for the three essential aspects of privacy: identity,
freedom, and protection (Richards, 2021).
This conceptualization allows for a nuanced
understanding of privacy concerns by considering how
different actions and actors interact to influence
privacy outcomes. For instance, integrating data from
multiple sources and using advanced analytics can
lead to new privacy risks that are not adequately
addressed by traditional privacy frameworks. The
reconceptualization of information privacy presented
in this paper addresses several critical concerns in the
existing literature, particularly the lack of focus on the
consequences of how learning algorithms can infer
knowledge from massive data collections (Xu &
Dinev, 2022). Contributing to the multilevel privacy movement, which distinguishes, for example, between "I-privacy" and "we-privacy" (Bélanger & James, 2020), our approach highlights the need to consider the various stakeholders involved in the privacy ecosystem. This
includes individuals, institutions, regulatory
frameworks, and technological infrastructures that
shape privacy practices. By examining privacy
concerns at these multiple levels, we provide a more
holistic view of the challenges and opportunities in
managing information privacy in the digital age.
Furthermore, our framework underscores the
importance of regulatory systems in shaping privacy
norms and practices while providing tangible
attributes of information privacy for evaluating and
updating regulations so that they effectively address the challenges posed by new data practices.
In terms of practical implications, this approach to
conceptualizing information privacy extends beyond
traditional data-flow considerations to include data
inferences and multilevel perspectives. It allows an
institution's privacy governance committee or public
board members to have a comprehensive approach to
reviewing and monitoring organizational practices,
which can help "prevent harmful impacts of learning
algorithms on individuals' autonomy and agency" (Xu
& Dinev, 2022, p. 7). Institutions that directly collect,
integrate, analyze, and use the outcomes of data should
implement robust privacy measures that address not
only the collection and management of data but also
the potential inferences and insights that can be drawn
from it. This involves communicating to users how
their data will be used, including any inferences that
might be made, and employing advanced privacy-
preserving technologies such as differential privacy,
federated learning, and secure multiparty computation
to safeguard data during its lifecycle, from collection
to analysis and inference. Practitioners should adhere to current regulatory frameworks while advocating for updated laws that address the complexities of modern data practices.

Table 2. Proposed unified conceptualization of information privacy

| Actor | Collection (data flows) | Management (data flows) | Integration (data inference) | Analysis (data inference) | Outcome use (data inference) |
|---|---|---|---|---|---|
| Action (definition) | Concern that extensive amounts of data are being collected. | Concern about how data is managed. | Concern about how data from different applications, data stores, and systems are combined. | Concern that extensive amounts of personal data are analyzed and inferred. | Concern about the outcomes of nefarious uses of data and information. |
| Infrastructure | Individuals' concerns about how IT infrastructure is used to collect personal data. | Individuals' concerns about the existing IT infrastructure's ability to govern data. | Individuals' concerns about the IT infrastructure's capability to combine data. | Individuals' concerns about the IT infrastructure's ability to analyze their data to infer new information. | Individuals' concerns about harmful outcomes caused by the IT infrastructure. |
| Institution | Individuals' concerns that organizations collect and store extensive amounts of personally identifiable information in databases. | Individuals' concerns that organizations do not adequately manage/govern the data or comply with laws and regulations. | Individuals' concerns about how much data is combined or merged by organizations. | Individuals' concerns about organizations analyzing extensive data to infer new information. | Individuals' concerns about harmful outcomes of organizational data analyses, such as surveillance, manipulation, and propaganda. |
| Regulatory systems | Individuals' concerns that the regulatory system is ineffective in regulating data collection. | Individuals' concerns that the regulatory system is ineffective in regulating data management. | Individuals' concerns that the regulatory system is ineffective in regulating data integration. | Individuals' concerns that the regulatory system is ineffective in regulating data analyses. | Individuals' concerns that the regulatory system is ineffective in regulating data or data-outcome use. |
| Individuals | Individuals' concerns that others, such as peers or hackers, collect and store extensive data about them. | Individuals' concerns that others do not adequately manage/govern the shared data. | Individuals' concerns about the amount of data combined or merged by others. | Individuals' concerns that others can analyze and infer extensive amounts of information about them. | Individuals' concerns about harmful outcomes when others analyze their data, such as cyberstalking, misjudgment, bias, and privacy invasion. |

Moreover, because this framework
provides a multifaceted view of information privacy,
it can guide the design and development of privacy-
enhancing technologies (PETs), which can help
mitigate privacy risks, enhance data security, and
empower individuals with greater control over their
personal information.
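To make one of the PETs named above concrete, the following is a minimal, purely illustrative sketch of the Laplace mechanism from differential privacy applied to a count query. The function names, the example data, and the epsilon values are our assumptions for illustration; production systems should use a vetted library rather than hand-rolled noise sampling.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of Laplace(0, scale) noise.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count query (Laplace mechanism).

    A count query has L1 sensitivity 1: adding or removing one
    person's record changes the true count by at most 1, so
    Laplace noise with scale = sensitivity / epsilon = 1 / epsilon
    suffices for epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: count individuals over 30 without
# revealing whether any single person is in the data.
ages = [23, 35, 41, 58, 29]
noisy = dp_count(ages, lambda a: a > 30, epsilon=0.5)
```

Smaller epsilon values add more noise and thus stronger privacy; the choice of epsilon is a policy decision, which is precisely where governance committees of the kind discussed above come in.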
In conclusion, this work contributes to the
ongoing discourse on information privacy by
proposing a comprehensive and forward-looking
framework that addresses the multifaceted nature of
privacy concerns in the digital age. Our unified
conceptualization provides a robust foundation for
future research and policy development, ultimately
aiming to enhance the protection of individual privacy
in an increasingly interconnected and data-driven
world.
6. Future research directions
Future research can account for the sophisticated
ways data can be combined, analyzed, managed, and their outcomes used, by moving beyond a sole focus on data flows
to include data inference. This shift is particularly
relevant in emerging technologies such as artificial
intelligence and machine learning, which can
potentially generate significant privacy risks through
inferential analytics (Xu & Dinev, 2022).
Our research opens up several avenues for future
empirical work. By providing a comprehensive
conceptual model of information privacy concerns, we
offer a valuable tool for privacy scholars to investigate
privacy actors that have been understudied in the
literature (Yun et al., 2019). Due to the framework's
extensible and modular nature, it can be employed to
assess the impact of novel technological
advancements and regulatory interventions on privacy
outcomes. More importantly, this unified framework
aids future research in creating comprehensive
measurement items to capture information privacy
concerns more cohesively and holistically. In fact, this
unified conceptualization of information privacy
concerns can help answer the call to develop more
common measurements to be used across studies
(Bélanger & Crossler, 2011, p. 1035).
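To illustrate the framework's modular structure for measurement development, the sketch below enumerates the 4 x 5 actor/action grid of Table 2 and generates one placeholder item stub per cell. The labels and stub wording are our own illustrative assumptions, not a validated instrument; the point is only that the matrix can be traversed systematically so no cell is overlooked.

```python
from itertools import product

# Labels drawn from the unified framework (Table 2); the generated
# stubs are hypothetical placeholders, not validated survey items.
ACTORS = ["infrastructure", "institution", "regulatory system", "individuals"]
ACTIONS = {
    "collection": "data flows",
    "management": "data flows",
    "integration": "data inference",
    "analysis": "data inference",
    "outcome use": "data inference",
}

def concern_grid():
    """Enumerate the 4 x 5 actor/action privacy-concern cells."""
    return [
        {
            "actor": actor,
            "action": action,
            "facet": facet,
            "item_stub": f"Concern about {action} of personal data by the {actor}.",
        }
        for actor, (action, facet) in product(ACTORS, ACTIONS.items())
    ]

cells = concern_grid()
```

Each of the twenty cells could then seed multiple measurement items, which researchers would refine and validate through the usual scale-development procedures.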
As design science in information systems focuses
on creating and evaluating innovative artifacts to solve
real-world problems, a unified framework for examining information privacy, encompassing both actors and actions, can significantly enhance this
approach by providing a comprehensive lens to better
identify and evaluate privacy-related features in
system development. This framework facilitates the
systematic identification of privacy issues across
different interaction levels and data-handling phases,
guiding the development of IT artifacts precisely
tailored to these concerns. At the systems design and
analysis level, this framework can guide system
analysts and development teams in designing and
evaluating information systems that effectively
alleviate user privacy concerns. Legal scholars have suggested that a better conceptualization of privacy could become more relevant to policy by offering guidance on how to weigh and order competing privacy interests when tough choices arise (Pozen, 2016).
This unified conceptualization of information
privacy provides a comprehensive framework for legal
scholarship and regulatory decision-making. Using
this actor/action approach, lawmakers can develop
targeted regulations that address each aspect of
privacy, ensuring vulnerabilities are thoroughly
considered and effectively mitigated in legislation.
7. Conclusion
This paper explores information privacy's
complex and evolving nature by proposing a unified
conceptual framework that integrates data flows, data
inference, and multilevel stakeholder perspectives.
Our research emphasizes the need to update traditional
privacy frameworks in response to technological
advancements. By incorporating an inference
component and adopting a multilevel approach, we
provide a more comprehensive understanding of
privacy concerns that align with contemporary data
practices. The proposed framework highlights the
dynamic interplay between actions and actors, offering
a multifaceted perspective that enhances the
theoretical understanding of information privacy. It
also provides tools for assessing and mitigating
privacy risks in real-world scenarios. Our work
underscores the importance of considering both the
flow and inferential use of data, addressing gaps in the
existing literature, and paving the way for future
research and policy development in information
systems.
Acknowledgements: Truong (Jack) Luu
acknowledges support from the University of
Cincinnati Digital Futures Graduate Student Fellows
Program.
8. References
Alashoor, T., Keil, M., Smith, J., & McConnell, A. R.
(2023). Too Tired and in Too Good of a Mood to Worry
About Privacy: Explaining the Privacy Paradox
Through the Lens of Effort Level in Information
Processing. Information Systems Research, 34(4),
1415-1436.
Angel, M. P., & Calo, R. (2024). Distinguishing privacy law:
a critique of privacy as social taxonomy. Columbia Law
Review, 124(2), 507-562.
Bélanger, F., & Crossler, R. E. (2011). Privacy in the digital
age: a review of information privacy research in
information systems. MIS Quarterly, 35(4), 1017-1041.
Bélanger, F., & James, T. L. (2020). A Theory of Multilevel
Information Privacy Management for the Digital Era.
Information Systems Research, 31(2), 510-536.
Bellia, P. L. (2009). Federalization in Information Privacy
Law. Yale Law Journal, 118(5), 868-900.
Bridgman, P. W. (1927). The logic of modern physics (Vol.
3). Macmillan.
Cormode, G., Procopiuc, C. M., Shen, E., Srivastava, D., &
Yu, T. (2013). Empirical privacy and empirical utility
of anonymized data. 2013 IEEE 29th International
Conference on Data Engineering Workshops
(ICDEW), Australia.
Dinev, T., & Hart, P. (2004). Internet privacy concerns and
their antecedents-measurement validity and a
regression model. Behaviour & Information
Technology, 23(6), 413-422.
Dinev, T., & Hart, P. (2005). Internet privacy concerns and
social awareness as determinants of intention to
transact. International Journal of Electronic
Commerce, 10(2), 7-29.
Dinev, T., & Hart, P. (2006). An extended privacy calculus
model for e-commerce transactions. Information
Systems Research, 17(1), 61-80.
Domingo-Ferrer, J., Sánchez, D., & Blanco-Justicia, A.
(2021). The limits of differential privacy (and its
misuse in data release and machine learning).
Communications of the ACM, 64(7), 33-35.
Favaretto, M., De Clercq, E., & Elger, B. S. (2019). Big Data
and discrimination: perils, promises and solutions. A
systematic review. Journal of Big Data, 6(1), 1-27.
Hong, W., & Thong, J. Y. (2013). Internet privacy concerns:
An integrated conceptualization and four empirical
studies. MIS Quarterly, 37(1), 275-298.
Liu, Z. L., & Wang, X. Q. (2018). How to regulate
individuals' privacy boundaries on social network sites:
A cross-cultural comparison. Information &
Management, 55(8), 1005-1023.
Majeed, A., & Lee, S. (2020). Anonymization techniques for
privacy preserving data publishing: A comprehensive
survey. IEEE Access, 9, 8512-8545.
Malhotra, N. K., Kim, S. S., & Agarwal, J. (2004). Internet
users' information privacy concerns (IUIPC): The
construct, the scale, and a causal model. Information
Systems Research, 15(4), 336-355.
Mingers, J., & Walsham, G. (2010). Toward ethical
information systems: The contribution of discourse
ethics. MIS Quarterly, 34(4), 833-854.
Nissenbaum, H. (2004). Privacy as contextual integrity.
Washington Law Review, 79, 119.
Pavlou, P. A. (2011). State of the information privacy
literature: Where are we now and where should we go?
MIS Quarterly, 35(4), 977-988.
Pozen, D. E. (2016). Privacy-Privacy Tradeoffs. University
of Chicago Law Review, 83(1), 221-247.
Quach, S., Thaichon, P., Martin, K. D., Weaven, S., &
Palmatier, R. W. (2022). Digital technologies: tensions
in privacy and data. Journal of the Academy of
Marketing Science, 50(6), 1299-1323.
Richards, N. (2021). Why privacy matters. Oxford
University Press, Oxford.
Slobogin, C. (2008). Government data mining and the fourth
amendment. The University of Chicago Law Review,
75(1), 317-341.
Smith, H. J., Dinev, T., & Xu, H. (2011). Information
privacy research: an interdisciplinary review. MIS Quarterly, 35(4), 989-1015.
Smith, H. J., Milberg, S. J., & Burke, S. J. (1996).
Information privacy: Measuring individuals' concerns
about organizational practices. MIS Quarterly, 20(2),
167-196.
Solove, D. J. (2005). A taxonomy of privacy. University of
Pennsylvania Law Review, 154, 477.
Stevens, F., Nurse, J. R., & Arief, B. (2021). Cyber stalking,
cyber harassment, and adult mental health: A
systematic review. Cyberpsychology, Behavior, and
Social Networking, 24(6), 367-376.
Stewart, K. A., & Segars, A. H. (2002). An empirical
examination of the concern for information privacy
instrument. Information Systems Research, 13(1), 36-
49.
Strahilevitz, L. J. (2010). Reunifying Privacy Law.
California Law Review, 98(6), 2007-2048.
Wachter, S., & Mittelstadt, B. (2019). A right to reasonable
inferences: re-thinking data protection law in the age of
big data and AI. Columbia Business Law Review, 494.
Warren, S. D., & Brandeis, L. D. (1890). The Right to
Privacy. Harvard Law Review, 4(5), 193-220.
Westin, A. F. (1968). Privacy and freedom (Vol. 25).
Washington and Lee Law Review, VA.
Xu, H., & Dinev, T. (2022). Guest Editorial: Reflections on
the 2021 Impact Award: Why Privacy Still Matters.
Management Information Systems Quarterly, 46(4), 1-
13.
Xu, H., Gupta, S., Rosson, M. B., & Carroll, J. M. (2012).
Measuring mobile users' concerns for information
privacy. Thirty Third International Conference on
Information Systems, Orlando.
Xu, H., & Zhang, N. (2023). An Onto-Epistemological
Analysis of Information Privacy Research. Information
Systems Research (Articles in Advance), 1-13.
Yun, H., Lee, G., & Kim, D. J. (2019). A chronological
review of empirical research on personal information
privacy concerns: An analysis of contexts and research
constructs. Information & Management, 56(4), 570-
601.
Zhang, N., Wang, C., Karahanna, E., & Xu, Y. (2022). Peer
privacy concern: conceptualization and measurement.
MIS Quarterly, 46(1), 491-530.
As concerns about personal information privacy (PIP) continue to grow, an increasing number of studies have empirically investigated the phenomenon. However, researchers are not well informed about the shift of PIP research trends with time. In particular, there is a lack of understanding of what constructs have been studied in what contexts. As a result, researchers may design their study without sufficient guidance. This problem can lead to unproductive efforts in advancing PIP research. Therefore, it is important and timely to review prior PIP research to enhance our understanding of how it has evolved. We are particularly interested in understanding the chronological changes in contexts and research constructs studied. We use a chronological stage model of PIP research we develop, a set of contextual variables identified from prior literature, and the four-party PIP model suggested by Conger et al. (2013) as theoretical foundations to conduct a chronological literature review of empirical PIP concern studies. We find several PIP research trends during the last two decades, such as the quantity of PIP research has drastically increased; the variety of contexts and research constructs being studied has increased substantially; and many constructs have been studied only once while only a few have been repeatedly studied. We also find that the focus of PIP research has shifted from general/unspecified contexts to specific ones. We discuss the contributions of the study and recommendations for future research directions. We propose a fifth party as an emergent player in the ecosystem of PIP and call for future research that investigates it.