Towards an Integration of Individualistic, Networked, and Institutional Approaches to Online
Disclosure and Privacy in a Networked Ecology
Natalya N. Bazarova1
Philipp K. Masur2
1Department of Communication, Cornell University
2Department of Communication, Johannes Gutenberg University Mainz
Author note
Author Contributions: The authors have contributed equally to this work.
Correspondence concerning this article should be addressed to Natalya N. Bazarova, Department
of Communication, Cornell University. E-Mail:
Keywords: disclosure, privacy, networked ecology, information control, horizontal and vertical
dimensions of privacy
Individualistic, networked, and institutional perspectives on disclosure and privacy
Each perspective circumscribes privacy at a particular level of information control
Networked ecologies intersect different levels of information control and access
Control and access are organized along horizontal and vertical dimensions of privacy
In this paper, we review three different approaches to disclosure and privacy: a) an
individualistic approach, which emphasizes an individual’s control over information access and
flow, b) a networked approach focused on information flow in horizontal relations between
people, and c) an institutional approach concerned with public and societal privacy risks from
platforms, providers, and governments. These approaches co-exist largely independently of each
other in privacy and disclosure literature. However, with overlapping public and private spheres
of communication where a presumption of individual agency over personal information is no
longer tenable, we argue for the importance of bridging these perspectives towards a more
multifaceted view on online disclosure and privacy in a networked ecology.
Towards an Integration of Individualistic, Networked, and Institutional Approaches to Online
Disclosure and Privacy in a Networked Ecology
1. Introduction
Recent surveys have shown that most people feel limited control over their personal
information and report concerns about how their information is collected, tracked, and used by
companies and the government [1,2]. This perceived loss of control over personal information in
social media underscores the challenges for individualistic models of disclosure and privacy
grounded in the presumption of individuals’ control over their information (see [61] for a review).
As an act of revealing personal information, self-disclosure has predominantly been
conceptualized at the level of individuals, with the main agency over data access and flow
ascribed to the discloser [39]. This individual-centered approach has recently been called into
question by the networked perspective on privacy, which shifts the locus of information control
from individual agency to collective disclosure decisions that affect each person’s privacy in a
network [10*,11]. While this is an important development towards understanding disclosure and
privacy from a collective point of view, people’s concerns about corporations’ and governments’
control over their data suggest a need for an even broader perspective on privacy that would
account for both network and institutional forces at play [12*–14]. In an effort to reconcile the
individual-centered perspective with shifting personal information controls, we review the
individualistic, networked, and institutional approaches towards a more integrative view of
online disclosure and privacy in a networked ecology.
2. Individualistic theories of disclosure
A majority of individualistic approaches to disclosure and privacy emphasize an
individual’s control of personal information in social interactions. Following Altman’s [15]
dialectical approach, privacy can be regarded as “the selective control of access to the self” (p.
18), where an information holder engages in interpersonal boundary control through the use of
verbal, non-verbal, environmental, and culturally defined behaviors and practices. Accordingly,
individualistic approaches examine privacy as an individual’s choice and control of personal
information in social interactions [16–20]. Most of them emphasize privacy and disclosure
management as a dynamic response to a situation to achieve a desired level of privacy by sharing
or withholding personal information [6**,21,22]. A transfer of information to another context
violates its contextual integrity because of the different normative values attached to different
social contexts [23].
Disclosure, as “the process of making the self known to other persons” [24, p. 91],
presents itself as a dialectical opposite to privacy in that it regulates a selective access to the self
in social interactions [6**,25*]. Studies of disclosure examine it as a volitional act for which
people choose an appropriate level of intimacy and breadth based on individual and situational
factors that influence their perceptions of disclosure rewards and risks [26–30]. Disclosure is, on
the one hand, a form of privacy regulation and information control; on the other hand, privacy is
often not an end goal of disclosure but rather a precondition for it [6**,31,32], enabling other
communication and relational goals, such as receiving social support or venting.
In sum, individualistic approaches to privacy and disclosure are primarily guided by the
“presumption of individual agency”, that is, people’s ability to control the access and flow of
information [25*]. This ability, however, is often undermined in networked environments where
other actors (other users, platforms, or governments) have the power to limit and override
individuals’ control over access to and flow of personal information.
3. Networked approaches to disclosure and privacy
Many scholars have emphasized that individual control over personal information in
networked environments is no longer feasible and privacy must be understood from an
exclusively networked point of view [10*,11,33,34]. The networked nature of online disclosure
becomes palpable through examples of typical communication dynamics on social media. For
example, even if personal information is disclosed to only a few people, it is possible that one of
the recipients shares the information with originally unintended audiences. Because of the
communication practices on social media (i.e., sharing, liking, and commenting, but also editing
and recontextualizing) and a lack of privacy-by-default settings, such unintended dissemination
happens fast and frequently, allowing information to reach large and unintended audiences.
Second, on social media, personal information is not only disclosed by users themselves. Often, a
user discloses information about another person without the person’s consent (e.g., by uploading
photos with a group of people, tagging people in posts, or simply revealing information about
someone else). These dynamics strongly increase the potential visibility of personal content [35],
lead to the collapse of traditionally distinct contexts [36–38], and cause the blurring of public and
private spheres.
In response to these dynamics, networked approaches to privacy and disclosure highlight
the necessity to engage in collaborative privacy management strategies. Communication privacy
management theory [22] provides a theoretical framework for understanding such practices.
According to this theory, when a discloser shares information with others, those others become
co-owners of this information [22]. Individuals, therefore, establish rules and norms that
determine the boundaries in which the information is allowed to flow. Such rules are based on
shared social norms and interpersonal trust. They determine the connection between information
co-owners (linkage rules), the openness or closeness of the boundary (permeability rules), and
the rights and responsibilities of each boundary member (ownership rules).
Establishing and protecting such boundaries requires understanding the context in which
one is communicating, the people and parties involved, and the norms and rules that determine
the information flow in this context [11,39]. To navigate the complexities of networked
environments, disclosers, especially teens, engage in creative practices that allow them to retain
some control over personal information and restrict co-ownership to selected parties, for
example by obscuring the meaning of disclosures through shared symbols, cues, references, or
language, as well as by creating complex norms about what is and what is not acceptable to
communicate openly [40].
Therefore, networked approaches to privacy and self-disclosure recognize the
codependency of individuals in safeguarding their privacy in online environments. The available
level of privacy no longer depends only on individual, independent control of access and
regulation choices, but on the behavior and choices of other users. Online privacy management
thus is based on a shared responsibility to protect negotiated boundaries [22,33].
4. Institutional approaches to disclosure and privacy
Because of the characteristics of digital information and the technological infrastructure of
online environments, online service providers and institutions gain access to the personal
information that users themselves or members of their network share about them, as well as to
other personal data (e.g., information created during a registration process, network information
about links between individual users) and metadata (e.g., activity records, including browsing
behaviors and deleted content). Whereas the former is knowingly and intentionally produced (cf.
the definition of self-disclosure), the latter is generated and stored automatically through the use
of online services. The combination of personal data and metadata provides fine-grained insights
into the course of an individual's (online) life, as well as mechanisms for data analysis at scale,
both for commercial interests and for mass surveillance conducted by intelligence agencies [41–43].
These vertical dynamics pose several challenges to individual privacy. First, identifying the
risks associated with vertical privacy invasions is difficult because many users are unaware of
covert data collection practices and pay limited attention to often convoluted privacy policies
and platform terms of use [44]. Second, individual protection against vertical privacy risks is
hardly feasible. Although some control can potentially be exerted through comparatively
sophisticated data protection strategies (e.g., using Tor, encryption, obfuscation, or
pseudonymization), most people do not have the knowledge and skills to implement them
[45–48]. Third, data shared by our online friends can be better predictors of our future behavior than
our own data [49*,50]. Through connections between our friends, online service providers (e.g.,
SNSs) create so-called “shadow profiles” of non-users [13].
Not only can individual and networked disclosures be harvested by third parties for
commercial and surveillance purposes; big data analytics and algorithmic profiling also pose
another privacy control challenge by limiting individuals’ abilities “to self-define, and thus claim
control and agency, over their social trajectory” [12*, p. 591]. Both commercial and institutional
surveillance practices are based on algorithmic big data processes that put individuals in fleeting,
ever-changing, and abstract categories, which may not only be very different from how
individuals would define themselves and would want to be viewed, but which individuals are
completely unable to challenge. The continuous process of classifying individuals is not (only)
based on information that the individuals (or their friends) have shared or on metadata that they
have produced, but rather relies on the constant accumulation of person-related data and
metadata from all users of various platforms. This implies that even if an individual does
everything to protect herself against horizontal and vertical privacy invasions (e.g., implementing
encryption, anonymization, etc.) and follows the established rules within her privacy boundaries
(the networked perspective), and even if all those measures actually work, her privacy can still
be invaded, because data collected in completely different contexts and situations about unrelated
individuals nonetheless allows inferring and predicting information about her [51,52].
5. Towards a more integrative view on disclosure and privacy in a networked ecology
As outlined above, there are multiple agents that have access to or exert control over
users’ personal information in online environments. Studying a disclosure-related phenomenon
from only one of the discussed perspectives circumscribes it around a particular level of
information control. However, whenever corporate and government interests intersect with
individuals’ private pursuits, privacy concerns span the boundary between public and
private and should be studied with regard to “relationships between individuals and
corporate or government organizations as well as to relations among individuals” [53, p. 213].
This “kaleidoscope of privacy and surveillance” [54, p. 32] calls for a conceptual bridging of
individualistic, networked, and institutional approaches to disclosure and privacy, instead of
treating these levels of privacy concerns as independent of each other.
As the initial step towards this bridging, we organize information access (direct and
indirect) and control (primary, proximal, and distal) along horizontal and vertical dimensions
(see Figure 1). For access, information disclosed in networked environments is directly
accessible horizontally to the intended recipients (i.e., other users; direct horizontal access) and
vertically to platform providers (direct vertical access). However, intended users may share this
information with unintended audiences (indirect horizontal access), and platform providers may
also share this information with collateral third parties, such as other commercial actors or
governments (indirect vertical access).
Figure 1. Disclosure and privacy in a networked ecology.
For information control, we distinguish between primary, proximal, and distal
information control holders. Initial disclosers (either individuals themselves or members of their
network who share information about them) hold a primary control by exercising individual
privacy boundary management on the horizontal level (i.e., deciding what to share and to whom)
and individual data protection on the vertical level (i.e., using sophisticated data protection
strategies such as encryption, anonymization, obfuscation). Understanding how individuals
account for both vertical and horizontal risks (e.g., [6**,55,56]), and how shared rules and norms
factor into their privacy boundary management, especially when they disclose information about
someone else, is a step towards bridging the individualistic view with networked and institutional
perspectives on disclosure and privacy.
On the horizontal level, the intended information receivers become information co-
owners and thus have proximal control over the shared data, which is regulated by privacy rules
and norms that determine collective privacy boundary management (see Figure 1). Networked
approaches [11,22] serve well to explain these collective privacy management practices, but
more work needs to be done to explain dissemination of collective privacy norms and rules in a
network, as well as their sensemaking and internalization by proximal information holders.
These approaches also need to account for how perceptions of vertical privacy risks may shape
collective rules, norms, and behaviors with regard to information flow.
On the vertical level, platforms or third-parties with whom platforms share this
information (collateral third parties) gain distal control of user data. Although users authorize
their access through companies’ terms of use in exchange for using company services, platforms,
and even more so third parties, are not the intended recipients of disclosures. Being at least two
steps removed from distal information-control parties, primary information holders have limited
awareness of and control over their privacy at the vertical level and, therefore, can feel digitally
resigned and helpless [57,58]. As discussed earlier and shown in Figure 1, preventing such
privacy invasions is challenging and in many cases impossible for the individual. Vertical control
challenges underscore the importance of understanding privacy as a social and collective value
[23,59]. This requires recognizing the codependency of privacy in networked environments
and that the actions of one person may produce privacy losses for others [50]. The ability of
individuals or collectives to exert agency over their information is severely limited when
information related to any individual (not only those in their own social network) is used to make
inferences about an individual’s personality and behavior, even in the absence of disclosure from
primary information holders. Against such invasions, the only remaining privacy protection
mechanism is collective data protection, such as challenging, at a societal level (e.g., through
democratic deliberation), the conditions that have created the need for privacy protection and
information control in the first place [12*,14,60*].
Bringing all three perspectives together thus allows us to distinguish between different
types of information control and access, and individual and collective privacy risks associated
with them. It further acknowledges the intertwined nature of vertical and horizontal dynamics in
networked environments in which platform providers not only exert information control and pose
risks for individuals’ privacy, but also provide spaces in which communication and thus
horizontal privacy regulation takes place.
[1] B. Auxier, L. Rainie, M. Anderson, A. Perrin, M. Kumar, E. Turner, Americans and
privacy: Concerned, confused and feeling lack of control over their personal information,
Pew Research Center, 2019.
[2] European Commission, Special Eurobarometer 487a: The General Data Protection
Regulation, 2019.
[3] N. Kashian, J. Jang, S.Y. Shin, Y. Dai, J.B. Walther, Self-disclosure and liking in
computer-mediated communication, Comput. Hum. Behav. 71 (2017) 275–283.
[4] N.C. Krämer, J. Schäwel, Mastering the challenge of balancing self-disclosure and privacy
in social media, Curr. Opin. Psychol. 31 (2020) 67–71.
[5] R. Lin, S. Utz, Self-disclosure on SNS: Do disclosure intimacy and narrativity influence
interpersonal closeness and social attraction?, Comput. Hum. Behav. 70 (2017) 426–436.
[6] **P.K. Masur, Situational privacy and self-disclosure: Communication processes in online
environments, Springer, Cham, Switzerland, 2018.
The book synthesizes independently developed theories of privacy and self-disclosure
into one comprehensive framework that allows the antecedents of self-disclosure to be
identified and systematized into personal and environmental as well as non-situational and
situational factors. It further investigates this theory of situational privacy and self-
disclosure in the context of smartphone-based communication.
[7] S. Sannon, B. Stoll, D. DiFranzo, M.F. Jung, N.N. Bazarova, “I just shared your responses”:
Extending Communication Privacy Management Theory to Interactions with
Conversational Agents, Proc. ACM Hum.-Comput. Interact. 4 (2020) 1–18.
[8] S. Sannon, E.L. Murnane, N.N. Bazarova, G. Gay, “I was really, really nervous posting it”:
Communicating about Invisible Chronic Illnesses across Social Media Platforms, in: Proc.
2019 CHI Conf. Hum. Factors Comput. Syst., 2019: pp. 1–13.
[9] M. Tsay-Vogel, J. Shanahan, N. Signorielli, Social media cultivating perceptions of
privacy: A 5-year analysis of privacy attitudes and self-disclosure behaviors among
Facebook users, New Media Soc. 20 (2018) 141–161.
[10] *S. Barocas, K. Levy, Privacy Dependencies, Wash. Law Rev. Forthcom. (2019).
This article outlines three types of privacy dependencies that determine how our privacy
depends on other people’s decisions and behaviors: individuals’ social ties, similarities to
others, and differences from other people. Each type of dependency is characterized by
distinct mechanisms, values, and normative concerns that must be accounted for when
developing technical interventions and regulatory solutions for privacy protection.
[11] A.E. Marwick, danah boyd, Networked privacy: How teenagers negotiate context in social
media, New Media Soc. (2014).
[12] *L. Baruh, M. Popescu, Big data analytics and the limits of privacy self-management, New
Media Soc. 19 (2017) 579–596.
This article examines how big data analytics and predictive algorithms used ubiquitously
for commercial purposes normalize privacy invasions and can accentuate social
inequalities. The algorithmic social sorting made possible through pervasive surveillance
and big data analytics “thin-slices” individuals into social categories and profiles, which
drastically limit individuals’ agency, control, and choices. The article argues for the
importance of acknowledging the collective and social dimensions of privacy as an
alternative to the individual-centric “notice and choice” privacy management framework in
today’s digital environment.
[13] D. Garcia, Privacy beyond the individual, Nat. Hum. Behav. 3 (2019) 112–113.
[14] M. Popescu, L. Baruh, Privacy as Cultural Choice and Resistance in the Age of
Recommender Systems, in: Routledge Handb. Digit. Writ. Rhetor., Routledge, 2018: pp.
[15] I. Altman, The environment and social behavior: Privacy, personal space, territory, and
crowding, Wadsworth, Belmont, CA, 1975.
[16] A. Acquisti, L. Brandimarte, G. Loewenstein, Privacy and human behavior in the age of
information, Science. 347 (2015) 509–514.
[17] N. Bol, T. Dienlin, S. Kruikemeier, M. Sax, S.C. Boerman, J. Strycharz, N. Helberger, C.H.
De Vreese, Understanding the effects of personalization as a privacy calculus: analyzing
self-disclosure across health, news, and commerce contexts, J. Comput.-Mediat. Commun.
23 (2018) 370–388.
[18] S. Trepte, L. Reinecke, N.B. Ellison, O. Quiring, M.Z. Yao, M. Ziegele, A cross-cultural
perspective on the privacy calculus, Soc. Media Soc. 3 (2017) 2056305116688035.
[19] P.J. Wisniewski, B.P. Knijnenburg, H.R. Lipford, Making privacy personal: Profiling social
network users to inform privacy education and nudging, Int. J. Hum.-Comput. Stud. 98
(2017) 95–108.
[20] D. Yang, Z. Yao, R. Kraut, Self-disclosure and channel difference in online health support
groups, in: Elev. Int. AAAI Conf. Web Soc. Media, 2017.
[21] L. Palen, P. Dourish, Unpacking “privacy” for a networked world, in: Proc. SIGCHI Conf.
Hum. Factors Comput. Syst., Association for Computing Machinery, Ft. Lauderdale,
Florida, USA, 2003: pp. 129–136.
[22] S. Petronio, Boundaries of privacy: Dialectics of disclosure, State University of New York
Press, Albany, 2002.
[23] H.F. Nissenbaum, Privacy in context: Technology, policy, and the integrity of social life,
Stanford Law Books, Stanford, 2010.
[24] S.M. Jourard, P. Lasakow, Some factors in self-disclosure, J. Abnorm. Soc. Psychol. 56
(1958) 91–98.
[25] *J.L. Crowley, A framework of relational information control: a review and extension of
information control research in interpersonal contexts, Commun. Theory. 27 (2017) 202
This paper synthesizes literature on information control and outlines a new framework of
information control in interpersonal relationships. The proposed framework offers a
typology of implicit and explicit forms of information control, addresses antecedents,
outcomes, and contexts of information control, and accounts for the perspectives of both the
sender and the target.
[26] N. Andalibi, A. Forte, Announcing pregnancy loss on Facebook: A decision-making
framework for stigmatized disclosures on identified social network sites, in: Proc. 2018
CHI Conf. Hum. Factors Comput. Syst., 2018: pp. 1–14.
[27] N.N. Bazarova, Y.H. Choi, Self-disclosure in social media: Extending the functional
approach to disclosure motivations and characteristics on social network sites, J. Commun.
64 (2014) 635–657.
[28] H. Krasnova, S. Spiekermann, K. Koroleva, T. Hildebrand, Online social networks: Why
we disclose, J. Inf. Technol. 25 (2010) 109–125.
[29] J. Omarzu, A disclosure decision model: Determining how and when individuals will self-
disclose, Personal. Soc. Psychol. Rev. 4 (2000) 174–185.
[30] E.L. Spottswood, J.T. Hancock, Should I share that? Prompting social norms that influence
privacy behaviors on a social networking site, J. Comput.-Mediat. Commun. 22 (2017) 55
[31] C.A. Johnson, Privacy as personal control, Man-Environ. Interact. Eval. Appl. Part. 2
(1974) 83–100.
[32] S. Trepte, P.K. Masur, Need for privacy, in: Zeigler-Hill, V., Shakelford, T. K. (Ed.),
Encycl. Personal. Individ. Differ., Springer, London, 2017.
[33] R. De Wolf, Contextualizing how teens manage personal and interpersonal privacy on
social media, New Media Soc. (2019) 1461444819876570.
[34] R. De Wolf, K. Willaert, J. Pierson, Managing privacy boundaries together: exploring
individual and group privacy management strategies in Facebook, Comput. Hum. Behav.
35 (2014) 444–454.
[35] d. boyd, Social network sites as networked publics: Affordances, dynamics, and
implications, in: Z. Papacharissi (Ed.), Networked Self Identity Community Cult. Soc.
Netw. Sites, Routledge, New York, NY, 2011: pp. 39–58.
[36] d. boyd, Taken out of context. American teen sociality in networked publics: Dissertation,
University of California, Berkeley, CA, 2008.
[37] J. Vitak, The impact of context collapse and privacy on social network site disclosures, J.
Broadcast. Electron. Media. 56 (2012) 451–470.
[38] A.F. Zillich, K.F. Müller, Norms as Regulating Factors for Self-Disclosure in a Collapsed
Context: Norm Orientation Among Referent Others on Facebook, Int. J. Commun. 13
(2019) 20.
[39] A.E. Marwick, danah boyd, Understanding Privacy at the Margins, Int. J. Commun. 12
(2018) 9.
[40] E. Oolo, A. Siibak, Performing for one’s imagined audience: Social steganography and
other privacy strategies of Estonian teens on networked publics, Cyberpsychology J.
Psychosoc. Res. Cyberspace. 7 (2013).
[41] G. Greenwald, No Place to Hide: Edward Snowden, the NSA and the surveillance state,
Hamish Hamilton, 2014.
[42] M. Popescu, L. Baruh, M. Popescu, L. Baruh, P. Messaris, L. Humphreys, Consumer
surveillance and distributive privacy harms in the age of big data, Digit. Media Transform.
Hum. Commun. (2017) 313327.
[43] E. Snowden, Permanent Record, 1st Edition, Metropolitan Books, New York, 2019.
[44] N. Steinfeld, “I agree to the terms and conditions”: (How) do users read privacy policies
online? An eye-tracking experiment, Comput. Hum. Behav. 55 (2016) 992–1000.
[45] L. Baruh, E. Secinti, Z. Cemalcilar, Online privacy concerns and privacy management: A
Meta-Analytical Review, J. Commun. (2017).
[46] M. Büchi, N. Just, M. Latzer, Caring is not enough: The importance of Internet skills for
online privacy protection, Inf. Commun. Soc. 20 (2016) 1261–1278.
[47] P.K. Masur, D. Teutsch, S. Trepte, Entwicklung und Validierung der Online-
Privatheitskompetenzskala (OPLIS), Diagnostica. 63 (2017) 256–268.
[48] S. Trepte, D. Teutsch, P.K. Masur, C. Eicher, M. Fischer, A. Hennhöfer, F. Lind, Do People
Know About Privacy and Data Protection Strategies? Towards the “Online Privacy Literacy
Scale” (OPLIS), in: S. Gutwirth, R. Leenes, P. de Hert (Eds.), Reforming Eur. Data Prot.
Law, Springer Netherlands, Dordrecht, 2015: pp. 333–365.
[49] *J.P. Bagrow, X. Liu, L. Mitchell, Information flow reveals prediction limits in online
social activity, Nat. Hum. Behav. 3 (2019) 122–128.
This paper provides empirical evidence that personal information about the activities and
interests of individuals can be accurately predicted from their social ties in online social
networks. Online social networks have embedded information about individuals, even if
they are not on the platform themselves.
[50] E. Sarigol, D. Garcia, F. Schweitzer, Online privacy as a collective phenomenon, in: Proc.
Second ACM Conf. Online Soc. Netw., Association for Computing Machinery, Dublin,
Ireland, 2014: pp. 95–106.
[51] M. Mann, T. Matzner, Challenging algorithmic profiling: The limits of data protection and
anti-discrimination in responding to emergent discrimination, Big Data Soc. 6 (2019)
[52] T. Matzner, Why privacy is not enough – privacy in the context of “ubiquitous computing”
and “big data,” J. Inf. Commun. Ethics Soc. (2014).
[53] P.M. Regan, Legislating privacy: Technology, social values, and public policy., University
of North Carolina Press, Chapel Hill, NC, 1995.
[54] G.T. Marx, Coming to Terms: The Kaleidoscope of Privacy and Surveillance, in: Soc.
Dimens. Priv. Interdiscip. Perspect., Cambridge University Press, Cambridge, MA, 2015:
pp. 32–49.
[55] A.E. Marwick, E. Hargittai, Nothing to hide, nothing to lose? Incentives and disincentives
to sharing information with institutions online, Inf. Commun. Soc. 22 (2019) 1697–1713.
[56] H. Xu, The effects of self-construal and perceived control on privacy concerns, ICIS 2007
Proc. (2007) 125.
[57] H. Choi, J. Park, Y. Jung, The role of privacy fatigue in online privacy behavior, Comput.
Hum. Behav. 81 (2018) 42–51.
[58] N.A. Draper, J. Turow, The corporate cultivation of digital resignation, New Media Soc. 21
(2019) 1824–1839.
[59] P.M. Regan, Privacy and the common good: revisited, in: B. Roessler, D. Mokrosinska
(Eds.), Soc. Dimens. Priv. Interdiscip. Perspect., Cambridge University Press, 2015: pp.
[60] *P.K. Masur, How online privacy literacy facilitates self-data protection and supports
informational self-determination in the age of information, Media Commun. (2020).
The paper proposes a comprehensive model of online privacy literacy that encompasses
factual privacy knowledge, privacy-related reflection abilities, privacy and data protection
skills, as well as critical privacy literacy. It argues that such a literacy not only empowers
individuals to protect themselves against privacy intrusions, but also allows them to
challenge privacy-invasive status quo.
[61] N.N. Bazarova, Online disclosure, in: C.R. Berger, M.E. Roloff, S.R. Wilson, J.P. Dillard,
J. Caughlin, D. Solomon (Eds.), The International Encyclopedia of Interpersonal
Communication, 2015. 10.1002/9781118540190.wbeic251
... This line of research has been embellished with ideas such as context collapse (Vitak 2012), imagined audiences (Litt 2012), and an emphasis on the interdependency of privacy behaviors (Baruh and Popescu 2017;Marwick and boyd 2014). Increasingly, scholars call to differentiate between privacy's social (i.e., interpersonal or horizontal), and institutional (i.e., individual-to-institution or vertical) dimensions (Masur 2018;Quinn, Epstein and Moon 2019) and to distinguish among primary, proximal, and distal access and use of data (Bazarova and Masur 2020). The goal of the CPR framework is both to accommodate the use of a variety of privacy conceptualizations and enable systematic ways to scrutinize privacy conceptualizations across structural settings. ...
... Specific economic environments place different premiums on private information and provide alternative incentives for protecting, invading, and exploiting individual privacy. Market structures may thereby shape interactions between individuals who use certain services, companies which provide those services, and regulators who aim to align information flows with data protection law (Bazarova and Masur 2020). Data brokers and the provision of financial incentives for privacy-compromising internet use (e.g., Facebook Zero) have accentuated privacy's economic underpinnings. ...
The ways in which privacy is defined, perceived, and enacted are contingent on cultural, social, political, economic, and technological structures. Privacy research, however, is often conducted in settings that do not account for variations in how privacy is perceived and enacted. A comparative perspective explicitly addresses this shortcoming by requiring the contextualization of privacy through investigating structural similarities and differences. This paper outlines a comparative privacy research framework, which proposes five interrelated structures (cultural, social, political, economic, and technological) as fruitful units of comparison and disentangles how these structures affect and interact with privacy processes at the micro-, meso-, and macro levels. We conclude by proposing a comparative privacy research agenda, which acknowledges the embeddedness of privacy in such structural settings, and informs efforts to address privacy as a valued outcome through policy formation, education, and research.
... Previous studies have often focused primarily on how users manage their personal privacy boundaries on an individual level on Facebook and/or across different SNSs [14]. Such 'individualistic' approaches, where privacy management is considered an individual task, have increasingly been found wanting, though, as they limit their gaze to individual responsibility for controlling private information and managing privacy (for an overview see [42]). In line with this, from a CPM perspective, information shared through SNSs in general, and on Facebook in particular, is not and cannot be owned and controlled solely by an individual: once information is disclosed, individual privacy boundaries expand into collective privacy boundaries, and the information becomes co-owned with those sharing the boundaries. ...
... A growing body of work on SNS environments proposes that as privacy boundaries are collectively shared and continue to expand across different levels, it is increasingly necessary to develop online and offline collaboration, coordination, and negotiation among users who own and co-own the information [37,42,44,45,47,52–56]. These studies also argue that such management of information disclosure and privacy boundaries requires an understanding of the context, including audiences present and privacy norms and values [9,34,37]. ...
Conference Paper
This paper qualitatively examines how members of a large private Facebook group view the risks of information disclosure to their privacy and the strategies they employ to navigate and manage those risks. The paper adds to an emerging interest in how privacy is managed collectively and within dynamic large groups, thus moving beyond established knowledge of privacy management on individual and small-scale levels. The work builds on semi-structured interviews with 20 members of a private Facebook group and draws on Communication Privacy Management theory. The study shows how privacy management practices are enacted at individual, intragroup, and group levels. Findings show that participants associate very high risks with sharing private information in the group, partly because it consists of a mix of known others and strangers, who are potentially geographically co-located. They adopt several strategies for managing and protecting their privacy at all three levels. The risks associated with context, time, and spatial collapse of the imagined audience are identified as important to how participants experience information disclosure in the group. The paper concludes by identifying some practical implications that serve as a call for developers to design privacy tools that support dynamic groups’ privacy challenges and needs. Pre-print available at:
... The widespread success of visual social network sites (SNS) like Instagram or TikTok emphasizes how interpersonal constraints impact the privacy calculus through practices like responses to or re-sharing of other users' pictures or videos. The exposure of a third party's content to a new, and possibly unintended, audience (Bazarova & Masur, 2020) brings about risks and benefits impossible for the original creator to evaluate (De Wolf, 2020). An example of this behavior in close relationships is "sharenting", a phenomenon where parents share children-related content on social media (Blum-Ross & Livingstone, 2017; Ranzini et al., 2020; see chapters 16 and 17 in this Handbook by Walrave on children, adolescents and privacy; Walrave 2023a, 2023b), projecting today's risk/benefit evaluations onto tomorrow's adults. ...
In this chapter, we discuss the concept of privacy cynicism as a cognitive coping mechanism for the complex privacy landscape users are confronted with in digital societies. We situate the development of the concept within the privacy paradox and privacy calculus literature, offer a definition, and explain its four dimensions (mistrust, powerlessness, uncertainty, resignation). Since privacy cynicism is adjacent to but distinct from recently introduced concepts, we contrast it with privacy apathy, surveillance realism, privacy fatigue, and privacy helplessness. We follow this discussion with a contextualization of privacy cynicism within existing constraints that reduce user agency and foster privacy cynicism. The chapter concludes with a forward-looking agenda for future research on the topic that includes conceptual clarifications, the identification of salient antecedents and outcomes, contextually situated and comparative work, as well as studies into how best to address privacy cynicism from a top-down policy perspective or a bottom-up resistance and repair perspective.
... Understanding privacy from a social and, hence, networked perspective makes sense (Bazarova & Masur, 2020). First, the privacy behavior of online users is strongly affected by social norms and behaviors of others. ...
Full-text available
Privacy is a hotly debated topic in academia and society. The digitalization of our world has had enormous implications for our privacy. Some researchers and public figures agree that privacy has changed substantially, that we are living in a post-privacy world, and that we need to address privacy differently. Conversely, others maintain that privacy remains a relevant concept in our society, and that, although facets and degrees of privacy change, the conceptual core and societal relevance remain unchanged and intact. In this paper, we discuss the current state and future of privacy, presenting two opposing stances on four central questions: Has privacy changed? Is privacy dead? Have we lost control over our own privacy? How should we react? With this dialogue we hope to provide an overview of current positions on privacy by presenting divergent lines of reasoning and thinking, while outlining potential paths forward.
... Studies have frequently connected the dimension of individualism-collectivism (i.e., the degree to which individuals in a society are integrated by individual identity or group identity) to self-disclosure, given its inherent link with self-construal (Oghazi et al., 2020). Individualism was found to be associated with greater amounts of self-disclosure, whereas collectivism was associated with higher disclosure depth (Bazarova & Masur, 2020). ...
Self-disclosure in social media and psychological well-being have been theorized to mutually influence each other. The vibrant research on this issue, however, presents mixed results, calling for a synthesis of the empirical evidence. To this end, we conducted a meta-analysis with 38 empirical studies to systematically examine the nature of the relationship between social media self-disclosure and psychological well-being. We adopted a multidimensional perspective of self-disclosure to scrutinize how the quantity (amount and depth) and quality (intent, valence, and honesty) dimensions of self-disclosure were associated with psychological well-being. The results indicated that valence and honesty of self-disclosure were moderately and positively associated with psychological well-being, but the quantity of self-disclosure was not significantly associated with psychological well-being. Participants’ gender, age, and cultural context of the studies significantly moderated the associations between some dimensions of self-disclosure and psychological well-being. Based on the meta-analysis results, we reassessed theoretical claims on self-disclosure in social media and suggested directions for future research.
... Understanding privacy from a networked perspective makes sense (Bazarova & Masur, 2020). First, the privacy behavior of online users is strongly affected by social norms and behaviors of others. ...
Full-text available
Privacy is a hotly debated topic in academia and society. The digitalization of our world has had enormous implications for privacy. Many researchers and public figures agree that privacy has changed substantially, that we are living in a post-privacy world, and that we need to address privacy differently. Others maintain that privacy remains a relevant concept – although facets and degrees of privacy have changed, its conceptual core and societal relevance have stayed the same. In this paper, we discuss the state and future of privacy, presenting two opposing stances on four central questions. Has privacy changed? Is privacy dead? Have we lost control over privacy? How should we react? With this dialogue we provide an overview of current positions on privacy, discussing divergent lines of reasoning and thinking, while also suggesting potential ways to move forward.
People’s perception of privacy can primarily be directed to themselves or to the value of privacy for society. Likewise, privacy protection can repel both individual and collective privacy threats. Focusing on this distinction, the present article examines Internet users’ privacy protection behaviors in relation to individual privacy concerns and their perceived collective value of privacy over time. We conducted a longitudinal panel study with three measurement points (N = 1790) to investigate relations between and within persons. The results of a random-intercept cross-lagged panel model revealed positive relations between the perceived value of privacy, privacy concerns, and privacy protection between persons. At the within-person level, only a temporal increase in the perceived value of privacy was related to increased protection behaviors. This suggests that individual privacy concerns are not as important for temporal protection as assumed, but that a recognition of collective privacy may temporarily change people’s privacy behavior.
Full-text available
At present, 87% of adolescents (aged 12-15 years) report using social networking sites (SNS; Ofcom, 2021). Research predominantly highlights the risks of SNS use (e.g., cyberbullying); yet SNS also presents potential benefits (e.g., enhancing social relationships). This study aims to explore adolescent perceptions of the benefits of SNS use and whether risk concern may predict these. Adolescents (N = 342; 53.3% female; M = 13.92, SD = 1.35) completed two measures: sorting items about positive SNS use and an adapted SNS risk concern scale (Buchanan et al., 2007). Findings suggest females’ SNS risk concern positively predicted perceptions of disclosing to family online, whilst older females viewed this less favourably. Also, both males and females who viewed social capital positively viewed social comparison positively, and vice versa.
The social distancing and lockdown measures enacted to address the COVID-19 pandemic entailed an unprecedented shift to digitally-mediated communication. The move to remote learning at schools was one such important change. Drawing on privacy and STS literature, we investigate the role of different privacy cultures during the initial moment of turbulence, when schools had to quickly adapt to remote teaching. To this end, we conducted semi-structured interviews with teachers in Israel and Germany. Our interviews carved out three distinct phases: a moment of turbulence, a period of negotiation, and a phase of temporary closure, leading to the dominance of Zoom and Google Classroom in Israel, and government-mandated open-source tools for German teachers. These different pathways are shaped by considerations of vertical privacy among German teachers, and the absence of such considerations on the part of Israeli teachers.
Full-text available
Conversational agents are increasingly becoming integrated into everyday technologies and can collect large amounts of data about users. As these agents mimic interpersonal interactions, we draw on communication privacy management theory to explore people's privacy expectations with conversational agents. We conducted a 3 × 3 factorial experiment in which we manipulated agents' social interactivity and data sharing practices to understand how these factors influence people's judgments about potential privacy violations and their evaluations of agents. Participants perceived agents that shared response data with advertisers more negatively than agents that shared such data with only their companies; perceptions of privacy violations did not differ between agents that shared data with their companies and agents that did not share information at all. Participants also perceived the socially interactive agent's sharing practices less negatively than those of the other agents, highlighting a potential privacy vulnerability that users are exposed to in interactions with socially interactive conversational agents.
Full-text available
The potential for biases being built into algorithms has been known for some time (e.g., Friedman and Nissenbaum, 1996), yet literature has only recently demonstrated the ways algorithmic profiling can result in social sorting and harm marginalised groups (e.g., Browne, 2015; Eubanks, 2018; Noble, 2018). We contend that with increased algorithmic complexity, biases will become more sophisticated and difficult to identify, control for, or contest. Our argument has four steps: first, we show how harnessing algorithms means that data gathered at a particular place and time relating to specific persons, can be used to build group models applied in different contexts to different persons. Thus, privacy and data protection rights, with their focus on individuals (Coll, 2014; Parsons, 2015), do not protect from the discriminatory potential of algorithmic profiling. Second, we explore the idea that anti-discrimination regulation may be more promising, but acknowledge limitations. Third, we argue that in order to harness anti-discrimination regulation, it needs to confront emergent forms of discrimination or risk creating new invisibilities, including invisibility from existing safeguards. Finally, we outline suggestions to address emergent forms of discrimination and exclusionary invisibilities via intersectional and post-colonial analysis.
Full-text available
Users face the challenge of balancing the tension between disclosing and concealing personal information on social network sites. We argue that users handle this challenge by collectively establishing norms. Applying a focus group methodology, we analyzed which norms of self-disclosure exist among German Facebook users and the reference groups to which they referred, how they shape users' self-disclosure practices on Facebook, and how these norms and practices have changed over time. Descriptive norms manifested themselves mainly by referring to negative self-disclosure practices of relevant others, but the injunctive norms of self-disclosure were of great relevance to the participants. The participants stated that users should present themselves strategically, communicate consciously concerning their privacy, and not post about the private lives of others. Users can manage the context collapse on Facebook by adapting their communicative activities there to the norms they perceive within their reference groups.
Full-text available
The privacy calculus suggests that online self-disclosure is based on a cost-benefit trade-off. However, although companies progressively collect information to offer tailored services, the effect of both personalization and context-dependency on self-disclosure has remained understudied. Building on the privacy calculus, we hypothesized that benefits, privacy costs, and trust would predict online self-disclosure. Moreover, we analyzed the impact of personalization, investigating whether effects would differ for health, news, and commercial websites. Results from an online experiment using a representative Dutch sample (N = 1,131) supported the privacy calculus, revealing that it was stable across contexts. Personalization decreased trust slightly and benefits marginally. Interestingly, these effects were context-dependent: While personalization affected outcomes in news and commerce contexts, no effects emerged in the health context.
Many researchers have been studying teens’ privacy management on social media, and how they individually control information. Employing the theoretical framework of communication privacy management (CPM) theory, I argue that individual information control in itself is desirable but insufficient, giving only a limited understanding of teens’ privacy practices. Instead, I argue that research should focus on both personal and interpersonal privacy management to ultimately understand teens’ privacy practices. Using a survey study (n = 2000), I investigated the predictors of teens’ personal and interpersonal privacy management on social media and compared different types of boundary coordination. The results demonstrate that feelings of fatalism regarding individual control in a networked social environment, which I call networked defeatism, are positively related with interpersonal privacy management. Also, interpersonal privacy management is less important when coordinating boundaries with peers than it is when coordinating sexual materials, and dealing with personal information shared by parents.
Online health support groups are places for people to compare themselves with others and obtain informational and emotional support about their disease. To do so, they generally need to reveal private information about themselves, and in many support sites they can do this in public or private channels. However, we know little about how the publicness of channels in health support groups influences the amount of self-disclosure people provide. Our work examines the extent to which members self-disclose in the private and public channels of an online cancer support group. We first built machine learning models to automatically identify the amount of positive and negative self-disclosure in messages exchanged in this community, with adequate validity (r > 0.70). In contrast to findings from non-health-related sites, our results show that people generally self-disclose more in the public channel than the private one and are especially likely to reveal their negative thoughts and feelings publicly. We discuss theoretical and practical implications of our work.
Humans have the urge to self-disclose, which is nowadays often satisfied on social media platforms. While the affordances of contemporary social media applications help to indulge this urge, they also pose a significant challenge. People are usually good at leveraging self-disclosure in a way that increases gratifications and minimizes disadvantages, for example by avoiding pitfalls such as revealing overly intimate information to a large audience. However, when trying to balance self-disclosure gratifications and privacy risks on social media platforms, users are not always able to make the rational decisions suggested by the 'privacy calculus' approach. Instead, recent research indicates that users are prone to biases that hinder rational calculation of advantages and disadvantages, especially biases triggered by social media cues. Users therefore need support in mastering self-disclosure decisions.
The aim of this article is to propose a theoretical framework for studying digital resignation, the condition produced when people desire to control the information digital entities have about them but feel unable to do so. We build on the growing body of research that identifies feelings of futility regarding companies’ respect for consumer privacy by suggesting a link between these feelings and the activities of the companies they benefit. We conceptualize digital resignation as a rational response to consumer surveillance. We further argue that routine corporate practices encourage this sense of helplessness. Illuminating the dynamics of this sociopolitical phenomenon creates a template for addressing important questions about the forces that shape uneven power relationships between companies and publics in the digital age.
Privacy regulations for online platforms allow users to control their personal data. But what happens when our private attributes or behaviour can be inferred without our personal data? Researchers reveal that the behaviour of individuals is predictable using only the information provided by their friends in an online social network.
What incentives and disincentives do Internet users weigh as they consider providing information to institutional actors such as government agencies and corporations online? Focus group participants list several benefits to sharing information including convenience, access to information, personalization, financial incentives, and more accurate health information, but also recognize that not all sharing may be in their interest. Disincentives to sharing include skepticism, distrust, and fears of discrimination. Decisions about sharing are related to the information type, the context in which information is revealed, and the institution to which they are – or think they are – providing information. Significantly, many participants were mistrustful of both governmental and corporate actors. Participants displayed awareness of privacy risks, but frequently mischaracterized the extent to which information could be aggregated and mined. They displayed resignation towards privacy violations, suggesting that they perceived little control over their ability to protect their privacy, which may influence their privacy behaviors. This calls into question the privacy calculus, as individuals misunderstand the risks of their information provision and do not believe opting out of information-sharing is possible.