Forthcoming in the Journal of the Association for Information Science and Technology
A Contextual Approach to Information Privacy Research
Philip Fei Wu
Royal Holloway, University of London
Email: philip.wu@rhul.ac.uk
Jessica Vitak
University of Maryland, College Park
Email: jvitak@umd.edu
Michael T. Zimmer
University of Wisconsin–Milwaukee
Email: zimmerm@uwm.edu
Abstract
In this position paper, we synthesize various knowledge gaps in information privacy scholarship
and propose a research agenda that promotes greater cross-disciplinary collaboration within the
iSchool community and beyond. We start by critically examining Westin’s conceptualization of
information privacy and argue for a contextual approach that holds promise for overcoming
some of Westin’s weaknesses. We then highlight three contextual considerations for studying
privacy – digital networks, marginalized populations, and the global context – and close by
discussing how these considerations advance privacy theorization and technology design.
Keywords: information privacy, contextual integrity, networked privacy, marginalized groups,
privacy by design
A Contextual Approach to Information Privacy Research
Introduction
Privacy is a central issue of the information age. Advances in information and
communication technologies (ICTs) and their wide adoption have exponentially increased the
amount of personal information being collected by commercial and government entities. While
ICTs like fitness trackers, smart speakers, and social media provide users with new ways to
interact and learn about themselves, they also pose a number of privacy risks. For example, the
Cambridge Analytica scandal in early 2018 spotlighted problematic privacy practices at
Facebook (Cadwalladr & Graham-Harrison, 2018). More broadly, the promises of big data and
“data-driven decision making” raise wider concerns for the future of individual privacy (boyd &
Crawford, 2012; Lane, Stodden, Bender, & Nissenbaum, 2014; Zhang, 2016; Zimmer, 2016).
Although few scholars would argue against the importance of information privacy, there
are considerable differences across privacy scholarship on how to assess, improve, and regulate
current industry practices for better protection of personal information. The intertwining
relationship between information technology and privacy calls for a highly interdisciplinary
approach to examining information privacy issues from multiple perspectives. We believe that
the information science community is particularly well positioned to contribute to the current
privacy discussion and to shape the solution space with innovative ideas. Indeed, a quick survey
of JASIST publications over the last decade (2008-2018) shows that more than 30 articles have
tackled privacy issues in various empirical contexts, including mobile health (Clarke & Steele,
2015; Harvey & Harvey, 2014), social media platforms (Squicciarini, Xu, & Zhang, 2011; Stern
& Kumar, 2014), as well as new ways to model and measure privacy in academic research
(Rubel & Biava, 2014; Sánchez & Batet, 2016). Collectively, these studies span a broad
spectrum of intellectual traditions in the community and demonstrate nuanced understandings of
the relationship between ICTs and privacy.
Nevertheless, research gaps still exist. In particular, despite the diversity of intellectual
resources being utilized in privacy research, there has been limited integration of these resources
in proposing practical and innovative privacy-enhancing solutions. For example, there is a wide
recognition that social network sites’ (SNSs) privacy settings match poorly with users’ privacy
expectations (Liu, Gummadi, Krishnamurthy, & Mislove, 2011; Wu, 2019); however, few
studies to date have proposed and empirically tested alternative designs for better control of
privacy parameters (with Stern & Kumar, 2014 as a notable exception). Likewise, scholars
taking a sociopsychological approach have identified multiple factors that affect people’s privacy
perceptions and behaviors, but these findings are often difficult to translate into concrete policy
suggestions (Acquisti & Grossklags, 2004, 2005).
In this position paper, we synthesize various knowledge gaps in information privacy
scholarship and propose a contextual approach to privacy research that promotes greater cross-
disciplinary collaboration. We start by critically examining Westin’s conceptualization of
privacy and argue for a contextual perspective that holds promise for overcoming some of
Westin’s weaknesses. We then highlight three contextual considerations for studying privacy,
and we discuss how these considerations advance privacy theorization and technology design.
Assumptions of Westin’s Theory of Privacy
Writing more than 50 years ago, Westin (1967) defined privacy as “the right of the
individual to decide what information about himself [sic] should be communicated to others and
under what condition” (p. 10). This widely cited definition contains several underlying
assumptions, including that 1) “information about himself” is known and transparent to the
individual; 2) “communicated to others” is the end of the information journey; and 3) individuals
are capable of evaluating “conditions” and making rational decisions about their privacy rights.
Each of these assumptions is contestable in today’s digital information environment. As
our daily activities are being facilitated (e.g., shopping) and sometimes deeply embedded (e.g.,
social networking) in various digital technology platforms, we leave data trails that are recorded,
monitored, and shared with or without our knowledge. Hence, individuals rarely have a complete
picture of what “information about themselves” is out there. Furthermore, privacy policy
development and implementation has lagged behind technological advancements; for example,
while the U.S. Federal Trade Commission recommended a one-stop “privacy dashboard” in 2013
for smartphone users to review information being accessed across mobile apps (Federal Trade
Commission, 2013), such recommendations have not yet been widely adopted by the industry. In
fact, as digital businesses create “walled gardens” to lock in users and maintain competitive
advantage, a cross-app, cross-platform, comprehensive privacy dashboard is unlikely to become
a reality. It is also important to note that in this hyperconnected era (Floridi, 2015), individuals
have less control over information about themselves, with data being co-managed with friends,
family, and others who can post or share that personal information across a variety of online
channels. For example, Besmer and Lipford (2010) found that photo tagging on SNSs reduces
users’ control over their information disclosures when images are shared across their many
overlapping social circles.
Control over, access to, and communication of personal data are still key aspects of
information privacy. Yet, information privacy today is more than just who has access to what
information. A significant development in recent years is the technological capability of
analyzing large volumes of data from diverse sources to identify patterns in consumption,
lifestyle, sexual orientation, political inclinations, and more (e.g., Ohm, 2009). An individual’s
privacy is at risk not only because information about herself may be “communicated to others”
without consent, but also because existing dots can now be connected with high efficiency to
reveal intimate details about the person.
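To make this risk concrete, the minimal Python sketch below illustrates a linkage attack of the kind Ohm (2009) surveys: two datasets that each seem harmless in isolation are joined on shared quasi-identifiers, re-attaching a name to a sensitive record. The data, field names, and function are invented for illustration only.

```python
# Illustrative linkage attack: joining "anonymized" records to a public
# record on quasi-identifiers (ZIP code, birth date, sex). All data here
# is fabricated.
voter_rolls = [  # public: identity + quasi-identifiers
    {"name": "A. Smith", "zip": "02138", "dob": "1965-07-31", "sex": "F"},
]
health_records = [  # "anonymized": sensitive data + quasi-identifiers
    {"zip": "02138", "dob": "1965-07-31", "sex": "F", "diagnosis": "hypertension"},
]

def link(identified, anonymized, keys=("zip", "dob", "sex")):
    """Join two record sets on shared quasi-identifiers."""
    index = {tuple(r[k] for k in keys): r for r in identified}
    matches = []
    for record in anonymized:
        key = tuple(record[k] for k in keys)
        if key in index:
            # The merged record pairs a name with a diagnosis, even though
            # neither dataset contained both.
            matches.append({**index[key], **record})
    return matches

print(link(voter_rolls, health_records))
```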
Lastly, Westin’s definition assumes a knowledgeable and rational human who is capable
of making the best decision for their privacy under different scenarios, yet research reveals this is
not always the case (Acquisti, Taylor, & Wagman, 2016). Often, a lack of transparency and
information asymmetries prevent individuals from obtaining complete and perfect
information for decision making. Further, humans are known for making poor decisions due to
cognitive biases and changing preferences. For example, in evaluating risks and benefits of
revealing personal information, people frequently make decisions that favor short-term gains
over long-term consequences, both known and unknown (Acquisti & Grossklags, 2005). A
number of empirical studies have demonstrated the inconsistencies and difficulties involved in
making the “best” privacy trade-off in various circumstances (see, for example, Acquisti et al., 2016).
A Contextual Approach to Privacy Research
Recognizing that the “transparency-and-choice” scenario in Westin’s conceptualization
of privacy does not fit well with the digital reality of privacy today, a growing number of privacy
scholars are advocating for a more contextual approach to information privacy, emphasizing
the importance of understanding and respecting the conditions and contexts that guide
individuals’ decisions to disclose sensitive data. One of the foundations for this approach is Helen
Nissenbaum’s theory of “privacy as contextual integrity” (2004, 2010), which links the
protection of personal information to the norms of information flow within specific contexts.
Rejecting the traditional dichotomy of public versus private information—as well as the notion
that a user’s preferences and decisions of privacy are independent of context—contextual
integrity provides a framework for evaluating the flow of personal information between different
agents and explaining why certain patterns of information flow might be acceptable in one
context but viewed as problematic in another.
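Contextual integrity’s core abstraction lends itself to a compact sketch. The Python fragment below is our own illustrative rendering, not Nissenbaum’s formal apparatus: it describes a flow by five parameters (sender, recipient, information subject, information type, and transmission principle) and checks it against context-specific norms that we invent here for demonstration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str
    recipient: str
    subject: str
    attribute: str               # the type of information flowing
    transmission_principle: str  # the condition governing the flow

# Context-relative norms (invented for illustration): the same attribute
# that flows appropriately in one context breaches expectations in another.
NORMS = {
    "healthcare": {
        Flow("patient", "physician", "patient", "medical history", "with consent"),
    },
}

def respects_contextual_integrity(context: str, flow: Flow) -> bool:
    """A flow preserves contextual integrity if it matches a norm of its context."""
    return flow in NORMS.get(context, set())

ok = respects_contextual_integrity(
    "healthcare",
    Flow("patient", "physician", "patient", "medical history", "with consent"))
breach = respects_contextual_integrity(
    "healthcare",
    Flow("patient", "advertiser", "patient", "medical history", "sold"))
print(ok, breach)  # True False: same data, different recipient and principle
```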
Researchers have applied contextual integrity to various privacy-sensitive contexts, such
as search engines (Zimmer, 2008), social network sites (Shi, Xu, & Chen, 2013), location-based
technologies (Barkhuus, 2012), electronic medical records (Chen & Xu, 2013), student learning
analytics (Rubel & Jones, 2016), smart home devices (Apthorpe, Shvartzshnaider, Mathur,
Reisman, & Feamster, 2018), and big data research ethics (Zimmer, 2018), among others. These
studies have identified more nuanced explanations for perceived “inconsistencies” or
“paradoxes” in privacy behaviors, suggesting that breaches in contextual integrity can help
explain why users would be concerned with uses of information that go beyond the original
purpose or context in which it was initially disclosed.
In light of the critical importance of contextual integrity in studying privacy, we advocate
for an even broader contextual view of privacy at all analytical levels—individual, group, and
societal. Below, we briefly discuss three specific contextual considerations that are likely to
shape future directions of privacy research: privacy in networked contexts, privacy for
marginalized groups, and privacy in a global regulatory context.
Privacy in Networked Contexts
With a contextual perspective, privacy can be understood as a process of managing
boundaries across different social contexts. The boundaries may shift, collapse, or re-emerge as
social circumstances change. For example, on Facebook, users navigate a variety of audiences
and social contexts, with different boundaries for their disclosures. In private groups, they may
feel more open in making sensitive disclosures because only other group members can see the
content; contrast these disclosures with status updates that may be viewable to all friends or an
even wider audience, depending on whether the post is public or whether other users have been tagged in
the post. In these spaces, therefore, privacy becomes an “ongoing negotiation of contexts in a
networked ecosystem in which contexts regularly blur and collapse” (Marwick & boyd, 2014,
p. 1063). Users must constantly negotiate what content they are sharing, who the perceived
audience for a post is, and who its potential audience might be, among other considerations.
Furthermore, users of these spaces may quickly discover that they co-manage their privacy with
other users (who might share content related to them) and the platforms themselves (who make
various pieces of personal information more or less visible in the system).
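A toy model makes this co-management point concrete. The sketch below is our own construction, not any platform’s actual visibility logic: tagging another user folds that user’s network into a post’s effective audience, so the final audience is negotiated among the poster, the tagged users, and the platform’s rules rather than set by the poster alone.

```python
# Hypothetical friend graph; names and structure are invented.
friends = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave", "erin"},
}

def effective_audience(poster: str, tagged: set) -> set:
    """Union of the poster's friends and every tagged user's friends."""
    audience = set(friends.get(poster, set()))
    for user in tagged:
        audience |= friends.get(user, set())
    return audience - {poster}

# Alice intends to reach bob and carol; tagging bob also exposes the post
# to dave and erin, whom alice may not know.
print(effective_audience("alice", tagged={"bob"}))
# {'bob', 'carol', 'dave', 'erin'}
```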
The concept of “networked privacy”—that individuals lack full control over how and
what information about them is shared online and that privacy is collaboratively managed by
both individuals and other users of a platform—highlights two key aspects of privacy in a
networked environment: a) privacy norms about appropriate information flow are in flux as
individuals move within and/or across social boundaries; b) privacy management is a collective,
rather than individual, practice.
In evaluating how norms around privacy and sharing change across time and space,
networked privacy researchers have studied the challenges arising when social contexts collapse.
Context collapse, which broadly describes the flattening of diverse social networks into a single,
homogeneous audience, can affect disclosure and privacy practices in a variety of ways. For example, some users
stop sharing on social media completely or significantly censor their posts because platforms
offer few technical strategies for more nuanced sharing (Marwick & boyd, 2011; Vitak, 2012).
Furthermore, researchers have considered how the sociotechnical affordances of social media
platforms shape users’ experiences, encourage sharing, and make it more challenging to discern
how information flows through (and beyond) the platform. These studies (e.g., Bangasser-Evans
et al., 2017; Treem & Leonardi, 2013) highlight how the features of various platforms afford
different outcomes, with some sites affording high levels of visibility or spreadability of content,
while others may afford greater degrees of obscurity or anonymity. Finally, studies suggest that
the collective nature of privacy in these spaces leads users to engage in a variety of privacy
management strategies, including social steganography or vaguebooking (Marwick & boyd,
2014), constant curation of connections and content (Vitak et al., 2015), and using more private
platforms for sensitive disclosures (Piwek & Joinson, 2016).
Privacy for Marginalized Groups
When looking at the subjects of privacy research, it quickly becomes clear that some
subsets of the population are largely overlooked or understudied. A key demographic receiving
little empirical attention is economically disadvantaged internet users. As a group, these
individuals have lower digital literacy, less access to the internet and computers, and fewer
connections in their social network to go to for help with technology (van Dijk, 2005). Therefore,
a contextual approach is needed to examine how socioeconomic and other contextual factors
affect the group’s privacy concerns and practices. Numerous studies have considered the broader
effects of the digital divide (see, for example, Rice & Katz, 2003; Stanley, 2003), but few have
addressed privacy issues across the socioeconomic spectrum. In one notable exception, research by
Vitak and colleagues (2018) highlighted that low socioeconomic status (SES) families face a
range of privacy and security risks online and many lack trust in companies to protect their
personal information. Continuing to evaluate low-SES internet users is increasingly important in
a time when job applications, tax forms, and government benefits require users to complete
online forms and submit sensitive personal information.
Marginalized and stigmatized groups also face heightened risks around identity-based
disclosures; therefore, their disclosure strategies and privacy-protection behaviors in digital
spaces take on greater importance than they do for the general population. For example, LGBTQ+ adults and
adolescents may have heightened privacy concerns around when and where they make identity
disclosures online (Blackwell et al., 2016), and such disclosure decisions may be difficult,
especially in spaces where others can “out” an individual and users have less control over their
self-presentation (Duguay, 2016). Individuals with stigmatized health conditions or chronic
illnesses may possess greater privacy concerns about sharing their data online, even when
disclosures may help facilitate social, informational, and emotional support (De Choudhury & De,
2014). Likewise, individuals living in authoritarian regimes or under restrictive governments
may have greater privacy concerns and face greater risks when speaking out against the
government than those living in more democratic countries (Pearce, Vitak, & Barta, 2018).
Privacy in a Global Regulatory Context
Context matters not only in understanding individuals’ privacy needs and behaviors, but
also in addressing regulatory challenges in a globalized world. Governments have struggled with
whether and how to regulate information flows across global platforms and services to protect
citizens’ privacy. Given the diversity of interests, histories, and cultural contexts, a complicated
terrain of transnational laws and policies for the protection of privacy and personal data flows
across networks has emerged (Greenleaf, 2017). Some jurisdictions have opted for broad, and
relatively strict, laws regulating the collection, use, and disclosure of personal information, such
as Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) and the
European Union’s General Data Protection Regulation (GDPR). The U.S., however, maintains a
more sectoral approach to privacy legislation, with laws addressing only specific types of
personal information. For example, the Health Insurance Portability and Accountability Act
(HIPAA) offers protection of personal medical information; the Fair Credit Reporting Act
regulates the collection and flow of personal financial data; and the Video Privacy Protection Act
makes the wrongful disclosure of video rental records illegal.
The differences between the Canadian/E.U. approach to privacy and that of the U.S. have
been well documented and analyzed (Bennett & Raab, 2006; Krotoszynski, 2016). While the
E.U. and Canada focus on direct and preemptive regulation of the collection and use of personal
data, prohibiting “excess” data collection and restricting use to the original and stated purposes
of the collection, the U.S. approach begins with the assumptions that most data collection and
use is both acceptable and beneficial, that guidelines should be primarily voluntary and non-
invasive, and that any regulation should only address documented instances of misuse or harm.
This difference in regulatory approaches to privacy—and the underpinning tensions between
different jurisdictions’ views towards the rights of data subjects—becomes complicated further
given the increasing flows of personal information between transnational networks and across
borders. Internet companies like Google and Facebook have customers accessing their products
and services from across the globe, with data processing and storage facilities equally scattered.
A Canadian citizen, for example, might be accessing a Google product in the U.S., while the
record of the particular information exchange might be stored on a server in Ireland. Each
jurisdiction has its own complex set of regulations and rights assigned to the treatment of any
personal information shared and stored.
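The compliance puzzle these scenarios create can be sketched in a few lines. The mapping below is deliberately coarse and purely illustrative (real applicability rules turn on detailed legal analysis, not location labels): it simply shows how a single data flow can implicate several regimes at once.

```python
# Coarse, illustrative mapping from locations to privacy regimes.
REGIMES = {
    "CA": "PIPEDA",    # data subject resides in Canada
    "EU": "GDPR",      # data stored or processed in the EU
    "US": "sectoral",  # e.g., HIPAA, FCRA, or VPPA depending on data type
}

def applicable_regimes(subject_residence, service_location, storage_location):
    """Collect every regime plausibly implicated by one data flow."""
    locations = (subject_residence, service_location, storage_location)
    return {REGIMES[loc] for loc in locations if loc in REGIMES}

# The paper's example: a Canadian using a U.S.-based service whose record
# is stored on a server in Ireland.
print(applicable_regimes("CA", "US", "EU"))
# {'PIPEDA', 'sectoral', 'GDPR'}
```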
These kinds of scenarios have prompted debate about whether the global diversity of
privacy governance will result in a “trading up,” where information platforms develop practices
and policies that meet higher privacy standards in order to be perceived as the “best” protector of
personal information flows irrespective of the borders the personal information might cross, or a
“race to the bottom,” where corporate interests in processing personal data migrate to
jurisdictions with little or no control over the circulation and capture of personal information
flows. Researchers wishing to embrace a more contextual approach to privacy will need to
grapple with the complex global nature of information flows and regulations, recognizing that
privacy expectations and practices differ greatly across geopolitical borders. For the information
science community, this will require continued focus on global research studies and
collaborations.
Conclusion and a Design Recommendation
Our brief review of three contextual considerations above highlights the challenges of
designing a one-size-fits-all solution for informational privacy needs that spans multiple
contexts. For example, privacy researchers have long observed a “privacy paradox” phenomenon
(i.e., people claim to care about privacy but behave as if they don’t care), but few have
systematically examined in what contexts this attitude-behavior dichotomy is likely to manifest
— or how to resolve the dichotomy through technology design. Many current systems and
platforms fail to protect user privacy because privacy is an afterthought of system design
(Papacharissi & Gibson, 2011). More effective privacy protections, as Cavoukian (2011) argues,
may require a Privacy by Design approach where privacy considerations are an integral part of
design and implementation from the outset, with design decision-making situated in the relevant
local and global contexts. Such privacy-sensitive design could even embed a choice architecture
(Thaler, Sunstein, & Balz, 2013) where privacy choices are contingent on the use context and the
platform’s technological affordances, thereby nudging users to take privacy-protective actions
when necessary (Wang et al., 2013). Almuhimedi et al. (2015) demonstrated in a field study that
even a simple nudge on mobile devices can lead participants to adjust their mobile app privacy
settings and bring their data sharing behaviors into alignment with their privacy preferences. To
this end, designing for privacy should move beyond mainstream mechanisms that protect
already-generated personal data and instead find creative ways of steering both individuals
and organizations toward preventative behaviors in various contexts.
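As a rough illustration of such a nudge, the sketch below follows the spirit of Almuhimedi et al. (2015), surfacing how often an app has actually exercised a permission and inviting the user to revisit the setting. The thresholds, log format, and message wording are our own assumptions, not the study’s implementation.

```python
# Illustrative nudge thresholds: accesses per week that trigger a prompt.
NUDGE_THRESHOLDS = {"location": 100, "contacts": 10}

def nudge_messages(access_log):
    """access_log maps app name -> {permission: weekly access count}."""
    messages = []
    for app, counts in access_log.items():
        for permission, count in counts.items():
            if count >= NUDGE_THRESHOLDS.get(permission, float("inf")):
                messages.append(
                    f"{app} accessed your {permission} {count} times this week. "
                    f"Review its {permission} permission?")
    return messages

# A count echoing the title of Almuhimedi et al.'s field study.
print(nudge_messages({"WeatherNow": {"location": 5398}}))
```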
To conclude, we have explained how a contextual view of information privacy may open
up new avenues of research. Prior research based on Westin’s assumptions does not provide the
full picture of people’s privacy behaviors and decision-making strategies in the information age.
Today, we find that privacy management is negotiated not just at the individual level, but
between many individuals at a group or community level, with companies and third-parties who
collect and share data, and with governments and regulators in different regions. Considering
privacy from a contextual approach is more difficult, but it more accurately reflects the reality of
data sharing and privacy management in the 21st century. Investigating how individuals, groups,
and businesses deal with information sharing in all types of contexts is critical to extending
theories of privacy and to designing privacy-sensitive tools that address the needs and concerns
of a wider range of users and communities. We believe the information science community can
lead this line of inquiry due to its interdisciplinary knowledge and experience in social and
computational sciences and its well-established tradition of respecting use context in
information system research and design.
References
Acquisti, A, & Grossklags, J. (2004). Privacy attitudes and privacy behavior. In L. J. Camp & S.
Lewis (Eds.), Economics of information security (pp. 165–178). Boston, MA: Springer
US.
Acquisti, A, & Grossklags, J. (2005). Privacy and rationality in individual decision making.
IEEE Security Privacy, 3(1), 26–33.
Acquisti, A., Taylor, C., & Wagman, L. (2016). The economics of privacy. Journal of Economic
Literature, 54(2), 442–492.
Almuhimedi, H., Schaub, F., Sadeh, N., Adjerid, I., Acquisti, A., Gluck, J., Cranor, L. F., &
Agarwal, Y. (2015). Your location has been shared 5,398 times!: A field study on mobile
app privacy nudging. In Proceedings of the 33rd Annual ACM Conference on Human
Factors in Computing Systems (pp. 787–796). New York, NY, USA: ACM.
Apthorpe, N., Shvartzshnaider, Y., Mathur, A., Reisman, D., & Feamster, N. (2018).
Discovering IoT smart home privacy norms using contextual integrity. Proceedings of
the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(2).
Bangasser-Evans, S., Pearce, K., Vitak, J., & Treem, J. (2017). The affordances test: A
conceptual model for understanding affordances in communication research. Journal of
Computer-Mediated Communication, 22, 35-52.
Barkhuus, L. (2012). The mismeasurement of privacy: Using contextual integrity to reconsider
privacy in HCI. In Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems (pp. 367–376). Austin, TX.
Bennett, C. J., & Raab, C. D. (2006). The governance of privacy: Policy instruments in global
perspective. Cambridge, MA: MIT Press.
Besmer, A., & Richter Lipford, H. (2010). Moving beyond untagging: Photo privacy in a tagged
world. In Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems (pp. 1563–1572). New York, NY, USA: ACM.
Blackwell, L., Hardy, J., Ammari, T., Veinot, T., Lampe, C., & Schoenebeck, S. (2016). LGBT
parents and social media: Advocacy, privacy, and disclosure during shifting social
movements. In Proceedings of the 2016 CHI Conference on Human Factors in
Computing Systems (pp. 610–622). New York, NY, USA: ACM.
boyd, danah, & Crawford, K. (2012). Critical questions for big data. Information,
Communication & Society, 15(5), 662–679.
Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook
profiles harvested for Cambridge Analytica in major data breach. The Guardian.
Retrieved from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-
facebook-influence-us-election
Cavoukian, A. (2011). Privacy by design in law, policy and practice: A white paper for
regulators, decision-makers and policy-makers. Information and Privacy Commissioner,
Ontario, Canada. Retrieved from
http://www.ontla.on.ca/library/repository/mon/25008/312239.pdf
Chen, Y., & Xu, H. (2013). Privacy management in dynamic groups: Understanding information
privacy in medical practices. In Proceedings of the 2013 Conference on Computer
Supported Cooperative Work (CSCW ’13) (pp. 541–552). New York, NY, USA: ACM.
De Choudhury, M., & De, S. (2014). Mental health discourse on reddit: Self-disclosure, social
support, and anonymity. In Eighth International AAAI Conference on Weblogs and Social
Media. Retrieved from
https://www.aaai.org/ocs/index.php/ICWSM/ICWSM14/paper/view/8075
Clarke, A., & Steele, R. (2015). Smartphone-based public health information systems:
Anonymity, privacy and intervention. Journal of the Association for Information Science
and Technology, 66(12), 2596–2608.
Duguay, S. (2016). “He has a way gayer Facebook than I do”: Investigating sexual identity
disclosure and context collapse on a social networking site. New Media & Society, 18(6),
891–907.
Federal Trade Commission. (2013). Mobile privacy disclosures: Building trust through
transparency: a federal trade commission staff report. Retrieved from
https://www.ftc.gov/reports/mobile-privacy-disclosures-building-trust-through-
transparency-federal-trade-commission
Floridi, L. (Ed.). (2015). The onlife manifesto: Being human in a hyperconnected era. Cham,
Switzerland: Springer Open.
Greenleaf, G. (2017). Global data privacy laws 2017: 120 national data privacy laws, including
Indonesia and Turkey. Privacy Laws & Business International Report, 145, 10–13.
UNSW Law Research Paper No. 17-45. Retrieved from
https://ssrn.com/abstract=2993035
Harvey, M. J., & Harvey, M. G. (2014). Privacy and security issues for mobile health platforms.
Journal of the Association for Information Science and Technology, 65(7), 1305–1318.
Krotoszynski, R. J. (2016). Privacy revisited: A global perspective on the right to be left alone.
Oxford, UK: Oxford University Press.
Lane, J., Stodden, V., Bender, S., & Nissenbaum, H. (Eds.). (2014). Privacy, big data, and the
public good: Frameworks for engagement. New York, NY: Cambridge University Press.
Liu, Y., Gummadi, K. P., Krishnamurthy, B., & Mislove, A. (2011). Analyzing Facebook
privacy settings: User expectations vs. reality. In Proceedings of the 2011 ACM
SIGCOMM Conference on Internet Measurement Conference (pp. 61–70). New York,
NY, USA: ACM.
Marwick, A. E., & boyd, d. (2011). I tweet honestly, I tweet passionately: Twitter users, context
collapse, and the imagined audience. New Media & Society, 13, 114-133.
Marwick, A. E., & boyd, d. (2014). Networked privacy: How teenagers negotiate context in
social media. New Media & Society, 16(7), 1051–1067.
Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79(1), 119–
157.
Nissenbaum, H. (2010). Privacy in context: Technology, policy, and the integrity of social life.
Stanford, CA: Stanford University Press.
Ohm, P. (2009). Broken promises of privacy: Responding to the surprising failure of
anonymization. UCLA Law Review, 57, 1701.
Papacharissi, Z., & Gibson, P. L. (2011). Fifteen minutes of privacy: Privacy, sociality, and
publicity on social network sites. In S. Trepte & L. Reinecke (Eds.), Privacy online:
Perspectives on privacy and self-disclosure in the social web (pp. 75–90). Heidelberg:
Springer.
Pearce, K. E., Vitak, J., & Barta, K. (2018). Privacy at the margins| Socially mediated visibility:
Friendship and dissent in authoritarian Azerbaijan. International Journal of
Communication, 12(0), 22.
Piwek, L., & Joinson, A. (2016). “What do they Snapchat about?” Patterns of use in time-limited
instant messaging service. Computers in Human Behavior, 54, 358-367.
Rice, R. E., & Katz, J. E. (2003). Comparing internet and mobile phone usage: digital divides of
usage, adoption, and dropouts. Telecommunications Policy, 27(8), 597–623.
Rubel, A., & Biava, R. (2014). A framework for analyzing and comparing privacy states.
Journal of the Association for Information Science and Technology, 65(12), 2422–2431.
Rubel, A., & Jones, K. M. L. (2016). Student privacy in learning analytics: An information ethics
perspective. The Information Society, 32(2), 143–159.
Sánchez, D., & Batet, M. (2016). C-sanitized: A privacy model for document redaction and
sanitization. Journal of the Association for Information Science and Technology, 67(1),
148–163.
Shi, P., Xu, H., & Chen, Y. (2013). Using contextual integrity to examine interpersonal
information boundary on social network sites. In Proceedings of the SIGCHI Conference
on Human Factors in Computing Systems (pp. 35–38). New York, NY, USA: ACM.
Squicciarini, A. C., Xu, H., & Zhang, X. (Luke). (2011). CoPE: Enabling collaborative privacy
management in online social networks. Journal of the American Society for Information
Science and Technology, 62(3), 521–534.
Stanley, L. D. (2003). Beyond access: Psychosocial barriers to computer literacy special issue:
ICTs and community networking. The Information Society, 19(5), 407–416.
Stern, T., & Kumar, N. (2014). Improving privacy settings control in online social networks with
a wheel interface. Journal of the Association for Information Science and Technology,
65(3), 524–538.
Thaler, R. H., Sunstein, C. R., & Balz, J. P. (2013). Choice architecture. In The Behavioral
Foundations of Public Policy (pp. 428–439). Princeton, NJ: Princeton University Press.
Treem, J. W., & Leonardi, P. M. (2013). Social media use in organizations: Exploring the
affordances of visibility, editability, persistence, and association. Annals of the
International Communication Association, 36(1), 143-189.
van Dijk, J. A. G. M. (2005). The deepening divide: inequality in the information society.
Thousand Oaks; London; New York: Sage Publications.
Vitak, J. (2012). The impact of context collapse and privacy on social network site disclosures.
Journal of Broadcasting and Electronic Media, 56, 451-470.
Vitak, J., Blasiola, S., Patil, S., & Litt, E. (2015). Balancing audience and privacy tensions on
social network sites. International Journal of Communication, 9, 1485-1504.
Vitak, J., Liao, Y., Subramaniam, M., & Kumar, P. (2018). “I knew it was too good to be true”:
The challenges economically disadvantaged internet users face in assessing
trustworthiness, avoiding scams, and developing self-efficacy online. Proceedings of
the ACM on Human-Computer Interaction, 2(CSCW), Article 176.
Wang, Y., Leon, P. G., Scott, K., Chen, X., Acquisti, A., & Cranor, L. F. (2013). Privacy nudges
for social media: An exploratory Facebook study. In Proceedings of the 22nd
International Conference on World Wide Web (pp. 763–770). New York, NY, USA:
ACM.
Westin, A. F. (1967). Privacy and freedom. New York: Atheneum.
Wu, P. F. (2019). The privacy paradox in the context of online social networking: A self-identity
perspective. Journal of the Association for Information Science and Technology, 70(3),
207-217.
Zhang, S. (2016). Scientists are just as confused about the ethics of big-data research as you.
Wired. Retrieved December 28, 2016, from https://www.wired.com/2016/05/scientists-
just-confused-ethics-big-data-research/
Zimmer, M. (2008). Privacy on planet Google: Using the theory of contextual integrity to clarify
the privacy threats of Google’s quest for the perfect search engine. Journal of Business &
Technology Law, 3(1), 109–126.
Zimmer, M. (2016). OkCupid study reveals the perils of big-data science. Wired. Retrieved May
29, 2016, from https://www.wired.com/2016/05/okcupid-study-reveals-perils-big-data-
science/
Zimmer, M. (2018). Addressing conceptual gaps in big data research ethics: An application of
contextual integrity. Social Media + Society, 4(2), 1-11.
Article
The use of social media technologies—such as blogs, wikis, social networking sites, social tagging, and microblogging—is proliferating at an incredible pace. One area of increasing adoption is organizational settings where managers hope that these new technologies will help improve important organizational processes. However, scholarship has largely failed to explain if and how uses of social media in organizations differ from existing forms of computer-mediated communication. In this chapter, we argue that social media are of important consequence to organizational communication processes because they afford behaviors that were difficult or impossible to achieve in combination before these new technologies entered the workplace. Our review of previous studies of social media use in organizations uncovered four relatively consistent affordances enabled by these new technologies: Visibility, persistence, editability, and association. We suggest that the activation of some combination of these affordances could influence many of the processes commonly studied by organizational communication theorists. To illustrate this point, we theorize several ways through which these four social media affordances may alter socialization, knowledge sharing, and power processes in organizations.