Helen Nissenbaum’s research while affiliated with Weill Cornell Medicine and other institutions


Publications (112)


Privacy of Groups in Dense Street Imagery
  • Preprint

May 2025 · 26 Reads

Matt Franchi · Madiha Zahrah Choksi · [...] · Helen Nissenbaum

Spatially and temporally dense street imagery (DSI) datasets have grown unbounded. In 2024, individual companies possessed around 3 trillion unique images of public streets. DSI data streams are only set to grow as companies like Lyft and Waymo use DSI to train autonomous vehicle algorithms and analyze collisions. Academic researchers leverage DSI to explore novel approaches to urban analysis. Despite good-faith efforts by DSI providers to protect individual privacy through blurring faces and license plates, these measures fail to address broader privacy concerns. In this work, we find that increased data density and advancements in artificial intelligence enable harmful group membership inferences from supposedly anonymized data. We perform a penetration test to demonstrate how easily sensitive group affiliations can be inferred from obfuscated pedestrians in 25,232,608 dashcam images taken in New York City. We develop a typology of identifiable groups within DSI and analyze privacy implications through the lens of contextual integrity. Finally, we discuss actionable recommendations for researchers working with data from DSI providers.



Privacy for Groups Online: Context Matters

November 2024 · 10 Reads · 3 Citations

Proceedings of the ACM on Human-Computer Interaction

The pervasive influence of online activities in our lives, encompassing personal connections, professional engagements, and e-commerce, has amplified concerns about privacy. However, existing privacy research has predominantly concentrated on the individual level, paying less attention to the privacy practices and strategies adopted by online social groups. This research gap calls for a renewed focus on understanding and addressing privacy challenges specific to online group settings. In this paper we explore the privacy needs of online groups through the lens of Contextual Integrity. We perform two complementary studies: semi-structured qualitative interviews of Facebook Groups users (n=17), and a large-scale survey of individuals organizing in groups on Facebook, Discord, and Reddit (n=4486). We investigate the privacy needs of different contextual groups, and locate the presence of contextual norms, contextual member roles, explicit and implicit rules, and privacy concerns. We trace how this complex interplay informs privacy expectations, needs, and negotiations across groups. We find that technical systems provide limited tools to effectively enforce group privacy, allowing individuals to compromise privacy norms. Based on these findings, we offer recommendations to support the design of privacy controls for online groups.


AI Safety: A Poisoned Chalice?

March 2024 · 15 Reads

IEEE Security and Privacy Magazine

We hear a lot about the awesome potential of AI—the achievements of reinforcement learning, the astonishing power of foundation models and generative AI. Amplifying the hype, AI Safety has emerged as its counterpoint. AI Safety, when I first encountered it, brought to mind autonomous vehicle crashes, nuclear meltdowns, killer drones, and robots-gone-haywire. Nowadays, I see a different, more aggressive intention as AI Safety has come to dominate the public agenda around AI, beyond the purely technical and economic.

Stop the Spread: A Contextual Integrity Perspective on the Appropriateness of COVID-19 Vaccination Certificates

Figure 3: Reported acceptance levels for VC passport vignettes, organized by recipient. Box plots (right) show the variance of the acceptability scores. Each participant rated only three randomly selected vignettes, so the denominator of each percentage is the number of responses for that vignette; the top row shows overall acceptance across all vignettes.
Figure 5: Heat map of the average of all participants' responses across combinations of the four CI parameters (sender, recipient, attribute, and transmission principle). For instance, the top-left cell shows the acceptance level of the information flow in which customs and border control agencies share their customers' vaccination certificate information with the local government for public health purposes.
Figure 6: An example vaccination certificate shown to survey participants.

May 2022 · 101 Reads · 7 Citations

We present an empirical study exploring how privacy influences the acceptance of vaccination certificate (VC) deployments across different realistic usage scenarios. The study employed the privacy framework of Contextual Integrity, which has been shown to be particularly effective in capturing people's privacy expectations across different contexts. We use a vignette methodology, selectively manipulating salient contextual parameters to learn whether and how they affect people's attitudes towards VCs. We surveyed 890 participants from a demographically stratified sample of the US population to gauge acceptance of, and overall attitudes towards, possible VC deployments to enforce vaccination mandates and the different information flows VCs might entail. We analyze the collected responses to derive general normative observations about possible VC practices and to provide guidance for deploying VCs in different contexts.
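The vignette design can be summarized as a small data structure: each information flow is one combination of the manipulated CI parameters (sender, recipient, attribute, transmission principle), and the survey samples vignettes from the cross-product of their values. The sketch below illustrates that idea in Python; the Flow class and all parameter values are illustrative assumptions, not the study's actual instrument.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Flow:
    # One CI information flow, as a tuple of the four manipulated parameters
    # named in Figure 5 (sender, recipient, attribute, transmission principle).
    sender: str
    recipient: str
    attribute: str
    principle: str

    def vignette(self) -> str:
        # Render one survey vignette describing this information flow.
        return (f"{self.sender} share their customers' {self.attribute} "
                f"with {self.recipient} {self.principle}.")

# Illustrative parameter values (assumptions, not the study's materials).
senders = ["Customs and border control agencies", "Airlines"]
recipients = ["the local government", "a private data broker"]
attributes = ["vaccination certificate information"]
principles = ["for public health purposes", "for marketing purposes"]

# Enumerate every parameter combination; each combination yields one candidate
# vignette whose acceptability a participant could rate (cf. Figure 5).
flows = [Flow(*combo) for combo in product(senders, recipients, attributes, principles)]
for flow in flows:
    print(flow.vignette())
```

Enumerating the cross-product in this way makes explicit which flows a heat map like Figure 5 compares, and why each participant can be shown only a random subset of vignettes.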


Citations (82)


... AI inferences present significant challenges to conceptions of privacy, both in theory and in data protection practice. The privacy conceptions most susceptible to erosion by AI's inferential power are likely those grounded exclusively in categorical distinctions, such as classifying data as sensitive or non-sensitive, while at the same time uncritically accepting AI-generated inferences as inherently valid [37]. Critical data scientists, particularly members of the FAccT community, have demonstrated the adverse impacts of invalid inferential models, especially due to biased misrepresentations that result in improper and unfair predictive descriptions that individuals cannot understand, correct, or control (e.g., [36, 38, 49, 51, 98, 105, 106]). ...

Reference:

Privacy of Groups in Dense Street Imagery
Countering Privacy Nihilism

SSRN Electronic Journal

... The question of values in games and value-sensitive design is approached in studies 5, 6, 8, 14, 15 and 16 with the tools "VASE", "Grow-A-Game", a curriculum for "Value Conscious Design", "The Values Cards", and "The Values at Play Framework" (Barendregt et al., 2021; Belman et al., 2011; Belman & Flanagan, 2010; Flanagan et al., 2007; Flanagan & Nissenbaum, 2007c). ...

Reference:

D2.1-Epic-We
Grow-A-Game: A Tool for Values Conscious Design and Analysis of Digital Games
  • Citing Conference Paper
  • January 2011

... The inferential reality of computer vision AI models and real-time DSI produce privacy vulnerabilities for groups in public spaces. For the purposes of our work, Nissenbaum's theory of Privacy as Contextual Integrity (CI) helps distinguish legitimate from illegitimate information flows according to contextual norms for such groups [27]. Drawing on social theory, social philosophy, and law, CI conceives of social life as comprising distinct social domains or contexts, such as commerce, education, finance, healthcare, civic life, family, and friends [80]. ...

Privacy for Groups Online: Context Matters
  • Citing Article
  • November 2024

Proceedings of the ACM on Human-Computer Interaction

... Risks associated with third-party cookies diverge from those generated by first-party cookies, which are minimal in comparison. Notwithstanding, their use is not prohibited, nor does the GDPR (nor any interpretation of the rule by a DPA) mandate their complete removal from the web. In any case, Google's Privacy Sandbox intended to eliminate them in favour of replicable tools that would deliver the same results in terms of advertising capabilities available to third parties. ...

No Cookies For You!: Evaluating The Promises Of Big Tech’s ‘Privacy-Enhancing’ Techniques.
  • Citing Article
  • January 2023

SSRN Electronic Journal

... However, the firm is not the only stakeholder whose interests matter. The decision subject's welfare is also important as part of the broader set of societal considerations [32], and indeed protecting the interests of disadvantaged consumers is the underlying motivation for fair lending interventions and regulations, including those that use the LDA. ...

Strategic Evaluation
  • Citing Conference Paper
  • October 2023

... For the present paper, I understand design heuristics in the sense of the term as used by Flanagan and Nissenbaum [20], who are concerned with showing alternatives to existing game design practices in order to support activist games. They propose specific principles as "design aspirations" [20] to guide the concrete development of an artifact. ...

Design Heuristics for Activist Games
  • Citing Chapter
  • September 2008

... Optimization, often carried out as a descriptive process, underpins the creation of many computational tools, such as machine learning models. However, it can also lead to unintended outcomes that carry inherently normative questions [5], drawing attention to the impact of actions on affected parties, particularly within complex and emergent socioeconomic systems. ...

Optimization’s Neglected Normative Commitments
  • Citing Conference Paper
  • June 2023

... According to this theory, privacy is defined as the appropriate flow of information according to context-dependent norms. The theory has been used to evaluate information flows in various contexts, such as the Facebook privacy policy [20], big data research [21], smart homes [22], and contact-tracing applications [23]. ...

Going against the (Appropriate) Flow: A Contextual Integrity Approach to Privacy Policy Analysis
  • Citing Article
  • October 2019

Proceedings of the AAAI Conference on Human Computation and Crowdsourcing