Article

Successful failure: what Foucault can teach us about privacy self-management in a world of Facebook and big data

Author: Gordon Hull

Abstract

The “privacy paradox” refers to the discrepancy between the concern individuals express for their privacy and the apparently low value they actually assign to it when they readily trade personal information for low-value goods online. In this paper, I argue that the privacy paradox masks a more important paradox: the self-management model of privacy embedded in notice-and-consent pages on websites and other, analogous practices can be readily shown to underprotect privacy, even in the economic terms favored by its advocates. The real question, then, is why privacy self-management occupies such a prominent position in privacy law and regulation. Borrowing from Foucault’s late writings, I argue that this failure to protect privacy is also a success in ethical subject formation, as it actively pushes privacy norms and practices in a neoliberal direction. In other words, privacy self-management isn’t about protecting people’s privacy; it’s about inculcating the idea that privacy is an individual, commodified good that can be traded for other market goods. Along the way, the self-management regime forces privacy into the market, obstructs the functioning of other, more social, understandings of privacy, and occludes the various ways that individuals attempt to resist adopting the market-based view of themselves and their privacy. Throughout, I use the analytics practices of Facebook and social networking sites as a sustained case study of the point.


... This is part of the "privacy paradox": one claims to care about one's privacy but, when effort is needed to protect it, tends to look the other way (Hull 2015; Bandara and Levine 2019). What we can see when discussing the different possible explanations of the privacy paradox is that the liberal paradigm relies fully on the calculus model, while behavioural economics and social theories suggest that this model does not sufficiently explain the privacy paradox. ...
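The "calculus model" named here is standardly read as an expected cost-benefit comparison. A minimal formalization (my notation, not anything in the cited thesis) would be:

```latex
% Privacy calculus: disclosure as an expected cost-benefit comparison.
\[
  \text{disclose} \iff \mathbb{E}[\text{benefit}] - \mathbb{E}[\text{cost}]
  \;=\; \sum_i p_i\, b_i \;-\; \sum_j q_j\, c_j \;>\; 0
\]
% b_i: anticipated gains (convenience, social gratification), with
% subjective probabilities p_i; c_j: anticipated privacy harms, with
% subjective probabilities q_j.
```

On this reading, behavioural-economic explanations of the paradox target misestimation of the probabilities q_j (salience effects, hyperbolic discounting), while social theories question whether disclosure is an individual optimization at all.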
... According to Gordon Hull (2015), the system of "notice and consent" is a form of privacy self-management that does not work in protecting citizens' privacy. Inspired by Foucault, Hull calls this a successful failure because: "their failure to protect privacy tells only half the story. ...
... Through this disciplinary power, subjectification happens (Foucault 1991). Hull (2015) argues that we need to see the notice and consent regime as a particular form of subjectification, which is, as argued by Foucault, a technique of power that makes persons into subjects: "users are presented with a repeated choice: more privacy or less? Everything about the context of that choice encourages them to answer "less." ...
Thesis
Full-text available
In the vastly changing world of consumer privacy, laws that protect citizens from the data hunger of companies are of the utmost importance. While the GDPR does protect consumer privacy in a certain way, it is based on a very limited conception of privacy. This paper examines the dominant paradigm in privacy law and shows that there are other ways to conceive of privacy. This will be done by looking at three components: (1) what is privacy, (2) what is privacy behaviour, and (3) why is privacy important. I labelled the current paradigm the liberal conception of privacy. It contends that privacy is having control over information, that privacy behaviour is determined by rational choice, and that privacy is important because it is a prerequisite for autonomy. This paper shows that the meaning of privacy could also be the right to be let alone, or broader conceptions of control over information. Furthermore, privacy behaviour is not as straightforward as the privacy calculus model makes it seem; behavioural economics and social theory provide us with different understandings of privacy behaviour. Finally, when it comes to the value of privacy, republicanism showed its importance for democracy, relationship theory indicated its role in the development of love, friendship and trust, and critical theory explained the power of surveillance and how losing privacy is losing our humanity. This study concludes that the liberal paradigm provides a very limited way of looking at privacy and that, consequently, current law does not adequately protect consumer privacy.
... Although not yet sufficiently reflected in law, critical perspectives on privacy self-management are well-represented in the literature, with authors describing the approach as "dysfunctional" [26], "failed (...), impractical" [27], "destined to fail" [28], not "fit for purpose" [29], and as a "market failure" [30], "a fundamental dilemma" [16], and a deceptive "neoliberal technique of power" [31]. In 2013, the renowned data protection lawyer Eduardo Ustaran [32] concluded: "Yes, consent is dead. ...
... All sorts of interesting and compelling arguments have been brought forth in opposition to privacy self-management. Among other topics, these arguments deal with the intricacies of data collection and processing [34,28], people's dependence on certain services [34,31], and people's inability to read, let alone understand, the privacy policies of all the services they use, due to complex language and the enormous amount of time that would be required to do so [16,35]. For instance, New York Times journalists examined privacy policies of major tech and media platforms and concluded that they are "verbose and full of legal jargon"; in short, an "incomprehensible disaster" [36]. ...
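The time-cost point is easy to make concrete with a back-of-the-envelope calculation; all figures below are illustrative assumptions rather than measured values:

```python
# Rough estimate of the annual cost of actually reading privacy
# policies. All inputs are illustrative assumptions.
policies_per_year = 1500   # distinct sites/services encountered per year
words_per_policy = 2500    # typical policy length (assumed)
reading_speed_wpm = 250    # average adult reading speed

hours_per_year = policies_per_year * words_per_policy / reading_speed_wpm / 60
print(f"Reading every policy once: ~{hours_per_year:.0f} hours/year")
# ~250 hours/year, i.e. more than six 40-hour work weeks, before
# re-reading any policy that has since been updated.
```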
Chapter
Full-text available
Under current privacy laws, most forms of personal data processing are authorized via consent, leaving the cost-benefit calculation to the individual data subject. However, given the realities of today’s data economy, people are usually not in a position to make truly free and informed privacy choices. Thus, the notice-and-choice approach does not ensure informational self-determination and fails to align corporate data practices with fundamental values of society. Consumer education and technical privacy safeguards can help to some extent, but will not be able to fundamentally solve this problem. On closer inspection, the very notion of informational self-determination is based on wrong assumptions and may – despite all good intentions – be an unachievable and misleading concept. To illustrate the problem, this chapter uses the privacy impacts of modern data analytics as an example. It is argued that, with regard to human limitations and the complexities of modern data processing, privacy laws should focus on the anticipatory prevention of harms and rely to a much lesser extent on data subjects’ supposedly “free” and “informed” decisions.
... With the fast-evolving power of modern data analytics, it is hard to predict what privacy-relevant information can be inferred from the personal data that users provide and to which purposes this information may be put (Kosinski, Stillwell, and Graepel 2013). The risks also compound over time: the more personal information becomes available, the more predictive power algorithms have, and the more privacy-relevant information they may reveal (Hull 2015). It is therefore near-impossible for a user to arrive at a reasoned assessment of the likelihood of possible implications of data processing when they consent to it (Solove 2013; Acquisti, Brandimarte, and Loewenstein 2015). ...
... The Grindr case is an example: the company made access to its services conditional on accepting near-unlimited onward sale of personal data. Because personal data only becomes valuable when aggregated (Hull 2015), individual users do not have leverage; users could only pose a threat to SNS by overcoming collective action problems and acting in unison to demand better terms, or by acting through governments. ...
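Both snippets turn on the same technical fact: individually innocuous data points become predictive in aggregate. A toy sketch of that compounding effect, using synthetic "like"-style binary features (everything here is simulated; the numbers are not empirical estimates):

```python
# Toy illustration: predictive power over a sensitive attribute grows
# with the number of innocuous binary features available.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_users, n_features = 5000, 200
trait = rng.integers(0, 2, n_users)                 # hidden sensitive attribute
# Each "like" is only weakly correlated with the trait.
likes = (rng.random((n_users, n_features)) < 0.45) ^ trait[:, None].astype(bool)

for k in (5, 20, 200):                              # number of features disclosed
    X_tr, X_te, y_tr, y_te = train_test_split(
        likes[:, :k], trait, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{k:3d} features -> AUC {auc:.2f}")
# AUC climbs toward 1.0 as k grows: each extra data point is nearly
# worthless alone but, aggregated, reveals the hidden trait.
```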
Article
Full-text available
Social Networking Services provide services in return for rights to commercialize users' personal data. We argue that what makes this transaction permissible is not users' autonomous consent but the provision of sufficiently valuable opportunities to exchange data for services. We argue that the value of these opportunities should be assessed for both (a) a range of users with different decision-making abilities and (b) third parties. We conclude that regulation should shift from aiming to ensure autonomous consent towards ensuring that users face options that they will use to advance individual and common interests.
... In the critics' view, the consent that people actually give to data sharing often appears to be, in a word, 'meaningless' (Andreotta et al., 2022; Barocas & Nissenbaum, 2014; Becker, 2019; Hull, 2015; Pascalev, 2017; Solove, 2013; Susser, 2019; Wolmarans & Voorhoeve, 2022; Zuboff, 2019), so far from the ideal, or even paradigmatic, cases of morally transformative consent that it barely deserves the name. Consequently, such deeply deficient consent fails to secure the important privacy interests that the policy of notice-and-consent has been designed to secure (for instance, rather than enhancing users' autonomy, it facilitates their exploitation by data collectors). ...
Article
This paper argues that if the critics of the currently dominant notice-and-consent model of governing digital data transactions are correct, then they should oppose political reforms of the model. The crux of the argument is as follows: the reasons the critics give for doubting the effectiveness of notice-and-consent in protecting user privacy (namely, ordinary users’ various cognitive deficiencies and the inherent inscrutability of the subject matter) are also reasons for doubting the effectiveness of protecting user privacy through democratic or regulatory means. Furthermore, insofar as attempts to improve the notice-and-consent model through more legislation or regulation would also involve more coercion than the status quo, they should be resisted on normative grounds. So, out of the bad options we have when it comes to protecting digital privacy, it seems – contrary to the majority position advanced in the literature – that we should stick with notice-and-consent.
... The latest literature introducing volatility index factors into asset pricing models can be traced back to 2007. [13] held that the general level factor of the volatility index and the market excess return rate serve as second-order and first-order variables, respectively, when measuring market risk, so neither can replace the other. They were used to modify the Carhart four-factor asset pricing model. ...
Article
Full-text available
A solid economic foundation is the key to improving a pricing model. This is true not only for stocks and other profit-generating assets, but also for other assets such as debt claims. This study builds on the empirical asset pricing literature, constructs a stochastic equilibrium model, and uses the generalized method of moments to analyze the asset pricing model. The estimation results show that the parameters are significant at the 5% or even 1% level, that the estimated values meet economic expectations, and that they pass the over-identification test for the instrumental variables. The experimental results show that the empirical asset pricing model in this paper can effectively improve on a single feedforward neural network model, and they emphasize the necessity of feature learning, especially nonlinear unsupervised feature learning, when machine learning is applied in empirical finance. This enriches research at the intersection of machine learning and empirical finance and has potential practical value.
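For readers unfamiliar with the estimation machinery this abstract invokes, the standard setup is the following (textbook GMM with an over-identification test, not a reconstruction of this paper's specific model):

```latex
% Generalized method of moments: choose \theta so that the sample
% moment conditions are as close to zero as possible.
\[
  \hat{\theta} \;=\; \arg\min_{\theta}\;
  \bar{g}(\theta)^{\top} W \,\bar{g}(\theta),
  \qquad
  \bar{g}(\theta) \;=\; \frac{1}{n}\sum_{t=1}^{n} g(x_t;\theta)
\]
% With m moment conditions and k < m parameters, the over-identifying
% restrictions are testable via Hansen's J statistic:
\[
  J \;=\; n\,\bar{g}(\hat{\theta})^{\top}\hat{W}\,\bar{g}(\hat{\theta})
  \;\xrightarrow{d}\; \chi^2_{\,m-k}
\]
```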
... Meaningfully controlling one's future requires meaningful control of one's data, including real opportunities to reject data-driven systems. Consent regimes are demonstrably inadequate protections for almost everyone [68,77], but (multiply) marginalised people have even less power to exert choice under heightened surveillance, and often face strong incentives to demonstrate compliance with agencies demanding their data [41]. Moreover, the consequences of the failures of consent regimes are far from evenly distributed across society. ...
Article
Full-text available
Reproductive justice is an intersectional feminist framework and movement which argues all people have the right to have a child, to not have a child, to parent in safe and healthy environments, and to own their bodies and control their futures. We identify increasing surveillance, assessing worth, datafication and monetisation, and decimating planetary health as forms of structural violence associated with emerging digital technologies. These trends are implicated in the (re)production of inequities, creating barriers to the realisation of reproductive justice. We call for algorithmic reproductive justice, and highlight the potential for both acts of resistance and industry reform to advance that aim.
... We are hypocritical not only towards the arms trade, arms dealers, war, and our own role. Think, for example, of the creation of online communities raising awareness of Facebook's privacy violations, hosted on Facebook itself (Hull 2015). Or what about online crowdfunding to support projects that help refugees fleeing ongoing war crimes and human rights violations committed by, for example, armed militias in the Democratic Republic of Congo, who control one of the world's largest deposits of cobalt and coltan and the mining industry around it (Henleben 2020). ...
Article
Arms dealers are usually seen as morally corrupt and as pure evil. They are often portrayed as "merchants of death" who exploit conflicts, wars, and suffering to sell weapons and ammunition. Based on a biographical study of a legally operating arms dealer, this article presents the banal motives that lead people to profit financially from acts of war. In this context, a brief "criminological imagination" of the arms trade in relation to war and state crime is presented, along with a discussion of why such an approach appears promising for criminology. The article then discusses the life choices and motivations behind the decision to work as an arms dealer. The findings reveal something about deeper narratives and "biographies" in capitalist and "aspirational" societies. Finally, the interplay between an arms dealer's biography and the structural features of contemporary societies is addressed.
... Flender and Müller [13] argued that because some people have already disclosed information about themselves on social networks, norms of reciprocity and pressures of social fairness push those who have not yet disclosed personal information to do so as well. Hull's research found that not sharing personal information is seen as shameful by other users [14]. Kim and Kim argued that Facebook users mainly disclose personal information to maintain social interaction [15]. ...
Article
Full-text available
Social networking service (SNS) users often express great concern for their personal privacy, yet continue to disclose personal information on these platforms. This privacy paradox between privacy concerns and disclosure behavior has drawn widespread academic attention. In this study, we use the double-entry mental accounting theory to construct a theoretical model and conduct an in-depth analysis of the privacy paradox phenomenon and its causes through empirical verification. Our research shows a significant positive correlation between perceived benefits and users’ intention to disclose privacy, while perceived risks and users’ intention to disclose privacy are significantly negatively correlated. The double-entry mental accounting theory plays a crucial role in mediating the relationship between perceived values and users’ intention to disclose privacy. Furthermore, we found that information sensitivity negatively regulates the relationship between perceived risks, the pleasure attenuation coefficient α, the pain buffering coefficient β, and the intention to disclose privacy. Our study provides theoretical and empirical information on the reasons for the privacy paradox and offers insights for social networking service providers to optimize their services.
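One plausible formalization of the α and β coefficients named in this abstract, following the double-entry idea that anticipated pleasure and pain are each discounted by their coupled opposite (my reconstruction; the paper's exact functional form may differ):

```latex
% Double-entry mental accounting applied to disclosure: B = perceived
% benefit, R = perceived risk. The pleasure of disclosing is attenuated
% by coupled pain (\alpha); the pain is buffered by coupled benefit (\beta).
\[
  \Pi_{\text{pleasure}} = B - \alpha R,
  \qquad
  \Pi_{\text{pain}} = R - \beta B
\]
\[
  \text{DisclosureIntention} \;\propto\;
  \Pi_{\text{pleasure}} - \Pi_{\text{pain}}
  \;=\; (1+\beta)\,B \;-\; (1+\alpha)\,R
\]
% The reported moderation result then reads: higher information
% sensitivity strengthens the negative weight on R.
```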
... When you visit a website and click a button that says, 'I agree to these terms', do you really agree? Many scholars who consider this question (Solove 2013; Barocas & Nissenbaum 2014; Hull 2015; Pascalev 2017; Yeung 2017; Becker 2019; Zuboff 2019; Andreotta et al. 2022; Wolmarans and Voorhoeve 2022) would tend to answer 'no', or, at the very least, they would deem your agreement normatively deficient. The reasoning behind that conclusion is in large part driven by the claim that when most people click 'I agree' when visiting online websites and platforms, they do not really know what they are agreeing to. ...
Article
Full-text available
This paper argues, against the prevailing view, that consent to privacy policies that regular internet users usually give is largely unproblematic from the moral point of view. To substantiate this claim, we rely on the idea of the right not to know (RNTK), as developed by bioethicists. Defenders of the RNTK in bioethical literature on informed consent claim that patients generally have the right to refuse medically relevant information. In this article we extend the application of the RNTK to online privacy. We then argue that if internet users can be thought of as exercising their RNTK before consenting to privacy policies, their consent ought to be considered free of the standard charges leveled against it by critics.
... The empirical failure of such approaches is evident in privacy, where the procedural protection of consumer "choice" is pervasive (Acquisti et al., 2015; Solove, 2013). The procedural focus makes it difficult to see that the choices are not really free in the normal sense in that they are highly constrained and manipulated by algorithmic systems and platform architectures (Acquisti, 2009; Hull, 2015, 2022), that corporations turn privacy into compliance issues and thereby evade substantive reform (Waldman, 2021), that (as in the case of AI) many of the harms have to do with (mis)classification (Dwork & Mulligan, 2013), that social power concerns are made completely invisible (Austin, 2014; Hull, 2021), and that the directly expressive, and thus political, values of privacy become unprotectable (Skinner-Thompson, 2021). The pervasive failure of procedural fairness to achieve privacy should be a warning sign for the tendency to focus myopically on it in the ML context. ...
Article
Full-text available
Artificial intelligence (AI) and machine learning (ML) systems increasingly purport to deliver knowledge about people and the world. Unfortunately, they also seem to frequently present results that repeat or magnify biased treatment of racial and other vulnerable minorities. This paper proposes that at least some of the problems with AI’s treatment of minorities can be captured by the concept of epistemic injustice. To substantiate this claim, I argue that (1) pretrial detention and physiognomic AI systems commit testimonial injustice because their target variables reflect inaccurate and unjust proxies for what they claim to measure; (2) classification systems, such as facial recognition, commit hermeneutic injustice because their classification taxonomies, almost no matter how they are derived, reflect and perpetuate racial and other stereotypes; and (3) epistemic injustice better explains what is going wrong in these types of situations than does the more common focus on procedural (un)fairness.
... As a result, most people are not autonomous in their decisions to accept or reject the use of social media, since they are significantly influenced by the opinions and behaviour of their social environment [13]. In addition, while peers and family can create social pressure towards certain decisions, they can also create a social stigma for anyone who deviates from them [64]. Hence, the expressed attitude apparently echoes the unbiased opinion, but it is not necessarily reflected in actual behaviour, which is often affected by social factors [13]. ...
Article
Full-text available
People use social media to achieve particular gratifications despite expressing concerns about the related privacy risks that may lead to negative consequences. This inconsistency between privacy concerns and actual behaviour has been referred to as the privacy paradox. Although several possible explanations for this phenomenon have been provided over the years, they each consider only some of the obstacles that stand in the way of informed and rational privacy decisions, and they usually assume a static situation, thus neglecting the changes taking place over time. To overcome these limitations, this article incorporates all the key privacy obstacles into a qualitative system dynamics model and examines the conditions under which the privacy paradox emerges over time in the context of social media. The results show that the privacy obstacles prevent adequately accounting for the negative consequences by (1) reinforcing gratifications, thus inducing social media adoption and use, while (2) hampering the realisation of (all) negative consequences, thus reducing the motivation for social media discard. Moreover, gratifications kick off early and often seem to dominate even major long-term negative consequences, so that users become only gradually concerned about privacy, by which time they are usually too deeply engaged in the platform to consider discarding it, arriving at a paradoxical situation that seems impossible to escape (i.e., the boiling frog syndrome). Conversely, major short-term negative consequences are more likely to conflict with gratifications earlier, leaving users less engaged, more concerned, and therefore still able to discard the platform, thus resolving the paradoxical situation.
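The dynamic described here, gratification loops outpacing slowly realised harms, can be caricatured in a few lines of simulation. A deliberately minimal stock-and-flow sketch (all parameters invented for illustration; the article's actual model is qualitative):

```python
# Minimal stock-and-flow caricature of the "boiling frog" dynamic:
# engagement is reinforced by gratification immediately, while concern
# accumulates only as harms are gradually realised.
def simulate(harm_delay: float, steps: int = 200, dt: float = 0.1):
    engagement, concern = 0.1, 0.0
    for _ in range(steps):
        gratification = engagement * (1 - engagement)   # reinforcing loop, saturates
        realised_harm = engagement / harm_delay         # harms surface slowly
        engagement += dt * (gratification - concern * engagement)
        concern += dt * (realised_harm - 0.05 * concern)  # concern builds, decays
    return engagement, concern

for delay in (2.0, 20.0):                               # short- vs long-term harms
    e, c = simulate(harm_delay=delay)
    print(f"harm_delay={delay:>4}: engagement={e:.2f}, concern={c:.2f}")
# With fast-surfacing harms, concern rises while engagement is still low
# and discard remains viable; with slowly surfacing harms, engagement
# locks in before concern catches up: the paradoxical situation above.
```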
... 6 See, for example, Hull (2015) on the role of social media in the formation of self-identities. Habermas (1997). ...
Article
Full-text available
Social Media as Public Space
... Depending on the perspective taken, some of the issues covered in this dissertation, such as the wealth of sensitive inferences that can be drawn from seemingly innocuous sensor data [11,12,13] or the extent of hidden web tracking [14], may even contribute to a fatalistic sentiment by illustrating the opacity and overwhelming complexities of modern data processing. In addition, there are narratives portraying privacy as "primarily an antiquated roadblock on the path to greater innovation" [15]. ...
Chapter
Full-text available
... It can be said that data protection maintains the functional differentiation of modern societies, and thus safeguards many basic societal functions, by making structural power asymmetries a central topic (Rost 2018). Given the structural nature of data processing, societies based on a division of labor cannot, if they want to be fair, leave it to the overburdened individual to deal with every detail of information processing (Hull 2015; Kröger et al. 2021). The focus of data protection therefore lies not on the "privacy" and "privacy decisions" of the individual data subject, but on the overall structural societal effects of data processing (Bock/Engeler 2016). ...
Chapter
Full-text available
Crises shake the routines of social coexistence. At the same time, they reveal how societies function, what adaptations they are capable of, and what resources they have at their disposal. The corona crisis has shown how modern societies respond to health crises and deploy manifold technologies to ensure their continued functioning in a situation of mutual threat. This volume examines this role of technologies of crisis from theoretical, normative, and empirical perspectives, gathering contributions from sociology, computer science, ethics, and health science.
... data in society and business (#10 and #19) [33,38], other studies have emphasized concerns pertaining to privacy and ethical judgment (#38, #42, #145, and #87) [39][40][41][42]. Other scholars have highlighted the application of big data in various contexts, such as the clinical (#120) [43] and biomedical fields (#55) [44], social media (#79, #126, and #124) [45][46][47], and mobile applications (#93) [48]. ...
Article
Full-text available
This paper primarily aims to provide a citation-based method for exploring the scholarly network of artificial intelligence (AI)-related research in the information science (IS) domain, especially from Global North (GN) and Global South (GS) perspectives. Three research objectives were addressed, namely (1) the publication patterns in the field, (2) the most influential articles and researched keywords in the field, and (3) the visualization of the scholarly network between GN and GS researchers between the years 2010 and 2020. On the basis of the PRISMA statement, longitudinal research data were retrieved from the Web of Science and analyzed. Thirty-two AI-related keywords were used to retrieve relevant quality articles. Finally, 149 articles accompanying the follow-up 8838 citing articles were identified as eligible sources. A co-citation network analysis was adopted to scientifically visualize the intellectual structure of AI research in GN and GS networks. The results revealed that the United States, Australia, and the United Kingdom are the most productive GN countries; by contrast, China and India are the most productive GS countries. Next, the 10 most frequently co-cited AI research articles in the IS domain were identified. Third, the scholarly networks of AI research in the GN and GS areas were visualized. Between 2010 and 2015, GN researchers in the IS domain focused on applied research involving intelligent systems (e.g., decision support systems); between 2016 and 2020, GS researchers focused on big data applications (e.g., geospatial big data research). Both GN and GS researchers focused on technology adoption research (e.g., AI-related products and services) throughout the investigated period. Overall, this paper reveals the intellectual structure of the scholarly network on AI research and several applications in the IS literature. The findings provide research-based evidence for expanding global AI research.
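The co-citation technique at the heart of this study is straightforward to sketch: two references are linked whenever the same citing paper lists both. A minimal version (the reference lists below are hypothetical placeholders; the study itself used Web of Science records):

```python
# Minimal co-citation network: two references are co-cited when they
# appear together in the same citing paper's reference list.
from itertools import combinations
import networkx as nx

citing_papers = {                      # hypothetical placeholder data
    "paper_A": ["ref1", "ref2", "ref3"],
    "paper_B": ["ref2", "ref3"],
    "paper_C": ["ref1", "ref3"],
}

G = nx.Graph()
for refs in citing_papers.values():
    for u, v in combinations(sorted(set(refs)), 2):
        # Edge weight counts how many papers co-cite the pair.
        w = G.get_edge_data(u, v, {"weight": 0})["weight"]
        G.add_edge(u, v, weight=w + 1)

# Strongly co-cited pairs approximate the field's intellectual structure.
for u, v, d in sorted(G.edges(data=True), key=lambda e: -e[2]["weight"]):
    print(u, v, d["weight"])
```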
... As a result, many people, including court members, struggle to articulate why the protection of personal data is important [12]. This state of misinformation has severe consequences for policy and public discourse, with data protection advocates being referred to as "privacy alarmists" [13] and privacy itself being framed as "old-fashioned [,] antiprogressive, overly costly" [1] and "primarily an antiquated roadblock on the path to greater innovation" [14]. This widespread sentiment also helps legitimize and perpetuate current privacy laws, which are riddled with loopholes and fail to consistently safeguard people from harmful, abusive and ethically questionable data practices [15,16,17]. ...
Preprint
Full-text available
Even after decades of intensive research and public debates, the topic of data privacy remains surrounded by confusion and misinformation. Many people still struggle to grasp the importance of privacy, which has far-reaching consequences for social norms, jurisprudence, and legislation. Discussions on personal data misuse often revolve around a few popular talking points, such as targeted advertising or government surveillance, leading to an overly narrow view of the problem. Literature in the field tends to focus on specific aspects, such as the privacy threats posed by 'big data', while overlooking many other possible harms. To help broaden the perspective, this paper proposes a novel classification of the ways in which personal data can be used against people, richly illustrated with real world examples. Aside from offering a terminology to discuss the broad spectrum of personal data misuse in research and public discourse, our classification provides a foundation for consumer education and privacy impact assessments, helping to shed light on the risks involved with disclosing personal data.
... Compounding this, most of us lack the time to consider the implications of our digital choices, and when we do, the choices we make are highly individualised, leaving us feeling as if enrolment is inevitable, if not compulsory. Hull (2015) argues that the choice model of privacy self-management is a form of neoliberal responsibilisation, ensuring that individual privacy protection fails in ways that benefit big data companies. This form of individualisation, atomisation, and dependence is by design, as are the levels of opacity, because, as Zuboff (2019b, p. 25) says, "there can be no exit from processes that bypass individual awareness and on which we must depend on for effective daily life". ...
Article
Full-text available
The rise of big data has led to profound changes to the dynamics of accumulation and profiteering. Today, data is captured, produced, and reproduced with such regularity that its collection, utility, and value can go largely unnoticed, giving rise to “surveillance capitalism” (Zuboff, 2019a). This paper explores emerging forms of exploitation within the data economy, including the rise of “instrumentarian power” (Zuboff, 2019a), opacity surrounding data collection and use, and the impact of data breaches on our capacity to function within the information economy. We consider whether new forms of extended responsibility reporting may help to disrupt the trajectory of surveillance capitalism and democratise participation in the digital economy (Crawford, 2021). We draw on the accounting literature on organisational disclosures to consider whether the disclosure of data breaches might enhance accountability by making aspects of the surveillance economy knowable to us. Empirically, our analysis considers the various rules currently governing the disclosure of data breaches in Australia, the US, the EU, and Canada, and the application of these rules in practice. While regulation of the digital economy is developing, laws governing the disclosure of data breaches are highly dependent on an organisation’s judgement. As a consequence, the nature, scale, and timeliness of these disclosures vary significantly, and the lack of clear routines makes it difficult for stakeholders to assess data risks. In response, we consider whether a mandatory disclosure framework might contribute usefully to the public “naming and taming” of surveillance capitalism (Zuboff, 2019a) and the democratisation of our digital future.
... Although the term is used to describe one specific measure for curbing the spread of the virus during the pandemic, it is problematic because the distance referred to is not always the same: different distances are mentioned, and keeping that distance requires constant recalculation and individual judgment. Different countries define the safe distance differently: Britain prescribes a distance of two metres, as do the United States (six feet), while Germany prescribes 1.5 metres, and the World Health Organization requires keeping at least one metre away from other people in public space. In Serbia, the officially prescribed distance is 1-2 metres "between yourself and an infected person" on the state website dedicated to the COVID-19 epidemic, but health workers in public appearances mention a social distance of two metres, while the media sometimes mention even larger distances depending on whether people in public space are standing or moving. ...
Book
Full-text available
The internet is a multifaceted social network where people live their private and public lives. A place for work and leisure, it provides spaces for all kinds of activities from running a business to playing video games and hanging out with friends. But because it is a privately-owned public space under constant surveillance, can we even speak about privacy? As legal definitions of privacy are shifting from the concept of confidentiality to partial control over personal data, we are abandoning the hope of claiming the right to privacy in our everyday lives. This ethnographic research shows how people operate with a weaker concept of privacy settings derived from social media. Having capitulated to surveillance capitalism, they no longer count on having full control of their digital doubles and are merely holding on to crumbles of privacy. Their strategies are reduced to defending provisional and temporary borders between private and public.
Chapter
The objective is to place the data subject in focus when deciding the course of action that protects personal information and prevents its misuse. Often, data controllers and those processing personal data work towards compliance with prescribed norms. These norms are set according to the legislative interventions and guidelines that jurisdictions provide. While there is common ground across different jurisdictions, each has added its own understanding in framing data protection norms.
Article
In the digital age, technology is integral to daily life, significantly impacting areas such as art, entertainment, healthcare and education. This paper explores the ethical aspects of digital content creation, focusing on the responsibilities of creators in shaping public opinion and behaviour. Key issues addressed include misinformation, digital harassment, privacy breaches, and the commercialisation of personal experiences. By reviewing existing literature and emphasising the importance of ethical digital practices, this study aims to contribute to a more responsible and ethical digital landscape. The main research question investigates whether ethical principles should guide digital content creation and dissemination.
Article
Relatively little work brings together Foucault and epistemic injustice. This article works through Miranda Fricker’s attempt to position herself between Marx and Foucault. Foucault repeatedly emphasizes the importance of beginning with “structures” rather than “subjects.” Reading Foucault’s critique of Marxism shows that Fricker’s account comes very close to the standpoint theories it tries to avoid. Foucault’s emphasis on structures explains some of the gaps in Fricker’s account of hermeneutical injustice, especially the need to emphasize the embeddedness of epistemic practices in institutions, and their resulting irreducibly political nature. In both cases, this article offers contemporary examples taken from data and privacy regulations.
Chapter
This chapter explores the principles and frameworks of human-centered Artificial Intelligence (AI), specifically focusing on user modeling, adaptation, and personalization. It introduces a four-dimensional framework comprising paradigms, actors, values, and levels of realization that should be considered in the design of human-centered AI systems. This framework highlights a perspective-taking approach with four lenses of technology-centric, user-centric, human-centric, and future-centric perspectives. Ethical considerations, transparency, fairness, and accountability, among others, are highlighted as values when developing and deploying AI systems. The chapter further discusses the corresponding human values for each of these concepts. Opportunities and challenges in human-centered AI are examined, including the need for interdisciplinary collaboration and the complexity of addressing diverse perspectives. Human-centered AI provides valuable insights for designing AI systems that prioritize human needs, values, and experiences while considering ethical and societal implications.
Article
Long considered an object of the law, Americans increasingly encounter privacy via the operations and settings of networked technologies. Based on ethnographic fieldwork with privacy engineers and their corporate colleagues, this paper examines how privacy’s manifestation in web technologies opens it to pragmatic linkages with new sensuous qualities and interpretive possibilities. The paper’s primary object is Project Quantum, a company-wide effort initiated by the Mozilla Corporation in 2016 to build a new engine for its Firefox web browser, thus radically improving Firefox’s “performance.” I show that animating Project Quantum was an ideal of smooth, snappy performance that Firefox engineers understood to be keyed to the demands of user attention and attempted to establish for users as meaningful signs of Mozilla’s engineering prowess and paternalistic care. Drawing on the study of “qualia”—phenomenal experiences of abstract qualities—I identify performance engineering as a key site of privacy’s semiotic bundling with attention, through which privacy’s practically available forms are taking on idealized qualities of speed and smoothness. Attending to privacy’s qualia, I propose, provides methodological access to the institutional and global value systems reconfiguring privacy’s political capacities as it becomes an object of technological stewardship and intervention.
Article
Based on a convention held in Japan in 2016, the present is considered the era of Society 5.0. Society 5.0 is the fifth form of society, following (in succession) the eras of hunting, agriculture and animal husbandry, industry, and information. It was triggered by the Industry 4.0 revolution, marked by Internet of Things technology, artificial intelligence, 3D printing, augmented reality, and blockchain. Society 5.0, the super smart society, is a society intensively supported by Internet of Things technology and artificial intelligence. Against this background, this research aims to identify the prerequisites that people living in the era of Society 5.0 must have with respect to awareness of privacy security in the era of the Internet of Things. The second step is to examine the current condition of Indonesian society. The study seeks to determine whether there is a relationship between Society 5.0 and privacy awareness in Indonesia. Primary data were collected using a questionnaire distributed online over six weeks, to ensure that the data gathered for this research were sufficient. The data analysis technique used in this study is a correlation test examining the relationship between privacy awareness and membership in Society 5.0. Keywords: Society 5.0, Privacy Awareness, Cybersecurity
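The analysis step described, a correlation test between questionnaire scores, amounts to something like the following (synthetic scores stand in for the actual survey data):

```python
# Correlation between Society 5.0 readiness and privacy awareness
# scores, as described in the abstract. Synthetic data stand in for
# the actual questionnaire responses.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
society50_score = rng.normal(3.5, 0.6, 120)        # e.g. 1-5 Likert means
privacy_awareness = 0.5 * society50_score + rng.normal(1.5, 0.5, 120)

r, p = pearsonr(society50_score, privacy_awareness)
print(f"r = {r:.2f}, p = {p:.3g}")  # reject H0 of no correlation if p < 0.05
```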
Article
Full-text available
The Digital Markets Act (DMA) captures gatekeeper power to address the lack of contestability and unfairness in digital markets. Its provisions imbricate into the regulatory landscape bearing in mind complementarity regarding other acts of Union law which also apply to certain aspects of the digital arena, namely the General Data Protection Regulation (GDPR) or the e-Privacy Directive. The DMA does not override the provisions of these rules, although the practical implementation of its do’s and don’ts will question the value of non-economic interests which have been at the forefront of EU policy at large in their interaction with digital business models. In the particular case of the intersection between privacy and antitrust, Articles 5(2) and 6(10) of the DMA stand out as the two key areas where the interpretation of the GDPR will play a major role, namely through the force of consent, legal basis, and user choice. Although both provisions impose negative and positive obligations on personal data, their role is tempered when the user is presented with a specific choice and grants consent to the gatekeeper to combine and use personal data. The paper analyses the potential implications of both provisions in light of the existence of power and information asymmetries between gatekeepers and end users. The paper navigates the cases that have inspired the framework of the DMA in this regard, from an antitrust and data protection perspective. The paper identifies that the interaction between the concept of consent and the massive collection and processing of personal data is designed according to a circular concept. The DMA builds up its provisions on Articles 5 and 6 on the same premise. The paper identifies the circularity which the DMA’s enforcers might incur when enforcing the regulatory instrument.
Article
Extant literature has proposed an important role for trust in moderating people’s willingness to disclose personal information, but there is scant HCI literature that deeply explores the relationship between privacy and trust in apparent privacy paradox circumstances. Attending to this gap, this paper reports a qualitative study examining how people account for continuing to use services that conflict with their stated privacy preferences, and how trust features in these accounts. Our findings undermine the notion that individuals engage in strategic thinking about privacy, raising important questions regarding the explanatory power of the well-known privacy calculus model and its proposed relationship between privacy and trust. Finding evidence of hopeful trust in participants’ accounts, we argue that trust allows people to morally account for their ‘paradoxical’ information disclosure behavior. We propose that effecting greater alignment between people’s privacy attitudes and privacy behavior—or ‘un-paradoxing privacy’—will require greater regulatory assurances of privacy.
Article
Although a wealth of consumer research literature has examined privacy, the majority of this research has been conducted from a micro-economic or psychological perspective. This has led to a rather narrow view of consumer privacy, which ignores the larger socio-cultural forces at play. This paper suggests a shift in research perspective by adopting a consumer culture theory approach. This allows an in-depth look into the micro, meso and macro levels of analysis to explore privacy as a subjective, lived experience but also as a representation of cultural meanings that are further shaped by marketplace actors. The paper synthesizes how privacy has been conceptualized within consumer theory and advances three necessary shifts in research focus: from (1) prediction to experience, (2) causality to systems and (3) outcome to process. Specific theories or focus areas are explored within these shifts, which are then utilized to build a future research agenda.
Chapter
The vast majority of Australian children own a smartphone. High rates of smartphone ownership are associated with high rates of leakage of sensitive information. A child’s time and location patterns are enough to enable someone to build an accurate profile of the child. But children think that their devices already ensure that their sensitive information is secure. The aim of this study was to use off-the-shelf computing devices to educate school age children about leakage of sensitive information from IoT devices. A Distributed Sensor Network (DSN) was assembled and installed around a high school campus in Australia to measure leakage from IoT devices. Children were then informed of the results of the DSN monitoring during an online safety lesson, which trained them on how to change common default device settings to reduce data leakage. The DSN then again measured the amount of leakage from IoT devices to see if children had modified their device settings to reduce leakage of sensitive information. The results of the study revealed that the amount of data leaked from smartphones after the intervention was significantly less than the traffic captured before the intervention, confirming that the intervention changed children’s behaviour. It is recommended that this evidence-based program be expanded to other high schools in Australia to empower children to secure their sensitive information. Keywords: Distributed sensor network, IoT, Smartphones, Children, Privacy
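The before/after comparison this study reports is, in statistical terms, a paired test on per-device leakage. A hedged sketch (the byte counts are invented placeholders, and the study's actual measurement pipeline and test statistic may differ):

```python
# Paired comparison of data leakage per device before and after the
# intervention, mirroring the DSN study's pre/post design.
import numpy as np
from scipy.stats import ttest_rel

pre_leak_kb = np.array([420, 388, 510, 610, 295, 450, 380, 530])   # per device
post_leak_kb = np.array([210, 190, 340, 300, 180, 260, 220, 310])  # same devices

t, p = ttest_rel(pre_leak_kb, post_leak_kb)
print(f"t = {t:.2f}, p = {p:.4f}")
# A small p-value supports the claim that post-intervention leakage
# is significantly lower than pre-intervention leakage.
```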
Book
The Federal Trade Commission, a US agency created in 1914 to police the problem of 'bigness', has evolved into the most important regulator of information privacy - and thus innovation policy - in the world. Its policies profoundly affect business practices and serve to regulate most of the consumer economy. In short, it now regulates our technological future. Despite its stature, however, the agency is often poorly understood by observers and even those who practice before it. This volume by Chris Jay Hoofnagle - an internationally recognized scholar with more than fifteen years of experience interacting with the FTC - is designed to redress this confusion by explaining how the FTC arrived at its current position of power. It will be essential reading for lawyers, legal academics, political scientists, historians and anyone else interested in understanding the FTC's privacy activities and how they fit in the context of the agency's broader consumer protection mission.
Chapter
Solove (2006) states that privacy cannot be considered independently from society, as “a need for privacy is a socially created need”. Privacy is a challenging, vague and multifaceted concept; nevertheless, for decades researchers from the social, psychological and computer science disciplines have focused on various aspects of privacy interpretation and conceptualisation. Altman (1977) treats privacy as a set of behavioural mechanisms, found in all cultures, used to regulate desired levels of privacy. Altman defines privacy’s functions as: management of social interaction; establishment of plans and strategies for interacting with others; and development and maintenance of self-identity.
Chapter
This chapter describes current information and research marketing practices with AI. More or less covertly, marketers collect, aggregate, and analyse consumers’ data from different offline and online sources and use intelligent algorithmic systems to derive new knowledge about their preferences and behaviours. Consumers know little about how marketers operate with their data, what knowledge is extracted, and with whom it is shared. A new, profound asymmetry emerges between consumers and marketers, where power relates not only to the market and contracts but also to the possibility of knowing consumers’ lives. Existing privacy and data protection rules succeed only to a limited extent in limiting surveillance practices and reducing these asymmetries. The main reasons for this are found in the limited effectiveness of the dominant protection paradigm of informed consent and in the conceptualisation of information as personal data.
Book
Full-text available
This book addresses the challenges of regulating the development of Artificial Intelligence from a human rights perspective. The starting point for the analysis is the recognition that scientific and technological progress is shaping a new scientific paradigm based on intensive data exploration. Consequently, the availability of data becomes a key issue for ensuring equal enjoyment of the benefits of scientific progress and its applications. For this reason, the main aim of the book is to determine the state's obligations arising from the rights relating to science (Article 15 of the International Covenant on Economic, Social and Cultural Rights) with regard to data availability. The interpretation of these obligations is preceded by an identification of the gaps that arise in attempts to regulate data availability from the perspectives of the right to privacy, the right to freedom of expression, and the right to the protection of creators' interests, the perspectives that dominate doctrine and law-making practice. Universal (UN), regional (primarily EU), and national (the Polish Constitution and legislation) regulations are analysed. The final part of the monograph assesses Poland's fulfilment of these obligations; to this end, the author proposes his own interpretation of the criteria for fulfilling the obligations to ensure data availability, developing the methodology of the competent UN treaty body.
Article
The dominant legal and regulatory approach to protecting information privacy is a form of mandated disclosure commonly known as “notice-and-consent.” Many have criticized this approach, arguing that privacy decisions are too complicated, and privacy disclosures too convoluted, for individuals to make meaningful consent decisions about privacy choices—decisions that often require us to waive important rights. While I agree with these criticisms, I argue that they only meaningfully call into question the “consent” part of notice-and-consent, and that they say little about the value of notice. We ought to decouple notice from consent, and imagine notice serving other normative ends besides readying people to make informed consent decisions.
Article
Full-text available
‘Feminist data protection’ is not an established term or field of study: data protection discourse is dominated by doctrinal legal and economic positions, and feminist perspectives are few and far between. This editorial introduction summarises a number of recent interventions in the broader fields of data sciences and surveillance studies, then turns to data protection itself and considers how it might be understood, critiqued and possibly reimagined in feminist terms. Finally, the authors return to ‘feminist data protection’ and the different directions in which it might be further developed, as a feminist approach to data protection, as the protection of feminist data, and as a feminist way of protecting data, and provide an overview of the papers included in the present special issue.
Article
Social media companies wield power over their users through design, policy, and through their participation in public discourse. We set out to understand how companies leverage public relations to influence expectations of privacy and privacy-related norms. To interrogate the discourse productions of companies in relation to privacy, we examine the blogs associated with three major social media platforms: Facebook, Instagram (both owned by Facebook Inc.), and Snapchat. We analyze privacy-related posts using critical discourse analysis to demonstrate how these powerful entities construct narratives about users and their privacy expectations. We find that each of these platforms often make use of discourse about "vulnerable" identities to invoke relations of power, while at the same time, advancing interpretations and values that favor data capitalism. Finally, we discuss how these public narratives might influence the construction of users' own interpretations of appropriate privacy norms and conceptions of self. We contend that expectations of privacy and social norms are not simply artifacts of users' own needs and desires, but co-constructions that reflect the influence of social media companies themselves.
Article
This systematic review of research on online student privacy in higher education used a strategic search process to synthesize the literature based on publication trends, research methods, online spaces studied, and the focus of the research. A total of 41 articles were analyzed to explore the existing literature on the topic. Most of the articles utilized undergraduate students as research subjects, and a majority focused on social media as the site of online privacy interest. Few studies considered the impact of faculty on student privacy in online spaces or included online collaboration tools as an area of privacy concern for students. The most common focus of research found among the articles was online privacy behavior. This theme is closely related to the theme of the privacy paradox, which was analyzed in previous related systematic reviews. The results of this systematic review have implications for future research in online instruction and student privacy.
Article
Smart devices with the capability to record audio can create a trade-off for users between convenience and privacy. To understand how users experience this trade-off, we report on data from 35 interview, focus group, and design workshop participants. Participants' perspectives on smart-device audio privacy clustered into the pragmatist, guardian, and cynic perspectives that have previously been shown to characterize privacy concerns in other domains. These user groups differed along four axes in their audio-related behaviors (for example, guardians alone say they often move away from a microphone when discussing a sensitive topic). Participants surfaced three usage phases that require design consideration with respect to audio privacy: 1) adoption, 2) in-the-moment recording, and 3) downstream use of audio data. We report common design solutions that participants created for each phase (such as indicators showing when an app is recording audio and annotations making clear when an advertisement was selected based on past audio recording).
Article
Full-text available
The discrepancy between informational privacy attitudes and actual behaviour of consumers is called the “privacy paradox”. Researchers across disciplines have formulated different theories on why consumers’ privacy concerns do not translate into increased protective behaviour. Over the past two decades multiple differing explanations for the paradox have been published. However, authors generally agree that companies are in a strong position to reduce consumers’ paradoxical behaviour by improving their customers’ informational privacy. Hence, this paper aims at answering the question: How can companies address the privacy paradox to improve their customers’ information privacy? Reviewing a sample of improvement recommendations from 138 papers that explore 41 theories in total, we determined that companies can generally align their privacy practices more closely with customers’ expectations across 4 inter-connected managerial processes: (1) strategic initiatives, (2) structural improvements, (3) human resource management, and (4) service development. The findings of this systematic literature review detail how companies can address both the rational and irrational nature of the privacy decision-making process. Furthermore, we propose a dynamic model able to identify weaknesses and strengths in companies’ privacy orientation.
Research
Full-text available
One often criticises the fact that law in the books is disconnected from law in action. There is no other area where this criticism is more justified than in data protection law, not least because it is unable to keep abreast of the rapidly evolving technologies and pervasive data collection. In fact, data protection law cannot be expected to give users control over their personal data in these days of profiling and Big Data, nor to tackle the negative impacts on data subjects. Relying on high-level principles encumbers the objectives of data protection in addressing and solving the actual problems. By increasing the specificity of data processing in data protection law, its scope is expanded, resulting in the ‘framing [of] Internet-related problems as data protection problems’. Consent remains essential in data protection policies notwithstanding its limitations. Koops, in his criticism of the ‘mythology of consent’, asks why ‘the conclusion is too seldom drawn that consent is simply not a suitable approach to legitimate data processing in online contexts’.
Article
Full-text available
This paper considers debates around neoliberal governmentality, and argues for the need to better theorize the specific ethical practices through which such programs of governmentality are carried out. Arguing that much theoretical and empirical work in this area is prone to a "top down" approach, in which governmentality is reduced to an imposing apparatus through which subjectivities are produced, it argues instead for the need to understand the self-production of subjectivities by considering the ethical practices that make up neoliberal governmentality. Moreover, taking Robert T. Kiyosaki's Rich Dad/Poor Dad as an illustrative case, the point is made that the work of neoliberal governmentality specifically targets the temporalities of conduct, in an attempt to shape temporal orientations in a more entrepreneurial form. Drawing on Foucault's lecture courses on liberalism and neoliberalism, and Jacques Donzelot's work on the social, the case is made that neoliberal governmentality exhorts individuals to act upon the residual social temporalities that persist as a trace in the dispositions of neoliberal subjects. The paper concludes with a discussion of the potentials for resistance in this relation, understood as temporal counter-conducts within neoliberalism.
Chapter
Full-text available
During the past decade, rapid developments in information and communications technology have transformed key social, commercial and political realities. Within that same time period, working at something less than internet speed, much of the academic and policy debates arising from these new and emerging technologies have been fragmented. There have been few examples of interdisciplinary dialogue about the potential for anonymity and privacy in a networked society. Lessons from the Identity Trail fills that gap, and examines key questions about anonymity, privacy and identity in an environment that increasingly automates the collection of personal information and uses surveillance to reduce corporate and security risks. This project has been informed by the results of a multi-million dollar research project that has brought together a distinguished array of philosophers, ethicists, feminists, cognitive scientists, lawyers, cryptographers, engineers, policy analysts, government policy makers and privacy experts. Working collaboratively over a four-year period and participating in an iterative process designed to maximize the potential for interdisciplinary discussion and feedback through a series of workshops and peer review, the authors have integrated crucial public policy themes with the most recent research outcomes.
Article
Full-text available
While much attention is given to young people’s online privacy practices on sites like Facebook, current theories of privacy fail to account for the ways in which social media alter practices of information-sharing and visibility. Traditional models of privacy are individualistic, but the realities of privacy reflect the location of individuals in contexts and networks. The affordances of social technologies, which enable people to share information about others, further preclude individual control over privacy. Despite this, social media technologies primarily follow technical models of privacy that presume individual information control. We argue that the dynamics of sites like Facebook have forced teens to alter their conceptions of privacy to account for the networked nature of social media. Drawing on their practices and experiences, we offer a model of networked privacy to explain how privacy is achieved in networked publics.
Article
Full-text available
This paper illustrates the relevance of Foucault's analysis of neoliberal governance for a critical understanding of recent transformations in individual and social life in the United States, particularly in terms of how the realms of the public and the private and the personal and the political are understood and practiced. The central aim of neoliberal governmentality ("the conduct of conduct") is the strategic creation of social conditions that encourage and necessitate the production of Homo economicus, a historically specific form of subjectivity constituted as a free and autonomous "atom" of self-interest. The neoliberal subject is an individual who is morally responsible for navigating the social realm using rational choice and cost-benefit calculations grounded on market-based principles to the exclusion of all other ethical values and social interests. While the more traditional forms of domination and exploitation characteristic of sovereign and disciplinary forms of power remain evident in our "globalized" world, the effects of subjectification produced at the level of everyday life through the neoliberal "conduct of conduct" recommend that we recognize and invent new forms of critique and ethical subjectivation that constitute resistance to its specific dangers.
Article
Full-text available
The essay identifies workfare as the exemplary form of contingent labor practice inasmuch as it blurs the boundaries between the free and unfree labor contract, welfare and work, flexibility and compulsion. However, analyses of workfare have too often ignored the centrality of sexual politics to the tendencies of welfare and labor reform. Arguing that neoliberal labor practice is inseparable from the theological project of neopaternalist social policy, the essay casts a critical eye on the moral orthodoxies of recent capitalist critique (Luc Boltanski and Eve Chiapello, Alain Badiou, Slavoj Žižek). The theological turn of recent anticapitalist theory is merely the refracted expression of faith-based workfare and is entirely complicit with the restorative moment of capital’s double movement.
Article
Full-text available
The privacy paradox describes people's willingness to disclose personal information on social network sites despite expressing high levels of concern. In this study, we employ the distinction between institutional and social privacy to examine this phenomenon. We investigate what strategies undergraduate students have developed, and their motivations for using specific strategies. We employed a mixed-methods approach that included 77 surveys and 21 in-depth interviews. The results suggest that, in addition to using the default privacy settings, students have developed a number of strategies to address their privacy needs. These strategies are used primarily to guard against social privacy threats and consist of excluding contact information, using the limited profile option, untagging and removing photographs, and limiting Friendship requests from strangers. Privacy strategies are geared toward managing the Facebook profile, which we argue functions as a front stage. This active profile management allows users to negotiate the need for connecting on Facebook with the desire for increased privacy. Thus, users disclose information, because they have made a conscious effort to protect themselves against potential violations. We conclude that there is a tilt toward social privacy concerns. Little concern was raised about institutional privacy and no strategies were in place to protect against threats from the use of personal data by institutions. This is relevant for policy discussions, because it suggests that the collection, aggregation, and utilization of personal data for targeted advertisement have become an accepted social norm.
Article
Full-text available
We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.
Article
Full-text available
Within a given conversation or information exchange, do privacy expectations change based on the technology used? Firms regularly require users, customers, and employees to shift existing relationships onto new information technology, yet little is known about how technology impacts established privacy expectations and norms. Coworkers are asked to use new information technology, users of Gmail are asked to use Google Buzz, patients and doctors are asked to record health records online, etc. Understanding how privacy expectations change, if at all, and the mechanisms by which such a variance is produced will help organizations make such transitions. This paper examines whether and how privacy expectations change based on the technological platform of an information exchange. The results suggest that privacy expectations are significantly distinct when the information exchange is located on a novel technology as compared to a more established technology. Furthermore, this difference is best explained when modeled by a shift in privacy expectations rather than fully technology-specific privacy norms. These results suggest that privacy expectations online are connected to privacy offline, but with a different base privacy expectation. Surprisingly, out of the five locations tested, respondents consistently assign information on email the greatest privacy protection. In addition, while undergraduate students differ from non-undergraduates when assessing a social networking site, no difference is found when judging an exchange on email. In sum, the findings suggest that novel technology may introduce temporary conceptual muddles rather than permanent privacy vacuums. The results reported here challenge conventional views about how privacy expectations differ online versus offline. Traditionally, management scholarship examines privacy online or with a specific new technology platform in isolation and without reference to the same information exchange offline. However, in the present study, individuals appear to have a shift in their privacy expectations but retain similar factors and their relative importance—the privacy equation by which they form judgments—across technologies. These findings suggest that privacy scholarship should make use of existing privacy norms within contexts when analyzing and studying privacy in a new technological platform.
Article
Full-text available
The era of Big Data has begun. Computer scientists, physicists, economists, mathematicians, political scientists, bio-informaticists, sociologists, and other scholars are clamoring for access to the massive quantities of information produced by and about people, things, and their interactions. Diverse groups argue about the potential benefits and costs of analyzing genetic sequences, social media interactions, health records, phone logs, government records, and other digital traces left by people. Significant questions emerge. Will large-scale search data help us create better tools, services, and public goods? Or will it usher in a new wave of privacy incursions and invasive marketing? Will data analytics help us understand online communities and political movements? Or will it be used to track protesters and suppress speech? Will it transform how we study human communication and culture, or narrow the palette of research options and alter what ‘research’ means? Given the rise of Big Data as a socio-technical phenomenon, we argue that it is necessary to critically interrogate its assumptions and biases. In this article, we offer six provocations to spark conversations about the issues of Big Data: a cultural, technological, and scholarly phenomenon that rests on the interplay of technology, analysis, and mythology that provokes extensive utopian and dystopian rhetoric.
Article
Full-text available
Not all Facebook users appreciated the September 2006 launch of the 'News Feeds' feature. Concerned about privacy implications, thousands of users vocalized their discontent through the site itself, forcing the company to implement privacy tools. This essay examines the privacy concerns voiced following these events. Because the data made easily visible were already accessible with effort, what disturbed people was primarily the sense of exposure and invasion. In essence, the 'privacy trainwreck' that people experienced was the cost of social convergence.
Article
Full-text available
The publication of Michel Foucault’s lectures at the Collège de France in the late 1970s has provided new insight into crucial developments in his late work, including the return to an analysis of the state and the introduction of biopolitics as a central theme. According to one dominant interpretation, these shifts did not entail a fundamental methodological break; the approach Foucault developed in his work on knowledge/power was simply applied to new objects. The present article argues that this reading — which is colored by the overwhelming privilege afforded to Discipline and Punish in secondary literature — obscures an important modification in Foucault’s method and diagnostic style that occurred between the introduction of biopolitics in 1976 (in Society Must Be Defended) and the lectures of 1978 (Security, Territory, Population) and 1979 (Birth of Biopolitics). Foucault’s initial analysis of biopolitics was couched in surprisingly epochal and totalizing claims about the characteristic forms of power in modernity. The later lectures, by contrast, suggest what I propose to call a 'topological' analysis that examines the 'patterns of correlation' in which heterogeneous elements — techniques, material forms, institutional structures and technologies of power — are configured, as well as the redeployments through which these patterns are transformed. I also indicate how attention to the topological dimension of Foucault’s analysis might change our understanding of key themes in his late work: biopolitics, the analysis of thinking, and the concept of governmentality.
Article
Full-text available
Users of social networking sites (SNSs) increasingly must learn to negotiate privacy online with multiple service providers. Facebook's third-party applications (apps) add an additional layer of complexity and confusion for users seeking to understand and manage their privacy. We conducted a novel exploratory survey (conducted on Facebook as a Platform app) to measure how Facebook app users interact with apps, what they understand about how apps access and exchange their profile information, and how these factors relate to their privacy concerns. In our analysis, we paid special attention to our most knowledgeable respondents: given their expertise, would they differ in behaviors or attitudes from less knowledgeable respondents? We found that misunderstandings and confusion abound about how apps function and how they manage profile data. Against our expectations, neither knowledge nor behavior was a consistent predictor of privacy concerns with third-party apps or on SNSs in general. Instead, whether or not the respondent had experienced an adverse privacy event on a social networking site was a reliable predictor of privacy attitudes.
Article
Full-text available
The sharing of personal data has emerged as a popular activity over online social networking sites like Facebook. As a result, the issue of online social network privacy has received significant attention in both the research literature and the mainstream media. Our overarching goal is to improve defaults and provide better tools for managing privacy, but we are limited by the fact that the full extent of the privacy problem remains unknown; there is little quantification of the incidence of incorrect privacy settings or the difficulty users face when managing their privacy. In this paper, we focus on measuring the disparity between the desired and actual privacy settings, quantifying the magnitude of the problem of managing privacy. We deploy a survey, implemented as a Facebook application, to 200 Facebook users recruited via Amazon Mechanical Turk. We find that 36% of content remains shared with the default privacy settings. We also find that, overall, privacy settings match users' expectations only 37% of the time, and when incorrect, almost always expose content to more users than expected. Finally, we explore how our results have potential to assist users in selecting appropriate privacy settings by examining the user-created friend lists. We find that these have significant correlation with the social network, suggesting that information from the social network may be helpful in implementing new tools for managing privacy.
Article
Full-text available
We show that easily accessible digital records of behavior, Facebook Likes, can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. The analysis presented is based on a dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests. The proposed model uses dimensionality reduction for preprocessing the Likes data, which are then entered into logistic/linear regression to predict individual psychodemographic profiles from Likes. The model correctly discriminates between homosexual and heterosexual men in 88% of cases, African Americans and Caucasian Americans in 95% of cases, and between Democrat and Republican in 85% of cases. For the personality trait "Openness," prediction accuracy is close to the test-retest accuracy of a standard personality test. We give examples of associations between attributes and Likes and discuss implications for online personalization and privacy.
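The modeling pipeline described here is compact enough to sketch. The following is a minimal illustration in Python, not the authors' code: synthetic random data stands in for the 58,000-volunteer Likes dataset, and scikit-learn's TruncatedSVD and LogisticRegression are assumed stand-ins for the paper's dimensionality-reduction and regression steps.

```python
# Minimal sketch of the Likes-based prediction pipeline: dimensionality
# reduction on a sparse user-Like matrix, then logistic regression on a
# binary attribute. Synthetic data stands in for the real dataset; all
# names here are illustrative assumptions, not the paper's code.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_users, n_likes = 2000, 5000

# Binary user-Like matrix: entry (i, j) = 1 if user i "Liked" page j.
X = sparse_random(n_users, n_likes, density=0.01, random_state=0).tocsr()
X.data[:] = 1.0

# A binary psychodemographic attribute; generated at random here
# purely so the example runs end to end.
y = rng.integers(0, 2, size=n_users)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Reduce the Likes matrix to 100 latent components, then fit a
# logistic regression on the components (the basic recipe above).
model = make_pipeline(
    TruncatedSVD(n_components=100, random_state=0),
    LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```

On real Likes data the reported accuracies come from informative structure in the matrix; on the random labels used here the held-out AUC hovers near 0.5 by construction.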
Article
Full-text available
As everyday life is increasingly conducted online, and as the electronic world continues to move out into the physical, the privacy of information and action and the security of information systems are increasingly a focus of concern both for the research community and the public at large. Accordingly, privacy and security are active topics of investigation from a wide range of perspectives-institutional, legislative, technical, interactional, and more. In this article, we wish to contribute toward a broad understanding of privacy and security not simply as technical phenomena but as embedded in social and cultural contexts. Privacy and security are difficult concepts to manage from a technical perspective precisely because they are caught up in larger collective rhetorics and practices of risk, danger, secrecy, trust, morality, identity, and more. Reductive attempts to deal with these issues separately produce incoherent or brittle results. We argue for a move away from narrow views of privacy and security and toward a holistic view of situated and collective information practice.
Article
Full-text available
The key tenets of neo-liberalism regarding risk, governance, and responsibility are critically evaluated through an empirical study of the private insurance industry. Recent tendencies in this industry towards increasing segmentation of consumers regarding risk, and towards an expansion of private policing of insurance fraud, are analysed. The definition of moral hazard is broadened to include all parties in the insurance relationship, not just the insured. Moral hazards embedded in the social organization of private insurance lead to various kinds of immoral risky behaviour by insureds, insurance companies, and their employees, and to intensified efforts to regulate this behaviour. The analysis concludes with some critical observations about the neo-liberal emphasis on minimal state, market fundamentalism, risk-taking, individual responsibility, and acceptance of inequality.
Article
Many important public health, health research, and treatment goals can be furthered by the use of health information. Privacy protection, as the EU recognizes, is an important aspect for facilitating the use and transfer of this information. All too often, privacy is seen as a barrier to information use, but without adequate assurances that data will be safeguarded appropriately, there are risks that data will not be shared at all. Providing such assurances is a primary impetus for the EU’s revision of its current data protection law. As the EU moves towards adopting and implementing its proposed regulatory changes, it will be important to monitor the impacts on public health and medical research both within and beyond the borders of its member states. For the US, one concern will be whether the EU’s increasingly stringent privacy protections will require revision of the safe harbor framework. There may also be growing divergence between the EU’s overarching approach to the protection of personal information and the US’s segmented approach that imposes very different restrictions on information within and outside of the HIPAA rules. Those interested in the responsible use of health information in the US might be well advised to watch how the EU’s proposed Regulation plays out in the treatment of health information, especially the implementation of the provisions of Articles 81 and 83 governing the use of information for public health, improvement of health care, and medical research.
Article
Can the government stick us with privacy we don't want? It can, it does, and according to this author, it may need to do more of it. Privacy is a foundational good, she argues, a necessary tool in the liberty-lover's kit for a successful life. A nation committed to personal freedom must be prepared to mandate inalienable, liberty-promoting privacies for its people, whether they eagerly embrace them or not. The eight chapters of this book are reflections on public regulation of privacy at home; isolation and confinement for punitive and health reasons; religious modesty attire; erotic nudity; workplace and professional confidentiality; racial privacy; online transactions; social networking; and the collection, use and storage of electronic data. Most books about privacy law focus on rules designed to protect popular forms of privacy. Popular privacy is the kind that people tend to want, believe they have a right to, and expect governments to secure. Typical North Americans and Europeans embrace privacy for home-life, telephone calls, e-mail, health records, and financial transactions. This unique book draws attention to unpopular privacy: privacies disvalued or disliked by their intended beneficiaries and targets, and the best reasons for imposing them. Examples of unwanted physical and informational privacies with which contemporary Americans have already lived? Start with laws designed to keep website operators from collecting personal information from children under 13 without parental consent; the anti-nudity laws that force strippers to wear pasties and thongs; the 'Don't Ask, Don't Tell' rules that kept gays out of the US military; and the myriad employee and professional confidentiality rules, including insider trading laws, that require strict silence about matters whose disclosure could earn us small fortunes. Conservative and progressive liberals agree that coercion and paternalism should be the exceptions rather than the rule. Better to educate, incentivize and nudge than to force. But what if people continue to make self-defeating bad choices? What are the exceptional circumstances that warrant coercion, and in particular, coercing privacy? When can government turn privacies into duties, especially duties of self-care? Early modern societies went wrong, imposing unequal conditions of forced modesty and confinement on women and other groups, giving privacy and imposed privacies a bad rap. But now may be a time for imposed privacies of another sort: imposed privacies that are liberating rather than dominating. A role for coercive and paternalistic regulation may be called for in view of the Great Privacy Give-Away. The public turns over vast amounts of personal information in exchange for the ease of online shopping, browsing and social networking, protected in some instances by little more than a pro forma privacy policy pasted on a home page. The public uploads and stores information 'in the cloud,' and has become more and more dependent upon electronic telecommunications and personal archiving exposed to public and private surveillance. Have they lost the taste for privacy? Do they fail to understand the implications of what is happening? This book offers insight into the ethical and political underpinnings of public policies mandating privacies that people may be indifferent to or despise. Privacy institutions and practices play a role in sustaining the capable free agents presupposed by liberal democracy. Physical sanctuaries and data protection by law confer and preserve opportunities for making and acting on choices. Imposing privacy recognizes the extraordinary importance of dignity, reputation, confidential relationships, and preserving social, economic and political options throughout a lifetime.
Article
The proposed General Data Protection Regulation has created many myths in the legal policy debate, sustaining an illusion of a higher level of protection than will actually result. Data subjects are not empowered with respect to consent and rights. Harmonization will improve but will not be ideal. The text of the Regulation cannot be understood by ordinary data subjects and controllers.
Article
The trouble with European data protection law, as with Alfred Hitchcock's Harry, is that it is dead. The current legal reform will fail to revive it, since its three main objectives are based on fallacies. The first fallacy is the delusion that data protection law can give individuals control over their data, which it cannot. The second is the misconception that the reform simplifies the law, while in fact it makes compliance even more complex. The third is the assumption that data protection law should be comprehensive, which stretches data protection to the point of breaking and makes it meaningless law in the books. Unless data protection reform starts looking in other directions—going back to basics, playing other regulatory tunes on different instruments in other legal areas, and revitalising the spirit of data protection by stimulating best practices—data protection will remain dead. Or, worse perhaps, a zombie.
Book
Who owns your genetic information? Might it be the doctors who, in the course of removing your spleen, decode a few cells and turn them into a patented product? In 1990 the Supreme Court of California said yes, marking another milestone on the information superhighway. This extraordinary case is one of the many that James Boyle takes up in Shamans, Software, and Spleens, a timely look at the infinitely tricky problems posed by the information society. Discussing topics ranging from blackmail and insider trading to artificial intelligence (with good-humored stops in microeconomics, intellectual property, and cultural studies along the way), Boyle has produced a work that can fairly be called the first social theory of the information age. Now more than ever, information is power, and questions about who owns it, who controls it, and who gets to use it carry powerful implications. These are the questions Boyle explores in matters as diverse as autodialers and direct advertising, electronic bulletin boards and consumer databases, ethno-botany and indigenous pharmaceuticals, the right of publicity (why Johnny Carson owns the phrase "Here's Johnny!"), and the right to privacy (does J. D. Salinger "own" the letters he's sent?). Boyle finds that our ideas about intellectual property rights rest on the notion of the Romantic author, a notion that Boyle maintains is not only outmoded but actually counterproductive, restricting debate, slowing innovation, and widening the gap between rich and poor nations. What emerges from this lively discussion is a compelling argument for relaxing the initial protection of authors' works and expanding the concept of the fair use of information. For those with an interest in the legal, ethical, and economic ramifications of the dissemination of information (in short, for every member of the information society, willing or unwilling), this book makes a case that cannot be ignored.
Article
The research program of behavioral economics is gaining increasing influence in academic economics and in interest from policymakers. This article analyzes behavioral economics from the dual perspective of Foucault’s genealogical investigation of neoliberal governmentality and contemporary critical theorizations of neoliberalism. I argue that behavioral economics should be understood as a political economic apparatus of neoliberal governmentality with the objective of using the state to manage and subjectivize individuals – by attempting to correct their deviations from rational, self-interested, utility-maximizing cognition and behavior – such that they more effectively and efficiently conform to market logics and processes. In this analysis, I contend that behavioral economics enacts three components of neoliberal governmentality: positioning the market as a site of truth and veridiction for the individual and the state; regulating what constitutes the objects of political economy and governmental intervention; and producing homo economicus (economic human) and diffusing this mode of economic subjectivity across the social terrain. In doing so, behavioral economics and its rationalities transform and introduce new technologies of power into neoliberal governmentality. I illustrate this argument with an analysis of recent changes to retirement savings policy in the United States, heavily influenced by behavioral economics thinking, that entrench neoliberal formations.
Article
Digital technologies have given rise to a new combination of big data and computational practices which allow for massive, latent data collection and sophisticated computational modeling, increasing the capacity of those with resources and access to use these tools to carry out highly effective, opaque and unaccountable campaigns of persuasion and social engineering in political, civic and commercial spheres. I examine six intertwined dynamics that pertain to the rise of computational politics: the rise of big data, the shift away from demographics to individualized targeting, the opacity and power of computational modeling, the use of persuasive behavioral science, digital media enabling dynamic real-time experimentation, and the growth of new power brokers who own the data or social media environments. I then examine the consequences of these new mechanisms on the public sphere and political campaigns.
Article
In this article we examine the effectiveness of consent in data protection legislation. We argue that the current legal framework for consent, which has its basis in the idea of autonomous authorisation, does not work in practice. In practice the legal requirements for consent lead to ‘consent desensitisation’, undermining privacy protection and trust in data processing. In particular we argue that stricter legal requirements for giving and obtaining consent (explicit consent), as proposed in the European Data Protection Regulation, will further weaken the effectiveness of the consent mechanism. Building on Miller and Wertheimer’s ‘Fair Transaction’ model of consent, we examine alternatives to explicit consent.
Article
A crucial task in the analysis of online social-networking systems is to identify important people, those linked by strong social ties, within an individual's network neighborhood. Here we investigate this question for a particular category of strong ties, those involving spouses or romantic partners. We organize our analysis around a basic question: given all the connections among a person's friends, can you recognize his or her romantic partner from the network structure alone? Using data from a large sample of Facebook users, we find that this task can be accomplished with high accuracy, but doing so requires the development of a new measure of tie strength that we term 'dispersion': the extent to which two people's mutual friends are not themselves well-connected. The results offer methods for identifying types of structurally significant people in online applications, and suggest a potential expansion of existing theories of tie strength.
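The dispersion measure admits a compact implementation. Below is a minimal sketch of its simplest (absolute) form, under the assumption that "not well-connected" means a pair of mutual friends who are neither directly linked nor share a common friend other than the two endpoints; the toy graph and node names are illustrative, not the paper's data.

```python
# Minimal sketch of the "dispersion" tie-strength measure: for an edge
# (u, v), count pairs of mutual friends of u and v that are neither
# directly connected nor share a common friend other than u and v.
import itertools
import networkx as nx

def dispersion(G: nx.Graph, u, v) -> int:
    """Absolute dispersion of the edge (u, v)."""
    common = set(G[u]) & set(G[v])  # mutual friends of u and v
    score = 0
    for s, t in itertools.combinations(common, 2):
        if G.has_edge(s, t):
            continue  # s and t are directly connected: not "dispersed"
        # Do s and t share any friend besides u and v?
        if not (set(G[s]) & set(G[t])) - {u, v}:
            score += 1
    return score

if __name__ == "__main__":
    G = nx.Graph()
    # u's friends fall into two circles that only v also bridges.
    G.add_edges_from([
        ("u", "v"),
        ("u", "a"), ("u", "b"), ("v", "a"), ("v", "b"),  # circle 1
        ("u", "c"), ("u", "d"), ("v", "c"), ("v", "d"),  # circle 2
        ("a", "b"), ("c", "d"),  # within-circle ties
    ])
    print(dispersion(G, "u", "v"))  # 4: (a,c), (a,d), (b,c), (b,d)
```

In the toy graph, u and v jointly bridge two otherwise separate friendship circles, which is exactly the structural signature (a partner embedded across many of one's social contexts) that the measure is designed to detect.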
Article
Offers of free services abound on the Internet. But the focus on the price rather than on the cost of free services has led consumers into a position of vulnerability. For example, even though internet users typically exchange personal information for the opportunity to use these purportedly free services, one court has found that users of free services are not consumers for purposes of California consumer protection law. This holding reflects the common misconception that the costs of free online transactions are negligible, when in fact the true costs may be quite significant. To elucidate the true costs of these allegedly free services, we apply a transaction cost economics (TCE) approach. Unlike orthodox economic theory, TCE provides a framework for analyzing exchanges in which the price of the product seems to be zero. Under a TCE analysis, we argue that information-intensive companies misuse the term "free" to promote products and services that involve numerous nonpecuniary costs. In so doing, firms generate contractual hazards for consumers, ignore consumer preferences for privacy, and mislead consumers by creating the impression that a given transaction will be free.
Article
Privacy law creates winners and losers. The distributive implications of privacy rules are often very significant, but they are also subtle. Policy and academic debates over privacy rules tend to de-emphasize their distributive dimensions, and one result is an impoverished descriptive account of why privacy laws look the way they do. The article posits that understanding the identities of the real winners and losers in privacy battles can improve predictions about which interests will prevail in the agencies and legislatures that formulate privacy rules. Along the way, the article shows how citizens whose psychological profiles indicate a strong concern for their own privacy are less likely to be politically efficacious than citizens who do not value privacy, producing a substantive skew against privacy protections. The article employs public choice theory to explain why California’s protections for public figure privacy are noticeably stronger than the protections that exist in other American jurisdictions, and what factors might explain the trans-Atlantic divide over privacy regulation with regard to Big Data, the popularity of Megan’s Laws in the United States, and the enactment of Do Not Call protections. The article concludes by noting that structural features of privacy regulation can affect the public choice dynamics that emerge in political controversies. Individuals seeking to expand privacy protections in the United States might therefore focus initially on altering the structure of American privacy laws instead of trying to change the law’s content.
Article
The current regulatory approach for protecting privacy involves what I refer to as “privacy self-management” — the law provides people with a set of rights to enable them to decide how to weigh the costs and benefits of the collection, use, or disclosure of their information. People’s consent legitimizes nearly any form of collection, use, and disclosure of personal data. Although privacy self-management is certainly a necessary component of any regulatory regime, I contend in this Article that it is being asked to do work beyond its capabilities. Privacy self-management does not provide meaningful control. Empirical and social science research has undermined key assumptions about how people make decisions regarding their data, assumptions that underpin and legitimize the privacy self-management model. Moreover, people cannot appropriately self-manage their privacy due to a series of structural problems. There are too many entities collecting and using personal data to make it feasible for people to manage their privacy separately with each entity. Moreover, many privacy harms are the result of an aggregation of pieces of data over a period of time by different entities. It is virtually impossible for people to weigh the costs and benefits of revealing information or permitting its use or transfer without an understanding of the potential downstream uses, further limiting the effectiveness of the privacy self-management framework. In addition, privacy self-management addresses privacy in a series of isolated transactions guided by particular individuals. Privacy costs and benefits, however, are more appropriately assessed cumulatively and holistically — not merely at the individual level. In order to advance, privacy law and policy must confront a complex and confounding dilemma with consent. Consent to collection, use, and disclosure of personal data is often not meaningful, and the most apparent solution — paternalistic measures — even more directly denies people the freedom to make consensual choices about their data. In this Article, I propose several ways privacy law can grapple with the consent dilemma and move beyond relying too heavily on privacy self-management.
Article
Obesity provides a potentially informative signal about individuals' choices and preferences. Using NLSY survey data, we estimate that the loan delinquency rate among the obese is 20 percent higher than among the non-obese after controlling for numerous observable, prohibited, and - to lenders - unobservable credit risk factors. The economic significance of obesity for delinquencies is comparable to that of job displacements. Obesity is particularly informative about future delinquencies among those with low credit risk. In terms of channels, we find that the obesity effect is at least partially mediated through poor health, but is not attributable to individuals' time preferences.
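Read as a recipe, the estimation here is a standard delinquency regression with a group indicator plus control variables. The sketch below, on synthetic data with hypothetical variable names, is only meant to show the "effect after controlling for observables" logic, not the authors' NLSY specification.

```python
# Minimal sketch of a delinquency regression with a group indicator
# plus controls. Synthetic data stands in for the NLSY sample; every
# variable name and coefficient here is an illustrative assumption.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

obese = rng.integers(0, 2, size=n)           # group indicator
income = rng.normal(50, 15, size=n)          # observable controls
credit_score = rng.normal(650, 80, size=n)

# Simulate delinquency with a built-in group effect for illustration.
logit = -2.0 + 0.25 * obese - 0.01 * (credit_score - 650)
p = 1 / (1 + np.exp(-logit))
delinquent = rng.binomial(1, p)

# Columns: constant, group indicator, then the controls.
X = sm.add_constant(np.column_stack([obese, income, credit_score]))
model = sm.Logit(delinquent, X).fit(disp=False)

# The coefficient on the group indicator is the "obesity effect"
# net of the included controls.
print("group coefficient:", model.params[1])
```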
Article
Privacy has an image problem. Over and over again, regardless of the forum in which it is debated, it is cast as old-fashioned at best and downright harmful at worst — anti-progressive, overly costly, and inimical to the welfare of the body politic. Yet the perception of privacy as antiquated and socially retrograde is wrong. It is the result of a conceptual inversion that relates to the way in which the purpose of privacy has been conceived. Like the broader tradition of liberal political theory within which it is situated, legal scholarship has conceptualized privacy as a form of protection for the liberal self. Its function is principally a defensive one; it offers shelter from the pressures of societal and technological change. So characterized, however, privacy is reactive and ultimately inessential. In fact, the liberal self who is the subject of privacy theory and privacy policymaking does not exist. The self who is the real subject of privacy law- and policy-making is socially constructed, emerging gradually from a preexisting cultural and relational substrate. For this self, the purpose of privacy is quite different. Privacy shelters dynamic, emergent subjectivity from the efforts of commercial and government actors to render individuals and communities fixed, transparent, and predictable. It protects the situated practices of boundary management through which self-definition and the capacity for self-reflection develop. So described, privacy is anything but old-fashioned, and trading it away creates two kinds of large systemic risk. First, privacy is an indispensable structural feature of liberal democratic political systems. Freedom from surveillance, whether public or private, is foundational to the capacity for critical self-reflection and informed citizenship. A society that permits the unchecked ascendancy of surveillance infrastructures cannot hope to remain a liberal democracy. Under such conditions, liberal democracy as a form of government is replaced, gradually but surely, by a form of government that I will call modulated democracy because it relies on a form of surveillance that operates by modulation: a set of processes in which the quality and content of surveillant attention is continually modified according to the subject’s own behavior, sometimes in response to inputs from the subject but according to logics that ultimately are outside the subject’s control. Second, privacy is also foundational to the capacity for innovation, and so the perception of privacy as anti-innovation is a non sequitur. A society that values innovation ignores privacy at its peril, for privacy also shelters the processes of play and experimentation from which innovation emerges. Efforts to repackage pervasive surveillance as innovation — under the moniker “Big Data” — are better understood as efforts to enshrine the methods and values of the modulated society at the heart of our system of knowledge production. In short, privacy incursions harm individuals, but not only individuals. Privacy incursions in the name of progress, innovation, and ordered liberty jeopardize the continuing vitality of the political and intellectual culture that we say we value.
Article
Privacy is a widely studied concept in relation to social computing and sensor-based technologies; scores of research papers have investigated people's "privacy preferences" and apparent reluctance to share personal data. In this paper we explore how Ubicomp and HCI studies have approached the notion of privacy, often as a quantifiable concept. Leaning on several theoretical frameworks, but in particular Nissenbaum's notion of contextual integrity, we question the viability of obtaining universal answers in terms of people's "general" privacy practices and apply elements of Nissenbaum's theory to our own data in order to illustrate its relevance. We then suggest restructuring inquiries into information sharing in studies of state-of-the-art technologies and analyze contextually grounded issues using a different, more specific vocabulary. Finally, we provide the first building blocks to such vocabulary.
Article
This article considers Foucault’s analysis of ordoliberal and neoliberal governmental reason and its reorganization of social relations around a notion of enterprise. I focus on the particular idea that the generalization of the enterprise form to social relations was conceptualized in such exhaustive terms that it encompassed subjectivity itself. Self as enterprise highlights, inter alia, dynamics of control in neoliberal regimes which operate through the organized proliferation of individual difference in an economized matrix. It also throws into question conceptions of individual autonomy that underpin much political thought and upon which ideas about political resistance are based. Self as enterprise also problematizes the viability of Foucault’s later work on ethics of the self as a practice of resistance. I go on to argue that Foucault’s discussion of an unresolved clash in civil society between monarchical and governmental power, between law and norm, offers an elliptical but more promising account of opposition to normalizing bio-power.
Article
We live increasingly in a ‘risk society’, characterized by the multiplication and increasing unpredictability of risks, as well as by our enhanced consciousness of them. Claims on the part of expert bureaucracies to possess superior abilities to anticipate and manage risks are increasingly suspect in public perceptions. This suspicion is legitimate. The history of economics is revealing in this respect. The triumph within economics of the notion of ‘risk’ (as defined by Frank Knight), or a vision of the future as subject to probabilistic analysis, over ‘uncertainty’, or a vision of the future as so fundamentally and radically indeterminate as to preclude such an analysis, has been instrumental in the legitimation of expert bureaucracies. Anthropological literature on risk also reveals, in enlightening but rather perverse fashion, many allied modern presumptions. The enhancement of the degree of both democratic legitimacy and consequential efficacy of social decision-making procedures to confront indeterminacy requires that ‘uncertainty’ should take the place of ‘risk’ as the governing motif of risk analysis, with corresponding implications for the enlargement of the field of political contention.
Article
End-users share a wide variety of information on Facebook, but a discussion of the privacy implications of doing so has yet to emerge. We examined how Facebook affects privacy, and found serious flaws in the system. Privacy on Facebook is undermined by three principal factors: users disclose too much, Facebook does not take adequate steps to protect user privacy, and third parties are actively seeking out end-user information using Facebook. We based our end-user findings on a survey of MIT students and statistical analysis of Facebook data from MIT, Harvard, NYU, and the University of Oklahoma. We analyzed the Facebook system in terms of Fair Information Practices as recommended by the Federal Trade Commission. In light of the information available and the system that protects it, we used a threat model to analyze specific privacy risks. Specifically, university administrators are using Facebook for disciplinary purposes, firms are using it for marketing purposes, and intruders are exploiting security holes. For each threat, we analyze the efficacy of the current protection, and where solutions are inadequate, we make recommendations on how to address the issue.
Article
In this age of DNA computers and artificial intelligence, information is becoming disembodied even as the "bodies" that once carried it vanish into virtuality. While some marvel at these changes, envisioning consciousness downloaded into a computer or humans "beamed" Star Trek-style, others view them with horror, seeing monsters brooding in the machines. In How We Became Posthuman, N. Katherine Hayles separates hype from fact, investigating the fate of embodiment in an information age. Hayles relates three interwoven stories: how information lost its body, that is, how it came to be conceptualized as an entity separate from the material forms that carry it; the cultural and technological construction of the cyborg; and the dismantling of the liberal humanist "subject" in cybernetic discourse, along with the emergence of the "posthuman." Ranging widely across the history of technology, cultural studies, and literary criticism, Hayles shows what had to be erased, forgotten, and elided to conceive of information as a disembodied entity. Thus she moves from the post-World War II Macy Conferences on cybernetics to the 1952 novel Limbo by cybernetics aficionado Bernard Wolfe; from the concept of self-making to Philip K. Dick's literary explorations of hallucination and reality; and from artificial life to postmodern novels exploring the implications of seeing humans as cybernetic systems. Although becoming posthuman can be nightmarish, Hayles shows how it can also be liberating. From the birth of cybernetics to artificial life, How We Became Posthuman provides an indispensable account of how we arrived in our virtual age, and of where we might go from here.
Article
This paper draws from Foucault’s analysis of liberalism and neoliberalism to reconstruct the mechanisms and the means whereby neoliberalism has transformed society into an ‘enterprise society’ based on the market, competition, inequality, and the privilege of the individual. It highlights the role of financialization, neglected by Foucault, as a key apparatus in achieving this transformation. It elaborates the strategies of individualization, insecuritization and depoliticization used as part of neoliberal social policy to undermine the principles and practices of mutualization and redistribution that the Welfare State and Fordism had promoted. It shows that the aim of neoliberal politics is the restoration of the power of capital to determine the distribution of wealth and to establish the enterprise as dominant form; this requires that it target society as a whole for a fundamental reconstruction, putting in place new mechanisms to control individual conduct. The analysis refers to the case of workers in the culture industry to illustrate the operation of these mechanisms in practice. It also outlines the main elements of the analytical apparatus that makes visible the new role of the state as an ensemble of apparatuses constituting the conditions for neoliberal market capitalism and the new type of individual appropriate for it. The paper thus adds a new dimension to Foucault’s analysis.
Article
The legal and technical rules governing flows of information are out of balance, argues Julie E. Cohen in this original analysis of information law and policy. Flows of cultural and technical information are overly restricted, while flows of personal information often are not restricted at all. The author investigates the institutional forces shaping the emerging information society and the contradictions between those forces and the ways that people use information and information technologies in their everyday lives. She then proposes legal principles to ensure that people have ample room for cultural and material participation as well as greater control over the boundary conditions that govern flows of information to, from, and about them.