How Should We Think about Privacy?

Making sense of one of the thorniest issues of the digital age

... For our twelfth change, we add the digitally enabled world, starting with modern computers during the Cold War but today embracing a huge arena of activities, technologies, and future potentials (Lanier 2013). This yields Mikitani's Marketplace 3.0, with retailing now competing for online customers. ...

... "Big data" arise, we discover, because almost everything done digitally gets tracked by somebody, if not the NSA then various private corporations eager to expand sales by better targeting their advertising (Lanier 2013). As noted above, this has become part of Marketing 3.0. ...
This working paper argues that we need to reimagine development tactics to fashion Development 3.0, to match what business analysts now call World 3.0, a global system characterized by high turbulence and new threats. It begins by contrasting our former spatial classification of countries into First, Second, and Third worlds with a new division of development epochs in sequence since the end of World War II. World 1.0 emphasized industrialization, urbanization, and modernization, lasting from 1945 to 1980. World 2.0 emphasized global trade and a shift to private actors doing the work of development, from 1980 to the early 2000s. World 3.0 can be seen as superseding globalization with a concern for emergent threats. World 1.0 privileged state actions to accelerate "nation building" within former colonies, whereas World 2.0 privileged private capital and free trade as engines for economic growth. Now, following wars, disasters, and the near meltdown of the global financial system in 2007/08, we enter World 3.0 as depicted by Ghemawat and others.
The personalization–privacy paradox persists because consumers appreciate the value of personalization, yet marketers’ exploitation of consumers’ personal information to provide such personalization raises privacy concerns. Consumers then may refuse to provide personal information, which limits personalization efforts. Attempts to rely on information technology (e.g., anonymizing techniques, peer-to-peer communication) to address privacy concerns largely have proven ineffective, often because they are overly sophisticated for consumers. Thus, even if the personalization–privacy paradox seemingly arose with mobile technologies, it must stem from a theoretical foundation. Prior information systems literature tends to adopt micro-oriented theories to understand the paradox, but this perspective article instead seeks to investigate the personalization–privacy paradox at a macro level, with an attention economy lens. By investigating the relationship among personalization, privacy, and attention, this study seeks to offer insights pertaining to the ecology of attention, choice architectures, and stylistic devices, as well as some relevant implications for research and practice.
This study is a rhetorical criticism of "The Privacy Project," a multimedia presentation on digital privacy and mass surveillance published by The New York Times in 2019. This critique is generative, because it combines metaphor analysis with framing theory to generate its units of analysis. These units consist of metaphors that explain digital data in real-world terms, and calls to action, which both serve as the dominant rhetorical strategies in this project. After coding and interpreting these units, this essay explains that metaphors may help lay people understand some aspects of mass surveillance, but they also negate literal understandings that could help the population become more digitally savvy. Additionally, viewing online data through the lens of real-world issues poses the risk of extending these issues to the internet, which was originally created to fix them. These findings offer an introduction to the potential strengths and weaknesses metaphors have for spreading awareness of new-age controversies in mass media, and serve as a starting place for future researchers exploring media ethics, mass communication, and mass surveillance.
What will future creativity-based work in collaboration with ubiquitous, AI-driven systems be like? In this paper, we argue that following a ‘tangible interaction’ approach can be beneficial in this context. We describe six connected objects that illustrate how the quality of future creative work could be designed. The objects aim to shape embedded computation in ways that support embodied interaction. They include a place for sacrificing one’s phone, an olfactory calendar, a reader/writer for cloud data in everyday objects, a concrete-based data logger, a slot machine for recombining old ideas into new ones, and a dimmer for artificial intelligence. We summarize the results of a critical reflection of the prototypes in an argument for designing interactions that foster collaborative creative processes between embodied humans in a world of embedded computation.
Optimizing the harvesting and analysis of student data promises to clear the fog surrounding key drivers of student success and retention, and to sharpen our understanding of the impact of interventions to improve student success. While recognising this potential, it is crucial to deconstruct some of the assumptions regarding learning analytics in the nexus between student data and institutional accountability, as well as broader concerns regarding the scope and impact of acts of surveillance. This chapter will explore and contest assumptions regarding the potential of big data in higher education – for example, that big data will, in itself, result in more accuracy or objectivity, that surveillance is uni-directional, and that available data provide holistic pictures of student learning. Such assumptions not only impact directly on our view of institutional accountability towards a range of stakeholders, but also on the ethical implications of the harvesting of student data, its use and storage. Assumptions and understandings of the issues surrounding student privacy and institutional accountability are sedimented in institutional policies and operational frameworks based on a technocratic predictive logic inherent in neoliberal and governmentality discourses. Policies and frameworks in higher education institutions should provide enabling and ethical environments for the optimal use of learning analytics as a negotiated and student-centred praxis. This chapter explores student data privacy in the context of the dominant technocratic predictive logic in learning analytics. We propose a broader framework for the interrogation of learning analytics as an ethical and negotiated contract between institutions and students within an ethics of care.