Viewpoint
The Challenges of Privacy by Design
DOI:10.1145/2209249.2209263
Sarah Spiekermann

Abstract

Heralded by regulators, Privacy by Design holds the promise to solve the digital world's privacy problems. But there are immense challenges, including securing management commitment and developing step-by-step methods to integrate privacy into systems.
Privacy maintenance and control is a social value deeply embedded in our societies. A global survey found that 88% of people are worried about who has access to their data; over 80% expect governments to regulate privacy and impose penalties on companies that do not use data responsibly. But privacy regulation is not easy. Both the Internet's current economics and national security management benefit from the collection and use of rich user profiles. Technology constantly changes. And data is like water: it flows and ripples in ways that are difficult to predict. As a result, even a well-conceived, general, and sustainable privacy regulation, such as the European Data Protection Directive 95/46/EC, struggles to ensure its effectiveness. Companies regularly test legal boundaries, and many risk sanctions for privacy breaches to avoid constraining their business.
Against this background, the European Commission and other regulatory bodies are looking for a more effective, system- and context-specific balance between citizens' privacy rights and the data needs of companies and governments. The apparent solution proposed by regulators now, but barely specified, is Privacy by Design (PbD). At first sight, the powerful term seems to suggest we simply need to take a few Privacy-Enhancing Technologies (PETs) and add a good dose of security, thereby creating a fault-proof systems landscape for the future. But the reality is much more challenging. According to Ann Cavoukian, the Ontario Information and Privacy Commissioner who coined the term, PbD stands for a proactive integration of technical privacy principles in a system's design (such as privacy default settings or end-to-end security of personal data) and the recognition of privacy in a company's risk management processes [1]. PbD can thus be defined as “an engineering and strategic management approach that commits to selectively and sustainably minimize information systems' privacy risks through technical and governance controls.”
However, a core challenge for PbD is to get organizations' management involved in the privacy strategy. Management's active involvement is key because personal data is the asset at the heart of many companies' business models today. High privacy standards can restrict the collection and use of data for further analysis, limit strategic options, and impact a firm's bottom line. Consider advertising revenues boosted by behavioral targeting practices and people's presence on social networking sites: without personal data, such services are unthinkable. PbD proponents hardly embrace these economic facts in their reasoning. In contrast, they take a threat perspective, arguing that low privacy standards can provoke media backlash and lead to costly legal trials around privacy breaches. And indeed, distrust caused by privacy breaches is probably the only real blemish on the image of technology companies such as Google or Facebook. Brands are a precious company asset, the most difficult to build and the most costly to maintain. Hence, brand managers should be keen to avoid privacy risks. Equally, recent data breach scandals have forced CEOs to quit.

Despite these developments, many managers still do not understand that a sustainable strategy for one of their company's core assets—personal data—requires them to actively manage this asset. Managing personal data means optimizing its strategic use, quality, and long-term availability. Unfortunately, few of today's managers want to take on this new challenge. Instead, they derive what they can from the information bits they get and leave the privacy issue as a nuisance that is better left to be fixed by their lawyers.
But even if managers took up the privacy challenge and incorporated the active governance of personal data into their companies' strategic asset management, they would not be able to determine the right strategy without their IT departments: PbD requires the guts and ingenuity of engineers. As the term implies, the design of systems needs to be altered or focused to technically embrace the protection of people's data. Consequently, privacy must be on engineers' requirements radar from the start of a new IT project. It needs to enter the system development life cycle at such an early point that architectural decisions around data processing, transfer, and storage can still be made. Managers and engineers (as well as other potential stakeholders) need to assess the privacy risks they are willing to take and jointly decide on technical and governance controls for those risks they are not willing to bear.
Privacy by Design Challenges

Even when both managers and engineers are committed to PbD, more challenges must be overcome:

• Privacy is a fuzzy concept and is thus difficult to protect. We need to agree on what it is we want to protect. Moreover, conceptually and methodologically, privacy is often confounded with security. We need to start distinguishing security from privacy to know what to address with what means.

• No agreed-upon methodology supports the systematic engineering of privacy into systems. System development life cycles rarely leave room for privacy considerations.

• Little knowledge exists about the tangible and intangible benefits and risks associated with companies' privacy practices.
How can these challenges be overcome? A Privacy Impact Assessment (PIA) Framework recently created for RFID technology (see http://ec.europa.eu/information_society/policy/rfid/pia/index_en.htm) has been called a “landmark for PbD” because it offers some answers: the PIA Framework suggests concrete privacy goals and describes a method to reach them. Pragmatically, it recommends that organizations use the specific legislative privacy principles of their region or sector, or the OECD Privacy Guidelines, as a starting point to determine privacy protection goals.
In Europe, for example, the European Data Protection Directive 95/46/EC or its successor should be taken as that starting point. It includes the following privacy goals:

• Safeguarding personal data quality through data avoidance, purpose-specific processing, and transparency vis-à-vis data subjects.

• Ensuring the legitimacy of personal and sensitive data processing.

• Complying with data subjects' rights to be informed, to object to the processing of their data, and to access, correct, and erase personal data.

• Ensuring confidentiality and security of personal data.
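
These goals read as legal principles, but each maps onto a technical control. As a minimal sketch in Python (all class and method names here are hypothetical, drawn neither from the Directive nor from the PIA Framework), a data store can bind every record to the purpose declared at collection, refuse processing for any other purpose, and expose the data subject's access and erasure rights directly:

```python
from dataclasses import dataclass


@dataclass
class PersonalDataRecord:
    subject_id: str
    attributes: dict
    purpose: str  # the purpose declared to the data subject at collection time


class PersonalDataStore:
    """Hypothetical store that enforces purpose-specific processing."""

    def __init__(self) -> None:
        self._records: dict[str, PersonalDataRecord] = {}

    def collect(self, record: PersonalDataRecord) -> None:
        # Data is only stored together with an explicit, declared purpose.
        self._records[record.subject_id] = record

    def use(self, subject_id: str, purpose: str) -> dict:
        # Purpose-specific processing: any use beyond the declared purpose fails.
        record = self._records[subject_id]
        if purpose != record.purpose:
            raise PermissionError(
                f"collected for '{record.purpose}', cannot be used for '{purpose}'"
            )
        return record.attributes

    # Data subjects' rights: access and erasure.
    def subject_access(self, subject_id: str) -> dict:
        return self._records[subject_id].attributes

    def subject_erase(self, subject_id: str) -> None:
        self._records.pop(subject_id, None)


store = PersonalDataStore()
store.collect(PersonalDataRecord("alice", {"email": "a@example.org"}, "billing"))
store.use("alice", "billing")   # permitted: matches the declared purpose
# store.use("alice", "marketing") would raise PermissionError
```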
Security and privacy in this view are clearly distinguished. Security means the confidentiality, integrity, and availability of personal data are ensured. From a data protection perspective, security is one of several means to ensure privacy. A good PbD is unthinkable without a good Security by Design plan; the two approaches are in a “positive-sum” relationship. That said, privacy is about the scarcity of personal data creation and the maximization of individuals' control over their personal data. As a result, some worry that PbD could undermine law enforcement techniques that use criminals' data traces to find and convict them. More research and international agreement in areas such as anonymity revocation are certainly needed to demonstrate this need not be the case even if we have privacy-friendly systems.
After privacy goals are clearly defined, we must identify how to reach them. The PIA Framework mentioned earlier is built on the assumption that a PbD methodology could largely resemble security risk assessment processes such as those described by NIST or ISO/IEC 27005. These risk assessment processes identify potential threats to each protection goal. These threats and their probabilities constitute a respective privacy risk. All threats are then systematically mitigated by technical or governance controls. Where this cannot be done, remaining risks are documented to be addressed later.
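
The mechanics of such an assessment are simple enough to sketch. In the following Python fragment, the scoring scales, names, and risk-tolerance threshold are illustrative assumptions, not values taken from NIST, ISO/IEC 27005, or the PIA Framework: each threat to a protection goal is scored by probability and impact, controlled threats drop out, and whatever exceeds the tolerance is documented as residual risk.

```python
from dataclasses import dataclass


@dataclass
class Threat:
    goal: str          # the protection goal endangered, e.g., "confidentiality"
    description: str
    probability: int   # assumed ordinal scale: 1 (rare) to 5 (frequent)
    impact: int        # assumed ordinal scale: 1 (negligible) to 5 (severe)
    control: str = ""  # mitigating technical or governance control, if any

    @property
    def risk(self) -> int:
        # A common heuristic: risk as the product of probability and impact.
        return self.probability * self.impact


def residual_risks(threats: list[Threat], tolerance: int = 6) -> list[Threat]:
    """Return unmitigated threats above tolerance, documented for later treatment."""
    return [t for t in threats if not t.control and t.risk > tolerance]


threats = [
    Threat("confidentiality", "RFID tag contents readable by any nearby reader", 4, 4),
    Threat("data quality", "profile data reused for an unrelated purpose", 3, 3,
           control="purpose binding enforced in the data store"),
]
for t in residual_risks(threats):
    print(f"residual risk {t.risk} ({t.goal}): {t.description}")
```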
As in security engineering, PbD controls rely heavily on systems' architectures [2]. Privacy scholars still put too much focus on information practices only (such as Web site privacy policies). Instead, they should further investigate how to build systems in client-centric ways that maximize user control and minimize network or service provider involvement. Where such privacy-friendly architectures are not feasible (often for business reasons), designers can support PbD by using technically enforceable default policies (“opt-out” settings) or data scarcity policies (erasure or granularity policies), data portability, and user access and delete rights. Where such technical defaults are not feasible, concise, accurate, and easy-to-understand notices of data-handling practices and contact points for user control and redress should come into play.
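
What such technically enforceable defaults might look like can be sketched in a few lines of Python; all policy values below are hypothetical choices for illustration, not recommendations from the Framework. Sharing is off unless the user opts in, stored data expires under an erasure policy, and location is kept only at coarse granularity:

```python
import datetime

# Privacy-protective defaults: the user must actively opt in to data sharing.
DEFAULTS = {
    "share_with_third_parties": False,
    "behavioral_targeting": False,
    "retention_days": 90,            # data scarcity: erase after 90 days
    "location_granularity": "city",  # store a rough area, not GPS coordinates
}


def is_expired(created: datetime.date, today: datetime.date,
               settings: dict = DEFAULTS) -> bool:
    """Erasure policy: data past its retention window must be deleted."""
    return (today - created).days > settings["retention_days"]


def coarsen_location(lat: float, lon: float,
                     settings: dict = DEFAULTS) -> tuple:
    """Granularity policy: round coordinates so only a rough area is kept."""
    if settings["location_granularity"] == "city":
        return round(lat, 1), round(lon, 1)  # roughly 10 km resolution
    return lat, lon


print(is_expired(datetime.date(2012, 1, 1), datetime.date(2012, 7, 1)))  # True
print(coarsen_location(48.19866, 16.36988))  # (48.2, 16.4)
```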
A challenge, however, is that system development life cycles and organizational engineering processes do not consider such practices. So far, privacy is simply not a primary consideration for engineers when designing systems. This gap raises many questions: When should privacy requirements first enter the system development life cycle? Who should be responsible? Given that privacy controls impact business goals, who can actually decide on appropriate measures? Must there be ongoing privacy management and practices monitoring? If organizations purchase standard software solutions or outsource operations, pass data to third parties, or franchise their brands, who is responsible for customer privacy?
Conclusion

For privacy to be embedded in the system development life cycle and hence in organizational processes, companies must be ready to embrace the domain. Unfortunately, we still have too little knowledge about the real damage that is being done to brands and a company's reputation when privacy breaches occur. The stock market sees some negligible short-term dips, but people flock to data-intensive services (such as social networks); so far, they do not sanction companies for privacy breaches. So why invest in PbD measures? Will there be any tangible benefits from PbD that justify the investment? Would people perhaps be willing to pay for advertisement-free, privacy-friendly services? Will they incur switching costs and move to competitive services that are more privacy friendly? Would the 83% of U.S. consumers who claim they would stop doing business with a company that breaches their privacy really do so? We need to better understand these dynamics as well as the current changes in the social perception of what we regard as private.

But research on the behavioral economics of privacy has clearly demonstrated that regardless of what people say, they make irrational privacy decisions and systematically underestimate long-term privacy risks. And this is not only the case for privacy-seeking individuals, but also for managers who are making PbD decisions for their companies.
Therefore, I welcome the suggestion that PIAs become mandatory in the new European data protection legislation. However, they must be accompanied by a clear set of criteria for judging their quality as well as sanctions for noncompliance.

Most important, as this Viewpoint makes clear: PIAs need to be made mandatory for the designers of new technologies—the IBMs and SAPs of the world—and not just for data controllers or processors, who often get system designs off the shelf without a say. Making PIAs mandatory for system designers could be a great step toward PbD and support compliance with the policies defined in Europe, in U.S. sectoral privacy laws, and in the Safe Harbor Framework.

Only if we force those companies that design systems, their management and their engineers, to embrace such process-driven, bottom-up ways to embed laws and ethics into code can we really protect the core values of our Western liberal democracies and constitutions.
References
1. Cavoukian, A. Privacy by Design Curriculum 2.0, 2011; http://privacybydesign.ca/publications/.
2. Spiekermann, S. and Cranor, L.F. Engineering privacy. IEEE Transactions on Software Engineering 35, 1 (Jan./Feb. 2009), 67–82.
Sarah Spiekermann (sspieker@wu.ac.at) is the head of the Institute for Management Information Systems at the Vienna University of Economics and Business, Vienna, Austria.

Copyright held by author.