Offlinetags - A Novel Privacy Approach to Online Photo Sharing

Frank Pallas, Technical University of Berlin, Dep. Computers & Society, Marchstr. 23, 10587 Berlin, Germany, frank.pallas@tu-berlin.de
Max-Robert Ulbricht, Technical University of Berlin, Dep. Computers & Society, Marchstr. 23, 10587 Berlin, Germany, max-robert.ulbricht@tu-berlin.de
Lorena Jaume-Palasí, Ludwig-Maximilians-University Munich, Chair for Philosophy IV, Geschwister-Scholl-Platz 1, 80539 Munich, Germany, lorena.jaume-palasi@gsi.uni-muenchen.de
Ulrike Höppner, Internet & Society Collaboratory, Sophienstr. 24, 10178 Berlin, Germany, ulrike@collaboratory.de

CHI 2014, Apr 26 - May 01 2014, Toronto, ON, Canada. ACM 978-1-4503-2474-8/14/04. http://dx.doi.org/10.1145/2559206.2581195
Abstract
In this paper, we describe a novel approach to the privacy problem that photos showing persons are often “meddle-shared” by others online. We introduce a set of four elementary privacy preferences a photo subject can have. These preferences are represented by corresponding symbols, “Offlinetags”, which can be worn in the form of stickers or badges and which are designed to be easily recognizable by humans and algorithms. Especially for the context of public events, these Offlinetags can serve as a basis for novel practices of photo sharing that respect the photo subjects’ privacy preferences.
Author Keywords
Privacy; photo sharing; social networks; offlinetags
ACM Classification Keywords
K.4.1. Computers and Society, Public Policy Issues:
Privacy
H.5.m. Information interfaces and presentation (e.g.,
HCI): Miscellaneous.
Introduction
Online photo sharing has been a major source of social conflict since the broad adoption of online social networks. Whether in the context of private activities, work life, or, in particular, public events, it is nowadays hardly possible to avoid being photographed by others and having these photos shared online. Such sharing of photos without the consent or even the knowledge of the shown person will herein be called “meddle-sharing”, and the shown person is referred to as the “photo subject”.
Everyday examples of such meddle-sharing include photo subjects being shown as participants of demonstrations (e.g. at a gay parade), as attendees of specific events (e.g. a convention of a political party or a conference), or simply as having been at a certain place at a given time. Depending on the situation shown in the photo and the party taking notice of it, such photos can disclose information about the photo subject that she would otherwise not have revealed to the noticing person. Following a common understanding of privacy as primarily being about “rights to control your public image” [5], meddle-sharing can thus constitute serious privacy infringements for the photo subject. In the following, we therefore present a novel approach for influencing the taking, sharing and further handling of photos of oneself.
Generally speaking, our approach¹ is based on a well-defined set of four symbols that, in the form of stickers, buttons, badges, etc., can be attached to the clothes and represent the wearer’s preferences on the desired handling of photos taken of her. The symbols, which we call “Offlinetags”, are designed to be easily understandable to humans and recognition-friendly for computer-vision algorithms, thereby enabling social consideration as well as technological analysis and processing of these preferences.

¹ The concept presented herein was developed by a multitude of people under the umbrella of the Berlin-based Internet & Society Collaboratory (http://en.collaboratory.de). Besides the authors, significant contributions were made by (in alphabetical order): Thomas Heilmann, Jan Schallaböck, Max Senges and Gordon Süß. The proof-of-concept software was written by Markus Köbele. See also http://offlinetags.net.
Related Work
Many approaches for controlling the visibility and handling of personal content like photos have been suggested in the past [3] and are now available in most online social networks and other content sharing platforms. Current scientific discussions go even beyond this platform-focused perspective and suggest rather generic and comprehensive mechanisms for distributed usage control [4, 2]. These mechanisms are based on the assumption that the uploading party is the one that should be provided with possibilities for specifying visibility and usage policies. Meddle-sharing, however, is a concern for the depicted party, and therefore the mentioned mechanisms do not help.

Photo tagging plays an important role in privacy infringements related to meddle-sharing. Therefore, advanced mechanisms for semi-automated untagging [1] seem highly promising. Nonetheless, such mechanisms are restricted to the platform they are employed in and do not prevent privacy infringements against non-members. Furthermore, they only come into effect after a photo has been uploaded and tagged.

Our mechanism, in contrast, is explicitly designed with the problem of meddle-sharing in mind. Furthermore, it is designed to come into effect much earlier than established models of untagging etc., thereby working against unwanted revelations through photo sharing in general and across the boundaries of single platforms.
Problem Confinement
If we include the privacy risks that were also present for “traditional” media, we can, from a high-level perspective, now distinguish at least three generic classes of unwanted information revelations:

Unintentional discovery
The most obvious case of information about a person being revealed through photos has been present since the existence of photography in general: A person looking at a photo unintentionally recognizes a known photo subject in a specific context. Such random discoveries constitute a privacy infringement in the above-mentioned sense, at least where the depicted context conflicts with the subject’s intended public image (think, again, of a political party’s convention).
Directed searchability
The possibility to search for photos showing specific persons just by their name, Twitter ID, etc. clearly distinguishes current online social networks and other photo sharing platforms from traditional settings. As soon as the respective search results contain “incriminating” information, i.e. information that does not match the shown person’s intended public image, this unquestionably heightens the risk of unwanted information revelations and thereby limits the photo subject’s ability to control her public image.
Reverse searchability
Finally, novel technologies from the field of face recognition introduce another risk: Instead of searching for “images attributed to a given identifier”, it is now also possible to revert this search and start with a photo of an unknown person in order to identify her and obtain further information. This allows even complete strangers to search for information about a given person that this person would by no means have revealed to such strangers. In particular, this also applies to reverted searches being made on the basis of photos meddle-shared by others and to such meddle-shared photos appearing in the results of a reverted search. Again, this heightens the risk of unwanted revelations significantly and thus reduces a person’s ability to control her public image.

As we can see, novel technologies and practices in the field of online photo sharing introduce new risks to an individual’s ability to exert control over her public image and reinforce existing ones. This loss of control is what our Offlinetags are meant to countervail.
The Four “Offlinetags”
As already laid out, our Offlinetags shall represent individual preferences of would-be photo subjects on the taking and sharing of photos showing them and counteract the above-mentioned privacy risks. After long-lasting discussions on possibilities for addressing these risks separately, we came to the conclusion that at least the risks of directed and reverse searchability are highly interrelated and can hardly be isolated from each other. For example, countering the risk of directed searches while at the same time ignoring the risk of reverse searchability would hardly make sense: Once a searching party gets hold of just one photo showing the subject she is searching for, directed searches can to a certain extent be substituted through reverse searches. Instead of addressing the different risks separately, we therefore decided to follow a graduated “risk-minimization” approach, leading us to the elementary pleas presented below: “No photos”, “Blur me”, “Upload me” and “Tag me”.
Each of these pleas is represented by a respective symbol, the Offlinetag, which shall ensure easy human recognition as well as good machine-readability. We therefore decided on a fundamental design of a bold black circle as an anchor for image recognition algorithms, with simple, algorithmically well-distinguishable black symbols inside the circle representing the plea. The symbols are also designed to be intuitively associated with the represented plea by humans. To reinforce this intuitive association, the free space is colored in a corresponding hue. The colors are, however, not intended to be evaluated by image recognition mechanisms, in order to prevent analytical failures for grayscale photos, for example. Following these fundamental concepts, the meanings and graphical representations of our Offlinetags are as follows:
No photos
The first preference a would-be photo subject can have with regard to the risk of meddle-sharing is not to be seen on any photo in a certain situation, no matter whether this photo is intended to be uploaded somewhere or not. Following the above-mentioned approach of graduated risk-minimization, the “no photos” Offlinetag represents the most rigid plea: take no photos of the person currently wearing the button/badge. Graphically, this rigidness is represented by a cross symbol in the middle of the circle and a red color hue, conveying a clear “stop” message. This Offlinetag is intended to address all three classes of unwanted information revelation identified above to the strongest possible extent. In particular, this also minimizes the risk of unintentional discovery through recognition of typical clothes or accessories being worn.

(Tag caption: “No photos! Please refrain from taking pictures with me being depicted, no matter if I appear not to be recognizable to the person taking the picture.”)
Blur me
To allow photos of multiple persons to be taken without infringing upon the privacy of individual photo subjects, we introduced the “blur me” Offlinetag. It represents the preference of the wearer to be made unrecognizable in case the photo is shared. A common way to achieve this anonymization is to blur out single faces, but other mechanisms are also conceivable. Following the known practice of blurring, we decided on a light blue color for this Offlinetag. As for the symbol, anything “blurry” would have been hard for algorithms to recognize automatically. We therefore chose a single black horizontal bar, referencing the one usually put over a photo subject’s eyes for anonymization purposes in traditional media. If the represented plea is followed, this Offlinetag primarily minimizes the risks arising from reverse searches and from directed searches on the basis of taggings made by automated face recognition mechanisms. The risk of unintentional discovery is also limited to a certain extent, even if such discoveries can still happen on the basis of recognized features other than the photo subject’s face, like clothes, accessories, etc.

(Tag caption: “Blur me! Please ensure, before uploading or any application of a picture of me, that I cannot be recognized, especially by means of facial recognition algorithms.”)
Upload me
Besides not wanting to be seen or recognized online, a photo subject can also accept or even desire being seen online in a specific context while still feeling uncomfortable with being subject to excessive directed and reverted searches. This less restrictive preference is reflected by the “upload me” Offlinetag, meaning that uploading and sharing the photo is accepted while tagging or face recognition mechanisms are rejected. The general acceptance of uploading is represented by a checkmark symbol, and the yellow color hue conveys the intuitive message that at least some attention is necessary during the handling of the photo. Regarding the categories of unwanted revelations, this Offlinetag does not prevent unintentional discoveries but does, at least to a certain extent, work against the risks of directed and reverted searches.

(Tag caption: “Upload me! Feel free to upload and share pictures of me, but please refrain from tagging or facial recognition.”)
Tag me
Finally, a photo subject can also have no objections against being tagged on photos showing her in certain contexts or against being subject to face recognition mechanisms etc. Moreover, a photo subject can even have a vital interest in being tagged and algorithmically identified. The “tag me” Offlinetag represents this preference. It has a green color hue, signaling an “anything goes” attitude, and carries another circle and a dot at the center as symbolic references to an abstract target. Different from the other Offlinetags, this one is not intended to counteract the identified privacy risks. Instead, it shall give the wearer a possibility to signal that she is aware of any potential risks and has consciously decided that they do not matter to her when using this badge/button.

(Tag caption: “Tag me! Feel free to take pictures of me, upload them, tag them, and make them available for facial recognition or any other means.”)
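To illustrate how software along the photo-sharing chain might consume these pleas, the following minimal sketch models the four Offlinetags as a machine-readable policy table. This is one possible reading of the concept rather than code from the paper; all names and the exact permission fields are hypothetical.

    # Hypothetical sketch: the four Offlinetag pleas as a machine-readable
    # policy table that a camera or upload routine could consult after
    # recognizing a tag. Field names are illustrative, not from the paper.
    from dataclasses import dataclass
    from enum import Enum


    class Offlinetag(Enum):
        NO_PHOTOS = "no photos"   # red hue, cross symbol
        BLUR_ME = "blur me"       # light blue hue, horizontal bar
        UPLOAD_ME = "upload me"   # yellow hue, checkmark
        TAG_ME = "tag me"         # green hue, circle with center dot


    @dataclass(frozen=True)
    class PhotoPolicy:
        may_photograph: bool  # may the photo be taken at all?
        may_upload: bool      # may the photo be shared online?
        must_blur: bool       # must the wearer be anonymized before sharing?
        may_tag: bool         # are tagging / face recognition acceptable?


    # Graduated risk minimization: ordered from most to least restrictive plea.
    POLICIES = {
        Offlinetag.NO_PHOTOS: PhotoPolicy(False, False, False, False),
        Offlinetag.BLUR_ME: PhotoPolicy(True, True, True, False),
        Offlinetag.UPLOAD_ME: PhotoPolicy(True, True, False, False),
        Offlinetag.TAG_ME: PhotoPolicy(True, True, False, True),
    }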
Intended Use and Enforcement
As laid out above, Offlinetags are intended to be worn in the form of stickers, buttons, badges, etc. that represent the wearer’s privacy preferences in a given situation. The question is, then, how the so-formulated preferences are to be enforced. First of all, Offlinetags are not meant to “replace” or “overwrite” existing legal rules already regulating the handling of photos showing individuals. Instead, Offlinetags shall complement legal rules by providing a simple and intuitive way of communicating individual preferences within specific situations. This being said, we envisage different modes of enforcement:
First, the intuitive design shall make those persons taking and handling photos aware of the photo subjects’ preferences. Of course, it is always possible to act against these preferences, but this would necessarily require a conscious decision against the subject’s explicitly stated preference and therefore break a moral convention. In this vein, Offlinetags function as a means of communication among humans and allow for a more consensual practice in the field of photo sharing.
Second, Offlinetags are explicitly designed to be easily recognizable by algorithms. This enables automatic enforcement of the preferences at any point of the photo-sharing chain, from the camera to the final recipient. In particular, one can think of cameras that don’t take photos or automatically blur them as soon as the respective Offlinetag is recognized. Based on OpenCV and Qt, we implemented a proof-of-concept desktop application realizing exactly this functionality for photos taken by a webcam. Figure 1 shows this application in operation for a “blur me” Offlinetag.
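As a rough illustration of what such a camera-side mechanism could look like, here is a minimal sketch in the spirit of the proof of concept (which the paper states was built on OpenCV and Qt). It is not the authors’ code: it detects bold circles as candidate tag anchors with a Hough transform, omits the classification of the symbol inside the circle, and simply treats any detected tag as a “blur me” plea; all parameter values are illustrative.

    # Minimal sketch (not the published proof of concept): detect circular tag
    # anchors in a frame and, assuming a "blur me" plea, blur detected faces.
    import cv2
    import numpy as np

    # Standard Haar cascade shipped with OpenCV for frontal face detection.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


    def find_tag_anchors(gray):
        """Find bold circles that may be Offlinetag anchors. Color is ignored,
        matching the design decision that hues are not machine-evaluated."""
        circles = cv2.HoughCircles(
            gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=80,
            param1=100, param2=40, minRadius=15, maxRadius=150)
        return [] if circles is None else np.uint16(np.around(circles[0]))


    def blur_faces(frame, gray):
        """Replace every detected face region with a heavily blurred version."""
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(
                frame[y:y + h, x:x + w], (51, 51), 0)
        return frame


    frame = cv2.imread("frame.png")  # stand-in for a single webcam frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if len(find_tag_anchors(gray)) > 0:  # symbol classification omitted here
        frame = blur_faces(frame, gray)  # treat any tag found as "blur me"
    cv2.imwrite("frame_out.png", frame)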
As, however, such mechanisms could easily be circumvented, combined modes of enforcement seem most promising to us. For example, we envisage cameras and upload routines which automatically analyze photos, raise indicative warnings, or even offer to automatically blur certain faces once the respective tag has been found. This would allow taking the best of both modes without patronizing individual users, and it would introduce a whole new method of self-regulation, since other users of a certain platform would be aware that a meddle-sharer must have consciously infringed upon the well-stated preferences of the photo subject.
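A combined mode could, for instance, look like the following sketch, which builds on the hypothetical PhotoPolicy table from the earlier sketch: the upload routine merely returns indicative warnings and leaves the final decision with the user. Function and message wording are assumptions for illustration only.

    # Hypothetical sketch of a combined-mode upload hook: analyze the detected
    # tags, warn the uploader, and offer remedies instead of hard-blocking.
    # Relies on Offlinetag and POLICIES from the policy-table sketch above.
    def check_before_upload(detected_tags):
        """Return (clean, warnings) for a photo that is about to be shared."""
        warnings = []
        for tag in detected_tags:
            policy = POLICIES[tag]
            if not policy.may_photograph:
                warnings.append("'No photos' tag found: consider not sharing "
                                "this photo at all.")
            elif policy.must_blur:
                warnings.append("'Blur me' tag found: offer to blur the "
                                "wearer's face before uploading.")
            elif not policy.may_tag:
                warnings.append("'Upload me' tag found: disable tagging and "
                                "face recognition for this photo.")
        # Warnings are indicative; the user keeps the final decision.
        return len(warnings) == 0, warnings


    # Example: a photo containing one "blur me" wearer triggers one warning.
    ok, notes = check_before_upload([Offlinetag.BLUR_ME])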
Figure 1: Proof-of-concept implementation of the software, showing its ability to recognize faces and tags appropriately and to automatically perform the desired behavior.
Conclusion and Next Steps
As we have seen, the concept of Offlinetags introduces novel ways of influencing the way photos are shared online. In particular, they address the problem of photos being “meddle-shared” without the consent or even the knowledge of the photo subject. The design is optimized for being recognized by humans as well as by computer vision algorithms and thereby supports different modalities of enforcement.

For the future, we are planning to focus on more detailed analyses of the Offlinetags’ interplay with existing legal regulations regarding the handling of photos. For example, it is unclear whether wearing an “upload me” tag would suffice as an explicit statement of consent to publicly share a photo, as is formally required under several legislations. Furthermore, we are also planning structured user studies on the appropriateness and acceptance of the overall concept as well as of the chosen elementary pleas.

From a more general perspective, the effectiveness of social enforcement mechanisms based on norm compliance also needs to be explored further. We will therefore relate the concept of Offlinetags to broader debates on anonymity, privacy and public space, and consider its relationship to current debates on non-state governance. Finally, we will try to find partners who will implement and test Offlinetags-based mechanisms in camera apps, upload routines and other contexts in order to evaluate the concept’s practical applicability.
References
[1] Besmer, A., Lipford, H.R. Moving beyond untagging: photo privacy in a tagged world. Proc. CHI '10, pp. 1563-1572. DOI: 10.1145/1753326.1753560
[2] Kumari, P., Pretschner, A., Peschla, J., Kuhn, J.-M. Distributed data usage control for web applications: a social network implementation. Proc. CODASPY '11, pp. 85-96. DOI: 10.1145/1943513.1943526
[3] Mannan, M., van Oorschot, P.C. Privacy-enhanced sharing of personal content on the web. Proc. WWW '08, pp. 487-496. DOI: 10.1145/1367497.1367564
[4] Pretschner, A., Hilty, M., Basin, D. Distributed usage control. Comm. ACM 49(9), 2006, pp. 39-44. DOI: 10.1145/1151030.1151053
[5] Whitman, J.Q. The Two Western Cultures of Privacy: Dignity versus Liberty. Yale Law Journal 113(6), 2004, pp. 1151-1221.