Article

Gone in 15 Seconds: The Limits of Privacy Transparency and Control


Abstract

Even simpler or more usable privacy controls and notices might not improve users' decision-making regarding sharing of personal information. Control might paradoxically increase riskier disclosure by soothing privacy concerns. Transparency might be easily muted, and its effect arbitrarily controlled, through simple framing or misdirections.


... Both of these documents, however, suggest obtaining consent for data processing from data subjects. Although the GDPR defines several potential legal bases for lawful personal data processing (GDPR Art. 6(1)(b)-(f)), for instance the performance of a contract, the fulfilment of a legal obligation, vital interest, public interest, or legitimate interest, in many cases data controllers and processors will need to obtain consent from data subjects for the processing of their personal data, for example in order to deliver personalized recommendations or to improve their services. According to Art. 4(11) of the GDPR, consent needs to be a "freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she ..." ...
... However, studies show that such policies and terms and conditions are rarely read and, when they are, they are hard to digest [20]. Although there have been some attempts to give users more control and transparency regarding personal data processing [10,15], the cognitive limitations of data subjects in terms of understanding what exactly they consented to remain an open research challenge [1,6]. Considering that the GDPR in general, and GDPR Art. 4(11) in particular, is quite prescriptive when it comes to consent, we argue that consent request user interface (UI) designers should pay particular attention to the consent requirements specified in the GDPR and the interpretation of those requirements, in the form of guidelines, by various expert groups, such as the European Data Protection Board and its predecessor, the Article 29 Working Party. ...
Conference Paper
Although the General Data Protection Regulation (GDPR) defines several potential legal bases for personal data processing, in many cases data controllers, even when they are located outside the European Union (EU), will need to obtain consent from EU citizens for the processing of their personal data. Unfortunately, existing approaches for obtaining consent, such as pages of text followed by an agreement/disagreement mechanism, are neither specific nor informed. In order to address this challenge, we introduce our Consent reqUest useR intErface (CURE) prototype, which is based on the GDPR requirements and the interpretation of those requirements by the Article 29 Working Party (i.e., the predecessor of the European Data Protection Board). The CURE prototype provides transparency regarding personal data processing, gives users more control via customization, and, based on the results of our usability evaluation, improves user comprehension of what data subjects actually consent to. Although the CURE prototype is based on the GDPR requirements, it could potentially be used in other jurisdictions as well.
... To reduce privacy concerns and thereby increase service usage, previous research has investigated transparency-enhancing mechanisms [6,34,61], including privacy assurances [44] and control features [42]. Information-use transparency is the extent to which an online firm provides features that allow consumers to access the data collected about them and informs them about how and for what purposes the acquired information is used [6]. From a consumer's perspective, privacy policies and transparency features are not the same [6]: transparency features give an overview and thus enhance the sense of which information is collected and how it could be used by organizations in an accessible and understandable way. ...
... Although available in all conditions, transparency features also created increased awareness of the control that legislators have implemented for privacy protection. Based on prior studies, this should have further increased the impact of the transparency features [2]. Nevertheless, the inclusion of transparency features did not considerably change individuals' behavioral intentions. ...
... As transparency and control features are often discussed in combination [2,59], we would like to point out the difference between the two types of features, as we manipulate the availability of transparency features as part of our experiment while keeping the availability of control features constant. Transparency features aim at informing consumers about which data is collected, how it may be used, or to whom it may be passed on; thus, these features are a passive instrument [4]. ...
Article
Digital services need access to consumers’ data to improve service quality and to generate revenues. However, it remains unclear how such services should be configured to facilitate consumers’ willingness to share personal information. Prior studies discuss an influence of selected individual traits or service configurations, including transparency features and service personalization. This study aims at uncovering how interactions among individuals’ privacy valuation, transparency features, and service personalization influence their willingness to disclose information. Building on information boundary theory, we conducted an experimental study with 286 participants on a data-intense digital service. In contrast to our expectation, we found no indication that providing transparency features facilitates individuals’ information disclosure. Relative to the personalization–privacy paradox, individuals’ privacy valuation is a strong inhibitor of information provision in general, not only for personalized services. Personalization benefits only convince consumers who exhibit little focus on privacy. Thus, service providers need to align their service designs with consumers’ privacy preferences.
... No study to date has investigated how the transparency of privacy policies would influence actual disclosure on SNSs depending on whether or not the policies are privacy-friendly. Even if a policy is transparent, users may refuse to attend to it, miscomprehend it, or ignore it [14, 24-26]. ...
... Depending on the approach, users are either deemed able to self-protect when provided with all relevant information in a user-friendly way or are deemed incapable of self-protection and need backend design solutions or regulators to protect them. Despite a wealth of research, the empirical evidence on which approach best reflects reality is not straightforward [25,37] and there are only few attempts to ensure a causal understanding of the phenomenon via experimental research [31,37]. Until recently, the majority of the community has followed the rational user approach and it has been an exception "for privacy researchers to consider alternative models and explanations outside the APCO model" [32, p. 640]. ...
... Consequently, reading a policy appears to be no guarantee for policy comprehension. Internet users have limited time and attention and may not fully comprehend privacy policies for many reasons [16,25,50]. By now users are also habituated to accepting terms and conditions without reading them [38,51]. ...
Preprint
Full-text available
Users disclose ever-increasing amounts of personal data on Social Network Service platforms (SNSs). Unless SNS policies are privacy-friendly, this leaves users vulnerable to privacy risks, since they tend to ignore the privacy policies. Designers and regulators have pushed for shorter, simpler, and more prominent privacy policies; however, evidence that transparent policies increase informed consent is lacking. To examine this question, we conducted an online experiment with 214 regular Facebook users asked to join a fictitious SNS. We experimentally manipulated the privacy-friendliness of the SNS's policy and varied threats of secondary data use and data visibility. Half of our participants incorrectly recalled even the most formally "perfect" and easy-to-read privacy policies. Mostly, users recalled policies as more privacy-friendly than they were. Moreover, participants self-censored their disclosures when aware that visibility threats were present, but were less sensitive to threats of secondary data use. We present design recommendations to increase informed consent.
... As information sharing becomes more ubiquitous, privacy trade-offs have attracted due attention. In recent years, transparency and control approaches (such as 'notice and consent' regimes) have been touted as one of the important measures to help individuals steer through privacy trade-offs (Acquisti et al. 2013; Kaplan 2016). The argument is made that if individuals are informed about how their data will be handled (for example, what is being collected and to whom it is disclosed), then they will be able to decide their preferences regarding privacy protection and disclosure. ...
... The limits of transparency are further exposed by research indicating that transparency and control might paradoxically increase disclosure of sensitive information. Consent mechanisms can also exploit (known and still unknown) cognitive biases, such as limited attention span, framing effects, and decision-making heuristics, in how people interpret and act on available information (Acquisti et al. 2013; Kahneman and Tversky 1979). For example, Adjerid et al. (2013), in a series of experiments, demonstrate how simple misdirections can alter subjects' perception of privacy risks, even though the objective risks (and corresponding facts) are not altered. ...
Article
Full-text available
Wearable self-tracking devices capture multidimensional health data and offer several advantages including new ways of facilitating research. However, they also create a conflict between individual interests of avoiding privacy harms, and collective interests of assembling and using large health data sets for public benefits. While some scholars argue for transparency and accountability mechanisms to resolve this conflict, an average user is not adequately equipped to access and process information relating to the consequences of consenting to further uses of her data. As an alternative, this paper argues for fiduciary relationships, which put deliberative demands on digital health data controllers to keep the interests of their data subjects at the forefront as well as cater to the contextual nature of privacy. These deliberative requirements ensure that users can engage in collective participation and share their health data at a lower risk of privacy harms. This paper also proposes a way to balance the flexible and open-ended nature of fiduciary law with the specific nature and scope of fiduciary duties that digital health data controllers should owe to their data subjects.
... Given all the challenges to appropriation discussed, the result is a control paradox. Merely signaling the possibility of control over some data makes consumers responsive to market nudges (Tucker 2014) and tempts them into giving away even more data (Acquisti, Adjerid, and Brandimarte 2013; Acquisti, Brandimarte, and Loewenstein 2015; Brandimarte et al. 2013). The desire for control makes people so keen to believe in the promise of control that they may end up sacrificing that which they hoped to obtain: control as a gateway to a sense of ownership. ...
Chapter
Full-text available
In the age of information everything becomes mined for the nuggets giving rise to it: data. Yet, who these new treasures do and should belong to is still being hotly debated. With individuals often acting as the source of the ore and businesses acting as the miners, both appear to hold a claim. This chapter contributes to this debate by analyzing whether and when personal data may evoke a sense of ownership in those they are about. Juxtaposing insights on the experience and functions of ownership with the essence of data and practices in data markets, we conclude that a very large fraction of personal data defies the logic and mechanisms of psychological possessions. In the canon of reasons for this defeat, issues of data characteristics, obscuring market practices, and data’s mere scope are center stage. In response, we propose to condense the boundless collection of data points into the singularized and graspable metaphor of a digital blueprint of the self. This metaphor is suggested to grasp the notion of personal data. To also enable consumers to effectively manage their data, we advocate adopting a practice commonly used with plentiful assets: the establishment of personal data agents and managers.
... Still, there are yet unexplored issues. For example, transparency is fundamental to the privacy protection of users in the current privacy self-management model, and it also ensures accountability [10]. Nevertheless, there is currently no study investigating transparency issues in smart assistants. ...
Chapter
This paper intends to highlight the risks of AI in Android smartphones. In this regard, we perform a risk analysis of Google Smart Assistant, a state-of-the-art, AI-powered smartphone app, and assess the transparency in its risk communication to users and implementation. Android users rely on the transparency of an app’s descriptions and Permission requirements for its risk evaluation, and many risk evaluation models consider the same factors while calculating app threat scores. Further, different risk evaluation models and malware detection methods for Android apps use an app’s Permissions and API usage to assess its behavior. Therefore, in our risk analysis, we assess Description-to-Permissions fidelity and Functions-to-API-Usage fidelity in Google Smart Assistant. We compare Permission and API usage in Google Smart Assistant with those of four leading smart assistants and discover that Google Smart Assistant has unusual permission requirements and sensitive API usage. Our risk analysis finds a lack of transparency in risk communication and implementation of Google Smart Assistant. This lack of transparency may make it impossible for users to assess the risks of this app. It also makes some of the state-of-the-art app risk evaluation models and malware detection methods ineffective.
... While limits on user decision-making have been part of convincing arguments that notice and choice has insufficiently protected privacy [31], [32] or has even entirely failed as a concept [33], they have also been used to suggest the potential role of design in overcoming the framework's shortcomings, by nudging users towards privacy-protective choices [34]. Schaub et. ...
Preprint
The California Consumer Privacy Act (CCPA)---which began enforcement on July 1, 2020---grants California users the affirmative right to opt-out of the sale of their personal information. In this work, we perform a manual analysis of the top 500 U.S. websites and classify how each site implements this new requirement. We find that the vast majority of sites that implement opt-out mechanisms do so with a Do Not Sell link rather than with a privacy banner, and that many of the linked opt-out controls exhibit features such as nudging and indirect mechanisms (e.g., fillable forms). We then perform a pair of user studies with 4357 unique users (recruited from Google Ads and Amazon Mechanical Turk) in which we observe how users interact with different opt-out mechanisms and evaluate how the implementation choices we observed---exclusive use of links, prevalent nudging, and indirect mechanisms---affect the rate at which users exercise their right to opt-out of sale. We find that these design elements significantly deter interactions with opt-out mechanisms (including reducing the opt-out rate for users who are uncomfortable with the sale of their information) and that they reduce users' awareness of their ability to opt-out. Our results demonstrate the importance of regulations that provide clear implementation requirements in order to empower users to exercise their privacy rights.
... As one of the prime goals is to achieve transparency and preserve consumer rights in this vast digital sphere, we need to acknowledge the limitations of transparency [3,4]. Handing users the responsibility to understand consequences, to adjudicate them, to recognize their tolerance thresholds, and ultimately to make an informed decision can be a daunting and burdensome task [34]. ...
... The traditional way to obtain consent is to ask for consent for all current and future personal data processing outlined in very general terms by clicking on an agree button. Acquisti et al. (2013) highlight that several behavioral studies dispute the effectiveness of such consent mechanisms from a comprehension perspective. A study by McDonald and Cranor (2008) indicates it would take on average 201 hours per year per individual if people were to actually read existing privacy policies. ...
... In the case of data sharing, incomplete or asymmetric information, intangible risks with complex mitigation processes, and benefit trade-offs lead to a heavy reliance on heuristics or other choice strategies that utilize the information people actually have [9,38]. One such piece of information might be the trust that decision-makers place in the person or institution that receives the data [39,40]. In their systematic review, Clayton et al [41] identified the primary data recipient (and potential third-party recipients) as one of the major concerns that determine willingness-to-share genomic data. ...
Article
Of all the information that we share, health and genetic data might be among the most valuable for researchers. As such data are handled as particularly sensitive information, a number of pressing issues regarding people's preferences and privacy concerns are raised. The goal of the present study was to contribute to an understanding of people's reported willingness-to-share genetic data for science (WTS). For this, predictive psychological factors (e.g., risk and benefit perceptions, trust, knowledge) were investigated in an online survey (N = 416). Overall, participants seemed willing to provide their genetic data for research. Participants who perceived more benefits associated with data sharing were particularly willing to share their data for research (β = .29), while risk perceptions were less influential (β = -.14). As participants with higher knowledge of the potential uses of genetic data for research perceived more benefits (β = .20), WTS can likely be improved by providing people with information regarding the usefulness of genetic data for research. In addition to knowledge and perceptions, trust in data recipients increased people's willingness-to-share directly (β = .24). Especially in the sensitive area of genetic data, future research should strive to understand people's shifting perceptions and preferences. Graphical Abstract: Estimated model with standardised regression weights (N = 416; *: p < .05, **: p < .01, ***: p < .001).
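
For readers unfamiliar with standardised regression weights, the sketch below shows, purely as an illustration, how coefficients of this kind can be estimated by regressing a willingness-to-share score on z-standardised predictors. It is not the cited study's analysis (which estimates a full path model); the variable names and synthetic data are hypothetical.

```python
# Illustrative sketch only: standardised OLS approximating the kind of effects reported
# above (willingness-to-share predicted by benefit/risk perceptions, trust, knowledge).
# Synthetic data and hypothetical variable names; not the cited study's model or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 416  # sample size reported in the abstract

df = pd.DataFrame({"knowledge": rng.normal(size=n),
                   "trust": rng.normal(size=n),
                   "risks": rng.normal(size=n)})
df["benefits"] = 0.2 * df["knowledge"] + rng.normal(size=n)
df["wts"] = (0.29 * df["benefits"] - 0.14 * df["risks"]
             + 0.24 * df["trust"] + rng.normal(size=n))

z = (df - df.mean()) / df.std(ddof=0)  # z-standardise so coefficients read like betas
model = smf.ols("wts ~ benefits + risks + trust + knowledge", data=z).fit()
print(model.params.round(2))
```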
... EU GDPR 2016/679). Paradoxically, however, transparency might backfire: users can interpret transparency as a cue to quickly decide that a system can be trusted [1,2] and then disclose their personal data. Transparency would then work as a heuristic. ...
Chapter
Full-text available
We present a study that investigates whether the transparency of the data acquisition technique can work as a heuristic when making evaluations about data protection and sensitivity. The study (N = 40) compares an explicit data acquisition technique (questionnaires) with an implicit one (an eye-tracker) and also varies the actual sensitivity of the data collected (popularity evaluation vs. usability evaluation). The results suggest that, when judging general data sensitivity, the transparency of the data collection procedure might work as a heuristic; when more specific judgments or decisions are requested, however, this effect is not observed. Implications are discussed.
... Labeling ground beef as 75% lean instead of 25% fat significantly affected participants' perceptions of beef quality (Levin & Gaeth, 1988). Privacy research has likewise shown that presenting information with a positive or negative framing can change individuals' perceptions and awareness in disclosure decisions (Acquisti, Adjerid, & Brandimarte, 2013; Acquisti, Brandimarte, & Loewenstein, 2015; Gluck et al., 2016). Framing can thus ease individuals' privacy concerns. ...
Article
Individuals' disclosure of personal health information (PHI) can hold substantial benefits for both users and providers, but users are often reluctant to disclose, even if they gain benefits such as better personalization. While previous research has dealt with message framing and information quality in a health-related context, these factors have been examined separately. To the best of our knowledge, we are among the first to examine both factors (attribute framing and argument strength) and their interaction concerning PHI disclosure. Thus, we conducted a web-based experiment with 529 participants to examine the impacts of two persuasive message techniques (attribute framing and argument strength) on individuals' PHI disclosure. We reveal that individuals tend to disclose more PHI when they experience persuasive messages with more positively framed health wearable (HW) attributes or messages with higher argument strength concerning data collection. We enable researchers to uncover the impacts of persuasive messages in highly sensitive data environments and provide practitioners with workable suggestions on how to affect individuals' PHI disclosure behaviors.
... Even transparency and control can be used to nudge consumers toward higher disclosures (Acquisti, Adjerid, & Brandimarte, 2013). Transparency and control are important components of privacy management. ...
Article
Full-text available
We review different streams of social science literature on privacy with the goal of understanding consumer privacy decision making and deriving implications for policy. We focus on psychological and economic factors influencing both consumers' desire and consumers' ability to protect their privacy, either through individual action or through the implementation of regulations applying to firms. Contrary to depictions of online sharing behaviors as careless, we show how consumers fundamentally care about online privacy, and present evidence of numerous actions they take to protect it. However, we also document how prohibitively difficult it is to attain desired, or even desirable, levels of privacy through individual action alone. The remaining instrument for privacy protection is policy intervention. However, again for both psychological and economic reasons, the collective impetus for adequate intervention is often countervailed by powerful interests that oppose it.
... Currently, the predominant mechanism for obtaining consent is to present the data subject with a verbose description of the current and future data processing, where processing is described in some very general terms. Although there are a couple of papers that examine consent in the form of notice and choice, whereby data subjects are provided with more transparency and control [1,4], we highlight that the cognitive limitations of end users are a major issue which needs to be addressed. ...
Conference Paper
The General Data Protection Regulation (GDPR) requires, except for some predefined scenarios (e.g., contract performance, legal obligations, vital interests, etc.), obtaining consent from the data subjects for the processing of their personal data. Companies that want to process personal data of the European Union (EU) citizens but are located outside the EU also have to comply with the GDPR. Existing mechanisms for obtaining consent involve presenting the data subject with a document where all possible data processing, done by the entire service, is described in very general terms. Such consent is neither specific nor informed. In order to address this challenge, we introduce a consent request (CoRe) user interface (UI) with maximum control over the data processing and a simplified CoRe UI with reduced control options. Our CoRe UI not only gives users more control over the processing of their personal data but also, according to the usability evaluations reported in the paper, improves their comprehension of consent requests.
... Therefore, enhanced PDT - originally meant as a means of increasing consumer protection - may indeed lead to less privacy. ...
Chapter
Full-text available
A growing number of business models are based on the collection, processing and dissemination of personal data. For a free decision about the disclosure of personal data, the individual concerned needs transparency as insight into which personal data is collected, processed, passed on to third parties, for what purposes and for what time (Personal Data Transparency, or PDT for short). The intention of this paper is to assess theories for research on PDT. We performed a literature review and explored theories used in research on PDT. We assessed the selected theories that may be appropriate for exploring PDT. Such research may build on several theories that open up different perspectives and enable various fields of study.
... Privacy notices (privacy policies) can be too long and complex to be comprehensible for average users. Notification mechanisms do not consider users' limitations and biases and are therefore not effective [11]. Research on how to communicate and present risks and policies, and how to develop effective notification mechanisms, is therefore urgently needed in order to enable users to make effective privacy decisions. ...
Conference Paper
Full-text available
Consent is a key measure for privacy protection and needs to be 'meaningful' to give people informational power. It is increasingly important that individuals are provided with real choices and are empowered to negotiate for meaningful consent. Meaningful consent is an important area for consideration in IoT systems, since privacy is a significant factor impacting the adoption of IoT. Obtaining meaningful consent is becoming increasingly challenging in IoT environments. It is proposed that an "apparency, pragmatic/semantic transparency model" adopted for data management could make consent more meaningful, that is, visible, controllable and understandable. The model has illustrated the why and what issues regarding data management for potential meaningful consent [1]. In this paper, we focus on the 'how' issue, i.e. how to implement the model in IoT systems. We discuss apparency by focusing on the interactions and data actions in the IoT system; pragmatic transparency by centring on the privacy risks and threats of data actions; and semantic transparency by focusing on the terms and language used by individuals and the experts. We believe that our discussion will elicit more research on the apparency model in IoT for meaningful consent.
... But even when it is genuine, transparency might not protect the user as much as one would expect. Acquisti et al. describe two risks deriving from making a system transparent to the user: the control paradox and the user's (unnecessary) responsibilization [1]. The former consists in users taking more risks with a transparent system because they feel more in control than with opaque systems. ...
... These are important proposals to make hidden data exchanges more detectable. However, previous work [1] has shown that simply providing more control and more transparency around data exchanges are not straightforward solutions to the problem of increasing levels of user comfort in this context. ...
Conference Paper
Full-text available
We are surrounded by a proliferation of connected devices performing increasingly complex data transactions. Traditional design methods tend to simplify or conceal this complexity to improve ease of use. However, the hidden nature of data is causing increasing discomfort. This paper presents BitBarista, a coffee machine designed to explore perceptions of data processes in the Internet of Things. BitBarista reveals social, environmental, qualitative and economic aspects of coffee supply chains. It allows people to choose a source of future coffee beans, situating their choices within the pool of decisions previously made. In doing so, it attempts to engage them in the transactions that are required to produce coffee. Initial studies of BitBarista with 42 participants reveal challenges of designing for connected systems, particularly in terms of perceptions of data gathering and sharing, as well as assumptions generated by current models of consumption. A discussion is followed by a series of suggestions for increasing positive attitudes towards data use in interactive systems.
... Another interesting observation that we made is that only Hoepman [16] discusses the right to data portability. This right, its implementation, and its consequences seem to not yet have been discussed deeply in the literature. Kalloniatis et al. [19,20], Spiekermann and Cranor [21], Makri and Lambrinoudakis [22], Acquisti et al. [23], Masiello [24], Krol and Preibusch [25], Deng et al. [26], Komanduri et al. [27], Cranor [28], Wicker and Schrader [29] ...
Article
Full-text available
Privacy as a software quality is becoming more important these days and should not be underestimated during the development of software that processes personal data. The privacy goal of intervenability, in contrast to unlinkability (including anonymity and pseudonymity), has so far received little attention in research. Intervenability aims for the empowerment of end-users by keeping their personal data and how it is processed by the software system under their control. Several surveys have pointed out that the lack of intervenability options is a central privacy concern of end-users. In this paper, we systematically assess the privacy goal of intervenability and set up a software requirements taxonomy that relates the identified intervenability requirements with a taxonomy of transparency requirements. Furthermore, we provide a tool-supported method to identify intervenability requirements from the functional requirements of a software system. This tool-supported method provides the means to elicit and validate intervenability requirements in a computer-aided way. Our combined taxonomy of intervenability and transparency requirements gives a detailed view on the privacy goal of intervenability and its relation to transparency. We validated the completeness of our taxonomy by comparing it to the relevant literature that we derived based on a systematic literature review. The proposed method for the identification of intervenability requirements shall support requirements engineers to elicit and document intervenability requirements in compliance with the EU General Data Protection Regulation.
... Post-consent access is a further challenge, as users' preferences may change over time. Thus, single actions should not be definitive, irrevocable, and long-lasting in effect [28], and consent should not be modelled as a single-point decision [29], [30]. ...
... In accordance with what we stressed when explaining the relevant hypotheses, safeguards and guarantees are designed to protect the security and privacy of social shoppers. Prior research reported that privacy and security treatments written in a similar way will not perform very well because those clauses are often too legalistic and complex for regular users to read and understand (Acquisti et al., 2013). In a safe buy-button scenario, we selected and adapted Alibaba's safeguards and guarantees to our case, and used a brief, concise text describing them. ...
Article
Buy buttons are a strategic enabler of social shopping and social platform monetization, but their use is still far from satisfactory. This paper aims to find effective actions to optimize buy button performance. A two-factor (safe shopping vs. unsafe shopping; an integrated path to purchase vs. a separated path to purchase), between-subject experiment was conducted on China’s online crowdsourcing platform, Wjx.cn. A two-way, full factorial ANOVA was used to test hypotheses. The study showed that shopping safeguards and guarantees that a social platform aligns with buy button use can positively influence users’ shopping-related attitudinal and behavioral responses, such as shopping attitude and impulse buying intent. In addition, an integrated purchase path, where the user can complete the entire purchasing process on the platform where the item is offered, can result in users adopting a favorable attitude towards using the social platform to make purchases. However, no interaction was found between these two factors, only additive effects. Although major social platforms do not currently offer security and privacy measures for direct purchases, this paper reasonably explains why as well as what measures a social platform can take to optimize buy button performance. This study also indicates that channel integration of a focal social platform and external commercial websites can provide users with an integrated, seamless path to purchase, and can enable social platforms to enhance their value position in today’s business ecosystem.
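
The abstract above mentions a two-factor between-subject design analysed with a two-way full factorial ANOVA. As a minimal sketch of how such an analysis is typically set up (with synthetic data and hypothetical variable names, not the authors' materials):

```python
# Illustrative sketch of a 2x2 between-subjects full factorial ANOVA of the kind
# described above. Factor levels, variable names, and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
rows = []
for safeguards in ("safe", "unsafe"):
    for path in ("integrated", "separated"):
        # synthetic cell means: small boosts for safeguards and an integrated path
        mean = 4.0 + 0.5 * (safeguards == "safe") + 0.3 * (path == "integrated")
        for y in rng.normal(loc=mean, scale=1.0, size=60):
            rows.append({"safeguards": safeguards, "path": path, "attitude": y})
df = pd.DataFrame(rows)

model = smf.ols("attitude ~ C(safeguards) * C(path)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects and the interaction term
```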
... Thus, participants rated their privacy concerns higher than their needs for SN services, indicating that their privacy calculus is not expected to change after a privacy violation, since they already respond with mechanisms such as the ones proposed in [83], namely refusal or negativism. Given this, and considering that the relationship between self-disclosure and benefits is mediated by the factor of control of information [84], it was not surprising that the participants indicated a low degree of control over their information. ...
Article
Full-text available
Social Networks (SNs) bring new types of privacy risks and threats for users, which developers should be aware of when designing the respective services. Aiming at safeguarding users' privacy more effectively within SNs, self-adaptive privacy preserving schemes have been developed, considering the importance of users' social and technological context and the specific privacy criteria that should be satisfied. However, under current self-adaptive privacy approaches, the examination of users' social landscape, interrelated with their privacy perceptions and practices, is not thoroughly considered, especially as far as users' social attributes are concerned. This study aimed at elaborating this examination in depth, in order to identify the users' social characteristics and privacy perceptions that can affect self-adaptive privacy design, as well as to indicate self-adaptive privacy related requirements that should be satisfied for users' protection in SNs. The study was based on an interdisciplinary research instrument, adopting constructs and metrics from both the sociological and the privacy literature. The results of the survey led to a pilot taxonomic analysis for self-adaptive privacy within SNs and to the proposal of specific privacy related requirements that should be considered in this domain. To further establish our interdisciplinary approach, a case study scenario was formulated, which underlines the importance of the identified self-adaptive privacy related requirements. In this regard, the study provides further insight for the development of the behavioural models that will enhance the optimal design of self-adaptive privacy preserving schemes in SNs, and supports designers in applying the principle of PbD from a technical perspective.
... These insights also extend research on framing. Previous research has shown that presenting information with a positive or negative framing can change individuals' perceptions and behavioural reactions in adoption and disclosure decisions (Acquisti et al., 2013, 2015; Lee & Koo, 2012). For instance, message content is more likely to positively affect behaviour if the message frame prompts positive thoughts and associations (Angst & Agarwal, 2009). ...
Article
Full-text available
The underlying scenario for adopting Covid-19 contact tracing apps is complex, given that users face potential surveillance, while expected health benefits are for the greater societal good rather than for themselves. To encourage adoption, many governments employ persuasive messages, highlighting either the app’s potential societal benefits (e.g., protecting the elderly) or high privacy standards. Responding to public media criticism, we compare the impact of two different persuasive messages, which focus on either societal benefits or privacy assurance. Emphasising societal benefits successfully stimulates this decision making without users losing sight of privacy risks. In contrast, emphasising privacy assurance diminishes users’ societal welfare concerns, as potential personal gains and losses largely frame their decision. These results are critical for developing and launching tracing apps, also beyond Covid-19, as well as for other applications of which widespread adoption is imperative to unlock potential societal level benefits, while requiring individual disclosure of sensitive data.
... Data subjects often face cognitive limitations in understanding what exactly they consent to. Having a uniform document (the European data altruism consent form) may be hoped to improve readability and understandability for individuals, so that they give genuinely informed consent. It is also commendable that the proposal foresees that the informed consent form will be tailored to specific sectors and for different purposes. ...
Technical Report
Full-text available
The White Paper offers an academic perspective to the discussion on the Data Governance Act proposal ("DGA proposal"), as adopted by the European Commission in November 2020. It contains a legal analysis of the DGA proposal and includes recommendations to amend its shortcomings. The White Paper aims to cover the full spectrum of the DGA proposal and therefore offers an in-depth analysis of its main provisions. In conclusion, the authors identify general patterns at work in the DGA proposal, namely, first, the (new) regulation of data as an object and, even more so, as an object of rights. This approach, the authors find, may exacerbate the risk of contradictions between the DGA proposal and the GDPR at the level of principles. Second, it discusses the relationship of the DGA proposal vis-à-vis the (regulation of) European data spaces and, more generally, its place in the two-pillars approach of the EC, between horizontal (sector-agnostic) and sectoral regulation of data. Finally, the DGA proposal is identified as a cornerstone of the new EU 'digital sovereignty' policy.
... A-2: Post-Consenting Access. The expression of one's own privacy preferences is cumbersome; thus, new interfacing mechanisms such as self-service privacy dashboards [4] have the potential to manage changes. Access post-consent is a further challenge, as consenting is often neither conceptualized nor modelled as a process that spans long periods rather than a single-point decision [1]. Single actions should not be definitive, irrevocable, and long-lasting in effect [28]. ...
Conference Paper
Full-text available
Data Protection and Consenting Communication Mechanisms (DPCCMs) have the potential of becoming one of the most fundamental means of protecting humans' privacy and agency. However, they are yet to be improved, adopted and enforced. In this paper, based on the results of a technical document analysis and an expert study, we identify some of the main technical factors that can serve as comparison factors between DPCCMs, as well as some of the main interdisciplinary challenges of a Human-centric, Accountable, Lawful, and Ethical practice of personal data processing on the Internet, and we discuss whether the current DPCCM proposals can contribute towards their resolution. In particular, we discuss the two current open specifications, i.e. the Advanced Data Protection Control (ADPC) and the Global Privacy Control (GPC), based on the identified challenges.
... The privacy paradox of apparently hypocritical valorization of personal data is further exacerbated by regular users' inability to make sense of sophisticated privacy agreements (Acquisti, Adjerid and Brandimarte, 2013, 2015). Trusting in individual, informed, free choice over data disclosure is theoretically risky if not downright impossible, given the layered complexity of corporate business practices regarding data, the chains through which data are transferred to various organizations with multiple interests and policies, and the obfuscation introduced into such agreements through corporate jargon (Nissenbaum, 2010). ...
Article
We study variability in General Data Protection Regulation (GDPR) awareness in relation to digital experience in the 28 European countries of EU27-UK, through secondary analysis of the Eurobarometer 91.2 survey conducted in March 2019 (N = 27,524). Education, occupation, and age are the strongest sociodemographic predictors of GDPR awareness, with little influence of gender, subjective economic well-being, or locality size. Digital experience is significantly and positively correlated with GDPR awareness in a linear model, but this relationship proves to be more complex when we examine it through a typological analysis. Using an exploratory k-means cluster analysis we identify four clusters of digital citizenship, across both dimensions of digital experience and GDPR awareness: the off-line citizens (22%), the social netizens (32%), the web citizens (17%), and the data citizens (29%). The off-line citizens rank lowest in internet use and GDPR awareness; the web citizens rank at about average values, while the data citizens rank highest in both digital experience and GDPR knowledge and use. The fourth identified cluster, the social netizens, have a discordant profile, with remarkably high social network use, below average online shopping experiences, and low GDPR awareness. Digitalization in human capital and general internet use is a strong country-level correlate of the national frequency of the data citizen type. Our results confirm previous studies of the low privacy awareness and skills associated with intense social media consumption, but we find that young generations are evenly divided between the rather carefree social netizens and the strongly invested data citizens. In order to achieve the full potential of the GDPR in changing surveillance practices while fostering consumer trust and responsible use of Big Data, policymakers should more effectively engage the digitally connected social netizens in the public debate over data use and protection. Moreover, they should enable all types of digital citizens to exercise their GDPR rights and to support the creation of value from data, while defending the right to protection of personal data.
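
The typological step described above (an exploratory k-means cluster analysis over the two dimensions of digital experience and GDPR awareness) can be illustrated with a minimal sketch; the data, index names, and preprocessing below are assumptions, not the authors' pipeline:

```python
# Illustrative sketch: k-means clustering of respondents on two standardised indices
# (digital experience, GDPR awareness), as in the typology described above.
# Synthetic data and hypothetical variable names only.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
df = pd.DataFrame({"digital_experience": rng.normal(size=1000),
                   "gdpr_awareness": rng.normal(size=1000)})

X = StandardScaler().fit_transform(df[["digital_experience", "gdpr_awareness"]])
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
df["cluster"] = km.labels_

# Cluster centres and relative sizes would then be inspected and labelled
# (e.g. off-line citizens, social netizens, web citizens, data citizens).
print(df.groupby("cluster").mean().round(2))
print(df["cluster"].value_counts(normalize=True).round(2))
```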
... Such information, especially when detailed, may create a feeling of a "wall of text." Being cognitively overloaded, users may easily neglect or mute such information, resulting in a lower overall understanding of the AI system [1,46], which can reduce their ability to form appropriate trust or make informed decisions. Second, current explanations have followed a one-size-fits-all model [4], whereby users have no control over what explanations they see. ...
... In recent years, with new data protection legislation launching around the globe, the assessment of interfaces regarding (legal) principles such as transparency [2,6,28], choice and notice [13,58,59] or their disregard by use of "dark patterns" [42] has gained popularity in HCI. While clearly having a connection to the legal discourse, more often than not, the dots still need to be connected by the reader herself. ...
Article
Data protection risks play a major role in data protection laws and have been shown to be suitable means for accountability in designing for usable privacy. Especially in the legal realm, risks are typically collected heuristically or deductively, e.g., by referring to fundamental right violations. Following a user-centered design credo, research on usable privacy has shown that a user perspective on privacy risks can enhance system intelligibility and accountability. However, research on mapping the landscape of user-perceived privacy risks is still in its infancy. To extend the corpus of privacy risks as users perceive them in their daily use of technology, we conducted 9 workshops collecting 91 risks in the fields of web browsing, voice assistants and connected mobility. The body of risks was then categorized by 11 experts from the legal and HCI domains. We find that, while existing taxonomies generally fit well, a societal dimension of risks is not yet represented. Discussing our empirically backed taxonomy, including the full list of 91 risks, we demonstrate ways to use user-perceived risks as a mechanism to foster accountability for usable privacy in connected devices.
... Although privacy policies are essential in various respects, prior research suggests that they suffer from multiple deficiencies. For example, previous studies highlighted that privacy policies contain ambiguity, incomplete information, and excessive usage of legalistic language, leading to a lack of transparency [20], [21]. In addition, inconsistent formats [22] and poor readability [23] have also been noted in privacy policies. ...
Conference Paper
Today's privacy policies contain various deficiencies, including failure to convey information comprehensibly to most Internet users and a lack of transparency. Meanwhile, existing studies on privacy policies have only focused on specific areas of interest and lack an inclusive outlook on the state of privacy policies, due to differences in privacy policy samples, text properties, measures, methodologies, and backgrounds. Therefore, this research develops an assessment metric to bridge this gap by integrating the fragmented understanding of privacy policies and exploring potential aspects for evaluating privacy policies that are absent from existing studies. The multifaceted assessment metric developed through this study covers three main aspects: content, text property, and user interface. Through the investigation and analyses performed on Malaysian organizations' online privacy policies, this study reveals several trends using text processing and clustering analysis methods: (1) the use of jargon in privacy policies is relatively low, (2) privacy policies with a higher compliance level tend to be lengthier and more repetitive, and vice versa, and (3) regardless of compliance level, there are privacy policies that are not presented in a user-friendly font size. Finally, as an experiment in applying the developed metric, the results confirm the relevance of the assessment metric for evaluating online privacy policies via text processing and clustering analysis.
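
As a rough illustration of the text processing and clustering analysis mentioned above (the paper's exact features and parameters are not given here), privacy policy texts could be vectorised and clustered along these lines; the sample policies, feature settings, and cluster count are assumptions:

```python
# Illustrative sketch: clustering privacy policy texts on content and simple text
# properties, in the spirit of the assessment described above. Sample policies,
# feature settings, and the number of clusters are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

policies = [
    "We collect your email address to provide and improve the service ...",
    "Personal data may be shared with third parties for marketing purposes ...",
    "This policy explains how cookies and analytics data are used on our site ...",
    "Your information is stored securely and never sold to advertisers ...",
    # ... one string per organisation's privacy policy
]

# Content-based features: TF-IDF over the policy wording
X = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(policies)

# A simple text property (length in words) that could complement the content features
lengths = [len(p.split()) for p in policies]

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for policy, label, length in zip(policies, labels, lengths):
    print(label, length, policy[:50])
```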
Article
A contact tracing app can positively support the requirement of social and physical distancing during a pandemic. However, there are aspects of the user’s intention to download the app that remain under-researched. To address this, we investigate the role of perceived privacy risks, social empowerment, perceived information transparency and control, and attitudes towards government, in influencing the intention to download the contact tracing app. Using fuzzy set qualitative comparative analysis (fsQCA), we found eight different configurations of asymmetrical relationships of conditions that lead to the presence or absence of an intention to download. In our study, social empowerment significantly influences the presence of an intention to download. We also found that perceived information transparency significantly influences the absence of an intention to download the app.
Article
Phishing is a method of social engineering—it attempts to influence behavior and/or beliefs—in which a party either “imitates a trusted source” (Felix & Hauck, 1987) or induces another party to trust or place more or a different kind of trust in it. I argue that by their very nature, social platforms such as Facebook, Twitter, and others are large-scale phishing operations designed to collect information about users surreptitiously. Although providing terms of service and privacy policies, an individual has no way of knowing the extent of the platform’s personal data collection. This article reconsiders platforms as organizational phishing, and just as harmful as that done by hackers or others seeking unjust enrichment. To do this, this article identifies the significant elements of platform phishing by examining the descriptions of platform conduct provided in regulatory actions taken by the US Federal Trade Commission.
Conference Paper
For a long time, the Internet and web technologies have supported a more fluid interaction between public institutions and citizens through e-government. With this spirit, several public services are being offered online. One of such services, though not a standard one, is transparency. Strongly encouraged by open-data initiatives, transparency is being marketed as a powerful mechanism to fight corruption. Leveraging communication technologies, societies are broadly adopting online transparency practices to give the general public more control over the scrutiny of state institutions. However, a neglected implementation of transparency may cause almost unlimited access to large amounts of information, a side effect we call hyper-transparency. Inevitably, serious privacy risks arise for the individuals in this context. In this work, we analyze the emergence of hyper-transparent practices in Ecuador, a country recently involved in a fierce attempt to offer free access to public information as a fundamental right enabled through e-government. Moreover, we systematically dissect the large amount of microdata released online by Ecuadorian public institutions. Accordingly, we also unveil here a scenario where sensitive information of public employees is openly released under transparency laws. After exposing potential privacy violations, we elaborate on some mechanisms aimed at protecting citizens from such violations.
Chapter
Full-text available
Transparency seems to represent a solution to many ethical issues generated by systems that collect implicit data from users to model the users themselves based on programmed criteria. However, making such systems transparent - besides being a major technical challenge - risks raising more issues than it solves, actually reducing users' ability to protect themselves while trying to put them in control. Are transparent systems only a chimera, which provides a seemingly useful information pastiche while failing to make sense upon closer examination? Scholars from ethics and cognitive science share their thoughts on how to achieve genuine transparency and on the value of transparency.
Chapter
App Tracking Transparency (ATT) introduces opt-in tracking authorization for iOS apps. In this work, we investigate how mobile apps present tracking requests to users, and we evaluate how the observed design patterns impact users’ privacy. We perform a manual observational study of the Top 200 free iOS apps, and we classify each app by whether it requests permission to track, the purpose of the request, how the request was framed, whether the request was preceded or followed by additional ATT-related pages, and whether the request was preceded or followed by other permission requests. We then perform a user study with 950 participants to evaluate the impact of the observed UI elements. We find that opt-in authorizations are effective at enhancing data privacy in this context, and that the effect of ATT requests is robust to most implementation choices.
Article
Virtual advisors (VAs) are tools that assist users in making decisions. Using VAs necessitates the disclosure of personal information, especially when they are employed in personalized contexts such as healthcare, where disclosure is vital to providing valid and accurate advice. Yet, extant research has largely overlooked the factors that encourage or inhibit users from disclosing to VAs. In contrast, this study investigates the determinants of users' intentions to self-disclose, and examines how VAs can be designed to enhance these intentions. The results of a study in the context of skin care advice reveal that the intention to disclose to a VA is not only the product of a rational process, but that perceptions of the VA and the relationship with it are important. The results further show that a parsimonious set of design elements can be used to endow a VA with desired characteristics that enhance the willingness to disclose. The study contributes to our understanding of the factors influencing users' intentions to provide personal information to a VA, which extend beyond the expected benefits and costs. The study further demonstrates that social exchange theory can be applied in contexts in which humans are interacting with automated VAs.
Article
Encouraging consumers to enter a data disclosure process constitutes a crucial challenge for retailers. This paper suggests that retailers can lever consumers’ willingness to enter disclosure processes through the design of their data requests. Four experimental studies confirm that consumers are more likely to comply with a data request if retailers do not only use textual relevance arguments but also augment them with relevance-illustrating game elements to further underpin the purpose of data disclosure. This favorable effect can be delineated according to dual-processing models of decision-making: Relevance-illustrating game elements amplify the positive effect of textual relevance arguments by helping consumers to a) cognitively appreciate the objective benefits of data disclosure (i.e., meaningful engagement) and b) increase hedonic engagement on the affective processing route. However, arbitrarily chosen game elements which solely aim at entertaining without conveying the purpose of data disclosure, do not yield these positive effects. Finally, the authors show that the proposed approach is especially worthwhile for retailers facing customers with low trust levels, whereas customers with high trust levels are likely to comply with the data request regardless.
Article
Buy buttons are not only links allowing social platform users to complete a purchase directly by clicking on them, but they are also key to social platform monetization. Shedding light on the use of these buttons is, thus, particularly interesting for their practical value. We have conducted two between-subject experiments to analyze two basic issues: first, how buy buttons can affect social platform users' shopping-related attitudinal and behavioral responses; and, how providing users with a safe shopping environment can affect their shopping-related responses. The results from two samples (Spanish and Chinese) showed that displaying a buy button on social platform posts is related to better user shopping attitude and willingness to purchase; also, when users are on a platform that offers safeguards and guarantees, their shopping responses improve, with users' perceived risk towards shopping on such a platform ameliorated by its positioning as trustworthy mediator in the purchasing process.
Article
Purpose In the context of social media (SM) use, self-disclosure (SD) behaviour meets users' social and emotional needs, but it is also accompanied by risks that can harm users. This paper aims to identify the factors that influence users' SD behaviour on SM in Indonesia, using a comparative analysis based on age groups. Design/methodology/approach A survey was conducted on 2,210 respondents who were active SM users in Indonesia. Data were processed and analysed using covariance-based structural equation modelling with AMOS 24.0 software. Findings Results indicate that, in the overall age group data, factors such as use of information (UI), trust, privacy control (PC), interactivity, perceived benefits (PB) and perceived risks (PR) influence users' SD behaviour. This research also found differences in the characteristics of SD behaviour between age groups. Originality/value Findings from this study can help SM service providers to evaluate the credibility and reliability of their platforms to encourage user retention.
Preprint
Full-text available
We studied variability in General Data Protection Regulation (GDPR) awareness in relation to digital experience in the 28 European countries of EU27-UK, through secondary analysis of the Eurobarometer 91.2 survey conducted in March 2019 (N = 27,524). Education, occupation, and age were the strongest sociodemographic predictors of GDPR awareness, with little influence of gender, subjective economic well-being, or locality size. Digital experience was significantly and positively correlated with GDPR awareness in a linear model, but this relationship proved to be more complex when we examined it through a typological analysis. Using an exploratory k-means cluster analysis we identified four clusters of digital citizenship, across both dimensions of digital experience and GDPR awareness: the off-line citizens (22%), the social netizens (32%), the web citizens (17%), and the data citizens (29%). The off-line citizens ranked lowest in internet use and GDPR awareness; the web citizens ranked at about average values, while the data citizens ranked highest in both digital experience and GDPR knowledge and use. The fourth identified cluster, the social netizens, had a discordant profile, with remarkably high social network use, below average online shopping experiences, and low GDPR awareness. Digitalization in human capital and general internet use is a strong country-level correlate of the national frequency of the data citizen type. Our results confirm previous studies of the low privacy awareness and skills associated with intense social media consumption, but we found that young generations are evenly divided between the rather carefree social netizens and the strongly invested data citizens.
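The four-cluster typology described above rests on a standard k-means procedure applied to two standardized dimensions. As a purely illustrative sketch, and assuming hypothetical column names (digital_experience, gdpr_awareness) rather than the actual Eurobarometer 91.2 variables, the step could look like this:

```python
# Illustrative sketch of a k-means typology on two standardized dimensions
# (digital experience and GDPR awareness); the column names are assumptions,
# not the Eurobarometer 91.2 codebook variables.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def cluster_respondents(df: pd.DataFrame, k: int = 4) -> pd.DataFrame:
    features = df[["digital_experience", "gdpr_awareness"]]
    scaled = StandardScaler().fit_transform(features)   # equal weight per dimension
    model = KMeans(n_clusters=k, n_init=10, random_state=0)
    out = df.copy()
    out["cluster"] = model.fit_predict(scaled)
    return out

# Example (hypothetical file name):
# clustered = cluster_respondents(pd.read_csv("eurobarometer_91_2.csv"))
```

Standardizing both dimensions before clustering keeps either one from dominating the distance metric, which matters when the raw scales differ.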
Conference Paper
Full-text available
The new information and communication technology providers collect increasing amounts of personal data, a lot of which is user generated. Unless use policies are privacy-friendly, this leaves users vulnerable to privacy risks such as exposure through public data visibility or intrusive commercialisation of their data through secondary data use. Due to complex privacy policies, many users of online services unwillingly agree to privacy-intruding practices. To give users more control over their privacy, scholars and regulators have pushed for short, simple, and prominent privacy policies. The premise has been that users will see and comprehend such policies, and then rationally adjust their disclosure behaviour. In this paper, on a use case of social network service site, we show that this premise does not hold. We invited 214 regular Facebook users to join a new fictitious social network. We experimentally manipulated the privacy-friendliness of an unavoidable and simple privacy policy. Half of our participants miscomprehended even this transparent privacy policy. When privacy threats of secondary data use were present, users remembered the policies as more privacy-friendly than they actually were and unwittingly uploaded more data. To mitigate such behavioural pitfalls we present design recommendations to improve the quality of informed consent.
Chapter
Personal data is widely and readily available online. Some of that personal data might be considered private or sensitive, such as portions of social security numbers [1]. Prior research demonstrates that personal acquaintances know the data used in secondary authentication protocols [1]. We explored the discoverability and location of personal data online and gathered observations of the actors making that data available.
Purpose This study examines how the different dimensions of a privacy policy separately influence perceived effectiveness of privacy policy, as well as the mediating mechanisms behind these effects (i.e. vulnerability, benevolence). In addition, this study considers privacy concern as a significant moderator in the research model, to examine whether the relative influences of privacy policy content are contingent upon levels of users' privacy concern. Design/methodology/approach A survey experiment was conducted to empirically validate the model. Specifically, three survey experiments and six scenarios were designed to manipulate high and low levels of the three privacy policy dimensions (i.e. transparency, control and protection). The authors distributed 450 copies of the questionnaire in total, of which 407 were valid. Findings This paper found that (1) all three privacy policy dimensions directly influence perceived effectiveness of privacy policy; (2) all three privacy policy dimensions indirectly influence perceived effectiveness of privacy policy by enhancing perceived corporate benevolence, whereas control also affects perceived effectiveness of privacy policy by reducing perceived vulnerability; and (3) individuals with high privacy concern are much more impacted by privacy policy contents than individuals with low privacy concern. Practical implications The findings could provide website managers with guidelines on how to design privacy policy contents by reducing user perceptions of vulnerability and enhancing user perceptions of corporate benevolence. The managers need to focus on customers' perceived vulnerability and corporate benevolence when launching or updating privacy policies. Furthermore, the managers also need to attend to users' privacy concerns, especially for multinational companies or companies with specific consumer groups. Originality/value This study extends the current privacy policy literature by articulating the separate influences of the three privacy policy dimensions and their impact mechanisms on perceived effectiveness of privacy policy. It also uncovers privacy concerns as a boundary condition that influences the effects of privacy policy contents on users' privacy perceptions.
Article
Artificial intelligence is a rapidly developing field of research with many practical applications. Congruent with advances in technologies that enable big data, deep learning, and neural networks to train, learn, and predict, artificial intelligence creates new risks that are difficult to predict and manage. Such risks include economic turmoil, existential crises, and the dissolution of individual privacy. If unchecked, the capabilities of artificially intelligent systems could pose a fundamental threat to privacy in their operation, or these systems may leak information under adversarial conditions. In this article, we survey the literature and provide various scenarios for the use of artificial intelligence, highlighting potential risks to privacy and offering various mitigating strategies. For the purpose of this research, a North American perspective of privacy is adopted. Impact statement: While an appreciation of the privacy risks associated with artificial intelligence is important, a thorough understanding of the assortment of different technologies that comprise artificial intelligence better prepares those implementing such systems for assessing privacy impacts. This can be achieved through the independent consideration of each constituent of an artificially intelligent system and its interactions. Under individual consideration, privacy-enhancing tools can be applied in a targeted manner to reduce the risk associated with specific components of an artificially intelligent system. A generalized North American approach to assess privacy risks in such systems is proposed that will retain applicability as the field of research evolves and can be adapted to account for various sociopolitical influences. With such an approach, privacy risks in artificially intelligent systems can be well understood, measured, and reduced.
Article
When using mobile apps that extensively collect user information, privacy uncertainty, which is consumers’ difficulty in assessing the privacy of the data they entrust to others, is a major concern. Using a simulated app-buying experiment, we find that privacy uncertainty, which is mainly driven by uncertainty about what data are collected and how they are used and protected, is indeed a significant influencer of one’s intentions to use a mobile app and the perceived risk associated with that use, as well as the price a potential consumer is willing to pay for an app. Our results further show that the uncertainty concerning the data collected while using a mobile app drives consumers’ decisions more than the uncertainty regarding data that are collected at the time an app is downloaded. To investigate whether privacy uncertainty continues to be a factor after a consumer has already started using an app, we conducted a survey of users of wellness and personal finance apps. The results indicate that privacy uncertainty is a lingering concern because it continues to influence a user’s intention to continue using an app and the perceived risk associated with that continued use.
Chapter
Today's privacy policies contain various deficiencies, including failure to convey information comprehensibly to most Internet users and a lack of transparency. Meanwhile, existing studies on privacy policies have only focused on specific areas of interest and lack an inclusive outlook on the state of privacy policies, due to differences in privacy policy samples, text properties, measures, methodologies, and backgrounds. Therefore, this research develops an assessment metric to bridge this gap by integrating the fragmented understanding of privacy policies and exploring potential aspects for evaluating privacy policies that are absent from existing studies. The multifaceted assessment metric developed through this study covers three main aspects: content, text property, and user interface. Through the investigation and analyses performed on Malaysian organizations' online privacy policies, this study reveals several trends using text processing and clustering analysis methods: (1) the use of jargon in privacy policies is relatively low, (2) privacy policies with higher compliance levels tend to be lengthier and more repetitive, and vice versa, and (3) regardless of compliance level, there are privacy policies that are not presented in a user-friendly font size. Finally, as an experiment in applying the developed metric, the results confirm the relevance of the assessment metric for evaluating online privacy policies via text processing and clustering analysis.
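The text-property side of such an assessment metric can be approximated with simple lexical statistics before any clustering step. The sketch below is only a rough stand-in for the study's instrument; the jargon word list and the specific features (length, repetitiveness, average sentence length) are assumptions made for illustration:

```python
# Minimal sketch of text-property features for a privacy-policy assessment
# metric. The jargon list is an assumed placeholder, not the instrument
# developed in the cited study.
import re
from collections import Counter

JARGON = {"indemnify", "herein", "thereof", "pursuant", "aforementioned"}  # assumed list

def policy_features(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    return {
        "word_count": len(words),
        "jargon_ratio": sum(counts[w] for w in JARGON) / max(len(words), 1),
        # crude repetitiveness proxy: share of word occurrences beyond first use
        "repetitiveness": 1 - len(counts) / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }
```

Feature vectors of this kind can then be standardized and clustered (for example with k-means) to group policies by compliance and readability profiles.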
Article
Full-text available
We test the hypothesis that increasing individuals’ perceived control over the release and access of private information—even information that allows them to be personally identified––will increase their willingness to disclose sensitive information. If their willingness to divulge increases sufficiently, such an increase in control can, paradoxically, end up leaving them more vulnerable. Our findings highlight how, if people respond in a sufficiently offsetting fashion, technologies designed to protect them can end up exacerbating the risks they face.
Article
Full-text available
We investigate regrets associated with users' posts on a popular social networking site. Our findings are based on a series of interviews, user diaries, and online surveys involving 569 American Facebook users. Their regrets revolved around sensitive topics, content with strong sentiment, lies, and secrets. Our research reveals several possible causes of why users make posts that they later regret: (1) they want to be perceived in favorable ways, (2) they do not think about their reason for posting or the consequences of their posts, (3) they misjudge the culture and norms within their social circles, (4) they are in a "hot" state of high emotion when posting, or under the influence of drugs or alcohol, (5) their postings are seen by an unintended audience, (6) they do not foresee how their posts could be perceived by people within their intended audience, and (7) they misunderstand or misuse the Facebook platform. Some reported incidents had serious repercussions, such as breaking up relationships or job losses. We discuss methodological considerations in studying negative experiences associated with social networking posts, as well as ways of helping users of social networking sites avoid such regrets.
Conference Paper
This paper deals with the estimation of line-of-sight rates and angles of an evader with respect to a pursuer from the available measurements from (i) a noisy seeker alone and (ii) a noisy seeker together with radar. During terminal guidance, the on-board seeker of the pursuer acquires measurements of range rate, gimbal angles, and line-of-sight rates along the yaw and pitch planes. Generally the gimbal angles are contaminated by correlated boresight error. The line-of-sight rates are very noisy and correlated with glint, radar cross-section fluctuation, and thermal noise. Also, due to eclipsing there is an aperiodic data loss in line-of-sight rate measurements. Since the noise is generally modeled as white Gaussian in Kalman filter applications, the present study using the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF) is worked out to provide a benchmark solution for comparison with non-white and non-Gaussian model results that may become available later. The line-of-sight rates and angles, along with the observer states, are estimated under the above practical constraints. The system state kinematic equations have been used in both Cartesian and polar frames, and the available measurements are in the seeker gimbal frame. The present results from the EKF and UKF under the white Gaussian noise assumption have been found to be consistent. Thus the UKF, which accounts for system nonlinearity better than the EKF, shows the adequacy of the latter in the present case. Hence the EKF under the above Gaussian assumption, rather than the UKF, has been used to estimate the line-of-sight rates and observer states when both seeker and radar measurements are available. The estimation algorithm has been validated in a simulated environment and the results are encouraging. Despite these modelling approximations, the present filter has been shown to be adequate in this scenario. Further research, to be carried out by modelling non-Gaussian measurement noise, is also outlined.
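For readers unfamiliar with the estimator being benchmarked, the sketch below shows a generic extended Kalman filter predict/update cycle. It is not the seeker filter from the study; the dynamics f, measurement model h, and their Jacobians F and H are placeholders for the seeker-specific models, and the noise covariances Q and R are assumed inputs:

```python
# Generic extended Kalman filter predict/update step, illustrating the kind of
# estimator compared against the UKF above. f, h, F, H stand in for the
# seeker-specific dynamics, measurement model, and their Jacobians.
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    # Predict: propagate state and covariance through the (nonlinear) dynamics.
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q
    # Update: correct with the (nonlinear) measurement via its Jacobian.
    H_k = H(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H_k @ P_pred @ H_k.T + R           # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```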
Conference Paper
In this paper, the formulation and realistic 6-DOF simulation of a pursuer-evader engagement has been carried out using pure proportional navigation and bang-bang guidance laws in the presence of a low-pass filter lag for processing seeker measurements and an autopilot lag. During the engagement, the pursuer velocity is highly time-varying. A realistic seeker model has been used for obtaining the evader information as measurements during terminal guidance. A frequency-domain low-pass filter has been used to process the noisy seeker measurements for guidance command generation. The guidance kinematics has been formulated in the seeker gimbal angle frame.
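A minimal sketch of the guidance loop just described, assuming a pure proportional navigation command with a first-order low-pass filter on the measured line-of-sight rate and a first-order autopilot lag. The navigation gain and time constants are illustrative values, not those used in the paper:

```python
# Minimal sketch of pure proportional navigation with a first-order low-pass
# filter on the noisy LOS-rate measurement and a first-order autopilot lag.
# Gains and time constants are illustrative assumptions.
def pn_command(los_rate_meas, closing_velocity, los_rate_filt, accel_achieved,
               dt, N=3.0, tau_filter=0.1, tau_autopilot=0.3):
    # First-order low-pass filter on the noisy seeker LOS-rate measurement.
    los_rate_filt += (dt / tau_filter) * (los_rate_meas - los_rate_filt)
    # Pure proportional navigation: commanded lateral acceleration.
    accel_cmd = N * closing_velocity * los_rate_filt
    # First-order autopilot lag between commanded and achieved acceleration.
    accel_achieved += (dt / tau_autopilot) * (accel_cmd - accel_achieved)
    return los_rate_filt, accel_achieved
```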
Conference Paper
In an effort to address persistent consumer privacy concerns, policy makers and the data industry seem to have found common ground in proposals that aim at making online privacy more "transparent." Such self-regulatory approaches rely on, among other things, providing more and better information to users of Internet services about how their data is used. However, we illustrate in a series of experiments that even simple privacy notices do not consistently impact disclosure behavior, and may in fact be used to nudge individuals to disclose variable amounts of personal information. In a first experiment, we demonstrate that the impact of privacy notices on disclosure is sensitive to relative judgments, even when the objective risks of disclosure actually stay constant. In a second experiment, we show that the impact of privacy notices on disclosure can be muted by introducing simple misdirections that do not alter the objective risk of disclosure. These findings cast doubt on the likelihood that initiatives predicated on notices and transparency will, by themselves, address online privacy concerns.
Article
This paper presents an exact solution to the delayed data problem for the information form of the Kalman filter, together with its application to decentralised sensing networks. To date, the most common method of handling delayed data in sensing networks has been to use a conservative time alignment of the observation data with the filter time. However, by accounting for the correlation between the late data and the filter over the delayed period, an exact solution is possible. The inclusion of this information correlation term adds little extra complexity, and may be applied in an information filter update stage which is associative. The delayed data algorithm can also be used to handle data that is asequent or out of order. The asequent data problem is presented in a simple recursive information filter form. The information filter equations presented in this paper are applied in a decentralised picture compilation problem. This involves multiple aircraft tracking multiple ground targets and the construction of a single common tactical picture.
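The associativity that makes the delayed-data and asequent-data cases tractable is easiest to see in the information (inverse-covariance) form of the measurement update, sketched below. This is a simplified sketch under standard information-filter assumptions; the exact correlation term for delayed data derived in the paper is omitted:

```python
# Information-form (inverse-covariance) measurement update. Because the update
# is additive, contributions from several sensors or several time-stamped
# observations can be fused in any order; the paper's exact correlation term
# for delayed data is not reproduced here.
import numpy as np

def information_update(Y, y, z, H, R):
    R_inv = np.linalg.inv(R)
    I = H.T @ R_inv @ H     # information contributed by this observation
    i = H.T @ R_inv @ z     # information-state contribution
    return Y + I, y + i     # additive, hence associative across observations
```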
Article
Introduction, 99. — I. Some general features of rational choice, 100. — II. The essential simplifications, 103. — III. Existence and uniqueness of solutions, 111. — IV. Further comments on dynamics, 113. — V. Conclusion, 114. — Appendix, 115.
Article
Privacy decisions often involve balancing competing interests. As such, they're a natural field of study for economics. But traditional economic models have made overly restrictive assumptions about the stability and nature of individual privacy preferences. Approaches drawing on existing research in behavioral economics and psychology can offer complementary tools for understanding privacy decision making.
Conference Paper
We used an iterative design process to develop a privacy label that presents to consumers the ways organizations collect, use, and share personal information. Many surveys have shown that consumers are concerned about online privacy, yet current mechanisms to present website privacy policies have not been successful. This research addresses the present gap in the communication and understanding of privacy policies, by creating an information design that improves the visual presentation and comprehensibility of privacy policies. Drawing from nutrition, warning, and energy labeling, as well as from the effort towards creating a standardized banking privacy notification, we present our process for constructing and refining a label tuned to privacy. This paper describes our design methodology; findings from two focus groups; and accuracy, timing, and likeability results from a laboratory study with 24 participants. Our study results demonstrate that compared to existing natural language privacy policies, the proposed privacy label allows participants to find information more quickly and accurately, and provides a more enjoyable information seeking experience.
Article
Analysis of decision making under risk has been dominated by expected utility theory, which generally accounts for people's actions. Presents a critique of expected utility theory as a descriptive model of decision making under risk, argues that common forms of utility theory are not adequate, and proposes an alternative theory of choice under risk called prospect theory. In expected utility theory, utilities of outcomes are weighted by their probabilities. Considers results of responses to various hypothetical decision situations under risk and shows results that violate the tenets of expected utility theory. People overweight outcomes considered certain, relative to outcomes that are merely probable, a situation called the "certainty effect." This effect contributes to risk aversion in choices involving sure gains, and to risk seeking in choices involving sure losses. In choices where gains are replaced by losses, the pattern is called the "reflection effect." People discard components shared by all prospects under consideration, a tendency called the "isolation effect." Also shows that in choice situations, preferences may be altered by different representations of probabilities. Develops an alternative theory of individual decision making under risk, called prospect theory, formulated for simple prospects with monetary outcomes and stated probabilities, in which value is assigned to gains and losses (i.e., changes in wealth or welfare) rather than to final assets, and probabilities are replaced by decision weights. The theory has two phases. The editing phase organizes and reformulates the options to simplify later evaluation and choice. The edited prospects are then evaluated and the highest-value prospect is chosen. Discusses and models this theory, and offers directions for extending prospect theory.
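A numerical sketch of the evaluation phase, V = sum over outcomes of pi(p) * v(x), can make the certainty effect concrete. The functional forms and parameter values below are the later Tversky and Kahneman (1992) estimates, used here only for illustration; they are not part of the 1979 article summarized above:

```python
# Numerical sketch of prospect theory's evaluation phase: V = sum(pi(p) * v(x)).
# The value and weighting functions below use parameter estimates from
# Tversky and Kahneman's later (1992) work; they are illustrative only.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    # Concave for gains, convex and steeper for losses (loss aversion).
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    # Inverse-S-shaped decision weight: overweights small probabilities.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(prospect):
    # prospect: list of (outcome, probability) pairs
    return sum(weight(p) * value(x) for x, p in prospect)

# Certainty effect: with these assumed parameters, a sure 3,000 is valued
# above an 80% chance of 4,000, despite the gamble's higher expected value.
print(prospect_value([(3000, 1.0)]) > prospect_value([(4000, 0.8)]))  # True
```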
Conference Paper
This paper is concerned with optimal filtering in a distributed multiple-sensor system with so-called out-of-sequence measurements (OOSM). Based on BLUE (best linear unbiased estimation) fusion, we present two algorithms for updating with OOSM that are optimal for the information available at the time of update. Different minimum storage requirements for information concerning the occurrence time of OOSMs are given for the two algorithms. It is shown by analysis and simulation results that the two proposed algorithms are flexible and simple.
Article
In target tracking systems measurements are typically collected in "scans" or "frames" and then they are transmitted to a processing center. In multisensor tracking systems that operate in a centralized manner, there are usually different time delays in transmitting the scans or frames from the various sensors to the center. This can lead to situations where measurements from the same target arrive out of sequence. Such "out-of-sequence" measurement (OOSM) arrivals can occur even in the absence of scan/frame communication time delays. The resulting "negative-time measurement update" problem, which is quite common in real multisensor systems, was solved previously only approximately in the literature. The exact state update equation for such a problem is presented. The optimal and two suboptimal algorithms are compared on a number of realistic examples, including a GMTI (ground moving target indicator) radar case.
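To see what the exact update avoids, the sketch below shows the brute-force baseline: buffer past filter states and measurements, roll back to just before the out-of-sequence measurement's timestamp, and re-run the filter forward. The interface (a user-supplied step function) is an assumption for illustration, not the retrodiction algorithm derived in the paper:

```python
# Brute-force baseline for an out-of-sequence measurement: keep a buffer of
# past filter states and measurements, roll back to just before the OOSM's
# timestamp, insert it, and re-run the filter forward. The cited paper derives
# an exact update that avoids this reprocessing; this is only a baseline.
from bisect import insort

class ReprocessingTracker:
    def __init__(self, x0, P0, t0, step):
        # step(x, P, t_from, t_to, z) -> (x, P): one predict-and-update cycle
        self.step = step
        self.history = [(t0, x0, P0)]   # time-ordered filter checkpoints
        self.measurements = []          # time-ordered (t, z) pairs

    def add_measurement(self, t, z):
        insort(self.measurements, (t, z))
        # Roll back to the last checkpoint strictly before the new measurement.
        while len(self.history) > 1 and self.history[-1][0] >= t:
            self.history.pop()
        t_prev, x, P = self.history[-1]
        # Re-run the filter over every measurement from that point onward.
        for t_m, z_m in [m for m in self.measurements if m[0] > t_prev]:
            x, P = self.step(x, P, t_prev, t_m, z_m)
            self.history.append((t_m, x, P))
            t_prev = t_m
        return x, P
```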