Chapter

Privacy and Data Protection: Processing Personal Data, Monitoring, and Profiling Citizens


Abstract

This chapter examines the impact of AI technologies on fundamental rights, especially the right to privacy, and the heightened risks of discrimination, false accusations, and surveillance that their application carries. In particular, it explicates the problem of data processing in the era of AI and big-data systems in light of the EU and Estonian data protection frameworks.


... Customers should be cautious of phishing scams, use strong passwords, and keep software and anti-virus programs up to date. Consumers also have the right to access, correct, and delete personal data (Pratheep Kumar et al., 2017; Ebers & Tupay, 2023). They should be aware of their rights and how to exercise them. ...
... The Duncan test was then conducted to examine the differences among the selected education-level groups. The results indicated that Ph.D. students (mean value = 4.72) were the most concerned about data privacy, while UG (mean value = 4.08) and PG (mean value = 4.37) students were grouped into the first category, as shown in Table 11. This implies that UG and PG students were less concerned about data privacy than Ph.D. students. ...
Conference Paper
Full-text available
In this digital era, the use of the internet for information and communication has become an integral part of daily life. With the significant increase in personal and sensitive information collected and stored digitally, customers are highly concerned about their data privacy, protection, and security. This study investigated internet users' concerns regarding data privacy, security, and safety, which are crucial issues as users entrust their personal and sensitive information to companies. The study identified the critical concerns for computer and smartphone users, such as insecure data sharing, data breaches, inability to protect personal information, companies selling data to third parties, and lack of online platform security. A questionnaire was developed with the help of group discussion to examine users' concerns, and primary data were subsequently collected. The data were analyzed using independent-samples t-tests and analysis of variance, which determined differences between demographic variables and study factors. The results showed varying opinions between male and female users, email account holders and non-holders, online buyers and non-buyers, and those concerned about credit card information linkage versus those who were not. Young, highly educated smartphone users from middle-class families expressed significantly greater concern about data privacy, security, and protection than older users and non-smartphone users. The study explores these growing concerns among different user groups in India and suggests that the government address them by setting specific policies to safeguard data and data security and to provide consumers with the privacy they deserve.
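To make the methodology concrete, here is a minimal sketch of the two tests the abstract names, an independent-samples t-test and a one-way ANOVA, run in Python with scipy on hypothetical concern scores. The group sizes, means, and spreads below are illustrative assumptions, not the study's data; the Duncan post-hoc grouping reported above would be an additional step.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 5-point "privacy concern" scores; sizes and means are assumptions.
ug  = rng.normal(4.08, 0.5, 60).clip(1, 5)   # undergraduate respondents
pg  = rng.normal(4.37, 0.5, 60).clip(1, 5)   # postgraduate respondents
phd = rng.normal(4.72, 0.3, 30).clip(1, 5)   # Ph.D. respondents

male   = rng.normal(4.1, 0.6, 80).clip(1, 5)
female = rng.normal(4.3, 0.6, 70).clip(1, 5)

# Independent-samples t-test between two demographic groups.
t_stat, t_p = stats.ttest_ind(male, female, equal_var=False)
print(f"t-test: t = {t_stat:.2f}, p = {t_p:.3f}")

# One-way ANOVA across the three education-level groups.
f_stat, f_p = stats.f_oneway(ug, pg, phd)
print(f"ANOVA:  F = {f_stat:.2f}, p = {f_p:.3f}")

# Group means, mirroring the kind of values reported above.
for name, group in (("UG", ug), ("PG", pg), ("Ph.D.", phd)):
    print(f"{name}: mean = {group.mean():.2f}")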
Article
Full-text available
On 21 April 2021, the European Commission presented its long-awaited proposal for a Regulation “laying down harmonized rules on Artificial Intelligence”, the so-called “Artificial Intelligence Act” (AIA). This article takes a critical look at the proposed regulation. After an introduction (1), the paper analyzes the unclear preemptive effect of the AIA and EU competences (2), the scope of application (3), the prohibited uses of Artificial Intelligence (AI) (4), the provisions on high-risk AI systems (5), the obligations of providers and users (6), the requirements for AI systems with limited risks (7), the enforcement system (8), the relationship of the AIA with the existing legal framework (9), and the regulatory gaps (10). The last section draws some final conclusions (11).
Article
Full-text available
This article examines the problem of AI memory and the Right to Be Forgotten. First, this article analyzes the legal background behind the Right to Be Forgotten, in order to understand its potential applicability to AI, including a discussion on the antagonism between the values of privacy and transparency under current E.U. privacy law. Next, the authors explore whether the Right to Be Forgotten is practicable or beneficial in an AI/machine learning context, in order to understand whether and how the law should address the Right to Be Forgotten in a post-AI world. The authors discuss the technical problems faced when adhering to strict interpretation of data deletion requirements under the Right to Be Forgotten, ultimately concluding that it may be impossible to fulfill the legal aims of the Right to Be Forgotten in artificial intelligence environments. Finally, this article addresses the core issue at the heart of the AI and Right to Be Forgotten problem: the unfortunate dearth of interdisciplinary scholarship supporting privacy law and regulation.
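The technical obstacle the authors describe can be made concrete with a minimal sketch, assuming a toy least-squares model and synthetic data: deleting a record from the stored training set leaves the already-trained parameters untouched, so honoring a strict erasure request would require retraining or a dedicated machine-unlearning procedure.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Train a model (ordinary least squares) on data that includes the record.
w_trained, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Erase" the data subject's record from the stored training data...
X_after, y_after = X[1:], y[1:]

# ...but the deployed parameters w_trained are unchanged by that deletion;
# only retraining (or an unlearning procedure) yields parameters free of
# the record's influence.
w_retrained, *_ = np.linalg.lstsq(X_after, y_after, rcond=None)

print("parameters still in production:", w_trained)
print("parameters after retraining:  ", w_retrained)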
Article
Full-text available
Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that a 'right to explanation' of decisions made by automated or artificially intelligent algorithmic systems will be legally mandated by the GDPR. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the right to explanation of specific automated decisions claimed elsewhere, the GDPR only mandates that data subjects receive limited information (Articles 13-15) about the logic involved, as well as the significance and the envisaged consequences of automated decision-making systems, what we term a 'right to be informed'. Further, the ambiguity and limited scope of the 'right not to be subject to automated decision-making' contained in Article 22 (from which the alleged 'right to explanation' stems) raises questions over the protection actually afforded to data subjects. These problems show that the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless. We propose a number of legislative steps that, if taken, may improve the transparency and accountability of automated decision-making when the GDPR comes into force in 2018.
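As a rough illustration of the distinction the authors draw, the following hypothetical sketch shows what a 'right to be informed' might look like in code: the system returns general information about the logic involved, the significance, and the envisaged consequences of the processing, rather than an explanation of the specific decision. All names and strings here are illustrative assumptions, not drawn from the GDPR's text or any real system.

from dataclasses import dataclass

@dataclass
class DecisionNotice:
    decision: str
    logic_involved: str          # general description of the system's logic
    significance: str            # why the processing matters to the subject
    envisaged_consequences: str  # what may follow from the decision

def credit_decision(score: float) -> DecisionNotice:
    # A hypothetical automated decision plus the Articles 13-15 information.
    decision = "approved" if score >= 0.7 else "refused"
    return DecisionNotice(
        decision=decision,
        logic_involved="A statistical scoring model weighs income, "
                       "repayment history, and existing debt.",
        significance="The score determines access to the credit product.",
        envisaged_consequences="A refusal may be recorded and may affect "
                               "future applications.",
    )

print(credit_decision(0.65))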
Article
Full-text available
The GDPR will bring substantial changes to the legal framework of information privacy in the European Union. In this article we take a look at the GDPR from the vantage point of Big Data analysis: will it facilitate or hinder Big Data in Europe? While we find that data collection is further restricted, we point at a somewhat surprising change that may enable data reuse in the context of Big Data without reconsent, as well as extended data retention possibilities. We also point out important conceptual shortcomings of the GDPR with respect to Big Data, and offer suggestions for further information privacy development in Europe.
Chapter
In this chapter, a critical analysis is undertaken of the provisions of Art. 22 of the European Union’s General Data Protection Regulation of 2016, with lines of comparison drawn to the predecessor for these provisions—namely Art. 15 of the 1995 Data Protection Directive. Article 22 places limits on the making of fully automated decisions based on profiling when the decisions incur legal effects or similarly significant consequences for the persons subject to them. The basic argument advanced in the chapter is that Art. 22 on its face provides persons with stronger protections from such decision making than Art. 15 of the Directive does. However, doubts are raised as to whether Art. 22 will have a significant practical impact on automated profiling.
Article
The General Data Protection Regulation will lastingly change the face of data protection law. It sets its accents less through substantive-law innovations: above all, the marketplace principle and the institutional modifications to the structure of European data protection supervision give data protection law a rejuvenation whose effects radiate far beyond the borders of the Union. Combined with the switch to the legal form of a regulation, this brings a harmonization that is clearly visible compared with the previous directive regime. In substance, however, the General Data Protection Regulation is in parts more of a directive in a regulation's clothing: with roughly four dozen opening clauses, it leaves the Member States room for their own normative shadings. This article looks at which changes the Regulation brings to data protection law and how far the national leeway extends.
Article
A Google search for a person's name, such as "Trevon Jones", may yield a personalized ad for public records about Trevon that may be neutral, such as "Looking for Trevon Jones?", or may be suggestive of an arrest record, such as "Trevon Jones, Arrested?". This writing investigates the delivery of these kinds of ads by Google AdSense using a sample of racially associated names and finds statistically significant discrimination in ad delivery based on searches of 2184 racially associated personal names across two websites. First names, assigned at birth to more black or white babies, are found predictive of race (88% black, 96% white), and those assigned primarily to black babies, such as DeShawn, Darnell and Jermaine, generated ads suggestive of an arrest in 81 to 86 percent of name searches on one website and 92 to 95 percent on the other, while those assigned at birth primarily to whites, such as Geoffrey, Jill and Emma, generated more neutral copy: the word "arrest" appeared in 23 to 29 percent of name searches on one site and 0 to 60 percent on the other. On the more ad trafficked website, a black-identifying name was 25% more likely to get an ad suggestive of an arrest record. A few names did not follow these patterns. All ads return results for actual individuals and ads appear regardless of whether the name has an arrest record in the company's database. The company maintains Google received the same ad text for groups of last names (not first names), raising questions as to whether Google's technology exposes racial bias.
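The kind of statistical test behind such a finding can be sketched as follows, using hypothetical counts rather than the article's data: a chi-square test of independence between the racial association of the searched name and whether the delivered ad copy suggested an arrest.

from scipy.stats import chi2_contingency

# Rows: name group; columns: [arrest-suggestive ad, neutral ad].
# Counts are hypothetical, chosen only to echo the reported pattern.
table = [
    [860, 140],  # black-identifying names
    [250, 750],  # white-identifying names
]

chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
# A small p-value indicates ad copy is not independent of the racial
# association of the searched name, the pattern the article reports.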
Ethics guidelines for trustworthy AI (Ethics Guidelines)
  • AI HLEG (High-Level Expert Group on Artificial Intelligence)
Google mistakenly tags black people as “Gorillas,” showing limits of algorithms
  • A Barr
Enabling access, erasure, and rectification rights in AI systems
  • R Binns
National COVID-19 contact tracing apps
  • M Ciucci
  • F Gouardères
Taking AI personally: how the E.U. must learn to balance the interests of personal data privacy & artificial intelligence
  • M Humerick
Terviseandmete töötlemise õiguslik alus suurandmete analüüsil põhinevate personaalmeditsiini teenuste pakkumiseks. Tervise infosüsteemi andmetel põhinevate kliiniliste otsuste tugisüsteemide näide (the legal basis for processing health data to provide personalized-medicine services based on big-data analysis: the example of clinical decision support systems based on health information system data)
  • L M Kuuskmaa
§ 26. In: Madise Ü (ed) Commentaries of the Estonian Constitution (2020)
  • K Jaanimägi
  • L Oja
The impact of the General Data Protection Regulation (GDPR) on artificial intelligence (2020)
  • M Kritikos
  • European Parliamentary Research Service (EPRS)
Isikuandmete kaitse seaduse eelnõu seletuskiri (explanatory memorandum to the Personal Data Protection Act)
  • Ministry of Justice of Estonia
[Annotations to] GDPR Art. 6. In: Beck’scher Online-Kommentar, marginal notes 32–33
  • B P Paal
The real story on cookies: dispelling common myths about the GDPR and consent
  • T Rankin
Incompatible: the GDPR in the age of big data
  • T Z Zarsky
The pending European e-privacy regulation (EPR)
  • E McConnell
Mis toimub meie terviseandmetega: kas teavet saada on õigus või privileeg (what is happening with our health data: is obtaining information a right or a privilege)
  • K Pormeister
[Annotations to] GDPR Art. 6
  • M Albers
  • R-D Veit
Infotehnoloogilised võimalused põhiõiguste kaitsel (information-technology possibilities in protecting fundamental rights)
  • D Bogdanov
  • T Siil
Anonymity assessment - a universal tool for measuring anonymity of data sets under the GDPR with a special focus on smart robotics
  • M Kolain
  • C Grafenauer
  • M Ebers
Patsiendiseire ehk ennetamise eesmärgil terviseandmete töötlemine ilma patsiendi nõusolekuta (patient monitoring: processing health data for preventive purposes without the patient's consent)
  • K Pormeister
Datenschutzrechtliche Herausforderungen von KI (data protection challenges of AI)
  • T Krügel
Tehisintellekti kasutamine haldusakti andmisel (the use of artificial intelligence in issuing administrative acts)
  • K Lember
Data protection and privacy: (in)visibilities and infrastructures. Law, governance and technology series
  • B Sloot