
The EU is launching a market for personal data. Here’s what that means for privacy

Authors: Anna Artyushina

Abstract

In a radical shift for the EU’s data governance strategy, the Trusts Project promotes data sharing as a civic duty.
The EU's General Data Protection Regulation (GDPR) and stringent antitrust laws have inspired new legislation around the world.
For decades, the EU has codified protections on personal data and fought against what it viewed
as commercial exploitation of private information, proudly positioning its regulations in contrast
to the light-touch privacy policies in the United States.
The new European data governance strategy (pdf) takes a fundamentally different approach. With
it, the EU will become an active player in facilitating the use and monetization of its citizens’
personal data. Unveiled by the European Commission in February 2020, the strategy outlines
policy measures and investments to be rolled out in the next five years.
This new strategy represents a radical shift in the EU’s focus, from protecting individual privacy
to promoting data sharing as a civic duty. Specifically, it will create a pan-European market for
personal data through a mechanism called a data trust. A data trust is a steward that manages
people’s data on their behalf and has fiduciary duties toward its clients.
The EU’s new plan considers personal data to be a key asset for Europe. However, this approach
raises some questions. First, the EU’s intent to profit from the personal data it collects puts
European governments in a weak position to regulate the industry. Second, the improper use of
data trusts can actually deprive citizens of their rights to their own data.
The Trusts Project, the first initiative put forth by the new EU policies, will be implemented by
2022. With a €7 million budget, it will set up a pan-European pool of personal and nonpersonal
information that should become a one-stop shop for businesses and governments looking to
access citizens’ information.
Global technology companies will not be allowed to store or move Europeans’ data. Instead, they
will be required to access it via the trusts. Citizens will collect “data dividends,” which haven’t
been clearly defined but could include monetary or nonmonetary payments from companies that
use their personal data. With the EU’s roughly 500 million citizens poised to become data
sources, the trusts will create the world’s largest data market.
For citizens, this means the data created by them and about them will be held in public servers
and managed by data trusts. The European Commission envisions the trusts as a way to help
European businesses and governments reuse and extract value from the massive amounts of data
produced across the region, and to help European citizens benefit from their information. The
project documentation, however, does not specify how individuals will be compensated.
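To make the mechanism concrete, here is a minimal sketch, in Python, of how trust-mediated access and data dividends could work. All class and method names, the fields the trust releases, and the flat per-use credit are hypothetical; the Trusts Project has not published a technical interface, and the project documentation does not define how dividends would be calculated.

```python
# Hypothetical sketch: companies query a data trust instead of storing
# Europeans' data themselves, and each access is logged so a "data dividend"
# can later be credited to the data subject. Nothing here reflects an
# official Trusts Project specification.

from collections import defaultdict

class DataTrust:
    """Steward that holds personal data on citizens' behalf (fiduciary duty)."""

    def __init__(self):
        self._records = {}                     # citizen_id -> personal data
        self._dividends = defaultdict(float)   # citizen_id -> accrued credit
        self._access_log = []                  # (company, citizen_id, purpose)

    def register(self, citizen_id: str, data: dict) -> None:
        self._records[citizen_id] = data

    def access(self, company: str, citizen_id: str, purpose: str) -> dict:
        """Companies get mediated, logged access; the raw record stays in the
        trust's custody, and each use accrues a (hypothetical) dividend."""
        record = self._records[citizen_id]
        self._access_log.append((company, citizen_id, purpose))
        self._dividends[citizen_id] += 0.01    # placeholder flat credit per use
        return {"age_band": record.get("age_band"), "region": record.get("region")}

    def dividend_balance(self, citizen_id: str) -> float:
        return self._dividends[citizen_id]

# Example: a company queries the trust rather than holding the data itself.
trust = DataTrust()
trust.register("eu-0001", {"age_band": "30-39", "region": "AT", "income": 52000})
view = trust.access("AdTechCo", "eu-0001", purpose="market research")
print(view)                               # only the fields the trust releases
print(trust.dividend_balance("eu-0001"))  # 0.01
```

The design point the sketch is meant to illustrate is that companies receive a mediated view of the data while custody, access logging, and dividend accounting remain with the trust.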
Data trusts were first proposed by internet pioneer Sir Tim Berners-Lee in 2018, and the concept
has drawn considerable interest since then. Just like the trusts used to manage one's property, data
trusts may serve different purposes: they can be for-profit enterprises, set up purely for data
storage and protection, or run for a charitable cause.
IBM and Mastercard have built a data trust to manage the financial information of their European
clients in Ireland; the UK and Canada have employed data trusts to stimulate the growth of the AI
industries there; and recently, India announced plans to establish its own public data trust to spur
the growth of technology companies.
The new EU project is modeled on Austria’s digital system, which keeps track of information
produced by and about its citizens by assigning them unique identifiers and storing the data in
public repositories.
Unfortunately, data trusts do not guarantee more transparency. A trust is governed by a charter
created by its settlor, and the rules can be written to favor one party's interests over others'. A
trust is run by a board of directors, so whichever party holds more seats gains significant control.
The Trusts Project is bound to face some governance issues of its own. Public and private actors
often do not see eye to eye when it comes to running critical infrastructure or managing valuable
assets. Technology companies tend to favor policies that create opportunity for their own products
and services. Caught in a conflict of interest, Europe may overlook the question of privacy.
And in some cases, data trusts have been used to strip individuals of their rights to control data
collected about them. In October 2019, the government of Canada rejected a proposal by
Alphabet/Sidewalk Labs to create a data trust for Toronto’s smart city project. Sidewalk Labs had
designed the trust in a way that secured the company’s influence over citizens’ data. And India’s
data trust faced criticism for giving the government unrestricted access to personal information by
defining authorities as “information fiduciaries.”
One possible solution could be to set up an ecosystem of data stewards, both public and private,
that each serve different needs. Sylvie Delacroix and Neil Lawrence, the originators of this
bottom-up approach, liken data trusts to pension funds, saying they should be tightly regulated
and able to provide different services to designated groups.
When put into practice, the EU’s Trusts Project will likely change the privacy landscape on a
global scale. Unfortunately, however, this new approach won’t necessarily give European citizens
more privacy or control over their information. It is not yet clear what model of trusts the project
will pursue, but the policies do not currently provide any way for citizens to opt out.
At a recent congressional antitrust hearing in the United States, four major platform companies
publicly recognized the use of surveillance technologies, market manipulation, and forceful
acquisitions to dominate the data economy. The single most important lesson from these
revelations is that companies that trade in personal data cannot be trusted to store and manage it.
Decoupling personal information from the platforms’ infrastructure would be a decisive step
toward curbing their monopoly power. This can be done through data stewardship.
Ideally, the Trusts Project would show the world a more equitable way to capture and distribute
the true value of personal data. There’s still time to deliver on that promise.
Correction: In the original piece, the author suggested Sidewalk Labs sought to “control” citizens'
data. That description has been amended to “influence.”
Anna Artyushina is a public policy scholar specializing in data governance and smart cities. She is a
PhD candidate in science and technology studies at York University in Toronto.
... As noted earlier, even if regulators find ways to break up data market dominance, the result may hurt the profitability of companies and undermine consumer satisfaction. This dilemma has emboldened companies in U.S. antitrust hearings to defend their data monopolies: ' … four major platform companies publicly recognized the use of surveillance technologies, market manipulation, and forceful acquisitions to dominate the data economy' (Artyushina, 2020). ...
... It remains to be seen whether creating a public data monopoly will solve many problems of the current system. Any movement in the direction of public data trusts should be considered in light of the dangers of political tampering, government surveillance, and collusion between business and government trustees, among other abuses (Artyushina, 2020). ...
... Although the idea of data trusts has been attributed to World Wide Web pioneer and digital rights advocate Tim Berners-Lee (Artyushina, 2020), it seems unlikely that Berners-Lee would endorse government management of large user databases for commercial purposes without levels of privacy and user choice that would damage the economic value of the data. What the Web founder has called for is a sweeping Contract for the Web (https://contractfortheweb.org/) that includes governments, companies, civil society groups and users. ...
Article
Full-text available
Core precepts of the Silicon Valley technology culture include disruption, breaking things, and profiting from the results. As technologies run ahead of regulatory regimes, a great deal of attention has been paid to the organization and evaluation of regulatory processes, from corporate self-regulation, to direct state intervention, and multiple stakeholder governance. However, it is less clear what values or principles should animate these approaches. Public discussions and academic literatures have identified at least four critical areas that require attention: data market and business monopolies; moral dilemmas of AI, such as surveillance and behavioral engineering; a variety of environmental harms; and the spread of hate, disinformation and attacks on liberal democratic values. This analysis develops a holistic framework of guiding principles to address these different problem areas. Without a set of widely shared guiding principles, regulatory approaches will remain fragmentary and weak. Policies in one area may miss or even create problems in others. The aim is not to impose a priori solutions on complex problems, but to offer scholars, regulators, and stakeholders a heuristic framework to guide policy thinking.
... Governments and legal teams are equally capable of generating the necessary jurisdictions and profiting as well. Artyushina's (2020) review of the new European data strategy in MIT Technology Review, makes precisely this point. With the European Commission's (EC) (2020) data governance strategy, "…the EU will become an active player in facilitating the use and monetization of its citizens' personal data. ...
Article
Person-specific biomedical data are now widely collected, but its sharing raises privacy concerns, specifically about the re-identification of seemingly anonymous records. Formal re-identification risk assessment frameworks can inform decisions about whether and how to share data; current techniques, however, focus on scenarios where the data recipients use only one resource for re-identification purposes. This is a concern because recent attacks show that adversaries can access multiple resources, combining them in a stage-wise manner, to enhance the chance of an attack’s success. In this work, we represent a re-identification game using a two-player Stackelberg game of perfect information, which can be applied to assess risk, and suggest an optimal data sharing strategy based on a privacy-utility tradeoff. We report on experiments with large-scale genomic datasets to show that, using game theoretic models accounting for adversarial capabilities to launch multistage attacks, most data can be effectively shared with low re-identification risk.
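For readers unfamiliar with the framework this abstract describes, below is a minimal, illustrative Python sketch of a two-player Stackelberg re-identification game: the data publisher (leader) commits to a sharing strategy, the adversary (follower) best-responds with full knowledge of that choice, and the leader picks the strategy that maximizes data utility net of expected privacy loss. The strategies, probabilities, and payoff values are hypothetical placeholders, not figures from the cited paper.

```python
# Illustrative Stackelberg re-identification game (all numbers are made up).
from dataclasses import dataclass

@dataclass
class Outcome:
    strategy: str           # leader's (publisher's) sharing strategy
    adversary_attacks: bool
    publisher_payoff: float

# Leader strategies: more generalization lowers re-identification risk
# but also lowers the utility of the shared data.
STRATEGIES = {
    # name: (data_utility, probability_of_re-identification_if_attacked)
    "share_raw":        (1.00, 0.60),
    "generalize_light": (0.85, 0.25),
    "generalize_heavy": (0.60, 0.05),
    "withhold":         (0.00, 0.00),
}

ATTACK_COST = 0.10    # adversary's cost of mounting an attack
ATTACK_GAIN = 1.00    # adversary's gain from a successful re-identification
PRIVACY_LOSS = 2.00   # publisher's penalty if a record is re-identified

def adversary_best_response(p_success: float) -> bool:
    """Follower moves second with full knowledge of the leader's strategy:
    attack only if expected gain exceeds the attack cost."""
    return p_success * ATTACK_GAIN > ATTACK_COST

def publisher_payoff(utility: float, p_success: float, attacked: bool) -> float:
    """Publisher values utility but pays for expected re-identification."""
    expected_loss = p_success * PRIVACY_LOSS if attacked else 0.0
    return utility - expected_loss

def solve_stackelberg() -> Outcome:
    """Backward induction: the leader anticipates the adversary's best
    response to each option and picks the payoff-maximizing strategy."""
    best = None
    for name, (utility, p_success) in STRATEGIES.items():
        attacked = adversary_best_response(p_success)
        payoff = publisher_payoff(utility, p_success, attacked)
        if best is None or payoff > best.publisher_payoff:
            best = Outcome(name, attacked, payoff)
    return best

if __name__ == "__main__":
    result = solve_stackelberg()
    print(f"Optimal sharing strategy: {result.strategy}")
    print(f"Adversary attacks: {result.adversary_attacks}")
    print(f"Publisher payoff: {result.publisher_payoff:.2f}")
```

Under these made-up numbers, heavy generalization deters the attack entirely and ends up dominating lighter protection; surfacing that kind of privacy-utility tradeoff is what the game-theoretic analysis is designed to do.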
Book
The book emphasises the importance of implementing cybersecurity curricula in the university environment, in order to prepare future experts in the field. The book brings together the contributions of professors, PhD students, researchers, experts and trainers, for the development of a reference framework for education, research and cooperation in the field of cybersecurity.