AI & SOCIETY (2021) 36:561–572
https://doi.org/10.1007/s00146-020-01009-8
OPEN FORUM
Legal dilemmas ofEstonian artificial intelligence strategy:
inbetweenofe‑society andglobal race
TanelKerikmäe1· EvelinPärn‑Lee1
Received: 27 January 2020 / Accepted: 16 June 2020 / Published online: 1 July 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020
Abstract
Estonia has successfully created a digital society within the past two decades. It is best known for its eGovernment achievements, but it is also home to four unicorn start-ups. While the state aims to attract tech investment with its e-Residency program and has recently started to invest in protecting national IP and safeguarding data from cybercrime by applying blockchain technology and creating its "digital embassy" in Luxembourg, emerging technologies, above all applications of artificial intelligence but also the internet of things, have raised the question of legal regulation and standardization. The dilemma, however, seems to be that new technologies such as artificial intelligence are a far more overwhelming phenomenon than e-governance; presumably, before legal standards are decided, political and economic strategies that go beyond e-governance should be set.
Keywords: Artificial intelligence · Robot judge · Automated decision making · Liability · EU competition law · EU state aid
1 Introduction
Estonia is best known as an example of a developed e-governance system. Being a small state, Estonians are proud of having built in the last 25 years the world's most advanced digital society,1 and there is a social consensus that this direction has benefited all members of society equally. Hardly anyone could have imagined in 1994, when the first national IT strategy was drafted,2 that by the year 2020 almost every Estonian citizen3 would have an e-ID and nearly 70% of them would use it in their everyday life. Already 20 years ago, to reduce administrative bureaucracy, the Estonian Government began to use e-solutions in its decision-making procedures. Over the last two decades (i.e., 2000–2020), many public services have been made available online, starting with filing tax returns or voting in nationwide elections and ending with registering the sale and purchase of a vehicle in the vehicle register4 or renewing a driving license. Nowadays, every Estonian public service has some e-solution component attached to it. Although some concerns have been expressed as to whether the Estonian IT tiger5 is getting tired,6 in general Estonia's ranking in surveys concerning …
This paper was written on behalf of the project no. 20-27227S "The Advent, Pitfalls and Limits of Digital Sovereignty of the European Union", funded by the Czech Science Foundation (GAČR).
* Evelin Pärn-Lee
evelin.parn-lee@taltech.ee
Tanel Kerikmäe
tanel.kerikmae@taltech.ee
1 TalTech Law School, Tallinn University of Technology (TalTech), Ehitajate tee 5, 19086 Tallinn, Estonia
1 See https://e-estonia.com/.
2 A document that was much inspired by EU and US developments in the field, such as the 1994 "Bangemann report" and the Commission's 1993 White Paper on Growth, Competitiveness, and Employment. See also Tarmo Kalvet, Eesti infoühiskonna arengud alates 1990. aastatest [Developments of the Estonian information society since the 1990s], PRAXISe Toimetised Nr 30, August 2007.
3 The percentage of ID owners is reported to be 99%.
4 Only in very special cases do you need to be physically present to re-register a vehicle.
5 In 1996 Estonia launched a program called "Tiger's Leap" with the aim of investing in network infrastructure and hardware.
6 See for example Kalvet, T. (2007) The Estonian Information Society Developments Since the 1990s, Working Paper No. 29, PRAXIS Centre for Policy Studies, Tallinn; Drechsler, W., Backhaus, J.G., Burlamaqui, L., Chang, H-J., Kalvet, T., Kattel, R., Kregel, J. and Reinert, E.S. (2006) 'Creative destruction management in central and eastern Europe: meeting the challenges of the techno-economic paradigm shift', in Kalvet, T. and Kattel, R. (Eds.): Creative Destruction Management: Meeting the Challenges of the Techno-Economic Paradigm Shift, PRAXIS Centre for Policy Studies, Tallinn; Tiits, M., Kattel, R., Kalvet, T. and Tamm, D. (2008) 'Catching up, forging ahead or falling behind? Central and eastern European development in 1990–2005', Innovation: The European Journal of Social Science Research, Vol. 21, No. 1, pp. 65–85.
... We delve into two concrete examples at different implementation phases to elucidate these transformative benefits. Estonia, recognizing the imperative to attract foreign investment and talent, strategically harnessed a regulatory sandbox to refine its e-Residency program [21]. This approach enabled real-time user feedback, refined the program's functionality and user experience, and fostered collaboration among various government agencies involved in the e-Residency program. ...
Chapter
This chapter explores how Regulatory Sandboxes can promote innovation and transform public services for citizens through co-production. They can be centered on citizens’ interests and rights and aim to enhance participation, inclusion, and personal data protection in the public sector. We argue that Regulatory Sandboxes are adaptable and forward-thinking regulatory solutions and frameworks that can be useful beyond digital transformation to regulate a range of issues such as fake news, privacy invasion, discriminatory biases, and the use of Artificial Intelligence for governments. Regulatory Sandboxes allow for rapid experimentation, iteration, and learning in a controlled environment, free from the burden of existing regulations. Citizens could be involved in this process by providing input on the types of innovations they would like to see tested in the Sandbox and participating in the testing process. Using relevant literature and case examples, we showcase the advantages and disadvantages of co-production practices, highlighting some initiatives that improved public services. The chapter provides insights into the benefits and trials of using Regulatory Sandboxes as a form of co-production to support policymakers and public managers in designing and implementing more effective public services that meet citizens’ needs.
... Methods for ensuring societal accountability include the following: (1) regulation and standardization: creating regulations and standards for AI system design and use can help hold these systems accountable to society, safeguarding the rights and interests of all stakeholders [73]; (2) public–private partnerships: fostering collaboration among government agencies, private-sector companies, and other entities to promote the societal accountability of AI and ML systems [74]. ...
Article
Full-text available
Background: The use of social media for disseminating health care information has become increasingly prevalent, making the expanding role of artificial intelligence (AI) and machine learning in this process both significant and inevitable. This development raises numerous ethical concerns. This study explored the ethical use of AI and machine learning in the context of health care information on social media platforms (SMPs). It critically examined these technologies from the perspectives of fairness, accountability, transparency, and ethics (FATE), emphasizing computational and methodological approaches that ensure their responsible application.
Objective: This study aims to identify, compare, and synthesize existing solutions that address the components of FATE in AI applications in health care on SMPs. Through an in-depth exploration of computational methods, approaches, and evaluation metrics used in various initiatives, we sought to elucidate the current state of the art and identify existing gaps. Furthermore, we assessed the strength of the evidence supporting each identified solution and discussed the implications of our findings for future research and practice. In doing so, we made a unique contribution to the field by highlighting areas that require further exploration and innovation.
Methods: Our research methodology involved a comprehensive literature search across PubMed, Web of Science, and Google Scholar. We used strategic searches through specific filters to identify relevant research papers published since 2012 focusing on the intersection and union of different literature sets. The inclusion criteria were centered on studies that primarily addressed FATE in health care discussions on SMPs; those presenting empirical results; and those covering definitions, computational methods, approaches, and evaluation metrics.
Results: Our findings present a nuanced breakdown of the FATE principles, aligning them where applicable with the American Medical Informatics Association ethical guidelines. By dividing these principles into dedicated sections, we detailed specific computational methods and conceptual approaches tailored to enforcing FATE in AI-driven health care on SMPs. This segmentation facilitated a deeper understanding of the intricate relationship among the FATE principles and highlighted the practical challenges encountered in their application. It underscored the pioneering contributions of our study to the discourse on ethical AI in health care on SMPs, emphasizing the complex interplay and the limitations faced in implementing these principles effectively.
Conclusions: Despite the existence of diverse approaches and metrics to address FATE issues in AI for health care on SMPs, challenges persist. The application of these approaches often intersects with additional ethical considerations, occasionally leading to conflicts. Our review highlights the lack of a unified, comprehensive solution for fully and effectively integrating FATE principles in this domain. This gap necessitates careful consideration of the ethical trade-offs involved in deploying existing methods and underscores the need for ongoing research.
Chapter
This study explores the transformative role of artificial intelligence (AI) in digital governance by examining four diverse case studies: Saudi Arabia, Estonia, Singapore, and Spain. Using a case study methodology, the research investigates how AI has been integrated into public administration to enhance efficiency and address societal challenges. Findings reveal significant variations in approaches to AI integration, influenced by each country's socio-economic contexts and governance priorities. Key theoretical implications include the need to balance technological innovation with socio-political considerations, while practical insights emphasize the scalability of interoperable systems and the critical role of transparency in fostering public trust. Despite notable advancements, challenges such as ensuring inclusivity, addressing algorithmic bias, and safeguarding data privacy persist, underscoring the need for ongoing policy adaptation and citizen engagement.
Chapter
Digital technologies have significantly disrupted and transformed the social organization and behaviour of individuals. Digital technologies have not only become the medium connecting individuals, but humans also remain connected to them.
Article
Full-text available
The development of artificial intelligence (AI) has offered transformative opportunities in various sectors, including the justice system. In the Indonesian context, implementing AI as a judge promises the potential to address the integrity and efficiency challenges faced by the justice system, which has been tarnished by corruption cases and vulnerability to subjective bias. By harnessing AI's ability to process large amounts of data quickly and objectively, it is hoped that decision-making will become more transparent, bias will be reduced, and the potential for corruption minimized. This study discusses the implementation of AI in the role of judge in the Indonesian criminal justice system, examining the efficiency, objectivity, and transparency offered by AI, and exploring the synergy between AI and human judges in improving the quality of legal services. Through theoretical analysis, the study underlines the importance of developing standards and protocols, transparency, training and education, and periodic evaluation in integrating AI. Cooperation between AI and human judges not only enriches judicial decision-making but also preserves the humanistic core of the law. The study shows that, with a careful and ethical approach, integrating AI into the justice system can strengthen justice, improve efficiency, and ensure that technology supports rather than replaces the wisdom of human judges, opening a new era of fairer, more efficient, and more dignified adjudication.
Article
Full-text available
Cybercrime poses a growing threat to individuals, businesses, and governments in the digital age. This research aims to conduct a comprehensive study of the legal frameworks developed by international organizations to combat cybercrime, providing a comparative analysis of their approaches and highlighting strengths, weaknesses, and areas for improvement. The study employs a qualitative research methodology, utilizing a doctrinal approach to examine primary and secondary legal sources for data analysis. The results reveal the ongoing efforts of the United Nations and other international bodies to establish a unified approach to combating cybercrime through conventions on Cybercrime. The research emphasizes the importance of harmonizing laws, fostering international cooperation, and adapting to evolving cyber threats while maintaining a balance between security and individual rights. Recommendations include strengthening legal frameworks, enhancing public-private partnerships, and investing in capacity building and technical assistance for developing countries. The study concludes by highlighting the critical importance of comprehensive and harmonized cybercrime legislation in the global fight against cybercrime and calls for continued efforts to address the challenges posed by this ever-evolving threat.
Conference Paper
Full-text available
This paper presents a comprehensive analysis of the implications as well as the adoption of blockchain technologies in government processes from 2019 to 2024. Blockchain, originally designed for cryptocurrency transactions, has evolved into a transformative tool with profound socio-economic implications. The study delves into the foundational concepts of blockchain, emphasizing decentralization and immutability, which underpin transparency and accountability in governance. Smart contracts, enabled by blockchain, promise to streamline bureaucratic processes and reduce errors. Through a systematic literature review of 78 articles from Scopus, the study identifies key trends, including a slight decline in research output in recent years. Citation analysis was conducted with the help of VOSviewer. The distribution of publications across journals reflects a diverse range of research interests, underscoring the interdisciplinary nature of blockchain research. Keyword co-occurrence analysis reveals thematic connections, emphasizing blockchain's role in digitalization, security, and governance. The discussion underscores blockchain's potential to revolutionize government operations and calls for further research to explore its broader applications. Overall, the study contributes to the ongoing discourse on leveraging emerging technologies for transparent, accountable, and efficient governance.
Article
Full-text available
Automated process control has been used for a long time. Innovation and achievements in information technology have made it possible to use automation in State governance. Algorithm-based automated decisions are an integral part of the concept of e-Government. Automated decisions are becoming more and more prevalent in modern EU society. Using automated decisions in public administration is a challenge for Administrative Law, because it has to evolve and keep up with the usage of new technologies, keep the legal balance between the cost-efficiency and operational flexibility of the State in general, and at the same time ensure the protection of the rights of individuals in each Member State and in the EU as a whole. Estonia is an EU Member State and its public sector uses automated decisions, but there are no direct legal provisions regarding what an automated decision is, what the conditions for issuing them are, what safeguards exist to avoid the violation of individuals' rights, etc. The right to issue an automated decision is based only on the authorisation norm stipulated in a specific act regulating the field of activity of the administrative authority. The Estonian Unemployment Insurance Fund is one of the administrative authorities that issues automated decisions in its field of activity. The aim of this paper is to examine whether the automated decisions used by the Estonian Unemployment Insurance Fund comply with the general principles of administrative procedure and the EU rules on data protection, to identify aspects where legal adjustment is needed, and to propose legislative amendments. The paper is based on the analysis of relevant scientific books, articles and legal acts, supported by relevant case law and other sources.
Article
Full-text available
The current digital society has witnessed important developments in robotics and artificial intelligence (AI) research being applied to several spheres of life in order to address a multitude of issues. While there are numerous studies on human-robot collaboration on low- and high-level tasks with a focus on robot development, the current study focuses on organizational issues arising from human-robot co-working in education and research, with particular reference to the research and education network (REN) for universities as leverage for human capital development. The chapter identifies critical issues in the current REN and tries to solve them through human-robot collaboration from an organizational and pedagogical normalization perspective. The research describes an AI-powered instructional robotics application and a development process in which current society can participate and which can impact AI pedagogic literacy using deep learning, introducing organizational robotics research studies with an emphasis on education and human capital expansion.
Article
Full-text available
The exponential development that the automotive industry has begun to experience with regard to autonomous vehicles is not an isolated fact, but must be understood within the current state of technological development in robotics. The fourth industrial revolution is breaking into everyday life, and with it the possibility of causing harm disrupts the theoretical structures of civil liability. In this paper we set out to explore this new scenario, presenting the reader with the first theoretical analyses while asking whether the structures of civil liability are adequate for the new reality.
Article
Full-text available
The United Arab Emirates (UAE) is the first country in the world to appoint a State Minister for Artificial Intelligence (AI). The UAE is embracing AI in society at the governmental level, which is leading to a new generation of digital government (which we are labeling Gov. 3.0). This paper argues that the decision to embrace AI will lead to positive impacts on society, including businesses, organizations and individuals, as well as on the AI industry itself. This paper discusses the societal impacts of AI at a macro (country-wide) level.
Article
Full-text available
This article analyses the potential benefits and drawbacks of artificial intelligence (AI). It argues that the EU should become a leading force in AI development. As a goal that captures the public imagination and mobilises a variety of actors, the EU should develop mission-based innovations that focus on using this technological leadership to solve the most pressing societal problems of our time whilst avoiding potential dangers and risks. This leadership could be achieved either by adapting the EU’s available instruments to focus on AI development or by designing new ones. Be it seeking a visionary future for AI or addressing concerns about it, progress should always be driven with the human-centred perspective in mind, that is, one that seeks to augment human intelligence and capacity, and not to supersede it.
Article
Full-text available
A recent issue of a popular computing journal asked which laws would apply if a self-driving car killed a pedestrian. This paper considers the question of legal liability for artificially intelligent computer systems. It discusses whether criminal liability could ever apply; to whom it might apply; and, under civil law, whether an AI program is a product that is subject to product design legislation or a service to which the tort of negligence applies. The issue of sales warranties is also considered. A discussion of some of the practical limitations that AI systems are subject to is also included.
Article
Full-text available
The greater the artificial intelligence of bots, robots and androids, the greater their autonomy and, consequently, the less they will depend on manufacturers, owners and users. It is a fact that the new generation of robots will coexist with humans, and legislation must adapt to and regulate questions of great legal importance, such as: Who assumes responsibility for the acts or omissions of intelligent robots? What is their legal status? Should they have a special regime of rights and obligations? What solutions will we give to ethical conflicts related to their behaviour? And, finally, should minimum organizational, technical and legal measures be established to minimize the security risks to which the technology is exposed, assuming that its development should be seen not as a threat but as an opportunity, and that robots may be interconnected? Regulation is a key aspect of a more secure and peaceful society, and it must therefore be adapted both to how humanity is today and to how we want it to be in the future.
Article
Full-text available
As the capabilities of artificial intelligence systems improve, it becomes important to constrain their actions to ensure their behaviour remains beneficial to humanity. A variety of ethical, legal and safety-based frameworks have been proposed as a basis for designing these constraints. Despite their variations, these frameworks share the common characteristic that decision-making must consider multiple potentially conflicting factors. We demonstrate that these alignment frameworks can be represented as utility functions, but that the widely used Maximum Expected Utility (MEU) paradigm provides insufficient support for such multiobjective decision-making. We show that a Multiobjective Maximum Expected Utility paradigm based on the combination of vector utilities and non-linear action-selection can overcome many of the issues which limit MEU's effectiveness in implementing aligned artificial intelligence. We examine existing approaches to multiobjective artificial intelligence, and identify how these can contribute to the development of human-aligned intelligent agents.
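The contrast this abstract draws, between collapsing a vector utility with fixed linear weights (standard MEU) and selecting actions non-linearly over multiple objectives, can be illustrated with a minimal sketch. Everything in the snippet below (the three candidate actions, the three objectives, the weights, and the safety threshold) is an illustrative assumption, not the paper's own formalism:

```python
import numpy as np

# Illustrative expected-utility vectors for three candidate actions.
# Columns are hypothetical objectives: task reward, safety, legal compliance.
expected_utilities = np.array([
    [0.9, 0.2, 0.8],   # action A: high reward, poor safety
    [0.6, 0.7, 0.7],   # action B: balanced across objectives
    [0.4, 0.9, 0.9],   # action C: conservative, low reward
])
actions = ["A", "B", "C"]

# Standard MEU: collapse each vector into a scalar with fixed linear weights.
weights = np.array([0.5, 0.25, 0.25])
scalar_scores = expected_utilities @ weights
meu_choice = actions[int(np.argmax(scalar_scores))]

# A simple non-linear alternative: reject any action whose worst objective
# falls below a threshold, then maximise the weighted score over the rest.
threshold = 0.5
acceptable = expected_utilities.min(axis=1) >= threshold
constrained_scores = np.where(acceptable, scalar_scores, -np.inf)
mo_choice = actions[int(np.argmax(constrained_scores))]

print(f"Linear MEU selects action {meu_choice}")                  # A: reward dominates
print(f"Thresholded multiobjective selects action {mo_choice}")   # B: only action clearing the threshold
```

Under the linear weighting the high-reward but low-safety action wins, whereas the per-objective threshold steers the choice to the balanced action, which is the kind of constraint-sensitive behaviour the abstract argues plain MEU struggles to support.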
Preprint
Accepted in Vanderbilt Journal of Entertainment & Technology Law
There is a pervading sense of unease that artificially intelligent machines will soon radically alter our lives in ways that are still unknown. Advances in AI technology are developing at an extremely rapid rate as computational power continues to grow exponentially. Even if existential concerns about AI do not materialise, there are enough concrete examples of problems associated with current applications of artificial intelligence to warrant concern about the level of control that exists over developments in AI. Some form of regulation is likely necessary to protect society from risks of harm. However, advances in regulatory capacity have not kept pace with developments in new technologies including AI. This is partly because regulation has become decentered; that is, the traditional role of public regulators such as governments commanding regulation has been dissipated and other participants including those from within the industry have taken the lead. Other contributing factors are the dwindling of resources in governments on the one hand and the increased power of technology companies on the other. These factors have left the field of AI development relatively unregulated. Whatever the reason, it is now more difficult for traditional public regulatory bodies to control the development of AI. In the vacuum, industry participants have begun to self-regulate by promoting soft law options such as codes of practice and standards. We argue that, despite the reduced authority of public regulatory agencies, the risks associated with runaway AI require regulators to begin to participate in what is largely an unregulated field. In an environment where resources are scarce, governments or public regulators must develop new ways of regulating. This paper proposes solutions to regulating the development of AI ex ante. We suggest a two-step process: first, governments can set expectations and send signals to influence participants in AI development. We adopt the term nudging to refer to this type of influencing. Second, public regulators must participate in and interact with the relevant industries. By doing this, they can gather information and knowledge about the industries, begin to assess risks and then be in a position to regulate those areas that pose most risk first. To conduct a proper risk analysis, regulators must have sufficient knowledge and understanding about the target of regulation to be able to classify various risk categories. We have proposed an initial classification based on the literature that can help to direct pressing issues for further research and a deeper understanding of the various applications of AI and the relative risks they pose.
Article
The concept of different levels of automation (LOAs) has been pervasive in the automation literature since its introduction by Sheridan and Verplanck. LOA taxonomies have been very useful in guiding understanding of how automation affects human cognition and performance, with several practical and theoretical benefits. Over the past several decades a wide body of research has been conducted on the impact of various LOAs on human performance, workload, and situation awareness (SA). LOA has a significant effect on operator SA and level of engagement that helps to ameliorate out-of-the-loop performance problems. Together with other aspects of system design, including adaptive automation, granularity of control, and automation interface design, LOA is a fundamental design characteristic that determines the ability of operators to provide effective oversight and interaction with system autonomy. LOA research provides a solid foundation for guiding the creation of effective human–automation interaction, which is critical for the wide range of autonomous and semiautonomous systems currently being developed across many industries.
Article
In discussions of the regulation of autonomous systems, private law — specifically, company law — has been neglected as a potential legal and regulatory interface. As one of us has suggested previously,1 there are several possibilities for the creation of company structures that might provide functional and adaptive legal "housing" for advanced software, various types of artificial intelligence, and other programmatic systems and organizations — phenomena that we refer to here collectively as autonomous systems, for ease of reference. In particular, this prior work introduces the notion that an operating agreement or private entity constitution (such as a corporation's charter or a partnership's operating agreement) can adopt, as the acts of a legal entity, the state or actions of arbitrary physical systems. We call this the algorithm-agreement equivalence principle. Given this principle and the present capacities of existing forms of legal entities, companies of various kinds can serve as a mechanism through which autonomous systems might engage with the legal system. This paper considers the implications of this possibility from a comparative and international perspective. Our goal is to suggest how, under U.S., German, Swiss and U.K. law, company law might furnish the functional and adaptive legal "housing" for an autonomous system — and, in turn, we aim to inform systems designers, regulators, and others who are interested in, encouraged by, or alarmed at the possibility that an autonomous system may "inhabit" a company and thereby gain some of the incidents of legal personality. We do not aim here to be normative. Instead, the paper lays out a template suggesting how existing laws might provide a potentially unexpected regulatory framework for autonomous systems, and explores some legal consequences of this possibility. We do suggest that these considerations might spur others to consider the relevant provisions of their own national laws with a view to locating similar legal "spaces" that autonomous systems could "inhabit."