Conference Paper

Outline of a Novel Approach for Identifying Ethical Issues in Early Stages of AI4EO Research

Abstract

In the EU, expert groups have done a great deal of work in compiling numerous “ethics guidelines” for AI. However, recent academic research suggests that these guidelines are not practically useful for academic researchers. Making ethically mindful choices at very early stages of research can help reduce delays and expenses. It can also permit more efficient development of beneficial applications to help solve real-world problems or accomplish the United Nations Sustainable Development Goals (UN SDGs). To support early identification of ethical issues in AI4EO research, this article recommends a novel approach to classifying and identifying ethical issues, based on Eastern and Western philosophical thought and existing theories of ethics.
Outline of a Novel Approach for Identifying Ethical Issues in Early Stages of AI4EO Research
Mrinalini Kochupillai ¹
I Introduction
Academic research and literature
discussing ethical issues in Earth Observation (EO)
or Remote Sensing (RS) research are scant. Yet,
ethical concerns, particularly those linked to
privacy [1, 2], explainability and bias [3, 4], are
growing in relevance as AI and Machine Learning
(ML) models are adopted to study and analyze
petabytes of EO/RS data (hereinafter referred to as
“AI4EO research”). AI/ML models have been used
in EO and RS sciences for decades [5]. However,
ethical issues take center stage as the resolution of
EO/RS data increases rapidly, and as newer sources
of data are fused with EO/RS data to achieve better
results at lower costs and greater speeds.
Nevertheless, not all ethical issues can be identified in the present, partly because of rapid technological evolution and an almost blind focus on innovation as an end in itself [8], and partly because of uncertainties inherent in AI4EO research methods, analysis and results [6]. Real-world application of research findings also gives rise to uncertainties vis-à-vis ethical impact.
In the EU, expert groups have done a great deal of work in compiling numerous “ethics guidelines”² for AI. However, recent academic research [7, 8] and surveys conducted by the author suggest that these guidelines are not practically useful for academic researchers, particularly when they are engaged in “fundamental” or “application agnostic” research.

¹ Affiliation: Data Science in Earth Observation, Technical University of Munich, Willy-Messerschmitt-Str. 1, 82024 Taufkirchen/Ottobrunn, Germany. Acknowledgement: The work is funded by the German Federal Ministry of Education and Research (BMBF) in the framework of the international future AI lab “AI4EO -- Artificial Intelligence for Earth Observation: Reasoning, Uncertainties, Ethics and Beyond” (Grant number: 01DD20001). The author would like to thank Ms. Julia Köninger for excellent research assistance.

² At the outset, it is necessary to distinguish between general ethics guidelines for researchers (i.e. guidelines on how to conduct research, e.g. taking informed consent from research participants/interviewees) and technology-specific ethics guidelines (i.e. guidelines on broader ethical issues to be avoided or kept in mind when deploying research results and applications from any technology for general or specific real-world use). In this article, “ethics guidelines” refers to the latter.
Yet, making ethically mindful choices at
very early stages of research can help reduce delays
and expenses. It can also permit more efficient
development of beneficial applications to help
solve real-world problems or accomplish the
United Nations Sustainable Development Goals
(UN SDGs). To support early identification of
ethical issues in AI4EO research, this article
recommends a novel approach to classifying and
identifying ethical issues, based on Eastern and
Western philosophical thought and existing
theories of ethics. Based on this approach, a step-wise guide can be created that helps researchers
identify major ethical issues under concrete and
comprehensive “action categories/stages.” These
categories can also be easily modified based on the
specific technology and use-case at hand.
II A novel approach to identify and
classify ethical issues in emerging technologies
Ethical approaches emerging from the
Western world can be classified broadly into two
categories, namely, those based on (i) the
consequences of an action (also known as the consequentialist or teleological approach) and (ii) the nature or duty of the human actor (also known as the deontological approach). Scholars also
recommend approaches that combine both
categories [13, 14]. Within these approaches,
several theories of ethics have evolved over time, including ethical egoism [15], utilitarianism [16], the theory of rights and justice [17], virtue ethics [18], feminist ethics [19], etc.
Going beyond duties and consequences,
Eastern philosophy also places considerable
emphasis on the interlinked concepts of Karma
(action, its causes and consequences) and Dharma
(personal duty, social responsibility and religious
dictates). Considerable emphasis is also placed on
“our three powers” (Shaktis): Power of (i)
intention/desire/will (“Icchha Shakti”), (ii) action
(“Kriya Shakti”), and (iii) knowledge/human values
(“Gyan Shakti”). In Indian mythology, symbols associated with Lord Muruga vividly describe the link among these three powers: knowledge and human values are meaningless if they are unaccompanied by appropriate will, i.e. the desire to act, and constructive action. Similarly, will and action must be guided by knowledge and human values for long-term benefits to oneself and society [17], [18].
In AI4EO research, the plethora of uncertainties makes it necessary to look beyond consequences, at duties (social responsibilities), intentions, knowledge/human values, as well as concrete, present action: indeed, in several (if not most) instances, the (long-term) consequences of
conducting research, and of translating research
results to concrete products or services, will remain
unknown until a much later date. A focus on
consequentialist theories may not provide
meaningful guidance at early stages of research.
Being aware of one’s broad duties, while
very significant, may also not be adequate. Indeed,
guidelines for research ethics, which provide, inter
alia, methodological and procedural guidelines on
how or how not to conduct research (e.g. obtaining
informed consent from research participants,
avoiding plagiarism), comprehensively enumerate
the duties of researchers. Fundamental principles of
ethics, e.g. honesty, integrity and fairness, are also
basic duties or characteristics of any ethical
researcher. However, these duties are abstract and
inadequate to guide researchers working with
emerging technologies like AI4EO.
In this situation, five practical steps can
provide concrete guidance to identify, flag and
avoid ethical issues in early stages of research.
These five steps are not linear, but rather iterative
and circular, spanning the entire research duration:
First, scrutinizing and becoming aware of the concrete (long- and short-term) will/intention/desire driving the research. Second, determining whether these intentions/desires are aligned with one’s own conscience and with human rights, and contribute constructively to the UN SDGs or to overcoming concrete societal problems. Third, identifying and categorizing the specific actions taken as part of the research (“action categories/stages”). Fourth, ensuring that every action (e.g. each step of the research) is aligned with and aimed at accomplishing the expressed desire/will. Finally, checking that each action is in harmony with human conscience and a universally acceptable set of human values.
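These five steps lend themselves to being kept as a living, structured record rather than a one-off checklist. The following minimal Python sketch is an illustration only, not part of the proposed approach, and all field and function names are hypothetical; it shows one way a researcher might document intentions, alignment notes, action stages and flagged concerns, and revisit them iteratively over the research duration:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class EthicsCheck:
    """A lightweight, revisable record for the five iterative steps (illustrative only)."""
    intentions: List[str] = field(default_factory=list)        # Step 1: long- and short-term will/intention/desire
    alignment_notes: List[str] = field(default_factory=list)   # Step 2: conscience, human rights and UN SDG alignment
    action_stages: List[str] = field(default_factory=list)     # Step 3: concrete "action categories/stages"
    flagged_concerns: List[str] = field(default_factory=list)  # Steps 4-5: misalignments or value conflicts to revisit

    def flag(self, action: str, concern: str) -> None:
        """Record an action whose alignment with intentions or values is in doubt."""
        self.flagged_concerns.append(f"{action}: {concern}")


# The record is revisited iteratively, not filled in once, as the research evolves.
record = EthicsCheck()
record.intentions.append("Map major farming systems to support sustainable agriculture")
record.alignment_notes.append("Intended use appears consistent with the UN SDGs; no conflict identified so far")
record.action_stages.append("Selection of data source and research dataset")
record.flag("Fusion of EO data with social-media data", "possible privacy and consent concerns")
print(record.flagged_concerns)
```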
At present, unlike human rights, which are consolidated in the Universal Declaration of Human Rights (UDHR), universal human values have not been consolidated into a single, comprehensive international document. While a few scattered efforts in this direction are notable [19-21], in the absence of a globally accepted document, practical ethical issues identified in existing literature, the UDHR, as well as the UN SDGs can contribute significantly to steering research endeavors towards ethically aware goals, objectives (intention/will) and actions.
While broad principles of ethics are, arguably,
already well-known to researchers, what is missing
is a method to practically use or apply these
principles in various stages of research, especially
when dealing with large quantities of data from
various sources. The next section provides a step-by-step guide for researchers.
III Identifying Ethical Issues in
AI4EO research: Step-wise guidance
In the broader contexts of AI/ML (and, to
some extent, EO/RS) research, the most significant
ethical issues identified in published literature
include privacy, bias, uncertainty and error, stigma,
(national) security, data veracity, accountability,
integrity, honesty and fairness [2, 4, 8, 23-25].
In order to make these abstract ethical
issues/concerns more practically useful for
researchers at early stages of research, the first step,
as discussed above, is for the researcher to identify
and list (e.g. in the form of a mind map) the
intention(s)/will/desire(s) driving her research.
These can be classified as long- and short-term goals and objectives of the research itself (e.g. “I want to study the rate of rural-urban migration to support policy making that prevents or reverses the trend”, “I want to create a map of the major farming systems of Europe to support the expansion of sustainable/organic agriculture”), and as personal goals of the researcher (e.g. “I want to complete my Ph.D. and obtain a doctoral title”, “I want to get promoted”, “I want to help the poor”). A detailed, honest and
comprehensive list will already help the researcher
get an understanding of her underlying motivations
and whether they stand the test of her personal
conscience and objective standards found in
published literature, the UDHR and the UN SDGs.
Thereafter, the researcher needs to identify
broad “action categories/stages”. These categories
can be at a general or macro level, or at a specific
or micro level. For example, preliminary
discussions with academic researchers revealed
that research in this field can be broadly categorized
under the following research lifecycle stages (“AI4EO research lifecycle stages”) (List 1): (i) Fundamental Research (application agnostic); (ii) Engineering Research (application agnostic, but with a possible focus on specific targets); (iii) Applied Research (application oriented); and (iv) Application Specific Research (industrial research and innovation for marketable applications).
Second, in AI4EO research, large amounts of
data are typically necessary in any and all of the
above research lifecycle stages. Accordingly, the
most important research steps can be further
categorized under detailed “AI4EO data lifecycle”
stages (List 2), such as: (i) Selection of data source and research dataset; (ii) Selection of data analysis method/approach; (iii) Data labelling; (iv) Data analysis; (v) Data storage; and (vi) Data publication or dissemination.
It is necessary to note that Lists 1 and 2 are
merely illustrations of possible “action
categories/stages” in any AI4EO (or other)
research. Academic researchers can and should modify the above two lists, or create other lists based on their specific field of research. What is important is that each “action” step is documented and aligned with the identified desires/intentions of the research. Finally, it is recommended that AI4EO researchers (regardless of the research lifecycle stage at which their work is situated) work closely with ethics researchers to identify the broad categories of ethical issues that are most likely to arise and need to be flagged or, where possible, addressed by technological solutions at the earliest possible stage, and at all relevant “action stages”.
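To illustrate how such collaboration with ethics researchers could be supported in practice, the short Python sketch below pairs the example data lifecycle stages of List 2 with candidate ethical issues named in the literature surveyed above. The pairing itself is an assumption made for illustration, not a mapping prescribed by this article, and both the stages and the issue lists should be adapted to the specific use-case at hand:

```python
# Purely illustrative pairing of the example data lifecycle stages (List 2) with
# candidate ethical issues named in the literature cited above. The mapping is an
# assumption for illustration only; each team should build its own with ethics researchers.
CANDIDATE_ISSUES = {
    "Selection of data source and research dataset": ["privacy", "bias", "data veracity", "(national) security"],
    "Selection of data analysis method/approach": ["bias", "uncertainty and error", "accountability"],
    "Data labelling": ["bias", "stigma", "integrity"],
    "Data analysis": ["uncertainty and error", "honesty", "fairness"],
    "Data storage": ["privacy", "(national) security"],
    "Data publication or dissemination": ["privacy", "stigma", "accountability", "fairness"],
}


def checklist(stage: str) -> list:
    """Return candidate ethical issues to discuss with ethics researchers for a given action stage."""
    return CANDIDATE_ISSUES.get(stage, [])


if __name__ == "__main__":
    for stage in CANDIDATE_ISSUES:
        print(f"{stage}: discuss {', '.join(checklist(stage))}")
```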
References
1. Yemeni, Z., et al., Reliable spatial and temporal data redundancy reduction approach for WSN. Computer Networks, 185: p. 107701.
2. Thomson, D., et al., Critical Commentary: Need for an Integrated Deprived Area “Slum” Mapping System (IDeAMapS) in LMICs. Preprints; MDPI: Basel, Switzerland, 2019.
3. Kuffer, M., et al., The Scope of Earth-Observation to Improve the Consistency of the SDG Slum Indicator. ISPRS International Journal of Geo-Information, 2018. 7(11): p. 428.
4. Harris, R., Reflections on the value of ethics in relation to Earth observation. International Journal of Remote Sensing, 2013. 34(4).
5. Estes, J.E., C. Sailer, and L.R. Tinney, Applications of artificial intelligence techniques to remote sensing. The Professional Geographer, 1986. 38(2): p. 133-141.
6. Sudmanns, M., et al., Big Earth data: disruptive changes in Earth observation data management and analysis? International Journal of Digital Earth, 2020. 13(7): p. 832-850.
7. Lary, D.J., et al., Machine learning applications for earth observation. Earth observation open science and innovation, 2018. 165.
8. Carrillo, M.R., Artificial intelligence: from ethics to law. Telecommunications Policy, 2020: p. 101937.
9. Hagendorff, T., The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 2020: p. 1-22.
10. Thomas, A.J., Deontology, Consequentialism and Moral Realism. Minerva: An Internet Journal of Philosophy, 2015. 19.
11. Biagetti, M.T., A. Gedutis, and L. Ma, Ethical Theories in Research Evaluation: An Exploratory Approach. Scholarly Assessment Reports, 2020.
12. Rachels, J., Ethical egoism. Ethical theory: an anthology, 2012. 14: p. 193.
13. Shaw, W., Contemporary ethics: Taking account of utilitarianism. 1999.
14. Brady, F.N. and C.P. Dunn, Business meta-ethics: An analysis of two theories. Business Ethics Quarterly, 1995: p. 385-398.
15. Annas, J., Virtue ethics. The Oxford handbook of ethical theory, 2006: p. 515-536.
16. Jaggar, A.M., Feminist ethics. The Blackwell guide to ethical theory, 2013: p. 433-460.
17. Sri Sri Ravi Shankar (2021), The Three Shaktis (Powers), available here: https://wisdom.srisriravishankar.org/dynamics-of-three-shaktis/ (accessed 20 January 2021).
18. Handelman, D., Myths of Murugan: Asymmetry and Hierarchy in a South Indian Puranic Cosmology. History of Religions (University of Chicago Press), 1987. 27(2): p. 133-170, at p. 141.
19. InterAction Council, A universal declaration of human responsibilities. Proposed by the InterAction Council, 1997. 1.
20. Saith, A., From universal values to millennium development goals: Lost in translation. Development and Change, 2006. 37(6): p. 1167-1199.
21. Shankar, S.S.R. (2007), Universal Declaration of Human Values, draft text published by the International Association for Human Values, available here: https://www.iahv.org/us-en/wp-content/themes/IAHV/PDF/Universal-Declaration-of-Human-Values.pdf (accessed 07 December 2020).
22. Wagenaar, D., et al., Invited perspectives: How machine learning will change flood risk and impact assessment. Natural Hazards and Earth System Sciences, 2020. 20(4): p. 1149-1161.
23. Ananny, M. and K. Crawford, Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 2018. 20(3): p. 973-989.
24. Nemorin, S. and O.H. Gandy, Exploring Neuromarketing and Its Reliance on Remote Sensing: Social and Ethical Concerns. International Journal of Communication, 2017. 11: p. 4824-4844.
25. Liu, J.Z., et al., Rethinking big data: A review on the data quality and usage issues. ISPRS Journal of Photogrammetry and Remote Sensing, 2016. 115: p. 134-142.
26. Zhao, B., et al., Spoofing in Geography: Can We Trust Artificial Intelligence to Manage Geospatial Data?, in Spatial Synthesis. 2020, Springer. p. 325-338.