About
43 Publications · 16,400 Reads
1,051 Citations
Introduction
I am a tenure-track associate professor in computer science at the Universitat Politècnica de Catalunya·BarcelonaTech (UPC). My research focuses on the techniques and tools needed to design, implement, and deploy intelligent autonomous systems in ways that ensure their trustworthiness. Over the past decade, I have been a PI and co-PI on projects related to eXplainable AI (XAI), the assessment and auditing of AI systems, and interdisciplinary perspectives on AI.
Additional affiliations
April 2021 - April 2024
January 2019 - March 2021
January 2016 - November 2017
Education
September 2015 - June 2019
September 2011 - June 2015
Publications (43)
Many high-level ethics guidelines for AI have been produced in the past few years. It is time to work towards concrete policies within the context of existing moral, legal and cultural values, say Andreas Theodorou and Virginia Dignum.
Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains which directly affect human well-being. However, if AI is to improve people’s lives, then people must be able to trust it, by being able to understand what the system is doing and why. Although transparency is often seen as the requirement i...
Although not a goal universally held, maintaining human-centric artificial intelligence is necessary for society’s long-term stability. Fortunately, the legal and technological problems of maintaining control are actually fairly well understood and amenable to engineering. The real problem is establishing the social and political will for assigning...
Variable autonomy equips a system, such as a robot, with mixed initiatives such that it can adjust its independence level based on the task’s complexity and the surrounding environment. Variable autonomy solves two main problems in robotic planning: the first is the problem of humans being unable to keep focus in monitoring and intervening during r...
Machine learning (ML) has become popular in the industrial sector as it helps to improve operations, increase efficiency, and reduce costs. However, deploying and managing ML models in production environments can be complex. This is where Machine Learning Operations (MLOps) comes in. MLOps aims to facilitate this deployment and management process....
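To make the MLOps idea above concrete, here is a minimal, hypothetical sketch of the train / package / monitor loop such a pipeline automates; the function names, toy dataset, and drift threshold are illustrative assumptions, not taken from the paper.

```python
# Minimal MLOps-style sketch: train a model, persist it with reference metrics,
# and flag it for retraining when live performance degrades. Illustrative only.
import json
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def train_and_package(path="model.joblib"):
    """Train a model and persist it together with its reference metrics."""
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    metrics = {"accuracy": accuracy_score(y_te, model.predict(X_te))}
    joblib.dump(model, path)
    with open("metrics.json", "w") as f:
        json.dump(metrics, f)
    return model, metrics


def monitor(model, X_new, y_new, reference_accuracy, tolerance=0.05):
    """Return True when live accuracy drops below the reference by more than the tolerance."""
    live_accuracy = accuracy_score(y_new, model.predict(X_new))
    return live_accuracy < reference_accuracy - tolerance


if __name__ == "__main__":
    model, metrics = train_and_package()
    X_live, y_live = make_classification(n_samples=200, n_features=10, random_state=1)
    print("retrain needed:", monitor(model, X_live, y_live, metrics["accuracy"]))
```

In a real production setting these steps would be versioned, automated in CI/CD, and tied to retraining triggers rather than a manual check, which is the kind of process the MLOps work above addresses.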
Sociotechnical requirements shape the governance of artificially intelligent (AI) systems. In an era where embodied AI technologies are rapidly reshaping various facets of contemporary society, their inherent dynamic adaptability presents a unique blend of opportunities and challenges. Traditional regulatory mechanisms, often designed for static --...
Variable autonomy equips a system, such as a robot, with mixed initiatives such that it can adjust its independence level based on the task's complexity and the surrounding environment. Variable autonomy solves two main problems in robotic planning: the first is the problem of humans being unable to keep focus in monitoring and intervening during r...
This paper surveys the Variable Autonomy (VA) robotics literature that considers two contributory elements to Trustworthy AI: transparency and explainability. These elements should play a crucial role when designing and adopting robotic systems, especially in VA where poor or untimely adjustments of the system’s level of autonomy can lead to errors...
Understanding when and why to apply any given eXplainable Artificial Intelligence (XAI) technique is not a straightforward task. There is no single approach that is best suited for a given context. This paper aims to address the challenge of selecting the most appropriate explainer given the context in which an explanation is required. For AI expla...
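As a rough illustration of what "selecting an explainer given the context" can look like, the sketch below maps a hand-rolled context description to a candidate XAI technique; the context fields and rules are assumptions for illustration only and do not reproduce the paper's selection framework.

```python
# Illustrative rule-based mapping from an explanation context to a candidate
# XAI technique. Fields and rules are invented for this sketch.
from dataclasses import dataclass


@dataclass
class ExplanationContext:
    model_type: str      # e.g. "tree_ensemble", "neural_network", "linear"
    data_modality: str   # e.g. "tabular", "image", "text"
    audience: str        # e.g. "developer", "end_user", "auditor"
    scope: str           # "local" (one decision) or "global" (whole model)


def select_explainer(ctx: ExplanationContext) -> str:
    """Return the name of a plausible explainer for the given context."""
    if ctx.model_type == "linear":
        return "model coefficients (intrinsically interpretable)"
    if ctx.scope == "global" and ctx.model_type == "tree_ensemble":
        return "global feature importance"
    if ctx.scope == "local" and ctx.data_modality == "tabular":
        return "SHAP or LIME (local attribution)"
    if ctx.data_modality == "image":
        return "saliency / attribution maps"
    if ctx.audience == "end_user":
        return "counterfactual explanation"
    return "surrogate model"


print(select_explainer(ExplanationContext("tree_ensemble", "tabular", "end_user", "local")))
```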
Machine Learning Operations (MLOps) involves software development practices for Machine Learning (ML), including data management, preprocessing, model training, deployment, and monitoring. While MLOps have received significant interest, much less work has been published addressing MLOps in industrial production settings lately, particularly if solu...
Machine learning (ML) has become a popular tool in the industrial sector as it helps to improve operations, increase efficiency, and reduce costs. However, deploying and managing ML models in production environments can be complex. This is where Machine Learning Operations (MLOps) comes in. MLOps aims to streamline this deployment and management pr...
Several high profile incidents that involve Artificial Intelligence (AI) have captured public attention and increased demand for regulation. Low public trust and attitudes towards AI reinforce the need for concrete policy around its development and use. However, current guidelines and standards rolled out by institutions globally are considered by...
Using different Levels of Autonomy (LoA), a human operator can vary the extent of control they have over a robot's actions. LoAs enable operators to mitigate a robot's performance degradation or limitations in its autonomous capabilities. However, LoA regulation and other tasks may often overload an operator's cognitive abilities. Inspired by v...
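A toy sketch of LoA regulation, purely for illustration: the levels, signals, and thresholds below are invented and are not the mechanism studied in the paper.

```python
# Toy controller that adjusts a robot's Level of Autonomy (LoA) from task
# performance and operator workload signals. Thresholds are illustrative.
from enum import IntEnum


class LoA(IntEnum):
    TELEOPERATION = 0   # human drives the robot directly
    SHARED = 1          # human and robot share control
    AUTONOMY = 2        # robot plans and acts on its own


def regulate_loa(current: LoA, task_performance: float, operator_workload: float) -> LoA:
    """Lower autonomy when the robot struggles; raise it when the operator is overloaded."""
    if task_performance < 0.4 and current > LoA.TELEOPERATION:
        return LoA(current - 1)   # robot is degrading: give the human more control
    if operator_workload > 0.8 and current < LoA.AUTONOMY:
        return LoA(current + 1)   # operator is overloaded: let the robot take over more
    return current


loa = LoA.SHARED
for perf, load in [(0.9, 0.9), (0.3, 0.2), (0.5, 0.5)]:
    loa = regulate_loa(loa, perf, load)
    print(loa.name)
```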
This paper presents the AWKWARD architecture for the development of hybrid agents in Multi-Agent Systems. AWKWARD agents can have their plans re-configured in real time to align with social role requirements under changing environmental and social circumstances. The proposed hybrid architecture makes use of Behaviour Oriented Design (BOD) to develo...
Data and autonomous systems are taking over our lives; from healthcare to smart homes, very few aspects of our day-to-day lives are not permeated by them. The advances enabled by these technologies are limitless. However, with these advantages come challenges. As these technologies encompass more and more aspects of our lives, we are forget...
Developed and used responsibly, Artificial Intelligence (AI) is a force for global sustainable development. Given this opportunity, we expect that many of the existing guidelines and recommendations for trustworthy or responsible AI will provide explicit guidance on how AI can contribute to the achievement of the United Nations' Sustainable Developm...
Artificial Intelligence (AI), as a highly transformative technology, takes on a special role as both an enabler of and a threat to the UN Sustainable Development Goals (SDGs). AI ethics and emerging high-level policy efforts stand at the pivot point between these outcomes but are barred from effect due to the abstraction gap between high-level values and respons...
This paper presents the AWKWARD agent architecture for the development of agents in Multi-Agent Systems. AWKWARD agents can have their plans re-configured in real time to align with social role requirements under changing environmental and social circumstances. The proposed hybrid architecture makes use of Behaviour Oriented Design (BOD) to develop...
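The following sketch gives a rough feel for a BOD-style drive collection whose priorities can be re-ordered at run time, in the spirit of re-configuring plans to meet social role requirements; all names and the priority scheme are illustrative assumptions rather than the AWKWARD implementation.

```python
# Minimal sketch of a prioritised drive collection that can be re-configured at
# run time. Structure and names are illustrative, not the AWKWARD architecture.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Drive:
    name: str
    priority: int
    triggered: Callable[[Dict], bool]   # precondition over the world state
    action: Callable[[], str]


@dataclass
class DriveCollection:
    drives: List[Drive] = field(default_factory=list)

    def step(self, world: Dict) -> str:
        """Fire the highest-priority drive whose precondition holds."""
        for drive in sorted(self.drives, key=lambda d: d.priority, reverse=True):
            if drive.triggered(world):
                return drive.action()
        return "idle"

    def reconfigure(self, name: str, new_priority: int) -> None:
        """Re-prioritise a drive, e.g. when social role requirements change."""
        for drive in self.drives:
            if drive.name == name:
                drive.priority = new_priority


plan = DriveCollection([
    Drive("help_teammate", 1, lambda w: w["teammate_in_trouble"], lambda: "assist"),
    Drive("pursue_goal", 2, lambda w: True, lambda: "work on own task"),
])
print(plan.step({"teammate_in_trouble": True}))   # own task wins initially
plan.reconfigure("help_teammate", 3)              # social role demands cooperation
print(plan.step({"teammate_in_trouble": True}))   # now assisting takes priority
```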
Different people have different perceptions about artificial intelligence (AI). It is extremely important to bring together all the alternative frames of thinking—from the various communities of developers, researchers, business leaders, policymakers, and citizens—to properly start acknowledging AI. This article highlights the ‘fruitful collaborati...
As Artificial Intelligence (AI) continues to expand its reach, the demand for human control and the development of AI systems that adhere to our legal, ethical, and social values also grows. Many (international and national) institutions have taken steps in this direction and published guidelines for the development and deployment of responsible AI...
This paper describes IEEE P7001, a new draft standard on transparency of autonomous systems. In the paper, we outline the development and structure of the draft standard. We present the rationale for transparency as a measurable, testable property. We outline five stakeholder groups: users, the general public and bystanders, safety certification...
This paper is preoccupied with the following question: given a (possibly opaque) learning system, how can we understand whether its behaviour adheres to governance constraints? The answer can be quite simple: we just need to "ask" the system about it. We propose to construct an investigator agent to query a learning agent -- the suspect agent -- to...
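A minimal sketch of the general "ask the system" idea, under invented assumptions: an investigator probes an opaque decision policy with random queries and estimates how often a governance constraint is broken. The toy policy and constraint below are not from the paper.

```python
# Illustrative investigator loop: query a suspect decision system and check its
# answers against a governance constraint. Policy and constraint are toys.
import random


def suspect_policy(applicant: dict) -> bool:
    """Opaque decision system under investigation (toy loan approval)."""
    return applicant["income"] > 30000 and applicant["age"] < 60


def violates_constraint(applicant: dict, decision: bool) -> bool:
    """Governance constraint: changing age alone must not change the outcome."""
    flipped = dict(applicant, age=30 if applicant["age"] >= 60 else 70)
    return decision != suspect_policy(flipped)


def investigate(n_queries: int = 1000, seed: int = 0) -> float:
    """Estimate how often the suspect agent's behaviour breaks the constraint."""
    rng = random.Random(seed)
    violations = 0
    for _ in range(n_queries):
        applicant = {"income": rng.randint(10000, 80000), "age": rng.randint(18, 80)}
        if violates_constraint(applicant, suspect_policy(applicant)):
            violations += 1
    return violations / n_queries


print(f"estimated violation rate: {investigate():.2%}")
```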
The right to contest a decision with consequences on individuals or the society is a well-established democratic right. Despite this right also being explicitly included in GDPR in reference to automated decision-making, its study seems to have received much less attention in the AI literature compared, for example, to the right for explanation. Th...
In their efforts to tackle the COVID-19 crisis, decision makers are considering the development and use of smartphone applications for contact tracing. Even though these applications differ in technology and methods, there is an increasing concern about their implications for privacy and human rights. Here we propose a framework to evaluate their s...
The right to contest a decision with consequences on individuals or the society is a well-established democratic right. Despite this right also being explicitly included in GDPR in reference to automated decision-making, its study seems to have received much less attention in the AI literature compared, for example, to the right for explanation. Th...
(Preprint arXiv:2005.08370). In their efforts to tackle the COVID-19 crisis, decision makers are considering the development and use of smartphone applications for contact tracing. Even though these applications differ in technology and methods, there is an increasing concern about their implications for privacy and human rights. Here we propose a...
Artificially intelligent agents are increasingly used for morally-salient decisions of high societal impact. Yet, the decision-making algorithms of such agents are rarely transparent. Further, our perception of, and response to, morally-salient decisions may depend on agent type; artificial or natural (human). We developed a Virtual Reality (VR) si...
Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains, such as criminal justice and consumer finance, which directly affect human well-being. However, if AI is to improve people's lives, then people must be able to trust AI, which means being able to understand what the system is doing and why....
Autonomous robots can be difficult to design and understand. Designers have difficulty decoding the behaviour of their own robots simply by observing them. Naive users of robots similarly have difficulty deciphering robot behaviour simply through observation. In this paper we review relevant robot systems architecture, design, and transparency lite...
As robot reasoning becomes more complex, debugging becomes increasingly hard based solely on observable behaviour, even for robot designers and technical specialists. Similarly, non-specialist users have difficulty creating useful mental models of robot reasoning from observations of robot behaviour. The EPSRC Principles of Robotics mandate that ou...
The EPSRC's Principles of Robotics advises the implementation of transparency in robotic systems; however, research related to AI transparency is in its infancy. This paper introduces the reader to the importance of having transparent inspection of intelligent agents and provides guidance for good practice when developing such agents. By considering...
Deciphering the behaviour of intelligent others is a fundamental characteristic of our own intelligence. As we interact with complex intelligent artefacts, humans inevitably construct mental models to understand and predict their behaviour. If these models are incorrect or inadequate, we run the risk of self-deception or even harm. This paper repor...
As robot reasoning becomes more complex, debugging becomes increasingly hard based solely on observable behaviour, even for robot designers and technical specialists. Similarly, non-specialist users find it hard to create useful mental models of robot reasoning solely from observed behaviour. The EPSRC Principles of Robotics mandate that our artefa...