António Barata Lopes’s research while affiliated with Agrupamento Escolas MDS and other places


Publications (23)


Fig. 1 (a) Left panel: Learning gradients for social learners (SL, black line) and counterfactual learners (CT, red line) for the N-person SH game. If the learning gradient is positive (negative), the fraction of cooperators will tend to increase (decrease). Empty and full circles represent the finite-population analogues of unstable and stable fixed points, respectively. Right panel: Stationary distribution of the Markov process created by the transition probabilities pictured in the left panel; it characterizes the prevalence in time of each fraction of cooperators in finite populations. (b) Overall cooperation as a function of the prevalence of individuals resorting to social learning (SL, χ) and counterfactual reasoning (CT, 1 − χ). It shows that only a relatively small prevalence of counterfactual thinking is required to nudge cooperation in an entire population of self-regarding agents. Other parameters: Z = 50, N = 6, F = 5.5, M = N/2 (panel a), c = 1.0, μ = 0.01, β_SL = β_CT = 5.0.
AI Modelling of Counterfactual Thinking for Judicial Reasoning and Governance of Law
  • Chapter
  • Full-text available

December 2023 · 41 Reads · 1 Citation

Francisco C. Santos · António Barata Lopes

When speaking of moral judgment, we refer to a function of recognizing appropriate or condemnable actions and the possibility of agents choosing between them. Their ability to construct possible causal sequences enables them to devise alternatives in which choosing one implies setting aside others. This internal deliberation requires a cognitive ability, namely that of constructing counterfactual arguments. These serve not just to analyse possible futures, being prospective, but also to analyse past situations, by imagining the gains or losses resulting from alternatives to the actions actually carried out, given evaluative information subsequently known. Counterfactual thinking is thus a prerequisite for AI agents concerned with Law cases, in order to pass judgement, and, additionally, for evaluating the ongoing governance of such AI agents. Moreover, given the wide cognitive empowerment that counterfactual reasoning affords the human individual, namely in making judgments, the question arises of how the presence of individuals with this ability can improve cooperation and consensus in populations of otherwise self-regarding individuals. Our results, using Evolutionary Game Theory (EGT), suggest that counterfactual thinking fosters coordination in collective action problems occurring in large populations, and has limited impact on cooperation dilemmas in which such coordination is not required.
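As a minimal sketch of the type of finite-population model behind Fig. 1 (our illustration, not the authors' code), the following computes the gradient of selection for social learners in a threshold N-person Stag Hunt under pairwise-comparison (Fermi) updating, assuming the standard payoff scheme of this literature and the parameter values listed in the caption (Z = 50, N = 6, F = 5.5, M = N/2, c = 1.0, β = 5.0):

    # Gradient of selection G(j) = T+(j) - T-(j) for an N-person Stag Hunt
    # in a finite population of size Z under Fermi (pairwise-comparison)
    # updating. A sketch with assumed standard payoffs, not the authors' code.
    from math import comb, tanh

    Z, N, F, M, c, beta = 50, 6, 5.5, 3, 1.0, 5.0   # M = N/2, as in Fig. 1

    def payoff_D(k):
        # Defector in a group with k cooperators: a share of the public good
        # is produced only if the cooperator threshold M is met.
        return k * F * c / N if k >= M else 0.0

    def payoff_C(k):
        # A cooperator pays cost c; here k counts cooperators including itself.
        return payoff_D(k) - c

    def avg_payoffs(j):
        # Hypergeometric sampling of N - 1 co-players from a population
        # with j cooperators gives the average payoff of each strategy.
        norm = comb(Z - 1, N - 1)
        fC = sum(comb(j - 1, k) * comb(Z - j, N - 1 - k) * payoff_C(k + 1)
                 for k in range(N)) / norm
        fD = sum(comb(j, k) * comb(Z - j - 1, N - 1 - k) * payoff_D(k)
                 for k in range(N)) / norm
        return fC, fD

    def gradient(j):
        # Positive G(j): cooperation tends to increase, as in Fig. 1a (left).
        if j in (0, Z):
            return 0.0
        fC, fD = avg_payoffs(j)
        return (j / Z) * (1 - j / Z) * tanh(beta * (fC - fD) / 2.0)

    for j in range(0, Z + 1, 5):
        print(f"j = {j:2d}   G(j) = {gradient(j):+.4f}")

The interior roots of G(j) correspond to the empty (unstable) and full (stable) circles of Fig. 1a; a counterfactual learner would replace the imitation step with a comparison against the payoff of the strategy it did not choose.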


Fig.: Prisoners' Dilemma.
Employing AI to Better Understand Our Morals

December 2021 · 97 Reads · 10 Citations

We present a summary of research that we have conducted employing AI to better understand human morality. This summary adumbrates theoretical fundamentals and considers how to regulate the development of powerful new AI technologies. The latter research aims at benevolent AI, with a fair distribution of the benefits associated with the development of these and related technologies, avoiding disparities of power and wealth due to unregulated competition. Our approach avoids the statistical models employed in other approaches to solve moral dilemmas, because these are "blind" to natural constraints on moral agents and risk perpetuating mistakes. Instead, our approach employs, for instance, psychologically realistic counterfactual reasoning in group dynamics. The present paper reviews studies involving factors fundamental to human moral motivation, including egoism vs. altruism, commitment vs. defaulting, guilt vs. non-guilt, apology plus forgiveness, and counterfactual collaboration, among other factors fundamental in the motivation of moral action. These being basic elements in most moral systems, our studies deliver generalizable conclusions that inform efforts to achieve greater sustainability and global benefit, regardless of the cultural specificities of their constituents.



Cognitive Prerequisites: The Special Case of Counterfactual Reasoning

January 2020 · 25 Reads · 9 Citations

When speaking of moral conscience, we are referring to a function of recognizing appropriate or condemnable actions, and the possibility of choice between them. In fact, it would make no sense to talk about morals or ethics if for each situation we had only one possible answer. Morality is justified because the agent can choose among possible actions. The agent's ability to construct possible causal sequences enables them to devise alternatives in which choosing one implies setting aside the other. This type of internal deliberation requires certain cognitive capacities, namely that of constructing counterfactual arguments. These serve not only to analyse possible futures, being prospective, but also to analyse past situations, by imagining the gains or losses resulting from alternatives to the action actually carried out. Compared to social learning, where the subject can only mimic certain behaviours, the construction of counterfactuals is much richer and more fruitful. Thus, for machines to be equipped with effective moral capacity, it is necessary to equip them with the ability to construct and analyse counterfactual situations.
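To make the contrast with social learning concrete, here is a minimal sketch (ours, not the chapter's) of a counterfactual update rule: the agent asks what it would have earned had it acted otherwise, with the rest of the group held fixed, and switches strategy whenever the imagined alternative pays more. The threshold Stag Hunt payoffs below are an assumed example:

    # Counterfactual switching in a threshold N-person Stag Hunt (a sketch;
    # parameter values and names are illustrative).
    N, F, M, c = 6, 5.5, 3, 1.0

    def payoff_D(k):
        # Defector payoff in a group with k cooperators.
        return k * F * c / N if k >= M else 0.0

    def payoff_C(k):
        # Cooperator payoff; k counts cooperators including itself.
        return payoff_D(k) - c

    def wants_to_switch(strategy, others_c):
        # Compare the realized payoff with that of the action NOT taken,
        # holding the other group members' choices fixed.
        actual = payoff_C(others_c + 1) if strategy == "C" else payoff_D(others_c)
        alt = payoff_D(others_c) if strategy == "C" else payoff_C(others_c + 1)
        return alt > actual

    for k in range(N):   # k cooperators among my N - 1 co-players
        print(f"{k} cooperating co-players: "
              f"C {'switches' if wants_to_switch('C', k) else 'stays'}, "
              f"D {'switches' if wants_to_switch('D', k) else 'stays'}")

Note that no observation of other agents is needed: the counterfactual reasoner extracts the coordination threshold from its own imagined alternatives, rather than from mimicry.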


Intelligence and Autonomy in Artificial Agents

January 2020 · 15 Reads

An intelligent agent will, inherently, be an autonomous agent. Assuming this thesis is pertinent, it becomes necessary to clarify the notion of autonomy and its prerequisites. Initially, the difficulties inherent in developing ways of thinking that make it effective must be acknowledged. In fact, most individuals deliberate and decide on concrete aspects of their lives, yet are unable to do so critically enough. This requires a complex set of prerequisites to be met, which we make explicit. Among them is the ability to construct hypothetical counterfactual scenarios, which support the analysis of possible futures, thereby leading the subject to the construction of a non-standard identity of his own preference and choice. In the realm of AI, the notions of genetic algorithms and emergence allow for an engineered approximation of what is viewed as autonomy in humans. Indeed, a machine can follow trial-and-error procedures, finding unexpected solutions to problems by itself or in conjunction with other machines. Though we are mindful of the difficulties inherent in the construction of autonomy, nothing in principle prevents machines from attaining it.
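As a toy illustration of the trial-and-error search the chapter alludes to (ours; every name and number is illustrative), here is a minimal genetic algorithm that evolves bit strings toward a goal it only senses through a fitness score:

    # Minimal genetic algorithm: selection + crossover + mutation.
    # The algorithm never inspects TARGET directly, only fitness scores.
    import random

    random.seed(1)
    TARGET = [1] * 20                  # arbitrary goal, hidden behind fitness()
    POP, GENS, MUT = 30, 60, 0.02

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome):
        return [1 - g if random.random() < MUT else g for g in genome]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
    for _ in range(GENS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP // 2]            # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        population = parents + children

    print("best fitness:", fitness(max(population, key=fitness)))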


Is It Possible to Program Artificial Emotions? A Basis for Behaviours with Moral Connotation?

January 2020 · 28 Reads · 4 Citations

The fact that machines can recognize emotions, or even be programmed with something functionally similar to an emotion, does not mean that they exhibit moral behaviour. The laws defined by Isaac Asimov are of little use if a machine agent has to make decisions in complex scenarios. It must be borne in mind that morality is primarily a group phenomenon. It serves to regulate the relationships among individuals having different motivations regarding the cohesion and benefit of that group. Concomitantly, it moderates expectations about one another. It is necessary to make sure agents do not hide malevolent purposes, and that they are capable of acknowledging errors and of acting accordingly. One must begin somewhere, even without presently possessing knowledge of human morality detailed enough to program ethical machines in full possession of all the functions of justification and argumentation that underlie decisions. This chapter discusses eliciting a moral lexicon shareable by most cultures. The specific case of guilt, and the capacity to recognize it, is present in all cultures; it can be computer-simulated and can be a starting point for exploring this field.
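A schematic sketch of that starting point (our illustration, loosely in the spirit of evolutionary-game studies of guilt, not code from the chapter): an agent whose accumulated guilt after defecting raises its future probability of cooperating, making guilt a simple, simulatable moral signal:

    # Guilt as a state variable: defection accumulates it, cooperation
    # relieves it, and its level biases future choices toward cooperating.
    import random

    random.seed(0)

    class GuiltProneAgent:
        def __init__(self, base_coop=0.2, guilt_weight=0.3):
            self.guilt = 0.0
            self.base_coop = base_coop
            self.guilt_weight = guilt_weight

        def act(self):
            p_coop = min(1.0, self.base_coop + self.guilt_weight * self.guilt)
            if random.random() < p_coop:
                self.guilt = max(0.0, self.guilt - 1.0)   # cooperating alleviates guilt
                return "C"
            self.guilt += 1.0                             # defecting accumulates guilt
            return "D"

    agent = GuiltProneAgent()
    print("".join(agent.act() for _ in range(40)))        # defection becomes self-limiting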


Mutant Algorithms and Super Intelligences

January 2020 · 6 Reads

Interpreting the cognitive development of humanity as a liberating process, we question the role of a conceivable superintelligence. Admittedly, there is nowadays a kind of "arms race" to build ever more flexible algorithms, with a view towards what is now dubbed Artificial General Intelligence (A.G.I.). There is a perception that leading this race will bring immeasurable competitive advantages, but first the technical difficulties of such an undertaking should be noted. Even highly sophisticated systems like AlphaGo are a long way from the general intelligence of a human being. On the other hand, this aim is inscribed in our cultural matrix, and it is as challenging as it is paradoxical. Firstly, in the name of our individual and collective freedom, we killed God. We are currently working hard to find forms of intelligence that surpass us, thereby enabling much more effective social control. Again, the analysis of the shaping mythology of our culture will help us spell out the problem, this time through the magical powers of the goddess Circe.


Breaking Barriers: Symbiotic Processes

January 2020 · 12 Reads

Looking into the nature and evolutionary history of humanity, we find that symbiotic processes are far from new. Not only are they present in biology, but also in our relationship with other animals and with archaic machines, multipliers of force and speed. Only on the basis of an illusion less than 25,000 years old do we perceive ourselves as exclusive holders of the top of the knowledge chain; and in this exclusivity we have always been accompanied by projections of transcendent beings, or by expectations about extra-terrestrials. Until about 25,000 years ago we shared the planet with other hominids, with whom Sapiens had close relationships. Neanderthals have disappeared, but not all of their genes. Thus, the concept of symbiosis occupies centre stage in the understanding of evolutionary cognition. This has not been seen as too problematic. However, as AI evolves, this may change. Such scenarios should bring citizens together for informed debates on the topics and processes of scientific inquiry. Icarus is an example of the abuse of technologies engendered by Daedalus, and symbolizes the risks we face. But it does not have to be so: the problems we face will be better solved with more, never less, properly conducted science and technology.


Aside on Children and Youths, on Identity Construction in the Digital World

January 2020 · 16 Reads

Symbiotic processes have a special impact on children and young people. Born into a world of technological paraphernalia linked to the Internet and the most widespread media, they cannot even conceive of a life in which they would not be permanently connected to the network. Traditional notions associated with privacy are thus questioned without much awareness. The impacts of fragmented information, of the way social networks summon reactivity and immediate emotional response, and of the permanent presence of the other, mediated by a smartphone, a tablet or a computer, have not yet been thought through in all their consequences. However, the phenomena of scattered and diffuse identity, and the emergence of behaviours intolerant of frustration, are becoming increasingly evident. In a world where each person present on the network constitutes within themselves an alter ego (or more), youths have difficulty structuring a solid and differentiating identity, caving in before the multiple pressures to which they are subjected. Perhaps in the near future the notion of building a differentiated identity will not have the pertinence it has today.


Cognition with or Without Emotions?

January 2020 · 22 Reads

Since human moral decisions often have an emotional coating, either through empathy or antipathy, it is necessary to address the possibility of developing emotionally motivated machines. This is often cited as a limit of AI by those who doubt its progress: "Machines will never be emotionally motivated, for they have no endocrine system to endow them with emotions." This thesis systematically ignores the role of emotions in humans, as well as what we should really expect from cognitive machines. First, we will highlight that, from a functional viewpoint, emotions play an anticipatory role, preparing possible responses for an organism. Then we will caution that emotions are not a human particularity, since many other species have them too. Hence, there is nothing to prevent a machine from conjecturing alternative possible answers in advance, using its power to conceive counterfactuals. In the future we will not have fearful or sad computers, but they will contain within them the role that fear or sadness play in our decision-making processes. Even based on what is already achievable today, we will soon have robots capable of interpreting and interacting with human emotions.
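A toy sketch of this functional reading of emotion (ours, not the chapter's): a fear parameter inflates the weight of imagined bad outcomes across counterfactual rollouts, biasing action selection toward caution without the machine "feeling" anything. Actions, payoffs, and probabilities are all illustrative assumptions:

    # Fear as an anticipatory weighting over imagined (counterfactual) outcomes.
    import random

    random.seed(2)

    ACTIONS = {"explore": (10, -8), "stay": (2, 1)}   # (good outcome, bad outcome)

    def anticipated_value(action, fear, n_rollouts=1000, p_bad=0.3):
        # Imagine many alternative outcomes before acting; fear multiplies
        # the subjective weight of the imagined negative ones.
        good, bad = ACTIONS[action]
        total = 0.0
        for _ in range(n_rollouts):
            outcome = bad if random.random() < p_bad else good
            total += outcome * (1 + fear) if outcome < 0 else outcome
        return total / n_rollouts

    for fear in (0.0, 2.0):
        choice = max(ACTIONS, key=lambda a: anticipated_value(a, fear))
        print(f"fear = {fear}: choose {choice}")   # fearless explores, fearful stays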


Citations (5)


... 1. attribution of fault, through an understanding of the impact of the AI system's behaviour and how it is unaligned with the user's needs and priorities; 2. an articulate comprehension of the cause of the behaviour, through an explainable decision process that reveals the discrepancy, thus facilitating 3. adaptation of future behaviour for repair or reform, demonstrating the intent not to repeat the offence (Harland et al., 2023; Pereira et al., 2022). ...

Reference:

AI Apology: A Critical Review of Apology in AI Systems
Employing AI to Better Understand Our Morals

... The first topic of interest, centering on facets of "artificial morality," has seen a rapid rise over the past 10 years. Two recent reviews in the psychological literature took stock of some of the garnered insights (Bonnefon et al., 2024; Ladak et al., 2023), and several other reviews have surveyed some of the core questions and initial answers (Bigman et al., 2019; Malle, 2016; Misselhorn, 2018; Pereira & Lopes, 2020). The range of questions is broad: how to design machines that follow norms and make moral judgments and decisions (Cervantes et al., 2020; Malle & Scheutz, 2019; Tolmeijer et al., 2021) and how humans do and will perceive such (potential) moral machines (Malle et al., 2015; Shank & DeSanti, 2018; Stuart & Kneer, 2021); legal and ethical challenges that come with robotics (Lin et al., 2011), such as challenges posed by social robots (Boada et al., 2021; Salem et al., 2015), autonomous vehicles (Bonnefon et al., 2016; Zhang et al., 2021), autonomous weapons systems (Galliott et al., 2021), and large language models (Harrer, 2023; Yan et al., 2024); deep concerns over newly developed algorithms that perpetuate sexism, racism, or ageism; and tension over the use of robots in childcare, eldercare, and health care, which is both sorely needed and highly controversial (Sharkey & Sharkey, 2010; Sio & Wynsberghe, 2015). ...

Machine Ethics: From Machine Morals to the Machinery of Morality
  • Citing Book
  • January 2020

... Therefore, counterfactual thinking is the activity of "thinking about past possibilities and past or present impossibilities" (Roese, 1997). Counterfactual reasoning, on the other hand, is the process of creating an alternative scenario to the one that occurred and considering its ramifications (Pereira & Machado, 2020). Additionally, it is asserted that a crucial mechanism for explaining adaptive behavior in a changing environment is counterfactual reasoning (Paik et al., 2014; Zhang et al., 2015). ...

Cognitive Prerequisites: The Special Case of Counterfactual Reasoning
  • Citing Chapter
  • January 2020

... Nevertheless, the scientific community is divided. On the one hand, we find a group of researchers who support the development of ethical or moral artificial agents [9], [10], [13], and, on the other, a group of researchers who criticize this approach and consider the development of such agents unviable [14], [15]. However, rather than contributing to this debate, the aim of this article is to offer a review of the progress achieved in this research area to date. ...

Employing AI for Better Understanding Our Morals
  • Citing Chapter
  • January 2020

... Robot ethics has quickly become a burgeoning field, mentioned in 66,796 entries of the ACM Digital Library [3] as of this writing, 19,449 since 2020. On the other hand, science and engineering have attempted to actually develop such robots, or moral machines more generally [5,7,25,69,86]. Machines with social-moral capacities would advance the prospects of robots succeeding in human communities, but the challenges are enormous. The science of machine morality is itself quite young, and the technical demands on such capacities are considerable [86]. ...

Is It Possible to Program Artificial Emotions? A Basis for Behaviours with Moral Connotation?
  • Citing Chapter
  • January 2020