Expectation: Personalized Explainable Artificial
Intelligence for Decentralized Agents with
Heterogeneous Knowledge
Davide Calvaresi4[0000-0001-9816-7439], Giovanni Ciatto1[0000-0002-1841-8996],
Amro Najjar2[0000-0001-7784-6176], Reyhan Aydoğan3[0000-0002-5260-9999],
Leon Van der Torre2[0000-0003-4330-3717], Andrea Omicini1[0000-0002-6655-3869],
and Michael Schumacher4[0000-0002-5123-5075]
1Alma Mater Studiorum – Università di Bologna, Cesena, Italy
{giovanni.ciatto, andrea.omicini}
2University of Luxembourg, Luxembourg
{amro.najjar, leon.vandertorre}
3Özyeğin University, Istanbul, Turkey
4University of Applied Sciences and Arts Western Switzerland HES-SO, Switzerland
{davide.calvaresi, michael.schumacher}
Abstract. Explainable AI (XAI) has emerged in recent years as a set of
techniques and methodologies to interpret and explain machine learning
(ML) predictors. To date, many initiatives have been proposed. Never-
theless, current research efforts mainly focus on methods tailored to spe-
cific ML tasks and algorithms, such as image classification and sentiment
analysis. However, explanation techniques are still embryonic, and they
mainly target ML experts rather than heterogeneous end-users. Further-
more, existing solutions assume data to be centralised, homogeneous, and
fully/continuously accessible—circumstances seldom found altogether in
practice. Arguably, a system-wide perspective is currently missing.
The project named “Personalized Explainable Artificial Intelligence for
Decentralized Agents with Heterogeneous Knowledge” (Expectation)
aims at overcoming such limitations. This manuscript presents the overall
objectives and approach of the Expectation project, focusing on the
theoretical and practical advance of the state of the art of XAI towards
the construction of personalised explanations in spite of decentralisation
and heterogeneity of knowledge, agents, and explainees (both human and virtual).
To tackle the challenges posed by personalisation, decentralisation, and
heterogeneity, the project fruitfully combines abstractions, methods, and
approaches from the multi-agent systems, knowledge extraction / injec-
tion, negotiation, argumentation, and symbolic reasoning communities.
Keywords: Multi-agent systems · eXplainable AI · Chist-Era IV · Personalisation · Decentralisation · Expectation
1 Background and Motivations
In recent decades, data-driven decision-making processes have increasingly
influenced strategic choices, for both human and virtual decision makers.
The application domains of machine learning (ML) algorithms are broadening [1,2]:
ranging from finance to healthcare, ML supports humans in making
informed decisions based on the information buried within enormous amounts
of data. However, the most effective ML methods are inherently opaque, meaning
that it is hard for humans (if possible at all) to grasp the reasoning hidden
in their predictions (so-called black boxes). To mitigate the issues arising from
such opaqueness, several techniques and methodologies aiming at inspecting ML
models and predictors have been proposed under the eXplainable Artificial Intel-
ligence (XAI) umbrella [3,4] (e.g., feature importance estimators, rule lists, and
surrogate trees [5]). Such tools enable humans to understand, inspect, analyse –
and therefore trust – the operation and outcomes of AI systems effectively.
The many XAI-related initiatives proposed so far constitute the building
blocks for making tomorrow’s intelligent systems explainable and trustable. How-
ever, to date, the ultimate goal of letting intelligent systems provide not only
valuable recommendations but also motivations and explanations for their sug-
gestions – possibly, interactively – is still unachieved. Indeed, current research
efforts focus on specific methods and algorithms, often tailored to single ML
tasks—e.g. classification and, in particular, image classification. For instance,
virtually all approaches proposed so far target supervised learning, and in par-
ticular, classification tasks [6,3,4]—and many of them are tailored on neural
networks [7]. In other words, there is still a long way to generality [8].
Moreover, while existing XAI solutions do an excellent job at inspecting ML
algorithms, current interpretations/explanations provide valuable insights
accessible only to human experts, entirely neglecting the need for producing more
broadly accessible or personalised explanations that everybody could understand.
Given their social nature, explanations should rather be interactive
and tailored to the explainee's cognitive capabilities and background knowledge
to be effective [9,10].
To complicate this matter, existing XAI solutions assume data to be cen-
tralised, homogeneous, and fully/continuously available for operation [8]. Such
circumstances rarely occur in real-world scenarios. For example, data is often
scattered across many administrative domains. Thus, even when carrying
similar information, datasets are commonly structured according to different
schemas—when not lacking structure at all. Privacy and legal constraints com-
plete the picture by making it unlikely for data to be fully available at any
given moment. In other words, the availability of data is more frequently partial
rather than total. Therefore, explainable intelligent systems should be able to
deal with scattering, decentralisation, heterogeneity, and unavailability of data,
rather than requiring data to be centralised and standardised before even start-
ing to process it—which would impose heavy technical, administrative, and legal
constraints on the production of both recommendations and explanations.
Summarising, further research is needed to push XAI towards the construc-
tion of personalised explanations, which can be built in spite of decentralisation
and heterogeneity of information—possibly, out of the interaction among intel-
ligent software systems and human or virtual explainees.
Clearly, tackling personalisation, decentralisation, and heterogeneity entails
challenges from several perspectives. On the one hand, personalisation of expla-
nations must cope with the need for providing human-intelligible (i.e., symbolic)
explanations of incremental complexity, possibly iteratively adapting to the cog-
nitive capabilities, and background knowledge of the users who are receiving
the explanation. This, in turn, requires enabling an interactive explanation process
both within the intelligent systems themselves (i.e., agent to agent) and with
the end-users. On the other hand, decentralisation of data raises the question of
how explanations can be produced or aggregated without letting data cross
administrative borders: collaboration among multiple cross-domain software
entities is therefore imperative. Finally, the challenge of heterogeneity – of
both data and the ML techniques used to mine information out of it – calls for
a lingua franca through which recommendations and explanations can be presented
to the users in intelligible forms.
To address these challenges, the Expectation project has been recently
recommended for funding – along with 11 other projects – as part of the Chist-Era
2019 call concerning "Explainable Machine Learning-based Artificial Intelligence".
The project started on April 1, 2021 and will run until the end of March 2024.
In the remainder of this paper, we discuss how the project plans
to tackle the challenges posed by personalisation, decentralisation, and hetero-
geneity, by fruitfully combining abstractions, methods, and approaches from the
multi-agent systems, knowledge extraction/injection, negotiation, argumenta-
tion, and symbolic reasoning research areas.
2 State of the Art
The generation of personalised explanations for decentralised and heterogeneous
intelligent agents is rooted in several disciplines, including XAI, agreement
technologies, personalisation, and AI ethics.
2.1 Explainable Agency
Neuro-symbolic integration [11,12] aims at bridging the gap between symbolic
and sub-symbolic AI, reconciling the two key branches of AI (connectionist AI –
relying on connectionist networks inspired from human neurons, and symbolic AI
– relying on logic, symbols, and reasoning) [13]. Sub-symbolic techniques (e.g.,
pattern recognition and classification) can offer excellent performance. However,
their outcomes can be biased and difficult to understand (if possible at all).
Seeking trust, transparency, and the possibility to debug sub-symbolic predictors
(so-called black boxes), the XAI community relies on reverse-engineering
models trained on unknown datasets, generating plausible explanations that fit
the outcomes produced by the black box [14]. A typical practice is to train an
interpretable machine learning model (e.g., a decision tree, linear model, or
rule list) on the outcomes of a black box [3,15,16].
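By way of illustration, this query-and-mimic practice can be sketched in a few lines of Python. The opaque predictor, the sampling scheme, and the one-rule surrogate below are deliberately minimal assumptions of ours, not part of any actual XAI toolkit:

```python
# Illustrative sketch only: a hand-made "black box" is queried on sample
# inputs, and a decision stump (a one-rule surrogate) is fitted to mimic
# its outcomes. Neither the predictor nor the data come from the project.
import math

def black_box(x: float) -> int:
    """Opaque predictor: its internals are assumed inaccessible."""
    return 1 if math.tanh(3.0 * (x - 0.4)) > 0 else 0

def fit_stump(xs, ys):
    """Pick the threshold whose rule best reproduces the black box's labels."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(xs)):
        acc = sum((x > t) == (y == 1) for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc  # (threshold, fidelity to the black box)

# Sample the input space and record the black box's answers.
xs = [i / 100 for i in range(101)]
ys = [black_box(x) for x in xs]

threshold, fidelity = fit_stump(xs, ys)
print(f"IF x > {threshold:.2f} THEN class = 1   (fidelity = {fidelity:.2f})")
```

Full-fledged rule extractors, such as those surveyed in [16], follow the same scheme at scale: query the predictor, then fit a symbolic model whose fidelity to the black box is measured explicitly.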
Explainable agents go beyond the mere application of sub-symbolic ML mech-
anisms. Agents can leverage symbolic AI techniques (e.g., logic and planning
languages), which are easier to trace, reason about, understand, debug, and
explain [17]. However, they can still partially rely on ML predictors, making it
necessary to explain their overall behaviour (relying on neuro-symbolic
integration). Endowing virtual agents with explanatory abilities raises trust and
acceptability, and reduces possible failures due to misunderstandings [14,18]. Yet,
it is necessary to consider the user's characterisation (e.g., age, background, and
expertise), the context (e.g., why the user needs the explanation), and the agents'
limits [14].
Built-in explainability is still rare in the literature. Most works merely
provide indicators which "should serve" as an explanation for the human user [3].
To date, such approaches have been unable to produce satisfying human-understandable
explanations. Nevertheless, more recent contributions employ neuro-symbolic
integration to identify the factors influencing human comprehension
of representation formats and reasoning approaches [19].
2.2 Agreement Technologies
Understanding other parties’ interests and preferences is crucial in human social
interaction. It enables the proposal of reasonable bids to resolve conflicts effec-
tively [20,21]. The agreement technologies (AT) [22] literature offers several
techniques to automatically learn, reproduce, and possibly predict an opponent's
preferences and bidding strategies in conflict resolution scenarios [23].
AT are mostly based on heuristics [24,25] and traditional ML methods (e.g.,
decision trees [26,27], Bayesian learning [28,29,30], and concept-based learn-
ing [31,32]) and rely on possibly numerous bid exchanges regulated by nego-
tiation protocols [33]. By exploiting such techniques, machines can negotiate
with humans seamlessly, resolving conflicts with a high degree of mutual
understanding [34]. Nevertheless, in human–agent negotiation the complexity
skyrockets: humans leverage semantics and reasoning (e.g., exploiting similarities
and differences) while learning about their counterparts' preferences and
generating well-targeted offers. Unlike agent–agent negotiation, the number of
bids exchanged between parties is limited due to the nature of human interaction,
and the bids may carry unstructured data. Therefore, classical opponent modeling techniques used
in automated negotiation in which thousands of bids are exchanged may not be
suitable, and additional reasoning to understand humans’ intentions, interests,
arguments, and explanations supporting their proposals is required [35,36]. To
the best of our knowledge, there is no study incorporating exchanged arguments
or explanations into opponent modeling in the agent-based negotiation literature.
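As an illustration of the classical learning techniques mentioned above, a frequency-based opponent model can be sketched as follows; the negotiation domain, issues, and bids are hypothetical examples of ours:

```python
# Illustrative sketch only: a frequency-based opponent model in the spirit
# of classical automated-negotiation heuristics. Values the opponent offers
# repeatedly are assumed to matter more to them.
from collections import Counter, defaultdict

def model_opponent(bids):
    """Estimate per-issue value preferences from the opponent's observed bids."""
    counts = defaultdict(Counter)            # issue -> value -> frequency
    for bid in bids:
        for issue, value in bid.items():
            counts[issue][value] += 1
    model = {}
    for issue, ctr in counts.items():
        modal = ctr.most_common(1)[0][1]     # normalise by the modal frequency
        model[issue] = {v: n / modal for v, n in ctr.items()}
    return model

# Bids observed so far from the opponent in a hypothetical laptop domain.
bids = [
    {"brand": "A", "ram": "16GB"},
    {"brand": "A", "ram": "8GB"},
    {"brand": "A", "ram": "16GB"},
]
model = model_opponent(bids)
print(model)  # values the opponent insists on receive estimates close to 1
```

Such models presuppose many bid exchanges; as argued above, they degrade in human–agent settings, where only a handful of (possibly unstructured) bids are available.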
Without explanations, human users may attribute a wrong state of mind to
agents/robots [18]. Thus, the creation of an effective agent-based explainable
AT for human-agent interactions and the realisation of a common understand-
ing would require the integration of (i) ontology reasoning, (ii) understanding
humans’ preferences/interests by reasoning on any type of information provided
during the negotiation, and (iii) generating well-targeted offers with their sup-
portive explanations or motivations (i.e., why the offer can be acceptable for
their human counterpart). To the best of our knowledge, the state of the art still
needs concrete contributions concerning the three directions mentioned above.
Moreover, although the need for personalised motivations and arguments (e.g.,
considering user expertise, personal attributes, and goals) is well known in the
literature [14], most of the existing works are rather conceptual and do not consider
the overall big picture [37]. Furthermore, no work addresses explanation person-
alisation in the context of heterogeneous systems combining sub-symbolic (e.g.,
neural network) and symbolic (agents/robots) AI mechanisms.
2.3 AI Ethics
Due to the growing adoption of intelligent systems, machine ethics and AI ethics
have received a deserved increasing attention from scientists working in vari-
ous domains [38]. The growing safety, ethical, societal, and legal impacts of AI
decisions are the main reason behind this surge of interest [39]. The literature
on AI ethics distinguishes between implicitly- and explicitly-moral agents. In both
cases, intelligent systems depend on human intervention to distinguish moral from
immoral behaviour. However, on the one hand, implicitly-moral agents are ethically
constrained from behaving immorally via rules set by the human designer [38].
On the other hand, explicitly-moral agents (or agents with functional morality)
are presumed to be able to morally judge themselves (having guidelines or examples
of what is good and bad [38]).
Summarising, AI systems can embed implicit or explicit ethical notions. The
main advantage of implicit AI ethics is simplicity of development and control:
such systems are incapable of unethical behaviour. Nevertheless, this simplicity
implies mirroring the ethical standpoint and perception of the designer.
Explicit-ethics systems claim to autonomously evaluate the normative status of
actions and to reason independently about what they consider unethical, thus being
able to solve normative conflicts. Furthermore, they could bend/violate some rules,
resulting in better fulfilment of overarching ethical objectives. However, the main
shortcoming of these systems is their complexity and possible unexpected behaviour.
3 The Expectation Approach
This section elaborates on the limitations elicited from the state of the art and
the related challenges, and formalises the needed interventions. The six major
limitations identified are:
(L1) Opaqueness of sub-symbolic predictors. Most ML algorithms leverage
a sub-symbolic representation of knowledge that is hard to debug for experts
and hard to interpret for common people. Thus, the compliance of internal
mechanisms and results with ethical principles and regulations cannot be verified.
(L2) Heterogeneity of rule extraction techniques. Extracting general-purpose
symbolic rules from any sort of sub-symbolic predictor can be a difficult task
(if possible, at all). Indeed, the nature of the data and the particular pre-
dictor at hand significantly impact the quality (i.e., the intelligibility) of the
extracted rules. Furthermore, existing techniques extracting rules to produce
explanations mostly leverage structured, low-dimensional data, given the
scarcity of methods supporting more complex data (e.g., images, videos, or
audio). In particular, most of the existing works interpreting sub-symbolic
mechanisms place interpretable models (e.g., decision trees) on top of
the predictors, thereby interpreting (i.e., reconstructing) their outcomes from
the outside, without really mirroring their internal mechanisms.
(L3) Manual amending and integration of heterogeneous predictors. The
update and integration of already pre-trained predictors are usually hand-
crafted and poorly automatable. Moreover, they heavily rely on datasets that
might be available only for a limited period. Therefore, the sustainable,
automatable, and seamless sharing, reuse, and integration of knowledge from
diverse predictors is still an open issue.
(L4) Lack of personalisation. Current XAI approaches are mostly one-way
processes (e.g., interactivity is rarely involved) and do not consider the
explainee's context and background. Thus, the customisation and
personalisation of explanations are still open challenges.
(L5) Tendency of centralisation in data-driven AI. The development of sub-
symbolic predictors usually involves the centralisation of training data in a
single point, which raises privacy concerns. Thus, letting a system composed
of several distributed intelligent components learn without centralising
data is still an open challenge.
(L6) Lack of explanation integration in Agreement Technologies. Current
negotiation and argumentation frameworks mostly leverage well-structured
interactions and clearly defined objectives, resources, and goals. Current AT
are not suitable for providing interactive explanations nor for reconciling
fragmented knowledge. Moreover, although a few works explored more so-
phisticated mechanisms (e.g., adopting semantic similarities via subsumption
to relate alternative values within a single bid), the need for ontological rea-
soning to infer the relationship between several issues – possibly pivotal in
negotiation and argumentation of explanations – is still unmet.
To overcome the limitations mentioned above, Expectation formalises the following objectives:
(O1) To define an agent-based model embedding ML predictors relying on hetero-
geneous (though potentially similar/complementary) knowledge, such as training
datasets, contextual assumptions, and ontologies.
(O2) To design and implement a decentralised agent architecture capable of inte-
grating symbolic knowledge and explanations produced by individual agents.
(O3) To define and implement agent strategies for cooperation, negotiation, and
trust establishment for providing personalised explanations according to the
user context.
(O4) To investigate, implement, and evaluate multi-modal explanation communi-
cation mechanisms (e.g., visual and auditory cues), the role of the type of agent
providing these explanations (e.g., robot, virtual agents), and their role in
explanation personalisation.
(O5) To validate and evaluate the personalised explainability results, as well as the
agent-based XAI approach for heterogeneous knowledge, within the context
of a prototype, focused on food and nutrition recommendations.
(O6) To investigate the specific ethical challenges that XAI is able to meet, and
when and to what extent explicability is legally required by European
regulations, considering the AI guidelines and evaluation protocols published
by national and European institutions (e.g., the Data Protection Impact
Assessment supported by the open-source PIA software and the CNIL
guidelines), as well as recent research on the ethics of recommender systems
w.r.t. values such as transparency and fairness.
[Figure: topics – conflict resolution (i.e., negotiation and argumentation) in agent–user and agent–agent settings; user profiling (i.e., SoM, knowledge); ethics-compliance verification and validation (O6) – linked to the objectives they contribute to, affect, or are implemented in.]
Fig. 1. Expectation's objectives, topics, and respective interconnections.
The aforementioned objectives are clearly interdependent. In particular,
Figure 1 groups and organises the objectives by how they contribute to, affect,
and implement one another.
3.1 Research Method
Although the project is still in its early stage, its roadmap has already been
established. Expectation's research and development activities will be carried
out along two orthogonal dimensions – namely, the intra- and inter-agent ones –
as depicted in Figure 2.
Fig. 2. Main components and interactions of the proposed architecture.
The envisioned scenario for this project assumes a 1-to-1 mapping between
end-users and software agents (cf. Figure 2, rightmost part). Therefore, each
software agent interacts with a single user in order to (i) acquire their contextual
data (cf. blue dashed line in Figure 2), and (ii) provide them with personalised
explanations taking that contextual information into account (cf. green solid line
in Figure 2). This is the purpose of what we call intra-agent explainability.
However, the idea of building agents that provide precise recommendations
by leveraging solely the data acquired from a single user is unrealistic.
Accordingly, we envision agents autonomously debating and negotiating with each
other to mutually complement and globally improve their knowledge, thus gen-
erating personalised and accurate recommendations. Addressing this challenge
is the purpose of what we call inter-agent explainability.
On the one hand, intra-agent explainability focuses on deriving explainable
information at the local level – where contextual information about the user
is most likely available – and on presenting it to the user in a personalised
way. To do so, symbolic knowledge extraction and injection play a crucial role.
The former lets agents fully exploit the predictive performance of conventional
ML-based black-box algorithms while still enabling the production of intelligible
information to be used for building personalised explanations. Conversely, by
injecting symbolic knowledge in ML-based systems, agents will be able to update,
revise, and correct the functioning of ML-based predictors by taking into account
users’ contextual information and feedback.
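A minimal sketch of this intra-agent loop follows, under the assumption that injected knowledge takes the form of symbolic rules overriding an opaque predictor; the recommender, the rule, and the nutrition-flavoured feature names are illustrative inventions of ours:

```python
# Illustrative sketch only: an injected symbolic rule, e.g. derived from
# user feedback or contextual data, overrides the output of an opaque
# predictor, and every answer carries a symbolic justification.

def black_box(meal):
    """Opaque recommender: naively approves any low-calorie meal."""
    return "recommend" if meal["kcal"] < 600 else "avoid"

def make_explainable(predictor, rules):
    """Wrap a predictor so that injected rules take precedence over it."""
    def predict(sample):
        for condition, outcome, reason in rules:
            if condition(sample):                 # injected knowledge wins
                return outcome, f"because {reason}"
        return predictor(sample), "by the underlying model"
    return predict

# A user-specific rule injected from contextual information.
rules = [(lambda m: m["sugar_g"] > 30, "avoid", "your profile limits sugar")]
predict = make_explainable(black_box, rules)

print(predict({"kcal": 450, "sugar_g": 40}))  # the injected rule fires
print(predict({"kcal": 450, "sugar_g": 5}))   # falls back to the black box
```

Actual injection techniques go further, revising the predictor itself rather than merely wrapping it, but the loop is the same: symbolic feedback in, corrected behaviour and intelligible justifications out.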
On the other hand, inter-agent explainability focuses on enabling the agents
to exploit negotiation and argumentation to mutually improve their predictive
capabilities by exchanging the symbolic knowledge they have extracted from
given black boxes. Even in this context, the role of symbolic knowledge extrac-
tion is of paramount importance as it enables exchanges of aggregated knowledge
coming from different ML-predictors—which possibly offer different perspectives
on the problem at hand. To this end, inter-agent explainability requires for-
malising interaction protocols specifying what actions are possible and how to
represent this information so that both parties can understand and interpret it
seamlessly. Moreover, inter-agent interactions will require reasoning mechanisms
handling heterogeneous data received from other agents, including techniques to
detect conflicts and adopt resolution or mitigation policies accordingly.
By combining intra- and inter-agent explainability, Expectation will be able
to tackle decentralisation (of both data and agents), heterogeneity (of both data
and analysis techniques), and users’ privacy simultaneously. Indeed, the proposed
approach does not require data to be centralised to allow training and knowledge
extraction. Therefore, each agent can autonomously take care of the local data it
has access to by exploiting the ML-based analysis technique it prefers, while joint
learning is delegated to decentralised negotiation protocols which only exchange
aggregated knowledge. Users’ personal data is expected to remain close to the
user, while agents are in charge of blending the extracted symbolic knowledge
with the general-purpose background knowledge jointly attained by the multi-
agent systems via negotiation and argumentation. Heterogeneity is addressed
indirectly via knowledge extraction, which provides a lingua franca for knowledge
sharing in the form of logic facts and rules.
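To make the lingua-franca idea concrete, the following sketch shows two agents merging the symbolic rules extracted from their local predictors, flagging conflicts to be settled via negotiation; the rule encoding (frozenset of premises mapped to a conclusion) and the nutrition-flavoured atoms are assumptions of ours, not project artefacts:

```python
# Illustrative sketch only: agents exchange symbolic rules instead of raw
# data, and identical premises with different conclusions are detected as
# conflicts requiring negotiation or argumentation to resolve.

def merge_knowledge(*rule_sets):
    """Merge agents' rule sets; return (agreed rules, conflicting premises)."""
    merged, conflicts = {}, set()
    for rules in rule_sets:
        for premises, conclusion in rules.items():
            if premises in merged and merged[premises] != conclusion:
                conflicts.add(premises)       # to be settled via negotiation
            else:
                merged[premises] = conclusion
    for premises in conflicts:
        merged.pop(premises, None)            # suspend contested rules
    return merged, conflicts

# Rules each agent extracted from its local predictor (no raw data crosses
# administrative borders).
agent_a = {frozenset({"high_sugar"}): "avoid",
           frozenset({"high_fibre"}): "recommend"}
agent_b = {frozenset({"high_sugar"}): "recommend"}   # disagrees with agent_a

merged, conflicts = merge_knowledge(agent_a, agent_b)
print(sorted(next(iter(p)) for p in conflicts))      # premises under dispute
```

In the envisioned architecture, the conflicting premises would not simply be suspended, but fed to the negotiation and argumentation protocols discussed above.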
Notably, knowledge extraction also bridges intra- and inter-agent
explainability, as it allows the extracted knowledge to be exchanged via
negotiation and argumentation protocols—which already rely on the exchange
of symbolic information.
Knowledge injection closes the loop by letting the knowledge acquired via
interaction be used to improve the local data and analytic capabilities of
each individual agent. Finally, the purposes of preserving privacy and complying
with ethical implications are addressed by only allowing agents to share aggre-
gated symbolic knowledge. Moreover, we envision to equip the agents with ethics
reasoning engines combining techniques from both implicit and explicit ethics.
4 Discussion
To test the advancements produced by Expectation, we envision combining
the techniques mentioned above in a proof of concept centred on a topic
that nowadays is more delicate than ever: a nutrition recommender system
fostering a responsible and correct diet. Such a prototype will be tested
and evaluated according to user-subjective dimensions such as understandability, trust,
acceptability, soundness, personalisation, perceived system autonomy, perceived
user autonomy, and fairness. The envisioned agent-based recommender system is
intended to operate as a virtual assistant equipped with personalised explanatory
capabilities. This would make it possible to tackle two dimensions of the quest
for a correct dietary regime: (i) trust and acceptance, and (ii) autonomous
personalisation, education, and explicability. In particular, the user will be provided with
transparent explanations about the recommendation received. The purpose of
the explanations is multi-faceted: (i) educative (i.e., improving the user's
knowledge and raising their awareness about a given topic/suggestion),
(ii) informative (i.e., informing the user about how the system works), and
(iii) motivational (i.e., helping the user understand how personal
characteristics and decisions lead to favourable/adverse outcomes).
Overall, Expectation is expected to have an impact beyond its lifespan. Such an
impact encompasses several aspects and is fourfold.
Impact of theoretical outcomes. Production of mechanisms to extract, com-
bine, explain, negotiate heterogeneous symbolic knowledge as well as coop-
eration and negotiation strategies.
Impact of technological outcomes. Fostering the adoption of intelligent sys-
tems in health and safety-critical domains and inspiring new technology
leveraging innovative multi-modal explanation communication mechanisms.
Impact in application domains. We expect uptake of the project results in
commercial and academic sectors such as eHealth, prevention, wellbeing
applications, and food distribution and catering.
Impact of ethical aspects. Given the sensitive nature of personal data in the
context of the project, the proposed XAI prototype will develop generalisable
mechanisms to ensure compliance, fairness, transparency, and trust.
Acknowledgements. This work has been partially supported by the Chist-Era grant
CHIST-ERA-19-XAI-005, and by (i) the Swiss National Science Foundation
(G.A. 20CH21_195530), (ii) the Italian Ministry for Universities and Research,
(iii) the Luxembourg National Research Fund (G.A. INTER/CHIST/19/14589586 and
INTER/Mobility/19/13995684/DLAl/van), and (iv) the Scientific and Technological
Research Council of Turkey (TÜBİTAK, G.A. 120N680).

References
1. Zubair Md Fadlullah, Fengxiao Tang, Bomin Mao, Nei Kato, Osamu Akashi,
Takeru Inoue, and Kimihiro Mizutani. State-of-the-art deep learning: Evolving
machine intelligence toward tomorrow’s intelligent network traffic control systems.
IEEE Communications Surveys &amp; Tutorials, 19(4):2432–2455, 2017.
2. Dirk Helbing. Societal, economic, ethical and legal challenges of the digital rev-
olution: From big data to deep learning, artificial intelligence, and manipulative
technologies. In Towards Digital Enlightenment. Essays on the Dark and Light
Sides of the Digital Revolution, pages 47–72. Springer, 2019.
3. Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Gian-
notti, and Dino Pedreschi. A survey of methods for explaining black box models.
ACM Computing Surveys, 51(5):93:1–93:42, 2019.
4. Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Ben-
netot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel
Molina, Richard Benjamins, Raja Chatila, and Francisco Herrera. Explainable
artificial intelligence (XAI): Concepts, taxonomies, opportunities and
challenges toward responsible AI. Information Fusion, 58:82–115, 2020.
5. Roberta Calegari, Giovanni Ciatto, and Andrea Omicini. On the integration of
symbolic and sub-symbolic techniques for XAI: A survey. Intelligenza Artificiale,
14(1):7–32, 2020.
6. Filip Karlo Dosilovic, Mario Brcic, and Nikica Hlupic. Explainable artificial intel-
ligence: A survey. In Karolj Skala, Marko Koricic, Tihana Galinac Grbac, Marina
Cicin-Sain, Vlado Sruk, Slobodan Ribaric, Stjepan Gros, Boris Vrdoljak, Mladen
Mauher, Edvard Tijan, Predrag Pale, and Matej Janjic, editors, 41st International
Convention on Information and Communication Technology, Electronics and Mi-
croelectronics (MIPRO 2018), pages 210–215, Opatija, Croatia, 21–25 May 2018.
7. Evren Dağlarli. Explainable artificial intelligence (xAI) approaches and deep meta-
learning models. In Marco Antonio Aceves-Fernandez, editor, Advances and Ap-
plications in Deep Learning, chapter 5. IntechOpen, London, UK, 2020.
8. Giovanni Ciatto, Roberta Calegari, Andrea Omicini, and Davide Calvaresi. To-
wards XMAS: eXplainability through Multi-Agent Systems. In Claudio Savaglio,
Giancarlo Fortino, Giovanni Ciatto, and Andrea Omicini, editors, AI&IoT 2019 –
Artificial Intelligence and Internet of Things 2019, volume 2502 of CEUR Work-
shop Proceedings, pages 40–53. Sun SITE Central Europe, RWTH Aachen Univer-
sity, November 2019.
9. Giovanni Ciatto, Michael I. Schumacher, Andrea Omicini, and Davide Calvaresi.
Agent-based explanations in AI: Towards an abstract framework. In Davide Cal-
varesi, Amro Najjar, Michael Winikoff, and Kary Främling, editors, Explainable,
Transparent Autonomous Agents and Multi-Agent Systems, volume 12175 of Lec-
ture Notes in Computer Science, pages 3–20. Springer, Cham, 2020. 2nd Interna-
tional Workshop, EXTRAAMAS 2020, Auckland, New Zealand, May 9–13, 2020,
Revised Selected Papers.
10. Giovanni Ciatto, Davide Calvaresi, Michael I. Schumacher, and Andrea Omicini.
An abstract framework for agent-based explanations in AI. In Amal El Fal-
lah Seghrouchni, Gita Sukthankar, Bo An, and Neil Yorke-Smith, editors, 19th
International Conference on Autonomous Agents and MultiAgent Systems, pages
1816–1818, Auckland, New Zealand, May 2020. International Foundation for
Autonomous Agents and Multiagent Systems. Extended Abstract.
11. Giuseppe Pisano, Giovanni Ciatto, Roberta Calegari, and Andrea Omicini. Neuro-symbolic computation for XAI: Towards a unified model. In Roberta Calegari, Giovanni Ciatto, Enrico Denti, Andrea Omicini, and Giovanni Sartor, editors, WOA 2020 – 21st Workshop “From Objects to Agents”, volume 2706 of CEUR Workshop Proceedings, pages 101–117, Aachen, Germany, October 2020. Sun SITE Central Europe, RWTH Aachen University. Bologna, Italy, 14–16 September 2020.
12. Benedikt Wagner and Artur d’Avila Garcez. Neural-symbolic integration for fairness in AI. In Andreas Martin, Knut Hinkelmann, Hans-Georg Fill, Aurona Gerber, Doug Lenat, Reinhard Stolle, and Frank van Harmelen, editors, Proceedings of the AAAI 2021 Spring Symposium on Combining Machine Learning and Knowledge Engineering (AAAI-MAKE 2021), volume 2846 of CEUR Workshop Proceedings, Stanford University, Palo Alto, CA, USA, 22–24 March 2021.
13. Paul Smolensky. Connectionist AI, symbolic AI, and the brain. Artificial Intelligence Review, 1(2):95–109, 1987.
14. Sule Anjomshoae, Amro Najjar, Davide Calvaresi, and Kary Främling. Explainable agents and robots: Results from a systematic literature review. In Edith Elkind, Manuela Veloso, Noa Agmon, and Matthew E. Taylor, editors, 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS’19), pages 1078–1088, Montreal, QC, Canada, 13–17 May 2019. International Foundation for Autonomous Agents and Multiagent Systems.
15. Roberta Calegari, Giovanni Ciatto, Jason Dellaluce, and Andrea Omicini. Interpretable narrative explanation for ML predictors with LP: A case study for XAI. In Federico Bergenti and Stefania Monica, editors, WOA 2019 – 20th Workshop “From Objects to Agents”, volume 2404 of CEUR Workshop Proceedings, pages 105–112. Sun SITE Central Europe, RWTH Aachen University, Parma, Italy, 26–28 June 2019.
16. Robert Andrews, Joachim Diederich, and Alan B. Tickle. Survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8(6):373–389, December 1995.
17. Roberta Calegari, Giovanni Ciatto, Viviana Mascardi, and Andrea Omicini. Logic-based technologies for multi-agent systems: A systematic literature review. Autonomous Agents and Multi-Agent Systems, 35(1):1:1–1:67, 2021. Collection “Current Trends in Research on Software Agents and Agent-Based Software Development”.
18. Thomas Hellström and Suna Bensch. Understandable robots – what, why, and how. Paladyn, Journal of Behavioral Robotics, 9(1):110–123, 2018.
19. Amina Adadi and Mohammed Berrada. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6:52138–52160, 2018.
20. Tim Baarslag, Michael Kaisers, Enrico Gerding, Catholijn Jonker, and Jonathan Gratch. Computers that negotiate on our behalf: Major challenges for self-sufficient, self-directed, and interdependent negotiating agents. In Autonomous Agents and Multiagent Systems: AAMAS 2017 Workshops, Visionary Papers, pages 143–163, 2017.
21. Catholijn M. Jonker, Reyhan Aydoğan, Tim Baarslag, Joost Broekens, Christian A. Detweiler, Koen V. Hindriks, Alina Huldtgren, and Wouter Pasman. An introduction to the Pocket Negotiator: A general purpose negotiation support system. In Natalia Criado Pacheco, Carlos Carrascosa, Nardine Osman, and Vicente Julián Inglada, editors, Multi-Agent Systems and Agreement Technologies, pages 13–27, Cham, 2017. Springer International Publishing.
22. Sascha Ossowski, editor. Agreement Technologies, volume 8 of Law, Governance and Technology Series. Springer Netherlands, 2012.
23. Tim Baarslag, Mark J. C. Hendrikx, Koen V. Hindriks, and Catholijn M. Jonker. Learning about the opponent in automated bilateral negotiation: A comprehensive survey of opponent modeling techniques. Autonomous Agents and Multi-Agent Systems, 30:849–898, 2016.
24. Reyhan Aydoğan, Tim Baarslag, Koen V. Hindriks, Catholijn M. Jonker, and Pinar Yolum. Heuristics for using CP-nets in utility-based negotiation without knowing utilities. Knowledge and Information Systems, 45:357–388, 2014.
25. Nicholas R. Jennings, Peyman Faratin, Alessio Lomuscio, Simon Parsons, Michael Wooldridge, and Carles Sierra. Automated negotiation: Prospects, methods and challenges. Group Decision and Negotiation, 10:199–215, March 2001.
26. Reyhan Aydoğan, Ivan Marsá-Maestre, Mark Klein, and Catholijn Jonker. A machine learning approach for mechanism selection in complex negotiations. Journal of Systems Science and Systems Engineering, 27, 2018.
27. Litan Ilany and Ya’akov Gal. Algorithm selection in bilateral negotiation. Autonomous Agents and Multi-Agent Systems, 30(4):697–723, July 2016.
28. Koen V. Hindriks and Dmytro Tykhonov. Opponent modelling in automated multi-issue negotiation using Bayesian learning. In Lin Padgham, David C. Parkes, Jörg P. Müller, and Simon Parsons, editors, 7th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2008), volume 1, pages 331–338, Estoril, Portugal, 12–16 May 2008. IFAAMAS.
29. Chao Yu, Fenghui Ren, and Minjie Zhang. An adaptive bilateral negotiation model based on Bayesian learning, volume 435, pages 75–93. Springer, January 2013.
30. Dajun Zeng and Katia Sycara. Bayesian learning in negotiation. International Journal of Human-Computer Studies, 48(1):125–141, 1998.
31. Reyhan Aydoğan and Pinar Yolum. Ontology-based learning for negotiation. In 2009 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT 2009), volume 2, pages 177–184, January 2009.
32. Boris A. Galitsky, Sergei O. Kuznetsov, and Mikhail V. Samokhin. Analyzing conflicts with concept-based learning. In Frithjof Dau, Marie-Laure Mugnier, and Gerd Stumme, editors, Conceptual Structures: Common Semantics for Sharing Knowledge, pages 307–322. Springer Berlin Heidelberg, 2005.
33. Ivan Marsa-Maestre, Mark Klein, Catholijn M. Jonker, and Reyhan Aydoğan. From problems to protocols: Towards a negotiation handbook. Decision Support Systems, 60:39–54, 2014.
34. Yinon Oshrat, Raz Lin, and Sarit Kraus. Facing the challenge of human-agent negotiations via effective general opponent modeling. In 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS’09), volume 1, pages 377–384. IFAAMAS, 2009.
35. Onat Güngör, Umut Çakan, Reyhan Aydoğan, and Pinar Özturk. Effect of awareness of other side’s gain on negotiation outcome, emotion, argument, and bidding behavior. In Reyhan Aydoğan, Takayuki Ito, Ahmed Moustafa, Takanobu Otsuka, and Minjie Zhang, editors, Recent Advances in Agent-based Negotiation, pages 3–20, Singapore, 2021. Springer Singapore.
36. Philippe Pasquier, Ramon Hollands, Frank Dignum, Iyad Rahwan, and Liz Sonenberg. An empirical study of interest-based negotiation. Autonomous Agents and Multi-Agent Systems, 22:249–288, 2011.
37. Frank Kaptein, Joost Broekens, Koen Hindriks, and Mark Neerincx. Personalised self-explanation by robots: The role of goals versus beliefs in robot-action explanation for children and adults. In 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages 676–682, 2017.
38. James H. Moor. The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21, 2006.
39. Davide Calvaresi, Michael Schumacher, and Jean-Paul Calbimonte. Personal data privacy semantics in multi-agent systems interactions. In Yves Demazeau, Tom Holvoet, Juan M. Corchado, and Stefania Costantini, editors, Advances in Practical Applications of Agents, Multi-Agent Systems, and Trustworthiness. The PAAMS Collection – 18th International Conference (PAAMS 2020), volume 12092 of Lecture Notes in Computer Science, pages 55–67, L’Aquila, Italy, 7–9 October 2020. Springer.