Towards Understanding Trust in Self-adaptive Systems
Dimitri Van Landuyt
dimitri.vanlanduyt@kuleuven.be
KU Leuven
Belgium
Dávid Halász
halasz@mail.muni.cz
Masaryk University
Czech Republic
Stef Verreydt
stef.verreydt@kuleuven.be
KU Leuven
Belgium
Danny Weyns
danny.weyns@gmail.com
KU Leuven/Linnaeus University
Belgium/Sweden
ABSTRACT
Self-adaptive systems (SASs) can change their structures autonomously and dynamically adapt their behaviors aiming at (i) attaining longer-term system goals and (ii) coping with inevitable dynamics and changes in their operational environments that are difficult to anticipate. As SASs directly or indirectly interact with, and affect, humans, such degrees of autonomy create the necessity for these systems to be trusted or considered trustworthy.
While the notions of ‘trust’ and ‘trustworthiness’ have been investigated for over a decade, particularly by the SEAMS community, trust is a broad concept that covers diverse notions and techniques, and there is currently no clear view on the state of the art. To that end, we present the outcomes of an exploratory literature study that clarifies how trust as a foundational concept has been concretized and used in SASs. Based on an analysis of a set of 16 articles from the published SEAMS proceedings, we provide (i) a summary of the diverse quality attributes of SASs influenced by trust, (ii) a clarification of the different participant roles in trust establishment in SASs, and (iii) a summary of trust qualification or quantification approaches used in the literature. This review provides a more holistic view on the current state of the art for attaining trust in the engineering of self-adaptive systems, and identifies research gaps worthy of further investigation.
ACM Reference Format:
Dimitri Van Landuyt, Dávid Halász, Stef Verreydt, and Danny Weyns. 2024. Towards Understanding Trust in Self-adaptive Systems. In Proceedings of 19th International Conference on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2024). ACM, New York, NY, USA, 7 pages. https://doi.org/XXXXXXX.XXXXXXX
1 INTRODUCTION
Digital systems and ecosystems are increasingly interconnected and collaborative, both among organizations (federated) and between systems and humans. To accomplish any worthwhile objective, involvement of and collaboration among third-party systems and actors is often required, and such collaboration relies on trust. For example, the simple act of a digital payment in fact relies on a complex interplay between payment providers, financial institutes, and financial network providers, all of which cooperate in a trust network. Despite the intrinsic complexity, the relatively stable nature of these networks acts as a source of trust: customers often have long-standing trust relations with their banks, and payment networks are considered among the most reputed and secure. Governments provide regulations and implement oversight activities on this overall ecosystem, and this in turn increases overall trust in the ecosystem.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
SEAMS 2024, April 15–16, 2024, Lisbon, Portugal
©2024 Association for Computing Machinery.
ACM ISBN 978-x-xxxx-xxxx-x/YY/MM. . . $15.00
https://doi.org/XXXXXXX.XXXXXXX
This changes significantly when systems and ecosystems become highly dynamic and self-adaptive. Self-adaptive systems (SASs) can dynamically and autonomously change their structures, configurations, and collaboration models [11, 33]. Consequently, establishing and managing trust is intrinsically more difficult.
Overall distrust and scrutiny are higher for systems that autonomously make decisions that are difficult to anticipate at design time. The AI Act [23], for example, restricts the use of non-explainable Artificial Intelligence (AI) for specific use cases, and outright prohibits the use of any autonomous decision-making based on deep learning for others. At the core, these regulations and levels of distrust are rooted in uncertainties about fairness or bias, lack of transparency, and loss of control and oversight. This puts emphasis on the necessity to equip SASs with abilities to establish and reason about trust, and to promote trustworthiness. While there is no unified definition of trustworthiness of SASs, most commonly it refers to a system’s ability to provide guarantees that its goals are achieved. For instance, for an unmanned underwater vehicle system (UUV) [16] that needs to perform a monitoring mission, trustworthiness would mean that the system provides guarantees on throughput, i.e., the UUV should take at least a given number of measurements of sufficient accuracy for a given part of the mission distance, the energy consumption of the sensors should not exceed a given amount of energy for a given distance, and if a configuration that meets the requirements cannot be identified within a given amount of time, the UUV should stop and wait until the controller identifies a suitable configuration (e.g., after a sensor has recovered) or new instructions are provided by a human operator.
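The UUV guarantees described above can be sketched as a minimal run-time check. The concrete thresholds (20 measurements, a 150 J energy budget) and all names are illustrative assumptions on our part, not taken from [16]:

```python
from dataclasses import dataclass

@dataclass
class SegmentReport:
    """Telemetry collected for one part of the mission distance."""
    accurate_measurements: int  # measurements of sufficient accuracy
    energy_used_j: float        # sensor energy consumed (joules)

def guarantees_hold(report: SegmentReport,
                    min_measurements: int = 20,
                    energy_budget_j: float = 150.0) -> bool:
    """Both guarantees (throughput and energy) must be met."""
    return (report.accurate_measurements >= min_measurements
            and report.energy_used_j <= energy_budget_j)

def select_configuration(candidates, meets_requirements):
    """Return the first suitable configuration; if none can be
    identified, stop and wait for the controller or a human
    operator (the fail-safe behavior described above)."""
    for cfg in candidates:
        if meets_requirements(cfg):
            return ("adapt", cfg)
    return ("stop-and-wait", None)
```

The fail-safe branch makes the trustworthiness argument explicit: when no configuration meets the requirements, the system defaults to a safe state rather than continuing under unverified assumptions.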
As a foundational notion, methods and techniques to increase or ensure trust and trustworthiness have been a topic of interest and research in self-adaptive systems, see for instance [5, 17, 26, 28, 35].
However, the motivations, concretizations and realizations of trust have been diverse and broad. In this paper, we perform an exploratory study of trust in SASs, with a focus on how it is approached in SAS research. We select 16 articles from the SEAMS proceedings that each make contributions to this topic. We summarize the different non-functional quality attributes related to trust, and identify the different tactics and approaches that have been used to realize trust as described in the literature. In addition, we take note of the different methods to quantify or qualify trust, and the extent to which the proposed trust mechanisms themselves can be considered self-adaptive.
The remainder of this paper is structured as follows. Section 2
discusses background and related work. Section 3 states the main
research questions, and Section 4 introduces the methodology. The
study results are presented and discussed in Section 5 after which
Section 6 concludes the article.
2 BACKGROUND AND RELATED WORK
We discuss the basic concepts of trust, and outline the related work
on trust surveys in software engineering. Then, we introduce the
key notion of a ‘trusted computing base’ and the role of a ‘root of
trust’ in security engineering approaches, after which we discuss
the specific role of trust in self-adaptive systems (SAS). Finally, we
motivate this work.
Trust. Trust refers to the degree of certainty that is bestowed by a trustor onto a trustee in terms of its accomplishment of a specific property of interest [20, 35]. In digital systems, the trustee as such can be a technology (modules, algorithms, designs, protocols), or an operational third-party system (services and providers), but also the involved organizations (service providers) and humans (users). In addition, trust can be given not just to individual entities, but to entire ecosystems consisting of multiple collaborating parties. For example, in blockchain-based systems [22], participants do not need to trust individual participants; instead, the rules and mechanisms of the overall distributed-ledger-based collaboration by design (distributed consensus) provide an individual participant with computational guarantees about the correctness of the transactions and the integrity of the data in the distributed ledger.
Trust is an individual and personalized property (the trustor is a single entity), in the sense that different entities may have different trust relations to the same party. For example, they may apply different criteria and approaches (indicative of their individual norms and values) to accomplish trust [2]. Trust is a foundational and underpinning concept in complex systems, ecosystems and societies. For example, the global economic system relies on mechanisms of supply and demand, and stock market trading is essentially a means to reconcile the different trust assessments of individual traders about the economic future of a company within the broader economic market.
Trust in software engineering. In general, the interpretation of trust is ambiguous, and depends heavily on the field of science in which it is applied [2]. There have been several efforts to create an overview of trust in software engineering from various perspectives. The most recent survey, conducted by Buhnova et al. [3], observes that there has been a lack of significant attempts to examine trust in autonomous systems. This study also observes that most of the related work in software engineering does not clearly define or refer to a clear definition of trust. Furthermore, it also argues that the most generic definition of trust has been provided by Gambetta et al. [15] as “the willingness of one party (trustor) to depend or rely on the actions of another party (trustee)”.
Trusted computing base and roots of trust. In security, the ‘trusted computing base’ (TCB) is a key underpinning concept first introduced in 1984 by John Rushby [27], which refers to the principle that the effectiveness of a security mechanism can always be tied to its ‘roots of trust’.
For example, in a traditional authentication system, the root of trust is based on having established a shared secret beforehand (e.g., a username and password) which additionally is computationally infeasible to obtain (i.e., the password is kept secret, is difficult to guess or brute-force, and password enumeration is not allowed). In biometric authentication systems, the root of trust is based on being able to provide unique, tamper-proof and unfalsifiable credentials (e.g., a fingerprint, face, iris). In some approaches, trust is further increased with additional factors through liveness checks, i.e., verifying that the user is in fact physically present [21]. In self-sovereign identity (SSI) [13, 25], users self-manage the credentials issued to them by trusted third parties (e.g., a governmental identity provider) and provide them upon request to obtain access to a service. The fact that self-governed security credentials are signed by a trusted authority and that these signatures can be verified independently, together with cryptographic guarantees about the immutability and tamper-proofness of the credentials, serves as the roots of trust.
Trusted computing [18, 24] refers to the capabilities of a computational platform that enable external parties to independently verify properties of the computational outcomes. In remote attestation [36], the ability to individually verify the correctness and integrity of software running outside of the observer’s direct control serves as a root of trust.
Trust in self-adaptive systems. In traditional software development processes, trust is created at development time, by engineers and developers, in the phases of conceptualization, construction, testing, verification and certification. Only when sufficient quality assurance steps have been taken can software be released, deployed and integrated into the real world.
However, self-adaptive systems (SASs) are fundamentally different in that they are equipped with capabilities to change their structures autonomously and dynamically adapt to (i) meet longer-term system goals and (ii) cope with inevitable dynamics and changes in their operational environments that are difficult to anticipate. In SASs, trust is maintained at run time, through the use of specific trust mechanisms, tactics, and enablers. One of the key underlying approaches to maintain trust that has been studied in SASs is the use of formal techniques to perform run-time analysis and select configurations for adaptation [34]. Characteristic examples include the use of probabilistic and statistical model checking techniques [4, 9, 19]. These and other mechanisms, tactics, and enablers of interest are required to maintain or increase trust at run time, to allow external observers (individual participants, operators, oversight bodies) to ensure that the overall system maintains its desired properties, even during and after adaptation, and despite evolution and change in its environment. However, as argued in [5, 6], communicating the dynamic arguments for trust to stakeholders remains a key challenge in SASs.
Motivation. As an underpinning and foundational notion, methods and techniques to increase or ensure trust and trustworthiness have been a topic of interest and research in SASs. However, the motivations, concretizations and realizations of trust have been diverse and broad, and the issue is further complicated by terminological overloading of related concepts such as reliability, resilience, dependability and safety.
3 RESEARCH QUESTIONS
We perform an exploratory literature study of published peer-reviewed articles within the research community that focuses extensively on Software Engineering approaches for Self-Adaptive Systems (SASs): the SEAMS community. We state the following two key questions, which are each further refined into sub-questions and attention points of particular interest.
RQ1 Trust notions and definitions. What types of trust are put to the forefront in research on self-adaptive systems (SASs)?
RQ1.1 NFRs. Which non-functional properties do these notions of trust specifically contribute to?
RQ1.2 Trust roles. Which parties fulfill the roles of trustor and trustee, what is their nature (system/human) and cardinality (individual/collective)?
RQ2 Operationalization of trust in SASs. How is trust brought into practice? Which mechanisms, tactics, and enablers are used?
RQ2.1 Trust approaches and tactics. What are the approaches and recurring tactics used to establish trust?
RQ2.2 Expressing trust. How is trust expressed (quantified or qualified), calculated or predicted?
RQ2.3 Adaptive trust mechanisms. Is the trust mechanism itself self-adaptive?
4 METHODOLOGY
Section 4.1 discusses the establishment of the core set of articles
upon which this study is based. Section 4.2 subsequently discusses
our synthesis approach.
4.1 Establishment of core literature corpus
Figure 1 graphically depicts the applied process to establish our
core set of articles adopted for this study.
Figure 1: Graphical representation of the methodology used to identify the core set of articles included in this study. (Search query: full text matching "trust" or "trust*"; resources: ACM Digital Library and IEEE Xplore, collection "SEAMS: Software Engineering for Adaptive and Self-Managing Systems"; full set: 73 articles; exclusion filter: fewer than 5 mentions of "trust"/"trust*"; core set: 16 articles.)
We select relevant articles from the Proceedings Library of the SEAMS conference¹.

¹ This is predominantly the ACM conference collection entitled “SEAMS: Software Engineering for Adaptive and Self-Managing Systems”. This was further augmented with the SEAMS publications listed in IEEE Xplore for the specific years of 2009, 2021 and 2023, since these were not available in the ACM Library.
Figure 2: Frequency overview of the articles over publication year (2006–2023). Articles of the core set are highlighted in blue.
In this first selection step, we specifically retain those articles that include at least one textual reference to the trust keyword, anywhere in the full text. This first selection step leads to the identification of 73 articles (out of a total publication count of 334 articles in SEAMS in the 2006–2023 time period).
To further scope the study, we deliberately exclude articles that have fewer than 5 textual references to trust in the full-text manuscript. We argue that in these articles, trust and trust mechanisms will not be at the core of the contribution or the mechanism².
Applying this exclusion criterion led to the establishment of the core set, which consists of 16 articles. These articles are listed in Table 1. The included articles are highly diverse in form, contribution, publication year, author list, and focus. The set consists of one community discussion article, one demonstration article, one NIER-track article, seven short research articles, and six full research articles (of which two are survey papers [10, 29]). Figure 2 graphically depicts the relation between the full set and the core set, distributed over the X-axis on the basis of publication year.
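The selection and exclusion steps above can be sketched as a small keyword filter. The regular expression and function names are our own illustration of the "trust"/"trust*" query and the 5-mention threshold, not tooling used by the authors:

```python
import re

# Matches "trust" and any "trust*" variant (trusted, trustworthy, ...)
TRUST_PATTERN = re.compile(r"\btrust\w*", re.IGNORECASE)

def trust_mentions(fulltext: str) -> int:
    """Count textual references to the trust keyword in a manuscript."""
    return len(TRUST_PATTERN.findall(fulltext))

def select_core_set(articles: dict[str, str],
                    min_mentions: int = 5) -> list[str]:
    """Retain only articles with at least `min_mentions` references,
    mirroring the exclusion filter of Figure 1."""
    return [title for title, text in articles.items()
            if trust_mentions(text) >= min_mentions]
```

Applied to the full set of 73 articles, a filter of this shape yields the core set of 16.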
4.2 Mapping and annotating results
Each article is carefully reviewed with specific attention to the role of trust in the context of the attainment of larger system goals and in the self-adaptive mechanisms described in the articles.
For RQ1, we take note of the motivations given in the article for adopting a trust-focused approach. These are typically explained in the introduction section, and in the motivation and problem statement sections of the articles. For RQ1.1 specifically, we note down the software and system qualities for which the article mentions a positive relation to the trust mechanisms (i.e., they will benefit from the trust mechanism), and we perform co-occurrence analysis to untangle the diverse relationships among these qualities. For RQ1.2, we take note of the involved parties and their role as the trustor (who requires trust) or trustee. In addition, we distinguish between entity-to-entity, entity-to-human, entity-to-ecosystem, human-to-entity and finally human-to-ecosystem trust establishment (entities being singular participants, ecosystem referring to the overall collective ecosystem).
² This assumption was verified manually by sampling from the excluded article set.
Title | Authors | Year | Count “trust” | Ref.
A Middleware and Algorithms for Trust Calculation from Multiple Evidence Sources | Yew and Lutfiyya | 2012 | 107 | [35]
Approaching runtime trust assurance in open adaptive systems | Schneider, Becker and Trapp | 2012 | 75 | [28]
From Systems to Ecosystems: Rethinking Adaptive Safety | Halász | 2022 | 47 | [17]
A Platform to Enable Self-Adaptive Cloud Applications Using Trustworthiness Properties | Pereira, Silva, Antunes, Silva, de França, Moraes and Vieira | 2020 | 43 | [26]
Self-Protection against Business Logic Vulnerabilities | Zeller, Khakpour, Weyns and Deogun | 2020 | 34 | [38]
A Paradigm for Safe Adaptation of Collaborating Robots | Cioroaica, Buhnova and Tomur | 2022 | 31 | [7]
Extending MAPE-K to Support Human-Machine Teaming | Cleland-Huang, Agrawal, Vierhauser, Murphy and Prieto | 2022 | 17 | [8]
Self-Adaptive Testing in the Field: Are We There Yet? | Silva, Bertolino and Pelliccione | 2022 | 15 | [29]
A Survey of Approaches to Adaptive Application Security | Elkhodary and Whittle | 2007 | 14 | [14]
On Learning in Collective Self-Adaptive Systems: State of Practice and a 3D Framework | D’Angelo, Gerasimou, Ghahremani, Grohmann, Nunes, Pournaras and Tomforde | 2019 | 12 | [10]
A Taxonomy and Survey of Self-Protecting Software Systems | Yuan and Malek | 2012 | 11 | [37]
Taming Uncertainty in the Assurance Process of Self-Adaptive Systems: A Goal-Oriented Approach | Solano, Caldas, Ricardo, Rodrigues, Vogel and Pelliccione | 2019 | 7 | [30]
Benchmarking the Resilience of Self-Adaptive Software Systems: Perspectives and Challenges | Almeida and Vieira | 2011 | 6 | [1]
Towards Integrating Undependable Self-Adaptive Systems in Safety-Critical Environments | Weiss, Schleiss, Schneider and Trapp | 2018 | 6 | [32]
Self-Adaptive Artificial Intelligence | de Lemos and Grześ | 2019 | 5 | [12]
Threat modeling at run time: the case for reflective and adaptive threat management (NIER track) | Van Landuyt, Pasquale, Sion and Joosen | 2021 | 5 | [31]
Table 1: Full overview of the core set of articles, sorted in decreasing order of the amount of textual trust references per article.
Addressing RQ2 shifts focus towards the solutions: the architectures, mechanisms, protocols and the specifics described in the article, with a focus on how they increase trust. These are typically core to the contribution of the paper, and thus emphasized and explained in the contribution sections, the approach sections, and in the evaluation and validation. To address RQ2.1, a generalization and mapping activity is performed, to identify the underlying core tactics and the fundamental underpinnings of the specific trust approach at hand. For RQ2.2, specific attention is paid to the expression of trust in the system (if applicable), or to the evaluation and validation approach. In both cases, we summarize how the authors provide quantitative or qualitative evidence of trust, and which evaluation methods and metrics were used. Finally, RQ2.3 is addressed by assessing whether the mechanism of trust itself is adaptive to run-time context factors, and whether it is reactive (upon challenge/demand by the trustor) or proactive (initiated by the trustee).
5 STUDY RESULTS
Section 5.1 and Section 5.2 discuss the study results for RQ1 and RQ2, respectively. Finally, Section 5.3 discusses threats to validity.
5.1 RQ1. Trust notions and definitions
To answer RQ1.1, we first identify from the articles of the core set the different non-functional attributes, software qualities and system qualities. We specifically list those attributes that are positively influenced by a trust-aware approach (or negatively by a lack thereof). This led to the overview presented in Table 2, obtained by performing a keyword count in the articles of the core set³.
We observe that these notions are often used ambiguously and interchangeably (e.g., safety and dependability, reliability and robustness), yet key nuances in their interpretation matter. To further shed light on these relations and dependencies, we perform pairwise co-occurrence analysis, taking note of which of these

³ To reduce incidental noise created by accidental matches (e.g., a keyword is mentioned in the title of a cited article but not in the article itself), a cut-off threshold of 2 was adopted, i.e., we match only articles with three or more mentions of the quality at hand.
Quality attribute | References | Count
Security | [1, 7, 14, 17, 26, 28, 31, 37, 38] | 324
Safety | [7, 8, 17, 28, 32] | 267
Reliability | [8, 14, 29, 30, 35, 37] | 123
Performance | [1, 8, 12, 26, 29] | 90
Resilience | [1, 10] | 84
Dependability | [1, 26, 28, 30, 32] | 75
Privacy | [10, 26, 31, 35] | 61
Availability | [1, 14, 32, 37] | 51
Integrity | [28, 37] | 32
Transparency | [8, 12] | 30
Predictability | [8] | 9
Table 2: Synthesis of the different system and software quality attributes discussed in the articles of the core set.
                Dep  Saf  Sec  Ava  Per  Res  Rel  Int  Pre  Tra
Safety           2
Security         3    3
Availability     3    1    4
Performance      2    1    2    1
Resilience       1    0    1    1    3
Reliability      2    2    2    3    2    0
Integrity        1    1    2    2    0    0    2
Predictability   0    1    0    0    1    0    1    0
Transparency     0    1    0    0    2    1    1    0    1
Privacy          1    0    2    0    2    1    1    0    0    0
Table 3: Co-occurrence table of the different NFRs. Columns, left to right: Dependability, Safety, Security, Availability, Performance, Resilience, Reliability, Integrity, Predictability, Transparency.
qualities are discussed together in the articles of the core set. The
outcome of this analysis is presented in Table 3.
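The pairwise co-occurrence analysis can be sketched as follows; the annotation format (each article mapped to the set of quality attributes it discusses) is our own illustration of the procedure, and the example annotations are hypothetical:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(article_qualities: dict[str, set[str]]) -> Counter:
    """Count, for each unordered pair of quality attributes, the
    number of articles in which both attributes are discussed."""
    pairs: Counter = Counter()
    for qualities in article_qualities.values():
        # sorted() gives a canonical order, so (a, b) == (b, a)
        for a, b in combinations(sorted(qualities), 2):
            pairs[(a, b)] += 1
    return pairs
```

For example, `cooccurrence({"[7]": {"Safety", "Security"}, "[28]": {"Integrity", "Safety", "Security"}})` counts the pair (Safety, Security) twice, which is the kind of cell value reported in Table 3.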
Trustor (trust by) | Trustee (trust in) | References
entity | entity | [1, 12, 31, 35, 37]
human | entity | [1, 12, 14, 26, 28, 29, 31, 37]
entity | ecosystem | [7, 10, 17, 28, 30]
entity | human | [35, 38]
human | ecosystem | [8]
Table 4: Overview of the fulfillment of trustor and trustee roles in the articles of the core set.
Finally, Table 4 summarizes the fulfillment of the trustor and trustee roles in the different articles of the core set (RQ1.2). It distinguishes between trust to and from singular entities and humans, and puts additional attention on the granularity of a collective ecosystem, i.e., trust of the entity in properties of the ecosystem as a whole. One single article (final row of Table 4) focuses explicitly on the establishment of trust between humans and the collective ecosystem, in this case by openly “accepting interventions from humans” in terms of their assessment of and feedback on “the appropriateness of individual adaptations” [8].
Main takeaways. The results for RQ1 indicate that:
RQ1.1 A wide variety of system and software quality attributes has been brought into relation to trust, yet there is a need for harmonization of terminology and scope. The focus is quite disparate and divergent: no more than 4 articles out of 16 focus on the same sets of properties.
RQ1.2 Many articles focus on entity-to-human and entity-to-entity trust relations, whereas only more recently has the research focus shifted to ecosystem-level trust, i.e., the ability of a single entity to accomplish trust in the collective and emergent behavior of the entire ecosystem.
5.2 RQ2. Operationalization of trust in SASs
Table 6 summarizes and hierarchically classifies the different approaches and tactics identified in the articles of the core set. Following the breakdown from Table 4, a fundamental distinction is made between entity-to-entity or human-to-entity, entity-to-human and entity-to-ecosystem approaches. The second column presents a first classification of the types of trust sought after (e.g., trust in overall attainment of a desired property), whereas the third column refines these into more concrete ‘trust tactics’, i.e., the abstracted approach adopted in each of the papers. The final column lists the articles of the core set in which these tactics were recognized.
To address RQ2.1, we observe that in the approaches of the different articles there is a recurrence of (abstracted) tactics, specifically either to (i) evaluate (externally/black-box or internally/white-box) the attainment of a certain property (trust evaluation tactics), (ii) use adaptation as a means to cope with limited trust, or to actively try to increase trust (e.g., evidence seeking) (adaptation for trust tactics), or (iii) constrain the overall system so that trust becomes part of its emergent behavior, i.e., incentive mechanisms to promote trustworthiness, and adoption of game-theoretic elements to
Art. | Q | S | Trust expression approach | A
[35] | ○ | | Experience score (percentage of positive experiences), which combines recommendations from third parties with collected evidence. | ○
[28] | | | Boolean mapping function, combining factors for availability, reliability, safety, integrity, maintainability. |
[17] | ○ | | Trust level between 0 and 1; conditional activation of safety features based on the exact score. Concrete calculation approach not specified. |
[26] | ○ | ○ | Prediction of future service levels based on response time and throughput of a cloud node through CPU and memory metrics. |
[38] | | ○ | High, medium and low trust levels derived from the behavior of users (in the case study, the hotel booking cancellation behavior of benign and malicious users). |
[7] | | | Conformity between predictions of a digital twin and observed behavior is used as a basis for trust. |
[8] | | | Bi-directional calibrated trust is established between humans and machines. |
[14] | | | The Adaptive Trust Negotiation Framework (ATNAC) employs two metrics: system threat-level and user suspicion-level (not further defined). |
[30] | ○ | ○ | A trust value is derived from a quantified reliability and cost function, which are both in turn calculated using weighted formulae (parametric symbolic formulae). | ○
[1] | | | This article discusses the requirements and design for a benchmarking and run-time testing approach that can generate run-time evidence to increase trust. No direct or indirect metrics for trust are discussed. |
[32] | | | This article discusses an approach to dynamically ingest partially trustable sensor information by also using more reliable sensors for confirmation. |
Table 5: Overview and summary of the different approaches taken to express trust in the articles of the core set. (Q: a quantitative approach was used; S: specific metrics have been presented; A: self-adaptiveness in the trust mechanism.)
make trustworthy behavior the most likely outcome (trust as an ecosystem constraint).
This observation highlights the potential for better consolidation and reuse of these types of approaches in well-documented architectural tactics and patterns. To illustrate this, by using the property 𝑥 as a shorthand for any of the non-functional properties listed in Table 2 and exploring the different approaches and tactics of Table 6, novel ideas on trust mechanisms can be devised that are currently unexplored yet may hold value. For example, suitable approaches to have an entity increase trust in the attainment of Performance requirements would be to (i) allow that participant to execute (micro-)benchmarks in the system (first row, transparency), (ii) publish and share reputation scores based on past behavior (second row, reputation scores), or finally, (iii) provide access to alternative service providers so that loss of Performance in one provider can be dynamically mitigated by shifting computation to a different provider (third row, self-protective adaptation or reconfiguration).
Trust cannot be measured directly but is often estimated indirectly through other metrics and evidence. To address RQ2.2, we summarize the approaches to trust quantification in Table 5⁴. The second column highlights whether a quantitative approach is used, rather than a boolean or qualitative approach (Likert scale)

⁴ We exclude the survey and taxonomy articles [10, 29, 37], the community discussion paper [12] and the NIER paper [31], as they don’t provide concrete approaches to express trust.
Trust in | Refinement | Approach/tactic | Articles
Property 𝑥 of individual entity | Attainment of 𝑥 | Transparency/openness: allowing to externally verify properties of 𝑥 (black-box perspective) | [8, 12, 29, 32]
 | | Prediction of 𝑥 based on past behavior (reputation scores) | [26, 35]
 | Adaptation for 𝑥 | Self-protective adaptation/reconfiguration | [31, 37]
Property 𝑥 of human user | User’s attainment of 𝑥 | User trustworthiness score based on behavior | [14, 38]
 | Adaptation for 𝑥 | User access control informed by trust | [14, 31, 37, 38]
 | Fostering 𝑥 in human involvement | Incentivation and game theory for 𝑥 | [8]
Property 𝑥 of collective system | Assessing overall attainment of 𝑥 | Analysis of past behavior for 𝑥 (predictive learning) | [7, 10, 17]
 | | Sharing of reputation scores (recommendation) | [10, 35]
 | | Dynamic and coordinated adaptation for 𝑥 | [7, 17]
 | Generating and collecting evidence for 𝑥 | Run-time test case execution and dynamic assessment | [1, 7, 28, 30]
 | Promoting 𝑥 towards individual entities | Providing mechanisms for feedback, intervention and control | [8]
Table 6: Synthesis of the different trust refinements (column 2) and approaches and tactics encountered in the different articles (column 3). In this synthesis, the parameterized property of interest 𝑥 can be any one of the NFRs from Table 2.
for expressing trust. There is a clear distinction in how trust is
expressed or estimated. Some papers ([
17
,
26
,
30
,
35
]) use quanti-
ed metrics, for example percentages, to express trust in an entity
or system, while others ([
7
,
28
,
38
]) describe qualied metrics like
{high, medium, low} or simply {trusted, untrusted}. The majority of
the examined papers ([
1
,
8
,
10
,
12
,
14
,
29
,
31
,
32
,
37
]), however, does
not specify how trust is expressed. The third column shows the de-
gree of specicity of the used approach to express trust, i.e. whether
the article provides (or refers to) a concrete methodology to apply
the metrics.
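To make the distinction between quantified and qualified metrics concrete, the following sketch (our own illustration, not taken from any surveyed article) shows how a quantified trust score in [0, 1] can be discretized into the qualitative scales encountered in the literature; the threshold values and names are assumptions:

```python
from enum import Enum

class TrustLevel(Enum):
    """Qualitative trust scale of the kind used in, e.g., [7, 28, 38]."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def qualify(score: float, low: float = 0.3, high: float = 0.7) -> TrustLevel:
    """Map a quantified trust score in [0, 1] onto {high, medium, low}.

    The band thresholds (0.3, 0.7) are illustrative assumptions.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("trust score must lie in [0, 1]")
    if score < low:
        return TrustLevel.LOW
    if score < high:
        return TrustLevel.MEDIUM
    return TrustLevel.HIGH

def is_trusted(score: float, threshold: float = 0.5) -> bool:
    """Boolean {trusted, untrusted} view, again with an assumed threshold."""
    return score >= threshold
```

Under these assumed thresholds, a percentage-based metric of 85% (0.85) would map to TrustLevel.HIGH; the boolean view collapses the same score to trusted.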
The final column of Table 5 summarizes the results for RQ2.3, showing that only two approaches in the core set present a trust mechanism that is itself also self-adaptive. Firstly, the SCOUT middleware [35] dynamically adapts its strategy for collecting evidence based on the calculated reliability of a specific belief, which in turn is based on that evidence. More active evidence gathering is performed when the beliefs formulated in the system prove to be unreliable. Secondly, Solano et al. [30] present a generative approach that dynamically creates different parametric formulae to quantify trust under uncertainty. In this work, a search-based, self-optimizing approach gradually adapts the trust quantification mechanism, taking uncertainty into account.
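The feedback loop underlying such self-adaptive trust mechanisms can be sketched minimally as follows. This is a hypothetical illustration inspired by the SCOUT behavior described above; the function name, the linear scaling, and the interval bounds are our own assumptions, not details of [35]:

```python
def adapt_collection_interval(reliability: float,
                              base_interval: float = 60.0,
                              min_interval: float = 5.0) -> float:
    """Shorten the evidence-collection interval (in seconds) when beliefs
    prove unreliable: the lower the calculated reliability of a belief,
    the more actively new evidence is gathered.
    """
    if not 0.0 <= reliability <= 1.0:
        raise ValueError("reliability must lie in [0, 1]")
    # Linear interpolation: a fully reliable belief keeps the base interval,
    # a fully unreliable belief triggers the most active gathering.
    return min_interval + reliability * (base_interval - min_interval)
```

The collected evidence then updates the belief's reliability, closing the loop: unreliable beliefs lead to more evidence, which in turn revises the reliability estimate.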
Main takeaways. The results for RQ2 indicate that:

RQ2.1 Despite differences in context, property and goals, we abstract three recurring and distinct trust approaches and tactics: (i) trust evaluation, (ii) adaptation in function of trust, and (iii) trust as an ecosystem constraint. The potential for reuse and broader applicability is large.

RQ2.2 Only a few articles in the core set assume or adopt a quantitative approach to express trust, and the degree of concretization in trust metrics is limited. Additional research is required to express trust in SASs.

RQ2.3 In the articles of the core set, only two approaches implement self-adaptive behavior in the trust mechanisms. Trust approaches that are intrinsically self-adaptive, i.e. that tailor their trust activities dynamically, are promising for tuning cost and complexity in cases where trust assurance can be relaxed.
5.3 Threats to validity
This study is subject to a number of validity threats, of which we highlight the two main ones. First, we only considered studies published in SEAMS, which may be construed as being too limiting in scope. In subsequent work, we would like to extend and validate this study by also including articles from related publication venues such as the ACM Transactions on Autonomous and Adaptive Systems (TAAS), the IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO), and the IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS). A preliminary selection from these libraries indicates that, with the same search criteria, this would extend the scope with respectively 34, 37, and 15 additional articles.⁵ Second, we excluded articles with fewer than five mentions of the 'trust' keyword, which introduces the risk that we may have missed some relevant studies. To mitigate this, we manually cross-checked some of the papers with fewer occurrences, which confirmed the validity of using a cut-off point.
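The cut-off criterion itself can be sketched as a simple filter; whole-word prefix matching (which also counts derived forms such as 'trustworthy') and case-insensitivity are our own assumptions about how occurrences were counted:

```python
import re

def passes_cutoff(article_text: str, keyword: str = "trust",
                  cutoff: int = 5) -> bool:
    """True if the article mentions the keyword (or a derived form,
    e.g. 'trusted', 'trustworthiness') at least `cutoff` times."""
    matches = re.findall(rf"\b{re.escape(keyword)}\w*", article_text,
                         flags=re.IGNORECASE)
    return len(matches) >= cutoff
```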
6 CONCLUSIONS
We present a focused literature synthesis study of the key research contributions on trust in self-adaptive systems published at SEAMS. Among the observations made, we highlight a diversity in trust notions, both in terms of the non-functional attributes targeted and in terms of the fulfillment of the trustor and trustee roles. We identify recurring approaches and tactics, and observe a more recent shift towards trust in ecosystems, i.e. approaches to enable trust in the emergent behavior of a dynamic, collective system. We observe that trust is rarely quantified or quantifiable. Despite being limited in scale, this exploratory overview shows the diversity in focus of the trust approaches presented and studied by the community.

Based on these findings, we advocate for trust engineering approaches to become more explicit about the actual roots of trust, i.e. to explicitly discuss and stipulate the trust-related assumptions that underpin the approaches, much like a security threat model documents assumptions, preconditions and residual risks.
⁵ Selected from the total sets consisting of respectively 99 (TAAS), 81 (SASO), and 58 (ACSOS) articles referring at least once to 'trust'.
Acknowledgements. This work was supported by the Research Fund KU Leuven, and the H2020 ERATOSTHENES project (Grant No. 101020416).
REFERENCES
[1] Raquel Almeida and Marco Vieira. 2011. Benchmarking the Resilience of Self-Adaptive Software Systems: Perspectives and Challenges. In Proceedings of the 6th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (Waikiki, Honolulu, HI, USA) (SEAMS '11). ACM, 190–195. https://doi.org/10.1145/1988008.1988035
[2] Kirsimarja Blomqvist. 1997. The many faces of trust. Scandinavian Journal of Management 13, 3 (1997), 271–286.
[3] Barbora Buhnova, David Halasz, Danish Iqbal, and Hind Bangui. 2023. Survey on Trust in Software Engineering for Autonomous Dynamic Ecosystems. In 38th ACM/SIGAPP Symposium on Applied Computing. 1490–1497.
[4] Radu Calinescu, Lars Grunske, Marta Kwiatkowska, Raffaela Mirandola, and Giordano Tamburrelli. 2011. Dynamic QoS Management and Optimization in Service-Based Systems. IEEE Transactions on Software Engineering 37, 3 (2011), 387–409. https://doi.org/10.1109/TSE.2010.92
[5] Radu Calinescu, Danny Weyns, Simos Gerasimou, Muhammad Usman Iftikhar, Ibrahim Habli, and Tim Kelly. 2018. Engineering Trustworthy Self-Adaptive Software with Dynamic Assurance Cases. IEEE Transactions on Software Engineering 44, 11 (2018), 1039–1069. https://doi.org/10.1109/TSE.2017.2738640
[6] Matteo Camilli, Raffaela Mirandola, and Patrizia Scandurra. 2023. XSA: EXplainable Self-Adaptation. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering (Rochester, MI, USA) (ASE '22). ACM, Article 189, 5 pages. https://doi.org/10.1145/3551349.3559552
[7] Emilia Cioroaica, Barbora Buhnova, and Emrah Tomur. 2022. A Paradigm for Safe Adaptation of Collaborating Robots. In Proceedings of the 17th Symposium on Software Engineering for Adaptive and Self-Managing Systems (Pittsburgh, Pennsylvania) (SEAMS '22). ACM, 113–119. https://doi.org/10.1145/3524844.3528061
[8] Jane Cleland-Huang, Ankit Agrawal, Michael Vierhauser, Michael Murphy, and Mike Prieto. 2022. Extending MAPE-K to Support Human-Machine Teaming. In Proceedings of the 17th Symposium on Software Engineering for Adaptive and Self-Managing Systems (Pittsburgh, Pennsylvania) (SEAMS '22). ACM, 120–131. https://doi.org/10.1145/3524844.3528054
[9] Javier Cámara and Rogério de Lemos. 2012. Evaluation of resilience in self-adaptive systems using probabilistic model-checking. In 2012 7th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS). 53–62. https://doi.org/10.1109/SEAMS.2012.6224391
[10] Mirko D'Angelo, Simos Gerasimou, Sona Ghahremani, Johannes Grohmann, Ingrid Nunes, Evangelos Pournaras, and Sven Tomforde. 2019. On Learning in Collective Self-Adaptive Systems: State of Practice and a 3D Framework. In Proceedings of the 14th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '19). IEEE Press, Montreal, Quebec, Canada, 13–24. https://doi.org/10.1109/SEAMS.2019.00012
[11] Rogério de Lemos, Holger Giese, Hausi A. Müller, Mary Shaw, Jesper Andersson, Marin Litoiu, Bradley Schmerl, Gabriel Tamura, Norha M. Villegas, Thomas Vogel, et al. 2013. Software engineering for self-adaptive systems: A second research roadmap. In Software Engineering for Self-Adaptive Systems II: International Seminar, Dagstuhl Castle, Germany, October 24-29, 2010, Revised Selected and Invited Papers. Springer, 1–32.
[12] Rogério de Lemos and Marek Grześ. 2019. Self-Adaptive Artificial Intelligence. In Proceedings of the 14th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '19). IEEE Press, Montreal, Quebec, Canada, 155–156. https://doi.org/10.1109/SEAMS.2019.00028
[13] Uwe Der, Stefan Jähnichen, and Jan Sürmeli. 2017. Self-sovereign identity – opportunities and challenges for the digital revolution. arXiv preprint arXiv:1712.01767 (2017).
[14] Ahmed Elkhodary and Jon Whittle. 2007. A Survey of Approaches to Adaptive Application Security. In Proceedings of the 2007 International Workshop on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '07). IEEE Computer Society, USA, 16. https://doi.org/10.1109/SEAMS.2007.2
[15] Diego Gambetta et al. 2000. Can we trust trust. Trust: Making and Breaking Cooperative Relations 13, 2000 (2000), 213–237.
[16] Simos Gerasimou, Radu Calinescu, Stepan Shevtsov, and Danny Weyns. 2017. UNDERSEA: An Exemplar for Engineering Self-Adaptive Unmanned Underwater Vehicles. In 2017 IEEE/ACM 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS). 83–89. https://doi.org/10.1109/SEAMS.2017.19
[17] David Halasz. 2022. From Systems to Ecosystems: Rethinking Adaptive Safety. In Proceedings of the 17th Symposium on Software Engineering for Adaptive and Self-Managing Systems. 48–52. https://doi.org/10.1145/3524844.3528067
[18] Thomas Hardjono and Ned Smith. 2019. Decentralized trusted computing base for blockchain infrastructure security. Frontiers in Blockchain 2 (2019), 24.
[19] M. Usman Iftikhar and Danny Weyns. 2014. ActivFORMS: Active Formal Models for Self-Adaptation. In Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (Hyderabad, India) (SEAMS 2014). ACM, 125–134. https://doi.org/10.1145/2593929.2593944
[20] Audun Jøsang, Roslan Ismail, and Colin Boyd. 2007. A survey of trust and reputation systems for online service provision. Decision Support Systems 43, 2 (2007), 618–644.
[21] Smita Khade, Swati Ahirrao, Shraddha Phansalkar, Ketan Kotecha, Shilpa Gite, and Sudeep D. Thepade. 2021. Iris liveness detection for biometric authentication: A systematic literature review and future directions. Inventions 6, 4 (2021), 65.
[22] Auqib Hamid Lone and Roohie Naaz Mir. 2019. Consensus protocols as a model of trust in blockchains. International Journal of Blockchains and Cryptocurrencies 1, 1 (2019), 7–21.
[23] Tambiama Madiega. 2021. Artificial intelligence act. European Parliament: European Parliamentary Research Service (2021).
[24] C. Mitchell and Institution of Electrical Engineers. 2005. Trusted Computing. Institution of Engineering and Technology. https://books.google.be/books?id=T99QAAAAMAAJ
[25] Alexander Mühle, Andreas Grüner, Tatiana Gayvoronskaya, and Christoph Meinel. 2018. A survey on essential components of a self-sovereign identity. Computer Science Review 30 (2018), 80–86.
[26] José D'Abruzzo Pereira, Rui Silva, Nuno Antunes, Jorge L. M. Silva, Breno de França, Regina Moraes, and Marco Vieira. 2020. A Platform to Enable Self-Adaptive Cloud Applications Using Trustworthiness Properties. In Proceedings of the IEEE/ACM 15th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (Seoul, Republic of Korea) (SEAMS '20). ACM, 71–77. https://doi.org/10.1145/3387939.3391608
[27] John Rushby. 1984. A trusted computing base for embedded systems. In Proceedings of the 7th DoD/NBS Computer Security Conference. 294–311.
[28] Daniel Schneider, Martin Becker, and Mario Trapp. 2011. Approaching Runtime Trust Assurance in Open Adaptive Systems. In Proceedings of the 6th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (Waikiki, Honolulu, HI, USA) (SEAMS '11). ACM, 196–201. https://doi.org/10.1145/1988008.1988036
[29] Samira Silva, Antonia Bertolino, and Patrizio Pelliccione. 2022. Self-Adaptive Testing in the Field: Are We There Yet?. In Proceedings of the 17th Symposium on Software Engineering for Adaptive and Self-Managing Systems (Pittsburgh, Pennsylvania) (SEAMS '22). ACM, 58–69. https://doi.org/10.1145/3524844.3528050
[30] Gabriela Félix Solano, Ricardo Diniz Caldas, Genaína Nunes Rodrigues, Thomas Vogel, and Patrizio Pelliccione. 2019. Taming Uncertainty in the Assurance Process of Self-Adaptive Systems: A Goal-Oriented Approach. In Proceedings of the 14th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '19). IEEE Press, Montreal, Quebec, Canada, 89–99. https://doi.org/10.1109/SEAMS.2019.00020
[31] Dimitri Van Landuyt, Liliana Pasquale, Laurens Sion, and Wouter Joosen. 2021. Threat modeling at run time: the case for reflective and adaptive threat management (NIER track). In 2021 International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS). 203–209. https://doi.org/10.1109/SEAMS51251.2021.00034
[32] Gereon Weiss, Philipp Schleiss, Daniel Schneider, and Mario Trapp. 2018. Towards Integrating Undependable Self-Adaptive Systems in Safety-Critical Environments. In 13th International Conference on Software Engineering for Adaptive and Self-Managing Systems (Gothenburg, Sweden). ACM, 26–32. https://doi.org/10.1145/3194133.3194157
[33] Danny Weyns. 2019. Software engineering of self-adaptive systems. Handbook of Software Engineering (2019), 399–443.
[34] Danny Weyns, Nelly Bencomo, Radu Calinescu, Javier Camara, Carlo Ghezzi, Vincenzo Grassi, Lars Grunske, Paola Inverardi, Jean-Marc Jezequel, Sam Malek, Raffaela Mirandola, Marco Mori, and Giordano Tamburrelli. 2017. Perpetual Assurances for Self-Adaptive Systems. In Software Engineering for Self-Adaptive Systems III. Assurances, Rogério de Lemos, David Garlan, Carlo Ghezzi, and Holger Giese (Eds.). Springer International Publishing, Cham, 31–63.
[35] Chern Har Yew and Hanan Lutfiyya. 2012. A Middleware and Algorithms for Trust Calculation from Multiple Evidence Sources. In Proceedings of the 7th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '12). IEEE Press, Zurich, Switzerland, 83–88.
[36] Yue Yu, Huaimin Wang, Bo Liu, and Gang Yin. 2013. A Trusted Remote Attestation Model Based on Trusted Computing. In 2013 12th IEEE International Conference on Trust, Security and Privacy in Computing and Communications. 1504–1509. https://doi.org/10.1109/TrustCom.2013.183
[37] Eric Yuan and Sam Malek. 2012. A Taxonomy and Survey of Self-Protecting Software Systems. In Proceedings of the 7th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '12). IEEE Press, Zurich, Switzerland, 109–118.
[38] Silvan Zeller, Narges Khakpour, Danny Weyns, and Daniel Deogun. 2020. Self-Protection against Business Logic Vulnerabilities. In IEEE/ACM 15th International Symposium on Software Engineering for Adaptive and Self-Managing Systems. 174–180. https://doi.org/10.1145/3387939.3391609