Verification of Multi-Agent Systems with Public Actions
against Strategy Logic
Francesco Belardinelli
Department of Computing, Imperial College London, UK and Laboratoire IBISC, UEVE, France
Alessio Lomuscio
Department of Computing, Imperial College London, UK
Aniello Murano
DIETI, Universit`a degli Studi di Napoli, Italy
Sasha Rubin
University of Sydney, Australia
Model checking multi-agent systems, in which agents are distributed and thus may have
different observations of the world, against strategic behaviours is known to be a complex
problem in a number of settings. There are traditionally two ways of ameliorating this
complexity: imposing a hierarchy on the observations of the agents, or restricting agent
actions so that they are observable by all agents. We study systems of the latter kind,
since they are more suitable for modelling rational agents. In particular, we define multi-
agent systems in which all actions are public and study the model checking problem of
such systems against Strategy Logic with equality, a very rich strategic logic that can
express relevant concepts such as Nash equilibria, Pareto optimality, and due to the
novel addition of equality, also evolutionary stable strategies. The main result is that
the corresponding model checking problem is decidable.
Keywords: Strategy Logic, Multi-agent systems, Imperfect Information, Verification,
Formal Methods
1. Introduction
Logics expressing individual and joint strategic abilities offer powerful formalisms for
reasoning about the behaviour of rational agents in multi-agent systems (MAS), a subject
of growing interest in the area of formal methods for Artificial Intelligence. Coalition
Preprint submitted to Artificial Intelligence January 13, 2020
Logic [1] and Alternating-time Temporal Logic (ATL) [2] were among the first and most
influential logics that were introduced for this purpose. These logics can be used to
express formally what states of affairs coalitions of agents may bring about in a MAS
irrespective of what other agents in the system may do. For example, in a scenario where
several autonomous robots are competing for resources, a coalition of two robots may be
able to enforce that a particular resource can be brought under their control, irrespective
of the actions of the other robots in the system.
Over the years ATL has been enriched in a number of directions, including by incor-
porating epistemic operators to reason about both the knowledge and the strategic power
of the agents in the system [3,4,5,6], and by accounting explicitly for the resources
agents have available [7]. More recently, formalisms more expressive than ATL have been
introduced. In the framework of Strategy Logic (SL) strategies are first-class objects that
can be named and associated with agents [8,9,10,11]. This enables the representation of
game-theoretic concepts, such as Nash equilibria, that cannot be rendered by formalisms
such as ATL, but are of high importance in MAS. Like ATL, SL has also been combined
with epistemic concepts [12,13,14,15].
A key focus of attention in these lines of work concerns formal verification, notably
the model checking problem [16], of MAS against strategy-based specifications expressed
in these languages. Various methods and accompanying implementations have been
developed supporting ATL and variations [7,17,14,18,19,20,21]. These range from
explicit to symbolic model checkers, as well as SAT-based engines. By using these tools,
practical scenarios ranging from strategic games [7] to autonomous vehicles [22,23] have
been analysed and debugged.
A crucial consideration in assessing the practical feasibility of verification via model
checking is the computational complexity of its decision problem. In this light, an at-
tractive feature of ATL lies in the fact that its model checking problem is PTIME-
complete [2]. This is, however, limited to the case of perfect information, i.e., under
the assumption that the agents in the system have full visibility of its global state. In
MAS this assumption is of limited relevance, as the agents can normally access only a
fraction of the information available. Much more important in applications is the case
of imperfect information, particularly in the context of perfect recall, where agents in
the system have full memory of their local histories. While this is a compelling set-up
from a modelling point of view, it is challenging from a verification standpoint, as the
corresponding model checking problem is undecidable [24]. It follows that the model
checking problem for all extensions of ATL, including Strategy Logic, is also undecidable
under the assumptions of perfect recall and imperfect information.
Given this limitation, it is of interest to identify cases for which the model checking
problem for strategic reasoning is decidable even under perfect recall and imperfect in-
formation. This paper provides one concrete setting, relevant for applications, where we
show it to be the case.
More specifically, there appear to be three possible directions to tame undecidability
in this context. One option involves restricting the syntax of the specification languages.
This option generally results in a loss of expressiveness; however, useful specification
patterns might be identified within the fragment [25]. A second possibility might concern
modifying the standard semantics for the specification language in question. This might
involve amending the standard notion of strategy or considering minor modifications of
the underlying complete information and perfect recall setting [26]. A third line of attack
consists in identifying semantical subclasses of MAS, still analysed under perfect recall
and incomplete information, for which the model checking is decidable. In the following
we pursue this latter option.
Contribution. In this work we introduce and study a variant of SL under incomplete
information (henceforth SLi), and exemplify its applicability in the context of MAS. In
particular, we describe a number of formulas of SLi that capture important concepts,
such as winning strategies, Nash equilibria, evolutionary stable strategies. We observe
that the corresponding model checking problem is undecidable in general, but identify a
subclass of MAS for which the same question is decidable. The subclass that we isolate
and investigate consists of systems of agents that can communicate only via public
actions. Examples of such systems include games with fully observable (public) moves,
open-cry auction protocols where all bidding is public, and, more generally, systems
evolving via broadcasting actions. Clearly, this is a broad class of systems of interest
in applications. We analyse the related complexity and show that the model-checking
problem for this subclass is in (k+2)-EXPTIME, where k is the quantifier-block depth of
the formula to be checked. We also provide a (k−1)-EXPSPACE lower bound. Thus,
this subclass provides a middle ground between the full observability case (which is well-
understood, more tractable, but has limited expressiveness and applicability) and the
partial observability case (which is undecidable, but extremely expressive).
Related Work. The work here presented builds upon and extends the framework of
Strategy Logic [9,11]. In those papers, SL is interpreted on concurrent game structures
and, barring the exceptions below, it is typically employed and analysed under complete
information and perfect recall. In contrast, here we use a variant of interpreted systems
as the underlying semantics and study the verification problem under the assumption of
imperfect information and perfect recall.
Variants on the semantics of Strategy Logic have been previously explored. In [27]
an alternative setting is studied in which strategies that are not bound to agents are not
propagated when temporal operators are evaluated. Under this semantics, the model
checking problem becomes undecidable. The notion of dependency between strategies in
SL is analysed in [28]. In this work, when a strategy x is quantified in an SL formula,
it depends on all other strategies quantified before it. In particular, the value of x
on a given history depends on the value of other strategies on all histories. The same
paper introduces, motivates, and studies weaker dependencies (e.g., when a strategy
x is quantified, its value on a history h depends only on the values of earlier quantified
strategies on prefixes of h). Further, [29] introduces an extension of SL in which strategies
can be nondeterministic and there is an unbinding operator that allows agents to revoke
their strategies. These extensions allow one to express the notion of sustainable control
for an agent, while retaining a decidable model checking problem.
In addition to the above, the verification of MAS against various strategy-based
specifications, enriched with epistemic specifications, has been investigated in [7,17,
30,18,19,14]. However, this has been typically limited to observational or positional
semantics, where an agent’s strategy depends only on her current state. In contrast we
here analyse the case of perfect recall, which is undecidable. These works also focus
mainly on the interplay between strategic and epistemic modalities. While we show that
epistemic modalities are also supported in our setting, they are shown to be derivable
and do not need to be introduced as first-class citizens. [31] also introduced an epistemic
strategy logic and studied the corresponding model checking problem. In this setup,
however, the strategies the agents use are directly encoded in their local states, resulting
in a rather different framework. As in [14], the focus is on observational semantics, while
here we deal with the arguably more complex case of perfect recall.
Reasoning about strategic abilities in MAS under imperfect information is known to
be computationally difficult even for logics less expressive than SL. For example, model
checking MAS against ATL specifications goes from PTIME-complete to ∆ᵖ₂-complete
under incomplete information and memoryless strategies [32], and it is undecidable under
perfect recall [24]. The results of this paper confirm these findings and extend them to
Strategy Logic.
Other approaches have been introduced to retain decidability when reasoning about
strategies in MAS. A notable direction involves imposing a hierarchy on the information,
or the observations, of the agents [33,34,35,36,37]. This constrains, in a well-structured
way, the information that agents possess. Hierarchies have also been studied in the context
of variants of SL, thereby achieving similar results [37]. While we share the motivation
and the result of these approaches, these restrictions are considerably different from ours.
In particular, we impose no a priori hierarchy on the information and the observations
of the agents. In other words, hierarchies can be represented in the framework presented
here, but they are not a constitutive feature.
Differently from the contributions above, we here introduce the use of public actions
as a way to retain decidability for the verification problem. There are strong correspon-
dences between our notion of public action and communication by broadcasting, which
has previously been studied in the context of MAS in [38,39,40]. While the semantics is
similar, previous approaches focused on the modelling and axiomatisations of epistemic
and temporal-epistemic logics on these structures. We instead study variants of SL
and focus on the model checking problem.
Also related to the above are recent proposals to approximate the verification problem.
For instance, [41] studies an approximation of the model checking problem for ATL
under imperfect information, specifically one in which the ATL operators admit fixpoint
characterisations. By doing so, while the original undecidable problem cannot be solved,
a closely related verification question is offered a solution. Differently from [41], we solve
the same verification problem but under the restricted communication assumption. Also,
we here work on SL, which is strictly more expressive than ATL.
Moreover, we note that there are points of contact between the present work and
developments in Dynamic Epistemic Logic (DEL). DEL [42,43] is a framework whereby
an epistemic logic is augmented with dynamic modal operators to model information
updates. A noteworthy model update operator in DEL is truthful public announcement.
In DEL, as well as in related earlier frameworks [38,44], this is modelled via an epis-
temic model update incorporating the postconditions of the action. While the framework
here presented and DEL for public announcements address related classes of MAS, the
technical approaches are rather different. While in DEL the models are instantaneous
representations of the agents’ epistemic alternatives and time is modelled via the update
operations, our models, in line with interpreted systems semantics, ATL, and SL models,
incorporate primitively the concepts of time in the notion of run and history. Moreover,
normally the syntax of DEL does not include operators for strategic abilities.
Related to our contribution is also some of the work in epistemic planning [45,46],
whereby one asks whether there is a sequence of event-models such that the resulting
model satisfies a given epistemic formula. Similarly to our findings, the problem is
undecidable in general, but becomes decidable when public actions are assumed [45].
Moreover, [47,48] study a reduction of epistemic planning with public announcements to
classical planning. The encoding allowing this reduction uses an idea similar to what we
use here, in that both reductions record the current state for each possible initial state.
An important application of the formalism presented in this paper concerns reasoning
about rich solution concepts such as Nash equilibria in which agents have LTL goals, as
well as generalisations thereof known as strong rational synthesis. In [49] the strong ra-
tional synthesis problem with LTL objectives and aggregation of finitely many objectives
is shown to be 2EXPTIME-complete. In [50], Equilibrium Logic is introduced to reason
about Nash equilibria in games with LTL and CTL objectives. However, both cases as-
sume that agents have perfect information. In case agents have imperfect information,
the existence of Nash equilibria is undecidable for three agents, but decidable for two
(cf. [51]). In the case of multiple agents, [37] shows that decidability for a language
similar to the one here presented, which can also express the existence of Nash equilib-
ria, can be retained by imposing a hierarchy on the agent observations. In contrast, in
Section 3 we show that one can regain decidability, and thus decide the existence of Nash
equilibria, by assuming that agents use public actions only, while making no restriction
on the agent observations.
On the purely technical side of our contribution, we remark that the proof of our
decidability result uses ideas similar to those employed in Littman's PhD thesis [52,
Lemma 6.1] and in [53], in that one can convert a deterministic partially
observable Markov decision process (POMDP) into an MDP with exponentially many
states. The states of the derived MDP are functions D : S → S, where D(t) = s says that,
after a fixed sequence of actions and observations, if t were the initial state then s would
be the current state. The main differences with our work are that i) we consider a set
of initial states (rather than a probability distribution over the initial states), and ii) we
model check a very expressive logic, rather than simply solving the synthesis problem.
Previous Work. Preliminary versions of this work were published in non-archival
conference papers by the same authors [54,55,56]. There are a number of notable differ-
ences that are introduced here to make the paper uniform, mature, and self-contained.
First, differently from [54,55], this paper uses interpreted systems, rather than concur-
rent game structures, as the underlying semantical model. This enables us, as we did
in [56] (and this is the only overlap with that paper), to give a more intuitive and precise
definition of what it means for an action of an agent to be public. Indeed, in [54,55]
we only referred to joint actions being public, not individual ones. Second, our logic no
longer includes explicit epistemic operators, but does allow equality between strategies.
Third, we include extensive examples of the expressiveness of the logic. Fourth, we pro-
vide a different, conceptually simpler, and fully detailed proof of the main decidability
result, including a complexity analysis. Indeed, instead of giving a reduction to monadic
second-order logic [55], or using automata-theoretic techniques [54], we here give a reduc-
tion to an intermediate logic (quantified CTL) that has served as a useful and natural
bridge between strategic logics and monadic second-order logic.
Outline. The rest of the paper is organised as follows.
In Section 2 we define the syntax and semantics of SLi, as well as provide a number
of formulas of SLi and discuss their importance and relevance for expressing the strategic
abilities of agents in MAS.
In Section 3 we introduce the subclass of systems in which agents operate only
through public actions, prove that the corresponding model checking problem
is decidable, and provide upper and lower bounds on its complexity.
In Section 4 we summarise the main findings of the paper and point to future
directions of research.
2. Strategy Logic under Imperfect Information
In this section we introduce Strategy Logic under imperfect information (SLi), a logic
for strategic reasoning in multi-agent systems. The logic is inspired by Strategy Logic
(SL) [10], in which strategies are treated syntactically in the language. This is accom-
plished by having quantification on first-order variables ranging over strategies. In SL
the strategy quantifier ∃x is read as “there exists a strategy x”, and the binding operator
bind(i, x) is read as “agent i uses strategy x”. Moreover, SL includes the linear-temporal
operators X and U [57] for reasoning about the temporal evolution of the system in which
the agents are bound to particular strategies. The logic we introduce, SLi, inherits these
features, and allows us to express game-theoretic concepts such as existence of winning
strategies, Nash equilibria, etc. [58].
The main difference between SL and SLi is that the semantics of SLi permits one
to reason about agents with imperfect information (other similar extensions of SL, such
as [37,14], are discussed in the related-work section). Thus, if an agent is associated
with a strategy x, then strategy x will prescribe actions that are enabled by the protocol
of that agent, and that only depend on the local state of the agent; cf. [10,14]. SLi is
equipped with two types of strategy quantifiers (∃ᵒ and ∃ˢ); these are inspired by the
distinction between the objective and subjective semantics of alternating-time temporal
logic (ATL) [59]. Intuitively, they correspond to whether or not the quantified strategy
is known to succeed by the agents that use it. Finally, SLi has the ability to express
whether two strategies are equal; this is inspired by SL with Graded Modalities [60] and
first-order logic with equality.
2.1. Syntax of SLi
In what follows we fix AP to be a finite, non-empty set of atoms, and Ag to be a
finite, non-empty set of agents. Further, let Var be a finite, non-empty set of strategy
variables, denoted by x, y, . . . and x_i, y_i, . . .
Definition 1 (SLi). The following grammar defines the SLi formulas ϕ:
ϕ ::= p | x = y | ¬ϕ | ϕ ∨ ϕ | Xϕ | ϕ U ϕ | ∃ᵒx ϕ | ∃ˢx ϕ | bind(i, x)ϕ
where p ∈ AP, x, y ∈ Var, i ∈ Ag.
Without loss of generality, we assume that every variable x is quantified at most
once in ϕ (this can be done, without changing the size of the formula, by renaming
variables). We use the standard abbreviations, e.g., ϕ₁ ∧ ϕ₂ abbreviates ¬(¬ϕ₁ ∨ ¬ϕ₂),
true abbreviates p ∨ ¬p, Fϕ and Gϕ are shorthands for true U ϕ and ¬F¬ϕ respectively,
and ∀x ϕ abbreviates ¬∃x ¬ϕ (for both types of strategy quantifier).
Discussion. The formal semantics of SLi is presented in Section 2.2. Here, we give
intuitions about how to interpret formulas of SLi.
Inspired by the distinction between the objective and subjective semantics of strat-
egy modalities in alternating-time temporal logic under imperfect information [59],
we use ∃ᵒ to refer to the objective interpretation and ∃ˢ to refer to the subjective one.
Quantified strategies are intended as being coherent and uniform, i.e., if an agent
uses a strategy then the actions it prescribes should be available to that agent
(coherency), and these actions should only depend on the local state of the agent
(uniformity). Thus, a formula ∃ᵒx ϕ (or ∃ˢx ϕ) is read as “there exists a strategy x,
that is coherent and uniform for all agents that use it (in the subformula ϕ), such
that ϕ holds”. Note that, normally, both in SL and in ATL the formula is simply
read as “there exists a strategy x such that ϕ”.
The formula x = y tests whether the strategies denoted by x and y are equal. This
is inspired by the distinction between first-order logic with and without equality.
Moreover, it allows us to express complex properties such as uniqueness of solution
concepts (see Example 5), and the existence of an evolutionary stable strategy
(see Example 6). Previous versions of SL (e.g., [10]) do not include equality on
strategies, although some versions allow strategy counting [60].
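As a small illustration of how these ingredients combine (the atom win, agent 1, and the property ϕ(x) are hypothetical placeholders, not formulas from the paper), one may write:

```latex
% Agent 1 has a coherent, uniform strategy that it (subjectively) knows
% achieves the hypothetical goal "win":
\exists^{s} x \; \mathsf{bind}(1, x)\, \mathbf{F}\, \mathit{win}

% Equality expresses uniqueness: there is exactly one strategy satisfying \varphi:
\exists^{o} x \,\big( \varphi(x) \wedge \forall^{o} y \,( \varphi(y) \rightarrow x = y ) \big)
```

The second pattern is the first-order-logic idiom for uniqueness, which is expressible here only because SLi includes equality on strategies.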
To define the semantics we need some further notation defined below.
Free Variables and Agents. We introduce the set free(ϕ) to denote the set of free variables
and agents appearing in a formula ϕ (cf. [10]). Intuitively, a variable x is free in ϕ if one
needs to associate x with a strategy in order to evaluate ϕ, and an agent a is free in ϕ
if one needs to bind a strategy to a in order to evaluate ϕ.
Definition 2 (Free variables and agents). The set free(ϕ) ⊆ Ag ∪ Var representing free
agents and variables is defined inductively as follows:
free(p) = ∅
free(x = y) = {x, y}
free(¬ϕ) = free(ϕ)
free(Xϕ) = Ag ∪ free(ϕ)
free(ϕ₁ U ϕ₂) = Ag ∪ (free(ϕ₁) ∪ free(ϕ₂))
free(ϕ₁ ∨ ϕ₂) = free(ϕ₁) ∪ free(ϕ₂)
free(∃ᵒx ϕ) = free(∃ˢx ϕ) = free(ϕ) \ {x}
free(bind(i, x)ϕ) = (free(ϕ) \ {i}) ∪ {x} if i ∈ free(ϕ), and free(ϕ) otherwise
A formula ϕ without free agents (resp., variables), i.e., with free(ϕ) ∩ Ag = ∅ (resp.,
free(ϕ) ∩ Var = ∅), is agent-closed (resp., variable-closed). If ϕ is both agent- and
variable-closed, it is called a sentence.
Agents using strategies in a subformula. We introduce the set use(x, ϕ) to denote the
agents using strategy x in evaluating formula ϕ. Formally, let use(x, ϕ) consist of all
agents i ∈ Ag such that ϕ has a subformula of the form bind(i, x)ϕ′ with i ∈
free(ϕ′). This set will be used to provide an imperfect information semantics to SLi.
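As a worked instance of Definition 2 and of use (the formula and the assumption Ag = {1, 2} are invented for illustration), consider ϕ = ∃ᵒx bind(1, x) F p:

```latex
\begin{align*}
free(p) &= \emptyset\\
free(\mathbf{F}\,p) &= Ag \cup free(p) = \{1,2\}
  && \text{($\mathbf{F}$ abbreviates a $\mathbf{U}$ formula)}\\
free(\mathsf{bind}(1,x)\,\mathbf{F}\,p) &= (\{1,2\}\setminus\{1\}) \cup \{x\} = \{2,x\}\\
free(\varphi) &= \{2,x\}\setminus\{x\} = \{2\}\\
use(x,\varphi) &= \{1\}
  && \text{($\mathsf{bind}(1,x)$ occurs with $1 \in free(\mathbf{F}\,p)$)}
\end{align*}
```

Note that agent 2 remains free, so ϕ is variable-closed but not agent-closed: it becomes a sentence only once agent 2 is also bound to some strategy.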
2.2. Interpreted Systems
We interpret formulas of SLi on interpreted systems. Interpreted systems are a formal
model for multi-agent systems where each agent is defined by its local states, actions,
and transition function [38]. In this section we recall this semantics, whereas in Section 3
we introduce a novel variant in which we distinguish specific actions that we call public.
Notation. We write [n] for the set {i ∈ N : 1 ≤ i ≤ n}. The length of a finite
sequence u ∈ X* is denoted by |u| ∈ N. For i ≥ 1, we write u_i for the i-th element
of u, and u_{≤i} for the prefix of u of length i. Then, we denote its first element u_1 by
first(u), and its last element u_{|u|} by last(u). To ease notation, we sometimes write u(i)
instead of u_i. The empty sequence is denoted by ε. The length of an infinite sequence is
the cardinal ω. For a vector v ∈ ∏_j X_j we denote the i-th co-ordinate of v by v_i. The
powerset of X is denoted P(X). We use the following convention: let f, f′ : X → Y be
partial functions and ∼ a binary relation on Y; then whenever we write f(x) ∼ f′(x′)
we mean, in particular, that both f(x) and f′(x′) are defined.
Definition 3 (Interpreted Systems). An interpreted system (IS) is a tuple
S = (Ag, {L_i, Act_i, P_i, τ_i}_{i∈Ag}, S_0, AP, λ)
where:
1. Ag = [n], for some n ∈ N, is a finite non-empty set of agents.
2. For each agent i ∈ Ag:
(a) L_i is a finite non-empty set of local states.
(b) Act_i is a finite non-empty set of local actions.
(c) P_i : L_i → P(Act_i) \ {∅} is the local protocol.
(d) τ_i : L_i × ∏_{j∈Ag} Act_j → L_i is a partial function, called the local transition
function, such that for every l ∈ L_i and a ∈ ∏_{j∈Ag} Act_j, τ_i(l, a) is defined iff
a_i ∈ P_i(l).
3. S_0 ⊆ ∏_{i∈Ag} L_i is the set of initial global states.
4. AP is the finite set of atomic propositions (also called atoms).
5. λ : AP → P(∏_{i∈Ag} L_i) is a labelling function.
Note that we do not assume that the sets Act_i of local actions are disjoint. Intuitively,
an interpreted system describes the synchronous evolution of a group Ag of agents: at
any point in time, each agent i is in some local state l ∈ L_i, which encodes the (possibly
partial) information she has about the state of the system. The local protocol P_i specifies
which actions from Act_i agent i can execute from each local state. The execution of a
joint action a ∈ ∏_{j∈Ag} Act_j gives rise to the transition from the present local state l to the
successor state τ_i(l, a). The actions in P_i(l) are said to be available to agent i in local
state l ∈ L_i. Thus, the local transition function τ_i(l, a) is defined iff action a_i is available
to agent i in local state l.
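The components of Definition 3 can be rendered in a small executable sketch (the states, actions, protocols, and dynamics below are invented for illustration; they are not a system from the paper):

```python
# Toy rendering of Definition 3: two agents with local states, local
# actions, protocols P_i, and partial local transition functions tau_i.

L = {1: ["l0", "l1"], 2: ["m0", "m1"]}   # local states (hypothetical)
Act = {1: ["a", "b"], 2: ["c"]}          # local actions (hypothetical)

def P(i, l):
    """Local protocol P_i: non-empty set of actions available in l."""
    if i == 1:
        return {"a", "b"} if l == "l0" else {"a"}
    return {"c"}

def tau_local(i, l, joint):
    """Partial local transition tau_i(l, a): defined iff a_i in P_i(l)."""
    if joint[i] not in P(i, l):
        return None  # undefined
    if i == 1:
        return "l1" if joint[i] == "b" else l
    # Agent 2's next state depends on the joint action (tau_i's input):
    return "m1" if joint[1] == "b" else "m0"

def tau_global(state, joint):
    """Global transition: tau(s, a) = s' iff tau_i(s_i, a) = s'_i for all i."""
    nxt = {}
    for i in state:
        t = tau_local(i, state[i], joint)
        if t is None:
            return None  # undefined as soon as one local transition is
        nxt[i] = t
    return nxt

print(tau_global({1: "l0", 2: "m0"}, {1: "b", 2: "c"}))  # {1: 'l1', 2: 'm1'}
```

Note how partiality of the global transition function falls out of partiality of the local ones: if any agent's chosen action is not in its protocol, the whole joint transition is undefined.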
We now recall some standard terminology about interpreted systems (IS) that allows
us to reason about temporal, epistemic and strategic properties in an IS.
Global States, Joint Actions, and Transitions. We introduce the following notions:
the set S ≜ ∏_{i∈Ag} L_i is called the set of global states;
the set Act ≜ ∏_{i∈Ag} Act_i is called the set of joint actions;
the partial function τ : S × Act → S is called the global transition function; it is
defined so that τ(s, a) = s′ iff for every i ∈ Ag, τ_i(s_i, a) = s′_i;
the set of all individual actions ∪_{a∈Ag} Act_a is denoted act.
Notice that transitions are synchronous executions of individual actions, one for each
agent in the system. Atomic facts about the system at every point in time are given by
the labelling function λ.
Runs and Histories. A run (resp. history) is an infinite (resp. finite non-empty) sequence
r = r(1)r(2)··· of global states starting in an initial state and respecting the global
transition function, i.e., r(1) ∈ S_0 and for every t < |r| there exists a joint action a ∈ Act
such that τ(r(t), a) = r(t + 1). The set of all histories is denoted by Hist. Notice that
S_0 ⊆ Hist. For a run (or history) r, agent i ∈ Ag, and index t < |r|, let r(t)_i be the local
state of agent i in the global state r(t) (we use this notation as it is easier to read than
the alternative).
Perfect recall of observations. We assume that agents have perfect recall. Intuitively,
this means that they remember the full history of their observations. Formally, we define
the indistinguishability relation of agent i as the equivalence relation ∼_i over global states
S defined as follows: s ∼_i s′ iff s_i = s′_i, that is, two global states are indistinguishable for
agent i iff agent i's local state is the same in both [38]. In order to capture agents with
perfect recall of their observations, this relation is lifted to histories in a synchronous,
pointwise fashion: h ∼_i h′ iff (1) |h| = |h′| and (2) h(t) ∼_i h′(t) for 1 ≤ t ≤ |h|.
Strategies. We define a strategy to be a function σ : Hist → ∪_{j∈Ag} Act_j from histories to
actions. Then, let Str denote the set of all strategies. A strategy σ is coherent for agent i
if action σ(h) is available to agent i in local state last(h)_i, that is, σ(h) ∈ P_i(last(h)_i); it
is uniform for agent i if h ∼_i h′ implies σ(h) = σ(h′), that is, on indistinguishable histories
agent i is required to execute the same action [59,61].
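The pointwise lifting of ∼_i to histories and the uniformity requirement can be sketched as follows (the concrete global states and strategies are hypothetical; global states are encoded as dicts from agents to local states, histories as tuples of such dicts):

```python
# Sketch of perfect-recall indistinguishability and uniformity.

def indist_states(i, s, t):
    """s ~_i t iff agent i's local state is the same in both."""
    return s[i] == t[i]

def indist_histories(i, h, hp):
    """Synchronous, pointwise lifting: same length and pointwise ~_i."""
    return len(h) == len(hp) and all(
        indist_states(i, s, t) for s, t in zip(h, hp))

def is_uniform(sigma, i, histories):
    """sigma (history -> action) is uniform for agent i if it agrees on
    every pair of i-indistinguishable histories."""
    return all(sigma(h) == sigma(hp)
               for h in histories for hp in histories
               if indist_histories(i, h, hp))

# Two histories agent 1 cannot tell apart (only agent 2's state differs):
h1 = ({1: "l0", 2: "m0"},)
h2 = ({1: "l0", 2: "m1"},)
print(indist_histories(1, h1, h2))  # True

# This strategy reads only agent 1's own local state, so it is uniform for 1:
sigma = lambda h: "a" if h[-1][1] == "l0" else "b"
print(is_uniform(sigma, 1, [h1, h2]))  # True
```

A strategy that inspected agent 2's local state (e.g. branching on `h[-1][2]`) would fail the uniformity check for agent 1 on the same pair of histories.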
Assignments. A valuation is a function ν : Var → Str that maps variables to strategies.
A valuation ν is ϕ-compatible if for every variable x ∈ Var, the strategy ν(x) is coherent
and uniform for every agent in use(x, ϕ). A binding is a function β : Ag → Var that
maps agents to variables. Note that the composition ν(β(·)) maps agents to
strategies (in the game-theory literature, such a function is called a strategy profile). An
assignment χ is a pair (ν, β) such that for all i ∈ Ag, the strategy ν(β(i)) is coherent
and uniform for i. An assignment (ν, β) and a history h determine a unique infinite run
π(h, ν, β) in which agents play according to the assigned strategies, i.e., agent i plays
starting from h according to ν(β(i)). Formally, π(h, ν, β) is defined as the run π such
that (1) π_{≤|h|} = h; and (2) for t > |h|, π_t = τ(π_{t−1}, a), where a_i = ν(β(i))(π_{≤t−1}) for
every i ∈ Ag.
We define variants of valuations and bindings. For x ∈ Var and σ ∈ Str, the variant
ν[x ↦ σ] is the valuation ν′ that agrees with ν except that ν′ maps x to σ. Similarly, for
i ∈ Ag and x ∈ Var, the variant β[i ↦ x] is the binding β′ that agrees with β except that β′(i) = x.
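The construction of the induced run π(h, ν, β) can be sketched as follows, under the simplifying assumptions of a total global transition function and a one-agent toy system on integer states (all names and dynamics are illustrative):

```python
# Sketch of pi(h, nu, beta): starting from history h, each agent i
# repeatedly plays its assigned strategy on the current history prefix
# (perfect recall: strategies see the whole history so far).

def induced_run_prefix(h, strategy_of, tau, steps):
    """Extend history h by `steps` states; strategy_of maps each agent
    to its strategy (the composition nu(beta(.)) of the text)."""
    run = list(h)
    for _ in range(steps):
        hist = tuple(run)                                  # pi_{<= t-1}
        joint = {i: strategy_of[i](hist) for i in strategy_of}
        run.append(tau(run[-1], joint))                    # pi_t = tau(pi_{t-1}, a)
    return run

# Toy dynamics: the single agent 1 always plays "inc", incrementing the state.
tau = lambda s, a: s + (1 if a[1] == "inc" else 0)
strategy_of = {1: lambda hist: "inc"}
print(induced_run_prefix((0,), strategy_of, tau, 3))  # [0, 1, 2, 3]
```

The key point mirrored from the definition is that each next joint action is computed from the history prefix built so far, not just the last state.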
We now give the semantics of the satisfaction relation for the logic.
Definition 4 (Semantics). For a given IS S and an SLi formula ϕ, we define the satis-
faction relation (S, h, ν, β) |= ϕ inductively on the structure of ϕ, where h is a history, ν
is a ϕ-compatible valuation, and β is a binding such that (ν, β) is an assignment:
(S, h, ν, β) |= p if last(h) ∈ λ(p)
(S, h, ν, β) |= x = y if for every history h′ extending h, ν(x)(h′) = ν(y)(h′)
(S, h, ν, β) |= ¬ϕ if (S, h, ν, β) ̸|= ϕ
(S, h, ν, β) |= ϕ ∨ ϕ′ if (S, h, ν, β) |= ϕ or (S, h, ν, β) |= ϕ′
(S, h, ν, β) |= Xϕ if (S, π_{≤|h|+1}(h, ν, β), ν, β) |= ϕ
(S, h, ν, β) |= ϕ U ϕ′ if there exists j ≥ |h| such that (S, π_{≤j}(h, ν, β), ν, β) |= ϕ′
and for all |h| ≤ k < j, (S, π_{≤k}(h, ν, β), ν, β) |= ϕ
(S, h, ν, β) |= ∃ᵒx ϕ if for some σ ∈ Str that is coherent and uniform for every
agent in use(x, ϕ), we have that (S, h, ν[x ↦ σ], β) |= ϕ
(S, h, ν, β) |= ∃ˢx ϕ if for some σ ∈ Str that is coherent and uniform for every
agent in use(x, ϕ), we have that (S, h′, ν[x ↦ σ], β) |= ϕ
for every history h′ ∼_i h and every i ∈ use(x, ϕ)
(S, h, ν, β) |= bind(i, x)ϕ if (S, h, ν, β[i ↦ x]) |= ϕ
Note that the satisfaction relation is well defined in the sense that the valuation-
binding pairs introduced at every step of the inductive definition are indeed assignments
and that the valuations are compatible. We prove this in Appendix A.
It is routine to show (by structural induction) that if ϕ is a sentence, i.e., free(ϕ) = ∅,
then (S, h, ν, β) |= ϕ does not depend on the assignment (ν, β). Thus, for a sentence ϕ we
write (S, h) |= ϕ to mean that (S, h, ν, β) |= ϕ for some (equivalently, every) assignment
(ν, β). Further, we say that ϕ is true in S, and write S |= ϕ, iff for every initial state
s ∈ S_0, (S, s) |= ϕ.
Remark 1. We provide the derived semantics of the universal strategy quantifiers ∀ᵒ
and ∀ˢ:
(S, h, ν, β) |= ∀ᵒx ϕ if for every σ ∈ Str that is coherent and uniform for every
agent in use(x, ϕ), we have that (S, h, ν[x ↦ σ], β) |= ϕ
(S, h, ν, β) |= ∀ˢx ϕ if for every σ ∈ Str that is coherent and uniform for every
agent in use(x, ϕ), we have that (S, h′, ν[x ↦ σ], β) |= ϕ
for some h′ ∼_i h where i ∈ use(x, ϕ)
Note that in the subjective semantics (i.e., ∃ˢ and ∀ˢ) we only consider reachable
epistemic alternatives h′, as histories are defined to start in initial states and to be
consistent with the transition function τ.
We also observe that the formal meaning of the ∀ˢ quantifier seems misaligned with the
usual intuition about universal quantification (since it includes an existential quantifica-
tion over histories). However, this reading is consistent with the subjective interpretation
of the operator [[A]] in ATL. Specifically, no matter what strategy we consider, ϕ is
epistemically consistent.
Remark 2. The subjective existential quantifier ∃ˢ allows us to introduce an epistemic operator Kᵢ (for certain formulas) that represents the individual knowledge of agent i as "truth in indistinguishable histories" [38]. Indeed, define Kᵢϕ ≜ ∃ˢz.bind(i, z)ϕ, where z is a fresh variable not appearing in ϕ and i ∉ free(ϕ). Since i does not appear free in ϕ, the truth of Kᵢϕ does not depend on the particular strategy assigned to z, and therefore we have that:

(S, h, ν, β) ⊨ Kᵢϕ if for every history h′ ∼ᵢ h we have that (S, h′, ν, β) ⊨ ϕ
We remark that the operator Kᵢ defines a notion of knowledge based on truth in indistinguishable histories, a mainstream notion in knowledge representation and multi-agent systems [38]. On the other hand, the epistemic and strategic dimensions of multi-agent systems can be combined in many different ways (see [59, 15] for some examples). Such an analysis is beyond the scope of the current contribution.
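The reading of Kᵢ as "truth in all indistinguishable histories" can be made concrete on a finite set of histories. The sketch below is illustrative only: the relation `indist` and the valuation `holds` stand in for the indistinguishability relation ∼ᵢ and the formula ϕ, and are not part of the paper's formal machinery.

```python
# Sketch: K_i as "truth in all indistinguishable histories" over a finite
# set of histories. `indist` and `holds` are illustrative stand-ins.

def knows(agent, h, histories, indist, holds):
    """K_agent phi holds at h iff phi holds at every history h'
    that `agent` cannot distinguish from h."""
    return all(holds(h2) for h2 in histories if indist(agent, h, h2))

# Toy example: histories are strings; agent 0 only observes the length.
histories = ["ab", "ac", "abc"]
indist = lambda i, h1, h2: len(h1) == len(h2)   # same observations
holds = lambda h: "a" in h                       # the formula "phi"

print(knows(0, "ab", histories, indist, holds))  # True: "ab" and "ac" contain "a"
```

Note that `knows` quantifies over all histories the agent cannot distinguish from the current one, matching the clause for Kᵢ given above.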
Remark 3. We discuss the definition of equality =. Informally, we consider two strategies to be equal if they agree on all histories. However, since formulas of SLi cannot talk about the past, we may restrict this definition to histories extending the current one. That is, (S, h) ⊨ σ = σ′ iff σ and σ′ coincide on all histories extending h (which includes h itself). This ensures that = also does not talk about the past, which is technically helpful (in the proof of Proposition 2).

Note that the behaviour of σ and σ′ cannot be distinguished in SLi without the subjective quantifiers ∃ˢ and ∀ˢ. Indeed, well-known principles characterising equality, such as the substitution of identicals ∀ᵒx∀ᵒy(x = y → (ϕ ↔ ϕ[x/y])), are valid whenever neither ∃ˢ nor ∀ˢ appears in ϕ.
Furthermore, we might want to consider a notion of equality that also accounts for extensions of epistemic alternatives of the current history h. It turns out that such "subjective" equality =ˢ can be defined by using = and the epistemic operator Kᵢ introduced in Remark 2. More formally, define x =ˢ y ::= ⋀_{i∈Ag} Kᵢ(x = y). Then, the meaning of =ˢ is as follows:

(S, h, ν, β) ⊨ x =ˢ y if for every agent i, every history h′ ∼ᵢ h, and every history h″ extending h′, we have that ν(x)(h″) = ν(y)(h″)

Since =ˢ is definable in terms of = and Kᵢ, we take the latter as primitive. Note, however, that formula ∀ᵒx∀ᵒy(x =ˢ y → (ϕ ↔ ϕ[x/y])) is still not valid unrestrictedly.
Remark 4 (Syntactic Fragments of SLi). It is well-known that, in the perfect information setting, SL subsumes the alternating-time temporal logic ATL* [2], and therefore also ATL and the temporal logics LTL, CTL, CTL*.

Similarly, in the imperfect information setting of this work, ATL* can be seen as a syntactic fragment of SLi. To show this, we present the syntax of ATL*, where we explicitly distinguish between strategy operators ⟨⟨A⟩⟩ᵒ (resp. ⟨⟨A⟩⟩ˢ) interpreted according to the objective (subjective, resp.) semantics for ATL*:

ϕ ::= p | ¬ϕ | ϕ ∨ ϕ | ⟨⟨A⟩⟩ᵒψ | ⟨⟨A⟩⟩ˢψ
ψ ::= ϕ | ¬ψ | ψ ∨ ψ | Xψ | ψUψ

where p ∈ AP and A ⊆ Ag.
Here we do not provide the semantics of ATL* but refer to [59] for full details.

We can define a translation t from ATL* to SLi that is the identity on atoms (i.e., t(p) = p), that commutes with the Boolean and temporal operators (e.g., t(¬ϕ) = ¬t(ϕ) and t(Xψ) = X t(ψ)), and whose translation of strategy formulas is given as follows, for Ag = [n] and A = [m]:

t(⟨⟨A⟩⟩ᵒψ) = (∃ᵒxᵢ)_{1≤i≤m} (∀ᵒxᵢ)_{m<i≤n} (bind(i, xᵢ))_{1≤i≤n} t(ψ)
t(⟨⟨A⟩⟩ˢψ) = (∃ᵒxᵢ)_{1≤i≤m} ⋀_{i∈A} Kᵢ (∀ᵒxᵢ)_{m<i≤n} (bind(i, xᵢ))_{1≤i≤n} t(ψ)

Specifically, the translation of the operator ⟨⟨A⟩⟩ᵒ closely corresponds to its informal reading: there exist (uniform) strategies for the agents in coalition A such that, no matter what the agents in Ag \ A do, it is the case that ψ holds. As regards ⟨⟨A⟩⟩ˢ, its translation states that "there exist (uniform) strategies for the agents in coalition A such that in all histories indistinguishable for some agent in A, no matter what the agents in Ag \ A do, it is the case that ψ holds".
In particular, the truth-preserving implication from ⟨⟨A⟩⟩ᵒψ to t(⟨⟨A⟩⟩ᵒψ) holds independently of the assumptions on knowledge and memory, as the choice of any strategy for the adversary coalition Ag \ A generates some path in the iCGS. As for the converse implication, we claim that if ⟨⟨A⟩⟩ᵒψ is false, then t(⟨⟨A⟩⟩ᵒψ) is false as well. Indeed, suppose that for every strategy available to coalition A, there exists some path λ such that ψ is false on λ. Given such a path λ, we can define a joint strategy for the adversary coalition Ag \ A that basically returns the actions played by Ag \ A along λ. The fact that agents have perfect recall allows them to play possibly different actions whenever they end up in an indistinguishable state along the path, and therefore the strategy for Ag \ A is well-defined. Moreover, the strategy can be assumed to be uniform w.l.o.g. by simply associating the same action with indistinguishable histories. As a result, the translation t(⟨⟨A⟩⟩ᵒψ) is false as well.
Further, note that a naive translation of ⟨⟨A⟩⟩ˢ that makes use of a suite (∃ˢxᵢ)_{1≤i≤m} of subjective quantifiers will not achieve the same effect, as for each quantifier ∃ˢxᵢ a possibly different set of histories (i.e., those indistinguishable for agent i) may be selected.

Following the intuitions above, by suitably adapting the semantics in [59] to interpreted systems, it can be shown that a formula ϕ in ATL* is true in an IS S iff its translation t(ϕ) is. Hence, our logic SLi is a (conservative) extension of ATL* under imperfect information, both in its objective and subjective interpretations.
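The quantifier and binding prefix of the objective translation can be assembled purely syntactically. The sketch below is an illustration only: "Eo x" and "Ao x" are ASCII renderings of the objective existential and universal strategy quantifiers, and the function name is an assumption, not notation from the paper.

```python
# Sketch of the prefix construction in t(<<A>>^o psi), for Ag = [n] and
# A = [m]. "Eo"/"Ao" render the objective quantifiers; names are illustrative.

def translate_objective(n, m, psi):
    exists = "".join(f"Eo x{i} " for i in range(1, m + 1))      # strategies for A
    forall = "".join(f"Ao x{i} " for i in range(m + 1, n + 1))  # strategies for Ag\A
    binds  = "".join(f"bind({i}, x{i}) " for i in range(1, n + 1))
    return exists + forall + binds + psi

print(translate_objective(3, 2, "t(psi)"))
# Eo x1 Eo x2 Ao x3 bind(1, x1) bind(2, x2) bind(3, x3) t(psi)
```

The existential block ranges over the coalition A = [m], the universal block over the adversaries Ag \ A, and every agent is then bound to its strategy variable.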
2.3. MAS Specifications in SLi
In this section we illustrate the use of SLi for the specification of strategic interplay in
MAS. As we show below, SLi is a very expressive specification language to reason about
MAS under incomplete information.
Example 1. [Winning strategies] We begin by observing that since ATL* formulas can be expressed in SLi (Remark 4), SLi can express specifications often used in voting ("a coercer can ensure that the voter will eventually either have voted for a given candidate or be punished" [41]), bridge endplay ("a given player can ensure that her team takes more than half of the remaining tricks" [41]), scheduler systems (mutual exclusion and lack of starvation [14]), and anonymity protocols (such as dining cryptographers [14]). The corresponding SLi specifications used in these contexts are variations of the property expressing that a player in a game has a winning strategy.

Suppose S represents a card game between multiple players in which the atom points^p_j represents that player j has scored p points (see Example 7 for more details). The SLi formula

winning₁ ≜ ⋁_{p<q} (points^p_2 ∧ points^q_1)

expresses that player 1 has scored more points than player 2. This can be generalised to player 1 having scored more points than any of the other players. Let end be an atom denoting that the game has ended, and define the SLi formula

ψ ≜ bind(1, x) bind(2, y) F(end ∧ winning₁)

that expresses that if player 1 uses strategy x and player 2 uses strategy y then eventually the game ends with player 1 having more points than player 2.
Consider the formula schema ϕ ≜ ∃x∀y.ψ, which expresses, intuitively, that player 1 has a strategy that dominates all of player 2's strategies. We will consider all 4 variations of this schema in which the quantifiers are subjective or objective. Since these are sentences, we consider, for a given history h, whether (S, h) ⊨ ϕ. Note that in all cases the strategy quantified by x must be coherent and uniform for agent 1, since only agent 1 uses strategy x (formally, use(x, ∀y bind(1, x) bind(2, y) F(end ∧ winning₁)) = {1}); similarly, throughout we will assume that the strategy quantified by y must be coherent and uniform for agent 2. For simplicity, we will assume that h is a history of length 1, i.e., player i (for i = 1, 2) has been dealt a set Hᵢ of cards that the other player cannot see, and the game is about to commence.
1. The sentence ∃ᵒx∀ᵒy.ψ represents that there is a strategy σ₁ for player 1 such that for every strategy σ₂ for player 2, if each player uses their strategy starting from h, player 1 will win. In words, player 1, with hand H₁, can defeat player 2 if his hand is H₂.

2. The sentence ∃ˢx∀ᵒy.ψ means that there is a strategy σ₁ for player 1 such that for every h′ ∼₁ h and every strategy σ₂ for player 2, if each player uses their strategy starting from h′, player 1 will win. In words, player 1, with hand H₁, can defeat player 2 no matter what his hand is.

3. The sentence ∃ᵒx∀ˢy.ψ means that there exists a strategy σ₁ for player 1 such that for all strategies σ₂ for player 2 there exists h″ ∼₂ h such that, if each player uses their strategy, then starting at h″ player 1 will win. In words, player 1, with hand H₁, can ensure that player 2 will consider it possible that player 2 (not knowing player 1's hand) will be defeated.

4. The sentence ∃ˢx∀ˢy.ψ means that there exists a strategy σ₁ for player 1 such that for every h′ ∼₁ h and every strategy σ₂ for player 2, there exists h″ ∼₂ h′ such that, if each player uses their strategy, then starting at h″ player 1 will win. In words, player 1 has a strategy that she knows player 2 will think may defeat him (player 2).
Observe that ∃ˢ logically implies ∃ᵒ, whereas the converse does not hold. Thus, e.g., ∃ˢx∀ˢy.ψ implies ∃ᵒx∀ˢy.ψ. Moreover, the formulas ∃ᵒx∀ᵒy.ψ and ∃ˢx∀ᵒy.ψ have the same interpretation as the formulas ⟨⟨1⟩⟩ᵒψ and ⟨⟨1⟩⟩ˢψ in ATL* respectively. However, there are no simple translations of the formulas ∃ᵒx∀ˢy.ψ and ∃ˢx∀ˢy.ψ into ATL*, since the latter cannot express both the subjective and objective interpretations of quantifiers.
In what follows, let x̄ denote a tuple (x₁, x₂, …, xₙ) of strategy variables, and let bind(i, xᵢ)_{i∈[n]} stand for the binding prefix bind(1, x₁) bind(2, x₂) ⋯ bind(n, xₙ).
Example 2. [Dependencies of coalitions] By alternating quantifiers, SLi can express various dependencies of coalitions in games. For instance, the formula

∃ᵒx₁∃ᵒx₃∀ᵒx₂ bind(i, xᵢ)_{i∈[3]} ψ    (1)

represents that players 1 and 3 can collude to ensure that ψ holds no matter what player 2 does. Compare this with the formula

∃ᵒx₁∀ᵒx₂∃ᵒx₃ bind(i, xᵢ)_{i∈[3]} ψ    (2)

which is similar except that player 3's strategy may depend on player 2's strategy. Note that (1) can be expressed by the ATL* formula ⟨⟨{1,3}⟩⟩ᵒ t(ψ); in contrast, formula (2) cannot be expressed in ATL*.
Example 3. [Game-theoretic solution concepts] SLi can express classic notions of strategic behaviour in multiplayer games, e.g., best response and Nash equilibrium. Consider a game where the objective for player i is encoded by the formula ψᵢ (objectives may be arbitrary SLi formulas; typically, they are just LTL formulas). The SLi formula

BRᵢ(x̄) ≜ (∃ᵒy bind(j, xⱼ)_{j≠i} bind(i, y) ψᵢ) → bind(j, xⱼ)_{j≠i} bind(i, xᵢ) ψᵢ

expresses that xᵢ is a best response to (xⱼ)_{j≠i}, that is, if agent i can achieve goal ψᵢ by playing the strategy y, then she can already do so by playing strategy xᵢ. Building on this, the SLi formula

NE(x₁, …, xₙ) ≜ ⋀_{i∈[n]} BRᵢ(x̄)

expresses that each strategy xᵢ is a best response to the strategies of the other players.
Nash equilibria (NE), as expressed by the formula above, describe optimal play in two-player zero-sum games of imperfect information [58]. SLi can express properties that build on NE, such as the correctness of fair division protocols [62]. Related notions, such as (strong) rational synthesis [49] and the simple one-alternation strategy formulas for two-player zero-sum games [8], can also be expressed in SL and thus in SLi. Also, NE are used as a basis for other solution concepts. For instance, subgame-perfect equilibria of certain infinite-duration games can be expressed in SLi [60]. Subgame-perfect equilibria are arguably better suited to graph games because they eliminate some implausible NE [63]. Finally, SLi can express solution concepts such as k-resilience and t-immunity that are also used in rational distributed computing [64, 65].
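The Boolean-goal reading of BRᵢ and NE above can be checked directly on a finite one-shot game. The sketch below is illustrative: `achieves(profile, i)` plays the role of "ψᵢ holds under the bound strategies", and all names and the toy game are assumptions, not part of the paper's formalism.

```python
# Sketch: BR_i and NE with Boolean goals on a finite one-shot game.
# `achieves(profile, i)` stands for "psi_i holds"; names are illustrative.

def best_response(profile, i, strategies, achieves):
    """BR_i: if some deviation y lets agent i achieve psi_i, then x_i does too."""
    can_deviate = any(
        achieves(profile[:i] + (y,) + profile[i+1:], i) for y in strategies[i]
    )
    return (not can_deviate) or achieves(profile, i)

def nash_equilibrium(profile, strategies, achieves):
    return all(best_response(profile, i, strategies, achieves)
               for i in range(len(profile)))

# Toy 2-agent coordination game: agent i achieves its goal iff both agree.
strategies = [("a", "b"), ("a", "b")]
achieves = lambda p, i: p[0] == p[1]
print(nash_equilibrium(("a", "a"), strategies, achieves))  # True
print(nash_equilibrium(("a", "b"), strategies, achieves))  # False
```

As in the formula BRᵢ, the implication is vacuously true when no deviation achieves the goal, so profiles where an agent cannot win at all still count as equilibria.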
We now discuss in more detail a subjective interpretation of the formulas expressing best response and NE. Recall that Kᵢϕ is expressible as long as i is not free in ϕ. So, the SLi formula KLRᵢ(y, x̄), defined by

bind(j, xⱼ)_{j≠i} bind(i, y) ψᵢ → Kᵢ bind(j, xⱼ)_{j≠i} bind(i, xᵢ) ψᵢ

expresses that for every strategy y to achieve goal ψᵢ, strategy xᵢ is known by agent i to be at least as good a response to (xⱼ)_{j≠i} as y.

Then, the formula KBRᵢ(x̄), defined by

(∃ˢy bind(j, xⱼ)_{j≠i} bind(i, y) ψᵢ) → Kᵢ bind(j, xⱼ)_{j≠i} bind(i, xᵢ) ψᵢ

expresses that for every strategy profile (xⱼ)_{j≠i}, if some strategy y for agent i achieves goal ψᵢ, then strategy xᵢ is known to be a best response to (xⱼ)_{j≠i}.

Finally, the SLi formula KNE(x₁, …, xₙ), defined by ⋀_{i∈[n]} KBRᵢ(x̄), expresses an epistemic variant of NE, according to which the strategy each agent currently plays is not just a best response, but is known to be so by each agent.
Figure 1: Illustration of epistemic variant of NE
To illustrate these formulas, consider the turn-based game of imperfect information in Fig. 1, in which agent 1 plays first (atoms true in a state are drawn to the left of that state). Observe that such a game can be represented as an IS in which both agents 1 and 2 are uncertain about the initial state. Agent 1 has goal XXp, while agent 2 has goal XXq. Notice that a uniform strategy for agent 1 consists of a single move, either L (left) or R (right); whereas agent 2's strategies must be uniform on {s_L, s′_L} and {s_R, s′_R}, even though she might choose different actions for the two knowledge sets. Now consider the strategy profile where agent 1 plays L and agent 2 plays L in all states. We can check that playing L is a best response for agent 1, that is, the formula KBR₁(x̄) is true in s₀ for the relevant strategy profile x̄. Indeed, even though playing L does not guarantee that agent 1 achieves his goal, the antecedent of KBR₁ is false as well: since p is false in s′_{RL}, there is no strategy that guarantees that agent 1 also achieves his goal from the indistinguishable state s′₀, and therefore agent 1 does not know the alternative to be a best response. However, is always playing L a known best response for agent 2? Although playing R would guarantee that agent 2 knows he achieves his goal, this knowledge can already be obtained by playing L. As a result, playing L is a best response for agent 2 as well. Since the chosen strategy profile is known to be a best response for both agents, it satisfies the epistemic variant KNE of Nash equilibrium.
Example 4. [Kingmaker] Consider the SLi formula

(∃ᵒx₁∃ᵒx₂∃ᵒx₃ NE_{ϕ₁}(x₁, x₂, x₃)) ∧ (∃ᵒx₁∃ᵒx₂∃ᵒx₃ NE_{ϕ₂}(x₁, x₂, x₃))

where

ϕ₁ = ⋁_{p<q} (points^p_1 ∧ points^q_2) says that player 2 gets more points than player 1,
ϕ₂ = ⋁_{p<q} (points^p_2 ∧ points^q_1) says that player 1 gets more points than player 2,

and NE_ϕ(x̄) = NE(x̄) ∧ bind(i, xᵢ)_{i∈[3]} ϕ expresses that x̄ is a Nash equilibrium in which ϕ holds.

The whole formula says that there are two Nash equilibria in which player 3 can decide which of the other players gets more points. This expresses a form of kingmaker property that occurs in certain forms of poker with mixed strategies, such as Kuhn's three-player, four-card poker [66], in which there are four cards, with values 1, 2, 3, 4, each of the three players gets a single card (visible only to them), and after some rounds of betting the player with the highest card who has not folded wins the pot.
Since SLi includes equality of variables, we can express concepts that involve comparisons between strategies.

Example 5. [Unique Equilibria] The formula

∃ᵒx̄.(NE(x̄) ∧ ∀ᵒȳ.(NE(ȳ) → ⋀_{i∈[n]} xᵢ = yᵢ))

expresses that there is a unique Nash equilibrium. Deciding if a game has a unique Nash equilibrium is relevant to the predictive power of the Nash equilibrium as a solution concept. Indeed, in case there are multiple equilibria, the outcome of the game cannot be uniquely pinned down.¹

¹We remark that a different extension of SL was introduced in [67] to capture, amongst other things, uniqueness of Nash equilibria.

Example 6. Consider a symmetric two-player game where p(x, y) is the payoff to player 1 if she uses strategy x and the opponent uses strategy y. Recall that a strategy x is evolutionarily stable if, intuitively, no mutant strategy can replace x if all players are playing x [68]. Formally, x is an evolutionarily stable strategy if for every y ≠ x, either i) p(x, x) > p(y, x), or ii) p(x, x) = p(y, x) and p(x, y) > p(y, y). In case p(x, y) can only take on a finite number of values, we can express the concept of evolutionarily stable strategies in SLi. Let pᵢ(x, y) be an atom denoting that p(x, y) = i. Then the following SLi formula defines that x is an evolutionarily stable strategy:

∀ᵒy.(x = y ∨ C₁ ∨ (C₂ ∧ C₃))

where C₁ is ⋁_{i>j} (pᵢ(x, x) ∧ pⱼ(y, x)), C₂ is ⋁ᵢ (pᵢ(x, x) ∧ pᵢ(y, x)), and C₃ is ⋁_{i>j} (pᵢ(x, y) ∧ pⱼ(y, y)).
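The two stability conditions of Example 6 can be checked directly on a finite symmetric game. The sketch below is an illustration: the Hawk-Dove-style payoff table and all names are assumptions, not data from the paper.

```python
# Sketch: checking conditions i) and ii) of evolutionary stability from
# Example 6 on a finite symmetric game with an explicit payoff table.
# The payoff table and strategy names are illustrative.

def is_ess(x, strategies, p):
    """x is evolutionarily stable iff for every y != x:
    p(x,x) > p(y,x), or p(x,x) == p(y,x) and p(x,y) > p(y,y)."""
    return all(
        p(x, x) > p(y, x) or (p(x, x) == p(y, x) and p(x, y) > p(y, y))
        for y in strategies if y != x
    )

# Hawk-Dove-style toy payoffs: p(row, column) is the row strategy's payoff.
table = {("H", "H"): 0, ("H", "D"): 2, ("D", "H"): 0, ("D", "D"): 1}
p = lambda a, b: table[(a, b)]
print(is_ess("H", ["H", "D"], p))  # True: condition ii) holds against "D"
print(is_ess("D", ["H", "D"], p))  # False: "H" invades "D"
```

Note that "H" is stable via the second condition: it ties against itself when invaded by "D", but does strictly better against "D" than "D" does against itself.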
Remark 5. Although winning strategies (Example 1) can be expressed in ATL, richer solution concepts, such as Nash equilibria in which agents have LTL goals, are not expressible in ATL, already for three players with reachability/safety goals and perfect information [69, Theorems 3 and 5]. We remark that winning strategies have been used to characterise the existence of Nash equilibria in some special cases [70, 71]; this holds, in particular, for two-player turn-based games of perfect information in which agents have LTL objectives that do not depend on finite prefixes of the play [8, Proof of Lemma 1]. A detailed study of the preservation of the existence of Nash equilibria under bisimulation is given in [72].

In case agents have imperfect information, the existence of Nash equilibria is undecidable for three agents, and decidable for two agents [51]. For multiple agents, [37] show that one can regain decidability (for a strategy logic similar to ours that can express the existence of Nash equilibria) by imposing a hierarchy on the agent observations. In contrast, in Section 3 we will show that one can regain decidability, and thus decide the existence of Nash equilibria, assuming agents use public actions (and we make no restriction on the agent observations).
2.4. The Model-checking Problem
In the rest of the paper we consider the following decision problem.
Definition 5 (Model Checking). The model-checking problem is defined as follows: given an interpreted system S and an SLi sentence ϕ, decide whether S ⊨ ϕ.
As expected, this problem is undecidable in general.
Theorem 6. Model checking IS against SLi specifications is undecidable.
To see this, we observe that the model-checking problem for concurrent-game structures under perfect recall and imperfect information (iCGS) against specifications expressed in alternating-time temporal logic (ATLiR) is undecidable. In fact, the latter problem is undecidable already for formulas of the form ⟨⟨A⟩⟩Gp, where |A| = 2 and |Ag| = 3 [24]. Since this is an adaptation of existing results, we only sketch the reduction. Specifically, in Appendix B we define the semantics of SLi over iCGS and prove that the model-checking problem for a subclass of iCGS, namely the square iCGS, against SLi is inter-reducible in polynomial time to the model-checking problem for IS against SLi (this result is novel and may be of independent interest). A key property of square iCGS is that if agent i finds two states indistinguishable, then after applying the same joint action, the resulting states are still indistinguishable to agent i (this captures that, in an IS, an agent updates its local state using its local transition function τᵢ). Then, in Appendix C, we show that the undecidability proof in [24] can be adapted to hold for square iCGS.
As discussed in the introduction, a number of restrictions on the general setting have been explored to obtain decidability. In the next section we will define and study a class of IS for which the model-checking problem is decidable. Moreover, in order to make statements about the computational complexity of the problem, we need to specify how the inputs S and ϕ are represented. We use an explicit representation. In particular, the size of the SLi formula, denoted |ϕ|, is the number of its symbols, and the size of the IS, denoted |S|, is the number of transitions in its global transition function restricted to the reachable global states. Here, a state s is reachable if it occurs in some history of S (recall that histories start in initial states and are consistent with the global transition function). In particular, we do not measure the size of the labelling function, i.e., we assume that the number of atoms is fixed.
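The size measure |S| can be computed by a standard forward exploration from the initial states. The sketch below is illustrative: the tiny system (states as integers, a deterministic `tau`) is an assumption made for the example, not a construction from the paper.

```python
# Sketch: measuring |S| as the number of transitions restricted to the
# reachable global states, via BFS from the initial states. The toy system
# below is illustrative.
from collections import deque

def reachable_size(initial, actions, tau):
    seen, queue, transitions = set(initial), deque(initial), 0
    while queue:
        s = queue.popleft()
        for a in actions:
            t = tau(s, a)
            transitions += 1              # one transition out of a reachable state
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return len(seen), transitions

# Toy system: 4 states on a cycle, all reachable from 0 with steps +1 and +2.
tau = lambda s, a: (s + a) % 4
states, transitions = reachable_size([0], [1, 2], tau)
print(states, transitions)  # 4 8
```

Only transitions leaving reachable states are counted, matching the restriction of the global transition function to reachable global states.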
3. Public Action Interpreted Systems and SLi

In this section we introduce a class of interpreted systems and prove that their model-checking problem, against SLi specifications, is decidable. This result should be contrasted with the undecidability of the general case (Theorem 6).
3.1. Interpreted Systems with Public Actions
We introduce a class of interpreted systems (Definition 3) in which we explicitly distinguish public actions, that is, actions that are observable to all agents.

Definition 7 (IS with Public Actions). An interpreted system with public actions is a tuple

S = (Ag, {Lprᵢ, Actᵢ, PbActᵢ, Pᵢ, τᵢ}_{i∈Ag}, S₀, AP, λ)

such that

(Ag, {Lᵢ, Actᵢ, Pᵢ, τᵢ}_{i∈Ag}, S₀, AP, λ)

is an interpreted system where, for every agent i ∈ Ag:

1. PbActᵢ ⊆ Actᵢ is the set of public actions of agent i;
2. Lᵢ = Lprᵢ × ∏_{j∈Ag}(PbActⱼ ∪ {∆}) is the set of local states, where ∆ is a fresh symbol;
3. the local transition function τᵢ satisfies the property that τᵢ(l, a) = (p′, a′) implies that for all j ∈ Ag, if aⱼ ∈ PbActⱼ then a′ⱼ = aⱼ, and otherwise a′ⱼ = ∆.

The set Lprᵢ consists of the private (local) states of agent i. By the condition on the local transition functions, the public actions performed last are copied into the successor local states of all agents; and in case the last action is not public, ∆ is copied instead. As a result, such actions are observable to every agent.

Remark 6. Even if an action is not in PbActₐ, it may still be observed by all agents, i.e., it is recorded in the private local state of everybody. Thus, PbActₐ should really be considered as the set of all explicitly public actions of agent a.
Clearly, any system following Definition 7 is an interpreted system. Also notice that any interpreted system is isomorphic to some system adhering to Definition 7 for which PbActᵢ = ∅ for all i ∈ Ag. To prove the latter fact, consider the mapping θ : l ↦ (l, ∆̄), where ∆̄ is the tuple (∆, …, ∆); this can be lifted to a bijection θ between global states, which has the property that τ(s, a) = s′ iff τ(θ(s), a) = θ(s′). Given this, for convenience we will call systems conforming to Definition 7 simply interpreted systems (IS).

The next definition singles out interpreted systems in which all actions are public.

Definition 8. Let PAIS (Public-Action Interpreted Systems) denote the set of interpreted systems with public actions such that Actᵢ = PbActᵢ for all i ∈ Ag.
We now discuss the expressive power of PAIS for modelling AI scenarios. Although all actions in PAIS are public, they can still model private updates of an agent's private state. For instance, if an agent's private local state contains a Boolean variable x, then we can model a private update of the value of x as follows. First, for every global initial state with x = 0 we ensure there is an identical global initial state except that x = 1, and vice versa. Second, the agent can update its variable with the public action "toggle the value of x", which has the effect of replacing the value of x by 1 − x. In particular, then, although the other agents know that the variable was toggled, if they could not distinguish between x = 0 and x = 1 before the action, then they cannot afterwards either.

Also, PAIS can model that an agent allows the other agents to see part of its local state. For instance, this can be done with the public action "the value of x is 0", which we assume can only be performed by the agent who owns the variable x, and only if indeed x = 0.
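The "toggle" trick above can be sketched in a few lines. The state representation and function names below are illustrative assumptions: the point is only that the public action is visible to everyone, yet an observer who could not tell x = 0 from x = 1 initially still cannot afterwards.

```python
# Sketch of the "toggle" trick: everyone sees that a toggle happened (it is
# appended to a public log), but the private variable's value stays hidden.
# All names and the state encoding are illustrative.

def step(state, action):
    x, log = state
    if action == "toggle":
        x = 1 - x                  # private effect on the owner's variable
    return (x, log + [action])     # the action itself is recorded publicly

def observer_view(state):
    _, log = state                 # the observer sees only the public log
    return tuple(log)

s0, s1 = (0, []), (1, [])          # duplicated initial states, x = 0 and x = 1
t0, t1 = step(s0, "toggle"), step(s1, "toggle")
print(observer_view(t0) == observer_view(t1))  # True: still indistinguishable
```

The duplicated initial states mirror the first step of the construction in the text: since both initial states are present, the observer's uncertainty about x is preserved by the (public) toggle.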
Moreover, PAIS can be used to represent several AI scenarios of interest:
1. In community-card games such as Texas hold’em, each player is privately dealt
some cards, which are combined with “community cards” that are dealt face up
on the table. Moreover, all bidding is public. Single rounds, or a bounded number
of rounds, of such games can be modelled as PAIS. Such single rounds appear,
for instance, as endgames and other simplified forms of Poker [73,74,75,66] and
Bridge [41].
2. In epistemic puzzles such as the muddy children puzzle, the Russian cards puzzle, the consecutive numbers puzzle, and the sum-and-product puzzle (see [76]), all communication is public, and therefore they can be modelled as PAIS.
3. In distributed systems one of the basic communication primitives is to broadcast
a message to all other components [77]. The exchange of such messages can be
modelled via public actions.
We now give an example of a PAIS that represents a simple trick-taking card game.

Example 7. [Card Game] Consider an r-player card game parameterised by integers k, l with 1 ≤ l ≤ k. The game is played with r many decks of k cards numbered 1 through k. Each player starts with a subset of size l of their deck of cards that only they can see (the remaining cards are not used in the game). At each round the players simultaneously reveal one card, and the player with the highest revealed card scores a point. The revealed cards are discarded. This is repeated until all the cards have been revealed, and the winner (if any) is the player that has the most points. This game can be formalised as a PAIS with the following components:

– Ag = [r];
– Lprᵢ consists of all pairs of the form (H, p) where H ⊆ [k] represents the cards player i currently holds, and p ∈ [k] ∪ {0} represents the number of points player i currently has;
– Actᵢ = {reveal_m : m ∈ [k]}, i.e., reveal_m is the action of revealing the card with value m. The set of local states Lᵢ consists of elements of the form (H, p, a) where (H, p) ∈ Lprᵢ and a ∈ ∏_{j∈Ag}(Actⱼ ∪ {∆});
– Pᵢ(H, p, a) = {reveal_m : m ∈ H}, i.e., one can only reveal a card one is holding;
– the local transition function τᵢ maps the local state (H, p, a) and joint action a′ = (reveal_{m₁}, …, reveal_{m_r}) to the local state (H′, p′, a′) where H′ = H \ {mᵢ}, and p′ = p if mᵢ ≤ mⱼ for some j ≠ i, or p′ = p + 1 otherwise;
– AP = {points^j_i : i ∈ Ag, j ∈ [k]} ∪ {end};
– λ is defined as follows:
  - it maps end to the global states ((Hᵢ, wᵢ, aᵢ)_{i∈Ag}) such that Hᵢ = ∅ for all i, i.e., end holds if all the cards have been revealed;
  - it maps points^j_i to the global states ((Hᵢ, wᵢ, aᵢ)_{i∈Ag}) such that wᵢ = j, i.e., player i has scored j points.

Note that we do not put the entire history of the play into the local state, as we assume perfect-recall semantics.
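One round of the local transition function τᵢ of Example 7 can be sketched as follows. The sketch keeps only the (hand, points) part of the local states and treats the whole round at once; as with the transition function above, on a tie no player scores.

```python
# Sketch of one round of the card game in Example 7: each player reveals a
# card, the strictly highest card scores a point, and revealed cards are
# discarded. Public-action components are omitted for brevity.

def play_round(local_states, revealed):
    """local_states: list of (hand, points); revealed: card m_i per player i."""
    new_states = []
    for i, (hand, points) in enumerate(local_states):
        assert revealed[i] in hand            # protocol P_i: reveal a held card
        wins = all(revealed[i] > revealed[j]
                   for j in range(len(revealed)) if j != i)
        new_states.append((hand - {revealed[i]}, points + (1 if wins else 0)))
    return new_states

states = [({1, 3}, 0), ({2, 3}, 0)]
states = play_round(states, [3, 2])           # the first player wins the round
print(states)  # [({1}, 1), ({3}, 0)]
```

Note how `wins` mirrors the condition in τᵢ: p′ = p + 1 exactly when mᵢ > mⱼ for all j ≠ i.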
PAIS can be used similarly to model repeated games in normal form with unknown initial types, more complicated scoring mechanisms (for instance, that take ties into account), infinitely many rounds (for instance, by giving each player a "pass" action that does not play a card in that round, or by allowing players to reuse played cards), and turn-based games including endplay scenarios in bridge (as in [41]).
Finally, we compare PAIS to similar formalisms in the literature.

A broadcasting environment [40] is defined as an IS with a distinguished agent called the environment, in which each agent's local state Lᵢ consists of two pieces of information: 1) a private part Pᵢ that only depends on its local actions, and 2) a shared part that is the value of some fixed function obs : L_e → O (the same function for all agents) of the local state of the environment agent. Similarly, as discussed above, a PAIS can model that an agent can update its private variable, as well as allow other agents to observe just part of its local state.

A deterministic partially observable Markov decision process (POMDP) [52, 53] is a POMDP in which the transition function and observation functions are deterministic. In particular, the only stochasticity is in the initial distribution. A PAIS is also a deterministic transition system, except that instead of an initial distribution, it has an initial set of states.
3.2. Model Checking PAIS against SLi specifications
In this section we prove that model checking PAIS against SLi specifications is decidable. Then, in Section 3.5 we provide an analysis of the computational complexity of the problem.
Theorem 9. Model checking PAIS against SLi specifications is decidable.
Before proving Theorem 9, we outline a standard approach for evaluating the complexity of model-checking strategic logics, and discuss how to adapt it to the setting at hand. The basic idea involves encoding strategies σ as trees T. Typically, the domain of T is the set of histories of the system, and a node h is labelled by the action σ(h). A mapping is then made from formulas into a formalism (such as tree automata or a branching-time logic) that can process trees. For instance, one approach used by algorithms for model checking SL [10] and ATL* [2] is to effectively convert ϕ into a tree automaton that accepts exactly the trees T that code the strategies σ that make the formula ϕ true (in the given game structure, or model). This encoding cannot be used in the presence of imperfect information since the set of uniform strategies (for a given agent, in a given structure) is not the language of any tree automaton. Intuitively, the reason for this is that uniformity is a non-local restriction on the labels of nodes of the tree that are "distant cousins", and a tree automaton cannot tell if a distant cousin has the same label.

One way to overcome this problem is to encode a strategy for an agent as a tree whose nodes are the sequences of observations of that agent. In this way, uniformity becomes a local condition. This approach can be used in the case of a single agent in an environment, or of agents whose observations are hierarchical, in the sense that their indistinguishability relations are totally ordered by the refinement relation [78, 36, 37]. However, this encoding cannot be used in the multi-player case in which agents' observations are not hierarchical. The reason, intuitively, is that neither tree of observations is a refinement of the other, and thus the automaton cannot encode the strategies that arise from incomparable observations.
Given the above, we require a novel encoding of strategies, which we now describe. The proposed encoding is based on the following insight: every history h in a PAIS is uniquely determined by a pair (s, α) where s is a state and α is a sequence of joint actions. Given this, strategies in a PAIS can be encoded as labellings of the tree T whose domain consists of all sequences of joint actions. The labelling of a node α ∈ Act* is a function that encodes, for every state s, the action of the strategy given that the starting state is s and the sequence of joint actions was α. Under this encoding, uniformity becomes a local condition on the tree. Also, the fact that all strategies are labellings of the same tree T means that the infinite run determined by an assignment corresponds to an infinite path in T. This allows formalisms like tree automata or branching-time logics to check properties of the path.
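The key insight, that a history is uniquely determined by the pair (s, α) when transitions are deterministic, can be sketched by replaying the joint-action sequence. The toy transition function below is an illustrative assumption, not a construction from the paper.

```python
# Sketch of the encoding insight: with a deterministic global transition
# function, a history is uniquely recovered from (initial state, sequence of
# joint actions), so strategies can be stored as labellings of the tree of
# action sequences. The toy tau is illustrative.

def decode_history(s, alpha, tau):
    """Replay the joint-action sequence alpha from state s."""
    history = [s]
    for a in alpha:
        history.append(tau(history[-1], a))
    return tuple(history)

tau = lambda s, a: s + a                     # toy deterministic transitions
h1 = decode_history(0, (1, 2), tau)
h2 = decode_history(0, (2, 1), tau)
print(h1, h2)  # (0, 1, 3) (0, 2, 3): distinct histories from distinct sequences
```

Because `tau` is a function, two histories from the same initial state coincide exactly when their action sequences do, which is what lets all strategies live on the single tree of action sequences.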
In what follows, we apply this encoding and translate SLi formulas and PAIS into formulas of a branching-time logic, rather than into tree automata. That is, we show how to reduce the model-checking problem of PAIS against SLi specifications to model checking regular trees against Quantified Computation Tree Logic (QCTL*). The logic QCTL* is a generalisation of CTL* that enables quantification over atomic propositions [79]. This quantification will be used to simulate quantification over strategies. Model checking regular trees against QCTL* specifications is decidable (and, in fact, is solved by tree automata) [80]. In fact, QCTL* has been used as an intermediate logic between the low-level machinery of tree automata and strategic logics, such as ATL* with strategy contexts [81] and hierarchical SLi [37].
3.3. Quantified Computation Tree Logic
The language of QCTL adds quantification over atomic propositions to the syntax
of CTL.
Definition 10 (QCTL Syntax). QCTL state formulas φ and path formulas ψ are
defined by the following grammar, where p ∈ AP:
φ ::= p | φ∨φ | ¬φ | Eψ | ∃p.φ
ψ ::= φ | ψ∨ψ | ¬ψ | Xψ | ψUψ
Formulas in QCTL are all and only the state formulas in Def. 10. The intuitive
reading of the linear-time operators X and U is the same as in SLi, whereas E is the
existential path quantifier from CTL. Finally, a quantified formula ∃p.φ is read as “there
exists an assignment to atom p such that φ is true”. Clearly, universal quantification
can be expressed as ∀p.φ ≜ ¬∃p.¬φ.
In order to introduce the semantics of QCTL, we need the notion of a tree. Let D be
a finite set of directions, and let Σ be a finite set of labels. A D-ary domain dom ⊆ D∗
is a non-empty prefix-closed set of strings over D. A Σ-labelling of dom is a function
lab : dom → Σ. A D-ary Σ-labelled tree (or simply tree) is a pair T = (dom(T), lab(T)).
A node of T is an element t ∈ dom, and a path of T from a node t is an infinite sequence
π = t1 t2 . . . such that t1 = t and for every i ≥ 1 there is a d ∈ D such that ti+1 = ti · d
and ti+1 ∈ dom. The set of all paths from t is denoted Paths(t). For i ≥ 1, define
π≥i = πi πi+1 . . ., i.e., the path starting at the ith position of path π. For p ∈ AP and
domain dom, two P(AP)-labellings lab, lab′ of dom are p-equivalent w.r.t. tree T, written
lab =p lab′, if for all t ∈ dom, we have that lab(t) \ {p} = lab′(t) \ {p}, i.e., lab and lab′
may differ only on the labelling of nodes in T by the atom p. We often omit tree T when
it is clear from the context.
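The p-equivalent relabellings are exactly what the quantifier ∃p ranges over. As a minimal Python sketch (the three-node domain and the atoms are toy assumptions for illustration), one can enumerate all labellings that agree with a given one outside of p:

```python
from itertools import product

# Sketch of p-equivalence: enumerate all labellings that agree with
# lab everywhere except possibly on atom p.
def p_variants(lab, p):
    nodes = list(lab)
    for choice in product([True, False], repeat=len(nodes)):
        yield {t: (lab[t] - {p}) | ({p} if c else set())
               for t, c in zip(nodes, choice)}

lab = {"": {"q"}, "0": {"p"}, "00": set()}
variants = list(p_variants(lab, "p"))
assert len(variants) == 8  # 2^3 ways to relabel p on three nodes
# Every variant is p-equivalent to lab: they agree once p is removed.
assert all(v[t] - {"p"} == lab[t] - {"p"} for v in variants for t in lab)
```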
We now introduce the “tree-based” semantics of QCTL. Note that the alternative
semantics of QCTL, the so-called “structure-based” semantics, is not suitable for our
purposes. Both semantics are studied in [80].
Definition 11 (QCTL Semantics). Define the satisfaction relation (T, t) |= φ for QCTL
state formulas φ and (T, π) |= ψ for QCTL path formulas ψ by induction on the formulas,
as follows:
(T, t) |= p if p ∈ lab(t)
(T, t) |= φ1∨φ2 if (T, t) |= φi for some i ∈ {1, 2}
(T, t) |= ¬φ if (T, t) ⊭ φ
(T, t) |= ∃p.φ if for some lab′ =p lab, ((dom(T), lab′), t) |= φ
(T, t) |= Eψ if for some π ∈ Paths(t), (T, π) |= ψ
(T, π) |= φ if (T, π1) |= φ
(T, π) |= ψ1∨ψ2 if (T, π) |= ψi for some i ∈ {1, 2}
(T, π) |= ¬ψ if (T, π) ⊭ ψ
(T, π) |= Xψ if (T, π≥2) |= ψ
(T, π) |= ψ1Uψ2 if for some j ≥ 1, (T, π≥j) |= ψ2 and for all k ∈ [1, j), (T, π≥k) |= ψ1.
A formula φ is true in a tree T, written T |= φ, iff φ is true in the initial node ε, i.e.,
(T, ε) |= φ. The tree-unwinding of a finite-state system is called a regular tree, and its
size is the number of states of the finite-state system. In [80] it is proved that model
checking QCTL, that is, deciding whether a QCTL formula φ is true in a regular tree
T, is decidable. We report this result as the following theorem.
Theorem 12 ([80]). Model checking regular trees against QCTL specifications is decidable.
Later (Theorem 16), we will cite a theorem that gives the complexity of this decision
procedure. Hereafter, our model checking procedure for SLi proceeds by translating the
given SLi formula and PAIS into a QCTL formula and regular tree, and then applying
Theorem 12. We now present this reduction.
3.4. Reducing SLi to QCTL
We first show that one can interchange histories of a PAIS with pairs consisting of a
state and a sequence of joint actions. Define the function µ that maps a state s and a
sequence of joint actions α to the history it determines, i.e., the history that starts in
state s and applies the sequence of joint actions α.
Definition 13. Let S be an IS. Define the function µ : S × Act∗ → Hist(S) such that,
for all s ∈ S:
1. µ(s, ε) ≜ s;
2. µ(s, α · ā) ≜ µ(s, α) · τ(last(µ(s, α)), ā), for α ∈ Act∗.
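Definition 13 can be written out directly in Python; the integer states and the additive transition function below are toy assumptions standing in for an arbitrary deterministic τ, and a history is represented simply as the list of states it visits.

```python
def make_mu(tau):
    """Return mu : (state, sequence of joint actions) -> history,
    represented as the list of states visited (as in Definition 13)."""
    def mu(s, alpha):
        history = [s]                    # base case: mu(s, eps) = s
        for a in alpha:                  # mu(s, alpha·a) = mu(s, alpha) · tau(last, a)
            history.append(tau(history[-1], a))
        return history
    return mu

tau = lambda s, a: s + a                 # toy transition function (an assumption)
mu = make_mu(tau)

assert mu(0, []) == [0]                  # base case
assert mu(0, [1, 2, 3]) == [0, 1, 3, 6]  # inductive case, step by step
```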
The function µ is clearly onto. On the other hand, if S is a PAIS, then µ is also
one-to-one. To see this we will use the following central fact about PAIS.
Fact 1. For all states s′, t′ and joint actions ā, b̄, if τ(s′, ā) = τ(t′, b̄) then ā = b̄.
To see this, simply note that the definition of PAIS implies that after a transition is
taken, the last joint action is written into the local state of every agent.
Proposition 1. In a PAIS S, the function µ : S × Act∗ → Hist(S) is one-to-one.
Proof. To show that µ is one-to-one, we suppose that µ(s, α) = µ(t, β) and show that
s = t and α = β. By definition of µ we have that |α| = |β|; call this length l ≥ 0.
If l = 0 then α = β = ε and so s = µ(s, ε) = µ(t, ε) = t, as required. So, suppose
l > 0. Then s = t, since s = µ(s, ε) = first(µ(s, α)) = first(µ(t, β)) = µ(t, ε) = t.
Finally, to check that α = β, repeatedly apply Fact 1.
The relevance of µ being both onto and one-to-one, i.e., a bijection, is that we can
treat histories in Hist(S) and sequences in S × Act∗ interchangeably. In particular, the
following notation is well defined.
Definition 14. In a PAIS S, for every history h ∈ Hist(S) let state(h) ∈ S and
actions(h) ∈ Act∗ denote the unique elements such that µ(state(h), actions(h)) = h.
The following lemma characterises ∼i:
Lemma 1. For all agents i and histories h, h′, we have that h ∼i h′ iff actions(h) =
actions(h′) and state(h) ∼i state(h′).
Proof. Recall that in an IS two histories are indistinguishable to agent i, i.e., h ∼i h′,
if agent i has the same sequence of local states in both h and h′. Moreover, in a PAIS,
the local state consists of the private state and the tuple of last actions of each agent
(with a dummy action in the first state). Thus, if h ∼i h′ then actions(h) = actions(h′)
(since, in particular, the joint actions are visible) and state(h) ∼i state(h′) (since, in
particular, the initial states of the histories are indistinguishable to agent i). For the
direction from right to left, use the fact that agent i’s local state at a given point in time
is determined by the local transition function τi, and thus only depends on its local state
at the previous point in time and on the last joint action.
In what follows, all trees will have the same domain, i.e., dom ≜ Act∗ (recall that
Act = ∏i∈Ag Acti). We now define labellings over this domain. These labellings will
encode the transition function of the PAIS and agent strategies.
Encoding PAIS
We now describe the labelling labS that encodes the PAIS S; later we will consider
labellings labν that encode valuations ν. The labelling labS captures relevant information
about the system, e.g., it records the current state, given that the history started in state s.
New atoms. To define labS, we introduce the following new sets of atoms:
− {cur(s, s′) : s, s′ ∈ S}; intuitively, cur(s, s′) holds in a node α if s′ is the result of
applying the sequence of actions α to the state s.
− {lastact(i, a) : i ∈ Ag, a ∈ Acti ∪ {∆}}; intuitively, lastact(i, a) holds in a node α
if the last action performed by agent i is a (here ∆ represents the case that no action
has yet been performed).
− {atom(s, p) : s ∈ S, p ∈ AP}; intuitively, atom(s, p) holds in a node α if the atom
p ∈ AP holds after applying the sequence of actions α to the state s.
− {rel(i, s, t) : i ∈ Ag, s, t ∈ S}; intuitively, rel(i, s, t) holds in a node α if the
histories resulting from applying the sequence of actions α starting with s and t,
respectively, are indistinguishable to agent i.
Labelling labS. We now define labS by induction on the length of the elements in dom ≜
Act∗. For the base case, define labS(ε) to be the union of the following sets of atoms:
− {cur(s, s) : s ∈ S}; intuitively, the empty sequence of actions does not advance the
state.
− {lastact(i, ∆) : i ∈ Ag}; intuitively, the first action of each agent is the dummy ∆.
− {atom(s, p) : s ∈ λ(p) and p ∈ AP}; intuitively, the atoms of AP that hold are
those that hold in s.
− {rel(i, s, t) : s, t ∈ S, i ∈ Ag, and s ∼i t}; intuitively, the initial indistinguishability
relations are the given observability relations.
For the inductive case, let α ∈ Act∗, ā ∈ Act, and define labS(α · ā) to consist of the
union of the following sets of atoms:
− {cur(s, t) : for some s′, t = τ(s′, ā) and cur(s, s′) ∈ labS(α)}; intuitively, given that
s was the starting state, the current state is τ(s′, ā) if in the previous step the state
was s′ and the joint action was ā.
− {lastact(i, ai) : i ∈ Ag}; intuitively, if the last direction in the tree was ā, then ai
was the last action of agent i.
− {atom(s, p) : for some t ∈ S, t ∈ λ(p) and cur(s, t) ∈ labS(α · ā)}; intuitively, p
holds now assuming the initial state was s.
− {rel(i, s, t) : for some s′, t′ ∈ S, rel(i, s, t) ∈ labS(α), s′ ∼i t′, and cur(s, s′), cur(t, t′)
∈ labS(α · ā)}; intuitively, the histories starting in s and t are indistinguishable to
agent i if they were so one step prior and the current states are indistinguishable
to agent i.
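The inductive clauses can be iterated along a single branch of the tree. Here is a sketch, in Python, of the cur(·, ·) component of labS only; the three-state toy transition function is an assumption for illustration.

```python
# Sketch of the cur(s, t) component of lab_S along one branch alpha of
# the tree; states and tau below are toy assumptions.
def cur_atoms_along(tau, states, alpha):
    """Yield, per node of the branch, the set {(s, t) : cur(s, t) holds}."""
    cur = {(s, s) for s in states}          # base case: lab_S(eps)
    yield cur
    for a in alpha:                          # inductive case: lab_S(alpha·a)
        cur = {(s, tau(t, a)) for (s, t) in cur}
        yield cur

tau = lambda s, a: (s + a) % 3               # toy transition function
labels = list(cur_atoms_along(tau, {0, 1, 2}, [1, 1]))
# After two steps of action 1, state s has advanced to (s + 2) mod 3,
# matching Lemma 2: cur(s, t) holds iff t = last(mu(s, alpha)).
assert labels[2] == {(0, 2), (1, 0), (2, 1)}
```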
The following properties of the labelling labS follow from the definitions:
Lemma 2. For all α ∈ dom, s, t ∈ S, i ∈ Ag, p ∈ AP:
1. cur(s, t) ∈ labS(α) iff t = last(µ(s, α)).
2. atom(s, p) ∈ labS(α) iff last(µ(s, α)) ∈ λ(p).
3. rel(i, s, t) ∈ labS(α) iff µ(s, α) ∼i µ(t, α).
Proof. We prove the first item by induction on α (the proofs of the other items are
similar). Suppose α = ε. Then cur(s, t) ∈ labS(α) iff s = t (by definition of labS) iff
t = last(µ(s, α)), since µ(s, ε) = s by definition of µ. Suppose α ≠ ε. Then cur(s, t) ∈
labS(α · ā) iff there exists s′ such that t = τ(s′, ā) and cur(s, s′) ∈ labS(α) (by definition
of labS) iff there exists s′ such that t = τ(s′, ā) and s′ = last(µ(s, α)) (by the induction
hypothesis) iff t = τ(last(µ(s, α)), ā) = last(µ(s, α · ā)), since µ(s, α · ā) ≜ µ(s, α) ·
τ(last(µ(s, α)), ā) by definition of µ.
Encoding strategies
Intuitively, a strategy is encoded by a labelling that maps a node α ∈ Act∗ to the
function that maps each state s to the action suggested by the strategy on history µ(s, α).
To capture this we introduce below new atoms str(x, s, a) and a labelling labν of the
domain dom (Def. 15).
New atoms. We introduce the following new set of atoms:
− {str(x, s, a) : x ∈ Var, s ∈ S, a ∈ Act}; intuitively, str(x, s, a) holds at a node
α ∈ Act∗ if the strategy for variable x suggests action a in history µ(s, α).
Recall that act denotes ∪i∈Ag Acti, the set of all possible actions.
QCTL formulas defining strategies, coherence, uniformity. For every x ∈ Var and Z ⊆
Ag, define the following QCTL formulas:
Uniq^x ≜ AG ⋀_{s∈S} ⋁_{a∈act} (str(x, s, a) ∧ ⋀_{b∈act\{a}} ¬str(x, s, b))
Cohe^x_Z ≜ AG ⋀_{i∈Z} ⋀_{s∈S} ⋁_{a∈Pi(si)} str(x, s, a)
Unif^x_Z ≜ AG ⋀_{i∈Z} ⋀_{t,t′∈S} ⋀_{a∈act} [(rel(i, t, t′) ∧ str(x, t, a)) → str(x, t′, a)]
Intuitively, Uniq^x expresses that the atoms str(x, ·, ·) encode a strategy, i.e., a
unique action is associated with every history, and Cohe^x_Z (resp., Unif^x_Z) expresses that
the strategy is coherent (resp., uniform) for the agents in Z. These facts are captured
by the following lemma, whose proof follows from the definitions of the formulas and
Definition 14.
In what follows, a tree T with domain Act∗ will be labelled by P(AP), where AP is the
set of atoms introduced above. In particular, the labelling of T can be decomposed into
two parts: labS, which labels nodes of the tree by the atoms cur(·, ·), lastact(·, ·),
atom(·, ·), and rel(·, ·, ·); and lab, which labels nodes of the tree by atoms of the
form str(·, ·, ·).
Lemma 3. Fix a tree T = (Act∗, labS ∪ lab), a variable x ∈ Var, and a set Z ⊆ Ag of
agents. Consider the relation Rx ⊆ Hist × Act defined by Rx(h, a) iff str(x, state(h), a) ∈
lab(actions(h)). Then:
1. T |= Uniq^x iff Rx represents a strategy, i.e., for every h ∈ Hist there exists a unique
a ∈ Act such that Rx(h, a).
2. If Rx represents a strategy (as in item 1), then T |= Cohe^x_Z iff the strategy is
coherent for the agents in Z.
3. If Rx represents a strategy (as in item 1), then T |= Unif^x_Z iff the strategy is
uniform for the agents in Z.
Proof. For the first item, suppose that T |= Uniq^x and let h be a history. Let α =
actions(h) and s = state(h). Since T |= Uniq^x, there is a unique action a such that
str(x, s, a) ∈ lab(α), as required. Conversely, suppose that Rx represents a strategy and
let α ∈ Act∗. For every s ∈ S let a be the unique action such that Rx(µ(s, α), a). By
definition of Rx, for every s there is a unique action a such that str(x, s, a) ∈ lab(α), as
required.
For the second item, we are given that Rx represents the strategy (as in item 1).
Suppose T |= Cohe^x_Z, let h be a history, and let i ∈ Z. Let α = actions(h) and s =
state(h). Since T |= Cohe^x_Z we have that str(x, s, b) ∈ lab(α) for some b ∈ Pi(si) ⊆ Acti.
Since Rx represents a strategy, this strategy maps h to the unique action a such that
str(x, s, a) ∈ lab(α), and so a = b. Since h was arbitrary, the strategy is coherent.
Conversely, suppose the strategy represented by Rx is coherent, and let α ∈ Act∗, i ∈ Z
and s ∈ S. Since Rx represents a strategy, this strategy maps h ≜ µ(s, α) to the unique
action a such that str(x, s, a) ∈ lab(α). By coherency, a ∈ Pi(si), as required.
For the third item, we are given that Rx represents the strategy (as in item 1).
Suppose T |= Unif^x_Z, let i ∈ Z, and let h, h′ be two histories such that h ∼i h′. By
Lemma 1, we have that α ≜ actions(h) = actions(h′) and t ≜ state(h) ∼i t′ ≜ state(h′).
By Lemma 2, we have that rel(i, t, t′) ∈ labS(α). Say the strategy maps history h to
action a. Then str(x, t, a) ∈ lab(α). Thus also str(x, t′, a) ∈ lab(α), i.e., the strategy
maps history h′ = µ(t′, α) to a. Since h, h′ and i were arbitrary, conclude that the strategy
is uniform for the agents in Z. Conversely, suppose the strategy is uniform for the agents
in Z, and let α ∈ Act∗, i ∈ Z, t, t′ ∈ S and a ∈ Acti. Further, suppose that rel(i, t, t′) ∈
labS(α) and str(x, t, a) ∈ lab(α). By Lemma 2 we have that h ≜ µ(t, α) ∼i h′ ≜ µ(t′, α).
By uniformity, we have that the strategy maps h and h′ to the same action. However,
str(x, t, a) ∈ lab(α) implies that the strategy maps h to a. Thus the strategy also maps
h′ to a, and so str(x, t′, a) ∈ lab(α), as required.
Definition 15 (labelling labν). To every valuation ν : Var → Str we associate the
labelling labν over the atoms str defined as follows:
str(x, s, a) ∈ labν(α) iff ν(x)(µ(s, α)) = a.
In the introduction to this section we suggested that we would encode strategies in
such a way that uniformity becomes a local condition. The next remark explains this.
Remark 7. Uniformity is a property of labν that can be checked at each node independently.
More precisely, the strategy ν(x) is uniform for agent i iff for every node
α ∈ Act∗, states s ∼i s′, and action a ∈ act, we have that str(x, s, a) ∈ labν(α) iff
str(x, s′, a) ∈ labν(α).
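Remark 7 says the uniformity check is purely local. A minimal Python sketch of the per-node test (the node label and the indistinguishability pairs are toy assumptions):

```python
# Uniformity at a single node alpha: the strategy must suggest the same
# action for starting states that are indistinguishable to agent i.
def uniform_at_node(str_label, indist_pairs):
    """str_label: state -> action at one node (the atoms str(x, ., .));
    indist_pairs: pairs (s, s') with s ~_i s'."""
    return all(str_label[s] == str_label[t] for (s, t) in indist_pairs)

label = {"s0": "a", "s1": "a", "s2": "b"}
assert uniform_at_node(label, [("s0", "s1")])       # s0 ~ s1: same action
assert not uniform_at_node(label, [("s1", "s2")])   # s1 ~ s2: actions differ
```

A whole-tree uniformity check is then just the conjunction of this local test over all nodes, which is what makes the property expressible by the formula Unif above.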
Reduction of SLi to QCTL
The following proposition shows how to reduce the model checking problem for SLi
to the same problem for QCTL. Intuitively, we reduce checking whether S satisfies ϕ
under assignment (ν, β) to verifying whether the tree (dom, labS ∪ labν) satisfies the
translation ϕβ,state(h) in QCTL. The domain dom of the tree is Act∗, and its labelling is
labS ∪ labν, where labS and labν were defined earlier.
Proposition 2. For every PAIS S, assignment (ν, β), history h ∈ Hist(S), and SLi
formula ϕ, there is a QCTL formula ϕβ,state(h) that depends on ϕ, S, β and state(h),
such that
(S, h, ν, β) |= ϕ if and only if ((dom, labS ∪ labν), actions(h)) |= ϕβ,state(h)   (3)
Moreover, ϕβ,state(h) is computable in time polynomial in the size of ϕ.
Before we prove this proposition, we show how to use it to conclude that model
checking PAIS against SLi specifications is decidable (Theorem 9).
Proof of Theorem 9. Recall that a tree is regular if it is the tree-unwinding of a finite-state
system. Given a PAIS S, history h, and SLi sentence ϕ, apply the following steps:
1. Pick an assignment (ν, β) so that (dom, labS ∪ labν) is a regular tree;2
2. Form the QCTL formula ϕβ,state(h) from Proposition 2;
3. Decide if (dom, labS ∪ labν) |= ϕβ,state(h) using Theorem 12.
Note that since ϕ is a sentence, its truth does not depend on the particular assignment
(ν, β) chosen. Thus, we have that the answer in the last step is “Yes” iff (S, h) |= ϕ.
Proof of Proposition 2. To prove this proposition we first describe how to construct the
QCTL formula ϕβ,state(h), and then prove that it is correct, i.e., that equivalence (3)
holds.
Constructing ϕβ,state(h). Recall that act = ∪i∈Ag Acti is the set of all actions. We
define the QCTL formula ϕβ,s (for β : Ag → Var and s ∈ S) inductively:
if ϕ is p ∈ AP, define ϕβ,s ≜ atom(s, p).
if ϕ is x = y, define
ϕβ,s ≜ ⋀_{t∈S, a∈act} AG (str(x, t, a) ↔ str(y, t, a))
if ϕ is ϕ′ ∧ ϕ″, define ϕβ,s ≜ (ϕ′)β,s ∧ (ϕ″)β,s.
if ϕ is ¬ϕ′, define ϕβ,s ≜ ¬(ϕ′)β,s.
if ϕ is bind(i, x)ϕ′, define ϕβ,s ≜ (ϕ′)β[i↦x],s.
if ϕ is ∃ox.ϕ′, define
ϕβ,s ≜ ∃(str(x, t, a))_{t∈S, a∈Act} [Uniq^x ∧ Cohe^x_{use(x,ϕ′)} ∧ Unif^x_{use(x,ϕ′)} ∧ (ϕ′)β,s]
where the atoms str(x, t, a) and the formulas Uniq^x, Cohe^x_Z, Unif^x_Z are defined
above (for arbitrary x and Z).
2This is not hard to do: e.g., let β assign each agent a different variable, and for each local state si
fix an action ai ∈ Pi(si) and define ν(x)(h) = ai in case β(i) = x and last(h)i = si.
if ϕ is ∃sx.ϕ′, define ϕβ,s in a similar way to the ∃o-case, except that the last conjunct
(i.e., (ϕ′)β,s) is replaced by
⋀_{t∈S} (rel(i, s, t) → (ϕ′)β,t).
if ϕ is Xϕ′, then ϕβ,s ≜ E(IsPathβ,s ∧ X(ϕ′)β,s).
if ϕ is ϕ′Uϕ″, then ϕβ,s ≜ E(IsPathβ,s ∧ (ϕ′)β,s U (ϕ″)β,s).
In the last two items we use a QCTL path formula that depends on the binding
β : Ag → Var and state s ∈ S, defined as follows:
IsPathβ,s ≜ G ⋀_{i∈Ag} ⋀_{a∈Acti} (X lastact(i, a) → str(β(i), s, a))
where the lastact(i, a) are the new atoms introduced above. Intuitively, IsPathβ,s holds of
a path π starting at node α ∈ dom if π corresponds to an infinite path extending µ(s, α)
in which each agent i follows the strategy associated with variable β(i) ∈ Var.
This completes the construction of ϕβ,s.
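A fragment of the construction above can be rendered as a recursive Python function over toy formula ASTs (tuples whose head names the constructor). The concrete output syntax is an illustrative assumption, and only the simple homomorphic cases are shown:

```python
# Sketch of the translation phi -> phi_{beta,s} for atoms, negation,
# conjunction, and binding; formulas are tuples, output is a string.
def translate(phi, beta, s):
    kind = phi[0]
    if kind == "atom":                    # p  ~>  atom(s, p)
        return f"atom({s},{phi[1]})"
    if kind == "not":                     # ~phi'  ~>  ~ (phi')_{beta,s}
        return f"~({translate(phi[1], beta, s)})"
    if kind == "and":
        return f"({translate(phi[1], beta, s)} & {translate(phi[2], beta, s)})"
    if kind == "bind":                    # bind(i, x) phi'  ~>  (phi')_{beta[i->x],s}
        _, i, x, body = phi
        return translate(body, {**beta, i: x}, s)
    raise ValueError(kind)

phi = ("bind", "i", "x", ("not", ("atom", "p")))
assert translate(phi, {}, "s0") == "~(atom(s0,p))"
```

Note that each case mirrors one clause of the construction, which is why the whole translation is computable in polynomial time: each subformula is visited once.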
Proof that the construction is correct. To prove that equivalence (3) in Proposition
2 holds, we proceed by induction on ϕ.
If ϕ is p ∈ AP, then we have (S, h, ν, β) |= p
iff last(h) ∈ λ(p) (dfn of |= in SLi)
iff last(µ(state(h), actions(h))) ∈ λ(p) (Dfn 14)
iff atom(state(h), p) ∈ labS(actions(h)) (Lemma 2)
iff ((dom, labS ∪ labν), actions(h)) |= atom(state(h), p) (dfn of |= in QCTL)
iff ((dom, labS ∪ labν), actions(h)) |= (p)β,state(h) (construction)
If ϕ is x = y, then we have (S, h, ν, β) |= x = y
iff for all h′, h pref h′ implies ν(x)(h′) = ν(y)(h′) (dfn of |= in SLi)
iff (T, actions(h)) |= ⋀_{t∈S, a∈act} AG (str(x, t, a) ↔ str(y, t, a)) (Lemma 3)
iff (T, actions(h)) |= (x = y)β,state(h) (construction)
where T = (dom, labS ∪ labν).
The case that ϕ is a Boolean combination is immediate from the induction hypothesis.
For instance, if ϕ is ¬ϕ′ then (S, h, ν, β) |= ϕ
iff (S, h, ν, β) ⊭ ϕ′ (dfn of |= in SLi)
iff ((dom, labS ∪ labν), actions(h)) ⊭ (ϕ′)β,state(h) (induction)
iff ((dom, labS ∪ labν), actions(h)) |= ¬(ϕ′)β,state(h) (dfn of |= in QCTL)
iff ((dom, labS ∪ labν), actions(h)) |= ϕβ,state(h) (construction)
If ϕ is bind(i, x)ϕ′, we have (S, h, ν, β) |= bind(i, x)ϕ′
iff (S, h, ν, β[i↦x]) |= ϕ′ (dfn of |= in SLi)
iff ((dom, labS ∪ labν), actions(h)) |= (ϕ′)β[i↦x],state(h) (induction)
iff ((dom, labS ∪ labν), actions(h)) |= (bind(i, x)ϕ′)β,state(h) (construction)
If ϕ = ∃ox.ϕ′, we have (S, h, ν, β) |= ∃ox.ϕ′
iff for some σ uniform and coherent for use(x, ϕ′),
(S, h, ν[x↦σ], β) |= ϕ′ (dfn of |= in SLi)
iff for some σ uniform and coherent for use(x, ϕ′),
((dom, labS ∪ labν[x↦σ]), actions(h)) |= (ϕ′)β,state(h) (induction)
iff ((dom, labS ∪ labν), actions(h)) |= ∃(str(x, t, a))_{t∈S, a∈Act}
[Uniq^x ∧ Cohe^x_{use(x,ϕ′)} ∧ Unif^x_{use(x,ϕ′)} ∧ (ϕ′)β,state(h)] (Lemma 3)
iff ((dom, labS ∪ labν), actions(h)) |= (∃ox.ϕ′)β,state(h) (construction)
The case for ϕ = ∃sx.ϕ′ is similar to the previous one.
If ϕ is Xϕ′, we have (S, h, ν, β) |= Xϕ′
iff (S, π≤|h|+1(h, ν, β), ν, β) |= ϕ′ (dfn of |= in SLi),
i.e., the path π extending h in which each agent i follows
strategy ν(β(i)) satisfies that ϕ′ holds at the next step
iff ((dom, labS ∪ labν), actions(h)) |=
E(IsPathβ,state(h) ∧ X(ϕ′)β,state(h)) (induction)
iff ((dom, labS ∪ labν), actions(h)) |= (Xϕ′)β,state(h) (construction)
The case for ϕ = ϕ′Uϕ″ is similar to the previous one.
This completes the proof of Proposition 2.
Remark 8. We mention that we can provide a slight but useful optimisation in the
translation from SLi to QCTL. Instead of dealing with consecutive temporal operators
separately, we treat them as a single LTL formula. That is, we can view the syntax
of SLi so that, besides including the terms Xϕ | ϕUϕ, we also include arbitrary LTL
formulas whose atoms are SLi formulas, e.g., we include (ϕ∨ϕ)U(XXϕ). Then, in
the reduction we add the corresponding items, e.g., if ϕ is (ϕ′∨ϕ″)U(XXϕ‴) then
ϕβ,s ≜ E(IsPathβ,s ∧ ((ϕ′)β,s ∨ (ϕ″)β,s)U(XX(ϕ‴)β,s)). The consequence of this is that
ϕβ,s has the form E(LTL(·)) rather than the more general form CTL(·), where · stands
for the translation of SLi formulas.
Similarly, we can treat sequences of quantifiers of the same type in a single step of
the translation. That is, there are two types of quantifiers in SLi, i.e., ∃o and ∃s. So,
for instance, instead of treating a sequence of quantifiers ∃ox1∃ox2 . . . ∃oxk in k separate
steps of the translation, we can treat them in one step and thus get a translated formula
of the form
∃(str(x1, t, a), str(x2, t, a), . . . , str(xk, t, a))_{t∈S, a∈act} [. . .]
These optimisations have consequences for the complexity of model checking SLi, as
we see next.
3.5. Computational Complexity of model-checking PAIS
In this section we provide upper and lower bounds on the computational complexity
of model checking PAIS against SLi (recall that we use an explicit representation of the
inputs S and ϕ to the model-checking problem; see Section 2.4).
We start with upper bounds.
Upper Bound
The complexity of the algorithm in the previous section depends on a) the complexity
of model checking QCTL formulas, and b) the complexity of the translation of SLi
formulas to QCTL formulas. We analyse these components in turn.
The finest published upper bound for model checking QCTL is based on the quantifier-block
depth of a formula, i.e., the maximum, over all paths in the parse tree of the formula,
of the number of maximal blocks of consecutive quantifiers. Formally, for a QCTL formula φ,
define the quantifier-block depth, denoted depth(φ), inductively as follows:
depth(p) = 0
depth(φ1∨φ2) = max_i(depth(φi))
depth(¬φ) = depth(φ)
depth(∃p.φ) = depth(φ) + m, where m = 0 if φ starts with ∃, and m = 1 otherwise
depth(Eψ) = max_i(depth(φi)), where φi varies over the maximal state subformulas
of ψ.
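The clauses above can be rendered as a recursive function. In this sketch the tuple-based AST and the simplified treatment of E (taking its maximal state subformulas as given) are assumptions for illustration:

```python
# Quantifier-block depth over a toy AST: tuples whose head names the
# constructor; consecutive existentials count as a single block.
def depth(phi):
    kind = phi[0]
    if kind == "p":
        return 0
    if kind == "or":
        return max(depth(phi[1]), depth(phi[2]))
    if kind == "not":
        return depth(phi[1])
    if kind == "exists":                 # new block only if body is not an exists
        m = 0 if phi[1][0] == "exists" else 1
        return depth(phi[1]) + m
    if kind == "E":                      # max over the given state subformulas
        return max((depth(f) for f in phi[1:]), default=0)
    raise ValueError(kind)

# Two consecutive quantifiers form one block; negation breaks the run.
assert depth(("exists", ("exists", ("p",)))) == 1
assert depth(("exists", ("not", ("exists", ("p",))))) == 2
```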
Recall that the complexity class k-exptime consists of the decision problems that can
be solved by a deterministic Turing machine running in time O(expk(P(n))), where P is
a polynomial and expk is defined inductively as follows: exp0(n) ≜ n and expk+1(n) ≜
2^expk(n).
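The tower function expk can be written out directly; a quick sketch:

```python
# The tower function exp_k from the definition of k-EXPTIME.
def exp_k(k, n):
    """exp_0(n) = n;  exp_{k+1}(n) = 2 ** exp_k(n)."""
    for _ in range(k):
        n = 2 ** n
    return n

assert exp_k(0, 5) == 5
assert exp_k(1, 5) == 32
assert exp_k(2, 3) == 256        # 2 ** (2 ** 3)
```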
Theorem 16 ([80]). The complexity of model checking QCTL formulas of quantifier-block
depth k ≥ 1 is (k + 1)-exptime-complete.3
To apply this result, we similarly define the quantifier-block depth of an SLi formula.
Here, however, we have two types of quantifiers, ∃o and ∃s, which are treated separately.
Formally, for ϕ ∈ SLi, define depth(ϕ) inductively:
depth(p) = depth(x = y) = 0
3The definition of quantifier-block depth given above coincides with that given in [80]. To see this,
note that depth(φ) = 0 iff φ ∈ CTL, and that depth(φ) ≤ k + 1 iff ϕ is of the form CTL(∃p1 . . . ∃pn φ′)
where n varies ov