Reasoning about Changes of Observational Power in Logics of
Knowledge and Time
Aurèle Barrière
ENS Rennes
Bastien Maubert
Università degli Studi di Napoli “Federico II”
Sasha Rubin
Università degli Studi di Napoli “Federico II”
Aniello Murano
Università degli Studi di Napoli “Federico II”
ABSTRACT

We study dynamic changes of agents' observational power in logics of knowledge and time. We consider CTLK*, the extension of CTL* with knowledge operators, and enrich it with a new operator that models a change in an agent's way of observing the system. We extend the classic semantics of knowledge for agents with perfect recall to account for changes of observational power, and we show that this new operator increases the expressivity of CTLK*. We reduce the model-checking problem for our logic to that for CTLK*, which is known to be decidable. This provides a solution to the model-checking problem for our logic, but it is not optimal, and we provide a direct model-checking procedure with better complexity.
1 INTRODUCTION
In multi-agent systems, agents usually have only partial information about the state of the system [24]. This has led to the development of epistemic logics, often combined with temporal logics, for describing and reasoning about how agents' knowledge evolves over time. Such formalisms have been applied to the modelling and analysis of, e.g., distributed protocols [8, 16], information flow and cryptographic protocols [9, 27] and knowledge-based programs [28].
In these frameworks, an agent's view of a particular state of the system is given by an observation of that state. In all the cited settings, an agent's observation of a given state does not change over time. In other words, these frameworks have no primitive for reasoning about agents whose observation power can change. Because this phenomenon occurs in real scenarios, for instance when a user of a system is granted access to previously hidden data, we propose here to tackle this problem. Precisely, we extend classic epistemic temporal logics with a new unary operator, Δo, that represents changes of observation power, and is read "the agent changes her observation power to o". For instance, the formula Δo1 AF(Δo2(Kp ∨ K¬p)) expresses that "for an agent with initial observation power o1, in all possible futures there exists a point where, if the agent updates her observation power to o2, she learns whether or not the proposition p holds". If in this example o1 and o2 represent different "security levels" and p is sensitive information, then the formula expresses a possible avenue for attack. The present work provides means to express and evaluate such properties.
Related work. There is a rich history of epistemic logic in AI, including the static and temporal [8], dynamic [29] and strategic [24] settings. The most common logics of knowledge and time are CTLK, LTLK and CTLK*, which extend the classic temporal logics CTL, LTL and CTL* with epistemic operators. Satisfiability and axiomatisation have been studied in depth in [10, 11]. Model checking has also been studied, for agents with either no memory or perfect recall. For memoryless agents, the model-checking problem for LTLK, CTLK and CTLK* is Pspace-complete [13, 22], while for agents with perfect recall it is nonelementary, with a k-Exptime upper bound for formulas with at most k nested knowledge operators [1, 4, 6, 26]. However, it is not known whether these bounds are tight.
Two recent works involve dynamic changes of observation power. The first one [2] studies an imperfect-information extension of Strategy Logic [18] in which agents can change observation power when changing strategies, but the logic does not allow reasoning about knowledge. The second [17] extends the latter with knowledge operators, and solves the model-checking problem for a fragment related to the notion of hierarchical information [14, 20, 21]. In these two works, the focus is on strategic aspects. In the present work, instead, we intend to study in depth how the possibility to reason about changes of observational power affects the semantics, expressive power, and model checking of epistemic temporal logics.
Contributions. We extend CTLK* (which subsumes CTLK and LTLK) with observation-change operators Δo. For agents with perfect recall, which we study in this work, extending the classic semantics of knowledge requires storing the past observations of agents, which we do through the introduction of observation records. Starting with the mono-agent case, we solve the model-checking problem by first defining an alternative semantics which, unlike the natural one, is based on a bounded amount of information. Once the two semantics are proven to be equivalent, designing a model-checking algorithm is almost straightforward. We then extend the logic to the multi-agent case, introducing operators Δo_a for each agent a, and we extend our approach to solve its model-checking problem. Next, we study the expressivity of our logic, showing that the observation-change operator increases expressivity. We finally provide a reduction to CTLK* which removes observation-change operators at the cost of a blow-up in the size of the model. We show that going through this reduction and using known model-checking algorithms for CTLK* is more costly than our direct approach.
2 CTLK*Δ

In this section we define the logic CTLK*Δ. We first study the case where there is only one agent (and thus only one knowledge operator). We will extend to the multi-agent setting in Section 5.
2.1 Notation
Anite (resp. innite)word over some alphabet
Σ
is an element of
Σ
(resp.
Σω
). The length of a nite word
w=w0. . . wn
is
|w|=n+
1,
and we let
last(w)=wn
. Given a nite (resp. innite) word
w
and
0
i<|w|
(resp.
iN
), we let
wi
be the letter at position
i
in
w
,
wi
is the prex of
w
that ends at position
i
, and
wi
is the sux
that starts at position i. We write wwif wis a prex of w.
2.2 Syntax

We fix a countably infinite set of atomic propositions, AP, and a finite set of observations O, that represent possible observational powers of the agent. Note that in this work, "observation" does not refer to a punctual observation of a system's state, but rather to a way of observing the system, or "observational power" of an agent.

As for state and path formulas in CTL*, we distinguish between history formulas and path formulas. We say history formulas instead of state formulas because, considering agents with perfect recall of the past, the truth of epistemic formulas depends not only on the current state, but also on the history before reaching this state.

Definition 2.1 (Syntax). The sets of history formulas φ and path formulas ψ are defined by the following grammar:
φ ::= p | ¬φ | φ ∧ φ | Aψ | Kφ | Δo φ
ψ ::= φ | ¬ψ | ψ ∧ ψ | Xψ | ψ U ψ,
where p ∈ AP and o ∈ O.
We call CTLK*Δ formulas all history formulas so defined. Operators X and U are the classic next and until operators of temporal logics, and A is the universal path quantifier from branching-time temporal logics. K is the knowledge operator from epistemic logics, and Kφ reads as "the agent knows that φ is true". Our new observation-change operator, Δo, reads as "the agent now observes the system with observation o".

As usual, we define ⊤ = p ∨ ¬p, φ → φ′ = ¬(φ ∧ ¬φ′) and φ ∨ φ′ = ¬φ → φ′, as well as the temporal operators finally (F) and always (G): Fφ = ⊤ U φ, and Gφ = ¬F¬φ.
2.3 Semantics

Models of CTLK*Δ are Kripke structures equipped with one relation ∼o on states for each observation o.

Definition 2.2 (Models). A Kripke structure with observations is a structure M = (AP, S, T, V, {∼o}o∈O, sι, oι), where
• AP ⊂ AP is a finite subset of atomic propositions,
• S is a set of states,
• T ⊆ S × S is a left-total¹ transition relation,
• V : S → 2^AP is a valuation function,
• ∼o ⊆ S × S is an equivalence relation, for each o ∈ O,
• sι ∈ S is an initial state, and
• oι ∈ O is the initial observation.

A path is an infinite sequence of states π = s0 s1 . . . such that for all i ≥ 0, si T si+1, and a history h is a finite prefix of a path. For I ⊆ S, we write T(I) = {s′ | ∃s ∈ I s.t. s T s′} for the set of successors of states in I. Finally, for o ∈ O and s ∈ S, we let [s]o = {s′ | s ∼o s′} be the equivalence class of s for relation ∼o.

¹ i.e., for every s ∈ S there exists s′ ∈ S such that s T s′. This cosmetic restriction is made to avoid having to deal with finite runs ending in deadlocks.
Remark 1. We model agents' information via indistinguishability relations ∼o, where s ∼o s′ means that s and s′ are indistinguishable for an agent who has observation power o. Other approaches exist. One is via observation functions (see, e.g., [26]), which map states to atomic observations, and where two states are indistinguishable for an observation function if they have the same image. Another consists in seeing states as tuples of local states, one for each agent, two global states being indistinguishable for an agent if her local state is the same in both (see, e.g., [13]). All these formalisms are essentially equivalent with respect to epistemic temporal logics [19]. In these alternative formalisms, change of observation power would correspond to, respectively, changing observation function, and changing the local states inside each global state. We find that indistinguishability relations are convenient to study theoretical aspects of our logic. To model concretely how observational power changes, one may prefer to use local states and, for instance, specify in operators of observation change which variables become visible or hidden to an agent.
Observation records. To define which histories the agent cannot distinguish, we need to keep track of how she observed the system at each point in time. To do so, we record each observation change as a pair (o, n), where o is the new observation and n is the time when this change occurs.

Definition 2.3. An observation record r is a finite word over O × N, i.e., r ∈ (O × N)*.

Note that observation records are meant to represent changes of observational ability, and thus they do not contain the initial observation (which is given in the model). We write ∅ for the empty observation record.
Example 2.4. Consider a model M with initial observation oι, a history h = s0 . . . s4 and an observation record r = (o1, 0) · (o2, 3) · (o3, 3). The agent first observes state s0 with observation oι. The observation record shows that at time 0, thus before the first transition, the agent changed to observation o1. She then observed state s0 again, but this time with observation o1. Then the system goes through states s1 and s2 and reaches s3, all of which she observes with observation o1. At time 3, the agent changes to observation o2, and thus observes state s3 again, but this time with observation o2, and finally she switches to observation o3 and thus observes s3 once more, with observation o3. Finally, the system goes to state s4, which the agent observes with observation o3.
We write r · (o, n) for the observation record obtained by appending (o, n) to the observation record r, and r[n] for the record consisting of all pairs (o, m) in r such that m = n. We say that an observation record r stops at n if r[m] is empty for all m > n, and r stops at history h if it stops at |h| − 1. Unless otherwise specified, when we consider an observation record r together with a history h, it is understood that r stops at h.
Observations at time n. We let ol(r, n) be the list of observations used by the agent at time n. It consists of the observation that the agent has when the n-th transition is taken, plus those of observation changes that occur before the next transition. It is defined by induction on n:
ol(r, 0) = oι · o1 · . . . · ok, if r[0] = (o1, 0) · . . . · (ok, 0), and
ol(r, n+1) = last(ol(r, n)) · o1 · . . . · ok, if r[n+1] = (o1, n+1) · . . . · (ok, n+1).
Observe that ol(r, n) is never empty: if no observation change occurs at time n, ol(r, n) only contains the last observation taken by the agent. If r is empty, the latter is the initial observation oι.
Example 2.5. If r = (o1, 0) · (o2, 3) · (o3, 3), then ol(r, 0) = oι · o1, ol(r, 1) = ol(r, 2) = o1, ol(r, 3) = o1 · o2 · o3, and ol(r, 4) = o3.
Synchronous perfect recall. The usual definition of synchronous perfect recall states that for an agent with observation o, histories h and h′ are indistinguishable if they have the same length and are point-wise indistinguishable, i.e., |h| = |h′| and for each i < |h|, hi ∼o h′i. We adapt this definition to changing observations: two histories are indistinguishable if, at each point in time, the states are indistinguishable for all observations used at that time.

Definition 2.6 (Dynamic synchronous perfect recall). Given an observation record r, two histories h and h′ are equivalent, written h ∼r h′, if |h| = |h′| and for all i < |h| and all o ∈ ol(r, i), hi ∼o h′i.
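To make the bookkeeping of Definitions 2.3–2.6 concrete, here is a small Python sketch with our own naming (ObsRecord, obs_list, equivalent); it is not taken from the paper. Observations are encoded as partitions of the state set, "oi" stands for the initial observation oι, and the asserts reproduce Example 2.5.

```python
# Illustrative sketch (our own encoding), assuming the record stops at the history.
from typing import List, Tuple

ObsRecord = List[Tuple[str, int]]      # a finite word over O x N

def obs_list(r: ObsRecord, n: int, o_init: str) -> List[str]:
    """ol(r, n): the observations used by the agent at time n."""
    current = [o_init]
    for t in range(n + 1):
        changes = [o for (o, m) in r if m == t]      # the sub-record r[t]
        head = o_init if t == 0 else current[-1]     # last observation taken so far
        current = [head] + changes
    return current

def equivalent(h1: List[str], h2: List[str], r: ObsRecord, o_init: str,
               partitions: dict) -> bool:
    """h ~_r h': same length, and states indistinguishable at each time i
    for every observation in ol(r, i) (Definition 2.6).
    partitions maps each observation name to a partition of the state set."""
    if len(h1) != len(h2):
        return False
    def same_class(o: str, s1: str, s2: str) -> bool:
        return any(s1 in c and s2 in c for c in partitions[o])
    return all(same_class(o, h1[i], h2[i])
               for i in range(len(h1))
               for o in obs_list(r, i, o_init))

# Example 2.5: r = (o1,0)·(o2,3)·(o3,3)
r = [("o1", 0), ("o2", 3), ("o3", 3)]
assert obs_list(r, 0, "oi") == ["oi", "o1"]
assert obs_list(r, 1, "oi") == ["o1"] == obs_list(r, 2, "oi")
assert obs_list(r, 3, "oi") == ["o1", "o2", "o3"]
assert obs_list(r, 4, "oi") == ["o3"]
```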
We now dene the natural semantics of CTLK.
Denition 2.7 (Natural semantics). Fix a model
M
. A history
formula
φ
is evaluated in a history
h
and an observation record
r
.
A path formula
ψ
is interpreted on a run
π
, a point in time
nN
and an observation record. The semantics is dened by induction
on formulas (we omit the obvious boolean cases):
h,r|=pif pV(last(h))
h,r|=Aψif πs.t. hπ,π,|h| − 1,r|=ψ
h,r|=Kφif hs.t. hrh,h,r|=φ
h,r|=oφif h,r· (o,|h| − 1) |=φ
π,n,r|=φif πn,r|=φ
π,n,r|=Xψif π,(n+1),r|=ψ
π,n,r|=ψ1Uψ2if mns.t. π,m,r|=ψ2and
ks.t. nk<m,π,k,r|=ψ1
We say that a model
M
with initial state
sι
satises a
CTLK
formula φ, written M|=φ, if sι,∅ |=φ.
We rst discuss a subtlety of our semantics, which is that an agent
can observe the same state consecutively with several observations.
Remark 2. Consider the formula
oφ
and history
h
. By denition,
h,r|=oφ
i
h,r· (o,|h| −
1
) |=φ
. Note that although the history
did not change (it is still
h
), the observation record is extended by
the observation
o
at time
|h| −
1, with the following consequence.
Suppose that
ol(r,|h|−
1
)=o
. After switching to
o
, the agent considers
possible all histories
h
such that i)
hrh
(they were considered
possible before the change of observation) and ii)
last(h) ∼olast(h)
(they are still considered possible after the change of observation).
Informally this means that by changing observation from
o
to
o
, the
agent’s information is further rened by
o
, and it is as though the
agent at time
|h| −
1observed the system with observation
oo
.
At later times, her observation is simply
o
, until another change of
observation occurs.
2.4 Examples of observation change

We now illustrate that observation change is natural and relevant.

Example 2.8. A logic of accumulative knowledge (and resource bounds) is introduced in [12]. It studies agents that can perform successive observations to improve their knowledge of the situation, each observation refining their current view of the world. In their framework, an observation models a yes/no question about the current situation; if the answer is 'yes', the agent can eliminate all possible worlds for which the answer is 'no', and vice versa. Formally, an observation is a binary partition of the possible states, and the agent learns in which part of the partition the current state lies. Such observations are particular cases of our models' indistinguishability relations, and the semantics of an agent performing an observation o is exactly captured by the semantics of our operator Δo. Similarly, performing a sequence of observations o1 . . . on corresponds to the successive application of operators Δo1 . . . Δon. As an example, [12] shows how to model a medical diagnosis in which the disease is narrowed down by performing a series of successive tests.

Our logic is incomparable with the one discussed in the previous example: in the latter, observations have a cost, but no temporal aspect is considered, while in this work we do not consider costs, but we study the evolution of knowledge through time in addition to dynamic observation change. We now illustrate how both interact.
Example 2.9 (Security scenario). Consider a system with two possible levels of security clearance, modelled by observations o1 and o2, which define what information users have access to. In this scenario, we want to hide a secret p from the users. A desirable property is thus expressed by the formula (Δo1 AG ¬Kp) ∧ (Δo2 AG ¬Kp), which means that a user using either o1 or o2 will never know that p holds. Model M from Figure 1 satisfies this formula.

Now consider formula φ = Δo1 EF Δo2 Kp, which means that if the user starts with observation o1, there exists a path and a moment when changing observation lets her discover the secret. We show that M satisfies φ and thus that users should not be allowed to change security level. Consider history h = s0 s2 s5 in M with initial observation o1. At time 0 the user knows that the current state is s0. After going to s2, she does not know if the current state is s2 or s1, as they are indistinguishable for o1. At time 2, at first the user does not know whether the system is in s4 or s5. Now, if she changes to observation o2, she sees that the system is either in state s5 or s6. Refining her previous knowledge that the system is either in state s4 or s5, she deduces that the current state is s5, and that p holds.
Example 2.10 (Fault-Tolerant Diagnosability). Diagnosability is a property of systems which states that every failure is eventually detected [23]. In the setting considered in [3], the system is monitored through a set of sensors, and a diagnosability condition is a pair (c1, c2) of disjoint sets of states that the system should always be able to tell apart. The problem of finding minimal sets of sensors that ensure diagnosability is studied, that is, finding a minimal sensor configuration sc such that Δosc AG(Kc1 ∨ Kc2) holds, where osc is the observation corresponding to sensor configuration sc.

In CTLK*Δ one can express and model check a stronger notion of diagnosability that we call fault-tolerant diagnosability, where the system must remain diagnosable even after the loss of a sensor.

Figure 1: Model M in Example 2.9, and its variant M′. (Diagram not reproduced; p holds only in state s5.)

For a given diagnosability condition (c1, c2) and sensor configuration sc, we write o for the original observation (with every sensor in sc), oi for the observation where sensor i failed, and pi for a proposition indicating the failure of sensor i. The following formula expresses that sensor configuration sc ensures fault-tolerant diagnosability:
Φdiag = Δo AG((Kc1 ∨ Kc2) ∧ (pi → Δoi AG(Kc1 ∨ Kc2))).
Observe that it is possible for a system to satisfy Φdiag but not Δoi AG(Kc1 ∨ Kc2) if sensor i, before failing, brings some piece of information that is crucial for diagnosis.
2.5 Model-checking problem

The model-checking problem for CTLK*Δ consists in, given a model M and a formula φ, deciding whether M |= φ.

Model-checking approach. Perfect-recall semantics refers to histories of unbounded length, but it is well known that in many situations it is possible to maintain a bounded amount of information that is sufficient to deal with perfect recall. We show that it is also the case for our logic, by generalising the classic approach. Intuitively, it is enough to know the current state, the current observational power and the set of states that the agent believes the system might be in. The latter is usually called an information set in epistemic temporal logics and games with imperfect information. We define an alternative semantics based on information sets instead of histories and records, and we prove that this semantics is equivalent to the natural one presented in this section. Because information sets are of bounded size, it is then easy to build from this alternative semantics a model-checking algorithm for CTLK*Δ.
3 ALTERNATIVE SEMANTICS

We define an alternative semantics for CTLK*Δ. It is based on information sets, a classic notion in games with imperfect information [30], whose definition we now adapt to our setting.

Definition 3.1. Given a model M, the information set I(h, r) after a history h and an observation record r is defined as follows:
I(h, r) = {s ∈ S | ∃h′, h ∼r h′ and last(h′) = s}.
This information is sucient to evaluate epistemic formulas for
one agent when we consider the S5 semantics of knowledge, i.e.,
when indistinguishability relations are equivalence relations, as
is our case. We now describe how to maintain this information
along the evaluation of a formula. To do so, we dene two update
functions for information sets: one reects changes of observational
power, and the other captures transitions taken in the system.
Denition 3.2. Fix a model
M=(AP,S,T,V,{∼o}o∈O ,sι,oι)
.
Functions
UT
and
U
are dened as follows, for all
IS
, all
s,sS
and o,o∈ O.
UT(I,s,o)=T(I)∩[s]o
U(I,s,o)=I∩ [s]o
When the agent has observational power
o
and information set
I
,
and the model takes a transition to a state
s
, the new information
set is
UT(I,s,o)
, which consists of all successors of her previous
information set
I
that are
o
-indistinguishable with the new state
s
. When the agent is in state
s
with information set
I
, and she
changes for observational power
o
, her new information set is
U(I,s,o)
, i.e., all states that she considered possible before and
that she still considers possible after switching to o.
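As a minimal sketch of Definition 3.2 (our own encoding, not the paper's: states are strings, each observation is given as a partition of S, and T is a set of pairs), the two updates and the resulting maintenance of information sets along a history read as follows.

```python
# Sketch of the updates U_T and U_Delta of Definition 3.2, and of
# information-set maintenance along a history (cf. Proposition 3.3).
from typing import List, Set, Tuple

State = str
Trans = Set[Tuple[State, State]]
Partition = List[Set[State]]        # one block per equivalence class

def eq_class(obs: Partition, s: State) -> Set[State]:
    """[s]_o: the block of the partition containing s."""
    return next(c for c in obs if s in c)

def update_transition(I: Set[State], s_new: State, obs: Partition, T: Trans) -> Set[State]:
    """U_T(I, s', o) = T(I) ∩ [s']_o."""
    successors = {t for (s, t) in T if s in I}
    return successors & eq_class(obs, s_new)

def update_observation(I: Set[State], s: State, obs_new: Partition) -> Set[State]:
    """U_Delta(I, s, o) = I ∩ [s]_o."""
    return I & eq_class(obs_new, s)

def info_set(history: List[State], obs: Partition, T: Trans) -> Set[State]:
    """Information set after a history, for a fixed observation
    (no observation change), obtained by iterating U_T."""
    I = eq_class(obs, history[0])
    for s in history[1:]:
        I = update_transition(I, s, obs, T)
    return I
```

Replaying Example 2.9, for instance, amounts to computing info_set along s0 s2 s5 with the o1 partition, and then refining the result with update_observation for o2.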
We let O(h, r) be the last observation taken by the agent after history h, according to r. Formally, O(h, r) = on if ol(r, |h| − 1) = o1 · . . . · on. The following result establishes that the functions U_Δ and U_T correctly update information sets. It is proved by a simple application of the definitions.

Proposition 3.3. For every history h · s, observation record r that stops at h and observation o, it holds that
I(h · s, r) = U_T(I(h, r), s, O(h, r)), and
I(h, r · (o, |h| − 1)) = U_Δ(I(h, r), last(h), o).
Using these update functions we can now define our alternative semantics for CTLK*Δ.

Definition 3.4 (Alternative semantics). Fix a model M. A history formula φ is evaluated in a state s, an information set I and an observation o. A path formula ψ is interpreted on a run π, an information set I and an observation o. The semantic relation |=I is defined by induction on formulas (we omit the obvious boolean cases):
s, I, o |=I p if p ∈ V(s)
s, I, o |=I Aψ if ∀π s.t. π0 = s, π, I, o |=I ψ
s, I, o |=I Kφ if ∀s′ ∈ I, s′, I, o |=I φ
s, I, o |=I Δo′ φ if s, U_Δ(I, s, o′), o′ |=I φ
π, I, o |=I φ if π0, I, o |=I φ
π, I, o |=I Xψ if π≥1, U_T(I, π1, o), o |=I ψ
π, I, o |=I ψ1 U ψ2 if ∃n ≥ 0 such that π≥n, U^n_T(I, π, o), o |=I ψ2 and ∀m such that 0 ≤ m < n, π≥m, U^m_T(I, π, o), o |=I ψ1,
where U^n_T(I, π, o) is the iteration of the temporal update, defined inductively as follows: U^0_T(I, π, o) = I, and U^{n+1}_T(I, π, o) = U_T(U^n_T(I, π, o), πn+1, o).

Using Proposition 3.3, one can prove that the natural semantics |= and the information-set semantics |=I are equivalent.
Theorem 3.5. For every history formula φ, model M, history h and observation record r that stops at h,
h, r |= φ iff last(h), I(h, r), O(h, r) |=I φ.
4 MODEL CHECKING CTLK*Δ

In this section we devise a model-checking procedure based on the equivalence between the natural and alternative semantics (Theorem 3.5), and we prove the following result.

Theorem 4.1. Model checking CTLK*Δ is in EXPTIME.
Augmented model. Given a model M, we define an augmented model M̂ in which the states are tuples (s, I, o) consisting of a state s of M, an information set I and an observation o. According to Theorem 3.5, history formulas can be viewed on this model as state formulas, and a model-checking procedure can be devised by merely following the definition of the alternative semantics.
Let M = (AP, S, T, V, {∼o}o∈O, sι, oι). We define the Kripke structure M̂ = (Ŝ, T̂, V̂, ŝι), where:
• Ŝ = S × 2^S × O,
• (s, I, o) T̂ (s′, I′, o) if s T s′ and I′ = U_T(I, s′, o),
• V̂(s, I, o) = V(s), and
• ŝι = (sι, [sι]oι, oι).
We call M̂ the augmented model, and we write M̂o for the Kripke structure obtained by restricting M̂ to states of the form (s, I, o′) where o′ = o. Note that the different M̂o are disjoint with regards to T̂.
Model-checking procedure. We define the function CheckCTLK*Δ, which evaluates a history formula in a state of M̂:
CheckCTLK*Δ(M̂, (sc, Ic, oc), Φ) returns true if M, sc, Ic, oc |=I Φ and false otherwise, and is defined as follows. If Φ is a CTL* formula, we evaluate it using a classic model-checking procedure for CTL*. Otherwise, Φ contains a subformula of the form φ = Kφ1 or φ = Δo φ1 where φ1 ∈ CTL*. We evaluate φ1 in every state of every component M̂o′ (recall that the different M̂o′ are disjoint), and mark those that satisfy φ1 with a fresh atomic proposition pφ1. Then, if φ = Kφ1, we mark with a fresh atomic proposition pφ every state (s, I, o′) of M̂ such that for every s′ ∈ I, (s′, I, o′) is marked with pφ1. Else, φ = Δo φ1 and we mark with a fresh proposition pφ every state (s, I, o′) such that (s, U_Δ(I, s, o), o) is marked with pφ1. Finally, we recursively call the function CheckCTLK*Δ on the marked model and the formula Φ′ obtained by replacing φ with pφ in Φ.

To model check a formula φ in a model M, we build M̂ and call CheckCTLK*Δ(M̂, (sι, [sι]oι, oι), φ).
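The two marking steps can be sketched as follows (our own code, not the paper's: ctl_star_holds is an assumed black-box CTL* model checker taking the augmented states, transitions, valuation, a state and a CTL* formula; valuation maps augmented states to mutable sets of atoms, and eq_class is as in the earlier sketches).

```python
# Sketch of one round of CheckCTLK*Delta: mark an innermost subformula
# K phi1 or Delta_o phi1, then the caller replaces it by the fresh atom p_phi.

def eq_class(partition, s):
    return frozenset(next(c for c in partition if s in c))

def mark(states, trans, valuation, obs, phi1, kind, o_target, ctl_star_holds):
    """kind is 'K' or 'Delta'; o_target is the o of Delta_o (ignored for 'K').
    Returns the name of the fresh atom p_phi that now labels the marked states."""
    p1 = "p<" + phi1 + ">"
    p = "p<" + kind + phi1 + ">"
    # Step 1: mark p_phi1 wherever the CTL* formula phi1 holds.
    for st in states:
        if ctl_star_holds(states, trans, valuation, st, phi1):
            valuation[st].add(p1)
    # Step 2: mark p_phi according to the alternative semantics.
    for (s, I, o) in states:
        if kind == "K":
            # K phi1 holds at (s, I, o) iff phi1 holds at (s2, I, o) for every s2 in I
            if all(p1 in valuation[(s2, I, o)] for s2 in I):
                valuation[(s, I, o)].add(p)
        else:
            # Delta_o phi1 holds at (s, I, o') iff phi1 holds at (s, I ∩ [s]_o, o)
            I2 = I & eq_class(obs[o_target], s)
            if p1 in valuation[(s, I2, o_target)]:
                valuation[(s, I, o)].add(p)
    return p
```

The outer loop of the procedure would repeatedly pick an innermost K or Δ subformula, call mark, substitute pφ for that subformula, and finish with a plain CTL* check at the initial augmented state.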
Algorithm correctness. The correctness of the algorithm follows from the following properties:
• For each formula Kφ1 chosen by the algorithm, pφ ∈ V̂(s, I, o) iff M, s, I, o |=I Kφ1;
• For each formula Δo′ φ1 chosen by the algorithm, pφ ∈ V̂(s, I, o) iff M, s, I, o |=I Δo′ φ1.
Complexity analysis. Let |M| be the number of states in model M. Model checking a CTL* formula φ on a model M with state set S can be done in time 2^O(|φ|) × O(|S|) [7, 15]. Our procedure, for a CTLK*Δ formula φ and a model M, calls the CTL* model-checking procedure for at most |φ| formulas of size at most |φ|, on each state of M̂. The latter is of size 2^O(|M|) × |O|, but each call to the CTL* model-checking procedure is performed on a disjoint component M̂o of size 2^O(|M|). Our overall procedure thus runs in time |O| × 2^O(|φ|+|M|).
5 MULTI-AGENT SETTING

We now extend CTLK*Δ to the multi-agent setting. We fix a finite set of agents Ag = {a1, . . . , am} and define the logic CTLK*Δ,m. This logic contains, for each agent a and observation o, an operator Δo_a which reads as "agent a changes to observation o". We consider that these observation changes are public, in the sense that all agents are aware of them. The reason is that if agent a changes observation without agent b knowing it, agent b may entertain false beliefs about what agent a knows. This would not be consistent with the S5 semantics of knowledge that we consider in this work, where false beliefs are ruled out by the Truth axiom Kφ → φ.
5.1 Syntax and natural semantics

We first extend the syntax, with knowledge operators Ka and observation-change operators Δo_a for each agent.

Definition 5.1 (Syntax). The sets of history formulas φ and path formulas ψ are defined by the following grammar:
φ ::= p | ¬φ | φ ∧ φ | Aψ | Ka φ | Δo_a φ
ψ ::= φ | ¬ψ | ψ ∧ ψ | Xψ | ψ U ψ,
where p ∈ AP, a ∈ Ag and o ∈ O.

Formulas of CTLK*Δ,m are all history formulas.
The models of CTLK*Δ,m are as in the one-agent case, except that we assign one initial observation to each agent. We write 𝐨 for a tuple {oa}a∈Ag, 𝐨a for oa, and 𝐨[a ↦ o] for the tuple 𝐨 where oa is replaced by o. Finally, for 1 ≤ i ≤ m, oi refers to oai.

Definition 5.2 (Multi-agent models). A multi-agent Kripke structure with observations is a structure M = (AP, S, T, V, {∼o}o∈O, sι, 𝐨ι), where all components are as in Definition 2.2, except for 𝐨ι ∈ O^Ag, the initial observation for each agent.
We now adapt some definitions to the multi-agent setting.

Record tuples. We now need one observation record for each agent. We write 𝐫 for a tuple {ra}a∈Ag. Given a tuple 𝐫 = {ra}a∈Ag and a ∈ Ag, we write 𝐫a for ra, and for an observation o and time n we let 𝐫 · (o, n)a be the record tuple 𝐫 where ra is replaced with ra · (o, n). Finally, for i ∈ {1, . . . , m}, ri refers to rai.

Observations at time n. We let ola(𝐫, n) be the list of observations used by agent a at time n:
ola(𝐫, 0) = (𝐨ι)a · o1 · . . . · ok, if ra[0] = (o1, 0) · . . . · (ok, 0), and
ola(𝐫, n+1) = last(ola(𝐫, n)) · o1 · . . . · ok, if ra[n+1] = (o1, n+1) · . . . · (ok, n+1).
Denition 5.3 (Dynamic synchronous perfect recall). Given a record
tuple
r
, two histories
h
and
h
are equivalent for agent
a
, written
hr
ah, if |h|=|h|and i<|h|,oola(r,i),hioh
i.
Denition 5.4 (Natural semantics). Let
M
be a model,
h
a history
and
r
a record tuple. We dene the semantics for the following
inductive cases, the remaining ones are straightforwardly adapted
from the one-agent case (Denition 2.7).
h,r|=Kaφif hs.t. hr
ah,h,r|=φ
h,r|=o
aφif h,r· (o,|h| − 1)a|=φ
A model M with initial state sι satisfies a CTLK*Δ,m formula φ, written M |= φ, if sι, (∅, . . . , ∅) |= φ, where (∅, . . . , ∅) is the tuple in which each agent has the empty observation record.
5.2 Alternative semantics

As in the one-agent case, we define an alternative semantics that we prove equivalent to the natural one and upon which we build our model-checking algorithm. The main difference here is that we need richer structures than information sets to represent an epistemic situation of a system with multiple agents. For instance, to evaluate formula Ka Kb Kc p, we need to know what agent a knows about agent b's knowledge of agent c's knowledge of the system's state. To do so we use the k-trees introduced in [25, 26] in the setting of static observations, and which contain enough information to evaluate formulas of knowledge depth k.
k-trees. Fix a model M = (AP, S, T, V, {∼o}o∈O, sι, 𝐨ι). Intuitively, a k-tree over M is a structure of the form ⟨s, I1, . . . , Im⟩, where s ∈ S is the current state of the system, and for each i ∈ {1, . . . , m}, Ii is a set of (k − 1)-trees that represents the state of knowledge (of depth k − 1) of agent ai. Formally, for every history h and record tuple 𝐫 we define by induction on k the k-tree I^k(h, 𝐫) as follows:
I^0(h, 𝐫) = ⟨last(h), ∅, . . . , ∅⟩
I^{k+1}(h, 𝐫) = ⟨last(h), I1, . . . , Im⟩, where for each i, Ii = {I^k(h′, 𝐫) | h ∼^𝐫_ai h′}.
For a k-tree I^k = ⟨s, I1, . . . , Im⟩, we call s the root of I^k, and write it root(I^k). We also write I^k(a) for Ii, where a = ai, and we let Tk be the set of k-trees for M. Observe that for one agent (m = 1), a 1-tree is an information set together with the current state.
Updating k-trees. We generalise our update functions U_Δ and U_T (Definition 3.2) to update k-trees. We first define, by induction on k, the function U^k_T that updates k-trees when a transition is taken:
U^0_T(⟨s, ∅, . . . , ∅⟩, s′, 𝐨) = ⟨s′, ∅, . . . , ∅⟩
U^{k+1}_T(⟨s, I1, . . . , Im⟩, s′, 𝐨) = ⟨s′, I′1, . . . , I′m⟩,
where for each i, I′i = {U^k_T(I^k, s′′, 𝐨) | I^k ∈ Ii, s′′ ∼oi s′ and root(I^k) T s′′}.
U^k_T takes the current k-tree ⟨s, I1, . . . , Im⟩, the new state s′ and the current observation 𝐨 of each agent, and returns the new k-tree after the transition.

We now define the second update function U^k_Δ, which is used when an agent ai changes observation to some o:
U^0_Δ(⟨s, ∅, . . . , ∅⟩, o, ai) = ⟨s, ∅, . . . , ∅⟩
U^{k+1}_Δ(⟨s, I1, . . . , Im⟩, o, ai) = ⟨s, I′1, . . . , I′m⟩,
where for each j ≠ i, I′j = {U^k_Δ(I^k, o, ai) | I^k ∈ Ij}, and
I′i = {U^k_Δ(I^k, o, ai) | I^k ∈ Ii and root(I^k) ∼o s}.
The intuition is that when agent ai changes observation to o, in every place of the k-tree that refers to agent ai's knowledge, we remove possible states (and the corresponding subtrees) that are no longer equivalent to the current possible state for ai's new observation o.
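The recursive structure of these updates is perhaps easiest to see in code. Below is a sketch with our own encoding (a k-tree is a pair of a root state and a tuple with one frozenset of (k−1)-trees per agent; observations are given as partitions), covering both U^k_T and U^k_Δ; it is an illustration, not the paper's implementation.

```python
# Sketch: k-trees as nested pairs (root, children), where children has one
# frozenset of (k-1)-trees per agent; a 0-tree has empty sets everywhere.
# obs_of maps each agent index to its current observation (a partition of S).

def root(t):
    return t[0]

def same_class(partition, s1, s2):
    return any(s1 in c and s2 in c for c in partition)

def update_transition_ktree(t, s_new, obs_of, T):
    """U^k_T: after a transition to s_new, each agent i keeps, for every candidate
    subtree, only the successors of its root that are obs_of[i]-equivalent to s_new."""
    s, children = t
    new_children = []
    for i, Ii in enumerate(children):
        kept = set()
        for u in Ii:
            for (x, s2) in T:
                if x == root(u) and same_class(obs_of[i], s2, s_new):
                    kept.add(update_transition_ktree(u, s2, obs_of, T))
        new_children.append(frozenset(kept))
    return (s_new, tuple(new_children))

def update_observation_ktree(t, obs_partition, i):
    """U^k_Delta: agent i switches observation; prune, at every level, the subtrees
    in agent i's sets whose root is not obs-equivalent to the local root."""
    s, children = t
    new_children = []
    for j, Ij in enumerate(children):
        kept = {update_observation_ktree(u, obs_partition, i) for u in Ij
                if j != i or same_class(obs_partition, root(u), s)}
        new_children.append(frozenset(kept))
    return (s, tuple(new_children))
```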
We let O(h, 𝐫) be the tuple of last observations taken by each agent after history h, according to 𝐫. For each a ∈ Ag, O(h, 𝐫)a = on if ola(𝐫, |h| − 1) = o1 · . . . · on. The following proposition establishes that the functions U^k_T and U^k_Δ correctly update k-trees.

Proposition 5.5. For every history h · s, record tuple 𝐫 that stops at h, observation o, agent a and integer k, it holds that
I^k(h · s, 𝐫) = U^k_T(I^k(h, 𝐫), s, O(h, 𝐫)), and
I^k(h, 𝐫 · (o, |h| − 1)a) = U^k_Δ(I^k(h, 𝐫), o, a).
We now dene the alternative semantics for CTLKm.
Denition 5.6 (Alternative semantics). The semantics of a history
formula
φ
of knowledge depth
k
is dened inductively on a
k
-tree
Ik
and a tuple of current observations
o
(note that the current state
is the root of the
k
-tree). We only give the following inductive cases,
the others are simply adapted from Denition 3.4.
Ik,o|=Ipif pV(root(Ik))
Ik,o|=IAψif πs.t. π0=root(Ik),π,Ik,o|=Iψ
Ik,o|=IKaφif Ik1Ik(a),Ik1,o|=Iφ
Ik,o|=Io
aφif Uk
(Ik,o,a),o[ao] |=Iφ
The following theorem can be proved similarly to Theorem 3.5, using Proposition 5.5 instead of Proposition 3.3.

Theorem 5.7. For every history formula φ of knowledge depth k, each model M, history h and tuple of records 𝐫,
h, 𝐫 |= φ iff I^k(h, 𝐫), O(h, 𝐫) |=I φ.
6 MODEL CHECKING CTLK*Δ,m

Like in the mono-agent case, it is rather easy to devise from this alternative semantics a model-checking algorithm for CTLK*Δ,m, the main difference being that the states of the augmented model are now k-trees. In this section we adapt the model-checking procedure for CTLK*Δ to the multi-agent setting, once again relying on the equivalence between the natural and alternative semantics (Theorem 5.7), and we prove the following result.

Theorem 6.1. The model-checking problem for CTLK*Δ,m is in k-EXPTIME for formulas of knowledge depth at most k.
Augmented model. Given a model M, we define an augmented model M̂ in which the states are pairs (I^k, 𝐨) consisting of a k-tree I^k and an observation for each agent, 𝐨.

Let M = (AP, S, T, V, {∼o}o∈O, sι, 𝐨ι). We define the Kripke structure M̂ = (Ŝ, T̂, V̂, ŝι), where:
• Ŝ = Tk × O^Ag,
• (I^k, 𝐨) T̂ (I^k′, 𝐨) if s T s′ and I^k′ = U^k_T(I^k, s′, 𝐨), where s = root(I^k) and s′ = root(I^k′),
• V̂(I^k, 𝐨) = V(root(I^k)), and
• ŝι = (I^k(sι, (∅, . . . , ∅)), 𝐨ι).
We call M̂ the augmented model, and we write M̂𝐨 for the Kripke structure obtained by restricting M̂ to states of the form (I^k, 𝐨′) where 𝐨′ = 𝐨. Again, the different M̂𝐨 are disjoint with regards to T̂.
Model-checking procedure. We define the function CheckCTLK*Δ,m, which evaluates a history formula in a state of M̂:
CheckCTLK*Δ,m(M̂, (I^k_c, 𝐨c), Φ) returns true if M, I^k_c, 𝐨c |=I Φ and false otherwise, and is defined as follows. If Φ is a CTL* formula, we evaluate it using a classic model-checking procedure for CTL*. Otherwise, Φ contains a subformula of the form φ = Ka φ′ or φ = Δo_a φ′ where φ′ ∈ CTL*. We evaluate φ′ in every state of M̂, and mark those that satisfy φ′ with a fresh atom pφ′. Then, if φ = Ka φ′, we mark with a fresh atomic proposition pφ every state (I^k, 𝐨) of M̂ such that for every I^{k−1} ∈ I^k(a), (I^{k−1}, 𝐨) is marked with pφ′. Else, φ = Δo_a φ′ and we mark with a fresh proposition pφ every state (I^k, 𝐨) such that (U^k_Δ(I^k, o, a), 𝐨[a ↦ o]) is marked with pφ′. Finally, we recursively call CheckCTLK*Δ,m on the marked model and the formula Φ′ obtained by replacing φ with pφ in Φ.

To model check a formula φ in a model M, we build M̂ and call CheckCTLK*Δ,m(M̂, (I^k(sι, (∅, . . . , ∅)), 𝐨ι), φ).
Algorithm correctness. The correctness of the algorithm follows from the following properties:
• For each formula Ka φ′ chosen by the algorithm, pφ ∈ V̂(I^k, 𝐨) iff M, I^k, 𝐨 |=I Ka φ′;
• For each formula Δo_a φ′ chosen by the algorithm, pφ ∈ V̂(I^k, 𝐨) iff M, I^k, 𝐨 |=I Δo_a φ′.
Complexity analysis. The number of different k-trees for m agents and a model with l states is no greater than Ck = exp(m × l, k)/m, where exp(a, b) is defined by exp(a, 0) = a and exp(a, b + 1) = a · 2^exp(a,b) [26]. The size of the augmented model M̂ is thus bounded by exp(m × l, k)/m × |O|^|Ag|, and it can be computed in time exp(O(m × l), k) × |O|^|Ag|.

Model checking a CTL* formula φ on a model M with state set S can be done in time 2^O(|φ|) × O(|S|) [7, 15]. For a CTLK*Δ,m formula φ of knowledge depth at most k and a model M with l states, our procedure calls the CTL* model-checking procedure for at most |φ| formulas of size at most |φ|, on each state of the augmented model M̂, which has size exp(m × l, k)/m × |O|^m. Each recursive call (for each subformula and state of M̂) is performed on a disjoint component M̂𝐨 of size at most exp(m × l, k)/m, and thus takes time 2^O(|φ|) × O(exp(m × l, k)/m), and there are at most |φ| × exp(m × l, k)/m × |O|^m of them. Our overall procedure thus runs in time |O|^m × 2^O(|φ|) × exp(O(m × l), k), which we rewrite as |O|^|Ag| × 2^O(|φ|) × exp(O(|Ag| × |M|), k).
Note that, as described in [25, 26], the k-trees machinery can be refined to deal with formulas of alternation depth k. Theorem 4.1 would then become the instantiation of Theorem 6.1 for one agent and k = 1. We do not present this result here for reasons of space and simplicity of presentation.
7 EXPRESSIVITY

In this section we prove that the observation-change operator adds expressive power to epistemic temporal logics. Formally, we compare the expressive power of CTLK*Δ,m with that of CTLK*m [5, 10], which is the syntactic fragment of CTLK*Δ,m obtained by removing the observation-change operator. Our semantics for CTLK*Δ,m generalises that of CTLK*m, with which it coincides on CTLK*m formulas. Note that our multi-agent models (Definition 5.2) are more general than usual models for CTLK*m, as they may contain observation relations that are not initially assigned to any agent, but such relations are mute in the evaluation of CTLK*m formulas.

For two logics L and L′ over the same class of models, we say that L′ is at least as expressive as L, written L ⪯ L′, if for every formula φ ∈ L there exists a formula φ′ ∈ L′ such that φ ≡ φ′. L′ is strictly more expressive than L, written L ≺ L′, if L ⪯ L′ and L′ ̸⪯ L. Finally, L and L′ are equiexpressive, written L ≡ L′, if L ⪯ L′ and L′ ⪯ L.
First, since CTLK*Δ,m extends CTLK*m, we have that:

Proposition 7.1. For all m ≥ 1, CTLK*m ⪯ CTLK*Δ,m.

We now point out that when there is only one observation, i.e., |O| = 1, the observation-change operator has no effect, and thus CTLK*Δ,m is no more expressive than CTLK*m.

Proposition 7.2. For |O| = 1, CTLK*Δ,m ≡ CTLK*m.
Proof. We show that for |O| = 1, CTLK*Δ,m ⪯ CTLK*m, which together with Proposition 7.1 provides the result. Observe that when |O| = 1, observation change has no effect, and in fact observation records can be omitted in the natural semantics. For every CTLK*Δ,m formula φ, define the CTLK*m formula φ′ by removing all observation-change operators Δo_a from φ. Clearly, φ ≡ φ′. □

On the other hand, we show that as soon as we have at least two observations, the observation-change operator adds expressivity. We first consider the mono-agent case.
Proposition 7.3. If |O| > 1 then CTLK*Δ ̸⪯ CTLK*.

Proof. Assume that O contains o1 and o2. Consider the model M from Example 2.9 (Figure 1), and define the model M′ which is the same as M except that s4 and s5 are indistinguishable for both o1 and o2, while in M they are only indistinguishable for o1. In both models, agent a is initially assigned observation o1. To prove the proposition we exhibit a formula of CTLK*Δ that can distinguish between M and M′, and justify that no formula of CTLK* can.

Consider formula φ = EF Δo2 Ka p. As detailed in Example 2.9, we have that M |= φ. We now show that M′ ̸|= φ: the only history in which p holds, and thus where agent a may get to know it, is the path s0 s2 s5. After observing this path with observation o1, agent a considers that both s4 and s5 are possible. She still does after switching to observation o2, as s4 and s5 are o2-indistinguishable. As a result M′ ̸|= φ, and thus φ distinguishes M and M′.

Now, to see that no formula of CTLK* can distinguish between these two models, it is enough to see that in both models the only agent a is assigned observation o1, and thus on these models no operator of CTLK* can refer to observation o2, which is the only difference between M and M′. □
This proof for the mono-agent case relies on the fact that CTLK*Δ can refer to observations that are not initially assigned to any agent, and thus cannot be referred to within CTLK*. This proof can easily be adapted to the multi-agent case, by considering the same models M and M′ and assigning the same initial observation o1 to all agents. We show that in fact, when we have at least two agents, CTLK*Δ,m is strictly more expressive than CTLK*m even when we assume that all observations are initially assigned to some agent.
Proposition 7.4. If |O| > 1 and m ≥ 2, CTLK*Δ,m ̸⪯ CTLK*m even on models in which all observations are initially assigned.

Proof. Assume that O contains o1 and o2. We consider two agents a and b; the proof can easily be generalised to more agents. Consider again the models M and M′ used in the proof of Proposition 7.3. This time, in both models, agent a is initially assigned observation o1 and agent b observation o2. For the same reasons as before, formula φ = EF Δo2 Ka p distinguishes between M and M′.

Now, to see that no formula of CTLK*m can distinguish these two models, recall that the only difference between M and M′ concerns observation o2, and that agents a and b are bound to observations o1 and o2 respectively. Since in CTLK*m agents cannot change observation, the modification of o2 between M and M′ can only affect the knowledge of agent b, by making her unable to distinguish s4 and s5. However this cannot happen. Indeed, these states can only be reached via histories s0 s1 s4 and s0 s2 s5 respectively; since s1 and s2 are not o2-indistinguishable, and we consider perfect recall, s0 s1 s4 and s0 s2 s5 are not o2-indistinguishable either.

Formally, define the perfect-recall unfolding of a model M as the infinite tree consisting of all possible histories starting in the initial state, in which two nodes h and h′ are related for oi if |h| = |h′| and for all j < |h|, hj ∼oi h′j. It is clear that CTLK*m is invariant under perfect-recall unfolding. Now it suffices to notice that the perfect-recall unfoldings of M and M′ are the same, and thus cannot be distinguished by any CTLK*m formula. □
Remark 3. Unlike CTLK*m, CTLK*Δ,m is not invariant under perfect-recall unfolding. Indeed, in these unfoldings observation relations on histories are defined for fixed observations, and thus cannot account for observation changes induced by operators Δo_a.

Putting together Propositions 7.1, 7.3 and 7.4, we obtain:

Theorem 7.5. If |O| > 1 then CTLK*m ≺ CTLK*Δ,m.
8 ELIMINATING OBSERVATION CHANGE

In this section we show how to reduce the model-checking problem for CTLK*Δ to that of CTLK*. The approach can be easily generalised to the multi-agent case.

Fix an instance (M, Φ) of the model-checking problem for CTLK*Δ, where M = (AP, S, T, V, {∼o}o∈O, sι, oι) is a (mono-agent) model and Φ is a CTLK*Δ formula. We build an equivalent instance (M′, Φ′) of the model-checking problem for CTLK*; in particular, M′ contains a single observation relation, and Φ′ does not use the operator Δo.
We rst dene
M
. For each observation symbol
o∈ O
we
create a copy
Mo
of the original model
M
. Moving to copy
Mo
will
simulate switching to observation
o
. To make this possible, we need
to introduce transitions between each state
so
of a copy
Mo
to state
soof copy Mo, for all o,o.
Let M=(AP ∪ {po|o∈ O},S,T,V,,sι), where
for each o∈ O,pois a fresh atomic proposition,
S=Ðo∈O {so|sS},
T={(so,s
o) | o∈ O and (s,s) T}
∪ {(so,so) | sS,o,o∈ O and o,o}
V(so)=V(s) ∪ {po}, for all sSand o∈ O,
• ∼=Ðo∈ O {(so,s
o) | sos}, and
sι=sιoι.
We now dene formula
Φ
. The translation
tro
is parameterised
with an observation o∈ O and is dened by induction on Φ:
tro(oφ)=(tro(φ)if o=o
AX(potro(φ)) otherwise
tro(Aψ)=A(Gpotro(ψ))
All other cases simply distribute over operators. We nally let
Φ=troι(Φ). Using the alternative semantics, we see that:
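This translation is a plain recursion on the syntax. Here is a sketch with our own tuple-based formula encoding ('atom', 'not', 'and', 'imp', 'K', 'A', 'X', 'G', 'U', 'change'), where the atom po of copy Mo is written 'p_' + o; it is an illustration of the two non-trivial cases, not the paper's code.

```python
# Sketch of the translation tr_o eliminating observation-change operators.
# Formulas are tuples, e.g. ('change', 'o2', ('K', ('atom', 'p'))) for Delta_{o2} K p.

def tr(o, phi):
    tag = phi[0]
    if tag == 'atom':
        return phi
    if tag == 'change':
        o2, f = phi[1], phi[2]
        if o2 == o:
            return tr(o, f)                       # already simulating copy M_o
        # AX(p_{o'} -> tr_{o'}(f)): the move to copy M_{o'} takes one transition
        return ('A', ('X', ('imp', ('atom', 'p_' + o2), tr(o2, f))))
    if tag == 'A':
        # A(G p_o -> tr_o(psi)): only quantify over paths staying in copy M_o
        return ('A', ('imp', ('G', ('atom', 'p_' + o)), tr(o, phi[1])))
    if tag in ('not', 'K', 'X', 'G'):             # unary cases: distribute
        return (tag, tr(o, phi[1]))
    if tag in ('and', 'imp', 'U'):                # binary cases: distribute
        return (tag, tr(o, phi[1]), tr(o, phi[2]))
    raise ValueError("unknown operator: " + str(tag))

# Phi' = tr_{o_init}(Phi), e.g. for Delta_{o1} AG ¬K p with initial observation "o0":
phi = ('change', 'o1', ('A', ('G', ('not', ('K', ('atom', 'p'))))))
print(tr('o0', phi))
```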
Lemma 8.1. M |= Φ if, and only if, M′ |= Φ′.
Since we know how to model check CTLK*, this provides a model-checking procedure for CTLK*Δ. However, this algorithm does not have optimal complexity. Indeed, the model M′ is of size |M| × |O|, and the best known model-checking algorithm for CTLK* runs in time exponential in the size of the model and the formula [4]. Going through this reduction thus yields a procedure that is exponential in the number of observations. Our direct model-checking procedure, which generalises techniques used for the classic case of static observations, instead provides a decision procedure that is only linear in the number of observations (Theorem 4.1).
The reduction described above can easily be generalised to the multi-agent case, by creating one copy M𝐨 of the original model M for each possible assignment 𝐨 of observations to agents. We thus get a model M′ of size |M| × |O|^|Ag|, and since the best known model-checking procedure for CTLK*m is k-exponential in the size of the model [4], this reduction provides a procedure which is k-exponential in the number of observations and (k+1)-exponential in the number of agents.

The direct approach provides an algorithm that is only polynomial in the number of observations, exponential in the number of agents, and whose combined complexity is k-exponential time (Theorem 6.1).
9 CONCLUSION

Epistemic temporal logics play a central role in MAS as they permit one to reason about the knowledge of agents along the evolution of a system. Previous works in this field have treated agents' observation power as a static feature. However, in many scenarios, agents' observation power may change.

In this work we introduced CTLK*Δ, a logic that can express such dynamic changes of observation power. We showed that it can express natural properties that are not expressible without the observation-change operator, and provided some examples of applications of our logic.
While in [17] changes of observation are bound to quantification over strategies, and the model-checking problem is undecidable, we showed that in the purely temporal epistemic setting model checking is decidable, and known techniques can be extended to deal with observation change at no additional cost in complexity.

We also showed how to reduce the model-checking problem for our logic to that of CTLK*, removing the observation-change operator. This yields a model-checking procedure for CTLK*Δ, but one that is not as efficient as the direct algorithm we provide.
As future work we would like to establish the precise complexity of model checking CTLK*Δ. We conjecture that it should be the same as for CTLK*, i.e., that adding the possibility to reason about changes of observational power comes for free. However, the exact complexity of model checking classic epistemic temporal logics such as LTLK or CTLK* is a long-standing open problem. It would also be interesting to study the satisfiability problem of epistemic temporal logic with changes of observation power. Finally, studying axiomatisations of our logic could provide more insight into how changes of observation power work.
REFERENCES
[1] Guillaume Aucher. 2014. Supervisory control theory in epistemic temporal logic. In AAMAS. 333–340. http://dl.acm.org/citation.cfm?id=2615787
[2] Raphaël Berthon, Bastien Maubert, Aniello Murano, Sasha Rubin, and Moshe Y. Vardi. 2017. Strategy logic with imperfect information. In LICS. 1–12. https://doi.org/10.1109/LICS.2017.8005136
[3] Benjamin Bittner, Marco Bozzano, Alessandro Cimatti, and Xavier Olive. 2012. Symbolic Synthesis of Observability Requirements for Diagnosability. In AAAI.
[4] Laura Bozzelli, Bastien Maubert, and Sophie Pinchinat. 2015. Uniform strategies, rational relations and jumping automata. Information and Computation 242 (2015), 80–107. https://doi.org/10.1016/j.ic.2015.03.012
[5] Laura Bozzelli, Bastien Maubert, and Sophie Pinchinat. 2015. Unifying Hyper and Epistemic Temporal Logics. In FoSSaCS. 167–182. https://doi.org/10.1007/978-3-662-46678-0_11
[6] Cătălin Dima. 2009. Revisiting Satisfiability and Model-Checking for CTLK with Synchrony and Perfect Recall. In CLIMA IX-2008. 117–131. https://doi.org/10.1007/978-3-642-02734-5_8
[7] E. Allen Emerson and Chin-Laung Lei. 1987. Modalities for model checking: Branching time logic strikes back. Science of Computer Programming 8, 3 (1987), 275–306.
[8] Ronald Fagin, Joseph Y. Halpern, Yoram Moses, and Moshe Vardi. 2004. Reasoning about Knowledge. MIT Press.
[9] Joseph Y. Halpern and Kevin R. O'Neill. 2005. Anonymity and information hiding in multiagent systems. Journal of Computer Security 13, 3 (2005), 483–512. http://content.iospress.com/articles/journal-of-computer-security/jcs237
[10] Joseph Y. Halpern, Ron van der Meyden, and Moshe Y. Vardi. 2004. Complete Axiomatizations for Reasoning about Knowledge and Time. SIAM J. Comput. 33, 3 (2004), 674–703. https://doi.org/10.1137/S0097539797320906
[11] Joseph Y. Halpern and Moshe Y. Vardi. 1989. The complexity of reasoning about knowledge and time. 1. Lower bounds. J. Comput. System Sci. 38, 1 (1989), 195–237. https://doi.org/10.1145/12130.12161
[12] Wojciech Jamroga and Masoud Tabatabaei. 2018. Accumulative knowledge under bounded resources. J. Log. Comput. 28, 3 (2018), 581–604. https://doi.org/10.1093/logcom/exv003
[13] Jeremy Kong and Alessio Lomuscio. 2017. Symbolic Model Checking Multi-Agent Systems against CTL*K Specifications. In AAMAS. 114–122. http://dl.acm.org/citation.cfm?id=3091147
[14] O. Kupferman and M. Y. Vardi. 2001. Synthesizing distributed systems. In LICS'01. 389–398.
[15] Orna Kupferman, Moshe Y. Vardi, and Pierre Wolper. 2000. An automata-theoretic approach to branching-time model checking. Journal of the ACM 47, 2 (2000), 312–360.
[16] Richard E. Ladner and John H. Reif. 1986. The Logic of Distributed Protocols. In TARK. 207–222.
[17] Bastien Maubert and Aniello Murano. 2018. Reasoning about Knowledge and Strategies under Hierarchical Information. In KR 2018. 530–540. https://aaai.org/ocs/index.php/KR/KR18/paper/view/17996
[18] Fabio Mogavero, Aniello Murano, Giuseppe Perelli, and Moshe Y. Vardi. 2014. Reasoning About Strategies: On the Model-Checking Problem. ACM Trans. Comput. Log. 15, 4 (2014), 34:1–34:47. https://doi.org/10.1145/2631917
[19] Eric Pacuit. 2007. Some comments on history based structures. Journal of Applied Logic 5, 4 (2007), 613–624.
[20] Gary Peterson, John Reif, and Salman Azhar. 2002. Decision algorithms for multiplayer noncooperative games of incomplete information. CAMWA 43, 1 (2002), 179–206.
[21] A. Pnueli and R. Rosner. 1990. Distributed reactive systems are hard to synthesize. In FOCS'90. 746–757.
[22] Franco Raimondi and Alessio Lomuscio. 2005. The complexity of symbolic model checking temporal-epistemic logics. In CS&P. 421–432.
[23] Meera Sampath, Raja Sengupta, Stéphane Lafortune, Kasim Sinnamohideen, and Demosthenis Teneketzis. 1995. Diagnosability of discrete-event systems. IEEE Transactions on Automatic Control 40, 9 (1995), 1555–1575.
[24] W. van der Hoek and M. Wooldridge. 2003. Cooperation, knowledge, and time: Alternating-time Temporal Epistemic Logic and its applications. Studia Logica 75, 1 (2003), 125–157. https://doi.org/10.1023/A:1026185103185
[25] Ron van der Meyden. 1998. Common Knowledge and Update in Finite Environments. Inf. Comput. 140, 2 (1998), 115–157. https://doi.org/10.1006/inco.1997.2679
[26] Ron van der Meyden and Nikolay V. Shilov. 1999. Model Checking Knowledge and Time in Systems with Perfect Recall (Extended Abstract). In FSTTCS. 432–445.
[27] Ron van der Meyden and Kaile Su. 2004. Symbolic Model Checking the Knowledge of the Dining Cryptographers. In CSFW-17. 280–291.
[28] Ron van der Meyden and Moshe Y. Vardi. 1998. Synthesis from knowledge-based specifications. In CONCUR. Springer, 34–49.
[29] Hans van Ditmarsch, Wiebe van der Hoek, and Barteld Pieter Kooi. 2007. Dynamic Epistemic Logic. Vol. 337. Springer.
[30] John von Neumann and Oskar Morgenstern. 2007. Theory of Games and Economic Behavior (commemorative edition). Princeton University Press.
... In some practical engineering applications, the objects identified from the system may change over time [53]. In this case, the MAIF system needs to collect information in real time and conduct corresponding decision analysis [54]. Supposing a military base has an MAIF system to identify targets, the system uses three agents to read real-time information. ...
Article
Full-text available
The multi-agent information fusion (MAIF) system can alleviate the limitations of a single expert system in dealing with complex situations, as it allows multiple agents to cooperate in order to solve problems in complex environments. Dempster–Shafer (D-S) evidence theory has important applications in multi-source data fusion, pattern recognition, and other fields. However, the traditional Dempster combination rules may produce counterintuitive results when dealing with highly conflicting data. A conflict data fusion method in a multi-agent system based on the base basic probability assignment (bBPA) and evidence distance is proposed in this paper. Firstly, the new bBPA and reconstructed BPA are used to construct the initial belief degree of each agent. Then, the information volume of each evidence group is obtained by calculating the evidence distance so as to modify the reliability and obtain more reasonable evidence. Lastly, the final evidence is fused with the Dempster combination rule to obtain the result. Numerical examples show the effectiveness and availability of the proposed method, which improves the accuracy of the identification process of the MAIF system.
... In some practical engineering applications, objects identified from the system may continuously change over times [13]. In this scenario, the multi-agent information fusion system needs to collect the information in real time and make decision analysis correspondingly [1]. Suppose there is a MAIF system in a military base to identify the type of an target. ...
Conference Paper
Full-text available
In the field of informed decision-making, the usage of a single diagnostic expert system has limitations when dealing with complicated circumstances. The usage of a multi-agent information fusion (MAIF) system can mitigate this situation, as it allows multiple agents collaborating to solve the problems in a complex environment. However, the MAIF system needs to handle the uncertainty problem between different agents objectively at the same time. Target to this goal, this study reconstructs the generation of basic probability assignments (BPAs) based on the framework of evidence theory, and presents the uncertainty relationship between recognition sets, which are beneficial to the applications of the MAIF system. On the basis of evidence distance measurement, our method demonstrates the effectiveness and extendibility in numerical examples, and improves the accuracy and anti-interference ability during the identification process in the MAIF system.
Conference Paper
Full-text available
Two distinct semantics have been considered for knowledge in the context of strategic reasoning, depending on whether players know each other's strategy or not. In the former case, that we call the informed semantics, distributed synthesis for epistemic temporal specifications is undecidable, already on systems with hierarchical information. However, for the other, uninformed semantics, the problem is decid-able on such systems. In this work we generalise this result by introducing an epistemic extension of Strategy Logic with imperfect information. The semantics of knowledge operators is uninformed, and captures agents that can change observation power when they change strategies. We solve the model-checking problem on a class of "hierarchical in-stances", which provides a solution to a vast class of strategic problems with epistemic temporal specifications, such as distributed or rational synthesis, on hierarchical systems.
Article
Full-text available
A general concept of uniform strategies has recently been proposed as a relevant notion in game theory for computer science, which subsumes various notions from the literature. It relies on properties involving sets of plays in two-player turn-based arenas equipped with arbitrary binary relations between plays; these properties are expressed in a language based on CTL* with a quantifier over related plays. There are two semantics for our quantifier, a strict one and a full one, that we study separately. Regarding the strict semantics, the existence of a uniform strategy is undecidable for rational binary relations, but introducing jumping tree automata and restricting attention to recognizable relations allows us to establish a 2-Exptime-complete complexity, and still capture a class of two-player imperfect-information games with epistemic temporal objectives. Regarding the full semantics, relying on information set automata we establish that the existence of a uniform strategy is decidable for rational relations and we provide a nonelementary synthesis procedure. We also exhibit an essentially optimal subclass of rational relations for which the problem becomes 2-Exptime-complete. Considering rich classes of relations makes the theory of uniform strategies powerful: it directly entails various results in logics of knowledge and time, some of them already known, and others new.
Conference Paper
Supervisory control theory deals with problems related to the existence and the synthesis of supervisors. The role of a supervisor in a system is to control and restrict the behavior of this system in order to realize a specific behavior. When there are multiple supervisors, such systems are in fact multi-agent systems. The results of supervisory control theory are usually expressed in terms of operations like intersection and inclusion between formal languages. We reformulate them in terms of model checking problems in an epistemic temporal logic. Our reformulations are very close to natural language expressions and highlight their underlying intuitions. From an applied perspective, they pave the way for applying model checking techniques developed for epistemic temporal logics to the problems of supervisory control theory.
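One illustrative reformulation in this spirit (the proposition must_disable_e and the agent sup are hypothetical, not taken from the cited paper): the requirement that a supervisor can correctly decide when to disable a controllable event e can be phrased as the epistemic temporal formula

\mathbf{AG}\,\big(\mathit{must\_disable}_e \rightarrow K_{\mathit{sup}}\,\mathit{must\_disable}_e\big)

i.e. whenever e has to be disabled to stay within the specification language, the supervisor knows this from its own observations, so the language-theoretic existence conditions become model checking queries.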
Article
Given a partially observable dynamic system and a diagnoser observing its evolution over time, diagnosability analysis formally verifies (at design time) if the diagnosis system will be able to infer (at runtime) the required information on the hidden part of the dynamic state. Diagnosability directly depends on the availability of observations, and can be guaranteed by different sets of sensors, possibly associated with different costs. In this paper, we tackle the problem of synthesizing observability requirements, i.e. automatically discovering a set of observations that is sufficient to guarantee diagnosability. We propose a novel approach with the following characterizing features. First, it fully covers a comprehensive formal framework for diagnosability analysis, and enables ranking configurations of observables in terms of cost, minimality, and diagnosability delay. Second, we propose two complementary algorithms for the synthesis of observables. Third, we describe an efficient implementation that takes full advantage of mature symbolic model checking techniques. The proposed approach is thoroughly evaluated over a comprehensive suite of benchmarks taken from the aerospace domain.
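For concreteness, a common formal rendering of diagnosability in the discrete-event setting (the classical definition, given here as background with L the system language, L/s the continuations of s, P the observation projection and f the fault event; it is not necessarily the exact framework of the cited paper) is:

\exists k \in \mathbb{N}\ \ \forall s \in L \text{ ending in } f\ \ \forall t \in L/s:\ |t| \ge k \;\Rightarrow\; \big(\forall u \in L:\ P(u) = P(st) \Rightarrow f \text{ occurs in } u\big)

that is, some bounded number of steps after any occurrence of the fault, every trace that is observationally indistinguishable from the actual one also contains the fault, so the diagnoser can announce it with certainty.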
Conference Paper
In the literature, two powerful temporal logic formalisms have been proposed for expressing information flow security requirements that, in general, go beyond regular properties. One is classic, based on the knowledge modalities of epistemic logic. The other one, the so-called hyper logic, is more recent and subsumes many proposals from the literature; it is based on explicit and simultaneous quantification over multiple paths. In an attempt to better understand how these logics compare with each other, we consider the logic KCTL* (the extension of CTL* with knowledge modalities and synchronous perfect recall semantics) and HyperCTL*. We first establish that KCTL* and HyperCTL* are expressively incomparable. Second, we introduce and study a natural linear-past extension of HyperCTL* to unify KCTL* and HyperCTL*; indeed, we show that KCTL* can be easily translated in linear time into the proposed logic. Moreover, we show that the model-checking problem for this novel logic is decidable, and we provide its exact computational complexity in terms of a new measure of path quantifiers' alternation. For this, we settle open complexity issues for unrestricted quantified propositional temporal logic.
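To illustrate the two styles (illustrative formulas only; the proposition names secret, in, out and the agent obs are hypothetical), a confidentiality requirement can be stated epistemically in KCTL* as

\mathbf{AG}\,\neg K_{\mathit{obs}}\,\mathit{secret}

(the observer never comes to know the secret), while a noninterference-style requirement in HyperCTL* explicitly quantifies over pairs of executions:

\forall \pi\,\forall \pi'.\ \mathbf{G}\,(\mathit{in}_\pi \leftrightarrow \mathit{in}_{\pi'}) \rightarrow \mathbf{G}\,(\mathit{out}_\pi \leftrightarrow \mathit{out}_{\pi'}).

The expressive incomparability result says that, in general, neither style can be compiled into the other.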
Conference Paper
We define the logic LDLK, a formalism for specifying multi-agent systems. LDLK extends LDL with epistemic modalities, including common knowledge, for reasoning about the evolution of knowledge states of the agents in the system. We study the complexity of verifying a multi-agent system against LDLK specifications and show this to be in PSPACE. We give an algorithm for the practical verification of multi-agent systems specified in LDLK. We show that the model checking algorithm, based on alternating automata and NFAs, is amenable to symbolic implementation on OBDDs. We introduce MCMAS_LDLK, an extension of the open-source model checker MCMAS, implementing the algorithm, and discuss the experimental results obtained.
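As an indicative example of the kind of specification involved (the proposition names alarm, fault and the agent index 1 are hypothetical), an LDLK formula attaches knowledge to a regular pattern of the execution, e.g.

[\,\top^{*}\,;\,\mathit{alarm}\,]\ K_1\,\mathit{fault}

stating that immediately after any step at which alarm holds, agent 1 knows that a fault has occurred. Model checking such formulas combines automata for the regular modalities with the epistemic accessibility relations of the agents.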
Conference Paper
We introduce an extension of Strategy Logic for the imperfect-information setting, called SL_ii, and study its model-checking problem. As this logic naturally captures multi-player games with imperfect information, the problem turns out to be undecidable. We introduce a syntactical class of "hierarchical instances" for which, intuitively, as one goes down the syntactic tree of the formula, strategy quantifications are concerned with finer observations of the model. We prove that model-checking SL_ii restricted to hierarchical instances is decidable. This result, because it allows for complex patterns of existential and universal quantification on strategies, greatly generalises previous ones, such as decidability of multi-player games with imperfect information and hierarchical observations, and decidability of distributed synthesis for hierarchical systems. To establish the decidability result, we introduce and study QCTL*_ii, an extension of QCTL* (itself an extension of CTL* with second-order quantification over atomic propositions) by parameterising its quantifiers with observations. The simple syntax of QCTL*_ii allows us to provide a conceptually neat reduction of SL_ii to QCTL*_ii that separates concerns, allowing one to forget about strategies and players and focus solely on second-order quantification. While the model-checking problem of QCTL*_ii is, in general, undecidable, we identify a syntactic fragment of hierarchical formulas and prove, using an automata-theoretic approach, that it is decidable. The decidability result for SL_ii follows since the reduction maps hierarchical instances of SL_ii to hierarchical formulas of QCTL*_ii.
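A minimal example of a hierarchical instance (the notation is approximated, and the observations o_1, o_2, the agents a, b and the atom bad are hypothetical; hierarchy requires the inner observation o_2 to be at least as fine as the outer o_1):

\exists x^{o_1}\ \forall y^{o_2}\ (a, x)(b, y)\ \mathbf{G}\,\neg\mathit{bad}

Here the existentially quantified strategy x for agent a uses the coarser observation o_1, the universally quantified strategy y for agent b uses the finer observation o_2, and the binding (a, x)(b, y) assigns the strategies to the agents before evaluating the temporal goal.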
Conference Paper
A propositional logic of distributed protocols is introduced which includes both the logic of knowledge and temporal logic. Phenomena in distributed computing systems such as asynchronous time, incomplete knowledge by the computing agents in the system, and game-like behavior among the computing agents are all modeled in the logic. Two versions of the logic, the linear logic of protocols (LLP) and the tree logic of protocols (TLP) are investigated. The main result is that the set of valid formulas in LLP is undecidable.