COMPUTING DISTRIBUTED KNOWLEDGE AS THE GREATEST LOWER BOUND OF KNOWLEDGE

SANTIAGO QUINTERO a, CARLOS PINZÓN b, FRANK VALENCIA c, AND SERGIO RAMÍREZ d

a LIX, École Polytechnique de Paris
e-mail address: squinter@lix.polytechnique.fr

b Inria-LIX, École Polytechnique de Paris
e-mail address: carlos.pinzon@lix.polytechnique.fr

c CNRS-LIX, École Polytechnique de Paris
e-mail address: frank.valencia@lix.polytechnique.fr

d Universidad EAFIT
e-mail address: ssramirezr@eafit.edu.co
ABSTRACT. Let $L$ be a distributive lattice and $\mathcal{E}(L)$ be the set of join-endomorphisms of $L$. We consider the problem of finding $f \sqcap_{\mathcal{E}(L)} g$ given $L$ and $f, g \in \mathcal{E}(L)$ as inputs. (1) We show that it can be solved in time $O(n)$ where $n = |L|$. The previous upper bound was $O(n^2)$. (2) We characterize the standard notion of distributed knowledge of a group as the greatest lower bound of the join-endomorphisms representing the knowledge of each member of the group. (3) We show that deciding whether an agent has the distributed knowledge of two other agents can be computed in time $O(n^2)$ where $n$ is the size of the underlying set of states. (4) For the special case of S5 knowledge, we show that it can be decided in time $O(n\alpha_n)$ where $\alpha_n$ is the inverse of the Ackermann function.
INTRODUCTION

Structures involving a lattice $L$ and its set of join-endomorphisms $\mathcal{E}(L)$ are ubiquitous in computer science. For example, in Mathematical Morphology (MM) [BHR07], a well-established theory for the analysis and processing of geometrical structures founded upon lattice theory, join-endomorphisms correspond to one of its fundamental operations: dilations. In this and many other areas, lattices are used as rich abstract structures that capture the fundamental principles of their domain of application.

We believe that devising efficient algorithms in the abstract realm of lattice theory could be of great utility: we may benefit from many representation results and identify general properties that can be exploited in the particular domain of application of the corresponding lattices. In fact, we will use distributivity and join-irreducibility to reduce significantly the time and space needed to solve particular lattice problems. In this paper we focus on algorithms for the meet of join-endomorphisms.

We shall begin with a maximization problem: given a lattice $L$ of size $n$ and $f, g \in \mathcal{E}(L)$, find the greatest lower bound $h = f \sqcap_{\mathcal{E}(L)} g$. Notice that the input is $L$, not $\mathcal{E}(L)$.
This work has been partially supported by the ECOS-NORD project FACTS (C19M03).
Simply taking $h(a) = f(a) \sqcap_L g(a)$ for all $a \in L$ does not work because the resulting $h$ may not even be a join-endomorphism. Previous upper bounds for solving this problem are $O(n^3)$ for arbitrary lattices and $O(n^2)$ for distributive lattices [QRRV20]. We will show that this problem can actually be solved in $O(n)$ for distributive lattices.
Distributed knowledge [HM90] corresponds to knowledge that is distributed among the members of a group, without any of its members necessarily having it. This notion can be used to analyse the implications of the knowledge of a community if its members were to combine their knowledge, hence its importance. We will show that distributed knowledge can be seen as the meet of the join-endomorphisms representing the knowledge of each member of a group.

The standard structures in economics for multi-agent knowledge [Sam10a] involve a set of states (or worlds) $\Omega$ and a knowledge operator $K_i : \mathcal{P}(\Omega) \to \mathcal{P}(\Omega)$ describing the events, represented as subsets of $\Omega$, that an agent $i$ knows. The event of $i$ knowing the event $E$ is $K_i(E) = \{\omega \in \Omega \mid \mathcal{R}_i(\omega) \subseteq E\}$ where $\mathcal{R}_i \subseteq \Omega^2$ is the accessibility relation of $i$ and $\mathcal{R}_i(\omega) = \{\omega' \mid (\omega, \omega') \in \mathcal{R}_i\}$. The event of having distributed knowledge of $E$ by $i$ and $j$ is $D_{\{i,j\}}(E) = \{\omega \mid \mathcal{R}_i(\omega) \cap \mathcal{R}_j(\omega) \subseteq E\}$ [FHMV95].

Knowledge operators are join-endomorphisms of $L = (\mathcal{P}(\Omega), \supseteq)$. Intuitively, the lower an agent $i$ (its knowledge function) is placed in $\mathcal{E}(L)$, the "wiser" (or more knowledgeable) the agent is. We will show that $D_{\{i,j\}} = K_i \sqcap_{\mathcal{E}(L)} K_j$, i.e., $D_{\{i,j\}}$ can be viewed as the least knowledgeable agent that is wiser than both $i$ and $j$.
We also consider the following decision problem: given the knowledge of agents $i, j, m$, decide whether $m$ has the distributed knowledge of $i$ and $j$, i.e., whether $K_m = D_{\{i,j\}}$. The knowledge of an agent $k$ can be represented by $K_k : \mathcal{P}(\Omega) \to \mathcal{P}(\Omega)$. If available, it can also be represented, exponentially more succinctly, by $\mathcal{R}_k \subseteq \Omega^2$. In the first case the problem reduces to checking whether $K_m = K_i \sqcap_{\mathcal{E}(L)} K_j$. In the second, the problem reduces to checking $\mathcal{R}_m = \mathcal{R}_i \cap \mathcal{R}_j$, and this can be done in $O(n^2)$ where $n = |\Omega|$.

Nevertheless, we show that even without the accessibility relations, if the inputs are the knowledge operators, represented as arrays, the problem can still be solved in $O(n^2)$. We obtain this result using tools from lattice theory to exponentially reduce the number of tests on the knowledge operators (arrays) needed to decide the problem.
Furthermore, if the inputs are the accessibility relations and they are equivalences (hence they can be represented as partitions), we show that the problem can be solved basically in linear time: more precisely, in $O(n\alpha_n)$ where $\alpha_n$ is an extremely slowly growing function, the inverse of the Ackermann function. It is worth noticing that if the accessibility relations can be represented as partitions, the structures are known as Aumann structures [Aum76] and they characterize a standard notion of knowledge called S5 [FHMV95].

To prove the $O(n\alpha_n)$ bound we show a new result of independent interest using a Disjoint-Set data structure [GF64]: the intersection of two partitions of a set of size $n$ can be computed in $O(n\alpha_n)$. This result may have applications beyond knowledge, particularly in domains where Disjoint-Set is typically used; e.g., given two undirected graphs $G_1$ and $G_2$ with the same nodes, find an undirected graph $G_3$ such that two nodes are connected in it iff they are connected in both $G_1$ and $G_2$.
Contributions and Organization. The main contributions are the following:
(1) We prove that for distributive lattices of size $n$, the meet of join-endomorphisms can be computed in time $O(n)$. The previous upper bound was $O(n^2)$.
(2) We show that the distributed knowledge of a given group can be viewed as the meet of the join-endomorphisms representing the knowledge of each member of the group.
(3) We show that the problem of whether an agent has the distributed knowledge of two others can be decided in time $O(n^2)$ where $n = |\Omega|$.
(4) If the agents' knowledge can be represented as partitions, the problem in (3) can be decided in $O(n\alpha_n)$. To obtain this we provide a procedure, interesting in its own right, that computes the intersection of two partitions of a set of size $n$ in $O(n\alpha_n)$.

The above results are given in Sections 2.1 and 5. For conducting our study, in the intermediate sections (Sections 3 and 4) we adapt some representation and duality results (e.g., Jónsson-Tarski duality [JT52]) to our structures. Some of these results are part of the folklore in lattice theory, but for completeness we provide simple proofs of them. We also provide experimental results for the above-mentioned effective procedures.
1. NOTATION, DEFINITIONS AND ELEMENTARY FACTS

We list facts and notation used throughout the paper. We index joins, meets, and orders with their corresponding poset, but often omit the index when it is clear from the context.
Partially Ordered Sets and Lattices. A poset $L$ is a lattice iff each finite nonempty subset of $L$ has a supremum and infimum in $L$. It is a complete lattice iff each subset of $L$ has a supremum and infimum in $L$. A poset $L$ is distributive iff for every $a, b, c \in L$, $a \sqcup (b \sqcap c) = (a \sqcup b) \sqcap (a \sqcup c)$. We write $a \parallel b$ to denote that $a$ and $b$ are incomparable in the underlying poset. A lattice of sets is a set of sets ordered by inclusion and closed under finite unions and intersections. A powerset lattice is a lattice of sets that includes all the subsets of its top element.
Definition 1.1 (Downsets, Covers, Join-irreducibility [DP02]). Let $L$ be a lattice and $a, b \in L$. We say $b$ is covered by $a$, written $b \prec a$, if $b \sqsubset a$ and there is no $c \in L$ s.t. $b \sqsubset c \sqsubset a$. The down-set (up-set) of $a$ is $\downarrow a \stackrel{\text{def}}{=} \{b \in L \mid b \sqsubseteq a\}$ ($\uparrow a \stackrel{\text{def}}{=} \{b \in L \mid b \sqsupseteq a\}$), and the set of elements covered by $a$ is $\downarrow^{1} a \stackrel{\text{def}}{=} \{b \mid b \prec a\}$. An element $c \in L$ is said to be join-irreducible if $c = a \sqcup b$ implies $c = a$ or $c = b$. If $L$ is finite, $c$ is join-irreducible if $|\downarrow^{1} c| = 1$. The set of all join-irreducible elements of $L$ is $\mathcal{J}(L)$ and $\mathcal{J}_c \stackrel{\text{def}}{=} \downarrow c \cap \mathcal{J}(L)$.
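For a finite lattice these notions are directly computable from the order relation. The following Python sketch is our own illustration (not from the paper); it assumes the lattice is given as a list of elements together with a binary predicate leq for $\sqsubseteq$.

    def covers(elements, leq, a):
        # Elements strictly below a.
        below = [b for b in elements if leq(b, a) and b != a]
        # b is covered by a iff no c lies strictly between b and a.
        return [b for b in below
                if not any(leq(b, c) and c != b and c != a for c in below)]

    def join_irreducibles(elements, leq):
        # In a finite lattice, c is join-irreducible iff it covers exactly one element.
        return [c for c in elements if len(covers(elements, leq, c)) == 1]

    # Example: the powerset lattice of {0, 1}, ordered by inclusion.
    elems = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]
    print(join_irreducibles(elems, lambda x, y: x <= y))  # the two singletons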
Posets of maps. A map $f : X \to Y$ where $X$ and $Y$ are posets is monotonic (or order-preserving) if $a \sqsubseteq_X b$ implies $f(a) \sqsubseteq_Y f(b)$ for every $a, b \in X$. We say that $f$ preserves the join of $S \subseteq X$ iff $f(\bigsqcup S) = \bigsqcup\{f(c) \mid c \in S\}$. A self-map on $X$ is a function $f : X \to X$. If $X$ and $Y$ are posets, we define $\mathcal{F}$ as the poset of all functions from $X$ to $Y$. We use $\langle X \to Y \rangle$ to denote the poset of monotonic functions of $\mathcal{F}$. The functions in $\mathcal{F}$ are ordered pointwise: i.e., $f \sqsubseteq_{\mathcal{F}} g$ iff $f(a) \sqsubseteq_Y g(a)$ for every $a \in X$.
Definition 1.2 (Join-endomorphisms and $\mathcal{E}(L)$). Let $L$ be a lattice. We say that a self-map is a (bottom-preserving) join-endomorphism iff it preserves the join of every finite subset of $L$. Define $\mathcal{E}(L)$ as the set of all join-endomorphisms of $L$. Furthermore, given $f, g \in \mathcal{E}(L)$, define $f \sqsubseteq_{\mathcal{E}} g$ iff $f(a) \sqsubseteq g(a)$ for every $a \in L$.
Proposition 1.3 ([GS58, DP02]). Let $L$ be a lattice.
P.1 $f \in \mathcal{E}(L)$ iff $f(\bot) = \bot$ and $f(a \sqcup b) = f(a) \sqcup f(b)$ for all $a, b \in L$.
P.2 If $f \in \mathcal{E}(L)$ then $f$ is monotonic.
P.3 If $L$ is a complete lattice, then $\mathcal{E}(L)$ is a complete lattice.
P.4 $\mathcal{E}(L)$ is a complete distributive lattice iff $L$ is a complete distributive lattice.
P.5 If $L$ is finite and distributive, $\mathcal{E}(L) \cong \langle \mathcal{J}(L) \to L \rangle$.
P.6 If $L$ is a finite lattice, $e = \bigsqcup_L\{c \in \mathcal{J}(L) \mid c \sqsubseteq e\}$ for every $e \in L$.
P.7 If $L$ is finite and distributive, $f \in \mathcal{E}(L)$ iff $(\forall e \in L)\ f(e) = \bigsqcup\{f(e') \mid e' \in \mathcal{J}_e\}$.
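Property P.1 gives a direct finite test for membership in $\mathcal{E}(L)$. The following small Python check is our own sketch (not from the paper); it assumes the lattice is given by its element list, its bottom, and a join operation, and that f is a dictionary from elements to elements.

    def is_join_endomorphism(elements, bottom, join, f):
        # P.1: f(bottom) = bottom and f(a ⊔ b) = f(a) ⊔ f(b) for all a, b.
        if f[bottom] != bottom:
            return False
        return all(f[join(a, b)] == join(f[a], f[b])
                   for a in elements for b in elements)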
We shall use these posets in our examples: $\bar{n}$ is $\{1, \ldots, n\}$ with the order $x \sqsubseteq y$ iff $x = y$, and $M_n \stackrel{\text{def}}{=} (\bar{n})^{\top}_{\bot}$ is the lattice that results from adding a top and a bottom to $\bar{n}$.
2. COMPUTING THE MEET OF JOIN-ENDOMORPHISMS

Join-endomorphisms and their meet arise as fundamental computational operations in computer science. We therefore believe that the problem of computing these operations in the abstract realm of lattice theory is a relevant issue: we may identify general properties that can be exploited in all instances of these lattices.
In this section, we address the problem of computing the meet of join-endomorphisms. Let us consider the following maximization problem.

Problem 2.1. Given a lattice $L$ of size $n$ and two join-endomorphisms $f, g : L \to L$, find the greatest join-endomorphism $h : L \to L$ below both $f$ and $g$, i.e., $h = f \sqcap_{\mathcal{E}(L)} g$.
Notice that the lattice $\mathcal{E}(L)$, which could be exponentially bigger than $L$ [QRRV20], is not an input to the problem above. It may not be immediate how to find $h$; e.g., see the endomorphism $h$ in Figure 1a for a small lattice of four elements. A naive approach to find $f \sqcap_{\mathcal{E}(L)} g$ could be to attempt to compute it pointwise by taking $h(a) = f(a) \sqcap_L g(a)$ for every $a \in L$. Nevertheless, the somewhat appealing equation

$(f \sqcap_{\mathcal{E}(L)} g)(a) = f(a) \sqcap_L g(a)$   (2.1)

does not hold in general, as illustrated in the lattices $M_2$ and $M_3$ in Figure 1b and Figure 1c.
A general approach in [QRRV20] for arbitrary lattices shows how to find $h$ in Problem 2.1 by successive approximations $h_0 \sqsupset h_1 \sqsupset \cdots \sqsupset h_i$, starting with some self-map $h_0$ known to be smaller than both $f$ and $g$, and greater than $h$, while keeping the invariant $h_i \sqsupseteq h$. The starting point is the naive approach above: $h_0(a) = f(a) \sqcap g(a)$ for all $a \in L$. The approach computes decreasing upper bounds of $h$ by correcting in $h_i$ the image under $h_{i-1}$ of some values $b, c, b \sqcup c$ violating the property $h_{i-1}(b) \sqcup h_{i-1}(c) = h_{i-1}(b \sqcup c)$. The correction satisfies $h_{i-1} \sqsupset h_i$ and maintains the invariant $h_i \sqsupseteq h$. This approach eventually finds $h$ in $O(n^3)$ basic lattice operations (binary meets and joins).

For the sake of the presentation, we approach the above problem for distributive and arbitrary lattices separately.
2.1. Algorithms for Distributive Lattices. Recall that in finite distributive lattices, and more generally in co-Heyting algebras [MT46], the subtraction operator $\ominus$ is uniquely determined by the Galois connection $b \sqsupseteq c \ominus a$ iff $a \sqcup b \sqsupseteq c$. Based on the following proposition, it was shown in [QRRV20] that if the only basic operations are joins or meets, $h$ can be computed in $O(n^3)$ of them, but if we also allow subtraction as a basic operation, the bound can be improved to $O(n^2)$.
Proposition 2.2 ([QRRV20]). Let $L$ be a finite distributive lattice. Let $h = f \sqcap_{\mathcal{E}(L)} g$. Then (1) $h(c) = \bigsqcap_L\{f(a) \sqcup g(b) \mid a \sqcup b \sqsupseteq c\}$, and (2) $h(c) = \bigsqcap_L\{f(a) \sqcup g(c \ominus a) \mid a \sqsubseteq c\}$.
Nevertheless, it turns out that we can partly use Equation 2.1 to obtain a better upper bound. The following lemma states that Equation 2.1 holds if $L$ is distributive and $a \in \mathcal{J}(L)$.

Lemma 2.3. Let $L$ be a finite distributive lattice and $f, g \in \mathcal{E}(L)$. Then the following equation holds: $(f \sqcap_{\mathcal{E}(L)} g)(a) = f(a) \sqcap_L g(a)$ for every $a \in \mathcal{J}(L)$.
FIGURE 1. (a) $h = f \sqcap_{\mathcal{E}(L)} g$ for a small lattice of four elements. (b) $h(a) \stackrel{\text{def}}{=} f(a) \sqcap g(a)$ for $a \in M_2$ is not in $\mathcal{E}(M_2)$: $h(1 \sqcup 2) \neq h(1) \sqcup h(2)$. (c) Any $h : M_3 \to M_3$ s.t. $h(a) = f(a) \sqcap g(a)$ for $a \in \mathcal{J}(M_3)$ is not in $\mathcal{E}(M_3)$: $h(\top) = h(1 \sqcup 2) = h(1) \sqcup h(2) = 1 \neq h(2) \sqcup h(3) = h(2 \sqcup 3) = h(\top)$.
Proof. From Proposition 2.2, $(f \sqcap_{\mathcal{E}(L)} g)(a) = \bigsqcap\{f(a') \sqcup g(a \ominus a') \mid a' \sqsubseteq a\}$. Note that since $a \in \mathcal{J}(L)$, if $a' \sqsubseteq a$ then $a \ominus a' = a$ when $a \neq a'$, and $a \ominus a' = \bot$ when $a = a'$. Then, $\{f(a') \sqcup g(a \ominus a') \mid a' \sqsubseteq a\} = \{f(a') \sqcup g(a \ominus a') \mid a' \sqsubset a\} \cup \{f(a) \sqcup g(\bot)\} = \{f(a') \sqcup g(a) \mid a' \sqsubset a\} \cup \{f(a)\} = \{f(a') \sqcup g(a) \mid \bot \sqsubset a' \sqsubset a\} \cup \{f(a), g(a)\}$. By absorption, we know that $(f(a') \sqcup g(a)) \sqcap g(a) = g(a)$. Finally, using properties of $\sqcap$, $(f \sqcap_{\mathcal{E}(L)} g)(a) = \bigsqcap(\{f(a') \sqcup g(a) \mid \bot \sqsubset a' \sqsubset a\} \cup \{f(a), g(a)\}) = \bigsqcap\{f(a') \sqcup g(a) \mid \bot \sqsubset a' \sqsubset a\} \sqcap f(a) \sqcap g(a) = f(a) \sqcap g(a)$.
It is worth noting that Lemma 2.3 may not hold for non-distributive lattices. This is illustrated in Figure 1c with the archetypal non-distributive lattice $M_3$. Suppose that $f$ and $g$ are given as in Figure 1c. Let $h = f \sqcap_{\mathcal{E}(L)} g$ with $h(a) = f(a) \sqcap g(a)$ for all $a \in \{1, 2, 3\} = \mathcal{J}(M_3)$. Since $h$ is a join-endomorphism, we would have $h(\top) = h(1 \sqcup 2) = h(1) \sqcup h(2) = 1 \neq h(2) \sqcup h(3) = h(2 \sqcup 3) = h(\top)$, a contradiction.
Lemma 2.3 and Property P.7 lead us to the following characterization of meets over $\mathcal{E}(L)$.

Theorem 2.4. Let $L$ be a finite distributive lattice and $f, g \in \mathcal{E}(L)$. Then $h = f \sqcap_{\mathcal{E}(L)} g$ iff $h$ satisfies

$h(a) = \begin{cases} f(a) \sqcap_L g(a) & \text{if } a \in \mathcal{J}(L) \text{ or } a = \bot \\ h(b) \sqcup_L h(c) & \text{if } b, c \in \downarrow^{1} a \text{ with } b \neq c \end{cases}$   (2.2)
Proof. The only-if direction follows from Lemma 2.3 and P.7. For the if-direction, suppose that $h$ satisfies Equation 2.2. If $h \in \mathcal{E}(L)$ the result follows from Lemma 2.3 and P.7. To prove $h \in \mathcal{E}(L)$ from P.7 it suffices to show

$h(e) = \bigsqcup\{h(e') \mid e' \in \mathcal{J}_e\}$   (2.3)

for every $e \in L$. From Equation 2.2 and since $f$ and $g$ are monotonic, $h$ is monotonic. If $e \in \mathcal{J}(L)$ then $h(e') \sqsubseteq h(e)$ for every $e' \in \mathcal{J}_e$. Therefore, $\bigsqcup\{h(e') \mid e' \in \mathcal{J}_e\} = h(e)$. If $e \notin \mathcal{J}(L)$, we proceed by induction. Assume Equation 2.3 holds for all $a \in \downarrow^{1} e$. By definition, $h(e) = h(b) \sqcup h(c)$ for any $b, c \in \downarrow^{1} e$ with $b \neq c$. Then, we have $h(b) = \bigsqcup\{h(e') \mid e' \in \mathcal{J}_b\}$ and $h(c) = \bigsqcup\{h(e') \mid e' \in \mathcal{J}_c\}$. Notice that $e' \in \mathcal{J}_b$ or $e' \in \mathcal{J}_c$ iff $e' \in \mathcal{J}_{b \sqcup c}$, since $L$ is distributive. Thus, $h(e) = h(b) \sqcup h(c) = \bigsqcup\{h(e') \mid e' \in \mathcal{J}_{b \sqcup c}\} = \bigsqcup\{h(e') \mid e' \in \mathcal{J}_e\}$ as wanted.
We conclude this section by stating the time complexity $O(n)$ of computing $h$ in the above theorem. As in [QRRV20], the time complexity is determined by the number of basic binary lattice operations (i.e., meets and joins) performed during execution.
FIGURE 2. Comparison between an implementation of Proposition 2.2 (DMeet) and Theorem 2.4 (DMeet+). (A) Powerset lattices. (B) Arbitrary distributive lattices.
Corollary 2.5. Given a distributive lattice $L$ of size $n$, and functions $f, g \in \mathcal{E}(L)$, the function $h = f \sqcap_{\mathcal{E}(L)} g$ can be computed in $O(n)$ binary lattice operations.

Proof. If $a \in \mathcal{J}(L)$ then from Theorem 2.4, $h(a)$ can be computed as $f(a) \sqcap g(a)$. If $a = \bot$ then $h(a)$ is $\bot$. If $a \notin \mathcal{J}(L)$ and $a \neq \bot$, we pick any $b, c \in \downarrow^{1} a$ such that $b \neq c$ and compute $h(a)$ recursively as $h(b) \sqcup h(c)$ by Theorem 2.4. We can use a lookup table to keep track of the values of $a \in L$ for which $h(a)$ has been computed, starting with all $a \in \mathcal{J}(L)$. Since $h(a)$ is only computed once for each $a \in L$, either as a meet for elements in $\mathcal{J}(L)$ or as a join otherwise, we only perform $n$ binary lattice operations.
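The procedure in the proof of Corollary 2.5 admits a short Python rendering. The sketch below is our own illustration (not the authors' implementation); it assumes the lattice is given by its element list, bottom, join and meet operations, its join-irreducibles, and a table two_covers mapping every element that is neither bottom nor join-irreducible to two distinct elements it covers. f and g are dictionaries.

    def dmeet_plus(elements, bottom, join, meet, irreducibles, two_covers, f, g):
        # Compute h = f ⊓_{E(L)} g on a finite distributive lattice (Theorem 2.4).
        h = {bottom: bottom}
        for a in irreducibles:          # h(a) = f(a) ⊓ g(a) on J(L)
            h[a] = meet(f[a], g[a])

        def compute(a):
            if a not in h:              # h(a) = h(b) ⊔ h(c) for two covered b ≠ c
                b, c = two_covers[a]
                h[a] = join(compute(b), compute(c))
            return h[a]

        for a in elements:
            compute(a)
        return h

As in the proof, each element of $\mathcal{J}(L)$ costs one meet and every other non-bottom element costs one join, so the sketch performs $n$ binary lattice operations overall.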
2.1.1. Experimental Results. Now we present some experimental results comparing the average runtime between the previous algorithm in [QRRV20] based on Proposition 2.2, referred to as DMeet, and the proposed algorithm in Theorem 2.4, called DMeet+.

Figure 2 shows the average runtime of each algorithm, from 100 runs with a random pair of join-endomorphisms. For Figure 2a, we compared each algorithm against powerset lattices of sizes between $2^2$ and $2^{10}$. For Figure 2b, 10 random distributive lattices of size 10 were selected. In both cases, all binary lattice operations are guaranteed a complexity in $O(1)$ to showcase the quadratic nature of DMeet compared to the linear growth of DMeet+. The time reduction from DMeet to DMeet+ is also reflected in a reduction in the number of $\sqcup$ and $\sqcap$ operations performed, as illustrated in Table 1. For DMeet+, given a distributive lattice $L$ of size $n$, $\#\sqcap = |\mathcal{J}(L)|$ and $\#\sqcup = |L| - |\mathcal{J}(L)| - 1$ ($\bot$ is directly mapped to $\bot$).
Size | DMeet Time [s] | DMeet+ Time [s] | DMeet #⊔ | DMeet+ #⊔ | DMeet #⊓ | DMeet+ #⊓
16   | 0.000246 | 0.000024 | 81    | 11   | 81    | 4
32   | 0.000971 | 0.000059 | 243   | 26   | 243   | 5
64   | 0.002659 | 0.000094 | 729   | 57   | 729   | 6
128  | 0.008735 | 0.000163 | 2187  | 120  | 2187  | 7
256  | 0.038086 | 0.000302 | 6561  | 247  | 6561  | 8
512  | 0.244304 | 0.000645 | 19683 | 502  | 19683 | 9
1024 | 1.518173 | 0.001468 | 59049 | 1013 | 59049 | 10

TABLE 1. Average runtime in seconds over powerset lattices, and number of $\sqcup$ and $\sqcap$ operations performed by each algorithm.

2.2. Algorithms for Arbitrary Lattices. The DMeet+ algorithm, introduced in Section 2.1, computes the meet of join-endomorphisms on distributive lattices in $O(n)$. This section explores algorithms for computing the meet of join-endomorphisms on arbitrary lattices, not necessarily distributive. The best known algorithm for this task is GMeet+ [QRRV20], which is based on successive approximations (as described at the beginning of Section 2) and has a complexity of $O(n^3)$. This section presents alternative algorithms for the same task, each with its proof of correctness and experimental analysis. These algorithms are experimentally faster than GMeet+, but finding tight bounds for their runtime complexity is still an open problem.
GMeet+ is an enrichment of the simple abstract algorithm GMeet [QRRV20], which is also the base for the alternative algorithms introduced in this paper and is presented here as Algorithm 1. The proof of correctness of GMeet and the description of GMeet+ are found in the original paper [QRRV20].
GMeet starts with the function $h \stackrel{\text{def}}{=} f \sqcap_{\mathcal{F}} g$, computed pointwise as $h(a) = f(a) \sqcap_L g(a)$, which is not necessarily a join-endomorphism. Then, it iterates a loop that resolves conflicts, in whatever order they are found, until there are no conflicts at all. Recall that we refer to a conflict as a pair of elements not conforming to the join-endomorphism property: $h(a \sqcup b) = h(a) \sqcup h(b)$. The main invariants kept during the loop are that the function $h$ is an upper bound of the target function $f \sqcap_{\mathcal{E}(L)} g$, and that $h$ decreases strictly whenever a conflict is resolved.
Algorithm 1 GMeet($h$), $h \in \mathcal{F}$. In particular, GMeet($f \sqcap_{\mathcal{F}} g$) $= f \sqcap_{\mathcal{E}(L)} g$.
1: procedure GMEET($h$)
2:   While $\exists\, a, b \in L$ with $h(a \sqcup b) \neq h(a) \sqcup h(b)$:
3:     If $h(a \sqcup b) \sqsupset h(a) \sqcup h(b)$:
4:       $h(a \sqcup b) \leftarrow h(a) \sqcup h(b)$
5:     Else:
6:       $h(a) \leftarrow h(a) \sqcap h(a \sqcup b)$
7:       $h(b) \leftarrow h(b) \sqcap h(a \sqcup b)$
8:   return $h$  ▷ Maximal join-end. below the input

Algorithm 2 GMeetMono($h$), $h \in \mathcal{F}$.
1: procedure GMEETMONO($h$)
2:   $h \leftarrow$ MonoBelow($h$)
3:   While $\exists\, a, b \in L$ with $h(a \sqcup b) \sqsupset h(a) \sqcup h(b)$:
4:     $c \leftarrow h(a) \sqcup h(b)$
5:     For each $x \sqsubseteq a \sqcup b$:
6:       $h(x) \leftarrow h(x) \sqcap c$
7:   return $h$  ▷ Maximal join-end. below the input

Algorithm 3 MonoBelow($h$), $h \in \mathcal{F}$.
1: procedure MONOBELOW($h$)
2:   For each $b \in L$, in top-down order:
3:     For each child $a$ of $b$:
4:       $h(a) \leftarrow h(a) \sqcap h(b)$
5:   return $h$  ▷ Maximal monotone below the input

Algorithm 4 GMeetMonoLazy($h$), $h \in \mathcal{F}$.
1: procedure GMEETMONOLAZY($h$)
2:   $h \leftarrow$ MonoBelow($h$)
3:   Do:
4:     $h_0 \leftarrow h$
5:     For $a, b \in L$:
6:       $h(a \sqcup b) \leftarrow h(a \sqcup b) \sqcap (h(a) \sqcup h(b))$
7:     $h \leftarrow$ MonoBelow($h$)
8:   While $h \neq h_0$
9:   return $h$  ▷ Maximal join-end. below the input
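For concreteness, Algorithm 1 can be transcribed into Python as follows. This is a sketch of ours, not the authors' implementation; it assumes the lattice is given by its element list and callables join, meet and a strict-order test lt, and that h is a dictionary modified in place.

    def gmeet(elements, join, meet, lt, h):
        # Algorithm 1 (GMeet): repair conflicts until h is a join-endomorphism.
        changed = True
        while changed:
            changed = False
            for a in elements:
                for b in elements:
                    ab = join(a, b)
                    rhs = join(h[a], h[b])
                    if h[ab] == rhs:
                        continue
                    if lt(rhs, h[ab]):            # h(a ⊔ b) strictly above h(a) ⊔ h(b)
                        h[ab] = rhs
                    else:
                        h[a] = meet(h[a], h[ab])
                        h[b] = meet(h[b], h[ab])
                    changed = True
        return h

The repeated scan over all pairs is just one way of realizing the existential quantifier of line 2; any fair strategy for picking conflicting pairs yields the same result.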
GMeet was originally designed as an algorithm for computing $f \sqcap_{\mathcal{E}(L)} g$, but it can serve the more general purpose of finding the maximal join-endomorphism below a given arbitrary function $h \in \mathcal{F}$. This maximal join-endomorphism is always well defined, as will be shown in Corollary 2.7, derived from Theorem 2.6.

Algorithm 1 differs from the original version of GMeet in that it takes a single function $h \in \mathcal{F}$ as input instead of two functions $f, g \in \mathcal{F}$. This is done precisely to reflect the fact that GMeet solves a more general problem, and the original proof of correctness of GMeet suffices for proving the version presented here because said proof only uses $f$ and $g$ to set the starting point $f \sqcap_{\mathcal{F}} g$, and to define the target function $f \sqcap_{\mathcal{E}(L)} g$, which coincides with the maximal join-endomorphism below the starting point $f \sqcap_{\mathcal{F}} g$.
Theorem 2.6. Let $S \subseteq \mathcal{F}$ be a sublattice of $\mathcal{F}$ such that the join operator $\sqcup_S$ in $S$ coincides with the pointwise join operator $\sqcup_{\mathcal{F}}$ in $\mathcal{F}$. For every $f \in \mathcal{F}$, there is a unique maximal $h \in S$ with $h \sqsubseteq f$.

Proof. Suppose $h_1, h_2 \in S$ are two different maximal functions in $S$ satisfying $h_1, h_2 \sqsubseteq f$, i.e. $h \stackrel{\text{def}}{=} h_1 \sqcup_{\mathcal{F}} h_2 \sqsubseteq f$. Since $\sqcup_S = \sqcup_{\mathcal{F}}$, then $h \in S$, and since $h_1$ and $h_2$ are incomparable, then $h_1, h_2 \sqsubset h \sqsubseteq f$. This contradicts that $h_1$ and $h_2$ were maximal in the first place.
The following is an immediate result from the above theorem.

Corollary 2.7. For any $f \in \mathcal{F}$, there is a unique maximal $h \in \mathcal{E}$ with $h \sqsubseteq f$.

Corollary 2.8. For any $f \in \mathcal{F}$, there is a unique maximal monotonic $h \in \mathcal{F}$ with $h \sqsubseteq f$.

Theorem 2.6 can also be used directly to derive Corollary 2.8 because in the sublattice of monotonic functions, the join operator is the pointwise join. MonoBelow, i.e. Algorithm 3, implements this corollary by computing the maximal monotonic function below a given one in $O(n + m)$, where $n$ is the number of elements in the lattice and $m$ is the number of (direct) child relations that exist between elements. The algorithm assumes precomputation of a list of children for each element in the lattice, and a list in topological order, from top down to bottom.

GMeetMono (Algorithm 2) is an alternative algorithm to GMeet that also implements Corollary 2.7. It works by introducing an invariant to GMeet that preserves the monotonicity of $h$ on each iteration of the main loop. This is shown formally in Theorem 2.9.
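A Python sketch of Algorithms 3 and 2, under the same assumptions as the GMeet sketch above (our illustration, not the paper's code): top_down lists the elements from top to bottom, children[b] lists the elements covered by b, and downset[x] lists the elements below x.

    def mono_below(top_down, children, meet, h):
        # Algorithm 3 (MonoBelow): maximal monotone function below h.
        for b in top_down:
            for a in children[b]:
                h[a] = meet(h[a], h[b])
        return h

    def gmeet_mono(elements, top_down, children, downset, join, meet, lt, h):
        # Algorithm 2 (GMeetMono): like GMeet, but h stays monotone at every step.
        h = mono_below(top_down, children, meet, h)
        changed = True
        while changed:
            changed = False
            for a in elements:
                for b in elements:
                    ab = join(a, b)
                    c = join(h[a], h[b])
                    if lt(c, h[ab]):          # conflict: h(a ⊔ b) strictly above c
                        for x in downset[ab]:
                            h[x] = meet(h[x], c)
                        changed = True
        return h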
Theorem 2.9. GMeetMono computes the unique maximal join-endomorphism below the input $h$.

Proof. Let $h_0 \in \mathcal{F}$ be the input of the algorithm, and $h^{\star} \in \mathcal{E}$ the unique maximal join-endomorphism satisfying $h^{\star} \sqsubseteq h_0$, i.e. the target output. The algorithm works with the invariant property that $h$ is monotonic and $h \sqsupseteq h^{\star}$. The first step, which calls MonoBelow, guarantees this invariant because, on the one hand, $h$ is monotonic, and on the other, since all join-endomorphisms are monotonic, the maximal monotonic function $h$ with $h \sqsubseteq h_0$ satisfies $h_0 \sqsupseteq h \sqsupseteq h^{\star}$.

For analyzing the while loop, let $h$ and $h'$ denote the function $h$ before and after an iteration. Let us show that the invariant is preserved, that is, whenever $h$ is monotonic and $h \sqsupseteq h^{\star}$, then $h'$ is monotonic and $h' \sqsupseteq h^{\star}$. Indeed, if there are $a, b \in L$ with $h(a \sqcup b) \sqsupset h(a) \sqcup h(b)$, then for all $x$ we have $h'(x) \stackrel{\text{def}}{=} h(x) \sqcap (h(a) \sqcup h(b))$ whenever $x \sqsubseteq a \sqcup b$, and $h'(x) \stackrel{\text{def}}{=} h(x)$ otherwise. In the first case, $h'(x) = h(x) \sqcap (h(a) \sqcup h(b)) \sqsupseteq h^{\star}(x) \sqcap (h^{\star}(a) \sqcup h^{\star}(b)) = h^{\star}(x) \sqcap h^{\star}(a \sqcup b) = h^{\star}(x)$, hence $h'(x) \sqsupseteq h^{\star}(x)$, and in the second case, $h'(x) = h(x) \sqsupseteq h^{\star}(x)$. Thus $h'$ satisfies $h' \sqsupseteq h^{\star}$. Moreover, $h'$ can be expressed as the pointwise meet of $h$ and the function that maps all elements below $a \sqcup b$ to $h(a) \sqcup h(b)$ and all other elements to the top element. Since both functions are monotone, it follows that $h'$ is also monotone, thus the invariant is preserved. Moreover, the loop guarantees that $h' \sqsubset h$ because $h'(a \sqcup b) \sqsubset h(a \sqcup b)$; hence, in addition to preserving the invariant, the main loop terminates. Termination occurs when no elements $a, b \in L$ exist satisfying the loop condition. Since $h$ is monotone, termination happens if and only if $h(a \sqcup b) = h(a) \sqcup h(b)$ for all $a, b \in L$.
GMeetMonoLazy (Algorithm 4) is a lazy variant of GMeet that delays the transformation of $h$ into a monotonic function until after the iteration over all pairs $a, b \in L$.

Theorem 2.10. GMeetMonoLazy computes the unique maximal join-endomorphism below the input $h$.

Proof. As in the proof of Theorem 2.9, let $h$ and $h'$ be the functions before and after an iteration of the do-while loop. Let also $g$ be the function $h$ after the for loop is executed and before the algorithm MonoBelow is called, so that $h' = \text{MonoBelow}(g)$. Since MonoBelow is called before each iteration, $h$ and $h'$ are always monotone functions. To show that $h' \sqsupseteq h^{\star}$, it suffices to show that $g \sqsupseteq h^{\star}$, because $\mathcal{E}$ is a sublattice of the lattice of monotone functions. Moreover, by induction, letting $f$ and $f'$ be the function $h$ before and after each iteration of the for loop, it suffices to show that whenever $f \sqsupseteq h^{\star}$ then $f' \sqsupseteq h^{\star}$. This holds because $f'(a \sqcup b) = f(a \sqcup b) \sqcap (f(a) \sqcup f(b)) \sqsupseteq f(a \sqcup b) \sqcap (h^{\star}(a) \sqcup h^{\star}(b)) = f(a \sqcup b) \sqcap h^{\star}(a \sqcup b) \sqsupseteq h^{\star}(a \sqcup b) \sqcap h^{\star}(a \sqcup b) = h^{\star}(a \sqcup b)$. Thus all $f'$, $g$ and $h'$ are upper bounds of $h^{\star}$. Termination occurs when $h' = h$, which happens if and only if $h = g = h'$, if and only if $h(a \sqcup b) = h(a) \sqcup h(b)$ for all $a, b \in L$.
The main contribution of GMeetMono and GMeetMonoLazy over the existing algorithm GMeet+ is their empirical speed superiority. Finding tight upper bounds for these two algorithms is not done in this paper and remains an open theoretical problem. A secondary contribution of the algorithms is that they approach the problem from a different theoretical perspective, which may lead to ideas for future faster algorithms.
2.2.1. Experimental Results. The runtime complexity of GMeetMono and GMeetMonoLazy has an upper bound of $O(n^4)$ because the number of updates per element can never exceed the number of elements $n$, but experimentally this bound seems to be very loose. Table ?? shows the time and profiling counters for the algorithms on several experiments. The experiments suggest a behavior near $O(n^2)$ for both algorithms after applying heuristics.

The algorithms GMeet* and GMeetMono* presented in Table ?? correspond to implementations of GMeet and GMeetMono respectively with heuristics for executing the existential quantifier.
3. A REPRESENTATION OF JOIN-IRREDUCIBLE ELEMENTS OF $\mathcal{E}(L)$

In this section we state a characterization of the join-irreducible elements of the lattice of join-endomorphisms $\mathcal{E}(L)$. We use it to prove a representation result for join-endomorphisms. Some of these results may be part of the folklore in lattice theory; our purpose here is to identify and use them as technical tools in the following section.

The following family of functions can be used to represent $\mathcal{J}(\mathcal{E}(L))$.

Definition 3.1. Let $L$ be a lattice and $a, b \in \mathcal{J}(L)$. Let $f_{a,b} : L \to L$ be given by $f_{a,b}(x) \stackrel{\text{def}}{=} b$ if $x \in \uparrow a$, otherwise $f_{a,b}(x) \stackrel{\text{def}}{=} \bot$.

It is easy to verify that $f_{a,b}(\bot) = \bot$. On the other hand, for every $c, d \in L$, $f_{a,b}(c \sqcup d) = f_{a,b}(c) \sqcup f_{a,b}(d)$ follows from the fact that $a \in \mathcal{J}(L)$ and by cases on $c \sqcup d \in \uparrow a$ and $c \sqcup d \notin \uparrow a$. Thus, from P.1 we know that $f_{a,b}$ is a join-endomorphism, and from P.2 it is monotone. Therefore, $f_{a,b}|_{\mathcal{J}(L)} \in \langle \mathcal{J}(L) \to L \rangle$. In addition, we point out the following rather technical lemma that gives us a way to construct, from a function $g \in \langle \mathcal{J}(L) \to L \rangle$, a function $h \in \langle \mathcal{J}(L) \to L \rangle$ covered by $g$.
Lemma 3.2. Let $L$ be a finite lattice. Let $g \in \langle \mathcal{J}(L) \to L \rangle$, $x_0 \in \mathcal{J}(L)$ and $y_0 \in L$ be such that $y_0 \in \downarrow^{1} g(x_0)$ and $g(x) \sqsubseteq y_0$ for all $x \sqsubset x_0$. Define $h : \mathcal{J}(L) \to L$ as $h(x) \stackrel{\text{def}}{=} y_0$ if $x = x_0$, else $h(x) \stackrel{\text{def}}{=} g(x)$. Then $h$ is monotonic and $g$ covers $h$.

Proof. For notational convenience let $M = \langle \mathcal{J}(L) \to L \rangle$. We will prove (1) $h \in M$ and (2) $h \in \downarrow^{1} g$ in $M$.

To prove (1), let $x_1, x_2 \in \mathcal{J}(L)$ with $x_1 \sqsubset x_2$. We will show that $h(x_1) \sqsubseteq h(x_2)$.
- If both $x_1 \neq x_0$ and $x_2 \neq x_0$, then $h(x_1) = g(x_1) \sqsubseteq g(x_2) = h(x_2)$.
- If $x_0 = x_1 \sqsubset x_2$, then $h(x_1) = y_0 \sqsubset g(x_1) \sqsubseteq g(x_2) = h(x_2)$.
- If $x_1 \sqsubset x_2 = x_0$, then $h(x_1) = g(x_1) \sqsubseteq y_0 = h(x_2)$.

Now we prove (2). From the definition of $h$, it follows that $h \sqsubset_M g$. If there is a function $\bar{h} \in M$ such that $h \sqsubset_M \bar{h} \sqsubset_M g$, then it must be the case that $\bar{h}(x) = g(x)$ for all $x \in \mathcal{J}(L)$ with $x \neq x_0$ and $h(x_0) \sqsubset \bar{h}(x_0) \sqsubset g(x_0)$, which is impossible since $h(x_0) = y_0$ and $y_0 \in \downarrow^{1} g(x_0)$. Thus, we conclude $g$ covers $h$ in $M$.
We proceed to characterize the join-irreducible elements of the lattice $\mathcal{E}(L)$. The next lemma, together with P.6, tells us that every join-endomorphism in $\mathcal{E}(L)$ can be expressed solely as a join of functions of the form $f_{a,b}$ defined in Definition 3.1.
Lemma 3.3. Let $L$ be a finite distributive lattice. For any join-endomorphism $f \in \mathcal{E}(L)$, $f$ is join-irreducible iff $f = f_{a,b}$ for some $a, b \in \mathcal{J}(L)$.
Proof. For notational convenience let $M = \langle \mathcal{J}(L) \to L \rangle$. From P.5 it suffices to prove: $g \in M$ is join-irreducible in $M$ iff $g = g_{a,b}$ for some $a, b \in \mathcal{J}(L)$, where $g_{a,b} = f_{a,b}|_{\mathcal{J}(L)}$. We use the following immediate consequence of Lemma 3.2.

Property (⋆): Let $g \in M$, $x_1, x_2 \in \mathcal{J}(L)$ and $y_1, y_2 \in L$ be such that for each $i \in \{1, 2\}$, $y_i \in \downarrow^{1} g(x_i)$ and $g(x) \sqsubseteq y_i$ for all $x \sqsubset x_i$. If $x_1 \neq x_2$ or $y_1 \neq y_2$, then there are two distinct functions $g_1, g_2 \in M$ that are covered by $g$ in $M$.
(1) For the only-if direction, let $X = \{x \in \mathcal{J}(L) \mid g(x) \neq \bot\}$ and $Y = \{g(x) \mid x \in X\}$. If $X = \emptyset$, then $g(x) = \bot$ for all $x \in \mathcal{J}(L)$, in which case $g$ is not join-irreducible in $M$. Thus, necessarily, $X \neq \emptyset$ and $Y \neq \emptyset$. Let us now prove that: (a) $X$ has a minimum element $a \in \mathcal{J}(L)$ with $g(a) \in \mathcal{J}(L)$, and (b) $Y = \{g(a)\}$.

(a) Let $x_1, x_2 \in X$ be minimal elements in $X$. For each $i \in \{1, 2\}$, let $y_i \in \downarrow^{1} g(x_i)$. Since $x_i$ is minimal, it follows that $g(x) = \bot$ for all $x \sqsubset x_i$. From (⋆) and the fact that $g$ is join-irreducible, we have $x_1 = x_2$ and $y_1 = y_2$. Thus, $X$ has a minimum element. We refer to such an element as $a$. Furthermore, $|\downarrow^{1} g(a)| = 1$, i.e. $g(a) \in \mathcal{J}(L)$.

(b) Let $Y' = Y \setminus \{g(a)\}$. For the sake of contradiction, suppose $Y' \neq \emptyset$. Let $y \in Y'$ be a minimal element and $x'$ be a minimal element of $X' = \{x \in X \mid g(x) = y\}$. Since $a \sqsubset x'$ and $y \neq g(a)$, we have $g(a) \sqsubset g(x') = y$. Then there is at least one $z \in \downarrow^{1} y$ such that $g(a) \sqsubseteq z \sqsubset y$. Since $g$ is monotonic, $\mathrm{Im}(g) = \{\bot\} \cup Y$ and $y$ is minimal in $Y'$, for all $x \sqsubset x'$ we have $g(x) \in \{\bot, g(a)\}$. Therefore, $g(x) \sqsubseteq z$ for all $x \sqsubset x'$. From (⋆), with $x_1 = a$, $x_2 = x'$, $y_1 \in \downarrow^{1} g(a)$ and $y_2 = z$, it follows that $g$ is not join-irreducible in $M$, a contradiction.

Monotonicity of $g$ and (a)-(b) imply $\mathrm{Im}(g) = \{\bot, b\}$ with $b = g(a)$. Thus $g = g_{a,b}$.
(2) We prove that $g = g_{a,b}$ covers a unique element in $M$. Let $c$ be the only element covered by $b$. Define $g' : \mathcal{J}(L) \to L$ as $g'(x) = c$ if $x = a$, else $g'(x) = g(x)$. From Lemma 3.2, it follows that $g' \in M$ and $g_{a,b}$ covers $g'$ in $M$. It suffices to show that for any $h \in M$ with $h \sqsubset_M g_{a,b}$, $h \sqsubseteq_M g'$ holds. Take any such $h \in M$. Since $h(a) \neq b$, $h(a) \sqsubset b$. Thus $h(a) \sqsubseteq c$, so $h(a) \sqsubseteq g'(a)$. Moreover, for any $x \neq a$, $h(x) \sqsubseteq g(x) = g'(x)$. Then $h \sqsubseteq_M g'$.
We conclude with a corollary of Lemma 3.3 that provides a representation theorem for join-endomorphisms on distributive lattices. We will use this result in the next section.

Corollary 3.4. Let $L$ be a finite distributive lattice and let $f \in \mathcal{E}(L)$. Then $f = F_R$ where $R = \{(a, b) \in \mathcal{J}(L)^2 \mid a \sqsubseteq f(b)\}$ and $F_R : L \to L$ is the function given by $F_R(c) \stackrel{\text{def}}{=} \bigsqcup\{a \in \mathcal{J}(L) \mid (a, b) \in R \text{ and } c \sqsupseteq b \text{ for some } b \in \mathcal{J}(L)\}$.

Proof. From P.6, $f = \bigsqcup_{\mathcal{E}(L)}\{g \in \mathcal{J}(\mathcal{E}(L)) \mid g \sqsubseteq_{\mathcal{E}} f\}$. Thus,

$f(c) = \bigl(\bigsqcup_{\mathcal{E}(L)}\{g \in \mathcal{J}(\mathcal{E}(L)) \mid g \sqsubseteq_{\mathcal{E}} f\}\bigr)(c) = \bigsqcup\{g(c) \mid g \in \mathcal{J}(\mathcal{E}(L)) \text{ and } g \sqsubseteq_{\mathcal{E}} f\}$
$= \bigsqcup\{f_{b,a}(c) \mid (b, a) \in \mathcal{J}(L)^2 \text{ and } f_{b,a} \sqsubseteq_{\mathcal{E}} f\}$   (Lemma 3.3)
$= \bigsqcup\{f_{b,a}(c) \mid (b, a) \in \mathcal{J}(L)^2 \text{ and } a \sqsubseteq f(b)\}$
$= \bigsqcup\{a \in \mathcal{J}(L) \mid (b, a) \in \mathcal{J}(L)^2,\ a \sqsubseteq f(b) \text{ and } c \sqsupseteq b \text{ for some } b \in \mathcal{J}(L)\}$
$= \bigsqcup\{a \in \mathcal{J}(L) \mid (a, b) \in R \text{ and } c \sqsupseteq b \text{ for some } b \in \mathcal{J}(L)\} = F_R(c)$
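Corollary 3.4 can be read as an executable recipe: extract the relation $R \subseteq \mathcal{J}(L)^2$ from $f$ and rebuild $f$ as $F_R$. The Python fragment below is our own hedged illustration (not from the paper); leq is the lattice order, sup returns the join of a list of elements (and the bottom element for the empty list), and f is a dictionary.

    def relation_of(f, irreducibles, leq):
        # R = {(a, b) in J(L)^2 | a ⊑ f(b)}
        return {(a, b) for a in irreducibles for b in irreducibles if leq(a, f[b])}

    def F_R(R, leq, sup, c):
        # F_R(c) = ⊔{a in J(L) | (a, b) in R and c ⊒ b for some b in J(L)}
        return sup([a for (a, b) in R if leq(b, c)])

    # For a join-endomorphism f on a finite distributive lattice,
    # F_R(relation_of(f, J, leq), leq, sup, c) == f[c] for every element c.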
4. DISTRIBUTIVE LATTICES AND KNOWLEDGE STRUCTURES

In this section, we introduce some knowledge structures from economics [Aum76, Sam10a] and relate them to distributive lattices by adapting fundamental duality results between modal algebras and frames [JT52]. We will use these structures and their relation to distributive lattices in the algorithmic results in the next section. We use the term knowledge to encompass various epistemic concepts including S5 knowledge and belief [FHMV95].

Definition 4.1 ([Sam10a]). A (finite) Knowledge Structure (KS) for a set of agents $\mathcal{A}$ is a tuple $(\Omega, \{K_i\}_{i \in \mathcal{A}})$ where $\Omega$ is a finite set and each $K_i : \mathcal{P}(\Omega) \to \mathcal{P}(\Omega)$ is given by $K_i(E) = \{\omega \in \Omega \mid \mathcal{R}_i(\omega) \subseteq E\}$, where $\mathcal{R}_i \subseteq \Omega^2$ and $\mathcal{R}_i(\omega) = \{\omega' \mid (\omega, \omega') \in \mathcal{R}_i\}$.

The elements $\omega \in \Omega$ and the subsets $E \subseteq \Omega$ are called states and events, resp. We refer to $K_i$ and $\mathcal{R}_i$ as the knowledge operator and the accessibility relation of agent $i$.
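Definition 4.1 translates directly into code. The following Python sketch is our own illustration (not from the paper) of how a knowledge operator is obtained from an accessibility relation given as a set of pairs.

    def knowledge_operator(states, R):
        # Return K as a function on events, where R is a set of pairs (w, w').
        successors = {w: {v for (u, v) in R if u == w} for w in states}

        def K(E):
            # K(E) = {w | R(w) ⊆ E}
            return {w for w in states if successors[w] <= set(E)}

        return K

    # Example: Ω = {1, 2, 3}; agent i cannot distinguish states 1 and 2.
    states = {1, 2, 3}
    R_i = {(1, 1), (1, 2), (2, 1), (2, 2), (3, 3)}
    K_i = knowledge_operator(states, R_i)
    print(K_i({1, 2}))   # {1, 2}: at states 1 and 2 the agent knows the event {1, 2}
    print(K_i({1}))      # set(): the agent never knows {1}, since 1 and 2 are indistinguishable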
The notion of event may be familiar to some readers from probability theory; for example the event "public transportation is suspended" corresponds to the set of states at which public transportation is suspended. An event $E$ holds at $\omega$ if $\omega \in E$. Thus $\Omega$, the event that holds at every $\omega$, corresponds to true in logic; union of events corresponds to disjunction, intersection to conjunction, and complementation in $\Omega$ to negation. We use $\overline{E}$ for $\Omega \setminus E$. We write $E \Rightarrow F$ for the event $\overline{E} \cup F$, which corresponds to classic logic implication. We say that $E$ entails $F$ if $E \subseteq F$. The event of $i$ knowing $E$ is $K_i(E)$.

The following properties hold for all events $E$ and $F$ of any KS $(\Omega, \{K_i\}_{i \in \mathcal{A}})$:
(K1) $K_i(\Omega) = \Omega$,
(K2) $K_i(E) \cap K_i(F) = K_i(E \cap F)$,
(K3) $(K_i(E) \cap K_i(E \Rightarrow F)) \subseteq K_i(F)$, and
(K4) if $E \subseteq F$ then $K_i(E) \subseteq K_i(F)$.

Property (K1) represents that agents know the event that holds at every state, namely $\Omega$. A distinctive property of knowledge is (K2): if an agent knows two events, she knows their conjunction. In fact, (K2) implies (K3), which expresses modus ponens for knowledge. Another property implied by (K2) is (K4), meaning that knowledge is monotonic, i.e., agents know the consequences of their knowledge.

An agent $i$ is wiser (or more knowledgeable) than $j$ iff $K_j(E) \subseteq K_i(E)$ for every event $E$; i.e., if $j$ knows $E$ so does $i$.
Aumann Structures. Aumann structures are the standard event-based formalism in economics and decision theory [FHMV95] for reasoning about knowledge. A (finite) Aumann structure (AS) is a KS where all the accessibility relations are equivalences.¹ The intended notion of knowledge of AS is S5; i.e., the knowledge captured by properties (K1)-(K2) and the following three fundamental properties, which hold for any AS:
(K5) $K_i(E) \subseteq E$,
(K6) $K_i(E) \subseteq K_i(K_i(E))$, and
(K7) $\overline{K_i(E)} \subseteq K_i(\overline{K_i(E)})$.
The first says that if an agent knows $E$, then $E$ cannot be false; the second and third state that agents know both what they know and what they do not know.
A straightforward property relating knowledge operators and accessibility relations is that they uniquely define each other.

Proposition 4.2. Let $(\Omega, \{K_i\}_{i \in \mathcal{A}})$ be a KS and $i, j \in \mathcal{A}$. Then $K_i = K_j$ iff $\mathcal{R}_i = \mathcal{R}_j$.

Proof. The "if" direction is obvious. For the other direction suppose $K_i = K_j$ but $\mathcal{R}_i \neq \mathcal{R}_j$. Then there exists $\omega$ such that $\mathcal{R}_i(\omega) \neq \mathcal{R}_j(\omega)$. If $\mathcal{R}_i(\omega)$ is not included in $\mathcal{R}_j(\omega)$ then we obtain $\omega \notin K_j(\mathcal{R}_i(\omega))$ but $\omega \in K_i(\mathcal{R}_i(\omega))$, a contradiction with $K_i = K_j$. The case when $\mathcal{R}_j(\omega)$ is not included in $\mathcal{R}_i(\omega)$ is symmetric.
Extended KS. We now introduce a simple extension of KS that will allow us to give a uniform presentation of our results.

Definition 4.3 (EKS). A tuple $(\Omega, \mathcal{S}, \{K_i\}_{i \in \mathcal{A}})$ is said to be an extended knowledge structure (EKS) if (1) $(\Omega, \{K_i\}_{i \in \mathcal{A}})$ is a KS, and (2) $\mathcal{S}$ is a subset of $\mathcal{P}(\Omega)$ that contains $\Omega$ and is closed under union, intersection and application of $K_i$ for every $i \in \mathcal{A}$.

Notation. Given an underlying EKS $(\Omega, \mathcal{S}, \{K_i\}_{i \in \mathcal{A}})$ and $f : \mathcal{P}(\Omega) \to \mathcal{P}(\Omega)$, we shall use $\tilde{f}$ for the function $f|_{\mathcal{S}} : \mathcal{S} \to \mathcal{P}(\Omega)$, i.e., $\tilde{f}(E) = f(E)$ for every $E \in \mathcal{S}$. Because of the closure properties of $\mathcal{S}$, for every $i \in \mathcal{A}$ we have $\tilde{K}_i : \mathcal{S} \to \mathcal{S}$.
Notice that AS and, in general, KS are EKS where $\mathcal{S} = \mathcal{P}(\Omega)$. Also Kripke frames [FHMV95] can be viewed as EKS with $\mathcal{S} = \mathcal{P}(\Omega)$. Other structures not discussed in this paper, such as set algebras with operators (SOS) [Sam10b] and general frames [CZ97], can be represented as EKSs where $\mathcal{S}$ is required to be closed under complement.
¹The presentation of AS [Aum76] uses a partition $\mathcal{P}_i = \{\mathcal{R}_i(\omega) \mid \omega \in \Omega\}$ of $\Omega$, and $K_i(E)$ is equivalently defined as $\{\omega \in \Omega \mid \mathcal{P}_i(\omega) \subseteq E\}$ where $\mathcal{P}_i(\omega)$ is the cell of $\mathcal{P}_i$ containing $\omega$.
4.1. Extended KS and Distributive Lattices. The knowledge operators of an EKS are join-endomorphisms on a distributive lattice. This is an easy consequence of (K1) and (K2), and the closure properties of EKS. The next proposition tells us that the wiser the agent, the lower it (its knowledge operator) is placed in the corresponding lattice.

Proposition 4.4. Let $(\Omega, \mathcal{S}, \{K_i\}_{i \in \mathcal{A}})$ be an EKS. Then $L = (\mathcal{S}, \supseteq)$ is a distributive lattice and for each $i \in \mathcal{A}$, $\tilde{K}_i \in \mathcal{E}(L)$.

Proof. Since $\mathcal{S}$ is closed under union and intersection and $\Omega \in \mathcal{S}$, $L = (\mathcal{S}, \supseteq)$ is a distributive lattice whose join is intersection and whose bottom is $\Omega$. By definition $\tilde{K}_i(E) = K_i(E)$ for every $E \in \mathcal{S}$. Thus, from (K1) and (K2), $\tilde{K}_i(\Omega) = \Omega$ and $\tilde{K}_i(E \cap F) = \tilde{K}_i(E) \cap \tilde{K}_i(F)$ for every $E, F \in \mathcal{S}$. From Property P.1, we conclude $\tilde{K}_i \in \mathcal{E}(L)$.
Conversely, the join-endomorphisms of distributive lattices correspond to knowledge operators of EKS. Recall that every distributive lattice is isomorphic to (the dual of) a lattice of sets. The next proposition is an adaptation to finite distributive lattices of Jónsson-Tarski duality for general frames and boolean algebras with operators [JT52].

Proposition 4.5. Let $L$ be dual to a finite lattice of sets, with a family $\{f_i \in \mathcal{E}(L)\}_{i \in I}$. Then $(\Omega, \mathcal{S}, \{K_i\}_{i \in I})$ is an EKS where $\mathcal{S} = L$, $\Omega = \bot_L$, and for every $i \in I$, $\mathcal{R}_i = \{(\omega, \omega') \in \Omega^2 \mid \text{for all } E \in \mathcal{S},\ \omega \in f_i(E) \text{ implies } \omega' \in E\}$. Also, for $i \in I$, $\tilde{K}_i = f_i$.
Proof. Notice that $L = \mathcal{S}$ is closed under union and intersection since $L$ is the dual of a lattice of sets. Showing $\tilde{K}_i = f_i$ also proves that $\mathcal{S}$ is closed under $K_i$. Recall that $\tilde{K}_i(E) = K_i(E)$ for each $E \in \mathcal{S}$. Thus, it remains to prove $K_i(E) = f_i(E)$ for all $E \in \mathcal{S}$. From (K1) and the fact that $f_i$ is a join-endomorphism, $K_i(E) = f_i(E) = \Omega$ for $E = \Omega$. Hence, choose an arbitrary $E \neq \Omega$. First suppose that $\tau \in f_i(E)$. From the definition of $\mathcal{R}_i$, if $(\tau, \tau') \in \mathcal{R}_i$ then $\tau' \in E$. Hence $\mathcal{R}_i(\tau) \subseteq E$, so $\tau \in K_i(E)$.

Now suppose that $\tau \in K_i(E)$ but $\tau \notin f_i(E)$. From $\tau \in K_i(E)$ we obtain:

for all $\tau'$, if $(\tau, \tau') \in \mathcal{R}_i$ then $\tau' \in E$.   (4.1)

From the assumption $\tau \notin f_i(E)$ and the monotonicity of join-endomorphisms (P.2):

for every $F \in \mathcal{S}$, if $F \subseteq E$ then $\tau \notin f_i(F)$.   (4.2)

Let $X = \{E' \in \mathcal{S} \mid \tau \in f_i(E')\}$. If $X = \emptyset$ then from the definition of $\mathcal{R}_i$ we conclude $\mathcal{R}_i(\tau) = \Omega$, which contradicts (4.1) since $E \neq \Omega$. If $X \neq \emptyset$ take $S = \bigcap X$. Since $f_i$ is a join-endomorphism, it distributes over intersection (i.e., the join in $L$), so we conclude $\tau \in f_i(S)$. Thus, if $S \subseteq E$ we obtain a contradiction with (4.2). If $S \not\subseteq E$ then there exists $\tau' \in S$ such that $\tau' \notin E$. From the definition of $S$, $\tau' \in E'$ for each $E'$ such that $\tau \in f_i(E')$. But this implies $(\tau, \tau') \in \mathcal{R}_i$ and $\tau' \notin E$, a contradiction with (4.1).
Nevertheless, we can use our general characterization of join-endomorphisms in the previous section (Corollary 3.4) to obtain a simpler relational construction for join-endomorphisms of powerset lattices (boolean algebras). Unlike the construction in Proposition 4.5, this characterization of $\mathcal{R}_i$ does not appeal to universal quantification.

Proposition 4.6. Let $L$ be dual to a finite powerset lattice, with a family $\{f_i \in \mathcal{E}(L)\}_{i \in I}$. Let $(\Omega, \{K_i\}_{i \in I})$ be the KS where $\Omega = \bot_L$ and $\mathcal{R}_i = \{(\omega, \omega') \mid \omega \in \overline{f_i(\overline{\{\omega'\}})}\}$. Then, for every $i \in I$, $K_i = f_i$.
Proof. Since $L$ is dual to a powerset lattice, $\sqcup = \cap$, $\sqsubseteq = \supseteq$, and $\mathcal{J}(L) = \{\overline{\{\tau\}} \mid \tau \in \Omega\}$. Let $Q = \{(\overline{\{\sigma\}}, \overline{\{\tau\}}) \mid (\sigma, \tau) \in \mathcal{R}_i\}$. Notice that for every $(\overline{\{\sigma\}}, \overline{\{\tau\}}) \in Q$, we have $\sigma \in \overline{f_i(\overline{\{\tau\}})}$. Equivalently, $f_i(\overline{\{\tau\}}) \subseteq \overline{\{\sigma\}}$, i.e., $\overline{\{\sigma\}} \sqsubseteq f_i(\overline{\{\tau\}})$. Therefore, from Corollary 3.4, it follows that for every $E \in L$, $f_i(E) = \bigcap\{\overline{\{\sigma\}} \in \mathcal{J}(L) \mid (\overline{\{\sigma\}}, \overline{\{\tau\}}) \in Q \text{ and } E \sqsupseteq \overline{\{\tau\}} \text{ for some } \overline{\{\tau\}} \in \mathcal{J}(L)\}$.

We complete the proof as follows:

$f_i(E) = \bigcap\{\overline{\{\sigma\}} \in \mathcal{J}(L) \mid \exists\,\overline{\{\tau\}} \in \mathcal{J}(L) : ((\overline{\{\sigma\}}, \overline{\{\tau\}}) \in Q \text{ and } E \subseteq \overline{\{\tau\}})\}$
$= \bigcap\{\overline{\{\sigma\}} \in \mathcal{J}(L) \mid \neg\forall\,\overline{\{\tau\}} \in \mathcal{J}(L) : ((\overline{\{\sigma\}}, \overline{\{\tau\}}) \in Q \Rightarrow E \not\subseteq \overline{\{\tau\}})\}$
$= \bigcap\{\overline{\{\sigma\}} \in \mathcal{J}(L) \mid \neg\forall\,\tau : ((\sigma, \tau) \in \mathcal{R}_i \Rightarrow \tau \in E)\}$
$= \bigcap\{\overline{\{\sigma\}} \in \mathcal{J}(L) \mid \neg(\mathcal{R}_i(\sigma) \subseteq E)\}$
$= \overline{\{\sigma \mid \neg(\mathcal{R}_i(\sigma) \subseteq E)\}} = \{\sigma \mid \mathcal{R}_i(\sigma) \subseteq E\} = K_i(E)$
We conclude this section by pointing out that accessibility relations can be obtained from knowledge operators.

Corollary 4.7. Let $\mathcal{K} = (\Omega, \{K_i\}_{i \in \mathcal{A}})$ be a KS. Then
(1) $\mathcal{R}_i = \{(\omega, \omega') \mid \omega \in \overline{K_i(\overline{\{\omega'\}})}\}$.
(2) If $\mathcal{K}$ is an AS then $\mathcal{R}_i(\omega) = \overline{K_i(\overline{\{\omega\}})}$ for every $\omega \in \Omega$.

Proof. The proof of (1) is an immediate consequence of Proposition 4.2 and Proposition 4.6. For (2), rewrite the property as $\mathcal{R}_i(\omega') = \overline{K_i(\overline{\{\omega'\}})}$ for every $\omega' \in \Omega$. If $\mathcal{K}$ is an AS then $\mathcal{R}_i$ is an equivalence. Thus from the symmetry of $\mathcal{R}_i$ and (1) we obtain: $(\omega', \omega) \in \mathcal{R}_i$ iff $(\omega, \omega') \in \mathcal{R}_i$ iff $\omega \in \overline{K_i(\overline{\{\omega'\}})}$. This implies (2).
5. DISTRIBUTED KNOWLEDGE

The notion of distributed knowledge represents the information that two or more agents may have as a group but not necessarily individually. Intuitively, it is what someone who knows what each agent in a given group knows would know. As described in [FHMV95], while common knowledge can be viewed as what "any fool" knows, distributed knowledge can be viewed as what a "wise man" would know.

Let $(\Omega, \{K_i\}_{i \in \mathcal{A}})$ be a KS and $i, j \in \mathcal{A}$. The distributed knowledge of $i$ and $j$ is represented by $D_{\{i,j\}} : \mathcal{P}(\Omega) \to \mathcal{P}(\Omega)$, defined as $D_{\{i,j\}}(E) = \{\omega \mid \mathcal{R}_i(\omega) \cap \mathcal{R}_j(\omega) \subseteq E\}$ where $\mathcal{R}_i$ and $\mathcal{R}_j$ are the accessibility relations for $i$ and $j$.
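The definition of $D_{\{i,j\}}$ is directly computable from the two accessibility relations. The Python sketch below is our own illustration (not from the paper), in the same style as the earlier knowledge-operator sketch.

    def distributed_knowledge(states, R_i, R_j):
        # Return D_{i,j} as a function on events; R_i, R_j are sets of pairs.
        succ_i = {w: {v for (u, v) in R_i if u == w} for w in states}
        succ_j = {w: {v for (u, v) in R_j if u == w} for w in states}

        def D(E):
            # D_{i,j}(E) = {w | R_i(w) ∩ R_j(w) ⊆ E}
            return {w for w in states if (succ_i[w] & succ_j[w]) <= set(E)}

        return D

    # With R_m = R_i ∩ R_j, the operator built from R_m coincides with D_{i,j}.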
The following property captures the notion of distributed knowledge by relating group to individual knowledge: (D1) $(K_i(E) \cap K_j(E \Rightarrow F)) \subseteq D_{\{i,j\}}(F)$. It says that if one agent knows $E$ and the other knows that $E$ implies $F$, then together they have the distributed knowledge of $F$, even if neither agent knew $F$.
Example 5.1. Let $E$ be the event "Bob's boss is working from home" and $F$ be the event "public transportation is suspended". Suppose that agent Alice knows that Bob's boss is working from home (i.e., $K_A(E)$), and that agent Bob knows that his boss works from home only when public transportation is suspended (i.e., $K_B(E \Rightarrow F)$). Thus, if they told each other what they knew, they would have distributed knowledge of $F$ (i.e., $D_{\{A,B\}}(F)$). Indeed, $K_A(E) \cap K_B(E \Rightarrow F)$ entails $D_{\{A,B\}}(F)$ from (D1).
A self-explanatory property relating individual and distributed knowledge is (D2) $K_i(E) \subseteq D_{\{i,j\}}(E)$. Furthermore, the basic properties of knowledge (K1)-(K2) above also hold if we replace $K_i$ with $D_{\{i,j\}}$: intuitively, distributed knowledge is knowledge. Indeed, imagine an agent $m$ that combines $i$ and $j$'s knowledge by having the accessibility relation $\mathcal{R}_m = \mathcal{R}_i \cap \mathcal{R}_j$. In this case we would have $K_m = D_{\{i,j\}}$. Therefore, any KS may include distributed knowledge as one of its knowledge operators. For simplicity, we are considering distributed knowledge of two agents, but this can easily be extended to arbitrary groups of agents. E.g., if $K_m = D_{\{i,j\}}$ then $D_{\{k,m\}}$ represents the distributed knowledge of the three agents $i$, $j$ and $k$.
5.1. The Meet of Knowledge. In Section 4.1 we identified knowledge operators with join-endomorphisms. We now show that the notion of distributed knowledge corresponds exactly to the meet of the knowledge operators in the lattice of all join-endomorphisms of $(\mathcal{S}, \supseteq)$.

Theorem 5.2. Let $(\Omega, \mathcal{S}, \{K_i\}_{i \in \mathcal{A}})$ be an EKS and let $L$ be the lattice $(\mathcal{S}, \supseteq)$. Suppose that $K_m = D_{\{i,j\}}$ for some $i, j, m \in \mathcal{A}$. Then $\tilde{K}_m = \tilde{K}_i \sqcap_{\mathcal{E}(L)} \tilde{K}_j$.
Proof. Let us assume $K_m = D_{\{i,j\}}$. Then from the closure properties of $\mathcal{S}$, we have $\tilde{D}_{\{i,j\}} = \tilde{K}_m : \mathcal{S} \to \mathcal{S}$. Let $f = \tilde{K}_i \sqcap_{\mathcal{E}(L)} \tilde{K}_j$. (Recall that the order relation $\sqsubseteq_L$ over $L$ is reversed inclusion $\supseteq$, joins are intersections and meets are unions.)

From (D2), for every $E \in \mathcal{S}$, $D_{\{i,j\}}(E) \sqsubseteq_L K_i(E), K_j(E)$. Thus $\tilde{D}_{\{i,j\}}$ is a lower bound of both $\tilde{K}_i$ and $\tilde{K}_j$ in $\mathcal{E}(L)$, so $\tilde{D}_{\{i,j\}} \sqsubseteq_{\mathcal{E}(L)} f$.

To prove $f \sqsubseteq_{\mathcal{E}(L)} \tilde{D}_{\{i,j\}}$, take $\tau \in \tilde{D}_{\{i,j\}}(E) = D_{\{i,j\}}(E)$ for an arbitrary $E \in \mathcal{S}$. By definition of $D_{\{i,j\}}$, we have

$\mathcal{R}_i(\tau) \cap \mathcal{R}_j(\tau) \subseteq E$.   (5.1)

From Proposition 2.2,

$f(E) = \bigcup\{K_i(F) \cap K_j(H) \mid F, H \in \mathcal{S} \text{ and } F \cap H \subseteq E\}$   (5.2)

Take $F = \mathcal{R}_i(\tau)$ and $H = \mathcal{R}_j(\tau)$; from (5.1), $F \cap H \subseteq E$. By definition of the knowledge operator, $\tau \in K_i(F)$ and $\tau \in K_j(H)$. From (5.2), $\tau \in f(E)$. Thus $f \sqsubseteq_{\mathcal{E}(L)} \tilde{D}_{\{i,j\}}$.
The theorem above allows us to characterize an agent $m$ having the distributed knowledge of $i$ and $j$ as the least knowledgeable agent wiser than both $i$ and $j$. In the next section we consider the decision problem of whether a given $m$ indeed has the distributed knowledge of $i$ and $j$.
5.2. The Distributed Knowledge Problem. In what follows, let $(\Omega, \{K_i\}_{i \in \mathcal{A}})$ be a KS and let $n = |\Omega|$. Let us now consider the following decision problem: given the knowledge of agents $i, j, m$, decide whether $m$ has the distributed knowledge of $i$ and $j$, i.e., whether $K_m = D_{\{i,j\}}$.

The input for this problem is the knowledge of the agents, and it can be represented using either the knowledge operators $K_i, K_j, K_m$ or the accessibility relations $\mathcal{R}_i, \mathcal{R}_j, \mathcal{R}_m$. For each representation, the algorithm that solves the problem $K_m = D_{\{i,j\}}$ can be implemented differently. For the first representation, it follows from Theorem 5.2 that $K_m = D_{\{i,j\}}$ holds if and only if $K_m = K_i \sqcap_{\mathcal{E}(L)} K_j$ where $L = (\mathcal{P}(\Omega), \supseteq)$. For the second one, we can verify $\mathcal{R}_m = \mathcal{R}_i \cap \mathcal{R}_j$ instead. Indeed, as stated in Corollary 4.7, one representation can be obtained from the other, hence an alternative solution for the decision problem is to translate the input from the given representation into the other one before solving.
Accessibility relations represent knowledge much more compactly than knowledge operators, because the former are relations on $\Omega^2$ while the latter are relations on $\mathcal{P}(\Omega)^2$. For this reason, it would seem in principle that the algorithm handling the knowledge operators would be slower by several orders of magnitude. Nevertheless, we can use our lattice-theoretical results from the previous sections to show that this is not necessarily the case; thus it is worth considering both types of representation.
From Knowledge Operators. We wish to determine $K_m = D_{\{i,j\}}$ by establishing whether $K_m = K_i \sqcap_{\mathcal{E}(L)} K_j$ where $L = (\mathcal{P}(\Omega), \supseteq)$. Let us assume the following bitwise representation of knowledge operators. The states in $\Omega$ are numbered $\omega_1, \ldots, \omega_n$. Each event $E$ is represented as a number $\#E \in [0..2^n - 1]$ whose binary representation has its $k$-th bit set to 1 iff $\omega_k \in E$. Each input knowledge operator $K_i$ is represented as an array $\mathtt{K}_i$ of size $2^n$ that stores $\#K_i(E)$ at position $\#E$, i.e., $\mathtt{K}_i[\#E] = \#K_i(E)$.
From Lemma 2.3, $K_m = K_i \sqcap_{\mathcal{E}(L)} K_j$ iff $K_m(E) = K_i(E) \cup K_j(E)$ for every join-irreducible element $E$ in $L$ (recall that the meet in $L = (\mathcal{P}(\Omega), \supseteq)$ is union). Notice that $E \in \mathcal{J}(L)$ iff $E$ has the form $\overline{\{\omega_k\}}$ for some $\omega_k \in \Omega$. Moreover, $\#\overline{\{\omega_k\}} = (2^n - 1) - 2^k$. These facts lead us to the following result.
Theorem 5.3. Given the arrays $\mathtt{K}_i, \mathtt{K}_j, \mathtt{K}_m$ where $i, j, m \in I$, there is an effective procedure that can decide $K_m = D_{\{i,j\}}$ in time $O(n^2)$ where $n = |\Omega|$.

Proof. Let $L = (\mathcal{P}(\Omega), \supseteq)$. We have $K_m = D_{\{i,j\}}$ iff $K_m = K_i \sqcap_{\mathcal{E}(L)} K_j$ (Theorem 5.2) iff $K_m(E) = K_i(E) \cup K_j(E)$ for every $E \in \mathcal{J}(L)$ (Lemma 2.3). Furthermore, $E \in \mathcal{J}(L)$ iff $E = \overline{\{\omega\}}$ for some $\omega \in \Omega$. Then we can conclude that $E \in \mathcal{J}(L)$ iff $\#E = (2^n - 1) - 2^k$ for some $k \in [0..n-1]$. Therefore, $K_m = D_{\{i,j\}}$ iff for every $k \in [0..n-1]$

$\mathtt{K}_m[p_k] = \mathtt{K}_i[p_k] \mathbin{|} \mathtt{K}_j[p_k]$   (5.3)

where $p_k = (2^n - 1) - 2^k$ and $|$ is the OR operation over the bitwise representations of $\mathtt{K}_i[p_k]$ and $\mathtt{K}_j[p_k]$. For each $k \in [0..n-1]$, the equality test and the OR operation in Equation 5.3 can be computed in $O(n)$. Hence the total cost is $O(n^2)$.
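The test of Equation 5.3 is short to write down. The Python sketch below is our own rendering of the proof of Theorem 5.3 (not the authors' code), assuming the operators are given as lists of length $2^n$ with events encoded as bitmasks, as described above.

    def has_distributed_knowledge(Ki, Kj, Km, n):
        # Decide K_m = D_{i,j} from cached knowledge operators (Theorem 5.3).
        # Only the n join-irreducible events p_k = (2**n - 1) - 2**k are tested.
        full = (1 << n) - 1
        for k in range(n):
            pk = full - (1 << k)
            if Km[pk] != (Ki[pk] | Kj[pk]):
                return False
        return True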
From Accessibility Relations. A very natural encoding for accessibility relations is a binary $n \times n$ matrix. If the input is encoded using three matrices $M_i, M_j$ and $M_m$, we can test whether $\mathcal{R}_m = \mathcal{R}_i \cap \mathcal{R}_j$ (a proxy for $K_m = D_{\{i,j\}}$) in $O(n^2)$ by checking pointwise whether $M_m[a,b] = M_i[a,b] \cdot M_j[a,b]$.

It turns out that for AS we can use a different encoding and check $\mathcal{R}_m = \mathcal{R}_i \cap \mathcal{R}_j$ practically in linear time: more precisely, in $O(\alpha_n n)$ where $\alpha_n$ is the inverse of the Ackermann function.² The key point is that the relations of an AS are equivalences, so they can be represented as partitions. The proof of the following result, which is interesting in its own right, gives an $O(\alpha_n n)$ procedure for deciding $\mathcal{R}_m = \mathcal{R}_i \cap \mathcal{R}_j$.
Theorem 5.4. Let $\mathcal{R}_1, \mathcal{R}_2, \mathcal{R}_3 \subseteq \Omega^2$ be equivalences over a set $\Omega$ of $n = |\Omega|$ elements. There is an $O(\alpha_n n)$ algorithm for the following problem:
Input: Each $\mathcal{R}_i$ in partition form, i.e. an array of disjoint arrays of elements of $\Omega$ whose concatenation produces $\Omega$. This is readable in $O(n)$.
Output: Boolean answer to whether $\mathcal{R}_3 = \mathcal{R}_1 \cap \mathcal{R}_2$.
²Here $\alpha_n \stackrel{\text{def}}{=} \min\{k : A(k, k) \geq n\}$, where $A$ is the Ackermann function. The growth of $\alpha_n$ is negligible in practice, e.g., $\alpha_n = 4$ for $n = 2^{2^{2^{65536}}} - 3$.

Algorithm 5 Intersection of disjoint sets in $O(n)$
1: procedure INTERSECTION($r_1$, $r_2$)
2:   Let $f : I \to I \times I$ be an array
3:   For each $i \in I$ do
4:     $f[i] \leftarrow (r_1(i), r_2(i))$
5:   Let $g : \mathrm{Im}(f) \to I$ be a hash map
6:   For each $i \in I$ do $g[f[i]] \leftarrow i$
7:   Let $q : I \to I$ be an array
8:   For each $i \in I$ do $q[i] \leftarrow g[f[i]]$
9:   return $q$

Algorithm 6 Equality of disjoint sets in $O(n)$
1: procedure CANONICAL($r$)
2:   (Comment) $J \stackrel{\text{def}}{=} \{r(i) : i \in I\}$.
3:   Let $t : J \to I$ be a hash map
4:   For each $i \in I$ do $t[r(i)] \leftarrow r(i)$
5:   For each $i \in I$ do
6:     $t[r(i)] \leftarrow \min(t[r(i)], i)$
7:   Let $\hat{r} : I \to I$ be an array
8:   For each $i \in I$ do $\hat{r}[i] \leftarrow t[r(i)]$
9:   return $\hat{r}$

Proof. We use the Disjoint-Set data structure [GF64], whose details are included in the technical report https://hal.archives-ouvertes.fr/hal-03323638. We can view a disjoint-set as a function $r : I \to I$ that satisfies $r \circ r = r$ and can be evaluated at a particular index in $O(\alpha_n)$. The element $r(i)$ corresponds to the class representative of $i$ for each $i \in I$, so that $i \sim_r j$ if and only if $r(i) = r(j)$.

If we let $r_i$ denote a disjoint-set for $\mathcal{R}_i$ for each $i \in \{1, 2, 3\}$, and we let $q$ denote the disjoint-set for $\mathcal{R}_1 \cap \mathcal{R}_2$, then the problem can be divided into computing the disjoint-set $q$ in $O(n)$ and verifying whether $q = r_3$, also in $O(n)$. To organize these claims, let us consider the following algorithm descriptions.
Intersection. Takes two disjoint-sets $r_1$ and $r_2$, and produces a disjoint-set $q$ such that $i \sim_q j$ iff $i \sim_{r_1} j$ and $i \sim_{r_2} j$.

Canonical. Takes a disjoint-set $r$ and produces another $\hat{r}$ with $\sim_r = \sim_{\hat{r}}$, but such that $\hat{r}(i) \leq i$ for all $i \in I$.

Equality. Takes two disjoint-sets $r_1, r_2$ and determines if $i \sim_{r_1} j$ iff $i \sim_{r_2} j$ for all $i, j \in I$. This problem is reduced simply to checking if $\hat{r}_1 = \hat{r}_2$.

We proceed to show that Algorithms 5 and 6 compute $q$ and $\hat{r}$ (in array form) in $O(n)$. The complexity follows from the fact that they must read the input function(s) pointwise and all other operations are linear. It remains to show correctness only.

The array $g$ in Algorithm 5 is any version of the inverse image of $f$, i.e. $f[g[y]] = y$ for every $y \in \mathrm{Im}(f)$. This guarantees $f \circ g \circ f = f$ and hence $q \circ q = g \circ f \circ g \circ f = g \circ f = q$. Moreover, for any $i, j \in I$, $q[i] = q[j]$ iff $g[f[i]] = g[f[j]]$, by definition; iff $f[i] = f[j]$, because $g$ is injective; iff $r_1(i) = r_1(j)$ and $r_2(i) = r_2(j)$; iff $i \sim_{r_1} j$ and $i \sim_{r_2} j$.

Regarding Algorithm 6, for all $i \in I$, $i \sim t[r(i)]$, thus $r(i) = r(t[r(i)])$. That is, $r = r \circ t \circ r$. Thus, $\hat{r} \circ \hat{r} = t \circ r \circ t \circ r = t \circ r = \hat{r}$. Moreover, for any $i, j \in I$, $i \sim j$ iff $r(i) = r(j)$; iff $t[r(i)] = t[r(j)]$, since $t$ is injective on $J$; iff $\hat{r}[i] = \hat{r}[j]$, by definition.
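The following Python sketch mirrors Algorithms 5 and 6 using plain representative arrays in place of the Disjoint-Set structure (so each evaluation of r is a constant-time lookup). It is our own illustration of the proof, not the implementation used for the experiments; partitions are lists of disjoint lists covering range(n).

    def representatives(partition, n):
        # Idempotent array r with r[i] = r[r[i]]; r[i] is the representative of i's block.
        r = [0] * n
        for block in partition:
            rep = block[0]
            for i in block:
                r[i] = rep
        return r

    def intersection(r1, r2):
        # Algorithm 5: representative array of the common refinement of r1 and r2.
        f = [(r1[i], r2[i]) for i in range(len(r1))]
        g = {}                          # any right inverse of f on its image
        for i, y in enumerate(f):
            g[y] = i
        return [g[f[i]] for i in range(len(f))]

    def canonical(r):
        # Algorithm 6: equivalent array whose representative is the minimum of each class.
        t = {}
        for i, ri in enumerate(r):
            t[ri] = min(t.get(ri, i), i)
        return [t[ri] for ri in r]

    def same_partition(r1, r2):
        return canonical(r1) == canonical(r2)

    # Deciding R_m = R_i ∩ R_j for an Aumann structure with partitions Pi, Pj, Pm:
    # same_partition(intersection(representatives(Pi, n), representatives(Pj, n)),
    #                representatives(Pm, n))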
5.2.1. Experimental Results. Figure 3 shows the average runtime (100 random executions) of the four algorithms listed below for the distributed knowledge problem. Fixing the number of elements $n = |\Omega|$, the input for each execution consisted of three randomly generated partitions $P_i$, $P_j$ and $P_m$. The first two are generated independently and uniformly over the set of all possible partitions of $n$ elements. The third, $P_m$, corresponds with 50% probability to the intersection of the relations of the first two, and to a different but very similar partition otherwise, so as to increase the problem difficulty.

FIGURE 3. Runtime comparison of several algorithms that solve the distributed knowledge problem (average runtime in ms against $|\Omega|$, for the DisjointSet, Cached operator, Relation, and Non-cached operator algorithms).

(1) The "Cached operator" algorithm is the one described in Theorem 5.3. It assumes that the input knowledge operators can be evaluated in $O(1)$ at any join-irreducible input $E$. Its complexity is $O(n^2)$, because bit-mask operations are linear w.r.t. the number of bits. However, this is compensated heavily in practice by the speed of bit-masking operations, at least for the sizes depicted.
(2) The "Disjoint set" algorithm is the one described in Theorem 5.4 ($O(\alpha_n n)$). It takes the accessibility relations in partition form as input.
(3) The "Relation" algorithm ($O(n^2)$) takes as input the accessibility relations in the form of $n \times n$ binary matrices, and simply verifies if the pointwise AND matches.
(4) The "Non-cached operator" algorithm ($O(n^2)$) is that of the "Cached operator" when the cost of evaluating $K_i(\cdot)$ is taken into account. It shows that although the "Cached operator" algorithm is very fast, its speed depends heavily on the assumption that the knowledge operators are pre-computed.
6. CONCLUDING REMARKS AND RELATED WORK

We have used some standard tools from lattice theory to characterize the notion of distributed knowledge and provide efficient procedures to compute the meet of join-endomorphisms. Furthermore, we provide an algorithm to compute the intersection of partitions of a set of size $n$ in $O(n\alpha_n)$. As illustrated in the introduction, this algorithm may have applications for graph connected components and other domains where the notions of partition and intersection arise naturally.

In [QRRV20] we proposed algorithms to compute $f \sqcap_{\mathcal{E}(L)} g$ with time complexities $O(n^3)$ for arbitrary lattices and $O(n^2)$ for distributive lattices. Here we have improved the bound to $O(n)$ for distributive lattices. The authors in [HN96] gave a method of logarithmic time complexity (in the size of the lattice) for meet operations. Since $\mathcal{E}(L)$ is isomorphic to $\mathcal{O}(\mathcal{J}(L) \times \mathcal{J}(L)^{\mathrm{op}})$ for a distributive lattice $L$, finding $f \sqcap_{\mathcal{E}(L)} g$ with their algorithm would be in $O(\log_2(2^{n^2})) = O(n^2)$, in contrast to our linear bound. Furthermore, we would need a lattice isomorphic to $\mathcal{E}(L)$ to find $f \sqcap_{\mathcal{E}(L)} g$ using their algorithm. This lattice can be exponentially bigger than $L$ [QRRV20], which is the input to our algorithm. We also provided experimental results illustrating the performance of our procedures. We followed the work in [JL15] for generating random distributive lattices.
The finite representation results we used in Sections 3 and 4 to obtain our main results are adaptations of standard results from duality theory. Jónsson and Tarski [JT51, JT52] originally presented an extension of boolean algebras with operators (BAO), called canonical extensions, provided with some representation theorems. Roughly speaking, the representation theorems state that (1) every relation algebra is isomorphic to a complete and atomic relation algebra and (2) every boolean algebra with operators is isomorphic to a complex algebra that is complete and atomic. The idea behind this result, as was presented later by Kripke in [Kri59], basically says that the operators can be recovered from certain binary relations and vice versa. Another approach to this duality was given by Goldblatt [Gol89], where it is stated that the variety of normal modal algebras coincides with the class of subalgebras defined on the class of all frames. Canonical extensions have been useful for the development of duality and algebra. Jónsson proved an important result for modal logic in [Jón94], and the authors of [GJ04, GH01, DGP05] have generalized canonical extensions for BAOs to distributive and arbitrary bounded lattices and posets.

Distributed knowledge was introduced in [HM90] and various axiomatizations and expressiveness results for it have been provided, e.g., in [HN07, AW17]. In terms of computational complexity, the satisfiability problem for epistemic logic with distributed knowledge (S5D) has been shown to be PSPACE-complete [FHMV95]. Nevertheless, we are not aware of any lattice-theoretical characterization of distributed knowledge nor algorithms to decide if an agent has the distributed knowledge of others.
REFERENCES

[Aum76] Robert J. Aumann. Agreeing to disagree. The Annals of Statistics, 4:1236-1239, 1976.
[AW17] Thomas Ågotnes and Yì N. Wáng. Resolving distributed knowledge. Artificial Intelligence, 252:1-21, 2017. doi:10.1016/j.artint.2017.07.002.
[BHR07] Isabelle Bloch, Henk Heijmans, and Christian Ronse. Mathematical morphology. In Handbook of Spatial Logics, pages 857-944. Springer Netherlands, 2007.
[CZ97] Alexander Chagrov and Michael Zakharyaschev. Modal Logic, volume 35. Oxford University Press, 1997.
[DGP05] J. Michael Dunn, Mai Gehrke, and Alessandra Palmigiano. Canonical extensions and relational completeness of some substructural logics. Journal of Symbolic Logic, 70(3):713-740, 2005.
[DP02] B. A. Davey and H. A. Priestley. Introduction to Lattices and Order. Cambridge University Press, 2nd edition, 2002.
[FHMV95] Ronald Fagin, Joseph Y. Halpern, Yoram Moses, and Moshe Y. Vardi. Reasoning About Knowledge. MIT Press, Cambridge, 4th edition, 1995.
[GF64] Bernard A. Galler and Michael J. Fisher. An improved equivalence algorithm. Communications of the ACM, 7(5):301-303, 1964.
[GH01] Mai Gehrke and John Harding. Bounded lattice expansions. Journal of Algebra, pages 345-371, 2001.
[GJ04] Mai Gehrke and Bjarni Jónsson. Bounded distributive lattice expansions. Mathematica Scandinavica, 94(1):13-45, 2004. URL: http://www.jstor.org/stable/24493402.
[Gol89] Robert Goldblatt. Varieties of complex algebras. Annals of Pure and Applied Logic, 44(3):173-242, 1989. doi:10.1016/0168-0072(89)90032-8.
[GS58] George Grätzer and E. Schmidt. On the lattice of all join-endomorphisms of a lattice. Proceedings of the American Mathematical Society, 9:722-722, 1958.
[HM90] Joseph Y. Halpern and Yoram Moses. Knowledge and common knowledge in a distributed environment. Journal of the ACM, 37(3):549-587, 1990.
[HN96] Michel Habib and Lhouari Nourine. Tree structure for distributive lattices and its applications. Theoretical Computer Science, 165(2):391-405, 1996.
[HN07] Raul Hakli and Sara Negri. Proof theory for distributed knowledge. In CLIMA, volume 5056 of Lecture Notes in Computer Science, pages 100-116. Springer, 2007.
[Jón94] Bjarni Jónsson. On the canonicity of Sahlqvist identities. Studia Logica, 53(4):473-491, 1994.
[JL15] Peter Jipsen and Nathan Lawless. Generating all finite modular lattices of a given size. Algebra Universalis, 74(3):253-264, 2015.
[JT51] Bjarni Jónsson and Alfred Tarski. Boolean algebras with operators. Part I. American Journal of Mathematics, 73(4):891-939, 1951. URL: http://www.jstor.org/stable/2372123.
[JT52] Bjarni Jónsson and Alfred Tarski. Boolean algebras with operators. Part II. American Journal of Mathematics, 74(1):127-162, 1952. URL: http://www.jstor.org/stable/2372074.
[Kri59] Saul A. Kripke. A completeness theorem in modal logic. The Journal of Symbolic Logic, 24(1):1-14, 1959.
[MT46] J. C. C. McKinsey and Alfred Tarski. On closed elements in closure algebras. Annals of Mathematics, 47(1):122-162, 1946. URL: http://www.jstor.org/stable/1969038.
[QRRV20] Santiago Quintero, Sergio Ramírez, Camilo Rueda, and Frank Valencia. Counting and computing join-endomorphisms in lattices. In RAMiCS, volume 12062 of Lecture Notes in Computer Science, pages 253-269. Springer, 2020.
[Sam10a] Dov Samet. Agreeing to disagree: The non-probabilistic case. Games and Economic Behavior, 69(1):169-174, 2010. doi:10.1016/j.geb.2008.09.032.
[Sam10b] Dov Samet. S5 knowledge without partitions. Synthese, 172(1):145-155, 2010.
This work is licensed under the Creative Commons Attribution License. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, 171 Second St, Suite 300, San Francisco, CA 94105, USA, or Eisenacher Strasse 2, 10777 Berlin, Germany.
A new notion of a canonical extension Aσ is introduced that applies to arbitrary bounded distributive lattice expansions (DLEs) A. The new definition agrees with the earlier ones whenever they apply. In particular, for a bounded distributive lattice A, Aσ has the same meaning as before. A novel feature is the introduction of several topologies on the universe of the canonical extension of a DL. One of these topologies is used to define the canonical extension fσ: Aσ → Bσ of an arbitrary map f: A → B between DLs, and hence to define the canonical extension Aσ of an arbitrary DLE A. Together the topologies form a powerful tool for showing that many properties of DLEs are preserved by canonical extensions.