Human-Understandable Inference of Causal Relationships
(Invited Paper)
Alva L. Couch
Department of Computer Science
Tufts University
Medford, Massachusetts 02155
Mark Burgess
Department of Informatics
Oslo University College
Abstract—We present a method for aiding humans in under-
standing causal relationships between entities in complex systems
via a simplified calculus of facts and rules. Facts are human-
readable subject-verb-object statements about system entities,
interpreted as (entity-relationship-entity) triples. Rules construct
new facts via implication and “weak transitive” rules of the
form “If X r Y and Y s Z then X t Z”, where X, Y, and Z
are entities and r, s, and t are relationships. Constraining facts
and rules in this way allows one to treat abductive inference
as a graph computation, to quickly answer queries about the
most related entities to a chosen one, and to explain any derived
fact as a shortest chain of base facts that were used to infer it.
The resulting chain is easily understood by a human without
the details of how it was inferred. This form of simplified
reasoning has many applications in human understanding of
knowledge bases, including causality analysis, troubleshooting,
and documentation search, and can also be used to verify
knowledge bases by examining the consequences of recorded
facts.

There are many situations in which causal information
about a complex system must be interpreted by a human
being to aid in some critical task. For example, the speed of
troubleshooting depends upon how quickly one can link ob-
served symptoms with potential causes. Likewise, in scanning
documentation, it is often useful to associate a behavioral goal
for a managed system with the subsystems involved in assuring
that goal, and associate each subsystem with documentation
on how to modify the subsystem toward that goal.
We present a simplified reasoning system for inferring
causal relationships between entities in a complex system,
including the use cases of linking symptoms with causes and
linking goals with documentation. Input to the method is a
set of facts that represent what is known about the world,
as well as a set of rules for inferring new facts. We assume
that predetermined facts are not complete and, at best, are an
approximate description of relationships in the world. Rules
that allow us to construct new facts are limited to two simple
forms in order to allow us to treat the inference algorithms
as graph computations. This limitation allows us to answer
simple queries efficiently, and explain each inference via a
linear chain of relationships that is easily understandable to a
human being.
A. Facts
Facts in our method are relationships between named enti-
ties, expressed as subject-verb-object triples of the form:
host01 is an instance of file server

Entities such as host01 are underlined with straight lines, while
relationships such as is an instance of are underlined with wavy
lines. A set of facts represents an entity-relationship model of
the kind used commonly in Software Engineering, as opposed to those
utilized in database theory; the former describe interactions and
causal dependencies, while the latter describe functional
dependencies. Facts are positive statements about causal
relationships and cannot express a lack of relationship. Negation
of an existing fact is handled specially, as discussed below.
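These triples are directly representable in code. A minimal sketch in Python (entity and relationship names are illustrative, not part of the method):

```python
# A fact base as a set of (subject, verb, object) triples.
facts = {
    ("host01", "is an instance of", "file server"),
    ("file server", "provides", "file service"),
}

def holds(subject, verb, obj):
    """A fact base can only affirm relationships; absence of a triple
    carries no information, so there is no way to state a negative."""
    return (subject, verb, obj) in facts
```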
B. Relationships
Relationships in our method are chosen to aid in solving problems
in a particular problem domain. In analyzing causality, influences
and related concepts form the basis for analysis, while in thinking
about and analyzing document spaces, describes and documents are
more relevant. In thinking about scientific papers, explains would
be a relevant verb.
The central relationship for a problem domain (e.g., influences)
represents a concept that is related to other concepts also
expressed as relationships (e.g., can influence, partly
influences). Interactions between relationships are coded as rules
that might be more accurately viewed as defining interactions
between the concepts that the relationships represent.
A key feature of our method is that modalities (e.g., “can”,
“might”, etc) are part of the symbols and not part of the logic.
In systems that can perform “modal logic”, there are symbols
in the system for the concepts of “can”, “might”, etc., while
in our system these concepts are appended to each specific
token. This considerably simplifies the computations needed when
we reason in the system, which might otherwise be very complex.
C. Rules
Rules are used to construct derived facts from base facts
known to be true at the outset. However, all rules in our system
are stated in terms of relationships, and not individual facts.
Rules include implications, inverses, and weak transitive rules.
Implicative rules have the form “For every X and Y, if
X r Y then X s Y” where X and Y are entities and r and
s are relationships. In general, all variables in our rules are
universally quantified, so we leave out the quantifiers and treat
them as implicit. The implicative rule “For every X and Y, if
X requires Y then X depends upon Y” is written as

⟨requires → depends upon⟩

Implicative rules distinguish between more specific relationships
(in the antecedent) and more general relationships (in the
consequent); requires is a more specific connection between two
entities than the more “generic” depends upon. Invoking an
implicative rule thus raises the level of abstraction in
describing a relationship.
D. Inverses
While a fact “X r Y” is typically directional in the sense
that X and Y cannot be interchanged, every fact “X r Y” has a
corresponding fact “Y s X” where X and Y appear in reverse
order. Such relationships r and s are called inverses to one
another. The relationship between inverses is defined (by the
user) through rules, rather than being inferred (through logic).
E.g., the fact

host01 is an instance of file server

is equivalent to the fact

file server has instance host01

Their equivalence is notated via the rule

⟨is an instance of ⋈ has instance⟩

An inverse rule is just a special kind of implication; the above
rule means that “for every A and B, if A is an instance of B, then
B has instance A, and vice-versa.” We also write that
inv(has instance) is is an instance of, and vice versa. Some
relationships are self-inverse, e.g., if “X is a peer of Y”, then
“Y is a peer of X”.
E. Weak transitive rules
Weak transitive rules have the form “for every X, Y, and Z,
if X r Y and Y s Z then X t Z,” where X, Y, Z are entities
and r, s, t are relationships. The (strong) transitive rule “if X
requires Y and Y requires Z, then X requires Z” is written as

⟨requires, requires, requires⟩

while the rule “if X has part Y and Y controls Z then X controls Z”
is written as

⟨has part, controls, controls⟩

The latter is called a weak transitive rule because the consequent
controls does not match at least one of the antecedents (here,
has part). Weak transitive rules are the key to turning a logic
computation into a graph computation when computing queries about
facts.
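One application of a weak transitive rule set can be sketched as follows, storing each rule ⟨r, s, t⟩ as a map from the antecedent pair (r, s) to the consequent t:

```python
def apply_weak_transitive(facts, rules):
    """One pass of rules <r, s, t>: from X r Y and Y s Z, derive X t Z."""
    derived = set()
    for (x, r, y) in facts:
        for (y2, s, z) in facts:
            if y2 == y and (r, s) in rules:
                derived.add((x, rules[(r, s)], z))
    return derived

# The rule <has part, controls, controls> from the text:
rules = {("has part", "controls"): "controls"}
facts = {("X", "has part", "Y"), ("Y", "controls", "Z")}
derived = apply_weak_transitive(facts, rules)
```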
F. Kinds of queries
The inference method supports two kinds of queries, both
based upon a concept of inference distance between entities in
the fact base. The inference distance between two entities X
and Y with respect to a relationship r is the minimum number
of weak transitive rules that must be applied to base facts to
infer the fact “X r Y”. For reasons that will become clear by
example, implication rules and inverses are not used in our
measure of distance.
The first kind of query is to locate the entities Y related to
a given entity (or set of entities) X via some predetermined
relationship r, i.e., to solve the query “X r Y?” where X and
r are known and Y? is unknown. For each appropriate Y, the
minimum number of weak transitive rules applied to infer
“X r Y” from base facts is used as a measure of distance
between X and Y as subsystems, so that potential candidates
Y can be listed in order from “near to” to “far away from” X.
The second kind of query is to create a human-readable
explanation of the shortest derivation of a given derived fact
“X r Y” in terms of recorded facts and weak transitive rules
applied in order to derive it. This derivation takes the form of
a sequence of entities and relationships, for which X and Y
are endpoints.
Many other queries are possible, but for the purposes of
this paper, these two kinds are exemplary of the benefits of
the method.
Two use cases motivate our choices above. In troubleshoot-
ing, one of the central problems is to relate symptoms to
causes, where it is possible that causes are several entities
distant from symptoms. In scanning documentation, one of the
central problems is to relate one’s personal goal to relevant
documents describing how to accomplish the goal. Both of
these are causal inference problems that can be addressed
through the proposed inference method.
A. Troubleshooting
Suppose we have a very simple network with a fileserver
host01, a dns server host02, and a client host03. We might
code the relationships between these hosts as a set of abstract
“sentences”, like
host01 is an instance of file server
file server provides file service
host02 is an instance of dns server
dns server provides dns service
host03 is an instance of client workstation
client workstation requires file service
client workstation requires dns service

In this case, rules include

⟨is an instance of ⋈ has instance⟩
⟨provides ⋈ is provided by⟩
⟨requires ⋈ is required by⟩
⟨depends upon ⋈ is depended upon by⟩
⟨provides → is depended upon by⟩
⟨requires → depends upon⟩
Suppose that host03 has a problem. A query of the entities
that host03 depends upon returns
host03 depends upon host01
host03 depends upon host02

but this information is not enough to be useful to a human; we
must consider the explanation of the dependence. One explanation
of the dependence between host01 and host03 is

entity               relationship       entity
host03               is an instance of  client workstation
client workstation   requires           file service
file service         is provided by     file server
file server          has instance       host01

We call such an explanation a story of the dependence between two
entities, and often leave out the repeated entity, as in

entity               relationship
host03               is an instance of
client workstation   requires
file service         is provided by
file server          has instance
host01
Some stories are easy to compute, and others require
more subtle techniques. Architectural descriptions are often
incomplete and specified at different levels of abstraction. To
cope with this, our method utilizes implication to “lift” facts to
a common level of abstraction at which reasoning can occur,
and then “re-grounds” that reasoning by expressing high-level
(abstract) inferred facts in terms of the low-level (concrete)
facts that were their basis.
Consider, e.g., the following quandary:
host02 is authoritative for zone
host03 is inside zone

What is the real relationship or dependency between host02 and
host03? To answer this question, we must proceed to a higher level
of abstraction, via implication:

⟨is authoritative for zone → influences⟩
⟨is inside zone → is influenced by⟩

and define appropriate inverses:

⟨influences ⋈ is influenced by⟩
⟨is authoritative for zone ⋈ has zone authority⟩
⟨is inside zone ⋈ contains zone member⟩

after which the facts available also include:

host02 influences zone
host03 is influenced by zone

We invert the latter to its inverse, zone influences host03, so
that by the obvious transitive rule

⟨influences, influences, influences⟩

we infer the story that:

entity   relationship
host02   influences
zone     influences
host03

but this is not good enough for human consumption. The
relationships influences in the above are the result of two
implications:

⟨is authoritative for zone → influences⟩
⟨contains zone member → influences⟩

To complete the picture, we “ground” the lifted relationships by
replacing them with the concrete relationships that imply them,
e.g.,

entity   relationship
host02   is authoritative for zone
zone     contains zone member
host03
and this grounded explanation “explains” the abstract reason-
ing in concrete (and useful) terms.
In this example, implication is used to vary the level of
abstraction used in reasoning, and is not a component of the
reasoning itself. This is part of the reason for why it is not
counted as an inference step in computing which explanations
are most succinct.
B. Document browsing
A second problem that can be solved via this form of reason-
ing is to locate documents relevant to a task in document space.
Locating documents is the purpose for which the method was
originally designed. In this case, the verb describes replaces the
verb influences as the central relationship of interest.
Suppose we want to understand how to set up a service
switch in a network. We might start with the facts:
service switch requires service URL
service has attribute service URL
service URL is an instance of URL
URL is described by wikipedia for URL
user service is an instance of service
user service is described by user service manual

(and many others). Rules include:

⟨is described by ⋈ describes⟩
⟨is partly described by ⋈ partly describes⟩
⟨has attribute ⋈ is an attribute of⟩
⟨has instance ⋈ is an instance of⟩
⟨describes → partly describes⟩
⟨has attribute, is described by, is partly described by⟩
⟨is an attribute of, is described by, is described by⟩
⟨has instance, is described by, is partly described by⟩
⟨is an instance of, is described by, is described by⟩
In this case, we do not care about causal relationships as much
as relevant documentation. If we query our reasoning system
for X such that “user service is partly described by X”, we
obtain, among other responses:

user service is described by user service manual
user service is partly described by wikipedia for URL

etc. An explanation of the latter fact is:

entity        relationship
user service  is an instance of
service       has attribute
service URL   is an instance of
URL           is described by
wikipedia for URL

which demonstrates exactly how the Wikipedia documentation is
relevant. Note that we asked for partly describes and actually
received feedback on facts for the more specific relationship
is described by; this is a result of the lifting and grounding
technique discussed in the previous section.
Note the carefully crafted rules in the preceding example;
if something is an instance of something else, and something
describes the instance, it might not describe the whole scheme
of things. If we describe the whole scheme, we do describe
the instance. Also, describing an attribute of a thing does not
describe the whole thing, but describing a thing does describe
its attributes. These rules might be considered as part of the
definition of the describes relationship, via its interaction
with is an instance of and is an attribute of.
The examples above assume the existence of both a knowl-
edge base and a set of rules with specific properties. These
properties determine both the kinds of inferences that can be
done and the speed with which they can be accomplished.
A. Specifying facts
Facts in our knowledge base represent invariant properties of
entities. Variation over time is not supported. Thus not every
kind of fact can be represented. Kinds of facts that can be
represented are mostly “architectural” in nature, in the sense
that they do not vary for the lifetime of the entity.
Facts cannot express contradictions. Even if two classes A
and B of entities are mutually exclusive, there is no way to
express that in a fact. Mutual exclusivity is instead a result
of reasoning, in the sense that if an object is an instance of
A, then it enjoys all properties of an instance of A, while if
it is not an instance of A, properties of A are not assumed
to be either present or absent. Thus, in constructing a base of
facts, it is important to eliminate seeming contradictions from
the fact base, because contradictions cannot be detected by the
reasoning method.
In a fact, entities and relationships are formal symbols
devoid of meaning. The statement “host01 is an instance of
file server” is a sequence of three tokens, as opposed to the
English sentence “host01 is an instance of file server.” The
meaning of an entity token (as a mapping between the token and the
real world) is implicit in the set of facts that describe the
entity, just as the meaning of a relationship token is implicit in
the rules that describe how it interacts with other relationships.
B. Retracting a fact
In our information store, facts cannot be deleted. They can,
however, become outmoded, in the sense that they describe old
information that is no longer of interest. Suppose for example
that we record a fact about the authors, e.g.,
alva eats cornflakes
and then it turns out that this is inaccurate. There is no way to
retract that fact, but in fact, our new information describes a
“different alva” than before. So instead, we issue a new token
alva’ with no facts listed, and then duplicate all of the facts
from alva to alva’ except the fact to be deleted. Then we can
incorporate new facts about alva’ that do not apply to the old
alva, e.g.,
alva’ eats oatmeal
and computation proceeds as defined below. Thus we delete
facts by re-versioning the entities that they describe. This can
be done automatically through a machine-learning importance
ranking, such as used by Cfengine [1].
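Re-versioning can be sketched directly; `reversion` here is a hypothetical helper that copies every fact mentioning the old entity to the new token, skipping the retracted fact (the `author` fact is an invented illustration):

```python
def reversion(facts, entity, retracted, new_name):
    """Copy all facts mentioning `entity` to `new_name`, except the
    retracted fact; the original facts are never deleted."""
    copied = set()
    for (s, r, o) in facts:
        if (s, r, o) == retracted:
            continue
        if s == entity:
            copied.add((new_name, r, o))
        if o == entity:
            copied.add((s, r, new_name))
    return facts | copied

facts = {("alva", "eats", "cornflakes"),
         ("alva", "is an instance of", "author")}
facts = reversion(facts, "alva", ("alva", "eats", "cornflakes"), "alva'")
facts.add(("alva'", "eats", "oatmeal"))  # new fact about the new version
```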
C. Constructing rules
Constructing a rule requires more than considering how it
acts on facts. At a superficial level, derived facts are computed
from base facts by repeated use of rules. The rules themselves,
however, can be combined to form new rules. There are several
finer points of describing rules, including describing modal
relationships and partial knowledge.
1) Modal and partial knowledge: There is no special
mechanism for separately handling modal constructions such
as might determine and can influence in our method. Instead,
these are defined via their interactions with other tokens. The
qualifiers “can” and “might”, in a relationship, are weaker than
the unqualified relationship; “can” indicates capability while
“might” indicates possibility (this assumes a standard partial
ordering of the terms in the ontology). Thus we write:

⟨determines → can determine⟩
⟨can determine → might determine⟩

from which we can immediately derive, by transitivity of
implication, that:

⟨determines → might determine⟩

Likewise, one can distinguish between complete and partial
determination via:

⟨determines → partly determines⟩

Influence is another way to describe partial determination:

⟨determines → influences⟩

The relationship between influences and can determine can be
obtained from:

⟨determines → can determine⟩
⟨can determine → can influence⟩
In general, however, two abstract concepts may not enjoy any
relationship whatever.
The above describe only one facet of the meaning of
influences. Several more facets include:

⟨is a part of, is determined by, is determined by⟩
⟨determines, is a part of, influences⟩
⟨is an instance of, is determined by, is determined by⟩
⟨determines, is an instance of, influences⟩

In other words, if a thing is part of a determined thing, the part
is likewise determined, but determining a part of a thing only
influences the thing. Likewise, if a set (class) of things is
determined, so are its members, but determining a member does not
determine the set of which it is a member. These rules might be
considered facets of a “definition” of the (more abstract)
relationship influences, in terms of the (more concrete)
relationship determines.
D. Deleting a rule
As for facts, there is no well-defined notion of deleting
a rule. However, one can proceed exactly as one does for
deleting facts, by creating a new version of the consequent
relationship of the rule to invalidate current inferences via the
rule. E.g., if one specifies

⟨is a part of, is an instance of, is a part of⟩

(in error), the solution is to create a new version of the
consequent relationship, is a part of', instantiate all rules for
the new relationship except the one to be deleted, and then resume
computation of the consequences of the new rules. Eventually, the
original is a part of can be removed, when the consequences of the
new is a part of' are known and the original relationship is no
longer needed.
E. Inferring new rules
So far, we have emphasized deriving new facts from base
facts and base rules, but there is an equivalent calculus for
deriving rules from other rules. Each relationship r can be viewed
as representing the set of ordered pairs (X, Y) where “X r Y” is
either a base fact about the world or can be inferred. For
example, the relationship is a part of can be thought of as
representing the set

is a part of ≡ {(X, Y) | X is a part of Y}    (1)

Likewise, each rule is a statement about sets represented by
relationships. Implication is a subset relationship:

⟨r → s⟩ ≡ r ⊆ s    (2)

i.e., the set of facts r is a subset of the set of facts s. In
like manner, a weak transitive rule is also a subset relationship
of a different kind:

⟨r, s, t⟩ ≡ r ∘ s ⊆ t    (3)

i.e., t is a superset of the product r ∘ s resulting from
combining sets r and s, where r ∘ s is defined by

r ∘ s ≡ {(X, Z) | (X, Y) ∈ r, (Y, Z) ∈ s}    (4)

Note that every rule is inclusive of other kinds of meaning, and
that one never limits set contents with a rule.
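The product of equation 4 is an ordinary relational composition; a sketch over Python sets of pairs:

```python
def compose(r, s):
    """r . s = {(X, Z) | (X, Y) in r and (Y, Z) in s} (equation 4)."""
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

def satisfies_weak_rule(r, s, t):
    """<r, s, t> holds for these extensions iff r . s is a subset of t."""
    return compose(r, s) <= t
```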
Creating rules from others is most easily explained by treating
relationships as sets. The obvious relationship between subsets

r ⊆ s and s ⊆ t ⇒ r ⊆ t    (5)

can be trivially restated as a relationship between rules:

⟨r → s⟩ and ⟨s → t⟩ ⇒ ⟨r → t⟩    (6)

in the sense that there is no problem with instantiating a new
rule ⟨r → t⟩ and using it instead of the other two.
The subset relationships for weak transitivity are described by
the chain

r' ∘ s' ⊆ r ∘ s ⊆ t ⊆ t'    (7)

where r' → r (as relationships) or equivalently r' ⊆ r (as sets),
s' → s (as relationships) or equivalently s' ⊆ s (as sets), and
t → t' (as relationships) or equivalently t ⊆ t' (as sets).
Because the subset relationship is transitive, for any rule
⟨r, s, t⟩ and any appropriate r', s', and t', the rule
⟨r', s', t'⟩ also applies by set containment. We might more
concisely represent this set of relationships by substituting
rules for products and implications for subsets:

⟨r, s, t⟩ ⇒ ⟨r', s', t'⟩    (8)
Fig. 1. Recording results of a best known inference of “X t Z” from “X r Y”,
“Y s Z”, and hr,s,ti. Input edges “X r Y” and “Y s Z” are marked with the
number of inferences (2 and 3) needed to create them. The edge “X t Z” is
labeled with the least inference distance + 1 and information about how to
achieve that least distance (r,Y,s).
Finally, there are obvious rules for handling inverses:

⟨r, s, t⟩ ⇒ ⟨inv(s), inv(r), inv(t)⟩    (9)
⟨r → s⟩ ⇒ ⟨inv(r) → inv(s)⟩    (10)
These “meta-rules” can be used to compute unknown rules
from known ones. More important, they account for all possi-
ble uses of implication and inverses in subsequent applications
of weak transitive rules.
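Applying meta-rule 9 to a rule table is mechanical; a sketch reusing the ⟨has part, controls, controls⟩ rule from earlier (the inverse table is illustrative):

```python
def add_inverse_rules(weak_rules, inv):
    """Meta-rule 9: every <r, s, t> implies <inv(s), inv(r), inv(t)>."""
    out = dict(weak_rules)
    for (r, s), t in weak_rules.items():
        out[(inv[s], inv[r])] = inv[t]
    return out

inv = {
    "has part": "is a part of", "is a part of": "has part",
    "controls": "is controlled by", "is controlled by": "controls",
}
rules = add_inverse_rules({("has part", "controls"): "controls"}, inv)
```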
The key to everything that follows is that applying rules
only adds facts, so that if one simply tries all rules until no
new facts are added, one has all of the current available facts.
One naive way to accomplish this is:

repeat
  for all ⟨r → s⟩ do
    Set s = s ∪ r.
  end for
  for all ⟨r, s, t⟩ do
    Set t = t ∪ (r ∘ s).
  end for
until there is no change in any relationship set.
In doing this, we keep track of which relationships are ground
and which are derived, which allows us to construct a “shortest
path” between two given entities as a chain of relationships.
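The fixed-point computation above can be restated over a single fact set; the example facts and rules are a small fragment of the troubleshooting example:

```python
def close(facts, implications, weak_rules):
    """Apply all rules until no new facts appear (a fixed point).

    implications: dict r -> set of consequents s, for rules <r -> s>
    weak_rules:   dict (r, s) -> t, for rules <r, s, t>
    """
    facts = set(facts)
    while True:
        new = set()
        for (x, r, y) in facts:
            for s in implications.get(r, ()):
                new.add((x, s, y))
        for (x, r, y) in facts:
            for (y2, s, z) in facts:
                if y2 == y and (r, s) in weak_rules:
                    new.add((x, weak_rules[(r, s)], z))
        if new <= facts:  # nothing new derived: fixed point reached
            return facts
        facts |= new

facts = {("host03", "is an instance of", "client workstation"),
         ("client workstation", "requires", "file service")}
implications = {"requires": {"depends upon"}}
weak_rules = {("is an instance of", "depends upon"): "depends upon"}
derived = close(facts, implications, weak_rules)
```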
F. Computing queries
Queries are computed efficiently by relying upon math-
ematical properties of our choice of facts and rules. By
construction, any derived fact or rule cannot be invalidated
over time and may safely be cached for later use. Adding a
new fact does not require recomputation of existing cached
facts. Changing a fact is a matter of creating a new version of
its subject and/or object. Changing a rule is a matter of creating
a new version of its consequent relationship. No backtracking
is necessary to compute new facts.
The steps in satisfying a query “X r Y?” are as follows. Let E be
the set of entities described by facts, and let F represent the a
priori facts we have about entities in E. Note that the facts F
are edges in a graph G = (E, F) where E is the set of entities
that the facts describe. Let R represent our a priori rules. These
can be separated into weak transitive rules W and implicative and
inverse rules I, where R = W ∪ I.
First we utilize all implication and inverse rules in I to
generate the derived facts and rules that result from implication:
1) Apply all implication and inverse rules in I to all facts F
to create a complete list of implied facts F'.
2) Use implication and inverse rules in I to generate a
complete set of weak transitive rules W' from W, by using
equations 6, 8, 9, and 10.
These two steps account for all implication and inverse rules in
I, both for facts and rules. Thus we need not account for these in
further computations. This step results in what might be called
“weaker rules” than the originals; e.g., combining the rules

⟨requires, requires, requires⟩ and ⟨requires → may require⟩

results in the derived rule

⟨requires, requires, may require⟩

that is “weaker” than the original.
Our algorithm is based upon the observation that “X r Y” is a
consequence of facts F and rules R if and only if it is a
consequence of facts F' and rules W'. Thus the problem is no
longer an inference problem, but rather a graph problem: whether
“X r Y” is in the transitive closure G'' of the derived labeled
graph G' = (E, F') with respect to the weak transitive rules
in W'.
To compute G'', let dist(“X r Y”) represent the current minimum
known inference distance between base facts and the fact “X r Y”,
or infinity if there is no a priori relationship between X and Y.
We will compute G'' by a simple variation of the transitive
closure algorithm, to wit:

Set G'' = G'.
for all existing edges “X r Y” ∈ F' do
  Set dist(“X r Y”) = 0.
end for
while some edges are updated or added to F'' do
  for all facts “X r Y” and “Y s Z” in F'' do
    if there is a rule ⟨r, s, t⟩ then
      if “X t Z” ∉ F'' then
        Put “X t Z” into F''.
        Label dist(“X t Z”)
          = dist(“X r Y”) + dist(“Y s Z”) + 1.
      else if dist(“X t Z”)
          > dist(“X r Y”) + dist(“Y s Z”) + 1 then
        Label dist(“X t Z”)
          = dist(“X r Y”) + dist(“Y s Z”) + 1.
      end if
    end if
  end for
end while
The result of this process is a graph G'' = (E, F'') in which X is
connected to every candidate Y that has a relationship with it,
and every edge between X and Y is labeled with the minimum number
of rule applications needed to infer the edge. From this
computation, one can list the connected entities in order of
inference distance from X.
The general pattern of computation can be summarized as follows:

F  --I-->  F'
W  --I-->  W'
F' --W'--> F''

where arrows represent (implicative and weak transitive) closure
computations and, at the end of this, “X r Y” ∈ F''.
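The distance-labeled closure can be sketched directly over triples; the chain of requires facts and the strong transitive rule below are illustrative:

```python
def closure_with_distance(base_facts, weak_rules):
    """Compute all derivable facts, each labeled with the least number
    of weak-transitive rule applications needed to infer it."""
    dist = {f: 0 for f in base_facts}  # base facts have distance 0
    changed = True
    while changed:
        changed = False
        for (x, r, y), d1 in list(dist.items()):
            for (y2, s, z), d2 in list(dist.items()):
                if y2 != y or (r, s) not in weak_rules:
                    continue
                fact = (x, weak_rules[(r, s)], z)
                if dist.get(fact, float("inf")) > d1 + d2 + 1:
                    dist[fact] = d1 + d2 + 1
                    changed = True
    return dist

base = {("a", "requires", "b"), ("b", "requires", "c"),
        ("c", "requires", "d")}
dist = closure_with_distance(base, {("requires", "requires"): "requires"})
```

Candidates related to “a” can then be listed in increasing order of `dist`, i.e., from “near to” to “far away from” a.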
G. Improving runtime
The above “brute force” algorithm captures the idea of the
algorithm succinctly, but there are many simple optimizations
that improve its runtime. First, it is always safe to cache prior
computations of derived rules and facts, and use them later.
All that is necessary is to incorporate the effects of new rules
into the cache.
Second, one can safely restrict the domain of the computa-
tion to include only relationships of interest or that result in
relationships of interest. One need not consider, e.g., describes
when one is interested only in determines, because there is no
relationship between documentation and causality.
Third, one can limit inference to facts that involve X and Y. One
can proceed breadth-first from the facts involving X, to include
facts that connect that set with others. This reduces the
iteration above from “for all facts ‘X r Y’ and ‘Y s Z’ in F''”
to “for all facts ‘X r Y’ previously derived about X and all
related facts ‘Y s Z’ in F''”, which converts iteration over all
facts into a breadth-first traversal similar to that in Dijkstra’s
single-destination shortest-path algorithm [2].
H. Inferring stories
To compute the sequence of inferences that connect two
entities A and B, we repeat the computation above, but this
time, record the midpoint entity and relationships used to
create each minimum-distance edge, on the edge. E.g., in
Figure 1, as a result of applying hr,s,tito “X r Y” and “Y s Z”,
we get “X t Z” labeled with both the number of rules required
(6 = 1+2+3) and the prior inference details “r Y s” that resulted
in that inference count.
To produce an explanation of “X t Z”, we use the result
of the transitive closure calculation to proceed in reverse,
replacing each edge with the antecedent of the weak transitive
inference that produced the edge. First we replace “X t Z” with
“X r Y s Z”, where we applied hr,s,tito “X r Y” and “Y s Z”
to produce “X t Z” via the minimum number of rules. We
continue to replace one adjacent pair with a triple at a time,
until all that remains is a sequence of base facts, all labeled
with distance 0.
The result of this sequence of substitutions is a derivation tree
for the relationship “X t Z”. The leaves of the derivation tree
are the entities on the path between X and Z, while the
relationships in the tree describe the relationships between
adjacent entities.
Finally, for each of the leaf facts, referring to the implicative
rules I, we choose the base fact that is most specific and
implies it, thus “grounding” the sequence in low-level terms.
This gives the outputs discussed in the use cases above.
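Recording antecedents during closure makes stories recoverable by recursive unwinding; a sketch (entity names illustrative; the final grounding step via implicative rules is omitted):

```python
def closure_with_traceback(base_facts, weak_rules):
    """Like the distance computation, but each derived fact also records
    the two antecedent facts of its cheapest known derivation."""
    dist = {f: 0 for f in base_facts}
    via = {}  # derived fact -> (antecedent fact, antecedent fact)
    changed = True
    while changed:
        changed = False
        for (x, r, y), d1 in list(dist.items()):
            for (y2, s, z), d2 in list(dist.items()):
                if y2 != y or (r, s) not in weak_rules:
                    continue
                fact = (x, weak_rules[(r, s)], z)
                if dist.get(fact, float("inf")) > d1 + d2 + 1:
                    dist[fact] = d1 + d2 + 1
                    via[fact] = ((x, r, y), (y2, s, z))
                    changed = True
    return dist, via

def story(fact, via):
    """Unwind a derived fact into the ordered chain of base facts."""
    if fact not in via:
        return [fact]  # base fact: distance 0, no antecedents
    left, right = via[fact]
    return story(left, via) + story(right, via)

base = {("a", "requires", "b"), ("b", "requires", "c"),
        ("c", "requires", "d")}
dist, via = closure_with_traceback(base,
                                   {("requires", "requires"): "requires"})
```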
This work arose over time from ideas on utilizing logic
programming in system administration [3], but approaches the
problem of applying logic to system administration from a new
angle based upon ideas in topic maps [4], [5]. We initially
intended to solve problems in navigating in a specific topic
map – Copernicus [6], [7] – that documents and tracks use
of the Cfengine suite of autonomic management tools [8]–
[11]. We realized that our method for navigating Copernicus
has application to troubleshooting, by giving human operators
more detailed causal information than is available by other
means.
Surprisingly, this work did not arise out of any tradition
of formal reasoning or knowledge representation, but
instead, from traditions of library science, graph algorithms,
autonomic system management, computer immunology, and
troubleshooting. We asked ourselves which graph algorithms
would provide meaningful connections between objects in a
topic map, and the answer was a form of abductive reasoning.
Our method represents a limited form of abduction, via a
limited information model, where our simplifications avoid
computational quandaries and make our specific problem of
reporting causal chains easy to solve.
A. Topic maps
Our entities and relationships bear strong resemblance to
“topics” and “associations” between topics in a topic map [4],
[5]. A topic map is a kind of generalized entity-relationship
(ER) model utilized in library science:
1) Topics (entities) are analogous to entries in an index of
a book.
2) Associations (relationships) are analogous to “See also”
in a book index.
3) Occurrences are analogous to page numbers in an index,
and specify “where” a topic is mentioned.
The most important thing we draw from topic maps is the
limitation to positive relationships, and the lack of negatives.
Our ER-diagrams, like topic maps, are intended to define
entities through their relationships with other entities. Our facts
and rules have “definition-like” qualities. Notably:
1) Entities are static and do not change over time (from the
point of view of the reasoning method, inside the formal
system).
2) Relationships are static and do not change over time.
3) Rules are additive and define facets of the definition of
a relationship.
Topic map associations are slightly more expressive than our
relationships; unlike our triples, an association is a quintuple

(topic1, role1, assocname, role2, topic2)

where role1 serves to disambiguate the scope of the name
topic1 while role2 serves to disambiguate the scope of the
name topic2. The scope of a name is the context in which it
has meaning. E.g., “Charlie” could be a host name, or a pet’s
name, or even a software package. The scope of “Charlie”
determines the one of these to which it refers. E.g., our fact

host01 is an instance of file server

might be written in a topic map as

(host01, hostname, is an instance of, host type, file server)

Its inverse association would be

(file server, host type, is an instance of, hostname, host01)

because roles disambiguate direction and allow, e.g., use of
languages that read right-to-left as topics, associations, and
roles. The association itself is viewed as a triple

(hostname, is an instance of, host type)
Our reasoning methods are easily adapted to handle roles, but
we left that adaptation out of this paper for simplicity.
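As an illustration only (the paper deliberately omits roles), a role-bearing association and its projection down to our triples might be modeled as follows; all names here are hypothetical:

```python
from typing import NamedTuple

# A topic-map association carries roles that scope each topic name.
class Association(NamedTuple):
    topic1: str
    role1: str
    assoc: str    # the association (relationship) name
    role2: str
    topic2: str

def as_triple(a: Association):
    """Project an association down to the paper's
    (entity, relationship, entity) triple, discarding the roles."""
    return (a.topic1, a.assoc, a.topic2)

def inverse(a: Association):
    """Roles disambiguate direction, so the inverse association just
    swaps the topic/role pairs around the same association name."""
    return Association(a.topic2, a.role2, a.assoc, a.role1, a.topic1)

fact = Association("host01", "hostname", "is an instance of",
                   "host type", "file server")
```

Projecting either `fact` or `inverse(fact)` through `as_triple` recovers a triple of the form the reasoning method consumes, which is why the adaptation to roles is straightforward.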
In using topic maps to index Copernicus, we found that a
particular way of thinking about the map led to more efficient
use of documentation. If we view the map as a set of links
between topics, it is easy to get lost in the map, while if
we view it as a set of chains of reasoning, the relationships
become more clear and the map becomes more useful. This
led to our algorithms for computing chains, which serve as
“explanations” of relationships between topics.
B. Cfengine
We were also inspired by the philosophy of the configura-
tion management suite Cfengine that Cfknowledge documents.
Cfengine distributes complex problems of configuration man-
agement among cooperating autonomous agents. Cooperation
between agents is based upon promises that in their simplest
form are assertions of behavior presented by one agent to
another. An agent’s promise – e.g., the promise to provide a
particular kind of service – maps to a base fact in our method.
This allows one to link the configuration of Cfengine (a set of
promises) with the documentation for Cfengine (a set of other
facts and relationships that explain the former).
C. Troubleshooting
There are many other approaches to troubleshooting. Snitch
[12] applies a maximum-entropy approach to creating dynamic
decision trees for troubleshooting support, using a proba-
bilistic model. The Maelstrom approach [13] exploits self-
organization in troubleshooting procedures to allow use of
less effective procedures. STRIDER [14] employs a state-based
approach and knowledge of behavior of peer stations to infer
possible trouble points. Outside the system administration do-
main, SACSO [15] guides troubleshooting by heuristics, using
what it calls a “greedy approach” to pick most likely paths to
a solution. Troubleshooting has an intimate relationship with
cost of operations [16], which justifies use of decision trees
and other probabilistic tools to minimize cost and maximize
value.
There are several differences between our work and the
above approaches to troubleshooting. We base our trou-
bleshooting upon a partial description of the architecture of
the system under test; it is partial because any sufficiently
detailed description of architecture lacks some details, and
details change over time so that no snapshot of architecture
can be completely accurate. We use architectural reasoning
to infer the nature of dependencies in the system, and utilize
those inferences to guide troubleshooting. The net result is that
we show how to apply something we already need to have –
a global map of architecture – to the troubleshooting process.
D. Abduction
The problem of determining a set of rules to apply to
achieve a desired relationship is a simple form of logic-based
abduction [17]–[19], i.e., deriving an “explanation” from a
logical description of a problem and an observed symptom.
Unlike prior work, we limit the problem structure to gain
substantive performance advantages. The general abduction
problem is to support some conclusion C from a base of facts
B, via logical reasoning. In our case, C is a single fact “X r Y”,
while B is limited to the facts and rules as specified above.
The output of our abduction calculation is limited to linear
chains of reasoning that a human can interpret quickly. Thus
we do not solve a general abduction problem, but rather, a very
specific one, constructed so that only deduction from known
facts is required.
E. Information Modeling
Our facts and rules are a (very limited) form of informa-
tion modeling as proposed by Parsons [20]. Whereas early
information modeling tried to express models by classifying
objects in a form of object-oriented modeling, this mechanism
quickly proved vulnerable to a problem Parsons calls the
“tyranny of classification” [21] in which an instance must be a
member of some class. Parsons proposes a separation of data
into instances and separate classes, which mimics our design
closely. The main difference between our data model and
Parsons’ is that ours intentionally does not model certain kinds
of relationships, e.g., ternary relationships such as “foo(X,Y,Z)”.
Our approach is quite different from information modeling
regimens such as the Shared Information and Data model (SID)
[22], mostly due to lack of structure (or even the need for
structure) in our approach. While SID gains its strength from
stratifying knowledge into domains, our approach invokes
stratification by simple mechanisms such as the relationship
is an instance of. There is no overall required hierarchical
structure to our data, and any hierarchical relationships emerge
from defined facts and rules (and potentially topic map roles).
Our approach is also quite distinct from prior approaches
to “causal modeling,” e.g., “revealed causal modeling” [23],
[24], because we rely upon user knowledge of the details of
causality, and do not try to infer it second-hand. Our model
of causality is based upon coding simple English statements,
rather than inferring probabilities of relationship between
entities.
F. Ontology
Our relationship to ontological mapping is to propose a
new problem. Like all other approaches to information rep-
resentation, our approach requires ontological mapping to
link concepts arising from different information domains. For
example, the concept of a determines relationship may differ
depending upon who is using it. However, ontological mapping
in our system – like reasoning – is made simpler by the
limitations we impose upon our logic. The rules that govern
a relationship in our representation constitute – in some sense
– its “meaning”, and a set of relationships that satisfy the
same rules may safely be considered as equivalent. Thus the
ontological mapping problem is – for us – a matter of matching
relationships across information domains in such a manner
that the same rules apply to either side of a mapping. This is
– again – a much simpler problem than general ontological
mapping as embodied, e.g., in DEN-NG [25].
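A toy sketch of this matching criterion follows, under the simplifying assumption that the other relationship names appearing in a rule already agree across the two domains; all relationship names are illustrative. Each relationship's "signature" is the set of weak transitive rules it participates in, with its own name masked out so signatures compare across domains:

```python
def signature(rel, rules):
    """The weak transitive rules <r, s, t> in which `rel` participates,
    with `rel` replaced by a placeholder so that signatures can be
    compared across differently-named relationships."""
    sig = set()
    for (r, s, t) in rules:
        if rel in (r, s, t):
            sig.add(tuple("*" if x == rel else x for x in (r, s, t)))
    return frozenset(sig)

def candidate_matches(rels_a, rules_a, rels_b, rules_b):
    """Pairs of relationships, one per domain, whose governing rules
    coincide -- candidates for ontological equivalence."""
    return [(a, b) for a in rels_a for b in rels_b
            if signature(a, rules_a) == signature(b, rules_b)]

# Two hypothetical domains whose dependence relationships obey the
# same (self-)transitive rule and thus match.
rules_a = [("depends upon", "depends upon", "depends upon")]
rules_b = [("requires", "requires", "requires")]
```

Here `candidate_matches(["depends upon"], rules_a, ["requires"], rules_b)` proposes the single mapping one would expect; richer rule sets would prune matches further.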
The proposed method solves some common problems but
raises some deeper questions about the use of knowledge in
systems management. The main cost of using the method is
that its input knowledge must be carefully structured into a
usable form, which often requires substantive transformation
from its original form. During this transformation, some
meaning is lost. The method is very sensitive to the choice
of facts and rules to represent ideas, and one invalid rule or
fact can render the output useless.
A. Ambiguity and uncertainty
Some statements one can make about architecture are cer-
tain, and others portray only partial information. In formal
reasoning, certain kinds of uncertainty are “good”, in the sense
that they enable reasoning, while other kinds are “bad”, in the
sense that they impede reasoning. Acknowledging uncertainty
in knowledge of architecture enables reasoning, while uncer-
tainty in interpreting facts impedes reasoning.
Ambiguity based upon lack of assumptions is “good”, in the
sense that the reasoning method functions best when as little as
possible is assumed. For example, if one is not absolutely sure
about the nature of a dependence between entities, one uses the
generic depends upon relationship to assert that there is some
unspecified dependence. Likewise, if one is not absolutely sure
that there is a dependence between two entities, one should
code that relationship as might depend upon to remember that
the dependence is not known to be a certainty. In both of these
examples, we encode uncertainty in a set of facts as a generic
relationship.
Another kind of ambiguity impedes reasoning, by making
the user uncertain as to how to interpret a relationship. For
example, the relationship is a is ambiguous about the domain
in which similarity is invoked. Depending upon how one
speaks in English, “X is a Y” could mean that X is an instance
of class Y, is similar to an instance Y, or even that X is a variant
of an instance Y. Thus in coding facts, we avoid contextually-
defined verbs such as is a in favor of the disambiguated forms
is an instance of, is a type of, and is a peer of.
B. Why we did not use logic programming
We implemented a prototype reasoning system entirely in
the Perl programming language. The literate Prolog program-
mer may realize that our rules and facts fit very well into the
logic programming language Prolog. E.g., if we code facts
fact(host01, instance_of, file_server).
and implication rules like:
fact(X, might_depend_upon, Y) :- fact(X, depends_upon, Y).
and transitive rules like:
fact(X, might_depend_upon, Z) :-
    fact(X, depends_upon, Y), fact(Y, might_depend_upon, Z).
then the entire reasoning method is very easily coded in Prolog
with one clause per fact and one clause per rule.
The reason we did not do this is that our rules are much
simpler than what Prolog supports, with the result that our Perl
prototype executes a few orders of magnitude faster than an
equivalent Prolog program. It was rather important to us to
have the program function quickly, and direct access to data
structures was useful in speeding up the search process. Unlike
general, unconstrained logic programs, our careful choice of
facts and rules allows us to eliminate backward chaining
completely, which means we do not need Prolog at all.
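The forward chaining that this constraint enables can be sketched minimally; the paper's prototype was in Perl, so the following Python is illustrative only and the relationship names are invented. Because facts and rules are so restricted, a naive fixed-point iteration followed by a set lookup answers queries with no backward chaining at all:

```python
def forward_chain(base, implications, transitive):
    """base: set of (X, r, Y) triples.
    implications: {r: r2} meaning "X r Y" implies "X r2 Y".
    transitive: iterable of (r, s, t): "X r Y" and "Y s Z" give "X t Z".
    Iterates to a fixed point; queries become set membership tests."""
    known = set(base)
    while True:
        new = set()
        for (x, r, y) in known:
            if r in implications:                   # implication rules
                new.add((x, implications[r], y))
            for (y2, s, z) in known:                # weak transitive rules
                if y2 == y:
                    for (r1, s1, t) in transitive:
                        if (r1, s1) == (r, s):
                            new.add((x, t, z))
        if new <= known:                            # fixed point reached
            return known
        known |= new

# Hypothetical example data.
base = {("web", "uses", "db"), ("db", "resides on", "host01")}
implications = {"uses": "depends upon", "resides on": "depends upon"}
transitive = [("depends upon", "depends upon", "depends upon")]
derived = forward_chain(base, implications, transitive)
```

After the fixed point, "does web depend upon host01?" is a hash lookup in `derived`, which is what made the direct-data-structure Perl implementation so much faster than resolution in Prolog.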
The reasoning method remains extremely simple and several
obvious quandaries remain in the work.
First, the desire to express reasoning as stories does not just
simplify computation, but also limits the kind of reasoning that
can be done. It is not clear exactly where the limits lie, though
we know that there are some facts that cannot be inferred or
even represented.
The simplest shortcoming arises when we try to use
Burgess’ promises [26], [27] as a description of architecture,
which is part of what promises are intended to describe. A
promise is a ternary relationship between three entities: a
promiser, a promisee, and a promise body (or description). Our
rules only act on binary relationships, so that so far, the full
concept of a promise cannot be implemented or reasoned about
in the current method. In particular, the relationship between
promises and bindings might be described in Prolog as:
bound(Agent2, Agent1)
    :- promise(Agent1,Service,Agent2),
       promise(Agent2,use(Service),Agent1).
where capitalized phrases represent variables while lowercase
phrases are atoms (strings). This means that Agent2 is bound
to Agent1 if Agent1 promises a service and Agent2
promises to use that service. To our knowledge, this rule
cannot be coded in our method, though bindings can be coded
as base facts.
One potential solution to this kind of quandary is to invoke
topic map roles to disambiguate roles in ternary facts, and to
express these relationships in the manner of chemical bonds,
file server —(as promiser)— & —(as promisee)— desktop workstation
                            |
                    (as promise body)
                            |
                       file service
to mean that a file server (in the role of promiser)
promises file service (in the role of a promise body) to
a desktop workstation (in the role of promisee). This has
potential to encode more kinds of relationships, at the expense
of depicting the results of inference as a two-dimensional
graph rather than as a linear chain of reasoning.
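One way to sketch this encoding is to reify each promise as an intermediate entity bound to its participants by binary role facts. This is a hypothetical encoding, not the paper's implementation, and the relationship names are invented for illustration:

```python
def reify_promise(promiser, body, promisee, pid):
    """Encode the ternary promise(promiser, body, promisee) as three
    binary role facts hung off an intermediate promise entity, in the
    manner of a chemical bond."""
    node = f"promise#{pid}"   # hypothetical synthetic entity name
    return [
        (promiser, "is promiser of", node),
        (node, "has promise body", body),
        (node, "is promised to", promisee),
    ]

triples = reify_promise("file server", "file service",
                        "desktop workstation", 1)
```

The resulting binary facts fit the existing calculus, but chains of reasoning through the synthetic promise entity branch at it, which is why the output becomes a two-dimensional graph rather than a linear story.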
In starting this work, we suffered from all of the precon-
ceptions of regular computer algebra and knowledge repre-
sentation: that computers can solve the world’s problems and
that it does not matter whether a human understands what
the computer is doing or not. We learned that if reasoning is
limited to the kind that is easily human-understandable, then
it has more value and reasoning actually becomes simpler to
implement. Knowledge domains that might have once been
irrelevant to a task become relevant. Notions such as inference
distance become a meaningful measure of the strength of
relationships between entities, and one can express a very
complex sequence of the proper kind of inferences in a very
readable form.
Our method of reasoning is not “the solution” to trou-
bleshooting or to documentation search, but rather, utilizes
a part of available information that was previously ignored.
It is not a replacement for current strategies, but rather, a
synergistic addition to the toolbox of the system administrator,
in the grand challenge of making the job more doable by
integrating all available knowledge about each problem.
The authors would like to thank John Strassner for the
opportunity to present this work, as well as years of thoughtful
comments on the use of knowledge in management. We also
thank Oslo University College for generously funding Prof.
Couch’s extended residence at the University, during which
time this work was done.
[1] M. Burgess, “Probabilistic anomaly detection in distributed computer
networks,” Science of Computer Programming, vol. 60, no. 1, pp. 1–26.
[2] E. Dijkstra, “A note on two problems in connexion with graphs,”
Numerische Mathematik, vol. 1, pp. 269–271, 1959.
[3] A. Couch and M. Gilfix, “It’s elementary, dear Watson: Applying logic
programming to convergent system management processes,” Proceed-
ings of the Thirteenth Systems Administration Conference (LISA XIII)
(USENIX Association: Berkeley, CA), p. 123, 1999.
[4] S. Pepper, Encyclopedia of Library and Information Sciences. CRC
Press, ISBN 9780849397127, 2009, ch. Topic Maps.
[5] ——, “The TAO of topic maps,” in Proceedings of XML Europe Confer-
ence, 2000.
[6] M. Burgess, “Cfengine knowledge management,” CFengine AS, Tech.
Rep., 2009.
[7] ——, “Knowledge management and promises,” Lecture Notes in Com-
puter Science, vol. 5637, pp. 95–107, 2009.
[8] ——, “A site configuration engine,” Computing Systems (MIT Press:
Cambridge, MA), vol. 8, p. 309, 1995.
[9] M. Burgess and R. Ralston, “Distributed resource administration using
cfengine,” Software Practice and Experience, vol. 27, p. 1083, 1997.
[10] M. Burgess, “Automated system administration with feedback regula-
tion,” Software Practice and Experience, vol. 28, p. 1519, 1998.
[11] ——, “Cfengine as a component of computer immune-systems,” Pro-
ceedings of the Norwegian Conference on Informatics, 1998.
[12] J. Mickens, M. Szummer, and D. Narayanan, “Snitch: interactive
decision trees for troubleshooting misconfigurations,” in SYSML’07:
Proceedings of the 2nd USENIX Workshop on Tackling Computer Systems
Problems with Machine Learning Techniques. Berkeley, CA, USA:
USENIX Association, 2007, pp. 1–6.
[13] A. Couch and N. Daniels, “The maelstrom: Network service debugging
via ‘ineffective procedures’,” Proceedings of the Fifteenth Systems
Administration Conference (LISA XV) (USENIX Association: Berkeley,
CA), p. 63, 2001.
[14] Y.-M. Wang, C. Verbowski, J. Dunagan, Y. Chen, H. J. Wang, C. Yuan,
and Z. Zhang, “STRIDER: A black-box, state-based approach to change
and configuration management and support,” in LISA ’03: Proceedings
of the 17th USENIX Conference on System Administration. Berkeley,
CA, USA: USENIX Association, 2003, pp. 159–172.
[15] F. V. Jensen, U. Kjærulff, B. Kristiansen, H. Langseth, C. Skaanning,
J. Vomlel, and M. Vomlelová, “The SACSO methodology for troubleshoot-
ing complex systems,” Artif. Intell. Eng. Des. Anal. Manuf., vol. 15,
no. 4, pp. 321–333, 2001.
[16] A. Couch, N. Wu, and H. Susanto, “Towards a cost model for system
administration,” Proceedings of the Nineteenth Systems Administration
Conference (LISA XIX) (USENIX Association: Berkeley, CA), pp. 125–
141, 2005.
[17] T. Eiter and G. Gottlob, “The complexity of logic-based abduction,” J.
ACM, vol. 42, no. 1, pp. 3–42, 1995.
[18] P. Liberatore and M. Schaerf, “Compilability of propositional abduc-
tion,” ACM Trans. Comput. Logic, vol. 8, no. 1, p. 2, 2007.
[19] G. Nordh and B. Zanuttini, “What makes propositional abduction
tractable,” Artif. Intell., vol. 172, no. 10, pp. 1245–1284, 2008.
[20] J. Parsons, “An information model based on classification theory,”
Management Science, vol. 42, no. 10, pp. 1437–1453, 1996.
[21] J. Parsons and Y. Wand, “Emancipating instances from the tyranny of
classes in information modeling,” ACM Trans. Database Syst., vol. 25,
no. 2, pp. 228–268, 2000.
[22] TM Forum, “Information Framework (SID),” website.
[23] K. M. Nelson, H. J. Nelson, and D. Armstrong, “Revealed causal
mapping as an evocative method for information systems research,” in
HICSS ’00: Proceedings of the 33rd Hawaii International Conference
on System Sciences, Volume 7. Washington, DC, USA: IEEE Computer
Society, 2000, p. 7046.
[24] K. M. Nelson, S. Nadkarni, V. K. Narayanan, and M. Ghods, “Un-
derstanding software operations support expertise: a revealed causal
mapping approach,” MIS Q., vol. 24, no. 3, pp. 475–507, 2000.
[25] J. Strassner, S. Meer, D. O’Sullivan, and S. Dobson, “The use of context-
aware policies and ontologies to facilitate business-aware network man-
agement,” J. Netw. Syst. Manage., vol. 17, no. 3, pp. 255–284, 2009.
[26] M. Burgess and A. Couch, “Autonomic computing approximated by
fixed point promises,” Proceedings of the 1st IEEE International Work-
shop on Modelling Autonomic Communications Environments (MACE);
Multicon Verlag 2006. ISBN 3-930736-05-5, pp. 197–222, 2006.
[27] M. Burgess, “Promise you a rose garden,”