Visualization of Intuitive Explanations using Koios++
Björn Forcher¹, Nils Petersen², Andreas Dengel³, Michael Gillmann⁴, Zeynep Tuncer⁵
Abstract. In a certain sense, explanations in computer science are answers to questions, and often an explanatory dialog is necessary to support users of a software tool. In this paper, we introduce the concept of intuitive explanations, which represent the first explanations in an explanatory dialog. Based on an abstract approach to explanation generation, we present the generic explanation component Koios++, which applies Semantic Technologies to derive intuitive explanations. We illustrate our generation approach by means of the information extraction system smartFIX and put a special emphasis on visualizing explanations as semantic networks using a special layouting algorithm. smartFIX itself is a product portfolio for knowledge-based extraction of data from any document format. The system automatically determines the document type and extracts all relevant data for the respective business process. In this context, Koios++ is used to justify extraction results.
1 Introduction
In a certain sense, explanations in computer science are answers to questions [7] and often an explanatory dialog [1] is necessary to support users of a software tool. In this paper we introduce the concept of intuitive explanations, which can be illustrated by means of an everyday situation. Imagine a nine-year-old child complains about dizziness and visits the doctor for an examination. After the examination the doctor concludes that the child suffers from Ménière's disease. The child does not know this particular disease and asks the doctor what it is. In other words, the child requests an explanation that helps to understand what dizziness and Ménière's disease have to do with each other. Probably, the doctor will neither elaborate on Ménière's disease nor make use of foreign words. On the contrary, he will roughly estimate the knowledge of the child and, based on that, give a short explanation. For instance, Ménière's disease is a disease of the inner ear which causes, for instance, dizziness is an understandable explanation, enabling the child to take up the given information and ask further questions. In the figurative sense, intuitive explanations represent the first explanations in an explanatory dialog, in which the explainer tries to give an understandable explanation based on a rough estimation of the knowledge of his counterpart. The explanation enables the consumer of the explanation to ask further questions, leading to a complex explanatory dialog.
¹ German Research Center for Artificial Intelligence (DFKI) GmbH, Germany, email: bjoern.forcher@dfki.de
² German Research Center for Artificial Intelligence (DFKI) GmbH, Germany, email: nils.petersen@dfki.de
³ German Research Center for Artificial Intelligence (DFKI) GmbH, Germany, email: andreas.dengel@dfki.de
⁴ Insiders Technologies GmbH, Germany, email: m.gillmann@insiders-technologies.de
⁵ John Deere European Technology Innovation Center, Germany, email: TuncerZeynep@johndeere.com
In smartFIX, documents are classified automatically on the basis of free-form and forms-analysis methods [5]. Relevant data is extracted using different methods for each document type and is validated and valuated via database matching and other sophisticated knowledge-based methods. Mathematical and logical checks further enhance data quality. Data that is accurately recognized is released for direct export. In contrast, unreliably recognized data is forwarded to a verification workplace for manual checking. In many cases, users have no difficulty extracting information from a document themselves. Consequently, they often do not understand the difficulties smartFIX faces during the extraction process. To make the system more transparent, smartFIX creates a semantic log that is used by the generic explanation component Koios++ to justify extraction results. As explanations are answers to questions, explanations represent some kind of information that can be externalized by text or by charts. Currently, Koios++ uses semantic networks along with a special layouting algorithm to visualize explanations. We will point out that this kind of visualization is a useful alternative to textual explanations, enabling intuitive explanations and thus the start of an explanatory dialog.
This paper is structured as follows. The next section gives a short overview of relevant research on explanations and describes an abstract explanation generation approach. Sect. 3 presents the smartFIX system and motivates its explanation need with an intuitive example. Sect. 4 presents the generic explanation component Koios++, while the following section describes the explanation-aware layouting algorithm. We conclude the paper with a brief summary and outlook.
2 Related Work
Wick and Thompson [12] developed the expert system REX, which implements the concept of reconstructive explanations. REX transforms a trace, a line of reasoning, into a plausible explanation story, a line of explanation. The transformation is an active, complex problem-solving process using additional domain knowledge. The degree of coupling between the trace and the explanation is controlled by a filter that can be set to one of four states regulating its transparency. The more information from the trace is let through the filter, the more closely the line of explanation follows the line of reasoning. In this work, we describe how Semantic Technologies can be applied to (re-)construct explanations.
In [2] we described our conceptual work on explaining smartFIX. In our explanation scenario (Fig. 1) we distinguish three main participants: the user, who is corresponding with the software system via its user interface (UI); the originator, the tool that provides the functionality for the original task of the software; and the explainer. Originator (smartFIX) and explainer (Prof. Smart) need to be coupled in order to provide the explainer with the necessary knowledge about the inner workings of the originator. In (rule-based) expert systems, looking at the rule trace was the only way of accessing the originator's actions. Given that the inference mechanism is fixed in those systems, the trace was all the explainer needed.
Figure 1. Explanation Generation [2].
The mentioned scenario implies that the originator has to provide detailed information about its behavior and solutions. Therefore it is necessary that the originator prepares some kind of log representing the initial starting point for the explainer to generate explanations. Regarding user questions, this information is transformed step by step into an adequate explanation. Thus, a multi-layered explanation model is constructed, in which each step contributes a layer to the model, namely its transformation result.
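The chain of transformation steps described above can be sketched as a simple pipeline. The following Python fragment is an illustrative outline only, not the Koios++ implementation; the layer contents, function names and element names are placeholders we chose for this sketch.

```python
# Illustrative sketch (not the authors' implementation): the multi-layered
# explanation model as a chain of transformation steps, where each step
# consumes the previous layer and contributes a new layer to the model.

def interpret(semantic_log, keywords):
    """Interpretation: map user keywords onto log elements (translation layer)."""
    return {"layer": "translation",
            "elements": [e for e in semantic_log if any(k in e for k in keywords)]}

def construct(translation):
    """Construction: derive the actual explanation content (content layer)."""
    return {"layer": "content", "elements": translation["elements"]}

def externalize(content):
    """Externalization: choose a communication form, here a semantic network."""
    return {"layer": "externalization", "form": "semantic network",
            "nodes": content["elements"]}

def present(externalization):
    """Presentation: add layout and style information for the renderer."""
    return {"layer": "presentation", "form": externalization["form"],
            "nodes": externalization["nodes"], "layout": "cell-layout"}

def explain(semantic_log, keywords):
    """Build the multi-layered explanation model step by step."""
    model = [interpret(semantic_log, keywords)]
    for step in (construct, externalize, present):
        model.append(step(model[-1]))
    return model

layers = explain(["top_down_search", "uncertain_result"], ["uncertain"])
print([l["layer"] for l in layers])
# ['translation', 'content', 'externalization', 'presentation']
```

Because every step only reads the previous layer, the explainer can keep all intermediate layers, which matches the meta-log idea discussed below.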
Depending on the coupling, originator and explainer share information that is required for problem solving as well as for explanation generation. In Fig. 1, this information is contained in the explanation knowledge base (EKB). The originator may have access to information which is hidden from the explainer and vice versa. At the very least, they have to share the semantic log. As its name implies, the logging process collects all information about the behavior of the originator for building the log.
Users communicate their explanation needs by keywords or in natural language. As the formal language of originator and explainer is often completely different from the user's language, an interpretation process is necessary. In simplified terms, relevant parts of the semantic log and the EKB must be identified and the exact explanation needs of the user must be determined. The result of the interpretation process is called the translation layer.
The translation layer does not necessarily represent adequate explanation information. Up to this stage, the explainer is only aware of the user's explanation problem concerning, for instance, an incomprehensible result of the originator. However, the information that completely solves the user's explanation problem has not yet been derived. This is the task of the construction process, which is similar to the concept of reconstructive explanations. The result of that process is called the content layer, representing useful explanation information. As understandability is a very important aspect of explanation [10, 11], explanations should not contain too much or overly confusing information. Furthermore, an intuitive explanation takes up the knowledge of the user and reveals a connection between known and unknown information, enabling the user to ask follow-up questions.
An explanation is information that is communicated by text, charts, tables, etc. Each communication form has different application possibilities in an explanation scenario. Text can describe complicated conceptions, whereas charts can reveal qualitative connections between concepts in a simple way [13]. The externalization process transforms the content layer into a formal description for communicating explanations, namely the externalization layer. In this work, we put a special emphasis on semantic networks based on mathematical graphs for depicting explanations.
However, this layer does not include layout and style information. This is the task of the presentation process, which transforms the content of the externalization layer into the presentation layer. The information of the latter layer can be used by a special renderer to visualize the explanation. As explained, the presentation process and the layouting of the semantic network are the main contribution of this work. We will show how the layout can contribute to intuitive explanations and the explanatory dialog.
The explainer creates some kind of meta log that is indicated
by the arrow between EKB and explainer. In a way, the explainer
is aware of single transformation processes and layers and thus, it
knows how it explains a certain explanation problem. This knowl-
edge is essential to continue the explanatory dialog and offers so-
phisticated interaction possibilities.
So far, we have described an abstract method for generating explanations. In the next sections we describe a realization of that method with the help of Semantic Technologies.
3 smartFIX
smartFIX extracts data from paper documents as well as from many
electronic document formats (e.g., faxes, e-mails, MS Office, PDF,
HTML, XML, etc.). Regardless of document format and structure,
smartFIX recognizes the document type and any other important in-
formation during processing.
Basic image processing such as binarization, despeckling, rotation and skew correction is performed on each page image. If desired, smartFIX automatically merges individual pages into documents and creates processes from individual documents. For each document, the
document class and thus the business process to be triggered in the
company is implicitly determined. smartFIX subsequently identifies
all relevant data contained in the documents and related to the respec-
tive business process. In this step, smartFIX can use customer rela-
tion and enterprise resource planning data (ERP data) provided by
a matching database to increase the detection rate. A special search
strategy searches for all entries from the customer’s vendor database
on the document. The procedure works independently of the location,
layout and completeness of the data on the document. Within smart-
FIX this strategy is called “Top Down Search”. Moreover, smartFIX
provides self-teaching mechanisms as a highly successful method for
increasing recognition rates. Both general and sender-specific rules
are applied. An automatic quality check is then performed on all recognized values. Among others, Constraint Solving [4] and Transfer Learning methods [8] are used. Values that are accurately and unambiguously recognized are released for direct export; uncertain [9]
values are forwarded to a verification workplace for manual checking
and verification. The quality-controlled data is then exported to the
desired downstream systems, e.g., an enterprise resource planning
system like SAP for further processing. An overview of the system
architecture is also presented in [2].
Let us illustrate an actual scenario that currently results in support calls and internal research and clarification effort by experts.
Often, several subcompanies of the same trust are resident at the
same location or even in the same building. If one smartFIX system
has to analyze, for instance, invoices of more than one of those com-
panies, very similar database entries can be found in the customer’s
master database.
The company’s master data is an important knowledge source used
by Top Down Search during the analysis step of smartFIX. When
smartFIX analyzes an invoice sent to such a subcompany it may be
unable to identify a clear and unambiguous extraction result due to
the high degree of similarity of the master data entries. So, smartFIX
has to regard all the subcompanies as possible hits.
smartFIX extracts the most reliable result based on extraction rules. Here, it does not valuate that result as reliable but as a suggestion [9]. Fig. 2 shows the smartFIX Verifier in that case. One can see that the recipient's name and identifier are correctly extracted, but the values are marked blue, which means "uncertain" in the smartFIX context.
Figure 2. Analysis results presented in smartFIX Verifier
With this picture on screen, the user wonders why the system asks for interaction (here, for pressing the Return key to confirm the correct extraction results) although she can clearly and easily read the full recipient's address on the invoice. This scenario applies in general and becomes even less transparent the more extraction rules and sophisticated extraction and valuation methods come into operation.
smartFIX creates a semantic log that is based on the smartFIX Ontology (SMART), an extension of both the OWL-S ontology⁶ and the EXACT ontology. OWL-S provides a set of representation primitives capable of describing features and capabilities of Web services in an unambiguous, machine-interpretable form. This includes, among other things, the possibility to describe how a service works. OWL-S comprises general constructs to represent processes, results and intermediate results. The EXACT ontology provides representational constructs to describe the explanation generation approach, including primitives for semantic networks (nodes, edges, ...), layouting (cell-layout, box-layout, ...) and presentation (fonts, ...). The SMART ontology integrates both ontologies with regard to smartFIX-specific aspects. This allows not only describing the behavior of smartFIX in an abstract way, but also instantiating a concrete log with respect to the Semantic Logging step as presented in Sect. 2.
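A tiny extract of such a semantic log can be pictured as a set of RDF-style triples. The predicate and instance names below (e.g. smart:hasSubprocess, smart:Result1) are hypothetical stand-ins for the SMART vocabulary, chosen only to mirror the running example; they are not actual smartFIX output.

```python
# Hypothetical extract of a semantic log in the spirit of the SMART ontology.
# All identifiers are illustrative placeholders, not real SMART vocabulary.

SMART_LOG = [
    ("smart:Instance1", "rdf:type",            "owls:Process"),
    ("smart:Instance1", "rdfs:label",          "root process"),
    ("smart:Instance1", "smart:hasSubprocess", "smart:TopDownSearch1"),
    ("smart:TopDownSearch1", "rdf:type",       "owls:Process"),
    ("smart:TopDownSearch1", "rdfs:label",     "top down search"),
    ("smart:TopDownSearch1", "smart:produced", "smart:Result1"),
    ("smart:Result1", "rdf:type",              "smart:UncertainResult"),
    ("smart:Result1", "smart:concernsField",   "smart:Recipient"),
]

def objects(log, subject, predicate):
    """Follow one predicate from a subject in the triple set."""
    return [o for s, p, o in log if s == subject and p == predicate]

print(objects(SMART_LOG, "smart:Instance1", "smart:hasSubprocess"))
# ['smart:TopDownSearch1']
```

Representing the log as plain triples is what makes the later steps (graph exploration, rule-based shortening) uniform RDF-graph-to-RDF-graph transformations.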
4 Explanation Component Koios++
In this section we present an implementation of the abstract explanation generation method of Sect. 2 by means of Semantic Technologies. As mentioned before, smartFIX creates a semantic log that is encoded with the SMART ontology. A specific log is leveraged by the generic explanation component Koios++ and is thus the starting point for the generation processes Interpretation, Construction, Externalization and Presentation. The semantic log represents an RDF-Graph which is stepwise transformed into the RDF-Graph of the presentation layer.
⁶ http://www.w3.org/Submission/OWL-S/
At its core, Koios++ represents a semantic search engine that enables keyword-based search on graph-shaped RDF data [6]. In addition, it includes various manipulation strategies with which any RDF-Graph can be transformed into another RDF-Graph. For instance, Koios++ employs the Quadruple Prolog Language (QPL) [3] to transform an RDF-Graph into another one by means of a set of predefined Prolog rules. As a consequence, Koios++ offers all prerequisites to realize the abstract explanation generation method as presented in Sect. 2. In this section, we describe how Koios++ is used to explain the smartFIX system in order to justify extraction results.
For interpreting the user's explanation needs the semantic search algorithm of Koios++ is applied, realizing a keyword-based search on the semantic log. The search engine maps keywords to elements of the log and searches for connections between them. More precisely, every keyword k_1 to k_z is mapped to a set of mapping elements m_1 to m_z. The mapping element sets are distributed to the threads t_1 to t_z, and in each thread t_g a graph exploration is performed for each mapping element e_u^g ∈ m_g; thus, many paths are determined starting from e_u^g. In case there is a log element r that is reached by some path in every thread, a connecting subgraph of the log can be constructed consisting of z paths. To conclude, the engine determines a set of subgraphs representing basic explanation information, including an articulation point r that is called the root or connecting element. Regarding the example presented in Sect. 3, a user may use the keywords analyser, recipient and uncertain result in order to find out why the recipient is not identified correctly. A subgraph that connects the corresponding elements represents the basic information needed to understand why the result is unreliable. In general, users do not use the same keywords to specify their explanation need, depending on their level of expertise. For that reason, we enrich the SMART ontology with synonyms taken from the WordNet thesaurus⁷. As a result, users can also use unsure result instead of uncertain result. However, keywords generally map onto several elements of the log. Hence, subgraphs are ranked depending on the weighting of the graph elements, representing possible explanation alternatives.
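The interpretation step can be pictured with a toy example. The following sketch is our own simplification of the idea: a hand-made synonym table stands in for WordNet, the log is reduced to an undirected adjacency list, and a breadth-first exploration from every mapping element collects the elements reached from all keyword mappings, i.e. candidates for the connecting element r.

```python
# Simplified sketch of the interpretation step. The synonym table, the toy
# graph and all labels are illustrative assumptions, not Koios++ internals.
from collections import deque

SYNONYMS = {"unsure": "uncertain"}   # tiny stand-in for WordNet enrichment

GRAPH = {  # undirected adjacency of a toy semantic log
    "analyser": ["root process"],
    "root process": ["analyser", "top down search"],
    "top down search": ["root process", "recipient", "uncertain result"],
    "recipient": ["top down search"],
    "uncertain result": ["top down search"],
}

def map_keyword(keyword):
    """Map a (synonym-normalized) keyword to matching log elements."""
    keyword = " ".join(SYNONYMS.get(w, w) for w in keyword.split())
    return [e for e in GRAPH if keyword in e]

def reachable(start):
    """Breadth-first exploration from one mapping element."""
    seen, queue = {start}, deque([start])
    while queue:
        for n in GRAPH[queue.popleft()]:
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return seen

def connecting_elements(keywords):
    """Elements reached from every keyword's mapping set: candidates for r."""
    mapped = [map_keyword(k) for k in keywords]                       # m_1 .. m_z
    reach = [set().union(*(reachable(e) for e in m)) for m in mapped]
    return set.intersection(*reach)

roots = connecting_elements(["analyser", "recipient", "unsure result"])
print("top down search" in roots)  # True: a possible connecting element
```

In Koios++ the candidate subgraphs would additionally be ranked by element weights; the sketch only shows how synonym normalization and the shared-reachability idea fit together.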
As explained above, the Interpretation step determines an extract of the log. Potentially, this extract is still not understandable for non-expert smartFIX users. After initially experimenting with explanations of smartFIX, it turned out that the extract of the log contains too much information. Primarily, the log includes a long sequence of subprocesses, which contradicts the concept of intuitive explanations. For that reason, we focused in particular on shortening the determined extract of the log. For shortening we defined several QPL rules that leverage the transitivity of the subprocess relation and that do not affect the truth content of the explanation. Hence, the construction process transforms the log extract into a smaller RDF-Graph. Apart from that, connecting and mapping elements are never removed, and there is always a connection between all mapping elements. This is the content of the explanation that must be visualized, as described in the next section.
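The shortening idea can be sketched in a few lines. The paper uses predefined QPL/Prolog rules; the Python version below is only an analogy under our own assumptions (the predicate name is a placeholder): a chain of subprocess triples is collapsed into a single edge as long as no mapping or connecting element is dropped.

```python
# Analogy to the QPL shortening rules (not the actual rules): collapse
# s -> x -> o chains over a transitive predicate when the middle node x
# is neither a mapping nor a connecting element (i.e. not in `keep`).

def shorten(triples, keep, pred="smart:hasSubprocess"):
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        for s, p, x in list(triples):
            # skip triples already rewritten in this pass, and protected nodes
            if (s, p, x) not in triples or p != pred or x in keep:
                continue
            nxt = [o for s2, p2, o in triples if s2 == x and p2 == pred]
            if len(nxt) == 1:  # unambiguous chain: collapse it
                triples -= {(s, p, x), (x, pred, nxt[0])}
                triples.add((s, pred, nxt[0]))
                changed = True
    return triples

log = [("root", "smart:hasSubprocess", "A"),
       ("A", "smart:hasSubprocess", "B"),
       ("B", "smart:hasSubprocess", "top down search")]
print(shorten(log, keep={"root", "top down search"}))
# {('root', 'smart:hasSubprocess', 'top down search')}
```

The `keep` set models the invariant stated above: connecting and mapping elements survive every rewrite, so the shortened graph still connects all mapping elements.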
Fig. 3 shows an extract of the construction layer regarding the example described in Sect. 3. It contains the information that the process root process has a subprocess called top down search.
5 Explanation-Aware Graph-Layouting
As explained, the construction process returns the informational content of the explanation, which must be externalized. We decided to use semantic networks for three reasons. First, semantic networks are an understandable alternative to textual information, enabling fast recognition of qualitative connections. Second, RDF-Graphs can be easily transformed into semantic networks. Third, they enable various interaction possibilities and hence the continuation of the explanatory dialog.
⁷ http://wordnet.princeton.edu/
Figure 3. Extract of the construction layer
In linguistics, an entity of the real world is characterized by three components, namely a label, connections to other entities, and a complex pattern of perceptual origin. The label is used to communicate the entity in natural language, and in many cases the entity is perceived visually. As described above, the construction layer contains only the informational content of the explanation and thus includes labels but no visual information. Images, for instance, make sense in a semantic network but not in a purely textual explanation. For that reason, the construction layer includes class relations that can be used in texts to characterize instances more precisely, for example, top down search process. Regarding semantic networks it is possible to use both labels and symbols to characterize an entity. If an instance cannot be associated with an individual symbol, it may be possible to characterize the instance with a class symbol, for example, a gear symbol representing the class owls:Process. The transformation of the construction layer is illustrated in Fig. 4. The figure shows that the root process smart:Instance1 of Fig. 3 is transformed into a node of the semantic network which is the source of an edge and which has a label and a symbol. It should be noted that mapping and connecting elements of the interpretation layer correspond to mapping and connecting nodes in the externalization layer. The connecting node of a subgraph is still an articulation point where all paths starting from the mapping nodes meet.
Figure 4. Extract of the externalization layer
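The label-and-symbol idea can be sketched as a small mapping step. The symbol tables and the function below are illustrative assumptions for this sketch, not the Koios++ API: an individual symbol wins if one exists, otherwise the class symbol (e.g. a gear for owls:Process) is used.

```python
# Sketch of the externalization idea: each resource of the content layer
# becomes a network node carrying a label and a symbol. Symbol file names
# and the precedence rule are illustrative assumptions.

CLASS_SYMBOLS = {"owls:Process": "gear.png"}
INSTANCE_SYMBOLS = {}  # individual symbols would take precedence here

def to_node(resource, label, rdf_class):
    """Turn one RDF resource into a semantic network node."""
    symbol = INSTANCE_SYMBOLS.get(resource, CLASS_SYMBOLS.get(rdf_class))
    return {"id": resource, "label": label, "symbol": symbol}

node = to_node("smart:Instance1", "root process", "owls:Process")
print(node["symbol"])  # gear.png
```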
The externalization layer describes the information paradigm that is used to communicate the explanation content. It does not provide information about the design of the semantic network nodes or the location of nodes on the drawing area (graph layout). This information is added in the presentation process. The layout of the semantic network influences the progress of the dialog and the understandability of the explanation itself. In the context of a keyword-based search, there are four requirements for layouting the intuitive explanation. First, it must be clear which keywords are mapped to elements of the log. Second, it must also be clear what the root or connecting element is. Third, the semantic network must not contain overlapping edges. Finally, it must be possible to fade in supporting nodes without violating the third requirement. Fig. 5 visualizes a rendered intuitive explanation. The mapping elements are located at the very top of the chart and are arranged at random. The connecting element can be found at the bottom center of the chart. The other nodes are positioned in such a way that no node is directly under or above another node. The remaining space can be used to fade in supporting nodes under the base nodes without hiding information. Here, mouse-over events can be used. In Fig. 5 the node labeled unsure result may be extended with the information corresponding field in the verifier is colored blue. Furthermore, mouse-click events on nodes represent a suitable means to continue the explanatory dialog. Currently, this kind of event centers the corresponding node in the explanation panel, showing a selection of connections to other nodes. This represents some kind of conceptual explanation, giving an answer to What is the meaning of...
Figure 5. Visualization with explanation-aware layout
For realizing the described layout we divide the explanation panel into a matrix of cells of equal size and assign each node exactly one cell, similar to the Java CellLayout. After the assignment a check is performed whether there are overlapping edges. If so, the procedure starts again with a new random order of the mapping elements. After a certain number of iterations the entire process stops and the last assignment is used for layouting the nodes. The number of cells in both horizontal and vertical direction depends on the number of paths between mapping and connecting nodes and the maximum number of nodes in a path. The entire procedure is described in Alg. 1. The input for the procedure is the connecting node r and the paths of the connecting subgraph p_1 to p_v. Here, the path length is equal to the number of nodes in the path and is denoted l_1 to l_v. The variables row and col represent the number of rows and columns of the matrix, n_ab is node number b in path number a, c_ab(x, y) represents a cell assignment for n_ab located at column x and row y, c_r(a, b) belongs to r, c(d, k) is any cell, and finally CA contains all cell assignments.
Algorithm 1 Assigning cells to network nodes
col := 0
CA := ∅
colIndex := 1
rowIndex := 1
anyIndex := 1
forward := true
col := max(l_1 . . . l_v)
row := ((col − 1) · v) + 1
rowRoot := row − 1
if (t mod 2) == 0 then
  colRoot := (cols − 1)/2
else
  colRoot := ((v + 1) · (row − 1))/2
end if
add c_r(colRoot, rowRoot) to CA
for i = 0 to (v − 1) do
  for all j such that 0 ≤ j < (row − 1) do
    if j < v then
      node := n_ji
    else
      node := null
    end if
    rowIndex := rowIndex + 1
    if forward is true then
      colIndex := colIndex + 1
      anyIndex := colIndex
    else
      anyIndex := anyIndex − 1
    end if
    if node != null and c(anyIndex, rowIndex) ∉ CA then
      add c_ji(anyIndex, rowIndex) to CA
    end if
    if (colIndex + 1) == rowRoot then
      forward := false
      colIndex := colIndex + 1
      exit from innermost loop
    end if
  end for
  rowIndex := 1
  anyIndex := colIndex + row
end for
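The retry loop around Alg. 1 can be sketched in a few lines of Python. This is only an illustration of the described idea (random order of mapping nodes, overlap test, bounded number of retries): the cell indexing is simplified and does not reproduce Alg. 1 step by step, and the straight-line crossing test is our own stand-in for the overlap check.

```python
# Simplified rendition of the retry loop: mapping nodes are shuffled along
# the top row, the connecting node is fixed near the bottom centre, inner
# path nodes fill the rows between, and the assignment is redrawn until no
# two edges cross or a retry budget is exhausted. Illustrative only.
import random
from itertools import combinations

def segments_cross(a, b, c, d):
    """True if segments a-b and c-d properly cross (shared endpoints allowed)."""
    def ccw(p, q, r):
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    if {a, b} & {c, d}:
        return False
    return ccw(a, b, c) * ccw(a, b, d) < 0 and ccw(c, d, a) * ccw(c, d, b) < 0

def layout(paths, max_tries=100):
    """paths: node paths, each running from a mapping node to the shared root."""
    root = paths[0][-1]
    rows = max(len(p) for p in paths)
    for _ in range(max_tries):
        order = list(range(len(paths)))
        random.shuffle(order)                     # new random order of mapping nodes
        pos = {root: (len(paths) - 1, rows - 1)}  # root near the bottom centre
        for col, i in enumerate(order):
            for row, node in enumerate(paths[i][:-1]):
                pos[node] = (2 * col + row % 2, row)  # avoid stacking nodes
        edges = [(pos[p[j]], pos[p[j + 1]]) for p in paths for j in range(len(p) - 1)]
        if not any(segments_cross(*e, *f) for e, f in combinations(edges, 2)):
            return pos                            # crossing-free assignment found
    return pos                                    # otherwise: last attempt

paths = [["analyser", "root process", "top down search"],
         ["recipient", "top down search"],
         ["uncertain result", "top down search"]]
coords = layout(paths)
print(len(coords))  # one cell per distinct node
```

The bounded `max_tries` mirrors the statement above that after a certain number of iterations the last assignment is simply accepted.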
The information about the layout, cells and corresponding nodes is the content of the presentation layer. That means, in the end we have a detailed description of the intuitive explanation that can be rendered by the explanation renderer that is part of the user interface. At this point we do not provide further details about the presentation layer, for example, fonts of the labels or the layout of the nodes, because this would exceed the scope of this work.
6 Conclusion and Outlook
In this paper, we presented the generic explanation component Koios++, which realizes an abstract approach for intuitive explanation generation based on Semantic Technologies. In addition, we described a representative explanation problem in the information extraction system smartFIX and illustrated how intuitive explanations can be used to justify extraction results in order to make the system more transparent for users. Currently, intuitive explanations are visualized as semantic networks, where a special layouting algorithm is used to arrange network nodes clearly and to enable a suitable start for an explanatory dialog; for example, a mouse-over triggers the fade-in of supporting network nodes.
In a future version of smartFIX the explanation component will not only be able to justify extraction results but also give practical hints on how to avoid low-quality extraction results. In addition, we will provide further forms of explanation externalization, for instance, semantic networks combined with text.
ACKNOWLEDGEMENTS
This work was funded by the German Federal Ministry of Educa-
tion and Research (BMBF) in the EMERGENT project under grant
number 01IC10S01.
REFERENCES
[1] G. Du, M. Richter, and G. Ruhe, ‘An explanation oriented dialogue approach and its application to wicked planning problems’, Computers and Artificial Intelligence, 25(2-3), (2006).
[2] B. Forcher, S. Agne, A. Dengel, M. Gillmann, and T. Roth-Berghofer, ‘Towards understandable explanations for document analysis systems’, in Proceedings of the 10th IAPR International Workshop on Document Analysis Systems, (2012).
[3] B. Forcher, M. Sintek, T. Roth-Berghofer, and A. Dengel, ‘Explanation-aware system design of the semantic search engine KOIOS’, in Proceedings of the ECAI-10 Workshop on Explanation-aware Computing (ExaCt 2010), (2010).
[4] Andreas Fordan, ‘Constraint solving over OCR graphs’, in Proceedings of the Applications of Prolog 14th International Conference on Web Knowledge Management and Decision Support, INAP’01, pp. 205–216, Berlin, Heidelberg, (2003). Springer-Verlag.
[5] B. Klein, A. Dengel, and A. Fordan, ‘smartFIX: An adaptive system for document analysis and understanding’, in Reading and Learning - Adaptive Content Recognition, eds., Andreas Dengel, Markus Junker, and A. Weisbecker, 166–186, Springer, (2004). LNCS 2956.
[6] M. Liwicki, B. Forcher, P. Jaeger, and A. Dengel, ‘Koios++: A query-answering system for handwritten input’, in Proceedings of the 10th IAPR International Workshop on Document Analysis Systems, (2012).
[7] Thomas R. Roth-Berghofer and Michael M. Richter, ‘On explanation’, Künstliche Intelligenz, 22(2), 5–7, (May 2008).
[8] F. Schulz, M. Ebbecke, M. Gillmann, B. Adrian, S. Agne, and A. Dengel, ‘Seizing the treasure: Transferring layout knowledge in invoice analysis’, in ICDAR-09, July 26-29, Barcelona, Spain, pp. 848–852. IEEE, Heidelberg, (2009).
[9] B. Seidler, M. Ebbecke, and M. Gillmann, ‘smartFIX Statistics - Towards systematic document analysis performance evaluation and optimization’, in Proceedings of the 9th IAPR International Workshop on Document Analysis Systems (DAS), Boston, MA, USA, (2010).
[10] W. R. Swartout and S. W. Smoliar, ‘Explanation: A source of guidance for knowledge representation’, in Knowledge Representation and Organization in Machine Learning, ed., K. Morik, 1–16, Springer, Berlin, Heidelberg, (1989).
[11] William R. Swartout, Cécile Paris, and Johanna D. Moore, ‘Explanations in knowledge systems: Design for explainable expert systems’, IEEE Expert, 6(3), 58–64, (1991).
[12] Michael R. Wick and William B. Thompson, ‘Reconstructive expert system explanation’, Artif. Intell., 54(1-2), 33–70, (1992).
[13] Patricia Wright and Fraser Reid, ‘Written information: Some alternatives to prose for expressing the outcomes of complex contingencies’, Journal of Applied Psychology, 57(2), 160–166, (1973).
ResearchGate has not been able to resolve any citations for this publication.
Article
Full-text available
This paper deals with the transfer of knowledge on invoice document layout and extraction strategies. This knowledge has been automatically generated by self-teaching mechanisms of the invoice analysis software smartFIX over several years of operation. We present results of analyzing this "treasure" of knowledge and putting it to use in smartFIX systems of new users. The evaluation shows that this transfer of knowledge using state-of-the-art techniques in transfer learning achieves significantly higher initial recognition rates than the unaugmented system, de- livering instant economic advantages by reducing accountant personnel workload.
Article
Full-text available
In this paper we propose KOIOS++, which automatically processes natural language queries provided by handwritten input. The system integrates several recent achievements in the area of handwriting recognition, natural language processing, information retrieval, and human computer interaction. It uses a knowledge base described by the resource description framework (RDF). Our generic approach first generates a lexicon as background information for the handwritten text recognition. After recognizing a handwritten query, several output hypotheses are sent to a natural language processing system in order to generate a structured query (SPARQL query). Subsequently, the query is applied to the given knowledge base and a result graph visualizes the retrieved information. At all stages, the user can easily adjust the intermediate results if there is any undesired outcome. The system is implemented as a web-service and therefore works for handwritten input on digital paper as well as on input on Pen-enabled interactive surfaces. Furthermore, we build on the generic RDF-representation of semantic knowledge which is also used by the linked open data (LOD) initiative. As such, our system works well in various scenarios. We have implemented prototypes for querying company knowledge bases, the DBPedia1, the DBLP computer science bibliography2, and a knowledge base of the DAS 2012.
Conference Paper
smartFIX is a product portfolio for knowledge-based extraction of data from any document format. The system automatically determines the document type and extracts all relevant data for the respective business process. Data that is recognized unreliably is forwarded to a verification workplace for manual checking. In general, users have no difficulty interpreting the document data and wonder why the system needs additional input. For that reason, we implemented an explanation component that justifies extraction results, thereby increasing users' confidence. The component uses a semantic log, making it possible to provide understandable explanations. We illustrate the benefits of this kind of technology, in contrast to the current smartFIX Log Viewer, by means of a preliminary user experiment.
Chapter
In constructing an expert system, there are usually several ways to represent a given piece of knowledge regardless of the knowledge representation formalism used. Initially, all of them may appear to be equivalent, but as the system evolves, it often becomes apparent that some are better than others, leading to the need to revise representations. Such revisions can be very time-consuming and prone to error. In this paper, we argue that the additional constraints imposed by the addition of an explanation facility can guide the creation of a knowledge base in a manner that reduces the need to subsequently re-structure the knowledge base as the system's functionality increases. We describe criteria that may be applied after the knowledge base is constructed to reveal potential weaknesses as well as those that may be employed during knowledge base construction. Finally, we briefly describe an expert system shell we have constructed that embodies these guiding principles, the Explainable Expert Systems framework.
Conference Paper
The provision of explanations in knowledge-based systems has a long tradition in computer science. Regarding expert systems, several interesting contributions have been proposed, but an overall method for explanation generation is missing. Furthermore, various approaches have never been realised or have never been followed up on with respect to current technologies. Explanation-Aware Software Design (EASD) aims at making software systems smarter in interactions with their users. The long-term goal is to develop methods and tools for engineering and improving such capabilities. In this paper we present the idea of EASD, including an abstract model for explanation generation. We describe in detail the realisation of that approach for the semantic search engine KOIOS using Semantic Web technology.
Article
68 adults solved problems using information written either as (a) bureaucratic-style prose, (b) a flow chart or algorithm, (c) a list of short sentences, or (d) a two-dimensional table. Prose was always slower to use and more error-prone than the other versions, but error rates on non-prose formats were affected by problem difficulty. Easier problems resulted in no differential error rates, although the table was used most rapidly; for harder problems, the algorithm gave the fewest errors. Differences in retention strategies appeared when subjects worked from memory: performance with prose and short sentences continued to improve over trials, whereas performance with the algorithm and table deteriorated. It is concluded that the optimal format for written information depends on the conditions of use.
Conference Paper
Before buying a document analysis system, companies typically perform an intensive evaluation project in which several candidates are invited to process a test set of selected representative documents. Moreover, after a system has been purchased, it should continuously be optimized with regard to the given customer-specific document appearance. In this paper we discuss a set of metrics for measuring the performance of common document analysis systems, as a basis for comparing systems on the one hand, and for pairwise comparison of the performance of different configurations of one system as the basis of system optimization on the other. Finally, we present the smartFIX Statistics tool that is used daily by our customers to tune their smartFIX document analysis system towards higher economic efficiency and thus serves as proof of concept for the presented approach.
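The abstract does not list its metrics explicitly, but field-level precision, recall, and F1 are the standard way to score extraction runs of this kind. The sketch below, with invented field names and values, shows one plausible scoring scheme under that assumption.

```python
# Hedged sketch: common field-level metrics for a document analysis run,
# comparing extracted values against ground truth. The field names and
# values are invented; the paper may use different or additional metrics.

def field_scores(truth, extracted):
    correct = sum(1 for k, v in extracted.items() if truth.get(k) == v)
    precision = correct / len(extracted) if extracted else 0.0
    recall = correct / len(truth) if truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

truth = {"invoice_no": "4711", "amount": "120.50", "date": "2012-05-01"}
extracted = {"invoice_no": "4711", "amount": "120.50", "date": "2012-05-02"}
p, r, f1 = field_scores(truth, extracted)   # 2 of 3 fields correct
```

Scores like these can be computed per configuration, which supports exactly the pairwise comparison of system configurations the abstract describes.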
Conference Paper
The internet is certainly a widespread platform for information interchange today, and the semantic web actually seems to be becoming more and more real. However, day-to-day work in companies still necessitates the laborious manual processing of huge amounts of printed documents. This article presents the system smartFIX, a document analysis and understanding system developed by the DFKI spin-off insiders. During the research project "adaptive Read", funded by the German ministry for research, BMBF, smartFIX was fundamentally developed to a higher maturity level, with a focus on adaptivity. The system is able to extract information from documents ranging from fixed-format forms to unstructured letters of many formats. Apart from the architecture, the main components, and the system characteristics, we also show some results from the application of smartFIX to representative samples of medical bills and prescriptions.
Article
Existing explanation facilities are typically far more appropriate for knowledge engineers engaged in system maintenance than for end-users of the system. This is because the explanation is little more than a trace of the detailed problem-solving steps. An alternative approach recognizes that an effective explanation often needs to substantially reorganize the actual line of reasoning and bring to bear additional information to support the result. Explanation itself becomes a complex problem-solving process that depends not only on the actual line of reasoning, but also on additional knowledge of the domain. This paper presents a new computational model of explanation and argues that it results in significant improvements over traditional approaches.