Towards an Ontology-Driven Adaptive Dialogue Framework
Georgios Meditskos, Information Technologies Institute, CERTH, Greece
Stamatia Dasiopoulou, Department of Information and Communication Technologies, UPF, Spain
Louisa Pragst, Institute of Communications Engineering, Ulm University, Germany
Stefan Ultes, Department of Engineering, University of Cambridge, UK
Stefanos Vrochidis, Information Technologies Institute, CERTH, Greece
Ioannis Kompatsiaris, Information Technologies Institute, CERTH, Greece
Leo Wanner, Department of Information and Communication Technologies, UPF, Spain
ABSTRACT

In this paper, we describe the principles and technologies that underpin the development of an adaptive dialogue manager framework, tailored to carrying out human-agent conversations in a natural, robust and flexible manner. Our research focus is twofold. First, the investigation of dialogue strategies that can handle dynamically created user and system actions, while still enabling the agent to adapt its actions to various and possibly changing contexts. Second, the utilisation of rich semantic annotations for capturing background knowledge, as well as conversation topics and semantics of user utterances extracted through language analysis. The resulting annotations comprise the situational descriptions upon which reasoning takes place to recognise the conversation context and compile appropriate responses.
Keywords: Dialogue systems, language analysis, question answering
1. INTRODUCTION

A key prerequisite in dialogue systems is to afford effective strategies for tailoring system behaviour to user actions and ensuring meaningful and coherent interactions. Current dialogue managers (e.g. [19]), however, often restrict
their scope to predefined sets of possible user and system
actions, severely undermining thereby flexibility and robust-
ness. The latter depend largely on bridging the gap between
the users’ perception of the domain of discourse and the
way domain knowledge is captured. Towards this direction,
there have been important advances recently in semantic
search and Question Answering (QA) over structured data
[13], exploiting ontological relationships to understand and
disambiguate a query. However, most of the existing ap-
proaches consider syntactic relations (e.g. subject and ob-
ject dependencies) rather than semantic ones, thus capturing
only partially the underlying language semantics.
In this work, extending the proposal presented in [17], we
combine advanced techniques in the fields of language anal-
ysis (LA), dialogue management, knowledge representation
and reasoning to support an adaptive dialogue flow and re-
act flexibly to user input without restricting the wording or
the way questions are formulated. More specifically:
- We delineate a semantic language analysis framework that allows us to abstract away from syntactic variations and formalise user utterances in OWL.
- We present a context-aware query answering algorithm that combines OWL 2 ontologies and rules to feed the dialogue agent with ongoing situational information.
- We introduce a dialogue management approach that handles dynamically created user and system actions, utilising general dialogue acts combined with ontology semantics to determine the system's behaviour.
The rest of the paper is structured as follows: Section 2
presents related work in the domains of QA, LA, and DM.
Section 3 describes the architecture of the framework, elabo-
rating on the provided functionality and component interac-
tions. Section 4 presents an example use case, while Section
5 concludes our work.
2. RELATED WORK

2.1 Capturing Language Semantics
Several works have investigated knowledge extraction from
natural language text and its capturing into Semantic Web
compliant representations using deep parsing to abstract
away from syntactic variations, predicate-argument resources
(e.g. FrameNet) for semantic role labelling, and respective mapping rules for their formalisation [16, 2, 18]. The idiosyncrasies of deep parsing approaches, combined with the non-trivial decisions involved in re-engineering data shaped by linguistic rather than ontological considerations, carry over into the resulting ontological representations; for instance, in [18], only verbal events are
considered, while the use of blank nodes in [2] hinders sub-
sequent reasoning tasks.
In the context of DM and QA, though, user utterance semantics tend to be considered in a less comprehensive manner. DM systems often rely on restricted, domain-tailored lexical-to-semantic bindings in order to contextualise and interpret user utterances (e.g. [19, 14]); in contrast, ontology-
based question answering approaches tend to allow overall
for a greater flexibility, by employing dependency parsing
in order to transform user queries into some intermediate
structured representation [13]; however, as the deployed de-
pendency parsers address primarily syntactic rather than
semantic dependencies, the resulting representations do not
effectively capture the wealth of user query semantics (e.g.
n-ary relations, events and participant roles).
2.2 Ontology-based Question Answering
Several approaches have been proposed in the literature
[13] to enrich QA with ontologies. PowerAqua [12] allows
users to choose an ontology and then ask queries relevant
to the vocabulary. The results are transformed into triples,
which are further annotated with ontology resources. The
triples are translated into logical queries that retrieve an-
swers from the knowledge sources. NLP-Reduce [11] pro-
cesses queries as bags of words, employing stemming and
synonym expansion. It attempts to match the parsed ques-
tion words to the synonym-enhanced triples stored in the
lexicon generated from a knowledge base and expanded with
WordNet synonyms, generating SPARQL statements for the
matches. FREyA [9] is an interactive Natural Language In-
terface for querying ontologies. It combines syntactic pars-
ing with ontology-based lookup in an attempt to answer
questions. In [10] RDF ontologies are used to populate a KB
with product descriptions, while SPARQL queries are gen-
erated taking into account domain and range restrictions.
Other systems, such as [1, 20], follow a different approach.
Instead of generating SPARQL queries, QA is reduced to a
subgraph matching problem. Either way, the focus is mainly
put on retrieving answers, without supporting further in-
teraction with the users. Interactivity is often limited to
solving disambiguation problems (e.g. in FREyA) and re-
questing clarifications from users. The possibility to adapt
the dialogue strategy to the user is usually not given.
2.3 Dialogue Management
There have been efforts to separate the domain model of
the DM, and thereby the available system and user actions,
from the dialogue flow control. The RavenClaw DM [5]
consists of a Dialog Task Specification layer, which models
domain and domain-specific dialogue logic, and a domain-
independent Dialog Engine. The Dialog Task Specification Layer is realised as a hierarchical structure of tasks to be accomplished in the dialogue. The Dialog Engine incorporates
various domain independent conversational strategies, such
as turn-taking behaviour or help and repeat actions.
Nothdurft et al. [15] proposed an architecture, in which
a DM and a reasoner worked together to provide explana-
tions for the system’s proposed course of action and thereby
sustain the user’s trust. By comparing a Finite State Ma-
chine with a decision tree resulting from a POMDP, po-
tential points of distrust are identified and the DM inserts
an explanation for the system’s proposal. In [14], the au-
thors use OWL ontologies to model information regarding
the digital TV subscription domain. However, no further
details are provided regarding the way ontology reasoning is
used to support the dialogue management tasks.
3. FRAMEWORK ARCHITECTURE

The proposed framework, which aims to address the described challenges, is structured around three pillars:
Semantic language analysis: Semantic analysis of the
natural language user input to extract and formalise the
underlying semantics (in terms of the pertinent entities and
their interrelations) as ontological representations so that
the user input can be interpreted against the KB.
Question answering and feedback: Semantic process-
ing of the LA results to identify conversation topics and to
provide a context-aware representation of the situation that
drives the query answering and feedback tasks.
Dialogue management: Selection of an appropriate
system reaction considering the dialogue state as well as
the feedback of QA. This decision process can be further
adapted, e.g. to the user’s emotion or culture.
Figure 1: Adaptive DM framework architecture.
Figure 1 illustrates the logical architecture, while the tasks
involved in each pillar are described in the following.
3.1 Semantic Language Analysis
In order to capture and formalise the natural language
user utterances in OWL, the proposed semantic language
analysis framework uses shallow semantic parsing, FrameNet-based role labelling and the design principles underpinning the Description and Situation (DnS) pattern of DUL (DOLCE+DnS Ultralite).
3.1.1 FrameNet-based structure extraction
Advancing beyond the current DM and QA practices of
syntactic dependency-based analysis, the user input is anal-
ysed in terms of semantic dependencies, through the use of
graph transducers [3] that allow its encoding in predicate-
argument structures that abstract away from language-specific idiosyncrasies. The extracted predicate-argument structures are subsequently further enhanced with frame and corresponding frame element annotations using a frame semantics parser. Frames are linguistically-grounded conceptual structures that describe particular types of relational contexts along with the semantic roles of the pertinent entities. For example, the Ingestion frame describes a situation involving an Ingestor and some Ingestibles, and is evoked by
words such as “drink”, “nibble”, “eat”, “devour”, etc.; the
roles are called frame elements and the frame-evoking words
lexical units (LUs). For example, in the sentence “Elif drinks
coffee”, “drinks” is the lexical unit that evokes the Ingestion
frame, while “Elif” and “coffee” would be annotated as the
frame elements Ingestor and Ingestibles respectively.
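For illustration, this annotation step can be sketched with a toy lexicon; the lexicon, role table and `annotate` helper below are illustrative assumptions of the sketch, not components of the framework, which uses a trained frame-semantics parser:

```python
# Toy lexicon: lexical unit -> frame it evokes (illustrative).
FRAME_LEXICON = {
    "drink": "Ingestion", "eat": "Ingestion", "devour": "Ingestion",
}

# Toy role assignment: frame -> roles of the (subject, object) arguments.
FRAME_ROLES = {"Ingestion": ("Ingestor", "Ingestibles")}

def annotate(subject, verb, obj):
    """Map a predicate-argument triple to a frame occurrence."""
    lu = verb.rstrip("s")            # crude lemmatisation, sketch only
    frame = FRAME_LEXICON.get(lu)
    if frame is None:
        return None                  # no frame evoked by this lexical unit
    subj_role, obj_role = FRAME_ROLES[frame]
    return {"frame": frame, "lexical_unit": lu,
            subj_role: subject, obj_role: obj}

print(annotate("Elif", "drinks", "coffee"))
# {'frame': 'Ingestion', 'lexical_unit': 'drink', 'Ingestor': 'Elif', 'Ingestibles': 'coffee'}
```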
3.1.2 DnS-based ontological translation
To formalise the extracted FrameNet-based structures and
map them into corresponding ontological representations,
frame patterns are considered as specialisations of the DnS
pattern: frames are interpreted as dul:Descriptions, frame
elements as dul:Concepts that determine how the involved
entities should be interpreted, while the extracted frame oc-
currences correspond to dul:Situations. In addition, each
frame-based situation is annotated with its corresponding
dialogue acts, e.g. request and statement.
To account for the disparate semantics that the various frame categories admit, we distinguish between frames
that denote event situations (e.g. Ingestion, Grooming),
frames that are related to attributes (e.g. Age, Usefulness),
and frames that relate to objects (e.g. Containers, Food).
More specifically, event frame situations specify the class
EventFrameSituation defined as follows:
EventFrameSituation SubClassOf (
dul:Situation and
dul:satisfies some EventFrameDescription )
EventFrameDescription SubClassOf (
dul:Description and
dul:defines some InvolvedEvent )
For each extracted event frame occurrence, an individ-
ual of the respective frame situation class is introduced and
linked with the individuals corresponding to the participat-
ing entities and the lexical unit that evoked the frame; the
latter is further typed as a subclass of dul:Event. Thus,
for example, the sentence “Elif drinks coffee” would result
among others in the assertions:
:IngestionFrame rdfs:subClassOf dul:Situation .
:ingestion1 rdf:type :IngestionFrame ;
    dul:satisfies :description1 .
:description1 rdf:type dul:Description ;
    dul:defines [ dul:classifies :drink1 ] .
:Drink rdfs:subClassOf dul:Event .
:drink1 rdf:type :Drink .
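The translation step above can be pictured as template instantiation over an extracted frame occurrence. The following is a minimal sketch assuming a dictionary representation of the occurrence; the `frame_to_turtle` helper and its numbering scheme are illustrative, and a real pipeline would also type the participating entities and attach the dialogue act:

```python
def frame_to_turtle(occ, n=1):
    """Emit DnS-style Turtle triples for one extracted frame occurrence.

    occ is a dict such as {"frame": "Ingestion", "event": "drink1"};
    n numbers the generated individuals (sketch only).
    """
    frame, event = occ["frame"], occ["event"]
    return "\n".join([
        f":{frame}Frame rdfs:subClassOf dul:Situation .",
        f":{frame.lower()}{n} rdf:type :{frame}Frame ;",
        f"    dul:satisfies :description{n} .",
        f":description{n} rdf:type dul:Description ;",
        f"    dul:defines [ dul:classifies :{event} ] .",
    ])

print(frame_to_turtle({"frame": "Ingestion", "event": "drink1"}))
```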
Likewise, the classes AttributeFrameSituation ⊑ FrameSituation and AttributeFrameDescription ⊑ FrameDescription are introduced to capture attributive frames, while respective specialisations allow distinguishing between relative and absolute attribute descriptions. For example, absolute attribute descriptions specialise the following definition:
AbsoluteAttributeDescription SubClassOf (
dul:Description and
dul:defines some InvolvedAttribute and
dul:defines some dul:Region and
dul:defines some dul:UnitType )
Lacking the descriptive contexts pertinent to event and
attribute frames, frames related to objects are interpreted
as specialisations of the class dul:Entity.
3.2 Domain Modelling and Reasoning
A number of ontologies have been developed to support
context abstraction, reasoning and feedback. These include
a) Ontologies that capture the various types of background
knowledge involved (user profile, medical information, etc.)
in compiling appropriate system responses, and b) Mapping
and interpretation models that abstract incoming informa-
tion into topics, enabling the derivation of situations of in-
terest, answers and feedback.
Regarding the modelling of domain information, the frame-
work does not impose any restriction on the vocabularies
used to capture background knowledge. As such, existing
foundational ontologies (e.g. DUL), design patterns and vo-
cabularies (e.g. MeSH) can be used to capture knowledge.
The remainder of this section describes the mapping and interpretation models and rules that enable the recognition of conversation contexts and their further coupling
with background knowledge. The objective is to support the
derivation of abstract conceptualisations about conversation
topics that designate the reasoning task to be triggered to
generate meaningful responses to user input.
3.2.1 Context ontology
The context ontology defines the semantics of conversa-
tion topics. It is used to abstract detected frames into higher
level situations, capitalising on the OWL semantics for defin-
ing multi-level concept hierarchies. The ontology defines a
hierarchy of topics extending the dul:Situation concept.
The current implementation supports two contexts: Biographical context, which captures information regarding biographical attributes, such as age, birthdate, etc., and Behaviour context, which captures user routines and preferences (e.g. daily water intake). As an example, we present the semantics of the complex class description that recognises routine conversation contexts based on the recognition of routine-related events by LA.
RoutineContext EquivalentTo dul:Situation and
(dul:satisfies some (dul:defines some
(dul:classifies some ent:RoutineEvent)))
This abstract context can be further specialised to capture
concrete routine contexts. For example, the DrinkingContext,
which captures conversation context relevant to drinking
routines, is defined with the following two axioms:
DrinkingContext EquivalentTo core:IngestionFrame
and (dul:includesEvent some ent:Drink) .
DrinkingContext SubClassOf
fluid some owl:Thing, period some :Period .
(The examples use the Manchester OWL Syntax.)
Each frame relevant to ingestion is classified as drinking,
provided that the frame also includes a routine-related event
(in this example, drinking). In addition, the drinking context is associated with property assertions that characterise the drinking element (fluid) and period (e.g. daily, weekly).
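Procedurally, the class restriction behind RoutineContext amounts to a chained existential check over the property graph. Below is a minimal sketch assuming a dictionary-based graph; in deployment this check is delegated to an OWL 2 reasoner, and the individual and class names are illustrative:

```python
# Each individual maps to {property: [values]}; classes are plain strings.
GRAPH = {
    "ingestion1": {"rdf:type": ["dul:Situation"],
                   "dul:satisfies": ["description1"]},
    "description1": {"dul:defines": ["concept1"]},
    "concept1": {"dul:classifies": ["drink1"]},
    "drink1": {"rdf:type": ["ent:RoutineEvent"]},
}

def values(ind, prop):
    return GRAPH.get(ind, {}).get(prop, [])

def is_routine_context(ind):
    """dul:satisfies some (dul:defines some
       (dul:classifies some ent:RoutineEvent))"""
    return any(
        "ent:RoutineEvent" in values(e, "rdf:type")
        for d in values(ind, "dul:satisfies")
        for c in values(d, "dul:defines")
        for e in values(c, "dul:classifies")
    )

print(is_routine_context("ingestion1"))  # True
```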
3.2.2 Rule-based mapping
Despite the fact that OWL 2 ontologies support a rich set
of semantics, there are still certain expressive limitations.
For example, the native semantics of OWL 2 does not al-
low the dynamic generation of new individuals. In order
to overcome OWL 2 reasoning shortcomings, we follow a
hybrid context interpretation approach, complementing the
ontology-based mapping with rules. More precisely, we use
SPARQL construct graph patterns, enabling property value
propagation and instance generation. The core idea is to
associate each context concept with one or more SPARQL
rules that address specific mapping tasks, e.g. the mapping
of property values between frames and the context ontology.
As an example, we present the mapping rule we have de-
fined to populate the DrinkingContexts presented in Section
3.2.1 with information about the ingestible (e.g. the fluid).
CONSTRUCT { ?this :fluid ?fluid }
WHERE {
  ?this a :DrinkingContext ;
        dul:satisfies [ dul:defines ?c ] .
  ?c a core:Ingestibles.ingestion .
  ?c dul:classifies ?fluid .
}
Having mapped the frames into the contextual hierarchy,
the last step is to query the KB and retrieve the results. This
is achieved by a set of domain-dependent SPARQL queries
that are attached to context concepts. We present examples
of such SPARQL queries in Section 4.2.
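The attachment of mapping rules and retrieval queries to context concepts can be sketched as a simple registry; the rule bodies are abbreviated and all names are illustrative assumptions of the sketch:

```python
# Registry: context concept -> SPARQL CONSTRUCT rules to run once an
# individual has been classified under that concept (bodies abbreviated).
MAPPING_RULES = {
    ":DrinkingContext": [
        "CONSTRUCT { ?this :fluid ?fluid } WHERE { ... }",
    ],
}

# Registry: context concept -> SELECT queries used to retrieve answers.
RETRIEVAL_QUERIES = {
    ":DrinkingContext": [
        "SELECT ?val ?unit ?period ?stuff WHERE { ... }",
    ],
}

def rules_for(contexts):
    """Collect the mapping rules triggered by the recognised contexts."""
    return [r for c in contexts for r in MAPPING_RULES.get(c, [])]

print(rules_for([":DrinkingContext", ":BiographicalContext"]))
```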
3.3 Adaptive Dialogue Management
The task of the DM is to determine an appropriate system
reaction to the user input. This can be realised by a pol-
icy function mapping all combinations of user actions and
dialogue states to system actions. Such a mapping can be
either rule-based or learned from training data. However, in
both approaches a usual assumption is that the system and
user actions are known when defining the policy.
Questions arise if actions are created dynamically after the policy is defined: unknown system actions are never the target of any mapping, and unknown user actions cannot be matched, as the mapping is defined only over the existing user actions.
In this work we aim to address the following questions:
- How can user utterances be mapped to user actions with-
out the use of grammars?
- How can unknown user actions be correlated with existing
ones to estimate a good policy?
- How are unknown system actions generated?
- How can unknown system actions be evaluated in regard
to their appropriateness in a given situation?
In order to handle these tasks, we use generalisation. In-
stead of concretely defining all possible actions, we utilise
general dialogue acts [6, 8, 7] and the hierarchical structure
of the ontology to describe them in a more general manner.
More specifically, the LA provides the DM with a classification of the user utterance in terms of the dialogue acts request and statement; if applicable, this classification can be further refined to more specialised dialogue acts (e.g. confirm, affirm) by taking into account the dialogue history. In addition, the semantic content of the user utterance, which is
extracted by the LA, provides its topic. The hierarchy of
the ontology further helps to rank the user action: if a user action with the topic "medicine for headache" has not been anticipated, the super topic "medicine" can be used instead.
System actions are dynamically generated from the feed-
back of the QA module. Information that is missing to com-
plete a query leads to a request action, while the result of
a query is presented as inform action. Similar to the user
actions, system actions have a topic depending on their se-
mantic content. The hierarchy of the ontology can then be
used to rank unknown system actions.
The appropriateness of a system action is derived consid-
ering its general dialogue act as well as its topic (or a super-
class of it). This way, the policy for known actions can be
extrapolated to unknown actions by shared characteristics.
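The back-off over the topic hierarchy can be sketched as follows; the hierarchy, policy table and action names are illustrative assumptions, not the framework's actual policy:

```python
# Illustrative topic hierarchy: topic -> super topic.
TOPIC_PARENT = {
    "medicine for headache": "medicine",
    "medicine": "health",
}

# Illustrative policy over known (dialogue act, topic) pairs.
POLICY = {
    ("request", "medicine"): "inform_medication_plan",
    ("request", "health"): "inform_general_health",
}

def select_action(dialogue_act, topic):
    """Walk up the topic hierarchy until a policy entry matches."""
    while topic is not None:
        action = POLICY.get((dialogue_act, topic))
        if action:
            return action
        topic = TOPIC_PARENT.get(topic)     # back off to the super topic
    return "request_clarification"          # no ancestor covered: ask back

print(select_action("request", "medicine for headache"))
# inform_medication_plan
```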
4. USE CASE

Studies [4] show that among migrant communities, especially of Turkish origin, there is a low take-up of care services and
the provision system is insufficiently aligned with the needs,
e.g. comprehension problems relevant to the culture and
habits of the care recipient. In such cases, dialogue systems
can act as mediators between migrants and caregivers.
In the considered scenario, Elif is an elderly Turkish migrant in a retirement home in Germany who receives support from a German caregiver. Information about the habits and preferences of the care recipient is needed; however, there may be problems of comprehension concerning her culture and habits. The required information can be provided to
the dialogue system by Elif, and acquired by the profes-
sional caregiver through interaction with the system. We
demonstrate the functionality of our framework on the basis
of the following example dialogue:
user/caregiver: How much does Elif drink?
system: Elif drinks two glasses of water per day.
4.1 Extracting User Input Semantics
Applying the semantic language analysis described in Sec-
tion 3.1, the user input is mapped to the knowledge graph
depicted in Figure 2. The knowledge graph (DnS pattern)
contains structured information that describes (i) the dia-
logue act (request), (ii) the frame type (ingestion), (iii) the
ingestor (person), and (iv) the event (drink). Note that the
user has not provided information about the ingestible (e.g.
water, beverage) and the period (e.g. daily, weekly).
4.2 Context Recognition and Feedback
4.2.1 User behaviour modelling
Profile information about the drinking routines is cap-
tured through instantiations of the DnS pattern. We define
the Frequency and View concepts for modelling the situation
and description, respectively. The view instance defines two
concepts for modelling the occurrence context and the event
type (Drink). The former encapsulates information about the period (e.g. daily), while the latter points to the entity type (e.g. Water). In addition, the occurrence context is associated with the frequency value, which is defined through an instance of muo:QualityValue. This way, we are able to capture information about the unit of measurement (e.g. glass). Figure 3 presents an example instantiation (DnS) that defines the glasses of water the user drinks per day.

Figure 2: An RDF graph that captures user input.

Figure 3: Modelling the user's drinking routine.
4.2.2 Context recognition
Based on the semantics of the drinking context described
in Section 3.2.1, the OWL 2 reasoning process classifies the
IngestionFrame_1Req instance of Figure 2 in the Drinking-
Context class, since all the class restrictions are satisfied.
However, the SPARQL rule described in Section 3.2.2 is not
able to fill in the fluid property of the drinking context,
since no relevant information is provided by the user.
4.2.3 Results and feedback
In order to answer questions about recognised contexts,
we associate each context with SPARQL queries. As such,
the DrinkingContext class is associated with the following
SPARQL query that retrieves the amount, type, unit and
period of the drinking routine (e.g. 2 glasses of water daily).
SELECT ?val ?unit ?period ?stuff
WHERE {
  ?dc a context:DrinkingContext ;
      uomvocab:qualityValue [ uomvocab:measuredIn ?unit ] ;
      context:fluid ?stuff ; context:period ?period ;
      dul:includesEvent ?eventType .
}

SELECT ?unit ?eventType ?val ?stuff ?period
WHERE {
  ?p a freq:Frequency ; cp:hasView ?v .
  ?v dul:defines [ dul:classifies ?eventType ] .
  ?v dul:defines [ cp:interprets ?stuff ;
      freq:period ?period ; freq:value [
        uomvocab:measuredIn ?unit ;
        uomvocab:qualityLiteralValue ?val ] ] .
}
However, in our example the context does not include any
fluid and period bindings, and therefore the WHERE graph
pattern is not matched. In such cases, we collect the un-
bound variables and try to find resources in the KB that sat-
isfy the semantic restrictions imposed by the ontology. For
example, the context:fluid property has as range the class
Fluid. The system collects all the instances that belong to
the class hierarchy of fluids and periods and selects the most
probable ones, based on information collected from past conversation. Assuming that the bindings ?stuff = water and ?period = daily are selected, the SPARQL query successfully retrieves the glasses of water the user drinks each day.
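This fallback for unbound variables can be sketched as frequency-based ranking over the instances admitted by the property's range; the instance lists, history and helper names are illustrative assumptions:

```python
from collections import Counter

# Illustrative KB view: property -> instances in its range class hierarchy.
RANGE_INSTANCES = {
    "context:fluid": ["water", "tea", "soft_drink"],
    "context:period": ["daily", "weekly"],
}

def bind_unbound(variables, history):
    """For each unbound variable, pick the range instance that was
    mentioned most often in past conversation."""
    seen = Counter(history)
    bindings = {}
    for var, prop in variables.items():
        candidates = RANGE_INSTANCES[prop]
        bindings[var] = max(candidates, key=lambda c: seen[c])
    return bindings

history = ["water", "daily", "water", "tea"]
print(bind_unbound({"?stuff": "context:fluid",
                    "?period": "context:period"}, history))
# {'?stuff': 'water', '?period': 'daily'}
```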
In addition, the DM has to be informed about the feedback needed regarding the drinking type and period, so as to provide an alternative response, if needed. This is achieved by
returning to the DM a partial DnS instantiation that cap-
tures missing information. The final response to the DM
contains both the variable bindings of the SPARQL query
and the missing information required.
:ResultContext rdfs:subClassOf dul:Situation .
:result1 rdf:type :ResultContext ;
    dul:satisfies :description .
:description rdf:type dul:Description ;
    dul:defines [ dul:classifies [ dul:hasDataValue "2" ] ] ;
    dul:defines [ dul:classifies [ muo:measuredIn :glass ] ] ;
    dul:defines [ dul:classifies :daily ] ;
    dul:defines [ dul:classifies :water ] .

:MissingContext rdfs:subClassOf dul:Situation .
:missing1 rdf:type :MissingContext ;
    dul:isSettingFor :Fluid, :Period .
4.3 DM Inference
The DM generates three possible system actions from the
feedback of the QA module: a request action for each miss-
ing information and an inform action for the result of the
query. Which of these actions is selected to be the system
response depends on the dialogue strategy and the dialogue
state, as elaborated in the following examples.
The decision of the DM is influenced by the dialogue his-
tory. If the caregiver and the system have been talking about
the significance of drinking enough water during the day just
before the example dialogue, this information supports the
conclusion of the QA module regarding the type of fluid and
period of time. Therefore, the inform action is selected. On
the other hand, if the caregiver has been talking about the dangers of drinking too many soft drinks per day, this renders the inferred fluid type (water) unlikely. In this case, the DM decides to request this missing information from
the caregiver. A further improvement of the cooperation
between DM and QA module can be achieved by integrat-
ing the dialogue history in the reasoning process of the QA.
The previous examples show DM decisions based on the
general dialogue acts of the available system actions. How-
ever, the topic can be of importance as well. Assuming that
the DM decided to schedule a request action, the topic of
the request actions can be utilised in order to decide which
one is better suited in the current situation. The dialogue
history contains as last user action a request with the topic
amount of fluid. The request with the topic kind of fluid is
closely related to that last user action, therefore the dialogue
strategy chooses it as next system action.
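The topic-based choice among scheduled request actions can be sketched as nearest-topic selection in the hierarchy; the hierarchy and the distance measure below are illustrative assumptions of the sketch:

```python
# Illustrative topic hierarchy: topic -> super topic (None = root).
TOPIC_PARENT = {"amount of fluid": "fluid", "kind of fluid": "fluid",
                "period": "time", "fluid": None, "time": None}

def ancestors(topic):
    chain = []
    while topic is not None:
        chain.append(topic)
        topic = TOPIC_PARENT.get(topic)
    return chain

def topic_distance(a, b):
    """Steps to the closest common ancestor; large when unrelated."""
    ca, cb = ancestors(a), ancestors(b)
    for i, t in enumerate(ca):
        if t in cb:
            return i + cb.index(t)
    return len(ca) + len(cb)

def choose_request(candidate_topics, last_user_topic):
    """Prefer the request whose topic is closest to the last user action."""
    return min(candidate_topics,
               key=lambda t: topic_distance(t, last_user_topic))

print(choose_request(["kind of fluid", "period"], "amount of fluid"))
# kind of fluid
```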
5. CONCLUSION

Designing a flexible dialogue system involves many non-trivial decisions. In this paper, we described an approach
to utilise an ontology-based QA module as domain model
of the DM and integrate it into the process of dialogue flow
control. Furthermore, a DnS-based pattern approach is used
to capture and leverage the semantics of the user utterances
with the underlying domain models. As a result, user and
system actions are dynamically generated and the dialogue
strategy does not depend on prior predefined actions.
As a future work, emphasis will be placed on extending
the disambiguation capabilities of LA, incorporating, in ad-
dition to the FN-based annotations, mappings against Ba-
belNet synsets, so as to cater for the development of a semi-
automated rule definition approach that will enable the dy-
namic translation of meta-patterns into queries. In addition,
the incorporation of non-verbal aspects, such as user emo-
tion, in the DM selection strategy will be investigated.
ACKNOWLEDGEMENTS

This work has been supported by the H2020-ICT-645012 project KRISTINA: A Knowledge-Based Information Agent with Social Competence and Human Interaction Capabilities.
REFERENCES

[1] N. Aggarwal and P. Buitelaar. A system description of natural language query over DBpedia. In Proc. of Interacting with Linked Data, pages 96–99, 2012.
[2] I. Augenstein, S. Padó, and S. Rudolph. Lodifier:
Generating linked data from unstructured text. In The
Semantic Web: Research and Applications - Extended
Semantic Web Conference, pages 210–224, 2012.
[3] M. Ballesteros, B. Bohnet, S. Mille, and L. Wanner.
Data-driven deep-syntactic dependency parsing.
Natural Language Engineering, pages 1–36, 2015.
[4] I. Bermejo, L. Hölzel, L. Kriston, and M. Härter. [Barriers in the attendance of health care interventions by immigrants]. Bundesgesundheitsblatt, Gesundheitsforschung, Gesundheitsschutz, 55(8):944–953, 2012.
[5] D. Bohus and A. I. Rudnicky. RavenClaw: Dialog management using hierarchical task decomposition and an expectation agenda. In Proc. of Eurospeech, 2003.
[6] H. Bunt. The DIT++ taxonomy for functional
dialogue markup. In AAMAS 2009 Workshop,
Towards a Standard Markup Language for Embodied
Dialogue Acts, pages 13–24, 2009.
[7] H. Bunt, J. Alexandersson, J.-W. Choe, A. C. Fang,
K. Hasida, V. Petukhova, A. Popescu-Belis, and D. R.
Traum. ISO 24617-2: A semantically-based standard for
dialogue annotation. In LREC, pages 430–437, 2012.
[8] M. G. Core and J. Allen. Coding dialogs with the DAMSL annotation scheme. In AAAI Fall Symposium on Communicative Action in Humans and Machines, pages 28–35, Boston, MA, 1997.
[9] D. Damljanovic, M. Agatonovic, and H. Cunningham.
FREyA: An interactive way of querying Linked Data
using natural language. In The Semantic Web: ESWC
2011 Workshops, pages 125–138. Springer, 2011.
[10] A. Hallili. Toward an Ontology-Based Chatbot
Endowed with Natural Language Processing and
Generation. 26th European Summer School in Logic,
Language & Information, Aug. 2014. Poster.
[11] E. Kaufmann, A. Bernstein, and L. Fischer.
NLP-Reduce: A “naive” but Domain-independent
Natural Language Interface for Querying Ontologies.
In Proc. of ESWC, 2007.
[12] V. Lopez, M. Fernández, E. Motta, and N. Stieler. PowerAqua: Supporting users in querying and exploring the Semantic Web. Semantic Web, 3(3):249–265, Aug. 2012.
[13] V. Lopez, V. Uren, M. Sabou, and E. Motta. Is question answering fit for the Semantic Web? A survey. Semantic Web, 2(2):125–155, Apr. 2011.
[14] D. Mouromtsev, L. Kovriguina, Y. Emelyanov,
D. Pavlov, and A. Shipilo. From spoken language to
ontology-driven dialogue management. In TSD, pages
542–550, 2015.
[15] F. Nothdurft, F. Richter, and W. Minker. Probabilistic
human-computer trust handling. In Proc. of the
Annual Meeting of the SIGDIAL, pages 51–59, 2014.
[16] A. G. Nuzzolese, A. Gangemi, and V. Presutti.
Gathering lexical linked data and knowledge patterns
from framenet. In 6th International Conference on
Knowledge Capture (K-CAP 2011), pages 41–48, 2011.
[17] L. Pragst, S. Ultes, M. Kraus, and W. Minker.
Adaptive dialogue management in the KRISTINA
project for multicultural health care applications.
SEMDIAL 2015 goDIAL, page 202, 2015.
[18] V. Presutti, F. Draicchio, and A. Gangemi. Knowledge
extraction based on discourse representation theory
and linguistic frames. In Knowledge Engineering and
Knowledge Management, Ireland, pages 114–129, 2012.
[19] S. Ultes and W. Minker. Managing adaptive spoken
dialogue for intelligent environments. Journal of
Ambient Intelligence and Smart Environments,
6(5):523–539, 2014.
[20] L. Zou, R. Huang, H. Wang, J. X. Yu, W. He, and D. Zhao. Natural language question answering over RDF: a graph data driven approach. In International Conference on Management of Data, pages 313–324, 2014.