ABSTRACT: This paper investigates the effects of semantic distance on the development of lexical entrainment. For this purpose, the authors developed a card game with three levels of semantic distance. The participants were asked to arrange the cards into a congruent sequential order. With increasing semantic distance, more words were needed to solve the task and a higher rate of hypernyms was used, demonstrating lexical entrainment. Additionally, the results showed that the participants reverted to de-entrained terms in a third stage of the conversation. Based on this, we examine what this finding might entail for existing theories of linguistic alignment.
Proceedings of SemDial 2013, Amsterdam, NL; 12/2013
ABSTRACT: Enabling collaborative work on multi-touch tables comes with many challenges for the design of tabletop systems. For example, multi-touch tables are not standardized, tabletop groupware systems are built for various purposes, and task activities are highly diverse; all of these constitute challenges for enabling natural collaborative human-computer interaction on multi-touch tables. While many studies have been conducted on individual problems, aggregate guidelines for designing an appropriate tabletop groupware system that can adapt to variable conditions are still under construction. In this paper we contribute some insights toward more general guidelines via an empirical study that sought to untangle the interrelated effects of ownership, individual collaborative strategies, and workspace usage.
ABSTRACT: This paper presents the research activities of the Collaborative Research Centre (CRC) 637 “Autonomous Cooperating Logistic Processes—A Paradigm Shift and its Limitations” at the University of Bremen. After motivating autonomous logistics as an answer to current trends in increasingly dynamic markets, we sketch the structure and aims of the interdisciplinary CRC. We present several interpretations of the central motive of autonomous control, pursued by sub-projects over the course of the first project period, and focus on an agent-based approach to autonomous logistics.
ABSTRACT: In this paper, we investigate how discourse context in the form of short-term memory can be exploited to automatically group consecutive strokes in digital freehand sketching. With this machine learning approach, no database of explicit object representations is used for template matching on a complete scene; instead, grouping decisions are based on limited spatio-temporal context. We employ two different classifier formalisms for this time series analysis task, namely Echo State Networks (ESNs) and Support Vector Machines (SVMs). ESNs are internal-state classifiers with inherent memory capabilities. For the conventional static SVM, short-term memory is supplied externally via fixed-length feature vector expansion. We compare the respective setup heuristics and conduct experiments with two exemplary problems. Promising results are achieved with both formalisms. Yet, our experiments indicate that using ESNs for variable-length memory tasks alleviates the risk of overfitting due to non-expressive features or improperly determined temporal embedding dimensions.
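The fixed-length feature vector expansion used for the static SVM can be sketched as a sliding window over the stroke sequence: each stroke's features are concatenated with those of its immediate predecessors, so a memoryless classifier sees a bounded slice of temporal context. The function below is an illustrative sketch, not the paper's implementation; the window size and the zero-padding of positions before the sequence start are assumptions.

```python
def expand_with_memory(features, window=3):
    """Concatenate each stroke's feature vector with those of the
    previous `window - 1` strokes, giving a static classifier a
    fixed-length short-term memory.  Positions before the start of
    the sequence are zero-padded.

    features: list of per-stroke feature vectors (equal length)
    returns:  list of expanded vectors of length len(vec) * window
    """
    if not features:
        return []
    d = len(features[0])
    # Prepend window-1 zero vectors so early strokes have full context.
    padded = [[0.0] * d for _ in range(window - 1)] + [list(f) for f in features]
    expanded = []
    for i in range(len(features)):
        row = []
        # Row i concatenates strokes i-window+1 .. i (oldest first).
        for k in range(window):
            row.extend(padded[i + k])
        expanded.append(row)
    return expanded


strokes = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
expanded = expand_with_memory(strokes, window=2)
```

The expanded vectors can then be fed to any static classifier; the cost is that the temporal embedding dimension (`window`) must be fixed in advance, which is exactly the limitation the abstract attributes to the SVM setup.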
Smart Graphics, 10th International Symposium on Smart Graphics, Banff, Canada, June 24-26, 2010, Proceedings; 01/2010
ABSTRACT: Endowing embodied conversational agents with personality affords more natural modalities for their interaction with human interlocutors. To bridge the personality gap between users and agents, we designed two minimal personalities for corresponding agents, i.e., an introverted and an extroverted agent. Each features a combination of different verbal and non-verbal behaviors. In this paper, we present an examination of the effects of the speaking and behavior styles of the two agents and explore the resulting design factors pertinent for spoken dialogue systems. The results indicate that users prefer the extroverted agent to the introverted one. The personality traits of the agents influence the users' preferences, dialogues, and behavior. Users are significantly more talkative with the extroverted agent. We also investigate the spontaneous speech disfluency of the dialogues and demonstrate that the extroverted behavior model reduces the users' speech disfluency. Furthermore, users having different mental models behave differently with the agents. The results and findings show that the minimal personalities of agents maximally influence the interlocutors' behaviors.
Proceedings of the 12th International Conference on Multimodal Interfaces / 7. International Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010, Beijing, China, November 8-12, 2010; 01/2010
ABSTRACT: Herein we describe the QuickWoZ system, a Wizard-of-Oz (WoZ) tool that allows for the remote control of the behavior of animated characters in a 3D environment. The complete scene, character, behaviors and sounds can be defined in simple XML documents, which are parsed at runtime, so that setting up an experiment can be done without programming expertise. Quick selection lists and buttons enable the wizard to easily control the agents' behavior and allow for fast reactions to the subjects' input. The system is tailored for experiments with embodied conversational agents (ECAs) featuring multimodal interaction and was designed as a rapid prototyping system for evaluating the impact of an agent's behavior on the user.
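Parsing a scene definition of this kind at runtime can be illustrated with a minimal sketch; the element and attribute names below (`scene`, `character`, `behavior`) are invented for illustration and are not the actual QuickWoZ schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical scene document: one character with two selectable behaviors,
# one of which plays a sound when triggered by the wizard.
SCENE_XML = """
<scene name="reception">
  <character id="agent1" model="max.mesh">
    <behavior name="greet" sound="hello.wav"/>
    <behavior name="nod"/>
  </character>
</scene>
"""

def load_scene(xml_text):
    """Parse a scene document into (scene name, {character id: behaviors}),
    where behaviors maps each behavior name to its optional sound file."""
    root = ET.fromstring(xml_text)
    characters = {}
    for char in root.findall("character"):
        behaviors = {b.get("name"): b.get("sound") for b in char.findall("behavior")}
        characters[char.get("id")] = behaviors
    return root.get("name"), characters


scene_name, characters = load_scene(SCENE_XML)
```

Because the document is plain data rather than code, an experimenter can add characters or behaviors by editing the XML alone, which is the point the abstract makes about setup without programming expertise.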
Proceedings of the 2010 International Conference on Intelligent User Interfaces, February 7-10, 2010, Hong Kong, China; 01/2010
ABSTRACT: The present work and demonstration system aim at finding an efficient and cost-effective human computation method to expand the linguistic capabilities of interactive games that need to respond appropriately to the language-based input of their users. As a showcase scenario for the experiments conducted, we took interactive fiction applications and examined how the human computation game design and scoring approaches affect the quality of the data gathered. The ensuing analysis of the data confirms our initial hypothesis that game approaches can provide both the qualitative and quantitative data needed for the corresponding interactive games.
Proceedings of the ACM SIGKDD Workshop on Human Computation, Paris, France, June 28, 2009; 01/2009
ABSTRACT: Designing user interfaces for ubiquitous computing applications is a challenging task. In this paper we discuss how to build intelligent interfaces. The foundations are usability principles that are valid at very general levels. We present a number of established methods for the design process that can help to meet the requirements of these principles. Participatory and iterative, so-called human-centered approaches are particularly important for interfaces in ubiquitous computing. The question of how to make interactional interfaces more intelligent is not trivial, and there are multiple approaches to enhance either the intelligence of the system or that of the user. Novel interface approaches, presented herein, follow the idea of embodied interaction and put particular emphasis on the situated use of a system and the mental models humans develop in context.
KI 2009: Advances in Artificial Intelligence, 32nd Annual German Conference on AI, Paderborn, Germany, September 15-18, 2009. Proceedings; 01/2009
ABSTRACT: Increased availability of mobile computing, such as personal digital assistants (PDAs), creates the potential for constant and intelligent access to up-to-date, integrated and detailed information from the Web, regardless of one's actual geographical position. Intelligent question-answering requires the representation of knowledge from various domains, such as the navigational and discourse context of the user, potential user questions, the information provided by Web services and so on, for example in the form of ontologies. Within the context of the SmartWeb project, we have developed a number of domain-specific ontologies that are relevant for mobile and intelligent user interfaces to open-domain question-answering and information services on the Web. To integrate the various domain-specific ontologies, we have developed a foundational ontology, the SmartSUMO ontology, on the basis of the DOLCE and SUMO ontologies. This allows us to combine all the developed ontologies into a single SmartWeb Integrated Ontology (SWIntO) having a common modeling basis with conceptual clarity and the provision of ontology design patterns for modeling consistency. In this paper, we present SWIntO, describe the design choices we made in its construction, illustrate the use of the ontology through a number of applications, and discuss some of the lessons learned from our experiences.
Web Semantics: Science, Services and Agents on the World Wide Web. 07/2007;
ABSTRACT: We present three types of data collections and their experimental paradigms. The resulting data were employed to conduct a number of annotation experiments, create evaluation gold standards, and train statistical models. The data, experiments, and their analyses highlight the importance of data-driven empirical laboratory and field work for research on intuitive multimodal human-computer interfaces.
ABSTRACT: The SmartKom multimodal dialogue system offers access to a wide range of information and planning services. A significant subset of these is provided by external data and service providers. The work presented herein describes the challenging task of integrating such external data and service sources to make them semantically accessible to other systems and users. We present the implemented multiagent system and the corresponding knowledge-based extraction and integration approach. As a whole, these agents cooperate to provide users with topical, high-quality information via unified and intuitively usable interfaces such as the SmartKom system.
ABSTRACT: This chapter describes the English-language SmartKom-Mobile system and related research. We explain the work required to support a second language in SmartKom and the design of the English speech recognizer. We then discuss research carried out on signal processing methods for robust speech recognition and on language analysis using the Embodied Construction Grammar formalism. Finally, the results of human-subject experiments using a novel Wizard and Operator model are analyzed with an eye to creating more felicitous interaction in dialogue systems.
ABSTRACT: This paper presents SmartKom-Mobile, the mobile version of the SmartKom system. SmartKom-Mobile brings together highly advanced user interaction and mobile computing in a novel way and allows for ubiquitous access to multidomain information. SmartKom-Mobile is device-independent and realizes multimodal interaction in cars and on mobile devices such as PDAs. With its siblings, SmartKom-Home and SmartKom-Public, it provides intelligent user interfaces for an extremely broad range of scenarios and environments.
ABSTRACT: We present the approach to knowledge representation taken in the multimodal, multidomain, and multiscenario dialogue system SmartKom. We focus on the ontological and representational issues and choices involved in constructing an ontology that is shared by multiple components of the system, can be reused in different projects, and can be applied to various tasks. Finally, two applications of the ontology that highlight the usefulness of our approach are described.
ABSTRACT: We describe the role of context models in natural language processing systems and their implementation and evaluation in the SmartKom system. We show that contextual knowledge is needed for an ensemble of tasks, such as lexical and pragmatic disambiguation, decontextualization of domain and common-sense knowledge that was left implicit by the user, and estimating an overall coherence score that is used in intention recognition. As the successful evaluations show, the implemented context model enables a multicontext system such as SmartKom to respond felicitously to contextually underspecified questions. This ability constitutes an important step toward making dialogue systems more intuitively usable and conversational without losing their reliability and robustness.
ABSTRACT: In this paper we describe an ontological model of pragmatic knowledge, using an example from the domain of navigation, that is based on the Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE) and employs a specific ontological module called Descriptions & Situations. This framework establishes so-called ontological patterns. We employ such a pattern for modeling schematic knowledge of the pragmatics of spatial navigation.
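To make the Descriptions & Situations idea concrete, the following is a minimal, hypothetical sketch: a Description defines a set of roles, and a Situation satisfies the Description when concrete entities fill all of its roles. The class names, role names, and entities below are illustrative inventions, not the actual DOLCE axiomatization.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Description:
    """A reified schema: a named pattern that defines the roles
    any conforming situation must fill."""
    name: str
    roles: frozenset

@dataclass
class Situation:
    """A concrete state of affairs, interpreted against a Description
    by binding entities to its roles."""
    description: Description
    bindings: dict  # role name -> entity identifier

    def satisfies(self):
        # The situation satisfies its description when every role
        # is bound (and no extraneous roles are introduced).
        return set(self.bindings) == set(self.description.roles)


# Hypothetical navigation pattern: following a route requires an agent,
# an origin, a destination, and a path connecting them.
route_following = Description(
    "RouteFollowing",
    frozenset({"agent", "origin", "destination", "path"}),
)

situation = Situation(route_following, {
    "agent": "pedestrian_1",
    "origin": "main_station",
    "destination": "town_hall",
    "path": "route_42",
})
```

The design point is that the schema itself (the Description) is a first-class object in the ontology, so the same navigation pattern can be reused across many concrete situations rather than being hard-coded into each one.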