Conference Paper

Midas: a declarative multi-touch interaction framework.

DOI: 10.1145/1935701.1935712
Conference: Proceedings of the 5th International Conference on Tangible and Embedded Interaction 2011, Funchal, Madeira, Portugal, January 22-26, 2011
Source: DBLP

ABSTRACT: Over the past few years, multi-touch user interfaces have emerged from research prototypes into mass-market products. This evolution has been mainly driven by innovative devices such as Apple's iPhone or Microsoft's Surface tabletop computer. Unfortunately, there seems to be a lack of software engineering abstractions in existing multi-touch development frameworks. Many multi-touch applications are based on hard-coded, procedural, low-level event processing. This leads to proprietary solutions with a lack of gesture extensibility and cross-application reusability. We present Midas, a declarative model for the definition and detection of multi-touch gestures, in which gestures are expressed via logical rules over a set of input facts. We highlight how our rule-based language approach leads to improvements in gesture extensibility and reusability. Last but not least, we introduce JMidas, an instantiation of Midas for the Java programming language, and describe how JMidas has been applied to implement a number of innovative multi-touch gestures.
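The abstract does not show concrete rule syntax, but the core idea of expressing a gesture as a logical rule matched against a fact base of touch events can be sketched in a few lines of Java. The following is a minimal, self-contained illustration; the names (TouchFact, Rule) and the naive matching loop are assumptions made for this sketch, not the actual JMidas API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Minimal sketch of "gestures as logical rules over input facts".
// TouchFact, Rule, and the matching loop are hypothetical names for
// illustration only; they are not the actual JMidas API.
public class GestureRuleSketch {

    // An input fact: one touch sample from the multi-touch surface.
    record TouchFact(int fingerId, double x, double y, long timeMs) {}

    // A rule pairs a condition over the current facts with an action
    // that runs when the condition matches.
    record Rule(String name, Predicate<List<TouchFact>> condition, Runnable action) {}

    public static void main(String[] args) {
        List<TouchFact> facts = new ArrayList<>();
        List<Rule> rules = new ArrayList<>();

        // Declarative gesture definition: "two-finger touch" holds when at
        // least two distinct fingers are seen within a 100 ms window.
        rules.add(new Rule("two-finger-touch",
            fs -> fs.stream().map(TouchFact::fingerId).distinct().count() >= 2
                && fs.stream().mapToLong(TouchFact::timeMs).max().orElse(0)
                 - fs.stream().mapToLong(TouchFact::timeMs).min().orElse(0) <= 100,
            () -> System.out.println("two-finger-touch detected")));

        // Simulated input stream: assert each new fact, then naively
        // re-evaluate every rule against the whole fact base.
        TouchFact[] stream = {
            new TouchFact(0, 10.0, 20.0, 1000),
            new TouchFact(1, 50.0, 60.0, 1040)
        };
        for (TouchFact fact : stream) {
            facts.add(fact);
            for (Rule rule : rules) {
                if (rule.condition().test(facts)) {
                    rule.action().run();
                }
            }
        }
    }
}
```

A production rule engine would typically match facts incrementally (for example with the Rete algorithm) rather than re-evaluating every rule against the full fact base on each new event, but the declarative shape of the gesture definition stays the same.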

Related publications:
  • ABSTRACT: Multi-touch technology has become pervasive in our daily lives, with iPhones, iPads, touch displays, and other devices. It is important to find a user input model that can work for multi-touch gesture recognition and can serve as a building block for modeling other modern input devices (e.g., Leap Motion, gyroscope). We present a novel approach to modeling multi-touch input using Petri Nets. We formally define our method, explain how it works, and discuss the possibility of extending it to other devices. (A minimal illustrative sketch of this idea follows the list below.)
    Human-Computer Interaction International 2014, Crete, Greece; 06/2014
  • ABSTRACT: When dealing with the complex task of extracting meaningful information from multiple continuous sensor streams, declarative rules can be employed to benefit from software engineering principles such as modularization and composition. We propose PARTE, a parallel, scalable event processing engine providing predictable response times for a high-quality user experience.
    Proceedings of the 3rd Annual Conference on Systems, Programming, and Applications: Software for Humanity; 10/2012
  • ABSTRACT: An increasing number of today's consumer devices, such as mobile phones or tablet computers, are equipped with various sensors. The extraction of useful information such as gestures from sensor-generated data using mainstream imperative languages is a notoriously difficult task. Over the last few years, a number of domain-specific programming languages have been proposed to ease the development of gesture detection. Most of these languages have adopted a declarative approach, allowing programmers to describe their gestures rather than having to manually maintain a history of event data and intermediate gesture results. While these declarative languages represent a clear advancement in gesture detection, a number of issues remain unresolved. In this paper we present relevant criteria for gesture detection and provide an initial classification of existing solutions based on these criteria, in order to foster discussion and identify opportunities for future gesture programming languages.
    Proceedings of EGMI 2014, 1st International Workshop on Engineering Gestures for Multimodal Interfaces, Rome, Italy; 06/2014
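For readers unfamiliar with the Petri-net approach mentioned in the first related publication above, the following self-contained Java sketch shows one way a single finger's touch life cycle could be modeled as places and transitions. The place and transition names (UP, DOWN, press, release) are invented for this illustration and do not come from the cited paper.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative Petri net for one finger's touch life cycle. The net has
// two places (UP, DOWN) and two transitions (press, release); all names
// are hypothetical, not taken from the cited paper.
public class TouchPetriNetSketch {

    // Marking: number of tokens currently in each place.
    private final Map<String, Integer> marking = new HashMap<>();

    // Each transition consumes one token from its input place and
    // produces one token in its output place.
    private record Transition(String name, String input, String output) {}

    private final List<Transition> transitions = List.of(
        new Transition("press",   "UP",   "DOWN"),  // finger touches surface
        new Transition("release", "DOWN", "UP")     // finger leaves surface
    );

    public TouchPetriNetSketch() {
        marking.put("UP", 1);    // the finger starts lifted
        marking.put("DOWN", 0);
    }

    // Fire a transition by name if it is enabled, i.e. its input place
    // holds at least one token.
    public boolean fire(String name) {
        for (Transition t : transitions) {
            if (t.name().equals(name) && marking.get(t.input()) > 0) {
                marking.merge(t.input(), -1, Integer::sum);
                marking.merge(t.output(), 1, Integer::sum);
                return true;
            }
        }
        return false; // not enabled, e.g. "release" before any "press"
    }

    public static void main(String[] args) {
        TouchPetriNetSketch net = new TouchPetriNetSketch();
        System.out.println(net.fire("release")); // false: finger not down yet
        System.out.println(net.fire("press"));   // true:  UP -> DOWN
        System.out.println(net.fire("release")); // true:  DOWN -> UP
    }
}
```

In a full model, gestures would correspond to particular firing sequences over a larger net spanning several fingers; this sketch only shows the basic enabling-and-firing mechanics.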
