The gesture-testing interface. (a) The user is performing a gesture to test the recognition result. (b) The user can input a gesture name using a smartphone. If the gesture does not exist, then the system prompts the user to create it.

Source publication
Conference Paper
Full-text available
As motion sensors have become more advanced, gesture-control systems have become more popular in gaming and everyday appliances. However, in existing systems, gestures are predefined by designers or pattern-recognition experts. Such predefined gestures can be inconvenient for specific users in specific environments. Hence, it would be useful to pro...

Context in source publication

Context 1
... the gesture-testing stage, the user performs different gestures in recognition mode and can change the classifier threshold to improve the gesture. The interface is shown in Figure 6. The real-time RGB video of the user's body is shown on the left side, which allows the user to see what s/he is doing and to make sure that body movements are properly tracked by the Kinect depth sensors. ...
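
To make the threshold-based recognition mode described above concrete, here is a minimal sketch, not the paper's actual implementation: a loop that slides a window of tracked skeleton frames past a gesture classifier and reports a gesture only when its score clears a user-adjustable threshold. The names `RecognitionSettings`, `classifier.score`, and `get_skeleton_frame` are hypothetical placeholders for the classifier and depth-sensor tracking mentioned in the excerpt.

```python
from dataclasses import dataclass

@dataclass
class RecognitionSettings:
    threshold: float = 0.7  # acceptance threshold the user can raise or lower

def recognition_loop(classifier, get_skeleton_frame, settings, window=30):
    """Slide a window of tracked skeleton frames past the classifier and
    report a gesture only when its score clears the current threshold."""
    frames = []
    while True:
        frame = get_skeleton_frame()      # e.g. joint positions from a depth sensor
        if frame is None:                 # tracking stopped
            break
        frames.append(frame)
        if len(frames) < window:
            continue
        label, score = classifier.score(frames[-window:])
        if score >= settings.threshold:   # user tunes this to trade misses vs. false hits
            yield label, score
            frames.clear()                # avoid re-triggering on the same motion
```

Raising the threshold in this sketch makes the system more conservative (fewer false activations, more misses); lowering it does the opposite, which is the trade-off the testing interface lets the user explore.
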

Similar publications

Conference Paper
Full-text available
The People Research group, part of Philips Design, plays an important role in the creation of meaningful innovations for Philips by representing the end-user in design processes. The group carries out a people-centric design approach with established methodologies that are very helpful in the creation of single (connected) propositions. But these m...
Conference Paper
Full-text available
Usability Evaluation is one of the techniques used in the field of Human-Computer Interaction (HCI) in order to evaluate the extent of usability and level of acceptability of an Information System (IS) such as Web-Based Test Blueprint. A user-centered approach to development, i.e. the Interaction Design Model, is encouraged in the design of computing int...
Conference Paper
Full-text available
This paper describes a user study with an aim to explore the intuitiveness of gesture-based interaction in an augmented reality (AR) application. We selected children aged 10-11 as the target group in order to address the research problem in a setting where users have as little experience as possible with prior related technologies or ways of interact...
Article
Full-text available
Standard user applications provide a range of cross-cutting interaction techniques that are common to virtually all such tools: selection, filtering, navigation, layer management, and cut-and-paste. We present VisDock, a JavaScript mixin library that provides a core set of these cross-cutting interaction techniques for visualization, including sele...

Citations

... Previous studies have also shown that approaches that are useful for IMU-based gesture-sensing systems could be useful for gesture systems that use other sensors, such as depth cameras that track the user's body movements. For example, CUBOD [46], developed by Tang and Igarashi, follows a similar approach to MAGIC; it shows predictions of classification performance, false activations, and gesture consistency based on a gesture Goodness Measure, so that designers can know whether a gesture is good or bad without conducting a user study. GestureAnalyzer [20], proposed by Jang et al., provides interactive clustering and visualization techniques to help gesture designers categorize and characterize gestures collected during gesture elicitation studies. ...
... To summarize, many have attempted to make the gesture design process easier and more efficient by developing user-friendly design tools [2,16,36], easier recognizer modification [25,26], and databases of everyday motions for false-positive testing [2,30,46]. However, they do not allow the designer to 1) test the classification and false-activation performance of body-based gestures and 2) modify and test gestures without re-recording them. ...
... A drawback of any delimiter action is the disruption in the user's workflow [2,42]: the user must first perform the delimiter and then the intended gesture, which makes using the gestures less efficient. Another approach for mitigating the false-activation problem is to simulate the false-activation performance using an everyday motion database [30,46]. While promising, these methods only support false-activation checking against the whole gesture, so designers cannot modify gestures during the design process and iterate through it to make gestures usable. ...
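
The "everyday motion database" idea mentioned above can be illustrated with a short, hedged sketch: replay recorded everyday movements through a gesture recognizer and count how often each gesture fires unintentionally. The `recognizer` and `database` objects here are assumed stand-ins for illustration, not an API from any of the cited systems.

```python
def count_false_activations(recognizer, database, window, step=1):
    """Return {gesture_label: number of unintended activations} measured by
    sliding the recognizer over a database of everyday-motion recordings."""
    counts = {}
    for recording in database:                         # each recording: list of frames
        for start in range(0, max(1, len(recording) - window + 1), step):
            segment = recording[start:start + window]
            label = recognizer.recognize(segment)      # None if nothing matched
            if label is not None:
                counts[label] = counts.get(label, 0) + 1
    return counts
```

A designer could rerun this count after each modification of a gesture or its threshold to see whether the change reduces unintended activations.
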
... We found tools and systems designed for gesture acquisition (Acharya, Matovu, Serwadda and Griswold-Steiner, 2019), gesture processing (Bomsdorf et al., 2017; Kazemitabaar et al., 2017; Nebeling et al., 2014), sharing gesture sets via online repositories (Solis, Pakbin, Akbari, Mortazavi and Jafari, 2019), and gesture analysis (Kin et al., 2012a,b; Lü et al., 2014; Vatavu, 2017a, 2019; Vatavu and Wobbrock, 2015), respectively. The gesture types addressed in this prior work can be collected from a variety of input devices, from touchscreens (Kin et al., 2012a; Long Jr et al., 1999; Lü and Li, 2012) to video cameras and depth sensors (Ashbrook and Starner, 2010; Buruk and Özcan, 2017; Nebeling, Ott and Norrie, 2015; Nebeling et al., 2014), motion sensors (Tang and Igarashi, 2013), IoT devices (Solis et al., 2019), mobile devices (Acharya et al., 2019; Kohlsdorf, Starner and Ashbrook, 2011), dual-screen mobile devices (Wu and Yang, 2020), and wearables (Jones et al., 2020; Kazemitabaar et al., 2017; Roggen, 2020). Some of these systems implement distributed software architectures, e.g., on the web and in the cloud, to enable remote access to the tool, data, and resources (Ali et al., 2019; Buruk and Özcan, 2017; Schipor et al., 2019a; Solis et al., 2019). ...
... By surveying the scientific literature, we identified four types of tools designed for gesture input: (1) tools for gesture set design, represented by software applications that enable users to manage gesture sets for interactive systems (Ashbrook and Starner, 2010; Kin et al., 2012b; Kohlsdorf et al., 2011; Long Jr et al., 1999; Solis et al., 2019; Tang and Igarashi, 2013), (2) gesture acquisition tools, represented by software applications that collect and store gestures (Acharya et al., 2019), (3) gesture recognition tools with services and features for gesture processing and classification (Kin et al., 2012a,b; Lü et al., 2014; Lü and Li, 2012), and (4) experiment-centered tools, represented by software applications designed to assist researchers and practitioners in conducting user studies and experiments about gesture input (Ali et al., 2019; Buruk and Özcan, 2017; Nebeling et al., 2015). For example, CUBOD (Tang and Igarashi, 2013) enables definition of custom body gestures and MAGIC (Ashbrook and Starner, 2010; Kohlsdorf et al., 2011) assists designers in creating gesture sets that minimize false positives for specific gesture recognition approaches. GestMan is a cloud-based tool for the management of gesture sets that enables researchers and practitioners to remotely collect gestures from end users via HTML web applications. ...
Article
Full-text available
We introduce GearWheels, a software tool for studies about gesture input with wearables, including smartwatches, rings, and glasses. GearWheels features an event-based asynchronous software architecture design implemented exclusively with web standards, communications protocols, and data formats, which makes it flexible enough to support many wearables via HTTP and WebSocket communications. GearWheels differentiates itself from prior software tools for gesture acquisition, elicitation, recognition, and analysis with its web-based, wearable-oriented, experiment-centered architecture design. We demonstrate GearWheels with a device affixed to the index finger, the wrist, and the temple of a pair of glasses to illustrate touch stroke-gesture and motion-gesture input acquisition. We also perform a technical evaluation of GearWheels in the form of a simulation experiment, and report the request-response time performance of the software components of GearWheels with off-the-shelf wearables. We release GearWheels as open source software to assist researchers and practitioners in implementing studies about gesture input with wearables.
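
GearWheels' concrete message formats and endpoints are not given here, so the sketch below only illustrates the general pattern its abstract describes: a wearable-side client streaming motion samples to an experiment server over WebSocket as JSON. The URI, message fields, and `read_imu` callable are assumptions for illustration, not GearWheels' actual API; the example uses the `websockets` Python package.

```python
import asyncio
import json
import time

import websockets  # pip install websockets

async def stream_motion_samples(uri, read_imu, rate_hz=50):
    """Send sensor samples to a study server at a fixed rate over WebSocket.
    `read_imu` is a hypothetical callable returning the current reading as a dict."""
    period = 1.0 / rate_hz
    async with websockets.connect(uri) as ws:
        while True:
            sample = {"t": time.time(), **read_imu()}  # e.g. {"ax": ..., "gy": ...}
            await ws.send(json.dumps(sample))
            await asyncio.sleep(period)

# Example (hypothetical endpoint):
# asyncio.run(stream_motion_samples("ws://localhost:8080/session/1", read_imu))
```
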
... We report them here only for completeness since these types of gestures are out of the scope of our current proposal. Among these systems we can mention MAGIC (Multiple Action Gesture Interface Creation), 27 a tool that allows designers to create motion gestures without specific knowledge of pattern matching, and also to test them in order to ensure that they are not triggered unintentionally by the user (false positives); GRDT (Gesture Recognition Design Toolkit), 28 a set of tools to support gesture creation; Mogeste, 29 a tool that allows designers to define, train, and test new motion gestures captured through the inertial sensors of a commodity device (e.g., a smartphone); EventHurdle, 30 a visual design tool that supports designers in exploratory prototyping, provides functionality to integrate the defined gestures into a prototype for their automatic recognition, and supports handheld gestures through physical sensors, remote gestures through a camera, and also touchscreen gestures; ProGesture, 31 which supports rapid prototyping of full-body gestures by combining three design matters: gesture, presentation, and dialog; KIND-DAMA, 32 a modular middleware for developing interactive applications based on gestures on Kinect-like devices; KinectAnalysis, 33 a system for elicitation studies on the Microsoft Kinect, with support for recording, analysis, and sharing; GestureAnalyzer, 34 a tool that allows gesture analysis by applying clustering and visualization techniques to the gesture data captured by motion tracking; CUBOD, 35 which allows users to design and customize their own gestures, rather than relying on those defined by designers, and offers feedback to guide users through the design process in order to avoid gestures that are difficult to distinguish from others, difficult to execute with consistency, or too similar to unintended movements; and HotSpotizer, 36 another system that allows users to define their own custom gestures, in this case mapping them to keyboard commands, in order to use them in arbitrary applications. ...
Article
Full-text available
In this article, we present PolyRec Gesture Design Tool (PolyRec GDT), a tool that allows the definition of gesture sets for mobile applications. It supports the entire process of designing gesture sets, from collecting gestures to analyzing and exporting the set created for use in a mobile application. It is equipped with the PolyRec gesture recognizer, a unistroke recognizer based on template matching with good accuracy and performance, while still allowing support for other recognizers. The main features of PolyRec GDT include the ability to collect gestures directly on mobile devices, to detect similarities between gestures to help the designer detect ambiguities in the design phase, and to automatically select the most representative gestures for each gesture class by using a clustering technique to increase recognition speed without affecting accuracy. Experimental results show that this latter feature allows an increase in accuracy compared with when the same operation is performed manually by developers or randomly by software. Finally, a user study was carried out to evaluate PolyRec GDT, in which participants were asked to use it to add gesture functionality to a mobile application. The results showed that with the support of PolyRec GDT this operation took only a few minutes for participants with mobile development experience, whereas it would have taken much longer without it.
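
As a rough illustration of how a template-matching unistroke recognizer of this kind operates (a sketch of the general technique, not the PolyRec algorithm itself), a stroke can be resampled to a fixed number of points, normalized, and compared to stored templates by average point-to-point distance. All function and parameter names below are assumptions for illustration.

```python
import math

def resample(points, n=64):
    """Resample a stroke (list of (x, y)) to n roughly equidistant points."""
    length = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    interval = length / (n - 1) or 1.0
    acc, out = 0.0, [points[0]]
    pts, i = list(points), 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval and d > 0:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)      # continue resampling from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:           # pad against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Translate to the centroid and scale to a unit bounding box."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    w = (max(p[0] for p in points) - min(p[0] for p in points)) or 1.0
    h = (max(p[1] for p in points) - min(p[1] for p in points)) or 1.0
    return [((p[0] - cx) / w, (p[1] - cy) / h) for p in points]

def recognize(stroke, templates):
    """templates: dict mapping label -> raw (x, y) points of one example.
    Returns the label whose template is closest to the input stroke."""
    probe = normalize(resample(stroke))
    def score(label):
        cand = normalize(resample(templates[label]))
        return sum(math.dist(a, b) for a, b in zip(probe, cand)) / len(probe)
    return min(templates, key=score)
```

The same `score` function, evaluated between pairs of templates, is one simple way a design tool could warn the designer when two gesture classes are so close that they are likely to be confused, which is the kind of ambiguity detection the abstract refers to.
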
... When a motion gesture has been used in production, a video recording of its recognition is made available to the designer for annotation and analysis. CUBOD [87] is a design tool for body gestures that optimizes real-time recognition. Myo GestureControl [1] enables an experimenter to gather surface electromyography gestures acquired by a Thalmic Myo armband and to classify them with an accuracy that is comparable to other similar devices in medical applications. ...
Article
A gesture elicitation study, as originally defined, consists of gathering a sample of participants in a room, instructing them to produce gestures they would use for a particular set of tasks, materialized through a representation called a referent, and asking them to fill in a series of tests, questionnaires, and feedback forms. Until now, this procedure has been conducted manually in a single, physical, and synchronous setup. To relax the constraints imposed by this manual procedure and to support stakeholders in defining and conducting such studies in multiple contexts of use, this paper presents Gelicit, a cloud computing platform that supports gesture elicitation studies distributed in time and space, structured into six stages: (1) define a study: a designer defines a set of tasks with their referents for eliciting gestures and specifies an experimental protocol by parameterizing its settings; (2) conduct a study: any participant receiving the invitation to join the study conducts the experiment anywhere, anytime, anyhow, by eliciting gestures and filling in forms; (3) classify gestures: an experimenter classifies elicited gestures according to selected criteria and a vocabulary; (4) measure gestures: an experimenter computes gesture measures, such as agreement and frequency, to understand their configuration; (5) discuss gestures: a designer discusses the resulting gestures with the participants to reach a consensus; (6) export gestures: the consensus set of gestures resulting from the discussion is exported to be used with a gesture recognizer. The paper discusses Gelicit's advantages and limitations with respect to three main contributions: as a conceptual model for gesture management, as a method for distributed gesture elicitation based on this model, and as a cloud computing platform supporting this distributed elicitation. We illustrate Gelicit through a study for eliciting 2D gestures executing Internet of Things tasks on a smartphone.
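
One of the gesture measures mentioned in stage (4) is the agreement rate of Vatavu and Wobbrock (2015). A minimal sketch of its computation is shown below, assuming the experimenter has already grouped identical proposals for a referent under shared labels (the grouping labels themselves are an input, not something this snippet derives).

```python
from collections import Counter

def agreement_rate(proposals):
    """proposals: list of group labels, one per participant, for one referent.
    Returns AR(r) = sum_i |P_i| * (|P_i| - 1) / (|P| * (|P| - 1))."""
    n = len(proposals)
    if n < 2:
        return 1.0
    groups = Counter(proposals)
    return sum(k * (k - 1) for k in groups.values()) / (n * (n - 1))

# Example: 20 participants whose proposals fall into groups of 12, 5, and 3
# agreement_rate(["a"] * 12 + ["b"] * 5 + ["c"] * 3)  ->  about 0.416
```
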
... Cooking, gardening, mechanical repairs and surgery (Wen et al. 2013) are other domains where contactless UIs can be preferable due to hygiene. Convenient control of smart homes (Tang & Igarashi 2013), interactive art and music, interfaces for manipulating 3D images (Gallo 2013) and spatial medicine (Simmons et al. 2013) are further examples of cases where gesture-based UIs can enable novel capabilities. As related technologies progress and mature, we may expect gestural UIs to become increasingly common and novel user experiences to surface. ...
Article
User interfaces that utilise human gestures as input are becoming increasingly prevalent in diverse computing applications. However, few designers possess the deep insight, awareness and experience regarding the nature and usage of gestures in user interfaces to the extent that they are able to exploit the technological affordances and innovate over them. We argue that design students, who will be expected to envision and create such interactions in the future, are constrained by habits that pertain to conventional user interfaces. Design students should gain an understanding of the nature of human gestures and how to use them to add value to UI designs. To this end, we formulated an ‘awareness course’ for design students based on concepts derived from mime art and creative drama. We developed the course iteratively through the involvement of three groups of students. The final version of the course was evaluated by incorporating the perspectives of design educators, an industry expert and the students. We present the details of the course, describe the development process, and discuss the insights revealed by the evaluations.
... A collection of commercial 7,8 and research [26] efforts implement demonstration for authoring skeletal tracking gestures. While demonstration seems to be a straightforward solution, it requires the temporal segmentation of intended gestures from intermediate movements to be done manually, often by editing on a timeline of keyframes. ...
Conference Paper
Drawing from a user-centered design process and guidelines derived from the literature, we developed a paradigm based on space discretization for declaratively authoring mid-air gestures and implemented it in Hotspotizer, an end-to-end toolkit for mapping custom gestures to keyboard commands. Our implementation empowers diverse user populations -- including end-users without domain expertise -- to develop custom gestural interfaces within minutes, for use with arbitrary applications.
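
A hedged illustration of the space-discretization idea described in this abstract (a simplified reading, not Hotspotizer's actual data model): the space around the user is divided into a coarse 3D grid, a gesture is authored as an ordered sequence of grid cells, and it is detected when a tracked joint visits those cells in order. The cell size and coordinate conventions below are assumptions.

```python
def to_cell(position, cell_size=0.15):
    """Quantize a 3D joint position (metres, relative to the user) to a grid cell."""
    x, y, z = position
    return (int(x // cell_size), int(y // cell_size), int(z // cell_size))

def matches(joint_positions, gesture_cells, cell_size=0.15):
    """True if the joint trajectory passes through the authored cells in order."""
    want = iter(gesture_cells)
    target = next(want, None)
    for cell in (to_cell(p, cell_size) for p in joint_positions):
        if cell == target:
            target = next(want, None)
            if target is None:
                return True     # every authored cell was visited in sequence
    return False
```

Declarative authoring then amounts to picking the cell sequence on a grid editor and binding a detected match to a keyboard command, which is the mapping the toolkit description refers to.
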
... A collection of commercial 7,8 and research [26] efforts implement demonstration for authoring skeletal tracking gestures. While demonstration seems to be a straightforward solution, it requires the temporal segmentation of intended gestures from intermediate movements to be done manually, often by editing on a timeline of keyframes. ...
Thesis
Full-text available
Devices that sense the alignment and motion of human limbs via computer vision have recently become a commodity, enabling a variety of novel user interfaces that use human gesture as the main input modality. The design and development of these interfaces requires programming tools that support the representation, creation and manipulation of information on human body gestures. Following concerns such as usability and physical differences among individuals, these tools should ideally target end-users and designers as well as professional software developers. This thesis documents the design, development, deployment and evaluation of a software application to support gesture authoring by end-users for skeletal tracking vision-based input devices. The software enables end-users without programming experience to introduce gesture control to computing applications that serve their own goals, and provides developers and designers of gestural interfaces with a rapid prototyping tool that can be used to experientially evaluate designs.