Conference Paper

Ensuring a Robust Multimodal Conversational User Interface During Maintenance Work

... More importantly, the system was perceived as highly efficient for users with cognitive impairments, because it lowers their cognitive workload. Fleiner et al. (2021) argued that conversational user interfaces are not sufficiently robust for maintenance guidance because ambient noise interferes with voice recognition. Therefore, they proposed a set of user-defined gestural inputs (hand and head) as a complement to text- and voice-based communication. ...
Article
Full-text available
This study presents a systematic literature review to understand the applications, benefits, and challenges of digital assistants (DAs) in production and logistics tasks. Our conceptual framework covers three dimensions: information management, collaborative operations, and knowledge transfer. We evaluate human-DA collaborative tasks in the areas of product design, production, maintenance, quality management, and logistics. This allows us to expand upon different types of DAs and reveal how they improve the speed and ease of production and logistics work, which was ignored in previous studies. Our results demonstrate that DAs improve the speed and ease of workers’ interaction with machines/information systems in searching, processing, and demonstrating. Extant studies describe DAs with different levels of autonomy in decision-making; however, most DAs perform tasks as instructed or with workers’ consent. Additionally, we observe that workers find it more intuitive to perform tasks and acquire knowledge when they receive multiple sensorial cues (e.g. auditory and visual cues). Consequently, future research can explore how DAs can be integrated with other technologies, such as eye tracking and augmented reality, for robust multi-modal assistance. This can provide customised DA support to workers with disabilities or conditions to facilitate more inclusive production and logistics.
Article
Full-text available
Due to the increasing digitalization in manufacturing logistics, devices to integrate the worker into the digital manufacturing system are necessary. A voice user interface (VUI) can be considered suitable for this purpose due to its flexibility and intuitive operability. Despite the popularity and acceptance of VUIs in everyday life, their use in industrial applications, especially in manufacturing logistics, is still rare. While VUIs have been successfully used in order picking for decades, hardly any other industrial fields of application exist. In this paper, we have identified various barriers to the use of VUI in industrial applications. We categorized them and identified four key barriers. We then conducted a systematic literature review to determine and compare already investigated application areas of VUIs, their characteristics, advantages and disadvantages. We found that in particular the operation of machines and industrial robots, as well as general data and information output on machine and system status, maintenance and employee training are frequently investigated. It is noticeable that VUIs are often used in combination with other user interfaces (UIs). Some challenges to VUI usage, such as high ambient noise levels, have already been solved through various approaches, while other challenges remain. Based on the results of the literature review, we put forward a research agenda regarding further suitable industrial application areas as well as general challenges for the use of VUIs in industrial environments.
Conference Paper
Full-text available
If an artwork could talk, what would visitors ask? This paper explores what types of content voice-based AI conversational systems should have to meet visitors' expectations in a museum. The study analyses 142,463 conversation logs from 5,242 unique sessions of a nine-month-long deployment of a voice-based interactive guide in a modern art museum in Brazil. In this experiment, visitors freely asked questions about seven artworks of different styles. By grouping the visitor utterances into eight types of content, we determined that more than half of the visitors asked about the meanings and intentions behind the artwork, followed by facts about the artwork and author-related questions. We also determined that the types of questions were not affected by each artwork, the artwork style, or its physical location. We also saw some relationships between visitors' overall evaluation of the experience and the types of questions they asked. Based on those results, we identified implications for designing content for voice-based conversational systems in museums.
Conference Paper
Full-text available
As our landscape of wearable technologies proliferates, we find more devices situated on our heads. However, many challenges hinder them from widespread adoption - from their awkward, bulky form factor (today's AR and VR goggles) to their socially stigmatized designs (Google Glass) and a lack of a well-developed head-based interaction design language. In this paper, we explore a socially acceptable, large, head-worn interactive wearable - a hat. We report results from a gesture elicitation study with 17 participants, extract a taxonomy of gestures, and define a set of design concerns for interactive hats. Through this lens, we detail the design and fabrication of three hat prototypes capable of sensing touch, head movements, and gestures, and including ambient displays of several types. Finally, we report an evaluation of our hat prototype and insights to inform the design of future hat technologies.
Conference Paper
Full-text available
Chatbots have been around since the 1960s, but recently they have risen in popularity, especially due to new compatibility with social networks and messenger applications. Chatbots are different from traditional user interfaces, for they unveil themselves to the user one sentence at a time. Because of that, users may struggle to interact with them and to understand what they can do. Hence, it is important to support designers in deciding how to convey chatbots' features to users, as this might determine whether the user continues to chat or not. As a first step in this direction, in this paper our goal is to analyze the communicative strategies that have been used by popular chatbots to convey their features to users. To perform this analysis we use the Semiotic Inspection Method (SIM). As a result, we identify and discuss the different strategies used by the analyzed chatbots to present their features to users. We also discuss the challenges and limitations of using SIM on such interfaces.
Conference Paper
Full-text available
Teaching new assembly instructions at manual assembly workplaces has evolved from human supervision to digitized automatic assistance. Assistive systems provide dynamic support, adapt to user needs, and relieve expert workers of the perceived workload of supporting novice workers. New assembly instructions can be implemented at a fast pace. These assistive systems decrease the cognitive workload of workers, who would otherwise need to memorize new assembly instructions with each change of product line. However, the design of assistive systems for industry is a challenging task: once deployed, people have to work with such systems for full workdays. From experience gained during our past project motionEAP, we report on design challenges for interactive worker assistance at manual assembly workplaces, as well as challenges encountered when deploying interactive assistive systems for diverse user populations.
Article
Full-text available
Augmented reality smart glasses (ARSG) are increasingly popular and have been identified as a vital technology supporting shop-floor operators in the smart factories of the future. By improving our knowledge of how to efficiently evaluate and select ARSG for the shop-floor context, this paper aims to facilitate and accelerate the adoption of ARSG by the manufacturing industry. The market for ARSG has exploded in recent years, and the large variety of products to select from makes it not only difficult but also time consuming to identify the best alternative. To address this problem, this paper presents an efficient step-by-step process for evaluating ARSG, including concrete guidelines as to what parameters to consider and their recommended minimum values. Using the suggested evaluation process, manufacturing companies can quickly make optimal decisions about what products to implement on their shop floors. The paper demonstrates the evaluation process in practice, presenting a comprehensive review of currently available products along with a recommended best buy. The paper also identifies and discusses topics meriting research attention to ensure that ARSG are successfully implemented on the industrial shop floor.
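The step-by-step process described above screens candidate ARSG products against recommended minimum parameter values before a best buy is selected. As a purely hypothetical illustration of such threshold screening (the parameter names and minimum values below are invented for the example, not taken from the paper):

def meets_minimums(device, minimums):
    # Keep a candidate only if every evaluated parameter clears its
    # recommended minimum (or, for weight, stays under the maximum).
    return (device["battery_hours"] >= minimums["battery_hours"]
            and device["field_of_view_deg"] >= minimums["field_of_view_deg"]
            and device["weight_grams"] <= minimums["weight_grams_max"])

minimums = {"battery_hours": 4, "field_of_view_deg": 30, "weight_grams_max": 200}
candidates = [
    {"name": "A", "battery_hours": 6, "field_of_view_deg": 40, "weight_grams": 180},
    {"name": "B", "battery_hours": 2, "field_of_view_deg": 50, "weight_grams": 150},
]
shortlist = [d["name"] for d in candidates if meets_minimums(d, minimums)]
print(shortlist)  # ['A']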
Conference Paper
Full-text available
With increasing complexity of assembly tasks and an increasing number of product variants, instruction systems providing cognitive support at the workplace are becoming more important. Different instruction systems for the workplace provide instructions on phones, tablets, and head-mounted displays (HMDs). Recently, many systems using in-situ projection for providing assembly instructions at the workplace have been proposed and become commercially available. Although comprehensive studies comparing HMD and tablet-based systems have been presented, in-situ projection has not yet been scientifically compared against state-of-the-art approaches. In this paper, we aim to close this gap by comparing HMD instructions, tablet instructions, and baseline paper instructions to in-situ projected instructions using an abstract Lego Duplo assembly task. Our results show that assembling parts is significantly faster using in-situ projection and locating positions is significantly slower using HMDs. Further, participants make fewer errors and report lower perceived cognitive load using in-situ instructions compared to HMD instructions.
Conference Paper
Full-text available
The best way to construct user interfaces for smart glasses is not yet known. We investigated the use of eye tracking in this context in two experiments. Eye and head movements were combined so that one can select an object to interact with by looking at it and then change a setting in that object by turning the head horizontally. We compared three different techniques for mapping the head turn to scrolling a list of numbers, with and without haptic feedback. We found that the haptic feedback had no noticeable effect on objective metrics, but it sometimes improved user experience. Direct mapping of head orientation to list position is fast and easy to understand, but the signal-to-noise ratio of eye and head position measurement limits the possible range. The technique with a constant rate of change after crossing the head-angle threshold was simple and functional, but slow when the rate of change is adjusted to suit beginners. Finally, making the rate of change depend on the head angle tends to lead to fairly long task completion times, although in theory it offers a good combination of speed and accuracy.
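To make the three mapping techniques concrete, here is a minimal sketch of how each could translate a measured head angle into list scrolling. The threshold, rates, and gains are hypothetical placeholder values, not those used in the study.

def direct_mapping(head_angle_deg, degrees_per_item=2.0):
    # Head orientation maps directly to a list offset (items from center).
    return head_angle_deg / degrees_per_item

def constant_rate(head_angle_deg, dt_s, threshold_deg=10.0, items_per_s=2.0):
    # Scroll at a fixed rate once the head angle crosses the threshold.
    if abs(head_angle_deg) < threshold_deg:
        return 0.0
    direction = 1.0 if head_angle_deg > 0 else -1.0
    return direction * items_per_s * dt_s

def angle_dependent_rate(head_angle_deg, dt_s, threshold_deg=10.0, gain=0.3):
    # Scroll faster the further the head is turned past the threshold.
    excess = abs(head_angle_deg) - threshold_deg
    if excess <= 0:
        return 0.0
    direction = 1.0 if head_angle_deg > 0 else -1.0
    return direction * gain * excess * dt_s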
Conference Paper
Full-text available
We address in this work the process of agreement rate analysis for characterizing the level of consensus between participants' proposals elicited during guessability studies. Two new measures, i.e., disagreement rate for referents and coagreement rate between referents, are proposed to accompany the widely-used agreement rate formula of Wobbrock et al. [37] when reporting participants' consensus for symbolic input. A statistical significance test for comparing the agreement rates of k>=2 referents is presented in analogy with Cochran's success/failure Q test [5], for which we express the test statistic in terms of agreement and coagreement rates. We deliver a toolkit to assist practitioners in computing agreement, disagreement, and coagreement rates, and in running statistical tests for agreement rates at the p=.05, .01, and .001 levels of significance. We validate our theoretical development of agreement rate analysis in relation to several previously published elicitation studies. For example, when we present the probability distribution function of the agreement rate measure, we also use it (1) to explain the magnitude of agreement rates previously reported in the literature, and (2) to propose qualitative interpretations for agreement rates, in analogy with Cohen's guidelines for effect sizes [6]. We also re-examine previously published elicitation data from the perspective of the agreement rate test statistic, and highlight new findings on the effect of referents on agreement rates, unattainable prior to this work. We hope that our contributions will advance the current knowledge in agreement rate analysis, providing researchers and practitioners with new techniques and tools to help them understand user-elicited data at deeper levels of detail and sophistication.
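As a minimal sketch, assuming the revised agreement rate formula of Vatavu and Wobbrock, in which the agreement rate for a referent is the proportion of participant pairs that proposed the same sign, the computation can be expressed as:

from collections import Counter

def agreement_rate(proposals):
    # AR(r) = sum_i |P_i|*(|P_i|-1) / (|P|*(|P|-1)), where the P_i are
    # groups of identical proposals and P is the set of all proposals.
    n = len(proposals)
    if n < 2:
        return 0.0
    pairs_agreeing = sum(c * (c - 1) for c in Counter(proposals).values())
    return pairs_agreeing / (n * (n - 1))

# Example: 6 of 10 participants propose "swipe", 3 "tap", 1 "circle".
print(agreement_rate(["swipe"] * 6 + ["tap"] * 3 + ["circle"]))  # 0.4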
Conference Paper
Full-text available
Guessability is essential for symbolic input, in which users enter gestures or keywords to indicate characters or commands, or rely on labels or icons to access features. We present a unified approach to both maximizing and evaluating the guessability of symbolic input. This approach can be used by anyone wishing to design a symbol set with high guessability, or to evaluate the guessability of an existing symbol set. We also present formulae for quantifying guessability and agreement among guesses. An example is offered in which the guessability of the EdgeWrite unistroke alphabet was improved by users from 51.0% to 80.1% without designer intervention. The original and improved alphabets were then tested for their immediate usability with the procedure used by MacKenzie and Zhang (1997). Users entered the original alphabet with 78.8% and 90.2% accuracy after 1 and 5 minutes of learning, respectively. The improved alphabet bettered this to 81.6% and 94.2%. These improved results were competitive with prior results for Graffiti, which were 81.8% and 95.8% for the same measures.
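For reference, a common rendering of the agreement measure from this line of work (my notation, assuming the usual formulation rather than quoting the paper) averages the squared proportions of identical proposals over all referents:

A = \frac{1}{|R|} \sum_{r \in R} \sum_{P_i \subseteq P_r} \left( \frac{|P_i|}{|P_r|} \right)^2

where R is the set of referents, P_r the set of proposals for referent r, and each P_i a group of identical proposals within P_r.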
Conference Paper
Full-text available
Modern smartphones contain sophisticated sensors to monitor three-dimensional movement of the device. These sensors permit devices to recognize motion gestures - deliberate movements of the device by end-users to invoke commands. However, little is known about best-practices in motion gesture design for the mobile computing paradigm. To address this issue, we present the results of a guessability study that elicits end-user motion gestures to invoke commands on a smartphone device. We demonstrate that consensus exists among our participants on parameters of movement and on mappings of motion gestures onto commands. We use this consensus to develop a taxonomy for motion gestures and to specify an end-user inspired motion gesture set. We highlight the implications of this work to the design of smartphone applications and hardware. Finally, we argue that our results influence best practices in design for all gestural interfaces.
Conference Paper
Full-text available
With gesture-based interactions in mobile settings becoming more popular, there is a growing concern regarding the social acceptance of these interaction techniques. In this paper we begin by examining the various definitions of social acceptance that have been proposed in the literature, to synthesize a definition based on how the user feels about performing a particular interaction as well as how bystanders perceive the user during this interaction. We then present the main factors that influence gestures' social acceptance, including culture, time, interaction type, and the user's position on the innovation adoption curve. Through a user study we show that an important factor in determining the social acceptance of gesture-based interaction techniques is the user's perception of others' ability to interpret the potential effect of a manipulation.
Conference Paper
Full-text available
This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the "integral image" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a "cascade" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.
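The integral image is the part of this pipeline that is simple enough to sketch compactly: each entry holds the sum of all pixels above and to the left, so any rectangular feature sum costs only four array lookups. A minimal illustration (not the authors' code):

def integral_image(img):
    # ii has an extra zero row/column so rect_sum needs no bounds checks.
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    # Sum of pixels in the half-open rectangle [x0, x1) x [y0, y1).
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]

img = [[1, 2], [3, 4]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 2, 2))  # 10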
Conference Paper
Nonverbal communication is an indispensable and omnipresent element of human behavior, taking forms such as body actions, hand gestures, and facial expressions. As a basic means of expressing a person's attitudes, feelings, and emotions, it is essential in human communication and also plays an important role in multimodal interaction between humans and computer systems. Due to the richness, flexibility, and ambiguity of human nonverbal expression, as well as the influence of personal behavioral habits, understanding and recognizing such expressions involves complex intelligent technologies such as visual analysis and situational awareness based on machine learning. This paper aims to provide a systematic summary and analysis of machine learning methods and technologies in human-computer nonverbal communication. It starts with the evolution of nonverbal communication research, goes on to present the definition and classification of nonverbal communication, proceeds to review and analyze the machine learning techniques frequently employed in human-computer nonverbal communication, and finally expounds on the development of smart learning based on neural mechanisms.
Conference Paper
In this paper we aim to provoke by teasing out the question of whether conversational user interfaces should be multimodal. Of course they should! Excellent arguments can be found in decades of research in multimodal HCI. We substantiate our perspective with an example showing how conversational interaction becomes more robust and efficient through the use of multimodality.
Conference Paper
Small lot sizes are becoming more common in modern manufacturing. Rather than automate every possible product variant, companies may rely on manual assembly to be more flexible. However, it can be difficult for people to remember the steps for every possible product variant. Assistive systems providing instructions can support workers. In this paper, we present a study investigating whether existing machine translation and text-to-speech engines provide sufficient quality to enable on-the-fly translations to provide assistance to workers in their native languages. The results of our tests indicate that machine translation is not yet sufficient for this application.
Conference Paper
The robustness and consistency of sensory inference models under changing environmental conditions and hardware are a crucial requirement for the generalizability of recent innovative work, particularly in the field of deep learning, from the lab to the real world. We measure the extent to which current speech recognition cloud models are robust to background noise, and show that hardware variability is still a problem for the real-world applicability of state-of-the-art speech recognition models.
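A hedged sketch of the kind of robustness measurement this implies: mix background noise into clean speech at a controlled signal-to-noise ratio, transcribe, and compare word error rates. The transcribe() call below is a hypothetical stand-in for whichever cloud ASR service is under test, not a real API.

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    # Scale the noise so the speech-to-noise power ratio equals snr_db.
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise[: len(speech)]

def wer(ref, hyp):
    # Word error rate via edit distance between word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, sub)
    return d[-1][-1] / max(1, len(ref))

# noisy = mix_at_snr(clean_audio, cafe_noise, snr_db=5)
# print(wer(reference.split(), transcribe(noisy).split()))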
Conference Paper
Important goals of the process industry are efficient, safe, and resource-saving production. High expectations have been formulated concerning the linkage of automation technologies, digitization, and data-driven analytics methods such as machine learning. This paper investigates the adaptation challenges the process industry faces when introducing digital services in the plant. Specifically, we discuss the potential of virtual assistant systems to address these challenges. We discuss virtual assistant systems in their role as 1) support directly embedded into the work process and 2) an integration point for new services. Based on this, we outline a research agenda for virtual assistant systems for the process industry.
Article
In the digital age, cultural organizations strive to retain audience engagement especially via experimentation with novel technologies and social media. The latter are increasingly influencing the way cultural heritage is perceived, providing options for grappling with crucial issues in the sector, including sustainability, openness, and public participation. One tool that has been deployed to explore these issues is the chatbot, a computer program designed to simulate conversation with human users, especially over the internet. Chatbots run through different conversational interfaces, but they have a particularly heavy application in Facebook Messenger. Within the museums and cultural sector specifically, these robotic media are regularly proclaimed to offer novel engagement mechanisms that can empower participants to actively participate in the heritage process. However, most heritage Messenger bots are purely informative and object- or exhibit-centered, providing little opportunity for meaningful interactivity, creative expression, or critical engagement. This article explores and critically reviews three Messenger chatbots related to heritage organizations, concluding with suggestions for their future development.
Conference Paper
Combining mid-air gestures with pen input for bi-manual input on tablets has been reported as an alternative and attractive input technique in drawing applications. Previous work has also argued that mid-air gestural input can cause discomfort and arm fatigue over time, which can be addressed in a desktop setting by allowing users to gesture in alternative restful arm positions (e.g., elbow rests on desk). However, it is unclear if and how gesture preferences and gesture designs differ across alternative arm positions. To investigate these research questions, we report on a user- and choice-based gesture elicitation study in which 10 participants designed gestures for different arm positions. We provide an in-depth qualitative analysis and detailed categorization of gestures, discussing commonalities and differences in the gesture sets based on a "think aloud" protocol, video recordings, and self-reports of user preferences.
Conference Paper
Modern smartphones are built with capacitive-sensing touchscreens, which can detect anything that is conductive or has a dielectric differential with air. The human finger is an example of such a dielectric, and works wonderfully with such touchscreens. However, touch interactions are disrupted by raindrops, water smear, and wet fingers because capacitive touchscreens cannot distinguish finger touches from other conductive materials. When users' screens get wet, the screen's usability is significantly reduced. RainCheck addresses this hazard by filtering out potential touch points caused by water to differentiate fingertips from raindrops and water smear, adapting in real-time to restore successful interaction to the user. Specifically, RainCheck uses the low-level raw sensor data from touchscreen drivers and employs precise selection techniques to resolve water-fingertip ambiguity. Our study shows that RainCheck improves gesture accuracy by 75.7%, touch accuracy by 47.9%, and target selection time by 80.0%, making it a successful remedy to interference caused by rain and other water.
Conference Paper
Small lot sizes in modern manufacturing present new challenges for people doing manual assembly tasks. Assistance systems, including instruction systems and collaborative robots, can support the flexibility needed, while also reducing the number of errors. This session is designed to give participants a better understanding of the strengths and limitations of the different technologies with respect to the practical implementation in companies. Several new technological solutions designed for companies will be presented. In addition, participants will be given the chance to gain first-hand experience with some of the technologies presented.
Conference Paper
This paper presents a study on the potential of today’s smartwatches as a complementary user interface to mobile support systems for industrial maintenance tasks. Starting from challenges, usage scenarios and use cases for the valuable use of smartwatches in an industrial context, the various possibilities of information delivery, task support and process control using smartwatches are evaluated and checked against basic ergonomic requirements on data pre-processing, user interface design and the hardware equipment. A prototypical implementation of a support system for industrial maintenance tasks illustrates the applicability of the derived interaction concepts and design guidelines for smartwatch-equipped mobile support systems.
Conference Paper
We present and demonstrate Kinemic Wave, a fully freehand and mobile interaction system, which allows gesture control and text-entry on-the-go. We use wrist-worn inertial sensors which are almost independent of environmental influences. The system is therefore especially suited to interact with smart- and augmented-reality glasses. Simple commands like scrolling or confirmation can be input with short gestures, more complex commands, search terms, small messages or annotations can be input by writing text continuously in the air. Interaction integrates seamlessly in other activities as gestures and Airwriting are spotted automatically in the continuous data stream.
Conference Paper
Gestural interaction has become increasingly popular, as enabling technologies continue to transition from research to retail. The mobility of miniaturized (and invisible) technologies introduces new uses for gesture recognition. This paper investigates single-hand microgestures (SHMGs), detailed gestures in a small interaction space. SHMGs are suitable for the mobile and discrete nature of interactions for ubiquitous computing. However, there has been a lack of end-user input in the design of such gestures. We performed a user-elicitation study with 16 participants to determine their preferred gestures for a set of referents. We contribute an analysis of 1,632 gestures, the resulting gesture set, and prevalent conceptual themes amongst the elicited gestures. These themes provide a set of guidelines for gesture designers, while informing the designs of future studies. With the increase in hand-tracking and electronic devices in our surroundings, we see this as a starting point for designing gestures suitable to portable ubiquitous computing.
Conference Paper
Wearable technology, such as Google Glass, offers potential benefits to engineers in industrial settings. We designed and developed a wearable solution for industrial maintenance, which 1) provides workflow guidance to the user, 2) supports hands-free operation, 3) allows the users to focus on their work, and 4) enables an efficient way for collaborating with a remote expert. The prototype, which was demonstrated at InnoTrans 2014, the largest international trade show for train technology, received positive feedback from many potential users and customers.
Chapter
Standardized usability questionnaires are questionnaires designed for the assessment of perceived usability, typically with a specific set of questions presented in a specified order using a specified format with specific rules for producing scores based on the answers of respondents. For usability testing, standardized questionnaires are available for assessment of a product at the end of a study (post-study—e.g., QUIS, SUMI, PSSUQ, and SUS) and after each task in a study (post-task—e.g., ASQ, Expectation Ratings, SEQ, SMEQ, and Usability Magnitude Estimation). Standardized questionnaires are also available for the assessment of website usability (e.g., WAMMI and SUPR-Q) and for a variety of related constructs. Almost all of these questionnaires have undergone some type of psychometric qualification, including assessment of reliability, validity, and sensitivity, making them valuable tools for usability practitioners.
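Among the post-study instruments listed above, SUS has a particularly simple and well-documented scoring rule, sketched below: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 to a 0-100 range.

def sus_score(responses):
    # `responses` holds the ten SUS item answers on a 1-5 scale, in order.
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))  # i = 0 is item 1 (odd)
    return total * 2.5

# Example: all-neutral answers (3s) yield the midpoint score.
print(sus_score([3] * 10))  # 50.0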
Article
We present insights from a gesture elicitation study conducted for TV control, during which 18 participants contributed gesture commands and rated the execution difficulty and recall likeliness of free-hand gestures for 21 television control tasks. Our study complements previous work on gesture interaction design for the TV set with the first exploration of fine-grained resolution 3-D finger movements and hand gestures. We report lower agreement rates than previous gesture studies (AR = .158) with a 72.8 % recall rate and 15.8 % false positives, results that are explained by the complexity and variability of unconstrained finger and hand gestures. However, our observations also confirm previous findings, such as people preferring related gestures for dichotomous tasks and more disagreement occurring for abstract tasks, such as “open browser” or “show the list of channels” for our specific TV scenario. To reach a better understanding of our participants’ preferences for articulating finger and hand gestures, we defined five measures for Leap Motion gestures, such as gesture volume and finger-to-palm distance, which we employed to evaluate gestures performed by our participants. We also contribute a set of guidelines for practitioners interested in designing free-hand gestures for interactive TV scenarios involving similar gesture acquisition technology. We release our dataset consisting of 378 Leap Motion gestures described by fingertip position, direction, and velocity coordinates to foster further studies in the community. This first exploration of viewers’ preferences for fine-grained resolution free-hand gestures for TV control represents one more step toward designing low-effort gesture interfaces for lean-back interaction with the TV set.
Conference Paper
Industrial maintenance is a complex and knowledge-intensive field. Therefore, maintenance technicians need to have easy access to versatile and situationally relevant knowledge. The aim of this paper is to increase the understanding of maintenance technicians’ interactions and knowledge sharing with colleagues and technology during maintenance work. Three industrial maintenance cases were studied using interviews and observation. As a result, a model for knowledge sharing in maintenance work was developed. Based on the model, it is easier to tackle challenges in knowledge gathering and sharing. In addition, it supports the research and development of technologies that contribute to knowledge sharing in the future.
Conference Paper
We present a robust, real-time-capable, and simple framework for segmenting video sequences and live streams of manual workflows into their constituent single tasks. Using classifiers trained on these segments, we can follow a user performing the workflow in real time, as well as learn task variants from additional video examples. Our proposed method requires neither object detection nor high-level features. Instead, we propose a novel measure derived from image distance that evaluates image properties jointly, without prior segmentation. Our method can cope with repetitive and free-hand activities, and the results are in many cases comparable or equal to manual task segmentation. One important application of our method is the automatic creation of step-by-step task documentation from a video demonstration. The entire process for automatically creating a fully functional augmented reality manual is explained in detail, and results are shown.
Conference Paper
In order to study users’ spontaneous formulation of commands in the context of multimodal human-computer interaction (HCI), we conducted a Wizard of Oz experiment on the use of unconstrained speech and 2D gestures for interacting with standard application software: 8 subjects performed various design and process control tasks during 3 weekly sessions. Some functionalities of the multimodal user interface were simulated by 3 human operators or ‘wizards’. First analyses bring out the great diversity of subjects’ styles and strategies; they also indicate that, in such environments, the addition of spoken natural language to direct manipulation (the manipulation of graphical objects through pointing) improves HCI efficiency and flexibility, whilst command interpretation remains tractable.
Conference Paper
Many surface computing prototypes have employed gestures created by system designers. Although such gestures are appropriate for early investigations, they are not necessarily reflective of user behavior. We present an approach to designing tabletop gestures that relies on eliciting gestures from non-technical users by first portraying the effect of a gesture, and then asking users to perform its cause. In all, 1080 gestures from 20 participants were logged, analyzed, and paired with think-aloud data for 27 commands performed with 1 and 2 hands. Our findings indicate that users rarely care about the number of fingers they employ, that one hand is preferred to two, that desktop idioms strongly influence users' mental models, and that some commands elicit little gestural agreement, suggesting the need for on-screen widgets. We also present a complete user-defined gesture set, quantitative agreement scores, implications for surface technology, and a taxonomy of surface gestures. Our results will help designers create better gesture sets informed by user behavior.
Article
Computers first became more visual, then learned to understand vocal commands, and have now gone a step further to become “touchy”, that is, skin to screen. In this paper we shed light on the significance of touchscreen technology, its types, components, the working of different touchscreens, their applications, and a comparative study of the various types of touchscreen technology. Touchscreen technology is increasingly gaining popularity, as touchscreens can be seen at ATMs, on cellphones, in information kiosks, etc. A touchscreen-based system allows easy navigation around a GUI-based environment. As the technology advances, people may be able to operate computers without mice and keyboards. The touchscreen is also an assistive technology: the interface can be beneficial to those who have difficulty using other input devices such as a mouse or keyboard, and when used in conjunction with software such as on-screen keyboards or other assistive technology, it can help make computing resources more available to people who have difficulty using computers. Currently, research is being conducted to develop touchscreen video projectors; the ability to transform any surface into a touchscreen means lower costs, making the technology more cost-effective.
Conference Paper
Today multimedia is often touted as a panacea for designing an intuitive user interface. But, depending on the device and the context of use, multimedia can be a misapplication. This paper describes a case study of a context-of-use analysis as a basis for the user-centered development of a user interface for service and maintenance technicians. Besides the classic user interface questions, the use of a speech user interface was evaluated.