Challenges for Designing the User Experience of
Multi-touch Interfaces
Stefan Bachl1, Martin Tomitsch2, Christoph Wimmer1, Thomas Grechenig1
1 Research Group for Industrial Software (INSO), Vienna University of Technology, Austria
{stefan.bachl, christoph.wimmer, thomas.grechenig}@inso.tuwien.ac.at
2 Faculty of Architecture, Design and Planning, The University of Sydney, Australia
martin.tomitsch@sydney.edu.au
ABSTRACT
Advances in technology have led to an increased presence
of multi-touch interfaces in consumer products in recent
years. Still, many challenges remain that designers need to
face when designing for multi-touch interaction. As multi-
touch interfaces are becoming more ubiquitous it is
important to investigate not only their performance for
certain tasks, but also the user experience of interacting
with such interfaces. In this paper we discuss eight
challenges that need to be considered when designing the
user experience of multi-touch interfaces. The challenges
also reveal potential areas for future research in the field of
multi-touch interaction.
Keywords
Multi-touch interaction, surface computing, touch
interaction, touch screens, interaction design.
INTRODUCTION
In the last few years multi-touch interfaces have gained a
lot of attention, not only due to their application in mobile
phones but also because of the advantages that come with
this technology. One of the most important advantages
compared to other interfaces is the possibility to directly
interact with information on screen using fingers for input.
This provides users with a stronger feeling of having
control over their interactions rather than being controlled
by the system. Another aspect of direct interaction is that it
makes interacting with digital interfaces accessible to a
broad spectrum of users. Furthermore, the technology
enables concurrent co-located collaboration, especially on
larger surfaces. Multi-touch interfaces that provide physical
behaviour (such as gravity or inertia) can also lead to higher
performance [13]. Combined with visually appealing
interface elements and graphics, this might also improve the
overall user experience. However, multi-touch
interaction also has its disadvantages, resulting in several
challenges that need to be addressed when designing for
multi-touch. Whereas multi-touch is good for specific use
cases in certain contexts, it is not a general remedy for
interaction design problems. Interfaces designed for mouse
and keyboard interaction cannot be easily augmented with
multi-touch capability without redesigning the interface
accordingly.
This paper provides a general overview of existing
challenges that need to be considered when designing
multi-touch interfaces. It focuses on implications for user
experience rather than purely technological issues, such as
improved algorithms for finger tracking. The challenges are
on the one hand derived from practical experience with the
development of multi-touch applications and supervising
student projects at our research group. On the other hand,
the theoretical background is formed by an extensive
literature review as well as presentations by experts in the
field of multi-touch, human-computer interaction and
natural user interfaces.
In this paper, we present eight multi-touch challenges,
classified into three categories (see Table 1): screen-based,
user-based and input-based challenges. The screen-based
challenges describe problems related to physical properties
of touch screens. The user-based challenges explore the use
of fingers for direct input as the origin of problems users
face when interacting with multi-touch interfaces. Finally,
the input-based challenges outline the difficulties in
interpreting and supporting the input to enhance the user
experience of multi-touch interfaces.

Table 1. Overview of the eight multi-touch challenges
discussed in this paper, divided into three categories

Screen-based challenges: Affordance of screens; Tactile user feedback
User-based challenges: Ergonomics; Individual differences; Accessibility
Input-based challenges: Gestures and patterns; Supporting data input; Multi-user support
SCREEN-BASED CHALLENGES
The challenges in this category relate to the affordance of
screens and to the lack of tactile user feedback.
Affordance of Screens
One fundamental challenge of designing multi-touch
interfaces lies in the natural affordance of screens. The
physical appearance of screens is responsible for affording
touch [16]. As not all screens are capable of detecting
touch, there are two undesired scenarios: a) the user
touches a normal screen that is not capable of responding to
this interaction technique and b) the user fails to recognise
a touch screen as such. Hardware-specific
details (e.g. the existence or absence of keyboard and
mouse) can help the user to determine the touch capabilities
of a screen. If the designer cannot influence the hardware
design of a screen, the use of written or symbolic
instructions (Figure 1) or already learned user interface
conventions (e.g. a button with three-dimensional
appearance) is necessary to provide visual cues. According
to Norman [16], instructions or conventions do not affect
the physical affordance of the screen itself, but the
perceived affordance by the user.
Figure 1. Written instructions to affect the perceived
affordance of touch screens
Once users become familiar with a screen or device, the
importance of communicating its touch capability
decreases. This is apparent when comparing the design of
public screens (e.g. terminals) to personal devices that are
used regularly (e.g. the Apple iPhone).
Tactile User Feedback
The absence of tactile user feedback related to multi-touch
technology is one of the most discussed and investigated
challenges in the area of multi-touch interfaces, yet there is
no conclusive solution to this problem. Current touch
screen technology does not provide tactile feedback when
touched, unlike the press of a key on a physical
keyboard. Therefore the use of adequate visual feedback,
including the simple visualisation of the detection of the
users’ fingers, is essential when designing touch screen
interfaces. Acoustic feedback can enhance the effect, but
can also harm the experience when used in collaborative
setups [6] due to the fact that sounds target all users at the
same time.
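To illustrate the role of visual substitutes for tactile feedback, the following is a minimal browser-based sketch in TypeScript: it draws a ring under each detected finger and adds a short vibration pulse where the Vibration API is available. The "touch-surface" element and the "touch-ring" CSS class are assumptions for illustration, not part of any published technique.

```typescript
// Minimal sketch: substitute visual (and, where available, vibrotactile)
// feedback for missing tactile feedback. Assumes a browser with Pointer
// Events, an element with id "touch-surface", and a CSS class
// "touch-ring" that styles the ring (size, border, fade-out).

const surface = document.getElementById("touch-surface")!;

surface.addEventListener("pointerdown", (e: PointerEvent) => {
  // Visual feedback: show a ring where the finger was detected.
  const ring = document.createElement("div");
  ring.className = "touch-ring";
  ring.style.position = "fixed";
  ring.style.left = `${e.clientX - 20}px`;
  ring.style.top = `${e.clientY - 20}px`;
  document.body.appendChild(ring);
  setTimeout(() => ring.remove(), 300); // remove after a short fade

  // Vibrotactile feedback on devices that support the Vibration API.
  if ("vibrate" in navigator) {
    navigator.vibrate(10); // a single 10 ms pulse per touch
  }
});
```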
In addition to patents by large companies (e.g. [12], [22]),
there are several research approaches for simulating tactile
feedback. Technological approaches include vibration,
piezoelectric actuation, solenoids, pin matrices, or ciliated
surfaces [7]. Problems of these approaches include high
costs, scalability and the lack of support for multiple
concurrent users. For instance, the vibration of the screen to
provide feedback to an individual user would interfere with
other collaborating users. A completely different approach
[15] employs the users’ mobile phones for distal tactile
feedback through vibration. Harrison and Hudson [7] use
pneumatically actuated physical buttons. However, this
approach is limited by its fixed, pre-assigned button layout,
which stands in contrast to the typical adaptability of
multi-touch interfaces.
Many studies (e.g. [9], [11], [14]) have shown that tactile
feedback can improve performance and decrease error rates
regardless of the technology used. Lee and Zhai [14]
discovered that the combination of vibration and acoustic
feedback does not result in further improvements.
USER-BASED CHALLENGES
The challenges in this category relate to ergonomics,
individual differences between users, and accessibility.
Ergonomics
The use of fingers for direct input and manipulation also
entails challenges where different screen properties are
concerned. One important thing to consider when designing
interfaces for multi-touch applications is partial occlusion
of the screen caused by fingers, hands and arms when users
interact with a touch screen [18]. In contrast to the use of a
computer mouse, users specifically occlude those parts of
the interface they are interacting with when touching
interface elements with their fingers. While this is also the
case when typing on a keyboard, the problem is more
severe on touch screens due to the additional lack of tactile
feedback (see section Tactile user feedback).
This problem is exacerbated on small screens, where touch
targets are also smaller. One approach to address the
problem of occlusion is the so-called back-of-device
interaction (e.g. [2], [24]) where the user can touch the
screen on its backside. Either semi-transparent screens or a
digital visualisation of the fingers help the user locate the
target. Unfortunately this technique does not scale to larger
screens. A very general approach is the Shift technique [19]
which displays the occluded part above the finger while
touching, sometimes even magnified.
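The following sketch illustrates the core idea of a Shift-like callout under stated assumptions (a visible canvas plus an offscreen canvas holding the rendered interface); it is a simplified illustration, not the original implementation of [19].

```typescript
// Minimal sketch of a Shift-like callout: copy the region occluded by
// the finger to a magnified preview above it. Assumes the interface is
// rendered into an offscreen "base" canvas of the same size.

const view = document.getElementById("view") as HTMLCanvasElement;
const ctx = view.getContext("2d")!;
const base = document.createElement("canvas");
base.width = view.width;
base.height = view.height;
// ... the application draws its interface into base.getContext("2d") ...

const R = 24;      // radius (px) of the occluded region to copy
const LIFT = 80;   // vertical offset of the callout above the finger
const ZOOM = 2;    // magnification factor of the callout

view.addEventListener("pointermove", (e: PointerEvent) => {
  const box = view.getBoundingClientRect();
  const x = e.clientX - box.left;
  const y = e.clientY - box.top;

  ctx.drawImage(base, 0, 0); // redraw the unoccluded interface
  // Copy the area under the finger to a magnified spot above it.
  ctx.drawImage(
    base,
    x - R, y - R, 2 * R, 2 * R,          // source: under the finger
    x - R * ZOOM, y - LIFT - R * ZOOM,   // destination: above the finger
    2 * R * ZOOM, 2 * R * ZOOM
  );
});
```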
Another important effect to consider is muscle fatigue
while interacting with large touch screens. With a mouse,
users can reach distant positions of the screen with minor
movements of their hands. On touch screens users have to
cover larger distances with their arms and hands. This
problem increases when designing applications that support
multiple different hardware platforms. Also, movements on
vertically arranged touch screens might even be more
strenuous. Therefore it is important to consider hand and
arm movements for the positioning of user interface
elements to prevent early muscle fatigue.
Individual Differences
Individual differences between users play an important role
when designing multi-touch interfaces. For instance, unlike
computer mice, which are produced in different sizes, touch
screens will not be manufactured to match different hand
sizes. Therefore, different physical properties have to be
considered when designing the interface (e.g. the size of
buttons). Not only do hand and finger sizes vary from one
person to another; the fingers of one hand also have
different pointing properties (such as size). The
decision which fingers will be used for which gestures (e.g.
object rotation) is dependent on finger properties and
ergonomics. This leads to important design considerations
as well as to technological challenges. As an example, a
touch target should be greater than 11.5 mm, according to
Wang and Ren [21]. This presents a challenge when
designing touch screen interfaces for mobile devices with
small screens. Other considerations [18] include the
handedness of users, fingernails that can make the detection
of touch points difficult, gloves, which can prevent the
detection of fingers in capacitive-based touch setups, and
fingerprint smudges that prompt users to wipe over a screen
that is still listening for deliberate input.
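As a simple illustration of translating such recommendations into an implementation, the sketch below converts the 11.5 mm minimum of Wang and Ren [21] into pixels. The physical display density is assumed to be known (e.g. from a device profile), since it is not reliably exposed in all environments.

```typescript
// Minimal sketch: translate the 11.5 mm recommendation of Wang and
// Ren [21] into pixels. The physical display density (dpi) is assumed
// to be known, e.g. from a device profile.

const MIN_TARGET_MM = 11.5;

function minTargetPx(displayDpi: number): number {
  // 25.4 mm per inch; round up so targets never drop below the minimum.
  return Math.ceil((MIN_TARGET_MM / 25.4) * displayDpi);
}

console.log(minTargetPx(160)); // ~73 px on a 160 dpi handset
```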
Figure 2. Accuracy and occlusion of mouse (left)
compared to finger (right)
Another important consideration is the accuracy of fingers
compared to a computer mouse. A computer mouse has a
target zone of one pixel, whereas targeting a specific single
pixel with a finger can become nearly impossible (Figure
2). Techniques for helping users to target the right spot
exist (e.g. [1], [17]) and should be considered when
designing touch interfaces. Different accuracy and touch
target size of finger and mouse emphasise that existing
interfaces should not be reused or enabled for touch
interaction without appropriate adaptation.
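A common building block of such precision techniques is tolerance-based hit testing. The sketch below is a simplified illustration of that idea rather than any of the published techniques: it snaps a touch to the nearest target centre within a threshold, instead of requiring the pixel-exact hit a mouse cursor affords.

```typescript
// Minimal sketch of tolerance-based hit testing: snap the touch to the
// nearest target centre within a threshold. Shapes are illustrative.

interface Target { id: string; x: number; y: number; }

function hitTarget(
  touch: { x: number; y: number },
  targets: Target[],
  maxDist = 36 // px; roughly half a finger-sized target
): Target | null {
  let best: Target | null = null;
  let bestDist = maxDist;
  for (const t of targets) {
    const d = Math.hypot(t.x - touch.x, t.y - touch.y);
    if (d < bestDist) { // keep the closest target inside the tolerance
      bestDist = d;
      best = t;
    }
  }
  return best; // null if no target is close enough
}
```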
Accessibility
The lack of tactile user feedback (see section Tactile user
feedback) also impacts the accessibility of touch screens.
Especially blind and visually impaired people face barriers
when interacting with touch screens. Existing products,
such as the Apple iPhone 3GS, try to solve this problem by
providing screen readers that give synthesised speech
feedback when touching the screen. However, if more
complex tasks need to be accomplished, screen readers
might not be sufficient. Kane et al. [10] use the advantages
of gestures for helping people with disabilities to navigate
through menus, but although the tasks could be solved more
quickly compared to button-based interfaces, the error rate
was higher. Another approach uses relative inputs for text-
entry on a touch screen [26]. The first touch at any position
on the screen represents the central position of one of three
layers. A swipe in one of eight basic directions selects the
corresponding character, while resting at the central
position cycles through the three layers.
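The direction-to-character mapping at the core of such a technique can be sketched as follows; the character layout shown is a placeholder, not the published layout of [26].

```typescript
// Minimal sketch of the direction-to-character mapping: the first touch
// defines a centre; a swipe in one of eight directions selects a
// character from the active layer. The layer content is a placeholder.

const LAYER = ["a", "b", "c", "d", "e", "f", "g", "h"]; // N, NE, E, SE, S, SW, W, NW

function charForSwipe(dx: number, dy: number): string | null {
  const MIN_SWIPE = 20; // px; shorter movements count as staying put
  if (Math.hypot(dx, dy) < MIN_SWIPE) return null; // dwell: cycle layers instead
  // atan2(dx, -dy) makes "up" 0 rad; quantise into 45-degree sectors.
  const sector = Math.round(Math.atan2(dx, -dy) / (Math.PI / 4));
  return LAYER[((sector % 8) + 8) % 8];
}
```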
Other disabilities have to be considered as well when
designing multi-touch applications, particularly in a public
context. For instance, physical disabilities could especially
cause problems when interacting with larger touch screens
or complicated gestures.
INPUT-BASED CHALLENGES
The challenges in this category relate to gestures and
patterns, supporting data input, and multi-user support.
Gestures and Patterns
With the success of multi-touch enabled devices, gestures
in use are increasingly becoming inconsistent [23] across
different manufacturers. Patents that protect specific
gestures for a manufacturer aggravate this development:
competitors must either omit a gesture or invent an
alternative, which makes standardisation difficult to
achieve. However, approaches to provide de facto
standards do exist. Wobbrock et al. [25] propose a user-
defined gesture set for tabletops as well as a taxonomy of
surface gestures.
When gestures get more complex, the number of people
that are able to perform these gestures without instructions
decreases. According to Saffer [18], “the complexity of the
gesture should match the complexity of the task at hand”.
Major challenges have to be faced when trying to define
intuitive, self-revealing gestures for complex tasks that go
beyond zooming, rotating and swiping. Saffer [18]
proposes that complex tasks should be realised with simple
gestures (e.g. with buttons or menu systems), while
additionally providing more sophisticated gestures for expert
users.
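For reference, the simple baseline gestures mentioned above can be derived with little more than vector arithmetic. The following sketch computes scale and rotation from two tracked touch points, assuming a tracker that reports the point pair at gesture start and now.

```typescript
// Minimal sketch: scale and rotation of a two-finger gesture follow from
// the vector between the touch points at gesture start versus now.

interface Pt { x: number; y: number; }

function pinchRotate(start: [Pt, Pt], now: [Pt, Pt]) {
  const v0 = { x: start[1].x - start[0].x, y: start[1].y - start[0].y };
  const v1 = { x: now[1].x - now[0].x, y: now[1].y - now[0].y };
  return {
    scale: Math.hypot(v1.x, v1.y) / Math.hypot(v0.x, v0.y), // pinch factor
    rotation: Math.atan2(v1.y, v1.x) - Math.atan2(v0.y, v0.x), // radians
  };
}
```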
The context of interaction with a multi-touch interface has
to be considered when defining gestures as well. For
instance, the use of complex gestures with more than one
finger is not always possible on mobile devices due to
different hand-held positions and usage.
When designing applications that support multiple different
hardware platforms, screen and hardware properties have to
be included in gesture considerations. This can include the
number of concurrent trackable fingers, the screen size or
the general touch screen technology and related accuracy.
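In a browser context, for example, some of these hardware properties can be queried at run time. The sketch below gates multi-finger gestures on the reported number of touch points and a coarse pointer; these checks are rough proxies for the capabilities discussed above, not a full device profile.

```typescript
// Minimal sketch: query touch capabilities at run time before enabling
// multi-finger gestures.

const maxFingers = navigator.maxTouchPoints; // 0 on mouse-only setups
const coarsePointer = window.matchMedia("(pointer: coarse)").matches;

// Only offer two-finger gestures where they can actually be performed.
const enableMultiFingerGestures = maxFingers >= 2 && coarsePointer;
```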
Supporting Data Input
The lack of tactile user feedback also affects the user
experience of data input on multi-touch interfaces. There
are no methods available that make use of the advantages
of multi-touch for data input while also overcoming this
problem. One big advantage of virtual keyboards is the
ability to dynamically change buttons related to the desired
input (for instance an adapted keyboard for an email
address field where certain characters are forbidden), the
context, or the user's abilities (e.g. QWERTY versus
ABCDEF layout). Furthermore, multi-touch enhances the
user experience compared with single-touch since keys do
not have to be pressed consecutively.
The dispute over the use of virtual versus physical
keyboards on mobile devices has resulted in a variety of
concepts for virtual data input methods (e.g.
http://www.retype.ch, http://www.shapewriter.com/iphone.html,
http://www.touchtype-online.com).
Nevertheless, small screens imply small virtual keys and
therefore high error rates combined with lower
performance. Adaptive algorithms (e.g. [5]) try to
counteract these problems by invisibly resizing touch targets
based on predictions derived from preceding input.
Other approaches (e.g. [8]) address problems
like occlusions and resting palms which have to be
classified as invalid input, and try to reduce finger
movements on larger surfaces. Again, the Shift technique
[19] is a very common approach to handle occlusions. Zhai
et al. [27] examine the performance of virtual keyboards on
the basis of Fitt’s Law and the corresponding learning
curve.
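The essence of such key-target resizing can be sketched as choosing the key that maximises the product of a touch model and a language-model prior. The Gaussian touch model and the external prior below are illustrative assumptions, not the algorithm of [5].

```typescript
// Minimal sketch in the spirit of key-target resizing: pick the key that
// maximises P(touch | key) * P(key | history).

interface Key { char: string; x: number; y: number; }

function resolveKey(
  touch: { x: number; y: number },
  keys: Key[],
  prior: (char: string) => number, // e.g. from an n-gram language model
  sigma = 18 // px; assumed spread of touches around a key centre
): Key {
  let best = keys[0];
  let bestScore = -Infinity;
  for (const k of keys) {
    const d2 = (k.x - touch.x) ** 2 + (k.y - touch.y) ** 2;
    // log P(touch | key) + log P(key | history), constant terms dropped
    const score = -d2 / (2 * sigma * sigma) + Math.log(prior(k.char));
    if (score > bestScore) { bestScore = score; best = k; }
  }
  return best; // likely keys effectively receive larger target areas
}
```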
Multi-user Support
One of the most obvious technological challenges when
designing a system for several co-located users, who
interact simultaneously, is the ability to distinguish between
individual users. Still, there is no conclusive solution for
multi-user support that works for different touch screen
technologies. DiamondTouch [3] is a well-known approach
that uses an array of antennas to distinguish between users,
each hooked up to a capacitive receiver. However, the
approach is limited to a very specific tabletop setup. It does
not use distinguishing characteristics of the users
themselves (such as fingerprints) and therefore requires
initialisation.
As collaborative multi-touch applications often do not
specify dedicated zones for each user, the distinction
between individual users would allow further
enhancements to improve the user experience. The position
of the user(s) (e.g. around a multi-touch tabletop) cannot be
determined reliably in every situation and with every
hardware setup, although a few approaches exist. Wang et
al. [20] analyse the orientations of the contact shapes
(fingertips) to detect the corresponding users, but suffer
from the fact that not every contact point in multi-touch
interaction is an oblique touch (i.e. touch with the finger
pad instead of the fingertip). Dohse et al. [4] augment a
multi-touch tabletop setup with hand tracking by mounting
an additional camera above the table to distinguish between
users. Although very promising, the approach only works
when users stand on opposite sides of a tabletop.
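Where hardware support for user identification is missing, applications sometimes fall back on spatial heuristics. The sketch below assigns touches to the nearest table edge under the assumption that users stand at known sides; it is an illustrative simplification, not the method of [4] or [20].

```typescript
// Minimal sketch of a spatial fallback for user distinction: on a
// rectangular tabletop of width w and height h, attribute each touch to
// the user standing at the nearest table edge (an assumed seating plan).

type Side = "north" | "south" | "east" | "west";

function nearestSide(x: number, y: number, w: number, h: number): Side {
  const dists: Array<[Side, number]> = [
    ["west", x],
    ["east", w - x],
    ["north", y],
    ["south", h - y],
  ];
  dists.sort((a, b) => a[1] - b[1]);
  return dists[0][0]; // touches near one edge map to the user at that side
}
```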
CONCLUSION
In this paper we presented an overview of currently
existing challenges that have an impact on the user
experience of multi-touch interfaces. The eight identified
challenges are based on our experience with designing
multi-touch interfaces as well as an extensive review of
recent literature and presentations from this field. The
challenges fall into three groups according to their solutions: 1)
challenges that need to be addressed through technological
innovation, 2) challenges that can be solved through
interface design, and 3) challenges that need to be tackled
both through interface design and on a hardware level.
The first group covers challenges that require solutions in
terms of new technology. To provide real, authentic tactile
user feedback the challenge is to invent reliable, scalable
touch screen technology. To enable multi-user support,
unique characteristics, such as fingerprints, need to be
incorporated. The second group covers challenges that can
be addressed through interface design solutions. Individual
differences of fingers, hands and arms should be considered
when designing multi-touch interfaces. Further, the user
experience of different gestures and patterns defined to
perform specific actions can be supported through
appropriate interface design. The third group covers the
remaining challenges that can be approached through
interface design but also partly in terms of technology or
hardware development. The physical affordance of a touch
screen is defined by its hardware appearance, but perceived
affordance can be manipulated by changing the visual
appearance of interface design elements. Ergonomics of
touch screens should be considered when designing multi-
touch interfaces, but technological advances also need to be
made to prevent occlusions. Similarly, accessibility can be
approached with hardware solutions or sophisticated
interaction methods. Finally, the support of high-
performance, reliable data input can be partly implemented
using intelligent keyboard interfaces, but also depends on
technological issues such as the lack of tactile user
feedback.
The list of challenges discussed in this paper not only
reveals current challenges for designing the user experience
of multi-touch interfaces, but also suggests possible
directions for further research.
REFERENCES
1. Albinsson, P. and Zhai, S. High precision touch screen
interaction. Proceedings of the SIGCHI conference on
Human factors in computing systems, ACM (2003),
105-112.
2. Baudisch, P. and Chu, G. Back-of-device interaction
allows creating very small touch devices. Proceedings
of the conference on Human factors in computing
systems, ACM (2009), 1923-1932.
3. Dietz, P. and Leigh, D. DiamondTouch: a multi-user
touch technology. Proceedings of the ACM symposium
on User interface software and technology, ACM
(2001), 219-226.
4. Dohse, K.C., Dohse, T., Still, J.D., and Parkhurst, D.J.
Enhancing Multi-user Interaction with Multi-touch
Tabletop Displays Using Hand Tracking. International
Conference on Advances in Computer-Human
Interaction, IEEE Computer Society (2008), 297-302.
5. Gunawardana, A., Paek, T., and Meek, C. Usability
guided key-target resizing for soft keyboards.
Proceeding of the conference on Intelligent user
interfaces, ACM (2010), 111-118.
6. Hancock, M.S., Shen, C., Forlines, C., and Ryall, K.
Exploring non-speech auditory feedback at an
interactive multi-user tabletop. Proceedings of Graphics
Interface 2005, Canadian Human-Computer
Communications Society (2005), 41-50.
7. Harrison, C. and Hudson, S.E. Providing dynamically
changeable physical buttons on a visual display.
Proceedings of conference on Human factors in
computing systems, ACM (2009), 299-308.
8. Hirche, J., Bomark, P., Bauer, M., and Solyga, P.
Adaptive interface for text input on large-scale
interactive surfaces. Proceedings of TABLETOP 2008,
IEEE (2008), 163-166.
9. Hoggan, E., Brewster, S.A., and Johnston, J.
Investigating the effectiveness of tactile feedback for
mobile touchscreens. Proceedings of the SIGCHI
conference on Human factors in computing systems,
ACM (2008), 1573-1582.
10. Kane, S.K., Bigham, J.P., and Wobbrock, J.O. Slide
rule: making mobile touch screens accessible to blind
people using multi-touch interaction techniques.
Proceedings of the ACM SIGACCESS conference on
Computers and accessibility, ACM (2008), 73-80.
11. Koskinen, E., Kaaresoja, T., and Laitinen, P. Feel-good
touch: finding the most pleasant tactile feedback for a
mobile touch screen button. Proceedings of conference
on Multimodal interfaces, ACM (2008), 297-304.
12. Laitinen, P. Tactile Touch Screen. International
Application PCT/EP2006/009377, filed 27 September
2006.
13. Lee, S. and Lee, W. Exploring effectiveness of physical
metaphor in interaction design. Proceedings of the
conference extended abstracts on Human factors in
computing systems, ACM (2009), 4363-4368.
14. Lee, S. and Zhai, S. The performance of touch screen
soft buttons. Proceedings of the conference on Human
factors in computing systems, ACM (2009), 309-318.
15. McAdam, C. and Brewster, S. Distal tactile feedback
for text entry on tabletop computers. Proceedings of the
British Computer Society Conference on Human-
Computer Interaction, British Computer Society (2009),
504-511.
16. Norman, D.A. Affordance, conventions, and design.
interactions 6, 3 (1999), 38-43.
17. Potter, R.L., Weldon, L.J., and Shneiderman, B.
Improving the accuracy of touch screens: an
experimental evaluation of three strategies. Proceedings
of the SIGCHI conference on Human factors in
computing systems, ACM (1988), 27-32.
18. Saffer, D. Designing Gestural Interfaces: Touchscreens
and Interactive Devices. O'Reilly Media, Inc., 2008.
19. Vogel, D. and Baudisch, P. Shift: a technique for
operating pen-based interfaces using touch.
Proceedings of the SIGCHI conference on Human
factors in computing systems, ACM (2007), 657-666.
20. Wang, F., Cao, X., Ren, X., and Irani, P. Detecting and
leveraging finger orientation for interaction with direct-
touch surfaces. Proceedings of the ACM symposium on
User interface software and technology, ACM (2009),
23-32.
21. Wang, F. and Ren, X. Empirical evaluation for finger
input properties in multi-touch interaction. Proceedings
of the conference on Human factors in computing
systems, ACM (2009), 1063-1072.
22. Westerman, W.C. Keystroke tactility arrangement on a
smooth touch surface, US Patent 20090315830, filed 24
December 2009.
23. Wigdor, D., Fletcher, J., and Morrison, G. Designing
user interfaces for multi-touch and gesture devices.
Proceedings of the conference extended abstracts on
Human factors in computing systems, ACM (2009),
2755-2758.
24. Wigdor, D., Forlines, C., Baudisch, P., Barnwell, J., and
Shen, C. Lucid touch: a see-through mobile device.
Proceedings of the ACM symposium on User interface
software and technology, ACM (2007), 269-278.
25. Wobbrock, J.O., Morris, M.R., and Wilson, A.D. User-
defined gestures for surface computing. Proceedings of
the conference on Human factors in computing systems,
ACM (2009), 1083-1092.
26. Yfantidis, G. and Evreinov, G. Adaptive blind
interaction technique for touchscreens. Universal
Access in the Information Society 4, 4 (2006), 328-337.
27. Zhai, S., Sue, A., and Accot, J. Movement model, hits
distribution and learning in virtual keyboarding.
Proceedings of the SIGCHI conference on Human
factors in computing systems, ACM (2002), 17-24.