Challenges for Designing the User Experience of
Multi-touch Interfaces
Stefan Bachl1, Martin Tomitsch2, Christoph Wimmer1, Thomas Grechenig1
1 Research Group for Industrial Software (INSO), Vienna University of Technology, Austria
{stefan.bachl, christoph.wimmer, thomas.grechenig} @ inso.tuwien.ac.at
2 Faculty of Architecture, Design and Planning, The University of Sydney, Australia
martin.tomitsch@sydney.edu.au
ABSTRACT
Advances in technology have led to an increased presence
of multi-touch interfaces in consumer products in recent
years. Still, many challenges remain that designers need to face when designing for multi-touch interaction. As multi-touch interfaces become more ubiquitous, it is important to investigate not only their performance for certain tasks, but also the user experience of interacting
with such interfaces. In this paper we discuss eight
challenges that need to be considered when designing the
user experience of multi-touch interfaces. The challenges
also reveal potential areas for future research in the field of
multi-touch interaction.
Keywords
Multi-touch interaction, surface computing, touch
interaction, touch screens, interaction design.
INTRODUCTION
In the last few years multi-touch interfaces have gained a
lot of attention, not only due to their application in mobile
phones but also because of the advantages that come with
this technology. One of the most important advantages compared to other interfaces is the possibility of directly interacting with information on screen using fingers for input.
This provides users with a stronger feeling of having
control over their interactions rather than being controlled
by the system. Another aspect of direct interaction is that it
makes interacting with digital interfaces accessible to a
broad spectrum of users. Furthermore, the technology
enables concurrent co-located collaboration, especially on
larger surfaces. Multi-touch interfaces that simulate physical behaviour (such as gravity or inertia) also lead to higher performance [13]. Combined with visually appealing interface elements and graphics, this might also improve the overall user experience. However, multi-touch
interaction also has its disadvantages, resulting in several
challenges that need to be addressed when designing for
multi-touch. Whereas multi-touch is good for specific use
cases in certain contexts, it is not a general remedy for
interaction design problems. Interfaces designed for mouse
and keyboard interaction cannot be easily augmented with
multi-touch capability without redesigning the interface
accordingly.
This paper provides a general overview of existing
challenges that need to be considered when designing
multi-touch interfaces. It focuses on implications to user
experience rather than purely technological issues, such as
improved algorithms for finger tracking. The challenges are derived, on the one hand, from practical experience developing multi-touch applications and supervising student projects at our research group. On the other hand,
the theoretical background is formed by an extensive
literature review as well as presentations by experts in the
field of multi-touch, human-computer interaction and
natural user interfaces.
In this paper, we present eight multi-touch challenges,
classified into three categories (see Table 1): Screen-based,
user-based and input-based challenges. The screen-based
challenges describe problems related to physical properties
of touch screens. The user-based challenges explore the use
of fingers for direct input as the origin of problems users
are facing when interacting with multi-touch interfaces.
Finally, the input-based challenges outline the difficulties in
interpreting and supporting the input to enhance the user
experience of multi-touch interfaces.
Table 1. Overview of the eight multi-touch challenges discussed in this paper, divided into three categories

Screen-based challenges: Affordance of screens; Tactile user feedback
User-based challenges: Ergonomics; Individual differences; Accessibility
Input-based challenges: Gestures and patterns; Supporting data input; Multi-user support
SCREEN-BASED CHALLENGES
This category is divided into challenges relating to the affordance of screens and challenges related to the lack of tactile user feedback.
Affordance of Screens
One fundamental challenge of designing multi-touch
interfaces lies in the natural affordance of screens. The
physical appearance of screens is responsible for affording
touch [16]. As not all screens are capable of detecting
touch, there are two undesired scenarios: a) the user
touches a normal screen that is not capable of responding to
this interaction technique, and b) the user fails to recognise a touch screen as such. Hardware-specific
details (e.g. the existence or absence of keyboard and
mouse) can help the user to determine the touch capabilities
of a screen. If the designer cannot influence the hardware
design of a screen, the use of written or symbolic
instructions (Figure 1) or already learned user interface
conventions (e.g. a button with three-dimensional
appearance) is necessary to provide visual cues. According
to Norman [16], instructions or conventions do not affect
the physical affordance of the screen itself, but the
perceived affordance by the user.
Figure 1. Written instructions to affect the perceived
affordance of touch screens
Once users become familiar with a screen or device, the
importance of communicating its touch capability
decreases. This is apparent when comparing the design of
public screens (e.g. terminals) to personal devices that are
used regularly (e.g. the Apple iPhone).
Tactile User Feedback
The absence of tactile user feedback related to multi-touch
technology is one of the most discussed and investigated
challenges in the area of multi-touch interfaces, yet there is
no conclusive solution to this problem. Unlike a key press on a physical keyboard, current touch screen technology does not provide tactile feedback when touched. Therefore the use of adequate visual feedback,
including the simple visualisation of the detection of the
users’ fingers, is essential when designing touch screen
interfaces. Acoustic feedback can enhance the effect, but can also harm the experience in collaborative setups [6], because sounds reach all users at the same time.
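To make the discussion concrete, the following minimal sketch (TypeScript, assuming a browser environment with Pointer Events; the canvas element id is hypothetical) illustrates the simple visualisation of detected fingers described above:

```typescript
// Minimal sketch: visualise each detected finger as a ring on a canvas.
const canvas = document.getElementById("surface") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;
const activeTouches = new Map<number, { x: number; y: number }>();

function render(): void {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  for (const { x, y } of activeTouches.values()) {
    ctx.beginPath();
    ctx.arc(x, y, 24, 0, 2 * Math.PI); // ring roughly fingertip-sized
    ctx.lineWidth = 3;
    ctx.strokeStyle = "rgba(255, 255, 255, 0.8)";
    ctx.stroke();
  }
}

canvas.addEventListener("pointerdown", (e) => {
  activeTouches.set(e.pointerId, { x: e.offsetX, y: e.offsetY });
  render();
});
canvas.addEventListener("pointermove", (e) => {
  if (!activeTouches.has(e.pointerId)) return; // ignore hovering pointers
  activeTouches.set(e.pointerId, { x: e.offsetX, y: e.offsetY });
  render();
});
canvas.addEventListener("pointerup", (e) => {
  activeTouches.delete(e.pointerId);
  render();
});
```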
In addition to patents by large companies (e.g. [12], [22]),
there are several research approaches for simulating tactile
feedback. Technological approaches include vibration, piezoelectric actuation, solenoids, pin matrices and ciliated surfaces [7]. Problems of these approaches include high costs, limited scalability and the lack of support for multiple concurrent users. For instance, the vibration of the screen to
provide feedback to an individual user would interfere with
other collaborating users. A completely different approach
[15] employs the users’ mobile phones for distal tactile
feedback through vibration. Harrison and Hudson [7] use
pneumatically actuated physical buttons. However, this approach is limited to a fixed, predefined button layout, which stands in contrast to the typical adaptability of multi-touch interfaces.
Many studies (e.g. [9], [11], [14]) have shown that tactile
feedback can improve performance and decrease error rates
regardless of the technology used. Lee and Zhai [14]
discovered that the combination of vibration and acoustic
feedback does not result in further improvements.
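As an illustration of pairing visual with vibrotactile feedback, the following minimal sketch uses the Web Vibration API, which covers only one of the technological approaches listed above and is available only on supporting devices; the 10 ms pulse length is an assumed value, not taken from the cited studies:

```typescript
// Minimal sketch: pair visual feedback with a short vibration pulse on a
// soft-button press. The Vibration API exists only on supporting devices,
// hence the feature check; the 10 ms "keyclick" pulse is an assumption.
function addTactileFeedback(button: HTMLElement): void {
  button.addEventListener("pointerdown", () => {
    button.classList.add("pressed"); // visual feedback
    if ("vibrate" in navigator) {
      navigator.vibrate(10);         // brief tactile pulse
    }
  });
  button.addEventListener("pointerup", () => {
    button.classList.remove("pressed");
  });
}
```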
USER-BASED CHALLENGES
This category comprises challenges related to ergonomics, individual differences between users, and accessibility.
Ergonomics
The use of fingers for direct input and manipulation also entails ergonomic challenges that depend on the properties of the screen. One important aspect to consider when designing
interfaces for multi-touch applications is partial occlusion
of the screen caused by fingers, hands and arms when users
interact with a touch screen [18]. In contrast to the use of a
computer mouse, users specifically occlude those parts of
the interface they are interacting with when touching
interface elements with their fingers. While this is also the
case when typing on a keyboard, the problem is more
severe on touch screens due to the additional lack of tactile
feedback (see section Tactile user feedback).
This problem is exacerbated on small screens, where touch
targets are also smaller. One approach to address the
problem of occlusion is the so-called back-of-device
interaction (e.g. [2], [24]) where the user can touch the
screen on its backside. Either semi-transparent screens or a
digital visualisation of the fingers help the user locate the
target. Unfortunately this technique does not scale to larger
screens. A more general approach is the Shift technique [19], which displays the occluded content in a callout above the finger while touching, optionally magnified.
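The following sketch illustrates the idea behind such a callout; it is a rough approximation of the concept from [19], not the original algorithm, and the radius, offset and zoom values are assumptions:

```typescript
// Rough sketch of a Shift-style callout: while a finger is down, snapshot
// the occluded region and redraw it magnified above the touch point.
const RADIUS = 40;  // half-size of the occluded region to copy (assumed)
const OFFSET = 100; // vertical distance of the callout above the finger
const ZOOM = 2;

function drawCallout(ctx: CanvasRenderingContext2D, x: number, y: number): void {
  const d = RADIUS * 2;
  // Snapshot the pixels under the finger before drawing over them.
  const region = ctx.getImageData(x - RADIUS, y - RADIUS, d, d);
  const snapshot = new OffscreenCanvas(d, d);
  snapshot.getContext("2d")!.putImageData(region, 0, 0);
  // Redraw the snapshot, magnified, inside a circular clip above the finger.
  ctx.save();
  ctx.beginPath();
  ctx.arc(x, y - OFFSET, RADIUS * ZOOM, 0, 2 * Math.PI);
  ctx.clip();
  ctx.drawImage(snapshot, x - RADIUS * ZOOM, y - OFFSET - RADIUS * ZOOM,
                d * ZOOM, d * ZOOM);
  ctx.restore();
}
```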
Another important effect to consider is muscle fatigue
while interacting with large touch screens. With a mouse,
users can reach distant positions of the screen with minor
movements of their hands. On touch screens, users have to cover larger distances with their arms and hands. This
problem becomes more complex when designing applications that support multiple hardware platforms. Movements on vertically mounted touch screens can be even more strenuous. Therefore it is important to consider hand and
arm movements for the positioning of user interface
elements to prevent early muscle fatigue.
Individual Differences
Individual differences between users play an important role
when designing multi-touch interfaces. For instance, unlike computer mice, which are available in different sizes, touch screens will not be produced in different sizes to accommodate different hand sizes. Therefore, different physical properties
have to be considered while designing the interface (e.g.
size of buttons). Not only do hand and finger sizes vary from one person to another; the fingers of one hand also have different pointing properties (such as size). The decision of which fingers will be used for which gestures (e.g.
object rotation) is dependent on finger properties and
ergonomics. This leads to important design considerations
as well as to technological challenges. As an example, a
touch target should be greater than 11.5 mm, according to
Wang and Ren [21]. This presents a challenge when
designing touch screen interfaces for mobile devices with
small screens. Other considerations [18] include the handedness of users; fingernails, which can make the detection of touch points difficult; gloves, which can prevent the detection of fingers in capacitive touch setups; and fingerprints, which prompt users to wipe over a screen that is still listening for deliberate input.
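Because the 11.5 mm recommendation by Wang and Ren [21] is a physical size, it has to be converted to pixels for a given display density. A minimal sketch of that conversion (the dpi value must come from the device or platform; 326 dpi is used purely as an example):

```typescript
// Minimal sketch: convert the 11.5 mm minimum target size reported by
// Wang and Ren [21] into physical pixels for a given display density.
const MIN_TARGET_MM = 11.5;
const MM_PER_INCH = 25.4;

function minTargetPx(dpi: number): number {
  return Math.ceil((MIN_TARGET_MM / MM_PER_INCH) * dpi);
}

console.log(minTargetPx(326)); // 148 px on a 326 dpi phone screen
```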
Figure 2. Accuracy and occlusion of mouse (left) compared to finger (right)
Another important consideration is the accuracy of fingers
compared to a computer mouse. A computer mouse has a
target zone of one pixel, whereas targeting a specific single
pixel with a finger can become nearly impossible (Figure
2). Techniques for helping users to target the right spot
exist (e.g. [1], [17]) and should be considered when
designing touch interfaces. The differences in accuracy and touch target size between finger and mouse emphasise that existing interfaces should not simply be reused or enabled for touch interaction without appropriate adaptation.
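One common way to compensate for the lower accuracy of fingers, in the spirit of the target-acquisition techniques cited above (e.g. [1], [17]), is to hit-test against the nearest target within the radius of a finger contact rather than a single pixel. A minimal sketch, in which the Target type and the 22 px radius are illustrative assumptions:

```typescript
// Minimal sketch: resolve a touch to the nearest target centre within the
// radius of a typical finger contact, instead of one-pixel hit testing.
interface Target { id: string; x: number; y: number }

function hitTest(touchX: number, touchY: number,
                 targets: Target[], maxRadius = 22): Target | null {
  let best: Target | null = null;
  let bestDist = maxRadius; // anything farther than a finger-width misses
  for (const t of targets) {
    const dist = Math.hypot(t.x - touchX, t.y - touchY);
    if (dist < bestDist) {
      best = t;
      bestDist = dist;
    }
  }
  return best;
}
```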
Accessibility
The lack of tactile user feedback (see section Tactile user
feedback) also impacts the accessibility of touch screens.
Especially blind and visually impaired people face barriers
when interacting with touch screens. Existing products,
such as the Apple iPhone 3GS, try to solve this problem by
providing screen readers that give synthesised speech
feedback when touching the screen. However, if more
complex tasks need to be accomplished, screen readers
might not be sufficient. Kane et al. [10] use the advantages of gestures to help people with disabilities navigate through menus; although tasks could be solved more quickly than with button-based interfaces, the error rate was higher. Another approach uses relative inputs for text-
entry on a touch screen [26]. The first touch at any position
on the screen represents the central position of one of three
layers. A swipe in one of eight basic directions selects the corresponding character, while resting on the central position cycles through the three layers.
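The following sketch illustrates this relative input scheme [26]: a swipe is classified into one of eight 45° sectors and a character is selected from the current layer. The character layout shown here is invented for illustration; the original assigns characters differently:

```typescript
// Sketch of relative text entry: the first touch anchors a centre; a swipe
// in one of eight directions selects a character from the current layer.
const LAYERS: string[][] = [
  ["a", "b", "c", "d", "e", "f", "g", "h"], // sectors: N, NE, E, SE, S, SW, W, NW
  ["i", "j", "k", "l", "m", "n", "o", "p"],
  ["q", "r", "s", "t", "u", "v", "w", "x"],
];

function charForSwipe(dx: number, dy: number, layer: number): string | null {
  if (Math.hypot(dx, dy) < 20) return null; // too short: stay to cycle layers
  // Map the swipe angle (0° = up, clockwise) to one of eight sectors.
  const angle = (Math.atan2(dx, -dy) * 180) / Math.PI;
  const sector = Math.round(((angle + 360) % 360) / 45) % 8;
  return LAYERS[layer % LAYERS.length][sector];
}

console.log(charForSwipe(0, -50, 0)); // "a" (swipe up, layer 1)
console.log(charForSwipe(50, 0, 1));  // "k" (swipe right, layer 2)
```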
Other disabilities have to be considered as well when
designing multi-touch applications, particularly in a public
context. For instance, physical disabilities can cause particular problems when interacting with large touch screens or performing complicated gestures.
INPUT-BASED CHALLENGES
This category comprises challenges related to gestures and patterns, supporting data input, and multi-user support.
Gestures and Patterns
With the success of multi-touch enabled devices, gestures are becoming increasingly inconsistent [23] across different manufacturers. Patents that protect specific
gestures for a manufacturer aggravate this development. As
a result, competitors can either provide no gesture or invent
an alternative. Therefore standardisation becomes difficult
to achieve. However, approaches to provide de facto
standards do exist. Wobbrock et al. [25] propose a user-
defined gesture set for tabletops as well as a taxonomy of
surface gestures.
When gestures get more complex, the number of people
that are able to perform these gestures without instructions
decreases. According to Saffer [18], “the complexity of the
gesture should match the complexity of the task at hand”.
Major challenges have to be faced when trying to define
intuitive, self-revealing gestures for complex tasks that go
beyond zooming, rotating and swiping. Saffer [18] proposes that complex tasks should be realised with simple gestures or controls (e.g. buttons or menu systems), with more sophisticated gestures additionally provided for expert users.
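As an example of the simple gestures discussed here, the following minimal sketch derives scale and rotation from the vector between two touch points, which is the usual basis of pinch-to-zoom and rotate gestures (an illustration, not a platform standard):

```typescript
// Minimal sketch: derive scale and rotation of a two-finger gesture from
// the vector between the two touch points at gesture start and now.
interface Point { x: number; y: number }

function twoFingerTransform(startA: Point, startB: Point,
                            currA: Point, currB: Point) {
  const v0 = { x: startB.x - startA.x, y: startB.y - startA.y };
  const v1 = { x: currB.x - currA.x, y: currB.y - currA.y };
  const scale = Math.hypot(v1.x, v1.y) / Math.hypot(v0.x, v0.y);
  const rotation = Math.atan2(v1.y, v1.x) - Math.atan2(v0.y, v0.x); // radians
  return { scale, rotation };
}

// One finger orbits the other while they move apart: 2x zoom, 90° rotation.
console.log(twoFingerTransform(
  { x: 0, y: 0 }, { x: 100, y: 0 },
  { x: 0, y: 0 }, { x: 0, y: 200 },
)); // { scale: 2, rotation: ~1.571 }
```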
The context of interaction with a multi-touch interface has
to be considered when defining gestures as well. For
instance, the use of complex gestures with more than one finger is not always possible on mobile devices, due to the varying ways in which such devices are held and used.
When designing applications that support multiple different
hardware platforms, screen and hardware properties have to
be included in gesture considerations. This includes the number of concurrently trackable fingers, the screen size, and the touch screen technology with its related accuracy.
Supporting Data Input
The lack of tactile user feedback also affects the user
experience of data input on multi-touch interfaces. There
are no methods available that make use of the advantages
of multi-touch for data input while also overcoming this
problem. One big advantage of virtual keyboards is the ability to dynamically adapt keys to the desired input (for instance, a keyboard for an email address field where certain characters are forbidden), the context, or the users' abilities (e.g. QWERTY versus ABCDEF layout). Furthermore, multi-touch enhances the
user experience compared with single-touch since keys do
not have to be pressed consecutively.
The dispute over the use of virtual versus physical
keyboards in mobile devices has resulted in a variety of
different concepts for virtual data input methods (e.g. http://www.retype.ch, http://www.shapewriter.com/iphone.html, http://www.touchtype-online.com).
Nevertheless, small screens imply small virtual keys and
therefore high error rates combined with lower
performance. Adaptive algorithms (e.g. [5]) counteract these problems by invisibly resizing touch targets according to predictions based on preceding input. Other approaches (e.g. [8]) address problems
like occlusions and resting palms which have to be
classified as invalid input, and try to reduce finger
movements on larger surfaces. Again, the Shift technique
[19] is a very common approach to handle occlusions. Zhai
et al. [27] examine the performance of virtual keyboards on the basis of Fitts' law and the corresponding learning curve.
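The following sketch illustrates the principle behind key-target resizing in the spirit of [5]: a touch is resolved to the key that best balances distance against the predicted probability of its character being typed next. The language model is stubbed out via a callback, and the bias weight is an assumption:

```typescript
// Sketch of usability-guided key-target resizing: a likely next character
// effectively enlarges its key's target zone, so a slightly off-centre
// touch resolves to the more probable neighbouring key.
interface Key { char: string; x: number; y: number }

function resolveKeyPress(touch: { x: number; y: number }, keys: Key[],
                         nextCharProb: (c: string) => number): Key {
  let best = keys[0];
  let bestScore = Infinity;
  for (const k of keys) {
    const dist = Math.hypot(k.x - touch.x, k.y - touch.y);
    const score = dist - 15 * nextCharProb(k.char); // 15 px bias: assumed weight
    if (score < bestScore) {
      best = k;
      bestScore = score;
    }
  }
  return best;
}
```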
Multi-user Support
One of the most obvious technological challenges when
designing a system for several co-located users, who
interact simultaneously, is the ability to distinguish between
individual users. Still, there is no conclusive solution for
multi-user support that works for different touch screen
technologies. DiamondTouch [3] is a well-known approach that distinguishes between users through an array of antennas in the table surface, with each user capacitively coupled to a separate receiver. However, the
approach is limited to a very specific tabletop setup. It does
not use distinguishing characteristics of the users
themselves (such as fingerprints) and therefore requires
initialisation.
As collaborative multi-touch applications often do not
specify dedicated zones for each user, the distinction
between individual users would allow further
enhancements to improve the user experience. The position
of the user(s) (e.g. around a multi-touch tabletop) cannot be
determined reliably in every situation and with every
hardware setup, although a few approaches exist. Wang et
al. [20] analyse the orientations of the contact shapes (fingertips) to detect the corresponding users, but suffer
from the fact that not every contact point in multi-touch
interaction is an oblique touch (i.e. touch with the finger
pad instead of the fingertip). Dohse et al. [4] augment a
multi-touch tabletop setup with hand tracking by mounting
an additional camera above the table to distinguish between
users. Although very promising, the approach only works
when users stand on opposite sides of a tabletop.
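The following sketch illustrates, in strongly simplified form, the idea underlying the approach of Wang et al. [20]: a fingertip contact is roughly elliptical, and its major axis points back towards the user's hand, so its orientation hints at which side of the table the touch came from. The two-sided setup and the angle convention are illustrative assumptions:

```typescript
// Strongly simplified sketch: attribute a touch to a table side from the
// orientation of the contact ellipse's major axis (0° = towards the far
// edge, measured clockwise; both conventions are assumed for illustration).
function tableSideForContact(majorAxisAngleDeg: number): "near" | "far" {
  const a = ((majorAxisAngleDeg % 360) + 360) % 360; // normalise to [0, 360)
  // An axis pointing towards the near edge (90°-270°) suggests a user
  // standing there; otherwise attribute the touch to the far side.
  return a >= 90 && a < 270 ? "near" : "far";
}
```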
CONCLUSION
In this paper we presented an overview of currently
existing challenges that have an impact on the user
experience of multi-touch interfaces. The eight identified
challenges are based on our experience with designing
multi-touch interfaces as well as an extensive review of
recent literature and presentations from this field. There are
three different groups of solutions to these challenges: 1)
challenges that need to be addressed through technological
innovation, 2) challenges that can be solved through
interface design, and 3) challenges that need to be tackled
both through interface design and on a hardware level.
The first group covers challenges that require solutions in
terms of new technology. To provide authentic tactile user feedback, the challenge is to invent reliable, scalable touch screen technology. To enable multi-user support,
unique characteristics, such as fingerprints, need to be
incorporated. The second group covers challenges that can
be addressed through interface design solutions. Individual
differences of fingers, hands and arms should be considered
when designing multi-touch interfaces. Further, the user
experience of different gestures and patterns defined to
perform specific actions can be supported through
appropriate interface design. The third group covers the
remaining challenges that can be approached through
interface design but also partly in terms of technology or
hardware development. The physical affordance of a touch
screen is defined by its hardware appearance, but perceived
affordance can be manipulated by changing the visual
appearance of interface design elements. Ergonomics of
touch screens should be considered when designing multi-
touch interfaces, but technological advances also need to be
made to prevent occlusions. Similarly, accessibility can be
approached with hardware solutions or sophisticated
interaction methods. Finally, the support of high-performance, reliable data input can be partly implemented using intelligent keyboard interfaces, but is also dependent on technological issues such as tactile user feedback.
The list of challenges discussed in this paper not only
reveals current challenges for designing the user experience
of multi-touch interfaces, but also suggests possible
directions for further research.
REFERENCES
1. Albinsson, P. and Zhai, S. High precision touch screen
interaction. Proceedings of the SIGCHI conference on
Human factors in computing systems, ACM (2003),
105-112.
2. Baudisch, P. and Chu, G. Back-of-device interaction
allows creating very small touch devices. Proceedings
of the conference on Human factors in computing
systems, ACM (2009), 1923-1932.
3. Dietz, P. and Leigh, D. DiamondTouch: a multi-user
touch technology. Proceedings of the ACM symposium
on User interface software and technology, ACM
(2001), 219-226.
4. Dohse, K.C., Dohse, T., Still, J.D., and Parkhurst, D.J.
Enhancing Multi-user Interaction with Multi-touch
Tabletop Displays Using Hand Tracking. International
Conference on Advances in Computer-Human
Interaction, IEEE Computer Society (2008), 297-302.
5. Gunawardana, A., Paek, T., and Meek, C. Usability
guided key-target resizing for soft keyboards.
Proceeding of the conference on Intelligent user
interfaces, ACM (2010), 111-118.
6. Hancock, M.S., Shen, C., Forlines, C., and Ryall, K.
Exploring non-speech auditory feedback at an
interactive multi-user tabletop. Proceedings of Graphics
Interface 2005, Canadian Human-Computer
Communications Society (2005), 41-50.
7. Harrison, C. and Hudson, S.E. Providing dynamically
changeable physical buttons on a visual display.
Proceedings of conference on Human factors in
computing systems, ACM (2009), 299-308.
8. Hirche, J., Bomark, P., Bauer, M., and Solyga, P.
Adaptive interface for text input on large-scale
interactive surfaces. Proceedings of TABLETOP 2008,
IEEE (2008), 163-166.
9. Hoggan, E., Brewster, S.A., and Johnston, J.
Investigating the effectiveness of tactile feedback for
mobile touchscreens. Proceedings of the SIGCHI
conference on Human factors in computing systems,
ACM (2008), 1573-1582.
10. Kane, S.K., Bigham, J.P., and Wobbrock, J.O. Slide
rule: making mobile touch screens accessible to blind
people using multi-touch interaction techniques.
Proceedings of the ACM SIGACCESS conference on
Computers and accessibility, ACM (2008), 73-80.
11. Koskinen, E., Kaaresoja, T., and Laitinen, P. Feel-good
touch: finding the most pleasant tactile feedback for a
mobile touch screen button. Proceedings of conference
on Multimodal interfaces, ACM (2008), 297-304.
12. Laitinen, P. Tactile Touch Screen. International
Application PCT/EP2006/009377, filed 27 September
2006.
13. Lee, S. and Lee, W. Exploring effectiveness of physical
metaphor in interaction design. Proceedings of the
conference extended abstracts on Human factors in
computing systems, ACM (2009), 4363-4368.
14. Lee, S. and Zhai, S. The performance of touch screen
soft buttons. Proceedings of the conference on Human
factors in computing systems, ACM (2009), 309-318.
15. McAdam, C. and Brewster, S. Distal tactile feedback
for text entry on tabletop computers. Proceedings of the
British Computer Society Conference on Human-
Computer Interaction, British Computer Society (2009),
504-511.
16. Norman, D.A. Affordance, conventions, and design.
interactions 6, 3 (1999), 38-43.
17. Potter, R.L., Weldon, L.J., and Shneiderman, B.
Improving the accuracy of touch screens: an
experimental evaluation of three strategies. Proceedings
of the SIGCHI conference on Human factors in
computing systems, ACM (1988), 27-32.
18. Saffer, D. Designing Gestural Interfaces: Touchscreens
and Interactive Devices. O'Reilly Media, Inc., 2008.
19. Vogel, D. and Baudisch, P. Shift: a technique for
operating pen-based interfaces using touch.
Proceedings of the SIGCHI conference on Human
factors in computing systems, ACM (2007), 657-666.
20. Wang, F., Cao, X., Ren, X., and Irani, P. Detecting and
leveraging finger orientation for interaction with direct-
touch surfaces. Proceedings of the ACM symposium on
User interface software and technology, ACM (2009),
23-32.
21. Wang, F. and Ren, X. Empirical evaluation for finger
input properties in multi-touch interaction. Proceedings
of the conference on Human factors in computing
systems, ACM (2009), 1063-1072.
22. Westerman, W.C. Keystroke tactility arrangement on a
smooth touch surface, US Patent 20090315830, filed 24
December 2009.
23. Wigdor, D., Fletcher, J., and Morrison, G. Designing
user interfaces for multi-touch and gesture devices.
Proceedings of the conference extended abstracts on
Human factors in computing systems, ACM (2009),
2755-2758.
24. Wigdor, D., Forlines, C., Baudisch, P., Barnwell, J., and
Shen, C. Lucid touch: a see-through mobile device.
Proceedings of the ACM symposium on User interface
software and technology, ACM (2007), 269-278.
25. Wobbrock, J.O., Morris, M.R., and Wilson, A.D. User-
defined gestures for surface computing. Proceedings of
the conference on Human factors in computing systems,
ACM (2009), 1083-1092.
26. Yfantidis, G. and Evreinov, G. Adaptive blind
interaction technique for touchscreens. Universal
Access in the Information Society 4, 4 (2006), 328-337.
27. Zhai, S., Sue, A., and Accot, J. Movement model, hits
distribution and learning in virtual keyboarding.
Proceedings of the SIGCHI conference on Human
factors in computing systems, ACM (2002), 17-24.