Pinch: An Interface That Relates Applications
on Multiple Touch-Screen by ‘Pinching’ Gesture
Takashi Ohta and Jun Tanaka
Tokyo University of Technology, School of Media Science
takashi@stf.teu.ac.jp, j.tanaka@eje-c.com
http://www2.teu.ac.jp/media/~takashi/cmdeng/CmdEng/
Abstract. We devised a new user interface that relates applications running on multiple mobile devices when the surfaces of juxtaposed screens are merely pinched. The multiple-screen layout can be changed dynamically and instantly even while applications are running on each device. This interface can introduce a new kind of interaction: rearrangement of devices triggers a certain reaction of contents. We expect this interface to show great potential to inspire various application designs, and we expect to enrich the contents by offering interaction that a single display or a static multi-display environment cannot provide. To prove and demonstrate that the interface is functional, we implemented a framework for using the interface and developed several applications using it. Although these applications are simple prototypes, they received favorable responses from audiences at several exhibitions.
Keywords: User Interface, Multi-Display, Interaction, Mobile Device, Touch Screen, Dynamic Reconfiguration, Face-to-Face.
1 Introduction
Through our experience of creating interactive applications in multi-display environments, we felt it would be possible to create more interesting representations using multiple displays if we could add more dynamic features to them. Multi-display systems are generally static in their composition, and are mainly used for offering a very large screen or a high-resolution display [1][2]. If interactive applications such as media-art works run on a multi-display system, the multiple displays can be expected to make more of an impact on an audience than a single display would. However, if the usage of multiple displays remains limited to forming a larger but single virtual screen, the designs and interactions of applications are unlikely to differ much from those designed for a single display. Although that is suitable for scientific visualization purposes, we believe that using multiple displays has greater potential as a platform for interactive applications.
In pursuing the potential of multi-displays, we decided to explore ways in which the displays' layout can be changed interactively even while applications are running. We sought an interaction in which changing the displays' layout causes a reaction in the application. First, we created applications that achieve
such an interaction with notebook PCs [3], by attaching sensors to them. However, because notebook PCs are too large and heavy to carry around casually, we could not find appropriate application scenarios. Because mobile devices such as smartphones and tablet PCs have become popular, we decided to port the idea and mechanism to these platforms, which are ideal candidates for our purposes because of their mobility and popularity.
We did not want to attach sensors to mobile devices, and it would not be tolerable to connect devices by opening a configuration panel on a screen and registering them manually one by one. We wanted a more interactive interface for this action. We propose a simple and intuitive interface: a "pinching" gesture, accomplished by putting the forefinger and thumb on juxtaposed screens and swiping them as if to stitch the screens together. This linking of displays becomes possible by choosing mobile devices with touch screens as our platform. We also use "shaking" of a device to break a connection. We prepared a framework providing the interface's functions and created three prototype applications to demonstrate that this interface can be a foundation for various representations.
In this paper, we present the concept and mechanism of our interface, its implementation, and the applications developed using it.
2 Related Work
Several reports in the literature have described research using dynamically reconfigurable multiple display devices. "DataTiles" consists of a flat display and small transparent tiles [4]. Each tile has an RFID tag, and reading sensors are mounted on the panel, so the system can recognize when a tile is placed on the panel. When a tile is placed, contents associated with that tile's category are displayed automatically on the panel in that area. This research demonstrates a kind of interface in which multiple display units are used, and a physical interaction, placing a unit onto a panel, triggers content to react. Other studies have investigated the use of physically independent display devices.
"ConnecTables" [5] is a system that dynamically connects two displays into a single virtual screen to produce a collaborative workspace. A display unit, called a ConnecTable, is built from a graphics tablet and built-in sensors. Two units detect each other when they are moved close together; their screens are then connected to form a single display area. On the connected screen, users can share information by moving displayed objects between devices. Hinckley proposed using the bumping of displays to trigger a connection [6]: an acceleration sensor detects the vibration of the bumping motion, and the display regions are then connected to form a single workspace. "Stitching" [7] is a similar method that has been examined for building a collaborative workspace from multiple displays.
This method uses a stylus pen to connect displays. The system recognizes a pen stroke that spans multiple displays, and forms a temporary single screen by deducing the displays' relative positions so that the pen's trail appears as a continuous line. These studies show variants of the approach: some use sensors to detect the physical contact of displays, some use a gesture to detect when contact occurs, and others use a pen's trail to ascertain positions more precisely.
Some works use mobile devices. "Junkyard Jumbotron" [8] is an application that combines devices, including smartphones and PC displays, and binds them into a single large virtual screen. It configures the relative positioning of the devices by detecting specific graphical markers shown on each display using a camera. "Siftables" [9] is a specifically designed tiny block device with a display, equipped with built-in sensors on its four sides to detect the others. That approach can be characterized as similar to ConnecTables and Hinckley's work.
Differences in research objectives explain the differences between these projects and our work. Our approach is similar to "Stitching" in deriving the displays' relative positions from a trail drawn on the screens, but we put more weight on changing the display layout dynamically. We use the gesture not only for prompting the connection of displays, but as an interface to invoke a reaction in applications. Additionally, we chose the gesture of "pinching" as a physical analogy of gluing two things together, so that a user feels as if actually connecting the devices manually. Junkyard Jumbotron was designed to create a single large screen with temporarily assembled devices, whereas our work is intended to produce an application platform that uses the change of display layout as a means of interaction. Therefore, Junkyard Jumbotron detects and configures the display positioning as a whole at one time, whereas ours does not use such a configuration approach.
An important difference from Siftables is that our system uses ubiquitous devices such as smartphones. Siftables are specifically tailored devices owned by one player, whereas we expect a person to ask friends to bring their own devices to play together. Using temporarily assembled devices is our approach's major feature.
3 "Pinch" Interface
We would like to use multiple displays not for building a static large virtual screen, but for creating fascinating interactions of contents triggered by rearrangement of the display layout. For that purpose, we need an interactive and instant means of reconfiguring the display layout. We also want to change the layout repeatedly, and to be able to add or remove devices at any time. We do not seek a configuration tool to do that; we want an interaction that prompts the connecting of displays and simultaneously triggers a reaction in the contents. When we chose smartphones (iPhone and iPod Touch) and tablet PCs (iPad) as our platform, we came up with the idea of using a "pinching" gesture, because these devices are equipped with touch screens that we can exploit.
What we call "pinching" is the action of putting a forefinger and thumb on the display surfaces of two juxtaposed devices and swiping them toward each other until they meet. The salient advantage of the pinching gesture is that it is extremely simple and intuitive: it readily evokes the action of stitching two things together. It is also easy to deduce the physical positions of the screens from the gesture if we assume that it is applied with the forefinger and thumb of the same hand. We can then safely expect that the two fingers move along the same straight line, in opposite directions, at the very same time, on screens facing the same direction. Such information is obtainable by detecting touches on the screens. A pinched pair can be determined by sharing this information among devices and finding a pair of motions that satisfies the conditions above. Consequently, a pinching action on the touch screens enables the connection of displays without extra sensors.
Fig. 1. Connect displays using a pinching gesture
The gesture is useful not only for making a connection of displays. We want an interface that relates the displayed contents rather than one that merely connects the devices. For example, as shown in Fig. 1, when two displays are pinched, they become connected, and at the very same time an image appears across both screens. In other words, we want to achieve interaction of contents that occurs by moving displays around. Designing a reconfigurable multi-display system itself is not the direct objective of this research.
To realize such an interaction, an application must react to the events of connection and disconnection of displays caused by the pinching action. We designed these events to occur spontaneously, without requiring an extra step to relate the application's content. We expect that this approach can produce fascinating applications that make users feel as though they control digital contents by physically handling objects.
4 System Design
In this section, we explain the design and implementation of the interface system. Our intention is to have a dynamic interaction between a gesture and content; therefore, a pinching gesture can be applied while the application is running. The entire procedure for connecting displays and letting applications react to the event consists largely of three steps: find the pinched pair, determine the screen positions, and call an appropriate reaction in an application. We explain here how these steps are implemented and what tasks are done in each step.
4.1 Determine a Connected Pair
When an application starts, it seeks others on a network. Once it finds other devices on which a compatible application is running, the application registers their network addresses and establishes connections with them. Finding others on the network is done automatically using Apple's Bonjour protocol, which is designed for publicizing a network service to other devices. Bonjour identifies a service by an identifier designating the kind of service and the type of transport protocol, in a format like "_pinch._tcp". In this way, the application can find other fellow "Pinch"-able applications on the network; a minimal sketch of this discovery step follows. We prepared the network-connection function so that it can use either Wi-Fi or Bluetooth. Each has its merits and shortcomings in terms of performance, which we discuss in a later section.
Once a group of devices establishes connections, they are ready to send and receive information about a pinching action. On each device, the application observes whether a swiping motion is applied to its screen. When the application notices that a swipe occurs, it sends information about that motion to all other devices (Fig. 2).
Fig. 2. Broadcasting of motion data
The information about the motion consists of the data listed in Table 1. If a swiping motion results from a "Pinch" action, then swiping motions can be expected to occur on two devices simultaneously. Therefore, if an application receives swiping information while it has its own swipe in the same time window, it can deduce whether the swipes result from a pinching action by comparing the time stamps in the respective devices' information. A pair of swiping motions is identified as a pinching gesture when the data of these motions satisfy the following conditions (a sketch of this matching follows the list):
– the motions occur simultaneously
– the screens' surfaces face the same direction
– the swipes move in opposite directions
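As an illustration, here is a minimal Swift sketch of this matching. The SwipeInfo fields are our own reading of the broadcast data of Table 1, we assume each device includes a heading in a shared frame (e.g., from its compass) so that swipe directions can be compared across devices, and the thresholds are arbitrary.

```swift
import Foundation

// Assumed shape of the broadcast motion data (cf. Table 1); names are ours.
struct SwipeInfo {
    var timestamp: TimeInterval  // when the swipe occurred
    var dx: Double, dy: Double   // unit swipe direction in screen coordinates
    var heading: Double          // device orientation in a shared frame, radians
    var faceUp: Bool             // whether the screen surface faces upward
}

// Rotate a local swipe direction into the shared frame.
func worldDirection(_ s: SwipeInfo) -> (x: Double, y: Double) {
    (x: s.dx * cos(s.heading) - s.dy * sin(s.heading),
     y: s.dx * sin(s.heading) + s.dy * cos(s.heading))
}

// True when a local and a remote swipe plausibly form one pinching gesture.
func isPinchPair(_ a: SwipeInfo, _ b: SwipeInfo) -> Bool {
    guard abs(a.timestamp - b.timestamp) < 0.2 else { return false } // simultaneous
    guard a.faceUp == b.faceUp else { return false }                 // same facing
    let wa = worldDirection(a), wb = worldDirection(b)
    return wa.x * wb.x + wa.y * wb.y < -0.9   // nearly opposite directions
}
```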
As might be apparent from the explanation above, no central architecture exists for managing this information. Network communication is done peer-to-peer among the devices, with no server delivering application contents. The processes run on each device independently, exchanging the necessary information on necessary occasions with the corresponding devices. Additionally, the system needs no extra devices such as sensors attached to the displays: detecting the motions applied to the touch screens provides sufficient information to configure the connection. These features, no central server and no extra attachments, are great advantages in realizing the use of multiple displays with temporarily gathered commodity devices.
Table 1. Broadcasting of motion data
4.2 Connection of Screens
After a pinched pair is discovered, because we allow an arbitrary screen layout, each device's screen coordinates must be determined relative to the other's. Each device of the pair holds the swipe-motion information of the other device; therefore, each can deduce the relative position by analyzing that data.
To explain the process, we assume the swipe motions shown in Fig. 3. The procedure to determine the relative screen coordinates is conducted as depicted in Fig. 4, and a sketch of the computation follows the list. The following is performed in each step:
1. position screens A and B so that they overlap completely
2. move screen B by the distance between swipe A's location and screen A's center position
3. rotate screen B by the difference between the two devices' directions
4. move screen B further by the distance between swipe B's location and screen B's center position
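The following Swift sketch is our reading of these four steps; it computes where screen B's center ends up in screen A's coordinate system, given each swipe's touch point measured from its own screen's center and the rotation between the devices. All names are ours, not the framework's.

```swift
import Foundation

struct Vec { var x: Double; var y: Double }

func rotate(_ v: Vec, by angle: Double) -> Vec {
    Vec(x: v.x * cos(angle) - v.y * sin(angle),
        y: v.x * sin(angle) + v.y * cos(angle))
}

// Returns screen B's center in A's coordinates, plus B's rotation.
func placeScreenB(swipeAFromCenterA: Vec,   // step 2 input
                  swipeBFromCenterB: Vec,   // step 4 input
                  rotationBtoA: Double)     // step 3 input
                  -> (center: Vec, rotation: Double) {
    // 1. Start with B overlapping A completely (B's center at A's center).
    var center = Vec(x: 0, y: 0)
    // 2. Move B so its center sits at swipe A's location on screen A.
    center.x += swipeAFromCenterA.x
    center.y += swipeAFromCenterA.y
    // 3-4. Rotate B by the devices' direction difference, then move it by the
    // vector from swipe B's touch point back to B's center, expressed in A's
    // frame. The pinch point is then the same physical spot on both screens.
    let back = rotate(Vec(x: -swipeBFromCenterB.x, y: -swipeBFromCenterB.y),
                      by: rotationBtoA)
    center.x += back.x
    center.y += back.y
    return (center, rotationBtoA)
}
```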
Using this procedure, an application running on each device can know the other connected screen's relative location and can convert the positions of objects on the other screen into its own coordinates, and vice versa. The process is also applicable to screens of different sizes; therefore, the mechanism works with combinations of smartphones and tablet PCs.
Fig. 3. Swipe motion and screen coordinates
Fig. 4. Process to determine relative screen coordinates
4.3 Application Programming Framework
We designed a programming framework to ease the development of applications compatible with the interface. It handles the procedures of networking, detection of the pinching action, conversion of screen coordinates, relaying messages among multiple devices, disconnection by a shaking gesture, and so forth. It covers most of the system work and saves a developer from coding these parts. Fig. 5 shows how the framework's layers are constructed. With the framework, developers can concentrate only on the coding of graphics and reactions.
Fig. 5. Framework layers
The class structure of the framework is shown in Fig. 6. The framework is currently implemented in Objective-C for iOS 5 and later, for use on the iPhone, iPod Touch, and iPad. Most of the functions are gathered in the PinchController class; therefore, a developer can access the Pinch interface's functions simply by using this class and registering his or her own View object with it.
Fig. 6. Class structure of the framework
The other important classes in the framework are PinchControllerDelegate and PinchMessage. The former receives a message when the device is connected with others; by setting reactions in the methods it provides, an application can react when displays are connected. Each application's specific reaction should be invoked from here. PinchMessage is a container for a message and is used for sending and receiving messages between devices; it also supports relaying a message through multiple devices.
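The paper lists these class names but not their signatures, so the following Swift sketch only illustrates the delegate style described above; the protocol methods and types are hypothetical, not the framework's actual Objective-C API.

```swift
import Foundation

// Hypothetical delegate protocol in the style the paper describes.
protocol PinchControllerDelegate: AnyObject {
    func pinchController(_ c: PinchController, didConnectTo deviceID: String)
    func pinchController(_ c: PinchController, didDisconnectFrom deviceID: String)
}

// Stand-in for the framework class; discovery, swipe matching, and
// coordinate conversion would all happen inside it.
final class PinchController {
    weak var delegate: PinchControllerDelegate?
}

// An application reacts to layout changes only through these callbacks.
final class CricketScene: PinchControllerDelegate {
    let pinch = PinchController()
    init() { pinch.delegate = self }

    func pinchController(_ c: PinchController, didConnectTo deviceID: String) {
        // e.g., unlock the shared screen edge so the cricket can cross over
    }
    func pinchController(_ c: PinchController, didDisconnectFrom deviceID: String) {
        // e.g., make the cricket bounce at this edge again
    }
}
```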
5 Applications
We developed applications that employ the "Pinch" interface to examine whether the system is actually as functional as we expected. In addition, we designed three applications to demonstrate that the interface affords a variety of interactions. In this section, we also explain what must be prepared on the application side for a proper reaction to a display-layout change.
5.1 Traveling Crickets
This is a very simple example application in which a graphic object can move among multiple displays when they are connected. When the application is started on one display, a cricket appears in a grass field shown on the screen. When the user taps just behind the cricket, it jumps and moves forward. The movement is, however, restricted by the screen's boundary: when the insect reaches an edge, it bounces back. Connecting a new display provides an additional field into which the cricket can move beyond the edge of one screen. When the cricket crosses a boundary to a different screen (Fig. 7), the chirping sound of the cricket moves with it and is heard from the next device as well.
Fig. 7. Cricket moves beyond the edge to a different display
Making a graphic object move between devices is done in the following way (as shown in Fig. 8), assuming that the object is originally located on the screen of device A:
1. set the positions of the cricket's original location and destination in device A's screen coordinates
2. deduce these positions in device B's screen coordinates
3. create a copy of the cricket object on device B, at the cricket's original position converted to device B's screen coordinates
4. move the cricket objects on both screens simultaneously
5. remove the cricket object from device A after the animation is completed
As explained above, moving an object between the displays is done by copying the object instance to the new device when the object crosses to a different screen. This differs from an approach in which a central device does all the calculation of the graphical objects' movements and broadcasts them to the other devices. It suits our choice of platform because no device becomes indispensable, which means that any device can be removed from the connection. This application design therefore lends flexibility to the system and the interface. A sketch of device A's side of this handoff follows.
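This is a minimal Swift sketch of the five steps above, assuming the coordinate conversion of Sect. 4.2 and a message channel to device B are available as injected functions; the message shape and all names are our own, not the paper's.

```swift
import Foundation

struct Pt: Codable { var x: Double; var y: Double }

// Messages device A sends to device B during the handoff (our own protocol).
enum CricketMessage: Codable {
    case spawn(at: Pt)                      // step 3: create the copy on B
    case animate(to: Pt, seconds: Double)   // step 4: keep both screens in sync
}

// Device A's side of the handoff, mirroring the five numbered steps above.
func handOff(from start: Pt, to end: Pt,
             convertToB: (Pt) -> Pt,              // Sect. 4.2 conversion
             sendToB: (CricketMessage) -> Void,   // established P2P channel
             animateLocally: (Pt, Double) -> Void,
             removeLocalCricket: @escaping () -> Void) {
    let seconds = 0.4
    sendToB(.spawn(at: convertToB(start)))          // steps 1-3
    sendToB(.animate(to: convertToB(end), seconds: seconds))
    animateLocally(end, seconds)                    // step 4
    DispatchQueue.main.asyncAfter(deadline: .now() + seconds) {
        removeLocalCricket()                        // step 5
    }
}
```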
Fig. 8. Converting coordinates between screens
5.2 Dynamic Canvas
This application creates a single virtual screen from multiple displays, as is generally done in an ordinary multi-display approach. The difference is that the virtual screen is formed and re-formed dynamically by attaching or removing displays. The size of an image or movie shown on it is also adjusted dynamically so that it appears as large as the virtual screen allows (Fig. 9). The size of the entire virtual screen is recalculated, and the displayed image or movie is resized, at the instant a display is added to or removed from the formation. When the connection is broken, the image reappears on every screen, fitted to a single display.
Fig. 9. Image displayed throughout multiple displays
To realize this effect, every device retains information about the virtual screen's size and about its local screen's origin within the virtual screen's coordinates. When an additional device joins the virtual screen, the device to which the new device is attached directly renews its information about the virtual screen. It then sends the renewed information to its direct neighbors and makes them update their own information; subsequently, they also send the information to their own neighbors. The updated information about the virtual screen's size is thereby relayed to all devices. To prevent the information from circulating endlessly, the framework provides a function for relaying information among many devices: it avoids sending the same information repeatedly to the same device by adding a unique identifier to each piece of information, and a device stops forwarding information when it receives a message that has already arrived via another route. The routing of messages is depicted in Fig. 10; sending stops at the devices to which a message has already been delivered. A minimal sketch of this relay appears after the figure.
Fig. 10. Relaying message to entire devices
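Below is a minimal sketch of this flooding-with-deduplication scheme, assuming each message carries a unique identifier; the class and member names are ours.

```swift
import Foundation

// Flooding relay with duplicate suppression, as described above.
final class Relay {
    private var seen = Set<UUID>()                // identifiers already handled
    var neighbors: [(UUID, Data) -> Void] = []    // one send function per neighbor

    func receive(id: UUID, payload: Data) {
        // Forward only the first time this identifier is seen; a message that
        // already arrived via another device stops here.
        guard seen.insert(id).inserted else { return }
        // ...apply the virtual-screen update carried in `payload` locally...
        for send in neighbors { send(id, payload) }
    }
}
```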
To display an image or a movie, the devices determine which of them holds the virtual screen's central position, then derive the size of the image from the virtual screen's size. Each device then draws its own part, which is possible because each device knows its own position in the virtual screen's coordinates. A small sketch of this fitting follows.
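This Swift sketch shows one plausible version of that computation under our assumptions: the image is fitted as large as possible inside the virtual screen and centered, and each device shifts the resulting rectangle by its own origin so that it draws exactly its slice. The names are ours.

```swift
import Foundation

struct Size { var width: Double; var height: Double }
struct Rect { var x, y: Double; var size: Size }

// Fit the image as large as possible inside the virtual screen (centered),
// then express that rectangle in one device's local coordinates.
func drawRect(imageAspect: Double,                  // width / height
              virtualScreen: Size,
              localOrigin: (x: Double, y: Double)) -> Rect {
    // Scale the image to the largest size that still fits the virtual screen.
    var w = virtualScreen.width
    var h = w / imageAspect
    if h > virtualScreen.height {                   // too tall: fit height instead
        h = virtualScreen.height
        w = h * imageAspect
    }
    // Center in virtual-screen coordinates, then shift by this device's origin
    // so that each device draws exactly its own slice of the shared image.
    return Rect(x: (virtualScreen.width - w) / 2 - localOrigin.x,
                y: (virtualScreen.height - h) / 2 - localOrigin.y,
                size: Size(width: w, height: h))
}
```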
5.3 Tuneblock
The third application is for composing and playing music. At the start, a rectangular space with tiny dots appears on the screen. As Fig. 11 shows, the player can place small silver circles on these dots by touching the screen. A sound is played whenever a scan line traversing the screen hits one of these circles; the pitch is determined by each circle's position.
Fig. 11. Screen image of Tuneblock
Although a single device's screen affords only a very short melody, the melody can be lengthened by adding another display. When the scan line reaches the end of one device, that device sends a message to the next device to begin its turn; when the scan line reaches the end of the last device, scanning begins again from the first device. Relaying a message is necessary to realize this looping playback (a sketch follows Fig. 12). Connecting devices serves not only to lengthen a melody: the tunes set on each screen are played in chorus if another display is connected in parallel with respect to the scan line's direction of movement (Fig. 12). Although the same arrangement of circles remains on each device, we can play a different melody or sound by altering the display layout.
Fig. 12. Playing in chorus by connecting a display in parallel
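The scan-line handoff can be sketched as follows, under our own assumptions: each device holds a callback to the next device in the chain, and the last device's callback points back to the first so that playback loops. All names are ours.

```swift
import Foundation

// Per-device scan line; names and the handoff shape are our own.
final class ScanLine {
    var screenWidth: Double
    var x: Double = 0
    var startNextDevice: (() -> Void)?   // set to the neighbor's `begin`

    init(screenWidth: Double) { self.screenWidth = screenWidth }

    func begin() { x = 0 }

    // Advance the line each frame; hand over when it leaves the right edge.
    func tick(dx: Double) {
        x += dx
        if x >= screenWidth {
            startNextDevice?()   // on the last device this points back to
                                 // the first one, so the melody loops
        }
    }
}
```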
5.4 Variation of Application Design
These three applications (Fig. 13) represent different aspects of a dynamically changeable multiple display. The first allows graphical objects to move beyond screen boundaries to another device; the second uses multiple displays to form a single virtual screen; and the third uses the event of connecting displays to prompt the applications to run cooperatively. The applications are simple and straightforward in representing each aspect, but we believe they show different uses of the functions the interface can offer, and demonstrate the potential of such a system as a platform for interactive applications. We would like to develop applications with more sophisticated ideas and designs in the future.
None of these applications is designed exclusively for multiple displays. In principle, they are designed to be playable on a single screen, with extra interaction emerging when devices are connected. This design principle lets people play an application either alone or with friends. In addition, more types of interaction can be expected from the interface. For example, connections are currently made only within the same plane, but with a slight tweak we think a three-dimensional display construction is possible, which would further enlarge the possibilities.
Considering how people would play applications using this interface, a group of friends or colleagues would gather to play, because one person is unlikely to own so many mobile devices. In such situations, we could develop a new kind of social application. These applications require face-to-face communication; therefore, they could be useful for encouraging viral advertisement. Pursuing this aspect, using the interface and its applications to encourage people to communicate in person, is a future objective.
Fig. 13. Applications employing the Pinch interface running on an iPod Touch: left, Tuneblock; top right, Traveling Crickets; bottom right, Dynamic Canvas
6 Evaluation
In this section, we describe the system's performance, including measurements of the elapsed time to connect devices. We also report feedback from audiences at the conferences where we exhibited the applications.
6.1 Device Response
First, we examined whether the connection is actually established and whether the system deduces the correct direction of the connection. Many combinations of device orientations exist for a connection, as shown in Fig. 14, and more when one considers the four sides on which another device can be placed. We examined by observation whether a virtual screen's coordinates are built consistently with the displays' physical placement, using the Traveling Crickets application to verify that combinations of two or three devices are handled properly. We confirmed that all of these cases are processed correctly.
We observed a small discrepancy, caused by finger size, between the computed screen coordinates and the physical placement. This cannot be avoided when ascertaining a position by touching the screen with a finger. A slip of about 2.5 mm was observed on average, with a maximum of 5 mm, although this figure can be expected to differ among people and with the manner of moving the fingers; a touch by the thumb in particular induces greater slippage. However, our purposes require no such high accuracy: a small slip between juxtaposed screens does not prevent us from regarding the two displays as connected.
Because a pinching action not only connects displays but also prompts applications to react, the response time to the action is critical for sufficient interactivity. We measured the elapsed times for connection and disconnection under different protocols. The response is almost instant with UDP and Bluetooth, whereas it takes about a second with TCP via Wi-Fi.
Fig. 14. Variations of the display layout
Lastly, we examined how many devices can join the connection simultaneously. With the Wi-Fi protocol, we observed that the connection succeeds for up to six devices, whereas Bluetooth fails with more than four devices, perhaps because of restrictions of the hardware or the system framework. The number of communications increases drastically because the system currently uses all-to-all communication to find the pair on which a "pinch" action is applied. Although the mechanism itself allows any number of devices to join simultaneously, finding a pair then takes longer, so reactions take longer, and the connection sometimes becomes unstable depending on network conditions. We must establish a networking algorithm that provides a faster and more stable connection and makes the system robust enough to allow a larger number of devices to connect.
6.2 Audience Feedback
We have presented the applications at several conferences and exhibitions. Responses from audience members who actually tried the applications themselves were favorable and showed great enjoyment. The applications even received an award at one conference as the most impressive demonstration, by attendees' votes. On such occasions, we administered questionnaire surveys and interviewed some audience members about how they evaluated the contents and the interface.
With the questionnaire, we wanted to ascertain whether the interface is accepted as natural for its purpose, and whether users felt it inspired various new application ideas. Table 2 shows the results of the questionnaire survey. We asked whether respondents felt that the interface is natural for connecting displays (Q1), whether it inspires new ideas for applications (Q2), and whether they would want to develop such applications themselves (Q3, which is plausible because many researchers and students were in the audience at that conference). We obtained extremely positive answers to all of these questions. We were especially gratified that a majority answered that the interface inspires new ideas and that they would want to develop their own applications.
Table 2. Questionnaire about the "Pinch" interface
7 Conclusions
We devised a new interface that connects the displays of mobile devices dynamically and lets applications react automatically to changes in the display arrangement. The objective of our approach is the creation of a dynamically reconfigurable multi-display environment as a new platform for interactive media contents. Using the "pinching" action to prompt the connection of displays provides an intuitive interface because the gesture is an analogy of stitching things together. Additionally, we created applications that react to the change of display arrangement, which means that the pinching action is useful not only as an interface for connecting displays; simultaneously, it is a trigger that causes an application's response. The fun of our approach derives mostly from the fact that no step beyond connecting the displays is necessary to obtain a reaction from the applications.
Along with this interface concept, applications should be designed so that the actions of connecting and disconnecting displays trigger responses in them, which opens the possibility of various new ideas in application design. In theory, there is no limit on the number of displays; an application benefits if it is designed to run either on a single display or with any number of displays.
We produced three prototype applications of different types to demonstrate that the interface and the applications are actually functional. By creating three applications, we also sought to show that the interface can become a platform for producing various applications. We presented these applications at conferences and exhibitions, and received favorable feedback and comments from the audiences. Although the applications are simple in terms of their contents, the idea of the interface and its actions was apparently appreciated as much as we had expected. Although the interface currently accommodates only in-plane display arrangements, it is applicable to three-dimensional placement with minor alterations.
Another merit of our approach is the choice of application platform. Because mobile devices such as smartphones and tablet PCs are now commodity gadgets of which many people own one or two, there are plenty of occasions on which several devices gather at the same spot without anyone needing to purchase a set of devices. This aspect engenders another possibility for the applications: one might invite friends or colleagues to try an application using this interface even though one person is unlikely to own several smartphones. Consequently, the applications will encourage face-to-face communication, unlike SNS or chat applications that offer communication over a network. We are considering applying this interface to face-to-face social-networking applications or to advertising. We plan to pursue that aspect and to design applications encouraging such communication as future work.
Acknowledgement. This research was supported by a Grant-in-Aid for Scien-
tific Research (c), 24500154, 2012, funded by MEXT (the Ministry of Education,
Culture, Sports, Science and Technology, Japan).
References
1. Ni, T., Schmidt, G.S., Staadt, O.G., Livingston, M.A., Ball, R., May, R.: A Survey
of Large High-Resolution Display Technologies, Techniques, and Applications. In:
Proceedings of the IEEE Conference on Virtual Reality (VR 2006), pp. 223–236.
IEEE Computer Society, Washington, DC (2006)
2. Li, K., Chen, H., Chen, Y., Clark, D.W., Cook, P., Damianakis, S., Essl, G., Finkelstein, A., Funkhouser, T., Housel, T., Klein, A., Liu, Z., Praun, E., Samanta, R., Shedd, B., Singh, J.P., Tzanetakis, G., Zheng, J.: Building and Using a Scalable Display Wall System. IEEE Comput. Graph. Appl. 20(4), 29–37 (2000)
3. Ohta, T.: Dynamically reconfigurable multi-display environment for CG contents.
In: Proceedings of the 2008 International Conference on Advances in Computer
Entertainment Technology (ACE 2008), p. 416. ACM, New York (2008)
4. Rekimoto, J., Ullmer, B., Oba, H.: DataTiles: a modular platform for mixed physical
and graphical interactions. In: Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems (CHI 2001), pp. 269–276. ACM, New York (2001)
5. Tandler, P., Prante, T., Müller-Tomfelde, C., Streitz, N., Steinmetz, R.: ConnecTables: dynamic coupling of displays for the flexible creation of shared workspaces. In: Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology (UIST 2001), pp. 11–20. ACM, New York (2001)
6. Hinckley, K.: Synchronous gestures for multiple persons and computers. In: Proceed-
ings of the 16th Annual ACM Symposium on User Interface Software and Technology
(UIST 2003), pp. 149–158. ACM, New York (2003)
7. Hinckley, K., Ramos, G., Guimbretiere, F., Baudisch, P., Smith, M.: Stitching: pen
gestures that span multiple displays. In: Proceedings of the Working Conference on
Advanced Visual Interfaces (AVI 2004), pp. 23–31. ACM, New York (2004)
8. Junkyard Jumbotron, http://civic.mit.edu/blog/csik/junkyard-jumbotron
9. Merrill, D., Kalanithi, J., Maes, P.: Siftables: towards sensor network user interfaces.
In: Proceedings of the 1st International Conference on Tangible and Embedded
Interaction (TEI 2007), pp. 75–78. ACM, New York (2007)