Pinch: An Interface That Relates Applications
on Multiple Touch-Screen by ‘Pinching’ Gesture
Takashi Ohta and Jun Tanaka
Tokyo University of Technology, School of Media Science
Abstract. We devised a new user interface that relates applications running
on multiple mobile devices when the surfaces of juxtaposed screens
are merely pinched. The multiple-screen layout can be changed dynam-
ically and instantly even while applications are running in each device.
This interface can introduce a new kind of interaction: rearrangement of
devices triggers a certain reaction of contents. We expect this interface
to show great potential to inspire various application designs, and we
expect to enrich the contents by offering interaction that a single display or a static multi-display environment cannot provide. To prove and demonstrate that the interface is functional, we implemented a framework for using the interface and developed several applications using it.
Although these applications are simple prototypes, they received favor-
able responses from audiences at several exhibitions.
Keywords: User Interface, Multi-Display, Interaction, Mobile Device, Touch Screen, Dynamic Reconfiguration, Face-to-Face.
Through our experience of creating interactive applications in multi-display environments, we felt it would be possible to create more interesting representations using multiple displays if we could add more dynamic features to them.
Multi-display systems are generally static in their composition, and are mainly used for offering a very large screen or a high-resolution display [1,2]. If interactive applications such as media-art works run on a multi-display system, then
multiple displays can be expected to give more impact to an audience than when
running on a single display. However, if the use of multiple displays remains limited to forming a larger but single virtual screen, then the designs and interactions of applications are unlikely to differ much from those designed for a single display. Although that is suitable for scientific visualization purposes, we believe that using multiple displays has greater potential as a platform for interactive media contents.
A. Nijholt, T. Romão, and D. Reidsma (Eds.): ACE 2012, LNCS 7624, pp. 320–335, 2012.
© Springer-Verlag Berlin Heidelberg 2012

In pursuing the potential of multi-displays, we decided to explore ways in which the displays' layout can be changed interactively even while applications are running. We sought an interaction such that changing the displays' layout causes a reaction in an application. First, we created applications that achieve such interaction with notebook PCs [3], by attaching sensors to them. However,
because notebook PCs are still large and heavy to carry around casually, we
were not able to ﬁnd appropriate application scenarios. Because mobile devices
such as smartphones and tablet PCs have become popular, we decided to import
the idea and mechanism to these platforms, which are ideal candidates for our
purposes because of their mobility and popularity.
We do not want to attach sensors to mobile devices, nor would it be acceptable to open a configuration panel on a screen and register the devices manually one by one. We want a more interactive way to achieve the connection. We propose a simple and intuitive interface to do so, using a “pinching” gesture accomplished by putting the forefinger and thumb on
juxtaposed screens and swiping them as if to stitch them together. This linking
of displays is possible by choosing mobile devices with a touch screen for our
platform. We also use “shaking” of a device to break the connection. We prepared a framework providing the interface's functions and created three prototype applications to demonstrate that this interface can be a foundation for various applications.
In this paper, we present the concept and the mechanism of our
interface, its implementation, and the applications developed using the interface.
Several reports in the literature have described research using dynamically reconfigurable multiple display devices. “DataTiles” consists of a flat display and tiny transparent tiles [4]. Each tile has an RFID tag, and reading sensors are mounted
on the panel, so that the system can recognize when a tile is placed on the panel.
When a tile is placed, contents associated with each tile’s category are displayed
automatically on the panel in that area. This research demonstrates an interface in which multiple display units are used, and a physical interaction, placing a unit onto the panel, triggers the content to react. Other studies have
investigated the use of physically independent display devices. “ConnecTables” [5] is a system that dynamically connects two displays and makes them a single virtual screen to produce a collaborative workspace. A display unit, called ConnecTable, is built using a graphic tablet and a built-in sensor.
The units detect each other when they are moved close together. Then their screens are connected to form a single display area. On the connected screen, users can share
information by moving displayed objects between devices. Hinckley proposed bumping displays together to trigger a connection [6]. An acceleration sensor detects the vibration of the bumping motion. Then display regions are connected to form a single workspace. “Stitching” [7] is a similar method that
has been examined for building a collaborative workplace by multiple displays.
This method uses a stylus pen for connecting displays. The system recognizes the pen's continuous movement spanning multiple displays, and forms a temporary single screen by deducing the relative positions of the displays so that the pen's trail is drawn as a continuous line. These studies show variants of the
approach: some use sensors to detect the physical contact of displays, some use a gesture to signal that it occurs, and others use a pen's trail to ascertain
positions more precisely.
Some works use mobile devices. “Junkyard Jumbotron” [8] is an application that combines devices including smartphones and/or PC displays, and binds them into a single large virtual screen. It configures the relative positioning of each device by detecting specific graphical markers displayed on each display using a camera. “Siftables” [9] is a specifically designed tiny block device with a display, equipped with built-in sensors on its four sides to detect the others. That approach can be characterized as similar to ConnecTables and Hinckley's work.
Differences in research objectives explain the differences between these projects and our work. Our approach is similar to “Stitching” in deducing relative display positions from a trail drawn on the screens, but we put more weight on changing the display layout dynamically. We use the gesture not only to prompt the connection of displays, but as an interface that invokes the physical analogy of gluing two things together, so that a user has the feeling of actually connecting the devices manually. Junkyard Jumbotron was designed to
create a single large screen with temporarily assembled devices, whereas our work is intended to produce an application platform that uses the change of
display layout as a means of interaction. Therefore, Junkyard Jumbotron detects
and conﬁgures the display positioning as a whole at one time, whereas ours does
not use such a conﬁguration approach.
An important difference from Siftables is that our system uses ubiquitous devices such as smartphones. Siftables is designed for one player possessing
speciﬁcally tailored devices, whereas we expect a person to call friends to bring
their devices to play together. Using temporarily assembled devices is our ap-
proach’s major feature.
We would like to use multiple displays not for building a static large virtual
screen, but for creating fascinating interactions of contents triggered by rear-
rangement of the display layout. For that purpose, we need an interactive and
instant means to reconﬁgure the display layout. We also want to change the lay-
out repetitively, and to be able to add or remove devices at any time. We do not
seek a configuration tool to do that. We want an interaction that prompts the connecting of displays and simultaneously triggers a reaction in the contents. When we chose smartphones (iPhone, iPod Touch) and tablet PCs (iPad) as our platform, we came up with the idea of using a “pinching” gesture, because these devices are typically equipped with touch screens that we can exploit.
What we call “pinching” is the action of putting a forefinger and thumb on the display surfaces of two juxtaposed devices and swiping them toward each other until they meet. The salient advantage of the pinching gesture is that
it is extremely simple and intuitive. It readily evokes the action of stitching two things together. It is also extremely easy to deduce the
physical position of screens by the gesture if we assume that it is applied by the
foreﬁnger and thumb of the same hand: we can safely expect that these ﬁngers
move along the same straight line, in opposite directions, applied at the very
same time, on screens facing the same direction. Such information is obtainable
by detecting a touch on the screen. A pinched pair can be determined by sharing
this information among devices and ﬁnding a pair of information that meets the
conditions explained above. Consequently, a pinching action on the touch screen
enables the connection of displays without extra sensors.
Fig. 1. Connect displays using a pinching gesture
The gesture is useful not only for making a connection of displays. We want an interface that relates the displayed contents, rather than one that merely connects the devices. For example, as shown in Fig. 1, when two displays are pinched, these
displays are connected. At the very same time, the image appears throughout
the screens. In other words, we want to achieve interaction of contents, which
will occur by moving displays around. Designing a reconﬁgurable multi-display
system itself is not the direct objective of this research.
To realize such an interaction, it is necessary for an application to react to an event of connection or disconnection of displays caused by the pinching action. We design these two events to occur spontaneously, without requiring an extra step to relate the application's content. We expect that this approach can produce fascinating applications that make users feel as though they control digital
contents by physically handling objects.
In this section, we explain the design and implementation of the interface sys-
tem. Our intention is to have a dynamic interaction of a gesture and content.
Therefore, one can apply a pinching gesture while the application is running.
The entire procedure for connecting displays and letting applications react to
the event can largely be done in three steps: ﬁnd the pinched pair, determine the
screen positions, and call an appropriate reaction in an application. We explain
here how these steps are implemented and what tasks are done in each step.
4.1 Determine a Connected Pair
When an application starts, it seeks others on a network. Once ﬁnding the other
devices on which a compatible application is running, the application registers
their network addresses and establishes a connection with them. Finding others
on a network is done automatically using Apple’s Bonjour protocol. This protocol
is useful for publicizing a network service to other devices. The Bonjour protocol deals with an identifier designating the kind of service and the type of transport protocol, in a format like “_pinch._tcp”. In this way, the application can find other
fellow “Pinch”-able applications on the network. We prepared the function for
network connection so that it can use either Wi-Fi or Bluetooth. Each has its
merits and shortcomings in terms of performance, which we discuss in a later section.
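As a rough illustration of this discovery step (in Python rather than the framework's Objective-C; the function names are assumptions, not the framework's API), a device could filter discovered Bonjour services by the “_pinch._tcp” service type and register only fellow Pinch peers:

```python
# Illustrative sketch: filter Bonjour-discovered services by the Pinch
# service type and keep the peers' network addresses.
SERVICE_TYPE = "_pinch._tcp.local."

def is_pinchable(service_name):
    """True if a discovered service belongs to a fellow Pinch application."""
    return service_name.endswith(SERVICE_TYPE)

def register_peers(discovered):
    """Keep only Pinch peers (name -> address) from all discovered services."""
    return {name: addr for name, addr in discovered.items()
            if is_pinchable(name)}
```

For example, given a discovery result containing both a Pinch device and an unrelated printer service, only the Pinch device would be registered.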
Once a group of devices establishes the connection, they are ready to send and
receive information of a pinching action. On each device, the application observes
whether a swiping motion is applied to its screen. When the application notices
that a swiping occurs, it sends out information related to that motion to all
other devices (Fig. 2).
Fig. 2. Broadcasting of motion data
Information on the motion consists of the data listed in Table 1. If a pinching gesture is applied, swiping motions can be expected to occur at two devices simultaneously. Therefore, if an application receives swiping information while it also has its own from the same moment, it can deduce whether the two result from a pinching action by comparing the time stamps in the respective devices' information. A pair of swiping motions is identified as a pinching gesture when the data of these motions satisfy the following conditions:
– the motions occur simultaneously
– the screens' surfaces face the same orientation
– the swipes move in opposite directions
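A minimal sketch of how two broadcast swipe records could be checked against these conditions (Python for illustration; the record fields, names, and thresholds are assumptions, not the framework's actual data format):

```python
from dataclasses import dataclass

@dataclass
class Swipe:
    timestamp: float    # when the swipe occurred, in seconds
    orientation: str    # which way the screen faces, e.g. "up"
    direction: tuple    # unit vector of the finger motion on the screen

def is_pinch(a, b, max_dt=0.3):
    """True if two swipes from different devices satisfy the pinch conditions."""
    simultaneous = abs(a.timestamp - b.timestamp) <= max_dt
    same_facing = a.orientation == b.orientation
    # Opposite directions: the dot product of the two unit vectors is near -1.
    dot = a.direction[0] * b.direction[0] + a.direction[1] * b.direction[1]
    return simultaneous and same_facing and dot < -0.9
```

Each device would run a test like this against every swipe record it receives while it holds a recent swipe of its own.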
As might be apparent from the explanation above, no central architecture exists for managing the information as a whole. Network communication is done peer-to-peer among devices, with no server delivering application contents. The processes run on each device independently, exchanging necessary information only on necessary occasions with the corresponding devices. Additionally, the system needs no extra devices such as sensors attached to the display device. Detecting a motion applied to the touch screen provides sufficient information to configure the connection. These features, no central server and no extra attachment, are great advantages in realizing the use of multiple displays by temporarily gathered commodity devices.
Table 1. Broadcasting of motion data
4.2 Connection of Screens
After a pinched pair is discovered, because we allow an arbitrary screen layout,
the need exists to determine each device's screen coordinates relative to the others. Each device of the pair has the information of the swipe motion of the other device
of the pair. Therefore, each can deduce the relative position by analyzing that information.
To explain the process, we assume the swipe motions shown in Fig. 3. The
procedure to determine the relative screen coordination is conducted as depicted
in Fig. 4. The following are what are performed in each step.
1. position screens A and B as overlapping completely
2. move screen B by the distance between swipe A's location and screen A's center position
3. rotate screen B by the diﬀerence of the two devices’ directions
4. move screen B further by the distance between swipe B’s location and screen
B’s center position
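The four steps above amount to composing a rotation and two translations. A condensed sketch (illustrative Python, not the Objective-C implementation; names are assumptions):

```python
import math

def rotate(p, theta):
    """Rotate a 2-D point by theta radians about the origin."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def center_of_b(pinch_a, pinch_b, theta):
    """Center of screen B in screen A's frame (A's center is the origin).

    pinch_a: where the pinch met on screen A, relative to A's center
    pinch_b: the same physical point on screen B, relative to B's center
    theta:   rotation of B's frame relative to A's frame, in radians
    """
    # Steps 1-4 condensed: overlap the screens, rotate B by the difference
    # of the devices' directions, then translate B so the pinch points meet.
    rb = rotate(pinch_b, theta)
    return (pinch_a[0] - rb[0], pinch_a[1] - rb[1])
```

For two same-size screens pinched at their facing edges with no rotation, e.g. `center_of_b((160, 0), (-160, 0), 0.0)`, screen B's center lands 320 points to the right of screen A's, as expected.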
Using these procedures, an application running on each device can know the other connected screen's relative location and can convert the positions of objects on the other screen to its own coordinates, and vice versa. This process is also
applicable to screens of diﬀerent sizes. Therefore, the mechanism works with the
combination of smartphones and tablet PCs.
Fig. 3. Swipe motion and screen coordinates
Fig. 4. Process to determine relative screen coordinates
4.3 Application Programming Framework
We designed a programming framework to ease the development of applications compatible with the interface. It handles the procedures of networking, detection of the pinching action, conversion of screen coordinates, relaying of messages among multiple devices, disconnection by a shaking gesture, and so forth. It covers most of the system work and saves a developer from coding these parts. Fig. 5 shows how the framework's layers are constructed. With the framework, developers can concentrate only on the coding of graphics and reactions.
Fig. 5. Framework layers
The class structure of the framework is shown in Fig. 6. The framework is cur-
rently implemented in Objective-C for iOS 5 and after, for the use on iPhone,
iPod Touch, and iPad. Most of the functions are gathered under the PinchController class. Therefore, a developer can access the Pinch interface's functions simply by using that class and designating one's own View object in it.
Fig. 6. Class structure of the framework
Other important classes in the framework are PinchControllerDelegate and
PinchControllerMessage. The former is the class that receives a message when a
device is connected with others. By using this class and setting reactions in the methods it provides, an application can react when displays are connected. Each application's specific reaction should be invoked here. PinchMessage serves as a container for a message and is used for sending and receiving messages between devices. It also supports the relay of a message through multiple devices.
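As a rough illustration of this delegate pattern (in Python for brevity; the real framework is Objective-C, and these method names are assumptions, not the framework's API):

```python
class PinchController:
    """Owns the system work; calls back into an app-supplied delegate."""
    def __init__(self, delegate):
        self.delegate = delegate

    def _pinch_detected(self, neighbor):
        # Invoked by framework internals when a pinch connects a neighbor.
        self.delegate.did_connect(neighbor)

    def _shake_detected(self, neighbor):
        # Invoked by framework internals when shaking breaks a connection.
        self.delegate.did_disconnect(neighbor)

class AppDelegate:
    """Application-specific reactions to connection events."""
    def __init__(self):
        self.connected = []

    def did_connect(self, neighbor):
        self.connected.append(neighbor)

    def did_disconnect(self, neighbor):
        self.connected.remove(neighbor)
```

The application supplies only the delegate; the controller decides when its callbacks fire.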
We developed applications that employ the “Pinch” interface to examine whether
the system is actually as functional as we expected. In addition, we designed three
applications to demonstrate that the interface affords a variety of interactions. In this section, we also explain what should be prepared on the application side to react properly to a change of display layout.
5.1 Traveling Crickets
This is a very simple application example with which a graphic object can move
among multiple displays when they are connected. When the application is
started on one display, a cricket appears in a grass ﬁeld that appears on the
screen. When the player taps just behind the cricket, it jumps and moves forward. The
movement is, however, restricted by the screen’s boundary. When the insect
reaches an edge, it bounces and retreats. Connecting a new display provides an
additional ﬁeld onto which the cricket can move beyond the edge of one screen.
When the cricket goes beyond a boundary to a diﬀerent screen (Fig. 7), the
chirping sound of the cricket moves together with it and is heard from the next
device as well.
Fig. 7. Cricket moves beyond the edge to a diﬀerent display
Making a graphic object move between devices is done in the following way
(as shown in Fig. 8). Here, we proceed to an explanation by assuming that an
object is originally located on the screen of device A.
1. set the positions of the cricket's original location and destination in device A's screen coordinates
2. deduce these positions in device B's screen coordinates
3. create a copy of the cricket object in device B, at the cricket's original position converted to device B's screen coordinates
4. move cricket objects on both screens simultaneously
5. remove the cricket object from device A after the animation is completed
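The steps above can be sketched as a small runnable toy (all names assumed; device B's coordinates are modeled as device A's shifted by B's offset in a shared frame):

```python
class Device:
    def __init__(self, name, offset):
        self.name = name
        self.offset = offset   # this screen's origin in a shared frame
        self.objects = {}      # object id -> position in local coordinates

    def to_local(self, neighbor, pos):
        """Convert a position from `neighbor`'s coordinates into our own."""
        dx = neighbor.offset[0] - self.offset[0]
        dy = neighbor.offset[1] - self.offset[1]
        return (pos[0] + dx, pos[1] + dy)

def hand_off(obj_id, origin_a, dest_a, dev_a, dev_b):
    # steps 1-2: express the origin and destination in B's coordinates
    origin_b = dev_b.to_local(dev_a, origin_a)
    dest_b = dev_b.to_local(dev_a, dest_a)
    # step 3: create a copy of the object on device B at the converted origin
    dev_b.objects[obj_id] = origin_b
    # step 4 (animation elided): both copies move toward the destination
    dev_b.objects[obj_id] = dest_b
    # step 5: remove the original from device A once the move completes
    dev_a.objects.pop(obj_id, None)
    return dest_b
```

With device B placed 320 points to the right of device A, a cricket jumping from (300, 100) to (340, 100) in A's coordinates ends up at (20, 100) in B's.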
As explained above, motion of an object between the displays is done by copying
the object instance to a new device when the object moves over to a diﬀerent
screen. This diﬀers from the approach by which a central device does all the
calculation on the graphical object’s movements, and broadcasts them to other
devices. This suits our choice of platform because none of the devices becomes indispensable, which means that any device can be removed from the connection.
This application design therefore lends ﬂexibility to the system and the interface.
Fig. 8. Convert coordination between screens
5.2 Dynamic Canvas
This is an application that creates a single virtual screen by multiple displays,
as is generally done with an ordinary multi-display approach. The diﬀerence is
that the virtual screen is formed and reformed dynamically by attaching or removing displays. The size of an image or movie shown there is also adjusted dynamically so that it appears as large as possible within the virtual screen (Fig. 9). The size of the entire virtual screen is recalculated, and the displayed image or movie resized, at the instant a display is added to or removed from the formation. When the connection is broken, the image appears on every screen so as to fit into a single screen.
Fig. 9. Image displayed throughout multiple displays
To realize this effect, we let all devices retain information on the virtual screen's size, and on the local screen's origin point in the virtual screen's coordinates. When an additional device joins to form the virtual screen,
the device to which the new device is attached directly renews its information
related to the virtual screen. Then it sends out the renewed information to its
direct neighbors and makes them update their information. Subsequently, they
also send information again to their own neighbors. The updated information
related to the virtual screen’s size is therefore relayed to all devices by repeating
this. To prevent the information from circulating endlessly, the framework provides a function for relaying information among many devices. That function prevents sending the same information repeatedly to the same device by attaching a unique identifier to the information. A device stops forwarding a message when it receives one that has already arrived earlier. The routing of messages
is depicted in Fig. 10. Sending messages stops at the devices where a message
has already been sent via another device.
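A minimal sketch of this relay (assumed names, not the framework's code): each message carries a unique identifier, and a device forwards it to its neighbors only the first time it is seen, so cycles in the connection graph terminate.

```python
import uuid

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []
        self.seen = set()       # identifiers of messages already relayed
        self.received = []

    def receive(self, msg_id, payload):
        if msg_id in self.seen:   # already arrived via another device: stop
            return
        self.seen.add(msg_id)
        self.received.append(payload)
        for n in self.neighbors:  # forward to direct neighbors only
            n.receive(msg_id, payload)

def broadcast(origin, payload):
    """Start a relay from one device with a freshly generated identifier."""
    origin.receive(uuid.uuid4().hex, payload)
```

Even in a fully connected triangle of devices, each node processes the payload exactly once.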
Fig. 10. Relaying message to entire devices
For displaying an image or a movie, the system determines which device holds the virtual screen's central position, then decides the size of the image from the virtual screen's size. Each device then draws its own part, which is possible because each device knows its own position in the virtual screen coordinates.
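For instance, each device's share of a centered image can be computed from the virtual screen's size and the device's origin within it (an illustrative sketch; all names are assumptions):

```python
def local_image_rect(virtual_size, image_size, origin, screen_size):
    """Part of a centered image this screen draws: (x, y, w, h), local coords."""
    # the image is centered in the virtual screen
    img_x = (virtual_size[0] - image_size[0]) / 2.0
    img_y = (virtual_size[1] - image_size[1]) / 2.0
    # intersect the image rectangle with this screen's rectangle
    x0 = max(img_x, origin[0])
    y0 = max(img_y, origin[1])
    x1 = min(img_x + image_size[0], origin[0] + screen_size[0])
    y1 = min(img_y + image_size[1], origin[1] + screen_size[1])
    w, h = max(0.0, x1 - x0), max(0.0, y1 - y0)
    # translate the corner from virtual coordinates to local coordinates
    return (x0 - origin[0], y0 - origin[1], w, h)
```

For two 320x480 screens forming a 640x480 virtual screen showing a 640x480 image, each device draws its own 320x480 half.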
5.3 Tuneblock

The third application is for composing and playing music. At the start, a rectangular space with tiny dots appears on the screen. Fig. 11 shows that the player can
place a tiny silver circle on these dots by touching the screen. A sound is played
when a scan line traverses the screen and hits these tiny circles. The sound pitch
is determined according to each circle’s position.
Fig. 11. Screen image of Tuneblock
Although the single device’s screen aﬀords only a very short melody, it can
be elongated by adding another display later. When the scan line reaches the
end of one device, it sends a message to the next device to begin its turn. When
the scan line reaches the end of the last device, scanning begins again from the
ﬁrst device. Relaying of a message is necessary to realize this looping of playing
sound. Connecting the devices serves not only to elongate the melody: tunes
set in each screen are played in chorus if another display is connected in parallel
in relation to the scanning line’s direction of movement (Fig. 12). Although the
same setting of circles remains on each device, we can play a diﬀerent melody
or sound by altering the display layout.
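A toy sketch of this ordering (an assumed structure, not the actual implementation): devices chained along the scan direction play in sequence, while devices connected in parallel share a time slot and therefore play in chorus.

```python
def play_order(layout):
    """layout: columns along the scan direction; each column lists the
    devices connected in parallel. Devices in one column share a time
    slot (chorus); the scan line visits the columns in sequence and
    loops back to the first column after the last."""
    return [(slot, device)
            for slot, column in enumerate(layout)
            for device in column]
```

For example, `play_order([["a"], ["b", "c"]])` plays "a" alone in slot 0, then "b" and "c" together in slot 1, after which the scan returns to "a".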
Fig. 12. Playing in chorus by connecting displays in parallel to the scan line
5.4 Variation of Application Design
These three applications (Fig. 13) represent different aspects of a dynamically changeable multiple display. The first allows graphical objects to move beyond screen boundaries to another device. The second uses multiple displays to form a single virtual screen, whereas the last uses the event of connecting displays to prompt the applications to run cooperatively. The applications are simple and straightforward in representing each aspect, but we think we were able to show different usages of the functions the interface can offer, and to demonstrate the potential of such a system as a platform for interactive applications. However, we would like to develop applications of more sophisticated ideas and designs in the future.
None of these applications is designed exclusively for multiple displays. In principle, the applications are designed so that they are playable on a single screen, and gain extra interaction when connected with others. This design principle enables people to play an application either alone or with friends. In addition, more types of interaction can be expected with the interface.
For example, the connection is done only on the same plane currently. How-
ever, with a slight tweak, we think a three-dimensional display construction is
possible. This would enlarge the possibility further.
When considering how people might play applications using this interface, a group of friends or colleagues would gather to play, because one person is unlikely to own so many mobile devices. In such a situation, we would be able to
Fig. 13. Image of applications employing Pinch interface actually running on iPod
touch: left, Tuneblock; right top, Traveling Cricket; right bottom, Big Canvas
develop a new kind of social application. The applications require face-to-face communication. Therefore, they are useful for encouraging viral advertisement. Pursuing this aspect, using the interface and applications to encourage people to communicate with physical contact, is a future objective.
This section describes the system's performance, including a test of the elapsed time for connecting the devices. We also report feedback from the audiences at conferences where we exhibited the applications.
6.1 Device Response
First, we examine whether the connection is actually established and whether
the system can deduce a correct direction for the connection. Many combinations of device orientations exist for a connection, as shown in Fig. 14, and more when one considers the four sides on which other devices can be placed. We examined by observation whether a virtual screen's coordinates are built consistently with the displays' physical placement. We used the moving-cricket application to verify that combinations of two or three devices are handled properly. We confirmed that all of these are processed correctly.
We observed a small discrepancy, caused by finger size, in the matching of screen coordinates to the physical placement. This cannot be avoided when trying to ascertain a position by touching the screen with a finger. About 2.5 mm of slip was observed on average, with a maximum of 5 mm, although this number is expected to differ among people and with the way the fingers move;
especially a touch by a thumb induces greater slippage. However, we need no
such high accuracy for our purposes. A small slip between the juxtaposed screens
does not deter us from regarding the two displays as connected.
Because the pinching action serves not only to connect displays but also to prompt applications to react, the response time to the action is critical for realizing sufficient interactivity. We measured the elapsed time for connection and
Fig. 14. Variation of the display layout
disconnection for the different protocols. Response is almost instant with UDP and Bluetooth, whereas it takes about a second with TCP via Wi-Fi.
Lastly, we examined how many devices can join the connection simultaneously.
With the Wi-Fi protocol, we observe that the connection is successful for up to
six devices, but Bluetooth fails with more than four devices. This is perhaps
attributable to the restriction of the hardware or the system framework. The
number of communications increases drastically because the system currently
uses all-to-all communication to ﬁnd a pair where a “pinch” action is applied.
Although the mechanism itself allows any number to join simultaneously, it
takes longer to ﬁnd a pair. Therefore, it takes longer to react. The connection
sometimes becomes unstable depending on network conditions. We must establish a method that makes the system more robust, to allow the connection of a larger number of devices.
6.2 Audience Feedback
We have presented the applications at several conferences and exhibitions. Responses from audience members who actually tried the applications themselves were favorable and showed great enjoyment. The applications even
received an award at a certain conference as the most impressive demonstration
by attendees’ votes. At such occasions, we administered questionnaire surveys
and interviewed some audience members about how they evaluated the contents
and the interface.
We wanted to ascertain by the questionnaire if the interface is accepted as
natural for the purpose, and if users felt that it inspired various new application
ideas. Table 2 shows results of the questionnaire survey. We asked if they felt
that the interface is natural for connecting displays (Q1), and if it inspires new
ideas for applications (Q2). Additionally, we asked respondents whether they would want to develop applications of their own (Q3), which is a realistic prospect because many researchers and students were in the audience at that conference. We obtained extremely positive answers to all
of these questions. We were especially gratiﬁed to learn that a majority of people
answered that the interface inspires new ideas and that they want to develop
their own applications themselves.
Table 2. Questionnaire about the “Pinch” interface
We devised a new interface that connects the displays of mobile devices dynamically, and lets applications react automatically to the change of display
arrangement. The objective of our approach is the creation of a dynamically re-
conﬁgurable multi-display environment as a new platform for interactive media
contents. Using the “pinching” action to prompt the connection of displays provides an intuitive interface, because the gesture is an analogy of stitching things together. Additionally, we created applications that react to the change of display arrangement, which means that the pinching action is useful not only as the interface for connecting displays; simultaneously, it is a trigger that causes an application's response. The fun of our approach derives mostly from the fact that no extra step beyond connecting the displays is necessary to produce a reaction in applications.
Along with that interface concept, applications should be designed so that the
action of connecting and disconnecting of displays triggers responses in them,
which will bring a possibility of creating various new ideas in application design.
In theory, no limitation exists for the number of displays; an application will
have beneﬁt if it is designed to run either on single display or with any number
of multiple displays.
We produced three prototype applications of different types to demonstrate
that the interface and applications are actually functional. By creating three
applications, we also sought to show that the interface can become a platform
that can produce various applications. We presented these applications at some
conferences and exhibitions, and received favorable feedback and comments from
the audience. Although the applications are simple in terms of their contents, the idea of the interface and its actions was apparently appreciated as much as we had expected.
Although it only accommodates display arrangements in-plane, the interface is
applicable to three-dimensional placement with minor alterations.
Another merit of our approach is the selection of the application platform. Because mobile devices such as smartphones and tablet PCs are now sold and owned as commodity gadgets, of which many people own one or two, there are
plenty of occasions during which several devices can be gathered at the same
spot, without the need to purchase a set of devices. This aspect engenders an-
other possibility of the applications. One might call friends or colleagues to try
the application using this interface even if one person is unlikely to own several
smartphones. Consequently, the application will encourage face-to-face commu-
nication, unlike SNS or chat applications that offer communication over a network. We are considering applying this interface to face-to-face social networking applications or to advertising purposes. We plan to pursue that aspect and to design applications encouraging such communication as a subject of future work.
Acknowledgement. This research was supported by a Grant-in-Aid for Scien-
tiﬁc Research (c), 24500154, 2012, funded by MEXT (the Ministry of Education,
Culture, Sports, Science and Technology, Japan).
References

1. Ni, T., Schmidt, G.S., Staadt, O.G., Livingston, M.A., Ball, R., May, R.: A Survey
of Large High-Resolution Display Technologies, Techniques, and Applications. In:
Proceedings of the IEEE Conference on Virtual Reality (VR 2006), pp. 223–236.
IEEE Computer Society, Washington, DC (2006)
2. Li, K., Chen, H., Chen, Y., Clark, D.W., Cook, P., Damianakis, S., Essl, G., Finkel-
stein, A., Funkhouser, T., Housel, T., Klein, A., Liu, Z., Praun, E., Samanta, R.,
Shedd, B., Singh, J.P., Tzanetakis, G., Zheng: Building and Using A Scalable Dis-
play Wall System. IEEE Comput. Graph. Appl. 20(4), 29–37 (2000)
3. Ohta, T.: Dynamically reconﬁgurable multi-display environment for CG contents.
In: Proceedings of the 2008 International Conference on Advances in Computer
Entertainment Technology (ACE 2008), p. 416. ACM, New York (2008)
4. Rekimoto, J., Ullmer, B., Oba, H.: DataTiles: a modular platform for mixed physical
and graphical interactions. In: Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems (CHI 2001), pp. 269–276. ACM, New York (2001)
5. Tandler, P., Prante, T., Müller-Tomfelde, C., Streitz, N., Steinmetz, R.: ConnecTables: dynamic coupling of displays for the flexible creation of shared workspaces. In:
Proceedings of the 14th Annual ACM Symposium on User Interface Software and
Technology (UIST 2001), pp. 11–20. ACM, New York (2001)
6. Hinckley, K.: Synchronous gestures for multiple persons and computers. In: Proceed-
ings of the 16th Annual ACM Symposium on User Interface Software and Technology
(UIST 2003), pp. 149–158. ACM, New York (2003)
7. Hinckley, K., Ramos, G., Guimbretiere, F., Baudisch, P., Smith, M.: Stitching: pen
gestures that span multiple displays. In: Proceedings of the Working Conference on
Advanced Visual Interfaces (AVI 2004), pp. 23–31. ACM, New York (2004)
8. Junkyard Jumbotron, http://civic.mit.edu/blog/csik/junkyard-jumbotron
9. Merrill, D., Kalanithi, J., Maes, P.: Siftables: towards sensor network user interfaces.
In: Proceedings of the 1st International Conference on Tangible and Embedded
Interaction (TEI 2007), pp. 75–78. ACM, New York (2007)