ABSTRACT: GatherReader is a prototype e-reader with both pen and multi-touch input that illustrates several interesting design trade-offs to fluidly interleave content consumption behaviors (reading and flipping through pages) with information gathering and informal organization activities geared to active reading tasks. These choices include (1) relaxed precision for casual specification of scope; (2) multiple object collection via a visual clipboard; (3) flexible workflow via deferred action; and (4) complementary use of pen+touch. Our design affords active reading by limiting the transaction costs for secondary subtasks, while keeping users in the flow of the primary task of reading itself.
ABSTRACT: Current developments hint at a rapidly approaching future where simultaneous pen + multi-touch input becomes the gold standard for direct interaction on displays. We are motivated by a desire to extend pen and multi-touch input modalities, including their use in concert, to enable users to take better advantage of each.
ABSTRACT: We explore the design of a system for three-way collaboration over a shared visual workspace, specifically in how to support three channels of communication: person, reference, and task-space. In two studies, we explore the implications of extending designs intended for dyadic collaboration to three-person groups, and the role of each communication channel. Our studies illustrate the utility of multiple configurations of users around a distributed workspace, and explore the subtleties of traditional notions of identity, awareness, spatial metaphor, and corporeal embodiments as they relate to three-way collaboration.
Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work, CSCW 2010, Savannah, Georgia, USA, February 6-10, 2010
ABSTRACT: As we design tabletop technologies, it is important to also understand how they are being used. Many prior researchers have developed visualizations of interaction data from their studies to illustrate ideas and concepts. In this work, we develop an interactional model of tabletop collaboration, which informs the design of VisTACO, an interactive visualization tool for tabletop collaboration. Using VisTACO, we can explore the interactions of collaborators with the tabletop to identify patterns or unusual spatial behaviours, supporting the analysis process. VisTACO helps bridge the gap between observing the use of a tabletop system, and understanding users' interactions with the system.
ACM International Conference on Interactive Tabletops and Surfaces, ITS 2010, Saarbrücken, Germany, November 7-10, 2010
ABSTRACT: As technology becomes ever more pervasive in our lives, one of the fundamental questions confronting us is how to resolve the increasing complexity that too often accompanies it — complexity which threatens to prevent us from reaping the potential benefits that it offers. In addressing this question, much of the literature has focused on improving the design and usability of the interface to the technologies. In this chapter we investigate another approach, one in which some of the complexity in using the devices is eliminated by exploiting some of the key properties of architectural and social space. Our work is based on the observation that there is meaning in space and in distance. Hence, we can relieve users of the complexity of having to explicitly specify such meaning, as — through appropriate design — it can be implicit, given its spatial context.
ABSTRACT: In this paper we describe extensions to our work on ThinSight, necessary to scale the system to larger tabletop displays. The technique integrates optical sensors into existing off-the-shelf LCDs with minimal impact on the physical form of the display. This allows thin form-factor sensing that goes beyond the capabilities of existing multi-touch techniques, such as capacitive or resistive approaches. Specifically, the technique not only senses multiple fingertips, but outlines of whole hands and other passive tangible objects placed on the surface. It can also support sensing and communication with devices that carry embedded computation such as a mobile phone or an active stylus. We explore some of these possibilities in this paper. Scaling up the implementation to a tabletop has been non-trivial, and has resulted in modifications to the LCD architecture beyond our earlier work. We also discuss these in this paper, to allow others to make practical use of ThinSight.
Third IEEE International Workshop on Tabletops and Interactive Surfaces (Tabletop 2008), October 1-3, 2008, Amsterdam, The Netherlands
ABSTRACT: ThinSight is a novel optical sensing system, fully integrated into a thin form factor display, capable of detecting multiple fingers placed on or near the display surface. We describe this new hardware in detail, and demonstrate how it can be embedded behind a regular LCD, allowing sensing without degradation of display capability. With our approach, fingertips and hands are clearly identifiable through the display, allowing zero force multi-touch interaction. The approach of optical sensing also opens up the possibility for detecting other physical objects and visual markers through the display, and some initial experiments with these are described. A major advantage of ThinSight over existing camera and projector based optical systems is its compact, thin form-factor making it easier to deploy. We therefore envisage using this approach to capture rich sensor data through the display to enable both multi-touch and tangible interaction. We also discuss other novel capabilities of our system including interacting with the display from a distance and direct bidirectional communication between the display and mobile devices.
Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, Newport, Rhode Island, USA, October 7-10, 2007
ABSTRACT: ThinSight is a novel optical sensing system, fully integrated into a thin form factor display, capable of detecting multiple objects such as fingertips placed on or near the display surface. We describe this new hardware, and demonstrate how it can be embedded behind a regular LCD, allowing sensing without compromising display quality. Our aim is to capture rich sensor data through the display, which can be processed using computer vision techniques to enable interaction via multi-touch and physical objects. A major advantage of ThinSight over existing camera and projector based optical systems is its compact, low profile form factor making such interaction techniques more practical and deployable in real-world settings.
Proceedings of the Emerging Displays Technologies Workshop, EDT 2007, Images and Beyond: The Future of Displays and Interaction, August 4th, 2007, San Diego, California, co-located with SIGGRAPH 2007
ABSTRACT: Let me dredge up an old observation. I made it during a CHI plenary talk. I was trying to explain why I wasn't coming to the conference any more. It was that the Gaithersburg Conference, which led to the formation of SIGCHI, took place after the commercial release of the graphical user interface and the Xerox Star. That is, the CHI literature played no role in the development of what was perhaps the greatest contribution to improving people's experience using computers. There was no CHI literature! Now let's flash forward. Imagine stacking up all of the CHI literature that has accumulated since then. It would make a pile that was a couple of stories high. Yet, despite all of the work that pile represents, we as a discipline have not come up with anything that even begins to compare with those innovations that preceded the establishment of our field as a distinct discipline. So here is the thought that drove me to speak then: We could have done so, we should have done so, and even now, I feel like a failure for not having done so.