Bill Buxton

Microsoft, Redmond, Washington, United States

Publications (17)

  • ABSTRACT: We contrast the Chameleon Lens, which uses 3D movement of a mobile device held in the nonpreferred hand to support panning and zooming, with the Pinch-Flick-Drag metaphor of directly manipulating the view using multi-touch gestures. Lens-like approaches have significant potential because they can support navigation-selection, navigation-annotation, and other such compound tasks by off-loading navigation to the nonpreferred hand while the preferred hand annotates, marks a location, or draws a path on the screen. Our experimental results show that the Chameleon Lens is significantly slower than Pinch-Flick-Drag for the navigation subtask in isolation. But our studies also reveal that for navigation between a few known targets the lens performs significantly faster, that differences between the Chameleon Lens and Pinch-Flick-Drag rapidly diminish as users gain experience, and that in the context of a compound navigation-annotation task, the lens performs as well as Pinch-Flick-Drag despite its deficit for the navigation subtask itself.
    Proceedings of the 15th international conference on Human-computer interaction with mobile devices and services; 08/2013
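    To make the lens's navigation mapping concrete, here is a minimal TypeScript sketch of the kind of device-motion-to-viewport mapping the abstract describes; the gains, the clutch mechanism, and all names are illustrative assumptions, not the paper's implementation:

      interface Viewport {
        x: number;    // pan offset in document units
        y: number;
        zoom: number; // scale factor, 1.0 = 100%
      }

      const PAN_GAIN = 2.0;   // document units per mm of device motion (assumed)
      const ZOOM_GAIN = 0.01; // zoom change per mm of motion along z (assumed)

      // Apply one frame of device motion (dx, dy, dz in mm), gated by a
      // clutch so the view only moves while navigation is engaged.
      function applyLensMotion(v: Viewport, dx: number, dy: number, dz: number,
                               clutchHeld: boolean): Viewport {
        if (!clutchHeld) return v;
        const zoom = Math.min(8, Math.max(0.125, v.zoom * (1 + dz * ZOOM_GAIN)));
        return {
          // divide by zoom so a given hand motion covers the same screen
          // distance at any magnification
          x: v.x + (dx * PAN_GAIN) / zoom,
          y: v.y + (dy * PAN_GAIN) / zoom,
          zoom,
        };
      }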
  • ABSTRACT: GatherReader is a prototype e-reader with both pen and multi-touch input that illustrates several interesting design trade-offs to fluidly interleave content consumption behaviors (reading and flipping through pages) with information gathering and informal organization activities geared to active reading tasks. These choices include (1) relaxed precision for casual specification of scope; (2) multiple object collection via a visual clipboard; (3) flexible workflow via deferred action; and (4) complementary use of pen+touch. Our design affords active reading by limiting the transaction costs for secondary subtasks, while keeping users in the flow of the primary task of reading itself.
    01/2012;
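    Design choices (2) and (3) above, multiple object collection and deferred action, describe a clipboard that gathers items now and decides what to do with them later. A minimal TypeScript sketch of that pattern, with all names hypothetical:

      interface Clipping { id: string; excerpt: string; sourcePage: number; }

      class VisualClipboard {
        private items: Clipping[] = [];

        // (2) collect multiple objects without deciding yet what they are for
        collect(item: Clipping): void {
          this.items.push(item);
        }

        // (3) deferred action: the operation is chosen after gathering
        applyToAll(action: (c: Clipping) => void): void {
          this.items.forEach(action);
        }
      }

      // Usage: gather while reading; organize later without leaving the
      // flow of the primary reading task.
      const clip = new VisualClipboard();
      clip.collect({ id: "c1", excerpt: "...", sourcePage: 12 });
      clip.applyToAll(c => console.log(`Filed clipping from p.${c.sourcePage}`));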
  • Ken Hinckley, Michel Pahud, Bill Buxton
    ABSTRACT: Current developments hint at a rapidly approaching future where simultaneous pen + multi-touch input becomes the gold standard for direct interaction on displays. We are motivated by a desire to extend pen and multi-touch input modalities, including their use in concert, to enable users to take better advantage of each.
    SID Symposium Digest of Technical Papers. 01/2012; 41(1).
  • ABSTRACT: These projects were curated by two instructors at the Swedish School of Textiles, University of Borås, Linda Worbin (textile design) and Clemens Thornquist (fashion design). All four projects were presented at Ambience'11, an international conference ...
    Interactions. 01/2012; 19:64-69.
  • Conference Paper: Pen + touch = new tools.
    ABSTRACT: We describe techniques for direct pen+touch input. We observe people's manual behaviors with physical paper and notebooks. These serve as the foundation for a prototype Microsoft Surface application, centered on note-taking and scrapbooking of materials. Based on our explorations we advocate a division of labor between pen and touch: the pen writes, touch manipulates, and the combination of pen + touch yields new tools. This articulates how our system interprets unimodal pen, unimodal touch, and multimodal pen+touch inputs, respectively. For example, the user can hold a photo and drag off with the pen to create and place a copy; hold a photo and cross it in a freeform path with the pen to slice it in two; or hold selected photos and tap one with the pen to staple them all together. Touch thus unifies object selection with mode switching of the pen, while the muscular tension of holding touch serves as the "glue" that phrases together all the inputs into a unitary multimodal gesture. This helps the UI designer to avoid encumbrances such as physical buttons, persistent modes, or widgets that detract from the user's focus on the workspace.
    Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA, October 3-6, 2010; 01/2010
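    The interpretation scheme the abstract lays out (pen writes, touch manipulates, pen + touch yields tools, with a held touch re-moding the pen) can be summarized in a small dispatcher. A TypeScript sketch under assumed names; this is not the Surface application's actual event model:

      type Input =
        | { kind: "pen"; x: number; y: number }
        | { kind: "touch"; x: number; y: number };

      // touchHeldObject: the object currently pinned by the nonpreferred
      // hand, or null when nothing is held.
      function interpret(input: Input, touchHeldObject: string | null): string {
        if (input.kind === "touch") {
          return "manipulate";   // unimodal touch: drag, rotate, flip pages
        }
        if (touchHeldObject === null) {
          return "ink";          // unimodal pen: write or draw
        }
        // pen + touch: holding an object re-modes the pen into a tool,
        // e.g. drag off to copy, cross to slice, tap to staple
        return `tool-on:${touchHeldObject}`;
      }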
  • ABSTRACT: Manual Deskterity is a prototype digital drafting table that supports both pen and touch input. We explore a division of labor between pen and touch that flows from natural human skill and differentiation of roles of the hands. We also explore the simultaneous use of pen and touch to support novel compound gestures.
    Proceedings of the 28th International Conference on Human Factors in Computing Systems, CHI 2010, Extended Abstracts Volume, Atlanta, Georgia, USA, April 10-15, 2010; 01/2010
  • ABSTRACT: As we design tabletop technologies, it is important to also understand how they are being used. Many prior researchers have developed visualizations of interaction data from their studies to illustrate ideas and concepts. In this work, we develop an interactional model of tabletop collaboration, which informs the design of VisTACO, an interactive visualization tool for tabletop collaboration. Using VisTACO, we can explore the interactions of collaborators with the tabletop to identify patterns or unusual spatial behaviours, supporting the analysis process. VisTACO helps bridge the gap between observing the use of a tabletop system, and understanding users' interactions with the system.
    ACM International Conference on Interactive Tabletops and Surfaces, ITS 2010, Saarbrücken, Germany, November 7-10, 2010; 01/2010
  • ABSTRACT: We explore the design of a system for three-way collaboration over a shared visual workspace, specifically in how to support three channels of communication: person, reference, and task-space. In two studies, we explore the implications of extending designs intended for dyadic collaboration to three-person groups, and the role of each communication channel. Our studies illustrate the utility of multiple configurations of users around a distributed workspace, and explore the subtleties of traditional notions of identity, awareness, spatial metaphor, and corporeal embodiments as they relate to three-way collaboration.
    Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work, CSCW 2010, Savannah, Georgia, USA, February 6-10, 2010; 01/2010
  • Bill Buxton
    ABSTRACT: As technology becomes ever more pervasive in our lives, one of the fundamental questions confronting us is how to resolve the increasing complexity that too often accompanies it — complexity which threatens to prevent us from reaping the potential benefits that it offers. In addressing this question, much of the literature has focused on improving the design and usability of the interface to the technologies. In this chapter we investigate another approach, one in which some of the complexity in using the devices is eliminated by exploiting some of the key properties of architectural and social space. Our work is based on the observation that there is meaning in space and in distance. Hence, we can relieve users of the complexity of having to explicitly specify such meaning, as — through appropriate design — it can be implicit, given its spatial context.
    06/2009: pages 217-231;
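    The chapter's central idea, that distance itself can carry the user's intent, can be illustrated with a proximity-to-behavior mapping. A TypeScript sketch; the zones and behaviors are invented for illustration, not the chapter's design:

      type Zone = "personal" | "social" | "public";

      function zoneForDistance(meters: number): Zone {
        if (meters < 0.5) return "personal"; // within reach: full private detail
        if (meters < 3.0) return "social";   // conversational range: shared view
        return "public";                     // across the room: ambient summary
      }

      function contentFor(zone: Zone): string {
        switch (zone) {
          case "personal": return "editable detail view";
          case "social":   return "shared read-only view";
          case "public":   return "ambient glanceable summary";
        }
      }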
  • Bill Buxton
    ABSTRACT: We examine how ambient displays can augment social television. Social TV 2 is an interactive television solution that incorporates two ambient displays to convey to participants an aggregate view of their friends' current TV-watching status. Social TV ...
    Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; 04/2008
  • ABSTRACT: In this paper we describe extensions to our work on ThinSight, necessary to scale the system to larger tabletop displays. The technique integrates optical sensors into existing off-the-shelf LCDs with minimal impact on the physical form of the display. This allows thin form-factor sensing that goes beyond the capabilities of existing multi-touch techniques, such as capacitive or resistive approaches. Specifically, the technique not only senses multiple fingertips, but outlines of whole hands and other passive tangible objects placed on the surface. It can also support sensing and communication with devices that carry embedded computation such as a mobile phone or an active stylus. We explore some of these possibilities in this paper. Scaling up the implementation to a tabletop has been non-trivial, and has resulted in modifications to the LCD architecture beyond our earlier work. We also discuss these in this paper, to allow others to make practical use of ThinSight.
    Third IEEE International Workshop on Tabletops and Interactive Surfaces (Tabletop 2008), October 1-3 2008, Amsterdam, The Netherlands; 01/2008
  • ABSTRACT: ThinSight is a novel optical sensing system, fully integrated into a thin form factor display, capable of detecting multiple objects such as fingertips placed on or near the display surface. We describe this new hardware, and demonstrate how it can be embedded behind a regular LCD, allowing sensing without compromising display quality. Our aim is to capture rich sensor data through the display, which can be processed using computer vision techniques to enable interaction via multi-touch and physical objects. A major advantage of ThinSight over existing camera and projector based optical systems is its compact, low profile form factor making such interaction techniques more practical and deployable in real-world settings.
    Proceedings of the Emerging Displays Technologies Workshop, EDT 2007, Images and Beyond: The Future of Displays and Interaction, August 4th, 2007, San Diego, California, co-located with SIGGRAPH 2007; 01/2007
  • ABSTRACT: ThinSight is a novel optical sensing system, fully integrated into a thin form factor display, capable of detecting multiple fingers placed on or near the display surface. We describe this new hardware in detail, and demonstrate how it can be embedded behind a regular LCD, allowing sensing without degradation of display capability. With our approach, fingertips and hands are clearly identifiable through the display, allowing zero force multi-touch interaction. The approach of optical sensing also opens up the possibility for detecting other physical objects and visual markers through the display, and some initial experiments with these are described. A major advantage of ThinSight over existing camera and projector based optical systems is its compact, thin form-factor making it easier to deploy. We therefore envisage using this approach to capture rich sensor data through the display to enable both multi-touch and tangible interaction. We also discuss other novel capabilities of our system including interacting with the display from a distance and direct bidirectional communication between the display and mobile devices.
    Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, Newport, Rhode Island, USA, October 7-10, 2007; 01/2007
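    The "computer vision techniques" step the two ThinSight abstracts gesture at can be as simple as thresholding the low-resolution IR sensor image and finding connected components. A TypeScript sketch; the threshold and grid format are assumptions, not ThinSight's actual processing pipeline:

      const THRESHOLD = 128; // reflected-IR intensity treated as contact (assumed)

      // Return one [row, col] seed per detected blob in a grid of 0-255 values.
      function detectBlobs(grid: number[][]): Array<[number, number]> {
        const rows = grid.length, cols = grid[0].length;
        const seen = grid.map(row => row.map(() => false));
        const blobs: Array<[number, number]> = [];
        for (let r = 0; r < rows; r++) {
          for (let c = 0; c < cols; c++) {
            if (seen[r][c] || grid[r][c] < THRESHOLD) continue;
            blobs.push([r, c]);
            // flood-fill marks the whole connected component as visited
            const stack: Array<[number, number]> = [[r, c]];
            while (stack.length > 0) {
              const [y, x] = stack.pop()!;
              if (y < 0 || y >= rows || x < 0 || x >= cols) continue;
              if (seen[y][x] || grid[y][x] < THRESHOLD) continue;
              seen[y][x] = true;
              stack.push([y + 1, x], [y - 1, x], [y, x + 1], [y, x - 1]);
            }
          }
        }
        return blobs;
      }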
  • Physicality '06, Lancaster; 01/2006
  • Bill Buxton
    ABSTRACT: Let me dredge up an old observation. I made it during a CHI plenary talk. I was trying to explain why I wasn't coming to the conference any more. It was that the Gaithersburg Conference, which led to the formation of SIGCHI, took place after the commercial release of the graphical user interface and the Xerox Star. That is, the CHI literature played no role in the development of what was perhaps the greatest contribution to improving people's experience using computers. There was no CHI literature! Now let's flash forward. Imagine stacking up all of the CHI literature that has accumulated since then. It would make a pile that was a couple of stories high. Yet, despite all of the work that pile represents, we as a discipline have not come up with anything that even begins to compare with those innovations that preceded the establishment of our field as a distinct discipline. So here is the thought that drove me to speak then: We could have done so, we should have done so, and even now, I feel like a failure for not having done so.
    01/2006;
  • A Sellen, B. Buxton, J. Arnott