Takeo Igarashi

The University of Tokyo, Tokyo, Japan


Publications (156) · 34.73 Total Impact

  • ABSTRACT: One of the difficulties with standard route maps is accessing multi-scale routing information. The user needs to display the map at both a large scale to see details and a small scale to see an overview, but this requires tedious interaction such as zooming in and out. We propose using a hierarchical structure for a route map, called a "Route Tree", to address this problem, and describe an algorithm to automatically construct such a structure. A Route Tree is a hierarchical grouping of all small route segments that allows quick access to meaningful large- and small-scale views. We propose two Route Tree applications, "RouteZoom" for interactive map browsing and "TreePrint" for route information printing, to show the applicability and usability of the structure. We conducted a preliminary user study on RouteZoom, and the results showed that RouteZoom significantly lowers the interaction cost of obtaining information from a map compared to a traditional interactive map.
    Proceedings of the 19th international conference on Intelligent User Interfaces; 02/2014
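The hierarchical grouping behind a Route Tree can be illustrated with a short sketch. Everything here is an illustrative assumption, not the paper's actual construction algorithm: leaves stand for route segments with lengths, and adjacent nodes are merged bottom-up, shortest combined length first, so upper levels of the tree correspond to coarser (small-scale) views.

```python
# Toy sketch of a Route-Tree-like hierarchy over route segments.
# The merge criterion (smallest combined length of adjacent nodes)
# is an assumption made for illustration only.

class RouteNode:
    def __init__(self, length, children=None):
        self.length = length            # total route length covered
        self.children = children or []  # empty for leaf segments

def build_route_tree(segment_lengths):
    nodes = [RouteNode(l) for l in segment_lengths]
    while len(nodes) > 1:
        # merge the adjacent pair with the smallest combined length
        i = min(range(len(nodes) - 1),
                key=lambda k: nodes[k].length + nodes[k + 1].length)
        merged = RouteNode(nodes[i].length + nodes[i + 1].length,
                           [nodes[i], nodes[i + 1]])
        nodes[i:i + 2] = [merged]
    return nodes[0]

# two short segments in the middle of a long route get grouped first,
# so zooming into the root's left child reveals the detailed stretch
root = build_route_tree([1.0, 0.2, 0.3, 5.0])
```

Descending the tree then plays the role of zooming in: each node bounds a contiguous stretch of the route at one level of detail.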
  • ABSTRACT: We present a series of projects for end-user authoring of interactive robotic behaviors, with a particular focus on the style of those behaviors; we call this approach Style-by-Demonstration (SBD). We provide an overview of three SBD platforms: SBD for animated-character interactive locomotion paths, SBD for interactive robot locomotion paths, and SBD for interactive robot dance. The primary contribution of this article is a detailed cross-project analysis of the interaction designs and evaluation approaches employed, with the goal of providing general guidelines, stemming from our experiences, for both developing and evaluating SBD systems. In addition, we provide the first full account of our Puppet Master SBD algorithm, with an explanation of how it evolved through the projects.
    ACM Transactions on Interactive Intelligent Systems (TiiS). 01/2014; 3(4).
  • ABSTRACT: We present the faceton, a geometric modeling primitive designed for building architectural models with a six-degrees-of-freedom (DoF) input device in a virtual environment (VE). A faceton is given as an oriented point floating in the air and defines a plane of infinite extent passing through that point. A polygonal mesh model is constructed by taking the intersection of the planes associated with the facetons. With simple drag-and-drop and grouping interactions on facetons, users can easily create 3D architectural models in the VE. The faceton primitive and its interactions reduce the overhead of standard polygonal mesh modeling in a VE, where users must manually specify vertices and edges that may be far apart. The faceton representation is inspired by research on boundary representations (B-rep) and constructive solid geometry (CSG), but it is driven by a novel adaptive bounding algorithm and is specifically designed for 3D modeling in an immersive virtual environment.
    Proceedings of the 19th ACM Symposium on Virtual Reality Software and Technology; 10/2013
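The plane-intersection step can be illustrated with basic linear algebra: a faceton at point p with normal n defines the plane n · x = n · p, and a mesh vertex shared by three facetons is the intersection of their three planes. A minimal sketch using Cramer's rule (this is generic geometry, not the paper's adaptive bounding algorithm):

```python
# Sketch: mesh corners from triples of faceton planes via Cramer's rule.

def plane(p, n):
    # a faceton at point p with normal n gives the plane n . x = d
    d = sum(pi * ni for pi, ni in zip(p, n))
    return (list(n), d)

def det3(m):
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

def corner(*planes):
    normals = [n for n, _ in planes]
    ds = [d for _, d in planes]
    D = det3(normals)  # nonzero when the three planes meet at a single point
    point = []
    for col in range(3):
        m = [row[:] for row in normals]      # replace one column with the
        for row, d in zip(m, ds):            # right-hand sides (Cramer)
            row[col] = d
        point.append(det3(m) / D)
    return point

# three axis-aligned facetons meet at the corner (1, 2, 3)
c = corner(plane((1, 0, 0), (1, 0, 0)),
           plane((0, 2, 0), (0, 1, 0)),
           plane((0, 0, 3), (0, 0, 1)))
```

In a real modeler, degenerate triples (near-parallel planes, D ≈ 0) would have to be filtered out before solving.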
  • ABSTRACT: User-centered programming by demonstration is an approach that places the needs of people above algorithmic constraints and requirements. In this paper we present a user-centered programming-by-demonstration project for authoring interactive robotic locomotion style. The style in which a robot moves about a space, expressed through its motions, can be used for communication. For example, a robot could move aggressively in reaction to a person's actions, or instead react with careful, submissive movements. We present a new demonstration interface, an algorithm, and evaluation results.
    Proceedings of the Twenty-Third international joint conference on Artificial Intelligence; 08/2013
  • ABSTRACT: We introduce an input and output device that enables illumination, bi-directional data communication, and position sensing on a soft cloth. This "LightCloth" is woven from diffusive optical fibers. Sensor-emitter pairs attached to bundles of contiguous fibers enable bundle-specific light input and output. We developed a prototype system that allows full-color illumination and 8-bit data input via infrared signals.
    CHI '13 Extended Abstracts on Human Factors in Computing Systems; 04/2013
  • ABSTRACT: This paper introduces an input and output device that enables illumination, bi-directional data communication, and position sensing on a soft cloth. This "LightCloth" is woven from diffusive optical fibers. Since the fibers are arranged in parallel, the cloth carries one-dimensional position information. Sensor-emitter pairs attached to bundles of contiguous fibers enable bundle-specific light input and output. We developed a prototype system that allows full-color illumination and 8-bit data input via infrared signals. As an application, we present a chair with a LightCloth cover whose illumination pattern is specified using an infrared light pen. We describe the implementation details of the device and discuss possible interactions with it.
    Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; 04/2013
  • Jun Kato, Daisuke Sakamoto, Takeo Igarashi
    ABSTRACT: Current programming environments use textual or symbolic representations. While these representations are appropriate for describing logical processes, they are not appropriate for representing raw values such as human and robot posture data, which are necessary for handling gesture input and controlling robots. To address this issue, we propose Picode, a text-based development environment integrated with visual representations: photos of humans and robots. With Picode, the user first takes a photo to bind it to posture data, and then drags and drops the photo into the code editor, where it is displayed as an inline image. A preliminary in-house user study suggested that taking photos has positive effects on the programming experience.
    CHI '13: Proceedings of the SIGCHI conference on Human Factors in Computing Systems; 04/2013
  • ABSTRACT: Image storyboards of films and videos are useful for quick browsing and automatic video processing. A common approach to producing image storyboards is to display a set of selected key-frames in temporal order, which has been widely used for 2D video data. However, such an approach cannot be applied to 3D animation data, because different information is revealed by changing parameters such as the viewing angle and the duration of the animation. Moreover, viewers' interests differ from person to person, so it is difficult to draw a single image that perfectly abstracts the entire 3D animation. In this paper, we propose a system that allows users to interactively browse an animation and produce a comic sequence from it. Each snapshot in the comic optimally visualizes a portion of the original animation, taking into account the geometry and motion of the characters and objects in the scene. This is achieved by a novel algorithm that automatically produces a hierarchy of snapshots from the input animation. Our user interface allows users to arrange the snapshots according to the complexity of the movements of the characters and objects, the duration of the animation, and the page area available for the comic sequence. Our system is useful for quickly browsing through a large amount of animation data and for semi-automatically synthesizing a storyboard from a long animation sequence.
    Computer Graphics Forum 01/2013; 32(7). · 1.64 Impact Factor
  • L. Zhu, T. Igarashi, J. Mitani
    ABSTRACT: We introduce soft folding, a new interactive method for designing and exploring thin-plate forms. A user specifies sharp and soft folds as two-dimensional (2D) curves on a flat sheet, along with the magnitude and sharpness of each fold. Based on the soft folds, the system then computes the three-dimensional (3D) folded shape. Internally, the system first computes a fold field, which defines local folding operations on the flat sheet. A fold field is a generalization of the discrete fold graph in origami, replacing a graph with sharp folds by a continuous field with soft folds. Next, local patches are folded independently according to the fold field. Finally, a globally folded 3D shape is obtained by assembling the locally folded patches. This algorithm computes an approximation of 3D developable surfaces with user-defined soft folds at interactive speed. The user can later apply nonlinear physical simulation to generate more realistic results. Experimental results demonstrate that soft folding is effective for producing complex folded shapes with controllable sharpness.
    Computer Graphics Forum 01/2013; 32(7). · 1.64 Impact Factor
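A crude 2D analogue conveys the sharp-versus-soft distinction: folding a polyline rotates everything beyond the fold point by the fold magnitude, and "sharpness" can be imitated by spreading that rotation over several joints. This is only an illustrative stand-in for the paper's continuous fold field on a surface:

```python
import math

# Toy 2D fold: rotate the tail of a polyline about a fold point.
# `sharpness` spreads the total angle over that many joints, a crude
# stand-in for the continuous fold field of the paper.

def fold_polyline(points, fold_index, angle, sharpness=1):
    out = [tuple(p) for p in points]
    per_joint = angle / sharpness
    for j in range(fold_index, min(fold_index + sharpness, len(out) - 1)):
        px, py = out[j]                       # pivot for this joint
        c, s = math.cos(per_joint), math.sin(per_joint)
        for k in range(j + 1, len(out)):      # rotate everything past it
            x, y = out[k][0] - px, out[k][1] - py
            out[k] = (px + c * x - s * y, py + s * x + c * y)
    return out

# a sharp 90-degree fold at the second point of a flat strip
flat = [(0, 0), (1, 0), (2, 0), (3, 0)]
folded = fold_polyline(flat, 1, math.pi / 2, sharpness=1)
```

With a larger `sharpness`, the same total angle produces a rounded crease instead of a sharp one, which is the soft-fold idea in one dimension.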
  • Yuki Igarashi, Takeo Igarashi, Jun Mitani
    ABSTRACT: Beadwork is the art of connecting beads together with wire. Igarashi et al. [2012] presented an interactive beadwork design system called Beady to help non-professionals design their own 3D beadwork. They observed that existing beadwork designs, especially large ones, typically consist of hexagonal faces, probably because a hexagonal mesh (honeycomb lattice) is the most efficient structure for holding flat surfaces with minimal support material. After conducting physical simulations, they also found that a near-hexagonal mesh, obtained as the dual of a triangular mesh, yields a more aesthetically pleasing beadwork model. However, the interactive modeling interface of the original Beady system did not take this into account, so the user had to carefully combine various editing operations to construct a near-hexagonal polyhedron. Existing 3D modeling software is also inconvenient for near-hexagonal mesh modeling. We therefore introduce mesh-editing operations specifically designed for creating near-hexagonal polyhedra. By combining the original Beady interface with our method, the user can design near-hexagonal polyhedra more easily.
    SIGGRAPH Asia 2012 Posters; 11/2012
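The dual-mesh construction mentioned above is standard: each triangle of the input mesh becomes a dual vertex (at its centroid, say), and each original vertex becomes a dual face formed by its incident triangles, so an interior vertex of valence 6 yields a hexagonal face. A minimal sketch of that idea (not the Beady editing operations themselves; dual faces are left unordered for brevity):

```python
import math

# Dual of a triangular mesh: triangle -> dual vertex (centroid),
# original vertex -> dual face (its incident triangles).

def dual_mesh(vertices, triangles):
    centroids = [tuple(sum(c) / 3.0 for c in zip(*(vertices[i] for i in tri)))
                 for tri in triangles]
    faces = {}  # original vertex index -> indices of incident triangles
    for t, tri in enumerate(triangles):
        for v in tri:
            faces.setdefault(v, []).append(t)
    return centroids, faces

# triangle fan: a valence-6 center vertex dualizes to a hexagonal face
verts = [(0.0, 0.0)] + [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3))
                        for k in range(6)]
tris = [(0, 1 + k, 1 + (k + 1) % 6) for k in range(6)]
centroids, faces = dual_mesh(verts, tris)
```

In this example `faces[0]` has six members, i.e. the center vertex becomes a hexagon, which is exactly the near-hexagonal structure the beadwork exploits.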
  • ABSTRACT: More and more services and information are being stored in the cloud. Since anybody can access an Internet terminal, it is critical to provide appropriate security mechanisms. One popular approach is to strengthen the protocols and encryption algorithms, which is being actively investigated in the security field. Another potentially effective approach is to enhance the user interface of security systems. Since security is ultimately a human-computer interaction problem, we believe there are many interesting opportunities in the latter approach.
    SIGGRAPH Asia 2012 Emerging Technologies; 11/2012
  • ABSTRACT: RoboJockey is an interface for creating robot behavior that gives people a new entertainment experience with robots, in particular making robots dance, in the spirit of a "disc jockey" or "video jockey" (Figure 1, left). Users can create continuous robot dance behaviors in the interface using a simple visual language (Figure 1, right). The system generates music with a beat and choreographs the robots using the user-created behaviors. RoboJockey has a multi-touch tabletop interface that supports multi-user collaboration; every object is designed as a circle and can be operated from any position around the tabletop. RoboJockey supports a humanoid robot capable of expressing human-like dance behaviors (Figure 1, center).
    SIGGRAPH Asia 2012 Emerging Technologies; 11/2012
  • Genki Furumi, Daisuke Sakamoto, Takeo Igarashi
    ABSTRACT: The screen of a tabletop computer is often occluded by physical objects such as coffee cups, which makes it difficult to see the virtual elements under the physical objects (visibility) and to manipulate them (manipulability). We present a user interface widget, called "SnapRail," that addresses these problems, especially occlusion of a manipulable collection of discrete virtual elements such as icons. SnapRail detects a physical object on the surface and the virtual elements under it, then snaps the virtual elements to a rail widget that appears around the object. The user can then manipulate the virtual elements along the rail. We conducted a preliminary user study to evaluate the potential of this interface and to collect initial feedback; SnapRail received positive feedback from participants.
    Proceedings of the 2012 ACM international conference on Interactive tabletops and surfaces; 11/2012
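The core snapping geometry can be sketched in a few lines. The circular-object assumption and the fixed rail offset are illustrative choices, not details from the paper: icons whose centers fall under the object's footprint are pushed radially outward onto a "rail" just beyond its radius.

```python
import math

# Toy SnapRail-style snapping: icons occluded by a circular object are
# moved radially to a rail at radius + rail_offset; others are untouched.

def snap_to_rail(icons, center, radius, rail_offset=20):
    cx, cy = center
    snapped = []
    for (x, y) in icons:
        dx, dy = x - cx, y - cy
        d = math.hypot(dx, dy)
        if d < radius:                      # icon lies under the object
            if d == 0:                      # icon exactly at the center:
                dx, dy, d = 1.0, 0.0, 1.0   # pick an arbitrary direction
            r = radius + rail_offset
            snapped.append((cx + dx / d * r, cy + dy / d * r))
        else:
            snapped.append((x, y))
    return snapped

# an icon 10 px from the cup center lands on the rail, 70 px out
snapped = snap_to_rail([(60, 50), (200, 200)], center=(50, 50), radius=50)
```

Pushing radially preserves each icon's angular position, so the rail layout still reflects where the icons were before the object covered them.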
  • Yuta Sugiura, Masahiko Inami, Takeo Igarashi
    ABSTRACT: We have developed a simple skin-like user interface that can be easily attached to curved as well as flat surfaces and used to measure the tangential force generated by pinching and dragging interactions. The interface consists of several photoreflectors, each comprising an IR LED and a phototransistor, and an elastic fabric such as a stocking or a rubber membrane. The sensing method is based on our observation that photoreflectors can measure the ratio of expansion and contraction of a stocking from the changes in transmissivity of IR light passing through it. Since a stocking is thin, stretchable, and nearly transparent, it can be easily attached to various types of objects, such as mobile devices, robots, and different parts of the body, as well as to various conventional pressure sensors, without altering the original shape of the object. It can also provide natural haptic feedback in accordance with the amount of force exerted. A system using several such sensors can determine the direction of a two-dimensional force. A variety of example applications illustrate the utility of this sensing system.
    Proceedings of the 25th annual ACM symposium on User interface software and technology; 10/2012
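One simple way to turn such raw phototransistor readings into a stretch ratio is piecewise-linear interpolation over a calibration table. The calibration values below are made up for illustration; the paper does not specify this mapping:

```python
# Hypothetical calibration: map a raw IR-transmissivity reading to a
# fabric stretch ratio by piecewise-linear interpolation, clamping at
# the ends of the calibration table.

def stretch_ratio(reading, calibration):
    pts = sorted(calibration)          # (raw_reading, stretch) pairs
    if reading <= pts[0][0]:
        return pts[0][1]
    if reading >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= reading <= x1:
            t = (reading - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

# made-up table: relaxed fabric reads 100, fully stretched reads 400
cal = [(100, 1.0), (200, 1.2), (400, 1.5)]
ratio = stretch_ratio(300, cal)
```

Combining two such sensors placed orthogonally would give the two components of the tangential force direction described in the abstract.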
  • ABSTRACT: This study presents an interactive configuration tool that helps non-expert users design a specific navigation route for a mobile robot in an indoor environment. The user places small active markers, called pebbles, on the floor along the desired route in order to guide the robot to the destination. The active markers establish a navigation network by communicating with each other via IR beacons, and the robot follows the markers to reach the designated goal. During installation, the user receives feedback from LED indicators and voice prompts, so that they can immediately tell whether the navigation route is configured as expected. With this tool, a novice user can easily customize a mobile robot for various indoor tasks.
    Adjunct proceedings of the 25th annual ACM symposium on User interface software and technology; 10/2012
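The marker network amounts to a linked chain: each pebble hands the robot off to the next one it can reach, until the goal is found. A toy sketch of that traversal, with the data model assumed for illustration (the real system negotiates the handoff over IR):

```python
# Follow a chain of pebble markers from start to goal. `next_marker`
# is a hypothetical map from each pebble to the next one it relays to;
# a hop limit catches misconfigured (looping or broken) routes.

def follow_pebbles(next_marker, start, goal, max_hops=100):
    route, current = [start], start
    while current != goal:
        current = next_marker[current]   # IR handoff to the next pebble
        route.append(current)
        if len(route) > max_hops:
            raise RuntimeError("navigation route not configured correctly")
    return route

chain = {"A": "B", "B": "C", "C": "goal"}
route = follow_pebbles(chain, "A", "goal")
```

The hop-limit failure corresponds to the feedback situation in the abstract: if the chain never reaches the goal, the installer is told the route is misconfigured.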
  • ABSTRACT: We propose 2D stick figures as a unified medium for visualizing and searching human motion data. Stick figures can express a wide range of human motion, and they are easy for people without professional training to draw. In our interface, the user can browse the overall motion by viewing stick-figure images generated from the database and retrieve motions directly by using sketched stick figures as an input query. We started with a preliminary survey to observe how people draw stick figures. Based on the rules observed in this study, we developed an algorithm that converts motion data into a sequence of stick figures. A feature-based comparison method between stick figures provides interactive and progressive search, assisting the user's sketching by showing the current retrieval result after each stroke. We demonstrate the utility of the system with a user study in which participants retrieved example motion segments from a database of 102 motion files using our interface. © 2012 Wiley Periodicals, Inc.
    Computer Graphics Forum 09/2012; 31(7pt1):2057-2065. · 1.64 Impact Factor
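A bare-bones version of feature-based stick-figure matching: describe each figure by its limb angles and rank database poses by summed angular difference to the query. The actual feature set and distance in the paper are richer; this only shows the shape of the computation:

```python
import math

# Represent a stick figure by the angles of consecutive limb segments,
# then rank database poses by total (wrapped) angular difference.

def angle_features(joints):
    # joints: list of (x, y) endpoints of a connected limb chain
    return [math.atan2(y1 - y0, x1 - x0)
            for (x0, y0), (x1, y1) in zip(joints, joints[1:])]

def angular_distance(a, b):
    # wrap each difference into [-pi, pi) before summing magnitudes
    return sum(abs((x - y + math.pi) % (2 * math.pi) - math.pi)
               for x, y in zip(a, b))

def rank_poses(query_joints, database):
    q = angle_features(query_joints)
    return sorted(database,
                  key=lambda pose: angular_distance(q, angle_features(pose)))

standing = [(0, 0), (0, 1), (0, 2)]   # vertical chain
lying    = [(0, 0), (1, 0), (2, 0)]   # horizontal chain
ranked = rank_poses([(0, 0), (0, 1), (0, 2)], [lying, standing])
```

Because only angles are compared, the match is invariant to where on the canvas the figure is drawn, which suits rough user sketches; re-running the ranking after every stroke gives the progressive search described in the abstract.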
  • Jun Kato, Daisuke Sakamoto, Takeo Igarashi
    ABSTRACT: There are many toolkits for physical UIs, but most physical UI applications are not locomotive. When programmers want to make things move around in the environment, they face difficulties related to robotics. Toolkits for robot programming, unfortunately, are usually not as accessible as those for building physical UIs. To address this interdisciplinary issue, we propose Phybots, a toolkit that allows researchers and interaction designers to rapidly prototype applications with locomotive robotic things. The contributions of this research are the combination of a hardware setup, a software API, its underlying architecture, and a graphical runtime debugging tool that together support the whole prototyping activity. This paper introduces the toolkit, example applications, and lessons learned from three user studies.
    DIS '12: Proceedings of the 9th conference on Designing Interactive Systems; 06/2012
  • ABSTRACT: Vignette is an interactive system that facilitates texture creation in pen-and-ink illustrations. Unlike existing systems, Vignette preserves illustrators' workflow and style: users draw a fraction of a texture and use gestures to automatically fill regions with it. Our exploration of natural workflow and gesture-based interaction was inspired by the traditional way of creating illustrations. We currently support both 1D and 2D synthesis with stitching. Our system also has interactive refinement and editing capabilities that provide higher-level texture control, helping artists achieve their desired vision. Vignette makes the process of illustration more enjoyable, and first-time users can create rich textures from scratch within minutes.
    05/2012;
  • ABSTRACT: This paper presents an intuitive sketching interface that allows the user to interactively place a 3D human character in a sitting position on a chair. Within our framework, the user sketches the target pose as a 2D stick figure and attaches selected joints to the environment (for example, the feet to the ground) with a pin tool. Because reconstructing a 3D pose from a 2D stick figure is an ill-posed problem with many possible solutions, the key idea of our paper is to reduce the solution space by considering the interaction between the character and the environment and by adding physics constraints, such as balance and collision. We formulate this reconstruction as a nonlinear optimization problem and solve it with a genetic algorithm (GA) and a quasi-Newton solver. With a GPU implementation, our system generates physically correct and visually pleasing poses at interactive speed. Promising experimental results and a user study demonstrate the efficacy of our method.
IEEE Transactions on Visualization and Computer Graphics; 02/2012.
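The GA stage can be sketched with a toy objective. The population size, mutation scale, and the two-term "pose energy" below are stand-ins, not the paper's energy terms or parameters; the point is the structure of evolve-then-refine, with the quasi-Newton polish omitted:

```python
import random

# Minimal genetic-algorithm minimizer: elitist selection keeps the
# best half, children are averaged parents plus Gaussian mutation.

def genetic_minimize(objective, dim, pop=30, gens=60, seed=0):
    rng = random.Random(seed)
    population = [[rng.uniform(-3, 3) for _ in range(dim)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=objective)
        parents = population[:pop // 2]       # elitism: best half survives
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.sample(parents, 2)
            children.append([(x + y) / 2 + rng.gauss(0, 0.1)  # crossover
                             for x, y in zip(a, b)])          # + mutation
        population = parents + children
    return min(population, key=objective)

# toy "pose energy": squared distance of two joint parameters from a target
best = genetic_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 0.5) ** 2,
                        dim=2)
```

In the paper's pipeline, a solution like `best` would then seed a quasi-Newton solver for fast local refinement, since the GA is good at escaping the many local minima of an ill-posed 2D-to-3D reconstruction but slow at polishing.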

Publication Stats

1k Citations
34.73 Total Impact Points

Institutions

  • 2003–2014
    • The University of Tokyo
      • Department of Computer Science
      Tokyo, Japan
  • 2012
    • University of Texas at Austin
      Austin, Texas, United States
  • 2009–2012
    • RIKEN
      Wakō, Saitama, Japan
    • Keio University
      • Graduate School of Media Design
      Tokyo, Japan
  • 2011
    • University of Calgary
      Calgary, Alberta, Canada
  • 2010
    • University of Tsukuba
      • Centre for Computational Sciences
      Tsukuba, Ibaraki, Japan
  • 2006–2008
    • Sony Computer Science Laboratories, Inc.
      Tokyo, Japan