Takeo Igarashi

The University of Tokyo, Tōkyō, Japan

Publications (195) · 53.91 Total Impact

  • Proceedings of the Second International Conference on Human-agent Interaction, Tsukuba, Japan; 10/2014
  • ABSTRACT: Computer displays play an important role in connecting the information world and the real world. In the era of ubiquitous computing, it is essential to be able to access information in a fluid way, and the unobtrusive integration of displays into our living environment is a basic requirement for achieving this. Here, we propose a display technology that exploits the phenomenon whereby the shading properties of fur change as its fibers are raised or flattened. One can erase drawings by flattening the fibers, sweeping the surface by hand in the direction of fiber growth, and draw lines by raising the fibers, moving a finger in the opposite direction. These material properties can be found in everyday items such as carpets and plush toys. Our technology can turn these ordinary objects into displays without requiring or creating any irreversible modifications to the objects. It can be used to make large-scale displays, and the drawings it creates incur no running costs. (See the code sketch following the publication list.)
    07/2014;
  • Jun Kato, Takeo Igarashi
    Proceedings of the 2014 Graphics Interface Conference, Montreal, Quebec, Canada; 05/2014
  • ABSTRACT: We present an animation creation workflow for integrating offline physical, painted media into the digital authoring of Flash-style animations. Generally, animators create animations with standardized digital authoring software. However, the results tend to lack the individualism or atmosphere of physical media. In contrast, illustrators are skilled in painting with physical media but have limited experience with animation. To incorporate their skills, we present a workflow that integrates the offline painting and digital animation creation processes in a labor-saving manner. First, the user makes a rough sketch of the visual elements and defines their movements using our digital authoring software with a sketch interface. These images are then exported to printed pages, which the user paints over with physical media. Finally, the work is scanned and imported back into the digital content, forming a composite animation that combines digital and physical media. We present an implementation of this system to demonstrate its workflow. We also discuss the advantages of using physical media in digital animations through design evaluations.
    04/2014;
  • ABSTRACT: Programmers write and edit their source code in a text editor. However, when they design the look and feel of a game application, such as the image of a game character or the arrangement of a button, it would be more intuitive to edit the application by directly interacting with these objects in the game window. Although modern game engines provide this capability, they use a highly structured framework and limit what the programmer can edit. In this paper, we present CapStudio, a development environment for visual applications with an interactive screencast. A screencast is a movie-player-like output window with code-editing functionality that works alongside a traditional text editor. Modifications to the source code in the text editor and to visual elements on the screencast are immediately reflected in each other. We created an example application and confirmed the feasibility of our approach. (See the code sketch following the publication list.)
    04/2014;
  • ABSTRACT: One of the difficulties with standard route maps is accessing multi-scale routing information. The user needs to view the map both at a large scale to see details and at a small scale to see an overview, but this requires tedious interaction such as zooming in and out. We propose using a hierarchical structure for a route map, called a "Route Tree", to address this problem, and describe an algorithm to construct such a structure automatically. A Route Tree is a hierarchical grouping of all small route segments that allows quick access to meaningful large- and small-scale views. We propose two Route Tree applications, "RouteZoom" for interactive map browsing and "TreePrint" for route information printing, to show the applicability and usability of the structure. We conducted a preliminary user study of RouteZoom, and the results showed that it significantly lowers the interaction cost of obtaining information from a map compared to a traditional interactive map. (See the code sketch following the publication list.)
    Proceedings of the 19th international conference on Intelligent User Interfaces; 02/2014
  • ABSTRACT: We present a series of projects for end-user authoring of interactive robotic behaviors, with a particular focus on the style of those behaviors: we call this approach Style-by-Demonstration (SBD). We provide an overview of three SBD platforms: SBD for animated character interactive locomotion paths, SBD for interactive robot locomotion paths, and SBD for interactive robot dance. The primary contribution of this article is a detailed cross-project SBD analysis of the interaction designs and evaluation approaches employed, with the goal of providing general guidelines, stemming from our experiences, for both developing and evaluating SBD systems. In addition, we provide the first full account of our Puppet Master SBD algorithm, with an explanation of how it evolved through the projects.
    ACM Transactions on Interactive Intelligent Systems (TiiS). 01/2014; 3(4).
  • ABSTRACT: The availability of low-cost digital fabrication devices enables new groups of users to participate in the design and fabrication of things. However, software that assists in the transition from design to actual fabrication is often overlooked. In this paper, we introduce PacCAM, a system for packing 2D parts within a given source material for fabrication with 2D cutting machines. Our solution combines computer vision, to capture the shape of the source material, with a user interface that incorporates 2D rigid-body simulation and snapping. A user study demonstrated that participants could produce layouts faster with our system than with traditional drafting tools. PacCAM caters to a variety of 2D fabrication applications and can help reduce material waste. (See the code sketch following the publication list.)
    10/2013;
  • ABSTRACT: We present faceton, a geometric modeling primitive designed for building architectural models with a six-degrees-of-freedom (DoF) input device in a virtual environment (VE). A faceton is given as an oriented point floating in the air and defines a plane of infinite extent passing through that point. The polygonal mesh model is constructed by taking the intersection of the planes associated with the facetons. With simple drag-and-drop and grouping interactions on facetons, users can easily create 3D architectural models in the VE. The faceton primitive and its interactions reduce the overhead of standard polygonal mesh modeling in a VE, where users must manually specify vertices and edges that may be far apart. The faceton representation is inspired by research on boundary representations (B-rep) and constructive solid geometry (CSG), but it is driven by a novel adaptive bounding algorithm and is specifically designed for 3D modeling activities in an immersive virtual environment. (See the code sketch following the publication list.)
    Proceedings of the 19th ACM Symposium on Virtual Reality Software and Technology; 10/2013
  • ABSTRACT: We propose a technique called voice augmented manipulation (VAM) for augmenting user operations in a mobile environment. This technique augments user interactions on mobile devices, such as finger gestures and button presses, with voice. For example, when a user makes a finger gesture on a mobile phone and voices a sound into it, the operation continues until the user stops making the sound or makes another finger gesture. The VAM interface also provides a button-based interface in which the function connected to the button is augmented by voiced sounds. Two experiments verified the effectiveness of the VAM technique and showed that the number of repeated finger gestures decreased significantly compared with current touch-input techniques, suggesting that VAM is useful for supporting user control in a mobile environment. (See the code sketch following the publication list.)
    08/2013;
  • ABSTRACT: User-Centered Programming by Demonstration is an approach that places the needs of people above algorithmic constraints and requirements. In this paper, we present a user-centered programming by demonstration project for authoring interactive robotic locomotion style. The style in which a robot moves about a space, expressed through its motions, can be used for communication. For example, a robot could move aggressively in reaction to a person's actions, or alternatively react using careful, submissive movements. We present a new demonstration interface, algorithm, and evaluation results.
    Proceedings of the Twenty-Third international joint conference on Artificial Intelligence; 08/2013
  • Oliver Mattausch, Takeo Igarashi, Michael Wimmer
    ABSTRACT: We present an algorithm for artistically modifying physically based shadows. With our tool, an artist can directly edit the shadow boundaries in the scene in an intuitive fashion similar to freeform curve editing. Our algorithm then makes these shadow edits consistent with respect to varying light directions and scene configurations, by creating a shadow mesh from the new silhouettes. The shadow mesh helps a modified shadow volume algorithm cast shadows that conform to the artistic shadow boundary edits, while providing plausible interaction with dynamic environments, including animation of both characters and light sources. Our algorithm provides significantly more fine-grained local and direct control than previous artistic light editing methods, which makes it simple to adjust the shadows in a scene to reach a particular effect, or to create interesting shadow shapes and shadow animations. All cases are handled with a single intuitive interface, be it soft shadows, or (self-)shadows on arbitrary receivers.
    Computer Graphics Forum 05/2013; 32(2):175-184. · 1.64 Impact Factor
  • ABSTRACT: We introduce an input and output device that enables illumination, bi-directional data communication, and position sensing on a soft cloth. This "LightCloth" is woven from diffusive optical fibers. Sensor-emitter pairs attached to bundles of contiguous fibers enable bundle-specific light input and output. We developed a prototype system that allows full-color illumination and 8-bit data input by infrared signals.
    CHI '13 Extended Abstracts on Human Factors in Computing Systems; 04/2013
  • ABSTRACT: This paper introduces an input and output device that enables illumination, bi-directional data communication, and position sensing on a soft cloth. This "LightCloth" is woven from diffusive optical fibers. Since the fibers are arranged in parallel, the cloth provides one-dimensional position information. Sensor-emitter pairs attached to bundles of contiguous fibers enable bundle-specific light input and output. We developed a prototype system that allows full-color illumination and 8-bit data input by infrared signals. As an application, we present a chair with a LightCloth cover whose illumination pattern is specified using an infrared light pen. Here we describe the implementation details of the device and discuss possible interactions with it.
    Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; 04/2013
  • Jun Kato, Daisuke Sakamoto, Takeo Igarashi
    ABSTRACT: Current programming environments use textual or symbolic representations. While these representations are appropriate for describing logical processes, they are not well suited to representing raw values such as human and robot posture data, which are necessary for handling gesture input and controlling robots. To address this issue, we propose Picode, a text-based development environment integrated with visual representations: photos of humans and robots. With Picode, the user first takes a photo to bind it to posture data, then drags and drops the photo into the code editor, where it is displayed as an inline image. A preliminary in-house user study suggested that taking photos has positive effects on the programming experience. (See the code sketch following the publication list.)
    CHI '13: Proceedings of the SIGCHI conference on Human Factors in Computing Systems; 04/2013
  • ABSTRACT: Image storyboards of films and videos are useful for quick browsing and automatic video processing. A common approach to producing image storyboards is to display a set of selected key-frames in temporal order, which has been widely used for 2D video data. However, such an approach cannot be applied to 3D animation data, because different information is revealed by changing parameters such as the viewing angle and the duration of the animation. Also, the interests of viewers may differ from person to person. As a result, it is difficult to draw a single image that perfectly abstracts the entire 3D animation. In this paper, we propose a system that allows users to interactively browse an animation and produce a comic sequence from it. Each snapshot in the comic optimally visualizes a duration of the original animation, taking into account the geometry and motion of the characters and objects in the scene. This is achieved by a novel algorithm that automatically produces a hierarchy of snapshots from the input animation. Our user interface allows users to arrange the snapshots according to the complexity of the movements of the characters and objects, the duration of the animation, and the page area available for the comic sequence. Our system is useful for quickly browsing through a large amount of animation data and semi-automatically synthesizing a storyboard from a long animation sequence.
    Computer Graphics Forum 01/2013; 32(7). · 1.64 Impact Factor
  • L. Zhu, T. Igarashi, J. Mitani
    ABSTRACT: We introduce soft folding, a new interactive method for designing and exploring thin-plate forms. A user specifies sharp and soft folds as two-dimensional (2D) curves on a flat sheet, along with the magnitude and sharpness of each fold. Based on these folds, the system then computes the three-dimensional (3D) folded shape. Internally, the system first computes a fold field, which defines local folding operations on a flat sheet. A fold field is a generalization of a discrete fold graph in origami, replacing a graph of sharp folds with a continuous field of soft folds. Next, local patches are folded independently according to the fold field. Finally, a globally folded 3D shape is obtained by assembling the locally folded patches. This algorithm computes an approximation of 3D developable surfaces with user-defined soft folds at interactive speed. The user can later apply nonlinear physical simulation to generate more realistic results. Experimental results demonstrate that soft folding is effective for producing complex folded shapes with controllable sharpness.
    Computer Graphics Forum 01/2013; 32(7). · 1.64 Impact Factor
  • Yuki Igarashi, Takeo Igarashi, Jun Mitani
    ABSTRACT: Beadwork is the art of connecting beads together with wire. Igarashi et al. [2012] presented an interactive beadwork design system called Beady to help non-professionals design their own 3D beadwork. They observed that existing beadwork designs, especially large ones, typically consist of hexagonal faces. This is probably because a hexagonal mesh (honeycomb lattice) is the most efficient structure for holding flat surfaces with minimal support material. After conducting physical simulations, they also found that a near-hexagonal mesh, obtained as the dual of a triangular mesh, yields a more aesthetically pleasing beadwork model. However, the interactive modeling interface of the original Beady system did not consider this, so the user had to carefully combine various editing operations to construct a near-hexagonal polyhedron. Existing 3D modeling software is also inconvenient for near-hexagonal mesh modeling. Therefore, we introduce mesh-editing operations specifically designed for creating near-hexagonal polyhedra. By combining the original Beady interface with our method, the user can design near-hexagonal polyhedra more easily. (See the code sketch following the publication list.)
    SIGGRAPH Asia 2012 Posters; 11/2012
  • ABSTRACT: More and more services and information are being stored in the cloud. Since these can be accessed from any Internet terminal, it is critical to provide appropriate security mechanisms. One popular approach is to strengthen protocols and encryption algorithms, which is now being actively investigated in the security field. Another potentially effective approach is to enhance the user interface of security systems. Since security is ultimately a human-computer interaction problem, we believe there are many interesting opportunities in the latter approach.
    SIGGRAPH Asia 2012 Emerging Technologies; 11/2012
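
Code Sketches

The fur-display entry above (07/2014) describes drawing by raising fibers against their growth direction and erasing by flattening them along it. The following is a minimal toy model of that principle, assuming a fixed growth direction and a binary raised/flattened state per cell; it is purely illustrative and not the authors' implementation.

```python
import numpy as np

GROWTH = np.array([1.0, 0.0])   # assumed fiber growth direction (unit vector)


def stroke(canvas, cells, direction):
    """Raise fibers (draw) when stroking against the grain, flatten (erase) otherwise."""
    against_grain = float(np.dot(direction, GROWTH)) < 0.0
    for r, c in cells:
        canvas[r, c] = 1 if against_grain else 0
    return canvas


if __name__ == "__main__":
    fur = np.zeros((4, 8), dtype=int)                            # 0 = flattened, 1 = raised
    stroke(fur, [(1, i) for i in range(8)], direction=[-1, 0])   # draw a dark line
    stroke(fur, [(1, i) for i in range(4)], direction=[1, 0])    # sweep with the grain: erase half
    print(fur)
```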
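
The CapStudio entry (04/2014) hinges on keeping a text editor and a screencast view consistent with each other. A minimal sketch of such two-way binding is shown below; all class and function names are invented for illustration and do not come from CapStudio itself.

```python
from typing import Callable, List


class SyncedProperty:
    """A named value that notifies every registered view when it changes."""

    def __init__(self, name: str, value):
        self.name = name
        self.value = value
        self._listeners: List[Callable[[str, object], None]] = []

    def subscribe(self, listener: Callable[[str, object], None]) -> None:
        self._listeners.append(listener)

    def set(self, value, source=None) -> None:
        self.value = value
        for listener in self._listeners:
            if listener is not source:      # do not echo the change back to its origin
                listener(self.name, value)


if __name__ == "__main__":
    def code_editor(name, value):           # would rewrite the literal in the source code
        print(f"[editor]     {name} = {value!r}")

    def screencast(name, value):            # would redraw the object on the movie-like view
        print(f"[screencast] {name} -> {value!r}")

    x = SyncedProperty("player.x", 100)
    x.subscribe(code_editor)
    x.subscribe(screencast)

    x.set(160, source=code_editor)          # typed in the editor: the screencast updates
    x.set(40, source=screencast)            # dragged on the screencast: the editor updates
```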
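
The Route Tree entry (02/2014) describes a hierarchy whose leaves are small route segments and whose upper levels give coarser overviews. Below is a small sketch of such a structure, assuming a simple pairwise merging strategy rather than the paper's construction algorithm; all names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class RouteNode:
    length_km: float                          # total length covered by this node
    label: str                                # e.g. road name or region
    children: List["RouteNode"] = field(default_factory=list)

    def overview(self, max_depth: int, depth: int = 0) -> List[str]:
        """Labels down to max_depth: a small max_depth gives a coarse overview."""
        if depth == max_depth or not self.children:
            return [f"{self.label} ({self.length_km:.1f} km)"]
        out: List[str] = []
        for child in self.children:
            out.extend(child.overview(max_depth, depth + 1))
        return out


def build_route_tree(segments: List[RouteNode]) -> RouteNode:
    """Greedily merge neighbouring segments until a single root remains."""
    level = segments
    while len(level) > 1:
        merged = []
        for i in range(0, len(level), 2):
            group = level[i:i + 2]
            merged.append(RouteNode(
                length_km=sum(n.length_km for n in group),
                label=" / ".join(n.label for n in group),
                children=group,
            ))
        level = merged
    return level[0]


if __name__ == "__main__":
    segments = [RouteNode(0.4, "Hongo St."), RouteNode(1.2, "Route 17"),
                RouteNode(8.5, "Expressway C2"), RouteNode(0.7, "Kasuga St.")]
    tree = build_route_tree(segments)
    print(tree.overview(max_depth=1))         # coarse, small-scale view
    print(tree.overview(max_depth=3))         # detailed, large-scale view
```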
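
The PacCAM entry (10/2013) combines rigid-body simulation with snapping when the user drags parts into a layout. The sketch below shows only the snapping ingredient, for axis-aligned rectangles and a simple nearest-edge rule; the names and the rule are assumptions, and the real system is considerably richer.

```python
from dataclasses import dataclass


@dataclass
class Rect:
    x: float      # left
    y: float      # bottom
    w: float
    h: float


def snap(dragged: Rect, others: list, tolerance: float = 2.0) -> Rect:
    """Snap the dragged rectangle so a nearby edge coincides with another part's edge."""
    best_dx, best_dy = None, None
    for o in others:
        for cand in (o.x - dragged.w, o.x + o.w, o.x, o.x + o.w - dragged.w):
            dx = cand - dragged.x
            if abs(dx) <= tolerance and (best_dx is None or abs(dx) < abs(best_dx)):
                best_dx = dx
        for cand in (o.y - dragged.h, o.y + o.h, o.y, o.y + o.h - dragged.h):
            dy = cand - dragged.y
            if abs(dy) <= tolerance and (best_dy is None or abs(dy) < abs(best_dy)):
                best_dy = dy
    return Rect(dragged.x + (best_dx or 0.0), dragged.y + (best_dy or 0.0),
                dragged.w, dragged.h)


if __name__ == "__main__":
    placed = [Rect(0, 0, 50, 30)]
    moving = Rect(51.2, 0.8, 20, 20)     # close to the right edge of the placed part
    print(snap(moving, placed))          # snaps to x=50.0, y=0.0
```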
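
The faceton entry (10/2013) treats each oriented point as a plane of infinite extent and builds the model from the intersection of those planes. The sketch below illustrates that idea as a point-in-intersection test over half-spaces; it does not reproduce the paper's adaptive bounding algorithm, and the function names are assumptions.

```python
import numpy as np


def faceton_plane(point, normal):
    """Return (unit normal n, offset d) so the faceton's plane is {x : n.x = d}."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = float(n @ np.asarray(point, dtype=float))
    return n, d


def inside_all(x, facetons, eps=1e-9):
    """True if x lies on or behind every faceton plane (n.x <= d)."""
    x = np.asarray(x, dtype=float)
    return all(n @ x <= d + eps for n, d in facetons)


if __name__ == "__main__":
    # Six axis-aligned facetons bound a unit cube centred at the origin.
    half = 0.5
    facetons = [faceton_plane(( half, 0, 0), ( 1, 0, 0)),
                faceton_plane((-half, 0, 0), (-1, 0, 0)),
                faceton_plane((0,  half, 0), (0,  1, 0)),
                faceton_plane((0, -half, 0), (0, -1, 0)),
                faceton_plane((0, 0,  half), (0, 0,  1)),
                faceton_plane((0, 0, -half), (0, 0, -1))]
    print(inside_all((0.2, 0.1, -0.3), facetons))   # True: inside the cube
    print(inside_all((0.8, 0.0,  0.0), facetons))   # False: outside
```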
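
The VAM entry (08/2013) couples a finger gesture with a sustained voiced sound: the triggered operation keeps repeating while the sound continues and ends when the sound stops or another gesture arrives. A toy event loop capturing that behaviour is sketched below; the event names are assumptions, not the paper's implementation.

```python
def run_vam(events):
    """events: an iterable of 'gesture', 'voice_on', 'voice_off', or 'tick' strings."""
    active = False
    voiced = False
    for e in events:
        if e == "gesture":
            active = not active          # a second gesture ends the operation
        elif e == "voice_on":
            voiced = True
        elif e == "voice_off":
            voiced = False
            active = False               # the operation stops when the sound stops
        elif e == "tick" and active and voiced:
            yield "repeat operation"     # e.g. keep scrolling one more step


if __name__ == "__main__":
    stream = ["gesture", "voice_on", "tick", "tick", "voice_off", "tick"]
    print(list(run_vam(stream)))         # ['repeat operation', 'repeat operation']
```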
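
The Picode entry (CHI '13) binds a photo to the posture data captured with it, so code can show the photo inline while computing with the numbers. The hypothetical class below illustrates that pairing; it is not the actual Picode API, and the joint representation and threshold are invented.

```python
from dataclasses import dataclass
from typing import Dict
import math


@dataclass
class CapturedPose:
    photo_path: str                 # shown as an inline image in the editor
    joints: Dict[str, float]        # e.g. joint name -> angle in degrees

    def distance(self, other: "CapturedPose") -> float:
        keys = self.joints.keys() & other.joints.keys()
        return math.sqrt(sum((self.joints[k] - other.joints[k]) ** 2 for k in keys))

    def matches(self, other: "CapturedPose", threshold: float = 15.0) -> bool:
        return self.distance(other) < threshold


if __name__ == "__main__":
    wave = CapturedPose("poses/wave.jpg", {"elbow": 95.0, "shoulder": 80.0})
    live = CapturedPose("camera/frame.jpg", {"elbow": 99.0, "shoulder": 84.0})
    if wave.matches(live):
        print("Detected the pose captured in", wave.photo_path)
```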
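
The Beady poster entry (SIGGRAPH Asia 2012) relies on the observation that the dual of a triangular mesh is a near-hexagonal mesh. The sketch below computes such a dual in the simplest way: triangle centroids become dual vertices and each original vertex becomes a dual face. The angular sort around an estimated local axis is my own simplification to keep the sketch short; it is not the Beady system's algorithm.

```python
import numpy as np


def dual_mesh(vertices, triangles):
    """Return (dual_vertices, dual_faces): centroids, and per-vertex faces
    listing incident centroids in angular order around the vertex."""
    verts = np.asarray(vertices, dtype=float)
    centroids = np.array([verts[list(t)].mean(axis=0) for t in triangles])

    faces = []
    for vi in range(len(verts)):
        incident = [fi for fi, t in enumerate(triangles) if vi in t]
        if not incident:
            continue
        # Rough local axis: from the vertex toward the incident centroids' mean.
        axis = centroids[incident].mean(axis=0) - verts[vi]
        axis /= np.linalg.norm(axis)
        # Reference direction in the plane orthogonal to the axis.
        ref = centroids[incident[0]] - verts[vi]
        ref -= axis * (ref @ axis)
        ref /= np.linalg.norm(ref)
        side = np.cross(axis, ref)

        def angle(fi):
            v = centroids[fi] - verts[vi]
            return np.arctan2(v @ side, v @ ref)

        faces.append(sorted(incident, key=angle))
    return centroids, faces


if __name__ == "__main__":
    # Regular tetrahedron; its dual is again a tetrahedron (triangular faces).
    v = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
    t = [(0, 1, 2), (0, 3, 1), (0, 2, 3), (1, 3, 2)]
    dual_vertices, dual_faces = dual_mesh(v, t)
    print(len(dual_vertices), "dual vertices;", [len(f) for f in dual_faces], "sided faces")
```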

Publication Stats

2k Citations
53.91 Total Impact Points

Institutions

  • 2003–2014
    • The University of Tokyo
      • Department of Computer Science
      Tōkyō, Japan
  • 2012
    • University of Texas at Austin
      Austin, Texas, United States
  • 2009–2012
    • RIKEN
      Wako, Saitama, Japan
    • Keio University
      • Graduate School of Media Design
      Tōkyō, Japan
  • 2011
    • The University of Calgary
      Calgary, Alberta, Canada
  • 2010
    • University of Tsukuba
      • Center for Computational Sciences
      Tsukuba, Ibaraki, Japan
  • 2006–2008
    • Sony Computer Science Laboratories, Inc.
      Tōkyō, Japan
  • 2001–2002
    • Brown University
      • Department of Computer Science
      Providence, Rhode Island, United States