Drawing for Illustration and Annotation in 3D

Article in Computer Graphics Forum 20(3):114–122 · May 2001
DOI: 10.1111/1467-8659.00504
Abstract
We present a system for sketching in 3D, which strives to preserve the degree of expression, imagination, and simplicity of use achieved by 2D drawing. Our system directly uses user-drawn strokes to infer the sketches representing the same scene from different viewpoints, rather than attempting to reconstruct a 3D model. This is achieved by interpreting strokes as indications of a local surface silhouette or contour. Strokes thus deform and disappear progressively as we move away from the original viewpoint. They may be occluded by objects indicated by other strokes, or, in contrast, be drawn above such objects. The user draws on a plane which can be positioned explicitly or relative to other objects or strokes in the sketch. Our system is interactive, since we use fast algorithms and graphics hardware for rendering. We present applications to education, design, architecture and fashion, where 3D sketches can be used alone or as an annotation of an existing 3D model.
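The core geometric operation described above, projecting user-drawn stroke samples onto a drawing plane that the user has positioned in the scene, can be illustrated with a small sketch. This is not the paper's implementation; the function name and the ray-based camera model are assumptions for illustration:

```python
import numpy as np

def project_rays_onto_plane(origin, rays, plane_point, plane_normal):
    """Intersect camera rays (one per 2D stroke sample) with a drawing plane.

    origin       : (3,) camera position in world space
    rays         : (N, 3) view directions through the 2D stroke samples
    plane_point  : (3,) any point on the drawing plane
    plane_normal : (3,) unit normal of the drawing plane
    Returns the (M, 3) intersection points lying in front of the camera.
    """
    points = []
    for ray in rays:
        denom = float(np.dot(plane_normal, ray))
        if abs(denom) < 1e-9:
            continue  # ray parallel to the plane: no usable intersection
        t = float(np.dot(plane_normal, plane_point - origin)) / denom
        if t > 0:     # keep only intersections in front of the camera
            points.append(origin + t * ray)
    return np.array(points)
```

For example, with the camera at the origin looking down +z and the plane at z = 5 with normal (0, 0, 1), a stroke sample whose view ray is (0, 0, 1) lands at the 3D point (0, 0, 5).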

Supplementary resources

  • ... Schmid et al.'s Overcoat [Schmid et al. 2011] allows artists to create strokes in 3D space as volumetric entities; these strokes are rendered by accumulating brush samples in the affected regions. Bourguignon et al. [Bourguignon et al. 2001] developed an approach where strokes are created as Bézier surfaces instead. Bae et al.'s ILoveSketch [Bae et al. 2008] allows artists to create models as NURBS curves in a gestural interface optimised for pen-based interaction. ...
    ... First, we provide some practical examples of how hand-drawn strokes can be used in a 3D environment alongside props and sets that have been created using standard 3D modelling techniques. Second, instead of using Bourguignon et al.'s [Bourguignon et al. 2001] positioning widgets or the workplane approaches used by Bae et al. [Bae et al. 2008] and Dorsey et al. [Dorsey et al. 2007], we use the "3D Cursor" concept built into the host environment to provide a tangible reference point for controlling how points are projected from screen-space to 3D-space. Finally, we introduce the concept of providing tools to edit these strokes in 3D space as if they were meshes. ...
    Conference Paper
    Freehand drawing is one of the most flexible and efficient ways of expressing creative ideas. However, it can often also be tedious and technically challenging to animate complex dimensional environments or dynamic choreographies. We present a case study of how freehand drawing tools can be integrated into an end-to-end 3D content creation platform to reap the benefits from both worlds. Creative opportunities and challenges in achieving this type of integration are discussed. We also present examples from short films demonstrating the potential of how these techniques can be deployed in production environments.
  • ... Another approach is to sketch on top of three-dimensional views, be they real images, virtual reality, or augmented reality, and then infer the three-dimensional models from the 2D projective sketch and information about the scene geometry [15]. Example applications include annotation and sketching in archaeological sites [14] and sketching and modelling of cartoon-like scenes for animation [17]. Ambiguities in where points in the 2D sketches lie in the 3D model can be resolved by fixing the points using multiple views of the scene [16]. ...
    Conference Paper
    The modelling of 3D shapes is a challenging problem. Many innovative approaches have been proposed; however, most 3D software requires advanced skills, which hinders collaboration and spontaneous ideation. This paper proposes a novel framework that allows designers to express their ideas in 3D space without extensive training, such that they can reuse their 2D sketching skills collaboratively in teams.
  • ... Wither et al. give a detailed overview and taxonomy of AR annotations, specifically noting the lack of interfaces for authoring (i.e., creating) annotations in AR [Wither et al. 2009]. While many types of annotations exist, our work falls under the category of 2D annotations for 3D scenes [Pierce et al. 1997; Bourguignon et al. 2001; Jung et al. 2002; Nuernberger et al. 2016; Polvi et al. 2016; Lien et al. 2016]. As mentioned previously, the main challenge for 2D annotation authoring for 3D scenes is that 2D drawings are inherently ambiguous in 3D. ...
    Conference Paper
    We present a novel 2D gesture annotation method for use in image-based 3D reconstructed scenes with applications in collaborative virtual and augmented reality. Image-based reconstructions allow users to virtually explore a remote environment using image-based rendering techniques. To collaborate with other users, either synchronously or asynchronously, simple 2D gesture annotations can be used to convey spatial information to another user. Unfortunately, prior methods are either unable to disambiguate such 2D annotations in 3D from novel viewpoints or require relatively dense reconstructions of the environment. In this paper, we propose a simple multi-view annotation method that is useful in a variety of scenarios and applicable to both very sparse and dense 3D reconstructions. Specifically, we employ interactive disambiguation of the 2D gestures via a second annotation drawn from another viewpoint, triangulating two drawings to achieve a 3D result. Our method automatically chooses an appropriate second viewpoint and uses image-based rendering transitions to keep the user oriented while moving to the second viewpoint. User experiments in an asynchronous collaboration scenario demonstrate the usability of the method and its superiority over a baseline method. In addition, we showcase our method running on a variety of image-based reconstruction datasets and highlight its use in a synchronous local-remote user collaboration system.
  • ... Creating 3D objects and curves from 2D user interfaces has been studied for a long time [7,21,13]. Several recent works on the subject try to enrich an already existing 3D scene, either by annotating the scene or by augmenting the 3D object itself [9,14,15]. Bourguignon et al. [9] created a system that can be used both for annotating a 3D object and for creating an artistic illustration that can be represented from different viewpoints. Although the resulting scenes are visually pleasing, they are not truly 3D. ...
    Chapter
    This paper proposes a method that resembles a natural pen-and-paper interface for creating curve-based 3D sketches. The system is particularly useful for representing initial 3D design ideas without much effort. Users interact with the system with the help of a pressure-sensitive pen tablet. The input strokes of the users are projected onto a drawing plane, which serves as a paper that they can place anywhere in the 3D scene. The resulting 3D sketch is visualized emphasizing depth perception. Our evaluation involving several naive users suggests that the system is suitable for a broad range of users to easily express their ideas in 3D. We further analyze the system with the help of an architect to demonstrate its expressive capabilities.
  • ... Traditionally, blueprints are used in industry because they provide the clearest way to describe the specifications of a mechanical system. They communicate complexity in a comprehensible and effective manner thanks to visual abstraction [2]. In the industrial scenario, the development of a concept must be followed by its design and development. ...
    Article
    Full-text available
    Because reading blueprints plays a critical role in industry, there is a need for highly skilled personnel to interpret them. As an elementary yet vital component of a technician's training regime, its instruction could be improved by an understanding of the exact nature of the tasks involved in the process. To address this need, the goal of this paper is to understand the spatial visualization strategies adopted by an expert while reading blueprints of mechanical parts. For the purpose of this study, Tobii T60 eye tracking equipment was used to monitor the expert's eye gaze movements as he read a set of blueprints, coupled with a retrospective think-aloud protocol to better understand the strategies used. Based on this information, a hierarchical task analysis was developed, the results showcasing the structured, objective approach employed by the expert. Two important findings of this study are that the title block was not the first part of the blueprint that the expert looked at, and that reading the blueprint was primarily done from the point of view of the manufacturing of the component. It was also observed that for spatial visualization, two views were simultaneously taken and then mentally flipped to convert one into the other. Future research needs to apply this methodology to a larger population of experts and to study the efficacy of using the results of the task analysis in the formulation of an online intervention tool.
  • Chapter
    Digital textbooks (DTs) have been established as the major media for future and smart education in South Korea. In general, DTs are implemented using two-dimensional (2D) web-based platforms with embedded 3D content, including images, motion graphics, animations, and video and audio. Recently, these 2D-based DTs have evolved into a 3D-based interface by adopting various types of 3D virtual environments. Accordingly, a number of 3D DTs have been developed; however, these are mainly focused on the implementation of the basic features of DTs, including displaying text and images, viewing pages, zooming in and out, indexing, moving to a certain page, and finding a specific text or object. Furthermore, these have not yet been comprehensively implemented, and further development is required to provide more diverse input and annotation features to facilitate better dynamic interaction between students and DTs. Here, we introduce 3D annotation features that were designed and implemented to enable users to freely annotate 3D DTs with various forms of input, facilitating dynamic user interactions. These include stylus writing, underlining, highlighting, drawing lines and shapes, and the insertion of textboxes, sound and video files, and voice memos. Students may highlight text or place notes to give commentary on the content of DTs using stylus writing, lines, shapes, text, multimedia files, and voice memos to aid their studying. The 3D annotation features were developed using the eXtensible 3D (X3D) standard, which is an XML-based international standard file format for representing scenes and 3D objects in computer graphics, extended from the virtual reality modeling language (VRML), along with the Java Applet and JavaScript. We aim to enhance student engagement in the learning process by supporting various forms of dynamic interaction features via a natural user interface. We believe that the approaches we describe provide a viable solution to enable 3D annotation on DTs.
  • Article
    To tackle the problems in adjusting and controlling shapes of rotation surfaces, a new efficient method for quickly constructing generalized Bézier rotation surfaces with multiple shape parameters is proposed. Firstly, following the important idea of transfinite vectored rational interpolating function, the shape-adjustable generalized Bézier rotation surfaces are constructed using a generalized Bézier curve with multiple shape parameters. Secondly, the explicit function expression of the shape-adjustable generalized Bézier rotation surfaces is presented. The new rotation surfaces inherit the outstanding properties of the Bézier rotation surfaces, with a good performance on adjusting their local shapes by changing the shape parameters. Finally, some properties of the new rotation surfaces are discussed, and the influence rules of the shape parameters on the new rotation surfaces are studied. The modeling examples illustrate that the shape-adjustable generalized Bézier rotation surfaces provide a valuable way for the design of rotation surfaces.
  • Conference Paper
    A 2D gesture annotation provides a simple way to annotate the physical world in augmented reality for a range of applications such as remote collaboration. When rendered from novel viewpoints, these annotations have previously only worked with statically positioned cameras or planar scenes. However, if the camera moves and is observing an arbitrary environment, 2D gesture annotations can easily lose their meaning when shown from novel viewpoints due to perspective effects. In this paper, we present a new approach towards solving this problem by using a gesture enhanced annotation interpretation. By first classifying which type of gesture the user drew, we show that it is possible to render the 2D annotations in 3D in a way that conforms more to the original intention of the user than with traditional methods. We first determined a generic vocabulary of important 2D gestures for an augmented reality enhanced remote collaboration scenario by running an Amazon Mechanical Turk study with 88 participants. Next, we designed a novel real-time method to automatically handle the two most common 2D gesture annotations — arrows and circles — and give a detailed analysis of the ambiguities that must be handled in each case. Arrow gestures are interpreted by identifying their anchor points and using scene surface normals for better perspective rendering. For circle gestures, we designed a novel energy function to help infer the object of interest using both 2D image cues and 3D geometric cues. Results indicate that our method outperforms previous approaches by better conveying the meaning of the original drawing from different viewpoints.
  • Article
    We propose a 3D sketching method to draw the silhouette of the 3D garment directly on the mannequin. The 3D surface is then generated from a bumped collision patch in terms of the boundary of the 3D sketches. The shape adjustment is performed through a traditional FFD (Free-Form Deformation) followed by a global curvature suppression based on Loop's surface subdivision. The resulting surface can be employed as the prototype in designing tight-fit and fitted shapes for 3D garment reconstruction.
  • Article
    In order to animate the valuable murals of the Dunhuang Mogao Grottoes, we propose a hybrid approach to creating animation in the artistic style of the murals. Its key point is the fusion of 2D and 3D animation assets, for which a hybrid model is constructed from a 2.5D model, a 3D model, and registration information. The 2.5D model, created from 2D multi-view drawings, is composed of 2.5D strokes. For each 2.5D stroke, we let the user draw corresponding strokes on the surface of the 3D model in multiple views. The method then automatically generates registration information, which enables 3D animation assets to animate the 2.5D model. Finally, the animated line drawings are produced from the 2.5D and 3D models respectively and blended under the control of per-stroke weights. The user can manually modify the weights to get the desired animation style.
  • Chapter
    An important task of numerical mathematics is the drawing up of computation rules (algorithms) with whose help and by the use of previously known means (such as tables, desk calculating machines, analogue computers, digital computers, and so on) input data are transformed into desired output data.
  • Article
    This paper was reproduced from the AFIPS Conference proceedings, Volume 23, of the Spring Joint Computer Conference held in Detroit, 1963. Mr. Timothy Johnson suggested that this report contained essentially the same material he spoke on at the SHARE D/A Committee Workshop.
  • Article
    Before the introduction of Computer Aided Design and solid modeling systems, designers had developed a set of methods for designing solid objects by sketching their ideas using pencil and paper, and refining these ideas into workable designs. These methods are different from those used for designing objects with a conventional solid modeler. Not only does this dichotomy waste a vast reserve of talent and experience (people typically start sketching from the moment they first pick up a crayon), but it also has a more fundamental problem: designers can use their intuition more effectively when sketching than they can when using a solid modeler. This dissertation introduces interactive sketch interpretation as a new user interface paradigm for solid modeling systems. Interactive sketch interpretation makes it possible to use the computer as a sketchpad for designing three-dimensional objects. The premise behind interactive sketch interpretation is to let the designer change an object's design by modifying a computer-generated line drawing of the object. Sketch interpretation maps the designer's changes onto a boundary representation model of the object. The designer can continue the design process by then changing the line drawing of the modified object. This design cycle is highly interactive and, as a result, incorrect interpretations can be easily corrected by the designer. Viking is a solid modeling system whose user interface is based on interactive sketch interpretation. With Viking, the designer can modify his or her design by either changing the line drawing or placing geometric constraints on the object. Sketch interpretation changes Viking's model of the object so that it is consistent with the modified line drawing and the geometric constraints placed by the user.
  • Article
    Rendering systems generally treat the production of images as an objective process governed by the laws of physics. However, perception and understanding on the part of viewers are subjective processes influenced by a variety of factors. For example, in the presentation of architectural drawings, the apparent precision with which the drawings are made will affect whether the viewer considers the design as part of a preliminary design or as part of a final polished project, and to some extent the level of confidence the viewer has in the encoded information. In this paper we develop techniques for rendering images in a way that differs from the usual photorealistic or wire-frame output of renderers. In particular, our techniques allow a user to adjust the rendering of a scene to produce images using primitives with variable degrees of precision, from approximations that resemble vague “five-minute sketches” to more mature but still hand-drawn images. We provide a theoretical framework for analysing the information flow from the computer to the user via such images. Finally, we describe the design and implementation of a prototypical renderer and show examples of its output.
  • Article
    Full-text available
    This paper describes ‘Quick-sketch’, a 2D and 3D modelling tool for pen-based computers. Users of this system define a model by simple pen strokes, drawn directly on the screen of a pen-based PC. Exact shapes and geometric relationships are interpreted from the sketch. The system can also be used to sketch 3D solid objects and B-spline surfaces. These objects may be refined by defining 2D and 3D geometric constraints. A novel graph-based constraint solver is used to establish the geometric relationships, or to maintain them when manipulating the objects interactively. The approach presented here is a first step towards a conceptual design system. Quick-sketch can be used as a hand sketching front-end to more sophisticated modelling, rendering or animation systems.
  • Article
    This paper describes an optimization-based algorithm for reconstructing a 3D model from a single, inaccurate, 2D edge-vertex graph. The graph, which serves as input for the reconstruction process, is obtained from an inaccurate freehand sketch of a 3D wireframe object. Compared with traditional reconstruction methods based on line labelling, the proposed approach is more tolerant of faults in handling both inaccurate vertex positioning and sketches with missing entities. Furthermore, the proposed reconstruction method supports a wide scope of general (manifold and non-manifold) objects containing flat and cylindrical faces. Sketches of wireframe models usually include enough information to reconstruct the complete body. The optimization algorithm is discussed, and examples from a working implementation are given.
  • Conference Paper
    Full-text available
    We present the Zoom Illustrator, which illustrates complex 3D models for teaching anatomy. Our system is a semi-interactive tool which combines an animation mode and an interactive mode. While the animation mode is suitable for beginners, the interactive mode is dedicated to experienced users. The animation is controlled by scripts specifying what should be explained at which level of detail. The design of the animation and the interactive components is directed at generating illustrations according to the user's interest. This includes the presentation of text and the parameterization of the rendered image. Changes to the textual part, interactively requested or generated in an animation, are propagated to the graphics part and vice versa. Thus the display of an explanation results in an adaptation of the corresponding graphical part.
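Several of the citing works above (notably the multi-view gesture annotation papers) disambiguate a 2D drawing by triangulating it against a second drawing made from another viewpoint. The geometric core of that step, triangulating two viewing rays by taking the midpoint of their shortest connecting segment, can be sketched as follows; the function name and the midpoint heuristic are illustrative assumptions, not the cited papers' exact method:

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays p(t) = o + t*d.

    o1, o2 : (3,) ray origins (e.g. the camera centers of the two viewpoints)
    d1, d2 : (3,) ray directions through the matched 2D annotation points
    """
    o1, o2 = np.asarray(o1, float), np.asarray(o2, float)
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    w = o1 - o2
    # Standard closest-points-on-two-lines solution
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        raise ValueError("rays are (nearly) parallel; triangulation is ill-posed")
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

For rays that truly intersect, the midpoint is the intersection point; for skew rays (the usual case with hand-drawn input) it is a compromise between the two drawings.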
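One of the articles above constructs generalized Bézier rotation surfaces by revolving a shape-adjustable profile curve around an axis. A plain surface of revolution built from a classical Bézier profile (omitting the cited paper's shape-parameter machinery) can be sketched like this; the function names are illustrative:

```python
import numpy as np
from math import comb

def bezier_profile(ctrl, t):
    """Evaluate a 2D Bezier profile curve given as (radius, height) control
    points, at parameter t in [0, 1], using the Bernstein basis."""
    n = len(ctrl) - 1
    point = np.zeros(2)
    for i, p in enumerate(ctrl):
        point += comb(n, i) * (t ** i) * ((1 - t) ** (n - i)) * np.asarray(p, float)
    return point

def rotation_surface_point(ctrl, t, theta):
    """Revolve the profile around the z-axis: S(t, theta) in R^3."""
    r, z = bezier_profile(ctrl, t)
    return np.array([r * np.cos(theta), r * np.sin(theta), z])
```

For example, a quadratic profile whose control points all have radius 1 yields a cylinder: at t = 0.5 and theta = 0 the surface point is (1, 0, 1).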
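The variable-precision rendering idea in the sketch-rendering article above (images ranging from vague “five-minute sketches” to polished drawings) can be caricatured by subdividing each stroke segment and jittering the interior points; the amplitude scale and subdivision count below are arbitrary choices for illustration, not the paper's primitives:

```python
import random

def sketchify_segment(p0, p1, precision, seed=0):
    """Subdivide a 2D line segment and jitter the interior points.

    precision in [0, 1]: 1.0 reproduces the exact segment (a "polished"
    drawing); lower values add larger random offsets (a rough sketch).
    """
    rng = random.Random(seed)
    amplitude = (1.0 - precision) * 0.1  # hypothetical jitter scale
    n = 8                                # fixed subdivision count
    points = []
    for i in range(n + 1):
        t = i / n
        x = p0[0] + t * (p1[0] - p0[0])
        y = p0[1] + t * (p1[1] - p0[1])
        if 0 < i < n:                    # keep the endpoints fixed
            x += rng.uniform(-amplitude, amplitude)
            y += rng.uniform(-amplitude, amplitude)
        points.append((x, y))
    return points
```

At precision 1.0 the jitter amplitude is zero and the polyline lies exactly on the original segment; lowering the precision makes the stroke visibly hand-drawn while preserving its endpoints.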