Conference Paper

Integrating Corrections into Digital Ink Playback


Abstract

In this paper, we describe preliminary work on an ink editing application that allows an instructor to correct mistakes in digital ink written during a presentation that is to be archived. These corrections are then seamlessly reintegrated into the digital archive so that, when the presentation is replayed, the corrected ink is displayed instead of the original incorrect ink. We base our results on a system we have developed, and we prototype the workflow from initial presentation, through correction and updating of the archive, to playback. We show that a simple correction mechanism is effective and low effort for the instructor. A key technical challenge addressed is the substitution of strokes by matching the original and corrected ink.
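
The abstract does not describe the matching algorithm in detail. As a rough illustration only, one plausible way to substitute corrected strokes into a timestamped ink archive is to pair each corrected stroke with the nearest original stroke and reuse its timestamp, so the correction appears at the right moment during playback. The following Python sketch, in which all names, data structures, and the centroid-distance heuristic are assumptions rather than details taken from the paper, shows that idea:

    # Hypothetical sketch: substitute corrected strokes into an archived ink
    # stream by matching each correction to the closest original stroke
    # (centroid distance). Not the authors' algorithm.
    from dataclasses import dataclass

    @dataclass
    class Stroke:
        points: list        # [(x, y), ...] pen samples
        timestamp: float    # archive time at which the stroke was written

    def centroid(stroke):
        xs = [p[0] for p in stroke.points]
        ys = [p[1] for p in stroke.points]
        return sum(xs) / len(xs), sum(ys) / len(ys)

    def substitute_corrections(archive, corrections, max_dist=50.0):
        # The corrected stroke inherits the matched stroke's timestamp so
        # playback shows the fix at the moment the original ink appeared.
        result = list(archive)
        for corr in corrections:
            cx, cy = centroid(corr)
            best_i, best_d = None, max_dist
            for i, orig in enumerate(result):
                ox, oy = centroid(orig)
                d = ((cx - ox) ** 2 + (cy - oy) ** 2) ** 0.5
                if d < best_d:
                    best_i, best_d = i, d
            if best_i is not None:
                corr.timestamp = result[best_i].timestamp
                result[best_i] = corr
            else:
                result.append(corr)   # ink with no counterpart is treated as new
        return result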


References
Conference Paper
Large classrooms have traditionally provided multiple blackboards on which an entire lecture could be visible. In recent decades, classrooms were augmented with a data projector and screen, allowing computer-generated slides to replace hand-written blackboard presentations and overhead transparencies as the medium of choice. Many lecture halls and conference rooms will soon be equipped with multiple projectors that provide large, high-resolution displays of comparable size to an old-fashioned array of blackboards. The predominant presentation software, however, is still designed for a single medium-resolution projector. With the ultimate goal of designing rich presentation tools that take full advantage of increased screen resolution and real estate, we conducted an observational study to examine current practice with both traditional whiteboards and blackboards, and computer-generated slides. We identify several categories of observed usage, and highlight differences between traditional media and computer slides. We then present design guidelines for presentation software that capture the advantages of the old and the new, and describe a working prototype based on those guidelines that more fully utilizes the capabilities of multiple displays.
Article
One potentially useful feature of future computing environments will be the ability to capture the live experiences of the occupants and to provide that record to users for later access and review. Over the last three years, a group at the Georgia Institute of Technology has designed and extensively used a particular instrumented environment: a classroom that captures the traditional lecture experience. This paper describes the history of the Classroom 2000 project and provides results of extended evaluations of the effect of automated capture on the teaching and learning experience. There are many important lessons to take away from this long-term, large-scale experiment with a living, ubiquitous computing environment. The environment should address issues of scale and extensibility, it should continuously be evaluated for effectiveness, and the ways in which the environment both improves and hinders the activity that it aims to support—in our case, education—need to be understood and acted upon. In describing our experiences and lessons learned, we hope to motivate other researchers to take more seriously the challenge of ubiquitous computing—the creation and exploration of the everyday use of computationally rich environments.
Article
In this paper, we present a study of how instructors draw diagrams in the process of delivering lectures. We are motivated by the desire to understand the challenges and opportunities in automatically analyzing diagrams, and to use this understanding to improve tools that support the delivery of presentations and the viewing of archived lectures. The study was conducted by analyzing a large group of examples of diagrams collected from real lectures that were delivered from a Tablet PC. The main result of the paper is the identification of three specific challenges in analyzing spontaneous instructor diagrams: separating the diagram from its annotations and other surrounding ink, identifying phases in the discussion of a diagram, and constructing the active context in a diagram.
Conference Paper
In this paper, we report on an empirical exploration of digital ink and speech usage in lecture presentation. We studied the video archives of five Master's level Computer Science courses to understand how instructors use ink and speech together while lecturing, and to evaluate techniques for analyzing digital ink. Our interest in understanding how ink and speech are used together is to inform the development of future tools for supporting classroom presentation, distance education, and viewing of archived lectures. We want to make it easier to interact with electronic materials and to extract information from them. We want to provide an empirical basis for addressing challenging problems such as automatically generating full text transcripts of lectures, matching speaker audio with slide content, and recognizing the meaning of the instructor's ink. Our results include an evaluation of handwritten word recognition in the lecture domain, an approach for associating attentional marks with content, an analysis of linkage between speech and ink, and an application of recognition techniques to infer speaker actions.
Conference Paper
Despite recent advances in authoring systems and tools, creating multimedia presentations remains a labor-intensive process. This paper describes a system for automatically constructing structured multimedia documents from live presentations. The automatically produced documents contain synchronized and edited audio, video, images, and text. Two essential problems, synchronization of captured data and automatic editing, are identified and solved.
Conference Paper
This paper explores media correlation and media synchronization in a composite multimedia document, the so-called navigated hypermedia document in our language learning system, to facilitate multimedia authoring, presentation, and access. Two levels of media correlation in the temporal, spatial, and content domains are investigated: syntactic-level correlation and semantic-level correlation. We devise a capturing mechanism that records all the media streams and the relations between them, including voice and event streams, so that the lecture can be replayed in a form as close as possible to the original classroom experience. The syntactic-level correlation is based on specific timestamps within the media stream and is used to reconstruct the recorded lecture for synchronized presentation. Furthermore, to integrate media objects with specific segments within the media stream, computed synchronization processes are required to discover the semantic content of the media. The proposed computed synchronization techniques are addressed: a speech-event binding process for the temporal domain, tele-pointer (i.e., cursor) movement interpolation and adaptable handwriting presentation for the spatial domain, and erasure handling for the content domain. Experimental results show that the speech-event binding process finds 74% of the speech access entries for accessible visualized events, that human acceptance of interpolated tele-pointer movement exceeds 85% when the time interval is selected carefully, and that the accuracy of erasure handling for content removal is about 71%.
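
The abstract does not specify how tele-pointer movement is reconstructed between recorded samples; a common approach is simple linear interpolation over the timestamps. The following Python sketch is an assumed illustration of that idea, not the system's actual implementation:

    # Hypothetical sketch of tele-pointer (cursor) movement interpolation:
    # the replayed cursor position between two recorded samples is estimated
    # linearly so the pointer moves smoothly at playback time.
    def interpolate_pointer(samples, t):
        """samples: list of (timestamp, x, y) sorted by timestamp; t: playback time."""
        if t <= samples[0][0]:
            return samples[0][1:]
        if t >= samples[-1][0]:
            return samples[-1][1:]
        for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
            if t0 <= t <= t1:
                a = (t - t0) / (t1 - t0)   # fraction of the interval elapsed
                return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))

    # Example: cursor recorded at 0 s and 2 s; estimate its position at 0.5 s.
    print(interpolate_pointer([(0.0, 10, 10), (2.0, 30, 50)], 0.5))  # (15.0, 20.0)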
Conference Paper
Digital inking systems are becoming increasingly popular across a variety of domains. In particular, many systems now allow instructors to write on digital surfaces in the classroom. Yet, our understanding of how people actually use writing in these systems is limited. In this paper, we report on classroom use of writing in one such system, in which the instructor annotates projected slides using a Tablet PC. Through a detailed analysis of lecture archives, we identify key use patterns. In particular, we categorize a major use of ink as analogous to physical gestures and present a framework for analyzing this ink; we explore the relationship between the ephemeral meaning of many annotations and their persistent representation; and we observe that instructors make conservative use of the system's features. Finally, we discuss implications of our study to the design of future digital inking systems.
Conference Paper
This demonstration illustrates different ways to support users dealing with recorded live presentations in order to improve the usability of the corresponding documents. It highlights different problems in this context and presents solutions and alternative approaches for both multimedia indexing and query processing, and for user interface issues, in order to support users who are skimming or browsing such documents in search of information.