Conference Paper

Opportunistic Recording of Live Experiences Using Multiple Mobile Devices


Abstract

In this work, we present a case study in which participants used a mobile application to collaboratively record presentations. We developed an application for Android devices to investigate the usability and design requirements of a mobile, collaborative capture system. Our main concern is to facilitate collaboration and produce an enhanced result without adding complexity relative to the individual capture task. Accordingly, we focused on problems related to usability and to the awareness information that enables users to conduct an opportunistic recording. We report our case study results and discuss the design requirements we identified for the collaborative recording of presentations by users equipped with smartphones and tablets.


... Despite the evident usefulness of making recorded classes and talks available, producing a high-quality video entails a high operational cost. To reduce these costs, many tools enabling the (semi-)automatic capture of lectures have been developed to record classes [Cunha et al. 2016, Damasceno et al. 2014, Halawa et al. 2011] as well as other presentations [Jansen et al. 2015]. There are also tools for presenting [Viel et al. 2013] and annotating the corresponding content [Ferreira de Sousa et al. 2013, Martins and. ...
Conference Paper
There are several ways to make computing accessible to everyone, such as providing teaching material in text and video formats. In particular, recording lectures and talks with the aim of making the corresponding content available (as a video or multimedia document) is a common activity in many locations worldwide. Two common approaches to recording such events are using a studio or instrumenting a conventional classroom with cameras and microphones so as to record the activity in place. In this paper we study the influence that the use of these two environments may have on the recording process. We report on a case study with 27 participants who recorded short academic talks in the two scenarios and also discuss how the environment affected their behavior. Understanding such influences may inform the design of infrastructures aimed at supporting the authoring of interactive multimedia documents from live experiences.
Chapter
As the number of mobile devices grows, so does the amount of data exchanged. This ever-growing amount of data may overload Internet Service Providers. A possible solution to this problem is to use the wireless networking capabilities of mobile devices to exchange data by creating mobile P2P networks. These networks should collaborate opportunistically, exchanging information with other devices in their proximity and requiring users only to specify their interests. This paper presents DMEK (Decision Mobile Exchange of Knowledge), a solution in which mobile devices opportunistically disseminate knowledge among their users, using a decision mechanism based on profile matching. Experiments show DMEK's feasibility and performance.
Conference Paper
A particularly challenging situation in amateur multimedia authoring is ad-hoc collaborative capture, i.e., a process that, having no pre-production phase, demands live coordination between multiple authors. Important problems in realizing such scenarios include the proper synchronization of media elements across multiple devices, and deciding which information to include in the resulting collaboratively authored multimedia document. In this paper, we report how our collaborative authoring system tackles these ad-hoc multimedia capture problems by combining mobile and web applications. We present a case study in the educational domain and discuss the results of a user study.
Conference Paper
Full-text available
Composition is a hallmark of the Web, yet it does not fully extend to linear media. This paper defines linear composition as the ability to form linear media by coordinated playback of independent linear components. We argue that native Web support for linear composition is a key enabler for Web-based multi-device linear media, and that precise multi-device timing is the main technical challenge. This paper proposes the introduction of an HTMLTimingObject as basis for linear composition in the single-device scenario. Linear composition in the multi-device scenario is ensured as HTMLTimingObjects may integrate with Shared Motion, a generic timing mechanism for the Web. By connecting HTMLMediaElements and HTMLTrackElements with a multi-device timing mechanism, a powerful programming model for multi-device linear media is unlocked.
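The timing-object model sketched above can be approximated by a deterministic clock abstraction: position evolves linearly from a vector of (position, velocity, timestamp), and each media element measures its drift against the shared clock. A minimal sketch in Python, assuming a monotonic local clock; the class and method names are illustrative, not the actual HTMLTimingObject API:

```python
import time

class TimingObject:
    """Minimal motion-state clock: position evolves as p0 + v * dt.

    Media elements query this shared clock and nudge their own
    playback position toward it, which is the essence of the
    linear-composition model described in the abstract above.
    """

    def __init__(self, position=0.0, velocity=1.0, now=None):
        self._p0 = position
        self._v = velocity
        self._t0 = time.monotonic() if now is None else now

    def query(self, now=None):
        """Current position extrapolated from the last update."""
        t = time.monotonic() if now is None else now
        return self._p0 + self._v * (t - self._t0)

    def update(self, position=None, velocity=None, now=None):
        """Change velocity and/or jump position; resets the time base."""
        t = time.monotonic() if now is None else now
        p = self.query(t)
        self._p0 = p if position is None else position
        self._v = self._v if velocity is None else velocity
        self._t0 = t

def drift(media_position, timing, now=None):
    """Signed offset of a media element relative to the shared clock."""
    return media_position - timing.query(now)

# A paused clock (velocity 0) started at position 10 stays at 10;
# switching to 1x playback and waiting 2 seconds moves it to 12.
clock = TimingObject(position=10.0, velocity=0.0, now=0.0)
clock.update(velocity=1.0, now=0.0)
print(clock.query(now=2.0))  # 12.0
```

In the multi-device case, each device would hold a replica of this state vector synchronized over the network; a player that finds `drift` beyond a threshold seeks or slightly adjusts its playback rate to converge.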
Article
Full-text available
This paper describes a user evaluation study of the automated creation of mobile video remixes in three different event contexts. The evaluation contributes to the design process of the Automatic Video Remixing System, deepening knowledge of wider usage contexts. The study was completed with 30 users in three different contexts: a sports event, a music concert and a doctoral dissertation. It was discovered that users are motivated to provide their material to the service when they know they will receive, in return, an automatically created remix containing content from many capturers. Automatic video remixing was stated to ease the task of editing videos and to improve the quality of amateur videos. The study reveals requirements for pleasurable remix creation in different event contexts and details the user experience factors related to capturing, sharing, and viewing captured content and the remixes. The results provide insights into media creation in small event-based groups.
Conference Paper
Full-text available
We present a method to generate aesthetic video from a robotic camera by incorporating a virtual camera operating on a delay, and a hybrid controller which uses feedback from both the robotic and virtual cameras. Our strategy employs a robotic camera to follow a coarse region of interest identified by a real-time computer vision system, and then resamples the captured images to synthesize the video that would have been recorded along a smooth, aesthetic camera trajectory. The smooth motion trajectory is obtained by operating the virtual camera on a short delay so that perfect knowledge of immediate future events is available. Previous autonomous camera installations have employed either robotic cameras or stationary wide-angle cameras with subregion cropping. Robotic cameras track the subject using real-time sensor data, and regulate a smoothness-latency trade-off through control gains. Fixed cameras post-process the data and suffer significant reductions in image resolution when the subject moves freely over a large area. Our approach provides a solution for broadcasting events from locations that camera operators cannot easily access. We can also offer broadcasters additional actuated camera angles without the overhead of additional human operators. Experiments on our prototype system for college basketball illustrate how our approach better mimics human operators compared to traditional robotic control approaches, while avoiding the loss in resolution that occurs with fixed camera systems.
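The key idea above, trading a short latency for knowledge of "future" samples, can be illustrated with a centered moving average: because the virtual camera runs a few frames behind the robotic one, the smoothed pan position at each output time can average a symmetric window of past and upcoming targets. A toy sketch under that assumption (window size and data are illustrative, not the paper's actual trajectory filter):

```python
def smooth_with_delay(targets, half_window):
    """Centered moving average over a noisy target trajectory.

    Running the virtual camera `half_window` samples behind the
    robotic camera means each output sample can average past AND
    future targets, yielding a smoother path than a causal filter.
    Windows are clamped at the edges so they stay inside the sequence.
    """
    n = len(targets)
    smoothed = []
    for i in range(n):
        lo = max(0, i - half_window)
        hi = min(n, i + half_window + 1)
        window = targets[lo:hi]
        smoothed.append(sum(window) / len(window))
    return smoothed

# A step in the target pan angle (the subject crosses the court)
# is smoothed into a gradual ramp instead of an abrupt jerk.
print(smooth_with_delay([0, 0, 0, 10, 10, 10], 1))
```

A production system would use a stronger smoother (e.g., spline fitting or an acausal low-pass filter) over the delayed window, but the latency-for-foreknowledge trade-off is the same.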
Article
Full-text available
The purpose of two related studies was to explore the relationships between course characteristics (teaching approach, content type, and level of curricular coordination), lecture-capture implementation, and learning in a veterinary medical education environment. Two hundred twenty-two students and 35 faculty members participated in the first study, which surveyed respondents regarding their perception of lecture-capture use and its impact on learning. Four hundred ninety-one students participated in the second study, which compared scores on a standardized test of basic science knowledge among groups experiencing various levels of lecture-capture implementation. Students were most likely to view captured lectures in courses that moved quickly, relied heavily on lecture, were perceived as highly relevant to their future success, and contained information not available in other formats. A greater percentage of students than faculty perceived lecture capture as beneficial to learning. Higher numbers of lecture-capture views were associated with higher test scores in disciplines that relied most heavily on a straight-lecture teaching approach and had a basic science – research teaching context. The number of lecture-capture views was not significantly related to test scores in disciplines that relied less heavily on straight lecture for instruction and had a basic science – applied teaching context.
Article
Full-text available
Traditional cameras and video equipment are gradually losing the race against smartphones and small mobile devices that allow video, photo, and audio capture on the go. Users now quickly create movies and take photos whenever and wherever they go, particularly at concerts and live events (e.g., shows, sports events). Still, in-situ media capture with such devices poses constraints on any user, especially amateurs. In this paper, we present the design and evaluation of a mobile video capture suite that allows for cooperative ad hoc production. Our system relies on ad hoc in-situ collaboration, offering users the ability to switch between streams and cooperate with each other in order to capture better media with mobile devices. Our main contribution is the real-time awareness that users gain of media-capturing endeavors around them and the possibility to collect that data for personal use once the event is over. This contribution is further emphasized by the geo-referenced cues that support the overall user interface and the management of the different media streams. As a secondary contribution, we report on lessons and design guidelines that emerged, which apply to the in-situ design of rich collaborative video experiences, and on the elicitation of functional and usability requirements related to privacy, social connections, and gamification.
Article
Full-text available
A classroom environment contains both private (student-generated) and public (teacher-generated) streams of information. This paper discusses a system, StuPad, that integrates publicly available streams of information, such as a lecture presented by an instructor, with notes captured by individual students. We discuss the motivation for StuPad within the Classroom 2000 project and present a prototype to support capture and access/review activities.
Conference Paper
Full-text available
This paper presents a study of professional live TV production, investigating the work and interactions between distributed camera operators and a vision mixer during an ice hockey game. Using interview and video data, we discuss the vision mixer's and camera operators' individual assignments, showing the role of video as both a topic and resource in their collaboration. Our findings are applied in a design-oriented examination into the interactive user experience of TV, and inform the development of mobile collaborative tools to support amateur live video production.
Article
Full-text available
The 12-month pre-Ph.D. ICTP Diploma Courses in the fields of Condensed Matter Physics, High Energy Physics, Mathematics, Earth System Physics and Basic Physics have been recorded using the automated, low-cost recording system called EyA, developed in-house. We discuss the technical details of how these recordings were implemented, together with some web usage statistics and student feedback. As yet, no similar endeavor has been made to put a complete high-level Diploma Programme online, due to the high costs involved when using alternative recording solutions. These recordings are freely available on the website www.ictp.tv.
Article
We introduce new prototype Apps for the automated recording of complete lessons, seminars, talks, etc. using mobile devices running Android OS and iOS, which aim at supporting the recording of academic lectures by students themselves. These Apps are free to use and are based on the experience gained by the ICTP Science Dissemination Unit (SDU) in Trieste, Italy with its open source "Enhance your Audience" (EyA) recording system (www.openeya.org), with more than 10 thousand hours of automated educational recordings in the fields of physics and mathematics.
Article
Methods for authoring Web-based multimedia presentations have advanced considerably with the improvements provided by HTML5. However, authors of these multimedia presentations still lack expressive, declarative language constructs to encode synchronized multimedia scenarios. The SMIL Timesheets language is a serious contender to tackle this problem, as it provides alternatives for associating a declarative timing specification with an HTML document. However, in its current form, the SMIL Timesheets language does not meet important requirements observed in Web-based multimedia applications. To tackle this problem, this paper presents the ActiveTimesheets engine, which extends the SMIL Timesheets language by providing dynamic client-side modifications, temporal linking, and fine-grained reuse of temporal constructs. All these contributions are demonstrated in the context of a Web-based annotation and extension tool for multimedia documents.
Article
This article presents results from a study of an automated capture and access system, eClass, which was designed to capture the materials presented in college lectures for later review by students. In this article, we highlight the lessons learned from our three-year study focusing on the effect of capture and access on grades, attendance, and use of the captured notes and media. We then present suggestions for building future systems discussing improvements from our system in the capture, integration, and access of college lectures.
Conference Paper
User interaction with mobile devices has improved dramatically over recent years. We increasingly rely on smartphones and tablets for a wider range of tasks. Modern mobile devices enable users to access, manage and transmit multiple types of media in an easy, convenient and portable way. In this context, the playback of videos on mobile devices has become a common activity. Much work has been done on video annotation, but little of it addresses the mobile scenario. The ability to add annotations and to share them with others is a content-enriching process that can improve activities for purposes ranging from education to entertainment. In this paper, we present an intuitive tool that allows users to perform temporal video annotations on mobile devices. Using conventional tablets and smartphones equipped with the Android operating system, text, audio and digital-ink annotations can be made on any video. It is possible to share text annotations with other users and to play multiple annotations at the same time. The various display sizes and the possibility of switching between portrait and landscape modes have also been considered.
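The temporal annotations described above boil down to intervals of media time tagged with a modality (text, audio, or ink). A minimal sketch of such a data structure in Python; the field names and the shareable-text rule are illustrative assumptions, not the tool's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    start: float          # seconds into the video
    end: float            # seconds into the video
    kind: str             # "text", "audio", or "ink"
    payload: str          # text body, audio file path, or ink stroke data
    shared: bool = False  # illustrative: only text annotations are shared

@dataclass
class AnnotatedVideo:
    uri: str
    annotations: list = field(default_factory=list)

    def add(self, ann: Annotation):
        self.annotations.append(ann)

    def active_at(self, t: float):
        """Annotations to render at playback time t; overlapping
        intervals allow multiple annotations to play at once."""
        return [a for a in self.annotations if a.start <= t <= a.end]

video = AnnotatedVideo("lecture.mp4")
video.add(Annotation(5.0, 12.0, "text", "Key definition here", shared=True))
video.add(Annotation(10.0, 20.0, "ink", "stroke-data"))
print(len(video.active_at(11.0)))  # 2: both intervals contain t = 11
```

For long videos, a linear scan over annotations would be replaced by an interval tree or a sorted index keyed on start time.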
Conference Paper
Mobile broadcasting services, allowing people to stream live video from their cameraphones to viewers online, are becoming widely used as tools for user-generated content. The next generation of these services enables collaboration in teams of camera operators and a director producing an edited broadcast. This paper contributes to this research area by exploring the possibility for the director to join the camera team on location, performing mixing and broadcasting on a mobile device. The Mobile Vision Mixer prototype embodies a technical solution for connecting four camera streams and displaying them in a mixer interface for the director to select from, under the bandwidth constraints of mobile networks. Based on field trials with amateur users, we discuss technical challenges as well as advantages of enabling the director to be present on location, in visual proximity of the camera team.
Conference Paper
The capture of lectures or similar presentations is of interest for several reasons. From the attendee's perspective, students may use the recordings when working on homework assignments or preparing for exams, or to watch the contents of a missed class. From the instructor's perspective, a captured lecture may be evaluated, recaptured for improvements, or reused as complementary learning material. Moreover, captured lectures may be a valuable resource for e-learning and distance education courses. In this paper we detail the design rationale associated with the development of a prototype platform for the ubiquitous capture of live presentations and their transformation into a corresponding interactive multi-video object. Our approach includes capturing important context information which, when incorporated into the multimedia object, enables one to interact with the recorded lecture in novel dimensions. We tested our prototype by using case studies involving instructors and students, which allowed us to identify important features and novel uses for the platform.
Article
This paper evaluates the benefits and drawbacks of lecture recording, which aspects of lectures and lecture capture systems are most used, and what additional features and functions would make the experience more effective. We evaluated four computer science courses recorded during spring 2011 using our comprehensive lecture capture system PAOL and presented with webMANIC. We discuss the results of student surveys and focus groups and compare these with prior surveys that investigated how students reacted to the availability of online lecture content and how they used these resources in large- and small-scale deployments with both home-grown and commercial lecture capture technologies. The primary motivation for this study was to analyze how lecture capture fits in the context of computer science curricula and pedagogy, and how we can enhance our systems to be more educationally effective.
Conference Paper
The ClassX open source project is a free experimental interactive video streaming platform designed for educators, researchers and software developers. With minimal infrastructure set-up, ClassX offers educational communities a cost-effective solution for online lecture delivery. Our goal is to encourage contributions from other researchers, developers and educators in building an open, cost-effective and state-of-the-art online education video viewing system for the general public.
Conference Paper
We report on design research investigating a possible combination of mobile collaborative live video production and VJing. In an attempt to better understand future forms of collaborative live media production, we study how VJs produce and mix visuals live. In the practice of producing visuals through interaction with both music and visitors, VJing embodies interesting properties that could inform the design of emerging mobile services. As a first step toward examining a generation of new applications, we tease out some characteristics of VJ production and live performance. We then derive requirements both for how visitors could capture and transmit live video using their mobile phones and for how this new medium could be integrated within VJ aesthetics and interaction. Finally, we present the SwarmCam application, which has been implemented to investigate these requirements.
Article
Remote viewing of lectures presented to a live audience is becoming increasingly popular. At the same time, the lectures can be recorded for subsequent on-demand viewing over the Internet. Providing such services, however, is often prohibitive due to the labor-intensive cost of capturing and pre/post-processing. This article presents a complete automated end-to-end system that supports capturing, broadcasting, viewing, archiving and searching of presentations. Specifically, we describe a system architecture that minimizes the pre- and post-production time, and a fully automated lecture capture system called iCam2 that synchronously captures all contents of the lecture, including audio, video, and presentation material. No staff is needed during lecture capture and broadcasting, so the operational cost of the system is negligible. The system has been used on a daily basis for more than 4 years, during which 522 lectures have been captured. These lectures have been viewed over 20,000 times.
References
  • R. Veeramani and S. Bradley. UW-Madison online learning study: Insights regarding undergraduate preferences for lecture capture.
  • StuPad: integrating student notes with class lectures. CHI '99 Extended Abstracts on Human Factors in Computing Systems.