Figure 4 - uploaded by Ingar Arntzen
Media elements (blue rectangles) and track elements (green triangles) distributed or duplicated across devices, each device with its own HTMLTimingObject (red line).
Source publication
Composition is a hallmark of the Web, yet it does not fully extend to linear media. This paper defines linear composition as the ability to form linear media by coordinated playback of independent linear components. We argue that native Web support for linear composition is a key enabler for Web-based multi-device linear media, and that precise mul...
Contexts in source publication
Context 1
... the concept of linear composition extends naturally to multi-device scenarios. Essentially, we want to go from single-device to multi-device by scattering or duplicating linear components across devices. At the same time, we need to go from single-device playback to simultaneous, multi-device playback. Figure 4 illustrates how two media elements (blue rectangles) and two track elements (green triangles) may be split across three devices. Note also that this multi-device scenario, particularly scenario B), demonstrates why the current dependency between track elements and media elements is not appropriate. In this illustration, track elements are promoted as standalone programming constructs depending directly on the HTMLTimingObject, just like media ...
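The primitive both media and track elements would depend on can be sketched as deterministic motion along a timeline. The following is a minimal, illustrative model only (the class and method names here are hypothetical, not the proposed HTMLTimingObject API): a vector of (position, velocity, acceleration) anchored at a timestamp, extrapolated on demand.

```javascript
// Minimal, illustrative timing-object model (hypothetical names, not
// the proposed HTMLTimingObject API): deterministic motion defined by
// (position, velocity, acceleration) anchored at a timestamp.
class TimingObject {
  constructor(vector = { position: 0, velocity: 0, acceleration: 0 }) {
    this.vector = { ...vector, timestamp: Date.now() / 1000 };
    this.listeners = [];
  }
  // Current motion state, extrapolated from the last update.
  query(now = Date.now() / 1000) {
    const { position, velocity, acceleration, timestamp } = this.vector;
    const d = now - timestamp;
    return {
      position: position + velocity * d + 0.5 * acceleration * d * d,
      velocity: velocity + acceleration * d,
      acceleration,
    };
  }
  // Media control: a single update (play, pause, seek) is observed by
  // every component attached to this timing object.
  update(changes, now = Date.now() / 1000) {
    this.vector = { ...this.query(now), ...changes, timestamp: now };
    this.listeners.forEach((cb) => cb(this.vector));
  }
  on(cb) { this.listeners.push(cb); }
}
```

With such a primitive, a track element needs no media element in between: it observes update events and queries the current position directly, exactly as a media element wrapper would.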
Context 2
... since we are designing for the Web, shared timing objects should be available wherever and whenever the Web is available. It follows that technical solutions based on services or features of local networks, specific network carriers, NAT traversal etc. are not appropriate. Also, in line with the client-server architecture of the Web, we prefer a centralized, service-based solution. So, we propose the concept of online timing objects, hosted by web services and available to all connected devices. Figure 4 may illustrate how a regatta could be presented in a multi-device scenario. Interactive race infographics and the regatta map may be hosted by an iPad, while a smart TV presents the main video feed. A smartphone may present time-aligned video clips, images and comments, while at the same time serving as an input device for user-generated content. Finally, media control is available from all devices. For instance, a touch-sensitive regatta timeline on the iPad may support easy timeshifting, as would a simple progress bar on the smartphone. Media control affects all components in unison, thereby providing consistent linear composition across multiple devices. Figure 5 illustrates a single, online timing object (red line), shared between distributed media elements (blue rectangles) and track elements (green triangles). The HTMLTimingObjects on each device (red lines) serve as local representations of the shared, online timing object. As the HTMLTimingObject encapsulates synchronization with online timing objects, media elements and track elements may readily support linear composition in multi-device as well as single-device media. In principle, distributed synchronization would only require the programmer to specify a valid URL for the source attribute of the ...
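The key property of an online timing object is that one published motion vector, interpreted against a shared service clock, yields the same position on every device. A rough sketch of that idea, under the assumption that each device has estimated its offset to the service clock (the vector shape and function names are illustrative):

```javascript
// Sketch (illustrative names): every device holds a local proxy of the
// same online timing object. The service publishes one motion vector;
// each proxy extrapolates it against the service clock, so all devices
// agree on the current position.
function queryVector(vector, serviceNow) {
  const d = serviceNow - vector.timestamp;
  return vector.position + vector.velocity * d;
}

// Vector as published by the (assumed) online timing service,
// timestamped on the service's clock.
const shared = { position: 30, velocity: 1, timestamp: 500 };

// Each device converts its local clock to service time using an
// estimated clock offset (obtained by NTP-style clock sync).
function localQuery(vector, localNow, clockOffset) {
  return queryVector(vector, localNow + clockOffset);
}
```

Two devices with very different local clocks, but correct offset estimates, compute the same position; this is what makes the iPad timeline, the TV video feed, and the smartphone clips agree.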
Citations
... Temporal interoperability implies that multiple, possibly heterogeneous media components may easily be combined into a single, consistently timed media experience [5]. We argue that temporal interoperability must be promoted as a principal feature of the Web, and finding the right approach to media synchronization is key to achieving this. ...
The Web is a natural platform for multimedia, with universal reach, powerful backend services, and a rich selection of components for capture, interactivity, and presentation. In addition, with a strong commitment to modularity, composition, and interoperability, the Web should allow advanced media experiences to be constructed by harnessing the combined power of simpler components. Unfortunately, with timed media this may be complicated, as media components require synchronization to provide a consistent experience. This is particularly the case for distributed media experiences. In this chapter we focus on temporal interoperability on the Web, how to allow heterogeneous media components to operate consistently together, synchronized to a common timeline and subject to shared media control. A programming model based on external timing is presented, enabling modularity, interoperability, and precise timing among media components, in single-device as well as multi-device media experiences. The model has been proposed within the W3C Multi-device Timing Community Group as a new standard, and this could establish temporal interoperability as one of the foundations of the Web platform.
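The external-timing model described above inverts the usual control relationship: components do not control themselves, they follow a shared timing object. A small self-contained sketch (illustrative only, not the W3C draft API) shows a stand-in media element slaved to a shared motion vector, so that one control operation reaches every attached component:

```javascript
// Sketch of external timing (illustrative, not the W3C draft API):
// a stand-in media element is slaved to a shared motion vector, and
// every control operation (play, seek) flows through the timing object.
const timing = {
  vector: { position: 0, velocity: 0, timestamp: 0 },
  listeners: [],
  query(now) {
    const d = now - this.vector.timestamp;
    return {
      position: this.vector.position + this.vector.velocity * d,
      velocity: this.vector.velocity,
    };
  },
  update(changes, now) {
    this.vector = { ...this.query(now), ...changes, timestamp: now };
    this.listeners.forEach((cb) => cb());
  },
};

// Stand-in for an HTMLMediaElement; a real wrapper would adjust
// currentTime and playbackRate on the actual element.
const media = { currentTime: 0, playbackRate: 0 };

// The wrapper reacts to shared control; any number of components
// (media or track elements) could attach the same way.
timing.listeners.push(() => {
  const s = timing.query(timing.vector.timestamp);
  media.currentTime = s.position;
  media.playbackRate = s.velocity;
});

// "Press play at t=10, from position 0":
timing.update({ position: 0, velocity: 1 }, 10);
```

Because control state lives in the timing object rather than in any one component, the same code works unchanged whether components share a page or are spread across devices.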
... In recent years a growing body of research has analysed and verified the benefits that so-called Time Awareness can bring to a broad range of application domains [4], [12], [11], [2]. Of particular interest is the TAACCS Interest Group (Time Aware Applications, Computers and Communication Systems), which has published a NIST white paper [27]. ...
Time synchronisation plays a critical role in time-sensitive distributed applications. While a variety of such applications exist across many domains, one particular set of applications where improved time synchronisation can lead to significant benefits, particularly with respect to QoE (Quality of Experience), is multimedia applications. While time synchronisation is not a new challenge, advances in wireless technologies have drastically transformed network infrastructures. 802.11 wireless networks increasingly represent the last hop within the ever-expanding Internet, and whilst users expect the same levels of multimedia QoE as exist over wired networks, the reality of moving back to contention-based access leaves many disappointed. This transformation of networks has also proven problematic for time synchronisation protocols that were designed for wired infrastructures. Wireless networks, particularly contention-based networks, can be the source of very significant non-deterministic packet latencies. In certain scenarios, such latencies can greatly degrade the performance of time synchronisation. This work details and validates a technique that can be used to determine the latency of time messages in real-time as they traverse an 802.11 wireless link. Knowledge of these latencies can be used to greatly reduce the error in a dataset employed by time synchronisation protocols such as NTP and, thus, improve their performance. Experimental results confirm error reductions of up to 90% in a dataset and prove that the use of this technique can deliver time accuracies akin to those achievable over wired networks. This in turn can greatly benefit users by enabling multimedia applications to benefit from the continued use of time synchronisation for QoE management.
We outline two such scenarios, one where time synchronisation is used to prioritise VoIP traffic within an Access Point and a second where the aim is to use time synchronisation to optimise jitter buffer strategies for WebRTC.
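The core idea, in miniature, is that offset samples taken over a contended 802.11 link carry extra, one-sided delay, so knowing per-sample latency lets a protocol discard the damaged samples before estimating the clock offset. The following toy sketch (not the paper's algorithm; names and the keep-fraction parameter are illustrative) demonstrates the effect:

```javascript
// Toy sketch of latency-aware sample filtering (illustrative, not the
// paper's algorithm): keep only the lowest-latency offset samples
// before averaging, since high-latency samples carry one-sided delay
// that biases the offset estimate.
function filterByLatency(samples, keepFraction = 0.25) {
  const sorted = [...samples].sort((a, b) => a.latency - b.latency);
  const n = Math.max(1, Math.ceil(sorted.length * keepFraction));
  const kept = sorted.slice(0, n);
  return kept.reduce((sum, s) => sum + s.offset, 0) / kept.length;
}

// Hypothetical samples: true offset is 5 ms; the two high-latency
// samples are inflated by contention on the wireless link.
const samples = [
  { offset: 5.0, latency: 1 },
  { offset: 5.1, latency: 2 },
  { offset: 9.0, latency: 40 },
  { offset: 12.0, latency: 80 },
];
```

A plain mean over all four samples lands near 7.8 ms, while the latency-filtered estimate recovers the true 5 ms offset; this is the kind of dataset error reduction the abstract refers to.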
... UI-independent sequencing simplifies integration of new data types into visual and interactive components. Integration with an external timing object [7] ensures that media components based on the Sequencer may trivially be synchronized and remote controlled, both in single-page media presentations as well as global, multi-device media applications [5,6,7,16]. A JavaScript implementation for the Sequencer is provided based on setTimeout, ensuring precise timing and reduced energy consumption. ...
... Instead, we advocate a programming model where timing and sequencing functionality are made available as independent, generic programming tools, applicable across application domains. The timing object [5,7] is the fundamental building block of this programming model, defining a unifying foundation for timing, synchronization and control. This paper presents the Sequencer, a generic sequencing tool as an additional building block for timed media applications. ...
... Support for dynamic sequencing allows dynamic data sources to be used without introducing any additional complexity for the programmer. Finally, the timing object supports distributed timing through Shared Motion [5,6,16]. In short, this opens up multi-device sequencing to any connected Web agent, independent of data format, delivery mechanism or UI framework. ...
Media players and frameworks all depend on the ability to produce correctly timed audiovisual effects. More formally, sequencing is the process of translating timed data into correctly timed presentation. Though sequencing logic is a central part of all multimedia applications, it tends to be tightly integrated with specific media formats, authoring models, timing/control primitives and/or predefined UI elements. In this paper, we present the Sequencer, a generic sequencing tool cleanly separated from data, timing/control and UI. Data-independent sequencing implies broad utility as well as simple integration of different data types and delivery methods in multimedia applications. UI-independent sequencing simplifies integration of new data types into visual and interactive components. Integration with an external timing object [7] ensures that media components based on the Sequencer may trivially be synchronized and remote controlled, both in single-page media presentations as well as global, multi-device media applications [5, 6, 7, 16]. A JavaScript implementation for the Sequencer is provided based on setTimeout, ensuring precise timing and reduced energy consumption. The implementation is open sourced as part of timingsrc [2, 3], a new programming model for precisely timed Web applications. The timing object and the Sequencer are proposed for standardization by the W3C Multi-device Timing Community Group [20].
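The setTimeout-based approach described in the abstract can be illustrated with a deliberately minimal sketch (not the timingsrc Sequencer; all names are hypothetical): given the current motion, compute when each cue falls due, and arm exactly one timeout for the nearest one.

```javascript
// Minimal sequencing sketch (illustrative, not the timingsrc
// Sequencer): translate timed cues into correctly timed callbacks by
// computing, from the current motion, how long until each cue is due.
function msUntil(cuePosition, motion) {
  // motion: { position, velocity } at "now"; returns null if the cue
  // cannot be reached with the current velocity (paused, or behind us).
  if (motion.velocity === 0) return null;
  const ms = ((cuePosition - motion.position) / motion.velocity) * 1000;
  return ms >= 0 ? ms : null;
}

function scheduleNext(cues, motion, fire) {
  // Arm a single setTimeout for the earliest due cue; rescheduling
  // after it fires (or after any control change) is what keeps timing
  // precise while avoiding polling and reducing energy consumption.
  const due = cues
    .map((cue) => ({ cue, ms: msUntil(cue.position, motion) }))
    .filter((x) => x.ms !== null)
    .sort((a, b) => a.ms - b.ms)[0];
  if (!due) return null;
  return setTimeout(() => fire(due.cue), due.ms);
}
```

Note that the delay calculation depends only on the motion vector, not on data format or UI, which is what makes this style of sequencing data- and UI-independent; a control change (seek, pause, rate change) simply cancels the pending timeout and reschedules.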
In this work, we present a case study in which participants used a mobile application for the collaborative recording of presentations. We developed an application for Android devices to investigate the usability and design requirements of a mobile, collaborative capture system. Our main concern is to facilitate collaboration and produce an enhanced result without adding complexity to the individual capture task. Accordingly, we focused on problems related to usability and to the awareness information that enables users to conduct an opportunistic recording. We report our case study results and discuss the design requirements we identified for the collaborative recording of presentations by users in possession of smartphones and tablets.