Silvia Pfeiffer

National ICT Australia Ltd | NICTA · Broadband and the Digital Economy

PhD (Dr rer nat)

About

46 Publications · 7,430 Reads
1,188 Citations
Additional affiliations
May 1996 - June 2006
The Commonwealth Scientific and Industrial Research Organisation
Position
  • Researcher
January 1994 - March 1999
Universität Mannheim

Publications

Publications (46)
Article
Full-text available
Background/Introduction: Literacy difficulties have significant long-term impacts on individuals, and therefore early identification and intervention are critical. Access to experienced professionals who conduct standardized literacy assessments with children is limited in rural and remote areas. The emerging literature supports the feasibility of...
Article
Introduction: Access to cognitive assessments for children living remotely is limited. Telehealth represents a potential cost- and time-effective solution. A pilot study was conducted to determine the feasibility of telehealth to assess cognitive function in children with learning difficulties. Methods: Thirty-three children (median age = 9 year...
Conference Paper
Full-text available
Children with speech and language difficulties living in rural areas are disadvantaged by their relative lack of access to speech pathologists. Unrecognised and untreated language impairment can impact significantly on literacy, learning, and employment. Telehealth is an effective way of providing intervention for speech and language issues. Howeve...
Article
Available from http://www.w3.org/TR/media-frags/
Article
Full-text available
To make media resources a prime citizen on the Web, we have to go beyond simply replicating digital media files. The Web is based on hyperlinks between Web resources, and that includes hyperlinking out of resources (e.g., from a word or an image within a Web page) as well as hyperlinking into resources (e.g., fragment URIs into Web pages). To turn...
Article
HTML5 is an updated version of the hypertext markup language that has been empowering the World Wide Web for the last 20 years. One of the things that HTML5 introduces is the <video> element, which makes video content as simple to include in Web pages as images. Similar to the issues that had to be overcome with the introduction of the <img> tag in 1993, we are n...
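A minimal sketch of the kind of markup this describes (the file name and attribute values are illustrative placeholders, not taken from the article):

```html
<!-- Hypothetical example: embedding a video as simply as an image,
     with built-in playback controls. The file name is a placeholder. -->
<video src="clip.webm" controls width="640">
  Fallback text shown by browsers without HTML5 video support.
</video>
```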
Conference Paper
Full-text available
In this paper, we describe two examples of implementations of the Media Fragments URI specification which is currently being developed by the W3C Media Fragments Working Group. The group's mission is to create standard addressing schemes for media fragments on the Web using Uniform Resource Identifiers (URIs). We describe two scenarios to illustrat...
Article
SVG stands for Scalable Vector Graphics and is a language used to describe two-dimensional graphical objects in XML. In the past, SVG has been a standalone format used in web browsers through Adobe Flash as an embedded resource or as an image resource. Nowadays, all modern browsers support SVG natively, including Internet Explorer 9.
Article
In this chapter we look at the efforts that are being made to allow HTML5 to access audio and video devices and to use these for live communications such as audio and video conferencing.
Article
We have learned a lot of ways in which the HTML5 media elements can be manipulated and modified using JavaScript. Some of the video manipulations—in particular when used in Canvas—can be very CPU intensive and slow. Web Workers are a means to deal with this situation.
Article
With this chapter, we explore a set of features that are less stable and less firmly defined than the features discussed in previous chapters. This and all following chapters present features that are at the time of writing still work in progress. But they introduce amazing possibilities and we therefore cannot ignore them. Some of the features hav...
Conference Paper
Full-text available
In this paper, we describe existing implementations for putting subtitles and captions alongside the HTML5 <video> tag inside Web pages, and a proposal for standardizing such approaches, which will make them interoperable and easier to process by automated tools. Since video and audio are fundamental data types that any Web user will want to make use o...
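As a hedged illustration, captions can be attached to a video in the form that this line of work helped standardize (the <track>/WebVTT markup shown follows the eventual HTML5 shape, not necessarily the exact proposal in the paper; file names are placeholders):

```html
<!-- Hypothetical example: a WebVTT caption track attached to an
     HTML5 video. File names are placeholders. -->
<video src="talk.webm" controls>
  <track src="talk-captions.vtt" kind="captions" srclang="en" label="English">
</video>
```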
Article
Full-text available
Since the year 2000 a project under the name of "Continuous Media Web", CMWeb, has explored how to make video (and incidentally audio) a first class citizen on the Web. The project has led to a set of open specifications and open source implementations, which have been included into the Xiph set of open media technologies. In the spirit of the Web,...
Article
Full-text available
The World Wide Web, with its paradigms of surfing and searching for information, has become the predominant system for computer-based information retrieval. Media resources, however information-rich, only play a minor role in providing information to Web users. While bandwidth (or the lack thereof) may be an excuse for this situation, the lack of s...
Article
Full-text available
Digital audio & video data have become an integral part of multimedia information systems. To reduce storage and bandwidth requirements, they are commonly stored in a compressed format, such as MPEG-1. Increasing amounts of MPEG encoded audio and video documents are available online and in proprietary collections. In order to effectively utilise th...
Article
Full-text available
The lure of video blogging combines the ubiquitous, grassroots, Web-based journaling of blogging with the richness of expression available in multimedia. Some claim that video blogging is an important force in a future world of video journalism and a powerful technical adjunct to our existing televised news sources. Others point to the huge demands...
Conference Paper
Semantic interpretation of the data distributed over the Internet is subject to major current research activity. The Continuous Media Web (CMWeb) extends the World Wide Web to time-continuously sampled data such as audio and video with regard to searching, linking, and browsing functionality. The CMWeb technology is based on the file format Annodex...
Article
Full-text available
The Continuous Media Web project has developed a technology to extend the Web to time-continuously sampled data enabling seamless searching and surfing with existing Web tools. This chapter discusses requirements for such an extension of the Web, contrasts existing technologies and presents the Annodex technology, which enables the creation of Webs...
Conference Paper
This demonstration introduces the Annodex set of technologies, which enable the creation of Webs of audio and video resources integrated into the searching and surfing infrastructure of the World Wide Web. The demonstration covers the live creation of Annodex content and thus of Webs of video and audio, the setup of a Web server to distribute Annod...
Conference Paper
One of the goals of the Continuous Media Web project is to integrate digital media with the World Wide Web: media documents can hyperlink to and from other documents in the same way that HTML pages do. The dual capabilities of hyperlinking (1) to other documents while viewing a media clip, and (2) into precise time intervals in a media clip, enabl...
Conference Paper
We give an overview of existing audio analysis approaches in the compressed domain and incorporate them into a coherent formal structure. After examining the kinds of information accessible in an MPEG-1 compressed audio stream, we describe a coherent approach to determine features from them and report on a number of applications they enable. Most o...
Conference Paper
Full-text available
Today, Web browsers can interpret an enormous number of different file types, including time-continuous data. When consuming audio or video, however, the hyperlinking functionality of the Web is "left behind", since these files are typically unsearchable and thus not indexed by common text-based search engines. Our XML-based CMML annotation format and...
Article
Full-text available
This chapter describes different approaches that use audio features for determination of scenes in edited video. It focuses on analysing the sound track of videos for extraction of higher-level video structure. We define a scene in a video as a temporal interval which is semantically coherent. The semantic coherence of a scene is often constructed...
Article
Full-text available
Determining automatically what constitutes a scene in a video is a challenging task, particularly since there is no precise definition of the term scene. It is left to the individual to set attributes shared by consecutive shots which group them into scenes. Certain basic attributes such as dialogs, settings and continuing sounds are consistent ind...
Conference Paper
Full-text available
This paper presents work on the determination of temporal audio segmentations at different semantic levels. The segmentation algorithm draws upon the calculation of relative silences or pauses. A perceptual loudness measure is the only feature employed. An adaptive threshold is used for classification into pause and non-pause. The segmentation algo...
Article
Introduction & Project Overview In 1994, an ambitious project in the multimedia domain was started at the University of Mannheim under the guidance of Prof. Dr. W. Effelsberg. We realized that multimedia applications using continuous media like video and audio data absolutely require access to semantic contents of these media types similar to that...
Article
Full-text available
Semantic access to the content of a video is highly desirable for multimedia content retrieval. Automatic extraction of semantics requires content analysis algorithms. Our MoCA (Movie Content Analysis) project provides an interactive workbench supporting the researcher in the development of new movie content analysis algorithms. The workbench offer...
Article
Full-text available
Determining automatically what constitutes a scene in a video is a challenging task, particularly since there is no precise definition of the term "scene". It is left to the individual to set attributes shared by consecutive shots which group them into scenes. Certain basic attributes such as dialogs, settings and continuing sounds are consiste...
Article
Full-text available
We all know what the abstract of an article is: a short summary of a document, often used to preselect material relevant to the user. The medium of the abstract and the document are the same, namely text. In the age of multimedia, it would be desirable to use video abstracts in very much the same way: as short clips containing the essence of a long...
Conference Paper
Full-text available
The ISO/MPEG group has identified a wide range of application scenarios [1] for their emerging MPEG-7 standard on audio-visual metadata. TV Anytime with their vision of future digital TV services [2] encompasses a large number of them. As TV Anytime has also identified metadata as one of the key requirements to realize their vision, MPEG-7 is the n...
Conference Paper
Full-text available
Determining automatically what constitutes a scene in a video is a challenging task, particularly since there is no precise definition of the term "scene". It is left to the individual to set attributes shared by consecutive shots which group them into scenes. Certain basic attributes such as dialogs, settings and continuing sounds are consist...
Article
Full-text available
Audio content analysis is nowadays mainly performed on quantised sound waves after transforming the samples into the frequency domain. Our work exploits MPEG-1 encoded audio data for audio content analysis. As MPEG-1 uses Subband Coding to compress sound samples, the encoded data is directly usable for content analysis. This has the advantage of re...
Article
Full-text available
The importance of perceptive modeling for the calculation of sound features is well known. The use of simple perception-based adaptations of physically measured stimuli, such as the dB scale or loudness, is a minimal requirement. Exactly how much value can be gained by more complex perceptive modeling has not been investigated in detail. The paper examin...
Article
Full-text available
6.1 Motivation: In current video marketing, it is common to produce a trailer (a short summary) of a video in order to get people interested in the film. With a vast number of stored videos in a video archive, it is not possible to produce a trailer by hand for each of the stored films. It is, however, interesting for a customer to browse the...
Article
Full-text available
Presented is an algorithm for automatic production of a video abstract of a feature film, similar to a movie trailer. It selects clips from the original movie based on detection of special events like dialogs, shots, explosions and text occurrences, and on general action indicators applied to scenes. These clips are then assembled to form a video t...
Conference Paper
Full-text available
Semantic access to the content of a video is highly desirable for multimedia content retrieval. Automatic extraction of semantics requires content analysis algorithms. Our MoCA (Movie Content Analysis) project provides an interactive workbench supporting the researcher in the development of new movie content analysis algorithms. The workbench offer...
Article
Abstracting Digital Movies Automatically. Silvia Pfeiffer, Rainer Lienhart, Stephan Fischer and Wolfgang Effelsberg, Praktische Informatik IV, Universität Mannheim, L 15, 16, D-68131 Mannheim. pfeiffer@p...
Conference Paper
Full-text available
This paper describes the theoretic framework and applications of automatic audio content analysis. After explaining the tools for audio analysis such as analysis of the pitch or the frequency spectrum, we describe new applications which can be developed using the toolset. We discuss content-based segmentation of the audio stream, music analysis and...
Article
Full-text available
The Continuous Media Web (CMWeb) integrates time-continuous media into the searching, linking, and browsing functionality of the World Wide Web. The file format underlying the CMWeb technology, Annodex, streams the media content multiplexed with XML markup in the Continuous Media Markup Language (CMML). CMML contains information relevant to the who...
Article
This document describes the Media Fragments 1.0 specification. It specifies the syntax for constructing media fragment URIs and explains how to handle them when used over the HTTP protocol. The syntax is based on the specification of particular name-value pairs that can be used in URI fragment and URI query requests to restrict a media resource to...
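The name-value-pair syntax the specification describes can be sketched with a short example (the URI and time values are hypothetical; the `#t=<start>,<end>` temporal dimension, with times in seconds, follows the Media Fragments 1.0 syntax):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical URI using the temporal ("t") name-value pair from the
# Media Fragments 1.0 syntax: #t=<start>,<end>, with times in seconds.
uri = "http://example.com/video.ogv#t=10,20"

# The fragment component carries the name-value pairs.
fragment = urlparse(uri).fragment        # "t=10,20"
pairs = parse_qs(fragment)               # {"t": ["10,20"]}

# Split the clipping interval into start and end times.
start, end = (float(v) for v in pairs["t"][0].split(","))
print(start, end)  # 10.0 20.0
```

A conforming user agent would use such an interval to restrict playback (or an HTTP range request) to the addressed portion of the media resource.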
Article
Full-text available
In the digital age, in a time where increasing amounts of video are being published and distributed online, we need more quantitative information on user needs and user consumption of online video to determine a sensible broadcast and communications policy. This talk outlines what types of video communication we now encounter online, what approache...

Projects

Projects (3)
Project
Building an easy-to-integrate, highly secure video conferencing API as a startup.