Conference Paper

Code generation for the MPEG Reconfigurable Video Coding framework: From CAL actions to C functions

IETR Lab., Image & Remote Sensing Group, INSA de Rennes, Rennes
DOI: 10.1109/ICME.2008.4607618 Conference: Multimedia and Expo, 2008 IEEE International Conference on
Source: IEEE Xplore


The MPEG reconfigurable video coding (RVC) framework is a new standard under development by MPEG that aims to provide a unified specification of current MPEG video coding technologies. In this framework, a decoder is built as a configuration of video coding modules taken from the standard "MPEG toolbox library". The elements of the library are specified using the CAL actor language (CAL), a dataflow-based language providing concurrent and modular models of computation. This paper describes a synthesis tool that automatically generates compilable C code from a CAL specification. Code generators are fundamental supports for the deployment and success of the MPEG RVC framework. This paper focuses on the automatic translation of CAL actions, which is the first step toward a complete actor translation. The techniques described here enable the automatic generation of C code according to a finite set of rules. This approach has been used to obtain a C implementation of the IDCT module, one element of the RVC library. The generated code is validated against the original CAL dataflow program simulated using the open dataflow environment.
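The abstract describes translating CAL actions into C functions. As an illustration only (the paper's actual runtime types and naming conventions are not given here), a translated action could pair a firing-condition test with a function that consumes input tokens and produces output tokens; every name below is hypothetical:

```c
#include <stddef.h>

/* Hypothetical token FIFO; the real CAL2C runtime types are not
 * specified in the abstract, so this is illustrative only. */
typedef struct {
    int data[64];
    size_t read;
    size_t write;
} fifo_t;

static size_t fifo_count(const fifo_t *f) { return f->write - f->read; }
static int    fifo_get(fifo_t *f)         { return f->data[f->read++ % 64]; }
static void   fifo_put(fifo_t *f, int v)  { f->data[f->write++ % 64] = v; }

/* A CAL action such as
 *     scale: action In:[x] ==> Out:[2 * x] end
 * could translate into a C function that reads one token,
 * evaluates the output expression, and writes one token. */
static void scale_action(fifo_t *in, fifo_t *out)
{
    int x = fifo_get(in);
    fifo_put(out, 2 * x);
}

/* The generated scheduler would test the firing condition
 * (enough input tokens) before invoking the action. */
static int scale_is_fireable(const fifo_t *in)
{
    return fifo_count(in) >= 1;
}
```

The split between a fireability test and the action body mirrors CAL's separation of firing conditions from action execution.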



Available from: Olivier Déforges
    • "Code generation from CAL to C (CAL2C) has also been developed in previous work [10], and we have explored integrated application of CAL, TDP, and CAL2C using manual techniques [9]. Simulation results from such manual integration demonstrated that the integrated application leads to improved exploitation of parallelism [11]. "
    ABSTRACT: This paper proposes an automatic design flow from user-friendly design to efficient implementation of video processing systems. This design flow starts with the use of coarse-grain dataflow representations based on the CAL language, which is a complete language for dataflow programming of embedded systems. Our approach integrates previously developed techniques for detecting synchronous dataflow (SDF) regions within larger CAL networks, and exploiting the static structure of such regions using analysis tools in the dataflow interchange format package (TDP). Using a new XML format that we have developed to exchange dataflow information between different dataflow tools, we explore systematic implementation of signal processing systems using CAL, SDF-like region detection, TDP-based static scheduling, and CAL-to-C (CAL2C) translation. Our approach, which is a novel integration of three complementary dataflow tools (the CAL parser, TDP, and CAL2C), is demonstrated on an MPEG Reconfigurable Video Coding (RVC) decoder.
    Full-text · Conference Paper · Dec 2010
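The SDF-based static scheduling mentioned in the abstract above rests on balance equations: for an edge where the producing actor emits p tokens per firing and the consuming actor takes c, the minimal repetition counts qA and qB must satisfy p·qA = c·qB. A minimal sketch of that computation (function names are illustrative, not TDP's API):

```c
/* Greatest common divisor, used to reduce the balance equation. */
static unsigned gcd_u(unsigned a, unsigned b)
{
    while (b) { unsigned t = a % b; a = b; b = t; }
    return a;
}

/* Minimal repetition counts solving p * qA == c * qB for one
 * SDF edge: producer emits p tokens per firing, consumer reads c. */
static void sdf_repetitions(unsigned p, unsigned c,
                            unsigned *qA, unsigned *qB)
{
    unsigned g = gcd_u(p, c);
    *qA = c / g;
    *qB = p / g;
}
```

For example, a producer emitting 2 tokens per firing feeding a consumer that reads 3 gives qA = 3, qB = 2; a static scheduler can then fire the actors in a fixed repeating pattern with bounded buffers, which is what makes SDF-like regions attractive for compile-time scheduling.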
    • "Inside an actor, CAL translation is performed in two parts: translation of actor code (actions, functions, and procedures) to express the core functionality, and implementation of the action scheduler [priorities, finite-state machines (FSMs), and guards] to control execution of the actions [20]. Translating CAL actor code produces a single C file that contains translated versions of functions, procedures, and actions. [Fig. 3, CAL2C compilation process: the action translation process starts with an abstract syntax tree (AST) derived from the CAL source code; the transformed CAL AST is expressed in the C intermediate language (CIL) [22], where CAL functional constructs are replaced by imperative ones.]"
    ABSTRACT: This paper presents an in-depth case study on dataflow-based analysis and exploitation of parallelism in the design and implementation of an MPEG reconfigurable video coding decoder. Dataflow descriptions have been used in a wide range of digital signal processing (DSP) applications, such as applications for multimedia processing and wireless communications. Because dataflow models are effective in exposing concurrency and other important forms of high level application structure, dataflow techniques are promising for implementing complex DSP applications on multicore systems, and other kinds of parallel processing platforms. In this paper, we use the CAL actor language (CAL) as a concrete framework for representing and demonstrating dataflow design techniques. Furthermore, we also describe our application of the dataflow interchange format package (TDP), a software tool for analyzing dataflow networks, to the systematic exploitation of concurrency in CAL networks that are targeted to multicore platforms. Using TDP, one is able to automatically process regions that are extracted from the original network, and exhibit properties similar to synchronous dataflow (SDF) models. This is important in our context because powerful techniques, based on static scheduling, are available for exploiting concurrency in SDF descriptions. Detection of SDF-like regions is an important step for applying static scheduling techniques within a dynamic dataflow framework. Furthermore, segmenting a system into SDF-like regions also allows us to explore cross-actor concurrency that results from dynamic dependences among different regions. Using SDF-like region detection as a preprocessing step to software synthesis generally provides an efficient way for mapping tasks to multicore systems, and improves the system performance of video processing applications on multicore platforms.
    Full-text · Article · Dec 2009 · IEEE Transactions on Circuits and Systems for Video Technology
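The excerpt above splits CAL translation into two parts: the actor code itself and an action scheduler built from priorities, FSMs, and guards. A minimal C sketch of the scheduler idea, with hypothetical names and a simple counter standing in for FIFO status (this is not the actual CAL2C output):

```c
#include <stdbool.h>

/* Hypothetical two-state actor: all names here are illustrative. */
typedef enum { S_IDLE, S_BUSY } fsm_state_t;

typedef struct {
    fsm_state_t state;
    int tokens_available;  /* stand-in for real input-FIFO status */
    int fired_start;
    int fired_done;
} actor_t;

/* Guards: boolean conditions over the FSM state and the inputs. */
static bool guard_start(const actor_t *a)
{
    return a->state == S_IDLE && a->tokens_available > 0;
}

static bool guard_done(const actor_t *a)
{
    return a->state == S_BUSY;
}

/* Action scheduler: tries actions in priority order; a firing
 * consumes input and performs the corresponding FSM transition. */
static bool schedule_once(actor_t *a)
{
    if (guard_start(a)) {
        a->tokens_available--;
        a->fired_start++;
        a->state = S_BUSY;   /* FSM transition IDLE -> BUSY */
        return true;
    }
    if (guard_done(a)) {
        a->fired_done++;
        a->state = S_IDLE;   /* FSM transition BUSY -> IDLE */
        return true;
    }
    return false;            /* no action is fireable */
}
```

Calling `schedule_once` in a loop until it returns false mimics the generated scheduler's job: select at most one fireable action per step, respecting guard conditions and the actor's FSM.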
    • "Furthermore, the video processing domain is also investigating adaptive processes. For instance, the Reconfigurable Video Coding (RVC) working group for MPEG standardization [30] is dealing with this topic. The solution we proposed in [28] for the management of reconfiguration is called HDReM, for Hierarchical Distributed Reconfiguration Management. "
    ABSTRACT: Cognitive Radio (CR) equipments are radio devices that support the smart facilities offered by future cognitive networks. Even if several categories of equipment exist (terminal, base station, smart PDA, etc.), each requiring different processing capabilities (and associated cost or power consumption), these equipments must also integrate a set of new capabilities for CR support, in addition to the usual radio signal processing elements. This implies real-time radio adaptation and sensing capabilities, among others. We assert that it is necessary to add management facilities inside the radio equipment for that purpose, and we propose in this paper a high-level design approach for the specification of a management framework. This includes a set of design rules, based on hierarchical units distributed over three levels, and the associated APIs necessary to efficiently manage CR features inside a CR equipment. The proposed architecture is called HDCRAM (Hierarchical and Distributed Cognitive Architecture Management). HDCRAM is an extension of a former hierarchical and distributed reconfiguration management (HDReM) architecture, which is derived from our previous research on Software Defined Radio (SDR). HDCRAM adds to the HDReM's reconfiguration management facilities the new management features that enable support for sensing and decision-making facilities. It consists of the combination of one Cognitive Radio Management Unit (CRMU) with each Reconfiguration Management Unit (ReMU) distributed within the equipment. Each CRMU is in charge of capture, interpretation, and decision making according to its own goals. In this Cognitive Radio context, the term "decision" refers to the adaptation of the radio parameters to the equipment's environment. This paper details HDCRAM's management functionality and structure.
Moreover, in order to facilitate the early design phase of the management specification, which is new in radio design, HDCRAM has also been modeled with a meta-programming language based on UML. Beyond the first objective of high-level specification, we have also derived a simulator from the obtained meta-model, thanks to the use of an executable language. This makes it possible to specify the CR needs and run a wide variety of scenarios in order to validate the CR equipment's design. This approach provides high-level design facilities for the specification of cognitive management APIs inside a cognitive radio equipment.
    Full-text · Article · Mar 2009 · Journal of Network and Systems Management