Karol Desnos

Institut National des Sciences Appliquées de Rennes (INSA Rennes) · Institut d'Electronique et de Télécommunications de Rennes (IETR)

PhD

About

64 Publications
4,591 Reads
436 Citations
Additional affiliations
September 2011 - September 2015
Institut National des Sciences Appliquées de Rennes
Position
  • PhD Student
Education
September 2006 - August 2011
Institut National des Sciences Appliquées de Rennes
Field of study
  • Electronics and Computer Engineering

Publications

Publications (64)
Preprint
Full-text available
Tangled Program Graph (TPG) is a reinforcement learning technique based on genetic programming concepts. On state-of-the-art learning environments, TPGs have been shown to offer competence comparable to that of Deep Neural Networks (DNNs), for a fraction of their computational and storage cost. This lightness of TPGs, both for training and inference, mak...
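The abstract above only hints at how a TPG takes a decision. As a rough, hedged illustration (invented class names, a random linear projection standing in for evolved programs, and not the authors' implementation), inference walks a graph of "teams" whose programs each bid on the current state; the highest bid selects either an atomic action or an edge to another team:

```python
import random

class Program:
    """Scores a state. A random linear projection stands in for an evolved
    instruction sequence (purely illustrative)."""
    def __init__(self, n_features):
        self.weights = [random.uniform(-1.0, 1.0) for _ in range(n_features)]

    def bid(self, state):
        return sum(w * s for w, s in zip(self.weights, state))

class Team:
    """A vertex of the graph: (Program, action) edges, where an action is
    either an atomic action label or another Team."""
    def __init__(self, edges):
        self.edges = edges

def infer(root, state):
    """Follow the highest-bidding edge at each team, skipping already-visited
    teams, until an atomic (non-Team) action is reached. Assumes every team
    keeps at least one atomic action edge, as TPGs guarantee."""
    team, visited = root, set()
    while True:
        visited.add(id(team))
        candidates = [(p, a) for p, a in team.edges
                      if not isinstance(a, Team) or id(a) not in visited]
        _, action = max(candidates, key=lambda pa: pa[0].bid(state))
        if not isinstance(action, Team):
            return action
        team = action

# Tiny made-up example: a root team pointing to a leaf team and to atomic actions.
leaf = Team([(Program(4), "LEFT"), (Program(4), "RIGHT")])
root = Team([(Program(4), "NOOP"), (Program(4), leaf)])
print(infer(root, [0.2, -1.0, 0.5, 3.0]))
```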
Chapter
A common optimization of signal and image processing applications is pipelining over the multiple Processing Elements (PEs) available on multicore or manycore architectures. Pipelining an application often increases the throughput at the price of a higher memory footprint. Evaluating different pipeline configurations to select the best one is time con...
Article
Full-text available
In dataflow representations for signal processing systems, applications are represented as directed graphs in which vertices represent computations and edges correspond to buffers that store data as it passes among computations. The edges in the dataflow graph are single-input, single-output components that manage data transmission in a first-in, f...
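As a very small illustration of the FIFO semantics described above (plain Python, not tied to any particular tool or to this article), an edge can be modeled as a queue written by the producing computation and read, in production order, by the consuming one:

```python
from collections import deque

fifo = deque()                 # the edge: single-input, single-output, first-in first-out

def producer(n):
    for i in range(n):
        fifo.append(i * i)     # the source computation writes tokens to the edge

def consumer():
    return fifo.popleft() + 1  # the sink computation reads tokens in arrival order

producer(4)
print([consumer() for _ in range(4)])   # [1, 2, 5, 10]
```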
Article
Stream processing applications running on Heterogeneous Multi-Processor Systems on Chips (HMPSoCs) require efficient resource allocation and management, both at compile-time and at runtime. To cope with modern adaptive applications whose behavior cannot be exhaustively predicted at compile-time, runtime managers must be able to take resource alloc...
Article
Full-text available
The widening of the complexity-productivity gap in application development witnessed in recent years is becoming an important issue for developers. New design methods try to automate most designers' tasks to bridge this gap. In addition, new Models of Computation (MoCs), such as dataflow-based ones, ease the expression of parallelism within applic...
Chapter
Cyber-Physical Systems (CPS) are interconnected devices that react dynamically to sensed external and internal triggers. The H2020 CERBERO EU Project is developing a design environment composed of modelling, deployment, and verification tools for adaptive CPS. This paper focuses on its efficient support for run-time self-adaptivity.
Chapter
A common problem when developing signal processing applications is to expose and exploit parallelism in order to improve both throughput and latency. Many programming paradigms and models have been introduced to serve this purpose, such as the Synchronous DataFlow (SDF) Model of Computation (MoC). SDF is used especially to model signal processing a...
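As a concrete example of the kind of static analysis the SDF MoC enables (the classic consistency check on a toy graph, not a result specific to this chapter), the smallest integer repetition vector can be derived from the balance equations prod(e) · q[src] = cons(e) · q[dst]:

```python
from fractions import Fraction
from math import lcm

# Hedged sketch: assumes a connected, consistent example graph with invented rates.
edges = [("src", "filter", 1, 4),      # src produces 1 token per firing, filter consumes 4
         ("filter", "sink", 2, 1)]     # filter produces 2 tokens per firing, sink consumes 1

q = {"src": Fraction(1)}               # fix one actor, propagate rational firing counts
changed = True
while changed:
    changed = False
    for a, b, prod, cons in edges:
        if a in q and b not in q:
            q[b] = q[a] * prod / cons; changed = True
        elif b in q and a not in q:
            q[a] = q[b] * cons / prod; changed = True

scale = lcm(*(f.denominator for f in q.values()))
q = {actor: int(f * scale) for actor, f in q.items()}
print(q)                               # {'src': 4, 'filter': 1, 'sink': 2}
```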
Article
Heterogeneous Multiprocessor Systems-on-a-Chip (MPSoCs) with programmable hardware acceleration are currently gaining market share in the embedded device domain. The largest MPSoCs combine several software processing cores with programmable logic. In these systems, reaching the optimal implementation performance is difficult because many manual and tim...
Article
Full-text available
To design faster and more energy-efficient systems, numerous inexact arithmetic operators have been proposed, generally obtained by modifying the logic structure of conventional circuits. However, as the quality of service of an application has to be ensured, these operators need to be precisely characterized to be usable in commercial or real-life...
Conference Paper
Full-text available
Cyber-Physical Systems (CPS) are embedded computational collaborating devices, capable of sensing and controlling physical elements and, often, of responding to humans. Designing and managing systems able to respond to different, concurrent requirements during operation is not straightforward, and introduces the need for proper support at design-time an...
Chapter
The complexity of today’s multi-processor architectures raises the need to increase the level of abstraction of software development paradigms above third-generation programming languages (e.g., C/C++). Code generation from model-based specifications is considered as a promising approach to increase the productivity and quality of software developm...
Article
Full-text available
Domain-specific acceleration is now a “must” across the whole computing spectrum, from high performance computing to embedded systems. Unfortunately, system specialization is by nature a nightmare from the design productivity perspective. Nevertheless, in contexts where kernels to be accelerated are intrinsically streaming oriented, the combinatio...
Conference Paper
Dataflow Models of Computation (MoCs) have proven efficient means for modeling computational aspects of Cyber-Physical System (CPS). Over the years, diverse MoCs have been proposed that offer trade-offs between expressivity, conciseness, predictability, and reconfigurability. While being efficient for modeling coarse grain data and task parallelism...
Conference Paper
Because of the growing concern about the energy consumption of embedded devices, the quality of an application is now considered as a new tunable parameter during the implementation phase. Approximations are then deliberately introduced to gain performance. Nevertheless, when implementing an approximate computing technique, quality deteriorations...
Conference Paper
Inexact operators are developed to exploit the tolerance of an application to imprecisions. These operators aim at reducing system energy consumption and memory footprint. In order to integrate the appropriate inexact operators in a complex system, the Quality of Service of the approximate system must be thoroughly studied through simulation. Howev...
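To make the idea concrete, here is a deliberately simple, hypothetical inexact adder (truncating the low bits of both operands) together with a tiny Monte-Carlo error characterization, in the spirit of the simulation-based quality studies mentioned above; it is not one of the operators studied in the paper:

```python
import random

def approx_add(a, b, k=4):
    """Hypothetical inexact adder: bits below position k are ignored, so the
    carry chain in the low part is never computed."""
    mask = ~((1 << k) - 1)
    return (a & mask) + (b & mask)

# Monte-Carlo characterization of the error against the exact sum.
samples = [(random.getrandbits(16), random.getrandbits(16)) for _ in range(10_000)]
errors = [abs((a + b) - approx_add(a, b)) for a, b in samples]
print("mean absolute error:", sum(errors) / len(errors))
print("max absolute error: ", max(errors))
```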
Conference Paper
The widening of the complexity-productivity gap witnessed in recent years is becoming unaffordable from the application development point of view. New design methods try to automate most designers' tasks in order to bridge this gap. In addition, new Models of Computation (MoC), such as dataflow-based ones, ease the expression of parallelism within app...
Article
Full-text available
In recent years, the Electronic Design Automation (EDA) community shifted spotlights from performance to energy efficiency. Consequently, energy consumption becomes a key criterion to take into consideration during Design Space Exploration (DSE). Finding a trade-off between energy consumption and performance early in the design flow in order to sat...
Conference Paper
Embedded manycore architectures offer energy-efficient super-computing capabilities but are notoriously difficult to program with traditional parallel Application Programming Interfaces (APIs). To address this challenge, dataflow Models of Computation (MoCs) are increasingly used as their high-level of abstraction eases the automation of computatio...
Article
Current trends in high performance and embedded computing include design of increasingly complex hardware architectures with high parallelism, heterogeneous processing elements and non-uniform communication resources. In order to take hardware and software design decisions, early evaluations of the system non-functional properties are needed. These...
Conference Paper
Dataflow models of computation have early on been acknowledged as an attractive methodology to describe parallel algorithms, hence they have become highly relevant for programming in the current multicore processor era. While several frameworks provide tools to create dataflow descriptions of algorithms, generating parallel code for programmable pr...
Article
The approximate computing paradigm provides methods to optimize algorithms while considering both application quality of service and computational complexity. Approximate computing can be applied at different levels of abstraction, from algorithm level to application level. Approximate computing at algorithm level reduces the computational complexi...
Conference Paper
Full-text available
The Interface-Based Synchronous Dataflow (IBSDF) Model of Computation (MoC) is a hierarchical extension of the well-known Synchronous Dataflow (SDF) MoC. The IBSDF model extends the semantics of the SDF model by introducing a graph composition mechanism based on hierarchical interfaces. The IBSDF model also introduces execution rules to ease the ana...
Article
This paper presents a study of the parallelism of a Principal Component Analysis (PCA) algorithm and its adaptation to a manycore MPPA (Massively Parallel Processor Array) architecture, which gathers 256 cores distributed among 16 clusters. This study focuses on porting hyperspectral image processing into manycore platforms by optimizing their proc...
Conference Paper
Full-text available
Synchronous Dataflow (SDF) is the most commonly used dataflow Model of Computation (MoC) for the specification of Digital Signal Processing (DSP) systems. The Interface-Based SDF (IBSDF) model extends the semantics of the SDF model by introducing a graph composition mechanism based on hierarchical interfaces. Computing the throughput of an applicat...
Article
This article introduces a new technique to minimize the memory footprints of Digital Signal Processing (DSP) applications specified with Synchronous Dataflow (SDF) graphs and implemented on shared-memory Multiprocessor System-on-Chip (MPSoCs). In addition to the SDF specification, which captures data dependencies between coarse-grained tasks called...
Article
Massively Parallel Multi-Processors System-on-Chip (MP2SoC) architectures require efficient programming models and tools to deal with the massive parallelism present within the architecture. In this paper, we propose a tool which automates the generation of the System-Level Architecture Model (S-LAM) from a Unified Modeling Language-based (UML) mod...
Technical Report
Full-text available
The current trend in high performance and embedded computing consists of designing increasingly complex heterogeneous hardware architectures with non-uniform communication resources. In order to take hardware and software design decisions, early evaluations of the system non-functional properties are needed. These evaluations of system efficiency r...
Conference Paper
For many years, following the ever-increasing number of transistors per chip, advances in computer architecture mostly consisted of adding complex mechanisms to mono-core processors to improve their computing performance. In the last decade, the continuous growth of computing performance was supported by the introduction of multi-core architectures...
Article
Full-text available
The majority of applications, ranging from low-complexity ones to very multifaceted entities requiring dedicated hardware accelerators, are well suited to Multiprocessor Systems-on-Chips (MPSoCs). It is critical to understand the general characteristics of a given embedded application: its behavior and its requirements in terms of MPSoC resour...
Conference Paper
Full-text available
This paper introduces and assesses a new technique to minimize the memory footprints of Digital Signal Processing (DSP) applications specified with Synchronous Dataflow (SDF) graphs and implemented on shared-memory Multiprocessor Systems-on-Chips (MP-SoCs). In addition to the SDF specification, which captures data dependencies between coarse-graine...
Article
This paper details the design and implementation performance of an efficient generator of chaotic, discrete, integer-valued sequences. The generator exhibits orbits with very large lengths compared to those given in the literature. It is implemented in C and parallelized using the Parameterized and Interfaced Synchronous Dataflow Model of...
Article
Full-text available
This paper proposes and validates a new methodology to facilitate the analysis of modern data-intensive applications. A major part of handling the processing needs of these applications consists in using the appropriate Model-of-Computation (MoC) which guarantees accurate performance estimations. Our methodology introduces one major contribution th...
Article
Parallelization of Digital Signal Processing (DSP) software is an important trend in Multiprocessor System-on-Chip (MPSoC) implementation. The performance of DSP systems composed of parallelized computations depends on the scheduling technique, which must in general allocate computation and communication resources for competing tasks, and ensure th...
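For readers unfamiliar with the problem, the sketch below shows the simplest flavor of list scheduling on a made-up four-task graph and two processing elements; it is a generic illustration of the resource-allocation problem, not the scheduling technique evaluated in the article:

```python
# Greedy list scheduling: each task is mapped to the processing element (PE)
# that can start it the earliest, once all of its predecessors have finished.
durations = {"A": 3, "B": 2, "C": 4, "D": 1}
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
pe_ready = [0, 0]                      # earliest free time of each of the two PEs
finish = {}
for task in ("A", "B", "C", "D"):      # a precedence-compatible order
    ready = max((finish[p] for p in deps[task]), default=0)
    pe = min(range(len(pe_ready)), key=lambda i: max(pe_ready[i], ready))
    start = max(pe_ready[pe], ready)
    finish[task] = pe_ready[pe] = start + durations[task]
    print(f"{task} -> PE{pe} [{start}, {finish[task]})")
```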
Article
The development of embedded Digital Signal Processing (DSP) applications for Multiprocessor Systems-on-Chips (MPSoCs) is a complex task requiring the consideration of many constraints including real-time requirements, power consumption restrictions, and limited hardware resources. To satisfy these constraints, it is critical to understand the gener...
Conference Paper
Full-text available
The high performance Digital Signal Processors (DSPs) currently manufactured by Texas Instruments are heterogeneous multiprocessor architectures. Programming these architectures is a complex task often reserved to specialized engineers because the bottlenecks of both the algorithm and the architecture need to be deeply understood in order to obt...
Data
Full-text available
PREESM Rapid Prototyping Framework: PREESM is an open-source rapid prototyping tool used for research, development, and educational purposes on Multiprocessor Systems on Chip (MPSoCs). Poster presented at HiPEAC 2014 in Vienna.
Conference Paper
Full-text available
Parallelization of Digital Signal Processing (DSP) software is an important trend for MultiProcessor System-on-Chip (MPSoC) implementation. The performance of DSP systems composed of parallelized computations depends on the scheduling technique, which must in general allocate computation and communication resources for competing tasks, and ensure t...
Conference Paper
Full-text available
Dataflow models of computation are widely used for the specification, analysis, and optimization of Digital Signal Processing (DSP) applications. In this paper a new meta-model called PiMM is introduced to address the important challenge of managing dynamics in DSP-oriented representations. PiMM extends a dataflow model by introducing an explicit p...
Conference Paper
Full-text available
This paper introduces and assesses a new method to allocate memory for applications implemented on a shared memory Multiprocessor System-on-Chip (MPSoC). This method first consists of deriving, from a Synchronous Dataflow (SDF) algorithm description, a Memory Exclusion Graph (MEG) that models all the memory objects of the application and their allo...
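As a rough, hypothetical sketch of what allocating memory from an exclusion graph can look like (a naive first-fit heuristic on invented object sizes, not the method introduced in the paper), two memory objects may be given overlapping address ranges only if no exclusion edge links them:

```python
def first_fit(sizes, exclusions):
    """sizes: {object: byte size}; exclusions: set of frozenset({a, b}) pairs,
    meaning a and b are alive simultaneously and must not overlap."""
    offsets = {}
    for obj, size in sorted(sizes.items(), key=lambda kv: -kv[1]):  # largest first
        candidate, placed = 0, False
        while not placed:
            placed = True
            for other, off in offsets.items():
                if frozenset({obj, other}) in exclusions:
                    # Overlap with an excluding object: move past it and retry.
                    if candidate < off + sizes[other] and off < candidate + size:
                        candidate = off + sizes[other]
                        placed = False
        offsets[obj] = candidate
    return offsets

sizes = {"A": 64, "B": 32, "C": 32}
exclusions = {frozenset({"A", "B"})}   # only A and B exclude each other
print(first_fit(sizes, exclusions))    # {'A': 0, 'B': 64, 'C': 0}
```

The total footprint is the largest offset plus the corresponding size; here C reuses A's memory because no exclusion forbids it.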
Conference Paper
Full-text available
A chain of three state-of-the-art tools is demonstrated to generate efficient code for Multi-Processor Systems-on-Chips (MPSoCs) from a high-level dataflow language. The experimental platform is based on a 5-core Texas Instruments OMAP4 heterogeneous MPSoC running an image processing application.
Conference Paper
Full-text available
This paper presents an application analysis technique to define the boundary of shared memory requirements of a Multiprocessor System-on-Chip (MPSoC) in early stages of development. This technique is part of a rapid prototyping process and is based on the analysis of a hierarchical Synchronous Dataflow (SDF) graph description of the system applica...

Network

Cited By

Projects

Projects (3)
Project
http://www.cerbero-h2020.eu/ The Cross-layer modEl-based fRamework for multi-oBjective dEsign of Reconfigurable systems in unceRtain hybRid envirOnments (CERBERO) project aims at developing a design environment for CPS based on two pillars: a cross-layer model-based approach to describe, optimize, and analyze the system and all its different views concurrently; and an advanced adaptivity support based on a multi-layer autonomous engine. To overcome the limits of current tools, CERBERO provides: libraries of generic Key Performance Indicators for reconfigurable CPSs in hybrid/uncertain environments; novel formal and simulation-based methods; and a continuous design environment guaranteeing early-stage analysis and optimization of functional and non-functional requirements, including energy, reliability, and security.
Archived project
ANR Compas