Rémi Boutteau
Université de Rouen | UR · Institut Universitaire de Technologie de Rouen (IUT)

Professor

About

68 Publications
31,937 Reads
684 Citations
Introduction
Rémi Boutteau received his PhD degree from the University of Rouen Normandy in 2010 for work on computer vision. Until 2020, he was an Associate Professor at ESIGELEC. In 2018, he obtained the HDR from the University of Rouen Normandy for his research on autonomous vehicles. Since 2020, he has been a Full Professor at the University of Rouen Normandy, within the STI team at the LITIS Lab. His research interests are perception, localization, and computer vision dedicated to autonomous vehicles.
Additional affiliations
September 2020 - present
Université de Rouen
Position
  • Professor (Full)
September 2009 - August 2020
ESIGELEC
Position
  • Professor (Associate)
September 2006 - August 2009
IPSIS - IT Link Group
Position
  • Engineer
Education
September 2018 - December 2018
Université de Rouen
Field of study
  • Automatic Control, Signal Processing, Production Engineering, Robotics
September 2006 - April 2010
Université de Rouen
Field of study
  • Computer Vision
September 2005 - August 2006
Université de Lille
Field of study
  • AG2I

Publications (68)
Preprint
Full-text available
About a decade ago, the idea of cooperation was introduced to self-driving with the aim of enhancing safety in dangerous places such as intersections. Infrastructure-based cooperative systems emerged very recently, bringing a new point of view of the scene and more computation power. In this paper, we want to go beyond the framework presented in t...
Article
Full-text available
For smart mobility and autonomous vehicles (AV), it is necessary to have a very precise perception of the environment to guarantee reliable decision-making, and to be able to extend the results obtained for the road sector to other areas such as rail. To this end, we introduce a new single-stage monocular real-time 3D object detection convolutiona...
Article
Full-text available
A robust visual understanding of complex urban environments using passive optical sensors is an onerous and essential task for autonomous navigation. The problem is heavily characterized by the quality of the available dataset and the number of instances it includes. Regardless of the benchmark results of perception algorithms, a model would only b...
Article
Full-text available
The idea of cooperation was introduced to self-driving cars about a decade ago with the aim of reducing the occlusion caused by other users or the scene. More recently, research efforts have turned toward cooperative infrastructure, bringing a new point of view as well as more processing power. This paper lies in this new field, providi...
Article
On-road behavior analysis is a key task required for robust autonomous vehicles. Unlike traditional perception tasks, this paper aims to achieve a high-level understanding of road agent activities. This allows better operation under challenging contexts to enable better interaction and decision-making in such complex environments. In this paper, we...
Article
Full-text available
Autonomous navigation is a key defining feature that allows agricultural robots to perform automated farming tasks. Global navigation satellite system (GNSS) technology is providing autonomous navigation solutions for current commercial robotic platforms that can achieve centimeter-level accuracy when real-time kinematic (RTK) corrections are avail...
Article
Full-text available
For smart mobility, autonomous vehicles, and advanced driver-assistance systems (ADASs), perception of the environment is an important task in scene analysis and understanding. Better perception of the environment allows for enhanced decision making, which, in turn, enables very high-precision actions. To this end, we introduce in this work a new r...
Article
Full-text available
Improving performance and safety conditions in industrial sites remains a key objective for most companies. Currently, the main goal is to be able to dynamically locate both people and goods on the site. Security and access regulation to restricted areas are often ensured by doors or badge barriers and those have several issues when faced with peop...
Article
Full-text available
In this paper, we propose a vision-based system to localize mobile robots in large industrial environments. Our contributions rely on the use of fisheye cameras to have a large field of view and the associated algorithms. We propose several calibration methods and evaluate them with a ground-truth obtained by a motion capture system. In these exper...
Article
Full-text available
Many applications in the context of Industry 4.0 require precise localization. However, indoor localization remains an open problem, especially in complex environments such as industrial environments. In recent years, we have seen the emergence of Ultra WideBand (UWB) localization systems. The aim of this article is to evaluate the performance of a...
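The UWB evaluations above come down to range-based positioning compared against a motion-capture ground truth. As a point of reference only, here is a minimal range-based positioning sketch; the anchor layout, measured ranges and the Gauss-Newton refinement are illustrative assumptions, not the system evaluated in the article.

```python
import numpy as np

# Hypothetical anchor layout (metres) and measured ranges; purely illustrative,
# not the UWB setup evaluated in the article.
anchors = np.array([[0.0, 0.0, 2.5],
                    [8.0, 0.0, 2.2],
                    [8.0, 6.0, 2.6],
                    [0.0, 6.0, 1.0]])
ranges = np.array([3.9, 5.5, 6.6, 5.0])

def trilaterate(anchors, ranges, iters=20):
    """Gauss-Newton least-squares estimate of the tag position from anchor ranges."""
    p = anchors.mean(axis=0)                 # initial guess: centroid of the anchors
    for _ in range(iters):
        diff = p - anchors                   # (N, 3) vectors anchor -> tag
        dist = np.linalg.norm(diff, axis=1)
        residual = dist - ranges             # predicted minus measured range
        J = diff / dist[:, None]             # Jacobian of the distances w.r.t. p
        step, *_ = np.linalg.lstsq(J, -residual, rcond=None)
        p = p + step
    return p

estimate = trilaterate(anchors, ranges)
print("estimated tag position:", estimate)
# The evaluation then compares such estimates to the motion-capture ground truth.
```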
Article
Full-text available
Many applications in the context of Industry 4.0 require precise localization. However, indoor localization remains an open problem, especially in complex environments such as industrial environments. In recent years, we have seen the emergence of UltraWideBand (UWB) localization systems. The aim of this article is to evaluate the performance of a...
Article
Full-text available
In core computer vision tasks, we have witnessed significant advances in object detection, localisation and tracking. However, there are currently no methods to detect, localize and track objects in road environments while taking into account real-time constraints. In this paper, our objective is to develop a deep learning multi object detection and...
Article
Full-text available
Vision systems that provide a 360-degree view are becoming increasingly common in today’s vehicles. These systems are generally composed of several cameras pointing in different directions and rigidly connected to each other. The purpose of these systems is to provide driver assistance in the form of a display, for example by building a Bird’s eye...
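Such surround-view displays are typically built by warping each camera onto the ground plane with a homography. Below is a minimal sketch of that ground-plane homography; the intrinsics, rotation and camera height are hypothetical values, not the calibration of the rig described above.

```python
import numpy as np

# Ground-plane homography sketch.  Intrinsics K, rotation R and camera height
# are made-up values, not the calibration of the vehicle rig above.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                        # camera-to-ground rotation (assumed identity)
t = np.array([0.0, 0.0, 1.5])        # camera 1.5 m away from the ground plane

# For points on the plane Z=0, the full projection collapses to a homography
# built from the first two columns of R and the translation.
H = K @ np.column_stack((R[:, 0], R[:, 1], t))

def ground_to_image(x_m, y_m):
    """Project a ground-plane point (in metres) to pixel coordinates."""
    p = H @ np.array([x_m, y_m, 1.0])
    return p[:2] / p[2]

# Warping each camera image with the inverse of its own H onto a common
# ground grid is what produces the bird's eye view.
print(ground_to_image(2.0, 0.5))
```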
Conference Paper
Full-text available
Common approaches for vehicle localization propose to match LiDAR data or 2D features from cameras to a prior 3D LiDAR map. Yet, these methods require both heavy computational power often provided by GPU, and a first rough localization estimate via GNSS to be performed online. Moreover, storing and accessing 3D dense LiDAR maps can be challenging i...
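Once 2D image features have been matched to points of a prior 3D map, the camera pose can be recovered with a standard PnP solver. The sketch below uses OpenCV's generic RANSAC PnP on synthetic correspondences, as an illustration rather than the authors' pipeline.

```python
import numpy as np
import cv2

# Synthetic 2D-3D correspondences; in the paper these would come from matching
# image features to the prior LiDAR map.
rng = np.random.default_rng(0)
object_pts = rng.uniform(0.0, 10.0, size=(50, 3)).astype(np.float32)
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]], dtype=np.float32)
rvec_true = np.array([0.10, -0.05, 0.02], dtype=np.float32)   # ground-truth rotation
tvec_true = np.array([0.50,  0.20, 3.00], dtype=np.float32)   # ground-truth translation
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)

# Robust pose recovery from the correspondences (OpenCV's generic solver).
ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_pts, image_pts, K, None)
print(ok, rvec.ravel(), tvec.ravel())
```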
Conference Paper
Full-text available
Applications in the context of Industry 4.0 need precise localization. Indoor localization remains an open problem. Among the possible solutions, we have seen the emergence of Ultra Wide Band (UWB) methods. The aim of this article is to evaluate a UWB system in order to estimate the position of a person in indoor environments. We have evaluated an...
Poster
Full-text available
The aim of this poster is to present the performance of a UWB system to estimate the position of a person moving in an indoor environment. To do so, we have implemented an experimental protocol to evaluate the accuracy of the UWB system both statically and dynamically. The UWB system is compared to a ground truth obtained by a motion capture system wit...
Article
Full-text available
In this paper, we address the problem of camera pose estimation using 2D and 3D line features, also known as PnL (Perspective-n-Line), with a known vertical direction. The minimal number of line correspondences required to estimate the complete camera pose is 3 (P3L) in the general case, yielding a minimum of 8 possible solutions. Prior knowledge...
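The key idea in this setting is that a known vertical direction removes two rotational degrees of freedom, leaving a single unknown angle about the vertical. A minimal sketch of that reduction follows; the vertical measurement and the yaw-only parameterization are illustrative assumptions, not the paper's PnL solver.

```python
import numpy as np

def align_to_vertical(g):
    """Rotation that maps the measured vertical direction g onto the z-axis.
    (Rodrigues formula; assumes g is not opposite to z.)"""
    g = g / np.linalg.norm(g)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(g, z)
    c = float(np.dot(g, z))
    V = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + V + V @ V / (1.0 + c)

def yaw(theta):
    """Rotation by theta about the (now common) vertical axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s, c, 0.0],
                     [0.0, 0.0, 1.0]])

# With the camera frame pre-aligned to the vertical (e.g. from an IMU reading,
# here a made-up value), the unknown rotation is a pure yaw: one parameter
# instead of three, which is what shrinks the minimal problem.
R_align = align_to_vertical(np.array([0.05, -0.02, 0.998]))
R_cam = R_align.T @ yaw(0.3)   # example: undo the alignment after solving for theta
```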
Conference Paper
Full-text available
This paper presents the overall architecture of the VIKINGS robot, one of the five contenders in the ARGOS challenge and winner of two competitions. The VIKINGS robot is an autonomous or remote-operated robot for the inspection of oil and gas sites and is able to assess various petrochemical risks based on embedded sensors and processing. As descri...
Article
Full-text available
This paper presents the overall architecture of the VIKINGS robot, one of the five contenders in the ARGOS challenge and winner of two competitions. The VIKINGS robot is an autonomous or remote-operated robot for inspection of oil and gas sites and is able to assess various petrochemical risks based on embedded sensors and processing. As described...
Chapter
Full-text available
Egomotion estimation is a fundamental issue in structure from motion and autonomous navigation for mobile robots. Several camera motion estimation methods from a variable number of image correspondences have been proposed. Seven- and eight-point methods were first designed to estimate the fundamental matrix. Five-point methods represent...
Article
Full-text available
In this article, we introduce a fast, accurate and invariant method for RGB-D based human action recognition using a Hierarchical Kinematic Covariance (HKC) descriptor. Recently, non-singular covariance matrices of pattern features, which are elements of the space of Symmetric Positive Definite (SPD) matrices, have been proven to be very efficient d...
Article
Full-text available
Over the last few decades, action recognition applications have attracted the growing interest of researchers, especially with the advent of RGB-D cameras. These applications increasingly require fast processing. Therefore, it becomes important to include the computational latency in the evaluation criteria. In this paper, we propose a novel human...
Conference Paper
Full-text available
Laser remote sensing (Lidar) is an increasingly used technology, in particular for the perception and localization functions required for autonomous driving. Lidar data acquisition must be coupled with a measurement of the vehicle's motion by an inertial measurement unit. Since these sensors are not designed to work together...
Article
Full-text available
In this paper, we address the problem of vehicle localization in urban environments. We rely on visual odometry, calculating the incremental motion, to track the position of the vehicle and on place recognition to correct the accumulated drift of visual odometry, whenever a location is recognized. The algorithm used as a place recognition module is...
Article
Full-text available
This paper proposes a basic structured light system for pose estimation. It consists of a circular laser pattern and a camera rigidly attached to the laser source. We develop a geometric model that allows the pose of the system to be efficiently estimated at scale, relative to a reference plane onto which the pattern is projected. Three different ro...
Article
Full-text available
Motion capture setups are used in numerous fields. Studies based on motion capture data can be found in biomechanical, sport or animal science. Clinical science studies include gait analysis as well as balance, posture and motor control. Robotic applications encompass object tracking. Everyday applications include entertainment or augmented re...
Article
Full-text available
In this paper, we investigate the impact of different kinds of car trajectories on LiDAR scans. In fact, LiDAR scanning speeds are considerably slower than car speeds, introducing distortions. We propose a method to overcome this issue as well as new metrics based on CAN bus data. Our results suggest that the vehicle trajectory should be taken into a...
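The distortion in question comes from the fact that the vehicle moves while the LiDAR completes a revolution. Below is a minimal motion-compensation ("deskewing") sketch under a constant-velocity assumption; the speed, yaw rate and per-point timestamps are made up rather than taken from CAN bus data.

```python
import numpy as np

# Minimal scan deskewing sketch, 2D and constant-velocity for brevity.
v = 15.0            # forward speed (m/s), hypothetical
yaw_rate = 0.1      # rad/s, hypothetical
scan_period = 0.1   # duration of one LiDAR revolution (s)

def deskew(points_xy, timestamps):
    """Express every point in the sensor frame at the end of the scan."""
    corrected = []
    for p, t in zip(points_xy, timestamps):
        dt = scan_period - t                     # time left until scan end
        dyaw = yaw_rate * dt                     # vehicle rotation during dt
        c, s = np.cos(dyaw), np.sin(dyaw)
        R = np.array([[c, -s], [s, c]])
        trans = np.array([v * dt, 0.0])          # vehicle translation during dt
        corrected.append(R.T @ (p - trans))      # change of frame to scan end
    return np.array(corrected)

pts = np.array([[10.0, 2.0], [5.0, -1.0]])       # raw points (sensor frame at time t)
ts = np.array([0.00, 0.05])                      # acquisition time of each point
print(deskew(pts, ts))
```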
Article
Full-text available
In this paper, we propose a LiDAR-based robot localization method in a complex oil and gas environment. Localization is achieved in six degrees of freedom (DoF) thanks to a particle filter framework. A new time-efficient likelihood function, based on a precalculated three-dimensional likelihood field, is introduced. Experiments are carried out in r...
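A precalculated likelihood field means that weighting a particle reduces to coordinate lookups rather than nearest-neighbour searches. Here is a 2D sketch of that weighting step, with a placeholder distance field and sensor model; these are illustrative assumptions, not the paper's 3D implementation.

```python
import numpy as np

# 2D sketch of likelihood-field weighting.  The distance field, resolution and
# noise model are placeholders, not the precalculated 3D field of the paper.
resolution = 0.1                               # metres per cell
dist_field = np.random.rand(200, 200)          # distance to nearest obstacle (m)
sigma = 0.2                                    # range noise standard deviation (m)

def particle_weight(pose, scan_xy):
    """Likelihood of one particle (x, y, theta) given scan endpoints in the sensor frame."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    pts = scan_xy @ R.T + np.array([x, y])     # endpoints expressed in the map frame
    idx = np.clip((pts / resolution).astype(int), 0, dist_field.shape[0] - 1)
    d = dist_field[idx[:, 0], idx[:, 1]]       # constant-time lookups, no KD-tree search
    return float(np.exp(-0.5 * (d / sigma) ** 2).prod())

scan = np.random.rand(50, 2) * 5.0             # fake scan endpoints
print(particle_weight((10.0, 10.0, 0.3), scan))
```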
Conference Paper
Full-text available
Vehicle localization is a fundamental issue in autonomous navigation that has been extensively studied by the Robotics community. An important paradigm for vehicle localization is based on visual place recognition, which relies on learning a database and then consecutively trying to find matches between this database and the current visual input. An i...
Article
Full-text available
Long-term place recognition in outdoor environments remains a challenge due to high appearance changes in the environment. The problem becomes even more difficult when the matching between two scenes has to be made with information coming from different visual sources, particularly with different spectral ranges. For instance, an infrared camera is...
Article
Full-text available
In this paper, we explore the different minimal solutions for egomotion estimation of a camera based on homography knowing the gravity vector between calibrated images. These solutions depend on the prior knowledge about the reference plane used by the homography. We then demonstrate that the number of matched points can vary from two to three and...
Conference Paper
Full-text available
Egomotion estimation is a fundamental issue in structure from motion and autonomous navigation for mobile robots. Several camera motion estimation methods from a variable number of image correspondences have been proposed. Five-point methods represent the minimal number of required correspondences to estimate the essential matrix, raised spe...
Conference Paper
Full-text available
With the availability of the recent human skeleton extraction algorithm introduced by Shotton et al. [1], interest in skeleton-based action recognition methods has been renewed. Despite the importance of the low-latency aspect in applications, it can be noted that the majority of recent approaches have not been evaluated in terms of computationa...
Conference Paper
Full-text available
Classically, action recognition methods are evaluated using one main criterion: the recognition rate. However, a method whose computation time is high can only be used for a very limited number of applications. In this paper, we present a new approach for action recognition...
Conference Paper
Full-text available
In this work, we evaluate the impact of different road vehicle trajectories on the point clouds of an on-board lidar. Indeed, lidar scanning frequencies are low compared to vehicle speeds. We propose a method to overcome this problem, as well as comparison metrics. Our results show that it is necessary...
Conference Paper
Full-text available
In this paper, we propose an algorithm for estimating the absolute pose of a vehicle using visual data. Our method works in two steps: first we construct a visual map of geolocalized landmarks, then we localize the vehicle using this map. The main advantages of our method are that the localization of the vehicle is absolute and that it requires onl...
Conference Paper
Full-text available
This paper presents a novel descriptor based on skeleton information provided by RGB-D videos for human action recognition. These features are obtained by considering the motion as continuous trajectories of skeleton joints. From the discrete skeleton joint positions, a cubic-spline interpolation is applied to joint position, velocity...
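The trajectory construction can be illustrated with an off-the-shelf spline: fitting a cubic spline to discrete joint positions yields continuous position, velocity and acceleration curves. A small sketch with synthetic joint data (not an RGB-D dataset), using SciPy's CubicSpline:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic joint trajectory (not RGB-D data): 30 noisy 3D positions of one joint.
t = np.linspace(0.0, 1.0, 30)                               # frame timestamps (s)
joint_xyz = np.cumsum(np.random.randn(30, 3) * 0.01, axis=0)

spline = CubicSpline(t, joint_xyz, axis=0)                  # one spline per coordinate
t_fine = np.linspace(0.0, 1.0, 300)
position = spline(t_fine)                                   # continuous positions
velocity = spline(t_fine, 1)                                # first derivative
acceleration = spline(t_fine, 2)                            # second derivative
print(position.shape, velocity.shape, acceleration.shape)   # (300, 3) each
```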
Conference Paper
Full-text available
In this paper we present an unsynchronized camera network able to estimate the motion and the structure with accurate absolute scale. The proposed algorithm requires at least three frames: two frames from one camera and a frame from a neighbouring camera. The relative camera poses are estimated with classical Structure-from-Motion and the absolute...
Conference Paper
Full-text available
In this paper, we propose a lidar-based localization solution for complex industrial environments using particle filtering. Our 6-degree-of-freedom (DoF) localization method uses a new likelihood function that is computationally efficient thanks to a precomputed 3D likelihood field. In the end, we obtain a localization...
Article
Full-text available
This paper presents a visual odometry method for the autonomous navigation of a wheelchair using an omnidirectional vision sensor. The system is composed of a catadioptric monocular vision sensor mounted on the upper part of the wheelchair. The objective is to use the vision information to estimate the...
Conference Paper
Full-text available
We propose a vision system based on a network of unsynchronized cameras that estimates the motion of a vehicle and the structure of the 3D environment at absolute scale. The proposed algorithm requires at least three images taken by at least two cameras. The relative camera poses are estimated by classical...
Conference Paper
Full-text available
In this paper, we tackle the problem of map estimation from a small set of vehicular GPS traces collected from low-cost devices. Contrary to existing works, we rely only on GPS information. First, we propose a fast implementation of Kalman filtering of spline-based road modeling. Our approach demonstrates a significant boost of the computation sp...