Mathieu Cocheteux

Verified: Mathieu verified their affiliation via an institutional email.
  • Master of Science
  • PhD Student at University of Technology of Compiègne

Currently focusing on deep learning-based sensor calibration for autonomous driving.

About

5 Publications · 159 Reads · 7 Citations
Introduction
I'm Mathieu Cocheteux, a PhD candidate in Computer Science at Université de technologie de Compiègne, focusing on sensor calibration and autonomous systems. My research has led to publications in top conferences such as WACV and CVPR, and to an international patent. I have gained experience through roles at Motional and Toyota Motor Europe, and as a researcher at my university.
Current institution
University of Technology of Compiègne
Current position
  • PhD Student
Additional affiliations
June 2022 - December 2022
Motional
Position
  • Research Engineer
April 2021 - September 2021
University of Technology of Compiègne
Position
  • Engineer
July 2020 - January 2021
Toyota Motor Europe
Position
  • Engineer
Education
October 2021 - April 2025
University of Technology of Compiègne
Field of study
  • Computer vision, Artificial intelligence, Intelligent vehicles
February 2016 - March 2021
University of Technology of Compiègne
Field of study
  • Computer Science and Engineering

Publications (5)
Preprint
Accurate sensor calibration is crucial for autonomous systems, yet its uncertainty quantification remains underexplored. We present the first approach to integrate uncertainty awareness into online extrinsic calibration, combining Monte Carlo Dropout with Conformal Prediction to generate prediction intervals with a guaranteed level of coverage. Our...
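
The combination described in this abstract (Monte Carlo Dropout with split Conformal Prediction to obtain prediction intervals with guaranteed marginal coverage) can be illustrated with a minimal sketch. The PyTorch model, synthetic calibration data, and miscoverage level alpha below are illustrative assumptions, not the authors' implementation.

    # Hedged sketch: Monte Carlo Dropout + split conformal prediction for a
    # scalar calibration parameter. Model, data, and alpha are placeholders.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy regressor with dropout so that stochastic forward passes differ.
    model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1))

    def mc_dropout_predict(x, n_samples=50):
        """Mean prediction over stochastic forward passes (dropout kept active)."""
        model.train()  # keep dropout layers active at inference time
        with torch.no_grad():
            preds = torch.stack([model(x) for _ in range(n_samples)])
        return preds.mean(dim=0).squeeze(-1)

    # Synthetic hold-out set used to compute conformal nonconformity scores.
    x_cal, y_cal = torch.randn(200, 8), torch.randn(200)
    scores = (mc_dropout_predict(x_cal) - y_cal).abs()

    # Split-conformal quantile for miscoverage level alpha.
    alpha = 0.1
    n = len(scores)
    q = torch.quantile(scores, min(1.0, (n + 1) * (1 - alpha) / n))

    # Prediction interval for a new input: >= 1 - alpha marginal coverage
    # under exchangeability of calibration and test points.
    x_new = torch.randn(1, 8)
    mu = mc_dropout_predict(x_new)
    lower, upper = mu - q, mu + q
    print(f"interval: [{lower.item():.3f}, {upper.item():.3f}]")

Here the dropout samples provide the point estimate, while the conformal quantile calibrates the interval width on held-out data, which is what gives the stated coverage guarantee.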
Preprint
Full-text available
Despite the increasing interest in enhancing perception systems for autonomous vehicles, the online calibration between event cameras and LiDAR - two sensors pivotal in capturing comprehensive environmental information - remains unexplored. We introduce MULi-Ev, the first online, deep learning-based framework tailored for the extrinsic calibration...
Conference Paper
Full-text available
Camera-LiDAR extrinsic calibration is a critical task for multi-sensor fusion in autonomous systems, such as self-driving vehicles and mobile robots. Traditional techniques often require manual intervention or specific environments, making them labour-intensive and error-prone. Existing deep learning-based self-calibration methods focus on small re...
Preprint
Full-text available
We introduce a novel architecture, UniCal, for Camera-to-LiDAR (C2L) extrinsic calibration which leverages self-attention mechanisms through a Transformer-based backbone network to infer the 6-degree of freedom (DoF) relative transformation between the sensors. Unlike previous methods, UniCal performs an early fusion of the input camera and LiDAR d...
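
As a rough illustration of the early-fusion, Transformer-backbone idea this abstract describes, the following is a hedged PyTorch sketch. Patch sizes, token pooling, and the 6-DoF output parameterization are assumptions for illustration, not UniCal's actual architecture.

    # Hedged sketch of an early-fusion, Transformer-based regressor for a
    # 6-DoF camera-to-LiDAR transform. Dimensions and tokenization are assumed.
    import torch
    import torch.nn as nn

    class EarlyFusionCalibNet(nn.Module):
        def __init__(self, dim=128, depth=4, heads=8, patch=16):
            super().__init__()
            # Camera RGB (3 ch) and a LiDAR depth map (1 ch) are patch-embedded
            # separately, then their tokens are concatenated (early fusion).
            self.cam_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
            self.lidar_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
            encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
            self.backbone = nn.TransformerEncoder(encoder_layer, num_layers=depth)
            self.head = nn.Linear(dim, 6)  # 3 translation + 3 rotation parameters

        def forward(self, image, depth):
            cam_tokens = self.cam_embed(image).flatten(2).transpose(1, 2)      # (B, Nc, dim)
            lidar_tokens = self.lidar_embed(depth).flatten(2).transpose(1, 2)  # (B, Nl, dim)
            tokens = torch.cat([cam_tokens, lidar_tokens], dim=1)              # early fusion
            fused = self.backbone(tokens)                                      # self-attention over both modalities
            return self.head(fused.mean(dim=1))                                # pooled 6-DoF output

    # Example forward pass on random tensors.
    net = EarlyFusionCalibNet()
    pred = net(torch.randn(2, 3, 128, 256), torch.randn(2, 1, 128, 256))
    print(pred.shape)  # torch.Size([2, 6])

The point of the sketch is only that fusing both modalities into one token sequence lets self-attention relate camera and LiDAR features before any regression head is applied.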
