Jerome Leudet’s scientific contributions


Publications (1)


Fig. 1. Projection of point P onto the image plane using the Pinhole Camera Model [15].
Fig. 2. Lens distortions: (a) Barrel distortion with k1 = −0.5, (b) Pincushion distortion with k1 = 0.5, and (c) Tangential distortion with p1 and p2 set to 0.1.
Fig. 3. Distortion effects on a city scene from AILiveSim with different H-FOV settings. (a) 90° H-FOV with k1 = 0.25. (b) 150° H-FOV with k1 = 0.25.
Fig. 5. Normalized pixel-wise errors along top, middle, and bottom horizontal lines for DBC v1, DBC v2, and DBC v3. Each graph shows normalized pixel-wise errors across x-coordinates for all three models.
Fig. 6. Visualization of the distortion and undistortion process. (a) Original line image, (b) Image distorted using true distortion parameters, (c) Undistorted image using predicted parameters from DBC v1, (d) Undistorted image using DBC v2, (e) Undistorted image using DBC v3.
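The distortions shown in Fig. 2 follow the Brown-Conrady lens model referenced throughout. As a minimal sketch (not the paper's implementation), the radial coefficients k1, k2 and tangential coefficients p1, p2 can be applied to normalized image coordinates like so:

```python
def brown_conrady_distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply Brown-Conrady radial (k1, k2) and tangential (p1, p2)
    distortion to a point (x, y) in normalized image coordinates."""
    r2 = x * x + y * y                     # squared radius from principal point
    radial = 1.0 + k1 * r2 + k2 * r2 * r2  # radial scaling factor
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# Barrel distortion (k1 < 0, as in Fig. 2a) pulls points toward the
# image center; pincushion (k1 > 0, Fig. 2b) pushes them outward.
x_d, y_d = brown_conrady_distort(0.5, 0.0, k1=-0.5)
```

With k1 = −0.5 the point (0.5, 0) maps to (0.4375, 0), i.e. toward the center, matching the barrel-distortion case in Fig. 2a.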
Deep-BrownConrady: Prediction of Camera Calibration and Distortion Parameters Using Deep Learning and Synthetic Data
  • Preprint

January 2025


Faiz Muhammad Chaudhry · Jarno Ralli · Jerome Leudet · [...]

This research addresses the challenge of predicting camera calibration and distortion parameters from a single image using deep learning models. Traditional calibration methods require multiple images of a calibration object captured from various orientations, which is often infeasible because publicly available datasets rarely include such images. The main contributions of this work are: (1) demonstrating that a deep learning model trained on a mix of real and synthetic images can accurately predict camera and lens parameters from a single image, and (2) developing a comprehensive synthetic dataset using the AILiveSim simulation platform. This dataset includes variations in focal length and lens distortion parameters, providing a robust foundation for model training and testing. Training relied predominantly on these synthetic images, complemented by a small subset of real images, to explore how well models trained on synthetic data perform calibration on real-world images. A deep learning network based on the ResNet architecture was trained on this synthetic dataset to predict camera calibration parameters following the Brown-Conrady lens model. The ResNet architecture, adapted for regression tasks, predicts the continuous values essential for accurate camera calibration in applications such as autonomous driving, robotics, and augmented reality.

Keywords: camera calibration, distortion, synthetic data, deep learning, residual networks (ResNet), AILiveSim, horizontal field-of-view, principal point, Brown-Conrady model.
