Article

A new calibration model of camera lens distortion


Abstract

Lens distortion is one of the main factors affecting camera calibration. In this paper, a new model of camera lens distortion is presented, according to which lens distortion is governed by the coefficients of radial distortion and a transform from the ideal image plane to the real sensor array plane. The transform is determined by two angular parameters describing the pose of the real sensor array plane with respect to the ideal image plane, and two linear parameters locating the real sensor array with respect to the optical axis. Experiments show that the new model corrects lens distortion about as well as the conventional model comprising radial, decentering, and prism distortion. Compared with the conventional model, the new model has fewer parameters to be calibrated and a more explicit physical meaning.
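For intuition, the structure of the proposed model can be sketched in code: radial distortion applied on the ideal image plane, followed by a transform onto a tilted, shifted sensor plane. This is an illustrative reconstruction only, with assumed parameter names (k1, k2, alpha, beta, dx, dy); the paper's exact equations may differ.

```python
import numpy as np

def distort_point(x, y, k1, k2, alpha, beta, dx, dy, f=1.0):
    """Sketch of a radial-plus-sensor-pose distortion model (hypothetical
    parameterization): k1, k2 are radial coefficients; alpha, beta are the
    two tilt angles of the real sensor plane; dx, dy locate the sensor
    array relative to the optical axis."""
    # 1) Radial distortion on the ideal image plane.
    r2 = x * x + y * y
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    xd, yd = x * s, y * s

    # 2) Tilt: rotate the ideal plane by alpha about x and beta about y,
    #    then re-project the point perspectively onto the sensor plane.
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(alpha), -np.sin(alpha)],
                   [0.0, np.sin(alpha), np.cos(alpha)]])
    Ry = np.array([[np.cos(beta), 0.0, np.sin(beta)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(beta), 0.0, np.cos(beta)]])
    p = Ry @ Rx @ np.array([xd, yd, f])
    xs, ys = f * p[0] / p[2], f * p[1] / p[2]

    # 3) Shift: the two linear parameters place the sensor array
    #    relative to the optical axis.
    return xs + dx, ys + dy
```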


... In response to the challenges outlined above and to accomplish remote monitoring tasks, a method for measuring water surface velocity (WSV) based on video imagery has emerged in recent years. This approach has garnered considerable attention. ... The latter issue makes orthorectification a crucial step. When lens distortion is negligible, the pinhole camera model can be employed. ...
... $\min_{R,t}\sum_i \lVert p_i - \mathrm{project}(P_i, R, t, K)\rVert^2$ (3) ...
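Equation (3) is the standard reprojection-error objective over the pose (R, t). A minimal sketch of minimizing it with SciPy, assuming a 6-vector pose (Rodrigues rotation plus translation) and using OpenCV's projectPoints as the project(·) operator:

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def reprojection_residuals(pose, P_world, p_obs, K):
    """Stacked residuals p_i - project(P_i, R, t, K) from Eq. (3).
    pose is a hypothetical 6-vector: [Rodrigues rotation | translation]."""
    rvec, tvec = pose[:3], pose[3:]
    proj, _ = cv2.projectPoints(P_world, rvec, tvec, K, None)
    return (proj.reshape(-1, 2) - p_obs).ravel()

# Usage sketch (P_world: Nx3 world points, p_obs: Nx2 image points,
# K: 3x3 intrinsics, all assumed to come from elsewhere):
# res = least_squares(reprojection_residuals, np.zeros(6),
#                     args=(P_world, p_obs, K))
# R, _ = cv2.Rodrigues(res.x[:3])
```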
... In addition, a comprehensive image distortion correction model should also include tangential distortion. The most commonly used method for addressing tangential distortion is the Brown-Conrady model [16]. x̂ and ŷ are the image coordinates after radial correction, while k1, k2, and k3 are parameters to be determined, which can be calculated during camera calibration. ...
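The Brown-Conrady correction mentioned here has a widely used generic form; the following is a short sketch of that standard form (not the cited paper's code), with radial coefficients k1-k3 and tangential coefficients p1, p2 acting on normalized image coordinates:

```python
def brown_conrady_distort(x, y, k1, k2, k3, p1, p2):
    """Standard Brown-Conrady model: radial terms k1..k3 plus
    tangential (decentering) terms p1, p2."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_tan = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_tan = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x * radial + x_tan, y * radial + y_tan
```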
Article
Full-text available
Accurate assessment of water surface velocity (WSV) is essential for flood prevention, disaster mitigation, and erosion control within hydrological monitoring. Existing image-based velocimetry techniques largely depend on correlation principles, requiring users to input and adjust parameters to achieve reliable results, which poses challenges for users lacking relevant expertise. This study presents RivVideoFlow, a user-friendly, rapid, and precise method for WSV. RivVideoFlow combines two-dimensional and three-dimensional orthorectification based on Ground Control Points (GCPs) with a deep learning-based multi-frame optical flow estimation algorithm named VideoFlow, which integrates temporal cues. The orthorectification process employs a homography matrix to convert images from various angles into a top-down view, aligning the image coordinates with actual geographical coordinates. VideoFlow achieves superior accuracy and strong dataset generalization compared to two-frame RAFT models due to its more effective capture of flow velocity continuity over time, leading to enhanced stability in velocity measurements. The algorithm has been validated on a flood simulation experimental platform, in outdoor settings, and with synthetic river videos. Results demonstrate that RivVideoFlow can robustly estimate surface velocity under various camera perspectives, enabling continuous real-time dynamic measurement of the entire flow field. Moreover, RivVideoFlow has demonstrated superior performance in low, medium, and high flow velocity scenarios, especially in high-velocity conditions where it achieves high measurement precision. This method provides a more effective solution for hydrological monitoring.
... In the original pinhole model [39], the camera is modeled by a set of intrinsic parameters (focal length, principal point, axis skew), and its position and orientation are expressed by the extrinsic or exterior parameters (rotation matrix and translation vector). However, subsequent works have improved the model by including lens distortion in the internal camera parameters (interior orientation in photogrammetry) [39][40][41][42]. The intrinsic parameters are responsible for transforming the 3D points of the camera reference system, or camera coordinates, into 2D points of the displayed image. ...
... However, it is not valid for modeling real cameras. To accurately represent a real camera, lens distortion must be included in the pinhole model [39][40][41][42]. Brown [40] divided the distortion of a lens into radial distortion and tangential distortion. ...
... On the other hand, the tangential distortion is characterized by three coefficients: p1, p2 and p3. In practice, only the first two terms of tangential distortion are considered, as the remaining terms are usually negligible [42]. Thus, for a more comprehensive camera model, the distorted points can be estimated by the following empirical equations [37]: ...
Article
Full-text available
Camera calibration is necessary for many machine vision applications. The calibration methods are based on linear or non-linear optimization techniques that aim to find the best estimate of the camera parameters. One of the most commonly used methods in computer vision for the calibration of intrinsic camera parameters and lens distortion (interior orientation) is Zhang’s method. Additionally, the uncertainty of the camera parameters is normally estimated by assuming that their variability can be explained by the images of the different poses of a checkerboard. However, the degree of reliability for both the best parameter values and their associated uncertainties has not yet been verified. Inaccurate estimates of intrinsic and extrinsic parameters during camera calibration may introduce additional biases in post-processing. This is why we propose a novel Bayesian inference-based approach that has allowed us to evaluate the degree of certainty of Zhang’s camera calibration procedure. For this purpose, the a priori probability was assumed to be the one estimated by Zhang, and the intrinsic parameters were recalibrated by Bayesian inversion. The uncertainty of the intrinsic parameters was found to differ from the ones estimated with Zhang’s method. However, the major source of inaccuracy is caused by the procedure for calculating the extrinsic parameters. The procedure used in the novel Bayesian inference-based approach significantly improves the reliability of the predictions of the image points, as it optimizes the extrinsic parameters.
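Zhang's method, discussed in this abstract, is most commonly run through OpenCV's calibration pipeline. A minimal sketch, with a placeholder 9×6 checkerboard and hypothetical image file names:

```python
import cv2
import numpy as np

pattern = (9, 6)  # inner corners per row/column (placeholder board)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in ["pose0.png", "pose1.png", "pose2.png"]:  # hypothetical poses
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Zhang-style estimation of intrinsics K and distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```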
... In contrast, the fisheye camera can perceive a wide range of a scene, and can even obtain visual information about the hemispheric domain theoretically [4]. Figure 1 shows the visual difference between fisheye images and standard images. ...
... The angle of incident light is denoted as θ. However, the nonlinear projection of a fisheye lens is more complex and can be expressed by different mathematical models [4] according to the design and manufacturing, such as stereographic projection, equidistance projection, equisolid angle projection, and orthogonal projection, respectively, interpreted as follows: ...
... The correction process starts from the optical imaging model, and reconstructs the incident ray using the camera parameters obtained by the calibration. Then, it builds a spatial mapping from the spherical perspective projection to the plane (or cylinder) projection [4]. Kannala and Brandt [26] proposed a flexible radially symmetric projection model with circular control points to improve the calibration accuracy. ...
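The four classical projections named in this snippet map the incidence angle θ to an image radius r(θ); their standard formulas can be summarized compactly:

```python
import numpy as np

def fisheye_radius(theta, f, model="equidistance"):
    """Radial image height r(theta) for the four classical fisheye
    projection models (f: focal length, theta: angle of incidence)."""
    if model == "stereographic":
        return 2.0 * f * np.tan(theta / 2.0)
    if model == "equidistance":
        return f * theta
    if model == "equisolid":
        return 2.0 * f * np.sin(theta / 2.0)
    if model == "orthogonal":
        return f * np.sin(theta)
    raise ValueError(f"unknown model: {model}")
```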
Article
Full-text available
Accurate image feature point detection and matching are essential to computer vision tasks such as panoramic image stitching and 3D reconstruction. However, ordinary feature point approaches cannot be directly applied to fisheye images due to their large distortion, which makes the ordinary camera model unable to adapt. To address such a problem, this paper proposes a self-supervised learning method for feature point detection and matching on fisheye images. This method utilizes a Siamese network to automatically learn the correspondence of feature points across transformed image pairs to avoid high annotation costs. Due to the scarcity of the fisheye image dataset, a two-stage viewpoint transform pipeline is also adopted for image augmentation to increase the data variety. Furthermore, this method adopts both deformable convolution and contrastive learning loss to improve the feature extraction and description of distorted image regions. Compared with traditional feature point detectors and matchers, this method has been demonstrated with superior performance on fisheye images.
... Other types include decentering distortion, which arises from misalignment of optical elements, and thin prism distortion, caused by the tilt of an optical element relative to the imaging sensor. [Wang et al. 2008] In addition to geometric distortions, there are non-geometric aberrations such as vignetting, which darkens the image toward its periphery; chromatic aberration, which splits incoming light into a spectrum; bokeh, which affects the appearance of out-of-focus areas; and lens flares. ...
... The most commonly used method in this case is the Brown-Conrady model, which addresses various radial aberrations and optical misalignments. [Wang et al. 2008] This model operates entirely in image space and is independent of the camera's field of view (FOV). ...
Preprint
Full-text available
Lens Distortion Encoding System (LDES) allows for a distortion-accurate workflow, with a seamless interchange of high quality motion picture images regardless of the lens source. The system is similar in concept to the Academy Color Encoding System (ACES), but for distortion. The presented solution is fully compatible with existing software/plug-in tools for STMapping found in popular production software like Adobe After Effects or DaVinci Resolve. LDES utilizes a common distortion space and produces a single high-quality, animatable STMap used for direct transformation of one view to another, removing the need for lens-swapping for each shot. The LDES profile of a lens consists of two elements: a View Map texture and a Footage Map texture, each labeled with the FOV value. Direct distortion mapping is produced by sampling the Footage Map through the View Map. The result, an animatable mapping texture, is then used to sample the footage to a desired distortion. While the Footage Map is specific to a footage, View Maps can be freely combined/transitioned and animated, allowing for effects like a smooth shift from anamorphic to spherical distortion, previously impossible to achieve in practice. The presented LDES Version 1.0 uses the common 32-bit STMap format for encoding, supported by most compositing software, directly or via plug-ins. The difference between the standard STMap workflow and LDES is that LDES encodes absolute pixel position in the spherical image model. The main benefit of this approach is the ability to achieve a look similar to that of a highly expensive lens, in terms of distortion, using less expensive equipment. It also provides greater artistic control and never-before-seen manipulation of footage.
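The sampling described above (Footage Map sampled through the View Map, then the footage sampled through the result) amounts to two chained remap operations. A rough sketch with OpenCV, assuming normalized (s, t) coordinates with a bottom-up t axis; conventions vary, and this is not the LDES reference implementation:

```python
import cv2
import numpy as np

def apply_stmap(image, stmap):
    """Sample `image` through an STMap of shape (H, W, 2) holding
    normalized (s, t) lookup coordinates in [0, 1]."""
    h, w = image.shape[:2]
    map_x = (stmap[..., 0] * (w - 1)).astype(np.float32)
    map_y = ((1.0 - stmap[..., 1]) * (h - 1)).astype(np.float32)  # t bottom-up
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)

# LDES-style chaining (variable names hypothetical):
# mapping = apply_stmap(footage_map, view_map)  # direct distortion mapping
# result  = apply_stmap(footage, mapping)       # footage at desired distortion
```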
... The decentering and tilting of the lens elements in the compound lens cause the decentering distortion of the camera. According to the literature [16,17], the non-frontal lens-sensor model can model the decentering distortion of fixed-focal-length (monofocus) lenses well and gives results comparable to the mathematical modeling of the Brown model [14]. However, for zoom cameras, modeling is more complicated because the operation of zoom cameras relies on the lenses within the lens group moving relative to each other, exacerbating the decentering distortion. ...
... The radial distortion may be zero at a specific position in the middle focal length of the zoom lens. Since the decentering distortion of the lens has radial and tangential components [16], the total distortion will not be zero at the middle focal length of the zoom lens. However, since radial distortion is dominant, the reduction of radial distortion will result in a minimum of total distortion at the middle focal length of the zoom lens. ...
Article
Full-text available
Zoom camera calibration has always been challenging, as arbitrary zoom/focus settings change the camera parameters. Current calibration methods are based on the pinhole imaging model, which results in coupled calibration parameters due to the lack of geometric/physical constraints. In this Letter, we present a novel, to the best of our knowledge, pupil-centric imaging model that accounts for the camera’s radial, decentering, and mustache distortions at various zoom settings using exit pupil offsets and a non-frontal lens-sensor model. Therefore, we provide a reasonable physical explanation for the different distortion effects. Global optimization is performed based on the proposed initial camera calibration and bundle adjustment under several zoom and autofocus setting combinations. Experiments using three representative zoom cameras demonstrate the effectiveness of the proposed method. Its relative measurement accuracy is better than that of current state-of-the-art methods.
... It is, therefore, crucial that an accurate lens model be used and compensation in the phase reconstructions be applied. For camera lens distortion, look-up tables (LUTs) derived from the related parameters of camera calibration [5][6][7][8][9] can be used to correct the distortion of the camera in real time, where the elements of the tables are indexed by the integer row and column coordinates of the camera's pixels. However, for projector lens distortion, the projector coordinates of phase are real-valued, having infinite precision, and hence the projector lens distortion cannot be corrected as straightforwardly as camera distortion. ...
... To investigate the performance in accuracy and speed, we compare our method with existing methods in several experiments by conducting 1) a naive reconstruction with two-direction scanning as in Ref. [37] (without correcting the projector distortion); 2) iterative post-undistortion [11] (we iteratively calculate $(x_u^p, y_u^p)$ rather than $(X_u^w, Y_u^w, Z_u^w)$ as a simplification of Ref. [11]); 3) pre-distortion [16]; 4) the proposed method with two-direction scanning; and 5) the proposed method with one-direction scanning. To focus on observing projector distortion, for all five methods above we use the method in Ref. [5] to calibrate the camera and then build up LUTs for compensating the camera distortion. The strategies for computing 3D point clouds are as follows: we use Eq. ...
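The LUT-based camera correction referred to here (one table entry per integer pixel) is typically built once from the calibration result and then applied per frame with a single remap. A sketch with placeholder intrinsics:

```python
import cv2
import numpy as np

# Placeholder intrinsics standing in for a real camera calibration.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.28, 0.09, 0.001, -0.0005, 0.0])  # k1, k2, p1, p2, k3
w, h = 1280, 1024

# The LUT: one (x, y) source coordinate per destination pixel.
map_x, map_y = cv2.initUndistortRectifyMap(
    K, dist, None, K, (w, h), cv2.CV_32FC1)

# Real-time path, executed per frame:
# undistorted = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
```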
Article
Full-text available
In fringe projection profilometry, inevitable distortion of the optical lenses decreases phase accuracy and degrades the quality of 3D point clouds. For camera lens distortion, existing compensation methods include real-time look-up tables derived from the related parameters of camera calibration. However, for projector lens distortion, post-undistortion methods that iteratively correct lens distortion have so far been relatively time-consuming, while pre-distortion methods, despite avoiding iteration, are not suitable for binary fringe patterns. In this paper, we aim to achieve real-time phase correction for the projector by means of a scale-offset model that characterizes projector distortion by four correction parameters within a small-enough area, which allows us to speed up the post-undistortion by looking up tables. Experiments show that the proposed method can suppress the distortion error by a factor of 20×, i.e., the root-mean-square error is less than 45 µm/0.7‰, while improving the computation speed by a factor of 50× over traditional iterative post-undistortion.
... However, ensuring low distortion while maintaining high resolution is a highly challenging task. Therefore, in practical applications, commercial projection lenses often prioritize achieving high resolution, even if it means compromising to some extent on distortion control [14,15]. The 2x projection lens utilized in our laboratory is a commercially procured microscope lens, exhibiting an approximate distortion of 0.6% across the entire field of view. ...
Article
Full-text available
Distortion is a common issue in projection lens imaging, leading to image distortion and edge deformation, which significantly affects the quality of the projected pattern. Conventional methods for distortion correction are typically constrained by the precision of the projection pixel size. In this work, we propose an ultra-pixel precision correction method for projection distortion in projection lithography systems. By fitting the position error between the projected pattern and the calibration pattern, and combining the overlapping lithography method with the digital correction method to reduce quantization error, we have overcome the limitation of pixel size on correction precision, thereby achieving ultra-pixel precision calibration of the projected pattern. The resulting position error of the final exposed pattern can be reduced to approximately 1 µm (with a projection pixel side length of 5.4 µm). The zone plate fabricated using this method exhibits extremely high ring band position accuracy, and the diffraction test patterns are highly consistent with the simulation results. Our ultra-pixel precision correction method, based on a calibration substrate, is characterized by its simplicity of operation, cost-effectiveness, and wide adaptability. It plays a pivotal role in enhancing the quality of lithographic patterns within lithography systems.
... The lowest-order radially symmetric distortion (coefficient k1) is usually dominant, while the remaining higher-order terms are small enough to be negligible. Improper assembly of the projection lens can lead to both decentering and thin prism distortion [32]. The former comes from the decentering of lenses and/or other optical components, and hence it can be described mathematically by ...
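The expression truncated above presumably gives the classical decentering terms; in Brown's formulation they are commonly written as (standard form, not necessarily the notation of [32]):

\[
\Delta x = p_1\,(r^2 + 2x^2) + 2\,p_2\,xy, \qquad
\Delta y = 2\,p_1\,xy + p_2\,(r^2 + 2y^2), \qquad r^2 = x^2 + y^2 .
\]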
Article
Full-text available
The optical distortion of the lithographic projection lens can reduce imaging quality and cause overlay errors in lithography, thus preventing the miniaturization of printed patterns. In this paper, we propose a technique to measure the optical distortion of a lithographic projection lens by sensing the wavefront aberrations of the lens. A multichannel dual-grating lateral shearing interferometer is used to measure the wavefront aberrations at several field points in the pupil plane simultaneously. Then, the distortion at these field points is derived according to the proportional relationship between the Z2 and Z3 Zernike terms (the tilt terms) and the image position shifts. Without the need for additional devices, our approach can simultaneously retrieve both the wavefront aberrations and the image distortion information. Consequently, it not only improves measurement speed and accuracy but also enables accounting for displacement stage positioning error. Experiments were conducted on a lithographic projection lens with a numerical aperture of 0.57 to verify the feasibility of the proposed method.
... The first two are based on blob detection methods that consider features as sections of the image with constant or comparable attributes, whereas the third and fourth use edge-following techniques to extract the contours of image features, and the last is used to extract the camera parameters using Brown's radial and tangential distortion correction model (Ricolfe-Viala and Sanchez-Salmeron, 2010). FindCircleGrid, FindContours, drawCirclesGrid, and the calibrateCamera functions were found to be easier to implement and robust enough for this purpose because they have already been deployed and widely tested on calibration images (Wang et al., 2008; Ricolfe-Viala and Sanchez-Salmeron, 2010; Kawanishi et al., 2015). A droplet's curvature distortion was investigated using a calibration method based on the radial and tangential models, and the results are displayed in Fig. 6. ...
Article
Full-text available
The use of levitated droplets with electrostatic and ultrasonic fields has attracted much attention in the fields of materials development, chemical engineering, droplet-based microfluidics, inkjet printing, and aerospace engineering. To use them properly, it is essential to understand the internal flow of the suspended droplet. The flow inside fluid droplets is visualized with the particle image velocimetry (PIV) method: the fluid to be investigated is seeded with small particles that follow the flow well. One problem that occurs when the PIV method is used to measure velocity fields inside droplets, however, is that light refracts at the droplet surface and the imaged internal flow is distorted by the curvature. The aim of this research is to improve the accuracy of PIV measurement by correcting the distorted particle images of an acoustically levitated droplet using a calibration method. In this study, simulated droplets with different refractive indices and aspect ratios were used to investigate their influence on distortion correction. A circular target plate was also utilized to correct the distortion in a simulated droplet using a calibration method implemented in Python-OpenCV. The experimental results showed that the internal flow curvature can be distorted in two ways, barrel and pincushion distortion, and that the distortion increases as the refractive index increases. Correction of the distorted image of particles in the droplet showed good convergence as the aspect ratio decreased.
... Another widely used calibration approach is the method of Zhang (2000). Later works introduced more extensive sensor models, for example by including additional parameters for modeling radial distortion (Kannala & Brandt, 2006; Wang et al., 2008). ...
Article
Full-text available
Both robot and hand‐eye calibration have been object of research for decades. While current approaches manage to precisely and robustly identify the parameters of a robot's kinematic model, they still rely on external devices such as calibration objects, markers and/or external sensors. Instead of trying to fit recorded measurements to a model of a known object, this paper treats robot calibration as an offline SLAM problem, where scanning poses are linked to a fixed point in space via a moving kinematic chain. As such, we enable robot calibration by using nothing but an arbitrary eye‐in‐hand depth sensor. To the authors' best knowledge the presented framework is the first solution to three‐dimensional (3D) sensor‐based robot calibration that does not require external sensors nor reference objects. Our novel approach utilizes a modified version of the Iterative Corresponding Point algorithm to run bundle adjustment on multiple 3D recordings estimating the optimal parameters of the kinematic model. A detailed evaluation of the system is shown on a real robot with various attached 3D sensors. The presented results show that the system reaches precision comparable to a dedicated external tracking system at a fraction of its cost.
... In non-fixed focal length cameras, every time the focus or zoom changes, it affects the focal length, location of the principal point, and the lens distortions, especially the tangential (decentering) one. Lens distortions can be calculated with one single image of a calibration scene only if the focal length is known or assumed beforehand [34]. There are several application examples produced in various programming languages, but the majority of them are built for basic lens distortions and the method of non-photogrammetric camera calibration [35]. ...
Article
Full-text available
Producing accurate spatial data with stereo photogrammetric techniques is a challenging task, and the central projection of the space needs to be defined as closely as possible to its real form in each image taken for the relevant production. The interior camera parameters that define the exact imaging geometry of the camera and the exterior orientation parameters that locate and rotate the imaging directions in a coordinate system have to be known accurately for this correct definition. All distortions originating from the lens, the sensor plane, and their recording geometry are significant, as they cannot be detected with manual measurements. It is of vital importance to clearly understand the camera self-calibration concept with respect to the lens and the sensor plane geometry and to include every possible distortion source as an unknown parameter in the calibration adjustments, as they are all modellable systematic errors. In this study, possible distortion sources and self-calibration adjustments are explained in detail with a recently developed visualization software. The distortion sources investigated in the study are radial, tangential, differential scale, and axial skewing distortion. Thanks to the developed software, the image center point, distorted grids, undistorted grids, and principal points were visualized. In conclusion, the most important element of obtaining accurate and precise photogrammetric products is the correct definition of the central projection of the space for each image, and therefore the study explains an accurate and robust procedure based on the correct definition and use of the camera's internal parameters.
... 12 The presence of distortion is innate to imaging lenses, and when it is not corrected through the lens manufacturing process, it must be corrected through postprocessing algorithms and/or additional correction lenses. 39,40 The selection of this lens block also has a great impact on the spatial resolution and sharpness of the detected features. 12 Another parameter dictated by the optical arrangement of the detection block is the depth of field (DOF), which conveys the range of z distances from the focus plane at which image quality is preserved or lost. ...
Article
Full-text available
Significance: Fluorescence guided surgery (FGS) has demonstrated improvements in decision making and patient outcomes for a wide range of surgical procedures. Not only can FGS systems provide a higher level of structural perfusion accuracy in tissue reconstruction cases but they can also serve for real-time functional characterization. Multiple FGS devices have been Food and Drug Administration (FDA) cleared for use in open and laparoscopic surgery. Despite the rapid growth of the field, there has been a lack of standardization methods. Aim: This work overviews commonalities inherent to optical imaging methods that can be exploited to produce such a standardization procedure. Furthermore, a system evaluation pipeline is proposed and executed through the use of photo-stable indocyanine green fluorescence phantoms. Five different FDA-approved open-field FGS systems are used and evaluated with the proposed method. Approach: The proposed pipeline encompasses the following characterization: (1) imaging spatial resolution and sharpness, (2) sensitivity and linearity, (3) imaging depth into tissue, (4) imaging system DOF, (5) uniformity of illumination, (6) spatial distortion, (7) signal to background ratio, (8) excitation bands, and (9) illumination wavelength and power. Results: The results highlight how such a standardization approach can be successfully implemented for inter-system comparisons as well as how to better understand essential features within each FGS setup. Conclusions: Despite clinical use being the end goal, a robust yet simple standardization pipeline before clinical trials, such as the one presented herein, should benefit regulatory agencies, manufacturers, and end-users to better assess basic performance and improvements to be made in next generation FGS systems.
... The first step involves correcting the radial distortion caused by the camera lens, using a checkerboard captured from multiple perspectives to measure and correct the internal or external parameters through inverse calculation 29 . The Camera Calibration function provided by OpenCV was used for this purpose. ...
Article
Full-text available
The COVID-19 pandemic and the discovery of new mutant strains have had a devastating impact worldwide. Patients with severe COVID-19 require various equipment, such as ventilators, infusion pumps, and patient monitors, and a dedicated medical team to operate and monitor the equipment in isolated intensive care units (ICUs). Medical staff must wear personal protective equipment to reduce the risk of infection. This study proposes a tele-monitoring system for isolation ICUs to assist in the monitoring of COVID-19 patients. The tele-monitoring system consists of three parts: medical-device panel image processing, transmission, and tele-monitoring. The system can monitor the ventilator screen despite obstacles, receive and store data, and provide real-time monitoring and data analysis. The proposed tele-monitoring system is compared with previous studies, and the image combination algorithm for reconstruction is evaluated using the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR). The system achieves an SSIM score of 0.948 on the left side and a PSNR of 23.414 dB on the right side with no obstacles. It also reduces blind spots, with an SSIM score of 0.901 and a PSNR score of 18.13 dB. The proposed tele-monitoring system is compatible with both wired and wireless communication, making it accessible in various situations. It uses a camera and performs live data monitoring, and the two monitoring systems complement each other. The system also includes a comprehensive database and an analysis tool, allowing medical staff to collect and analyze data on ventilator use, providing a quick, at-a-glance view of the patient's condition. With the implementation of this system, patient outcomes may be improved and the burden on medical professionals may be reduced during COVID-19 pandemic-like situations.
... As a result, since most of the imaging area for inner surface defects is located far from the center, this method does not meet detection requirements. Mundhenk [18] expressed the image distortion as a longitude and latitude model, uniformly mapping the abscissa to the corrected position while keeping the ordinate of each pixel on the longitude unchanged; Wang [19] decomposed endoscopic image distortion into radial distortion and rigid transformation from an ideal image plane to sensor array plane, simplifying the model parameters to two angle parameters and two linear parameters. The above research has simplified the distortion model in different ways, achieving ideal correction effects while reducing resource occupation. ...
Article
Full-text available
A method is proposed for correcting endoscopic images to enable the measurement of inner surface defects in holes. The distortion is decomposed into circumferential and axial components based on imaging principles, and geometric constraints are added to simplify the correction model parameters to a central coordinate and nonlinear parameter, improving accuracy at the edge of endoscopic images. Experimental results demonstrate good universality with an average error rate of 5.35% in defect measurements, making it applicable for automatic and intelligent detection of hole parts and pipelines.
... The literature [2][3][4][5] shows that temperature fluctuations result in performance degradation by defocusing due to housing deformation and lens expansion, imaging errors due to thermal mismatch between lens and lens carrier and interconnects, creep in solder joints, and adhesive shrinkage, among others. The authors of [6][7][8] report on the various aberrations introduced into the system by the tilt and decentering of the lens elements. It is also shown in [9] that lens tilt and decentration can be caused by aging or thermal mismatch when cameras are operated at elevated temperatures. ...
... To derive the colored noise model, we adopt a star tracker model with eight calibration parameters, which was used with proven calibration performance in previous research [7,9,10,19]. The eight calibration parameters are defined as follows: $\vec{o} = [m_o\ n_o]^T$ is the pixel coordinate of the optical center of the lens projected on the detector, $f$ is the focal length, $\vec{a} = [a_1\ a_2]^T$ is the rotation angle of the detector with respect to the lens plane axes, and $g_y$ is a skew factor for the pixels, as illustrated in Fig. 1. ...
Article
This paper presents an attitude error model of a star tracker, which is induced from the optical system errors, and proposes an attitude Kalman filter considering the star tracker errors. Though it can be calibrated before and after launches, it is impossible to obtain error-free star tracker parameters in practice, which generates non-white noise errors in the star tracker outputs. Moreover, star tracker error models are usually a business secret for the manufacturers, so it is hard to estimate them online on the spacecraft bus. We model the attitude bias caused by the error of the optical parameters as colored noise using the camera model parameters and their covariance. A recursive form of the colored noise is derived based on a vector autoregressive model, and a colored noise Kalman filter is proposed to estimate the attitude error along with the spacecraft attitude and gyro bias. The proposed method only needs three additional states to be estimated and does not contain sensitive information for a star tracker manufacturer, which can ease the burden of its applications. The simulations illustrate the stability and reliability of the proposed algorithm.
... Quite a number of works are devoted to this issue, which include camera calibration using different models in order to get rid of the mentioned distortions, e.g. [14][15][16][17][18]. ...
Article
Full-text available
A technique is described for referencing images from wide-angle optical systems, intended for recording the Earth's atmospheric emission, to geographic coordinates. The technique is based on an automatic procedure for extracting and identifying stars in the frames and subsequent georeferencing. An example of using the technique to calculate the characteristics of a long-lived meteor trail, based on observation data from two spatially separated wide-angle optical systems, is shown. Keywords: Geo-referencing, all-sky camera, star identification, atmospheric emission, long-lasting trail.
... Usually, the two coefficients k0 and k1 are sufficient to correct the effects of radial distortion [WSZL08]. However, for the most distorted cases, such as fisheye lenses (lenses with a very short focal length and therefore a large angle of view, up to 180°), a third coefficient k2 could be considered for a better correction of the image. ...
Thesis
This thesis concerns the study and development of a three-dimensional vision system coupled with a motion system for the contactless measurement of mechanical parts in an accurate, dense, and fast manner. Unlike probe-based 3D coordinate measuring machines, 3D vision systems provide a denser measurement in a short time. All of these 3D measurement operations, initially performed on simple parts, have recently been extended to complex surfaces to meet new industrial needs for in-line measurement automation, as specified in the Industry of the Future (or Industry 4.0). Within the LaVA project (Large Volume metrology Applications), the main challenge is the implementation of a multi-camera measurement system (photogrammetry/structured light) directly traceable to the SI definition of the metre. The combination of the camera-projector system with an industrial robot constitutes the complete measurement system intended for in-line 3D scanning operations on large-volume mechanical parts of complex shape. The 3D vision system, based on the structured-light principle, was developed and calibrated in-house. The calibration of computer vision systems is a crucial step before measurement, as it provides the information necessary for triangulation. Consequently, calibration techniques were studied, and a new optimization method that improves calibration accuracy was proposed as part of this thesis. To ensure traceability, the calibration of the 3D vision system is performed with a material standard measured on a traceable machine, linking our system to a traceability chain. Finally, a large-volume part of complex shape, similar to those used in aeronautics, was developed and measured on traceable Zeiss UPMC Carat machines. This part was proposed to evaluate the performance of the 3D scanner. A scanning strategy was also proposed to cover the entire surface of the part; this involves scanning several areas of the part individually and merging the measurements into a common reference frame using registration techniques. 3D data processing and fusion algorithms were implemented to obtain reliable and accurate measurement results. The measurement result for the large-volume part shows a maximum registration error of 150 µm.
... Current research offers many algorithms for fisheye correction. For example, J. Wang et al. [4] proposed a new camera lens distortion model in which lens distortion is governed by the radial distortion coefficients and a transform from the ideal image plane to the real sensor array plane. To eliminate distortion with this model, fewer parameters need to be obtained. ...
Article
Full-text available
The underwater environment is complex and changeable, and the larger the field of view of underwater images collected by an ROV, the more information they contain. Effective methods to obtain a large field of view include fisheye lenses and image stitching. To obtain even larger field information, we combine the two methods and propose a stitching algorithm that can be applied to fisheye lenses. The algorithm has two parts. The first part corrects the fisheye images: improving on the traditional chessboard correction method, this paper puts forward a new adaptive gray-level method that preserves more corner features, extracts chessboard corners more accurately, and thus yields more accurate correction results. The second part stitches the corrected images in real time: this paper proposes a fast stitching algorithm (FASTITCH) that, during stitching, preserves the image feature points and the transposed matrix of image matching so as to calculate the new coordinates of the original feature points in the stitched image. Using these coordinates to match the feature points of another image saves the time of finding feature points in the stitched image, thereby speeding up stitching and enabling real-time stitching. Experiments prove that the error obtained with the new correction method is smaller and that, compared with the traditional feature-point stitching algorithm, the proposed fast stitching algorithm (FASTITCH) shortens the stitching time by about 20%.
... The second application of our camera parameter estimation is image undistortion. The existing image undistortion algorithms attempt to correct radial lens distortions by warping input images to undistorted images [47], [48]. In our case, we achieve this using FOV and the distortion parameter ξ. ...
Article
Full-text available
This paper presents a novel Deep Learning (DL) model that estimates camera parameters, including camera rotations, field of view, and distortion parameter, from single-view images. The classical approach often analyzes geometric cues such as vanishing points, but it works only when such geometric cues exist in images. To alleviate this constraint, we use DL and employ implicit geometric cues, which can reflect the inter-image changes of camera parameters and are observed more frequently in images. Our geometric cues are inspired by two important intuitions: 1) geometric appearance changes caused by camera parameters are most prominent at object edges; 2) spatially consistent objects (in size and shape) better reflect the inter-image changes of camera parameters. To realize our approach, we propose a weighted edge-attention mechanism that assigns higher weights to the edges of spatially consistent objects. Our experiments prove that our edge-driven geometric emphasis significantly improves the estimation accuracy of the camera parameters over existing DL-based approaches.
... The accuracy of the camera model determines the accuracy of the measurement system [7]. The measurement accuracy also depends on the calibration accuracy of the visual measurement system [8]. A precise camera model and an effective measurement method together determine the measurement accuracy of the camera measurement system [9]. ...
Article
Full-text available
The accuracy of binocular visual system calibration using the traditional method is poor in the depth direction. To enlarge the high-accuracy field of view (FOV) of a binocular visual system, a 3D spatial distortion model (3DSDM) based on the 3D Lagrange difference is proposed to minimize 3D space distortion. In addition, a global binocular visual model (GBVM) is proposed that contains the 3DSDM and a binocular visual system. The GBVM calibration method and 3D reconstruction method are based on the Levenberg–Marquardt method. Experiments were carried out to verify the accuracy of our proposed method by measuring the length of the calibration gauge in a 3D space. Experiments show that compared to traditional methods our method can improve the calibration accuracy of a binocular visual system. Our GBVM has a lower reprojection error, higher accuracy, and a larger working field.
... Thus, the intrinsic camera parameters (sensor resolution, pixel size, optical distortion, . . . ) must be known to convert the image pixel coordinates to real-world units. It is important to have a good model of the camera's optical distortion [15,16] to correct pixel positions before converting them to real-world units. ...
Article
Full-text available
Several calibration algorithms use spheres as calibration tokens because of the simplicity and uniform shape that a sphere presents across multiple views, along with the simplicity of its construction. The alternatives include complex 3D tokens with reference marks, usually difficult to build and to analyze with the required accuracy, and the search for common features in scene images, a task that is also highly complex due to perspective changes. Some of the algorithms using spheres rely on estimating the projection of the sphere center from the camera images. Computing these projection points from the sphere silhouette in the images is not straightforward because the projection does not exactly match the silhouette centroid, and several methods have been developed to handle this calculation. In this work, a simple and fast numerical method adapted to precisely compute the sphere center projection for these algorithms is presented. Its benefits over similar existing methods are its ease of implementation and its lower sensitivity to segmentation issues. Other possible applications of the proposed method are presented too.
... The measurement accuracy of an optical camera directly affects the accuracy of autonomous optical navigation, and the optical measurement accuracy is mainly affected by systematic errors of the optical camera [5]. Therefore, the calibration of the internal systematic error parameters, including radial distortion, tangential distortion, image plane translation, and focal length error, and of external systematic error parameters such as the installation error of the optical camera, is crucial for successful autonomous optical navigation [6][7][8]. Before the spacecraft launch, the error parameters of an optical camera are typically subjected to detailed ground calibration. However, owing to vibrations during launch and the long-term complex space environment during orbital operation, the internal and external parameters of an optical camera can change significantly. ...
Article
Full-text available
Narrow field-of-view (FOV) cameras enable long-range observations and have been often used in deep space exploration missions. To solve the problem of systematic error calibration for a narrow FOV camera, the sensitivity of the camera systematic errors to the angle between the stars is analyzed theoretically, based on a measurement system for observing the angle between stars. In addition, the systematic errors for a narrow FOV camera are classified into “Non-attitude Errors” and “Attitude Errors”. Furthermore, the on-orbit calibration methods for the two types of errors are researched. Simulations show that the proposed method is more effective in the on-orbit calibration of systematic errors for a narrow FOV camera than the traditional calibration methods.
Article
The most common forms of target multi-degree-of-freedom (MDOF) motion are plane and space motions, whose accurate measurement is crucial for the fields of pose estimation, inertial navigation, and structural health monitoring of buildings. Currently, the primary methods used for MDOF motion measurement are laser interferometry, the grating-ruler-based method, and sensor-based methods. However, laser interferometry is operationally complex and costly, and it requires strict environmental conditions. Although the grating-ruler-based method is low-cost and accurate, its applicable frequency range is limited. The sensor-based methods, in turn, suffer from limited accuracy and dynamic range. To achieve accurate measurement of MDOF motion across a wide frequency range, a new monocular vision (MV)-based decoupling measurement method is investigated. This method utilizes a specially designed measurement mark, combined with an improved Harris corner positioning algorithm with sub-pixel accuracy and decoupling models for plane and space motions, and has the advantages of high measurement accuracy, a simple algorithm, and strong robustness. Comparison experiments with current methods demonstrated that the investigated method can simultaneously accomplish the decoupling measurements of plane and space motions, achieving accuracies of 0.66% and 1.29% in the range of 0.1-2 Hz.
Article
The four core technologies of automated driving are environment perception, precise positioning, path planning, and control execution. Sound planning requires a deep understanding of the surroundings, especially of the dynamic environment. Visual environment perception has played a key role in the development of autonomous vehicles and is widely used in intelligent rear-view mirrors, reversing radar, 360° panoramas, driving recorders, collision warning, traffic light recognition, lane departure warning, lane-keeping assistance, automatic parking, and more. The traditional way to obtain environmental information is the narrow-angle pinhole camera, which has a limited field of vision and blind areas; multiple cameras often need to be mounted around the car body, which increases both cost and information processing time. Fisheye lenses are an effective alternative for acquiring environmental information: their large field of view (FOV) can provide an entire 180° hemispheric view, and in theory two cameras suffice to cover 360°, avoiding visual blind spots, reducing the occlusion of visual objects, providing more information for visual perception, and greatly reducing processing time. Deep-learning-based processing of surround fisheye images has mainly followed two routes. In the first, the surround fisheye image is transformed into an ordinary perspective image through correction of the distortion, and the corrected image is processed with classical image processing algorithms. The disadvantage is that the correction damages image quality, especially at the image edges, leading to loss of important visual information; the closer to the image edge, the greater the loss. In the second, the distorted fisheye image is modeled and processed directly. The geometric complexity of the fisheye image prevents algorithms designed for ordinary images from migrating well to surround fisheye images, a consequence of the differing imaging characteristics of ordinary and fisheye cameras, and no surround fisheye imaging model with clearly better performance exists yet. Finally, there is no representative public dataset for unified evaluation of vision algorithms, and large amounts of training data are also lacking. This paper surveys the research directions related to fisheye images, including fisheye image correction, subdivided into calibration-based correction methods and correction methods based on projection transformation models; object detection in fisheye images, mainly pedestrian detection; and semantic segmentation of urban road scenes, introducing methods for generating pseudo-fisheye image datasets for segmentation. Other fisheye image modeling methods are listed with the approximate share of each research direction, and their application background and real-time characteristics in the autonomous driving environment are analyzed. In addition, the common fisheye image datasets are reviewed, including their size, release date, and annotation categories. The experimental results of object detection and semantic segmentation methods on fisheye images are compared and analyzed.
The paper further discusses fisheye image evaluation datasets, the construction of fisheye-specific algorithm models, and model efficiency, and notes that fisheye image processing has benefited from the development of weakly supervised and unsupervised learning.
Article
In camera distortion correction and calibration research, there is a prevalent assumption that distortion exhibits radial symmetry around a center of distortion. However, the validity of this assumption is contingent upon the extent of tangential distortion in the system. This study delves into a quantitative analysis of tangential distortion by simulating diverse camera specifications and tilt angles between lenses and sensor arrays. The findings indicate that larger field-of-view and smaller pixel sizes lead to an augmentation of tangential distortion. In practical camera settings, the tangential distortion can extend beyond a dozen pixels. Consequently, traditional models relying on radial symmetry may encounter a decline in precision when addressing such distortions. Moreover, the study suggests that centers of distortion determined geometrically based on radial symmetry may not be numerically optimal in scenarios with tangential distortions. As a result, the robustness of the radial symmetry assumption may be compromised in the face of prominent trends in camera advancements.
Article
Camera calibration is very important when planning machine vision tasks. Calibration may involve 3D reconstruction, size measurement, or careful target positioning, and calibration accuracy directly affects the accuracy of machine vision. The parameters in many image distortion models are usually applied to all image pixels. However, this can lead to rather high pixel reprojection errors at image edges, compromising camera calibration accuracy. In this paper, we present a new camera calibration optimization algorithm that features a step function splitting images into center and edge regions. First, based on the observation that pixel reprojection errors increase with pixel distance from the image center, we give a flexible method to divide an image into two regions, center and boundary. The algorithm then automatically determines the step position, and the calibration model is rebuilt. The new model can calibrate the distortions of the center and boundary regions separately. With this optimization, the number of distortion parameters in the original model is doubled, and different parameters represent the different distortions within the two regions. In this way, our method can improve on traditional calibration models, which define a single global model to describe the distortion of the whole image, and achieve higher calibration accuracy. Experimentally, the method significantly improved pixel reprojection accuracy, particularly at image edges. Simulations revealed that our method is more flexible than traditional methods.
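The two-region idea can be illustrated schematically: choose a step radius, then fit separate distortion coefficients for the center and boundary regions. The sketch below fits a simple two-coefficient radial model per region on synthetic data; it is illustrative only and does not reproduce the paper's model or its automatic step search.

```python
import numpy as np

def fit_radial(r, scale):
    """Fit scale(r) = 1 + k1*r^2 + k2*r^4 by linear least squares."""
    A = np.stack([r**2, r**4], axis=1)
    k, *_ = np.linalg.lstsq(A, scale - 1.0, rcond=None)
    return k

# Synthetic radial-scale samples with a different regime near the edge.
r = np.linspace(0.01, 1.0, 200)
scale = 1 + 0.10 * r**2 + 0.05 * r**4 + 0.02 * (r > 0.7) * r**4
r_step = 0.7  # step position (assumed given; the paper determines it)

k_center = fit_radial(r[r <= r_step], scale[r <= r_step])
k_edge = fit_radial(r[r > r_step], scale[r > r_step])
print("center coeffs:", k_center, "edge coeffs:", k_edge)
```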
Chapter
Camera calibration is known to be a difficult problem, mainly because the quantities to be identified vary over several orders of magnitude and affect the accuracy of the result in different ways. Various approaches have been proposed in the literature, e.g., the Tsai approach or the Zhang algorithm, but although they differ significantly, they all rely on the pinhole camera model. The calibration of a telecentric lens is worth attention because it implies a different procedure for the design of the optics itself. In this article, we propose a solution for telecentric lens calibration.
Article
Full-text available
Context. PLAnetary Transits and Oscillations of stars (PLATO) is the ESA M3 space mission dedicated to detecting and characterising transiting exoplanets, including information from the asteroseismic properties of their stellar hosts. The uninterrupted and high-precision photometry provided by space-borne instruments such as PLATO requires long preparatory phases. An exhaustive list of tests is paramount to design a mission that meets the performance requirements and, as such, simulations are an indispensable tool in the mission preparation. Aims. To accommodate PLATO’s need for versatile simulations prior to mission launch that at the same time describe the innovative yet complex multi-telescope design accurately, in this work we present the end-to-end PLATO simulator specifically developed for that purpose, namely PlatoSim. We show, step-by-step, the algorithms embedded into the software architecture of PlatoSim that allow the user to simulate photometric time series of charge-coupled device (CCD) images and light curves in accordance with the expected observations of PLATO. Methods. In the context of the PLATO payload, a general formalism for modelling, end-to-end, incoming photons from the sky to the final measurement in digital units is discussed. Following the light path through the instrument, we present an overview of the stellar field and sky background, the short- and long-term barycentric pixel displacement of the stellar sources, the cameras and their optics, the modelling of the CCDs and their electronics, and all main random and systematic noise sources. Results. We show the strong predictive power of PlatoSim through its diverse applicability and contribution to numerous working groups within the PLATO mission consortium. This involves the ongoing mechanical integration and alignment, performance studies of the payload, the pipeline development, and assessments of the scientific goals. Conclusions. PlatoSim is a state-of-the-art simulator that is able to produce the expected photometric observations of PLATO to a high level of accuracy. We demonstrate that PlatoSim is a key software tool for the PLATO mission in the preparatory phases until mission launch and prospectively beyond.
Article
A technique using droplets suspended by ultrasound has attracted attention as a containerless processing method. While this avoids contamination from the container, ultrasonic levitation is known to create flow fields inside and outside the droplet. For more precise droplet control, it is desirable to elucidate the internal flow of the droplet, and measurements of the internal flow have been performed using particle image velocimetry (PIV). The aim of this study is to elucidate the internal flow field behavior by solving the optical problems involved and improving the accuracy of velocity field measurements in levitated droplets. The fluid under investigation is seeded with small tracer particles and illuminated by a laser so that the flow can be captured on the resulting laser sheet. Toward distortion correction using calibration methods, the curvature distortion is successfully visualized with the PIV approach. The curvature distortion was characterized from the refractive index and aspect ratio of a simulated droplet in acrylic material. The fluid flow, affected by droplet curvature and refractive index, was visualized for both levitated and simulated droplets. The experimental results showed that droplet curvature introduces two types of distortion, radial and tangential, both of which increase as the refractive index and aspect ratio increase.
Article
Purpose: This study presents a treatment planning system for intraoperative low-energy photon radiotherapy based on photogrammetry from real images of the surgical site taken in the operating room. Material and methods: The study population comprised 15 patients with soft-tissue sarcoma. The system obtains the images of the area to be irradiated with a smartphone or tablet, so that the absorbed doses in the tissue can be calculated from the reconstruction without the need for computed tomography. The system was commissioned using 3D printing of the reconstructions of the tumor beds. The absorbed doses at various points were verified using radiochromic films that were suitably calibrated for the corresponding energy and beam quality. Results: The average reconstruction time of the 3D model from the video sequence in the 15 patients was 229.6 ± 7.0 s. The entire procedure, including video capture, reconstruction, planning, and dose calculation, took 520.6 ± 39.9 s. Absorbed doses were measured on the 3D-printed model with radiochromic film; the differences between these measurements and those calculated by the treatment planning system were 1.4% at the applicator surface, 2.6% at 1 cm, 3.9% at 2 cm, and 6.2% at 3 cm. Conclusions: The study demonstrates a photogrammetry-based low-energy photon IORT planning system capable of obtaining real-time images inside the operating room, immediately after removal of the tumor and immediately before irradiation. The system was commissioned with radiochromic film measurements on a 3D-printed model.
Article
Full-text available
Non-metric thermal sensors that have been out of laboratory calibration for an extended period require calibration to adjust the interior orientation parameters and lens distortions. To generate photogrammetric products with the desired degree of geometric precision, it is important to identify the geometric calibration parameters of the non-metric sensor in order to minimize the relative orientation error and resolve the bundle adjustment. The purpose of this research is to present a novel method for the geometric calibration of non-metric thermal sensors as a necessary preprocessing step before producing photogrammetric products with the desired geometric precision. To geometrically calibrate the non-metric thermal sensor, the proposed method employs a calibration pattern in the form of a rectangular plate composed of hollow circular targets with symmetrical placement geometry. Hollow circles induce temperature differences, improving the contrast and sharpness of the thermal calibration pattern. Due to the thermal sensors' low spatial resolution and low contrast, circular targets appear as ellipses in the image. For this reason, in this study the Hough Transform is utilized to fit and extract the exact two-dimensional coordinates of the focal center of the elliptical targets in image space; the Hough Transform employs the parameters of the ellipse to fit it and does not require the complete extraction of its circumferential lines. In the method utilized in this study, the collinearity equations are used to compute the geometric calibration elements of the thermal sensor. Various experiments were undertaken to evaluate the proposed approach. The results of these tests, based on the criterion of mean reprojection error per image, put the accuracy of the geometric calibration at 0.03 pixels. Additionally, when the proposed method for re-projecting the target's focal point to the calibration pattern is used in conjunction with the estimated calibration parameters, the mean error between the actual image coordinates and the actual ground coordinates of the targets is reduced to 0.28 pixels when compared to the method of the equation of conic sections.
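A minimal sketch of the target-center extraction step, assuming an 8-bit single-channel thermal image; OpenCV's least-squares ellipse fit (cv2.fitEllipse) stands in here for the Hough-based fit used in the paper:

    import cv2
    import numpy as np

    def target_centers(thermal_img, min_area=20):
        """Estimate centers of the (elliptical) circular targets in a
        thermal calibration image."""
        _, bw = cv2.threshold(thermal_img, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(bw, cv2.RETR_LIST,
                                       cv2.CHAIN_APPROX_NONE)
        centers = []
        for c in contours:
            if len(c) >= 5 and cv2.contourArea(c) >= min_area:
                (cx, cy), _, _ = cv2.fitEllipse(c)  # sub-pixel center
                centers.append((cx, cy))
        return np.array(centers)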
Article
In this article, a novel in situ measurement method with stereoscopic image analysis is proposed for monitoring crystal length and width distributions during a cooling crystallization process based on a binocular telecentric imaging system. First, a stereoscopic imaging calibration model is established for using binocular telecentric cameras to conduct in situ measurement during a cooling crystallization process. Second, an enhanced algorithm is presented to improve matching crystal images from binocular image pairs based on in situ image preprocessing and segmentation. Third, four key corners related to crystal length and width are determined by using the boundary features of matched crystal image projections. Finally, the length and width of each crystal are computed based on the reconstructed key corners in a 3-D coordinate space by using the established stereoscopic calibration model. Experimental validation on the proposed stereoscopic imaging calibration model and 3-D reconstruction of key corners is carried out via in situ measurement of a microscale checkerboard plate inserted into a cooling crystallizer. In situ measurements of crystal length and width distributions during the cooling crystallization process of β-form L-glutamic acid (LGA) are conducted to verify the effectiveness and advantage of the proposed measurement method.
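For the final reconstruction step, a minimal sketch of linear triangulation under affine (telecentric) projection, assuming 2x4 projection matrices P1 and P2 from the stereoscopic calibration (names chosen here for illustration): projection is linear in the 3-D point, so a matched pair gives four linear equations in three unknowns.

    import numpy as np

    def triangulate_affine(u1, u2, P1, P2):
        """Recover a 3-D point from a matched image pair under affine
        projection u = P[:, :3] @ X + P[:, 3]; least-squares solve."""
        A = np.vstack([P1[:, :3], P2[:, :3]])          # 4x3 system
        b = np.hstack([u1 - P1[:, 3], u2 - P2[:, 3]])  # length-4 rhs
        X, *_ = np.linalg.lstsq(A, b, rcond=None)
        return X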
Article
Full-text available
Most algorithms in 3D computer vision rely on the pinhole camera model because of its simplicity, whereas video optics, especially low-cost wide-angle or fish-eye lenses, generate a lot of non-linear distortion which can be critical. To find the distortion parameters of a camera, we use the following fundamental property: a camera follows the pinhole model if and only if the projection of every line in space onto the camera is a line. Consequently, if we find the transformation on the video image so that every line in space is viewed in the transformed image as a line, then we know how to remove the distortion from the image. The algorithm consists of first doing edge extraction on a possibly distorted video sequence, then doing polygonal approximation with a large tolerance on these edges to extract possible lines from the sequence, and then finding the parameters of our distortion model that best transform these edges to segments. Results are presented on real video images, compared with distortion calibration obtained by a full camera calibration method which uses a calibration grid.
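A minimal sketch of the last step under a one-coefficient radial model (an assumption for brevity; the paper's distortion model is richer): undistort each extracted edge and penalize its deviation from a straight line.

    import numpy as np
    from scipy.optimize import least_squares

    def line_residuals(pts):
        """Perpendicular distances of 2-D points to their best-fit line
        (total least squares via SVD)."""
        c = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - c)
        n = vt[-1]                      # unit normal of the fitted line
        return (pts - c) @ n

    def straightness_cost(k, edges, centre):
        """Stacked residuals of all candidate edges after a radial
        correction with coefficient k[0]."""
        res = []
        for e in edges:                 # e: (N, 2) points of one edge
            xy = e - centre
            r2 = np.sum(xy**2, axis=1, keepdims=True)
            res.append(line_residuals(centre + xy * (1.0 + k[0] * r2)))
        return np.concatenate(res)

    # k_hat = least_squares(straightness_cost, x0=[0.0],
    #                       args=(edges, centre)).x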
Article
Full-text available
Camera calibration has been studied extensively in computer vision and photogrammetry and the proposed techniques in the literature include those using 3D apparatus (two or three planes orthogonal to each other or a plane undergoing a pure translation, etc.), 2D objects (planar patterns undergoing unknown motions), and 0D features (self-calibration using unknown scene points). Yet, this paper proposes a new calibration technique using 1D objects (points aligned on a line), thus filling the missing dimension in calibration. In particular, we show that camera calibration is not possible with free-moving 1D objects, but can be solved if one point is fixed. A closed-form solution is developed if six or more observations of such a 1D object are made. For higher accuracy, a nonlinear technique based on the maximum likelihood criterion is then used to refine the estimate. Singularities have also been studied. Besides the theoretical aspect, the proposed technique is also important in practice especially when calibrating multiple cameras mounted apart from each other, where the calibration objects are required to be visible simultaneously.
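A sketch of the key constraint, with notation assumed here following standard pinhole conventions: the fixed point A and free endpoint B project to image points \tilde{a}, \tilde{b} with unknown depths z_A, z_B, and a third collinear marker C = \lambda_A A + \lambda_B B (the \lambda known from the marker spacing) projects to \tilde{c}. Then

    z_C \tilde{c} = z_A \lambda_A \tilde{a} + z_B \lambda_B \tilde{b},
    \qquad
    \left\| z_B K^{-1} \tilde{b} - z_A K^{-1} \tilde{a} \right\| = L .

The first relation expresses z_B (and z_C) in terms of z_A, and the known length L then yields one linear constraint per observation on the image of the absolute conic \omega = K^{-T} K^{-1}; six or more observations determine \omega, and K follows by Cholesky factorization.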
Conference Paper
Full-text available
We propose a method of simultaneously calibrating the radial distortion function of a camera along with the other internal calibration parameters. The method relies on the use of a planar (or alternatively nonplanar) calibration grid, which is captured in several images. In this way, the determination of the radial distortion is an easy add-on to the popular calibration method proposed by Zhang [1999]. The method is entirely noniterative, and hence is extremely rapid and immune from the problem of local minima. Our method determines the radial distortion in a parameter-free way, not relying on any particular radial distortion model. This makes it applicable to a large range of cameras from narrow-angle to fish-eye lenses. The method also computes the centre of radial distortion, which we argue is important in obtaining optimal results. Experiments show that this point may be significantly displaced from the centre of the image, or the principal point of the camera.
Conference Paper
Full-text available
Radial image distortion is a frequently observed defect when using wide-angle, low focal length lenses. In this paper a new method for its calibration and removal is presented. An inverse distortion model is derived that is accurate to a sub-pixel level over a broad range of distortion levels. An iterative technique for estimating the model's parameters from a single view is also detailed. Results on simulated and real images clearly indicate significantly improved performance compared to existing methods.
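The paper's inverse model itself is not reproduced in this listing; as a point of comparison, a common baseline is to invert the forward radial model numerically by fixed-point iteration (a sketch, one-coefficient model assumed):

    import numpy as np

    def undistort_fixed_point(pts_d, k1, centre, n_iter=10):
        """Invert x_d = x_u * (1 + k1 * r_u^2) by fixed-point iteration;
        a few iterations usually reach sub-pixel accuracy for moderate
        distortion levels."""
        xy_d = pts_d - centre
        xy_u = xy_d.copy()                 # initial guess: no distortion
        for _ in range(n_iter):
            r2 = np.sum(xy_u**2, axis=1, keepdims=True)
            xy_u = xy_d / (1.0 + k1 * r2)
        return centre + xy_u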
Article
Full-text available
We propose a flexible technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one more step from laboratory environments to real world use.
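This is the technique behind most planar-target calibration tools; a minimal sketch using OpenCV's implementation of it, where image_files is an assumed list of views of a 9x6 checkerboard:

    import cv2
    import numpy as np

    pattern = (9, 6)                       # inner corners per row/column
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_pts, img_pts = [], []
    for fname in image_files:              # a few views of the plane
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        ok, corners = cv2.findChessboardCorners(gray, pattern)
        if ok:
            obj_pts.append(objp)
            img_pts.append(corners)

    # Closed-form solution plus nonlinear refinement, radial distortion
    # included; rms is the reprojection error in pixels.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)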
Article
Full-text available
To model the way that cameras project the three-dimensional world into a two-dimensional image we need to know the camera’s image center. First-order models of lens behavior, such as the pinhole-camera model and the thin-lens model, suggest that the image center is a single, fixed, and intrinsic parameter of the lens. On closer inspection, however, we find that there are many possible definitions for image center. Most image centers do not have the same coordinates and, moreover, move as lens parameters are changed. We present a taxonomy that includes 15 techniques for measuring image center. Several techniques are applied to a precision automated zoom lens, and experimental results are shown.
Article
A 1D calibration object is a segment carrying several markers (or points) at known distances. Generally speaking, calibration methods with 1D objects are more flexible than those with 2D or 3D objects. Under the pinhole camera model, Zhang [9] proved that calibration is not possible with free-moving 1D objects, but can be done if one of the markers is fixed. In this paper, the authors propose a catadioptric camera calibration method using 1D objects with five or more known points. The method is capable of calibrating a camera when either the camera or the calibration object undergoes three or more general motions. The proposed algorithm consists of two steps: first, the principal point is calculated using geometric invariants under the catadioptric camera model; second, for every image point of the 1D object, a pair of orthogonal vanishing points is derived, from which the image of the absolute conic (IAC) is computed, and the intrinsic parameter matrix is then obtained by Cholesky factorization of the IAC. In addition, analytical solutions for the mirror parameter and the 1D object's pose are provided. In simulated experiments, the method calibrates well even when the fitting of the partially visible conic is not very accurate. A real experiment also confirms the correctness and feasibility of the proposed method.
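For the second step, once the IAC ω has been estimated (and scaled so that it is positive definite), the intrinsic matrix follows from ω = K^{-T} K^{-1}; a minimal numpy sketch:

    import numpy as np

    def intrinsics_from_iac(omega):
        """Recover K from the image of the absolute conic, using
        omega = K^{-T} K^{-1}; omega must be positive definite."""
        L = np.linalg.cholesky(omega)     # omega = L L^T, so L = K^{-T}
        K = np.linalg.inv(L.T)            # K^{-1} = L^T
        return K / K[2, 2]                # normalize so K[2, 2] = 1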
Article
Correction for image distortion in cameras has been an important topic for as long as users have wanted to faithfully reproduce or use observed information. Initially the main application was mapping; while that task continues today, other applications also require precise camera calibration, such as close-range 3-D measurement and many 2-D measurement tasks. In the past the cameras used were few in number and highly expensive, whereas today a typical large industrial company will have many inexpensive cameras in use for highly important measurement tasks. Cameras are used more today than they ever were, but the golden age of camera calibration for aerial mapping is now well in the past. This paper considers some of the key developments and attempts to put them into perspective; in particular, the driving forces behind each improvement are highlighted.
Article
A method of self-calibration applicable to non-metric cameras is presented and discussed in connection with various other calibration approaches. The method is extremely general and includes radially symmetric and decentering lens distortions, affinity, and non-perpendicularity of axes. Although it provides interior orientation parameters for each photograph separately, the minimum control requirement remains two horizontal and three vertical control points.
Article
In this article a new method for the calibration of a vision system consisting of two (or more) cameras is presented. The proposed method, which uses simple properties of vanishing points, is divided into two steps. In the first step, the intrinsic parameters of each camera, that is, the focal length and the location of the intersection between the optical axis and the image plane, are recovered from a single image of a cube. In the second step, the extrinsic parameters of a pair of cameras, that is, the rotation matrix and the translation vector describing the rigid motion between the coordinate systems fixed in the two cameras, are estimated from a stereo image pair of a suitable planar pattern. First, the rotation matrix is computed by matching the corresponding vanishing points in the two images; the translation vector is then estimated by means of a simple triangulation. The robustness of the method against noise is discussed, and the conditions for optimal estimation of the rotation matrix are derived. Extensive experimentation shows that the achievable precision is sufficient for machine vision tasks that require camera calibration, such as depth from stereo and motion from image sequences.
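For the first step, with square pixels and principal point p, two vanishing points v1 and v2 of orthogonal cube edges constrain the focal length through v1^T ω v2 = 0, which reduces to f^2 = -(v1 - p)·(v2 - p); a minimal sketch (the paper's exact derivation may differ):

    import numpy as np

    def focal_from_vps(v1, v2, p):
        """Focal length from two vanishing points of orthogonal scene
        directions, assuming square pixels and principal point p."""
        d = np.dot(np.asarray(v1) - p, np.asarray(v2) - p)
        if d >= 0:
            raise ValueError("vanishing points inconsistent with "
                             "orthogonal directions")
        return np.sqrt(-d)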
Article
A new technique for three-dimensional (3D) camera calibration for machine vision metrology using off-the-shelf TV cameras and lenses is described. The two-stage technique is aimed at efficient computation of the camera's external position and orientation relative to an object reference coordinate system, as well as the effective focal length, radial lens distortion, and image scanning parameters. The two-stage technique has advantages in terms of accuracy, speed, and versatility over the existing state of the art, of which a critical review is given at the outset. A theoretical framework is established, supported by comprehensive proofs in five appendixes, which may pave the way for future research on 3D robotics vision. Test results using real data are described, and both accuracy and speed are reported. The experimental results are analyzed and compared with theoretical predictions. Recent effort indicates that, with slight modification, the two-stage calibration can be done in real time.
Article
This paper presents a calibration procedure for a fish-eye lens (a high-distortion lens) mounted on a CCD TV camera. The method is designed to account for the differences in images acquired via a distortion-free lens camera setup and the images obtained by a fish-eye lens camera. The calibration procedure essentially defines a mapping between points in the world coordinate system and their corresponding point locations in the image plane. This step is important for applications in computer vision which involve quantitative measurements. The objective of this mapping is to estimate the internal parameters of the camera, including the effective focal length, one-pixel width on the image plane, image distortion center, and distortion coefficients. The number of parameters to be calibrated is reduced by using a calibration pattern with equally spaced dots and assuming a pin-hole model camera behavior for the image center, thus assuming negligible distortion at the image distortion center. Our method employs a non-linear transformation between points in the world coordinate system and their corresponding location on the image plane. A Lagrangian minimization method is used to determine the coefficients of the transformation. The validity and effectiveness of our calibration and distortion correction procedure are confirmed by application of this procedure on real images.
Conference Paper
Spherical cameras are variable-resolution imaging systems and promising devices for autonomous navigation purposes, mainly because of their wide viewing angle which increases the capabilities of vision-based obstacle avoidance schemes. In addition, spherical lenses resemble the primate eye in their projective models and are biologically relevant. However, the calibration of spherical lenses for Computer Vision is a recent research topic and current procedures for pinhole camera calibration are inadequate when applied to spherical lenses. We present a novel method for spherical-lens camera calibration which models the lens radial and tangential distortions and determines the optical center and the angular deviations of the CCD sensor array within a unified numerical procedure. Contrary to other methods, there is no need for special equipment such as low-power laser beams or non-standard numerical procedures for finding the optical center. Numerical experiments, convergence and robustness analyses are presented.
Article
The human visual system can be characterized as a variable-resolution system: foveal information is processed at very high spatial resolution whereas peripheral information is processed at low spatial resolution. Various transforms have been proposed to model spatially varying resolution. Unfortunately, special sensors need to be designed to acquire images according to existing transforms. In this work, two models of the fish-eye transform are presented. The validity of the transformations is demonstrated by fitting the alternative models to a real fish-eye lens.
Article
A method for determining the radial distortion parameters of a camera is presented. The technique is based on the analysis of distorted images of straight lines and does not require the determination of point correspondence between a scene and an image of that scene. The method is described in detail, including information on the line detection method and the optimization procedure used to estimate the distortion parameters. Quantitative and qualitative experimental results using both synthetic and real image data show that the technique is effective.
Conference Paper
We present a theory and algorithms for a generic calibration concept based on the following recently introduced general imaging model. An image is considered as a collection of pixels, and each pixel measures the light travelling along a (half-)ray in 3-space associated with that pixel. Calibration is the determination, in some common coordinate system, of the coordinates of all pixels' rays. This model encompasses most projection models used in computer vision and photogrammetry, including perspective and affine models, optical distortion models, stereo systems, and catadioptric systems, central (single viewpoint) as well as non-central ones. We propose a concept for calibrating this general imaging model, based on several views of objects with known structure acquired from unknown viewpoints. In principle, it allows cameras of any of the types contained in the general imaging model to be calibrated with one and the same algorithm. We first develop the theory and an algorithm for the most general case: a non-central camera that observes 3D calibration objects. This is then specialized to the case of central cameras and to the use of planar calibration objects. The validity of the concept is shown by experiments with synthetic and real data.
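In this model a calibrated camera is nothing more than a table of per-pixel rays; a minimal sketch of that representation:

    import numpy as np

    class GenericCamera:
        """A calibrated general camera: per-pixel ray origins and unit
        directions in a common coordinate system (non-central cameras
        allowed, since origins may differ per pixel)."""
        def __init__(self, origins, directions):
            self.origins = origins            # (H, W, 3)
            self.directions = directions      # (H, W, 3), unit length
        def ray(self, u, v):
            return self.origins[v, u], self.directions[v, u]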
Conference Paper
Most algorithms in 3D computer vision rely on the pinhole camera model because of its simplicity, whereas virtually all imaging devices introduce a certain amount of nonlinear distortion, of which the radial component is the most severe. The common approach to radial distortion is polynomial approximation, which introduces distortion-specific parameters into the camera model and requires estimation of those parameters. The task is to find a radial distortion model that allows easy undistortion as well as satisfactory accuracy. This paper presents a new radial distortion model with an easy analytical undistortion formula, which also belongs to the polynomial approximation category. Experimental results show that satisfactory accuracy is achieved with this model. An application of the new radial distortion model is non-iterative yellow-line alignment with a calibrated camera on ODIS, a robot built in our CSOIS.
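The paper's own model is not reproduced in this listing; as an illustration of the analytic-undistortion property it targets, the well-known one-parameter division model (a different, rational model, named here plainly as a stand-in) admits a direct closed-form undistortion:

    import numpy as np

    def undistort_division(pts_d, lam, centre):
        """One-parameter division model: the undistorted point is a
        direct evaluation, with no iteration or root finding."""
        xy = pts_d - centre
        r2 = np.sum(xy**2, axis=1, keepdims=True)
        return centre + xy / (1.0 + lam * r2)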
Conference Paper
For both 3-D reconstruction and prediction of image coordinates, cameras can be calibrated implicitly without involving their physical parameters. The authors present a two-plane method for such a complete calibration, which models all kinds of lens distortion. First, the modeling is done in a general case without imposing the pinhole constraint. Epipolar curves accounting for lens distortion are introduced and found in closed form. Then, a set of perspectivity constraints is derived to constrain the modeling process. With these constraints, the camera's physical parameters can be related directly to the modeling parameters. Extensive experimental comparisons of the methods with the classic photogrammetric method and Tsai's method, covering 3-D measurement, the effect of the number of calibration points, and the prediction of image coordinates, are made using real images from 15 different depth values.
Article
This paper addresses the problem of calibrating camera lens distortion, which can be significant in medium to wide-angle lenses. Our approach is based on the analysis of distorted images of straight lines. We derive new distortion measures that can be optimized using nonlinear search techniques to find the best distortion parameters that straighten these lines. Unlike other existing approaches, we also provide fast, closed-form solutions for the distortion coefficients. We prove that including both the distortion center and the decentering coefficients in the nonlinear optimization step may lead to instability of the estimation algorithm. Our approach provides a way to get around this; at the same time, it reduces the search space of the calibration problem without sacrificing accuracy and produces more stable, noise-robust results. In addition, while almost all existing nonmetric distortion calibration methods need user involvement in one form or another, we present a robust approach to distortion calibration based on the least-median-of-squares estimator. Our approach is thus able to proceed in a fully automatic manner while being less sensitive to erroneous input data, such as image curves that are mistakenly considered projections of three-dimensional linear segments. Experiments evaluating the performance of this approach on synthetic and real data are reported.
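A generic least-median-of-squares loop of the kind the paper builds on, with fit and residuals left abstract (hypothetical callables supplied by the distortion model): fitting on random minimal subsets and scoring by the median squared residual tolerates up to half the edges being outliers.

    import numpy as np

    def lmeds(edges, fit, residuals, n_trials=500, sample_size=3, rng=None):
        """Fit on random minimal subsets; keep the parameters that
        minimize the median squared residual over all edges."""
        rng = rng or np.random.default_rng()
        best, best_med = None, np.inf
        for _ in range(n_trials):
            sample = [edges[i] for i in
                      rng.choice(len(edges), sample_size, replace=False)]
            params = fit(sample)              # hypothetical model fit
            med = np.median(residuals(params, edges) ** 2)
            if med < best_med:
                best, best_med = params, med
        return best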
Article
A camera model that accounts for the major sources of camera distortion, namely radial, decentering, and thin prism distortions, is presented. The proposed calibration procedure consists of two steps: (1) the calibration parameters are estimated using a closed-form solution based on a distribution-free camera model; and (2) the parameters estimated in the first step are improved iteratively through nonlinear optimization, taking the camera distortions into account. Following minimum variance estimation, the objective function to be minimized is the mean-square discrepancy between the observed image points and their inferred image projections computed with the estimated calibration parameters. The authors introduce a type of measure that can be used to directly evaluate the performance of calibration and to compare calibrations among different systems. The validity and performance of the calibration procedure are tested with both synthetic data and real images taken by tele- and wide-angle lenses.
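For reference, the combined correction commonly written for these three distortion sources is, in normalized image coordinates (a standard formulation; k_i are radial, p_i decentering, and s_i thin-prism coefficients):

    \delta_x = x\,(k_1 r^2 + k_2 r^4) + p_1\,(r^2 + 2x^2) + 2 p_2\,x y + s_1 r^2,
    \delta_y = y\,(k_1 r^2 + k_2 r^4) + p_2\,(r^2 + 2y^2) + 2 p_1\,x y + s_2 r^2,
    \qquad r^2 = x^2 + y^2 .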
Article
For highest accuracies it is necessary in close range photogrammetry to account for the variation of lens distortion within the photographic field. A theory to accomplish this is developed along with a practical method for calibrating radial and decentering distortion of close-range cameras. This method, the analytical plumb line method, is applied in an experimental investigation leading to confirmation of the validity of the theoretical development accounting for variation of distortion with object distance.
Manual of Photogrammetry
  • C. McGlone
  • E. Mikhail
  • J. Bethel