FIGURE 8
Source-detector projection model. A light ray from the source passes through a world point P and is projected onto the detector at point p

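The projection this caption describes can be sketched in a few lines. Below is a minimal illustration, assuming the source sits at the origin and the detector is the plane z = sdd (the source-to-detector distance); the function name and all numbers are hypothetical:

```python
import numpy as np

# Minimal sketch of the source-detector projection in Figure 8 (assumed
# geometry): the X-ray source sits at the origin, the detector is the
# plane z = sdd, and a world point P projects onto the detector where
# the source->P ray meets that plane.
def project_onto_detector(P, sdd=1000.0):
    """Project world point P (mm) onto the detector plane z = sdd."""
    P = np.asarray(P, dtype=float)
    if P[2] <= 0:
        raise ValueError("point must lie between source and detector")
    t = sdd / P[2]          # scale factor along the ray from the source
    return t * P[:2]        # 2D detector coordinates p

p = project_onto_detector([50.0, -20.0, 400.0])  # -> array([125., -50.])
```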

Source publication
Article
Hand-eye calibration enables proper perception of the environment in which a vision-guided robot operates. Additionally, it enables the mapping of the scene in the robot's frame. Proper hand-eye calibration is crucial when sub-millimetre perceptual accuracy is needed. For example, in robot-assisted surgery, a poorly calibrated robot would cause dama...

Context in source publication

Context
... the formulation of the homogeneous transform equation is perfectly suited to normal cameras, whose optics are modelled using the pinhole camera projection model. When considering vision sensors with different optics, such as X-rays with the source-detector projection model (see Figure 8), it becomes difficult to use the homogeneous transform formulation, as the typical pinhole projection model does not provide a proper representation of their optics. One way of addressing this is pose graph optimisation [76], which estimates the relative pose of an object based on a network of observed pose sequences. ...
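As a concrete illustration of the homogeneous transform formulation mentioned in this context, the sketch below builds a made-up hand-eye transform X, a robot-hand motion A, and the camera motion B consistent with them, then checks the residual of AX = XB. All values are illustrative, not taken from the article:

```python
import numpy as np

# Minimal sketch of the homogeneous transform formulation AX = XB: A is a
# relative motion of the robot hand, B the corresponding relative motion
# seen by the camera, and X the unknown hand-eye transform. All numbers
# here are made up for illustration.
def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

X = rot_z(0.3); X[:3, 3] = [0.1, 0.0, 0.05]   # assumed hand-eye transform
A = rot_z(0.7); A[:3, 3] = [0.2, -0.1, 0.0]   # robot hand motion
B = np.linalg.inv(X) @ A @ X                  # camera motion consistent with X

residual = A @ X - X @ B                      # ~0 when X solves AX = XB
print(np.abs(residual).max())                 # ~1e-16
```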

Similar publications

Article
One of the problems with industrial robots is accurately locating the pose of the end-effector. Over the years, many solutions have been studied, including static calibration and dynamic positioning. This paper presents a novel approach for pose estimation of a Hexa parallel robot. The vision system uses three simple color featu...

Citations

... The calibration of the combined vision system and robot is crucial for automatic data transformation [26]. Determining the transformation relationships among the different coordinate systems is essential for establishing and solving the typical calibration equations 'AX = XB' and 'AX = YB' [27]. ...
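For readers unfamiliar with the notation, the two equations are commonly written as follows, where A_i and B_i are the i-th measured motions or poses of the robot hand and the camera, X is the unknown hand-eye transform, and Y the unknown robot-world transform (this is the standard convention, assumed here rather than quoted from [27]):

```latex
A_i X = X B_i \qquad \text{(hand-eye)}
\qquad\qquad
A_i X = Y B_i \qquad \text{(robot-world and hand-eye)}
```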
Article
Recently, visual sensing measurement and its application in industrial robot operations have been widely researched, promoting the development of instrumentation and automation. This study proposes a combined vision sensor system for robot grasping, focusing on combined sensor system calibration and bracket pose measurements. The system configuration and working strategy of the combined vision system are introduced. Thereafter, the calibration of the combined vision coordinate systems is presented, wherein a global vision system acts as the external measuring equipment for accurately calibrating the local vision system. Furthermore, a pose estimation method using a local vision system (LVS) is proposed, including morphology-based image enhancement and principal component analysis (PCA)-based corner recognition methods. Verification experiments, including combined calibration and bracket pose measurements, were performed to validate the effectiveness and accuracy of the proposed combined vision measurement strategy. The results demonstrated that the proposed system is applicable to industrial robot grasping of brackets. In addition, the proposed robot-sensor calibration method improves calibration accuracy. Finally, the proposed corner detection method is effective and accurate for different bracket detection applications. This study provides a system that improves robot grasping results by considering key factors such as vision measurement accuracy and calibration methods.
... Increasing the calculation accuracy of mathematical approaches and increasing flexibility in usage by reducing the dependency on calibration objects have recently become major directions in the development of hand-eye calibration algorithms [1]. However, achieving high calibration accuracy is directly related to the careful design of the robot's joint poses and the usage of specialized calibration objects with well-defined features [2,3], as well as robust mathematical models that can handle noise and outliers effectively [4,5]. On the other hand, increasing flexibility in practice is directly related to minimizing the dependency of the calibration methods on calibration objects, or even eliminating the need for them altogether. ...
Article
An accurate and reliable estimation of the transformation matrix between an optical sensor and a robot is a key aspect of the hand–eye system calibration process in vision-guided robotic applications. This paper presents a novel approach to markerless hand–eye calibration that achieves streamlined, flexible, and highly accurate results, even without error compensation. The calibration procedure is mainly based on using the robot’s tool center point (TCP) as the reference point. The TCP coordinate estimation is based on the robot’s flange point cloud, considering its geometrical features. A mathematical model streamlining the conventional marker-based hand–eye calibration is derived. Furthermore, a novel algorithm for the automatic estimation of the flange’s geometric features from its point cloud, based on a 3D circle fitting, the least square method, and a nearest neighbor (NN) approach, is proposed. The accuracy of the proposed algorithm is validated using a calibration setting ring as the ground truth. Furthermore, to establish the minimal required number and configuration of calibration points, the impact of the number and the selection of the unique robot’s flange positions on the calibration accuracy is investigated and validated by real-world experiments. Our experimental findings strongly indicate that our hand–eye system, employing the proposed algorithm, enables the estimation of the transformation between the robot and the 3D scanner with submillimeter accuracy, even when using the minimum of four non-coplanar points for calibration. Our approach improves the calibration accuracy by approximately four times compared to the state of the art, while eliminating the need for error compensation. Moreover, our calibration approach reduces the required number of the robot’s flange positions by approximately 40%, and even more if the calibration procedure utilizes just four properly selected flange positions. The presented findings introduce a more efficient hand–eye calibration procedure, offering a superior simplicity of implementation and increased precision in various robotic applications.
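One ingredient the abstract names, fitting a 3D circle to the flange point cloud by least squares, can be sketched with a textbook construction: an SVD plane fit followed by an algebraic (Kåsa) circle fit in that plane. This is an assumed illustration, not the paper's algorithm, and all names are made up:

```python
import numpy as np

# Hedged sketch of a least-squares 3D circle fit: fit a plane to the rim
# points with an SVD, project the points into that plane, then solve the
# algebraic (Kasa) circle fit. Names are illustrative, not from the paper.
def fit_circle_3d(pts):
    centroid = pts.mean(axis=0)
    # plane normal = right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(pts - centroid)
    u, v, normal = vt[0], vt[1], vt[2]
    # 2D coordinates of the points within the fitted plane
    xy = np.stack([(pts - centroid) @ u, (pts - centroid) @ v], axis=1)
    # Kasa fit: x^2 + y^2 = 2*a*x + 2*b*y + c  (linear in a, b, c)
    A = np.column_stack([2 * xy, np.ones(len(xy))])
    (a, b, c), *_ = np.linalg.lstsq(A, (xy ** 2).sum(axis=1), rcond=None)
    radius = np.sqrt(c + a ** 2 + b ** 2)
    center = centroid + a * u + b * v
    return center, radius, normal
```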
... The hand-eye calibration problem is an important part of robot calibration, with wide applications in the aerospace, medical, automotive, and industrial fields [10,16]. The problem is to determine the homogeneous transformation matrix between the robot gripper and a camera mounted rigidly on the gripper, or between a robot base and the world coordinate system. ...
... The approaches based on numerical optimization include the Levenberg-Marquardt algorithm [28,42], the gradient/Newton method [11], linear matrix inequalities [12], alternative linear programming [40], and so on. For more details about solution methods for the hand-eye calibration problem, one can refer to [10,30] and the references therein. ...
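As a sketch of the numerical-optimization route this snippet mentions, the following refines a hand-eye rotation with SciPy's Levenberg-Marquardt solver so that R_Ai R_X approximates R_X R_Bi over all motion pairs; the rotation-vector parametrization and all names are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

# Sketch of the numerical-optimization route: refine the hand-eye rotation
# R_X by Levenberg-Marquardt so that R_Ai @ R_X ~= R_X @ R_Bi for all
# motion pairs. The parametrization and names are assumptions.
def residuals(rvec, RA_list, RB_list):
    RX = R.from_rotvec(rvec).as_matrix()
    res = [RA @ RX - RX @ RB for RA, RB in zip(RA_list, RB_list)]
    return np.concatenate([r.ravel() for r in res])

def refine_rotation(RA_list, RB_list, rvec0=np.zeros(3)):
    sol = least_squares(residuals, rvec0, args=(RA_list, RB_list),
                        method="lm")  # Levenberg-Marquardt
    return R.from_rotvec(sol.x).as_matrix()
```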
Article
The hand-eye calibration problem is an important application problem in robot research. Based on the 2-norm of dual quaternion vectors, we propose a new dual quaternion optimization method for the hand-eye calibration problem. The dual quaternion optimization problem is decomposed into two quaternion optimization subproblems. The first quaternion optimization subproblem governs the rotation of the robot hand. It can be solved efficiently by eigenvalue decomposition or singular value decomposition. If the optimal value of the first quaternion optimization subproblem is zero, then the system is rotationwise noiseless, i.e., there exists a “perfect” robot hand motion which meets all the testing poses rotationwise exactly. In this case, we apply the regularization technique for solving the second subproblem to minimize the distance of the translation. Otherwise, we apply the patching technique to solve the second quaternion optimization subproblem. Solving the second quaternion optimization subproblem then turns out to be solving a quadratically constrained quadratic program. In this way, we give a complete description of the solution set of hand-eye calibration problems. This is new in the hand-eye calibration literature. Numerical results are also presented to show the efficiency of the proposed method.
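The rotation subproblem described here has a well-known linear-algebra core: the constraint a_i ⊗ q = q ⊗ b_i is linear in the unit quaternion q, so the best q falls out of an eigendecomposition of an accumulated symmetric 4x4 matrix. Below is a minimal sketch of that classic construction, not the paper's dual quaternion method itself:

```python
import numpy as np

# Hedged sketch of the rotation subproblem: with unit quaternions a_i
# (hand motions) and b_i (eye motions), a_i (x) q = q (x) b_i is linear
# in q, so the best q is the eigenvector of the smallest eigenvalue of an
# accumulated symmetric 4x4 matrix. Quaternion order is (w, x, y, z).
def quat_mult_matrices(a, b):
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    L = np.array([[a0, -a1, -a2, -a3],
                  [a1,  a0, -a3,  a2],
                  [a2,  a3,  a0, -a1],
                  [a3, -a2,  a1,  a0]])   # L(a) @ q == a (x) q
    Rm = np.array([[b0, -b1, -b2, -b3],
                   [b1,  b0,  b3, -b2],
                   [b2, -b3,  b0,  b1],
                   [b3,  b2, -b1,  b0]])  # Rm(b) @ q == q (x) b
    return L, Rm

def solve_rotation(quats_a, quats_b):
    M = np.zeros((4, 4))
    for a, b in zip(quats_a, quats_b):
        L, Rm = quat_mult_matrices(a, b)
        D = L - Rm
        M += D.T @ D
    w, V = np.linalg.eigh(M)      # eigenvalues in ascending order
    return V[:, 0]                # quaternion of the hand-eye rotation
```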
... The transformation ${}^{GT}T_B$ is based on ground-truth measurements, while ${}^{FK}T_B$ emerges from forward kinematic techniques. As per Eq. (2a), $T$ signifies the pose of the ground-truth sensor relative to the end-effector, which may be derived through prevalent hand-eye calibration techniques [20]. On the other hand, $T$ represents the series of transformations from frame { } to frame { }, including $T$, a task known for its inherent complexity in robotic kinematics [16]. ...
Conference Paper
Kinematic calibration is essential for improving the accuracy of parallel robots by compensating for geometric errors. This paper presents a calibration method for spherical parallel robots using relative pose measurements. By comparing relative end-effector poses from the robot’s forward kinematics to those from a ground truth measurement system, the method identifies kinematic parameters without relying on the robot base frame. An error model based on relative poses in the SE(3) manifold is formulated. Parameters are optimized using the Levenberg-Marquardt algorithm to minimize the error between measured and estimated relative poses. The methodology is demonstrated by calibrating the ARAS-DIAMOND spherical parallel robot. Results show improved repeatability and trajectory tracking after calibration, confirming its efficacy.
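A minimal sketch of the relative-pose comparison this abstract builds on: form the relative motion between two poses from forward kinematics and from ground truth, and express their discrepancy as a 6-vector (rotation log and translation) that an optimizer such as Levenberg-Marquardt can minimize. Names and conventions are assumptions:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hedged sketch of a relative-pose error: compare the relative motion
# predicted by forward kinematics with the one observed by the ground-truth
# system, returning a 6-vector residual. Names are illustrative.
def relative_pose_error(T_fk_i, T_fk_j, T_gt_i, T_gt_j):
    rel_fk = np.linalg.inv(T_fk_i) @ T_fk_j   # relative pose, kinematics
    rel_gt = np.linalg.inv(T_gt_i) @ T_gt_j   # relative pose, ground truth
    E = np.linalg.inv(rel_gt) @ rel_fk        # identity when they agree
    rot_err = R.from_matrix(E[:3, :3]).as_rotvec()
    return np.concatenate([rot_err, E[:3, 3]])
```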
... Checkerboard detection is a fundamental tool in computer vision applications such as camera calibration [1][2][3][4][5][6], projector-camera systems [7,8], simultaneous localisation and mapping (SLAM) [9], and robotics in general [10][11][12]. This topic is of such high importance that it has received a large amount of attention from the community over the past decades and a large variety of detection methods have been developed. ...
Article
Accurate checkerboard detection is of vital importance for computer vision applications, and a variety of checkerboard detectors have been developed in the past decades. While some detectors are able to handle partially occluded checkerboards, they fail when a large occlusion completely divides the checkerboard. We propose a new checkerboard detection pipeline for occluded checkerboards that has a robust performance under varying levels of noise, blurring, and distortion, and for a variety of imaging modalities. This pipeline consists of a checkerboard detector and checkerboard enhancement with Gaussian processes (GP). By learning a mapping from local board coordinates to image pixel coordinates via a Gaussian process, we can fill in occluded corners, expand the board beyond the image borders, allocate detected corners that do not fit an initial grid, and remove noise on the detected corner locations. We show that our method can improve the performance of other publicly available state-of-the-art checkerboard detectors, both in terms of accuracy and the number of corners detected. Our code and datasets are made publicly available. The checkerboard detector pipeline is contained within our Python checkerboard detection library, called PyCBD. The pipeline itself is modular and easy to adapt to different use cases.
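The Gaussian process enhancement step described above can be imitated in a few lines with scikit-learn: learn a smooth map from local board coordinates (i, j) to pixel coordinates (u, v), then query it at an occluded corner. This is a toy illustration under assumed values, not the PyCBD implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy version of the enhancement idea: fit a GP mapping board indices
# (i, j) to pixel coordinates (u, v) from the detected corners, then
# predict the pixel location of an occluded corner. Values are made up.
ij = np.array([[0, 0], [0, 1], [0, 2], [1, 0], [1, 2], [2, 0], [2, 1], [2, 2]])
uv = np.array([[10., 10.], [10., 60.], [10., 110.],
               [60., 10.], [60., 110.],
               [110., 10.], [110., 60.], [110., 110.]])  # detected corners

gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), alpha=0.25)
gp.fit(ij, uv)
print(gp.predict(np.array([[1, 1]])))  # infill for the occluded corner (1, 1)
```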
... Hand-eye calibration is a significant technology in the fields of robot vision and control, which finds wide-ranging applications in robotics automation [1], intelligent manufacturing [2], autonomous driving [3], and medical industries [4]. In these applications, robots require accurate perception of their surrounding environment and execution of various tasks. ...
... For a robot equipped with a vision system, the process of determining the pose transformation between the robot end-effector and the visual system can be referred to as hand-eye calibration, where pose typically refers to the physical quantities used to describe the position (translation) and orientation (rotation) of the robot in its workspace. The robot end-effector can be likened to a hand and the visual system to eyes in this context. ...
... According to a classification based on solving methods, there are typically three types of solutions for robot hand-eye calibration: iterative, deep learning, and analytical [1]. While both iterative and deep learning approaches have limitations, analytical solutions have unique advantages. ...
Article
Hand-eye calibration is one of the important problems in the field of robot vision and control, aiming to determine the pose transformation between the robot end-effector and the visual system. The analytical solution of this problem has a more explicit numerical computation process and a more stable solution time compared with iterative or deep learning methods, and theoretically provides more accurate results. When using quaternion parameterization for rigid body motion rotation, the hand-eye calibration problem can be transformed into finding the eigenvector corresponding to the maximum eigenvalue of a symmetric matrix. This paper proposes an analytical solution for hand-eye calibration based on quaternion parameterization. Compared with previous methods, the advantage of this method is that the required eigenvector is obtained by solving cubic equations through the eigenvector-eigenvalue identity, without SVD or Eigendecomposition, thus making the proposed method a complete analytical solution and avoiding the computation of irrelevant information. The performance of the proposed method was evaluated through multiple experiments conducted on a synthetic dataset as well as two real-world datasets, and compared to representative analytical methods. The experimental results unequivocally demonstrate that our proposed method achieves comparable accuracy with significantly shorter solution times. Therefore, our complete analytical solution offers an efficient and accurate alternative for addressing the hand-eye calibration problem without SVD or Eigendecomposition.
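The eigenvector-eigenvalue identity the abstract invokes is presumably the one popularized by Denton, Parke, Tao, and Zhang: for a Hermitian n x n matrix A with eigenvalues lambda_i(A) and normalized eigenvectors v_i, and M_j the minor of A obtained by deleting row and column j,

```latex
|v_{i,j}|^2 \prod_{\substack{k=1 \\ k \neq i}}^{n} \bigl(\lambda_i(A) - \lambda_k(A)\bigr)
= \prod_{k=1}^{n-1} \bigl(\lambda_i(A) - \lambda_k(M_j)\bigr)
```

which lets eigenvector magnitudes be recovered from eigenvalues alone; for the 4x4 symmetric matrix of the quaternion formulation, those eigenvalues are roots of low-degree polynomials, consistent with the cubic equations mentioned above.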
... The corners of the checkerboard pattern are infinitesimally small and consistent under lens distortion. Therefore, corner detection can be performed with sufficient accuracy [10]. ...
Conference Paper
Three-dimensional (3D) localization plays a crucial role in numerous computer vision applications. While 3D localization has traditionally relied on specialized hardware setups or multiple cameras, recent advancements have explored the potential of monocular cameras for achieving 3D localization. This paper investigates and develops techniques for 3D localization in 3D space using a monocular camera. By leveraging the principles of geometric methods, particularly triangulation, the study aims to achieve accurate 3D localization. A red LED bulb is used as the object to be localized; hence, the proposed approach utilizes color thresholding to establish correspondences between multiple images. Extensive experiments are conducted using an industrial robot arm to validate the developed algorithms, evaluating their accuracy against known 3D positions. The outcomes of this research provide insights into how the accuracy of 3D localization with a geometric method changes across various camera positions.
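The triangulation at the heart of this geometric method can be sketched with the standard direct linear transform (DLT): given two known camera projection matrices and the LED's pixel coordinates in both views, the 3D point is the least-squares null vector of a stacked system. A minimal sketch with assumed names:

```python
import numpy as np

# Hedged sketch of triangulation: with the camera moved to two known poses
# (3x4 projection matrices P1, P2) and the LED detected at pixels x1, x2,
# the 3D point is the least-squares solution of the standard DLT system.
def triangulate(P1, P2, x1, x2):
    A = np.stack([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                    # null vector of the stacked system
    return X[:3] / X[3]           # dehomogenise to a 3D point
```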
... In this work, it is used to determine the rigid transformation between the controllers and the underwater camera on the camera stick. Various approaches can be found in the literature (see, e.g., Enebuse et al. (2021)). The classic approach is from Tsai and Lenz (1989), in which the pose of the hand is known from the robot or an external tracking system. ...
Article
To advance underwater computer vision and robotics from lab environments and clear water scenarios to the deep dark ocean or murky coastal waters, representative benchmarks and realistic datasets with ground truth information are required. In particular, determining the camera pose is essential for many underwater robotic or photogrammetric applications and known ground truth is mandatory to evaluate the performance of, e.g., simultaneous localization and mapping approaches in such extreme environments. This paper presents the conception, calibration, and implementation of an external reference system for determining the underwater camera pose in real time. The approach, based on an HTC Vive tracking system in air, calculates the underwater camera pose by fusing the poses of two controllers tracked above the water surface of a tank. It is shown that the mean deviation of this approach to an optical marker-based reference in air is less than 3 mm and 0.3°. Finally, the usability of the system for underwater applications is demonstrated.
... $p_C$ can be directly obtained from the point cloud provided by the camera, while ${}^{0}T_{EE}$ can be determined from the robot's internal sensors. The calculation of $X$ requires the use of a hand-eye calibration procedure [17]. We employ the method proposed by Park et al. [18] to determine $X$, allowing for the transformation of the points of the point cloud into the inertial coordinate frame, as given in Eq. (1). Projection onto a common plane: To determine whether or not predicted instances from different viewpoints correspond to the same instrument, a matching process must be applied. ...
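The transformation chain in this excerpt amounts to two homogeneous-matrix products per point, p_0 = 0_T_EE X p_C. A minimal sketch, with frame conventions assumed rather than taken from the paper:

```python
import numpy as np

# Hedged sketch of the transformation chain: a point cloud in the camera
# frame is mapped into the robot's base/inertial frame through the hand-eye
# transform X and the end-effector pose T_0_EE, in homogeneous coordinates.
def cloud_to_base(points_C, T_0_EE, X):
    pts_h = np.hstack([points_C, np.ones((len(points_C), 1))])  # N x 4
    return (T_0_EE @ X @ pts_h.T).T[:, :3]                      # N x 3
```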
Article
Purpose: A basic task of a robotic scrub nurse is surgical instrument detection. Deep learning techniques could potentially address this task; nevertheless, their performance is subject to some degree of error, which could render them unsuitable for real-world applications. In this work, we aim to demonstrate how the combination of a trained instrument detector with an instance-based voting scheme that considers several frames and viewpoints is enough to guarantee a strong improvement in the instrument detection task. Methods: We exploit the typical setup of a robotic scrub nurse to collect RGB data and point clouds from different viewpoints. Using trained Mask R-CNN models, we obtain predictions from each view. We propose a multi-view voting scheme based on predicted instances that combines the gathered data and predictions to produce a reliable map of the location of the instruments in the scene. Results: Our approach reduces the number of errors by more than 82% compared with the single-view case. On average, the data from five viewpoints are sufficient to infer the correct instrument arrangement with our best model. Conclusion: Our approach can drastically improve an instrument detector's performance. Our method is practical and can be applied during an actual medical procedure without negatively affecting the surgical workflow. Our implementation and data are made available for the scientific community ( https://github.com/Jorebs/Multi-view-Voting-Scheme ).
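The voting idea can be imitated with a toy scheme: group detections from several viewpoints by 3D proximity and keep the majority label per group. The threshold and structure below are assumptions, not the paper's implementation:

```python
import numpy as np
from collections import Counter

# Toy instance-based multi-view vote: detections from several viewpoints
# (3D position, predicted label) are grouped by proximity and each group
# keeps its majority label. The radius is an assumed value.
def vote(detections, radius=0.03):
    groups = []                      # list of (positions, labels)
    for pos, label in detections:
        for positions, labels in groups:
            if np.linalg.norm(np.mean(positions, axis=0) - pos) < radius:
                positions.append(pos); labels.append(label)
                break
        else:
            groups.append(([pos], [label]))
    return [(np.mean(p, axis=0), Counter(l).most_common(1)[0][0])
            for p, l in groups]
```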
... In the last decade, emerging industrial applications have required robots to identify randomly placed workpieces on conveyors, in stacks, and in pallets faster and more accurately, and the combination of machine vision technology with robotics to help automated systems handle these workpieces has become increasingly widespread in industry [5,6]. Vision-guided robots (VGR) are rapidly becoming a key enabling technology for industrial automation [7]. Given the large number of smartphone assembly parts and the complexity of the assembly process, this paper takes the middle-frame parts used in mobile phone assembly as its object of study and addresses the problem of rapid identification, positioning, and grasping during loading and unloading in automated mobile phone assembly. ...
Article
With the increasing automation of mobile phone assembly, industrial robots are gradually being used in production lines for loading and unloading operations. At present, industrial robots are mainly used in an online teaching mode, in which the robot's movement and path are set by teaching in advance and the robot then repeats the point-to-point operation. This mode of operation is inflexible and demands a high level of expertise in teaching and offline programming. When positioning and grasping different materials, the adjustment time is long, which reduces the efficiency of production changeover. To solve the problem of the poor adaptability of loading robots to differentiated products in mobile phone automatic assembly lines, it is necessary to quickly adjust the positioning and grasping of different models of mobile phone middle frames. Therefore, this paper proposes a highly adaptive grasping and positioning method for vision-guided right-angle robots.