Fig 7 - uploaded by Aaron Mavrinac
Vision graph for simulation experiments: the simulated camera network yields a relatively dense vision graph, with 2 to 14 neighbors per vertex.
Source publication
The problem of online selection of monocular view sequences for an arbitrary task in a calibrated multi-camera network is investigated. An objective function for the quality of a view sequence is derived from a novel task-oriented, model-based instantaneous coverage quality criterion and a criterion of the smoothness of view transitions over time....
Context in source publication
Context 1
... that the number of undesirable transitions U (Q) is greatly reduced without significantly reducing the instantaneous performance M(Q), demonstrating good robustness to the jitter introduced by target pose estimation noise. Figure 7 shows the vision graph generated from the multi-camera network in the simulation experiments, over an abstract R spanning the interior of the structure. The number of neighbors including the vertices themselves (and thus the number of C i evaluations necessary in Algorithm 1) ranges from 2 to 14, with an average of 7.78. ...
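The neighbor statistic described above can be computed directly from a vision graph stored as an adjacency list. The sketch below is illustrative only: the camera names and edges are hypothetical, not the network of Figure 7.

```python
# Illustrative sketch (not the paper's code): computing closed
# neighborhood sizes of a vision graph, i.e. the number of neighbors
# per vertex counting the vertex itself, which bounds the number of
# per-camera coverage evaluations needed at each vertex.

def closed_neighborhood_sizes(adjacency):
    """Number of neighbors per vertex, counting the vertex itself."""
    return {v: len(neighbors) + 1 for v, neighbors in adjacency.items()}

# Hypothetical four-camera vision graph.
vision_graph = {
    "c1": ["c2", "c3"],
    "c2": ["c1", "c3", "c4"],
    "c3": ["c1", "c2"],
    "c4": ["c2"],
}

sizes = closed_neighborhood_sizes(vision_graph)
average = sum(sizes.values()) / len(sizes)
print(sizes)                                    # {'c1': 3, 'c2': 4, 'c3': 3, 'c4': 2}
print(min(sizes.values()), max(sizes.values()), average)  # 2 4 3.0
```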
Citations
... This allows prioritizing specific points within the coverage area. To calculate and simulate the surface coverage of multi-camera systems, it is necessary to define the camera, environment, and task model [10]. A pinhole camera is used as the camera model, in which the points are projected through the optical center onto the image plane [11]. ...
... A detailed calculation is shown in [13]. To calculate and simulate the surface coverage of multi-camera systems, it is necessary to define the camera, environment, and task model [10]. A pinhole camera is used as the camera model, in which the points are projected through the optical center onto the image plane [11]. ...
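The pinhole projection mentioned in the excerpts above can be sketched in a few lines: a 3-D point in the camera frame is projected through the optical center onto the image plane. The focal length and principal point values below are made-up illustrative numbers, not parameters from the cited work.

```python
# Minimal pinhole camera projection sketch. f is the focal length in
# pixels; (cx, cy) is the principal point. Values are hypothetical.

def project_pinhole(point, f, cx, cy):
    """Project a camera-frame point (X, Y, Z), Z > 0, to pixel (u, v)."""
    X, Y, Z = point
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    u = f * X / Z + cx
    v = f * Y / Z + cy
    return u, v

print(project_pinhole((0.1, -0.05, 2.0), f=800.0, cx=320.0, cy=240.0))
# (360.0, 220.0)
```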
... To calculate the focus, two planes must be determined: the near and far planes (Figure 2b). Based on the distance of the object and the intrinsic camera parameters, two constants can be calculated, corresponding to the distances between the camera and the two planes [10,14]. Assuming that the optimal focus lies in the middle of the two planes, the optimal focus distance is defined as the average of the two. ...
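The focus heuristic described in the excerpt above reduces to a one-line midpoint calculation. The sketch below is a schematic reading of that excerpt; the near/far distances are illustrative values, not quantities from the cited work.

```python
# Sketch of the focus heuristic: given the near-plane and far-plane
# distances (derived from object distance and camera intrinsics in the
# cited work), take the optimal focus distance as their midpoint.

def optimal_focus(d_near, d_far):
    """Midpoint of the near and far planes, assumed to be the best focus."""
    if not 0 < d_near <= d_far:
        raise ValueError("require 0 < d_near <= d_far")
    return (d_near + d_far) / 2.0

print(optimal_focus(0.8, 1.2))  # 1.0
```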
Implementing “first-time-right” production processes increases a production line’s sustainability and minimizes rejects. Multi-camera systems are a key element since they can quickly detect defects without contact. However, it is still a time-consuming challenge to determine the correct number and position of cameras to achieve gapless surface monitoring of complex components. This proposal aims to develop a new software tool that automatically calculates and visualizes surface coverage by using bipolar plates as an example. With this method, 100% surface coverage inspections become feasible, and the cost of commissioning multi-camera inspection systems can be significantly decreased.
... Although triangular or pyramidal camera field-of-view models are prevalent, many other types of camera coverage models are in use [27]. Finally, it should be noted that the choice of coverage quality metric is also important for achieving reliable results [28]. ...
There have been numerous attempts at solving the optimal camera placement problem across multiple applications. Exact linear programming-based as well as heuristic combinatorial optimization methods have been shown to provide optimal or near-optimal solutions to this problem. Working over a discrete space model is the general practice when solving the camera placement problem. However, discretized environments often limit these methods to small-scale datasets, due to resource and time requirements that grow exponentially with the number of 3D points collected from the discrete space. We propose a multi-resolution approach that enables the use of existing optimization algorithms on large real-world problems modelled using high-resolution 3D grids. Our method works by grouping the given discrete set of possible camera locations into clusters of points, multiple times, resulting in multiple resolution levels. Camera placement optimization is repeated for all resolution levels while propagating the optimized solution from low to high resolutions. Our experiments on both simulated and real data, with grids of varying sizes, show that with our multi-resolution approach, existing camera placement optimization methods can be used even on high-resolution grids consisting of hundreds of thousands of points. Our results also show that the strategy of grouping points together by exploiting the underlying 3D geometry to optimize camera poses is not only significantly faster than optimizing over the entire set of samples but also provides better camera coverage.
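The grouping step at the heart of the multi-resolution idea can be sketched as a simple voxel coarsening of candidate camera locations. This is an illustrative reading of the abstract, not the authors' implementation; the cell size and points below are arbitrary.

```python
# Illustrative voxel coarsening: group candidate 3-D camera locations
# into cubic cells and keep one centroid per cell, producing a
# lower-resolution level on which an optimizer can run first.
from collections import defaultdict

def coarsen(points, cell):
    """Group 3-D points into cubic cells; return one centroid per cell."""
    buckets = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell) for c in p)
        buckets[key].append(p)
    centroids = []
    for key in sorted(buckets):
        pts = buckets[key]
        centroids.append(tuple(sum(c) / len(pts) for c in zip(*pts)))
    return centroids

candidates = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0), (2.0, 0.0, 0.0)]
print(coarsen(candidates, cell=1.0))  # [(0.25, 0.25, 0.0), (2.0, 0.0, 0.0)]
```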
... In [23], a resolution-like criterion and a FOV constraint in the multicamera coverage model are considered. In [24], almost all the criteria required in the visual coverage task are taken into account. In [25], a visual distance-based coverage model is proposed, and a parallel optimization method is presented for deploying multiple cameras in 3-D space to observe 3-D environments or objects. ...
... In the figure, O RT 0 is the frame of the rotary stage, which is fixed relative to the floor. The camera calibration process described in [24] is performed to obtain the relation between the camera frame O C and O RT 0 . The camera is also calibrated for the 3D reconstruction of the points on the profiles as described in [33]. ...
This paper presents a novel method called surface profile-guided scan for 3D surface reconstruction of unknown objects. This method benefits from the advantages of two types of sensors: one having a wide field of view but of low resolution (type I) and the other of high resolution but with a narrow field of view (type II) for the autonomous reconstruction of highly accurate 3D models. It employs a range sensor (type II) mounted on an industrial manipulator, a rotary stage, and a color camera (type I). No prior knowledge of the geometry of the object is required. The only information available is that the object is located on a rotary table and is within the field of view of the camera and in the working space of the industrial robot. The camera provides a set of vertical surface profiles around the object, which are used to obtain scan paths for the range sensor. Then, the robot manipulator moves the range sensor along the scan paths. Finally, the 3D surface model is completed by detecting and rescanning holes on the surface. The quality of the surface model obtained from real objects by the proposed 3D reconstruction method proves its effectiveness and versatility.
... While camera sensors are welcomed for their many advantages, such as compact size, low power consumption, reasonably low cost, and rich information, their limitations are also obvious, for example a limited field of view and occlusion. Hence, in many practical applications, such as visual coverage [8]-[10], object reconstruction [11], [12], and large-scale surveillance [13], multi-camera networks are highly desired to allow individual cameras to perform collaboratively, mostly through spatial deployment. Therefore, how to optimize the spatial deployment of camera sensor networks becomes a very interesting yet important problem that has attracted a lot of attention in recent years [14]. ...
... Therefore, how to optimize the spatial deployment of camera sensor networks becomes a very interesting yet important problem that has attracted a lot of attention in recent years [14]. In this regard, various coverage models have been proposed for visual field sensors, such as 2-D circular sector sensing in [15]-[18] and fish-eye cameras in [1], and 3-D coverage models taking into account the intrinsic and/or extrinsic parameters of cameras, such as resolution, field of view (FOV), focus, and occlusion, in [8], [9], [13], [19]. In particular, a comprehensive 3-D coverage model with a continuous measure is introduced in [13] and is further extended and successfully applied in many scenarios in [8], which considers both depth distance and view angle for a better physical interpretation. ...
... In this regard, various coverage models have been proposed for visual field sensors, such as 2-D circular sector sensing in [15]-[18] and fish-eye cameras in [1], and 3-D coverage models taking into account the intrinsic and/or extrinsic parameters of cameras, such as resolution, field of view (FOV), focus, and occlusion, in [8], [9], [13], [19]. In particular, a comprehensive 3-D coverage model with a continuous measure is introduced in [13] and is further extended and successfully applied in many scenarios in [8], which considers both depth distance and view angle for a better physical interpretation. In addition, a 'visual distance' is proposed in [9] to characterize the pose difference between a single camera and a target point, and the optimal deployment of multiple cameras is solved based on this performance measure. ...
In this paper, a new concept, radial coverage strength, is first proposed to characterize the visual sensing performance when the orientation of the target pose is considered. In particular, the elevation angle of the optical pose of the visual sensor is taken to decompose the visual coverage strength into effective and ineffective components, motivated by the imaging intuition. An optimization problem is then formulated for a multi-camera network to maximize the coverage of the object area based on the strength information fusion along the effective coverage strength direction through the deployment of the angle between radial coverage vector of the camera optical pose. Both simulation and experiments are conducted to validate the proposed approach and comparison with existing methods is also provided.
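The decomposition described above can be illustrated with a toy calculation: splitting a coverage strength into effective and ineffective components via the elevation angle. This is a schematic reading of the abstract, not the authors' exact formulation; the angle and strength values are made up.

```python
# Toy decomposition of a coverage strength into an "effective"
# component and an "ineffective" component, using the elevation angle
# of the camera's optical pose (hypothetical formulation).
import math

def decompose_strength(strength, elevation_rad):
    """Return (effective, ineffective) components of coverage strength."""
    effective = strength * math.cos(elevation_rad)
    ineffective = strength * math.sin(elevation_rad)
    return effective, ineffective

eff, ineff = decompose_strength(1.0, math.radians(60.0))
print(round(eff, 3), round(ineff, 3))  # 0.5 0.866
```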
... showed the method of multi-camera placement by analytical calculations [14]. The authors point out that the criteria for camera placement need adjustment [15], and suggest modifying the resolution criterion based on the distance and the AOV. The method in [14] allows automatic calibration, but requires knowledge of the shape of the object and, in our opinion, is difficult to use in a wide outdoor space. ...
The design of a multi-camera surveillance system offers many advantages; for example, it facilitates understanding of how flying objects behave in a given volume. One possible application is observing the interactions of birds and calculating their trajectories around wind turbines, to create promising systems for preventing bird collisions with turbine blades. However, there are also challenges, such as finding the optimal node placement and camera calibration. To address these challenges, we investigated a trade-off between calibration accuracy and node requirements, including resolution, modulation transfer function, field of view, and baseline angle. We developed a camera placement strategy to achieve improved coverage for golden eagle monitoring and tracking. This strategy is based on the modified resolution criterion, taking into account the contrast function of the camera and the estimation of the base angle between the cameras.
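A distance- and AOV-based resolution check of the kind discussed above can be sketched as follows. The function and all parameter values are hypothetical illustrations, not the cited papers' definitions.

```python
# Sketch of a simple resolution criterion: the scene width imaged per
# pixel at a given distance, from the angle of view (AOV) and the
# sensor's horizontal pixel count. All values are hypothetical.
import math

def ground_resolution(distance_m, aov_rad, pixels):
    """Approximate scene width imaged per pixel at a given distance."""
    scene_width = 2.0 * distance_m * math.tan(aov_rad / 2.0)
    return scene_width / pixels  # metres per pixel

# A camera 100 m away with a 40-degree AOV and 1920 horizontal pixels:
res = ground_resolution(100.0, math.radians(40.0), 1920)
print(round(res * 1000.0, 1), "mm per pixel")  # 37.9 mm per pixel
```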
... The coverage model of a field sensor depends on the physical properties of the sensor. For a camera sensor, for example, it can include resolution, field of view, view angle, and focus ([5], [6]). The effectiveness of a sensor network is evaluated to a large extent by the coverage provided by the sensor deployment. ...
... As a typical noncontact sensor, a visual camera can provide rich information about target objects or scenes at a relatively low cost, and is thus employed in various applications, such as industrial inspection [1], robot localization and navigation [2]-[4], visual servoing and tracking [5], [6], nanomanipulation [7], [8], and so on. Since a single camera has a limited sensing range, multi-camera networks are highly desired in many applications, such as precise visual inspection of industrial products [1] and large-scale surveillance and security systems [9]. In the area of multi-camera networks, one fundamental problem is the optimal deployment of multiple cameras to satisfy various task requirements. ...
... Jiang et al. [23] consider the FOV constraint and the occlusion case in their weighted coverage model; however, resolution and focus are not included in this model. Mavrinac et al. [9] proposed a new coverage model taking into account almost all realistic constraints, which has been validated and successfully applied in many scenarios, such as deployment of range cameras [12], real-time view selection for large-scale visual surveillance systems [9], and industrial inspection of 3-D objects [1]. ...
Based on a convex optimization approach, we propose a new method of multi-camera deployment for visual coverage of a 3-D object surface. In particular, the optimal placement of a single camera is first formulated as translation and rotation convex optimization problems, respectively, over a set of covered triangle pieces on the target object. The convex optimization is recursively applied to expand the covered area of the single camera, with the initially covered triangle pieces being chosen along the object boundary for the first trial through a selection criterion. Then, the same optimization procedures are applied to place the next camera, and so on. It is pointed out that our optimization approach guarantees that each camera is placed at the optimal pose, in some sense, for a group of triangles instead of a single piece. This feature, together with the selection criterion for initially covered triangles, reduces the number of operating cameras while still satisfying various constraint requirements such as resolution, field of view, blur, and occlusion. Both simulation and experimental results are presented to show the superior performance of the proposed approach compared with the results of other existing methods.
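A toy analogue of the translation step can illustrate why such placement subproblems are convex: placing a camera to minimize the sum of squared distances to a set of per-triangle target positions (each a centroid pushed out along its normal by a standoff distance) is a least-squares problem whose minimizer is simply the mean of the targets. This sketch is not the paper's formulation; all names and values are hypothetical.

```python
# Toy convex placement subproblem: minimize sum_i ||x - t_i||^2 over
# camera position x, where t_i = centroid_i + standoff * normal_i.
# The unique minimizer of this convex objective is the mean of the t_i.

def best_translation(centroids, normals, standoff):
    targets = [
        tuple(c + standoff * n for c, n in zip(cen, nor))
        for cen, nor in zip(centroids, normals)
    ]
    k = len(targets)
    return tuple(sum(t[i] for t in targets) / k for i in range(3))

centroids = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
normals = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]
print(best_translation(centroids, normals, 2.0))  # (0.5, 0.0, 2.0)
```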
... Visual coverage is important in many practical applications, such as industrial inspection [1], [2], area patrolling, three-dimensional terrain monitoring [3], [4], real-time view selection [5], and so on. Thus, it has attracted significant attention in the field of sensor networks, robotics and control. ...
... For modeling of visual coverage, many different approaches have been reported in the literature, according to various task requirements. Based on an extensive survey [7], Mavrinac and Chen propose a generic visual coverage model [1], [5], [8], which has been validated and applied in industrial inspection tasks, view selection, etc. The nature of the solution, optimal or near-optimal, depends indirectly on the properties of the coverage model. ...
... The coverage model was developed previously by Mavrinac et al. [8]. We refer the reader to the validation of the model, which is provided in [5]. ...
The deployment of multi-camera networks is usually accomplished by maximizing the covered area, while coverage strength is difficult to optimize simultaneously. We propose a coverage enhancement approach in this paper to improve the coverage strength of three-dimensional (3D) scenes, using a convex optimization approach to refine the initial deployment. The 3D scene is represented by a triangle mesh, with each triangle being a basic atomic unit. In addition, the sensing range of a single camera is modeled as a visual frustum. The Frobenius distance and Euclidean distance between a triangle unit and a visual frustum are newly introduced to reflect the camera-to-triangle coverage strength. On this basis, the coverage enhancement is accomplished by minimizing the sum of squares of these two distances. Due to the use of the Frobenius distance and Euclidean distance, it is shown that the problem can be solved by convex optimization techniques. Comparative simulation results are presented to verify the effectiveness of the proposed approach.
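The two distances named in the abstract above can be sketched on made-up data: a Frobenius distance between two 3x3 matrices (e.g. stacked triangle vertices versus a reference configuration) and a Euclidean distance between two 3-D points, combined as a sum of squares. The matrices and points below are illustrative only, not the paper's quantities.

```python
# Illustrative Frobenius and Euclidean distances, combined as a
# sum-of-squares objective of the kind the abstract describes.
import math

def frobenius_distance(A, B):
    """Frobenius norm of the elementwise difference of two matrices."""
    return math.sqrt(sum((a - b) ** 2
                         for ra, rb in zip(A, B)
                         for a, b in zip(ra, rb)))

def euclidean_distance(p, q):
    """Euclidean distance between two points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
B = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
d_f = frobenius_distance(A, B)                   # sqrt(3)
d_e = euclidean_distance((0, 0, 0), (3, 4, 0))   # 5.0
objective = d_f ** 2 + d_e ** 2
print(round(objective, 6))  # 28.0
```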
... limitation on the total number of cameras) to be satisfied. We refer the reader to literature surveys on sensor planning methods [TT95,MC13,MCT14]. ...
In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport).
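The greedy treatment that the abstract above uses as its speed baseline can be sketched as classic greedy max-coverage: repeatedly pick the candidate camera covering the most not-yet-covered points. The candidate visibility sets below are hypothetical.

```python
# Minimal greedy max-coverage baseline for camera placement: at each
# step, choose the candidate whose visibility set adds the most
# uncovered points, until the camera budget is spent.

def greedy_placement(candidates, budget):
    """candidates: dict mapping camera name -> set of covered point ids."""
    covered, chosen = set(), []
    for _ in range(budget):
        best = max(candidates, key=lambda c: len(candidates[c] - covered))
        if not candidates[best] - covered:
            break  # no candidate adds new coverage
        chosen.append(best)
        covered |= candidates[best]
    return chosen, covered

cams = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6},
}
chosen, covered = greedy_placement(cams, budget=2)
print(chosen, sorted(covered))  # ['A', 'C'] [1, 2, 3, 4, 5, 6]
```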