Source publication
Modeling the coverage of a sensor network is an important step in a number of design and optimization techniques. The nature of vision sensors presents unique challenges in deriving such models for camera networks. A comprehensive survey of geometric and topological coverage models for camera networks from the literature is presented. The models ar...
Contexts in source publication
Context 1
... topology is ideally derived from both the geometric coverage (at some level of granularity) and from the agent dynamics model, capturing information not present in either the geometric coverage model or the overlap topology. Figure 1 illustrates the ideal hierarchy of information. All three types of model can be estimated directly from various forms of captured visual information in the absence of some or all of the primary sources. ...
Context 2
... (b) Fig. 10 Possible cases of transition: dark ellipses denote entry and exit zones, and the dotted line indicates the agent ...
Context 3
... is a question as to how transitions between cameras with overlapping coverage should be handled in transition models. Referring to the example agent paths in Fig. 10, it is clear how to handle the transition between non-overlapping cameras shown in Fig. 10a, as the surveyed methods unanimously agree: an arc from A or its exit zone to B or its entry zone, with a positive transit duration. However, in the transition between overlapping cameras shown in Fig. 10b, the agent passes through the entry ...
Context 4
... is a question as to how transitions between cameras with overlapping coverage should be handled in transition models. Referring to the example agent paths in Fig. 10, it is clear how to handle the transition between non-overlapping cameras shown in Fig. 10a, as the surveyed methods unanimously agree: an arc from A or its exit zone to B or its entry zone, with a positive transit duration. However, in the transition between overlapping cameras shown in Fig. 10b, the agent passes through the entry zone of B before passing through the exit zone of A, and the agent is observed by one or both ...
Context 5
... models. Referring to the example agent paths in Fig. 10, it is clear how to handle the transition between non-overlapping cameras shown in Fig. 10a, as the surveyed methods unanimously agree: an arc from A or its exit zone to B or its entry zone, with a positive transit duration. However, in the transition between overlapping cameras shown in Fig. 10b, the agent passes through the entry zone of B before passing through the exit zone of A, and the agent is observed by one or both cameras during the entire transition. Transitions from one entry or exit zone to another within a single camera's coverage can be thought of as a special case of this scenario. Ellis et al. (2003); Makris et ...
Context 6
... a single camera's coverage can be thought of as a special case of this scenario. Ellis et al. (2003); Makris et al. (2004) deal with the overlapping case as with the non-overlapping case. For a given departure event at time t1, they check for arrival events at time t2 ∈ [t1 − T, t1 + T], where T is a temporal search window. Thus, in Fig. 10a, t2 > t1, whereas in Fig. 10b, t2 < t1. The advantage of this approach is that it does not require prior estimation of overlap topology, and uses a single process to estimate transition topology for a general-case camera network with overlapping and/or non-overlapping cameras. Stauffer (2005) argues that the overlapping case is ...
Context 7
... can be thought of as a special case of this scenario. Ellis et al. (2003); Makris et al. (2004) deal with the overlapping case as with the non-overlapping case. For a given departure event at time t1, they check for arrival events at time t2 ∈ [t1 − T, t1 + T], where T is a temporal search window. Thus, in Fig. 10a, t2 > t1, whereas in Fig. 10b, t2 < t1. The advantage of this approach is that it does not require prior estimation of overlap topology, and uses a single process to estimate transition topology for a general-case camera network with overlapping and/or non-overlapping cameras. Stauffer (2005) argues that the overlapping case is best handled by more robust ...
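The temporal-window scheme described in this excerpt lends itself to a simple implementation. The following is a minimal sketch, assuming hypothetical lists of timestamped departure and arrival events per entry/exit zone; the zone names, window length, and bin width are illustrative and not taken from the surveyed papers:

```python
from collections import defaultdict

# Hypothetical event records: (zone_id, timestamp), where zone ids combine a
# camera and one of its entry/exit zones, e.g. "A.exit1" or "B.entry1".
departures = [("A.exit1", 10.0), ("A.exit1", 52.3), ("C.exit1", 60.1)]
arrivals   = [("B.entry1", 13.5), ("B.entry1", 55.9), ("B.entry1", 63.0)]

def transition_histogram(departures, arrivals, window=8.0, bin_width=1.0):
    """Count arrival events within +/- window seconds of each departure.

    Following the temporal-window idea of Ellis et al. (2003) / Makris et al.
    (2004): no prior overlap topology is required, and negative delays
    (arrival before departure) capture the overlapping case of Fig. 10b.
    """
    hist = defaultdict(int)
    for exit_zone, t1 in departures:
        for entry_zone, t2 in arrivals:
            delay = t2 - t1
            if -window <= delay <= window:
                delay_bin = round(delay / bin_width) * bin_width
                hist[(exit_zone, entry_zone, delay_bin)] += 1
    return hist

# Zone pairs with a consistent histogram peak are candidate transition arcs;
# the peak's delay estimates the transit duration (positive or negative).
for (src, dst, delay), count in sorted(transition_histogram(departures, arrivals).items()):
    print(f"{src} -> {dst}: delay {delay:+.1f}s, count {count}")
```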
Similar publications
In this paper, an accurate 3D model analysis of a circular feature is built with error compensation for robot vision. We propose an efficient method of fitting ellipses to data points by minimizing the algebraic distance subject to the constraint that a conic should be an ellipse and solving the ellipse parameters through a direct ellipse fitting m...
The HaiYang-1C coastal zone imager (CZI) consists of two independent cameras with a total image swath of approximately 1000 km. In order to obtain precise imaging parameters of the CZI cameras, a feasible in-orbit geometric calibration approach with multiple fields is presented. First, the master CCD is calibrated with a calibration field. Then, th...
The paper describes a method for the calibration of close range photogrammetric stations consisting of solid-state cameras. Both the calibration of the digitizing of the analog video signal and the determination of the systematic errors, the inner and the outer orientation of each camera of the station are discussed.
The difficulty attached in the...
Compared with geometric stereo vision based on triangulation principle, photometric stereo method has advantages in recovering per-pixel surface details. In this paper, we present a practical 3D imaging system by combining the near-light photometric stereo and the speckle-based stereo matching method. The system is compact in structure and suitable...
In this paper we introduce a new geometric calibration algorithm, and a geometric method of 3D reconstruction using a panoramic microwave radar and a camera. These two sensors are complementary, considering the robustness to environmental conditions and depth detection ability of the radar on one hand, and the high spatial resolution of a vision se...
Citations
... They proposed an algorithm based on a Minimum Spanning Tree to find a maximal support path between any pair of points in the monitored region. Coverage problems for CSNs have recently received increasing research attention, as surveyed in [21] by Mavrinac and Chen. A new concept, full-view coverage, in which a target is full-view covered by a camera sensor only if it is guaranteed to be captured no matter which direction it faces, was proposed in [22] by Wang et al. ...
... Breach coverage improvement by applying four additional sensors. Fig. 7: Breach coverage improvement compared with Gau [21] ...
Wireless ad hoc sensor networks have recently emerged as a premier research topic. They have great long-term economic potential and the ability to transform our lives, and they pose many new system-building challenges. Sensor networks also pose a number of new conceptual and optimization problems. Most research in wireless sensor networks focuses on obtaining better target coverage in order to reduce the energy and cost of the network. The problem of planar target analysis is one of the crucial problems that should be considered while studying the coverage problem of sensor networks. By combining computational geometry and graph-theoretic techniques, specifically the Voronoi diagram and graph search algorithms, this paper introduces a novel sensor network coverage model that deals with the plane target problem based on Clifford algebra, a powerful coordinate-free tool. The calculations of the node coverage rate for the plane target in the sensor network using Clifford algebra are also presented. Then, the maximum clearance path (worst-case coverage) of the sensor network for a plane target is proposed. The optimality and reliability of the proposed algorithm have been demonstrated using simulation, and a comparison between the breach weight of the point target and the plane target is provided.
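As context for the Voronoi-plus-graph-search combination mentioned in this abstract, here is a minimal sketch of the classic maximal-breach (worst-case coverage) computation for point sensors; the sensor positions, start/goal points, and the use of scipy are illustrative assumptions, and the paper's Clifford-algebra formulation and plane-target extension are not reproduced here:

```python
import heapq
import numpy as np
from scipy.spatial import Voronoi, cKDTree

def maximal_breach_clearance(sensors, start, goal):
    """Approximate the maximal breach (worst-case coverage) value.

    Classic Voronoi-based construction from the worst-case coverage
    literature: an intruder maximizing its distance to the nearest sensor
    can restrict itself to Voronoi edges, so we search the Voronoi graph
    for the path whose minimum clearance is largest.
    """
    sensors = np.asarray(sensors, dtype=float)
    vor = Voronoi(sensors)
    tree = cKDTree(sensors)

    # Clearance of each finite Voronoi vertex = distance to nearest sensor.
    clearance, _ = tree.query(vor.vertices)

    # Attach start and goal to their nearest finite Voronoi vertices.
    vtree = cKDTree(vor.vertices)
    _, s = vtree.query(start)
    _, g = vtree.query(goal)

    # Widest-path (maximin) Dijkstra over the finite Voronoi edges.
    adj = {}
    for a, b in vor.ridge_vertices:
        if a >= 0 and b >= 0:
            w = min(clearance[a], clearance[b])
            adj.setdefault(a, []).append((b, w))
            adj.setdefault(b, []).append((a, w))
    best = {s: clearance[s]}
    heap = [(-clearance[s], s)]
    while heap:
        negw, u = heapq.heappop(heap)
        if u == g:
            return -negw
        if -negw < best.get(u, 0.0):
            continue
        for v, w in adj.get(u, []):
            cand = min(-negw, w)
            if cand > best.get(v, 0.0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v))
    return best.get(g)

# Example: five sensors, intruder crossing the field from left to right.
print(maximal_breach_clearance(
    sensors=[(2, 2), (2, 7), (5, 5), (8, 3), (8, 8)],
    start=(0, 5), goal=(10, 5)))
```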
... The VPP has been the focus of research over the last three decades within a wide range of vision tasks that demand the computation of generalized viewpoints. A broad overview of the overall progress, challenges, and applications of the VPP is provided in various surveys [2,5-9]. Depending on the a priori knowledge required, the approaches for viewpoint planning can be roughly classified into model-based and non-model-based approaches. ...
... If any element of T* does not fulfill condition (8), then k is increased by 1. If a minimal number of clusters can be estimated beforehand, this can be given as an initial value to optimize the search process; otherwise, k = 1 should be assumed. ...
The efficient computation of viewpoints for solving vision tasks comprising multi-features (regions of interest) represents a common challenge that any robot vision system (RVS) using range sensors faces. The characterization of valid and robust viewpoints is even more complex within real applications that require the consideration of various system constraints and model uncertainties. Hence, to address some of the challenges, our previous work outlined the computation of valid viewpoints as a geometrical problem and proposed feature-based constrained spaces (C-spaces) to tackle this problem efficiently for acquiring one feature. The present paper extends the concept of C-spaces to consider multi-feature problems using feature cluster constrained spaces (GC-spaces). A GC-space represents a closed-form, geometrical solution that provides an infinite set of valid viewpoints for acquiring a cluster of features satisfying diverse viewpoint constraints. Furthermore, the current study outlines a generic viewpoint planning strategy based on GC-spaces for solving vision tasks comprising multi-feature scenarios effectively and efficiently. The applicability of the proposed framework is validated on two different industrial vision systems used for dimensional metrology tasks.
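The incremental search over the cluster count k described in the excerpt above can be sketched as follows; the use of k-means and the diameter-based validity check are stand-ins for the paper's clustering method and its condition (8), chosen only for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_features(features, is_valid_cluster, k_min=1, k_max=None):
    """Increase the number of clusters k until every cluster is valid.

    `is_valid_cluster` is a stand-in for the paper's condition (8), e.g. a
    check that a single viewpoint can acquire all features in the cluster.
    """
    features = np.asarray(features, dtype=float)
    k_max = k_max or len(features)
    for k in range(k_min, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
        clusters = [features[labels == i] for i in range(k)]
        if all(is_valid_cluster(c) for c in clusters):
            return clusters
    # Fall back to one cluster per feature if no smaller k satisfies the condition.
    return [features[i:i + 1] for i in range(len(features))]

# Toy validity check: cluster diameter below a hypothetical sensor field of view.
def small_enough(cluster, max_diameter=3.0):
    d = cluster[:, None, :] - cluster[None, :, :]
    return np.linalg.norm(d, axis=-1).max() <= max_diameter

pts = np.array([[0, 0], [1, 0], [0, 1], [10, 10], [11, 10], [10, 11]])
print(len(cluster_features(pts, small_enough)))  # -> 2 clusters
```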
... Optimizing coverage induced by camera placement has been investigated using various modeling techniques for a very long time, see [14] for a survey and [13] for a more recent thesis. The geometric computation of shadows on a surface (in the sense of day-to-day language) produced in three dimensions by a light at a certain position and some occluding 3D-objects dates back to at least 1977 [7]. ...
The visible-volume function assigns to a configuration of cameras and a flexible environment in a convex room the volume that can be supervised by the cameras. It is of interest to configure the cameras and the environment in such a way that the visible volume is maximized. Some methods of global optimization can benefit from desirable analytic properties of the visible-volume function. Earlier work has only considered this function in dimensions two and three or for static environments implicitly defined by level-set functions. In this paper it is shown that the visible-volume function for a flexible environment modeled explicitly by a parametrized simplicial complex is continuous, piecewise rational, locally Lipschitz, and semi-algebraic in all dimensions.
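To make the visible-volume function concrete, here is a minimal 2D sketch that estimates the area visible from a camera in a rectangular room with a single movable line-segment occluder standing in for the parametrized simplicial complex; the room size, camera position, and the Monte Carlo estimation are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def segment_blocks(p, q, a, b):
    """Return True if segment a-b properly intersects the segment from p to q."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2 = cross(a, b, p), cross(a, b, q)
    d3, d4 = cross(p, q, a), cross(p, q, b)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def visible_area(camera, occluder, room=(10.0, 10.0), samples=20000, rng=None):
    """Monte Carlo estimate of the area visible from `camera` in a rectangular
    room, with one line-segment occluder. Moving the occluder's endpoints
    changes the visible area smoothly, illustrating what the visible-volume
    function measures."""
    rng = rng or np.random.default_rng(0)
    pts = rng.uniform((0.0, 0.0), room, size=(samples, 2))
    visible = sum(not segment_blocks(camera, tuple(p), *occluder) for p in pts)
    return visible / samples * room[0] * room[1]

cam = (1.0, 5.0)
for y in (3.0, 4.0, 5.0):  # move one endpoint of the occluding segment
    print(visible_area(cam, occluder=((5.0, y), (5.0, 9.0))))
```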
... Over the last three decades, the VPP has been investigated within a wide range of vision tasks that integrate an imaging device, but not necessarily a robot, and require the computation of generalized viewpoints. For a broad overview of the overall progress, challenges, and applications of the VPP, we refer to the various surveys [2,4,7-10] that have been published. ...
... The formulation of a generalized viewpoint as given by Equation (8) can be considered one of the most straightforward formulations for solving the VGP, if for each viewpoint constraint a Boolean condition can be expressed. For instance, by introducing such cost functions for different viewpoint constraints, several works [9,21,36,37,49-51] demonstrated that optimization algorithms (e.g., greedy, genetic, or even reinforcement learning algorithms) can be used to find local and global optimal solutions in polynomial time. ...
The efficient computation of viewpoints while considering various system and process constraints is a common challenge that any robot vision system is confronted with when trying to execute a vision task. Although fundamental research has provided solid and sound solutions for tackling this problem, a holistic framework that poses its formal description, considers the heterogeneity of robot vision systems, and offers an integrated solution remains unaddressed. Hence, this publication outlines the generation of viewpoints as a geometrical problem and introduces a generalized theoretical framework based on Feature-Based Constrained Spaces (C-spaces) as the backbone for solving it. A C-space can be understood as the topological space that a viewpoint constraint spans, where the sensor can be positioned for acquiring a feature while fulfilling the constraint. The present study demonstrates that many viewpoint constraints can be efficiently formulated as C-spaces, providing geometric, deterministic, and closed solutions. The introduced C-spaces are characterized based on generic domain and viewpoint constraints models to ease the transferability of the present framework to different applications and robot vision systems. The effectiveness and efficiency of the concepts introduced are verified on a simulation-based scenario and validated on a real robot vision system comprising two different sensors.
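A minimal sketch of the Boolean-constraint formulation and greedy selection mentioned in the excerpts above follows; the 1D features, candidate viewpoints, and the single field-of-view check are hypothetical, and a real system would plug in its full set of distance, incidence, and occlusion constraints:

```python
def satisfies_constraints(viewpoint, feature, constraints):
    """A viewpoint is admissible for a feature only if every Boolean
    viewpoint constraint (distance, incidence angle, occlusion, ...) holds."""
    return all(check(viewpoint, feature) for check in constraints)

def greedy_viewpoint_selection(viewpoints, features, constraints):
    """Greedy cover: repeatedly pick the candidate viewpoint that covers the
    most uncovered features, a common baseline for this kind of formulation."""
    uncovered = set(range(len(features)))
    plan = []
    while uncovered:
        best, best_covered = None, set()
        for v in viewpoints:
            covered = {i for i in uncovered
                       if satisfies_constraints(v, features[i], constraints)}
            if len(covered) > len(best_covered):
                best, best_covered = v, covered
        if not best_covered:          # remaining features cannot be covered
            break
        plan.append(best)
        uncovered -= best_covered
    return plan, uncovered

# Toy 1D example: features on a line, each viewpoint covers a +/- 2.0 range.
features = [0.0, 1.0, 3.5, 8.0]
viewpoints = [0.5, 3.0, 7.5]
constraints = [lambda v, f: abs(v - f) <= 2.0]     # hypothetical FOV check
plan, missed = greedy_viewpoint_selection(viewpoints, features, constraints)
print(plan, missed)   # e.g. [0.5, 3.0, 7.5] with no missed features
```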
... Camera positioning is one of the most significant issues restricting the use of optical CMSs to experienced operators [179]. CMSs [179-183] are often application specific, and the number of cameras is given in advance. ...
Camera based methods for optical coordinate metrology are growing in popularity due to their non-contact probing technique, fast data acquisition time, high point density and high surface coverage. However, these optical approaches are often highly user dependent, have high dependence on accurate system characterisation, and can be slow in processing the raw data acquired during measurement. Machine learning approaches have the potential to remedy the shortcomings of such optical coordinate measurement systems. The aim of this thesis is to remove dependence on the user entirely by enabling full automation and optimisation of optical coordinate measurements for the first time. A novel software pipeline is proposed, built, and evaluated which will enable automated and optimised measurements to be conducted. No such automated and optimised system for performing optical coordinate measurements currently exists. The pipeline can be roughly summarised as follows:
intelligent characterisation -> view planning -> object pose estimation -> automated data acquisition -> optimised reconstruction.
Several novel methods were developed in order to enable the embodiment of this pipeline. This pipeline enables an inexperienced user to place a part anywhere in the measurement volume of a system and, from the part’s associated CAD data, the system will perform an optimal measurement without the need for any user input. Each new method which was developed as part of this pipeline has been validated against real experimental data from current measurement systems and shown to be effective.
In future work, a possible hardware integration of the methods developed in this thesis is presented, although the creation of this hardware is beyond the scope of this thesis.
... Over the last three decades, the VPP has been investigated within a wide range of vision tasks that integrate an imaging device, but not necessarily a robot, and require the computation of generalized viewpoints. For a broad overview of the overall progress, challenges, and applications of the VPP, we refer to the various surveys [2,4,7-10] that have been published. ...
... Considering a multi-feature scenario, where more than one feature can be acquired from the same sensor pose, we assume that all viewpoint constraints for each feature must be satisfied within the same viewpoint. For instance, by introducing such cost functions for different viewpoint constraints, several works [9,21,36,37,46-48] demonstrated that optimization algorithms (e.g., greedy, genetic, or even reinforcement learning algorithms) can be used to find local and global optimal solutions in polynomial time. ...
The efficient computation of viewpoints under consideration of various system and process constraints is a common challenge that any robot vision system is confronted with when trying to execute a vision task. Although fundamental research has provided solid and sound solutions for tackling this problem, a holistic framework that poses its formal description, considers the heterogeneity of robot vision systems, and offers an integrated solution remains unaddressed. Hence, this publication outlines the generation of viewpoints as a geometrical problem and introduces a generalized theoretical framework based on Feature-Based Constrained Spaces ($\mathcal{C}$-spaces) as the backbone for solving it. A $\mathcal{C}$-space can be understood as the topological space that a viewpoint constraint spans, where the sensor can be positioned for acquiring a feature while fulfilling the regarded constraint. The present study demonstrates that many viewpoint constraints can be efficiently formulated as $\mathcal{C}$-spaces providing geometric, deterministic, and closed solutions. The introduced $\mathcal{C}$-spaces are characterized based on generic domain and viewpoint constraints models to ease the transferability of the present framework to different applications and robot vision systems. The effectiveness and efficiency of the concepts introduced are verified on a simulation-based scenario and validated on a real robot vision system comprising two different sensors.
... The problem of 2D coverage planning was also investigated in the context of camera networks [22], [23], with the main objective being the optimal control and placement of cameras for full visual coverage of the monitored space. The majority of the related work discussed so far transforms the inspection/coverage planning problem into a path planning problem by first decomposing the area/object of interest into a number of non-overlapping cells, which are then connected with a path-finding algorithm to form the robot's path. ...
Nowadays, unmanned aerial vehicles or UAVs are being used for a wide range of tasks, including infrastructure inspection, automated monitoring and coverage. This paper investigates the problem of 3D inspection planning with an autonomous UAV agent which is subject to dynamical and sensing constraints. We propose a receding horizon 3D inspection planning control approach for generating optimal trajectories which enable an autonomous UAV agent to inspect a finite number of feature-points scattered on the surface of a cuboid-like structure of interest. The inspection planning problem is formulated as a constrained open-loop optimal control problem and is solved using mixed integer programming (MIP) optimization. Quantitative and qualitative evaluation demonstrates the effectiveness of the proposed approach.
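As a minimal illustration of the decompose-then-connect strategy described in the excerpt above, the following sketch decomposes a rectangle into grid cells and connects the free cells with a boustrophedon (lawnmower) sweep; the area dimensions, cell size, and obstacle map are hypothetical, and the dynamical and sensing constraints handled by the paper's MIP formulation are ignored:

```python
def lawnmower_coverage(width, height, cell, obstacles=frozenset()):
    """Decompose a rectangular area into non-overlapping grid cells and
    connect the free cells with a boustrophedon sweep.

    `obstacles` is a set of (ix, iy) cell indices to skip (hypothetical map).
    Returns the ordered list of cell-centre waypoints."""
    nx, ny = int(width // cell), int(height // cell)
    path = []
    for iy in range(ny):
        xs = range(nx) if iy % 2 == 0 else range(nx - 1, -1, -1)  # alternate sweep direction
        for ix in xs:
            if (ix, iy) not in obstacles:
                path.append(((ix + 0.5) * cell, (iy + 0.5) * cell))  # visit the cell centre
    return path

# 6 m x 4 m area, 1 m cells, two blocked cells.
for waypoint in lawnmower_coverage(6, 4, 1.0, obstacles={(2, 1), (3, 1)}):
    print(waypoint)
```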
... As the 3D model of the product to be inspected is known, the problem becomes one of model-based view planning. The purpose is to find the optimal multi-camera placement that minimizes the sensing cost under a coverage constraint [12]. The cost here is mainly the number of viewpoints required. ...
... Traditional solutions on model-based view planning mainly consider the field-of-view and occlusion issues [10,12,14]. This is reasonable in 3D scanning applications, where the purpose is to obtain the point cloud of the inspected product. ...
... The view planning problem has been studied for years for different applications [18]. There are two sub-problems in solving the VPP [12,19]. First, the visibility of the product surface for a given viewpoint should be measured. ...
Machine vision, especially deep learning methods, has become a hot topic for product surface inspection. In practice, capturing high-quality images is the basis for defect detection. This turns out to be challenging for complex products, as image quality suffers from occlusion, illumination, and other issues. Multiple images from different viewpoints are often required in this scenario to cover all the important areas of the products. Reducing the number of viewpoints while ensuring coverage is the key to making the inspection system more efficient in production. This paper proposes a highly efficient view planning method based on deep reinforcement learning to solve this problem. First, a visibility estimation method is developed so that the visible areas can be quickly identified for a given viewpoint. Then, a new reward function is designed, and the Asynchronous Advantage Actor-Critic method is applied to solve the view planning problem. The effectiveness and efficiency of the proposed method are verified with a set of experiments. The proposed method could also potentially be applied to other similar vision-based tasks.
... In some other applications, such as coverage optimization and formation control, the model of the geometrical constraint is considered. For example, in Mavrinac and Chen (2013); Zhang et al. (2018); Gusrialdi et al. (2008), the geometrical constraint of cameras is modeled and considered in the parameterized cost function for coverage optimization, while a formation control method considering the geometrical property of sensors is proposed in Li et al. (2018). This paper focuses on designing an observer that takes the explicit geometrical constraint of the visual sensor into account. ...
Onboard visual sensing has been widely used in unmanned ground vehicles (UGVs) and/or unmanned aerial vehicles (UAVs), which can be modeled as dynamic systems on SE(3). The onboard sensing outputs of the dynamic system can usually be applied to derive the relative position between the feature marks and the system, but they are subject to an explicit geometrical constraint. Such a visual geometrical constraint makes the design of the visual observer on SE(3) very challenging, as it causes a time-varying or switching visible set due to the varying number of feature marks in this set along different trajectories. Moreover, the possibility of mis-identified feature marks and modeling uncertainties might result in a divergent estimation error. This paper proposes a new robust observer design method that can accommodate these uncertainties from onboard visual sensing. The key design idea for this observer is to estimate the visible set and identify the mis-identified features from the measurements. Based on the identified uncertainties, a switching strategy is proposed to ensure bounded estimation error for any given trajectory over a fixed time interval. Simulation results are provided to demonstrate the effectiveness of the proposed robust observer.
... Strategic positioning of camera units within their target vicinity will increase their overall performance and minimize potential costs [20]. This demonstrates that research about camera modeling is continuously becoming more relevant in designing surveillance networks [21]. ...
The installation of CCTV cameras monitoring the street sections of one of the most visited areas of Manila may serve as a deterrent against theft, crime, abduction, and even acts of lasciviousness. Furthermore, the redundant orientations of some of the units in the system were recognized as possible inhibitors of the efficiency of the local surveillance system. In line with this, the study proposed a model for CCTV camera placement in Intramuros by representing the community as a graph in a 2-dimensional space. The paper presents a two-phase approach to determining the best placements of CCTV cameras. Phase I handled the selection of ideal installation spots as a set-covering problem, while Phase II identified the optimal CCTV orientation using the proposed algorithm. In Phase I, a binary integer programming model was formulated and solved using the data solver function of Microsoft Excel. The algorithm designed in Phase II was based on greedy heuristics, utilizing the results of Phase I to identify the optimal orientation of the CCTV units. Findings suggest that, out of the seventeen candidate locations, nine are optimal for CCTV installation. A total of twenty-three CCTV units are required to cover all the entry and exit points of the streets in district 5 of Intramuros. The proposed algorithm produced two optimal solutions, A and B. Comparison with the existing CCTV system in the district and discussion of each optimal installation suggested that result B is better than A. Recommendations based on the results of the study were addressed to the authorities of district 5 for immediate implementation.
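The Phase I set-covering formulation can be sketched with an open-source MILP front end in place of the Excel data solver used in the study; the candidate spots and the entry/exit points they cover below are hypothetical, not the Intramuros data:

```python
import pulp  # open-source MILP front end, a stand-in for the Excel solver used in the study

# Hypothetical data: which street entry/exit points each candidate spot can cover.
covers = {
    "spot1": {"p1", "p2"},
    "spot2": {"p2", "p3", "p4"},
    "spot3": {"p4", "p5"},
    "spot4": {"p1", "p5"},
}
points = set().union(*covers.values())

# Set-covering model: minimise the number of installation spots subject to
# every entry/exit point being covered at least once.
prob = pulp.LpProblem("cctv_placement", pulp.LpMinimize)
x = {s: pulp.LpVariable(f"x_{s}", cat="Binary") for s in covers}
prob += pulp.lpSum(x.values())
for p in points:
    prob += pulp.lpSum(x[s] for s in covers if p in covers[s]) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=0))
chosen = [s for s in covers if x[s].value() == 1]
print(chosen)  # e.g. ['spot2', 'spot4'] covers all five points
```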