Sensors 2015, 15, 23341-23360; doi:10.3390/s150923341
ISSN 1424-8220
CCTV Coverage Index Based on Surveillance Resolution and
Its Evaluation Using 3D Spatial Analysis
Kyoungah Choi and Impyeong Lee *
Lab. for Sensor & Modeling, Department of Geoinformatics, University of Seoul, Seoulsiripdaero 163,
Dongdaemun-gu, Seoul 02504, Korea; E-Mail:
* Author to whom correspondence should be addressed; E-Mail:;
Tel.: +82-6490-2888; Fax: +82-6490-2884.
Academic Editor: Gonzalo Pajares Martinsanz
Received: 9 June 2015 / Accepted: 10 September 2015 / Published: 16 September 2015
Abstract: We propose a novel approach to evaluating how effectively a closed circuit
television (CCTV) system can monitor a targeted area. With 3D models of the target area
and the camera parameters of the CCTV system, the approach produces surveillance
coverage index, which is newly defined in this study as a quantitative measure for
surveillance performance. This index indicates the proportion of the space being monitored
with a sufficient resolution to the entire space of the target area. It is determined by
computing surveillance resolution at every position and orientation, which indicates how
closely a specific object can be monitored with a CCTV system. We present full
mathematical derivation for the resolution, which depends on the location and orientation
of the object as well as the geometric model of a camera. With the proposed approach, we
quantitatively evaluated the surveillance coverage of a CCTV system in an underground
parking area. Our evaluation process provided various quantitative-analysis results, enabling us to examine the design of a CCTV system prior to its installation and to understand the surveillance capability of an existing CCTV system.
Keywords: closed circuit television (CCTV); surveillance performance; surveillance
coverage; surveillance resolution
1. Introduction
CCTV surveillance operations have rapidly expanded due to the technology’s important role in
crime prevention, traffic monitoring, and security [1], although controversies regarding privacy and the
effectiveness of CCTV installation have continually arisen [2,3]. Currently, many municipal
governments throughout the world independently operate integrated CCTV control centers, whereby
CCTV images are used to arrest criminals; additionally, corresponding news items can easily be
encountered [4-6]. Furthermore, the use of CCTV in public locations such as shopping malls,
apartments, and underground parking lots has reduced the possibility of crimes, including theft, assault,
and fraud [7-10]. The use of CCTV images has expanded beyond crime prevention; for example, to
ensure the safety of people on a train-station platform, to observe public-transport passengers for
unexpected behaviors, and to monitor patients at hospitals [11-14].
To solve problems regarding social welfare, transport safety, crime prevention, and other social
issues, the establishment of CCTV systems in public and residential areas has been proposed [15].
Further CCTV systems were installed and existing systems were upgraded to decrease the blind spots,
thereby improving the surveillance quality [16]. In addition to enhancing hardware specifications, the
performance of CCTV systems can be improved by incorporating new software and technologies; for
example, some researchers attempted to enhance the overall performance by optimizing the camera
configuration in a CCTV system [17,18]. In addition, GIS has been used to determine the optimal
locations of CCTV cameras following an analysis of CCTV images [19,20]. For example, the GIS tool
Isovist Analyst has been used to identify a minimal number of CCTV cameras for complete coverage
of a target area based on a greedy search [21]. In reality, however, when analyzing CCTV performance
in terms of quantifiable
indicators, we usually calculate the ratio of the blind spots depending on whether or not the target areas
are observable by a CCTV. The observable area is determined using the location and field of view of
each camera on a two-dimensional (2D) ground plan of the target space. Importantly, this analysis may
not provide sufficient accuracy because it does not consider three-dimensional (3D) locations and the
distributions of the cameras, objects, and targets in the 3D space. For example, the coverage of a
camera significantly differs according to the height of the camera and the target plane. Such a
conventional 2D analysis may cause unnecessary overlapping coverage or over-estimated coverage.
Further research is therefore required to provide a quantitative evaluation of surveillance performance
including surveillance resolution or blind-spot calculations with the height in 3D space.
With the improved Building Information Modeling (BIM) technology, both the physical and
functional characteristics of a building can now be generated in a digital format [22]. As a result, the
creation, visualization, and simulation of a 3D virtual model of a building can be performed more
conveniently [23,24]. The BIM models allow for the manipulation of surveillance locations and
viewpoints; therefore, the idea of using the BIM as the basis for simulating CCTV coverage has been
proposed and verified to determine surveillance performance [25,26]. By referring to this idea, the
redundant overlapping coverage of CCTV could be effectively prevented with the generation of a 3D
model during the design phase of a CCTV system; however, it will still not be possible to determine
the quality of the surveillance performance using the resolution at which an object can be identified in
a given area. For example, the use of CCTV images to trace the movements of a suspect at a crime
scene may not provide an image of the suspect’s face; or even if it does, the resolution is insufficient
for facial recognition. This may occur because the surface direction of the target object was not
considered and the achievable resolution was not computed through a simulation process. Although the
target area appears covered at the ground level, the coverage of each camera narrows as the height
increases from the ground. As most targets are off the ground, the surveillance performance for the
target is lower than the results from the existing evaluation method, which calculates coverage at the
ground level. Additionally, the existing evaluation method assumes that the target is facing toward the
camera; however, targets are usually looking in a horizontal direction so that the surveillance
performance is significantly reduced when compared with the existing method.
In Figure 1, the weak points of the existing surveillance-performance evaluation method are shown,
whereby there are four people in the 2D coverage of a CCTV camera; therefore, complete surveillance
is achieved for the four people based on the existing method. The faces of the people, however, cannot
be recognized. The face of the male in the black T-shirt is outside the 3D coverage and thus not
observable in the camera image, while the entire face of the female in the green jacket is hidden by
the male in the red polo shirt.
his face cannot be detected from the camera image. The child is looking at the TV in a different
direction from the optical axis of the camera and is sitting in a lower position away from the optical
axis of the camera; consequently, the resolution of the camera image is insufficient to recognize his
face even though his face is in the 3D coverage. With commercial software such as VideoCAD, we can
check 3D coverage and simulate images on virtual avatars captured by a certain camera in a site
interactively through a 3D graphic interface. However, because the software does not provide any
comprehensive quantitative indicator of the system's surveillance performance, we still cannot
understand how completely a CCTV system monitors the target area at a certain level of detail [27].
Figure 1. It is not possible to observe the faces of the four people from the image even
though they exist in the 2D coverage of a CCTV camera.
In this study, we therefore propose new comprehensive indicators to quantify CCTV-surveillance
performance, which quantitatively represent how completely a target space can be monitored by a
CCTV system with a degree of detail sufficient for given surveillance requirements. Here, it is carefully
considered that the details of objects appearing in a CCTV image depend on the types and specifications
of the sensors in the system and on the location and orientation of the objects in the target area.
We also develop a performance evaluation method using the proposed indicators and apply it to a
real case to verify its feasibility. The remainder of this paper is organized as follows: the proposed
concepts and methods are explained in Section 2; examples of surveillance coverage evaluation are
shown in Section 3; and conclusions are presented in Section 4.
2. Surveillance Coverage Evaluation
Surveillance performance indicates how effectively a target area is being monitored by a CCTV
system. As quantitative measures of surveillance performance, we propose the following two
indicators: surveillance resolution and surveillance coverage index. Surveillance resolution indicates
how closely a specific object can be monitored with a CCTV system, depending on the location and
orientation of the object as well as the cameras of the CCTV system. Surveillance coverage index
focuses on a specific region rather than an object, indicating how completely a region of interest can be
monitored with more than a specified surveillance resolution. The region can also be a path, an area, or
a 3D space as a subset of the object space; for example, to what extent a pedestrian path in a parking
lot, a crowded area in a mall, or the entire inner space of a building can be completely monitored may
be of interest. The resolution threshold can be established according to the application; for
example, it can be 2 px/cm for facial recognition.
The evaluation process to derive the proposed performance indicators requires two kinds of inputs.
The first group includes those with almost constant properties (at least during the evaluation
process), which are a 3D geometric model of the object space and the intrinsic and extrinsic
parameters of all of the cameras of the CCTV system. The second group includes the changeable
parameters, which are the resolution threshold and the regions of interest that are specified within the
object space.
In this section, we first describe the definition and derivation of the proposed indicators, the
surveillance resolution of an object with a specified location and orientation, and the surveillance
coverage index for a specified region of interest. We then explain the proposed evaluation process to
derive these indicators in an actual practical situation.
2.1. Surveillance Resolution and Coverage Index
The following four types of resolutions are used to define the quality of an image: geometric,
radiometric, spectral, and temporal. Among these resolutions, the geometric resolution is the most
effective when describing an object’s geometric properties such as position and shape. The geometric
resolution is typically expressed in terms of the Ground Sampling Distance (GSD), which refers to the
distance on the object surface covered by a single pixel of an image. In this context, the surveillance
performance of a CCTV can be evaluated by observing the minimum GSD required to monitor an
object. An arbitrary length can therefore be set within the target area in the object space, and the
length of its projection on the CCTV image can be calculated; we define the ratio of the projected
length to the actual length as the surveillance resolution and apply it when assessing the surveillance
performance of a CCTV. The surveillance
resolution depends on the sensor's physical characteristics, such as the focal length, the principal
point, the pixel size, and the projection type. It also varies according to the relative geometric
relationship between the camera and the object. Even for an object at the same position, the
resolution can differ according to the orientation of the object's surface. By considering these diverse
factors affecting the resolution, we derive a formula for the resolution as follows.
When an object located at a position P with surface normal n is monitored by a CCTV camera, it is
projected to the image with a certain resolution. The defined surveillance resolution is represented as
the ratio between the actual length ΔL of an object and its projected length Δl on the image, which is
defined as Equation (1). It consists of four terms that model the four steps of the projection
process from the object space to the image space:

r = \frac{\Delta l}{\Delta L} = \frac{\Delta L_p}{\Delta L} \cdot \frac{\Delta\theta}{\Delta L_p} \cdot \frac{\Delta l_c}{\Delta\theta} \cdot \frac{\Delta l}{\Delta l_c}   (1)

The steps are computing (1) the projected length ΔL_p of ΔL on the surface where the object can be
observed at the maximum resolution; (2) the range of incident angles Δθ when the object is projected
through the perspective center; (3) the projected length Δl_c corresponding to Δθ on the image
according to a lens formula without any distortion; and (4) the projected length Δl of Δl_c on the
image considering distortions. Figure 2 illustrates the geometric meanings of the main parameters
associated with the derivation of the defined surveillance resolution.

Figure 2. Definition of surveillance resolution.
The first term considers the orientation of the object surface. Even for the same object position, the
resolution of the projection differs with the object's orientation. The highest resolution is achieved
when the surface normal corresponds to the direction toward the perspective center. The length
projected onto the surface directed toward the perspective center can be derived as the following:

\Delta L_p = \Delta L \cos\beta   (2)

where β is the angle between the actual orientation and the orientation resulting in the highest
surveillance resolution.

The second term transforms the projected length ΔL_p into the range of the incident angle Δθ,
which can be derived based on the arc-length formula, as follows:

\Delta\theta = \frac{\Delta L_p}{D}   (3)

where D is the distance from the object to the perspective center, and d is its projection onto the
optical axis, which is represented as d = D\cos\theta.
The third term reflects a projection model. In a narrow-angle system, the central projection model is
usually assumed, whereas different models apply for wide-angle systems, which are mostly utilized
for CCTV systems. Some useful models are presented in Equation (4). Through a lens of focal length
f, an object point in the direction θ from the optical axis is projected to a position x distant from the
principal point:

x = f\tan\theta \ \text{(central)}, \quad x = f\theta \ \text{(equidistant)}, \quad x = 2f\tan(\theta/2) \ \text{(stereographic)}, \quad x = 2f\sin(\theta/2) \ \text{(equisolid angle)}   (4)

The first of the models in Equation (4) signifies the central projection model, while the last term of
Equation (1) models the distortion correction. If we consider only the radial distortion of the lens, the
correction model can be expressed as Equation (5):

\Delta l = \Delta l_c \,(1 + k_1 x^2 + k_2 x^4 + k_3 x^6)   (5)

where k_1, k_2, and k_3 are the radial distortion coefficients.
As shown in Figure 2, when an object located at position P with surface normal n is monitored by a
CCTV camera, it is projected to a CCTV image with a resolution r. The defined surveillance
resolution is represented as the ratio between the actual length ΔL of an object and its projected
length Δl, which is formulated as Equation (6). Here, it is assumed that the camera follows the central
projection without any distortion:

r = \frac{\Delta l}{\Delta L} = \frac{f\cos\beta}{d\cos\theta}   (6)

The distance d from the object to the center of projection measured along the optical axis, and the
off-axis angle θ from the optical axis, are determined from the object location P and the camera's
position and attitude. According to the object orientation n, the angle β between the actual orientation
and the orientation n_max resulting in the highest surveillance resolution is determined; n and n_max
represent the normal vector of the object surface and the normal vector yielding the maximum possible
surveillance resolution, respectively. In addition, the focal length f is obtained from the camera
modeling process.
Although a CCTV camera acquires images at the maximum resolution when the object orientation
coincides with n_max, the resulting image resolution is lower when the object surface is tilted by β.
When β increases to 90°, the object is no longer identifiable in the images. Moreover, the surveillance
resolution has a negative value when β is larger than 90°, which is the case when the opposite side of
the object is projected to the image; for example, only the back of a suspect is captured in the CCTV
image when the intention was to observe the facial features of the suspect. The negative
surveillance-resolution value is not useful information in this case and is therefore replaced by 0.
The camera parameters describe its projection characteristics. The intrinsic parameters are focal
length, principal point, and distortion coefficients, whereas the extrinsic parameters are the position
and orientation of the camera in an object coordinate system. As the camera parameters can be
estimated through a camera modeling process, such as self-calibration using the acquired CCTV
images and reference data, the intrinsic and extrinsic parameters can be assumed as known. In addition,
the same camera parameters can be applied in cases where the coverage originates from an identical
camera. The surveillance resolution r of a camera can therefore be derived for an object at a specific
location and orientation in a target space.
Referring back to Equation (6), θ, d, and β must be known to determine the surveillance
resolution at a certain location and orientation. As it is assumed that the CCTV camera's intrinsic and
extrinsic parameters and the object's location and orientation are known, the following equations can
be used to compute these quantities. First, the distance D from the object location P to the center of
the projection P_C and the unit vector z of the optical axis are calculated in the 3D coordinate
system defined by the camera's extrinsic parameters. Then,
the normal vector n_max of the surface where the CCTV camera can monitor at maximum resolution
can be calculated using Equation (7), as follows:

n_{\max} = \frac{P_C - P}{\lVert P_C - P \rVert} = \frac{P_C - P}{D}   (7)

Next, the off-axis angle θ from the CCTV camera's optical axis is determined by Equation (8),
and Equation (9) is used to compute d. In addition, the angle β between the actual surface where the
object exists and the surface at which the CCTV camera can observe the object at maximum
resolution can be calculated with the relation shown in Equation (10), as follows:

\theta = \arccos(-n_{\max} \cdot z)   (8)

d = D\cos\theta   (9)

\beta = \arccos(n \cdot n_{\max})   (10)
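As an illustrative sketch (not the authors' implementation), the per-camera computation of Equations (6)-(10) can be written in a few lines; the function name, the argument layout, and the conversion to pixels per unit length via the pixel pitch are assumptions of this example:

```python
import numpy as np

def surveillance_resolution(p_obj, n_obj, p_cam, z_axis, f, pixel_size):
    """Per-camera surveillance resolution (px per object-space unit),
    following Equations (6)-(10): central projection, no distortion.
    p_obj: object location; n_obj: unit surface normal of the object;
    p_cam: perspective center; z_axis: unit vector of the optical axis;
    f: focal length; pixel_size: detector pixel pitch (same units as f)."""
    ray = p_cam - p_obj
    D = np.linalg.norm(ray)              # distance from object to perspective center
    n_max = ray / D                      # Eq (7): orientation of maximum resolution
    cos_theta = np.dot(-n_max, z_axis)   # Eq (8): off-axis angle theta
    if cos_theta <= 0.0:
        return 0.0                       # object lies behind the camera
    d = D * cos_theta                    # Eq (9): distance along the optical axis
    cos_beta = np.dot(n_obj, n_max)      # Eq (10): tilt from the optimal orientation
    r = f * cos_beta / (d * cos_theta)   # Eq (6): length ratio dl/dL
    return max(r / pixel_size, 0.0)      # px per unit length; negative -> 0
```

For instance, a camera with f = 10 mm and 5 µm pixels looking straight at a camera-facing object 3 m away yields about 0.67 px/mm, i.e., roughly 6.7 px/cm.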
The values of θ, d, and β vary with respect to the local reference frame defined by the CCTV
camera and the object, as expressed in Equations (8)-(10), and these values determine the
surveillance resolution defined by Equation (6). Nonetheless, in general, multiple cameras are installed
over the target area and one object may appear in many camera images. As the relative position and
orientation differ between each CCTV camera and the particular object, each image will attain a
different surveillance resolution. Although there may be different surveillance resolutions for a given
object, the largest value is assigned as the object's surveillance resolution. Consequently, the
surveillance resolution r of an object in a CCTV system with multiple cameras can be expressed as
the following, where n_c is the total number of cameras in the CCTV system:

r = \max(r_1, r_2, \ldots, r_{n_c})   (11)
The surveillance resolution r is calculated from θ, d, and β, which are obtained with respect to
the object's position and orientation in a space. To assess the surveillance coverage index C, the
surveillance resolution at every sampled position and orientation in a given space is produced. Then,
the percentage of the surveillance resolutions that exceed a pre-defined threshold r_{th} can be
computed by Equation (12), as follows:

C = \frac{N\{\, r(p_i, o_j) \ge r_{th} \,\}}{N_p \times N_o} \times 100\%   (12)

where N_p is the total number of positions sampled; N_o is the total number of orientations sampled
at each position; and N{·} is the number of samples that meet the requirement in the braces.
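Equations (11) and (12) amount to a max-reduction over cameras followed by a thresholded count. A minimal sketch, where the three-axis array layout is an assumption of this example:

```python
import numpy as np

def coverage_index(resolutions, r_threshold):
    """Surveillance coverage index of Equation (12), in percent.
    resolutions: array of shape (n_positions, n_orientations, n_cameras)
    holding the surveillance resolution of every sample from every camera."""
    r_best = resolutions.max(axis=2)     # Eq (11): best resolution over all cameras
    n_ok = np.count_nonzero(r_best >= r_threshold)
    return 100.0 * n_ok / r_best.size    # share of samples meeting the threshold
```

With a threshold of, say, 2 px/cm for facial recognition, the function returns the percentage of (position, orientation) samples monitored at that level of detail.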
2.2. Coverage Evaluation Procedure
By applying the surveillance resolution and coverage index described in Section 2.1, the evaluation
of surveillance coverage can be conducted using the steps shown in Figure 3. First, we need to
generate 3D spatial models of the target surveillance area and determine the CCTV camera’s extrinsic
and intrinsic parameters. Next, samples are selected at each location, whereby the object is visible
from different orientations. Then, the surveillance resolutions at each of the sampled locations and
orientations are derived from Equation (1). Finally, the completeness of the surveillance coverage is
evaluated by computing the surveillance coverage index based on Equation (12).
Figure 3. Surveillance Coverage Evaluation Procedure.
In the first stage, we need to generate polyhedral models of the area from existing 2D architectural
floor plans or newly acquired sensory data. If we use the floor plans, rather than using other sensory
data, it is easier to create the corresponding 3D model of the building; however, floor plans contain the
detailed architectural design of a building, and inconsistencies between the designed model and the
actual building construction may exist. To accurately model the physical building, we need to acquire
the sensory data of the area, such as images and laser-scanning data; then, the polyhedral model can be
generated manually from stereo images or semi-automatically from point clouds. With the 3D
polyhedral model, we need to know the camera's extrinsic and intrinsic parameters. The extrinsic
parameters are the location of the camera's perspective center, expressed by three coordinates,
and its attitude, expressed by three independent rotation angles. To obtain the parameter values, we
distribute Ground Control Points (GCPs) in the area and perform a bundle adjustment with images
including the GCPs. The intrinsic parameters describe the metric characteristics of a camera, and can
be determined through a camera-calibration process using a specially designed calibration target. The
position of the principal point, the focal length, the distortion coefficients, and the pixel size are the
main intrinsic parameters. In this case, unlike most calibration setups, we cannot change the camera's
position and attitude, so we must position the calibration target on the floor in various directions and
acquire images of the target.
In the second stage, we determine the sampling of all of the possible 3D locations of the object,
along with their corresponding orientations, in the entire target space. For the determination of
sampling locations, we first define the target space in an arbitrary 3D Cartesian coordinate system.
Then, each of the axes is divided to form a 3D grid, and the surveillance resolution is evaluated at
each of the 3D grid points; for instance, when the three axes are split every 10 cm in a 1 m × 1 m × 1 m
space, there would be a total of 1000 locations to sample. On the other hand, the orientations that an
object can face range from 0° to 360° horizontally and from 0° to 180° vertically, covering a solid
angle of 4π steradians. To include all of the possible orientations, we divide the angles both
horizontally and vertically using an arbitrary location as the center. Applying the same idea used for
the 3D grid points, the surveillance resolution for all of the orientations at a given location can be
found; for example, if the sampling is conducted at an interval of 1°, the total number of orientation
samples for a given location will be 360 × 180 = 64,800. When the interval is increased to 10° or 45°
to reduce the number of orientation samples, there would be 648 and 32 orientations, respectively.
This means that even when only 32 orientations are observed at each location, the number of
calculations for finding the surveillance resolution reaches 32,000 in a 1 m³ volume with
1000 locations.
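The sample counts quoted above can be reproduced with a short sketch (the 1 m grid extent per axis matches the example in the text; the helper names are illustrative):

```python
# Sample counts for the example above: a 10 cm location grid in a
# 1 m x 1 m x 1 m volume, and orientations at fixed angular intervals.
def n_locations(extent_m=1.0, step_m=0.1):
    per_axis = round(extent_m / step_m)   # 10 samples per axis
    return per_axis ** 3                  # 1000 locations

def n_orientations(step_deg):
    # 360 deg horizontal sweep times 180 deg vertical sweep
    return (360 // step_deg) * (180 // step_deg)

print(n_orientations(1))                       # 64800 samples at 1 degree
print(n_orientations(10), n_orientations(45))  # 648 and 32 samples
print(n_locations() * n_orientations(45))      # 32000 resolution evaluations
```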
The advantage of sampling at every fixed interval for both location and orientation is its simplicity;
however, the disadvantage is that the defined orientations do not cover the same solid angle, whereby
the solid angle covered by each orientation decreases as the vertical angle increases. The vertices of a
regular polyhedron inscribed in a sphere, or the centers of its faces, therefore provide the same solid
angle across the orientations for a location. The five
regular polyhedrons are the tetrahedron, hexahedron, octahedron, dodecahedron, and icosahedron.
For example, assuming a particular location as the center of an icosahedron, orientation sampling can
be made facing each of the vertices, which will provide the same solid angle throughout and the
number of samples will decrease to 12.
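This equal-solid-angle sampling can be sketched with the standard icosahedron construction (cyclic permutations of (0, ±1, ±φ), not taken from the paper itself):

```python
import math
import numpy as np

def icosahedron_directions():
    """Return 12 unit vectors toward the vertices of a regular icosahedron,
    so every sampled orientation covers the same solid angle on average."""
    phi = (1.0 + math.sqrt(5.0)) / 2.0   # golden ratio
    base = [(0.0, s1, s2 * phi) for s1 in (-1.0, 1.0) for s2 in (-1.0, 1.0)]
    verts = []
    for x, y, z in base:                 # three cyclic permutations of (0, +-1, +-phi)
        verts += [(x, y, z), (z, x, y), (y, z, x)]
    v = np.array(verts)
    return v / np.linalg.norm(v, axis=1, keepdims=True)
```

The 12 directions sum to the zero vector, confirming that the sampling has no directional bias.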
In the third stage, when deriving surveillance resolutions at each of the sampled locations and
orientations, we must analyze the visibility of the position from a camera by applying a ray-tracing
algorithm. With this algorithm, we define a ray from the position to the perspective center of the
camera and determine whether this line is intersected with other obstacles. If an intersection is
determined, the position is within the occluded area of the camera and we cannot define the
surveillance resolutions at the position. Before applying ray-tracing for each position, we need to
compute the horizontal coverage of every camera and the 2D Minimum Bounding Rectangle (MBR) of
all of the obstacles in the target area. Although the obstacles are defined in a 3D sense, most of them
extend from the ground to the ceiling with a constant horizontal outline, such as a pillar.
In this case, by examining the 2D overlap, we can determine whether there is an overlap in a 3D sense
without a complicated 3D process.
Ray-tracing for the calculation of the surveillance resolution at a position with an orientation by a
camera is performed as follows: (1) determine whether the position is within the horizontal coverage of
the camera; (2) compute the 2D MBR of the line between an object point (a sampled position) and
the perspective center; (3) check whether there is a 2D overlap between the MBR of the line and that of
an obstacle; if there is no 2D overlap, stop at this step with no 3D overlap; (4) check whether the line
segment intersects with the 2D boundary polygon of the obstacle; if it does, stop at this step, because
the obstacle has an identical horizontal outline at every height, signifying that it also intersects
with the obstacle in a 3D sense; (5) determine whether the line segment intersects with a 3D polyhedral
model of the obstacle; and (6) if there is an intersection, the position cannot be identified in the camera
image and the surveillance resolution is set to 0.
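Steps (2)-(4) above can be sketched as follows; the helper names are illustrative assumptions, and the 3D polyhedron test of step (5) is omitted:

```python
def mbr(points):
    """2D Minimum Bounding Rectangle of a point list: (xmin, ymin, xmax, ymax)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def mbrs_overlap(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def segments_intersect(p, q, a, b):
    """Strict 2D segment intersection test via orientation signs
    (touching exactly at a vertex is not counted as blocking)."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2 = cross(a, b, p), cross(a, b, q)
    d3, d4 = cross(p, q, a), cross(p, q, b)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def ray_blocked_2d(obj_xy, cam_xy, obstacle_poly):
    """Steps (2)-(4): MBR pre-check, then sight line vs. obstacle outline."""
    line_box = mbr([obj_xy, cam_xy])
    if not mbrs_overlap(line_box, mbr(obstacle_poly)):
        return False                       # step (3): no 2D overlap, no occlusion
    n = len(obstacle_poly)
    return any(segments_intersect(obj_xy, cam_xy,
                                  obstacle_poly[i], obstacle_poly[(i + 1) % n])
               for i in range(n))          # step (4): line crosses the outline
```

The cheap MBR rejection runs before the per-edge intersection tests, which keeps the visibility analysis fast when most obstacle outlines are nowhere near the sight line.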
In the final stage, we compute the surveillance coverage index by comparing the achievable
resolution with the desired one. Here, we determine the proportion of the instances that reach above
the desired resolution among the resolution values computed at all the sampled locations and
orientations in the previous stage. The desired resolution can be established or derived from the
specified surveillance requirements for recognition and tracking. For example, one may want to
monitor every location in the target space with a resolution of 0.5 px/cm required for a meaningful
recognition process. Using the resolution values computed in the previous stage, we can easily
compute the proportion of the locations monitored with at least the desired resolution and present it as
the overall coverage index of the target space. In addition, we can visualize the computed surveillance
resolution at each location with different object orientations in the 2D/3D space to visually inspect the
weak and strong surveillance areas. Furthermore, we can check the coverage index for a special
surveillance sector such as a doorway or moving path in a parking area. In a doorway, one may want to
recognize even the faces of the people going in and out through an exit; and for the face recognition,
the required resolution of at least 2 px/cm may be assumed. We can also determine how well such a
requirement is satisfied in the space of interests in a quantitative way by checking the computed
resolution values at all the locations with different orientations within the target space. In addition, we
can identify the weak area and propose the location and orientation for additional camera installation to
fulfill surveillance requirements. Adding cameras or constructing a CCTV system involves a major
expense; therefore, we need an elaborate design for achieving the surveillance purpose before the
installation. In addition, many different cameras with different prices and performance levels are
available; thus, we can check the surveillance performance while varying the camera models or the
camera parameters to derive more optimal camera specifications and configurations.
3. Application Example and Analysis
3.1. Experimental Data
The underground parking lots of buildings such as apartments are the places where CCTV systems
are encountered in daily life. We therefore produced a simulation of a typical configuration and size of
a parking lot, like that in Figure 4, based on the concept of the proposed surveillance resolution for the
evaluation of the surveillance coverage of the target area. The surface area of the generated parking lot
is 5079.47 m2 with a height of 3 m. Additionally, the CCTV cameras were positioned to reflect the real
world, whereby the cameras were installed in pairs, facing the opposite direction. In addition, CCTV
cameras are usually installed on the ceiling of the path that the cars and people mostly use. Here, we
assumed that each pair of cameras is rotated by ±12° in the -axis, and the approximate distances
between them are  in the -axis and  in the -axis. We also assumed that the focal length,
pixel size, and detector dimension of each camera are 10 mm, 5 µm, and 4000 by 3000 pixels,
respectively. In this case, the coverage of each camera is about 6 m by 4.5 m at a place of 3 m distance.
Figure 4. Arrangement of CCTV cameras in an underground parking lot.
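The quoted footprint can be checked with the stated parameters; this is a back-of-envelope sketch assuming a distortion-free central projection:

```python
# Footprint of one camera at 3 m, with the parameters given in the text:
# f = 10 mm, pixel pitch = 5 um, detector of 4000 x 3000 px.
f_mm = 10.0
pixel_mm = 0.005
width_px, height_px = 4000, 3000
dist_mm = 3000.0

scale = dist_mm / f_mm                          # object-to-image scale = 300
cov_w_m = width_px * pixel_mm * scale / 1000    # 20 mm sensor width -> 6.0 m
cov_h_m = height_px * pixel_mm * scale / 1000   # 15 mm sensor height -> 4.5 m
res_px_per_cm = (f_mm / (pixel_mm * dist_mm)) * 10   # on-axis resolution
print(cov_w_m, cov_h_m)   # 6.0 4.5
```

The on-axis resolution at 3 m works out to about 6.7 px/cm, comfortably above the 2 px/cm threshold discussed for facial recognition.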
3.2. Evaluation Results and Analysis
3.2.1. Overall Analysis
The possible locations of the object were sampled at each of the 3D grid points with an interval of
 and the possible orientations of the object were sampled every  in the horizontal and vertical
planes. Then, the observed surveillance resolutions from the CCTV cameras were determined for all of
the samples. Figure 5 displays the horizontal positions of the object that are monitored with a
surveillance resolution above 0 when the object is located on the ground  and facing the
ceiling. Of the total of 20,541 ground locations, 13,186 can be observed in CCTV images,
and thus 64.2% of the ground surface is computed as the surveillance area. This evaluation of
surveillance coverage is similar to an existing method for determining blind spots, whereby the
coverage of CCTV cameras in 2D space is derived [17,28].
For example, to identify a suspect who has trespassed through a parking lot, the suspect should
appear in a CCTV image and his face should be recognizable in that image; however, the success of
such an objective is difficult to estimate when the surveillance coverage is determined with the
existing 2D blind-spot-analysis method. Although slight parameter differences exist between CCTV
cameras, the resolution typically required for monitoring an object in detail is about 2 px/cm,
whereas about 0.7 px/cm suffices for general surveillance (Theia Technologies, 2009). Accordingly,
the surveillance resolution must be at least 2 px/cm to distinguish the appearance of a suspect
without a criminal record. The surveillance resolution from the ground surface when the orientation of an object
is upper vertical is illustrated in Figure 6; accordingly, the areas with a surveillance resolution
of at least 2 px/cm are displayed in Figure 7. Of the total of 20,541 locations, 4930 exceed
2 px/cm, signifying that, at this height, the suspect’s facial features can be identified at
approximately 24% of the locations. With a CCTV-surveillance-coverage evaluation method such as the
one explained above, the percentage of surveillance resolutions that reach the resolution required
to successfully fulfill the CCTV system’s purpose can be determined.
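The index computation itself reduces to a thresholded ratio over the sampled resolutions. A minimal sketch follows; the `res` list is a synthetic stand-in constructed only to match the counts reported above (13,186 observed and 4930 at 2 px/cm or more, out of 20,541 samples), not actual measured values:

```python
def coverage_index(resolutions, threshold=0.0):
    """Fraction of sampled locations covered: resolution strictly above 0 for
    the blind-spot case, or at least the threshold (px/cm) otherwise."""
    covered = sum(1 for r in resolutions
                  if (r > 0.0 if threshold == 0.0 else r >= threshold))
    return covered / len(resolutions)

# Synthetic stand-in matching the reported counts
res = [2.5] * 4930 + [1.0] * (13186 - 4930) + [0.0] * (20541 - 13186)
print(round(coverage_index(res) * 100, 1))       # 64.2 (any coverage)
print(round(coverage_index(res, 2.0) * 100, 1))  # 24.0 (>= 2 px/cm)
```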
Figure 5. Areas with a surveillance resolution greater than 0 when objects are on the
bottom and their orientations are upper vertical (white: resolution > 0 px/cm; black:
resolution = 0 px/cm).
Figure 6. Surveillance resolutions when objects are on the ground and their orientations
are upper vertical (blue: resolution = 0 px/cm; red: resolution = 7 px/cm).
Figure 7. Areas with a surveillance resolution greater than or equal to 2 px/cm when objects
are on the bottom and their orientations are upper vertical (white: resolution ≥ 2 px/cm;
black: resolution < 2 px/cm).
As the height of the object increases, the surveillance coverage significantly decreases, as shown
in Figure 8, which presents the surveillance coverage index with a threshold of 1 px/cm according
to the elevation when objects have an upper-vertical orientation. Figure 8c shows that a subject’s
face could be observed with a probability of 19.7% if the individual’s face is at a height of 1.5 m
and looking in an upper-vertical direction.
Figure 8. Surveillance coverage index with a threshold of 1 px/cm according to the
elevation when objects have an upper-vertical orientation; (a) the index at the elevation of
0.5 m; (b) the index at the elevation of 1 m; (c) the index at the elevation of 1.5 m; (d) the
index at the elevation of 2 m.
Furthermore, the surveillance coverage index changes according to the object’s orientation. Figure 9
shows the surveillance coverage index with a threshold of 1 px/cm according to the orientation when
objects are at an elevation of 0.5 m, which is the general height of a car’s license plate.
Figure 9f shows that if an object moves through the area looking downward, it is not possible to
recognize the object’s surface.
Figure 9. Surveillance coverage index with a threshold of 1 px/cm according to the
orientation when objects are at an elevation of 0.5 m; (a) the index with the eastward
orientation; (b) the index with the northward orientation; (c) the index with the upward
orientation; (d) the index with the westward orientation; (e) the index with the southward
orientation; (f) the index with the downward orientation.
3.2.2. Areal Analysis
The suggested methodology incorporates the geometric properties and movement trends of an object
to allow for a detailed evaluation of the surveillance coverage in a 3D space. For example, the red
rectangular area in Figure 4 represents the entrance from the parking lot into a building, such as an
escalator or elevator, where more careful monitoring is required. Surveillance coverage in such areas
can therefore be determined to analyze the vulnerability of the CCTV system with regard to crime
prevention and reaction. For this analysis, we limited the vertical range from 1.2 m to 1.8 m in
consideration of the average height of a human face, and sampled the target space of this vertical
range at locations with 20 cm intervals, which is the space denoted with the blue line in Figure 4.
At each location, we considered four horizontal orientations with a 90° interval. The number of
sampled locations and orientations in the area totaled 100(x) × 100(y) × 4(z) × 4 (horizontal
angle) × 1 (vertical angle) = 160,000, and the surveillance resolutions of all of them were
determined. The head of a person
would roughly be at a height of 1.6 m when the average height of adults is taken into account.
The surveillance resolutions at the 1.6 m height, when the orientation is the direction in which the
maximum resolution is achieved, are shown in Figure 10. The corresponding surveillance coverage
index is 23.7%, and it can be concluded that it is very difficult to verify the identity of an
individual in the target area using the CCTV system.
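The sample enumeration described above (a 100 × 100 horizontal grid, four heights, four horizontal headings) can be sketched as follows; the grid indices and angle values are illustrative placeholders for the actual sampling coordinates:

```python
from itertools import product

x_idx, y_idx = range(100), range(100)   # 20 cm grid: 100 x 100 positions
z_levels = (1.2, 1.4, 1.6, 1.8)         # four heights within 1.2-1.8 m
headings = (0, 90, 180, 270)            # four horizontal orientations (deg)

# One sample per (location, orientation) pair; the vertical angle is fixed (x1)
samples = list(product(x_idx, y_idx, z_levels, headings))
print(len(samples))  # 160000
```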
Figure 10. (a) Surveillance resolutions when objects are at a 1.6 m height and their
orientation is in the direction in which the maximum resolution is achieved;
(b) Surveillance coverage index with the threshold of 2 px/cm.
To improve the existing monitoring quality, the installation of two additional CCTV cameras is
planned. We examined the difference in the target-area resolution after the additional cameras are
installed, with a possible location as the center  and a rotation of ±12° in the -axis.
Figure 11 shows the surveillance resolutions at the height of 1.6 m when the orientation is the
direction in which the maximum resolution is achieved, after the cameras are added. In this case,
we can conclude that adding the cameras at the possible location enhanced the surveillance coverage
of the target area, as the surveillance coverage index increased from 23.7% to 38.7%.
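The size of this gain can be checked with one line of arithmetic (our own check, not part of the original evaluation):

```python
before, after = 23.7, 38.7            # coverage index at the 1.6 m height (%)
gain_points = after - before          # 15 percentage points
gain_relative = gain_points / before  # ~0.63, i.e., roughly a 60% relative improvement
print(round(gain_relative * 100))     # 63
```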
Figure 11. (a) Surveillance resolutions when objects are at 1.6 m height and their
orientations are in the direction in which the maximum resolution is achieved; (b) Surveillance
coverage index with a threshold of 2 px/cm (after adding two cameras).
Table 1 shows the surveillance coverage index at different elevations when an object is facing the
direction that provides the maximum resolution, before and after adding the cameras. From this, we
can see that the surveillance coverage of the target area improved by approximately 60% after the
two cameras were added; however, as the height of the object increases, the surveillance coverage
still decreases significantly.
Table 1. Surveillance coverage index before and after adding cameras (unit: %).
Z = 1.2 m
Z = 1.4 m
Z = 1.6 m
Z = 1.8 m
Finally, Figure 12 displays the surveillance resolutions at the most probable orientations at the
height of 1.6 m. From this, we can observe that, even though the location of the object is identical,
the surveillance resolution differs significantly between the orientations. In the case where the
object is facing the −z direction, as illustrated in Figure 12f, the object does not appear in the
CCTV images, even with the additional cameras. This implies that, when a person intentionally faces
the ground as they travel, their facial features cannot be observed by the CCTV system. To solve
this problem, additional cameras facing in an upward direction should be installed at a lower height
range of 0 cm to 50 cm, as needed. With such additional cameras, the surveillance coverage of the
area can be enhanced three-dimensionally and omni-directionally.
Figure 12. Surveillance resolutions according to the orientation (after adding cameras); (a) the
resolutions with the +x direction; (b) the resolutions with the −x direction; (c) the resolutions
with the +y direction; (d) the resolutions with the −y direction; (e) the resolutions with the
+z direction; (f) the resolutions with the −z direction.
3.2.3. Path Analysis
The red line in Figure 4 represents the moving paths of vehicles in the parking lot, where careful
surveillance is required. We sampled the paths with intervals of 0.1 m and 0.5 m in the horizontal
and vertical directions, respectively. At each location, we selected six orientations with a 90°
interval. We then computed the surveillance resolution at each sampled location and orientation.
Figure 13 presents the surveillance resolutions of the sampled locations, when the orientation is
the direction in which the maximum resolution is attained, using vertical bars, where the length of
a vertical blue bar indicates the magnitude of the surveillance resolution. As shown in Table 2,
the surveillance coverage indices with thresholds of 0 px/cm and 1 px/cm are 87% and 83%, respectively.
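The horizontal sampling of a path at a fixed interval can be sketched as below; the polyline waypoints are hypothetical stand-ins for the red line in Figure 4:

```python
import math

def sample_path(waypoints, step=0.1):
    """Points spaced `step` metres apart along a polyline of (x, y) waypoints."""
    pts, d = [], 0.0  # d: distance already consumed within the current segment
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        while d <= seg:
            t = d / seg
            pts.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += step
        d -= seg  # carry the remainder into the next segment
    return pts

# Hypothetical 1 m straight path sampled every 0.1 m -> 11 points
print(len(sample_path([(0.0, 0.0), (1.0, 0.0)])))  # 11
```

At each of these points, the resolution would then be evaluated for the chosen set of orientations, exactly as in the areal analysis above.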
Figure 13. Surveillance resolutions at the sampled locations, when the orientation is the
direction in which the maximum resolution is attained.
Table 2. Surveillance Coverage Index (SCI) along a certain path.
Threshold ()
All Directions (E/W/N/S)
Maximum Direction (E/W/N/S)
4. Conclusions
Although the efficient design and installation of CCTV systems is recognized as important, there is
a lack of comprehensive indicators for quantitatively understanding how well a CCTV system covers
a target area while satisfying specific surveillance requirements. In this study, we have thus
proposed two new indicators, the surveillance resolution and the surveillance coverage index, which
allow us to quantitatively evaluate the effectiveness of CCTV systems for task-specific
surveillance. The surveillance resolution, indicating how closely an object can be observed by
cameras, is derived from a rigorous projection model from objects to cameras. The derivation
reflects the 3D orientation as well as the location of objects and cameras, and it is also
applicable to various kinds of cameras with
different projection and distortion characteristics. Based on the surveillance resolution, we
defined the surveillance coverage index, which represents how completely an area is monitored with
a certain level of detail. Using these two indicators and the presented derivations associated with
them, we established an evaluation process that enables versatile, practical, and visual analysis
of a CCTV system’s surveillance coverage. For example, one can derive the overall surveillance
coverage of the entire target area, check whether a specific area (or path) of interest is monitored
with a required resolution, and evaluate various alternatives to improve the current coverage.
During these processes, one can easily incorporate the dynamic and static attributes of objects and
cameras, for example, the movement of persons or vehicles.
In the near future, we will adapt the proposed evaluation approach to field problems in regions
with stricter surveillance requirements, for example, crime-ridden districts, subway stations,
complex malls, casinos, and other places. With the adapted approach, we can assess the current
status of a CCTV system and provide appropriate solutions. In addition, using the proposed
surveillance indices as target values to be optimized, we can derive the optimal positions and
orientations of the cameras to maximize the performance of a CCTV system according to its
surveillance requirements.
Acknowledgments
This research was supported by a grant (14SCIP-B065985-02) from the Smart Civil Infrastructure
Research Program funded by the Ministry of Land, Infrastructure and Transport (MOLIT) of the
Korean government and the Korea Agency for Infrastructure Technology Advancement (KAIA).
Author Contributions
Kyoungah Choi and Impyeong Lee collaborated to perform the study presented in this paper.
Kyoungah Choi and Impyeong Lee defined the concepts and derived the mathematical formula related
to the concepts. Kyoungah Choi performed the experiments and analyzed the results. Kyoungah Choi
wrote the paper. Impyeong Lee revised the manuscript. All authors read and approved the manuscript.
Conflicts of Interest
The authors declare no conflict of interest.
References
1. Teague, C.; Green, L.; Leith, D. Watching me watching you: The use of CCTV to support safer
work places for public transport transit officers. In Proceedings of the Australian and New Zealand
Communication Association Conference, Canberra, Australia, 9 July 2010.
2. Harris, C.; Jones, P.; Hillier, D.; Turner, D. CCTV surveillance systems in town and city centre
management. Prop. Manag. 1998, 16, 160–165.
3. Smithsimon, M. Private lives, public spaces: The Surveillance State. Dissent 2003, 50, 43–49.
4. Smith, G.J.D. Behind the Screens: Examining Constructions of Deviance and Informal Practices
among CCTV Control Room Operators in the UK. Surveill. Soc. 2004, 2, 376–395.
5. Yesil, B. Watching ourselves: Video surveillance, urban space, and self-responsibilization.
Cult. Stud. 2006, 20, 400–416.
6. Seo, T.; Lee, S.; Bae, B.; Yoon, E.; Kim, C. An Analysis of Vulnerabilities and Performance on
the CCTV Security Monitoring and Control. J. Korea Multimed. Soc. 2012, 15, 93–100.
7. Chang, I. A Study on the Effects of CCTV installation for Larceny Incident Prevention within
Apartment Complex. Master’s Thesis, Wonkwang University, Iksan, Korea, 2009.
8. Welsh, B.C.; Farrington, D.P. Public area CCTV and crime prevention: An updated systematic
review and meta-analysis. Justice Q. 2009, 26, 716–745.
9. Caplan, J.M.; Kennedy, L.W.; Petrossian, G. Police-monitored CCTV cameras in Newark, NJ: A
quasi-experimental test of crime deterrence. J. Exp. Criminol. 2011, 7, 255–274.
10. McLean, S.J.; Worden, R.E.; Kim, M.S. Here’s Looking at You: An Evaluation of Public CCTV
Cameras and Their Effects on Crime and Disorder. Crim. Justice Rev. 2013, 38, 303–334.
11. Goold, B.J. Public Area Surveillance and Police Work: The Impact of CCTV on Police Behaviour
and Autonomy. Surveill. Soc. 2003, 1, 191–203.
12. Teague, C.; Leith, D. Who guards our guardians? The use of ethnography to study how
railway transit officers avoid injury. In Proceedings of the PATREC Conference, Perth, Australia,
2 October 2008.
13. Xu, X.; Tang, J.; Zhang, X.; Liu, X.; Zhang, H.; Qiu, Y. Exploring Techniques for Vision Based
Human Activity Recognition: Methods, Systems, and Evaluation. Sensors 2013, 13, 1635–1650.
14. Ko, B.C.; Jeong, M.; Nam, J.Y. Fast Human Detection for Intelligent Monitoring Using
Surveillance Visible Sensors. Sensors 2014, 14, 21247–21257.
15. Kim, Y. Design of CCTV-based Monitoring System for Constructing of Societal Security
Network. In Proceedings of the Summer KIIT Conference, Asan, Korea, 31 May 2013;
pp. 175–177.
16. Park, S. A Study on the Effective Disposition of Indoor CCTV Camera. In Proceedings of the Fall
KIIT Conference, Asan, Korea, 30 November 2013.
17. Yabuta, K.; Kitazawa, H. Optimum Camera Placement Considering Camera Specification for
Security Monitoring. In Proceedings of the IEEE International Symposium on Circuits and
Systems, Seattle, WA, USA, 18–21 May 2008; pp. 2114–2211.
18. Liu, J.; Sridharan, S.; Fookes, C. Optimal Camera Planning Under Versatile User
Constraints in Multi-Camera Image Processing Systems. IEEE Trans. Image Process. 2014, 23.
19. Ha, S. Intelligent CCTV System Application for Security & Surveillance System Reinforcement.
In Construction Technology Trends & Research Report; SsangYong Institute of Construction
Technology: Seoul, Korea, 2012; Volume 62, pp. 50–54.
20. Cho, C. A Study on Performance Improvement on CCTV Video Conference Using Auto
Calibration Algorithm Application. Master’s Thesis, Konkuk University, Seoul, Korea, 2012.
21. Rana, S. Isovist Analyst: An ArcView extension for planning visual surveillance. In Proceedings of
the ESRI European User Conference, Athens, Greece, 6–8 November 2006.
22. Park, J.; Pyeon, M.; Jo, J.; Lee, G. Case Study of Civil-BIM & 3D Geographical Information.
J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2011, 29, 569–576.
23. Eastman, C.; Liston, K.; Sacks, R.; Teicholz, P. BIM Handbook: A Guide to Building Information
Modeling for Owners, Managers, Designers, Engineers & Contractors, 2nd ed.; John Wiley
& Sons: Hoboken, NJ, USA, 2011.
24. Wang, J.; Li, J.; Chen, X.; Lv, Z. Developing indoor air quality through healthcare and sustainable
parametric method. In Proceedings of the 4th International Conference on Bioinformatics and
Biomedical Engineering (ICBBE 2010), Chengdu, China, 18–20 June 2010; pp. 1–4.
25. Kim, I.; Shin, H. A Study on Development of Intelligent CCTV Security System based on BIM.
J. Korean Inst. Electron. Commun. Sci. 2011, 6, 789–795.
26. Chen, H.T.; Wu, S.W.; Hsieh, S.H. Visualization of CCTV coverage in public building space
using BIM technology. Vis. Eng. 2013, 1, 1–17.
27. Utochkin, S. The principles of CCTV design in VideoCAD. CCTV Software. Available online: (accessed on 21 July 2015).
28. Erdem, U.M.; Sclaroff, S. Automated Camera Layout to Satisfy Task-specific and Floor
Plan-specific Coverage Requirements. Comput. Vis. Image Understand. 2006, 103, 156–169.
© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article
distributed under the terms and conditions of the Creative Commons Attribution license.
... The results show that this method can effectively improve the target capture rate of the network and observe the target from a better view. A study in [9] quantitatively evaluated the surveillance coverage of a CCTV system in an underground parking area and presented a full mathematical derivation for the resolution, which depends on the object's location and orientation as well as the camera's geometric model. Finally, the authors of [10] presented a distributed algorithm of perimeter surveillance that allows for the maintenance of total coverage in heterogeneous camera networks. ...
... Since, in most cases, the plane of the target, represented by the large rectangle in Figure 2b, is not parallel to the optic lens and the image plane, we consider the angle of the rotation of the target plane θ t to determine if the target belongs to the surveillance area. In other words, the target is located in the surveillance area only if all the points represented in (9) are located within the surveillance area. ...
Full-text available
A surveillance camera is the typical device that makes up a surveillance camera system in a modern city. It will still be a representative surveillance unit in future scenarios such as in smart cities. Furthermore, as the demand for public safety increases, a massive number of surveillance cameras will be in use in the future, and an automated system that controls surveillance cameras intelligently will also be required. Meanwhile, installing a surveillance system without any verification system might not be cost-effective, so a simulation that evaluates the system’s performance is required in advance. For this reason, we introduce how to simulate a surveillance area and evaluate surveillance performance in this paper to assess a surveillance system consisting of large amounts of surveillance cameras. Our simulator defined the surveillance area as a pair of two-dimensional planes, which depend on various camera-related configurations. Both surveillance areas are used to determine if the moving object belongs to the coverage of a surveillance camera. In addition, our simulator adopts several performance indices to evaluate a surveillance camera system in terms of target detection and quality. The simulation study provides comprehensive results on how various components of the surveillance system affect the performance of the surveillance system, leading to the conclusion that building a sophisticated scheme to control a large number of surveillance cameras can provide a cost-effective and reliable surveillance system for smart cities.
... Video surveillance systems have rapidly expanded due to the technology's important role in traffic monitoring, crime prevention, security, and post-incident analysis [1]. As a consequence of increasing safety concerns, camera surveillance has been widely adopted as a way to monitor public spaces [2]. ...
... The estimation of visual coverage, as one of the most important quality indexes for depicting the usability of a camera network, was addressed by Wang et al. [15] and Yaagoubi et al. [16]. Dealing with a similar problem, Choi and Lee [1] proposed an approach to evaluate the surveillance coverage index to quantitatively measure how effectively a video surveillance system can monitor a targeted area. The problem of organizing and managing real-time geospatial data for public security video surveillance was addressed by Wu et al. [17]. ...
Full-text available
The integration of a surveillance camera video with a three-dimensional (3D) geographic information system (GIS) requires the georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on the information about the camera view at the moment that the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field-of-view. Since the accurate measuring of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEM) of the area of interest. Once an adequate number of points are matched, Levenberg–Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., position and orientation of the camera.
... Stereovision option is the second technique proposed in this paper. Up to now, stereovision was used in CCTV systems to estimate the 3D-coordinates of the objects of interest [24,58,72]. In our case, however, it is offered to the monitoring operators mainly in order to activate their attention, to enhance their concentration, and to make their decisions faster and more precise. ...
Full-text available
In this paper visualization techniques for modern closed circuit television (CCTV) smart city services are discussed with application to prevention of threats. Unconventional approaches to the intelligent visual data processing are proposed in order to support video surveillance operators, thus to make their work less exhaustive and more effective. Although registration of a huge amount of video data requires development of intelligent and automatic signal processing information extraction techniques, improvement of visualization methods for operators is also a very important task, because of the crucial role the human factor plays and should always play in the decision making, e.g. in the operator reactions to various crisis situations, which can never be fully eliminated by artificial intelligence. Four software based mechanisms connected with a standard or with a slightly extended hardware are proposed as options for the CCTV operators. They utilize rather known ideas but are implemented with new extensions to original algorithms, as well as with additional, innovative modifications and solutions (not presented in the literature). With them they become reliable and efficient tools for the CCTV systems. First, generation of cylindrical panoramas is suggested in order to make long-time video content analysis of a defined area easier and faster. Using panoramas it is possible to reduce the time that is required to watch the video by a factor of hundreds or even thousands and perform an efficient compression of the video stream for the long-time storage. Second, the controlled stereovision option is discussed for quicker and more precise extraction of relevant information from the observed scene. Third, the thermo-vision is analyzed for faultless detection of pedestrians at night. 
Finally, a novel high dynamic range (HDR) technique is proposed, dedicated to the CCTV systems, in contrast to other typical entertainment oriented HDR approaches, for clear visualization of important and meaningful image details, otherwise invisible. We validated usefulness of the proposed techniques with many experiments presented in this paper.
... Benedikt (1979) showed that quantitative research in architecture can be done on human behaviours and perception, privacy and human psychology towards space, by using isovist. Indeed studies related to visibility have been underway regarding crime (S. Lee and Ha 2016), security (Said and El-Rayes 2010;Choi and Lee 2015), perception (Sengke and Atmodiwirjo 2017;Sato, Kishimoto, and Yamada 2017;Dosen and Ostwald 2017) and privacy (Alitajer and Nojoumi 2016) within buildings, as was earlier shown in the case of the CAD lab. ...
Full-text available
This study suggests a new spatial network model, called the space-connector model, for improving the representation of relations between spaces. Previous models have proved incapable of expressing the differences in the geometric and visual relations between spaces, such as the type and size of openings, the length of corridors, or the transparency of walls. New notations for explicitly representing these differences are proposed. This study suggests a modified process and map for calculating isovist, pedestrian route, based on the space-connector model. The space-connector model was implemented in the ‘ASpace’ tool and its functionality has been validated through an integrated analysis of space syntax and spatial properties as well as isovist and pedestrian route analysis.
... In the work by Choi et al. on CCTV evaluation index [5], the authors devised a method to evaluate the quality of the CCTV image. The index is mainly represented as the ratio between the actual length of an object and its projected length on the image. ...
Full-text available
Most surveillance systems only contain CCTVs. CCTVs, however, provide only limited maneuverability against dynamic targets and are inefficient for short term surveillance. Such limitations do not raise much concern in some cases, but for the scenario in which traditional surveillance systems do not suffice, adopting a fleet of UAVs can help overcoming the limitations. In this paper, we present a surveillance system implemented with a fleet of unmanned aerial vehicles (UAVs). A surveillance system implemented with a fleet of UAVs is easy to deploy and maintain. A UAV fleet requires little time to deploy and set up, and removing the surveillance is also virtually instant. The system we propose deploys UAVs to the target area for installation and perform surveillance operations. The camera mounted UAVs act as surveillance probes, the server provides overall control of the surveillance system, and the fleet platform provides fleet-wise control of the UAVs. In the proposed system, the UAVs establish a network and enable multi-hop communication, which allows the system to widen its coverage area. The operator of the system can control the fleet of UAVs via the fleet platform and receive surveillance information gathered by the UAVs. The proposed system is described in detail along with the algorithm for effective placement of the UAVs. The prototype of the system is presented, and the experiment carried out shows that the system can successfully perform surveillance over an area set by the system.
Full-text available
Background and Objective: Modern surveillance systems based on CCTV cameras is an essential element for protecting the environment and social security. Camera network optimization and designing its architecture are among the issues of camera network studies. The purpose of this paper is to develop a geospatial solution to find configurations for CCTV cameras in such a way that creates the maximum possible visual coverage in an urban area. Methods: In general, this research is performed in two steps. In the first step, the algorithm is used to locate cameras in two-dimensional space, and the resulting output is analyzed in the second step in a three-dimensional space and visually. The first step was performed using ArcGIS software and Python programming language, and the S-ROPE algorithm was used as a high-precision method for 2D camera deployment. After the modifications were made at the viewing and non-binary regions of the region, the location of the cameras was determined. In the second stage, the three-dimensional model of City Engine software was used to validate the output obtained using the S-ROPE algorithm. The evaluation of the applied method was performed on an urban study area. Findings: With the S-ROPE algorithm, an automated location determination for cameras was taken so that the area of 1798.28 m² was covered by a total area of 1953.98 m² of study area, i.e. 92%. After a three-dimensional review, only two cameras were added to the total of cameras to cover 100%. Discussion and Conclusion: With the proposed method, the number of cameras used makes significant savings, and the most possible coverage is achieved. The only challenge is the process time for large areas, which, due to the non-urgent nature of the problem, does not create a dent in the proposed method.
Location plays a very important role in geomarketing. Location tells where the customers are, identifies something in the surrounding area or solves problems regarding the location of a new outlet. However, in an urban area, the locations have a vertical component due to high-rise and multilevel buildings. This situation requires a new approach that can handle three-dimensional data for location analysis. In this research, a novel 3D data structure is introduced to manage and constellate locations in three-dimensional space. The data structure is designed based on a group of classifications and clusters, and supplemented with the additional element of nearest-neighbour information. The locations are analysed to determine a geomarketing strategy by using several methods, such as single-nearest-neighbour, k-nearest-neighbour (kNN) and reverse-k-nearest-neighbour (RkNN) analyses. These analyses are performed based on encoded neighbour information of the Voronoi diagram that is extracted from the data structure. From the results, various tasks pertaining to geomarketing strategy can be carried out, such as identifying nearby competitors, locating target customers for marketing purposes and analysing the impact of opening a new outlet on competitors. Additionally, the proposed method is tested for its ability to handle large amounts of geomarketing data in terms of its efficiency in time retrieval and storage. The data structure is compared with 3D R-Tree to analyse its performance and efficiency. 3D R-Tree is chosen because it is the most commonly used structure in spatial databases. The test demonstrates that the proposed method requires the least amount of Input/Output than 3D R-Tree. The performance of the data structure is also evaluated; the results indicate that it is outperforms it competitors by responding 60–80% faster to query operations.
Conference Paper
A novel concept for video surveillance systems based on principles of stereovision is described. Examples of existing CCTV systems that use stereovision tools and techniques are presented. The proposed concept allows the 3D coordinates of objects of interest to be estimated, based on the positioning of the fixed and pan-tilt-zoom cameras included in the surveillance system. Positioning is performed using fixed reference points with known 3D coordinates, distributed over the observed area. A technique for calculating the 3D coordinates of the reference points in a unified conventional coordinate system, using the cameras within the surveillance system, is also described.
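The core geometric idea of estimating 3D coordinates from two camera views can be sketched with the standard midpoint method: each camera defines a viewing ray, and the estimate is the midpoint of the common perpendicular between the two rays. This is a generic two-ray triangulation sketch, not the paper's specific procedure, and parallel rays (vanishing denominator) are not handled:

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def along(origin, direction, t):
    return tuple(o + t * d for o, d in zip(origin, direction))

def triangulate(c1, d1, c2, d2):
    """Midpoint of the common perpendicular between rays c1+t*d1 and c2+s*d2.

    c1, c2: camera centres; d1, d2: viewing directions (need not be unit).
    Assumes the rays are not parallel.
    """
    w = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1, p2 = along(c1, d1, t), along(c2, d2, s)
    return tuple((x + y) / 2 for x, y in zip(p1, p2))
```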
Executive Summary; Introduction; Types of Construction Firms; Information Contractors Want from BIM; Processes to Develop a Contractor Building Information Model; Reduction of Design Errors Using Clash Detection; Quantity Takeoff and Cost Estimating; Construction Analysis and Planning; Integration with Cost and Schedule Control and Other Management Functions; Use for Offsite Fabrication; Use of BIM Onsite: Verification, Guidance, and Tracking of Construction Activities; Implications for Contract and Organizational Changes; BIM Implementation
Human detection using visible surveillance sensors is an important and challenging task for intruder detection and safety management. The biggest barrier to real-time human detection is the computational time required for dense image scaling and for scanning windows extracted from an entire image. This paper proposes fast human detection by selecting optimal levels of image scale, each with its own adaptive region-of-interest (ROI). To estimate the image-scaling levels, we generate a Hough windows map (HWM) and select a few optimal image scales based on the strength of the HWM and a divide-and-conquer algorithm. Furthermore, adaptive ROIs are assigned per image scale to provide different search areas. We employ a cascade random forests classifier to separate candidate windows into human and nonhuman classes. The proposed algorithm has been successfully applied to real-world surveillance video sequences, and its detection accuracy and computational speed outperform those of other related methods.
Recently, security monitoring and control systems based on spatial information have been developed and operated in various fields as spatial information technology evolves. In particular, the CCTV monitoring and control system is a typical example used across many fields. However, security vulnerabilities have become an issue because these systems are connected to computer networks and are growing larger than before. We therefore studied the security vulnerabilities of a CCTV monitoring and control system currently being developed and operated. In addition, it is important to consider disaster and terrorism scenarios involving unauthorized changes to location information. We therefore analyzed the surveillance performance that results when cameras break down due to hacking of the CCTV monitoring and control system.
Background: Nowadays, the use of Closed Circuit Television (CCTV) systems is effective for monitoring traffic, preventing crime, and ensuring safety in many public spaces. However, effective CCTV coverage is often achieved through design experience and trial-and-error rather than being evaluated and visualized with a robust approach. Methods: Firstly, a method for simulating varifocal CCTV lenses to attain different fields of view was developed, allowing real CCTV views to be approximated by adjusting the parametric properties of simulated CCTV cameras in the 3D BIM model. Secondly, an API (Application Programming Interface) plug-in for a commercially available BIM tool was developed to facilitate the parametric modeling of CCTV systems and the evaluation of CCTV coverage. Results: A complete BIM model of an MRT (Mass Rapid Transit) station was chosen as a case study for applying the developed approach to the examination of CCTV coverage. The overall coverage of the CCTV systems for the MRT station was demonstrated visually and studied in the station's BIM model. Conclusions: This research has developed a robust visualization approach for evaluating the coverage of CCTV systems in public building spaces. The approach is based on Building Information Modeling (BIM) technology and is capable of simulating CCTV systems in a 3D virtual environment in order to evaluate CCTV coverage.
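The varifocal-lens simulation rests on the standard relation between focal length and field of view, fov = 2·atan(sensor_width / (2·f)): zooming in (larger f) narrows the view. A minimal sketch, where the sensor width, focal lengths, and the cone-based visibility test are illustrative assumptions rather than the paper's API:

```python
import math

def horizontal_fov(sensor_width_mm, focal_length_mm):
    """Horizontal field of view of a (varifocal) lens, in radians."""
    return 2 * math.atan(sensor_width_mm / (2 * focal_length_mm))

def in_view(cam_pos, cam_dir, fov, max_range, point):
    """True if `point` lies within `max_range` and the camera's FOV cone."""
    v = tuple(p - c for p, c in zip(point, cam_pos))
    dist = math.sqrt(sum(x * x for x in v))
    if dist == 0:
        return True
    if dist > max_range:
        return False
    norm = math.sqrt(sum(x * x for x in cam_dir))
    cos_a = sum(a * b for a, b in zip(v, cam_dir)) / (dist * norm)
    return math.acos(max(-1.0, min(1.0, cos_a))) <= fov / 2
```

For example, a 6.4 mm sensor behind a 3.2 mm lens gives a 90-degree horizontal FOV; adjusting the focal length to 12.8 mm narrows it to roughly 28 degrees, which is how a simulated varifocal camera attains different fields of view.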
Conference Paper
7-11 August, 2006, San Diego, CA, USA. Visual surveillance, e.g. CCTV, is now an essential part of the urban infrastructure in modern cities. One of the primary aims in visual surveillance is to ensure maximum visual coverage of an area with the least number of surveillance installations, which is an NP-hard maximal coverage problem. The planning of visual surveillance is a highly sensitive and costly task that has traditionally been done through a gut-feel process of establishing sight lines in CAD software. This paper demonstrates the ArcView extension Isovist Analyst, which automatically identifies a minimal number of potential visual surveillance sites that ensure complete visual coverage of an area. The paper proposes a Stochastical Rank and Overlap Elimination (S-ROPE) method, which iteratively identifies the optimal visual surveillance sites. The S-ROPE method is essentially based on a greedy search technique, improved by a combination of a selective sampling strategy and random initialisation.
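The greedy core of such a site-selection method can be sketched as a set-cover loop: repeatedly take the candidate site that sees the most still-uncovered target cells. The actual S-ROPE method adds stochastic sampling and rank/overlap elimination; this shows only the greedy skeleton, with illustrative names:

```python
def greedy_site_selection(candidates, targets):
    """Greedy maximal-coverage pick.

    candidates: dict mapping site id -> set of target cells it can see.
    targets: set of all cells that need coverage.
    Returns (chosen sites in pick order, cells left uncovered).
    """
    uncovered = set(targets)
    chosen = []
    while uncovered:
        site, gain = max(
            ((s, len(cells & uncovered)) for s, cells in candidates.items()),
            key=lambda item: item[1])
        if gain == 0:
            break  # remaining cells are visible from no candidate site
        chosen.append(site)
        uncovered -= candidates[site]
    return chosen, uncovered
```

Because maximal coverage is NP-hard, the greedy pick is a heuristic: it gives a good (provably within a constant factor for set cover) but not necessarily minimal set of sites, which is why S-ROPE layers sampling and overlap elimination on top.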
Recently, the establishment of high-accuracy 3D spatial information has been greatly stimulated by the growing need for such information. In the construction and civil engineering fields, studies have been conducted on increasing productivity by converging these fields with others using the established 3D spatial information. Against this backdrop, BIM (Building Information Modeling) technologies have been rapidly adopted in construction and civil engineering. In particular, for the plan-design-construction-maintenance life span that characterizes these fields, BIM application methods and plans tailored to each step have been proposed. The objective of this study is therefore to simulate a project that is reasonable and can be optimized by connecting 3D spatial information with BIM technologies, moving beyond the conventional civil construction process based on empirical and statistical databases and 2D information. To achieve this objective, 3D terrain data for the study area was established using aerial photographs and airborne LiDAR. In addition, a countermeasure for issues that cannot be solved with conventional civil-works project management methods is applied by implementing bridge-based civil-structure BIM combined with object information.
We examine the impacts of public surveillance cameras on crime and disorder in Schenectady, New York, a medium-sized city in the northeastern United States. We assessed camera impacts by analyzing monthly counts of crime and disorder-related calls for service that occurred within each camera's 150-foot viewshed as an interrupted time series, with the interruption at the time the camera in question was activated. We also analyzed counts of incidents between 150 and 350 feet from cameras to assess displacement effects and diffusion of benefits. We further estimated camera effects on counts of incidents in public locations only (street crimes). Our study suggests that cameras have had effects on crime, even more consistent effects on disorder, and that the visibility of a camera is associated with its impact on crime and disorder. We conclude by discussing the implications of the findings and the questions to which future research should be directed.
The selection of optimal camera configurations (camera locations, orientations, etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing various objective functions to achieve different tasks; most of them, however, do not generalize well to large-scale networks. To tackle this, we propose a statistical framework for the problem along with a Trans-Dimensional Simulated Annealing (TDSA) algorithm to deal with it effectively. We compare our approach with a state-of-the-art method based on Binary Integer Programming (BIP) and show that it offers similar performance on small-scale problems. However, we also demonstrate the capability of our approach to deal with large-scale problems and show that it produces better results than two alternative heuristics designed to address the scalability issue of BIP. Last, we show the versatility of our approach in a number of specific scenarios.
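The annealing idea behind such optimizers can be illustrated with a plain, fixed-dimension simulated annealing sketch; the trans-dimensional variant additionally proposes adding or removing cameras, which is not shown here, and all names below are illustrative:

```python
import math
import random

def anneal(initial, energy, neighbour, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated annealing: minimise `energy` by accepting worse
    neighbours with probability exp(-delta/T) while T cools geometrically.

    neighbour(state, rng) proposes a perturbed state (e.g. jittering
    one camera's position or orientation in a placement problem).
    """
    rng = random.Random(seed)
    state, e = initial, energy(initial)
    best, best_e = state, e
    t = t0
    for _ in range(steps):
        cand = neighbour(state, rng)
        ce = energy(cand)
        # always accept improvements; accept worse moves with Boltzmann probability
        if ce < e or rng.random() < math.exp(-(ce - e) / t):
            state, e = cand, ce
            if e < best_e:
                best, best_e = state, e
        t *= cooling
    return best, best_e
```

For camera placement, `energy` would typically be the number of uncovered cells (or a penalized combination of uncovered area and camera count), so lower energy means better coverage.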
Over recent years there has been a proliferation of Closed Circuit Television (CCTV) cameras in public and private settings in a bid to increase security and combat crime. Whilst concern abounds among citizens that the use of these cameras is an invasion of personal privacy, governments and organisations have continued to view them as a panacea in the fight against crime and public disorder. Drawing on a research project currently being undertaken in a metropolitan railway environment, this paper aims to address a gap in the CCTV literature by examining the use of CCTV cameras as 'safety protection' for railway transit officers. These transit officers, who have powers similar to those of police on railway property, provide the frontline of deterrence against anti-social behaviour and violence on the rail system. Like police, these transit officers are subject to similar investigative procedures following any complaint received from a member of the public regarding their handling of an incident. However, radioing the monitoring room and calling for a camera to be focused on them as they deal with members of the public has a number of advantages. Firstly, the camera footage provides a 'security blanket' for the transit officers should any complaint be received by the organisation that they handled a situation inappropriately; secondly, it provides evidence against an offender in any subsequent court action arising out of an incident; and thirdly, it allows the situation to be monitored and additional support to be deployed to the area should the situation warrant it. Based on the researchers' observations both working with railway transit officers and in the central monitoring room of the railway organisation, this paper explores the present use of CCTV cameras in this environment and how this technology could evolve in the future.