June 2005
CLOSE-RANGE PHOTOGRAMMETRY FOR
ACCIDENT RECONSTRUCTION
Clive Fraser, Harry Hanley and Simon Cronk
Department of Geomatics
University of Melbourne
Victoria 3010 Australia
Email: (c.fraser, hhanley, cronks)@unimelb.edu.au
Abstract: Throughout the last decade forensic scientists, technicians and police have
employed a number of 3D measurement tools for crime scene and accident reconstruction.
These have ranged from the basic, such as EDM instruments, to the complex, namely
terrestrial laser scanners. In the field of traffic accident reconstruction, close-range
photogrammetry is now being adopted, primarily because of the greatly reduced on-scene
time, which leads to shorter periods of traffic disruption. The fact that a permanent visual
record is also obtained, from which 3D measurements can be made at any time, is a further
notable benefit. However, for successful application of close-range photogrammetric
techniques in accident reconstruction a few important issues must first be dealt with. These
include accommodation of the generally very poor, near-planar network geometry
encountered and the need for maximum ease of use, from which follows the requirement for
highly automated processing and fully automatic camera calibration. This paper reports upon
two innovative developments undertaken to enhance the applicability of close-range
photogrammetry and consumer-grade digital cameras to accident reconstruction. The
developments comprise a new approach to robust on-line image orientation and a method for
automatic camera calibration which employs colour coded targets. They are highlighted via
the iWitness system, which has been developed primarily for accident scene reconstruction
and forensic measurement applications.
1. INTRODUCTION
The aim of traffic accident reconstruction (AR) is, as the name implies, to reconstruct motor
vehicle collision scenes. Whether the final requirements of the AR process are to assist in
calculations (such as vehicle speed), to analyse the dynamics of the collision event(s), to
provide evidence in a subsequent court case, or for some other purpose, an essential first step
is to accurately characterise the dimensions of the accident scene. The comprehensiveness
required can vary depending upon the ultimate use of the ‘mapping’ data produced. For
example, a vehicle manufacturer or traffic engineer might need a detailed 3D reconstruction,
while the local police force may only require simple 2D documentation in recognition of the
fact that if the accident does not result in subsequent legal proceedings, then the AR data will
likely never be used. Unfortunately, it is not always known at the time of the accident whether
court proceedings will eventuate. In most jurisdictions, accidents involving fatalities must be
surveyed and mapped. The term ‘diagramming’ is used in the US to describe this
documentation process, since the final outcome is typically a CAD drawing in the first
instance, which may be further developed into a 3D model and even an animation.
Shown in Figs. 1 and 2 are examples of CAD drawings for two accident scenes. In the context
of 3D modelling, both are reasonably simple representations. Also, both could be adequately
accomplished with 2D surveying, at its simplest represented by the measurement of distances
along and offset from a ‘baseline’ (e.g. road edge or centreline), as was traditionally done.
However, with the enhanced scrutiny of any evidence in a court, and the need for the AR data
collection process to disrupt traffic as little as possible, the requirement has arisen for
more comprehensive and accurate data to be recorded in the shortest time possible. More
recently, total stations, laser range finders with angle encoders and even laser scanners have
been used. In the case of expensive laser scanning technology, however, adoption has been
mainly confined to research laboratories and large centralised accident investigation agencies.
These technologies have resulted in more comprehensive 3D modelling, but not necessarily
faster data acquisition at the accident scene. Moreover they are relatively expensive and
complex for local police and traffic agencies.
Figure 1: Example CAD drawing for AR illustrating object features of interest (courtesy of
DeChant Consulting Services – DCS [2]).
In the US alone, there are in excess of 10,000 law enforcement agencies. In relation to AR
these range from city and county police to state highway patrols. For the large number of
local agencies involved in AR, a technology is needed that can offer very low-cost, flexible
mapping of accidents with an absolute minimum of on-scene recording time. These
imperatives have seen attention turn to close-range photogrammetry. Indeed, a low-cost
photogrammetric software suite, called iWitness [10], has been designed and developed
primarily for AR and forensic measurement. Our purpose in this paper is not so much to extol
the virtues of photogrammetry to readers who are quite familiar with the technology, but
rather to consider some of the distinctive characteristics of AR that call for special attention
when designing a purpose-built close-range photogrammetric system.
2. iWitness OVERVIEW
The iWitness system is characterised by a new paradigm within its image measurement and
photogrammetric processing, namely automatic on-line computations which are never specifically
invoked but occur automatically in the background with every image point ‘referencing’. We
will present a short overview of iWitness, after which we will concentrate on two
developments to enhance the application of affordable close-range photogrammetry to AR.
The first of these concerns initial network orientation, which is greatly complicated by the
near-planar object point fields encountered in AR. The second is fully automatic camera
calibration for the consumer-grade digital cameras that are employed with iWitness.
(a) plan view
(b) perspective view of model
Figure 2: CAD reconstruction of traffic accident scene (courtesy of [2]).
As iWitness was primarily designed for AR and forensic measurement, it generates attributed
point clouds, with the attributes primarily being lines which are preserved in the export of
object coordinate data in DXF format. The system is designed to interface with CAD and
modelling packages, especially with CAD systems from CAD Zone [1]. The graphical user
interface of iWitness is illustrated in Fig. 3, which shows the vehicle collision survey from
which the ‘diagramming’ shown in Fig. 1 was produced. iWitness has many features over and
above the orientation and calibration developments that are to be discussed here. These
include fully automatic initiation of all computational functions and automatic recognition of
the camera(s) via information contained within the EXIF header of the JPEG or TIFF images.
Also included is a ‘Review Mode’ whereby it is possible to interactively review all image
point observations and to adjust these where appropriate, again with on-line and immediate
updating of the photogrammetric bundle adjustment. A quality measure indicates any
subsequent improvement or degradation in the spatial intersection accuracy as this review
process is undertaken. This provides an effective error detection and correction capability.
iWitness also supports a centroiding feature which facilitates semi-automatic image point
measurement of artificial targets, and even some natural targets, to an accuracy of up to 0.03
pixels.
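The centroiding computation itself is standard; the following minimal sketch (not the iWitness implementation, whose details are unpublished) shows intensity-weighted centroiding over a small target window, the general technique by which sub-pixel target measurement of this kind is achieved. The background threshold value is an illustrative assumption.

```python
import numpy as np

def weighted_centroid(window, background=30.0):
    """Intensity-weighted centroid of a bright target in a small image window.

    window: 2D greyscale array cropped around one target.
    background: illustrative threshold used to suppress the background.
    Returns sub-pixel (row, col) coordinates within the window.
    """
    w = window.astype(float) - background
    w[w < 0] = 0.0                      # keep only above-background intensity
    rows, cols = np.indices(w.shape)
    total = w.sum()
    return (rows * w).sum() / total, (cols * w).sum() / total
```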
3. NETWORK GEOMETRY IN AR
As can be imagined, feature points of interest in an AR survey tend to be near planar in their
distribution, since the majority lie on or near the road surface. A traffic accident scene can be
50-100m or more in length, but often displays a vertical range of interest of only a few metres
or less. Long and thin near-planar object point arrays hardly constitute a favourable geometric
configuration for close-range photogrammetry. The problem is aggravated by the fact that the
camera stations also lie close to the average plane of the object target array. This is well
illustrated in Fig. 4, which is both a real and generally representative AR network. When one
looks at the plan view, Fig. 4a, the photogrammetrist’s response is that the multi-image
geometry, while by no means optimal, is reasonable. A look at the side elevation plot, Fig
4b, produces a more emphatic response: This is very unfavourable camera station geometry
from which to build an initial relative orientation (irrespective of the chosen image pairs) and
subsequent multi-image network for bundle adjustment.
Figure 3: iWitness user interface; the CAD diagram in Fig. 1 is from this survey.
However, this is precisely what is required without the aid of any object space control. About
the only support to the photogrammetric orientation process is the use of ‘evidence
markers’ [2], which are back-to-back targets, as illustrated in Fig. 5. These face horizontally
and can be semi-automatically measured in iWitness via an operator-assisted centroiding
function [5]. While evidence markers facilitate accurate conjugate point referencing from
opposite directions, they do nothing to enhance the otherwise weak network geometry. The
near-planar point distribution can be mitigated by including, for example, feature points on the vehicles
involved, street signs, traffic cones and even tripods. However, the fact remains that from a
photogrammetric perspective the most challenging part of AR applications is network
orientation. To conquer this problem, iWitness needed to incorporate some innovative
orientation procedures, especially for relative orientation.
4. ROBUST ON-LINE EXTERIOR ORIENTATION
The camera station and object point configuration shown in Fig. 4 illustrates well that
photogrammetric network geometry in AR can be complex; far more so in fact from a sensor
orientation standpoint than the stereo geometry of topographic photogrammetry or the
binocular stereo or wide baseline geometries encountered in computer vision. Coupled with
the often highly-convergent and multi-magnification camera station arrangements are object
point geometries which may be unsuited to relative orientation and spatial resection.
(a) plan view
(b) side elevation (tilted for easier interpretation)
Figure 4: Typical near-planar geometry of photogrammetric networks for AR.
Figure 5: Evidence markers placed on features of interest (photo courtesy of [2]).
Photogrammetrists rely upon two basic mathematical models for sensor orientation: the
coplanarity equation for relative orientation, and the collinearity equations for spatial
resection, intersection and multi-image bundle adjustment (exterior orientation), with or
without camera self-calibration. In their linearized form, both constitute parametric models
which are solved via an iterative least-squares adjustment of initial values for the parameters.
In the iWitness image measurement and orientation paradigm, where the least-squares bundle
adjustment is updated as each new observation is made, it is imperative that the initial values
of the parameters of exterior orientation are determined with sufficient accuracy and
reliability to ensure solution convergence.
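For reference, the two models take their standard forms (generic notation, not specific to iWitness): the coplanarity condition requires the baseline and the two conjugate rays to be coplanar, while the collinearity equations relate object coordinates (X, Y, Z) to image coordinates (x, y) through the exterior orientation and the principal distance c:

```latex
% Coplanarity condition (relative orientation of an image pair):
\mathbf{b} \cdot (\mathbf{r}_1 \times R\,\mathbf{r}_2) = 0
% b: baseline vector; r_1, r_2: conjugate image ray directions;
% R: rotation between the two camera frames.

% Collinearity equations (resection, intersection, bundle adjustment):
x - x_p = -c\,\frac{r_{11}(X - X_c) + r_{12}(Y - Y_c) + r_{13}(Z - Z_c)}
                   {r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)},
\quad
y - y_p = -c\,\frac{r_{21}(X - X_c) + r_{22}(Y - Y_c) + r_{23}(Z - Z_c)}
                   {r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}
% (X_c, Y_c, Z_c): perspective centre; r_ij: rotation matrix elements;
% (x_p, y_p): principal point.
```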
Traditionally, there have been only two approaches adopted for the determination of
preliminary exterior orientation in close-range photogrammetry. The first of these involves
the use of object space ‘control points’ with known or assigned XYZ coordinate values. These
points, of which four or more must appear in at least two images in the network, then
facilitate closed-form spatial resection. Spatial intersection can then follow to establish the
object coordinates of further image points, which in turn can support resection of further
images, and spatial intersection of additional points, and so on. Nowadays, the use of exterior
orientation (EO) devices is popular in industrial vision metrology systems [4,6] as a practical
means of providing the necessary 3D control points for automated initial exterior orientation.
A second approach, which has not been widely adopted, is initial relative orientation (RO).
The attractiveness of RO is simply that it requires no object space coordinate data. Moreover,
it is well suited to image measurement scenarios where conjugate points are ‘referenced’
between two images, point by point, for example within a stereoscopic model. It is well
known that for a given image pair, a minimum of five referenced points is required to solve
for the unknown parameters in a dependent RO via the coplanarity model. It is also well
established that for convergent imaging geometry, good initial parameter approximations are
required to ensure convergence of the iterative least-squares solution. With the addition of the
third and subsequent images, resection would follow. Here too, good starting values are
necessary, though unlike the situation with RO, there are well recognised closed-form and
two-stage solutions for the resection problem. The most pressing problem in developing a
robust, reliable solution for RO in iWitness was finding a method for generating initial values
for the five RO parameters of rotation (3) and relative translation (2). Our experience with the
least-squares solution to the coplanarity equation is that it is very stable when representative
initial parameter values are available, even in situations of very poor geometry.
There has been a wealth of literature within the computer vision community since the
Essential Matrix formulation for solving in a linear manner the position and orientation of one
camera with respect to another was introduced by Longuet-Higgins [9]. The essential matrix
formulation implicitly assumes ‘calibrated’ cameras, or in photogrammetric terms, known
interior orientation. An ‘uncalibrated’ version of the essential matrix is the Fundamental
Matrix [7]. From reviewing the literature one receives the impression that these approaches
had great promise as a means to solve the RO problem. This is notwithstanding concerns that
linear solutions for the essential and fundamental matrices are prone to ill-conditioning and
the generation of both erroneous solutions and matrices which are not always decomposable.
Regrettably, while there are many publications dealing with theoretical and algorithmic
aspects of the essential matrix approach, there are not too many that give a comprehensive
experimental analysis of the method, especially in cases of poor geometry. As an aside, we
can disregard the fundamental matrix in a photogrammetric context as we always have a
reasonable initial interior orientation or ‘calibration’. Most consumer-grade digital cameras
write the zoom focal length to the EXIF header of the image file and while this does not
constitute a photogrammetric principal distance, our experience is that it is generally within
5% of the correct figure.
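In standard notation, the linear epipolar models under discussion are as follows; with homogeneous image coordinates x and x′ of a conjugate point pair:

```latex
% Epipolar constraint and essential matrix (calibrated case):
\mathbf{x}'^{\top} E\, \mathbf{x} = 0, \qquad E = [\mathbf{t}]_{\times} R
% t: relative translation (baseline); R: relative rotation;
% [t]_x: skew-symmetric cross-product matrix of t.

% Fundamental matrix (uncalibrated case), with calibration matrices K, K':
F = K'^{-\top} E\, K^{-1}
% Eight or more point pairs permit a linear solution for E or F up to scale.
```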
An evaluation of the essential matrix approach for the estimation of initial RO parameters in
iWitness was undertaken. Our endeavours, however, were not successful in the context of
producing a robust, scene independent RO solution that would be amenable to later
refinement via the rigorous coplanarity model. We could immediately discount the prospect
of success with near-planar objects, since this is a known failure case – but a geometry that is
unfortunately prevalent in AR. We were cautious, however, knowing that either a
normalisation process, RANSAC approach or maybe even clever interpretation of the results
of a singular value decomposition (and possibly two) could well be necessary to enhance the
prospects of success. Also, there were precedents for adoption of the approach in close-range
photogrammetry [11], so we persevered – but not for long. In summary, we found the method
unreliable and unstable for an application demanding at least a 95% success rate. We also
found it unsuited to AR and to the on-line computational scenario utilized in iWitness, which
seeks to solve the RO as soon as 6 point pairs (8 in the essential matrix case) are referenced.
In hindsight we should have taken heed of a comment made by Horn [8]: “Overall, it seems
that the two-step approach to relative orientation, where one first determines an essential
matrix, is the source of both limitations and confusion”. Or maybe we should have been more
suspicious of a method that solves an inherently non-linear problem via a linear model. One
can reminisce here on photogrammetric experience with the direct linear transformation.
In our search for a robust procedure for relative orientation in iWitness we have settled upon a
Monte Carlo type strategy whereby a very large number of possible relative orientations are
assessed for the available image point pairs. The refined solution in each case is obtained via
the coplanarity model using combinations of plausible initial values (there could be hundreds
of these). From the number of qualifying solutions obtained for the first five point pairs, the
most plausible are retained. But, no RO results are reported to the user at this time, as there
may be quite a number in cases of weak geometry, compounded by noisy data, and therefore
leading to the likelihood of ambiguous solutions. This process takes only a fraction of a
second. Then, as point pairs are successively observed the computation is repeated, with the
aim being to isolate the most probable solution from the ever fewer qualifying candidates.
Once there is a sufficient degree of certainty as to the correct solution, the orientation
computation swings from a coplanarity to a collinearity model, namely to a bundle
adjustment. In cases of reasonable network geometry and camera calibration, a successful RO
is typically reported to the operator after seven point pairs are ‘referenced’. For weaker
geometry and/or very poor calibration the number of required point pairs may rise to 8 or 9
and occasionally to more than 10.
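In outline, the strategy can be sketched as below. This is a deliberately simplified illustration with assumed tolerances, and without the candidate-qualification and pruning logic of the operational system, which is not published; it refines randomly sampled initial values of the five dependent-RO parameters (baseline direction: 2; rotation: 3) against the coplanarity residuals:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def coplanarity_residuals(params, rays1, rays2):
    """Residuals b . (x1 x R x2) for each referenced point pair.

    params = (az, el, omega, phi, kappa): spherical baseline direction
    (scale is indeterminate in RO) plus three rotation angles.
    rays1, rays2: N x 3 arrays of calibrated image ray directions.
    """
    az, el = params[:2]
    b = np.array([np.cos(el) * np.cos(az),
                  np.cos(el) * np.sin(az),
                  np.sin(el)])                      # unit baseline (2 DOF)
    R = Rotation.from_euler('xyz', params[2:]).as_matrix()
    return np.cross(rays1, rays2 @ R.T) @ b

def monte_carlo_ro(rays1, rays2, n_trials=200, tol=1e-4, seed=0):
    """Refine many plausible starting values; keep low-residual solutions."""
    rng = np.random.default_rng(seed)
    candidates = []
    for _ in range(n_trials):
        x0 = rng.uniform(-np.pi, np.pi, 5)          # plausible initial values
        sol = least_squares(coplanarity_residuals, x0, args=(rays1, rays2))
        if sol.success and np.max(np.abs(sol.fun)) < tol:
            candidates.append(sol.x)
    return candidates   # pruned further as more point pairs are referenced
```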
A similar approach to checking plausible orientation solutions on-line is employed when new
images are added to an already oriented network. This time, spatial resection computations
are performed via a closed-form algorithm similar to that described in [3]. Generally, the
criteria for a correct solution are met after 5 to 6 point pairs are referenced, though in
favourable cases only four points are required. Once resection is successful, the image is
added to the network and on-line bundle adjustment is used to integrate subsequent image
point observations. This unique approach to on-line exterior orientation is a very powerful and
popular feature of iWitness since it is robust, very well suited to blunder detection, and occurs
instantly and automatically.
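The closed-form resection of [3] is the classical P3P solution. Purely as an illustration of the computation (the coordinate values below are invented, and this is an off-the-shelf substitute rather than the iWitness algorithm), an equivalent minimal resection can be performed with OpenCV:

```python
import numpy as np
import cv2

# Four known 3D points (from spatial intersection) and their image
# measurements in the new photo; K is the current camera calibration.
# All values here are illustrative only.
obj_pts = np.array([[0, 0, 0], [4, 0, 0], [4, 3, 0.5], [0, 3, 0.2]],
                   dtype=np.float64)
img_pts = np.array([[812, 1040], [1630, 1015], [1598, 420], [840, 455]],
                   dtype=np.float64)
K = np.array([[2100, 0, 1296], [0, 2100, 972], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None,
                              flags=cv2.SOLVEPNP_P3P)
# rvec/tvec give the exterior orientation used to seed the on-line
# bundle adjustment once the resection criteria are met.
```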
5. AUTOMATIC CAMERA CALIBRATION
The requirements for camera self-calibration are well recognised: a multi-image, convergent
camera station geometry, which incorporates orthogonal camera roll angles, along with an
object point array which yields well distributed points throughout the format of the images,
and initial starting values for the camera calibration parameters. With the exception of the
focal length, these initial values may be taken as zero. The accurate modelling of lens
distortion is assisted by having well distributed image points throughout the image format.
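The parameters recovered in such a self-calibration conventionally comprise the principal distance c, principal point offsets (x_p, y_p) and lens distortion coefficients. The dominant radial distortion term, whose modelling benefits from the well distributed image points just mentioned, takes the standard odd-order polynomial form:

```latex
% Radial lens distortion at radial distance r from the principal point:
\Delta r = K_1 r^{3} + K_2 r^{5} + K_3 r^{7}
% K_1, K_2, K_3 are estimated in the self-calibrating bundle adjustment,
% typically together with c, x_p, y_p and decentring terms P_1, P_2.
```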
With the facility described earlier for robust exterior orientation, forming a self-calibrating
bundle adjustment network simply requires the provision of the image point correspondences,
i.e. the (x, y) image coordinates for all matching points. As is now common, the approach to
ensuring fast and accurate matching of image point features in iWitness is based on coded
targets. Novel in the method developed, however, is the use of colour in the codes.
Traditionally, codes employed in close-range photogrammetry are geometric arrangements of
white dots or shapes on a black background [4]. These geometrically coded targets require
optimal exposure to ensure a near binary image is obtained. Such a requirement may be
practical for the controlled environments of industrial photogrammetry, but it does not suit the
conditions encountered in AR and it does not take advantage of one of the most prominent
characteristics of today’s digital cameras, namely that they produce colour (RGB) imagery.
The colour codes designed to facilitate fully automatic calibration in iWitness are shown in
Fig. 6 (albeit without colour due to the greyscale image). Note that the geometric arrangement
of the 5-dot pattern is the same; only the colour arrangement varies. Red and green dots are
employed to yield 32 (2⁵) distinct codes. The blue channel is not utilised in the code approach
since the green and red channels yield a far superior response. Once the code dots are
detected, a colour transformation process is used to isolate the red/green arrangement and so
identify the code. The adoption of colour codes has afforded a more flexible automatic self-
calibration procedure.
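The decoding details are not given in this paper, but the principle stated above implies a scheme along the following lines (a hypothetical sketch; the dot ordering and the channel test are illustrative assumptions): once the five dots of a detected pattern are sorted into their fixed geometric order, each contributes one bit according to whether red or green dominates.

```python
def decode_colour_code(dot_rgbs):
    """Map five detected code dots to one of 32 (2**5) code identities.

    dot_rgbs: five (R, G, B) samples, already sorted into the fixed
    geometric order of the dot pattern. The blue channel is ignored,
    mirroring the red/green scheme described above.
    """
    bits = [1 if r > g else 0 for r, g, _ in dot_rgbs]
    return sum(bit << i for i, bit in enumerate(bits))   # 0..31

# e.g. three reddish and two greenish dots:
print(decode_colour_code([(190, 40, 30), (35, 180, 40), (200, 55, 25),
                          (30, 170, 45), (185, 60, 35)]))
```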
Figure 6: Automatic camera calibration in iWitness; note the array of 12 colour coded targets.
As for the placement of the codes, it is usually most convenient to simply sit them on the
floor, with one or more being out of plane. Non-planarity of codes is not essential for a
comprehensive camera calibration, but generally aids in both the initial network orientation,
as previously described, and in reducing projective coupling between the interior and exterior
orientation parameters. This enhances the precision of the recovered calibration. It has been
mentioned that an initial value for focal length is required; however, this is not really the case
for the operational system. The procedure again follows a trial and error scenario where
multiple principal distance values are tested as the network is being formed and the most
plausible value is taken as the initial estimate within the final self-calibrating bundle
adjustment. Also shown in Fig. 6 is a typical network for automatic calibration based on
colour codes. The codes are purposefully chosen to be relatively large, not to aid in
recognition or measurement, but to constitute a sub-group of points. Thus, rather than being
treated as a single point, each code forms a bundle of five rays, as is seen in the figure. This
means that a broader distribution of image point locations is achieved, which adds strength to
the photogrammetric network.
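Schematically, the principal distance trial can be pictured as the loop below. Here form_network and network_rms are hypothetical placeholders for the on-line orientation machinery of Section 4, passed in as callables, and the candidate range and steps are illustrative assumptions:

```python
def best_principal_distance(images, exif_focal_px, form_network, network_rms):
    """Trial-and-error selection of an initial principal distance.

    exif_focal_px: EXIF focal length converted to pixels (typically within
    ~5% of the true value, as noted in Section 4).
    form_network(images, c): hypothetical stand-in returning an oriented
    network, or None on failure; network_rms(net): its image residual RMS.
    """
    trials = []
    for scale in (0.8, 0.9, 1.0, 1.1, 1.2):        # bracket the EXIF value
        c = exif_focal_px * scale
        net = form_network(images, c)
        if net is not None:
            trials.append((network_rms(net), c))
    return min(trials)[1]   # most plausible value seeds the self-calibration
```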
6. CONCLUDING REMARKS
The two innovations described for enhancing the utility, robustness and flexibility of digital
close-range photogrammetric systems employing off-the-shelf cameras are incorporated in
iWitness. Although the development of a new exterior orientation process and an automatic
camera calibration strategy utilising colour coded targets was driven by the needs of the AR
and forensic measurement sector, these innovations are equally applicable to a wide range of
close-range, image-based 3D measurement tasks. The combination of iWitness and an off-the-
shelf digital camera of greater than 3 megapixel resolution affords prospective users of close-
range photogrammetry the ability to undertake measurement tasks requiring accuracies of
anywhere from 1:1000 to better than 1:50,000 of the size of the object, for as little as $2000.
REFERENCES
[1] CAD Zone: http://www.cadzone.com (Web site accessed May 20, 2005).
[2] DeChant Consulting Services – DCS, Inc.: http://www.photomeasure.com (Web site
accessed May 20, 2005).
[3] Fischler, M.A. and Bolles, R.C.: Random Sample Consensus: A Paradigm for Model
Fitting with Applications to Image Analysis and Automated Cartography,
Communications of the ACM, 24(6), 381-395, 1981.
[4] Fraser, C.S.: Innovations in Automation for Vision Metrology Systems,
Photogrammetric Record, 15(90), 901-911, 1997.
[5] Fraser, C.S. and Hanley, H.B.: Developments in Close-Range Photogrammetry for 3D
Modelling: the iWitness Example, Int. Workshop: Processing and Visualization using
High-Resolution Imagery, Phitsanulok, Thailand, 18-20 November, 2004.
[6] Ganci, G. and Hanley, H.: Automation in Videogrammetry, International Archives of
Photogrammetry and Remote Sensing, 32(5), 53-58, 1998.
[7] Hartley, R.I. and Zisserman, A.: Multiple View Geometry in Computer Vision,
Cambridge University Press, 2000.
[8] Horn, B.K.P.: Recovering Baseline and Orientation from Essential Matrix,
http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/6-801Fall-
2004/Readings/, 10 pages, 1990.
[9] Longuet-Higgins, H.C.: A Computer Algorithm for Reconstructing a Scene from Two
Projections, Nature, 293, 133-135, 1981.
[10] Photometrix: http://www.photometrix.com.au (Web site accessed May 20, 2005).
[11] Roth, G.: Automatic Correspondences for Photogrammetric Model Building,
International Archives of Photogrammetry, Remote Sensing and Spatial Information
Sciences, 35(B5), 713-718, 2004.