
Towards automated photogrammetry

Abstract

Optical form measurement techniques, including close-range photogrammetry and fringe projection profilometry, are increasing in popularity due to their high-speed data acquisition and the non-contact nature of the measurement. However, these techniques are often labour intensive, computationally expensive and user dependent. A system in which an object can be placed and measured at the press of a button is therefore highly desirable, but has thus far been out of reach. Potential paths towards this fully automated measurement pipeline are now coming into view due to the maturity of machine learning (ML) techniques. Developments in other sectors, such as the large research effort in computer vision, have great potential to be adapted for the optical metrology sector. In this presentation we discuss how ML can be applied to many different parts of the measurement and data post-processing pipeline to move toward full automation. We present our own contribution to this effort, which has thus far included work on camera characterisation, object pose estimation, view planning and surface data synthesis.
Joe Eastwood, Danny Sims-Waterhouse, Samanta Piano, Ralph Weir, Richard Leach
Towards automated photogrammetry
Photogrammetry 2–6
PHOTOGRAMMETRIC PIPELINE
1. Take images (usually ~60)
2. Detect features in each image, using the Scale Invariant Feature Transform (SIFT) algorithm with a difference of Gaussians
3. Find correspondences between images
4. Estimate camera positions and a sparse point cloud
5. Densify the point cloud (often to millions of points), typically using an algorithm such as Patch-based Multi-View Stereopsis (PMVS)
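The feature-detection step of the pipeline can be made concrete with a small sketch of the difference-of-Gaussians (DoG) idea behind SIFT. The test image, sigma values and threshold below are illustrative; a real pipeline would use an established implementation (e.g. OpenCV's SIFT) rather than this hand-rolled version.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via direct 1-D convolutions."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

def dog_keypoints(img, sigmas=(1.0, 1.6, 2.6, 4.2), thresh=0.01):
    """Return (row, col) of scale-space extrema of the DoG stack."""
    stack = np.stack([gaussian_blur(img, s) for s in sigmas])
    dog = stack[1:] - stack[:-1]          # DoG between adjacent scales
    keypoints = []
    for s in range(1, dog.shape[0] - 1):  # interior scales only
        for i in range(1, img.shape[0] - 1):
            for j in range(1, img.shape[1] - 1):
                patch = dog[s-1:s+2, i-1:i+2, j-1:j+2]
                v = dog[s, i, j]
                if abs(v) > thresh and (v == patch.max() or v == patch.min()):
                    keypoints.append((i, j))
    return keypoints

# A single bright blob should yield a keypoint near its centre.
img = np.zeros((32, 32))
img[14:18, 14:18] = 1.0
kps = dog_keypoints(img)
```

Real SIFT additionally assigns orientations and 128-dimensional descriptors to each keypoint, which is what makes the cross-image correspondence step possible.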
Is there a better way? 7
Automated measurement planning
Camera pose initialisation (pose estimation)
Automated image acquisition
Photogrammetric reconstruction
Automated camera characterisation
(Diagram: Cameras 1–5 arranged around the measurement volume)
The problem: using a priori knowledge, can we pre-optimise the measurement procedure, then carry out the measurement autonomously?
CAD to the rescue
Visible points 8
In order to assess the ‘quality’ of a camera position,
we need to be able to detect which surface points are visible
Hidden point removal: efficient, but performs poorly at edges
Ray tracing: expensive, but reliable
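A toy version of the ray-casting option makes the visibility test concrete: a surface point is visible from a camera if the ray towards it first hits the surface at (approximately) that point. The sphere geometry and camera position below are purely illustrative stand-ins for the CAD model.

```python
import numpy as np

def first_sphere_hit(origin, direction, centre=np.zeros(3), radius=1.0):
    """Nearest intersection of a unit-direction ray with a sphere, or None."""
    oc = origin - centre
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius**2
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0   # nearest root
    return origin + t * direction if t > 0 else None

def visible_points(camera, points, tol=1e-6):
    """Indices of surface points whose first ray hit is the point itself."""
    idx = []
    for i, p in enumerate(points):
        d = p - camera
        d = d / np.linalg.norm(d)
        hit = first_sphere_hit(camera, d)
        if hit is not None and np.linalg.norm(hit - p) < tol:
            idx.append(i)
    return idx

camera = np.array([3.0, 0.0, 0.0])
points = np.array([[1.0, 0, 0],    # near pole: faces the camera
                   [-1.0, 0, 0],   # far pole: occluded by the sphere itself
                   [0, 1.0, 0]])   # just past the horizon: also occluded
vis = visible_points(camera, points)
```

The expense the slide mentions comes from doing this per surface point, per candidate camera, against the full mesh; hidden point removal trades that cost for the edge artefacts noted above.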
Measurement planning 9
Minimise the number of camera positions while maintaining reconstruction quality.
First, find a good starting position by maximising the sum of the visible surface points.
Then, use a genetic algorithm (GA) to optimise the global camera positions.
(The objective function includes terms that encourage views normal to the surface and a 90° inter-camera angle.)
Iteration 10
14 images, down from 60!
Starting with two camera positions, the objective function is maximised by the GA.
If a threshold value is not met, a new position is added.
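The iteration loop on this slide can be sketched as follows. The 2-D "object", the coverage model (a surface point counts as seen if some camera sits within 60° of its normal) and all GA hyper-parameters are illustrative stand-ins for the real visibility-based objective.

```python
import math, random

random.seed(0)
POINTS = [2 * math.pi * i / 100 for i in range(100)]  # surface normal angles

def coverage(cams):
    """Fraction of surface points seen by at least one camera angle."""
    seen = 0
    for p in POINTS:
        for c in cams:
            diff = abs((p - c + math.pi) % (2 * math.pi) - math.pi)
            if diff < math.pi / 3:
                seen += 1
                break
    return seen / len(POINTS)

def ga_optimise(n_cams, pop_size=30, gens=40):
    """Evolve camera-angle vectors by elitism, crossover and mutation."""
    pop = [[random.uniform(0, 2 * math.pi) for _ in range(n_cams)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=coverage, reverse=True)
        elite = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            cut = random.randrange(n_cams) if n_cams > 1 else 0
            child = a[:cut] + b[cut:]              # one-point crossover
            child[random.randrange(n_cams)] = random.uniform(0, 2 * math.pi)
            children.append(child)
        pop = elite + children
    return max(pop, key=coverage)

# Start with two cameras; add one whenever coverage misses the threshold.
n, best = 2, None
while True:
    best = ga_optimise(n)
    if coverage(best) >= 0.99 or n >= 6:
        break
    n += 1
```

The same add-a-camera-until-threshold structure is what drives the image count down from ~60 to 14 in the result above, with the real objective replacing the toy coverage function.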
Reconstruction quality 11
Normal vs optimised: point cloud deviation compared to a CMM measurement
Normal vs optimised: point cloud deviation compared to a GOM ATOS measurement
Pose initialisation 12
A simulated version of the photogrammetry setup was created.
This simulation is used to render a large set of synthetic images, representative of experimental images.
These images are then used to train a convolutional neural network to detect the location and rotation of CAD data within an image.
This network can then be used to locate CAD information within actual photographic images.
Simulation 13–14
Low frequency surface waviness
High frequency surface texture
Colour ramp at edges to simulate weld lines
High gloss surface, low roughness
(Figure: real vs synthetic image comparison)
New surface texture generation method using a progressively growing generative adversarial network (paper #ICE21219)
Model 15
Model built using the Keras API for TensorFlow:
Input, reshaped to 244×244
Residual blocks (Conv-2D, BatchNorm, ReLU, with skip connections), 64-32-16-8
Global average pooling
Dense layer, 0.25 dropout rate
Output: pose parameters X, Y, Θ
Loss function: cross-entropy and LogCosh, combined
Optimizer: SGD (with momentum)
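To make the residual-block data flow concrete, here is what one such block computes, written in plain NumPy rather than Keras; the filter values, 3×3 kernel size and 8×8 input are illustrative, and real blocks operate on multi-channel tensors with learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """'Same'-padded 3x3 convolution; x is (H, W), w is (3, 3)."""
    h, wd = x.shape
    padded = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(padded[i:i+3, j:j+3] * w)
    return out

def batch_norm(x, eps=1e-5):
    """Normalise to zero mean, unit variance (no learned scale/shift here)."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Conv -> BN -> ReLU -> Conv -> BN, then add the input back (skip)."""
    out = relu(batch_norm(conv2d(x, w1)))
    out = batch_norm(conv2d(out, w2))
    return relu(x + out)   # the '+' is the skip connection

x = rng.standard_normal((8, 8))
y = residual_block(x, rng.standard_normal((3, 3)), rng.standard_normal((3, 3)))
```

The skip connection lets gradients bypass the convolutions, which is what makes stacks of such blocks trainable at depth.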
Pose results 16–17
Training results; synthetic test set; real test set
Finally, the estimated pose can be refined with ICP.
This now allows us to adjust the results from view planning based on the initial configuration!
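The ICP refinement step can be sketched compactly; a production pipeline would typically use a library implementation (e.g. Open3D's registration module), and the 2-D point set and perturbing transform below are synthetic test data.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Iteratively match nearest neighbours and re-fit the rigid transform."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None] - dst[None], axis=2)
        matches = dst[d.argmin(1)]           # nearest dst point per src point
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    return cur

# Rotate and shift a point set, then check ICP pulls it back into alignment.
theta = np.radians(20)
R0 = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.2], [0.2, 0.8]], float)
src = dst @ R0.T + np.array([0.1, -0.05])
aligned = icp(src, dst)
```

ICP only converges from a reasonable starting guess, which is exactly why the CNN pose estimate above is needed as its initialisation.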
A brief note on characterisation (camera calibration) 18
Characterisation is still a step which requires a lot of user effort.
I had two undergraduate students (Anil Thomas, Shehryar Ahmad) look into using ML to automate this step.
Using the same simulated camera shown previously, we generate images with random intrinsic and distortion parameters to create a dataset, then train a CNN to predict the camera distortion.
Real-world testing was cut short due to the pandemic; we hope to pick this back up with future students.
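The dataset-generation idea reduces to: sample random distortion coefficients, distort known geometry with a radial distortion model, and keep (distorted observation, true coefficients) pairs as training data. The students rendered full simulated images; this sketch only distorts 2-D grid points, and the coefficient ranges are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def radial_distort(pts, k1, k2):
    """Radial model: x' = x * (1 + k1*r^2 + k2*r^4), about the image centre."""
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    return pts * (1 + k1 * r2 + k2 * r2**2)

def make_sample():
    """One training pair: distorted grid (input) and coefficients (label)."""
    k1 = rng.uniform(-0.3, 0.3)
    k2 = rng.uniform(-0.05, 0.05)
    xs = np.linspace(-1, 1, 7)
    grid = np.array([(x, y) for x in xs for y in xs])  # normalised coordinates
    return radial_distort(grid, k1, k2), np.array([k1, k2])

dataset = [make_sample() for _ in range(1000)]  # inputs for a regression CNN
```

A CNN trained on such pairs regresses the coefficients directly from the observed distortion pattern, replacing the manual checkerboard-capture loop of conventional calibration.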
Future work 19
Automated measurement
planning
Camera pose initialisation
(pose estimation)
Automated image
acquisition
Photogrammetric
reconstruction
Automated camera
characterisation
Working with PhD sponsor to
develop a system which integrates
all these approaches