Advance the arts, sciences and technology of precision engineering, micro-engineering and nanotechnology
Towards automated photogrammetry
Joe Eastwood, Danny Sims-Waterhouse, Samanta Piano, Ralph Weir, Richard Leach
PHOTOGRAMMETRIC PIPELINE
1. Take images (usually ~60)
2. Detect features in each image, typically with the Scale Invariant Feature Transform (SIFT) algorithm, using a difference of Gaussians
3. Find correspondences between images
4. Estimate the camera poses and a sparse point cloud
5. Densify the point cloud (often to millions of points), typically using an algorithm such as Patch-based Multi-View Stereopsis (PMVS)
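As a rough illustration of the detection step, here is a minimal numpy/scipy sketch of the difference-of-Gaussians idea underlying SIFT's keypoint detector. This is not full SIFT (no orientation assignment, descriptors or sub-pixel refinement), and the sigma values and threshold are arbitrary choices for the example:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_keypoints(image, sigmas=(1.0, 1.6, 2.6, 4.2), threshold=0.02):
    """Find blob-like keypoints as local extrema of a difference-of-Gaussians stack."""
    blurred = [gaussian_filter(image.astype(float), s) for s in sigmas]
    dogs = np.stack([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])
    # a keypoint is a local max or min across space *and* scale, above a contrast threshold
    is_ext = (maximum_filter(dogs, size=3) == dogs) | (minimum_filter(dogs, size=3) == dogs)
    strong = np.abs(dogs) > threshold
    scale, rows, cols = np.nonzero(is_ext & strong)
    return list(zip(rows, cols, scale))

# toy image: a single bright blob; keypoints should cluster around it
img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0
kps = dog_keypoints(img)
```

In a real pipeline a library implementation (e.g. OpenCV's SIFT) would be used; this sketch only shows why a blob appears as an extremum across the scale stack.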
Is there a better way?
The proposed automated pipeline:
• Automated measurement planning
• Camera pose initialisation (pose estimation)
• Automated image acquisition
• Photogrammetric reconstruction
• Automated camera characterisation
(Figure: measurement configuration with five cameras, Camera 1 to Camera 5)
The problem: using a priori knowledge, can we pre-optimise the measurement procedure, then carry out the measurement autonomously?
CAD to the rescue
Visible points
To assess the 'quality' of a camera position, we need to be able to detect which surface points are visible.
• Hidden point removal: efficient, but performs poorly at edges
• Ray tracing: expensive, but reliable
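The hidden point removal approach can be sketched with the standard spherical-flipping construction (Katz et al.) using only numpy and scipy; `radius_factor` is an assumed tuning parameter, not a value from the talk:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hidden_point_removal(points, camera, radius_factor=100.0):
    """Katz-style hidden point removal: spherically flip the cloud about the
    camera, then keep the points that land on the convex hull of the flipped set."""
    p = points - camera                              # coordinates relative to the camera
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    R = norms.max() * radius_factor                  # radius of the flipping sphere
    flipped = p * (2.0 * R / norms - 1.0)            # reflect each point through the sphere
    hull = ConvexHull(np.vstack([flipped, np.zeros(3)]))   # include the camera itself
    visible = set(hull.vertices)
    visible.discard(len(points))                     # drop the camera vertex if present
    return sorted(visible)

# toy example: points on a unit sphere; only the hemisphere facing the camera is visible
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
vis = hidden_point_removal(pts, np.array([0.0, 0.0, 5.0]))
```

The edge problem mentioned above shows up here too: points near the silhouette may or may not reach the hull, which is why ray tracing against the CAD mesh is the more reliable (if slower) option.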
Measurement planning
Minimise the number of camera positions while maintaining reconstruction quality.
First, find a good starting position, maximising the sum of the visible surface points.
Then, use a genetic algorithm (GA) to optimise the global positions:
• encourage views normal to the surface
• encourage a 90° inter-camera angle
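A sketch of what such an objective might look like; the weights, the `visible_fn` callback and the exact form of each term are illustrative assumptions, not the actual GA fitness used in this work:

```python
import numpy as np

def view_fitness(cam_positions, surface_points, surface_normals, visible_fn,
                 w_vis=1.0, w_normal=0.5, w_angle=0.5):
    """Candidate GA objective: reward visible coverage and near-normal views,
    and penalise camera pairs that are far from 90 degrees apart (weights assumed)."""
    score = 0.0
    centre = surface_points.mean(axis=0)
    for cam in cam_positions:
        idx = visible_fn(cam)                    # indices of surface points visible from cam
        score += w_vis * len(idx)
        view_dirs = cam - surface_points[idx]
        view_dirs /= np.linalg.norm(view_dirs, axis=1, keepdims=True)
        # cosine between viewing direction and surface normal: 1 when viewing head-on
        score += w_normal * np.sum(view_dirs * surface_normals[idx])
    for i in range(len(cam_positions)):
        for j in range(i + 1, len(cam_positions)):
            a = cam_positions[i] - centre
            b = cam_positions[j] - centre
            cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
            score -= w_angle * abs(cos)          # zero cosine means 90 degrees apart
    return score

# toy check: a flat patch facing +z scores higher from above than from a grazing angle
u = np.linspace(-1.0, 1.0, 5)
surface = np.array([[x, y, 0.0] for x in u for y in u])
normals = np.tile([0.0, 0.0, 1.0], (len(surface), 1))
see_all = lambda cam: np.arange(len(surface))
f_top = view_fitness([np.array([0.0, 0.0, 5.0])], surface, normals, see_all)
f_graze = view_fitness([np.array([5.0, 0.0, 0.2])], surface, normals, see_all)
```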
Iteration
Starting with two camera positions, the objective function is maximised by the GA. If a threshold value is not met, a new position is added.
14 images, down from 60!
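The add-a-camera-until-the-threshold-is-met loop can be illustrated on a toy one-dimensional problem (cameras and points on a circle). The random candidate search below stands in for the GA, and all the numbers, thresholds included, are illustrative:

```python
import numpy as np

def coverage(cam_angles, point_angles, fov=np.pi / 2):
    """Fraction of circle points within half a field of view of some camera."""
    diff = point_angles[:, None] - np.asarray(cam_angles)[None, :]
    dist = np.abs((diff + np.pi) % (2 * np.pi) - np.pi)   # wrapped angular distance
    return np.mean(dist.min(axis=1) < fov / 2)

def plan_views(point_angles, threshold=0.95, max_cams=10, seed=0):
    """Start from two cameras; while the objective misses the threshold,
    add the best of a batch of random candidates (a stand-in for the GA)."""
    rng = np.random.default_rng(seed)
    cams = [0.0, np.pi]                                   # two initial camera positions
    while coverage(cams, point_angles) < threshold and len(cams) < max_cams:
        candidates = rng.uniform(0.0, 2 * np.pi, 50)
        cams.append(max(candidates, key=lambda c: coverage(cams + [c], point_angles)))
    return cams

angles = np.linspace(0, 2 * np.pi, 200, endpoint=False)
plan = plan_views(angles)
```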
Reconstruction quality
(Figures: point cloud deviation of the normal and optimised reconstructions, compared to a CMM measurement and to a GOM ATOS scan)
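Comparing a reconstruction against reference data reduces to nearest-neighbour distances between clouds. A minimal sketch (the actual comparison pipeline, including registration and outlier handling, is not specified in the slides):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_deviation(measured, reference):
    """Per-point deviation of a measured cloud from reference data (e.g. a CMM
    or GOM ATOS measurement): nearest-neighbour distance into the reference."""
    dists, _ = cKDTree(reference).query(measured)
    return dists

# toy example: a slightly perturbed copy of a reference cloud
rng = np.random.default_rng(1)
ref = rng.uniform(size=(1000, 3))
meas = ref + rng.normal(scale=0.001, size=ref.shape)
dev = cloud_deviation(meas, ref)
```

In practice the clouds must first be registered into a common frame, and summary statistics (mean, RMS, colour-mapped deviation) are reported per point.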
Pose initialisation
• A simulated version of the photogrammetry setup was created
• This simulation is used to render a large set of synthetic images, representative of experimental images
• These images are then used to train a convolutional neural network to detect the location and rotation of CAD data within an image
This network can then be used to locate CAD information within actual photographic images.
Simulation
• Low frequency surface waviness
• High frequency surface texture
• Colour ramp at edges to simulate weld lines
• High gloss surface, low roughness
(Figure: real vs synthetic images)
New surface texture generation method using a progressively growing generative adversarial network (Paper #ICE21219)
Model
Model built using the Keras API for TensorFlow:
• Input, reshaped to 244 × 244
• Residual blocks (64-32-16-8), each comprising Conv-2D, BatchNorm and ReLU layers with a skip connection
• Global average pooling
• Dense layer, 0.25 dropout rate
• Output: the pose parameters (X, Y, Θ)
Loss function: cross-entropy and LogCosh, combined
Optimizer: SGD (with momentum)
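The combined loss can be sketched in plain numpy. How the two terms are weighted and which outputs each applies to is not specified in the slides, so the split below (LogCosh on continuous pose outputs, cross-entropy on a discrete output) is an assumption:

```python
import numpy as np

def log_cosh(y_true, y_pred):
    """Numerically stable log-cosh loss: log(cosh(e)) = |e| + log1p(exp(-2|e|)) - log(2)."""
    e = np.abs(y_pred - y_true)
    return np.mean(e + np.log1p(np.exp(-2.0 * e)) - np.log(2.0))

def cross_entropy(labels, probs):
    """Categorical cross-entropy for integer labels against predicted probabilities."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def combined_loss(pose_true, pose_pred, cls_true, cls_probs, alpha=1.0, beta=1.0):
    """Assumed combination: LogCosh on the continuous pose outputs plus
    cross-entropy on a discrete output, weighted by alpha and beta (both assumed)."""
    return alpha * log_cosh(pose_true, pose_pred) + beta * cross_entropy(cls_true, cls_probs)
```

In the actual model both losses are available directly in Keras (`tf.keras.losses.LogCosh`, `tf.keras.losses.CategoricalCrossentropy`) and are combined per output head.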
Pose results
(Figures: training results, synthetic test set, real test set)
Finally, the estimated pose can be refined with ICP (iterative closest point).
This now allows us to adjust the results from view planning based on the
initial configuration!
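The ICP refinement step can be sketched as textbook point-to-point ICP with an SVD (Kabsch) solve for the rigid transform at each iteration; this is the standard algorithm, not necessarily the exact implementation used here:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=20):
    """Point-to-point ICP: match nearest neighbours, solve the optimal rigid
    transform via SVD (Kabsch), apply it, and repeat."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)                 # current nearest-neighbour matches
        matched = target[idx]
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# toy example: recover a small known rotation and translation
rng = np.random.default_rng(2)
target = rng.uniform(size=(300, 3))
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.02, -0.01, 0.03])
R_est, t_est = icp(source, target)
aligned = source @ R_est.T + t_est
```

ICP only converges from a good initial guess, which is exactly why the CNN pose estimate above is needed first.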
A brief note on characterisation (camera calibration)
• Characterisation is still a step which requires a lot of user effort
• I had two undergraduate students (Anil Thomas, Shehryar Ahmad) look into using ML to automate this step
• Using the same simulated camera shown previously, generate images with random intrinsic and distortion parameters to create a dataset, then train a CNN to predict the camera distortion
• Real-world testing was cut short due to the pandemic; we hope to pick this back up with future students
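The dataset-generation idea can be sketched with a standard radial (Brown-Conrady style) distortion model; the coefficient ranges and the point-grid stand-in for rendered images below are assumptions, not values from the project:

```python
import numpy as np

def distort_points(xy, k1, k2):
    """Apply radial distortion to normalised image points."""
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

def random_distortion_sample(rng, grid):
    """One synthetic training pair: random coefficients (the regression label)
    and the correspondingly distorted grid (a stand-in for a rendered image)."""
    k1, k2 = rng.uniform(-0.3, 0.3), rng.uniform(-0.05, 0.05)
    return distort_points(grid, k1, k2), np.array([k1, k2])

# a calibration-target-like grid of normalised points
u = np.linspace(-1.0, 1.0, 7)
grid = np.array([[x, y] for x in u for y in u])
rng = np.random.default_rng(3)
sample, label = random_distortion_sample(rng, grid)
```

In the actual pipeline the simulated camera renders full images rather than point grids, and the CNN regresses the distortion parameters from pixels.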
Future work
Working with the PhD sponsor to develop a system which integrates all of these approaches:
• Automated measurement planning
• Camera pose initialisation (pose estimation)
• Automated image acquisition
• Photogrammetric reconstruction
• Automated camera characterisation