Graham D. Finlayson
University of East Anglia | UEA · School of Computing Sciences
About
315 Publications
61,510 Reads
11,323 Citations
Publications (315)
Recently, many deep neural networks (DNN) have been proposed to solve the spectral reconstruction (SR) problem: recovering spectra from RGB measurements. Most DNNs seek to learn the relationship between an RGB viewed in a given spatial context and its corresponding spectra. Significantly, it is argued that the same RGB can map to different spectra...
Domain experts prefer interactive and targeted control-point tone mapping operations (TMOs) to enhance underwater image quality and feature visibility; though this comes at the expense of time and training. In this paper, we provide end-users with a simpler and faster interactive tone-mapping approach. This is built upon Weibull Tone Mapping (WTM)...
Spectral reconstruction (SR) algorithms recover hyperspectral measurements from RGB camera responses. Statistical models at different levels of complexity are used to solve the SR problem, from the simplest closed-form regression, to sparse coding, to complex deep neural networks (DNNs). Recently, these methods were benchmarked based on the mean...
In previous work, it was shown that a camera can theoretically be made more colorimetric (its RGBs become more linearly related to XYZ tristimuli) by placing a specially designed color filter in the optical path. While the prior art demonstrated the principle, the optimal color-correction filters were not actually manufactured. In this paper, we prov...
Image enhancement is often used to alleviate the low contrast, blurring and colour reduction effects common in underwater imagery. Tone Mapping, a particularly simple yet elegant enhancement technique, improves image quality by modifying image histograms to a more desirable tonal distribution. In previous work, we presented our novel chromaticity-p...
In Spectral Reconstruction (SR), we recover hyperspectral information from RGB data. Recent works benchmark SR algorithms based on hyperspectral images of real-world scenes, where two dominant approaches are regression and Deep Neural Network (DNN). While the former seeks point-based RGB-to-spectrum mapping, the latter incorporates sophisticated ar...
By placing a color filter in front of a camera we make new spectral sensitivities. The Luther-condition optimization solves for a color filter so that the camera’s filtered sensitivities are as close to being linearly related to the XYZ color matching functions (CMFs) as possible, that is, a filter is found that makes the camera more colorimetric....
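The core of the Luther-condition optimization above is repeatedly solving for the best linear transform between the (filtered) camera sensitivities and the XYZ CMFs. Below is a minimal sketch of that inner least-squares step, assuming the sensitivities and CMFs are sampled at the same wavelengths; function and variable names are illustrative, not taken from the paper.

    import numpy as np

    def luther_fit(Q, X):
        # Q: (n_wavelengths, 3) camera spectral sensitivities
        # X: (n_wavelengths, 3) CIE XYZ colour matching functions
        # Best 3x3 M (least squares) so that Q @ M approximates X,
        # plus the relative residual used to score a candidate filter.
        M, *_ = np.linalg.lstsq(Q, X, rcond=None)
        residual = np.linalg.norm(Q @ M - X) / np.linalg.norm(X)
        return M, residual

    # For a candidate filter transmittance f (one value per wavelength),
    # the filtered sensitivities are f[:, None] * Q; the outer optimization
    # searches for the f that minimizes the residual returned here.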
Imagery is a preferred tool for environmental surveys within marine environments, particularly in deeper waters, as it is nondestructive compared to traditional sampling methods. However, underwater illumination effects limit its use by causing extremely varied and inconsistent image quality. Therefore, it is often necessary to pre-process images t...
This poster presents our algorithm "Weibull Tone Mapping" for the enhancement of underwater imagery.
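As a rough illustration of the histogram-shaping idea behind Weibull Tone Mapping, the sketch below matches an image's brightness histogram to a target Weibull distribution by rank-based histogram specification. It is only an assumption-laden outline: the shape and scale parameters are placeholders, and the published WTM additionally handles chromaticity and fits the Weibull parameters rather than fixing them.

    import numpy as np

    def weibull_histogram_match(brightness, shape=1.5, scale=0.5):
        # brightness: array of values in [0, 1]; shape/scale are placeholder
        # Weibull parameters, not values from the paper.
        flat = brightness.ravel()
        # Empirical CDF value (mid-rank) of every pixel
        ranks = flat.argsort().argsort().astype(float)
        cdf = (ranks + 0.5) / flat.size
        # Invert the Weibull CDF F(t) = 1 - exp(-(t/scale)**shape)
        mapped = scale * (-np.log1p(-cdf)) ** (1.0 / shape)
        out = mapped.reshape(brightness.shape)
        return np.clip(out / out.max(), 0.0, 1.0)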
Spectral reconstruction algorithms recover spectra from RGB sensor responses. Recent methods - with the very best algorithms using deep learning - can already solve this problem with good spectral accuracy. However, the recovered spectra are physically incorrect in that they do not induce the RGBs from which they are recovered. Moreover, if the exp...
Previous work has proposed to solve for a filter which, when placed in front of a camera, improves the colorimetric property by best satisfying the Luther condition. That is, the filtered spectral sensitivities of a camera - after a linear transform - are as close to the color matching functions of the human visual system as possible. By constructi...
Spectral reconstruction (SR) algorithms attempt to map RGB images to hyperspectral images. Classically, simple pixel-based regression is used to solve for this SR mapping, and more recently patch-based Deep Neural Networks (DNNs) are considered (with a modest performance increment). For either method, the 'training' process typically minimizes a Mean-Squar...
Spectral reconstruction algorithms seek to recover spectra from RGB images. This estimation problem is often formulated as least-squares regression, and a Tikhonov regularization is generally incorporated, both to support stable estimation in the presence of noise and to prevent over-fitting. The degree of regularization is controlled by a single...
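A minimal sketch of the regularized regression being described, with gamma standing in for the single regularization parameter (names are illustrative):

    import numpy as np

    def fit_spectral_map(rgbs, spectra, gamma=1e-3):
        # rgbs: (N, 3) camera responses; spectra: (N, B) hyperspectral targets.
        # Closed-form Tikhonov (ridge) solution of
        #   min_M ||rgbs @ M - spectra||^2 + gamma * ||M||^2,
        # so that rgb @ M estimates a spectrum.
        A = rgbs.T @ rgbs + gamma * np.eye(rgbs.shape[1])
        return np.linalg.solve(A, rgbs.T @ spectra)

    # recovered = test_rgbs @ fit_spectral_map(train_rgbs, train_spectra)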
In this paper, we present the detailed mathematical derivation of the gradient and Hessian for the Vora-Value based filter optimization. We make a full recapitulation of the steps involved in differentiating the objective function for maximizing the Vora-Value. This paper serves as a supplementary material for our future paper in filter design usin...
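For reference, the Vora-Value itself (the objective whose gradient and Hessian are derived) can be computed directly from projection matrices. A short sketch, assuming the standard definition trace(P_Q P_X)/3 with P the orthogonal projector onto the span of a set of spectral sensitivities:

    import numpy as np

    def projector(A):
        # Orthogonal projector onto the column space of A
        return A @ np.linalg.pinv(A)

    def vora_value(Q, X):
        # Q: (n, 3) camera sensitivities; X: (n, 3) XYZ colour matching functions.
        # Equals 1 when the two sets of sensitivities span the same subspace.
        return np.trace(projector(Q) @ projector(X)) / 3.0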
Through optimization we can solve for a filter such that, when the camera views the world through this filter, it is more colorimetric. Previous work solved for the filter that best satisfied the Luther condition: the camera spectral sensitivities after filtering were approximately a linear transform from the CIE XYZ color matching functions. A more rece...
Recently, Convolutional Neural Networks (CNNs) have been used to reconstruct hyperspectral information from RGB images, and this spectral reconstruction (SR) problem can often be solved with good (low) error. However, little attention has been paid to whether these models' behavior can adhere to physics. We show that the leading CNN method introduces...
An image enhancement method and system is described. The method comprises receiving an input and target image pair, each of the input and target images including data representing pixel intensities; processing the data to determine a plurality of basis functions, each basis function being determined in dependence on content of the input image, dete...
The Luther condition states that if the spectral sensitivity responses of a camera are a linear transform from the color matching functions of the human visual system, the camera is colorimetric. Previous work proposed to solve for a filter which, when placed in front of a camera, results in sensitivities that best satisfy the Luther condition. By...
This paper reviews the second challenge on spectral reconstruction from RGB images, i.e., the recovery of whole-scene hyperspectral (HS) information from a 3-channel RGB image. As in the previous challenge, two tracks were provided: (i) a "Clean" track where HS images are estimated from noise-free RGBs, the RGB images are themselves calculated nume...
When we place a colored filter in front of a camera, the effective camera response functions are equal to the given camera spectral sensitivities multiplied by the filter spectral transmittance. In this paper, we solve for the filter which returns the modified sensitivities as close to being a linear transformation from the color matching functions...
Images captured under hazy conditions (e.g. fog, air pollution) usually present faded colors and loss of contrast. To improve their visibility, a process called image dehazing can be applied. Some of the most successful image dehazing algorithms are based on image processing methods but do not follow any physical image formation model, which limits...
Recently, Convolutional Neural Networks (CNNs) have been used to reconstruct hyperspectral information from RGB images. Moreover, this spectral reconstruction (SR) problem can often be solved with good (low) error. However, these methods are not physically plausible: that is, when the recovered spectra are reintegrated with the underlying camera sensi...
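One simple way to see (and enforce) the plausibility constraint being discussed is a least-norm correction that makes a recovered spectrum reintegrate exactly to its RGB. This is an illustrative fix-up under our own assumptions, not the method proposed in the paper:

    import numpy as np

    def make_consistent(spectrum, C, rgb):
        # spectrum: (B,) recovered spectrum; C: (3, B) camera sensitivities;
        # rgb: (3,) measured response. Returns the smallest adjustment of the
        # spectrum such that C @ spectrum reproduces rgb exactly.
        residual = rgb - C @ spectrum
        return spectrum + C.T @ np.linalg.solve(C @ C.T, residual)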
When we place a colored filter in front of a camera, the effective camera response functions are equal to the given camera spectral sensitivities multiplied by the filter spectral transmittance. In this article, we solve for the filter which returns the modified sensitivities as close to being a linear transformation from the color matching function...
Retinex is a colour vision model introduced by Land more than 40 years ago. Since then, it has also been widely and successfully used for image enhancement. However, Retinex often introduces colour and halo artefacts. Artefacts are a necessary consequence of the per-channel colour processing and the lack of any strong means of controlling the loc...
In the spectral reconstruction (SR) problem, reflectance and/or radiance spectra are recovered from RGB images. Most of the prior art only attempts to solve this problem for fixed exposure conditions, and this limits the usefulness of these approaches (they can work inside the lab but not in the real world). In this paper, we seek methods that wo...
The ColorChecker dataset is one of the most widely used image sets for evaluating and ranking illuminant estimation algorithms. However, this single set of images has at least 3 different sets of ground-truth (i.e. correct answers) associated with it. In the literature it is often asserted that one algorithm is better than another when the algorith...
Consistency regularization describes a class of approaches that have yielded groundbreaking results in semi-supervised classification problems. Prior work has established the cluster assumption -- under which the data distribution consists of uniform class clusters of samples separated by low density regions -- as key to its success. We analyse th...
Color transfer is an image editing process that naturally transfers the color theme of a source image to a target image. In this paper, we propose a 3D color homography model which approximates a photo-realistic color transfer algorithm as a combination of a 3D perspective transform and a mean intensity mapping. A key advantage of our approach is tha...
In this paper, we propose two methods of calculating theoretically maximal metamer mismatch volumes. Unlike prior art techniques, our methods do not make any assumptions on the shape of spectra on the boundary of the mismatch volumes. Both methods utilize a spherical sampling approach, but they calculate mismatch volumes in two different ways. The...
Illumination estimation is the key routine in a camera’s onboard auto-white-balance (AWB) function. Illumination estimation algorithms estimate the color of the scene’s illumination from an image in the form of an R, G, B vector in the sensor’s raw-RGB color space. While learning-based methods have demonstrated impressive performance for illuminati...
Raw images are more useful than JPEG images for machine vision algorithms and professional photographers because raw images preserve a linear relation between pixel values and the light measured from the scene. A camera is radiometrically calibrated if there is a computational model which can predict how the raw image is mapped to the corresponding...
The use of multi-spectral imaging has been found to improve the accuracy of deep neural network-based pedestrian detection systems, particularly in challenging night time conditions in which pedestrians are more clearly visible in thermal long-wave infrared bands than in plain RGB. In this article, the authors use the Spectral Edge image fusion met...
In computer vision, illumination is considered to be a problem that needs to be 'solved'. The colour cast due to illumination is removed to support colour-based image recognition and stable tracking (in and out of shadows), among other tasks. In this paper, I review historical and current algorithms for illumination estimation. In the classical app...
In a previous work, it was shown that there is a curious problem with the benchmark Color Checker dataset for illuminant estimation. To wit, this dataset has at least 3 different sets of ground-truths. Typically, for a single algorithm a single ground-truth is used. But then different algorithms, whose performance is measured with respect to differ...
In this paper we present a new camera calibration method aimed at finding a straight-line locus, in a special colour feature space, that is traversed by daylights and also approximately followed by specular points. The aim of the calibration is to enable recovering the colour of the illuminant in a scene, using the calibrated camera. First...
Images of co-planar points in 3-dimensional space taken from different camera positions are a homography apart. Homographies are at the heart of geometric methods in computer vision and are used in geometric camera calibration, 3D reconstruction, stereo vision and image mosaicking among other tasks. In this paper we show the surprising result that...
Illuminant estimation algorithms are often evaluated by calculating recovery angular error, which is the angle between the RGB of the ground truth and the estimated illuminants. However, the same scene viewed under two different lights with respect to which the same algorithm delivers illuminant estimates and then identical reproductions, and so the...
Automated phenotyping technologies are capable of providing continuous and precise measurements of traits that are key to today’s crop research, breeding and agronomic practices. In addition to monitoring developmental changes, high-frequency and high-precision phenotypic analysis can enable both accurate delineation of the genotype-to-phenotype...
Compared with raw images, the more common JPEG images are less useful for machine vision algorithms and professional photographers because JPEG-sRGB does not preserve a linear relation between pixel values and the light measured from the scene. A camera is said to be radiometrically calibrated if there is a computational model which can predict how...
The angle between the RGBs of the measured illuminant and estimated illuminant colors - the recovery angular error - has been used to evaluate the performance of illuminant estimation algorithms. However, we noticed that this metric is not in line with how the illuminant estimates are used. Normally, the illuminant estimates are 'divided out' fr...
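A compact sketch of the two metrics being contrasted: recovery angular error compares the illuminant vectors directly, while the reproduction-style alternative measures how achromatic a white surface looks after the estimate is divided out. The code is illustrative, not the authors' implementation.

    import numpy as np

    def angle_deg(a, b):
        c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

    def recovery_error(est_rgb, gt_rgb):
        # Angle between estimated and ground-truth illuminant RGBs
        return angle_deg(est_rgb, gt_rgb)

    def reproduction_error(est_rgb, gt_rgb):
        # Angle between the white surface reproduced after dividing out the
        # estimate (gt / est, per channel) and the achromatic direction (1, 1, 1)
        return angle_deg(gt_rgb / est_rgb, np.ones(3))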
A method and system for generating an output image from a plurality, N, of corresponding input image channels is described. A Jacobian matrix of the plurality of corresponding input image channels is determined. The principal characteristic vector of the outer product of the Jacobian matrix is calculated. The sign associated with the principal char...
Hue plane preserving color correction (HPPCC), introduced by Andersen and Hardeberg [Proceedings of the 13th Color and Imaging Conference (CIC) (2005), pp. 141–146], maps device-dependent color values (RGB) to colorimetric color values (XYZ) using a set of linear transforms, realized by white point preserving 3 × 3 matrices, where each tran...
Human colour perception depends initially on the responses of the long- (L), middle- (M) and short- (S) wavelength-sensitive cones. These signals are then transformed post-receptorally into cone-opponent (L-M and S-(L+M)) and colour-opponent (red/green and blue/yellow) signals and perhaps at some later stage into categorical colour signals. Here, we i...
Color transfer is an image editing process that adjusts the colors of a picture to match a target picture's color theme. A natural color transfer not only matches the color styles but also prevents after-transfer artifacts due to image compression, noise, and gradient smoothness change. The recently discovered color homography theorem proves that c...
Estimation of individual spectral cone fundamentals from color-matching functions is a classical and longstanding problem in color science. In this paper we propose a novel method to carry out this estimation based on a linear optimization technique, employing an assumption of a priori knowledge of the retinal absorptance functions. The result is a...
Homographies -- a mathematical formalism for relating image points across different camera viewpoints -- are at the foundations of geometric methods in computer vision and are used in geometric camera calibration, image registration, stereo vision and other tasks. In this paper, we show the surprising result that colors across a change in viewi...
In this paper, we present new applications of the Spectral Edge image fusion method. The Spectral Edge image fusion algorithm creates a result which combines details from any number of multispectral input images with natural color information from a visible spectrum image. Spectral Edge image fusion is a derivative-based technique, which creates a...
We show the surprising result that colors across a change in viewing condition (changing light color, shading and camera) are related by a homography. Our homography color correction application delivers improved color fidelity compared with linear least-squares.
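One way such a colour homography might be fitted in practice is by alternating least squares over a 3x3 matrix and per-pixel brightness scalings. The sketch below is an outline of that idea under our own assumptions, not the published algorithm:

    import numpy as np

    def fit_color_homography(A, B, n_iters=20):
        # A, B: (N, 3) corresponding RGBs under two viewing conditions.
        # Solve for H (3x3) and per-pixel scales d so diag(d) @ A @ H ~ B.
        d = np.ones(A.shape[0])
        for _ in range(n_iters):
            # Fix the scales, solve for H in least squares
            H, *_ = np.linalg.lstsq(d[:, None] * A, B, rcond=None)
            # Fix H, solve each per-pixel scale in closed form
            P = A @ H
            d = np.einsum('ij,ij->i', P, B) / np.maximum(
                np.einsum('ij,ij->i', P, P), 1e-12)
        return H, d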
In order to accurately predict a digital camera response to spectral stimuli, the spectral sensitivity functions of its sensor need to be known. These functions can be determined by direct measurement in the lab—a difficult and lengthy procedure—or through simple statistical inference. Statistical inference methods are based on the observation that...
A remarkably simple color constancy method was recently developed, based in essence on the Gray-Edge method, i.e., the assumption that the mean of color gradients in a scene (or of the colors themselves, in a Gray-World setting) is close to being achromatic. However, this new method for illuminant estimation explicitly includes the important notions that...
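For context, the two baselines the method builds on are simple to state. A sketch of the Gray-World and (first-order) Gray-Edge estimates for a linear RGB image, without the new method's extra notions:

    import numpy as np

    def gray_world(im):
        # im: (H, W, 3) linear RGB; estimate = mean colour, unit-normalized
        e = im.reshape(-1, 3).mean(axis=0)
        return e / np.linalg.norm(e)

    def gray_edge(im):
        # Estimate = mean absolute x/y gradient per channel, unit-normalized
        gx = np.abs(np.diff(im, axis=1)).reshape(-1, 3).mean(axis=0)
        gy = np.abs(np.diff(im, axis=0)).reshape(-1, 3).mean(axis=0)
        e = gx + gy
        return e / np.linalg.norm(e)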
One of the most successful features for texture recognition is the Local Binary Pattern. The LBP is the 8-digit binary number created by comparing the value of a central pixel with its 8 neighbours, where a 1 or 0 is assigned according to whether the central pixel is larger or smaller than its neighbour. This pattern is bit shifted circularly to its...
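A minimal sketch of the basic LBP computation described above, including the circular bit shifting used for rotation invariance (sign convention and neighbour ordering vary between implementations):

    import numpy as np

    def lbp_code(patch):
        # patch: 3x3 numpy array; returns the 8-bit LBP code of the centre pixel.
        centre = patch[1, 1]
        # Neighbours in a fixed circular order (clockwise from top-left)
        order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
        bits = [1 if patch[r, c] >= centre else 0 for r, c in order]
        return sum(b << i for i, b in enumerate(bits))

    def rotation_invariant(code):
        # Minimum over all 8 circular bit shifts of the 8-bit code
        return min(((code >> i) | (code << (8 - i))) & 0xFF for i in range(8))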
There are many applications where multiple images are fused to form a single summary greyscale or colour output, including computational photography (e.g. RGB-NIR), diffusion tensor imaging (medical), and remote sensing. Often, and intuitively, image fusion is carried out in the derivative domain. Here, a new composite fused derivative is found tha...
Illumination estimation is a well-studied topic in computer vision. Early work reported performance on benchmark datasets using simple statistical aggregates such as mean or median error. Recently, it has become accepted to report a wider range of statistics, e.g. top 25%, mean, and bottom 25% performance. While these additional statistics are more...
This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exa...
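At the heart of this contrast-mapping approach is the 2x2 structure tensor of a multi-channel gradient. The sketch below shows the per-pixel step of extracting an equivalent single gradient from it; it illustrates the structure-tensor idea only, and leaves the sign ambiguity of the eigenvector unresolved.

    import numpy as np

    def fused_gradient(jac):
        # jac: (2, N) partial derivatives [d/dx; d/dy] of the N input channels
        # at one pixel. Returns a 2-vector gradient whose magnitude matches
        # the principal contrast of the multi-channel input.
        Z = jac @ jac.T                    # 2x2 structure tensor
        w, V = np.linalg.eigh(Z)           # eigenvalues in ascending order
        return np.sqrt(w[-1]) * V[:, -1]   # principal direction and contrast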
Andersen and Hardeberg proposed the Hue Plane Preserving Colour Correction (HPPCC) [1], which maps RGBs to XYZs using a set of linear transforms, where each transform is learned and applied in a subregion of colour space, defined by two adjacent hue planes. A hue plane is a geometrical half-plane defined by the neutral axis and a chromatic colour....
Recovering three-dimensional shape from two-dimensional images is a long-standing problem in computer vision. First proposed in the 1980s, photometric stereo has matured to the point that accurate recovery of complex shapes and surfaces has become achievable. However, such methods typically demand multiple image captures, highly controlled scene co...
The Spectral Edge method of image fusion fuses input image details, while maintaining natural color. It is a derivative-based technique, based on the structure tensor, and lookup-table-based gradient field reconstruction. It has many applications, including RGB-NIR image fusion and remote sensing. In this paper, we propose adding an iterative step...
Illuminant estimation algorithms are usually evaluated by measuring the angular error between the RGB vectors of the estimated illuminant and the ground-truth illuminant (recovery angular error). However, the recovery angular error reports a wide range of errors for a given illuminant estimation algorithm and a given scene viewed under multiple lig...
Cameras record three color responses (RGB) which are device dependent. Camera coordinates are mapped to a standard color space, such as XYZ (useful for color measurement), by a mapping function, e.g., the simple 3×3 linear transform (usually derived through regression). This mapping, which we will refer to as linear color correction (LCC), has been...
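A minimal sketch of the linear colour correction (LCC) fit referred to above, derived through regression on corresponding RGB and XYZ measurements (names are illustrative):

    import numpy as np

    def fit_lcc(rgbs, xyzs):
        # rgbs, xyzs: (N, 3) measurements of the same N surfaces.
        # Least-squares 3x3 matrix M minimizing ||rgbs @ M - xyzs||.
        M, *_ = np.linalg.lstsq(rgbs, xyzs, rcond=None)
        return M

    # corrected = new_rgbs @ fit_lcc(train_rgbs, train_xyzs)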
In a recent publication, "Reproduction Angular Error: An Improved Performance Metric for Illuminant Estimation", British Machine Vision Conference (2014), it was argued that the commonly used recovery angular error (the angle between the RGBs of the actual and estimated lights) is flawed when it is viewed in concert with how the illuminant estima...
In this paper, we compare four methods of fusing visible RGB and near-infrared (NIR) images to produce a color output image, using a psychophysical experiment and image fusion quality metrics. The results of the psychophysical experiment show that two methods are significantly preferred to the original RGB image, and therefore RGB-NIR image fusion...
In this article, we describe a spectral sensitivity measurement procedure at the National Physical Laboratory, London, with the aim of obtaining ground truth spectral sensitivity functions for Nikon D5100 and Sigma SD1 Merrill cameras. The novelty of our data is that the potential measurement errors are estimated at each wavelength. We determine how...
Camera spectral sensitivity functions are obtained either by measurements in the laboratory or via estimation algorithms. A procedure for camera spectral sensitivity measurement is explained in this article, which aims to provide high accuracy ground-truth values with known uncertainty. The measurements were carried out on a Nikon D5100 camera at t...
Identification of illumination, the main step in colour constancy processing, is an important problem in imaging for digital images or video, forming a prerequisite for many computer vision applications. In this paper we present a new and effective physics-based colour constancy algorithm which makes use of a novel log-relative-chromaticity planar...
This paper describes a novel approach to the fusion of multidimensional images for colour displays. The goal of the method is to generate an output image whose gradient matches that of the input as closely as possible. It achieves this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensi...