About
19 Publications
5,179 Reads
1,246 Citations
Introduction
Derek Driggs currently works at the Department of Applied Mathematics and Theoretical Physics, University of Cambridge. Derek does research in Analysis and Applied Mathematics.
Publications (19)
We propose novel stochastic proximal alternating linearized minimization (PALM) algorithms for solving a class of non-smooth and non-convex optimization problems which arise in many statistical machine learning, computer vision, and imaging applications. We provide a theoretical analysis, showing that our proposed method with variance-reduced stoch...
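A minimal sketch of the kind of variance-reduced stochastic PALM iteration described above, written for an assumed toy sparse-NMF instance; the problem, step size, and SVRG-style estimator are illustrative choices, not the paper's exact algorithm:

```python
# Sketch only: PALM-type alternating proximal steps with an SVRG-style
# variance-reduced gradient for the smooth coupling term.
# Assumed toy problem:  min_{X, Y>=0}  (1/2n)||A - X Y||_F^2 + lam*||X||_1,
# where the smooth term is a finite sum over the rows of A.
import numpy as np

def soft_threshold(Z, t):
    """Proximal map of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def svrg_palm_sketch(A, rank, lam=0.1, step=1e-2, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    n, m = A.shape
    X = rng.standard_normal((n, rank)) * 0.1
    Y = np.abs(rng.standard_normal((rank, m))) * 0.1
    for _ in range(epochs):
        # Snapshot point and its full gradient, as in SVRG.
        X_snap, Y_snap = X.copy(), Y.copy()
        GX_full = (X_snap @ Y_snap - A) @ Y_snap.T / n
        for _ in range(n):
            i = rng.integers(n)                      # sample one row of A
            r_i = X[i] @ Y - A[i]
            r_i_snap = X_snap[i] @ Y_snap - A[i]
            # Variance-reduced stochastic gradient in X (only row i changes).
            gX = np.zeros_like(X)
            gX[i] = r_i @ Y.T - r_i_snap @ Y_snap.T
            gX += GX_full
            # Proximal (soft-thresholded) gradient step on the X block.
            X = soft_threshold(X - step * gX, step * lam)
            # Full-gradient proximal step on the Y block (projection onto >=0),
            # kept deterministic here purely to keep the sketch short.
            gY = X.T @ (X @ Y - A) / n
            Y = np.maximum(Y - step * gY, 0.0)
    return X, Y
```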
Machine learning methods offer great promise for fast and accurate detection and prognostication of coronavirus disease 2019 (COVID-19) from standard-of-care chest radiographs (CXR) and chest computed tomography (CT) images. Many articles have been published in 2020 describing new machine learning-based models for both of these tasks, but it is unc...
Variance reduction is a crucial tool for improving the slow convergence of stochastic gradient descent. Only a few variance-reduced methods, however, have yet been shown to directly benefit from Nesterov’s acceleration techniques to match the convergence rates of accelerated gradient methods. Such approaches rely on “negative momentum”, a technique...
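For concreteness, the "negative momentum" coupling mentioned above can be sketched as follows: a simplified Katyusha-style accelerated SVRG loop on an assumed ridge-regression toy problem, with illustrative parameter choices rather than anything taken from the paper:

```python
# Sketch only: accelerated variance reduction with "negative momentum".
# Toy objective: F(x) = (1/n) sum_i 0.5*(a_i^T x - b_i)^2 + (mu/2)*||x||^2.
import numpy as np

def katyusha_style_sketch(A, b, mu=1e-2, epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    L = np.linalg.norm(A, 2) ** 2 / n + mu          # crude smoothness estimate
    tau2 = 0.5                                       # weight on the snapshot
    tau1 = min(np.sqrt(n * mu / (3 * L)), 0.5)
    alpha = 1.0 / (3 * tau1 * L)

    def grad_i(x, i):  # gradient of one component plus the ridge term
        return (A[i] @ x - b[i]) * A[i] + mu * x

    x_snap = np.zeros(d)
    y = z = x_snap.copy()
    for _ in range(epochs):
        full_grad = A.T @ (A @ x_snap - b) / n + mu * x_snap
        y_sum = np.zeros(d)
        for _ in range(n):
            # "Negative momentum": the iterate is pulled back toward the
            # snapshot x_snap with weight tau2, cancelling part of the
            # stochastic gradient's variance.
            x = tau1 * z + tau2 * x_snap + (1 - tau1 - tau2) * y
            i = rng.integers(n)
            g = grad_i(x, i) - grad_i(x_snap, i) + full_grad  # SVRG estimator
            z = z - alpha * g
            y = x - g / (3 * L)
            y_sum += y
        x_snap = y_sum / n     # simple averaged restart of the snapshot
    return x_snap
```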
We consider the task of image reconstruction while simultaneously decomposing the reconstructed image into components with different features. A commonly used tool for this is a variational approach with an infimal convolution of appropriate functions as a regularizer. Especially for noise corrupted observations, incorporating these functionals int...
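The infimal-convolution construction referred to above, written out schematically; J_1, J_2, the forward operator A, and the weights are generic placeholders rather than the paper's specific choices:

```latex
% Infimal convolution of two regularizers J_1 and J_2:
(J_1 \,\Box\, J_2)(u) \;=\; \inf_{u = u_1 + u_2} \; J_1(u_1) + J_2(u_2).

% Generic reconstruction-plus-decomposition model with forward operator A
% and data f; the minimizing pair (u_1, u_2) provides the decomposition:
\min_{u_1,\, u_2} \;\; \tfrac{1}{2}\,\| A(u_1 + u_2) - f \|_2^2
  \;+\; \alpha\, J_1(u_1) \;+\; \beta\, J_2(u_2).
```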
Introduction
Pneumothorax is a rare but important complication of COVID-19.¹ Although barotrauma may account for some cases, many affected patients have not received positive-pressure ventilatory (PPV) support¹. The pathophysiology of COVID-pneumothorax is challenging to investigate because imaging data exist in diverse silos and only 0.97% of pati...
Over the past decade, the overlap between machine learning and image processing has grown so considerably that the two fields have become inseparable. Some of the most important problems affecting science and society fall within these overlapping fields, including the development of self-driving cars and the automated interpretation of medical imag...
Background: Machine learning methods offer great potential for fast and accurate detection and prognostication of COVID-19 from standard-of-care chest radiographs (CXR) and computed tomography (CT) images. In this systematic review we critically evaluate the machine learning methodologies employed in the rapidly growing literature. Methods: In this...
We present a general analysis of variance reduced stochastic gradient methods with bias for minimising convex, strongly convex, and non-convex composite objectives. The key to our analysis is a new connection between bias and variance in stochastic gradient estimators, suggesting a new form of bias-variance tradeoff in stochastic optimisation. This...
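To make the bias-variance distinction concrete, here is a minimal sketch contrasting an unbiased SVRG-type estimator with a biased recursive (SARAH-type) estimator on an assumed least-squares toy problem; the objective and step size are illustrative only, not the paper's analysis:

```python
# Sketch only: two variance-reduced gradient estimators, one unbiased (SVRG),
# one biased (SARAH-style recursion), plugged into plain stochastic descent.
import numpy as np

def sgd_with_estimator(A, b, estimator="sarah", step=1e-2, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]   # one component gradient
    full_grad = lambda x: A.T @ (A @ x - b) / n
    x = np.zeros(d)
    for _ in range(epochs):
        anchor = x.copy()
        anchor_grad = full_grad(anchor)
        v = anchor_grad
        x_prev = x.copy()
        x = x - step * v
        for _ in range(n):
            i = rng.integers(n)
            if estimator == "svrg":
                # Unbiased: E[v] equals the true gradient at x.
                v = grad_i(x, i) - grad_i(anchor, i) + anchor_grad
            else:
                # SARAH-type recursion: lower variance, but biased.
                v = grad_i(x, i) - grad_i(x_prev, i) + v
            x_prev = x.copy()
            x = x - step * v
    return x
```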
This paper studies tensor-based Robust Principal Component Analysis (RPCA) using atomic-norm regularization. Given the superposition of a sparse and a low-rank tensor, we present conditions under which it is possible to exactly recover the sparse and low-rank components. Our results improve on existing performance guarantees for tensor-RPCA, includ...
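Schematically, the decomposition problem has the familiar RPCA form below; the specific tensor atomic norm and the exact-recovery conditions are the subject of the paper itself:

```latex
% Generic RPCA-style decomposition (schematic): split an observed tensor T
% into a low-rank part L, penalized by an atomic norm ||.||_A, and a sparse
% part S, penalized by the l1 norm.
\min_{L,\, S} \;\; \| L \|_{\mathcal{A}} \;+\; \lambda\, \| S \|_1
\qquad \text{subject to} \qquad L + S = T.
```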
We introduce two new methods to parallelize low rank recovery models in order to take advantage of GPU, multiple CPU, and hybridized architectures. Using Burer-Monteiro splitting and marginalization, we develop a smooth, non-convex formulation of regularized low rank recovery models that can be solved with first-order optimizers. Using L-BFGS on th...
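A minimal sketch of the Burer-Monteiro idea described above, on an assumed matrix-completion toy instance: the low-rank variable is parameterized as U Vᵀ, giving a smooth non-convex problem that L-BFGS (here via SciPy) can minimize; the problem setup and regularization are illustrative, not the paper's code:

```python
# Sketch only: Burer-Monteiro splitting for regularized low-rank recovery.
import numpy as np
from scipy.optimize import minimize

def bm_lowrank_recovery(M, mask, rank, lam=1e-2, seed=0):
    """Matrix completion: fit the observed entries of M (where mask == 1)."""
    n, m = M.shape
    rng = np.random.default_rng(seed)
    w0 = rng.standard_normal(n * rank + m * rank) * 0.1

    def unpack(w):
        return w[: n * rank].reshape(n, rank), w[n * rank:].reshape(m, rank)

    def objective(w):
        U, V = unpack(w)
        R = mask * (U @ V.T - M)              # residual on observed entries
        # (||U||_F^2 + ||V||_F^2)/2 is the standard variational surrogate
        # for nuclear-norm regularization of U @ V.T.
        f = 0.5 * np.sum(R ** 2) + 0.5 * lam * (np.sum(U ** 2) + np.sum(V ** 2))
        gU = R @ V + lam * U
        gV = R.T @ U + lam * V
        return f, np.concatenate([gU.ravel(), gV.ravel()])

    res = minimize(objective, w0, jac=True, method="L-BFGS-B")
    U, V = unpack(res.x)
    return U @ V.T
```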
The influence of fixed temperature and fixed heat flux thermal boundary conditions on rapidly rotating convection in the plane layer geometry is investigated for the case of stress-free mechanical boundary conditions. It is shown that whereas the leading order system satisfies fixed temperature boundary conditions implicitly, a double boundary laye...