Derek Driggs

University of Cambridge · Department of Applied Mathematics and Theoretical Physics

About

16 Publications
3,823 Reads
617 Citations (since 2016; 15 Research Items)
[Chart: citations per year, 2016-2022]
Introduction
Derek Driggs currently works at the Department of Applied Mathematics and Theoretical Physics, University of Cambridge. His research spans analysis and applied mathematics.

Publications (16)
Preprint
Full-text available
We propose novel stochastic proximal alternating linearized minimization (PALM) algorithms for solving a class of non-smooth and non-convex optimization problems which arise in many statistical machine learning, computer vision, and imaging applications. We provide a theoretical analysis, showing that our proposed method with variance-reduced stoch...
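A minimal sketch of the (deterministic) PALM iteration on nonnegative matrix factorisation, one of the classical problem classes this scheme addresses. All problem data here are synthetic assumptions for illustration; this is not the stochastic variance-reduced method proposed in the preprint.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 20, 15, 2
# synthetic nonnegative low-rank data, exactly factorisable at rank r
X = np.abs(rng.standard_normal((m, r))) @ np.abs(rng.standard_normal((r, n)))

U = np.abs(rng.standard_normal((m, r)))
V = np.abs(rng.standard_normal((n, r)))

for _ in range(1000):
    # linearise the smooth coupling 0.5*||U V^T - X||_F^2 in U and take a
    # proximal (here: projection onto U >= 0) step with step size 1/L_U
    LU = np.linalg.norm(V.T @ V, 2) + 1e-12   # block Lipschitz constant in U
    U = np.maximum(U - ((U @ V.T - X) @ V) / LU, 0.0)
    # then the same proximal-linearised step in V with U held fixed
    LV = np.linalg.norm(U.T @ U, 2) + 1e-12
    V = np.maximum(V - ((U @ V.T - X).T @ U) / LV, 0.0)

rel_err = np.linalg.norm(U @ V.T - X) / np.linalg.norm(X)
```

The alternating structure (exact block Lipschitz constants, one proximal-gradient step per block) is what makes PALM applicable to non-smooth, non-convex composite objectives of this form.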
Article
Full-text available
Machine learning methods offer great promise for fast and accurate detection and prognostication of coronavirus disease 2019 (COVID-19) from standard-of-care chest radiographs (CXR) and chest computed tomography (CT) images. Many articles have been published in 2020 describing new machine learning-based models for both of these tasks, but it is unc...
Article
Full-text available
Variance reduction is a crucial tool for improving the slow convergence of stochastic gradient descent. Only a few variance-reduced methods, however, have yet been shown to directly benefit from Nesterov’s acceleration techniques to match the convergence rates of accelerated gradient methods. Such approaches rely on “negative momentum”, a technique...
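The variance-reduction idea described in this abstract can be sketched with a standard SVRG-style estimator on least squares. The problem data and step-size rule below are assumptions for illustration, not the accelerated method of the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true  # noiseless least squares: f(x) = (1/2n) * ||A x - b||^2

def grad_i(x, i):
    # gradient of the i-th component f_i(x) = 0.5 * (a_i^T x - b_i)^2
    return (A[i] @ x - b[i]) * A[i]

step = 0.1 / np.max(np.sum(A**2, axis=1))  # conservative step from row norms
x = np.zeros(d)
for epoch in range(50):
    snap = x.copy()
    g_snap = A.T @ (A @ snap - b) / n      # full gradient at the snapshot
    for _ in range(2 * n):
        i = rng.integers(n)
        # variance-reduced estimator: unbiased, and its variance vanishes
        # as both x and the snapshot approach the minimiser
        g = grad_i(x, i) - grad_i(snap, i) + g_snap
        x -= step * g

err = np.linalg.norm(x - x_true)
```

Unlike plain SGD, this iteration converges linearly on strongly convex problems with a constant step size, which is the baseline the article's accelerated methods improve on.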
Thesis
Over the past decade, the overlap between machine learning and image processing has grown so considerably that the two fields have become inseparable. Some of the most important problems affecting science and society fall within these overlapping fields, including the development of self-driving cars and the automated interpretation of medical imag...
Preprint
Full-text available
Background: Machine learning methods offer great potential for fast and accurate detection and prognostication of COVID-19 from standard-of-care chest radiographs (CXR) and computed tomography (CT) images. In this systematic review we critically evaluate the machine learning methodologies employed in the rapidly growing literature. Methods: In this...
Preprint
Full-text available
Variance reduction is a crucial tool for improving the slow convergence of stochastic gradient descent. Only a few variance-reduced methods, however, have yet been shown to directly benefit from Nesterov's acceleration techniques to match the convergence rates of accelerated gradient methods. Such approaches rely on "negative momentum", a technique...
Preprint
Full-text available
We present a general analysis of variance reduced stochastic gradient methods with bias for minimising convex, strongly convex, and non-convex composite objectives. The key to our analysis is a new connection between bias and variance in stochastic gradient estimators, suggesting a new form of bias-variance tradeoff in stochastic optimisation. This...
Preprint
Full-text available
This paper studies tensor-based Robust Principal Component Analysis (RPCA) using atomic-norm regularization. Given the superposition of a sparse and a low-rank tensor, we present conditions under which it is possible to exactly recover the sparse and low-rank components. Our results improve on existing performance guarantees for tensor-RPCA, includ...
Article
Full-text available
We introduce two new methods to parallelize low rank recovery models in order to take advantage of GPU, multiple CPU, and hybridized architectures. Using Burer-Monteiro splitting and marginalization, we develop a smooth, non-convex formulation of regularized low rank recovery models that can be solved with first-order optimizers. Using L-BFGS on th...
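A toy illustration of the Burer-Monteiro splitting mentioned here, on synthetic data assumed for the example (a sketch of the general idea, not the authors' parallel L-BFGS implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 30, 20, 3

# synthetic rank-r target with a controlled spectrum
Q1, _ = np.linalg.qr(rng.standard_normal((m, r)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, r)))
M = Q1 @ np.diag([1.0, 0.8, 0.6]) @ Q2.T

# Burer-Monteiro splitting: parametrise X = U V^T so the rank constraint
# disappears and 0.5 * ||U V^T - M||_F^2 is smooth (but non-convex) in (U, V),
# solvable with any first-order optimiser -- plain gradient descent here
U = 0.1 * rng.standard_normal((m, r))
V = 0.1 * rng.standard_normal((n, r))
step = 0.2
for _ in range(2000):
    R = U @ V.T - M                        # residual
    U, V = U - step * R @ V, V - step * R.T @ U

rel_err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
```

The factors U and V have only (m + n) * r entries instead of m * n, which is what makes the formulation attractive for GPU and multi-CPU parallelisation.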
Article
Full-text available
The influence of fixed temperature and fixed heat flux thermal boundary conditions on rapidly rotating convection in the plane layer geometry is investigated for the case of stress-free mechanical boundary conditions. It is shown that whereas the leading order system satisfies fixed temperature boundary conditions implicitly, a double boundary laye...
