About
Publications: 29
Reads: 21,644
Citations: 1,564
Introduction
I'm an Assistant Professor at the University of Calgary, in the Department of Electrical and Software Engineering of the Schulich School of Engineering. I was previously a Postdoctoral Research Fellow at the Vector Institute and the University of Guelph, working with Prof. Graham Taylor and Prof. Mihai Nica, and before that a Visiting Researcher at Google Brain Toronto/Google AR Core.
Current institution
Additional affiliations
October 2019 - October 2020
March 2014 - December 2014
April 2011 - February 2013
Education
October 2015 - October 2018
September 2006 - March 2010
September 2000 - May 2006
Publications (29)
We propose a new method for training computationally efficient and compact convolutional neural networks (CNNs) using a novel sparse connection structure that resembles a tree root. Our sparse connection structure facilitates a significant reduction in computational cost and number of parameters of state-of-the-art deep CNNs without compromising ac...
Despite having high accuracy, neural nets have been shown to be susceptible to adversarial examples, where a small perturbation to an input can cause it to become mislabeled. We propose metrics for measuring the robustness of a neural net and devise a novel algorithm for approximating these metrics based on an encoding of robustness as a linear pro...
Published as a conference paper at ICLR 2016
Trained Models at http://dx.doi.org/10.5281/zenodo.53189
This paper investigates the connections between two state of the art classifiers: decision forests (DFs, including decision jungles) and convolutional neural networks (CNNs). Decision forests are computationally efficient thanks to their conditional computation property (computation is confined to only a small region of the tree, the nodes along a...
A novel multi-scale operator for unorganized 3D point clouds is introduced. The Difference of Normals (DoN) provides a computationally efficient, multi-scale approach to processing large unorganized 3D point clouds. The application of DoN in the multi-scale filtering of two different real-world outdoor urban LIDAR scene datasets is quantitatively a...
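The Difference of Normals (DoN) operator summarized above can be illustrated with a minimal sketch: a per-point surface normal is estimated by PCA at a small and a large support radius, and the magnitude of their half-difference serves as the multi-scale response used for filtering. This is a simplified illustration under stated assumptions (brute-force radius search, PCA normals, the `estimate_normal` and `difference_of_normals` names), not the paper's implementation.

```python
import numpy as np

def estimate_normal(points, center, radius):
    """Unit surface normal at `center` via PCA over neighbors within `radius`."""
    nbrs = points[np.linalg.norm(points - center, axis=1) <= radius]
    if len(nbrs) < 3:
        return np.zeros(3)
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
    _, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, 0]  # eigenvector of the smallest eigenvalue

def difference_of_normals(points, r_small, r_large):
    """DoN response per point: ||n_small - n_large|| / 2, in [0, 1]."""
    out = np.empty(len(points))
    for i, p in enumerate(points):
        n1 = estimate_normal(points, p, r_small)
        n2 = estimate_normal(points, p, r_large)
        # PCA normals have a sign ambiguity; align before differencing.
        if np.dot(n1, n2) < 0:
            n2 = -n2
        out[i] = np.linalg.norm(n1 - n2) / 2.0
    return out
```

On a locally flat region the two normals agree and the response is near zero; the response grows where the surface changes across scales, which is what makes thresholding it useful for filtering large urban LIDAR scans.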
Large neural networks achieve remarkable performance, but their size hinders deployment on resource-constrained devices. While various compression techniques exist, parameter sharing remains relatively unexplored. This paper introduces Fine-grained Parameter Sharing (FiPS), a novel algorithm that leverages the relationship between parameter sharing...
In recent years, Dynamic Sparse Training (DST) has emerged as an alternative to post-training pruning for generating efficient models. In principle, DST allows for a more memory efficient training process, as it maintains sparsity throughout the entire training run. However, current DST implementations fail to capitalize on this in practice. Becaus...
Dynamic Sparse Training (DST) methods achieve state-of-the-art results in sparse neural network training, matching the generalization of dense models while enabling sparse training and inference. Although the resulting models are highly sparse and theoretically less computationally expensive, achieving speedups with unstructured sparsity on real-wo...
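The prune-and-regrow cycle at the heart of Dynamic Sparse Training can be sketched in a few lines: after a gradient step on the active connections, the smallest-magnitude weights are dropped and an equal number of inactive connections are regrown, so total sparsity is constant throughout training. This is a generic SET-style illustration under stated assumptions (magnitude pruning, random regrowth, a fixed prune fraction), not the specific method of the papers above; `dst_step` and its parameters are hypothetical names.

```python
import numpy as np

def dst_step(weights, grads, lr=0.1, prune_frac=0.3):
    """One prune-and-regrow update on a sparse weight matrix.

    Active connections are the nonzero entries; the number of
    active connections is preserved across the update."""
    active = weights != 0
    # Gradient step on active connections only.
    weights = np.where(active, weights - lr * grads, 0.0)
    n_active = int(active.sum())
    k = int(prune_frac * n_active)
    if k == 0:
        return weights
    # Prune the k smallest-magnitude active weights...
    mags = np.where(weights != 0, np.abs(weights), np.inf)
    drop = np.argsort(mags, axis=None)[:k]
    flat = weights.ravel()
    flat[drop] = 0.0
    # ...and regrow k connections at random inactive positions.
    inactive = np.flatnonzero(flat == 0)
    grow = np.random.choice(inactive, size=k, replace=False)
    flat[grow] = np.random.randn(k) * 0.01
    return flat.reshape(weights.shape)
```

Because the mask never densifies, memory for weights and optimizer state can in principle stay proportional to the number of active connections, which is the efficiency opportunity the abstract refers to.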
Estimating the Generalization Error (GE) of Deep Neural Networks (DNNs) is an important task that often relies on availability of held-out data. The ability to better predict GE based on a single training set may yield overarching DNN design principles to reduce a reliance on trial-and-error, along with other performance assessment advantages. In s...
Sparse Neural Networks (NNs) can match the generalization of dense NNs using a fraction of the compute/storage for inference, and have the potential to enable efficient training. However, naively training unstructured sparse NNs from random initialization results in significantly worse generalization, with the notable exceptions of Lottery Tickets...
The failure of deep neural networks to generalize to out-of-distribution data is a well-known problem and raises concerns about the deployment of trained networks in safety-critical domains such as healthcare, finance and autonomous vehicles. We study a particular kind of distribution shift – shortcuts or spurious correlations in th...
Recent advancements in self-supervised learning have reduced the gap between supervised and unsupervised representation learning. However, most self-supervised and deep clustering techniques rely heavily on data augmentation, rendering them ineffective for many learning tasks where insufficient domain knowledge exists for performing augmentation. W...
Sparse Neural Networks (NNs) can match the generalization of dense NNs using a fraction of the compute/storage for inference, and also have the potential to enable efficient training. However, naively training unstructured sparse NNs from random initialization results in significantly worse generalization, with the notable exception of Lottery Tick...
Aims. Accurately and rapidly classifying exoplanet candidates from transit surveys is a goal of growing importance as the data rates from space-based survey missions increase. This is especially true for the NASA TESS mission which generates thousands of new candidates each month. Here we created the first deep-learning model capable of classifying...
Accurately and rapidly classifying exoplanet candidates from transit surveys is a goal of growing importance as the data rates from space-based survey missions increase. This is especially true for NASA's TESS mission which generates thousands of new candidates each month. Here we created the first deep learning model capable of classifying TESS p...
Space-based missions such as Kepler, and soon the Transiting Exoplanet Survey Satellite (TESS), provide large data sets that must be analyzed efficiently and systematically. Recent work by Shallue & Vanderburg successfully used state-of-the-art deep learning models to automatically classify Kepler transit signals as either exoplanets or false posit...
Space missions such as Kepler, and soon TESS, provide large datasets that need to be analyzed efficiently and systematically in order to yield accurate exoplanet statistics. Recent work by Shallue & Vanderburg (2018) demonstrated the successful application of state-of-the-art deep learning models to automatically classify transit signals in the Kep...
ABSTRACT
Hospitalized older adults are at high risk of falls. The HELPER system is a ceiling-mounted fall-detection system that sends an alert to a smartphone when a fall is detected. This article describes the performance of the HELPER system, which was tested in a pilot project conducted in a health centre...
Deep learning has in recent years come to dominate the previously separate fields of research in machine learning, computer vision, natural language understanding and speech recognition. Despite breakthroughs in training deep networks, there remains a lack of understanding of both the optimization and structure of deep networks. The approach advoca...
Deep Convolutional Neural Networks (CNNs) have recently demonstrated immense success on various image recognition tasks. However, a question of paramount importance remains largely unanswered in deep learning research: is the selected CNN optimal for the dataset in terms of accuracy and model size? In this paper, we intend to answer this question and intr...
We propose a new method for creating computationally efficient and compact convolutional neural networks (CNNs) using a novel sparse connection structure that resembles a tree root. This allows a significant reduction in computational cost and number of parameters compared to state-of-the-art deep CNNs, without compromising accuracy, by exploiting...
Deep Convolutional Neural Networks (CNNs) have recently demonstrated immense success on various image recognition tasks. However, a question of paramount importance remains largely unanswered in deep learning research: is the selected CNN optimal for the dataset in terms of accuracy and model size? In this paper, we intend to answer this question and intr...
In this work, we investigate the possibility of directly applying convolutional neural networks (CNNs) to the segmentation of brain tumor tissues. As input to the network, we use multi-channel intensity information from a small patch around each point to be labelled. Only standard intensity pre-processing is applied to the input data to account for scanner...
Disclosed herein are automated emergency detection and response systems and methods, in accordance with different embodiments of the invention. In some embodiments, a system is provided for detecting and responding to a potential emergency event occurring in respect of a user in or near a designated area. The system comprises one or more sensors di...
Recent advances in Light Detection and Ranging (LIDAR) technology and integration have resulted in vehicle-borne platforms for urban LIDAR scanning, such as Terrapoint Inc.'s TITAN system. Such technology has led to an explosion in ground LIDAR data. The large size of such mobile urban LIDAR data sets, and the ease at which they may now be collect...
Potential Well Space Embedding (PWSE) has been shown to be an effective global method for recognizing segmented objects in range data. Here Local PWSE (LPWSE) is proposed as an extension of PWSE. LPWSE features are generated by iterating ICP to the local minima of a multiscale registration model at each point. The locations of the local minima are then used t...