David Warde-Farley
  • Université de Montréal

About

45 Publications
88,220 Reads
83,954 Citations
Current institution: Université de Montréal

Publications (45)
Article
Full-text available
We introduce two Python frameworks to train neural networks on large datasets: Blocks and Fuel. Blocks is based on Theano, a linear algebra compiler with CUDA support. It facilitates the training of complex neural network models by providing parametrized Theano operations, attaching metadata to Theano's symbolic computational graph, and providing a...
Article
In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reason...
Conference Paper
The task of the emotion recognition in the wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood style movies. The videos depict acted-out emotions under realistic conditions with a large degree of variation in attributes such as pose and illumination, making it worthwhile to explore approaches whi...
Article
We study the problem of large scale, multi-label visual recognition with a large number of possible classes. We propose a method for augmenting a trained neural network classifier with auxiliary capacity in a manner designed to significantly improve upon an already well-performing model, while minimally impacting its computational footprint. Using...
Article
Full-text available
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximiz...
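The adversarial process summarized in this abstract corresponds to the two-player minimax game over a value function, with generator G and discriminator D trained jointly:

```latex
\min_G \max_D V(D, G) =
    \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

D is trained to assign high probability to training data and low probability to samples G(z), while G is trained to make D's task as hard as possible.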
Article
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximiz...
Article
Full-text available
The recently introduced dropout training criterion for neural networks has been the subject of much attention due to its simplicity and remarkable effectiveness as a regularizer, as well as its interpretation as a training procedure for an exponentially large ensemble of networks that share parameters. In this work we empirically investigate severa...
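As a rough illustration of the dropout criterion discussed in this abstract, here is a minimal pure-Python sketch of the "inverted" dropout variant (scaling survivors at training time rather than scaling weights at test time, as in the original formulation); the function name and signature are illustrative, not from the paper:

```python
import random

def dropout(activations, p_drop, training=True, rng=random):
    """Inverted dropout: zero each unit with probability p_drop during
    training and rescale survivors by 1/(1 - p_drop); identity at test time."""
    if not training:
        # At test time all units are used; no rescaling is needed because
        # survivors were already rescaled during training.
        return activations
    keep = 1.0 - p_drop
    return [a / keep if rng.random() >= p_drop else 0.0 for a in activations]
```

With `p_drop=0.5`, each unit effectively samples one member of an exponentially large ensemble of subnetworks sharing the same parameters.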
Conference Paper
In this paper we present the techniques used for the University of Montréal's team submissions to the 2013 Emotion Recognition in the Wild Challenge. The challenge is to classify the emotions expressed by the primary human subject in short video clips extracted from feature length movies. This involves the analysis of video clips of acted scenes la...
Article
Full-text available
Pylearn2 is a machine learning research library. This does not just mean that it is a collection of machine learning algorithms that share a common API; it means that it has been designed for flexibility and extensibility in order to facilitate research projects that involve new or unusual use cases. In this paper we give a brief history of the lib...
Article
Full-text available
We consider the problem of designing models to leverage a recently introduced approximate model averaging technique called dropout. We define a simple new model called maxout (so named because its output is the max of a set of inputs, and because it is a natural companion to dropout) designed to both facilitate optimization by dropout and improve t...
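The maxout unit described here outputs the maximum over a set of learned linear pieces. A toy pure-Python sketch of a single unit (hypothetical helper, no framework; weights and biases are illustrative):

```python
def maxout_unit(x, weight_sets, biases):
    """Maxout activation: compute k affine pieces z_j = w_j . x + b_j
    and return the maximum over the k pieces."""
    zs = [sum(wi * xi for wi, xi in zip(w, x)) + b
          for w, b in zip(weight_sets, biases)]
    return max(zs)

# Two linear pieces over a 2-d input: the unit picks whichever is larger.
out = maxout_unit([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
```

Because the output is a max of affine functions, a maxout unit is a piecewise-linear convex approximator, which pairs naturally with dropout's model-averaging interpretation.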
Conference Paper
We consider the problem of designing models to leverage a recently introduced approximate model averaging technique called dropout. We define a simple new model called maxout (so named because its output is the max of a set of inputs, and because it is a natural companion to dropout) designed to both facilitate optimization by dropout and improve t...
Article
Full-text available
Theano is a linear algebra compiler that optimizes a user's symbolically-specified mathematical computations to produce efficient low-level implementations. In this paper, we present new features and efficiency improvements to Theano, and benchmarks demonstrating Theano's performance relative to Torch7, a recently introduced machine learning librar...
Article
Full-text available
Learning good representations from a large set of unlabeled data is a particularly challenging task. Recent work (see Bengio (2009) for a review) shows that training deep architectures is a good way to extract such representations, by extracting and disentangling gradually higher-level factors of variation characterizing the input distribution. I...
Article
Full-text available
GeneMANIA (http://www.genemania.org) is a flexible, user-friendly web interface for generating hypotheses about gene function, analyzing gene lists and prioritizing genes for functional assays. Given a query list, GeneMANIA extends the list with functionally similar genes that it identifies using available genomics and proteomics data. GeneMANIA al...
Article
Full-text available
Theano is a compiler for mathematical expressions in Python that combines the convenience of NumPy's syntax with the speed of optimized native machine language. The user composes mathematical expressions in a high-level description that mimics NumPy's syntax and semantics, while being statically typed and functional (as opposed to imperative). Thes...
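The core idea behind the compiler described in this abstract, building a symbolic expression graph first and evaluating it later, can be sketched in a few lines of plain Python (a toy illustration only; Theano's actual graph, optimizer, and compilation machinery are far more elaborate):

```python
# Toy symbolic-expression graph: operators build nodes instead of computing.
class Var:
    def __init__(self, name): self.name = name
    def __add__(self, other): return Op('add', self, other)
    def __mul__(self, other): return Op('mul', self, other)
    def eval(self, env): return env[self.name]

class Op:
    def __init__(self, kind, a, b): self.kind, self.a, self.b = kind, a, b
    def __add__(self, other): return Op('add', self, other)
    def __mul__(self, other): return Op('mul', self, other)
    def eval(self, env):
        a, b = self.a.eval(env), self.b.eval(env)
        return a + b if self.kind == 'add' else a * b

x, y = Var('x'), Var('y')
expr = x * y + x                       # graph is built; nothing computed yet
result = expr.eval({'x': 2, 'y': 3})   # evaluation happens on demand
```

Deferring evaluation this way is what lets a real compiler like Theano optimize the whole graph and emit efficient native (or GPU) code before any numbers flow through it.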
Article
Full-text available
Changes in the biochemical wiring of oncogenic cells drives phenotypic transformations that directly affect disease outcome. Here we examine the dynamic structure of the human protein interaction network (interactome) to determine whether changes in the organization of the interactome can be used to predict patient outcome. An analysis of hub prote...
Data
Bar graphs of mean P20R values within each evaluation category.
Data
Full-text available
Bar graphs comparing properties of GO annotations in the held-out gene set, in the newly annotated gene set and in the training set.
Data
Full-text available
Clustergram indicating Pearson correlation coefficients of the P20R performance measure among different submissions.
Data
Performance measures for the initial round of GO term predictions within each evaluation category evaluated using held-out genes.
Data
Performance measures for the second round of GO term predictions within each evaluation category evaluated using held-out genes.
Data
Full-text available
Heatmaps of precision at several recall values evaluated using held-out annotations on all GO terms within each of the 12 evaluation categories for each submission.
Data
Performance measures for the second round of GO term predictions within each evaluation category evaluated using the newly annotated genes (prospective evaluation).
Data
Performance measures of the unified predictions for each GO term.
Data
High-scoring predictions evaluated against existing literature.
Data
Full-text available
Detailed description of the submission methods and the straw man classifier.
Data
Full-text available
Bar graphs of pairwise comparisons of AUC within each evaluation category.
Data
Full-text available
Heatmap of median precision at several recall values evaluated using held-out annotations within each of the 12 evaluation categories per submission.
Data
Performance measures for the initial round of GO term predictions within each evaluation category evaluated using the newly annotated genes (prospective evaluation).
Data
Results of the analysis of variance in prediction performance.
Data
Performance and variance on five subsets of the training data.
Data
Mitochondrial part predictions with data from a previous study [38].
Data
Performance measures for various individual evidence sources within each evaluation category evaluated using held-out genes.
Data
Fraction of GO terms with higher precision and recall than a given precision/recall point for the unified predictions.
Data
Description of the function prediction method used in each submission.
Data
Full-text available
Supplementary Figures 1 to 5.
Article
Full-text available
Several years after sequencing the human genome and the mouse genome, much remains to be discovered about the functions of most human and mouse genes. Computational prediction of gene function promises to help focus limited experimental resources on the most likely hypotheses. Several algorithms using diverse genomic data have been applied to this...
Article
Full-text available
Most successful computational approaches for protein function prediction integrate multiple genomics and proteomics data sources to make inferences about the function of unknown proteins. The most accurate of these algorithms have long running times, making them unsuitable for real-time protein function prediction in large genomes. As a result, the...
