Jacob Zavatone-Veth

Harvard University · Department of Physics

Master of Arts

About

26 Publications
4,065 Reads
121 Citations

Publications (26)
Preprint
Full-text available
We show how the replica method can be used to compute the asymptotic eigenvalue spectrum of a real Wishart product matrix. This provides a compact, elementary derivation of a polynomial condition on the Stieltjes transform first proved by Müller [IEEE Trans. Inf. Theory 48, 2086-2091 (2002)]. We additionally derive polynomial conditions on the m...
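As a rough numerical companion to this result, the sketch below samples a real Wishart product matrix built from Gaussian factors and prints a crude histogram of its eigenvalues. The matrix size, number of factors, and normalization are illustrative assumptions; the code only visualizes the empirical spectrum that the replica / Stieltjes-transform analysis characterizes analytically.

```python
import numpy as np

# Empirical eigenvalue spectrum of a real Wishart product matrix W = P P^T,
# where P is a product of L independent square Gaussian matrices, each scaled
# by 1/sqrt(n) so that the spectrum of W stays O(1) as n grows. The replica /
# Stieltjes-transform analysis characterizes the n -> infinity density.
rng = np.random.default_rng(0)
n, L = 1000, 2  # illustrative size and number of factors

P = np.eye(n)
for _ in range(L):
    P = P @ (rng.standard_normal((n, n)) / np.sqrt(n))

eigs = np.linalg.eigvalsh(P @ P.T)

# Crude text histogram of the bulk of the spectrum.
hist, edges = np.histogram(eigs, bins=20, density=True)
for lo, hi, h in zip(edges[:-1], edges[1:], hist):
    bar = "#" * int(50 * h / hist.max())
    print(f"[{lo:6.2f}, {hi:6.2f}) {bar}")
```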
Article
Neurons integrate excitatory and inhibitory signals to produce their outputs, but the role of input timing in this integration remains poorly understood. Motion detection is a paradigmatic example of this integration, since theories of motion detection rely on different delays in visual signals. These delays allow circuits to compare scenes at diff...
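A minimal sketch of the delay-and-compare idea, using a Hassenstein-Reichardt-style correlator as a generic abstraction (not necessarily the specific circuit model analyzed here): two spatially offset inputs are each delayed, cross-multiplied, and subtracted, so that opposite motion directions produce opposite-signed time-averaged outputs. All stimulus parameters below are illustrative.

```python
import numpy as np

# Delay-and-compare motion detection with a Hassenstein-Reichardt-style correlator.
# Two points sample a drifting sinusoidal grating; each arm multiplies one input
# by a delayed copy of the other, and the two arms are subtracted. The sign of the
# time-averaged output then reports the direction of motion.
def hrc_output(direction, dt=1e-3, T=10.0, delay=0.05,
               spacing=0.1, spatial_freq=1.0, temporal_freq=1.0):
    t = np.arange(0.0, T, dt)
    phase = 2 * np.pi * temporal_freq * t
    s1 = np.sin(phase)                                                   # input at x = 0
    s2 = np.sin(phase - direction * 2 * np.pi * spatial_freq * spacing)  # input at x = spacing
    d = int(round(delay / dt))
    s1_del = np.concatenate([np.zeros(d), s1[:-d]])                      # delayed copies
    s2_del = np.concatenate([np.zeros(d), s2[:-d]])
    # Mirror-symmetric arms: correlate and subtract.
    return np.mean(s1_del * s2 - s2_del * s1)

print("motion in +x direction:", hrc_output(direction=+1))   # positive output
print("motion in -x direction:", hrc_output(direction=-1))   # opposite sign
```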
Article
Understanding how feature learning affects generalization is among the foremost goals of modern deep learning theory. Here, we study how the ability to learn representations affects the generalization performance of a simple class of models: deep Bayesian linear neural networks trained on unstructured Gaussian data. By comparing deep random feature...
Preprint
For animals to navigate an uncertain world, their brains need to estimate uncertainty at the timescales of sensations and actions. Sampling-based algorithms afford a theoretically grounded framework for probabilistic inference in neural circuits, but it remains unknown how one can implement fast sampling algorithms in biologically plausible spiking...
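For context only, the sketch below shows the generic sampling-based inference framework the abstract refers to: unadjusted Langevin dynamics drawing samples from a toy two-dimensional Gaussian posterior. It is not the spiking-network implementation studied in the paper, and the target distribution, step size, and run length are illustrative assumptions.

```python
import numpy as np

# Generic sampling-based inference: unadjusted Langevin dynamics targeting a toy
# two-dimensional Gaussian posterior N(mu, Sigma). This only illustrates the
# framework; it is not a spiking-network implementation.
rng = np.random.default_rng(1)
mu = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 0.5]])
Prec = np.linalg.inv(Sigma)

def grad_log_p(x):
    # Gradient of the Gaussian log-density (up to an additive constant).
    return -Prec @ (x - mu)

dt, n_steps, burn_in = 1e-2, 50_000, 5_000
x = np.zeros(2)
samples = np.empty((n_steps, 2))
for i in range(n_steps):
    # Langevin update: drift up the log-density plus isotropic Gaussian noise.
    x = x + dt * grad_log_p(x) + np.sqrt(2 * dt) * rng.standard_normal(2)
    samples[i] = x

kept = samples[burn_in:]
print("posterior mean estimate:", kept.mean(axis=0))                    # close to mu
print("posterior covariance estimate:\n", np.cov(kept, rowvar=False))   # close to Sigma
```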
Article
Full-text available
The brain displays a remarkable ability to sustain stable memories, allowing animals to execute precise behaviors or recall stimulus associations years after they were first learned. Yet, recent long-term recording experiments have revealed that single-neuron representations continuously change over time, contravening the classical assumption that...
Article
Full-text available
Our understanding of the neural basis of locomotor behavior can be informed by careful quantification of animal movement. Classical descriptions of legged locomotion have defined discrete locomotor gaits, characterized by distinct patterns of limb movement. Recent technical advances have enabled increasingly detailed characterization of limb kinema...
Article
In this short note, we reify the connection between work on the storage capacity problem in wide two-layer treelike neural networks and the rapidly growing body of literature on kernel limits of wide neural networks. Concretely, we observe that the “effective order parameter” studied in the statistical mechanics literature is exactly equivalent to...
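One side of this correspondence can be checked numerically. The sketch below compares the Monte Carlo overlap of a wide layer of sign units with the closed-form arcsine kernel of the corresponding infinite-width limit; the specific order-parameter identification made in the note is assumed only as motivation, and the width and input dimension are illustrative.

```python
import numpy as np

# Kernel limit of a wide layer of sign units with i.i.d. Gaussian weights: the
# average overlap of the hidden representations of two inputs converges, as the
# width grows, to the closed-form arcsine kernel
#     K(x, x') = (2 / pi) * arcsin( x.x' / (|x| |x'|) ).
rng = np.random.default_rng(2)
d, width = 50, 200_000   # illustrative input dimension and hidden width

x1 = rng.standard_normal(d)
x2 = rng.standard_normal(d)

W = rng.standard_normal((width, d))         # one wide layer of random weights
h1, h2 = np.sign(W @ x1), np.sign(W @ x2)   # +/-1 hidden activations

mc_overlap = np.mean(h1 * h2)               # empirical hidden-layer overlap
rho = x1 @ x2 / (np.linalg.norm(x1) * np.linalg.norm(x2))
exact = (2 / np.pi) * np.arcsin(rho)        # infinite-width prediction

print(f"Monte Carlo overlap at width {width}: {mc_overlap:.4f}")
print(f"arcsine kernel (wide-width limit):    {exact:.4f}")
```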
Preprint
Full-text available
Understanding how feature learning affects generalization is among the foremost goals of modern deep learning theory. Here, we study how the ability to learn representations affects the generalization performance of a simple class of models: deep Bayesian linear neural networks trained on unstructured Gaussian data. By comparing deep random feature...
Preprint
Full-text available
In this short note, we reify the connection between work on the storage capacity problem in wide two-layer treelike neural networks and the rapidly growing body of literature on kernel limits of wide neural networks. Concretely, we observe that the "effective order parameter" studied in the statistical mechanics literature is exactly equivalent to...
Preprint
Full-text available
Our understanding of the neural basis of locomotor behavior can be informed by careful quantification of animal movement. Classical descriptions of legged locomotion have defined discrete locomotor gaits, characterized by distinct patterns of limb movement. Recent technical advances have enabled increasingly detailed characterization of limb kinema...
Preprint
Full-text available
Inference in deep Bayesian neural networks is only fully understood in the infinite-width limit, where the posterior flexibility afforded by increased depth washes out and the posterior predictive collapses to a shallow Gaussian process. Here, we interpret finite deep linear Bayesian neural networks as data-dependent scale mixtures of Gaussian proc...
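A hedged illustration of why finite-width output priors are non-Gaussian: sampling the prior of a toy two-layer linear network at a single input and measuring the excess kurtosis, which vanishes for a Gaussian and shrinks as the hidden width grows. The widths, input dimension, and sample counts are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Output prior of a finite two-layer linear Bayesian network at one fixed input,
#     f(x) = v . (W x) / sqrt(n),   v_i ~ N(0, 1),  W_ij ~ N(0, 1/d),
# for a unit-norm input x. At infinite hidden width n the prior is Gaussian; at
# finite n it is a scale mixture of Gaussians with heavier tails, visible through
# the excess kurtosis (zero for a Gaussian, roughly 6/n here).
rng = np.random.default_rng(3)
d, n_samples = 20, 200_000   # illustrative input dimension and prior sample count

def excess_kurtosis(f):
    f = (f - f.mean()) / f.std()
    return np.mean(f**4) - 3.0

for n in (2, 10, 100):
    v = rng.standard_normal((n_samples, n))
    # For a unit-norm input, the hidden activations h = W x are i.i.d. N(0, 1/d).
    h = rng.standard_normal((n_samples, n)) / np.sqrt(d)
    f = np.sum(v * h, axis=1) / np.sqrt(n)
    print(f"hidden width {n:4d}: excess kurtosis ~ {excess_kurtosis(f): .3f}")
```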
Preprint
Neurons integrate excitatory and inhibitory signals to produce their outputs, but the role of input timing in this integration remains poorly understood. Motion detection is a paradigmatic example of this integration, since theories of motion detection rely on different delays in visual signals. These delays allow circuits to compare scenes at diff...
Preprint
Full-text available
Recent works have suggested that finite Bayesian neural networks may outperform their infinite cousins because finite networks can flexibly adapt their internal representations. However, our theoretical understanding of how the learned hidden layer representations of finite networks differ from the fixed representations of infinite networks remains...
Preprint
Full-text available
Bayesian neural networks are theoretically well-understood only in the infinite-width limit, where Gaussian priors over network weights yield Gaussian priors over network outputs. Recent work has suggested that finite Bayesian networks may outperform their infinite counterparts, but their non-Gaussian output priors have been characterized only thou...
Article
The expressive power of artificial neural networks crucially depends on the nonlinearity of their activation functions. Though a wide variety of nonlinear activation functions have been proposed for use in artificial neural networks, a detailed understanding of their role in determining the expressive power of a network has not emerged. Here, we st...
Article
Timing and its variability are crucial for behavior. Consequently, neural circuits that take part in the control of timing and in the measurement of temporal intervals have been the subject of much research. Here we provide an analytical and computational account of the temporal variability in what is perhaps the most basic model of a timing circui...
Preprint
Full-text available
The expressive power of artificial neural networks crucially depends on the nonlinearity of their activation functions. Though a wide variety of nonlinear activation functions have been proposed for use in artificial neural networks, a detailed understanding of their role in determining the expressive power of a network has not emerged. Here, we st...
Article
Full-text available
Previous work has characterized how walking Drosophila coordinate the movements of individual limbs (DeAngelis, Zavatone-Veth, and Clark, 2019). To understand the circuit basis of this coordination, one must characterize how sensory feedback from each limb affects walking behavior. However, it has remained difficult to manipulate neural activity in...
Preprint
Full-text available
Timing and its variability are critical for behavior. Consequently, neural circuits that take part in the control of timing and in the measurement of temporal intervals have been the subject of much research. Here, we provide an analytical and computational account of the temporal variability in what is perhaps the simplest model of a timing circui...
Article
Full-text available
Visual motion estimation is a canonical neural computation. In Drosophila, recent advances have identified anatomic and functional circuitry underlying direction-selective computations. Models with varying levels of abstraction have been proposed to explain specific experimental results but have rarely been compared across experiments. Here we use...
Preprint
Full-text available
Visual motion estimation is a canonical neural computation. In Drosophila, recent advances have identified anatomical and functional circuitry underlying direction-selective computations. Models with varying levels of abstraction have been proposed to explain specific experimental results, but have rarely been compared across experiments. Here we c...
Article
Full-text available
In functional imaging, large numbers of neurons are measured during sensory stimulation or behavior. This data can be used to map receptive fields that describe neural associations with stimuli or with behavior. The temporal resolution of these receptive fields has traditionally been limited by image acquisition rates. However, even when acquisitio...
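As a generic illustration of receptive-field mapping (not the acquisition-rate analysis in this paper), the sketch below recovers a linear temporal receptive field from a simulated white-noise experiment by ridge-regularized reverse correlation; the filter shape, noise level, and ridge penalty are illustrative assumptions.

```python
import numpy as np

# Linear temporal receptive-field mapping by ridge-regularized reverse correlation
# on simulated data: a white-noise stimulus drives a noisy linear response through
# a biphasic temporal filter, and the filter is recovered by ridge regression.
rng = np.random.default_rng(4)
n_t, n_lags = 20_000, 40

# Ground-truth temporal filter over n_lags time bins (biphasic, decaying).
lags = np.arange(n_lags)
true_rf = np.exp(-lags / 8.0) * np.sin(2 * np.pi * lags / 20.0)

# White-noise stimulus and its lagged design matrix.
stim = rng.standard_normal(n_t)
X = np.column_stack([np.roll(stim, k) for k in range(n_lags)])
X[:n_lags] = 0.0   # discard wrap-around lags introduced by np.roll

# Noisy linear response.
resp = X @ true_rf + 0.5 * rng.standard_normal(n_t)

# Ridge-regression estimate of the receptive field.
lam = 10.0
rf_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ resp)

print(f"correlation(true, estimated) = {np.corrcoef(true_rf, rf_hat)[0, 1]:.3f}")
```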
Article
Full-text available
Direction-selective neurons respond to visual motion in a preferred direction. They are direction-opponent if they are also inhibited by motion in the opposite direction. In flies and vertebrates, direction opponency has been observed in second-order direction-selective neurons, which achieve this opponency by subtracting signals from first-order d...
Article
Full-text available
Terrestrial locomotion requires animals to coordinate their limb movements to efficiently traverse their environment. While previous studies in hexapods have reported that limb coordination patterns can vary substantially, the structure of this variability is not yet well understood. Here, we characterized the symmetric and asymmetric components of...
