Edward Paxon Frady, PhD
University of California, Berkeley (UCB) · Helen Wills Neuroscience Institute
About
Publications: 64
Reads: 11,538
Citations: 2,091
Introduction
Edward Paxon Frady currently works at the Helen Wills Neuroscience Institute, University of California, Berkeley.
Additional affiliations
January 2016 - present
July 2006 - September 2008
September 2008 - September 2014
Publications (64)
We introduce residue hyperdimensional computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operatio...
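The core encoding step described above can be sketched in a few lines: a residue is represented as componentwise powers of a random phasor base vector whose entries are m-th roots of unity, so that componentwise multiplication of codes implements addition modulo m. This is a minimal sketch; the dimensionality, modulus, and variable names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1000   # vector dimensionality (illustrative)
m = 7      # modulus (illustrative)

# Base vector whose components are random m-th roots of unity
phases = rng.integers(0, m, size=D)
base = np.exp(2j * np.pi * phases / m)

def encode(x):
    # the residue of x mod m is the x-th componentwise power of the base
    return base ** x

# Componentwise multiplication of codes implements addition modulo m
a, b = 3, 6
combined = encode(a) * encode(b)
assert np.allclose(combined, encode((a + b) % m))
```

Because every component is an m-th root of unity, the code for x automatically wraps around modulo m, which is what makes the representation a residue number code.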
Visual odometry (VO) is a method used to estimate self-motion of a mobile robot using visual sensors. Unlike odometry based on integrating differential measurements that can accumulate errors, such as inertial sensors or wheel encoders, VO is not compromised by drift. However, image-based VO is computationally demanding, limiting its application in...
Analysing a visual scene by inferring the configuration of a generative model is widely considered the most flexible and generalizable approach to scene understanding. Yet, one major problem is the computational challenge of the inference procedure, involving a combinatorial search across object identities and poses. Here we propose a neuromorphic...
We propose a normative model for spatial representation in the hippocampal formation that combines optimality principles, such as maximizing coding range and spatial information per neuron, with an algebraic framework for computing in distributed representation. Spatial position is encoded in a residue number system, with individual residues repres...
We introduce Residue Hyperdimensional Computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operatio...
We investigate the task of retrieving information from compositional distributed representations formed by Hyperdimensional Computing/Vector Symbolic Architectures and present novel techniques which achieve new information rate bounds. First, we provide an overview of the decoding techniques that can be used to approach the retrieval task. The tech...
We investigate the task of retrieving information from compositional distributed representations formed by hyperdimensional computing/vector symbolic architectures and present novel techniques that achieve new information rate bounds. First, we provide an overview of the decoding techniques that can be used to approach the retrieval task. The techn...
Complex visual scenes that are composed of multiple objects, each with attributes such as object name, location, pose, color, etc., are challenging to describe in order to train neural networks. Usually, deep learning networks are trained with supervision from categorical scene descriptions. The common categorical description of a scene contains the names...
Autonomous agents require self-localization to navigate in unknown environments. They can use Visual Odometry (VO) to estimate self-motion and localize themselves using visual sensors. This motion-estimation strategy is not compromised by drift, as inertial sensors are, or by slippage, as wheel encoders are. However, VO with conventional cameras is computational...
Inferring the position of objects and their rigid transformations is still an open problem in visual scene understanding. Here we propose a neuromorphic solution that utilizes an efficient factorization network which is based on three key concepts: (1) a computational framework based on Vector Symbolic Architectures (VSA) with complex-valued vector...
Vector Symbolic Architectures (VSA) were first proposed as connectionist models for symbolic reasoning, leveraging parallel and in-memory computing in brains and neuromorphic hardware that enable low-power, low-latency applications. Symbols are defined in VSAs as points/vectors in a high-dimensional neural state-space. For spiking neuromorphic hard...
The biologically inspired spiking neurons used in neuromorphic computing are nonlinear filters with dynamic state variables, which is distinct from the stateless neuron models used in deep learning. The new version of Intel’s neuromorphic research processor, Loihi 2, supports an extended range of stateful spiking neuron models with programmable dyn...
Spiking Neural Networks (SNNs) have attracted the attention of the deep learning community for use in low-latency, low-power neuromorphic hardware, as well as models for understanding neuroscience. In this paper, we introduce Spiking Phasor Neural Networks (SPNNs). SPNNs are based on complex-valued Deep Neural Networks (DNNs), representing phases b...
In this paper, we present an approach to integer factorization using distributed representations formed with Vector Symbolic Architectures. The approach formulates integer factorization in a manner such that it can be solved using neural networks and potentially implemented on parallel neuromorphic hardware. We introduce a method for encoding numbe...
The biologically inspired spiking neurons used in neuromorphic computing are nonlinear filters with dynamic state variables - very different from the stateless neuron models used in deep learning. The next version of Intel's neuromorphic research processor, Loihi 2, supports a wide range of stateful spiking neuron models with fully programmable dyn...
Various nonclassical approaches of distributed information processing, such as neural networks, reservoir computing (RC), vector symbolic architectures (VSAs), and others, employ the principle of collective-state computing. In this type of computing, the variables relevant in computation are superimposed into a single high-dimensional state vector,...
Vector space models for symbolic processing that encode symbols by random vectors have been proposed in cognitive science and connectionist communities under the names Vector Symbolic Architecture (VSA), and, synonymously, Hyperdimensional (HD) computing. In this paper, we generalize VSAs to function spaces by mapping continuous-valued data into a...
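One common way to map continuous-valued data into a VSA, consistent with the generalization to function spaces sketched above, is fractional power encoding: a scalar is encoded as componentwise fractional powers of a random phasor vector, and the inner product between two codes then depends only on the difference of the encoded values. The constants below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
D = 10_000   # dimensionality (illustrative)

# Random phasor base; fractional powers of it encode a continuum
phi = rng.uniform(-np.pi, np.pi, size=D)

def encode(x):
    # fractional power encoding: componentwise power by a real-valued x
    return np.exp(1j * phi * x)

def sim(u, v):
    # normalized real inner product between two phasor codes
    return (u * v.conj()).real.mean()

# similarity depends only on x - y: nearby values map to similar codes
assert sim(encode(0.3), encode(0.5)) > sim(encode(0.3), encode(3.0))
```

The induced similarity kernel is shift-invariant, which is what lets such codes represent functions of a continuous variable rather than only discrete symbols.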
Variable binding is a cornerstone of symbolic reasoning and cognition. But how binding can be implemented in connectionist models has puzzled neuroscientists, cognitive psychologists, and neural network researchers for many decades. One type of connectionist model that naturally includes a binding operation is vector symbolic architectures (VSAs)....
This article reviews recent progress in the development of the computing framework Vector Symbolic Architectures (also known as Hyperdimensional Computing). This framework is well suited for implementation in stochastic, nanoscale hardware and it naturally expresses the types of cognitive operations required for Artificial Intelligence (AI). We dem...
We propose an approximation of echo state networks (ESNs) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed integer ESN (intESN) is a vector containing only $n$-bit integers (where $n < 8$ is normally sufficient for satisfactory performance). The rec...
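A minimal sketch of the integer-reservoir idea described above, assuming, as in published intESN descriptions, a cyclic shift for the recurrent weights, a bipolar input projection, and clipping to keep every state a small integer; the sizes and input sequence here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 500       # reservoir size (illustrative)
kappa = 3     # clipping threshold; states stay in [-kappa, kappa]

# hypothetical bipolar projection of a scalar input into the reservoir
w_in = rng.choice([-1, 1], size=D)

state = np.zeros(D, dtype=int)
for u in [1, -1, 1, 1]:   # toy input sequence
    # recurrence is a cyclic shift (a norm-preserving permutation);
    # the clipping nonlinearity keeps every entry a small integer
    state = np.clip(np.roll(state, 1) + u * w_in, -kappa, kappa)

assert state.min() >= -kappa and state.max() <= kappa
```

Since every state component lies in a small integer range, the whole reservoir fits in a few bits per component, which is the point of the approximation.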
Many neural network models have been successful at classification problems, but their operation is still treated as a black box. Here, we developed a theory for one-layer perceptrons that can predict performance on classification tasks. This theory is a generalization of an existing theory for predicting the performance of Echo State Networks and c...
We develop theoretical foundations of resonator networks, a new type of recurrent neural network introduced in Frady, Kent, Olshausen, and Sommer, a companion paper in this issue (“Resonator Networks, 1: An Efficient Solution for Factoring High-Dimensional, Distributed Representations of Data Structures”) to solve a high-dimensional vector factoriz...
The ability to encode and manipulate data structures with distributed neural representations could qualitatively enhance the capabilities of traditional neural networks by supporting rule-based symbolic reasoning, a central property of cognition. Here we show how this may be accomplished within the framework of vector symbolic architectures (VSA) (...
Various non-classical approaches of distributed information processing, such as neural networks, computation with Ising models, reservoir computing, vector symbolic architectures, and others, employ the principle of collective-state computing. In this type of computing, the variables relevant in a computation are superimposed into a single high-dim...
Symbolic reasoning and neural networks are often considered incompatible approaches. Connectionist models known as Vector Symbolic Architectures (VSAs) can potentially bridge this gap. However, classical VSAs and neural networks are still considered incompatible. VSAs encode symbols by dense pseudo-random vectors, where information is distributed t...
The deployment of machine learning algorithms on resource-constrained edge devices is an important challenge from both theoretical and applied points of view. In this brief, we focus on resource-efficient randomly connected neural networks known as random vector functional link (RVFL) networks since their simple design and extremely fast training t...
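The RVFL design mentioned above (a fixed random hidden layer plus direct input-output links, with only the readout trained) can be sketched as follows; the toy data, layer sizes, and ridge penalty are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# toy regression data: the target is linear in the inputs
X = rng.normal(size=(200, 4))
y = (X[:, 0] - 2 * X[:, 2]).reshape(-1, 1)

# RVFL: fixed random hidden layer, plus direct input-output links;
# only the readout weights are trained (here by ridge regression)
H = np.tanh(X @ rng.normal(size=(4, 50)) + rng.normal(size=50))
F = np.hstack([X, H])   # direct links concatenated with hidden features
W = np.linalg.solve(F.T @ F + 1e-3 * np.eye(F.shape[1]), F.T @ y)

pred = F @ W
assert np.mean((pred - y) ** 2) < 0.01
```

Training reduces to a single linear solve, which is why RVFL networks are attractive for resource-constrained edge devices.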
The ability to encode and manipulate data structures with distributed neural representations could qualitatively enhance the capabilities of traditional neural networks by supporting rule-based symbolic reasoning, a central property of cognition. Here we show how this may be accomplished within the framework of Vector Symbolic Architectures (VSA) (...
Neuromorphic computing applies insights from neuroscience to uncover innovations in computing technology. In the brain, billions of interconnected neurons perform rapid computations at extremely low energy levels by leveraging properties that are foreign to conventional computing systems, such as temporal spiking codes and finely parallelized proce...
The deployment of machine learning algorithms on resource-constrained edge devices is an important challenge from both theoretical and applied points of view. In this article, we focus on resource-efficient randomly connected neural networks known as Random Vector Functional Link (RVFL) networks since their simple design and extremely fast training...
Significance
This work makes 2 contributions. First, we present a neural network model of associative memory that stores and retrieves sparse patterns of complex variables. This network can store analog information as fixed-point attractors in the complex domain; it is governed by an energy function and has increased memory capacity compared to ear...
We describe a type of neural network, called a Resonator Circuit, that factors high-dimensional vectors. Given a composite vector formed by the Hadamard product of several other vectors drawn from a discrete set, a Resonator Circuit can efficiently decompose the composite into these factors. This paper focuses on the case of "bipolar" vectors whose...
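The factorization dynamics can be sketched for the bipolar, two-factor case: each factor estimate unbinds the other estimate from the composite and is cleaned up through its own codebook. The codebook sizes and iteration count below are illustrative, and the initialization follows the common choice of a superposition of all codewords.

```python
import numpy as np

rng = np.random.default_rng(2)
D, K = 2000, 10   # dimensionality and codebook size (illustrative)

# random bipolar codebooks for two factors
A = rng.choice([-1, 1], size=(K, D))
B = rng.choice([-1, 1], size=(K, D))

# composite = Hadamard product of one vector from each codebook
ia, ib = 4, 7
s = A[ia] * B[ib]

# initialize each estimate as the superposition of its codebook
a_hat = np.sign(A.sum(axis=0)); a_hat[a_hat == 0] = 1
b_hat = np.sign(B.sum(axis=0)); b_hat[b_hat == 0] = 1

for _ in range(20):
    # unbind the other estimate from s, then clean up via the codebook
    a_hat = np.sign(A.T @ (A @ (s * b_hat))); a_hat[a_hat == 0] = 1
    b_hat = np.sign(B.T @ (B @ (s * a_hat))); b_hat[b_hat == 0] = 1

# the converged estimates identify the correct codewords
assert np.argmax(A @ (s * b_hat)) == ia
assert np.argmax(B @ (s * a_hat)) == ib
```

Because bipolar vectors are their own inverses under the Hadamard product, unbinding is simply another componentwise multiplication, and the cleanup step keeps each estimate near its codebook.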
Information coding by precise timing of spikes can be faster and more energy-efficient than traditional rate coding. However, spike-timing codes are often brittle, which has limited their use in theoretical neuroscience and computing applications. Here, we propose a novel type of attractor neural network in complex state space, and show how it can...
To accommodate structured approaches of neural computation, we propose a class of recurrent neural networks for indexing and storing sequences of symbols or analog data vectors. These networks with randomized input weights and orthogonal recurrent weights implement coding principles previously described in vector symbolic architectures (VSA) and le...
To understand cognitive reasoning in the brain, it has been proposed that symbols and compositions of symbols are represented by activity patterns (vectors) in a large population of neurons. Formal models implementing this idea [Plate 2003], [Kanerva 2009], [Gayler 2003], [Eliasmith 2012] include a reversible superposition operation for representin...
We outline a model of computing with high-dimensional (HD) vectors--where the dimensionality is in the thousands. It is built on ideas from traditional (symbolic) computing and artificial neural nets/deep learning, and complements them with ideas from probability theory, statistics, and abstract algebra. Key properties of HD computing include a wel...
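Two of the key operations of HD computing referred to above, binding by componentwise multiplication and superposition by addition, can be illustrated with a toy key-value record; the role and filler names are hypothetical, and the dimensionality is illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
D = 10_000   # illustrative dimensionality

def rand_vec():
    return rng.choice([-1, 1], size=D)

# hypothetical roles and fillers as random bipolar vectors
color, shape = rand_vec(), rand_vec()
red, circle = rand_vec(), rand_vec()

# bind each role to its filler, then superpose the pairs into one record
record = color * red + shape * circle

# unbinding a role yields a noisy copy of its filler, identifiable by
# inner-product similarity
query = record * color
assert query @ red > query @ circle
```

In high dimensions the cross-terms behave like noise with small inner products, so the correct filler stands out reliably, which is the well-defined algebraic behavior the abstract refers to.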
We propose an approximation of Echo State Networks (ESN) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed integer Echo State Network (intESN) is a vector containing only n-bit integers (where n < 8 is normally sufficient for satisfactory performance). The...
A critical feature of neural networks is that they balance excitation and inhibition to prevent pathological dysfunction. How this is achieved is largely unknown, although deficits in the balance contribute to many neurological disorders. We show here that a microRNA (miR-101) is a key orchestrator of this essential feature, shaping the developing...
Large-scale data collection efforts to map the brain are underway at multiple spatial and temporal scales, but all face fundamental problems posed by high-dimensional data and intersubject variability. Even seemingly simple problems, such as identifying a neuron/brain region across animals/subjects, become exponentially more difficult in high dimen...
Prolonged exposure to abnormally high calcium concentrations is thought to be a core mechanism underlying hippocampal damage in epileptic patients; however, no prior study has characterized calcium activity during seizures in the live, intact hippocampus. We have directly investigated this possibility by combining whole-brain electroencephalographi...
Mean normalized fluorescence intensities during the first identifiable wave were aligned with respect to peak time and plotted across time (red and blue, NMDA treatment; green, PTZ treatment). Same scale as Figure 4F.
Assessment of seizure severity in different groups of mice. A seizure stage 3 or higher (“Materials and methods, Seizure assessment”) was classified as a convulsive motor seizure (CMS). Seizure latency was defined as the time period (in minutes) required to reach CMS. No significant differences were found between mice implanted with telemetric devi...
Overview of Stereotypical Epileptiform Motifs. Videos of the epileptiform motifs are presented in real-time. Changes in calcium sensor fluorescence are presented on a log-contrast scale in order to visualize the changes within individual cells during the flashing and wave epileptiform patterns, which differed in intensity by orders of magnitude. Co...
The Imaging Computational Microscope (ICM) is a suite of computational tools for automated analysis of functional imaging data that runs under the cross-platform MATLAB environment (The Mathworks, Inc.). ICM uses a semi-supervised framework, in which at every stage of analysis computers handle the routine work, which is then refined by user interve...
We introduce and study methods for inferring and learning from correspondences among neurons. The approach enables alignment of data from distinct multiunit studies of nervous systems. We show that the methods for inferring correspondences combine data effectively from cross-animal studies to make joint inferences about behavioral decision making t...
VoltageFluor (VF) dyes have the potential to optically measure voltage in excitable membranes with the combination of high spatial and temporal resolution essential to better characterize the voltage dynamics of large groups of excitable cells. VF dyes sense voltage with high speed and sensitivity using photoinduced electron transfer (PeT) through...
Definition: Population coding is a computational theory postulating that information is represented and processed by a large number of neurons. In such a coding scheme, each neuron on its own encodes only a small amount of the information that is distributed across the population. Population coding provides robustness, because even if individual neur...
Two recent studies describe mechanisms by which sexually dimorphic responses to pheromones in the nematode worm Caenorhabditis elegans are driven by differences in the balance of neural circuits that control attraction and repulsion behaviors.
In the leech, we can observe several behaviors – swimming, crawling, shortening, and local-bending – while imaging neuronal activity with voltage-sensitive dyes (VSD) [1]. To understand the underlying neural mechanisms of these behavioral pattern generators, we must understand the functional properties of the neurons and the connectivity between ne...
Fluorescence imaging is an attractive method for monitoring neuronal activity. A key challenge for optically monitoring voltage is development of sensors that can give large and fast responses to changes in transmembrane potential. We now present fluorescent sensors that detect voltage changes in neurons by modulation of photo-induced electron tran...
Previous studies of eye gaze have shown that when looking at images containing human faces, observers tend to rapidly focus on the facial regions. But is this true of other high-level image features as well? We here investigate the extent to which natural scenes containing faces, text elements, and cell phones (as a suitable control) attract attentio...
Under natural viewing conditions, human observers use shifts in gaze to allocate processing resources to subsets of the visual input. There are many computational models that try to predict these shifts in eye movement and attention. Although the important role of high-level stimulus properties (e.g., semantic information) stands undisputed, most...
Models of attention are typically based on difference maps in low-level features but neglect higher order stimulus structure. To what extent does higher order statistics affect human attention in natural stimuli? We recorded eye movements while observers viewed unmodified and modified images of natural scenes. Modifications included contrast modula...