
Hananel Hazan, PhD
Tufts University
About
47 Publications
22,865 Reads
446 Citations
Introduction
I am an interdisciplinary computer scientist specializing in biologically inspired computing and neurocomputation, with a strong background in machine learning. In my research, I focus on the computational properties of neuronal and non-neuronal systems. I strive to understand fundamental cognitive functions not only from the perspective of computer science and machine learning but also from the perspectives of biology, neurobiology, and psychology.
Additional affiliations
September 2019 - present
Education
October 2007 - October 2013
October 2004 - October 2007
October 2000 - October 2002
Publications (47)
Standard methods for the analysis of functional MRI data rely strongly on implicit and explicit prior hypotheses made to simplify the analysis. In this work, attention is focused on two such commonly accepted hypotheses: (i) the hemodynamic response function (HRF) to be searched in the BOLD signal can be described by a specific parametric model...
Neocortical structures typically only support slow acquisition of declarative memory; however, learning through fast mapping may facilitate rapid learning-induced cortical plasticity and hippocampal-independent integration of novel associations into existing semantic networks. During fast mapping the meaning of new words and concepts is inferred, a...
The Liquid State Machine (LSM) is a method of computing with temporal neurons that, among other things, can classify intrinsically temporal data directly, unlike standard artificial neural networks. It has also been put forward as a natural model of certain kinds of brain functions. There are two results in this paper: (1) We show t...
The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, software abstraction levels, and hardware devices, yet are typically not suitable for rapid...
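As background for the simulation frameworks discussed above, here is a minimal sketch (not taken from the framework itself) of the kind of dynamics such simulators integrate: a single leaky integrate-and-fire neuron stepped with forward Euler. All parameter values are illustrative.

# Minimal sketch: Euler integration of a leaky integrate-and-fire (LIF) neuron,
# the kind of model an SNN simulator steps. Parameters are illustrative.
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_reset=-65.0, v_thresh=-52.0):
    """Return the membrane voltage trace and spike times for one LIF neuron."""
    v = v_rest
    voltages, spikes = [], []
    for t, i_in in enumerate(input_current):
        # dv/dt = (-(v - v_rest) + i_in) / tau, integrated with forward Euler.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:          # threshold crossing -> spike and reset
            spikes.append(t * dt)
            v = v_reset
        voltages.append(v)
    return np.array(voltages), spikes

# Example: a constant input of 20.0 drives regular spiking.
volts, spike_times = simulate_lif(np.full(200, 20.0))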
Recent models of spiking neuronal networks have been trained to perform behaviors in static environments using a variety of learning rules, with varying degrees of biological realism. Most of these models have not been tested in dynamic visual environments where models must make predictions on future states and adjust their behavior accordingly. Th...
A common view in the neuroscience community is that memory is encoded in the connection strength between neurons. This perception led artificial neural network models to focus on connection weights as the key variables to modulate learning. In this paper, we present a prototype for weightless spiking neural networks that can perform a simple classi...
Biological learning operates at multiple interlocking timescales, from long evolutionary stretches down to the relatively short time span of an individual’s life. While each process has been simulated individually as a basic learning algorithm in the context of spiking neuronal networks (SNNs), the integration of the two has remained limited. In th...
Spiking neural networks (SNNs) with a lattice architecture are introduced in this work, combining several desirable properties of SNNs and self-organized maps (SOMs). Networks are trained with biologically motivated, unsupervised learning rules to obtain a self-organized grid of filters via cooperative and competitive excitatory-inhibitory interact...
Excitability—a threshold-governed transient in transmembrane voltage—is a fundamental physiological process that controls the function of cardiac, endocrine, muscle, and neuronal tissues. The explicit formulation given by Hodgkin and Huxley in the 1950s provides a mathematical framework for understanding excitability as the consequence of the properties of vol...
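For reference, the Hodgkin-Huxley formulation referred to here is the standard membrane equation with voltage-dependent gating variables:

\begin{aligned}
C_m \frac{dV}{dt} &= I_{\text{ext}} - \bar{g}_{\mathrm{Na}}\, m^3 h\, (V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4 (V - E_{\mathrm{K}}) - g_L (V - E_L), \\
\frac{dx}{dt} &= \alpha_x(V)\,(1 - x) - \beta_x(V)\, x, \qquad x \in \{m, h, n\},
\end{aligned}

where m, h, and n gate the sodium and potassium conductances and the alpha/beta terms are the empirically fitted voltage-dependent rate functions.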
Human beings are a rich source of information. Just by looking at someone we are able to quickly assess who this person probably is and to distinguish her from others. We can determine, more or less precisely, age, gender, and ethnic origin based on the physical appearance of the body. We can also make guesses about a person's social status or her pr...
Neuroscientific theory suggests that dopaminergic neurons broadcast global reward prediction errors to large areas of the brain influencing the synaptic plasticity of the neurons in those regions. We build on this theory to propose a multi-agent learning framework with spiking neurons in the generalized linear model (GLM) formulation as agents, to...
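A minimal sketch of the general idea of reward-modulated (three-factor) plasticity, assuming a simple eligibility-trace formulation rather than the paper's GLM agents; all names and constants below are illustrative.

# Generic sketch of reward-modulated plasticity: pre/post coincidences are
# accumulated in an eligibility trace and converted into weight changes only
# when a global reward prediction error (delta) is broadcast.
import numpy as np

def rm_step(w, eligibility, pre_spikes, post_spikes, reward, reward_baseline,
            lr=0.01, trace_decay=0.9):
    """One update of weights w (post x pre) under a reward-modulated rule."""
    # Decaying eligibility trace of recent pre/post spike coincidences.
    eligibility = trace_decay * eligibility + np.outer(post_spikes, pre_spikes)
    # Global modulatory signal: reward prediction error.
    delta = reward - reward_baseline
    # Weights change only when the modulatory signal is non-zero.
    w = w + lr * delta * eligibility
    return w, eligibility

# Example with 3 postsynaptic and 4 presynaptic units.
rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(3, 4))
elig = np.zeros_like(w)
w, elig = rm_step(w, elig, pre_spikes=np.array([1, 0, 1, 0]),
                  post_spikes=np.array([0, 1, 1]), reward=1.0, reward_baseline=0.2)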
Excitability - a threshold-governed transient in transmembrane voltage - is a fundamental physiological process that controls the function of cardiac, endocrine, muscle, and neuronal tissues. The explicit formulation given by Hodgkin and Huxley in the 1950s provides a mathematical framework for understanding excitability as the consequence of the properties of...
Deep Reinforcement Learning (RL) demonstrates excellent performance on tasks that can be solved by a trained policy. It plays a dominant role among cutting-edge machine learning approaches that use multi-layer neural networks (NNs). At the same time, Deep RL suffers from high sensitivity to noisy, incomplete, and misleading input data. Following biologi...
In recent years, spiking neural networks (SNNs) have demonstrated great success in completing various machine learning tasks. We introduce a method for learning image features with locally connected layers in SNNs using a spike-timing-dependent plasticity (STDP) rule. In our approach, sub-networks compete via inhibitory interactions to learn featur...
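A minimal sketch of a pair-based STDP update using exponential pre/post traces; this is the generic form of the rule, not necessarily the exact variant or locally connected wiring used in the paper, and all constants are illustrative.

# Minimal pair-based STDP sketch using exponentially decaying spike traces.
import numpy as np

def stdp_step(w, x_pre, x_post, pre_spikes, post_spikes,
              a_plus=0.01, a_minus=0.012, tau=20.0, dt=1.0, w_max=1.0):
    """Update synaptic traces and weights (post x pre) for one time step."""
    # Traces of recent pre- and postsynaptic spikes.
    x_pre = x_pre * np.exp(-dt / tau) + pre_spikes
    x_post = x_post * np.exp(-dt / tau) + post_spikes
    # Potentiate when a postsynaptic spike follows recent presynaptic activity,
    # depress when a presynaptic spike follows recent postsynaptic activity.
    dw = a_plus * np.outer(post_spikes, x_pre) - a_minus * np.outer(x_post, pre_spikes)
    return np.clip(w + dw, 0.0, w_max), x_pre, x_post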
In recent years, Spiking Neural Networks (SNNs) have demonstrated great success in completing various Machine Learning tasks. We introduce a method for learning image features by locally connected layers in SNNs using a spike-timing-dependent plasticity (STDP) rule. In our approach, sub-networks compete via inhibitory interacti...
Various implementations of Deep Reinforcement Learning (RL) have demonstrated excellent performance on tasks that can be solved by a trained policy, but they are not without drawbacks. Deep RL suffers from high sensitivity to noisy and missing input and to adversarial attacks. To mitigate these deficiencies of deep RL solutions, we suggest involving spiking...
We present a system comprising a hybridization of self-organized map (SOM) properties with spiking neural networks (SNNs) that retain many of the features of SOMs. Networks are trained in an unsupervised manner to learn a self-organized lattice of filters via excitatory-inhibitory interactions among populations of neurons. We develop and test vario...
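One ingredient of this SOM-like competition can be sketched as a distance-dependent lateral-inhibition kernel on a 2D neuron grid, so that nearby units cooperate and distant units compete; the kernel and parameters below are illustrative, not the paper's model.

# Illustrative ingredient of a SOM-like SNN: lateral inhibition that grows with
# grid distance, so nearby units cooperate and distant units compete.
import numpy as np

def lateral_inhibition(grid_size=10, sigma=2.0, w_inh_max=1.0):
    """Return an (N x N) inhibitory weight matrix for an N = grid_size**2 grid."""
    coords = np.array([(i, j) for i in range(grid_size) for j in range(grid_size)])
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    # Weak inhibition between neighbours, approaching -w_inh_max far away.
    w = -w_inh_max * (1.0 - np.exp(-d2 / (2.0 * sigma ** 2)))
    np.fill_diagonal(w, 0.0)  # no self-inhibition
    return w

w_inh = lateral_inhibition()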
Biometric data are typically used for the purpose of uniquely identifying a person. However, recent research suggests that biometric data gathered for the purpose of identification can be analysed to extract additional information. This augments the indicative value of biometric data. This paper illustrates the range of augmented indicative...
There is a growing need for multichannel electrophysiological systems that record from and interact with neuronal systems in near real-time. Such systems are needed, for example, for closed-loop, multichannel electrophysiological/optogenetic experimentation in vivo and in a variety of other neuronal preparations, or for developing and testing neuro-p...
Hybrid IT systems with biological brains (hybrots) enhance our understanding of brain functioning. However, given their specific form of existence and their ability to act autonomously to a certain degree, they raise questions regarding attributing liability in case they cause damage. The aim of this paper is to suggest a scheme for attributing lia...
This work uses supervised machine learning methods on fMRI brain scans, taken during a memory-retrieval task, to support establishing the existence of two distinct systems for human declarative memory (“Explicit Encoding” (EE) and “Fast Mapping” (FM)). The importance of using retrieval is that it allows a direct comparison between exemplars...
This work uses supervised machine learning methods over fMRI brain scans to establish the existence of two different encoding procedures for human declarative memory. Declarative knowledge refers to the memory for facts and events and initially depends on the hippocampus. Recent studies which used patients with hippocampal lesions and neuroimaging...
This experiment was designed to see whether information related to linguistic characteristics of read text can be deduced from fMRI data via machine learning techniques. Individuals were scanned while reading aloud text the size of single words. Three experiments were performed, corresponding to different degrees of grammatical complexity that is perf...
The human voice signal carries much information in addition to direct linguistic semantic information. This information can be perceived by computational systems. In this work, we show that early diagnosis of Parkinson's disease is possible solely from the voice signal. This is in contrast to earlier work in which we showed that this can be done us...
Classifying human production of phonemes without additional encoding is accomplished at a level of about 77% using a version of reservoir computing. So far this has been accomplished with: (1) artificial data; (2) artificial noise (designed to mimic natural noise); (3) natural human data with artificial noise; (4) natural human data with its natural...
The current state of modeling artificial neurons and networks poses a significant problem of incorporating a concept of time into the machine learning infrastructure. At present, the concept of time is encoded by transforming other values, such as space, color, and depth. However, this approach does not seem to correctly reflect the actual functioni...
We show that real-valued continuous functions can be recognized in a reliable way, with good generalization ability, using an adapted version of the Liquid State Machine (LSM) that receives direct real-valued input. Furthermore, this system works without preliminary extraction of signal-processing features. This avoids the necessity...
Using two distinct data sets (from the USA and Germany) of healthy controls and patients with early or mild stages of Parkinson's disease, we show that machine learning tools can be used for the early diagnosis of Parkinson's disease from speech data. This could potentially be applicable before physical symptoms appear. In addition, we show that wh...
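A hedged sketch of this kind of pipeline, assuming a precomputed feature matrix X of per-recording voice measurements (e.g., jitter, shimmer) and binary labels y; the actual features, classifier, and validation scheme used in the papers are not reproduced here.

# Sketch of a voice-based classification pipeline on placeholder data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 22))          # placeholder voice features
y = rng.integers(0, 2, size=120)        # placeholder PD / control labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)   # cross-validated accuracy
print(scores.mean())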
This work proposes a model-free approach to fMRI-based brain mapping where the BOLD response is learnt from data rather than assumed in advance. For each voxel, a paired sequence of stimuli and fMRI recording is given to a supervised learning process. The result is a voxel-wise model of the expected BOLD response related to a set of stimuli. Differ...
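A sketch in the spirit of the model-free approach described above (not the paper's exact method): for each voxel, a linear finite impulse response from a lagged stimulus sequence to the BOLD time series is learned with ridge regression, rather than assuming a parametric HRF. Function names and parameters are illustrative.

# Learn a voxel-wise BOLD response from data instead of assuming a fixed HRF.
import numpy as np
from sklearn.linear_model import Ridge

def lagged_design(stimulus, n_lags=12):
    """Stack delayed copies of the stimulus so the fit recovers an HRF shape."""
    T = len(stimulus)
    return np.column_stack([np.r_[np.zeros(k), stimulus[:T - k]] for k in range(n_lags)])

def fit_voxel_hrfs(stimulus, bold, n_lags=12, alpha=1.0):
    """bold: (T x n_voxels) array; returns (n_voxels x n_lags) learned responses."""
    X = lagged_design(np.asarray(stimulus, dtype=float), n_lags)
    model = Ridge(alpha=alpha).fit(X, bold)
    return model.coef_            # one learned impulse response per voxel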
Recently Jaeger and others have put forth the paradigm of "reservoir computing" as a way of computing with highly recurrent neural networks. The reservoir is a collection of neurons randomly connected to each other with fixed weights. Amongst other things, it has been shown to be effective in temporal pattern recognition, and has been held as a mo...
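A minimal rate-based echo state network, the canonical reservoir-computing setup: a fixed, randomly connected recurrent layer whose states feed a linear readout trained by least squares. The sizes, scaling, and toy task below are illustrative, not taken from the paper.

# Minimal echo state network: fixed random reservoir, trained linear readout.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))       # fixed input weights
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))          # spectral radius < 1

def run_reservoir(inputs, leak=0.3):
    """Collect reservoir states for an input sequence of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Readout trained by least squares on the collected states.
u_seq = rng.normal(size=(500, n_in))
target = np.roll(u_seq[:, 0], 1)                  # toy task: recall previous input
S = run_reservoir(u_seq)
w_out, *_ = np.linalg.lstsq(S, target, rcond=None)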
A computational model for reading that takes into account the different processing abilities of the two cerebral hemispheres is presented. This dual hemispheric reading model closely follows the original computational lines due to Kawamoto (J Mem Lang 32:474–516, 1993) but postulates a difference in architecture between the right and left hemispher...
Remark to the referees and program committee: the results here are extremely recent, and so the paper is not as polished for style as we would normally wish. Nonetheless, we think this result is potentially quite important and so we are sending it off as it is, because we would very much like to present it initially at BISFAI. Of course, we expect to...
It is well known that the brain (especially the cortex) is structurally separable into two hemispheres. Many neuropsychological studies show that the process of ambiguity resolution requires the intact functioning of both cerebral hemispheres. Moreover, these studies suggest that while the Left Hemisphere (LH) quickly selects one alternative, the R...
Neuropsychological studies have shown that both cerebral hemispheres process orthographic, phonological and semantic aspects of written words, albeit in different ways. The Left Hemisphere (LH) is more influenced by the phonological aspect of written words whereas lexical processing in the Right Hemisphere (RH) is more sensitive to visual form. We...
Reading is a complex and highly skilled act that requires different sources of information (e.g., phonological, lexical and contextual). Despite extensive study in recent years, how and when each type of information is utilized is still controversial and not fully explainable. Research shows that whereas both cerebral hemispheres participate in wor...