Ido Kanter
Bar Ilan University | BIU · Department of Physics and Brain Science Unit

PhD

About

264 Publications
30,009 Reads
7,384 Citations
Citations since 2017: 2,339 (44 research items)
[Chart: citations per year, 2017–2023]
Additional affiliations
January 1989 - present
Bar Ilan University
Position
  • Professor (Full)

Publications

Publications (264)
Article
Full-text available
Learning classification tasks of $(2^n \times 2^n)$ inputs typically consists of $\le n$ $(2 \times 2)$ max-pooling (MP) operators along the entire feedforward deep architecture. Here we show, using the CIFAR-10 database, that pooling decisions adjacent to the last convolutional layer significantly enhance accuracy success rates (SRs)...
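A minimal PyTorch sketch of the idea described above, with all max-pooling placed adjacent to the last convolutional layer instead of interleaved along the architecture; the layer widths and depth are invented for the illustration and are not the paper's architecture.

```python
import torch
import torch.nn as nn

class LatePoolingNet(nn.Module):
    """Toy CIFAR-10 net: the pooling decision is made next to the last conv layer."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),     # 32x32 feature maps
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),   # still 32x32, no pooling
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),  # last convolutional layer
            nn.MaxPool2d(4),                               # pooling adjacent to it: 8x8
        )
        self.classifier = nn.Linear(256 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = LatePoolingNet()
print(model(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
```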
Preprint
Full-text available
We study bi-directional associative neural networks that, exposed to noisy examples of an extensive number of random archetypes, learn the latter (with or without the presence of a teacher) when the supplied information is enough: in this setting, learning is heteroassociative -- involving couples of patterns -- and it is achieved by reverberating...
Preprint
Full-text available
Deep architectures consist of tens or hundreds of convolutional layers (CLs) that terminate with a few fully connected (FC) layers and an output layer representing the possible labels of a complex classification task. According to the existing deep learning (DL) rationale, the first CL reveals localized features from the raw data, whereas the subse...
Article
Full-text available
The realization of complex classification tasks requires training of deep learning (DL) architectures consisting of tens or even hundreds of convolutional and fully connected hidden layers, which is far from the reality of the human brain. According to the DL rationale, the first convolutional layer reveals localized patterns in the input and large...
Preprint
Full-text available
Learning classification tasks of $(2^n \times 2^n)$ inputs typically consists of $\le n$ $(2 \times 2)$ max-pooling (MP) operators along the entire feedforward deep architecture. Here we show, using the CIFAR-10 database, that pooling decisions adjacent to the last convolutional layer significantly enhance accuracy success rates (SRs). In particular, average SRs of the...
Article
Full-text available
Advanced deep learning architectures consist of tens of fully connected and convolutional hidden layers, currently extended to hundreds, and are far from their biological realization. Their implausible biological dynamics relies on changing a weight in a non-local manner, as the number of routes between an output unit and a weight is typically large, u...
Article
Full-text available
In the neural networks literature, "Hebbian learning" traditionally refers to the procedure by which the Hopfield model and its generalizations "store" archetypes (i.e., definite patterns that are experienced just once to form the synaptic matrix). However, the term "learning" in Machine Learning refers to the ability of the machine to extract feature...
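The distinction the abstract draws can be made concrete in a few lines of NumPy; a hedged toy (not the paper's model) contrasting one-shot Hebbian storage of the archetypes with learning the same kind of synaptic matrix from noisy examples only. All sizes and the noise level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, M, flip = 200, 3, 50, 0.15  # neurons, archetypes, examples per archetype, noise

archetypes = rng.choice([-1, 1], size=(P, N))

# "storage": one-shot Hebbian synaptic matrix built from the archetypes themselves
J_store = (archetypes.T @ archetypes) / N
np.fill_diagonal(J_store, 0)

# "learning": the machine sees only noisy examples of each archetype
noisy = np.array([a * np.where(rng.random((M, N)) < flip, -1, 1) for a in archetypes])
J_learn = sum(ex.T @ ex for ex in noisy) / (N * M)
np.fill_diagonal(J_learn, 0)

def recall(J, s, steps=20):
    for _ in range(steps):
        s = np.where(J @ s >= 0, 1, -1)  # synchronous sign dynamics
    return s

probe = archetypes[0] * np.where(rng.random(N) < flip, -1, 1)  # corrupted cue
for name, J in [("storage", J_store), ("learning", J_learn)]:
    print(name, "overlap with archetype:", recall(J, probe) @ archetypes[0] / N)
```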
Preprint
Full-text available
Advanced deep learning architectures consist of tens of fully connected and convolutional hidden layers, which are already extended to hundreds, and are far from their biological realization. Their implausible biological dynamics is based on changing a weight in a non-local manner, as the number of routes between an output unit and a weight is typi...
Preprint
Full-text available
Power-law scaling, a central concept in critical phenomena, is found to be useful in deep learning, where optimized test errors on handwritten digit examples converge as a power-law to zero with database size. For rapid decision making with one training epoch, where each example is presented only once to the trained network, the power-law exponent increa...
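The power-law convergence can be estimated with a straight-line fit in log-log coordinates; a hedged sketch with made-up error values standing in for measured test errors (the numbers and the resulting exponent are not results from the paper).

```python
import numpy as np

# database sizes and hypothetical test errors, err ~ c * D**(-rho)
D = np.array([1_000, 2_000, 4_000, 8_000, 16_000, 32_000, 60_000])
err = np.array([0.110, 0.082, 0.061, 0.046, 0.034, 0.026, 0.021])

slope, intercept = np.polyfit(np.log(D), np.log(err), 1)  # linear in log-log
print(f"estimated exponent rho = {-slope:.2f}, prefactor c = {np.exp(intercept):.2f}")
```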
Preprint
Full-text available
The realization of complex classification tasks requires training of deep learning (DL) architectures consisting of tens or even hundreds of convolutional and fully connected hidden layers, which is far from the reality of the human brain. According to the DL rationale, the first convolutional layer reveals localized patterns in the input and large...
Article
Full-text available
Real-time sequence identification is a core use-case of artificial neural networks (ANNs), ranging from recognizing temporal events to identifying verification codes. Existing methods apply recurrent neural networks, which suffer from training difficulties; however, performing this function without feedback loops remains a challenge. Here, we prese...
Article
Full-text available
Synaptic plasticity is a long-lasting core hypothesis of brain learning that suggests local adaptation between two connecting neurons and forms the foundation of machine learning. The main complexity of synaptic plasticity is that synapses and dendrites connect neurons in series and existing experiments cannot pinpoint the significant imprinted ada...
Preprint
Full-text available
The gap between the huge volumes of data needed to train artificial neural networks and the relatively small amount of data needed by their biological counterparts is a central puzzle in machine learning. Here, inspired by biological information-processing, we introduce a generalized Hopfield network where pairwise couplings between neurons are bui...
Preprint
Real-time sequence identification is a core use-case of artificial neural networks (ANNs), ranging from recognizing temporal events to identifying verification codes. Existing methods apply recurrent neural networks, which suffer from training difficulties; however, performing this function without feedback loops remains a challenge. Here, we prese...
Preprint
Full-text available
In the neural networks literature, "Hebbian learning" traditionally refers to the procedure by which the Hopfield model and its generalizations "store" archetypes (i.e., definite patterns that are experienced just once to form the synaptic matrix). However, the term "learning" in Machine Learning refers to the ability of the machine to ext...
Article
Full-text available
Refractoriness is a fundamental property of excitable elements, such as neurons, indicating the probability for re-excitation in a given time lag, and is typically linked to the neuronal hyperpolarization following an evoked spike. Here we measured the refractory periods (RPs) in neuronal cultures and observed that an average anisotropic absolute R...
Preprint
Full-text available
Refractoriness is a fundamental property of excitable elements, such as neurons, indicating the probability for re-excitation in a given time-lag, and is typically linked to the neuronal hyperpolarization following an evoked spike. Here we measured the refractory periods (RPs) in neuronal cultures and observed that anisotropic absolute RPs could ex...
Preprint
Full-text available
Refractory periods are an unavoidable feature of excitable elements, resulting in necessary time-lags for re-excitation. Herein, we measure neuronal absolute refractory periods (ARPs) in synaptically blocked neuronal cultures. In so doing, we show that their duration can be significantly extended by dozens of milliseconds using preceding evoked spikes...
Article
Full-text available
Power-law scaling, a central concept in critical phenomena, is found to be useful in deep learning, where optimized test errors on handwritten digit examples converge as a power-law to zero with database size. For rapid decision making with one training epoch, where each example is presented only once to the trained network, the power-law exponent increa...
Article
Full-text available
An amendment to this paper has been published and can be accessed via a link at the top of the paper.
Article
Full-text available
Attempting to imitate the brain’s functionalities, researchers have bridged between neuroscience and artificial intelligence for decades; however, experimental neuroscience has not directly advanced the field of machine learning (ML). Here, using neuronal cultures, we demonstrate that increased training frequency accelerates the neuronal adaptation...
Preprint
Full-text available
Attempting to imitate the brain's functionalities, researchers have bridged between neuroscience and artificial intelligence for decades; however, experimental neuroscience has not directly advanced the field of machine learning. Here, using neuronal cultures, we demonstrate that increased training frequency accelerates the neuronal adaptation proces...
Preprint
Full-text available
Ultrafast physical random bit generation at hundreds of Gb/s rates, with verified randomness, is a crucial ingredient in secure communication and has recently emerged using optics-based physical systems. Here we examine the inverse problem and measure the ratio of information bits that can be systematically embedded in a random bit sequence withou...
Article
Full-text available
Recently, deep learning algorithms have outperformed human experts in various tasks across several domains; however, their characteristics are distant from current knowledge of neuroscience. The simulation results of biological learning algorithms presented herein outperform state-of-the-art optimal learning curves in supervised learning of feedfor...
Article
Full-text available
Synchronization of coupled oscillators at the transition between classical physics and quantum physics has become an emerging research topic at the crossroads of nonlinear dynamics and nanophotonics. We study this unexplored field by using quantum dot microlasers as optical oscillators. Operating in the regime of cavity quantum electrodynamics (cQE...
Article
Full-text available
Experimental evidence recently indicated that neural networks can learn in a different manner than was previously assumed, using adaptive nodes instead of adaptive links. Consequently, links to a node undergo the same adaptation, resulting in cooperative nonlinear dynamics with oscillating effective link weights. Here we show that the biological re...
Article
Full-text available
Introduction: rTMS has been proven effective in the treatment of neuropsychiatric conditions, with class A (definite efficacy) evidence for treatment of depression and pain (Lefaucheur et al., 2014). The efficacy in stimulation protocols is, however, quite heterogeneous. Saturation of neuronal firing by HFrTMS without allowing time for recovery may...
Article
Full-text available
Physical models typically assume time-independent interactions, whereas neural networks and machine learning incorporate interactions that function as adjustable parameters. Here we demonstrate a new type of abundant cooperative nonlinear dynamics where learning is attributed solely to the nodes, instead of the network links, whose number is s...
Article
Full-text available
Neurons are the computational elements that compose the brain and their fundamental principles of activity have been known for decades. According to the long-lasting computational scheme, each neuron sums the incoming electrical signals via its dendrites and when the membrane potential reaches a certain threshold the neuron typically generates a spike to...
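For concreteness, the long-lasting computational scheme referred to above (dendritic summation followed by a threshold) fits in a few lines; a minimal sketch with arbitrary weights and threshold, illustrating the classical view rather than the paper's findings.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.2, size=100)  # synaptic strengths of 100 inputs
spikes = rng.integers(0, 2, size=100)     # momentary presynaptic activity (0/1)

potential = weights @ spikes              # summation of incoming signals
threshold = 1.0                           # hypothetical firing threshold
print("membrane potential:", round(float(potential), 3),
      "-> spike" if potential > threshold else "-> no spike")
```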
Article
Introduction: Repetitive transcranial magnetic stimulation (rTMS) has been found to be a promising noninvasive therapeutic tool for a variety of neuropsychiatric conditions. The therapeutic utility of rTMS has been classified as having class A evidence for treatment of depression and chronic pain. In different stimulation protocols, intervals have been introduc...
Article
Full-text available
We present an analytical framework that allows the quantitative study of statistical dynamic properties of networks with adaptive nodes that have memory and is used to examine the emergence of oscillations in networks with response failures. The frequency of the oscillations was quantitatively found to increase with the excitability of the nodes an...
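A hedged mean-field caricature of the mechanism named above (an invented toy, not the paper's analytical framework): purely excitatory nodes whose probability of response failure grows with their recent activity, which is enough to make the population rate oscillate. All constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, gain = 1_000, 300, 20.0
memory = np.zeros(N)              # per-node trace of recent activity
active = rng.random(N) < 0.05     # initial firing pattern
rate = []
for _ in range(T):
    drive = gain * active.mean() + 0.02  # recurrent excitation + spontaneous input
    p_fail = np.clip(memory, 0.0, 1.0)   # response-failure probability grows with the trace
    active = rng.random(N) < min(drive, 1.0) * (1.0 - p_fail)
    memory = 0.8 * memory + active       # excitability recovers as the trace decays
    rate.append(active.mean())
print("population rate, last 20 steps:", np.round(rate[-20:], 2))
```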
Article
Neural networks are composed of neurons and synapses, which are responsible for learning in a slow adaptive dynamical process. Here we experimentally show that neurons act like independent anisotropic multiplex hubs, which relay and mute incoming signals following their input directions. Theoretically, the observed information routing enriches the...
Article
The neuronal response to controlled stimulations in vivo has been classically estimated using a limited number of events. Here we show that hours of high-frequency stimulations and recordings of neurons in vivo reveal previously unknown response phases of neurons in the intact brain. Results indicate that for stimulation frequencies below a critica...
Article
Full-text available
The increasing number of recording electrodes enhances the capability of capturing the network’s cooperative activity; however, using too many monitors might alter the properties of the measured neural network and induce noise. Using a technique that merges simultaneous multi-patch-clamp and multi-electrode array recordings of neural networks in-vi...
Article
Full-text available
Catastrophic failures are complete and sudden collapses in the activity of large networks such as economic systems, electrical power grids and computer networks, which typically require a manual recovery process. Here we experimentally show that excitatory neural networks are governed by a non-Poissonian reoccurrence of catastrophic failures, where their...
Article
Full-text available
The experimental study of neural networks requires simultaneous measurements of a massive number of neurons, while monitoring properties of the connectivity, synaptic strengths and delays. Current technological barriers make such a mission unachievable. In addition, as a result of the enormous number of required measurements, the estimated network...
Article
Full-text available
Chaos synchronization has been demonstrated as a useful building block for various tasks in secure communications, including a source of all-electronic ultrafast physical random number generators based on room temperature spontaneous chaotic oscillations in a DC-biased weakly coupled GaAs/Al0.45Ga0.55As semiconductor superlattice (SSL). Here, we ex...
Article
Full-text available
Broadband spontaneous macroscopic neural oscillations are rhythmic cortical firings that were extensively examined during the last century; however, their possible origin is still controversial. In this work we show how macroscopic oscillations emerge in solely excitatory random networks and without topological constraints. We experimentally a...
Article
Full-text available
Realizations of low firing rates in neural networks usually require globally balanced distributions among excitatory and inhibitory links, while feasibility of temporal coding is limited by neuronal millisecond precision. We experimentally demonstrate µs precision of neuronal response timings under low stimulation frequencies, whereas moderate...
Article
Periodic synchronization of activity among neuronal pools has been related to substantial neural processes and information throughput in the neocortical network. However, the mechanisms of generating such periodic synchronization among distributed pools of neurons remain unclear. We hypothesize that to a large extent there is interplay between the...
Article
Full-text available
Consistency and predictability of brain functionalities depend on the reproducible activity of a single neuron. We identify a reproducible non-chaotic neuronal phase where deviations between concave response latency profiles of a single neuron do not increase with the number of stimulations. A chaotic neuronal phase emerges at a transition to conve...
Article
Full-text available
In 1943 McCulloch and Pitts suggested that the brain is composed of reliable logic-gates similar to the logic at the core of today's computers. This framework had a limited impact on neuroscience, since neurons exhibit far richer dynamics. Here we propose a new experimentally corroborated paradigm in which the truth tables of the brain's logic-gate...
Article
Full-text available
We experimentally show that the neuron functions as a precise time integrator, where the accumulated changes in neuronal response latencies, under complex and random stimulation patterns, are solely a function of a global quantity, the average time lag between stimulations. In contrast, momentary leaps in the neuronal response latency follow trends...
Article
Full-text available
A classical view of neural coding relies on temporal firing synchrony among functional groups of neurons; however, the underlying mechanism remains an enigma. Here we experimentally demonstrate a mechanism where time-lags among neuronal spiking leap from several tens of milliseconds to nearly zero-lag synchrony. It also allows sudden leaps out of s...
Article
Full-text available
The synchronization of chaotic lasers and the optical phase synchronization of light originating in multiple coupled lasers have both been extensively studied. However, the interplay between these two phenomena, especially at the network level, is unexplored. Here, we experimentally compare these phenomena by controlling the heterogeneity of the co...
Article
Full-text available
An all-electronic physical random number generator at rates up to 80 Gbit/s is presented, based on weakly coupled GaAs/Ga0.55Al0.45As superlattices operated at room temperature. It is based on large-amplitude chaotic current oscillations that are characterized by a bandwidth of several hundred MHz and do not require external feedback or conversion to...
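A hedged sketch of the generic bit-extraction step behind such physical generators (digitize the chaotic waveform and keep least-significant bits). The logistic map below is only a software stand-in for the superlattice's chaotic current oscillations, and the 8-bit quantization is an assumption of the example.

```python
import numpy as np

# stand-in for the chaotic current oscillations: a logistic-map time series
x = np.empty(100_000)
x[0] = 0.123
for i in range(1, x.size):
    x[i] = 3.99 * x[i - 1] * (1.0 - x[i - 1])

samples = (x * 255).astype(np.uint8)     # 8-bit "ADC" quantization of the waveform
bits = samples & 1                       # keep only the least-significant bit
print("fraction of ones:", bits.mean())  # unbiased bits give ~0.5
```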
Article
Full-text available
Nonlinear networks with time-delayed couplings may show strong and weak chaos, depending on the scaling of their Lyapunov exponent with the delay time. We study strong and weak chaos for semiconductor lasers, either with time-delayed self-feedback or for small networks. We examine the dependence on the pump current and consider the question of whet...
Article
Full-text available
We propose a new experimentally corroborated paradigm in which the functionality of the brain's logic-gates depends on the history of their activity, e.g. an OR-gate that turns into a XOR-gate over time. Our results are based on an experimental procedure where conditioned stimulations were enforced on circuits of neurons embedded within a large-sca...