David Beniaguev’s research while affiliated with Hebrew University of Jerusalem and other places

Publications (6)


Figure 1. Steps in quantifying the functional complexity of neurons.
Figure 2. Human cortical pyramidal neurons are more functionally complex compared …
Figure 4. Correspondence between various synaptic features and the functional synaptic parameters …
What makes human cortical pyramidal neurons functionally complex
  • Preprint
  • File available

December 2024 · 110 Reads · Daniela Yoeli · David Beniaguev · [...]

Humans exhibit unique cognitive abilities within the animal kingdom, but the neural mechanisms driving these advanced capabilities remain poorly understood. Human cortical neurons differ from those of other species, such as rodents, in both their morphological and physiological characteristics. Could the distinct properties of human cortical neurons help explain the superior cognitive capabilities of humans? Understanding this relationship requires a metric to quantify how neuronal properties contribute to the functional complexity of single neurons, yet no such standardized measure currently exists. Here, we propose the Functional Complexity Index (FCI), a generalized, deep learning-based framework to assess the input-output complexity of neurons. By comparing the FCI of cortical pyramidal neurons from different layers in rats and humans, we identified key morpho-electrical factors that underlie functional complexity. Human cortical pyramidal neurons were found to be significantly more functionally complex than their rat counterparts, primarily due to differences in dendritic membrane area and branching pattern, as well as density and nonlinearity of NMDA-mediated synaptic receptors. These findings reveal the structural-biophysical basis for the enhanced functional properties of human neurons.
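
A minimal sketch of the general recipe the abstract describes, assuming the FCI is estimated by fitting DNNs of growing size to a neuron's simulated input/output and taking the smallest network that reaches a fit criterion. The data below are random placeholders standing in for a biophysical simulation, and every size, kernel length, and threshold is an illustrative assumption rather than the authors' actual pipeline:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score

class CausalConv(nn.Module):
    """1D convolution that only sees past inputs (left padding only)."""
    def __init__(self, cin, cout, k):
        super().__init__()
        self.k = k
        self.conv = nn.Conv1d(cin, cout, k)

    def forward(self, x):
        return self.conv(F.pad(x, (self.k - 1, 0)))

def make_tcn(n_syn, width, depth, k=40):
    """Temporally convolutional network mapping synaptic spike trains
    to a per-millisecond spike probability."""
    layers, cin = [], n_syn
    for _ in range(depth):
        layers += [CausalConv(cin, width, k), nn.ReLU()]
        cin = width
    layers += [CausalConv(cin, 1, 1), nn.Sigmoid()]
    return nn.Sequential(*layers)

# Random placeholders standing in for a biophysical simulation:
# binary synaptic inputs (1 x n_syn x T) and output spikes (1 x 1 x T).
n_syn, T = 100, 20_000
x = (torch.rand(1, n_syn, T) < 0.01).float()
y = (torch.rand(1, 1, T) < 0.005).float()

for depth, width in [(1, 8), (3, 32), (7, 128)]:     # small -> large DNNs
    net = make_tcn(n_syn, width, depth)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(200):                             # brief training loop
        opt.zero_grad()
        F.binary_cross_entropy(net(x), y).backward()
        opt.step()
    auc = roc_auc_score(y.flatten().numpy(), net(x).detach().flatten().numpy())
    print(f"depth={depth:>2} width={width:>3} AUC={auc:.3f}")
    if auc >= 0.99:       # the smallest net that fits bounds the neuron's FCI
        break
```

In this framing, a neuron whose input/output mapping can only be matched by deeper and wider networks receives a higher complexity score.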


Stability and flexibility of odor representations in the mouse olfactory bulb

April 2023 · 122 Reads · 1 Citation

Dynamic changes in sensory representations have been basic tenets of studies in neural coding and plasticity. In olfaction, relatively little is known about the dynamic range of changes in odor representations under different brain states and over time. Here, we used time-lapse in vivo two-photon calcium imaging to describe changes in odor representation by mitral cells, the output neurons of the mouse olfactory bulb. Using anesthetics as a gross manipulation to switch between brain states (wakefulness and anesthesia), we found that odor representations by mitral cells undergo significant re-shaping across states but not over time within a state. Odor representations were well balanced across the population in the awake state yet highly diverse under anesthesia. To evaluate differences in odor representation across states, we used linear classifiers to decode odor identity in one state based on training data from the other state. Decoding across states resulted in nearly chance-level accuracy. In contrast, repeating the same procedure for data recorded within the same state but at different time points showed that time had a rather minor impact on odor representations. Relative to the differences across states, odor representations remained stable over months. Thus, single mitral cells can change dynamically across states but maintain robust representations across months. These findings have implications for sensory coding and plasticity in the mammalian brain.
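
A minimal sketch of the cross-state decoding analysis described above, assuming linear classifiers are trained on per-trial mitral-cell responses in one state and tested in the other; the synthetic data and all shapes here are illustrative, not the recorded dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cells, n_odors, n_reps = 50, 8, 10
odors = np.tile(np.arange(n_odors), n_reps)     # balanced odor labels

# Placeholder response matrices (trials x cells); in the study these would
# be mitral-cell calcium signals recorded in each brain state.
awake = rng.normal(size=(odors.size, n_cells)) + 0.3 * odors[:, None]
anesth = rng.normal(size=(odors.size, n_cells)) - 0.3 * odors[:, None]

# Train on one state, test on the other (across-state decoding) ...
clf = LogisticRegression(max_iter=1000).fit(awake, odors)
print("across-state accuracy:", clf.score(anesth, odors))
# ... versus train and test within the same state (cross-validated).
within = cross_val_score(LogisticRegression(max_iter=1000), awake, odors, cv=5)
print("within-state accuracy:", within.mean())
```

The reported result corresponds to the first score sitting near chance (1/8 here) while the within-state score stays high.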


Figure 1. The Filter and Fire (F&F) neuron receives input through multiple synaptic contacts per axon and filters each contact with a different synaptic kernel. (A) Example of three incoming input axons, each making several contacts onto the post-synaptic cell. (B) Various synaptic filters representing their respective locations on the dendritic tree of the neuron model, shown in grey. Proximal synaptic filters are brief whereas more distal synaptic filters have broader temporal profiles. Colors correspond to the source axon. (C) Local dendritic voltage responses at the synaptic loci, resulting from the convolution of each axonal spike train with the respective synaptic filter. Colors correspond to the source axon. Dashed lines indicate each contact's voltage contribution after learning; in this example, one contact's weight is increased and the others are decreased following learning. (D) Somatic voltage is a weighted sum of the synaptic contact contributions, with an independent weight for each contact. The standard I&F reset mechanism is used for spike generation at the soma. The somatic trace is shown in black before learning and in blue after learning.
Figure 2. Increased memory capacity of precisely timed output spikes for the F&F neuron compared to the I&F neuron. (A) Learning to place precisely timed output spikes for randomly generated input. Top: random axonal input raster. Bottom: output spikes before learning (top, black), after learning (middle, blue) and the desired output spikes (bottom, red). (B) Binary classification accuracy at 1 ms temporal resolution, measured by the area under the ROC curve (AUC), as a function of the number of required output spikes for input with 200 input axons. Capacity increases for F&F models as the number of multiple contacts increases, whereas the I&F case shows no increase, as one would expect. (C) Summary of the capacity as a function of the number of multiple connections. For this plot we use the maximal number of spikes that achieves accuracy above an AUC threshold of 0.99. The vertical axis depicts the fraction of successfully timed spikes per input axon. The capacity saturates at ~3x the I&F capacity for high numbers of multiple contacts. (D) The capacity scales linearly with the number of axons and exhibits no saturation.
Figure 4. The dendritic filters are spanned by a 3-dimensional basis set of PSPs, accounting for the 3-fold increase in F&F capacity for a large number of multiple contacts. (A) All possible post-synaptic potentials (PSPs) used in the study, shown as heatmaps. Every block has a different exponential decay time; within each block the rise time grows with each row. (B) The singular value decomposition (SVD) of all the PSPs shown in (A), as heatmaps; it resembles a Fourier-like basis set. (C) All possible PSPs used in the study, shown as traces. There are strong correlations between the various PSPs, indicating they are not all equally useful. (D) First 3 basis functions of the non-negative matrix factorization (NMF) of all PSPs shown in (A) and (C). NMF was used instead of SVD for ease of interpretation and visualization. (E) Cumulative variance explained by each basis component. The 3 basis functions shown in (D) span all PSPs shown in (C) and explain the phenomenon in Fig. 2C, in which the capacity is capped at 3x the number of axons when the number of multiple contacts is large. (F) Direct verification that this 3-function basis can be used as 3 optimal multiple-contact filters, achieving the same capacity as 15 randomly selected multiple contacts.
Multiple Synaptic Contacts combined with Dendritic Filtering enhance Spatio-Temporal Pattern Recognition capabilities of Single Neurons

January 2022 · 248 Reads · 4 Citations

A cortical neuron typically makes multiple synaptic contacts on the dendrites of a post-synaptic target neuron. The functional implications of this apparent redundancy are unclear. The dendritic location of a synaptic contact affects the time-course of the somatic post-synaptic potential (PSP) due to dendritic cable filtering. Consequently, a single pre-synaptic axonal spike results in a PSP composed of multiple temporal profiles. Here, we developed a "filter-and-fire" (F&F) neuron model that captures these features and show that the memory capacity of this neuron is threefold larger than that of a leaky integrate-and-fire (I&F) neuron, when trained to emit precisely timed output spikes for specific input patterns. Furthermore, the F&F neuron can learn to recognize spatio-temporal input patterns, e.g., MNIST digits, where the I&F model completely fails. Multiple synaptic contacts between pairs of cortical neurons are therefore an important feature rather than a bug and can serve to reduce axonal wiring requirements.
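
A minimal sketch of the F&F idea, assuming double-exponential PSP kernels whose time course stands in for dendritic location; all parameter values are illustrative, not those of the published model:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_axons, n_contacts = 2000, 100, 5        # ms, input axons, contacts/axon

def psp_kernel(tau_rise, tau_decay, length=120):
    """Double-exponential PSP; slower kernels stand in for distal contacts."""
    t = np.arange(length)
    k = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
    return k / k.max()

spikes = (rng.random((n_axons, T)) < 0.005).astype(float)  # input spike trains
tau_r = rng.uniform(1.0, 5.0, (n_axons, n_contacts))       # per-contact rise
tau_d = rng.uniform(8.0, 30.0, (n_axons, n_contacts))      # per-contact decay
w = rng.normal(0.0, 0.2, (n_axons, n_contacts))            # per-contact weights

# "Filter": every contact convolves its axon's spike train with its own
# kernel before entering the weighted sum at the soma.
v = np.zeros(T)
for a in range(n_axons):
    for c in range(n_contacts):
        k = psp_kernel(tau_r[a, c], tau_d[a, c])
        v += w[a, c] * np.convolve(spikes[a], k)[:T]

# "Fire": standard I&F-style threshold crossing with a crude reset.
theta, out = 1.0, np.zeros(T, dtype=bool)
for t in range(T):
    if v[t] > theta:
        out[t] = True
        v[t:t + 20] -= theta     # subtract threshold for ~20 ms after a spike
print(int(out.sum()), "output spikes")
```

Because each contact of the same axon contributes a differently filtered copy of the same spike train, the per-contact weights give the neuron temporal degrees of freedom that a single-weight I&F connection lacks, which is what underlies the ~3x capacity increase reported above.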


Single cortical neurons as deep artificial neural networks

August 2021 · 199 Reads · 199 Citations · Neuron

Utilizing recent advances in machine learning, we introduce a systematic approach to characterize neurons’ input/output (I/O) mapping complexity. Deep neural networks (DNNs) were trained to faithfully replicate the I/O function of various biophysical models of cortical neurons at millisecond (spiking) resolution. A temporally convolutional DNN with five to eight layers was required to capture the I/O mapping of a realistic model of a layer 5 cortical pyramidal cell (L5PC). This DNN generalized well when presented with inputs widely outside the training distribution. When NMDA receptors were removed, a much simpler network (fully connected neural network with one hidden layer) was sufficient to fit the model. Analysis of the DNNs’ weight matrices revealed that synaptic integration in dendritic branches could be conceptualized as pattern matching from a set of spatiotemporal templates. This study provides a unified characterization of the computational complexity of single neurons and suggests that cortical networks therefore have a unique architecture, potentially supporting their computational power.
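
A minimal sketch of the architectural contrast the abstract draws, assuming the NMDA-free regime is fit by a fully connected network with one hidden layer over a short spatiotemporal input window; the synapse count and window length are illustrative assumptions:

```python
import torch
import torch.nn as nn

n_syn, window = 500, 80   # synapse count and input history (ms); illustrative

# AMPA/GABA-only regime: one fully connected hidden layer over the
# flattened spatiotemporal input window suffices, per the abstract.
shallow = nn.Sequential(
    nn.Flatten(),                     # (batch, n_syn, window) -> flat vector
    nn.Linear(n_syn * window, 128),   # single hidden layer of 128 units
    nn.ReLU(),
    nn.Linear(128, 1),
    nn.Sigmoid(),                     # per-window spike probability at t0
)

x = (torch.rand(4, n_syn, window) < 0.01).float()   # toy binary input spikes
print(shallow(x).shape)                             # torch.Size([4, 1])
```

With NMDA conductances the same fit reportedly requires a deep temporally convolutional network (five to eight layers), so network depth serves as the read-out of the neuron's computational complexity.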



Fig. 1. An integrate-and-fire (I&F) neuron model is faithfully captured by a DNN with one hidden layer consisting of a single hidden unit. (A) Illustration of an I&F neuron model receiving a barrage of synaptic inputs and generating voltage/spiking output (left), and its analogous DNN (right). Orange, blue and magenta represent the input layer, the hidden layer and the DNN output, respectively. (B) Schematic overview of our prediction approach. The objective of the DNN is to predict the spike output of the respective I&F model based on the input spike trains. The binary matrix, denoted by x, represents the input spikes in a time window T (black rectangle) preceding t0. x is multiplied by the synaptic weight matrix W (represented by the heatmap image) and summed to produce the activation value of the output unit y. This value is used to predict the output (magenta rectangle) at t = t0. Excitatory input is denoted in red; inhibitory in blue. Note that, unlike the I&F model, the DNN has no a priori information about the type of the synaptic inputs (E or I). (C) Top: example inputs (red, excitatory; blue, inhibitory) presented to the I&F neuron model. Middle: response of the I&F model (cyan) and of the analogous DNN (magenta). Bottom: zoom-in on the dashed rectangle region in the top trace. Note the great similarity between the two traces. (D) Learned weights of the DNN modeled synapses. The top 80 rows are excitatory synapses to the I&F model; the bottom 20 rows are its inhibitory synapses. Columns correspond to different time points relative to t0 (rightmost time point). The predicted probability of a spike at t0 increases if the number of active excitatory synapses increases (red) and the number of active inhibitory synapses decreases (blue) just before t0. This expected behavior is automatically learned by the DNN. This heatmap represents a spatiotemporal filter of the input. (E) Temporal cross-section of the learned weights in (D). (F) Left: receiver operating characteristic (ROC) curve of spike prediction; this curve is almost step-like. The area under the curve (AUC) is 0.997, indicating high prediction accuracy at 1 ms precision. Inset: zoom-in on false alarm rates up to 1%. The red circle denotes the threshold selected for the DNN model shown in (C). Right: cross-correlation between the I&F spike train (ground truth) and the predicted spike train of the respective DNN, when the prediction threshold was set to a 0.25% false positive (FP) rate (red circle in left plot). (G) Scatter plot of the predicted DNN subthreshold voltage versus the ground truth voltage produced by the I&F model.
Fig. 2. A detailed model of an L5PC neuron with AMPA synapses is faithfully captured by a DNN with one hidden layer consisting of 128 hidden units. (A) A model of a 3D-reconstructed L5 pyramidal neuron (L5PC) from the mouse somatosensory cortex (left). Basal, oblique and apical dendrites are marked in green, purple and orange, respectively. The analogous DNN for this neuron is shown at right. Orange, blue and magenta circles represent the input layer, the hidden layer and the DNN output, respectively. Green units represent linear activation units (see Methods). (B) Top: response of the L5PC model (cyan) and of the analogous DNN (magenta) to random AMPA-based excitatory and GABAA-based inhibitory synaptic input (see Methods). Bottom: zoom-in on the dashed rectangle region in the top trace. Note the great similarity between the two traces. (C) Learned weights of a selected unit in the DNN, separated by their morphological (basal, oblique and apical) location. In each case, the top half of the rows are excitatory synapses and the bottom half are inhibitory synapses. As in Fig. 1D, different columns correspond to different time points relative to t0 (rightmost time point). The output of this hidden unit increases if the number of active excitatory synapses at the basal and oblique dendrites increases (red) and the number of active inhibitory synapses at these locations decreases (blue) just before t0. However, this unit is non-selective to activity at the apical tuft, indicating the lack of influence of the tuft synapses on the neuron's output. (D) Temporal cross-section of the learned weights in (C). Note the asymmetry between the temporal profiles of excitatory and inhibitory synapses due to the different synaptic dynamics of AMPA and GABAA synapses. (E) Quantitative evaluation of the fit (as in Fig. 1F). Left: ROC curve of spike prediction; the area under the curve (AUC) is 0.961, indicating high prediction accuracy at 1 ms precision. Inset: zoom-in on false alarm rates up to 10%. The red circle denotes the threshold selected for the DNN model shown in (B). The true positive (TP) rate at a false positive (FP) rate of 0.25% is 35.8%, indicating a relatively high hit rate even for a very low false alarm rate. Right: cross-correlation plot between the ground truth (L5PC model response) and the predicted spike train of the respective DNN, for a prediction threshold of a 0.25% FP rate (red circle in left plot). (F) Scatter plot of the predicted DNN subthreshold voltage versus the ground truth voltage.
Fig. 3. The response of a single dendritic branch of an L2/3PC neuron model receiving NMDA synapses is faithfully captured by a DNN with one hidden layer consisting of 4 hidden units. (A) Left: layer 2/3 pyramidal neuron used in the simulations, with a zoom on one selected basal branch (dashed rectangle). The same modeled branch with 9 excitatory synapses (depicted schematically by the "ball and stick" at bottom) was also used in the study of Branco et al. (Branco, Clark, and Häusser 2010). Right: illustration of the analogous DNN. Colors as in Fig. 2A. (B) Example of the somatic voltage response (cyan) and DNN-predicted output (magenta) to a randomly generated input spike pattern on that basal branch (red dots above). (C) Example of the somatic response to two spatio-temporal sequences of synaptic activation patterns (red, "distal-to-proximal" direction; blue, "proximal-to-distal" direction) and the DNN-predicted output for these same sequences (orange and light blue traces, respectively). (D) Learned weights of the 4 hidden units of the respective DNN model. The heatmaps are spatio-temporal filters, as in Figs. 1D and 2C. Note the direction-selective shapes and the long temporal extent of influence of distal synaptic activations. (E) Scatter plot showing the ability to discriminate between different orders of synaptic activation on the modeled basal branch. The vertical axis is the ground truth maximum somatic voltage for a specific order of synaptic activation; the horizontal axis is the directionality index proposed by Branco et al. (Branco, Clark, and Häusser 2010). The correlation coefficient is 0.86. (F) Same as (E), but with the DNN's estimate of the maximum voltage for the respective order of activation, showing the superior performance of the DNN prediction relative to the previously proposed directionality index. The correlation coefficient is 0.99.
Fig. 4. A detailed L5PC neuron model with NMDA synapses is faithfully captured by a DNN with 7 hidden layers consisting of 128 hidden units each. (A) Illustration of the L5PC model (left) and its analogous DNN (right). As in previous figures, orange, blue and magenta circles represent the input layer, the hidden layers and the DNN output, respectively. Green units represent linear activation units. (B) Top: exemplar voltage response of the L5PC model with NMDA synapses (cyan) and of the analogous DNN (magenta) to random synaptic input stimulation. Bottom: zoom-in on the dashed rectangle region in the top trace. (C) Learned weights of a selected unit in the first layer of the DNN. Top left, top center and top right: inputs located on the basal dendrites, the oblique dendrites and the apical tuft, respectively. In each case, the top half of the rows are excitatory synapses and the bottom half are inhibitory synapses. Different columns correspond to different time points relative to t0 (rightmost time point). Bottom: temporal cross-section of the learned weights above. Note the great similarity of this unit to the unit shown in Fig. 2C. (D) Similar to (C), for a different unit in the first layer with a completely different spatio-temporal pattern. This unit appears to be non-selective to activity in the basal dendrites and only slightly sensitive to the oblique dendrites, but very sensitive to the apical tuft dendrites. The output of this hidden unit increases when there is apical excitation and a lack of apical inhibition in a 40 ms time window before t0. Note the contrast to the DNN model of the L5PC with only AMPA synapses, which practically ignored the apical tuft dendrites. Additionally, note the asymmetry between the amplitudes of the temporal profiles of excitatory and inhibitory synapses, indicating that inhibition decreases the activity of this unit more than excitation increases it. (E) Quantitative evaluation of the fit. Left: ROC curve of spike prediction; the area under the curve (AUC) is 0.969, indicating high prediction accuracy at 1 ms precision. Inset: zoom-in on false alarm rates up to 10%. The red circle denotes the threshold selected for the DNN model shown in (B). The true positive (TP) rate at a false positive (FP) rate of 0.25% is 25.2%, indicating a relatively high hit rate even for a very low false alarm rate. Right: cross-correlation plot between the ground truth (L5PC with NMDA synapses) spike train and the predicted spike train of the respective DNN, when the prediction threshold was set to a 0.25% false positive (FP) rate (red circle in left plot). (F) Scatter plot of the predicted DNN subthreshold voltage versus the ground truth voltage.
Single Cortical Neurons as Deep Artificial Neural Networks

April 2019 · 705 Reads · 5 Citations

We propose a novel approach based on modern deep artificial neural networks (DNNs) for understanding how the morpho-electrical complexity of neurons shapes their input/output (I/O) properties at millisecond resolution in response to massive synaptic input. The I/O of an integrate-and-fire point neuron is accurately captured by a DNN with one hidden layer consisting of a single unit. A fully connected DNN with one hidden layer faithfully replicated the I/O relationship of a detailed model of a layer 5 cortical pyramidal cell (L5PC) receiving AMPA and GABAA synapses. However, when voltage-gated NMDA conductances were added, a temporally convolutional DNN with seven layers was required. Analysis of the DNN filters provides new insights into the dendritic processing that shapes the I/O properties of neurons. This work proposes a systematic approach for characterizing the functional "depth" of biological neurons, suggesting that cortical pyramidal neurons and the networks they form are computationally much more powerful than previously assumed.
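
A minimal sketch of the spike-prediction evaluation used throughout the figures above (AUC at 1 ms resolution and the true-positive rate at a fixed 0.25% false-positive rate), run here on toy scores; only the metric logic is meant to match the captions:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.random(100_000) < 0.01                # ground-truth spike per ms
y_prob = 0.5 * y_true + 0.5 * rng.random(100_000)  # toy predicted probabilities

print("AUC:", round(roc_auc_score(y_true, y_prob), 3))
fpr, tpr, thr = roc_curve(y_true, y_prob)
i = np.searchsorted(fpr, 0.0025)                   # first point with FPR >= 0.25%
print(f"TP rate at 0.25% FP: {tpr[i]:.1%} (threshold {thr[i]:.3f})")
```

The selected threshold corresponds to the red circle marked on the ROC curves in Figs. 1, 2 and 4 above.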

Citations (5)


... Li et al., 2023), odor-evoked activity changes depending on behavioral context (Koldaeva et al., 2019; Kudryavitskaya et al., 2020), and significant changes in sensory input do not always impair odor discrimination (Slotnick and Bodyak, 2002; Knott et al., 2012), suggesting that odor coding is robust to these alterations. The OB displays both sparse, low-latency odor-evoked activity and dense, temporally complex, state-dependent activity during and after odor exposure (Spors and Grinvald, 2002; Fletcher et al., 2009; Kato et al., 2012; Vincis et al., 2012; Patterson et al., 2013; Cazakoff et al., 2014; Adefuin et al., 2022; Shani-Narkiss et al., 2023). However, the role and information content of dense activity in odor encoding remain unknown. ...

Reference:

Dense and Persistent Odor Representations in the Olfactory Bulb of Awake Mice
Stability and flexibility of odor representations in the mouse olfactory bulb

... The calcitron is also linear, both in terms of its input-output function (excluding the activation function) and in terms of how calcium from different sources combines to produce C_i^total. Linear point neurons are commonly used to model neural phenomena, although experimental and theoretical work indicate that real neurons may integrate information in a nonlinear fashion [53][54][55][56][57][58][59][60][61][62][63][64]. Of particular relevance is the superlinear activation function of the NMDA receptor, whose conductance exhibits sigmoidal sensitivity to the local voltage at the synapse location [25,26]. ...
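
For reference, the sigmoidal voltage sensitivity of the NMDA conductance mentioned in this excerpt is commonly modeled with the Jahr and Stevens (1990) magnesium-block factor; this standard form is added here for concreteness and is not quoted from the citing paper:

$$ g_{\mathrm{NMDA}}(V) = \bar{g}\,B(V), \qquad B(V) = \frac{1}{1 + \dfrac{[\mathrm{Mg}^{2+}]_o}{3.57\,\mathrm{mM}}\, e^{-0.062\,V/\mathrm{mV}}} $$

Because B(V) grows sigmoidally with local depolarization, the effective synaptic current is superlinear in the local voltage, which is the nonlinearity the excerpt refers to.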

Multiple Synaptic Contacts combined with Dendritic Filtering enhance Spatio-Temporal Pattern Recognition capabilities of Single Neurons

... However, that approach to deep credit assignment is complementary and not mutually exclusive to the one proposed in the current study, and it may be beneficial in the future to combine them, using both privileged processing of multiplexed spikes and bursts and gating of plasticity by local dendritic inhibition. Another line of research has focused on the advantages of nonlinear computations in basal and proximal apical dendrites for processing bottom-up inputs 93 , with theoretical work showing that single biological neurons with complex dendritic morphologies can classify input patterns comparably to entire deep networks 94,95 . That work is also complementary to ours, which focuses instead on the separate problem of processing top-down signals containing target information for synaptic credit assignment. ...

Single cortical neurons as deep artificial neural networks
  • Citing Article
  • August 2021

Neuron

... Furthermore, nonlinear dendritic integration is postulated to augment the computational capacity of neurons (Katz et al., 2009; Koch, Poggio, & Torre, 1983), enabling them to compute linearly non-separable functions (Cazé, Humphries, & Gutkin, 2013; Cazé, Tran-Van-Minh, Gutkin, & DiGregorio, 2023). The incorporation of nonlinear dendritic mechanisms into artificial neural networks is proposed to bring neurons into comparative alignment with their biological counterparts (Beniaguev, Segev, & London, 2021; Poirazi, Brannon, & Mel, 2003; Tzilivaki et al., 2019). Installing dendritic nonlinearity into artificial neural networks not only reduces power consumption and enhances accuracy (Chavlis & Poirazi, 2021; Li et al., 2020) but also mitigates communication costs within neural networks (Wu et al., 2023). ...

Single Cortical Neurons as Deep Artificial Neural Networks

SSRN Electronic Journal

... The calcium-based plasticity rule of Graupner and Brunel (2012) presents an exciting possibility for implementing perceptron-like learning in a more biological manner by making direct use of the experimentally observed mechanisms of plasticity in neurons. Because neurons exhibit some properties of multi-layered networks (Poirazi et al., 2003; Beniaguev et al., 2019), it would also be valuable to explore more powerful learning algorithms that make use of the dendrites as a second (or higher) layer of computation, as in Schiess et al. (2016). Alternatively, it may make sense to consider a different paradigm of dendritic learning, where the dendrites attempt to "predict" the somatic output, allowing for forms of both supervised and unsupervised learning (Urbanczik and Senn, 2014). ...

Single Cortical Neurons as Deep Artificial Neural Networks