Grace W. Lindsay's research while affiliated with New York University and other places
What is this page?
This page lists the scientific contributions of an author, who either does not have a ResearchGate profile, or has not yet added these contributions to their profile.
It was automatically created by ResearchGate to create a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.
If you're a ResearchGate member, you can follow this page to keep up with this author's work.
If you are this author, and you don't want us to display this page anymore, please let us know.
Publications (27)
An ideal vision model accounts for behavior and neurophysiology in both naturalistic conditions and designed lab experiments. Unlike psychological theories, artificial neural networks (ANNs) actually perform visual tasks and generate testable predictions for arbitrary inputs. These advantages enable ANNs to engage the entire spectrum of the evidenc...
Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We s...
Artificial neural networks (ANNs) inspired by biology are beginning to be widely used to model behavioural and neural data, an approach we call 'neuroconnectionism'. ANNs have been not only lauded as the current best models of information processing in the brain but also criticized for failing to account for basic cognitive functions. In this Persp...
Biological neural networks adapt and learn in diverse behavioral contexts. Artificial neural networks (ANNs) have exploited biological properties to solve complex problems. However, despite their effectiveness for specific tasks, ANNs are yet to realize the flexibility and adaptability of biological cognition. This review highlights recent advances...
Artificial Neural Networks (ANNs) inspired by biology are beginning to be widely used to model behavioral and neural data, an approach we call neuroconnectionism. ANNs have been lauded as the current best models of information processing in the brain, but also criticized for failing to account for basic cognitive functions. We propose that arguing...
Behavioral studies suggest that recurrence in the visual system is important for processing degraded stimuli. There are two broad anatomical forms this recurrence can take, lateral or feedback, each with different assumed functions. Here we add four different kinds of recurrence — two of each anatomical form — to a feedforward convolutional neural...
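The two anatomical forms of recurrence described above can be illustrated with a minimal sketch. This is not the paper's model: it uses small dense layers with random stand-in weights rather than a convolutional network, purely to show how lateral (within-layer) versus feedback (higher-to-lower) connections are unrolled over time.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# Hypothetical two-layer network; all weights are random stand-ins.
W1 = rng.normal(size=(8, 4)) * 0.3   # feedforward: input -> layer 1
W2 = rng.normal(size=(3, 8)) * 0.3   # feedforward: layer 1 -> layer 2
W_lat = rng.normal(size=(8, 8)) * 0.1  # lateral: layer 1 -> layer 1
W_fb = rng.normal(size=(8, 3)) * 0.1   # feedback: layer 2 -> layer 1

def run(x, steps=5, lateral=False, feedback=False):
    """Unroll the network over time, optionally adding recurrence."""
    h1 = relu(W1 @ x)
    h2 = relu(W2 @ h1)
    for _ in range(steps):
        drive = W1 @ x                  # feedforward drive each step
        if lateral:
            drive = drive + W_lat @ h1  # within-layer recurrence
        if feedback:
            drive = drive + W_fb @ h2   # top-down recurrence
        h1 = relu(drive)
        h2 = relu(W2 @ h1)
    return h2

x = rng.normal(size=4)
out_ff = run(x)                   # recurrence off: purely feedforward
out_lat = run(x, lateral=True)    # lateral recurrence
out_fb = run(x, feedback=True)    # feedback recurrence
```

With degraded inputs, the idea is that the recurrent terms let later time steps refine the initial feedforward estimate; here the loop structure is the point, not the random weights.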
Neuroscientists apply a range of common analysis tools to recorded neural activity in order to glean insights into how neural circuits implement computations. Despite the fact that these tools shape the progress of the field as a whole, we have little empirical evidence that they are effective at quickly identifying the phenomena of interest. Here...
Artificial neural systems trained using reinforcement, supervised, and unsupervised learning all acquire internal representations of high dimensional input. To what extent these representations depend on the different learning objectives is largely unknown. Here we compare the representations learned by eight different convolutional neural networks...
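Comparing representations across networks trained with different objectives requires a similarity measure between activation spaces. One standard choice (used here as an illustration; the paper may use other metrics) is representational similarity analysis: build each network's representational dissimilarity matrix over a common stimulus set, then correlate the matrices. The "networks" below are random stand-ins.

```python
import numpy as np

def rdm(acts):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between activation patterns for each pair of stimuli (rows)."""
    return 1.0 - np.corrcoef(acts)

def rsa_similarity(acts_a, acts_b):
    """Correlate the upper triangles of two RDMs (second-order similarity)."""
    iu = np.triu_indices(acts_a.shape[0], k=1)
    return np.corrcoef(rdm(acts_a)[iu], rdm(acts_b)[iu])[0, 1]

rng = np.random.default_rng(1)
stimuli = rng.normal(size=(20, 10))                    # 20 stimuli, 10 features
net_a = np.tanh(stimuli @ rng.normal(size=(10, 32)))   # stand-in network A
net_b = np.tanh(stimuli @ rng.normal(size=(10, 32)))   # stand-in network B

score = rsa_similarity(net_a, net_b)   # in [-1, 1]
```

Because the measure operates on stimulus-by-stimulus geometry rather than raw units, it applies equally to networks with different widths or training objectives.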
[This corrects the article DOI: 10.3389/fncom.2020.00029.].
Neuromatch Academy (NMA) designed and ran a fully online 3-week Computational Neuroscience Summer School for 1757 students with 191 teaching assistants (TAs) working in virtual inverted (or flipped) classrooms and on small group projects. Fourteen languages, active community management, and low cost allowed for an unprecedented level of inclusivity...
Neuromatch Academy designed and ran a fully online 3-week Computational Neuroscience summer school for 1757 students with 191 teaching assistants working in virtual inverted (or flipped) classrooms and on small group projects. Fourteen languages, active community management, and low cost allowed for an unprecedented level of inclusivity and univers...
Selective visual attention modulates neural activity in the visual system in complex ways and leads to enhanced performance on difficult visual tasks. Here, we show that a simple circuit model, the stabilized supralinear network, gives a unified account of a wide variety of effects of attention on neural responses. We replicate results from studies...
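The stabilized supralinear network referred to above follows rate dynamics of the form τ dr/dt = −r + k[W r + h]₊ⁿ with a supralinear power-law nonlinearity (n ≈ 2). A minimal two-unit (excitatory/inhibitory) Euler integration is sketched below; the parameter values are illustrative, not the paper's fitted ones.

```python
import numpy as np

# Minimal 2-unit (E, I) stabilized supralinear network.
k, n = 0.04, 2.0                 # power-law gain and exponent
W = np.array([[1.25, -0.65],     # E<-E, E<-I
              [1.20, -0.50]])    # I<-E, I<-I
tau = np.array([0.02, 0.01])     # time constants in seconds

def ssn_steady_state(ext_input, dt=1e-4, steps=20000):
    """Euler-integrate the rate dynamics to (approximate) steady state."""
    r = np.zeros(2)
    for _ in range(steps):
        drive = W @ r + ext_input
        r = r + dt / tau * (-r + k * np.maximum(drive, 0.0) ** n)
    return r

r_lo = ssn_steady_state(np.array([2.0, 2.0]))    # weak external input
r_hi = ssn_steady_state(np.array([20.0, 20.0]))  # strong external input
```

The characteristic SSN behavior is that recurrent inhibition stabilizes the supralinear gain: responses grow steeply at weak input and are normalized at strong input, which is what makes the circuit a candidate unified account of attentional modulation.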
Attention is the important ability to flexibly control limited computational resources. It has been studied in conjunction with many other topics in neuroscience and psychology including awareness, vigilance, saliency, executive control, and learning. It has also recently been applied in several domains in machine learning. The relationship between...
Convolutional neural networks (CNNs) were inspired by early findings in the study of biological vision. They have since become successful tools in computer vision and state-of-the-art models of both neural activity and behavior on visual tasks. This review highlights what, in the context of CNNs, it means to be a good model in computational neurosc...
Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, th...
How does attentional modulation of neural activity enhance performance? Here we use a deep convolutional neural network as a large-scale model of the visual system to address this question. We model the feature similarity gain model of attention, in which attentional modulation is applied according to neural stimulus tuning. Using a variety of visu...
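The feature similarity gain model mentioned above can be sketched in a few lines: each unit's response is scaled by a gain that grows with the similarity between the unit's preferred feature and the attended feature. The tuning curves and gain strength below are hypothetical stand-ins, not the paper's fitted values.

```python
import numpy as np

# Eight units with preferred orientations tiling [0, pi).
prefs = np.linspace(0, np.pi, 8, endpoint=False)

def tuning(stim):
    """Von Mises-like orientation tuning (illustrative shape)."""
    return np.exp(np.cos(2 * (prefs - stim)))

def attend(resp, attended, beta=0.3):
    """Feature similarity gain: scale each unit by its tuning
    similarity to the attended feature (beta is illustrative)."""
    similarity = np.cos(2 * (prefs - attended))  # in [-1, 1]
    gain = 1.0 + beta * similarity               # >1 similar, <1 dissimilar
    return gain * resp

stim = np.pi / 4
base = tuning(stim)
attended_resp = attend(base, attended=stim)      # attend to the stimulus
```

The key property is that attention to a feature enhances units tuned toward it and suppresses units tuned away from it, multiplicatively, independent of the stimulus actually shown.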
Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear “mixed” selectivity is an important neurophysiological trait for enabling complex and context-dependent be...
Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by prefrontal cortex (PFC). Neural activity in PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear ‘mixed’ selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors....
Physical features of sensory stimuli are fixed, but sensory perception is context dependent. The precise mechanisms that govern contextual modulation remain unknown. Here, we trained mice to switch between two contexts: passively listening to pure tones and performing a recognition task for the same stimuli. Two-photon imaging showed that many exci...
Citations
... To address this question, we need to be able to manipulate experimentally how neural networks form, as they learn to achieve behavioural objectives, to establish the causality of these relationships. Computational models allow us to do this [8]. They have shown that network modularity can arise through the spatial cost … Our understanding of how the brain's structure and function interact largely comes from observing differences in brain structure, such as across individuals [6] or following brain injury [7], and then systematically linking these differences to brain function or behavioural outcomes. ...
... Benchmark model. Systems neuroscience lacks a benchmark model that captures our existing knowledge about the nature and origin of vision (Poggio and Serre, 2013; Golan et al., 2023). Despite this lack of a benchmark model, we know many benchmark features relevant to vision. ...
... Notably, the advent of this new source has strengthened the tendency of neuroscientific research to cast aside the phenomenological source by implicitly assuming a close correspondence between physiological processes and conscious experience. In DL-based computational neuroscience, numerous studies completely dismiss the phenomenological source and compare ANN processes to brain processes (a comparison that seems quite natural given that DL's initial structural inspiration was the brain); examples include Yamins' paradigmatic research on vision in brains and in ANNs (Yamins & DiCarlo, 2016), other recent investigations (Kumar et al., 2022; Millet et al., 2022) and even proposed research programs (Doerig et al., 2022; Cohen et al., 2022). ...
... Black and red arrows represent feedforward and recurrent connections, respectively. Adapted from Lindsay et al. (2022). ...
... For example, the Allen Brain Institute released an SDK that simplifies retrieval of and interaction with extensive collections of NWB-standardized data recorded with cutting-edge electrophysiology and imaging tools. Such initiatives greatly expand opportunities to reuse data in education [44][45][46], basic research [47], and benchmarking of new computational models [48]. ...
... In addition, experimental evidence supporting the presence of inhibition of return at both the behavioral and neural levels remains controversial (19, 20). On the other hand, neural circuit models for explaining the neural effects of top-down attention often treat them as static inhibitory (21) or excitatory inputs (22, 23) to local circuits, thus completely ignoring the dynamical fluctuations of attention. Therefore, despite widespread investigations, the fundamental questions of the neural circuit mechanism underlying attention fluctuations and their functional role remain unclear. ...
... The role that attention plays in the brain encourages its addition to artificial neural networks, specifically in DL. Much like the apparent need for attention in the brain, it is a means of making neural systems more flexible [48]. ...
... Some artificial intelligence architectures, and in particular Deep Convolutional Neural Networks (DCNN) can fulfill this role. DCNNs and the visual cortex have similarities in how they process information [19,20]. Accordingly, patterns of neuronal activations within a DCNN have been shown to predict those recorded in the visual cortex of humans [20][21][22][23][24]. Furthermore, previous studies have shown that metrics summarizing the processing of visual information by DCNNs correlate with self-reported assessment of this processing; for example, the mean activation per layer predicts the perceived complexity of an image [25]. ...
... For example, DCNNs rely on texture information more than shape in object recognition [65]. Yet for many neuroscientists, the emergence in DCNNs of the most fundamental properties observed in animal vision indicates that they are still powerful tools for modelling and understanding this perceptual modality [66,67]. Regarding the second assumption, we acknowledge that purely feed-forward models like VGG16 fall short in describing the full complexity of the judgment of beauty. ...
... It has been vastly studied in psychology and neuroscience [Posner and Petersen, 1990, Cohen et al., 1990, Phaf et al., 1990, Bundesen, 1990, Desimone et al., 1995, Mozer and Sitton, 1998, Corbetta and Shulman, 2002, O'Reilly and Frank, 2006, Petersen and Posner, 2012, Moore and Zirnsak, 2017] and more recently by Flesch et al. [2022], Dekker et al. [2022]. These studies have acted as a source of inspiration for several artificial intelligence models [Khosla et al., 2007, Lindsay and Miller, 2018] including the ones proposed in this thesis. ...