Book

Neural Assemblies: An Alternative Approach to Artificial Intelligence

Authors:

Abstract

You can't tell how deep a puddle is until you step in it. When I am asked about my profession, I have two ways of answering. If I want a short discussion, I say that I am a mathematician; if I want a long discussion, I say that I try to understand how the human brain works. A long discussion often leads to further questions: What does it mean to understand "how the brain works"? Does it help to be trained in mathematics when you try to understand the brain, and what kind of mathematics can help? What makes a mathematician turn into a neuroscientist? This may lead into a metascientific discussion which I do not like particularly because it is usually too far off the ground. In this book I take quite a different approach. I just start explaining how I think the brain works. In the course of this explanation my answers to the above questions will become clear to the reader, and he will perhaps learn some facts about the brain and get some insight into the constructions of artificial intelligence.

Chapters (16)

When I am asked about my profession, I have two ways of answering. If I want a short discussion, I say that I am a mathematician; if I want a long discussion, I say that I try to understand how the human brain works. A long discussion often leads to further questions:
In the brain the information upon which we act comes together. This is visual, acoustical, olfactory, and tactile information about the outside world, as well as information on our own state of motion (proprioception from our muscles and joints) and emotion (e.g., the hormonal concentration in our blood, the condition of our inner organs and glands). On the basis of this information our reaction for this moment is programmed, or sometimes a sequence of reactions is planned for the future.
We have seen that the brain gets all the sensory information necessary to decide what to do. It also possesses the appropriate means for motor control to carry out its decisions. But “who” is making these decisions?
In the following we will discuss how to build such an “intelligent” machine. This machine should work like an organism in an environment, which we again can picture as in Fig. 1.1.
In the last chapter we considered the task of building a complicated network that is designed to perform a certain kind of behavior. In this chapter we shall concentrate on the question of how to save "blueprint" in the description of this task.
In this chapter we shall take up the “square engineering problem” posed at the end of Chapter 3, namely, how to build a “well-behaving” machine in an economic way. Our first concern will be how to save “blueprint” in the description of this machine. In Chapter 7 we shall finally arrive at the question of how many building blocks (i.e., neurons) are needed for such a machine.
Can the improved matchbox algorithm (constructed in the last chapter) be taken seriously as a model for a learning animal? The algorithm has been introduced as a simple example of a cooperative type of mechanism, where the final goal it will attain is intuitively clear, although the detailed way to that goal is impossible to predict.
In this chapter I shall provide some possible specifications of details of the survival algorithm, in order to see whether these specifications fit with known data on real brains.
Today we have a fairly good idea of the preprocessing of sensory information on its way to the cortex. This chapter is devoted to the most extensively studied example: the visual input. Most of the results presented here are due to electrophysiological investigations (for a review see Hubel and Wiesel 1977).
It is quite clear that a change in an animal’s stimulus-response behavior must be reflected in a change in the electrophysiological properties of some neurons in its brain, and such electrophysiological changes have indeed been found after various ways of conditioning the animal (e.g., Doty 1965, 1969).
In this chapter I want to introduce a certain type of mathematical brain model. Such models arise from the desire to understand the dynamics in a large network of interconnected neurons. Thus they study the flow of activity through a neuronal network on the basis of comparatively simple assumptions on the dynamics of the individual neurons (and synapses) and on the pattern of their connectivity. The results are usually interpreted in comparison with introspective, psychological, or psychophysical experiences. This kind of interpretation, of course, tends to be very speculative, especially since usually not the whole brain is modeled but just some part of it (e.g., the cortex, the hippocampus, the visual cortex, the cerebellum), and it remains unclear to what degree other parts of the brain contribute to the experiences referred to.
In the first section we have seen that we are not aware of all the information that reaches our brain through the sensory channels.
This chapter contains some rather wild speculations that are meant to fill out the rough image sketched in the last chapter.
Now we come back to the question that has clearly emerged from the construction given in the first part of this book, and that may have lingered in the back of the reader’s mind most of the time. Can we regard ourselves as robots?
Motivated by the question of how our brain might work, we have gone a long way through the design of our “improved matchbox algorithm”, which finally even served as a model for human behavior. But this whole construction is only a speculation, not a fact (although en route we learned some “real” facts about the brain). So what do we get out of all these constructions?
... Neural associative memory is an alternative computing architecture in which, unlike in the classical von Neumann machine [6,2], computation and data storage are not separated [67,68,57,58,22,62,42,35,44]. Given a training data set of M associations {(u^µ → v^µ) : µ = 1, ..., M} between inputs u^µ and outputs v^µ, the general "hetero-associative" task is to find, for a new input query pattern ũ, the most similar u^µ from the training data and return the associated output pattern v̂ = v^µ. ...
... Given a training data set of M associations {(u^µ → v^µ) : µ = 1, ..., M} between inputs u^µ and outputs v^µ, the general "hetero-associative" task is to find, for a new input query pattern ũ, the most similar u^µ from the training data and return the associated output pattern v̂ = v^µ. This is similar to nearest-neighbor algorithms [10,7,27,19] and locality-sensitive hashing [24,23,56], but neural implementations turned out to be particularly efficient for some applications and play an important role in neuroscience as models of neural computation for various brain structures, for example neocortex, hippocampus, cerebellum, and mushroom body [20,4,58,11,63,25,45,54,13,64,3,53,1,26,52,37,9,29,61]. A special case is "auto-association", where inputs and outputs are identical, u^µ = v^µ, thus associating each input u^µ with itself [68,57,22]. ...
... A special case is "auto-association", where inputs and outputs are identical, u^µ = v^µ, thus associating each input u^µ with itself [68,57,22]. Corresponding applications involve, for example, pattern completion (where ũ is a noisy version of u^µ), clustering, and modeling the recurrent architecture of brain networks implementing Hebbian cell assemblies [20,58,61,51]. ...
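For readers who want to see the basic mechanism behind these citation snippets in code, the following is a minimal numpy sketch of a binary (Willshaw/Steinbuch-type) hetero-associative memory with clipped Hebbian learning and simple threshold retrieval. All sizes, sparseness values, and the noise applied to the query are illustrative assumptions, not parameters from any of the cited works.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_sparse_patterns(M, n, k):
        """M binary patterns of length n with exactly k active units each."""
        P = np.zeros((M, n), dtype=np.uint8)
        for mu in range(M):
            P[mu, rng.choice(n, size=k, replace=False)] = 1
        return P

    # Training set of M associations u^mu -> v^mu (sizes are illustrative).
    M, n_in, n_out, k = 200, 1000, 1000, 10
    U = random_sparse_patterns(M, n_in, k)
    V = random_sparse_patterns(M, n_out, k)

    # Binary ("clipped") Hebbian learning: a synapse is 1 if its pre- and
    # postsynaptic units were ever active together in a stored association.
    W = ((U.astype(int).T @ V.astype(int)) > 0).astype(np.uint8)   # (n_in, n_out)

    # Retrieval for a noisy query: threshold the dendritic sums at the
    # number of active query units (the classical Willshaw threshold).
    u_query = U[0].copy()
    u_query[np.flatnonzero(u_query)[:2]] = 0        # delete 2 of the k active units
    sums = u_query.astype(int) @ W
    v_hat = (sums >= u_query.sum()).astype(np.uint8)
    print("errors in retrieved output:", int(np.sum(v_hat != V[0])))

Auto-association is then simply the special case U = V, with the same matrix applied recurrently.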
Preprint
Full-text available
Neural associative memories are single-layer perceptrons with fast synaptic learning typically storing discrete associations between pairs of neural activity patterns. Previous works have analyzed the optimal networks under naive Bayes assumptions of independent pattern components and heteroassociation, where the task is to learn associations from input to output patterns. Here I study the optimal Bayesian associative network for auto-association, where input and output layers are identical. In particular, I compare performance to different variants of approximate Bayesian learning rules, like the BCPNN (Bayesian Confidence Propagation Neural Network), and try to explain why sometimes the suboptimal learning rules achieve higher storage capacity than the (theoretically) optimal model. It turns out that performance can depend on subtle dependencies of input components violating the "naive Bayes" assumptions. This includes patterns with a constant number of active units, iterative retrieval where patterns are repeatedly propagated through recurrent networks, and winners-take-all activation of the most probable units. Performance of all learning rules can improve significantly if they include a novel adaptive mechanism to estimate noise in iterative retrieval steps (ANE). The overall maximum storage capacity is achieved again by the Bayesian learning rule with ANE.
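To make the comparison concrete, here is a schematic numpy sketch of a BCPNN-style learning rule of the kind mentioned in this abstract: weights are log ratios of estimated co-activation probabilities to the product of the marginals, and biases are log marginals. The smoothing constant, the pattern statistics, and the winner-take-all readout are illustrative assumptions and do not reproduce the optimal Bayesian or ANE variants studied in the preprint.

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative training data: M sparse binary patterns over n units.
    M, n, k = 100, 500, 10
    X = np.zeros((M, n))
    for mu in range(M):
        X[mu, rng.choice(n, size=k, replace=False)] = 1

    # Count-based probability estimates with a small additive smoothing term
    # eps to avoid log(0); eps = 1/M is just one common, illustrative choice.
    eps = 1.0 / M
    p_i = (X.sum(axis=0) + eps) / (M + eps)          # unit marginals P(x_i = 1)
    p_ij = (X.T @ X + eps) / (M + eps)               # pairwise co-activation probabilities

    # BCPNN-type parameters: weights are log odds of co-activation versus
    # independence, biases are log marginals.
    W = np.log(p_ij / np.outer(p_i, p_i))
    b = np.log(p_i)
    np.fill_diagonal(W, 0.0)                         # no self-connections

    # Retrieval support for a degraded query; the k most supported units are
    # taken as winners (a simple winners-take-all readout).
    x = X[0].copy()
    x[np.flatnonzero(x)[:3]] = 0                     # delete 3 of the 10 active units
    support = b + x @ W
    winners = np.argsort(support)[-k:]
    print("overlap with stored pattern:", int(X[0, winners].sum()), "of", k)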
... As in the Harrow [16] approach, our data are sparse. The sparse data are stored by a Lernmatrix [20,21], also called Willshaw's associative memory [22], using the best possible distributed compression methods [18,19]. Our quantum Lernmatrix model is based on the Lernmatrix. ...
... Associative memory models human memory [24-28]. The associative memory and distributed representation incorporate the following abilities in a natural way [18,28-30]: ...
... The goal was to produce a network that could use a binary version of Hebbian learning to form associations between pairs of binary vectors. Later, this model was studied from biological and mathematical perspectives, mainly by Willshaw [22] and Palm [18,24], and it was shown that this simple model has a tremendous storage capacity. ...
Article
Full-text available
We introduce a quantum Lernmatrix based on the Monte Carlo Lernmatrix in which n units are stored in the quantum superposition of log2(n) units representing O(n²/log(n)²) binary sparse coded patterns. During the retrieval phase, quantum counting of ones based on Euler's formula is used for the pattern recovery, as proposed by Trugenberger. We demonstrate the quantum Lernmatrix by experiments using qiskit. We indicate why the assumption proposed by Trugenberger, that the lower the parameter temperature t, the better the identification of the correct answers, is not correct. Instead, we introduce a tree-like structure that increases the measured value of correct answers. We show that the cost of loading L sparse patterns into quantum states of a quantum Lernmatrix is much lower than that of storing the patterns individually in superposition. During the active phase, the quantum Lernmatrices are queried and the results are estimated efficiently. The required time is much lower compared with the conventional approach or with that of Grover's algorithm.
... Much like the spatial hierarchies found in the visual cortex that ensure an increase in receptive field size along the visual pathway 11 , there is evidence for a hierarchy of time scales with lower visual areas responding at shorter time scales and higher visual areas at longer time scales 12 . At the neural level, cell assemblies of strongly interconnected neurons that serve as representations of static or dynamic events of different duration have been suggested to develop a hierarchical temporal structure 13,14 . ...
... (12) and (13), these two extreme cases correspond respectively to the choices for the rate constants, α_s = 1 (Eqs. (14) and (15)) and α_r = 1 (Eqs. (16) and (17)). ...
... Similar to the spatial hierarchy exhibited by the receptive fields of neurons in the visual cortex, there is also a temporal hierarchy, with neurons responding to fast stimulus changes in early visual cortex, while neurons in higher visual cortex respond to slower changes. The possibility of developing a hierarchical temporal structure has also been suggested in the context of cell assemblies: generic, densely interconnected groups of neurons that serve as representations of static or dynamic events of different duration 13,14. Such assemblies can be combined sequentially to represent events of complex temporal structure 41,42. ...
Article
Full-text available
Recent experiments have revealed a hierarchy of time scales in the visual cortex, where different stages of the visual system process information at different time scales. Recurrent neural networks are ideal models to gain insight into how information is processed by such a hierarchy of time scales and have become widely used to model temporal dynamics both in machine learning and computational neuroscience. However, in the derivation of such models as discrete time approximations of the firing rate of a population of neurons, the time constants of the neuronal process are generally ignored. Learning these time constants could inform us about the time scales underlying temporal processes in the brain and enhance the expressive capacity of the network. To investigate the potential of adaptive time constants, we compare the standard approximations to a more lenient one that accounts for the time scales at which processes unfold. We show that such a model performs better on predicting simulated neural data and allows recovery of the time scales at which the underlying processes unfold. A hierarchy of time scales emerges when adapting to data with multiple underlying time scales, underscoring the importance of such a hierarchy in processing complex temporal information.
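The kind of discrete-time approximation at issue here can be written down in a few lines. The sketch below is a leaky firing-rate network in which each unit keeps its own time constant tau_i, so the leak term is retained rather than absorbed; setting all alpha_i = dt/tau_i to 1 recovers the standard memoryless update. Network size, nonlinearity, and the random weights are illustrative assumptions, and tau is simply sampled here rather than learned.

    import numpy as np

    rng = np.random.default_rng(2)

    n, dt, T = 32, 1.0, 200
    tau = rng.uniform(2.0, 50.0, size=n)      # per-unit time constants (could be learned)
    alpha = dt / tau                           # per-unit update rates, 0 < alpha <= 1
    W = rng.normal(0, 1.0 / np.sqrt(n), size=(n, n))
    W_in = rng.normal(0, 1.0, size=n)

    def step(r, x_t):
        """One Euler step of the leaky rate equation
        tau_i * dr_i/dt = -r_i + f(sum_j W_ij r_j + W_in_i * x_t)."""
        drive = np.tanh(W @ r + W_in * x_t)
        return (1.0 - alpha) * r + alpha * drive

    # Drive the network with a slow sinusoid; units with long tau integrate
    # over longer stretches of the input than units with short tau.
    r = np.zeros(n)
    rates = []
    for t in range(T):
        r = step(r, np.sin(2 * np.pi * t / 50.0))
        rates.append(r.copy())
    rates = np.array(rates)                    # (T, n) trajectory
    print("fastest unit fluctuation:", float(rates[:, np.argmin(tau)].std()),
          "slowest unit fluctuation:", float(rates[:, np.argmax(tau)].std()))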
... and to retrieve the patterns u^μ as fixed points of equation 2.3, starting with û(0) := ũ, where H is the Heaviside function with H(x) = 1 for x > 0 and H(x) = 0 for x < 0. The retrieval methods may differ in the choice of thresholds Θ(t) (threshold regulation; see section 3 and Palm, 1982) and perhaps in special criteria for stopping the iteration. There are two basically different approaches concerning the starting patterns: either one wants to verify that a pattern u is indeed a fixed point, in which case u^μ, or a pattern very close to u^μ, is used as the starting point (fixed-point retrieval or recognition of stored patterns), or one wants to find the next correct pattern u^μ from a substantially different starting pattern (pattern correction or pattern completion). ...
... The next smaller simulated network with n = 45,056 (and k = 22, N = 2048) achieves almost the same value, such that the true maximum is likely between 50,000 and 100,000 neurons. Although the maximum seems rather flat, it is nevertheless remarkable that it occurs at about the same size as a cortical macrocolumn of size 1 mm³ (about n = 10^5; Braitenberg & Schüz, 1991), for which Willshaw networks are often used as generic models (Palm, 1982; Palm, Knoblauch, Hauser, & Schüz, 2014; Knoblauch & Sommer, 2016). For larger n > 10^5 information capacity C_u decreases again toward the asymptotic value C_u → (ln 2)/4 ≈ 0.173 bps. ...
... Second, random patterns are thought to be optimal to maximize pattern and information capacity defined in section 2, thereby providing an upper bound for real-world applications. Third, there are various recognition architectures that actually employ NAM mappings with random patterns (e.g., Palm, 1982;Kanerva, 1988;Knoblauch, 2012). Fourth, it is known that activity and connectivity patterns of various brain structures that are thought to work as NAM have random character (Braitenberg, 1978;Braitenberg & Schüz, 1991;Rolls, 1996;Albus, 1971;Bogacz, Brown, & Giraud-Carrier, 2001;Laurent, 2002;Pulvermüller, 2003;Lansner, 2009). ...
Article
Full-text available
Neural associative memories (NAM) are perceptron-like single-layer networks with fast synaptic learning typically storing discrete associations between pairs of neural activity patterns. Gripon and Berrou (2011) investigated NAM employing block coding, a particular sparse coding method, and reported a significant increase in storage capacity. Here we verify and extend their results for both heteroassociative and recurrent autoassociative networks. For this we provide a new analysis of iterative retrieval in finite autoassociative and heteroassociative networks that allows estimating storage capacity for random and block patterns. Furthermore, we have implemented various retrieval algorithms for block coding and compared them in simulations to our theoretical results and previous simulation data. In good agreement between theory and experiments, we find that finite networks employing block coding can store significantly more memory patterns. However, due to the reduced information per block pattern, it is not possible to significantly increase stored information per synapse. Asymptotically, the information retrieval capacity converges to the known limits C = ln 2 ≈ 0.69 and C = (ln 2)/4 ≈ 0.17 also for block coding. We have also implemented very large recurrent networks up to n = 2·10^6 neurons, showing that maximal capacity C ≈ 0.2 bit per synapse occurs for finite networks having a size n ≈ 10^5, similar to cortical macrocolumns.
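The capacity figures quoted here come from the classical Willshaw analysis, and the rough numbers are easy to reproduce. The following back-of-the-envelope script assumes random patterns with k ≈ log2(n) active units and a memory matrix filled to about 50% ones; it ignores retrieval-error constraints and the factor-of-four reduction for symmetric autoassociative weights, which is part of why careful finite-size analyses such as the one above arrive at lower values like 0.2 bit per synapse.

    import math

    # Back-of-the-envelope Willshaw capacity for a heteroassociative matrix,
    # assuming random patterns with k = log2(n) active units and a matrix
    # filled to p1 = 0.5 (illustrative textbook regime, not the paper's model).
    n = 10**5                      # neurons, about one cortical macrocolumn
    k = round(math.log2(n))        # active units per pattern

    # Number of stored patterns that fills the n*n matrix to p1:
    # p1 = 1 - (1 - k^2/n^2)^M  =>  M = ln(1 - p1) / ln(1 - k^2/n^2)
    p1 = 0.5
    M_max = math.log(1 - p1) / math.log(1 - (k * k) / n**2)

    def log2_binom(n, k):
        """Shannon information of one sparse pattern, log2 C(n, k) bits."""
        return (math.lgamma(n + 1) - math.lgamma(k + 1)
                - math.lgamma(n - k + 1)) / math.log(2)

    bits_per_synapse = M_max * log2_binom(n, k) / n**2
    print(f"k = {k}, M_max = {M_max:.3e} patterns")
    print(f"about {bits_per_synapse:.2f} bits per synapse "
          f"(asymptotic limit ln 2 = {math.log(2):.2f})")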
... The basis for this work is the Willshaw model [37] of associative memory. This shallow artificial neural network is a likely candidate for a computational model of brain functions [9,27]: With extreme neural energy efficiency [15,18,20], this associative memory can efficiently store a tremendous number of patterns [21,36,26,1,11]. The storage capacity of the basic model can even be further enhanced by implementing other neurological principles, such as structural plasticity [14]. ...
... Despite the simplicity of the WN, it has been shown that this model, under specific conditions, is capable of storing a tremendous number of associations. More specifically, this is the case when the patterns that the model stores are Sparse Distributed Representations (SDRs) [2,26,27,29,10,6,24]. This type of representation, which closely resembles the selective neuron firing phenomenon of the human neocortex [7,24,22,11], corresponds to large binary vectors where only a subset of its bits are active (sparse), and where all of the positions of the vectors are quasi-uniformly used across a dataset (distributed). ...
Article
Full-text available
Drawing from memory the face of a friend you have not seen in years is a difficult task. However, if you happen to cross paths, you would easily recognize each other. The biological memory is equipped with an impressive compression algorithm that can store the essential, and then infer the details to match perception. The Willshaw Memory is a simple abstract model for cortical computations which implements mechanisms of biological memories. Using our recently proposed sparse coding prescription for visual patterns, this model can store and retrieve an impressive amount of real-world data in a fault-tolerant manner. In this paper, we extend the capabilities of the basic Associative Memory Model by using a Multiple-Modality framework. In this setting, the memory stores several modalities (e.g., visual, or textual) of each pattern simultaneously. After training, the memory can be used to infer missing modalities when just a subset is perceived. Using a simple encoder-memory-decoder architecture, and a newly proposed iterative retrieval algorithm for the Willshaw Model, we perform experiments on the MNIST dataset. By storing both the images and labels as modalities, a single Memory can be used not only to retrieve and complete patterns but also to classify and generate new ones. We further discuss how this model could be used for other learning tasks, thus serving as a biologically-inspired framework for learning.
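As a toy illustration of the multiple-modality idea described in this abstract, the sketch below concatenates a "visual" code and a "label" code into one sparse binary vector, stores the concatenations auto-associatively in a single binary matrix, and then infers the label part from the image part alone. The encoder and decoder are replaced by random sparse codes, so this only illustrates the memory step under assumed sizes, not the paper's MNIST pipeline or its iterative retrieval algorithm.

    import numpy as np

    rng = np.random.default_rng(3)

    def sparse(n, k):
        v = np.zeros(n, dtype=np.uint8)
        v[rng.choice(n, size=k, replace=False)] = 1
        return v

    # Two modalities per stored item: a "visual" code and a "label" code.
    n_img, n_lab, k_img, k_lab, M = 800, 100, 12, 4, 60
    items = [(sparse(n_img, k_img), sparse(n_lab, k_lab)) for _ in range(M)]

    # Auto-associative storage of the concatenated patterns (clipped Hebb rule).
    n = n_img + n_lab
    W = np.zeros((n, n), dtype=np.uint8)
    for img, lab in items:
        x = np.concatenate([img, lab])
        W |= np.outer(x, x)

    # Infer the missing modality: present only the image part as a cue and
    # threshold the label units at the number of active cue units.
    img0, lab0 = items[0]
    cue = np.concatenate([img0, np.zeros(n_lab, dtype=np.uint8)])
    sums = cue.astype(int) @ W
    lab_hat = (sums[n_img:] >= cue.sum()).astype(np.uint8)
    print("errors in the inferred label code:", int(np.sum(lab_hat != lab0)))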
... The basis for this work is the Willshaw model [37] of associative memory. This shallow artificial neural network is a likely candidate for a computational model of brain functions [9,27]: With extreme neural energy efficiency [15,18,20], this associative memory can efficiently store a tremendous number of patterns [21,36,26,1,11]. The storage capacity of the basic model can even be further enhanced by implementing other neurological principles, such as structural plasticity [14]. ...
... Despite the simplicity of the WN, it has been shown that this model, under specific conditions, is capable of storing a tremendous number of associations. More specifically, this is the case when the patterns that the model stores are Sparse Distributed Representations (SDRs) [2,26,27,29,10,6,24]. This type of representation, which closely resembles the selective neuron firing phenomenon of the human neocortex [7,24,22,11], corresponds to large binary vectors where only a subset of its bits are active (sparse), and where all of the positions of the vectors are quasi-uniformly used across a dataset (distributed). ...
Preprint
Full-text available
Drawing from memory the face of a friend you have not seen in years is a difficult task. However, if you happen to cross paths, you would easily recognize each other. The biological memory is equipped with an impressive compression algorithm that can store the essential, and then infer the details to match perception. The Willshaw Memory is a simple abstract model for cortical computations which implements mechanisms of biological memories. Using our recently proposed sparse coding prescription for visual patterns [34], this model can store and retrieve an impressive amount of real-world data in a fault-tolerant manner. In this paper, we extend the capabilities of the basic Associative Memory Model by using a Multiple-Modality framework. In this setting, the memory stores several modalities (e.g., visual, or textual) of each pattern simultaneously. After training, the memory can be used to infer missing modalities when just a subset is perceived. Using a simple encoder-memory-decoder architecture, and a newly proposed iterative retrieval algorithm for the Willshaw Model, we perform experiments on the MNIST dataset. By storing both the images and labels as modalities, a single Memory can be used not only to retrieve and complete patterns but also to classify and generate new ones. We further discuss how this model could be used for other learning tasks, thus serving as a biologically-inspired framework for learning.
... 22,29,44 In particular for the latter case, it is then necessary to distinguish between different connectivity measures, for example, relating to the actual number of anatomical synapses per neuron versus the number of potential synapses 8,45 per neuron or the number of "effectual" synapses 22,29 per neuron of a Hebbian cell assembly that represents a particular memory. 1,46,47 By analysis and model simulation, one can estimate the increase in storage efficiency by structural plasticity and the relevance of learning parameters such as the resolution of synaptic weights. Interestingly, it turns out that, for plausible network parameters, structural plasticity can increase storage efficiency per actual synapse by at least one order of magnitude, whereas weight resolution appears to have only a minor role. ...
... 1,2,24,60-62 In the simplest case a memory corresponds to a group of neurons that fire at the same time and, according to the Hebbian hypothesis "what fires together wires together", develop strong mutual synaptic connections. 1,63 Such groups of strongly connected neurons are called cell assemblies 1,47,64 and have a number of properties that suggest a function for associative memory 31-34,37 or more complex state-based computation. 65,66 For example, if a stimulus activates a part of a cell assembly, the mutual synaptic connections will quickly activate the whole cell assembly, which is believed to correspond to the retrieval or completion of a memory pattern. ...
Chapter
This chapter discusses possible functions of structural plasticity for memory formation that follow from simulating and analyzing simple Willshaw- or Hopfield-type network models realizing Hebbian cell assemblies. According to such models, structural plasticity can increase storage efficiency in sparsely connected networks by a factor of at least the inverse filling fraction (of realized potential synapses) or even several orders of magnitude for sparse neural activity as measured experimentally in memory-related cortical areas. A second important function of structural plasticity may be to increase stability of long-term memories by balancing between stability and plasticity and, thus, preventing catastrophic forgetting. Together with the analysis of storage efficiency, this suggests a novel model for efficient memory storage by structural plasticity forming cell assemblies that increase their connectivity and activity but decrease their size. This suggests a role of structural plasticity for sharpening of neural activity as observed experimentally during learning and consolidation. Moreover, structural plasticity can explain several memory phenomena such as the spacing effect or retrograde amnesia much better than alternative models that are solely based on weight plasticity.
... In previous work, CAs have been used to build a number of cognitive systems. Researchers have been developing CA-based systems for quite some time [33]. In particular, the authors have done work on, for example, associative memory [18], natural language parsing [19], and category learning [20]. ...
... Thus, all codes stored in the CF are of size Q, one winner per CM. This modular structure: i) distinguishes it from many "flat" SDC models, e.g., [4][5][6]; and ii) admits an extremely efficient way to compute an input's familiarity, G (a generalized similarity measure, defined shortly), without requiring explicit comparison to each individual stored input. The model's most important contribution is a novel, normative use of noise (randomness) in the learning process to statistically preserve similarity. ...
Preprint
Full-text available
Evidence suggests information is represented in the brain, e.g., neocortex, hippocampus, in the form of sparse distributed codes (SDCs), i.e., small sets of principal cells, a kind of "cell assembly" concept. Two key questions are: a) how are such codes formed (learned) on the basis of single trials, as is needed to account for episodic memory; and b) how do more similar inputs get mapped to more similar SDCs, as needed to account for similarity-based responding, i.e., responding based on a world model? I describe a Modular Sparse Distributed Code (MSDC) and associated single-trial, on-line, non-optimization-based, unsupervised learning algorithm that answers both questions. An MSDC coding field (CF) consists of Q WTA competitive modules (CMs), each comprised of K binary units (principal cell analogs). The key principle of the learning algorithm is to add noise inversely proportional to input familiarity (directly proportional to novelty) to the deterministic synaptic input sums of the CF units, yielding a distribution in each CM from which a winner is chosen. The more familiar an input, X, the less noise added, allowing prior learning to dominate winner selection, resulting in greater expected intersection of the code assigned to X, not only with the code of the single most similar stored input, but, because all codes are stored in superposition, with the codes of all stored inputs. I believe this is the first proposal for a normative use of noise during learning, specifically, for the purpose of approximately preserving similarity from inputs to neural codes. Thus, the model constructs a world model as a side-effect of its primary action, storing episodic memories. Crucially, it runs in fixed time, i.e., the number of steps needed to store an item (an episodic memory) remains constant as the number of stored items grows.
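To make the MSDC idea easier to follow, here is a compact numpy sketch of one possible reading of the algorithm in this abstract: a coding field of Q winner-take-all modules with K binary units each, a familiarity value G computed from normalized synaptic input, and noise added in inverse proportion to G before choosing each module's winner, followed by one-trial Hebbian learning. All sizes, the input format, and the particular noise law are illustrative assumptions, not the author's exact model.

    import numpy as np

    rng = np.random.default_rng(4)

    n_in, Q, K = 64, 8, 8                      # input size; Q WTA modules of K units
    W = np.zeros((Q, K, n_in))                 # binary weights from input to each CF unit
    stored = []                                # codes assigned to stored inputs

    def present(x, learn=True):
        """Assign a modular code (one winner per module) to binary input x."""
        u = W @ x                              # raw synaptic sums, shape (Q, K)
        norm = max(x.sum(), 1.0)
        G = u.max(axis=1).mean() / norm        # familiarity: mean of per-module best matches
        noise = rng.normal(0.0, 0.5, size=(Q, K)) * (1.0 - G)   # more noise if novel
        winners = (u / norm + noise).argmax(axis=1)
        if learn:                              # one-trial learning: winners adopt x's active inputs
            for q, k in enumerate(winners):
                W[q, k, x > 0] = 1.0
        return winners, G

    # Store a few random binary inputs, then probe with a repeat and a novel input.
    inputs = (rng.random((5, n_in)) < 0.15).astype(float)
    for x in inputs:
        stored.append(present(x)[0])

    code_rep, G_rep = present(inputs[0], learn=False)
    code_new, G_new = present((rng.random(n_in) < 0.15).astype(float), learn=False)
    print("familiarity, repeated vs novel input:", round(float(G_rep), 2), round(float(G_new), 2))
    print("modules matching the originally assigned code:",
          int(np.sum(code_rep == stored[0])), "of", Q)

In this toy setting a perfectly familiar input gets no added noise, so, up to ties, the previously assigned winners are recovered deterministically, which is the similarity-preserving behavior the abstract aims for.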
... Thus, all codes stored in the CF or that ever become active in the CF are of size Q, one winner per CM. This modular CF structure distinguishes MSDC from numerous prior, "flat CF" sparse distributed representation (SDR) models, e.g., [28,12,16,19]. 2) The modular organization admits an extremely efficient way to compute the familiarity (G, defined shortly), a generalized similarity measure that is sensitive not just to pairwise, but to all higher-order, similarities present in the inputs, without requiring explicit comparison of a new input to stored inputs. 3) A novel, normative use of noise (randomness) in the learning process, i.e., in choosing winners in the CMs. ...
Preprint
Full-text available
There is increasing realization in neuroscience that information is represented in the brain, e.g., neocortex, hippocampus, in the form of sparse distributed codes (SDCs), a kind of cell assembly. Two essential questions are: a) how are such codes formed on the basis of single trials, and b) how is similarity preserved during learning, i.e., how do more similar inputs get mapped to more similar SDCs. I describe a novel Modular Sparse Distributed Code (MSDC) that provides simple, neurally plausible answers to both questions. An MSDC coding field (CF) consists of Q WTA competitive modules (CMs), each comprised of K binary units (analogs of principal cells). The modular nature of the CF makes possible a single-trial, unsupervised learning algorithm that approximately preserves similarity and, crucially, runs in fixed time, i.e., the number of steps needed to store an item remains constant as the number of stored items grows. Further, once items are stored as MSDCs in superposition and such that their intersection structure reflects input similarity, both fixed time best-match retrieval and fixed time belief update (updating the probabilities of all stored items) also become possible. The algorithm’s core principle is simply to add noise into the process of choosing a code, i.e., choosing a winner in each CM, which is proportional to the novelty of the input. This causes the expected intersection of the code for an input, X, with the code of each previously stored input, Y, to be proportional to the similarity of X and Y. Results demonstrating these capabilities for spatial patterns are given in the appendix.
... Thus, all codes stored in the CF or that ever become active in the CF are of size Q, one winner per CM. This modular CF structure distinguishes MSDC from numerous prior, "flat CF" SDC models, e.g., [8][9][10]. ...
Conference Paper
Full-text available
There is increasing realization in neuroscience that information is represented in the brain, e.g., neocortex, hippocampus, in the form of sparse distributed codes (SDCs), a kind of cell assembly. Two essential questions are: a) how are such codes formed on the basis of single trials; and b) how is similarity preserved during learning, i.e., how do more similar inputs get mapped to more similar SDCs.
... Thus, all codes stored in the CF or that ever become active in the CF are of size Q, one winner per CM. This modular CF structure distinguishes MSDC from numerous prior, "flat CF" sparse distributed representation (SDR) models, e.g., [28,12,16,19]. 2) The modular organization admits an extremely efficient way to compute the familiarity (G, defined shortly), a generalized similarity measure that is sensitive not just to pairwise, but to all higher-order, similarities present in the inputs, without requiring explicit comparison of a new input to stored inputs. 3) A novel, normative use of noise (randomness) in the learning process, i.e., in choosing winners in the CMs. ...
Preprint
Full-text available
There is increasing realization in neuroscience that information is represented in the brain, e.g., neocortex, hippocampus, in the form of sparse distributed codes (SDCs), a kind of cell assembly. Two essential questions are: a) how are such codes formed on the basis of single trials, and b) how is similarity preserved during learning, i.e., how do more similar inputs get mapped to more similar SDCs. I describe a novel Modular Sparse Distributed Code (MSDC) that provides simple, neurally plausible answers to both questions. An MSDC coding field (CF) consists of Q WTA competitive modules (CMs), each comprised of K binary units (analogs of principal cells). The modular nature of the CF makes possible a single-trial, unsupervised learning algorithm that approximately preserves similarity and, crucially, runs in fixed time, i.e., the number of steps needed to store an item remains constant as the number of stored items grows. Further, once items are stored as MSDCs in superposition and such that their intersection structure reflects input similarity, both fixed time best-match retrieval and fixed time belief update (updating the probabilities of all stored items) also become possible. The algorithm's core principle is simply to add noise into the process of choosing a code, i.e., choosing a winner in each CM, which is proportional to the novelty of the input. This causes the expected intersection of the code for an input, X, with the code of each previously stored input, Y, to be proportional to the similarity of X and Y. Results demonstrating these capabilities for spatial patterns are given in the appendix.
... Layer I is the top layer; layer VI is the bottom layer, situated next to the white matter. The neurons in the cortex can be categorized into two major cell types (Palm, 1982). Firstly, in layers II, III and V there exist increasingly large, excitatory pyramidal cells having a constant shape. ...
Book
This PhD thesis describes research on Artificial Neural Networks (ANNs), structures that are modelled on the biological neural networks in the brain. Chapter 1 starts with a description of neural networks identifying them as networks of nerve cells, called neurons. ANNs are computing models which simplify the neuron-processing model and neural-network structure in order to understand the basic principles of neural processing. The main theme of this dissertation is the organization of representation in ANNs. The research is divided into three parts addressing three different ANN models, i.e. Multi-Layer Neural Networks, Self-Organizing Feature Maps and oscillating Cellular Neural Networks. The Multi-Layer Neural Network (MLNN) is studied in Chapter 2. The neurons in a MLNN are organized in one or more layers, starting at an input layer and finishing with an output layer. The network can be adapted to establish a desired input-output behaviour by using a supervised-learning algorithm called Back Propagation. The input-output behaviour is determined by the weights assigned to the neurons' inputs. Normally, these weights are real-valued. However, we have studied the impact of complex-valued weights. Hence, we have developed a complex-valued back-propagation algorithm as well as a complex-valued neuron-processing model. A robot-arm experiment comparing the complex-valued MLNN with the real-valued MLNN demonstrates that in this case the complex-valued MLNN outperforms the real-valued MLNN. Hence, a complex-valued representation should be preferred to a real-valued representation. In Chapter 3 the representation of information in Kohonen's Self-Organizing Feature Maps (SOFMs) is studied. In many ways the SOFM resembles the primary-cortical areas in the brain representing input to and output from the cortex, e.g. vision, feeling, hearing, smelling, movement and speech. A two-dimensional SOFM can be interpreted as a map of features. The representation of features by neurons in the SOFM is created by self-organization. In an organized SOFM similar features are located in the same neighbourhood. Moreover, in the primary cortex, uncorrelated signals are represented in separate areas. The research shows that likewise a system of multiple two-dimensional SOFMs can be built. The organization of the representation can be visualized and its accuracy can be tested statistically. The performance of a modular SOFM is demonstrated by 3540 measurements of 28 sensors of an oil refinery of Dutch State Mines (DSM). The final part of the research is described in Chapter 4 and addresses the organization of representation in a new ANN model in which neurons have been modelled as harmonic oscillators. The model is called the Membrain model since waves in the activation pattern strongly resemble waves on a vibrating membrane. It is shown analytically that the Membrain modulates every input as a pattern of interfering activation waves. The input is projected on the linear space of natural vibration patterns of the network, each resonating with a unique frequency. These patterns are coded in the weight matrix and can be adjusted to features for identification. The projection of the input is a feature vector that is represented in the frequency spectrum of the elastic-energy signal. This property has been tested in a Membrain Cellular Neural Network (MCNN). In this network neurons are organized in a two-dimensional grid and only neighbouring neurons are connected. Moreover, neurons on an edge are connected to neurons at the opposite edge so that the Membrain surface is virtually unlimited. Thus, waves are not reflected by boundaries and, since neurons in a CNN are identical cells, the natural vibration patterns are translation invariant. A simulation is performed to demonstrate translation-invariant signature-image recognition. Finally, in Chapter 5 the research results are evaluated. In this evaluation we focus on the organization of representation in ANNs. Three different types of organization have been studied leading to four different representation forms, i.e. a distributed, a pattern-based, a modular and a temporal representation, or a combination of any of these forms. We conclude that the strength of ANNs lies in their ability to integrate efficiently the representation forms through the connectionist ensemble of connections, neurons and activation.
... Several interesting properties were studied from this model, such as its implementation of cell assemblies (Hebb, 1949; Palm, 1982) and its biological plausibility in terms of neural energy efficiency (Laughlin & Sejnowski, 2003; Lennie, 2003; Levy & Baxter, 1996). However, in this letter, we focus on its storage capacity (Amit, Gutfreund, & Sompolinsky, 1985; Gardner, 1988; Knoblauch, Palm, & Sommer, 2010; Palm, 1980, 1992). ...
Article
Full-text available
Willshaw networks are single-layered neural networks that store associations between binary vectors. Using only binary weights, these networks can be implemented efficiently to store large numbers of patterns and allow for fault-tolerant recovery of those patterns from noisy cues. However, this is only the case when the involved codes are sparse and randomly generated. In this letter, we use a recently proposed approach that maps visual patterns into informative binary features. By doing so, we manage to transform MNIST handwritten digits into well-distributed codes that we then store in a Willshaw network in autoassociation. We perform experiments with both noisy and noiseless cues and verify a tenuous impact on the recovered pattern's relevant information. More specifically, we were able to perform retrieval after filling the memory to several factors of its number of units while preserving the information of the class to which the pattern belongs.
... NAMs have various applications, e.g., for cluster analysis, speech and object recognition, robot control, or information retrieval in large databases [3,12,13,17,21,25,35,43,50,52,54,58]. Moreover, NAMs have been used as models of neural computation for various brain structures including neocortex, hippocampus, cerebellum, and mushroom body [1,4,5,14,16,22-24,26,27,37,40-42,45,51,53]. ...
Chapter
Recently, Gripon and Berrou (2011) have investigated a recurrently connected Willshaw-type auto-associative memory with block coding, a particular sparse coding method, reporting a significant increase in storage capacity compared to earlier approaches. In this study we verify and generalize their results by implementing bidirectional hetero-associative networks and comparing the performance of various retrieval methods both with block coding and without block coding. For iterative retrieval in networks of size n = 4096 our data confirm that block coding with the so-called “sum-of-max” strategy performs best in terms of output noise (which is the normalized Hamming distance between stored and retrieved patterns), whereas the information storage capacity of the classical models cannot be exceeded because of the reduced Shannon information of block patterns. Our simulation experiments also provide accurate estimates of the maximum pattern number that can be stored at a tolerated noise level of 1%. It is revealed that block coding is most beneficial for sparse activity where each pattern has only k ~ log n active units.
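For readers unfamiliar with the "sum-of-max" strategy mentioned here, the sketch below shows block-coded patterns (one active unit per block) stored in a clipped-Hebbian auto-associative matrix, together with one plausible reading of the rule: a unit's support is the number of cue blocks from which it receives at least one connection, followed by a winner-take-all step within each block. Sizes, memory load, and the erasure model are illustrative assumptions, not the chapter's simulation setup.

    import numpy as np

    rng = np.random.default_rng(5)

    B, L = 16, 64                      # B blocks of L units each; n = B*L neurons
    n = B * L

    def block_pattern():
        """One active unit per block (block code)."""
        x = np.zeros(n, dtype=np.uint8)
        for b in range(B):
            x[b * L + rng.integers(L)] = 1
        return x

    # Store M block patterns auto-associatively with clipped Hebbian learning.
    M = 800
    patterns = np.array([block_pattern() for _ in range(M)])
    W = np.zeros((n, n), dtype=np.uint8)
    for x in patterns:
        W |= np.outer(x, x)
    np.fill_diagonal(W, 0)

    def sum_of_max(cue):
        """For each unit, count the cue blocks from which it gets >= 1 connection."""
        support = np.zeros(n)
        for b in range(B):
            idx = np.flatnonzero(cue[b * L:(b + 1) * L]) + b * L
            if idx.size:
                support += W[idx].max(axis=0)   # max over this block's active cue units
        return support

    # Retrieve pattern 0 from a cue in which half of the blocks were erased.
    x0 = patterns[0]
    cue = x0.copy()
    cue[:(B // 2) * L] = 0                      # erase the first half of the blocks
    support = sum_of_max(cue)
    x_hat = np.zeros(n, dtype=np.uint8)
    for b in range(B):                          # winner-take-all within each block
        x_hat[b * L + np.argmax(support[b * L:(b + 1) * L])] = 1
    print("blocks retrieved incorrectly:",
          int(np.sum((x_hat != x0).reshape(B, L).any(axis=1))))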
... Other estimates are 10 billion [10], 13 billion [10] or 20 billion [10], but well below the often cited figure of 100 billion [7]. 8 See, e.g., Churchland and Sejnowski's figure of 100,000 km of "wiring" [7]. ...
Chapter
The article deals with current problems of IT deployment in different segments of human activity. The introduction compares the Industrial Revolution of the 18th century with today's IT revolution. The following chapter deals with the impact of the broad deployment of IT on human mental and decision processes. The next chapter discusses changes in the human capability to memorize information and to judge on the basis of memorized information, and the impact of IT on these abilities. The filtering model of human information processing introduces a simple novelty approach using two separate filters, the knowledge filter and the emotional filter. The conclusion emphasizes the threat posed by changes in the way people process information, especially the impact on people's attitudes and on simple perception management.
... See Kohonen (1988) and Palm (1982) for a presentation of cognitive complementation. Cf. ...
Article
Full-text available
Human beings define their identity chiefly by the way they present, design, and style their bodies. In doing so, individuals make statements about their affiliation with a social context. Globalization implies a change of identity among members of less industrialized cultures, since they are exposed to the effects of cultural domination. For the individual, this exposure can be all the stronger the more autonomous their culture of origin was before the confrontation. There is a one-sidedness in the cultural elements being transferred, such that the industrialized culture has a strong impact on other cultures. Global consent regarding behavioral patterns, especially bodily self-presentation and the related cognitive styles, leads to the obliteration of traditional knowledge, which is interwoven with the behavior on which identity was previously defined. The transferred cultural elements are being used for the construction of globally standardized personal identities, in which the elements relating to the visual design of the human body are of great relevance for self-definition. The loss of diverse identities in favor of belonging to the global society causes a series of problems that can be demonstrated in functional models. These models, in turn, can support the planning of intervention strategies and rescue work. This article analyzes the role of the body in the destabilizing effects of cultural change and discusses possibilities for intervention.
... theoretical studies (Wickelgren, 1999; Palm, 1982; Byrne & Huyck, 2010) indicate that a neural system with the ability to form all described memory relations has an algorithmic advantage in processing the stored information. Furthermore, the neuronal dynamics resulting from interconnected memory representations match experimental results on the psychological (Romani, Pinkoviezky, Rubin, & Tsodyks, 2013) and single-neuron level (Griniasty, Tsodyks, & Amit, 1993; Amit, Brunel, & Tsodyks, 1994). ...
Article
Full-text available
The neuronal system exhibits the remarkable ability to dynamically store and organize incoming information into a web of memory representations (items), which is essential for the generation of complex behaviors. Central to memory function is that such memory items must be (1) discriminated from each other, (2) associated to each other, or (3) brought into a sequential order. However, how these three basic mechanisms are robustly implemented in an input-dependent manner by the underlying complex neuronal and synaptic dynamics is still unknown. Here, we develop a mathematical framework, which provides a direct link between different synaptic mechanisms, determining the neuronal and synaptic dynamics of the network, to create a network that emulates the above mechanisms. Combining correlation-based synaptic plasticity and homeostatic synaptic scaling, we demonstrate that these mechanisms enable the reliable formation of sequences and associations between two memory items still missing the capability for discrimination. We show that this shortcoming can be removed by additionally considering inhibitory synaptic plasticity. Thus, the here-presented framework provides a new, functionally motivated link between different known synaptic mechanisms leading to the self-organization of fundamental memory mechanisms.
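A minimal rate-model sketch of the two ingredients highlighted in this abstract, correlation-based (Hebbian) weight growth combined with homeostatic synaptic scaling, is given below; scaling is simplified to multiplicative normalization of each unit's total incoming weight, and the stimuli, rates, and constants are illustrative stand-ins rather than the paper's actual equations. Two overlapping stimuli end up associated through their shared units, while weights between their non-overlapping parts stay near zero.

    import numpy as np

    rng = np.random.default_rng(6)

    n = 20
    W = np.abs(rng.normal(0.0, 0.01, (n, n)))    # small random excitatory weights
    np.fill_diagonal(W, 0.0)
    row_budget = 1.0                              # total incoming weight kept fixed per unit

    # Two overlapping stimuli define two candidate memory items (overlap: units 6-7).
    stim_A = np.zeros(n); stim_A[:8] = 1.0
    stim_B = np.zeros(n); stim_B[6:14] = 1.0

    eta = 0.05
    for t in range(400):
        r = stim_A if t % 2 == 0 else stim_B      # clamp rates to the alternating stimuli
        W += eta * np.outer(r, r)                 # correlation-based (Hebbian) growth
        np.fill_diagonal(W, 0.0)
        # Homeostatic synaptic scaling, simplified to multiplicative normalization
        # of every unit's summed incoming weight.
        W *= row_budget / W.sum(axis=1, keepdims=True)

    print("mean weight within item A (non-overlap part):", round(float(W[:6, :6].mean()), 3))
    print("mean weight from B-only units onto A-only units:", round(float(W[:6, 8:14].mean()), 3))
    print("mean weight from overlap units onto A-only and B-only units:",
          round(float(W[6:8, :6].mean()), 3), round(float(W[6:8, 8:14].mean()), 3))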
... In this study, we will explore another possibility, namely that the Hebbian cell assemblies that store the memories (Hebb, 1949; Palm, 1982) ... [Figure caption] Neuron j is connected to neuron i at S_max potential synaptic locations (here S_max = 7). (Inset) Non-functional potential synapses (dashed) become functional synapses (solid) with a constant rate b and are deleted with a weight-dependent rate d(w_ij,k). ...
Article
Full-text available
Long-term memories are believed to be stored in the synapses of cortical neuronal networks. However, recent experiments report continuous creation and removal of cortical synapses, which raises the question how memories can survive on such a variable substrate. Here, we study the formation and retention of associative memory in a computational model based on Hebbian cell assemblies in the presence of both synaptic and structural plasticity. During rest periods, such as may occur during sleep, the assemblies reactivate spontaneously, reinforcing memories against ongoing synapse removal and replacement. Brief daily reactivations during rest-periods suffice to not only maintain the assemblies, but even strengthen them, and improve pattern completion, consistent with offline memory gains observed experimentally. While the connectivity inside memory representations is strengthened during rest phases, connections in the rest of the network decay and vanish thus reconciling apparently conflicting hypotheses of the influence of sleep on cortical connectivity.
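The interplay described in this abstract can be caricatured in a few dozen lines: synapses between assembly neurons are briefly strengthened during rest reactivations and therefore escape weight-dependent pruning, while background synapses keep turning over. The rates, the decay factor, and the reactivation schedule below are illustrative assumptions, not the parameters of the published model.

    import numpy as np

    rng = np.random.default_rng(7)

    n = 200
    assembly = np.arange(30)                     # one stored Hebbian cell assembly
    inside = np.ix_(assembly, assembly)

    p0 = 0.1                                     # initial connectivity
    conn = rng.random((n, n)) < p0               # which potential synapses are realized
    np.fill_diagonal(conn, False)
    w = conn * 0.5                               # weights of realized synapses

    create_rate = 0.01                           # constant rate of new synapse creation
    for day in range(200):
        # Brief reactivation of the assembly (e.g., during rest): Hebbian
        # strengthening of existing synapses between co-active neurons.
        w[inside] = np.where(conn[inside], np.minimum(w[inside] + 0.2, 1.0), 0.0)
        # Weight-dependent removal: weak synapses are pruned more often.
        prune = conn & (rng.random((n, n)) < 0.2 * (1.0 - w))
        conn &= ~prune
        w[~conn] = 0.0
        # Constant-rate creation of new, weak synapses at empty sites.
        new = (~conn) & (rng.random((n, n)) < create_rate)
        np.fill_diagonal(new, False)
        conn |= new
        w[new] = 0.1
        # Weights outside the reactivated assembly slowly decay.
        outside = conn.copy()
        outside[inside] = False
        w[outside] *= 0.99

    rest = np.ones(n, dtype=bool)
    rest[assembly] = False
    print("connectivity inside the assembly:", round(float(conn[inside].mean()), 2))
    print("connectivity among the remaining neurons:", round(float(conn[np.ix_(rest, rest)].mean()), 2))

In this toy run the assembly's internal connectivity is maintained and even grows despite ongoing synapse turnover, while background connectivity settles at a lower equilibrium, in the spirit of the maintenance effect the article describes.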
... To regulate and control activity in the network, local and area-specific inhibition is implemented (Palm, 1982; Bibbig et al., 1995; Wennekers et al., 2006), realizing, respectively, local and global competition mechanisms (Duncan, 1996, 2006). More precisely, in Equation (B1) the input V_In(e,t) to each excitatory cell of the same area includes an area-specific ("global") inhibition term k_G·ω_G(e,t) [with k_G a constant and ω_G(e,t) defined below] subtracted from the total I/EPSPs (postsynaptic potentials) V_In in input to the cell; this regulatory mechanism ensures that area (and network) activity is maintained within physiological levels (Braitenberg and Schüz, 1998):
Article
Full-text available
One of the most controversial debates in cognitive neuroscience concerns the cortical locus of semantic knowledge and processing in the human brain. Experimental data revealed the existence of various cortical regions relevant for meaning processing, ranging from semantic hubs generally involved in semantic processing to modality-preferential sensorimotor areas involved in the processing of specific conceptual categories. Why and how the brain uses such complex organization for conceptualization can be investigated using biologically constrained neurocomputational models. Here, we improve pre-existing neurocomputational models of semantics by incorporating spiking neurons and a rich connectivity structure between the model 'areas' to mimic important features of the underlying neural substrate. Semantic learning and symbol grounding in action and perception were simulated by associative learning between co-activated neuron populations in frontal, temporal and occipital areas. As a result of Hebbian learning of the correlation structure of symbol, perception and action information, distributed cell assembly circuits emerged across various cortices of the network. These semantic circuits showed category-specific topographical distributions, reaching into motor and visual areas for action-and visually-related words, respectively. All types of semantic circuits included large numbers of neurons in multimodal connector hub areas, which is explained by cortical connectivity structure and the resultant convergence of phonological and semantic information on these zones. Importantly, these semantic hub areas exhibited some category-specificity, which was less pronounced than that observed in primary and secondary modality-preferential cortices. The present neurocomputational model integrates seemingly divergent experimental results about conceptualization and explains both semantic hubs and category-specific areas as an emergent process causally determined by two major factors: neuroanatomical connectivity structure and correlated neuronal activation during language learning.
... However, it remains unclear whether the interaction of synaptic and homeostatic plasticity also enables the formation of further memory relations described before. Interestingly, several theoretical studies [7,22,23] indicate that a neural system with the ability to form all described memory relations has an algorithmic advantage in processing the stored information. Furthermore, the neuronal dynamics resulting from interconnected memory representations match experimental results on the psychological [24] and single-neuron level [25,26]. ...
Preprint
Full-text available
The brain of higher-order animals continuously and dynamically learns and adapts according to variable, complex environmental conditions. For this, the neuronal system of an agent exhibits the remarkable ability to dynamically store and organize the information of plenty of different environmental stimuli into a web of memories, which is essential for the generation of complex behaviors. The basic structures of this web are the functional organizations between two memories such as discrimination, association, and sequences. However, how these basic structures are formed robustly in an input-dependent manner by the underlying, complex and high-dimensional neuronal and synaptic dynamics is still unknown. Here, we develop a mathematical framework which reduces the complexity in several stages such that we obtain a low-dimensional mathematical description. This provides a direct link between the involved synaptic mechanisms, determining the neuronal and synaptic dynamics of the network, and the ability of the network to form diverse functional organizations. Given two widely known synaptic mechanisms, correlation-based synaptic plasticity and homeostatic synaptic scaling, we use this new framework to identify that the interplay of these mechanisms enables the reliable formation of sequences and associations between two memory representations, while missing the important functional organization of discrimination. We can show that this shortcoming can be compensated by considering a third mechanism, inhibitory synaptic plasticity. Thus, the here-presented framework and results provide a new link between diverse synaptic mechanisms and emerging functional organizations of memories. Furthermore, given our mathematical framework, one can now investigate the principles underlying the formation of the web of memories.
AUTHOR SUMMARY Higher-order animals are permanently exposed to a variety of environmental inputs which have to be processed and stored such that the animal can react appropriately. Thereby, the ongoing challenge for the neuronal system is to continuously store novel and meaningful stimuli and, dependent on their content, to integrate them into the existing web of knowledge or memories. The smallest organizational entity of such a web of memories is described by the functional relation of two interconnected memories: they can be either unrelated (discrimination), mutually related (association), or uni-directionally related (sequence). However, the neuronal and synaptic dynamics underlying the formation of such structures are mainly unknown. To investigate possible links between physiological mechanisms and the organization of memories, in this work, we develop a general mathematical framework enabling an analytical approach. Thereby, we show that the well-known mechanisms of synaptic plasticity and homeostatic scaling, in conjunction with inhibitory synaptic plasticity, enable the reliable formation of all basic relations between two memories. This work provides a further step in the understanding of the complex dynamics underlying the organization of knowledge in neural systems.
... The goal of this research is the emulation of natural neural networks (see [ANDE83], [CAIA61], [HECH88], [HOPF84], [MACG77], [MINS54], [PALM82], [VEMUR87]). It does not seem sensible to carry out this emulation in every detail; one can restrict oneself to relatively few basic principles of the way nerve cells operate (see Figure 1.1). ...
Preprint
Full-text available
Two models of neurons are presented, T-neurons and S-neurons. Both models allow only discrete neuron states. While for T-neurons the state to be adopted is listed in a table as a function of the input assignment, S-neurons have a more complex structure. For them, the values applied to a neuron's inputs are first multiplied by weight factors; these products then determine, via the local field function, the value of the local field of the neuron under consideration. The output function finally maps the local field to the neuron state. The combinatorial explosion of the table size that occurs for T-neurons as the number of inputs grows is the motivation for developing the S-neuron model. T-neurons and S-neurons have in common that they permit the implementation of an arbitrary mapping between neuron input assignment and neuron state (universality). The proof of this property is constructive. This makes possible the targeted synthesis of neural networks with specified properties. The problem of learning such a mapping is treated briefly; the work shows that T-neurons pose no difficulties in this respect, whereas specifying corresponding learning rules for S-neurons must be regarded as more complicated. It could be shown, however, that in the S-neuron model every transition function can be reached by modifying the weight factors alone. Also investigated are modifications of neural networks that leave the network behavior essentially unaffected (morphisms, such as renamings of neuron states). While such modifications are relatively freely possible for networks consisting of T-neurons, S-neurons generally require quite strict additional conditions. Furthermore, there are differences between T-neurons and S-neurons when a neuron is no longer to determine its state deterministically as a function of the input assignment, but is instead to choose stochastically between several states (specification of a probability distribution). In contrast to T-neurons, for which these distributions can be freely specified independently for each input assignment, the S-neuron model does not permit such a procedure.
... Threads can be grouped into disjoint sets, or fibers, to model neural assemblies (Palm 1982[17]), and discrete weights can be attached to pairs of communicating threads belonging to the same fiber. The interaction of threads within a given fiber obeys various communication protocols. ...
Conference Paper
Full-text available
Brain simulations as performed today in computational neuroscience rely on analytical methods, i.e., mainly differential equations, to model physical processes. The question then arises: will cognitive abilities of real brains spontaneously emerge from these simulations, or should they be encoded and executed on top of them, similarly to the way computer software runs on hardware? Towards this latter goal, a new framework linking neural dynamics to behaviors through a virtual machine has been reported and is used here to model brain functionalities in two domains, namely evolutive cases of analogical reasoning and a simple case of meta-cognition. It is argued that this approach to brain modeling could lead to an actualization of the concept of behaviorism as a model for the development of cognition.
Article
Concrete symbols (e.g., sun, run) can be learned in the context of objects and actions, thereby grounding their meaning in the world. However, it is controversial whether a comparable avenue to semantic learning exists for abstract symbols (e.g., democracy). When we simulated the putative brain mechanisms of conceptual/semantic grounding using brain-constrained deep neural networks, the learning of instances of concrete concepts outside of language contexts led to robust neural circuits generating substantial and prolonged activations. In contrast, the learning of instances of abstract concepts yielded much reduced and only short-lived activity. Crucially, when conceptual instances were learned in the context of wordforms, circuit activations became robust and long-lasting for both concrete and abstract meanings. These results indicate that, although the neural correlates of concrete conceptual representations can be built from grounding experiences alone, abstract concept formation at the neurobiological level is enabled by and requires the correlated presence of linguistic forms.
Chapter
In this chapter we shall begin consideration of one of the prevailing hypotheses concerning the function of network oscillations in the brain: that oscillations are used for communication between spatially separated groups of neurons (Wilson et al., F1000Res 7:F1000 Faculty Rev-1960, 2018; Cannon et al., Eur J Neurosci 39:705–719, 2014). Cannon et al. (Eur J Neurosci 39:705–719, 2014) emphasize – correctly, we believe – that proper understanding of oscillatory communication requires detailed analysis of the underlying cellular mechanisms, rather than just describing the frequency or the areas involved. Besides mechanisms, one must additionally understand, however, just what it is that is to be communicated. Here again, we shall pursue a prevailing hypothesis that it is the contents of cell assemblies, that is, the identities of the constituent cells – which one network needs to convey to another network. That is the underlying function, so we believe, of most of the forebrain.
Chapter
The human brain and its ability to associate is one of the most fascinating things in nature. The long-known concept of binary neural associative memory offers the possibility to build a very simple hardware architecture that allows direct association. In a BINAM the presented input is associated with the stored content of the memory, without the need for addressing. The BINAM is moreover a fault-tolerant concept: erroneous input vectors will usually still result in a correct output. In this work, we propose a modern hardware architecture of a BINAM on the VCU1525 FPGA board. We implemented the architecture in VHDL as a scalable, modular, generic, and easy-to-use design. For the evaluation, designs in the range of 8,000 to 740,000 neurons, with equal input and output vector size, have been generated and tested on the FPGA board. Currently, a maximum clock frequency of ~200 MHz with a resource utilization of only ~33% CLBs, ~22% LUTs, and ~10% registers can be achieved. Maximum times for the storage and association process for all designs are stated in this paper.
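To make the storage and association process described above concrete, here is a minimal software sketch of a Willshaw/Palm-type binary associative memory. It is an illustration under our own assumptions, not the VHDL architecture of the paper, and the class and method names are invented for the example.

```python
# Illustrative sketch of a binary neural associative memory (BINAM):
# pairs of binary vectors are stored in a binary weight matrix, and recall
# thresholds the dendritic sums at the number of active input bits.

import numpy as np

class BinaryAssociativeMemory:
    def __init__(self, n_in, n_out):
        self.W = np.zeros((n_in, n_out), dtype=np.uint8)   # binary weight matrix

    def store(self, x, y):
        """Store a pair by OR-ing the outer product of x and y into W."""
        self.W |= np.outer(x, y).astype(np.uint8)

    def recall(self, x):
        """Associate: compare the column sums with the number of active inputs."""
        sums = x @ self.W
        return (sums >= x.sum()).astype(np.uint8)

mem = BinaryAssociativeMemory(8, 8)
x = np.array([1, 0, 1, 0, 0, 0, 0, 0], dtype=np.uint8)
y = np.array([0, 1, 0, 0, 1, 0, 0, 0], dtype=np.uint8)
mem.store(x, y)
noisy = x.copy(); noisy[2] = 0       # erroneous input: one active bit missing
print(mem.recall(noisy))             # still recovers y (fault tolerance)
```

No address is ever computed: the input pattern itself selects the stored content, which is the "direct association" property the abstract refers to.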
Presentation
Full-text available
The aim of the course is to confront the synthetic approaches of computer science and engineering with one of the greatest challenges of empirical natural science: "How do nervous systems work?" To this end, both largely settled and merely conjectured biological information-processing processes are presented, with particular attention to boundary conditions that are unusual in the sciences mentioned. The latter require engagement with fundamental biological principles, such as genesis and behavior, on the one hand, and with neuroscientific methods on the other. — Brain research is introduced by way of its empirical disciplines and their methods. The system- and signal-theoretic analyses and descriptions are then always given against the background of the limited experimental possibilities and the complexity of even simple nervous systems. Comparisons of biological structures and capabilities with those of selected artificial neural networks illustrate the state and perspective of research in the theoretical neurosciences. — Central to the course are formalizations of processing within, and signal exchange between, nerve cells. Here, partly on historical grounds, basic types of formal neurons are introduced, from the McCulloch/Pitts concept to elaborate spike-generating forms. Building on this, system-level statements about cell assemblies are derived and explanatory approaches to elementary neural signal-processing tasks are discussed.
Chapter
In biological research, it is common to assume that each organ of an organism serves a definite purpose. The purpose of the brain seems to be the coordination and processing of information which the animal obtains through its sense organs about the outside world and about its own internal state. An important aspect of this is the storage of information in memory and the use of the stored information in connection with the present sensory stimuli. Thus, the brain deals with information, and therefore after Shannon’s formal definition of information, it seemed most appropriate to use this new theory in brain research.
Article
Full-text available
In the global war for talent, traditional recruiting methods are failing to cope with the talent competition, so employers need the right recruiting tools to fill open positions. First, we explore how talent acquisition has transitioned from digital 1.0 to 3.0 (AI-enabled) as digital tools redesign business. The technology of artificial intelligence has facilitated the daily work of recruiters and improved recruitment efficiency. Further, the study shows that AI plays an important role in each stage of recruitment, such as recruitment promotion, job search, application, screening, assessment, and coordination. Next, after interviews with AI recruitment stakeholders (recruiters, managers, and applicants), the study discusses their acceptance criteria for each recruitment stage; stakeholders also raised concerns about AI recruitment. Finally, we suggest that managers need to be concerned about the cost of AI recruitment, legal privacy, recruitment bias, and the possibility of replacing recruiters. Overall, the study answers the following questions: (1) How artificial intelligence is used in various stages of the recruitment process. (2) Stakeholder (applicants, recruiters, managers) perceptions of AI application in recruitment. (3) Suggestions for managers to adopt AI in recruitment. In general, the discussion will contribute to the study of the use of AI in recruitment, as well as providing recommendations for implementing AI recruitment in practice.
Chapter
Neural networks normally used to model associative memory can be regarded as consisting of dissipative units (the neurons) that interact in such a way that the network itself admits a global energy or Liapunov function. The network’s global dynamics is such that the system always evolves “downhill” in the energy landscape. In most models for associative memory, the individual neurons are described as one-dimensional, dynamical systems. In the present contribution, we explore the possibility of extending the structural scheme of associative memory neural networks to more general scenarios, where the units (that is, the neurons) are modeled as multi-dimensional, dissipative systems. With that aim in mind, we advance a coupling scheme for dissipative, multi-dimensional units, that generates dynamical features akin to those required when modeling associative memory. Keywords: Neural networks, Associative memory, Liapunov functions, Multidimensional neurons
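The "downhill" evolution in an energy landscape mentioned above can be illustrated with the classical one-dimensional (Hopfield-type) case that the chapter takes as its starting point. The following is only a sketch under stated assumptions; the multi-dimensional dissipative units proposed in the chapter are not modelled here.

```python
# Minimal Hopfield-type illustration of a Liapunov (energy) function:
# asynchronous threshold updates with a symmetric, zero-diagonal weight
# matrix never increase the energy, so the state slides "downhill".

import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(2, 20))           # stored memories
W = (patterns.T @ patterns) / patterns.shape[1]        # Hebbian weights
np.fill_diagonal(W, 0)

def energy(s):
    return -0.5 * s @ W @ s                            # Liapunov function

s = patterns[0].copy()
s[:3] *= -1                                            # corrupt the first memory
e_start = energy(s)
for _ in range(200):                                   # asynchronous updates
    i = rng.integers(len(s))
    s[i] = 1 if W[i] @ s >= 0 else -1                  # each step never raises E
print(e_start, energy(s), np.array_equal(s, patterns[0]))  # usually recovers it
```

Each stored pattern sits in a local minimum of the energy, which is why retrieval from a corrupted cue amounts to relaxation in the landscape.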
Chapter
The dream of the intelligent machine is the vision of creating something that does not depend on having people preprogram its problem‐solving behaviour. Put another way, artificial intelligence (AI) should not seek to merely solve problems but should rather seek to solve the problem of how to solve problems. This chapter seeks to provide a focused explication of particular methods that indeed allow machines to improve themselves by learning from experience and to explain the fundamental theoretical and practical considerations of applying them to problems of machine learning. To begin this explication, the discussion first goes back to the Turing Test. The acceptance of the Turing Test focused attention on mimicking human behaviour. A human may be described as an intelligent problem‐solving machine. The idea of constructing an artificial brain or neural network has been proposed many times.
Article
A difficulty with the connectionist idea of representation in neural networks of the mammalian brain is that a single neuron cannot make a sufficient number of connections to influence the functional organization within networks of realistic size. Although cell assemblies can form to represent individual stimuli and responses, the formation of assemblies capable of recognizing an environment as a whole is unlikely. Yet such recognition is necessary for many context-dependent types of behavior. In the present paper, a hypothesis of cortico-hippocampal interaction is suggested, which can resolve this difficulty. It involves the establishing of patterns of connectivity between the cortex and hippocampus, on the basis of temporal aspects of connectivity (i.e., axonal conduction delays) as well as spatial aspects. By means of both the available repertoire of axonal conduction delays and Hebbian processes for synaptic modification, loops of connectivity are selected which carry neural activity resonating at the frequency of the hippocampal theta rhythm. Patterns of such loops encode the environment as a whole. The relation between the hippocampal theta rhythm and both general behavior and learning processes is thus clarified.
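As a purely illustrative toy, the selection principle described in this abstract, namely keeping loops whose conduction delays let activity return in phase with the theta rhythm, can be sketched as follows. The delay values, the tolerance, and the function name are assumptions made for the example, not taken from the paper.

```python
# Toy sketch: a closed cortico-hippocampal loop is retained when its total
# axonal conduction delay matches one theta period, so that returning activity
# arrives in phase and Hebbian plasticity can reinforce the loop.

THETA_HZ = 8.0
PERIOD_MS = 1000.0 / THETA_HZ          # ~125 ms per theta cycle
TOLERANCE_MS = 10.0

# Each candidate loop is a list of axonal conduction delays (ms) along its path.
candidate_loops = {
    "loop_A": [40.0, 45.0, 38.0],      # total ~123 ms -> resonates with theta
    "loop_B": [20.0, 25.0, 15.0],      # total  ~60 ms -> out of phase
    "loop_C": [60.0, 70.0],            # total ~130 ms -> resonates with theta
}

def resonates(delays, period=PERIOD_MS, tol=TOLERANCE_MS):
    """Keep a loop if activity returns within `tol` ms of one theta period."""
    return abs(sum(delays) - period) <= tol

selected = {name for name, d in candidate_loops.items() if resonates(d)}
print(selected)                         # {'loop_A', 'loop_C'}
```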
Article
Full-text available
Neural network models are potential tools for improving our understanding of complex brain functions. To address this goal, these models need to be neurobiologically realistic. However, although neural networks have advanced dramatically in recent years and even achieve human-like performance on complex perceptual and cognitive tasks, their similarity to aspects of brain anatomy and physiology is imperfect. Here, we discuss different types of neural models, including localist, auto-associative, hetero-associative, deep and whole-brain networks, and identify aspects under which their biological plausibility can be improved. These aspects range from the choice of model neurons and of mechanisms of synaptic plasticity and learning to implementation of inhibition and control, along with neuroanatomical properties including areal structure and local and long-range connectivity. We highlight recent advances in developing biologically grounded cognitive theories and in mechanistically explaining, on the basis of these brain-constrained neural models, hitherto unaddressed issues regarding the nature, localization and ontogenetic and phylogenetic development of higher brain functions. In closing, we point to possible future clinical applications of brain-constrained modelling. Neural network models have potential for improving our understanding of brain functions. In this Perspective, Pulvermüller and colleagues examine various aspects of such models that may need to be constrained to make them more neurobiologically realistic and therefore better tools for understanding brain function.
Article
A novel form of neurocomputing allows machines to generate new concepts along with their anticipated consequences, all encoded as chained associative memories. Knowledge is accumulated by the system through direct experience as network chaining topologies form in response to various environmental input patterns. Thereafter, random disturbances to the connections joining these nets promote the formation of alternative chaining topologies representing novel concepts. The resulting ideational chains are then reinforced or weakened as they incorporate nets containing memories of impactful events or things. Such encodings of entities, actions, and relationships as geometric forms composed of artificial neural nets may well suggest how the human brain summarizes and appraises the states of nearly a hundred billion cortical neurons. It may also be the paradigm that allows the scaling of synthetic neural systems to brain-like proportions to achieve sentient artificial general intelligence (SAGI).
Chapter
Full-text available
According to some proponents, artificial intelligence seems to be a presupposition for machine autonomy, whereas autonomy and conscious machines are the presupposition for singularity (cf. Logan, Information 8: 161, 2017); further on, singularity is a presupposition for transhumanism. The chapter analyses the different forms of transhumanism and its underlying philosophical anthropology, which is reductionist as well as naturalistic. Nevertheless, it can be shown that transhumanism has some (pseudo-)religious borderlines. Besides this, massive interests behind the proponents' arguments can be identified. Due to these hidden business models, it would be a good idea to discuss objections and rules to hedge against an uncontrolled development of these technologies.
Chapter
The article deals with current problems of IT and sets out a new view of semantic memory as a basis for artificial intelligence systems. Explaining an object as a language, rather than as data, leads to linked semantic objects and to a memory-sizing paradox.
Article
Full-text available
Synfire rings are neural circuits capable of conveying synchronous, temporally precise and self-sustained activities in a robust manner. We propose a cell assembly based paradigm for abstract neural computation centered on the concept of synfire rings. More precisely, we empirically show that Hodgkin–Huxley neural networks modularly composed of synfire rings are automata complete. We provide an algorithmic construction which, starting from any given finite state automaton, builds a corresponding Hodgkin–Huxley neural network modularly composed of synfire rings and capable of simulating it. We illustrate the correctness of the construction on two specific examples. We further analyse the stability and robustness of the construction as a function of changes in the ring topologies as well as with respect to cell death and synaptic failure mechanisms, respectively. These results establish the possibility of achieving abstract computation with bio-inspired neural networks. They might constitute a theoretical ground for the realization of biological neural computers.
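The flavour of the algorithmic construction, automaton states realized by self-sustaining rings whose activity is handed from ring to ring on each input symbol, can be conveyed by a schematic sketch. The Hodgkin–Huxley dynamics of the paper are deliberately omitted, and the class, state, and symbol names below are our own illustrative assumptions.

```python
# Schematic sketch of the ring-based automaton idea: each automaton state is
# represented by one "ring" whose sustained activity marks the current state;
# an input symbol switches activity from the current ring to its successor.

class RingAutomaton:
    def __init__(self, transitions, start):
        self.transitions = transitions   # dict: (state_ring, symbol) -> state_ring
        self.active_ring = start         # exactly one ring self-sustains at a time

    def step(self, symbol):
        # Activity in the currently firing ring, gated by the input symbol,
        # ignites the successor ring and extinguishes itself.
        self.active_ring = self.transitions[(self.active_ring, symbol)]
        return self.active_ring

# Example: a two-state automaton tracking the parity of 1s seen so far.
rings = RingAutomaton(
    transitions={("even", 0): "even", ("even", 1): "odd",
                 ("odd", 0): "odd",   ("odd", 1): "even"},
    start="even",
)
for bit in [1, 0, 1, 1]:
    rings.step(bit)
print(rings.active_ring)                 # 'odd' -> three 1s seen so far
```

The paper's contribution is to show that such state-holding and hand-over can be realized robustly by actual synfire rings built from biophysically modelled neurons.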
Chapter
This chapter aims to illustrate the reception, encoding, and storage of information—including its modification through learning—in the brain of mammals, especially humans. The presentation of these processes begins on the level of a black-box analysis, that is, with a description of behavior, including patient studies that measure the capabilities and limitations of the entire system. The next level investigates the brain and its function based on the anatomy and histology especially of the human cortex, revealing a functional specialization of cortex as already found by Brodmann (Vergleichende Lokalisationslehre der Grosshirnrinde in ihren Prinzipien dargestellt auf Grund des Zellenbaues. Barth, 1909). This functional specialization can also be seen in the various imaging studies based on the recording of the electric or magnetic brain activity or on changes in blood flow in the brain (functional magnetic resonance imaging). The third level is that of single cell recordings, mostly in animals, which shows the properties of single neurons and small neuronal assemblies. The last and most basic level considered is that of the biochemistry of information processing. On this level, description of the cellular mechanisms underlying information storage is most prominent. Modeling will help understand the experimental results on each of these levels.
Chapter
Automation of production and assembly processes is one of the important tasks in micromechanics. To build a totally automated micro factory or to automate solar concentrator production and assembly, it is necessary to develop a computer vision system that can replace an operator. A computer vision system may have several functions, for example, recognition of objects in the image of the working area, recognition of the mutual position of several components in the image, and measurement of component size. We select several tasks connected with micromechanics and automation, for example, size measurement of micro components. The object of measurement is a micro piston. Micro pistons are components of the heat engines that convert heat energy from the solar concentrator into electrical energy. The goal of this work is the research and development of the LIRA (Limited Receptive Area) neural network and its application to measuring the micro piston size. To obtain micro piston sizes, it is necessary to recognize their boundaries in the image. We propose to use the LIRA neural network to extract and classify piston boundaries. In this chapter, we describe and analyze the preliminary results of applying LIRA to micro piston boundary recognition. Experiments with the recognition system have given us the information needed to improve the structure and parameters of the developed neural network. Experiments with the LIRA neural network showed the need to accelerate its processing time by implementing the neural network algorithms in hardware such as Altera FPGAs. The advantage of the neural network is its parallel structure and the possibility of training; an FPGA allows the implementation of these parallel algorithms in a single device. This chapter also contains a brief description of ensemble neural networks and some results on storage capacity estimation. We propose to apply this ensemble neural network to the problem of selecting an adequate maneuver for a robot manipulator or a mobile robot.
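As a rough sketch under stated assumptions, the "limited receptive area" idea can be rendered as random local windows producing binary features that feed a simple linear classifier; the parameters, the training rule, and all names below are illustrative and not taken from the chapter.

```python
# Sketch of a LIRA-style classifier: each hidden unit looks only at a small
# random window of the image and emits a binary feature; a perceptron-type
# rule is trained on these sparse features (e.g., boundary vs. background).

import numpy as np

rng = np.random.default_rng(1)
H, W, N_FEATURES, WIN = 16, 16, 200, 4

# Each feature: a random window position plus a random threshold on its mean.
windows = [(rng.integers(0, H - WIN), rng.integers(0, W - WIN), rng.random())
           for _ in range(N_FEATURES)]

def lira_features(img):
    feats = np.zeros(N_FEATURES)
    for k, (r, c, thr) in enumerate(windows):
        feats[k] = 1.0 if img[r:r + WIN, c:c + WIN].mean() > thr else 0.0
    return feats

weights = np.zeros(N_FEATURES)           # linear classifier on binary features

def train_step(img, label, lr=0.1):
    """Error-correction update: adjust weights only when the prediction is wrong."""
    global weights
    f = lira_features(img)
    pred = 1 if weights @ f > 0 else 0
    weights += lr * (label - pred) * f

img = rng.random((H, W))                 # stand-in for a micro piston image patch
train_step(img, label=1)
```

Because every feature depends only on a small window, the feature extraction is embarrassingly parallel, which is the property that makes an FPGA implementation attractive.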
Chapter
In this final chapter, the theoretical considerations of the previous chapters are applied to real-life situations. Thus, the perspectives gained on what cultures are; if, when and why they change; how they can become dominant; and what globalisation means now help to find the best practices for planning and carrying out field research in indigenous contexts. This is done on the basis of relevant articles for the United Nations Declaration on the Rights of Indigenous Peoples, which are examined under the aspect of our topic. Since comprehensive preparation for field research in indigenous contexts is indispensable, it is set out in detail how to conduct education and training, aiming at optimum transcultural competency of the researchers-to-be, as well as of others, who want to be fit for sustainable intercultural work. While preparing the field research, it is necessary to understand the semiotic functions of the indigenous people’s descriptions in already available texts and pictures. Such descriptions need to be scrutinised critically, taking the relevant psychological mechanisms of their origination into consideration. Also, the socio-cognitive functioning of scientists is analysed, resorting to functional models of intercultural processes. From such meta-perspectives and a Theory of Mind approach, the role of the researchers’ culture of origin can be taken into account with regard to their perspective-taking, the effects of their expectations and cultural distance, so that irrationalities can be avoided. Finally, practical issues are addressed, including healthcare, and advice is given as how to concretely behave in particular circumstances in indigenous settings.
Chapter
Although the suggestion that neurons in the human brain may act in functional groups reaches back at least to the beginning of the twentieth century (when Charles Sherrington published his The Integrative Action of the Nervous System [85]), it was in Donald Hebb's classic Organization of Behavior that the cell-assembly concept was first carefully formulated. Largely neglected for several decades [13], Hebb's theory of neural assemblies has more recently begun to attract broad interest from the neuroscience community. Why, one wonders, was such a reasonable suggestion so long ignored? Several answers come to mind.