The optimal probability of activation and the corresponding performance are studied for three designs of Sparse Distributed Memory: Kanerva's original design, Jaeckel's selected-coordinates design, and Karlsson's modification of Jaeckel's design. We assume that the hard locations (in Karlsson's case, the masks), the storage addresses, and the stored data are randomly chosen, and we consider different levels of random noise in the reading address.

Keywords: Sparse Distributed Memory, Probability of Activation, Performance

Contents
1. Introduction
2. General definitions and assumptions
3. The error probability and the signal-to-noise ratio
4. Determination of the signal-to-noise ratio
5. Discussion of the normal approximation of Z
6. Discussion of the randomness assumptions for hard locations, storage addresses etc.
7. Numerical calculations
8. Summary and conclusions
References
Tables

Affiliations: 1 Real World Computing Partnership; 2 Swedish Institute of C...
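The two activation rules compared in this study can be sketched briefly. The following is a minimal illustration, not the paper's analysis: in Kanerva's design a hard location fires when its Hamming distance to the address is within a radius, while in Jaeckel's selected-coordinates design it fires when the address matches the location's few fixed coordinates. All sizes (`n`, `m`, `k`, `r`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 1000          # address length and number of hard locations (assumed)

# Kanerva's design: each hard location is a full random n-bit address;
# it is activated when its Hamming distance to the read/write address
# is at most a radius r.
hard = rng.integers(0, 2, size=(m, n))

def activate_kanerva(x, r):
    dist = np.sum(hard != x, axis=1)      # Hamming distance to every location
    return dist <= r                      # boolean activation vector

# Jaeckel's selected-coordinates design: each location fixes k randomly
# selected coordinates to random bit values and ignores the rest; it is
# activated when the address agrees on all k selected coordinates.
k = 10
coords = np.array([rng.choice(n, size=k, replace=False) for _ in range(m)])
values = rng.integers(0, 2, size=(m, k))

def activate_jaeckel(x):
    return np.all(x[coords] == values, axis=1)

x = rng.integers(0, 2, size=n)
p_kanerva = activate_kanerva(x, r=111).mean()  # empirical probability of activation
p_jaeckel = activate_jaeckel(x).mean()         # expected fraction is about 2**-k
```

The probability of activation is the design parameter being optimized: in Kanerva's design it is set by the radius `r`, in Jaeckel's by the number of selected coordinates `k`.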
A more efficient way of reading the SDM memory is presented. This is accomplished by using implicit information, hitherto not utilized, to find the information-carrying units and thus removing unnecessary noise when reading the memory.
Described here is sparse distributed memory (SDM) as a neural-net associative memory. It is characterized by two weight matrices and by a large internal dimension: the number of hidden units is much larger than the number of input or output units. The first matrix, A, is fixed and possibly random, and the second matrix, C, is modifiable. The SDM is compared and contrasted to (1) computer memory, (2) correlation-matrix memory, (3) feed-forward artificial neural network, (4) cortex of the cerebellum, (5) Marr and Albus models of the cerebellum, and (6) Albus' cerebellar model arithmetic computer (CMAC). Several variations of the basic SDM design are discussed: the selected-coordinate and hyperplane designs of Jaeckel, the pseudorandom associative neural memory of Hassoun, and SDM with real-valued input variables by Prager and Fallside. SDM research conducted mainly at the Research Institute for Advanced Computer Science (RIACS) in 1986-1991 is highlighted.
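The two-matrix structure described above can be sketched as follows. This is a minimal autoassociative SDM under assumed parameters (`n`, `m`, `r` are illustrative): `A` is the fixed random address matrix that decides which hidden units activate, and `C` holds the modifiable counters that are updated on writes and pooled on reads.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 128, 2000    # word length and number of hidden units (assumed)
r = 51              # activation radius (assumed)

A = rng.integers(0, 2, size=(m, n))   # fixed, random address matrix
C = np.zeros((m, n), dtype=int)       # modifiable content matrix (counters)

def active(addr):
    # A hidden unit fires when its row of A is within Hamming radius r.
    return np.sum(A != addr, axis=1) <= r

def write(addr, word):
    # Add +1/-1 to the counters of every active hidden unit.
    C[active(addr)] += 2 * word - 1

def read(addr):
    # Pool the counters of the active units and threshold back to bits.
    s = C[active(addr)].sum(axis=0)
    return (s > 0).astype(int)

word = rng.integers(0, 2, size=n)
write(word, word)                     # autoassociative store
recalled = read(word)
```

The contrast with a correlation-matrix memory is that the address never touches `C` directly: `A` first maps it to a very high-dimensional, sparse activation pattern, and only that pattern addresses the counters.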
Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system.
An important property for any memory system is the ability to form
higher-level concepts from lower-level ones in a robust way. This
process is called chunking in the article. It is also important that
such higher-level concepts can be analyzed, i.e., broken down into their
constituent parts. This is called probing and clean-up. These issues
have previously been treated for vectors of real numbers and for dense
binary patterns. Using sparse codes instead of dense ones has many
advantages. The paper shows how to define robust chunking operations for
such sparse codes. It is shown that a sparse distributed memory (SDM)
model using sparse codes and a suitable activation mechanism can be used
as a clean-up memory. It is proved that the retrieval of the constituent
parts can be made arbitrarily exact with a growing memory. This is so
even if we let the load increase to infinity.
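The chunking and clean-up operations described above can be illustrated with a toy sketch. This is not the paper's construction, only an assumed simplification: a chunk is formed by superposing (bitwise OR) sparse constituent codes, and a clean-up memory recovers the constituents as the stored codes with the largest overlap with the chunk. All sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 1000, 20            # code length and number of active bits (sparse)

def sparse_code():
    # A random sparse binary code with exactly k ones.
    v = np.zeros(n, dtype=int)
    v[rng.choice(n, size=k, replace=False)] = 1
    return v

# Assumed chunking operation: superpose the constituent codes by bitwise OR.
parts = [sparse_code() for _ in range(3)]
chunk = np.bitwise_or.reduce(parts)

# Clean-up memory: a store of known codes; probing recovers the
# constituents as the items with the highest overlap with the chunk.
memory = parts + [sparse_code() for _ in range(100)]   # plus distractors
overlap = np.array([v @ chunk for v in memory])
recovered = set(np.argsort(overlap)[-3:])              # three best matches
```

Sparsity is what makes this robust: each constituent overlaps the chunk in all of its k active bits, while a random distractor overlaps it in only about k * 3k/n bits on average, so the gap between constituents and distractors stays large.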