Rod Rinkus
Neurithmic Systems

PhD Cognitive and Neural Systems, Boston University, 1996

About

30 Publications
7,207 Reads
156 Citations
Since 2017: 19 research items, 104 citations
[Citations-per-year chart, 2017–2023]
Introduction
Developing a theory of a canonical mesoscopic (macrocolumnar) cortical algorithm, Sparsey, which stores information in the form of modular sparse distributed codes (MSDCs) and performs fixed-time unsupervised learning, best-match retrieval, and belief update for spatiotemporal (sequential) inputs.
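The modular structure of an MSDC can be illustrated with a minimal sketch: a coding field of Q winner-take-all modules, each containing K binary units, where a code is one active winner per module and similarity between stored items is measured as the number of modules with the same winner. The parameters Q and K and the helper names below are hypothetical, chosen only for illustration; this is not Sparsey's actual learning or retrieval algorithm.

```python
import random

# Illustrative (hypothetical) parameters: 8 winner-take-all modules of 4 units each.
Q, K = 8, 4

def random_code(rng):
    """A code is one winning unit per module, i.e., a tuple of Q winner indices."""
    return tuple(rng.randrange(K) for _ in range(Q))

def overlap(code_a, code_b):
    """Number of modules in which two codes share the same winner.
    Code overlap serves as a natural similarity measure between stored items."""
    return sum(a == b for a, b in zip(code_a, code_b))

rng = random.Random(0)
a = random_code(rng)
b = random_code(rng)
print("code a:", a)
print("overlap(a, a) =", overlap(a, a))  # identical codes overlap in all Q modules
print("overlap(a, b) =", overlap(a, b))  # random codes overlap partially
```

Because every code is maximally sparse within each module yet distributed across modules, graded overlap between codes directly encodes graded similarity, which is the property the similarity-preservation question above turns on.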

Publications (30)
Article
Full-text available
Semantic memory as a computationally free side-effect of sparse distributed generative episodic memory
By generative model, we mean a model with sufficient parameters to represent the deep statistical structure (not just pairwise, but ideally, statistics of all orders present) of an input domain (in contrast to a discriminative model whose goal is...
Conference Paper
Full-text available
The “neuron doctrine” says the individual neuron is the functional unit of meaning. For a single source neuron, spike coding schemes can be based on spike rate or precise spike time(s). Both are fundamentally temporal codes with two key limitations. 1) Assuming M different messages from (activation levels of) the source neuron are possible, the dec...
Presentation
Full-text available
Most leading “deep” machine learning (ML) approaches rely heavily on Backpropagation which requires repeated computation of a very high dimensional loss function gradient to guide the, typically gradual, changes to the model’s weights. I describe an alternative, binary associative memory-based approach, Sparsey, in which sparse distributed binary r...
Conference Paper
Full-text available
There is increasing realization in neuroscience that information is represented in the brain, e.g., neocortex, hippocampus, in the form of sparse distributed codes (SDCs), a kind of cell assembly. Two essential questions are: a) how are such codes formed on the basis of single trials; and b) how is similarity preserved during learning, i.e., how do mo...
Preprint
Full-text available
There is increasing realization in neuroscience that information is represented in the brain, e.g., neocortex, hippocampus, in the form of sparse distributed codes (SDCs), a kind of cell assembly. Two essential questions are: a) how are such codes formed on the basis of single trials; and b) how is similarity preserved during learning, i.e., how do more...
Preprint
Full-text available
There is increasing realization in neuroscience that information is represented in the brain, e.g., neocortex, hippocampus, in the form of sparse distributed codes (SDCs), a kind of cell assembly. Two essential questions are: a) how are such codes formed on the basis of single trials; and b) how is similarity preserved during learning, i.e., how do more...
Preprint
Full-text available
A simple, atemporal, first-spike code, operating on Combinatorial Population Codes (CPCs) (a.k.a., binary sparse distributed representations) is described, which allows the similarities (more generally, likelihoods) of all items (hypotheses) stored in a CPC field to be simultaneously transmitted with a wave of single spikes from any single active c...
Article
(Web page: http://brainworkshow.sparsey.com/the-classical-realization-of-quantum-parallelism/) In this essay, I explain how quantum parallelism can be achieved on a classical machine. The key is to represent items of information as sparse distributed representations (SDRs), i.e., relatively small sets of binary units chosen sparsely in a much lar...
Research
Full-text available
Prior emulations of quantum computing on classical hardware formally represent probability amplitudes (and their corresponding basis states) localistically, i.e., each basis state is represented by its own memory location, physically disjoint from all the others. This is the source of the belief that exponential resources, both memory and operation...
Article
(Web page: https://medium.com/@rod_83597/a-hebbian-cell-assembly-is-formed-at-full-strength-on-a-single-trial-d9def1d2fa89) Hebb defined a cell assembly as a group of reciprocally interconnected cells that represents a concept. It’s surely true that a set of cortical cells that becomes a CA will have some, possibly large, degree of interconnectiv...
Conference Paper
Full-text available
For a single source neuron, spike coding schemes can be based on rate or on precise spike time(s) relative to an event, e.g., to a particular phase of gamma. Both are fundamentally temporal, requiring a decode window duration T much longer than a single spike. But, if information is represented by population activity (distributed codes, cell assemb...
Preprint
Full-text available
Four hallmarks of human intelligence are: 1) on-line, single/few-trial learning; 2) important/salient memories and knowledge are permanent over lifelong durations, though confabulation (semantically plausible retrieval errors) accrues with age; 3) the times to learn a new item and to retrieve the best-matching (most relevant) item(s) remain constan...
Preprint
Full-text available
For a single source neuron, spike coding schemes can be based on rate or on precise spike time(s) relative to an event, e.g., to a particular phase of gamma. Both are fundamentally temporal, requiring a decode window duration T much longer than a single spike. But, if information is represented by population activity (distributed codes, c...
Conference Paper
Full-text available
Among the more important hallmarks of human intelligence, which any artificial general intelligence (AGI) should have, are the following. 1. It must be capable of on-line learning, including with single/few trials. 2. Memories/knowledge must be permanent over lifelong durations, safe from catastrophic forgetting. Some confabulation, i.e., semantica...
Working Paper
Full-text available
Machine learning (ML) representation formats have been dominated by: a) localism, wherein individual items are represented by single units, e.g., Bayes Nets, HMMs; and b) fully distributed representations (FDR), wherein items are represented by unique activation patterns over all the units, e.g., Deep Learning (DL) and its progenitors. DL has had g...
Article
A hyperessay ( http://www.sparsey.com/Sparsey_Hyperessay.html ) describing many elements of Neurithmic Systems' overall AGI theory, Sparsey®. Since the brain is the only known thing that possesses true intelligence and since the brain's cortex is the locus of memory, reasoning, etc., our goal reduces to discovering the fundamental nature of cortic...
Article
Full-text available
The abilities to perceive, learn, and use generalities, similarities, classes, i.e., semantic memory (SM), is central to cognition. Machine learning (ML), neural network, and AI research has been primarily driven by tasks requiring such abilities. However, another central facet of cognition, single-trial formation of permanent memories of experienc...
Preprint
Full-text available
The brain is believed to implement probabilistic reasoning and to represent information via population, or distributed, coding. Most previous population-based probabilistic (PPC) theories share several basic properties: 1) continuous-valued neurons; 2) fully (densely)-distributed codes, i.e., all (most) units participate in every code; 3) graded sy...
Article
Full-text available
It is widely acknowledged that the brain i) implements probabilistic reasoning, and ii) represents information via population/distributed coding. Previous population-based probabilistic (PPC) theories share some basic properties: 1) continuous neurons; 2) all neurons formally participate in every code; 3) decoding requires either graded synapses or...
Article
Full-text available
The visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (cod...
Poster
Full-text available
The remarkable structural homogeneity of isocortex strongly suggests a canonical cortical algorithm that performs the same essential function in all regions. That function is widely construed/modeled as probabilistic inference, i.e., the ability, given an input, to retrieve the best-matching memory (or, most likely hypothesis) stored in memory. Her...
Article
Full-text available
Quantum superposition says that any physical system simultaneously exists in all of its possible states, the number of which is exponential in the number of entities composing the system. The strength of presence of each possible state in the superposition, i.e., its probability of being observed, is represented by its probability amplitude coeffic...
Article
Full-text available
No generic function for the minicolumn - i.e., one that would apply equally well to all cortical areas and species - has yet been proposed. I propose that the minicolumn does have a generic functionality, which only becomes clear when seen in the context of the function of the higher-level, subsuming unit, the macrocolumn. I propose that: (a) a mac...
Conference Paper
Full-text available
A neural network model is proposed that forms sparse spatiotemporal memory traces of spatiotemporal events given single occurrences of the events. The traces are distributed in that each individual cell and synapse participates in numerous traces. This sharing of representational substrate provides the basis for similarity-based generalization and...
Thesis
Full-text available
A model is described in which three types of memory—episodic memory, complex sequence memory and semantic memory—coexist within a single distributed associative memory. Episodic memory stores traces of specific events. Its basic properties are: high capacity, single-trial learning, memory trace permanence, and ability to store non-orthogonal patter...
Conference Paper
Full-text available
The problem of representing large sets of complex state sequences (CSSs)---i.e., sequences in which states can recur multiple times---has thus far resisted solution. This paper describes a novel neural network model, TEMECOR, which has very large capacity for storing CSSs. Furthermore, in contrast to the various back-propagation-based attempts at s...
Article
A deterministic neural network-based model of a sensori-motor being operated in the physical world is proposed. The being recognizes an object by overtly manipulating it into a familiar orientation. The being does not decide how to manipulate a novel object. Rather, given a novel percept of some object, its subsequent trajectory is determined by th...
Article
Full-text available
Monitoring communication performance consists of a variety of research and clinical methods using direct observation techniques. The benefit of observing actual communication performance, compared to testing or other non-observational assessment techniques, is that the results of observation-based measures reflect the individual’s actual communicat...

Projects (2)
Project
Understand the core circuit and algorithm of neocortex, which is likely essentially similar to that of hippocampus and olfactory bulb as well.