Information Processing, Computation, and Cognition
Department of Philosophy, Center for Neurodynamics, and Department of Psychology
University of Missouri – St. Louis
St. Louis, MO, USA
Department of Philosophy and Neuroscience Institute
Georgia State University
Atlanta, GA, USA
This is a preprint of an article whose final and definitive form will be published in Journal of
Biological Physics; Journal of Biological Physics is available online at:
Computation and information processing are among the most fundamental notions in cognitive
science. They are also among the most imprecisely discussed. Many cognitive scientists take it
for granted that cognition involves computation, information processing, or both – although
others disagree vehemently. Yet different cognitive scientists use ‘computation’ and
‘information processing’ to mean different things, sometimes without realizing that they do. In
addition, computation and information processing are surrounded by several myths; first and
foremost, that they are the same thing. In this paper, we address this unsatisfactory state of
affairs by presenting a general and theory-neutral account of computation and information
processing. We also apply our framework by analyzing the relations between computation and
information processing on one hand and classicism and connectionism/computational
neuroscience on the other. We defend the relevance to cognitive science of both computation,
at least in a generic sense, and information processing, in three important senses of the term.
Our account advances several foundational debates in cognitive science by untangling some of
their conceptual knots in a theory-neutral way. By leveling the playing field, we pave the way
for the future resolution of the debates’ empirical aspects.
Keywords: classicism, cognitivism, computation, computational neuroscience, computational
theory of mind, computationalism, connectionism, information processing, meaning, neural
1. Information Processing, Computation, and the Foundations of Cognitive Science
Computation and information processing are among the most fundamental notions in cognitive
science. Many cognitive scientists take it for granted that cognition involves computation,
information processing, or both. Many others, however, reject theories of cognition based on
either computation or information processing [1-7]. This debate has continued for over half a
century without resolution.
An equally long-standing debate pits classical theories of cognitive architecture [8-
13] against connectionist and neurocomputational theories [14-21]. Classical theories draw a
strong analogy between cognitive systems and digital computers. The term ‘connectionism’ is
primarily used for neural network models of cognitive phenomena constrained solely by
behavioral (as opposed to neurophysiological) data. By contrast, the term ‘computational
neuroscience’ is primarily used for neural network models constrained by neurophysiological
and possibly also behavioral data. We are interested not so much in the distinction between
connectionism and computational neuroscience as in what they have in common: the
explanation of cognition in terms of neural networks and their apparent contrast with classical
theories. Thus, for present purposes connectionism and computational neuroscience may be
For brevity’s sake, we will refer to these debates on the role information processing,
computation, and neural networks should play in a theory of cognition as the foundational debates.
In recent years, some cognitive scientists have attempted to get around the
foundational debates by advocating a pluralism of perspectives [22-23]. According to this kind
of pluralism, it is a matter of perspective whether the brain computes, processes information, is
a classical system, or is a connectionist system. Different perspectives serve different purposes
and different purposes are legitimate. Hence, all sides of the foundational debates can be
retained if appropriately qualified.
Although pluralists are correct to point out that different descriptions of the same
phenomenon can in principle complement one another, this kind of perspectival pluralism is
flawed in one important respect. There is an extent to which different parties in the
foundational debates offer alternative explanations of the same phenomena—they can’t all be
right. Nevertheless, these pluralists are responding to something true and important: the
foundational debates are not merely empirical; they cannot be resolved solely by collecting
more data because they hinge on how we construe the relevant concepts. The way to make
progress is therefore not to accept all views at once, but to provide a clear and adequate
conceptual framework that remains neutral between different theories. Once such a
framework is in place, competing explanations can be translated into a shared language and
evaluated on empirical grounds.
Lack of conceptual housecleaning has led to the emergence of a number of myths that
stand in the way of theoretical progress. Not everyone subscribes to all of the following
assumptions, but each is widespread and influential:
(1) Computation is the same as information processing.
(2) Semantic information is necessarily true.
(3) Computation requires representation.
(4) The Church-Turing thesis entails that cognition is computation.
(5) Everything is computational.
(6) Connectionist and classical theories of cognitive architecture are mutually exclusive.
We will argue that these assumptions are mistaken and distort our understanding of
computation, information processing, and cognitive architecture.
Traditional accounts of what it takes for a physical system to perform a computation or
process information [19, 24-26] are inadequate because they are based on at least some of (1)-
(6). In lieu of these traditional accounts, we will present a general account of computation and
information processing that systematizes, refines, and extends our previous work [27-37].
We will then apply our framework by analyzing the relations between computation and
information processing on one hand and classicism and connectionism/computational
neuroscience on the other. We will defend the relevance to cognitive science of both
computation, at least in a generic sense we will articulate, and information processing, in three
important senses of the term. We will also argue that the choice among theories of cognitive
architecture is not between classicism and connectionism/computational neuroscience, but
rather between varieties of neural computation, which may be classical or non-classical.
Our account advances the foundational debates by untangling some of their conceptual
knots in a theory-neutral way. By leveling the playing field, we pave the way for the future
resolution of the debates’ empirical aspects.
2. Getting Rid of Some Myths
The notions of computation and information processing are often used interchangeably. Here
is a representative example: “I … describe the principles of operation of the human mind,
considered as an information-processing, or computational, system” [38, p. 10, emphasis
added]. This statement presupposes assumption (1) above. Why are the two notions used
interchangeably so often, without a second thought?
We suspect the historical reason for this conflation goes back to the cybernetic
movement’s effort to blend Shannon’s information theory with Turing’s computability
theory (as well as control theory). Cyberneticians did not clearly distinguish either between
Shannon information and semantic information or between semantic and non-semantic
computation (more on these distinctions below). But at least initially, they were fairly clear
that information and computation played distinct roles within their theories. Their idea was
that organisms and automata contain control mechanisms: information is transmitted within
the system and between system and environment, and control is exerted by means of digital
computation [41, 42].
Then the waters got muddier. When the cybernetic movement became influential in
psychology, AI, and neuroscience, ‘computation’ and ‘information’ became ubiquitous
buzzwords. Many people accepted that computation and information processing belong
together in a theory of cognition. After that, many stopped paying attention to the differences
between the two. To set the record straight and make some progress, we must get clearer on
the independent roles computation and information processing can fulfill in a theory of cognition.
The notion of digital computation was imported from computability theory into
neuroscience and psychology primarily for two reasons: first, it seemed to provide the right
mathematics for modeling neural activity; second, it inherited mathematical tools
(algorithms, computer programs, formal languages, logical formalisms, and their derivatives,
including many types of neural networks) that appeared to capture some aspects of cognition.
These reasons are not sufficient to actually establish that cognition is digital computation.
Whether cognition is digital computation is a difficult question, which lies outside the scope of this paper.
The theory that cognition is computation became so popular that it progressively led to
a stretching of the operative notion of computation. In many quarters, especially
neuroscientific ones, the term ‘computation’ is used, more or less, for whatever internal
processes explain cognition. Unlike ‘digital computation,’ which stands for a mathematical
apparatus in search of applications, ‘neural computation’ is a label in search of a theory. Of
course, the theory is quite well developed by now, as witnessed by the explosion of work in
computational and theoretical neuroscience over the last decades [20, 44-45]. The point is that
such a theory need not rely on a previously existing and independently defined notion of
computation, such as ‘digital computation’ or even ‘analog computation’ in its most straightforward sense.
By contrast, the various notions of information (processing) have distinct roles to play.
By and large, they serve to make sense of how organisms keep track of their environments and
produce behaviors accordingly. Shannon’s notion of information can serve to address
quantitative problems of efficiency of communication in the presence of noise, including
communication between the external (distal) environment and the nervous system. Other
notions of information—specifically, semantic information—can serve to give specific semantic
content to particular states or events. This may include cognitive or neural events that reliably
correlate with events occurring in the organism’s distal environment as well as mental
representations, words, and the thoughts and sentences they constitute.
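The quantitative, Shannon-style role of information can be made concrete with a small calculation. The sketch below is our own illustration, not from the text; the function name and the example probabilities are assumptions. It computes the entropy of a source, i.e., the average number of bits of information per event, which is what matters for questions about efficient communication over noisy channels:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the average amount of information per
    event from a source whose events occur with the given probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin yields 1 bit per toss; a heavily biased coin yields less,
# because its outcomes are more predictable (hence cheaper to transmit).
entropy([0.5, 0.5])   # → 1.0
entropy([0.9, 0.1])   # → ~0.47
```

Note that nothing in this calculation concerns what the events mean; that is one way to see the gap between Shannon information and semantic information stressed in the text.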
Whether cognitive or neural events fulfill all or any of the job descriptions of
computation and information processing is in part an empirical question and in part a
conceptual one. It’s a conceptual question insofar as we can mean different things by
‘information’ and ‘computation’, and insofar as there are conceptual relations between the
various notions. It’s an empirical question insofar as, once we fix the meanings of
‘computation’ and ‘information’, the extent to which computation and the processing of
information are both instantiated in the brain depends on the empirical facts of the matter.
Ok, but do these distinctions really matter? Why should a cognitive theorist care about
the differences between computation and information processing? The main theoretical
advantage of keeping them separate is to appreciate the independent contributions they can
make to a theory of cognition. Conversely, the main cost of conflating computation and
information processing is that the resulting mongrel concept may be too messy and vague to do
all the jobs that are required of it. As a result, it becomes difficult to reach consensus on
whether cognition involves either computation or information processing.
Assumption (2) is that semantic information is necessarily true; there is no such thing as
false information. This “veridicality thesis” is defended by most theorists of semantic
information [46-49]. But as we shall point out in Section 4, (2) is inconsistent with one
important use of the term ‘information’ in cognitive science [37]. Therefore, we will reject (2)
in favor of the view that semantic information may be either true or false.
Assumption (3) is that there is no computation without representation. Most accounts
of computation rely on this assumption [19, 24-26, 38]. As one of us has argued extensively
elsewhere [27, 28, 50], however, assumption (3) obscures the core notion of computation used
in computer science and computability theory—the same notion that inspired the
computational theory of cognition—as well as some important distinctions between notions of
computation. The core notion of computation does not require representation, although it is
compatible with it. In other words, computational states in the core sense may or may not be
representations. Understanding computation in its own terms, independently of
representation, will allow us to sharpen the debates over the computational theory of cognition
as well as cognitive architecture.
Assumption (4) is that cognition is computation because of the Church-Turing thesis [51,
52]. The Church-Turing thesis says that any function that is computable in an intuitive sense is
recursive or, equivalently, computable by some Turing machine [40, 53, 54].1 Since Turing
machines and other equivalent formalisms are the foundation of the mathematical theory of
computation, many authors either assume or attempt to argue that all computations are
covered by the results established by Turing and other computability theorists. But recent
scholarship has shown this view to be fallacious [29, 55]. The Church-Turing thesis does not
establish whether a function is computable. It only says that if a function is computable in a
certain intuitive sense, then it is computable by some Turing machine. Furthermore, the
intuitive sense in question has to do with what can be computed by following an algorithm (a
list of explicit instructions) defined over sequences of digital entities. Thus, the Church-Turing
thesis applies directly only to algorithmic digital computation. The relationship between
algorithmic digital computation and digital computation simpliciter, let alone other kinds of
computation, is quite complex, and the Church-Turing thesis does not settle it.
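The intuitive sense at issue, computing by following a list of explicit instructions defined over sequences of digital entities, can be illustrated with a toy machine. The sketch below is our own illustration (function and rule-table names are assumptions, and the machine is deliberately minimal), not a claim about any particular formalization in the text:

```python
def run_turing_machine(tape, rules, state="start", blank="_"):
    """Minimal deterministic Turing-style machine. `rules` maps
    (state, scanned symbol) to (next state, symbol to write, head move)."""
    cells = list(tape)
    head = 0
    while state != "halt":
        symbol = cells[head] if 0 <= head < len(cells) else blank
        state, write, move = rules[(state, symbol)]
        if 0 <= head < len(cells):
            cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells)

# A toy algorithm: complement a binary string, scanning left to right
# and halting upon reaching a blank cell.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
run_turing_machine("1011", flip)  # → "0100"
```

The rule table is an algorithm in exactly the sense the Church-Turing thesis concerns: an explicit, step-by-step procedure over discrete symbols.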
Assumption (5) is pancomputationalism: everything is computational. There are two
ways to defend (5). Some authors argue that everything is computational because describing
something as computational is just one way of interpreting it, and everything can be
interpreted that way [19, 23]. We reject this interpretational pancomputationalism because it
conflates computational modeling with computational explanation. The computational theory
of cognition is not limited to the claim that cognition can be described (modeled)
computationally, as the weather can; it adds that cognitive phenomena have a computational
explanation [28, 31, 34]. Other authors defend (5) by arguing that the universe as a whole is at bottom computational [56, 57]. The latter is a working hypothesis or article of faith for those interested in seeing how familiar physical laws might emerge from a “computational” or “informational” substrate. It is not a widely accepted notion, and there is no direct evidence for it.
1 The Church-Turing thesis properly so called—i.e., the thesis supported by Church, Turing, and Kleene’s arguments—is sometimes confused with the Physical Church-Turing thesis. The Physical Church-Turing thesis lies outside the scope of this paper. Suffice it to say that the Physical Church-Turing thesis is controversial and in any case does not entail that cognition is computation in a sense that is relevant to cognitive science.
The physical form of pancomputationalism is not directly relevant to theories of
cognition, because theories of cognition attempt to find out what distinguishes cognition from
other processes—not what it shares with everything else. Insofar as the theory of cognition
uses computation to distinguish cognition from other processes, it needs a notion of
computation that excludes at least some other processes as non-computational [cf. 28, 31, 34].
Someone may object as follows. Even if pancomputationalism—the thesis that
everything is computational—is true, it doesn’t follow that the claim that cognition involves
computation is vacuous. A theory of cognition still has to say which specific computations
cognition involves. The job of neuroscience and psychology is precisely to discover the specific
computations that distinguish cognition from other processes [cf. 26].
We agree that, if cognition involves computation, then the job of neuroscience and
psychology is to discover which specific computations cognition involves. But the if is
important. The job of psychology and neuroscience is to find out how cognition works,
regardless of whether it involves computation. The claim that brains compute was introduced
in neuroscience and psychology as an empirical hypothesis, to explain cognition by analogy with
digital computers. Much of the empirical import of the computational theory of cognition is
already eliminated by stretching the notion of computation from digital to generic (see below).
Stretching the notion of computation even further, so as to embrace pancomputationalism,
erases all empirical import from the claim that brains compute.
Here is another way to describe the problem. The view that cognition involves
computation has been fiercely contested. Many psychologists and neuroscientists reject it. If
we adopt an all-encompassing notion of computation, we have no way to make sense of this
debate. It is utterly implausible that critics of computationalism have simply failed to notice
that everything is computational. More likely, they object to what they perceive to be
questionable empirical commitments of computationalism. For this reason, computation as it
figures in pancomputationalism is a poor foundation for a theory of cognition. From now on,
we will leave pancomputationalism behind.
Finally, assumption (6) is that connectionist and classical theories of cognitive
architecture are mutually exclusive.2 By this assumption, we do not mean to rule out hybrid
theories that combine symbolic and connectionist modules [cf. 59]. What we mean is that
according to assumption (6), for any given module, you must have either a classical or a connectionist architecture.
2 Classical theories are often referred to as ‘symbolic’. Roughly speaking, in the present context a symbol is
something that satisfies two conditions: (i) it is a representation and (ii) it falls under a discrete (or digital)
linguistic type. As we will argue below, conceptual clarity requires keeping these two aspects of symbols separate.
Therefore, we will avoid the term ‘symbol’ and its cognates.
This debate is sometimes conflated with the debate between computationalism and anti-computationalism.3
But these are largely separate debates. Computationalists argue that cognition is computation;
anti-computationalists deny it. Classicists argue that cognition operates over language-like
structures; connectionists suggest that cognition is implemented by neural networks. So all
classicists are computationalists, but computationalists need not be classicists, and
connectionists/computational neuroscientists need not be anti-computationalists. In both
debates, the opposing camps often confusedly mix terminological disputes with substantive
disagreements over the nature of cognition. This is why we will begin our next section by
introducing a taxonomy of notions of computation.
As we will show in Section 3.4, depending on what one means by ‘computation’,
computationalism can range from being true but quite weak to being an explanatorily powerful,
though fallible, thesis about cognition. Similarly, we will show the opposition between
classicists and connectionists/computational neuroscientists to be either spurious or
substantive depending on what is meant by ‘connectionism’. We will argue that everyone is (or
ought to be) a connectionist in the most general and widespread sense of the term, whether
they realize it or not.
3. Computation
Different notions of computation vary along two important dimensions. The first is how
encompassing the notion is, that is, how many processes it includes as computational. The
second dimension has to do with whether being the vehicle of a computation requires
possessing meaning, or semantic properties. We will look at the first dimension first.
3.1 Digital computation
We use ‘digital computation’ for the notion implicitly defined by the classical mathematical
theory of computation, and ‘digital computationalism’ for the thesis that cognition is digital
computation. Rigorous mathematical work on computation began with Alan Turing and other
logicians in the 1930s [40, 53, 60, 61], and it is now a well-established branch of mathematics.
A few years after Turing and others formalized the notion of digital computation,
Warren McCulloch and Walter Pitts used digital computation to characterize the activities
of the brain. McCulloch and Pitts were impressed that the main vehicles of neural processes
appear to be trains of all-or-none spikes, which are discontinuous events. This led them to
conclude, rightly or wrongly, that neural processes are digital computations. They then used
their theory that brains are digital computers to explain cognition.4
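The core of McCulloch and Pitts's idea can be conveyed with a toy threshold unit. The sketch below is our own illustration, not their original formalism: a unit that fires (outputs 1) just in case the weighted sum of its all-or-none inputs reaches a threshold, which already suffices to compute simple logical functions:

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts-style unit over all-or-none (0/1) inputs:
    it fires iff the weighted input sum reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With suitable thresholds, a single unit computes logical AND or OR,
# which is one reason networks of such units can implement digital
# computations over trains of all-or-none signals.
AND = lambda x, y: mp_neuron([x, y], [1, 1], threshold=2)
OR = lambda x, y: mp_neuron([x, y], [1, 1], threshold=1)
```

Whether real neural activity is well captured by such discrete units is, of course, exactly what is at issue in the "rightly or wrongly" above.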
McCulloch and Pitts’s theory is the first theory of cognition to employ Turing’s notion of
digital computation. Since McCulloch and Pitts’s theory had a major influence on subsequent
3 Sometimes, the view that cognition is dynamical is presented as an alternative to the view that cognition is
computational [e.g., 4]. This is simply a false contrast. Computation is dynamical too; the relevant question is
whether cognitive dynamics are computational.
4 Did McCulloch and Pitts really offer a theory of cognition in terms of digital computation? Absolutely. McCulloch
and Pitts’s work is widely misunderstood; for a detailed study, see [63].
computational theories, we stress that digital computation is the principal notion that inspired
modern computational theories of cognition. This makes the clarification of digital
computation especially salient.
Digital computation may be defined both abstractly and concretely. Roughly speaking,
abstract digital computation is the manipulation of strings of discrete elements, that is, strings
of letters from a finite alphabet. Here we are interested primarily in concrete computation, or
physical computation. Letters from a finite alphabet may be physically implemented by what
we call ‘digits’. To a first approximation, concrete digital computation is the processing of
sequences of digits according to general rules defined over the digits. Let us briefly
consider the main ingredients of digital computation.
The atomic vehicles of concrete digital computation are digits, where a digit is simply a
macroscopic state (of a component of the system) whose type can be reliably and
unambiguously distinguished by the system from other macroscopic types. To each
(macroscopic) digit type, there correspond a large number of possible microscopic states.
Digital systems are engineered so as to treat all those microscopic states in the same way—the
one way that corresponds to their (macroscopic) digit type. For instance, a system may treat 4
volts plus or minus some noise in the same way (as a ‘0’), whereas it may treat 8 volts plus or
minus some noise in a different way (as a ‘1’). To ensure reliable manipulation of digits based
on their type, a physical system must manipulate at most a finite number of digit types. For
instance, ordinary computers contain only two types of digit, usually referred to as ‘0’ and ‘1’.5
Digits need not mean or represent anything, but they can; numerals represent numbers, while
other digits (e.g., ‘|’, ‘\’) do not represent anything in particular.
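The many-microstates-to-one-digit idea can be sketched in a few lines. The voltage bands below follow the example in the text; the function name and band widths are our own illustrative assumptions:

```python
def classify_digit(voltage):
    """Toy digit readout: voltages near the 4 V band count as digit type
    '0', voltages near the 8 V band as digit type '1' (bands illustrative)."""
    if abs(voltage - 4.0) <= 1.0:
        return "0"
    if abs(voltage - 8.0) <= 1.0:
        return "1"
    raise ValueError("voltage lies outside every digit band")

# Many distinct microscopic states are treated the same way, namely as
# their shared macroscopic digit type:
classify_digit(3.7)  # → "0"
classify_digit(8.2)  # → "1"
```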
Digits can be concatenated (i.e., ordered) to form sequences or strings. Strings of digits
are the vehicles of digital computations. A digital computation consists in the processing of
strings of digits according to rules. A rule in the present sense is simply a map from input
strings of digits, plus possibly internal states, to output strings of digits. Examples of rules that
may figure in a digital computation include addition, multiplication, identity, and sorting.6
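A rule in this sense is just an input-output map over strings of digits. As a toy illustration (ours, not the authors'), here is sorting expressed in exactly that form, a map from input strings to output strings:

```python
def sort_rule(string, alphabet="01"):
    """A rule in the present sense: a map from input strings of digits
    to output strings of digits (here, returning the digits in order)."""
    if any(d not in alphabet for d in string):
        raise ValueError("input contains a non-digit")
    return "".join(sorted(string))

sort_rule("10110")  # → "00111"
```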
When we define concrete computations and the vehicles—such as digits—that they
manipulate, we need not consider all of their specific physical properties. We may consider
only the properties that are relevant to the computation, according to the rules that define the
computation. A physical system can be described at different levels of abstraction. Since
concrete computations and their vehicles can be defined independently of the physical media
that implement them, we shall call them ‘medium-independent’. That is, computational
descriptions of concrete physical systems are sufficiently abstract as to be medium-independent.
In other words, a vehicle is medium-independent just in case the rules (i.e., the input-
output maps) that define a computation are sensitive only to differences between portions of
the vehicles along specific dimensions of variation—they are insensitive to any other physical
5 The term ‘digit’ is used in two ways. It may be used for the discrete variables that can take different values; for
instance, binary cells are often called bits, which can take either 0 or 1 as values. Alternatively, the term ‘digit’ may
be used for the values themselves. In this second sense, it is the 0’s and 1’s that are the bits. We use ‘digit’ in the
6 Addition and multiplication are usually defined as functions over numbers. To maintain consistency in the
present context, they ought to be understood as functions over strings of digits.
properties of the vehicles. Put yet another way, the rules are functions of state variables
associated with certain degrees of freedom that can be implemented differently in different
physical media. Thus, a given computation can be implemented in multiple physical media
(e.g., mechanical, electro-mechanical, electronic, magnetic, etc.), provided that the media
possess a sufficient number of dimensions of variation (or degrees of freedom) that can be
appropriately accessed and manipulated.
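Medium independence can be sketched by running one and the same digit-level rule over two different "media." Everything below (the class names, the voltage and magnetic encodings) is our own illustrative assumption; the point is only that the rule is sensitive to digit types, never to volts or spins:

```python
def negation_rule(digits):
    """An input-output map defined purely over digit types."""
    return ["1" if d == "0" else "0" for d in digits]

class VoltageMedium:
    def __init__(self, volts):          # e.g., ~4 V encodes '0', ~8 V encodes '1'
        self.volts = volts
    def read(self):
        return ["0" if v < 6.0 else "1" for v in self.volts]
    def write(self, digits):
        self.volts = [4.0 if d == "0" else 8.0 for d in digits]

class MagneticMedium:
    def __init__(self, spins):          # e.g., 'up' encodes '0', 'down' encodes '1'
        self.spins = spins
    def read(self):
        return ["0" if s == "up" else "1" for s in self.spins]
    def write(self, digits):
        self.spins = ["up" if d == "0" else "down" for d in digits]

def compute(medium, rule):
    """Apply a digit-level rule to whatever medium implements the digits."""
    medium.write(rule(medium.read()))
    return medium.read()

# The same rule yields the same digit-level result in both media:
compute(VoltageMedium([4.1, 7.9]), negation_rule)       # → ["1", "0"]
compute(MagneticMedium(["up", "down"]), negation_rule)  # → ["1", "0"]
```

The two media differ in every concrete physical respect; what they share are the degrees of freedom the rule is defined over.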
In the case of digits, their defining characteristic is that they are unambiguously
distinguishable by the processing mechanism under normal operating conditions. Strings of
digits are ordered sets of digits; i.e., digits such that the system can distinguish different
members of the set depending on where they lie along the string. The rules defining digital
computations are, in turn, defined in terms of strings of digits and internal states of the system,
which are simply states that the system can distinguish from one another. No further physical
properties of a physical medium are relevant to whether it implements digital computations.
Thus, digital computations can be implemented by any physical medium with the right degrees of freedom.
To summarize, a physical system is a digital computing system just in case it is a system
that manipulates input strings of digits, depending on the digits’ type and their location on the
string, in accordance with a rule defined over the strings (and possibly the internal states of the system).
The notion of digital computation here defined is quite general. It should not be
confused with three other commonly invoked but more restrictive notions of computation:
classical computation (in the sense of Fodor and Pylyshyn),7 computation that follows an
algorithm, and computation of Turing-computable functions (see Figure 1 below).
Let us begin with the most restrictive notion of the three: classical computation. A
classical computation is a digital computation that has two additional features. First, it
manipulates a special kind of digital vehicle: sentence-like strings of digits. Second, it is
algorithmic, meaning that it follows an algorithm—i.e., an effective, step-by-step procedure
that manipulates strings of digits and produces a result within finitely many steps. Thus, a
classical computation is a digital, algorithmic computation whose algorithms are sensitive to the
combinatorial syntax of the symbols.8
A classical computation is algorithmic, but the notion of algorithmic computation—i.e.,
digital computation that follows an algorithm—is more inclusive, because it does not require
that the vehicles being manipulated be sentence-like.
Any algorithmic computation, in turn, is Turing-computable (i.e., it can be performed by
a Turing machine). This is a version of the Church-Turing thesis, for which there is compelling
7 The term ‘classical computation’ is sometimes used as an approximate synonym of ‘digital computation’. Here
we are focusing on the more restricted sense of the term that has been used in debates on cognitive architecture
at least since .
8 Fodor and Pylyshyn restrict their notion of classical computation to processes defined over representations,
because they operate under assumption (3)—that computation requires representation. In other words, the
sentence-like symbolic structures manipulated by Fodor and Pylyshyn’s classical computations must have semantic
content. Thus, Fodor and Pylyshyn’s classical computations are a kind of “semantic computation”. Since
assumption (3) is a red herring in the present context, in the main text we avoided assumption (3). We will discuss
the distinction between semantic and non-semantic computation in Section 3.3.
evidence. But the computation of Turing-computable functions need not be carried out by
following an algorithm. For instance, many neural networks compute Turing-computable
functions (their inputs and outputs are strings of digits, and the input-output map is Turing-
computable), but such networks need not have a level of functional organization at which they
follow the steps of an algorithm for computing their functions; that is, there is no functional level at which their
internal states and state transitions are discrete.9
Finally, the computation of a Turing-computable function is a digital computation,
because Turing-computable functions are by definition functions of a denumerable domain—a
domain whose elements may be counted—and the arguments and values of such functions are,
or may be represented by, strings of digits. But it is equally possible to define functions of
strings of digits that are not Turing-computable, and to mathematically define processes that
compute such functions. Some authors have speculated that some functions that are not
Turing-computable may be computable by some physical systems [55, 64]. According to our
usage, any such computations still count as digital computations. Of course, it may well be that
only the Turing-computable functions are computable by physical systems; whether this is the
case is an empirical question that does not affect our discussion. Furthermore, the
computation of Turing-uncomputable functions is unlikely to be relevant to the study of
cognition. Be that as it may—we will continue to talk about digital computation in general.
Many other distinctions may be drawn within digital computation, such as hardwired vs.
programmable, special purpose vs. general purpose, and serial vs. parallel computation [cf. 30].
Such additional distinctions, which are orthogonal to those of Figure 1, may be used to further
classify theories of cognition that appeal to digital computation. Nevertheless, digital
computation is the most restrictive notion of computation that we will consider here. It
includes processes that follow ordinary algorithms, such as the computations performed by
standard digital computers, as well as many types of neural network computations. Since
digital computation is the notion that inspired the computational theory of cognition, it is the
most relevant notion for present purposes.
9 Two caveats: First, neural networks should not be confused with their digital simulations. A simulation is a
model or representation of a neural network; the network is what the simulation represents. Of course, a digital
simulation of a neural network is algorithmic; it doesn’t follow that the network itself (the system represented by
the simulation) is algorithmic. Second, some authors use the term ‘algorithm’ for the computations performed by
all neural networks. In this broader sense of ‘algorithm’, any processing of a signal follows an algorithm, regardless
of whether the process is defined in terms of discrete manipulations of strings of digits. Since this is a more
encompassing notion of algorithm than the one employed in computer science—indeed, it is even broader than
the notion of computing Turing-computable functions—our point stands. The point is that the notion of
computing Turing-computable functions is more inclusive than that of algorithmic computation in the standard
sense.
Figure 1. Types of digital computation and their relations of class inclusion.
Even though the notion of digital computation is broad, we need an even broader notion of
computation. In order to capture all relevant uses of ‘computation’ in cognitive science, we
now introduce the notion of ‘generic computation’. The banner of ‘generic computation’ will
subsume digital computation as defined in the previous section, analog computation, and
neural computation.
We use ‘generic computation’ to designate the processing of vehicles according to rules
that are sensitive to certain vehicle properties, and specifically, to differences between
different portions of the vehicles. This definition generalizes the definition of digital
computation by allowing for a broader range of vehicles (e.g., continuous variables as well as
discrete digits). We use ‘generic computationalism’ to designate the thesis that cognition is
computation in the generic sense.
Since the definition of generic computation makes no reference to specific media,
generic computation is medium-independent—it applies to all media. That is, the differences
between portions of vehicles that define a generic computation do not depend on specific
physical properties of the medium but only on the presence of relevant degrees of freedom.
Shortly after McCulloch and Pitts argued that brains perform digital computations,
others countered that neural processes may be more similar to analog computations [65, 66].
The alleged evidence for the analog computational theory includes neurotransmitters and
hormones, which neurons release in graded amounts rather than in all-or-none fashion.
Analog computation is often contrasted with digital computation, but analog
computation is a vague and slippery concept. The clearest notion of analog computation is that
of Pour-El [67]. Roughly, abstract analog computers are systems that manipulate continuous
variables to solve certain systems of differential equations. Continuous variables are variables
that can vary continuously over time and take any real values within certain intervals.
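The flavor of Pour-El-style analog computation can be gestured at with a simple differential equation. The following is a digital simulation (which, per the earlier caveat about simulations, represents rather than instantiates the analog process) of an integrator solving dx/dt = -x; the equation, initial condition, and step size are illustrative choices of ours.

```python
import math

# A digital *simulation* of an analog integrator solving dx/dt = -x with
# x(0) = 1, whose exact solution is x(t) = e^(-t).  A physical analog
# computer would realize x as a genuinely continuous variable; this
# program only approximates it with discrete Euler steps.

def simulate_decay(x0=1.0, t_end=1.0, dt=1e-4):
    x, t = x0, 0.0
    while t < t_end:
        x += -x * dt      # Euler step approximating continuous integration
        t += dt
    return x

approx = simulate_decay()
exact = math.exp(-1.0)
print(approx, exact)   # the two agree to within the discretization error
```

The discretization error is exactly what separates the simulation from its target: the analog machine's variable takes every intermediate value, while the program's does not.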
Analog computers can be physically implemented, and physically implemented
continuous variables are different kinds of vehicles than strings of digits. While a digital
computing system can always unambiguously distinguish digits and their types from one
another, a concrete analog computing system cannot do the same with the exact values of
(physically implemented) continuous variables. This is because the values of continuous
variables can only be measured within a margin of error. Primarily due to this, analog
computations (in the present, strict sense) are a different kind of process than digital
computations.10
The claim that the brain is an analog computer is ambiguous between two
interpretations. On the literal interpretation, ‘analog computer’ is given Pour-El’s precise
meaning [67]. The theory that the brain is an analog computer in this literal sense was
never very popular. The primary reason is that although spike trains are a continuous function
of time, they are also sequences of all-or-none signals.
On a looser interpretation, ‘analog computer’ refers to a broader class of computing
systems. For instance, Churchland and Sejnowski use the term ‘analog’ so that containing
continuous variables is sufficient for a system to count as analog: “The input to a neuron is
analog (continuous values between 0 and 1)” [19, p. 51]. Under such a usage, even a slide rule
counts as an analog computer. Sometimes, the notion of analog computation is simply left
undefined, with the result that ‘analog computer’ refers to some otherwise unspecified class of
computing systems.
The looser interpretation of ‘analog computer’—of which Churchland and Sejnowski’s
usage is one example—is not uncommon, but we find it misleading because its boundaries are
unclear and yet it is prone to being confused with analog computation in Pour-El’s more
precise sense [67]. Let us now turn to the notion of ‘neural computation’.
In recent decades, the analogy between brains and computers has taken hold in
neuroscience. Many neuroscientists have started using the term ‘computation’ for the
processing of neuronal spike trains, that is, sequences of spikes produced by neurons in real
time. The processing of neuronal spike trains by neural systems is often called ‘neural
computation’. Whether neural computation is best regarded as a form of digital computation,
analog computation, or something else is a difficult question, which we cannot settle here.
Instead, we will subsume digital computation, analog computation, and neural computation
under the banner of ‘generic computation’ (Figure 2).
While digits are unambiguously distinguishable vehicles, other vehicles are not. For
instance, concrete analog computers cannot unambiguously distinguish between any two
portions of the continuous variables they manipulate. Since the variables can take any real
values but there is a lower bound to the sensitivity of any system, it is always possible that the
difference between two portions of a continuous variable is small enough to go undetected by
the system. From this it follows that the vehicles of analog computation are not strings of
digits. Nevertheless, analog computations are only sensitive to the differences between
portions of the variables being manipulated, to the degree that they can be distinguished by the
system. Any further physical properties of the media implementing the variables are irrelevant
to the computation. Like digital computers, therefore, analog computers operate on medium-
independent vehicles.
10 For more details on the contrast between digital and analog computers, see [31, Section 3.5].
Finally, current evidence suggests that the vehicles of neural processes are neuronal
spikes and that the functionally relevant aspects of neural processes are medium-independent
aspects of the spikes—primarily, spike rates. That is, the functionally relevant aspects of spikes
may be implemented either by neural tissue or by some other physical medium, such as a
silicon-based circuit. Thus, spike trains appear to be another kind of medium-independent
vehicle, in which case they qualify as proper vehicles for generic computations. Assuming that
brains process spike trains and that spikes are medium-independent vehicles, it follows by
definition that brains perform computations in the generic sense.
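As an illustration of why spike rates are medium-independent vehicles, here is a minimal sketch of a rate computation (the spike timestamps are hypothetical): the computed quantity depends only on the timing of all-or-none events, so the same function applies unchanged whether the events come from neural tissue or a silicon circuit.

```python
# Computing a firing rate from spike timestamps (in seconds).  The rate
# depends only on the timing of all-or-none events, not on the physical
# medium that produced them: the same function applies to spikes
# recorded from neurons or from a silicon-based circuit.

def spike_rate(spike_times, window):
    """Mean firing rate (spikes/s) over a (start, end) window."""
    start, end = window
    count = sum(1 for t in spike_times if start <= t < end)
    return count / (end - start)

# Hypothetical spike train: 8 spikes in half a second.
train = [0.012, 0.081, 0.143, 0.197, 0.256, 0.322, 0.401, 0.478]
print(spike_rate(train, (0.0, 0.5)))  # 16.0 spikes/s
```

Nothing in the computation refers to voltages, ion channels, or transistors; only the degrees of freedom that carry the timing information matter.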
In conclusion, generic computation includes digital computation, analog computation,
neural computation (which may or may not correspond closely to digital or analog
computation), and more.
Figure 2. Types of generic computation and their relations of class inclusion. Neural
computation is not represented because where it belongs is controversial.
So far, we have taxonomized different notions of computation according to how broad they
are, namely, how many processes they include as computational. Now we will consider a
second dimension along which notions of computation differ. Consider digital computation.
The digits are often taken to be representations, because it is assumed that computation
requires representation. A similarly semantic view may be taken with respect to generic
computation.
One of us has argued at length that computation per se, in the sense implicitly defined
by the practices of computability theory and computer science, does not require
representation, and that any semantic notion of computation presupposes a non-semantic
notion of computation [32, 50]. Meaningful words such as ‘avocado’ are both strings of digits
and representations, and computations may be defined over them. Nonsense sequences such
as ‘#r %h@’, which represent nothing, are strings of digits too, and computations may be
defined over them just as well.
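This point can be made concrete with a toy string computation (the shift rule and the alphabet are our illustrative choices): the rule is sensitive only to which symbol occupies each position, so it applies to a meaningful word and to a nonsense sequence alike.

```python
# A computation defined purely over strings of digits (here, characters):
# it replaces each symbol with the next one in a fixed alphabet.  The
# rule cares only about symbol identity and position, not meaning, so it
# applies identically to a word and to a nonsense string.
ALPHABET = "abcdefghijklmnopqrstuvwxyz#%@ "

def shift(s):
    return "".join(ALPHABET[(ALPHABET.index(c) + 1) % len(ALPHABET)]
                   for c in s)

print(shift("avocado"))   # operates on a representation...
print(shift("#r %h@"))    # ...and on a nonsense string, just as well
```

The function is well defined, and computable, whether or not its inputs represent anything.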
Although computation does not require representation, it certainly allows it. In fact,
generally, computations are carried out over representations. For instance, usually the states
manipulated by ordinary computers are representations.
Semantic vs. Non-Semantic Computation
To maintain generality, we will consider both semantic and non-semantic notions of
computation. In a nutshell, semantic notions of computation define computations as operating
over representations. By contrast, non-semantic notions of computation define computations
without requiring that the vehicles being manipulated be representations.
Let us take stock. We have introduced a taxonomy of varieties of computation (Figure
3). Within each category, we may distinguish a semantic notion of computation, which
presupposes that the computational vehicles are representations, and a non-semantic one,
which does not.
Figure 3. Types of computation and their relations of class inclusion. Once again, neural
computation is not represented because where it belongs is controversial.
Computationalism, Classicism, and Connectionism Reinterpreted
The distinctions introduced so far shed new light on the longstanding debates on
computationalism, classicism, and connectionism/computational neuroscience.
Computationalism, we have said, is the view that cognition is computation. We can now
appreciate that this may mean at least two different things: (i) cognition is digital computation,
and (ii) cognition is computation in the generic sense. Let us consider them in reverse order.
11 One terminological caveat. Later on, we will also speak of semantic notions of information. In the case of non-
natural semantic information, by ‘semantic’ we will mean the same as what we mean here, i.e., representational.
A representation is something that can mis-represent, i.e., may be unsatisfied or false. In the case of natural
information, by ‘semantic’ we will mean something weaker, which is not representational because it cannot mis-
represent (see below).
If ‘computation’ means generic computation, i.e., the processing of medium-
independent vehicles according to rules, then computationalism becomes the claim that
cognition is the manipulation of medium-independent vehicles according to rules. This is not a
trivial claim. Disputing ‘generic computationalism’ requires arguing that cognitive capacities
depend in an essential way on some specific physical property of cognitive systems other than
those required to distinguish computational vehicles from one another. For instance, one may
argue that the processing of spike rates by neural systems is essentially dependent on the use
of voltages instead of other physical properties to generate action potentials. Without those
specific physical properties, one may propose, it would be impossible to exhibit certain
cognitive capacities.
Something along these lines is sometimes maintained for phenomenal consciousness,
due to the qualitative feel of conscious states [e.g., 69]. Nevertheless, many cognitive scientists
maintain that consciousness is reducible to computation and information processing [cf. 70].
Since we lack the space to discuss consciousness, we will assume (along with mainstream
cognitive scientists) that if and insofar as consciousness is not reducible to computation in the
generic sense and information processing, cognition may be studied independently of
consciousness. If this is right, then, given that cognitive processes appear to be implemented
by spike trains and that spike trains appear to be medium-independent, we may conclude that
cognition is computation in the generic sense.
If ‘computation’ means digital computation, i.e., the processing of strings of digits
according to rules, then computationalism says that cognition is the manipulation of digits. As
we saw above, this is what McCulloch and Pitts argued. Importantly, it is also what both
classical computational theories and many connectionist theories of cognition maintain.
Finally, this view is what most critics of computationalism object to [1, 3]. Although digital
computationalism encompasses a broad family of theories, it is a powerful thesis with
considerable explanatory scope. Its exact explanatory scope depends on the precise kind of
digital computing systems that cognitive systems are postulated to be. If cognitive systems are
digital computing systems, they may be able to compute a smaller or larger range of functions
depending on their precise mechanistic properties.12
Let us now turn to another debate sometimes conflated with the one just described,
namely the debate between classicists and connectionists. By the 1970s, McCulloch and Pitts
were mostly forgotten. The dominant paradigm in cognitive science was classical (or symbolic)
AI, aimed at writing computer programs that simulate intelligent behavior without much
concern for how brains work. It was commonly assumed that digital computationalism is
committed to classicism, that is, the idea that cognition is the manipulation of linguistic, or
sentence-like, structures. On this view, cognition consists of performing computations on
sentences with a logico-syntactic structure akin to that of natural languages, but written in the
language of thought [71, 72].
12 Digital computing systems form a hierarchy of systems that are computationally more or less powerful. The
hierarchy is defined by the progressively larger classes of functions each class of systems can compute; these
include the functions computable by finite automata, pushdown automata, and Turing machines.
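The hierarchy in footnote 12 can be illustrated with two toy recognizers (the example languages are our own choices): a two-state finite automaton suffices to check the parity of 1s in a binary string, whereas recognizing balanced brackets requires the unbounded memory of a pushdown automaton's stack.

```python
# Illustrating the hierarchy of footnote 12: a finite automaton has only
# a bounded set of states, while a pushdown automaton adds an unbounded
# stack and can therefore compute strictly more functions.

def even_ones(s):
    """Finite automaton: accepts binary strings with an even number of 1s."""
    state = 0                       # two states suffice: 0 = even, 1 = odd
    for ch in s:
        if ch == "1":
            state = 1 - state
    return state == 0

def balanced(s):
    """Pushdown automaton: accepts balanced parentheses.  No fixed number
    of states suffices for this language; the stack (reduced here to a
    counter, since there is one bracket type) supplies the extra memory."""
    depth = 0
    for ch in s:
        depth += 1 if ch == "(" else -1
        if depth < 0:
            return False
    return depth == 0

print(even_ones("1011"))   # False: three 1s
print(balanced("(()())"))  # True
```

A Turing machine, in turn, computes functions (such as deciding whether two bracket sequences are permutations of each other under arbitrary rewriting rules) that no pushdown automaton can, completing the three rungs the footnote names.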