Continuing Commentary

Commentary on Paul Smolensky (1988) On the proper treatment of connectionism. BBS 11:1-74.

Abstract of the original article: A set of hypotheses is formulated for a connectionist approach to cognitive modeling. These hypotheses are shown to be incompatible with the hypotheses underlying traditional cognitive models. The connectionist models considered are massively parallel numerical computational systems that are a kind of continuous dynamical system. The numerical variables in the system correspond semantically to fine-grained features below the level of the concepts consciously used to describe the task domain. The level of analysis is intermediate between those of symbolic cognitive models and neural models. The explanations of behavior provided are like those traditional in the physical sciences, unlike the explanations provided by symbolic models. Higher-level analyses of these connectionist models reveal subtle relations to symbolic models. Parallel connectionist memory and linguistic processes are hypothesized to give rise to processes that are describable at a higher level as sequential rule application. At the lower level, computation has the character of massively parallel satisfaction of soft numerical constraints; at the higher level, this can lead to competence characterizable by hard rules. Performance will typically deviate from this competence since behavior is achieved not by interpreting hard rules but by satisfying soft constraints. The result is a picture in which traditional and connectionist theoretical constructs collaborate intimately to provide an understanding of cognition.

One of the central theses of Smolensky's target article was that connectionist models are subsymbolic. They are not to be thought of as symbolic, in the manner of traditional AI; nor, more controversially, are they to be thought of as neuronal models, models of neuronal processes. In response to his defense of this thesis, I have a question, an objection, and a suggestion.

The question is this: If the models are neither symbolic nor neuronal, then what is the reality to which the models are supposed to correspond? Traditionally, AI models were supposed to correspond to actual cognitive processes. According to authors such as Newell & Simon (1976) and McCarthy (1979) - defenders of strong AI - the appropriately programmed computer is thereby supposed actually to have mental states. On the cognitivist version of weak AI - as defended, for example, by Fodor (1980) - the appropriately programmed computer does not actually have mental states, but it provides a cognitive model of our mental states, because our mental states have a computational structure and our mental processes are computational processes operating over the computational, that is, syntactical, features of our mental states. Now much of the appeal of the original connectionist models is that they were "neuronally inspired." That is, they were supposed to model actual or possible brain processes, not mental processes. But if Smolensky denies that they are models of brain processes, then the question naturally arises: What are they models of? If they do not correspond to a mental reality, conscious or unconscious, and they do not correspond to a neuronal reality, then what evidence do we have that there is anything in actual human cognition to which the connectionist models do correspond? (It is worth emphasizing here the extent to which Smolensky's position differs from the position adopted by most connectionist authors [e.g., McClelland & Rumelhart 1988].)
The second point I wish to make is a genuine objection, but it is related to my initial question. The objection is simply this: Smolensky maintains that the subsymbolic model, as he describes it, is nonetheless cognitive rather than, for example, physical. "The models," he says, "embody principles of cognition rather than principles of physics." But his answer to the question, What is it about the models that makes them cognitive rather than physical? is woefully inadequate. He tells us that a crucial property of cognitive systems is that they maintain at a constant level a sufficient number of global conditions. That is, they maintain a large range of goals under a wide range of conditions. He does state that this is only a necessary condition for a system to be cognitive, but it is important to emphasize how far it is from being sufficient. To begin with, if we take "goal" in the ordinary sense of a desired objective, then the criterion would be circular, because, of course, we would have to know that the system had mental states (that is, desires) in order to know that it had goals. His discussion of the river going downhill makes it clear that he does not intend "goal" in such an explicitly mentalistic sense. He simply means behavior which is as if it were directed to a goal. But if that is what is meant by having a goal, then there are lots of systems that have a large range of goals under a wide range of conditions but which are not in any literal sense cognitive. Thus, for example, the nonmental elements in any human or animal - mitosis, meiosis, digestion, respiration, blood circulation, antigen-antibody relations, and salivation - add up to a system with a very wide range of goals, and the system is able to pursue these goals under a very wide range of conditions, but I take it there is nothing cognitive about this system. Or, if animals seem too complex to describe adequately, consider any plant, such as a tree. The tree analogously has a large range of goals that it will pursue under a wide range of conditions - these goals include growth, reproduction, survival, photosynthesis, and the growing and shedding of leaves - but I take it there is nothing cognitive about such a system.

I have so far asked a question (To what reality do the connectionist models correspond?) and made an objection (Smolensky's account of what makes the models cognitive is inadequate). On the basis of these, I wish to make a suggestion: If he can answer the question, he should not worry about the objection. If he can point to a human reality in cognitive processing to which the connectionist models correspond, whether neurophysiological, mental, symbolic, or something else, then he should not worry about the fact that he does not have a clear criterion to distinguish the cognitive from the noncognitive. He can let the future developments of cognitive science decide what is really cognitive and what is not. I believe that the main reason he is worried about whether or not his system is genuinely cognitive is that there is no clear answer to the question, To what reality does it actually correspond? If he had an answer to that question, he would not need to worry about the objection.

Smolensky must be commended for the intellectual tour de force he exhibits in his BBS target article (Smolensky 1988). Yet despite his rigor there remains a serious inconsistency in his "proper treatment of connectionism."
When Smolensky addresses the question of what aspect of reality networks model, he treats them as computational systems. When he poses the question "What makes these networks cognitive?" he treats them as dynamical physical systems, but he makes a mysterious appeal to complexity to differentiate them from other noncognitive dynamical physical systems. This dual treatment of networks is a source of considerable confusion, and the appeal to complexity is not very enlightening. In what follows I offer a brief discussion of both points.

Computational versus dynamical physical systems. When Smolensky asks the question "What aspect of reality do connectionist networks model?" he chooses to treat them as computational systems. On this account the networks are nonsymbolic information-processing devices constrained by their physical configuration. Neurons are axiomatized as black boxes with well-defined functional input/output characteristics, and the computational properties of large systems that can be assembled from individual neurons are investigated. In saying that the networks are treated as computational devices we mean that there is imposed upon the physical network (in the vocabulary of Pylyshyn 1984) an Instantiation Function (IF) and a Semantic Function (SF). The IF maps an equivalence class of physical states onto a specific computational state (e.g., voltage levels are mapped onto numerical activation levels, physical connection strengths are mapped onto weights). The physical states are governed by physical law, and there could be different physical stories to tell about each member of the equivalence class. But when these states are mapped onto the computational state, there is a single nonphysical story to be told about them. The Semantic Function then maps these computational states onto some domain of interpretation. In the case of these networks the domain of interpretation tends to be things like "microfeatures," hypotheses, inferences, and so on.

Smolensky reverts to treating connectionist networks as dynamical physical systems when the question "What makes connectionist models cognitive?" is asked. I take it that, according to this view, connectionist networks are once again a collection of axiomatized neural units, but there is no IF or SF attached to them. This means that the network is a dynamical physical system evolving through time - like any other physical system - not a computational system processing information. The evolution of the system is explained by a direct appeal to physical law. The story one would tell in such a case would not be unlike the story told of billiard balls colliding on the surface of a pool table.

In both of these cases Smolensky is attempting to characterize the same aspect of the world, but surely he is making very different ontological claims about it. On the first account the commitment is to the computational states of the network, and on the second account the commitment is to the physical states. The difference between computational states and physical states is the difference between something that is potentially semantically evaluable and something that is not. It is the difference between physics and semantics. The dilemma Smolensky faces is the classical one. The closer one gets to physics, the more difficult it becomes to see mental phenomena, though one can be reasonably certain of the ontology of the entities one is postulating. As one embraces representations, one seems closer to intentionality, but the ontology becomes suspect.
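A minimal sketch may make the two mappings concrete. It is offered only as an illustration under invented assumptions: the voltage values, the threshold, and the "microfeature" labels are all made up, and the function names merely echo the IF/SF terminology rather than any established notation.

```python
# Toy illustration of Pylyshyn's two mappings. All values are invented.

voltages = [0.31, 0.29, 0.93]  # physical level: measured unit voltages

def instantiation_function(v):
    """IF: map an equivalence class of physical states (a voltage range)
    onto a single computational state (an activation level)."""
    return 1.0 if v > 0.5 else 0.0  # toy threshold: many voltages, one state

def semantic_function(activations):
    """SF: map computational states onto a domain of interpretation,
    here a set of made-up 'microfeatures'."""
    features = ["rounded", "edible", "red"]
    return dict(zip(features, activations))

activations = [instantiation_function(v) for v in voltages]
print(activations)                     # [0.0, 0.0, 1.0]
print(semantic_function(activations))  # {'rounded': 0.0, 'edible': 0.0, 'red': 1.0}
```

The same three voltages thus support two stories: a physical one, in which each number evolves under physical law, and a computational-semantic one, in which microfeatures are active or inactive. The ontological commitment differs according to which story one tells.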
Smolensky has chosen to play both sides of the fence. But surely the central problem in cognitive science is that of getting from one side to the other - preferably by showing a continuum between the physics and the semantics (rather than by denying the reality of one or the other). What is required is an account of information embedded in and constrained by physics. Current theories of computation are at best inadequate and at worst irrelevant for this purpose. They take for granted the mapping from the physical to the computational states. The constraints on this mapping are surely the central issues in any theory of cognitive information processing. The current approach bifurcates the physical and the computational aspects and sets them apart as two separate realms, one belonging to the earthly world of physics and the other to the Platonic world of mathematics. Ultimately we need a theory in which we can talk about computation as a set of processes inseparable from the physical world. The shifting of theoretical stances between the physical and the computational, though convenient, just does not seem like the type of strategy that is going to cut it in the long run.
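Both commentaries presuppose the abstract's picture of lower-level computation as massively parallel satisfaction of soft numerical constraints. A toy sketch of what such satisfaction amounts to may help; it is not drawn from the target article, the weights are invented, and exhaustive search merely stands in for the parallel relaxation dynamics Smolensky actually describes.

```python
import itertools

# Toy network: three units in states -1 or +1; weights encode soft,
# possibly conflicting constraints between pairs of units. All numbers
# are invented for illustration.
w = {(0, 1): 1.0,   # units 0 and 1 "prefer" to agree
     (0, 2): -1.5,  # units 0 and 2 "prefer" to disagree (stronger)
     (1, 2): 0.5}   # weak preference that units 1 and 2 agree

def harmony(state):
    """Overall goodness: sum of w_ij * s_i * s_j over connected pairs."""
    return sum(wij * state[i] * state[j] for (i, j), wij in w.items())

# Exhaustive search over the 8 binary states stands in for settling.
best = max(itertools.product([-1, 1], repeat=3), key=harmony)
print(best, harmony(best))  # e.g. (-1, -1, 1) with harmony 2.0
```

Note that no state satisfies all three constraints at once; the settled state is the best compromise, which is what makes the constraints "soft" rather than hard rules.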