Article
PDF available

A General Framework for Parallel Distributed Processing

Authors: David E. Rumelhart, Geoffrey E. Hinton, James L. McClelland
... In order to assess the relationship between FSO and PDP, I now briefly review the components of the PDP model as introduced by Rumelhart, Hinton, and McClelland in [4]. For each component I highlight similarities and "differentiae" [15], namely the specific differences with respect to elements of the FSO model [16]. ...
... In what follows, uncited quotes are to be assumed to be from [4]. - "A set of processing units". ...
... Pictures such as in Fig. 1 represent the space of all possible states of activation of an FSO. Actors can request services or provide services, which corresponds to the input and output units in [4]. The visibility of actors is restricted by the FSO concept of community: a set of actors in physical or logical proximity, interpreted for the sake of simplicity as a locus (for instance a room, a building, or a city). Non-visible actors correspond to the hidden units of PDP [4]. ...
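To make the mapping concrete, here is a minimal sketch (not from the cited paper) of the PDP components the excerpt refers to: a set of units with a state of activation, a pattern of connectivity, and a propagation/activation rule, with the units partitioned into input, hidden, and output roles. The role split, the weighted-sum propagation rule, and the tanh activation rule are all illustrative assumptions, not the authors' specification.

```python
import numpy as np

# Illustrative sketch of the PDP components named above: units, a state of
# activation, a pattern of connectivity (weight matrix), and a propagation/
# activation rule. The role split mirrors the excerpt: visible input and
# output units plus hidden (non-visible) units. All choices are assumptions.
rng = np.random.default_rng(0)

n_units = 8
roles = ["input"] * 3 + ["hidden"] * 3 + ["output"] * 2  # hypothetical split

a = np.zeros(n_units)                         # state of activation
W = rng.normal(0.0, 0.5, (n_units, n_units))  # pattern of connectivity

def step(a, W, external_input):
    net = W @ a + external_input  # propagation rule: weighted sum of inputs
    return np.tanh(net)           # activation rule: a squashing nonlinearity

x = np.zeros(n_units)
x[:3] = [1.0, 0.5, -0.2]          # drive only the input units
for _ in range(5):                # let the activation state evolve
    a = step(a, W, x)

for role, act in zip(roles, a):   # inspect each unit by its role
    print(role, round(float(act), 3))
```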
Preprint
A strict interpretation of connectionism mandates complex networks of simple components. The question here is: is this simplicity to be interpreted in absolute terms? I conjecture that absolute simplicity might not be an essential attribute of connectionism, and that it may be effectively exchanged with a requirement for relative simplicity, namely simplicity with respect to the current organizational level. In this paper I provide some elements for the analysis of the above question. In particular I conjecture that fractally organized connectionist networks may provide a convenient means to achieve what Leibniz calls an "art of complication", namely an effective way to encapsulate complexity and practically extend the applicability of connectionism to domains such as sociotechnical system modeling and design. Preliminary evidence for my claim comes from the software architecture designed for the telemonitoring service of the Flemish project "Little Sister".
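The notion of "relative simplicity" lends itself to a small structural sketch: a component that is simple when viewed from its own organizational level, yet may itself encapsulate a whole sub-network. The sketch below is a hypothetical illustration of that recursive encapsulation, not the Little Sister architecture.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of "relative simplicity": a component looks like a
# single simple unit from one level up, yet may itself be a whole network.
@dataclass
class Unit:
    """A leaf component: simple in absolute terms."""
    def activate(self, x: float) -> float:
        return max(0.0, x)        # a trivial transfer function

@dataclass
class Network:
    """A network whose parts are Units or other Networks; from the next
    organizational level up it presents the same interface as a Unit."""
    parts: list = field(default_factory=list)

    def activate(self, x: float) -> float:
        for p in self.parts:      # chain the components' outputs
            x = p.activate(x)
        return x

# Two organizational levels: 'inner' is complex inside, "simple" to 'outer'.
inner = Network([Unit(), Unit()])
outer = Network([Unit(), inner, Unit()])
print(outer.activate(1.5))
```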
... An elegant mathematical proof of these limitations is given in Hinton (1989). Given these limitations of the aforementioned models, in the middle of that decade articles were published that introduced additional complexity into the models of previous years, such as hidden layers of neurons, which extended the learning and recognition capabilities of earlier memory models (see, for example, Rumelhart, Hinton and McClelland 1986). ...
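The point about hidden layers can be made concrete with the textbook example of XOR, a mapping no single-layer model can represent but that one hidden layer suffices for. The sketch below is a generic illustration of that capability, not a reconstruction of any specific model from the cited literature.

```python
import numpy as np

# Sketch of the point made above: a single-layer model cannot learn a
# non-linearly-separable mapping such as XOR, while one hidden layer can.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR targets

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)    # hidden layer
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)    # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                              # plain gradient descent
    h = sigmoid(X @ W1 + b1)                       # hidden activations
    out = sigmoid(h @ W2 + b2)                     # network output
    d_out = (out - y) * out * (1 - out)            # backprop through sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;  b1 -= 0.5 * d_h.sum(0)

print(out.round(2).ravel())   # ideally ≈ [0, 1, 1, 0] with this seed
```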
Preprint
Full-text available
In 1929 Jan Lukasiewicz used, apparently for the first time, his Polish notation to represent the operations of formal logic. This is a parenthesis-free notation, which also implies that logical functions are operators preceding the variables on which they act. In the 1980s, within the framework of research into mathematical models of the parallel processing of neural systems, a group of operators emerged, neurally inspired and based on matrix algebra, which computed logical operations automatically. These matrix operators reproduce the order of operators and variables of Polish notation. These logical matrices can also generate a three-valued logic with broad similarities to Lukasiewicz's three-valued logic. In this paper, a parallel is drawn between relevant formulas represented in Polish notation and their counterparts in terms of neurally based matrix operators. Lukasiewicz's three-valued logic, shown in Polish notation, has several points of contact with what the matrices produce when they process uncertain truth vectors. This formal parallelism opens up scientific and philosophical perspectives that deserve to be further explored.
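For readers who want the flavor of such operators, here is a hedged sketch in the style of vector logic: truth values as orthonormal vectors and connectives as matrices applied before their arguments, mirroring the prefix order of Polish notation. The particular matrices and the reading of 0.5 as the third truth value are standard in that formalism, but this is an assumption-laden illustration, not the paper's own construction.

```python
import numpy as np

# Sketch of neurally based matrix logic: truth values are orthonormal
# vectors, connectives are matrices applied *before* their arguments,
# echoing Polish prefix order ("Kpq" for conjunction, "Np" for negation).
s = np.array([1.0, 0.0])          # "true" vector
n = np.array([0.0, 1.0])          # "false" vector

N = np.outer(n, s) + np.outer(s, n)                  # negation matrix
C = (np.outer(s, np.kron(s, s)) + np.outer(n, np.kron(s, n))
     + np.outer(n, np.kron(n, s)) + np.outer(n, np.kron(n, n)))  # conjunction
D = (np.outer(s, np.kron(s, s)) + np.outer(s, np.kron(s, n))
     + np.outer(s, np.kron(n, s)) + np.outer(n, np.kron(n, n)))  # disjunction

# Polish "Kpq" (and p q) becomes C @ kron(p, q): operator first, arguments after.
print(N @ s)                      # -> n, i.e. "not true = false"
print(C @ np.kron(s, n))          # -> n, i.e. "true and false = false"

# Uncertain truth vectors: a mixture p*s + (1-p)*n. Reading off the first
# component gives a degree of truth, with 0.5 playing the role of the third
# value in a Lukasiewicz-like three-valued logic.
u = 0.5 * s + 0.5 * n
print((C @ np.kron(u, u))[0])     # -> 0.25, conjunction of two uncertain inputs
```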
Article
Full-text available
Continuing Commentary
Commentary on Paul Smolensky (1988) On the proper treatment of connectionism. BBS 11:1-74.
Abstract of the original article: A set of hypotheses is formulated for a connectionist approach to cognitive modeling. These hypotheses are shown to be incompatible with the hypotheses underlying traditional cognitive models. The connectionist models considered are massively parallel numerical computational systems that are a kind of continuous dynamical system. The numerical variables in the system correspond semantically to fine-grained features below the level of the concepts consciously used to describe the task domain. The level of analysis is intermediate between those of symbolic cognitive models and neural models. The explanations of behavior provided are like those traditional in the physical sciences, unlike the explanations provided by symbolic models. Higher-level analyses of these connectionist models reveal subtle relations to symbolic models. Parallel connectionist memory and linguistic processes are hypothesized to give rise to processes that are describable at a higher level as sequential rule application. At the lower level, computation has the character of massively parallel satisfaction of soft numerical constraints; at the higher level, this can lead to competence characterizable by hard rules. Performance will typically deviate from this competence since behavior is achieved not by interpreting hard rules but by satisfying soft constraints. The result is a picture in which traditional and connectionist theoretical constructs collaborate intimately to provide an understanding of cognition.
One of the central theses of Smolensky's target article was that connectionist models are subsymbolic. They are not to be thought of as symbolic, in the manner of traditional AI; nor, more controversially, are they to be thought of as neuronal models, models of neuronal processes. In response to his defense of this thesis, I have a question, an objection, and a suggestion. The question is this: If the models are neither symbolic nor neuronal, then what is the reality to which the models are supposed to correspond? Traditionally, AI models were supposed to correspond to actual cognitive processes. According to authors such as Newell & Simon (1976) and McCarthy (1979), defenders of strong AI, the appropriately programmed computer is thereby supposed actually to have mental states. On the cognitivist version of weak AI, as defended, for example, by Fodor (1980), the appropriately programmed computer does not actually have mental states, but it has a cognitive model of our mental states, because our mental states have a computational structure and our mental processes are computational processes operating over the computational, that is, syntactical, features of our mental states. Now much of the appeal of the original connectionist models is that they were "neuronally inspired." That is, they were supposed to model actual or possible brain processes, not mental processes. But if Smolensky denies that they are models of brain processes, then the question naturally arises: What are they models of? If they do not correspond to a mental reality, conscious or unconscious, and they do not correspond to a neuronal reality, then what evidence do we have that there is anything in actual human cognition to which the connectionist models do correspond? (It is worth emphasizing here the extent to which Smolensky's position differs from the position adopted by most connectionist authors [e.g. McClelland & Rumelhart 1988].)
The second point I wish to make is a genuine objection, but it is related to my initial question. The objection is simply this: Smolensky maintains that the subsymbolic model, as he describes it, is nonetheless cognitive rather than, for example, physical. "The models," he says, "embody principles of cognition rather than principles of physics." But his answer to the question, What is it about the models that makes them cognitive rather than physical?, is woefully inadequate. He tells us that a crucial property of cognitive systems is that they maintain at a constant level a sufficient number of global conditions. That is, they maintain a large range of goals under a wide range of conditions. He does state that this is only a necessary condition for a system to be cognitive, but it is important to emphasize how far it is from being sufficient. To begin with, if we take "goal" in the ordinary sense of a desired objective, then the criterion would be circular; because, of course, we would have to know that the system had mental states (that is, desires) in order to know that it had goals. His discussion of the river going downhill makes it clear that he does not intend "goal" in such an explicitly mentalistic sense. He simply means behavior which is as if it were directed to a goal. But now if that is what is meant by having a goal, then there are lots of systems that have a large range of goals under a wide range of conditions, but which are not in any literal sense cognitive. Thus, for example, the non-mental elements in any human or animal (mitosis, meiosis, digestion, respiration, blood circulation, antigen-antibody relations, and salivation) add up to a system with a very wide range of goals, and the system is able to pursue these goals under a very wide range of conditions, but I take it there is nothing cognitive about this system. Or, if animals seem too complex to describe adequately, consider any plant, such as a tree. The tree has analogously a large range of goals, and it will pursue these under a wide range of conditions (these goals include growth, reproduction, survival, photosynthesis, growing and shedding of leaves, and so forth), but I take it there is nothing cognitive about such a system.
I have so far asked a question (To what reality do the connectionist models correspond?) and made an objection (Smolensky's account of what makes the models cognitive is inadequate). Now, on the basis of these, I wish to make a suggestion: If he can answer the question, he should not worry about the objection. If he can point to a human reality in cognitive processing that the connectionist models correspond to, whether neurophysiological, mental, symbolic, or something else, then he shouldn't worry about the fact that he hasn't got a clear criterion to distinguish between the cognitive and the noncognitive. He can let the future developments of cognitive science decide what is really cognitive and what is not. I believe that the main reason he is worried about whether or not his system is genuinely cognitive is that there is no clear answer to the question, To what reality does it actually correspond? If he had an answer to that question, he wouldn't need to worry about the objection.
Smolensky must be commended for the intellectual tour de force he exhibits in his BBS target article (Smolensky 1988). Yet despite his rigor there remains a serious inconsistency in his "proper treatment of connectionism."
When Smolensky addresses the question of what aspect of reality networks model, he treats them as computational systems. When he poses the question "What makes these networks cognitive?" he treats them as dynamical physical systems, but he makes a mysterious appeal to complexity to differentiate them from other noncognitive dynamical physical systems. This dual treatment of networks is a source of considerable confusion, and the appeal to complexity is not very enlightening. In what follows I offer a brief discussion of both points.
Computational versus dynamical physical systems. When Smolensky asks the question "What aspect of reality do connectionist networks model?" he chooses to treat them as computational systems. On this account the networks are nonsymbolic information-processing devices constrained by their physical configuration. Neurons are axiomatized as black boxes with well-defined functional input/output characteristics, and the computational properties of large systems which can be assembled from individual neurons are investigated. In saying that the networks are treated as computational devices we mean that there is imposed upon the physical network (in the vocabulary of Pylyshyn 1984) an Instantiation Function (IF) and a Semantic Function (SF). The IF maps an equivalence class of physical states onto a specific computational state (e.g., voltage levels are mapped onto numerical activation levels, physical connection strengths are mapped onto weights). The physical states are governed by physical law, and there could be different physical stories to tell about each member of the equivalence class. But when these states are mapped onto the computational state there is a single nonphysical story to be told about them. The Semantic Function then maps these computational states onto some domain of interpretation. In the case of these networks the domain of interpretation tends to be things like "microfeatures," hypotheses, inferences, and so on.
Smolensky reverts to treating connectionist networks as dynamical physical systems when the question "What makes connectionist models cognitive?" is asked. I take it that, according to this view, connectionist networks are once again a collection of axiomatized neural units, but there is no IF or SF attached to them. This means that the network is a dynamical physical system evolving through time, like any other physical system, not a computational system processing information. The evolution of the system is explained by a direct appeal to physical law. The story one would tell in such a case would not be unlike the story told of billiard balls colliding on the surface of a pool table. In both of these cases Smolensky is attempting to characterize the same aspect of the world, but surely he is making very different ontological claims about it. On the first account the commitment is to the computational states of the network, and on the second account the commitment is to the physical states. The difference between computational states and physical states is the difference between something that is potentially semantically evaluable and something that is not. It is the difference between physics and semantics. The dilemma Smolensky faces is the classical one. The closer one gets to physics, the more difficult it becomes to see mental phenomena, though one can be reasonably certain of the ontology of the entities one is postulating. As one embraces representations one seems closer to intentionality, but the ontology becomes suspect.
Smolensky has chosen to play both sides of the fence. But surely the central problem in cognitive science is that of getting from one side to the other, preferably by showing a continuum between the physics and the semantics (rather than by denying the reality of one or the other). What is required is an account of information embedded in and constrained by physics. Current theories of computation are at best inadequate and at worst irrelevant for this purpose. They take for granted the mapping from the physical to the computational states. The constraints on this mapping are surely the central issues in any theory of cognitive information processing. The current approach bifurcates the physical and the computational aspects and sets them apart as two separate realms: one belonging to the earthly world of physics, and the other to the Platonic world of mathematics. Ultimately we need a theory in which we can talk about computation as a set of processes inseparable from the physical world. The shifting of theoretical stances between physical and computational, though convenient, just doesn't seem like the type of strategy that is going to cut it in the long run.
Article
Full-text available
The proliferation of AI systems across all domains of life, as well as the complexification and opacity of algorithmic techniques, epitomised by the burgeoning field of Deep Learning (DL), call for new methods in the Humanities for reflecting on the techno-human relation in a way that places the technical operation at its core. Grounded in the work of the philosopher of technology Gilbert Simondon, this paper puts forward individuation theory as a valuable approach to reflect on contemporary information technologies, offering an analysis of the functioning of deep neural networks (DNNs), a type of data-driven computational model at the core of major breakthroughs in AI. The purpose of this article is threefold: (1) to demonstrate how a joint reading of Simondon’s mechanology and individuation theory, foregrounded in the Simondonian concept of information, can cast new light on contemporary algorithmic techniques by considering their situated emergence as opposed to technical lineage; (2) to suspend a predictive framing of AI systems, particularly DL techniques, so as to probe into their technical operation, accounting for the data-driven individuation of these models and the integration of potentials as functionality; and finally, (3) to argue that individuation theory might in fact de-individuate AI, in the sense of disassembling the already-there, the constituted, paving the way for questioning the potentialities for data and their algorithmic relationality to articulate the unfolding of everyday life.
Article
This study proposes portfolio construction strategies based on novel sentiment, ESG and SDG scores. We utilize natural language processing to establish a novel daily score system that mitigates concerns of different rating standards. The portfolios constructed are optimized via machine learning algorithms on a monthly basis using daily historical returns. Utilizing the equal‐weighted portfolios as benchmarks, we empirically show that our optimized portfolios exhibit better trading performance in both the SPX500 and STOXX600 indices. The findings demonstrate that nonlinear models such as random forests, neural networks, and genetic algorithms can perform better than other machine learning models in portfolio management.
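As a rough illustration of the kind of walk-forward, monthly-rebalanced pipeline the abstract describes, the sketch below trains a random forest cross-sectionally on trailing returns plus a (here synthetic) sentiment score, then compares the resulting portfolio with an equal-weighted benchmark. All data, features, and parameters are stand-ins; the study's actual score construction and optimizers are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical walk-forward sketch: predict each asset's next-month return
# from trailing daily returns and a synthetic sentiment score, overweight
# the best predictions, and track an equal-weighted benchmark alongside.
rng = np.random.default_rng(42)
n_assets, n_months, days = 20, 36, 21
rets = rng.normal(0.0003, 0.01, (n_months * days, n_assets))  # daily returns
sentiment = rng.normal(0, 1, (n_months, n_assets))            # monthly scores

ew_curve, ml_curve = 1.0, 1.0
model = RandomForestRegressor(n_estimators=100, random_state=0)
for m in range(12, n_months - 1):                 # monthly rebalancing loop
    # features: trailing-month mean return and sentiment; target: next month
    X_train = np.column_stack([rets[(m - 1) * days:m * days].mean(0),
                               sentiment[m - 1]])
    y_train = rets[m * days:(m + 1) * days].mean(0)
    model.fit(X_train, y_train)                   # cross-sectional fit
    X_now = np.column_stack([rets[m * days:(m + 1) * days].mean(0),
                             sentiment[m]])
    pred = model.predict(X_now)
    w = np.clip(pred, 0.0, None)                  # long-only weights
    w = w / w.sum() if w.sum() > 0 else np.full(n_assets, 1.0 / n_assets)
    nxt = rets[(m + 1) * days:(m + 2) * days].sum(0)  # next month's returns
    ml_curve *= 1.0 + w @ nxt
    ew_curve *= 1.0 + nxt.mean()                  # equal-weighted benchmark
print(f"ML portfolio: {ml_curve:.3f}  equal-weight benchmark: {ew_curve:.3f}")
```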