Conference Paper

Real brains and artificial intelligence

... However, it is also true that many researchers are highly skeptical of the claim that artificial neural networks are more biologically plausible than are other kinds of models (Reeke & Edelman, 1988). For instance, one can generate long lists of properties of artificial neural networks that are clearly not true of the brain (Crick & Asanuma, 1986; Smolensky, 1988). ...
... As a result, PDP models are often vilified as oversimplifications by neuroscientists; some have called them stick and ball models (Douglas & Martin, 1991). Reeke and Edelman (1988) offered this blunt assessment of the neurophysiological relevance of PDP connectionism: "These new approaches, the misleading label 'neural network computing' notwithstanding, draw their inspiration from statistical physics and engineering, not from biology" (p. 144). ...
... The questions we have raised are not new, and have been addressed by many authors (e.g. Dreyfus 1979, Winograd & Flores 1987, Varela 1988, Stewart 1994, Reeke & Edelman 1988, Malcolm & Smithers 1990, Harnad 1990). However, the target is traditionally restricted to the so-called "cognitivist" approach. ...
... We generally agree with the criticisms formulated by the constructivists (Varela 1988, Varela 1989, Dreyfus 1979, Winograd & Flores 1987, Reeke & Edelman 1988). Nevertheless, they do not fully satisfy us, because they offer no avenue for addressing the problem of design, which is essential for robotics. ...
Article
Robot autonomy will be achieved when robots can act in complex environments without the need for human intervention. However, the traditional methods of robot programming rely on models having very restrictive conditions of validity. The problem of inexpectation arises when these conditions are not met in the real situation. We argue that robot autonomy cannot be achieved without a systematic way of taking inexpectation into account, and we explain why the classical hierarchical, behavioural, or adaptive approaches of robotics are too limited to tackle this problem in natural, not carefully controlled environments. We then suggest three paths for escaping some of those limits. Our first point is theoretical: a robot should be able to acknowledge and model its partial ignorance of its world. For this we advocate the theory of "probability as logic" (Jaynes 1995) as a fundamental framework. Our second point is methodological: we propose an incremental approach for building robots, i.e. a systematic method for structural evolution, the motor of which is the occurrence of unexpected events. The concern underlying this approach is the origin and genesis of representations more than their performance. Our third and last point is conceptual: we propose a notion of "contingent representation", defining a representation by its structure rather than its function. The representational capacity is intrinsic to the structure, but the representational content (interpretation) is context-dependent. The classical notion of representation has led some authors to reject the very notion of representation, thus giving up an inescapable guide for design as well. Contingent representation is an attempt to tackle the problem of design within new approaches as yet unexploited in AI, such as that of operational closure.
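The "probability as logic" framework the abstract invokes treats degrees of belief as probabilities updated by Bayes' rule. A minimal sketch of how a robot might model its partial ignorance: the door scenario, sensor model, and all numbers below are illustrative assumptions, not taken from the paper.

```python
# Minimal Bayesian-update sketch (Jaynes-style "probability as logic").
# Hypotheses, sensor likelihoods, and priors are hypothetical examples.

def bayes_update(prior, likelihoods):
    """Return the posterior over hypotheses after one observation."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# The robot is initially ignorant of whether a door is open or closed.
prior = {"open": 0.5, "closed": 0.5}

# Hypothetical sensor model: probability of reading "obstacle" under each state.
likelihood_obstacle = {"open": 0.1, "closed": 0.8}

posterior = bayes_update(prior, likelihood_obstacle)
print(posterior)  # belief shifts strongly toward "closed"
```

An unexpected reading simply produces a low-probability observation, which the same update rule absorbs; nothing in the machinery has to be rewritten when expectations fail.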
... If linguistic communication is understood as exchange of information, and if natural language is viewed not as a system of arbitrary signs established by convention, but as distinctly human species-specific behavior, then we must ask the question, "What is the biological function of exchanging information?" The question itself doesn't make sense if information is understood in the Shannonian way, because there is no information pre-existing in the world (Reeke and Edelman, 1988). When linguists speak of exchange of information in communication, what they usually mean is knowledge. ...
... The impact of autonomous technologies such as Artificial Intelligence (AI), powered and guided by Machine Learning algorithms, has been felt across a myriad of industries since the late 1980s (Reeke et al., 1988; Hunter et al., 2018; Jeong, 2018; Agrawal et al., 2019). AI technologies are unparalleled in their ability to complete computation-intensive tasks from both a speed and accuracy standpoint and, as a result, have been leveraged so as to positively amplify & augment human working efficiencies over the past decades. ...
Research
Full-text available
An analysis of several prominent Neural Network training methods was undertaken in an effort to positively augment Real Estate management tasks. Specifically, we examine supervised learning techniques that utilize clustering and classification algorithms, pattern association, forecasting, and other relevant statistical analysis techniques. Additionally, insights into how various Neural Network node connection weights and scoring criteria impact a network's overall effectiveness are discussed at length.
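The interplay the abstract mentions between connection weights and a scoring criterion can be illustrated with a single artificial neuron; the feature values, weight vectors, and squared-error score below are hypothetical stand-ins, not the paper's actual networks or data.

```python
# Toy illustration: how connection weights change a neuron's output,
# and how a scoring criterion (squared error) ranks candidate weights.

def neuron(inputs, weights, bias):
    """Weighted sum of inputs passed through a step activation."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if s > 0 else 0.0

# Hypothetical property-record features, scaled to [0, 1].
x = [0.8, 0.2, 0.5]
target = 1.0  # desired output for this record

# Two candidate weight vectors; the score identifies the better one.
for weights in ([0.1, 0.1, 0.1], [1.0, -0.5, 0.5]):
    y = neuron(x, weights, bias=-0.3)
    error = (target - y) ** 2
    print(weights, y, error)
```

Training amounts to adjusting the weights so that the score improves across many such records; the multi-layer case repeats this structure with intermediate neurons.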
Chapter
The visualization of a few proteins or specific expressed genes at a time is known as spatial transcriptomics (ST). Single-cell RNA sequencing (scRNA-seq) is an advanced sequencing method that focuses on cells at a singular level, enabling the discovery of rare cell populations, but often misses information about cell surroundings. ST and single-cell sequencing help us discover cell–cell interactions and cell organization in tissues. This allows us to understand how gene expression connects to the structure and function of systems. Deep learning (DL) drastically reduces the complexity of raw input and predicts spatial patterns from gene expression data. This chapter presents recent advancements in artificial intelligence (AI)-driven ST analysis, their challenges, and prospects.
Conference Paper
Full-text available
From over eighteen million athletes who gamify running through Nike+, to the ~97% of teens in the United States who play video games daily, augmented by the growing subset of educational platforms and technologies that seek to realize the benefits of game thinking and mechanics to more effectively engage users, catalyze loyalty, and solve all manner of business development shortcomings, competition and reward-based positive reinforcement have steadily risen to prominence over the last two decades, with the most recent adoption, for better or worse, stemming from myriad financial service platforms, tools, and smartphone applications. Seemingly in unison, a rapid proliferation of artificial-intelligence-enabled solutions and features has made its way into a range of industries, with the financial services sector chief among them. The ostensibly 'clear' advantages to be had from these complementary emergent functionalities must be weighed against the growing chorus of critics who have taken to the term 'exploitationware' as a catch-all when describing gamification, in tandem with calls to address the negative amplification potential inherent in autonomous technologies. As such, this paper seeks to identify and explore the potential ramifications of user interface and experience design decisions for financial technologies when built for retail rather than institutional investors. Namely, are there root cause(s) of bandwagon-effect catalyzation, e.g., noise trading, that originate from gamification functionalities and/or elements of artificial intelligence?
Research
Full-text available
This paper posits the benefits of a Narrow Artificial Intelligence solution, trained via a multilayer Neural Network, for any and all inquiries that fall under the theoretical purview of a relationship between a landlord and their tenant(s).
Research
Full-text available
This paper posits the benefits of a Predictive Artificial Intelligence solution, trained via a multilayer Neural Network, for any and all inquiries that fall under the theoretical purview of a relationship between a landlord and their tenant(s).
Article
Besides failing for the reasons Brette gives, codes fail to help us understand brain function because codes imply algorithms that compute outputs without reference to the signals' meanings. Algorithms cannot be found in the brain, only manipulations that operate on meaningful signals and that cannot be described as computations, that is, sequences of predefined operations.
Article
Full-text available
This article systematically analyzes the problem of defining "artificial intelligence." It starts by pointing out that a definition influences the path of the research, then establishes four criteria for a good working definition of a notion: being similar to its common usage, drawing a sharp boundary, leading to fruitful research, and being as simple as possible. According to these criteria, the representative definitions in the field are analyzed. A new definition is proposed, according to which intelligence means "adaptation with insufficient knowledge and resources." The implications of this definition are discussed, and it is compared with the other definitions. It is claimed that this definition sheds light on the solution of many existing problems and sets a sound foundation for the field.
Article
Full-text available
There exists a dynamic interaction between the world of information and the world of concepts, which is seen as a quintessential byproduct of the cultural evolution of individuals as well as of human communities. The feeling of understanding (FU) is the subjective experience that encompasses all the emotional and intellectual processes we undergo while gathering evidence to achieve an understanding of an event. This experience is familiar to every person who has dedicated substantial effort to scientific areas under constant research progress. The FU may have an initial growth followed by a quasi-stable regime and a possible decay when accumulated data exceed the capacity of an individual to integrate them into an appropriate conceptual scheme. We propose a neural representation of FU based on the postulate that all cognitive activities are mapped onto dynamic neural vectors. Two models are presented that incorporate the mutual interactions among data and concepts. The first one shows how, on the short time scale, FU can rise, reach a temporary steady state, and subsequently decline. The second model, operating over longer time scales, shows how a reorganization and compactification of data into global categories, initiated by conceptual syntheses, can yield random cycles of growth, decline, and recovery of FU.
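The short-time-scale behavior described in the abstract (rise, quasi-steady regime, decline once data outrun integration capacity) can be reproduced by a simple saturating-growth model. This is a toy sketch, not the authors' equations: the update rule, the capacity threshold, and every parameter value below are assumptions made for illustration.

```python
# Toy dynamics for the feeling of understanding (FU): growth saturates as FU
# approaches 1, and decay sets in once accumulated data exceed capacity.
# All equations and parameters are hypothetical illustrations.

def simulate_fu(steps=200, inflow=1.0, capacity=100.0, gain=0.05, loss=0.02):
    fu, data, history = 0.0, 0.0, []
    for _ in range(steps):
        data += inflow                        # evidence keeps accumulating
        overload = max(0.0, data - capacity)  # excess beyond the conceptual scheme
        fu += gain * min(data, capacity) / capacity * (1 - fu)  # saturating growth
        fu -= loss * overload / capacity * fu                   # overload-driven decay
        fu = max(0.0, min(1.0, fu))
        history.append(fu)
    return history

h = simulate_fu()
# FU rises early, hovers near a quasi-steady value, then declines
# once the data stream exceeds the integration capacity.
```

Under these assumptions the trajectory peaks shortly after `data` passes `capacity` and then relaxes downward, matching the qualitative shape the first model is said to produce.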
Chapter
Full-text available
Artificial life has now become a mature inter-discipline. In this contribution, its roots are traced, its key questions are raised, its main methodological tools are discussed, and finally its applications are reviewed. As part of the growing body of knowledge at the intersection between the life sciences and computing, artificial life will continue to thrive and benefit from further scientific and technical progress on both sides, the biological and the computational. It is expected to take center stage in natural computing.
Chapter
In the past decade human-centeredness has become an important enabling concept in information system development. The fast growth of the Internet and WWW and partial failure of the dot-coms has further accelerated development in this area. The need for human-centeredness has been felt in practically all areas of information systems and computer science. These include e-business, intelligent systems (traditional and web-based), software engineering, multimedia data modeling, data mining, enterprise modeling and human-computer interaction. In this chapter we discuss the pragmatic issues leading to human-centeredness in these areas and the enabling theories which are converging towards human-centered system development. These enabling theories include theories in philosophy, cognitive science, psychology and work-oriented design for human-centered e-business system development framework. We conclude the chapter with a discussion section that outlines the foundations of the human-centered system development framework described in the next chapter.
Chapter
Having laid out, in the previous chapter, the many aspects of living nature that can be related to the technical notion of "information processing," as well as the limitations and problem areas of today's technical systems, this chapter presents possible paths and approaches for making these insights usable in technology. The emphases chosen derive from the experts consulted, from the surveyed state of research, and from findings of the research groups involved in the study themselves.
Chapter
Developments in computer science raise a number of philosophical as well as practical issues. One of these issues concerns the relationship between contemporary results and the original theoretical objectives, while another relates to the use of computer tools in research and practical applications. The aim of this book is to address these issues taking into account the techniques of artificial intelligence (AI) and its 'offspring': intelligent knowledge-based systems (IKBSs).
Chapter
Human-centered system development is not a revolutionary concept in computer science and information systems but an evolutionary and enabling one. In this chapter we look at how some areas in computer science and information systems are evolving or moving towards human-centeredness. These areas include intelligent systems, electronic commerce, software engineering, multimedia databases, data mining, enterprise modeling and human-computer interaction. This evolution is based on the need for addressing pragmatic issues in these areas. We follow these pragmatic issues with enabling theories in philosophy, cognitive science, psychology and work-oriented design for human-centered system development framework. These theories are described and discussed in terms of their contributions toward human-centered system development framework. We conclude the chapter with a discussion section that outlines the foundations of the human-centered system development framework described in the chapter.
Chapter
Perception is to be understood physiologically as some kind of mapping of an environment onto the instrumentarium of the sensory apparatus and the computational instances downstream of it. In the simplest case, such perception would be an imprint of the outside onto these inner structures. The physical state of a registered world would accordingly be transformed into a change of state of the perceiving apparatus. In this vocabulary, neuroscientists could frame perception as a transformation event in which physical parameters of the outer space are rewritten into physical parameters of the inner space. How is this transfer to be described? Such a correlation of two physical states would suffice for a reflex-physiological description of reaction couplings between outside and inside, but a corresponding model would never capture the structure of the transmitted stimulus configuration. It would describe only the coupling between internal states (of the brain) and the event space of a world, yet could say nothing about what, if anything, of an outer space is "mapped" in the brain. A corresponding theory could not speak about perceptions, since it contains no statements about the structure of the event space of the world. The world is reduced to a site of reactions. The reactions remain blind in themselves; they take place and thereby constitute a space of action which, however, is nowhere reflected in its quality.
Chapter
Diverse perspectives from anthropology, philosophy, and linguistics lead us to view human knowledge as constructed moment-by-moment in interaction between people and their environment. The dynamics of human behavior is central, embracing all levels from perception (by which information is defined by the observer, not passively received), interpretation (by which representations are commented upon and thereby given meaning, not stored and retrieved from memory and simply “applied”), and communication (by which knowledge emerges through group interactions, not transmitted as predefined packets). This new conception leads us to view computer models in a new way.
Chapter
It would be idle to suggest that lawyers have not asked themselves questions either about the ontological or about the epistemological basis of their discipline. Yet it is tempting to say that these questions have become pressing from a technical point of view only since the arrival of Artificial Intelligence (AI) and expert systems research. Before these new concerns appeared, ontology and epistemology were, it might be argued, merely aspects of ideology and philosophy; they were not fundamental concerns of legal science because legal science was not, in the end, a real science. Law was simply the product either of its own history or of some special branch of logic concerned with, for example, deontics or rhetoric. Even if this is a somewhat simplistic view, it is fair to say not only that the modern world is still largely ignorant, despite a millennium of Roman legal scholarship, of the habits of mind and thought processes of the Roman jurists themselves, but that epistemological models of legal reasoning still find themselves trapped within a particular knowledge assumption. That assumption is that legal knowledge is to be found only in propositional rules.
Book
This book is an attempt to re-evaluate some basic assumptions about language, communication, and cognition in the light of the new epistemology of autopoiesis as the theory of the living. Starting with a critique of common myths about language and communication, the author goes on to argue for a new understanding of language and cognition as functional adaptive activities in a consensual domain of interactions. He shows that such understanding is, in fact, what marks a variety of theoretical and empirical frameworks in contemporary non-Cartesian cognitive science; thus, cognitive science is in the process of working out new epistemological foundations for the study of language and cognition. In Part Two, the traditional concept of grammar is reassessed from the vantage point of autopoietic epistemology, and an analysis of specific grammatical phenomena in English and Russian is undertaken, revealing common cognitive mechanisms at work in linguistic categories.
Book
Burgeoning advancements in brain science are opening up new perspectives on how we acquire knowledge. Indeed, it is now possible to explore consciousness, the very center of human concern, by scientific means. In this illuminating book, Dr. Gerald M. Edelman offers a new theory of knowledge based on striking scientific findings about how the brain works. And he addresses the related compelling question: Does the latest research imply that all knowledge can be reduced to scientific description? Edelman's brain-based approach to knowledge has rich implications for our understanding of creativity, of the normal and abnormal functioning of the brain, and of the connections among the different ways we have of knowing. While the gulf between science and the humanities and their respective views of the world has seemed enormous in the past, the author shows that their differences can be dissolved by considering their origins in brain functions. He foresees a day when brain-based devices will be conscious, and he reflects on this and other fascinating ideas about how we come to know the world and ourselves.
Article
Full-text available
To what extent do acting and perceiving presuppose "understanding", or even more simply "representing", the world? This is one of the fundamental concerns of research on sensorimotor behavior, at the center of many debates in cognitive science. Once formalized, pared down, simplified, and translated into mathematical terms, this question leads us to ask, throughout this article, what links may exist between formal inferences mechanized by computer and their counterparts in the physical world in which a robot operates. Thus reformulated, the central question under debate becomes: how can formal inferences be made effective?