Article

Minds, Brains and Programs


Abstract

This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4. 'Could a machine think?' On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.


... In the area of artificial intelligence, it is first necessary to distinguish between the general and neutral claim that computers are powerful tools for understanding the workings of the human mind and the specific and bold claim that human minds are themselves computer programs. John Searle (1980) calls the former view weak AI and the latter strong AI. As it is strong AI that subscribes to computationalism, the term "computationalism" has eventually become synonymous with the expression "strong AI". ...
... c. Biological naturalism. John Searle (1980, 1983, 1985, 1994) is highly critical of strong AI, or computationalism, arguing (through his famous Chinese room argument) that the mere implementation of a computer program is not sufficient to explain or to produce mentality. Searle calls his alternative theory of the mind biological naturalism, according to which the way to naturalize the mind (that is, to explain the mind in scientific terms) is not by means of physics or artificial intelligence but by means of biology. ...
... In 1964, Yehoshua [9] pointed out that it is never possible to disambiguate the meaning of a word without a universal encyclopaedia; he used WSD as part of his machine translation work. In 1980, Searle [10] examined the way in which computer systems process a language. He also highlighted the fact that linguistic symbols are meaningless unless and until they are grounded or comprehended by someone. ...
... The work was performed on a training data set in which the sentences contain ten different ambiguous verbs, such as "run", "give", "break", "call", "know", "put", "take", "make", "draw", and "get", taken from WordNet. A bag-of-words [10] is then formed, which is a collection of all the words present in the documents. The training data set is prepared with care taken over punctuation and word multiplicity. ...
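
As a rough illustration of the bag-of-words construction described in the snippet above, here is a minimal Python sketch. The tokenization and punctuation handling are assumptions; the cited work does not specify its exact preprocessing.

```python
from collections import Counter

# Toy documents containing one of the ambiguous verbs ("run").
docs = [
    "He will run the race tomorrow.",
    "They run a small family business.",
]

def bag_of_words(documents):
    """Collect all words across the documents, keeping multiplicity."""
    bag = Counter()
    for doc in documents:
        # Strip punctuation and lowercase before counting.
        tokens = [w.strip(".,!?").lower() for w in doc.split()]
        bag.update(tokens)
    return bag

print(bag_of_words(docs))  # word -> count across all documents
```
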
... The main concern of [1] seems to be that our current research direction in NLU will lead to something like a Chinese Room. The Chinese Room argument is one of the classical philosophical thought experiments, in which [27] invites us to imagine a container (such as a room) populated with a person who does not speak Chinese, but who has access to a set of (extensive) instructions for manipulating Chinese symbols, such that when given an input sequence of Chinese symbols, the person can consult the instructions and produce an output that for a Chinese speaker outside the room seems like a coherent response. In short, the Chinese Room is much like our current language models. ...
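
To make the mechanics of the thought experiment concrete, here is a deliberately mindless sketch of the room as pure symbol lookup. The phrases and rules are hypothetical stand-ins for Searle's instruction book, and real language models are learned function approximators rather than lookup tables, but the structure (symbols in, rule consultation, symbols out, no understanding anywhere) is the same.

```python
# A hypothetical "rule book": input symbol strings mapped to output
# symbol strings, with no representation of what any symbol means.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",
    "你叫什么名字?": "我没有名字.",
}

def operator(input_symbols: str) -> str:
    # Consult the instructions; fall back to a canned symbol string.
    return RULE_BOOK.get(input_symbols, "请再说一遍.")

print(operator("你好吗?"))  # fluent-looking output, zero understanding inside
```
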
... We will not attempt to contribute any novel arguments to the vast literature that exists on the Chinese Room argument, but we will point to the counter-argument commonly known as the "system reply" [27]. This response notes that for the observer of the room (whether it is an actual room, a computer, or a human that has internalized all the instructions) it will seem as if there is understanding -or at least language proficiency -going on in the room. ...
Preprint
This paper discusses the current critique against neural network-based Natural Language Understanding (NLU) solutions known as language models. We argue that much of the current debate rests on an argumentation error that we will refer to as the singleton fallacy: the assumption that language, meaning, and understanding are single and uniform phenomena that are unobtainable by (current) language models. By contrast, we will argue that there are many different types of language use, meaning, and understanding, and that (current) language models are built with the explicit purpose of acquiring and representing one type of structural understanding of language. We will argue that such structural understanding may cover several different modalities, and as such can handle several different types of meaning. Our position is that we currently see no theoretical reason why such structural knowledge would be insufficient to count as "real" understanding.
... The term AI has different meanings in different contexts because there is no definite definition of AI. AI is widely classified into two categories: general AI and narrow AI [19]. General AI focuses on the simulation of human behavior in machines or computers. ...
Article
Endoscopic ultrasonography (EUS) is an essential diagnostic tool for various types of pancreatic diseases such as pancreatic tumors and chronic pancreatitis; however, EUS imaging has low specificity for the diagnosis of pancreatic diseases. Artificial intelligence (AI) is a mathematical prediction technique that automates learning and recognizes patterns in data. This review describes the details and principles of AI and deep learning algorithms. The term AI does not have any definite definition; almost all AI systems fall under narrow AI, which can handle single or limited tasks. Deep learning is based on neural networks, a machine learning technique that is widely used in the medical field. Deep learning involves three phases: data collection and annotation, building the deep learning architecture, and training and ability validation. For medical image diagnosis, image classification, object detection, and semantic segmentation are performed. In EUS, AI is used for detecting anatomical features and differentiating pancreatic tumors and cysts. For this, conventional machine learning architectures are used, and deep learning architectures have been used in only two reports. Although the diagnostic abilities in these reports were about 85-95%, these were exploratory studies, and very few reports have included substantial evidence. AI is increasingly being used for medical image diagnosis due to its high performance and will soon become an essential technique for medical diagnosis.
... Strong (general) artificial intelligence refers to the position holding that the computer is not merely an instrument, but rather that the appropriately programmed computer is actually a mind and, therefore, encompasses and possesses other cognitive states (Hierro-Pescador, 2005). If actions are performed within a particular, bounded context such as playing chess or recognizing individuals, it is weak or narrow artificial intelligence; but if the system displays intelligence across a broad set of tasks and environments, solving problems that an individual could solve, it is strong or general artificial intelligence (Searle, 1980). The difference between weak and strong artificial intelligence is shown in Table 2. ...
... There are several classes in which AI is usually placed. The most widely used terms, introduced by Searle [8], are 'strong' AI and 'weak' AI. The distinction between strong and weak is mainly a philosophical one, concerned with whether an AI is capable of understanding in the same way a human can, or merely acts as if it does, respectively. ...
Preprint
Full-text available
A short review of the literature on measurement and detection of artificial general intelligence is made. Proposed benchmarks and tests for artificial general intelligence are critically evaluated against multiple criteria. Based on the findings, the most promising approaches are identified and some useful directions for future work are proposed.
... Based on physical symbols, information is causal only as instantiated physical entities; in cognitivism there is thus a separation between semantics (meaning) and its causally efficient physical support (the symbol). As an heir to this line of thought, Searle (1980) was unable to dissolve this problem and, lost in this conceptual labyrinth, after criticizing epiphenomenalism, ended up concluding that artificial physical symbol systems are semantically empty. ...
... Semantics is a sticky problem with a long history in the philosophical debates within AI. The most well-known formulation comes in John Searle's (1980, 1984) infamous "Chinese room" argument, a thought experiment in which an English-speaking man is locked inside a room with commands written in Chinese characters and a book of instructions. The man is able to produce proper responses to the commands by manipulating Chinese symbols but without acquiring any understanding of the language or what either the commands (inputs) or results (outputs) mean. ...
Article
Full-text available
A rise of academic capitalism over the past four decades has been well documented within many research-intensive universities. Largely missing, however, are in-depth studies of how particularly situated academic groups manage the uncertainties that come with intermittent and fickle commercial funding streams in their daily research practice and problem choice. To capture the strategies scientists adopt under these conditions, this article provides an ethnographically detailed (and true) story about how a single project in Artificial Intelligence grew over several years from a peripheral idea to the very center of an academic lab’s commercial portfolio. The analysis theorizes an epistemic form—nimble knowledge production—and documents three of its lab-level features: 1) rapid prototyping to keep sunk costs low, 2) shared search for “real world problems” rather than “theoretical” ones, and 3) nimble commitment to research problem choice. While similar forms of academic knowledge transfer have been lauded as “mode 2,” “innovative,” or “hybrid” for initiating cross-institutional collaboration and pushing science beyond disciplinary silos, this case suggests it can rely on fleeting attention to problems resistant to a quick fix.
... Unfortunately, several limitations of such purely symbolic encoding are clear. First, it is not apparent how looked-up symbols get their meaning, a version of the problem highlighted by Searle's (1980) Chinese Room. It is not enough to know these symbolic relationships; what matters is the semantic content that they correspond to, and there is no semantic content in a database lookup of the answer. ...
Article
Full-text available
Each of our theories of mental representation provides some insight into how the mind works. However, these insights often seem incompatible, as the debates between symbolic, dynamical, emergentist, sub-symbolic, and grounded approaches to cognition attest. Mental representations—whatever they are—must share many features with each of our theories of representation, and yet there are few hypotheses about how a synthesis could be possible. Here, I develop a theory of the underpinnings of symbolic cognition that shows how sub-symbolic dynamics may give rise to higher-level cognitive representations of structures, systems of knowledge, and algorithmic processes. This theory implements a version of conceptual role semantics by positing an internal universal representation language in which learners may create mental models to capture dynamics they observe in the world. The theory formalizes one account of how truly novel conceptual content may arise, allowing us to explain how even elementary logical and computational operations may be learned from a more primitive basis. I provide an implementation that learns to represent a variety of structures, including logic, number, kinship trees, regular languages, context-free languages, domains of theories like magnetism, dominance hierarchies, list structures, quantification, and computational primitives like repetition, reversal, and recursion. This account is based on simple discrete dynamical processes that could be implemented in a variety of different physical or biological systems. In particular, I describe how the required dynamics can be directly implemented in a connectionist framework. The resulting theory provides an “assembly language” for cognition, where high-level theories of symbolic computation can be implemented in simple dynamics that themselves could be encoded in biologically plausible systems.
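
The abstract's claim that elementary logical operations may be learned from a more primitive basis can be made concrete with a toy example. The sketch below uses the classic S/K combinators as the primitive symbolic basis; this is an illustration in the same spirit as the paper's internal representation language, not its exact formalism. Booleans and an if-then-else fall out of pure structure, with no truth values built in.

```python
def step(t):
    """One leftmost reduction step on application trees (f, x); returns (term, changed)."""
    if not isinstance(t, tuple):
        return t, False
    f, x = t
    if isinstance(f, tuple):
        g, y = f
        if g == 'K':                       # K y x -> y
            return y, True
        if isinstance(g, tuple) and g[0] == 'S':
            a = g[1]                       # S a y x -> (a x) (y x)
            return ((a, x), (y, x)), True
    nf, changed = step(f)
    if changed:
        return (nf, x), True
    nx, changed = step(x)
    return (f, nx), changed

def normalize(t):
    changed = True
    while changed:
        t, changed = step(t)
    return t

TRUE, FALSE = 'K', ('S', 'K')              # booleans as pure structure
# "if cond then a else b" is just application: ((cond, a), b)
print(normalize(((TRUE, 'a'), 'b')))       # 'a' -- TRUE selects the first branch
print(normalize(((FALSE, 'a'), 'b')))      # 'b' -- FALSE selects the second
```
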
... Long-standing questions around how to produce an explanation, whether a process should be explainable [62], and what should be required for an explanation to be considered sufficient [50] remain open within the literature. These notions follow from what Searle described as the realm of strong AI [63], noting that machines must not only simulate the abilities of a human but also replicate the human ability to understand a story and answer questions about it. A desire for machines to imitate and learn like humans is not a new concept [64]. ...
Article
Transparency is a widely used but poorly defined term within the explainable artificial intelligence literature. This is due, in part, to the lack of an agreed definition and the overlap between the connected — sometimes used synonymously — concepts of interpretability and explainability. We assert that transparency is the overarching concept, with the tenets of interpretability, explainability, and predictability subordinate. We draw on a portfolio of definitions for each of these distinct concepts to propose a Human-Swarm-Teaming Transparency and Trust Architecture (HST3-Architecture). The architecture reinforces transparency as a key contributor towards situation awareness, and consequently as an enabler for effective trustworthy Human-Swarm Teaming.
... Second, we will sketch these issues and then focus on the one we call 'the body issue', which considers the role of the human body in the experience and creation of music. Let us begin by sketching the famous Chinese Room thought experiment by philosopher John Searle (Searle 1980). Suppose that I am a native speaker of English and that I know nothing about Chinese. ...
Conference Paper
Can machines become truly creative? In this paper we argue that this is not likely the case, basing our argumentation on the Chinese room argument by John Searle and on the philosophy of the body in Maurice Merleau-Ponty and Roland Barthes. Later on, we connect our ideas with contemporary findings in neuroscience to give our claims more credibility.
... In other words, there is a problematic circularity in distributional learning: words are defined by other words, that are themselves defined by other words, and so on, ending in a solipsistic form of training. To make this problem explicit, Harnad (1990) describes the Chinese dictionary problem as an extension of the famous Chinese room argument (Searle, 1980, 1984). Imagine that you want to learn Chinese, but you only have access to a Chinese-Chinese dictionary; would you be able to learn Chinese? ...
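
The circularity can be shown mechanically: following definitions in a monolingual dictionary never bottoms out in anything but more words. The toy dictionary below is hypothetical.

```python
# A hypothetical monolingual dictionary: every definition is more words.
TOY_DICTIONARY = {
    "big": ["large"],
    "large": ["big"],
    "dog": ["large", "animal"],
}

def ground(word, seen=None):
    """Chase definitions until we loop or run out; we never leave the symbols."""
    seen = list(seen or [])
    if word in seen:
        return seen + [word]   # back where we started: a definitional cycle
    if word not in TOY_DICTIONARY:
        return seen + [word]   # an undefined word -- still just a symbol
    return ground(TOY_DICTIONARY[word][0], seen + [word])

print(ground("big"))  # ['big', 'large', 'big'] -- definitions loop forever
```
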
Thesis
While our representation of the world is shaped by our perceptions, our languages, and our interactions, these have traditionally been distinct fields of study in machine learning. Fortunately, this partitioning started opening up with the recent advent of deep learning methods, which standardized raw feature extraction across communities. However, multimodal neural architectures are still in their infancy, and deep reinforcement learning is often limited to constrained environments. Yet we ideally aim to develop large-scale multimodal and interactive models capable of correctly apprehending the complexity of the world. As a first milestone, this thesis focuses on visually grounded language learning for three reasons: (i) language and vision are both well-studied modalities across different scientific fields, (ii) the work builds upon deep learning breakthroughs in natural language processing and computer vision, and (iii) the interplay between language and vision has been acknowledged in cognitive science. More precisely, we first designed the GuessWhat?! game for assessing visually grounded language understanding of the models: two players collaborate to locate a hidden object in an image by asking a sequence of questions. We then introduce modulation as a novel deep multimodal mechanism, and we show that it successfully fuses visual and linguistic representations by taking advantage of the hierarchical structure of neural networks. Finally, we investigate how reinforcement learning can support visually grounded language learning and cement the underlying multimodal representation. We show that such interactive learning leads to consistent language strategies but gives rise to new research issues.
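
The modulation mechanism mentioned in this abstract can be sketched as feature-wise conditioning: a linguistic embedding predicts a per-channel scale and shift that are applied to visual feature maps. The NumPy sketch below is a minimal stand-in, assuming a FiLM-style conditioning layer with random matrices in place of learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d_lang, channels = 16, 8

# Stand-ins for learned projection weights.
W_gamma = rng.normal(size=(channels, d_lang))
W_beta = rng.normal(size=(channels, d_lang))

def modulate(visual_feats, lang_embedding):
    """visual_feats: (channels, H, W); lang_embedding: (d_lang,)."""
    gamma = W_gamma @ lang_embedding   # per-channel scale from language
    beta = W_beta @ lang_embedding     # per-channel shift from language
    # Broadcast over spatial dimensions: language rescales each channel.
    return gamma[:, None, None] * visual_feats + beta[:, None, None]

out = modulate(rng.normal(size=(channels, 4, 4)), rng.normal(size=d_lang))
print(out.shape)  # (8, 4, 4)
```
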
... The method used to locate the position of an EEG signal exactly is inverse problem (IP) theory, which makes it possible to pinpoint where the signal comes from. When the EEG device determines the signal position and the appropriate component amplitude, the output feeling is clear [37]. Once the feeling is clarified, the resulting data are collected and are ready for recognizing the ADHD patient's memory, as shown in Fig. 3. ...
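
As a minimal numerical sketch of such an inverse problem: given a forward (lead-field) matrix L mapping source amplitudes to electrode readings, source localization amounts to recovering the sources from a recording. The regularized least-squares solver below is an assumption for illustration; the cited work does not specify its method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_electrodes, n_sources = 8, 20

L = rng.normal(size=(n_electrodes, n_sources))  # toy forward model
true_sources = np.zeros(n_sources)
true_sources[3] = 1.0                           # one active source
y = L @ true_sources + 0.01 * rng.normal(size=n_electrodes)

# Tikhonov-regularized minimum-norm estimate: the problem is
# underdetermined (more sources than electrodes), so a prior is needed.
lam = 0.1
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_electrodes), y)
print(int(np.argmax(np.abs(x_hat))))  # ideally 3: the true source index
```
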
Article
Full-text available
Attention Deficit Hyperactivity Disorder (ADHD) is a common and heritable disease with an environmental influence on brain function. The disease affects multiple aspects of college students' lives, not only their studies but also their relationships with other people. The attention problem in ADHD involves short-term memory. The purpose of this paper is to investigate the capability of improving short-term working memory for ADHD patients with the aid of technology: a proper VR environment is built for ADHD patients, who are isolated from their real surroundings. Electroencephalography (EEG) is taken as biofeedback to read the brain signal from the patient. A deep learning approach and an artificial neural network method are employed to efficiently and accurately process the EEG. The findings of the trial indicate that the recommended virtual reality system can play a greater role in improving the attention of ADHD patients.
... The Chinese Room argument: John Searle's famous Chinese Room argument (Searle 1980) criticizes the thesis that a computer program, merely by manipulating symbols according to formal-syntactic/computational rules, can also understand the meaning of these symbols. If Searle's argument is valid, then human intelligence (in the sense of understanding the meaning of symbols, that is, with respect to semantics) cannot be fully imitated by computer programs (syntax). ...
Article
Self-Enhancement: A New Form of Self-Formation? Nietzsche and Transhumanism. The following article examines the relationship between Nietzsche's concept of the Übermensch and its transhumanist adaptation within a theory of self-enhancement. While Nietzsche and transhumanism both start from the same assumption (the crisis of the current image of man and its corresponding self-conception), they head in different directions. In contrast to the biological understanding of self-formation, which leads transhumanists to an overly literal and therefore misleading interpretation of Nietzsche, Nietzsche situates the concept of self-formation within his theory of immoralism. With their undifferentiated naturalistic view, transhumanists are not capable of distinguishing life from nature, morality from biology, and education from breeding, and they therefore misunderstand Nietzsche's concept of self-formation in terms of genetic self-augmentation. In this manner, Nietzsche's concept of self-formation is once again read in the light of a biological interpretation that ignores its ethical meaning. Reading Nietzsche with the attention he deserves, however, it becomes clear that his concept of self-formation neither implies a biological meaning nor serves as a foundation for transhumanist thought; rather, it refers to an ethical claim that calls upon the individual not to accept values unquestioningly, but to justify them autonomously and individually.
... With further advancement in computers, machines can be expected to create incrementally better content [Tan et al., 2016]. The other side of the story holds that art is not separate from the observer, and that the observer invests his or her consciousness in interpreting the art, which a machine cannot [Searle, 1980; Nagel, 1974]. Plato said that "beauty is in the eye of the beholder". ...
Preprint
Full-text available
This paper presents a robotic system (Chitrakar) which autonomously converts any image of a human face to a recognizable non-self-intersecting loop (Jordan curve) and draws it on any planar surface. The image is processed using Mask R-CNN for instance segmentation, Laplacian of Gaussian (LoG) for feature enhancement, and intensity-based probabilistic stippling for the image-to-points conversion. These points are treated as destinations for a travelling salesman and are connected with an optimal path which is calculated heuristically by minimizing the total distance to be travelled. This path is converted to a Jordan curve in feasible time by removing intersections using a combination of image processing, 2-opt, and Bresenham's algorithm. The robotic system generates n instances of each image for human aesthetic judgement, out of which the most appealing instance is selected for the final drawing. The drawing is executed carefully by the robot's arm using trapezoidal velocity profiles for jerk-free and fast motion. The drawing, with a decent resolution, can be completed in less than 30 minutes, which is impossible to do by hand. This work demonstrates the use of robotics to augment humans in executing difficult craft-work instead of replacing them altogether.
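
The path-planning step of this abstract (stippled points connected by a heuristically shortened tour) can be sketched as nearest-neighbour construction followed by 2-opt improvement. Details of the paper's pipeline, such as intersection removal via Bresenham's algorithm, are omitted; this is a generic sketch of the named heuristics.

```python
import numpy as np

def tour_length(points, order):
    p = points[order]
    return float(np.linalg.norm(np.diff(p, axis=0), axis=1).sum())

def nearest_neighbour(points):
    """Greedy tour: repeatedly visit the closest unvisited point."""
    unvisited, order = list(range(1, len(points))), [0]
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        unvisited.remove(nxt)
        order.append(nxt)
    return order

def two_opt(points, order):
    """Reverse segments whenever doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(order) - 2):
            for j in range(i + 1, len(order) - 1):
                new = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
                if tour_length(points, new) < tour_length(points, order):
                    order, improved = new, True
    return order

pts = np.random.default_rng(2).random((30, 2))   # toy stippled points
order = two_opt(pts, nearest_neighbour(pts))
print(round(tour_length(pts, order), 3))
```
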
... Generally speaking, there are two forms of AI: "Strong AI" and "Weak AI" (Kaplan and Haenlein, 2020;Searle, 1980). Strong AI refers to that which can behave in a way that equals or surpasses human intelligence, while weak AI is that which simulates human intelligence in specific problem domains. ...
Article
Artificial intelligence (AI) may be one of the most disruptive technologies of the 21st century, with the potential to transform every aspect of society. Preparing for a “good AI society” has become a hot topic, with growing public and scientific interest in the principles, policies, incentives, and ethical frameworks necessary for society to enjoy the benefits of AI while minimizing the risks associated with its use. However, despite the renewed interest in artificial intelligence, little is known of the direction in which AI scholarship is moving and whether the field is evolving towards the goal of building a “good AI society”. Based on a bibliometric analysis of 41,032 documents retrieved from the Web of Science database, this study describes the intellectual, social, and conceptual structure of AI research. It provides 136 evidence-based research questions about how AI research can help understand the social changes brought about by AI and prepare for a “good AI society.” The research agenda is organized according to ten social impact domains identified from the literature, including crisis response, economic empowerment, educational challenges, environmental challenges, equality and inclusion, health and hunger, information verification and validation, infrastructure management, public and social sector management, security, and justice.
... Superficially similar communication abilities have since been demonstrated in microbes [7] and plants [8]. If "meaning" is restricted to the full combination of syntactic, semantic, and pragmatic aspects of meaning characteristic of human languages, the communications sent and received by these nonhuman organisms must be regarded as devoid of meaning, as Descartes presumably would have regarded them, and as communications sent and received by artificial intelligence (AI) systems are regarded by many today [9][10][11][12]. Hauser, Chomsky, and Fitch [6] suggest, on the contrary, that the faculty of language can be construed more broadly, and that both sensory-motor and conceptual-intentional aspects of "meaning" can be dissociated from the recursive syntax with which they are coupled in human languages. ...
Article
Full-text available
Meaning has traditionally been regarded as a problem for philosophers and psychologists. Advances in cognitive science since the early 1960s, however, broadened discussions of meaning, or more technically, the semantics of perceptions, representations, and/or actions, into biology and computer science. Here, we review the notion of “meaning” as it applies to living systems, and argue that the question of how living systems create meaning unifies the biological and cognitive sciences across both organizational and temporal scales.
... While scholars such as Searle (1980), Dennett (1991) and Chalmers (1993) have written extensively about the problem of detecting consciousness, little work has been done with contained artificial agents. As well, while a robust literature exists for measuring intelligence based on Turing's work, detecting consciousness, and furthermore self-consciousness, is not synonymous with detecting intelligence. ...
Preprint
Human-like intelligence in a machine is a contentious subject. Whether mankind should or should not pursue the creation of artificial general intelligence is hotly debated. As well, researchers have aligned in opposing factions according to whether mankind can create it. For our purposes, we assume mankind can and will do so. Thus, it becomes necessary to contemplate how to do so in a safe and trusted manner -- enter the idea of boxing or containment. As part of such thinking, we wonder how a phenomenology might be detected given the operational constraints imposed by any potential containment system. Accordingly, this work provides an analysis of existing measures of phenomenology through qualia and extends those ideas into the context of a contained artificial general intelligence.
... According to this view, our semantic memory cannot be a self-contained system in which all the representations are abstract, amodal symbols that are defined exclusively by their relations to one another (see for example Collins & Quillian, 1969; Kintsch, 1988). The best-known argument against this conceptualization is provided by Harnad's (1990) adaptation of Searle's (1980) Chinese room argument: If a monolingual English speaker suddenly finds herself in China, only equipped with a monolingual Chinese-Chinese dictionary, she will never be able to understand anything. In this case, whenever she looks up any symbol, it is only ever linked to other symbols that have no meaning for her. ...
Preprint
Full-text available
Theories of grounded cognition assume that conceptual representations are grounded in sensorimotor experience. However, abstract concepts such as jealousy or childhood have no directly associated referents with which such sensorimotor experience can be made; therefore, the grounding of abstract concepts has long been a topic of debate. Here, we propose (a) that systematic relations exist between semantic representations learned from language on the one hand and perceptual experience on the other hand, (b) that these relations can be learned in a bottom-up fashion, and (c) that it is possible to extrapolate from this learning experience to predict expected perceptual representations for words even where direct experience is missing. To test this, we implement a data-driven computational model that is trained to map language-based representations (obtained from text corpora, representing language experience) onto vision-based representations (obtained from an image database, representing perceptual experience), and apply its mapping function onto language-based representations for abstract and concrete words outside the training set. In three experiments, we present participants with these words, accompanied by two images: the image predicted by the model and a random control image. Results show that participants' judgements were in line with model predictions even for the most abstract words. This preference was stronger for more concrete items and decreased for the more abstract ones. Taken together, our findings have substantial implications in support of the grounding of abstract words, suggesting that we can tap into our previous experience to create possible visual representations we don't have.
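
The mapping function described in this abstract can be sketched as a regularized linear regression from language-based vectors to vision-based vectors, then applied to words outside the training set. The ridge-regression formulation and the random vectors below are illustrative assumptions, standing in for real text and image embeddings.

```python
import numpy as np

rng = np.random.default_rng(3)
n_words, d_text, d_vision = 200, 50, 30

X = rng.normal(size=(n_words, d_text))                       # language-based vectors
W_true = rng.normal(size=(d_text, d_vision))
Y = X @ W_true + 0.1 * rng.normal(size=(n_words, d_vision))  # vision-based targets

# Ridge regression: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(d_text), X.T @ Y)

x_new = rng.normal(size=d_text)   # e.g. an abstract word outside training
y_pred = x_new @ W                # its predicted perceptual representation
print(y_pred.shape)               # (30,): comparable against candidate images
```
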
... Ishowo-Oloko et al. (2019) mention how Google Duplex, a program that can make telephone calls and place restaurant reservations on behalf of its human users, can now pass as a human, and thus pass a basic version of the Turing test. Searle's Chinese room argument revolves around the idea that it is possible to pass a Turing test without any proper understanding on behalf of the machine (Searle 1980). I have already discussed the idea of understanding, and the fact that computers lack such understanding has little effect on their deceptive powers. ...
Article
Full-text available
Can artificial intelligence (AI) develop the potential to be our partner, and will we be as sensitive to its social signals as we are to those of human beings? I examine both of these questions and how cultural psychology might add such questions to its research agenda. There are three areas in which I believe there is a need for both a better understanding and added perspective. First, I will present some important concepts and ideas from the world of AI that might be beneficial for pursuing research topics focused on AI within the cultural psychology research agenda. Second, there are some very interesting questions that must be answered with respect to central notions in cultural psychology as these are tested through human interactions with AI. Third, I claim that social robots are parasitic to deeply ingrained human social behaviour, in the sense that they exploit and feed upon processes and mechanisms that evolved for purposes that were originally completely alien to human-computer interactions.
... Near-future AIs based on contemporary techniques will be apparent but unreal persons because their interior lives will be behaviorally hinted but subjectively unreal. My comments on the neural network align somewhat with Block (1978) and with Searle's (1980) "Chinese Room." On whether phenomenal consciousness would be metaphysically possible in any future artifact, here are some doubts: If any sort of non-biological machine can have true phenomenal consciousness (and not just a behavioral or functional simulation thereof), then consciousness is not limited to the physical processes (i.e. ...
Article
Full-text available
In a friendly interdisciplinary debate, we interrogate from several vantage points the question of “personhood” in light of contemporary and near-future forms of social AI. David J. Gunkel approaches the matter from a philosophical and legal standpoint, while Jordan Wales offers reflections theological and psychological. Attending to metaphysical, moral, social, and legal understandings of personhood, we ask about the position of apparently personal artificial intelligences in our society and individual lives. Re-examining the “person” and questioning prominent construals of that category, we hope to open new views upon urgent and much-discussed questions that, quite soon, may confront us in our daily lives.
... Ever since Searle made his Chinese room argument [43], a lot has been said to refute it [44][45][46], even to the point of mocking Searle for having made it. Perhaps Searle gets the last laugh, though; nearly all the limitations of deep neural networks we examined, as well as the fact that these limitations are neither trivial nor diminishing with new breakthroughs in computer science, are because Searle was right. ...
Preprint
Full-text available
Deep neural networks have triggered a revolution in artificial intelligence, having been applied with great results in medical imaging, semi-autonomous vehicles, e-commerce, genetics research, speech recognition, particle physics, experimental art, economic forecasting, environmental science, industrial manufacturing, and a wide variety of applications in nearly every field. This sudden success, though, may have intoxicated the research community and blinded it to the potential pitfalls of assigning deep learning a higher status than warranted. Also, research directed at alleviating the weaknesses of deep learning may seem less attractive to scientists and engineers, who focus on the low-hanging fruit of finding more and more applications for deep learning models, thus letting short-term benefits hamper long-term scientific progress. Gary Marcus wrote a paper entitled Deep Learning: A Critical Appraisal, and here we discuss Marcus' core ideas, as well as attempt a general assessment of the subject. This study examines some of the limitations of deep neural networks, with the intention of pointing towards potential paths for future research, and of clearing up some metaphysical misconceptions, held by numerous researchers, that may misdirect them.
... The problem of symbol grounding is illustrated by Searle's (1980) Chinese room problem (see also Harnad, 1990). A variant of the problem goes like this: You are a monolingual speaker of English and isolated in a room with nothing but a huge book. ...
Preprint
Full-text available
Humans seamlessly make sense of a rapidly changing environment, using a seemingly limitless knowledgebase to recognize and adapt to most situations we encounter. This knowledgebase is called semantic memory. Embodied cognition theories suggest that we represent this knowledge through simulation: understanding the meaning of coffee entails re-instantiating the neural states involved in touching, smelling, seeing, and drinking coffee. Distributional semantic theories suggest that we are sensitive to statistical regularities in natural language, and that a cognitive mechanism picks up on these regularities and transforms them into usable semantic representations reflecting the contextual usage of language. These appear to present contrasting views on semantic memory, but do they? Recent years have seen a push toward combining these approaches under a common framework. These hybrid approaches augment our understanding of semantic memory in important ways, but current versions remain unsatisfactory in part because they treat sensory-perceptual and distributional-linguistic data as interacting but distinct types of data that must be combined. We synthesize several approaches which, taken together, suggest that linguistic and embodied experience should instead be considered as inseparably entangled: just as sensory and perceptual systems are reactivated to understand meaning, so are experience-based representations endemic to linguistic processing; further, sensory-perceptual experience is susceptible to the same distributional principles as language experience. This conclusion produces a characterization of semantic memory that accounts for the interdependencies between linguistic and embodied data that arise across multiple timescales, giving rise to concept representations that reflect our shared and unique experiences.
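
The "statistical regularities" half of this picture can be sketched with the simplest possible distributional pipeline: count word co-occurrences within a small window, then compare words by the similarity of their count vectors. The toy corpus and window size are illustrative assumptions, not the framework proposed in the abstract.

```python
import numpy as np

corpus = "i drink hot coffee . i drink hot tea . i read a good book".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))

window = 2
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            counts[idx[w], idx[corpus[j]]] += 1   # co-occurrence within window

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# "coffee" and "tea" share contexts, so their vectors end up similar.
print(cosine(counts[idx["coffee"]], counts[idx["tea"]]))   # high
print(cosine(counts[idx["coffee"]], counts[idx["book"]]))  # low
```
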
... However, Searle introduced the categories Strong AI versus Weak AI [29]. Sometimes Strong AI is also called Hard AI. ...
Chapter
Full-text available
Today Artificial Intelligence (AI) is enjoying a revival of interest. A decade ago, computer product makers avoided the term, for fear of being branded wide-eyed dreamers. Now seen in a positive light, AI is enjoying frenzied attention from the public. This is a marked change. In this paper we track and examine what led to the public's present-day preoccupation with things AI, a significant deviation from some thirty years ago. In this study, we extend the work Making AI Great Again, providing more arguments for the factors that led to this turn of events and evaluating whether what we are seeing is "real" AI. Along the way, we offer ways of keeping the AI hope real despite the sometimes exaggerated hype that could cloud the AI achievements of the past. It is our aim that this work helps in some way to prevent AI from experiencing another winter.
... This isolation allows potentially critical information to be kept at the local level and avoids an unmaintainable knowledge base: for instance, the thermometer LEC from Example 4.1 will only disclose the information (hot(room), true), without further indicating the meaning of this word. This separation between syntax and semantics is analogous to what is observed in the "Chinese Room" counter-argument to AI: processing Chinese characters is a different ability than actually understanding them (Searle, 1980). ...
Thesis
Smart homes are Cyber-Physical Systems where various components cooperate to fulfill high-level goals such as user comfort or safety. These autonomic systems can adapt at runtime without requiring human intervention. This adaptation is hard for the occupant to understand, which can hinder the adoption of smart home systems. Since the mid 2010s, explainable AI has been a topic of interest, aiming to open the black box of complex AI models. The difficulty of explaining autonomic systems does not come from the intrinsic complexity of their components, but rather from their self-adaptation capability, which leads to changes of configuration, logic, or goals at runtime. In addition, the diversity of smart home devices makes the task harder. To tackle this challenge, we propose to add an explanatory system to the existing smart home autonomic system, whose task is to observe the various controllers and devices in order to generate explanations. We define six goals for such a system: 1) to generate contrastive explanations in unexpected or unwanted situations; 2) to generate shallow reasoning, whose different elements are causally closely related to each other; 3) to be transparent, i.e., to expose its entire reasoning and which components are involved; 4) to be self-aware, integrating its reflective knowledge into the explanation; 5) to be generic and able to adapt to diverse components and system architectures; 6) to preserve privacy and favor locality of reasoning. Our proposed solution is an explanatory system in which a central component, named the "Spotlight", implements an algorithm named D-CAS. This algorithm identifies three elements in an explanatory process: conflict detection via observation interpretation, conflict propagation via abductive inference, and simulation of possible consequences. All three steps are performed locally, by Local Explanatory Components which are sequentially interrogated by the Spotlight. Each Local Component is paired with an autonomic device or controller and acts as an expert in the related knowledge domain. This organization enables the addition of new components, integrating their knowledge into the general system without the need for reconfiguration. We illustrate this architecture and algorithm in a proof-of-concept demonstrator that generates explanations in typical use cases. We design Local Explanatory Components to be generic platforms that can be specialized by the addition of modules with predefined interfaces. This modularity enables the integration of various techniques for abduction, interpretation, and simulation. Our system aims to handle unusual situations in which data may be scarce, making past-occurrence-based abduction methods inoperable. We propose a novel approach: to estimate events' memorability and use them as relevant hypotheses for a surprising phenomenon. Our high-level approach to explainability aims to be generic and paves the way towards systems integrating more advanced modules, guaranteeing smart home explainability. The overall method can also be used for other Cyber-Physical Systems.
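
A structural sketch of the three-step loop named in this abstract (conflict detection via interpretation, abduction, simulation), with a Spotlight sequentially interrogating Local Explanatory Components. The component interface below is an assumption for illustration; the thesis defines richer ones.

```python
class LocalExplanatoryComponent:
    """Paired with one device/controller; an expert on its own knowledge only."""

    def __init__(self, name, knowledge):
        self.name, self.knowledge = name, knowledge

    def detect_conflict(self, observation):
        # Interpret the observation against local expectations.
        expected = self.knowledge.get(observation["variable"])
        return expected is not None and expected != observation["value"]

    def abduce(self, observation):
        # Propose local hypotheses that could explain the conflict.
        return self.knowledge.get("hypotheses", [])

    def simulate(self, hypothesis):
        # Project the consequences of a hypothesis holding.
        return f"if '{hypothesis}' holds, expect {self.knowledge['consequence']}"

def spotlight(components, observation):
    """Sequentially interrogate components; all reasoning stays local."""
    explanations = []
    for c in components:
        if c.detect_conflict(observation):
            explanations += [c.simulate(h) for h in c.abduce(observation)]
    return explanations

thermometer = LocalExplanatoryComponent(
    "thermometer",
    {"temperature": "hot", "hypotheses": ["window open"],
     "consequence": "heater turns on"})
print(spotlight([thermometer], {"variable": "temperature", "value": "cold"}))
```
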
... Behind the screen is a person who knows Chinese and a computer that can give the correct answer to the meaning of the Chinese character by using look-up tables. Just because the answer is correct does not mean that the computer understands Chinese (Searle, 1980). His conclusion: 'in the literal, real, observer-independent sense in which humans compute, mechanical computers do not compute. ...
Article
Full-text available
This article sets out to explore a shift in the sources of evidence-of-learning in the era of networked computing. One of the key features of recent developments has been popularly characterized as ‘big data’. We begin by examining, in general terms, the frame of reference of contemporary debates on machine intelligence and the role of machines in supporting and extending human intelligence. We go on to explore three kinds of application of computers to the task of providing evidence-of-learning to students and teachers: (1) the mechanization of tests—for instance, computer adaptive testing, and automated essay grading; (2) data mining of unstructured data—for instance, the texts of student interaction with digital artifacts, textual interactions with each other, and body sensors; (3) the design and analysis of mechanisms for the collection and analysis of structured data embedded within the learning process—for instance, in learning management systems, intelligent tutors, and simulations. A consequence of each and all of these developments is the potential to record and analyze the ‘big data’ that is generated. The article presents an optimistic view of what may be possible as these technologies and pedagogies evolve, while offering cautionary warnings about associated dangers.
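
As a toy illustration of the first application, computer adaptive testing can be reduced to a loop: estimate ability, pick the item most informative at that estimate, and update from the response. The Rasch-style model and the gradient-style update below are simplifying assumptions; operational CAT systems use full item-response-theory estimation.

```python
import math

def prob_correct(ability, difficulty):
    # Rasch model: P(correct) = logistic(ability - difficulty)
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def run_cat(item_bank, answer, lr=0.5):
    """item_bank: {item_id: difficulty}; answer: item_id -> bool."""
    ability, remaining = 0.0, dict(item_bank)
    while remaining:
        # Pick the item whose difficulty is closest to the current estimate.
        item = min(remaining, key=lambda i: abs(remaining[i] - ability))
        difficulty = remaining.pop(item)
        observed = 1.0 if answer(item) else 0.0
        # Nudge the ability estimate toward the observed response.
        ability += lr * (observed - prob_correct(ability, difficulty))
    return ability

bank = {"easy": -1.0, "medium": 0.0, "hard": 1.0}
print(run_cat(bank, lambda item: item != "hard"))  # positive, but below "hard"
```
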
... II. WHAT DO "WEAK AI" AND "STRONG AI" MEAN? "Weak AI" and "Strong AI" are two terms coined by John Searle in the "Chinese room argument" (CRA) [13]. CRA is a thought experiment as follows: "Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. ...
Preprint
Full-text available
AI has surpassed humans across a variety of tasks such as image classification, playing games (e.g., go, "Starcraft" and poker), and protein structure prediction. However, at the same time, AI is also the subject of serious controversy. Many researchers argue that little substantial progress has been made for AI in recent decades. In this paper, the author (1) explains why controversies about AI exist; (2) distinguishes two paradigms of AI research, termed "weak AI" and "strong AI" (a.k.a. artificial general intelligence); (3) clarifies how to judge which paradigm a research work should be classified into; and (4) discusses what the greatest value of "weak AI" is if it has no chance of developing into "strong AI".
... The first AI winter took place mainly after 1969 and was chiefly due to three problems [16]: (i) It was understood that systems based on symbolic knowledge, rules, and logic are incapable of understanding the subject matter, or in other words of building common sense. In this context, the Chinese Room argument by Searle [38] states that a digital computer cannot gain any understanding of the challenge by simply following the syntax of a program. Moreover, changing the syntax to adapt to new challenges requires a deep understanding of the syntax itself. ...
Preprint
Full-text available
This work provides a starting point for researchers interested in gaining a deeper understanding of the big picture of artificial intelligence (AI). To this end, a narrative is conveyed that allows the reader to develop an objective view on current developments that is free from false promises that dominate public communication. An essential takeaway for the reader is that AI must be understood as an umbrella term encompassing a plethora of different methods, schools of thought, and their respective historical movements. Consequently, a bottom-up strategy is pursued in which the field of AI is introduced by presenting various aspects that are characteristic of the subject. This paper is structured in three parts: (i) Discussion of current trends revealing false public narratives, (ii) an introduction to the history of AI focusing on recurring patterns and main characteristics, and (iii) a critical discussion on the limitations of current methods in the context of the potential emergence of a strong(er) AI. It should be noted that this work does not cover any of these aspects holistically; rather, the content addressed is a selection made by the author and subject to a didactic strategy.
... To this end, philosophy of nature, philosophy of technology, and anthropology will be involved, beyond ethics (e.g., Searle, 1980; Grunwald et al., 2002; Reggia, 2013; Wallach and Allen, 2009). ...
... The author, starting from the assumption that the models developed by the "artificial life" research program are immune to Searle's (1980) argument, proposes a three-point program to close the gap that still divides the human mind from the artificial one, looking to biology to reproduce the causal links underlying behavior. From such an optimistic perspective, philosophy is called upon to resolve the ethical problems tied to a possible humanization of AI (Giger et al., 2019) and to the possibility/necessity of implementing moral norms within it (Fossa, 2020). ...
Thesis
Full-text available
"Siri, tell me a joke." What seems like just a phrase to impress guests in fact appears to question philosophy (and not only cognitive philosophy) about concepts and topics that are its own. Language and communication, irony, intelligence, and cognition have always been among its objects of study. Faced with the technological innovations of recent decades, philosophy's first task is "foundational": it must provide the other disciplines with the categories (ethical ones, for example) necessary for reflection. The second task is "reflexive": gathering new questions arising in other fields of knowledge and addressing them with its own specificity. Finally, it must perform a "predictive" task, anticipating questions that, while not yet current, may form part of future scenarios. Moreover, the studies linking humor and Artificial Intelligence (hereafter: AI) are so vast and varied that philosophy cannot shirk its ordering role, not by delegating but by taking charge of it through a multidisciplinary approach.
... When a language is learnt, at least some of its novel symbols must be "grounded" in perceptions and actions; if not, the language learner might not know what linguistic symbols relate to in the physical world, i.e., what they are used to speak about, and, thus (in one sense) what they "mean" (Freud, 1891; Locke, 1909/1847; Searle, 1980; Harnad, 1990, 2012; Cangelosi et al., 2000). Indeed, children typically acquire the meaning of some words used to refer to familiar objects (such as "sun") in situations involving the simultaneous perception of the spoken lexical item and the referent object (Bloom, 2000; Vouloumanos and Werker, 2009); similarly, it has been argued that a common situation for learning action-related words (like "run") involves usage and perception of the novel items just before, after or during the execution of the corresponding movement (Tomasello and Kruger, 1992). ...
Article
Full-text available
Embodied theories of grounded semantics postulate that, when word meaning is first acquired, a link is established between symbol (word form) and corresponding semantic information present in modality-specific (including primary) sensorimotor cortices of the brain. Direct experimental evidence documenting the emergence of such a link (i.e., showing that presentation of a previously unknown, meaningless word sound induces, after learning, category-specific reactivation of relevant primary sensory or motor brain areas), however, is still missing. Here, we present new neuroimaging results that provide such evidence. We taught participants aspects of the referential meaning of previously unknown, senseless novel spoken words (such as "Shruba" or "Flipe") by associating them with either a familiar action or a familiar object. After training, we used functional magnetic resonance imaging to analyze the participants' brain responses to the new speech items. We found that hearing the newly learnt object-related word sounds selectively triggered activity in the primary visual cortex, as well as secondary and higher visual areas. These results for the first time directly document the formation of a link between the novel, previously meaningless spoken items and corresponding semantic information in primary sensory areas in a category-specific manner, providing experimental support for perceptual accounts of word-meaning acquisition in the brain.
... Thus, we narrow down the terminological scope of this paper by drawing on the common differentiation between "weak AI", which only pretends to think, and "strong AI", which refers to a mind exhibiting mental states [7], as well as, on a domain-oriented level, on the categorization into narrow AI and Artificial General Intelligence (AGI) [8]. While narrow AI refers to an AI that is equally as good as or better than a human in a specific domain of tasks, an AGI is posited to be equally as good as or better than a human in any domain of tasks [8]. ...
Conference Paper
Full-text available
Artificial Intelligence (AI) provides organizations with vast opportunities to deploy AI for competitive advantage, such as improving processes and creating new or enriched products and services. However, the failure rate of projects implementing AI in organizations is still high and prevents organizations from fully seizing the potential that AI exhibits. To contribute to closing this gap, we seize the unique opportunity to gain insights from five organizational cases. In particular, we empirically investigate how the unique characteristics of AI – i.e. experimental character, context sensitivity, black box character, and learning requirements – induce challenges into project management, and how these challenges are addressed in organizational (sociotechnical) contexts. This shall provide researchers with an empirical and conceptual foundation for investigating the cause-effect relationships between the characteristics of AI, project management, and organizational change. Practitioners can benchmark their own practices against the insights to increase the success rates of future AI implementations.
... The problem of symbol grounding is illustrated by Searle's (1980) Chinese room problem (see also Harnad, 1990). A variant of the problem goes like this: You are a monolingual speaker of English and isolated in a room with nothing but a huge book. ...
Article
Full-text available
Humans seamlessly make sense of a rapidly changing environment, using a seemingly limitless knowledgebase to recognize and adapt to most situations we encounter. This knowledgebase is called semantic memory. Embodied cognition theories suggest that we represent this knowledge through simulation: understanding the meaning of coffee entails reinstantiating the neural states involved in touching, smelling, seeing, and drinking coffee. Distributional semantic theories suggest that we are sensitive to statistical regularities in natural language, and that a cognitive mechanism picks up on these regularities and transforms them into usable semantic representations reflecting the contextual usage of language. These appear to present contrasting views on semantic memory, but do they? Recent years have seen a push toward combining these approaches under a common framework. These hybrid approaches augment our understanding of semantic memory in important ways, but current versions remain unsatisfactory in part because they treat sensory-perceptual and distributional-linguistic data as interacting but distinct types of data that must be combined. We synthesize several approaches which, taken together, suggest that linguistic and embodied experience should instead be considered as inseparably entangled: just as sensory and perceptual systems are reactivated to understand meaning, so are experience-based representations endemic to linguistic processing; further, sensory-perceptual experience is susceptible to the same distributional principles as language experience. This conclusion produces a characterization of semantic memory that accounts for the interdependencies between linguistic and embodied data that arise across multiple timescales, giving rise to concept representations that reflect our shared and unique experiences. This article is categorized under: Psychology > Language Neuroscience > Cognition Linguistics > Language in Mind and Brain.
... Without doubt, Prolog can be used for many different applications, starting from the modelling of parsing and natural language comprehension and going on to the modelling of planning mechanisms and the abilities of logical inference agents. Nobody would suggest that these applications (if successful) give a deeper justification for Prolog as part of Cognitive Linguistics (at least if we reject the strong view of Artificial Intelligence; see Searle (1980)). In a similar way, the present logical system can be used for many different purposes. ...
Article
Full-text available
Horn's division of pragmatic labour (Horn, 1984) is a universal property of language, and amounts to the pairing of simple meanings with simple forms, and deviant meanings with complex forms. This division makes sense, but a community of language users that does not know it makes sense will still develop it after a while, because it gives optimal communication at minimal cost. This property of the division of pragmatic labour is shown by formalising it and applying it to a simple form of signalling games, which allows computer simulations to corroborate intuitions. The division of pragmatic labour is a stable communicative strategy that a population of communicating agents will converge on, and it cannot be replaced by alternative strategies once it is in place.
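
A toy expected-utility computation conveys why the Horn strategy wins in such a signalling game: pairing the frequent meaning with the cheap form minimizes expected speaker cost, while both bijective strategies communicate equally well. The numbers are illustrative assumptions; the paper's formalization differs in detail.

```python
P = {"frequent": 0.8, "rare": 0.2}    # prior over meanings
COST = {"short": 0.1, "long": 0.3}    # speaker's cost per form
BENEFIT = 1.0                         # payoff for successful communication

def expected_utility(strategy):
    """strategy: meaning -> form; a bijection, so hearers can invert it."""
    return sum(p * (BENEFIT - COST[strategy[m]]) for m, p in P.items())

horn = {"frequent": "short", "rare": "long"}
anti_horn = {"frequent": "long", "rare": "short"}
print(round(expected_utility(horn), 2))       # 0.86 -- Horn strategy pays more
print(round(expected_utility(anti_horn), 2))  # 0.74
```
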
... Generally speaking, there are two forms of AI: "Strong AI" and "Weak AI" (Kaplan and Haenlein, 2020;Searle, 1980). Strong AI refers to that which can behave in a way that equals or surpasses human intelligence, while weak AI is that which simulates human intelligence in specific problem domains. ...
Article
Artificial intelligence (AI) may be one of the most disruptive technologies of the 21st century, with the potential to transform every aspect of society. Preparing for a “good AI society” has become a hot topic, with growing public and scientific interest in the principles, policies, incentives, and ethical frameworks necessary for society to enjoy the benefits of AI while minimizing the risks associated with its use. However, despite the renewed interest in artificial intelligence, little is known of the direction in which AI scholarship is moving and whether the field is evolving towards the goal of building a “good AI society”. Based on a bibliometric analysis of 40,147 documents retrieved from the Web of Science database, this study describes the intellectual, social, and conceptual structure of AI research. It provides 136 evidence-based research questions about how AI research can help understand the social changes brought about by AI and prepare for a “good AI society.” The research agenda is organized according to ten social impact domains identified from the literature, including crisis response, economic empowerment, educational challenges, environmental challenges, equality and inclusion, health and hunger, information verification and validation, infrastructure management, public and social sector management, security, and justice.
Thesis
This study was designed to investigate the influence of the autonomy of artificial intelligence (AI) on users’ satisfaction, continuance intention, and forgiveness toward an AI service, and to examine the moderating role of severity of error in those relationships. The results from 108 participants provided evidence that (a) the group using a relatively more autonomous AI reported lower satisfaction, continuance intention, and forgiveness toward the AI service than the group using a relatively less autonomous AI, and (b) severity of error moderated the relationship between the autonomy of the AI and users’ satisfaction, continuance intention, and forgiveness toward the AI service.
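As a hedged illustration of what testing such a moderation effect commonly looks like, here is a generic sketch on simulated data; it is not the thesis's actual analysis, and the variable names, effect sizes, and use of OLS are all assumptions.

```python
# Sketch of a moderation test: does severity of error moderate the
# effect of AI autonomy on satisfaction? (Simulated data, illustrative only.)
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 108
autonomy = rng.integers(0, 2, n)   # 0 = less autonomous, 1 = more autonomous
severity = rng.integers(0, 2, n)   # 0 = minor error, 1 = severe error

# Simulated satisfaction: a negative autonomy effect that is stronger
# when the error is severe (the interaction).
satisfaction = (5.0 - 0.5 * autonomy - 0.3 * severity
                - 0.8 * autonomy * severity + rng.normal(0, 1, n))

X = sm.add_constant(np.column_stack([autonomy, severity, autonomy * severity]))
model = sm.OLS(satisfaction, X).fit()
# A significant coefficient on the product term (the third predictor)
# is the standard statistical evidence for moderation.
print(model.summary())
```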
Article
Full-text available
We illustrate some subtle features of quantum computers that receive less attention in the current literature than more technical topics (such as quantum algorithms). In particular, we focus on those aspects of quantum computers that relate to quantum gravity (via entanglement) and to the quantum side of the mind (via quantum computational logic and quantum metalanguage). We expect that, in the quantum computing framework, Quantum Gravity and the Quantum Mind may appear strictly interconnected. To use metaphorical language: General Relativity teaches how space-time deals with matter and vice versa; Quantum Mechanics teaches (or rather, should teach) how reality deals with measurements. Time (the problem of time) is in the middle, together with the classical-minded observer, and Quantum Gravity is still a chimera. But quantum computing exploits the inner quantum-computational side of quantum gravity. Quantum gravity (or rather its quantum space-time background) seems to have a meta-logical structure (a quantum metalanguage) quite similar to that of the quantum brain when the latter is described by a (dissipative) Quantum Field Theory. In more suggestive words, empty quantum space-time tells the Quantum Mind how to quantum meta-think.
Chapter
Full-text available
Currie’s (2010) argument that “i-desires” must be posited to explain our responses to fiction is critically discussed. It is argued that beliefs and desires featuring ‘in the fiction’ operators, and not sui generis imaginings (or “i-beliefs” or “i-desires”), are the crucial states involved in generating fiction-directed affect. A defense of the “Operator Claim” is mounted, according to which ‘in the fiction’ operators would also be required within fiction-directed sui generis imaginings (or “i-beliefs” and “i-desires”), were there such. Once we appreciate that even fiction-directed sui generis imaginings would need to incorporate ‘in the fiction’ operators, the main appeal of the idea that sui generis imaginings (or “i-beliefs” or “i-desires”) are at work in fiction appreciation dissipates. [This is Chapter 10 of Explaining Imagination (OUP, 2020)]
Article
Full-text available
The article schematically presents four types of codes associated with four forms of intelligence in living systems (genomic, sensorimotor, symbolic, and digital). It focuses in particular on the feedback effects of each higher level on the previous levels and on the sustained trend towards externalization. The human species is now reshaping these various forms of intelligence. The boundaries between the natural, the cultural, and the technical are in the process of blurring. A trans-epistemic society is emerging, one which expressly includes the full depth of its relations with the biosphere.
Book
Full-text available
Television series have long since become an indispensable part of our everyday lives. Internet services in particular have drastically increased the consumption of serial formats through affordable subscription models. The now almost unrestricted availability of television and internet series, and the increased temporal and spatial flexibility of their reception, act as catalysts for this development. Probably owing to its topicality and ubiquity, the television series also enjoys great popularity in current academic discourse, both as a subject of practice-oriented teaching and of research. From a philosophy-of-science perspective, television series research is currently in an exciting phase in which its status as an independent field of study is being negotiated. The series as an academic subject can by no means be claimed by any single lead discipline; rather, it offers the most diverse disciplines points of contact for confirming, adjusting, and further developing their own concepts and methods. Precisely because television series research is so hybrid, it makes sense to produce an edited volume that brings these heterogeneous approaches together. The aim of the present collection is to use the perspectives of individual disciplines to gain insight into the manifold possibilities and desiderata of a general study of television series.
Chapter
Full-text available
Comparatively easy questions we might ask about creativity are distinguished from the hard question of explaining transformative creativity. Many have focused on the easy questions, offering no reason to think that the imagining relied upon in creative cognition cannot be reduced to more basic folk psychological states. The relevance of associative thought processes to songwriting is then explored as a means for understanding the nature of transformative creativity. Productive artificial neural networks, known as generative adversarial networks (GANs), are a recent example of how a system’s ability to generate novel products can both be finely tuned by prior experience and grounded in strategies that cannot be articulated by the system itself. Further, the kinds of processes exploited by GANs need not be seen as incorporating something akin to sui generis imaginative states. The chapter concludes with reflection on the added relevance of personal character to explanations of creativity. [This is Chapter 12 of the book Explaining Imagination.]
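For readers unfamiliar with the mechanism, here is a minimal GAN sketch; it is illustrative only, not the chapter's example, and the architecture, target distribution, and hyperparameters are assumptions. The generator never sees an explicit description of the target; it only receives an adversarial signal from the discriminator, which mirrors the point that its generative strategy is tuned by experience yet not articulable by the system.

```python
# Minimal GAN: a generator learns to mimic N(3, 0.5) purely from the
# discriminator's adversarial feedback (illustrative sketch).
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # samples from the target distribution
    fake = G(torch.randn(64, 4))            # samples from the generator

    # Discriminator update: tell real samples from generated ones.
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool the discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 4))
print("generated mean:", samples.mean().item(), "std:", samples.std().item())
```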
Thesis
Full-text available
The goal of this application-oriented thesis, whose research question addresses a current, practice-related problem, is to develop a design and development process for an AI-supported chatbot for kulturfinder.sh. Both the technical possibilities of new technologies and their potential social and ethical problems are examined, with a practical focus on a solution for the kulturfinder.sh project. The result is intended to be a realistic draft of a chatbot for kulturfinder.sh that can serve as a basis for further work, presented in the form of a prototype. Alongside this, planning and hypothetical considerations are formulated that could sketch a future implementation of the prototype. To this end, it is important to understand the current state of technical development, the associated social and ethical considerations, and the practical environment of the kulturfinder.sh project. To guarantee a realistic orientation, the work is carried out in close cooperation with the project owners of kulturfinder.sh and its project partners, such as the Schleswig-Holstein State Library, the digiCULT-Verbund eG, and the technical experts involved from Dataport AöR.
Book
Full-text available
This report addresses the nature, scope and possible effects of digital automation. It reviews relevant literature and situates modern debates on technological change in historical context. It identifies threats to job quality and an unequal distribution of the risks and benefits associated with digital automation. It also offers some policy options that, if implemented, would help to harness technology for positive economic and social ends. The policy options range from industry and sectoral skills alliances that focus on facilitating transitions for workers in 'at risk' jobs, to proposals for the reduction of working time. The suggested policies derive from the view that digital automation must be managed on the basis of principles of industrial democracy and social partnership. The report argues for a new Digital Social Contract. At a time of crisis, the policy options set out in the report aim to offer hope for a digital future that works for all.
Preprint
This is an introduction to a forthcoming special issue of Interdisciplinary Science Reviews entitled "Artificial Intelligence and its Discontents" -- please enjoy.
Article
Advances in applying statistical Machine Learning (ML) have led to several claims of human-level or near-human performance in tasks such as image classification and speech recognition. Such claims are unscientific primarily for two reasons: (1) they incorrectly enforce the notion that task-specific performance can be treated as a manifestation of general intelligence, and (2) they are not verifiable, as there is currently no set benchmark for measuring human-like cognition in a machine learning agent. Moreover, an ML agent's performance is influenced by the knowledge ingested into it by its human designers, so its performance may not necessarily reflect its true cognition. In this paper, we propose a framework that draws parallels from human cognition to measure machine cognition. Human cognitive learning is well studied in developmental psychology, with frameworks and metrics in place to measure actual learning. To either believe or refute claims of human-level performance by a machine learning agent, we need a scientific methodology for measuring its cognition. Our framework formalizes the incremental implementation of human-like cognitive processes in ML agents, with the implicit goal of measuring it. The framework offers guiding principles for measuring (1) task-specific machine cognition and (2) general machine cognition that spans across tasks. It also provides guidelines for building domain-specific task taxonomies with which to cognitively profile tasks. We demonstrate the application of the framework with a case study in which two ML agents performing vision and NLP tasks are cognitively evaluated.
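The paper's actual framework is not reproduced here, but a hypothetical sketch can show what "cognitively profiling" tasks might look like in code; the dimensions, scores, and aggregation rule below are all illustrative assumptions, not the authors' metrics.

```python
# Hypothetical sketch of cognitive task profiling (illustrative assumptions).
from dataclasses import dataclass, field

@dataclass
class TaskProfile:
    name: str
    demands: dict = field(default_factory=dict)  # cognitive dimension -> demand in [0, 1]

    def task_cognition(self, agent_scores: dict) -> float:
        """Task-specific cognition: agent ability weighted by the task's demands."""
        total = sum(self.demands.values())
        return sum(self.demands[d] * agent_scores.get(d, 0.0)
                   for d in self.demands) / total

# Two tasks from a toy taxonomy, profiled along made-up dimensions.
image_cls = TaskProfile("image classification",
                        {"perception": 0.9, "memory": 0.3, "reasoning": 0.2})
qa = TaskProfile("question answering",
                 {"perception": 0.2, "memory": 0.7, "reasoning": 0.8})

# A hypothetical agent's measured ability on each dimension.
agent = {"perception": 0.95, "memory": 0.4, "reasoning": 0.3}

# "General" cognition taken here as the average across tasks in the taxonomy.
per_task = [t.task_cognition(agent) for t in (image_cls, qa)]
print(per_task, sum(per_task) / len(per_task))
```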
Chapter
The research project of the work "Systemdialog" consists in understanding the capabilities of artificial intelligence as an opportunity to redesign the moments of encounter between human and machine, with the human being as the pivot of future developments. The view of a system as a field of interaction between algorithmic logic and user interface gives way to an assessment of the machine as a digital actor.
Article
An extension to "intentional states" of the author's earlier analysis of speech acts. The author shows that intentional states share a fundamentally common structure with speech acts; this structure is expressed essentially in what he calls the "direction of fit", and then in the idea of "satisfaction". He draws out the consequences of this theory for the solution of certain traditional philosophical problems.
Article
Notes that any effort psychology may make to encompass the fact of subjectivity will oblige it to accept neither (a) a metaphysical subject of experience nor (b) a field of experience that belongs to no individual. But even physicalism in regard to the relation of mind to body must recognize the existence of a certain residual subjectivity. The brain activities that constitute perceptual and other experiences have an immediacy that thoroughly objective concepts do not and cannot cover. But all knowledge about the world (including experiences) is a knowledge of its structural properties. It follows that a thoroughly objective psychology need not omit any mental event, process, or state. But areas of psychology that seek to address the occurrent immediacy of human experiences will not be able to proceed strictly objectively. It is argued specifically that perceptual theory requires reference to the qualitative aspect of immediate experience as representing an essential function in perceptual awareness of the physical environment.
Article
Both the psychology of perception and the philosophy of perception seem to show a new face when the process is considered at its own level, distinct from that of sensation. Unfamiliar conceptions in physics, anatomy, physiology, psychology, and phenomenology are required to clarify the separation and make it plausible. But there have been so many dead ends in the effort to solve the theoretical problems of perception that radical proposals may now be acceptable. Scientists are often more conservative than philosophers of science. I end, therefore, as I began, with a plea for help.
Article
Cognitive Science is likely to make little progress in the study of human behavior until we have a clear account of what a human action is. The aim of this paper is to present a sketch of a theory of action. I will locate the relation of intention to action within a general theory of Intentionality. I will introduce a distinction between prior intentions and intentions in action; the concept of the experience of acting; and the thesis that both prior intentions and intentions in action are causally self-referential. Each of these is independently motivated, but together they enable me to suggest solutions to several outstanding problems within action theory (deviant causal chains, the accordion effect, basic actions, etc.), to show how the logical structure of intentional action is strikingly like the logical structure of perception, and to construct an account of simple actions. A successfully performed intentional action characteristically consists of an intention in action together with the bodily movement or state of the agent which is its condition of satisfaction and which is caused by it. The account is extended to complex actions.
In his paper ‘The problem of serial order in behavior’, Karl Lashley (1951, p. 113) points out that ‘language presents in a most striking form the integrative functions that are characteristic of the cerebral cortex’, adding ‘... the problems raised by the organization of language seem to me to be characteristic of almost all other cerebral activity’. Some idea of the complexity of the integrative processes involved in speech can be gained from the fact that the adult speaker’s ability to produce syllables at an average speed of 210 to 220 a minute (or roughly 14 phonemes per second) means individual muscular events occurring throughout the speech apparatus at a rate of several hundred every second; in the case of some phonemes, the total time required to activate the muscles involved in their production is as much as twice the duration of the sound itself. Not very much is known at present about what this involves on the neuronal level, where the rate at which individual events occur must be greater by a large factor, but it is a point of considerable interest that there is at least some evidence to suggest that in some instances the order of neuronal events might be different from that of the muscular events with which they are correlated.*

The point Lashley is making in his paper is that any form of behaviour revealing this degree of complexity in its organization cannot be analysed as an associative chain of reflexes. But, as he points out, in the case of speech the evidence against the associative chain hypothesis is particularly compelling. This arises from considerations of two kinds. The first is the fact that the character of certain sounds is determined not only by the sounds that precede them but also by those that follow them. The second is the fact that the character of certain sounds is determined not only by the sounds in their immediate environment but also by the position they occupy with respect to the syntactic structure of the utterance. To take just one example, the speech of Standard English speakers contains at least twelve varieties (allophones) of the phoneme t. But whenever this is the first sound in a word and is immediately followed by a vowel, they will always use the aspirated allophone, never any of the others. This is clear evidence that in producing utterances speakers follow principles of organization relating to syntactic structure.

To produce a plausible model for speech we have to postulate not only principles of organization more complex than the Markov processes of associative chain theories but hierarchies of organization: elements on one level corresponding to what Lashley calls ‘generalized schemata of action’ and Miller, Galanter & Pribram (1960) call ‘plans’, which are carried out on the level below. Evidence in favour of such a model can be obtained from a study of speech disorders, ranging from the transpositions occurring in the speech of a tired or nervous speaker to the remarks of aphasics indicating that, although for the most part they can only produce strings of unintelligible sounds, they still ‘know what they want to say’. All these disorders can be viewed as involving in some degree a breakdown in integrative functions, an inability to carry out successfully plans for utterances.
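To make the contrast concrete, here is a toy sketch (an illustrative assumption about representation, not a model from the text): the allophone of t is selected from the sound's position within the word, as in the aspiration example above, which is exactly the kind of structural conditioning that an associative chain looking only at preceding sounds could not enforce.

```python
# Toy sketch: hierarchical plan execution vs. an associative chain.
# The allophone of /t/ is chosen from the word's structure, not from
# the preceding sound alone (illustrative only).
VOWELS = set("aeiou")

def realize(word):
    out = []
    for i, ph in enumerate(word):
        if ph == "t" and i == 0 and len(word) > 1 and word[i + 1] in VOWELS:
            out.append("tʰ")  # aspirated allophone: word-initial, prevocalic
        else:
            out.append(ph)
    return out

# The "plan" is a hierarchy: utterance -> words -> phonemes; the leaves
# are articulated in order, but their realization depends on the tree.
utterance = [list("take"), list("it"), list("to"), list("town")]
print([realize(w) for w in utterance])
# A pure associative chain could only condition each sound on its
# predecessors, so it could not capture this word-position rule.
```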