John R. Searle’s research while affiliated with University of California, Berkeley and other places


Publications (7)


Minds, Brains, and Programs
  • Article

September 1980 · 262 Reads · 4,229 Citations

Behavioral and Brain Sciences

John R. Searle

This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4. “Could a machine think?” On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.





Minds, Brains and Programs

January 1980 · 842 Reads · 1,876 Citations

Behavioral and Brain Sciences



The Intentionality of Intention and Action

January 1979 · 32 Reads · 171 Citations

Cognitive Science: A Multidisciplinary Journal

Cognitive Science is likely to make little progress in the study of human behavior until we have a clear account of what a human action is. The aim of this paper is to present a sketch of a theory of action. I will locate the relation of intention to action within a general theory of Intentionality. I will introduce a distinction between prior intentions and intentions in action; the concept of the experience of acting; and the thesis that both prior intentions and intentions in action are causally self-referential. Each of these is independently motivated, but together they enable me to suggest solutions to several outstanding problems within action theory (deviant causal chains, the accordion effect, basic actions, etc.), to show how the logical structure of intentional action is strikingly like the logical structure of perception, and to construct an account of simple actions. A successfully performed intentional action characteristically consists of an intention in action together with the bodily movement or state of the agent which is its condition of satisfaction and which is caused by it. The account is extended to complex actions.


What Is an Intentional State?

January 1979 · 42 Reads · 158 Citations

Mind

An extension of the earlier analysis of speech acts to "intentional states." The author shows that intentional states share a fundamentally common structure with speech acts; this structure is expressed essentially in what he calls "direction of fit," and then in the idea of "satisfaction." He draws out the consequences of this theory for the solution of certain traditional philosophical problems.

Citations (6)


... We recommend engaging students in discussions about various thought experiments (e.g. Searle, 1985; Bender and Koller, 2020) and exploring both sides of the debate: those who argue that it is impossible (e.g. Bender et al., 2021) and those who believe it is possible to some extent (e.g. ...

Reference:

Teaching LLMs at Charles University: Assignments and Activities
Minds, Brains and Programs
  • Citing Article
  • January 1980

Behavioral and Brain Sciences

... The sincerity condition (Sincerity Rule) requires that the speaker's intention be genuine. Searle (1979) establishes sincerity as the major felicity condition. Finally, the speaker commits to performing the presupposed act, hence the essential condition (Essential Rule). ...

What Is an Intentional State?
  • Citing Article
  • January 1979

Mind

... The basic idea shared by all these philosophers is that, when we talk about rules of a language or grammar, 'Additional evidence is required to show that they are rules that the agent is actually following and not mere hypotheses or generalisations that correctly describe his behaviour. It is not enough to get rules that have the right predictive powers; there must be some independent reason for supposing that the rules are functioning causally' (Searle 1980). ...

Rules and causation
  • Citing Article
  • March 1980

Behavioral and Brain Sciences

... It is highly questionable, however, whether algorithmic systems have what is necessary to mentally represent propositional content and act according to it in a literal sense (Johnson, 2006). Although the behavior of these systems may be interpreted "as if" they had beliefs, desires, and intentions, they do not subjectively experience "what it's like" (Nagel, 1974) to be in a particular mental state and do not genuinely understand its semantic content (Searle, 1980). On this basis, one might argue that the attribution of mental states to algorithmic systems can only ever be metaphorical, and thus that it seems ontologically misguided to say that algorithmic systems genuinely decide. ...

Intrinsic Intentionality
  • Citing Article
  • September 1980

Behavioral and Brain Sciences

... This limitation applies to simulations of biological neural networks as well: even if a conscious brain could be simulated by performing a very large number of simple computations on a calculator, that would not make the calculator conscious in the sense of IIT. This is related to the Chinese room problem and the wider discussion of machines' ability to think [54], which is beyond the scope of this work. ...

Minds, Brains, and Programs
  • Citing Article
  • September 1980

Behavioral and Brain Sciences