Article

Performing competently

... In conclusion, these results demonstrate that there is indeed a citation bias in the judgment and decision-making literature. They corroborate the claims that studies which observe optimal behavior tend to be ignored in the literature (Berkeley & Humphreys, 1982; Cohen, 1981; Lopes, 1981). ...
... It would appear from the results of this study that Hammond's call might best be heeded. Although the study of reasoning errors can advance our understanding of reasoning processes, so too can the study of good judgment (Christensen-Szalanski, 1978; Lopes, 1981; Lopes & Ekberg, 1980; Rachlin, Battalio, Kagel, & Green, 1981). ...
Article
Full-text available
Examined whether selectivity was used in the citing of evidence in research on the psychology of judgment and decision making and investigated the possible effects that this citation bias might have on the views of readers of the literature. An analysis of the frequency of citations of good- and poor-performance articles cited in the Social Science Citation Index from 1972 through 1981 revealed that poor-performance articles were cited significantly more often than good-performance articles. 80 members of the Judgment and Decision Making Society, a semiformal professional group, were asked to complete a questionnaire assessing the overall quality of human judgment and decision-making abilities on a scale from 0 to 100 and to list 4 examples of documented poor judgment or decision-making performance and 4 examples of good performance. Ss recalled significantly more examples of poor than of good performance. Less experienced Ss in the field appeared to have a lower opinion of human reasoning ability than did highly experienced Ss. Also, Ss recalled 50% more examples of poor performance than of good performance, despite the fact that the variety of poor-performance examples was limited. It is concluded that there is a citation bias in the judgment and decision-making literature, and poor-performance articles are receiving most of the attention from other writers, despite equivalent proportions of each type in the journals. (33 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
... Most of the literature on subjective and objective rationality fails to point out that alleged objective standards are in fact the observer's conjectures. Works arguing that researchers are not granted superior knowledge include Cohen (1981), Lopes (1981), Birnbaum (1983), and Messer and Griggs (1993), who stress that researchers may sometimes apply an inappropriate standard of rationality (for an overview see Stanovich and West (2000)). Gigerenzer (1991) and Chase et al. (1998) note, with respect to the concept of rationality, that a single interpretation of the probability calculus is not shared even by statisticians and philosophers themselves and that views on the subject evolve. ...
Article
The paper discusses the fact that the description of choice alternatives matters when consistency is examined. It argues that alternatives given to the decision-maker should be described as they are classified by the decision-maker himself and not as they are perceived by the scientist-observer. If, however, the decision-maker's classification is discernible from his choices only, then the consistency assumption always holds. Choice theory can, on the one hand, be based solely on choice data and nothing else, but on the other hand it thereby loses empirical content. It is also shown that the interpretation of the consistency assumption in demand theory differs from that in choice theory, because the former deals with multiple agents and therefore the description of the choice alternatives cannot be in terms of a decision-maker's classification, but instead must be given arbitrarily by the scientist-observer.
Article
Dietrich and List (Int J Game Theory 1–25, 2012) enrich the standard model of choice by explicitly modeling a decision maker's mental state. They assume that a change in mental state either induces a change in preferences, or alternatively, a change in the decision maker's perception of the choice problem. This paper argues that the two interpretations are not always interchangeable. Two examples are presented which demonstrate that a decision maker's ("subjective") perception may not be adequately modeled as embodied in his preferences over ("objective") alternatives. It is also emphasized that in order to understand choice behavior, one has to take into account the decision maker's perception of the choice problem rather than its "objective" description by an observer.
Article
Logic and Representation brings together a collection of essays, written over a period of ten years, that apply formal logic and the notion of explicit representation of knowledge to a variety of problems in artificial intelligence, natural language semantics, and the philosophy of mind and language. Particular attention is paid to modeling and reasoning about knowledge and belief, including reasoning about one's own beliefs, and the semantics of sentences about knowledge and belief. Robert C. Moore begins by exploring the role of logic in artificial intelligence, considering logic as an analytical tool, as a basis for reasoning systems, and as a programming language. He then looks at various logical analyses of propositional attitudes, including possible-world models, syntactic models, and models based on Russellian propositions. Next Moore examines autoepistemic logic, a logic for modeling reasoning about one's own beliefs. Rounding out the volume is a section on the semantics of natural language, including a survey of problems in semantic representation; a detailed study of the relations among events, situations, and adverbs; and a presentation of a unification-based approach to semantic interpretation. Robert C. Moore is principal scientist of the Artificial Intelligence Center of SRI International.
Article
Article
Although Herbert Simon's work is often cited by political scientists, it has not generated a large research program in the discipline. This is a waste of a major intellectual resource. The main challenge to the rational choice research program, now the most important research program in political science, can be developed by building on Simon's ideas on bounded rationality. The essay defends this assertion by examining how the work of both the early Simon (primarily satisficing-and-search models) and the later Simon (on problem solving) can shed light on important topics in our discipline such as budgeting, turnout, and party competition.
Article
Full-text available
Article
There is a tension between normative and descriptive elements in the theory of rational belief. This tension has been reflected in work in psychology and decision theory as well as in philosophy. Canons of rationality should be tailored to what is humanly feasible. But rationality has normative content as well as descriptive content. A number of issues related to both deductive and inductive logic can be raised. Are there full beliefs – statements that are categorically accepted? Should statements be accepted when they become overwhelmingly probable? What is the structure imposed on these beliefs by rationality? Are they consistent? Are they deductively closed? What parameters, if any, does rational acceptance depend on? How can accepted statements come to be rejected on new evidence? Should degrees of belief satisfy the probability calculus? Does conformity to the probability calculus exhaust the rational constraints that can be imposed on partial beliefs? With the acquisition of new evidence, should beliefs change in accord with Bayes' theorem? Are decisions made in accord with the principle of maximizing expected utility? Should they be? A systematic set of answers to these questions is developed on the basis of a probabilistic rule of acceptance and a conception of interval-valued logical probability according to which probabilities are based on known frequencies. This leads to limited deductive closure, a demand for only limited consistency, and the rejection of Bayes' theorem as universally applicable to changes of belief. It also becomes possible, given new evidence, to reject previously accepted statements.
Article
Article
Full-text available
The finding of a correlation between normative responses to judgment and reasoning questions and cognitive capacity measures (SAT score) suggests that the cause of the non-normative responses is computational in nature. This actually is consistent with the rational competence view. The implications of this finding for the adaptation view of cognition are discussed.
Article
Full-text available
This chapter makes an extended case for the philosophical relevance of recent empirical work on reasoning. It focuses on the implications of this work for an analysis of justification of inductive procedures. It argues that Nelson Goodman's elegant and enormously influential attempt to "dissolve" the problem of induction is seriously flawed. At the root of the difficulty is the fact that Goodman makes tacit assumptions about the ways in which people actually infer. These are empirical assumptions, and recent studies of inference indicate that the assumptions are false. This problem is the burden of the first section of the chapter. The second section attempts to repair the damage. The trouble with Goodman's story about induction centers on his analysis of what we are saying when we say that a rule of inference is justified. The chapter then offers a new account of what is going on when people say that an inference or a rule of inference is (or is not) justified. In the course of the analysis, the much neglected social component of justification and the role of expert authority in our cognitive lives are explored.
Article
Full-text available
People have erroneous intuitions about the laws of chance. In particular, they regard a sample randomly drawn from a population as highly representative, that is, similar to the population in all essential characteristics.
Article
Full-text available
Conducted 2 experiments with 12 paid female undergraduates and 26 undergraduates receiving course credit, respectively. The Ss judged the subjective worth of bets defined entirely by verbal phrases such as "somewhat unlikely to win sandals." A theory of information integration predicts that the subjective values of probability and payoff should combine by multiplying. Procedures from functional measurement were applied to test the model and to scale the subjective values of verbal probabilities and payoffs. Data from both experiments support the model. The Ss also judged 2-part bets such as "highly probable to win watch" and "toss-up to win bicycle." The theory predicts that the worths of the 2 parts should combine by adding. In both experiments, however, the judged worth of 2-part bets was less than the sum of the worths of the parts. This subadditivity effect was also found in reanalyses of earlier studies on commodity bundles. This raises serious questions about the traditional additive utility approach to risky decision making. (23 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
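The multiplying model described in this abstract, and the subadditivity finding for two-part bets, can be illustrated with a small numerical sketch. The subjective values below are hypothetical placeholders, not estimates from the experiments.

```python
# Hypothetical subjective values for verbal probabilities and prizes
# (illustrative placeholders, not data from the study).
subjective_prob = {"somewhat unlikely": 0.3, "highly probable": 0.8, "toss-up": 0.5}
subjective_worth = {"sandals": 20.0, "watch": 60.0, "bicycle": 150.0}

def bet_worth(phrase, prize):
    """Multiplying model: worth of a one-part bet = subjective probability x subjective payoff."""
    return subjective_prob[phrase] * subjective_worth[prize]

w_watch = bet_worth("highly probable", "watch")
w_bike = bet_worth("toss-up", "bicycle")

# The additive prediction for the two-part bet is the sum of the parts; the reported
# subadditivity effect means the judged worth of the two-part bet falls below this sum.
print(w_watch, w_bike, w_watch + w_bike)
```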
Article
Full-text available
Describes 3 experiments in which housewives (N = 142) estimated the probability (p) of the presence of a disease which had been indicated by diagnostic equipment. Although data given to Ss indicated p < .5, Ss consistently estimated p > .8. Successive trials altered the order of presentation of data and progressively reduced the data given. However, Ss always gave closely similar p values, accompanied by high confidence ratings. 2 hypotheses are examined to account for these findings. A 3rd experiment suggests the conclusion that the most important factor is that Ss import a rigid prior probability from their previous experience and ignore numerical data. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
A review of the literature indicates that linear models are frequently used in situations in which decisions are made on the basis of multiple codable inputs. These models are sometimes used (a) normatively to aid the decision maker, (b) as a contrast with the decision maker in the clinical vs statistical controversy, (c) to represent the decision maker "paramorphically" and (d) to "bootstrap" the decision maker by replacing him with his representation. Examination of the contexts in which linear models have been successfully employed indicates that the contexts have the following structural characteristics in common: each input variable has a conditionally monotone relationship with the output; there is error of measurement; and deviations from optimal weighting do not make much practical difference. These characteristics ensure the success of linear models, which are so appropriate in such contexts that random linear models (i.e., models whose weights are randomly chosen except for sign) may perform quite well. 4 examples involving the prediction of such codable output variables as GPA and psychiatric diagnosis are analyzed in detail. In all 4 examples, random linear models yield predictions that are superior to those of human judges. (52 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
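A minimal sketch of a "random linear model" in the sense used above: weights are drawn at random and only their signs are fixed to match the presumed direction of each cue. The simulated data and variable names are assumptions for illustration, not the GPA or diagnostic data reanalyzed in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated placeholder data: 100 cases, 3 cues, each conditionally monotone
# (positively related) with the criterion, plus measurement error.
n_cases, n_cues = 100, 3
cues = rng.normal(size=(n_cases, n_cues))
criterion = cues @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.5, size=n_cases)

# Random linear model: weights chosen at random except for sign (all positive here).
random_weights = rng.uniform(0.1, 1.0, size=n_cues)
prediction = cues @ random_weights

# Validity of the random linear model = its correlation with the criterion.
print(f"validity of random linear model: {np.corrcoef(prediction, criterion)[0, 1]:.2f}")
```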
Article
Full-text available
Discusses the inadequacy of experimental evidence and logical considerations in current theories of intellectual deficits caused by cultural deprivation. Reports on the presence or absence of competence have been based on noncomparable experimental situations. Deficit interpretations have assumed that absence of performance reflects absence of a particular psychological process. A strategy is proposed which combines usual experimental approaches with ethnographic methods for the study of cognitive processes. (44 ref.) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Described 2 previously conducted psychology experiments to a total of 128 male undergraduates in 2 experiments. Some Ss were told about the actual distribution of behavior in the experiments, and others were not. Knowledge of the distributions did not influence Ss' attributions about the causes of the behavior of original participants nor their predictions about what their own behavior might be. As expected, base rate information did not even affect Ss' guesses about the behavior of particular target members of the original experimental populations. It is concluded that Ss ignore base rates for behavior just as they ignore base rates for category membership. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
That all events are equally associable and obey common laws is a central assumption of general process learning theory. A continuum of preparedness is defined which holds that organisms are prepared to associate certain events, unprepared for some, and contraprepared for others. A review of data from the traditional learning paradigms shows that the assumption of equivalent associability is false. Examples from experiments in classical conditioning, instrumental training, discrimination training, and avoidance training support this conclusion. Language acquisition and the functional autonomy of motives are also viewed using the preparedness continuum. It is speculated that the laws of learning themselves may vary with the preparedness of the organism for the association and that different physiological and cognitive mechanisms may covary with the dimension. (2 p. ref.) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Notes that an accumulating body of research on clinical judgment, decision making, and probability estimation has documented a substantial lack of ability of both experts and nonexperts. However, evidence shows that people have great confidence in their fallible judgment. This article examines how this contradiction can be resolved and, in so doing, discusses the relationship between learning and experience. The basic tasks that are considered involve judgments made for the purpose of choosing between actions. At some later time, outcome feedback is used for evaluating the accuracy of judgment. The manner in which judgments of the contingency between predictions and outcomes are made is discussed and is related to the difficulty people have in searching for disconfirming information to test hypotheses. A model for learning and maintaining confidence in one's own judgment is developed that includes the effects of experience and both the frequency and importance of positive and negative feedback. (78 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Three experiments examined how a poker player's models or mental representations of the game influence his or her play in a modified version of 5-card stud. In Exp I, experienced poker players judged the likelihood of beating pairs of poker hands each described by the upcards in the hand, the amount bet on the hand by the opponent, and the playing style of the opponent. Results indicate that the subjective likelihood of beating a pair of poker hands is a multiplicative function of the subjective likelihoods of beating each of the hands individually and that Ss bet proportionally to their subjective likelihood of winning. Exps II and III examined the evaluation mechanisms through which Ss combine information to arrive at the subjective likelihood of beating a particular hand. These mechanisms include assessing the objective threat of upcards, combining this with information from the opponent's bets, and discounting for possible opponent bluffs. Results show a nonmonotonic relationship between the amount of the bet and the objective threat of the upcards and support an averaging rule for 2 of 7 Ss and an adding rule for the other 5 Ss. (38 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Applied a theory of information integration to decision making with probabilistic events. 10 undergraduates judged the subjective worth of duplex bets that included independent gain and lose components. The worth of each component was assumed to be the product of a subjective weight that reflected the probability of winning or losing, and the subjective worth of the money to be won or lost. The total worth of the bet was the sum of the worths of the 2 components. Thus, each judgment required multiplying and adding operations. The multiplying model worked quite well in 4 experimental conditions. The adding model showed more serious discrepancies, though these were small in magnitude. The theory of functional measurement was applied to scale the subjective values of the probability and money stimuli. Subjective and objective values were nonlinearly related both for probability and for money. (33 ref.) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
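A worked sketch of the duplex-bet model summarized above: each component's worth is a subjective weight times a subjective value of the money, and the two components add. The weights and values below are hypothetical, not the scaled values reported in the study.

```python
def duplex_bet_worth(w_win, value_win, w_lose, value_lose):
    """Duplex bet: total worth = (weight of winning x subjective gain)
    + (weight of losing x subjective loss), with the loss value negative."""
    return w_win * value_win + w_lose * value_lose

# Hypothetical subjective weights and values (not taken from the experiment).
print(duplex_bet_worth(w_win=0.4, value_win=10.0, w_lose=0.2, value_lose=-5.0))  # -> 3.0
```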
Article
Full-text available
Notes that the subjective concept of randomness is used in many areas of psychological research to explain a variety of experimental results. 1 method of studying randomness is to have Ss generate random series. Few results of experiments using this method, however, lend themselves to comparison and synthesis because investigators employ such a variety of experimental conditions and definitions of mathematical randomness. (24 ref.) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Clinical psychologists, physicians, and other professionals are typically called upon to combine cues to arrive at some diagnostic or prognostic decision. Mathematical representations of such clinical judges can often be constructed to capture critical aspects of their judgmental strategies. An analysis of the characteristics of such models permits a specification of the conditions under which the model itself will be a more valid predictor than will the man from whom it was derived. To ascertain whether such conditions are met in natural clinical decision making, data were reanalyzed from P. E. Meehl's (see 34:3) study of the judgments of 29 clinical psychologists attempting to differentiate psychotic from neurotic patients on the basis of their MMPI profiles. Results of these analyses indicate that for this diagnostic task models of the men are generally more valid than the men themselves. Moreover, the finding occurred even when the models were constructed on a small set of cases, and then man and model competed on a completely new set. (29 ref.) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
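The "model of the man" comparison described above can be sketched on simulated data: regress a judge's ratings on the cues, then compare the validity of the judge and of the judge's linear model against the criterion. Everything below is an illustrative assumption, not a reanalysis of Meehl's MMPI data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated placeholders: cues (e.g., profile scales), a criterion, and a judge whose
# ratings follow a linear policy plus random inconsistency.
n, k = 200, 4
cues = rng.normal(size=(n, k))
criterion = cues @ np.array([0.6, 0.3, 0.1, 0.0]) + rng.normal(scale=1.0, size=n)
judge = cues @ np.array([0.5, 0.4, 0.2, 0.1]) + rng.normal(scale=1.5, size=n)

# "Paramorphic" model of the judge: least-squares fit of the judge's ratings on the cues.
weights, *_ = np.linalg.lstsq(cues, judge, rcond=None)
model_of_judge = cues @ weights

# The model usually beats the judge because it preserves the policy but drops the noise.
print("judge validity:", round(np.corrcoef(judge, criterion)[0, 1], 2))
print("model validity:", round(np.corrcoef(model_of_judge, criterion)[0, 1], 2))
```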
Article
Full-text available
D. Kahneman and A. Tversky (Cognitive Psychology, 1972, 3, 430–454) claimed that “the notion that sampling variance decreases in proportion to sample size is apparently not part of man's repertoire of intuitions.” This study presents a series of experiments showing that it is possible to elicit judgments indicating that perceived sample accuracy increases with sample size. However, these judgments seem to reflect sensitivity to sample-to-population ratio rather than absolute sample size. In fact, people may trade sample size for sample-to-population ratio, even when this actually decreases expected sample accuracy. “The widely held belief that the accuracy of a sample is connected with its relative size to the universe is mistaken. A sample smaller than 1%, taken from one universe, can be much more reliable than one comprising 10% of another. To determine with equal accuracy the average age of the population of New York City and of Peoria, Illinois, will require samples of equal size (variances of population being equal)” (Zeisel, 1960).
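The statistical point behind the Zeisel quotation can be made explicit with the standard error of a sample mean drawn without replacement from a finite population; the finite population correction shows that absolute sample size, not the sample-to-population ratio, does nearly all the work. The numbers below are illustrative assumptions.

```python
import math

def standard_error(sigma, n, N):
    """Standard error of a sample mean drawn without replacement from a finite
    population of size N, including the finite population correction."""
    return (sigma / math.sqrt(n)) * math.sqrt((N - n) / (N - 1))

sigma = 15.0  # hypothetical population standard deviation of age

# A sample that is well under 1% of a large city can beat a 10% sample of a small one,
# because accuracy is driven by absolute n rather than by the n/N ratio.
print(standard_error(sigma, n=10_000, N=8_000_000))  # ~0.15 (sample is 0.125% of population)
print(standard_error(sigma, n=1_000, N=10_000))      # ~0.45 (sample is 10% of population)
```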
Book
I / The Probability Framework. II / Classical Statistical Theory. III / R. A. Fisher: Likelihood and Fiducial Inference. IV / Decision Theory. V / Subjective and Logical Approaches. VI / Comparison of Approaches. VII / The Language: Syntax. VIII / Rational Corpora. IX / Randomness. X / Probability. XI / Conditional Probability. XII / Interpretations of Probability. XIII / Bayesian Inference. XIV / The Fiducial Argument. XV / Confidence Methods. XVI / Epistemological Considerations. Appendix / The Mathematical Background.
Article
Article
According to Ian Hacking, Francis Bacon had “no concern with probability” and “does not aim at inference under uncertainty.” I believe this to be an important mistake, though such mistakes are rare in Hacking’s fascinating book. In fact Bacon, and later writers influenced by him, were very much concerned with probabilities, though not with probabilities structured in accordance with the mathematical calculus of chance. I shall call the latter “Pascalian probabilities,” in tribute to one of the great mathematical pioneers in this area; and my object will be to demonstrate not only Bacon’s own concern with a non-Pascalian probability, but also the existence of a long line of philosophical or methodological reflections about such a probability, stretching at least from the seventeenth into the nineteenth century.
Article
Recent studies of deductive reasoning are reviewed with respect to three questions: (i) Do people reason logically? (ii) Is reasoning introspectible? (iii) Is reasoning sequential? It is argued that the evidence of reasoning experiments suggests a negative answer to all three questions. This conclusion is interesting, since the last two questions at least might be answered affirmatively by common sense, and affirmative answers would be more consistent with the assumptions of many psychologists in related fields. The question is raised, however, as to whether experimental studies have good external validity for the measurement of ‘reasoning’ as we would generally understand the term.
Article
Advanced undergraduate science majors attempted for approximately 10 h each to discover the laws governing a dynamic system. The system included 27 fixed objects, some of which influenced the direction of a moving particle. At a given time, any one screen of a nine-screen matrix could be observed on a plasma display screen. Confirmatory strategies were the rule, even though half the subjects had been carefully instructed in strong inference. Falsification was counterproductive for some subjects. It seems that a firm base of inductive generalizations, supported by confirmatory research, is a prerequisite to useful implementation of a falsification strategy.
Article
According to the normative theory of prediction, prior probabilities (base rates), which summarize what we know before receiving any specific evidence, should remain relevant even after such evidence is obtained. In the present study, subjects were asked to estimate the probability that one of two states was true on the basis of (a) information about the prior probabilities of the states and (b) information specific to the case at hand and known to be accurate with probability p. Subjects' responses were determined predominantly by the specific evidence; the prior probabilities were neglected, causing the judgments to deviate markedly from the normative response. Theoretical and practical implications of these results are discussed.
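The normative response the subjects deviated from follows from Bayes' theorem: a prior probability combined with evidence known to be accurate with probability p. The numbers below are illustrative, not the stimuli used in the study.

```python
def posterior(prior, p_accurate):
    """P(state A | evidence says A) for two mutually exclusive states, when the
    evidence is accurate with probability p_accurate."""
    return (prior * p_accurate) / (prior * p_accurate + (1.0 - prior) * (1.0 - p_accurate))

# Illustrative values: a low prior should temper even fairly accurate evidence.
print(posterior(prior=0.15, p_accurate=0.80))  # ~0.41, well below 0.80
# Neglecting the base rate amounts to answering p_accurate (here 0.80) regardless of the prior.
```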
Article
This article considers the implications of recent research on judgmental processes for the assessment of subjective probability distributions. It is argued that since man is a selective, sequential information processing system with limited capacity, he is ill-suited for assessing probability distributions. Various studies attesting to man's difficulties in acting as an “intuitive statistician” are summarized in support of this contention. The importance of task characteristics on judgmental performance is also emphasized. A critical survey of the probability assessment literature is provided and organized around five topics: (1) the “meaningfulness” of probability assessments; (2) methods of eliciting distributions; (3) feedback and evaluation of assessors; (4) differential ability of groups of assessors; and (5) the problems of eliciting a single distribution from a group of assessors. Conclusions from the analysis with respect to future work include the need to capitalize on cognitive simplification mechanisms; making assessors aware of both human limitations and the effects of task characteristics; and emphasizing feedback concerning the nature of the task at hand.
Article
This study is concerned with the effects of prior experience on a deceptive reasoning problem. In the first experiment the subjects (students) were presented with the problem after they had experienced its logical structure. This experience was, on the whole, ineffective in allowing subsequent insight to be gained into the problem. In the second experiment the problem was presented in “thematic” form to one group, and in abstract form to the other group. Ten out of 16 subjects solved it in the thematic group, as opposed to 2 out of 16 in the abstract group. Three hypotheses are proposed to account for this result.
Article
The book was planned and written as a single, sustained argument. But earlier versions of a few parts of it have appeared separately. The object of this book is both to establish the existence of the paradoxes, and also to describe a non-Pascalian concept of probability in terms of which one can analyse the structure of forensic proof without giving rise to such typical signs of theoretical misfit. Neither the complementational principle for negation nor the multiplicative principle for conjunction applies to the central core of any forensic proof in the Anglo-American legal system. There are four parts included in this book. Accordingly, these parts have been written in such a way that they may be read in different orders by different kinds of reader.
Article
This paper explores a heuristic, representativeness, according to which the subjective probability of an event, or a sample, is determined by the degree to which it: (i) is similar in essential characteristics to its parent population; and (ii) reflects the salient features of the process by which it is generated. This heuristic is explicated in a series of empirical examples demonstrating predictable and systematic errors in the evaluation of uncertain events. In particular, since sample size does not represent any property of the population, it is expected to have little or no effect on judgment of likelihood. This prediction is confirmed in studies showing that subjective sampling distributions and posterior probability judgments are determined by the most salient characteristic of the sample (e.g., proportion, mean) without regard to the size of the sample. The present heuristic approach is contrasted with the normative (Bayesian) approach to the analysis of the judgment of uncertainty.
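The normative contrast drawn in this abstract, that sampling distributions tighten as sample size grows, can be checked with an exact binomial calculation. The coin-flip framing and thresholds below are illustrative assumptions, not the authors' stimuli.

```python
import math

def prob_proportion_at_least(threshold, n, p=0.5):
    """Exact binomial probability that the sample proportion is >= threshold."""
    k_min = math.ceil(threshold * n)
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# A deviation of 60% or more "heads" is far more likely in a small sample than a large one,
# although both samples look equally "representative" of a fair coin to the heuristic.
print(prob_proportion_at_least(0.6, n=10))   # ~0.38
print(prob_proportion_at_least(0.6, n=100))  # ~0.03
```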
Article
Numerous authors (e.g., Popper, 1959) argue that scientists should try to falsify rather than confirm theories. However, recent empirical work (Wason and Johnson-Laird, 1972) suggests the existence of a confirmation bias, at least on abstract problems. Using a more realistic, computer controlled environment modeled after a real research setting, subjects in this study first formulated hypotheses about the laws governing events occurring in the environment. They then chose between pairs of environments in which they could: (1) make observations which would probably confirm these hypotheses, or (2) test alternative hypotheses. Strong evidence for a confirmation bias involving failure to choose environments allowing tests of alternative hypotheses was found. However, when subjects did obtain explicit falsifying information, they used this information to reject incorrect hypotheses.
Article
This investigation examines the extent to which intelligent young adults seek (i) confirming evidence alone (enumerative induction) or (ii) confirming and disconfirming evidence (eliminative induction), in order to draw conclusions in a simple conceptual task. The experiment is designed so that use of confirming evidence alone will almost certainly lead to erroneous conclusions because (i) the correct concept is entailed by many more obvious ones, and (ii) the universe of possible instances (numbers) is infinite. Six out of 29 subjects reached the correct conclusion without previous incorrect ones, 13 reached one incorrect conclusion, nine reached two or more incorrect conclusions, and one reached no conclusion. The results showed that those subjects who reached two or more incorrect conclusions were unable, or unwilling, to test their hypotheses. The implications are discussed in relation to scientific thinking.
Article
A primed response time task was used to test the hypothesis that judgments in risky decision making involve an anchoring and adjustment procedure in which the amount to be won in a gamble serves as the anchor and is reduced in accord with the probability of winning. As predicted, the data revealed that priming with the amount to be won allowed faster choices between gambles and sure things than priming with the probability of winning. The experiment is discussed in terms of serial fractionation, which is a form of anchoring and adjustment that is equivalent to analog multiplication.
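A rough sketch of the anchoring-and-adjustment account summarized above, under the assumption that "serial fractionation" means starting from the amount to be won and repeatedly cutting it down so that the end result approximates the product of amount and probability. The function name, step count, and values are hypothetical.

```python
def serially_fractionated_value(amount, p_win, steps=4):
    """Anchor on the amount to be won, then reduce it in equal multiplicative steps;
    after all steps the result equals amount * p_win (an analog form of multiplication)."""
    value = amount
    per_step_factor = p_win ** (1.0 / steps)
    for _ in range(steps):
        value *= per_step_factor
    return value

print(serially_fractionated_value(amount=100.0, p_win=0.25))  # -> 25.0, i.e., 100 * 0.25
```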
Article
Kahneman and Tversky's critique of Cohen's position on adults' probability reasoning is not valid. If they think Baconian logic is normatively unsound, the onus is on them to explain why. It is valid and useful because nature itself is full of causal processes. (Author/RD)
Article
The problem of the study was to discover the psychological factors operating toward the acceptance of invalid conclusions in a syllogism test. "Three such factors are suggested: the ambiguity of the word some, which is used in a distributive sense in logic ('at least some') and very often in a partitive sense in ordinary speech ('only some'); 'caution' or wariness, favoring the acceptance of weak and guarded rather than of strong conclusions; and 'atmosphere,' the global impression or 'feel' of the premises, which is affirmative or negative, universal or particular. Examination of the data from two experiments indicates that nearly all the acceptances of invalid conclusions can possibly be explained by these three hypothetical factors." (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Discusses the possibility of intellectual fallacy in experimenters' interpretations of their data. Arguments made by D. Kahneman and A. Tversky (see record 1974-02325-001) and A. Tversky and D. Kahneman (1974), that intuitive judgments of probability are biased toward predicting that outcomes will be similar to evidence, are discussed. The structure and usage of Baconian probability as opposed to Pascalian modes of reasoning are compared and offered in the reinterpretation of the Tversky-Kahneman argument. (20 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Argues that J. Cohen's (see record 1981-04405-001) critique of the present authors' work (1979) is unfounded, and that his Baconian formalism has little normative and descriptive appeal. (2 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Ss estimated probabilities of events and of the unions of those events in 3 different tasks. Probability estimates for the unions were approximately equal to the sum of the estimates for the component events, a relation demanded by probability theory. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Fallacious reasoning has led transformationalists to conclude that natural language cannot be produced by a finite state device. An alternate argument is proposed, based on a distinction between two types of generative mechanisms: iteration and recursion to a depth of one. A device which employs these mechanisms is a finite state device; and this paper contends that such a device is adequate to describe the data of natural language, including English embedded relative clauses. The proposed solution also handles multiple-branching constructions such as are found in coördination, and describes the intonation pattern of sentences containing strings of relative clauses.
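A toy sketch of the two generative mechanisms named above, using an assumed miniature vocabulary: iteration (a loop stringing together right-branching relative clauses) and recursion limited to a depth of one (a single embedded clause), both of which stay within the power of a finite state device.

```python
def right_branching(n_clauses):
    """Iteration: append n right-branching relative clauses with a simple loop."""
    sentence = "the dog chased the cat"
    for _ in range(n_clauses):
        sentence += " that chased the rat"
    return sentence

def embedded_depth_one(subject="the cat", relative_clause="that the dog chased"):
    """Recursion to a depth of one: exactly one embedded relative clause, never nested deeper."""
    return f"{subject} {relative_clause} ran away"

print(right_branching(3))
print(embedded_depth_one())
```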