Article

When is a Cause the “Same”? Incoherent Generalization across Contexts


Abstract

A theory or model of cause such as Cheng's power (p) allows people to predict the effectiveness of a cause in a different causal context from the one in which they observed its actions. Liljeholm and Cheng demonstrated that people could detect differences in the effectiveness of the cause when causal power varied across contexts of different outcome base rates, but that they did not detect similar changes when only the cause-outcome contingency, ∆p, but not power, varied. However, their procedure allowed participants to simplify the causal scenarios and consider only a subsample of observations with a base rate of zero. This confounds p, ∆p, and the probability of an outcome (O) given a cause (C), P(O|C). Furthermore, the contingencies that they used confounded p and P(O|C) in the overall sample. Following the work of Liljeholm and Cheng, we examined whether causal induction in a wider range of situations follows the principles suggested by Cheng. Experiments 1a and 1b compared the procedure used by Liljeholm and Cheng with one that did not allow the sample of observations to be simplified. Experiments 2a and 2b compared the same two procedures using contingencies that controlled for P(O|C). The results indicated that, if the possibility of converting all contexts to a zero base rate situation was avoided, people were sensitive to changes in P(O|C), p, and ∆p when each of these was varied. This is inconsistent with Liljeholm and Cheng's conclusion that people detect only changes in p. These results question the idea that people naturally extract the metric or model of cause from their observation of stochastic events and then, reasonably exclusively, use this theory of a causal mechanism, or for that matter any simple normative theory, to generalize their experience to alternative contexts.
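The three statistics the abstract contrasts can be made concrete with a small sketch. The cell frequencies below are invented for illustration; the power formula is Cheng's form for a generative cause, p = ∆p / (1 - P(O|¬C)).

```python
# The three statistics contrasted in the abstract, for a generative cause.
# Cells of the standard 2x2 contingency table (frequencies are illustrative):
#   a = cause present, outcome present    b = cause present, outcome absent
#   c = cause absent, outcome present     d = cause absent, outcome absent
def causal_statistics(a, b, c, d):
    p_o_given_c = a / (a + b)            # P(O|C)
    p_o_given_not_c = c / (c + d)        # P(O|~C), the outcome base rate
    delta_p = p_o_given_c - p_o_given_not_c
    power = delta_p / (1 - p_o_given_not_c)  # Cheng's generative power p
    return p_o_given_c, delta_p, power

# Two contexts with different base rates can hold power constant (0.75 in
# both cases here) while P(O|C) and delta_p vary -- the confound at issue.
print(causal_statistics(18, 6, 0, 24))   # base rate 0
print(causal_statistics(21, 3, 12, 12))  # base rate 0.5
```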


... To our knowledge, no previous studies have examined such direct transfer questions. There are a few studies on causal generalization (Barberia et al., 2014; Liljeholm & Cheng, 2007). These studies presented participants with complete information about two hypothetical medical intervention studies in different contexts, and then asked them to judge whether a candidate cause interacted with background causes that varied from one context to another or whether the candidate cause influenced the patients in each context in the same way. ...
... She claimed that the causal power approach is an innate and automatic method of causal induction that operates under normal circumstances with a trial-by-trial presentation format of causal instances, whereby people can automatically base their causal ratings on unobservable causal power. The normalization ensures that p is independent of the ΔP context, such that it can be generalized across contexts (Barberia et al., 2014; Cheng, 1997; Liljeholm & Cheng, 2007). Thus, the power PC theory predicts that people will generalize the capacity of a cause in accordance with its causal power. ...
... In tests of the covariation and causal power approaches to causal induction, many previous studies showed that in most situations people make causal ratings in accordance with the ΔP rule rather than causal power (Barberia et al., 2014; Cheng & Holyoak, 1995; Collins & Shanks, 2006; Danks, 2003; Jenkins & Ward, 1965; Lober & Shanks, 2000; Perales & Shanks, 2008; Rescorla & Wagner, 1972; Spellman, 1996). Moreover, some causal learning studies using a trial-by-trial presentation format found a P(E|C) effect, meaning that causal ratings increase with P(E|C) (Barberia et al., 2014; Collins & Shanks, 2006; Perales & Shanks, 2008). ...
Article
Full-text available
The covariation and causal power accounts of causal induction make different predictions for what is transferred in causal generalization across contexts. Two experiments tested these predictions using hypothetical scenarios in which the effect of an intervention was evaluated between (Experiment 1) or within (Experiment 2) groups. Each experiment contained a manipulation of ΔP, power, and their combination. Both experiments found that causal transfer was determined by ΔP rather than causal power. The overall transfer pattern supports the ΔP transfer account rather than the other transfer accounts. Causal transfers based on ΔP are irrational, violating the coherence criterion of the causal power framework. The ΔP transfer is consistent with previous findings that ΔP is a main non-normative mental measure of causal strength in causal induction. Keywords: causal generalization; causal transfer; ΔP; causal power
Article
Full-text available
Many theories of causal learning and causal induction differ in their assumptions about how people combine the causal impact of several causes presented in compound. Some theories propose that when several causes are present, their joint causal impact is equal to the linear sum of the individual impact of each cause. However, some recent theories propose that the causal impact of several causes needs to be combined by means of a noisy-OR integration rule. In other words, the probability of the effect given several causes would be equal to the sum of the probability of the effect given each cause in isolation minus the overlap between those probabilities. In the present series of experiments, participants were given information about the causal impact of several causes and then they were asked what compounds of those causes they would prefer to use if they wanted to produce the effect. The results of these experiments suggest that participants actually use a variety of strategies, including not only the linear and the noisy-OR integration rules, but also averaging the impact of several causes.
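A minimal sketch of the three integration rules the abstract compares, using invented probabilities for illustration:

```python
# Combining the probability of an effect given each of two causes in
# isolation, under the three rules discussed in the abstract.
def linear_sum(p1, p2):
    return min(p1 + p2, 1.0)       # linear sum, capped at 1

def noisy_or(p1, p2):
    return p1 + p2 - p1 * p2       # sum minus the overlap of the probabilities

def average(p1, p2):
    return (p1 + p2) / 2

# Illustrative values: the three rules diverge for the same pair of causes.
p1, p2 = 0.5, 0.25
print(linear_sum(p1, p2))  # 0.75
print(noisy_or(p1, p2))    # 0.625
print(average(p1, p2))     # 0.375
```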
Article
Full-text available
This study showed that accuracy of the estimated relationship between a fictitious symptom and a disease depends on the interaction between the frequency of judgment and the last trial type. This effect appeared both in positive and zero contingencies (Experiment 1), and judgments were less accurate as frequency increased (Experiment 2). The effect can be explained neither by interference of previous judgments or memory demands (Experiment 3), nor by the perceptual characteristics of the stimuli (Experiments 4 and 5), and instructions intended to alter processing strategies do not produce any reliable effect. The interaction between frequency and trial type on covariation judgment is not predicted by any model (either statistical or associative) currently used to explain performance in covariation detection. The authors propose a belief-revision model to explain this effect as an important response mode variable on covariation learning.
Article
Full-text available
In causal reasoning, the observation of an event supports different inferences than an intervention that generates the event. An intervention breaks the connection between the manipulated event and its normal causes. Therefore, in contrast to an observation, an intervention prevents diagnostic inferences about the causes of the event. This chapter shows how causal Bayes nets can be used to model observations and interventions. Empirical findings are then presented demonstrating that people are highly sensitive to this distinction: (i) Their inferences conform to the distinction when reasoning counterfactually; (ii) they derive different predictions for novel observations and previously unobserved interventions when the underlying causal model entails different outcomes; and (iii) their decisions differ depending on whether an evidential statistical relation between an action and an outcome will be sustained if the action is deliberately chosen (i.e. intervened on). Other non-causal theories of learning and reasoning are not able to explain these findings.
Article
Full-text available
In two causal induction experiments subjects rated the importance of pairs of candidate causes in the production of a target effect; one candidate was present on every trial (constant cause), whereas the other was present on only some trials (variable cause). The design of both experiments consisted of a factorial combination of two values of the variable cause's covariation with the effect and three levels of the base rate of the effect. Judgements of the constant cause were inversely proportional to the level of covariation of the variable cause but were proportional to the base rate of the effect. The judgements were consistent with the predictions derived from the Rescorla-Wagner (1972) model of associative learning and with the predictions of the causal power theory of the probabilistic contrast model (Cheng, 1997) or "power PC theory". However, judgements of the importance of the variable candidate cause were proportional to the base rate of the effect, a phenomenon that is in some cases anticipated by the power PC theory. An alternative associative model, Pearce's (1987) similarity-based generalization model, predicts the influence of the base rate of the effect on the estimates of both the constant and the variable cause.
Article
Full-text available
This paper outlines a theory and computer implementation of causal meanings and reasoning. The meanings depend on possibilities, and there are four weak causal relations: A causes B, A prevents B, A allows B, and A allows not-B, and two stronger relations of cause and prevention. Thus, A causes B corresponds to three possibilities: A and B, not-A and B, and not-A and not-B, with the temporal constraint that B does not precede A; and the stronger relation conveys only the first and last of these possibilities. Individuals represent these relations in mental models of what is true in the various possibilities. The theory predicts a number of phenomena, and, contrary to many accounts, it implies that the meaning of causation is not probabilistic, differs from the meaning of enabling conditions, and does not depend on causal powers or mechanisms. The theory also implies that causal deductions do not depend on schemas or rules.
Article
Full-text available
Because causal relations are neither observable nor deducible, they must be induced from observable events. The 2 dominant approaches to the psychology of causal induction, the covariation approach and the causal power approach, are each crippled by fundamental problems. This article proposes an integration of these approaches that overcomes these problems. The proposal is that reasoners innately treat the relation between covariation (a function defined in terms of observable events) and causal power (an unobservable entity) as that between scientists' law or model and their theory explaining the model. This solution is formalized in the power PC theory, a causal power theory of the probabilistic contrast model (P. W. Cheng & L. R. Novick, 1990). The article reviews diverse old and new empirical tests discriminating this theory from previous models, none of which is justified by a theory. The results uniquely support the power PC theory.
Article
Full-text available
We agree with Jones & Love (J&L) that much of Bayesian modeling has taken a fundamentalist approach to cognition; but we do not believe in the potential of Bayesianism to provide insights into psychological processes. We discuss the advantages of associative explanations over Bayesian approaches to causal induction, and argue that Bayesian models have added little to our understanding of human causal reasoning.
Article
Full-text available
In two experiments, we studied the strategies that people use to discover causal relationships. According to inferential approaches to causal discovery, if people attempt to discover the power of a cause, then they should naturally select the most informative and unambiguous context. For generative causes this would be a context with a low base rate of effects generated by other causes and for preventive causes a context with a high base rate. In the following experiments, we used probabilistic and/or deterministic target causes and contexts. In each experiment, participants observed several contexts in which the effect occurred with different probabilities. After this training, the participants were presented with different target causes whose causal status was unknown. In order to discover the influence of each cause, participants were allowed, on each trial, to choose the context in which the cause would be tested. As expected by inferential theories, the participants preferred to test generative causes in low base rate contexts and preventative causes in high base rate contexts. The participants, however, persisted in choosing the less informative contexts on a substantial minority of trials long after they had discovered the power of the cause. We discuss the matching law from operant conditioning as an alternative explanation of the findings.
Article
Full-text available
The authors investigated whether confidence in causal judgments varies with virtual sample size--the frequency of cases in which the outcome is (a) absent before the introduction of a generative cause or (b) present before the introduction of a preventive cause. Participants were asked to evaluate the influence of various candidate causes on an outcome as well as to rate their confidence in those judgments. They were presented with information on the relative frequencies of the outcome given the presence and absence of various candidate causes. These relative frequencies, sample size, and the direction of the causal influence (generative vs. preventive) were manipulated. It was found that both virtual and actual sample size affected confidence. Further, confidence affected estimates of strength, but confidence and strength are dissociable. The results enable a consistent explanation of the puzzling previous finding that observed causal-strength ratings often deviated from the predictions of both of the 2 dominant models of causal strength.
Article
Full-text available
We analyze how subjects make causal judgments based on contingency information in two paradigms. In the discrete paradigm, subjects are given specific information about the frequency a with which a purported cause occurs with the effect; the frequency b with which it occurs without the effect; the frequency c with which the effect occurs when the cause is absent; and the frequency d with which both cause and effect are absent. Subjects respond to P1 = a/(a+b) and P2 = c/(c+d). Some subjects' ratings are just a function of P1, while others are a function of ΔP = P1 - P2. Subjects' postexperiment reports are accurate reflections of which model they use. Combining these two types of subjects results in data well fit by the weighted ΔP model (Allan, 1993). In the continuous paradigm, subjects control the purported causes (by clicking a mouse) and observe whether an effect occurs. Because causes and effects occur continuously in time, it is not possible to explicitly pair causes and effects. Rather, subjects report that they are responding to the rate at which the effects occur when they click versus when they do not click. Their ratings are a function of rates and not probabilities. In general, we argue that subjects' causal ratings are judgments of the magnitude of perceptually salient variables in the experiment.
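The two quantities the discrete-paradigm subjects respond to can be sketched directly from the four cell frequencies; the tables below are invented for illustration.

```python
# P1 and P2 from the four cell frequencies a, b, c, d of the discrete
# paradigm (frequencies are illustrative, not from the study).
def p1_p2(a, b, c, d):
    p1 = a / (a + b)   # P(effect | cause present)
    p2 = c / (c + d)   # P(effect | cause absent)
    return p1, p2

# A subject responding only to P1 rates these two tables identically;
# a subject responding to delta-P = P1 - P2 distinguishes them.
print(p1_p2(16, 4, 4, 16))   # P1 = 0.8, P2 = 0.2
print(p1_p2(16, 4, 16, 4))   # P1 = 0.8, P2 = 0.8
```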
Article
Full-text available
The power PC theory (Cheng, 1997) is a normative account of causal inference, which predicts that causal judgements are based on the power p of a potential cause, where p is the cause-effect contingency normalized by the base rate of the effect. In three experiments we demonstrate that both cause-effect contingency and effect base-rate independently affect estimates in causal learning tasks. In Experiment 1, causal strength judgements were directly related to power p in a task in which the effect base-rate was manipulated across two positive and two negative contingency conditions. In Experiments 2 and 3 contingency manipulations affected causal estimates in several situations in which power p was held constant, contrary to the power PC theory's predictions. This latter effect cannot be explained by participants' conflation of reliability and causal strength, as Experiment 3 demonstrated independence of causal judgements and confidence. From a descriptive point of view, the data are compatible with Pearce's (1987) model, as well as with several other judgement rules, but not with the Rescorla-Wagner (Rescorla & Wagner, 1972) or power PC models.
Article
Full-text available
How humans infer causation from covariation has been the subject of a vigorous debate, most recently between the computational causal power account (P. W. Cheng, 1997) and associative learning theorists (e.g., K. Lober & D. R. Shanks, 2000). Whereas most researchers in the subject area agree that causal power as computed by the power PC theory offers a normative account of the inductive process, Lober and Shanks, among others, have questioned the empirical validity of the theory. This article offers a full report and additional analyses of the original study featured in Lober and Shanks's critique (M. J. Buehner & P. W. Cheng, 1997) and reports tests of Lober and Shanks's and other explanations of the pattern of causal judgments. Deviations from normativity, including the outcome-density bias, were found to be misperceptions of the input or other artifacts of the experimental procedures rather than inherent to the process of causal induction.
Article
Full-text available
The discovery of conjunctive causes--factors that act in concert to produce or prevent an effect--has been explained by purely covariational theories. Such theories assume that concomitant variations in observable events directly license causal inferences, without postulating the existence of unobservable causal relations. This article discusses problems with these theories, proposes a causal-power theory that overcomes the problems, and reports empirical evidence favoring the new theory. Unlike earlier models, the new theory derives (a) the conditions under which covariation implies conjunctive causation and (b) functions relating observable events to unobservable conjunctive causal strength. This psychological theory, which concerns simple cases involving 2 binary candidate causes and a binary effect, raises questions about normative statistics for testing causal hypotheses regarding categorical data resulting from discrete variables.
Article
Full-text available
How do people learn causal structure? In 2 studies, the authors investigated the interplay between temporal-order, intervention, and covariational cues. In Study 1, temporal order overrode covariation information, leading to spurious causal inferences when the temporal cues were misleading. In Study 2, both temporal order and intervention contributed to accurate causal inference well beyond that achievable through covariational data alone. Together, the studies show that people use both temporal-order and interventional cues to infer causal structure and that these cues dominate the available statistical information. A hypothesis-driven account of learning is endorsed, whereby people use cues such as temporal order to generate initial models and then test these models against the incoming covariational data.
Article
Full-text available
Causal judgment is assumed to play a central role in prediction, control, and explanation. Here, we consider the function or functions that map contingency information concerning the relationship between a single cue and a single outcome onto causal judgments. We evaluate normative accounts of causal induction and report the findings of an extensive meta-analysis in which we used a cross-validation model-fitting method and carried out a qualitative analysis of experimental trends in order to compare a number of alternative models. The best model to emerge from this competition is one in which judgments are based on the difference between the amount of confirming and disconfirming evidence. A rational justification for the use of this model is proposed.
Article
In three experiments, participants made causal judgements from summary presentations of information about occurrences and non-occurrences of an effect in the presence and absence of possible causes. Participants' judgements could best be accounted for by the hypothesis that some were tending to use cell information in idiosyncratic ways: types of contingency information normatively regarded as confirmatory were sometimes treated as disconfirmatory and vice versa. In all three experiments, participants' judgements were better predicted by a model based on this evidence than by the probabilistic contrast model (Cheng and Novick, 1992) or the Power PC theory (Cheng, 1997).
Article
Shanks (1985) has used a video game to investigate how subjects estimate the effect of their behaviour in a task defined by a 2×2 contingency table. The subjects were able to distinguish positive and negative contingencies from zero contingencies. In addition, they showed a learning curve and a bias to rate zero contingencies with a high outcome density higher than low-density zero contingencies. He interpreted these data as being consistent with associative models derived from animal learning. In Experiment 1 we replicated these results using a task and instructions similar to his. In a second experiment we showed that the subjects' tendency to overestimate high-density zero contingencies did not arise because the “game” was so difficult that it interfered with processing the events. In this experiment subjects were given tables of the outcome frequencies that had been determined by the earlier subjects. These subjects were, if anything, less accurate in rating the zero contingencies. We point out several logical problems with Shanks's initial task. The task did not represent a true 2×2 contingency, and aspects of it were physically impossible. In Experiment 3 we modified the task to represent a true 2×2 contingency. Using this task, we found a similar pattern of results, except that there was no evidence of the learning curve predicted by the associative models. We conclude that there is little in our data to rule out a “rule-based” analysis of contingency judgements.
Article
In experimental design, a tacit principle is that to test whether a candidate cause c (i.e., a manipulation) prevents an effect e, e must occur at least some of the time without the introduction of c. This principle is the preventive analogue of the explicit principle of avoiding a ceiling effect in tests of whether c produces e. Psychological models of causal inference that adopt either the covariation approach or the power approach, among their other problems, fail to explain these principles. The present article reports an experiment that demonstrates the operation of these principles in untutored reasoning. The results support an explanation of these principles according to the power PC theory, a theory that integrates the previous approaches to overcome the problems that cripple each.
Article
Examines the varied measures of contingency that have appeared in the psychological judgment literature concerning binary variables. It is argued that accurate judgments about related variables should not be used to infer that the judgments are based on appropriate information.
Article
Information about the structure of a causal system can come in the form of observational data—random samples of the system’s autonomous behavior—or interventional data—samples conditioned on the particular values of one or more variables that have been experimentally manipulated. Here we study people’s ability to infer causal structure from both observation and intervention, and to choose informative interventions on the basis of observational data. In three causal inference tasks, participants were to some degree capable of distinguishing between competing causal hypotheses on the basis of purely observational data. Performance improved substantially when participants were allowed to observe the effects of interventions that they performed on the systems. We develop computational models of how people infer causal structure from data and how they plan intervention experiments, based on the representational framework of causal graphical models and the inferential principles of optimal Bayesian decision-making and maximizing expected information gain. These analyses suggest that people can make rational causal inferences, subject to psychologically reasonable representational assumptions and computationally reasonable processing constraints.
Article
This article described three heuristics that are employed in making judgements under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgements and decisions in situations of uncertainty.
Article
Mitchell et al.'s claim, that their propositional theory is a single-process theory, is illusory because they relegate some learning to a secondary memory process. This renders the single-process theory untestable. The propositional account is not a process theory of learning, but rather, a heuristic that has led to interesting research.
Article
It has been proposed that causal power (defined as the probability with which a candidate cause would produce an effect in the absence of any other background causes) can be intuitively computed from cause-effect covariation information. Estimation of power is assumed to require a special type of counterfactual probe question, worded to remove potential sources of ambiguity. The present study analyzes the adequacy of such questions to evoke normative causal power estimation. The authors report that judgments to counterfactual probes do not conform to causal power and that they strongly depend on both the probe question wording and the way that covariation information is presented. The data are parsimoniously accounted for by an alternative model of causal judgment, the Evidence Integration rule.
Article
The authors empirically evaluate P. W. Cheng's (1997) power PC theory of causal induction. They reanalyze some published data taken to support the theory and show instead that the data are at variance with it. Then, they report 6 experiments in which participants evaluated the causal relationship between a fictitious chemical and DNA mutations. The power PC theory assumes that participants' estimates are based on the causal power p of a potential cause, where p is the contingency between the cause and the effect normalized by the base rate of the effect. Three of the experiments used a procedure in which causal information was presented trial by trial. For these experiments, the power PC theory was contrasted with the predictions of the probabilistic contrast model and the Rescorla-Wagner theory. For the remaining 3 experiments, a summary presentation format was employed to which only the probabilistic contrast model and the power PC theory are applicable. The power PC theory was unequivocally contradicted by the results obtained in these experiments, whereas the other 2 theories proved to be satisfactory.
Article
Contingency information is information about the occurrence or nonoccurrence of a certain effect in the presence or absence of a candidate cause. An objective measure of contingency is the deltaP rule, which involves subtracting the probability of occurrence of an effect when a causal candidate is absent from the probability of occurrence of the effect when the candidate is present. Causal judgements conform closely to deltaP but deviate from it under certain circumstances. Three experiments show that such deviations can be predicted by a model of causal judgement that has two components: a rule of evidence, that causal judgement is a function of the proportion of relevant instances that are judged to be confirmatory for the causal candidate, and a tendency for information about instances in which the candidate is present to have greater effect on judgement than instances in which the candidate is absent. Two experiments demonstrate how this model accounts for some recently published findings. A third experiment shows that it is possible to use the model to predict the occurrence of high causal judgements when the objective contingency is close to zero.
Article
The power PC theory of causal induction (Cheng, 1997) proposes that causal estimates are based on the power p of a potential cause, where p is the contingency between the cause and effect normalized by the base rate of the effect. Previous tests of this theory have concentrated on generative causes that have positive contingencies with their associated outcomes. Here we empirically test this theory in two experiments using preventive causes that have negative contingencies for their outcomes. Contrary to the power PC theory, the results show that causal judgments vary with contingency across conditions of constant power p. This pattern is consistent, however, with several alternative accounts of causal judgment.
Article
3 experiments are reported in which Ss were asked to judge the degree of contingency between responses and outcomes. They were exposed to 60 trials on which a choice between 2 responses was followed by 1 of 2 possible outcomes. Each S judged both contingent and noncontingent problems. Some Ss actually made response choices while others simply viewed the events. Judgments were made by Ss who attempted to produce a single favorable outcome or, on the other hand, to control the occurrence of two neutral outcomes. In all conditions the amount of contingency judged was correlated with the number of successful trials, but was entirely unrelated to the actual degree of contingency. Accuracy of judgment was not improved by pretraining Ss on selected examples, even though it was possible to remove the correlation between judgment and successes by means of an appropriate selection of pretraining problems. The relation between everyday judgments of causal relations and the present experiment is considered.
Article
We present a framework for the rational analysis of elemental causal induction-learning about the existence of a relationship between a single cause and effect-based upon causal graphical models. This framework makes precise the distinction between causal structure and causal strength: the difference between asking whether a causal relationship exists and asking how strong that causal relationship might be. We show that two leading rational models of elemental causal induction, DeltaP and causal power, both estimate causal strength, and we introduce a new rational model, causal support, that assesses causal structure. Causal support predicts several key phenomena of causal induction that cannot be accounted for by other rational models, which we explore through a series of experiments. These phenomena include the complex interaction between DeltaP and the base-rate probability of the effect in the absence of the cause, sample size effects, inferences from incomplete contingency tables, and causal learning from rates. Causal support also provides a better account of a number of existing datasets than either DeltaP or causal power.
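The structure judgment this abstract describes can be summarized by the model's central quantity. In our notation (not quoted from the abstract), causal support compares how well the observed data D are explained by a graph containing a cause-to-effect link (Graph 1) versus one without it (Graph 0):

```latex
\text{support} = \log \frac{P(D \mid \mathrm{Graph}_1)}{P(D \mid \mathrm{Graph}_0)}
```

Each likelihood is obtained by averaging over possible causal strengths, which is why support can diverge from both ΔP and causal power: it asks whether a link exists at all, not how strong it is.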
Article
P. W. Cheng's (1997) power PC theory of causal induction proposes that causal estimates are based on the power (P) of a potential cause, where P is the contingency between the cause and effect normalized by the base rate of the effect. Most previous research using a standard causal probe question has failed to support the predictions of the power PC model, but recently Buehner, Cheng, and Clifford (2003) found that participants responded in terms of causal power when probed with a counterfactual test question, which they argued prompted participants to consider the base rate of the effect. However, Buehner et al. framed their counterfactual question in terms of frequency, a factor that has been demonstrated to decrease base rate neglect in judgements under uncertainty. In the experiment reported here, we sought to disentangle the influence of counterfactual and frequency framing of the probe question to determine which factor is responsible for encouraging responses in terms of causal power.
Article
Two competing psychological approaches to causal learning make different predictions regarding what aspect of perceived causality is generalized across contexts. Two experiments tested these predictions. In one experiment, the task required a judgment regarding the existence of a simple causal relation; in the other, the task required a judgment regarding the existence of an interaction between a candidate cause and unobserved background causes. The task materials did not mention assessments of causal strength. Results indicate that causal power (Cartwright, 1989; Cheng, 1997) is the mental construct that people carry from one context to another.