Kroeber’s “The Superorganic” (1917) stands as the first extreme statement of cultural holism. Some have compared it to Durkheim, the majority to Boas; some have denied any evolutionary message, others read in it a theory of “emergent evolution” arising from his transcendental holism. What was it, exactly? When the essay is understood as part of a trilogy comprising two other articles (one from 1915, the other from 1919), it emerges that his extreme brand of cultural holism was a necessary tool for carrying out a relatively hidden evolutionary agenda. This led me to rethink his evolutionism, to deny that he was a cultural determinist, to understand this part of his anthropology in terms of “epistemological obstacles” (Bachelard 1938), and to show that it reemerges in Appadurai’s understanding of globalization (1996).
In this article, I argue that norms and customs, despite frequently being described as causes of behavior in the social sciences and in ordinary conversation, cannot really cause behavior. Terms like "norms" seem to refer to philosophically disreputable disjunctive properties. More problematically, even if they do not, or even if there can be disjunctive properties after all, I argue that norms and customs still cannot cause behavior. The social sciences would be better off without referring to properties like norms and customs as if they could be causal.
In recent years, there have been multiple instances of misconduct in science, yet no coherent framework exists for characterizing this phenomenon. The thesis of this article is that economic analysis can provide such a framework. Economic analysis leads to two categories of misconduct: replication failure and fraud. Replication failure can be understood as the scientist making optimal use of time in a professional environment where innovation is emphasized rather than replication. Fraud can be depicted as a deliberate gamble under conditions of uncertainty: The scientist takes advantage of the complexity of science and undermines the integrity of science for personal gain or advancement.
Here we propose a new theory for the origins and evolution of human warfare as a complex social phenomenon involving several behavioral traits, including aggression, risk taking, male bonding, ingroup altruism, outgroup xenophobia, dominance and subordination, and territoriality, all of which are encoded in the human genome. Among the family of great apes only chimpanzees and humans engage in war; consequently, warfare emerged in their immediate common ancestor that lived in patrilocal groups who fought one another for females. The reasons for warfare changed when the common ancestor females began to immigrate into the groups of their choice, and again, during the agricultural revolution.
In this article, the author offers a discussion of the evidential role of the Galilean constant in the history of physics. The author argues that measurable constants help theories constrain data. Theories are engines for research, and this helps explain why the Duhem-Quine thesis does not undermine scientific practice. The author connects his argument to a discussion of two famous papers in the history of economic methodology: Milton Friedman's "Methodology of Positive Economics," which appealed to the example of the Galilean law of fall in its argument, and Vernon Smith's "Economics in the Laboratory." While the author offers some criticism of Friedman and Smith, most of the article is a friendly reinterpretation of their insights.
This article seeks to develop the Marxist conception of social structure by incorporating developments within critical realist philosophy. It rejects forms of economic determinism such as the base-superstructure model and those reconstructions—like Cohen’s—that attribute primacy to productive forces in explaining history and society. It argues instead that society is the product of complex, often contradictory combinations of many different structures and mechanisms. They form a structural ensemble, hierarchically arranged, but where each element has its own dynamics and emergent powers. It concludes that society is best understood through critical realist conceptions of stratification, emergence, transformation, and overdetermination.
Terry Pinkard's discussion of explanation in science and history raises some issues important for social science generally, as well as for history. I would like to focus on his analysis of the relationship between explanation and understanding, with the aim of reopening an issue which his treatment appears to have closed. In doing so, I hope to encourage further analysis of the problem of how we ‘understand’. My own discussion of this issue will be brief, moving only a little beyond Pinkard's. Primarily, I hope to show that ‘understanding’ is in fact a problem, and one that cannot merely be viewed as secondary to an analysis of ‘explanation’.
The book under review is critiqued with regard to its adherence to, modification of, and departure from John Maynard Keynes’s position. The review emphasizes the role of “expectation” in Keynes’s work and its role in the book under review. It seeks to develop an interpretation of the “psychology of society” or “structural rationality” in Keynes’s work and contrasts this with the positions of the authors in the book under review. Following this, Keynes’s work is advocated as highly malleable and as a corpus that can illuminate further research and policy prescription.
The common assumption is that if a group comprising moral agents can act intentionally, as a group, then the group itself can also be properly regarded as a moral agent with respect to that action. I argue, however, that this common assumption is the result of a problematic line of reasoning I refer to as “the collective fallacy.” Recognizing the collective fallacy as a fallacy allows us to see that if there are, in fact, irreducibly joint actors, then some of them will lack the full-fledged moral agency of their members. The descriptivist question of whether a group can perform irreducibly joint intentional action need not rise and fall with the normative question of whether a group can be a moral agent.
Collective action is interpreted as a matter of people doing something together, and it is assumed that this involves their having a collective intention to do that thing together. The account of collective intention for which the author has argued elsewhere is presented. In terms that are explained, the parties are jointly committed to intend as a body that such-and-such. Collective action problems in the sense of rational choice theory - problems such as the various forms of coordination problem and the prisoner's dilemma - are then considered. An explanation is given of how, when such a problem is interpreted in terms of the parties' inclinations, a suitable collective intention resolves the problem for agents who are rational in a broad sense other than the technical sense of game theory.
This article discusses an epistemological problem faced by causal explanations of action and a proposed solution. The problem is to justify why one particular reason rather than another is specified as causally efficacious. It is argued that the problem arises independently of one’s preferred conception of singular causal claims, psychological and psychophysical generalizations, and our folk-psychological competence. The proposed fallibilist solution involves the supplementation of the reason given by narratives that contextualize it and provide additional criteria for justifying the causal claim. It is argued that narratives have a distinctive structure that can afford the justification of causal attributions without sui generis powers of narrative explanation having to be invoked.
There is a long tradition in philosophy and the social sciences that emphasizes the meaningfulness of human action. This tradition doubts or even negates the possibility of causal explanations of human action precisely on the basis that human actions have meaning. This article provides an argument in favor of methodological naturalism in the social sciences. It grants the main argument of the Interpretivists, that is, that human actions are meaningful, but it shows how a transformation of a “nexus of meaning” into a “causal nexus” can take place, proposing the “successful transformation argument.”
Evolutionary psychology and human sociobiology often reject the mere possibility of symbolic causality. Conversely, theories in which symbolic causality plays a central role tend to be both anti-nativist and anti-evolutionary. This article sketches how these apparent scientific rivals can be reconciled in the study of disgust. First, we argue that there are no good philosophical or evolutionary reasons to assume that symbolic causality is impossible. Then, we examine to what extent symbolic causality can be part of the theoretical toolbox of the evolutionary social sciences. This examination leads to the conclusion that it is possible to make evolutionary sense of Mary Douglas’s theory of disgust, and that her view of symbolic causality can and should inform evolutionary theories of (sociocultural) disgust.
Understanding how individual agency and group agency relate is of great importance for a range of philosophical and practical concerns, including responsibility ascription and institutional design. This article discusses the relation between corporate and individual responsibility in agency—in particular, the relation between corporate and individual control of actions. First, I criticize Christian List and Philip Pettit’s causal account of combined corporate and individual control. Second, I develop an alternative account in terms of structural control, and I show how this gives a better grasp of the issue. Third, I argue for an act-dualism that complements my account of control and sheds further light on the relation between corporate and individual agency and responsibility.
Social scientists associate agent-based simulation (ABS) models with three ideas about explanation: they provide generative explanations, they are models of mechanisms, and they implement methodological individualism. In light of a philosophical account of explanation, we show that these ideas are not necessarily related and offer an account of the explanatory import of ABS models. We also argue that their bottom-up research strategy should be distinguished from methodological individualism.
The article makes four interrelated claims: (1) The mechanism approach to social explanation does not presuppose a commitment to individual-level microfoundationalism. (2) The microfoundationalist requirement that explanatory social mechanisms should always consist of interacting individuals has given rise to problematic methodological biases in social research. (3) It is possible to specify a number of plausible candidates for social macro-mechanisms where interacting collective agents (e.g. formal organizations) form the core actors. (4) The distributed cognition perspective combined with organization studies could provide us with explanatory understanding of the emergent cognitive capacities of collective agents.
The Oxford Handbook provides an extensive and innovative review of developments in Analytical Sociology (AS), a theory program that seeks to develop ‘thin explanations’ of social phenomena by understanding their micro-foundations through explicitly developed models and then tracing the broader consequences of these actions and interactions for aggregate social patterns. The volume covers the key characteristics of this approach in terms of ontology and epistemology and then surveys recent developments across more than two dozen areas of application, each concerning a particular social mechanism. Methodological approaches particularly pertinent to AS are also covered. However, some criticisms of the AS programme are also noted, especially its lack of attention thus far to meso-level and macro-level mechanisms.
The “ontological turn” is a recent movement within cultural anthropology. Its proponents want to move beyond a representationalist framework, where cultures are treated as systems of belief (concepts, etc.) that provide different perspectives on a single world. Authors who write in this vein move from talk of many cultures to many “worlds,” thus appearing to affirm a form of relativism. We argue that, unlike earlier forms of relativism, the ontological turn in anthropology is not only immune to the arguments of Donald Davidson’s “The Very Idea of a Conceptual Scheme,” but it affirms and develops the antirepresentationalist position of Davidson’s subsequent essays.
Review article on: Eric Alden Smith and Bruce Winterhalder, eds., Evolutionary Ecology and Human Behavior. Aldine de Gruyter, New York, 1992. Pp. xv, 470, tables, boxes, figures, bibliography, author index, subject index, $59.95 (cloth), $29.95 (paper).
1992 saw the publication of Evolutionary Ecology and Human Behavior, a volume intended as a "unified, coherent, and comprehensive" survey of theory and research in a field that the editors regard as having begun to coalesce in the latter part of the 1970s (pp. xiv-xv). That field is a particular version of the study of human social behavior "within a simultaneously evolutionary (selectionist) and ecological perspective" (p. xiii). It is a field in which the number of workers and the number of their projects and publications grew markedly after the 1970s, thus making appropriate not only such a survey volume as Evolutionary Ecology and Human Behavior and review articles written by those in the field (e.g., Borgerhoff Mulder 1991, Cronk 1991, Smith 1992a and 1992b) but also critical appraisals, like the one here, by those who are not in the field. My focus here is mainly on methodological or, more specifically, explanatory issues, and I rely heavily on the editors' essays and commissioned articles in Evolutionary Ecology and Human Behavior, but I refer as well to other works by contributors to the volume and their congeners. Focusing on these issues with these particular works as my texts entails addressing some issues of explanation in social and biological science in general, including issues still with us today concerning the explanatory use of theories, generalizations, models, predictions, causal statements, functionalist claims, and narratives.
In The Myth of the Framework, Popper attacks the doctrine that truth is relative to one's intellectual background. The same collection refers to his "situational analysis." This article explores the implications of both for spatial planning. Spatial planners have to justify proposals. The article first summarizes earlier work on planning methodology revolving around the rationality principle and the implications of Popper's work for how to do this. It then discusses the notion of the definition of the decision situation, which flows from this principle. This, of course, implies taking a leaf out of Popper's book where he discusses situational analysis. The article then gives an account of the author's work on Dutch planning doctrine, relating it to the definition of decision situations in planning and confronting it with Popper's strictures against the myth of the framework. The conclusion is that, whereas in the sciences that aim at explaining reality in terms of universal laws Popper's argument holds, in planning it does not. Planning is about the search for the right course of action. In it, Popper's concern with fighting relativism and what he calls "justification" is misplaced. The quality of decisions depends on the position and the concerns of the decision taker. Relativism is built in, and so is the need to justify decisions.
This article evaluates the structural conception of interests developed by Margaret Archer as part of her dualist version of critical realism. It argues that this structural analysis of interests is untenable because, first, Archer’s account of the causal influence of interests on agents is contradictory and, second, Archer fails to offer a defensible account of her claim that interests influence agents by providing reasons for action. These problems are explored in relation to Archer’s theoretical and empirical work. I argue for an alternative account of interests that focuses on agents’ understandings of their interests and problems with these understandings.
Critical realists argue that the condition of possibility of the sciences is that they are based on a correct set of ontological assumptions or definitions. The task of philosophy is to underlabor for the sciences, by ensuring that the explanations developed are congruent with the ontological condition of possibility of the sciences. This requires critical realists to justify their claims about ontology and, to do this, they turn to ontological assumptions that are held to obtain in natural scientific knowledge and social agents’ lay knowledge. A number of problems with this approach are discussed and a problem-solving alternative is advocated.
In Kitzmiller v. Dover (2005), the only U.S. federal case on teaching Intelligent Design in public schools, the plaintiffs used the same argument as in the creation-science trials of the 1980s: Intelligent Design is religion, not science, because it invokes the supernatural; thus teaching it violates the Constitution. Although the plaintiffs won, this strategy is unwise because it is based on problematic definitions of religion and science, leads to multiple truths in society, and is unlikely to succeed before the present right-leaning Supreme Court. I suggest discarding past approaches in favor of arguing solely from the evidence for evolution.
According to many philosophers and scientists, human sociality is explained by our unique capacity to “share” attitudes with others. The conditions under which mental states are shared have been widely debated in the past two decades, focusing especially on the issue of their reducibility to individual intentionality and the place of collective intentions in the natural realm. It is not clear, however, to what extent these two issues are related and what methodologies of investigation are appropriate in each case. In this article, I propose a solution that distinguishes between epistemic and ontological interpretations of the demand for the conditions of reduction of collective intentionality. While the philosophical debate has contributed important insights into the former, recent advances in the cognitive sciences offer novel resources to tackle the latter. Drawing on Michael Tomasello’s research on the ontogeny of shared intentionality in early instances of interaction based on joint attention, I propose an empirically informed argument about what it would take to address the ontological question of irreducibility, thus taking a step forward in the naturalization of collective intentionality.
John Searle claims that social-scientific laws are impossible because social phenomena are physically open-ended. William Butchard and Robert D’Amico have recently argued that, by Searle’s own lights, money is a social phenomenon that is physically closed. However, Butchard and D’Amico rely on a limited set of data in order to draw this conclusion, and fail to appreciate the implications of Searle’s theory of social ontology with regard to the physical open-endedness of money. Money is not physically open-ended in the strong sense that Butchard and D’Amico require, and their argument for the possibility of social-scientific laws fails as a result.
Most work on the public-private division concerns itself with identifying the lines between the two and the historical developments that shifted this line. These contributions provide an aerial view that pays little attention to the interactional micropolitics of privacy. The present article uses a pragmatist approach to analyze the local negotiation of privacy and publicity. It relies on scholarship on “accounts” and “aligning actions” to view “privacy-work” as an attempt to shield actions from demands for accounts within a specific social group, and “publicity-work” as a converse attempt to draw actions out by demanding that actors account for them. Thus, I will understand privacy as whatever is hidden, situationally, behind “moving armies of stop signs” for alignment demands.
Zangwill's recent article offers a provocative and compelling account of the alleged deficiencies of the sociology of art. However, his main targets -- christened, respectively, 'production skepticism' and 'consumption skepticism' -- are, in fact, only decontextualised and one-sided caricatures of the leading theories in this area. Zangwill has misrepresented some of the discipline's leading theorists, including Bourdieu, Eagleton, Pollock, and Wolff. His own 'aesthetic' explanation of artistic acts appears, at first glance, attractive, not least for its repudiation of radical sociological reductionism. But it turns out to be altogether too simplistic an alternative. Zangwill is a sociological 'primitive' who adequately understands neither how society exists in the mind itself nor, paradoxically, how it exists in artists' embodied sense of the right feel for the game. A less 'enchanted' approach to artists' practices is required. This needs to stress both artists' role in the public sphere and their specific interests in the artistic field.
The philosophy of Ernest Gellner was much influenced by his studies in the social sciences. The philosophical problems he examined and the solutions he proposed originated there. To what extent does Gellner's legacy influence the social sciences and current events and social transformations? More than a few of the essays in Ernest Gellner and Contemporary Social Thought find that while the questions he raised are fruitful, the answers he gave them do not pass the test of time. By contrast, John Hall, in Ernest Gellner: An Intellectual Biography, finds that Gellner laid the grounds for understanding the origins and development of current events and social transformations. Gellner distinguished three spheres in modernity (agraria, industria, and nationalism) which eventually created a mosaic of nations in Europe. Most of Gellner's critics found this creation a threat to enchantment; Hall finds it disenchanting and cold yet an illuminating way to reach solutions to our contemporary problems. Assessing Gellner in the current context, namely global fluctuations, and through the spectacles of these two books, I find that Gellner's spheres of modernity are coming to overlap, exerting pressure on the mosaic of nations and, to a large extent, transforming it into an imagined seamless community of people in which our human rationality accommodates only a touch of reenchantment. I suppose that Gellner would agree.
Four lines of argument are adduced to support the contention that current disease-modeled approaches to learning disability (LD) are inadequate and that a more environmentally-centered approach should be utilized. The first argument employs philosophy of science to criticize the blatant operationalism of the extant theorizing, while noting that the theories frequently try to employ a realist slant. The second line of argument attacks the disease model itself, employing the work of other philosophers who have noted the extent to which "disease" is a value-laden construct. Still another line notes that, at first glance, current work on paternalism might seem to provide some kind of rationale for LD placement, but that this is probably not the case. The fourth line of argument adverts to the possibility that sociopolitical motivations underlie some of the labeling efforts. It is concluded that current efforts are fruitless and that a new definitional effort is needed, one which specifically cites the locus of disability as the classroom environment.
This paper discusses the so-called non-interference assumption (NIA) grounding causal inference in trials in both medicine and the social sciences. It states that for each participant in the experiment, the value of the potential outcome depends only upon whether she or he gets the treatment. Drawing on methodological discussion in clinical trials and laboratory experiments in economics, I defend the necessity of partial forms of blinding as a warrant of the NIA, to control the participants’ expectations and their strategic interactions with the experimenter.
Can social phenomena be understood by analyzing their parts? Contemporary economic theory often assumes that they can. The methodology of constructing models which trace the behavior of perfectly rational agents in idealized environments rests on the premise that such models, while restricted, help us isolate tendencies, that is, the stable separate effects of economic causes that can be used to explain and predict economic phenomena. In this paper, I question both the claim that models in economics supply claims about tendencies and also the view that economics, when successful, necessarily follows this method. When economics licenses successful policy interventions, as it did in the case of the Federal Communications Commission spectrum auctions, its method is not to study tendencies but rather to study the phenomenon as a whole.
Social psychologists tell us that much of human behavior is automatic. It is natural to think that automatic behavioral dispositions are ethically desirable if and only if they are suitably governed by an agent’s reflective judgments. However, we identify a class of automatic dispositions that make normatively self-standing contributions to praiseworthy action and a well-lived life, independently of, or even in spite of, an agent’s reflective judgments about what to do. We argue that the fundamental questions for the “ethics of automaticity” are what automatic dispositions are (and are not) good for and when they can (and cannot) be trusted.
This essay will defend a causal conception of action explanations in terms of an agent’s reasons by delineating a metaphysical and epistemic framework that allows us to view folk psychology as providing us with causal and autonomous explanatory strategies of accounting for individual agency. At the same time, I will calm philosophical concerns about the issue of causal deviance that have been at the center of the recent debates between causalist and noncausalist interpretations of action explanations. For that purpose, it is important to realize that the domain of folk-psychological action explanation is also the domain of skillful and goal-directed bodily movements, a domain to which we have independent epistemic access.
Within ontology new theories are extremely rare. Hacking bravely claims to have one: “historical ontology” or “dynamic nominalism.” Regrettably, he uses “nominalism” idiosyncratically, without explaining it or its qualifier. He does say what historical ontology is: it is “the presentation of the history of ontology in context.” This idea is laudable, as it invites presenting idealism as once attractive but no longer so (due to changes in perception theory, for example). But this idea is a proposal, not a theory, much less an ontological theory, as it does not say what things are made of. Also, Hacking's details are often misleading. Thus, he falsely hints that he respects Wittgenstein and that he agrees with him. Considered as a study of ontology, shorn of its (often amusing) incidental material, the book appears surprisingly thin and repetitious. The study is either excessively opaque or quite clear but stale: the choice between these options is open.
For more than 10 years, Ulrich Beck has dominated discussion of risk issues in the social sciences. We argue that Beck's criticisms of the theory and practice of risk analysis are groundless. His understanding of the nature of risk is badly flawed. His attempt to identify risk with risk perception fails. He misunderstands and distorts the use of probability in risk analysis. His comments about the insurance industry show that he does not understand some of the basics of that industry. And his assertions about the wrongness of allowing acceptable levels of exposure to toxic chemicals do not stand up to scrutiny.
The kind of epistemic relativism usually refuted by its critics is less frequently observable in ethnographic research practices than the critics assume. Instead, methodological conceptual relativism can be recognized in several cases. This has significant practical implications, since the kind of epistemic relativism described by its critics, if rigorously followed, could lead ethnographers to conflate ways of argumentation accepted by their informants with ways of argumentation accepted in academia, whereas methodological conceptual relativism does not have such consequences.
The article analyzes views that treat quantitative and qualitative methods in the social sciences as separate or irreconcilable. First, we characterize these views and show how they deal with this divide and how they view its various aspects. Next, we identify the works of Herbert Blumer as the basis of that divide and subject them to analysis. Finally, by means of categories like quantity, quality, and measure, we show that the qualitative-quantitative divide is based on a mistaken approach to these categories and to the quantitative and qualitative methods themselves.
In this paper, I consider the recent resurgence of “evolutionary economics”—the idea that evolutionary theory can be very useful to push forward key debates in economics—and assess the extent to which it rests on a plausible foundation. To do this, I first distinguish two ways in which evolutionary theory can, in principle, be brought to bear on an economic problem—namely, evidentially and heuristically—and then apply this distinction to the three major hypotheses that evolutionary economists have come to defend: the implausibility of rational choice theory as an account of economic rationality, the idea that firms are autonomous economic agents, and the need for a more dynamic, less equilibrium-focused economic methodology. In each of these cases, I conclude negatively: the relevant evolutionary considerations neither suggest interesting and novel hypotheses to investigate further (the hallmark of heuristic devices) nor are backed up by the needed data to constitute genuine evidence. I end by distinguishing this criticism of evolutionary economics from others that have been put forward in the literature: in particular, I make clear that, unlike those of other critics, the arguments of this paper are based on epistemic—not structural—considerations and therefore leave more room for a plausible form of evolutionary economics to come about in the future.
Popper's critique of the philosophical doctrines underlying totalitarian ideology is powerful. His arguments cut like a razor through a set of doctrines that still continue to inspire and provide intellectual weaponry to enemies of liberal democracy. However, with the totalitarian regimes of Hitler and Stalin in full view while he was writing The Poverty of Historicism and The Open Society and Its Enemies, Popper did not give full and balanced consideration to the range of effects these doctrines can actually have on concrete ideologies. Yet, while taking Popper's critique of totalitarianism very seriously, we may recognize that the ideas he associates with totalitarianism can exist, and actually often do exist, in benign forms. They also often exist harmlessly in healthy polities, tempered by other ideas and by institutions. Moreover, the struggle between liberalism and totalitarianism is only partly a struggle of philosophical ideas. Political argument and rhetoric appeal to feeling as well as to the intellect. This tends to be a blind spot of liberalism that sometimes contributes to its defeat.
In this book, Bogdan offers an empirically informed theory of the emergence and nature of predication with unmistakable pragmatic and developmental overtones. While the emphasis on psycho-pragmatic and developmental factors is most welcome, and while the discussion is informed and informative, Bogdan’s thesis suffers from some major weaknesses, in particular philosophical ones. Chief among these is an insufficient clarity with regard to the problem domain being addressed: Bogdan professes to offer a theory of predication as a general mental faculty but in reality he focuses on a rather narrower phenomenon. This narrow delineation of the problem domain, and Bogdan’s insistence on the discontinuity between full-fledged human predication and animal thought patterns, leads to a theoretical impasse that renders the very coherence of his proposal dubitable.
John Searle’s argument that social-scientific laws are impossible depends on a special open-ended feature of social kinds. We demonstrate that under a noncontentious understanding of bridging principles the so-called “counts-as” relation, found in the expression “X counts as Y in (context) C,” provides a bridging principle for social kinds. If we are correct, not only are social-scientific laws possible, but the “counts as” relation might provide a more perspicuous formulation for candidate bridge principles.
Nancy Cartwright claims that “Causality is a hot topic today both in philosophy and economics.” She may be right about philosophers, but not when it comes to economists. Cartwright talks about “economics” but nothing she says about it corresponds to what is taught in economics classes. Today, economics is dominated by model builders—but not all models involve econometrics. While all model builders do respect an endogenous-exogenous distinction between variables, this distinction will not be on the basis of which type of variable “causes” which, but on which variables are “determined” (and hence explained) by the model and those which are not.
Most contemporary political science researchers advocate multimethod research; however, the value and proper role of qualitative methodologies, like case study analysis, are disputed. A pluralistic philosophy of science can shed light on this debate. Methodological pluralism is indeed valuable, but does not entail causal pluralism. Pluralism about the goals of science is relevant to the debate and suggests a focus on the difference between evidence for warrant and evidence for use. I propose that case study research provides evidence for use by providing information that bears on the applicability of causal generalizations and risk assessment.
In the twentieth century, philosophy came to be dominated by the English-speaking world, first Britain and then the United States. Accompanying this development was an unprecedented professionalization and specialization of the discipline, the consequences of which are surveyed and evaluated in this article. The most general result has been a decline in philosophy’s normative mission, which roughly corresponds to the increasing pursuit of philosophy in isolation from public life and especially other forms of inquiry, including ultimately its own history. This is how the author explains the increasing tendency, over the past quarter-century, for philosophy to embrace the role of “underlaborer” for the special sciences. Indicative of this attitude is the long-term popularity of Thomas Kuhn’s The Structure of Scientific Revolutions, which argues that fields reach maturity when they forget their past and focus on highly specialized problems. In conclusion, the author recalls the part of philosophy’s history that following Kuhn’s advice has caused us to forget, namely, the fate of Neo-Kantianism in the early twentieth century.