Preprint

Abstract

A cross-cultural survey experiment revealed a widespread tendency to rely on a rule’s letter over its spirit when deciding which acts violate the rule. This tendency’s strength varied markedly across (k = 15) field sites, owing to cultural variation in the impact of moral appraisals on judgments of rule violation. Compared to laypeople, legal experts were more inclined to disregard their moral evaluations of the acts altogether, and consequently exhibited more pronounced textualist tendencies. Finally, we evaluated a plausible mechanism for the emergence of textualism: In a two-player coordination game, incentives to coordinate in the absence of communication reinforced participants’ adherence to rules’ literal meaning. Together, these studies (total N = 5,495) help clarify the origins and allure of textualism, especially in the law. Within heterogeneous communities in which members diverge in their moral appraisals involving a rule’s purpose, the rule’s literal meaning provides a clear focal point—an easily identifiable point of agreement enabling coordinated interpretation among citizens, lawmakers, and judges.




Article
Despite pervasive variation in the content of laws, legal theorists and anthropologists have argued that laws share certain abstract features and even speculated that law may be a human universal. In the present report, we evaluate this thesis through an experiment administered in 11 different countries. Are there cross-cultural principles of law? In a between-subjects design, participants (N = 3,054) were asked whether there could be laws that violate certain procedural principles (e.g., laws applied retrospectively or unintelligible laws), and also whether there are any such laws. Confirming our preregistered prediction, people reported that such laws cannot exist, but also (paradoxically) that there are such laws. These results document cross-culturally and cross-linguistically robust beliefs about the concept of law which defy people's grasp of how legal systems function in practice.
Article
Prescriptive rules guide human behavior across various domains of community life, including law, morality, and etiquette. What, specifically, are rules in the eyes of their subjects, i.e., those who are expected to abide by them? Over the last sixty years, theorists in the philosophy of law have offered a useful framework with which to consider this question. Some, following H. L. A. Hart, argue that a rule’s text at least sometimes suffices to determine whether the rule itself covers a case. Others, in the spirit of Lon Fuller, believe that there is no way to understand a rule without invoking its purpose: the benevolent ends which it is meant to advance. In this paper we ask whether people associate rules with their textual formulation or their underlying purpose. We find that both text and purpose guide people's reasoning about the scope of a rule. Overall, a rule’s text more strongly contributed to rule infraction decisions than did its purpose. The balance of these considerations, however, varied across experimental conditions: In conditions favoring a spontaneous judgment, rule interpretation was affected by moral purposes, whereas analytic conditions resulted in a greater adherence to textual interpretations. In sum, our findings suggest that the philosophical debate between textualism and purposivism partly reflects two broader approaches to normative reasoning that vary within and across individuals.
Article
Heuristics are commonly viewed in behavioral economics as inferior strategies resulting from agents’ cognitive limitations. Uncertainty is generally reduced to a form of risk, quantifiable in some probabilistic format. We challenge both conceptualizations and connect heuristics and uncertainty in a functional way: When uncertainty does not lend itself to risk calculations, heuristics can fare better than complex, optimization-based strategies if they satisfy the criteria for being ecologically rational. This insight emerges from merging Knightian uncertainty with the study of fast-and-frugal heuristics. For many decision theorists, uncertainty is an undesirable characteristic of a situation, yet in the world of business it is considered a necessary condition for profit. In this article, we argue for complementing the study of decision making under risk using probability theory with a systematic study of decision making under uncertainty using formal models of heuristics. In doing so, we can better understand decision making in the real world and why and when simple heuristics are successful.
Article
Objective: Over the last two decades, many states have adopted several of the 20 laws that aim to control youth access to and possession of alcohol and prevent underage drinking in the United States. However, many of these laws have not been evaluated since their adoption. The objective of this study was to determine which minimum legal drinking age 21 (MLDA-21) laws currently have an effect on underage drinking-and-driving fatal crashes. Method: We updated the effective dates of the 20 MLDA-21 laws examined in this study and used scores of each law's strengths and weaknesses. Our structural equation model included the 20 MLDA-21 laws, impaired driving laws, seat belt safety laws, economic strength, driving exposure, beer consumption, and fatal crash ratios of drinking-to-nondrinking drivers under age 21. Results: Nine MLDA-21 laws were associated with significant decreases in fatal crash ratios of underage drinking drivers: possession of alcohol (-7.7%), purchase of alcohol (-4.2%), use alcohol and lose your license (-7.9%), zero tolerance .02 blood alcohol concentration limit for underage drivers (-2.9%), age of bartender ≥21 (-4.1%), state responsible beverage service program (-3.8%), fake identification support provisions for retailers (-11.9%), dram shop liability (-2.5%), and social host civil liability (-1.7%). Two laws were associated with significant increases in the fatal crash ratios of underage drinking drivers: prohibition of furnishing alcohol to minors (+7.2%) and registration of beer kegs (+9.6%). Conclusions: The nine effective MLDA-21 laws are estimated to be currently saving approximately 1,135 lives annually, yet only five states have enacted all nine laws. If all states adopted these nine effective MLDA-21 laws, it is estimated that an additional 210 lives could be saved every year.
Article
The letter of the law is its literal meaning; the spirit of the law is its perceived intention. We tested the hypothesis that violating the spirit of the law accounts for culpability above and beyond breaking the mere letter. We find that one can incur culpability even when the letter of the law is not technically broken. We examine this effect across various legal contexts and discuss the implications for future research.
Article
Can judging that an agent blamelessly broke a rule lead us to claim, paradoxically, that no rule was broken at all? Surprisingly, it can. Across seven experiments, we document and explain the phenomenon of excuse validation. We found that when an agent blamelessly breaks a rule, this significantly distorts people’s descriptions of the agent’s conduct: roughly half of people deny that a rule was broken. The results suggest that people engage in excuse validation in order to avoid indirectly blaming others for blameless transgressions. Excuse validation has implications for recent debates in normative ethics, epistemology, and the philosophy of language. These debates have featured thought experiments perfectly designed to trigger excuse validation, inhibiting progress in these areas.
Article
Moral condemnation of harmful behavior is influenced by both cognitive and affective processes. However, despite much recent research, the proximate source of affect remains unclear. One obvious contender is empathy; simulating the victim's pain could lead one to judge an action as wrong ("outcome aversion"). An alternative, less obvious source is one's own aversion to performing the action itself ("action aversion"). To dissociate these alternatives, we developed a scale that assessed individual aversions to (a) witnessing others experience painful outcomes (e.g., seeing someone fall down stairs); and (b) performing actions that are harmless yet aversive (e.g., stabbing a fellow actor with a fake stage knife). Across 4 experiments, we found that moral condemnation of both first-person and third-party harmful behavior in the context of moral dilemmas is better predicted by one's aversion to action properties than by an affective response to victim suffering. In a fifth experiment, we manipulated both action aversion and the degree of expected suffering across a number of actions and found that both factors make large, independent contributions to moral judgment. Together, these results suggest we may judge others' actions by imagining what it would feel like to perform the action rather than experience the consequences of the action. Accordingly, they provide a counterpoint to a dominant but largely untested assumption that empathy is the key affective response governing moral judgments of harm.
Article
Humans are unique in the animal world in the extent to which their day-to-day behavior is governed by a complex set of rules and principles commonly called norms. Norms delimit the bounds of proper behavior in a host of domains, providing an invisible web of normative structure embracing virtually all aspects of social life. People also find many norms to be deeply meaningful. Norms give rise to powerful subjective feelings that, in the view of many, are an important part of what it is to be a human agent. Despite the vital role of norms in human lives and human behavior, and the central role they play in explanations in the social sciences, there has been very little systematic attention devoted to norms in cognitive science. Much existing research is partial and piecemeal, making it difficult to know how individual findings cohere into a comprehensive picture. Our goal in this essay is to offer an account of the psychological mechanisms and processes underlying norms that integrates what is known and can serve as a framework for future research.
Article
The propensity score is the conditional probability of assignment to a particular treatment given a vector of observed covariates. Both large and small sample theory show that adjustment for the scalar propensity score is sufficient to remove bias due to all observed covariates. Applications include: (i) matched sampling on the univariate propensity score, which is a generalization of discriminant matching, (ii) multivariate adjustment by subclassification on the propensity score where the same subclasses are used to estimate treatment effects for all outcome variables and in all subpopulations, and (iii) visual representation of multivariate covariance adjustment by a two-dimensional plot.
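The adjustment the abstract describes can be illustrated with a small simulation. The data-generating process, covariate, and treatment effect below are hypothetical, chosen so that the propensity score e(x) = Pr(Z = 1 | X = x) is known by construction; this is a minimal sketch of subclassification, not the paper's own analysis.

```python
import random

random.seed(0)

# Hypothetical setup: a binary covariate x raises both the probability of
# treatment and the outcome. The true treatment effect is 2.0, and the
# propensity score is known by construction: e(x) = 0.8 if x else 0.2.
def simulate(n=20000):
    rows = []
    for _ in range(n):
        x = random.random() < 0.5            # binary covariate
        e = 0.8 if x else 0.2                # propensity score e(x) = Pr(Z = 1 | x)
        z = random.random() < e              # treatment assignment
        y = 2.0 * z + 3.0 * x + random.gauss(0, 1)
        rows.append((x, z, y))
    return rows

def diff_in_means(rows):
    treated = [y for _, z, y in rows if z]
    control = [y for _, z, y in rows if not z]
    return sum(treated) / len(treated) - sum(control) / len(control)

rows = simulate()
naive = diff_in_means(rows)                  # biased upward: treated units have higher x

# Subclassification on the scalar propensity score: within each stratum of
# e(x), treatment is as good as random, so a size-weighted average of
# stratum-level differences removes the bias due to x.
strata = {}
for x, z, y in rows:
    strata.setdefault(0.8 if x else 0.2, []).append((x, z, y))
adjusted = sum(diff_in_means(s) * len(s) for s in strata.values()) / len(rows)
```

With these numbers the naive difference lands near 3.8 (the true effect of 2.0 plus confounding from x), while the subclassified estimate recovers roughly 2.0.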
Article
There has been a recent upsurge of research on moral judgment and decision making. One important issue with this body of work concerns the relative advantages of calculating costs and benefits versus adherence to moral rules. The general tenor of recent research suggests that adherence to moral rules is associated with systematic biases and that systematic cost-benefit analysis is a normatively superior decision strategy. This article queries both the merits of cost-benefit analyses and the shortcomings of moral rules. We argue that outside the very narrow domain in which consequences can be unambiguously anticipated, it is not at all clear that calculation processes optimize outcomes. In addition, there are good reasons to believe that following moral rules can lead to superior consequences in certain contexts. More generally, different modes of decision making can be seen as adaptations to particular environments.
Article
Recent law and economics scholarship has produced much theoretical and empirical work on how and why legal disputes are settled and litigated. One of the most significant developments in this literature, attributable to the work of William Baxter and the combined efforts of George Priest and Benjamin Klein, has been the formation of a theory about both the selection of disputes for trial and the rates of success that plaintiffs enjoy for those cases that are resolved at trial. The basic theory contains two components. The selection effect refers to the proposition that the selection of tried cases is not a random sample of the mass of underlying cases. Rather, those cases that tend to be clear for either the plaintiff or the defendant under the applicable legal rules settle relatively quickly, leaving only the difficult cases for trial. Since these tried cases are not representative of the larger class of disputed cases, it is risky to draw any inference from the outcome of tried disputes to the soundness of the legal rules by which these disputes are decided. The sample of tried cases may contain many victories for the plaintiff and many for the defendant. One cannot infer from that fact alone, however, the fairness or desirability of the underlying legal rules. If the rules are heavily weighted for the plaintiff, then the closeness of the tried cases is consistent with there being many cases in which plaintiffs recover handsomely without litigation. Similarly, if the rules are weighted heavily for the defendant, then the total mass of cases brought will, by the time of trial, be reduced, as defendants will be able to settle many cases on favorable terms. The observation of hotly contested trials is therefore wholly consistent with the underlying rules that are skewed toward the plaintiff, toward the defendant, or toward neither side. The power of the selection effect is generally recognized in the academic literature. 
It has been the basis of complex litigation models and has been subjected to empirical testing and debate. Closely akin to, but clearly distinguishable from, the selection effect is the so-called 50 percent hypothesis. This hypothesis is a more specific prediction than the selection effect. The 50 percent hypothesis posits that the set of tried cases culled from the mass of underlying disputes will result in 50 percent victories for plaintiffs and 50 percent victories for defendants. The 50 percent hypothesis can conveniently be regarded as the limiting case of the selection effect theory. It follows, therefore, that any empirical corroboration of it generates powerful support for the more general selection hypothesis. This article suggests the incompleteness of existing methods of statistically testing the 50 percent hypothesis and reformulates the criteria for accepting or rejecting the hypothesis.
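Under the 50 percent hypothesis, plaintiff wins among tried cases follow a Binomial(n, 0.5) distribution, so one elementary way to test it is an exact two-sided binomial test. The case counts below are invented for illustration and are not drawn from the article:

```python
from math import comb

# Under H0 (the 50 percent hypothesis) plaintiff wins at trial follow
# Binomial(n, 0.5). The exact two-sided p-value sums the probabilities of
# every outcome no more likely than the one observed.
def two_sided_pvalue(wins, n, p0=0.5):
    pmf = [comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(n + 1)]
    return sum(p for p in pmf if p <= pmf[wins])

# Hypothetical sample: 60 plaintiff wins out of 100 tried cases.
pval = two_sided_pvalue(60, 100)   # about 0.057: suggestive, not decisive
```

Observing exactly 50 wins in 100 cases would, by the same method, give a p-value of 1, i.e., no evidence at all against the hypothesis.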
Article
A cross-cultural survey experiment revealed a dominant tendency to rely on a rule’s letter over its spirit when deciding which behaviors violate the rule. This tendency varied markedly across (k = 15) countries, owing to variation in the impact of moral appraisals on judgments of rule violation. Compared with laypeople, legal experts were more inclined to disregard their moral evaluations of the acts altogether and consequently exhibited stronger textualist tendencies. Finally, we evaluated a plausible mechanism for the emergence of textualism: in a two-player coordination game, incentives to coordinate in the absence of communication reinforced participants’ adherence to rules’ literal meaning. Together, these studies (total n = 5,794) help clarify the origins and allure of textualism, especially in the law. Within heterogeneous communities in which members diverge in their moral appraisals involving a rule’s purpose, the rule’s literal meaning provides a clear focal point—an identifiable point of agreement enabling coordinated interpretation among citizens, lawmakers, and judges.
Article
Within legal scholarship and practice, among the most pervasive tasks is the interpretation of texts. And within legal interpretation, perhaps the most pervasive inquiry is the search for “ordinary meaning.” Jurists often treat ordinary meaning analysis as an empirical inquiry, aiming to discover a fact about how people understand language. When evaluating ordinary meaning, interpreters rely on dictionary definitions or patterns of common usage, increasingly via “legal corpus linguistics” approaches. However, the most central question about these popular methods remains open: Do they reliably reflect ordinary meaning? This Article presents experiments that assess whether (a) dictionary definitions and (b) common usage data reflect (c) how people actually understand language today. The Article elaborates the implications of two main experimental results. First, neither the dictionary nor legal corpus linguistics methods reliably track ordinary people’s judgments about meaning. This finding shifts the argumentative burden to jurists who rely on these tools to identify “ordinary meaning” or “original public meaning”: these views must articulate and demonstrate a reliable method of analysis. Moreover, this divergence illuminates several interpretive fallacies. For example, advocates of legal corpus linguistics often contend that the nonappearance of a specific use in a corpus indicates that the use is not part of the relevant term’s ordinary meaning. The experiments reveal this claim to be a “Nonappearance Fallacy.” Ordinary meaning exceeds datasets of common usage — even very large ones. Second, dictionary and legal corpus linguistics verdicts diverge dramatically from each other. Part of that divergence is explained by the finding that broad dictionary definitions tend to direct interpreters to extensive interpretations, while data of common usage tends to point interpreters to more prototypical cases. 
This divergence suggests two different criteria that are often relevant in interpretation: a more extensive criterion and a narrower criterion. Although dictionaries and legal corpus linguistics might, in some cases, help us identify these criteria, a hard legal-philosophical question remains: Which of these two criteria should guide the interpretation of terms and phrases in legal texts? Insofar as there is no compelling case to prefer one, the results suggest that dictionary definitions, legal corpus linguistics, or even other more scientific measures of meaning may not be equipped in principle to deliver simple and unequivocal answers to inquiries about the so-called "ordinary meaning" of legal texts.
Article
For decades now, experiments have revealed that we humans tend to evaluate the views or activities of our own group and its members more favorably than those of outsiders. To assess convergence between experimental and observational results, we explore whether US Supreme Court justices fall prey to in-group bias in freedom-of-expression cases. A two-level hierarchical model of all votes cast between the 1953 and 2014 terms confirms that they do. Although liberal justices are (overall) more supportive of free-speech claims than conservative justices, the votes of both liberal and conservative justices tend to reflect their preferences toward the speech’s ideological grouping and not solely an underlying taste for (or against) greater protection for expression. These results suggest the importance of new research programs aimed at evaluating how other cognitive biases identified in experimental work may influence judicial behavior in actual court decisions.
Article
This Article reports the results of a study on whether political predispositions influence judicial decisionmaking. The study was designed to overcome the two principal limitations on existing empirical studies that purport to find such an influence: the use of nonexperimental methods to assess the decisions of actual judges; and the failure to use actual judges in ideologically-biased-reasoning experiments. The study involved a sample of sitting judges (n = 253), who, like members of a general public sample (n = 800), were culturally polarized on climate change.
Article
This book presents a comprehensive theory of legal interpretation, by a leading judge and legal theorist. Currently, legal philosophers and jurists apply different theories of interpretation to constitutions, statutes, rules, wills, and contracts. Aharon Barak argues that an alternative approach--purposive interpretation--allows jurists and scholars to approach all legal texts in a similar manner while remaining sensitive to the important differences. Moreover, regardless of whether purposive interpretation amounts to a unifying theory, it would still be superior to other methods of interpretation in tackling each kind of text separately. Barak explains purposive interpretation as follows: All legal interpretation must start by establishing a range of semantic meanings for a given text, from which the legal meaning is then drawn. In purposive interpretation, the text's "purpose" is the criterion for establishing which of the semantic meanings yields the legal meaning. Establishing the ultimate purpose--and thus the legal meaning--depends on the relationship between the subjective and objective purposes; that is, between the original intent of the text's author and the intent of a reasonable author and of the legal system at the time of interpretation. This is easy to establish when the subjective and objective purposes coincide. But when they don't, the relative weight given to each purpose depends on the nature of the text. For example, subjective purpose is given substantial weight in interpreting a will; objective purpose, in interpreting a constitution. Barak develops this theory with masterful scholarship and close attention to its practical application. Throughout, he contrasts his approach with that of textualists and neotextualists such as Antonin Scalia, pragmatists such as Richard Posner, and legal philosophers such as Ronald Dworkin.
This book represents a profoundly important contribution to legal scholarship and a major alternative to interpretive approaches advanced by other leading figures in the judicial world.
Article
Legal decisions and theories are frequently condemned as formalistic, yet little discussion has occurred regarding exactly what the term "formalism" means. In this Article, Professor Schauer examines divergent uses of the term to elucidate its descriptive content. Conceptions of formalism, he argues, involve the notion that rules constrict the choice of the decisionmaker. Our aversion to formalism stems from denial that the language of rules either can or should constrict choice in this way. Yet Professor Schauer argues that this aversion to formalism should be rethought: At times language both can and should restrict decisionmakers. Consequently, the term "formalistic" should not be used as a blanket condemnation of a decisionmaking process; instead the debate regarding decision according to rules should be confronted on its own terms.
Article
Dual-system approaches to psychology explain the fundamental properties of human judgment, decision making, and behavior across diverse domains. Yet, the appropriate characterization of each system is a source of debate. For instance, a large body of research on moral psychology makes use of the contrast between "emotional" and "rational/cognitive" processes, yet even the chief proponents of this division recognize its shortcomings. Largely independently, research in the computational neurosciences has identified a broad division between two algorithms for learning and choice derived from formal models of reinforcement learning. One assigns value to actions intrinsically based on past experience, while another derives representations of value from an internally represented causal model of the world. This division between action- and outcome-based value representation provides an ideal framework for a dual-system theory in the moral domain.
Article
Observations were made in 10 preschools of interactions in 2 domains of social events: social conventional and moral. On the basis of criteria defining each domain, observed events could be reliably classified as social conventional or moral. As another aspect of the study, an interview was administered to children from the preschool who had witnessed the same events as the observer. The children's view of the events as social conventional or moral was in agreement with our classifications of the events in 83% of the cases. It was hypothesized that the responses of both children and adults to social conventional events differ from their responses to moral events. Observed behaviors were rated on a standard checklist of response categories. Different types of responses were elicited by the 2 types of events. Almost all responses to social conventional transgressions were initiated by adults. Children and adults responded with equal frequency to moral transgressions. Adults responded to social conventional transgressions differently from the ways they reacted to moral transgressions.
Article
Professor Hart defends the Positivist school of jurisprudence from many of the criticisms which have been leveled against its insistence on distinguishing the law that is from the law that ought to be. He first insists that the critics have confused this distinction with other Positivist theories about law which deserved criticism, and then proceeds to consider the merits of the distinction.
Article
Rephrasing the question of "law and morals" in terms of "order and good order," Professor Fuller criticizes Professor H. L. A. Hart for ignoring the internal "morality of order" necessary to the creation of all law. He then rejects Professor Hart's theory of statutory interpretation on the ground that we seek the objectives of entire provisions rather than the meanings of individual words which are claimed to have "standard instances."
Article
Cristina Bicchieri examines social norms, such as fairness, cooperation, and reciprocity, in an effort to understand their nature and dynamics, the expectations they generate, and how they evolve and change. Drawing on several intellectual traditions and methods, including those of social psychology, experimental economics, and evolutionary game theory, Bicchieri provides an integrated account of how social norms emerge and why and when we follow them. Examining the existence and survival of inefficient norms, she demonstrates how norms evolve in ways that depend upon the psychological dispositions of individuals and how such dispositions may impair social efficiency.
Article
Economic analysis generally assumes that law solves cooperation problems because legal sanctions change payoffs. Where the problem is one of coordination, however, this article contends that law also influences behavior by changing expectations, independent of payoffs. When individuals need to coordinate, law works to make one equilibrium "focal" and thereby creates expectations that others will play the strategy associated with that equilibrium. Once the expectations exist, they are self-fulfilling; even if the payoffs remain the same, everyone prefers to play the focal point strategy. Private expression can also change expectations, but law often has a comparative advantage in the publicity accorded to, and uniqueness of, its message, as well as the resulting reputation of public officials. The focal effect is one way to explain how law influences behavior "expressively" by what it says, independent of the sanctions it imposes. The article initially demonstrates this result using a pure coordination game, but then broadens the analysis in two ways. First, the focal point exists even when individuals have conflicting interests, as long as they share a common interest in avoiding certain outcomes. Thus, focal points matter in "Hawk-Dove" games which plausibly model a substantial amount of real world conflict. In such situations, both adjudication and regulation have some expressive influence on behavior. Second, the focal effect exists in iterated situations where equilibria evolve over time. Legal focal points can influence behavior during disequilibrium and, in several ways, supplant an existing convention. These points are illustrated with examples of traffic regulation, a sanctionless anti-smoking law, and a law creating "imperfect" liability for landlords.
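The focal-point mechanism can be sketched with a pure coordination game. The labels, payoffs, and beliefs below are illustrative assumptions rather than the article's model: both equilibria pay equally, so only expectations about the other player select between them.

```python
# Pure coordination game: each player picks a convention; both earn 1 if they
# match and 0 otherwise. (A, A) and (B, B) are both equilibria with identical
# payoffs, so payoffs alone cannot select between them.
payoff = {("A", "A"): 1, ("B", "B"): 1, ("A", "B"): 0, ("B", "A"): 0}

def best_response(p_other_plays_A):
    """Expected-payoff-maximizing choice given a belief about the other player."""
    ev_a = p_other_plays_A * payoff[("A", "A")]
    ev_b = (1 - p_other_plays_A) * payoff[("B", "B")]
    return "A" if ev_a >= ev_b else "B"

# At a 50/50 belief both choices tie. If a public signal (a law, or a rule's
# literal text) labels "A" as focal, each player expects the other to play A,
# and that expectation is self-fulfilling even though payoffs never changed.
assert best_response(0.9) == "A"
assert best_response(0.1) == "B"
```

This is the sense in which law can influence behavior expressively: by shifting beliefs about what others will do, not by changing sanctions.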
Article
Textualist - or "new" textualist - statutory interpreters trumpet the virtues of following the ordinary meaning of statutory text, and warn of the vices of introducing extratextual material - particularly, legislative history. But textualism misses a step in the argument. Its understanding of ordinary meaning is admittedly contextual, and context requires attention to purpose. Judges will know much about a statute's purpose-meaning from their background knowledge of the relevant area of law. Textualism can't explain why judges should be permitted to rely on such background knowledge but should be barred from gaining new knowledge about how certain words were used. This failure of explanation is the missing step. The argument for extratextual knowledge gathering, including legislative history, is thus strengthened, and the arguments for an ordinary meaning approach lead to an uneven judicial terrain - nuanced analysis of some statutory terms, but not of others, depending upon what judges already know, versus what they might learn.
Article
Although published works rarely include causal estimates from more than a few model specifications, authors usually choose the presented estimates from numerous trial runs readers never see. Given the often large variation in estimates across choices of control variables, functional forms, and other modeling assumptions, how can researchers ensure that the few estimates presented are accurate or representative? How do readers know that publications are not merely demonstrations that it is possible to find a specification that fits the author's favorite hypothesis? And how do we evaluate or even define statistical properties like unbiasedness or mean squared error when no unique model or estimator even exists? Matching methods, which offer the promise of causal inference with fewer assumptions, constitute one possible way forward, but crucial results in this fast-growing methodological literature are often grossly misinterpreted. We explain how to avoid these misinterpretations and propose a unified approach that makes it possible for researchers to preprocess data with matching (such as with the easy-to-use software we offer) and then to apply the best parametric techniques they would have used anyway. This procedure makes parametric models produce more accurate and considerably less model-dependent causal inferences.
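The preprocessing idea (match first, then apply whatever estimator one would have used anyway) can be sketched with exact matching on a single covariate. The data-generating process and effect size below are hypothetical; real applications match on many covariates with dedicated matching software.

```python
import random

random.seed(1)

# Hypothetical data: the outcome depends on a covariate x and on treatment z
# (true effect 1.0), and treated units tend to have larger x, so a raw
# comparison of treated and control means is confounded.
units = []
for _ in range(5000):
    x = random.randint(0, 4)
    z = random.random() < 0.15 + 0.15 * x
    y = 1.0 * z + 0.5 * x + random.gauss(0, 1)
    units.append((x, z, y))

treated = [(x, y) for x, z, y in units if z]
control_pools = {}
for x, z, y in units:
    if not z:
        control_pools.setdefault(x, []).append(y)

# Exact matching as preprocessing: pair each treated unit with a control
# sharing the same x (without replacement). The matched sample is balanced
# on x by construction, so the mean paired difference estimates the effect.
diffs = []
for x, y in treated:
    pool = control_pools.get(x)
    if pool:                                 # skip treated units with no match
        diffs.append(y - pool.pop())
att = sum(diffs) / len(diffs)                # close to the true effect of 1.0
```

Because balance is achieved before any model is fit, the resulting estimate depends far less on which parametric specification is chosen afterward.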
Article
Recent research in moral psychology has attempted to characterize patterns of moral judgments of actions in terms of the causal and intentional properties of those actions. The present study directly compares the roles of consequence, causation, belief and desire in determining moral judgments. Judgments of the wrongness or permissibility of action were found to rely principally on the mental states of an agent, while judgments of blame and punishment were found to rely jointly on mental states and the causal connection of an agent to a harmful consequence. Also, selectively for judgments of punishment and blame, people who attempt but fail to cause harm are judged more leniently if the harm occurs by independent means than if it does not occur at all. An account of these phenomena is proposed that distinguishes two processes of moral judgment: one which begins with harmful consequences and seeks a causally responsible agent, and the other which begins with an action and analyzes the mental states responsible for that action.
Eskridge, W. (1998). Textualism, the Unknown Ideal? Michigan Law Review, 96.
Leal, F. (2015). Six objections to the constitutionalization of private law. Direitos Fundamentais & Justiça, 33, 123-165.
Krishnakumar, A. S. (2021). Cracking the whole code rule. New York University Law Review, 96, 76-172.
Manning, J. (2005). Textualism and Legislative Intent. Virginia Law Review, 91.
Martínez, E., & Tobia, K. (2022). A Survey of the Legal Academy.