Abstract
A cross-cultural survey experiment revealed a dominant tendency to rely on a rule’s letter over its spirit when deciding which behaviors violate the rule. This tendency varied markedly across ( k = 15) countries, owing to variation in the impact of moral appraisals on judgments of rule violation. Compared with laypeople, legal experts were more inclined to disregard their moral evaluations of the acts altogether and consequently exhibited stronger textualist tendencies. Finally, we evaluated a plausible mechanism for the emergence of textualism: in a two-player coordination game, incentives to coordinate in the absence of communication reinforced participants’ adherence to rules’ literal meaning. Together, these studies (total n = 5,794) help clarify the origins and allure of textualism, especially in the law. Within heterogeneous communities in which members diverge in their moral appraisals involving a rule’s purpose, the rule’s literal meaning provides a clear focal point—an identifiable point of agreement enabling coordinated interpretation among citizens, lawmakers, and judges.
... involves an exercise in moral evaluation or not (see Hannikainen et al., 2022), and various experimental manipulations impacted people's adoption of primarily text-based versus value-based approaches to rule interpretation. In these distinct areas, we find support for the idea that opposing intuitions appear to coexist in the layperson's mind, which we refer to as the intuitive conflict thesis. ...
In characterizing the nature of law and its proper interpretation, philosophers of law often appeal to empirical assumptions about the mind and language. However, psychological research has emphasized social norms (sustained through personal interaction), while comparatively neglecting positive rules (introduced by an authority). Addressing this imbalance, recent empirical work has begun to tackle foundational questions in jurisprudence, such as the connection between law and morality, the extent and origin of cultural variability in legal concepts, and the overlap between ordinary and expert concepts in this domain. This chapter provides an overview of ongoing research in the nascent field of experimental jurisprudence and takes stock of its implications for the concept of law. This preliminary sketch of legal cognition raises deeper questions that only a more diverse research program could answer. In closing, we advocate that research in experimental jurisprudence ought to investigate proximate and ultimate questions in parallel so as to paint a detailed portrait of the ‘legal mind’.
... Anonymized study data, analysis scripts, and stimuli (including translations) have been deposited in the Open Science Framework (https://osf.io/yw8ek/) (39). ...
A cross-cultural survey experiment revealed a widespread tendency to rely on a rule’s letter over its spirit when deciding which acts violate the rule. This tendency’s strength varied markedly across (k = 15) field sites, owing to cultural variation in the impact of moral appraisals on judgments of rule violation. Compared to laypeople, legal experts were more inclined to disregard their moral evaluations of the acts altogether, and consequently exhibited more pronounced textualist tendencies. Finally, we evaluated a plausible mechanism for the emergence of textualism: In a two-player coordination game, incentives to coordinate in the absence of communication reinforced participants’ adherence to rules’ literal meaning. Together, these studies (total N = 5,495) help clarify the origins and allure of textualism, especially in the law. Within heterogeneous communities in which members diverge in their moral appraisals involving a rule’s purpose, the rule’s literal meaning provides a clear focal point—an easily identifiable point of agreement enabling coordinated interpretation among citizens, lawmakers and judges.
... Taken in conjunction, Studies 4 to 6 reveal a broader pattern: Legal judgments integrate various morally relevant cues, including the agent's epistemic state and the outcomes of their behavior, and these effects arise already in people's intuitive determinations (i.e., under time pressure). The opportunity to reflect appears to strengthen the effect of literal meaning on rule application (see also Hannikainen et al., 2022), resulting in a shift toward textualist determinations over time. ...
Objectives: We sought to understand how basic competencies in moral reasoning influence the application of private, institutional, and legal rules. Hypotheses: We predicted that moral appraisals, implicating both outcome-based and mental state reasoning, would shape participants’ interpretation of rules and statutes—and asked whether these effects arise differentially under intuitive and reflective reasoning conditions. Method: In six vignette-based experiments (total N = 2,473; 293 university law students [67% women; age bracket mode: 18–22 years] and 2,180 online workers [60% women; mean age = 31.9 years]), participants considered a wide range of written rules and laws and determined whether a protagonist had violated the rule in question. We manipulated morally relevant aspects of each incident—including the valence of the rule’s purpose (Study 1) and of the outcomes that ensued (Studies 2 and 3), as well as the protagonist’s accompanying mental state (Studies 5 and 6). In two studies, we simultaneously varied whether participants decided under time pressure or following a forced delay (Studies 4 and 6). Results: Moral appraisals of the rule’s purpose, the agent’s extraneous blameworthiness, and the agent’s epistemic state impacted legal determinations and helped to explain participants’ departure from rules’ literal interpretation. Counter-literal verdicts were stronger under time pressure and were weakened by the opportunity to reflect. Conclusions: Under intuitive reasoning conditions, legal determinations draw on core competencies in moral cognition, such as outcome-based and mental state reasoning. In turn, cognitive reflection dampens these effects on statutory interpretation, allowing text to play a more influential role.
In a series of ten preregistered experiments (N = 2043), we investigate the effect of outcome valence on judgments of probability, negligence, and culpability – a phenomenon sometimes labelled moral (and legal) luck. We found that harmful outcomes, when contrasted with neutral outcomes, lead to an increased perceived probability of harm ex post, and consequently, to a greater attribution of negligence and culpability. Rather than simply postulating hindsight bias (as is common), we employ a variety of empirical means to demonstrate that the outcome-driven asymmetry across perceived probabilities constitutes a systematic cognitive distortion. We then explore three distinct strategies to alleviate the hindsight bias and its downstream effects on mens rea and culpability ascriptions. Not all strategies are successful, but some prove very promising. They should, we argue, be considered in criminal jurisprudence, where distortions due to the hindsight bias are likely considerable and deeply disconcerting.
Despite pervasive variation in the content of laws, legal theorists and anthropologists have argued that laws share certain abstract features and even speculated that law may be a human universal. In the present report, we evaluate this thesis through an experiment administered in 11 different countries. Are there cross-cultural principles of law? In a between-subjects design, participants (N = 3,054) were asked whether there could be laws that violate certain procedural principles (e.g., laws applied retrospectively or unintelligible laws), and also whether there are any such laws. Confirming our preregistered prediction, people reported that such laws cannot exist, but also (paradoxically) that there are such laws. These results document cross-culturally and cross-linguistically robust beliefs about the concept of law which defy people's grasp of how legal systems function in practice.
Prescriptive rules guide human behavior across various domains of community life, including law, morality, and etiquette. What, specifically, are rules in the eyes of their subjects, i.e., those who are expected to abide by them? Over the last sixty years, theorists in the philosophy of law have offered a useful framework with which to consider this question. Some, following H. L. A. Hart, argue that a rule’s text at least sometimes suffices to determine whether the rule itself covers a case. Others, in the spirit of Lon Fuller, believe that there is no way to understand a rule without invoking its purpose: the benevolent ends which it is meant to advance. In this paper we ask whether people associate rules with their textual formulation or their underlying purpose. We find that both text and purpose guide people’s reasoning about the scope of a rule. Overall, a rule’s text more strongly contributed to rule infraction decisions than did its purpose. The balance of these considerations, however, varied across experimental conditions: In conditions favoring a spontaneous judgment, rule interpretation was affected by moral purposes, whereas analytic conditions resulted in a greater adherence to textual interpretations. In sum, our findings suggest that the philosophical debate between textualism and purposivism partly reflects two broader approaches to normative reasoning that vary within and across individuals.
Heuristics are commonly viewed in behavioral economics as inferior strategies resulting from agents’ cognitive limitations. Uncertainty is generally reduced to a form of risk, quantifiable in some probabilistic format. We challenge both conceptualizations and connect heuristics and uncertainty in a functional way: When uncertainty does not lend itself to risk calculations, heuristics can fare better than complex, optimization-based strategies if they satisfy the criteria for being ecologically rational. This insight emerges from merging Knightian uncertainty with the study of fast-and-frugal heuristics. For many decision theorists, uncertainty is an undesirable characteristic of a situation, yet in the world of business it is considered a necessary condition for profit. In this article, we argue for complementing the study of decision making under risk using probability theory with a systematic study of decision making under uncertainty using formal models of heuristics. In doing so, we can better understand decision making in the real world and why and when simple heuristics are successful.
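To make the notion of a formal model of a heuristic concrete, here is a minimal Python sketch of one well-known fast-and-frugal heuristic, take-the-best; the cue names, values, and cue ordering below are invented for illustration and are not drawn from the article.

```python
# Take-the-best: compare two options on cues ordered by validity and decide
# on the first cue that discriminates; if none does, guess.
def take_the_best(option_a, option_b, cues_by_validity):
    """Return 'A', 'B', or 'guess' for which option scores higher on the criterion."""
    for cue in cues_by_validity:          # cues ordered from most to least valid
        a, b = option_a.get(cue), option_b.get(cue)
        if a != b:                        # first discriminating cue decides
            return "A" if a > b else "B"
    return "guess"                        # no cue discriminates

# Hypothetical example: which of two cities is larger, judged from binary cues?
cues = ["has_intl_airport", "is_capital", "has_university"]
city_a = {"has_intl_airport": 1, "is_capital": 0, "has_university": 1}
city_b = {"has_intl_airport": 1, "is_capital": 1, "has_university": 0}
print(take_the_best(city_a, city_b, cues))   # -> 'B' (decided by 'is_capital')
```

The point of such models is that the decision rule is fully specified, so its accuracy can be compared against optimization-based strategies in different environments.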
With respect to questions of fact, people use heuristics – mental short-cuts, or rules of thumb, that generally work well, but that also lead to systematic errors. People use moral heuristics too – moral short-cuts, or rules of thumb, that lead to mistaken and even absurd moral judgments. These judgments are highly relevant not only to morality, but to law and politics as well. Examples are given from a number of domains, including risk regulation, punishment, reproduction and sexuality, and the act/omission distinction. In all of these contexts, rapid, intuitive judgments make a great deal of sense, but sometimes produce moral mistakes that are replicated in law and policy. One implication is that moral assessments ought not to be made by appealing to intuitions about exotic cases and problems; those intuitions are particularly unlikely to be reliable. Another implication is that some deeply held moral judgments are unsound if they are products of moral heuristics. The idea of error-prone heuristics is especially controversial in the moral domain, where agreement on the correct answer may be hard to elicit; but in many contexts, heuristics are at work and they do real damage. Moral framing effects, including those in the context of obligations to future generations, are also discussed.
Objective: Over the last two decades, many states have adopted several of the 20 laws that aim to control youth access to and possession of alcohol and prevent underage drinking in the United States. However, many of these laws have not been evaluated since their adoption. The objective of this study was to determine which minimum legal drinking age 21 (MLDA-21) laws currently have an effect on underage drinking-and-driving fatal crashes. Method: We updated the effective dates of the 20 MLDA-21 laws examined in this study and used scores of each law's strengths and weaknesses. Our structural equation model included the 20 MLDA-21 laws, impaired driving laws, seat belt safety laws, economic strength, driving exposure, beer consumption, and fatal crash ratios of drinking-to-nondrinking drivers under age 21. Results: Nine MLDA-21 laws were associated with significant decreases in fatal crash ratios of underage drinking drivers: possession of alcohol (-7.7%), purchase of alcohol (-4.2%), use alcohol and lose your license (-7.9%), zero tolerance .02 blood alcohol concentration limit for underage drivers (-2.9%), age of bartender ≥21 (-4.1%), state responsible beverage service program (-3.8%), fake identification support provisions for retailers (-11.9%), dram shop liability (-2.5%), and social host civil liability (-1.7%). Two laws were associated with significant increases in the fatal crash ratios of underage drinking drivers: prohibition of furnishing alcohol to minors (+7.2%) and registration of beer kegs (+9.6%). Conclusions: The nine effective MLDA-21 laws are estimated to be currently saving approximately 1,135 lives annually, yet only five states have enacted all nine laws. If all states adopted these nine effective MLDA-21 laws, it is estimated that an additional 210 lives could be saved every year.
The letter of the law is its literal meaning. Here, the spirit of the law is its perceived intention. We tested the hypothesis that violating the spirit of the law accounts for culpability above and beyond breaking the mere letter. We find that one can incur culpability even when the letter of the law is not technically broken. We examine this effect across various legal contexts and discuss the implications for future research directions.
Moral condemnation of harmful behavior is influenced by both cognitive and affective processes. However, despite much recent research, the proximate source of affect remains unclear. One obvious contender is empathy; simulating the victim's pain could lead one to judge an action as wrong ("outcome aversion"). An alternative, less obvious source is one's own aversion to performing the action itself ("action aversion"). To dissociate these alternatives, we developed a scale that assessed individual aversions to (a) witnessing others experience painful outcomes (e.g., seeing someone fall down stairs); and (b) performing actions that are harmless yet aversive (e.g., stabbing a fellow actor with a fake stage knife). Across 4 experiments, we found that moral condemnation of both first-person and third-party harmful behavior in the context of moral dilemmas is better predicted by one's aversion to action properties than by an affective response to victim suffering. In a fifth experiment, we manipulated both action aversion and the degree of expected suffering across a number of actions and found that both factors make large, independent contributions to moral judgment. Together, these results suggest we may judge others' actions by imagining what it would feel like to perform the action rather than experience the consequences of the action. Accordingly, they provide a counterpoint to a dominant but largely untested assumption that empathy is the key affective response governing moral judgments of harm.
Humans are unique in the animal world in the extent to which their day-to-day behavior is governed by a complex set of rules and principles commonly called norms. Norms delimit the bounds of proper behavior in a host of domains, providing an invisible web of normative structure embracing virtually all aspects of social life. People also find many norms to be deeply meaningful. Norms give rise to powerful subjective feelings that, in the view of many, are an important part of what it is to be a human agent. Despite the vital role of norms in human lives and human behavior, and the central role they play in explanations in the social sciences, there has been very little systematic attention devoted to norms in cognitive science. Much existing research is partial and piecemeal, making it difficult to know how individual findings cohere into a comprehensive picture. Our goal in this essay is to offer an account of the psychological mechanisms and processes underlying norms that integrates what is known and can serve as a framework for future research.
The propensity score is the conditional probability of assignment to a particular treatment given a vector of observed covariates. Both large and small sample theory show that adjustment for the scalar propensity score is sufficient to remove bias due to all observed covariates. Applications include: (i) matched sampling on the univariate propensity score, which is a generalization of discriminant matching, (ii) multivariate adjustment by subclassification on the propensity score where the same subclasses are used to estimate treatment effects for all outcome variables and in all subpopulations, and (iii) visual representation of multivariate covariance adjustment by a two-dimensional plot.
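For readers unfamiliar with the second of these applications, the following Python sketch illustrates subclassification on an estimated propensity score using synthetic data; the data-generating process, variable names, and effect size are invented for this example and it is not the paper's own analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=(n, 3))                             # observed covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))     # treatment depends on x
y = 2.0 * treat + x @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)

# 1. Estimate the propensity score e(x) = P(treatment | covariates).
ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]

# 2. Subclassify units into quintiles of the score, compare treated and
#    control outcomes within each subclass, and average across subclasses.
strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
effects, sizes = [], []
for s in range(5):
    m = strata == s
    if m.any() and treat[m].min() != treat[m].max():    # need both groups present
        effects.append(y[m][treat[m] == 1].mean() - y[m][treat[m] == 0].mean())
        sizes.append(m.sum())

print("naive difference in means: ", y[treat == 1].mean() - y[treat == 0].mean())
print("subclassification estimate:", np.average(effects, weights=sizes))
# The subclassified estimate is typically close to the true effect of 2.0,
# whereas the naive difference absorbs the confounding through x[:, 0].
```

In this framework, a handful of subclasses on the score typically removes most of the bias attributable to the observed covariates.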
Recent law and economics scholarship has produced much theoretical and empirical work on how and why legal disputes are settled and litigated. One of the most significant developments in this literature, attributable to the work of William Baxter and the combined efforts of George Priest and Benjamin Klein, has been the formation of a theory about both the selection of disputes for trial and the rates of success that plaintiffs enjoy for those cases that are resolved at trial. The basic theory contains two components. The selection effect refers to the proposition that the selection of tried cases is not a random sample of the mass of underlying cases. Rather, those cases that tend to be clear for either the plaintiff or the defendant under the applicable legal rules settle relatively quickly, leaving only the difficult cases for trial.
Since these tried cases are not representative of the larger class of disputed cases, it is risky to draw any inference from the outcome of tried disputes to the soundness of the legal rules by which these disputes are decided. The sample of tried cases may contain many victories for the plaintiff and many for the defendant. One cannot infer from that fact alone, however, the fairness or desirability of the underlying legal rules. If the rules are heavily weighted for the plaintiff, then the closeness of the tried cases is consistent with there being many cases in which plaintiffs recover handsomely without litigation. Similarly, if the rules are weighted heavily for the defendant, then the total mass of cases brought will, by the time of trial, be reduced, as defendants will be able to settle many cases on favorable terms. The observation of hotly contested trials is therefore wholly consistent with the underlying rules that are skewed toward the plaintiff, toward the defendant, or toward neither side. The power of the selection effect is generally recognized in the academic literature. It has been the basis of complex litigation models and has been subjected to empirical testing and debate.
Closely akin to, but clearly distinguishable from, the selection effect is the so-called 50 percent hypothesis. This hypothesis is a more specific prediction than the selection effect. The 50 percent hypothesis posits that the set of tried cases culled from the mass of underlying disputes will result in 50 percent victories for plaintiffs and 50 percent victories for defendants. The 50 percent hypothesis can conveniently be regarded as the limiting case of the selection effect theory. It follows, therefore, that any empirical corroboration of it generates powerful support for the more general selection hypothesis. This article suggests the incompleteness of existing methods of statistically testing the 50 percent hypothesis and reformulates the criteria for accepting or rejecting the hypothesis.
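A stylized simulation can make the selection logic concrete. The Python sketch below is illustrative only (it is not the article's model, and the decision standard, forecast noise, and settlement margin are invented parameters); it shows why plaintiff win rates among tried cases can hover near 50 percent even when the underlying rule strongly favors one side.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial_win_rate(n_cases=100_000, standard=0.7, noise=0.1, margin=0.2):
    """Stylized selection-effect simulation: only close cases reach trial."""
    merit = rng.uniform(0, 1, n_cases)                 # true strength of each case
    p_est = merit + rng.normal(0, noise, n_cases)      # plaintiff's noisy forecast
    d_est = merit + rng.normal(0, noise, n_cases)      # defendant's noisy forecast

    # Clear cases settle; a case is tried only when both sides' forecasts sit
    # near the legal standard, so the outcome is genuinely uncertain.
    tried = (np.abs(p_est - standard) < margin) & (np.abs(d_est - standard) < margin)
    wins = merit > standard                            # plaintiff wins at trial

    print(f"standard={standard:.1f}  win rate, all cases: {wins.mean():.2f}  "
          f"tried cases only: {wins[tried].mean():.2f}")

# Even when the rule is tilted toward one side (standard far from 0.5),
# plaintiff victories among tried cases stay close to 50 percent.
for s in (0.3, 0.5, 0.7):
    trial_win_rate(standard=s)
```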
For decades now, experiments have revealed that we humans tend to evaluate the views or activities of our own group and its members more favorably than those of outsiders. To assess convergence between experimental and observational results, we explore whether US Supreme Court justices fall prey to in-group bias in freedom-of-expression cases. A two-level hierarchical model of all votes cast between the 1953 and 2014 terms confirms that they do. Although liberal justices are (overall) more supportive of free-speech claims than conservative justices, the votes of both liberal and conservative justices tend to reflect their preferences toward the speech’s ideological grouping and not solely an underlying taste for (or against) greater protection for expression. These results suggest the importance of new research programs aimed at evaluating how other cognitive biases identified in experimental work may influence judicial behavior in actual court decisions.
This Article reports the results of a study on whether political predispositions influence judicial decisionmaking. The study was designed to overcome the two principal limitations on existing empirical studies that purport to find such an influence: the use of nonexperimental methods to assess the decisions of actual judges; and the failure to use actual judges in ideologically-biased-reasoning experiments. The study involved a sample of sitting judges (n = 253), who, like members of a general public sample (n = 800), were culturally polarized on climate change.
Legal decisions and theories are frequently condemned as formalistic, yet little discussion has occurred regarding exactly what the term "formalism" means. In this Article, Professor Schauer examines divergent uses of the term to elucidate its descriptive content. Conceptions of formalism, he argues, involve the notion that rules constrict the choice of the decisionmaker. Our aversion to formalism stems from denial that the language of rules either can or should constrict choice in this way. Yet Professor Schauer argues that this aversion to formalism should be rethought: At times language both can and should restrict decisionmakers. Consequently, the term "formalistic" should not be used as a blanket condemnation of a decisionmaking process; instead the debate regarding decision according to rules should be confronted on its own terms.
Dual-system approaches to psychology explain the fundamental properties of human judgment, decision making, and behavior across diverse domains. Yet, the appropriate characterization of each system is a source of debate. For instance, a large body of research on moral psychology makes use of the contrast between "emotional" and "rational/cognitive" processes, yet even the chief proponents of this division recognize its shortcomings. Largely independently, research in the computational neurosciences has identified a broad division between two algorithms for learning and choice derived from formal models of reinforcement learning. One assigns value to actions intrinsically based on past experience, while another derives representations of value from an internally represented causal model of the world. This division between action- and outcome-based value representation provides an ideal framework for a dual-system theory in the moral domain.
Observations were made in 10 preschools of interactions in 2 domains of social events: social conventional and moral. On the basis of criteria defining each domain, observed events could be reliably classified as social conventional or moral. As another aspect of the study, an interview was administered to children from the preschool who had witnessed the same events as the observer. The children's view of the events as social conventional or moral was in agreement with our classifications of the events in 83% of the cases. It was hypothesized that the responses of both children and adults to social conventional events differ from their responses to moral events. Observed behaviors were rated on a standard checklist of response categories. Different types of responses were elicited by the 2 types of events. Almost all responses to social conventional transgressions were initiated by adults. Children and adults responded with equal frequency to moral transgressions. Adults responded to social conventional transgressions differently from the ways they reacted to moral transgressions.
Professor Hart defends the Positivist school of jurisprudence from many of the criticisms which have been leveled against its insistence on distinguishing the law that is from the law that ought to be. He first insists that the critics have confused this distinction with other Positivist theories about law which deserved criticism, and then proceeds to consider the merits of the distinction.
Rephrasing the question of "law and morals" in terms of "order and good order," Professor Fuller criticizes Professor H. L. A. Hart for ignoring the internal "morality of order" necessary to the creation of all law. He then rejects Professor Hart's theory of statutory interpretation on the ground that we seek the objectives of entire provisions rather than the meanings of individual words which are claimed to have "standard instances."
Economic analysis generally assumes that law solves cooperation problems because legal sanctions change payoffs. Where the problem is one of coordination, however, this article contends that law also influences behavior by changing expectations, independent of payoffs. When individuals need to coordinate, law works to make one equilibrium "focal" and thereby creates expectations that others will play the strategy associated with that equilibrium. Once the expectations exist, they are self-fulfilling; even if the payoffs remain the same, everyone prefers to play the focal point strategy. Private expression can also change expectations, but law often has a comparative advantage in the publicity accorded to, and uniqueness of, its message, as well as the resulting reputation of public officials. The focal effect is one way to explain how law influences behavior "expressively" by what it says, independent of the sanctions it imposes. The article initially demonstrates this result using a pure coordination game, but then broadens the analysis in two ways. First, the focal point exists even when individuals have conflicting interests, as long as they share a common interest in avoiding certain outcomes. Thus, focal points matter in "Hawk-Dove" games which plausibly model a substantial amount of real world conflict. In such situations, both adjudication and regulation have some expressive influence on behavior. Second, the focal effect exists in iterated situations where equilibria evolve over time. Legal focal points can influence behavior during disequilibrium and, in several ways, supplant an existing convention. These points are illustrated with examples of traffic regulation, a sanctionless anti-smoking law, and a law creating "imperfect" liability for landlords.
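The coordination-game logic can be illustrated with a minimal sketch; the payoff matrix and belief values below are invented for illustration and are not drawn from the article. When both readings of a rule are equilibria, a player's best response tracks expectations about what others will do, so anything that publicly shifts those expectations, such as a salient text, selects the equilibrium.

```python
import numpy as np

# Payoff to a player in a pure coordination game over rule interpretation:
# each player gains only if both settle on the same reading.
#                  other picks: literal  purposive
payoff = np.array([[1, 0],     # I pick literal
                   [0, 1]])    # I pick purposive

def best_response(p_literal):
    """Best response given the believed probability that the other player reads literally."""
    expected = payoff @ np.array([p_literal, 1 - p_literal])
    return "literal" if expected[0] >= expected[1] else "purposive"

# Both (literal, literal) and (purposive, purposive) are equilibria, so behavior
# tracks expectations. A salient public text everyone has read shifts beliefs
# toward the literal reading, which then becomes self-fulfilling.
for belief in (0.4, 0.6, 0.9):
    print(f"P(other reads literally) = {belief:.1f} -> {best_response(belief)}")
```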
Although published works rarely include causal estimates from more than a few model specifications, authors usually choose the presented estimates from numerous trial runs readers never see. Given the often large variation in estimates across choices of control variables, functional forms, and other modeling assumptions, how can researchers ensure that the few estimates presented are accurate or representative? How do readers know that publications are not merely demonstrations that it is possible to find a specification that fits the author's favorite hypothesis? And how do we evaluate or even define statistical properties like unbiasedness or mean squared error when no unique model or estimator even exists? Matching methods, which offer the promise of causal inference with fewer assumptions, constitute one possible way forward, but crucial results in this fast-growing methodological literature are often grossly misinterpreted. We explain how to avoid these misinterpretations and propose a unified approach that makes it possible for researchers to preprocess data with matching (such as with the easy-to-use software we offer) and then to apply the best parametric techniques they would have used anyway. This procedure makes parametric models produce more accurate and considerably less model-dependent causal inferences.
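A rough Python sketch of the preprocessing idea on synthetic data (the data-generating process and parameters are invented here, and this is not the authors' MatchIt software): prune the sample with nearest-neighbor matching on the covariate, then fit the same parametric model one would have run anyway.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 4000
x = rng.exponential(1.0, size=n)                          # a skewed confounder
treat = rng.binomial(1, 1 / (1 + np.exp(-2 * (x - 1))))   # treated units have high x
y = 1.0 * treat + x ** 2 + rng.normal(size=n)             # true effect = 1.0

# Match each treated unit to its nearest control on x (with replacement;
# repeatedly matched controls appear multiple times, acting like weights).
treated = np.where(treat == 1)[0]
controls = np.where(treat == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(x[controls, None])
matched = controls[nn.kneighbors(x[treated, None])[1].ravel()]
keep = np.concatenate([treated, matched])

def linear_estimate(idx):
    """Treatment coefficient from the analyst's (misspecified) model y ~ treat + x."""
    X = np.column_stack([treat[idx], x[idx]])
    return LinearRegression().fit(X, y[idx]).coef_[0]

print("full sample:   ", linear_estimate(np.arange(n)))
print("matched sample:", linear_estimate(keep))
# Because the matched sample is balanced on x, the unmodeled x**2 term no longer
# loads onto the treatment indicator, so the matched estimate is typically much
# closer to the true effect of 1.0 than the full-sample estimate.
```

The same logic extends to many covariates, where matching on an estimated propensity score or a multivariate distance plays the role of matching on x here.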
Recent research in moral psychology has attempted to characterize patterns of moral judgments of actions in terms of the causal and intentional properties of those actions. The present study directly compares the roles of consequence, causation, belief and desire in determining moral judgments. Judgments of the wrongness or permissibility of action were found to rely principally on the mental states of an agent, while judgments of blame and punishment are found to rely jointly on mental states and the causal connection of an agent to a harmful consequence. Also, selectively for judgments of punishment and blame, people who attempt but fail to cause harm are judged more leniently if the harm occurs by independent means than if the harm does not occur at all. An account of these phenomena is proposed that distinguishes two processes of moral judgment: one which begins with harmful consequences and seeks a causally responsible agent, and the other which begins with an action and analyzes the mental states responsible for that action.
A. S. Krishnakumar, Cracking the whole code rule. New York Univ. Law Rev. 96, 76-172 (2021).
Are there cross-cultural legal principles? Modal reasoning uncovers procedural constraints on law