The regulation of genetically modified products pursuant to statutes enacted decades before the advent of biotechnology has created a regulatory system that is passive rather than proactive about risks, has difficulty adapting to biotechnology advances, and is highly fractured and inefficient: transgenic plants and animals are governed by at least twelve different statutes and five different agencies or services. The deficiencies resulting from this piecemeal approach to regulation unnecessarily expose society and the environment to the adverse risks of biotechnology and introduce numerous inefficiencies into the regulatory system. These risks and inefficiencies include gaps in regulation, duplicative and inconsistent regulation, and unnecessary increases in the cost of, and delay in, the development and commercialization of new biotechnology products. These deficiencies also increase the risk of further unnecessary biotechnology scares, which may cause public overreaction against biotechnology products, preventing the maximization of social welfare. With science and society poised to move from first-generation biotechnology (focused on crops modified for agricultural benefit) to next-generation developments (including transgenic fish, insects, and livestock, and plants and animals that produce pharmaceuticals and industrial compounds), it is necessary to establish a comprehensive, efficient, and scientifically rigorous regulatory system. This Article details how to achieve such a result by fixing the deficiencies in, and risks created by, the current regulatory structure. Setting aside many details, the solutions can be summarized in two categories. First, the statutory and regulatory gaps that are identified must be closed with new legislation and regulation.
Second, regulation of genetically modified products must be shifted from a haphazard model based on statutes not intended to cover biotechnology to a system based upon agency expertise in handling particular types of risks.
This Article explores the U.S. "patent first, ask questions later" approach to determining what subject matter should receive patent protection. Under this approach, the U.S. Patent and Trademark Office (USPTO or the Agency) issues patents on "anything under the sun made by man," and to the extent a patent's subject matter is sufficiently controversial, Congress acts retrospectively in assessing whether patents should issue on such inventions. This practice has important ramifications for morally controversial biotechnology patents specifically, and for American society generally. For many years a judicially created "moral utility" doctrine served as a type of gatekeeper of patent subject matter eligibility. The doctrine allowed both the USPTO and courts to deny patents on morally controversial subject matter under the fiction that such inventions were not "useful." The gate, however, is currently untended. The demise of the moral utility doctrine, combined with expansive judicial interpretations of the scope of patent-eligible subject matter, has left virtually no basis on which the USPTO or courts can deny patent protection to morally controversial, but otherwise patentable, subject matter. This is so despite position statements by the Agency to the contrary. Biotechnology is an area in which many morally controversial inventions are generated. Congress has been in react-mode following the issuance of a stream of morally controversial biotech patents, including patents on transgenic animals, surgical methods, and methods of cloning humans. With no statutory limits on patent eligibility, and with myriad concerns complicating congressional action following a patent's issuance, it is not Congress, the representative of the people, determining patent eligibility. Instead, it is patent applicants, scientific inventors, who are deciding matters of high public policy through the contents of the applications they file with the USPTO.
This Article explores how the United States has come to be in this position, exposes latent problems with the "patent first" approach, and considers the benefits and disadvantages of the "ask questions first, patents later" approaches employed by some other countries. The Article concludes that granting patents on morally controversial biotech subject matter and then asking whether such inventions should be patentable is bad policy for the United States and its patent system, and posits workable, proactive ways for Congress to successfully guard the patent-eligibility gate.
Federal research regulations require that institutional review boards (IRBs) review, approve, and monitor clinical trials involving human subjects. Recent governmental reports conclude that IRBs lack sufficient resources, and critics charge that the IRB review system is in danger of imploding. Yet while IRBs receive much criticism, few studies have examined how IRBs functionally perform in relation to comparable oversight bodies. This Article explores how IRBs and corporate boards exhibit remarkably similar institutional strengths and limitations. Both share significant degrees of insularity, and both face similar information and time constraints in performing their monitoring duties. Additionally, IRBs and corporate boards are composed of a mix of inside and outside interests, a feature that complicates monitoring effectiveness. Also, individuals serving on IRBs and corporate boards face enormous conformity pressures and possible social sanction for aggressive oversight. While IRB members and corporate directors are expected to exert monitoring effort, they face limited liability for subpar performance, and few other incentives exist to persevere in monitoring. Nonetheless, the corporate board perspective identifies multiple non-monitoring functions, such as mediating, that IRBs remain uniquely positioned to perform. Attempts to strengthen the monitoring role of IRBs may undercut these important non-monitoring functions. Finally, this Article asks what IRB reformers could learn from corporate boards. A note of caution is sounded regarding proposals to increase the number of outside, community members serving on IRBs, as this reform, without more, will likely have limited impact on board function. Also, the corporate board perspective suggests that calls for IRBs to take on a more direct role in reviewing financial conflicts of interest are misguided, as IRBs remain institutionally ill-suited to do this type of review.
In 2007, France created the Regulatory Authority for Technical Measures (l’Autorité de Régulation des Mesures Techniques or ARMT), an independent regulatory agency charged with promoting the interoperability of digital media distributed with embedded “technical protection measures” (TPM), also known as “digital rights management” technologies (DRM). ARMT was established in part to rectify what French lawmakers perceived as an imbalance in the rights of copyright owners and end users created when the European Copyright Directive (EUCD) was transposed into French law as the “Loi sur le Droit d’Auteur et les Droits Voisins dans la Société de l’Information” (DADVSI). ARMT is both a traditional independent regulatory agency and a novel attempt to develop a new governance structure at the national level to address global information economy challenges. The fear that other national governments might follow suit seems to have helped to cool enthusiasm for TPM among some businesses. This Article notes parallels between the limitations imposed on ARMT and those imposed on the first modern independent regulatory agencies that emerged in the United States in the late nineteenth and early twentieth centuries. Using history as a guide, it is not surprising that the ARMT’s exercise of authority has been limited during its early years; it remains possible that ARMT may become a model for legislation in other countries. It took decades before the first American independent regulatory agencies exercised real authority, and their legitimacy was not established beyond question until Roosevelt’s “New Deal.” Even though information society institutions may evolve quickly, national governments are sure to require more time to develop effective, legitimate ways to ensure that global information and communication technology (ICT) standards conform to their national social policies.
Frequent compliance with the adjudicative decisions of international institutions, such as the International Court of Justice, is puzzling because these institutions do not have the power domestic courts possess to impose sanctions. This paper uses game theory to explain the power of international adjudication via a set of expressive theories, showing how law can be effective without sanctions. When two parties disagree about conventions that arise in recurrent situations involving coordination (such as a convention of deferring to the territorial claims of "first possessors"), the pronouncements of third-party legal decision-makers - adjudicators - can influence their behavior in two ways. First, adjudicative expression may construct "focal points" that clarify ambiguities in the convention. Second, adjudicative expression may provide "signals" that cause parties to update their beliefs about the facts that determine how the convention applies. Even without the power of sanctions or legitimacy, an adjudicator's focal points and signals influence the parties' behavior. After explaining the expressive power of adjudication, the paper applies the analysis to a range of third-party efforts to resolve international disputes, including the first-ever review of the entire docket of the International Court of Justice. We find strong empirical support for the theory that adjudication works by clarifying ambiguous conventions or facts via cheap talk or signaling. We claim that the theory has broad implications for understanding the power of adjudication generally.
This Article offers a detailed analysis of major Taft Court decisions involving prohibition, including Olmstead v. United States, Carroll v. United States, United States v. Lanza, Lambert v. Yellowley, and Tumey v. Ohio. Prohibition, and the Eighteenth Amendment by which it was constitutionally entrenched, was the result of a social movement that fused progressive beliefs in efficiency with conservative beliefs in individual responsibility and self-control. During the 1920s the Supreme Court was a strictly bone-dry institution that regularly sustained the administrative and law enforcement techniques deployed by the federal government in its losing effort to prevent the manufacture and sale of liquor throughout the continental United States. This is surprising, because the Taft Court was in other respects dominated by conservative Justices, who were temperamentally opposed to the expansion of the national administrative state, particularly in contexts in which the national government sought to displace local police power. Prohibition represented the greatest expansion of federal regulatory authority since Reconstruction. It caused a major crisis in the theory and practice of American federalism, as the national government, which lacked the courts or police necessary for implementing the Eighteenth Amendment, sought to conscript state judicial and law enforcement resources. Close inspection reveals that the Taft Court's support for prohibition came from an unlikely alliance between two liberal Justices - Holmes and Brandeis - and three conservative Justices - Taft, Van Devanter, and Sanford. Three conservative Justices - McReynolds, Sutherland, and Butler - remained adamantly opposed to prohibition. Holmes's and Brandeis's support of prohibition likely reflects pre-New Deal liberalism's conviction that courts ought to defer to democratic lawmaking. 
This conviction was sorely tested by the flagrant and persistent defiance of prohibition, as well as by the repressive criminal and administrative techniques used to secure prohibition's enforcement. Not only did progressives grow suspicious of federal regulatory efforts to enforce sumptuary legislation, but they began to question the legitimacy of positive law that lacked resonance with the customs and mores of the population. These trends in American liberalism are visible in Brandeis's famous dissent in Olmstead. They would vanish with the advent of the New Deal and not reappear until the 1960s, in cases like Griswold v. Connecticut, at a time when the American administrative state had become as effectively entrenched as it had been during prohibition in the 1920s. The opposition to prohibition of McReynolds, Sutherland, and Butler represents the traditional pre-New Deal judicial conservative position that positive law, particularly positive national law, was to be judicially disciplined whenever it departed from customary social values. The vigorous support of prohibition by otherwise conservative Justices like Taft, Van Devanter, and Sanford, by contrast, represents a new development in American judicial conservatism. These Justices fused a conservative belief in social control with an embrace of legal positivism. This fusion disappeared from judicial conservatism with the repeal of the Eighteenth Amendment, and it did not reappear until the 1970s and the philosophy of Justice Rehnquist, when judicial conservatism finally came to terms with the entrenchment of the American administrative state. The brief constitutionalization of prohibition, in other words, forced Justices on both the right and the left to stop debating whether there should be an American administrative state, and required them instead to reconstruct their judicial philosophy on the assumption that the administrative state was an unalterable reality. 
It provoked a brief efflorescence of judicial perspectives that would not come into full flower until late in the twentieth century. Prohibition also forced a rethinking of the appropriate limits of national power, as well as fundamental developments in the meaning of Fourth Amendment limitations on law enforcement.
Premarital agreements are agreements entered immediately before marriage, usually with the intention of controlling the disposition of property upon divorce. These agreements were once universally treated as void as contrary to public policy, but are currently enforceable in all 50 states. Approximately half the states impose significant tests of procedural fairness and/or substantive fairness before enforcing the agreements. The other half treat premarital agreements more or less like other contracts. The academic commentary on these sorts of agreements also seems split: between those who think these sorts of contracts should always be enforced and that these agreements raise no special concerns (the optimists); and those who believe that these sorts of contracts should never be enforced because they are almost always unfair and may work to increase the subordination of women (the pessimists). The article emphasizes, in response to the optimists, that premarital agreements frequently raise serious concerns about duress and about rationality/voluntariness. However, given the value of allowing parties some freedom to set the terms on which they enter marriage, it would be useful to enforce these agreements, despite their problems. In response to the pessimists, the article emphasizes the extent to which modern contract law analysis (including the UCC treatment of good faith, recent case law on duress and undue influence, commentary about different treatment for long-term/relational contracts, and Prof. Richard Craswell's analysis of unconscionability) can protect against most, but by no means all, of the unfairness that can result from enforcement.
Focusing on the election of Arnold Schwarzenegger as governor in California, this Article examines the curious reemergence of direct democracy. The Article begins by tracing the disfavored status of any direct democratic mechanism in the original constitutional design. In addition, the use of recalls further violates the Framers' commitment to fixed terms of office to insulate wise political leadership from immediate accountability to the potentially inflamed desires of political majorities. Despite this background, the Article argues that a significant part of the current impulse toward plebiscitary forms of governance stems from the increasing unaccountability of legislative branches of government to median voter preferences. As a result of gerrymandering and other distortive features of modern districting, there is a growing gulf between increasingly polarized and fractious legislative delegations and the centrist preferences of the bulk of the voting public. Schwarzenegger provides a striking example: a candidate able to muster half the votes in a crowded field, yet running on a platform that could not have prevailed in the primary of either major party. This Article was originally presented as the 2004 Cutler Lecture at William and Mary.
This Article empirically examines the question of whether courts of appeals judges cast ideological votes in the context of bankruptcy. The empirical study is unique insofar as it is the first to specifically examine the voting behavior of circuit court judges in bankruptcy cases. More importantly, it focuses on a particular type of dispute that arises in bankruptcy - debt-dischargeability determinations. The study implements this focused approach in order to reduce heterogeneity in results. We find, contrary to our hypotheses, no evidence that circuit court judges engage in ideological voting in bankruptcy cases. We do find, however, that non-ideological factors - including the race of the judge and the disposition of the case by lower courts - substantially influence the voting pattern of the judges in our study. The Article makes three broad contributions. First, it indicates that bankruptcy voting is comparatively non-ideological, at least at the level of the courts of appeals. Second, by identifying the influence of certain non-ideological factors on voting behavior, the Article suggests avenues for profitable future research. And third, the Article makes a methodological contribution through its fine-grained approach, which demonstrates the importance of focusing on particular legal issues in order to reduce heterogeneity in, and bolster the reliability of, findings from empirical legal studies.
This Article critically addresses the implications of the U.S. Supreme Court's recent decision in Christensen v. Harris County, 120 S.Ct. 1655 (2000), for standards of judicial review of agency interpretations of law. Christensen is a notable case in the administrative law area because it purports to clarify application of the deference doctrine first articulated in Skidmore v. Swift & Co., 323 U.S. 134 (1944). By reviving this doctrine, the case narrows application of the predominant approach to deference articulated in Chevron, U.S.A., Inc. v. Natural Resources Defense Council, Inc., 467 U.S. 837 (1984), thus reducing the level of deference in many appeals involving administrative agency interpretations of law. This Article addresses the deference debate in this context, criticizing Christensen, especially Justice Thomas' majority opinion. The Article argues that the majority did not properly apply Skidmore, and that the Court's decision invites ad hocery by lower courts in their review of agency legal interpretations. It concludes that conceptualizing Skidmore within the architecture of Chevron's step two - not as an alternative to the application of Chevron - will best promote goals of accountability, uniformity, and flexibility.
Multiple claims are a fixture of employment discrimination litigation today. It is common, if not ubiquitous, for opinions to begin with a version of the following litany: plaintiff brings this action under Title VII and the ADEA for race, age, and gender discrimination. EEOC statistics show the exponential growth of multiple claims, in part because the agency's intake procedures lead claimants to describe their multiple identities at a time when they have little basis upon which to parse a specific category of bias. But increased diversity in workplace demographics suggests that frequently, disparate treatment in fact may be rooted in intersectional or complex bias: while stereotypes for women have somewhat dissipated, those for older African-American women still hold sway. Complex bias provides a counter-narrative to the currently in-vogue characterization of workplace discrimination as subtle or unconscious. Despite the common-sense notion that the more different a worker is, the more likely she is to encounter bias, empirical evidence shows that multiple claims - which may account for more than 50% of federal court discrimination actions - have even less chance of success than single claims. A sample of summary judgment decisions on multiple claims reveals that employers prevail at a rate of 96%, as compared to 73% for employment discrimination claims generally. Multiple claims suffer from the failure of courts and intersectional legal scholars to confront the difficulties inherent in proving discrimination using narrowly circumscribed pretext analysis. Applying sex-plus concepts does not address the underlying paradox inherent in the proof of these cases: the more complex the claimant's identity, the wider the evidentiary net must be cast to find relevant comparative, statistical, and anecdotal evidence.
Overcoming the courts' reluctance to follow this direction requires the development and introduction of social science research that delineates the nuanced stereotypes faced by complex claimants.
Recent work at the intersection of law and behavioral biology has suggested numerous contexts in which legal thinking could benefit by integrating knowledge from behavioral biology. In one of those contexts, behavioral biology may help to provide a theoretical foundation for, and potentially increased predictive power concerning, various psychological traits relevant to law. This Article describes an experiment that explores that context. The paradoxical psychological bias known as the endowment effect puzzles economists, skews market behavior, impedes efficient exchange of goods and rights, and thereby poses important problems for law. Although the effect is known to vary widely, there are at present no satisfying explanations for why it manifests when and how it does. Drawing on evolutionary biology, this Article provides a new theory of the endowment effect. Briefly, we hypothesize that the endowment effect is an evolved propensity of humans and, further, that the degree to which an item is evolutionarily relevant will affect the strength of the endowment effect. The theory generates a novel combination of three predictions: (1) the effect is likely to be observable in many other species, including close primate relatives; (2) the prevalence of the effect in other species is likely to vary across items; and (3) the prevalence of the endowment effect will increase or decrease, respectively, with the increasing or decreasing evolutionary salience of the item in question. The authors tested these predictions in a chimpanzee (Pan troglodytes) experiment, recently published in Current Biology. The data, further explored here, are consistent with each of the three predictions. Consequently, this theory may explain why the endowment effect exists in humans and other species. It may also help both to predict and to explain some of the variability in the effect when it does manifest.
And, more broadly, the results of the experiment suggest that combining life science and social science perspectives could lead to a more coherent framework for understanding the wider variety of other cognitive heuristics and biases relevant to law.
Wendy Gordon has noted that most of IP law is concerned with internalizing positive externalities. In two recent articles - Spillovers (with Mark Lemley) and Evaluating the Demsetzian Trend in Copyright Law - I challenge the conventional economic theory of intellectual property, and specifically the idea that society ought to use intellectual property systems to internalize externalities when feasible. The nature of the challenge - or the spillovers theory - can be viewed in two ways. I would frame the challenge as an internal one, based on and consistent with welfare economics. In his reply to the latter article, economist Harold Demsetz seems to accept this view while critiquing aspects of the analysis. Others, such as economist Anne Barron, have critiqued the articles, suggesting that the spillovers theory is not consistent with welfare economics, necessarily relies on some other non-economic social theory yet to be specified, and thus is truly an external challenge to the conventional economic theories of IP. What is interesting about these responses is how they frame a boundary dispute between economic and other social theories of intellectual property. That such a boundary exists is well understood. What seems worth exploring, for purposes of this essay, is how we arrive at and frame the contours of the boundary through a discussion of spillovers. Claims about what is on one side or the other of the boundary may turn on assumptions and beliefs that might not hold up on close inspection. In this short essay, I reengage this debate and the critiques I've mentioned, and explore the boundary between economic and other social theories of intellectual property. I begin with a brief discussion of the conventional economic theories of intellectual property. I then discuss the spillovers theory and various critiques.
Coase’s theory of the firm has become a familiar tool for analyzing the structure and organization of businesses. Such analyses have increasingly focused on property-based theories of the firm, including intellectual property. In previous work we have discussed the application of this model to patents, copyrights, and trade secrets. Here we take up the theory of the firm with regard to trademarks, which act as signals of firm reputation and so have applications and effects that differ substantially from those of other forms of intellectual property. Using the framework from our previous analyses, we examine the propensity of trademarks to lower transaction costs between firms, as well as within firms, suggesting that trademark doctrines will have significant effects on the size and structure of the firm.
Electrocution has been used in the great majority of executions in this century. Today, it is second only to lethal injection as the preferred method of execution. At the same time, however, electrocution has never been scrutinized under modern Eighth Amendment standards. This circumstance persists despite substantial evidence that death by electrocution may inflict unnecessary pain, physical violence, and mutilation, rather than the mere extinguishment of life. This Article provides the Eighth Amendment analysis of electrocution that the courts thus far have not undertaken. The analysis has two parts. The first inquires whether, according to available scientific evidence, electrocution amounts to cruel and unusual punishment even if it is administered as planned. The second inquires whether, in light of the frequency with which electrocutions are botched, continuing the practice amounts to cruel and unusual punishment even if a properly administered electrocution would not. Part I of this Article examines the philosophical, financial, and political forces preceding In re Kemmler, in which the United States Supreme Court bypassed determining the constitutional viability of electrocution by holding that the Eighth Amendment did not apply to the states. Part II analyzes the credibility and consequences of Kemmler, as well as the reasons for the defendant's botched execution. Part III discusses the Supreme Court's evolving execution jurisprudence and Kemmler's precedential force on 226 cases over the century. Part III also notes that courts have relied on Kemmler as constitutional support for all methods of execution as well as general Eighth Amendment propositions. Part IV evaluates the constitutionality of electrocution, providing the most thorough examination available of recent scientific and eyewitness evidence, as well as the means by which electric chairs are made and applied. Part IV ends with an account of the rise and fall of Fred A.
Leuchter, also known as Dr. Death, formerly the primary manufacturer of execution equipment in this country. Part V describes eleven major botched electrocutions that have occurred since the death penalty was reinstated. Part VI suggests that electrocution does not appear to be a more humane execution method than hanging or shooting, the methods it was created to displace, and questions whether Kemmler warrants any further credibility. This Article concludes that claims that electrocution, if properly administered, provides an instantaneous and therefore painless death are contradicted by substantial evidence demonstrating that it may inflict unnecessary pain, physical violence, and mutilation. Moreover, even if a properly administered electrocution should not be considered unconstitutional, the practice amounts to cruel and unusual punishment because of the frequency with which electrocutions are, and likely will continue to be, botched. That courts have continued to turn a willfully blind eye toward states' use of electrocution, despite a century of evidence of its cruelty, negligent application, and insupportable case law, constitutes a great judicial and legislative scandal.
This article discusses hostile regulation of Chinese laundries in the American West from the 1860s to the early twentieth century. Anti-Chinese laundry laws generally took one of four forms: licensing legislation, maximum hours laws, zoning ordinances, and taxation. These laws were almost always facially neutral. The laundrymen challenged dozens of laundry ordinances in court. State courts faced with legal challenges by Chinese laundrymen to laundry regulations generally upheld the laws on police power grounds, but federal courts usually invalidated them. Some of the latter opinions preceded the infamous Lochner case by decades, but anticipated Lochner's reasoning and rhetoric. Traditional legal scholarship has criticized Lochnerian jurisprudence on three grounds: (1) judicial protection of economic liberties during the Lochner era was based on the reactionary political views of the judges involved; (2) courts invalidated progressive legislation meant to rein in corporate power and ameliorate the plight of the poor and vulnerable; and (3) Lochnerism helped the wealthy and powerful at the expense of the rest of society, especially the poor and members of minority groups. The history of the anti-Chinese laundry laws contradicts the received wisdom. First, pro-Chinese laundry decisions fail to reveal economic class bias, but do show the courts' commitment to natural rights/free labor theory and opposition to "class legislation." Second, the history discussed in this paper provides evidence that much regulatory legislation was neither wise nor humane, but anti-competitive and discriminatory. And third, Lochnerism protected the vulnerable and disenfranchised Chinese from hostile regulations.
This Article draws on cognitive psychology to develop a new explanation for prosecutorial misconduct. Traditionally, commentators have clothed the study of prosecutorial decision making in the rhetoric of fault. They have attributed overcharging, undisclosed exculpatory evidence, and convictions of the innocent to bad prosecutorial intentions and widespread prosecutorial wrongdoing. This fault-based lens colors both the description of the problem and the recommended solutions. In the language of fault, the problem is a culture that values obtaining and maintaining convictions over justice. The solution is to change prosecutorial values through, for example, more stringent ethical rules and increased disciplinary proceedings and sanctions against prosecutors. In this Article, I attempt instead to explain prosecutorial decision making from a cognitive perspective. I argue that even virtuous prosecutors can make normatively inappropriate decisions that result, not from flawed values, but from limits in human cognition. Prosecutors make what appear to be irrational decisions because all human decision makers share a common set of information-processing tendencies that depart from perfect rationality. In comparison to a fault-based approach, a cognitive description of the problem complicates the road for corrective action. If prosecutors fail to achieve justice not because they are bad, but because they are human, what hope is there for change? In three Parts, this Article attempts to explain how cognitive bias can affect the exercise of prosecutorial discretion and to suggest some initial reforms to improve the quality of prosecutorial decision making. Part I summarizes four related cognitive phenomena: confirmation bias, selective information processing, belief perseverance, and the avoidance of cognitive dissonance. Part II explores how these cognitive biases might adversely affect the exercise of prosecutorial discretion. 
Part III proposes a series of reforms that might improve the quality of prosecutorial decision making, despite limits on rationality.
There are two important questions in post-conflict constitution making, and at present neither of them has a definitive or uniformly accepted answer. The first relates to the best configuration of institutions to adopt in order to ameliorate the problem of intergroup conflict. The second concerns the process most apt to produce the best configuration of institutions, whatever it might be. The first question is unanswered because there is a dispute among scholars and practitioners between two opposing views of appropriate institutions to mitigate conflict. Constitutional processes have not generally been geared to yield coherent exemplars of either configuration in a sufficient number of conflict-prone countries to provide a convincing demonstration of the superiority of one approach or the other. The second question is unanswered because in many cases constitutional processes are chosen in a haphazard fashion, without regard to the aptness of the process for the problems to be addressed. Meanwhile, advocates have been arguing for a single, highly structured, uniform process that may be apt for some classes of problems but is not necessarily appropriate for the full range of problems constitution makers confront in coping with divided societies. Hence the questions of what and how are both subject to debate. This Article takes up both questions. It surveys the main contending prescriptions for constitutional designs to cope with serious ethnic conflict, and it enumerates some of the main objections to each. It then reviews some of the available evidence on the efficacy of the contending prescriptions before turning to the question of adoptability. The Article notes that there are many obstacles to the adoption of a coherent set of political institutions to mitigate conflict, which derive mainly from processes of constitution making. 
For this reason, the Article evaluates some of the main suggestions in the recent literature on constitutional process and thereafter devotes considerable attention to the difficult question of designing a process for constitution making that is geared to the specific problems faced by constitution designers.
Establishing the rule of law is increasingly seen as the panacea for all the problems that afflict many non-western countries, particularly in post-conflict settings. Development experts prescribe it as the surest short cut to market-led growth; human rights groups advocate the rule of law as the best defense against human rights abuses; and in the area of peace and security, the rule of law is seen as the surest guarantee against the (re)emergence of conflicts and the basis for rebuilding post-conflict societies. Therefore, in a very direct sense, the rule of law has come to be seen as the common element that development experts, security analysts and human rights activists agree upon and as the mechanism that links these disparate areas. In this article, I argue that this new-found fascination with the rule of law is misplaced. Underlying this 'linkage' idea is, I would suggest, a desire to escape from politics, by imagining the rule of law as technical, legal and apolitical. In other words, there is a tendency to think that failures of development, threats to security and human rights violations could all be avoided or managed by a resort to law. I argue that there is in fact a need to retain politics at the center of the discussions of development, human rights and security. In addition, I also argue that the invocation of rule of law hides many contradictions between the different policy agendas themselves (such as between development and human rights or between security and human rights) that cannot be fully 'resolved' by invoking the 'rule of law' as a mantra. It is far more important to inquire into the real consequences of these agendas on ordinary people. Focusing attention on the rule of law as a broad if not lofty concept diverts attention from the coherence, effectiveness or legitimacy of specific policies that are pursued to ensure security, promote development or protect human rights. 
The rule of law agenda threatens to obfuscate the real tradeoffs that need to be made in order to achieve these worthy goals.
There is an ongoing debate in contemporary jurisprudence over whether law, properly conceived, is capable of incorporating morality. And these debates have their important practical analogues, especially in American constitutional law. For this is where lawyers and scholars argue about whether, for example, the guarantees of equal protection, freedom of speech, and the free exercise of religion, as well as the prohibitions on cruel and unusual punishments and unreasonable searches and seizures, require courts and other governmental decisionmakers to adhere to the correct moral principles regarding equality, freedom of speech, freedom of religion, punishment, and (locational) privacy. That these and other constitutional clauses appear to speak in moral language is relatively uncontroversial, but far more controversial is what it means for authoritative law to speak in moral language, and how, if at all, such language connects law with what it is simply and pre-legally morally right (and wrong) to do. These debates about the status of morality in legal argument are important, but our goal here is not to engage them frontally. Rather, we wish to illuminate a particular aspect of these debates. And that aspect is the logic of the incorporation by law of morality, and the way in which, if at all, law can retain its lawness and retain its ability to perform law's essential functions while still being open to the full universe of moral considerations. In a word, we do not believe that this is possible, and thus we believe, and shall argue here, that even when law incorporates morality it can only serve law's primary and essential functions if it has a considerable degree of resistance to the pressure of at least some morally correct moral claims. In other words, we strive here to make the moral argument for law's ignoring of at least some moral arguments in legal decision-making.
In the past decade, a new frontier of constitutional discourse has begun to emerge, adding a fresh perspective to state constitutional law. Instead of treating states as jurisdictional islands in a sea under reign of the federal government, this new approach sees states as co-equals among themselves and with the federal government in a collective enterprise of democratic self-governance. This Symposium, organized around the theme of Dual Enforcement of Constitutional Norms, provides the occasion for leading scholars on state constitutional law to take a fresh look at their subject by adopting a vantage point outside of the individualized jurisdictional context. Instead, the Symposium invited participants to consider directly whether state and federal constitutional law are separate and distinct systems of law, each with its own doctrines, traditions, and dominant norms, or whether state and federal constitutional law may profitably be understood as complementary features of a shared project of elaborating and enforcing shared constitutional norms. The Articles in this issue lie along what we hope will prove to be a new frontier that moves courts and scholars closer to a sustainable interpretive theory of state constitutions, shedding important light on the role of state courts, while also addressing the federal judicial role in a system of dual enforcement.
Frequently, state-wide executive agencies and localities attempt to implement federally-inspired programs. Two predominant examples are cooperative federalism programs and incorporation of federal standards in state-specific law. Federally-inspired programs can bump into state constitutional restrictions on the allocation of powers, especially in states whose constitutional systems embrace stronger prohibitions on legislative delegation than the weak restrictions at the federal level, where national goals and standards are made. This Article addresses this tension between dual federal/state normative accounts of the constitutional allocation of powers in state implementation of federally-inspired programs. To the extent the predominant ways of resolving the tension come from federal courts, state constitutionalism is challenged to produce its own account of its relevance in an era of federal programs. After surveying and critiquing the interpretative practices of state courts in dealing with these conflicting constitutional norms, the Article presents an institutional design account of state allocation of powers which might better explain why states routinely suspend constitutional restrictions on delegation in the context of state implementation of federally-inspired programs. The Article questions whether constitutional restrictions on legislative delegation have any normative basis in the context of state implementation of federally-inspired programs, but argues that it is important for state courts to answer this question as a matter of state constitutional interpretation - not by ceding turf to federal courts under the Supremacy Clause or other federally-imposed judicial interpretations.
Much empirical analysis has documented racial disparities at the beginning and end stages of a criminal case. However, our understanding about the perpetuation of — and even corrections for — differential outcomes as the process unfolds remains less than complete. This Article provides a comprehensive examination of criminal dispositions using all DWI cases in North Carolina during the period 2001-2011, focusing on several major decision points in the process. Starting with pretrial hearings and culminating in sentencing results, we track differences in outcomes by race and gender. Before sentencing, significant gaps emerge in the severity of pretrial release conditions that disadvantage black and Hispanic defendants. Yet when prosecutors decide whether to pursue charges, we observe an initial correction mechanism: Hispanic men are almost two-thirds more likely to have those charges dropped relative to white men. Although few cases survive after the plea bargaining stage, a second correction mechanism arises: Hispanic men are substantially less likely to receive harsher sentences and are sent to jail for significantly less time relative to white men. The first mechanism is based in part on prosecutors’ reviewing the strength of the evidence but much more on declining to invest scarce resources in the pursuit of defendants who fail to appear for trial. The second mechanism seems to follow more directly from judicial discretion to reverse decisions made by law enforcement or prosecutors. We discuss possible explanations for these novel empirical results and review methods for more precisely identifying causal mechanisms in criminal justice.
In recent years, the United States Supreme Court has decided fewer cases than at any other time in its recent history. Scholars and practitioners alike have criticized the drop in the Court’s plenary docket. Some even believe that the Court has reneged on its duty to clarify and unify the law. A host of studies examine potential reasons for the Court’s change in docket size, but few rely on an empirical analysis of this change and no study examines the correlation between ideological homogeneity and docket size. In the first comprehensive study of its kind, the authors analyze ideological and contextual factors to determine the conditions that are most likely to influence the size of the plenary docket. Drawing on empirical data from every Supreme Court Term between 1940 and 2008, the authors find that both ideological and contextual factors have led to the Court’s declining plenary docket. First, a Court composed of Justices who largely share the same world view is likely to hear 42 more cases per Term than an ideologically-fractured Court. Second, internal and external mechanisms, such as membership change and mandatory jurisdiction, are also important. Congress’s decision to remove much of the Court’s mandatory appellate jurisdiction is associated with the Court deciding roughly 54 fewer cases per Term. In short, the data suggest that ideology and context have led to a Supreme Court that decides fewer cases. The Court’s docket is not likely to increase significantly in the near future. Unless Congress expands the Court’s mandatory appellate jurisdiction or the President makes a series of unconstrained nominations to the Court that increase its ideological homogeneity, the size of the Court’s docket will remain small compared to the past. 
As other studies have shown, because the Court’s case selection process is an important aspect of the development of the law, this Article provides the basis for further normative and empirical evaluations of the Court’s plenary docket.
This Article argues that commercial pressures are determining the news media's contemporary treatment of crime and violence, and that the resulting coverage has played a major role in reshaping public opinion, and ultimately, criminal justice policy. The news media are not mirrors, simply reflecting events in society. Rather, media content is shaped by economic and marketing considerations that frequently override traditional journalistic criteria for newsworthiness. This Article explores local and national television's treatment of crime, where the extent and style of news stories about crime are being adjusted to meet perceived viewer demand and advertising strategies, which frequently emphasize particular demographic groups with a taste for violence. Newspapers also reflect a market-driven reshaping of style and content, resulting in a continuing emphasis on crime stories as a cost-effective means to grab readers' attention. This has all occurred despite more than a decade of sharply falling crime rates. The Article also explores the accumulating social science evidence that the market-driven treatment of crime in the news media has the potential to skew American public opinion, increasing the support for various punitive policies such as mandatory minimums, longer sentences, and treating juveniles as adults. Through agenda setting and priming, media emphasis increases public concern about crime and makes it a more important criterion in assessing political leaders. Then, once the issue has been highlighted, the media's emphasis increases support for punitive policies, though the mechanisms through which this occurs are less well understood. This Article explores the evidence for the mechanisms of framing, increasing fear of crime, and instilling and reinforcing racial stereotypes and linking race to crime. 
Although other factors, including distinctive features of American culture and the American political system, also play a role, this Article argues that the news media are having a significant and little-understood role in increasing support for punitive criminal justice policies. Because the news media are not the only influence on public opinion, this Article also considers how the news media interact with other factors that shape public opinion regarding the criminal justice system.
In this article, we apply economic analysis in an effort to derive the optimal damages rules for use in patent, trade secret, copyright, and trademark disputes. We proceed on the basis of two key assumptions: first, that in order to preserve the intellectual property owner's incentives to create, publish, or maintain quality control, the owner should never be rendered worse off as a result of an infringement; and second, that in order to preserve the property-like character of intellectual property rights, the infringer should never be rendered better off as a result of the infringement. On the basis of these assumptions, we conclude that the general damages rule for use in intellectual property disputes should be that the prevailing plaintiff recovers the greater of either her actual damages or the defendant's profits attributable to the infringement, with the possibility of a damages enhancement as a means of deterring infringements that are difficult to detect. We then discuss three ways in which the rules that actually govern in intellectual property disputes depart from this model--the absence of a restitutionary remedy in patent law; the use of "statutory damages" in copyright law; and the limitation on the recovery of restitutionary damages in trademark law--and consider whether these departures from the model can be viewed as rational adaptations to certain specific features of these bodies of law.
The last two decades have been a Dickensian era, showcasing the best and worst of human rights. The modern human rights revolution has helped catalyze new religious awakenings, and religious rights have therefore been substantially expanded. On the other hand, the revolution has catalyzed new conflict. A theological and legal war for souls has broken out between indigenous and foreign religious groups. These events have exposed the limitations of the human rights paradigm standing alone. Rights norms need a rights culture to be effective. Religion is indispensable and ineradicable, and religious narratives ground human rights discussions. However, religious narratives also need human rights to protect and challenge them. Religion must play an active role in the modern human rights revolution. This is not an obvious claim, as most religious texts do not speak of rights and liberties. Human rights evolved in the 1940s, when Christianity and Enlightenment ideals seemed at a low. Human rights grew rapidly through international covenants, creating a new civic faith, and religion seemed to be losing. However, such an understanding distorts the human rights discussion. Religion is the root of many human rights. Without religion, human rights become infinitely expandable, but also become captive to Western rituals and ignore the Eastern holistic approach. Furthermore, the state is given an exaggerated role as guarantor of human rights. Thus, the need exists to transform religion from a midwife of human rights into a mother of human rights. Human rights must have a more prominent place in the theological discourse of modern religion. Theology must be a patron of human rights to promote discourse. Many religious traditions historically began this process. The Catholic Church inspired the first great human rights movement and based its canon law on individual and corporate rights. 
The Protestant Reformation became the second great human rights movement by encouraging the freedom of the Christian and promoting the role of the individual within religion. The Orthodox tradition based rights on the integrity of natural law and human community. However, all of these traditions have grown silent on the issue of human rights, due to intolerance, apathy, or hardship. As religion becomes a larger part of human rights, limits need to be set on the religious rights regime. A broad definition of religion is needed to include legitimate claims without making every claim religious. The issues of conversion and proselytism also need to be settled, as the community’s right to be left alone can conflict with the liberty for individuals to choose their faith. Finally, the question of what role a state should play in religion, separate or cooperative, must be considered and balanced in all instances to promote the religious rights movement.
The purpose of this article is to examine and discuss factors within the workplace that may affect the ability of individuals with disabilities to access and retain employment. The analysis is based on findings from a Cornell University study of human resource professionals in both the private and federal sectors (Bruyère, 2000b). Part I provides an overview of the study, selected key findings about remaining barriers, and implications for needed future workplace interventions based on the survey responses. Part II reviews selected literature addressing the workplace issues identified in the study. Part III examines some of the concepts and possible solutions regarding workplace discrimination and responses to the accommodation needs of applicants and workers with disabilities. In the conclusion, we discuss where further research is needed to address remaining employment inequities for people with disabilities.
In June of 2001, Andrea Yates drowned all five of her children in a bathtub in her suburban Texas home. Andrea Yates has since become the modern-day poster child for maternal killings, which are commonly classified as either infanticide (the killing of an infant) or filicide (the killing of a child over the age of one). In the wake of Yates's case, the use of postpartum psychosis as a legal defense in cases of maternal infanticide and filicide has received considerable attention. Postpartum psychosis refers to a rare and serious mental disorder thought to occur after childbirth in some women. Despite the rare nature of postpartum psychosis, recent discussion tends erroneously to conflate all maternal killings with the disorder. Many scholars argue that the postpartum psychosis defense, along with other postpartum mental disorder defenses, should apply even more expansively to protect violent mothers from undue punishment. Some argue for changes in current laws, such as the development of a gender-specific insanity standard that caters to the intricacies of postpartum psychosis. Others support the enactment of an American Infanticide Act, which would automatically mitigate sanctions for mothers who kill. This Note argues that recent proposals are both unnecessary and misplaced, as they reflect outdated notions about female violence. Part I of this Note will explore the extent to which cultural values concerning femininity have influenced the societal response to infanticide and filicide. This section will provide an overview of feminist legal theory and its relation to cases involving mothers who kill. Part I will also address the role that traditional notions of femininity played in Yates's trials. Lastly, this section will describe the ways in which the American legal system has already incorporated popular misconceptions about female violence into its jurisprudence. Part II will outline the reasons why the states should avoid adopting an Infanticide Act. 
By critiquing existing Infanticide Acts in both England and Canada, this section will demonstrate that such statutes are not only premised upon the faulty presumption that all maternal killings result from the hormonal side effects of childbirth, but also reflect the misplaced belief that women should be punished leniently for violent crimes. Part III will argue that American jurisdictions should not develop gender-specific insanity standards for women suffering from postpartum psychosis because: (a) current gender-neutral insanity standards have proven effective in accommodating women who suffer from postpartum psychosis; (b) the use of a gender-specific standard promotes dangerous leniency toward female defendants; and (c) a gender-specific standard would embrace and perpetuate false ideas about women and violence.
Scholars praise the whistleblower protections of the Sarbanes-Oxley Act of 2002 as one of the most protective anti-retaliation provisions in the world. Yet, during its first three years, only 3.6% of Sarbanes-Oxley whistleblowers won relief through the initial administrative process that adjudicates such claims, and only 6.5% of whistleblowers won appeals through the process. This Article reports the results of an empirical study of all Department of Labor Sarbanes-Oxley determinations during this time, consisting of over 700 separate decisions from administrative investigations and hearings. The results of this detailed analysis demonstrate that administrative decision-makers strictly construed, and in some cases misapplied, Sarbanes-Oxley's substantive protections to the significant disadvantage of employees. These data-based findings assist in identifying the provisions and procedures of the Act that do not work as Congress intended as well as suggest potential remedies for these statutory and administrative deficiencies.
Executive term limits are pre-commitments through which the polity restricts its ability to retain a popular executive down the road. In recent years, many presidents around the world have chosen to remain in office even after their initial maximum term in office has expired. They have largely done so by amending the constitution, sometimes by replacing it entirely. The practice of revising higher law for the sake of a particular incumbent raises intriguing issues that touch ultimately on the normative justification for term limits in the first place. This article reviews the normative debate over term limits and identifies the key claims of proponents and opponents. It introduces the idea of characterizing term limits as a variety of default rule to be overcome if sufficient political support is apparent. It then turns to the historical evidence in order to assess the probability of attempts (both successful and unsuccessful) to evade term limits. It finds that, notwithstanding some high profile cases, term limits are observed with remarkable frequency. The final section considers alternative institutional designs that might accomplish some of the goals of term limits, but finds that none is likely to provide a perfect substitute. Term limits have the advantage of clarity, making them relatively easy constitutional rules to enforce, and they should be considered an effective part of the arsenal of democratic institutions.
Owen Roberts was accused of a variety of things in 1937, but “fidelity” was not among them. Justice Harlan Fiske Stone and Professor Felix Frankfurter were among many who accused Roberts of performing, as Frankfurter put it, a jurisprudential “somersault” “incapable of being attributed to a single factor relevant to the professed judicial process.” To Frankfurter, it was “all painful beyond words,” and gave him “a sickening feeling which is aroused when moral standards are adulterated in a convent.” Yet when Roberts announced his retirement from the Court eight years later, Chief Justice Stone, along with now-Justices Frankfurter and Robert Jackson, insisted that the Court’s farewell letter include the encomium, “You have made fidelity to principle your guide to decision.” Justices Black and Douglas balked at the inclusion of the sentence, with the result that no letter was ever sent. This article, prepared for the William & Mary symposium on “Fidelity, Economic Liberty, and 1937,” seeks to understand how Stone, Jackson, and Frankfurter may have come to see a consistency and integrity to Roberts’ jurisprudence that others did not detect. It does so by identifying and analyzing continuities in Roberts’ performance in cases involving Fifth and Fourteenth Amendment restraints on economic regulation that persisted after 1937. This examination aims to provide an improved understanding not only of Roberts’ jurisprudence, but also of the mechanisms of constitutional change in the 1930s. In addition, the paper attempts to offer a richer understanding of the contemporary significance of his landmark 1934 opinion in Nebbia v. New York.
Thanks to Richard Posner's classic 1972 article, A Theory of Negligence Law, the Hand formula of United States v. Carroll Towing Co. is perhaps the most central idea of many first-year torts classes today. Students learn that the meaning of negligence should be understood in terms of Judge Learned Hand's formula comparing the costs of taking precautions with the product of the likelihood of injury without those precautions and the magnitude of such injury. There is more than a little irony, however, in the superstar status of the Hand formula in negligence law. Carroll Towing is not a negligence case at all; indeed, it is not even a tort case, but an admiralty case. Beyond that, even the very general idea of a negligent injurer being held liable for the injuries it caused is not implicated in Carroll Towing, because it is about plaintiff's fault, not defendant's fault. Posner's elevation of this formula to the apex of negligence doctrine is, though utterly sincere, nevertheless a sleight of Hand. This article puts aside the moral and evaluative arguments that frequently divide tort theorists, and instead gathers overwhelming evidence within negligence law that the Hand formula - in both its economic and its non-economic versions - simply misses the meaning of negligence within negligence doctrine. Negligence is a failure to use ordinary care. Ordinary care is that which a reasonably prudent person would use under the circumstances. While hardly self-evident or precise, concepts of ordinary care and the reasonably prudent person connote a standard of conduct that our negligence doctrine requires jurors and judges to apply as well as they can. There is every reason to believe this meaning is something quite different from the Hand formula (even if the two ideas should sometimes overlap). An accurate account of the meaning of negligence is a necessary starting point for questions of interpretation, criticism, and revision of tort doctrine. 
Finally, the article canvasses and rejects rights-based, conventionalistic, and virtue-based theories of the meaning of negligence, and begins to sketch an account according to which the reasonably prudent person is conceived in terms of a special kind of competency in civil life.
Immigration is a national issue and a federal responsibility — so why are states so actively involved? Their legal authority over immigration is questionable. Their institutional capacity to regulate it is limited. Even the legal actions that states take sometimes seem pointless from a regulatory perspective. Why do they enact legislation that essentially copies existing federal law? Why do they pursue regulations that are likely to be enjoined or struck down by courts? Why do they give so little priority to the immigration laws that do survive? This Article sheds light on this seemingly irrational behavior. It argues that state laws are being pursued less for their regulatory impact, and more for their ability to shape federal immigration policymaking. States have assumed this role because, as alternative policy venues, they offer political actors a way to reframe the public perception of an issue and shift debates to more favorable ground. Moreover, states are able to exert this kind of influence without having to legally implement or effectively enforce their laws. This theory offers an explanation for why states have so frequently been drawn into policy disputes over immigration in the past, such as those that led to the major immigration reforms of 1986 and 1996. It also casts new light on more recent state responses, such as Arizona’s controversial 2010 immigration enforcement law.
Many insurance law commentators believe that judges should regulate the substance of insurance policies by refusing to enforce insurance policy terms that are exploitive or otherwise unfair. The most common guide for the judicial regulation of insurance policies is the "reasonable expectations doctrine," which requires courts to disregard coverage restrictions that are beyond insureds' reasonable expectations unless the insurer specifically informed the insured about the restriction at the time of purchase. This Article argues that although the judiciary has a potential role to play in policing insurance policy terms, that role should not be defined by reference to consumers' reasonable expectations. Instead, by drawing on the parallels between insurance policies and ordinary consumer products, this Article advances a products liability framework for understanding how and why courts should regulate insurance policies. It proposes that, just as firms that make defective products must pay for the resulting injuries, insurers that issue "defective" insurance policies should have to provide coverage to insureds. The Article argues that the usefulness of the analogy to products liability law goes well beyond understanding the normative basis for the judicial regulation of insurance policies. Products liability law offers important insights into how courts can efficiently correct failures in insurance markets by encouraging effective disclosure to consumers and appropriately setting penalties so that insurers take an optimal amount of care in drafting policy terms.
Obviousness is the ultimate condition of patentability. The obviousness requirement - that inventions must, to qualify for a patent, be not simply new but sufficiently different that they would not have been obvious to the ordinarily skilled scientist - is in dispute in almost every case, and it is responsible for invalidating more patents than any other patent rule. It is also perhaps the most vexing doctrine to apply, in significant part because the ultimate question of obviousness has an "I know it when I see it" quality that is hard to break down into objective elements. That hasn't stopped the Federal Circuit from trying to find those objective elements. In the last quarter-century, the court has created a variety of rules designed to cabin the obviousness inquiry: an invention can't be obvious unless there is a teaching, suggestion, or motivation to combine prior art elements or modify existing technology; an invention can't be obvious merely because it is obvious to try; and so forth. In KSR v. Teleflex, the Supreme Court rejected the use of rigid rules to decide obviousness cases. In their place, the Court offered not a new test, but a constellation of factors designed to discern whether the person having ordinary skill in the art (the PHOSITA) would likely think to make the patented invention. In short, the Court sought to take a realist approach to obviousness - to make the obviousness determination less of a legal construct and to put more weight on the factual determination of what scientists would actually think and do about a particular invention. As a general principle, this realist focus is a laudable one. The too-rigid application of rules designed to prevent hindsight bias had led to a number of results that defied common sense, including the outcome of KSR itself in the Federal Circuit.
But the realist approach has some (dare we say it) nonobvious implications for evidence and procedure, both in the Patent and Trademark Office (PTO) and in the courts. The greater focus on the characteristics of individual cases suggests a need for evidence and factual determinations, but the legal and structural framework under which obviousness is tested makes it difficult to make and review those determinations. The realist approach is also incomplete, because both the knowledge of the PHOSITA and the way the court approaches so-called secondary considerations of nonobviousness depend critically on the counterfactual assumption that the PHOSITA, while ordinarily skilled, is perfectly informed about the prior art. If we are to take a realist approach to obviousness, we should make it a consistent approach, so that the ultimate obviousness determination reflects what scientists in the field would actually think. So far, despite KSR, it does not. The result of taking the realist approach seriously may be - to the surprise of many - a law of obviousness that is in some respects more favorable to patentability than the standard it displaced.
How would Congress act in a world without judicial review? This Article examines Congress’s capacity and incentives to enforce upon itself “the law of congressional lawmaking”—a largely overlooked body of law that is completely insulated from judicial enforcement. The Article explores the political safeguards that may motivate lawmakers to engage in self-policing and rule-following behavior. Its main argument is that the political safeguards that scholars and judges commonly rely upon to constrain legislative behavior actually motivate lawmakers to be lawbreakers. In addition to providing insights about Congress’s behavior in the absence of judicial review, this Article’s examination is of crucial importance to the debate about judicial review of the legislative process, the general debate on whether political safeguards reduce the need for judicial review, and the burgeoning new scholarship about legislative rules.
Plea bargaining in the United States is in critical respects unregulated, and a key reason is the marginal role to which judges have been relegated. In the wake of Santobello v. New York (1971), lower courts crafted Due Process doctrines through which they supervised the fairness of some aspects of the plea bargaining process. Within a decade, however, U.S. Supreme Court decisions began to shut down any constitutional basis for judicial supervision of plea negotiations or agreements. Those decisions rested primarily on two claims: separation of powers and the practical costs of regulating plea bargaining in busy criminal justice systems. Both rationales proved enormously influential. Legislative rulemakers and state courts alike largely followed the Court in excluding judges—and in effect, the law—from any meaningful role.
This Article challenges these longstanding rationales. Historical practice suggests that separation of powers doctrine does not require the prevailing, exceedingly broad conception of “exclusive” executive control over charging and other components of the plea process. This is especially true in the states, many of which had long traditions of private prosecutors and judicial oversight over certain prosecution decisions, as well as different constitutional structures. By contrast, English courts—based on both common law and legislation—retain some power to review such decisions. Moreover, assertions that legal constraints on plea bargaining would fatally impair the “efficiency” of adjudication are belied by evidence of very high guilty plea rates both in England, where bargaining is more regulated, and in U.S. courts before the Supreme Court closed off meaningful grounds for judicial review.