University of Pennsylvania Law Review

Print ISSN: 0041-9907
Americans-some of them at least-enjoy a remarkable range of expectations about their health care. They have come to rely on free choice of physicians, on autonomy and the doctrine of informed consent to care, on the belief that they can get the best care money can buy, on the assumption that resources will be available to pay for that care, and perhaps even on the hope that death can be cheated for at least a little while. But these expectations are fragile for those who have them, and they are not shared by many others.
Americans long ago dedicated themselves to the removal of social and economic barriers that unfairly burdened racial and ethnic minorities, women, and older people. Discrimination against individuals on the basis of their membership in artificially defined groups was rejected. The current crisis in health care expenditures, however, may be reviving some old prejudices and creating new ones, particularly against older people. Questions have been raised about older Americans, their place in society, and their rightful claim on resources such as health care. The way that society addresses these questions may well determine the character and quality of life in this country over the next few decades.
Chief Judge David L. Bazelon has stood at the meeting point of psychiatry and the criminal law during the past twenty-five years. The hopes and happy promises, the disappointments and recriminations of that uneasy conjunction are vividly refracted in his judicial career. Durham is the first large landmark. The explicit intent of that opinion was to permit psychiatrists to explain antisocial conduct in the language of their discipline, unencumbered by competing conceptual constructs drawn, for example, from quasi-religious explanations of "bad behavior." But there was a larger purpose behind this strategy. Durham solicited psychodynamic explanations of criminal conduct because these seemed to offer hopeful directions for a more constructive, humane social response to criminal deviancy. Psychiatrists would offer not only their explanations but, more importantly, their prescriptions for cure in the dramatic liturgy of the criminal trial. The public would be seized by these performances and persuaded to commit the resources needed to meet these prescriptions.
In the 1980s and early 1990s, many observers believed that the American corporate bankruptcy laws were desperately inefficient. The managers of the debtor stayed in control as "debtor in possession" after filing for bankruptcy, and they had the exclusive right to propose a reorganization plan for at least the first four months of the case, and often far longer. The result was lengthy cases, deteriorating value, and numerous academic proposals to replace Chapter 11 with an alternative regime. In the early years of the new millennium, bankruptcy could not look more different. Cases proceed much more quickly, and they are much more likely to result in auctions or other sales of assets than in previous decades. This transformation is due in part to a change in the major corporations that file for bankruptcy. Rather than industrial, bricks-and-mortar firms, many of the new debtors are knowledge-based firms with transient assets. Much more important, however, have been the adjustments creditors have made in an effort to reassert control in bankruptcy. In this Article, I focus on the two most important contractual developments: lenders' use of debtor-in-possession financing agreements as a governance lever; and the so-called pay-to-stay arrangements, which give key managers bonuses for meeting specified performance goals (such as quick emergence from bankruptcy or the sale of important assets). Both of these developments can be seen as adjustments by creditors to counteract bankruptcy's interference with the shift in control rights that would ordinarily occur at the time of financial distress. As I have discussed elsewhere, Chapter 11 functioned somewhat like an antitakeover device in the 1980s. Creditors have now neutralized its effects. Of the two new contractual approaches, pay-to-stay agreements have proven much more controversial, prompting heated complaints about excessive managerial pay in cases like Enron, Polaroid, and Kmart. The controversy is similar in o
Although the executive branch appoints Japanese Supreme Court justices as it does in the United States, a personnel office under the control of the Supreme Court rotates lower court Japanese judges through a variety of posts. This creates the possibility that politicians might indirectly use the postings to reward or punish judges. For forty years, the Liberal Democratic Party (LDP) controlled the legislature and appointed the Supreme Court justices who in turn controlled the careers of these lower-court judges. In 1993, it temporarily lost control. We use regression analysis to examine whether the end of the LDP’s electoral lock changed the court’s promotion system, and find surprisingly little change. Whether before or after 1993, the Supreme Court used the personnel office to "manage" the careers of lower court judges. The result: uniform and predictable judgments that economize on litigation costs by facilitating out-of-court settlements.
This paper extends the author's previous results on the same theme. As the economy grows in complexity, the constraints of information and motivation tighten on centralized lawmaking. Specialized business communities develop their own norms, which the paper terms the "new law merchant." Decentralized lawmaking involves enacting into law those community norms that pass a structural test of efficiency and fairness. The structural test is obtained from a theory of games and norms. Norms arise in a community from games in which individuals have incentives to signal support for practices increasing the supply of local public goods. The standards prescribed by these norms are efficient in the absence of spill-overs or nonconvexities, but the level of informal enforcement is inefficient. The evolution of the common law towards efficiency comes from enforcing efficient social norms, not from selective litigation pressure.
This article examines the recent history and future of federal lifesaving regulation. The article argues that, considering both philosophical and practical perspectives, lifesaving regulation informed by benefit-cost analysis (BCA) has compelling advantages compared to the main alternatives to BCA. Contrary to the popular belief that BCA exerts only an anti-regulation influence, I show, based on first-hand experience in the White House from 2001 to 2006, that BCA is also an influential tool in protecting or advancing valuable lifesaving rules, especially in a pro-business Republican administration. Various criticisms of BCA that are common in the legal literature are shown to be unconvincing: the tool's alleged immorality when applied to lifesaving situations, its supposed indeterminacy due to conceptual and empirical shortcomings, and the alleged biases in the way benefits and costs are computed. But the article also pinpoints problems in the "benefit-cost" state, including opportunities for improvement in the process of lifesaving regulation. Innovations in analytic practice, coupled with improvements in the design of regulatory systems, are proposed to strengthen the efficiency and fairness of federal lifesaving regulation. The article's suggestions provide a menu of promising reforms for consideration by the new administration and the new Congress as they take office in January 2009.
Firms going public have increasingly been incorporating antitakeover provisions in their IPO charters, while shareholders of existing companies have increasingly been voting in opposition to such charter provisions. This paper identifies possible explanations for this empirical pattern. Specifically, I analyze explanations based on (1) the role of antitakeover arrangements in encouraging founders to break up their initial control blocks, (2) efficient private benefits of control, (3) agency problems among pre-IPO shareholders, (4) agency problems between pre-IPO shareholders and their IPO lawyers, (5) asymmetric information between founders and public investors about the firm's future growth prospects, and (6) bounded attention and imperfect pricing at the IPO stage.
Dan Kahan and Donald Braman’s provocative analysis contends that because people’s beliefs about firearms are primarily formed by cultural values, empirical data are unlikely to have much effect on the gun debate. Their proposed solution to this quandary is that scholars who want to help resolve the gun controversy should identify precisely the cultural visions that generate this dispute and formulate appropriate strategies for enabling those visions to be reconciled in law. In response to Kahan and Braman’s challenge to empirical research, I argue that while culture influences beliefs, it is but one of several such factors. Alongside culture (and presumably other factors as well), empirical evidence has a powerful influence on beliefs about gun control. In the first Part of this Commentary I discuss how cultural beliefs can significantly affect individuals’ beliefs about firearms and discuss strategies for helping people overcome their cultural biases to more honestly evaluate empirical evidence. The second Part provides examples of how data have played an important role in affecting individuals’ beliefs about firearms. I conclude by urging renewed attention to empirical research to inform the gun control debate.
Constitutions constitute a polity and create and entrench power. A corporate constitution - the governance choices incorporated in state law and the certificate of incorporation - resembles a political constitution. Delaware law allows parties to create corporations, to endow them with perpetual life, to assign rights and duties to "citizens" (directors and shareholders), to adopt a great variety of governance structures, and to entrench those choices. In this Article, we argue that the decision to endow directors with significant power over decisions whether and how to sell the company is a constitutional choice of governance structure. We then argue that it is, on theoretical and empirical grounds, a perfectly intelligible choice: shareholders reasonably might opt for board entrenchment - implemented, for example, by means of a staggered board - in order to enable a board to employ selling strategies more effectively and thus to increase the premium shareholders receive when the company is sold. Such a decision is a kind of pre-commitment whereby shareholders, by binding themselves ex ante, may be able to improve their collective position ex post. After examining how shareholders can entrench particular governance structures under Delaware law, we examine two issues that arise once shareholders have chosen to entrench a governance structure: the question of incomplete implementation that arises in cases such as Blasius and Liquid Audio; and the question of when and whether changed circumstances justify ex post judicial negation of shareholders' prior commitments.
"Prepared for Conference on Antitrust Law and Economics, University of Pennsylvania, November 17-18, 1978."
"Individual risk" currently plays a major role in risk assessment and in the regulatory practices of the health and safety agencies that employ risk assessment, such as EPA, FDA, OSHA, NRC, CPSC, and others. Risk assessors use the term "population risk" to mean the number of deaths caused by some hazard. By contrast, "individual risk" is the incremental probability of death that the hazard imposes on some particular person. Regulatory decision procedures keyed to individual risk are widespread. This is true both for the regulation of toxic chemicals (the heartland of risk assessment), and for other health hazards, such as radiation and pathogens; and regulatory agencies are now beginning to employ individual risk criteria for evaluating safety threats, such as occupational injuries. Sometimes, agencies look to the risk imposed on the maximally exposed individual; in other contexts, the regulatory focus is on the average individual's risk, or perhaps the risk of a person incurring an above-average but nonmaximal exposure. Sometimes, agencies seek to regulate hazards so as to reduce the individual risk level (to the maximally exposed, high-end, or average individual) below 1 in 1 million. Sometimes, instead, a risk level of 1 in 100,000 or 1 in 10,000 or even 1 in 1000 is seen as de minimis. In short, the construct of individual risk plays a variety of decisional roles, but the construct itself is quite pervasive. This Article launches a systematic critique of agency decisionmaking keyed to individual risk. Part I unpacks the construct, and shows how it invokes a frequentist rather than Bayesian conception of probability. Part II surveys agency practice, describing the wide range of regulatory contexts where individual risk levels are wholly or partly determinative of agency choice: these include most of the EPA's major programs for regulating toxins (air pollutants under the Clean Air Act, water pollutants under the Clean Water Act and
As an appellate body jurisdictionally demarcated by subject matter rather than geography, the United States Court of Appeals for the Federal Circuit occupies a unique role in the federal judiciary. This controversial institutional design has had profound effects on the jurisprudential development of the legal regimes within its purview - especially the patent law, which the Federal Circuit has come to thoroughly dominate in its two decades of existence. In this Article, we assess the court's performance against its basic premise: that, as compared to prior regional circuit involvement, centralization of legal authority will yield a clearer, more coherent, and more predictable legal infrastructure for the patent law. Using empirical data obtained from a novel study of the Federal Circuit's jurisprudence of claim construction - the interpretation of language defining a patent's scope - we conclude that, on this indicator at least, the record is decidedly mixed, though there are some encouraging signs. Specifically, the study indicates that the court is sharply divided between two basic methodological approaches to claim construction, each of which leads to distinct results. The dominant analytic framework gained additional favor during the period of the study, and yet the court became increasingly polarized. We also find that the significantly different approaches to claim construction followed by Federal Circuit judges have led to panel-dependency; claim construction analysis is clearly affected by the composition of the three-judge panel that hears and decides the case. While little in the results of this study would lead one to conclude that the court has been an unqualified success, we believe that the picture of the Federal Circuit that emerges is of a court in broad transition. Driven in part by new appointments and an effort to respond to its special mandate, a new Federal Circuit is emerging - one that appears to be more rules-driven and more consistent than b
An updated version of this article was published in the University of Pennsylvania Law Review. For over two decades, federal agencies have been required to analyze the benefits and costs of significant regulatory actions and to show that the benefits justify the costs. But the regulatory state continues to suffer from significant problems, including poor priority-setting, unintended adverse side-effects, and, on occasion, high costs for low benefits. In many cases, agencies do not offer an adequate account of either costs or benefits, and hence the commitment to cost-benefit balancing is not implemented in practice. A major current task is to ensure a deeper and wider commitment to cost-benefit analysis, properly understood. We explain how this task might be accomplished and offer a proposed executive order that would move regulation in better directions. In the course of the discussion, we explore a number of pertinent issues, including the actual record of the last two decades, the precautionary principle, the value of 'prompt letters', the role of distributional factors, and the need to incorporate independent agencies within the system of cost-benefit balancing.
This article is the first to test the empirical assumptions about American public opinion found in the Supreme Court's opinions concerning campaign finance reform. The area of campaign finance is a unique one in First Amendment law because the Court has allowed the mere perception of a problem (in this case, "corruption") to justify the curtailment of recognized First Amendment rights of speech and association. Since Buckley v. Valeo, defendants in campaign finance cases have proffered various types of evidence to support the notion that the public perceives a great deal of corruption produced by the campaign finance system. Most recently, in McConnell v. FEC, in which the Court upheld the McCain-Feingold campaign finance law, both the Department of Justice and the plaintiffs conducted and submitted into evidence public opinion polls measuring the public's perception of corruption. This article examines the data presented in that case, but also examines forty years of survey data of public attitudes toward corruption in government. We argue that trends in public perception of corruption have little to do with the campaign finance system. The share of the population describing government as corrupt went down even as soft money contributions skyrocketed. Moreover, the survey data suggest that an individual's perception of corruption derives from that person's (1) position in society (race, income, education level); (2) opinion of the incumbent President and performance of the economy over the previous year; and (3) general attitudes concerning taxation and "big government." Although we conclude that, indeed, a large majority of Americans believe that the campaign finance system contributes to corruption in government, the data suggest that campaign finance reform will have no effect on these attitudes.
This paper focuses on the interaction between uncertainty and insurability in the context of some of the risks associated with climate change. It discusses the evolution of insured losses due to weather-related disasters over the past decade, and the key drivers of the sharp increases in both economic and insured catastrophe losses over the past 20 years. In particular we examine the impact of development in hazard-prone areas and of global warming on the potential for catastrophic losses in the future. In this context we discuss the implications for insurance risk capital and the capacity of the insurance industry to handle large-scale events. A key question is what factors determine the insurability of a risk and the extent of coverage the private sector will offer against extreme events when there is significant uncertainty surrounding the probability and consequences of a catastrophic loss. We discuss the concepts of insurability by focusing on coverage for natural hazards, such as earthquakes, hurricanes and floods. The paper also focuses on the liability issues associated with global climate change, and possible implications for insurers (including D&O), given the difficulty in identifying potential defendants, tracing harm to their actions and apportioning damages among them. The paper concludes by suggesting ways that insurers can help mitigate future damages from global climate change by providing premium reductions and rate credits to companies investing in risk-reducing measures.
Agency problems beset firms and prompt opportunistic behavior by employees. Opportunistic behavior redistributes value, whereas cooperative behavior creates value. Firm-specific fairness norms typically promote the firm's efficiency by increasing cooperation and decreasing opportunism. Firm-specific fairness norms best promote efficiency when supported by reputation effects and when the firm's agents internalize the norms. People who internalize norms acquire good character. We will develop the concept of "good agent character," by which we mean agent character that serves the firm's profitability by embodying the firm's fairness norms. Good agent character confers an advantage on superiors and subordinates in forming cooperative relations with other people who can read character.
In well-functioning domestic legal systems, courts provide a mechanism through which commitments and obligations are enforced. A party that fails to honor its obligations can be hauled before a court and sanctioned through seizure of person or property. The international arena also has courts or, to expand the category somewhat, tribunals. These institutions, however, lack the enforcement powers of domestic courts. How, then, do they work, and how might they work better or worse? The first objective of this Article is to establish that the role of the tribunal is to promote compliance with some underlying substantive legal rule. This simple yet often overlooked point provides a metric by which to measure the effectiveness of tribunals. But a tribunal does not operate in isolation. The use of a tribunal is one way to resolve a dispute, but reliance on diplomacy and other traditional tools of international relations is another. Furthermore, even if a case is filed with a tribunal, there may be settlement prior to a ruling and, even if there is a ruling, the losing party may refuse to comply. To properly understand international tribunals, then, requires consideration of the entire range of possible outcomes to a dispute, including those that do not involve formal litigation. The second goal of the Article is to develop a rational choice model of dispute resolution and tribunals that takes this reality into account. The third goal is to explore, based on the above model, various features of international tribunals and identify those that increase effectiveness and those that reduce it. Finally, the Article applies the analysis to help us understand two prominent tribunals, the World Trade Organization’s Appellate Body and the United Nations Human Rights Committee.
A common justification for recent judicial and legislative interventions in consumer markets to set contract terms or to require firms to disclose price or other product-related information is that consumers are imperfectly informed with respect to the transactions they make. It is generally recognized, however, that information is never perfect; the decisionmaker's task, therefore, is to characterize, in terms of the need for intervention, real world states that are intermediate between perfect information and perfect ignorance. These decisions are now made in what can be described politely as an impressionistic fashion, because lawyers have no rigorous tools for evaluating and responding to information problems. In recent years, economists have developed a variety of models that begin to explain the behavior of markets characterized by imperfect information. These models, however, have been of little practical use to most lawyers, judges, and legislators, because of their mathematical complexity. The goal of this Article is to communicate to lawyers and decisionmakers the legal implications of this new "economics of information."
As Chairman of the Civil Aeronautics Board in the late 1970s, Alfred E. Kahn presided over the deregulation of the airlines and his book, published earlier in that decade, presented the first comprehensive integration of the economic theory and institutional practice of economic regulation. In his lengthy new introduction to this edition Kahn surveys and analyzes the deregulation revolution that has not only swept the airlines but has transformed American public utilities and private industries generally over the past seventeen years. While attitudes toward regulation have changed several times in the intervening years and government regulation has waxed and waned, the question of whether to regulate more or to regulate less is a topic of constant debate, one that The Economics of Regulation addresses incisively. It clearly remains the standard work in the field, a starting point and reference tool for anyone working in regulation. Kahn points out that while dramatic changes have come about in the structurally competitive industries - the airlines, trucking, stock exchange brokerage services, railroads, buses, cable television, oil and natural gas - the consensus about the desirability and necessity for regulated monopoly in public utilities has likewise been dissolving, under the burdens of inflation, fuel crises, and the traumatic experience with nuclear plants. Kahn reviews and assesses the changes in both areas: he is particularly frank in his appraisal of the effect of deregulation on the airlines. His conclusion today mirrors that of his original, seminal work - that different industries need different mixes of institutional arrangements that cannot be decided on the basis of ideology.
We experimentally investigate social effects in a principal-agent setting with incomplete contracts. The strategic interaction scheme is based on the well-known Investment Game (Berg et al., 1995). In our setting, four agents (i.e., trustees) and one principal (i.e., trustor) interact, and access to the choices of peers in the group of trustees is experimentally manipulated. Overall, subjects are positively influenced by the peers' choices they observe. However, the positive interaction between choices is not strong enough to raise the reciprocity of those observing to the same level as that of those whose choices are observed.
Regulators need to rely on science to understand problems and predict the consequences of regulatory actions, but overreliance on science can actually contribute to, or at least deflect attention from, incoherent policymaking. In this article, we explore the problems with using science to justify policy decisions by analyzing the Environmental Protection Agency's recently revised air quality standards for ground-level ozone and particulate matter, some of the most significant regulations ever issued. In revising these standards, EPA mistakenly invoked science as the exclusive basis for its decisions and deflected attention from a remarkable series of inconsistencies. For example, even though EPA claimed to base its standards on a singular concern for public health, it set its standards at levels that will still lead to hundreds, if not thousands, of deaths each year. In other ways, EPA's positions were like shifting sands, changing at points that apparently were expedient for the agency. Such an outcome should not be unexpected when an agency misuses science as a policy rationale, but it also need not be inevitable if agencies learn to respect the limits of science in justifying risk standards. We conclude by offering a set of principles to give direction to standard setting by EPA and other agencies. In the case of EPA's air quality program, Congress will likely need to amend the Clean Air Act to enable EPA to break free of the conceptual incoherence in which it now finds itself mired. Decisionmakers in any setting, though, can avoid the problem of shifting sands by carefully understanding what science can and cannot do.
Democratic experimentalism, the procedural component of the "new governance" movement, has won widespread acceptance in calling for decentralization, deliberation, deregulation, and experimentation. Democratic experimentalists claim that this approach offers pragmatic solutions to social problems. Although the democratic experimentalist movement formally began only a decade ago, antipoverty law has reflected its major principles since the 1960s. This experiment has gone badly, weakening antipoverty programs. Key elements of this participatory approach to antipoverty law - decentralization, privatization, and the substitution of ad hoc problem-solving for individual rights - all contributed to the calamity that low-income people suffered during and after Hurricane Katrina. Those same features prevented the country from acting on the widely shared concern about poverty in Katrina's wake. Indeed, almost all progress in antipoverty law has come from centralized, nonparticipatory, and non-experimentalist policymaking. Democratic experimentalism assumes consensus on the nature of problems and the propriety of government action, reliable metrics for measuring success, the luxury of time, the lack of situations requiring centralized policymaking, and deliberation that is costless in most respects. It also requires that one side risk political capital to establish an experimentalist system. These assumptions have not been fulfilled in antipoverty law. Little suggests that they will be met in other fields either. Further progress in antipoverty law must come from centralized policymaking based on substantive consensus among many, though not all, liberals and conservatives. This consensus will follow many substantive components of the new governance, including reliance on market incentives. Democratic experimentalism should learn from debates about deliberative democratic theory, which has wrestled with the same key weaknesses.
Legal concepts, like concepts generally, help economize on information. Conventional wisdom is correct to associate conceptualism with formalism, but misunderstands the role concepts play in law. Commentators from the Legal Realists onward have paid insufficient attention to the distinction between intensions – functions from worlds to categories – and extensions – the categories themselves. Concepts that pick out the same category can differ greatly in terms of information costs. This Article applies some tools of cognitive science to explore the economics of legal concepts. As in cognitive science, we expect simplicity of description and generality of explanation to coincide. Specifically, both the mind and the law can be regarded as information-processing devices that manage complexity and economize on information by employing concepts and rules, the specific-over-general principle, modularity, and recursiveness. These devices work in tandem to produce the economizing architecture of property. The cognitive theory is then applied to longstanding puzzles like the role of baselines such as nemo dat (“one can give that which one does not have”) and ad coelum (“one who owns the soil owns to the heavens above and the depths below”), the notion of “title,” and the function of equity as a safety valve for the law. The cognitive theory also allows one to reconcile reductionism and holism in property theory as well as static and dynamic explanations of the contours of property.
Does the United States Constitution pose an insurmountable barrier to the United States’ participation in international courts and tribunals? In a recent article, The Constitutionality of International Courts: The Forgotten Precedent of Slave-Trade Tribunals, Professor Eugene Kontorovich argues that the United States’ participation in the International Criminal Court would violate the U.S. Constitution, both as an unconstitutional delegation of federal judicial power to a court not created in accordance with Article III of the Constitution and as a violation of the Bill of Rights’ protections attendant to criminal trials in the United States. Kontorovich bases his argument primarily on history, specifically the opposition of some members of the U.S. government to membership in international courts that enforced laws prohibiting the slave trade in the nineteenth century. In response, I argue that Kontorovich has misread this bit of history. First, Kontorovich overstates the significance and sincerity of the constitutional objections. Second, contrary to Kontorovich’s assertions, the international slave-trade tribunals did not exercise criminal jurisdiction, but rather a type of civil in rem jurisdiction. This type of civil jurisdiction was well recognized in American admiralty law in the early nineteenth century and was extensively used in U.S. court cases involving the forfeiture of ships under domestic laws prohibiting the slave trade. Third, and most fundamentally, Kontorovich misunderstands the nature of the constitutional objections to membership in the international courts. When examining the sources more carefully, one sees that the individuals making these objections expressed concern about subjecting Americans to trial for violations of American law in foreign courts, a concern that they expressly stated would not be present in trials for violations of international law. 
The problem, in their view, was that the general law of nations still allowed the slave trade. As these men understood the law of nations, the actions of one or even two countries could not change the general law of nations. The United States was free to prohibit the slave trade for its citizens as a matter of its domestic law, but then the source of the legal prohibition would be domestic law, and it would be constitutionally suspect to delegate the power to enforce that law to an international tribunal. That — and not the supposedly criminal nature of the courts — was the key distinction between the proposed slave-trade tribunals and the other international arbitration bodies, which were seen as having been charged with implementing law-of-nations obligations, rather than municipal law. By the time the United States eventually ratified the treaty for the slave-trade courts in 1862, however, the general law of nations prohibited the slave trade. No one raised serious constitutional objections at that time. Thus, if anything, the slave-trade tribunals stand alongside the rest of the nineteenth-century arbitration commissions in which the United States participated. The tribunals thus serve as a precedent for the constitutionality of participation in international courts and tribunals as a means for interpreting and enforcing widely recognized norms of international law.