Article

Information Markets, Administrative Decisionmaking, and Predictive Cost-Benefit Analysis


Abstract

FutureMAP, a project of the Defense Advanced Research Projects Agency, was to involve experiments to determine whether information markets could improve Defense Department decisionmaking. Information markets are securities markets used to derive information from the prices of securities whose liquidation values are contingent on future events. The government intended to use such a market to assess the probabilities of terrorist events, including potential political assassinations. The indelicacy of this potential application contributed to a controversy leading to the cancellation of the program. In this article, Professor Abramowicz assesses whether information markets, in theory, could be useful to administrative agencies and concludes that information markets could help discipline administrative agency predictions if a number of technical hurdles, such as the danger of manipulation, can be overcome. Because the predictions of well-functioning information markets are objective, they function as a tool that exhibits many of the same virtues in predictive tasks that cost-benefit analysis offers for normative policy evaluation. Both approaches can help to overcome cognitive errors, thwart interest group manipulation, and discipline administrative agency decisionmaking. The article suggests that the two forms of analysis might be combined to produce a "predictive cost-benefit analysis." In such an analysis, an information market would predict the outcome of a retrospective cost-benefit analysis, to be conducted some years after the decision whether to enact a particular policy. As long as the identity of the eventual decisionmaker cannot be anticipated, predictive cost-benefit analysis estimates how an average decisionmaker would be expected to evaluate the policy. Because the predictive cost-benefit analysis assessment does not depend on the identity of current agency officials, they cannot shade the numbers to justify policies that the officials prefer.
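The abstract's central device can be made concrete with a small, purely illustrative sketch: a contract whose liquidation value depends on the verdict of a retrospective cost-benefit analysis, with its market price read as the expected assessment. The function name, payout scale, and numbers below are assumptions for illustration, not anything specified in the article.

```python
# Minimal sketch (not from the article): pricing intuition for a
# "predictive cost-benefit analysis" contract. A contract pays out, years
# later, according to the verdict of a retrospective cost-benefit analysis.
# Under risk neutrality its market price estimates the expected assessment.
# All names and numbers below are hypothetical illustrations.

def implied_assessment(price: float, payout_if_favorable: float = 1.0,
                       payout_if_unfavorable: float = 0.0) -> float:
    """Read the market's expected retrospective CBA verdict off the price
    of a contract paying `payout_if_favorable` when the retrospective
    analysis approves the policy and `payout_if_unfavorable` otherwise."""
    spread = payout_if_favorable - payout_if_unfavorable
    return (price - payout_if_unfavorable) / spread

# Example: a contract trading at $0.62 on a $0/$1 payout scale suggests the
# market assigns roughly a 62% chance that the retrospective cost-benefit
# analysis will judge the policy worthwhile.
print(implied_assessment(0.62))  # -> 0.62
```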

... Information markets make use of specifically designed contracts that yield payments based on the outcome of uncertain future events and differ from traditional equity markets in that they are not typically tied to a claim of corporate ownership. Instead, the assets are claims whose payoff is tied to some future specified contingency (Abramowicz 2004). ...
Article
In this paper we propose a research agenda on the use of information markets as tools to collect, aggregate and analyze citizens’ opinions, expectations and preferences from social media in order to support public policy design and implementation. We argue that markets are institutional settings able to efficiently allocate scarce resources, aggregate and disseminate information into prices and accommodate hedging against various types of risks. We discuss various types of information markets, as well as address the participation of both human and computational agents in such markets.
... It could be verified that all other PMs also create independent forecasts. PMs react to new information rather than to new polls (Abramowicz, 2003), and this reaction is faster (Forsythe et al., 1992). Known betting markets are similar to PMs in that the reciprocal of the odds can be interpreted as the probability of occurrence. ...
Article
The price volatility of commodities has increased greatly in recent years. Information about the development of agricultural markets is disseminated among market participants to differing degrees. This information asymmetry is the basis for trading profits on markets. Different forecasting tools, especially statistical and econometric methods, were developed for the agricultural sector in the past but did not achieve good forecasting accuracy, resulting in considerable price risk. The additional information held by market participants and other people in the agricultural sector is neglected by these tools. Methods using the "wisdom of crowds" effect have achieved forecasting accuracy better than or equal to that of the standard approaches in many applications. The "wisdom of crowds" effect indicates that groups reach better results than individuals or experts; a well-known example is the Ask the Audience lifeline in the TV show "Who Wants to Be a Millionaire?". In this paper we introduce prediction markets as a forecasting method that has become practicable thanks to the World Wide Web. Prediction markets use a trading mechanism similar to a stock market to harness the "wisdom of crowds": participants trade certificates relating to a future event, and the transaction prices of the certificates are interpreted as the participants' forecast. Over the past 20 years the field of prediction markets has evolved and prediction markets have achieved good forecasting accuracy, especially in election forecasting. Successful prediction markets need enough participants with diverse information about the forecast object. A first implementation of prediction markets in the agricultural sector, forecasting the future prices of rapeseed and wheat, achieved predictive accuracy better than or equal to that of the futures market.
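To make the "transaction prices as forecast" idea concrete, here is a hedged sketch of how a point forecast could be read off a set of winner-take-all price-interval certificates; the intervals, prices, and normalization step are invented for illustration and are not the paper's implementation.

```python
# Hedged illustration (not the paper's implementation): in a winner-take-all
# prediction market for a commodity price, each certificate pays 1 if the
# final price falls in its interval and 0 otherwise. Normalized certificate
# prices can then be read as a probability distribution, and a point
# forecast as its expected value. Intervals and prices are made up.

intervals = [(300, 350), (350, 400), (400, 450)]  # hypothetical EUR/t buckets
prices = [0.20, 0.55, 0.30]                        # last transaction prices

total = sum(prices)
probabilities = [p / total for p in prices]        # normalize to sum to 1

midpoints = [(lo + hi) / 2 for lo, hi in intervals]
forecast = sum(p * m for p, m in zip(probabilities, midpoints))
print(f"implied point forecast: {forecast:.1f} EUR/t")

# Betting-market analogue from the citing excerpt above: decimal odds of 4.0
# on an event correspond to an implied probability of 1 / 4.0 = 0.25.
```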
... An interesting general policy would be to have laws "recurse," via derivative legal regimes with more focused welfare measures. So a general stadium policy might be approved at a basic level which says to approve any proposed stadium if markets estimate that it would increase some measure of regional welfare, stadium profitability, or an ex post cost-benefit calculation (Abramowicz, 2004). A stadium that would not noticeably affect national welfare may noticeably affect regional welfare. ...
Article
Democracies often fail to aggregate information, while speculative markets excel at this task. We consider a new form of governance, wherein voters would say what we want, but speculators would say how to get it. Elected representatives would oversee the after-the-fact measurement of national welfare, while market speculators would say which policies they expect to raise national welfare. Those who recommend policies that regressions suggest will raise GDP should be willing to endorse similar market advice. Using a qualitative engineering-style approach, we present three scenarios, consider thirty-three design issues, and finally a more specific design responding to those concerns.
... See, e.g., Hanson (2003) (revised). See also Abramowicz (2004). When we say the market may "know", "believe" or "suggest," we are referring to the knowledge and beliefs of speculators in the market, which will be reflected in the market price. ...
Article
Full-text available
*Mr. Hahn is executive director of the Reg-Markets Center and a senior fellow at the American Enterprise Institute, and a senior visiting fellow at the Smith School at Oxford.
... identifying and quantifying externalities, establishing relevant boundaries). The (conscious or unconscious) bias of existing expert and organizational information systems (Abramowicz, 2004; McGarity, 2002) means that much necessary data will not be available in a systematic form. In the labour context, attempts to construct counteranalyses are often thwarted by managerial claims that information is "not collected" (Davenport & Brown, 2002, pp. ...
... In the limit, one might want a full joint probability distribution over all relevant events. Amid this vast combinatorial space of estimates of interest, decision-conditional forecasts stand out as being of special interest (Hanson, 1999; Berg & Rietz, 2003; Abramowicz, 2004; Hahn & Tetlock, 2004). For example, markets could estimate the sales of a particular product conditional on hiring various particular ad agencies to market that product. ...
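The decision-conditional forecast mentioned in this excerpt can be written out, under the usual risk-neutral reading of prices, as follows; the notation is illustrative and is not taken from the cited papers.

```latex
% Illustrative decision-conditional contract (notation is mine).
% Let D = 1 if ad agency A is hired and D = 0 otherwise, and let S be the
% product's eventual sales. A contract that pays S when D = 1 and is
% called off (the purchase price refunded) when D = 0 has, ignoring
% discounting and risk premia, the equilibrium price
\[
  p \;\approx\; \mathbb{E}\,[\,S \mid D = 1\,],
\]
% so contracts conditioned on hiring different agencies yield directly
% comparable conditional sales forecasts.
```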
Article
While a simple information market lets one trade on the probability of each value of a single variable, a full combinatorial information market lets one trade on any combination of values of a set of variables, including any conditional or joint probability. In laboratory experiments, we compare the accuracy of simple markets, two kinds of combinatorial markets, a call market and a market maker, isolated individuals who report to a scoring rule, and two ways to combine those individual reports into a group prediction. We consider two environments with asymmetric information on sparsely correlated binary variables, one with three subjects and three variables, and the other with six subjects and eight variables (thus 256 states).
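The abstract compares several trading institutions, including "a market maker", without specifying one here; a common market-maker design for combinatorial markets is Hanson's logarithmic market scoring rule, sketched below purely as an illustration (the liquidity parameter b and the uniform starting state are assumptions).

```python
import math

# Minimal sketch (assumption: a logarithmic market scoring rule, a
# market-maker design often used for combinatorial markets; the paper only
# says "a market maker"). States are joint assignments to binary variables,
# e.g. the 2**8 = 256 states in the larger experiment.

def lmsr_prices(quantities, b=100.0):
    """Instantaneous state prices implied by outstanding quantities q_s:
    p_s = exp(q_s / b) / sum_t exp(q_t / b)."""
    weights = [math.exp(q / b) for q in quantities]
    total = sum(weights)
    return [w / total for w in weights]

def lmsr_cost(quantities, b=100.0):
    """Cost function C(q) = b * log(sum_s exp(q_s / b)); a trade moving the
    outstanding quantities from q to q' costs C(q') - C(q)."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

# Any marginal or conditional probability is a sum or ratio of state
# prices, which is what lets a combinatorial market quote joint and
# conditional bets.
n_vars = 3
states = list(range(2 ** n_vars))          # 8 joint states of 3 binary variables
prices = lmsr_prices([0.0] * len(states))  # uniform prices before any trades
print(prices)
```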
... Most political and law articles emphasize legal issues related to PMs (Abramowicz 1999; Abramowicz 2004; Bell 2002; Cherry et al. 2006b; Hahn et al. 2006; Mccarthy 2007). In "Gambling for the Good, Trading for the Future: The Legality of Markets in Science Claims" (Bell 2002), Bell stated that PMs could effectively open a shortcut to the future, giving answers more quickly, accurately, and inexpensively than other forecasting methods. ...
Conference Paper
This paper presents an analysis of prediction market (PM) research relevant to information systems. Prediction markets are (online) markets in which contracts on the outcomes of future events are traded, typically outside existing exchanges. As an emerging research area, prediction markets have received considerable attention from several disciplines, including economics, politics, marketing, computer science, and electronic commerce. In information systems research, however, they have been largely ignored. This study reviewed 93 academic articles concerning prediction markets. The analysis reveals that an increasing volume of PM research has been conducted, and that the research themes of these studies can be categorized into three groups: general introduction, theoretical work, and PM applications. Building upon this work, we argue for the importance of future prediction market research and suggest potential research targets for IS researchers.
Chapter
Delphi markets denote approaches to, and implementations of, the integration of prediction markets and Delphi studies (Real-Time Delphi). By combining the two forecasting methods, their respective weaknesses can potentially be offset. For example, prediction markets can be used to select participants with relevant expertise, and their game-like approach and incentive mechanisms also motivate long-term participation. In this contribution, two potentials for prediction markets and four potentials for Delphi studies that become possible through the integration are derived theoretically. Three different integration approaches are then presented, illustrating integration at the user, market, and Delphi-question levels and showing that, depending on the approach, not all potentials can be realized. Finally, recommendations for the use of Delphi markets are derived, and existing limitations of Delphi markets as well as future developments are outlined.
Article
In this article, we attempt to better understand war’s preponderance by exploring its relation to something we commonly see as ever present: the economy and the institutions of finance through which it is enacted. We delineate histories of warfare and finance, rendering our present as one of ‘war amongst the people’, in Rupert Smith’s words, in which finance is exemplified by the logic of the derivative. Through detailed examination of an infamous comment by Donald Rumsfeld, the then US Secretary of Defense, and the US Defense Department’s short-lived Policy Analysis Market, we explore the management of knowledge enabled by the derivative as emblematic of our times in both military and financial circles and draw upon the work of Randy Martin to suggest that this logic is increasingly imperial in its reach and ubiquitous in its effects, becoming in the process the key organisational technology of our times. At the core of the functioning of the derivative, we contend, in all of the domains in which we witness it at work, is an essential indifference to the underlying circumstances from which it purportedly derives, leaving us in a world in which we endlessly manage risks to our future security but at the cost of the loss of genuinely open futures worthy of our interest.
Article
Full-text available
Among the various instruments that have emerged from the culture of evaluating public policies and normative production (e.g. indicators), impact assessment (IA) stands out as a major policy innovation, one that diffused to almost all Member States of the EU during the last decade. Although the existence of various practices and understandings of the concept makes a precise definition particularly challenging, this contribution attempts to provide a typology of impact assessments in Europe, based on an empirical analysis of IA practices by the EU in 25 EU Member States as well as a theoretical reflection on the interaction of expertise with politics. © E.N.A.
Chapter
In Chapter 7, I considered division of labor as one basis on which to establish a community of enquiry and examined the ways in which the complexity of the problems such a community might work on might prevent it from finding optimal solutions to those problems, even when there was good facilitation of the sort of collaboration that is necessary to overcome social comparison inhibitions in collective enterprises.
Article
Common methods for obtaining and organizing information for evaluating human resource development (HRD) decisions, such as surveys, focus groups, Delphi processes, and discussion at business meetings, can be relatively costly, ad hoc, and difficult to apply. In this article, a review is presented of relatively inexpensive, continuous, and easy-to-apply innovations in information aggregation for examining futures of ideas that are drawn from principles and mechanisms of commodity futures markets. A description is given of how futures markets for ideas have strong applicability to strategic, tactical, and operational decisions about the development, diffusion, and implementation of HRD products and services. Examples are offered for how idea futures markets could support HRD decisions about sales forecasting, product efficacy, project management, environmental scanning, and identification of expertise.
Article
Intellectual property is a vital part of the global economy, accounting for about half of the GDP in countries like the United States. Innovation, competition, economic growth and jobs can all be helped or hurt by different approaches to this key asset class, where seemingly slight changes in the rules of the game can have remarkable impact. This book brings together diverse perspectives from the fields of law, economics, business and political science to explore the ways varying approaches to intellectual property can positively and negatively impact our economy and society. Employing approaches that are both theoretically rigorous and grounded in the real world, Perspectives on Commercializing Innovation is well suited for practising lawyers, managers, lawmakers and analysts, as well as academics conducting research or teaching in a range of courses in law schools, business schools and economics departments, at either the undergraduate or graduate level.
Article
The analysis herein arises from the collision course between the sweeping reforms mandated by the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 and a single sentence of the U.S. Code, adopted nearly fifteen years earlier and largely forgotten ever since. Few were likely thinking of Section 106 of the National Securities Market Improvement Act when the Dodd-Frank Act was enacted on July 21, 2010. As applied by the D.C. Circuit less than a year later in Business Roundtable v. SEC, however, that provision's peculiar requirement of cost-benefit analysis could prove the new legislation's undoing. To help navigate this potential impasse, the Article that follows suggests the need to more carefully analyze the function and form of the cost-benefit analysis mandate in Section 106 and develops a generally applicable framework for doing so. Discussions of cost-benefit analysis have traditionally approached it as a fairly singular phenomenon-with broad aspirations of "efficiency" as its purpose and with its application in environmental and risk regulation understood to capture its form. In reality, cost-benefit analysis is both more ad hoc-and more systematically varied-than this account suggests. The framework proposed herein thus makes an important contribution to our understanding of the complexities and varieties of cost-benefit analysis generally. In the particular case of Section 106, meanwhile, it counsels a distinct function and particular characteristics of form that will better direct its application-both to the myriad regulations mandated by the Dodd-Frank Act and beyond. Properly understood, Section 106 is designed to encourage SEC attention to substantive considerations that might otherwise be neglected, given the Commission's traditional focus on investor protection. As to form, Section 106 constitutes a true mandate and one properly subject to judicial review. Contrary to the analysis in Business Roundtable, however, that mandate is procedural rather than substantive in nature. By comparison with formal cost-benefit analysis, it is less rigidly quantitative. It does, however, demand careful attention to the distributional impacts of relevant rulemaking. To such particularized ends and in such tailored form, ultimately, cost-benefit analysis has the potential to generate significant insight-both under Section 106 and for financial regulation as a whole.
Article
In this paper we provide a first analysis of forecasts generated by a new economic derivatives market. We first summarize findings from previous markets in this domain and detail the known shortcomings of the binary market designs used so far. We then propose a radically different approach using a linear payout function. The theoretical improvements are threefold: first, the number of traded stocks is reduced, leading to higher liquidity in those stocks; second, the "partition-dependence" bias can be avoided; and last, information can be aggregated continuously and over longer time horizons.
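As a worked illustration of the linear payout idea (the payout parameters below are hypothetical, not the paper's):

```latex
% Illustrative linear-payout contract (parameters hypothetical).
% Let x be the realized value of the forecast variable (e.g. an inflation
% rate in percent) and let the contract pay
%   \pi(x) = a + b\,x .
% Under risk neutrality, a market price p implies the forecast
\[
  \mathbb{E}[x] \;=\; \frac{p - a}{b},
\]
% so a single contract spans the whole range of outcomes instead of
% partitioning it into bins, which is how this design concentrates
% liquidity in one asset and avoids the partition-dependence bias.
```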
Article
Macro-economic forecasts are used extensively in industry and government even though their historical accuracy and reliability are disputed. Modern information systems facilitate participatory, crowd-sourced processes that harness collective intelligence. One instantiation of such wisdom of the crowds is the prediction market, which has proven to successfully forecast the outcomes of elections, sporting events and product sales. We therefore design a prediction market specifically for macro-economic variables in Germany. The proposed market design differs significantly from previous ones and solves some of the known problems, such as low liquidity and partition-dependence framing effects. The market acts as a mechanism not only to aggregate dispersed information but also to aggregate individual forecasts; it does so by incentivizing participation and rewarding early, precise forecasts. Moreover, the market platform is so far unique in aggregating these forecasts continuously and over a long time horizon. Analyzing the market-generated forecasts, we find that forecast accuracy improves steadily over time and that the generated forecasts performed well in comparison with the Bloomberg survey forecasts. From an individual perspective, market participants interact in a repeated decision-making environment closely resembling decision-making in financial markets. We analyze the impact of cognition, risk aversion and confidence on trading activity and success.
Article
The logics of ‘finance’ and ‘security’ have been enmeshed within each other in complicated ways since at least the start of the 20th century. As fields deeply alive to the possibilities and dangers associated with risk and uncertainty, finance and security occupy overlapping but uneven fields of operation. This article examines one particular financial mechanism – political prediction markets – in order to trace out the tensions and intersections of finance and security in one particular site. Political prediction markets are designed to harness the predictive power of the market to address an inherently uncertain object – the weather, political events, terrorism, etc. A series of recent cases – most notoriously a proposal by the Pentagon to construct a ‘terrorism futures market’ – have sought to recast political prediction markets as a security practice and to enlist these markets in the ongoing ‘war on terror’. This article argues that these attempts at financializing security offer a particularly useful glimpse into one point of overlap between security and finance. As markets constructed to measure and manage uncertainty, experiments in security prediction markets foreclose political space not only as a ritual of securitization that places certain issues above or beyond political deliberation but also as a reinvocation of a conception of ‘finance’ as a somehow rational and technical domain. As the terrorism futures case reminds us, however, the rational ambitions associated with these two governmentalities of financialization and securitization can become corroded or can lose coherence in unpredictable ways. It is in the political tension that is generated through such corrosions that the future of these kinds of experiments in the financialization of security will ultimately be decided.
Article
Successful democracies throughout history--from ancient Athens to Britain on the cusp of the industrial age--have used the technology of their time to gather information for better governance. Our challenge is no different today, but it is more urgent because the accelerating pace of technological change creates potentially enormous dangers as well as benefits. Accelerating Democracy shows how to adapt democracy to new information technologies that can enhance political decision making and enable us to navigate the social rapids ahead. John O. McGinnis demonstrates how these new technologies combine to address a problem as old as democracy itself--how to help citizens better evaluate the consequences of their political choices. As society became more complex in the nineteenth century, social planning became a top-down enterprise delegated to experts and bureaucrats. Today, technology increasingly permits information to bubble up from below and filter through more dispersed and competitive sources. McGinnis explains how to use fast-evolving information technologies to more effectively analyze past public policy, bring unprecedented intensity of scrutiny to current policy proposals, and more accurately predict the results of future policy. But he argues that we can do so only if government keeps pace with technological change. For instance, it must revive federalism to permit different jurisdictions to test different policies so that their results can be evaluated, and it must legalize information markets to permit people to bet on what the consequences of a policy will be even before that policy is implemented. Accelerating Democracy reveals how we can achieve a democracy that is informed by expertise and social-scientific knowledge while shedding the arrogance and insularity of a technocracy.
Article
This Article proposes that employees be given the right to vote on mergers, sales of substantially all assets, and the other corporate combinations for which shareholders can vote. Unlike shareholders’ ballots, the employees’ choices would not be binding on the company. Referenda would be held before the required shareholders’ elections so that shareholders could know about the results before they cast their votes. Although it might be possible to implement the referendum through federal law, states could also insert the referendum into their systems for corporate governance.
Article
There is wide-ranging recognition of the need for “new accountings” that foster democracy and facilitate more participatory forms of social organization. This is particularly evident in the sustainable development and social and environmental accounting literatures, with calls for more dialogic forms of accounting. However, there has been very little consideration of how “democracy” should be approached; and, in particular, the implications of any particular model of democracy for the kinds of accounting technologies that might be advocated. This paper seeks to contribute to the theoretical development of dialogic accounting and focuses on the sustainability arena for illustrative purposes. It draws on debates between deliberative and agonistic democrats in contemporary political theory to argue the case for an agonistic approach to dialogics; one that respects difference and takes interpretive and ideological conflicts seriously. In recognition of the ways in which power intrudes in social relations so as to deny heterogeneity and privilege certain voices, it seeks to promote a broadly critical pluralist approach. To this end, the paper proposes a set of key principles for dialogic accounting and draws on ecological economist Peter Söderbaum’s work on positional analysis applied to an existing accounting tool – the Sustainability Assessment Model (SAM) – to illustrate how such an approach might be operationalized. The paper also discusses limitations of the dialogic accounting concept and impediments to its implementation.
Article
Full-text available
This Essay reports the results of an interdisciplinary project comparing political science and legal approaches to forecasting Supreme Court decisions. For every argued case during the 2002 Term, we obtained predictions of the outcome prior to oral argument using two methods: one a statistical model that relies on general case characteristics, and the other a set of independent predictions by legal specialists. The basic result is that the statistical model did better than the legal experts in forecasting the outcomes of the Term's cases: The model predicted 75% of the Court's affirm/reverse results correctly, while the experts collectively got 59.1% right. These results are notable, given that the statistical model disregards information about the specific law or facts of the cases. The model's relative success was due in large part to its ability to predict more accurately the important votes of the moderate Justices (Kennedy and O'Connor) at the center of the current Court. The legal experts, by contrast, did best at predicting the votes of the more ideologically extreme Justices, but had difficulty predicting the centrist Justices. The relative success of the two methods also varied by issue area, with the statistical model doing particularly well in forecasting "economic activity" cases, while the experts did comparatively better in the "judicial power" cases. In addition to reporting the results in detail, the Essay explains the differing methods of prediction used and explores the implications of the findings for assessing and understanding Supreme Court decision making.
Article
Purpose – The purpose of this paper is to briefly explore some recent curious interlocking of the ideology of markets and the practice of policy. Design/methodology/approach – This particular discursive combine has most visibly been apparent in the concatenated birth and death of the US Defense Department's so‐called “Policy Analysis Market” (PAM). Yet PAM is but the most notorious example of a more sustained and pervasive attempt to use the technologies and disciplines of markets to render policy both better informed and more amenable to control through robust and seemingly incontestable systems of accountability. Given its prominence, our way in is through a brief description of PAM's origins and demise. Findings – It is found that PAM and its similar brethren of markets for use in policy formation and judgement are less concerned with the capture of reality and more with the disciplining power of a curious “objectivity”. Originality/value – Projects such as PAM are thus not easily challengeable on grounds of their veracity. Rather research that seeks to interrogate the use of market technologies in policy must look to their context and effects.
Article
In a controversial move in late 2004, the Securities and Exchange Commission (SEC) decided to require hedge fund managers to register with the agency as investment advisers. Until then, the SEC had largely refrained from ramping up hedge fund regulation, even after the collapse of Long-Term Capital Management in 1998. Although this article takes some issue with the SEC's decision to regulate hedge funds, its primary focus is not on the particular costs and benefits of regulating hedge funds. The inquiry is broader: what can we learn generally about SEC decision making and securities regulation from the SEC's decision to regulate hedge funds now by subjecting fund managers to the registration requirements of the Investment Advisers Act? Since the SEC consciously shifted direction in deciding to regulate hedge funds - and in doing so overstepped the traditional boundary of securities regulation by looking past the ability of sophisticated and wealthy hedge fund investors to protect themselves - the hedge fund rule prompts reconsideration of SEC decision making, particularly in the aftermath of Enron and the other recent corporate scandals that marked the early 2000s. Although nobody knows for sure what motivates a regulator, the SEC's decision to adopt its new hedge fund rule is consistent with two views - one political; the other, psychological. First, the SEC did not want to get caught flat footed and embarrassed again, as it had been by Enron, WorldCom, the mutual fund abuses, and securities analyst conflicts of interest; and second, after the earlier scandals, the risk of fraud and other hedge fund abuses weighed disproportionately on the agency, prompting it to act when it had not in the past. The particular concern is that such political and psychological influences result in overregulation. This article concludes with a suggestion. To mitigate the risk of overregulation, the SEC should increasingly consider using default rules instead of mandatory rules. Defaults at least give parties a chance to opt out if the SEC goes too far. Indeed, in some cases, perhaps the SEC could exercise an even lighter touch and simply articulate best practices.
Article
This short Essay addresses three topics on one aspect of the hedge fund industry - the SEC's recent efforts to regulate hedge funds. First, this Essay summarizes the regulation of hedge funds under U.S. federal securities laws insofar as protecting hedge funds is concerned. The discussion highlights four basic choices facing the SEC: (1) do nothing; (2) substantively regulate hedge funds directly; (3) regulate hedge fund managers; and (4) regulate hedge fund investors. Second, this Essay addresses the boundary between market discipline and government intervention in hedge fund regulation. To what extent should hedge fund investors be left to fend for themselves? Third, this Essay highlights two factors impacting regulatory decision making that help explain why the SEC pivoted in 2004 to regulate hedge funds when it had abstained from doing so in the past. These two factors are politics and psychology.
Article
Full-text available
Whether and how to provide transition relief from a change in legal regime is a question of critical importance. Legislatures and agencies effect changes to the law constantly, and affected private actors often seek relief from those changes, at least in the short term. Scholarship on transition relief therefore has focused almost entirely on examining when transition relief might be justified and now recognizes that there may be settings where relief from legal transitions is appropriate. Yet largely absent from these treatments is an answer to the question of which institutional actor is best positioned to decide when legal transition relief is appropriate and what form it should assume. In this Article, we address this issue in two parts: Can the private market develop adequate risk-spreading devices such that government relief is unnecessary? If government relief is warranted, what government actors are best suited to provide relief? We find that private markets will be unable to provide adequate transition insurance due to insurmountable pricing difficulties, and that the task must thus fall to governmental actors. We then analyze the available governmental actors and conclude that, in many cases, an independent agency will be best positioned to make reliable and welfare-enhancing decisions regarding transition relief.
Article
Much of legal scholarship, and in particular Law and Economics, evaluates law and predicts its effects based on an analysis of law's manipulation of individuals' incentives. Although manipulating incentives certainly explains some of law's impact on behavior (e.g., increasing airport security may deter some airplane hijackers), law has an equally important impact on behavior by manipulating perceptions (e.g., causing the public to believe that the risk of airplane hijacking has diminished as a result of the law that increased airport security). Thus, like the placebo effect of medicine, a law may impact social welfare beyond its objective effects by manipulating the public's subjective perception of the law's effectiveness. Failure to consider this largely ignored legal placebo effect may cause significant overstatement or understatement of a law's benefits. By shedding light on laws' effects on perceptions, this Article reveals forces that shape the creation of law. Legal placebo effects are a method by which politicians extract private benefits from the identification and mitigation of gaps between real and perceived risks. Some private entities compete with lawmakers through extra-legal methods. This competition affects laws' subject matter and the manner in which laws are presented to the public.
Article
The availability heuristic — a cognitive rule of thumb whereby events that are easily brought to mind are judged to be more likely — is employed by decision-makers on a daily basis. Availability campaigns occur when individuals and groups strategically exploit this cognitive tendency in order to generate publicity for a particular issue, creating pressure to effect legislative change. This paper is the first to argue that environmental availability campaigns are more beneficial than they are harmful. Because they result in pressure on Congress, these campaigns serve as a catalyst for the enactment of critical new legislative initiatives. Specifically, these campaigns streamline the legislative process by: (1) determining in a transparent and nonarbitrary manner which issues receive attention; (2) overcoming some of the undesirable barriers to the enactment of new initiatives; and (3) encouraging efficient, bipartisan cooperation to pass vital legislation and regulation. Availability campaigns have resulted in critically valuable directives such as the DDT ban, Superfund, and the Oil Pollution Act. Although the primary focus of this paper is environmental legislation, availability campaigns may have benefits in a wide variety of other areas of law and regulation.
Article
Presidential betting markets predict election outcomes more accurately than polls because of their ability to effectively aggregate information. Empirical research and theory indicate that the result extends to other contexts. Betting markets, more formally called information markets, provide accurate predictions about future product sales, box office receipts, and other future events. Moreover, market predictions generally outperform other prediction mechanisms. This paper argues that empirical research and theory indicate that we should use information markets' predictive power to make administrative decisions. In addition, it presents a model information market designed to help policy makers evaluate policies prior to their implementation by providing them with information about the policies' effects in the form of market predictions. To design such a market, it is necessary to determine how the market should pay off bettors when the agency does not implement a policy because the market predicts it will have an adverse effect. The problem is that bets pay off based on the outcome of an event, but when policy makers decide not to implement a policy, the policy has no effect and thus it is unclear how to compensate bettors. This paper shows that through clever market design it is possible to return to bettors the market price of a bet as it stood prior to the agency's decision not to implement the policy on which the bet depends, without fear of market manipulation. Consequently, even in cases where using market predictions to make administrative decisions appears problematic, it is possible.
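The settlement rule the abstract describes, returning the pre-decision market price when the agency declines to implement the policy, might look roughly like the following sketch; the class, field names, and outcome scale are invented for illustration and are not the article's specification.

```python
from dataclasses import dataclass

# Minimal sketch of the settlement rule described in the abstract: bets on a
# policy's measured effect pay off normally if the agency implements the
# policy; if the agency declines (e.g. because the market predicts an
# adverse effect), each position is unwound at the market price observed
# just before the decision, so traders are made whole without the contract
# ever paying on an outcome that never occurs. Names are hypothetical.

@dataclass
class Position:
    quantity: float        # number of contracts held
    purchase_price: float  # price paid per contract

def settle(position: Position, implemented: bool,
           measured_outcome: float | None,
           pre_decision_price: float) -> float:
    """Return the trader's net payoff under the sketched rule."""
    if implemented:
        # Normal settlement: the contract pays the measured policy outcome,
        # expressed in the same units as the contract price.
        return position.quantity * (measured_outcome - position.purchase_price)
    # Policy not implemented: unwind at the last pre-decision price.
    return position.quantity * (pre_decision_price - position.purchase_price)

# Example: a trader who bought at 0.40 is refunded near that level if the
# policy is shelved, rather than losing the bet outright.
print(settle(Position(10, 0.40), implemented=False,
             measured_outcome=None, pre_decision_price=0.42))  # -> 0.2
```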
Article
This is a paper about using reputation tracking technologies to displace criminal law enforcement and improve the tort system. The paper contains an extended application of this idea to the regulation of motorist behavior in the United States and examines the broader case for using technologies that aggregate dispersed information in various settings where reputational concerns do not adequately deter antisocial behavior. The paper begins by exploring the existing data on How's My Driving? programs for commercial fleets. Although more rigorous study is warranted, the initial data is quite promising, suggesting that the use of How's My Driving? placards in commercial trucks is associated with fleet accident reductions ranging from 20% to 53%. The paper then proposes that all vehicles on American roadways be fitted with How's My Driving? placards so as to collect some of the millions of daily stranger-on-stranger driving observations that presently go to waste. By delegating traffic regulation to the motorists themselves, the state might free up substantial law enforcement resources, police more effectively dangerous and annoying forms of driver misconduct that are rarely punished, reduce information asymmetries in the insurance market, improve the tort system, and alleviate road rage and driver frustration by providing drivers with opportunities to engage in measured expressions of displeasure. The paper addresses obvious objections to the displacement of criminal traffic enforcement with a system of How's My Driving?-based civil fines. Namely, it suggests that by using the sorts of feedback algorithms that eBay and other reputation tracking systems have employed, the problems associated with false and malicious feedback can be ameliorated. Indeed, the false feedback problem presently appears more soluble in the driving context than it is on eBay. Driver distraction is another potential pitfall, but available technologies can address this problem, and the implementation of a How's My Driving? for Everyone system likely would reduce the substantial driver distraction that already results from driver frustration and rubbernecking. The paper also addresses the privacy and due process implications of the proposed regime. It concludes by examining various non-driving applications of feedback technologies to help regulate the conduct of soldiers, police officers, hotel guests, and participants in virtual worlds, among others.
Article
This Article examines how liability insurers transmit and transform the content of corporate and securities law. D&O liability insurers are the financiers of shareholder litigation in the American legal system, paying on behalf of the corporation and its directors and officers when shareholders sue. The ability of the law to deter corporate actors thus depends upon the insurance intermediary. How, then, do insurers transmit and transform the content of corporate and securities law in underwriting D&O coverage? In this Article, we report the results of an empirical study of the D&O underwriting process. Drawing upon in-depth interviews with underwriters, actuaries, brokers, lawyers, and corporate risk managers, we find that insurers seek to price D&O policies according to the risk posed by each prospective insured and that underwriters focus on corporate governance in assessing risk. Our findings have important implications for several open issues in corporate and securities law. First, individual risk-rating may preserve the deterrence function of corporate and securities law by forcing worse-governed firms to pay higher D&O premiums than better-governed firms. Second, the importance of corporate governance in D&O underwriting provides evidence that the merits do matter in corporate and securities litigation. And third, our findings suggest that what matters in corporate governance are deep governance variables such as culture and character, rather than the formal governance structures that are typically studied. In addition, by joining the theoretical insights of economic analysis to sociological research methods, this Article provides a model for a new form of corporate and securities law scholarship that is both theoretically informed and empirically grounded.
Article
Verdicts other than guilty and not guilty are exceptional in American criminal law, yet some legal systems routinely use more than two verdicts. In Scotland, judges and juries in criminal trials choose from three verdicts: guilty, not proven, and not guilty. Not proven and not guilty are both acquittals, indistinguishable in legal consequence but different in connotation. Not guilty is for a defendant the jury thinks is innocent; not proven, for a case with insufficient evidence of guilt. One verdict announces "legally innocent" and thus exonerates. The other says "inconclusive evidence" and fails to exonerate or even stigmatizes. The American verdict of not guilty covers both of these grounds for acquittal. The jury that thinks a defendant is truly innocent has no means of conveying that message. For the jury that considers the charge unproved but does not want to assert factual guilt or innocence, no verdict speaks only to proof. The two-verdict system, or any other verdict system for that matter, limits the jury's speech. Reasons could be given for obscuring the verdict: one might say a two-verdict system maintains the presumption of innocence and prevents social stigma for unproven charges. As shown in this Comment, a two-verdict system secures neither of these advantages. With a high standard of proof, such as beyond a reasonable doubt, the public will know that some defendants are being acquitted because of insufficient evidence, not because of actual innocence. With that knowledge, the public will see the acquittal in a two-verdict system as stigmatizing and tarnishing, and no well-intentioned decree can change that fact. Once this stubborn reality of tarnishing acquittals is recognized, the arguments for a two-verdict system lose their force. This Comment proposes introducing a verdict patterned after Scotland's not proven. Part I surveys legal systems that have more than one kind of acquittal available to most defendants. Part II proposes a not proven verdict for the United States. Part III analyzes consequences of introducing this verdict, such as more information, more acquittals, and more stigma.
Article
Administrative law has been transformed after 9/11, much to its detriment. Since then, the government has mobilized almost every part of the civil bureaucracy to fight terrorism, including agencies that have no obvious expertise in that task. The vast majority of these bureaucratic initiatives suffer from predictable, persistent, and probably intractable problems - problems that contemporary legal scholars tend to ignore, even though they are central to the work of the writers who created and framed the discipline of administrative law. We analyze these problems through a survey of four administrative initiatives that exemplify the project of sending bureaucrats to war. The initiatives - two involving terrorism financing, one involving driver licensing, and one involving the adjudication of asylum claims - grow out of the two statutes perhaps most associated with the war on terrorism, the USA PATRIOT Act of 2001 and the REAL ID Act of 2005. In each of our case studies, the civil administrative schemes used to fight terrorism suffer from the incongruity of fitting civil rules into an anti-civil project, the difficulties of delegating wide discretion without adequate supervision, and the problem of using inexpert civil regulators to serve complex law enforcement ends. We conclude that anti-terrorism should rarely be the principal justification for a new administrative initiative, but offer some recommendations as to when it might make sense to re-purpose civil officials as anti-terrorism fighters.
Article
Information markets are markets for contracts that yield payments based on the outcome of an uncertain future event. They are used to predict a wide range of events, from presidential elections to printer sales. These markets frequently outperform both experts and opinion polls, and many scholars believe they have the potential to revolutionize policymaking. At the same time, they present a number of challenges. This collection of essays provides a state-of-the-art analysis of the potential impact of information markets on public policy and private decision-making. The authors assess what we really know about information markets, examine the potential of information markets to improve policy, lay out a research agenda to help improve our understanding of information markets, and explain how we might systematically improve the design of such markets.
Article
This paper is a critique of Margaret Berger and Aaron Twerski, "Uncertainty and Informed Choice: Unmasking Daubert," forthcoming in the Michigan Law Review. Berger and Twerski propose that courts recognize a cause of action that would allow plaintiffs who claim injury from pharmaceutical products, but who do not have sufficient evidence to prove causation, to recover damages for deprivation of informed choice. Berger and Twerski claim inspiration from the litigation over allegations that the morning sickness drug Bendectin caused birth defects. Considering the criteria Berger and Twerski suggest for their proposed cause of action in the context of Bendectin, it appears that a pharmaceutical manufacturer could be held liable for failure to provide informed choice: (a) even when there was never any sound scientific evidence suggesting that the product caused the harm at issue, and there was an unbroken consensus among leading experts in the field that the product did not cause such harm; (b) when the product prevented serious harm to a significant number of patients, and prevented substantial discomfort to a much greater number, even when there were no available alternative products; (c) when a plaintiff claims that she would not have taken the product had she been informed of an incredibly remote and completely unproven risk; and (d) when the defendant is unable to prove a negative - that the product in question definitely did not cause the claimed injury. No rational legal system would allow such a tort. Putting the Bendectin example aside, the informed choice proposal has the following additional weaknesses: (1) it invites reliance on unreliable junk science testimony; (2) it ignores the fact that juries are not competent to resolve subtle risk assessment issues; (3) it reflects an unwarranted belief in the ability of juries to both follow limiting instructions and ignore their emotions; (4) it ignores the problems inherent to multiple trials - even if defendants were to win most informed choice cases, safe products could still be driven off the market by a minority of contrary verdicts; (5) it ignores the inevitable costs to medical innovation as pharmaceutical companies scale back on researching product categories that would be particularly prone to litigation; (6) to preempt litigation, pharmaceutical companies would overwarn, rendering more significant warnings less useful; and (7) FDA labeling requirements would arguably preempt the proposed cause of action.
Article
There was a time when judges routinely deployed legal fictions, which Lon Fuller famously defined as false statements not intended to deceive, in order to temper the disruptive effect of changes in legal doctrine. In an age of positive law, such classic legal fictions are significantly less common. But they have been replaced by new legal fictions. In fashioning legal rules, judges rely with surprising frequency on false, debatable, or untested factual premises. At times, of course, such false premises simply reflect judicial ignorance. But there is an increasingly large body of empirical research available to judges, and more often than not judges' reliance on false premises is not the result of ignorance. Instead, judges often rely on false factual suppositions in the service of other goals. In this article, Professor Smith discusses a broad range of examples of new legal fictions, false factual suppositions that serve as the grounds for judge-made legal rules. The examples, drawn from diverse areas of doctrine, suggest a set of reasons, albeit generally unexpressed, why judges rely on new legal fictions. Sometimes judges rely on new legal fictions to mask the fact that they are making a normative choice. Other times, judges rely on new legal fictions to operationalize legal theories that are not easily put into practice. Still other times, judges deploy new legal fictions to serve functional goals and to promote administrability in adjudication. Finally, new legal fictions often serve a legitimating function, and judges rely on them - even in the face of evidence that they are false - to avoid what they perceive as de-legitimating consequences. Judges rarely acknowledge that their ostensible factual suppositions are in fact new legal fictions, and they rarely articulate the reasons for relying on them. Even assuming one concludes that judges' apparent rationales for relying on them are valid, therefore, there is a serious question whether those rationales outweigh the general interest in judicial candor. After all, a general requirement of judicial candor - which permits the academy and the public to debate, criticize, and defend judges' grounds for decision - is essential to constraining judicial power. To be sure, whether any particular reason for judicial reliance on a new legal fiction is justified turns in part on an empirical judgment about the extent to which the new legal fiction actually achieves the end that the judge deployed it to achieve. But even when we can satisfactorily answer such empirical questions, we are still faced with a normative judgment about the relative desirability of candor and the goal served by dispensing with candor. Professor Smith concludes that the ends served by reliance on new legal fictions usually are not sufficient to overcome the presumption in favor of judicial candor, but that in rare cases dispensing with judicial candor might be justified.
Article
Building on the success of prediction markets at forecasting political elections and other matters of public interest, firms have made increasing use of prediction markets to help make business decisions. This Article explores the implications of prediction markets for corporate governance. Prediction markets can increase the flow of information, encourage truth telling by internal and external firm monitors, and create incentives for agents to act in the interest of their principals. The markets can thus serve as potentially efficient alternatives to other approaches to providing information, such as the Sarbanes-Oxley Act's internal controls provisions. Prediction markets can also produce an avenue for insiders to profit on and thus reveal inside information while maintaining a level playing field in the market for a firm's securities. This creates a harmless way around existing insider trading laws, undercutting the argument for the repeal of these laws. In addition, prediction markets can reduce agency costs by providing direct assessments of corporate policies, thus serving as an alternative or complement to shareholder voting as a means of disciplining corporate boards and managers. Prediction markets may thus be particularly useful for issues where agency costs are greatest, such as executive compensation. Deployment of these markets, whether voluntarily or perhaps someday as a result of legal mandates, could improve alignment between shareholders and managers on these issues better than other proposed reforms. These markets might also displace the business judgment rule because they can furnish contemporaneous and relatively objective benchmarks for courts to evaluate business decisions.
Article
In advancing his prospect theory of patents, Edmund Kitch dismissed the possibility of distributing rights to particular inventions through auctions, arguing that the patent system avoids the need for governmental officials to define the boundaries of inventions that have not yet been created. Auctions for patent rights to entire inventive fields, however, might accentuate the benefits of a prospect approach, by allowing for earlier and broader patents. Auction designs that award the patent to the bidder that commits the most money to research and development or that agrees to charge the lowest price, meanwhile, can reduce the costs of the prospect approach. Concerns about the government's ability to decide correctly when to hold auctions, however, provide an uneasy case for patent races over patent auctions. More modest uses of auctions might improve welfare, though. For example, an auction to a small number of parties of the right to race in a technological field might reduce wasteful duplication and thus accelerate innovation. Similarly, patentees might be allowed to demand auctions for extended patent scope, with the caveat that a patentee would need to outbid others by a substantial amount to win such an auction.
Article
In his famous paper advancing a prospect theory of patents, Edmund Kitch found inspiration in, but quickly dismissed, a footnote authored by Yoram Barzel suggesting that rights to inventions might be distributed through an auction mechanism. Kitch maintained that the patent system itself achieves the benefit of an auction by giving control over the inventive process at a relatively early stage. The patent system, moreover, avoids the need for governmental officials in an auction regime to define the boundaries of inventions that have not yet been created. Patent auctions, however, may be more appealing if the auctions are for rights to inventive fields, rather than to specific inventions. Indeed, an auction system may be seen as an extreme version of the prospect theory approach, by allowing patents to be issued at an earlier stage and with broader scope than is feasible in a conventional patent system. Like prospects generally, auctions could help avoid the costs associated with duplicative patent races and with inventing around existing patents. An additional advantage of auctions is that variations in the design of the auction mechanism can help respond to specific concerns about the prospect approach. For example, an auction awarding a patent to the party that agrees to commit the most resources to a particular technological field may alleviate concerns that the prospect approach could stifle rather than stimulate innovation. Similarly, to mitigate concerns about deadweight loss, the government could sponsor an auction in which the field is awarded to the party that, in addition to paying a set amount of cash, agrees to charge the lowest price or hold the patent for the shortest term. The analysis, however, identifies a number of empirical uncertainties that together provide an uneasy case for the status quo. A principal problem is that there exists a fundamental tradeoff in auction design, between approaches that maximize the auction winner's incentive to develop inventions within the scope of the patent grant and approaches that most effectively reduce deadweight loss. There is no guarantee that the government would optimally resolve these tradeoffs ex ante. Although governmental officials ex post could compare bids that offer combinations of commitments to development and concessions on price, this approach too is prone to error. The government also faces a daunting informational task in determining when to hold a patent auction. The danger of governmental errors suggests that if patent auctions are to have any place in our innovation policy, they must avoid governmental discretion in determining when auctions should occur and in identifying the most attractive bidders. An ambitious approach might be to use information markets to make both such determinations, though it may be too early in the history of information markets to assess whether they are up to the task. A more modest approach would allow patentees to demand auctions that would provide additional patent scope, with the caveat that a patentee would need to substantially outbid others to win such an auction, and failing victory would be fined.
Article
Full-text available
This Article applies the emerging field of information markets to the prediction of Supreme Court decisions. Information markets, which aggregate information from a wide array of participants, have proven highly accurate in other contexts such as predicting presidential elections. Yet never before have they been applied to the Supreme Court, and the field of predicting Supreme Court outcomes remains underdeveloped as a result. We believe that creating a Supreme Court information market, which we have named Tiresias after the mythological Greek seer, will produce remarkably accurate predictions, create significant monetary value for participants, provide guidance for lower courts, and advance the development of information markets.
Article
Full-text available
This Article focuses on why information markets have covered certain subject areas, sometimes of minor importance, while neglecting other subject areas of greater significance. To put it another way, why do information markets exist to predict the outcome of the papal conclave and the Michael Jackson trial, but no information markets exist to predict government policy conclusions, Supreme Court decisions, or the rulings in Delaware corporate law cases? Arguably, from either a dollar value or a social utility perspective, these areas of law and business would be more important than the outcome of, say, the Jackson trial. Why, then, do these frivolous markets on celebrities like Michael Jackson thrive, while others with more serious aims have yet to be started? To answer this question, we present data from interviews with market founders about their motivations in starting various information markets. In Section III, we insert the data into an analytical framework, exploring where markets exist (primarily politics and entertainment), where they do not, and some of the reasons, including legal considerations and microeconomic decisions, that affect the subjects that information markets cover. In particular, the laws about gambling seem to have had a significant impact on the development of information markets. Despite a trend toward information markets in entertainment and politics, the emergence of an information market in any particular subject area is at least partially the product of a random walk, meaning that it cannot be predicted in advance from past data. Finally, in the last part of our Article, we contemplate whether information markets must endure the vagaries of the random walk or whether they could develop in a more organized and systematic way, either through private institutions or through government action.
Article
This paper explores the design and implementation of prediction markets, markets strategically constructed to aggregate traders' beliefs about future events. It posits that prediction markets are particularly useful as forecasting tools where traders are constrained - legally, politically, professionally, or bureaucratically - from directly sharing the information that underlies their beliefs. It concludes by articulating a possible design for such a market, as an alternative to a Pentagon program that collapsed amid public outcry in 2003.
Conference Paper
Macro-economic forecasts are used extensively in industry and government even though their historical accuracy and reliability are disputed. Prediction markets have proven successful at forecasting the outcomes of elections, sporting events, and product sales. In this paper we provide a detailed analysis of forecasts generated by a new prediction market for economic derivatives. The market is designed specifically to forecast macro-economic variables and differs significantly from previous designs, solving some known problems such as low liquidity and partition-dependence framing effects. Using finance methodology, we first show that the market is sufficiently liquid to generate forecasts continuously. Second, the market's forecasts performed well in comparison with the 'Bloomberg' survey forecasts. Third, the forecasts generated by the market satisfy weak-form forecast efficiency, implying that they incorporate all publicly available information.
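A minimal sketch of the kind of weak-form efficiency check described above, assuming synthetic data and a simple Mincer-Zarnowitz style regression rather than the paper's exact test battery (realized values are regressed on market forecasts; efficiency implies an intercept near 0, a slope near 1, and residuals uncorrelated with information known at forecast time):

    import numpy as np

    rng = np.random.default_rng(0)
    forecasts = rng.normal(2.0, 0.5, size=200)             # hypothetical market forecasts (e.g., GDP growth, %)
    realized = forecasts + rng.normal(0.0, 0.3, size=200)  # realizations under an efficient market

    # OLS of realized outcomes on a constant and the forecast
    X = np.column_stack([np.ones_like(forecasts), forecasts])
    (alpha, beta), *_ = np.linalg.lstsq(X, realized, rcond=None)
    residuals = realized - (alpha + beta * forecasts)

    print(f"intercept={alpha:.3f} (efficient: ~0), slope={beta:.3f} (efficient: ~1)")
    # Forecast errors should also be unpredictable from past errors:
    print("corr(e_t, e_t-1) =", round(np.corrcoef(residuals[1:], residuals[:-1])[0, 1], 3))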
Conference Paper
In this paper we propose a research agenda on the use of information markets as tools to collect, aggregate and analyze citizens' opinions, expectations and preferences from social media in order to support public policy design and implementation. We argue that markets are institutional settings able to efficiently allocate scarce resources, aggregate and disseminate information into prices and accommodate hedging against various types of risks. We discuss various types of information markets, as well as address the participation of both human and computational agents in such markets.
Article
The use of information markets as a business intelligence (BI) technique for collecting dispersed intelligence and forming knowledge to support decision making is growing rapidly in many application fields. The objective of this chapter is to present a focused survey of how information markets work and why they produce accurate and actionable knowledge upon which effective decisions can be based. Numerous exemplars from the literature are described and key future research directions in information markets are highlighted.
Article
Prediction Markets are a family of Internet-based social computing applications that use market prices to aggregate and reveal information and opinion from dispersed audiences. The considerable complexity of these markets has so far inhibited the full realization of their promise. This paper offers the P-MART classification as a tool for organizing the current state of knowledge, aiding the construction of tailored markets, identifying ingredients for Prediction Markets' success, and encouraging research. P-MART is a dual-facet classification of Prediction Market implementations, describing both traders and markets. The proposed classification framework was calibrated by examining a variety of real-world online implementations. A publicly accessible wiki resource accompanies this paper in order to stimulate further research and future expansion of the classification.
Article
Full-text available
Current estimates of regulatory benefits are too low and possibly far too low. This is because the standard economic approach to measuring costs and benefits, which attempts to estimate people's willingness to pay for various regulatory benefits, ignores a central point about valuation, thus producing numbers that systematically understate those benefits. Conventional estimates tell us the amount of income an individual, acting in isolation, would be willing to sacrifice in return for, say, an increase in safety on the job. But while these estimates are based on the implicit assumption that economic well-being depends only on absolute income, considerable evidence suggests that relative income is also an important factor. When an individual buys additional safety in isolation, he experiences not only an absolute decline in the amounts of other goods and services he can buy, but also a decline in his relative living standards. In contrast, when a regulation requires all workers to purchase additional safety, each worker gives up the same amount of other goods, so no worker experiences a decline in relative living standards. If relative living standards matter, an individual will value an across the board increase in safety more highly than an increase in safety that he alone purchases. Where the government currently pegs the value of a statistical life at about $4 million, it ought to employ a value between $4.7 million and $7 million. A conservative reading of the evidence is that when government agencies are unsure how to value regulatory benefits along a reasonable range, they should make choices toward or at the upper end.
Article
Full-text available
Article
Full-text available
It was not supposed to be like this. In Chevron and State Farm, the Supreme Court announced what appeared to be controlling standards for substantive review of administrative decisions. Chevron adopted a two-step approach to statutory interpretation under which courts were to overturn agency interpretations that were contrary to the clear intent of Congress, but defer to permissible agency constructions of a statute. State Farm indicated that an agency's policy judgments should be analyzed according to a specific set of inquiries that focused on the agency's reasoning process. Administrative law scholars, whether they agreed or disagreed with the Court's standards, assumed that the two cases were landmark decisions that signaled a turning point in the substantive review of agency decisions. Instead, the Chevron framework has broken down, and State Farm has been all but ignored by agencies and the courts, including the Supreme Court. This article accounts for this breakdown by analyzing the impact of judicial incentives on substantive review in administrative law. Its centerpiece is a model of judicial behavior based on the "craft" and "outcome" components of judicial decisionmaking. Judges engage in the well-reasoned application of doctrine as a matter of craft, and they consider the implications of a result for the parties and society in general as a matter of outcome. When these components pull in opposite directions in a given case, our model suggests how judicial incentives influence the resolution of this tension. Our model of judicial behavior explains why Chevron and State Farm have not been as influential as commonly assumed. Judges have stronger incentives to control outcome and weaker incentives to develop determinate craft norms that limit pursuit of outcome in administrative law than in other areas of law. Because reliance on indeterminate craft norms enables judges to pursue outcome without sacrificing craft, judges have avoided applications of Chevron and State Farm that are determinate. Drawing on this model, we propose a modified approach to substantive judicial review that accounts for the way that judicial incentives influence substantive review doctrine. We recommend that Congress require courts to respond to a series of specific questions that would apply to substantive agency decisions. These questions would make it more difficult for judges to manipulate scope of review standards and would require more explicit reasons for affirming or reversing an agency decision.
Article
Full-text available
We review 74 experiments with no, low, or high performance-based financial incentives. The modal result is no effect on mean performance (though variance is usually reduced by higher payment). Higher incentives do often improve performance, typically in judgment tasks that are responsive to better effort. Incentives also reduce presentation effects (e.g., generosity and risk-seeking). Incentive effects are comparable to the effects of other variables, particularly cognitive capital and task production demands, and interact with those variables, so a narrow-minded focus on incentives alone is misguided. We also note that no replicated study has made rationality violations disappear purely by raising incentives.
Article
Full-text available
We investigate the conventional wisdom that competition among interested parties attempting to influence a decisionmaker by providing verifiable information elicits all relevant information. We find that, if the decisionmaker is strategically sophisticated and well informed about the relevant variables and about the preferences of the interested party or parties, competition may be unnecessary to achieve this result. If the decisionmaker is unsophisticated or not well informed, competition is not generally sufficient. If the interested parties' interests are sufficiently opposed, however, or if the decisionmaker is seeking to advance the parties' welfare, then competition can reduce or even eliminate the decisionmaker's need for prior knowledge about the relevant variables and for strategic sophistication. In other settings only the combination of competition among information providers and a sophisticated skepticism is sufficient to allow effective decisionmaking.
Article
Full-text available
The loss of human life resulting from environmental contaminants generally does not occur contemporaneously with the exposure to those contaminants. Some environmental problems produce harms with a latency period whereas others affect future generations. One of the most vexing questions raised by the cost-benefit analysis of environmental regulation is whether discounting, to reflect the passage of time between the exposure and the harm, is appropriate in these two scenarios. The valuations of human life used in regulatory analyses are from threats of instantaneous death in workplace settings. Discounting, to reflect that in the case of latent harms the years lost occur later in a person's lifetime, is appropriate in these circumstances. Upward adjustments of the value of life need to be undertaken, however, to account for the dread and involuntary nature of environmental carcinogens as well as for higher income levels of the victims. By not performing these adjustments, the regulatory process may be undervaluing lives by as much as a factor of six. In contrast, in the case of harms to future generations, discounting is ethically unjustified. It is simply a means of privileging the interests of the current generation. Discounting raises analytically distinct issues in the cases of latent harms and harms to future generations. In the case of latent harms, one needs to make intra-personal, intertemporal comparisons of utility, whereas in the case of harms to future generations one needs to define a metric against which to compare the utilities of individuals living in different generations. Thus, the appropriateness of discounting should be resolved differently in the two contexts.
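A hedged numerical illustration of the latency point (the rate, latency period, and dollar value below are hypothetical, not the article's figures): discounting a statistical life valued at $V$ over a latency period of $t$ years at rate $r$ gives

\[
  PV = \frac{V}{(1+r)^{t}}, \qquad \text{e.g. } \frac{\$6{,}000{,}000}{(1.03)^{20}} \approx \$3.3 \text{ million},
\]

so a 3% rate applied over a 20-year latency roughly halves the value assigned to the life, before any upward adjustment for dread, involuntariness, or the victims' income levels.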
Article
Full-text available
Evidence on the deterrent effect of capital punishment is important for many states that are currently reconsidering their position on the issue. We examine the deterrent hypothesis by using county-level, postmoratorium panel data and a system of simultaneous equations. The procedure we employ overcomes common aggregation problems, eliminates the bias arising from unobserved heterogeneity, and provides evidence relevant for current conditions. Our results suggest that capital punishment has a strong deterrent effect; each execution results, on average, in eighteen fewer murders--with a margin of error of plus or minus ten. Tests show that results are not driven by tougher sentencing laws and are robust to many alternative specifications. Copyright 2003, Oxford University Press.
Article
This chapter discusses the private and social value of information along with the reward of inventive activity. The individual is always fully acquainted with the supply–demand offers of all potential traders, and an equilibrium integrating all individuals' supply-demand offers is attained instantaneously. Individuals are unsure only about the size of their own commodity endowments and/or about the returns attainable from their own productive investments. They are subject to technological uncertainty rather than market uncertainty. The main reason is that information, viewed as a product, is only imperfectly appropriable by its discoverer. The standard literature on the economics of research and invention argues that there tends to be private underinvestment in inventive activity mainly because of the imperfect appropriability of knowledge. The contention made is that even with a patent system, the inventor can only hope to capture some fraction of the technological benefits due to his discovery. Even though practical considerations limit the effective scale and consequent impact of speculation and/or resale, the gains thus achievable eliminate any a priori anticipation of underinvestment in the generation of new technological knowledge.
Article
This paper analyzes cost-benefit analysis from legal, economic, and philosophical perspectives. The traditional defense of cost-benefit analysis is that it maximizes a social welfare function that aggregates unweighted and unrestricted preferences. Professors Adler and Posner follow many economists and philosophers who conclude that this defense is not persuasive. The view that the government should maximize the satisfaction of unrestricted preferences is not plausible. However, the authors disagree with critics who argue that cost-benefit analysis produces morally irrelevant evaluations of projects and should be abandoned. On the contrary, cost-benefit analysis, suitably constrained is consistent with a broad array of appealing normative commitments, and it is superior to alternative methods of project evaluation. It is a reasonable means to the end of maximizing overall welfare when preferences are undistorted or can be reconstructed. And it both exploits the benefits of agency specialization and constrains agencies that might otherwise evaluate projects improperly.
Article
The judicial review of administrative deregulation is a relatively new phenomenon. In this Article, Mr. Garland analyzes the standard of review, the scope of review, and the nature of the remedies that courts have found appropriate for deregulation cases. In the process of shaping these elements of review, the author argues, the courts have transformed the way they perceive the role of administrative agencies generally. Mr. Garland contends that the courts have largely rejected the "interest representation" model that conceived of agencies as quasi-legislatures whose primary purpose was to balance the interests of competing societal groups. Although the newly emerging model also appreciates the political nature of much administrative decisionmaking, its distinguishing feature is a renewed emphasis on ensuring agencies' fidelity to congressional purpose.
Article
[Introduction] Many analytical approaches to setting environmental standards require some consideration of costs and benefits. Even technology-based regulation, maligned by cost-benefit enthusiasts as the worst form of regulatory excess, typically entails consideration of economic costs. Cost-benefit analysis differs, however, from other analytical approaches in the following respect: it demands that the advantages and disadvantages of a regulatory policy be reduced, as far as possible, to numbers, and then further reduced to dollars and cents. In this feature of cost-benefit analysis lies its doom. Indeed, looking closely at the products of this pricing scheme makes it seem not only a little cold, but a little crazy as well. Consider the following examples, which we are not making up. They are not the work of a lunatic fringe, but, on the contrary, they reflect the work products of some of the most influential and reputable of today's cost-benefit practitioners. We are not sure whether to laugh or cry; we find it impossible to treat these studies as serious contributions to a rational discussion.

Several years ago, states were in the middle of their litigation against tobacco companies, seeking to recoup the medical expenditures they had incurred as a result of smoking. At that time, W. Kip Viscusi - a professor of law and economics at Harvard and the primary source of the current $6.3 million estimate for the value of a statistical life - undertook research concluding that states, in fact, saved money as the result of smoking by their citizens. Why? Because they died early! They thus saved their states the trouble and expense of providing nursing home care and other services associated with an aging population. Viscusi didn't stop there. So great, under Viscusi's assumptions, were the financial benefits to the states of their citizens' premature deaths that, he suggested, "cigarette smoking should be subsidized rather than taxed." Amazingly, this cynical conclusion has not been swept into the dustbin where it belongs, but instead recently has been revived: the tobacco company Philip Morris commissioned the well-known consulting group Arthur D. Little to examine the financial benefits to the Czech Republic of smoking among Czech citizens. Arthur D. Little International, Inc., found that smoking was a financial boon for the government - partly because, again, it caused citizens to die earlier and thus reduced government expenditure on pensions, housing, and health care. This conclusion relies, so far as we can determine, on perfectly conventional cost-benefit analysis.

There is more. In recent years, much has been learned about the special risks children face due to pesticides in their food, contaminants in their drinking water, ozone in the air, and so on. Because cost-benefit analysis has become much more prominent at the same time, there is now a budding industry in valuing children's health. Its products are often bizarre. Take the problem of lead poisoning in children. One of the most serious and disturbing effects of lead contamination is the neurological damage it can cause in young children, including permanently diminished mental ability. Putting a dollar value on the (avoidable, environmentally caused) retardation of children is a daunting task, but economic analysts have not been deterred. Randall Lutter, a frequent regulatory critic and a scholar at the AEI-Brookings Joint Center for Regulatory Studies, argues that the way to value the damage lead causes in children is to look at the amount parents of affected children spend on chelation therapy, a chemical treatment that is supposed to cause excretion of lead from the body. Parental spending on chelation supports an estimated valuation of as low as $1,100 per IQ point lost due to lead poisoning. Previous economic analyses by the EPA, based on the children's loss of expected future earnings, have estimated the value to be much higher, up to $9,000 per IQ point. Based on his lower figure, Lutter claims to have discovered that too much effort is going into controlling lead: "Hazard standards that protect children far more than their parents think is appropriate may make little sense"; thus, "[t]he agencies should consider relaxing their lead standards." In fact, Lutter presents no evidence about what parents think, only about what they spend on one rare variety of private medical treatment (which, as it turns out, has not been proven medically effective for chronic, low-level lead poisoning). Why should environmental standards be based on what individuals are now spending on desperate personal efforts to overcome social problems? For sheer analytical audacity, Lutter's study faces some stiff competition from another study concerning kids - this one concerning the value, not of children's health, but of their lives. In this second study, researchers examined mothers' car-seat fastening practices. They calculated the difference between the time required to fasten the seats correctly and the time mothers actually spent fastening their children into their seats. Then they assigned a monetary value to this difference of time based on the mothers' hourly wage rate (or, in the case of nonworking moms, based on a guess at the wages they might have earned). When mothers saved time - and, by hypothesis, money - by fastening their children's car seats incorrectly, they were, according to the researchers, implicitly placing a finite monetary value on the life-threatening risks to their children posed by car accidents. Building on this calculation, the researchers were able to answer the vexing question of how much a statistical child's life is worth to its mother. (As the mother of a statistical child, she is naturally adept at complex calculations comparing the value of saving a few seconds versus the slightly increased risk to her child!) The answer parallels Lutter's finding that we are valuing our children too highly: in car-seat-land, a child's life is worth only about $500,000.

In this Article, we try to show that the absurdity of these particular analyses, though striking, is not unique to them. Indeed, we will argue, cost-benefit analysis is so inherently flawed that if one scratches the apparently benign surface of any of its products, one finds the same kind of absurdity. But before launching into this critique, it will be useful first to establish exactly what cost-benefit analysis is, and why one might think it is a good idea. [...]

[Conclusion] Two features of cost-benefit analysis distinguish it from other approaches to evaluating the advantages and disadvantages of environmentally protective regulations: the translation of lives, health, and the natural environment into monetary terms, and the discounting of harms to human health and the environment that are expected to occur in the future. These features of cost-benefit analysis make it a terrible way to make decisions about environmental protection, for both intrinsic and practical reasons. Nor is it useful to keep cost-benefit analysis around as a kind of regulatory tag-along, providing information that regulators may find "interesting" even if not decisive. Cost-benefit analysis is exceedingly time- and resource-intensive, and its flaws are so deep and so large that this time and these resources are wasted on it. Once a cost-benefit analysis is performed, its bottom line number offers an irresistible sound bite that inevitably drowns out more reasoned deliberation. Moreover, given the intrinsic conflict between cost-benefit analysis and the principles of fairness that animate, or should animate, our national policy toward protecting people from being hurt by other people, the results of cost-benefit analysis cannot simply be "given some weight" along with other factors, without undermining the fundamental equality of all citizens - rich and poor, young and old, healthy and sick. Cost-benefit analysis cannot overcome its fatal flaw: it is completely reliant on the impossible attempt to price the priceless values of life, health, nature, and the future. Better public policy decisions can be made without cost-benefit analysis, by combining the successes of traditional regulation with the best of the innovative and flexible approaches that have gained ground in recent years.
Article
Some fifty-five years ago, in a seminal article called The Nature of the Firm, a young socialist named Ronald Coase sought to explain the existence of firms, of organizations within which markets were replaced by hierarchy and command. Twenty-five years later, in The Problem of Social Cost, Ronald Coase, by then a middle-aged libertarian, indicated how markets could replace hierarchy and command structures to the perceived benefit of those who organized them. Five years ago, at a conference marking the fiftieth anniversary of The Nature of the Firm, Professor Coase described the insight which allowed him to explain the existence of firms in this way: The solution was to realize that there were costs of making transactions in a market economy and that it was necessary to incorporate them into the analysis. This was not done in economics at that time—nor, I may add, is it in most present-day economic theory.
Article
Critiques of risk regulation rely pervasively on estimates of the costs of various federal regulations per life saved. As Professor Heinzerling illustrates in this Article, most of these estimates derive from a single source, a table prepared in the 1980s by an economist at the Office of Management and Budget, John Morrall. That table reports costs per life saved reaching hundreds of millions, even billions, of dollars. These oft-cited estimates are, however, vastly higher than the agencies' own estimates of costs and benefits. The divergence in estimates stems from the fact that Morrall adjusted the agencies' figures by discounting future lives saved and, in many cases, greatly decreasing estimates of risk. Without these adjustments, Professor Heinzerling demonstrates, the costs per life saved of the allegedly costliest regulations drop, in virtually every case examined, to less than $5 million. This number compares favorably to currently cited estimates of the monetary value of a human life. Moreover, Morrall's calculations exclude many unquantified benefits of the regulations in question. An assessment of the cost-effectiveness of current risk regulation thus turns on one's opinions regarding discounting, risk assessment, and regulatory purposes. These in turn depend on one's views of the relative worth of lives saved today and lives saved in the future, the appropriate response to scientific uncertainty, and the relevance of unquantified benefits. As Professor Heinzerling argues, these matters involve choices among values about which reasonable people may disagree. Thus, instead of providing an objective basis for setting regulatory priorities and judging the wisdom of regulation, figures on costs per life saved embody the very normative judgments they have been thought to support.
Article
We gathered information on the cost-effectiveness of life-saving interventions in the United States from publicly available economic analyses. “Life-saving interventions” were defined as any behavioral and/or technological strategy that reduces the probability of premature death among a specified target population. We defined cost-effectiveness as the net resource costs of an intervention per year of life saved. To improve the comparability of cost-effectiveness ratios arrived at with diverse methods, we established fixed definitional goals and revised published estimates, when necessary and feasible, to meet these goals. The 587 interventions identified ranged from those that save more resources than they cost, to those costing more than 10 billion dollars per year of life saved. Overall, the median intervention costs $42,000 per life-year saved. The median medical intervention costs $19,000/life-year; injury reduction, $48,000/life-year; and toxin control, $2,800,000/life-year. Cost/life-year ratios and bibliographic references for more than 500 life-saving interventions are provided.
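A minimal sketch of the cost-effectiveness ratio the survey uses (net resource cost per year of life saved); the numbers below are hypothetical, chosen only so the result matches the survey's reported median of $42,000 per life-year:

    def cost_per_life_year(gross_cost, resource_savings, life_years_saved):
        """Net resource cost per year of life saved (negative values are cost-saving)."""
        return (gross_cost - resource_savings) / life_years_saved

    print(cost_per_life_year(gross_cost=5_000_000,
                             resource_savings=800_000,
                             life_years_saved=100))   # -> 42000.0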
Article
Probability distributions of stock market returns have typically been estimated from historical time series. The possibility of extreme events such as the stock market crash of 1987 makes this a perilous enterprise. Alternative parametric and nonparametric approaches use contemporaneously observed option prices to recover their underlying risk-neutral probability distribution. Parametric methods assume an option pricing formula which is inverted to obtain parameters of the distribution. The nonparametric methods pursued here choose probabilities to minimize an objective function subject to requiring that the chosen probabilities are consistent with observed option and underlying asset prices. This paper examines alternative specifications of the minimization criterion using historically observed S&P 500 index option prices over an eight-year period. With the exception of the lower left-hand tail of the distribution, alternative optimization specifications typically produce approximately the same implied distributions. Most prominently, the paper introduces a new optimization technique for estimating expiration-date risk-neutral probability distributions based on maximizing the smoothness of the resulting probability distribution. Since an "almost closed-form" solution for this case is available, the smoothness method is computationally orders of magnitude faster than the alternatives. Considerable care is taken to specify such parameters as interest rates, dividends, and synchronous index levels, as well as to filter for general arbitrage violations and to use time aggregation to correct for unrealistic persistent jaggedness of implied volatility smiles. While time patterns of skewness and kurtosis exhibit a discontinuity across the divide of the 1987 market crash, they remain remarkably stable on either side of the divide. Moreover, since the crash, the risk-neutral probability of a three (four) standard deviation decline in the S&P index (about -36% (-46%) over a year) is about 10 (100) times more likely than under the assumption of lognormality, and about 10 (10) times more likely than apparent in the implied distribution prior to the crash.
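A minimal sketch of the underlying idea that option prices encode a risk-neutral distribution. The paper's smoothness-maximizing optimization is more elaborate; the simpler Breeden-Litzenberger relation below (the discounted second derivative of the call price with respect to strike approximates the density) is shown only for intuition, with prices generated from the Black-Scholes formula rather than observed S&P 500 options:

    import numpy as np
    from math import log, sqrt, exp
    from statistics import NormalDist

    def bs_call(S, K, r, sigma, T):
        # Black-Scholes call price, used here only to manufacture example prices
        N = NormalDist().cdf
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * N(d1) - K * exp(-r * T) * N(d2)

    S, r, sigma, T = 100.0, 0.02, 0.2, 1.0
    strikes = np.arange(60.0, 140.0, 1.0)
    calls = np.array([bs_call(S, K, r, sigma, T) for K in strikes])

    # Discounted second difference of call prices in the strike approximates the density
    dK = 1.0
    density = exp(r * T) * (calls[2:] - 2 * calls[1:-1] + calls[:-2]) / dK**2
    print("approx. P(S_T <= 80):", round(density[strikes[1:-1] <= 80].sum() * dK, 4))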
Article
Evidence from 4 studies with 584 undergraduates demonstrates that social observers tend to perceive a "false consensus" with respect to the relative commonness of their own responses. A related bias was shown to exist in the observers' social inferences. Thus, raters estimated particular responses to be relatively common and relatively unrevealing concerning the actors' distinguishing personal dispositions when the responses in question were similar to the raters' own responses; responses differing from those of the rater, by contrast, were perceived to be relatively uncommon and revealing of the actor. These results were obtained both in questionnaire studies presenting Ss with hypothetical situations and choices and in authentic conflict situations. The implications of these findings for the understanding of social perception phenomena and for the analysis of the divergent perceptions of actors and observers are discussed. Cognitive and perceptual mechanisms are proposed which might account for distortions in perceived consensus and for corresponding biases in social inference and attributional processes.
Article
This article identifies the political and moral economies of deterrence theory in legal discourse. Drawing on an extensive social science literature, it shows that deterrence arguments in fact have little impact on citizens' views on controversial policies such as capital punishment, gun control, and hate crime laws. Citizens conventionally defend their positions in deterrence terms nonetheless only because the alternative is a highly contentious expressive idiom, which social norms, strategic calculation, and liberal morality all condemn. But not all citizens respond to these forces. Expressive zealots have an incentive to frame controversial issues in culturally partisan terms, thereby forcing moderate citizens to defect from the deterrence detente and declare their cultural allegiances as well. Accordingly, deliberations permanently cycle between the disengaged, face-saving idiom of deterrence and the partisan, face-breaking idiom of expressive condemnation. These dynamics complicate the normative assessment of deterrence. By abstracting from contentious expressive judgments, deterrence arguments serve the ends of liberal public reason, which enjoins citizens to advance arguments accessible to individuals of diverse moral persuasions. But precisely because deterrence arguments denude the law of social meaning, the prominence of the deterrence idiom impedes progressives from harnessing the expressive power of the law to challenge unjust social norms. There is no stable discourse equilibrium between the deterrence and expressive idioms, either as a positive matter or a normative one.
Article
This paper provides an extended introduction to the game theoretic analysis of cheap talk -- actions which do not have a direct cost, but which affect equilibria only indirectly, via the information about sender type which they convey. After introducing these games, the article takes a functional approach to the law of contract formation as determining when talk is cheap (when an unfulfilled assurance will not trigger liability) versus when it is not (when an unfulfilled assurance that trade will take place triggers liability for the failure to trade). Using the neologism-proof equilibrium refinement due to Farrell, it is shown that when there is no liability, the unique stable equilibrium with pretrade cheap talk may be informative. Parties may communicate information even when talk is cheap because both parties have an interest in terminating costly negotiations if there is too low a probability that a deal will eventually be struck. There also exist situations in which cheap talk will be uninformative or "concealing," because one party is able to externalize a large share of the cost of negotiating to the other, and the other will not bear that cost if it knows that the probability of trade is actually quite low. The article discusses the general structural factors -- such as market thickness, bargaining power, and the similarity between sender and receiver types -- which determine whether there is a (stable) informative cheap talk equilibrium. It then points out that legal rules which attempt to fix liability based on a court's ex post perception of whether talk was concealing or informative ex ante are unlikely to improve matters. On this basis, it is argued that the default rule for letters of intent and other preliminary agreements should be that such agreements do not bind either party to trade. The very well developed law of the United States Court of Appeals for the Second Circuit on this topic is then explicated and critiqued. Other issues such as whether a communication is an offer or an offer solicitation and promissory estoppel in the preliminary dealings context are also explored. In particular, the analysis points out that while the efficiency of the classic Hoffman v. Red Owl is much more debatable than previously assumed, as actually applied by courts today, promissory estoppel in this context may be surprisingly efficient.
Article
Cost-benefit analysis is analyzed using a model of agency delegation. In this model an agency observes the state of the world and issues a regulation, which the president may approve or reject. Cost-benefit analysis enables the president to observe the state of the world (in one version of the model), or is a signal that an agency may issue (in another version). The roles of the courts, Congress, and interest groups are also considered. It is argued that the introduction of cost-benefit analysis increases the amount of regulation, including the amount of regulation that fails cost-benefit analysis; that the president has no incentive to compel agencies to issue cost-benefit analysis, because agencies will do so when it is in the president's interest, and otherwise will not do so; that presidents benefit from cost-benefit analysis even when they do not seek efficient policies; that agencies and their supporters ought to endorse cost-benefit analysis, not resist it; and that cost-benefit analysis reduces the influence of interest groups. Evidence for these claims is discussed. Finally, it is argued that courts should force agencies to conduct cost-benefit analyses in ordinary conditions, but that they should not force agencies to comply with them.
Article
This article attempts to fill a few of the gaps in current scholarship about gatekeepers, and sets forth a proposal for a modified strict liability regime that would avoid many of the problems and costs associated with the current due diligence-based approaches. Under the proposed regime, gatekeepers (investment banking, accounting, and law firms) would be strictly liable for any securities fraud damages paid by the issuer pursuant to a settlement or judgment. Gatekeepers would not have any due diligence-based defenses for securities fraud. Instead, gatekeepers would be permitted to limit their liability by agreeing to and disclosing a percentage limitation on the scope of their liability for the issuer's damages. For example, a gatekeeper for an issue might agree ex ante to be strictly liable for 10 percent of the issuer's liability related to the issuance, measured by the present value of any payment by the issuer pursuant to a settlement or judgment. A particular gatekeeper's liability would be limited to the issuer's liability related to that gatekeeper's role (e.g., counsel for the issuer or the underwriters generally would not be liable for material misstatements or omissions in audited financial statements). The percentage for each gatekeeper could range based on competitive bargaining and market forces, with a minimum limit (e.g., the amount of the gatekeeper's fee, or perhaps a fixed amount of 1 to 5 percent) set by law. This modified strict liability proposal is intended to solve two important and parallel problems in securities regulation. The first problem is the rapidly increasing and substantial costs related to the role of gatekeepers in securities fraud, including both the costs of behavior designed to capture the benefit of due diligence-based defenses and - more importantly - the costs of resolving disputes about gatekeeper behavior. The second problem is that the value of gatekeeper certification is declining at the same time costs are increasing. The article gathers evidence to demonstrate these two problems, and shows how a strict liability regime might ameliorate them. Throughout this discussion, the article challenges the assumption that gatekeepers act as reputational intermediaries.
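A hedged arithmetic illustration of the modified strict liability proposal, using the abstract's own 10 percent example with otherwise hypothetical numbers:

    # Gatekeeper agreed ex ante (and disclosed) a 10% share of issuer liability.
    issuer_settlement_pv = 200_000_000   # present value of the issuer's settlement (hypothetical)
    gatekeeper_share = 0.10              # disclosed percentage limitation
    minimum_floor = 2_000_000            # e.g., the gatekeeper's fee, as a hypothetical statutory floor

    liability = max(gatekeeper_share * issuer_settlement_pv, minimum_floor)
    print(f"gatekeeper pays ${liability:,.0f}")   # $20,000,000, with no due diligence defense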
Article
Economists explore betting markets as prediction tools.
Article
The pace of scientific progress may be hindered by the tendency of our academic institutions to reward being popular rather than being right. A market-based alternative, where scientists can more formally ‘stake their reputation’, is presented here. It offers clear incentives to be careful and honest while contributing to a visible, self-consistent consensus on controversial (or routine) scientific questions. In addition, it allows patrons to choose questions to be researched without choosing people or methods. The bulk of this paper is spent in examining potential problems with the proposed approach. After this examination, the idea still seems to be plausible and worthy of further study.
Article
Valuations from prediction markets reveal expectations about the likelihood of events. Conditional prediction marketsrd reveal expectations conditional on other events occurring. For example, in 1996, the Iowa Electronic Markets (IEM) ran markets to predict the chances that different candidates would become the Republican Presidential nominee. Other concurrent IEM markets predicted the vote shares that each party would receive conditional on the Republican nominee chosen. Here, using these markets as examples, we show how such markets could be used for decision support. In this example, Republicans could have inferred that Dole was a weak candidate and that his nomination would result in a Clinton victory. This is only one example of the widespread potential for using specific decision support markets.
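A minimal sketch of the decision-support reading of conditional market prices, with hypothetical prices loosely patterned on the 1996 IEM example (the price of a contract paying $1 if a candidate is nominated and the Republican ticket wins, divided by the price of the nomination contract, gives the win probability conditional on that nomination):

    prices = {
        ("dole_nominated",): 0.80,
        ("dole_nominated", "republican_wins"): 0.28,
        ("alexander_nominated",): 0.10,
        ("alexander_nominated", "republican_wins"): 0.045,
    }

    def conditional_win_prob(candidate):
        joint = prices[(candidate + "_nominated", "republican_wins")]
        marginal = prices[(candidate + "_nominated",)]
        return joint / marginal

    for c in ("dole", "alexander"):
        print(c, round(conditional_win_prob(c), 3))   # dole 0.35, alexander 0.45

On these made-up numbers a party would infer that the leading nominee is the weaker general-election candidate, which is the kind of inference the paper attributes to the 1996 markets.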
Article
Current estimates of regulatory benefits are too low, and likely far too low, because they ignore a central point about valuation-namely, that people care not only about their absolute economic position, but also about their relative economic position. We show that where the government currently pegs the value of a statistical life at about $4 million, it ought to employ a value between $4.7 million and $7 million. A conservative reading of the relevant evidence suggests that when government agencies are unsure how to value regulatory benefits along a reasonable range, they should make choices toward or at the upper end. We begin by showing that the nation is nearing the end of a first-generation debate about whether to do cost-benefit analysis, with a mounting victory for advocates of the cost-benefit approach. The second-generation debate, now underway, involves important issues about how to value costs and benefits. Conventional estimates tell us the amount of income an individual, acting in isolation, would be willing to sacrifice in return for, say, an increase in safety on the job. But these estimates rest on the implicit, undefended, and crucial assumption that people's well-being depends only on absolute income. This assumption is false. Considerable evidence suggests that relative income is also an important factor, suggesting that gains or losses in absolute income are of secondary importance unless they alter relative income. When a regulation requires all workers to purchase additional safety, each worker gives up the same amount of other goods, so no worker experiences a decline in relative living standards. The upshot is that an individual will value an across-the-board increase in safety much more highly than an increase in safety that he alone purchases. Regulatory decisions should be based on the former valuation rather than the latter. When the former valuation is used, dollar values should be incre
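A back-of-the-envelope restatement of the abstract's numbers (the adjustment factors below are simply implied by the stated range, not derived independently):

    current_vsl = 4_000_000                 # value of a statistical life agencies use now
    proposed_range = (4_700_000, 7_000_000) # range the authors argue for

    low, high = (v / current_vsl for v in proposed_range)
    print(f"implied upward adjustment: {low:.3f}x to {high:.2f}x")   # 1.175x to 1.75x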
Article
For the original paper by Frank and Sunstein, see "Cost Benefit Analysis and Relative Position." For a related paper, see Besharov, "Three Questions About the Economics of Relative Position." The current debate over cost-benefit concerns in agencies' evaluations of government regulations is not so much whether to consider costs and benefits at all but rather what belongs in the estimated costs and benefits themselves. Overlaid is the long-standing concern that the distribution of costs and benefits needs some consideration in policy evaluations. In a recent article in the University of Chicago Law Review, Robert Frank and Cass Sunstein proposed a relatively simple method for adding distributional concerns to policy evaluation that enlarges the typically constructed estimates of the individual's willingness to pay for safer jobs or safer products. One might pay more for safety if it were the result of a government regulation that mandated greater safety across-the-board. The reason, Frank and Sunstein argue, for enlarging current estimates is that someone who takes a safer job or buys a safer product gives up wages or pays a higher price, which then moves him or her down in the ladder of income left over to buy other things. Alternatively, a worker who is given a safer job via a government regulation has no relative income consequences because all affected workers have lower pay. We show that when considering the core of the Frank and Sunstein proposal carefully one concludes that current regulatory evaluations should be left alone because there is no reason to believe that relative positional effects are important either to personal decisions in general or to currently constructed cost-benefit calculations of government regulations in particular. One of the practical problems with trying to consider relative position of income and consumption when estimating willingness to pay is that there is no unique way to
Article
For a related paper, see EPA's Arsenic Rule: The Benefits of the Standard Do Not Justify the Costs. What does cost-benefit mean, or do, in actual practice? When agencies are engaging in cost-benefit balancing, what are the interactions among law, science, and economics? This article attempts to answer that question by exploring, in some detail, the controversy over EPA's proposed regulation of arsenic in drinking water. The largest finding is that science often can produce only 'benefit ranges,' and wide ones at that. With reasonable assumptions based on the existing science data, the proposed arsenic regulation can be projected to save as few as 0 lives and as many as 112. With reasonable assumptions, the monetized benefits of the regulation can range from $0 to $560 million. In these circumstances, there is no obvious, correct decision for government agencies to make. These points have numerous implications for lawyers and courts, suggesting the ease of bringing legal challenges, on grounds specified here, and the importance of judicial deference in the face of scientific uncertainty. There are also policy implications. Agencies should be given the authority to issue more targeted, cost-effective regulations. They should also be required to accompany the cost-benefit analysis with an effort to identify the winners and losers, so as to see if poor people are mostly hurt or mostly helped.
Article
Recent theories of judicial decision making suggest that federal judges are likely to exploit the structure of law to protect decisions that implement their policy preferences. One perspective asserts that judges, when making decisions that move policy toward their preferred policy outcomes, will be more likely to choose legal grounds--or judicial instruments--that are difficult for other political actors to reverse than when making decisions that move policy away from their preferred outcomes. We test this "strategic instrument" perspective and compare our results with those expected from other models of judicial decision making. Using federal circuit court cases reviewing the decisions of the Environmental Protection Agency from 1981 to 1993, we conduct both bivariate analysis and multinomial logit regression to measure the effect of policy goals on the legal instruments chosen by judges. Our results support the conclusion that strategic considerations systematically influence judicial decision making. Copyright 2002 by the University of Chicago.
Article
Risk equity serves as the purported rationale for a wide range of inefficient policy practices, such as the concern that hypothetical individual risks not be too great. This paper proposes an alternative risk equity concept in terms of equitable trade-offs rather than equity in risk levels. Equalizing the cost per life saved across policy contexts will save additional lives and will give fair treatment to risks arising in a variety of domains. Equitable trade-offs will also benefit minorities who currently are disadvantaged by politically based inefficient policies. Copyright 2000 by the University of Chicago.
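A hedged illustration of the equal-trade-offs argument: with a fixed budget, shifting spending from a program with a high cost per life saved toward one with a low cost saves more lives in total (all figures hypothetical):

    budget = 100_000_000
    cost_per_life = {"program_A": 20_000_000, "program_B": 2_000_000}

    even_split = sum((budget / 2) / c for c in cost_per_life.values())
    reallocated = budget / min(cost_per_life.values())   # everything to the cheaper program

    print(f"lives saved with even split: {even_split:.1f}")      # 27.5
    print(f"lives saved after reallocation: {reallocated:.1f}")  # 50.0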
Article
Cost-benefit analysis is routinely used by government agencies in order to evaluate projects, but it remains controversial among academics. This paper argues that cost-benefit analysis is best understood as a welfarist decision procedure and that use of cost-benefit analysis is more likely to maximize overall well-being than is use of alternative decision procedures. The paper focuses on the problem of distorted preferences. A person's preferences are distorted when his or her satisfaction does not enhance that person's well-being. Preferences typically thought to be distorted in this sense include disinterested preferences, uninformed preferences, adaptive preferences, and objectively bad preferences; further, preferences may be a poor guide to maximizing aggregate well-being when wealth is unequally distributed. The paper describes conditions under which agencies should correct for distorted preferences, for example, by constructing informed or nonadaptive preferences, discounting objectively bad preferences, and treating people differentially on the basis of wealth. Copyright 2000 by the University of Chicago.
Article
This paper offers the conjecture that interest groups act where there are cycling majorities or other aggregation anomalies. The claim is that instability attracts political activity. This simple conjecture suggests a link between voting paradoxes, or puzzles of aggregation, and questions about why some interest groups succeed while others do not. Interest groups are seen as exploiting the opportunities offered by aggregation anomalies either by influencing procedure or by bargaining their way into successful coalitions. The link between instability and interest-group activity also bears on such normative questions as whether interest-group activity is likely to have disparate corrupting influences on legislative or judicial or direct (popular) decision making. Copyright 1999 by the University of Chicago.
Article
This paper argues that in an uncertain world options written on existing assets can improve efficiency by permitting an expansion of the contingencies that are covered by the market. The two major results obtained are, first, that complex contracts can be “built up” as portfolios of simple options and, second, that there exists a single portfolio of the assets, the efficient fund, on which all options can be written with no loss of efficiency.
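A standard textbook version of the spanning construction the paper builds on (the notation here is illustrative, not necessarily the paper's): on a discrete price grid with spacing $\Delta$, a claim paying one dollar exactly when the terminal price $S_T$ equals $K$ can be assembled from simple call options $C(\cdot)$ as a butterfly,

\[
  e_K \;=\; \frac{C(K-\Delta) - 2\,C(K) + C(K+\Delta)}{\Delta},
\]

which pays 1 at $S_T = K$ and 0 at every other grid point; any payoff $f$ defined on the grid is then the portfolio $\sum_K f(K)\, e_K$, which is the sense in which complex contracts can be built up from simple options.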
Article
Insider traders and other speculators with private information are able to appropriate some part of the returns to corporate investments made at the expense of other shareholders. As a result, insider trading tends to discourage corporate investment and reduce the efficiency of corporate behavior. In the context of a theoretical model, measures that provide some indication of the sources and extent of the investment reduction are derived.
Article
When a decision maker (DM) contracts with an expert to provide information, the nature of the contract can create incentives for the expert, and it is up to the DM to ensure that the contract provides incentives that align the expert's and DM's interests. In this paper, scoring rules (and related functions) are viewed as such contracts and are reinterpreted in terms of agency theory and the theory of revelation games from economics. Although scoring rules have typically been discussed in the literature as devices for eliciting and evaluating subjective probabilities, this study relies on the fact that strictly proper scoring rules reward greater expertise as well as honest revelation. We describe conditions under which a DM can use a strictly proper scoring rule as a contract to give an expert an incentive to gather an amount of information that is optimal from the DM's perspective. The conditions we consider focus on the expert's cost structure, and we find that the DM must have substantial knowledge of that cost structure in order to design a specific contract that provides the correct incentives. The model and analysis suggest arguments for hiring and maintaining experts in-house rather than using outside consultants.
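A minimal sketch of the incentive property the paper builds on: under a strictly proper rule such as the quadratic (Brier) score, an expert who believes the event probability is p maximizes expected score by reporting p itself (the grid search below just makes the maximization visible):

    import numpy as np

    def brier_score(report, outcome):
        return -(report - outcome) ** 2          # higher is better

    def expected_score(report, belief):
        return belief * brier_score(report, 1) + (1 - belief) * brier_score(report, 0)

    belief = 0.7
    reports = np.linspace(0, 1, 101)
    best = reports[np.argmax([expected_score(r, belief) for r in reports])]
    print("belief:", belief, "-> optimal report:", round(float(best), 2))   # 0.7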