Article

If Search Neutrality is the Answer, What's the Question?

Authors:
  • Geoffrey A. Manne (International Center for Law & Economics)
  • Joshua D. Wright

Abstract

In recent months a veritable legal and policy frenzy has erupted around Google generally, and more specifically concerning how its search activities should be regulated by government authorities around the world in the name of ensuring “search neutrality.” Concerns with search engine bias have led to a menu of proposed regulatory reactions. Although the debate has focused upon possible remedies to the “problem” presented by a range of Google’s business decisions, it has largely missed the predicate question of whether search engine bias is the product of market failure or otherwise generates significant economic or social harms meriting regulatory intervention in the first place. “Search neutrality” by its very terminology presupposes that mandatory neutrality or some imposition of restrictions on search engine bias is desirable, but it is an open question whether advocates of search neutrality have demonstrated that there is a problem necessitating any of the various prescribed remedies. This paper attempts to answer that question by evaluating both the economic and non-economic costs and benefits of search bias, as well as the solutions proposed to remedy its perceived costs. We demonstrate that search bias is the product of the competitive process and link the search bias debate to the economic and empirical literature on vertical integration and the generally efficient and pro-competitive incentives for a vertically integrated firm to favor its own content. We conclude that neither an ex ante regulatory restriction on search engine bias nor the imposition of an antitrust duty to deal upon Google would benefit consumers. Moreover, in considering the proposed remedies, we find that they substitute away from the traditional antitrust consumer welfare standard and would impose costs exceeding any potential benefits.
... Search neutrality, in addition to being a particularly rich topic, is one of only two sorts of algorithmic neutrality to receive sustained public, academic, and legal attention. (For work on search neutrality, see, among many others, (Grimmelmann, 2010), (Crane, 2012), (Lao, 2013), (Manne and Wright, 2012), (Gillespie, 2014), and (Grimmelmann, 2014).) ...
... See, e.g., (Rudner, 1953). Other theorists have made other arguments that neutral search is somehow impossible or incoherent; see, e.g., (Grimmelmann, 2010), (Manne and Wright, 2012), (Lao, 2013), and (Gillespie, 2014). See also (Dotan, 2020), (Fazelpour and Danks, 2021), and (Johnson, fc) on how underdetermination arguments about neutrality, or value-freedom, in science relate to algorithms. ...
Preprint
Full-text available
Bias infects the algorithms that wield increasing control over our lives. Predictive policing systems overestimate crime in communities of color; hiring algorithms dock qualified female candidates; and facial recognition software struggles to recognize dark-skinned faces. Algorithmic bias has received significant attention. Algorithmic neutrality, in contrast, has been largely neglected. Algorithmic neutrality is my topic. I take up three questions. What is algorithmic neutrality? Is algorithmic neutrality possible? When we have an eye to algorithmic neutrality, what can we learn about algorithmic bias? To answer these questions in concrete terms, I work with a case study: search engines. Drawing on work about neutrality in science, I say that a search engine is neutral only if certain values—like political ideologies or the financial interests of the search engine operator—play no role in how the search engine ranks pages. Search neutrality, I argue, is impossible. Its impossibility seems to threaten the significance of search bias: if no search engine is neutral, then every search engine is biased. To defuse this threat, I distinguish two forms of bias—failing-on-its-own-terms bias and other-values bias. This distinction allows us to make sense of search bias—and capture its normative complexion—despite the impossibility of neutrality.
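To make the abstract's definition concrete, here is a minimal, hypothetical sketch (not drawn from the preprint itself; the names Doc, rank, and operator_weight are invented for illustration). A zero operator_weight corresponds to ranking purely on the engine's own relevance terms; a positive weight lets an extraneous value, the operator's financial interest, shape the ordering, which is the "other-values bias" the abstract distinguishes from an engine simply failing on its own terms.

```python
# Hypothetical illustration only: a toy ranking function, not the preprint's model.
from dataclasses import dataclass

@dataclass
class Doc:
    url: str
    relevance: float           # topical match to the query, on the engine's own terms
    operator_affiliated: bool  # e.g., a vertical property owned by the search operator

def rank(docs, operator_weight=0.0):
    """Order documents by score.

    With operator_weight == 0 the ordering tracks relevance alone (the
    'neutral' case in the abstract's sense). A positive operator_weight
    injects an extraneous value, the operator's own interest, into the
    ranking: 'other-values bias'.
    """
    def score(d):
        return d.relevance + (operator_weight if d.operator_affiliated else 0.0)
    return sorted(docs, key=score, reverse=True)

docs = [
    Doc("rival-shopping.example", relevance=0.9, operator_affiliated=False),
    Doc("operator-shopping.example", relevance=0.7, operator_affiliated=True),
]

print([d.url for d in rank(docs)])                       # relevance-only ordering
print([d.url for d in rank(docs, operator_weight=0.3)])  # operator's own page rises

# 'Failing-on-its-own-terms' bias, by contrast, would be a defect in the
# relevance scores themselves (systematically misjudging some class of pages),
# not the addition of an outside value to the scoring rule.
```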
... Stricter liability, however, might stifle innovation. It might impede the significant technological progress witnessed in recent years, including increases in productivity and personal satisfaction. ...
Article
Full-text available
The seminal book Nudge by Richard Thaler and Cass Sunstein demonstrates that policy makers can prod behavioral changes. A nudge is "any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives." This type of strategy, and the notion of libertarian paternalism at its base, prompted discussions and objections. Academic literature tends to focus on the positive potential of nudges and neglects to address libertarian paternalism that does not promote the welfare of individuals and third parties, but rather infringes on it, a concept this Article refers to as "evil nudges." This kind of choice architecture, which negatively influences individual behavior, raises a variety of legal questions and challenges that policy makers must address; yet it remains under-conceptualized. Should the law recognize liability for evil nudges that result in bad faith influence? This Article aims to answer this question. It suggests the inclusion of nudges within tort law, arguing that nudges can and should be subject to third-party liability. The inclusion of evil nudges within tort law can be explored broadly, but this Article focuses on one particular case study: the liability of online intermediaries for speech torts caused by evil nudges. This case study provides a natural starting point for considering liability for evil nudges, as designing effective nudges is much easier in a technologically connected environment than in a brick-and-mortar world. This Article demonstrates that online intermediaries are not just passive middlemen. They influence decisions through website design and promote behavioral change among internet users. The use of big data, the use of artificial intelligence, and the growing use of Internet of Things (IoT) technologies enables unprecedented hyperinfluence. Drawing on network theory, psychology, marketing, and information systems, this Article further demonstrates how nudges influence the process of information diffusion in digital networks. It shows that by nudging, intermediaries can amplify the severity of speech-related harm. This Article introduces an innovative taxonomy of nudges that online intermediaries utilize, and explains how nudges influence, change internet users' interactions, and form social relations. Afterwards, it examines case law and normative considerations regarding the liability of intermediaries for choice architecture. It argues that the law should respond to "evil nudges," and it proposes nuanced differential guidelines for deciding cases of intermediary liability. It does so while accounting for basic principles of tort law, as well as freedom of speech, reputation, fairness, efficiency, and the importance of promoting innovation.
... If Google is thus alleged to have abused its market power, the question arises as to which remedies should be imposed on the company in order to prevent anticompetitive distortions of its search function in the future. The literature points in particular to ensuring search neutrality as one option (Pollock 2010; Edelman 2011; Ammori and Pelican 2012; Crane 2012; Manne and Wright 2012). As already discussed, however, this is hardly feasible in practice and could, moreover, inhibit further innovations aimed at improving search quality (Grimmelmann 2011; Crane 2012; Bork and Sidak 2012). ...
Article
Online platforms that act as intermediaries between different customer groups are increasingly coming under the scrutiny of regional and national competition authorities on account of their actual or presumed market power. Owing to the multi-sided and dynamic market structure, however, a clear-cut antitrust assessment proves difficult. Drawing on relevant antitrust proceedings, this article discusses potential problem areas that arise when conventional assessment approaches are applied. It first examines the specific characteristics of online platform markets. It then discusses and evaluates, by way of example, cases concerning vertical restraints as well as the companies Google and Facebook. The article finds that established antitrust instruments can in principle be applied to online platform markets. Nevertheless, specific market characteristics must be given greater consideration. Only on this premise can competitive effects be reliably determined and decisions be reached that are conducive to competition, future innovation, and technological progress.
... Actually tweaking the variables of the data processing algorithm and changing the way the machine learns are tasks reserved for a select few engineers, who optimise and experiment on the algorithms that Google or Facebook use globally. Although public regulators have not yet been able to influence Google algorithms, the FTC and EU investigation into 'Search neutrality' revolves precisely around this question (Manne & Wright, 2012; Petit, 2012; van Hoboken, 2012). Can an Internet search monopoly be compelled to change not only its second-order principles, but its first-order principles as well? ...
Article
Full-text available
The world we inhabit is surrounded by ‘coded objects’ from credit cards to airplanes to telephones (Kitchin and Dodge 2011). Sadly the governance mechanisms of many of these technologies are only poorly understood, leading to the common premise that such technologies are ‘neutral’ (Brey 2005; Winner 1980), thereby obscuring normative and power-related consequences of their design (Bauman et al. 2014; Denardis 2012). In order to unpack supposedly neutral technologies, the following paper will try to foreground two key questions around the technologies used on the global Internet: 1) how are content regulatory regimes governed and 2) how are the algorithms embedded in software governed? The following paper will explore these two aspects in turn, before drawing conclusions on understanding the normative frameworks embedded in technological systems. Article first published online: 22 March 2016
Article
Full-text available
How do Big Tech platforms affect the exercise of fundamental rights? What can the States do, in the context of their sovereignty, to moderate these actors’ powers? What has been explored in the context of Mexico? This article discusses how Big Tech platforms, such as Facebook and Google, may impact our collective lives and democracy. It highlights the legal implications of access to information and freedom of expression. This research provides an overall legal framework on this issue, to later place in context the Mexican draft bill introduced in 2021 to regulate platforms’ content moderation practices, analyzing its flaws and areas of improvement, and suggesting specific elements for further legal discussion to prevent abuse of power from these companies within the Latin American and Mexican context. A comparative legal methodology is used, resorting to elements of American and European Law, to later discuss the Mexican legal framework.
Article
Full-text available
This paper provides an economic and legal theory of harm applicable to the case against Google in Europe over search bias. So far, no clear legal and economic theory has been delineated by the European Commission, nor has consensus emerged in the literature with regard to the theory of foreclosure that could support the case, or with regard to the specific form of abuse of dominance applicable under European law. The paper shows that the law and economics of tying applies to search bias. From a legal standpoint, it is not necessary to rely on the more formalistic elements of Article 102 TFEU, or to characterize Google as an essential facility, in order to find a valid legal theory of harm. We show that Google’s conduct of linking its proprietary vertical (or specialized) search platforms to its horizontal (or general) search platform through visual prominence, as it has done with Google Shopping, fits within the legal boundaries of tying under European law. From an economic perspective, we show that the two-sided nature of both horizontal and vertical search provides compelling reasons why foreclosure of competition may be profitable, and why the single monopoly profit theorem may fail in this context. As we show in the paper, by tying vertical search to general search through visual prominence, Google can attract additional advertisers on its vertical search platform that would have possibly advertised on competing vertical search platforms without a tie. The effect of tying is a restriction on competition in vertical search that deserves antitrust scrutiny.
Article
The technological foundations and business models upon which digital companies currently function, the so-called ‘platforms’, are characterized by the sort of features that attract antitrust scrutiny. They are usually structured as multi-sided markets; they pervasively exhibit network effects; the nature and scope of entry barriers are uncertain; dominance in each market readily accrues to the successful company; foreclosure of small firms is always looming. Thus, it’s no surprise that digital-market champions such as Facebook, Amazon, Google and Apple have received close attention from regulators on both sides of the Atlantic. Accordingly, antitrust enforcement efforts in both Europe and the United States have recently increased in the number of actions taken and the amount of fines imposed in high-tech industries. Recent decisions in this field have rekindled in-depth and largely unresolved debates concerning the appropriate role of antitrust enforcement in such markets. Nevertheless, the Statement of Objections recently sent by the European Commission to Google, for allegedly abusing its dominant position in the internet-search market, goes a step further, jeopardizing the very goals and objectives competition policy is supposed to pursue. Against this background, and in light of other recent high-profile cases (Intel, Tomra, Post Danmark, and the everlasting Microsoft), this paper seeks to set an adequate legal and economic framework for the prohibition of monopolization under Article 102 of the Treaty on the Functioning of the European Union (TFEU). These high-tech markets are usually characterised by a heavy reliance on ICT (information and communication technologies), the importance of product design and the development of platforms. In addition, on the basis of the ‘winner-takes-all’ paradigm, the platform on which they operate may be considered an ‘essential facility’. As a result, any discrimination or denial of access may be considered abusive. However, the legal treatment to be applied to ‘refusals to supply’ remains one of the most contentious issues in contemporary EU competition law. It was at the heart of Microsoft’s misbehaviour in 2004 -- and all the subsequent fines that followed -- and, ten years later, Google’s conduct regarding discrimination against competing firms in favour of its own services may walk the very same path. The four topics addressed in this paper concern three long-lasting controversies and one recent matter: i) under what circumstances a dominant firm is entitled to respond -- albeit aggressively -- to its competitors’ challenges; ii) the precise scope -- if any -- of the ‘objective justification’ defence; iii) if that sort of conduct is prohibited only when it leads to anticompetitive foreclosure, is there room, absent restrictive effects, for a ‘by object’ interdiction of abuse of a dominant position?; iv) finally, in the digital industry specifically, and given the still uncertain economics of multi-sided markets and the evolving nature of prevalent business models, is a ‘monopoly’ really as bad and harmful as its bricks-and-mortar or oil-well-and-pipeline predecessors? The ultimate question is whether current antitrust enforcement in these specific markets achieves its goal of fostering competition or rather has the undesired effect of chilling innovation.
Article
Full-text available
Some of the thorniest problems of communications law and policy were supposed to have been solved by the Internet. The issue of who can speak, or access the means of speech, was said to have been solved by the arrival of ubiquitous, relatively cheap access to the Internet. The problem of media concentration was supposed to have been solved now that so many more speakers could contribute. While the Internet has undoubtedly assisted with these problems, new gatekeepers have arisen, and their actions are not necessarily supportive of a healthy, pluralistic communications environment. While the problem of access to the means of speech seems to have been greatly alleviated by the Internet, the chokepoint has now shifted downstream to a class of intermediaries that select and filter information en route to listeners. Examples of this class of selection intermediaries include search engines, software filters, Internet Service Providers (ISPs) that block or filter content, and spam blocklists. It is true that we have long been surrounded by too much information, and we have relied on various intermediaries to assist us in finding and choosing information. Why, then, is the role of selection intermediaries on the Internet worthy of comment? In my view, the Internet offers an opportunity for us to craft new approaches to the selection intermediary function in a way that enables us to keep as much of the speech freedom engendered by the Internet as possible. There is a danger that by reflexively drawing analogies to familiar old selection intermediaries, such as libraries, we will tolerate selection criteria that erode the freedom of speech made possible on the Internet. In the age of the Internet, a complete theory of communication rights must explicitly address the effects of selection intermediaries and recognize as protected each of the steps involved in the communicative relationship between speaker and listener. This includes not only the right to speak and the right to hear (which are already recognized forms of free speech rights), but also the right to reach an audience free from the influence of extraneous criteria of discrimination imposed by selection intermediaries. If selection intermediaries block or discriminate against a speaker on grounds that listeners would not have selected, that speaker's ability to speak freely has been undermined. The paper makes a case for the recognition of this right. It also considers whether government regulations to give effect to this right could be imposed without violating the free speech rights of selection intermediaries.
Article
Full-text available
This article argues that mandatory securities disclosure regulation has unanticipated and ill-considered consequences. Disclosure regulation makes some forms of behavior more expensive relative to others. Rational actors will respond by shifting some conduct into comparatively cheaper outlets. And these alternative behaviors may actually be less beneficial than the regulated, deterred behavior. Likewise, required disclosure of corporate information to investors makes shareholder governance less costly and more likely, even where it should be deterred. In essence, disclosure regulation effectively proscribes, it does not prescribe. Thus, depending on the viability of other behaviors, forced disclosure may induce unwanted behavioral responses. The article identifies two broad concepts that encapsulate these dynamics. The first is a "hydraulic theory" of securities disclosure regulation. Under this theory, disclosure regulation triggers behavioral hydraulics which may lead to an undesirable shift in executive behavior, as well as an undesirable shift in the pool of candidates for corporate executive positions. The second is an information cost theory of securities disclosure regulation. Under this theory, mandated disclosure is both unnecessary to market efficiency and affirmatively harmful to firms' competitive schemes of corporate governance.
Article
Full-text available
In this paper, we consider whether rights management systems can be supported by legal and institutional infrastructures that enable appropriate public access to the works secured by these technologies. We focus primarily on the design challenges posed by the fair use doctrine, which historically has played a central role in preserving such access. Throughout the paper, however, we also use the term "fair use" to refer more generally to the variety of limiting doctrines within copyright law that serve this goal. We begin in Part II by reviewing the contours of the fair use doctrine and the legal and policy requirements that mandate appropriate public access to copyrighted works and other publicly available informational works. Part III discusses the nature and purpose of rights management systems, their legal status under the current anti-circumvention provisions of U.S. law, and their likely effects on fair use. In Part IV, we consider the foreseeable technical and institutional options that might enable proper public access to secured works and offer a proposal combining minimum system flexibility requirements in exchange for copyright enforcement and "key escrow" in exchange for anti-circumvention protection. Part V assesses the legal feasibility of such a system and concludes that the proposal comports with the United States' obligations under international copyright agreements. Finally, Part VI considers whether implementation of the proposal would represent good policy and concludes that it may be the best realistic alternative for preserving fair use in the digital age.
Article
The legality of nonprice vertical practices in the U.S. is determined by their likely competitive effects. An optimal enforcement rule combines evidence with theory to update prior beliefs, and specifies a decision that minimizes the expected loss. Because the welfare effects of vertical practices are theoretically ambiguous, optimal decisions depend heavily on prior beliefs, which should be guided by empirical evidence. Empirically, vertical restraints appear to reduce price and/or increase output. Thus, absent a good natural experiment to evaluate a particular restraint’s effect, an optimal policy places a heavy burden on plaintiffs to show that a restraint is anticompetitive.
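The error-cost logic this abstract describes can be written out in one standard way; the notation below is illustrative and is not the article's own.

```latex
% Illustrative only; symbols are ours, not the article's.
% \pi     : prior probability that the restraint is anticompetitive (AC)
% e       : the evidence observed in the case
% L_I     : welfare loss from condemning a procompetitive (PC) restraint
% L_{II}  : welfare loss from permitting an anticompetitive restraint
\[
  \pi(e) \;=\; \frac{\pi \,\Pr(e \mid AC)}{\pi \,\Pr(e \mid AC) + (1-\pi)\,\Pr(e \mid PC)}
\]
\[
  \text{condemn the restraint} \iff \pi(e)\,L_{II} \;>\; \bigl(1-\pi(e)\bigr)\,L_{I}
\]
```

On this reading, the abstract's empirical point, that vertical restraints tend to reduce price or increase output, amounts to saying the prior \(\pi\) is low, which is why the decision rule ends up placing a heavy evidentiary burden on plaintiffs.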
Article
The Department of Transportation's announced plan to terminate all federal regulation of airline computer reservation systems (CRS) in 2004 is somewhat surprising in light of modern economic theories of regulation that highlight barriers to reform. This Article presents evidence on how CRS regulation affects the market for CRS services from the perspectives of both traditional and modern theories of regulation. We conclude that the announcement of a plan to terminate CRS regulations is consistent with traditional theories of regulation in which the government acts to maximize social welfare. We also demonstrate that the traditional approach to evaluating the merits of regulation, as sometimes applied, exhibits a bias toward rule retention by assuming that the relevant alternative to regulation is a state of laissez-faire. In fact, the relevant alternative is typically other forms of intervention by the government, such as antitrust enforcement, which poses as the government's strategic alternative for most if not all prior DOT regulation of CRS markets. Finally, we examine the practical relevance of modern theories of regulation for explaining the recent move towards deregulation. The occurrence of entry and technological change prior to CRS deregulation is of special interest from this perspective. The termination of CRS regulation is indeed consistent both with the traditional theory of deregulation in the public interest and with the modern interest group theory of deregulation in which deregulation is the ultimate conclusion of a process. Other modern theories of regulation appear not to explain the timing of reform in this instance.
Article
Behavioral economics (BE) examines the implications for decision-making when actors suffer from biases documented in the psychological literature. This article considers how such biases affect regulatory decisions. The article posits a simple model of a regulator who serves as an agent to a political overseer. The regulator chooses a policy that accounts for the rewards she receives from the political overseer — whose optimal policy is assumed to maximize short-run outputs that garner political support, rather than long-term welfare outcomes — and the weight the regulator puts on the optimal long run policy. Flawed heuristics and myopia are likely to lead regulators to adopt policies closer to the preferences of political overseers than they would otherwise. The incentive structure for regulators is likely to reward those who adopt politically expedient policies, either intentionally (due to a desire to please the political overseer) or accidentally (due to bounded rationality). The article urges that careful thought be given to calls for greater state intervention, especially when those calls seek to correct firm biases. The article proposes measures that focus rewards to regulators on outcomes rather than outputs as a way to help ameliorate regulatory biases.
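One way to read the abstract's setup, purely as an illustration and in notation of our own choosing, is as a weighted objective for the regulator:

```latex
% Illustrative reading only; notation is ours, not the article's.
% R(p)   : reward the regulator receives from the political overseer,
%          increasing in short-run outputs that garner political support
% W(p)   : long-run welfare from policy p
% \omega : weight the regulator places on the long-run optimum
\[
  p^{*} \;=\; \arg\max_{p}\;\bigl[(1-\omega)\,R(p) + \omega\,W(p)\bigr]
\]
```

On this sketch, flawed heuristics and myopia can be thought of as shrinking \(\omega\), pulling \(p^{*}\) toward the overseer's politically expedient policy rather than the long-run welfare maximizer, which matches the abstract's claim.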
Article
The 1962 drug amendments seek to prevent wasted expenditure stimulated by exaggerated claims for effectiveness of new drugs by requiring premarketing approval of all new drug claims by the Food and Drug Administration. The compliance costs are shown to have engendered a marked reduction in drug innovation. Consumer surplus analysis is then adapted and supplemented with "expert" drug evaluations to estimate the relevant benefits and costs. The main finding is that benefits forgone on effective new drugs exceed greatly the waste avoided on ineffective drugs. The estimated net impact is equivalent to a 5-10 percent tax on drug purchases.