Letter
Non-Neutrality of Search Engines and its Impact on
Innovation
Pierre L’Ecuyer,¹ Patrick Maillé,² Nicolás E. Stier-Moses,³ and Bruno Tuffin⁴
¹ Université de Montréal, Canada
² IMT Atlantique, France
³ Universidad Torcuato Di Tella, Argentina
⁴ Inria, France
Correspondence: Bruno Tuffin, Inria, Campus Universitaire de Beaulieu, 35042 Rennes Cedex, France. Email: bruno.tuffin@inria.fr
Abstract
The search neutrality debate is about whether search engines should or should not be allowed to uprank certain
results among the organic content matching a query. This debate is related to that of network neutrality, which
focuses on whether all bytes being transmitted through the Internet should be treated equally. In a recent paper,
we have formulated a model that formalizes this question and characterized an optimal ranking policy for a search
engine. The model relies on the trade-off between short-term revenues, captured by the benefits of highly-paying
results, and long-term revenues which can increase by providing users with more relevant results to minimize churn.
In this article, we apply that model to investigate the relations between search neutrality and innovation. We
illustrate through a simple setting and computer simulations that a revenue-maximizing search engine may indeed
deter innovation at the content level. Our setting obviously simplifies reality, but this has the advantage of providing clear insight into how optimization by some actors impacts the others.
Keywords: Search engine, revenue maximization, neutrality, innovation
1 Introduction
There is an ongoing public debate about search neutrality for the Internet. Recently, some search engines (SEs)
have been under scrutiny by individuals, organizations that oversee the Internet, and regulators in various countries
because the organic search ranking is not only based on measures of relevance, but is also influenced by revenue
considerations (3). For example, Google could favor YouTube and other content of its own because of the extra
revenue generated from keeping users within their ecosystem of pages and services. Extra revenue relates mainly to
additional ads that users are more likely to click. Bias in organic search ranking has been observed in experiments (5,
7, 12). In (12), for example, it was reported that Microsoft-owned content was 26 times more likely to be displayed
on the first page of Bing (owned by Microsoft) than with any other SE, and that Google content was 17 times more
likely to appear in a Google Search first page than with other SEs. These issues are of interest to governments and
regulators, such as the US Federal Trade Commission (1), the US Senate (10), and the European Union. For instance,
in June 2017, European antitrust officials fined Google 2.7 billion dollars in a case concerning search neutrality (8).
The crucial question is whether an SE should base its ranking only on link relevance, and whether a non-neutral SE could hurt the Internet economy by hampering competition and innovation through favoring content providers that can afford higher fees. The consequence may be that new applications and content have a harder time reaching the top positions of search rankings, which would limit their distribution and their chances of being successful.
The underlying question relates to other policy debates about whether and how to regulate the Internet, the most
prominent example being the net neutrality debate (7, 9).
In (6), we studied a model for an SE that wants to balance the long-term revenues arising from the additional visits generated by showing more relevant content against the short-term revenues generated by prioritizing highly-paying content. For this model, we characterized a simple, optimal ranking policy for the SE and showed how to compute it. The goal of the present paper is to use this model to illustrate how allowing rankings based on features other than relevance could hurt content innovation and investment incentives. A different aspect of the non-neutrality of SEs was investigated in (2), namely the impact of an SE ranking policy on the strategy of content providers earning money from ads, the key assumption being that more advertising leads to a lower quality perceived by users and therefore a potentially lower ranking by a neutral engine.
Section 2 summarizes the model and main results of (6). In Section 3, we construct and study a simple setting
that captures and illustrates the impact of a non-neutral SE on content innovation and investment. This provides
insights on how an SE that implements a revenue-maximizing ranking policy can have a negative impact.
2 An Optimization Model for Search Rankings
We start by summarizing the model and results of (6), which will be used to study content innovation in the next
section. In this model, an SE receives search queries at random. A query is abstracted as a random vector $Y = (M, R_1, G_1, \ldots, R_M, G_M)$, in which $M$ is the number of pages (or organic links) that match the query (it can be random), and for $i \in \{1, \ldots, M\}$, $R_i$ is the estimated relevance factor for Page $i$ and $G_i$ is the expected SE revenue conditional on the link to Page $i$ being clicked. This revenue may include direct sales, revenue made from ads placed on the associated page, etc. The values of $R_i$ and $G_i$ are typically estimated by the SE using its internal methodology and data. The probability that the link to Page $i$ is clicked (the click-through rate, or CTR) when placed at position $k$ is assumed to be the product $\theta_k \psi(R_i)$ of a relevance effect, through some non-decreasing function $\psi$, and a position effect, through a (fixed) factor $\theta_k$, with $\theta_1 \ge \theta_2 \ge \cdots \ge 0$. The different requests $Y$ are assumed to be independent with some known distribution (in practice, this distribution is learned from data).
The SE uses a ranking rule $\pi$ to select a permutation $\pi(Y) = (\pi_1(Y), \ldots, \pi_M(Y))$ of the $M$ links for each request $Y$, to produce an ordered list of the organic links. The link to Page $i$ is placed in position $\pi_i(Y)$, for each $i$. A “neutral” ranking always orders the links in decreasing order of $R_i$, i.e., based on (estimated) relevance only. A greedy SE that cares only about short-term profits may rank links in decreasing order of $\psi(R_i) G_i$, but this strategy ignores that users may churn after being disappointed by the quality of the results. A smart SE should optimize the tradeoff between the immediate revenue generated by highly-paying results and the increased relevance that attracts more users. In (6), we show how to do this under the following setting. (A sketch of these ranking rules is given below.)
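To make these two ranking rules concrete, here is a minimal sketch in Python (our own illustration; the function names are ours, not from (6)). Each rule returns the link indices ordered from the first display position to the last:

```python
import numpy as np

def rank_neutral(R, G, psi):
    """Neutral ranking: links in decreasing order of estimated relevance R_i."""
    return np.argsort(-R)

def rank_greedy(R, G, psi):
    """Short-term greedy ranking: links in decreasing order of psi(R_i) * G_i."""
    return np.argsort(-(psi(R) * G))
```

With this convention, the link shown at position $k$ is the one whose index appears in entry $k-1$ of the returned array (Python arrays are 0-based).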
Let $\tilde{R}_i := \psi(R_i) R_i$ and $\tilde{G}_i := \psi(R_i) G_i$ be the relevance and the expected revenue of link $i$, weighted by the quality-related factor $\psi(R_i)$ in the CTR. Let
$$
r := \mathbb{E}_Y\!\left[\sum_{i=1}^{M} \theta_{\pi_i(Y)} \tilde{R}_i\right]
\qquad\text{and}\qquad
g := \mathbb{E}_Y\!\left[\sum_{i=1}^{M} \theta_{\pi_i(Y)} \tilde{G}_i\right]
$$
be the average relevance and average revenue per request, for a given ranking rule $\pi$. The average number of arriving requests per unit of time is assumed to be $\lambda(r)$ for some non-negative increasing function $\lambda$. We let $\beta$ be the average SE revenue per query arising from the sponsored content shown on the search results page. The SE wants to maximize its average revenue per unit of time in the long run, which is
$$
\lambda(r)\,(g + \beta).
$$
To mimic a neutral SE, one can simply set all $G_i$ to 0. The main assumptions of this model are standard (11) and are further discussed in (6).
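Both $r$ and $g$ are straightforward to estimate by Monte Carlo when $Y$ can be simulated. The sketch below, continuing the code above (the names and sample sizes are our own choices), estimates them for a given ranking rule and evaluates the long-run objective:

```python
def estimate_r_g(sample_query, ranking, theta, psi, n=100_000, seed=0):
    """Monte Carlo estimates of r and g under a given ranking rule.
    sample_query(rng) -> (R, G): relevances and revenues for one query Y."""
    rng = np.random.default_rng(seed)
    r_sum = g_sum = 0.0
    for _ in range(n):
        R, G = sample_query(rng)
        order = ranking(R, G, psi)     # link indices, first position first
        th = theta[:len(R)]            # position effects theta_1, ..., theta_M
        r_sum += np.dot(th, psi(R[order]) * R[order])  # sum_k theta_k * R~
        g_sum += np.dot(th, psi(R[order]) * G[order])  # sum_k theta_k * G~
    return r_sum / n, g_sum / n

def long_run_revenue(r, g, lam, beta):
    """The SE objective: lambda(r) * (g + beta), revenue per unit of time."""
    return lam(r) * (g + beta)
```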
One of the main results of (6) is an explicit characterization of an optimal ranking policy $\pi$, under mild technical assumptions, when $Y$ has a continuous distribution. We show that an optimal policy must be an LO-$\rho$ policy (linear ordering policy with weight $\rho$), which means that it must rank the links in decreasing order of $\tilde{R}_i + \rho \tilde{G}_i$ for some optimal value $\rho^*$ of the real coefficient $\rho \ge 0$. This $\rho^*$ can be found easily via a stochastic optimization procedure such as fixed-point iteration combined with simulation, assuming that $Y$ can be simulated. The parameter $\rho$ has an economic interpretation: it encodes the tradeoff between short-term revenues (the case of large $\rho$) and the long-term revenues arising from additional visits (the case of $\rho = 0$). In real life, the distribution of $Y$ may change slowly over time, but it can be re-estimated continuously and $\rho^*$ can be updated dynamically. Given $\rho^*$, using the policy is very simple and fast.
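As an illustration, a minimal sketch of such a procedure follows, continuing the code above. The LO-$\rho$ rule itself is a one-liner. For the fixed point, we use the update $\rho \leftarrow \lambda(r)/(\lambda'(r)(g+\beta))$, which is our reading of the first-order tradeoff between $r$ and $g$ in the objective $\lambda(r)(g+\beta)$; treat it as an assumption of the sketch rather than the exact algorithm of (6):

```python
def rank_lo_rho(R, G, psi, rho):
    """LO-rho policy: links in decreasing order of R~_i + rho * G~_i,
    where R~_i = psi(R_i) * R_i and G~_i = psi(R_i) * G_i."""
    return np.argsort(-(psi(R) * R + rho * psi(R) * G))

def optimal_rho(sample_query, theta, psi, lam, lam_prime, beta,
                rho0=0.5, iters=30):
    """Fixed-point iteration combined with simulation to approximate rho*.
    ASSUMPTION: the update rho <- lam(r) / (lam'(r) * (g + beta)) is our
    reading of the optimality condition; see (6) for the exact result."""
    rho = rho0
    for _ in range(iters):
        rule = lambda R, G, p, rho=rho: rank_lo_rho(R, G, p, rho)
        r, g = estimate_r_g(sample_query, rule, theta, psi, n=20_000)
        rho = lam(r) / (lam_prime(r) * (g + beta))
    return rho
```

In the setting of Section 3 below, $\lambda(r) = r$, so $\lambda'(r) = 1$ and the update reduces to $\rho \leftarrow r/(g+1)$, which is roughly consistent with the value $\rho^* \approx 0.7$ reported there for $z = 1$.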
3 Impact of Non-Neutrality on Innovation and Investment
This section illustrates the potential impact of using non-neutral instead of neutral ranking policies, under the model
put forward in (6) which we summarized in Section 2. To provide insights, we use a simplified setting and show how
non-neutrality may harm competition and innovation.
We consider a first content provider (CP) that is vertically integrated with the SE, a second CP that invests in
innovation, and a fringe of other CPs that compete with them. More specifically, we suppose that among the M
pages corresponding to any request Y, Page 1 is served directly by the SE while the others are served by third-party
CPs. Page 2 is served by a CP that invests in content quality, as described below. The other pages are from the rest
of the CPs, and they are all homogeneous. For our illustration, we assume that $M = 10$ for all $Y$. The revenue $G_1$ and the relevances $R_i$ are assumed to be uniformly distributed over $[0,1]$, except for $R_2$, which is uniformly distributed over $[0, 1+z]$ for an investment effort $z > 0$ selected by CP 2. This captures the idea that CP 2 invests in quality to improve its relevance. We assume that the relevances $R_i$ for all $i$ and the revenue $G_1$ are all mutually independent. Since the revenue generated by Pages 2 to 10 does not go to the SE, we set $G_i = 0$ for those pages. We assume that the revenues made by the corresponding CPs from their pages are also uniform over $[0,1]$, and independent across CPs. In addition to its revenue generated as CP 1, the SE receives an expected revenue of $\beta = 1$ per query from sponsored search. The other model parameters are selected as follows: we take $\lambda(r) = r$, $\psi(r) \equiv 1$, and the click-through rates $\mathrm{CTR}(i) = \theta_i$ are set as measured in (4); see Table 1. (A code specification of this setting follows the table.)
Table 1: CTR values used in the simulations, taken from (4)

position k:    1      2      3      4      5      6      7      8      9      10
theta_k:     0.364  0.125  0.095  0.079  0.061  0.041  0.038  0.035  0.030  0.022
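In code, this setting can be specified as follows (a minimal sketch continuing the ones above; THETA, make_sampler, and the other names are ours, and Python indices are 0-based, so Pages 1 and 2 of the text correspond to indices 0 and 1):

```python
THETA = np.array([0.364, 0.125, 0.095, 0.079, 0.061,
                  0.041, 0.038, 0.035, 0.030, 0.022])   # Table 1

def make_sampler(z):
    """Queries for this setting: M = 10; R_i ~ U[0,1] for all pages except
    R_2 ~ U[0, 1+z] (CP 2's investment); G_1 ~ U[0,1] (Page 1 is the SE's
    own content) and G_i = 0 for i >= 2."""
    def sample_query(rng):
        R = rng.uniform(0.0, 1.0, size=10)
        R[1] = rng.uniform(0.0, 1.0 + z)   # Page 2, served by CP 2
        G = np.zeros(10)
        G[0] = rng.uniform(0.0, 1.0)       # Page 1, vertically integrated
        return R, G
    return sample_query

psi  = lambda r: np.ones_like(r)   # psi(r) = 1: CTR depends on position only
lam  = lambda r: r                 # arrival rate lambda(r) = r
beta = 1.0                         # sponsored-search revenue per query
```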
To see what happens when the SE is non-neutral, we perform simulations in which we compute $\rho^*$ for the SE and examine the expected revenues of the SE and the CPs, and the average relevance perceived by users, as functions of $\rho$. We also compute the optimal $z$ for CP 2 and see how this optimal $z$ changes when the SE policy goes from neutral to non-neutral. In these experiments, we assume (as a simplification) that the distribution of $Y$ is always immediately available to the SE. (In real life there would be a delay to learn and dynamically update the estimate when the distribution changes, but this would not cause problems if the distribution changes slowly.)
Figure 1 shows the results of the numerical analysis when the SE ranks the links according to $\tilde{R}_i + \rho \tilde{G}_i$, for varying values of $\rho$, with $z = 1$. Recall that each CP makes some revenue when its page is clicked, which is uniformly distributed on $[0,1]$ and independent of all other variables; hence, for $i \ge 2$ the revenue of CP $i$ is simply half its visit rate. Under a neutral ranking ($\rho = 0$), CP 2 makes more revenue than the other CPs, as expected, because it regularly obtains a higher ranking (note that we do not include innovation costs here). However, when $\rho$ increases above approximately 0.8, CP 1 becomes the one with the highest revenue, despite its (stochastically) lower relevance. The optimal ranking rule for the SE is an LO-$\rho$ policy in which $\rho^*$ depends on $z$. For $z = 1$, we find $\rho^* \approx 0.7$.
[Figure 1: Relevance and revenues (left) and visit rates (right) per unit of time, as functions of $\rho$, for the setting with vertical integration of CP 1 and investment by CP 2. On the left, the top curve gives the total SE revenue $\lambda(r)(g+\beta)$, the second curve gives the global relevance $r(\rho)$, which can be seen as a global measure of user satisfaction, and the lower curves give the revenues of CP 1, CP 2, and the other CPs.]
We now take the perspective of CP 2 and compute its optimal investment level $z$, anticipating that the SE will compute the optimal LO-$\rho$ policy for that choice of $z$ and rank content accordingly. The profit of CP 2 is the revenue generated by the search market (i.e., half its visit rate) minus $z$ times the unit investment cost. To maximize profits in this Stackelberg setting, we simulated the outcomes and computed $\rho^* = \rho^*(z)$ over a fine grid of values of $z$. In Figure 2, we see that $\rho^*$ increases as a function of $z$. However, this is not a general property: if $\lambda$ is convex, then the SE may tend to be “more neutral” (choose a lower $\rho$) when the average relevance of CP 2 increases, because there would be more to gain from improving the average relevance than from improving the average revenues. (A sketch of this outer search over $z$ is given after Figure 2.)

[Figure 2: The optimal weight $\rho^*$ that the SE would use in the (non-neutral) ranking, and the corresponding average relevance, as functions of CP 2's effort $z$; the SE revenue and average relevance under the neutral policy are shown for comparison.]
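A sketch of this outer optimization, continuing the code above (the grid, sample sizes, and seeds are arbitrary choices of ours):

```python
def cp2_visit_rate(z, rho, theta=THETA, n=50_000, seed=3):
    """Visit rate of CP 2's page per unit of time under the LO-rho policy:
    lambda(r) times the average position effect at Page 2's rank."""
    rng = np.random.default_rng(seed)
    sample = make_sampler(z)
    t_sum = 0.0
    for _ in range(n):
        R, G = sample(rng)
        order = rank_lo_rho(R, G, psi, rho)
        pos = int(np.where(order == 1)[0][0])   # rank of Page 2 (index 1)
        t_sum += theta[pos]
    rule = lambda R, G, p: rank_lo_rho(R, G, p, rho)
    r, _ = estimate_r_g(make_sampler(z), rule, theta, psi, n=n)
    return lam(r) * t_sum / n

def best_investment(z_grid, unit_cost=0.4, neutral=False):
    """Stackelberg outer step: CP 2 picks z to maximize
    0.5 * (its visit rate) - unit_cost * z, anticipating the SE's
    rho*(z) (or rho = 0 in the neutral benchmark)."""
    best_z, best_profit = None, -np.inf
    for z in z_grid:
        rho = 0.0 if neutral else optimal_rho(make_sampler(z), THETA, psi,
                                              lam, lambda r: 1.0, beta)
        profit = 0.5 * cp2_visit_rate(z, rho) - unit_cost * z
        if profit > best_profit:
            best_z, best_profit = z, profit
    return best_z, best_profit
```

For instance, best_investment(np.arange(0.5, 2.01, 0.05)) approximates CP 2's choice under the non-neutral regime, while neutral=True gives the neutral benchmark.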
Figure 3 shows the curves of the CP profits (left) and visit rates (right) as functions of $z$, for both the neutral and non-neutral situations, assuming a unit investment cost of 0.4 for CP 2. For CP 1, both the neutral and non-neutral revenues (and visit rates) increase with $z$, thanks to the increased relevance, and the difference between them is quite large and increases with $z$. The latter occurs because $\rho^*$ increases with $z$ and the average relevance also improves, attracting more visits. For CP 2, the difference between the neutral and non-neutral cases also increases with the investment
in quality, for two reasons: (a) increasing $z$ increases $\rho^*$, which increases the non-neutrality, to the detriment of CP 2, whose page is pushed behind that of CP 1 more often; and (b) even for a fixed $\rho$, with a larger $z$ one has $R_2 > R_1$ more often, and then the situation in which these two pages are placed in reverse order of relevance occurs more frequently, hurting CP 2. For the other CPs, increasing $z$ increases revenue by increasing the arrival rate, but at the same time it may decrease revenue because these other CPs have their pages ranked lower on average, in both the neutral and non-neutral situations.

The fact that CP 2 is hurt more by non-neutrality when it invests more has the consequence that non-neutrality reduces its optimal level of investment. In the present example, the optimal investment levels are as follows:
- In a neutral regime, CP 2 would select $z = 1.25$ and obtain a net profit of 0.046 per unit of time.
- In a non-neutral regime, CP 2 would select $z = 1.05$ and obtain a net profit of 0.038 per unit of time.
That is, with a non-neutral policy, CP 2 invests 16% less and its long-run profit decreases by about 17%. A non-neutral policy with these optimal values of $z$ and $\rho$ also decreases the global relevance $r$ (which measures the users' satisfaction) by about 4%, from 0.773 to 0.7395.
[Figure 3: CP profits (including the quality investment, at unit cost 0.4) (left) and visit rates to the various CPs (right), as functions of the investment $z$ by CP 2; the curves compare the neutral and non-neutral cases for CP 1, CP 2, and the other CPs.]

4 Conclusion

We have considered a search market where an SE focuses on ranking algorithms that maximize its long-term revenue, and we have compared the outcome with that arising from a neutral ranking algorithm based only on relevance. To investigate the impact of non-neutral versus neutral ranking policies on content innovation, we considered several CPs: one vertically integrated with the SE (hence often favored in the rankings), one investing to improve its quality and revenues, and
a fringe of several independent and ex-ante identical CPs. One conclusion from our case study is that under non-
neutral search policies, CPs may under-invest, which could curb innovation. This of course depends on the setting and the parameter values of the model, but our example provides insight into how this could happen.
The framework used here can be applied directly to real-life situations, provided that data is available to estimate the relevant distributions and the model parameters that capture the real-world SE marketplace.
References
1. Brill J. Statement of the Commission regarding Google’s search practices. http://www.ftc.gov/public-statements/2013/01/statement-commission-regarding-googles-search-practices, last accessed Oct 2014; 2013.
2. Coucheney P., D’Acquisto G., Maillé P., Naldi M., Tuffin B. Influence of search neutrality on the economics of advertisement-financed
content. ACM Transactions on Internet Technology. 2014;14(2-3):Article 10.
3. Crowcroft J. Net neutrality: The technical side of the debate: A white paper. ACM SIGCOMM Computer Communication Review. 2007 Jan;37(1).
4. Dejarnette R. Click-through rate of top 10 search results in Google. http://www.internetmarketingninjas.com/blog/search-engine-optimization/click-through-rate, last accessed June 8, 2017; 2012.
5. Edelman B., Lockwood B. Measuring bias in “organic” web search. http://www.benedelman.org/searchbias, last accessed June 8,
2017; 2011
6. L’Ecuyer P., Maillé P., Stier-Moses N., Tuffin B. Revenue-maximizing rankings for online platforms with quality-sensitive consumers.
Operations Research. 2017;65(2):408–423.
7. Maillé P., Tuffin B. Telecommunication network economics: From theory to applications: Cambridge University Press; 2014.
8. New York Times. Google fined record $2.7 billion in E.U. antitrust ruling. June 27, 2017. https://www.nytimes.com/2017/06/27/technology/eu-google-fine.html, last accessed June 2017.
9. Odlyzko A. Network neutrality, search neutrality, and the never-ending conflict between efficiency and fairness in markets. Review of
Network Economics. 2009;8(1):40–60.
10. Rushe D. Eric Schmidt Google senate hearing – as it happened. http://www.guardian.co.uk/technology/blog/2011/sep/21/eric-schmidt-google-senate-hearing, last accessed Oct 2014; 2011.
11. Varian H.R. Position auctions. International Journal of Industrial Organization. 2007;25(6):1163–1178.
12. Wright J. D. Defining and measuring search bias: Some preliminary evidence. Research Paper 12-14, George Mason University School of Law; 2012.