Yale Law School
Yale Law School Legal Scholarship Repository
Student Prize Papers
Yale Law School Student Scholarship
A Question of Rank
Ehrenberg, Shuky, “A Question of Rank” (2007). Student Prize Papers. Paper 18.
A Question of Rank1
An empirical investigation into law school rank-maximizing strategies, with a focus on admissions.
U.S. News and World Report rankings have long been a part of the law school application
process, with school rank often playing an important role in a prospective student’s decisions.
This Paper addresses the question of whether law schools act strategically in order to maximize
their U.S. News and World Report ranking, with a focus on the admissions process. The Paper
will show that some law schools admit students in order to maximize their ranking, as opposed to
admitting students expected to succeed in law school. The Paper will also include a more general
discussion of U.S. News’s ranking methodology, and its possible implications for affirmative action
and minority admissions in law schools.
1 An expanded form of the data sets not included in the body of the note is on file with the author, and available
2 Thanks to Muhammad Asali, Alessandra Casella, Phoebus Dhrymes, Susan Elmes, Eiichi Miyagawa, and Atila
Abdulkadiroglu of Columbia University’s Economics Department, as well as to Andrew Gelman of Columbia
University’s Political Science and Statistics department for (ongoing) advice on questions of methodology. Thanks
also to Jeffrey Stake of Indiana Law School for his advice on the matter of law school rankings.
I. Why Do Law Schools Care? (Do Law Schools Care?)........................................................ 7
II. Ranking Methodology and Rank Maximizing Strategies in Admissions......................... 10
A. Weighting of LSAT and UGPA................................................................................ 11
B. Differential weighting of LSAT and UGPA for different score ranges.................... 21
C. Ratio of Admitted to Applied Students..................................................................... 27
III. Beyond Admissions.......................................................................................................... 30
A. Quality Assessment .................................................................................................. 30
B. Placement Success ................................................................................................... 32
C. Faculty Resources .................................................................................................... 33
D. Size Matters .............................................................................................................. 34
IV. Conclusions and Implications........................................................................................... 39
The ranking of educational institutions in the United States has a tradition rich in both
history and controversy. Due to the availability of large, easily accessible digital databases and a
variety of quantifiable measures by which educational institutions can be ranked, few
programs—be they academic or professional—have escaped the roving eyes of U.S. News and
World Report, The Princeton Review, and other such publications.3
While ranking educational institutions can convey useful information to prospective
students, ranking mechanisms are not uncontroversial. One often-overlooked, yet crucially
important, element of a ranking mechanism is not so much the quality of the information that it
conveys, but rather the effect of the methodology it employs on the institutions that it ranks. For
instance, since most college rankings include “yield” (the percentage of admitted students who
enroll) as a component of their algorithm, colleges that wish to maximize their rank have an
incentive to reject students whom they think will not enroll if admitted.4 Perversely, the students
most likely to be rejected are the more successful ones, since they are most likely to have a better alternative available.
Both statistical and anecdotal evidence suggests that colleges do in fact act upon these
incentives, with some schools going so far as to hire outside consultants to assist them in
maximizing their student yield.6 Although yield is not a central component of U.S. News and
World Report’s law school ranking algorithm, that algorithm contains other, weightier
components that allow for strategic rank-maximizing action on the part of law schools. This
Paper explores the impact of U.S. News and World Report’s ranking mechanism on law schools,
with a focus on the admissions process. Since various characteristics of an admitted class—such
as Law School Admissions Test (LSAT) scores and Undergraduate Grade Point Average
(UGPA)—are significant factors in a school’s rank, law schools have an incentive to act
strategically with respect to these variables in order to maximize their rank.
3 These publications rank academic and professional programs for various degrees, using different formulas. See,
e.g., America’s Best Graduate Schools, U.S. NEWS & WORLD REP. (2007),
http://www.usnews.com/usnews/edu/grad/rankings/law/lawindex_brief.php; Best 361 Colleges,
PRINCETON REV. (2007), available at http://www.princetonreview.com/college/research/rankings/rankings.asp (last
visited Apr. 15, 2007).
4 Daniel Golden, How Colleges Reject the Top Applicants – and Boost Their Status, WALL ST. J., May 29, 2001, at
This Paper uses both statistical and case study evidence to show that some law schools do
in fact act on these incentives, implicitly introducing rank into their admissions considerations.
The Paper will show that some law schools weight certain admissions variables (LSAT and
UGPA) in a way that is commensurate with maximizing their rank. These law schools give
LSAT scores a heavier weight than its ability to predict success in law school warrants, due to its
heavy impact on rank. One of the consequences of this skewed admissions policy is a bias
against minority students, who tend to perform poorly on the LSAT when compared to non-
minorities.7 Additionally, I show that law schools provide and deny information to prospective
students in a manner that is likely to maximize their rank, by maximizing their applicant pool—
another component of U.S. News’s ranking algorithm.8
Although numerous scholars have criticized the validity of U.S. News law school
rankings,9 there are relatively few systematic empirical studies available. A number of noted
exceptions include Sauder and Lancaster’s study on the impact of rank on law school behavior.10
7 See generally William C. Kidder, Does the LSAT Mirror or Magnify Racial and Ethnic Differences in Educational
Attainment?: A Study of Equally Achieving “Elite” College Students, 89 CAL. L. REV. 1055, 1058 (2001) (analyzing
minority achievement on the LSAT test).
8 More specifically, the ratio of accepted to applied students is a component of rank. By increasing the number of
applicants (the denominator of this ratio), a law school lowers its acceptance rate, improving this component of its
rank. See generally America’s Best Graduate Schools, U.S. NEWS AND WORLD REP. (2007),
http://www.usnews.com/usnews/edu/grad/rankings/about/08law_meth_brief.php (last visited Apr. 15, 2007).
9 See, e.g., Brian Leiter, Measuring the Academic Distinction of Law Faculties, 29 J. LEGAL STUD. 451, 452 (2000)
(listing a number of flaws in U.S. News’s ranking methodology); Richard A. Posner, Law School Rankings, 81 IND.
L.J. 13, 22 (2006) (providing additional critiques of U.S. News’s methodology, and suggesting several
improvements); Jeffrey Stake, The Interplay Between Law School Rankings, Reputations, and Resource Allocation:
Sauder and Lancaster focus on the impact of rank on prospective students, arguing that students
take rank into account when applying to schools, as evidenced by a correlation between school
rank and number of applicants. A recent study by Stake and Alexeev examines student and law
school reactions to rank,11 finding that rank affects the quality of students applying to law
schools as well as a law school’s reputation.12 This Paper’s investigation into whether law
schools act strategically with respect to rank in admissions can be seen as a natural extension of
this literature, with a focus on adverse effects of U.S. News rankings on law schools’ admissions
incentives, and law schools’ reactions to these incentives.
The Paper proceeds in four parts. Part I examines the impact of law school rankings on
law schools, faculty, and students in a general manner, showing that law schools, as well as law
school administrators have good reason to care about their rank. Part II introduces and explains
the selectivity portion of U.S. News and World Report’s law school ranking algorithm, which
includes various measures related to the law school admissions process. I discuss possible
methods by which law schools can act strategically within this category in order to maximize
their rank, and bring statistical and case study evidence to show that some law schools do in fact
maximize rank through admissions. Part II concludes with a discussion of the implications of
rank-maximizing admissions, with a focus on minority candidates. Part III discusses strategic
manipulations available to law schools outside of the admissions process, with anecdotal
evidence suggesting that such manipulations take place. Part IV suggests a number of approaches
available to law schools to combat the negative impact of U.S. News rankings, as well as ways
U.S. News can minimize incentives for strategic action on the part of law schools.
Ways Rankings Mislead, 81 IND. L.J. 229, 233-241 (2006) (criticizing U.S. News’s rankings as having a negative
impact on both prospective students and law schools).
10 See generally Michael Sauder & Ryon Lancaster, Do Rankings Matter? The Effects of U.S. News & World
Report Rankings on the Admissions Process of Law Schools, 40 LAW & SOC’Y REV. 105, 105-134 (2006).
11 Jeffrey E. Stake & Michael Alexeev, Who Responds to U.S. News & World Report’s Law School Rankings?,
IND. LEGAL STUD. RESEARCH PAPER NO. 55 (2006), available at http://ssrn.com/abstract=913427 (last visited
Apr. 15, 2007). The article is in draft form, with some robustness tests still being conducted.
12 Reputation as measured by U.S. News surveys.
would already have any information that a ranking system might convey. Be it due to a lack of
information, an inability to correctly evaluate admissions chances, or other factors, the number of
applications to a law school is not (all else equal) powerfully correlated with its size.
Specifically, law schools that are otherwise similar (very near in rank and location, or type of
location, urban v. rural, etc.) will not receive a drastically lower number of applications just
because they are smaller. In order to reinforce this assertion, I will compare several groups of
otherwise similar law schools and show that although a reduction in size brings about some
reduction in applications, the two are not proportional.104
For example, Harvard and Stanford Law Schools have occupied top five slots in U.S.
News rankings for a number of years.105 For the 2005 academic year Harvard admitted 834
students out of 7,391 applicants, with a total of 554 matriculating. Stanford on the other hand
admitted 390 out of 5,040 applicants, for a total of 166 matriculating. Yet Stanford is considered
far more selective than Harvard, despite the fact that Harvard has a higher ratio of matriculating
to admitted students, meaning that, of the people admitted to each school, a larger proportion
chose to attend Harvard than Stanford. Stanford receives a higher selectivity score simply
because it is small.
or the joys of living in Palo Alto as opposed to Boston, let us examine a school that is more
similar to Harvard in climate, and, some claim, in character. The University of Chicago’s Law
School admitted 750 students for a target class of 192, out of 4,737 applicants.106 Although
Chicago’s ratio of matriculating to admitted students is far lower than Harvard’s, its selectivity is
considered nearly the same, simply due to the target size of its class.107
104 I bring a number of high-ranking case studies, though more complete data is available. See America’s Best
Graduate Schools, U.S. NEWS AND WORLD REP. (2007),
http://www.usnews.com/usnews/edu/grad/rankings/law/brief/lawrank_brief.php (last visited Apr. 15, 2007); ABA
Official Guide to Law Schools, http://officialguide.lsac.org/search/cgi-bin/results.asp?PageNo= (last visited Apr. 15, 2007).
105 See Top Law Schools, available at http://www.top-law-schools.com/profiles.html (last visited Apr. 15, 2007).
106 See infra p. 38 (Table 5).
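The arithmetic behind the Harvard, Stanford, and Chicago comparison can be made explicit. Below is a short sketch using only the 2005 figures cited above (for Chicago, the target class of 192 stands in for matriculants):

```python
# 2005 admissions figures cited in the text: (applicants, admitted,
# matriculated). Acceptance rate (admitted / applied) is the selectivity
# input U.S. News rewards; yield (matriculated / admitted) is the ratio
# the text discusses.
schools = {
    "Harvard":  (7391, 834, 554),
    "Stanford": (5040, 390, 166),
    "Chicago":  (4737, 750, 192),  # 192 is the target class size
}

for name, (applied, admitted, matriculated) in schools.items():
    accept_rate = admitted / applied
    yield_rate = matriculated / admitted
    print(f"{name:8s}  acceptance {accept_rate:5.1%}  yield {yield_rate:5.1%}")
```

On these numbers Harvard’s yield is the highest of the three, yet Stanford’s acceptance rate is the lowest, which is precisely the discrepancy described above.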
NYU and Columbia are very similar schools, in the same city, and with virtually identical
rank. Despite the fact that Columbia admits approximately twenty-five percent fewer students, it
still receives more applicants than NYU.108 This seems to indicate that students are either
unaware of the effect of school size on admissions prospects, or simply choose to ignore it.
This raises a further complication: one possible measure of school selectivity takes the form of
a tournament109—how likely a student is to choose a certain school, given the total set of schools
to which he was admitted.110 While data on individual students is not available, the percentage of
admitted students who choose to attend a school (Matric / Admit ratio in Table 4) does capture a
similar idea, since it indicates how many individuals prefer attending that law school over their
next best option.111 Assuming that most people admitted to similar law schools have similar
opportunity costs, this proportion would be a measure of law school value from the student’s
perspective. Note that the student’s choice is closely correlated with ranking, though not as
closely with selectivity. This suggests that students with higher scores choose the higher ranked
school, increasing that school’s rank through their higher scores.
Table 5: comparing school and student selectivity112
107 Evidence of such behavior exists in college applications as well. Christopher Avery et al., COLLEGE
ADMISSIONS PROJECT (2005), at 14.
108 See infra p. 35 (Table 5).
109 See Cass Sunstein, Ranking Law Schools: A Market Test?, 81 IND. L.J. 25, 25-34 (2006).
110 See generally Christopher Avery et al., A Revealed Preference Ranking of U.S. Colleges and Universities,
NBER Working Paper No. 10803 (2004). Note that the data presented here is far less precise, since it assumes
admission to comparable schools, as opposed to the college study, which actually measures admission.
111 This is far from a perfect measure since an individual’s alternative options are unknown.
112 America’s Best Graduate Schools U.S. NEWS AND WORLD REP. (2007)
http://www.usnews.com/usnews/edu/grad/rankings/law/brief/lawrank_brief.php (last visited Apr. 15, 2007).
While it is unlikely that law schools actually manipulate their size to increase their
ranking, the size effect reveals a fault in the ranking methodology that is well worth mentioning.
Note that there is not necessarily a correlation between school size and classroom size, or
between school size and the ratio of faculty to students. Although most schools do publish
statistics on class sizes, these are not used by U.S. News’s ranking algorithm.113 U.S. News’s use
of a simple ratio of admitted to applied students gives smaller schools an advantage simply
because they are small. A more accurate measure of acceptance rates would incorporate school
size into its considerations as well.
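As one purely illustrative possibility (a hypothetical sketch of the Paper’s suggestion, not an existing U.S. News measure), a size adjustment could ask what acceptance rate a school would post if it had to fill a class of a given size, keeping its own applicant pool and yield fixed:

```python
# Hypothetical size-adjusted selectivity: the acceptance rate a school
# would post if it enrolled a class of `target_seats`, holding its own
# applicant pool and yield constant. This is an illustration, not an
# existing U.S. News measure. 2005 figures from the text.
schools = {
    # name: (applicants, admitted, matriculated)
    "Harvard":  (7391, 834, 554),
    "Stanford": (5040, 390, 166),
}

def adjusted_accept_rate(name, target_seats):
    applied, admitted, matriculated = schools[name]
    yield_rate = matriculated / admitted
    admits_needed = target_seats / yield_rate   # admits required to fill the class
    return admits_needed / applied

# Raw rates: Harvard ~11.3%, Stanford ~7.7%. But if Harvard enrolled a
# class of Stanford's size (166), its rate would drop well below Stanford's.
print(f"{adjusted_accept_rate('Harvard', 166):.1%}")   # 3.4%
print(f"{adjusted_accept_rate('Stanford', 166):.1%}")  # 7.7%
```

Holding class size constant in this way removes the advantage a school obtains simply by being small.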
E. Possible Future Empirical Research
Although empirical investigations into the impact, effectiveness, and accuracy of law
school rankings have increased significantly over the past decade,114 a number of important
questions remain. These include, but are not limited to, differences between state and private
institutions, and the variables introduced by transfer and part-time students.
State schools may be at a strategic disadvantage with respect to rank-maximizing
admissions strategies, since some states require state schools to admit a certain percentage of in-
state applicants. Furthermore, some state schools simply choose to admit more in-state students
for independent reasons.115 These restrictions on the pool of students admitted to state schools
can have a negative effect on their average test scores, especially in small states, where a limited
number of students take the LSAT. Consequently, state schools may receive a lower rank than
they otherwise would, simply because they admit more in-state students.116 Empirical evidence
corroborating this theory might indicate that state schools should be ranked separately, or that
binding proportions on in-state admissions should be incorporated into a school’s rank.
113 Faculty-student ratio, on the other hand, is a component of U.S. News rank. See
http://www.usnews.com/usnews/edu/grad/rankings/about/08law_meth_brief.php (last visited Apr. 15, 2007).
114 See, e.g., Jeffrey E. Stake & Michael Alexeev, Who Responds to U.S. News and World Report’s Law School
Rankings?, IND. LEGAL STUD. RESEARCH PAPER NO. 55 (June 30, 2006), available at
http://ssrn.com/abstract=913427 (last visited Apr. 15, 2007); Michael Sauder & Ryon Lancaster, Do Rankings
Matter? The Effects of U.S. News & World Report Rankings on the Admissions Process of Law Schools, 40 LAW &
SOC’Y REV. 105, 105-134 (2006).
Since transfer students’ scores are not factored into a school’s ranking, a rank-
maximizing school will care less about a transfer student’s undergraduate GPA and LSAT
scores. This would presumably allow the school to focus on the better predictors of law school
success, ignoring possible effects on rank. Additionally, a school may wish to admit more
students through transfer, thereby admitting better students without adversely affecting its
ranking.117 Like transfer data, part-time student data is not included in a school’s rank.
Unfortunately, neither transfer nor part-time students are consistently included in detailed ABA
and LSAC data, making law schools themselves the only source for such information.
Corroborating these hypotheses could lead to the incorporation of transfer students into ranking
algorithms, as well as the construction of separate rankings for part-time programs.
115 Such policies are not always explicit, and can come in the form of additional financial benefits. See, e.g.,
http://www.law.wayne.edu/current/financial_aid.html. Detailed data on multiple schools available from U.S. News.
116 It is not entirely clear that there is anything wrong with state school rank being adversely affected in this
manner, since the quality of the students admitted (as crudely measured by LSAT and UGPA) is indeed lower,
regardless of the reasons.
117 Jeffrey Stake, The Interplay Between Law School Rankings, Reputations, and Resource Allocation: Ways
Rankings Mislead, 81 IND. L.J. 229, 238 (2006); Alex Wellen, The $8.78 Million Maneuver, N.Y. TIMES, July 31,
2005, at Section A4.
IV. Conclusions and Implications
Rank plays an important role in disseminating information about law schools in an
information-scarce market. Yet as this Paper and others convincingly demonstrate,118 the current
system conveys inaccurate information, resulting in a variety of harmful effects. Prospective law
students take ranking into account when deciding which school to apply to, despite mounting
evidence suggesting that U.S. News’s ranking methodology is deeply flawed.119 Law schools
react to rankings in a number of contexts,120 including admissions decisions.
Yet despite these flaws in U.S. News’s methodology, rankings can still act as a useful tool
in the evaluation of law schools, if implemented correctly. Although the purpose of this Paper is
not to propose a new ranking methodology, the flaws demonstrated by the statistical analysis in
this Paper suggest two possible changes in approach to law school rankings. The first proposal is
of a very general nature, while the second is more specific in its scope.121
Generally, publishing disaggregated, category-specific information, as opposed to a
single weighted score, could improve the situation markedly. If rankings were category specific,
prospective students (and faculty) would be able to weight individual factors as they saw fit,
rather than having U.S. News assign weights in a somewhat arbitrary manner. For example, if
category-specific rankings were provided for library size and student-faculty ratio, individual
students could decide independently how to weight each factor: if library size is important to one
student but not another, each student can weight library size differently.122 The situation is
analogous for student-faculty ratio.123
118 See, e.g., Jeffrey Stake, The Interplay Between Law School Rankings, Reputations, and Resource Allocation:
Ways Rankings Mislead, 81 IND. L.J. 229, 269 (2006); Brian Leiter, How to Rank Law Schools, 81 IND. L.J. 47,
47-52 (2006).
119 See Jeffrey Stake, The Interplay Between Law School Rankings, Reputations, and Resource Allocation: Ways
Rankings Mislead, 81 IND. L.J. 229, 269 (2006).
120 See generally id.
121 For a number of other interesting proposals, see id. at 260; Brian Leiter, How to Rank Law Schools, 81 IND. L.J.
47, 47-52 (2006).
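The per-student weighting described in note 122 can be sketched as follows; the schools, categories, ranks, and weights below are all hypothetical, and the aggregation (a weighted sum of category ranks) is only one possible choice:

```python
# Sketch of the category-specific proposal: a separate rank is published
# per category, and each student applies his own weights to produce a
# personal ordering. All data here is hypothetical.

# Published per-category ranks (1 = best).
category_ranks = {
    "School A": {"library_size": 1, "faculty_ratio": 3},
    "School B": {"library_size": 2, "faculty_ratio": 1},
    "School C": {"library_size": 3, "faculty_ratio": 2},
}

def personal_ranking(weights):
    """Order schools by the student's own weighted sum of category ranks
    (lower weighted rank = more preferred)."""
    def score(school):
        return sum(w * category_ranks[school][c] for c, w in weights.items())
    return sorted(category_ranks, key=score)

# A student who cares mostly about faculty ratio ranks the schools one way...
print(personal_ranking({"library_size": 0.2, "faculty_ratio": 0.8}))
# ...while a student who cares mostly about library size ranks them another.
print(personal_ranking({"library_size": 0.8, "faculty_ratio": 0.2}))
```

The point of the sketch is simply that different weight vectors produce different orderings over the same published data, so no single aggregate weighting needs to be imposed.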
This type of ranking system has the advantage of making undesirable strategic behavior
on the part of law schools more difficult, since the weights associated with different school
quality measures are determined by prospective students on an individual basis. Including a
measure of faculty quality according to specific area of law may give the added advantage of
encouraging some schools to specialize.124 This type of information would be particularly useful
for prospective students with narrow areas of interest within law.
Although measures based on individual categories are useful, if the popularity of U.S.
News’s ranking system is an indicator, there is a strong market demand for a single aggregate
ranking system, in addition to categorical rankings. While improving upon the individual
measures employed by U.S. News is certainly possible, this Paper supports making available a
student-choice, or “tournament,” based model of rank, either alongside or instead of the current
method employed by U.S. News. Designing a single rank based on a tournament model has many
advantages over U.S. News’s methodology.125 Tournament-type ranking has been implemented
in the college context, with some success.126 There, the model is implemented through the
construction of a tournament between schools competing over students.
receive “points” in the “tournament” when students select their school over other schools to
122 More generally, if a separate rank rc is given to each category c out of a total of n categories, each student i can
associate a weight wc with that category according to his own preferences. Each individual student can then
aggregate these weighted category ranks to compose a list of preferences according to the available information and
the weight, or importance, he associates with each variable, resulting in a student-specific weighted ranking for each school.
123 Brian Leiter already provides some information along these lines. See LeiterRankings available at
http://www.leiterrankings.com/ (last visited Apr. 15, 2007).
124 Such specialization may also make a generic ranking method seem even less viable, since it is difficult to
compare across areas of specialization within law.
125 For a similar proposal, see Cass Sunstein, Ranking Law Schools: A Market Test?, 81 IND. L.J. 25, 25-34 (2006).
126 See generally Christopher Avery et al., A Revealed Preference Ranking of U.S. Colleges and Universities,
NBER Working Paper No. 10803 (2004).
which they have been admitted. For example, if a student was admitted to both college A and B,
and chose to attend college A, that student will be said to have ranked college A over college B.
Rank is then determined according to an aggregation of individual student choices.
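A toy version of this aggregation can be sketched as follows. The student choices are hypothetical, and the scoring (simple head-to-head win rates) is only an illustration; revealed-preference studies such as Avery et al.’s use more sophisticated statistical models:

```python
# Toy tournament aggregation: each student who was admitted to several
# schools "awards a point" to the school he chose over each alternative.
# Schools are then ordered by head-to-head win rate. Hypothetical data.
from collections import Counter

# (chosen_school, other_schools_admitted_to) per student
choices = [
    ("A", ["B"]),
    ("A", ["B", "C"]),
    ("B", ["C"]),
    ("C", ["B"]),
    ("B", ["A"]),
]

wins, contests = Counter(), Counter()
for chosen, others in choices:
    for other in others:
        wins[chosen] += 1        # chosen beats each alternative
        contests[chosen] += 1
        contests[other] += 1     # the losing school also played a contest

ranking = sorted(contests, key=lambda s: wins[s] / contests[s], reverse=True)
print(ranking)  # ['A', 'B', 'C']
```

Because the only input is actual student choices, a school cannot improve its position here by reshaping its applicant pool or admitted-class statistics; it can only improve by being chosen.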
Manipulating tournament rank is difficult, since the determining factor of a school’s rank
is whether students choose to attend it over other schools. Furthermore, student preference seems
a more relevant measure of a school’s quality than some arbitrarily weighted numeric score, or
the size of a school’s library.
Since both of the above proposals make strategic action on the part of law schools
extremely difficult, they will severely limit the incentives law schools currently face to
differentially weight admissions variables in order to maximize rank. This in turn will improve
minority admissions prospects in some law schools, without the use of affirmative action or
other politically controversial measures.
In conclusion, there is little doubt that law schools can and do act strategically in order to
maximize their U.S. News rank in a variety of ways. Furthermore, it is clear that some of these
rank-maximizing strategies may result in real harm. Regardless of future ranking
methodologies, it is important to realize that by providing more and better information to
prospective students, law schools can help overcome the negative impact of rankings. Increased
transparency in admissions processes and in the characteristics of an admitted class, such as
LSAT and UGPA, can only help reduce the disproportionate impact rankings have on both
students and law schools.