Ehrenberg, Shuky, “A Question of Rank” (2007). Student Prize Papers. Paper 18.
A Question of Rank1
An empirical investigation into law school rank-maximizing strategies, with a focus on admissions2
U.S. News and World Report rankings have long been a part of the law school application
process, with school rank often playing an important role in a prospective student’s decisions.
This Paper addresses the question of whether law schools act strategically in order to maximize
their U.S. News and World Report ranking, with a focus on the admissions process. The Paper
will show that some law schools admit students in order to maximize their ranking, as opposed to
admitting students expected to succeed in law school. The Paper will also include a more general
discussion of U.S. News’s ranking methodology and its possible implications for affirmative action
and minority admissions in law schools.
1 An expanded form of the data sets not included in the body of the Paper is on file with the author and available upon request.
2 Thanks to Muhammad Asali, Alessandra Casella, Phoebus Dhrymes, Susan Elmes, Eiichi Miyagawa, and Atila
Abdulkadiroglu of Columbia University’s Economics Department, as well as to Andrew Gelman of Columbia
University’s Political Science and Statistics department for (ongoing) advice on questions of methodology. Thanks
also to Jeffrey Stake of Indiana Law School for his advice on the matter of law school rankings.
I. Why Do Law Schools Care? (Do Law Schools Care)?....................................................... 7
II. Ranking Methodology and Rank Maximizing Strategies in Admissions......................... 10
A. Weighting of LSAT and UGPA................................................................................ 11
B. Differential weighting of LSAT and UGPA for different score ranges.................... 21
C. Ratio of Admitted to Applied Students..................................................................... 27
III. Beyond Admissions.......................................................................................................... 30
A. Quality Assessment .................................................................................................. 30
B. Placement Success ................................................................................................... 32
C. Faculty Resources .................................................................................................... 33
D. Size Matters .............................................................................................................. 34
IV. Conclusions and Implications........................................................................................... 39
The ranking of educational institutions in the United States has a tradition rich in both
history and controversy. Due to the availability of large, easily accessible digital databases and a
variety of quantifiable measures by which educational institutions can be ranked, few
programs—be they academic or professional—have escaped the roving eyes of U.S. News and
World Report, The Princeton Review, and other such publications.3
While ranking educational institutions can convey useful information to prospective
students, ranking mechanisms are not uncontroversial. One often-overlooked, yet crucially
important, element of a ranking mechanism is not so much the quality of the information that it
conveys, but rather the effect of the methodology it employs on the institutions that it ranks. For
instance, since most college rankings include “yield” (the percentage of admitted students who
enroll) as a component of their algorithm, colleges that wish to maximize their rank have an
incentive to reject students whom they think will not enroll if admitted.4 Perversely, the students
most likely to be rejected are the more successful ones, since they are the most likely to have better options elsewhere.
Both statistical and anecdotal evidence suggests that colleges do in fact act upon these
incentives, with some schools going so far as to hire outside consultants to assist them in
maximizing their student yield.6 Although yield is not a central component of U.S. News and
World Report’s law school ranking algorithm, the algorithm contains other, weightier components that allow
3 These publications rank academic and professional programs for various degree granting programs, using different
formulas. See, e.g., America’s Best Graduate Schools, U.S. NEWS & WORLD REP., (2007),
http://www.usnews.com/usnews/edu/grad/rankings/law/lawindex_brief.php; Best 361 Colleges,
PRINCETON REV. (2007) available at http://www.princetonreview.com/college/research/rankings/rankings.asp (last
visited Apr. 15, 2007).
4 Daniel Golden, How Colleges Reject the Top Applicants – and Boost Their Status, WALL ST. J., May 29, 2001, at
for strategic rank-maximizing action on the part of law schools. This Paper explores the impact
of U.S. News and World Report’s ranking mechanism on law schools, with a focus on the
admissions process. Since various characteristics of an admitted class—such as Law School
Admissions Test (LSAT) scores and Undergraduate Grade Point Average (UGPA)—are
significant factors in a school’s rank, law schools have an incentive to act strategically with
respect to these variables in order to maximize their rank.
This Paper uses both statistical and case study evidence to show that some law schools do
in fact act on these incentives, implicitly introducing rank into their admissions considerations.
The Paper will show that some law schools weight certain admissions variables (LSAT and
UGPA) in a way that is commensurate with maximizing their rank. These law schools give
LSAT scores a heavier weight than their ability to predict success in law school warrants, owing to the LSAT’s
heavy impact on rank. One of the consequences of this skewed admissions policy is a bias
against minority students, who tend to perform poorly on the LSAT when compared to non-
minorities.7 Additionally, I show that law schools provide and deny information to prospective
students in a manner that is likely to maximize their rank, by maximizing their applicant pool—
another component of U.S. News’s ranking algorithm.8
Although numerous scholars have criticized the validity of U.S. News law school
rankings,9 there are relatively few systematic empirical studies available. A number of noted
7 See generally William C. Kidder, Does the LSAT Mirror or Magnify Racial and Ethnic Differences in Educational
Attainment?: A Study of Equally Achieving “Elite” College Students, 89 CAL. L. REV. 1055, 1058 (2001) (analyzing
minority achievement on the LSAT test).
8 More specifically, the ratio of accepted to applied students is a component of rank. By increasing the denominator
of this ratio (the number of applicants), a law school effectively lowers its acceptance rate and thereby improves this component of its rank. See
generally America’s Best Graduate Schools U.S. NEWS AND WORLD REP. (2007)
http://www.usnews.com/usnews/edu/grad/rankings/about/08law_meth_brief.php (last visited Apr. 15, 2007).
9 See, e.g., Brian Leiter, Measuring the Academic Distinction of Law Faculties, 29 J. LEGAL STUD. 451, 452 (2000)
(listing a number of flaws in U.S. News’s ranking methodology); Richard A. Posner, Law School Rankings, 81 IND.
L.J. 13, 22 (2006) (providing additional critiques of U.S. News’s methodology, and suggesting several
improvements); Jeffrey Stake, The Interplay Between Law School Rankings, Reputations, and Resource Allocation:
exceptions include Sauder and Lancaster’s study on the impact of rank on law school behavior.10
Sauder and Lancaster focus on the impact of rank on prospective students, arguing that students
take rank into account when applying to schools, as evidenced by a correlation between school
rank and number of applicants. A recent study by Stake and Alexeev examines student and law
school reactions to rank,11 finding that rank affects the quality of students applying to law
schools as well as a law school’s reputation.12 This Paper’s investigation into whether law
schools act strategically with respect to rank in admissions can be seen as a natural extension of
this literature, with a focus on adverse effects of U.S. News rankings on law schools’ admissions
incentives, and law schools’ reactions to these incentives.
The Paper proceeds in four parts. Part I examines the impact of law school rankings on
law schools, faculty, and students in a general manner, showing that law schools, as well as law
school administrators, have good reason to care about their rank. Part II introduces and explains
the selectivity portion of U.S. News and World Report’s law school ranking algorithm, which
includes various measures related to the law school admissions process. I discuss possible
methods by which law schools can act strategically within this category in order to maximize
their rank, and bring statistical and case study evidence to show that some law schools do in fact
maximize rank through admissions. Part II concludes with a discussion of the implications of
rank-maximizing admissions, with a focus on minority candidates. Part III discusses strategic
manipulations available to law schools outside of the admissions process, with anecdotal
evidence suggesting that such manipulations take place. Part IV suggests a number of approaches
Ways Rankings Mislead, 81 IND. L.J. 229, 233-241 (2006) (criticizing U.S. News’s rankings as having a negative
impact on both prospective students and law schools).
10 See generally Michael Sauder and Ryon Lancaster, Do Rankings Matter? The Effects of U.S. News & World
Report Rankings on the Admissions Process of Law Schools, 40 LAW & SOC’Y REV. 105, 105-134 (2006).
11 Jeffrey E. Stake and Michael Alexeev, Who Responds to U.S. News & World Report’s Law School Rankings?,
IND. LEGAL STUD. RESEARCH PAPER NO. 55 (2006), available at http://ssrn.com/abstract=913427 (last visited Apr.
15, 2007). The article is in draft form, with some robustness tests still being conducted.
12 Reputation as measured by U.S. News surveys.
available to law schools in order to combat the negative impact of U.S. News rankings, as well as
ways U.S. News can minimize incentives for strategic action on the part of law schools.
I. Why Do Law Schools Care? (Do Law Schools Care)?
The answer to this question seems to be straightforward: they care very much. Law school
administrators and faculty refer to rankings on a regular basis, in both official and unofficial
capacities. Additionally, U.S. News actively promotes use of its rankings, claiming that they are
closely correlated with earnings of law school graduates.13 In the rare event that a prospective
student has not availed himself of U.S. News’s advice prior to applying to law schools, he is
likely to be referred to their ranking system by law schools themselves, some of whom refer to
U.S. News rankings openly in their promotional material.14
Law schools have attempted to deal with the question of rank on a number of occasions.
In a letter signed by the deans of over one hundred law schools, and published on the Law
School Admissions Council’s web page, the question of rank is “officially” addressed. The letter
claims “ranking systems are inherently flawed because none of them can take your special needs
and circumstances into account.”15 The letter then proceeds to list a number of factors that are
“excluded entirely or severely undervalued by all of the numerical ranking systems.” As an
example of such a system the letter cites U.S. News’s ranking methodology.16 Along similar
lines, though in a less official capacity, Jane Bahls published an article in the American Bar
Association sponsored magazine, Student Lawyer, which included a host of anecdotal evidence
indicating students and faculty care intensely about their school’s rank.17
Despite the “official” positions expressed in the letter from law school deans and Jane
Bahls’ article, most law schools are more than willing to refer to rank when they believe it
13 See generally, Anne McGrath, U.S. NEWS AND WORLD REP. Ultimate Guide to Law Schools, Sourcebook (2004).
14 See infra p. 9.
15 Letter from Law School Deans, Deans Speak Out About Rankings (2007),
http://www.lsac.org/LSAC.asp?url=lsac/deans-speak-out-rankings.asp (last visited Apr. 15, 2007).
17 Jane Bahls, Ranking the Rankings, 31 STUDENT LAW. 7 (2003), available at
http://www.abanet.org/lsd/stulawyer/mar03/rankinggame.html (last visited Apr. 15, 2007).
benefits them to do so. Law schools explicitly refer to ranking in promotional material sent to
prospective students, with comments such as: “U.S. News & World Report ranked Mercer first in
the country for its legal writing program”18 and “U.S. News and World Report awards the School
of Law a high national ranking and ranks our Clinical Education Program 3rd in the country.”19
Although some law schools do not mention rankings at all, the above examples are far
from anomalous. Many law schools refer to rank in either mail-based promotional materials or
on their websites.20 Although most top-ranked law schools (top 25) do not refer to U.S. News by
name, nearly all refer to themselves as a “top ranked law school.”21
Some law school faculty and students care so intensely about U.S. News rank that they
actively put pressure on senior administrators to attempt to maximize the school’s rank. The
most extreme case of such pressure occurred as recently as April of 2006, in the University of
Houston Law Center. The then-Dean of the law school, Professor Nancy Rapoport, resigned,
later indicating that her resignation was partially motivated by faculty and student protests over a
significant drop in the school’s U.S. News and World Report ranking.22 Although the University
of Houston’s Law Center flourished both administratively and academically under Professor
Rapoport’s leadership,23 the negative impact on the law school’s rank was sufficient to incite the
ire of the law school’s student population.24
18 See Mercer Law School promotional material, (2006).
19 See Washington University in St. Louis School of Law promotional material, (2006).
20 See, e.g., Law School Cracks Top 10 in U.S. News Graduate Program Rankings, DUKE UNIV. NEWS AND COMM.,
(2007), available at http://www.dukenews.duke.edu/2007/03/usnews_rankings.html (last visited Apr. 15, 2007);
Legal Writing Program Ranked Number One and Law School Ranked in the Top 100 by U.S. News,
http://www.law.mercer.edu/ (last visited Apr. 15, 2007); Mason Law Climbs to 34th in U.S. News Rankings, George
Mason University School of Law Current News, http://www.law.gmu.edu/currnews/story.php?ID=721.
21 See e.g., University of Michigan Promotional Material (2006); Duke University Promotional Material,(2006).
22 Rick Casey, U.S. News Skews Ranking of UH Law School, HOUS. CHRON., Apr. 18, 2006.
23 The University of Houston’s Law Center had one of the better law faculties in the state. Leiter Reports,
http://leiterlawschool.typepad.com/leiter/ (April 17, 2006) (last visited Apr. 15, 2007).
In a recent New York Times Article, Alex Wellen, who attended a symposium on law
school rankings at Indiana University Law School, discusses some of the actions taken by
law school administrators in order to increase their rank.25 According to Wellen, schools
misreport expenses and library size, open part-time programs, and hire recent graduates unable to
find a job on the legal market, all for the sake of a higher U.S. News rank.26
In sum, there is a mass of anecdotal evidence suggesting that law schools care intensely
about, and wish to increase, their U.S. News rankings.27 Yet the question remains, how far are
law schools willing to go in order to maximize their rank—and at what cost? The following parts
of this Paper will explore some of the more sophisticated strategies available to rank-anxious law
schools, employing a variety of statistical tools. Some of the normative implications of these
rank-manipulating strategies will also be explored, followed by a discussion of possible remedies.
25 Indiana University Law School, SYMPOSIUM ON THE NEXT GENERATION OF LAW SCHOOL RANKINGS (2006).
27 The National Jurist has also published a recent article on law school rankings. A number of law school professors
are quoted as having accused other law schools of attempting to game the rankings, see Rebecca Luczycki, The
Rankings Race, How Far Will Law Schools Go to Win?, THE NAT’L JURIST, January 2007.
II. Ranking Methodology and Rank-Maximizing Strategies in Admissions
While rank is often a component in a person’s college decision, it is not always a
determining factor. Although certain colleges may be considered “better” than others, each
college is truly different, with many of the most popular ranking mechanisms dividing colleges
into different categories, and providing intra-category rankings.28 Law schools, by contrast, are
required to teach very similar subjects in order to receive the approval of the American Bar
Association, the institution that accredits law schools. As a consequence, law school curricula are
fairly homogeneous, with few exceptions. A single ranking dimension for all law schools is
therefore not as unreasonable as it would be for colleges.29
Unlike many other graduate and professional programs,30 law school rankings are
dominated by a single publication, U.S. News and World Report, resulting in a lack of
competing ranking mechanisms.31 Fortunately, U.S. News is very open about its ranking
methodology, advertising the exact algorithm it uses online.32 The algorithm is composed of four
major categories: Quality Assessment, Selectivity, Placement Success, and Faculty Resources,
each with a number of subcategories. Each subcategory is given a weight in the ranking, with the
total sum of all subcategory weights equal to one.33
28 U.S. News separates colleges into Research Universities and Liberal Arts programs, as well as by region.
Additionally, business and engineering programs are ranked separately from other colleges, with engineering
programs themselves having separate rankings. See U.S. News and World Rep., available at
http://www.usnews.com/usnews/edu/college/rankings/rankindex_brief.php (last visited Apr. 15, 2007).
29 A number of field-specific rankings for law schools are available, however, some of which are discussed later in
this note. Although U.S. News publishes specialty rankings for law schools, these are more of an afterthought, as
opposed to colleges where separate rankings are the only rankings available.
30 Other academic and professional programs often enjoy a number of competing ranking methodologies. Economics
PhD programs are ranked by U.S. News, as well as by Econphd, which includes references to a number of different
ranking methodologies. Econphd, available at http://www.econphd.net/guide.htm (last visited Apr. 15, 2007).
31 See Alex Wellen, The $8.78 Million Maneuver, N.Y. TIMES, July 31, 2005, at Section 4A.
32 Law Methodology, America’s Best Graduate Schools U.S. NEWS AND WORLD REP. (2007)
http://www.usnews.com/usnews/edu/grad/rankings/about/08law_meth_brief.php (last visited Apr. 15, 2007).
The ranking algorithm for school i can be represented by the equation

Ranki = 0.25(PA) + 0.15(LJ) + 0.125(LSAT) + 0.10(UGPA) + 0.025(AR) + 0.18(ER) + 0.02(·) + 0.1125(·) + 0.03(·) + 0.0075(·),

where the terms weighted 0.18 and 0.02 make up the placement success category (0.20 in total)
and the terms weighted 0.1125, 0.03, and 0.0075 the faculty resources category (0.15 in total),
with the coefficient preceding each variable indicating its weight in the rankings.34 For example,
the variable PA (Peer Assessment) is given a weight of 0.25, or twenty-five percent of a
school’s total rank. The criteria examined in this Part fall under the category of selectivity, and
include LSAT, UGPA and AR. U.S. News explains these variables as follows:
Selectivity (weighted by .25)
• Median LSAT Scores (.125) (LSAT)
The calculated median of the scores on the Law School Admissions Test of the 2005
entering class of the full-time J.D. program. The calculated median is the midpoint of the
25th and 75th percentile scores.
• Median Undergrad GPA (.10) (UGPA)
The calculated median of the undergraduate grade point average of the 2005 entering
class of the full-time J.D. program. The calculated median is the midpoint of the 25th and
75th percentile scores.
• Acceptance Rate (.025) (AR)
The proportion of applicants to the full-time program who were accepted for entry into
the 2004 entering class.35
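The selectivity arithmetic above can be sketched in code. This is a simplified illustration, not U.S. News’s actual computation: U.S. News normalizes each variable across all schools before weighting, a step omitted here, and the negative sign on the acceptance rate is only a stand-in for the fact that a lower acceptance rate improves rank.

```python
# Sketch of the selectivity category (total weight .25), using the weights
# quoted above: .125 LSAT, .10 UGPA, .025 acceptance rate. U.S. News's
# cross-school normalization is omitted; the negative sign on AR reflects
# that a lower acceptance rate improves rank.

def calculated_median(p25, p75):
    """U.S. News's 'calculated median': the midpoint of the 25th and
    75th percentile scores."""
    return (p25 + p75) / 2

def selectivity_score(lsat_25, lsat_75, ugpa_25, ugpa_75, accepted, applied):
    lsat = calculated_median(lsat_25, lsat_75)   # weight .125
    ugpa = calculated_median(ugpa_25, ugpa_75)   # weight .10
    ar = accepted / applied                      # weight .025
    return 0.125 * lsat + 0.10 * ugpa - 0.025 * ar

# Hypothetical school: LSAT 160/166, UGPA 3.4/3.8, 900 admitted of 3,000
score = selectivity_score(160, 166, 3.4, 3.8, 900, 3000)
```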
A. Weighting of LSAT and UGPA
Two components featured prominently in both U.S. News’s rankings and law schools’
admissions considerations are the LSAT and UGPA. Law schools ostensibly use the LSAT and
UGPA, in conjunction with a host of other variables (such as recommendations, personal
statements, resumes) as predictors or proxies for a prospective student’s future success in law
school, which this Paper measures simply as law school GPA.36 Although different law schools
give different weights to different variables in their admissions decisions, with some weighing
34 See infra Part II (discussing the exact meaning of each coefficient).
35 Law Methodology, America’s Best Graduate Schools U.S. NEWS AND WORLD REP. (2007)
http://www.usnews.com/usnews/edu/grad/rankings/about/08law_meth_brief.php (last visited Apr. 15, 2007).
36 Except in cases where other measures of success are also available, such as law school rank if measured
independently of GPA. In these cases I include, or at least address, both measures.
“immeasurable” variables such as letters of recommendation more heavily than others, nearly all
schools give LSAT and UGPA considerable weight.37
The LSAC publishes annual reports measuring how successful LSAT scores and UGPA
are in predicting future law school performance of prospective students. Although there is some
variation in predictive ability across years, it is small, with LSAT being a slightly better proxy
for success in law school than UGPA.38 Additionally, there is significant variation in predictive
ability between different law schools, though both LSAT and UGPA are considered to be good
predictors of success in nearly all schools, indicating that their prominent use as admissions
criteria appears reasonable.39
Within U.S. News’s ranking algorithm, the LSAT is given a weight of twelve and a half
percent, while UGPA is given a weight of ten percent of a school’s total rank.40 This difference
in weights in the ranking is commensurate with the difference in predictive value associated with
each variable: LSAT is a better predictor of law school success than UGPA by approximately
twenty-five percent.41 Yet as the LSAC data indicates, proxy values for LSAT and UGPA as
predictors of law school GPA are far from homogenous across schools: for some law schools
LSAT is a better predictor of law school GPA, and in others UGPA is a better predictor.42
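The rough proportionality claimed here can be checked arithmetically from the figures quoted in the text (weights of .125 and .10; average correlations of roughly 0.34 and 0.28 from the LSAC data in note 38):

```python
# Ratio of U.S. News weights for LSAT vs. UGPA
weight_ratio = 0.125 / 0.10     # 1.25: LSAT weighted 25% more heavily

# Ratio of average predictive validity (LSAC correlations quoted in note 38)
validity_ratio = 0.34 / 0.28    # ~1.21: LSAT roughly 21% better as a predictor

# The two ratios are close, so on average the ranking weights roughly track
# predictive value; the text's point is that the match fails school by school.
```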
37 See LSAC predictive model, http://officialguide.lsac.org/search/cgi-bin/results.asp?PageNo= (last visited Apr. 15,
2007) (indicating that admissions to law schools can be predicted within a very narrow confidence interval using
only LSAT and UGPA as inputs).; see also infra p. 27-28 (discussing criticisms of this admissions practice).
38 Specifically, the LSAC finds that the correlation between LSAT and law school GPA ranges from 0.04 to 0.56,
with a median of 0.34. LSAC INFO. BOOKLET (2007), 27, available at http://lsac.org/pdfs/2007-
2008/Infobooktext2007web.pdf (last visited Apr. 15, 2007). Correlation data for UGPA and law school GPA is also
available for some years, with a mean correlation of approximately 0.28. See Predictive Validity of the LSAT: a
National Summary for the 2001-2002 Correlation Studies, LSAC RESEARCH REP. SERIES, 2001. Although the data
are for 2001-2002, the report indicates that there is little variance between years. For a more complete discussion of
the data, see the appendix.
39 See id.
40 Supra p. 11.
41 For mean scores, see supra note 38.
42 See Graph 1, data from Predictive Validity of the LSAT: a National Summary for the 2001-2002 Correlation
Studies, LSAC RESEARCH REP. SERIES, 2001.
The graph below displays the distributions of predictive values for LSAT and UGPA,
with LSAT represented in purple. The horizontal axis represents correlation of LSAT and UGPA
with law school GPA, on a scale of zero to one, with one indicating perfect correlation.43 The
vertical axis represents the number of schools for which that specific correlation exists. For
example, the graph indicates that there are five law schools in which the correlation between
UGPA and law school GPA is 0.175. The graph clearly shows that while LSAT is generally
more closely correlated with law school GPA, there are some schools for which UGPA is more
closely correlated with law school GPA.
If UGPA is a better predictor of performance in a specific law school than LSAT, a law
school not acting strategically will weight UGPA at least as heavily as LSAT scores in
admissions considerations. Generally, schools that are not acting strategically will weight LSAT
43 Since the highest correlation for a single variable is about 0.6, correlations beyond that point are omitted from the graph.
Figure 1: Distributions of LSAT and UGPA predictive ability (horizontal axis: correlation with first-year law school grades; vertical axis: number of schools).
and UGPA in admissions approximately according to their predictive values of a student’s
success in law school.44 This leads directly to a first hypothesis of law school strategic behavior:
If a law school is acting strategically, it will weight LSAT scores disproportionately in its
admissions considerations. By giving LSAT a heavier weight, it will increase its ranking (since
LSAT is given a heavier weight in U.S. News’s ranking algorithm) at the expense of student quality.
In order to test this hypothesis a simple statistical model will be employed. Since two
different measures—LSAT and UGPA—use two different scales—120-180 and 0-4.33,
respectively—both variables must be converted to a single scale in order to facilitate a
comparison. This conversion is done through standardizing each variable around its mean,
resulting in both measures being evaluated on a single scale.
The statistical model used for the actual comparison is a logistic regression. The model
proposes a relationship between a student’s admissions probability, and his LSAT and UGPA.
Although both variables affect the probability a student will be admitted, they are each given
different weights. The purpose of the model is simply to discern and allow for the comparison of
the different weights a specific law school gives to UGPA and LSAT in its admissions
considerations. The probability of admission to a given law school (Pt) is placed on the left hand
side of the equation, and the standardized versions of LSAT and UGPA are placed on the right
hand side. The coefficients of the standardized variables are the weights given to each variable,
which are comparable due to the standardization of both variables.
Formally, with LSAT = X1 and UGPA = X2:45

ln(Pt / (1 − Pt)) = α0 + α1((X1 − X̄1)/σ1) + α2((X2 − X̄2)/σ2)
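The comparison can be sketched numerically as follows. The data here is synthetic, generated so that the hypothetical school weights LSAT twice as heavily as UGPA; an actual analysis would use applicant-level admissions outcomes such as those reported in the ABA-LSAC Official Guide. The point of the sketch is only that, once both inputs are standardized, the fitted coefficients are directly comparable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
lsat = rng.normal(158, 6, n)    # raw LSAT scores (120-180 scale)
ugpa = rng.normal(3.4, 0.3, n)  # raw undergraduate GPAs (0-4.33 scale)

def standardize(x):
    return (x - x.mean()) / x.std()

z_lsat, z_ugpa = standardize(lsat), standardize(ugpa)

# Simulate a school that weights LSAT twice as heavily as UGPA
true_logit = -0.5 + 1.0 * z_lsat + 0.5 * z_ugpa
admitted = rng.random(n) < 1 / (1 + np.exp(-true_logit))

# Fit the logistic regression by plain gradient ascent on the log-likelihood
X = np.column_stack([np.ones(n), z_lsat, z_ugpa])
y = admitted.astype(float)
beta = np.zeros(3)
for _ in range(5000):
    mu = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (y - mu) / n

alpha_lsat, alpha_ugpa = beta[1], beta[2]
# With standardized inputs, alpha_lsat / alpha_ugpa estimates the relative
# weight the school gives the two variables (close to 2 here by construction).
```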
44 This will not necessarily result in precise measurements: there are a number of variables such as extracurricular
activities etc. which law schools may also take into consideration and which are omitted from this analysis. Yet
despite this fact, as I will argue subsequently, there is no real reason to believe that any of these variables are
correlated with a higher LSAT score and a lower UGPA.
Although a large number of law schools are tested over time, individual data is presented only
for Brigham Young University (BYU),46 with other schools represented in aggregate form.
Since it is reasonable to assume that a school will form its opinion regarding which
students are likely to succeed within its framework over time, I will use an average of LSAT and
UGPA’s proxy values at BYU over a number of years.47 These proxy values are a measure of the
effectiveness of LSAT and UGPA in predicting success in law school of BYU students. Two
measures of success are given: grades at the end of the first year of law school, and grades at the
end of the third year of law school.48
On average, the correlations between law school rank and proxies for BYU are as follows:
Table 1: Predictive Value of LSAT and UGPA for Success at Brigham Young University49
LSAT and first year ranking .744
UGPA and first year ranking .740
45 Specifically, the standardization subtracts from each observation Xi from the school’s applicant pool the mean of
that variable, X̄i, and divides by its standard deviation, σi. The coefficients of the standardized variables, αi, are
what we wish to compare, with a larger coefficient meaning more weight given to that variable.
46 Brigham Young University is an attractive school to present since correlation data is available for a large number
of years between law school grades and both LSAT and UGPA independent of the LSAC study. See generally D.A.
Thomas, Predicting Law School Academic Performance from LSAT Scores and Undergraduate Grade Point
Averages: A Comprehensive Study, 35 ARIZ. ST. L.J. 1007 (2003). Since UGPA and LSAT are of approximately
equal predictive value for Brigham Young, it is clearly not a representative, or average, school. Yet this Paper’s
argument is not that all schools act strategically with respect to rank—indeed, on average, admissions weights conform to the
variables’ proxy values. Rather, this Paper argues that some schools, such as Brigham Young, act strategically.
47 This is largely a formality, since variance in predictive ability over time is extremely low. Id. at 1017. These
predictive values are higher than average, since these are not simple correlations between LSAT or UGPA and law
school grades, but rather correlations with law school class rank, a more sophisticated measure of law school success. See id. at
48 Since variation between these measures is extremely small, the relative merits of each will not be discussed.
49 Id. at 1018.
LSAT and ranking at graduation .730
UGPA and ranking at graduation .733
Combined LSAT and UGPA index at graduation .744
Although there are minute differences between LSAT and UGPA predictive power, they
are not statistically significant.50 Therefore, if BYU is not acting strategically in admissions with
respect to rank, it should weight LSAT and UGPA at approximately the same level in its
admissions considerations. The results of the logistic regression show otherwise: Brigham Young
weights LSAT scores nearly twice as heavily as it weights UGPA, this despite their nearly
identical value in predicting law school success.51
Table 2: Comparing weights of LSAT and GPA in admissions, Brigham Young 200552
Table 3: Comparing weights of LSAT and GPA in admissions, Brigham Young 2004
The bolded variables in each table under the “coefficient” column are the weights given
to each variable by Brigham Young.53 It is easy to see that in both years BYU weights LSAT
50 Id. at 1018.
51 Data is computed on an annual basis. I present data for the years 2004-2005.
52 Data from Wendy Margolis, ABA-LSAC Official Guide to ABA-Approved Law Schools (1997-2006). The
available data is categorical, with few categories. To overcome this problem, I bias the analysis against my own
hypothesis by assuming that LSAT scores always fall at the lowest point of their category. For example, an LSAT score falling into
the category of 160-165 is assumed to be 160.
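The conservative recoding described in note 52 can be sketched as follows; the band format shown is illustrative, and the Official Guide’s actual categories may differ.

```python
def lsat_lower_bound(band):
    """Map a reported LSAT band such as '160-165' to its lowest score.
    Assuming the lowest value in the band biases the analysis *against*
    finding a heavy LSAT weight, as described in note 52."""
    low, _, _high = band.partition("-")
    return int(low)

# Examples with illustrative bands
assert lsat_lower_bound("160-165") == 160
assert lsat_lower_bound("155-159") == 155
```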
53 The standard errors are a measure of each coefficient’s variation. Both coefficients exhibit low variation, making a
simple comparison between the coefficients reasonable. The Z statistic is a measure of the statistical significance of the
coefficient, computed as the ratio of the coefficient to its standard error.
nearly twice as heavily as it weights UGPA. This is a strong indication of strategic action, since
UGPA and LSAT have nearly identical predictive values for success at BYU.54
Comparing weights of LSAT and UGPA in admissions decisions to their predictive
values is difficult for many schools, since detailed data on predictive values is rarely available.
Consequently, this Paper uses aggregate data, resulting in a more general type of conclusion—it
is possible to discern whether there are some law schools acting strategically, though it is
difficult to point to strategic action on the part of any individual school.
In order to arrive at these aggregate conclusions, data from a random sample of law
school admissions decisions is examined, as per Brigham Young in tables two and three.55 These
law school admissions decisions are then compared to an aggregate distribution of predictive
values for UGPA and LSAT for all schools.56 The results of this comparison are displayed in a
graph below. The thick black line in the graph indicates the ratio of the average predictive ability
of UGPA to that of LSAT for law schools, approximately 0.71. The two dotted lines above and below the thick line indicate an
approximate ninety percent confidence interval.57 Given these distributions, if law schools are
not acting strategically approximately ninety percent of them should fall within the two dotted lines.
54 See infra p. 20.
55 Schools not mentioned elsewhere in the Note include: George Mason U., American U., Duke U., Campbell U.,
Cardozo U., Catholic U., Arizona U., Arizona State, Baltimore U., Ave Maria, Boston U., Buffalo U., UC Davis,
UC Hastings, UCLA, Yale, U. Cincinnati, Queens, U. Colorado, U Connecticut, Creighton, Denver, Detroit, Drake,
Florida (Levin), Florida State, Florida A&M, George Washington U., U. Georgia, Georgia State, Golden Gate, and
Gonzaga University. Schools were largely selected on an alphabetical basis, with Yale examined since it is one of
the few highly ranked law schools that make their admissions data freely available.
56 A graphical representation of this data is available in graph 1, supra p. 14.
57 The confidence interval is for the ratio distribution, UGPA/LSAT ~ N(0.71, 0.01), which is a (conservative)
approximation, computed from distributions made available by LSAC. See Predictive Validity of the LSAT: a
National Summary for the 2001-2002 Correlation Studies, LSAC RESEARCH REP. SERIES 6-8 (2001). Finding the
exact ratio of two normal distributions with non-zero correlation is a complex process with questionable advantages
over approximate methods. See generally T. Pham-Gia et al., Density of the Ratio of Two Normal Random Variables
and Applications, COMM. IN STAT., THEORY AND METHODS 1568, 1568-1591 (2006).
Although a number of law schools do fall between the dotted lines, there are a large
number of outliers—far more than the expected ten percent. This indicates that it is very likely
that a number of law schools are acting strategically. Due to the aggregate nature of the data,
however, it is difficult to identify specific transgressors, since the model predicts that a small
number of law schools should fall outside the dotted lines.
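The outlier test above reduces to a short calculation. In the sketch below I read the footnoted ratio distribution as having mean 0.71 and variance 0.01 (so a standard deviation of 0.1); that reading, and the per-school ratios, are assumptions for illustration rather than the Paper's data:

```python
# Assumed (see lead-in): the cross-school ratio of the UGPA predictive value
# to the LSAT predictive value is roughly Normal(mean=0.71, sd=0.1).
MEAN, SD = 0.71, 0.1
Z90 = 1.645                          # two-sided ninety percent normal quantile
lo, hi = MEAN - Z90 * SD, MEAN + Z90 * SD

# Invented per-school ratios of the UGPA weight to the LSAT weight, as
# recovered from admissions regressions like the one for Brigham Young above.
ratios = [0.70, 0.35, 0.68, 0.40, 0.75, 0.30, 0.72, 0.45, 0.66, 0.38]
outliers = [r for r in ratios if not lo <= r <= hi]
share = len(outliers) / len(ratios)  # absent strategy, expect roughly 0.10
```

Here half of the invented schools fall outside the ninety percent band, the kind of excess over the expected ten percent that the graph above displays.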
Such strategic action is disconcerting for a number of reasons. Firstly, this system is
blatantly unmeritocratic, resulting in inferior students being preferred for apparently arbitrary
reasons. Simply put, this would result in less-qualified lawyers.
Secondly, and possibly more disturbingly, since many minorities perform poorly on the
LSAT when compared to UGPA, strategic manipulation on the part of law schools that results in
additional weight being given to the LSAT may inadvertently discriminate against minority applicants.
Figure 2: The black line indicates the average predictive ratio of UGPA to LSAT (2006). (Horizontal axis: individual law schools; vertical axis: ratio, UGPA / LSAT.)
Longstanding empirical evidence suggests that minority students perform poorly on
the LSAT compared to UGPA. This means that for a given UGPA and major, minority students
do not do as well on the LSAT as non-minority students.58 Attempts to explain this phenomenon
abound, including the suggestion that the LSAT may be a ‘culturally biased’ test, biasing against
students from minority backgrounds.59
Given the poor performance of minorities on the LSAT, weighting the LSAT more
heavily in admissions considerations will result in fewer minorities being admitted to law school.
This would occur despite the fact that these minority students may be as qualified or more
qualified than non-minority students with higher LSAT scores and lower UGPAs. Indeed the
University of Michigan’s Law School, famed for its affirmative action program,60 experienced a
dramatic drop in its U.S. News rank in the past two decades.61 One might speculate that these two
factors—Michigan’s affirmative action program and minorities’ tendency toward lower LSAT
scores—caused the University of Michigan’s drop in U.S. News rankings.
Of course, this hypothesis could be worded more positively: If law schools do react to
rank in their admissions decisions, changing the weighting of LSAT and UGPA in U.S. News’s
ranking algorithm may result in some law schools admitting more minority students without loss
of student quality.62
58 See generally William C. Kidder, Does the LSAT Mirror or Magnify Racial and Ethnic Differences in Educational
Attainment?: A Study of Equally Achieving “Elite” College Students, 89 CAL. L. REV. 1055, 1058, (2001); see also
Jeffrey Stake, Minority Admissions To Law School: More Trouble Ahead, and Two Solutions, in THE LSAT, U.S.
NEWS & WORLD REP., AND MINORITY ADMISSIONS (2005).
59 Kidder, supra note 58, at 1058-1059.
60 Upheld in Grutter v. Bollinger, 539 U.S. 306 (2003).
61 See Top Law Schools, available at http://www.top-law-schools.com/michigan-law-school.html (last visited Apr.
15, 2007) (describing Michigan’s drop from being ranked third to tenth).
62 Note that this would not necessarily be the case on average, since U.S. News’s weighting of LSAT scores is fairly
close to their average predictive value of law school performance. See Predictive Validity of the LSAT: a National
Summary for the 2001-2002 Correlation Studies, LSAC RESEARCH REP. SERIES (2001); supra p. 12.
Since most law schools consider factors other than LSAT scores and UGPA when
making admissions decisions, the above results are not perfect; such factors include letters of
recommendation, extracurricular activities, honors and awards, etc. Given the nature of these
omitted factors, however, it is far more likely that they would strengthen the results than weaken
them. The omitted variables would only be a problem if they are correlated with LSAT scores
and not with UGPA, since that would allow for the possibility that the observed increased weight
given to LSAT scores may actually be weight given to these variables. For example, if the
quality of a letter of recommendation is closely correlated with a student’s LSAT score, the
observed increase in weight given to the LSAT may in fact be weight given to this letter of recommendation.
Although most of these omitted variables are difficult to measure, it is unlikely that they
are correlated with a test measuring “acquired reading and verbal reasoning skills.”63 On the
contrary, it is far more likely that one’s grades—a measure of achievement over time—are more
closely correlated with extracurricular activities and recommendations, which presumably also
measure long-term accomplishments.
A more convincing argument would suggest that LSAT is correlated with undergraduate
institution quality, and that law schools are actually selecting applicants from high-quality
colleges who happen to have high LSAT scores. On its face, this argument seems reasonable,
since many colleges take other standardized tests such as the SAT and ACT into their admissions
considerations.64 In order for this argument to effectively explain the above observations in the
data, however, there would also have to be no correlation between quality of undergraduate
institution and grades. This does not appear to be the case, since there is considerable evidence
63 See About the LSAT, http://www.lsac.org/LSAC.asp?url=lsac/about-the-lsat.asp (last visited Apr. 15, 2007).
64 See generally College Board, www.collegeboard.com. Although the LSAT and SAT are meant to measure
different qualities, one might argue that they both measure ‘test taking ability.’
suggesting that “better” institutions often have increased grade inflation, 65 meaning that a person
coming from a “higher quality” undergraduate institution is also likely to have higher grades. In
conclusion, although it is possible that omitted variables explain law schools’ differential
weighting of LSAT and UGPA, this is highly unlikely. In order to further ensure that this is not
the case, another possible instance of strategic action in law school admissions will be examined.
If law schools are found to act strategically with respect to rank in other contexts, arguments
such as those made above become less likely.66
B. Differential weighting of LSAT and UGPA for different score ranges
Another method of rank manipulation available to law schools is differentially weighting
UGPA and LSAT in different ranges. Since a school’s U.S. News rank relies only on its twenty-fifth
and seventy-fifth percentile LSAT and UGPA scores, students scoring at or slightly above
those points are far more significant than students scoring outside of those ranges. For example,
for a school with a historical seventy-fifth percentile LSAT score of 165, whether a student scores
170 or 172 on the LSAT is probably not very significant for that school as far as rank is
concerned. However, whether a student scores a 164 or a 166 might be. Thus if the school is rank
maximizing, it will give less weight to the difference between 170 and 172 than it would to the
difference between 164 and 166, despite their numeric equivalence.67 This leads to a second
hypothesis about law school strategic rank maximizing action: If a law school is rank
65 George D. Kuh and Shouping Hu, Unraveling the Complexity of the Increase in College Grades from the Mid-
1980s to the Mid-1990s, 21 EDUC. EVALUATION & POL’Y ANALYSIS 297, 297-320 (1999). Additionally, even if it
were the case that LSAT is a proxy for quality of undergraduate institution, this would still not explain why certain
law schools continue to use it, since it is clearly not an effective proxy for success in some law schools.
66 The following can be seen as a type of instrument: although LSAT may be correlated with quality of
undergraduate institution, the following hypothesis associated with differential weighting of scores is not. Thus if
law schools act strategically in this context as well, LSAT score correlation with quality of undergraduate institution
is clearly not a sufficient explanation for placing an increased weight on the LSAT.
67 Although it is possible that the predictive ability of LSAT and UGPA for law school success is not homogeneous
(prediction is better in certain ranges), this is not a significant problem here, since scores from all ranges are
maximizing, it will give a heavier weight to scores at or near its twenty-fifth and seventy-fifth percentile points.68
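The hypothesis can be made concrete as a piecewise marginal weight on LSAT points. The cut-offs below echo the Brigham Young percentile figures (161 and 166); the weight values themselves are invented for illustration only:

```python
P25, P75 = 161, 166  # reported twenty-fifth and seventy-fifth percentile scores

def strategic_lsat_weight(score):
    """Invented marginal weight a rank-maximizer might place on one more LSAT point."""
    if P25 - 2 <= score <= P75 + 2:
        return 1.0   # near a reported percentile: every point moves the rank
    if score > P75 + 2:
        return 0.1   # far above the seventy-fifth percentile: little rank payoff
    return 0.3       # far below the twenty-fifth percentile: modest payoff

# The jump from 164 to 166 gets full weight; the numerically identical
# jump from 170 to 172 gets almost none.
near = strategic_lsat_weight(165)
far = strategic_lsat_weight(171)
```

A non-strategic school, by contrast, would weight a two-point LSAT difference roughly the same wherever along the scale it occurs.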
In order to test this hypothesis, two measures are used. First, a model identical to the one
used previously to measure the weights law schools give to LSAT and UGPA in admissions will
be applied, with a small difference: a separate calculation is used for each category so that
differences in weights can be computed along different categories. This method is far from
perfect for a number of reasons; however, it will probably indicate any egregious manipulations.
Again, data are presented for a single school, Brigham Young.69
Table 4: Comparing weights above and below the twenty-fifth and seventy-fifth percentiles for Brigham Young,
LSAT (161, 166), UGPA (3.52, 3.86)70
(Rows: LSAT < 161; LSAT > 166; UGPA < 3.52; UGPA > 3.86. One row was dropped due to collinearity (not enough categories).)
The results for BYU are weak at best, with very similar LSAT and UGPA weights for all
categories. As a whole there is a large amount of variance amongst schools, with some schools
showing nearly no change in weighting (like BYU), while others show some change. Of the
schools tested, some do show significant differences in the category pre-identified as the most
68 Since the ratio of admitted to applied students is also a factor in rankings, law schools have an incentive to decline
a student they do not think will accept their offer (all else being equal). Therefore, a rank-maximizing law school
would in fact prefer to deny admission to candidates they think are “too good”—as per the student with the 172
LSAT score in the example above.
69 The categorical nature of the data decreases the value of any observations markedly, since the hypothesis itself
deals with a more nuanced change in weighting of different variables across a narrow range of scores for both LSAT and UGPA.
70 Since the data is categorical, I round away from the median in all tests. For example, if the true seventy five
percent score is 158, I will use 160, since the crucial part of my hypothesis is that the data comes from the set above
the seventy five percent (or below the twenty five percent) point, with the exact cut off point less central. The
numbers following the measure are twenty-five and seventy-five percentile points, respectively.
susceptible to manipulation: very high LSAT scores.71 The graph below shows the difference in
weights for the LSAT, with the blue line indicating average weight, and the pink line indicating
weight around the seventy-five percentile point. Although this graph seems to indicate a large
amount of strategic action, this is deceptive, since the data points represent coefficient estimates
without taking the variance of these coefficients into account. Therefore, the graph’s implications
are meaningful, but not conclusive.
Due to the limited implications of the first test resulting from low quality of data, a
second test is conducted, with higher quality data. However, the value of this test is also limited
since it uses data from 1995—a time where U.S. News rankings were at least somewhat less
pervasive. Furthermore, data is only available for a single year, and only for students who were
71 A total of twenty schools were examined, including: George Mason U., American U., Duke U., Campbell U.,
Cardozo U., Catholic U., Arizona U., Arizona State, Baltimore U., Ave Maria, Boston U., Buffalo U., UC Davis,
UC Hastings, UCLA, Yale, U. Cincinnati, Queens, U. Colorado, U Connecticut
Figure 3: Ratio of LSAT and UGPA weights: the blue line indicates average weights.
The pink line indicates weights above the seventy-fifth percentile point. All negative
weights are generated by the above-seventy-fifth points, indicating that the LSAT can
have a negative impact for students who scored “too high”.
actually admitted to law school.72 Here a slightly different methodology is applied, though the
purpose is similar: an attempt to find a break in the weighting of LSAT and UGPA at suspect cut-off points.
The following histogram shows the distribution of LSAT scores for admitted students
from one school. Schools acting strategically will have a break near the dotted lines, as well as a
small dip near the center, indicating a decrease in the number of students with LSAT scores in
between the twenty-fifth and seventy-fifth percentile points. Despite the suggestive shape of the
distribution, it provides only weak evidence of strategic manipulation, since the distribution
includes only admitted students. This means that it is possible that it is applicants, and not law
schools, who act strategically, by applying mostly to schools for which their LSAT scores fall
within the twenty-fifth to seventy-fifth percentile range. In order to effectuate a truly convincing
test, information on the distribution of applicants is also necessary.
Returning to an example, if a law school has a twenty-fifth percentile LSAT score of
160, and a seventy-fifth percentile score of 168, observing a sudden drop around the seventy-fifth
percentile score can be explained in several ways. On the one hand, it is possible that the
law school gives a lower weight to scores significantly above 168, both because such students are
not likely to attend the law school (since they are likely to have other, possibly better options),
and because the impact on rank of all scores above the seventy-fifth percentile point is equal. On
the other hand, it is possible that the sudden drop is due to students, who prefer to apply to
schools where their scores fall within the midrange, and is thus independent of strategic action on
the school’s part. That being said, this analysis is not completely devoid of value, since it strongly
implies the possibility of strategic behavior, even if it does not assure it.
72 See http://www.law.ucla.edu/sander/Systemic/SA.htm (last visited Apr. 15, 2007), the histogram presented is for
school 19 in the data set.
Of the twenty schools examined, eleven indicate the possibility of various levels of
strategic activity, with others showing little or no such indication.73 There are a number of
reasons why some schools may not take advantage of this apparent opportunity to maximize
rank. The first and most obvious is that it requires a somewhat sophisticated form of strategic
action, with different weights given to different cut-off points based on some moderately
complicated probabilistic calculations. Compare this to the relatively simple strategy observed in
the first hypothesis: weighting the LSAT more heavily. Another possible explanation is
optimism on the part of some law schools with respect to the statistics of their future
matriculants.74 Such optimism would result in law schools admitting individuals with higher
scores, hoping that their seventy-fifth percentile would conform to these scores. Returning to the
73 The sample size is small here due to the limited availability of data.
74 The University of Chicago managed to over-estimate both its twenty-fifth and seventy-fifth percentile LSAT
scores in its admissions brochures for the last two years.
Figure 4: Distribution of admitted LSAT scores: the dotted lines
represent the twenty-fifth and seventy-fifth percentile points. (Horizontal axis: standardized LSAT score.)
previous example, even though a law school’s seventy-fifth percentile LSAT score from the
previous year was 165, the law school may be optimistic, believing that its new seventy-fifth
percentile LSAT score will be 170. The law school would then give a heavier weight to an LSAT
score of 170, expecting it to be in the neighborhood of its seventy-fifth percentile score, even
though its actual seventy-fifth percentile LSAT score turns out to be lower.
Finally, regardless of whether LSAT or UGPA are weighted more heavily relative to one
another, it is clear that they are both given a heavier weight than their true predictive ability.
Even schools that seem to weight LSAT and UGPA approximately in line with their relative
predictive values often give both scores too heavy a weight in absolute terms. UGPA and LSAT
combined are far from perfect predictors of law school performance,75 yet some law schools take
little else into consideration.76
The Law School Admission Council (LSAC), the not-for-profit organization responsible
for administering the LSAT, publishes an official online guide to ABA-approved law schools.77
The Guide provides many useful services and information, including a “UGPA/ LSAT Filter,”78
indicating a ninety-five percent confidence interval for admissions, based solely on UGPA and
LSAT. This means that the LSAC can predict, with ninety-five percent confidence, whether a
student will be admitted to a given law school using no information other than a prospective
student’s LSAT and UGPA. The statistical model the LSAC uses in order to make these
predictions is similar to the model employed by this Paper.79 The results of the model are
75 See Predictive Validity of the LSAT: a National Summary for the 2001-2002 Correlation Studies, LSAC
RESEARCH REP. SERIES (2001).
76 See Jeffrey Stake, The Interplay Between Law School Rankings, Reputations, and Resource Allocation: Ways
Rankings Mislead, 81 IND. L.J. 229, 233 (2006).
77 Official Guide to ABA-Approved Law Schools, http://officialguide.lsac.org/docs/cgi-bin/home.asp (last visited
Apr. 15, 2007).
78 See id.
79 See supra p. 15. Recall that the model places probability of admission on the left hand side of the equation,
meaning that it can be used to compute the probability of admission given a prospective student’s LSAT and UGPA.
striking. In nearly all cases LSAT and UGPA result in a narrow probability interval, between
zero and one, with one indicating certainty, and zero indicating no chance whatsoever.80 This
clearly indicates that LSAT and UGPA are given an extremely heavy weight in admissions
considerations—probably more so than their combined predictive ability for law school performance warrants.81
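The filter's mechanics follow directly from a fitted logit model: given only LSAT and UGPA, the model returns an admission probability, and for most schools that probability lands close to zero or one. The coefficients below are invented for illustration and are not drawn from the LSAC's actual model:

```python
import math

# Invented logit coefficients for a hypothetical school.
INTERCEPT, B_LSAT, B_UGPA = -80.0, 0.45, 2.0

def admit_probability(lsat, ugpa):
    """Predicted admission probability from the two numeric credentials alone."""
    z = INTERCEPT + B_LSAT * lsat + B_UGPA * ugpa
    return 1.0 / (1.0 + math.exp(-z))

p_strong = admit_probability(170, 3.9)  # well above the school's medians
p_weak = admit_probability(150, 3.0)    # well below
```

That two numbers alone can pin the prediction near certainty is itself evidence of how heavily the numeric credentials are weighted.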
C. Ratio of Admitted to Applied Students
Turning to the ratio of admitted to applied students, the final component in the selectivity
portion of U.S. News’s ranking algorithm, the incentive law schools face is clear: a law school
interested in maximizing its rank will attempt to maximize its applicant pool.82 Since law schools
want to get the best students they can independent of rank, some attempts to maximize
applications are both reasonable and desirable. Therefore I will focus my analysis on attempts to
maximize applicants that are unlikely to increase the quality of a school’s matriculating students.
Since prospective students lack information about their future institution (as evidenced by
the proliferation of ranking mechanisms, forums, and other information-disseminating products
and events), a law school could conceivably make itself seem more attractive by making
80 ABA Official Guide to Law Schools. http://officialguide.lsac.org/search/cgi-bin/results.asp?PageNo= (last visited
Apr. 15, 2007)
81 For many schools, LSAT and UGPA predict admissions to law schools to within a range of 0.15, or fifteen
percent. Supra note 70. According to the LSAC’s own study, the predictive value of LSAT and UGPA combined is
between fifty and sixty percent, indicating that law schools give both LSAT and UGPA a heavier combined weight
than their predictive value. For a general discussion on the negative implications of the overweighting of numerical
scores, see Jeffrey Stake, The Interplay Between Law School Rankings, Reputations, and Resource Allocation: Ways
Rankings Mislead, 81 IND. L J. 229, 233 (2006). It is worth noting that my conclusions are far more tentative than
his, for the following reasons: giving LSAT and UGPA a heavier weight than their predictive value may be
unrelated to rank-maximizing strategies. Although the predictive ability of LSAT and UGPA is limited, it is not clear which
variables explain the portion of success in law school that LSAT and UGPA miss. For example, if LSAT and UGPA
together explain 0.6 of law school GPA, it is not clear that law schools know what explains the rest of law school
success. It may be information discernible from letters of recommendation, etc.; on the other hand, it is also possible
that this information is simply not available to law schools. As a result LSAT and UGPA are given a heavier weight,
since they are a “sure thing”. This explanation becomes irrelevant under my hypotheses, since it does not explain
differences in LSAT and UGPA relative weight, only in LSAT and UGPA aggregate weight compared to other
admissions criteria. However, it may explain some of the disproportionate weight given to both variables combined.
82 The size of a law school’s entering class is assumed to be constant in this context.
prospective students think that their admission chances are greater than they really are.
Specifically, a high-ranking law school could hide specific admissions data that may deter
potential applicants. This leads to a third hypothesis: Higher ranked law schools with very high
numeric admissions scores will be less likely to display detailed scores of admitted students than
lower- and mid-ranking schools, so as not to deter students with lower scores from applying.
Of the top ten law schools, only Yale makes the details of its incoming class’s statistics
available. The next ten (10th-20th ranked schools) reveal a dramatic increase in score reporting,
with approximately fifty percent of these law schools choosing to report detailed admissions
data.83 Although it is difficult to conduct a statistical test measuring the impact of this strategy, a
cursory glance at table 5 indicates that of the top six law schools, Yale receives the fewest
applicants, despite being ranked first. Attempting to explain this anomaly simply by Yale’s very
low admissions rate is not likely—Stanford, with a similar admissions rate and even smaller
class size, receives more applicants than Yale.84 It is far more likely that the information revealed
by Yale about the quality of its applicants serves as a deterrent to marginally qualified students.
Another method by which a law school may attempt to manipulate this portion of the
rank is through offering financial incentives (mostly in the form of fee waivers) to over-qualified
students.85 Although the law school may realize that these students will not actually attend their
school even if admitted, processing their application is not necessarily a costly endeavor. That
being said, it is difficult to distinguish such attempts from legitimately trying to enroll highly qualified students.
83 See Official Guide to ABA-Approved Law Schools, 2005-2006.
84 More generally, I will show that the correlation between the number of students who apply and the number who
are admitted to a law school is not high. See infra p. 34.
85 Joanna Grossman, U.S. News & World Report's 2005 Law School Rankings: Why They May Not Be Trustworthy,
and How the Alternative Ranking Systems Compare, FINDLAW LEGAL NEWS AND COMMENTARY, Apr. 06, 2004,
available at http://writ.news.findlaw.com/grossman/20040406.html.
Since the ratio of admitted to applied students is also affected by fee waivers, there may
be an incentive to give fee waivers to under-qualified students as well, since processing their
applications would probably have a low cost, as they are not being seriously considered.86 It is
worth noting that law schools emphatically deny any use of this latter strategy, claiming that they
thoroughly read all applications.87
Additionally, as mentioned previously,88 there is some evidence suggesting that law
schools reject over-qualified applicants, since these applicants are less likely to enroll if
admitted, due to a proliferation of competing offers.
In conclusion, it appears that law schools allow rank-maximizing strategy to seep into
admissions decisions at a number of points. Data and statistical analysis convincingly show that
some law schools weigh LSAT scores more heavily than they otherwise would, due to the heavy
weight LSAT is given in U.S. News’s ranking algorithm. Furthermore, despite data constraints, it
appears that law schools weight LSAT and UGPA differently at different points, since only the
seventy-fifth and twenty-fifth percentile scores for these variables are relevant for U.S. News
rank. Finally, law schools engage in strategic information dissemination in order to encourage
the maximum number of applicants, irrespective of applicant quality or probability of admission.
86 For instance, if a law school uses a “cut-off” LSAT score, it can quickly and cheaply discard all applications with
scores below that point.
87 There is no concrete evidence suggesting that this type of manipulation occurs.
88 See supra Part II.B (graph 3 & table 4).
III. Beyond Admissions
U.S. News’s ranking algorithm takes into account a number of factors other than
admissions. The following section discusses possible methods law schools could employ in order
to manipulate these other factors.
A. Quality Assessment (weighted by .40)
Measures of quality assessment are the result of survey data collected by U.S. News on a
regular basis. Various members of the legal community, including practitioners, judges and
academics participate in these surveys. Although U.S. News surveys are most certainly open to
strategic manipulation, finding evidence of large scale manipulation is not possible, due to data
constraints. Furthermore, even if they are not being strategically manipulated, it is not clear that
the surveys always contain useful information. It is possible, and even likely, that U.S. News
gives these surveys a heavy weight in the rankings (a total of forty percent) since they are
exclusive information—U.S. News conducts the surveys. Other data used by U.S. News is publicly available.89
• Peer Assessment Score (.25) (PA)
In the fall of 2005, law school deans, deans of academic affairs, the chair of faculty
appointments, and the most recently tenured faculty members were asked to rate
programs on a scale from "marginal" (1) to "outstanding" (5). Those individuals who did
not know enough about a school to evaluate it fairly were asked to mark "don't know." A
school's score is the average of all the respondents who rated it. Responses of "don't
know" counted neither for nor against a school. About 70 percent of those surveyed responded.
• Assessment Score by Lawyers/Judges (.15) (LJ)
In the fall of 2005, legal professionals, including the hiring partners of law firms, state
attorneys general, and selected federal and state judges, were asked to rate programs on a
scale from "marginal" (1) to "outstanding" (5). Those individuals who did not know
enough about a school to evaluate it fairly were asked to mark "don't know." A school's
89 Nearly all data used by U.S. News is made available by the ABA and LSAC. ABA Official Guide to Law Schools
http://officialguide.lsac.org/search/cgi-bin/results.asp?PageNo= (last visited Apr. 15, 2007).
score is the average of all the respondents who rated it. Responses of "don't know"
counted neither for nor against a school. About 27 percent of those surveyed responded.90
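The arithmetic of the survey component, as described above, is simple: each survey score is the mean of the one-to-five ratings a school receives, with "don't know" responses excluded, and the two surveys enter the ranking with weights of .25 and .15. A minimal sketch with invented responses:

```python
# "Don't know" responses count neither for nor against a school.
DONT_KNOW = None

def survey_score(ratings):
    """Mean of the 1-5 ratings, ignoring 'don't know' responses."""
    rated = [r for r in ratings if r is not DONT_KNOW]
    return sum(rated) / len(rated)

peer = survey_score([5, 4, DONT_KNOW, 4, 5])      # invented peer-assessment responses
lawyers = survey_score([4, DONT_KNOW, 5, 4])      # invented lawyer/judge responses
quality = 0.25 * peer + 0.15 * lawyers            # the .40 quality-assessment block
```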
While it is possible (and even likely) that graduates of certain schools rank competing
schools lower in U.S. News surveys, there is no effective method of testing this hypothesis
without access to U.S. News’s raw survey data. Therefore, for the sake of this Paper, the Quality
Assessment category will be largely ignored. Since this is the most heavily weighted category
within the ranking algorithm, it is otherwise an attractive target for investigation. Despite a
dearth of high-quality data, the Association of American Law Schools (AALS) published a
report on U.S. News rankings, which suggested that such strategic manipulation may indeed be
taking place, though no conclusive evidence was presented.91
Another problem with this component of rank is that there is little evidence to suggest
that lawyers are aware of the changes occurring within law schools. The then-Dean92 of NYU
Law School, John Sexton, summarized this problem nicely by saying that “If [lawyers] were
asked about Princeton Law School, it would appear on the top 20—but it doesn’t exist.”93
Unlike other academic institutions, law school deans and professors are not necessarily
engaged in original research. Additionally, most law journals are not peer-reviewed. As a
consequence, familiarity with the location of one’s peers and the quality of their research is not
always necessary for a law professor, and even less so for a practicing lawyer.94 Furthermore,
90 America’s Best Graduate Schools U.S. NEWS AND WORLD REP. (2007)
http://www.usnews.com/usnews/edu/grad/rankings/about/07law_meth_brief.php (last visited Apr. 15, 2007).
91 Stephen P. Klein & Laura Hamilton, The Validity of The U.S. NEWS AND WORLD REP. Rankings of ABA Law
Schools, AALS REPORT, Feb. 18, 1998. The report contains a number of excellent criticisms, some of which have
been subsequently corrected by U.S. News. See also Joanna Grossman, U.S. News & World Report's 2005 Law
School Rankings: Why They May Not Be Trustworthy, and How the Alternative Ranking Systems Compare,
FINDLAW LEGAL NEWS AND COMMENTARY, Apr. 06, 2004 (Criticizing law school rankings on a number of points).
92 Now President of NYU.
93 Jan Hoffman, Judge Not, Law Schools Demand of a Magazine that Ranks Them, N.Y. TIMES, Feb. 19, 1998 at
94 A recent article by Thomas Adcock, Federal Judges Discuss Law Review Usefulness, N.Y. LAWYER, Mar. 2007,
quotes a Judge as saying “[the bench now uses law reviews] like drunkards use lampposts, more for support than for
while academics often encounter each other in conferences and presentations, legal practitioners
are less likely to encounter academics, leaving lawyers to make judgments based on highly
skewed samples. For instance, a lawyer in a firm that does not often hire from a specific law
school may have no frame of reference at all for measuring the quality of that law school. A
lawyer in a firm that does hire from a specific law school is likely to have encountered
a highly skewed subset of that law school’s graduates, and none of its professors, depending on
the specifics of the firm.
B. Placement Success (weighted by .20)
Placement success includes measures of a graduating class’s bar passage rates and
percentage of a class employed upon and shortly after graduation. Despite significant anecdotal
evidence suggesting that law schools hire their own students in order to increase employment
rates, no large scale data is available.95
• Employment Rates for Graduates (ER)
The employment rates for the 2004 graduating class. Graduates who are working or pursuing
graduate degrees are considered employed. Those graduates not seeking jobs are
excluded. Employment rates are measured at graduation (.06) and nine months after
graduation (.12). For the nine-month employment rate, 25 percent of those whose status
is unknown are counted as employed.
• Bar Passage Rate (.02) (BAR)
The ratio of the school's bar passage rate for the 2004 graduating class to that jurisdiction's
overall state bar passage rate for first-time test takers in summer 2004 and winter 2005.
The jurisdiction listed is the state where the largest number of 2004 graduates took the
state bar exam.96
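The placement-success component described above can be expressed as a small computation. The sketch below is my own illustration of the stated weights (.06 at graduation, .12 at nine months with 25 percent of unknowns counted as employed, .02 for the bar-passage ratio); the function name, parameter names, and the example figures are hypothetical, not U.S. News’s actual code or data.

```python
def placement_success(emp_at_grad, emp_nine_months, pct_unknown,
                      school_bar_rate, state_bar_rate):
    """Sketch of the placement-success component (total weight .20).

    All rates are fractions in [0, 1]. Per the methodology described
    above, 25 percent of graduates whose nine-month status is unknown
    are counted as employed.
    """
    adjusted_nine = emp_nine_months + 0.25 * pct_unknown
    bar_ratio = school_bar_rate / state_bar_rate
    return 0.06 * emp_at_grad + 0.12 * adjusted_nine + 0.02 * bar_ratio

# Hypothetical school: 85% employed at graduation, 90% at nine months,
# 4% of graduates with unknown status, and an 88% bar passage rate
# against an 80% statewide rate.
score = placement_success(0.85, 0.90, 0.04, 0.88, 0.80)
```

Note that because the bar component is a ratio to the statewide rate, a school in a hard-bar jurisdiction is not penalized relative to one in an easy-bar jurisdiction, which is why “teaching to the test” is essentially the only lever a school has on this term.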
Since law schools may have a variety of reasons for increasing graduate employment,
regardless of effects on rank, I will assume that no manipulation is taking place; or at least that
illumination,” indicating his view of the limited usefulness of law reviews to most practitioners. See also Adam
Liptak, When Rendering Decisions, Judges are Finding Law Reviews Irrelevant, N.Y. TIMES, Mar. 19, 2007.
95 See Alex Wellen, The $8.78 Million Maneuver, N.Y. TIMES, July 31, 2005, at Section A4.
96 America’s Best Graduate Schools U.S. NEWS AND WORLD REP. (2007)
http://www.usnews.com/usnews/edu/grad/rankings/about/08law_meth_brief.php (last visited Apr. 15, 2007).
manipulation is not undesirable. Although there are ways for law schools to manipulate
employment rates, these would be very difficult to measure.97 Manipulating the percentage of
students passing the bar is also difficult, since the measure accounts for the difficulty of the
test by incorporating the overall bar passage rate. The only way law schools can
effectively manipulate these numbers is through “teaching to the test,” which may be done for
reasons that have nothing to do with U.S. News rankings.98 Placement will therefore be bracketed
off in this Paper’s analysis.
C. Faculty Resources (weighted by .15)
Faculty Resources, or rather faculty and resources, includes various measurements of the
resources a law school makes available to its students. Here, too, the data is confined to a series of
explicit measures:
• Expenditures Per Student (EXP)
The average expenditures per student for the 2004 and 2005 fiscal years. Average
expenditures on instruction, library, and supporting services are weighted at .0975; all
other expenditures, including financial aid, at .015.
• Student/Faculty Ratio (.03) (SFR)
The ratio of students to faculty members for the fall 2005 class, using the American Bar Association definition of faculty.
• Library Resources (.0075) (LIB)
The total number of volumes and titles in the school's law library at the end of the 2005 fiscal year.
Faculty Resources seems like another likely target for manipulation, since it involves
several explicit measures, all of which are quantifiable. While some anecdotal evidence indicates
97 But see supra at 33.
98 People generally like passing tests, especially when that test can have a heavy impact on their future salary.
99 See Alex Wellen, The $8.78 Million Maneuver, N.Y. TIMES, July 31, 2005.
100 America’s Best Graduate Schools U.S. NEWS AND WORLD REP. (2007)
http://www.usnews.com/usnews/edu/grad/rankings/about/08law_meth_brief.php (last visited Apr. 15, 2007).
that certain law schools may have implemented some questionable human resource management
practices in the past, there is no reason to believe this is a common occurrence.101
One area of resources that has generated mutual accusations among several law schools
is library size.102 Since the size of a library (its number of volumes and titles) is the
measure used for this part of the ranking, a rank-maximizing law school would be happy
to increase the number of volumes in its library regardless of their relevance.103 While it is
theoretically possible to examine the content of libraries through some sort of sampling process,
and incorporate a measure of the “quality” of books into the law school’s rank, such a process
would be extremely time consuming, not to mention controversial. However, using a measure
which includes book use—as measured by proportion of books used by law students and
faculty—as opposed to simply measuring the number of books in a library, allows for the
usefulness of books to be taken into account. Finally, in the age of the Internet and online databases,
a law school’s virtual library and printing access may be far more useful than its physical library.
Unfortunately, the availability of library data is limited, placing such measures beyond the scope of this Paper.
D. Size Matters
In a situation of perfect information, law school size would not matter very much:
prospective students would realize that they are less likely to be admitted to smaller law schools
(all else being equal), and would thus be less likely to apply to them. But in a situation of perfect
information there would be no need for law school rankings either, since prospective students
101 See, e.g., Joanna Grossman, U.S. News & World Report's 2005 Law School Rankings: Why They May Not Be
Trustworthy, and How the Alternative Ranking Systems Compare, FINDLAW LEGAL NEWS AND COMMENTARY, Apr.
06, 2004; Alex Wellen, The $8.78 Million Maneuver, N.Y. TIMES, July 31, 2005.
102 See e.g., Alex Wellen, The $8.78 Million Maneuver, N.Y. TIMES, July 31, 2005.
103 See Jeffrey Stake, The Interplay Between Law School Rankings, Reputations, and Resource Allocation: Ways
Rankings Mislead, 81 IND. L.J. 229, 242, 269 (2006).
would already have any information that a ranking system might convey. Be it due to a lack of
information, an inability to correctly evaluate admissions chances, or other factors, the number of
applications to a law school is not (all else equal) powerfully correlated with its size.
Specifically, law schools that are otherwise similar (very near in rank and in the same location or
type of location, urban versus rural, etc.) do not receive drastically fewer applications just
because they are smaller. To reinforce this assertion, I will compare several groups of
otherwise similar law schools and show that although a reduction in size brings about a
corresponding reduction in applications, the two are not proportional.104
For example, Harvard and Stanford Law Schools have occupied top five slots in U.S.
News rankings for a number of years.105 For the 2005 academic year Harvard admitted 834
students out of 7,391 applicants, with a total of 554 matriculating. Stanford, on the other hand,
admitted 390 out of 5,040 applicants, with a total of 166 matriculating. Yet Stanford is
considered far more selective than Harvard, despite the fact that Harvard has a higher ratio
of matriculating to admitted students: of the people admitted to Harvard, a larger share
chose to attend than of those admitted to Stanford. Stanford received a higher selectivity score
simply because it is small. Lest we think the discrepancy has something to do with California weather,
or the joys of living in Palo Alto as opposed to Boston, let us examine a school that is more
similar to Harvard in climate, and, some claim, in character. The University of Chicago’s Law
School admitted 750 students for a target class of 192, out of 4,737 applicants.106 Although their
104 I bring a number of high ranking case studies, though more complete data is available. See America’s Best
Graduate Schools U.S. NEWS AND WORLD REP. (2007)
http://www.usnews.com/usnews/edu/grad/rankings/law/brief/lawrank_brief.php (last visited Apr. 15, 2007); ABA
Official Guide to Law Schools http://officialguide.lsac.org/search/cgi-bin/results.asp?PageNo= (last visited Apr. 15,
105 See Top Law Schools, available at http://www.top-law-schools.com/profiles.html (last visited Apr. 15, 2007).
106 See infra p. 38 (Table 5).
ratio of matriculating to admitted students is far lower than Harvard’s, their selectivity is
considered nearly the same, simply due to the target size of their class.107
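The two ratios at issue can be made concrete with the figures quoted above: acceptance rate (admits over applicants) is what drives the selectivity score, while yield (matriculants over admits) captures what admitted students actually chose. A short sketch, using only the numbers cited in the text:

```python
schools = {
    # (applicants, admitted, matriculated), figures as quoted in the text
    "Harvard": (7391, 834, 554),
    "Stanford": (5040, 390, 166),
    "Chicago": (4737, 750, 192),
}

for name, (apps, admits, matric) in schools.items():
    acceptance = admits / apps    # the selectivity measure rankings reward
    yield_rate = matric / admits  # revealed preference of admitted students
    print(f"{name}: acceptance {acceptance:.1%}, yield {yield_rate:.1%}")
```

Running the numbers shows the inversion discussed above: Stanford’s acceptance rate is lower than Harvard’s even though Harvard’s yield is substantially higher, and Chicago’s yield is lowest of the three despite a selectivity score near Harvard’s.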
NYU and Columbia are very similar schools, in the same city, with virtually identical
rank. Despite the fact that Columbia admits approximately twenty-five percent fewer students, it
still receives more applicants than NYU.108 This seems to indicate that students are either unaware
of the effect of school size on admissions prospects, or simply choose to ignore any such effect.
This raises another problem: One possible measure of school selectivity takes the form of
a tournament109—how likely is a student to choose a certain school given the total set of schools
he was admitted to.110 While data on individual students is not available, the percentage of
admitted students who choose to attend a school (the Matric/Admit ratio in Table 5) does capture a
similar idea, since it indicates how many individuals prefer attending that law school over their
next best option.111 Assuming that most people admitted to similar law schools have similar
opportunity costs, this proportion would be a measure of law school value from the student’s
perspective. Note that the student’s choice is closely correlated with ranking, though not as
closely with selectivity. This suggests that students with higher scores choose the higher ranked
school, increasing that school’s rank through their higher scores.
Table 5: comparing school and student selectivity112
107 Evidence of such behavior exists in college applications as well. Christopher Avery et al., COLLEGE ADMISSIONS
PROJECT (2005), 14.
108 See infra p.35, (table5).
109 See Cass Sunstein, Ranking Law Schools, A Market Test?, 91 IND. L.J. 25, 25-34 (2006).
110 See generally Christopher Avery, et al., A Revealed Preference Ranking of U.S. Colleges and Universities, NBER
working paper 10803, (2004). Note that the data presented here is far less precise, since it assumes admission to
comparable schools, as opposed to the College study which actually measures admission.
111 This is far from a perfect measure since an individual’s alternative options are unknown.
112 America’s Best Graduate Schools U.S. NEWS AND WORLD REP. (2007)
http://www.usnews.com/usnews/edu/grad/rankings/law/brief/lawrank_brief.php (last visited Apr. 15, 2007).
While it is unlikely that law schools actually manipulate their size to increase their
ranking, the size effect exposes an interesting fault in the ranking methodology that is well worth
mentioning. Note that there is not necessarily a correlation between school size and classroom
size, or between school size and the ratio of faculty to students. Although most schools do publish
statistics on class sizes, these are not used by U.S. News’s ranking algorithm.113 U.S. News’s use
of a simple acceptance rate (the ratio of admitted students to applicants) results in an advantage
for smaller schools simply because they are small. A more accurate measure of acceptance rates
would incorporate school size into its considerations as well.
E. Possible Future Empirical Research
Although empirical investigations into the impact, effectiveness, and accuracy of law
school rankings have increased significantly over the past decade,114 a number of important
questions remain. These include, but are not limited to, differences between state and private
institutions, and the variables introduced by transfer and part-time students.
State schools may be at a strategic disadvantage with respect to rank-maximizing
admissions strategies, since some states require state schools to admit a certain percentage of in-state
applicants. Furthermore, some state schools simply choose to admit more in-state students
113 Faculty student ratio, on the other hand, is a component of U.S. News rank, available at
http://www.usnews.com/usnews/edu/grad/rankings/about/08law_meth_brief.php (last visited Apr. 15, 2007).
114 See, e.g., Jeffrey E. Stake & Michael Alexeev, Who Responds To U.S. NEWS AND WORLD REP.’s Law School
Rankings?, IND. LEGAL STUD. RESEARCH PAPER NO. 55 (June 30, 2006), available at,
http://ssrn.com/abstract=913427 (last visited Apr. 15, 2007); Michael Sauder and Ryon Lancaster, Do Rankings
Matter? The Effects of U.S. News & World Report Rankings on the Admissions Process of Law Schools, 40 LAW &
SOC’Y REV. 105, 105–134 (2006).
for independent reasons.115 These restrictions on the pool of students admitted to state schools
can have a negative effect on their average test scores, especially in small states, where a
limited number of students take the LSAT. Consequently, state schools may receive a
lower rank than they otherwise would, simply because they admit more in-state students.116
Finding empirical evidence corroborating this theory may indicate that state schools should
either be ranked separately, or that binding proportions on in state admissions should be
incorporated into a school’s rank.
Since transfer student scores are not factored into a school’s ranking, a school
maximizing its rank will care less about a transfer student’s undergraduate GPA and LSAT
scores. This presumably frees the school to focus on better predictors of law school
success, ignoring possible effects on rank. Additionally, a school may wish to admit more
students through transfer, thereby admitting better students without adversely affecting its
ranking.117 Like transfer data, part-time student data is not included in a school’s rank.
Unfortunately, neither transfer nor part-time students are consistently included in detailed ABA and
LSAC data, making law schools themselves the only source for such data. Corroborating these hypotheses
could lead to the incorporation of transfer students into ranking algorithms, as well as the construction
of separate rankings for part-time programs.
115 Such policies are not always explicit, and can come in the form of additional financial benefits. See, e.g.,
http://www.law.wayne.edu/current/financial_aid.html. Detailed data on multiple schools is available from U.S. News.
116 It is not entirely clear that there is something wrong with state school rank being adversely affected in this manner, since
the quality of students admitted (as crudely measured by LSAT and UGPA) is indeed lower, regardless of the reasons.
117 Jeffrey Stake, The Interplay Between Law School Rankings, Reputations, and Resource Allocation: Ways
Rankings Mislead, 81 IND. L.J. 229, 238 (2006); Alex Wellen, The $8.78 Million Maneuver, N.Y. TIMES, July 31,
2005, at Section A4.
IV. Conclusions and Implications
Rank plays an important role in disseminating information about law schools in an
information-scarce market. Yet as this Paper and others convincingly demonstrate,118 the current
system conveys inaccurate information, resulting in a variety of harmful effects. Prospective law
students take ranking into account when deciding which school to apply to, despite mounting
evidence suggesting that U.S. News’s ranking methodology is deeply flawed.119 Law schools
react to rankings in a number of contexts, 120 including admissions decisions.
Yet despite these flaws in U.S. News’s methodology, rankings can still act as a useful tool
in the evaluation of law schools, if implemented correctly. Although the purpose of this Paper is
not to propose a new ranking methodology, the flaws demonstrated by the statistical analysis in
this Paper suggest two possible changes in approach to law school rankings. The first proposal is
of a very general nature, while the second is more specific in its scope.121
Generally, producing disaggregated information, as opposed to a single weighted score, can improve the
situation markedly. If rankings were category specific, prospective students (and faculty) would
be able to weight individual factors as they saw fit, instead of having U.S. News assign weights
in a somewhat arbitrary manner. For example, if category-specific rankings are provided for
library size and student-faculty ratio, individual students can decide independently how to
118 See, e.g., Jeffrey Stake, The Interplay Between Law School Rankings, Reputations, and Resource Allocation:
Ways Rankings Mislead, 81 IND. L.J., 229, 269 (2006); Brian Leiter, How to Rank Law Schools, 81 IND. L. J. 47, 47-
119 See Jeffrey Stake, The Interplay Between Law School Rankings, Reputations, and Resource Allocation: Ways
Rankings Mislead, 81 IND. L.J. 229, 269 (2006).
120 See generally id.
121 For a number of other interesting proposals, see id. at 260; Brian Leiter, How to Rank Law Schools, 81 IND. L. J.
48, 47-52 (2006).
weight each factor: if library size is important to one student but not another, each student can
weight library size differently.122 The situation is analogous for student-faculty ratio.123
This type of ranking system has the advantage of making undesirable strategic behavior
on the part of law schools more difficult, since the weights associated with different school
quality measures are determined by prospective students on an individual basis. Including a
measure of faculty quality according to specific area of law may give the added advantage of
encouraging some schools to specialize.124 This type of information would be particularly useful
for prospective students with narrow areas of interest within law.
Although measures based on individual categories are useful, if the popularity of U.S.
News’s ranking system is an indicator, there is a strong market demand for a single aggregate
ranking system, in addition to categorical rankings. While improving upon the individual
measures employed by U.S. News is certainly possible, this Paper supports making available a
student-choice, or “tournament,” based model of rank, alongside or instead of the current method
employed by U.S. News. Designing a single rank based on a tournament model has many
advantages over U.S. News’s methodology.125 Tournament-type ranking has been implemented
in the college context, with some success.126 In the college context this model is implemented
through the construction of a tournament between schools, competing over students. Schools
receive “points” in the “tournament” when students select their school over other schools to
122 More generally, if a separate rank rc is given to each category c out of a total of n categories, each student i can
associate a weight wi,c with each category according to his own preferences. Each student can then
aggregate these weighted variables to compose a list of preferences according to the available information and the
weight, or importance, he associates with each variable, resulting in a student-specific weighted ranking for each school.
123 Brian Leiter already provides some information along these lines. See LeiterRankings available at
http://www.leiterrankings.com/ (last visited Apr. 15, 2007).
124 Such specialization may also make a generic ranking method seem even less viable, since it is difficult to
compare across areas of specialization within law.
125 For a similar proposal, see Cass Sunstein, Ranking Law Schools, A Market Test?, 91 IND. L.J. 25, 25-34 (2006).
126 See generally Christopher Avery et al., A Revealed Preference Ranking of U.S. Colleges and Universities, NBER
Working Paper 10803 (2004).
which they have been admitted. For example, if a student was admitted to both college A and B,
and chose to attend college A, that student will be said to have ranked college A over college B.
Rank is then determined according to an aggregation of individual student choices.
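The tournament model just described can be sketched as a simple win-count aggregation over matriculation decisions. Real revealed-preference rankings (such as Avery et al.’s) fit a statistical model rather than counting raw wins, so the code below captures only the underlying intuition; the student data is invented for illustration.

```python
from collections import defaultdict

def tournament_rank(choices):
    """Rank schools by head-to-head wins from matriculation decisions.

    choices: list of (chosen_school, set_of_admits) pairs, one per
    student; choosing school X over each other admit Y counts as a
    win for X in an X-versus-Y contest.
    """
    wins = defaultdict(int)
    contests = defaultdict(int)
    for chosen, admits in choices:
        for other in admits:
            if other != chosen:
                wins[chosen] += 1
                contests[chosen] += 1
                contests[other] += 1
    # Rank by win fraction over all contests a school appeared in.
    return sorted(contests, key=lambda s: wins[s] / contests[s], reverse=True)

# Four hypothetical students and their admission sets.
students = [
    ("A", {"A", "B"}),        # chose A over B
    ("A", {"A", "B", "C"}),   # chose A over B and C
    ("B", {"B", "C"}),        # chose B over C
    ("C", {"A", "C"}),        # chose C over A
]
order = tournament_rank(students)
```

School A wins three of its four contests and tops the ordering even though one student turned it down, which illustrates why this measure is hard to game: a school can only move up by actually being chosen.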
Manipulating tournament rank is difficult, since the determining factor of a school’s rank
is whether students choose to attend it over other schools. Furthermore, student preference seems
a more relevant measure of a school’s quality than some arbitrarily weighted numeric score, or
the size of a school’s library.
Since both of the above proposals make strategic action on the part of law schools
extremely difficult, they will severely limit the incentives law schools currently face to
differentially weight admissions variables in order to maximize rank. This in turn will improve
minority admissions prospects at some law schools without the use of affirmative action or
other politically controversial measures.
In conclusion, there is little doubt that law schools can and do act strategically to
maximize their U.S. News rank in a variety of ways, and it is clear that some of these
rank-maximizing strategies may result in real harm. Regardless of future ranking
methodologies, it is important to realize that by providing more and better information to
prospective students, law schools can help overcome the negative impact of rankings. Increased
transparency in admissions processes and in the characteristics of an admitted class, such as
LSAT and UGPA, can only help diminish the disproportionate impact rankings have on both
students and law schools.