This paper empirically evaluates the reporting of adjusted effect sizes (e.g., adjusted R², omega²) in published multiple regression studies by (a) documenting the frequency of adjusted effect reporting and interpretation, (b) identifying the types of corrected effects reported, and (c) estimating the degree of "shrinkage" present across regression analyses by using the information found in published journal articles to calculate corrected effects based on various formulae. Adjusted effects were infrequently reported in the literature, and interpretation of adjusted effects that were reported was rare.

Researchers are becoming increasingly aware that interpretation of effect sizes is critical in evaluating empirical results (Henson & Smith, 2000). The APA Task Force on Statistical Inference, for example, stated: "It is hard to imagine a situation in which a dichotomous accept-reject decision is better than reporting an actual p-value or, better still, a confidence interval. . . . Always provide some effect-size estimate when reporting a p-value" (Wilkinson & APA Task Force on Statistical Inference, 1999, p. 599, italics added). The Task Force went on to state, "Always present effect sizes for primary outcomes . . . It helps to add brief comments that place these effect sizes in a practical and theoretical context" (p. 599). This directive was a substantial step beyond the fourth edition of the APA's Publication Manual, which only recommended the reporting of effect sizes in research (APA, 1994). Several empirical studies demonstrated, however, that this recommendation had little impact on the number of effect sizes reported in articles, and that it affected the interpretation of effect sizes even less (cf. Henson & Smith, 2000; Vacha-Haase, Nilsson, Reetz, Lance, & Thompson, 2000).
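The shrinkage estimation described above can be computed directly from quantities journals routinely report: the observed R², the sample size, and the number of predictors. As a minimal sketch (the specific formulae compared in this study are not reproduced here), the widely used Ezekiel/Wherry-type adjustment is illustrated below; the function name and example values are hypothetical.

```python
def adjusted_r2(r2, n, k):
    """Ezekiel/Wherry-type adjusted R-squared.

    Applies 1 - (1 - R^2) * (n - 1) / (n - k - 1), where
    r2 is the observed squared multiple correlation,
    n is the sample size, and k is the number of predictors.
    """
    if n - k - 1 <= 0:
        raise ValueError("adjustment requires n > k + 1")
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Shrinkage is the drop from the observed to the adjusted estimate.
# Example values are illustrative, not drawn from the reviewed articles.
r2, n, k = 0.50, 30, 3
adj = adjusted_r2(r2, n, k)      # roughly 0.442
shrinkage = r2 - adj             # roughly 0.058
```

Note that shrinkage grows as the ratio of predictors to sample size grows, which is why corrections matter most in small-sample, many-predictor regressions.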
The fifth edition of the APA manual (APA, 2001) incorporated the Task Force's directive, stating "For the reader to fully understand the importance of your findings, it is almost always necessary to include some index of effect size or strength of relationship in your Results section" (p. 25). The current APA manual also called the "failure to report effect sizes" a "defect in the design and reporting of research" (p. 5). At least 23 journals have followed suit, requiring the inclusion of effect sizes with statistical results (Onwuegbuzie, Levin, & Leech, 2003). The use of effect sizes has been widely discussed in the literature vis-à-vis null hypothesis significance tests (NHSTs). A full discussion of the issues surrounding the use of NHSTs is beyond the scope of this paper; Harlow, Mulaik, and Steiger (1997) present a balanced discussion of the debate for interested readers, and Huberty and Pike (1999) and Huberty (2002) document the historical development of statistical testing and effect sizes, respectively. Indeed, Pedhazur and Schmelkin (1991) noted that "Probably few methodological issues have generated as much controversy among sociobehavioral scientists as the use of [statistical significance] tests" (p. 198). Elsewhere, Pedhazur (1997) indicated that the "controversy is due, in part, to various misconceptions of the role and meaning of such [statistical significance] tests in the context of scientific inquiry" (p. 26). These "misconceptions" have been attacked for a considerable time (see, e.g., Berkson, 1942; Tyler, 1931), and yet they persist in modern research practice (Cohen, 1994; Finch, Cumming, & Thomason, 2001). Nevertheless, current methodological practice increasingly emphasizes the need for effect size indices and more accurate interpretation of NHSTs (Kline, 2004). Some researchers recommend using effect sizes and NHSTs together (Fan, 2001; Huberty, 1987).
Moreover, some critics of NHSTs have argued that effect sizes should be reported whether or not the results are statistically significant (Rosnow & Rosenthal, 1989; Thompson, 1999). As Roberts and Henson (2002) stated, ". . .one remaining point of debate concerns whether effect sizes should be reported (a) for all null hypothesis tests, even non-statistically significant ones, or (b) only after a finding is first determined to be statistically significant" (pp. 242-243).