Much Ado About Nothing: The Misestimation and Overinterpretation
of Violent Video Game Effects in Eastern and Western Nations:
Comment on Anderson et al. (2010)
Christopher J. Ferguson and John Kilburn
Texas A&M International University

Author note: Christopher J. Ferguson and John Kilburn, Department of Behavioral, Applied Sciences and Criminal Justice, Texas A&M International University. Correspondence concerning this article should be addressed to Christopher J. Ferguson, Department of Behavioral, Applied Sciences and Criminal Justice, Texas A&M International University, Laredo, TX 78045. E-mail: CJFerguson1111@aol.com
The issue of violent video game influences on youth violence and aggression remains intensely debated
in the scholarly literature and among the general public. Several recent meta-analyses, examining
outcome measures most closely related to serious aggressive acts, found little evidence for a relationship
between violent video games and aggression or violence. In a new meta-analysis, C. A. Anderson et al.
(2010) questioned these findings. However, their analysis has several methodological issues that limit the
interpretability of their results. In their analysis, C. A. Anderson et al. included many studies that do not
relate well to serious aggression, an apparently biased sample of unpublished studies, and a “best
practices” analysis that appears unreliable and does not consider the impact of unstandardized aggression
measures on the inflation of effect size estimates. They also focused on bivariate correlations rather than
better controlled estimates of effects. Despite a number of methodological flaws that all appear likely to
inflate effect size estimates, the final estimate of r = .15 is still indicative of only weak effects. Contrasts
between the claims of C. A. Anderson et al. (2010) and real-world data on youth violence are discussed.
Keywords: computer games, mass media, youth violence, aggression, child development
Over the last two decades, society has expressed concern that
violent video games (VVGs) may play some role in youth vio-
lence. To answer some of these questions, we engaged in a series
of precise meta-analyses of VVG studies that most closely related
to violent outcomes (e.g., Ferguson, 2007; Ferguson & Kilburn,
2009). Indeed, we were well aware that less precise measures tend
to overestimate effects (Paik & Comstock, 1994). We also had
questions regarding whether journals had been selectively publish-
ing significant studies and potentially ignoring nonsignificant stud-
ies. Our results were clear: The influence of VVGs on serious acts
of aggression or violence is minimal, and publication bias is a
problem in this research field. We also noted (as did Paik &
Comstock, 1994) that the best measures of aggression and violence
produced the weakest effects and that problematic unstandardized
use of some aggression measures, particularly in experimental
studies, tended to inflate effects.
Points of Agreement and Disagreement With
Anderson et al.
Anderson et al. (2010) critiqued our analyses and offered an
alternative of their own. Our analyses agree that the uncorrected
estimate for VVG effects is quite small (r = .15 in both analyses).
We also agree that meta-analytic researchers must take careful
steps to minimize the influence of publication bias. But our re-
search groups disagree on many points: whether to include unpub-
lished studies, how best to analyze and correct for publication bias,
whether bivariate correlations are a proper estimate of VVG ef-
fects, how precise standardized and valid aggression measures
need to be to adequately answer research questions, and how effect
size estimates should be interpreted. We have concerns that Ander-
son et al. have made several misstatements about our meta-
analyses and meta-analyses more generally and have also made
significant errors in their own analyses that render their results
difficult to interpret.
Building the Perfect Meta-Analytic Beast
We are honored that Anderson et al. (2010) selected our anal-
yses to contrast with their own. However, readers should be aware
that other recent meta-analyses on VVGs and media violence more
broadly have been no more supportive of Anderson et al.’s position
than our own (Savage & Yancey, 2008; Sherry, 2001, 2007).
Anderson et al. surprisingly cite Sherry (2001) as if supportive of
their position, but in fact he is quite clear that he does not find the
results of his analyses persuasive for the causal position. Indeed,
he is specifically critical of the Anderson et al. research group,
stating, “Further, why do some researchers (e.g., Gentile & Ander-
son, 2003) continue to argue that video games are dangerous
despite evidence to the contrary?” (Sherry, 2001, p. 244).
Anderson et al. (2010) suggested that we should have included
unpublished studies in our analyses and that the best way to negate
publication bias issues is to “conduct a search for relevant studies
that is thorough, systematic, unbiased, transparent, and clearly
documented” (p. xx). We note that, given that one of our questions
specifically regarded the amount of bias in the published literature,
including unpublished studies would be counterintuitive. Although
including unpublished studies in meta-analyses is certainly com-
mon, is it really as “widely accepted” as they claim? Further, does
their meta-analysis live up to their own rhetoric?
First, Anderson et al. (2010) failed to note that many scholars
have been critical of the inclusion of unpublished studies in meta-
analyses. Baumeister, DeWall, and Vohs (2009) noted that one
weakness of meta-analysis is that the inclusion of dubious unpub-
lished works can “muddy the waters” (p. 490). Smith and Egger
(1998), echoing our own concerns, noted that including unpub-
lished studies increases bias, particularly when located studies are
not representative of the broader array of studies. Others have
noted that inclusion of unpublished studies remains controversial,
although certainly common, and it is not uncommon for meta-
analyses to avoid unpublished studies (Cook et al., 1993). Thus,
Anderson et al.’s implication that we essentially invented the
notion of avoiding unpublished studies is fanciful, much as we
would like to take credit.
Despite the comments of Anderson et al. (2010) supporting a
search for unpublished studies that is “thorough, systematic, un-
biased, transparent, and clearly documented,” they actually pro-
vide little information about how they located unpublished studies.
However, one common procedure, although certainly not suffi-
cient in and of itself, is to request unpublished studies from known
researchers in the field (Egger & Smith, 1998). It is surprising then
that, although the Anderson et al. researchers were in contact with
us (i.e., C. J. Ferguson), they neither mentioned their meta-analysis
nor requested in-press or unpublished studies. As such, they
missed several in-press studies (e.g., Ferguson & Rueda, in press;
Ferguson, San Miguel, & Hartley, 2009) as well as a larger number
of “on review” papers and papers for which data had been col-
lected but not yet written up. We express the concern that other
research groups that, arguably, have presented research not in line
with Anderson et al.’s hypotheses may not have been contacted
(e.g., Barnett, Coulson, & Foreman, 2008; Colwell & Kato, 2003;
Kutner & Olson, 2008; Ryan, Rigby, & Przybylski, 2006; Un-
sworth, Devilly, & Ward, 2007; Williams & Skoric, 2005). For
example, we note that several published reports (e.g., Barnett et al.,
2008; Olson et al., 2009; Przybylski, Weinstein, Ryan, & Rigby,
2009) from this group of authors have been missed. Thus, from
only a small group of researchers, albeit those who differ from
Anderson et al. in perspective, a considerable number of published,
in-press, and unpublished studies were missed. One can only
speculate at the number of other missed studies from unknown
authors. On the other hand, when examining the appendix of
included studies, one finds that unpublished studies from Anderson
et al.’s research group and colleagues are well represented. For
example, of two unpublished studies, both are from Anderson et
al.’s broader research group. Of three in-press manuscripts in-
cluded, two (67%) are from the Anderson et al. group. Of confer-
ence presentations included, 9 of 12 (75%) are from the Anderson
et al. group and colleagues. Whatever techniques used by Ander-
son et al. to garner unpublished studies, these techniques worked
very well for their own unpublished studies but poorly for those
from other groups. We do not conclude that this was purposeful on
the part of Anderson et al.; rather, this matter highlights our
concerns about including unpublished studies.
Publication Bias Exists in VVG Studies
Our original meta-analyses indicated that published studies of
VVGs are products of publication bias. Anderson et al. (2010)
do not appear to have disputed this but suggested that we should have included unpublished studies rather than relying on our publication bias anal-
yses. Anderson et al. focused on our use of the “trim and fill”
procedure. As Anderson et al. indicated, the trim and fill is not
without imperfections. However, they failed to mention that we
actually used a wide range of publication bias analyses and looked
for concordance between these analyses. Indeed, we found a
general agreement between publication bias tests for studies of
aggressive behavior and VVGs. The trim and fill procedure can
function as an estimate for the degree of publication bias, partic-
ularly when there are sound theoretical reasons to expect publica-
tion bias. As Egger and Smith (1998) indicated, publication bias is
quite common. Ioannidis (2005) observed that bias is particularly
prevalent in new or “hot” research fields, as that on VVGs cer-
tainly is. Other scholars have expressed concern that VVG studies
have become politicized, which increases the risk for bias (e.g.,
Grimes, Anderson, & Bergen, 2008; Kutner & Olson, 2008;
Sherry, 2007). We find suggestions that VVG studies are immune
to publication bias effects to be naive. However, the reader need
not take our word for it. Publication bias appears evident in a
previous meta-analysis by this research team (Anderson, 2004). Of
the published studies (n = 32) in this analysis, 19 were supportive
of the causal view, nine were inconclusive, and four were nonsup-
portive. Of the unpublished studies (n = 11), one was supportive,
one was inconclusive, and nine were nonsupportive. The differ-
ence between published and unpublished studies is obvious.
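That difference can also be checked formally. The sketch below is our own illustration, not a test reported by Anderson (2004) or by Anderson et al. (2010); it simply applies a chi-square test of independence to the study tallies just described (with cell counts this small an exact test may be preferable, but the divergence between the published and unpublished literatures is the point).

    # Illustrative sketch: do published and unpublished studies in Anderson (2004)
    # reach different conclusions? Counts are the tallies reported above
    # (supportive / inconclusive / nonsupportive); the test itself is ours.
    from scipy.stats import chi2_contingency

    counts = [
        [19, 9, 4],  # published studies (n = 32)
        [1, 1, 9],   # unpublished studies (n = 11)
    ]

    chi2, p, dof, expected = chi2_contingency(counts)
    print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.4f}")
    # A small p value is consistent with publication bias: the published and
    # unpublished studies are not drawing from the same distribution of outcomes.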
Best Practices or Best of the Worst?
Some of the suggestions offered by Anderson et al. (2010)
concerning “best practices” appear reasonable, but we express
concern that they did not raise the issue of unstandardized aggres-
sion measures used in many VVG studies. A measure of aggres-
sion (or any other construct) is unstandardized when the method
for calculating outcomes scores is not clearly set; this allows
different scholars to calculate outcomes in very different ways (or
the same author may calculate outcomes differently between stud-
ies). By contrast, a measure may be considered standardized when
measurements taken from it (as well as its administration) are “set
in stone” and do not vary across studies or across researchers (the
aggression score developed from the Child Behavior Checklist is
an example of a standardized aggression measure). The benefit of
standardized measures is that researchers must accept the out-
comes from these measures whether or not the outcomes are
favorable to their hypothesis. Unstandardized assessments poten-
tially allow researchers to select from among multiple outcomes
those which best fit their a priori hypotheses. For instance, the
Anderson et al. research group has assessed the “noise blast”
aggression measure differently across multiple studies, with little
explanation as to why (for a discussion, see Ferguson, 2007). Our
previous analyses have suggested that unstandardized measures
tend to inflate effect size estimates, as noted, potentially because
researchers may ignore the “worst” outcomes and select the “best”
outcomes to interpret (we argue that this is human nature and do
not mean to imply any purposeful unethical behavior). Standard-
ization is a basic tenet of psychometrics; thus, it is unfortunate that
it has been so ignored in this research field. Unfortunately, the best
practices-nominated studies are populated with manuscripts in
which unstandardized assessments were used. This fact, rather
than the quality of those reports, probably explains why the effect
sizes seen for this group of papers were higher than those for other
papers.
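The inflation mechanism we describe can be demonstrated with a short simulation. The sketch below is purely illustrative (arbitrary sample sizes and scoring correlations, not a reanalysis of any study in either meta-analysis): even when the true effect is exactly zero, scoring an unstandardized task several plausible ways and reporting whichever scoring best fits the hypothesis yields a positive average "effect."

    # Illustrative sketch: selecting the "best" of several scorings of an
    # unstandardized outcome inflates effect sizes even under a true null.
    # All parameters below are arbitrary choices for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n_participants, n_scorings, n_studies = 80, 6, 2000
    picked_rs = []

    for _ in range(n_studies):
        exposure = rng.normal(size=n_participants)  # e.g., a VVG exposure score
        # Several correlated scorings of the same raw task output; the true
        # exposure-outcome correlation is zero by construction.
        base = rng.normal(size=n_participants)
        scorings = base[:, None] + rng.normal(size=(n_participants, n_scorings))
        rs = [np.corrcoef(exposure, scorings[:, j])[0, 1] for j in range(n_scorings)]
        picked_rs.append(max(rs))  # report only the most favorable scoring

    print(f"true r = .00; mean reported r after selection = {np.mean(picked_rs):.2f}")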
We also find that Anderson et al. (2010) did not rigidly apply
their own standards. For instance, they nominated at least one
paper (Konijn, Nije Bijvank, & Bushman, 2007) as best practices,
although it included several games with violent content descriptors
(The Sims 2, Tony Hawk’s Underground 2, Final Fantasy) in its
nonviolent game condition, thus making its results uninterpretable.
Panee and Ballard (2002) were nominated as best practices even
though all participants played the same game. Similarly, Anderson
et al. seem particularly disinclined toward Williams and Skoric
(2005), despite the fact that this study does indeed (contrary to
Anderson et al.’s assertions) include a measure of verbal aggres-
sion at least as ecologically valid as, if not more so than, that of many of those studies nominated as best practices.
Anderson et al. (2010) included several studies from which it is
unclear how effect size estimates meaningful to the basic hypoth-
eses were calculated. For example, Hagell and Newburn (1994)
provided only descriptive percentiles and no analyses from which
a meaningful effect size estimate could be calculated. Hind (1995)
reported only the degree to which offender and nonoffender youths
liked different kinds of games, not their reaction to playing these
games or any correlation between play and behavior. Kestenbaum
and Weinstein (1985) reported p values, but no other statistics, and
these for some outcomes but not all. In the Panee and Ballard
(2002) study, all participants played the same violent game without
any variation in game violence content. Silvern and Williamson
(1987) reported only a pretest/posttest design in which all children
played the same video game (Space Invaders). We do not believe
that these studies (or many others) provide meaningful information
related to VVGs and youth violence.
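For readers unfamiliar with why such reporting is unusable, meta-analytic effect sizes of the kind at issue here are typically recovered from test statistics and sample sizes; a p value alone, or descriptive percentages with no accompanying test, does not identify r. The conversions below are standard formulas, shown with placeholder numbers rather than values from any of the studies named above.

    # Sketch: standard conversions used to recover r for meta-analysis.
    # Inputs are hypothetical placeholders, not statistics from the studies above.
    import math

    def r_from_t(t, df):
        """Effect size r from an independent-samples t statistic."""
        return t / math.sqrt(t**2 + df)

    def r_from_chi2(chi2, n):
        """Effect size r (phi) from a 1-df chi-square statistic."""
        return math.sqrt(chi2 / n)

    print(round(r_from_t(2.1, 58), 3))      # hypothetical t(58) = 2.1
    print(round(r_from_chi2(4.0, 120), 3))  # hypothetical chi-square(1) = 4.0, N = 120
    # A study reporting only "p < .05" for some outcomes, or only percentages,
    # leaves t, chi-square, df, and often N unknown, so r cannot be computed.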
Is Psychology Inventing a Phantom
Youth Violence Crisis?
Anderson et al. (2010) neglected to report on one very basic
piece of information. Namely, as VVGs have become more pop-
ular in the United States and elsewhere, violent crime rates among
youths and adults in the United States, Canada, United Kingdom,
Japan, and most other industrialized nations have plummeted to
lows not seen since the 1960s. Figure 1 (adapted from Ferguson,
2008) presents this information for youth violence rates in the
United States. Similar patterns are seen for other nations. Even the
Anderson et al. group appears to have acknowledged that this kind
of data is important to consider: “Nonetheless, dramatic reductions
in media violence exposure of children should, over a several year
period, lead to detectible reductions in real world aggression by
those children. This would further provide evidence for a strong
media violence link to aggression” (Barlett & Anderson, 2009, p.
10). In fact, we are seeing the opposite relationship, in which
dramatic increases in VVGs are correlated with dramatic decreases
in youth violence. The correlation coefficient for these data is r = −.95, a near-perfect correlation in the wrong direction. We agree with Barlett and Anderson (2009) that this kind of evidence is strong. Barlett and Anderson, of course, cannot have it both ways, with crime data important only so long as they are consistent with Barlett and Anderson's beliefs.

[Figure 1. Trends in youth violence and video game sales in the United States. Video game data were obtained from the NPD Group, Inc./Retail Tracking Service. Youth violence data were obtained from Childstats.gov.]
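The r = −.95 figure is the correlation between the two yearly series plotted in Figure 1 (we assume a simple Pearson coefficient here). A minimal sketch of the computation follows; the series below are hypothetical placeholders meant only to show the calculation, not the actual NPD sales or Childstats.gov violence data.

    # Sketch: correlating two yearly time series, as in Figure 1.
    # The numbers are hypothetical placeholders, not the NPD video game sales
    # or Childstats.gov youth violence series.
    import numpy as np

    game_sales = np.array([3.2, 4.1, 5.0, 6.3, 7.8, 9.5, 11.7])            # rising series
    youth_violence = np.array([52.0, 47.0, 40.0, 33.0, 27.0, 22.0, 18.0])  # falling series

    r = np.corrcoef(game_sales, youth_violence)[0, 1]
    print(f"r = {r:.2f}")  # strongly negative when one series rises as the other falls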
Last, Anderson et al. (2010) suggested that the r = .15 relationship is too conservative and, nonetheless, as strong as that seen in other areas of criminology. The r = .15 estimate includes only basic controls; therefore, this estimate is probably too liberal. Our own research suggests that when other risk factors (e.g., depression, peers, family) are controlled, video game effects drop to near zero (Ferguson et al., 2009). Indeed, focusing on bivariate corre-
lations is problematic, as they overestimate relationships due to
third variables. Males both play more VVGs and are more aggres-
sive than females. Thus, aggression will tend to correlate with
VVGs and with any other male-dominated activity, such as grow-
ing beards, dating women, and wearing pants rather than dresses.
Anderson et al. noticed this themselves. It is obvious that control-
ling other important risk factors related to personality, family, and
even genes (if one could) would further reduce the unique predic-
tive value of VVGs. Anderson et al. ignored this third variable
effect, although it has been well known for some time. It is also not
true that the r = .15 estimate, even if we were to believe that it is accurate, is on par with other criminological effect size estimates. Table 1 compiles a list of effect size estimates from criminology (Ferguson, 2009). As can be seen, compared to other criminological effects, the VVG connection is rather weak. Furthermore, Anderson et al. claimed that small effects may accumulate over time yet found the weakest effects from longitudinal studies, in contradiction to this claim. It should be noted that the 2.25% coefficient of determination (i.e., .15² = .0225) reflects a change in nonpathological aggression to the tune of 2.25% within individuals; it does not mean that 2.25% of normal children became antisocial or any other such alarmist interpretation of this effect. We observe that Anderson et al. themselves acknowledged that this effect is for nonserious aggression (Footnote 12) due to the limitations of many of the measures included in this analysis.

Table 1
Effect Sizes, Criminal Justice Research

Relationship                                                        Effect size (r)
Video game sales and youth violence rates in the United States         −.95
Genetic influences on antisocial behavior                               .75
Self-control and perceptions of criminal opportunity on crime           .58
Protective effect of community institutions on neighborhood crime       .39
VVG playing on visuospatial cognitive ability                           .36
Firearms ownership on crime                                             .35
Incarceration use as a deterrent on crime                               .33
Aggressive personality and violent crime                                .25
Poverty on crime                                                        .25
Childhood physical abuse and adult violent crime                        .22
Child witnessing domestic violence on future aggression                 .18
Video game violence and nonserious aggression (a)                       .15
Television violence on violent crime                                    .10
VVG playing on serious aggressive behavior (b)                          .04

Note. VVG = violent video game.
(a) Calculated by Anderson et al. (2010). All other effects compiled by Ferguson (2009), where original sources are reported.
(b) Estimate corrected for publication bias in published studies.
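As a concrete illustration of the third-variable point, the standard first-order partial correlation formula shows how a bivariate VVG-aggression correlation shrinks once a shared correlate such as participant sex is held constant. The sketch below uses r = .15 for the bivariate VVG-aggression estimate, as in both analyses, but the two sex-related correlations are hypothetical values chosen for illustration, not estimates from Anderson et al. (2010) or from our own work.

    # Sketch: first-order partial correlation and the coefficient of determination.
    # r_vvg_agg is the r = .15 discussed above; the two sex-related correlations
    # are hypothetical illustrations, not estimates from any cited study.
    import math

    def partial_r(r_xy, r_xz, r_yz):
        """Correlation between x and y controlling for z."""
        return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

    r_vvg_agg = 0.15  # bivariate VVG-aggression correlation
    r_vvg_sex = 0.30  # hypothetical: males play more VVGs
    r_agg_sex = 0.30  # hypothetical: males score higher on aggression measures

    print(f"partial r controlling for sex = {partial_r(r_vvg_agg, r_vvg_sex, r_agg_sex):.3f}")
    print(f"variance explained by r = .15: {0.15 ** 2:.4f}")  # .0225, i.e., 2.25%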
In conclusion, we believe that Anderson et al. (2010) are sincere
in their concerns for children and beliefs about VVGs. However,
their current meta-analysis contains numerous flaws, all of which
converge on overestimating and overinterpreting the influence of
VVGs on aggression. Nonetheless, they find only weak effects.
Given that discussions of VVGs tend to inform public policy, both
scientists and policymakers need to consider whether these results
will get the “bang for their buck” out of any forthcoming policy
recommendations. There are real risks that the exaggerated focus
on VVGs, fueled by some scientists, distracts society from much
more important causes of aggression, including poverty, peer
influences, depression, family violence, and Gene × Environment
interactions. Although it is certainly true that few researchers
suggest that VVGs are the sole cause of violence, this does not
mean they cannot be wrong about VVGs having any meaningful
effect at all. Psychology, too often, has lost its ability to put the
weak (if any) effects found for VVGs on aggression into a proper
perspective. In doing so, it does more to misinform than inform
public debates on this issue.
References
Anderson, C. (2004). An update on the effects of playing violent video
games. Journal of Adolescence, 27, 113–122.
Anderson, C. A., Shibuya, A., Ihori, N., Swing, E. L., Bushman, B. J.,
Sakamoto, A., . . . Saleem, M. (2010). Violent video game effects on
aggression, empathy, and prosocial behavior in Eastern and Western
countries. Psychological Bulletin, 136, xxx–xxx.
Barlett, C. P., & Anderson, C. A. (2009). Violent video games and public
policy. Retrieved from http://www.psychology.iastate.edu/faculty/caa/
abstracts/2005–2009/09BA2english.pdf
Barnett, J., Coulson, M., & Foreman, N. (2008, April). The WoW! factor:
Reduced levels of anger after violent on-line play. Poster session pre-
sented at the annual meeting of the British Psychological Society,
Dublin, Ireland.
Baumeister, R., DeWall, C., & Vohs, K. (2009). Social rejection, control,
numbness and emotion: How not to be fooled by Gerber and Wheeler
(2009). Perspectives on Psychological Science, 4, 489–493.
Colwell, J., & Kato, M. (2003). Investigation of the relationship between
social isolation, self-esteem, aggression and computer game play in
Japanese adolescents. Asian Journal of Social Psychology, 6, 149–158.
Cook, D., Guyatt, G., Ryan, G., Clifton, J., Buckingham, L., Willan, A., . . .
Oxman, A. (1993, June 2). Should unpublished data be included in
meta-analyses? Current convictions and controversies. JAMA, 269,
2749–2753.
Egger, M., & Smith, G. (1998, January 3). Meta-analysis: Bias in location
and selection of studies. British Medical Journal, 315, 61–66. Retrieved from http://www.bmj.com/archive/7124/7124ed2.htm
Ferguson, C. J. (2007). Evidence for publication bias in video game
violence effects literature: A meta-analytic review. Aggression and
Violent Behavior, 12, 470–482.
Ferguson, C. J. (2008). The school shooting/violent video game link:
Causal link or moral panic? Journal of Investigative Psychology and
Offender Profiling, 5, 25–37.
Ferguson, C. J. (Ed.). (2009). Violent crime: Clinical and social implica-
tions. Thousand Oaks, CA: Sage.
Ferguson, C. J., & Kilburn, J. (2009). The public health risks of media
violence: A meta-analytic review. Journal of Pediatrics, 154, 759–763.
Ferguson, C. J., & Rueda, S. M. (in press). The Hitman study: Violent
video game exposure effects on aggressive behavior, hostile feelings and
depression. European Psychologist.
Ferguson, C. J., San Miguel, C., & Hartley, R. D. (2009). A multivariate
analysis of youth violence and aggression: The influence of family,
peers, depression and media violence. Journal of Pediatrics, 155, 904–
908.
Grimes, T., Anderson, J., & Bergen, L. (2008). Media violence and
aggression: Science and ideology. Thousand Oaks, CA: Sage.
Hagell, A., & Newburn, T. (1994). Young offenders and the media: Viewing habits and preferences. London, England: Policy Studies Institute.
Hind, P. A. (1995). A study of reported satisfaction with differentially
aggressive computer games amongst incarcerated young offenders. Is-
sues in Criminological and Legal Psychology, 22, 28–36.
Ioannidis, J. P. (2005). Why most published research findings are false.
PLoS Medicine, 2, e124. Retrieved from http://www.plosmedicine.org/
article/info:doi/10.1371/journal.pmed.0020124
Kestenbaum, G. I., & Weinstein, L. (1985). Personality, psychopathology
and developmental issues in male adolescent video game use. Journal of
the American Academy of Child Psychiatry, 24, 329–333.
Konijn, E. A., Nije Bijvank, M., & Bushman, B. J. (2007). I wish I were
a warrior: The role of wishful identification in effects of violent video
games on aggression in adolescent boys. Developmental Psychology, 43,
1038–1044.
Kutner, L., & Olson, C. (2008). Grand theft childhood: The surprising
truth about violent video games and what parents can do. New York,
NY: Simon & Schuster.
Olson, C., Kutner, L., Baer, L., Beresin, E., Warner, D., & Nicholi, A.
(2009). M-rated video games and aggressive or problem behavior among
young adolescents. Applied Developmental Science, 13, 188–198.
Paik, H., & Comstock, G. (1994). The effects of television violence on
antisocial behavior: A meta-analysis. Communication Research, 21,
516–539.
Panee, C. D., & Ballard, M. E. (2002). High versus low aggressive priming
during video game training: Effects on violent action during game play,
hostility, heart rate, and blood pressure. Journal of Applied Social
Psychology, 32, 2458–2474.
Przybylski, A., Weinstein, N., Ryan, R., & Rigby, C. S. (2009). Having to
versus wanting to play: Background and consequences of harmonious
versus obsessive engagement in video games. CyberPsychology & Be-
havior, 12, 485–492.
Ryan, R., Rigby, C. S., & Przybylski, A. (2006). The motivational pull of
video games: A self-determination theory approach. Motivation and
Emotion, 30, 344–360.
Savage, J., & Yancey, C. (2008). The effects of media violence exposure
on criminal aggression: A meta-analysis. Criminal Justice and Behavior,
35, 1123–1136.
Sherry, J. (2001). The effects of violent video games on aggression: A
meta-analysis. Human Communication Research, 27, 409–431.
Sherry, J. (2007). Violent video games and aggression: Why can’t we find
links? In R. Preiss, B. Gayle, N. Burrell, M. Allen, & J. Bryant (Eds.),
Mass media effects research: Advances through meta-analysis (pp.
231–248). Mahwah, NJ: Erlbaum.
Silvern, S. B., & Williamson, P. A. (1987). The effects of video game play
on young children’s aggression, fantasy and prosocial behavior. Journal
of Applied Developmental Psychology, 8, 453–462.
Smith, G., & Egger, M. (1998, January 17). Meta-analysis: Unresolved
issues and future developments. British Medical Journal, 316, 221–225.
Retrieved from http://www.bmj.com/archive/7126/7126ed8.htm
Unsworth, G., Devilly, G., & Ward, T. (2007). The effect of playing violent
videogames on adolescents: Should parents be quaking in their boots?
Psychology, Crime & Law, 13, 383–394.
Williams, D., & Skoric, M. (2005). Internet fantasy violence: A test of
aggression in an online game. Communication Monographs, 72, 217–
233.
Received October 30, 2009
Revision received November 20, 2009
Accepted November 24, 2009