Robert Rosenthal’s research while affiliated with University of California, Riverside and other places


Publications (117)


Intervening in teachers' expectations: A random effects meta-analytic approach to examining the effectiveness of an intervention
  • Article

August 2016 · 2,373 Reads · 44 Citations · Learning and Individual Differences · Robert Rosenthal

Experimental studies within the education field are rare. The current study used a random effects meta-analytic approach to examine the effectiveness of a teacher expectation intervention on student mathematics achievement across different schools, grade levels, socioeconomic levels, ethnicities, and genders. Teachers were randomly assigned to intervention and control groups and, through professional development workshops, were trained in the practices of teachers who hold high expectations for all students. The intervention related to three key areas: grouping and learning experiences, class climate, and goal setting. No matter which grouping variables were employed in the random effects meta-analyses, effect sizes in mathematics achievement were large, ranging from r = 0.61 to 0.87, for students whose teachers were in the intervention group compared with students whose teachers were in the control group. The usefulness of the instructional strategies that formed the basis of the intervention is discussed in light of the relevant literature, and the educational implications are presented.
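The pooling step the abstract describes can be sketched roughly as follows. This is a minimal illustration of a random effects meta-analysis of correlation effect sizes, not the authors' actual analysis: the per-group r values and sample sizes below are invented, and the abstract reports only that the observed effects ranged from r = 0.61 to 0.87.

```python
# Minimal sketch: random-effects pooling of correlation effect sizes (r) via
# Fisher's z, using the DerSimonian-Laird estimator of between-study variance.
# The r values and sample sizes are hypothetical.
import math

def random_effects_pool(rs, ns):
    """Pool correlations across k groups/studies under a random-effects model."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]    # Fisher z transform
    vs = [1.0 / (n - 3) for n in ns]                         # within-group variance of z
    ws = [1.0 / v for v in vs]                               # fixed-effect weights
    z_fixed = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_fixed) ** 2 for w, z in zip(ws, zs))  # Cochran's Q
    df = len(rs) - 1
    c = sum(ws) - sum(w ** 2 for w in ws) / sum(ws)
    tau2 = max(0.0, (q - df) / c)                            # DL between-group variance
    ws_star = [1.0 / (v + tau2) for v in vs]                 # random-effects weights
    z_re = sum(w * z for w, z in zip(ws_star, zs)) / sum(ws_star)
    se = math.sqrt(1.0 / sum(ws_star))
    r_pooled = math.tanh(z_re)                               # back-transform to r
    ci = (math.tanh(z_re - 1.96 * se), math.tanh(z_re + 1.96 * se))
    return r_pooled, ci

# Hypothetical per-school effects, for illustration only
print(random_effects_pool([0.61, 0.72, 0.87, 0.68], [45, 60, 38, 52]))
```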


Reflections on the origins of meta-analysis

February 2015 · 38 Reads · 3 Citations · Research Synthesis Methods

In this interview, we discuss my early uses of meta-analytic procedures, first to combine p-values and then to combine effect sizes as well. My interest in quantifying the magnitude and the statistical significance of the effect of interpersonal expectations probably grew out of the following: (1) a long-held interest in the concept of replication and (2) a series of controversies over the very existence of any effect of interpersonal expectations held, for example, by psychological experimenters, classroom teachers, and leaders of various organizations. Copyright © 2015 John Wiley & Sons, Ltd.
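A minimal sketch of the two procedures the interview refers to: combining p-values and combining effect sizes. The Stouffer Z method shown here is one way of combining p-values that Rosenthal wrote about extensively, but the specific p-values, correlations, and sample sizes are invented for illustration.

```python
# Sketch: combine study p-values via Stouffer's Z, and combine correlation
# effect sizes via an n-weighted mean on the Fisher z scale. All inputs are
# hypothetical.
import numpy as np
from scipy import stats

p_one_tailed = np.array([0.04, 0.20, 0.008])       # hypothetical study p-values
z = stats.norm.isf(p_one_tailed)                   # convert each p to a Z score
z_combined = z.sum() / np.sqrt(len(z))             # Stouffer's combined Z
p_combined = stats.norm.sf(z_combined)             # combined one-tailed p

rs = np.array([0.25, 0.10, 0.40])                  # hypothetical effect sizes
ns = np.array([40, 80, 30])
zr = np.arctanh(rs)                                # Fisher z transform of each r
mean_r = np.tanh(np.average(zr, weights=ns - 3))   # weighted mean, back to r

print(f"combined Z = {z_combined:.2f}, p = {p_combined:.4f}, mean r = {mean_r:.2f}")
```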



Table 1. A 2 × 2 agreement matrix showing 57% rater agreement; the associated r value in this example was −.27, statistically significant in the opposite direction.
A novel rater agreement methodology for language transcriptions: Evidence from a nonhuman speaker
  • Article
  • Full-text available

July 2014 · 105 Reads

The ability to measure agreement between two independent observers is vital to any observational study. We use a unique situation, the calculation of inter-rater reliability for transcriptions of a parrot's speech, to present a novel method of dealing with inter-rater reliability which we believe can be applied to situations in which speech from human subjects may be difficult to transcribe. Challenges encountered included (1) a sparse original agreement matrix which yielded an omnibus measure of inter-rater reliability, (2) "lopsided" 2 × 2 matrices (i.e. subsets) from the overall matrix and (3) categories used by the transcribers which could not be pre-determined. Our novel approach involved calculating reliability on two levels: that of the corpus and that of the above-mentioned smaller subsets of data. Specifically, the technique included the "reverse engineering" of categories, the use of a "null" category when one rater observed a behavior and the other did not, and the use of Fisher's Exact Test to calculate r-equivalent for the smaller paired subset comparisons. We hope this technique will be useful to those working in similar situations where speech may be difficult to transcribe, such as with small children.
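The subset-level computation the abstract describes (a one-tailed p from Fisher's Exact Test converted to r-equivalent) can be sketched as follows. The 2 × 2 table is hypothetical and does not reproduce any table from the paper; the conversion assumes the r-equivalent definition of Rosenthal and Rubin (2003), in which the one-tailed p and sample size imply a t on N − 2 degrees of freedom.

```python
# Sketch: one-tailed p from Fisher's exact test on a 2 x 2 agreement subset,
# converted to r-equivalent. The table below is invented for illustration.
import math
from scipy import stats

def r_equivalent(p_one_tailed, n):
    """r implied by a one-tailed p-value at sample size n (df = n - 2)."""
    t = stats.t.isf(p_one_tailed, df=n - 2)   # t with upper-tail probability p
    return t / math.sqrt(t**2 + (n - 2))

table = [[8, 1],      # hypothetical 2 x 2 subset: rater A categories vs. rater B
         [2, 5]]
n = sum(sum(row) for row in table)
_, p = stats.fisher_exact(table, alternative="greater")  # one-tailed exact p
print(f"one-tailed p = {p:.4f}, r-equivalent = {r_equivalent(p, n):.2f}")
```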



Table 1. Examples of identical 98% and 50% agreement showing both negative and positive reliability.
Figure. Observations of each behaviour made by two researchers, Louis and Jane, and their percentage agreement and omnibus kappa.
Can you believe my eyes? The importance of interobserver reliability statistics in observations of animal behaviour

December 2009 · 4,659 Reads · 144 Citations · Animal Behaviour

Interobserver reliability is a vital part of all psychological studies that use an observational methodology to address questions of behaviour. This article provides a brief review of some important points for clarity's sake, as we believe the current treatment of these topics has become more a case of theory than of practice. In general, there are two types of observer reliability: within-observer reliability and between-observer reliability. In the event of an experiment in which only one observer is feasible, it is possible, and important, to assess reliability and estimate the likelihood of bias by briefly using a second observer to conduct interobserver reliability trials for a small, random portion of the data. One of the most common ways of measuring reliability between two observers without the problems inherent in percentage agreement is by using Cohen's kappa, which takes into account the chance agreement of two observers. The focused kappa greatly expands the researcher's capability for analysis, as it can be tailored to the question of interest on a case-by-case basis. The scientific method provides a structure that emphasizes the importance of neutrality on the part of the scientist, and it is likely valid to claim that many scientists pride themselves on their ability to remain unbiased. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
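A minimal sketch of the chance-corrected agreement statistic the article recommends over raw percentage agreement. The two coding sequences are invented (the rater names echo the figure caption above), and the sketch implements plain Cohen's kappa rather than the focused kappa variants the article discusses.

```python
# Sketch: Cohen's kappa for two observers coding the same trials into nominal
# categories. The sequences are hypothetical.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum(counts_a[c] * counts_b[c]
                   for c in set(counts_a) | set(counts_b)) / (n * n)
    return (observed - expected) / (1 - expected)

louis = ["groom", "feed", "rest", "feed", "groom", "rest", "feed", "rest"]
jane  = ["groom", "feed", "rest", "groom", "groom", "rest", "feed", "feed"]
print(f"kappa = {cohens_kappa(louis, jane):.2f}")
```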


"Effect sizes for experimenting psychologists": Correction to Rosnow and Rosenthal (2003)

June 2009 · 129 Reads · 6 Citations · Canadian Journal of Experimental Psychology

Reports an error in "Effect sizes for experimenting psychologists" by Ralph L. Rosnow and Robert Rosenthal (Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale, 2003[Sep], Vol 57[3], 221-237). A portion of the note to Table 1 (on page 222) was incorrect. The second sentence of the note should read as follows: Fisher's z_r is the log transformation of r, that is, ½ log_e[(1 + r)/(1 − r)]. (The following abstract of the original article appeared in record 2003-08374-009.) This article describes three families of effect size estimators and their use in situations of general and specific interest to experimenting psychologists. The situations discussed include both between-group and within-group (repeated measures) designs. Also described is the counternull statistic, which is useful in preventing common errors of interpretation in null hypothesis significance testing. The emphasis is on correlation (r-type) effect size indicators, but a wide variety of difference-type and ratio-type effect size estimators are also described. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
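Both quantities named in this record can be written down compactly. A minimal sketch with an invented r value: the Fisher z_r transform from the corrected Table 1 note, and the counternull effect size, computed here on the z scale under the usual assumptions of a symmetric sampling distribution and a null value of zero.

```python
# Sketch: Fisher's z_r transform and the counternull effect size for r.
# The example correlation is hypothetical.
import math

def fisher_z(r):
    """Fisher's z_r = 1/2 * ln[(1 + r) / (1 - r)]."""
    return 0.5 * math.log((1 + r) / (1 - r))

def counternull_r(r_obtained, r_null=0.0):
    """Counternull effect size for r, computed on the Fisher z scale."""
    z_counternull = 2 * fisher_z(r_obtained) - fisher_z(r_null)
    return math.tanh(z_counternull)

r = 0.30
print(f"z_r = {fisher_z(r):.3f}, counternull r = {counternull_r(r):.3f}")
```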


Situational Determinants of Volunteering

May 2009 · 14 Reads

This book is really three-books-in-one, dealing with the topic of artifacts in behavioral research. It is about the problems of experimenter effects which have not been solved. Experimenters still differ in the ways in which they see, interpret, and manipulate their data. Experimenters still obtain different responses from research participants (human or infrahuman) as a function of experimenters' states and traits of biosocial, psychosocial, and situational origins. Experimenters' expectations still serve too often as self-fulfilling prophecies, a problem that biomedical researchers have acknowledged and guarded against better than have behavioral researchers; e.g., many biomedical studies would be considered of unpublishable quality had their experimenters not been blind to experimental condition. Problems of participant or subject effects have also not been solved. Researchers usually still draw research samples from a population of volunteers that differ along many dimensions from those not finding their way into our research. Research participants are still often suspicious of experimenters' intent, try to figure out what experimenters are after, and are concerned about what the experimenter thinks of them. That portion of the complexity of human behavior that can be attributed to the social nature of behavioral research can be conceptualized as a set of artifacts to be isolated, measured, considered, and, sometimes, eliminated. This book examines the methodological and substantive implications of sources of artifacts in behavioral research and strategies for improving this situation.


A Preface to Three Prefaces

May 2009 · 22 Reads · 114 Citations

This chapter is part of the same three-books-in-one volume on artifacts in behavioral research; the book-level abstract is identical to the one given under "Situational Determinants of Volunteering" above.


An Integrative Overview

May 2009 · 8 Reads

This chapter is also part of the same volume; the book-level abstract is identical to the one given under "Situational Determinants of Volunteering" above.


Citations (88)


... These t's correspond to d's ranging from 0.78 to 1.10, which are all considered medium to large effect sizes. We did not use an omnibus F test since a priori focused t tests for 3 variables are better than an omnibus unfocused F test, as argued by Rosnow & Rosenthal (1992). ...

Reference:

An Empirical Study of Gauging Political Leadership: Comparing Trump, Putin and Zelenskyy
Focused tests of significance and effect size estimation in counseling psychology.
  • Citing Chapter
  • January 1992

... The dimensional effect of the detected differences was quantified by calculating Rosenthal's rank correlation (r), providing an estimate of the practical magnitude of the differences between groups. The interpretation of r follows these thresholds: 0.10 ≤ r < 0.24 is considered a small effect, 0.24 ≤ r < 0.37 a medium effect, and r ≥ 0.37 a large effect [40]. The analysis was developed in R [41] and the significance level was set to 0.05. ... (A sketch of this r computation follows the citation entry below.)

Contrasts and Effect Sizes in Behavioral Research: A Correlational Approach
  • Citing Book
  • December 1999
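The excerpt above computes an r-type effect size from a rank-based group comparison and interprets it against small/medium/large thresholds. A minimal sketch, assuming the common convention r = Z / sqrt(N); the two samples are invented, and recovering |Z| from scipy's two-sided p is just one convenient route.

```python
# Sketch: effect size r from a Mann-Whitney comparison, r = Z / sqrt(N),
# with the cut-offs quoted in the excerpt. All data are hypothetical.
import numpy as np
from scipy import stats

a = np.array([3.1, 2.8, 3.6, 4.0, 2.5, 3.3])   # hypothetical group A scores
b = np.array([4.2, 3.9, 4.8, 4.4, 3.7, 4.6])   # hypothetical group B scores

u, p = stats.mannwhitneyu(a, b, alternative="two-sided", method="asymptotic")
z = stats.norm.isf(p / 2)                       # |Z| recovered from the two-tailed p
r = z / np.sqrt(len(a) + len(b))                # r = Z / sqrt(N)

label = ("below small" if r < 0.10 else
         "small" if r < 0.24 else
         "medium" if r < 0.37 else "large")
print(f"r = {r:.2f} ({label} effect)")
```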

... Even when it comes to 'measuring' career prospects, there may be parallels (to citation-based numbers such as h-indices and JIFs), at least historically in disciplines such as psychology. As Rosnow and Rosenthal (1989) point out, "It may not be an exaggeration to say that for many PhD students, for whom the .05 alpha has acquired almost an ontological mystique, it can mean joy, a doctoral degree, and a tenure-track position at a major university if their dissertation p is less than .05." ...

Statistical procedures and the justification of knowledge in psychological science.
  • Citing Chapter
  • January 1992

... In recent decades, research has consistently shown that teacher expectations are an important element that affects students' learning outcomes (Friedrich et al., 2015;Gershenson et al., 2015;Li et al., 2023;Lorenz 2018;Meissel et al., 2017;Rubie-Davies & Rosenthal, 2016;Schenke et al, 2017;Timmermans et al., 2021;Tobisch & Dresel, 2017;Wang et al., 2019;Wang et al., 2021;Westphal et al., 2016). Teacher expectations seem to develop in response to certain characteristics of the students and of the teachers themselves (Ross, 1998). ...

Intervening in teachers' expectations: A random effects meta-analytic approach to examining the effectiveness of an intervention
  • Citing Article
  • August 2016

Learning and Individual Differences

... They cannot go back and make these decisions themselves because they are no longer blind to each other's decisions. We know from Hiller, Rosenthal, Bornstein, Berry, and Brunell-Neuleib's (1999) meta-analytic review of the Rorschach that the complex decision making required in construct validity meta-analyses can result in significantly more disagreements between experts than one might assume. ...

A comprehensive meta-analysis of Rorschach and MMPI validity
  • Citing Article
  • January 1999

J.B. Hiller · R. Rosenthal · R.F. Bornstein · [...] · S. Brunell-Neuleib

... 66,67 These studies were useful in quantifying the surgical benefits and residual deficits of the procedures by determining facial expression through facial landmarks of the mouth or in conjunction with the eyes in videos or images derived from them. [64][65][66]68 AU detection: With human facial expressions catalogued into AUs in the Facial Action Coding System, 68 it is possible to more objectively quantify the changes and extent of facial muscle movement. 69 The nature of AUs allows for the objective ...

The New Handbook of Methods in Nonverbal Behavior Research
  • Citing Book
  • March 2008

... On the other hand, we 'exploited' (to use a McGuire term) the knowledge that people involved with or interested in an area (e.g. suicidality) would be more likely to volunteer to participate in research on the subject (Harris et al., 2009;Rosenthal and Rosnow, 2009). That increased participation allowed us to better examine study factors through improved statistical power (DeVellis, 2012;Rothman et al., 2012). ...

Empirical Research on Voluntarism as an Artifact-Independent Variable
  • Citing Chapter
  • May 2009

... The right parietal cortex is involved in visuospatial processing (52). For instance, when the right parietal cortex was suppressed, participants were unable to perform spatial tasks (53). Interestingly, when males perform spatial tasks, their bilateral hemispheres are involved, whereas females tend to rely on their right hemispheres (54). ...

A Preface to Three Prefaces
  • Citing Article
  • May 2009

... Transparency was the between-subjects factor and concurrent task the within-subjects factor. Main effects of transparency, or interactions between transparency and concurrent task demands, were followed up with planned orthogonal contrasts that directly paralleled our hypotheses (Rosenthal and Rosnow, 1985) by comparing high to medium transparency, and high to low transparency, when the concurrent task was present and absent. High transparency served as the benchmark condition, and as such we did not compare the low and medium transparency conditions. ... (A sketch of a focused contrast follows the citation entry below.)

Contrast Analysis: Focused Comparisons in the Analysis of Variance.
  • Citing Article
  • December 1987
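The excerpt above follows up omnibus effects with planned contrasts. A minimal sketch of a single focused contrast across three group means, in the spirit of the cited book but with invented data and equal n per group; the contrast weights must sum to zero.

```python
# Sketch: one planned contrast (high vs. low) across three groups, with a
# t test on the contrast and an r-type effect size. All data are hypothetical.
import numpy as np
from scipy import stats

groups = [np.array([4.1, 3.8, 4.5, 4.0, 3.9]),   # low
          np.array([4.4, 4.6, 4.2, 4.8, 4.3]),   # medium
          np.array([5.2, 5.0, 5.6, 4.9, 5.3])]   # high
weights = np.array([-1.0, 0.0, 1.0])              # contrast weights, sum to zero

n = len(groups[0])
means = np.array([g.mean() for g in groups])
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_error = sum(len(g) - 1 for g in groups)
ms_error = ss_within / df_error                    # pooled within-group variance

L = (weights * means).sum()                        # contrast value
se = np.sqrt(ms_error * (weights ** 2 / n).sum())  # standard error of the contrast
t = L / se
p = 2 * stats.t.sf(abs(t), df_error)               # two-tailed p
r_contrast = t / np.sqrt(t ** 2 + df_error)        # r-type effect size
print(f"t({df_error}) = {t:.2f}, p = {p:.4f}, r_contrast = {r_contrast:.2f}")
```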