
Psychological Methods Lab


Featured research (20)

Online collaborative projects in which users contribute to extensive knowledge bases such as Wikipedia or OpenStreetMap have become increasingly popular while yielding highly accurate information. Collaboration in such projects is organized sequentially, with one contributor creating an entry and the following contributors deciding whether to adjust or to maintain the presented information. We refer to this process as sequential collaboration since individual judgments directly depend on the previous judgment. As sequential collaboration has not yet been examined systematically, we investigate whether dependent, sequential judgments become increasingly accurate. Moreover, we test whether final sequential judgments are more accurate than the unweighted average of independent judgments from equally large groups. We conducted three studies with groups of four to six contributors who either answered general knowledge questions (Experiments 1 and 2) or located cities on maps (Experiment 3). As expected, individual judgments became more accurate over the course of sequential chains, and final estimates were as accurate as the unweighted average of independent judgments. These results show that sequential collaboration benefits from dependent, incremental judgments, thereby shedding light on the contribution process underlying large-scale online collaborative projects.
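The contribution process described above can be illustrated with a minimal toy simulation. The Python sketch below assumes a simple adjust-or-maintain rule in which a contributor changes the presented judgment only when it deviates from their own estimate by more than a tolerance, and then replaces it with a compromise between the two values; the rule, tolerance, and noise level are illustrative assumptions, not the procedure used in the experiments.

```python
# Toy simulation contrasting a sequential chain of adjust-or-maintain judgments
# with the unweighted average of independent judgments.
# Assumptions (illustrative, not taken from the experiments): contributors hold
# independent noisy estimates of a true value; a contributor adjusts the
# presented judgment only if it deviates from their own estimate by more than a
# tolerance, and then replaces it with the mean of both values.
import numpy as np

rng = np.random.default_rng(2023)

def sequential_chain(estimates, tolerance=0.5):
    """Pass one judgment along the chain; each contributor adjusts or maintains it."""
    judgment = estimates[0]
    for own in estimates[1:]:
        if abs(own - judgment) > tolerance:
            judgment = (judgment + own) / 2  # adjust toward own estimate
        # otherwise: maintain the presented judgment
    return judgment

true_value, group_size, sd, n_groups = 0.0, 6, 1.0, 10_000

err_chain, err_mean = [], []
for _ in range(n_groups):
    estimates = rng.normal(true_value, sd, size=group_size)
    err_chain.append(abs(sequential_chain(estimates) - true_value))
    err_mean.append(abs(estimates.mean() - true_value))

print(f"Mean absolute error, sequential chain:   {np.mean(err_chain):.3f}")
print(f"Mean absolute error, unweighted average: {np.mean(err_mean):.3f}")
```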
The phenomenon of sequential collaboration and future research perspectives.
Bayes factors allow researchers to test the effects of experimental manipulations in within-subjects designs using mixed-effects models. van Doorn et al. (2021) showed that such hypothesis tests can be performed by comparing different pairs of models which vary in the specification of the fixed- and random-effect structure for the within-subjects factor. To discuss the question of which of these model comparisons is most appropriate, van Doorn et al. used a case study to compare the corresponding Bayes factors. We argue that researchers should not only focus on pairwise comparisons of two nested models but rather use the Bayes factor for performing model selection among a larger set of mixed models that represent different auxiliary assumptions. In a standard one-factorial, repeated-measures design, the comparison should include four mixed-effects models: fixed-effects H0, fixed-effects H1, random-effects H0, and random-effects H1. Thereby, the Bayes factor enables testing both the average effect of condition and the heterogeneity of effect sizes across individuals. Bayesian model averaging provides an inclusion Bayes factor which quantifies the evidence for or against the presence of an effect of condition while taking model-selection uncertainty about the heterogeneity of individual effects into account. We present a simulation study showing that model selection among a larger set of mixed models performs well in recovering the true, data-generating model.
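The arithmetic behind the inclusion Bayes factor can be sketched in a few lines. The Python example below uses hypothetical log marginal likelihoods for the four mixed-effects models and equal prior model probabilities; in practice, the marginal likelihoods would be estimated, for example via bridge sampling.

```python
# Sketch: inclusion Bayes factor for the effect of condition via Bayesian
# model averaging over four mixed-effects models. The marginal likelihoods
# below are hypothetical placeholders, not results from the paper.
import numpy as np

# log marginal likelihoods (hypothetical values)
log_ml = {
    "fixed_H0":  -540.2,  # no condition effect, no individual heterogeneity
    "fixed_H1":  -531.7,  # average condition effect, no heterogeneity
    "random_H0": -528.9,  # no average effect, heterogeneous individual effects
    "random_H1": -524.4,  # average effect plus heterogeneous individual effects
}

prior = {m: 0.25 for m in log_ml}  # equal prior model probabilities

# posterior model probabilities (normalized on the log scale for stability)
log_post = {m: np.log(prior[m]) + log_ml[m] for m in log_ml}
max_lp = max(log_post.values())
unnorm = {m: np.exp(lp - max_lp) for m, lp in log_post.items()}
total = sum(unnorm.values())
post = {m: u / total for m, u in unnorm.items()}

# models that include an average effect of condition vs. those that do not
with_effect = ["fixed_H1", "random_H1"]
without_effect = ["fixed_H0", "random_H0"]

post_odds = sum(post[m] for m in with_effect) / sum(post[m] for m in without_effect)
prior_odds = sum(prior[m] for m in with_effect) / sum(prior[m] for m in without_effect)

inclusion_bf = post_odds / prior_odds
print(f"Inclusion Bayes factor for the condition effect: {inclusion_bf:.2f}")
```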
Davis-Stober and Regenwetter (2019; D&R) showed that even when all predictions of a theory hold in separate studies, not even a single individual may be described by all predictions jointly. To illustrate this 'paradox' of converging evidence, D&R derived upper and lower bounds on the proportion of individuals for whom all predictions of a theory hold. These bounds reflect extreme positive and negative stochastic dependence of individual differences across predictions. However, psychological theories often make more specific assumptions such as true individual differences being independent or positively correlated (e.g., due to a common underlying trait). Based on this psychometric perspective, I extend D&R's conceptual framework by developing a multivariate normal model of individual effects. Assuming perfect consistency (i.e., a correlation of one) of individual effects across predictions, the proportion of individuals described by all predictions of a theory is identical to D&R's upper bound. The proportion drops substantially when assuming independence of individual effects. However, irrespective of the assumed correlation structure, the multivariate normal model implies a lower bound that is strictly above D&R's lower bound if a theory makes at least three predictions. Hence, the scope of a theory can be improved by specifying whether individual effects are assumed to show a certain level of consistency across predictions (similar to a trait) or whether they are statistically independent (similar to a state).
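As a numerical illustration of this argument, the Python sketch below uses hypothetical standardized effect sizes (with unit variances) for three predictions and contrasts the proportion of individuals described by all predictions under perfect consistency, independence, and an intermediate correlation of individual effects.

```python
# Sketch: proportion of individuals for whom all predictions hold under a
# multivariate normal model of individual effects. A prediction "holds" for an
# individual if their true individual effect is positive. Effect sizes and the
# intermediate correlation are hypothetical, not values from the paper.
import numpy as np
from scipy.stats import norm

d = np.array([0.5, 0.8, 1.0])   # standardized mean effects of three predictions
p_single = norm.cdf(d)          # probability that each single prediction holds

# Perfect consistency (correlation of 1): all individual effects are driven by
# one latent variable, so the joint proportion equals the smallest single
# probability (D&R's upper bound).
p_consistent = p_single.min()

# Independence: the joint proportion is the product of the single probabilities.
p_independent = p_single.prod()

# Monte Carlo check for an intermediate correlation (r = 0.5).
rng = np.random.default_rng(0)
r = 0.5
cov = np.full((3, 3), r) + np.diag(np.full(3, 1 - r))
effects = rng.multivariate_normal(mean=d, cov=cov, size=200_000)
p_intermediate = np.mean((effects > 0).all(axis=1))

print(f"single predictions:  {np.round(p_single, 3)}")
print(f"perfect consistency: {p_consistent:.3f}")
print(f"independence:        {p_independent:.3f}")
print(f"correlation r = 0.5: {p_intermediate:.3f}")
```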

Lab head

Daniel W. Heck
Department
  • Faculty of Psychology

Members (3)

Gunnar Lemmer
  • Philipps University of Marburg
Maren Mayer
  • Universität Mannheim
Oliver Schmidt
  • Philipps University of Marburg
Florence Bockting
  • Not confirmed yet
Raphael Hartmann
  • Not confirmed yet