Source publication
In the present article, we investigate personality traits that may lead a respondent to refuse to answer a forced-choice personality item. For this purpose, we use forced-choice items with an adapted response format. As in a traditional forced-choice item, the respondent is instructed to choose one out of two statements to describe their personality...
Context in source publication
Context 1
... indicates that there are significant effects of the person and item covariates and their interactions on the propensity to refuse to choose a response option and to give one of two reasons as a justification. Consequently, we report the estimates of Model 3 in all subsequent analyses (see Table 1). The reference level for the Node variable is Node 1. Item positivity is effect-coded. ...
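To make the reported coding scheme concrete, here is a minimal, hypothetical sketch in Python (column names such as `node`, `positive`, and `positivity_ec` are illustrative, not the authors' variables) of a long-format pseudo-item data set with Node 1 as the reference level and item positivity effect-coded:

```python
import pandas as pd

# Hypothetical long-format pseudo-item data: one row per person x item x node.
df = pd.DataFrame({
    "person":   [1, 1, 2, 2],
    "item":     ["i01", "i01", "i02", "i02"],
    "node":     [1, 2, 1, 2],                # IRTree node answered in this row
    "positive": [True, True, False, False],  # is the item positively keyed?
    "y":        [1, 0, 1, 1],                # binary node response
})

# Treat node as a factor whose first level, Node 1, serves as the reference
# level under default dummy coding.
df["node"] = pd.Categorical(df["node"], categories=[1, 2])

# Effect-code item positivity (+1 / -1 rather than 1 / 0), so that main
# effects are evaluated at the average of positively and negatively keyed items.
df["positivity_ec"] = df["positive"].map({True: 1.0, False: -1.0})

print(df[["person", "item", "node", "positivity_ec", "y"]])
```

Effect coding keeps the covariate main effects interpretable as averages across the two item types even when interactions with the node factor are included in the model.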
Similar publications
The present research explored the association of personality traits with generalized anxiety disorder (GAD), perceived stress, and optimism.
Citations
... In the context of PISA, for example, one pseudo-item might represent the decision to award any credit at all, whereas another represents the decision to award full credit (conditional on the first). IRTree models have been used in a variety of situations, such as the study of faking (Lee et al. 2022), extreme/midpoint response styles (Ames and Myers 2021), or item skipping (Storme et al. 2024) in personality questionnaires. More closely related to the present topic, IRTree models have been used to explore the dimensionality of snapshot ratings of divergent thinking test responses (Forthmann et al. 2019). ...
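The decomposition into conditional binary pseudo-items can be illustrated with a minimal sketch (ours, not taken from the cited papers); the second node is structurally missing whenever no credit was awarded:

```python
# Map an ordinal PISA-style score (0 = no credit, 1 = partial credit,
# 2 = full credit) to the two binary pseudo-items described above.
def to_pseudo_items(score):
    """Return (node1, node2): node1 = any credit awarded?, node2 = full
    credit given that some credit was awarded? (None = branch not reached)."""
    if score == 0:
        return (0, None)
    if score == 1:
        return (1, 0)
    if score == 2:
        return (1, 1)
    raise ValueError(f"unexpected score: {score}")

scores = [0, 2, 1, 1, 0, 2]
print([to_pseudo_items(s) for s in scores])
# -> [(0, None), (1, 1), (1, 0), (1, 0), (0, None), (1, 1)]
```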
... Finally, as is done with the GPCM in PISA, generalized IRTree models can be extended to multiple-group and latent regression frameworks (e.g., Plieninger 2021; Storme et al. 2024). More broadly, IRTree models can accommodate several features typical of large-scale assessments, including missing data, clustered sampling structures (e.g., students nested within schools or countries), and the use of respondent weights. ...
In the PISA 2022 creative thinking test, students provide a response to a prompt, which is then coded by human raters as no credit, partial credit, or full credit. Like many large-scale educational testing frameworks, PISA uses the generalized partial credit model (GPCM) as the response model for these ordinal ratings. In this paper, we show that the instructions given to the raters violate some assumptions of the GPCM as it is used: raters are instructed to score according to steps that involve multiple attributes (appropriateness and diversity/originality), with a different attribute, or set of attributes, required to pass each threshold of the scoring scale. Instead of the GPCM, we propose multidimensional generalized item response tree (IRTree) models that account for the sequential nature of the ratings and disentangle the attributes measured from the original scores. We discuss advantages and limitations, as well as recommendations for future research.
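For reference, a generic two-node sequential (IRTree-style) parameterization of such ratings can be written as below; this sketches the model family rather than the paper's exact multidimensional specification:

```latex
% Generic two-node sequential (IRTree) parameterization for a rating
% X in {0 = no credit, 1 = partial credit, 2 = full credit}; a sketch of
% the model family, not necessarily the paper's exact specification.
\begin{align*}
  p_1 &= \Pr(\text{any credit})
       = \frac{\exp\{a_1(\theta_1 - b_1)\}}{1 + \exp\{a_1(\theta_1 - b_1)\}},\\
  p_2 &= \Pr(\text{full credit} \mid \text{any credit})
       = \frac{\exp\{a_2(\theta_2 - b_2)\}}{1 + \exp\{a_2(\theta_2 - b_2)\}},\\
  \Pr(X = 0) &= 1 - p_1, \qquad
  \Pr(X = 1) = p_1 (1 - p_2), \qquad
  \Pr(X = 2) = p_1 p_2.
\end{align*}
```

Letting the two nodes load on distinct latent attributes ($\theta_1 \neq \theta_2$) is what makes such a model multidimensional, so that each threshold can reflect the different attribute(s) the rater instructions attach to it.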