OBJECT ORIENTATION EFFECTS 1
Investigating Object Orientation Effects Across 18 Languages

Sau-Chin Chen1, Erin Buchanan2, Zoltan Kekecs3,66, Jeremy K. Miller4, Anna Szabelska5, Balazs Aczel3, Pablo Bernabeu6,67, Patrick Forscher7,68, Attila Szuts3, Zahir Vally8, Ali H. Al-Hoorie9, Mai Helmy10,69, Caio Santos Alves da Silva11, Luana Oliveira da Silva11, Yago Luksevicius de Moraes11, Rafael Ming Chi Santos Hsu11, Anthonieta Looman Mafra11, Jaroslava V. Valentova11, Marco Antonio Correa Varella11, Barnaby Dixson12, Kim Peters12, Nik Steffens12, Omid Ghasemi13, Andrew Roberts13, Robert M. Ross14, Ian D. Stephen13,70, Marina Milyavskaya15, Kelly Wang15, Kaitlyn M. Werner15, Dawn L. Holford16, Miroslav Sirota16, Thomas Rhys Evans17, Dermot Lynott18, Bethany M. Lane19, Danny Riis19, Glenn P. Williams20, Chrystalle B. Y. Tan21, Alicia Foo22, Steve M. J. Janssen22, Nwadiogo Chisom Arinze23, Izuchukwu Lawrence Gabriel Ndukaihe23, David Moreau24, Brianna Jurosic25, Brynna Leach25, Savannah Lewis26, Peter R. Mallik27, Kathleen Schmidt25, William J. Chopik28, Leigh Ann Vaughn29, Manyu Li30, Carmel A. Levitan31, Daniel Storage32, Carlota Batres33, Janina Enachescu34, Jerome Olsen34, Martin Voracek34, Claus Lamm35, Ekaterina Pronizius35, Tilli Ripp36, Jan Philipp Röer36, Roxane Schnepper36, Marietta Papadatou-Pastou37, Aviv Mokady38, Niv Reggev38, Priyanka Chandel39, Pratibha Kujur39, Babita Pande39, Arti Parganiha39, Noorshama Parveen39, Sraddha Pradhan39, Margaret Messiah Singh39, Max Korbmacher40, Jonas R. Kunst41, Christian K. Tamnes41, Frederike S. Woelfert41, Kristoffer Klevjer42, Sarah E. Martiny42, Gerit Pfuhl42, Sylwia Adamus43, Krystian Barzykowski43, Katarzyna Filip43, Patrícia Arriaga44, Vasilije Gvozdenović45, Vanja Ković45, Tao-tao Gan46, Hu Chuan-Peng47, Qing-Lan Liu46, Zhong Chen48, Fei Gao48, Lisa Li48, Jozef Bavoľár49, Monika Hricová49, Pavol Kačmár49, Matúš Adamkovič50,71, Peter Babinčák51, Gabriel Baník51,52, Ivan Ropovik52,72, Danilo Zambrano Ricaurte53, Sara Álvarez Solas54, Harry Manley55,73, Panita Suavansri55, Chun-Chia Kung56, Asil Ali Özdoğru57, Belemir Çoktok57, Çağlar Solak58, Sinem Söylemez58, Sami Çoksan59, İlker Dalgar60, Mahmoud Elsherif61, Martin Vasilev62, Vinka Mlakic63, Elisabeth Oberzaucher64, Stefan Stieger63, Selina Volsa63, Janis Zickfeld65, and Christopher R. Chartier25
1 Department of Human Development and Psychology, Tzu-Chi University
2 Harrisburg University of Science and Technology
3 Institute of Psychology, ELTE Eotvos Lorand University
4 Department of Psychology, Willamette University
5 Institute of Cognition and Culture, Queen's University Belfast
6 Department of Psychology, Lancaster University
7 LIP/PC2S, Université Grenoble Alpes
8 Department of Clinical Psychology, United Arab Emirates University
9 Independent Researcher
10 Psychology Department, College of Education, Sultan Qaboos University
11 Department of Experimental Psychology, Institute of Psychology, University of Sao Paulo
12 School of Psychology, University of Queensland
13 Department of Psychology, Macquarie University
14 Department of Philosophy, Macquarie University
15 Department of Psychology, Carleton University
16 Department of Psychology, University of Essex
17 School of Social, Psychological and Behavioural Sciences, Coventry University
18 Department of Psychology, Maynooth University
19 Division of Psychology, School of Social and Health Sciences, Abertay University
20 School of Psychology, Faculty of Health Sciences and Wellbeing, University of Sunderland
21 School of Psychology and Vision Sciences, University of Leicester
22 School of Psychology, University of Nottingham Malaysia
23 Department of Psychology, Alex Ekwueme Federal University
24 School of Psychology, University of Auckland
25 Department of Psychology, Ashland University
26 Department of Psychology, University of Alabama
27 Hubbard Decision Research
28 Department of Psychology, Michigan State University
29 Department of Psychology, Ithaca College
30 Department of Psychology, University of Louisiana at Lafayette
31 Department of Cognitive Science, Occidental College
32 Department of Psychology, University of Denver
33 Department of Psychology, Franklin and Marshall College
34 Faculty of Psychology, University of Vienna
35 Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna
36 Department of Psychology and Psychotherapy, Witten/Herdecke University
37 School of Education, National and Kapodistrian University of Athens
38 Department of Psychology and School of Brain Sciences and Cognition, Ben Gurion University
39 School of Studies in Life Science, Pt. Ravishankar Shukla University
40 Department of Health and Functioning, Western Norway University of Applied Sciences
41 Department of Psychology, University of Oslo
42 Department of Psychology, UiT - The Arctic University of Norway
43 Institute of Psychology, Jagiellonian University
44 Iscte-University Institute of Lisbon
45 Laboratory for Neurocognition and Applied Cognition, Faculty of Philosophy, University of Belgrade
46 Department of Psychology, Hubei University
47 School of Psychology, Nanjing Normal University
48 Faculty of Arts and Humanities, University of Macau
49 Department of Psychology, Faculty of Arts, Pavol Jozef Šafarik University in Košice
50 Institute of Social Sciences, CSPS, Slovak Academy of Sciences
51 Institute of Psychology, University of Presov
52 Institute for Research and Development of Education, Faculty of Education, Charles University
53 Faculty of Psychology, Fundación Universitaria Konrad Lorenz
54 Ecosystem Engineer, Universidad Regional Amazónica Ikiam
55 Faculty of Psychology, Chulalongkorn University
56 Department of Psychology, National Cheng Kung University
57 Department of Psychology, Üsküdar University
58 Department of Psychology, Manisa Celal Bayar University
59 Department of Psychology, Erzurum Technical University
60 Department of Psychology, Ankara Medipol University
61 Department of Vision Sciences, University of Leicester
62 Bournemouth University, Talbot Campus
63 Department of Psychology and Psychodynamics, Karl Landsteiner University of Health Sciences
64 Department of Evolutionary Anthropology, University of Vienna
65 Department of Management, Aarhus University
66 Department of Psychology, Lund University
67 Department of Language and Culture, UiT The Arctic University of Norway
68 Busara Center for Behavioral Economics
69 Psychology Department, Faculty of Arts, Menoufia University
70 Department of Psychology, Nottingham Trent University
71 University of Jyväskylä, Finland
72 Faculty of Education, University of Presov
73 Faculty of Behavioral Sciences, Education & Languages, HELP University Subang 2
Author Note
Funding statement. Matúš Adamkovič was supported by APVV-20-0319; Robert M. Ross was supported by the Australian Research Council (grant number: DP180102384) and the John Templeton Foundation (grant ID: 62631); Zoltan Kekecs was supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences; Mahmoud Elsherif was supported by the Leverhulme Trust; Glenn P. Williams was supported by a Leverhulme Trust Research Project Grant (RPG-2016-093); Ivan Ropovik was supported by NPO Systemic Risk Institute (LX22NPO5101); Krystian Barzykowski was supported by the National Science Centre, Poland (2019/35/B/HS6/00528); Gabriel Baník was supported by PRIMUS/20/HUM/009; Patrícia Arriaga was supported by the Portuguese National Foundation for Science and Technology (FCT UID/PSI/03125/2019); Monika Hricová was supported by VEGA 1/0145/23.
Ethical approval statement. Authors who collected data on site and online obtained ethical approval or agreement from their local institutions. The latest status of ethical approval for all participating authors is available in the public OSF folder (https://osf.io/e428p/, "IRB approvals" in Files).
Acknowledgement. We thank the editor and two reviewers for their suggestions on our first and second proposals. CB would like to thank Tyler McGee for help with data collection. PB would like to thank Liam Morgillo for help with data collection.
The authors made the following contributions. Sau-Chin Chen: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Supervision, Validation, Visualization, Writing - original draft, Writing - review & editing; Erin Buchanan: Formal analysis, Project administration, Resources, Software, Validation, Writing - review & editing; Zoltan Kekecs: Project administration, Writing - review & editing; Jeremy K. Miller: Project administration, Resources, Supervision, Writing - review & editing; Anna Szabelska: Project administration, Writing - original draft, Writing - review & editing; Balazs Aczel: Investigation, Methodology, Resources, Writing - review & editing; Pablo Bernabeu: Investigation, Methodology, Visualization, Writing - review & editing; Patrick Forscher: Methodology, Writing - review & editing; Attila Szuts: Investigation, Methodology, Resources, Writing - review & editing; Zahir Vally: Investigation, Resources, Writing - review & editing; Ali H. Al-Hoorie: Investigation, Resources, Writing - review & editing; Mai Helmy: Investigation, Resources, Writing - review & editing; Caio Santos Alves da Silva: Investigation, Resources, Writing - review & editing; Luana Oliveira da Silva: Investigation, Resources, Writing - review & editing; Yago Luksevicius de Moraes: Investigation, Resources, Writing - review & editing; Rafael Ming Chi Santos Hsu: Investigation, Resources, Writing - review & editing; Anthonieta Looman Mafra: Investigation, Resources, Writing - review & editing; Jaroslava V. Valentova: Investigation, Resources, Writing - review & editing; Marco Antonio Correa Varella: Investigation, Resources, Writing - review & editing; Barnaby Dixson: Investigation, Writing - review & editing; Kim Peters: Investigation, Writing - review & editing; Nik Steffens: Investigation, Writing - review & editing; Omid Ghasemi: Investigation, Writing - review & editing; Andrew Roberts: Investigation, Writing - review & editing; Robert M. Ross: Investigation, Writing - review & editing; Ian D. Stephen: Investigation, Writing - review & editing; Marina Milyavskaya: Investigation, Writing - review & editing; Kelly Wang: Investigation, Writing - review & editing; Kaitlyn M. Werner: Investigation, Writing - review & editing; Dawn L. Holford: Investigation, Writing - review & editing; Miroslav Sirota: Investigation, Writing - review & editing; Thomas Rhys Evans: Investigation, Writing - review & editing; Dermot Lynott: Investigation, Writing - review & editing; Bethany M. Lane: Investigation, Writing - review & editing; Danny Riis: Investigation, Writing - review & editing; Glenn P. Williams: Investigation, Writing - review & editing; Chrystalle B. Y. Tan: Investigation, Writing - review & editing; Alicia Foo: Investigation, Writing - review & editing; Steve M. J. Janssen: Investigation, Writing - review & editing; Nwadiogo Chisom Arinze: Investigation, Writing - review & editing; Izuchukwu Lawrence Gabriel Ndukaihe: Investigation, Writing - review & editing; David Moreau: Investigation, Writing - review & editing; Brianna Jurosic: Investigation, Writing - review & editing; Brynna Leach: Investigation, Writing - review & editing; Savannah Lewis: Investigation, Writing - review & editing; Peter R. Mallik: Investigation, Writing - review & editing; Kathleen Schmidt: Investigation, Resources, Writing - review & editing; William J. Chopik: Investigation, Writing - review & editing; Leigh Ann Vaughn: Investigation, Writing - review & editing; Manyu Li: Investigation, Writing - review & editing; Carmel A. Levitan: Investigation, Writing - review & editing; Daniel Storage: Investigation, Writing - review & editing; Carlota Batres: Investigation, Writing - review & editing; Janina Enachescu: Investigation, Resources, Writing - review & editing; Jerome Olsen: Investigation, Resources, Writing - review & editing; Martin Voracek: Investigation, Resources, Writing - review & editing; Claus Lamm: Investigation, Resources, Writing - review & editing; Ekaterina Pronizius: Investigation, Writing - review & editing; Tilli Ripp: Investigation, Writing - review & editing; Jan Philipp Röer: Investigation, Writing - review & editing; Roxane Schnepper: Investigation, Writing - review & editing; Marietta Papadatou-Pastou: Investigation, Resources, Writing - review & editing; Aviv Mokady: Investigation, Resources, Writing - review & editing; Niv Reggev: Investigation, Resources, Writing - review & editing; Priyanka Chandel: Investigation, Resources, Writing - review & editing; Pratibha Kujur: Investigation, Resources, Writing - review & editing; Babita Pande: Investigation, Resources, Supervision, Writing - review & editing; Arti Parganiha: Investigation, Resources, Supervision, Writing - review & editing; Noorshama Parveen: Investigation, Resources, Writing - review & editing; Sraddha Pradhan: Investigation, Resources, Writing - review & editing; Margaret Messiah Singh: Investigation, Writing - review & editing; Max Korbmacher: Investigation, Writing - review & editing; Jonas R. Kunst: Investigation, Resources, Writing - review & editing; Christian K. Tamnes: Investigation, Resources, Writing - review & editing; Frederike S. Woelfert: Investigation, Writing - review & editing; Kristoffer Klevjer: Investigation, Writing - review & editing; Sarah E. Martiny: Investigation, Writing - review & editing; Gerit Pfuhl: Investigation, Resources, Writing - review & editing; Sylwia Adamus: Investigation, Resources, Writing - review & editing; Krystian Barzykowski: Investigation, Resources, Supervision, Writing - review & editing; Katarzyna Filip: Investigation, Resources, Writing - review & editing; Patrícia Arriaga: Funding acquisition, Investigation, Resources, Writing - review & editing; Vasilije Gvozdenović: Investigation, Resources, Writing - review & editing; Vanja Ković: Investigation, Resources, Writing - review & editing; Tao-tao Gan: Investigation, Writing - review & editing; Hu Chuan-Peng: Investigation, Writing - review & editing; Qing-Lan Liu: Investigation, Writing - review & editing; Zhong Chen: Investigation, Writing - review & editing; Fei Gao: Investigation, Resources, Writing - review & editing; Lisa Li: Investigation, Resources, Writing - review & editing; Jozef Bavoľár: Investigation, Resources, Writing - review & editing; Monika Hricová: Investigation, Resources, Writing - review & editing; Pavol Kačmár: Investigation, Resources, Writing - review & editing; Matúš Adamkovič: Investigation, Writing - review & editing; Peter Babinčák: Investigation, Writing - review & editing; Gabriel Baník: Investigation, Writing - review & editing; Ivan Ropovik: Investigation, Writing - review & editing; Danilo Zambrano Ricaurte: Investigation, Resources, Writing - review & editing; Sara Álvarez Solas: Investigation, Resources, Writing - review & editing; Harry Manley: Investigation, Resources, Writing - review & editing; Panita Suavansri: Investigation, Resources, Writing - review & editing; Chun-Chia Kung: Investigation, Resources, Writing - review & editing; Asil Ali Özdoğru: Investigation, Resources, Writing - review & editing; Belemir Çoktok: Investigation, Resources, Writing - review & editing; Çağlar Solak: Investigation, Writing - review & editing; Sinem Söylemez: Investigation, Writing - review & editing; Sami Çoksan: Investigation, Resources, Writing - review & editing; İlker Dalgar: Resources, Writing - review & editing; Mahmoud Elsherif: Writing - review & editing; Martin Vasilev: Writing - review & editing; Vinka Mlakic: Resources, Writing - review & editing; Elisabeth Oberzaucher: Resources, Writing - review & editing; Stefan Stieger: Resources, Writing - review & editing; Selina Volsa: Resources, Writing - review & editing; Janis Zickfeld: Resources, Writing - review & editing; Christopher R. Chartier: Investigation, Project administration, Writing - review & editing.
Correspondence concerning this article should be addressed to Sau-Chin Chen. E-mail: csc2009@mail.tcu.edu.tw
Abstract

Mental simulation theories of language comprehension propose that people automatically create mental representations of objects mentioned in sentences. Mental representation is often measured with the sentence-picture verification task, wherein participants first read a sentence that implies a property of an object (e.g., its shape or orientation). Participants then view a picture of an object and indicate whether or not that object was mentioned in the sentence. Previous studies have shown matching advantages for shape, but findings concerning object orientation have not been robust across languages. This registered report investigated the match advantage of object orientation across 18 languages in nearly 4,000 participants. The preregistered analysis revealed no compelling evidence for a match advantage for orientation across languages. Additionally, the match advantage was not predicted by mental rotation scores. Overall, the results did not support current mental simulation theories.

Keywords: cross-lingual research, language comprehension, mental rotation, mental simulation
Investigating Object Orientation Effects Across 18 Languages

Mental simulation of object properties is a major topic in conceptual processing research (Ostarek & Huettig, 2019; Scorolli, 2014). Theoretical frameworks of conceptual processing describe the integration of linguistic representations and situated simulation (e.g., reading about bicycles integrates the situation in which bicycles would be used; Barsalou, 2008; Zwaan, 2014a). Proponents of situated cognition contend that perceptual representations can be generated during language processing (Barsalou, 1999; Wilson, 2002), as cognition is thought to be an interaction of the body, environment, and processing (Barsalou, 2020). Given this definition of situated cognition, it is important to investigate previously established embodied cognition effects across multiple environments (in this case, languages and cultures), especially as the credibility revolution has indicated that not all published findings are replicable (Vazire, 2018).

One empirical index of situated simulation is the mental simulation effect measured in the sentence-picture verification task (see Figure 1). This task requires participants to read a probe sentence displayed on the screen. On the following screen, participants see a picture of an object and must verify whether the object was mentioned in the probe sentence. Verification response times are used to test the mental simulation effect, which occurs when people are faster to respond to pictures that match the properties implied by the probe sentences. For example, the orientation implied by the sentence Tom hammered the nail into the wall would be matched if the following picture showed a horizontally-oriented nail rather than a vertically-oriented one. The opposite would be true of the sentence Tom hammered the nail into the floor plank.

Figure 1

Procedure of the sentence-picture verification task, with an example of matching orientation.

Mental simulation effects have been demonstrated for object shape (Zwaan et al., 2002), color (Connell, 2007), and orientation (Stanfield & Zwaan, 2001). Subsequent replication studies revealed consistent results for shape but inconsistent findings for color and orientation effects (De Koning et al., 2017; Rommers et al., 2013; Zwaan & Pecher, 2012). Existing theoretical frameworks do not provide much guidance regarding the potential causes of this discrepancy. With accumulating concerns about the lack of reproducibility (e.g., Kaschak & Madden, 2021), researchers have found it challenging to reconcile the theory of mental simulation with the failures to replicate some of the effects (e.g., Morey et al., 2022). In an empirical discipline like cognitive science, a theory requires the support of reproducible results.

The reliability of match advantage effects seems to vary depending on both the object properties and the languages under study. Mental simulation effects for object shape have consistently been found in English (Zwaan et al., 2017; Zwaan & Madden, 2005; Zwaan & Pecher, 2012), Chinese (Li & Shang, 2017), Dutch (De Koning et al., 2017; Engelen et al., 2011; Pecher et al., 2009; Rommers et al., 2013), German (Koster et al., 2018), Croatian (Šetić & Domijan, 2017), and Japanese (Sato et al., 2013). Object orientation, on the other hand, has produced mixed results across languages: namely, positive evidence in English (Stanfield & Zwaan, 2001; Zwaan & Pecher, 2012) and in Chinese (Chen et al., 2020), and null evidence in Dutch (De Koning et al., 2017; Rommers et al., 2013) and in German as a second language (Koster et al., 2018). Among studies on shape and orientation, the effects of object orientation have been smaller than those of object shape (e.g., d = 0.10 vs. 0.17 in Zwaan & Pecher, 2012; d = 0.07 vs. 0.27 in De Koning et al., 2017). To understand the causes of the discrepancies among object properties and languages, it is imperative to consider the cross-linguistic and experimental factors of the sentence-picture verification task.
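Concretely, the match advantage is a difference between condition means of correct-trial verification times. The following minimal Python sketch, with hypothetical trial data and field names of our own choosing (not taken from the study materials), illustrates the scoring:

```python
from statistics import mean

def match_advantage(trials):
    """Mean RT difference (mismatch minus match) over correct trials.

    `trials` is a list of dicts with hypothetical keys:
    rt (ms), condition ('match' or 'mismatch'), correct (bool).
    A positive value indicates the predicted match advantage.
    """
    match_rts = [t["rt"] for t in trials
                 if t["correct"] and t["condition"] == "match"]
    mismatch_rts = [t["rt"] for t in trials
                    if t["correct"] and t["condition"] == "mismatch"]
    return mean(mismatch_rts) - mean(match_rts)

# Hypothetical data: faster responses when the pictured orientation
# matches the orientation implied by the probe sentence.
trials = [
    {"rt": 620, "condition": "match", "correct": True},
    {"rt": 655, "condition": "match", "correct": True},
    {"rt": 700, "condition": "mismatch", "correct": True},
    {"rt": 675, "condition": "mismatch", "correct": True},
    {"rt": 900, "condition": "mismatch", "correct": False},  # error trials are excluded
]
print(match_advantage(trials))  # 50.0
```

Per-participant values of this kind are what the group-level tests below aggregate.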
Cross-linguistic, Methodological, and Cognitive Factors

Several factors might contribute to cross-linguistic differences in the match advantage of object orientation. First, languages differ in how they encode motion and placement events in sentences (Newman, 2002; Verkerk, 2014). Second, the potential role of mental rotation as a confound has been considered (Rommers et al., 2013). We expand on how linguistic, methodological, and cognitive factors hinder the improvement of theoretical frameworks below.
Linguistic Factors. The probe sentences used in object orientation studies usually contain several motion events (e.g., The ant walked towards the pot of honey and tried to climb in). Languages encode motion events in different ways, and grammatical differences between lexical encodings could explain different match advantage results. According to Verkerk (2014), Germanic languages (e.g., Dutch, English, and German) generally encode the manner of motion in the verb (e.g., The ant dashed), while conveying the path information through satellite adjuncts (e.g., towards the pot of honey). In contrast, other languages, such as the Romance family (e.g., Portuguese, Spanish), more often encode the path in the verb (e.g., crossing, exiting). Crucially, past research on the match advantage of object orientation is exclusively based on Germanic languages, and yet there were differences across those languages, with English being the only one that consistently yielded the match advantage. As a minor difference across Germanic languages in this regard, Verkerk (2014) notes that path-only constructions (e.g., The ant went to the feast) are more common in English than in other Germanic languages.

Another topic to be considered is the lexical encoding of placement in each language, as the stimuli contain several placement events (e.g., Sara situated the expensive plate on its holder on the shelf). Chen et al. (2020) and Koster et al. (2018) noted that some Germanic languages, such as German and Dutch, often make the orientation of objects more explicit than English does. In English, for example, the verb put does not convey a specific orientation in the sentences She put the book on the table and She put the bottle on the table. In German and Dutch, however, speakers prefer the verbs laid or stood in these sentences: the verb lay encodes a horizontal orientation, whereas the verb stand encodes a vertical orientation. This distinction extends to verbs indicating existence. As Newman (2002) exemplified, an English speaker would be likely to say There's a lamp in the corner, whereas a Dutch speaker would be more likely to say There 'stands' a lamp in the corner. Nonetheless, we cannot conclude that these cross-linguistic differences affect the match advantage across languages, because current theories (e.g., Language and Situated Simulation; Barsalou, 2008) have not addressed the potential influence of linguistic aspects such as the lexical encoding of placement.
Methodological Factors. Inconsistent findings on the match advantage of object orientation may be due to variability in task design. For example, studies failing to detect the match advantage may not have required participants to verify the probe sentence after responding to the target picture (see Zwaan, 2014a). Without such a verification, participants might have paid less attention to the meaning of the probe sentences, in which case they would have been less likely to form a mental representation of the objects (e.g., Zwaan & van Oostendorp, 1993). In this regard, variability originating from differences in the characteristics of experiments can substantially influence the results (Barsalou, 2019; Kaschak & Madden, 2021).
Cognitive Factors. Since Stanfield and Zwaan (2001) showed a match advantage of object orientation, later studies on this topic have examined the association between the match advantage and alternative cognitive mechanisms other than situated simulation. One of these potential mechanisms is spatial cognition, which can be measured with mental rotation tasks. Indeed, studies have suggested that mental rotation tasks offer valid reflections of previous spatial experience (Frick & Möhring, 2013) and of current spatial cognition (Chu & Kita, 2008; Pouw et al., 2014). Some previous studies have drawn on mental rotation to study mental simulation. For instance, De Koning et al. (2017) observed that the effectiveness of mental rotation increased with the size of the depicted object. Chen et al. (2020) examined the implication of this finding for the match advantage of object orientation (Stanfield & Zwaan, 2001), and implemented a picture-picture verification task using the mental rotation paradigm (D. Cohen & Kubovy, 1993). In each trial, two pictures appeared on opposite sides of the screen, and participants had to verify whether the pictures represented identical or different objects.

Chen et al. (2020) not only revealed shorter verification times for matching orientations (i.e., two identical pictures presented in horizontal or vertical orientation) but also replicated the larger effect for larger objects (i.e., pictures of bridges versus pictures of pens). The results were consistent across the three languages investigated: English, Dutch, and Chinese. To compare the results of sentence-picture verification and picture-picture verification, Chen et al. (2020) converted the picture-picture verification times to mental rotation scores, defined as the difference in verification times between the identical and different orientations1. Their analysis showed that mental rotation affected the Dutch participants' sentence-picture verification performance. With the measurement of mental rotation scores, we explore the association between spatial cognition and the effect of orientation in comprehension across the investigated languages.
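This exploratory question amounts to correlating two per-participant difference scores: the mental rotation score from the picture-picture task and the match advantage from the sentence-picture task. A minimal sketch with invented, illustrative numbers (not study data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-participant difference scores in ms (illustrative only):
# rotation score = identical-minus-different RT gap in picture-picture trials;
# match advantage = mismatch-minus-match RT gap in sentence-picture trials.
rotation_scores = [120, 85, 150, 60, 110, 95]
match_advantages = [30, 12, 41, 8, 25, 18]

print(round(pearson_r(rotation_scores, match_advantages), 2))
```

A positive correlation would suggest that spatial cognition contributes to the sentence-picture effect; the invented values here were chosen to illustrate that pattern, not to reflect the study's findings.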
Purposes of This Study

To scrutinize the discrepancies in findings across languages and cognitive factors, we examined the reproducibility of the object orientation effect in a multi-lab collaboration. Our pre-registered plan aimed at detecting a general match advantage of object orientation across languages and evaluated the magnitude of the match advantage in each specific language. Additionally, we examined whether the match advantages were related to the mental rotation index. Thus, this study followed the original methods of Stanfield and Zwaan (2001) and addressed two primary questions: (1) How much of a match advantage of object orientation can be obtained within different languages? (2) How are differences in the mental rotation index associated with the match advantage across languages?
Method

Hypotheses and Design

The study design for the sentence-picture and picture-picture verification tasks was mixed, with a between-participant variable (language) and a within-participant variable (match versus mismatch in object orientation). In the sentence-picture verification task, the match condition reflects a match between the sentence and the picture, whereas in the picture-picture verification task it reflects a match in orientation between two pictures. The only dependent variable for both tasks was response time. The time difference between match conditions in each task is the measurement of mental simulation effects (for the sentence-picture task) and mental rotation scores (for the picture-picture task). We did
1 In the pre-registered plan, we used the term "imagery score," but this term was confusing. Therefore, we use "mental rotation scores" instead of "imagery scores" in the final report.
not select languages systematically, but instead based on our collaboration recruitment with the Psychological Science Accelerator (PSA; Moshontz et al., 2018).

We pre-registered the following hypotheses:

(1) In the sentence-picture verification task, we expected response times to be shorter for matching compared to mismatching orientations within each language. In the picture-picture verification task, we expected shorter response times for identical orientations compared to different orientations. We did not have any specific hypotheses about the relative size of the object orientation match advantage in different languages.

(2) Based on the assumption that mental rotation is a general cognitive function, we expected equal mental rotation scores across languages, but no association between mental rotation scores and mental simulation effects (see Chen et al., 2020).
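The confirmatory test implied by this design can be sketched as a linear mixed-effects model with the match condition as a fixed effect and laboratories as random intercepts. The sketch below simulates hypothetical data; the variable names, effect sizes, and group counts are ours, not the study's:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data under the mixed design: response times (ms) with a small
# match advantage and lab-level variation (all numbers are hypothetical).
rng = np.random.default_rng(2023)
rows = []
for lab in range(10):
    lab_shift = rng.normal(0, 30)          # random intercept per laboratory
    for _ in range(60):
        match = int(rng.integers(0, 2))    # 1 = matching orientation
        rt = 650 - 5 * match + lab_shift + rng.normal(0, 80)
        rows.append({"rt": rt, "match": match, "lab": lab})
df = pd.DataFrame(rows)

# Mixed-effects model: fixed effect of match, random intercept for lab.
res = smf.mixedlm("rt ~ match", df, groups=df["lab"]).fit()
print(res.params["match"])  # estimated match effect (negative = faster when matching)
```

In the actual analysis, items would typically enter as an additional random factor; this sketch keeps only the lab grouping for brevity.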
Participants462
We performed a pre-registered power analysis, which sought to achieve a power of 80% in a directional one-sample t-test. For a hypothesized effect size of d = 0.20, a sample size of N = 156 was required; for a hypothetical effect size of d = 0.10, a sample size of N = 620 was required. In addition, a power analysis tailored to mixed-effects models was performed. The effect size hypothesized in this analysis was equal to that observed by Zwaan and Pecher (2012), and the number of items was 100 (i.e., 24 planned items nested within at least five languages). The result revealed that a sample size of N = 400 would be required to achieve a power of 90%. We expected laboratories to show differences in orientation effects, and therefore the mixed-effects analysis treated laboratory as a random variable to account for these differences. The laboratories were allowed to follow a secondary plan: a team collected at least their pre-registered minimum sample size (suggested 100 to 160 participants; most implemented 50), and then determined whether or not to continue data collection via Bayesian sequential analysis (stopping data collection if BF10 = 10 or 0.10).²
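For illustration, the one-sample sample-size calculation described above can be approximated with the standard normal-approximation formula. This is a sketch rather than the authors' actual power software, and the exact t-based answer is about one participant larger than the approximation.

```python
import math
from statistics import NormalDist

def one_sample_n(d, alpha=0.05, power=0.80):
    """Approximate sample size for a directional one-sample t-test,
    using the normal approximation N ~ ((z_{1-alpha} + z_{power}) / d)^2.
    The exact t-based answer is about one participant larger."""
    z = NormalDist()
    return math.ceil(((z.inv_cdf(1 - alpha) + z.inv_cdf(power)) / d) ** 2)
```

With d = 0.20 this gives roughly the pre-registered N of 156, and with d = 0.10 roughly 620.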
We collected data in 18 languages from 50 laboratories. Each laboratory chose a maximal sample size and an incremental n for sequential analysis before their data collection. Because the pre-registered power analysis did not match the final analysis plan, we additionally completed a sensitivity analysis to ensure the sample size was adequate to detect small effects. The results indicated that if the observed difference in reaction time between the different orientations was 2.36 ms or higher overall, the effect would be detected as significant. Appendix A summarizes the details of the sensitivity analysis.
The original sample sizes are presented in Table 1 for the teams that provided raw data to the project. Data collection proceeded in two broad stages: initially, we collected data in the laboratory; however, when the global COVID-19 pandemic made this practice impossible to continue, we moved data collection online. In total, 4,248 unique participants completed the present study, with 2,843 completing the in-person version and 1,405 completing the online version.³ The in-person version included 35 research teams and the online version included 19, with 50 total teams across both data collection methods (i.e., some labs completed both in-person and online data collection). Based on recommendations from the research teams (TUR_007, TWN_002), two sets of data were excluded from all analyses due to participants being non-native speakers.

Figure 2 provides a flow chart for participant exclusion and inclusion for analyses. All participating laboratories had either ethical approval or institutional evaluation before data collection. All data and analysis scripts are available in the source files (https://codeocean.com/capsule/3994879). Appendix B summarizes the average characteristics by language and laboratory.

² See details of the power analysis in the pre-registered plan, pp. 13-15: https://psyarxiv.com/t2pjv/

³ Data for this study were collected together with another unrelated study (Phills et al., 2022) during the same data collection session, with the two studies using different data collection platforms. The demographic data were collected within the platform of the other study during the in-person sessions. Some participants only completed the Phills et al. study and dropped out without completing the present study, and there were also some data entry errors in the demographic data. Thus, the demographic data of some participants who took the present study are missing or unidentifiable (n = 39 cannot be matched to a lab, n = 2,053 were missing gender information, and n = 332 were missing age information). Importantly, this does not affect the integrity of the experimental research data.
Materials
Sentences. Twenty-four critical sentence pairs (48 sentences in total) were included in this study, following Stanfield and Zwaan (2001). Each pair consisted of versions that differed in the implied orientation of the object embedded in the sentence. For instance, the sentence The librarian put the book back on the table, which implies a horizontal orientation, had a counterpart in the sentence The librarian put the book back on the shelf, which implies a vertical orientation. Another two sets of 24 sentences were included as filler sentences for the task demand. These sentences were not matched to any particular orientation but included a potential object for depiction. For example, After a week the painting arrived by mail and The flowers that were planted last week had survived the storm were included as filler sentences. Each participant was shown 24 critical sentences and 24 filler sentences in the study. The filler sentences were included to counterbalance the number of yes-no answers to create an even 50% ratio.
Pictures. The study included 24 critical matched pictures that varied only in their orientation (vertical/horizontal), for a total of 48 critical pictures (from Zwaan & Pecher, 2012). These pictures were matched to their respective sentences for implied orientation. The librarian put the book back on the table was matched with a horizontally oriented book, while The librarian put the book back on the shelf was matched with a vertically oriented book. For counterbalancing, a mismatch between picture orientation and sentence was
Table 1
Demographic and Sample Size Characteristics

| Language | SP Trials | PP Trials | SP N | PP N | Demo N | N Female | N Male | M Age | SD Age |
|---|---|---|---|---|---|---|---|---|---|
| Arabic | 2544 | 2544 | 106 | 106 | 107 | 42 | 12 | 32.26 | 18.59 |
| Brazilian Portuguese | 1200 | 1200 | 50 | 50 | 50 | 36 | 13 | 30.80 | 8.73 |
| English | 45189 | 45312 | 1884 | 1888 | 2055 | 1360 | 465 | 21.71 | 3.85 |
| German | 5616 | 5616 | 234 | 234 | 248 | 98 | 26 | 22.34 | 3.40 |
| Greek | 2376 | 2376 | 99 | 99 | 109 | 0 | 0 | 33.86 | 11.30 |
| Hebrew | 3576 | 3571 | 149 | 149 | 181 | 0 | 0 | 24.25 | 9.29 |
| Hindi | 1896 | 1896 | 79 | 79 | 86 | 57 | 27 | 21.66 | 3.46 |
| Magyar | 3610 | 3816 | 151 | 159 | 168 | 3 | 1 | 21.50 | 2.82 |
| Norwegian | 3576 | 3576 | 149 | 149 | 154 | 13 | 9 | 25.22 | 6.40 |
| Polish | 1368 | 1368 | 57 | 57 | 146 | 0 | 0 | 23.25 | 7.96 |
| Portuguese | 1488 | 1464 | 62 | 61 | 55 | 26 | 23 | 30.74 | 9.09 |
| Serbian | 3120 | 3120 | 130 | 130 | 130 | 108 | 21 | 21.38 | 4.50 |
| Simplified Chinese | 2040 | 2016 | 85 | 84 | 96 | 0 | 1 | 21.92 | 4.68 |
| Slovak | 3881 | 3599 | 162 | 150 | 325 | 1 | 0 | 21.77 | 2.33 |
| Spanish | 3120 | 3096 | 130 | 129 | 146 | 0 | 0 | 21.73 | 3.83 |
| Thai | 1200 | 1152 | 50 | 48 | 50 | 29 | 9 | 21.54 | 3.81 |
| Traditional Chinese | 3600 | 3600 | 150 | 150 | 186 | 69 | 46 | 20.89 | 2.44 |
| Turkish | 6456 | 6432 | 269 | 268 | 274 | 36 | 14 | 21.38 | 4.59 |

Note. SP = Sentence-Picture Verification, PP = Picture-Picture Verification. Sample sizes for demographics may be higher than the sample size for this study, as participants could have completed only the bundled experiment. Additionally, not all entries could be unambiguously matched by lab ID, and therefore demographic sample sizes could also be lower than the data collected.
Figure 2
Sample size and exclusions. N = number of unique participants, T = number of trials.
The final combined sample was summarized to a median score for each match/mismatch
condition, and therefore, includes one summary score per person.
Table 2
Trial Conditions for the Sentence-Picture and Picture-Picture Verification Tasks

| Condition | Item 1 | Item 2 | Answer | Number |
|---|---|---|---|---|
| Sentence-Picture Critical Match | Critical Sentence: Horizontal | Critical Picture: Horizontal | Yes | 6 |
| Sentence-Picture Critical Match | Critical Sentence: Vertical | Critical Picture: Vertical | Yes | 6 |
| Sentence-Picture Critical Mismatch | Critical Sentence: Horizontal | Critical Picture: Vertical | Yes | 6 |
| Sentence-Picture Critical Mismatch | Critical Sentence: Vertical | Critical Picture: Horizontal | Yes | 6 |
| Sentence-Picture Filler | Sentence | Picture | No | 24 |
| Picture-Picture Critical Match | Critical Picture: Horizontal | Critical Picture: Horizontal | Yes | 6 |
| Picture-Picture Critical Match | Critical Picture: Vertical | Critical Picture: Vertical | Yes | 6 |
| Picture-Picture Critical Mismatch | Critical Picture: Horizontal | Critical Picture: Vertical | Yes | 6 |
| Picture-Picture Critical Mismatch | Critical Picture: Vertical | Critical Picture: Horizontal | Yes | 6 |
| Picture-Picture Filler | Picture | Picture | No | 24 |
created, and the book would be shown in the respective opposite orientation (see orientation pairs at https://osf.io/utqxb). Another 48 pictures were included as fillers, which were unrelated to the corresponding sentence. Therefore, the answer to critical pairs was always "yes", while the answer to filler sentence-picture combinations was always "no".

Picture-Picture Trials. The picture-picture verification task used the same object pictures as the above task. The 24 critical picture pairs were included as match trials and were counterbalanced such that half the time they appeared with the same object and orientation (i.e., the same picture), and half the time with the opposite orientation (i.e., horizontal and vertical). The filler pictures were randomly paired to create mismatch trials. Table 2 shows the counterbalancing and combinations for trials.
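The cell assignment in Table 2 for the sentence-picture task can be sketched as follows. This is a hypothetical illustration; the function name and data structure are invented, and the study's actual lists were generated in Excel.

```python
import random

def build_sentence_picture_trials(n_items=24, n_fillers=24, seed=1):
    """Sketch of the Table 2 counterbalancing: each critical item falls
    into one of four sentence x picture orientation cells (6 items per
    cell), and fillers always take the 'no' answer."""
    cells = [("horizontal", "horizontal"), ("vertical", "vertical"),
             ("horizontal", "vertical"), ("vertical", "horizontal")]
    rng = random.Random(seed)
    items = list(range(n_items))
    rng.shuffle(items)  # randomize which item lands in which cell
    trials = []
    for idx, item in enumerate(items):
        sent, pic = cells[idx % 4]  # cycle cells for an even 6/6/6/6 split
        trials.append({"item": item, "sentence": sent, "picture": pic,
                       "match": sent == pic, "answer": "yes"})
    # fillers: unrelated sentence-picture pairs, answered "no"
    trials += [{"item": f"filler_{i}", "answer": "no"} for i in range(n_fillers)]
    return trials
```

With the defaults this yields 24 "yes" trials (12 match, 12 mismatch) and 24 "no" fillers, reproducing the 50% yes-no ratio described above.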
Procedure
Sentence-Picture Task. The sentence-picture verification task was always administered first. This task began with six practice trials. Each trial started with a left-aligned, vertically centered fixation point displayed for 1,000 ms, immediately followed by the probe sentence. The sentence remained on the screen until the participant pressed the space key, acknowledging that they had read the sentence. Then, the object picture was presented in the center of the screen until the participant responded, or it disappeared after two seconds. Participants were instructed to verify, as quickly and accurately as possible, whether the object on screen had been mentioned in the probe sentence. Following Stanfield and Zwaan (2001), a memory check test was carried out after every three to eight trials to ensure that participants had read each sentence carefully.
As shown in Table 2, the trials for the sentence-picture task were created by counterbalancing the implied sentence orientation (vertical, horizontal) with the pictured object orientation, creating a fully crossed combination between sentences and objects. Therefore, each participant saw only one of the four possible combinations (2 sentence orientations x 2 object orientations). For the filler items, sentences and pictures were randomly assigned in two separate patterns, and these were included with the critical pairs. Stimuli lists were created in Excel, and this information can be found at https://osf.io/utqxb.
Translation of Sentences. The translation of probe sentences followed our pre-registered plan. Every non-English language coordinator was required to recruit at least four translators who were fluent in both English and the target language. Every language coordinator supervised the translators using the Psychological Science Accelerator guidelines (https://psysciacc.org/translation-process/). In addition, the coordinator and participating laboratories consulted about each of the following points:
1) The four translators could flag items unfamiliar to a particular language based on object familiarity ratings. The two forward translators would suggest alternative probe sentences and object pictures to replace the unfamiliar objects. The two backward translators would evaluate the suggested items.

2) Some objects in a particular language have different spellings or pronunciations among countries and geographical zones due to dialect. For example, American speakers tend to write tire whereas British speakers tend to write tyre. Every coordinator would mark these local translations in the final version of the translated materials. Participating laboratories could replace the names to match the local dialect.
Picture-Picture Task. Next, the picture-picture verification task was administered. In each trial, two objects appeared on either side of the central fixation point until the participant indicated that the pictures displayed the same object or two different objects, or until two seconds elapsed. As shown in Table 2, four possible combinations of critical orientations could be shown, crossing picture (same, different) by orientation (same, different). Each participant saw only one of the critical combinations, and filler items were randomly paired in two combinations to match. The stimuli lists can be found at https://osf.io/utqxb.
Software Implementation. The study was executed using OpenSesame software for millisecond timing (Mathôt et al., 2012). After data collection moved online, to minimize the differences between on-site and web-based studies, we converted the original Python code to JavaScript and collected the data using OpenSesame through a JATOS server (Lange et al., 2015; Mathôt & March, 2022). We proceeded with the online study from February to June 2021, after the changes in the procedure were approved by the journal editor and reviewers. Following the literature, we did not anticipate any theoretically important differences between the two data collection methods (see Anwyl-Irvine et al., 2020; Bridges et al., 2020; de Leeuw & Motz, 2016). The instructions and experimental scripts are available in the public OSF folder (https://osf.io/e428p/, "Materials" in Files).
Analysis Plan
To test Hypothesis 1, our first planned analysis⁴ used a random-effects meta-analysis model that estimated the match advantage across laboratories and languages. The meta-analysis summarized the median reaction times by match condition to determine the effect size by laboratory. The following formula was used:

$$d = \frac{Mdn_{\text{Mismatch}} - Mdn_{\text{Match}}}{\sqrt{MAD_{\text{Mismatch}}^2 + MAD_{\text{Match}}^2 - 2 \times r \times MAD_{\text{Mismatch}} \times MAD_{\text{Match}}}} \times \sqrt{2 \times (1 - r)}$$

where d is Cohen's d (Fritz et al., 2012), Mdn is the median, MAD is the median absolute deviation, and r is the correlation between the match and mismatch conditions. Meta-analytic effect sizes were computed for those languages that had data from more than one team.
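As a concrete illustration, the effect size formula above can be computed as follows. The function is a sketch; the name and the example values are illustrative, not taken from the analysis scripts.

```python
import math

def match_advantage_d(mdn_mismatch, mdn_match, mad_mismatch, mad_match, r):
    """Cohen's d for the match advantage from medians (Mdn), median
    absolute deviations (MAD), and the match-mismatch correlation r,
    following the formula in the text."""
    denom = math.sqrt(
        mad_mismatch ** 2 + mad_match ** 2
        - 2 * r * mad_mismatch * mad_match
    )
    return (mdn_mismatch - mdn_match) / denom * math.sqrt(2 * (1 - r))
```

As a sanity check, with equal MADs and r = 0 the formula reduces to the median difference divided by the MAD, so a 10 ms advantage with MAD = 10 yields d = 1.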
Continuing to test Hypothesis 1, we next ran planned mixed-effects models using each individual response time from the sentence-picture verification task as the dependent variable. In each analysis, we first built a simple linear regression model with a fixed intercept only. Then, we systematically added random intercepts and fixed effects, arriving at the final model. First, the random intercepts were added to the model one by one in the following order: participant ID, target, laboratory ID, and finally language. See the section below for the decision criteria used to determine the final random-effect structure. Then, the fixed effect of matching condition (match vs. mismatch) was added to the model. Language-specific mixed-effects models were conducted in the same way if the meta-analysis showed a significant orientation effect.
According to the pre-registration, we planned to test Hypothesis 2 by first evaluating the equality of mental rotation scores across languages using an ANOVA. However, this plan was updated to use mixed models instead, due to the nested structure of the data (Gelman, 2006). The same analysis plan was used for model building and selection as described above for the sentence-picture verification task.

⁴ See the analysis plan in the pre-registered plan, pp. 19-20: https://psyarxiv.com/t2pjv/. This plan was changed to a random-effects model to ensure that we did not assume the exact same effect size for each language and lab.
To further assess Hypothesis 2, the last planned analysis used mental rotation scores to predict mental simulation, with an interaction between language and mental rotation scores computed from the picture-picture task, to determine whether there were differences in the prediction of match advantage in the sentence-picture task. Here, we again used a mixed-effects model to control for the random effect of the data collection lab, with language, mental rotation score, and their interaction as fixed-effect predictors.
Decision criterion for model selection and hypothesis testing. The inclusion of both random and fixed effects in models was assessed using model comparison based on the Akaike information criterion (AIC). While this method is less conservative than alternatives such as the likelihood ratio test (Matuschek et al., 2017), the AIC was deemed appropriate due to the modest effect sizes that tend to be produced by mental simulation effects, and the limited sample sizes in the present study (albeit larger samples than those of most previous studies). Models with lower AIC were preferred over models with higher AIC, and in cases where the difference in AIC did not reach 2 (Burnham & Anderson, 1998), the model with fewer parameters was preferred.
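The decision rule above can be sketched as a small helper; `prefer_model` is a hypothetical name, not part of the analysis scripts.

```python
def prefer_model(aic_simple, aic_complex, threshold=2.0):
    """Keep the richer model only if it improves AIC by at least
    `threshold` points; otherwise prefer the model with fewer
    parameters, mirroring the decision rule described in the text."""
    return "complex" if (aic_simple - aic_complex) >= threshold else "simple"
```

For example, an improvement of 10 AIC points favors the richer model, whereas an improvement under 2 points retains the simpler one.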
p-values for each effect were calculated using the Satterthwaite approximation for degrees of freedom for individual predictor coefficients and meta-analysis (Luke, 2017). p-values were interpreted using the pre-registered α level of .05.
Intra-lab analysis during data collection. Before data collection, each lab decided whether they wanted to apply a sequential analysis (Schönbrodt et al., 2017) or whether they wanted to settle for a fixed sample size. The pre-registered protocol for labs applying sequential analysis established that they could stop data collection upon reaching the pre-registered criterion (BF10 = 10 or .10) or the maximal sample size. Each laboratory chose a fixed sample size and an incremental n for sequential analysis before their data collection. Two laboratories (HUN_001, TWN_001) stopped data collection at the pre-registered criterion, BF10 = .10. Fourteen laboratories did not finish the sequential analysis because (1) twelve laboratories were interrupted by the pandemic outbreak, and (2) two laboratories (TUR_007E, TWN_002E) recruited English-speaking participants to comply with institutional policies. Lab-based records were reported on a public website as each laboratory completed data collection (details are available in Appendix C).
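A stopping rule of this kind can be sketched as follows. The Bayes factor here uses the BIC approximation for a one-sample t-test (Wagenmakers, 2007) purely as a stand-in; the labs' actual Bayes factor computation is not specified in this section, so this is an assumption for illustration.

```python
import math
from statistics import mean, stdev

def bf10_bic(sample):
    """One-sample Bayes factor via the BIC approximation
    (Wagenmakers, 2007): BF01 ~ sqrt(n) * (1 + t^2/(n-1))^(-n/2)."""
    n = len(sample)
    t = mean(sample) / (stdev(sample) / math.sqrt(n))
    bf01 = math.sqrt(n) * (1 + t ** 2 / (n - 1)) ** (-n / 2)
    return 1 / bf01

def sequential_stop(data, step, bf_high=10.0, bf_low=0.10, n_max=160):
    """Check the stopping criterion every `step` participants; return the
    n at which collection stops and the reason."""
    for n in range(step, min(len(data), n_max) + 1, step):
        bf = bf10_bic(data[:n])
        if bf >= bf_high or bf <= bf_low:
            return n, "criterion"
    return min(len(data), n_max), "max reached"
```

With a clearly non-zero effect, the rule stops at the first check; with data centered on zero, it runs until the evidence-for-null bound or the maximal sample size is reached.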
Results

Data Screening
As shown in Figure 2, participants' data were deleted listwise from the sentence-picture and picture-picture tasks if they did not perform with at least 70% accuracy. Next, the data were screened for outliers. Our pre-registered plan excluded outliers based on a linear mixed-model analysis for participants in the third quartile of the grand intercept (i.e., participants with the longest average response times). After examining the data from both online and in-person data collection, it became clear that both a minimum and a maximum response latency should be employed, as improbable times existed at both ends of the distribution. The minimum response time was set to 160 ms based on Hick's Law (Kvålseth, 2021; Proctor & Schneider, 2018). The maximum response latency was calculated as two times the mean absolute deviation plus the median, calculated separately for each participant. Exclusions were performed at the trial level for these outlier response times.
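The trial-level trimming rule can be sketched as follows. Reading "mean absolute deviation" as deviations taken around the participant's median is an interpretation of the text, and `trim_trials` is a hypothetical helper rather than the project's actual script.

```python
from statistics import median, mean

def trim_trials(rts, floor_ms=160):
    """Trial-level exclusion as described: drop RTs below 160 ms, and
    drop RTs above the participant's median plus two mean absolute
    deviations (deviations taken around the median; an interpretation)."""
    mdn = median(rts)
    mad = mean(abs(rt - mdn) for rt in rts)
    ceiling = mdn + 2 * mad
    return [rt for rt in rts if floor_ms <= rt <= ceiling]
```

For a participant whose RTs are [500, 520, 540, 560, 5000, 100] ms, the 100 ms trial falls below the floor and the 5000 ms trial exceeds the ceiling, so only the four middle trials survive.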
To ensure equivalence between data collection methods, we evaluated the response times predicted by the fixed effects of the interaction between match (match vs. mismatch) and data collection source (in-person vs. online). We included random intercepts for participants, lab, and language, and random slopes for source by lab and source by language. This analysis showed no difference between data sources: b = 2.41, SE = 2.77, t(73729.28) = 0.87, p = .385. Therefore, the following analyses did not separate in-person and online data. Table 3 provides a summary of the match advantage by language for the sentence-picture verification task.
Although we combined the two data sets in the final analysis, it is worth considering that online participants' attention may be more easily distracted given the lack of environmental control and experimenter oversight. However, the secondary task revealed that online participants had a higher percentage correct than in-person participants, t(3,214.86) = 35.77, p < .001, with M = 85.46 (SD = 14.20) online and M = 67.71 (SD = 16.26) in person.
Hypothesis 1: Meta-Analysis of the Orientation Effect
The planned meta-analysis examined the effect overall and within languages in which at least two laboratories had collected data (Arabic, English, German, Norwegian, Simplified Chinese, Traditional Chinese, Slovak, and Turkish). Figure 3 showed a significant positive orientation effect across German laboratories (b = 16.68, 95% CI [7.75, 25.62]) but did not reveal a significant overall effect (b = 2.05, 95% CI [-2.71, 6.82]). A significant negative orientation effect was also found in the Hungarian (b = -20.00, 95% CI [-29.60, -10.40]) and Serbian (b = -17.25, 95% CI [-32.26, -2.24]) laboratories, although only a single laboratory participated in each of these languages, so no language-specific meta-analysis was conducted for them.
Hypothesis 1: Mixed-Linear Modeling of the Orientation Effect
First, an intercept-only model of response times with no random intercepts was computed for comparison purposes, AIC = 1008828.79. The model with a random intercept by participants was an improvement over this model, AIC = 971783.32. The addition of a target random intercept improved model fit over the participant intercept-only model, AIC = 969506.32. Data collection lab was then added to the model
Table 3
Descriptive Summary of Sentence-Picture Verification Task by Language

| Language | Accuracy (%) | Mismatching | Matching | Match Advantage |
|---|---|---|---|---|
| Arabic | 90.65 | 580.25 (167.53) | 581.00 (200.89) | -0.75 |
| Brazilian Portuguese | 94.87 | 641.00 (136.40) | 654.50 (146.78) | -13.50 |
| English | 95.04 | 576.75 (124.17) | 578.75 (127.87) | -2.00 |
| German | 96.53 | 593.00 (106.75) | 576.00 (107.12) | 17.00 |
| Greek | 92.35 | 753.50 (225.36) | 728.50 (230.91) | 25.00 |
| Hebrew | 96.73 | 569.50 (98.59) | 574.50 (110.45) | -5.00 |
| Hindi | 91.32 | 638.50 (207.19) | 662.00 (228.32) | -23.50 |
| Hungarian | 96.47 | 623.00 (111.94) | 643.00 (129.73) | -20.00 |
| Norwegian | 96.93 | 592.50 (126.39) | 612.00 (136.03) | -19.50 |
| Polish | 96.11 | 601.00 (139.36) | 586.00 (108.23) | 15.00 |
| Portuguese | 95.01 | 616.50 (144.55) | 607.00 (145.29) | 9.50 |
| Serbian | 94.78 | 617.75 (158.64) | 635.00 (168.28) | -17.25 |
| Simplified Chinese | 92.39 | 655.00 (170.50) | 642.50 (158.64) | 12.50 |
| Slovak | 96.45 | 610.50 (125.28) | 607.25 (117.87) | 3.25 |
| Spanish | 94.32 | 663.00 (147.52) | 676.00 (154.19) | -13.00 |
| Thai | 93.92 | 652.50 (177.91) | 637.75 (130.10) | 14.75 |
| Traditional Chinese | 94.41 | 625.00 (139.36) | 620.00 (123.06) | 5.00 |
| Turkish | 95.38 | 654.50 (146.04) | 637.00 (126.02) | 17.50 |

Note. Average accuracy percentage; median response times and median absolute deviations (in parentheses) per match condition (Mismatching, Matching); match advantage (difference in response times).
Figure 3
Meta-analysis of the match advantage of object orientation for all languages. Diamonds indicate summary estimates, with the midpoint of each diamond indicating the point estimate and the left
and right endpoints indicating the lower and upper bounds of the confidence interval of the
estimated effect size. The lowermost diamond represents the estimate derived from the whole
dataset.
as a random intercept, also showing model improvement, AIC = 969265.28, and the random intercept of language was added last, AIC = 969263.66, which did not improve the model by at least 2 points. Last, the fixed effect of match advantage was added, with approximately the same fit as the three-random-intercept model, AIC = 969265.06. This model did not reveal a significant effect of match advantage: b = -0.17, SE = 1.20, t(69830.14) = -0.14, p = .887.
We conducted an exploratory mixed-effects model on the German data, as this was the only language indicating a significant match advantage in the meta-analysis. An intercept-only model with random effects for participants, target, and lab was used as a comparison, AIC = 55828.57. The addition of the fixed effect of match showed a small improvement over this random-intercept model, AIC = 55824.52. Whereas the AIC values indicated a meaningful change, the p-value did not reveal a significant effect of match advantage: b = 4.84, SE = 4.12, t(4085.71) = 1.17, p = .241. All details of the above fixed effects and random intercepts are summarized in Appendix D.
Hypothesis 2: Mental Rotation Scores
Using the same steps as described for the sentence-picture verification mixed model, we first started with an intercept-only model with no random effects for comparison, AIC = 1029362.78. The addition of random intercepts by subject, AIC = 979873.47, by item, AIC = 977037.64, by lab, AIC = 976721.45, and by language, AIC = 976717.46, each subsequently improved model fit. Next, the match effect for object orientation was entered as the fixed effect underlying the mental rotation score, AIC = 973054.93, which showed improvement over the random-intercepts model. This model showed a significant effect of object orientation, b = 32.30, SE = 0.53, t(79585.24) = 61.23, p < .001, such that identical orientations were processed faster than rotated orientations. The point estimates of the orientation effect varied between 23.79 and 40.24 ms, a range of roughly 16 ms across languages. The coefficients of all mixed-effects models are reported in Appendix E, along with all effects presented by language.
Hypothesis 2: Prediction of Match Advantage
The last analysis included a mixed-effects regression model using the interaction of language and mental rotation scores to predict match advantage in the sentence-picture task. First, an intercept-only model was calculated for comparison, AIC = 42678.66, which was improved slightly by adding a random intercept by data collection lab, AIC = 42677.80. The addition of the fixed-effects interaction of language and mental rotation score improved the overall model, AIC = 42633.44. English was used as the comparison group for all language comparisons. Neither the mental rotation score nor the interaction of mental rotation score and language was significant; these results are detailed in Appendix E.
Discussion
This study aimed to test a global object orientation effect and to estimate the magnitude of the object orientation effect in each particular language. The findings of our study did not support the existence of the object orientation effect as an outcome of general cognitive function. Furthermore, our data failed to replicate the effects in English and Chinese, languages in which the effect has been reported previously (Chen et al., 2020; Stanfield & Zwaan, 2001; Zwaan & Pecher, 2012). The only language in which we found an indication of the orientation effect in the predicted direction was German, but this effect was evident only in the meta-analysis and not in the mixed-effects model approach. Although tangential to our topic, an effect of mental rotation was observed, such that identical orientations were processed faster than rotated orientations. However, the mental rotation score neither predicted the object orientation effects nor interacted with language. Overall, the failure to replicate the previously reported object orientation effects casts doubt on the existence of the effect as a language-general phenomenon (Kaschak & Madden, 2021). Below, we summarize the lessons and limitations of the methodology and analysis, and discuss theoretical issues related to the orientation effect as an effective probe for investigating the mental simulation process.
Methodological Considerations
By examining the failed replications of the object orientation effect in the English-language labs (see Figure 3), researchers can further identify possible factors that may have contributed to the discrepancies between the results of this project and the original studies. Although our project had a larger sample of English-speaking participants than the original studies (i.e., Stanfield & Zwaan, 2001; Zwaan & Pecher, 2012), our English-speaking participants came from multiple countries where participants' lexical knowledge is not completely consistent with American English. Although we prepared an alternative version of the stimuli for British English, these two versions of the English stimuli did not cover all English language backgrounds, such as participants from Malaysia and Africa. Despite the overall non-significant effect in all English-language data, the meta-analysis indicated three significant positive team-based effects (USA_173, USA_030, and USA_032; see Figure 3) but also three significant negative effects (USA_33, USA_20, and GBR_005; see Figure 3). Future cross-linguistic studies should attempt to balance sample sizes across languages to allow reliable cross-linguistic comparisons.
Regarding the failed replication of the Chinese orientation effects, the past study used simpler sentence content than this project. Chen et al. (2020) used probe sentences in which the target objects were the subject of the sentence (e.g., The nail was hammered into the wall; bold added to mark the subject noun). The Chinese probe sentences in this project were translated from the English sentences used in Stanfield and Zwaan (2001), in which the target objects are the object of the sentence (e.g., The carpenter hammered the nail into the wall; bold added to mark the object noun). It is possible that the object orientation effect may be present or stronger when the target object is the subject of the sentence rather than the direct object, and future studies could explore this distinction.
Lastly, past studies that employed a secondary task among the experimental trials (Chen et al., 2020; Kaschak & Madden, 2021; Stanfield & Zwaan, 2001) showed a positive object orientation effect. In our study, the memory check did not increase the likelihood of detecting the mental simulation effects. In addition, we did not find that mental imagery predicted match advantage, which implies that this strategy to ensure linguistic processing had limited influence in our study.
Analysis Considerations
The orientation effects were analyzed using a meta-analytic approach and mixed-effects models. Neither approach revealed an overall effect of object orientation. In the language-by-language analysis, a significant orientation effect was found in the German data in the meta-analysis. The mixed-model analysis did not confirm this result because the effect in the German data was not significant according to our pre-registered test criteria. There is considerable debate in the statistical community regarding the precision of the p values computed for linear mixed models (Bolker, 2015). One alternative, less conservative approach to testing the significance of a fixed-effect predictor is assessing the difference in the AIC model fit index between a model that contains the fixed-effect predictor and one that does not (Matuschek et al., 2017). Using this approach in an exploratory analysis, we found that the effect of orientation in the German data was not negligible, rendering this result compatible with the result obtained for German in the meta-analysis. However, considered in the general context of all the other results, the present exploratory result for German could stem from measurement error (Loken & Gelman, 2017) or from family-wise error (Armstrong, 2014).
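The AIC comparison described above can be sketched in a few lines. This is an illustration of the comparison logic only; the log-likelihoods below are made up, not the study's actual model fits:

```python
import math

def aic(log_likelihood, n_params):
    # AIC = 2k - 2*ln(L); lower values indicate a better fit
    return 2 * n_params - 2 * log_likelihood

# Hypothetical log-likelihoods for two nested mixed models
ll_null = -451230.5   # intercept + random effects only (5 parameters)
ll_match = -451228.9  # adds the orientation-match fixed effect (6 parameters)

delta = aic(ll_null, 5) - aic(ll_match, 6)
# delta > 0 favors the model with the predictor; a common rule of thumb
# treats differences below about 2 as negligible (Burnham & Anderson, 1998)
print(round(delta, 1))
```

Note that adding a parameter must improve the log-likelihood by more than one unit to pay its AIC penalty, which is why this criterion is less conservative than a significance test on the coefficient.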
When a topic area yields inconsistent or small effects, some researchers have questioned the utility of further research (Brysbaert, 2020; Sala & Gobet, 2017). However, research on embodied cognition should continue with the aim of determining the factors behind the variability of the effects. One of these factors could be the nature of the variables used, for instance, categorical versus continuous. The object orientation design is a factorial, congruency paradigm based on congruent (matching) and incongruent (mismatching) conditions. Another paradigm with similar characteristics, the action sentence compatibility effect, similarly failed to replicate in a large-scale study (Morey et al., 2022). Whereas factorial paradigms require the use of categorical variables, other studies have operationalized sensorimotor information using continuous variables and observed significant effects (Bernabeu, 2022; Lynott et al., 2020; Petilli et al., 2021). Since continuous variables contain more information, they may afford more statistical power (J. Cohen, 1983). Furthermore, in addition to the categorical versus continuous distinction, sensorimotor effects are likely to be moderated by factors influencing participants' attention during experiments (Barsalou, 2019; Noah et al., 2018). Last, due to publication bias, the true size of sensorimotor effects is likely to be smaller than that observed in small-sample studies (Vasishth & Gelman, 2021). Indeed, studying these effects reliably may require samples exceeding 1,000 participants (Bernabeu, 2022). In summary, addressing the above issues may provide the analytic sensitivity needed to observe the presence and causes of object orientation effects.
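The information loss caused by dichotomizing a continuous predictor (J. Cohen, 1983) can be illustrated with a small simulation. The effect size and sample size below are arbitrary; for a normally distributed predictor, a median split is expected to attenuate the correlation by a factor of roughly sqrt(2/pi), about 0.80:

```python
import random
import statistics

random.seed(1)

def pearson_r(xs, ys):
    # Plain Pearson correlation, computed from scratch
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

n = 20000
x = [random.gauss(0, 1) for _ in range(n)]
# outcome linearly related to x plus noise (slope and noise are arbitrary)
y = [0.3 * xi + random.gauss(0, 1) for xi in x]

r_cont = pearson_r(x, y)
# a median split discards all information within each half
med = statistics.median(x)
x_dich = [1.0 if xi > med else 0.0 for xi in x]
r_dich = pearson_r(x_dich, y)

print(round(r_dich / r_cont, 2))  # attenuation ratio, close to 0.80
```

The attenuated correlation translates directly into lower power at a fixed sample size, which is one reason continuous sensorimotor norms may detect effects that factorial match/mismatch designs miss.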
Theoretical Considerations
Scholars interested in mental simulation have investigated whether the human mind processes linguistic content as abstract symbols or as grounded mental representations (Barsalou, 1999, 2008; Zwaan, 2014b). Some of the tasks used to test these theories, such as the sentence-verification task, rely on priming-based logic, whereby a designed sentence generates representations along some dimension (such as orientation) that facilitate or interfere with the processing of the subsequent stimulus (Roelke et al., 2018). Furthermore, embodied cognition theories suggest that reading the sentence will activate perceptual experience, thus facilitating a matching object picture and causing interference for a mismatching picture (Kaschak & Madden, 2021; McNamara, 2005). To scrutinize these effects, future studies could augment the sentence-picture verification task to compare the degree of priming based on object orientation with the priming based on other semantic information. The present study constitutes the first large-scale, cross-linguistic approach to the object orientation effect. Cross-linguistic studies are rare on the present topic, and generally in the area of conceptual/semantic processing. In future studies, the basis for cross-linguistic comparisons in conceptual processing should be expanded, for instance, by studying the lexicosemantic features of the stimuli used, how those differ across languages, and how those differences may influence psycholinguistic processing. For the development of this founding work, the field of linguistic relativity may be useful as a model (e.g., Athanasopoulos, 2023).
In addition, further research should compare the size of mental simulation effects with the size of effects associated with the symbolic account of conceptual processing. The symbolic account posits that conceptual processing (i.e., the comprehension of the meaning of words) depends on abstract symbols (e.g., propositions and production rules). So far, some of these comparisons have supported both accounts. However, in some studies, the effects of the symbolic account have been larger than those of the embodied account (Bernabeu, 2022; Louwerse et al., 2015), whereas the reverse has been true in other studies (Fernandino et al., 2022; Tong et al., 2022).
Limitations
This study reflects the challenges of assessing the mental simulation of object orientation across languages, especially when dealing with effects that require large sample sizes (see Loken & Gelman, 2017; Vadillo et al., 2016). Our data collection deviated from the pre-registered plan because of the COVID-19 pandemic. Due to the lack of participant monitoring online, and after an inspection of the data, we applied post-hoc outlier filtering on participants' response times, excluding responses that were too fast (< 160 ms) or too slow (2 MAD beyond the median for each participant individually). After these exclusions, a mixed-effects model confirmed no difference in response times between in-person and online data. Future studies could evaluate how the task environment alters the magnitude of the orientation effect.
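As a sketch, the per-participant exclusion rule described above could be implemented as follows. The function name and example response times are ours, and this is one reading of the rule rather than the project's actual code (which was written in R):

```python
import statistics

def filter_rts(rts, floor_ms=160, mad_mult=2):
    """Drop implausibly fast responses, then slow outliers, per participant.

    Responses below floor_ms are removed first; then responses more than
    mad_mult median absolute deviations above that participant's median
    are treated as too slow and removed.
    """
    kept = [rt for rt in rts if rt >= floor_ms]
    if not kept:
        return []
    med = statistics.median(kept)
    mad = statistics.median(abs(rt - med) for rt in kept)
    hi = med + mad_mult * mad
    return [rt for rt in kept if rt <= hi]

# one participant's response times in milliseconds (made-up values)
print(filter_rts([120, 450, 480, 500, 510, 530, 560, 2400]))
# -> [450, 480, 500, 510, 530, 560]: 120 ms is below the floor,
#    2400 ms is more than 2 MAD above the participant's median
```

Because the MAD is computed per participant, the slow-response cutoff adapts to each individual's typical speed rather than imposing one global threshold.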
Conclusion
Based on the results of this project, we did not find evidence for a general object orientation effect across languages. Our findings on the orientation effects question the theoretical importance of mental simulation in linguistic processing, but they also provide directions for new avenues of investigation.
References
Anwyl-Irvine, A. L., Massonnié, J., Flitton, A., Kirkham, N., & Evershed, J. K. (2020). Gorilla in our midst: An online behavioral experiment builder. Behavior Research Methods, 52(1), 388–407. https://doi.org/10.3758/s13428-019-01237-x
Armstrong, R. A. (2014). When to use the Bonferroni correction. Ophthalmic and Physiological Optics, 34(5), 502–508. https://doi.org/10.1111/opo.12131
Athanasopoulos, P. (2023). Linguistic relativity (J. Culpeper, Ed.; pp. 469–477). Routledge.
Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–660. https://doi.org/10.1017/S0140525X99002149
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645. https://doi.org/10.1146/annurev.psych.59.103006.093639
Barsalou, L. W. (2019). Establishing generalizable mechanisms. Psychological Inquiry, 30(4), 220–230. https://doi.org/10.1080/1047840X.2019.1693857
Barsalou, L. W. (2020). Challenges and opportunities for grounding cognition. Journal of Cognition, 3(1), 31. https://doi.org/10.5334/joc.116
Bernabeu, P. (2022). Language and sensorimotor simulation in conceptual processing: Multilevel analysis and statistical power. https://doi.org/10.17635/LANCASTER/THESIS/1795
Bolker, B. M. (2015). Linear and generalized linear mixed models (G. A. Fox, S. Negrete-Yankelevich, & V. J. Sosa, Eds.; pp. 309–333). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199672547.003.0014
Bridges, D., Pitiot, A., MacAskill, M. R., & Peirce, J. W. (2020). The timing mega-study: Comparing a range of experiment generators, both lab-based and online. PeerJ, 8, e9414. https://doi.org/10.7717/peerj.9414
Brysbaert, M. (2020). Power considerations in bilingualism research: Time to step up our game. Bilingualism: Language and Cognition, 24(5), 813–818. https://doi.org/10.1017/s1366728920000437
Burnham, K. P., & Anderson, D. R. (1998). Practical use of the information-theoretic approach (pp. 75–117). Springer New York. https://doi.org/10.1007/978-1-4757-2917-7_3
Chen, S.-C., de Koning, B. B., & Zwaan, R. A. (2020). Does object size matter with regard to the mental simulation of object orientation? Experimental Psychology, 67(1), 56–72. https://doi.org/10.1027/1618-3169/a000468
Chu, M., & Kita, S. (2008). Spontaneous gestures during mental rotation tasks: Insights into the microdevelopment of the motor strategy. Journal of Experimental Psychology: General, 137(4), 706–723. https://doi.org/10.1037/a0013157
Cohen, D., & Kubovy, M. (1993). Mental rotation, mental representation, and flat slopes. Cognitive Psychology, 25, 351–382. https://doi.org/10.1006/cogp.1993.1009
Cohen, J. (1983). The cost of dichotomization. Applied Psychological Measurement, 7(3), 249–253. https://doi.org/10.1177/014662168300700301
Connell, L. (2007). Representing object colour in language comprehension. Cognition, 102, 476–485. https://doi.org/10.1016/j.cognition.2006.02.009
De Koning, B. B., Wassenburg, S. I., Bos, L. T., & Van der Schoot, M. (2017). Mental simulation of four visual object properties: Similarities and differences as assessed by the sentence-picture verification task. Journal of Cognitive Psychology, 29(4), 420–432. https://doi.org/10.1080/20445911.2017.1281283
de Leeuw, J. R., & Motz, B. A. (2016). Psychophysics in a Web browser? Comparing response times collected with JavaScript and Psychophysics Toolbox in a visual search task. Behavior Research Methods, 48(1), 1–12. https://doi.org/10.3758/s13428-015-0567-2
Engelen, J. A. A., Bouwmeester, S., de Bruin, A. B. H., & Zwaan, R. A. (2011). Perceptual simulation in developing language comprehension. Journal of Experimental Child Psychology, 110(4), 659–675. https://doi.org/10.1016/j.jecp.2011.06.009
Fernandino, L., Tong, J.-Q., Conant, L. L., Humphries, C. J., & Binder, J. R. (2022). Decoding the information structure underlying the neural representation of concepts. Proceedings of the National Academy of Sciences, 119(6), e2108091119. https://doi.org/10.1073/pnas.2108091119
Frick, A., & Möhring, W. (2013). Mental object rotation and motor development in 8- and 10-month-old infants. Journal of Experimental Child Psychology, 115(4), 708–720. https://doi.org/10.1016/j.jecp.2013.04.001
Fritz, C. O., Morris, P. E., & Richler, J. J. (2012). Effect size estimates: Current use, calculations, and interpretation. Journal of Experimental Psychology: General, 141(1), 2–18. https://doi.org/10.1037/a0024338
Kaschak, M. P., & Madden, J. (2021). Embodiment in the lab: Theory, measurement, and reproducibility. In M. D. Robinson & L. E. Thomas (Eds.), Handbook of embodied psychology (pp. 619–635). Springer International Publishing. https://doi.org/10.1007/978-3-030-78471-3_27
Koster, D., Cadierno, T., & Chiarandini, M. (2018). Mental simulation of object orientation and size: A conceptual replication with second language learners. Journal of the European Second Language Association, 2(1). https://doi.org/10.22599/jesla.39
Kvålseth, T. O. (2021). Hick's law equivalent for reaction time to individual stimuli. British Journal of Mathematical and Statistical Psychology, 74(S1), 275–293. https://doi.org/10.1111/bmsp.12232
Lange, K., Kühn, S., & Filevich, E. (2015). "Just Another Tool for Online Studies" (JATOS): An easy solution for setup and management of web servers supporting online studies. PLOS ONE, 10(6), e0130834. https://doi.org/10.1371/journal.pone.0130834
Li, Y., & Shang, L. (2017). An ERP study on the mental simulation of implied object color information during Chinese sentence comprehension. Journal of Psychological Science, 40(1), 29–36. https://doi.org/10.16719/j.cnki.1671-6981.20170105
Loken, E., & Gelman, A. (2017). Measurement error and the replication crisis. Science, 355(6325), 584–585. https://doi.org/10.1126/science.aal3618
Louwerse, M. M., Hutchinson, S., Tillman, R., & Recchia, G. (2015). Effect size matters: The role of language statistics and perceptual simulation in conceptual processing. Language, Cognition and Neuroscience, 30(4), 430–447. https://doi.org/10.1080/23273798.2014.981552
Luke, S. G. (2017). Evaluating significance in linear mixed-effects models in R. Behavior Research Methods, 49(4), 1494–1502. https://doi.org/10.3758/s13428-016-0809-y
Lynott, D., Connell, L., Brysbaert, M., Brand, J., & Carney, J. (2020). The Lancaster Sensorimotor Norms: Multidimensional measures of perceptual and action strength for 40,000 English words. Behavior Research Methods, 52(3), 1271–1291. https://doi.org/10.3758/s13428-019-01316-z
Mathôt, S., & March, J. (2022). Conducting linguistic experiments online with OpenSesame and OSWeb. Language Learning, 72(4), 1017–1048. https://doi.org/10.1111/lang.12509
Mathôt, S., Schreij, D., & Theeuwes, J. (2012). OpenSesame: An open-source, graphical experiment builder for the social sciences. Behavior Research Methods, 44(2), 314–324. https://doi.org/10.3758/s13428-011-0168-7
Matuschek, H., Kliegl, R., Vasishth, S., Baayen, H., & Bates, D. (2017). Balancing Type I error and power in linear mixed models. Journal of Memory and Language, 94, 305–315. https://doi.org/10.1016/j.jml.2017.01.001
McNamara, T. P. (2005). Semantic priming: Perspectives from memory and word recognition. Psychology Press.
Morey, R. D., Kaschak, M. P., Díez-Álamo, A. M., Glenberg, A. M., Zwaan, R. A., Lakens, D., Ibáñez, A., García, A., Gianelli, C., Jones, J. L., Madden, J., Alifano, F., Bergen, B., Bloxsom, N. G., Bub, D. N., Cai, Z. G., Chartier, C. R., Chatterjee, A., Conwell, E., … Ziv-Crispel, N. (2022). A pre-registered, multi-lab non-replication of the action-sentence compatibility effect (ACE). Psychonomic Bulletin & Review, 29(2), 613–626. https://doi.org/10.3758/s13423-021-01927-8
Moshontz, H., Campbell, L., Ebersole, C. R., IJzerman, H., Urry, H. L., Forscher, P. S., Grahe, J. E., McCarthy, R. J., Musser, E. D., Antfolk, J., Castille, C. M., Evans, T. R., Fiedler, S., Flake, J. K., Forero, D. A., Janssen, S. M. J., Keene, J. R., Protzko, J., Aczel, B., … Chartier, C. R. (2018). The Psychological Science Accelerator: Advancing psychology through a distributed collaborative network. Advances in Methods and Practices in Psychological Science, 1(4), 501–515. https://doi.org/10.1177/2515245918797607
Newman, J. (2002). A cross-linguistic overview of the posture verbs "sit," "stand," and "lie." In J. Newman (Ed.), Typological Studies in Language (Vol. 51, pp. 1–24). John Benjamins Publishing Company. https://doi.org/10.1075/tsl.51.02new
Noah, T., Schul, Y., & Mayo, R. (2018). When both the original study and its failed replication are correct: Feeling observed eliminates the facial-feedback effect. Journal of Personality and Social Psychology, 114(5), 657–664. https://doi.org/10.1037/pspa0000121
Ostarek, M., & Huettig, F. (2019). Six challenges for embodiment research. Current Directions in Psychological Science, 28(6), 593–599. https://doi.org/10.1177/0963721419866441
Pecher, D., van Dantzig, S., Zwaan, R. A., & Zeelenberg, R. (2009). Language comprehenders retain implied shape and orientation of objects. The Quarterly Journal of Experimental Psychology, 62(6), 1108–1114. https://doi.org/10.1080/17470210802633255
Petilli, M. A., Günther, F., Vergallito, A., Ciapparelli, M., & Marelli, M. (2021). Data-driven computational models reveal perceptual simulation in word processing. Journal of Memory and Language, 117, 104194. https://doi.org/10.1016/j.jml.2020.104194
Phills, C., Kekecs, Z., & Chartier, C. (2022). Pre-registration. https://doi.org/10.17605/OSF.IO/4HMGS
Pouw, W. T. J. L., de Nooijer, J. A., van Gog, T., Zwaan, R. A., & Paas, F. (2014). Toward a more embedded/extended perspective on the cognitive function of gestures. Frontiers in Psychology, 5. https://doi.org/10.3389/fpsyg.2014.00359
Proctor, R. W., & Schneider, D. W. (2018). Hick's law for choice reaction time: A review. Quarterly Journal of Experimental Psychology, 71(6), 1281–1299. https://doi.org/10.1080/17470218.2017.1322622
Roelke, A., Franke, N., Biemann, C., Radach, R., Jacobs, A. M., & Hofmann, M. J. (2018). A novel co-occurrence-based approach to predict pure associative and semantic priming. Psychonomic Bulletin & Review, 25(4), 1488–1493. https://doi.org/10.3758/s13423-018-1453-6
Rommers, J., Meyer, A. S., & Huettig, F. (2013). Object shape and orientation do not routinely influence performance during language processing. Psychological Science, 24(11), 2218–2225. https://doi.org/10.1177/0956797613490746
Sala, G., & Gobet, F. (2017). Does far transfer exist? Negative evidence from chess, music, and working memory training. Current Directions in Psychological Science, 26(6), 515–520. https://doi.org/10.1177/0963721417712760
Sato, M., Schafer, A. J., & Bergen, B. K. (2013). One word at a time: Mental representations of object shape change incrementally during sentence processing. Language and Cognition, 5(4), 345–373. https://doi.org/10.1515/langcog-2013-0022
Schönbrodt, F. D., Wagenmakers, E.-J., Zehetleitner, M., & Perugini, M. (2017). Sequential hypothesis testing with Bayes factors: Efficiently testing mean differences. Psychological Methods, 22(2), 322–339. https://doi.org/10.1037/met0000061
Scorolli, C. (2014). Embodiment and language. In L. Shapiro (Ed.), The Routledge handbook of embodied cognition (pp. 145–156). Routledge.
Šetić, M., & Domijan, D. (2017). Numerical congruency effect in the sentence-picture verification task. Experimental Psychology, 64(3), 159–169. https://doi.org/10.1027/1618-3169/a000358
Stanfield, R. A., & Zwaan, R. A. (2001). The effect of implied orientation derived from verbal context on picture recognition. Psychological Science, 12(2), 153–156. https://doi.org/10.1111/1467-9280.00326
Tong, J., Binder, J. R., Humphries, C., Mazurchuk, S., Conant, L. L., & Fernandino, L. (2022). A distributed network for multimodal experiential representation of concepts. The Journal of Neuroscience, 42(37), 7121–7130. https://doi.org/10.1523/JNEUROSCI.1243-21.2022
Vadillo, M. A., Konstantinidis, E., & Shanks, D. R. (2016). Underpowered samples, false negatives, and unconscious learning. Psychonomic Bulletin & Review, 23(1), 87–102. https://doi.org/10.3758/s13423-015-0892-6
Vasishth, S., & Gelman, A. (2021). How to embrace variation and accept uncertainty in linguistic and psycholinguistic data analysis. Linguistics, 59(5), 1311–1342. https://doi.org/10.1515/ling-2019-0051
Vazire, S. (2018). Implications of the credibility revolution for productivity, creativity, and progress. Perspectives on Psychological Science, 13(4), 411–417. https://doi.org/10.1177/1745691617751884
Verkerk, A. (2014). The correlation between motion event encoding and path verb lexicon size in the Indo-European language family. Folia Linguistica, 35(1). https://doi.org/10.1515/flih.2014.009
Wilson, M. (2002). Six views of embodied cognition. Psychonomic Bulletin & Review, 9(4), 625–636. https://doi.org/10.3758/BF03196322
Zwaan, R. A. (2014a). Embodiment and language comprehension: Reframing the discussion. Trends in Cognitive Sciences, 18(5), 229–234. https://doi.org/10.1016/j.tics.2014.02.008
Zwaan, R. A. (2014b). Replications should be performed with power and precision: A response to Rommers, Meyer, and Huettig (2013). Psychological Science, 25(1), 305–307. https://doi.org/10.1177/0956797613509634
Zwaan, R. A., Pecher, D., Paolacci, G., Bouwmeester, S., Verkoeijen, P., Dijkstra, K., & Zeelenberg, R. (2017). Participant nonnaiveté and the reproducibility of cognitive psychology. Psychonomic Bulletin & Review, 1–5. https://doi.org/10.3758/s13423-017-1348-y
Zwaan, R. A., & Madden, C. J. (2005). Embodied sentence comprehension. In D. Pecher & R. A. Zwaan (Eds.), Grounding cognition: The role of perception and action in memory, language, and thinking (pp. 224–245). Cambridge University Press.
Zwaan, R. A., & Pecher, D. (2012). Revisiting mental simulation in language comprehension: Six replication attempts. PLoS ONE, 7, e51382. https://doi.org/10.1371/journal.pone.0051382
Zwaan, R. A., Stanfield, R. A., & Yaxley, R. H. (2002). Language comprehenders mentally represent the shapes of objects. Psychological Science, 13, 168–171. https://doi.org/10.1111/1467-9280.00430
Zwaan, R. A., & van Oostendorp, H. (1993). Do readers construct spatial representations in naturalistic story comprehension? Discourse Processes, 16(1–2), 125–143. https://doi.org/10.1080/01638539309544832
Appendix A
Sensitivity Analyses
The R code for the trial-level sensitivity analysis was written by Erin M. Buchanan.
Load data and run models
The data for the sensitivity analysis used the same exclusion criteria as the pre-registered mixed-effects models. The first step is to determine whether there is a minimum number of trials required for stable results.
View the Results
b values
These values represent the b values found for each run requiring from 3 up to 12 correct trials.
-0.17, -0.17, -0.17, -0.17, -0.18, -0.12, 0.49, -0.14, 0.67, and 3.11
p values
These values represent the p values found for each run requiring from 3 up to 12 correct trials.
.887, .887, .887, .890, .880, .918, .687, .913, .647, .150
As we can see, the effect is generally negative until participants were required to have 7–12 correct trials. When participants accurately answered all 12 trials, the effect was approximately 3 ms. Examination of the p values indicates that no coefficients would have been considered significant.
Calculate the Sensitivity
Given we used all data points, the smallest detectable effect with our standard error and degrees of freedom would have been:
## [1] 2.356441
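Up to rounding of the reported standard error, this sensitivity value is the critical t statistic multiplied by the standard error of the MatchN fixed effect (SE = 1.20 in Table D6). With roughly 70,000 denominator degrees of freedom, the critical t is essentially the normal quantile:

```python
t_crit = 1.96     # two-tailed, alpha = .05, at very large df
se_match = 1.20   # rounded SE of MatchN from Table D6
smallest_effect = t_crit * se_match
print(round(smallest_effect, 2))  # close to the reported 2.356441
```

The small discrepancy from 2.356441 reflects the rounding of the SE in the table; the exact computation uses the unrounded SE and the model's estimated degrees of freedom.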
Appendix B
Data Collection Logs
The log website went live when data collection began. The public logs were updated whenever a laboratory updated its data for the sequential analysis. The public site is available at: https://scgeeker.github.io/PSA002_log_site/index.html
To check the sequential analysis result of a particular laboratory, first identify the ID and language of that laboratory on the "Overview" page. Next, navigate to the language page under the banner "Tracking Logs". For example, to see the result of "GBR_005", navigate to "Tracking Logs -> English" and search for the figure by ID "GBR_005".
The source files of the public logs are available in the GitHub repository: https://github.com/SCgeeker/PSA002_log_site
All the raw data and log files are compressed in the project OSF repository: https://osf.io/e428p/
The R code for the Bayesian sequential analysis is available in "data_seq_analysis.R", which can be found at: https://github.com/SCgeeker/PSA002_log_site
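The actual sequential analysis used Bayes factors computed in R (see Schönbrodt et al., 2017, cited in the References). As a rough, hypothetical illustration of the stopping logic only, the sketch below combines a BIC-based Bayes factor approximation with symmetric decision bounds; the bounds, function names, and Bayes factor trajectory are all made up for illustration:

```python
import math

def bf10_bic(bic_null, bic_alt):
    # BIC shortcut to the Bayes factor: BF10 ~ exp((BIC_null - BIC_alt) / 2)
    return math.exp((bic_null - bic_alt) / 2)

def sequential_decision(bfs, upper=10.0, lower=1 / 10):
    # Stop as soon as the accumulating Bayes factor crosses either bound
    for i, bf in enumerate(bfs, start=1):
        if bf >= upper:
            return i, "support H1"
        if bf <= lower:
            return i, "support H0"
    return len(bfs), "undecided"

# made-up BF10 trajectory across successive batches of participants
print(sequential_decision([1.2, 0.8, 0.4, 0.15, 0.08]))
# -> (5, 'support H0'): the lower bound is first crossed at batch 5
```

The appeal of this design, as in the project's logs, is that a lab can stop collecting data as soon as the evidence is decisive in either direction, rather than at a fixed sample size.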
Note 1. USA_067, BRA_004, and POL_004 were unavailable because the teams withdrew.
Note 2. Some mistakes occurred in communication between collaborators and required additional data wrangling. For example, some AUS_091 participants were assigned to NZL_005. The Rmd file in the NZL_005 folder was used to identify the AUS_091 participants' data and move them to the AUS_091 folder.
Datasets
Complete data can be found online with this manuscript or on each collaborator's OSF page. Please see Lab_Info.csv at https://osf.io/e428p/.
Fluency test for the online study
At the beginning of the online study, participants heard verbal instructions narrated by a native speaker. The original English transcript is as follows:
"In this session you will complete two tasks. The first task is called the sentence picture verification task. In this task, you will read a sentence. You will then see a picture. Your job is to verify whether the picture represents an object that was described in the sentence or not. The second task is the picture verification task. In this task you will see two pictures on the screen at the same time and determine whether they are the same or different. Once you have completed both tasks, you will receive a completion code that you can use to verify your participation in the study."
The fluency test comprises three multiple-choice questions. The question text and the correct answers (marked with an asterisk) are as follows:
• How many tasks will you run in this session?
A: 1 *B: 2 C: 3
• When will you get the completion code?
A: Before the second task B: After the first task *C: After the second task
• What will you do in the sentence-picture verification task?
A: Confirm two pictures for their objects *B: Read a sentence and verify a picture C: Judge sentences for their accuracy
Distribution of scripts
The instructions and experimental scripts are available in the public OSF folder (https://osf.io/e428p/, "Materials/js" folder in Files). To upload to a JATOS server, a script had to be converted into a compatible package. Researchers can perform this conversion with the "OSWEB" package in OpenSesame. We rented a remote server for distributing the scripts during the data collection period. Any researcher could distribute the scripts on a free JATOS server such as MindProbe (https://www.mindprobe.eu/).
Appendix C
Demographic Characteristics by Language
Table C1
Demographic and Sample Size Characteristics by Language Part 1
Language SP Trials PP Trials SP N PP N Demo N Female N Male N M Age SD Age
Arabic 1248 1248 52 52 53 0 0 38.00 NaN
Arabic 1296 1296 54 54 54 42 12 26.51 18.59
Brazilian Portuguese 1200 1200 50 50 50 36 13 30.80 8.73
English 2376 2376 99 99 103 46 37 20.14 3.32
English 3840 3840 160 160 160 127 25 26.03 11.55
English 2352 2376 98 99 104 54 40 20.26 3.66
English 1272 1272 53 53 76 57 13 19.96 3.90
English 1200 1200 50 50 51 37 13 20.14 2.46
English 1200 1200 50 50 58 46 11 18.74 1.62
English 720 720 30 30 32 15 11 25.70 9.40
English 1200 1224 50 51 52 38 11 22.56 3.90
English 2400 2400 100 100 109 65 30 20.73 2.00
English 1248 1248 52 52 52 24 22 23.94 11.29
English 7680 7680 320 320 320 244 56 23.21 5.43
English 1248 1272 52 53 71 50 12 18.89 0.95
Note. SP = Sentence Picture Verification, PP = Picture Picture Verification. Sample sizes for demographics may be higher than the sample size for this study, as participants could have completed only the bundled experiment. Additionally, not all entries could be unambiguously matched by lab ID, and therefore demographic sample sizes could also be less than the data collected. Each row represents a single lab.
Table C2
Demographic and Sample Size Characteristics by Lab Part 2
Language SP Trials PP Trials SP N PP N Demo N Female N Male N M Age SD Age
English 1536 1536 64 64 102 79 11 19.82 2.42
English 264 264 11 11 12 9 2 20.36 1.91
English 288 288 12 12 12 6 5 21.17 1.19
English 1512 1512 63 63 63 30 23 22.34 11.55
English 7980 8064 333 336 403 258 76 19.63 2.12
English 648 648 27 27 31 20 3 36.00 0.96
English 1209 1224 51 51 51 30 21 19.29 1.51
English 3000 3024 125 126 129 90 25 20.06 1.36
English 1200 1200 50 50 61 35 15 18.86 1.63
English 816 744 34 31 3 0 3 19.67 0.58
German 2400 2400 100 100 114 0 1 20.94 2.56
German 2592 2592 108 108 108 80 22 22.18 4.26
German 624 624 26 26 26 18 3 23.88 3.39
Greek 2376 2376 99 99 109 0 0 33.86 11.30
Hebrew 3576 3571 149 149 181 0 0 24.25 9.29
Hindi 1896 1896 79 79 86 57 27 21.66 3.46
Magyar 3610 3816 151 159 168 3 1 21.50 2.82
Norwegian 504 504 21 21 21 12 8 30.10 8.58
Norwegian 1320 1320 55 55 53 1 1 23.55 6.25
Norwegian 1752 1752 73 73 80 0 0 22.00 4.38
Note. SP = Sentence Picture Verification, PP = Picture Picture Verification. Sample sizes for demographics may be higher than the sample size for this study, as participants could have completed only the bundled experiment. Additionally, not all entries could be unambiguously matched by lab ID, and therefore demographic sample sizes could also be less than the data collected.
Table C3
Demographic and Sample Size Characteristics by Lab Part 3
Language SP Trials PP Trials SP N PP N Demo N Female N Male N M Age SD Age
Polish 1368 1368 57 57 146 0 0 23.25 7.96
Portuguese 1488 1464 62 61 55 26 23 30.74 9.09
Serbian 3120 3120 130 130 130 108 21 21.38 4.50
Simplified Chinese 1200 1200 50 50 57 0 0 18.66 3.92
Simplified Chinese 840 816 35 34 39 0 1 25.17 5.44
Slovak 2419 2400 101 100 103 1 0 21.59 2.51
Slovak 1462 1199 61 50 222 0 0 21.96 2.14
Spanish 1680 1656 70 69 70 0 0 21.36 3.36
Spanish 1440 1440 60 60 76 0 0 22.10 4.30
Thai 1200 1152 50 48 50 29 9 21.54 3.81
Traditional Chinese 1440 1440 60 60 70 45 14 20.73 1.21
Traditional Chinese 2160 2160 90 90 116 24 32 21.04 3.66
Turkish 2184 2184 91 91 93 0 0 20.92 2.93
Turkish 1896 1896 79 79 80 36 14 21.58 8.64
Turkish 2376 2352 99 98 101 0 0 21.63 2.19
Note. SP = Sentence Picture Verification, PP = Picture Picture Verification. Sample sizes for demographics may be higher than the sample size for this study, as participants could have completed only the bundled experiment. Additionally, not all entries could be unambiguously matched by lab ID, and therefore demographic sample sizes could also be less than the data collected.
Appendix D
Model Estimates for Mental Simulation
All model estimates are given below for the planned mixed linear model to estimate the matching effect for object orientation in the sentence picture verification task.
Note. Fixed indicates fixed parameters in multilevel models, while "ran_pars" indicates the random intercepts included in the model.
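In standard mixed-model notation, the fixed-effects model reported in Table D6 can be written as follows (the subscript labels are ours, matching the random intercepts listed in that table):

```latex
\mathrm{RT}_{ijkl} = \beta_0 + \beta_1\,\mathrm{MatchN}_{ijkl}
  + u^{\mathrm{subject}}_{i} + u^{\mathrm{target}}_{j} + u^{\mathrm{lab}}_{k}
  + \varepsilon_{ijkl},
\qquad u \sim \mathcal{N}(0, \sigma_u^2), \quad
\varepsilon \sim \mathcal{N}(0, \sigma_\varepsilon^2)
```

Here \(\beta_1\) is the match effect of interest, and the sd__(Intercept) rows in the tables report the estimated standard deviations of the corresponding \(u\) terms.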
Table D1
Intercept Only Object Orientation Results
Term Estimate (b) SE t p
(Intercept) 654.71 0.84 775.11 < .001
Table D2
Subject-Random Intercept Object Orientation Results
Effect Group Term Estimate (b) SE t df p
fixed NA (Intercept) 654.26 2.69 243.34 3,787.12 < .001
ran_pars Subject sd__(Intercept) 161.40 NA NA NA
ran_pars Residual sd__Observation 165.05 NA NA NA
Table D3
Subject and Item-Random Intercept Object Orientation Results
Effect Group Term Estimate (b) SE t df p
fixed NA (Intercept) 655.47 5.16 126.97 84.63 < .001
ran_pars Subject sd__(Intercept) 161.37 NA NA NA
ran_pars Target sd__(Intercept) 30.54 NA NA NA
ran_pars Residual sd__Observation 162.17 NA NA NA
Table D4
Subject, Item, and Lab-Random Intercept Object Orientation Results
Effect Group Term Estimate (b) SE t df p
fixed NA (Intercept) 659.63 9.67 68.24 65.54 < .001
ran_pars Subject sd__(Intercept) 153.76 NA NA NA
ran_pars Target sd__(Intercept) 30.56 NA NA NA
ran_pars PSA_ID sd__(Intercept) 55.76 NA NA NA
ran_pars Residual sd__Observation 162.17 NA NA NA
Table D5
Subject, Item, Lab, and Language-Random Intercept Object Orientation Results
Effect Group Term Estimate (b) SE t df p
fixed NA (Intercept) 672.75 11.88 56.61 23.94 < .001
ran_pars Subject sd__(Intercept) 153.78 NA NA NA
ran_pars Target sd__(Intercept) 30.56 NA NA NA
ran_pars PSA_ID sd__(Intercept) 48.60 NA NA NA
ran_pars Language sd__(Intercept) 25.52 NA NA NA
ran_pars Residual sd__Observation 162.17 NA NA NA
Table D6
Fixed Effects Object Orientation Results
Effect Group Term Estimate (b) SE t df p
fixed NA (Intercept) 659.71 9.68 68.12 66.04 < .001
fixed NA MatchN -0.17 1.20 -0.14 69,830.14 .887
ran_pars Subject sd__(Intercept) 153.76 NA NA NA
ran_pars Target sd__(Intercept) 30.56 NA NA NA
ran_pars PSA_ID sd__(Intercept) 55.76 NA NA NA
ran_pars Residual sd__Observation 162.17 NA NA NA
Table D7
Random Effects German Object Orientation Results
Effect Group Term Estimate (b) SE t df p
fixed NA (Intercept) 631.96 19.65 32.15 1.74 .002
ran_pars Subject sd__(Intercept) 129.89 NA NA NA
ran_pars Target sd__(Intercept) 33.04 NA NA NA
ran_pars PSA_ID sd__(Intercept) 28.11 NA NA NA
ran_pars Residual sd__Observation 134.89 NA NA NA
Table D8
Fixed Effects German Object Orientation Results
Effect Group Term Estimate (b) SE t df p
fixed NA (Intercept) 629.52 19.74 31.90 1.77 .002
fixed NA MatchN 4.84 4.12 1.17 4,085.71 .241
ran_pars Subject sd__(Intercept) 129.90 NA NA NA
ran_pars Target sd__(Intercept) 33.06 NA NA NA
ran_pars PSA_ID sd__(Intercept) 28.05 NA NA NA
ran_pars Residual sd__Observation 134.88 NA NA NA
Appendix E
Model Estimates for Mental Rotation
All model estimates are given below for the linear mixed-effects model predicting mental rotation scores from orientation, and for the models predicting the mental simulation (object orientation) effects from the mental rotation scores.
Note. "fixed" indicates fixed-effect parameters in the multilevel models, while "ran_pars" indicates the random intercepts included in the model.
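The size of the rotation effect reported in Table E6 can also be expressed as an approximate interval estimate. A minimal sketch, assuming the estimate and standard error reported there and using a normal approximation for the critical value (reasonable given df ≈ 79,585, where t ≈ z):

```python
# Approximate 95% CI for the IdenticalN (rotation) fixed effect from
# Table E6; with df this large the t critical value is ~1.96.
b, se = 32.30, 0.53
lower = b - 1.96 * se
upper = b + 1.96 * se
print(round(lower, 2), round(upper, 2))  # → 31.26 33.34
```

The interval excludes zero by a wide margin, consistent with the reported t = 61.23, p < .001.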
OBJECT ORIENTATION EFFECTS 64
Table E1
Intercept Only Mental Rotation Results
Term Estimate (b) SE t p
(Intercept) 589.10 0.40 1,485.40 < .001
Table E2
Subject-Random Intercept Mental Rotation Results
Effect Group Term Estimate (b) SE t df p
fixed NA (Intercept) 588.28 1.35 436.44 3,957.55 < .001
ran_pars Subject sd__(Intercept) 83.04 NA NA NA
ran_pars Residual sd__Observation 79.04 NA NA NA
Table E3
Subject and Item-Random Intercept Mental Rotation Results
Effect Group Term Estimate (b) SE t df p
fixed NA (Intercept) 589.21 2.70 217.83 79.98 < .001
ran_pars Subject sd__(Intercept) 82.94 NA NA NA
ran_pars Picture1 sd__(Intercept) 16.26 NA NA NA
ran_pars Residual sd__Observation 77.56 NA NA NA
Table E4
Subject, Item, and Lab-Random Intercept Mental Rotation Results
Effect Group Term Estimate (b) SE t df p
fixed NA (Intercept) 590.18 5.16 114.34 69.45 < .001
ran_pars Subject sd__(Intercept) 78.36 NA NA NA
ran_pars Picture1 sd__(Intercept) 16.32 NA NA NA
ran_pars PSA_ID sd__(Intercept) 30.12 NA NA NA
ran_pars Residual sd__Observation 77.56 NA NA NA
Table E5
Subject, Item, Lab, and Language-Random Intercept Mental Rotation Results
Effect Group Term Estimate (b) SE t df p
fixed NA (Intercept) 596.71 6.98 85.47 22.17 < .001
ran_pars Subject sd__(Intercept) 78.37 NA NA NA
ran_pars Picture1 sd__(Intercept) 16.32 NA NA NA
ran_pars PSA_ID sd__(Intercept) 24.00 NA NA NA
ran_pars Language sd__(Intercept) 19.29 NA NA NA
ran_pars Residual sd__Observation 77.56 NA NA NA
Table E6
Fixed Effects Mental Rotation Results
Effect Group Term Estimate (b) SE t df p
fixed NA (Intercept) 581.52 7.02 82.82 22.57 < .001
fixed NA IdenticalN 32.30 0.53 61.23 79,585.24 < .001
ran_pars Subject sd__(Intercept) 78.24 NA NA NA
ran_pars Picture1 sd__(Intercept) 16.89 NA NA NA
ran_pars PSA_ID sd__(Intercept) 24.01 NA NA NA
ran_pars Language sd__(Intercept) 19.33 NA NA NA
ran_pars Residual sd__Observation 75.80 NA NA NA
Table E7
Language Specific Mental Rotation Results
Language Coefficient (b) SE
Arabic 28.27 3.36
English 33.02 0.77
German 31.38 1.91
Brazilian Portuguese 23.79 4.85
Simplified Chinese 32.40 3.64
Spanish 40.24 3.75
Greek 30.59 3.67
Hungarian 25.43 2.57
Hindi 35.83 3.86
Hebrew 29.02 2.43
Norwegian 28.12 2.59
Polish 38.74 3.51
Portuguese 34.67 4.05
Serbian 25.93 2.75
Slovak 33.34 2.61
Thai 34.99 4.13
Turkish 37.46 2.29
Traditional Chinese 30.31 2.45
Table E8
Intercept Only Predicting Mental Simulation Results
Term Estimate (b) SE t p
(Intercept) -0.74 1.67 -0.44 .661
Table E9
Lab-Random Intercept Predicting Mental Simulation Results
Effect Group Term Estimate (b) SE t df p
fixed NA (Intercept) -0.74 1.67 -0.44 3,543.00 .661
ran_pars PSA_ID sd__(Intercept) 0.00 NA NA NA
ran_pars Residual sd__Observation 99.67 NA NA NA
Table E10
Fixed Effects Interaction Language and Rotation Predicting Mental Simulation Results Part 1
Effect Group Term Estimate (b) SE t df p
fixed NA (Intercept) -3.43 3.07 -1.12 3,510.00 .264
fixed NA LanguageArabic 16.27 16.34 1.00 3,510.00 .319
fixed NA LanguageBrazilian Portuguese -17.00 16.63 -1.02 3,510.00 .307
fixed NA LanguageGerman 20.65 9.17 2.25 3,510.00 .024
fixed NA LanguageGreek 7.52 12.58 0.60 3,510.00 .550
fixed NA LanguageHindi 1.24 15.39 0.08 3,510.00 .936
fixed NA LanguageHungarian -10.30 10.01 -1.03 3,510.00 .304
fixed NA LanguageNorwegian 19.66 10.31 1.91 3,510.00 .057
fixed NA LanguagePolish -5.67 17.87 -0.32 3,510.00 .751
fixed NA LanguagePortuguese -15.90 17.21 -0.92 3,510.00 .356
fixed NA LanguageSerbian -0.48 11.20 -0.04 3,510.00 .966
fixed NA LanguageSimplified Chinese 22.70 14.45 1.57 3,510.00 .116
fixed NA LanguageSlovak -0.34 11.58 -0.03 3,510.00 .976
fixed NA LanguageSpanish 5.39 11.51 0.47 3,510.00 .640
fixed NA LanguageThai 27.53 21.22 1.30 3,510.00 .195
fixed NA LanguageTraditional Chinese -8.24 11.20 -0.74 3,510.00 .462
fixed NA LanguageTurkish 10.66 8.57 1.24 3,510.00 .214
fixed NA Imagery 0.04 0.05 0.79 3,510.00 .432
Table E11
Fixed Effects Interaction Language and Rotation Predicting Mental Simulation Results Part 2
Effect Group Term Estimate (b) SE t df p
fixed NA LanguageArabic:Imagery -0.37 0.24 -1.55 3,510.00 .122
fixed NA LanguageBrazilian Portuguese:Imagery 0.18 0.29 0.62 3,510.00 .536
fixed NA LanguageGerman:Imagery -0.37 0.19 -1.96 3,510.00 .050
fixed NA LanguageGreek:Imagery -0.07 0.20 -0.36 3,510.00 .718
fixed NA LanguageHindi:Imagery -0.36 0.27 -1.34 3,510.00 .181
fixed NA LanguageHungarian:Imagery 0.10 0.22 0.45 3,510.00 .653
fixed NA LanguageNorwegian:Imagery -0.07 0.19 -0.35 3,510.00 .726
fixed NA LanguagePolish:Imagery 0.29 0.31 0.91 3,510.00 .363
fixed NA LanguagePortuguese:Imagery -0.05 0.32 -0.15 3,510.00 .884
fixed NA LanguageSerbian:Imagery -0.12 0.22 -0.56 3,510.00 .576
fixed NA LanguageSimplified Chinese:Imagery -0.32 0.24 -1.32 3,510.00 .187
fixed NA LanguageSlovak:Imagery 0.12 0.21 0.57 3,510.00 .568
fixed NA LanguageSpanish:Imagery 0.05 0.17 0.28 3,510.00 .781
fixed NA LanguageThai:Imagery -0.50 0.38 -1.32 3,510.00 .186
fixed NA LanguageTraditional Chinese:Imagery 0.13 0.23 0.59 3,510.00 .556
fixed NA LanguageTurkish:Imagery -0.15 0.14 -1.09 3,510.00 .274
ran_pars PSA_ID sd__(Intercept) 0.00 NA NA NA
ran_pars Residual sd__Observation 99.73 NA NA NA