Article · Publisher preview available

A Worldwide Test of the Predictive Validity of Ideal Partner Preference Matching


Abstract

Ideal partner preferences (i.e., ratings of the desirability of attributes like attractiveness or intelligence) are the source of numerous foundational findings in the interdisciplinary literature on human mating. Recently, research on the predictive validity of ideal partner preference matching (i.e., Do people positively evaluate partners who match vs. mismatch their ideals?) has become mired in several problems. First, articles exhibit discrepant analytic and reporting practices. Second, different findings emerge across laboratories worldwide, perhaps because they sample different relationship contexts and/or populations. This registered report—partnered with the Psychological Science Accelerator—uses a highly powered design (N = 10,358) across 43 countries and 22 languages to estimate preference-matching effect sizes. The most rigorous tests revealed significant preference-matching effects in the whole sample and for partnered and single participants separately. The “corrected pattern metric” that collapses across 35 traits revealed a zero-order effect of β = .19 and an effect of β = .11 when included alongside a normative preference-matching metric. Specific traits in the “level metric” (interaction) tests revealed very small (average β = .04) effects. Effect sizes were similar for partnered participants who reported ideals before entering a relationship, and there was no consistent evidence that individual differences moderated any effects. Comparisons between stated and revealed preferences shed light on gender differences and similarities: For attractiveness, men’s and (especially) women’s stated preferences underestimated revealed preferences (i.e., they thought attractiveness was less important than it actually was). For earning potential, men’s stated preferences underestimated—and women’s stated preferences overestimated—revealed preferences. Implications for the literature on human mating are discussed.
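The "corrected pattern metric" described in the abstract can be illustrated with a small sketch. The following Python function is a hypothetical helper (not the authors' analysis code): it computes each person's ideal–perception profile correlation across traits, both raw and after centering each profile on the sample-average (normative) profile — one common way to correct pattern metrics so that agreement on normatively desirable traits does not inflate the match.

```python
import numpy as np

def pattern_metrics(ideals, perceptions):
    """Per-person profile correlations between ideal and perceived traits.

    ideals, perceptions: (n_people, n_traits) arrays of ratings.
    Returns (raw, corrected): raw within-person pattern correlations, and
    'corrected' correlations computed on profiles centered on the
    sample-average (normative) profile.
    """
    ideals = np.asarray(ideals, float)
    perceptions = np.asarray(perceptions, float)
    norm_ideal = ideals.mean(axis=0)        # normative ideal profile
    norm_perc = perceptions.mean(axis=0)    # normative perception profile

    def row_corr(a, b):
        # Pearson correlation of each row of a with the matching row of b
        a = a - a.mean(axis=1, keepdims=True)
        b = b - b.mean(axis=1, keepdims=True)
        return (a * b).sum(1) / np.sqrt((a ** 2).sum(1) * (b ** 2).sum(1))

    raw = row_corr(ideals, perceptions)
    corrected = row_corr(ideals - norm_ideal, perceptions - norm_perc)
    return raw, corrected
```

A person whose partner perceptions track their own distinctive ideals (not just the traits everyone wants) gets a high corrected score.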
Authors (affiliation numbers in parentheses): Paul W. Eastwick (1), Jehan Sparks (2), Eli J. Finkel (3, 4, 5), Eva M. Meza (1), Matúš Adamkovič (6, 7, 8), Peter Adu (9), Ting Ai (10), Aderonke A. Akintola (11), Laith Al-Shawaf (12, 13, 14), Denisa Apriliawati (15), Patrícia Arriaga (16), Benjamin Aubert-Teillaud (17), Gabriel Baník (18), Krystian Barzykowski (19), Carlota Batres (20), Katherine J. Baucom (21), Elizabeth Z. Beaulieu (21), Maciej Behnke (22, 23), Natalie Butcher (24), Deborah Y. Charles (25), Jane Minyan Chen (26), Jeong Eun Cheon (27), Phakkanun Chittham (28), Patrycja Chwiłkowska (22), Chin Wen Cong (29), Lee T. Copping (24), Nadia S. Corral-Frias (30), Vera Ćubela Adorić (31), Mikaela Dizon (32), Hongfei Du (33), Michael I. Ehinmowo (34), Daniela A. Escribano (10), Natalia M. Espinosa (35), Francisca Expósito (36), Gilad Feldman (37), Raquel Freitag (38), Martha Frias Armenta (39), Albina Gallyamova (40), Omri Gillath (10), Biljana Gjoneska (41), Theofilos Gkinopoulos (42), Franca Grafe (43), Dmitry Grigoryev (40), Agata Groyecka-Bernard (44), Gul Gunaydin (45), Ruby Ilustrisimo (46), Emily Impett (47), Pavol Kačmár (48), Young-Hoon Kim (27), Mirosław Kocur (49), Marta Kowal (49), Maatangi Krishna (50), Paul Danielle Labor (51), Jackson G. Lu (52), Marc Y. Lucas (53), Wojciech P. Małecki (54), Klara Malinakova (55), Sofia Meißner (43), Zdeněk Meier (55), Michal Misiak (49, 56), Amy Muise (57), Lukas Novak (55), Jiaqing O (58), Asil A. Özdoğru (59, 60), Haeyoung Gideon Park (47), Mariola Paruzel (22), Zoran Pavlović (61), Marcell Püski (62), Gianni Ribeiro (63, 64), S. Craig Roberts (49, 65), Jan P. Röer (43), Ivan Ropovik (66, 67), Robert M. Ross (68), Ezgi Sakman (69), Cristina E. Salvador (35), Emre Selcuk (45), Shayna Skakoon-Sparling (70), Agnieszka Sorokowska (44), Piotr Sorokowski (49), Ognen Spasovski (71), Sarah C. E. Stanton (32), Suzanne L. K. Stewart (72), Viren Swami (73, 74), Barnabas Szaszi (62), Kaito Takashima (75), Peter Tavel (55), Julian Tejada (76), Eric Tu (57), Jarno Tuominen (77), David Vaidis (78), Zahir Vally (79), Leigh Ann Vaughn (80), Laura Villanueva-Moya (36), Dian Wisnuwardhani (81), Yuki Yamada (82), Fumiya Yonemitsu (83), Radka Žídková (55), Kristýna Živná (55), and Nicholas A. Coles (84)
(1) Department of Psychology, University of California, Davis
(2) Behavioral Decision Making Group, University of California, Los Angeles Anderson School of Management
(3) Department of Psychology, Northwestern University
(4) Kellogg School of Management, Northwestern University
(5) Institute for Policy Research, Northwestern University
(6) Centre of Social and Psychological Sciences, Slovak Academy of Sciences
(7) Faculty of Education, Charles University
(8) Faculty of Humanities and Social Sciences, University of Jyväskylä
(9) Wellington Faculty of Health, School of Health, Victoria University of Wellington
(10) Department of Social Psychology, University of Kansas
(11) Department of Psychology, Redeemers University
(12) Department of Psychology, University of Colorado, Colorado Springs
(13) Lyda Hill Institute for Human Resilience, University of Colorado, Colorado Springs
(14) Institute for Advanced Study in Toulouse (IAST), France
(15) Department of Psychology, Universitas Islam Negeri Sunan Kalijaga
(16) Department of Social and Organizational Psychology, Iscte-University Institute of Lisbon
(17) Institut de Psychologie, Université Paris Cité
(18) Department of Educational Psychology and Psychology of Health, Pavol Jozef Safarik University
(19) Faculty of Philosophy, Institute of Psychology, Jagiellonian University
(20) Department of Psychology, Franklin and Marshall College
(21) Department of Psychology, University of Utah
(22) Department of Psychology and Cognitive Science, Adam Mickiewicz University
(23) Cognitive Neuroscience Center, Adam Mickiewicz University
(24) Department of Psychology, Teesside University
(25) Department of Psychology, Christ University
(26) Department of Psychology, Wellesley College
(27) Department of Psychology, Yonsei University
(28) Faculty of Psychology, Chulalongkorn University
Journal of Personality and Social Psychology: Personality Processes and Individual Differences
© 2024 American Psychological Association. 2025, Vol. 128, No. 1, 123–146
ISSN: 0022-3514. https://doi.org/10.1037/pspp0000524
Article
Full-text available
The present study explores the relative importance of men’s physical attractiveness and purported intelligence to the hypothetical mate preferences and choices of 201 daughters and 187 corresponding parents. We measured self-reported mate preferences and then varied men’s physical attractiveness (more vs. less attractive) and intelligence (higher vs. lower peer intelligence rating) in a 2 × 2 independent groups design replicated across daughters and parents. Although intelligence was rated by both women and their parents as significantly more important than physical attractiveness for a long-term mate for daughters, both daughters (72.64%) and their parents (59.56%) chose the more attractive man as the best long-term partner for daughters, regardless of his purported intelligence level. Furthermore, although daughters rated a partner’s attractiveness as more important than their parents did when considering a mate for daughters, daughters’ and parents’ choices corresponded 73.8% of the time, suggesting less conflict over mate choices than may be predicted based on self-reported ideal mate preferences. However, when higher attractiveness and higher intelligence were not paired, women were more likely to choose the more attractive man while parents were more likely to choose the more intelligent man, suggesting different trade-off preferences for women and their parents. Men’s physical attractiveness may be more important to both daughters and parents than consciously realized, and daughters’ and parents’ mate choices may correspond more closely than their responses to rating scales might suggest.
Article
Full-text available
The current study addresses the open question whether ideal partner preferences are linked to relationship decisions and relationship outcomes. Using a longitudinal design across 13 years, we investigated whether partner preferences are associated with perceived characteristics of actual partners (i.e. ideal-trait correlation) and whether a closer match between ideals and perceptions of a partner’s traits is associated with better relationship outcomes (i.e. ideal partner preference-matching effects). A community sample of 178 participants (90 women) reported their ideal partner preferences in 2006 (mean age at T2 M = 45.7 years, SD = 7.2). In 2019, they reported their relationship histories since then, providing ratings of 322 relationships. We found a positive association between participants’ initial ideals and partner trait perceptions. This ideal-trait correlation was stronger with current ideals, consistent with the possibility of preference adjustment towards the partner. The match between ideals and perceived partner traits was operationalised using different metrics. A closer match was associated with higher relationship commitment across all metrics, while for relationship quality, the link was not apparent for the corrected pattern metric. Evidence of matching effects for relationship length was mixed and largely absent for break-up initiation. Implications for the ideal partner preference literature are discussed.
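One family of matching metrics this abstract alludes to is distance-based. As a minimal illustration (an assumed simplification, not this study's exact operationalisation), a reverse-scored Euclidean metric and its zero-order association with a relationship outcome can be sketched as:

```python
import numpy as np

def euclidean_match(ideals, perceptions):
    """Reverse-scored Euclidean discrepancy between a person's ideal and
    perceived-partner trait profiles: 0 = perfect match, more negative =
    larger mismatch."""
    d = np.asarray(ideals, float) - np.asarray(perceptions, float)
    return -np.sqrt((d ** 2).mean(axis=1))

def matching_effect(match_scores, outcome):
    """Zero-order correlation between match scores and an outcome such as
    commitment -- the basic form of a preference-matching test."""
    return np.corrcoef(match_scores, outcome)[0, 1]
```

A positive `matching_effect` would indicate that people whose partners sit closer to their ideals report better outcomes.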
Article
Full-text available
Maintaining data quality on Amazon Mechanical Turk (MTurk) has always been a concern for researchers. These concerns have grown recently due to the bot crisis of 2018 and observations that past safeguards of data quality (e.g., approval ratings of 95%) no longer work. To address data quality concerns, CloudResearch, a third-party website that interfaces with MTurk, has assessed ~165,000 MTurkers and categorized them into those that provide high- (~100,000, Approved) and low- (~65,000, Blocked) quality data. Here, we examined the predictive validity of CloudResearch’s vetting. In a pre-registered study, participants (N = 900) from the Approved and Blocked groups, along with a Standard MTurk sample (95% HIT acceptance ratio, 100+ completed HITs), completed an array of data-quality measures. Across several indices, Approved participants (i) identified the content of images more accurately, (ii) answered more reading comprehension questions correctly, (iii) responded to reversed coded items more consistently, (iv) passed a greater number of attention checks, (v) self-reported less cheating and actually left the survey window less often on easily Googleable questions, (vi) replicated classic psychology experimental effects more reliably, and (vii) answered AI-stumping questions more accurately than Blocked participants, who performed at chance on multiple outcomes. Data quality of the Standard sample was generally in between the Approved and Blocked groups. We discuss how MTurk’s Approval Rating system is no longer an effective data-quality control, and we discuss the advantages afforded by using the Approved group for scientific studies on MTurk.
Article
Full-text available
Following theories of emotional embodiment, the facial feedback hypothesis suggests that individuals’ subjective experiences of emotion are influenced by their facial expressions. However, evidence for this hypothesis has been mixed. We thus formed a global adversarial collaboration and carried out a preregistered, multicentre study designed to specify and test the conditions that should most reliably produce facial feedback effects. Data from n = 3,878 participants spanning 19 countries indicated that a facial mimicry and voluntary facial action task could both amplify and initiate feelings of happiness. However, evidence of facial feedback effects was less conclusive when facial feedback was manipulated unobtrusively via a pen-in-mouth task.
Article
Full-text available
This study compared dating experiences through smartphone apps (e.g., Tinder) with offline-initiated dating. Previous research suggests that people feel greater apprehensiveness toward internet dating relative to traditional dating methods. Using an experience-sampling design (N = 793) over one month, we examined attraction, perceptions of dating partners (sexiness, warmth), and behaviors (sexual intercourse, alcohol use) across dating modalities, alongside trait sociosexuality, destiny/growth beliefs, romanticism, and gender. Results showed that participants' reported experiences were similar for offline and app-initiated dates, except for those high in destiny/growth or romantic beliefs, who tended to feel less attraction to dating partners. Despite this similarity, participants viewed dating apps negatively. We also found little support for ideal partner preferences correlating with attraction or dating outcomes. We suggest that initial beliefs about dating may bias people away from dating app experiences, and personality traits such as romantic beliefs may dictate outcomes much more than the method of meeting.
Article
Full-text available
Path models to test claims about mediation and moderation are a staple of psychology. But applied researchers may sometimes not understand the underlying causal inference problems and thus endorse conclusions that rest on unrealistic assumptions. In this article, we aim to provide a clear explanation for the limited conditions under which standard procedures for mediation and moderation analysis can succeed. We discuss why reversing arrows or comparing model fit indices cannot tell us which model is the right one and how tests of conditional independence can at least tell us where our model goes wrong. Causal modeling practices in psychology are far from optimal but may be kept alive by domain norms that demand every article makes some novel claim about processes and boundary conditions. We end with a vision for a different research culture in which causal inference is pursued in a much slower, more deliberate, and collaborative manner.
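The conditional-independence checks the authors recommend can be made concrete. For a full-mediation model X → M → Y, the implied testable constraint is that X and Y are independent given M; a simple partial-correlation check (a numpy-only sketch with hypothetical function names) is:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after regressing both on z (single-covariate
    partial correlation) -- a basic conditional-independence check.
    A clearly nonzero value flags a violation of x independent of y given z."""
    Z = np.column_stack([np.ones(len(z)), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]
```

Under full mediation the marginal x–y correlation is substantial while the partial correlation controlling the mediator hovers near zero; a direct effect of x on y would leave the partial correlation nonzero.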
Article
Full-text available
There are massive literatures on initial attraction and established relationships. But few studies capture early relationship development: the interstitial period in which people experience rising and falling romantic interest for partners who could—but often do not—become sexual or dating partners. In this study, 208 single participants reported on 1,065 potential romantic partners across 7,179 data points over 7 months. In stage 1, we used random forests (a type of machine learning) to estimate how well different classes of variables (e.g., individual differences vs. target-specific constructs) predicted participants’ romantic interest in these potential partners. We also tested (and found only modest support for) the perceiver × target moderation account of compatibility: the meta-theoretical perspective that some types of perceivers experience greater romantic interest for some types of targets. In stage 2, we used multilevel modeling to depict predictors retained by the random-forests models; robust (positive) main effects emerged for many variables, including sociosexuality, sex drive, perceptions of the partner’s positive attributes (e.g., attractive and exciting), attachment features (e.g., proximity seeking), and perceived interest. Finally, we found no support for ideal partner preference-matching effects on romantic interest. The discussion highlights the need for new models to explain the origin of romantic compatibility.
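The stage-1 logic — comparing how well different classes of variables predict romantic interest with random forests — can be sketched on simulated data. Everything below is illustrative (variable names, effect sizes, and the simple train/holdout split are assumptions, not the study's cross-validated pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical simulated data: romantic interest driven by target-specific
# perceptions rather than by perceiver-level individual differences.
rng = np.random.default_rng(0)
n = 600
target_feats = rng.normal(size=(n, 2))   # e.g., perceived attractive, exciting
indiv_feats = rng.normal(size=(n, 2))    # e.g., unrelated perceiver traits
interest = (0.8 * target_feats[:, 0] + 0.6 * target_feats[:, 1]
            + rng.normal(scale=0.5, size=n))

def holdout_r2(X, y, seed=0):
    """Fit a random forest on the first half and report R^2 on the second --
    a crude stand-in for comparing predictive power of variable classes."""
    half = len(y) // 2
    rf = RandomForestRegressor(n_estimators=200, random_state=seed)
    rf.fit(X[:half], y[:half])
    return rf.score(X[half:], y[half:])

r2_target = holdout_r2(target_feats, interest)
r2_indiv = holdout_r2(indiv_feats, interest)
```

In this toy setup, the target-specific model clearly outperforms the individual-difference model, mirroring the kind of variable-class comparison the study reports.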
Article
Full-text available
There are two unresolved puzzles in the literature examining how people evaluate mates (i.e., prospective or current romantic/sexual partners). First, compatibility is theoretically crucial, but attempts to explain why certain perceivers are compatible with certain targets have revealed small effects. Second, features of partners (e.g., personality, consensually rated attributes) affect perceivers' evaluations strongly in initial-attraction contexts but weakly in established relationships. Mate Evaluation Theory (MET) addresses these puzzles, beginning with the Social Relations Model postulate that all evaluative constructs (e.g., attraction, relationship satisfaction) consist of target, perceiver, and relationship variance. MET then explains how people draw evaluations from mates' attributes using four information sources: (a) shared evolved mechanisms and cultural scripts (common lens, which produces target variance); (b) individual differences that affect how a perceiver views all targets (perceiver lens, which produces perceiver variance); (c) individual differences that affect how a perceiver views some targets, depending on the targets' features (feature lens, which produces some relationship variance); and (d) narratives about and idiosyncratic reactions to one particular target (target-specific lens, which produces most relationship variance). These two distinct sources of relationship variance (i.e., feature vs. target-specific) address Puzzle #1: Previous attempts to explain compatibility used feature lens information, but relationship variance likely derives primarily from the (understudied) target-specific lens. MET also addresses Puzzle #2 by suggesting that repeated interaction causes the target-specific lens to expand, which reduces perceivers' use of the common lens. We conclude with new predictions and implications at the intersection of the human-mating and person-perception literatures.
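The Social Relations Model partition that MET builds on can be demonstrated numerically. This sketch (a simplified two-way decomposition, not a full SRM estimator) splits a simulated perceivers × targets rating matrix into perceiver, target, and relationship components; with one rating per dyad, relationship variance is confounded with error, as the SRM notes:

```python
import numpy as np

def srm_decompose(ratings):
    """Two-way decomposition of a (perceivers x targets) rating matrix into
    grand mean + perceiver effects + target effects + relationship residual,
    echoing the Social Relations Model partition of evaluative variance."""
    grand = ratings.mean()
    perceiver = ratings.mean(axis=1) - grand   # row (perceiver) effects
    target = ratings.mean(axis=0) - grand      # column (target) effects
    relationship = ratings - grand - perceiver[:, None] - target[None, :]
    return {
        "perceiver_var": perceiver.var(),
        "target_var": target.var(),
        "relationship_var": relationship.var(),
    }
```

Feeding in data built from known components recovers them approximately, which is the sense in which attraction "consists of" target, perceiver, and relationship variance.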
Article
Interaction analyses (also termed “moderation” analyses or “moderated multiple regression”) are a form of linear regression analysis designed to test whether the association between two variables changes when conditioned on a third variable. It can be challenging to perform a power analysis for interactions with existing software, particularly when variables are correlated and continuous. Moreover, although power is affected by main effects, their correlation, and variable reliability, it can be unclear how to incorporate these effects into a power analysis. The R package InteractionPoweR and associated Shiny apps allow researchers with minimal or no programming experience to perform analytic and simulation-based power analyses for interactions. At minimum, these analyses require the Pearson’s correlation between variables and sample size, and additional parameters, including reliability and the number of discrete levels that a variable takes (e.g., binary or Likert scale), can optionally be specified. In this tutorial, we demonstrate how to perform power analyses using our package and give examples of how power can be affected by main effects, correlations between main effects, reliability, and variable distributions. We also include a brief discussion of how researchers may select an appropriate interaction effect size when performing a power analysis.
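The simulation-based power analyses the package automates can be sketched in a few lines. This Monte Carlo is an illustration of the underlying logic in Python, not the InteractionPoweR implementation (which additionally handles reliability and discrete variable distributions); the effect sizes and function name are illustrative:

```python
import numpy as np

def interaction_power(n, beta1=0.3, beta2=0.3, beta_int=0.1,
                      reps=500, seed=0):
    """Monte Carlo power for the x1*x2 term in
    y = beta1*x1 + beta2*x2 + beta_int*x1*x2 + e.
    Simulates data, fits OLS each rep, and counts how often the
    interaction's |t| exceeds 1.96 (normal approximation, alpha = .05)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x1 = rng.normal(size=n)
        x2 = rng.normal(size=n)
        y = beta1 * x1 + beta2 * x2 + beta_int * x1 * x2 + rng.normal(size=n)
        X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
        coef = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ coef
        s2 = resid @ resid / (n - 4)                    # residual variance
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[3, 3]) # SE of interaction
        hits += abs(coef[3] / se) > 1.96
    return hits / reps
```

Running it at several sample sizes shows the familiar result that small interaction effects need large samples: with beta_int = 0.1, power climbs from modest values around n = 100 toward adequacy only in the high hundreds.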
Article
Multilevel models are used ubiquitously in the social and behavioral sciences and effect sizes are critical for contextualizing results. A general framework of R-squared effect size measures for multilevel models has only recently been developed. Rights and Sterba (2019) distinguished each source of explained variance for each possible kind of outcome variance. Though researchers have long desired a comprehensive and coherent approach to computing R-squared measures for multilevel models, the use of this framework has a steep learning curve. The purpose of this tutorial is to introduce and demonstrate using a new R package - r2mlm - that automates the intensive computations involved in implementing the framework and provides accompanying graphics to visualize all multilevel R-squared measures together. We use accessible illustrations with open data and code to demonstrate how to use and interpret the R package output.
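The within/between flavour of multilevel R-squared measures can be illustrated with a deliberately crude sketch. This is not the r2mlm computation (the Rights and Sterba framework distinguishes many more variance sources); it simply partitions the outcome into cluster means and cluster-mean-centered deviations and asks how much of each part a single level-1 predictor explains:

```python
import numpy as np

def multilevel_r2(y, x, cluster):
    """Crude two-level variance decomposition (hypothetical helper).
    Returns the intraclass correlation of y plus the share of the
    within-cluster and between-cluster variance explained by x."""
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    _, inv = np.unique(cluster, return_inverse=True)
    counts = np.bincount(inv)
    ym = np.bincount(inv, weights=y) / counts   # cluster means of y
    xm = np.bincount(inv, weights=x) / counts   # cluster means of x
    yc, xc = y - ym[inv], x - xm[inv]           # centered deviations

    def r2(a, b):
        return np.corrcoef(a, b)[0, 1] ** 2

    return {
        "icc": ym.var() / (ym.var() + yc.var()),
        "r2_within": r2(xc, yc),
        "r2_between": r2(xm, ym),
    }
```

On simulated data with a strong level-1 effect and a cluster random intercept, the within-cluster R-squared is large while the between-cluster R-squared is near zero — exactly the kind of distinction a single flat R-squared would hide.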