ORIGINAL ARTICLE | Open Access
Choosing face: The curse of self in profile
image selection
David White1,3*, Clare A. M. Sutherland2,3 and Amy L. Burton4
Abstract
People draw automatic social inferences from photos of unfamiliar faces and these first impressions are associated with important real-world outcomes. Here we examine the effect of selecting online profile images on first impressions. We model the process of profile image selection by asking participants to indicate the likelihood that images of their own face ("self-selection") and of an unfamiliar face ("other-selection") would be used as profile images on key social networking sites. Across two large Internet-based studies (n = 610), in line with predictions, image selections accentuated favorable social impressions and these impressions were aligned to the social context of the networking sites. However, contrary to predictions based on people's general expertise in self-presentation, other-selected images conferred more favorable impressions than self-selected images. We conclude that people make suboptimal choices when selecting their own profile pictures, such that self-perception places important limits on facial first impressions formed by others. These results underscore the dynamic nature of person perception in real-world contexts.
Keywords: Face perception, Self perception, Impression formation, Interpersonal accuracy, Online social networks, Visual
communication, Photography
Significance
Selecting profile pictures is a common task in the digital
age. Research suggests that choosing the right image
may be critical: people's first impressions from profile
photos shape important decisions, such as choices of
whom to date, befriend, or employ. Surprisingly, the
process of image selection has not yet been studied dir-
ectly. Here, we show that people select profile pictures
that produce positive impressions on unfamiliar viewers.
These impressions are tailored to fit specific networking
contexts: dating images appear more attractive and pro-
fessional images appear more competent. Strikingly, we
show for the first time that participants select more flat-
tering profile images when selecting pictures for other
people compared with when selecting for themselves.
This phenomenon has clear practical significance: should
people wish to put their "best face forward," they should
ask someone else to choose it.
Background
Key events in our professional, social, and romantic lives
unfold on the Internet. Approximately one-third of em-
ployers search online for information on job candidates
(Acquisti & Fong, 2015), half of British adults who are
currently searching for a relationship have used online
dating (YouGov, 2014), and 1.79 billion people world-
wide have an active Facebook account (Facebook, 2016).
As a result, we are continually forming first impressions
of unfamiliar people in professional, romantic, and social
contexts via social networking sites. Pictures that are
chosen to represent us in these online environments ("profile images") establish a critical link between an individual's online and offline personas.
Profile image choices are likely to have a significant
impact on the way people are perceived by others. We
make inferences about an individual's character and
personality within a split second of exposure to a photo-
graph of their face (Willis & Todorov, 2006). These
impressions have been shown to predict important and diverse real-world outcomes, both online and offline, including the number of votes received by political candidates (Olivola, Funk, & Todorov, 2014), company profits generated during a CEO's tenure (Rule & Ambady,
* Correspondence: david.white@unsw.edu.au
1
School of Psychology, University of New South Wales Sydney, Sydney,
Australia
3
ARC Centre of Excellence in Cognition and its Disorders, Macquarie
University, Sydney, NSW, Australia
Full list of author information is available at the end of the article
Cognitive Research: Principles and Implications
© The Author(s). 2017 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and
reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to
the Creative Commons license, and indicate if changes were made.
White et al. Cognitive Research: Principles and Implications (2017) 2:23
DOI 10.1186/s41235-017-0058-3
2008), selection as a suspect from police line-ups (Flowe &
Humphries, 2011), and the popularity of an Airbnb host's
rental accommodation (Ert, Fleischer, & Magen, 2016).
Importantly, previous studies are almost exclusively
based on the premise that a single image is representa-
tive of a person's appearance. In studies of facial first impressions, participants tend to form impressions of
computer-generated images or photographs captured in
controlled studio conditions (e.g., expressionless, facing
forwards; for a review see Todorov, Olivola, Dotsch, &
Mende-Siedlecki, 2015). This procedure minimizes nat-
ural variation found in photos of faces captured outside
of the laboratory. However, recent studies have emphasized the important role of this natural variability in forming social impressions. Critically, ratings
of attractiveness (Jenkins, White, Van Montfort, &
Burton, 2011) and character traits (Hehman, Flake, &
Freeman, 2015; Todorov & Porter, 2014; cf. McCurrie et al.,
2016) can vary more across different images of the same person's face than they do across faces of different people.
In previous works, the types of images found on the
Internet have been described as "ambient" photographs,
as they capture dynamic aspects of faces and the envir-
onment such as expression, pose, and lighting (see Fig. 1;
Jenkins et al., 2011; Sutherland et al., 2013; Vernon,
Sutherland, Young, & Hartley, 2014). Importantly, influ-
ential models of social trait judgments that have been
generated by ratings of studio-captured imagery (Ooster-
hof & Todorov, 2008) do not fully capture impressions
made from ambient facial images (Sutherland et al.,
2013; Todorov & Porter, 2014).
A focus on invariant aspects of facial appearance has also caused facial first impression research to overlook the importance of photograph selection in moderating the social impact of a person's face. However,
recent work has begun to address this shortfall. In one
recent study, unfamiliar viewers were able to select
studio-controlled images of unfamiliar faces that accen-
tuated traits associated with specific scenarios: for ex-
ample, selecting images for a resume that accentuated
impressions of competence, relative to other images of
that individual (Todorov & Porter, 2014, Experiments 2
& 3). Separately, studies of impression management in
online social networks have found that people report
selecting images to transmit desirable impressions
(Siibak, 2009) and that dating profile images tend to por-
tray people to be more attractive than images taken in a
laboratory (Hancock & Toma, 2009).
Critically, however, the process of self-selecting profile
images has not been studied experimentally. Thus, while
it is clear that variation in photos of the same face can
modulate social impression formation (see also Jenkins
et al., 2011; Wu, Sheppard, & Mitchell, 2016), it is not
clear how well people exploit this variation to confer
Fig. 1 Example image sets provided by two participants in the Profile Image Dataset. Each participant selected the most and least likely image to
be used in three social media contexts (see Fig. 3a), then rated the likelihood that each image would be used in each context, before rating trait
impressions. They then repeated this procedure with an unfamiliar face. Images are used with permission; the full Profile Image Dataset is available online in Additional file 2
favorable impressions. This is important because perception of one's own face is often less veridical than perception of other faces. For example, when asked to select
images that represent the best likeness of themselves
from photo albums, participants choose images that are
less representative of their current appearance than im-
ages chosen by people with no prior familiarity (White,
Burton, & Kemp, 2015). Previous studies also report systematic biases whereby people choose images of their own face as better likenesses when the images have been digitally altered to be more typical (Allen, Brady, & Tredoux, 2009), more attractive (Epley & Whitchurch, 2008; Zell & Balcetis,
2012), and more trustworthy (Verosky & Todorov,
2010); perhaps reflecting a general bias to evaluate one-
self more favorably than others (Epley & Whitchurch,
2008; cf. Brown, 2012).
Given that people appear to be sensitive to variation in
impressions produced by different photographs
(Todorov & Porter, 2014) and are motivated to portray
themselves favorably in profile images (Hancock &
Toma, 2009; Siibak, 2009), we predicted that people
would be able to select images of themselves to accentu-
ate positive traits. In addition, we compared the benefit
of selecting profile images of oneself to selection by an
unfamiliar person. This comparison is critical in order to
differentiate two, equally plausible, hypotheses: namely
that self-selection may help or hinder the process of
selecting favorable profile images.
On the one hand, the ability to select flattering profile
images may be hindered by an impaired ability to view
ones own face accurately (e.g., White et al., 2015) and in
an overly optimistic light (Epley & Whitchurch, 2008).
This evidence leads to the prediction that people would
select better photographs for strangers. On the other
hand, this ability may be enhanced by people's expertise
in selecting flattering online photographs of themselves
(Hancock & Toma, 2009) and in self-presentation more
generally (e.g., Goffman, 1959; Leary & Allen, 2011;
Schlenker, 2003). This reasoning leads to the opposite
prediction that people would select better photographs
for themselves.
We tested these predictions by examining the effect of
selecting profile images on first impressions. We asked
participants to indicate the likelihood that images of
their own face, and of an unfamiliar face, would be used
as profile images in three key social networking contexts
(Facebook, dating, professional; see the "Profile Image Dataset" section). We then recruited unfamiliar viewers
via the Internet to provide trait impressions of these
images (see the Calibration experiment and Selection ex-
periment sections). This approach enabled us to system-
atically examine the impact of photo selection on
appearance-based inferences for the first time, by comparing the effect of selecting one's own profile image (self-selection) to selection by unfamiliar others (other-selection).
Method and results
Profile Image Dataset
The Profile Image Dataset collected in this research con-
sists of 12 images each of 102 students (1224 total
images), downloaded from their Facebook accounts.¹
Previous studies of photo selection have used studio-
captured imagery that does not capture the full diversity
of facial images shared via social media (Todorov &
Porter, 2014), in terms of variations in pose, expression,
and image-capture conditions across images of the same
face (see Jenkins et al., 2011). Downloading photos from
Facebook ensured that these were representative of vari-
ations in portrait photographs that are posted online.
In total, 114 first year undergraduate students con-
sented to take part in the study in exchange for course
credit. Participants provided 12 images in which their
face: (1) took up a substantial proportion of the overall
image; (2) was in clear view; (3) faced the camera; and
(4) was not obscured (e.g., by sunglasses, hair, or hands).
Any images not meeting these criteria or with poor reso-
lution were rejected and the participant was asked to re-
place the image with another from their Facebook
gallery. In total, 102 participants (51 women; mean age
= 19.4 years, SD = 2.28 years) provided a full set of 12
usable images. Images were then cropped to frame the
face at a fixed aspect ratio and resized to 200 ×
300 pixels. Examples of images provided by two partici-
pants are shown in Fig. 1.
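The cropping step above can be sketched in code. The paper specifies only a fixed aspect ratio and the 200 × 300 pixel output, so the centered-crop geometry and the helper name `centered_crop_box` below are assumptions for illustration:

```python
def centered_crop_box(width, height, target_w=200, target_h=300):
    """Return a centered (left, top, right, bottom) crop box matching
    the 2:3 aspect ratio of the 200 x 300 pixel profile images."""
    target_ratio = target_w / target_h  # 2:3
    if width / height > target_ratio:
        # Image is too wide: trim equally from left and right.
        new_w = round(height * target_ratio)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    # Image is too tall: trim equally from top and bottom.
    new_h = round(width / target_ratio)
    top = (height - new_h) // 2
    return (0, top, width, top + new_h)
```

The resulting box could then be passed to an imaging library, for example Pillow's `Image.crop` followed by `Image.resize((200, 300))`.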
To capture self-selection profile image preferences, the set of 12 downloaded images was then presented on a computer monitor. Participants were asked to select
which of the 12 images they were most and least likely
to use as profile images for Facebook, professional (e.g.,
LinkedIn), and dating (e.g., Match.com) network sites.
Context order was counterbalanced across subjects. The
most and least likely profile images were used in the Se-
lection experiment. After making these selections,
participants then indicated their profile image prefer-
ences by rating the likelihood that they would use each
of their 12 images in these contexts.
Finally, participants rated their images for five social
impressions (attractiveness, trustworthiness, dominance,
competence, confidence). These five ratings were made
concurrently. Trustworthiness, dominance, and attract-
iveness were included to capture the three main dimen-
sions of facial first impressions (Oosterhof & Todorov,
2008; Sutherland et al., 2013). Competence and confi-
dence were included because these judgments are asso-
ciated with romantic and professional success (Murphy
et al., 2015; Todorov et al., 2015). Both selection likeli-
hood and trait judgments were rated on scales from 1
(very low) to 9 (very high), and these ratings were used
in the Calibration experiment.
To capture other-selection profile image preferences,
participants completed an identical procedure with a set
of 12 images of a randomly selected subject of the same gender who had participated in the study previously. The
experimenter confirmed that the participant was
unfamiliar with the person pictured in the photographs
before recording their selections and instructed them to
evaluate the likelihood that they would select each image
if they were the person depicted. Order of self/other rat-
ing procedures was counterbalanced across participants.
Online rating experiments
Next, we recruited new unfamiliar viewers via the Inter-
net to rate the trait impressions produced by the Profile
Image Dataset. Online ratings were collected in two ex-
periments. First, in the Calibration experiment, we col-
lected ratings of trait impressions to the entire image
database and calculated the extent to which these first
impressions were predicted by profile image preferences,
provided during collection of the Profile Image Dataset.
Second, in the Selection experiment, we collected ratings
of trait impressions to only those images that had been
explicitly selected as most/least likely to be selected as
profile images. In both experiments, we examine the
moderating effect of profile image preferences on first
impressions; comparing the impact of participants'
preferences for images of their own face (self-selection)
to preferences for images of an unfamiliar face (other-
selection).
Calibration experiment
Method
A total of 178 unfamiliar viewers were recruited online
via the online crowdsourcing platform Amazon Mechan-
ical Turk (M-Turk; see Buhrmester, Kwang, & Gosling,
2011) and were paid US$1. Eighteen were excluded be-
fore analysis as they reported engaging in a distracting
activity during the experiment, leaving a final sample of
160 (80 women, mean age = 36.4 years; SD = 12.2 years).
Each unfamiliar viewer rated 12 different images of 12
different people (144 images presented individually in a
random order). This method resulted in a pre-
determined sample size of 20 raters per image that was
considered sufficient to provide a stable estimate of
trait impressions (see Oosterhof & Todorov, 2008).
Viewers were instructed to rate how attractive, trust-
worthy, dominant, confident, and competent the person
appeared in each image on a scale from 1 (very low) to
9 (very high). These five ratings were made on separate
rating scales and scales were presented concurrently on
the same screen as the photos.
Results
We calculated the extent to which both self-photograph and other-photograph selection likelihood ratings were calibrated with: (1) participants' own ratings of trait impressions collected in the image collection phase (Own calibration); and (2) ratings of unfamiliar viewers' trait impressions, collected via the Internet (Internet calibration).²
Calibration scores indexed participants' ability to choose images that accentuated positive impressions and were calculated separately by face identity using Spearman's rank correlation. We calculated calibration for each of the three social network contexts, to reveal which traits were most accentuated by profile image selection in each context, and analyzed these data separately for own and Internet ratings. Results of this analysis are shown in Fig. 2.
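Concretely, a calibration score for one face identity is a Spearman rank correlation across that identity's 12 images. A minimal pure-Python sketch (tied values receive average ranks, the standard convention; the authors' exact implementation is not reported, and the example ratings are hypothetical):

```python
def average_ranks(xs):
    """1-based ranks, with tied values receiving their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman's rho: Pearson correlation computed on ranks."""
    ra, rb = average_ranks(a), average_ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sd_a = sum((x - ma) ** 2 for x in ra) ** 0.5
    sd_b = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sd_a * sd_b)

# Calibration for one identity: correlate the 12 selection-likelihood
# ratings with the 12 corresponding trait ratings (all 1-9 scales).
likelihood = [9, 8, 8, 7, 6, 6, 5, 4, 3, 3, 2, 1]      # hypothetical
attractiveness = [8, 9, 7, 7, 6, 5, 5, 4, 4, 2, 2, 1]  # hypothetical
calibration = spearman(likelihood, attractiveness)
```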
Own and Internet calibration scores were analyzed by mixed ANOVA with between-subject factor of Selection Type (self, other) and within-subject factors of Context (Facebook, dating, professional) and Trait (attractiveness, trustworthiness, dominance, competence, confidence). For own calibration, the main effect of Selection Type was non-significant, F(1, 202) = 1.48, p = 0.25, ηp² = 0.007, with high average calibration between image selection and positive social impressions for both self-selected (M = 0.509; SD = 0.319) and other-selected photographs (M = 0.543; SD = 0.317). For Internet calibration, the main effect of Selection Type was significant, F(1, 202) = 4.12, p = 0.044, ηp² = 0.020. Critically, there was greater calibration between image selection and positive social impressions for other-selected (M = 0.227; SD = 0.340) compared to self-selected photographs (M = 0.165; SD = 0.344).
In both own and Internet calibration analyses, the interaction between Context and Selection Type was significant (Own: F(2, 404) = 4.16, p = 0.016, ηp² = 0.020; Internet: F(2, 404) = 4.26, p = 0.015, ηp² = 0.021), reflecting higher calibration for other-selections compared to self-selections in professional (Own: F(1, 202) = 5.73, p = 0.018, ηp² = 0.028; Internet: F(1, 202) = 11.16, p < 0.001, ηp² = 0.052) but not Facebook or dating contexts (all Fs < 1). In general, interactions revealed that traits were aligned to network contexts, such that attractiveness tended to calibrate most with social and dating networks and competence and trustworthiness with professional networks (see Additional file 1 for full details of this analysis).
Discussion
Consistent with predictions based on studies of self-
presentation (e.g., Hancock & Toma, 2009; Siibak, 2009),
the pattern of results observed in the Calibration
experiment lends broad support to the notion that
people select images of themselves to accentuate positive
trait impressions and that these selections are fitted to spe-
cific social networking contexts (cf. Leary & Allen, 2011).
Strikingly, however, the profile image preferences indicated
in other-selections were more calibrated to impressions
formed by unfamiliar viewers than self-selections. This re-
sult is contrary to the prediction based on self-presentation
literature, that participants would select more flattering im-
ages of themselves than of other people.
Notably, the cost of self-selection applied only to pro-
fessional profile image selections, raising the possibility
that costs of self-selection were specific to this network
context. Therefore, in a second experiment, we again ex-
amined effects of self-selection on first impressions, but
using a more direct test: comparing trait judgments to
images that had been explicitly chosen as most and least
likely to be used as profile images for different network contexts (see the "Profile Image Dataset" method).
In the Calibration experiment, unfamiliar viewers also
rated 12 images of each pictured individual, making it likely
that this diluted their first impressions. Further, these
viewers made multiple trait judgments to a single photo,
which may increase overlap in these judgments (Rhodes,
2006). We addressed these potential concerns in the Se-
lection experiment, by now presenting unfamiliar
viewers with only two images of each participant (most/
least likely profile image choice) and asking viewers to
rate these images for a single trait impression.
Selection experiment
Method
A total of 482 new unfamiliar viewers were recruited on-
line via M-Turk and were paid US$1. Data from 50
viewers were excluded from the analysis because they
did not pass the quality criteria used in the previous ex-
periment, leaving a final sample of 432 (273 women),
with an average age of 36.4 years (SD = 11.6 years).
In this experiment, we focused on impressions of at-
tractiveness, trustworthiness, and competence. Viewers
rated images that had been selected by participants in
the Profile Image Dataset as being most and least likely
to be used in each social network context. This proced-
ure resulted in 12 images of each pictured identity (3
contexts × self/other selected × least/most likely; Fig. 3a).
Fig. 2 Results from the Calibration experiment. Calibration was computed separately for self-selection and other-selection as the correlation between likelihood of profile image choice and: (1) participants' own trait impressions (top panels); (2) impressions of unfamiliar viewers recruited via the Internet (bottom panels). Higher calibration indexes participants' ability to choose profile images that increase positive impressions. Participants' likelihood of selecting a photograph of their own face (self-selection: top left) and an unfamiliar face (other-selection: top right) was strongly calibrated to their own impressions. However, in general, self-selections were less well calibrated to the impressions of unfamiliar viewers (bottom left) than were other-selections (bottom right). Error bars represent ±1 standard error
To balance the design of the Selection experiment, we randomly selected a subset of 96 pictured identities from
the Profile Image Dataset. A total of 1152 images were
divided into 12 counterbalanced versions of the experi-
ment. This method resulted in a sample size of 36
viewers per counterbalanced version. Each viewer rated
192 images on a single trait (attractiveness, trustworthi-
ness, competence), with each pictured identity appearing
twice (most and least likely images from one combin-
ation of Context/Selection Type). The experimental de-
sign ensured that assignment of pictured identities to
conditions was counterbalanced across viewers.
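The counterbalancing logic can be illustrated with a rotation scheme. The paper reports 12 versions of 192 images each (every identity contributing its most- and least-likely image pair from one Context × Selection cell per version); the simple 6-cell rotation below is a hypothetical reconstruction for illustration, not the authors' actual assignment procedure:

```python
from itertools import product

CONTEXTS = ["facebook", "dating", "professional"]
SELECTORS = ["self", "other"]
COMBOS = list(product(CONTEXTS, SELECTORS))  # 6 Context x Selection cells

def version_assignment(version, n_identities=96):
    """Map each pictured identity to the single Context x Selection cell
    whose most/least-likely image pair it contributes in this version.
    Rotating by version number keeps each version balanced over the six
    cells while giving every identity every cell across versions."""
    return {i: COMBOS[(i + version) % len(COMBOS)]
            for i in range(n_identities)}

# Each version shows 96 identities x 2 images = 192 images per viewer.
```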
Results
Difference scores were calculated separately for each
viewer in the Selection experiment by subtracting their mean trait ratings to "least likely" images from ratings to "most likely" images. This provided a measure of the effect of image selection on facial first impressions at the
level of the viewer. These data were analyzed by using a
mixed-factor ANOVA with between-subject factor of
Trait (attractiveness/trustworthiness/competence) and
within-subject factors of Selection Type (self/other) and
Context (Facebook/dating/professional). Mean difference
scores for each condition are shown in Fig. 3b.
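The difference-score computation described above reduces to a per-viewer mean contrast; a minimal sketch with hypothetical ratings:

```python
def selection_effect(most_likely_ratings, least_likely_ratings):
    """Per-viewer effect of image selection: mean trait rating given to
    'most likely' profile images minus the mean given to 'least likely'
    images. Positive values mean selection enhanced first impressions."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(most_likely_ratings) - mean(least_likely_ratings)

# Hypothetical 1-9 trait ratings from one viewer in one condition:
most = [7, 8, 6, 7]
least = [4, 5, 3, 4]
effect = selection_effect(most, least)  # 7.0 - 4.0 = 3.0
```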
This analysis revealed a significant main effect of Selection Type, F(1, 429) = 77.2, p < 0.001, ηp² = 0.152, with other-selections again enhancing trait impressions more than self-selections. The main effect of Context was also significant, F(2, 858) = 78.7, p < 0.001, ηp² = 0.155, with image selection having the greatest effect on trait judgments in professional networking (M = 0.621; SD = 0.787) compared with Facebook (M = 0.370; SD = 0.657) and dating contexts (M = 0.255; SD = 0.587).
Fig. 3 a Examples of most and least likely image selections used in the Selection experiment. Images are used with permission; the full set of experimental materials is available online in Additional file 5. b Mean difference between trait impression ratings to photographs chosen as most and least likely profile pictures for each of three contexts. Positive values signify higher trait ratings for images selected as "most likely" profile images, again revealing more positive first impressions for images that were selected by an unfamiliar other (light gray) when compared to self-selections (dark gray). c Significant two-way interactions (see text for details of analysis). All error bars denote ±1 standard error
Main effects were qualified by three two-way interactions. First, the interaction between Context and Trait was significant (see Fig. 3c [left]: F(4, 858) = 73.8, p < 0.001, ηp² = 0.256), indicating that different traits were accentuated in different online contexts. Specifically, selections for Facebook (M = 0.619; SD = 0.355) and dating (M = 0.475; SD = 0.366) accentuated ratings of attractiveness more than professional networking selections (M = 0.246; SD = 0.380). Selections for professional networking contexts conferred significantly more benefit to trustworthiness (M = 0.590; SD = 0.648) and competence (M = 1.029; SD = 0.638) relative to selections for Facebook (trustworthiness: M = 0.137; SD = 0.470; competence: M = 0.353; SD = 0.503) and dating (trustworthiness: M = 0.058; SD = 0.372; competence: M = 0.232; SD = 0.391).
Second, the interaction between Selection Type and Trait was significant (see Fig. 3c [middle]: F(4, 858) = 9.18, p < 0.001, ηp² = 0.041). The benefit of other-selection over self-selection was carried by other-selections conferring more positive impressions for trustworthiness, F(1, 429) = 46.2, p < 0.001, ηp² = 0.103, and competence, F(1, 429) = 46.8, p < 0.001, ηp² = 0.104. Interestingly, other-selections did not confer a significant benefit for attractiveness impressions, F(1, 429) = 2.47, p > 0.05, ηp² = 0.012. Third, the interaction between Selection Type and Context was significant (see Fig. 3c [right]: F(4, 858) = 9.18, p < 0.001, ηp² = 0.041). Other-selections produced more positive effects on trait impressions in comparison to self-selection across all contexts, but to differing degrees (Facebook: F(1, 429) = 27.6, p < 0.001, ηp² = 0.063; dating: F(1, 429) = 53.1, p < 0.001, ηp² = 0.112; professional: F(1, 429) = 10.5, p = 0.001, ηp² = 0.024).
Discussion
Results of the Selection experiment replicated the main
findings of the previous experiment. First, profile image
selection accentuated positive first impressions and these
impressions were matched to specific network contexts.
This confirms that people are aware of the different im-
pressions that different images confer and adjust their
choices to fit the particular context. Second, and more
surprisingly, self-selected profile images conferred less
favorable impressions when compared to other-selected
images. Whereas this effect was limited to professional
networking contexts in the Calibration experiment,
using a more sensitive test in the Selection experiment,
we observed the effect across all networking contexts.
General discussion
This paper reports the first systematic test of people's
profile image selection behavior. Strikingly, we found
that people selected images of themselves that cast less
favorable first impressions than images selected by
strangers. At face value, this result appears to run con-
trary to a vast literature showing that people portray
themselves more positively than other people. Self-
enhancement is a pervasive human tendency in a variety
of social contexts (e.g., Goffman, 1959; Schlenker, 2003),
including social networking sites (see Hancock & Toma,
2009; Siibak, 2009). Interestingly, pioneering work by
Erving Goffman conceptualized self-presentation as a process of projecting a deliberately choreographed "face" to others (Goffman, 1955), and a large literature shows that people manage their appearance to improve the likelihood of desirable outcomes.
Given this apparent expertise in "showing face," it might be expected that people would also be experts in "choosing face": they would be more adept at selecting favorable facial images of themselves than they would be at selecting favorable facial images of unfamiliar people. However, our
results clearly argue against any such self-expertise.
Although our results are surprising in the context of
self-enhancement research, they may be related to the
finding that people tend to perceive themselves more
positively than other people. For example, it has been shown that people evaluate images of their own face as more trustworthy than unfamiliar faces (Verosky & Todorov, 2010). Importantly, the task faced when selecting profile images is to discriminate between images of
your own face. The existence of positivity biases is
therefore unlikely to improve a person's ability to make
these selections, if such biases are independent of dis-
crimination (cf. Macmillan & Creelman, 2004). One ap-
parently plausible account of our findings is that,
somewhat paradoxically, these self-enhancing biases in
perception may in fact interfere with a person's ability to
discriminate between images when selecting one to por-
tray a positive impression.
Although plausible, this account of self-selection costs
is inconsistent with the fact that costs were specific to
certain trait impressions. In the Selection experiment,
although we observed overall costs within each social
network context, costs were nevertheless specific to im-
pressions of trustworthiness and competence and were
not observed for attractiveness. Previous studies have
shown that people perceive their own face to be both
more trustworthy (Verosky & Todorov, 2010) and more
attractive than other people's faces (Epley & Whitchurch,
2008; Zell & Balcetis, 2012). Explanations of self-selection
costs in terms of self-enhancing biases are not able to ac-
count for the fact that we observed costs in one trait
evaluation but not the other. This in turn suggests that
the mechanisms responsible for self-enhancing biases, and
the cost of self-selection reported here, are relatively
independent.
Given that this is the first report of self-selection costs
in profile image choice, future research is necessary to
elucidate the precise mechanisms underlying these costs.
In particular, it will be important to examine the contri-
bution of familiarity more closely. Recent work shows
similar self-selection costs when choosing images that
White et al. Cognitive Research: Principles and Implications (2017) 2:23 Page 7 of 9
are representative of our current appearance: people
choose images of themselves that are less representative
than images chosen by unfamiliar viewers after brief
familiarization (White et al., 2015). This shows that diffi-
culties in selecting images of our own face are not spe-
cific to socially motivated tasks. Interestingly, very
recent evidence suggests that memory for specific
images of familiar faces may be impaired relative to
unfamiliar faces (Armann, Jenkins, & Burton, 2016),
raising the possibility that familiarity with any face, not
only our own face, causes difficulty in discriminating between
different images of that face. Future studies designed to
test this possibility can help to separate contributions of
visual familiarity from the broader cognitive system of
self-representation (see Devue & Brédart, 2011).
Notwithstanding a large cost of self-selection, we
found that first impressions were substantially enhanced
by profile image selection, and these selections were
tailored to social networking contexts. Overall, participants
were aware of the impressions made by different images
of their face and made profile image choices accordingly,
fitting facial first impressions to the social context of the
audience. This extends recent work showing that people
can detect subtle differences in impressions made by dif-
ferent photos of the same unfamiliar face, both when
photos are captured in controlled studio conditions
(Todorov & Porter, 2014) and in ambient environments
(Jenkins et al., 2011). In parallel, computer scientists
have made impressive progress in developing automated
methods for predicting humans' first impressions from
ambient facial imagery. Using deep neural networks
trained on humans' ratings of first impressions,
McCurrie et al. (2016) were able to predict facial first
impressions from face images relatively accurately (cf.
Vernon et al., 2014). In future work, it may be useful to
compare human profile selection choices to these com-
putational benchmarks.
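To give a sense of how such computational benchmarks work: once faces are encoded as numeric feature vectors (e.g., activations from a pre-trained network), predicting mean impression ratings reduces to a regression problem. The sketch below is a hypothetical illustration using closed-form ridge regression on synthetic features; McCurrie et al. (2016) used deep neural networks, not this linear model:

```python
import numpy as np

def fit_impression_model(features, ratings, alpha=1.0):
    """Ridge regression from image features to mean impression ratings.

    features: (n_images, n_features) array; ratings: (n_images,) array.
    Returns the weight vector w solving (X'X + alpha*I) w = X'y.
    """
    X = np.asarray(features, dtype=float)
    y = np.asarray(ratings, dtype=float)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

def predict_impressions(w, features):
    """Predicted mean impression rating for each image."""
    return np.asarray(features, dtype=float) @ w

# Synthetic demonstration: recover a known linear rating rule.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))  # hypothetical image features
w_true = rng.normal(size=16)          # hypothetical "true" rating weights
y_train = X_train @ w_true + rng.normal(scale=0.1, size=200)
w_hat = fit_impression_model(X_train, y_train, alpha=0.1)
```

In practice the features would come from a face-processing network and the targets from averaged human ratings, as in the cited work.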
More broadly, our results have implications for self-
presentation in modern society. Recent data show that
1.8 billion images are uploaded every day to popular
social networking sites (KPCB, 2014), leading to a multi-
tude of new opportunities for self-monitoring behavior
(see also Hancock & Toma, 2009; Siibak, 2009; Van
Dijck, 2008; Walther, 1996). Self-selection of images is a
multi-staged process: taking "selfies" (see Re, Wang, He,
& Rule, 2016); deleting images from digital cameras;
selecting images to upload to social media; untagging
images on Facebook (see Lang & Barton, 2015). In this
context, an important limitation of the present study is
that images were initially downloaded from Facebook.
Therefore, selection behavior reported in this paper may
represent the final stage in a hierarchy of selection filters
that combine to determine a person's online appearance.
Nevertheless, given the robust cost of self-selection we
observe here, it is likely that this effect serves to limit
positive facial impressions at multiple levels in this
hierarchy, thereby curtailing people's ability to put their
best face forward.
Conclusions
Given the diverse opportunities for self-monitoring via
digital media, understanding the dynamics of selection
behavior will be important in developing models of facial
first impressions that are relevant to real-world social
networking contexts. We propose that image selection
tasks can provide a lens through which to understand
processes that modulate the signaling and receiving of
these impressions in daily life, from current impression
management goals to inherent perceptual abilities. For
now, it is clear that the facial first impressions we
transmit to unfamiliar people, via online social networks,
are constrained by how we perceive ourselves. Our
results also impart practical wisdom: when it comes to
choosing the best version of ourselves, it may be wise to
let other people choose for us.
Endnotes
1. All images and accompanying rating data are available
in Additional files. Participants have consented to the use
of their images in future research. To protect participants'
privacy, the mapping between images and rating data has
been withheld. Should researchers require this information,
the full Profile Image Dataset is available from the
authors on request.
2. Because average ratings of M-Turk raters were used to
compute calibration, we checked the stability of these
ratings across subjects using Cronbach's alpha. This
analysis confirmed high levels of reliability for all impressions
(Attractiveness = 0.893, Trustworthiness = 0.821, Dominance
= 0.721, Competence = 0.756, Confidence = 0.785).
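For readers wishing to reproduce this reliability check, Cronbach's alpha can be computed from the faces-by-raters score matrix, treating each rater as one "item" of a scale. A minimal Python sketch (our own illustration; variable names are hypothetical):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_faces, n_raters) score matrix.

    alpha = k/(k-1) * (1 - sum of per-rater variances / variance of totals),
    where k is the number of raters. Values near 1 indicate that raters
    rank the faces consistently.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    rater_vars = scores.var(axis=0, ddof=1).sum()  # per-rater variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of row totals
    return (k / (k - 1)) * (1 - rater_vars / total_var)
```

Columns that are linear shifts of one another (perfectly consistent raters) give alpha = 1; uncorrelated raters pull alpha toward 0.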
Additional files
Additional file 1: Full description of analysis in the Calibration
experiment. (PDF 166 KB)
Additional file 2: Images used in the Calibration experiment. (PDF 16.7 MB)
Additional file 3: Raw rating data from Calibration experiment.
(XLSX 107 KB)
Additional file 4: Spearman's rho scores from Calibration experiment.
(XLSX 125 KB)
Additional file 5: Images used in the Selection experiment. (PDF 17.0 MB)
Additional file 6: Rating data from the Selection experiment (by viewer).
(XLSX 87.4 KB)
Additional file 7: Rating data from the Selection experiment (by image).
(XLSX 528 KB)
Acknowledgements
This research was supported by Australian Research Council grants to DW
(LP130100702) and CS (DP170104602), postdoctoral research support from the
Australian Research Council Centre of Excellence in Cognition and its Disorders,
University of Western Australia (CE110001021) and an ESRC Overseas Institutional
Visit award (ES/1900748/1) to CS. The authors thank Manuela Tan and
undergraduate volunteers at the UNSW School of Psychology for assisting with
the pilot work that led to this research.
Authors' contributions
DW developed the study concept. All authors contributed to the study
design. AB performed experimentation and data collection. All authors
contributed to the data analysis and prepared Additional files 1,2,3,4,5,6
and 7. DW drafted the manuscript and CS provided critical revisions. All
authors approved the final version of the manuscript for submission.
Competing interests
The authors declare that they have no competing interests.
Ethics approval and consent to participate
This study was approved by the Human Research Ethics Committee at the
University of New South Wales. All participants provided written informed
consent and appropriate photographic release.
Author details
1. School of Psychology, University of New South Wales Sydney, Sydney, Australia.
2. School of Psychology, University of Western Australia, Crawley, Australia.
3. ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, NSW, Australia.
4. School of Psychology, University of Sydney, Sydney, Australia.
Received: 14 December 2016 Accepted: 21 February 2017
References
Acquisti, A., & Fong, C. M. (2015). An experiment in hiring discrimination via online social networks. Available at SSRN: https://ssrn.com/abstract=2031979 or http://dx.doi.org/10.2139/ssrn.2031979. Accessed 1 Oct 2016.
Allen, H., Brady, N., & Tredoux, C. (2009). Perception of "best likeness" to highly familiar faces of self and friend. Perception, 38(12), 1821–1830. doi:10.1068/p6424
Armann, R. G., Jenkins, R., & Burton, A. M. (2016). A familiarity disadvantage for remembering specific images of faces. Journal of Experimental Psychology: Human Perception and Performance, 42(4), 571.
Brown, J. D. (2012). Understanding the better than average effect: Motives (still) matter. Personality and Social Psychology Bulletin, 38(2), 209–219. doi:10.1177/0146167211432763
Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6(1), 3–5. doi:10.1177/1745691610393980
Devue, C., & Brédart, S. (2011). The neural correlates of visual self-recognition. Consciousness and Cognition, 20(1), 40–51.
Epley, N., & Whitchurch, E. (2008). Mirror, mirror on the wall: Enhancement in self-recognition. Personality and Social Psychology Bulletin, 34(9), 1159–1170. doi:10.1177/0146167208318601
Ert, E., Fleischer, A., & Magen, N. (2016). Trust and reputation in the sharing economy: The role of personal photos on Airbnb. Tourism Management, 55, 62–73.
Facebook. (2016). Company information. http://newsroom.fb.com/company-info/. Accessed 30 Sept 2016.
Flowe, H. D., & Humphries, J. E. (2011). An examination of criminal face bias in a random sample of police lineups. Applied Cognitive Psychology, 25(2), 265–273. doi:10.1002/acp.1673
Goffman, E. (1955). On face-work. Psychiatry, 18(3), 213–231. doi:10.1521/00332747.1955.11023008
Goffman, E. (1959). The presentation of self in everyday life. New York: Anchor Books.
Hancock, J. T., & Toma, C. L. (2009). Putting your best face forward: The accuracy of online dating photographs. Journal of Communication, 59(2), 367–386. doi:10.1111/j.1460-2466.2009.01420.x
Hehman, E., Flake, J. K., & Freeman, J. B. (2015). Static and dynamic facial cues differentially affect the consistency of social evaluations. Personality and Social Psychology Bulletin, 41(8), 1123–1134. doi:10.1177/0146167215591495
Jenkins, R., White, D., Van Montfort, X., & Burton, A. M. (2011). Variability in photos of the same face. Cognition, 121, 313–323. doi:10.1016/j.cognition.2011.08.001
KPCB. (2014). Internet trends 2014. Retrieved 6 January 2016, from www.kpcb.com/blog/2014-internet-trends
Lang, C., & Barton, H. (2015). Just untag it: Exploring the management of undesirable Facebook photos. Computers in Human Behavior, 43, 147–155. doi:10.1016/j.chb.2014.10.051
Leary, M. R., & Allen, A. B. (2011). Self-presentational persona: Simultaneous management of multiple impressions. Journal of Personality and Social Psychology, 101(5), 1033–1049. doi:10.1037/a0023884
Macmillan, N. A., & Creelman, C. D. (2004). Detection theory: A user's guide. New York: Psychology Press.
McCurrie, M., Beletti, F., Parzianello, L., Westendorp, A., Anthony, S., & Scheirer, W. (2016). Predicting first impressions with deep learning. arXiv preprint arXiv:1610.08119.
Murphy, S. C., von Hippel, W., Dubbs, S. L., Angilletta, M. J., Wilson, R. S., Trivers, R., & Barlow, F. K. (2015). The role of overconfidence in romantic desirability and competition. Personality and Social Psychology Bulletin, 41, 1036–1052. doi:10.1177/0146167215588754
Olivola, C. Y., Funk, F., & Todorov, A. (2014). Social attributions from faces bias human choices. Trends in Cognitive Sciences, 18(11), 566–570. doi:10.1016/j.tics.2014.09.007
Oosterhof, N. N., & Todorov, A. (2008). The functional basis of face evaluation. PNAS, 105(32), 11087–11092. doi:10.1073/pnas.0805664105
Re, D. E., Wang, S. A., He, J. C., & Rule, N. O. (2016). Selfie indulgence: Self-favoring biases in perceptions of selfies. Social Psychological and Personality Science, 7, 588. doi:10.1177/1948550616644299
Rhodes, G. (2006). The evolutionary psychology of facial beauty. Annual Review of Psychology, 57(1), 199–226. doi:10.1146/annurev.psych.57.102904.190208
Rule, N. O., & Ambady, N. (2008). The face of success: Inferences from chief executive officers' appearance predict company profits. Psychological Science, 19(2), 109–111.
Schlenker, B. R. (2003). Self-presentation. In M. R. Leary (Ed.), Handbook of Self and Identity (2nd ed.). New York: Guilford Publications.
Siibak, A. (2009). Constructing the self through the photo selection: Visual impression management on social networking websites. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 3(1), 1.
Sutherland, C. A. M., Oldmeadow, J. A., Santos, I. M., Towler, J., Burt, D. M., & Young, A. W. (2013). Social inferences from faces: Ambient images generate a three-dimensional model. Cognition, 127(1), 105–118. doi:10.1016/j.cognition.2012.12.001
Todorov, A., Olivola, C. Y., Dotsch, R., & Mende-Siedlecki, P. (2015). Social attributions from faces: Determinants, consequences, accuracy, and functional significance. Annual Review of Psychology, 66(1), 519–545. doi:10.1146/annurev-psych-113011-143831
Todorov, A., & Porter, J. M. (2014). Misleading first impressions: Different for different facial images of the same person. Psychological Science, 25(7), 1404–1417. doi:10.1177/0956797614532474
Van Dijck, J. (2008). Digital photography: Communication, identity, memory. Visual Communication, 7(1), 57–76. doi:10.1177/1470357207084865
Vernon, R. J. W., Sutherland, C. A. M., Young, A. W., & Hartley, T. (2014). Modeling first impressions from highly variable facial images. PNAS, 111(32), E3353–E3361. doi:10.1073/pnas.1409860111
Verosky, S. C., & Todorov, A. (2010). Differential neural responses to faces physically similar to the self as a function of their valence. NeuroImage, 49(2), 1690–1698.
Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23(1), 3–43. doi:10.1177/009365096023001001
White, D., Burton, A. L., & Kemp, R. I. (2015). Not looking yourself: The cost of self-selecting photographs for identity verification. British Journal of Psychology, 107(2), 359–373. doi:10.1111/bjop.12141
Willis, J., & Todorov, A. (2006). First impressions: Making up your mind after a 100-ms exposure to a face. Psychological Science, 17(7), 592–598. doi:10.1111/j.1467-9280.2006.01750.x
Wu, W., Sheppard, E., & Mitchell, P. (2016). Being Sherlock Holmes: Can we sense empathy from a brief sample of behaviour? British Journal of Psychology, 107(1), 1–22.
YouGov. (2014). Online dating services. http://yougov.co.uk/news/2014/02/13/seven-ten-online-dating-virgins-willing-try-findin/. Accessed 11 June 2014.
Zell, E., & Balcetis, E. (2012). The influence of social comparison on visual representation of one's face. PLoS One, 7(5), e36742. doi:10.1371/journal.pone.0036742
White et al. Cognitive Research: Principles and Implications (2017) 2:23 Page 9 of 9
... The models were tested on five out-of-sample independent datasets that are publicly available (Lin et al., 2021;Oh et al., 2020;Oosterhof & Todorov, 2008;Walker et al., 2018;White et al., 2017). These test datasets were selected to sample social judgments from different types of faces, including studio portraits of frontal, neutral faces, computergenerated faces, and ambient photos of faces taken under unconstrained conditions. ...
... The Oosterhof and Todorov (2008) dataset included ratings for 300 computer-generated frontal, neutral, white faces on nine social attributes. The White et al. (2017) dataset originally included ratings for 1224 ambient photos (12 images of each of the 102 individuals of various races) taken in real-world contexts downloaded from their Facebook accounts (varied in viewpoint, facial expression, background, illumination, etc.) on five social attributes. We only used 504 photos of white individuals (12 images of each of the 42 individuals). ...
... Therefore, in the case where the same social attribute in the training dataset was not available in the test dataset, we used the synonym/antonym of the fitted social attribute in the test dataset (if available). Based on this rationale, we tested the models that were fit to the corresponding social attributes in the Chicago Face Database on nine social attributes in the Lin et al. (2021) dataset, four social attributes in the Oh et al. (2020) dataset, four social attributes in the Oosterhof and Todorov (2008) dataset, and three social attributes in the White et al. (2017) dataset. ...
Article
Full-text available
People spontaneously infer other people's psychology from faces, encompassing inferences of their affective states, cognitive states, and stable traits such as personality. These judgments are known to be often invalid, but nonetheless bias many social decisions. Their importance and ubiquity have made them popular targets for automated prediction using deep convolutional neural networks (DCNNs). Here, we investigated the applicability of this approach: how well does it generalize, and what biases does it introduce? We compared three distinct sets of features (from a face identification DCNN, an object recognition DCNN, and using facial geometry), and tested their prediction across multiple out-of-sample datasets. Across judgments and datasets, features from both pre-trained DCNNs provided better predictions than did facial geometry. However, predictions using object recognition DCNN features were not robust to superficial cues (e.g., color and hair style). Importantly, predictions using face identification DCNN features were not specific: models trained to predict one social judgment (e.g., trustworthiness) also significantly predicted other social judgments (e.g., femininity and criminal), and at an even higher accuracy in some cases than predicting the judgment of interest (e.g., trustworthiness). Models trained to predict affective states (e.g., happy) also significantly predicted judgments of stable traits (e.g., sociable), and vice versa. Our analysis pipeline not only provides a flexible and efficient framework for predicting affective and social judgments from faces but also highlights the dangers of such automated predictions: correlated but unintended judgments can drive the predictions of the intended judgments. Supplementary information: The online version contains supplementary material available at 10.1007/s42761-021-00075-5.
... The models were tested on six independent publicly available datasets 14,24,30,31,37 . These test datasets were selected to sample trait ratings for different types of faces, including studio portraits of frontal, neutral faces, computer-generated faces, and ambient photos of faces taken under unconstrained conditions. ...
... The mean prediction accuracy for each trait was obtained by averaging the accuracies across bootstrap iterations. For the test dataset that contained a large number of ambient photos (504 photos of 42 white individuals were selected for testing from the 1224 photos of 102 individuals of all races) 31 , one image was randomly sampled from each individual's images at each bootstrap iteration (i.e., 42 images were included at each iteration) to prevent bias in prediction accuracy. ...
Article
Full-text available
Judgments of people from their faces are often invalid but influence many social decisions (e.g., legal sentencing), making them an important target for automated prediction. Direct training of deep convolutional neural networks (DCNNs) is difficult because of sparse human ratings, but features obtained from DCNNs pre-trained on other classifications (e.g., object recognition) can predict trait judgments within a given face database. However, it remains unknown if this latter approach generalizes across faces, raters, or traits. Here we directly compare three distinct types of face features, and test them across multiple out-of-sample datasets and traits. DCNNs pre-trained on face identification provided features that generalized the best, and models trained to predict a given trait also predicted several other traits. We demonstrate the flexibility, generalizability, and efficiency of using DCNN features to predict human trait judgments from faces, providing an easily scalable framework for automated prediction of human judgment.
... The models were tested on six independent publicly available datasets 14,24,30,31,37 . These test datasets were selected to sample trait ratings for different types of faces, including studio portraits of frontal, neutral faces, computer-generated faces, and ambient photos of faces taken under unconstrained conditions. ...
... The mean prediction accuracy for each trait was obtained by averaging the accuracies across bootstrap iterations. For the test dataset that contained a large number of ambient photos (504 photos of 42 white individuals were selected for testing from the 1224 photos of 102 individuals of all races) 31 , one image was randomly sampled from each individual's images at each bootstrap iteration (i.e., 42 images were included at each iteration) to prevent bias in prediction accuracy. ...
Preprint
Full-text available
Judgments of people from their faces are often invalid but influence many social decisions (e.g., legal sentencing), making them an important target for automated prediction. Direct training of deep convolutional neural networks (DCNNs) is difficult because of sparse human ratings, but features obtained from DCNNs pre-trained on other classifications (e.g., object recognition) can predict trait judgments within a given face database. However, it remains unknown if this latter approach generalizes across faces, raters, or traits. Here we directly compare three distinct types of face features, and test them across multiple out-of-sample datasets and traits. DCNNs pre-trained on face identification provided features that generalized the best, and models trained to predict a given trait also predicted several other traits. We demonstrate the flexibility, generalizability, and efficiency of using DCNN features to predict human trait judgments from faces, providing an easily scalable framework for automated prediction of human judgment.
... First, only internet users with Facebook account could be sampled, raising the possibility of selection bias [62][63][64]. Second, people choose their profile pictures from a range of photos, and thus this selection possibly suffered from vanity-or identity-type selection bias [65][66][67]. ...
Preprint
Full-text available
One of the most contested questions about human behaviour is whether there are inherent sex or gender differences in the formation and maintenance of social bonds. On one hand, female and male brains are structurally almost identical, and while there are sex differences in the endocrine system, these are small, while much of gendered identity and behaviour is learned. On the other hand, sex differences in some aspects of social behaviour have deep evolutionary roots, and are widely present in non-human animals. This observational study recorded the frequency of same-aged, adult human groups appearing in public spaces through 2636 hours, recording group formation by 1.2mn people via 170 research assistants in 46 countries across the world. The results show (a) a significant sex-gender difference in same-sex-same-age frequency, in that ~50% more female-female than male-male pairs are observed in public spaces globally, and (b) that despite regional variation, the patterns holds up in every global region. This is the first study of sex-gender difference in dyadic social behaviour across the world on this scale, and the first global study that uses direct rather than internet-based observations.
... For example, when proof-reading, people identify more errors when proof-reading others' work than their own work (Worman, 1979). In another example, people who choose photos for others' online dating profiles select photos that are considered more attractive and flattering than photos people choose for their own dating profiles (White, Sutherland, & Burton, 2017). ...
Article
Full-text available
Previous research has generally shown that people’s decisions conform to the four-fold pattern of prospect theory; that is, people over-weight prospects with small probabilities and under-weight prospects with large probabilities. In terms of making risky decisions, the four-fold pattern unfolds accordingly: people make (1) risk-seeking choices among options that involve small-probable gains or large-probable losses; and (2) risk-averse choices among options that involve small-probable losses or large-probable gains. In three experiments and a summary quantitative model, we found that for interpersonal choices—decisions people make for others—the four-fold pattern attenuates and reverses in shape. We attributed this transformation to a unique signature in interpersonal decision makers’ emotions, which varied in mean, mode, and distribution from personal decision makers’. In all, our research offers new insights on prospect theory, interpersonal decision making, and the affective psychology of risk.
Article
Face perception is crucial to social interactions, yet people vary in how easily they can recognize their friends, verify an identification document or notice someone’s smile. There are widespread differences in people’s ability to recognize faces, and research has particularly focused on exceptionally good or poor recognition performance. In this Review, we synthesize the literature on individual differences in face processing across various tasks including identification and estimates of emotional state and social attributes. The individual differences approach has considerable untapped potential for theoretical progress in understanding the perceptual and cognitive organization of face processing. This approach also has practical consequences — for example, in determining who is best suited to check passports. We also discuss the underlying structural and anatomical predictors of face perception ability. Furthermore, we highlight problems of measurement that pose challenges for the effective study of individual differences. Finally, we note that research in individual differences rarely addresses perception of familiar faces. Despite people’s everyday experience of being ‘good’ or ‘bad’ with faces, a theory of how people recognize their friends remains elusive. The ability to recognize identity, emotion and other attributes from faces varies across individuals. In this Review, White and Burton synthesize research on individual differences in face processing and the implications of variability in face processing ability for theory and applied settings.
Article
The thoughts that come to mind when viewing a face depend partly on the face and partly on the viewer. This basic interaction raises the question of how much common ground there is in face-evoked thoughts, and how this compares to viewers' expectations. Previous analyses have focused on early perceptual stages of face processing. Here we take a more expansive approach that encompasses later associative stages. In Experiment 1 (free association), participants exhibited strong egocentric bias, greatly overestimating the extent to which other people's thoughts resembled their own. In Experiment 2, we show that viewers' familiarity with a face can be decoded from their face-evoked thoughts. In Experiment 3 (person association), participants reported who came to mind when viewing a face—a task that emphasises connections in a social network rather than nodes. Here again, viewers' estimates of common ground exceeded actual common ground by a large margin. We assume that a face elicits much the same thoughts in other people as it does in us, but that is a mistake. In this respect, we are more isolated than we think.
Article
Human face processing has been attributed to holistic processing. Here, we ask whether humans are sensitive to configural information when perceiving facial attractiveness. By referring to a traditional Chinese aesthetic theory—Three Forehead and Five Eyes—we generated a series of faces that differed in spacing between facial features. We adopted a two-alternative forced-choice task in Experiment 1 and a rating task in Experiment 2 to assess attractiveness. Both tasks showed a consistent result: The faces which fit the Chinese aesthetic theory were chosen or rated as most attractive. This effect of configural information on facial attractiveness was larger for faces with highly attractive features than for faces with low attractive features. These findings provide experimental evidence for the traditional Chinese aesthetic theory. This issue can be further explored from the perspective of culture in the future.
Article
First impressions of traits are formed rapidly and nonconsciously, suggesting an automatic process. We examined whether first impressions of trustworthiness are mandatory, another component of automaticity in face processing. In Experiment 1a, participants rated faces displaying subtle happy, subtle angry, and neutral expressions on trustworthiness. Happy faces were rated as more trustworthy than neutral faces; angry faces were rated as less trustworthy. In Experiment 1b, participants learned eight identities, half showing subtle happy and half showing subtle angry expressions. They then rated neutral images of these same identities (plus four novel neutral faces) on trustworthiness. Multilevel modeling analyses showed that identities previously shown with subtle expressions of happiness were rated as more trustworthy than novel identities. There was no effect of previously seen subtle angry expressions on ratings of trustworthiness. Mandatory first impressions based on subtle facial expressions were also reflected in two ratings designed to assess real-world outcomes. Participants indicated that they were more likely to vote for identities that had posed happy expressions and more likely to loan them money. These findings demonstrate that first impressions of trustworthiness based on previously seen subtle happy, but not angry, expressions are mandatory and are likely to have behavioral consequences.
Article
Full-text available
'Sharing economy' platforms such as Airbnb have recently flourished in the tourism industry. The prominent appearance of sellers’ photos on these platforms motivated our study. We suggest that the presence of these photos can have a significant impact on guests’ decision making. Specifically, we contend that guests infer the host’s trustworthiness from these photos, and that their choice is affected by this inference. In an empirical analysis of Airbnb’s data and a controlled experiment, we found that the more trustworthy the host is perceived to be from her photo, the higher the price of the listing and the probability of its being chosen. We also find that a host's reputation, communicated by her online review scores, has no effect on listing price or likelihood of consumer booking. We further demonstrate that if review scores are varied experimentally, they affect guests’ decisions, but the role of the host’s photo remains significant.
Article
Full-text available
Describable visual facial attributes are now commonplace in human biometrics and affective computing, with existing algorithms even reaching a sufficient point of maturity for placement into commercial products. These algorithms model objective facets of facial appearance, such as hair and eye color, expression, and aspects of the geometry of the face. A natural extension, which has not been studied to any great extent thus far, is the ability to model subjective attributes that are assigned to a face based purely on visual judgements. For instance, with just a glance, our first impression of a face may lead us to believe that a person is smart, worthy of our trust, and perhaps even our admiration - regardless of the underlying truth behind such attributes. Psychologists believe that these judgements are based on a variety of factors such as emotional states, personality traits, and other physiognomic cues. But work in this direction leads to an interesting question: how do we create models for problems where there is no ground truth, only measurable behavior? In this paper, we introduce a new convolutional neural network-based regression framework that allows us to train predictive models of crowd behavior for social attribute assignment. Over images from the AFLW face database, these models demonstrate strong correlations with human crowd ratings.
Article
Full-text available
This article takes as a point of departure Erving Goffman's (1959) ideas and the self-discrepancy theory of Higgins (1987) to examine the self-presentation habits of young people in online environments. The aim of my article is to examine the reasons for joining social networking sites (SNS) and the aspects young people hope to emphasize in their profile images. I also focus on the qualities that 11 to 18-year-olds consider crucial for becoming popular among their peers in the online community. The analysis is based on the findings of a questionnaire survey carried out in comprehensive schools in Estonia among 11 to 18-year-old pupils (N = 713). The results show that motives with a distinctly social focus dominate among the reasons for creating a profile in SNS. However, visible gender differences occur in the reasons for selecting particular profile images. The findings reveal that girls, when constructing their visual self, value the aesthetic, emotional, self-reflective, and aesthetic-symbolic aspects of photography more than their male counterparts do. Furthermore, visual impression management in SNS varies according to the expectations of the reference group at hand, as the profile images of the young are constructed and re-constructed based on the values associated with "the ideal self" or "the ought self".
Article
People often perceive themselves as more attractive and likable than others do. Here, we examined how these self-favoring biases manifest in a highly popular novel context that is particularly self-focused—selfies. Specifically, we analyzed selfie-takers’ and non-selfie-takers’ perceptions of their selfies versus photos taken by others and compared these to the judgments of external perceivers. Although selfie-takers and non-selfie-takers reported equal levels of narcissism, we found that the selfie-takers perceived themselves as more attractive and likable in their selfies than in others’ photos, but that non-selfie-takers viewed both photos similarly. Furthermore, external judges rated the targets as less attractive, less likable, and more narcissistic in their selfies than in the photos taken by others. Thus, self-enhancing misperceptions may support selfie-takers’ positive evaluations of their selfies, revealing notable biases in self-perception.
Article
Familiar faces are remembered better than unfamiliar faces. Furthermore, it is much easier to match images of familiar than unfamiliar faces. These findings could be accounted for by quantitative differences in the ease with which faces are encoded. However, it has been argued that there are also some qualitative differences in familiar and unfamiliar face processing. Unfamiliar faces are held to rely on superficial, pictorial representations, whereas familiar faces invoke more abstract representations. Here we present 2 studies that show, for 1 task, an advantage for unfamiliar faces. In recognition memory, viewers are better able to reject a new picture if it depicts an unfamiliar face. This rare advantage for unfamiliar faces supports the notion that familiarity brings about some representational changes, and further emphasizes the idea that theoretical accounts of face processing should incorporate familiarity.
Article
Mentalizing (otherwise known as 'theory of mind') involves a special process that is adapted for predicting and explaining the behaviour of others (targets) based on inferences about targets' beliefs and character. This research investigated how well participants made inferences about an especially apposite aspect of character, empathy. Participants were invited to make inferences of self-rated empathy after watching or listening to an unfamiliar target for a few seconds telling a scripted joke (or answering questions about him/herself or reading aloud a paragraph of promotional material). Across three studies, participants were good at identifying targets with low and high self-rated empathy but not good at identifying those who are average. Such inferences, especially of high self-rated empathy, seemed to be based mainly on clues in the target's behaviour, presented either in a video, a still photograph or in an audio track. However, participants were not as effective in guessing which targets had low or average self-rated empathy from a still photograph showing a neutral pose or from an audio track. We conclude with discussion of the scope and the adaptive value of this inferential ability.
Article
Photo-identification is based on the premise that photographs are representative of facial appearance. However, previous studies show that ratings of likeness vary across different photographs of the same face, suggesting that some images capture identity better than others. Two experiments were designed to examine the relationship between likeness judgments and face matching accuracy. In Experiment 1, we compared unfamiliar face matching accuracy for self-selected and other-selected high-likeness images. Surprisingly, images selected by previously unfamiliar viewers, after very limited exposure to a target face, were more accurately matched than self-selected images chosen by the target identity themselves. Results also revealed extremely low inter-rater agreement in ratings of likeness across participants, suggesting that perceptions of image resemblance are inherently unstable. In Experiment 2, we tested whether the cost of self-selection can be explained by this general disagreement in likeness judgments between individual raters. We found that averaging across rankings by multiple raters produces image selections that provide superior identification accuracy. However, the benefit of other-selection persisted for single raters, suggesting that inaccurate representations of self interfere with our ability to judge which images faithfully represent our current appearance.