Article

Consequences of erudite vernacular utilized irrespective of necessity: Problems with using long words needlessly


Abstract

Most texts on writing style encourage authors to avoid overly complex words. However, a majority of undergraduates admit to deliberately increasing the complexity of their vocabulary so as to give the impression of intelligence. This paper explores the extent to which this strategy is effective. Experiments 1–3 manipulate complexity of texts and find a negative relationship between complexity and judged intelligence. This relationship held regardless of the quality of the original essay, and irrespective of the participants' prior expectations of essay quality. The negative impact of complexity was mediated by processing fluency. Experiment 4 directly manipulated fluency and found that texts in hard-to-read fonts are judged to come from less intelligent authors. Experiment 5 investigated discounting of fluency. When obvious causes for low fluency exist that are not relevant to the judgement at hand, people reduce their reliance on fluency as a cue; in fact, in an effort not to be influenced by the irrelevant source of fluency, they over-compensate and are biased in the opposite direction. Implications and applications are discussed. Copyright © 2005 John Wiley & Sons, Ltd.


... The idea that simple language patterns can improve perceptions of scientists is supported by decades of processing fluency research and feelings-as-information theory (13–16). This literature suggests people tend to use their feelings when consuming information (16,17), and people often prefer simplicity over complexity because simple (fluent) information feels better to most people than complex (disfluent) information. ...
... The most common linguistic fluency dimension evaluated in the literature is lexical fluency, which considers the degree to which people use common and everyday terms in communication. People perceive scientists to be more intelligent if their work is written with simple words (e.g. the word job) compared to complex words (e.g. the word occupation) (15). In most cases, people prefer simple synonyms for a concept compared to complex synonyms of the same concept because it is more of a challenge to interpret and comprehend complexity, and people are economical with their effort and attention (22,25). ...
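The lexical-fluency contrast described above (e.g. job vs. occupation, use vs. utilize) can be illustrated with a minimal sketch. The synonym table and the mean-word-length complexity proxy are illustrative assumptions, not the materials used in the cited studies:

```python
# Sketch of a lexical-fluency manipulation: swap complex words for
# simpler synonyms, then compare a crude complexity proxy.
# The synonym table below is hypothetical, not the original stimuli.
SIMPLE_SYNONYMS = {
    "utilize": "use",
    "occupation": "job",
    "commence": "start",
    "endeavor": "try",
}

def simplify(text: str) -> str:
    """Replace each complex word with its simpler synonym."""
    words = text.lower().split()
    return " ".join(SIMPLE_SYNONYMS.get(w, w) for w in words)

def mean_word_length(text: str) -> float:
    """Crude lexical-complexity proxy: average characters per word."""
    words = text.split()
    return sum(len(w) for w in words) / len(words)

complex_text = "we commence this endeavor to utilize our occupation"
simple_text = simplify(complex_text)

print(simple_text)  # "we start this try to use our job"
print(mean_word_length(complex_text) > mean_word_length(simple_text))  # True
```

Real studies typically control word frequency and length more carefully; the point here is only that the manipulation holds content constant while varying surface complexity.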
... Finally, participants reported various perceptions of the author (e.g. intelligence and trustworthiness) based on prior work (14,15,38), made judgments about the identity of who wrote the scientific summary (AI or human), and assessed the complexity of each text as a manipulation check. The order of these measures was randomized, and items within each block were randomized as well. ...
Article
Full-text available
This paper evaluated the effectiveness of using generative AI to simplify science communication and enhance the public’s understanding of science. By comparing lay summaries of journal articles from PNAS, yoked to those generated by AI, this work first assessed linguistic simplicity differences across such summaries and public perceptions in follow-up experiments. Specifically, Study 1a analyzed simplicity features of PNAS abstracts (scientific summaries) and significance statements (lay summaries), observing that lay summaries were indeed linguistically simpler, but effect size differences were small. Study 1b used a large language model, GPT-4, to create significance statements based on paper abstracts and this more than doubled the average effect size without fine-tuning. Study 2 experimentally demonstrated that simply written GPT summaries facilitated more favorable perceptions of scientists (they were perceived as more credible and trustworthy, but less intelligent) than more complexly written human PNAS summaries. Crucially, Study 3 experimentally demonstrated that participants comprehended scientific writing better after reading simple GPT summaries compared to complex PNAS summaries. In their own words, participants also summarized scientific papers in a more detailed and concrete manner after reading GPT summaries compared to PNAS summaries of the same article. AI has the potential to engage scientific communities and the public via a simple language heuristic, advocating for its integration into scientific dissemination for a more informed society.
... In fact, there is a close link between syntactic and cognitive complexity (Szmrecsanyi, 2004), and some authors even refer to "syntactic processing fluency" (Frazier, 1985). When statements within the same context vary in complexity, fluency accounts predict that individuals attribute the emerging variability in fluency to the validity of the presented judgments (Oppenheimer, 2006). Assuming that a positive ecological correlation of processing fluency and truth status is the default in many environments (Unkelbach, 2007), it follows that syntactically simple statements should receive higher truth ratings than complex statements. ...
... The fact that the complexity manipulation affected processing times but not truth judgments suggests that participants discounted fluency as a diagnostic cue. Instead of relying on differences in fluency for making truth judgments, participants may have attributed variability in processing fluency to the syntactic complexity of the presented statements (Oppenheimer, 2006). In fact, given that simple and complex statements differed substantially in length, it may have been relatively easy for participants to recognize the actual source of fluency differences. ...
... While asking participants for ratings of syntactic complexity comes with the possible drawback of demand effects (Corneille & Lush, 2023; Fiedler et al., 2021), it clearly increases the salience of complexity. If the discounting explanation is correct, increasing the salience of the actual source of fluency differences should again result in a null effect of the complexity manipulation, just as in Experiment 1 (Oppenheimer, 2006). Third, the design of Experiment 2 also allows us to use subjective complexity ratings as a predictor variable for truth judgments. ...
... It is still unclear, however, whether these differences translate to different effects on judgments about a speaker. In addition to a general link between the processing burden imposed by disfluency and its impact on evaluations [18,19], recent research has shown that the type and location of disfluencies play a role in their effect on perceived competence [12] and confidence [10]. These findings are consistent with the notion that more severe processing disruptions might more negatively affect judgments, but relatively little is known about how variations within a disfluency type (for example, the syntactic context of disfluent repetitions) or interactions between commonly co-occurring disfluencies (such as repetitions and FPs) impact evaluations. ...
... Another consideration is that listeners' theories about the cause of disfluencies may shape their judgments. Disfluency can trigger negative evaluations when listeners assume a disfluent speaker is not willing or able to communicate effectively [18] but it has been shown in non-speech contexts that providing an obvious explanation for disfluency can attenuate its effect [19], a phenomenon known as discounting [23]. If discounting effects apply to speech disfluencies as well, this could give speakers a concrete tool for potentially mitigating the effects of disfluency on how listeners perceive them. ...
... When participants heard that the speaker was anxious about giving a lecture, the effect of repetitions on competence was eliminated (and it is also worth noting that the "anxious" voices were not rated as sounding less competent overall). Presumably, in line with [19], this represents a discounting effect: listeners may have concluded that the repetitions were unrelated to the speaker's competence because they were offered a better explanation. One of the participants commented that admitting to anxiety was a "classic public speaking mistake" but our results suggest otherwise. ...
... Perceived difficulty (disfluency) is linked to an increase in risk perception, deliberation, and self-control (Alter, 2013), also reducing the perceived intelligence of an author (Oppenheimer, 2006). Disfluency deliberations also enhance detailed memory coding (Alter et al., 2007; Alter, 2013; Weissgerber & Reinhard, 2017), prevent confirmatory biases, reduce overconfidence in options (Aydin, 2016), and reduce the effect of heuristics, previous expectations, beliefs, and prejudices (Alter, 2013; Alter et al., 2007). ...
... [Translated from Portuguese:] Perceived difficulty (disfluency) is linked to increased risk perception, deliberation, and self-control (Alter, 2013), even reducing the perceived intelligence of an author in the eyes of the reader (Oppenheimer, 2006). ...
Article
Full-text available
Purpose: To investigate the effect of disfluency (perceived difficulty) and prior motivation to engage in and disseminate electronic word-of-mouth (eWOM) on headlines/posts online, as well as the mediating role of perceived truth. Design/methodology: This study involves three online experiments emulating “X” (formerly Twitter) messages and Instagram/Facebook posts. Disfluency was measured in Experiment 1 and manipulated in Experiments 2 and 3, while also measuring prior motivation to disseminate eWOM. Findings: Higher prior motivation increased fake and authentic news dissemination, but disfluency diminished this effect through its influence on perceived truth. Originality/value: These results demonstrate that people tend to disseminate authentic and fake news owing to a carryover effect, and this tendency is affected by prior eWOM motivation. Disfluency can, thus, not only help prevent fake news dissemination but also inhibit authentic (real) news dissemination. These effects are due to perceived truth, not attention or perceived relevance, and only affect people with higher eWOM motivation. Because the perceptual disfluency manipulations tested resemble what occurs daily (e.g., the “dark theme” on smartphones and Instagram’s use of font colors), we propose that similar procedures can decrease the mass propagation of widely disseminated fake news.
... Finally, participants reported various perceptions of the author (e.g., intelligence, trustworthiness) based on prior work (14,15,34), made judgments about the identity of who wrote the scientific summary (AI or human), and assessed the complexity of each text as a manipulation check. The order of these measures was randomized, and items within each block were randomized as well. ...
... Based on prior work (15,34), three questions asked participants to rate how clear ("How clear was the writing in the summary you just read?") and how complex ("How complex was the writing in the summary you just read?") the writing was, and how well they understood each scientific summary ("How much of this writing did you understand?"). Ratings for the first two questions were made on 7-point Likert-type scales from 1 = Not at all to 7 = Extremely. ...
Preprint
Full-text available
This paper evaluated the effectiveness of using generative AI to simplify science communication and enhance public trust in science. By comparing lay summaries of journal articles from PNAS, yoked to those generated by AI, this work assessed linguistic simplicity across such summaries and public perceptions. Study 1a analyzed simplicity features of PNAS abstracts (scientific summaries) and significance statements (lay summaries), observing that lay summaries were indeed linguistically simpler, but effect size differences were small. Study 1b used GPT-4 to create significance statements based on paper abstracts and this more than doubled the average effect size without fine-tuning. Finally, Study 2 experimentally demonstrated that simply written GPT summaries facilitated more favorable public perceptions of scientists (their credibility, trustworthiness) than more complexly written human PNAS summaries. AI has the potential to engage scientific communities and the public via a simple language heuristic, advocating for its integration into scientific dissemination for a more informed society.
... In short, the more fluently information is processed, the more favorable the judgments (see Alter & Oppenheimer, 2009, and Oppenheimer, 2008, for reviews). For instance, people judged statements to be true rather than false when the statements were fluently processed (Reber & Schwarz, 1999); consumer products were more likely to be chosen when the names of the products were fluently processed (Novemsky et al., 2007); the authors of essays that were easier to read were rated as more intelligent (Oppenheimer, 2006); and a person with an easy-to-pronounce name would be liked more than someone with a difficult-to-pronounce name (Laham et al., 2012). While most of these empirical studies evaluated neutral and positive stimuli, they have provided ample evidence that fluent processing has a positive effect on human judgments. ...
... Fluent processing triggers positive affect through the feeling of familiarity or the successful operation of the cognitive system. This process is inherently positive and helps with evaluative judgments (Schwarz & Clore, 1983, 2006). Current research on processing fluency mainly assumes the hedonic nature of fluency and has accumulated empirical evidence that is compatible with this prediction. ...
Article
Full-text available
The importance of processing fluency in evaluative judgments has been repeatedly demonstrated across many domains such as liking, beauty, and truth. However, a clear picture of the nature of processing fluency has yet to emerge. Fluent processing has been suggested to form evaluative judgments in a hedonic nature, in which existing judgmental tendencies always shift in a positive direction. Alternatively, fluency has been proposed to amplify evaluative judgments bidirectionally. However, uncertainty remains regarding the influence of processing fluency on pre-existing judgmental tendencies. Specifically, the extent to which the effect of stimuli belonging to specific categories varies within an individual remains unclear. This study assessed the influence of fluent processing on two specific categories (cats/spiders) using a visual search task. Fluency was manipulated by the set size of the stimuli and presentation duration. Fluency intensified pre-existing judgmental tendencies in two divergent directions: The initially favored stimuli were liked more, while the initially unfavored ones were liked less when the processing of stimuli was fluent. There was a significant correlation between favored and unfavored stimuli in terms of the magnitude of the effect, and such effect was influenced by visual attention, suggesting that processing fluency goes beyond a hedonic and unidimensional nature.
... With linguistic fluency, certain language patterns induce greater processing fluency than other similar language patterns despite being logically equivalent based on content. Linguistic fluency can manifest in many forms, including lexical fluency, where some words are simpler alternatives to more complex words (i.e., use vs. utilize) [46]. Linguistic fluency can be manipulated using a variety of methods [11,46]; in our study, we operationalize linguistic veracity as a manifestation of linguistic fluency, such that low linguistic veracity should promote linguistic fluency, whereas high linguistic veracity should minimize fluency and encourage cognitive processing of a post's content. ...
... This suggests that users were processing the explanations at a shallow level, relying on simple textual cues such as overall length to predict LLM accuracy. This result is consistent with studies in social psychology and communication research that suggest that longer answers or explanations may be perceived as more persuasive or credible, even when they do not contain more meaningful information 27,28. This length bias has also been found in domains such as peer reviews, where longer reviews are perceived as more persuasive and informative even if the information content remains the same 29. ...
Article
Full-text available
As artificial intelligence systems, particularly large language models (LLMs), become increasingly integrated into decision-making processes, the ability to trust their outputs is crucial. To earn human trust, LLMs must be well calibrated such that they can accurately assess and communicate the likelihood of their predictions being correct. Whereas recent work has focused on LLMs’ internal confidence, less is understood about how effectively they convey uncertainty to users. Here we explore the calibration gap, which refers to the difference between human confidence in LLM-generated answers and the models’ actual confidence, and the discrimination gap, which reflects how well humans and models can distinguish between correct and incorrect answers. Our experiments with multiple-choice and short-answer questions reveal that users tend to overestimate the accuracy of LLM responses when provided with default explanations. Moreover, longer explanations increased user confidence, even when the extra length did not improve answer accuracy. By adjusting LLM explanations to better reflect the models’ internal confidence, both the calibration gap and the discrimination gap narrowed, significantly improving user perception of LLM accuracy. These findings underscore the importance of accurate uncertainty communication and highlight the effect of explanation length in influencing user trust in artificial-intelligence-assisted decision-making environments.
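The two gaps this abstract describes can be made concrete with a toy sketch. The data values and the specific discrimination measure (mean confidence on correct answers minus mean confidence on incorrect ones) are illustrative assumptions, not the paper's exact metrics:

```python
# Toy illustration of a calibration gap and a discrimination gap.
# Each item: (model_confidence, human_confidence, answer_was_correct).
# All values are made up for illustration.
items = [
    (0.60, 0.90, True),
    (0.55, 0.85, False),
    (0.70, 0.95, True),
    (0.40, 0.80, False),
]

def mean(xs):
    return sum(xs) / len(xs)

model_conf = [m for m, _, _ in items]
human_conf = [h for _, h, _ in items]
flags = [ok for _, _, ok in items]

# Calibration gap: humans' confidence in the LLM's answers minus the
# model's own confidence. Positive = humans over-trust the model.
calibration_gap = mean(human_conf) - mean(model_conf)

def discrimination(confs, correct_flags):
    """How much higher confidence is on correct vs. incorrect answers."""
    right = [c for c, ok in zip(confs, correct_flags) if ok]
    wrong = [c for c, ok in zip(confs, correct_flags) if not ok]
    return mean(right) - mean(wrong)

# Discrimination gap: the model separates right from wrong answers
# better (or worse) than humans do.
discrimination_gap = discrimination(model_conf, flags) - discrimination(human_conf, flags)

print(round(calibration_gap, 3))     # positive: humans overestimate accuracy
print(round(discrimination_gap, 3))
```

With these toy numbers, humans are systematically more confident than the model (a positive calibration gap), mirroring the abstract's finding that users overestimate LLM accuracy.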
... Easy-to-process names, or fluent names, have been associated with increased ownership, better liquidity, and greater business values. Short, uncomplicated words are processed more easily and induce a positive emotional state, as demonstrated by Oppenheimer (2006). In a financial survey by Alter & Oppenheimer (2006), participants predicted that fictitious companies with more fluent names would yield higher future returns. ...
Article
Full-text available
The study aims to investigate the presence of affect heuristics in investment decisions and analyze the influence of company and financial tool names on investment decisions. The framework of the Affect Heuristic Model was adapted to measure perceived risk and perceived benefit. Besides the impact of fluency, association and familiar names were tested to discover the level of perceived risk and perceived benefit during the investment decision. The research was conducted among 150 investors who invest in the Nepal Stock Exchange, through an online form. The study indicates that Nepalese investors tend to rely on heuristic shortcuts, such as fluency, familiarity, and association, when assessing investment opportunities. They are notably influenced by affect ‘name’ heuristics, shaping their perceptions of benefits. Moreover, their perception of risk and benefit is more influenced by trends and superficial factors like glitz than by past performance and corporate character. Local companies and well-known brands are favored due to the familiarity heuristic.
... Possible reasons for these missing dark patterns are as follows: (1) Ambiguity and lack of precise standards: Some dark patterns are vaguely defined, which often hinders researchers from clearly identifying them in screenshots of mobile applications or webpages. For example, although "complex language" is described as "decision-related information that may be intentionally or unintentionally made difficult to understand through the use of complex language" [10,65], there is no concrete benchmark defining what constitutes complex language. This lack of clarity may mislead researchers attempting to identify such patterns. ...
Preprint
Full-text available
As digital interfaces become increasingly prevalent, certain manipulative design elements have emerged that may harm user interests, raising associated ethical concerns and bringing dark patterns into focus as a significant research topic. Manipulative design strategies are widely used in user interfaces (UI) primarily to guide user behavior in ways that favor service providers, often at the cost of the users themselves. This paper addresses three main challenges in dark pattern research: inconsistencies and incompleteness in classification, limitations of detection tools, and insufficient comprehensiveness in existing datasets. In this study, we propose a comprehensive analytical framework--the Dark Pattern Analysis Framework (DPAF). Using this framework, we developed a taxonomy comprising 68 types of dark patterns, each annotated in detail to illustrate its impact on users, potential scenarios, and real-world examples, validated through industry surveys. Furthermore, we evaluated the effectiveness of current detection tools and assessed the completeness of available datasets. Our findings indicate that, among the 8 detection tools studied, only 31 types of dark patterns are identifiable, resulting in a coverage rate of just 45.5%. Similarly, our analysis of four datasets, encompassing 5,561 instances, reveals coverage of only 30 types of dark patterns, with an overall coverage rate of 44%. Based on the available datasets, we standardized classifications and merged datasets to form a unified image dataset and a unified text dataset. These results highlight significant room for improvement in the field of dark pattern detection. This research not only deepens our understanding of dark pattern classification and detection tools but also offers valuable insights for future research and practice in this domain.
... studies revealed significant associations between overt subject pronouns and non-narrative clauses, on the one hand, and between null-subject pronouns and narrative clauses, on the other (e.g. Owens and Elgibali, 2013; Omari, 2011; Al-Shawashreh, 2016). For example, Owens and Elgibali (2013) found that narrative clauses favor subject continuity, resulting in more subject pronoun omission. ...
Article
Full-text available
This forensic linguistic study aims to explore some style markers of three prominent Jordanian columnists. The study examined five semantic, three syntactic, and five structural style features in 75 genre-controlled Arabic newspaper articles. In this investigation, we followed stylistic (qualitative) and stylometric (quantitative) approaches to determine the linguistic style of the three targeted authors and their writing fingerprints. Findings revealed many distinctive style markers among the three authors, such as adopting unique bags of words and employing different word order choices. Moreover, the present study demonstrated that different linguistic styles could be associated with various personality traits, varied demographic backgrounds, and other emotional and psychological states of the authors. This work contributes to the general research on authorship attribution and profiling. It identified authors’ numerical profiles of style markers to assist experts in attributing unidentified texts to their most likely author. The research findings contribute to the field and hold promise for the future, potentially leading to the development of forensic software packages for authorship identification and solving authorship issues such as plagiarism detection.
... Plavén-Sigray and colleagues (2017) found that niche, scientific jargon damages the readability of abstracts. Authors in the organizational sciences often use overly complicated wording and grammar to amplify the impression of intelligence at the cost of readability and comprehension (Oppenheimer, 2006). As a result, practitioners and laypeople often struggle to understand technical jargon and academic language in abstracts. ...
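Readability claims like the one above are usually backed by formulas such as Flesch Reading Ease. A minimal sketch follows; the vowel-group syllable counter is a rough approximation of the real syllable count, and the example sentences are invented:

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/words). Higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

simple = "The cat sat on the mat. It was warm."
jargon = "Methodological operationalization necessitates considerable interdisciplinary familiarization."
print(flesch_reading_ease(simple) > flesch_reading_ease(jargon))  # True
```

Jargon-heavy abstracts score low on such formulas mainly because multisyllabic technical terms dominate the syllables-per-word term.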
... The reasons are twofold. First, for users, responses with verbosity compensation lead to confusion and inefficiency (Fowler, 1927; Oppenheimer, 2006). Second, for servers, verbosity leads to unnecessarily higher costs and higher latency. ...
Preprint
Full-text available
When unsure about an answer, humans often respond with more words than necessary, hoping that part of the response will be correct. We observe a similar behavior in large language models (LLMs), which we term "Verbosity Compensation" (VC). VC is harmful because it confuses the user's understanding, leading to low efficiency, and affects LLM services by increasing the latency and cost of generating useless tokens. In this paper, we present the first work that defines and analyzes Verbosity Compensation, explores its causes, and proposes a simple mitigating approach. We define Verbosity Compensation as the behavior of generating responses that can be compressed without information loss when prompted to write concisely. Our experiments, conducted on five datasets of knowledge and reasoning-based QA tasks with 14 newly developed LLMs, reveal three conclusions. 1) We reveal a pervasive presence of verbosity compensation across all models and all datasets. Notably, GPT-4 exhibits a VC frequency of 50.40%. 2) We reveal the large performance gap between verbose and concise responses, with a notable difference of 27.61% on the Qasper dataset. We also demonstrate that this difference does not naturally diminish as LLM capability increases. Both 1) and 2) highlight the urgent need to mitigate the frequency of VC behavior and disentangle verbosity from veracity. We propose a simple yet effective cascade algorithm that replaces the verbose responses with the other model-generated responses. The results show that our approach effectively alleviates the VC of the Mistral model from 63.81% to 16.16% on the Qasper dataset. 3) We also find that verbose responses exhibit higher uncertainty across all five datasets, suggesting a strong connection between verbosity and model uncertainty. Our dataset and code are available at https://github.com/psunlpgroup/VerbosityLLM.
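The cascade idea in this abstract can be sketched loosely. Here zlib compressibility stands in as a crude proxy for the paper's prompt-based definition of verbosity (compressible without information loss), and the 0.3 threshold is an arbitrary assumption:

```python
import zlib

def is_verbose(response: str, threshold: float = 0.3) -> bool:
    """Crude verbosity proxy: highly compressible text is treated as
    redundant. (The paper defines VC via prompted compression without
    information loss; zlib is only a stand-in for illustration.)"""
    raw = response.encode("utf-8")
    ratio = len(zlib.compress(raw)) / len(raw)
    return ratio < threshold

def cascade(responses):
    """Return the first response not flagged as verbose, falling back
    to the last one if every candidate is flagged."""
    for r in responses:
        if not is_verbose(r):
            return r
    return responses[-1]

# A padded, repetitive answer vs. a concise one.
padded = "The answer, considering all relevant factors, is likely 42. " * 40
concise = "The answer is 42."
print(is_verbose(padded), is_verbose(concise))  # True False
print(cascade([padded, concise]) == concise)    # True
```

In the paper the cascade falls back to a different model's response rather than a pre-supplied alternative; this sketch only shows the selection logic.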
... Specifically, the way in which information is presented, such as the use of infographics (Riggs et al., 2022) or the narrative form (Bullock et al., 2021), can positively impact processing fluency. Also, the language used in the conveyance of information, such as the use of semantically simpler words (Oppenheimer, 2006), the presence of more familiar or commonly used words, or syntactically simple sentence structures (Tolochko et al., 2019), can all make the processing of information feel more fluent. Another factor that can impact fluency is message repetition. ...
Article
Full-text available
This experiment (N = 1,019) examined how a state of processing fluency, induced through either an easy or difficult task (reading a simple vs. complex message or recalling few vs. many examples) impacted participants’ ability to subsequently detect misinformation. The results revealed that, as intended, easier tasks led to higher reports of processing fluency. In turn, increased processing fluency was positively associated with internal efficacy. Finally, internal efficacy was positively related to misinformation detection using a signal detection task. This work suggests that feelings of ease while processing information can promote confidence and a more discerning style of information processing. Given the proliferation of misinformation online, an understanding of how metacognitions – like processing fluency – can disrupt the tacit acceptance of information carries important democratic and normative implications.
... It is a personal preference for style and precision, but it is also rooted in the discussions about positionality, privilege, and the professionalisation of language, all of which come down to ethical questions that challenge us all (see, for example, Silverman 2003; Oppenheimer 2006). I like to think that research is about knowing more of the world, and that the results are for others to learn from; it should be made as effortless and pleasurable as possible. ...
Thesis
Full-text available
This thesis delves into the interactions of residents living within an incomplete and fractured housing project in Catania, Sicily. It mirrors their everyday experiences against the perception of the residential area of Librino as an urban ruin and a symbol of poverty. Through the lens of 'toward an anthropology of the good' (Robbins 2013), this study seeks to uncover the regenerative and socially constructed practices and engagements of everyday life within Librino, using participant observation as the research method. The research question is 'How do residents negotiate and situate themselves in the fluid area of segregated space?'. The focus is on their urban condition, intersectional marginalisation, and segregation, and foremost on the acts of engagement that have gradually transformed the community and their environment in the absence of effective policies and infrastructure. The research emphasizes the role and influence of academic knowledge production, as well as the importance of recognizing the responsibility of researchers. Introducing the uplifting and hopeful spirit of many of the residents through ethnographic work challenges stereotypes and aims to portray the people of Librino not as exotic or criminal 'others', but as individuals with agency and belonging. Reckless urbanisation and the alarming rise in the number of people facing urban poverty are urgent, globally shared problems; insight gained from observing life in Librino can therefore provide additional frameworks for future research. The research was conducted during a three-month stay in Librino, immersing in the daily activities of residents and activists from the area. This involved living with local families and taking part in events and meetings hosted in Librino, as well as solidarity actions and communal projects such as a sewing club, gardening, and, surprisingly, rugby training. These are recorded as fieldnotes and photographs.
Theoretical postulations come mainly from Henri Lefebvre, David Harvey, Tom Slater, and Setha Low, who appear on these pages as major contributors to anthropological theories about space and place, urban segregation, and inequality. Lefebvre conducted a wide scope of urban studies, while Low studies the social construction of space. Slater is an avid advocate of research-based policies for sustainable urbanisation, and Harvey has long developed a radical critique of neoliberal urban policy, showcasing more democratic approaches. Theory about belonging, agency, and communities comes from the canon of Janet Carsten and Pierre Bourdieu, and from ethnographic studies of communities around the Mediterranean. Librino-specific knowledge comes from policy publications and from urban scholars such as Laura Saija, who with her colleagues has conducted several studies in the peripheries of Catania. Other notable contributing work includes Andrea Muehlebach's theories about neoliberal welfare in Italy, Jane and Peter Schneider's extensive writing about Sicilian life, and Phaedra Douzina-Bakalaki's ideas about social relations as engagement networks. Three main arguments emerge: Librino possesses unique characteristics as a place, social life there is nuanced and observed to be vibrant in many arenas, and the people and place of Librino have been abandoned by governing bodies.
... The literature on this is very rich and many instances can be given. For example, findings show that texts written in easier-to-read fonts seem more familiar (Reber & Zupanek, 2002), that text that is easier to understand is judged to have been written by a more intelligent writer (Oppenheimer, 2006), that fluent (symmetric) stimuli are associated with positive valence and higher arousal than disfluent (asymmetric) stimuli (Bertamini, 2013), and that easily retrieved stimuli are preferred to hard-to-retrieve stimuli (Bornstein & D'Agostino, 1992). Together, this literature suggests that people react positively to stimuli that can be processed more fluently. ...
Research
A much shorter version of this research was published in a scientific conference proceedings volume. I can therefore no longer share the full text openly. Please send a personal request for the paper.
... Finally, we also believe that AI provides an opportunity for better and more approachable public understanding of science. A host of evidence from communication research and psychology suggests people perceive the writers of complex texts to be less intelligent, more difficult to understand, and less warm and moral than the writers of simple texts (Oppenheimer, 2006, 2008; Markowitz et al., 2021). Given how complex most science papers are for the average person, it is therefore in scientists' best interest, and perhaps a scholarly imperative, to communicate research in simple terms. ...
Article
Full-text available
The social sciences have long relied on comparative work as the foundation upon which we understand the complexities of human behavior and society. However, as we go deeper into the era of artificial intelligence (AI), it becomes imperative to move beyond mere comparison (e.g., how AI compares to humans across a range of tasks) to establish a visionary agenda for AI as collaborative partners in the pursuit of knowledge and scientific inquiry. This paper articulates an agenda that envisions AI models as the preeminent scientific collaborators. We advocate for the profound notion that our thinking should evolve to anticipate, and include, AI models as one of the most impactful tools in the social scientist's toolbox, offering assistance and collaboration with low-level tasks (e.g., analysis and interpretation of research findings) and high-level tasks (e.g., the discovery of new academic frontiers) alike. This transformation requires us to imagine AI's possible/probable roles in the research process. We defend the inevitable benefits of AI as knowledge generators and research collaborators—agents who facilitate the scientific journey, aiming to make complex human issues more tractable and comprehensible. We foresee AI tools acting as co-researchers, contributing to research proposals and driving breakthrough discoveries. Ethical considerations are paramount, encompassing democratizing access to AI tools, fostering interdisciplinary collaborations, ensuring transparency, fairness, and privacy in AI-driven research, and addressing limitations and biases in large language models. Embracing AI as collaborative partners will revolutionize the landscape of social sciences, enabling innovative, inclusive, and ethically sound research practices.
... To end with a third, slightly different example, let me look at Oppenheimer (2006), who argues with experimental evidence that texts that are easier to process are deemed to be written by more intelligent authors. Ease of processing is determined by looking at the complexity of the words used in a text. ...
Article
Full-text available
Adam Carter (2022) recently proposed that a successful analysis of knowledge needs to include an autonomy condition. Autonomy, for Carter, requires a lack of a compulsion history. A compulsion history bypasses one’s cognitive competences and results in a belief that is difficult to shed. I argue that Carter’s autonomy condition does not cover partially autonomous beliefs properly. Some belief-forming processes are partially bypassing one’s competences, but not bypassing them completely. I provide a case for partially autonomous belief based on processing fluency effects and argue that partially autonomous beliefs only amount to knowledge in some cases. I finally suggest how to adjust the autonomy condition to capture partially autonomous belief properly.
... Yes, we try, because using words precisely is not easy and we can always improve. So we used simple, clear, and only the necessary words to make the design of this guide efficient (Wells, 2004; Oppenheimer, 2005). ...
Book
Full-text available
The number of domestic wastewater treatment plants (PTAR, by their Spanish acronym) in Antioquia grew 74% over the last nine years. This growth showed us that we needed a guide to prevent design, installation, and operation errors in future PTAR, so we designed a guide to meet this new need. In designing it we had four objectives: first, to select 89 PTAR from our experience operating these plants; second, to classify these PTAR by treatment level and technology; third, to describe several conceptual design errors in the PTAR; and fourth, to build the guide around the technologies most used in Antioquia. Here we describe how the PTAR combine several levels and technologies. The levels are pretreatment, primary treatment, secondary treatment, and sludge treatment. The technologies are primary settlers (with inclined plates), UASB reactors, PBR reactors, anaerobic sludge reactors, and sand beds. These PTAR serve populations under 30,000 and show several design errors and operational problems. The design errors include inappropriate siting of the plants, low hydraulic retention times in the PBR reactors, and undersized sand beds. The operational problems include the uneven distribution of inflow and outflow in the UASB and PBR reactors. We suggest using this guide in two ways: first, to prevent these design errors in future PTAR; second, to improve the training of the operators in charge of these plants. So if you have to design a PTAR, this is your guide!
... Finally, we also believe that AI provides an opportunity for better and more approachable public understanding of science. A host of evidence from communication research and psychology suggests people perceive the writers of complex texts to be less intelligent, more difficult to understand, and less warm and moral than the writers of simple texts (Oppenheimer, 2006, 2008; Markowitz et al., 2021). Given how complex most science papers are for the average person, it is therefore in scientists' best interest, and perhaps a scholarly imperative, to communicate research in simple terms. ...
Preprint
Full-text available
The social sciences have long relied on comparative work as the foundation upon which we understand the complexities of human behavior and society. However, as we delve deeper into the era of artificial intelligence (AI), it becomes imperative to move beyond mere comparison (e.g., how AI compares to humans across a range of tasks) to establish a visionary agenda for AI as collaborative partners in the pursuit of knowledge and scientific inquiry. This paper articulates an agenda that envisions AI models as the preeminent scientific collaborators. We advocate for the profound notion that our thinking should evolve to anticipate, and include, AI models as one of the most impactful tools in the social scientist's toolbox, offering assistance and collaboration with low-level tasks (e.g., analysis and interpretation of research findings) and high-level tasks (e.g., the discovery of new academic frontiers) alike. This transformation requires us to imagine AI's possible/probable roles in the research process. We discuss the prospect of AI as active knowledge generators within the scientific community — agents who participate in the scientific journey, aiming to make complex human issues more tractable and comprehensible. We foresee AI tools acting as co-researchers, contributing to research proposals and driving breakthrough discoveries. Ethical considerations are paramount, encompassing democratizing access to AI tools, fostering interdisciplinary collaborations, ensuring transparency, fairness, and privacy in AI-driven research, and addressing limitations and biases in large language models. Embracing AI as collaborative partners will revolutionize the landscape of social sciences, enabling innovative, inclusive, and ethically sound research practices.
... These inferior intellectual abilities are also often regarded as the reason for grammatical errors, but such errors can also stem from writing in a language the author does not speak natively (Rana et al., 2019; Vignovic & Thompson, 2010). The incorrect use of complex words and syntax might mark the user as attempting to impress others with his/her broad vocabulary and intelligence (Oppenheimer, 2006; Oxford, n.d.). ...
Article
Full-text available
Language errors are prevalent on social media. We explored the effect of these errors on perceptions of the writer and the persuasiveness of the content that was posted. In an online experiment, participants (N = 325) were randomly assigned to read one of six identical texts designed as screenshots of Facebook posts that differed only in the types of mistakes they contained. The participants were then asked to report their attributions for the mistakes, perceptions of the writer, and attitudes related to the post. Language errors led to negative perceptions of the writer. In addition, these perceptions depended on the types of errors made and the reasons attributed to them. For example, typographical errors indirectly led to perceptions of the writer as rash, through attributing the errors to the writer’s hastiness. Spelling errors, on the other hand, indirectly led to perceptions of the writer as less intelligent, through attributing the errors to the writer’s inferior intellectual abilities. Moreover, language errors indirectly led to less acceptance of the writer’s claims. The findings are discussed in the context of attribution theory and the heuristic systematic model.
... 9 These aspects largely bear on fluent processing. In general, when it comes to fluency, simple is better: slower speech, simpler words, less complexity (Oppenheimer, 2006). For example, complex syntactic structures are harder for people to comprehend (less fluent) and less persuasive than simple language (Lowrey, 1998). ...
Article
Full-text available
General Audience Summary: When people encounter new information, it can be made easier to understand by an accompanying cocktail of words, gestures, and behaviors. The problem is this same cocktail—called semantic context—can also create the illusion of understanding. Take, for example, foreign films and television series. Subtitles help viewers understand what characters are saying and what is happening. What is interesting is that viewers attend to subtitles effortlessly and may even lose awareness of the subtitles despite still relying on the subtitled information. But could subtitles create a semantic context that encourages viewers to be more confident they had learned the foreign language even when they had not? To answer this question, we conducted five experiments in which we showed participants a video clip of people speaking Danish—either with or without subtitles—and asked everyone to rate their ability to understand Danish in new situations. Then we asked people to translate Danish audio clips to see if they had learned any Danish. We found those who saw the subtitled video were more confident in their ability to understand Danish in new situations compared to those who saw the unsubtitled clips, even though they were not able to translate any more of the Danish audio clips. These findings suggest that relative to situations of lesser semantic context, greater semantic context can create illusions of one’s ability to do something implausible.
... We find that authors' second-person pronoun usage is associated with decreased word complexity in reviewer comments (see Column (3) in Table 4). This result suggests that reviewers, when addressed using second-person pronouns, favored plainer, more readable language over complex and formal written language, a choice often made to facilitate a conversation [29-31,40,41]. ...
Article
Full-text available
Pronoun usage’s psychological underpinning and behavioral consequence have fascinated researchers, with much research attention paid to second-person pronouns like “you,” “your,” and “yours.” While these pronouns’ effects are understood in many contexts, their role in bilateral, dynamic conversations (especially those outside of close relationships) remains less explored. This research attempts to bridge this gap by examining 25,679 instances of peer review correspondence with Nature Communications using the difference-in-differences method. Here we show that authors addressing reviewers using second-person pronouns receive fewer questions, shorter responses, and more positive feedback. Further analyses suggest that this shift in the review process occurs because “you” (vs. non-“you”) usage creates a more personal and engaging conversation. Employing the peer review process of scientific papers as a backdrop, this research reveals the behavioral and psychological effects that second-person pronouns have in interactive written communications.
... In addition to jargon, however, many other linguistic and nonlinguistic factors can influence message processing fluency, including syntactic complexity (Shulman & Sweitzer, 2018), font style and size (Oppenheimer, 2006), source accent (Dragojevic & Goatley-Soan, 2022), and background noise (Munro, 1998), among others (for a review, see Alter & Oppenheimer, 2009). In practice, science communication messages may contain multiple such factors simultaneously. ...
Article
Full-text available
We examined whether source accent moderates jargon's effects on listeners’ processing fluency and receptivity to science communication. Americans heard a speaker describing science using either jargon or non-jargon and speaking with either a native (standard American) or foreign (Hispanic) accent. Compared to non-jargon, jargon disrupted listeners’ fluency for both speakers, but especially the foreign-accented speaker; jargon also reduced information-seeking intentions and perceived source and message credibility, but only for the foreign-accented speaker. Fluency mediated the effects of jargon on outcomes.
... We manipulated color complexity by altering the objects and colors in the image according to color-complexity scores (high = 8.26, low = 6.77). To alter text complexity, we manipulated the linguistic characteristics of the caption words by substituting synonyms that differed in their lexical complexity, but not their semantics (Oppenheimer 2006). For instance, the less complex words "clean" and "plant" were paired with the more complex synonyms "hygienic" and "philodendron." ...
... Under the status beliefs that the higher-income countries are often more reliable, the lower-income countries may feel compelled to use complex language to shield their speeches from refutation and to appear more learned (Pennebaker & King, 1999), as such language is more difficult to understand. Despite counter-evidence that big words impede processing fluency (Oppenheimer, 2006), the notion that big words demonstrate intelligence is still pervasive. Another explanation may be that the promotion of plain English (Stoll et al., 2022) has yet to reach lower-income countries. ...
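The synonym-substitution manipulation described above (swapping a word for a longer, lower-frequency synonym while keeping the meaning) can be sketched in a few lines. The word pairs and function below are illustrative assumptions, not the study's actual materials:

```python
# Illustrative sketch of a lexical-complexity manipulation:
# raise or lower word complexity without changing semantics.
# The synonym pairs are hypothetical examples, not the study's stimuli.

SIMPLE_TO_COMPLEX = {
    "clean": "hygienic",
    "plant": "philodendron",
    "use": "utilize",
    "sad": "forlorn",
}
COMPLEX_TO_SIMPLE = {v: k for k, v in SIMPLE_TO_COMPLEX.items()}

def adjust_complexity(text: str, direction: str = "up") -> str:
    """Swap words for more complex (direction="up") or simpler
    (direction="down") synonyms. Matching is naive (lowercase,
    whole words, trailing punctuation preserved) for illustration."""
    mapping = SIMPLE_TO_COMPLEX if direction == "up" else COMPLEX_TO_SIMPLE
    out = []
    for w in text.split():
        core = w.rstrip(".,;:!?")   # keep trailing punctuation intact
        trail = w[len(core):]
        out.append(mapping.get(core.lower(), core) + trail)
    return " ".join(out)
```

For example, `adjust_complexity("keep the plant clean.")` yields "keep the philodendron hygienic." while `direction="down"` reverses the swap; a real stimulus set would also control word frequency and length systematically.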
Article
Full-text available
Can a set of Linguistic Inquiry and Word Count (LIWC) indicators differentiate between the UN speeches from higher-income and lower-income countries? Based on 34 years of 6,095 speeches (14,300,539 words) and dynamic income grouping, an elastic net analysis selects 18 such categories. The hypothetical explanations are discussed with four theoretical perspectives: gender-neutral language, politeness, scarcity mindset, and expectation states theory. The findings on cross-group LIWC variation provide UN-setting linguistic evidence for the theories and insights into the language of global income inequality between countries.
Article
Synopsis
The research problem: We investigated the relationship between product market competition and the textual characteristics of corporate social responsibility (CSR) disclosures. Specifically, we investigated three textual characteristics: tone of optimism, tone of tangibility (matter-of-factness), and readability.
Motivation or theoretical reasoning: On the one hand, there are three ways in which CSR disclosure can enhance corporate success in competitive product market situations: (1) More readable disclosures with more optimistic and matter-of-fact tones help firms attract new customers while enhancing customer loyalty and brand value. (2) Increased market competition is expected to encourage firms to provide more-readable CSR disclosures with optimistic and matter-of-fact tones to enhance their access to external financing at lower costs. (3) CSR disclosure may strengthen a firm's connections with business stakeholders (e.g., employees and suppliers); these connections are conducive to corporate success in competitive product market situations. On the other hand, it is well established that firms find CSR disclosure to be costly.
The test hypotheses: A significant relationship exists between product market competition and the three textual characteristics of CSR disclosures, namely, tone of optimism, tone of tangibility (matter-of-factness), and readability.
Target population: Our sample comprised 2,018 firm-year observations (2002–2020) of listed firms in Australia.
Findings: Our study found that firms facing an increase in product market competition tend to publish less-readable CSR disclosures with less use of optimistic and matter-of-fact tones of language, and vice versa. In practical terms, this indicates that firms fail to leverage CSR disclosure in managing their product market competition, even though CSR disclosure is recognized as an effective marketing and brand strategy. Our study therefore examined whether the CSR committee, as a key sustainability governance mechanism for CSR disclosure, could help mitigate this missed opportunity. We found that the negative relationship between the two variables is attenuated by the presence of a CSR committee and by the CSR committee's effectiveness. Our study should be of interest to firms, users of CSR disclosures, and regulators.
Article
Previous fluency research has demonstrated that when messages are heard in degraded audio quality, the speaker and the content they are communicating are judged more negatively than when heard in high quality. Using a virtual court paradigm, we investigated the efficacy of two different instructions to reduce the technology‐based bias—highlighting (1) the source responsible for audio quality (Experiment 1) and (2) variations in audio quality (Experiment 2). Results converged in showing that when instructions were provided prior to listening to recordings, people continued to evaluate speakers presented in low quality more negatively than those in high quality. However, results from Experiment 2 suggested that instructions provided after recordings may be effective and warrant further investigation. Given the digital divide and disproportionate impact of digital disruptions, these findings raise concerns about equity in high stakes environments such as remote justice.
Article
Considerable research suggests making information simpler is better. Simplification improves the efficiency of information extraction and lowers psychological frictions, leading to its popularity with policymakers and practitioners worldwide. However, it remains unclear when and how simplification can be utilized most effectively, or if there are contexts where simplification may produce unintended maleficent effects. Using two large-scale field experiments (N = 126,673), we test whether simplifying account statements helps encourage retirement savings in Mexico. We partner with two retirement firms, one ranked high in rate of returns and the other ranked lower. We find that simplifying retirement account statements improves contribution rates for consumers in the high-ranking firm but reduces contribution rates for consumers in the low-ranking firm. Five follow-up experiments provide evidence consistent with a fluency amplification account. Simplifying information improves processing fluency making it easier to accurately recall firm rank relative to the control, which amplifies behavior bidirectionally: High-ranking (low-ranking) firm consumers more accurately recall their firm’s rank, subsequently increasing (decreasing) contributions. However, if simplification is harnessed in ways that improve processing fluency and lower perceived switching costs, then simplification can improve retirement savings for everyone either by boosting contributions or encouraging people to switch to higher performing alternatives.
Article
When we use language to communicate, we must choose what to say, what not to say, and how to say it. That is, we must decide how to frame the message. These linguistic choices matter: Framing a discussion one way or another can influence how people think, feel, and act in many important domains, including politics, health, business, journalism, law, and even conversations with loved ones. The ubiquity of framing effects raises several important questions relevant to the public interest: What makes certain messages so potent and others so ineffectual? Do framing effects pose a threat to our autonomy, or are they a rational response to variation in linguistic content? Can we learn to use language more effectively to promote policy reforms or other causes we believe in, or is this an overly idealistic goal? In this article, we address these questions by providing an integrative review of the psychology of framing. We begin with a brief history of the concept of framing and a survey of common framing effects. We then outline the cognitive, social-pragmatic, and emotional mechanisms underlying such effects. This discussion centers on the view that framing is a natural—and unavoidable—feature of human communication. From this perspective, framing effects reflect a sensible response to messages that communicate different information. In the second half of the article, we provide a taxonomy of linguistic framing techniques, describing various ways that the structure or content of a message can be altered to shape people’s mental models of what is being described. Some framing manipulations are subtle, involving a slight shift in grammar or wording. Others are more overt, involving wholesale changes to a message. Finally, we consider factors that moderate the impact of framing, gaps in the current empirical literature, and opportunities for future research. 
We conclude by offering general recommendations for effective framing and reflecting on the place of framing in society. Linguistic framing is powerful, but its effects are not inevitable—we can always reframe an issue to ourselves or other people.
Article
Prior research found a word complexity effect: Authors who use complex words are less favorably received when writing academic essays, business letters, and other relatively formal communications. The present study tested if word choice affects evaluations of messages between friends (Experiments 1-2) and spoken messages (Experiment 2). Three widespread dimensions of social judgments were studied – namely, persuasiveness, competence, and sincerity. Participants read/heard messages that varied (between-participants) by ordinary versus low-frequency words ( sad vs. forlorn). Messages containing low-frequency words (mostly) received lower evaluations. Most importantly, word choice effects in messages between friends were consistently found – for both written and spoken language. Feedback analysis (Experiment 2) revealed that the overuse of “big vocabulary” conflicts with conscious social beliefs regarding ways to communicate, showing that social judgments spring from a combination of conscious social beliefs and the relatively unconscious influence of fluency.
Article
We use a large-scale data set of thousands of field experiments conducted on Upworthy.com, an online media platform, to investigate the cognitive, motivational, affective, and grammatical factors implementable in messages that increase engagement with online content.
Article
Face transplantation is a highly sensationalized procedure in the media. The purpose of this study is to assess the content and readability of online materials that prospective patients/public encounter regarding face transplantation. A search for face transplantation was performed on Google. Sites were categorized under 3 groups: established face transplant programs, informational third-party sources (eg, Wikipedia), and news article/tabloid sites. Each site was assessed for readability using 6 different readability metrics, while quality was assessed utilizing JAMA benchmark criteria and DISCERN instrument. One-way ANOVA with post hoc Tukey’s multiple comparisons test was used for analysis. News sources were significantly easier to read than face transplant program sites (10.4 grade reading level vs. 12.4). For the JAMA benchmark, face transplant programs demonstrated the lowest average score relative to third-party sites, and news sources (2.05 vs. 2.91 vs. 3.67, respectively; P <0.001), but had significantly greater DISCERN scores than news sources (53.50 vs. 45.83, P =0.019). News sources were significantly more accessible, readable, and offered greater transparency of authorship compared with reputable sources, despite their lack of expertise on face transplantation. Face transplant programs should update their websites to ensure readability and accessibility of the information provided to the public.
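Readability metrics like those applied above reduce to simple text statistics. As a minimal sketch (the study used six metrics; the exact formulas and syllable counters in published tools differ), the Flesch-Kincaid grade level combines average sentence length with average syllables per word, and the syllable counter here is a rough vowel-group heuristic:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: one syllable per run of adjacent vowels, minimum 1.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

On this heuristic, a short sentence of monosyllables scores well below a long, polysyllabic one, matching the grade-level gap (10.4 vs. 12.4) reported above in spirit, though production tools use more careful syllable counting.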
Chapter
Experimental methods from psycholinguistics allow experimental philosophers to study important automatic inferences, with a view to explaining and assessing philosophically relevant intuitions and arguments. Philosophical thought is shaped by verbal reasoning in natural language. Such reasoning is driven by automatic comprehension inferences. Such inferences shape, e.g., intuitions about verbally described cases, in philosophical thought experiments; more generally, they shape moves from premises to conclusions in philosophical arguments. These inferences can be examined with questionnaire-based and eye-tracking methods from psycholinguistics. We explain how these methods can be adapted for use in experimental philosophy. We demonstrate their application by presenting a new eye-tracking study that helps assess the influential philosophical "argument from illusion." The study examines whether stereotypical inferences from polysemous words (viz., appearance verbs) are automatically triggered even when prefaced by contexts that defeat the inferences. We use this worked example to explain the key conceptual steps involved in designing behavioural experiments, step by step. Going beyond the worked example, we also explain methods that require no laboratory facilities.
Article
In the dynamic landscape of workplace communication, business jargon plays a crucial yet potentially divisive role. This experiential learning exercise introduces the concept of jargon literacy, which we define as the ability to recognize, understand, and use specialized terminology within a particular professional context, enabling participants to engage with the complexities of jargon through simulated “manager” and “employee” roles. Aimed at bridging communication gaps and clarifying misunderstandings, this exercise is suitable for both undergraduate and graduate curricula and adaptable for in-person and online formats. Emphasizing clear, inclusive dialogue, the exercise proposes innovative solutions to jargon-related obstacles, fostering effective communication across diverse organizational contexts. Designed to actively engage students, it offers practical insights into overcoming jargon-induced barriers and enhancing essential communication skills for today’s varied workplace settings.
Article
As digital interfaces become increasingly prevalent, a series of ethical issues have surfaced, with dark patterns emerging as a key research focus. These manipulative design strategies are widely employed in User Interfaces (UI) with the primary aim of steering user behavior in favor of service providers, often at the expense of the users themselves. This paper aims to address three main challenges in the study of dark patterns: inconsistencies and incompleteness in classification, limitations of detection tools, and inadequacies in data comprehensiveness. In this paper, we introduce a comprehensive framework, called the Dark Pattern Analysis Framework (DPAF). Using this framework, we construct a comprehensive taxonomy of dark patterns, encompassing 64 types, each labeled with its impact on users and the likely scenarios in which it appears, validated through an industry survey. When assessing the capabilities of the detection tools and the completeness of the dataset, we find that of all dark patterns, the five detection tools can only identify 32, yielding a coverage rate of merely 50%. Although the four existing datasets collectively contain 5,566 instances, they cover only 32 of all types of dark patterns, also resulting in a total coverage rate of 50%. The results discussed above suggest that there is still significant room for advancement in the field of dark pattern detection. Through this research, we not only deepen our understanding of dark pattern classification and detection tools, but also offer valuable insights for future research and practice in this field.
Article
In July 2023, ISO technical committee ISO/TC 37 of the International Organization for Standardization (ISO) capped years of collaborative effort by publishing the plain-language standard ISO 24495-1. This standard, the product of broad consensus, is now available to anyone who wants to promote clarity in communications. Developed by plain-language specialists (linguists, technical writers, translators, content creators, and designers from various countries), this authoritative reference will guide authors in producing texts (printed or digital documents, and scripts for multimedia material) that are clear and accessible to their target audience. Applicable to most written languages, the standard incorporates the latest findings in plain-language research and the accumulated practice of experts in the field.
Article
Full-text available
Over 30,000 field experiments with The Washington Post and Upworthy showed that readers prefer simpler headlines (e.g., more common words and more readable writing) over more complex ones. A follow-up mechanism experiment showed that readers from the general public paid more attention to, and processed more deeply, the simpler headlines compared to the complex headlines. That is, a signal detection study suggested readers were guided by a simpler-writing heuristic, such that they skipped over relatively complex headlines to focus their attention on the simpler headlines. Notably, a sample of professional writers, including journalists, did not show this pattern, suggesting that those writing the news may read it differently from those consuming it. Simplifying writing can help news outlets compete in the online attention economy, and simple language can make news more approachable to online readers.
Article
Full-text available
A Monte Carlo study compared 14 methods to test the statistical significance of the intervening variable effect. An intervening variable (mediator) transmits the effect of an independent variable to a dependent variable. The commonly used R. M. Baron and D. A. Kenny (1986) approach has low statistical power. Two methods based on the distribution of the product and 2 difference-in-coefficients methods have the most accurate Type I error rates and greatest statistical power except in 1 important case in which Type I error rates are too high. The best balance of Type I error and statistical power across all cases is the test of the joint significance of the two effects comprising the intervening variable effect.
Article
Full-text available
People have erroneous intuitions about the laws of chance. In particular, they regard a sample randomly drawn from a population as highly representative, that is, similar to the population in all essential characteristics.
Article
Full-text available
Presents a summary and synthesis of the author's work on attribution theory concerning the mechanisms involved in the process of causal explanations. The attribution theory is related to studies of social perception, self-perception, and psychological epistemology. Two systematic statements of attribution theory are described, discussed, and illustrated with empirical data: the covariation and the configuration concepts. Some problems for attribution theory are considered, including the interplay between preconceptions and new information, simple vs. complex schemata, attribution of covariation among causes, and illusions in attributions. The role of attribution in decision making and behavior is discussed. (56 ref.) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Argues that people use systematic rules for assessing cause, both in science and everyday inference. By explicating the processes that underlie the judgment of causation, the authors review and integrate various theories of causality proposed by psychologists, philosophers, statisticians, and others. Because causal judgment involves inference and uncertainty, the literature on judgment under uncertainty is also considered. It is suggested that the idea of a "causal field" is central for determining causal relevance, differentiating causes from conditions, determining the salience of alternative explanations, and affecting molar versus molecular explanations. Various "cues-to-causality" such as covariation, temporal order, contiguity in time and space, and similarity of cause and effect are discussed, and it is shown how these cues can conflict with probabilistic ideas. A model for combining the cues and the causal field is outlined that explicates methodological issues such as spurious correlation, "causalation," and causal inference in case studies. The discounting of an explanation by specific alternatives is discussed as a special case of the sequential updating of beliefs. Conjunctive explanations in multiple causation are also considered. (120 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Guidelines and tips are offered for writing a Psychological Bulletin review article that will be accessible to the widest possible audience. Techniques are discussed for organizing a review into a coherent narrative, and the importance of giving readers a clear take-home message is emphasized. In addition, advice is given for rewriting a manuscript that has been reviewed and returned with an invitation to revise and resubmit. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Nonfamous names presented once in an experiment are mistakenly judged as famous 24 hr later. On an immediate test, no such false fame occurs. This phenomenon parallels the sleeper effect found in studies of persuasion. People may escape the unconscious effects of misleading information by recollecting its source, raising the criterion level of familiarity required for judgments of fame, or by changing from familiarity to a more analytic basis for judgment. These strategies place constraints on the likelihood of sleeper effects. We discuss these results as the unconscious use of the past as a tool vs its conscious use as an object of reflection. Conscious recollection of the source of information does not always occur spontaneously when information is used as a tool in judgment. Rather, conscious recollection is a separate act. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
A general theory of domain identification is used to describe achievement barriers still faced by women in advanced quantitative areas and by African Americans in school. The theory assumes that sustained school success requires identification with school and its subdomains; that societal pressures on these groups (e.g., economic disadvantage, gender roles) can frustrate this identification; and that in school domains where these groups are negatively stereotyped, those who have become domain identified face the further barrier of stereotype threat, the threat that others’ judgments or their own actions will negatively stereotype them in the domain. Research shows that this threat dramatically depresses the standardized test performance of women and African Americans who are in the academic vanguard of their groups (offering a new interpretation of group differences in standardized test performance), that it causes disidentification with school, and that practices that reduce this threat can reduce these negative effects.
Article
Full-text available
Can language use reflect personality style? Studies examined the reliability, factor structure, and validity of written language using a word-based, computerized text analysis program. Daily diaries from 15 substance abuse inpatients, daily writing assignments from 35 students, and journal abstracts from 40 social psychologists demonstrated good internal consistency for over 36 language dimensions. Analyses of the best 15 language dimensions from essays by 838 students yielded 4 factors that replicated across written samples from another 381 students. Finally, linguistic profiles from writing samples were compared with Thematic Apperception Test coding, self-reports, and behavioral measures from 79 students and with self-reports of a 5-factor measure and health markers from more than 1,200 students. Despite modest effect sizes, the data suggest that linguistic style is an independent and meaningful way of exploring personality.
Article
According to a two-step account of the mere-exposure effect, repeated exposure leads to the subjective feeling of perceptual fluency, which in turn influences liking. If so, perceptual fluency manipulated by means other than repetition should influence liking. In three experiments, effects of perceptual fluency on affective judgments were examined. In Experiment 1, higher perceptual fluency was achieved by presenting a matching rather than nonmatching prime before showing a target picture. Participants judged targets as prettier if preceded by a matching rather than nonmatching prime. In Experiment 2, perceptual fluency was manipulated by figure-ground contrast. Stimuli were judged as more pretty, and less ugly, the higher the contrast. In Experiment 3, perceptual fluency was manipulated by presentation duration. Stimuli shown for a longer duration were liked more, and disliked less. We conclude (a) that perceptual fluency increases liking and (b) that the experience of fluency is affectively positive, and hence attributed to positive but not to negative features, as reflected in a differential impact on positive and negative judgments.
Article
Human reasoning is accompanied by metacognitive experiences, most notably the ease or difficulty of recall and thought generation and the fluency with which new information can be processed. These experiences are informative in their own right. They can serve as a basis of judgment in addition to, or at the expense of, declarative information and can qualify the conclusions drawn from recalled content. What exactly people conclude from a given metacognitive experience depends on the naive theory of mental processes they bring to bear, rendering the outcomes highly variable. The obtained judgments cannot be predicted on the basis of accessible declarative information alone; we cannot understand human judgment without taking into account the interplay of declarative and experiential information.
Article
During his years as mayor of New York City, Rudolph Giuliani was perceived as undergoing changes in personality as a result of a number of personal crises and, later, the terrorist attacks on the World Trade Center on September 11, 2001. One method by which to study individual differences is to explore the natural use of language of an individual. Giuliani's use of language was measured from 35 of his press conferences between his election in 1993 and late 2001. Significant changes in his linguistic style were found in the ways he identified with others, expressed emotions, and exhibited cognitive complexity. Implications for using an analysis of linguistic styles to understand personality are discussed.
Article
This paper explores a judgmental heuristic in which a person evaluates the frequency of classes or the probability of events by availability, i.e., by the ease with which relevant instances come to mind. In general, availability is correlated with ecological frequency, but it is also affected by other factors. Consequently, the reliance on the availability heuristic leads to systematic biases. Such biases are demonstrated in the judged frequency of classes of words, of combinatorial outcomes, and of repeated events. The phenomenon of illusory correlation is explained as an availability bias. The effects of the availability of incidents and scenarios on subjective probability are discussed.
Article
Thesis (Ph. D.)--Stanford University, 1993. Submitted to the Department of Sociology. Copyright by the author.
Article
We define mental contamination as the process whereby a person has an unwanted response because of mental processing that is unconscious or uncontrollable. This type of bias is distinguishable from the failure to know or apply normative rules of inference and can be further divided into the unwanted consequences of automatic processing and source confusion, which is the confusion of 2 or more causes of a response. Mental contamination is difficult to avoid because it results from both fundamental properties of human cognition (e.g., a lack of awareness of mental processes) and faulty lay beliefs about the mind (e.g., incorrect theories about mental biases). People's lay beliefs determine the steps they take (or fail to take) to correct their judgments and thus are an important but neglected source of biased responses. Strategies for avoiding contamination, such as controlling one's exposure to biasing information, are discussed.
Article
Recent articles on familiarity (e.g. Whittlesea, B. W. A., 1993. Journal of Experimental Psychology 19, 1235) have argued that the feeling of familiarity is produced by unconscious attribution of fluent processing to a source in the past. In this article, we refine that notion: We argue that it is not fluency per se, but rather fluent processing occurring under unexpected circumstances that produces the feeling. We demonstrate cases in which moderately fluent processing produces more familiarity than does highly fluent processing, at least when the former is surprising.
Article
Statements of the form "Osorno is in Chile" were presented in colors that made them easy or difficult to read against a white background and participants judged the truth of the statement. Moderately visible statements were judged as true at chance level, whereas highly visible statements were judged as true significantly above chance level. We conclude that perceptual fluency affects judgments of truth.
Article
B. W. A. Whittlesea and L. D. Williams (1998, 2000) proposed the discrepancy-attribution hypothesis to explain the source of feelings of familiarity. By that hypothesis, people chronically evaluate the coherence of their processing. When the quality of processing is perceived as being discrepant from that which could be expected, people engage in an attributional process; the feeling of familiarity occurs when perceived discrepancy is attributed to prior experience. In the present article, the authors provide convergent evidence for that hypothesis and show that it can also explain feelings of familiarity for nonlinguistic stimuli. They demonstrate that the perception of discrepancy is not automatic but instead depends critically on the attitude that people adopt toward their processing, given the task and context. The connection between the discrepancy-attribution hypothesis and the "revelation effect" is also explored (e.g., D. L. Westerman & R. L. Greene, 1996).
Article
In the accompanying article (B. W. A. Whittlesea & L. D. Williams, 2001), surprising violation of an expectation was observed to cause an illusion of familiarity. The authors interpreted that evidence as support for the discrepancy-attribution hypothesis. This article extended the scope of that hypothesis, investigating the consequences of surprising validation of expectations. Subjects were shown recognition probes as completions of sentence stems. Their expectations were manipulated by presenting predictive, nonpredictive, and inconsistent stems. Predictive stems caused an illusion of familiarity, but only when the subjects also experienced uncertainty about the outcome. That is, as predicted by the discrepancy-attribution hypothesis, feelings of familiarity occurred only when processing of a recognition target caused surprise. The article provides a discussion of the ways in which a perception of discrepancy can come about, as well as the origin and nature of unconscious expectations.
Article
Discounting is a causal-reasoning phenomenon in which increasing confidence in the likelihood of a particular cause decreases confidence in the likelihood of all other causes. This article provides evidence that individuals apply discounting principles to making causal attributions about internal cognitive states. In particular, the three studies reported show that individuals will fail to use the availability heuristic in frequency estimations when salient causal explanations for availability exist. Experiment 1 shows that fame is used as a cue for discounting in estimates of surname frequency. Experiment 2 demonstrates that individuals discount the availability of their own last name. Experiment 3, which used individuals' initials in a letter-frequency estimation task, demonstrates that simple priming of alternative causal models leads to discounting of availability. Discounting of cognitive states can occur spontaneously, even when alternative causal models are never explicitly provided.
Article
Fluency--the ease with which people process information--is a central piece of information we take into account when we make judgments about the world. Prior research has shown that fluency affects judgments in a wide variety of domains, including frequency, familiarity, and confidence. In this paper, we present evidence that fluency also plays a role in categorization judgments. In Experiment 1, participants judged a variety of different exemplars to be worse category members if they were less fluent (because they were presented in a smaller typeface). In Experiment 2, we found that fluency also affected judgments of feature typicality. In Experiment 3, we demonstrated that the effects of fluency can be reversed when a salient attribution for reduced fluency is available (i.e., the stimuli are hard to read because they were printed by a printer with low toner). In Experiment 4 we replicated these effects using a within-subject design, which ruled out the possibility that the effects were a statistical artifact caused by aggregation of data. We propose a possible mechanism for these effects: if an exemplar and its category are closely related, activation of one will cause priming of the other, leading to increased fluency. Over time, feelings of fluency come to be used as a valid cue that can become confused with more traditional sources of information about category membership.
Confidence as inference from subjective experience
  • R J Norwick
  • N Epley
Norwick, R. J., & Epley, N. (2002, November). Confidence as inference from subjective experience. Talk presented at the meeting of the Society for Judgment and Decision Making, Kansas City, MO.
Spontaneous discounting of availability in frequency judgment tasks
  • D M Oppenheimer
Oppenheimer, D. M. (2004). Spontaneous discounting of availability in frequency judgment tasks. Psychological Science, 15(2), 100–105.
Factors influencing spontaneous discounting of fluency in frequency judgment
  • D M Oppenheimer
  • B Monin
Oppenheimer, D. M., & Monin, B. (in prep). Factors influencing spontaneous discounting of fluency in frequency judgment.
Meditations on First Philosophy (S. Tweyman, Trans.). London: Routledge. (Original work published 1641)
  • R Descartes
Descartes, R. (1993). Meditations on First Philosophy (S. Tweyman, Trans.). London: Routledge. (Original work published 1641).