Article

Identifying Implicit Social Biases in Vision-Language Models


Abstract

Vision-language models such as CLIP (Contrastive Language-Image Pre-training) are becoming increasingly popular for a wide range of multimodal retrieval tasks. However, prior work has shown that large language and deep vision models can learn historical biases contained in their training sets, leading to the perpetuation of stereotypes and potential downstream harm. In this work, we conduct a systematic analysis of the social biases present in CLIP, with a focus on the interaction between the image and text modalities. We first propose a taxonomy of social biases called So-B-It, which contains 374 words categorized across ten types of bias. Each type can lead to societal harm if associated with a particular demographic group. Using this taxonomy, we examine images retrieved by CLIP from a facial image dataset when each word is used as part of a prompt. We find that CLIP frequently displays undesirable associations between harmful words and specific demographic groups, such as retrieving mostly pictures of Middle Eastern men when asked to retrieve images of a "terrorist". Finally, we analyze the source of such biases by showing that, for the examples of bias we identify, the same harmful stereotypes are also present in a large image-text dataset used to train CLIP models. Our findings highlight the importance of evaluating and addressing bias in vision-language models, and suggest the need for transparency and fairness-aware curation of large pre-training datasets.
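The retrieval-based audit described in the abstract can be approximated with off-the-shelf CLIP checkpoints. The following is a minimal sketch using the Hugging Face transformers CLIP API; the prompt template, dataset layout, and per-image demographic labels are illustrative assumptions, not necessarily the authors' exact protocol.

```python
# Minimal sketch of a CLIP retrieval audit: rank face images by similarity
# to a prompt built from a probe word, then tally the demographic labels of
# the top-k results. Paths, labels, and the prompt template are assumed.
import torch
from collections import Counter
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def retrieve_top_k(image_paths, demographics, word, k=100):
    """Return the demographic distribution of the top-k images retrieved
    for a prompt containing `word`."""
    text_inputs = processor(text=[f"a photo of a {word}"],
                            return_tensors="pt", padding=True)
    with torch.no_grad():
        text_emb = model.get_text_features(**text_inputs)
        text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

        sims = []
        for path in image_paths:
            image = Image.open(path).convert("RGB")
            img_inputs = processor(images=image, return_tensors="pt")
            img_emb = model.get_image_features(**img_inputs)
            img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
            sims.append((img_emb @ text_emb.T).item())

    top = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:k]
    return Counter(demographics[i] for i in top)

# Example: distribution = retrieve_top_k(paths, labels, "terrorist", k=100)
```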


... Large Language Models (LLMs) and VLMs often inherit and perpetuate biases and stereotypes present in their training data [4][5][6][7], which is typically sourced from vast and diverse internet repositories [8][9][10][11]. The training datasets frequently contain implicit and explicit cultural stereotypes, societal biases, and skewed representations that the models learn during training. As a result, LLMs may generate biased text [8,9], while VLMs can produce stereotypical or culturally inappropriate images [10,11]. ...
... The training datasets frequently contain implicit and explicit cultural stereotypes, societal biases, and skewed representations that the models learn during training. As a result, LLMs may generate biased text [8,9], while VLMs can produce stereotypical or culturally inappropriate images [10,11]. Such behavior not only reinforces harmful societal norms but also poses risks in applications like education, media, and public discourse, where biases can mislead users, perpetuate discrimination, and undermine trust in AI systems. ...
... Nadeem et al. [9] concluded that bidirectional encoder representations from transformers (BERT), generative pre-trained transformer (GPT-2), and robustly optimized BERT pre-training approach (RoBERTa) exhibited significant stereotypical biases across domains such as gender, profession, race, and religion and emphasized the need for improved evaluation metrics and mitigation strategies. On the other hand, studies related to biases in VLMs are limited [10,11]. Cho et al. [10] highlighted that text-to-image generation models had significant gender and skin tone biases for the image generation task. ...
Preprint
Full-text available
Animal stereotypes are deeply embedded in human culture and language. They often shape our perceptions and expectations of various species. Our study investigates how animal stereotypes manifest in vision-language models during the task of image generation. Through targeted prompts, we explore whether DALL-E perpetuates stereotypical representations of animals, such as "owls as wise," "foxes as unfaithful," etc. Our findings reveal significant stereotyped instances where the model consistently generates images aligned with cultural biases. The current work is the first of its kind to examine animal stereotyping in vision-language models systematically and to highlight a critical yet underexplored dimension of bias in AI-generated visual content.
... Similarly, Nadeem et al. [11] reported that LLMs such as BERT, GPT-2, and RoBERTa exhibited strong stereotypical biases related to gender, profession, race, and religion. On the other hand, fewer studies have looked at biases in vision-language models (VLMs) [12,13,14]. Cho et al. [12] found that text-to-image models such as DALL-E and Stable Diffusion had apparent gender and skin tone biases in the generated images, and similar behavior was also seen in the CLIP model [13]. ...
... On the other hand, fewer studies have looked at biases in vision-language models (VLMs) [12,13,14]. Cho et al. [12] found that text-to-image models such as DALL-E and Stable Diffusion had apparent gender and skin tone biases in the generated images, and similar behavior was also seen in the CLIP model [13]. The authors in [16] explored whether VLMs could accurately interpret specific prompts and discovered that these models struggle to respond to prompts containing the word 'no', a phenomenon referred to as "negation blindness". ...
... To investigate gender bias in Sora, we designed twelve prompts across three categories: Appearance, Behavior, and Occupation (inspired by Hamidieh et al. [13]). For each category, we considered four terms: Appearance = {Attractive, Ugly, Muscular, Frail}, Behavior = {Confident, Shy, Emotional, Rational}, Occupation = {Nurse, Doctor, CEO, Secretary}. ...
Preprint
The advent of text-to-video generation models has revolutionized content creation as it produces high-quality videos from textual prompts. However, concerns regarding inherent biases in such models have prompted scrutiny, particularly regarding gender representation. Our study investigates the presence of gender bias in OpenAI's Sora, a state-of-the-art text-to-video generation model. We uncover significant evidence of bias by analyzing the generated videos from a diverse set of gender-neutral and stereotypical prompts. The results indicate that Sora disproportionately associates specific genders with stereotypical behaviors and professions, which reflects societal prejudices embedded in its training data.
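For concreteness, the twelve probe terms quoted in the excerpt above can be organized into prompt sets programmatically. The term lists come from the excerpt; the gender-neutral prompt templates below are illustrative assumptions, not the study's exact wording.

```python
# Term lists are taken from the excerpt above; the prompt templates are
# assumed for illustration, not the Sora study's exact prompts.
categories = {
    "Appearance": ["Attractive", "Ugly", "Muscular", "Frail"],
    "Behavior": ["Confident", "Shy", "Emotional", "Rational"],
    "Occupation": ["Nurse", "Doctor", "CEO", "Secretary"],
}

def build_prompts():
    prompts = []
    for category, terms in categories.items():
        for term in terms:
            if category == "Occupation":
                # Gender-neutral phrasing: the occupation names the person.
                prompts.append(f"a video of a {term.lower()} at work")
            else:
                prompts.append(f"a video of a {term.lower()} person")
    return prompts

if __name__ == "__main__":
    for prompt in build_prompts():
        print(prompt)
```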
... For example, studies have identified biases in multi-class zero-shot classification [17], where models might disproportionately associate certain professions with specific genders. Similarly, biases in text-to-image retrieval [18,39] can lead to the preferential retrieval of images that reinforce stereotypical narratives. The implications of these biases extend to image captioning [45,20] and text-to-image generation [10], where the descriptive and generative capacities of VLMs may perpetuate and even amplify existing societal prejudices. ...
... Kim et al. [24] further investigated how the association between sensitive attributes and specific keywords contributes to bias issues in downstream tasks. These biases lead to unfair outcomes in zero-shot binary classification and text-to-image retrieval, as noted by [13,18]. Moreover, Slyman et al. [35] extend the analysis of bias in zero-shot classification to the multi-class setting using the FACET dataset [17] and its evaluation metric. ...
Preprint
Full-text available
Recent advancements in Vision-Language Models (VLMs) have enabled complex multimodal tasks by processing text and image data simultaneously, significantly enhancing the field of artificial intelligence. However, these models often exhibit biases that can skew outputs towards societal stereotypes, thus necessitating debiasing strategies. Existing debiasing methods focus narrowly on specific modalities or tasks, and require extensive retraining. To address these limitations, this paper introduces Selective Feature Imputation for Debiasing (SFID), a novel methodology that integrates feature pruning and low confidence imputation (LCI) to effectively reduce biases in VLMs. SFID is versatile, maintaining the semantic integrity of outputs, and cost-effective, as it eliminates the need for retraining. Our experimental results demonstrate SFID's effectiveness across various VLM tasks including zero-shot classification, text-to-image retrieval, image captioning, and text-to-image generation, significantly reducing gender biases without compromising performance. This approach not only enhances the fairness of VLM applications but also preserves their efficiency and utility across diverse scenarios.
... More recently, efforts have expanded to multimodal models and datasets, addressing biases in various language-vision tasks. These investigations have explored biases in embeddings [25], text-to-image (TTI) generation [5,11,18,23,52,62,64], image retrieval [61], image captioning [27,65], and visual question-answering models [1,28,44]. Despite these advances, research on intersectional biases in TTI models remains limited. ...
Preprint
Full-text available
The biases exhibited by Text-to-Image (TTI) models are often treated as if they are independent, but in reality, they may be deeply interrelated. Addressing bias along one dimension, such as ethnicity or age, can inadvertently influence another dimension, like gender, either mitigating or exacerbating existing disparities. Understanding these interdependencies is crucial for designing fairer generative models, yet measuring such effects quantitatively remains a challenge. In this paper, we aim to address these questions by introducing BiasConnect, a novel tool designed to analyze and quantify bias interactions in TTI models. Our approach leverages a counterfactual-based framework to generate pairwise causal graphs that reveal the underlying structure of bias interactions for the given text prompt. Additionally, our method provides empirical estimates that indicate how other bias dimensions shift toward or away from an ideal distribution when a given bias is modified. Our estimates have a strong correlation (+0.69) with the interdependency observations after bias mitigation. We demonstrate the utility of BiasConnect for selecting optimal bias mitigation axes, comparing different TTI models on the dependencies they learn, and understanding the amplification of intersectional societal biases in TTI models.
... In pre-processing methods, eliminating bias in the training corpus is a difficult challenge that offers no guarantees (Hamidieh et al., 2023). For prompt-engineering, although leveraging specific prompts to instruct the model (e.g. ...
Preprint
Text-to-image models are known to propagate social biases. For example, when prompted to generate images of people in certain professions, these models tend to systematically generate specific genders or ethnicities. In this paper, we show that this bias is already present in the text encoder of the model and introduce a Mixture-of-Experts approach by identifying text-encoded bias in the latent space and then creating a bias-identification gate. More specifically, we propose MoESD (Mixture of Experts Stable Diffusion) with BiAs (Bias Adapters) to mitigate gender bias. We also demonstrate that a special token is essential during the mitigation process. With experiments focusing on gender bias, we demonstrate that our approach successfully mitigates gender bias while maintaining image quality.
Article
Full-text available
Large-scale contrastive vision-language pretraining has shown significant progress in visual representation learning. Unlike traditional visual systems trained by a fixed set of discrete labels, a new paradigm was introduced in Radford et al. (International conference on machine learning, PMLR, 2021) to directly learn to align images with raw texts in an open-vocabulary setting. On downstream tasks, a carefully chosen text prompt is employed to make zero-shot predictions. To avoid non-trivial prompt engineering, context optimization (Zhou et al. in Int J Comput Vis 130(9):2337–2348, 2022) has been proposed to learn continuous vectors as task-specific prompts with few-shot training examples. In this paper, we show that there is an alternative path to achieve better vision-language models other than prompt tuning. While prompt tuning is for the textual inputs, we propose CLIP-Adapter to conduct fine-tuning with feature adapters on either visual or language branch. Specifically, CLIP-Adapter adopts an additional bottleneck layer to learn new features and performs residual-style feature blending with the original pretrained features. As a consequence, CLIP-Adapter is able to outperform context optimization while maintaining a simple design. Experiments and extensive ablation studies on various visual classification tasks demonstrate the effectiveness of our approach.
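The adapter design described in this abstract is simple enough to sketch. Below is a minimal PyTorch version of the idea: a small bottleneck MLP on top of frozen CLIP features, blended residually with the original features. The bottleneck reduction factor, the blend weight alpha, and attaching the adapter to the visual branch only are illustrative choices, not necessarily the paper's exact configuration.

```python
# Minimal sketch of a CLIP-Adapter-style module: a bottleneck MLP applied to
# frozen CLIP features with residual-style blending. Hyperparameters assumed.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim=512, reduction=4):
        super().__init__()
        self.bottleneck = nn.Sequential(
            nn.Linear(dim, dim // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim),
            nn.ReLU(inplace=True),
        )

    def forward(self, features, alpha=0.2):
        # Blend adapted features with the original pretrained features.
        adapted = self.bottleneck(features)
        return alpha * adapted + (1 - alpha) * features

# Usage sketch: keep CLIP frozen, obtain image features, pass them through
# the adapter, then compute logits against the (frozen) text features for
# few-shot classification.
```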
Article
Full-text available
With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, Web-mined text datasets covering hundreds of languages. We manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4). Lower-resource corpora have systematic issues: At least 15 corpora have no usable text, and a significant fraction contains less than 50% sentences of acceptable quality. In addition, many are mislabeled or use nonstandard/ambiguous language codes. We demonstrate that these issues are easy to detect even for non-proficient speakers, and supplement the human audit with automatic analyses. Finally, we recommend techniques to evaluate and improve multilingual corpora and discuss potential risks that come with low-quality data releases.
Article
Full-text available
Pre-trained deep learning models underpin many public-facing applications, and their propensity to reproduce implicit racial and gender stereotypes is an increasing source of concern. The risk of large-scale, unfair outcomes resulting from their use thus raises the need for technical tools to test and audit these systems. In this work, a dataset of 10,000 portrait photographs was generated and classified, using CLIP (Contrastive Language–Image Pretraining), according to six pairs of opposing labels describing a subject’s gender, ethnicity, attractiveness, friendliness, wealth, and intelligence. Label correlation was analyzed and significant associations, corresponding to common implicit stereotypes in culture and society, were found at the 99% significance level. A strong positive correlation was notably found between labels Female and Attractive, Male and Rich, as well as White Person and Attractive. These results are used to highlight the risk of more innocuous labels being used as partial euphemisms for protected attributes. Moreover, some limitations of common definitions of algorithmic fairness as they apply to general-purpose, pre-trained systems are analyzed, and the idea of controlling for bias at the point of deployment of these systems rather than during data collection and training is put forward as a possible circumvention.
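An audit of this kind can be reproduced in outline with public CLIP checkpoints: score each portrait against opposing prompt pairs and test whether the resulting labels co-occur more often than chance. The sketch below uses assumed prompt templates and a chi-square test of independence; the study's exact prompts, image set, and statistical procedure may differ.

```python
# Sketch of an opposing-label audit with CLIP: zero-shot classify portraits
# against label pairs, then test label co-occurrence. Prompts are assumed.
import numpy as np
import torch
from scipy.stats import chi2_contingency
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

label_pairs = {
    "gender": ("a photo of a man", "a photo of a woman"),
    "attractiveness": ("a photo of an attractive person",
                       "a photo of an unattractive person"),
    "wealth": ("a photo of a rich person", "a photo of a poor person"),
}

def classify(images, pair):
    """Return a 0/1 choice per image between the two opposing prompts."""
    inputs = processor(text=list(pair), images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # [n_images, 2]
    return logits.argmax(dim=-1).numpy()

def association(images):
    """Chi-square test of independence between two audited label pairs."""
    gender = classify(images, label_pairs["gender"])
    attractive = classify(images, label_pairs["attractiveness"])
    table = np.zeros((2, 2))
    for g, a in zip(gender, attractive):
        table[g, a] += 1
    chi2, p, _, _ = chi2_contingency(table)
    return chi2, p
```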
Article
Full-text available
Algorithms in online platforms interact with users' identities in different ways. However, little is known about how users understand the interplay between identity and algorithmic processes on these platforms, and if and how such understandings shape their behavior on these platforms in return. Through semi-structured interviews with 15 US-based TikTok users, we detail users' algorithmic folk theories of the For You Page algorithm in relation to two inter-connected identity types: person and social identity. Participants identified potential harms that can accompany algorithms' tailoring content to their person identities. Further, they believed the algorithm actively suppresses content related to marginalized social identities based on race and ethnicity, body size and physical appearance, ability status, class status, LGBTQ identity, and political and social justice group affiliation. We propose a new algorithmic folk theory of social feeds, the Identity Strainer Theory, to describe when users believe an algorithm filters out and suppresses certain social identities. In developing this theory, we introduce the concept of algorithmic privilege as held by users positioned to benefit from algorithms on the basis of their identities. We further propose the concept of algorithmic representational harm to refer to the harm users experience when they lack algorithmic privilege and are subjected to algorithmic symbolic annihilation. Additionally, we describe how participants changed their behaviors to shape their algorithmic identities to align with how they understood themselves, as well as to resist the suppression of marginalized social identities and lack of algorithmic privilege via individual actions, collective actions, and altering their performances. We theorize our findings to detail the ways the platform's algorithm and its users co-produce knowledge of identity on the platform. We argue the relationship between users' algorithmic folk theories and identity is consequential for social media platforms, as it impacts users' experiences, behaviors, sense of belonging, and perceived ability to be seen, heard, and feel valued by others as mediated through algorithmic systems.
Conference Paper
Full-text available
In this paper we study the limitations of Machine Learning (ML) algorithms for predicting juvenile recidivism. Particularly, we are interested in analyzing the trade-off between predictive performance and fairness. To that end, we evaluate fairness of ML models in conjunction with SAVRY, a structured professional risk assessment framework, on a novel dataset originating in Catalonia. In terms of accuracy on the prediction of recidivism, the ML models slightly outperform SAVRY; the results improve with more data or more features available for training (AUCROC of 0.64 with SAVRY vs. AUCROC of 0.71 with ML models). However, across three fairness metrics used in other studies, we find that SAVRY is in general fair, while the ML models tend to discriminate against male defendants, foreigners, or people of specific national groups. For instance, foreigners who did not recidivate are almost twice as likely to be wrongly classified as high risk by ML models as Spanish nationals. Finally, we discuss potential sources of this unfairness and provide explanations for them, by combining ML interpretability techniques with a thorough data analysis. Our findings provide an explanation for why ML techniques lead to unfairness in data-driven risk assessment, even when protected attributes are not used in training.
Conference Paper
Full-text available
We present a large-scale study of gender bias in occupation classification, a task where the use of machine learning may lead to negative outcomes on people's lives. We analyze the potential allocation harms that can result from semantic representation bias. To do so, we study the impact on occupation classification of including explicit gender indicators, such as first names and pronouns, in different semantic representations of online biographies. Additionally, we quantify the bias that remains when these indicators are "scrubbed," and describe proxy behavior that occurs in the absence of explicit gender indicators. As we demonstrate, differences in true positive rates between genders are correlated with existing gender imbalances in occupations, which may compound these imbalances.
Article
Full-text available
Demographic changes from decades of mass immigration and shifts in internal migration patterns are upending the traditional racial composition of many states throughout the United States, transforming the American electorate, and increasing both the political salience of immigration and the racial salience of Latinos. Politicizing these visible demographic shifts has become an increasingly common strategy by both Democrats and Republicans with potentially significant electoral effects. While many have examined the impact of these demographic changes on dominant receiving populations’ attitudes, few have examined how changing demographics are shaping immigration politics in electoral campaigns. Specifically, under what conditions do political candidates politicize demographic change? I hypothesize that both political and demographic considerations drive variation in immigration appeals. I test my hypotheses using a novel dataset of candidate campaign websites from 2010, 2012, and 2014 US Senate primary and general elections. I argue that racial party cleavages increase the electoral temptation of immigration appeals but it is the interaction between state-level Latino population growth, electoral competition, and Latino voters that determines campaign strategy more broadly and moderates the use of pro- and anti-immigrant appeals.
Article
Full-text available
Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicate a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as towards insects or flowers, problematic as towards race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.
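The association measure behind these Implicit Association Test replications (the WEAT-style effect size) is compact enough to write out. In the sketch below, embed is an assumed function mapping a word to its vector, and the example target and attribute word sets are illustrative stand-ins.

```python
# Minimal sketch of a WEAT-style differential association measure over word
# embeddings. `embed` and the word lists are assumptions for illustration.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word, A, B, embed):
    """s(w, A, B): mean cosine similarity of w to attribute set A minus B."""
    return (np.mean([cosine(embed(word), embed(a)) for a in A])
            - np.mean([cosine(embed(word), embed(b)) for b in B]))

def weat_effect_size(X, Y, A, B, embed):
    """Differential association of target sets X, Y with attribute sets A, B."""
    x_assoc = [association(x, A, B, embed) for x in X]
    y_assoc = [association(y, A, B, embed) for y in Y]
    pooled = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled

# Example (illustrative) sets:
# X = ["engineer", "programmer"], Y = ["nurse", "teacher"]
# A = ["he", "man", "male"],       B = ["she", "woman", "female"]
```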
Article
Full-text available
Police agencies, software firms and the public must ensure that crime-forecasting software improves public safety and officer accountability, writes Aaron Shapiro.
Article
Full-text available
This study surveys and analyzes the extent of religious discrimination in 175 states between 1990 and 2002 based on data from the Religion and State (RAS) project. Religious discrimination is defined empirically in this study as restrictions placed on the religious practices or organizations of a religious minority in a state that are not placed on those of the majority religion. The analysis includes sixteen specific types of religious discrimination. The results show that religious discrimination was present in a majority of states and mean levels of religious discrimination rose between 1990 and 2002. This stands in contrast to other forms of human rights violations which remained stable or dropped during this period. These findings are consistent across world regions and major religious traditions. Yet the levels of religious discrimination vary across world regions and major religious traditions. Finally, restrictions on the public expression of religion and religious organizations are more common than restrictions on the private practice of religion.
Article
Full-text available
The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to "debias" the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
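The two geometric observations in this abstract translate directly into a debiasing recipe: estimate a gender direction from definitional word pairs, then project that direction out of gender-neutral word vectors. The sketch below averages pair difference vectors where the paper uses PCA over several pairs; embed and the word lists are assumptions for illustration.

```python
# Sketch of the two debiasing steps: estimate a gender direction, then
# neutralize gender-neutral words. Simplified relative to the paper.
import numpy as np

def gender_direction(embed, pairs=(("she", "he"), ("woman", "man"), ("her", "his"))):
    """Average the normalized difference vectors of definitional pairs."""
    diffs = []
    for female_word, male_word in pairs:
        d = embed(female_word) - embed(male_word)
        diffs.append(d / np.linalg.norm(d))
    g = np.mean(diffs, axis=0)
    return g / np.linalg.norm(g)

def neutralize(vector, g):
    """Project out the gender component from a gender-neutral word vector."""
    debiased = vector - np.dot(vector, g) * g
    return debiased / np.linalg.norm(debiased)

# e.g. neutralize(embed("receptionist"), gender_direction(embed)) removes the
# component along the she-he axis while keeping the rest of the vector.
```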
Article
We present a new paradigm for fine-tuning large-scale vision-language pre-trained models on downstream tasks, dubbed Prompt Regularization (ProReg). Unlike traditional fine-tuning, which easily overfits to the downstream task data, ProReg uses the prediction obtained by prompting the pretrained model to regularize the fine-tuning. The motivation is: by prompting the large model with "a photo of a [CLASS]", the fill-in answer depends only on the pretraining encyclopedic knowledge and is independent of the task data distribution, which is usually biased. Specifically, given a training sample prediction during fine-tuning, we first calculate its Kullback-Leibler loss against the prompt prediction and its Cross-Entropy loss against the ground-truth label, and then combine them with a proposed sample-wise adaptive trade-off weight, which automatically adjusts the transfer between the pretrained and downstream domains. On various out-of-distribution benchmarks, we show the consistently strong performance of ProReg compared with conventional fine-tuning, zero-shot prompting, prompt tuning, and other state-of-the-art methods.
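The objective described above combines a cross-entropy term on the ground-truth label with a Kullback-Leibler term toward the frozen prompt prediction, mixed by a sample-wise weight. The sketch below is one plausible reading of that combination; the exact form of ProReg's adaptive trade-off weight is not given in the abstract, so the confidence-based weight used here is an assumption.

```python
# Sketch of a ProReg-style regularized fine-tuning loss. The adaptive
# per-sample weight shown here is an illustrative choice, not the paper's.
import torch
import torch.nn.functional as F

def proreg_loss(student_logits, prompt_logits, labels):
    """student_logits: fine-tuned model outputs; prompt_logits: frozen
    zero-shot predictions from prompting the pretrained model."""
    ce = F.cross_entropy(student_logits, labels, reduction="none")
    kl = F.kl_div(F.log_softmax(student_logits, dim=-1),
                  F.softmax(prompt_logits, dim=-1),
                  reduction="none").sum(dim=-1)
    # Sample-wise trade-off: trust the prompt prediction more when it is
    # confident about the ground-truth class (assumed weighting rule).
    with torch.no_grad():
        w = F.softmax(prompt_logits, dim=-1).gather(1, labels.unsqueeze(1)).squeeze(1)
    return ((1 - w) * ce + w * kl).mean()
```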
Chapter
Recently, open-vocabulary image classification by vision-language pre-training has demonstrated remarkable achievements: a model can classify arbitrary categories without seeing additional annotated images of those categories. However, it is still unclear how to make open-vocabulary recognition work well on broader vision problems. This paper targets open-vocabulary semantic segmentation by building on an off-the-shelf pre-trained vision-language model, i.e., CLIP. However, semantic segmentation and the CLIP model operate at different visual granularities: semantic segmentation processes pixels while CLIP operates on whole images. To remedy this discrepancy in processing granularity, we forgo the prevalent one-stage FCN-based framework and advocate a two-stage semantic segmentation framework, with the first stage extracting generalizable mask proposals and the second stage leveraging an image-based CLIP model to perform open-vocabulary classification on the masked image crops generated in the first stage. Our experimental results show that this two-stage framework achieves superior performance to FCN when trained only on the COCO Stuff dataset and evaluated on other datasets without fine-tuning. Moreover, this simple framework also surpasses the previous state of the art in zero-shot semantic segmentation by a large margin: +29.5 hIoU on the Pascal VOC 2012 dataset and +8.9 hIoU on the COCO Stuff dataset. With its simplicity and strong performance, we hope this framework will serve as a baseline to facilitate future research. The code is made publicly available at https://github.com/MendelXu/zsseg.baseline.
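The two-stage pipeline can be sketched at a high level: a class-agnostic proposal stage supplies binary masks, and CLIP classifies each masked crop against open-vocabulary prompts. In the sketch below, the mask proposal generator is assumed to exist externally (the paper trains its own proposal network), and the masking details are simplified.

```python
# High-level sketch of the two-stage open-vocabulary segmentation idea.
# Stage 1 (mask proposals) is assumed to come from a separate class-agnostic
# proposal network; `mask_proposals` is a list of HxW boolean arrays.
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def classify_masked_crops(crops, class_names):
    """Stage 2: zero-shot classify each masked image crop with CLIP."""
    prompts = [f"a photo of a {name}" for name in class_names]
    inputs = processor(text=prompts, images=crops,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # [n_crops, n_classes]
    return logits.argmax(dim=-1).tolist()

def open_vocab_segment(image, mask_proposals, class_names):
    pixels = np.array(image.convert("RGB"))
    crops = []
    for mask in mask_proposals:
        masked = pixels.copy()
        masked[~mask] = 0  # blank out everything outside this proposal
        crops.append(Image.fromarray(masked))
    labels = classify_masked_crops(crops, class_names)
    # Pair each proposal with its predicted open-vocabulary class name.
    return list(zip(mask_proposals, [class_names[i] for i in labels]))
```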
Article
We propose a discrimination-aware learning method to improve both the accuracy and fairness of biased face recognition algorithms. The most popular face recognition benchmarks assume a distribution of subjects without paying much attention to their demographic attributes. In this work, we perform a comprehensive discrimination-aware experimentation of deep learning-based face recognition. We also propose a notational framework for algorithmic discrimination with application to face biometrics. The experiments include three popular face recognition models and three public databases composed of 64,000 identities from different demographic groups characterized by sex and ethnicity. We experimentally show that learning processes based on the most used face databases have led to popular pre-trained deep face models that present evidence of strong algorithmic discrimination. Finally, we propose a discrimination-aware learning method, Sensitive Loss, based on the popular triplet loss function and a sensitive triplet generator. Our approach works as an add-on to pre-trained networks and is used to improve their performance in terms of average accuracy and fairness. The method shows results comparable to state-of-the-art de-biasing networks and represents a step forward to prevent discriminatory automatic systems.
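The components named above, a triplet loss plus a "sensitive" triplet generator, can be sketched as follows. The sampling rule shown (negatives drawn from the same demographic group as the anchor, so group membership cannot serve as a shortcut) is one heavily simplified interpretation; the paper's actual generator and its add-on training procedure are more involved.

```python
# Illustrative sketch only: a standard triplet loss and a demographics-aware
# triplet sampler. The sampling rule is an assumption, not the paper's exact
# sensitive triplet generator.
import random
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard margin-based triplet loss over embedding tensors."""
    pos_d = F.pairwise_distance(anchor, positive)
    neg_d = F.pairwise_distance(anchor, negative)
    return F.relu(pos_d - neg_d + margin).mean()

def sample_sensitive_triplet(embeddings, identities, groups):
    """Anchor and positive share an identity; the negative is a different
    identity from the same demographic group as the anchor."""
    anchor_idx = random.randrange(len(identities))
    positives = [j for j, pid in enumerate(identities)
                 if pid == identities[anchor_idx] and j != anchor_idx]
    negatives = [j for j, pid in enumerate(identities)
                 if pid != identities[anchor_idx] and groups[j] == groups[anchor_idx]]
    pos_idx, neg_idx = random.choice(positives), random.choice(negatives)
    return embeddings[anchor_idx], embeddings[pos_idx], embeddings[neg_idx]
```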
Article
Media platforms, technological systems, and search engines act as conduits and gatekeepers for all kinds of information. They often influence, reflect, and reinforce gender stereotypes, including those that represent occupations. This study examines the prevalence of gender stereotypes on digital media platforms and considers how human efforts to create and curate messages directly may impact these stereotypes. While gender stereotyping in social media and algorithms has received some examination in the recent literature, its prevalence in different types of platforms (for example, wiki vs. news vs. social network) and under differing conditions (for example, degrees of human‐ and machine‐led content creation and curation) has yet to be studied. This research explores the extent to which stereotypes of certain strongly gendered professions (librarian, nurse, computer programmer, civil engineer) persist and may vary across digital platforms (Twitter, the New York Times online, Wikipedia, and Shutterstock). The results suggest that gender stereotypes are most likely to be challenged when human beings act directly to create and curate content in digital platforms, and that highly algorithmic approaches for curation showed little inclination towards breaking stereotypes. Implications for the more inclusive design and use of digital media platforms, particularly with regard to mediated occupational messaging, are discussed.
Article
Classical and recent accounts of education posit that education legitimately, and authoritatively, classifies individuals to positions of lower or higher status. However, despite these general theoretical claims, empirical evidence that provides an in-depth picture of the relationship between educational attainment and social status remains scarce. In this paper, based on a dataset of 31 countries (International Social Survey Programme), we investigate the extent to which education is related to subjective social status, the degree to which this is seen as legitimate, and how this relationship varies between countries. We contextualize this relationship with the influence of the centrality of education in countries (operationalized as the share of higher educated). Results showed that education is an important source of subjective social status for individuals across all countries, and is seen as relatively legitimate and uncontroversial among all educational groups. Moreover, among those who perceive education to be more important for status, subjective status differences between educational groups are larger. Additionally, in countries with larger shares of higher educated, educational differences in subjective social status correlate more strongly with whether or not people obtained a degree of higher (tertiary) education. Lastly, the relationship between education and subjective social status in these countries is more independent from other sources of status, such as income and gender. It therefore seems to be that as higher education becomes more central and widely shared in a society, rather than leveling social differences, ironically it also becomes more distinctive and diagnostic in distinguishing people along group lines.
Book
This book puts in one place and in accessible form Richard Berk's most recent work on forecasts of re-offending by individuals already in criminal justice custody. Using machine learning statistical procedures trained on very large datasets, an explicit introduction of the relative costs of forecasting errors as the forecasts are constructed, and an emphasis on maximizing forecasting accuracy, the author shows how his decades of research on the topic improve forecasts of risk. Criminal justice risk forecasts anticipate the future behavior of specified individuals, rather than "predictive policing" for locations in time and space, which is a very different enterprise that uses different data and different data analysis tools. The audience for this book includes graduate students and researchers in the social sciences, and data analysts in criminal justice agencies. Formal mathematics is used only as necessary or in concert with more intuitive explanations.
Conference Paper
Biometric of Intent (BoI) is a Computer Vision (CV) automation, using Artificial Intelligence (AI) techniques, which presents a new approach that extends the reach of the classic biometric identification process. It provides an efficient mechanism to deter threats raised by unknown individuals who have deceitful intentions and who aim to deploy unlawful operations such as terrorist attacks. In this context, our proposed BoI model is based on a framework constructed upon an automated machine learning facial expression analysis system which can assist law enforcement agencies that intend to deploy a systematic preventive security approach aiming to reduce the risk of potential unlawful attacks by rogue individuals through the evaluation of their emotional state in relation to their malicious intent.
Article
Two experiments tested a form of automatic stereotyping. Subjects saw primes related to gender (e.g., mother, father, nurse, doctor) or neutral with respect to gender (e.g., parent, student, person), followed by target pronouns (stimulus onset asynchrony = 300 ms) that were gender related (e.g., she, he) or neutral (it, me), or followed by nonpronouns (do, all; Experiment 2 only). In Experiment 1, subjects judged whether each pronoun was male or female. Automatic gender beliefs (stereotypes) were observed in faster responses to pronouns consistent than inconsistent with the gender component of the prime, regardless of subjects' awareness of the prime-target relation, and independently of subjects' explicit beliefs about gender stereotypes and language reform. In Experiment 2, automatic stereotyping was obtained even though a gender-irrelevant judgment task (pronoun/not pronoun) was used. Together, these experiments demonstrate that gender information imparted by words can automatically influence judgment, although the strength of such effects may be moderated by judgment task and prime type.