Table 3 - uploaded by Arthur E. Blank
Table 3. Proportion of Subjects Who Returned the Questionnaire (columns: High status, Random status)

Source publication
Article
Full-text available
Conducted 3 field experiments to test the hypothesis that complex social behavior that appears to be enacted mindfully instead may be performed without conscious attention to relevant semantics. 200 Ss in compliance paradigms received communications that either were or were not semantically sensible, were or were not structurally consistent with th...

Context in source publication

Context 1
... there were no main effects, a contrast that set the high-status congruent group as different from the remaining groups, which in turn were equal to each other, was significant at p < .05, F(1, 74) = 5.91. The congruent and incongruent cells of Table 2 are broken down for examination in Table 3. The analyses of variance of these data were not significant. ...
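For readers parsing the statistic above: this is a single-degree-of-freedom planned contrast. The sketch below gives its generic form; the specific weights shown (the high-status congruent cell against the three remaining cells) are an illustrative assumption consistent with the description here, not values reported in the article.

\[
\psi = \sum_{j} c_j \,\bar{Y}_j,
\qquad
F(1,\, df_{\mathrm{error}}) = \frac{\psi^{2}}{MS_{\mathrm{error}} \sum_{j} c_j^{2}/n_j},
\qquad
\text{e.g. } c = (3, -1, -1, -1).
\]

Since the .05 critical value of F with 1 and 74 degrees of freedom is roughly 3.97, the reported F(1, 74) = 5.91 clears it, which is why the contrast is significant at p < .05 even though the omnibus main effects are not.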

Similar publications

Article
Full-text available
Current measures of feminist identity are based on developmental models and cannot be used with men. We introduce and validate a new measure of feminist consciousness, the Feminist Consciousness Scale (FCS) which is based on dominant social psychological theories of politicized social identities, and assesses identity, injustice, and efficacy compo...

Citations

... This is consistent with prior findings in HCI that explanations can increase overreliance [6,33,97,117,132], including explanations generated by LLMs [93,109]. It is also consistent with prior work in psychology, which finds that explanations are often found compelling even when they contain little content [39,70] or content that experts judge irrelevant [46], and that effects of superficial cues on explanation quality are more severe when time and prior knowledge are limited [45,57]. In the absence of effort and expertise, users will inevitably rely on superficial cues to explanation quality, such as fluency [111], a characteristic that LLM explanations typically possess in spades. ...
Preprint
Full-text available
Large language models (LLMs) can produce erroneous responses that sound fluent and convincing, raising the risk that users will rely on these responses as if they were correct. Mitigating such overreliance is a key challenge. Through a think-aloud study in which participants use an LLM-infused application to answer objective questions, we identify several features of LLM responses that shape users' reliance: explanations (supporting details for answers), inconsistencies in explanations, and sources. Through a large-scale, pre-registered, controlled experiment (N=308), we isolate and study the effects of these features on users' reliance, accuracy, and other measures. We find that the presence of explanations increases reliance on both correct and incorrect responses. However, we observe less reliance on incorrect responses when sources are provided or when explanations exhibit inconsistencies. We discuss the implications of these findings for fostering appropriate reliance on LLMs.
... Applied to uncertain situations (see also Proposition 2), not being socially mindful can be a subtle sign that one wishes to keep the self away from other individuals or groups, for a variety of reasons. This sign is subtle because not leaving choice in uncertain or noisy situations that are not sharply defined can still be interpreted as "mindless" in the sense of simply being inattentive to certain determining elements of a situation (Langer, 2014; Langer et al., 1978). Future research could investigate the question of avoidance versus approach motivations in the context of social mindfulness. ...
... and finally (c) precommitment without specificity, a precommitment to a vague action (simply reviewing the project) if a concrete state of the world arises. Beyond clarifying the elements of a precommitment that make it effective, Study 2 also allows us to evaluate the possibility that observers are sometimes sympathetic to any justification for de-escalation (Dolinski & Nawrat, 1998; Langer et al., 1978). ...
... Supporting Hypothesis 2, Study 2 clarifies the conceptual elements of a precommitment that make it effective: conditionality and specificity. In addition, it demonstrates that not just any justification improves trust (as one might reasonably suspect; Dolinski & Nawrat, 1998; Langer et al., 1978). ...
Article
Full-text available
Following through on commitments builds trust. However, blind adherence to a prior course of action can undermine key organizational objectives. How can this challenge be resolved? Four primary experiments and five supplemental experiments (collective N = 7,759, all preregistered) reveal an effective communication strategy: precommitment (i.e., a public pledge to change course conditional on a concrete future state of the world). In the presence (vs. absence) of precommitment, observers deemed decision makers who de-escalated commitment as more trustworthy. This effect held across the roles of the decision makers (entrepreneurs vs. established leaders), the relationship with the decision makers (follower vs. third-party observer), contexts (consumer products vs. infrastructure projects), and measures (perceived integrity vs. incentivized behavior). These benefits for integrity were attenuated when the precommitment was to a vague future action or was not conditional on a concrete future state of the world. Finally, results revealed that precommitment can yield a negative externality: undermining perceived confidence and motivation among followers at a project’s inception. Altogether, our work provides a nuanced perspective on a communication strategy decision makers can use to align short-term personal incentives (i.e., reputation management) and long-term organizational incentives (i.e., value maximization).
... Research studies have shown that transparency averts overtrust in AI [129]. However, other types of explanations, such as justifications, might lead to user overtrust by presenting manipulative information [130]. Also, researchers have warned that too much focus on transparency, especially at the early stages of an AI product, can damage innovation [131]. ...
Article
Full-text available
The increasing use of artificial intelligence (AI) systems in our daily lives through various applications, services, and products highlights the significance of trust and distrust in AI from a user perspective. AI-driven systems have significantly diffused into various aspects of our lives, serving as beneficial “tools” used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust and distrust in AI serve as regulators and could significantly control the level of this diffusion, as trust can increase, and distrust may reduce the rate of adoption of AI. Recently, a variety of studies focused on the different dimensions of trust and distrust in AI and its relevant considerations. In this systematic literature review, after conceptualizing trust in the current AI literature, we will investigate trust in different types of human–machine interaction and its impact on technology acceptance in different domains. Additionally, we propose a taxonomy of technical (i.e., safety, accuracy, robustness) and non-technical axiological (i.e., ethical, legal, and mixed) trustworthiness metrics, along with some trustworthy measurements. Moreover, we examine major trust-breakers in AI (e.g., autonomy and dignity threats) and trustmakers; and propose some future directions and probable solutions for the transition to a trustworthy AI.
... The third method analyzed in the present study as a means to reduce IS was mindfulness. The concept of mindfulness/mindlessness was introduced and developed by Langer (Langer, 1989; Langer et al., 1978; Langer & Moldoveanu, 2000). In one well-known study (Langer et al., 1978), people waiting in a queue for a copy machine were asked to let another person (the experimenter) copy without queuing. ...
... The concept of mindfulness/mindlessness was introduced and developed by Langer (Langer, 1989; Langer et al., 1978; Langer & Moldoveanu, 2000). In one well-known study (Langer et al., 1978), people waiting in a queue for a copy machine were asked to let another person (the experimenter) copy without queuing. The request was either easy or more difficult, and three versions of it were manipulated: with a logical reason, with a placebic one ('because I want to make copies'), or with no reason given. ...
... The request was either easy or more difficult, and three versions of it were manipulated: with a logical reason, with a placebic one ('because I want to make copies'), or with no reason given. Langer et al. (1978) found that when the request was easy, the placebic reason was as effective as the logical one, and interpreted this as the result of activating a mindless state of mind. ...
Article
Full-text available
Three experiments investigated the mechanisms, correlates, and methods of immunization against interrogative suggestibility (IS). IS involves reliance in memory reports on suggestions contained in misleading questions (Yield) and the tendency to change answers under negative feedback about the quality of previous testimony (Shift). All three studies found that the milder version of the tool used in the studies (GSS) resulted in lower Yield and Shift. In analyses considering the memory states of the participants, IS was found to be highest when participants mistakenly attributed the information contained in the suggestive questions to the original material. However, significant percentages of the participants succumbed to suggestions and changed answers even when they were aware of the discrepancy between the original material and the information contained in the questions. The warning against suggestions was found to lower Yield and Shift, and this was especially true when participants were aware of discrepancies between original material and suggestions. Enhancing self-esteem and inducing mindfulness did not reduce IS. The correlations between IS, including IS in individual mindfulness states, with the Big Five personality traits, anxiety, susceptibility to influence, and self-esteem were inconsistent.
... One of the reasons why the topic of AI generated explanations and misinformation remains unexplored is that the use of explanations as a tactic for misinformation goes against the commonly held beliefs that explanations always make AI systems more transparent, trustworthy [16, 17], and fair [18, 19]. While researchers have shown that honest explanations can assist people in determining the veracity of information [20, 21] and improve their decision-making outcomes [22], as well as reduce human overreliance on AI systems [23], research in psychology has demonstrated that even poor explanations can significantly impact people's actions and beliefs [24,25,26]. This implies that the mere presence of an explanation can lead to changes in beliefs and behavior, regardless of its quality or veracity. ...
Preprint
Full-text available
Advanced Artificial Intelligence (AI) systems, specifically large language models (LLMs), have the capability to generate not just misinformation, but also deceptive explanations that can justify and propagate false information and erode trust in the truth. We examined the impact of deceptive AI generated explanations on individuals' beliefs in a pre-registered online experiment with 23,840 observations from 1,192 participants. We found that in addition to being more persuasive than accurate and honest explanations, AI-generated deceptive explanations can significantly amplify belief in false news headlines and undermine true ones as compared to AI systems that simply classify the headline incorrectly as being true/false. Moreover, our results show that personal factors such as cognitive reflection and trust in AI do not necessarily protect individuals from these effects caused by deceptive AI generated explanations. Instead, our results show that the logical validity of AI generated deceptive explanations, that is whether the explanation has a causal effect on the truthfulness of the AI's classification, plays a critical role in countering their persuasiveness - with logically invalid explanations being deemed less credible. This underscores the importance of teaching logical reasoning and critical thinking skills to identify logically invalid arguments, fostering greater resilience against advanced AI-driven misinformation.
... In this case, the less syntactically complex sentence would be seen as less polite while the more syntactically complex sentence would be seen as more polite. In a study that was not about politeness, researchers found that the complexity of a request can have an impact on the likelihood of compliance, with complex requests being more successful [26]. Although linguistic complexity often co-occurs with the presence of politeness devices, the factors have not been studied independently, such as by creating politeness expressions that match on linguistic complexity. ...
... At the same time that face-saving structures and request structures may influence politeness, the two together might also influence politeness. In the imposition study, Langer et al. [26] found that when the level of imposition was low, the complex requests and the complex-with-additional-information requests led to similar amounts of compliance. However, when the imposition was higher, the complex request with additional information was the most successful. ...
Article
Full-text available
We examined how politeness perception can change when used by a human or voice assistant in different contexts. We conducted two norming studies and two experiments. In the norming studies, we assessed the levels of positive politeness (cooperation) and negative politeness (respecting autonomy) conveyed by a range of politeness strategies across task (Norming Study 1) and social (Norming Study 2) request types. In the experiments, we tested the effect of request type and imposition level on the perception of written requests (Experiment 1) and requests spoken by a voice assistant (Experiment 2). We found that the perception of politeness strategies varied by request type. Positive politeness strategies were rated as very polite with task requests. In contrast, both positive and negative politeness strategies were rated as very polite with social requests. We also found that people expect agents to respect their autonomy more than they expect them to cooperate. Detailed studies of how request context interacts with politeness strategies to affect politeness perception have not previously been reported. Technology designers might find Tables 4 and 5 in this report especially useful for determining what politeness strategies are most appropriate for a given situation as well as what politeness strategies will evoke the desired feeling (autonomy or cooperation).
... Careful and systematic evaluation of arguments requires knowledge, skills, and practice, but also time and effort. That is why people commonly put less effort into argument processing, relying on mental shortcuts and heuristics to quickly judge a message's persuasive or argumentative value (Langer, Blank, and Chanowitz 1978; Eagly and Chaiken 1984; Petty and Cacioppo 1986). When the interlocutor's motivation or ability to evaluate the arguments is low or their resources are limited, heuristic processing is more likely. ...
... Usually, we are forced to evaluate arguments in a much quicker, almost instantaneous manner based on some subsidiary and reasonably efficient criteria. In a famous experiment, Ellen Langer, Blank, and Chanowitz (1978) asked participants to approach people waiting in line to use a photocopier and ask if they could cut in. Participants used different phrases to formulate their request (to make five copies), which eventually produced different results. ...
Article
Full-text available
The paper discusses the role of systemic means of persuasion in argument evaluation. The core class of systemic means of persuasion is regress stoppers, whose fundamental function is to halt the infinite regress of justification by making claims more acceptable. The paper explores how systemic means of persuasion relate to the structure of arguments in the Toulmin model and function as persuasion cues that are typically processed heuristically. The study includes stylometric analysis and statistical data from three corpora, revealing these means as complementary to explicit argumentation. Observations and examples are drawn from an original corpus of competitive debates.
... On the other hand, the Western science approach to mindfulness has long advanced the idea that the human mind is a computer-like data-processing entity in which language plays a pivotal role in sorting out external raw data and giving them appropriate meanings (see Weick and Putnam 2006, Weick and Sutcliffe 2006, Reb and Choi 2014). In this stream of research, mindfulness is defined as the cognitive-linguistic process of attending to external stimuli (i.e., sight, smell, taste, texture, and sound) for the purpose of better judgments (Langer et al. 1978; Langer 1989, 2014). All sensory experiences are encoded, organized, and filtered through existing linguistic categories and then given specific meanings, such as 'safe' or 'dangerous', 'ugly' or 'beautiful', 'good' or 'bad', and 'right' or 'wrong.' ...
Article
Full-text available
This study explores how Buddhist mindfulness as a self-reflective practice helps individuals respond to a paradox and ultimately dismantle it. To deeply immerse myself into this context, I conducted a nine-month ethnographic fieldwork in three Korean Buddhist temples that confront the paradox between the need for financial resources and spiritual values that disavow money. The findings show a series of cognitive mechanisms that reveal multiple roles of mindfulness, manifested as silence and skepticism of language. First, the monastic environment enables monks to become familiar with a life of silence that turns their attention to the inner mind from the external-empirical world. The silence serves as a mental buffer when monks switch between their sacred role and their business role. Over time, deep silence directs them to skepticism of language that triggers doubt on preexisting linguistic categories, boundaries, and separations. When the preexisting linguistic categories finally disappear in their mind, monks no longer rely on any differentiating or integrating tactic to navigate their paradox. In other words, they no longer perceive a paradox, which means the paradox has disappeared from their life. These cognitive mechanisms construct the monks’ worldview on contradictions, conflicts, and dualities, leading them from the experience of paradox to a unique mental state, the nonexperience of paradox. Integrating this mental state and the worldview of Buddhist monks with paradox research, this study theorizes a Buddhist mindfulness view of paradox. Funding: This work was supported by Chulalongkorn University.
... 31 In terms of origins, some pitfalls are a consequence of uncritical acceptance of explanations. Langer et al. 36 point out that people are likely to accept explanations without conscious attention if no effortful thinking is required from them. In Kahneman's dual-process theory 27 terms, this means that if we do not invoke mindful and deliberative (system 2) thinking with explanations, we increase the likelihood of uncritical consumption. ...
... In Kahneman's dual-process theory 27 terms, this means that if we do not invoke mindful and deliberative (system 2) thinking with explanations, we increase the likelihood of uncritical consumption. To trigger mindfulness, Langer et al. 36 recommend designing for "effortful responses" or "thoughtful responding." To help with mindfulness, we can incorporate the lenses of seamful design, 37 which emphasize configurability, agency, appropriation, and revelation of complexity. ...
Article
Full-text available
To make explainable artificial intelligence (XAI) systems trustworthy, understanding harmful effects is important. In this paper, we address an important yet unarticulated type of negative effect in XAI. We introduce explainability pitfalls (EPs), unanticipated negative downstream effects from AI explanations manifesting even when there is no intention to manipulate users. EPs are different from dark patterns, which are intentionally deceptive practices. We articulate the concept of EPs by demarcating it from dark patterns and highlighting the challenges arising from uncertainties around pitfalls. We situate and operationalize the concept using a case study that showcases how, despite best intentions, unsuspecting negative effects, such as unwarranted trust in numerical explanations, can emerge. We propose proactive and preventative strategies to address EPs at three interconnected levels: research, design, and organizational. We discuss design and societal implications around reframing AI adoption, recalibrating stakeholder empowerment, and resisting the “move fast and break things” mindset.