Article

Chatbots or Humans? Effects of Agent Identity and Information Sensitivity on Users’ Privacy Management and Behavioral Intentions: A Comparative Experimental Study between China and the United States

Taylor & Francis
International Journal of Human-Computer Interaction

... Internet platforms use these data for further analysis to obtain as complete a user profile as possible (Tang et al., 2013). These technological developments have disturbed individual privacy boundaries and failed to satisfy personal privacy preferences (Choi & Jerath, 2022; Liu et al., 2023). ...
... Perceived privacy concerns also significantly influence an individual's willingness to disclose information (Nikkhah & Sabherwal, 2022). The higher the degree of concern about privacy issues, the lower the willingness to disclose information (Bansal et al., 2010; Kang et al., 2022; Liu et al., 2023; Xu et al., 2013). ...
... We used the k-means algorithm to cluster the information categories into five groups and named them according to the characteristics of the information items within each group: "less sensitive social attributes," "potential risk information," "consumption trace," "individual action details," and "health and social sensitive." As shown in Figure 5, the five categories of information sensitivity range from low to high, and the identifiability of users' real identities rises consistently with sensitivity: the higher the sensitivity of the information, the less likely users are to disclose it, which is consistent with the findings of previous studies (Bansal et al., 2010; Kang et al., 2022; Lee et al., 2015; Liu et al., 2023; Markos et al., 2017; Xu et al., 2013). Likewise, the higher the sensitivity of the information, the less likely users are to continue using the Internet platform after being forced to disclose it, which is consistent with previous studies (Li et al., 2016, 2023; Liu et al., 2023; Xu, 2019). ...
Article
Full-text available
The online environment has evolved with the development of emerging information technologies. In response to rising voices discussing the boundaries of collecting and using user data on platforms, this study explored Chinese Internet users' information sensitivity as an indicator of data classification governance. This study employed a two‐stage research approach. First, 60 types of information that users disclose to Internet platforms in the era of big data and artificial intelligence (AI) were identified. Biometric identification, travel records, preference, trace information, and other information reflecting the characteristics of network collection in the era of big data and AI were also included. Second, based on 397 questionnaires, the information categories were clustered into five groups: less‐sensitive social attributes, consumption traces, individual action details, potential risk information, and health and social sensitivity. Of the total disclosed information types, 61.7% were perceived as highly sensitive by Chinese users in the current Internet environment; the higher the sensitivity of the information, the less likely users were to disclose it and use the online platform. Moreover, newly added information types have a high sensitivity. These findings provide insights into the policy design and governance of Internet platform data collection and usage practices in the era of big data and AI.
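The clustering step this study describes can be illustrated with a short sketch: k-means (k = 5) over per-item sensitivity ratings. This is a hypothetical reconstruction; the paper does not publish its code, and the two features and simulated data below are assumptions for illustration only.

```python
# Hypothetical sketch of the study's clustering step: grouping 60 information
# types by aggregated user ratings with k-means (k = 5).
# Features and data are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 60 information types x 2 assumed features:
# mean sensitivity rating and mean willingness-to-disclose rating (1-7 scale)
features = rng.uniform(1, 7, size=(60, 2))

X = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

for cluster_id in range(5):
    members = np.where(kmeans.labels_ == cluster_id)[0]
    print(f"Cluster {cluster_id}: {len(members)} information types")
```

In the study itself, each resulting cluster was then named after the shared characteristics of its member items (e.g., "consumption traces").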
... The scope of traditional research in this field has expanded to include the communication fidelity of chatbots, the degree to which they mimic real human conversations, and the subjective nature of these exchanges from the user's perspective. Starting from basic functional assessments, we explore more nuanced components of UX, including emotional resonance, situational awareness, and depth of perception [9], [10]. Each of these features helps create engaging experiences for users. ...
... Recent developments in artificial intelligence have drawn attention to the importance of incorporating new ideas about user satisfaction and engagement in different cultural contexts, where cultural heritage plays a crucial role in user experience [6]. The debate has arguably evolved from a binary understanding of usability to a spectrum exploring how the user experience (UX) of chatbots conveys emotional and cognitive dimensions, opening the door to deeper research into cultural differences in users' opening interactions with chatbots [9]. ...
Article
Full-text available
Chatbots have evolved into sophisticated conversational agents, significantly reshaping human-computer interaction through their ability to simulate human-like conversation. This paper explores the evolution of chatbots, focusing on third-generation agents capable of engaging in complex dialogues and grasping context. Traditionally, research has centered on evaluating the fidelity of chatbot communication and users’ perception of interaction quality. However, recent discourse has emphasized the importance of user experience (UX), including emotional resonance and contextual awareness. Cultural background has also emerged as a significant determinant of UX, necessitating an examination of cultural differences in chatbot interactions. This study investigates the interaction dynamics between non-anthropomorphic chatbots and users from diverse cultural backgrounds. Through empirical study, we reveal how cultural subtleties impact UX design principles and user approval, and we put forward ideas for cross-cultural optimization. Our research advances our knowledge of how chatbots engage with people from different cultural backgrounds and provides useful guidance for creating chatbots that are sensitive to cultural differences.
... For instance, algorithmic errors or system vulnerabilities can lead to infringement issues, as seen in traffic accidents involving autonomous vehicles, where liability attribution remains contentious [34]. Additionally, concerns around data ownership and privacy breaches are increasingly prominent [35]. Trust deviations at any stage of the human-machine interaction process can severely impact decision outcomes, undermining employees' confidence in the technology. ...
Article
Full-text available
Intelligent and virtualized transformation of the enterprise requires employees to undertake both technological breakthroughs and in-depth development, two innovative activities that seem to compete for resources. How employees, guided by algorithmically integrated decision-making, can use massive information and technology to catalyze creativity is not yet known. Based on research data from 198 corporate innovators, the enabling mechanisms of human-machine collaborative decision-making for dualistic innovations are examined. The study finds that human-machine collaborative decision-making positively promotes both exploratory and exploitative innovations; it stimulates both types of dualistic innovation by enhancing employees' human-machine trust; and corporate innovation culture positively moderates the direct effect of human-machine collaborative decision-making on dualistic innovations. The findings broaden the application of human-machine cooperation theory in the field of management and provide new ideas for enterprises to accurately identify employees' attitudes and tendencies toward human-machine cooperation and formulate targeted strategies to stimulate dualistic innovations.
... In light of these developments, IAM is becoming increasingly crucial for enterprises. IAM ensures controlled access to sensitive data and user privacy by regulating who can access what information and resources within an organization's systems and applications (Aboukadri, Ouaddah, and Mezrioui 2024;Liu et al. 2023). ...
Article
Purpose This study aims to explain the privacy paradox, wherein individuals, despite privacy concerns, are willing to share personal information while using AI chatbots. Departing from previous research that primarily viewed AI chatbots from a non-anthropomorphic approach, this paper contends that AI chatbots are taking on an emotional component for humans. This study thus explores this topic by considering both rational and non-rational perspectives, thereby providing a more comprehensive understanding of user behavior in digital environments. Design/methodology/approach Employing a questionnaire survey (N = 480), this research focuses on young users who regularly engage with AI chatbots. Drawing upon the parasocial interaction theory and privacy calculus theory, the study elucidates the mechanisms governing users’ willingness to disclose information. Findings Findings show that cognitive, emotional and behavioral dimensions all positively influence perceived benefits of using ChatGPT, which in turn enhances privacy disclosure. While cognitive, emotional and behavioral dimensions negatively impact perceived risks, only the emotional and behavioral dimensions significantly affect perceived risk, which in turn negatively influences privacy disclosure. Notably, the cognitive dimension’s lack of significant mediating effect suggests that users’ awareness of privacy risks does not deter disclosure. Instead, emotional factors drive privacy decisions, with users more likely to disclose personal information based on positive experiences and engagement with ChatGPT. This confirms the existence of the privacy paradox. Research limitations/implications This study acknowledges several limitations. While the sample was adequately stratified, the focus was primarily on young users in China. Future research should explore broader demographic groups, including elderly users, to understand how different age groups engage with AI chatbots. Additionally, although the study was conducted within the Chinese context, the findings have broader applicability, highlighting the potential for cross-cultural comparisons. Differences in user attitudes toward AI chatbots may arise due to cultural variations, with East Asian cultures typically exhibiting a more positive attitude toward social AI systems compared to Western cultures. This cultural distinction, rooted in Eastern philosophies such as animism in Shintoism and Buddhism, suggests that East Asians are more likely to anthropomorphize technology, unlike their Western counterparts (Yam et al., 2023; Folk et al., 2023). Practical implications The findings of this study offer valuable insights for developers, policymakers and educators navigating the rapidly evolving landscape of intelligent technologies. First, regarding technology design, the study suggests that AI chatbot developers should not focus solely on functional aspects but also consider emotional and social dimensions in user interactions. By enhancing emotional connection and ensuring transparent privacy communication, developers can significantly improve user experiences (Meng and Dai, 2021). Second, there is a pressing need for comprehensive user education programs. As users tend to prioritize perceived benefits over risks, it is essential to raise awareness about privacy risks while also emphasizing the positive outcomes of responsible information sharing. This can help foster a more informed and balanced approach to user engagement (Vimalkumar et al., 2021).
Third, cultural and ethical considerations must be incorporated into AI chatbot design. In collectivist societies like China, users may prioritize emotional satisfaction and societal harmony over privacy concerns (Trepte, 2017; Johnston, 2009). Developers and policymakers should account for these cultural factors when designing AI systems. Furthermore, AI systems should communicate privacy policies clearly to users, addressing potential vulnerabilities and ensuring that users are aware of the extent to which their data may be exposed (Wu et al., 2024). Lastly, as AI chatbots become deeply integrated into daily life, there is a growing need for societal discussions on privacy norms and trust in AI systems. This research prompts a reflection on the evolving relationship between technology and personal privacy, especially in societies where trust is shaped by cultural and emotional factors. Developing frameworks to ensure responsible AI practices while fostering user trust is crucial for the long-term societal integration of AI technologies (Nah et al., 2023). Originality/value The study’s findings not only draw deeper theoretical insights into the role of emotions in generative artificial intelligence (gAI) chatbot engagement, enriching the emotional research orientation and framework concerning chatbots, but they also contribute to the literature on human–computer interaction and technology acceptance within the framework of the privacy calculus theory, providing practical insights for developers, policymakers and educators navigating the evolving landscape of intelligent technologies.
Article
China enacted its first Personal Information Protection Law (PIPL) on 1 November 2021. However, there is a dearth of systematic research examining the implementation of new privacy policies exercised by digital platforms and user engagement with these policies. This study establishes a triple-layered comparative approach to explore the complexities and particularities of privacy policy practices in Chinese digital platforms. The methodology encompasses the analysis of privacy policies from representative platforms—WeChat, Taobao, and Douyin—alongside user experience garnered through a walkthrough method and insights from 28 interviews with platform users. Through critical discourse analysis, the research revealed that state-dominant policy discourses were ingrained in the formulation of platform privacy regulations to legitimize their authority over user data ownership. The users perceived a strong sense of passive protection, characterized by the rigid “agreement” discourse practices that underscore their vulnerability in everyday digital platform usage. The findings shed light on intricate power dynamics at play between platforms, their privacy policies, and users, which leads to polarized reactions from users toward privacy concerns. By examining the articulation of digital privacy policies as instruments of statecraft, we offer a nuanced view of describing non-Western experiences of privacy values and regulatory practices in the digital age.
Article
Full-text available
The integration of chatbots in the financial sector has significantly improved customer service processes, providing efficient solutions for query management and problem resolution. These automated systems have proven to be valuable tools in enhancing operational efficiency and customer satisfaction in financial institutions. This study aims to conduct a systematic literature review on the impact of chatbots in customer service within the financial sector. A review of 61 relevant publications from 2018 to 2024 was conducted. Articles were selected from databases such as Scopus, IEEE Xplore, ARDI, Web of Science, and ProQuest. The findings highlight that efficiency and customer satisfaction are central to the perception of service quality, aligning with the automation of the user experience. The bibliometric analysis reveals a predominance of publications from countries such as India, Germany, and Australia, underscoring the academic and practical relevance of the topic. Additionally, essential thematic terms such as “artificial intelligence” and “advanced automation” were identified, reflecting technological evolution in this field. This study provides significant insights for future theoretical, practical, and managerial developments, offering a framework to optimize chatbot implementation in highly regulated environments.
Article
Full-text available
Purpose This study aimed to explore the impact of Artificial Intelligence (AI) characteristics, namely perceived animacy (PAN), perceived intelligence (PIN), and perceived anthropomorphism (PAI), on user satisfaction (ESA) and continuous intentions (CIN) by integrating Expectation Confirmation Theory (ECT), with a particular focus on Generations Y and Z. Design/methodology/approach Using a quantitative method, the study collected 495 responses from Gen Y (204) and Gen Z (291) users of digital banking apps through structured questionnaires, which were analysed using PLS-SEM. This analysis investigated the driving forces of AI characteristics and user behavioural intentions and revealed generation-specific features of digital banking engagement. Findings The study revealed that PAN and PIN have significant positive effects on the anthropomorphic perceptions of digital banking apps, which in turn increase perceived usefulness, satisfaction, and continuous intentions. In particular, the influence of these AI attributes varies across generations; Gen Y’s loyalty is mostly based on the benefits derived from AI features, whereas Gen Z places a greater value on the anthropomorphic factor of AI. This marks a generational shift in the demand for digital banking services. Research limitations/implications The specificity of Indian Gen Y and Z users defines the scope of this study, suggesting that demographic and geographical boundaries can be broadened in future AI-related banking research. Practical implications The results have important implications for bank executives and policymakers in developing AI-supported digital banking interfaces that appeal to the unique tastes of younger customers, emphasising the importance of personalising AI functionalities to enhance user participation and loyalty. Originality/value This study enriches the digital banking literature by combining AI attributes with ECT, offering a granular understanding of AI’s role in modulating young consumers' satisfaction and continuance intentions. It underscores the strategic imperative of AI in cultivating compelling and loyalty-inducing digital banking environments tailored to the evolving expectations of Generations Y and Z.
Article
Full-text available
This study explored digital privacy concerns in the use of chatbots as a digital banking service. Three dimensions of trust were tested in relation to user self-disclosure in order to better understand the consumer-chatbot experience in banking. The methodology selected for this research study followed a conclusive, pre-experimental, two-group one-shot case study research design which made use of a non-probability snowballing sampling technique. Privacy concerns were found to have a significantly negative relationship with user self-disclosure in both treatment groups. Respondents exposed to their preferred banking brand experienced lower user self-disclosure and brand trust than those exposed to a fictitious banking brand within the South African context. It is recommended that companies using chatbots focus on easing privacy concerns and build foundations of trust. The gains that chatbots have made in the form of increased productivity and quality of customer service rely on relationships with users who need to disclose personal information. Through this study, we concluded that, despite its power to influence decision-making, the power of a brand is not enough for consumers to considerably increase self-disclosure. Rather, a bridge of trust (through education, communication and product development) is needed that encompasses all three elements of trust, which are brand trust, cognitive trust and emotional trust. Limited research exists on the relationship between financial services marketing and chatbot adoption. Thus, this study addressed a theoretical gap, by adding brand trust to existing studies on cognitive and emotional trust regarding user self-disclosure.
Article
Full-text available
Several technological developments, such as self-service technologies and artificial intelligence (AI), are disrupting the retailing industry by changing consumption and purchase habits and the overall retail experience. Although AI represents extraordinary opportunities for businesses, companies must avoid the dangers and risks associated with the adoption of such systems. Integrating perspectives from emerging research on AI, morality of machines, and norm activation, we examine how individuals morally behave toward AI agents and self-service machines. Across three studies, we demonstrate that consumers’ moral concerns and behaviors differ when interacting with technologies versus humans. We show that moral intention (intention to report an error) is less likely to emerge for AI checkout and self-checkout machines compared with human checkout. In addition, moral intention decreases as people consider the machine less humanlike. We further document that the decline in morality is caused by less guilt displayed toward new technologies. The non-human nature of the interaction evokes a decreased feeling of guilt and ultimately reduces moral behavior. These findings offer insights into how technological developments influence consumer behaviors and provide guidance for businesses and retailers in understanding moral intentions related to the different types of interactions in a shopping environment.
Article
Full-text available
Algorithmic profiling has become increasingly prevalent in many social fields and practices, including finance, marketing, law, cultural consumption and production, and social engagement. Although researchers have begun to investigate algorithmic profiling from various perspectives, socio-technical studies of algorithmic profiling that consider users' everyday perceptions are still scarce. In this article, we expand upon existing user-centered research and focus on people's awareness and imaginaries of algorithmic profiling, specifically in the context of social media and targeted advertising. We conducted an online survey geared toward understanding how Facebook users react to and make sense of algorithmic profiling when it is made visible. The methodology relied on qualitative accounts as well as quantitative data from 292 Facebook users in the United States and their reactions to their algorithmically inferred 'Your Interests' and 'Your Categories' sections on Facebook. The results illustrate a broad set of reactions and rationales to Facebook's (public-facing) algorithmic profiling, ranging from shock and surprise to accounts of how superficial, and in some cases inaccurate, the profiles were. Taken together with the increasing reliance on Facebook as critical social infrastructure, our study highlights a sense of algorithmic disillusionment requiring further research.
Article
Full-text available
In this study, we extended and tested the privacy calculus framework in the context of a hypothetical AI-based contact-tracing technology for application during the COVID-19 pandemic that is based on the communication privacy management and contextual integrity theories. Specifically, we investigated how the perceived privacy risks and benefits of information disclosure affect the public’s willingness to opt in and adopt contact-tracing technologies and how social and contextual factors influence their decision-making process. Four hundred eighteen adults in the United States participated in the study via Amazon Mechanical Turk in August 2020. A percentile bootstrap method with 5,000 resamples and bias-corrected 95% confidence intervals in structural equation modeling was used for data analysis. The participants’ privacy concerns and perceived benefits significantly influenced their opt-in and adoption intentions, which suggests that the privacy calculus framework applies to the context of COVID-19 contact-tracing technologies. Perceived social, personal, and reciprocal benefits were identified as crucial mediators that link contextual variables to both opt-in and adoption intentions. Although this study was based on a hypothetical AI-based contact-tracing app, our findings provide meaningful theoretical and practical implications for future research investigating the public’s technology adoption in contexts where tradeoffs between privacy risks and public health coexist.
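The bootstrap procedure this study reports (5,000 resamples in SEM) can be sketched for a single indirect effect. The sketch below is a simplification under stated assumptions: simulated data, invented variable names, and a plain percentile interval rather than the bias-corrected interval the authors used.

```python
# Minimal sketch of a percentile bootstrap (5,000 resamples) for one
# mediation path (contextual variable -> perceived benefit -> adoption
# intention). Variables and data are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 418
x = rng.normal(size=n)                      # contextual variable
m = 0.5 * x + rng.normal(size=n)            # perceived benefit (mediator)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)  # adoption intention

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]              # slope of m on x
    # slope of y on m, controlling for x, via least squares
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a * b

boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)             # resample cases with replacement
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

If the interval excludes zero, the mediated path is deemed significant, which is the logic behind the study's mediator findings.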
Article
Full-text available
Based on the theoretical framework of agency effect, this study examined the role of affect in influencing the effects of chatbot versus human brand representatives in the context of health marketing communication about HPV vaccines. We conducted a 2 (perceived agency: chatbot vs. human) × 3 (affect elicitation: embarrassment, anger, neutral) between-subject lab experiment with 142 participants, who were randomly assigned to interact with either a perceived chatbot or a human representative. Key findings from self-reported and behavioral data highlight the complexity of consumer–chatbot communication. Specifically, participants reported lower interaction satisfaction with the chatbot than with the human representative when anger was evoked. However, participants were more likely to disclose concerns of HPV risks and provide more elaborate answers to the perceived human representative when embarrassment was elicited. Overall, the chatbot performed comparably to the human representative in terms of perceived usefulness and influence over participants' compliance intention in all emotional contexts. The findings complement the Computers as Social Actors paradigm and offer strategic guidelines to capitalize on the relative advantages of chatbot versus human representatives.
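A conventional analysis for a 2 (agency) × 3 (affect) between-subjects design like this one is a two-way ANOVA on the outcome measures. The sketch below is illustrative only, with simulated data and assumed column names, not the authors' materials or results.

```python
# Illustrative two-way ANOVA for a 2 x 3 between-subjects experiment on
# interaction satisfaction. All data and names are simulated assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)
n = 142
df = pd.DataFrame({
    "agency": rng.choice(["chatbot", "human"], size=n),
    "affect": rng.choice(["embarrassment", "anger", "neutral"], size=n),
    "satisfaction": rng.normal(5, 1, size=n),
})

# Main effects plus the agency x affect interaction reported in such designs
model = ols("satisfaction ~ C(agency) * C(affect)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

A significant interaction term would correspond to findings like the one above, where satisfaction with the chatbot dropped only when anger was evoked.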
Article
Full-text available
Smart speakers can transform interactions with users into retrievable data, posing new challenges to privacy management. Privacy management in smart speakers can be more complex than just making decisions about disclosure based on the risk–benefit analysis. Hence, this study attempts to integrate privacy self-efficacy and the multidimensional view of privacy management behaviors into the privacy calculus model and proposes an extended privacy calculus model for smart speaker usage. The study explicates three types of privacy management strategies in smart speaker usage: privacy disclosure, boundary linkage, and boundary control. A survey of smart speaker users (N = 474) finds that perceived benefits are positively associated with privacy disclosure and boundary linkage, whereas perceived privacy risks are negatively related to these two strategies. Also, perceived privacy risks are positively related to boundary control. Finally, privacy self-efficacy promotes all three strategies while mitigating the impact of perceived privacy risks and boosting the impact of perceived benefits on privacy management.
Article
Full-text available
This study examined how and when a chatbot’s emotional support was effective in reducing people’s stress and worry. It compared emotional support from chatbot versus human partners in terms of its process and conditional effects on stress/worry reduction. In an online experiment, participants discussed a personal stressor with a chatbot or a human partner who provided none, or either one or both, of emotional support and reciprocal self-disclosure. The results showed that emotional support from a conversational partner was mediated through perceived supportiveness of the partner to reduce stress and worry among participants, and the link from emotional support to perceived supportiveness was stronger for a human than for a chatbot. A conversational partner’s reciprocal self-disclosure enhanced the positive effect of emotional support on worry reduction. However, when emotional support was absent, a solely self-disclosing chatbot reduced even less stress than a chatbot not providing any response to participants’ stress. Lay Summary In recent years, AI chatbots have increasingly been used to provide empathy and support to people who are experiencing stressful times. This study compared emotional support from a chatbot with that from a human supporter. We were interested in examining which approach could most effectively reduce people’s worry and stress. When either a person or a chatbot was able to engage with a stressed individual and tell that individual about their own experiences, they were able to build rapport. We found that this type of reciprocal self-disclosure was effective in calming the worry of the individual. Interestingly, if a chatbot only reciprocally self-disclosed but offered no emotional support, the outcome was worse than if the chatbot did not respond to people at all. This work will help in the development of supportive chatbots by providing insights into when and what they should self-disclose.
Article
Full-text available
Chatbots are technological tools equipped with artificial intelligence that allow companies to interact with their consumers. Through their computers or mobile devices, consumers can use this technology to search for information, make purchases or request after-sales services. This study aims to identify the role of attitude toward chatbots and privacy concern in the relationship between attitude toward mobile advertising and behavioral intent to use chatbots. After reviewing the literature, the study proposes a moderated mediation model. Through a survey, the study shows that attitude toward mobile advertising does not have a direct effect on the behavioral intent to use chatbot, but is rather mediated by one’s attitude toward chatbots. In fact, the interactivity is unidirectional in the case of mobile advertising (from the company to the consumer), but bidirectional in the case of chatbots (in which consumers have an active role in communication). In line with these assumptions, the data analysis shows that internet privacy concerns only negatively moderate the relationship between attitude toward chatbots and behavioral intent to use this technology. These results can be useful for companies and researchers in terms of developing and testing new digital marketing strategies. The paper concludes with a discussion of the results’ theoretical and managerial implications.
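The moderated mediation model this study proposes can be approximated with two regressions in the style of Hayes's PROCESS approach: a mediator model and an outcome model with a moderator interaction. Everything below (variable names, simulated effect sizes) is an assumption for illustration, not the authors' estimation.

```python
# Hedged sketch of the moderated mediation logic: attitude toward mobile
# advertising -> attitude toward chatbots (mediator) -> use intention, with
# privacy concern moderating the mediator-to-outcome path. Simulated data.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "ad_attitude": rng.normal(size=n),       # attitude toward mobile ads
    "privacy_concern": rng.normal(size=n),   # internet privacy concern
})
df["chatbot_attitude"] = 0.6 * df["ad_attitude"] + rng.normal(size=n)
df["use_intent"] = (0.5 * df["chatbot_attitude"]
                    - 0.2 * df["chatbot_attitude"] * df["privacy_concern"]
                    + rng.normal(size=n))

mediator_model = ols("chatbot_attitude ~ ad_attitude", data=df).fit()
outcome_model = ols(
    "use_intent ~ ad_attitude + chatbot_attitude * privacy_concern",
    data=df).fit()
print(outcome_model.params)  # the interaction term tests the moderation
```

A significant negative interaction coefficient would mirror the paper's finding that privacy concerns weaken the link between chatbot attitude and behavioral intent.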
Article
Full-text available
Voice shopping is becoming increasingly popular among consumers due to the ubiquitous presence of artificial intelligence (AI)-based voice assistants in our daily lives. This study explores how personality, trust, privacy concerns, and prior experiences affect customer experience performance perceptions and the combinations of these factors that lead to high customer experience performance. Goldberg’s Big Five Factors of personality, a contextualized theory of reasoned action (TRA-privacy), and recent literature on customer experience are used to develop and propose a conceptual research model. The model was tested using survey data from 224 US-based voice shoppers. The data were analyzed using partial least squares structural equation modelling (PLS-SEM) and fuzzy-set qualitative comparative analysis (fsQCA). PLS-SEM revealed that trust and privacy concerns mediate the relationship between personality (agreeableness, emotional instability, and conscientiousness) and voice shoppers’ perceptions of customer experience performance. FsQCA reveals the combinations of these factors that lead to high perceptions of customer experience performance. This study contributes to voice shopping literature, which is a relatively understudied area of e-commerce research yet an increasingly popular shopping method.
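The paper estimates its model with PLS-SEM and fsQCA. As a rough approximation of the path logic only (not the actual PLS algorithm or fsQCA), one can average indicator items into composite scores and fit OLS paths; every name and data point below is hypothetical.

```python
# Rough composite-score approximation of the mediated paths: personality
# (agreeableness) -> trust -> customer experience performance.
# Items and data are invented for the sketch; this is not PLS-SEM proper.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

rng = np.random.default_rng(4)
n = 224
items = {f"agree_{i}": rng.normal(size=n) for i in range(1, 4)}
items.update({f"trust_{i}": rng.normal(size=n) for i in range(1, 4)})
items.update({f"cx_{i}": rng.normal(size=n) for i in range(1, 4)})
df = pd.DataFrame(items)

# Composites: mean of each construct's items (PLS would use weighted scores)
df["agreeableness"] = df[[f"agree_{i}" for i in range(1, 4)]].mean(axis=1)
df["trust"] = df[[f"trust_{i}" for i in range(1, 4)]].mean(axis=1)
df["cx_performance"] = df[[f"cx_{i}" for i in range(1, 4)]].mean(axis=1)

print(ols("trust ~ agreeableness", data=df).fit().params)
print(ols("cx_performance ~ trust + agreeableness", data=df).fit().params)
```

In the study itself, trust and privacy concerns sat between personality traits and experience perceptions; the composite-and-paths structure above shows where such mediation would be read off.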
Article
Full-text available
Intelligent Virtual Assistants (IVA) such as Apple Siri and Google Assistant are increasingly being used to assist users with performing different tasks. However, their characteristics also raise user privacy concerns related to the provision of information to the IVA. Drawing upon communication privacy management theory, two experiments were conducted to investigate the impact of information sensitivity, type of IVA (anthropomorphized versus objectified), and role of IVA (servant versus partner) on privacy concerns and user willingness to disclose information to IVA. Study 1 showed that information sensitivity and anthropomorphism significantly impact user privacy concerns. Study 2 revealed that if highly sensitive information was required, a partner IVA would trigger greater privacy concerns, while in low-sensitivity information contexts it would evoke a more secure feeling than a servant IVA. Subsequent theoretical and managerial implications of these studies are discussed accordingly.
Article
Full-text available
Artificially intelligent (AI) agents increasingly occupy roles once served by humans in computer-mediated communication (CMC). Technological affordances like emoji give interactants (humans or bots) the ability to partially overcome the limited nonverbal information in CMC. However, despite the growth of chatbots as conversational partners, few CMC and human-machine communication (HMC) studies have explored how bots’ use of emoji impact perceptions of communicator quality. This study examined the relationship between emoji use and observers’ impressions of interpersonal attractiveness, CMC competence, and source credibility; and whether impressions formed of human versus chatbot message sources were different. Results demonstrated that participants rated emoji-using chatbot message sources similarly to human message sources, and both humans and bots are significantly more socially attractive, CMC competent, and credible when compared to verbal-only message senders. Results are discussed with respect to the CASA paradigm and the human-to-human interaction script framework.
Article
Full-text available
Conversational agents are increasingly becoming integrated into everyday technologies and can collect large amounts of data about users. As these agents mimic interpersonal interactions, we draw on communication privacy management theory to explore people's privacy expectations with conversational agents. We conducted a 3 × 3 factorial experiment in which we manipulated agents' social interactivity and data sharing practices to understand how these factors influence people's judgments about potential privacy violations and their evaluations of agents. Participants perceived agents that shared response data with advertisers more negatively compared to agents that shared such data with only their companies; perceptions of privacy violations did not differ between agents that shared data with their companies and agents that did not share information at all. Participants also perceived the socially interactive agent's sharing practices less negatively than those of the other agents, highlighting a potential privacy vulnerability that users are exposed to in interactions with socially interactive conversational agents.
Article
Full-text available
Artificial intelligence (AI) and people’s interactions with it—through virtual agents, socialbots, and language-generation software—do not fit neatly into paradigms of communication theory that have long focused on human–human communication. To address this disconnect between communication theory and emerging technology, this article provides a starting point for articulating the differences between communicative AI and previous technologies and introduces a theoretical basis for navigating these conditions in the form of scholarship within human–machine communication (HMC). Drawing on an HMC framework, we outline a research agenda built around three key aspects of communicative AI technologies: (1) the functional dimensions through which people make sense of these devices and applications as communicators, (2) the relational dynamics through which people associate with these technologies and, in turn, relate to themselves and others, and (3) the metaphysical implications called up by blurring ontological boundaries surrounding what constitutes human, machine, and communication.
Chapter
Full-text available
Based on the paradigm of “computers are social actors” (CASA) and the idea of media equation, this study examines whether smartphones elicit social responses originally exclusive to human-human interaction. Referring to the stereotype of gender-specific colors, participants (n = 108) in a laboratory experiment interacted with a phone presented either in a blue (male) or a pink (female) sleeve to solve five social dilemmas, with the phone always arguing for one of two given options. Afterwards, participants rated the femininity and masculinity of the phone as well as its competence and trustworthiness. Furthermore, the participants’ conformity with the choice recommendations the phone made was analyzed. Consistent with gender stereotypes, participants ascribed significantly more masculine attributes to the blue-sleeved smartphone and more feminine attributes to the pink phone. The blue phone was perceived as more competent, and participants followed its advice significantly more often compared to the pink-sleeved smartphone. Effects on perceived trustworthiness were found only for male participants, who perceived the blue phone to be more trustworthy. In sum, the study reveals both the CASA paradigm and the psychological perspective on users to be fruitful approaches for future research. Moreover, the results reveal practical implications regarding the importance of gender-sensitive development of digital devices.
Article
Full-text available
This paper undertakes a comparative legal study to analyze the challenges of privacy and personal data protection posed by Artificial Intelligence (“AI”) embedded in Robots, and to offer policy suggestions. After identifying the benefits from various AI usages and the risks posed by AI-related technologies, I then analyze legal frameworks and relevant discussions in the EU, USA, Canada, and Japan, and further consider the efforts of Privacy by Design (“PbD”) originating in Ontario, Canada. While various AI usages provide great convenience, many issues, including profiling, discriminatory decisions, lack of transparency, and impeding consent, have emerged. The unpredictability arising from the AI machine learning function poses further difficulties, which have only been partially addressed by legal frameworks in the aforementioned jurisdictions. However, analyzing the relevant discussions yielded several suggestions. The first priority is adopting PbD as the most flexible, soft-legal, and preferable approach toward AI-oriented issues. Implementing PbD will protect individual privacy and personal data without specific efforts, and achieve both the development of AI and the advancement of privacy and personal data protection. Technical measures that can adapt to an individual’s dynamic choices according to the “context” should be further developed. Furthermore, alternative technical measures, including those to solve the “algorithmic black box” or achieve differential privacy, warrant thorough examination. If AI surpasses human intelligence, a terminating function, such as a “kill switch” will be the last resort to preserve individual choice. Despite numerous difficulties, we must prepare for the coming AI-prevalent society by taking a flexible approach.
Article
Full-text available
Mobile shopping on smartphone platforms is becoming popular in many countries. Motivated by research suggesting that behavioral models do not universally hold across cultures, this study investigated mobile shopping continuance intentions under the influence of user-espoused cultural values in China and the United States. A research model drawing on UTAUT was developed with perceived effort expectancy, performance expectancy, mobile social influence, and privacy protection as the key determinants. Data from a US sample of 656 and a Chinese sample of 866 were analyzed using Partial Least Squares (PLS) procedures to test this model, including the hypothesized antecedent role of some espoused cultural values. The product-indicator approach was adopted to test the hypothesized moderating effect of the espoused cultural values. Contrary to our expectation, the empirical data did not show strong support for the moderating effect of the espoused cultural values. Our data show espoused cultural values to be more of an antecedent or predictor of mobile shopping continuance. The findings show that culture exerts an impact on the mobile shopping continuance decision process at both the macro (country difference) and micro (individual difference as espoused values) levels. Relevant discussion, limitations, and implications are also provided.
Article
Full-text available
The “privacy calculus” approach to studying online privacy implies that willingness to engage in disclosures on social network sites (SNSs) depends on evaluation of the resulting risks and benefits. In this article, we propose that cultural factors influence the perception of privacy risks and social gratifications. Based on survey data collected from participants from five countries (Germany [n = 740], the Netherlands [n = 89], the United Kingdom [n = 67], the United States [n = 489], and China [n = 165]), we successfully replicated the privacy calculus. Furthermore, we found that culture plays an important role: As expected, people from cultures ranking high in individualism found it less important to generate social gratifications on SNSs as compared to people from collectivist-oriented countries. However, the latter placed greater emphasis on privacy risks—presumably to safeguard the collective. Furthermore, we identified uncertainty avoidance to be a cultural dimension crucially influencing the perception of SNS risks and benefits. As expected, people from cultures ranking high in uncertainty avoidance found privacy risks to be more important when making privacy-related disclosure decisions. At the same time, these participants ascribed lower importance to social gratifications—possibly because social encounters are perceived to be less controllable in the social media environment.
Article
When someone intimately discloses themselves to a robot, does that make them like the robot more? Does a robot’s reciprocal disclosure contribute to a human’s liking of the robot? To explore whether these disclosure-liking effects in human–human interaction also apply to human–robot interaction, we conducted a between-subjects lab experiment to examine how self-disclosure intimacy (intimate vs. non-intimate) and reciprocal self-disclosure (yes vs. no) from the robot influence participants’ social perceptions (i.e., likability, trustworthiness, and social attraction) toward the robot. None of the disclosure-liking effects were confirmed by the results. In contrast, reciprocal self-disclosure from the robot increased liking in intimate self-disclosure but decreased liking in non-intimate self-disclosure, indicating a crossover interaction effect on likability. A post-hoc analysis was conducted to further understand these patterns. Implications in terms of the computers are social actors (CASA) paradigm were discussed.
Article
This paper comparatively studies people’s opinions on AI privacy between the US and China. Based on data collected from Twitter and Weibo, we perform text clustering and content analysis to classify opinion types, and we use regression analysis to examine the symptoms and drivers of opinion polarization. Results show that US users express more concern about AI privacy, focusing on privacy disclosure by AI applications. In contrast, Chinese users are more optimistic about AI’s role in promoting privacy protection. Security, economics, and application concerns drive polarization among US users, while technologies and algorithms drive polarization among Chinese users. This study offers methodological guidance for examining the relationship between AI and user privacy. It also guides government agencies and other practitioners in developing policies on AI regulation for privacy protection.
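The text-clustering step in such a pipeline can be illustrated with TF-IDF features and k-means. The posts, cluster count, and preprocessing below are invented for the sketch and are not this paper's actual pipeline.

```python
# Illustrative opinion clustering over social media posts about AI privacy:
# TF-IDF vectorization followed by k-means. Posts and k are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

posts = [
    "AI assistants leak too much personal data",
    "facial recognition helps catch fraud and protects accounts",
    "worried my smart speaker records everything I say",
    "AI can anonymize data better than humans can",
]
X = TfidfVectorizer(stop_words="english").fit_transform(posts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for post, label in zip(posts, labels):
    print(label, post)
```

Cluster labels like these would then feed the content analysis (naming opinion types) and the regression on polarization drivers described above.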
Article
Smart speakers' voice recognition technology has not only advanced the efficiency of communication between users and machines, but also raised users' privacy concerns. As smart speakers listen to users' voice commands and collect audio data to improve algorithms, it is crucial to understand how users manage their privacy settings to protect personal information. Combining the uses and gratifications approach, the Media Equation, and communication privacy management theory, this study surveyed 991 participants' attitudes and behavior patterns related to smart speaker use. The study explored the unique gratifications that users seek, identified the main strategies that users adopt to manage their privacy, and suggested that users apply interpersonal privacy management rules to interactions with smart media. In addition, users' gratifications affect their privacy management via two routes: a protective route that highlights the role of perceived privacy risks, and a precautionary route that emphasizes the impact of users' social presence experiences.
Article
In the digital environment, chatbots assist consumers in decision making as customer service agents. Based on the computers-are-social-actors paradigm, this study examines the perceived differences in communication quality and privacy risks between different service agents and their impact on consumers' adoption intention, and investigates whether these perceived differences depend on differences in the user's need for human interaction. A series of five scenario-based experiments was carried out to collect data and test hypotheses. It was discovered that: different types of service agents directly affect consumers' adoption intention; perceived communication quality and privacy risk mediate the effect of service agent type on adoption intention; and the effects of service agent type on perceived accuracy, communicative competence, and privacy risk are moderated by the need for human interaction. The findings provide important insights into the rational use of human-computer interaction in e-commerce.
Article
Purpose “Smart devices think you're “too lazy” to opt out of privacy defaults” was the headline of a recent news report indicating that individuals might be too lazy to stop disclosing their private information and therefore to protect their information privacy. In current privacy research, privacy concerns and self-disclosure are central constructs regarding protecting privacy. One might assume that being concerned about protecting privacy would lead individuals to disclose less personal information. However, past research has shown that individuals continue to disclose personal information despite high privacy concerns, which is commonly referred to as the privacy paradox. This study introduces laziness as a personality trait in the privacy context, asking to what degree individual laziness influences privacy issues. Design/methodology/approach After conceptualizing, defining and operationalizing laziness, the authors analyzed information collected in a longitudinal empirical study and evaluated the results through structural equation modeling. Findings The findings show that the privacy paradox holds true, yet the level of laziness influences it. In particular, the privacy paradox applies to very lazy individuals but not to less lazy individuals. Research limitations/implications With these results one can better explain the privacy paradox and self-disclosure behavior. Practical implications The state might want to introduce laws that not only bring organizations to handle information in a private manner but also make it as easy as possible for individuals to protect their privacy. Originality/value Based on a literature review, a clear research gap has been identified, filled by this research study.
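The structural model this study implies (privacy concerns and laziness jointly predicting self-disclosure) can be sketched with the semopy package. The package choice, variable names, and simulated data are all assumptions; the authors do not describe their tooling here, only that they used structural equation modeling.

```python
# Hypothetical SEM sketch of the privacy-paradox test with laziness as a
# moderator (modeled via a product term). Tooling and data are assumptions.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(5)
n = 500
data = pd.DataFrame({"privacy_concern": rng.normal(size=n),
                     "laziness": rng.normal(size=n)})
data["concern_x_lazy"] = data["privacy_concern"] * data["laziness"]
# Simulated outcome: concerns barely deter disclosure among lazy individuals
data["self_disclosure"] = (-0.1 * data["privacy_concern"]
                           + 0.3 * data["concern_x_lazy"]
                           + rng.normal(size=n))

desc = "self_disclosure ~ privacy_concern + laziness + concern_x_lazy"
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())
```

A significant interaction coefficient would correspond to the paper's finding that the privacy paradox holds for very lazy individuals but not for less lazy ones.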
Article
As mobile payment technology is at a nascent stage, the use of facial recognition payment (FRP) services is gradually penetrating the lives of Chinese people. Although the FRP system may have advantages over other payment technologies, a civil lawsuit over refusing to submit facial information and a series of illegal activities related to selling facial information have raised the public's privacy concerns, which might further engender Chinese users' resistance towards FRP. Based on privacy calculus theory and innovation resistance theory, this study builds a research model of FRP and examines it by using a cross-sectional study with 1200 Chinese users. The findings demonstrate that the perceived effectiveness of privacy policy has significant relationships with privacy control, perceived privacy risk, perceived benefits, and resistance. Both privacy control and perceived privacy risk are significantly related to privacy concerns. There is also a significant relationship between the perceived privacy risk and resistance to FRP. Meanwhile, privacy concerns positively affect user resistance, while perceived benefits negatively affect user resistance. In contrast to previous research, the perceived privacy risk has a positive impact on the perceived benefits. This study offers cutting-edge contributions to both academia and industry.
Article
This study aims to identify the factors determining the adoption of M-Banking apps among customers in Cameroon. In other words, what factors influence users' decisions to adopt and use a system or technology such as an M-Banking app, and indirectly, what is the impact of this use on both customers and financial inclusion? The research model was developed by combining the Technology Acceptance Model (TAM), the Unified Theory of Acceptance and Use of Technology (UTAUT2), the Information System Success Model (ISSM), the Protection Motivation Theory (PMT), and other constructs; it was then tested with a sample of 223 users of the “SARA” M-Banking app of the financial institution “Afriland First Bank”. Findings revealed that: (1) utilitarian expectation, hedonic motivation, status gain, habit, and perceived privacy concern have a significant influence on the intention to adopt M-Banking apps; and (2) the exploitative/explorative use of this technology has an impact on users' loyalty and satisfaction and also contributes strongly to fostering financial inclusion in Cameroon. A multi-group analysis was also performed on the sample using two gender-based groups (males, n = 121; females, n = 102).
Article
Online users are increasingly exposed to chatbots as one form of AI-enabled media technologies, employed for persuasive purposes, e.g., making product/service recommendations. However, the persuasive potential of chatbots has not yet been fully explored. Using an online experiment (N = 242), we investigate the extent to which communicating with a stand-alone chatbot influences affective and behavioral responses compared to interactive Web sites. Several underlying mechanisms are studied, showing that enjoyment is the key mechanism explaining the positive effect of chatbots (vs. Web sites) on recommendation adherence and attitudes. Contrary to expectations, perceived anthropomorphism seems not to be particularly relevant in this comparison.
Article
This study examines the persuasion mechanism in product recommendations made by a voice-based conversational agent and explores whether the personalized content reflecting the customer's preferences and the agent's social role of a friend, rather than a secretary, generate a more positive attitude toward the product in the context of voice shopping. With the framework of dual modes of information-processing models, we hypothesized that the personalization of messages would be a central route with a greater impact on attitude for products with high involvement. By contrast, the social role of the conversational agent was expected to represent a peripheral route with a greater impact on products with low involvement. An experimental study was designed to test the effects of personalized content that reflected individual preferences for product attributes and a friend role of a voice agent with high and low product involvement. The results showed main effects of both personalization and the social role on building attitudes toward the product. Although no interaction effect for personalization and involvement was found, there was a significant interaction effect for the social role and involvement. This study contributes to persuasion theory by extending it to the interaction with a conversational agent. For practitioners, the study provides insights into the importance of the personalized content of recommendations and the need for consideration of an alternative social role in the design of voice shopping through a conversational agent.
Article
Chatbots are used frequently in business to facilitate various processes, particularly those related to customer service and personalization. In this article, we propose novel methods of tracking human-chatbot interactions and measuring chatbot performance that take into consideration ethical concerns, particularly trust. Our proposed methodology links neuroscientific methods, text mining, and machine learning. We argue that trust is the focal point of successful human-chatbot interaction and assess how trust as a relevant category is being redefined with the advent of deep learning supported chatbots. We propose a novel method of analyzing the content of messages produced in human-chatbot interactions, using the Condor Tribefinder system we developed for text mining, which is based on a machine learning classification engine. Our results will help build better social bots for interaction in business or commercial environments.
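Because Condor Tribefinder is the authors' own proprietary system, the sketch below substitutes a generic supervised text classifier to show the same idea: assigning trust-related categories to human-chatbot messages. The messages, labels, and model choice are invented.

```python
# Generic stand-in for a text-mining classification engine: a TF-IDF plus
# logistic regression pipeline labeling messages as trusting or distrusting.
# Not the authors' Condor Tribefinder system; all data are invented.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

messages = ["I trust this recommendation", "this bot feels deceptive",
            "thanks, that answer was reliable", "I doubt this is accurate"]
labels = ["trusting", "distrusting", "trusting", "distrusting"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(messages, labels)
print(clf.predict(["seems trustworthy to me"]))
```

In production, such a classifier would score message streams from live human-chatbot conversations, turning trust into a measurable performance category.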
Article
Communication Privacy Management (CPM) theory explains one of the most important, yet challenging social processes in everyday life, that is, managing disclosing and protecting private information. The CPM privacy management system offers researchers, students, and the public a comprehensive approach to the complex and fluid character of privacy management in action. Following an overview of Communication Privacy Management framework, this review focuses on recent research utilizing CPM concepts that cross a growing number of contexts and illustrates the way people navigate privacy in action. Researchers operationalize the use of privacy rules and other core concepts that help describe and explain the ups and downs of privacy management people encounter.
Article
With the rapid growth of mobile phone use, mobile advertising has increasingly become a powerful tool for marketers to reach targeted consumers worldwide. This study investigates how attitude, trust, and privacy concerns influence mobile advertising effectiveness in a cross-cultural context including China and the U.S. Our results show much similarity between Chinese and American consumers. Overall, in both countries, we found that beliefs about mobile advertising significantly influence consumers’ attitudes toward mobile advertising, which in turn influence the intention to use mobile advertising and purchase intention. Specifically, perceived informational usefulness, perceived entertainment usefulness and perceived ease of use emerged as significant predictors for consumers’ attitudes toward mobile advertising. Perceived social usefulness is a significant predictor among Chinese consumers but not among Americans. In both markets, trust positively and significantly influences attitudes, whereas privacy concerns are a significant negative influencing factor.
Conference Paper
In this day and age of identity theft, are we likely to trust machines more than humans for handling our personal information? We answer this question by invoking the concept of "machine heuristic," which is a rule of thumb that machines are more secure and trustworthy than humans. In an experiment (N = 160) that involved making airline reservations, users were more likely to reveal their credit card information to a machine agent than a human agent. We demonstrate that cues on the interface trigger the machine heuristic by showing that those with higher cognitive accessibility of the heuristic (i.e., stronger prior belief in the rule of thumb) were more likely than those with lower accessibility to disclose to a machine, but they did not differ in their disclosure to a human. These findings have implications for design of interface cues conveying machine vs. human sources of our online interactions.
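The core comparison in this experiment, disclosure rates to a machine agent versus a human agent, could be tested with a two-proportion z-test. The counts below are invented for illustration and are not the study's data.

```python
# Hypothetical re-analysis sketch: did more participants reveal credit card
# details to the machine agent than to the human agent? Counts are invented.
from statsmodels.stats.proportion import proportions_ztest

disclosed = [62, 45]    # machine agent, human agent (hypothetical counts)
n_per_group = [80, 80]  # participants per condition (hypothetical)
z, p = proportions_ztest(disclosed, n_per_group)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A significant positive difference in the machine condition would correspond to the "machine heuristic" effect the study reports.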
Article
Online engagement is prevalent in society but can be stressful because many debates are controversial. Further, online engagement has become a global phenomenon, and individuals from different cultural backgrounds discuss political and civic issues on social media. Drawing upon social exchange theory and the concept of national culture, this study aims to further explore individuals' participation in online engagement by examining the effects of privacy and culture. We hypothesize that social capital, social media evaluation, and privacy control are positively related to online expression, while privacy risk has a negative effect on online expression. Furthermore, the effects of social capital, social media evaluation, privacy risk, and privacy control on online expression are moderated by culture. Our hypotheses are tested with survey data collected from Australia and China. The results show that social capital and social media evaluation have positive effects in all sub-samples, while privacy risk and privacy control have significant effects for the high uncertainty avoidance sub-samples. Our study contributes to the literature by clarifying the role of privacy and highlighting the importance of culture in online engagement. Public managers need to work with social media providers to better protect individuals' privacy and take their cultural backgrounds into consideration.
Article
Chatbots are replacing human agents in a number of domains, from online tutoring to customer service to even cognitive therapy. But they are often machine-like in their interactions. What can we do to humanize chatbots? Should they necessarily be driven by human operators for them to be considered human? Or, will an anthropomorphic visual cue on the interface and/or a high level of contingent message exchanges provide humanness to automated chatbots? We explored these questions with a 2 (anthropomorphic visual cues: high vs. low anthropomorphism) × 2 (message interactivity: high vs. low message interactivity) × 2 (identity cue: chatbot vs. human) between-subjects experiment (N = 141) in which participants interacted with a chat agent on an e-commerce site about choosing a digital camera to purchase. Our findings show that a high level of message interactivity compensates for the impersonal nature of a chatbot that is low on anthropomorphic visual cues. Moreover, identifying the agent as human raises user expectations for interactivity. Theoretical as well as practical implications of these findings are discussed.
Article
As concerns about personal information privacy (PIP) continue to grow, an increasing number of studies have empirically investigated the phenomenon. However, researchers are not well informed about the shift of PIP research trends over time. In particular, there is a lack of understanding of what constructs have been studied in what contexts. As a result, researchers may design their studies without sufficient guidance. This problem can lead to unproductive efforts in advancing PIP research. Therefore, it is important and timely to review prior PIP research to enhance our understanding of how it has evolved. We are particularly interested in understanding the chronological changes in the contexts and research constructs studied. We use a chronological stage model of PIP research we develop, a set of contextual variables identified from prior literature, and the four-party PIP model suggested by Conger et al. (2013) as theoretical foundations to conduct a chronological literature review of empirical PIP concern studies. We find several PIP research trends over the last two decades: the quantity of PIP research has drastically increased; the variety of contexts and research constructs studied has increased substantially; and many constructs have been studied only once, while only a few have been studied repeatedly. We also find that the focus of PIP research has shifted from general/unspecified contexts to specific ones. We discuss the contributions of the study and recommendations for future research directions. We propose a fifth party as an emergent player in the ecosystem of PIP and call for future research that investigates it.
Article
When we ask a chatbot for advice about a personal problem, should it simply provide informational support and refrain from offering emotional support? Or, should it show sympathy and empathize with our situation? Although expression of caring and understanding is valued in supportive human communications, do we want the same from a chatbot, or do we simply reject it due to its artificiality and uncanniness? To answer this question, we conducted two experiments with a chatbot providing online medical information advice about a sensitive personal issue. In Study 1, participants (N = 158) simply read a dialogue between a chatbot and a human user. In Study 2, participants (N = 88) interacted with a real chatbot. We tested the effect of three types of empathic expression (sympathy, cognitive empathy, and affective empathy) on individuals' perceptions of the service and the chatbot. Data reveal that expression of sympathy and empathy is favored over unemotional provision of advice, in support of the Computers are Social Actors (CASA) paradigm. This is particularly true for users who are initially skeptical about machines possessing social cognitive capabilities. Theoretical, methodological, and practical implications are discussed.
Article
Disclosing personal information to another person has beneficial emotional, relational, and psychological outcomes. When disclosers believe they are interacting with a computer instead of another person, such as a chatbot that can simulate human-to-human conversation, outcomes may be undermined, enhanced, or equivalent. Our experiment examined downstream effects after emotional versus factual disclosures in conversations with a supposed chatbot or person. The effects of emotional disclosure were equivalent whether participants thought they were disclosing to a chatbot or to a person. This study advances current understanding of disclosure and whether its impact is altered by technology, providing support for media equivalency as a primary mechanism for the consequences of disclosing to a chatbot.
Article
Individuals presently interact with their diverse social circles on social networking sites and may find it challenging to maintain their privacy while deriving pleasure through self-disclosure. Drawing upon the communication privacy management theory, our study examines how boundary coordination and boundary turbulence can influence individuals’ self-disclosure decisions. Further, our study examines how the effects of boundary coordination and boundary turbulence differ across cultures. Our hypotheses are tested with survey data collected from the United States and China. The results strongly support our hypotheses and show interesting cultural differences. The implications for theory and practice are discussed.
Article
This study investigates factors that affect user decisions on which information to share, and specifically whether and how to disclose sensitive personal information, when using social networking sites (SNSs). The determinants of personal information disclosure (self-disclosure) are identified using a framework that combines communication privacy management and social penetration theories. Communication privacy management theory is applied to identify which rules guide users' sharing of personal information. Social penetration theory is used to understand the personal information disclosure approaches (deep and shallow) that people employ on SNSs. Structural equation modeling was used to analyze data from 315 Facebook users who were also undergraduate students. Results show that individuals self-disclose more on SNSs when they know how to coordinate disclosure boundaries, and particularly when they have learned from prior privacy infringements. While types of relationships are important in determining self-disclosure approaches, SNS users who have experienced a privacy breach follow different privacy coordination rules compared with those who have not experienced such an incident. Our results present an interesting twist: "fooled once" users show higher levels of information sharing across all dimensions. These users have learned their lessons and found their way through privacy management options, eventually leading to higher self-disclosure.
Article
The drivers of social commerce usage have been a focus of scholars in recent years, but mobile social media users' resistance to mobile social commerce remains underexplored and therefore merits attention. Using data collected from mobile social media users with no experience of mobile social commerce, Artificial Neural Network analysis was employed to capture both linear and nonlinear relationships in a research model comprising innovation barriers and privacy concern. Surprisingly, all resistances correlated positively with usage intention except the image barrier, which appeared to be the most influential form of resistance. Several explanations are offered for these outcomes; the most plausible is that resistance behavior and usage intention can coexist. Mobile social media users intend to embrace mobile social commerce, but their intentions are held back by their perceptions of innovation barriers and privacy concern. These results reaffirm the coexistence of resistance and usage intention, as well as the "privacy paradox" phenomenon, and thereby contribute to the existing literature. Practitioners are advised to act on these findings, and several methods for catalyzing mobile social media users' adoption decisions are suggested.
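As a rough illustration of the kind of Artificial Neural Network analysis the abstract mentions (capturing both linear and nonlinear predictor-outcome relationships), here is a minimal sketch with a small multilayer perceptron; the predictors, synthetic data, and network size are assumptions for demonstration only, not the study's model.

```python
# Illustrative multilayer perceptron relating hypothetical barrier/privacy
# scores to usage intention; the data are synthetic, and the nonlinear term
# merely demonstrates the kind of pattern an ANN can pick up that a purely
# linear model would miss.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(1, 7, size=(200, 5))   # five hypothetical 7-point Likert predictors
y = (0.4 * X[:, 0] - 0.3 * X[:, 4]     # linear effects
     + 0.5 * np.sin(X[:, 1])           # a nonlinear effect
     + rng.normal(0, 0.2, 200))        # noise

model = make_pipeline(
    StandardScaler(),                  # scale inputs before the network
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(round(model.score(X, y), 2))     # in-sample R^2 of the fitted network
```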
Article
As society continues its rapid change to a digitized individual, corporate, and government environment, it is prudent for researchers to investigate the zeitgeist of the global citizenry. The technological changes brought about by big data analytics are changing the way we gather and view data. This big data analytics sentiment research examines how Chinese and American respondents may view big data collection and analytics differently. The paper follows with an analysis of reported attitudes toward possible viewpoints from each country on various big data analytics topics ranging from individual to business and governmental foci. Hofstede's cultural dimensions are used to inform and frame our research hypotheses. Findings suggest that Chinese and American perspectives differ on individual data values, with the Chinese being more open to data collection and analytic techniques targeted toward individuals. Furthermore, US respondents are found to have a more favorable view of businesses' use of data analytics. Finally, there is a strong difference in attitudes toward governmental use of data: US respondents do not favor governmental big data analytics usage, whereas Chinese respondents indicate a greater acceptance of governmental data usage. These findings are helpful in better understanding appropriate technological change and adoption from a societal perspective. Specifically, this research provides insights for corporate business and government entities, suggesting how they might adjust their approach to big data collection and management in order to better support and sustain their organization's services and products.
Article
Users' privacy on social network sites is one of the most important and urgent issues in both industry and academia. This paper investigates the effect of users' demographics, social network site experience, personal social network size, and blogging productivity on privacy disclosure behaviors by analyzing data collected from social network sites. Based on two levels of disclosed privacy sensitivity information, the textual information of a user's blog postings can be converted into a 4-tuple representing their privacy disclosure pattern, containing the breadth and depth of disclosure and the frequencies of highly and less sensitive disclosures. Collections of a user's privacy disclosure patterns on social network sites can effectively reflect the user's privacy disclosure behaviors. Applying the general linear modeling approach to blogging data converted with a coding scheme, we find that males and females have significantly different privacy disclosure patterns in dimensions related to the breadth and depth of disclosure. In addition, age has a significant negative relationship with the breadth and depth of disclosure, as well as with highly sensitive disclosure. We also find that social network site experience, personal social network size, and blog length are not significantly related to users' privacy disclosure patterns, while the number of blog posts consistently shows positive associations with privacy disclosure patterns.
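As a rough illustration of the 4-tuple coding the abstract describes, here is a minimal sketch that reduces a post to (breadth, depth, count of highly sensitive disclosures, count of less sensitive disclosures); the keyword lexicons and the depth-weighting rule are invented placeholders, not the paper's actual coding scheme.

```python
# Minimal sketch of a 4-tuple privacy-disclosure coding. The topic lexicons
# and the weighting below are hypothetical stand-ins for the study's scheme.
from collections import namedtuple

DisclosurePattern = namedtuple(
    "DisclosurePattern",
    ["breadth", "depth", "high_sensitive", "less_sensitive"])

# Hypothetical keyword lexicons for the two sensitivity levels
HIGH_SENSITIVE = {"salary", "illness", "address"}
LESS_SENSITIVE = {"hobby", "movie", "weather"}

def code_post(text: str) -> DisclosurePattern:
    words = set(text.lower().split())
    high = len(words & HIGH_SENSITIVE)   # highly sensitive topics mentioned
    low = len(words & LESS_SENSITIVE)    # less sensitive topics mentioned
    breadth = high + low                 # number of distinct topics touched
    depth = 2 * high + low               # crude rule: sensitive topics count double
    return DisclosurePattern(breadth, depth, high, low)

print(code_post("My salary is low but my hobby keeps me happy"))
# DisclosurePattern(breadth=2, depth=3, high_sensitive=1, less_sensitive=1)
```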
Article
Online social networks (OSNs) rely exclusively on user-generated content to offer their members an engaging and rewarding experience. Consequently, stimulating communication between users and encouraging their self-disclosure online are essential for the sustainability of OSNs. Social networks are popular worldwide, and their user bases are becoming increasingly culturally diverse. Motivating members to share information requires an understanding of cultural subtleties. So far, current research offers only limited insight into the role that culture plays behind users' willingness to self-disclose on online networks. Building on the privacy calculus, this study examines the role of two cultural dimensions, individualism and uncertainty avoidance, in self-disclosure on OSNs. Survey results from German and American Facebook users form the basis of the analysis. The results of structural equation modeling and multi-group analysis reveal distinct differences in the cognitive structures of these two cultures. Trusting beliefs play a decisive role in the self-disclosure of users from an individualistic background, while uncertainty avoidance moderates the impact of privacy concerns. The authors' contribution to theory is the rejection of the universal character of the privacy calculus. The findings provide OSN operators with a series of recommendations to stimulate content creation and sharing among their heterogeneous audiences.