Article
PDF available

ChatGPT Undermines Human Reflexivity, Scientific Responsibility and Responsible Management Research

Authors:
  • School of Management, University of Bath

Abstract

With ChatGPT being promoted to and by academics for writing scholarly articles more effectively, we ask what kind of knowledge it produces, what this means for our reflexivity as responsible management educators/researchers, and how an absence of reflexivity disqualifies us from shaping management knowledge in responsible ways. We urgently need to grasp what makes human knowledge distinct compared to knowledge generated by ChatGPT et al. Thus, we first explain how ChatGPT operates and unpack its intrinsic epistemological limitations. Using high-probability choices that are derivative, ChatGPT has no stake in the knowledge it produces and is thus prone to offer irresponsible outputs. By contrast, genuine human thinking, embodied in a contingent socio-cultural setting, uses low-probability choices both 'inside' and 'outside' the box of training data, making it creative, contextual and committed. We conclude that the use of ChatGPT is wholly incompatible with scientific responsibility and responsible management.
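To make the abstract's contrast concrete: language models pick each next word from a probability distribution over their vocabulary. The TypeScript sketch below is a toy illustration of that mechanism, not the paper's method; the four-word vocabulary and its logits are invented, and real models score tens of thousands of tokens at every step.

```typescript
// Toy next-token selection: invented logits over a four-word vocabulary.
const candidates: Record<string, number> = {
  engagement: 2.1, // high logit: the "expected", derivative continuation
  performance: 1.8,
  culture: 1.2,
  mischief: -0.5, // low-probability, more "divergent" continuation
};

// Softmax with temperature: as t -> 0 the model collapses onto the single
// most probable token; larger t flattens the distribution.
function softmax(logits: number[], temperature: number): number[] {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled);
  const exps = scaled.map((l) => Math.exp(l - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

function sampleToken(temperature: number): string {
  const tokens = Object.keys(candidates);
  const probs = softmax(Object.values(candidates), temperature);
  let r = Math.random();
  for (let i = 0; i < tokens.length; i++) {
    r -= probs[i];
    if (r <= 0) return tokens[i];
  }
  return tokens[tokens.length - 1];
}

console.log(sampleToken(0.1)); // almost always "engagement"
console.log(sampleToken(1.5)); // occasionally "mischief"
```

Note that even at high temperature every sampled option comes from inside the training distribution, which is the abstract's point: the machine's 'low-probability' choices remain derivative, whereas human low-probability choices can step outside the box of training data altogether.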
... In the example, ChatGPT is also prompted to produce the initial, divergent prompt itself, which is then used to generate the creative writing output. Research as well as practice therefore generally remain confined within an optimization paradigm (Cabantous & Gond, 2011) in which AI is seen as an exclusively convergent technology expected to provide accurate answers or solutions (Hannigan et al., 2024; Lindebaum & Fleming, 2024), rather than distinguishing between convergent and divergent applications of the technology. GenAI's generative and aleatory nature can be seen as a useful feature (creativity) promoting divergent thinking rather than a bug (hallucination), depending on the task and context. ...
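As a concrete rendering of the two-step pattern this excerpt describes (the model first invents a divergent prompt, which is then fed back to produce the creative output), here is a minimal TypeScript sketch. It assumes Node 18+ for the global fetch, an OPENAI_API_KEY environment variable, and the public OpenAI chat-completions endpoint; the model name and temperature are illustrative choices, not values taken from the cited example.

```typescript
// Two-step "divergent prompting" chain: the model writes its own prompt
// (divergent move), then answers it (convergent move).
async function chat(userMessage: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o", // illustrative model choice
      temperature: 1.2, // higher temperature favours divergent sampling
      messages: [{ role: "user", content: userMessage }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

async function main() {
  // Step 1: ask the model to produce the initial, divergent prompt itself.
  const divergentPrompt = await chat(
    "Invent one unusual, open-ended creative-writing prompt about organizational life."
  );
  // Step 2: use that prompt to generate the creative writing output.
  console.log(await chat(divergentPrompt));
}

main();
```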
... Further concerns arise from human-computer interaction harms and the loss of human autonomy and agency, such as anthropomorphization leading to misplaced trust and excessive dependency (Ray, 2023; Weidinger et al., 2022). Intentional design of human-AI interaction patterns can, however, promote complementary learning effects that reconfigure expertise and workflows (Barrett, Oborn, Orlikowski, & Yates, 2012), rather than GenAI becoming a competitive artifact characterized by overreliance and skill degradation (Chen & Chan, 2024; Lindebaum & Fleming, 2024). ...
... Future research could investigate how organizations can design learning environments that promote active engagement with GenAI while maintaining sufficient domain expertise for critical evaluation of outputs, particularly given the risk of deskilling and overreliance on AI-generated solutions (Hannigan et al., 2024; Lindebaum & Fleming, 2024). Future studies may also investigate optimal approaches to algorithmic decision authority versus human discretion (Grote, Parker, & Crowston, 2024; Hillebrand et al., 2025; Kim, Glaeser, Hillis, Kominers, & Luca, 2024), particularly given the need to maintain strategic alignment while enabling decentralized innovation through GenAI tools. ...
Preprint
Full-text available
The rapid emergence of generative artificial intelligence (GenAI) is profoundly transforming the nature of work and organizations, challenging prevalent views of AI as primarily enabling prediction and optimization. This paper argues that GenAI represents a qualitative shift that necessitates a fundamental reassessment of AI's role in management and organizations. By identifying and analyzing four critical dimensions, namely (i) GenAI's broad applicability as a general-purpose technology; (ii) its ability to catalyze exploratory and combinatorial innovation; (iii) its capacity to enhance cognitive diversity and decision-making; and (iv) its democratizing effect on AI adoption and value creation, the paper highlights GenAI's potential to augment and scale human creativity, learning, and innovation. Building on insights from the AI and management literature, as well as on theory of human-AI agency, the paper develops a novel perspective that challenges the dominant efficiency-oriented narrative. It proposes that a human-complementary approach to GenAI development and implementation, leveraging it as a generative catalyst for exploration, can enable radically increased creativity, innovation, and growth. GenAI's democratizing aspects can amplify these mechanisms, promoting widely shared growth when combined with appropriate policy and managerial choices. Implications for theory, practice, and future research directions are discussed, drawing attention to the need for approaches in GenAI development and deployment that are complementary rather than competitive to human beings. The paper concludes by discussing the theoretical, practical, and policy implications of this transformative technology. It outlines future research directions, emphasizing the critical role of human agency in determining the organizational, societal, and ethical outcomes associated with AI adoption and implementation.
... Hannigan, McCarthy, and Spicer (2024) explain that GenAI tools are "'predicting' responses rather than 'knowing' the meaning of their responses" and coin the term "botshit" for what is produced by uncritically using GenAI output: something that can happen to be right or wrong, but that is used without regard for its veracity. Lindebaum and Fleming (2023) argue that the use of GenAI tools in qualitative analysis would undermine responsible research and change our understanding of what research should be; they point out that GenAI tools have no stake in the outcome of the research and cannot accommodate the context of the research. This means that GenAI tools could infringe at least one of Felten's five principles for good practice in SoTL, namely that research should be grounded in context (Felten 2013). ...
... We wanted to be able to answer the questions concerning GenAI analyses of qualitative SoTL datasets, and to contribute to the discussion about how we, as a SoTL community and as SoTL researchers, can learn to work with GenAI in a scholarly way. We set out to explore the potential application of these tools for qualitative analysis in SoTL projects, taking into account the warnings from Lindebaum and Fleming (2023) and from Davison et al. (2024), as well as the positive practical example presented by Gamieldien, Case, and Katz (2023). ...
Article
Full-text available
Generative AI tools (GenAI) are increasingly used for academic tasks, including qualitative data analysis for the Scholarship of Teaching and Learning (SoTL). In our practice as academic developers, we are frequently asked for advice on whether this use of GenAI is reliable, valid, and ethical. Since this is a new field, we have not been able to answer this confidently based on published literature, which contains both very positive and highly cautionary accounts. To fill this gap, we experiment with the use of chatbot-style GenAI (namely ChatGPT 4, ChatGPT 4o, and Microsoft Copilot) to support or conduct qualitative analysis of survey and interview data from a SoTL project, which had previously been analysed by experienced researchers using thematic analysis. At first sight, the output looked plausible, but the results were incomplete and not reproducible. In some instances, the tools interpreted and extrapolated beyond the data even though the prompt clearly stated that the tool should only analyse a specified dataset based on explicit instructions. Since both the algorithms and the training data of the GenAI tools are undisclosed, it is impossible to know how the outputs had been arrived at. We conclude that while results may look plausible initially, digging deeper soon reveals serious problems; the lack of transparency about how analyses are conducted and results are generated means that no reproducible method can be described. We therefore warn against an uncritical use of GenAI in qualitative analysis of SoTL data.
... The proliferation of generative AI tools has transformed the landscape of teaching and academic writing (Koivisto & Grassini, 2023; Lo, 2023; Rudolph, Tan, & Tan, 2023). While AI-assisted writing can enhance productivity, for instance, when used to proofread essays, it can also undermine critical and creative thinking (Bechky & Davis, 2025; Lindebaum & Fleming, 2024; Messeri & Crockett, 2024). Moreover, its use raises concerns regarding academic integrity and authorship (Rudolph et al., 2023; Thorp, 2023), including in educational settings where grades are often largely determined by writing assignments. ...
Preprint
Full-text available
This paper introduces a simple JavaScript-based web application designed to assist educators in detecting AI-generated content in student essays and written assignments. Unlike existing AI detection tools that rely on obfuscated machine learning models, AIDetection.info employs a heuristic-based approach to identify common syntactic traces left by generative AI models, such as ChatGPT, Claude, Grok, DeepSeek, Gemini, Llama/Meta, Microsoft Copilot, Grammarly AI, and other text-generating models and wrapper applications. The tool scans documents in bulk for potential AI artifacts, as well as AI citations and acknowledgments, and provides a visual summary with downloadable Excel and CSV reports. This article details its methodology, functionalities, limitations, and applications within educational settings.
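The abstract does not disclose AIDetection.info's actual rule set, but a bulk heuristic scan of the kind it describes can be sketched in a few lines of TypeScript. The patterns below are invented stand-ins for the "syntactic traces" and AI acknowledgments the abstract mentions, not the tool's real heuristics.

```typescript
// Illustrative heuristic scan for AI-writing artifacts in an essay.
interface Finding {
  pattern: string;
  count: number;
}

const heuristics: { label: string; regex: RegExp }[] = [
  // Stock phrasings that LLM output tends to overuse (invented examples).
  { label: "stock LLM phrasing", regex: /\b(delve into|in the realm of|tapestry of)\b/gi },
  // Leftover self-references from a chat session.
  { label: "assistant self-reference", regex: /\bas an AI (language )?model\b/gi },
  // Explicit AI citations or acknowledgments.
  { label: "AI acknowledgment", regex: /\b(ChatGPT|Claude|Gemini|Copilot|DeepSeek)\b/g },
];

function scanEssay(text: string): Finding[] {
  return heuristics
    .map(({ label, regex }) => ({
      pattern: label,
      count: (text.match(regex) ?? []).length,
    }))
    .filter((f) => f.count > 0);
}

// Example: scan one essay and emit rows for a CSV report.
const findings = scanEssay(
  "As an AI language model, I will delve into the tapestry of leadership."
);
const csv = ["pattern,count", ...findings.map((f) => `${f.pattern},${f.count}`)].join("\n");
console.log(csv);
```

A real detector would run such a scan over a folder of documents and aggregate per-file counts into the visual summary and Excel/CSV exports the abstract describes.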
... However, ChatGPT has limitations. It does not capture nonverbal cues such as body language or tone of voice, which are key elements for building trust and strengthening relations in tacit knowledge exchange [18]. Moreover, it may struggle to grasp cultural nuances, limiting its ability to cultivate deep understanding among students. ...
Article
Full-text available
In educational settings, the socialization, externalization, combination, and internalization (SECI) model promotes collaboration, the creation of new knowledge, and the integration of individual and collective experiences, thereby enriching learning with greater depth and meaning. The objective of this study was to identify the factors that contribute to the optimal development of the SECI model through the use of ChatGPT, and to understand how they interact. The experiment included undergraduate students of business sciences at public and private universities in Cusco, Huancayo, and Lima, Peru. The students were divided into control and experimental groups, with 100 students in each group. The main selection criteria were prior knowledge of and willingness to use ChatGPT and being enrolled in a finance course. The phases of the SECI model were evaluated using ChatGPT in the experimental group through group discussions, brainstorming sessions, the creation of a manual, and individual assessments, exploring the impact of ChatGPT usage on learning. Each phase was compared and described. Results revealed that while the use of ChatGPT improved the quality of interaction and externalization of knowledge in the experimental group compared to the control group, it did not have a notable impact on the quality of combination or internalization of knowledge. A multinomial logistic regression analyzed the impact of ChatGPT usage in each phase. Findings showed that the SECI model without ChatGPT explained 63.4% of the change in internalization, while with ChatGPT it was 41.3%. A positive correlation was observed between the use of ChatGPT and performance in objective tests during the externalization and combination phases. However, while it enhanced the generation of ideas in the early stages of learning, it did not necessarily increase tacit knowledge in the internalization phase. The importance of using tools such as ChatGPT carefully and selectively, especially in the context of acquiring specialized knowledge, was emphasized.
... 9 Successful critical thinking requires not only being able to do the cognitive work but also having the interest and inclination to do so, building on reflexivity, embodiment, and emotion. 10 It is not enough to be able to question; successful critical thinking requires a willingness to fully engage in the questioning. While nurturing and promoting critical thinking is the university's primary role, it must also attend to whether its environment reinforces the support people need to maintain a firm sense of their identity. ...
... AI is also somewhat limited in its ability to synthesise backing in the manner Williamson did due to its inability to access tacit knowledge, which, by definition, is uncodifiable (Polanyi, 2012 [1958]; Walsh, 2017). Moreover, AI, in its current form, would be hard-pressed to invoke the non-rational, even if it were capable of constructing a coherent argument (Lindebaum & Fleming, 2024). Simply put, the onus for persuasion resides with the researcher and, developed properly, persuasion can serve as a basis for competitive advantage in a marketplace for ideas. ...
Article
Full-text available
In a marketplace of ideas where theories can act as substitutes, theorists seek to persuade peers to engage with their theories. Given this critical role of persuasion, how do theorists do so? To address this question, the current study adopts a pragmatist perspective and employs the Toulmin model of arguments to examine how Oliver Williamson persuaded his peers to engage with transaction cost economics. The study unpacks how Williamson structured his arguments, introduced new constructs and language, and employed analogies and metaphors to foster a consensus, giving rise to an epistemic community. The study highlights that not only do values influence how arguments are crafted and evaluated, but also appealing to them plays a key role in persuasion. In doing so, the study considers both the rational and non-rational aspects of theorizing and persuasion. Finally, the study discusses the significance of argumentation in the context of AI and theorizing in strategic management.
Article
Artificial intelligence (AI) has become part and parcel of scientific knowledge production since the latest iterations of generative AI models (e.g., ChatGPT, DeepSeek, Claude, or Gemini) became widely available. Given that AI has rapidly evolved since the initial release of ChatGPT in 2022, researching how AI's capabilities impact organizations and how researchers make use of AI tools can be likened to chasing a moving target. In this editorial essay, we explore the implications of the introduction of AI in the context of academic research, both as the subject of investigation (i.e., research on AI) and as a research tool to facilitate academic writing, data generation, or the peer review process (i.e., research with AI). Specifically, concerning research on AI, we consider issues around clarity regarding both existing definitions and concepts in the AI literature and how these are influenced by the rapid technological evolution of AI's capabilities. In regard to research with AI, we reflect on the advantages and disadvantages of the use of AI as a research tool and discuss the Human Relations AI Usage Policy. Overall, our aim is not to be overly prescriptive on how to conduct research on and with AI but to encourage authors to reflect on how to best capture AI as a moving target in the context of their research endeavors.
Article
Full-text available
An uncritical use of AI in higher education can promote losses of competence, control, and social connection, and thereby impair self-determined action, which can be regarded as a value in its own right in teaching, study, and research. Existing research, which is primarily empirical in orientation, contributes little to a better understanding of AI risks or to strengthening a self-determined approach to AI at universities on a scientific basis. This contribution discusses this research deficit, proposes a framing based on the didactics of science (Wissenschaftsdidaktik), and outlines perspectives for higher education research grounded in educational theory and in design-based approaches that complement the empirical line of work.
Article
Full-text available
The future of theory in the age of big data and algorithms is a frequent topic in management research. However, with corporate ownership of big data and data processing capabilities designed for profit generation increasing rapidly, we witness a shift from scientific to 'corporate empiricism'. Building on this debate, our 'Point' essay argues that theorizing in management research is now at risk. Unlike the 'Counterpoint' article, which portrays a bright future for management theory given available technological opportunities, we are concerned about management researchers increasingly 'borrowing' data from the corporate realm (e.g., Google et al.) to build or test theory. Our objection is that this data borrowing can harm scientific theorizing due to how scaling effects, proxy measures and algorithmic decision-making performatively combine to undermine the scientific validity of theories. This undermining occurs through the reduction of scientific explanations, while technology shapes theory and reality in a profit-predicting rather than truth-seeking manner. Our essay has meta-theoretical implications for management theory per se, as well as for political debates concerning the jurisdiction and legitimacy of knowledge claims in management research. Practically, these implications connect to debates on the scientific responsibilities of researchers.
Article
Full-text available
In this essay, we question evaluative practices concerning 'scientific excellence' as solely captured in so-called 'A journals', because they can entail a disconnection between the measure and its contents. Where this occurs, we start writing for our own immediate 'survival' and long-term social standing among our peers. Along the way, however, there is a risk that we lose sight of what 'meaningful' research can feel and look like. While research must be reliable or trustworthy, we advocate the use of complementary evaluative practices that involve learned societies, along with employee- or employer-focused organisations and their assessment of the meaningfulness of published research in their respective contexts. Our proposal (i) encourages diverse forms of research contribution, (ii) enables researchers to develop a collaborative approach that supports engagement and dialogue between researchers and the appropriate audiences for their work, and (iii) ensures that the impact of research can be developed more intentionally after publication.
Article
Full-text available
From employees' varied interpretations of software efficacy to consumers' diverse beliefs about data privacy, technology frames refer to the cognitive interpretations, assumptions and expectations that people use to comprehend the essence of information technology within a particular context. These frames differ across groups with different values, interests, experiences and expertise, having critical implications for researchers, managers and organizations. Despite theoretical enthusiasm to understand technology frames, limited methodological insights exist on how to systematically explore and compare technology frames. This gap impedes researchers from exploring novel questions related to technology frames, their variations and how they can be managed effectively. This paper proposes a cognitive method for comparing and elaborating on technology frames. Building on causal mapping and empirical studies, the method formulates steps to plan, elicit, compare and elaborate on the relationships that underlie framing differences. The method offers detailed recommendations and templates for effectively organizing and communicating diverse manifestations of framing differences and their implications. The paper concludes by highlighting the method's practical implications and encouraging research to advance extant knowledge of technology frames in the rapidly changing digital world.
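To make the method's compare step concrete, the sketch below represents two groups' elicited causal maps as signed links between concepts and lists where their framings diverge. The data structure, weights, and toy data are assumptions for illustration only, not the paper's templates.

```typescript
// A causal map as a list of signed, weighted links between concepts.
type CausalLink = { from: string; to: string; weight: number }; // e.g., weight in [-3, 3]
type CausalMap = CausalLink[];

// Toy elicited maps from two stakeholder groups.
const managers: CausalMap = [
  { from: "new software", to: "productivity", weight: 3 },
  { from: "new software", to: "monitoring", weight: 1 },
];
const employees: CausalMap = [
  { from: "new software", to: "productivity", weight: 1 },
  { from: "new software", to: "monitoring", weight: 3 },
  { from: "monitoring", to: "trust", weight: -2 },
];

// Compare step: which causal beliefs diverge between the two groups?
function compareMaps(a: CausalMap, b: CausalMap): string[] {
  const key = (l: CausalLink) => `${l.from} -> ${l.to}`;
  const bIndex = new Map<string, number>();
  for (const l of b) bIndex.set(key(l), l.weight);
  const diffs: string[] = [];
  for (const link of a) {
    const other = bIndex.get(key(link));
    if (other === undefined) diffs.push(`${key(link)}: only group A`);
    else if (other !== link.weight) diffs.push(`${key(link)}: A=${link.weight}, B=${other}`);
    bIndex.delete(key(link));
  }
  for (const k of bIndex.keys()) diffs.push(`${k}: only group B`);
  return diffs;
}

console.log(compareMaps(managers, employees));
// -> divergences over productivity and monitoring, plus a trust link held only by employees
```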
Article
Full-text available
Transformative artificially intelligent tools, such as ChatGPT, designed to generate sophisticated text indistinguishable from that produced by a human, are applicable across a wide range of contexts. The technology presents opportunities as well as challenges (often ethical and legal), and has the potential for both positive and negative impacts on organisations, society, and individuals. Offering multi-disciplinary insight into some of these, this article brings together 43 contributions from experts in fields such as computer science, marketing, information systems, education, policy, hospitality and tourism, management, publishing, and nursing. The contributors acknowledge ChatGPT's capabilities to enhance productivity and suggest that it is likely to offer significant gains in the banking, hospitality and tourism, and information technology industries, and enhance business activities, such as management and marketing. Nevertheless, they also consider its limitations, disruptions to practices, threats to privacy and security, and consequences of biases, misuse, and misinformation. However, opinion is split on whether ChatGPT's use should be restricted or legislated. Drawing on these contributions, the article identifies questions requiring further research across three thematic areas: knowledge, transparency, and ethics; digital transformation of organisations and societies; and teaching, learning, and scholarly research. The avenues for further research include: identifying skills, resources, and capabilities needed to handle generative AI; examining biases of generative AI attributable to training datasets and processes; exploring business and societal contexts best suited for generative AI implementation; determining optimal combinations of human and generative AI for various tasks; identifying ways to assess accuracy of text produced by generative AI; and uncovering the ethical and legal issues in using generative AI across different contexts.
Article
Current and future developments in artificial intelligence (AI) systems have the capacity to revolutionize the research process for better or worse. On the one hand, AI systems can serve as collaborators as they help streamline and conduct our research. On the other hand, such systems can also become our adversaries when they impoverish our ability to learn as theorists, or when they lead us astray through inaccurate, biased, or fake information. No matter which angle is considered, and whether we like it or not, AI systems are here to stay. In this curated discussion, we raise questions about human centrality and agency in the research process, and about the multiple philosophical and practical challenges we are facing now and ones we will face in the future.