Article

Experimental evidence on the productivity effects of generative artificial intelligence


Abstract

We examined the productivity effects of a generative artificial intelligence (AI) technology, the assistive chatbot ChatGPT, in the context of midlevel professional writing tasks. In a preregistered online experiment, we assigned occupation-specific, incentivized writing tasks to 453 college-educated professionals and randomly exposed half of them to ChatGPT. Our results show that ChatGPT substantially raised productivity: The average time taken decreased by 40% and output quality rose by 18%. Inequality between workers decreased, and concern and excitement about AI temporarily rose. Workers exposed to ChatGPT during the experiment were 2 times as likely to report using it in their real job 2 weeks after the experiment and 1.6 times as likely 2 months after the experiment.


... This technology offers both promise and peril: while democratizing access to creative tools, it also risks deepening cognitive and social inequalities. Scholars have highlighted generative AI's potential to augment human creativity in areas as diverse as writing, music, and visual arts (Noy & Zhang, 2023; Zhou & Lee, 2024; Nakavachara et al., 2024). Yet others caution that such advancements may exacerbate disparities in skill valuation, favoring those who can effectively leverage AI while marginalizing others (Acemoglu et al., 2022; Doshi & Hauser, 2024; Lee & Chung, 2024; Eloundou et al., 2023). ...
... While initial evidence suggests that generative AI can enhance creative performance, answers to this nuanced question remain elusive (Jia et al., 2023; Li et al., 2024). Some argue that AI could reduce inequality by leveling the playing field, allowing lower-performing individuals to close the performance gap (Eloundou et al., 2023; Noy & Zhang, 2023). Yet studies also suggest that much of the observed performance gain stems from participants relying heavily on AI-generated outputs with minimal human input, resulting in automation rather than meaningful human-AI collaboration (Noy & Zhang, 2023; Doshi & Hauser, 2024). ...
... Some argue that AI could reduce inequality by leveling the playing field, allowing lower-performing individuals to close the performance gap (Eloundou et al., 2023; Noy & Zhang, 2023). Yet studies also suggest that much of the observed performance gain stems from participants relying heavily on AI-generated outputs with minimal human input, resulting in automation rather than meaningful human-AI collaboration (Noy & Zhang, 2023; Doshi & Hauser, 2024). This paradox highlights the need to examine whether generative AI truly democratizes creativity or amplifies disparities by favoring those already equipped with the skills to use it effectively. ...
Preprint
Full-text available
Generative AI is rapidly reshaping creative work, raising critical questions about its beneficiaries and societal implications. This study challenges prevailing assumptions by exploring how generative AI interacts with diverse forms of human capital in creative tasks. Through two randomized controlled experiments in flash fiction writing and song composition, we uncover a paradox: while AI democratizes access to creative tools, it simultaneously amplifies cognitive inequalities. Our findings reveal that AI enhances general human capital (cognitive abilities and education) by facilitating adaptability and idea integration but diminishes the value of domain-specific expertise. We introduce a novel theoretical framework that merges human capital theory with the automation-augmentation perspective, offering a nuanced understanding of human-AI collaboration. This framework elucidates how AI shifts the locus of creative advantage from specialized expertise to broader cognitive adaptability. Contrary to the notion of AI as a universal equalizer, our work highlights its potential to exacerbate disparities in skill valuation, reshaping workplace hierarchies and redefining the nature of creativity in the AI era. These insights advance theories of human capital and automation while providing actionable guidance for organizations navigating AI integration amidst workforce inequalities.
... This study presents and evaluates a method of symbiotic productivity improvement using generative AI, aimed at creating a synergistic combination of human expertise and AI capabilities in the performance of complex tasks. The study concentrates on the business sector of the economy because of its significant influence on the economy as a whole and its high potential for integrating generative AI. ...
... Recent studies have demonstrated a positive effect of AI on labor productivity. In an experimental study, Noy and Zhang showed that using AI reduced the time spent on writing tasks by 10 minutes, or 37% [10]. As Nielsen reports, generative AI tools increased the productivity of business users by 66% on work tasks [8]. ...
... The results of this study provide empirical support for the effectiveness of AI in raising labor productivity and task quality. The significant improvements observed both in productivity measures (task-completion speed and completion rate) and in quality outcomes are consistent with previous research on AI-assisted work [10, 8, 9, 2]. ...
... The relationship between productivity and effort is integral, with the latter frequently being a prerequisite for achieving elevated performance levels. This in turn influences acceptance and adoption of AI, demonstrating that higher productivity is not merely a consequence but a catalyst for technological engagement (Noy & Zhang, 2023). ...
... The term "perceived productivity" describes a person's subjective assessment of their own degree of production (Vuolle et al., 2008). It is based on their personal assessment of how efficiently and effectively they are accomplishing tasks and achieving goals (Noy & Zhang, 2023). ...
... This is partially in consensus with the literature reporting that ease of use determines the intention to use AI (Chatterjee et al., 2021; Hao et al., 2021; Hong, 2022). Contrary to Noy and Zhang (2023), productivity was measured in the current study using subjective measurement; however, using more objective measurements such as earnings per minute or task completion rate may reveal more significant results. ...
Article
Full-text available
In the business corporate world, artificial intelligence (AI) is becoming a disruptive force. This study explores the intricacies of adopting AI in corporate environments, emphasizing factors that affect both behavioral intentions and real usage patterns. This study, which drew on the Unified Theory of Acceptance and Use of Technology (UTAUT), identified the distinctive features of AI and added new determinants, including perceived humanness, bias, job threat, functionality, transparency, and privacy and security issues. These determinants cover technological, human-centric, and situational aspects which can either catalyze or hinder AI acceptance. Our quantitative research, involving 223 professionals across diverse sectors in Saudi Arabia, expanded the UTAUT model by revealing critical factors driving AI acceptance, including ethics and privacy considerations. Intriguingly, certain latent factors were identified to inversely affect AI application. This research addresses important ethical, security, and operational issues related to AI deployment, while also expanding the theoretical understanding of AI's role in business. Such insights are paramount for decision-makers, practitioners, and academics alike, ensuring the sustainable and responsible incorporation of AI in the business realm.
... The study by Noy et al. has shown that ChatGPT substantially raised working efficiency and writing quality for workers in multiple fields undertaking generic writing tasks [15]. However, these tasks differ from academic writing. ...
... Time taken during the task serves as a direct and potent metric for efficacy assessment. Our findings align consistently with the majority of research, indicating improved efficacy when utilizing ChatGPT assistance [15, 22, 23]. This positive influence stems from the tool's support in various aspects, encompassing research topic selection [24], experimental design, literature research, and composition of manuscripts [10, 25-27]. ...
... This approach accounts for potential edits and corrections, potentially enhancing accuracy by students. This positive trend in improving writing quality is consistent with a separate study [15] in which substantial bonuses served as a motivating incentive for high-quality experiment completion. Thus, the provision of accurate guidance is pivotal in mitigating potential pitfalls associated with employing ChatGPT as a writing aid for students. ...
Article
Full-text available
Background ChatGPT is widely used for writing tasks, yet its effects on medical students’ academic writing remain underexplored. This study aims to elucidate ChatGPT’s impact on academic writing efficiency and quality among medical students, while also evaluating students’ attitudes towards its use in academic writing. Methods We collected systematic reviews from 130 third-year medical students and administered a questionnaire to assess ChatGPT usage and student attitudes. Three independent reviewers graded the papers using EASE guidelines, and statistical analysis compared articles generated with or without ChatGPT assistance across various parameters, with rigorous quality control ensuring survey reliability and validity. Results In this study, 33 students (25.8%) utilized ChatGPT for writing (ChatGPT group) and 95 (74.2%) did not (Control group). The ChatGPT group exhibited significantly higher daily technology use and prior experience with ChatGPT (p < 0.05). Writing time was significantly reduced in the ChatGPT group (p = 0.04), with 69.7% completing tasks within 2–3 days compared to 48.4% in the control group. They also achieved higher article quality scores (p < 0.0001) with improvements in completeness, credibility, and scientific content. Self-assessment indicated enhanced writing skills (p < 0.01), confidence (p < 0.001), satisfaction (p < 0.001) and a positive attitude toward its future use in the ChatGPT group. Conclusions Integrating ChatGPT in medical academic writing, with proper guidance, improves efficiency and quality, illustrating artificial intelligence’s potential in shaping medical education methodologies.
... For example, Noy and Zhang (2023) discovered that integrating ChatGPT significantly enhances productivity in college-educated professions by reducing the time needed for tasks by 40% and improving output quality by 18%. Similarly, Dell'Acqua et al. (2023) demonstrated that AI integration in a professional's workflow, specifically for tasks within AI's capabilities, leads to notable performance gains. ...
... While the overall progress was gradual, the youngest top players nevertheless displayed accelerated gains during a key AI breakthrough. Our findings demonstrate that even the most elite performers can benefit from AI developments when the conditions are right, a conclusion that has remained elusive in recent research on AI's impact on human performance (Dell'Acqua et al., 2023; Noy & Zhang, 2023). Our results align well with the perspective (Kasparov, 2017; Mollick, 2024) that recent AI advancements offer a unique opportunity for collaboration, rather than competition, even at the pinnacle of chess, a game long regarded by the AI field as a quintessential example of human intelligence. ...
Article
Full-text available
Advances in Artificial Intelligence (AI) have made significant strides in recent years, often supplementing rather than replacing human performance. The extent of their assistance at the highest levels of human performance remains unclear. We analyse over 11.6 million decisions of elite chess players, a domain commonly used as a testbed for AI and psychology due to its complexity and objective assessment. We investigated the impact of two AI chess revolutions: the first in the late 1990s with the rise of powerful PCs and internet access and the second in the late 2010s with deep learning‐powered chess engines. The rate of human improvement mirrored AI advancements, but contrary to expectations, the quality of decisions mostly improved steadily over four decades, irrespective of age, with no distinct periods of rapid improvement. Only the youngest top players saw marked gains in the late 1990s, likely due to better access to knowledge and computers. Surprisingly, the recent wave of neural network‐powered engines has not significantly impacted the best players – at least, not yet. Our research highlights AI's potential to enhance human capability in complex tasks, given the right conditions, even among the most elite performers.
... Although such AI systems may outperform humans in some specific tasks, their predominant value proposition lies in supporting human capabilities. They serve as robust tools that enhance decision-making, elevate the quality of analytical endeavors, and augment overall productivity [19]. For instance, complex tasks such as consolidating multiple financial statements from diverse subsidiaries of a large corporation can be streamlined by an LLM-based system [20]. ...
Article
Full-text available
This paper introduces MarketSenseAI, an innovative framework leveraging GPT-4’s advanced reasoning for selecting stocks in financial markets. By integrating Chain of Thought and In-Context Learning, MarketSenseAI analyzes diverse data sources, including market trends, news, fundamentals, and macroeconomic factors, to emulate expert investment decision-making. The development, implementation, and validation of the framework are elaborately discussed, underscoring its capability to generate actionable and interpretable investment signals. A notable feature of this work is employing GPT-4 both as a predictive mechanism and signal evaluator, revealing the significant impact of the AI-generated explanations on signal accuracy, reliability, and acceptance. Through empirical testing on the competitive S&P 100 stocks over a 15-month period, MarketSenseAI demonstrated exceptional performance, delivering excess alpha of 10–30% and achieving a cumulative return of up to 72% over the period, while maintaining a risk profile comparable to the broader market. Our findings highlight the transformative potential of Large Language Models in financial decision-making, marking a significant leap in integrating generative AI into financial analytics and investment strategies.
... For example, Transformer-based models like ChatGPT (OpenAI, 2024) have demonstrated remarkable abilities in tasks ranging from drafting emails to writing code, showcasing the versatility of GenAI in understanding and generating natural language. Noy and Zhang (2023) reported that ChatGPT can substantially raise productivity, decreasing the average time spent on midlevel professional writing tasks by 40% and increasing output quality by 18%. ...
Preprint
A key stumbling block in effective supply chain risk management for companies and policymakers is a lack of visibility on interdependent supply network relationships. Relationship prediction, also called link prediction is an emergent area of supply chain surveillance research that aims to increase the visibility of supply chains using data-driven techniques. Existing methods have been successful for predicting relationships but struggle to extract the context in which these relationships are embedded - such as the products being supplied or locations they are supplied from. Lack of context prevents practitioners from distinguishing transactional relations from established supply chain relations, hindering accurate estimations of risk. In this work, we develop a new Generative Artificial Intelligence (Gen AI) enhanced machine learning framework that leverages pre-trained language models as embedding models combined with machine learning models to predict supply chain relationships within knowledge graphs. By integrating Generative AI techniques, our approach captures the nuanced semantic relationships between entities, thereby improving supply chain visibility and facilitating more precise risk management. Using data from a real case study, we show that GenAI-enhanced link prediction surpasses all benchmarks, and demonstrate how GenAI models can be explored and effectively used in supply chain risk management.
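The pipeline described in the abstract above, pre-trained language-model embeddings of entities feeding a downstream classifier that scores candidate supplier-buyer edges, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's actual framework: the entity descriptions, the hash-seeded stand-in for a real embedding model, and the tiny logistic-regression link classifier are all illustrative.

```python
import math
import random

def embed(text, dim=8):
    """Stand-in for a pre-trained language-model embedding (hash-seeded,
    deterministic). A real pipeline would call an actual embedding model."""
    rnd = random.Random(text)
    return [rnd.gauss(0.0, 1.0) for _ in range(dim)]

def features(u_text, v_text):
    # Represent a candidate edge as the concatenated endpoint embeddings.
    return embed(u_text) + embed(v_text)

# Hypothetical entity descriptions combining company, product, and location,
# i.e. the "context" the excerpt says existing link predictors miss.
E = {
    "A": "Acme Corp | steel sheets | Germany",
    "B": "Bolt Ltd | car chassis | France",
    "C": "Chip Co | semiconductors | Taiwan",
    "D": "Drive Inc | electric motors | Japan",
}
positive = [("A", "B"), ("C", "D")]   # known supplier->buyer edges
negative = [("A", "C"), ("B", "D")]   # sampled non-edges

data = [(features(E[u], E[v]), 1) for u, v in positive] + \
       [(features(E[u], E[v]), 0) for u, v in negative]

# Tiny logistic-regression link classifier trained by plain SGD.
w = [0.0] * len(data[0][0])
b = 0.0
for _ in range(2000):
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - y  # gradient of the log loss w.r.t. z
        w = [wi - 0.1 * g * xi for wi, xi in zip(w, x)]
        b -= 0.1 * g

def link_prob(u, v):
    """Predicted probability that u supplies v."""
    z = sum(wi * xi for wi, xi in zip(w, features(E[u], E[v]))) + b
    return 1.0 / (1.0 + math.exp(-z))

print(round(link_prob("A", "B"), 2))
```

In a real deployment the `embed` stub would be replaced by an actual pre-trained embedding model and the classifier trained on many observed supply-network relationships rather than four toy pairs.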
... ChatGPT, and subsequently other competing language models, was rapidly adopted across a range of sectors and has had a considerable impact on the work of users around the world [1]. The use of generative AI has been shown to improve efficiency and quality in the performance of a wide range of tasks [2]. There is, however, limited research on how such technology affects creativity, a fundamental human trait [3]. ...
Conference Paper
Full-text available
Abstract. This article presents the results of a study examining the impact that the use of generative artificial intelligence can have on the individual creativity of knowledge workers in the consulting industry. The study is based on interviews with nine informants from two consulting firms. We find that the use of generative AI can have both positive and negative effects, and we discuss important preconditions for AI to promote rather than inhibit human creativity. The study contributes to research on the significance of AI for organizations and is relevant not only to the consulting industry but also to other forms of knowledge work. Keywords: Artificial intelligence, AI, Creativity, Knowledge workers. 1 Introduction. Artificial intelligence may bring about fundamental changes in how we understand and apply creativity [1]. Generative AI demonstrates an extensive capacity to perform tasks that can be characterized as creative, including the generation of text, music, code, images, and video. The development and use of generative
... In Study 2, seven out of eight participants expressed that 10 years from now, a moderate amount of their current tasks in architectural design could be done by a machine instead of themselves. These observations, consistent with recent studies of generative AI-assisted writing (Noy and Zhang, 2023), underscore the importance of investigating effective human-AI approaches within GD. In this paper, we set out to investigate to what extent the HI approach helps human experts build a sense of partnership in design co-creation. ...
Article
Full-text available
The Hybrid Intelligence Technology Acceptance Model (HI-TAM) presented in this paper offers a novel framework for training and adopting generative design (GD) assistants, facilitating co-creation between human experts and AI systems. Despite the promising outcomes of GD, such as augmented human cognition and highly creative design products, challenges remain in the perception, adoption, and sustained collaboration with AI, especially in creative design industries where personalized and specialized assistance is crucial for individual style and expression. In this two-study paper, we present a holistic hybrid intelligence (HI) approach for individual experts to train and personalize their GD assistants on-the-fly. Culminating in the HI-TAM, our contribution to human-AI interaction is 4-fold including (i) domain-specific suitability of the HI approach for real-world application design, (ii) a programmable common language that facilitates the clear communication of expert design goals to the generative algorithm, (iii) a human-centered continual training loop that seamlessly integrates AI training into the expert's workflow, (iv) a hybrid intelligence narrative that encourages the psychological willingness to invest time and effort in training a virtual assistant. This approach facilitates individuals' direct communication of design objectives to AI and fosters a psychologically safe environment for adopting, training, and improving AI systems without the fear of job-replacement. To demonstrate the suitability of HI-TAM, in Study 1 we surveyed 41 architectural professionals to identify the most preferred workflow scenario for an HI approach. In Study 2, we used mixed methods to empirically evaluate this approach with 8 architectural professionals, who individually co-created floor plan layouts of office buildings with a GD assistant through the lens of HI-TAM. 
Our results suggest that the HI-TAM enables professionals, even non-technical ones, to adopt and trust AI-enhanced co-creative tools.
... From the perspective of teachers and researchers, AI bots can serve as powerful aids, streamlining tasks such as creating class content, planning personalized learning sequences, writing and proofreading texts, translating, grading, organizing a course or a set of courses in a specific field, and proposing meaningful problems, lab assignments, etc. (Noy & Zhang, 2023; Nazaretsky et al., 2022; Hadi Mogavi et al., 2024). ...
Article
Full-text available
Understanding how students interact with AI bots is a first step towards integrating them into instructional design. In this report, the results of a survey conducted in three European higher education institutions, and in the context of four different areas are presented. Among other things, they reveal for what purposes students use ChatGPT, whether they trust and feel satisfied with the interaction, how they perceive ChatGPT as a tool to support learning, and if they intend to use it in the future. The study compares results across groups by analyzing data obtained from convenience samples, which include participants of three European countries, with diverse backgrounds, varying technology and science-related fields, as well as academic program levels. Students’ opinions regarding the utilization of ChatGPT in assessments are also documented, along with their perspectives on the potential future applications of these AI tools. The authors, teaching different subjects at different levels of higher education programs, describe their views on integrating ChatGPT and similar AI bots into instructional design.
... Generative AI, a distinct yet related branch of AI, has emerged as a significant field capable of creating new content, whether text, images, audio, or code, by identifying and replicating patterns from learned data [13][14][15]. This innovative technology has gained traction due to its ability to mimic human creativity and enhance productivity across diverse fields, from language generation to data analysis [16]. What sets generative AI apart from traditional AI is its capacity to produce original outputs rather than relying solely on predefined responses, making it especially valuable in environments that demand creativity, adaptability, and personalization [17]. ...
Article
Full-text available
This paper explores the potential of generative artificial intelligence (AI) to transform higher education. Generative AI is a technology that can create new content, like text, images, and code, by learning patterns from existing data. As generative AI tools become more popular, there is growing interest in how AI can improve teaching, learning, and research. Higher education faces many challenges, such as meeting diverse learning needs and preparing students for fast-changing careers. Generative AI offers solutions by personalizing learning experiences, making education more engaging, and supporting skill development through adaptive content. It can also help researchers by automating tasks like data analysis and hypothesis generation, making research faster and more efficient. Moreover, generative AI can streamline administrative tasks, improving efficiency across institutions. However, using AI also raises concerns about privacy, bias, academic integrity, and equal access. To address these issues, institutions must establish clear ethical guidelines, ensure data security, and promote fairness in AI use. Training for faculty and AI literacy for students are essential to maximize benefits while minimizing risks. The paper suggests a strategic framework for integrating AI in higher education, focusing on infrastructure, ethical practices, and continuous learning. By adopting AI responsibly, higher education can become more inclusive, engaging, and practical, preparing students for the demands of a technology-driven world.
... These include enhancing scientific writing, promoting equity and versatility in research, supporting medical research through efficient data analysis and reviews, improving healthcare practices, and advancing healthcare education and learning. [2][3][4][5][6][7] Drawbacks have also been pointed out for medical applications, including a lack of consideration of all the determinants that influence medical advice with ethical implications if patients experience harm. 3,4,8,9 In medical education, ChatGPT demonstrates potential in several important areas. ...
Article
Full-text available
The large language model (LLM) ChatGPT can answer open-ended and complex questions, but its accuracy in providing reliable medical information requires a careful assessment. As part of the AICHECK (Artificial Intelligence for CME Health E-learning Contents and Knowledge) Study, aimed at evaluating the potential of ChatGPT in continuous medical education (CME), we compared ChatGPT-generated educational contents to the recommendations of the National Institute for Health and Care Excellence (NICE) guidelines on acne vulgaris. ChatGPT version 4 was exposed to a 23-item questionnaire developed by an experienced dermatologist. A panel of five dermatologists rated the answers positively in terms of “quality” (87.8%), “readability” (94.8%), “accuracy” (75.7%), “thoroughness” (85.2%), and “consistency” with guidelines (76.8%). The references provided by ChatGPT obtained positive ratings for “pertinence” (94.6%), “relevance” (91.2%), and “update” (62.3%). The internal reproducibility was adequate both for answers (93.5%) and references (67.4%). Answers related to issues of uncertainty and/or controversy in the scientific community scored the lowest. This study underscores the need to develop rigorous evaluation criteria for AI-generated medical content and for expert oversight to ensure accuracy and guideline adherence.
... Such platforms now directly impact art management and organizations operating within the production of cultural goods (Anantrasirichai and Bull 2022; McCormack et al. 2023). Although GAI's potential to boost or harm human creativity is still being debated (Granulo et al. 2021; Ameen et al. 2022; De Cremer et al. 2023), it has unequivocally proven valuable in speeding up content production and critical for competitive advantage (Krakowski et al. 2023; McKinsey 2023; Noy and Zhang 2023). ...
Article
Full-text available
Generative Artificial Intelligence (GAI) has the potential to automate, integrate or augment human creativity. Current literature reveals that organizations adopting such disruptive technology can both boost or hinder human creativity. Such ambiguity poses an ethical dilemma for decision-makers: while managers are pressured to adopt GAI quickly for optimization, holding on to their economic responsibilities, they must also ensure that its deployment is ethically enrooted and yields people-centered outcomes. This work seeks to discuss and inform managerial decision-making upon GAI deployment, by elucidating how ethically-salient dimensions of human creativity can be safeguarded and supported through GAI adoption. To do so, we draw on Personalism and its account of human creativity, as tied to inner morality and intrinsic dignity of the person. By this way, we present a model that highlights how three core dimensions—uniqueness, relationality, and unpredictability—are essential to preserve the human element in creative tasks in GAI adoption. Overall, this normative work contributes to enhance our knowledge on personalism within organizational studies, to shed new light on how organizations can safeguard the ethical nexus between human creativity and human intrinsic dignity, and to highlight how humanism in business can support people-centered AI deployment.
... Interestingly, individuals with lower skill levels benefited most from ChatGPT, indicating that AI may be able to close productivity gaps and lessen productivity disparity. This study found that the generative writing tool reduced the amount of time workers spent on assignments and increased the output of workers with fewer skills (Noy & Zhang, 2023). ...
Article
Full-text available
In today's rapidly evolving digital landscape, the convergence of Artificial Intelligence (AI) and education has become paramount for organizations seeking to thrive amid the data revolution. This paper explores the integration of Artificial Intelligence into education, emphasizing its transformative potential. It delves into various facets of AI in education, including its benefits, challenges, ethical considerations, impact on teaching practices, utilization of AI-powered tools, and what are the Future Trends and Innovations for AI through real-world examples and educational insights. All of these points through a review of literature and articles from global databases. Findings: This paper underscores AI's significance in education and the role of AI in shaping the future of business in today's dynamic landscape, albeit contingent upon continuous evaluation, addressing the challenges and ethical dilemmas associated with its implementation. In conclusion, we emphasize the necessity of collaborative efforts among educators, technologists, policymakers, and industry stakeholders to harness AI's full potential of AI in education and integrate it into the educational landscape.
... Agents automating web-based tasks with minimal human intervention can significantly boost personal and workplace productivity [35,36]. A prevalent interaction mechanism involves a human providing a natural language instruction (e.g., "use delta.com to book a flight from JFK to Haneda on . . . ...
Preprint
Full-text available
State-of-the-art multimodal web agents, powered by Multimodal Large Language Models (MLLMs), can autonomously execute many web tasks by processing user instructions and interacting with graphical user interfaces (GUIs). Current strategies for building web agents rely on (i) the generalizability of underlying MLLMs and their steerability via prompting, and (ii) large-scale fine-tuning of MLLMs on web-related tasks. However, web agents still struggle to automate tasks on unseen websites and domains, limiting their applicability to enterprise-specific and proprietary platforms. Beyond generalization from large-scale pre-training and fine-tuning, we propose building agents for few-shot adaptability using human demonstrations. We introduce the AdaptAgent framework that enables both proprietary and open-weights multimodal web agents to adapt to new websites and domains using few human demonstrations (up to 2). Our experiments on two popular benchmarks -- Mind2Web & VisualWebArena -- show that using in-context demonstrations (for proprietary models) or meta-adaptation demonstrations (for meta-learned open-weights models) boosts task success rate by 3.36% to 7.21% over non-adapted state-of-the-art models, corresponding to a relative increase of 21.03% to 65.75%. Furthermore, our additional analyses (a) show the effectiveness of multimodal demonstrations over text-only ones, (b) shed light on the influence of different data selection strategies during meta-learning on the generalization of the agent, and (c) demonstrate the effect of number of few-shot examples on the web agent's success rate. Overall, our results unlock a complementary axis for developing widely applicable multimodal web agents beyond large-scale pre-training and fine-tuning, emphasizing few-shot adaptability.
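The few-shot adaptation idea in the abstract above, conditioning an agent on up to two human demonstrations, reduces to prompt assembly in its in-context variant. The sketch below shows only that assembly step; the demonstration format, field names, and action vocabulary are illustrative assumptions, not the AdaptAgent interface.

```python
def build_adapted_prompt(task, demonstrations, max_demos=2):
    """Assemble a few-shot prompt from up to `max_demos` human demonstrations,
    which an MLLM would then complete with actions for the new task."""
    parts = ["You are a web agent. Imitate the demonstrated style of actions."]
    for i, demo in enumerate(demonstrations[:max_demos], 1):
        parts.append(f"Demonstration {i} (task: {demo['task']}):")
        parts.extend(f"  {step}" for step in demo["actions"])
    parts.append(f"New task: {task}")
    parts.append("Actions:")
    return "\n".join(parts)

# Hypothetical recorded demonstration on an unseen website.
demo = {
    "task": "search the site for 'red shoes'",
    "actions": ["CLICK search_box", "TYPE 'red shoes'", "PRESS Enter"],
}
prompt = build_adapted_prompt("find 'blue jackets' on the site", [demo])
print(prompt)
```

For proprietary models this prompt would be sent as-is; the paper's meta-learned open-weights variant instead fine-tunes the model to exploit such demonstrations, which this sketch does not cover.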
... The use of AI in marketing strategies has been shown to increase companies' effectiveness and competitiveness. A study by Noy and Zhang showed that applying AI can raise productivity and yield better insight into consumer behavior, which is essential in a competitive market (Noy & Zhang, 2023). In addition, AI can improve the understanding of consumer sentiment, enabling companies to respond more quickly to shifts in market preferences (Noranee & Othman, 2023). ...
Article
Full-text available
This study aims to analyze the factors influencing the readiness for artificial intelligence (AI) adoption in marketing among business actors in the creative industry sector in West Java. The research employs a quantitative approach, collecting data through surveys and questionnaires from 226 respondents. Multiple regression analysis was conducted to examine the relationship between technological, organizational, and environmental factors on business performance. The results indicate that organizational context factors significantly influence AI adoption in marketing, while technological and environmental contexts do not show a significant impact. These findings provide insights for business practitioners in formulating more effective strategies for AI adoption to enhance business performance in the creative industry sector.
... FMs are general-purpose technologies (Bresnahan and Trajtenberg 1995; Brynjolfsson, Rock, and Syverson 2021) that define an emerging market (Eloundou et al. 2023;Bommasani et al. 2021, §5.5) in the (digital) economy (Acemoglu and Autor 2010;Acemoglu and Restrepo 2018;Agrawal, Gans, and Goldfarb 2021;Autor et al. 2022). Early work shows that FMs can complete tasks of significant economic value (Noy and Zhang 2023;Felten, Raj, and Seamans 2023;Korinek 2023), i.e. the realizable market potential of FMs. Ecosystem Graphs naturally complements this work by defining the realized impact of FMs at macro-scale, complementing more grounded analyses such as Peng et al. (2023) on developer productivity using GitHub Copilot and Eloundou et al. (2023) on labor exposure using GPT-4. ...
Article
Foundation models (e.g. GPT-4, Gemini, Llama 3) pervasively influence society, warranting greater understanding. While the models garner much attention, accurately characterizing their impact requires considering the broader sociotechnical ecosystem in which they are created and deployed. We propose Ecosystem Graphs as a documentation framework to centralize knowledge of this ecosystem. Ecosystem Graphs is composed of assets (datasets, models, applications) linked together by dependencies that indicate technical and social relationships. To supplement the graph structure, each asset is further enriched with fine-grained metadata, such as the model’s estimated training emissions or licensing guidelines. Since its release in March 2023, Ecosystem Graphs represents an ongoing effort to document 568 assets (112 datasets, 359 models, 97 applications) from 117 organizations. Ecosystem Graphs functions as a multifunctional resource: we discuss two major uses by the 2024 AI Index and the UK’s Competition and Markets Authority that demonstrate the value of Ecosystem Graphs.
... and Silva 2023; Eloundou et al. 2023). Recent research indicates that organizational integration of generative AI can complement the skills of educated professionals, especially early in their careers, increasing productivity and job satisfaction by automating repetitive tasks and making know-how of experienced workers more available to entry-level staff (Noy and Zhang 2023;Brynjolfsson, Li, and Raymond 2023). ...
Article
Calls to use open generative language models in academic research have highlighted the need for reproducibility and transparency in scientific research. However, the impact of generative AI extends well beyond academia, as corporations and public interest organizations have begun integrating these models into their data science pipelines. We expand this lens to include the impact of open models on organizations, focusing specifically on fact-checking organizations, which use AI to observe and analyze large volumes of circulating misinformation, yet must also ensure the reproducibility and impartiality of their work. We wanted to understand where fact-checking organizations use open models in their data science pipelines; what motivates their use of open models or proprietary models; and how their use of open or proprietary models can inform research on the societal impact of generative AI. To answer these questions, we conducted an interview study with N=24 professionals at 20 fact-checking organizations on six continents. Based on these interviews, we offer a five-component conceptual model of where fact-checking organizations employ generative AI to support or automate parts of their data science pipeline, including Data Ingestion, Data Analysis, Data Retrieval, Data Delivery, and Data Sharing. We then provide taxonomies of fact-checking organizations' motivations for using open models and the limitations that prevent them from further adopting open models, finding that they prefer open models for Organizational Autonomy, Data Privacy and Ownership, Application Specificity, and Capability Transparency. However, they nonetheless use proprietary models due to perceived advantages in Performance, Usability, and Safety, as well as Opportunity Costs related to participation in emerging generative AI ecosystems. Finally, we propose a research agenda to address limitations of both open and proprietary models. Our research provides a novel perspective on open models in data-driven organizations.
... Human-computer interaction research, behavioural studies, user research, ethnographic studies, and social, economic, and environmental impact assessments are all established fields with validated frameworks, metrics, and measurement approaches that have been applied to evaluate the safety of generative and other types of AI systems in their proper contexts (e.g. Elish and Watkins (2020); Marda and Narayan (2020); Brynjolfsson, Li, and Raymond (2023); Peng et al. (2023); Noy and Zhang (2023)). These approaches can be leveraged more systematically to comprehensively assess the safety of generative AI systems in relevant contexts, such as specific use cases, user groups, or institutions in which AI systems may be deployed. ...
Article
Generative AI systems produce a range of ethical and social risks. Evaluation of these risks is a critical step on the path to ensuring the safety of these systems. However, evaluation requires the availability of validated and established measurement approaches and tools. In this paper, we provide an empirical review of the methods and tools available to date for evaluating the safety of generative AI systems. To this end, we review more than 200 safety-related evaluations that have been applied to generative AI systems. We categorise each evaluation along multiple axes to create a detailed snapshot of the safety evaluation landscape to date. We release this data for researchers and AI safety practitioners (https://bitly.ws/3hUzu). Analysing the current safety evaluation landscape reveals three systemic "evaluation gaps". First, a "modality gap" emerges as few safety evaluations exist for non-text modalities. Second, a "risk coverage gap" arises as evaluations for several ethical and social risks are simply lacking. Third, a "context gap" arises as most safety evaluations are model-centric and fail to take into account the broader context in which AI systems operate. Devising next steps for safety practitioners based on these findings, we present tactical "low-hanging fruit" steps towards closing the identified evaluation gaps, along with their limitations. We close by discussing the role and limitations of safety evaluation in ensuring the safety of generative AI systems.
... Data protection and transparency are needed as consumers become more suspicious of data collection and use [9,[50][51][52][53]. If poorly developed and maintained, AI systems might perpetuate preconceptions and discrimination, creating ethical issues [2,[54][55][56][57][58][59]. These issues must be addressed to make AI-driven breakthroughs inclusive, fair, and trustworthy. ...
Article
Full-text available
AI is transforming retail and e-commerce with unprecedented personalization, predictive analytics, and real-time customer involvement. AI-powered recommendation engines, chatbots, and sentiment analysis tools enable customer-centric tactics as consumers demand more personalized experiences. AI's capacity to analyze massive volumes of customer data allows merchants to develop personalized shopping experiences that boost customer satisfaction and loyalty. For instance, deep learning-based recommendation systems accurately predict client preferences, increasing conversion rates and average order values. AI-powered predictive analytics is changing inventory management, demand forecasting, and pricing tactics in retail. Stock levels, waste, and profitability are optimized by machine learning algorithms that examine historical sales data, market trends, and customer behavior. Real-time AI insights enable dynamic pricing models that adjust instantaneously to supply and demand changes, maintaining competitiveness in fast-paced e-commerce. AI-enabled real-time engagement is changing business-customer interactions. Conversational AI can answer client questions instantly and personally with smart chatbots and voice assistants, improving user experience and lowering operational expenses. Visual AI technologies like image recognition and augmented reality enable virtual try-ons and visual search, improving online purchasing. The use of AI in retail and e-commerce has highlighted ethical issues such as data privacy and algorithmic fairness. Growing sustainably requires balancing consumer data personalization with trust. This paper examines how AI might improve the retail and e-commerce consumer experience, supported by recent breakthroughs and industry trends. It shows how AI may improve purchasing experiences while tackling implementation and ethical issues in a changing digital economy.
... Climate change and feeding a growing population are unprecedented problems for agriculture [3,[70][71][72][73][74]. Precision farming revolutionizes farming as traditional methods struggle to adapt [2,[75][76][77][78][79][80]. AI is transforming how farmers grow crops, maximize resources, and build climate resilience. ...
Article
Full-text available
AI enables data-driven, climate-resilient crop management, revolutionizing precision farming. Traditional farming methods are failing to maintain productivity and food security as climate change intensifies. AI-powered innovations boost agricultural operations via predictive analytics, real-time data processing, and machine learning algorithms. This article examines how the latest AI technologies in precision farming may improve the climate resilience of crop management. These advancements rely on AI-driven soil analysis, weather prediction, and pest management systems. Machine learning models trained on large datasets accurately anticipate weather patterns and crop health, allowing farmers to make proactive decisions. AI algorithms and IoT sensors monitor soil health, moisture, and nutrient content in real time, enabling precise watering and fertilization. Drones and computer vision improve crop monitoring by detecting illnesses and stress factors early with unparalleled precision. Generative AI models are also used to simulate climatic scenarios to study crop adaptability and generate climate-resilient seed types. AI optimizes water and fertilizer use to maximize yields and promote sustainable farming. Recent research shows that AI can incorporate satellite imagery and field data into comprehensive decision-support systems to help farmers adapt to local and global climates. Precision farming with AI faces obstacles: high implementation costs, data privacy concerns, and the rural digital divide limit its use. This report emphasizes the need for governmental interventions, public-private collaborations, and capacity-building to close these gaps and democratize agricultural AI technologies. AI can empower farmers, alleviate climate threats, and secure the global food chain by encouraging innovation and inclusivity.
... AI systems evaluate historical and real-time data to give employees and management relevant information to make quick decisions. AI algorithms forecast market trends, analyze risks, and suggest investment strategies, helping financial professionals handle uncertainty [70][71][72][73][74]. AI-powered diagnostic devices let doctors find imaging anomalies and recommend treatment approaches based on patient history [75][76][77][78][79][80]. AI improves accuracy and decision-making across industries by delivering data-driven support. ...
Article
Full-text available
Human-artificial intelligence (AI) collaboration is revolutionizing the workplace by increasing productivity and changing job responsibilities. This study examines how people and AI work together to boost efficiency, creativity, and innovation across industries. Employees can focus on strategic decision-making, problem-solving, and interpersonal interactions while AI handles monotonous and data-intensive activities. This transition is changing work positions, requiring a dynamic mix of technical and soft abilities. New AI technologies including generative AI, NLP, and multimodal AI are being integrated into processes, according to the report. These tools empower employees with actionable information and reduce cognitive overload through automation, predictive analytics, and personalized user experiences. New technologies like AI co-pilots and digital twins enable real-time collaboration, scenario modeling, and performance optimization, revolutionizing industries. Skill obsolescence, ethical issues, and organizational re-skilling arise from this change. The study outlines best practices for human-AI partnerships using recent case studies from healthcare, banking, and creative industries. User-centric AI systems reinforce human strengths rather than replace them. The research also emphasizes continuous learning ecosystems, where people have the tools and training to adapt to AI-driven situations. As a balanced perspective, the research shows that human-AI collaboration boosts productivity and job role transformation but also requires proactive efforts to address worker displacement and ethical issues. The findings show that smooth AI integration in the workplace requires transparency, agility, and a human-centered approach. This research guides firms to harness the revolutionary power of human-AI collaboration for sustainable, inclusive workplace growth.
... If training data reflects past inequalities, manufacturing company recruitment AI systems can repeat gender or racial biases. One prominent example is AI-driven hiring systems that rate candidates based on biased past data, rejecting qualified underrepresented prospects [70][71][72][73][74]. Global deployment of industrial AI systems makes this task even more difficult because algorithmic design often ignores cultural and societal diversity. ...
Article
Full-text available
Rapid AI adoption across industries has transformed operations, boosting efficiency, innovation, and competitiveness. However, its expansion has raised important ethical issues. Bias, privacy, and accountability are major obstacles to responsible industrial AI use. Skewed datasets or algorithmic design errors in AI systems promote discrimination, reinforcing inequities in hiring, lending, and healthcare. AI fairness solutions have improved, but bias mitigation in dynamic, real-world industrial contexts is still difficult. As industries use massive amounts of sensitive data for predictive analytics, operational automation, and personalization, privacy concerns rise. The General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) are falling behind technology, which might compromise consumer trust and data security. AI accountability complicates the ethical challenge. Many machine learning models, especially deep learning algorithms, are opaque, making it difficult to trace decisions and to assign responsibility for errors, accidents, and unexpected effects. The stakes are especially high in autonomous mobility and healthcare. Explainable AI (XAI) and AI ethical standards from the IEEE and European Commission can improve transparency and accountability, but their industrial application is patchy. This article examines ethical issues using real-world case studies, new technology, and changing policy frameworks. It emphasizes the necessity for a multidisciplinary strategy with strong technology, aggressive policy, and ethical awareness. Industries can use AI's revolutionary power to build trust and equitable growth by eliminating bias, protecting privacy, and assuring responsibility. This study shows that ethical foresight is crucial to influencing industrial AI's future and ensuring its benefits are shared.
... AI is increasing financial services efficiency, decision-making, and client experience [60][61][62][63][64]. Since the business is more data-driven, AI has gained popularity in fraud detection, risk management, and algorithmic trading [3,[65][66][67][68][69]. Machine learning algorithms, predictive analytics, and real-time data processing help institutions compete in a changing financial sector. ...
Article
Full-text available
Fraud detection, risk management, and algorithmic trading optimization are being revolutionized by AI in financial services. AI reduces false positives and speeds up fraud detection by spotting trends and anomalies in real time using advanced machine learning techniques. Financial institutions can now fight sophisticated cyber attacks with AI-powered fraud detection systems that analyze massive databases and detect illicit conduct with unparalleled accuracy. AI-powered predictive analytics are changing how financial organizations identify and mitigate risks. Institutions can predict credit defaults, market swings, and operational weaknesses using big data and AI. Natural language processing (NLP) techniques are extracting insights from unstructured data sources including regulatory filings and market news to improve decision-making. Real-time risk monitoring systems enable proactive interventions to reduce losses and assure regulatory compliance. AI is transforming algorithmic trading, another financial breakthrough. Advanced machine learning models analyze historical and live market data to predict price movements, find arbitrage opportunities, and execute trades in milliseconds. Reinforcement learning is helping design adaptable algorithms that respond to market changes, increasing profitability and reducing risk. AI also promotes ethical and transparent trading tactics, solving market manipulation problems. This study analyses the newest AI applications in financial services and their disruptive influence. Generative AI, federated learning, and quantum computing will further transform the sector. AI adoption has many benefits, but data privacy, algorithmic bias, and legal complexity must be addressed to sustain progress. AI can improve financial services efficiency, resilience, and creativity, creating a future where technology drives trust and strategic advantage.
... Content generation and customer interaction strategy optimization are both possible with generative AI [84][85][86][87][88]. AI can predict future behavior by evaluating many touchpoint user encounters. ...
Article
Full-text available
Generative AI is transforming marketing and advertising by providing unprecedented personalization and consumer engagement. Advanced models such as ChatGPT, DALL·E, and MidJourney enable marketers to tailor content to particular consumer interests, fostering emotional bonds and brand loyalty. These AI-driven technologies use massive datasets and machine learning algorithms to forecast consumer behavior, create targeted marketing campaigns, and produce convincingly human content, bridging the gap between brands and their target consumers. Generative AI analyzes massive volumes of customer data, including browsing patterns, purchase history, and social media activity, to create personalized advertising tactics in the age of data-driven decision-making. Personalized email marketing, ad creatives, and voice-enabled interactions ensure that consumers receive communications tailored to their interests and requirements, increasing engagement. AI-powered systems forecast the optimal times to communicate with consumers, making campaigns timely and relevant. Scalability and cost savings are possible with generative AI. A/B testing, copywriting, and audience segmentation can be automated to free up resources for creative and strategic work. AI helps improve marketing inclusivity and diversity by creating content that appeals to a wide demographic and respects cultural differences. These advances present obstacles: data privacy, computational biases, and ethics in AI-driven marketing are crucial concerns. Regulators and organizations must balance personalization and customer trust for sustained adoption. Despite these obstacles, generative AI is being adopted across industries, giving organizations new ways to innovate and outperform. This article examines how generative AI is improving personalization, engagement, and ethics in marketing and advertising. The findings show that generative AI can transform industry practices and promote consumer-centric marketing.
... AI-driven supply chain optimization also reduces costs [70][71][72][73][74]. AI finds supply chain inefficiencies and cost-saving opportunities [75][76][77][78][79][80]. Order processing and invoicing management are automated using robotic process automation (RPA), reducing labor costs and errors. ...
Article
Full-text available
AI is transforming supply chain management, especially in improving operations through better demand forecasting and cost reduction. This study examines how AI will redefine supply chain strategy by incorporating advanced machine learning models and data analytics to improve accuracy and efficiency. AI has improved demand forecasting by evaluating massive volumes of historical data and real-time supply chain inputs. These AI systems use predictive analytics to find patterns and trends, helping firms predict demand. This capability reduces overstock and understock, lowering inventory costs and improving service. Neural networks and decision trees have helped process complicated datasets and make more accurate forecasts for seasonal changes, market patterns, and consumer behaviour. AI-driven systems automate routine operations and optimize logistical routes, changing supply chain cost management. RPA and NLP are increasingly utilized to automate order processing and customer service. This streamlines operations and decreases human error and costs. AI's route and schedule optimization reduces fuel costs and speeds delivery, benefiting the bottom line. This paper also considers integrating AI with IoT and blockchain. IoT devices track items in real time, improving transparency and enabling proactive disruption management. Blockchain, in turn, maintains data integrity across the supply chain, fostering stakeholder trust and collaboration. The paper presents case studies of top organizations that have successfully integrated AI into their supply chains, improving demand forecasting accuracy and reducing operational costs. AI's challenges in supply chain management are also examined, including data protection, skilled labor shortages, and startup costs.
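The forecasting idea described above can be illustrated with one of the simplest baselines: exponential smoothing over historical sales, where a smoothing factor weights recent observations more heavily. Production systems use far richer models (neural networks, decision trees, as the abstract notes); this minimal sketch only shows how a learned baseline tracks demand:

```python
# Minimal demand-forecasting sketch: simple exponential smoothing.
# alpha close to 1 reacts quickly to recent sales; close to 0 smooths heavily.
def exponential_smoothing(series, alpha=0.3):
    """Return the smoothed series (one value per observation)."""
    forecast = series[0]          # initialize with the first observation
    forecasts = [forecast]
    for x in series[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
        forecasts.append(forecast)
    return forecasts

sales = [100, 120, 110, 130, 125, 140]   # toy monthly unit sales
print(round(exponential_smoothing(sales)[-1], 1))  # → 124.1
```

The final smoothed value serves as the next-period forecast; choosing `alpha` (here an assumed 0.3) is itself a tuning problem that ML-based systems effectively automate.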
... Unplanned downtime disrupts production plans, delays deliveries, and lowers income, making it a costly manufacturing issue [112][113][114]. AI-driven predictive maintenance detects possible issues early and allows prompt intervention. ...
Article
Full-text available
Predictive maintenance with AI has transformed manufacturing by improving operating efficiency, reducing downtime, and optimizing resource use. AI-powered predictive maintenance monitors equipment performance and predicts breakdowns using advanced machine learning algorithms, IoT sensors, and real-time data analytics. This proactive strategy decreases unplanned downtime and maintenance costs while extending the longevity of essential machinery. AI models find trends and abnormalities in massive historical and real-time data, improving maintenance scheduling and resource allocation. Deep learning and neural networks have improved predictive maintenance procedures, enabling complicated failure mode diagnosis and non-linear degradation pattern prediction. Real-time simulations and scenario studies using digital twins (virtual representations of actual assets) give manufacturers unprecedented insights into equipment performance and failure processes. AI-driven predictive maintenance also supports Industry 4.0 concepts, improving industrial ecosystem connectivity and interoperability. Cloud systems enable scalable, cost-effective implementations, while edge computing ensures real-time, low-latency data processing. These technologies enable manufacturers to go from reactive to predictive and, eventually, prescriptive maintenance, where AI predicts faults and suggests fixes. AI-driven maintenance reduces energy consumption and waste, which supports greener production. Despite these benefits, data security, model interpretability, and competent staffing remain important challenges. To overcome these obstacles, research focuses on explainable AI (XAI) frameworks and strong cybersecurity. AI-driven predictive maintenance will boost innovation, resilience, and competitiveness in the global industrial landscape as production settings become more complex. This article discusses how AI is transforming predictive maintenance and how it will affect manufacturing.
... Machine learning algorithms can detect small irregularities in network traffic, user behavior, and system activity that may suggest malice. Over time, these systems learn from historical data and attack patterns, enhancing accuracy and efficacy [7,[80][81][82][83]. Cyber enemies are becoming more sophisticated, making adaptation crucial. ...
Article
Full-text available
AI in cybersecurity is a disruptive approach to addressing the growing sophistication of cyber threats in the digital age. AI-driven technologies like machine learning (ML) and data analytics may improve threat detection and prevention, according to this study. Modern attack vectors including zero-day vulnerabilities, APTs, and ransomware challenge traditional cybersecurity measures. Adaptive learning models and predictive analytics provide proactive threat detection and real-time mitigation with AI. When trained on massive datasets of historical and real-time threat data, machine learning algorithms can find trends and anomalies that rule-based systems miss. Intrusion detection system (IDS) and endpoint protection platform (EPP) accuracy improves greatly with this capability. AI-powered solutions also automate incident response and dynamic threat hunting, decreasing cyber incident detection and containment times. Data analytics aggregates and analyzes logs from multiple sources to provide actionable insights that improve security. Advanced NLP and generative AI have changed threat intelligence by rapidly analyzing cyber threat reports and dark web activities. AI adoption in cybersecurity faces obstacles including algorithmic bias, adversarial attacks on AI models, and hackers misusing AI. Explainable AI (XAI) frameworks are essential for AI-driven cybersecurity transparency and trust. This article brings together AI and cybersecurity trends, tools, and methodologies to discuss predictive threat modeling and proactive protection measures. Industry leaders' case studies show how AI-based solutions can detect and prevent complex assaults. This study emphasizes the necessity for interdisciplinary collaboration to maximize AI's impact on digital ecosystems while resolving ethical and legal issues. The findings support AI-driven cybersecurity to protect vital infrastructure, corporate networks, and personal data in an increasingly linked world.
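The "deviation from a learned baseline" idea behind the anomaly detection described above can be sketched with a statistical toy: flag observations whose z-score against the historical mean exceeds a threshold. Real ML-based intrusion detection learns far richer models over many features; this only illustrates the core mechanism, and the threshold of 2.5 is an assumed value (with small samples, attainable z-scores are bounded by roughly the square root of the sample size):

```python
# Toy anomaly detector over network-traffic features: flag values whose
# z-score (distance from the mean in standard deviations) exceeds a threshold.
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.5):
    """Return indices of values more than `threshold` std devs from the mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []                 # constant series: nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Requests per second, with one burst that may indicate malicious activity.
traffic = [102, 98, 101, 99, 100, 103, 97, 100, 500, 101]
print(flag_anomalies(traffic))  # → [8]
```

A learning-based system effectively replaces the fixed mean and threshold with a model that adapts to evolving baseline behavior, which is the adaptation the abstract emphasizes.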
... This has sparked Explainable Artificial Intelligence (XAI), which aims to make AI systems' operations and choices clear, interpretable, and trustworthy. XAI is changing how organizations employ AI by explaining machine learning models, addressing accountability, fairness, and ethics while enabling informed decision-making [75][76][77][78][79][80]. AI models, especially those using deep learning and complicated methodologies, are sometimes derided as "black boxes." ...
Article
Full-text available
AI integration across industries has led to automation and data-driven decision-making. AI systems are complex and opaque, which hinders their adoption and use in important business activities. Explainable Artificial Intelligence (XAI) improves AI model interpretability and transparency, enabling trust and informed industrial decision-making. XAI has the ability to improve company processes in finance, healthcare, manufacturing, and customer service, according to this research. To help stakeholders understand AI decision-making, XAI methodologies provide unambiguous explanations of forecasts, actions, and consequences. Transparency boosts end-user confidence and helps firms meet the ethical and regulatory challenges of AI implementation. XAI helps businesses discover AI biases, improve model performance, and improve outcomes through iterative refinement by providing interpretability. AI-driven decisions in healthcare and finance can have a major influence on people's lives and finances, making XAI essential. XAI technical methods such as model-agnostic approaches, explainable neural networks, and post-hoc interpretation are examined in the article. It also addresses computational costs, integration complexity, and the interpretability-performance balance for enterprises implementing XAI. As AI evolves, XAI models will likely become more transparent and accurate, enabling their widespread application across industries. XAI integration in business operations unlocks the full potential of AI systems to improve business efficiency, customer satisfaction, and strategic decision-making by making them powerful, ethical, accountable, and trustworthy.
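One model-agnostic, post-hoc technique of the kind mentioned above is permutation importance: measure how much a model's error grows when a single input feature is shuffled; a large increase suggests the model relies on that feature. The toy "model" and feature names below are assumptions for illustration, not any particular paper's method:

```python
# Permutation-importance sketch: mean increase in squared error when one
# feature column is shuffled. Larger increase → the model depends on it more.
import random

def permutation_importance(model, X, y, feature, trials=20, seed=0):
    """Average rise in MSE over `trials` shuffles of `feature`."""
    rng = random.Random(seed)
    base = sum((model(row) - t) ** 2 for row, t in zip(X, y)) / len(y)
    increases = []
    for _ in range(trials):
        shuffled = [row[feature] for row in X]
        rng.shuffle(shuffled)
        Xp = [dict(row, **{feature: v}) for row, v in zip(X, shuffled)]
        err = sum((model(r) - t) ** 2 for r, t in zip(Xp, y)) / len(y)
        increases.append(err - base)
    return sum(increases) / trials

# Toy model: prediction depends only on 'income' and ignores 'noise'.
model = lambda r: 2.0 * r["income"]
X = [{"income": float(i), "noise": random.random()} for i in range(10)]
y = [model(r) for r in X]

print(permutation_importance(model, X, y, "income") >
      permutation_importance(model, X, y, "noise"))  # → True
```

Because the technique only queries the model's predictions, it applies equally to neural networks, gradient-boosted trees, or any other black box, which is what "model-agnostic" means in the abstract.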
... Moreover, it is important to remember that group differences in predictions from AI systems are not always indicative of bias. Indeed, use of AI in the workplace can increase output quality by almost 20% and reduce the time taken to complete tasks by 40% [63]. However, clerical work is estimated to be the most exposed to the effects of generative AI: roughly 24% of clerical tasks have a high level of AI exposure, compared with an estimated 1-4% of tasks in other occupations. ...
Article
Full-text available
Artificial intelligence (AI) is increasingly infiltrating our lives, and a large proportion of the population use the technology whether they know it or not. While AI can offer significant transformative benefits, this is only true if it is used in a safe and responsible way with the right guardrails. Indeed, there have been several instances of harm resulting from the use of AI without the appropriate safeguards in place. As such, it is unsurprising that there are mixed views of AI in society, where the negative view can in fact manifest as a dystopian view of “robots taking over”. In this paper, we explore these positive and negative views of AI and the factors driving such perceptions. We propose that negative perceptions of AI often concern job displacement, bias and fairness, and misalignment with human values, while positive perceptions typically focus on specific applications and benefits of AI, such as in scientific research, healthcare, and education. Moreover, we posit that the types of perceptions one has about AI are driven by their proximity to AI, whether general or specific applications of AI are being considered, knowledge of AI, and how it is framed in the media. We end with a framework for reducing threat perceptions of AI, such that the technology can be embraced more confidently in tandem with risk management practices.
... AI can help create personalized loyalty programs that appeal to each customer. AI-powered sentiment analysis lets businesses assess customer satisfaction and proactively address negative sentiment [54][55][56][57][58]. This approach helps companies prevent issues from escalating, strengthening relationships and lowering churn. ...
Article
Full-text available
Email marketing with AI has transformed customer engagement, allowing brands to deliver hyper-personalized experiences that boost engagement and retention. This paper examines AI's rapid adoption in email marketing, focusing on its ability to personalize content, optimize timing, and improve message relevance through data-driven insights. With machine learning algorithms, NLP, and predictive analytics, AI helps marketers analyze massive amounts of customer data and gain actionable insights for highly targeted campaigns that match individual preferences and behaviors. Dynamic audience segmentation is a major advancement in AI-powered email marketing. Machine learning models use browsing history, purchase behavior, and interaction frequency to create segments that change with customer actions, keeping messages relevant and timely. AI-driven tools can also predict optimal send times for each recipient, improving open rates and engagement metrics. NLP allows dynamic content generation tailored to individual preferences, improving customer perception of value and relevance in brand interactions. AI automates personalized email journeys, saving time and improving consistency. Businesses can reduce churn and strengthen customer relationships by using AI to make customers feel valued and understood. To build consumer trust, AI-driven marketing must prioritize data privacy and transparency, according to the research. As AI technology advances, email marketing will use data-driven strategies to build deeper customer relationships and increase loyalty and retention. As personalization and engagement become increasingly important in digital marketing, this paper shows how AI can give email marketers an edge.
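The send-time optimization the abstract describes can be reduced to a toy illustration: pick, per recipient, the hour with the highest historical open rate. This is a minimal sketch under assumed data (the `best_send_hour` function, the event-tuple format, and the sample log are all invented here for illustration; production systems use predictive models, not raw frequencies):

```python
from collections import defaultdict

def best_send_hour(events):
    """events: iterable of (user, hour_sent, opened) tuples.

    Returns each user's hour with the highest historical open rate --
    a toy stand-in for the ML-based send-time optimization the paper
    describes.
    """
    # user -> hour -> [opens, sends]
    stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for user, hour, opened in events:
        s = stats[user][hour]
        s[1] += 1           # one more email sent at this hour
        s[0] += int(opened)  # count it as an open if the user opened it
    return {user: max(hours, key=lambda h: hours[h][0] / hours[h][1])
            for user, hours in stats.items()}

# Hypothetical engagement log: two users with opposite habits.
log = [("ana", 9, True), ("ana", 9, True), ("ana", 20, False),
       ("bo", 9, False), ("bo", 20, True)]
print(best_send_hour(log))  # -> {'ana': 9, 'bo': 20}
```

A real system would smooth these estimates (e.g., with priors) to avoid overfitting users with only a handful of sends.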
... AI is also automating marketing tasks, allowing brands to launch large-scale personalized campaigns with little effort. Email marketing, social media ads, and content creation can be automated by AI to personalize messages [54][55][56][57][58]. AI-driven automation allows brands to scale one-to-one consumer interactions like dynamic ad insertion and personalized email recommendations. ...
Article
Full-text available
AI has transformed personalized marketing and consumer behavior analysis, improving customer engagement and conversion rates. This paper examines how machine learning, predictive analytics, and NLP are changing brand-consumer interactions through personalized marketing strategies. AI lets brands create highly targeted marketing messages that better predict consumer needs and preferences by analyzing massive amounts of data. AI tools can optimize communication channels and timing through dynamic content recommendations, personalized email campaigns, and behavior-predictive insights, maximizing marketing impact. The study also examines consumer behavior analytics, where AI creates comprehensive customer profiles from real-time data from social media, e-commerce platforms, and in-store interactions. AI helps marketers identify key touchpoints and develop long-term engagement strategies by decoding consumer preferences. AI combined with AR and VR enhances consumer experiences by enabling immersive product interactions, increasing engagement and purchase intent. This paper also addresses data privacy, algorithmic bias, and transparency in AI-driven consumer interactions as ethical issues in personalized marketing. Responsible AI practices are crucial for consumer trust and regulatory compliance. In today's data-driven economy, AI's personalized marketing and consumer behavior analysis have increased engagement and conversion rates, giving brands an edge. Businesses can improve their marketing strategies and build stronger relationships with customers by evolving with AI, fostering brand loyalty and sustainable growth in the digital age.
... Historical statistical models used for risk assessment often couldn't keep up with market volatility and missed nuanced, rapidly changing data points. However, machine learning can use real-time data from multiple sources to perform dynamic and granular risk assessments [2,3,[67][68][69][70][71]. This agility is especially useful in volatile financial markets, where economic indicators, global events, and social trends change quickly. ...
Article
Full-text available
Financial risk assessment and fraud detection are being transformed by artificial intelligence (AI), which improves efficiency and accuracy. AI-driven systems can process massive amounts of financial data in real time using advanced machine learning (ML) algorithms and big data analytics to identify patterns and anomalies that indicate risks or fraud. Predictive models can analyze spending behavior, transaction history, and social data, improving credit risk assessments. Thus, financial institutions can make better lending decisions, lowering default rates and stabilizing portfolios. AI is also good at detecting unusual patterns that may indicate fraud. AI algorithms can continuously monitor transactions and flag those that deviate from norms or match fraud patterns. Banking and payments fraud losses have decreased due to this scrutiny, ensuring consumer and financial institution security. AI's adaptability lets it respond quickly to new fraud schemes, making it a valuable asset in a changing threat landscape. However, using AI for financial risk assessment and fraud detection raises ethical concerns. Model training with large datasets can lead to data privacy breaches and biases, especially if training data are unrepresentative or skewed. AI's "black box" opacity can make it hard for stakeholders to understand or challenge automated decisions, complicating accountability. AI-driven models may unintentionally discriminate against certain groups, raising lending and risk assessment fairness concerns. AI's opportunities must be balanced with transparency, fairness, and data protection. These ethical issues must be addressed to build trust and sustain AI's use in financial risk assessment and fraud detection.
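The core idea of flagging transactions that "deviate from norms" can be sketched with a simple statistical baseline: score a new transaction against the account's history and flag it if it is several standard deviations from the mean. This is a deliberately crude stand-in, assuming nothing beyond the abstract (the function name, threshold, and sample amounts are invented; production fraud detectors use trained ML models over many features, not a single z-score):

```python
from statistics import mean, stdev

def is_anomalous(history, amount, z_threshold=3.0):
    """Flag a transaction amount that deviates strongly from the
    account's historical spending pattern (simple z-score rule)."""
    mu = mean(history)
    sigma = stdev(history)  # sample standard deviation; needs >= 2 points
    return sigma > 0 and abs(amount - mu) / sigma > z_threshold

# Typical card activity, then one outsized transfer.
history = [42.0, 38.5, 51.2, 47.9, 44.1, 39.6, 45.3]
print(is_anomalous(history, 5000.0))  # -> True
print(is_anomalous(history, 47.0))    # -> False
```

The same fit-on-history, score-new-observation shape carries over to the ML detectors the paper discusses; only the scoring model changes.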
... To ensure healthcare AI applications are reliable and equitable, rigorous validation, oversight, and bias-mitigation are needed. Healthcare AI debates center on ethics [9,[60][61][62][63][64][65]. The use of AI raises questions about accountability, transparency, and human oversight. ...
Article
Full-text available
The rapid development of artificial intelligence (AI) in healthcare offers unprecedented opportunities but also significant challenges and ethical issues. Machine learning, deep learning, and natural language processing are being integrated into healthcare systems to improve diagnostics, personalized treatment plans, predictive analytics, and robotic-assisted surgery. AI can process large datasets faster and more accurately than humans, which has great potential to improve clinical decision-making and patient outcomes. AI-driven radiology and pathology diagnostic tools support early disease detection and precision medicine with remarkable accuracy. Unfortunately, this integration is difficult. Medical data complexity, algorithmic bias, and data privacy are major obstacles. AI models need high-quality, diverse datasets to be accurate and unbiased, but healthcare data vary by demographics and conditions, risking model inefficiency or errors. AI integration into healthcare infrastructure requires significant investment, technical expertise, and strict regulatory standards, which can delay adoption, especially in under-resourced settings. Equally important are ethics. Governance frameworks must address patient privacy, informed consent, decision-making accountability, and reduced human oversight. To maintain patient trust and reduce AI-driven errors, transparency and explainability are essential as AI becomes more involved in diagnostics and treatment recommendations. To overcome these challenges and establish strong regulatory and ethical frameworks, technologists, healthcare professionals, and policymakers must work together to deploy AI in healthcare. This will ensure AI is used responsibly, equitably, and effectively, improving patient care and operational efficiency while upholding ethical standards in healthcare.
... By recognizing customer behavior patterns, AI can predict needs, predict future interactions, and make user-friendly suggestions [54-58]. Modern businesses need this level of personalization because consumers value unique experiences that match their tastes and needs [3,[59][60][61][62]. Therefore, hyper-personalization boosts customer satisfaction and loyalty because customers are more likely to return to brands that understand their preferences [2-3,63-66]. ...
Article
Full-text available
AI-driven customer service is revolutionizing how businesses interact with customers by improving personalization, loyalty, and satisfaction through data-driven insights and responsive interactions. AI technologies like machine learning (ML), natural language processing (NLP), and generative models allow companies to scale customer experiences that match individual preferences, behaviors, and needs. AI tools in customer service, such as chatbots and virtual assistants, are improving response times and issue resolution, increasing customer satisfaction and loyalty. Companies can analyze massive datasets in real time using AI to improve customer profiles and predict future needs. AI-driven systems boost brand loyalty by personalizing interactions and making customers feel valued. Additionally, generative AI models like ChatGPT are improving customer engagement and reducing friction by providing human-like responses to conversational experiences. AI-driven sentiment analysis tools help businesses anticipate customer dissatisfaction by assessing customer emotions and feedback. Along with personalization, AI-based solutions improve customer loyalty programs by making them more dynamic and engaging. Businesses can identify high-value customers, personalize offers, and encourage repeat business with predictive analytics. Despite these advances, ethical issues like data privacy and customer interaction must be addressed. As AI-driven customer service evolves, balancing automation and personalized human interaction is crucial. This paper examines current trends, case studies, and future developments to demonstrate how AI can transform service environments into customer-centric, responsive, and adaptable ones that foster long-term customer loyalty and satisfaction.
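The sentiment analysis mentioned above can be illustrated, at its simplest, by a lexicon-based scorer that rates a message between -1 (negative) and 1 (positive). This is an assumed toy example (the word lists and function are invented for illustration; the paper's systems would use trained NLP models):

```python
NEG = {"slow", "broken", "refund", "angry", "worst", "cancel"}
POS = {"great", "love", "fast", "helpful", "thanks", "excellent"}

def sentiment_score(message):
    """Crude lexicon score in [-1, 1]: (+1 per positive word, -1 per
    negative word), normalized by the number of matched words."""
    words = message.lower().split()
    hits = sum((w in POS) - (w in NEG) for w in words)
    matched = sum(w in POS or w in NEG for w in words)
    return hits / matched if matched else 0.0

print(sentiment_score("great fast helpful"))   # -> 1.0
print(sentiment_score("worst service cancel")) # -> -1.0
```

A business could route messages scoring below some threshold to a human agent, which is the proactive escalation pattern the abstract describes.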
... Academic work has started documenting substantial productivity gains in a wide range of creative and knowledge intensive tasks such as coding (Peng et al., 2023), creative writing (Doshi & Hauser, 2023), professional business tasks (Noy & Zhang, 2023), ideation (Girotra et al., 2023), strategic management (Dell'Acqua et al. 2023), and legal services (Choi et al., 2024). ...
Article
Full-text available
Entrepreneurship has entered a new era shaped by AI, demanding accelerated scholarly advances to keep pace with this transformative technology; doing so requires academics to bridge the gap between the AI revolution's ambiguities and meaningful scholarly contributions. To motivate and guide future research on AI's transformative role in entrepreneurship, we introduce an ongoing special issue in Entrepreneurship Theory and Practice (ETP) and outline multiple compelling opportunities for future research. Unlike typical editorials, we offer a prospective vision at this project's outset, rather than a retrospective one after the articles have been accepted and published, to empower the field to prospect and establish new scholarly foundations in the relatively uncharted world of AI in the domain of entrepreneurship. Accordingly, we highlight the 'AI PEN' (Prospecting and Establishing Nexus) as a desirable research approach to advance this literature going forward. We hope, and anticipate, that our invitation to submit proposals to this special issue facilitates novel empirical as well as theory-focused contributions to the literature.
Preprint
Full-text available
Artificial intelligence has, so far, largely automated routine tasks, but what does it mean for the future of work if Large Language Models (LLMs) show creativity comparable to humans? To measure the creativity of LLMs holistically, the current study uses 13 creative tasks spanning three domains. We benchmark the LLMs against individual humans, and also take a novel approach by comparing them to the collective creativity of groups of humans. We find that the best LLMs (Claude and GPT-4) rank in the 52nd percentile against humans, and overall LLMs excel in divergent thinking and problem solving but lag in creative writing. When questioned 10 times, an LLM's collective creativity is equivalent to 8-10 humans. When more responses are requested, two additional responses of LLMs equal one extra human. Ultimately, LLMs, when optimally applied, may compete with a small group of humans in the future of work.
Article
Previous efforts to support creative problem-solving have included (a) techniques such as brainstorming and design thinking to stimulate creative ideas, and (b) software tools to record and share these ideas. Now, generative AI technologies can suggest new ideas that might never have occurred to the users, and users can then select from these ideas or use them to stimulate even more ideas. To explore these possibilities, we developed a system called Supermind Ideator that uses a large language model (LLM) and adds prompts, fine tuning, and a specialized user interface in order to help users reformulate their problem statements and generate possible solutions. This provides scaffolding to guide users through a set of creative problem-solving techniques, including some techniques specifically intended to help generate innovative ideas about designing groups of people and/or computers (“superminds”). In an experimental study, we found that people using Supermind Ideator generated significantly more innovative ideas than those generated by people using ChatGPT or people working alone. Thus our results suggest that the benefits of using LLMs for creative problem-solving can be substantially enhanced by scaffolding designed specifically for this purpose.
Article
"New quality productivity" in medicine is a relatively new concept referring to the capacity to improve the quality and efficiency of medical services through modern technologies, particularly emerging ones such as the internet, big data, artificial intelligence, and the metaverse. The core of this capacity is innovation, encompassing technological, managerial, and service innovation. This article reviews the technical foundations and current state of development of new quality productivity in medicine, points out directions for its development, and aims to contribute to building a modern medical system and to realizing the Chinese Dream of the great rejuvenation of the Chinese nation.
Article
Introduction: Providing one-on-one support to large cohorts is challenging, yet emerging AI technologies show promise in bridging the gap between the support students want and what educators can provide. They offer students a way to engage with their course material in a way that feels fluent and instinctive. Whilst educators may have views on the appropriateness of AI, the tools themselves, as well as the novel ways in which they can be used, are continually changing. Methods: The aim of this study was to probe students' familiarity with AI tools, their views on its current uses, their understanding of universities' AI policies, and finally their impressions of its importance, both to their degree and their future careers. We surveyed 453 psychology and sport science students across two institutions in the UK, predominantly those in the first and second year of undergraduate study, and conducted a series of five focus groups to explore the emerging themes of the survey in more detail. Results: Our results showed a wide range of responses in terms of students' familiarity with the tools and what they believe AI tools could and should not be used for. Most students emphasized the importance of understanding how AI tools function and their potential applications in both their academic studies and future careers. The results indicated a strong desire among students to learn more about AI technologies. Furthermore, there was a significant interest in receiving dedicated support for integrating these tools into their coursework, driven by the belief that such skills will be sought after by future employers. However, most students were not familiar with their university's published AI policies.
Discussion: This research on pedagogical methods supports a broader long-term ambition to better understand and improve teaching, learning, and student engagement through the adoption of AI and the effective use of technology. It also suggests a need for a more comprehensive approach to communicating these important guidelines on an ongoing basis, especially as the tools and guidelines evolve.
Article
This paper explores how artificial intelligence (AI) may impact the strategic decision-making (SDM) process in firms. We illustrate how AI could augment existing SDM tools and provide empirical evidence from a leading accelerator program and a start-up competition that current large language models can generate and evaluate strategies at a level comparable to entrepreneurs and investors. We then examine implications for the key cognitive processes underlying SDM—search, representation, and aggregation. Our analysis suggests that AI has the potential to enhance the speed, quality, and scale of strategic analysis, while also enabling new approaches, like virtual strategy simulations. However, the ultimate impact on firm performance will depend on competitive dynamics as AI capabilities progress. We propose a framework connecting AI use in SDM to firm outcomes and discuss how AI may reshape sources of competitive advantage. We conclude by considering how AI could both support and challenge core tenets of the theory-based view of strategy. Overall, our work maps out an emerging research frontier at the intersection of AI and strategy. History: This paper has been accepted for the Strategy Science Special Issue on Theory-Based View. Funding: The authors are grateful to their collaborating organizations and to the University of Michigan, Bocconi University Junior Researchers’ Grant, and the INSEAD eLab Research Fund for financial support.
Article
Current research on medical digital-human GPT focuses mainly on its applications in healthcare. The technology can automatically interpret medical images and electronic health records, helping physicians reach diagnoses faster and more accurately and improving diagnostic precision and efficiency. It can also provide personalized health education and patient care, improving the patient experience and increasing patient satisfaction and adherence. In addition, GPT can automatically process large volumes of text data, significantly reducing clinicians' workload and lowering healthcare costs; its pre-diagnosis and health-management functions also help prevent and detect diseases early, reducing the cost of later treatment. In research, GPT can identify anomalies in medical data, helping investigators discover new treatments or disease-prediction models. It can also generate new hypotheses and experimental protocols from existing medical knowledge, offering practical suggestions to researchers, and its reasoning and logical capabilities can help solve difficult medical problems and advance research. Looking ahead, medical digital-human GPT has broad prospects. As the technology continues to advance and healthcare demand grows, its applications in healthcare will become wider and deeper: it can not only improve the quality and efficiency of medical services but also drive innovation in medical research. At the same time, as public attention to privacy and data security increases, ensuring the secure storage and processing of sensitive medical data, avoiding the risk of data leakage, and maintaining patient privacy and data compliance will be important considerations for the future development of medical digital-human GPT.
Preprint
Full-text available
Engineering education is constantly evolving to keep up with the latest technological developments and meet the changing needs of the engineering industry. One promising development in this field is the use of generative artificial intelligence technology, such as the ChatGPT conversational agent. ChatGPT has the potential to offer personalized and effective learning experiences by providing students with customized feedback and explanations, as well as creating realistic virtual simulations for hands-on learning. However, it is important to also consider the limitations of this technology. ChatGPT and other generative AI systems are only as good as their training data and may perpetuate biases or even generate and spread misinformation. Additionally, the use of generative AI in education raises ethical concerns such as the potential for unethical or dishonest use by students and the potential unemployment of humans who are made redundant by technology. The current state of generative AI technology, as represented by ChatGPT, is impressive but flawed, and it is only a preview of what is to come. It is important for engineering educators to understand the implications of this technology and study how to adapt the engineering education ecosystem to ensure that the next generation of engineers can take advantage of the benefits offered by generative AI while minimizing any negative consequences.
Article
Full-text available
We examine key aspects of data quality for online behavioral research between selected platforms (Amazon Mechanical Turk, CloudResearch, and Prolific) and panels (Qualtrics and Dynata). To identify the key aspects of data quality, we first engaged with the behavioral research community to discover which aspects are most critical to researchers and found that these include attention, comprehension, honesty, and reliability. We then explored differences in these data quality aspects in two studies (N ~ 4000), with or without data quality filters (approval ratings). We found considerable differences between the sites, especially in comprehension, attention, and dishonesty. In Study 1 (without filters), we found that only Prolific provided high data quality on all measures. In Study 2 (with filters), we found high data quality among CloudResearch and Prolific. MTurk showed alarmingly low data quality even with data quality filters. We also found that while reputation (approval rating) did not predict data quality, frequency and purpose of usage did, especially on MTurk: the lowest data quality came from MTurk participants who report using the site as their main source of income but spend few hours on it per week. We provide a framework for future investigation into the ever-changing nature of data quality in online research, and how the evolving set of platforms and panels performs on these key aspects.
Article
Full-text available
Bias, unfairness and lack of transparency and accountability in Artificial Intelligence (AI) systems, and the potential for the misuse of predictive models for decision-making have raised concerns about the ethical impact and unintended consequences of new technologies for society across every sector where data-driven innovation is taking place. This paper reviews the landscape of suggested ethical frameworks with a focus on those which go beyond high-level statements of principles and offer practical tools for application of these principles in the production and deployment of systems. This work provides an assessment of these practical frameworks with the lens of known best practices for impact assessment and audit of technology. We review other historical uses of risk assessments and audits and create a typology that allows us to compare current AI ethics tools to Best Practices found in previous methodologies from technology, environment, privacy, finance and engineering. We analyse current AI ethics tools and their support for diverse stakeholders and components of the AI development and deployment lifecycle as well as the types of tools used to facilitate use. From this, we identify gaps in current AI ethics tools in auditing and risk assessment that should be considered going forward.
Article
Full-text available
When industrial robots are adopted by firms in a local labor market, some workers are displaced and become unemployed. Other workers that are not directly affected by automation may however fear that these new technologies might replace their working tasks in the future. This fear of a possible future replacement is important because it negatively affects workers' job satisfaction at present. This paper studies the extent to which automation affects workers' job satisfaction, and whether this effect differs for high- versus low-skilled workers. The empirical analysis uses microdata for several thousand workers in Norway from the Working Life Barometer survey for the period 2016–2019, combined with information on the introduction of industrial robots in Norway from the International Federation of Robotics. Our identification strategy exploits variation in the pace of introduction of industrial robots in Norwegian regions and industries since 2007 to instrument workers' fear of replacement. The results indicate that automation in industrial firms in recent years has induced 40% of the workers currently in employment to fear that their work might be replaced by a smart machine in the future. Such fear of future replacement negatively affects workers' job satisfaction at present. This negative effect is driven by low-skilled workers, who carry out routine-based tasks and are therefore more exposed to the risks of automation.
Article
Full-text available
Rapid advances in artificial intelligence (AI) and automation technologies have the potential to significantly disrupt labor markets. While AI and automation can augment the productivity of some workers, they can replace the work done by others and will likely transform almost all occupations at least to some degree. Rising automation is happening in a period of growing economic inequality, raising fears of mass technological unemployment and a renewed call for policy efforts to address the consequences of technological change. In this paper we discuss the barriers that inhibit scientists from measuring the effects of AI and automation on the future of work. These barriers include the lack of high-quality data about the nature of work (e.g., the dynamic requirements of occupations), lack of empirically informed models of key microlevel processes (e.g., skill substitution and human–machine complementarity), and insufficient understanding of how cognitive technologies interact with broader economic dynamics and institutional mechanisms (e.g., urban migration and international trade policy). Overcoming these barriers requires improvements in the longitudinal and spatial resolution of data, as well as refinements to data on workplace skills. These improvements will enable multidisciplinary research to quantitatively monitor and predict the complex evolution of work in tandem with technological progress. Finally, given the fundamental uncertainty in predicting technological change, we recommend developing a decision framework that focuses on resilience to unexpected scenarios in addition to general equilibrium behavior.
Article
Full-text available
In this essay, I begin by identifying the reasons that automation has not wiped out a majority of jobs over the decades and centuries. Automation does indeed substitute for labor—as it is typically intended to do. However, automation also complements labor, raises output in ways that leads to higher demand for labor, and interacts with adjustments in labor supply. Journalists and even expert commentators tend to overstate the extent of machine substitution for human labor and ignore the strong complementarities between automation and labor that increase productivity, raise earnings, and augment demand for labor. Changes in technology do alter the types of jobs available and what those jobs pay. In the last few decades, one noticeable change has been a "polarization" of the labor market, in which wage gains went disproportionately to those at the top and at the bottom of the income and skill distribution, not to those in the middle; however, I also argue, this polarization is unlikely to continue very far into future. The final section of this paper reflects on how recent and future advances in artificial intelligence and robotics should shape our thinking about the likely trajectory of occupational change and employment growth. I argue that the interplay between machine and human comparative advantage allows computers to substitute for workers in performing routine, codifiable tasks while amplifying the comparative advantage of workers in supplying problem-solving skills, adaptability, and creativity.
Article
Full-text available
Using the 2003 National Survey of College Graduates, I examine how immigrants perform relative to natives in activities likely to increase U.S. productivity, according to the type of visa on which they first entered the United States. Immigrants who first entered on a student/trainee visa or a temporary work visa have a large advantage over natives in wages, patenting, commercializing or licensing patents, and publishing. In general, this advantage is explained by immigrants' higher education and field of study, but this is not the case for publishing, and immigrants are more likely to start companies than natives with similar education. Immigrants without U.S. education and who arrived at older ages suffer a wage handicap, which offsets savings to the United States from their having completed more education abroad. Immigrants who entered with legal permanent residence do not outperform natives for any of the outcomes considered.
Article
AI applications are tackling economic and social challenges facing developing countries. Economically speaking, AI possesses unique mechanisms that allow it to have significant impacts on economic productivity. While developing countries may experience a decline in outsourcing jobs from developed countries, the potential negative impact of such decline can be minimized by appropriate policy to deploy AI solutions. The true potential of AI comes from the ability to complement as well as enhance traditional factors of production.
Article
Recent advances in artificial intelligence are primarily driven by machine learning, a prediction technology. Prediction is useful because it is an input into decision-making. In order to appreciate the impact of artificial intelligence on jobs, it is important to understand the relative roles of prediction and decision tasks. We describe and provide examples of how artificial intelligence will affect labor, emphasizing differences between when the automation of prediction leads to automating decisions versus enhancing decision-making by humans.
Article
Job-testing technologies enable firms to rely less on human judgment when making hiring decisions. Placing more weight on test scores may improve hiring decisions by reducing the influence of human bias or mistakes but may also lead firms to forgo the potentially valuable private information of their managers. We study the introduction of job testing across 15 firms employing low-skilled service sector workers. When faced with similar applicant pools, we find that managers who appear to hire against test recommendations end up with worse average hires. This suggests that managers often overrule test recommendations because they are biased or mistaken, not only because they have superior private information. © The Author(s) 2017. Published by Oxford University Press on behalf of the President and Fellows of Harvard College. All rights reserved.
Article
We examine the concerns that new technologies will render labor redundant in a framework in which tasks previously performed by labor can be automated and new versions of existing tasks, in which labor has a comparative advantage, can be created. In a static version where capital is fixed and technology is exogenous, automation reduces employment and the labor share, and may even reduce wages, while the creation of new tasks has the opposite effects. Our full model endogenizes capital accumulation and the direction of research toward automation and the creation of new tasks. If the long-run rental rate of capital relative to the wage is sufficiently low, the long-run equilibrium involves automation of all tasks. Otherwise, there exists a stable balanced growth path in which the two types of innovations go hand-in-hand. Stability is a consequence of the fact that automation reduces the cost of producing using labor, and thus discourages further automation and encourages the creation of new tasks. In an extension with heterogeneous skills, we show that inequality increases during transitions driven both by faster automation and the introduction of new tasks, and characterize the conditions under which inequality stabilizes in the long run.
Article
Can machine learning improve human decision making? Bail decisions provide a good test case. Millions of times each year, judges make jail-or-release decisions that hinge on a prediction of what a defendant would do if released. The concreteness of the prediction task combined with the volume of data available makes this a promising machine-learning application. Yet comparing the algorithm to judges proves complicated. First, the available data are generated by prior judge decisions. We only observe crime outcomes for released defendants, not for those whom judges detained. This makes it hard to evaluate counterfactual decision rules based on algorithmic predictions. Second, judges may have a broader set of preferences than the variable the algorithm predicts; for instance, judges may care specifically about violent crimes or about racial inequities. We deal with these problems using different econometric strategies, such as quasi-random assignment of cases to judges. Even accounting for these concerns, our results suggest potentially large welfare gains: one policy simulation shows crime reductions up to 24.7% with no change in jailing rates, or jailing rate reductions up to 41.9% with no increase in crime rates. Moreover, all categories of crime, including violent crimes, show reductions; these gains can be achieved while simultaneously reducing racial disparities. These results suggest that while machine learning can be valuable, realizing this value requires integrating these tools into an economic framework: being clear about the link between predictions and decisions; specifying the scope of payoff functions; and constructing unbiased decision counterfactuals.
Article
We analyze the impact of labor demand and labor market regulations on the corporate structure of firms. Higher wages are associated with lower monitoring, irrespective of whether these high wages are caused by labor market regulations, unions or higher labor demand. We also find that the organization of firms has important macroeconomic implications. In particular, monitoring is a type of “rent-seeking” activity and the decentralized equilibrium spends excessive resources on monitoring. Labor market regulations that reduce monitoring by pushing wages up may increase net output or reduce it only by a small amount even though they reduce employment.
Article
This paper empirically assesses the wage effects of the Job Corps program, one of the largest federally funded job training programs in the U.S. Even with the aid of a randomized experiment, the impact of a training program on wages is difficult to study because of sample selection, a pervasive problem in applied microeconometric research. Wage rates are only observed for those who are employed, and employment status itself may be affected by the training program. This paper develops an intuitive trimming procedure for bounding average treatment effects in the presence of sample selection. In contrast to existing methods, the procedure requires neither exclusion restrictions nor a bounded support for the outcome of interest. Identification results, estimators, and their asymptotic distribution are presented. The bounds suggest that the program raised wages, consistent with the notion that the Job Corps raises earnings by increasing human capital, rather than solely through encouraging work. The estimator is generally applicable to typical treatment evaluation problems in which there is nonrandom sample selection/attrition.
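The trimming procedure described in this abstract can be sketched in a few lines. Below is a minimal Python illustration of Lee-style trimming bounds, not the paper's full estimator: it assumes treatment weakly increases selection (employment), represents unselected units as `NaN`, and the function name `lee_bounds` is ours.

```python
import numpy as np

def lee_bounds(y_treat, y_ctrl):
    """Trimming bounds on the average treatment effect for an outcome
    observed only conditional on selection (e.g. wages observed only
    for the employed). NaN marks unselected (non-employed) units.
    Assumes treatment weakly increases the selection rate."""
    obs_t = y_treat[~np.isnan(y_treat)]
    obs_c = y_ctrl[~np.isnan(y_ctrl)]
    # Selection (employment) rates in each experimental arm.
    s_t = obs_t.size / y_treat.size
    s_c = obs_c.size / y_ctrl.size
    # Share of observed treated outcomes attributable to units who are
    # selected only because of treatment; these are trimmed away.
    p = (s_t - s_c) / s_t
    k = int(np.floor(p * obs_t.size))
    srt = np.sort(obs_t)
    # Trimming the top k outcomes gives the lower bound,
    # trimming the bottom k gives the upper bound.
    lower = srt[: obs_t.size - k].mean() - obs_c.mean()
    upper = srt[k:].mean() - obs_c.mean()
    return lower, upper
```

With equal selection rates the bounds collapse to the simple difference in observed means; the interval widens as the treatment-induced selection gap grows.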
GPTs are GPTs: An early look at the labor market impact potential of large language models
  • T Eloundou
  • S Manning
  • P Mishkin
  • D Rock
Automation after the assembly line: Computerized machine tools, employment, and productivity in the United States
  • L P Boustan
  • J Choi
  • D Clingingsmith
AI, skill, and productivity: The case of taxi drivers
  • K Kanazawa
  • D Kawaguchi
  • H Shigeoka
  • Y Watanabe
The labor market impacts of technological change: From unbridled enthusiasm to qualified optimism to vast uncertainty
  • D Autor
Generative AI at work
  • E Brynjolfsson
  • D Li
  • L Raymond
How novelists use generative language models: An exploratory user study
  • A Calderwood
  • V Qiu
  • K Ilonka Gero
  • L B Chilton
A test for evaluating performance in human-computer systems
  • A Campero
  • M Vaccaro
  • J Song
  • H Wen
  • A Almaatouq
  • T W Malone