Article

Artificial Intelligence: A Modern Approach


Abstract

Humankind has given itself the scientific name homo sapiens--man the wise--because our mental capacities are so important to our everyday lives and our sense of self. The field of artificial intelligence, or AI, attempts to understand intelligent entities. Thus, one reason to study it is to learn more about ourselves. But unlike philosophy and psychology, which are also concerned with intelligence, AI strives to build intelligent entities as well as understand them. Another reason to study AI is that these constructed intelligent entities are interesting and useful in their own right. AI has produced many significant and impressive products even at this early stage in its development. Although no one can predict the future in detail, it is clear that computers with human-level intelligence (or better) would have a huge impact on our everyday lives and on the future course of civilization. AI addresses one of the ultimate puzzles. How is it possible for a slow, tiny brain, whether biological or electronic, to perceive, understand, predict, and manipulate a world far larger and more complicated than itself? How do we go about making something with those properties? These are hard questions, but unlike the search for faster-than-light travel or an antigravity device, the researcher in AI has solid evidence that the quest is possible. All the researcher has to do is look in the mirror to see an example of an intelligent system. AI is one of the newest disciplines. It was formally initiated in 1956, when the name was coined, although at that point work had been under way for about five years. Along with modern genetics, it is regularly cited as the ``field I would most like to be in'' by scientists in other disciplines. A student in physics might reasonably feel that all the good ideas have already been taken by Galileo, Newton, Einstein, and the rest, and that it takes many years of study before one can contribute new ideas.
AI, on the other hand, still has openings for a full-time Einstein. The study of intelligence is also one of the oldest disciplines. For over 2000 years, philosophers have tried to understand how seeing, learning, remembering, and reasoning could, or should, be done. The advent of usable computers in the early 1950s turned the learned but armchair speculation concerning these mental faculties into a real experimental and theoretical discipline. Many felt that the new ``Electronic Super-Brains'' had unlimited potential for intelligence. ``Faster Than Einstein'' was a typical headline. But as well as providing a vehicle for creating artificially intelligent entities, the computer provides a tool for testing theories of intelligence, and many theories failed to withstand the test--a case of ``out of the armchair, into the fire.'' AI has turned out to be more difficult than many at first imagined, and modern ideas are much richer, more subtle, and more interesting as a result. AI currently encompasses a huge variety of subfields, from general-purpose areas such as perception and logical reasoning, to specific tasks such as playing chess, proving mathematical theorems, writing poetry, and diagnosing diseases. Often, scientists in other fields move gradually into artificial intelligence, where they find the tools and vocabulary to systematize and automate the intellectual tasks on which they have been working all their lives. Similarly, workers in AI can choose to apply their methods to any area of human intellectual endeavor. In this sense, it is truly a universal field.


... However, during the last few decades, researchers have succeeded in transferring some of these human traits, at least partially, to machines, in an incessant quest to make machines seem 'alive.' As an idea largely propagated in sci-fi movies [145], a machine incorporating 'human intelligence' seems impossible, even though what constitutes intelligence in machines has been a topic of intense debate [18,32,83,102,143]. ...
... Even if we boil down the discussion to a specific area, for example, semantics and linguistics, we keep seeing strong debate [144]. This debate has its origin in understanding what defines human reasoning (e.g., [143,Ch. 10]). ...
... The rigidity of deterministic rules led researchers to think differently. In the early 90s, researchers changed how they approached language models: they shifted their investigation to statistics and probability models [143,Sec. 1.3.6]. Researchers realized that the nature of language is varying and random, and a probabilistic approach could better incorporate the randomness of language into machines (e.g., [39,83,108,128]). ...
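The probabilistic turn described above can be illustrated with a minimal maximum-likelihood bigram model (a toy sketch, not taken from the cited works; the corpus and function name are invented for illustration):

```python
from collections import Counter

def bigram_probs(corpus):
    """Estimate P(next_word | word) from raw bigram counts (maximum likelihood)."""
    tokens = corpus.split()
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigrams = Counter(tokens[:-1])  # left-context counts
    return {(w1, w2): c / unigrams[w1] for (w1, w2), c in bigrams.items()}

probs = bigram_probs("the cat sat on the mat the cat ran")
# "the" occurs 3 times as a left context, twice followed by "cat",
# so the model assigns P("cat" | "the") = 2/3.
```

Even this toy model captures what a deterministic rule cannot: "the" is sometimes followed by "cat", sometimes by "mat", with quantified uncertainty.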
Chapter
Full-text available
The Rockefeller Series in Science and Technology (ISSN 3067-0667) publishes research monographs, edited works, collections of papers, reviews of previously published studies, and advanced textbooks covering all relevant subjects related to science and technology. Monographs of this series can contain the following contributions: - Book chapters that revisit and expand the interpretation of previously published works; - Timely discussions of a relevant topic; - Literature review with insights and novel interpretations; - A carefully designed study that sparks interest in the research community; - Research tailored for graduate students and professionals; - Technical reports; - Other topics related to the book series. The monographs published in the Rockefeller Series on Science and Technology attract the interest of researchers, students, and professionals.
... It is a field considered to be at the intersection of computer science, Artificial Intelligence (AI), and computational linguistics [1]. AI is defined as "the automation of activities that we associate with human thinking, activities such as decision-making, problem-solving, and learning" [2]. NLP has numerous applications based on the textual form, for instance, a chatbot (chatter robot) [3], autocomplete suggestions [4], MT, QA [5], and paraphrase generation [6]. ...
... White space removal is needed before sentence tokenization. Tokenization means splitting a text into tokens, such as words, phrases, sentences, numbers, or punctuation [2,73]. Punctuation marks are used to identify the edges of sentences, which could be terminated by a full stop (.), question mark (؟), or exclamation mark (!), depending on the context of the sentence [74]. ...
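The two preprocessing steps described above, white space removal followed by sentence tokenization on terminal punctuation, can be sketched in a few lines (an illustrative Python sketch; the regular expression and function name are our own, not from the cited paper):

```python
import re

def sentence_tokenize(text):
    """Split text into sentences on terminal punctuation: '.', '!', '?' (and Arabic '؟')."""
    # Collapse redundant white space first, as described above.
    text = re.sub(r"\s+", " ", text).strip()
    # Split after a sentence terminator followed by white space,
    # keeping the terminator attached to its sentence.
    return re.split(r"(?<=[.!?؟])\s+", text)

sentences = sentence_tokenize("NLP is fun!  Is it hard?   Not really.")
# → ['NLP is fun!', 'Is it hard?', 'Not really.']
```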
Article
Full-text available
Paraphrasing means expressing the semantic meaning of a text using different words. Paraphrasing has a significant impact on numerous Natural Language Processing (NLP) applications, such as Machine Translation (MT) and Question Answering (QA). Machine Learning (ML) methods are frequently employed to generate new paraphrased text, and the generative method is commonly used for text generation. Generative Pre-trained Transformer (GPT) models have demonstrated effectiveness in various text generation tasks, including summarization, proofreading, and rephrasing of English texts. However, GPT-4’s capabilities in Arabic paraphrase generation have not been extensively studied despite Arabic being one of the most widely spoken languages. In this paper, the researchers evaluate the capabilities of GPT-4 in text paraphrasing for Arabic. Furthermore, the paper presents a comprehensive evaluation method for paraphrase quality and develops a detailed evaluation framework. The framework comprises Bilingual Evaluation Understudy (BLEU), Recall-Oriented Understudy for Gisting Evaluation (ROUGE), Lexical Diversity (LD), Jaccard similarity, and word embedding using the Arabic Bi-directional Encoder Representation from Transformers (AraBERT) model with cosine and Euclidean similarity. This paper illustrates that GPT-4 can effectively produce a new paraphrased sentence that is semantically equivalent to the original sentence, and the quality framework efficiently ranks paraphrased pairs according to quality criteria.
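Two of the surface-similarity measures named in the framework, Jaccard similarity over word sets and cosine similarity over count vectors, can be sketched as follows (an illustrative sketch only; the paper applies cosine similarity to AraBERT embeddings, and its actual implementation may differ):

```python
import math
from collections import Counter

def jaccard(a, b):
    """Jaccard similarity over word sets: |A ∩ B| / |A ∪ B|."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def cosine(a, b):
    """Cosine similarity over bag-of-words count vectors."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)  # missing words count as 0
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb)
```

A good paraphrase pair would score low on word overlap (high lexical diversity) while its embeddings remain close, which is why the framework combines both kinds of measure.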
... In particular, this was applied to classification problems in medical applications [13,20,21] as well as to medical decision making in a general context [22]. Additionally, it was proposed as a basic rationale for optimizing ML models [23]. This approach converts the construction of the ML model into a process for finding an optimal decision rule based on probabilities and weights (i.e. ...
... In particular, the model was coupled to the corresponding ROC curves, for this purpose. In comparison to references like [17,22,23], we utilized a different notation which does not require the full background about decision theory and utility functions, but provides a self-explanatory description. ...
... Basically, the expected risk ER(s) can be considered as a negative version of a utility function, since it represents some kind of costs instead of utilities / benefits. This is consistent with the general definition in normative decision theory [23]. According to this approach, the expected utility EU(s) is defined as the sum of utilities U(r) across all potential outcomes r from a set R of results, weighted by the respective probabilities P(Result(s) = r | s), i.e. ...
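Written out in standard notation, the definition quoted above is:

```latex
EU(s) \;=\; \sum_{r \in R} U(r)\, P\bigl(\mathrm{Result}(s) = r \mid s\bigr)
```

with the expected risk ER(s) playing the role of a negative utility, i.e. summing costs rather than benefits over the same outcome set.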
Article
Full-text available
Background In the future, more medical devices will be based on machine learning (ML) methods. In general, the consideration of risks is a crucial aspect for evaluating medical devices. Accordingly, risks and their associated costs should be taken into account when assessing the performance of ML-based medical devices. This paper addresses the following three research questions towards a risk-based evaluation with a focus on ML-based classification models. Methods First, we analyzed how often risk-based metrics are currently utilized in the context of ML-based classification models. This was performed using a literature research based on a sample of recent scientific publications. Second, we introduce an approach for evaluating such models where expected risks and associated costs are integrated into the corresponding performance metrics. Additionally, we analyze the impact of different risk ratios on the resulting overall performance. Third, we elaborate how such risk-based approaches relate to regulatory requirements in the field of medical devices. A set of use case scenarios was utilized to demonstrate necessities and practical implications in this regard. Results First, it was shown that currently most scientific publications do not include risk-based approaches for measuring performance. Second, it was demonstrated that risk-based considerations have a substantial impact on the outcome. The relative increase of the resulting overall risks can go up to 196% when the ratio between different types of risks (false negatives vs. false positives) changes by a factor of 10.0. Third, we elaborated that risk-based considerations need to be included in the assessment of ML-based medical devices, according to the relevant EU regulations and standards. In particular, this applies when a substantial impact on the clinical outcome or on the risk-benefit relationship occurs.
Conclusion In summary, we demonstrated the necessity of a risk-based approach for the evaluation of medical devices which include ML-based classification methods. We showed that currently many scientific papers in this area do not include risk considerations. We developed basic steps towards a risk-based assessment of ML-based classifiers and elaborated consequences that could occur when these steps are neglected. Finally, we demonstrated the consistency of our approach with current regulatory requirements in the EU.
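The core idea of weighting error types by their costs can be illustrated with a toy calculation (illustrative numbers and function name, not the paper's data or metric):

```python
def expected_risk(fn, fp, cost_fn, cost_fp, n):
    """Overall expected risk per case: error counts weighted by their costs."""
    return (fn * cost_fn + fp * cost_fp) / n

# Identical confusion counts; only the assumed cost ratio changes.
baseline = expected_risk(fn=10, fp=20, cost_fn=1.0, cost_fp=1.0, n=100)
weighted = expected_risk(fn=10, fp=20, cost_fn=10.0, cost_fp=1.0, n=100)
```

With these invented counts, treating a false negative as ten times as costly as a false positive quadruples the overall risk, which mirrors the paper's point that the chosen risk ratio can dominate the evaluation.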
... The immersive learning experiences provided by DTs and AI make agricultural education more engaging and effective and ensure that future farmers are well-equipped to adopt productive and sustainable practices, securing the future of viticulture and agriculture for generations to come. This follows the principles of bounded rationality in AI, where systems must operate optimally within the constraints of computational resources and environmental complexity [8]. However, despite the remarkable progress in deep learning, the debate about its ultimate role and limitations remains open. ...
Article
Full-text available
Integrating Artificial Intelligence (AI) and Extended Reality (XR) technologies into agriculture presents a transformative opportunity to modernize education and sustainable food production. Traditional agriculture training remains resource-intensive, time-consuming, and geographically restrictive, limiting scalability. This study explores an AI-driven Digital Twin (DT) system embedded within a gamified XR environment designed to enhance decision-making, resource management, and practical training in viticulture as well as woody crop management. A survey among stakeholders in the viticultural sector revealed that participants are increasingly open to adopting Virtual Reality (VR) combined with AI-enhanced technologies, signaling a readiness for digital learning transformation in the field. The survey revealed a 4.48/7 willingness to adopt XR-based training, a 4.85/7 interest in digital solutions for precision agriculture, and a moderate climate change concern of 4.16/7, indicating a strong readiness for digital learning transformation. Our findings confirm that combining AI-powered virtual educators with DT simulations provides interactive, real-time feedback, allowing users to experiment with vineyard management strategies in a risk-free setting. Unlike previous studies focusing on crop monitoring or AI-based decision support, this study examines the potential of combining Digital Twins (DTs) with AI-driven personal assistants to improve decision-making, resource management, and overall productivity in agriculture. Proof-of-concept implementations in Unity and Oculus Quest 3 demonstrate how AI-driven NPC educators can personalize training, simulate climate adaptation strategies, and enhance stakeholder engagement. The research employs a design-oriented approach, integrating feedback from industry experts and end-users to refine the educational and practical applications of DTs in agriculture. 
Furthermore, this study highlights proof-of-concept implementations using the cross-platform Unity game engine, showcasing virtual environments where students can interact with AI-powered educators in simulated vineyard settings. Digital innovations support students and farmers in enhancing crop yields and play an important role in educating the next generation of digital farmers.
... In a previous paper, we explained that Data Engineering (obtaining, cleaning, and representing data) is at least as important as Algorithm Engineering (cf. [1], introduction of section 18.11 and section 18.11.2), and we presented the responsibility for the Data Engineer to design the right data for further ML tasks. ...
Article
Five years before the release of ChatGPT, the world of Machine Translation (MT) was dominated by unimodal AI implementations, generally bilingual or multilingual AI models with only text modality. The era of Large Language Models (LLMs) led to various multimodal translation initiatives with text and image modalities, based on custom data engineering techniques that raised expectations for improvement in the field of MT when using multimodal options. In our work, we introduced a first-of-its-kind multimodal AI translation system with four modalities (text, image, audio and video), from English towards a low-resource language and vice versa. Our results confirmed that multimodal translation generalizes better, consistently improves on unimodal text translation, and delivers superior performance as the number of unseen samples increases. Moreover, this initiative offers hope for low-resource languages worldwide, for which the use of non-text modalities is a promising solution to data scarcity in the field.
... Each data point contains features and an associated output label. The goal of supervised learning algorithms is to learn a function that maps numerical feature vectors (inputs) to labels (desired outputs) based on example input-output pairs (Russell and Norvig, 2016), also known as training examples (Bishop and Nasrabadi, 2006). A supervised learning algorithm analyzes the training data and produces an inferred function that can be used to map new testing examples, generalizing from the training data to unseen situations. ...
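A minimal instance of such an inferred function is a 1-nearest-neighbour classifier, which maps an unseen feature vector to the label of its closest training example (an illustrative sketch; the function name and data are invented, not from the cited works):

```python
def nearest_neighbor(train, x):
    """Predict the label of the training example closest to x (1-NN).

    train: list of (feature_vector, label) pairs; x: feature vector to classify.
    """
    def sq_dist(a, b):
        # Squared Euclidean distance; the square root is unnecessary for ranking.
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    return min(train, key=lambda pair: sq_dist(pair[0], x))[1]

train = [((0.0, 0.0), "neg"), ((1.0, 1.0), "pos")]
nearest_neighbor(train, (0.9, 0.8))  # → "pos": generalizes to an unseen point
```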
Article
Full-text available
Despite accumulated evidence indicating glyphosate herbicide (GLY) presents endocrine disrupting properties, there are still discrepancies. Moreover, few epidemiological studies have focused on hormone-related pathologies. This work aimed to investigate the associations between urinary GLY levels and breast cancer (BC) in women from a region of intense agricultural activity in Argentina, exploring residential proximity to agricultural fields as a potential risk factor for BC. This was a case-control study that involved 90 women from different populations in the Province of Santa Fe, Argentina. Demographic data, lifestyle factors, and residential history were obtained through a questionnaire, while medical outcomes and reproductive history were acquired from medical records. Spot urine samples were collected and the concentrations of GLY and its primary metabolite, aminomethylphosphonic acid (AMPA), were quantified by ultra-high-performance liquid chromatography–mass spectrometry. Odds ratios were estimated to assess the strength of the association between the case/control type and each predictor. GLY concentrations were above the limit of detection (LOD) in 86.1% of samples, with a range of 0.37–10.07 µg GLY/g creatinine. AMPA was not detected in any of the samples analyzed. Although urinary GLY concentrations showed no differences between the case and control groups, women residing near agricultural fields showed an increased risk of BC (OR: 7.38, 95% CI: 2.74–21.90). These original findings show the ubiquitous presence of GLY in adult women from Argentina. Interestingly, women living near agricultural fields have a higher risk of BC, suggesting that exposure not only to GLY but also to agrochemicals in general could predispose to the development of BC in Argentina. While this study provides valuable insights, further and broader assessments of BC distribution in relation to agrochemical exposure across different regions of Argentina are needed.
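The odds-ratio estimation used above reduces to simple arithmetic on a 2x2 exposure-by-outcome table (a worked toy example with invented counts, not the study's data; the approximate confidence interval shown is Woolf's log-scale method, one common choice among several):

```python
import math

def odds_ratio(exp_cases, exp_controls, unexp_cases, unexp_controls):
    """Odds ratio from a 2x2 table: (a/b) / (c/d) = a*d / (b*c)."""
    return (exp_cases * unexp_controls) / (exp_controls * unexp_cases)

def ci95(or_value, a, b, c, d):
    """Approximate 95% CI for the odds ratio on the log scale (Woolf's method)."""
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (math.exp(math.log(or_value) - 1.96 * se),
            math.exp(math.log(or_value) + 1.96 * se))

# Invented counts: 20 exposed cases, 5 exposed controls,
# 10 unexposed cases, 25 unexposed controls.
or_value = odds_ratio(20, 5, 10, 25)  # → 10.0
```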
... According to S. J. Russell and P. Norvig (2004), the definitions of artificial intelligence can be divided into four categories: replication of human thought, replication of rational thought, replication of human behavior, and replication of rational behavior. These definitions will certainly no longer do justice to the capabilities of AI in 2025. AI can do more and learns... ...
Article
Full-text available
Today, AI is on everyone's lips and is included in many written documents. For a long time, it was seen as an additive to human statements and behavior. Recently, however, there have been indications that it is capable of more. To this end, the author conducted an in-depth interview with an AI. The answers to three questions are reproduced in the article. Far-reaching insights emerged that go beyond normal human knowledge in terms of depth and quality. AI must be credited with an upward evolution that makes it a valuable tool for the future of humanity.
... The primary contribution of this thesis is the development of a computational fog-based detection system [14]. This system operates within the computational fog layer and is implemented across distributed computational fog nodes [15], where each node functions as an independent network intrusion detection system equipped with a majority voting mechanism [16]. The core contribution of this research is the identification of attacks in the lower layer of the computing cloud with group learning in fog nodes [17-21]. Conventional intrusion detection systems (IDS) in smart grids are often unable to effectively detect and respond to increasingly sophisticated and dynamic DDoS attacks in real time. ...
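The per-node majority voting described above can be sketched as follows (an illustrative sketch; the thesis's actual aggregation logic may differ, and the function name is invented):

```python
from collections import Counter

def majority_vote(node_verdicts):
    """Aggregate per-node IDS verdicts ('attack' / 'benign') by simple majority.

    An odd number of voting nodes avoids ties.
    """
    counts = Counter(node_verdicts)
    return counts.most_common(1)[0][0]

# Three fog nodes, each an independent IDS; two flag the traffic as malicious.
majority_vote(["attack", "benign", "attack"])  # → "attack"
```

The design point is fault tolerance: a single misclassifying node is outvoted, so the ensemble verdict is more robust than any one detector.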
Article
Full-text available
The integration of advanced technologies into the infrastructure of modern smart grids has revolutionized the efficiency and reliability of energy distribution systems. However, the increasing reliance on interconnected digital systems exposes smart grids to various cyber threats, with distributed denial-of-service (DDoS) attacks posing a significant risk. This paper presents an effective method for identifying smart grid DDoS attacks by introducing the use of the deep neural network VGG19 combined with the Harris Hawks Optimization Algorithm (HHO). The suggested approach uses the robust feature extraction capability of VGG19-DNN for network traffic pattern analysis to detect abnormal traffic flows indicative of DDoS attacks. These features are then optimized using the HHO to enhance accuracy and efficiency. The approach also utilizes a distributed architecture for real-time monitoring and response, enabling timely mitigation of DDoS threats without compromising smart grid performance. The efficacy of the proposed framework is evaluated through extensive simulations and experiments using real-world smart grid datasets. Results demonstrated that the proposed approach outperforms existing methods in terms of detection accuracy and computational efficiency. Moreover, the robustness of the proposed solution against different attack scenarios is analyzed, and its scalability for large-scale deployments is validated. A comprehensive framework for protecting smart grids from DDoS attacks is developed, enabling more robust resilience and security of critical energy infrastructures against increasingly sophisticated cyber threats.
... Each such agent implements a function that maps percept sequences to actions, and it is presumed that these actions are selected in such a way as to maximize some performance measure, given the evidence provided by the percept sequence. "AI aims to develop machines and programs that can perform tasks considered intelligent if done by humans, using techniques such as machine learning, reasoning, and adaptation to improve performance based on experience" [7]. ...
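The percept-sequence-to-action mapping can be made concrete with a minimal reflex agent (an illustrative sketch in the spirit of the cited definition; the class, rule table, and action names are invented):

```python
class SimpleReflexAgent:
    """An agent as a mapping from percept sequences to actions.

    This simple reflex variant consults only the latest percept via a
    rule table, while still recording the full percept sequence.
    """

    def __init__(self, rules, default="noop"):
        self.rules = rules      # percept -> action lookup table
        self.percepts = []      # percept sequence observed so far
        self.default = default  # action when no rule matches

    def act(self, percept):
        self.percepts.append(percept)
        return self.rules.get(percept, self.default)

# A vacuum-world-style rule table: clean dirty squares, otherwise move on.
agent = SimpleReflexAgent({"dirty": "clean", "clean": "move"})
agent.act("dirty")  # → "clean"
```

A learning agent would go further by updating the rule table itself from experience, in line with the quoted definition.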
Article
Full-text available
This paper investigates the complex challenges professionals face in managing cyber risks and implementing human risk management programs. Emphasizing the crucial role of human behavior in effectively mitigating cyber risks, the paper highlights the transformative impact of utilizing the „Golden Circle” methodology. This human-centered methodology initiates discussions with the question „WHY”, articulating the fundamental purpose of human risk management and promoting an „inside-out” approach, starting with employee motivation and engagement. This approach ensures the sustainability of human risk management practices by fostering a sense of responsibility and belief in the mission. Furthermore, the integration of Artificial Intelligence (AI) is explored to enhance human risk management, with AI techniques such as machine learning analyzing behavioral patterns to predict potential risks and automate responses. However, the paper also addresses the drawbacks of AI, including sophisticated phishing attacks and deepfakes exploiting human vulnerabilities. Combining AI with the „Golden Circle” allows organizations to identify why employees are susceptible to attacks and how to tailor training, achieving a more robust and proactive risk management strategy. The paper offers tips and recommendations for evolving and sustaining this integrated methodology over time, ensuring its continued effectiveness in the dynamic cybersecurity landscape.
... AI refers to the simulation of human intelligence processes by machines, particularly computer systems. Core AI capabilities include learning, reasoning, problem-solving, perception, and language understanding [Russell & Norvig, 2016]. In marketing, AI systems analyse consumer behaviour, automate communications, and predict trends. ...
Article
The integration of Artificial Intelligence (AI) into marketing represents a paradigm shift in how businesses approach customer engagement, campaign management, and decision-making processes. This paper provides an in-depth analysis of the diverse applications of AI within the marketing landscape, emphasizing its transformative impact on traditional marketing strategies. AI technologies such as machine learning, natural language processing, and predictive analytics enable marketers to analyze large volumes of data, generate personalized customer experiences, automate repetitive tasks, and optimize campaign effectiveness in real-time. Through an extensive review, this study explores key AI applications, including dynamic customer segmentation, personalized advertising and recommendation systems, AI-powered chatbots, and content generation tools. Each application is examined for its contribution to enhancing marketing efficiency, improving customer satisfaction, and driving business growth. Furthermore, the paper discusses critical challenges faced by marketers, such as data privacy concerns, algorithmic biases, transparency issues, and the complexities of integrating AI systems within existing organizational frameworks. Ethical considerations surrounding the use of AI in marketing are also analyzed, highlighting the need for responsible AI deployment to foster consumer trust and comply with regulatory standards. Looking ahead, emerging trends such as explainable AI, voice and visual search optimization, and augmented reality applications are identified as key drivers shaping the future of AI-enabled marketing. Overall, this comprehensive analysis provides valuable insights for marketing professionals, researchers, and organizations aiming to leverage AI’s capabilities effectively and ethically to gain competitive advantages in an increasingly digital marketplace
... Unlike traditional AI-driven educational tools, which primarily function as static content generators or recommendation engines, Agentic AI systems engage in iterative reasoning and contextual decision-making, continuously adapting based on user interactions (Russell & Norvig, 2020). ...
Article
Full-text available
This scientific article examines the transformative potential of Agentic AI in Science, Technology, Engineering, and Mathematics (STEM) education. It highlights how Agentic AI can enhance learning outcomes in these subjects, reduce cognitive load, and better prepare students for the demands of an AI-driven workforce. By utilising smart tools like GitHub Copilot, Agentic AI systems provide opportunities to improve STEM learning environments through personalised assistance, collaborative problem-solving, and skill development, all while automating repetitive tasks. This paper also discusses the future implications of the job market influenced by Agentic AI, emphasising the need for upskilling and the necessity for educational systems to adapt to emerging roles. It also touches on ethical aspects like fair access and AI literacy while highlighting how Agentic AI could help bridge the gap between traditional education and real-world job readiness.
... For instance, AI can offer greater insights into client preferences and budgeting based on past expenditures, enabling staff members to explore the best ways to serve customers (Nguyen and Malik, 2022;Rashid et al., 2024). Considering how important GKS is to improving staff service quality and, in turn, client satisfaction, more research is required to look at how technology and AI-enabled applications are being used more and more to support employees and customers' experiences with these technologies, as well as how they affect the level of customer happiness and employee service quality (Russell and Norvig, 2010;Shahzad et al., 2020). By sharing knowledge with AI help, staff members can create a more individualized and memorable experience, which could lead to clients believing that the quality of the services is higher. ...
Article
Purpose This research aims to explore how circular economy practices (CEPs) address environmental challenges in manufacturing while providing a competitive edge for sustainable growth. It examines the role of green knowledge sharing, green creative climate and enhanced artificial intelligence information quality in fostering the successful adoption of CEP, offering strategies to improve collaboration and innovation in green practices. Design/methodology/approach This research employed a quantitative method by using a survey to gather data from 332 respondents representing Chinese manufacturing SMEs. We applied partial least square structural equation modeling for hypothesis testing, offering robust insights into the relationships among the variables and their implications for the manufacturing sector. Findings The results show that green knowledge sharing and green creative climate are favorably connected to CEP. Meanwhile, green creative climate is a key mediator between green knowledge sharing and CEP. In comparison, artificial intelligence information quality positively moderates among targeted relationships. The importance-performance map analysis highlighted the superior importance (28.70) of green knowledge sharing and the exceptional performance (67.638) of green creative climate toward CEP. Research limitations/implications The findings can aid in improving academic and professional understanding of managing and evaluating CEP at the project and firm levels in the manufacturing sector. Therefore, policymakers and managers may implement CEP by emphasizing green knowledge sharing, green creative climate, and artificial intelligence information quality. Originality/value This research contributes to the limited prevailing literature by enhancing the understanding of green knowledge sharing, green creative climate, artificial intelligence information quality and CEP. 
It sheds light on the potential role of green knowledge sharing and green creative climate, as they are performing the role of catalysts for enhancing information quality and fostering CEP in organizations.
... AI's application in PM is not always intuitive due to the unstructured nature of project data compared to more structured fields, as noted by Russell and Norvig (2021). However, the existing literature and AI tools demonstrate their utility in critical areas such as cost estimation, risk assessment, and resource allocation (Lewicka, 2024). ...
Article
Autonomy in Unmanned Aerial Vehicle (UAV) navigation has enabled applications in diverse fields such as mining, precision agriculture, and planetary exploration. However, challenging applications in complex environments complicate the interaction between the agent and its surroundings. Conditions such as the absence of a Global Navigation Satellite System (GNSS), low visibility, and cluttered environments significantly increase uncertainty levels and cause partial observability. These challenges grow when compact, low-cost, entry-level sensors are employed. This study proposes a model-based reinforcement learning (RL) approach to enable UAVs to navigate and make decisions autonomously in environments where the GNSS is unavailable and visibility is limited. Designed for search and rescue operations, the system enables UAVs to navigate cluttered indoor environments, detect targets, and avoid obstacles under low-visibility conditions. The architecture integrates onboard sensors, including a thermal camera to detect a collapsed person (target), a 2D LiDAR and an IMU for localization. The decision-making module employs the ABT solver for real-time policy computation. The framework presented in this work relies on low-cost, entry-level sensors, making it suitable for lightweight UAV platforms. Experimental results demonstrate high success rates in target detection and robust performance in obstacle avoidance and navigation despite uncertainties in pose estimation and detection. The framework was first assessed in simulation, compared with a baseline algorithm, and then through real-life testing across several scenarios. The proposed system represents a step forward in UAV autonomy for critical applications, with potential extensions to unknown and fully stochastic environments.
Article
Full-text available
The article examines the environmental consequences of the digitalization of foreign economic activity (FEA), in particular the impact of implementing artificial intelligence (AI) on energy consumption and greenhouse gas emissions. It analyzes the energy costs associated with training and using large language models (LLMs) such as BERT, GPT-3, and GPT-4, as well as the main sources of energy consumption in the digital economy, including data centers, data transmission networks, and end devices. Examples are given of applying AI models in FEA, in particular in logistics, energy, and manufacturing, demonstrating AI's potential to improve energy efficiency and reduce the carbon footprint. The relationship between AI models and the stages of FEA is mapped out, making it possible to assess the transformation of this sphere under the influence of AI and its environmental consequences. It is hypothesized that one environmental consequence of the digitalization of FEA may be the emergence of an "energy loop," in which the introduction of energy-efficient digital solutions leads to growth in total energy consumption due to increased volumes of computation, offsetting the expected environmental benefits of applying AI across the stages of foreign economic operations. The article emphasizes the need to balance technological progress with environmental responsibility, combining technical efficiency with ecological accountability in implementing digital foreign economic policy, especially in the context of international trade and compliance with EU environmental standards. The originality of the work lies in a new perspective on the relationship between the use of digital technologies, in particular artificial intelligence, in international economic activity and the consequences for the environment and energy efficiency. This approach accounts for both the direct and the indirect effects of AI adoption, including the potential risk of an "energy loop." Keywords: AI, digitalization, international economic activity, international economy, energy efficiency, CO₂, environmental efficiency, environment, Smart Grid, GPT models, language models.
Chapter
This research assesses the impact of Artificial Intelligence on organizational culture and employee dynamics, using sentiment analysis of employee feedback supported by Natural Language Processing (NLP) methods. The analyzed data originate from anonymized workplace sources, including surveys and workplace discussions, and capture employee perceptions of AI implementation efforts. NLP techniques, specifically sentiment classification and topic modeling, are applied to gauge employee sentiment toward AI-driven automation, decision systems, and collaboration. The primary findings show that employees react positively to AI systems for the operational performance they enable, while taking issue with potential job displacement and AI's ethical dilemmas at work. The research contributes findings on the structural changes in workplace relationships brought about by AI systems, which transform how workers build trust and adapt.
Chapter
A class of problems solved by dynamic programming (DP) methods represents another specific branch of optimization dealing with inequality constraints. The reign of DP covers a wide range of problems and techniques; however, the solution to the problems expressed by a generally non-linear (separable) objective function with a linear constraint in the form of inequalities will only be introduced in this chapter. Hence, it constitutes a more general task than the LP problem.
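The problem class the chapter describes, maximizing a separable (possibly nonlinear) objective under a single linear inequality constraint, can be sketched as a budget-indexed dynamic program. This is a minimal illustration, assuming nonnegative integer allocations; the function and variable names are not from the chapter:

```python
def dp_separable_max(fs, weights, budget):
    """Maximize sum_i f_i(x_i) subject to sum_i w_i * x_i <= budget,
    with each x_i a nonnegative integer. fs[i] maps an allocation to
    its (possibly nonlinear) payoff; weights[i] is its unit cost."""
    NEG = float("-inf")
    # best[b] = best total payoff over the items seen so far, capacity b
    best = [0.0] * (budget + 1)
    for f, w in zip(fs, weights):
        new = [NEG] * (budget + 1)
        for b in range(budget + 1):
            x = 0
            while x * w <= b:  # try every feasible allocation to this item
                cand = best[b - x * w] + f(x)
                if cand > new[b]:
                    new[b] = cand
                x += 1
        best = new
    return max(best)
```

Each stage folds one decision variable into the value table, which is what makes a separable objective tractable even when it is nonlinear: only the per-item payoffs f_i and the remaining budget interact.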
Preprint
Full-text available
This paper proposes a novel paradigm of Artificial Intelligence (AI) grounded in the epistemological process of converting tacit knowledge into explicit knowledge. Drawing on the foundational philosophies of science-particularly the works of Popper, Kuhn, Lakatos, and Gospodarek-the study conceptualizes AI not merely as a computational tool but as a systemic method for epistemic transformation. The paradigm is structured as a Lakatosian Research Programme, with a clearly defined hard core asserting that AI enables the symbolic representation of internalized, experiential knowledge. Surrounding this core is a protective belt of auxiliary hypotheses derived from general systems theory, cybernetics, machine learning, and symbolic processing. The programme's heuristics guide theoretical and technological advancements while preserving its epistemological foundation. By formalizing the tacit-to-explicit knowledge conversion, this paradigm repositions AI as a critical instrument for knowledge creation, management, and application in digital and socio-technical systems.
Chapter
The rapid development and implementation of new technologies can be lifesaving and serve to reduce the impact of disasters. When confronted with the unprecedented challenges posed by the COVID-19 pandemic, which emerged in China before spreading globally, the country had to mobilize all available resources, including expertise, knowledge, skills, technologies, material resources, and policies, to effectively address the crisis. Innovation plays a pivotal role during emergencies and crises by enabling quick problem-solving and adaptability to changing circumstances. The COVID-19 pandemic underscored the paramount importance of innovation: for example, the creation of drones for search and rescue operations and the use of data analytics to track and contain the spread of infectious diseases proved instrumental. The study also examines the challenges China encountered when implementing Artificial Intelligence solutions to navigate the crisis.
Article
Full-text available
The work aims to study the multiple ways of AI aided technology in the packaging design process, analyze the application status of AI technology, and deduce the technology application strategy that is more suitable for the demand of the current packaging design market. By analyzing the actual application cases of AI technology in the packaging design process, the transformation of packaging design under the background of intelligent technology intervention was summarized and the innovation and application level of technology were discussed. Combined with the design practice, the application strategy of the technology was further studied, and the application prospect of AI technology in packaging design was put forward. AI aided technology can optimize packaging design from multiple application levels, such as user data analysis, personalized design extension and scheme intelligent optimization, and bring greater production benefits. The rational application of AI technology is helpful to change design thinking and expand packaging design forms. In packaging design, using intelligent technology to assist design according to different application scenarios can improve design efficiency and meet the development demands of the current digital economy era.
Chapter
This article aims to study the impact of artificial intelligence (AI) on customer satisfaction regarding the services offered by communication companies, investigating how the use of AI in service automation, offer personalization, and proactive network management influences customer satisfaction. A quantitative research approach was employed, utilizing a structured questionnaire distributed to a random sample of 200 customers of Moroccan telecommunications companies. Data collection focused on customer perceptions and experiences with AI-driven services, and the results were analyzed using structural equation modeling (SEM) with SmartPLS. Findings indicate that automation via AI significantly enhances customer satisfaction, while personalization shows an even stronger effect. Additionally, proactive management of network issues positively impacts customer satisfaction. These results underscore the critical role of AI in improving customer experiences and satisfaction levels.
Chapter
This chapter explores how Artificial Intelligence (AI) can be harmonized with Sustainable Human Resource Management (HRM) to foster eco-friendly, ethical, and efficient business practices. As AI technologies revolutionize conventional HR functions, they create opportunities for reduced resource use, improved workplace sustainability, and alignment with the UN Sustainable Development Goals (SDGs). This chapter provides an in-depth analysis of AI's role in recruitment, training, resource optimization, and employee well-being while emphasizing the potential for AI to drive both environmental and social sustainability within organizations.
Chapter
Generative artificial intelligence (GenAI) is an emerging technology that has significantly transformed the interaction between humans and machines. GenAI has the capacity to create content such as text, images, and videos, and it even uses human language. In the educational field, tools such as ChatGPT stand out for their ability to maintain coherent conversations, simulating human interactions. This study aims to offer a comprehensive and critical view of the convergence of GenAI and higher education. To this end, a systematic literature review has been carried out following the PRISMA protocol through the WoS and Dialnet databases. The analysis focuses on understanding the role of GenAI in this context, identifying both the opportunities and challenges associated with its implementation. The results of the study highlight key challenge areas, promising trends, and future prospects. Likewise, the effects of GenAI on students and teachers are analyzed, paying special attention to the ethical and social implications that accompany its integration into higher education.
Article
Full-text available
The integration of artificial intelligence (AI) into legal technology, particularly in contract drafting, presents both innovative potential and significant legal challenges. This research focuses on the legal validity of AI-generated contracts under Indonesian law. While AI can efficiently generate comprehensive and standardized agreements, its lack of legal subjectivity raises questions about its capacity to enter into binding contracts. By analyzing the Indonesian Civil Code, specifically Articles 1313, 1320, 1367, and 1368, this study explores whether AI-generated contracts can be considered legally valid and how AI stands as a legal subject in the contract. Theoretical frameworks such as legal subject theory, agency theory, and the objective theory of contracts are used to examine the issue. Comparative insights from the European Union's AI Act also highlight possible directions for Indonesia to develop a clear regulatory approach. The research concludes that while AI cannot currently be recognized as an independent legal subject, responsibility for its actions may be assigned to its users or developers.
Article
Artificial Intelligence (AI) is rapidly transforming the landscape of academic law libraries worldwide, offering new opportunities for enhancing legal research, information management, and user engagement. This article examines emerging trends in AI applications within academic law libraries, focusing on global developments alongside the unique challenges and opportunities faced in the Caribbean context. Key areas of exploration include AI-powered legal research tools, natural language processing (NLP) applications, and the ethical considerations surrounding AI integration. Drawing from insights presented at the CARALL Conference in July 2024, this article provides a comparative analysis of global best practices and proposes strategic recommendations for Caribbean academic law libraries to harness the potential of AI while addressing regional gaps in technological infrastructure and AI literacy.
Chapter
The rapid integration of artificial intelligence (AI) technologies in the financial services industry has led to significant transformations, fundamentally altering operational frameworks within financial institutions. AI is streamlining customer engagement and enhancing methodologies in risk management, making it essential for organizations to effectively measure the impact of these initiatives. This chapter, titled “Measuring the Impact of AI: Key Metrics and KPIs in Financial Services,” provides a comprehensive overview of the critical metrics and performance indicators necessary for evaluating the effectiveness of AI initiatives in the sector.
Article
Full-text available
This study investigates MasterCard's strategic use of artificial intelligence (AI) to improve its business operations, enhance customer service, strengthen security measures, and gain a competitive advantage. The objectives of the study are to analyse MasterCard's AI strategies, examine specific applications and case studies, and evaluate the impacts and ethical considerations associated with AI implementation. The research methodology involves a comprehensive review of secondary data, including industry reports, case studies, and scholarly articles. Findings indicate that MasterCard effectively leverages AI for fraud detection and prevention, customer experience personalisation, operational efficiencies, and predictive analytics. Successful AI projects have significantly improved these areas, demonstrating AI's transformative potential in the banking sector. However, challenges such as data privacy, ethical implications, and regulatory compliance are also highlighted. The paper concludes with future directions for AI at MasterCard and recommendations for other financial institutions seeking to implement similar strategies.
Chapter
This chapter reviews the methods and application of artificial intelligence (AI) in lifestyle medicine research. The chapter highlights how AI technologies have been used to develop individualized interventions via accurate pattern recognition and classification using machine learning and neural networks. The integration of AI in lifestyle and nutritional medicine is examined, showcasing its applications in predicting chronic disease risks, optimizing dietary recommendations, and improving health outcomes. Ethical considerations, challenges, and future directions for AI in lifestyle medicine are also discussed, emphasizing the need for interdisciplinary collaboration and responsible implementation.
Article
We introduce a special issue of the Journal of Teacher Education examining the transformative implications of Generative Artificial Intelligence (GenAI) systems, particularly Large Language Models (LLMs), for teacher education. Unlike previous cultural technologies that served primarily as tools for human expression, GenAI represents a paradigmatic shift where technology becomes an active participant in cultural content creation and transformation. We position GenAI within the broader context of educational disruption, including post-COVID-19 learning recovery, social media's psychological impacts, and environmental sustainability concerns. We examine GenAI's technical evolution from rule-based symbolic systems to transformer-architecture neural networks capable of multimodal content generation and dynamic user interaction. Critical examination reveals significant challenges, including algorithmic bias perpetuation, environmental sustainability costs, data colonization practices, and the phenomenon of AI "hallucination" in educational contexts. We argue against technological determinism, emphasizing the need for pedagogically driven rather than corporate-efficiency-driven AI integration. We conclude by emphasizing teacher educators' unique strengths in navigating technological transformation: deep pedagogical content knowledge, commitment to justice and equity, practical wisdom, and capacity for critical evaluation. Rather than positioning educators as perpetually behind technological curves, we assert teacher education's essential role in shaping AI development to serve broader educational and societal needs rather than merely adapting to corporate technological visions.
Chapter
Full-text available
This chapter looks at future developments in, and the maximization of, health data reuse in public health. The digital maturity of healthcare systems is, for example, a crucial factor in enabling the availability of electronic health data and their sharing through interconnected databases. The frontiers opened by artificial intelligence to improve health surveillance, disease detection, and resource allocation are changing public health programmes and population well-being by enabling targeted health promotion efforts, identifying high-risk populations, enhancing communication strategies tailored to specific patient subgroups, optimizing logistics in healthcare delivery, and supporting professionals' decision-making processes. The common data spaces to be built in the EU to promote data sharing and innovation are sustained and strengthened by important reforms such as the European Health Data Space Regulation, which aims to standardize eHealth data exchange, empower individuals, and facilitate the secondary use of health data for research, innovation, and policy making by providing precise rules for health data governance, interoperability, and safe data sharing across EU Member States.
Article
Full-text available
This article aims to analyze the adoption and usability of Artificial Intelligence (AI) in digital marketing content creation, simultaneously addressing associated perceptions, challenges, and ethical considerations. A mixed methodology was developed using a descriptive-exploratory approach, applying a digital survey to 470 users and complemented by a focus group composed of six advertising agencies and company experts. The results indicate a high degree of adoption of AI-based tools, highlighting their ease of use and benefits in productivity, operational efficiency, and strategic optimization. At the ethical level, significant concerns about authorship, content originality, and potential algorithmic biases emerged. It is concluded that although AI is widely accepted in the sector, ethical and operational challenges remain. Ongoing technology training programs and the development of clear ethical guidelines are recommended to ensure these tools' sustainable and responsible adoption in contemporary marketing.
Article
The modeling and control of networks over finite lattices are studied via the algebraic state space approach. Using the semi-tensor product of matrices, we obtain the algebraic state space representation (ASSR) of the dynamics of (control) networks over finite lattices. Basic properties concerning networks over sublattices and product lattices are investigated, which shows the application of the analysis of lattice structure in the model reduction and control design of networks. Then, algorithms are developed to recover the lattice structure from the structure matrix of a network over a lattice, and to construct comparability graphs over a finite set to verify whether a multiple-valued logical network is defined over a lattice. Finally, numerical examples are presented to illustrate the results.
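The semi-tensor product that underlies the ASSR can be sketched directly from its standard definition; the matrices below are illustrative examples, not taken from the paper:

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Semi-tensor product of matrices: A stp B = (A kron I_{t/n})(B kron I_{t/p}),
    where A is m x n, B is p x q, and t = lcm(n, p). It reduces to the ordinary
    matrix product when n == p."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))
```

As a usage example, encoding Boolean values as the columns of the 2x2 identity and taking M_and = [[1,0,0,0],[0,1,1,1]] as the structure matrix of conjunction, stp(stp(M_and, x), y) evaluates x AND y, which is the mechanism by which logical dynamics are put into algebraic state space form.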
Article
Organizational communication plays a critical role in shaping the success of modern businesses. In today’s rapidly digitalizing environment, organizations are becoming aware of the changes that new technologies create in internal communication. The effectiveness of communication in organizations depends on the correct use of language production and meaning processes. Understanding the complex relationship between language production, organizational communication, and meaning-making is particularly important to grasp the role of artificial intelligence (AI) in these processes. This study integrates Karl Weick’s Theory of Meaning with a phenomenological framework to examine how language mediates communication in organizations from a phenomenological perspective. The integration of AI technologies in organizations adds new dimensions to corporate communication strategies, increases the accuracy of language use, and reduces uncertainty. However, this process also requires careful consideration of ethical implications and the dynamics of organizational culture. This study examines in depth the integration of Weick’s theories with AI-supported communication tools, highlighting the challenges and opportunities that arise in this context. It explores the potential of AI technologies, especially in areas such as language processing and machine learning, in providing a more effective meaning-making process in corporate communication. Finally, the research highlights the importance of creativity and standardization of meaning, suggesting that with the increasing use of AI to maximize its benefits in organizational communication, a more consistent and effective communication process can be achieved.
Chapter
Full-text available
The Rockefeller Series in Science and Technology (ISSN 3067-0667) publishes research monographs, edited works, collections of papers, reviews of previously published studies, and advanced textbooks covering all relevant subjects related to science and technology. Monographs of this series can contain the following contributions: Book chapters that revisit and expand the interpretation of previously published works; Timely discussions of a relevant topic; Literature review with insights and novel interpretations; A carefully designed study that sparks interest in the research community; Research tailored for graduate students and professionals; Technical reports; Other topics related to the book series. The monographs published in the Rockefeller Series on Science and Technology attract the interest of researchers, students, and professionals.
Article
Vulnerability assessment is a systematic process to identify security gaps in the design and evaluation of physical protection systems. Adversarial path planning is a widely used method for identifying potential vulnerabilities and threats to the security and resilience of critical infrastructures. However, achieving efficient path optimization in complex large-scale three-dimensional (3D) scenes remains a significant challenge for vulnerability assessment. This paper introduces a novel A*-algorithmic framework for 3D security modeling and vulnerability assessment. Within this framework, the 3D facility models were first developed in 3ds Max and then incorporated into Unity for A* heuristic pathfinding. The A* heuristic pathfinding algorithm was implemented with a geometric probability model to refine the detection and distance fields and achieve a rational approximation of the cost to reach the goal. An admissible heuristic is ensured by incorporating the minimum probability of detection (P_D^min) and the diagonal distance to estimate the heuristic function. The 3D A* heuristic search was demonstrated using a hypothetical laboratory facility, where a comparison was also carried out between the A* and Dijkstra algorithms for optimal path identification. Comparative results indicate that the proposed A* heuristic algorithm effectively identifies the most vulnerable adversarial path with high efficiency. Finally, the paper discusses hidden phenomena and open issues in efficient 3D pathfinding for security applications.
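The A* search with an admissible diagonal-distance heuristic described in this abstract can be illustrated with a minimal 2D grid sketch, assuming unit and sqrt(2) step costs; the 3D facility model, Unity integration, and detection-probability field are not reproduced here, and all names are illustrative:

```python
import heapq
import math

def diagonal_distance(a, b):
    # Octile distance: admissible on an 8-connected grid with 1 / sqrt(2) costs.
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return (dx + dy) + (math.sqrt(2) - 2) * min(dx, dy)

def a_star(grid, start, goal):
    """Least-cost path on a grid of 0 (free) / 1 (blocked) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]
    open_heap = [(diagonal_distance(start, goal), 0.0, start)]
    g_cost = {start: 0.0}
    parent = {start: None}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:  # admissible h => first pop of goal is optimal
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        if g > g_cost.get(node, float("inf")):
            continue  # stale heap entry
        for dr, dc in moves:
            nr, nc = node[0] + dr, node[1] + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                step = math.sqrt(2) if dr and dc else 1.0
                ng = g + step
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    parent[(nr, nc)] = node
                    heapq.heappush(open_heap,
                                   (ng + diagonal_distance((nr, nc), goal),
                                    ng, (nr, nc)))
    return None
```

Because the octile heuristic never overestimates the true remaining cost on an 8-connected grid, the first time the goal is popped the path is optimal, which is the admissibility property the abstract relies on when it folds the minimum detection probability into the heuristic.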
Preprint
Full-text available
Explainable Artificial Intelligence (AI) has emerged as a field of study that aims to provide transparency and interpretability to machine learning models. As AI algorithms become increasingly complex and pervasive in various domains, the ability to understand and interpret their decisions becomes crucial for ensuring fairness, accountability, and trustworthiness. This abstract provides an overview of the importance of explainable AI and highlights some of the key techniques and approaches used in interpreting and understanding machine learning models. The abstract begins by emphasizing the growing significance of explainability in AI systems. As machine learning models are deployed in critical applications such as healthcare, finance, and autonomous vehicles, it becomes essential to comprehend the reasoning behind their predictions. Explainable AI methods provide insights into how these models arrive at their decisions, enabling stakeholders to identify biases, diagnose errors, and gain actionable insights from the model's behavior. The abstract then delves into various techniques employed in interpreting machine learning models. These techniques include rule-based explanations, feature importance analysis, surrogate models, and model-agnostic approaches. Rule-based explanations aim to provide human-understandable rules that explain the model's decision-making process. Feature importance analysis identifies the most influential features contributing to the model's predictions. Surrogate models create simplified approximations of complex models that are easier to interpret. Model-agnostic approaches, on the other hand, focus on post-hoc interpretability by generating explanations irrespective of the underlying model architecture. 
Furthermore, the abstract explores the challenges associated with explainable AI, such as the trade-off between interpretability and model performance, the need for domain expertise to interpret explanations, and the ethical considerations of revealing sensitive information. It also discusses the ongoing efforts in standardization and regulation of explainable AI to ensure its responsible and ethical use.
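One of the model-agnostic techniques the abstract mentions, permutation-style feature importance, can be sketched generically; this illustrates the idea itself rather than any particular library's API, and all names are illustrative:

```python
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic importance: shuffle one feature column at a time
    and measure how much the (higher-is-better) metric degrades."""
    rng = np.random.default_rng(seed)
    base = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and y
            drops.append(base - metric(y, predict(Xp)))
        importances[j] = np.mean(drops)
    return importances
```

Because it treats `predict` as a black box, the same routine works for any model architecture, which is exactly the post-hoc, model-agnostic property the abstract describes.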
Article
Increasing crop productivity while ensuring sustainability requires smart agricultural management. This study integrates Swarm Robotics Optimization (SRO) and Convolutional Neural Networks (CNNs) to improve the classification of soil data for cassava crops; combining the two methods improves both the efficiency and the precision of soil data analysis. IoT sensors measure soil data, including NPK levels, pH, humidity, and temperature, in real time, and an Internet of Things (IoT) gateway transfers the sensor readings to a cloud service over Wi-Fi or LoRaWAN, where the data are stored securely and made available for processing. The data are then classified using the CNN's pattern recognition and feature extraction capabilities. CNN performance, however, depends on finding a suitable parameter and architecture configuration, and this is where SRO comes in: modeled on social insect behaviour, SRO actively explores the search space to optimize the CNN's settings, and its adaptive optimization helps the CNN classify complex soil data, yielding a hybrid technique. The optimized CNN model, trained on the enhanced soil data, classifies soil accurately, and a comparison with other machine-learning algorithms demonstrates the efficacy of the SRO-optimized CNN. The resulting soil classifications help farmers allocate resources, improve crop health, and boost yields, increasing cassava output while promoting sustainability. Future studies will incorporate additional environmental characteristics and more crops to further improve precision agriculture.
Article
Several synaptic weight matrices have been proposed for Hopfield neural network (HNN) models, where chaotic dynamics may arise. Contrary to those works, this manuscript aims to present a synaptic weight matrix where every entry can be set as an integer, harvesting an elegant chaotic HNN from a chaos theory point of view. Analytical and numerical analyses such as equilibrium points, bifurcation diagrams, Lyapunov exponents, and basins of attraction demonstrate that the proposed HNN exhibits complex behaviors across a wide range of parameter values. Also, we extend the study of the HNN into the fractional order domain. Moreover, the design and implementation details of the proposed neural network using field programmable analog arrays (FPAAs) are thoroughly discussed. This includes the various components and their configurations, highlighting how they contribute to the overall functionality of the neural network. As a result, we found a strong correlation between numerical simulations and SPICE circuit simulations.
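As a generic illustration of the kind of model this abstract discusses (the weight matrix below is an arbitrary integer-valued example, not the paper's proposed matrix), a continuous Hopfield update can be simulated with simple Euler integration:

```python
import numpy as np

def hopfield_step(x, W, dt=0.02):
    # One Euler step of the continuous Hopfield dynamics
    # dx/dt = -x + W @ tanh(x); tanh is the neuron activation.
    return x + dt * (-x + W @ np.tanh(x))

# Illustrative integer-valued synaptic weight matrix (an assumption,
# not the matrix from the paper).
W = np.array([[ 2, -1,  0],
              [ 1,  1,  1],
              [-3,  2,  1]], dtype=float)

x = np.array([0.1, -0.2, 0.3])   # initial neuron state
trajectory = [x]
for _ in range(1000):
    x = hopfield_step(x, W)
    trajectory.append(x)
```

Bifurcation diagrams and Lyapunov exponents of the kind the paper reports are typically computed over many such trajectories while sweeping a weight or gain parameter; because tanh is bounded, the state stays in a bounded region even when the dynamics are chaotic.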