Fig 2 - uploaded by Vagan Terziyan
Players (1, 2) for the "Fake" role (left). Players (I, II) for the "Debunker" role (right).

Source publication
Article
Full-text available
Industry 4.0 systems extensively use artificial intelligence (AI) to enable smartness, automation and flexibility within a variety of processes. Because of their importance, these systems are potential targets for attackers trying to take control of critical processes. Attackers use various vulnerabilities of such systems including specif...

Contexts in source publication

Context 1
... is a sparring partner for D, and its role is to simulate attacks to train D and strengthen the digital immune system as a whole. We distinguish two principal ways to model possible attacks with the help of G and, accordingly, two types of the "Fake"/G player (Figure 2 (left)): ...
Context 2
... let us consider the element that ensures the security functions of digital immunity, i.e., the Debunker (D in GAN terms). We distinguish two principal types of security modelling (Figure 2 (right)): I) a "Traditional Discriminator" based on AI; II) a "Turing Discriminator" based on collective intelligence. ...

Citations

... Generative modelling can also be learned using reinforcement learning. In this configuration, agents gain experience in data generation by interacting with the environment and gathering rewards and feedback that vary with the quality of the samples they produce [12]. This method has been used in fields such as text generation, where feedback-driven reinforcement learning helps improve the generated text. ...
Chapter
Full-text available
Artificial intelligence (AI), the introduction of synthetic intellect into processes and procedures, has emerged as a disruptive catalyst across many different sectors and has found widespread application in automated systems, finance, and healthcare. With the increasing use of AI, models and algorithms have begun surpassing significant benchmarks of human performance. The significance of generative AI (GenAI) lies in its concentration on knowledge work and creative labor, which occupy a significant number of humans, totalling billions. GenAI has the potential to enhance people's efficiency and creativity by at least 10%, and it can also improve their speed and overall capabilities to some extent. GenAI possesses the capability of generating immense economic value, up to trillions of dollars. AI systems used to be rule-based and had limited capabilities; the emergence of deep learning algorithms toward the century's end paved the way for more sophisticated GenAI models. GenAI is a specialized branch of AI that concentrates on the production of novel content, including audio, video, and text, through the utilization of deep learning algorithms. Implementing AI for decision-making in sustainable project life cycle operations remains difficult. Recent progress in large language models (LLMs) has empowered developers to engage with generative algorithms, effectively transforming natural language queries into code for diverse programming languages. Advanced tools such as Codacy and OpenAI Codex are widely utilized to assist LLMs, and their adoption is steadily growing. GenAI solutions leverage features such as "completion" to significantly improve the creation of software. AI is extensively used in a number of organizations, including those in healthcare, finance, robotics, and automation.
However, using AI to make decisions in sustainable project life cycle management is still a difficult undertaking. The present research explores the utilization of AI techniques for decision-making to promote sustainable operations throughout the various phases of a project life cycle. This chapter compares the existing GenAI project life cycles, suggests a practical framework, and provides futuristic recommendations for sustainable project life cycles. Keywords: Generative AI, artificial intelligence, sustainable project life cycle, large language models (LLMs), intelligent systems, edge and fog computing
... Generative Adversarial Networks (GANs) [23], with many variations of their architectures [24], are known to be among the most powerful ML tools for a wide range of applications involving image processing and generation [25]. The backbone idea behind GANs is the synchronous adversarial training of two competing neural networks: a Discriminator, which separates generated (fake) images from real ones, and a Generator of realistically-looking images. ...
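The adversarial objective sketched in this excerpt can be made concrete. Below is a minimal illustration (not the cited authors' implementation) of the standard GAN losses, where `d_real` and `d_fake` stand for the Discriminator's probability outputs on a real and a generated image:

```python
import math

def discriminator_loss(d_real: float, d_fake: float) -> float:
    """Binary cross-entropy the Discriminator minimizes:
    push D(real) toward 1 and D(fake) toward 0."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake: float) -> float:
    """Non-saturating Generator objective: push D(fake) toward 1,
    i.e., make generated images indistinguishable from real ones."""
    return -math.log(d_fake)

# An undecided Discriminator (outputs 0.5 everywhere) pays the maximal
# cost of 2*ln(2); a confident, correct one pays almost nothing.
print(discriminator_loss(0.5, 0.5))    # ≈ 1.386
print(discriminator_loss(0.99, 0.01))  # ≈ 0.020
```

Training alternates gradient steps on these two losses, which is why the networks act as coevolving sparring partners rather than a fixed teacher and student.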
Article
Smart manufacturing uses emerging deep learning models, and particularly Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), for different industrial diagnostics tasks, e.g., classification, detection, recognition, prediction, synthetic data generation, security, etc., on the basis of image data. In spite of being efficient for these objectives, the majority of current deep learning models lack interpretability and explainability. They can discover features hidden within input data together with their mutual co-occurrence. However, they are weak at discovering and making explicit hidden causalities between the features, which could be the reason behind the particular diagnoses. In this paper, we suggest Causality-Aware CNNs (CA-CNNs) and Causality-Aware GANs (CA-GANs) to address the issue of learning hidden causalities within images. The core architecture includes an additional layer of neurons (after the last convolution-pooling and just before the dense layers), which learns pairwise conditional probabilities (aka causality estimates) for the features. Computations for these neurons are driven by the adaptive Lehmer mean function. Learned causalities are merged with the features during flattening and (via fully connected layers) influence the classification outcomes. Such causality estimates can be done for the mixed inputs where images are combined with other data. We argue that CA-CNNs not only improve the classification performance of normal CNNs but also open additional opportunities for the explainability of the models’ outcomes. We consider as an additional advantage for CA-CNNs (if used as a discriminator within CA-GANs) the possibility to generate realistically looking images with respect to the causalities.
... The success of deep convolutional architectures naturally leads to the further active development of neural generative architectures, i.e., GANs [13], and to the introduction of CNN- and GAN-supported innovations in Industry 4.0 [39,40,41,42]. In the first place, GANs enable the development of novel simulation-based decision-making and decision-support industrial tools, such as digital twins [43,44], and maintain their robustness and accuracy by means of adversarial training [45,46]. ...
... Putting two learning models in confrontation solves the problem of a strong teacher, since both networks act as constantly developing challengers for their opponent, thus synchronously coevolving during training. Some architectures rely on the idea of reaching better generation quality by primarily enhancing the discriminative component [46,47]. ...
Article
Biologicalization (biological transformation) is an emerging trend in Industry 4.0 affecting the digitization of manufacturing and related processes. It brings up the next generation of manufacturing technology and systems that extensively use biological and bio-inspired principles, materials, functions, structures and resources. This research is a contribution to the further convergence of computer and human vision for more robust and accurate automated object recognition and image generation. We present VOneGANs, a novel class of generative adversarial networks (GANs) with a qualitatively updated discriminative component. The new model incorporates a biologically constrained digital primary visual cortex V1. This earliest cortical visual area performs the first stage of human visual processing and is believed to be a reason for its robustness and accuracy. Experiments with the updated architectures confirm the improved stability of GAN training and the higher quality of the automatically generated visual content. The promising results allow considering VOneGANs as providers of high-quality training content and as enablers of future simulation-based decision-making and decision-support tools for condition-monitoring, supervisory control, diagnostics, predictive maintenance, and cybersecurity in Industry 4.0.
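The abstract does not specify the internals of the digital V1 component; in comparable V1-constrained architectures, the front-end is commonly approximated by a fixed bank of oriented Gabor filters placed before the trainable convolutional layers. A minimal sketch under that assumption (all parameter values illustrative):

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, phase=0.0):
    """A 2-D Gabor filter: a Gaussian envelope times an oriented cosine
    carrier, the classic model of a V1 simple-cell receptive field."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    x_rot = xx * np.cos(theta) + yy * np.sin(theta)
    y_rot = -xx * np.sin(theta) + yy * np.cos(theta)
    envelope = np.exp(-(x_rot ** 2 + y_rot ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * x_rot / wavelength + phase)
    return envelope * carrier

def v1_front_end(size=9, n_orientations=4):
    """Fixed (non-learned) bank of oriented filters that would sit in
    front of the discriminator's trainable layers."""
    thetas = [k * np.pi / n_orientations for k in range(n_orientations)]
    return np.stack([gabor_kernel(size, 4.0, t, 2.0) for t in thetas])

bank = v1_front_end()
print(bank.shape)  # (4, 9, 9): one 9x9 kernel per orientation
```

Because the filter bank is fixed rather than learned, it constrains the discriminator's earliest features to biologically plausible oriented edges, which is the kind of constraint the abstract credits for improved robustness.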
Article
Full-text available
This study introduces the implementation of a Multivocal Literature Review (MLR) approach to analyze machine learning (ML) and deep learning (DL) strategies for securing the Industrial Internet of Things (IIoT) in smart industry environments. By extending and validating existing research and practical insights, this review aims to provide valuable guidance for both novice and intermediate audiences. The review process identified an initial pool of 403 sources (367 white literature, WL, and 36 gray literature, GL), which was refined to a set of 263 sources (247 WL and 16 GL), addressing research questions. Key contributions include: (1) a detailed classification of core technologies influencing smart industry, enhancing the identification of specific security vulnerabilities; (2) a systematic categorization of critical security challenges, including network threats, software threats, tampering and deception attacks, and advanced attacks, with a graphical summary of prevalent threats; (3) an overview of recent advancements in securing IIoT environments, encompassing theoretical frameworks, intrusion detection systems (IDS), intelligent algorithms, and datasets; and (4) a discussion on current strategy limitations, identifying open research and practice challenges. The innovative use of a MLR, incorporating diverse perspectives beyond traditional scientific literature, broadens the study’s scope and improves data traceability, offering actionable recommendations for advancing ML and DL strategies in cybersecurity.
Chapter
Machine learning has significantly influenced the manufacturing business under the Industry 4.0 paradigm. Industry 4.0 encourages the use of smart sensors, tools, and devices to enable smart factories that continuously collect production data. Actionable intelligence can be formed by applying ML techniques to the collected data to increase production output without materially changing the required resources. Additionally, it is now possible to recognize complex production patterns owing to machine learning techniques' ability to provide analytical insights, including intelligent and continuous inspection, predictive maintenance, quality improvement, process optimization, supply chain management, and task scheduling. This research presents an analysis of Internet of Things-enabled manufacturing, of tools other than machine learning structures used in conventional as well as unconventional machining processes, and of their strengths and weaknesses in an Industry 4.0 context, together with a perspective on the manufacturing paradigm.
Article
Smart manufacturing needs digital clones of physical objects (digital twins) and of human decision-makers (cognitive clones). The latter requires machine learning to capture hidden personalised decision models from humans. Machine learning nowadays is subject to various adversarial attacks (poisoning, evasion, etc.). Responsible use of machine learning requires digital immunity (the capability of smart systems to operate robustly in adversarial conditions). Both problems (clones and immunity training) have the same backbone solution, which is adversarial training (learning on automatically generated adversarial samples). In this study, we design and experimentally test special algorithms for adversarial sample generation that fit both purposes simultaneously: to better personalise decision models for digital clones and to train digital immunity, thus ensuring the robustness of autonomous decision models. We demonstrate that our algorithms facilitate the desired robustness and accuracy of the training process.
Article
Full-text available
Industry 4.0 and Smart Manufacturing are associated with Cyber-Physical-Social Systems populated and controlled by Collective Intelligence (human and artificial). They are an important component of Critical Infrastructure and are essential for the functioning of a society and economy. Hybrid threats nowadays target critical infrastructure and, particularly, vulnerabilities associated with both human and artificial intelligence. This article summarizes some of the latest studies of WARN: "Academic Response to Hybrid Threats" (an Erasmus+ project), which aim at the resilience (regarding hybrid threats) of various Industry 4.0 architectures and, especially, of human and artificial decision-making within Industry 4.0 processes. This study discovered a certain analogy between the (cognitive) resilience of human and artificial intelligence against cognitive hacks (a special adversarial hybrid activity) and suggested approaches to train that resilience with special adversarial training techniques. The study also provides recommendations for higher education institutions on adding such training and related courses to their various programs. The specifics of these courses would be as follows: their learning objectives and related intended learning outcomes are not an update of personal knowledge, skills, beliefs or values (the traditional outcomes) but the robustness and resilience of those already available.
Article
Smart manufacturing often requires digital clones of physical objects (twins) and of human decision-makers ("cognitive clones"). The latter requires machine learning to capture hidden personalized decision models from humans. Machine learning nowadays is subject to various adversarial attacks (poisoning, evasion, etc.) on the training and testing data. Responsible use of machine learning requires some kind of "digital immunity" (the capability of smart systems to operate robustly in adversarial conditions). Both problems (clones and immunity training) require the same backbone solution, which is adversarial training (learning on the basis of automatically generated adversarial samples). In this study we designed and experimentally tested special algorithms for adversarial sample generation that fit both purposes simultaneously: to better personalize the decision models for digital clones and to train digital immunity to ensure robustness of the autonomous decision models. We demonstrated that our algorithms essentially facilitate the training process toward the desired robustness for both problems.
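The abstract does not detail its generation algorithms; as an illustration of the general idea of adversarial samples, here is a sketch of the well-known fast gradient sign method (FGSM) applied to a simple logistic model (not the cited authors' algorithm; all names and values are illustrative):

```python
import numpy as np

def fgsm_adversarial_sample(x, w, b, y, eps):
    """Perturb input x in the direction that maximally increases the
    logistic loss of the model sigmoid(w.x + b) against true label y.
    For a linear logit, the gradient of the loss w.r.t. x is (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # model's predicted probability
    grad = (p - y) * w                       # d(loss)/d(x)
    return x + eps * np.sign(grad)

w = np.array([0.5, -1.0, 2.0])   # toy model weights
x = np.array([1.0, 2.0, -1.0])   # a clean training sample
x_adv = fgsm_adversarial_sample(x, w, b=0.1, y=1.0, eps=0.05)
print(np.abs(x_adv - x))  # each coordinate shifted by exactly eps
```

Feeding such perturbed samples back into training is the backbone "adversarial training" step both abstracts refer to: the model learns to keep its decisions stable under worst-case small perturbations.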
Article
Full-text available
Artificial Intelligence (AI) is known to be a driving force behind Industry 4.0. The current hype around the development and industrial adoption of AI systems is mostly associated with deep learning, i.e., with the ability of AI to perform various specific cognitive activities better than humans do. But what about Artificial General Intelligence (AGI), associated with the generic ability of a machine to consciously perform any task that a human can? Do we have many samples of AGI research adopted by Industry 4.0 and used for smart manufacturing? In this paper, we report a systematic mapping study of AGI-related papers (published during a five-year period) to find out whether AGI is giving up its position within AI as an attractive tool to address industry needs. We show what the major concerns of the AGI academic community are nowadays and how AGI findings have already been, or could potentially be, applied within Industry 4.0. We discovered that the gap between AGI studies and industrial needs is still high and even shows some indications of growing. However, some AGI-related findings have the potential to create real value in smart manufacturing.
Chapter
The mass implementation of the Internet of Things (IoT) creates a new computing paradigm where ubiquitous networks of devices with embedded sensors and actuators support innovative business models. This research uses a combination of natural language processing and corpus linguistics techniques to support identification of the risk factors present in multi-dimensional industrial IoT assets (IIoT). The methods reviewed are found to streamline the manual stages traditionally associated with robust knowledge synthesis processes such as PRISMA. The methods explored can help decision makers and researchers to systematically identify trends and directions in the literature across the broad domain of IoT. The resulting findings can then contribute to risk management planning in what is an emerging and complex field, particularly the industrial use of IoT, and for which historic risk data is immature.