International Journal on Recent and Innovation Trends in Computing and Communication
ISSN: 2321-8169 Volume: 11 Issue: 11
Article Received: 15 August 2023 Revised: 16 September 2023 Accepted: 7 October 2023
Navigating the Landscape of Robust and Secure Artificial Intelligence: A Comprehensive Literature Review
Saurabh Suman Choudhuri¹, Jayesh Jhurani²
¹Vice President & Global Head of Digital Modalities, SAP America Inc. Email: s.choudhuri@sap.com; IEEE id: 99962111.
²IT Manager, ServiceTitan, Inc. Email: jjhurani@servicetitan.com
Abstract
Addressing the multidimensional nature of Artificial Intelligence (AI) assurance, this survey elaborates on the various aspects of ensuring the reliability and safety of AI-enabled systems. It navigates model errors, unmodeled phenomena, and security threats to provide a detailed review of the literature. The review covers mitigation strategies for model errors used in the past, the challenge of under-specification in modern machine learning (ML) models, and the central role of calibrated uncertainty. In addition, it examines the software security foundations of AI systems, the emerging field of adversarial machine learning, and the processes needed to test and evaluate systems against adversarial threats. The review also considers the Department of Defense (DoD) context, how the developmental and operational testing landscape is changing, and the cultural shifts required to field AI that is both robust and secure.
Introduction
The central concern is the resilience of Artificial Intelligence (AI) systems in the face of several challenges, including model errors, phenomena we are unable to model, and security threats. Correcting model imperfections, such as shortcomings in optimization mechanisms, regularization criteria, and inference algorithms, is one of the areas addressed by contemporary literature. Despite expectations, under-specification remains a significant hindrance in modern machine learning, especially deep learning, allowing hidden biases to enter predictions and leading to undesired failures once models are deployed. The review gives special attention to calibrated uncertainty measures, which are central to managing the complications introduced by dataset shift. Security issues in AI systems are analyzed with a focus on established software security practice as well as the nascent adversarial machine learning domain. The literature also shows how AI systems are becoming embedded in defense functions, illustrating the changing developmental and operational testing landscape for the Department of Defense (DoD).
Robustness of AI Components and Systems
Robustness in AI systems calls for a layered examination of the mistakes that arise from model errors and from unmodeled states. This section captures the current understanding of those challenges, highlighting the mitigations proposed in the literature and the gaps that still require research.
Addressing Model Errors
A large body of literature stresses the need to address model errors when taking steps to increase the resilience of automation and AI systems [1]. A broad range of techniques has been explored for this purpose, including robust optimization, regularization, risk-sensitive objective functions, and robust inference algorithms. Although several of these approaches have proven effective in controlled environments, a significant gap remains: few tools translate these theoretical advances into effective everyday practice.
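To make the flavor of these techniques concrete, the following sketch (plain NumPy, illustrative function and parameter names only, not drawn from the surveyed works) contrasts an ordinary mean-loss objective with a regularized, risk-sensitive variant that penalizes the worst-case tail of the per-sample loss distribution.

```python
import numpy as np

def per_sample_losses(w, X, y):
    """Squared-error loss of a linear model, one value per sample."""
    preds = X @ w
    return (preds - y) ** 2

def standard_objective(w, X, y):
    """Ordinary empirical risk: mean loss over the training set."""
    return per_sample_losses(w, X, y).mean()

def robust_objective(w, X, y, lam=0.1, alpha=0.1):
    """Illustrative risk-sensitive, regularized objective:
    mean of the worst alpha-fraction of losses (a CVaR-style tail term)
    plus an L2 penalty that discourages overly complex solutions."""
    losses = per_sample_losses(w, X, y)
    k = max(1, int(alpha * len(losses)))
    tail = np.sort(losses)[-k:]          # worst-case tail of the loss distribution
    return tail.mean() + lam * np.dot(w, w)

# Toy comparison on synthetic data containing a few outliers.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=200)
y[:5] += 10.0                            # inject outliers

w = np.zeros(3)
print("standard objective:      ", standard_objective(w, X, y))
print("risk-sensitive objective:", robust_objective(w, X, y))
```

The tail-focused term reacts far more strongly to the injected outliers than the plain mean does, which is the intuition behind risk-sensitive training criteria.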
Underspecification in Modern ML Systems
Under-specification has emerged as a major hurdle to achieving robustness in ML systems, most pronouncedly those built on deep learning. The phenomenon arises because
the optimization problems solved when training deep neural networks admit many distinct solutions with very similar average performance [2]. Models that emerge from this phenomenon can harbor hidden biases and latent faults, making their eventual deployment prone to inconsistency. Model selection in deep learning is also a gaping hole for explainable AI (XAI), as described by many researchers in that field (Figure 1). Without the ability to understand model decisions and anticipate the consequences of acting on them, considerable room is left for blame and fault to enter advocacy and government policy making, and the resulting public distrust presents a social challenge if left unaddressed.
Figure 1. The concept of XAI [3].
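A minimal sketch of how under-specification can be surfaced in practice, assuming scikit-learn is available and using synthetic data with a crude, hypothetical feature-noise shift: identically configured models that differ only in random seed tend to score similarly in distribution yet can diverge under the shift.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic data; the "shifted" set perturbs features to mimic deployment drift.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
rng = np.random.default_rng(1)
X_shift = X_test + rng.normal(scale=1.5, size=X_test.shape)  # crude dataset shift

for seed in range(5):
    # Identical pipeline; only the random seed (an unconstrained training choice) varies.
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
    model.fit(X_train, y_train)
    iid_acc = accuracy_score(y_test, model.predict(X_test))
    shift_acc = accuracy_score(y_test, model.predict(X_shift))
    print(f"seed={seed}  in-distribution={iid_acc:.3f}  shifted={shift_acc:.3f}")
```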
Understanding Uncertainty in ML Models
Understanding and trusting the uncertainty reported by ML models is essential given the structure of modern ML algorithms. The phenomenon referred to as dataset shift [1] occurs when the data distribution encountered during operation departs from the distribution of the training data. Calibration methods are noted in the literature as essential techniques for setting uncertainty scores so that they accurately capture the likelihood that a prediction is correct. Effectively calibrated uncertainty measures allow designers, integrators, and monitors to build policies that enforce prudence in the operation of AI systems.
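A common, simple diagnostic in this vein is the expected calibration error, sketched below in plain NumPy with synthetic confidences; it is an illustration rather than a prescription from the reviewed works.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: average |confidence - accuracy| over
    equal-width confidence bins, weighted by the fraction of samples in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

# Example: an overconfident classifier reports ~0.9 confidence but is right ~70% of the time.
rng = np.random.default_rng(0)
conf = rng.uniform(0.85, 0.95, size=1000)
hits = rng.random(1000) < 0.70
print(f"ECE = {expected_calibration_error(conf, hits):.3f}")  # large gap -> poorly calibrated
```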
Challenges and Opportunities in Robust AI
The roadblocks accompanying the development of robust AI should be read not merely as obstacles but as opportunities that demand prompt attention. Despite frequent mention throughout the literature, determining robustness criteria and testing strategies across the AI system life cycle remains problematic. From model evaluation and deployment through continuous monitoring during operations, the literature illuminates these interim challenges as critical [2], [4]. Promising directions include "building robustness in" through deliberate design and customization and utilizing algorithms with inherent robustness features, although the literature notes the need for additional study and for a coordinated unification of testing procedures.
Tools and Practices for Measuring Robustness
The reviewed literature stresses the need to create methods, frameworks, and practices that allow AI component resiliency, as well as system-level resiliency, to be measured. AI engineers, product managers, designers, software engineers, systems engineers, and operators all need better tools of this kind. Such tools serve as the centerpiece of activities in developing,
engineering, and operating AI-enabled capabilities with certainty and confidence.
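One tool-agnostic measurement in this spirit, offered here as an illustrative sketch rather than a surveyed method, is an accuracy-versus-perturbation curve that records how quickly a component's performance degrades as input noise grows; the example below assumes scikit-learn and synthetic data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Train a simple component to be measured.
X, y = make_classification(n_samples=1500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
# Robustness curve: accuracy as a function of the magnitude of injected input noise.
for sigma in [0.0, 0.5, 1.0, 2.0, 4.0]:
    noisy = X_te + rng.normal(scale=sigma, size=X_te.shape)
    acc = accuracy_score(y_te, model.predict(noisy))
    print(f"noise sigma={sigma:<4} accuracy={acc:.3f}")
```

The shape of such a curve, rather than a single accuracy number, is what a robustness-oriented toolchain would track over time.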
Security Challenges in Modern AI Systems
Security is a critical aspect that must be addressed when building AI systems in the modern technological landscape. This section provides an integrated look at the security risks facing modern AI systems, focusing on protection against both intentional subversion and unintentional failure.
Foundations in Software Security
An AI system is an amalgamation of software and data operating as part of a larger whole; it falls under the umbrella of software-intensive and cyber-physical systems. In view of this, AI engineers should draw on established knowledge and best practices from the software security domain. Measures proposed to fill the remaining gap include adapting MITRE's ATT&CK framework (Figure 2) to secure ML systems in production and building on security features prioritized specifically for AI.
Figure 2. ATT&CK model relationships [5].
Adversarial Machine Learning: Taxonomy and Strategies
Advances in modern ML algorithms, most notably deep learning, have opened new attack routes, driving the growth of adversarial machine learning. Researchers in this field seek to understand machine learning models, the threats against them, and the ways in which they can be protected from attack [6], [7]. A common taxonomy categorizes attacks into three areas: causing a model to learn the wrong thing, causing it to do the wrong thing, and causing it to reveal something that is correct but should not be exposed. By misrepresenting or corrupting operational data, attackers supply malicious examples that provoke unexpected and unfavorable responses, and they can further exploit exposed interfaces to target ML models deployed in production.
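As a minimal illustration of the "do the wrong thing" class of attack, the sketch below applies the fast gradient sign method to a plain logistic regression model, for which the input gradient is available in closed form; the model, data, and perturbation budget are illustrative assumptions, not taken from the cited studies.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a simple victim model on synthetic data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

def fgsm(x, label, eps):
    """Fast Gradient Sign Method against logistic regression.
    The gradient of the cross-entropy loss w.r.t. the input is (p - y) * w,
    so stepping along its sign maximally increases the loss under an L-infinity budget."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad = (p - label) * w
    return x + eps * np.sign(grad)

x, label = X[0], y[0]
x_adv = fgsm(x, label, eps=0.5)  # whether the label flips depends on this sample's margin
print("clean prediction:      ", clf.predict([x])[0], "(true label:", label, ")")
print("adversarial prediction:", clf.predict([x_adv])[0])
print("max perturbation:      ", np.max(np.abs(x_adv - x)))
```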
Mitigation Strategies and Trade-offs
Combating adversarial AI involves intricate coordination and inherent compromises. Defenders, system builders, and operators must make decisions while weighing uncomfortable trade-offs in the information advantage afforded to attacker versus defender [8]–[10]. Interestingly, the latter work does not make an explicit connection to its critics, yet it clearly reveals the complicated trade-off
between enforcing "do the right thing" policies, learning, and disclosure. It is a common paradox that models built to "do the right thing" may eventually become more prone to revealing information.
Research and Development Imperatives
As the adversarial landscape around AI grows more hostile, continued research and invention remain necessary. The main areas of focus include accounting for the interactions between multiple defense policies, taking into consideration the level of information available to both attackers and defenders, and managing budget limitations. Furthermore, there is a significant need for products that help the builders of AI systems understand what their systems require from a security standpoint.
Expanding Security Coordination and Red Teaming
There are two major fields of opportunity for strengthening AI systems against developing threats. First, the research findings of coordinated security vulnerability disclosure should be fully exploited so that AI technologies can be accommodated as new vulnerabilities emerge. With AI increasingly used in real-world systems, developing strategies to understand and resolve its inevitably peculiar security implications is one of the most pressing issues [11], [12]. Red teaming presents the second area of benefit: improving red teaming capabilities is an effective approach. A traditional part of hardening software systems, red teaming can be operationalized as an inspection tool for evaluating the security posture of AI-enabled environments.
Processes and Tools for Testing, Evaluating, and Analyzing AI Systems
As AI systems are developed and deployed more widely, robustness and security have received considerable attention. This section covers the crucial processes and tools needed for the testing, verification, and evaluation of AI systems. Seen from an AI engineering point of view, the discussion highlights the need for dedicated tools, methods, design patterns, and standards that support the responsible building and operation of full-scale solutions.
AI Engineering Landscape
A full and accurate understanding of an AI system's robustness and security requires a close look at the technical, algorithmic, and mathematical constructs on which it is built. Although innovative instruments are not uncommon, the highly specific nature of AI systems, and of ML applications in particular, requires a distinct set of tools and processes. Compared with conventional software engineering, AI work involves problems that are typically larger in scope, less well resolved, more vaguely formulated, and marked by more complex input and output spaces [3], [12], [13]. Some traditional software engineering tools provide useful support, but they do not solve AI problems completely, which shows that purpose-built tools, tailored to the particular character of AI development, are required.
Challenges in Existing Testing Tools
Traditional testing tools, mostly designed for conventional software development, often fall short when used to test AI and ML algorithms. Large problem spaces, objectives that are fuzzy because end states or emergent behavior in intelligent systems are poorly understood, and complex mappings from inputs to outputs all demand refinements that traditional testing methods do not provide. A clear gap is observable, one that motivates entirely new verticals. Responding to the nuances of AI systems requires going beyond the ordinary and using tools fine-tuned to the particulars of AI development.
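A flavor of what such AI-specific tests can look like is a metamorphic test, sketched below under illustrative assumptions (scikit-learn, synthetic data, a hypothetical stability threshold): rather than asserting a single expected output, it asserts a relation that should hold between the outputs of related runs.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def test_prediction_stable_under_small_noise():
    """Metamorphic test: tiny, label-preserving input perturbations should not
    change the predicted class for the vast majority of samples.
    There is no single expected output to assert, only a relation between runs."""
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    rng = np.random.default_rng(0)
    base = model.predict(X)
    perturbed = model.predict(X + rng.normal(scale=0.01, size=X.shape))
    agreement = np.mean(base == perturbed)
    assert agreement >= 0.98, f"predictions unstable under tiny noise: {agreement:.3f}"

if __name__ == "__main__":
    test_prediction_stable_under_small_noise()
    print("metamorphic stability test passed")
```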
Incorporation into Modern Software Development
Operating smoothly requires AI system tools that are integrated into present-day software development processes. In parallel with traditional software engineering practice, AI engineers need instruments analogous to those used for software reverse engineering, static and dynamic code analysis, and fuzz testing [5], [8]. At the same time, AI is unique in its methodological requirements, which call for additional innovative approaches that adapt standard testing in certain ways. Using AI-specific tactics within contemporary workflows enables coherent incorporation into the system, in keeping with the overall goal of achieving soundness and security.
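The sketch below carries the fuzz-testing idea over to an ML serving path: a hypothetical wrapper function is bombarded with extreme and malformed inputs, and any crash or ill-formed output is counted as a finding. All names and thresholds are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def predict_proba_safe(x):
    """Hypothetical serving wrapper under test: validates the input, then predicts."""
    x = np.asarray(x, dtype=float).reshape(1, -1)
    if x.shape[1] != 10 or not np.all(np.isfinite(x)):
        raise ValueError("malformed input")
    return model.predict_proba(x)[0]

rng = np.random.default_rng(0)
failures = 0
for _ in range(1000):
    # Fuzz with extreme magnitudes plus NaNs and infinities mixed into otherwise valid vectors.
    x = rng.normal(scale=rng.choice([1.0, 1e3, 1e9]), size=10)
    if rng.random() < 0.1:
        x[rng.integers(10)] = rng.choice([np.nan, np.inf, -np.inf])
    try:
        p = predict_proba_safe(x)
        # Even on valid-but-extreme inputs, the output must remain a proper distribution.
        assert np.all(p >= 0) and np.isclose(p.sum(), 1.0)
    except ValueError:
        pass          # rejected malformed input: acceptable behavior
    except Exception:
        failures += 1  # crash or invalid output: a finding worth triaging
print("unexpected failures:", failures)
```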
Integration into DevOps and MLOps Pipelines
For AI development and deployment, AI tools must be integrated into DevOps or Machine Learning Operations (MLOps) pipelines. It is this integration that makes the process easier and simpler, by enabling Continuous Integration and Continuous Delivery (CI/CD) along the way (Figure 3). Integrating continuous monitoring into the CI/CD framework promotes ongoing observation and security enhancement. Such monitoring ensures that the resilience and security of AI systems are treated as dynamic measures rather than a one-time solution, guiding the system throughout its
lifecycle [4], [7], [8]. One of the roles of continuous monitoring is to facilitate real-time assessment for consistent identification of potential weaknesses, with necessary and incremental interventions such as controls, mitigations, model retraining, and even system redesign based on the actual performance of the running system.
Figure 3. Relationship Between Continuous Integration, Delivery, And Deployment [14].
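As an illustration of what such a pipeline gate might look like, the sketch below (hypothetical thresholds and metric choices, not a standard prescribed by the literature) fails a CI/CD or monitoring step if a candidate model's accuracy falls below a floor or if a feature's live distribution drifts from its training reference, measured with a population stability index.

```python
import sys
import numpy as np

def population_stability_index(expected, actual, n_bins=10):
    """PSI between a reference feature sample and a live sample;
    values above roughly 0.2 are often treated as meaningful drift."""
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))[1:-1]  # interior cut points
    e_counts = np.bincount(np.searchsorted(edges, expected), minlength=n_bins)
    a_counts = np.bincount(np.searchsorted(edges, actual), minlength=n_bins)
    e_frac = np.clip(e_counts / len(expected), 1e-6, None)
    a_frac = np.clip(a_counts / len(actual), 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

def ci_gate(candidate_accuracy, reference_feature, live_feature,
            min_accuracy=0.90, max_psi=0.2):
    """Return exit code 0 (pass) or 1 (fail) for a pipeline step."""
    psi = population_stability_index(reference_feature, live_feature)
    ok = candidate_accuracy >= min_accuracy and psi <= max_psi
    print(f"accuracy={candidate_accuracy:.3f} psi={psi:.3f} -> {'PASS' if ok else 'FAIL'}")
    return 0 if ok else 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.normal(size=5000)              # training-time feature distribution
    live = rng.normal(loc=0.8, size=5000)    # drifted operational distribution
    sys.exit(ci_gate(candidate_accuracy=0.93, reference_feature=ref, live_feature=live))
```

Returning a non-zero exit code is what lets a check like this block promotion in most CI/CD systems.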
Foundations of Robust and Secure AI
Robustness and security, apart from their inherent value, support mission success and underpin related qualities such as safety, availability, credibility, deliverability, and conformity. Strong and stable systems also play a significant role in satisfying policy requirements such as privacy, equity, and morality. The highly dynamic DoD environment necessitates a substantial shift in developmental test and evaluation (DT&E) and operational test and evaluation (OT&E) processes so that AI can be incorporated into business-as-usual strategies.
Evolution of DT&E and OT&E
AI systems cannot be evaluated with traditional approaches alone, so the DoD's acquisition process must evolve its DT&E and OT&E practices. This evolution requires careful deliberation on how system testing requirements are generated and procured, and on the cost issues raised by continuous monitoring [6], [10], [12]. A recent workshop organized by the University of Maryland's ARLIS examined the inclusion of AI in OT&E and exposed the needs and challenges involved in carrying out this process. Notably, the workshop highlighted the gap between what can easily be measured and what materially influences operations.
Pacing Test and Evaluation Practices with Technological Advances
Given the pace at which modern technologies evolve, the DoD needs an agile and proactive test and evaluation community. This includes increasing the number of AI testers capable of handling the complexities introduced by AI systems and simulators [7], [13]. There is also a cultural burden: fostering a tolerance for risk-taking across the full set of stakeholders involved in creating and fielding AI systems. Experimentation and prototyping are fundamental ingredients across domains, while AI intelligibility poses a unique set of challenges, which argues for setting up system testing early in the program development stages.
The Crucial Role of Rigorous Testing
Despite the common belief that testing slows a project down, it is the process by which defects and redundant features are found, especially when it is left late in a project. Deep iterations of inquiry, learning, construction, and testing are critical for teams accountable for designing and developing AI systems. This approach allows inconsistencies to be identified in the information flows that structure the overall behavior of the system. Consistent assessment of a model's ability to hold up to unanticipated phenomena and tolerate attacks is crucially important, and its continued suitability for decision-making must be confirmed to ensure effectiveness.
Interdependence and Experimentation
In any complicated structure, comprehension of the interlocking and stochastic elements is vital. Teams must use experimentation to "fingerprint" such interdependencies on the one hand, and to devise contingency plans for unexpected behaviors caused by changes within systems on the other. This underlines the need for a non-linear analytical perspective that captures the complex interdependencies within the AI
system [4], [10], [11]. When preparing such systems for use in high-risk environments, such as managed highway systems or power grids, organizations must ensure the adaptability and safety of the AI. National security uses deserve even greater emphasis, because such applications necessarily carry a high level of risk.
Cultural Shift and the Path Forward
As the DoD pursues powerful and resilient AI systems, the effort should rest on a broad-based strategy. This comprises looking at the problem from multiple angles, that is, accounting for the known unknowns and the unknown unknowns, as well as cultivating a culture of both experimentation and testing. Using AI systems in higher-value environments requires serious thought about threats and weaknesses [4], [8]. Over time, AI systems embedded throughout critical infrastructure will form enticing targets, and the DoD should remain alert to new risks as they surface.
Conclusion
This review highlights the complex and dense terrain of guaranteeing resilient and safe AI systems. Dealing with model errors, under-specification, and security threats requires a careful approach to devising intelligent solutions. Introducing AI into the DoD's testing practices requires a cultural change that recognizes experimentation and proactive testing as necessary. Strict testing procedures, persistence in assessing a model's reliability amid evolving conditions, and an understanding of AI systems' interdependencies are critical to success. As AI is adopted in high-risk environments, the DoD must proceed with caution, addressing emerging risks to ensure resilience through a secure and culturally aware implementation strategy. To maintain a competitive advantage over emerging threats, there must be a constant R&D effort underpinning AI systems so that they meet rigorous standards of robustness and security.
References
[1] R. Dzombak and S. Beckman, "Unpacking capabilities underlying design (thinking) process," Int. J. Eng. Educ., vol. 36, no. 2, pp. 574–585, 2020.
[2] S. Patel and K. Mehta, "Systems, design, and entrepreneurial thinking: Comparative frameworks," Syst. Pract. Action Res., vol. 30, pp. 515–533, 2017.
[3] C. I. Nwakanma et al., "Explainable artificial intelligence (XAI) for intrusion detection and mitigation in intelligent connected vehicles: A review," Appl. Sci., vol. 13, no. 3, p. 1252, 2023.
[4] J. M. Spring, A. Galyardt, A. D. Householder, and N. VanHoudnos, "On managing vulnerabilities in AI/ML systems," in New Security Paradigms Workshop 2020, 2020, pp. 111–126.
[5] B. E. Strom, A. Applebaum, D. P. Miller, K. C. Nickels, A. G. Pennington, and C. B. Thomas, "MITRE ATT&CK: Design and philosophy," Technical report, The MITRE Corporation, 2018.
[6] J. Helland and N. VanHoudnos, "On the human-recognizability phenomenon of adversarially trained deep image classifiers," arXiv preprint arXiv:2101.05219, 2021.
[7] P. Bajcsy, N. J. Schaub, and M. Majurski, "Designing trojan detectors in neural networks using interactive simulations," Appl. Sci., vol. 11, no. 4, p. 1865, 2021.
[8] K. Tran, W. Neiswanger, J. Yoon, Q. Zhang, E. Xing, and Z. W. Ulissi, "Methods for comparing uncertainty quantifications for material property predictions," Mach. Learn. Sci. Technol., vol. 1, no. 2, p. 025006, 2020.
[9] Y. Ovadia et al., "Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift," Adv. Neural Inf. Process. Syst., vol. 32, 2019.
[10] A. B. Arrieta et al., "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI," Inf. Fusion, vol. 58, pp. 82–115, 2020.
[11] A. Tocchetti and M. Brambilla, "The role of human knowledge in explainable AI," Data, vol. 7, no. 7, p. 93, 2022.
[12] A. D'Amour et al., "Underspecification presents challenges for credibility in modern machine learning," J. Mach. Learn. Res., vol. 23, no. 1, pp. 10237–10297, 2022.
[13] B. Nour, M. Pourzandi, and M. Debbabi, "A survey on threat hunting in enterprise networks," IEEE Commun. Surv. Tutorials, 2023.
[14] M. Shahin, M. A. Babar, and L. Zhu, "Continuous integration, delivery and deployment: A systematic review on approaches, tools, challenges and practices," IEEE Access, vol. 5, pp. 3909–3943, 2017.