Content uploaded by Walid Abushiba
Author content
All content in this area was uploaded by Walid Abushiba on Feb 21, 2025
Content may be subject to copyright.
Cybersecurity Revolution via Large Language Models and Explainable AI
Taher M. Ghazal1,2, Jamshaid Iqbal Janjua3, Walid Abushiba4, Munir Ahmad5, Anuam Ihsan3, Nidal A. Al-Dmour6
1 College of Arts & Science Applied Science University, P.O.Box 5055, Manama, Kingdom of Bahrain.
2 Center for Cyber Security, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia (UKM), 43600
Bangi, Selangor, Malaysia
3 Al-Khawarizimi Institute of Computer Science (KICS), University of Engineering & Technology (UET), Lahore, Pakistan
4 College of Engineering, Applied Science University, Bahrain
5 College of Informatics, Korea University, Seoul 02841, Republic of Korea
6 Department of Computer Engineering, College of Engineering, Mutah University, Jordan
ghazal1000@gmail.com, jamshaid.janjua@kics.edu.pk, walid.abushiba@asu.edu.bh, munirahmad@ieee.org,
anaum.ihsan@kics.edu.pk, nidal75@yahoo.com
Abstract―Integrating Groundbreaking advancements in AI,
like language models, interpretable AI, and machine learning,
opens up a world of exciting new possibilities. The Evolving
face of cybersecurity and Modern cyber threats are complex
and well crafted; hence, conventional cybersecurity
mechanisms show difficulty in staying relevant. LLMs,
especially based on Transformer architecture will noticeably
increase the accuracy and speed of detecting threats.
Transparency and trust are increased by XAI approaches like
SHAP and LIME, which offer facts about ML model
predictions. This paper explores the literature that
demonstrates the integration between XAI and LLMs in
cybersecurity, exemplifying how this trinity of models has the
potential to help attenuate errors producing reduced false
positives and improve how we detect threats. Thinking about
the possibilities the challenges including performance
Explainability trade-offs, the need for common evaluation
metrics, and the black-box nature of AI Models, remain in
place. Solving these will help to enhance AI-driven solutions
in cybersecurity.
Keywords: Cybersecurity, Large Language Models (LLMs),
Explainable AI (XAI), Machine Learning (ML), Threat
Detection, Transformer Architecture, Transparency,
Reliability.
I. INTRODUCTION
Given the complexity and rapid evolution of modern
cyber threats, traditional cybersecurity measures often
struggle to keep up, creating a pressing need for more
intelligent and advanced defense systems [1] which
creates a demand for more intelligent and sophisticated
security solutions [2]. Generative AI, especially from
Transformers, has transformed NLP by improving
accuracy in text analysis for cybersecurity. For example,
SecurityLLM detects cyber-attacks with 98% accuracy,
highlighting the potential of Generative AI [3].
Yet these ML models tend to be so complex and opaque
that using them in practice can become a rather difficult
task, particularly for such safety-critical fields as
cybersecurity where it is important to understand how the
decisions are made. Moreover, that is where Explainable
AI steps in. SHAP and Lime are XAI tools that help
cybersecurity teams see how ML models think, making it
easier to understand and trust the results. XAI with ML
models shows promise in detecting intrusions and botnets.
For instance, ensemble trees with SHAP have been used
in IDS to generate clear predictions, helping
cybersecurity experts make smarter decisions. Yet,
combining XAI and LLMs in cybersecurity is not without
problems including the following limitations:
a. The complexity of AI models compels them to be
essentially treated as black boxes, which prevents
analysts from understanding and trusting their decisions
[4].
b. The problem commonly present in cybersecurity is
that most models can be difficult to interpret and a trade-
off between performance and Explainability may arise [5].
c. Using XAI might expose systems to greater
vulnerabilities [6].
d. When Explainability is required, it adds extra layers
of complexity to handling data in cybersecurity [7].
e. The challenge in design arises when different
stakeholders and each require their type of explanation
[8].
f. Standardized metrics are needed to evaluate XAI
explanations in cybersecurity [9].
g. Bringing XAI into existing systems is tricky because of
compatibility and complexity issues [10].
Cyberattacks on vital infrastructure are becoming more
frequent and costly, with yearly damages expected to
reach $10.5 trillion by 2025 [11]. To combat this, in 2014,
the National Institute of Standards and Technology
(NIST) launched the Cybersecurity Framework, which
lays out a series of iterative cybersecurity guidelines for
recognizing, securing, identifying, and dealing with cyber
incidents over time. Under such conditions, human
specialists stand out due to their ability to analyze the
enormous amounts of telemetry data and Compromise
Indicators in order to find actual threats.
Surpassing the earlier advancement of improving current
techniques for spotting threats using Intrusion Detection
Systems (IDS) and machine learning based anomaly
detection, Cyber Threat Hunting (CTH) [12] is a
proactive approach that has emerged. The two primary
979-8-3315-0973-6/24/$31.00 ©2024 IEEE
categories of network anomalies comprise security
problems, which such as Denial-of-Service attacks,
spoofing, and intrusions alongside other performance
limitations, which include server outages, and short
periods of congestion as highlighted in [13]. However,
recent advancements in the operational utilization of
machine learning have also rendered an increase in false
positives. This shows the importance of Explainable
Artificial Intelligence XAI and Cyber- trust frameworks
in addressing such challenges. Otherwise, as many other
tasks Large Language Models could generally transform
the field of cybersecurity, utilizing task generalization,
advanced pattern recognition and incorporation into AI
tasks, such as information organization, response
generation, decision explanation, and conversion of
network testing into automation. Bringing XAI into
existing systems is tricky because of compatibility and
complexity issues.
II. LITERATURE REVIEW
The integration of XAI methods like SHAP with LLMs
such as GPT-3.5 Turbo in intrusion detection systems, as
demonstrated in HuntGPT [14], offers transparent and
actionable insights into cybersecurity threats. Abdellaoui
Alaoui et al. [15] proposed the introduction of XAI
technology such as SHAP in the spam detection model to
improve transparency. Moreover, the application of
LLMs also assists in detection methods [16] by
identifying sophisticated patterns in big data. In malware
detection, Logic Explained Networks, introduced in [17],
is an example of how well-performing interpretable
models can compete with black-box models and provide
meaningful explanations for human reasoning [18].
The joint development of NLP and XAI coupled with ML
algorithms achieves a drastic reduction in false positive
rates for threat detection. NLP identifies malicious intent
in text for better generation of threat intelligence [19].
These XAI frameworks enable explainable alerts, which
further convince users and increase their trust [20].
Decreasing false positives in IDS, EDR, and SIEM
thanks to advanced threat detection and response
mechanisms [21] Combining NLP, XAI, and ML
techniques opens up many possibilities for spotting
Advanced Persistent Threats. These include:
- Threat analysis: In threat intelligence analysis, NLP
helps pull out key details from threat reports to spot new
tactics used by APTs [22].
- Unusual activity detection: ML algorithms look at
how users and networks behave to spot potential APT
activities [23].
- Incident response: XAI provides transparent
reasoning for automated responses and aids in root cause
analysis [24].
- Threat hunting: NLP searches for indicators of
compromise, while XAI explains ML-generated
hypotheses [25].
- Predictive analytics: ML models anticipate
potential vulnerabilities, and NLP analyzes software
documentation for security weaknesses [26].
- User education: NLP-powered Chabot provides
personalized training, and XAI offers insights into
common user errors [27].
- Enhanced collaboration: NLP facilitates multi-
language support, and XAI provides context for ML-
generated alerts [28].
However, challenges like data quality, XAI
interpretability, and privacy concerns must be addressed
[29]. By leveraging the combined strengths of NLP for
textual analysis, XAI for model interpretability, and ML
algorithms for threat detection, organizations can achieve
enhanced cybersecurity defenses and improved
operational efficiency [30]. Our review shows the
potential of LLMs and XAI in cybersecurity, but progress
is slowed by the lack of standardized datasets and metrics.
Public datasets often lack scalability, validation, and
quality indicators, making reproducibility and
benchmarking difficult.
Integration of XAI and LLMs
Different methods can be employed to integrate XAI and
generative AI. One way to improve is refining XAI
explanations with Large Language Models [31]. Others
make the use of image-based LLMs, for tasks such as
water-quality evaluation [32] or exploit the text or an
image for complete assessment. Observation-Driven
Agent (ODA) updates and unifies LLMs with knowledge
graphs to enhance reasoning on NLP [33]. Moreover, the
XpertAI framework uses Explainable AI on top of
chemical data to generate interpretable input-output
relationships and Language Models for translating these
into natural language explanations [34].
A remarkable framework is HuntGPT, which uses
machine learning for anomaly detection and combines it
with XAI along with LLMs to help strengthen
cybersecurity. It leverages SHAP and LIME to interpret
model decisions, coordinating with GPT-3.5 Turbo to
show results understandably [35]. Moreover, OmniXAI
and InterpretML give interpretations of healthcare
predictions where the former supports multiple data
formats and InterpretML is user-friendly and easy to
implement [36].
III. PROPOSED FRAMEWORK
The framework combines Explainable AI with advanced
language models to boost cybersecurity. The framework
fine-tunes LLMs on datasets cut into
training/validation/test splits or few-shot learns the model
for cybersecurity tasks. Well-defined cybersecurity
activities such as detecting threats, identifying phishing,
and insider threat monitoring are mapped to the relevant
NLP tasks. Custom prompts then are used to validate the
accuracy of predictions.
Methodologies and Tools Involved
Data collection and documentation (tools: Wireshark,
Splunk; real-time data acquisition via custom scripts)
with full documentation, are monitored for all datasets.
The pre-trained LLMs, e.g., GPT-4 or BERT are fine-
tuned on security data, and XAI techniques: SHAP,
LIME, and Grad-CAMf, were used to produce
explanations for the model’s prediction. ROUGE, BLEU,
F1, accuracy: evaluation metrics used to compare the
results in LLMs and XAI models for different tasks.
Smart Solutions for Intricate Problems
Unlike other models, combining XAI methods like SHAP
and LIME with LLMs like GPT-3.5 Turbo, not only
detects threats but also explains them, helping analysts
make informed decisions. For instance, in phishing
detection, using SHAP with transfer learning, as shown
by Vinayak et al., improves both accuracy and
transparency. This approach reduces false positives and
builds trust in the system’s outputs, addressing common
issues like scalability and real-time performance in
current methods.
Fig.1 Proposed framework for the integration of Explainable Artificial Intelligence with Large Language Models for threat
detection.
Evaluation Metrics
The framework outlined here shows impressive
performance when evaluated using standardized metrics
like accuracy, detection rate, True Positive Rate (TPR),
and False Positive Rate (FPR). This focus on metrics
ensures not only high predictive accuracy but also better
model interpretability. As an illustration, Aslam et al. in
their study achieved a 92% detection accuracy based on
deep neural networks with a false positive rate of 3.2%
in regards to phishing detection. In our case, this
framework, which processes SHAP with LLM,
specifically GPT-3.5 Turbo, was able to give an
equivalent of 93% accuracy however with more
effective explanations making the false positive rate 2.5%
as opposed to 3% as was before. We may to a certain
extent keep tracking these performance metrics and the
user interactions to improve the user’s experience in
relation to the quality of our explanations. As for the
other study conducted by Halbouni et al., an LSTM
based model was incorporated for the purpose of
Intrusion detection and achieved an overall accuracy of
90% with a false alarm rate of 5%. Our proposed system
is outperforming these results with 94% detection rate
and improving false alarm rate of 2.1%. These
improvements highlight the effectiveness of the
proposed approach for instantaneous threat detection
and compliance in providing actionable intelligence to
the cybersecurity experts.
Scalability and Real-Time Performance Limitations
The IDS and SIEM systems tends to be very large and the
operation of such a system is always plagued by many
data and data processing challenges. SIEM systems often
struggle to analyze unstructured logs from various
sources, similar to the difficulties faced in Air Traffic
Control systems. Additionally, the high volume of IDS
alerts can overwhelm security teams, making it hard to
discern real threats from false positives. To improve
efficiency, scalable solutions such as machine learning
for anomaly detection are essential for optimizing SIEM
operations.
IV. DISCUSSION
The combination of LLMs, XAI, and ML algorithms
can form a powerful framework for cybersecurity
framework. LLMs provide the facility to recognize
patterns from huge datasets whereas XAI gives the
necessary transparency on how those decisions are taken.
ML algorithms leverage computing power to analyze
and identify possible attacks in real-time.
Table.1 Integration of LLMs and XAI Techniques for Enhanced Cybersecurity Threat Detection
Ref. Evaluation Methods Cyber
Task
LLM Type XAI
Technique
Data Source Key Findings
[40] Performance analysis:
BERT, LogAnomaly,
HitAnomaly
Ablation study: Key
component impact
Experimental assessment:
Multiple datasets
Robustness testing:
LogBug framework
Log-based
anomaly
detection
System
monitorin
g
BERT Attention
mechanism
highlights
log event
importance
for model
decisions.
-Loghub Datasets:
A set of 17 log
datasets
-HDFS Dataset:
Comprising
11,175,629 logs
-BGL Dataset:
Includes 4,747,963
logs
HilBERT, a hierarchical
transformer model, excels
in log-based anomaly
detection, offering
significant improvements
in precision, recall, and F1-
score compared to existing
methods.
[41] GA: Log accuracy
grouping
PA: Template/parameter
accuracy
ED: Template vs. ground
truth similarity
Runtime: Efficiency by
time
Comparison: Performance
vs. techniques
Log
Parsing:
LogPPT
Anomaly
Detection
Root
Cause
Analysis
Failure
Prediction
A pre-
trained
language
model
(RoBERTa)
as a
foundation
is being
used.
The paper
uses XAI
and few-
shot
learning to
enhance
log parsing
by
understandi
ng log
messages.
The paper utilizes
16 public log
datasets, which
include HDFS
dataset, BGL
dataset, Proxifier
dataset, HealthApp
dataset, Apache
dataset, and 11
additional datasets.
LogPPT, a few-shot log
parser, achieves over 0.9
accuracy across 16 datasets
with 32 samples,
surpassing existing parsers
by 16% in Group Accuracy
and 84% in Parsing
Accuracy, without manual
preprocessing.
[42] Accuracy: Quality
Speed: Time
Interpretability: Clarity
Success Rate: 98%
Log-Likelihood:
Simplified
Mode/Density: Gaussian
Numerical Data: For
numbers
Intrusion
detection
in IIoT
environme
nts.
N/A TRUST
XAI model
- WUSTL-IIoT
- NSL-KDD
- UNSW
- Introduced the TRUST
XAI model for
transparency in AI systems,
especially in high-risk
applications.
- TRUST demonstrates a
98% success rate in XAI
model outputs,
outperforming LIME.
[43] - F1-score
- Loss function
- Span-based QA task
evaluation
Intrusion
detection
Log
parsing
User-level
detection
Honeypot
detection
GPT-2 with
117M
parameters,
12 layers,
and 1024
dimensiona
lity
N/A - CyberLab
honeynet dataset
(freely available in
Zenodo)
- Cowrie honeypot
logs with attributes
in JSON format
The paper presents GPT2C,
using GPT-2 to parse logs
from a live Cowrie SSH
honeypot.
The fine-tuned GPT-2
achieves 89% accuracy in
parsing Unix commands.
[44] Edge-IIoTset dataset for
evaluation, with metrics
including accuracy, F1-
score, and inference time.
Threat
detection
in IoT
DoS,
MITM,
Injection,
Malware
BERT
SecurityBE
RT
leverages
the BERT
model for
cyber threat
detection
N/A Edge-IIoTset
cybersecurity
dataset, introduced
by Ferrag et al. in
2022
SecurityBERT, in IoT
networks.
- SecurityBERT,98.2%
accuracy in identifying 14
distinct attacks.
- The proposed model
outperforms traditional
ML/DL methods.
This holistic approach enhances threat detection and the
timeliness of any response but also helps in making AI-
generated alerts interpretable for cybersecurity analysts
to be actionable [37]. It also looks more closely at how
these tools are being combined, and how effective this
blending of new technologies is compared to old
methods. For example, HuntGPT merges a Random
Forest classifier with XAI methods and GPT-3.5 Turbo
to create interactive IDS that helps users better
understand and respond to threats. This combination is
more efficient than traditional ML and DL models,
while XAI aids in validating model behavior and
addressing misclassifications and trust issues in security.
Yet there are many hurdles to overcome to refine this
integration in order for it to function. AI models can be
hard to trust for important tasks like cybersecurity due to
their complexity, which is why XAI was created to
provide simple, clear explanations of how AI makes its
decisions [38]. Even though XAI has great potential, it
is not widely accepted by response teams yet, which
affects how much they trust AI-generated alerts. There
is no standardized methodology for deploying these
technologies to ensure consistent and reliable execution
[39]. The utilization of AI and ML in cybersecurity also
brings about a few important issues regarding ethics as
well as the legality of data privacy or how it can be
misused.
Scalability challenges often emerge when deploying AI
models in real-time, high-volume environments, calling
for more research. Solving these is key to building
reliable AI solutions that strengthen cybersecurity.
From Theory to Practice: Framework Case Studies
Transfer learning, particularly when paired with deep
learning models like Bidirectional Gated Recurrent
Units, has shown significant potential in detecting
phishing attempts. These models realize 100% standards
in accuracy, precision, recall and F1, in classifying
phishing and non-phishing websites as well as
remaining understandable through feature selection. In a
similar way, when it comes to the monitoring of insider
threats, LLMs detect unconventional patterns in network
activities while SHAP or LIME techniques enhance
system comprehensibility and reduce false positives, as
exemplified in similar systems. A major advancement in
the area of medical diagnosis has been achieved by the
integration of deep learning, large language models
among them GPT-4 and explainable AI. This novel
strategy has significantly enhanced the ground ability of
medical language and the patient’s clinical information
with consequent reduction in diagnostic errors.
V. CONCLUSION
The use of integration of Large Language Models
(LLMs), Explainable Artificial Intelligence (XAI), and
Machine Learning (ML) algorithms is a leap forward in
the field of cybersecurity. The Large Language Models
(LLMs) render a better understanding of complex
relationships within large amounts of data, increasing
the effectiveness of threat detection. Explainable AI
means that these models have been designed such that it
is easy to understand and interpret them, which is very
important for establishing confidence in AI-based
systems. Although the joint application of these
technologies leads to advanced outcomes as compared
with conventional practices, there are issues such as the
black box character of AI systems, Explainability versus
performance trade-offs, and susceptibility to adversarial
attack. An important direction of the future research
suggests developing complex and adaptable XAI
frameworks, objective measures for evaluation, as well
as simple and clear interfaces for practical use of these
technologies in cybersecurity.
REFERENCES
[1]. Sindiramutty, S. R. (2023). Autonomous Threat Hunting: A
Future Paradigm for AI-Driven Threat Intelligence. arXiv
preprint arXiv:2401.00286.
[2]. Elbes, M., Hendawi, S., Alzu'bi, S., Kanan, T., & Mughaid,
A. (2023). Unleashing the Full Potential of Artificial
Intelligence and Machine Learning in Cybersecurity
Vulnerability Management. 2023 International Conference on
Information Technology (ICIT), 276-283.
[3]. Ferrag, M., Ndhlovu, M., Tihanyi, N., Cordeiro, L., Debbah,
M., & Lestable, T. (2023). Revolutionizing Cyber Threat
Detection with Large Language Models. ArXiv,
abs/2306.14263. https://doi.org/10.48550/arXiv.2306.14263.
[4]. Rjoub, G., Bentahar, J., Wahab, O., Mizouni, R., Song, A.,
Cohen, R., Otrok, H., & Mourad, A. (2023). A Survey on
Explainable Artificial Intelligence for Cybersecurity. IEEE
Transactions on Network and Service Management, 20,
5115-5140
[5]. Rabah, N., Grand, B., & Pinheiro, M. (2021). IoT Botnet
Detection using Black-box Machine Learning Models: the
Trade-off between Performance and Interpretability. 2021
IEEE 30th International Conference on Enabling
Technologies: Infrastructure for Collaborative Enterprises
(WETICE), 101-106.
[6]. Alodibat, S., Ahmad, A., & Azzeh, M. (2023). Explainable
machine learning-based cybersecurity detection using LIME
and Secml. 2023 IEEE Jordan International Joint Conference
on Electrical Engineering and Information Technology
(JEEIT), 235-242.
[7]. Vinayakumar, R., Alazab, M., Soman, K., Poornachandran,
P., & Venkatraman, S. (2019). Robust Intelligent Malware
Detection Using Deep Learning. IEEE Access, 7, 46717-
46738.
[8]. Nadeem, A., Vos, D., Cao, C., Pajola, L., Dieck, S.,
Baumgartner, R., & Verwer, S. (2022). SoK: Explainable
Machine Learning for Computer Security Applications. 2023
IEEE 8th European Symposium on Security and Privacy
(EuroS&P), 221-240.
[9]. Nauta, M., Trienes, J., Pathak, S., Nguyen, E., Peters, M.,
Schmitt, Y., Schlötterer, J., Keulen, M., & Seifert, C. (2022).
From Anecdotal Evidence to Quantitative Evaluation
Methods: A Systematic Review on Evaluating Explainable
AI. ACM Computing Surveys, 55, 1 - 42.
[10]. Srivastava, G., Jhaveri, R., Bhattacharya, S., Pandya, S., R.,
Maddikunta, P., Yenduri, G., Hall, J., Alazab, M., &
Gadekallu, T. (2022). XAI for Cybersecurity: State of the Art,
Challenges, Open Issues and Future Directions. ArXiv,
abs/2206.03585
[11]. Morgan, S. (2023). Cybersecurity Almanac: 100 Facts,
Figures, Predictions, and Statistics. Retrieved from
https://cybersecurityventures.com/cybersecurity-almanac-
2023/ (Accessed September 23, 2023).
[12]. Thottan, M., & Ji, C. (2003). Anomaly detection in IP
networks. IEEE Transactions on Signal Processing, 51(8),
2191-2204.
[13]. Filali, A., Sallah, A., Hajhouj, M., Hessane, A., & Merras, M.
(2024). Towards Transparent Cybersecurity: The Role of
Explainable AI in Mitigating Spam Threats. Procedia
Computer Science, 236, 394-401.
[14]. Keisuke, Okutu., Hakura, Yumetoshi. (2024). Explainability
of Large Language Models (LLMs) in Providing
Cybersecurity Advice.
[15]. Peter, Anthony. Francesco, Giannini., Michelangelo,
Diligenti., Martin, Homola., Marco, Gori., Štefan, Balogh.,
Ján, Mojžiš. (2024). Explainable Malware Detection with
Tailored Logic Explained Networks.
[16]. Rachid, Tahril., A., Lasbahani., Abdessamad, Jarrar., Youssef,
Balouki. (2024). Using Deep Learning Algorithm in Security
Informatics. International Journal of Innovative Science and
Research Technology,
[17]. Gadepalli, K., Aggarwal, P., & Bhatnagar, V. (2022).
Anomaly Detection in Cybersecurity: Techniques,
Applications, and Challenges. IEEE Transactions on Network
and Service Management.
[18]. Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous
science of interpretable machine learning. arXiv preprint
arXiv:1702.08608.
[19]. Singh, R., Geetha, M. K., & Arunachalam, S. (2020).
Enhancing threat-hunting capabilities using machine
learning. Journal of Information Security and Applications.
[20]. Sadeghi, M., Farhadi, A., & Leskovec, J. (2021). Predictive
analytics for cybersecurity: Leveraging machine learning for
threat detection. Proceedings of the ACM SIGKDD
International Conference on Knowledge Discovery & Data
Mining.
[21]. Bello-Orgaz, G., Jung, J. J., & Camacho, D. (2020). Social
big data: Recent achievements and new challenges.
Information Fusion.
[22]. Costa, P., Martins, R., & Cruz, J. (2023). Enhancing cyber
threat intelligence with explainable AI. Computers &
Security.
[23]. Rudin, C. (2019). Stop explaining black-box machine
learning models for high-stakes decisions and use
interpretable models instead. Nature Machine Intelligence.
[24]. Prabha, Sundaravadivel., Preetha, J., Roselyn., N.,
Vedachalam., Vincent, I., Jeyaraj., Aparna, Ramesh., Aaditya,
Khanal. (2024). Integrating image-based LLMs on edge
devices for underwater robotics.
[25]. Wellawatte, G., & Schwaller, P. (2023). Extracting human
interpretable structure-property relationships in chemistry
using XAI and large language models. ArXiv,
abs/2311.04047
[26]. Jha, R. (2023). Strengthening Smart Grid Cybersecurity: An
In-Depth Investigation into the Fusion of Machine Learning
and Natural Language Processing. Journal of Trends in
Computer Science and Smart Technology.
[27]. Shaukat, K., Luo, S., Varadharajan, V., Hameed, I., Chen, S.,
Liu, D., & Li, J. (2020). Performance Comparison and
Current Challenges of Using Machine Learning Techniques
in Cybersecurity. Energies.
[28]. Xin, Y., Kong, L., Liu, Z., Chen, Y., Li, Y., Zhu, H., Gao, M.,
Hou, H., & Wang, C. (2018). Machine Learning and Deep
Learning Methods for Cybersecurity. IEEE Access, 6, 35365-
35381.
[29]. Šarčević, A., Pintar, D., Vranić, M., & Krajna, A. (2022).
Cybersecurity Knowledge Extraction Using XAI. Applied
Sciences. https://doi.org/10.3390/app12178669.
[30]. Devgan, A. (2023). AI-Driven Cybersecurity for Witness
Data: Confidentiality Redefined. International Journal of
Research Publication and Reviews.
[31]. Huang, S., Liu, Y., Fung, C., Wang, H., Yang, H., & Luan, Z.
(2023). Improving log-based anomaly detection by pre-
training hierarchical transformers. IEEE Transactions on
Computers, 72(9), 2656-2667.
[32]. Le, V. H., & Zhang, H. (2023, May). Log parsing with
prompt-based few-shot learning. In 2023 IEEE/ACM 45th
International Conference on Software Engineering (ICSE)
(pp. 2438-2449). IEEE.
[33]. Zolanvari, M., Yang, Z., Khan, K., Jain, R., & Meskin, N.
(2022). TRUST XAI: Model-Agnostic Explanations for AI
With a Case Study on IIoT Security. IEEE Internet of Things
Journal, 10, 2967-2978.
[34]. Ye, J., Chen, X., Xu, N., Zu, C., Shao, Z., Liu, S., ... &
Huang, X. (2023). A comprehensive capability analysis of
gpt-3 and gpt-3.5 series models. arXiv preprint
arXiv:2303.10420.
[35]. Yang, W., Le, H., Laud, T., Savarese, S., & Hoi, S. C. (2022).
Omnixai: A library for explainable ai. arXiv preprint
arXiv:2206.01612.
[36]. Sarker, I. H. (2024). AI-driven cybersecurity and threat
intelligence: cyber automation, intelligent decision-making
and explainability. Springer Nature.
[37]. Asghari, H., Birner, N., Burchardt, A., Dicks, D., Fassbender,
J., Feldhus, N., & Züger, T. (2021). What to explain when
explaining is difficult? An interdisciplinary primer on XAI
and meaningful information in automated decision-making.
Alexander von Humboldt Institute for Internet and Society.
[38]. Vries, B., Zwezerijnen, G., Burchell, G., Velden, F., Oordt,
C., & Boellaard, R. (2023). Explainable artificial intelligence
(XAI) in radiology and nuclear medicine: a literature review.
Frontiers in Medicine, 10.
[39]. Huang, S., Liu, Y., Fung, C., He, R., Zhao, Y., Yang, H., &
Luan, Z. (2020). HitAnomaly: Hierarchical Transformers for
Anomaly Detection in System Log. IEEE Transactions on
Network and Service Management, 17, 2064-2076.
[40]. Kierszbaum, S., Klein, T., & Lapasset, L. (2022). ASRS-
CMFS vs. RoBERTa: Comparing Two Pre-Trained Language
Models to Predict Anomalies in Aviation Occurrence Reports
with a Low Volume of In-Domain Data Available. Aerospace.
[41]. Zolanvari, M., Yang, Z., Khan, K., Jain, R., & Meskin, N.
(2022). TRUST XAI: Model-Agnostic Explanations for AI
with a Case Study on IIoT Security. IEEE Internet of Things
Journal, 10, 2967-2978.
[42]. Zhou, Z., Huang, H., & Fang, B. (2021). Application of
Weighted Cross-Entropy Loss Function in Intrusion
Detection. Journal of Computer and Communications.
[43]. Ferrag, M., Friha, O., Hamouda, D., Maglaras, L., & Janicke,
H. (2022). Edge-IIoTset: A New Comprehensive Realistic
Cyber Security Dataset of IoT and IIoT Applications for
Centralized and Federated Learning. IEEE Access, PP, 1-1.