Conference PaperPDF Available

Optimizing Patient Outcomes with AI and Predictive Analytics in Healthcare

Authors:

Figures

Content may be subject to copyright.
Cybersecurity Revolution via Large Language Models and Explainable AI
Taher M. Ghazal1,2, Jamshaid Iqbal Janjua3, Walid Abushiba4, Munir Ahmad5, Anuam Ihsan3, Nidal A. Al-Dmour6
1 College of Arts & Science Applied Science University, P.O.Box 5055, Manama, Kingdom of Bahrain.
2 Center for Cyber Security, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia (UKM), 43600
Bangi, Selangor, Malaysia
3 Al-Khawarizimi Institute of Computer Science (KICS), University of Engineering & Technology (UET), Lahore, Pakistan
4 College of Engineering, Applied Science University, Bahrain
5 College of Informatics, Korea University, Seoul 02841, Republic of Korea
6 Department of Computer Engineering, College of Engineering, Mutah University, Jordan
ghazal1000@gmail.com, jamshaid.janjua@kics.edu.pk, walid.abushiba@asu.edu.bh, munirahmad@ieee.org,
anaum.ihsan@kics.edu.pk, nidal75@yahoo.com
Abstract―Integrating Groundbreaking advancements in AI,
like language models, interpretable AI, and machine learning,
opens up a world of exciting new possibilities. The Evolving
face of cybersecurity and Modern cyber threats are complex
and well crafted; hence, conventional cybersecurity
mechanisms show difficulty in staying relevant. LLMs,
especially based on Transformer architecture will noticeably
increase the accuracy and speed of detecting threats.
Transparency and trust are increased by XAI approaches like
SHAP and LIME, which offer facts about ML model
predictions. This paper explores the literature that
demonstrates the integration between XAI and LLMs in
cybersecurity, exemplifying how this trinity of models has the
potential to help attenuate errors producing reduced false
positives and improve how we detect threats. Thinking about
the possibilities the challenges including performance
Explainability trade-offs, the need for common evaluation
metrics, and the black-box nature of AI Models, remain in
place. Solving these will help to enhance AI-driven solutions
in cybersecurity.
Keywords: Cybersecurity, Large Language Models (LLMs),
Explainable AI (XAI), Machine Learning (ML), Threat
Detection, Transformer Architecture, Transparency,
Reliability.
I. INTRODUCTION
Given the complexity and rapid evolution of modern
cyber threats, traditional cybersecurity measures often
struggle to keep up, creating a pressing need for more
intelligent and advanced defense systems [1] which
creates a demand for more intelligent and sophisticated
security solutions [2]. Generative AI, especially from
Transformers, has transformed NLP by improving
accuracy in text analysis for cybersecurity. For example,
SecurityLLM detects cyber-attacks with 98% accuracy,
highlighting the potential of Generative AI [3].
Yet these ML models tend to be so complex and opaque
that using them in practice can become a rather difficult
task, particularly for such safety-critical fields as
cybersecurity where it is important to understand how the
decisions are made. Moreover, that is where Explainable
AI steps in. SHAP and Lime are XAI tools that help
cybersecurity teams see how ML models think, making it
easier to understand and trust the results. XAI with ML
models shows promise in detecting intrusions and botnets.
For instance, ensemble trees with SHAP have been used
in IDS to generate clear predictions, helping
cybersecurity experts make smarter decisions. Yet,
combining XAI and LLMs in cybersecurity is not without
problems including the following limitations:
a. The complexity of AI models compels them to be
essentially treated as black boxes, which prevents
analysts from understanding and trusting their decisions
[4].
b. The problem commonly present in cybersecurity is
that most models can be difficult to interpret and a trade-
off between performance and Explainability may arise [5].
c. Using XAI might expose systems to greater
vulnerabilities [6].
d. When Explainability is required, it adds extra layers
of complexity to handling data in cybersecurity [7].
e. The challenge in design arises when different
stakeholders and each require their type of explanation
[8].
f. Standardized metrics are needed to evaluate XAI
explanations in cybersecurity [9].
g. Bringing XAI into existing systems is tricky because of
compatibility and complexity issues [10].
Cyberattacks on vital infrastructure are becoming more
frequent and costly, with yearly damages expected to
reach $10.5 trillion by 2025 [11]. To combat this, in 2014,
the National Institute of Standards and Technology
(NIST) launched the Cybersecurity Framework, which
lays out a series of iterative cybersecurity guidelines for
recognizing, securing, identifying, and dealing with cyber
incidents over time. Under such conditions, human
specialists stand out due to their ability to analyze the
enormous amounts of telemetry data and Compromise
Indicators in order to find actual threats.
Surpassing the earlier advancement of improving current
techniques for spotting threats using Intrusion Detection
Systems (IDS) and machine learning based anomaly
detection, Cyber Threat Hunting (CTH) [12] is a
proactive approach that has emerged. The two primary
979-8-3315-0973-6/24/$31.00 ©2024 IEEE
categories of network anomalies comprise security
problems, which such as Denial-of-Service attacks,
spoofing, and intrusions alongside other performance
limitations, which include server outages, and short
periods of congestion as highlighted in [13]. However,
recent advancements in the operational utilization of
machine learning have also rendered an increase in false
positives. This shows the importance of Explainable
Artificial Intelligence XAI and Cyber- trust frameworks
in addressing such challenges. Otherwise, as many other
tasks Large Language Models could generally transform
the field of cybersecurity, utilizing task generalization,
advanced pattern recognition and incorporation into AI
tasks, such as information organization, response
generation, decision explanation, and conversion of
network testing into automation. Bringing XAI into
existing systems is tricky because of compatibility and
complexity issues.
II. LITERATURE REVIEW
The integration of XAI methods like SHAP with LLMs
such as GPT-3.5 Turbo in intrusion detection systems, as
demonstrated in HuntGPT [14], offers transparent and
actionable insights into cybersecurity threats. Abdellaoui
Alaoui et al. [15] proposed the introduction of XAI
technology such as SHAP in the spam detection model to
improve transparency. Moreover, the application of
LLMs also assists in detection methods [16] by
identifying sophisticated patterns in big data. In malware
detection, Logic Explained Networks, introduced in [17],
is an example of how well-performing interpretable
models can compete with black-box models and provide
meaningful explanations for human reasoning [18].
The joint development of NLP and XAI coupled with ML
algorithms achieves a drastic reduction in false positive
rates for threat detection. NLP identifies malicious intent
in text for better generation of threat intelligence [19].
These XAI frameworks enable explainable alerts, which
further convince users and increase their trust [20].
Decreasing false positives in IDS, EDR, and SIEM
thanks to advanced threat detection and response
mechanisms [21] Combining NLP, XAI, and ML
techniques opens up many possibilities for spotting
Advanced Persistent Threats. These include:
- Threat analysis: In threat intelligence analysis, NLP
helps pull out key details from threat reports to spot new
tactics used by APTs [22].
- Unusual activity detection: ML algorithms look at
how users and networks behave to spot potential APT
activities [23].
- Incident response: XAI provides transparent
reasoning for automated responses and aids in root cause
analysis [24].
- Threat hunting: NLP searches for indicators of
compromise, while XAI explains ML-generated
hypotheses [25].
- Predictive analytics: ML models anticipate
potential vulnerabilities, and NLP analyzes software
documentation for security weaknesses [26].
- User education: NLP-powered Chabot provides
personalized training, and XAI offers insights into
common user errors [27].
- Enhanced collaboration: NLP facilitates multi-
language support, and XAI provides context for ML-
generated alerts [28].
However, challenges like data quality, XAI
interpretability, and privacy concerns must be addressed
[29]. By leveraging the combined strengths of NLP for
textual analysis, XAI for model interpretability, and ML
algorithms for threat detection, organizations can achieve
enhanced cybersecurity defenses and improved
operational efficiency [30]. Our review shows the
potential of LLMs and XAI in cybersecurity, but progress
is slowed by the lack of standardized datasets and metrics.
Public datasets often lack scalability, validation, and
quality indicators, making reproducibility and
benchmarking difficult.
Integration of XAI and LLMs
Different methods can be employed to integrate XAI and
generative AI. One way to improve is refining XAI
explanations with Large Language Models [31]. Others
make the use of image-based LLMs, for tasks such as
water-quality evaluation [32] or exploit the text or an
image for complete assessment. Observation-Driven
Agent (ODA) updates and unifies LLMs with knowledge
graphs to enhance reasoning on NLP [33]. Moreover, the
XpertAI framework uses Explainable AI on top of
chemical data to generate interpretable input-output
relationships and Language Models for translating these
into natural language explanations [34].
A remarkable framework is HuntGPT, which uses
machine learning for anomaly detection and combines it
with XAI along with LLMs to help strengthen
cybersecurity. It leverages SHAP and LIME to interpret
model decisions, coordinating with GPT-3.5 Turbo to
show results understandably [35]. Moreover, OmniXAI
and InterpretML give interpretations of healthcare
predictions where the former supports multiple data
formats and InterpretML is user-friendly and easy to
implement [36].
III. PROPOSED FRAMEWORK
The framework combines Explainable AI with advanced
language models to boost cybersecurity. The framework
fine-tunes LLMs on datasets cut into
training/validation/test splits or few-shot learns the model
for cybersecurity tasks. Well-defined cybersecurity
activities such as detecting threats, identifying phishing,
and insider threat monitoring are mapped to the relevant
NLP tasks. Custom prompts then are used to validate the
accuracy of predictions.
Methodologies and Tools Involved
Data collection and documentation (tools: Wireshark,
Splunk; real-time data acquisition via custom scripts)
with full documentation, are monitored for all datasets.
The pre-trained LLMs, e.g., GPT-4 or BERT are fine-
tuned on security data, and XAI techniques: SHAP,
LIME, and Grad-CAMf, were used to produce
explanations for the model’s prediction. ROUGE, BLEU,
F1, accuracy: evaluation metrics used to compare the
results in LLMs and XAI models for different tasks.
Smart Solutions for Intricate Problems
Unlike other models, combining XAI methods like SHAP
and LIME with LLMs like GPT-3.5 Turbo, not only
detects threats but also explains them, helping analysts
make informed decisions. For instance, in phishing
detection, using SHAP with transfer learning, as shown
by Vinayak et al., improves both accuracy and
transparency. This approach reduces false positives and
builds trust in the system’s outputs, addressing common
issues like scalability and real-time performance in
current methods.
Fig.1 Proposed framework for the integration of Explainable Artificial Intelligence with Large Language Models for threat
detection.
Evaluation Metrics
The framework outlined here shows impressive
performance when evaluated using standardized metrics
like accuracy, detection rate, True Positive Rate (TPR),
and False Positive Rate (FPR). This focus on metrics
ensures not only high predictive accuracy but also better
model interpretability. As an illustration, Aslam et al. in
their study achieved a 92% detection accuracy based on
deep neural networks with a false positive rate of 3.2%
in regards to phishing detection. In our case, this
framework, which processes SHAP with LLM,
specifically GPT-3.5 Turbo, was able to give an
equivalent of 93% accuracy however with more
effective explanations making the false positive rate 2.5%
as opposed to 3% as was before. We may to a certain
extent keep tracking these performance metrics and the
user interactions to improve the user’s experience in
relation to the quality of our explanations. As for the
other study conducted by Halbouni et al., an LSTM
based model was incorporated for the purpose of
Intrusion detection and achieved an overall accuracy of
90% with a false alarm rate of 5%. Our proposed system
is outperforming these results with 94% detection rate
and improving false alarm rate of 2.1%. These
improvements highlight the effectiveness of the
proposed approach for instantaneous threat detection
and compliance in providing actionable intelligence to
the cybersecurity experts.
Scalability and Real-Time Performance Limitations
The IDS and SIEM systems tends to be very large and the
operation of such a system is always plagued by many
data and data processing challenges. SIEM systems often
struggle to analyze unstructured logs from various
sources, similar to the difficulties faced in Air Traffic
Control systems. Additionally, the high volume of IDS
alerts can overwhelm security teams, making it hard to
discern real threats from false positives. To improve
efficiency, scalable solutions such as machine learning
for anomaly detection are essential for optimizing SIEM
operations.
IV. DISCUSSION
The combination of LLMs, XAI, and ML algorithms
can form a powerful framework for cybersecurity
framework. LLMs provide the facility to recognize
patterns from huge datasets whereas XAI gives the
necessary transparency on how those decisions are taken.
ML algorithms leverage computing power to analyze
and identify possible attacks in real-time.
Table.1 Integration of LLMs and XAI Techniques for Enhanced Cybersecurity Threat Detection
Ref. Evaluation Methods Cyber
Task
LLM Type XAI
Technique
Data Source Key Findings
[40] Performance analysis:
BERT, LogAnomaly,
HitAnomaly
Ablation study: Key
component impact
Experimental assessment:
Multiple datasets
Robustness testing:
LogBug framework
Log-based
anomaly
detection
System
monitorin
g
BERT Attention
mechanism
highlights
log event
importance
for model
decisions.
-Loghub Datasets:
A set of 17 log
datasets
-HDFS Dataset:
Comprising
11,175,629 logs
-BGL Dataset:
Includes 4,747,963
logs
HilBERT, a hierarchical
transformer model, excels
in log-based anomaly
detection, offering
significant improvements
in precision, recall, and F1-
score compared to existing
methods.
[41] GA: Log accuracy
grouping
PA: Template/parameter
accuracy
ED: Template vs. ground
truth similarity
Runtime: Efficiency by
time
Comparison: Performance
vs. techniques
Log
Parsing:
LogPPT
Anomaly
Detection
Root
Cause
Analysis
Failure
Prediction
A pre-
trained
language
model
(RoBERTa)
as a
foundation
is being
used.
The paper
uses XAI
and few-
shot
learning to
enhance
log parsing
by
understandi
ng log
messages.
The paper utilizes
16 public log
datasets, which
include HDFS
dataset, BGL
dataset, Proxifier
dataset, HealthApp
dataset, Apache
dataset, and 11
additional datasets.
LogPPT, a few-shot log
parser, achieves over 0.9
accuracy across 16 datasets
with 32 samples,
surpassing existing parsers
by 16% in Group Accuracy
and 84% in Parsing
Accuracy, without manual
preprocessing.
[42] Accuracy: Quality
Speed: Time
Interpretability: Clarity
Success Rate: 98%
Log-Likelihood:
Simplified
Mode/Density: Gaussian
Numerical Data: For
numbers
Intrusion
detection
in IIoT
environme
nts.
N/A TRUST
XAI model
- WUSTL-IIoT
- NSL-KDD
- UNSW
- Introduced the TRUST
XAI model for
transparency in AI systems,
especially in high-risk
applications.
- TRUST demonstrates a
98% success rate in XAI
model outputs,
outperforming LIME.
[43] - F1-score
- Loss function
- Span-based QA task
evaluation
Intrusion
detection
Log
parsing
User-level
detection
Honeypot
detection
GPT-2 with
117M
parameters,
12 layers,
and 1024
dimensiona
lity
N/A - CyberLab
honeynet dataset
(freely available in
Zenodo)
- Cowrie honeypot
logs with attributes
in JSON format
The paper presents GPT2C,
using GPT-2 to parse logs
from a live Cowrie SSH
honeypot.
The fine-tuned GPT-2
achieves 89% accuracy in
parsing Unix commands.
[44] Edge-IIoTset dataset for
evaluation, with metrics
including accuracy, F1-
score, and inference time.
Threat
detection
in IoT
DoS,
MITM,
Injection,
Malware
BERT
SecurityBE
RT
leverages
the BERT
model for
cyber threat
detection
N/A Edge-IIoTset
cybersecurity
dataset, introduced
by Ferrag et al. in
2022
SecurityBERT, in IoT
networks.
- SecurityBERT,98.2%
accuracy in identifying 14
distinct attacks.
- The proposed model
outperforms traditional
ML/DL methods.
This holistic approach enhances threat detection and the
timeliness of any response but also helps in making AI-
generated alerts interpretable for cybersecurity analysts
to be actionable [37]. It also looks more closely at how
these tools are being combined, and how effective this
blending of new technologies is compared to old
methods. For example, HuntGPT merges a Random
Forest classifier with XAI methods and GPT-3.5 Turbo
to create interactive IDS that helps users better
understand and respond to threats. This combination is
more efficient than traditional ML and DL models,
while XAI aids in validating model behavior and
addressing misclassifications and trust issues in security.
Yet there are many hurdles to overcome to refine this
integration in order for it to function. AI models can be
hard to trust for important tasks like cybersecurity due to
their complexity, which is why XAI was created to
provide simple, clear explanations of how AI makes its
decisions [38]. Even though XAI has great potential, it
is not widely accepted by response teams yet, which
affects how much they trust AI-generated alerts. There
is no standardized methodology for deploying these
technologies to ensure consistent and reliable execution
[39]. The utilization of AI and ML in cybersecurity also
brings about a few important issues regarding ethics as
well as the legality of data privacy or how it can be
misused.
Scalability challenges often emerge when deploying AI
models in real-time, high-volume environments, calling
for more research. Solving these is key to building
reliable AI solutions that strengthen cybersecurity.
From Theory to Practice: Framework Case Studies
Transfer learning, particularly when paired with deep
learning models like Bidirectional Gated Recurrent
Units, has shown significant potential in detecting
phishing attempts. These models realize 100% standards
in accuracy, precision, recall and F1, in classifying
phishing and non-phishing websites as well as
remaining understandable through feature selection. In a
similar way, when it comes to the monitoring of insider
threats, LLMs detect unconventional patterns in network
activities while SHAP or LIME techniques enhance
system comprehensibility and reduce false positives, as
exemplified in similar systems. A major advancement in
the area of medical diagnosis has been achieved by the
integration of deep learning, large language models
among them GPT-4 and explainable AI. This novel
strategy has significantly enhanced the ground ability of
medical language and the patient’s clinical information
with consequent reduction in diagnostic errors.
V. CONCLUSION
The use of integration of Large Language Models
(LLMs), Explainable Artificial Intelligence (XAI), and
Machine Learning (ML) algorithms is a leap forward in
the field of cybersecurity. The Large Language Models
(LLMs) render a better understanding of complex
relationships within large amounts of data, increasing
the effectiveness of threat detection. Explainable AI
means that these models have been designed such that it
is easy to understand and interpret them, which is very
important for establishing confidence in AI-based
systems. Although the joint application of these
technologies leads to advanced outcomes as compared
with conventional practices, there are issues such as the
black box character of AI systems, Explainability versus
performance trade-offs, and susceptibility to adversarial
attack. An important direction of the future research
suggests developing complex and adaptable XAI
frameworks, objective measures for evaluation, as well
as simple and clear interfaces for practical use of these
technologies in cybersecurity.
REFERENCES
[1]. Sindiramutty, S. R. (2023). Autonomous Threat Hunting: A
Future Paradigm for AI-Driven Threat Intelligence. arXiv
preprint arXiv:2401.00286.
[2]. Elbes, M., Hendawi, S., Alzu'bi, S., Kanan, T., & Mughaid,
A. (2023). Unleashing the Full Potential of Artificial
Intelligence and Machine Learning in Cybersecurity
Vulnerability Management. 2023 International Conference on
Information Technology (ICIT), 276-283.
[3]. Ferrag, M., Ndhlovu, M., Tihanyi, N., Cordeiro, L., Debbah,
M., & Lestable, T. (2023). Revolutionizing Cyber Threat
Detection with Large Language Models. ArXiv,
abs/2306.14263. https://doi.org/10.48550/arXiv.2306.14263.
[4]. Rjoub, G., Bentahar, J., Wahab, O., Mizouni, R., Song, A.,
Cohen, R., Otrok, H., & Mourad, A. (2023). A Survey on
Explainable Artificial Intelligence for Cybersecurity. IEEE
Transactions on Network and Service Management, 20,
5115-5140
[5]. Rabah, N., Grand, B., & Pinheiro, M. (2021). IoT Botnet
Detection using Black-box Machine Learning Models: the
Trade-off between Performance and Interpretability. 2021
IEEE 30th International Conference on Enabling
Technologies: Infrastructure for Collaborative Enterprises
(WETICE), 101-106.
[6]. Alodibat, S., Ahmad, A., & Azzeh, M. (2023). Explainable
machine learning-based cybersecurity detection using LIME
and Secml. 2023 IEEE Jordan International Joint Conference
on Electrical Engineering and Information Technology
(JEEIT), 235-242.
[7]. Vinayakumar, R., Alazab, M., Soman, K., Poornachandran,
P., & Venkatraman, S. (2019). Robust Intelligent Malware
Detection Using Deep Learning. IEEE Access, 7, 46717-
46738.
[8]. Nadeem, A., Vos, D., Cao, C., Pajola, L., Dieck, S.,
Baumgartner, R., & Verwer, S. (2022). SoK: Explainable
Machine Learning for Computer Security Applications. 2023
IEEE 8th European Symposium on Security and Privacy
(EuroS&P), 221-240.
[9]. Nauta, M., Trienes, J., Pathak, S., Nguyen, E., Peters, M.,
Schmitt, Y., Schlötterer, J., Keulen, M., & Seifert, C. (2022).
From Anecdotal Evidence to Quantitative Evaluation
Methods: A Systematic Review on Evaluating Explainable
AI. ACM Computing Surveys, 55, 1 - 42.
[10]. Srivastava, G., Jhaveri, R., Bhattacharya, S., Pandya, S., R.,
Maddikunta, P., Yenduri, G., Hall, J., Alazab, M., &
Gadekallu, T. (2022). XAI for Cybersecurity: State of the Art,
Challenges, Open Issues and Future Directions. ArXiv,
abs/2206.03585
[11]. Morgan, S. (2023). Cybersecurity Almanac: 100 Facts,
Figures, Predictions, and Statistics. Retrieved from
https://cybersecurityventures.com/cybersecurity-almanac-
2023/ (Accessed September 23, 2023).
[12]. Thottan, M., & Ji, C. (2003). Anomaly detection in IP
networks. IEEE Transactions on Signal Processing, 51(8),
2191-2204.
[13]. Filali, A., Sallah, A., Hajhouj, M., Hessane, A., & Merras, M.
(2024). Towards Transparent Cybersecurity: The Role of
Explainable AI in Mitigating Spam Threats. Procedia
Computer Science, 236, 394-401.
[14]. Keisuke, Okutu., Hakura, Yumetoshi. (2024). Explainability
of Large Language Models (LLMs) in Providing
Cybersecurity Advice.
[15]. Peter, Anthony. Francesco, Giannini., Michelangelo,
Diligenti., Martin, Homola., Marco, Gori., Štefan, Balogh.,
Ján, Mojžiš. (2024). Explainable Malware Detection with
Tailored Logic Explained Networks.
[16]. Rachid, Tahril., A., Lasbahani., Abdessamad, Jarrar., Youssef,
Balouki. (2024). Using Deep Learning Algorithm in Security
Informatics. International Journal of Innovative Science and
Research Technology,
[17]. Gadepalli, K., Aggarwal, P., & Bhatnagar, V. (2022).
Anomaly Detection in Cybersecurity: Techniques,
Applications, and Challenges. IEEE Transactions on Network
and Service Management.
[18]. Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous
science of interpretable machine learning. arXiv preprint
arXiv:1702.08608.
[19]. Singh, R., Geetha, M. K., & Arunachalam, S. (2020).
Enhancing threat-hunting capabilities using machine
learning. Journal of Information Security and Applications.
[20]. Sadeghi, M., Farhadi, A., & Leskovec, J. (2021). Predictive
analytics for cybersecurity: Leveraging machine learning for
threat detection. Proceedings of the ACM SIGKDD
International Conference on Knowledge Discovery & Data
Mining.
[21]. Bello-Orgaz, G., Jung, J. J., & Camacho, D. (2020). Social
big data: Recent achievements and new challenges.
Information Fusion.
[22]. Costa, P., Martins, R., & Cruz, J. (2023). Enhancing cyber
threat intelligence with explainable AI. Computers &
Security.
[23]. Rudin, C. (2019). Stop explaining black-box machine
learning models for high-stakes decisions and use
interpretable models instead. Nature Machine Intelligence.
[24]. Prabha, Sundaravadivel., Preetha, J., Roselyn., N.,
Vedachalam., Vincent, I., Jeyaraj., Aparna, Ramesh., Aaditya,
Khanal. (2024). Integrating image-based LLMs on edge
devices for underwater robotics.
[25]. Wellawatte, G., & Schwaller, P. (2023). Extracting human
interpretable structure-property relationships in chemistry
using XAI and large language models. ArXiv,
abs/2311.04047
[26]. Jha, R. (2023). Strengthening Smart Grid Cybersecurity: An
In-Depth Investigation into the Fusion of Machine Learning
and Natural Language Processing. Journal of Trends in
Computer Science and Smart Technology.
[27]. Shaukat, K., Luo, S., Varadharajan, V., Hameed, I., Chen, S.,
Liu, D., & Li, J. (2020). Performance Comparison and
Current Challenges of Using Machine Learning Techniques
in Cybersecurity. Energies.
[28]. Xin, Y., Kong, L., Liu, Z., Chen, Y., Li, Y., Zhu, H., Gao, M.,
Hou, H., & Wang, C. (2018). Machine Learning and Deep
Learning Methods for Cybersecurity. IEEE Access, 6, 35365-
35381.
[29]. Šarčević, A., Pintar, D., Vranić, M., & Krajna, A. (2022).
Cybersecurity Knowledge Extraction Using XAI. Applied
Sciences. https://doi.org/10.3390/app12178669.
[30]. Devgan, A. (2023). AI-Driven Cybersecurity for Witness
Data: Confidentiality Redefined. International Journal of
Research Publication and Reviews.
[31]. Huang, S., Liu, Y., Fung, C., Wang, H., Yang, H., & Luan, Z.
(2023). Improving log-based anomaly detection by pre-
training hierarchical transformers. IEEE Transactions on
Computers, 72(9), 2656-2667.
[32]. Le, V. H., & Zhang, H. (2023, May). Log parsing with
prompt-based few-shot learning. In 2023 IEEE/ACM 45th
International Conference on Software Engineering (ICSE)
(pp. 2438-2449). IEEE.
[33]. Zolanvari, M., Yang, Z., Khan, K., Jain, R., & Meskin, N.
(2022). TRUST XAI: Model-Agnostic Explanations for AI
With a Case Study on IIoT Security. IEEE Internet of Things
Journal, 10, 2967-2978.
[34]. Ye, J., Chen, X., Xu, N., Zu, C., Shao, Z., Liu, S., ... &
Huang, X. (2023). A comprehensive capability analysis of
gpt-3 and gpt-3.5 series models. arXiv preprint
arXiv:2303.10420.
[35]. Yang, W., Le, H., Laud, T., Savarese, S., & Hoi, S. C. (2022).
Omnixai: A library for explainable ai. arXiv preprint
arXiv:2206.01612.
[36]. Sarker, I. H. (2024). AI-driven cybersecurity and threat
intelligence: cyber automation, intelligent decision-making
and explainability. Springer Nature.
[37]. Asghari, H., Birner, N., Burchardt, A., Dicks, D., Fassbender,
J., Feldhus, N., & Züger, T. (2021). What to explain when
explaining is difficult? An interdisciplinary primer on XAI
and meaningful information in automated decision-making.
Alexander von Humboldt Institute for Internet and Society.
[38]. Vries, B., Zwezerijnen, G., Burchell, G., Velden, F., Oordt,
C., & Boellaard, R. (2023). Explainable artificial intelligence
(XAI) in radiology and nuclear medicine: a literature review.
Frontiers in Medicine, 10.
[39]. Huang, S., Liu, Y., Fung, C., He, R., Zhao, Y., Yang, H., &
Luan, Z. (2020). HitAnomaly: Hierarchical Transformers for
Anomaly Detection in System Log. IEEE Transactions on
Network and Service Management, 17, 2064-2076.
[40]. Kierszbaum, S., Klein, T., & Lapasset, L. (2022). ASRS-
CMFS vs. RoBERTa: Comparing Two Pre-Trained Language
Models to Predict Anomalies in Aviation Occurrence Reports
with a Low Volume of In-Domain Data Available. Aerospace.
[41]. Zolanvari, M., Yang, Z., Khan, K., Jain, R., & Meskin, N.
(2022). TRUST XAI: Model-Agnostic Explanations for AI
with a Case Study on IIoT Security. IEEE Internet of Things
Journal, 10, 2967-2978.
[42]. Zhou, Z., Huang, H., & Fang, B. (2021). Application of
Weighted Cross-Entropy Loss Function in Intrusion
Detection. Journal of Computer and Communications.
[43]. Ferrag, M., Friha, O., Hamouda, D., Maglaras, L., & Janicke,
H. (2022). Edge-IIoTset: A New Comprehensive Realistic
Cyber Security Dataset of IoT and IIoT Applications for
Centralized and Federated Learning. IEEE Access, PP, 1-1.
... Patients' crucial data is being held hostage in a form of cyber blackmail, ultimately locking up entire healthcare systems and services. It is essential now more than ever to bolster these sectors with optimized cybersecurity strategies that can keep up with the ever growing sophistication of attacks and threats [2]. ...
Conference Paper
Full-text available
The growing digitization of crucial infrastructure, especially in areas like health care, has put these sectors at risk of complex cyberattacks. Managing a patient's protected health information (PHI) and operational systems of health care facilities is highly sensitive; breaches can result in data theft, service interruptions, and even endanger the safety of patients. This paper presents the use of AI cybersecurity solutions for the protection of vital infrastructure and patient data with advanced anomaly detection, intrusion prevention, and threat intelligence systems. AI algorithms can analyze large amounts of information in real time and identify new threats while adapting defenses automatically. The study covers central applications of AI: machine learning-enhanced intrusion detection systems, endpoint protection, and behavior-based mitigation of cyber threats. It also tackles issues of model accuracy-false positives, privacy, and model size. Further, the paper presents an integrated cyberspace security approach designed specifically for the healthcare systems with the goals of minimizing exploitable assets and maximizing sensitive data security. The core directions are aimed at building transparent AI models and employing blockchain technology for improved assurance of data integrity.
... Deep learning applications, such as gas pipeline leakage detection, highlight AI's role in infrastructure surveillance, which can be extended to water, waste, and energy systems for sustainability [49]. Predictive analytics, already proven effective in healthcare optimization, can similarly improve environmental sector planning and management [50]. Fraud detection techniques in finance have been adapted to identify wasteful practices, ensuring that ESG resources are managed responsibly. ...
Article
Full-text available
One of the key areas where Artificial Intelligence (AI) can counteract the forces of destruction and promote sustainability is intelligent decision-making, resource allocation, and minimizing environmental impact. This paper focuses on how AI aids environmental surveillance, energy management, waste management, and sustainable agriculture. AI-powered systems enhance climate modeling, energy grid optimization, supply chain efficiency, and recycling processes. By utilizing large datasets, real-time analytics, and AI, intelligent automation supports sustainability efforts globally. However, ethical concerns and energy consumption challenges should not deter AI from becoming a driving force in a greener future, especially with emerging technologies like blockchain and IoT. The objective of this paper is to explore AI's role in sustainable development through its applications, challenges, and future directions, demonstrating how AI-driven solutions can create a more resilient and environmentally friendly world.
... Certain challenges persist especially regarding the auditability and transparency of AI systems for important healthcare operations [23]. Moreover, the aspects concerning data protection, consent, and the ethical boundaries of the use of AI in the medicine field are still extensively searched and debated [24]. ...
... Sufian et al. (2024) emphasize that meticulous calculation of this angle is very critical to orthodontic practice. Other clinical studies further stress the necessity of having the right angle NLA during orthodontic therapy in order to advance aesthetic (Ghazal et al., 2024) and global patient acceptance. ...
Article
Class II maloclussion is the most prevalent in orthodontic patients. During camolflauge treatment of this maloclussionnasolabial angel increases inevitably for which a threshold value needs to be defined. Objective: The objective of this study was to calculate the mean score for the modified profile of a woman of class II div 1, by digitally simulating a rise in nasolabial angle from the initial image. Methods: This cross-sectional study was undertaken at Puniab Dental Hospital/de' Montmorency College of Dentistry from July 15 to December 01, 2024. A profile picture and lateral cephalometric radiograph of a female with an untreated skeletal Class 2 Division I relationship, a normal mandibular plane angle and normal face height were used. The NLA of the subject's profile image was adjusted to 104.9±4º using Adobe Photoshop CS2. The base image was then digitally changed to produce additional profile photographs, imitating increase in nasolabial angle by 2.0, 4.0, and 6.0 standard deviations (corresponding images called C, B, and A in the questionnaire). Results: The mean age of lay persons was 29.14±5.41 years, minimum age was 18 and maximum age was 41 years. The gender of 90(58.1%) were males and 65(41.9%) were females. Mean attractiveness of facial profile was evaluated by calculating mean attractiveness as the lay people rank the images from 1 to 5. Mean attractiveness score was 4.74±0.44 for image B followed by 4.54±0.50 for image A, 4.37±0.50 for image C and 3.27±0.45 for base image. Conclusions: According to the study, both the nontreated and profile with biggest nasolabial angle (NLA) had the least pleasing appearance. To achieve an aesthetic profile at the end of treatment while treating a class II DIV 1 patient the nasolabial angle should not exceed 121°.
Article
Full-text available
The use of Artificial Intelligence, especially machine learning and artificial neural networks, has dramatically increased in urology, assisting in innovative ways for diagnosis, prognosis, and treatment planning. This paper presents an up-to-date review of AI advances in the field of urology that pertain to its imaging aspects, particularly regarding the diagnosis of prostate cancer, kidney stones, and bladder cancer. The deep learning methods, especially convolutional neural networks, proved to be very effective in many medical imaging tasks, such as automated abnormal growth detection, organ segmentation, etc. Additionally, deep learning systems have performed well in predicting a patient’s outcome, including post-operative complications and recovery. Nevertheless, the progress made greatly differs from the goals set, and AI’s integration into clinical practice remains an unmet need due to obstacles posed by inefficient datasets and the opacity of some AI algorithms.This paper also discusses the key challenges in implementing AI tools in urology, as well as the potential for future research to enhance the accuracy, interpretability, and clinical applicability of AI-driven solutions. Ultimately, AI is poised to play a transformative role in urology, offering the potential for more personalized, efficient, and precise patient care.
Article
Full-text available
The security of the Web is a significant issue for personal, corporate, and state users in the context of digitalisation. With all kinds of activities related to the Internet growing, different types of threats also emerge, including phishing, malware, ransomware, and others which threaten personal information, funds, and critical system structures. The present paper discusses the principal threats that are inherent in the web environment, the effects of these threats, and protection means. Phishing is a common kind of social engineering that aims at making the target release relevant information. Viruses and worms, in their broad sense, include Malware, which sneaks into systems to corrupt, steal or delete important information. Ransomware is a virus that encrypts a victim"s files and asks for payment for the decryption key. At the same time, drive-by downloads are another virus that installs themselves on a victim"s computer from compromised websites without the victim knowing. With spoofing and man-in-the-middle attacks, data integrity is not preserved. At the same time, SQL injection and cross-site scripting are aimed at controlling web applications in order to control databases. Ddos-attack or Distributed Denial of Service attacks on services knock them off balance by flooding them with traffic. Minimising risks entails keeping oneself posted on the latest developments, updating the software, locking passwords in the cyber world, using a two-factor identification key, and making sure that different strains of technology have backup copies of the materials that have been tampered with. Adhering to security practices and being alert to new threats is vital for an organisation to have a safe online existence. Hence, this study emphasises the need to adopt appropriate measures for evaluating web threats in order to have safe interactions.
Article
Full-text available
Explainable Artificial Intelligence (XAI) is an emerging field in AI that aims to address the opaque nature of machine learning models. Furthermore, it has been shown that XAI can be used to extract input-output relationships, making them a useful tool in chemistry to understand structure-property relationships. However, one of the main limitations of XAI methods is that they are developed for technically oriented users. We propose the XpertAI framework that integrates XAI methods with large language models (LLMs) accessing scientific literature to generate accessible natural language explanations of raw chemical data automatically. We conducted 5 case studies to evaluate the performance of XpertAI. Our results show that XpertAI combines the strengths of LLMs and XAI tools in generating specific, scientific, and interpretable explanations.
Article
Full-text available
The most recent statistics show that of all cancers, cancer of the breast is the most common, killing about 900,000 individuals annually. Finding the disease early and correctly diagnosing it can increase the chances of a good result, which lowers the death rate. Early diagnosis can, in fact, prevent the disease from spreading and prevent premature victims from experiencing it. In this work, a comparison is made between advanced deep learning techniques and traditional machine learning for the analysis of breast cancer. We evaluated a deep learning model based on neural networks and traditional machine learning approaches such as Support Vector Classifier (SVC), Decision Tree, and Random Forest. Several demographic and clinical data were included in the diverse dataset of this investigation. This study compared traditional machine learning models (Random Forest, Decision Tree, SVC) with a neural network-based deep learning model in breast cancer analysis using features such as age, family history, genetic mutation, hormone therapy, mammogram results, breast pain, menopausal status, BMI, alcohol consumption, physical activity, smoking status, breast cancer diagnosis, frequency of screening, awareness source, symptom awareness, screening preference, and geographical location. SVC obtained an 86.36%, Decision Tree an 86.18%, and Random Forest an 86.00%. The deep learning model more precisely, a neural network outperformed these results with a highest 93% accuracy. To evaluate their diagnostic usefulness for breast cancer analysis, this study compares deep learning algorithms with more traditional machine learning methods. Accuracy ratings for the machine learning models were 86.00% for Random Forest, 86.18% for Decision Tree, and 88.36% for Support Vector Classifier.
Article
Full-text available
This narrative review explores two case scenarios related to immunoglobulin A nephropathy (IgAN) and the application of predictive monitoring, big data analysis and artificial intelligence (AI) in improving treatment outcomes. The first scenario discusses how online service providers accurately understand consumer preferences and needs through the use of AI-powered big data analysis. The author, a clinical nephrologist, contemplates the potential application of similar methodologies, including AI, in his medical practice to better understand and meet patient needs. The second scenario presents a case study of a 20-year-old man with IgAN. The patient exhibited recurring symptoms, including gross haematuria and tonsillitis, over a 2-year period. Through histological examination and treatment with renin–angiotensin system blockade and corticosteroids, the patient experienced significant improvement in kidney function and reduced proteinuria over 15 years of follow-up. The case highlights the importance of individualized treatment strategies and the use of predictive tools, such as AI-based predictive models, in assessing treatment response and predicting long-term outcomes in IgAN patients. The article further discusses the collection and analysis of real-world big data, including electronic health records, for studying disease natural history, predicting treatment responses and identifying prognostic biomarkers. Challenges in integrating data from various sources and issues such as missing data and data processing limitations are also addressed. Mathematical models, including logistic regression and Cox regression analysis, are discussed for predicting clinical outcomes and analysing changes in variables over time. Additionally, the application of machine learning algorithms, including AI techniques, in analysing big data and predicting outcomes in IgAN is explored. In conclusion, the article highlights the potential benefits of leveraging AI-powered big data analysis, predictive monitoring and machine learning algorithms to enhance patient care and improve treatment outcomes in IgAN.
Article
Full-text available
Smart grid technology has transformed electricity distribution and management, but it also exposes critical infrastructures to cybersecurity threats. To mitigate these risks, the integration of machine learning (ML) and natural language processing (NLP) techniques has emerged as a promising approach. This survey paper analyses current research and applications related to ML and NLP integration, exploring methods for risk assessment, log analysis, threat analysis, intrusion detection, and anomaly detection. It also explores challenges, potential opportunities, and future research directions for enhancing smart grid cybersecurity through the synergy of ML and NLP. The study's key contributions include providing a thorough understanding of state-of-the-art techniques and paving the way for more robust and resilient smart grid defences against cyber threats.
Article
The utilization of deep learning algorithms in security informatics has revolutionized cybersecurity, offering advanced solutions for threat detection and mitigation. This paper presents findings from research exploring the efficacy of deep learning in various security domains, including anomaly detection, malware detection, phishing detection, and threat intelligence analysis. Results demonstrate high detection rates and accuracy, with anomaly detection achieving a remarkable 98.5% detection rate and malware detection showcasing a classification accuracy of 99.2%. Phishing detection also yielded promising results with a detection accuracy of 95.8%. These findings underscore the potential of deep learning in enhancing security defenses. However, challenges such as interpretability and robustness remain, necessitating further research and development. By addressing these challenges and prioritizing robust security measures, organizations can leverage deep learning to create more effective and trustworthy security solutions, thereby mitigating cyber threats and safeguarding digital assets.