Applied Artificial Intelligence

Published by Taylor & Francis

Online ISSN: 1087-6545 · Print ISSN: 0883-9514

Journal website · Author guidelines

Top-read articles

143 reads in the past 30 days

Figures: workflow schematic for detecting lung cancer using the XGBoost algorithm and deep learning ResNet101 with hyperparameter tuning; general architecture of the XGBoost algorithm; NSCLC and SCLC original image samples with heatmaps; AUC-ROC and error graphs; ROC and PR curves.

The Deep Learning ResNet101 and Ensemble XGBoost Algorithm with Hyperparameters Optimization Accurately Predict the Lung Cancer

June 2023 · 1,920 Reads · 18 Citations

Aims and scope


Focuses on research in artificial intelligence, including applications that solve tasks in engineering, administration, and education, as well as evaluations of AI systems.

  • Applied Artificial Intelligence addresses concerns in applied research and applications of artificial intelligence (AI).
  • The journal also acts as a medium for exchanging ideas and thoughts about impacts of AI research.
  • Articles highlight advances in uses of AI systems for solving tasks in management, industry, engineering, administration, and education; evaluations of existing AI systems and tools, emphasizing comparative studies and user experiences; and the economic, social, and cultural impacts of AI.
  • Papers on key applications, highlighting methods, time schedules, person-months needed, and other relevant material, are welcome.
  • All submitted manuscripts are subject to initial evaluation by the Editor and, if found suitable for further consideration, to peer review by independent, anonymous expert reviewers.

For a full list of the subject areas this journal covers, please visit the journal website.

Recent articles


Figures: the proposed methodological framework; sorting of road signs; compliance levels of road signs at different q values.
Ergonomic road sign evaluation and multi-criteria sorting based on q-rung orthopair fuzzy information embedded in CRITIC and TOPSIS-Sort

February 2025 · 8 Reads

Maria Gemel Palconit · Dyonne Bernadine Mirasol · Dyanne Brendalyn Mirasol-Cavero · [...]

This work proposes a novel multi-criteria sorting approach for evaluating the compliance of road signs based on ergonomic principles and sign comprehension using an integrated Criteria Importance Through Intercriteria Correlation (CRITIC) and Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) sorting (TOPSIS-Sort) under an environment that handles uncertainty via q-rung orthopair fuzzy sets (q-ROFS). The q-ROF-CRITIC assigns the priority weights of the attributes (i.e. comprehension and ergonomic principles), whereas the q-ROF-TOPSIS-Sort evaluates and classifies the compliance levels of road signs in view of a set of pre-defined categories, consequently bridging the limitations of the TOPSIS-Sort in handling imprecise evaluations. Demonstrated in an actual case study of evaluating 83 road signs in the Philippines, results show that prohibitive signs have the highest comprehension levels, while parking and stop signs, together with road obstacle signs, belong to the medium compliance level. Low-level compliance is observed for supplementary and intersection road signs due to unfamiliarity, while horizontal signs are ergonomically low on spatial and physical attributes. The proposed approach is supported by sensitivity analysis of q values and comparative assessments with other methods. The findings encourage further investigation into comprehensibility evaluations and open avenues for exploring the factors that influence road sign comprehension.
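
For orientation, here is a minimal sketch of the classical (crisp) CRITIC weighting step that the paper's q-ROF extension generalizes: a criterion earns more weight when it shows high contrast (standard deviation) and low correlation with the other criteria. The decision matrix below is dummy data, not the paper's.

```python
import numpy as np

# Hypothetical decision matrix: rows = 83 road signs, columns = criteria
# (comprehension plus ergonomic attributes), assumed normalized to [0, 1].
X = np.random.rand(83, 5)

def critic_weights(X):
    std = X.std(axis=0, ddof=1)            # contrast intensity per criterion
    corr = np.corrcoef(X, rowvar=False)    # pairwise criterion correlations
    conflict = (1.0 - corr).sum(axis=0)    # conflict with the other criteria
    info = std * conflict                  # information content per criterion
    return info / info.sum()               # normalized priority weights

print(critic_weights(X))
```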


Variable Selection Algorithm for Explaining Anomalies in Real-World Regenerative Thermal Oxidizers

February 2025 · 4 Reads

The use of regenerative thermal oxidizers (RTOs), which reduce hazardous air pollution and save energy, has increased with the rapid growth of industrial technology. Detecting and explaining anomalies in RTOs has therefore become important. Accurate anomaly detection in RTOs calls for reconstruction-based anomaly detection (AD) models, currently a major direction of AD research. However, traditional explainable artificial intelligence (XAI) cannot explain reconstruction-based AD models well enough to identify the main facilities in RTOs. To address this problem, we developed a method to improve the accuracy of XAI in explaining reconstruction-based AD models. Specifically, we first grouped the variables based on correlation and clustering analysis. We then calculated the impact of each group on normal/abnormal events in terms of maximum mean discrepancy and cosine similarity. Using the most influential variables identified by our method, XAI correctly pinpointed the main variables without considering unnecessary ones. Experimental results on a real-world RTO dataset showed that our method improves the accuracy with which XAI determines the main variables compared to traditional XAI.
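
A rough sketch of the group-scoring idea under stated assumptions (RBF kernel, dummy variable groups); the paper's exact kernel, grouping, and score combination are not reproduced here:

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy between two samples, RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def group_influence(normal, abnormal, groups):
    """Score each variable group by its distribution shift between normal
    and abnormal windows (MMD) and by the change in direction of its mean
    profile (cosine similarity)."""
    scores = {}
    for name, idx in groups.items():
        N, A = normal[:, idx], abnormal[:, idx]
        mu_n, mu_a = N.mean(0), A.mean(0)
        cos = mu_n @ mu_a / (np.linalg.norm(mu_n) * np.linalg.norm(mu_a) + 1e-12)
        scores[name] = {"mmd2": rbf_mmd2(N, A), "cosine": cos}
    return scores

# Dummy usage: 8 sensor variables in two hypothetical facility groups
normal, abnormal = np.random.randn(200, 8), np.random.randn(50, 8) + 0.5
print(group_influence(normal, abnormal, {"burner": [0, 1, 2], "fan": [3, 4]}))
```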


Deep Learning-Based Energy Consumption Prediction Model for Green Industrial Parks

Enhancing the accuracy of industrial building energy consumption forecasts is beneficial for improving energy management and addressing the imbalance between supply and demand in building electricity use. To overcome the limitations of existing energy consumption forecasting methods, which inadequately consider the specific energy usage characteristics and user behaviors in parks and often perform poorly at predicting extreme values, this study proposes a hybrid energy consumption forecasting model that combines Singular Spectrum Analysis (SSA) and Long Short-Term Memory (LSTM) neural networks. Initially, SSA is used to extract the autocorrelation of the electricity consumption series and eliminate the mutual interference caused by component mixing. Then, fuzzy entropy values are utilized to differentiate the complexity of the various components, reconstructing them into high-frequency and low-frequency components. These components are then predicted using a multi-factor LSTM model optimized by improved particle swarm optimization, with the results aggregated for the final forecast. The results indicate that the model’s root mean square error is only 12.116 kWh, lower than that of the LSTM multi-factor model, the EMD-LSTM model, and the SSA-LSTM model. The model shows a closer fit to the original series trend and more accurate predictions at extreme points, aligning more closely with actual values.
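
As a concrete reference point, here is a minimal singular spectrum analysis decomposition in plain NumPy (window length and the load series are illustrative); the paper layers fuzzy-entropy grouping and an optimized LSTM on top of components like these:

```python
import numpy as np

def ssa_components(series, window):
    """Embed the series into a trajectory matrix, take its SVD, and rebuild
    one component per singular triple via diagonal averaging."""
    n = len(series)
    k = n - window + 1
    traj = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])
        # average anti-diagonals back into a 1-D series
        comps.append(np.array([Xi[::-1, :].diagonal(j - window + 1).mean()
                               for j in range(n)]))
    return comps

t = np.arange(480)
load = 10 + np.sin(2 * np.pi * t / 24) + 0.3 * np.random.randn(480)  # dummy hourly kWh
parts = ssa_components(load, window=48)
print(np.allclose(sum(parts), load))  # components sum back to the series
```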


AI Ethics: Integrating Transparency, Fairness, and Privacy in AI Development

February 2025 · 2 Reads

The expansion of Artificial Intelligence in sectors such as healthcare, finance, and communication has raised critical ethical concerns surrounding transparency, fairness, and privacy. Addressing these issues is essential for the responsible development and deployment of AI systems. This research establishes a comprehensive ethical framework that mitigates biases and promotes accountability in AI technologies. A comparative analysis of international AI policy frameworks from regions including the European Union, United States, and China is conducted using analytical tools such as Venn diagrams and Cartesian graphs. These tools allow for a visual and systematic evaluation of the ethical principles guiding AI development across different jurisdictions. The results reveal significant variations in how global regions prioritize transparency, fairness, and privacy, with challenges in creating a unified ethical standard. To address these challenges, we propose technical strategies, including fairness-aware algorithms, routine audits, and the establishment of diverse development teams to ensure ethical AI practices. This paper provides actionable recommendations for integrating ethical oversight into the AI lifecycle, advocating for the creation of AI systems that are both technically sophisticated and aligned with societal values. The findings underscore the necessity of global collaboration in fostering ethical AI development.


A Composite Recognition Method Based on Multimode Mutual Attention Fusion Network

To address the problem of single-mode vulnerability to complex environments, a multimode fusion network with mutual attention is proposed. This network combines laser, infrared, and millimeter-wave modalities to leverage the advantages of each mode in different environments, increasing the network’s resilience to interference. The study begins with the construction of pixel-level fusion networks, feature-weighted fusion networks, and the multimode mutual attention fusion network. A comprehensive introduction to the multimode mutual attention fusion network is given, as well as a comparison with the other two networks. The model is then trained and evaluated using data from glide rocket and drone experiments. Finally, an analysis of the anti-outlier interference capability of the multimode fusion network with mutual attention is carried out. The test results show that the multimode mutual attention fusion network containing a feature fusion attention mechanism has the highest detection performance and anti-interference ability. Without interference, the network achieves a remarkable accuracy of 0.98 for multi-target recognition. With an accuracy of 0.96, it also ensures a high level of stability in various interference environments. Moreover, the introduction of multi-scale fusion has improved the rocket’s speed adaptability by about 75%.
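
A minimal sketch of what a mutual (cross) attention fusion block can look like, assuming per-modality feature sequences of equal width; the layer sizes, pooling, and three-modality wiring are illustrative choices, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class MutualAttentionFusion(nn.Module):
    """Each modality queries the concatenation of the other two, and the
    attended features are merged by a learned linear fusion layer."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Linear(3 * dim, dim)

    def forward(self, laser, infrared, mmwave):
        streams = [laser, infrared, mmwave]
        fused = []
        for i, q in enumerate(streams):
            kv = torch.cat([s for j, s in enumerate(streams) if j != i], dim=1)
            out, _ = self.attn(q, kv, kv)   # modality i attends to the others
            fused.append(out.mean(dim=1))   # pool over the sequence axis
        return self.fuse(torch.cat(fused, dim=-1))

# Usage with dummy (batch, seq, dim) features per modality
x = [torch.randn(2, 10, 128) for _ in range(3)]
print(MutualAttentionFusion()(*x).shape)  # torch.Size([2, 128])
```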


DC-BiLSTM-CNN Algorithm for Sentiment Analysis of Chinese Product Reviews

February 2025 · 1 Read

The rapid growth of e-commerce has led to a significant increase in user feedback, especially in the form of post-purchase comments on online platforms. These reviews not only reflect customer sentiments but also crucially influence other users’ purchasing decisions due to their public accessibility. The sheer volume and complexity of product reviews make manual sorting challenging, requiring businesses to process and discern customer sentiments automatically. Chinese, a predominant language on e-commerce platforms, presents unique challenges in sentiment analysis due to its character-based nature. Based on the language characteristics of Chinese product reviews, this paper proposes an innovative Dual-Channel BiLSTM-CNN (DC-BiLSTM-CNN) sentiment analysis algorithm. The algorithm constructs two channels, transforming text into both character and word vectors and inputting them into Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Network (CNN) models. The combination of these channels facilitates a more comprehensive feature extraction from reviews. Comparative analysis revealed that DC-BiLSTM-CNN significantly outperforms baseline models, substantially enhancing the classification of product reviews. We conclude that the proposed DC-BiLSTM-CNN algorithm offers an effective solution for handling Chinese product reviews, carrying positive implications for businesses seeking to enhance product and service quality, ultimately resulting in heightened user satisfaction.
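
A minimal PyTorch sketch of the dual-channel idea, assuming character and word token IDs are already produced by a tokenizer; vocabulary sizes, dimensions, and the shared convolution are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class DualChannelBiLSTMCNN(nn.Module):
    """One channel consumes character embeddings, the other word embeddings;
    each runs a BiLSTM followed by a 1-D convolution and global max pooling,
    and the pooled features are concatenated for classification."""
    def __init__(self, char_vocab=5000, word_vocab=50000, emb=128, hid=64, classes=2):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, emb)
        self.word_emb = nn.Embedding(word_vocab, emb)
        self.char_lstm = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        self.word_lstm = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        self.conv = nn.Conv1d(2 * hid, 2 * hid, kernel_size=3, padding=1)
        self.cls = nn.Linear(4 * hid, classes)

    def _channel(self, emb, lstm, ids):
        h, _ = lstm(emb(ids))                      # (batch, seq, 2*hid)
        h = torch.relu(self.conv(h.transpose(1, 2)))
        return h.max(dim=2).values                 # global max pool over time

    def forward(self, char_ids, word_ids):
        feats = torch.cat([self._channel(self.char_emb, self.char_lstm, char_ids),
                           self._channel(self.word_emb, self.word_lstm, word_ids)], dim=-1)
        return self.cls(feats)

model = DualChannelBiLSTMCNN()
logits = model(torch.randint(0, 5000, (4, 50)), torch.randint(0, 50000, (4, 30)))
print(logits.shape)  # torch.Size([4, 2])
```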


A novel RNN architecture to improve the precision of ship trajectory predictions

February 2025 · 3 Reads

Monitoring maritime transport activities is crucial for ensuring the security and safety of people and goods. This type of monitoring often relies on navigation systems such as the Automatic Identification System (AIS). AIS data has been used to support defense teams in identifying equipment defects, locating suspicious activity, ensuring ship collision avoidance, and detecting hazardous events. In this context, Ship Trajectory Prediction (STP) has been conducted using AIS data to support the estimation of vessel routes and locations, contributing to maritime safety and situational awareness. Currently, the Ornstein-Uhlenbeck (OU) model is considered the state of the art for STP. However, this model can be time-consuming and can only represent a single vessel track. To solve these challenges, Recurrent Neural Network (RNN) models have been applied to STP to allow scalability for large data sets and to capture larger regions or anomalous vessel behavior. This research proposes a new RNN architecture that decreases the prediction error by up to 50% for cargo vessels when compared to the OU model. Results also confirm that the proposed Decimal Preservation layer can benefit other RNN architectures developed in the literature by reducing their prediction errors for complex data sets.


Torque Prediction In Deep Hole Drilling: Artificial Neural Networks Versus Nonlinear Regression Model

January 2025 · 16 Reads

One of the main challenges when drilling small and deep holes is the difficulty of chip evacuation. As the hole depth increases, chips tend to become tightly compressed, causing chip jamming, which leads to a rapid increase in cutting forces and strong random fluctuations. The discontinuous chip evacuation process makes the cutting force signal strongly nonlinear and random, and therefore difficult to predict accurately. In this paper, we developed a two-layer artificial neural network (ANN) model, trained with the Levenberg-Marquardt algorithm, to predict torque during deep drilling. Unlike many previous studies, this model uses hole depth as an input vector element instead of hole diameter. The model was validated through experiments drilling AISI-304 stainless steel at a hole depth-to-diameter ratio of 8 under continuous drilling conditions with ultrasonic-assisted vibration. The performance of the ANN model was compared with the exponential model and evaluated using the MAPE index. Results show that the ANN model has better predictive capability, with an average MAPE approximately four times smaller, and higher reliability, with a standard deviation approximately 3.5 times smaller, than the exponential function model. This model can be further refined to predict torque when drilling deep holes in future studies.
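
Since the comparison hinges on MAPE, a minimal sketch of the metric; the torque values below are invented placeholders, not experimental data:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, the index used to compare the models."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Illustrative comparison with made-up torque values (N·m)
torque = np.array([1.8, 2.1, 2.7, 3.4, 4.0])
ann_pred = np.array([1.75, 2.15, 2.60, 3.50, 3.90])
exp_pred = np.array([1.50, 2.40, 3.10, 3.00, 4.60])
print(mape(torque, ann_pred), mape(torque, exp_pred))
```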


Figures: PegasosQSVM-based fake news detection approach; ZFeatureMap circuit from Qiskit applied to two qubits (q0 and q1); ZZFeatureMap circuit, similar to the ZFeatureMap but with layers of controlled-Z gates; accuracy score versus number of steps (ZFeatureMap: 100-400, ZZFeatureMap: 100-200); time performance in seconds over the same ranges.
PegasosQSVM: A Quantum Machine Learning Approach for Accurate Fake News Detection

January 2025 · 9 Reads

The rapid spread of fake news on social media poses a significant threat to modern societies. Traditional machine learning approaches have limitations in handling the ever-increasing volume and complexity of data. This research explores quantum machine learning for fake news classification by proposing Pegasos Quantum Support Vector Machines, a novel algorithm combining Pegasos Support Vector Machines with quantum kernels and advanced data encoding. Through experimentation on the IBM Qasm Simulator, Pegasos Quantum Support Vector Machines scored 90.67% in accuracy. This study is primarily focused on local simulation, where the proposed algorithm scored as high as 95.63% accuracy, with 95.44% precision, 99.52% recall, and a 96.76% f1-score. The achieved results outperform other machine learning methods on the BUZZFEED dataset, including Quantum Neural Networks and Quantum K-Nearest Neighbors. Its successful implementation paves the way for further refinement of quantum machine learning techniques in fake news classification. The PegasosQSVM algorithm, however, encounters some implementation issues on real-world Quantum Processing Units (QPUs). QPUs in the Noisy Intermediate-Scale Quantum era are prone to noise effects that degrade computations and, by extension, the results of quantum machine learning algorithms. Further implementation on real QPUs, together with error mitigation techniques, is needed for optimal results on quantum hardware.
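
For readers who want to experiment, a minimal sketch of the building blocks named above using the qiskit-machine-learning package (class names as of recent versions; the two-feature data and hyperparameters are dummy stand-ins for encoded news articles):

```python
import numpy as np
from qiskit.circuit.library import ZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from qiskit_machine_learning.algorithms import PegasosQSVC

# Dummy 2-feature stand-in for encoded articles (labels: 0 = real, 1 = fake)
X = np.random.rand(40, 2)
y = (X.sum(axis=1) > 1.0).astype(int)

feature_map = ZFeatureMap(feature_dimension=2, reps=2)   # data-encoding circuit
kernel = FidelityQuantumKernel(feature_map=feature_map)  # quantum kernel
model = PegasosQSVC(quantum_kernel=kernel, C=1000, num_steps=100)
model.fit(X, y)
print(model.score(X, y))
```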


Encrypted Search Method for Cloud Computing Data Under Attack Based on TF-IDF and Apriori Algorithm

January 2025 · 3 Reads

This paper designs the MKSE and SEMSS methods. MKSE uses an improved TF-IDF weight calculation method to extract keywords and applies virtual keywords to construct inverted indexes, making it difficult for malicious attackers to infer the index content. SEMSS uses the Apriori algorithm to mine co-occurrence relationships between words and find the keyword sets that meet the minimum support threshold, improving the recall rate of search results. Finally, the security of the scheme is verified in terms of semantic security, efficiency, data integrity, and other aspects. The results showed that the data encryption time of the MKSE and TRSE methods increased gradually with the size of the document collection. The index build time also increased as the document set grew. The accuracy of the improved TF-IDF method was 63.8%. The running time of Apriori decreased as the minimum support increased; when the minimum support was 12.0%, the Apriori algorithm ran for 211 seconds. The MKSE method was more efficient than the TRSE method in searching documents by query keywords. When the document set size was 3,000, the SEMSS method had a full search rate of 81.09%. This research realizes the semantic security of outsourced data and can efficiently and comprehensively perform encrypted retrieval based on keyword ranking.
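
As a rough sketch of the co-occurrence mining step SEMSS relies on, here is a toy Apriori pass that keeps keyword pairs meeting a minimum support threshold; the documents and threshold are illustrative:

```python
from itertools import combinations

def apriori_pairs(transactions, min_support):
    """Count keyword co-occurrence pairs across documents and keep those
    whose support (fraction of documents) meets the threshold."""
    n = len(transactions)
    counts = {}
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return {p: c / n for p, c in counts.items() if c / n >= min_support}

docs = [["cloud", "encryption", "index"],
        ["cloud", "encryption", "search"],
        ["cloud", "search", "index"],
        ["encryption", "search"]]
print(apriori_pairs(docs, min_support=0.5))
```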


From Baseline to Best Practice: An Advanced Feature Selection, Feature Resampling and Grid Search Techniques to Improve Injury Severity Prediction

January 2025 · 45 Reads

This work addresses the need for precise models that predict the severity of injuries sustained in traffic crashes as a regression task. To this end, we thoroughly analyzed traffic crashes in Rome between 2016 and 2019, gathering data on vehicle attributes and environmental factors. Four predictive systems are employed to investigate the intricate problem of predicting the severity of injuries sustained in traffic crashes using different regression algorithms, namely Random Forest, Decision Trees, XGBoost, and Artificial Neural Networks. Compared to comparable systems without feature selection, feature resampling, and optimization methods, the results demonstrate that employing optimized XGBoost with grid search, in conjunction with SelectKBest and the SMOTE strategy, yields greater performance, with an 89% R2 score. Our findings provide insight into the requirement for accurate forecasting models, optimization, and balanced approaches to enhancing traffic safety. These findings offer a viable way to improve traffic safety tactics. To the best of our knowledge, little attention has so far been paid to a fusion-based system that critically reviews machine learning techniques using grid search optimization, feature selection, and the SMOTE technique and examines how injury severity prediction is affected by road crashes.
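
A minimal sketch of the described combination using scikit-learn, imbalanced-learn, and xgboost. One assumption is worth flagging: SMOTE resamples class labels, so the sketch treats severity as discrete classes even though the paper reports R2; the parameter grids are placeholders:

```python
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

pipe = Pipeline([
    ("smote", SMOTE(random_state=42)),       # resampling only during fit
    ("select", SelectKBest(f_classif)),      # univariate feature selection
    ("xgb", XGBClassifier(eval_metric="logloss")),
])
grid = GridSearchCV(pipe, {
    "select__k": [10, 20, "all"],
    "xgb__max_depth": [3, 6],
    "xgb__n_estimators": [200, 400],
}, cv=5)
# grid.fit(X_train, y_train)  # X_train: crash features; y_train: severity classes
```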


Figures: workflow of the overall analysis; process of the keyword selection; standardized keyword occurrence frequency by design process.
Artificial Intelligence in Design Process: An Analysis Using Text Mining

January 2025 · 38 Reads

The progress of Artificial Intelligence (AI) offers modern designers opportunities to explore innovative design processes. In particular, generative AI that creates images and other content from text can contribute to creative processes in various design fields such as graphics, industrial design, UX, and fashion. However, there is a lack of comprehensive research on AI’s role and applications throughout the entire design process, and current papers often employ qualitative methods such as interviews and case studies. Therefore, this paper aims to quantitatively analyze experts’ views on AI’s utilization across the whole design process through text mining of the literature. The researchers selected 126 papers from scientific databases such as ScienceDirect and Web of Science and utilized the keyword matching method to extract the frequency of keywords for each stage of the design process – Research, Ideation, Mock-up, Production, and Evaluation. The findings indicate that AI is predominantly discussed in the later stages of design, particularly in the production process, while its use in the mock-up stage is perceived to be low. Additionally, distinct differences in AI use across design disciplines were identified: graphics focusing on ideation; UX on evaluation; and fashion on production.


Personalised Affective Classification Through Enhanced EEG Signal Analysis

January 2025 · 11 Reads

Background and Objectives: Declining mental health is a prominent and concerning issue. Affective classification, which employs machine learning on brain signals captured from electroencephalogram (EEG), is a prevalent approach to addressing this issue. However, many existing studies have adopted a one-size-fits-all approach, where data from multiple individuals are combined to create a single “generic” classification model. This overlooks individual differences and may not accurately capture the unique emotional patterns of each person. Methods: This study explored the performance of six machine learning algorithms in classifying a benchmark EEG dataset (collected with a MUSE device) for affective research. We replicated the best-performing models on the dataset found in the literature and present a comparative analysis of performance between existing studies and our personalised approach. We also adapted another EEG dataset (commonly called DEAP) to validate the personalised approach. Evaluation was based on accuracy and significance testing using McNemar statistics. Model runtime was also used as an efficiency metric. Results: The personalised approach consistently outperformed the generalised method across both datasets. McNemar’s test revealed significant improvements in all but one machine learning algorithm. Notably, the Decision Tree algorithm consistently excelled in the personalised mode, achieving an accuracy improvement of 0.85% (p < 0.001) on the MUSE dataset and a 4.30% improvement on the DEAP dataset, which was also statistically significant (p = 0.004). Both Decision Tree models were more efficient than their generalised counterparts, with efficiency gains of 1.270 s and 23.020 s on the MUSE and DEAP datasets, respectively. Conclusions: This research concludes that smaller, personalised models are a far more effective way of conducting affective classification, and this was validated with both small (MUSE) and large (DEAP) datasets consisting of EEG samples from 4 and 32 subjects, respectively.
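
A compact sketch of the paired comparison the abstract describes, assuming per-subject feature matrices are already extracted (the data handling and model settings are placeholders, not the paper's):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from statsmodels.stats.contingency_tables import mcnemar

def mcnemar_pvalue(y_true, pred_a, pred_b):
    """McNemar's test on paired predictions from two classification schemes."""
    a_ok, b_ok = pred_a == y_true, pred_b == y_true
    table = [[np.sum(a_ok & b_ok), np.sum(a_ok & ~b_ok)],
             [np.sum(~a_ok & b_ok), np.sum(~a_ok & ~b_ok)]]
    return mcnemar(table, exact=False, correction=True).pvalue

def personalised_predictions(subjects):
    """Personalised scheme: one decision tree per subject, each tested on
    that subject's held-out trials. `subjects` is a list of (X_tr, y_tr, X_te)."""
    preds = []
    for X_tr, y_tr, X_te in subjects:
        preds.append(DecisionTreeClassifier().fit(X_tr, y_tr).predict(X_te))
    return np.concatenate(preds)
```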


Discrete Wavelet Transform Sampling for Image Super Resolution

January 2025 · 6 Reads

In battlefield environments, drones depend on high-resolution imagery for critical tasks such as target identification and situational awareness. However, acquiring clear images of distant targets presents a significant challenge. To address this, we propose a supervised learning approach for image super-resolution. Our network architecture builds upon the U-Net framework, incorporating enhancements to the encoder and decoder through techniques such as Discrete Wavelet Transform, Channel Attention Residual Modules, Selective Kernel Feature Fusion, Weight Normalization, and Dropout. We evaluate our model on a super-resolution dataset and compare its performance against other networks, highlighting the importance of minimizing trainable parameters for real-time deployment on resource-constrained drone platforms. The efficacy of our proposed network is further validated through image recognition tasks and real-world scenario testing. By enhancing image clarity at extended ranges, our approach enables drones to detect adversaries earlier, facilitating proactive countermeasures and improving mission success rates.
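
For readers unfamiliar with the wavelet step, a minimal sketch using the PyWavelets package: one level of 2-D DWT splits an image into four half-resolution sub-bands that an encoder can consume in place of pooling, and the transform is invertible. The image here is dummy data:

```python
import numpy as np
import pywt

image = np.random.rand(256, 256)              # stand-in for an input frame
LL, (LH, HL, HH) = pywt.dwt2(image, "haar")   # approximation + detail bands
print(LL.shape)                               # (128, 128): half resolution
restored = pywt.idwt2((LL, (LH, HL, HH)), "haar")
print(np.allclose(restored, image))           # lossless reconstruction
```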


System Logs Anomaly Detection. Are we on the right path?

December 2024 · 16 Reads

System logs are universally used for monitoring user access, performance, and behavior in software applications. Large-scale industrial systems generate an immense volume of logs, which are difficult to handle with human capabilities. Therefore, an automated method is essential for filtering vast amounts of data. System log anomaly detection is crucial in the security field for identifying system failures, sophisticated internal attacks, and other deviations from the norm. This research area requires further development, as most Deep Learning solutions in the literature are semi-supervised. This poses a significant limitation since these solutions are impractical for large-scale ecosystems due to the high cost of labeling data. This paper introduces a method that replaces the supervised phase of semi-supervised methods with fully unsupervised heuristics, utilizing the elbow method, interquartile range, and Simulated Annealing. The unsupervised results are comparable to the semi-supervised State of the Art while demonstrating greater applicability in real-world applications. This work proposes a more suitable benchmark for the log anomaly outlier detection problem, where the training data include both normal and abnormal sequences and precede the test sessions in time. Additionally, it presents metrics on distinct log sequences to mitigate the impact of unbalanced anomaly types.
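
To illustrate one of the unsupervised heuristics named above, here is a minimal interquartile-range cutoff over per-sequence anomaly scores; the scores are dummy stand-ins for a reconstruction model's output:

```python
import numpy as np

errors = np.abs(np.random.randn(1000))   # reconstruction error per log sequence
q1, q3 = np.percentile(errors, [25, 75])
threshold = q3 + 1.5 * (q3 - q1)         # classic IQR outlier rule, no labels needed
anomalies = np.where(errors > threshold)[0]
print(f"{len(anomalies)} of {len(errors)} sequences flagged as anomalous")
```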


Figures: test environment design; performance comparison of ChatGPT 3.5, Gemini 1.5 Pro, and Claude 3.5 Sonnet.
Evaluating LLMs for Code Generation in HRI: A Comparative Study of ChatGPT, Gemini, and Claude

December 2024 · 61 Reads · 1 Citation

This study investigates the effectiveness of Large Language Models (LLMs) in generating code for Human-Robot Interaction (HRI) applications. We present the first direct comparison of ChatGPT 3.5, Gemini 1.5 Pro, and Claude 3.5 Sonnet in the specific context of generating code for Human-Robot Interaction applications. Through a series of 20 carefully designed prompts, ranging from simple movement commands to complex object manipulation scenarios, we evaluate the models’ ability to generate accurate and context-aware code. Our findings reveal significant variations in performance, with Claude 3.5 Sonnet achieving a 95% success rate, Gemini 1.5 Pro at 60%, and ChatGPT 3.5 at 20%. The study highlights the rapid advancement in LLM capabilities for specialized programming tasks while also identifying persistent challenges in spatial reasoning and adherence to specific constraints. These results suggest promising applications for LLMs in robotics development and education while emphasizing the continued need for human oversight and specialized training in AI-assisted programming for HRI.


Figures: flowchart and pseudo-code for the proposed SAIPS algorithm; 2010-2011 sales data of the Raba series product from a top-tier international IgC corporation listed in Taiwan; flow chart of ARIMA model establishment; prediction comparison of related algorithms on the Raba series sales sample.
Application of Immunological and Swarm Intelligence Learning-Based Algorithm for Industrial Grade Computer Sales Prediction

December 2024 · 15 Reads

This paper strives to raise the imitation effectiveness of the radial basis function-based neural network (RNNet) through biological learning (BL) and swarm intelligence (SI) optimization algorithms. Specifically, the artificial immune system (AIS) and particle swarm optimization (PSO) algorithms are utilized to regulate the RNNet. The proposed synthesis of AIS-inspired and PSO-inspired algorithms (SAIPS) incorporates their complementary exploration and exploitation abilities to reach optimized solutions. Its population-variation mechanism frequently escapes restrictive local optima to reach the global optimum, and it outperforms alternatives on five standard nonlinear trial functions. The experimental results show that the consolidation of AIS-inspired and PSO-inspired algorithms is an outstanding approach, and the proposed hybrid algorithm therefore aims to deliver the best precision among the related algorithms in this research. The algorithm is evaluated on five standard benchmark functions and an empirical industrial grade computer (IgC) sales prediction case in Taiwan, which reveals that the proposed SAIPS algorithm outperforms the related algorithms, as well as the relevant auto-regressive integrated moving average (ARIMA) models, in terms of accuracy and time spent.
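
For context, here is the canonical particle swarm update that the PSO half of such a hybrid builds on (the AIS half typically adds immune-style cloning and mutation of promising particles); the constants are illustrative defaults:

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update: inertia plus stochastic pulls toward
    each particle's personal best and the swarm's global best."""
    r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```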


Image Segmentation Deep Learning Model for Early Detection of Banana Diseases

December 2024 · 71 Reads

Bananas are among the most widely produced perennial fruits and staple food crops, and they are highly affected by numerous diseases. Fusarium Wilt and Black Sigatoka are two of the most detrimental banana diseases in East Africa; when not managed early, they result in production losses of 30% to 100%. Early detection of these banana diseases is necessary for designing proper management practices to avoid further yield and financial losses. The recent advances and successes of deep learning in detecting plant diseases have inspired this study. This study assessed a U-Net semantic segmentation deep learning model for the early detection and segmentation of Fusarium Wilt and Black Sigatoka banana diseases. The model was trained using 18,240 banana leaf and stalk images affected by these two diseases. The dataset was collected from farms using mobile phone cameras under the guidance of agricultural experts and was annotated to label the images. The results showed that the U-Net model achieved a Dice Coefficient of 96.45% and an Intersection over Union (IoU) of 93.23%. The model accurately segmented areas where the banana leaves and stalks were damaged by Fusarium Wilt and Black Sigatoka diseases.
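
The two reported metrics are straightforward to compute from binary masks; a minimal sketch (the masks below are random dummy arrays, not model output):

```python
import numpy as np

def dice_iou(pred, target, eps=1e-7):
    """Dice coefficient and intersection-over-union for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    return dice, (inter + eps) / (union + eps)

pred = np.random.rand(256, 256) > 0.5    # stand-in for a U-Net output mask
target = np.random.rand(256, 256) > 0.5  # stand-in for an expert annotation
print(dice_iou(pred, target))
```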


Fractal Neural Network Approach for Analyzing Satellite Images

December 2024 · 69 Reads

Satellites play a critical role in modern technology by providing images for various applications, such as detecting infrastructure and assessing environmental impacts. The author’s work investigates the application of Fractal Neural Networks (FractalNet) for automating the detection of specific objects in satellite images. The study aims to improve processing speed and accuracy compared to traditional Convolutional Neural Networks (CNNs). The research involves developing and comparing FractalNet with CNNs, focusing on their effectiveness in image classification. The architecture of FractalNet, characterized by recursive structures and deep layers, is evaluated against CNNs like VGG16 and ResNet50. Data collection included manually gathering high-resolution satellite images of specific objects from Google Earth. The neural network models were trained and tested with varying hyperparameters, including learning rates and batch sizes. FractalNet demonstrated superior performance over CNNs, particularly in deep network configurations. The results improved significantly with data augmentation and optimized hyperparameters, achieving a test accuracy of up to 93.26% with a 32-layer model. Fractal neural networks offer a promising approach for automating satellite image analysis, providing better accuracy and robustness compared to traditional CNN architectures.


Figures: data security sharing model based on FL and blockchain; reputation evaluation and update flow chart; reputation values under different reputation schemes.
A Blockchain-Integrated Federated Learning Approach for Secure Data Sharing and Privacy Protection in Multi-Device Communication

December 2024 · 13 Reads

The secure transmission of communication data between different devices still faces numerous potential challenges, such as data tampering, data integrity, network attacks, and the risks of information leakage or forgery. This paper proposes a blockchain-integrated federated learning approach that aims to handle the distributed trust issues of federated learning users and update data states rapidly. By modeling multi-source devices through federated learning, the model parameters and reputation values of participating devices are stored on the blockchain. The method incorporates factors such as experience, familiarity, and timeliness to gather reliable information about nodes more quickly and assess their behavior. Simulation results on the MNIST dataset show that when the proportion of selfish nodes is below 50%, the convergence time increases with the proportion of selfish nodes. Compared to advanced algorithms, the proposed model saves approximately 6% of interaction time. As the number of transactions increases significantly, the system’s TPS (Transactions Per Second) decreases, with an average TPS of only 3,079.35 when the maximum number of transactions is 4,000. The proposed scheme can screen for high-quality data sources during real-time dynamic data exchange, enhancing the accuracy of federated learning training and ensuring privacy security.


A Typical Infrared Background Radiation Prediction Model Based on RF-VMD and Optimized Hybrid Neural Network

December 2024 · 8 Reads

The short-term prediction and adjustment of a target’s infrared radiation hold significant value in military camouflage applications. Existing radiation prediction models generally require real-time environmental and meteorological data support, resulting in lag in active camouflage. To meet the demand for active camouflage of background infrared (IR) radiation, a short-term background IR radiation prediction method based on historical data is proposed. First, a random forest (RF) is used to filter the collected multidimensional meteorological parameters. Variational mode decomposition (VMD) is applied for time-frequency analysis on these parameters, optimizing them with Bayesian algorithms and decomposing them into multivariate intrinsic mode functions (IMFs) with similar frequencies to reduce the impact of nonlinearity in the data. Based on the superimposed IMFs as inputs, a hybrid deep neural network prediction model is established. The model optimizes the CNN-LSTM network with residual connections and introduces a multi-head self-attention mechanism to enhance spatiotemporal feature extraction of the multidimensional meteorological parameters, focusing on key temporal feature regions. According to the experimental results, the constructed model demonstrates high prediction accuracy and adaptability across different background environments, with a low parameter count and fast prediction capability, meeting the practical application needs for various complex backgrounds.


Enhancing the Recognition of Collinear Building Patterns by Shape Cognition Based on Graph Neural Networks

December 2024 · 17 Reads

Building patterns are important components of urban structures and functions, and their accurate recognition is the foundation of urban spatial analysis, cartographic generalization, and other tasks. Current building pattern recognition methods are often based on a shape index that can only characterize shape features from one aspect, resulting in significant errors. In this study, a building pattern recognition method based on a graph neural network is proposed to enhance shape cognition, focusing on recognizing collinear patterns. First, a building shape classification model that integrates global shape and graph node structure features was constructed to quantitatively study shape cognition. Subsequently, a collinear pattern recognition (CPR) model was established based on a dual building graph, and the shape cognition results were integrated into the model to enhance its recognition ability. The results show that the shape classification model can effectively distinguish different shape categories and support building pattern recognition tasks. Based on the CPR model, false recognitions can be avoided, and recognition results similar to those of visual cognition can be obtained. Compared with existing methods, both models have significant advantages in terms of statistical results and implementation.


Figures: IHML (Incremental Heuristic Meta-Learner) overview; SHAP values for features on the Higgs boson data; elbow method (EM) process to determine α.
IHML: Incremental Heuristic Meta-Learner

December 2024 · 19 Reads

The landscape of machine learning constantly demands innovative approaches to enhance algorithms’ performance across diverse tasks. Meta-learning, known as “learning to learn,” is a promising way to overcome these diversity challenges by blending multiple algorithms. This study introduces IHML, the Incremental Heuristic Meta-Learner, a novel meta-learning algorithm for classification tasks. By leveraging a variety of base-learners with distinct learning dynamics, such as Gaussian, tree-based, and instance-based learners, IHML offers a comprehensive solution adaptable to different data characteristics. Moreover, the core contribution of IHML lies in its mechanism for determining the optimal base-learners and feature sets with the help of Explainable Artificial Intelligence (XAI) and the heuristic elbow method. Existing work in this context mostly uses XAI for pre-processing the data or post-analysis of the results; IHML, however, incorporates XAI into the learning process in an iterative manner and improves the prediction performance of the meta-learner. To observe the performance of the proposed IHML, we used five different datasets from astrophysics, physics, biology, e-commerce, and economics. The results show that the proposed model achieves higher accuracy (improvements of 10% on average and up to 71%) compared to the baseline machine learning models in the literature.
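
A rough sketch of folding SHAP-based feature ranking into an iterative selection loop, in the spirit of what the abstract describes; the model, data, and stopping rule are placeholders, and the shape of `shap_values` output varies across shap versions:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

def shap_feature_ranking(model, X):
    """Mean |SHAP value| per feature from a fitted tree model."""
    values = shap.TreeExplainer(model).shap_values(X)
    if isinstance(values, list):       # older shap API: one array per class
        values = values[1]
    values = np.asarray(values)
    if values.ndim == 3:               # newer API: (samples, features, classes)
        values = values[..., 1]
    return np.abs(values).mean(axis=0)

def iterative_selection(X, y, keep_min=3):
    """Drop the weakest feature each round; a real implementation would
    stop based on a validation score rather than a fixed feature count."""
    features = list(range(X.shape[1]))
    while len(features) > keep_min:
        model = RandomForestClassifier(n_estimators=100).fit(X[:, features], y)
        ranks = shap_feature_ranking(model, X[:, features])
        features.pop(int(np.argmin(ranks)))
    return features
```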


Figures: conceptual framework; stages involved in the paper selection process.
Human-Artificial Intelligence in Management Functions: A Synergistic Symbiosis Relationship

December 2024 · 34 Reads

This review paper aims to investigate how the mutual interaction of artificial intelligence (AI) and human intelligence (HI) affects management functions. To achieve this, we use a question-based approach and a systematic literature review to elucidate the potential for AI and HI to interact and create a mutually beneficial symbiotic effect in management functions. We underscore the main issues that organizations must consider when transitioning to AI management. Specifically, in this review paper, we highlight the interaction between AI and HI; the investigation of this relationship in management functions such as planning and decision-making, organizing, leading, and controlling; the mutually beneficial impact of this symbiotic relationship in management; and possible ethical dilemmas. The paper concludes by identifying gaps in the existing literature, providing practical advice on integrating AI into various management functions, and exploring approaches that highlight specific areas requiring attention in future research.


Artificial Intelligence in Cybersecurity: A Comprehensive Review and Future Direction

December 2024 · 216 Reads

As cybercrimes are becoming increasingly complex, it is imperative for cybersecurity measures to become more robust and sophisticated. The crux lies in extracting patterns or insights from cybersecurity data to build data-driven models, thus making security systems automated and intelligent. To comprehend and analyze cybersecurity data, several Artificial Intelligence (AI) methods, such as Machine Learning (ML) techniques, are employed to monitor network environments and actively combat cyber threats. This study explored the various AI techniques and how they are applied in cybersecurity. A comprehensive literature review was conducted, including a bibliometric analysis and a systematic literature review following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Using data extracted from two main scholarly databases, Clarivate’s Web of Science (WoS) and Scopus, this article examines relevant academic literature to understand the diverse ways in which AI techniques are employed to strengthen cybersecurity measures. These applications range from anomaly detection and threat identification to predictive analytics and automated incident response. A total of 14,509 peer-reviewed research papers were identified, of which 9,611 were from the Scopus database and 4,898 from the WoS database. These research papers were further filtered, and a total of 939 relevant papers were eventually selected and used. The review offers insights into the effectiveness, challenges, and emerging trends in utilizing AI for cybersecurity purposes.


Journal metrics


Journal Impact Factor™: 2.9 (2023)

Acceptance rate: 20%

CiteScore™: 5.2 (2023)

Submission to first decision: 40 days

SNIP: 0.989 (2023)

SJR: 0.598 (2023)

Editors