304 reads in the past 30 days
Towards Safer Cities: AI-Powered Infrastructure Fault Detection Based on YOLOv11 · April 2025 · 304 Reads
Published by MDPI
Online ISSN: 1999-5903
232 reads in the past 30 days
Large Language Models Meet Next-Generation Networking Technologies: A Review · October 2024 · 1,374 Reads · 19 Citations
224 reads in the past 30 days
A Review of ARIMA vs. Machine Learning Approaches for Time Series Forecasting in Data Driven Networks · July 2023 · 2,252 Reads · 151 Citations
207 reads in the past 30 days
Teamwork Conflict Management Training and Conflict Resolution Practice via Large Language Models · May 2024 · 2,348 Reads · 11 Citations
191 reads in the past 30 days
An Overview of WebAssembly for IoT: Background, Tools, State-of-the-Art, Challenges, and Future Directions · August 2023 · 5,327 Reads · 29 Citations
Future Internet (ISSN 1999-5903) is a scholarly open access journal that provides an advanced forum for science and research concerned with the evolution of Internet technologies and related smart systems for "Net-Living" development. Its general reference subject is the evolution towards the future internet ecosystem, which is driving a continuous, intensive transformation of the lived environment, aimed at a widespread and significant improvement of well-being in all spheres of human life (private, public, and professional).
Scope
• Advanced communications network infrastructures
• Evolution of Internet basic services
• Internet of Things
• Industrial internet
• Centralized and distributed data centers
• Embedded computing
• Cloud computing
• Software-defined network functions and network virtualization
• Cloudlet and fog computing
• Cyber-physical systems
• Network and distributed operating systems
• Smart city
• Artificial and augmented intelligence
• Smart systems and applications
• Net-living human factors and quality of life enhancement
• Human-computer interaction and usability
• Cyber security compliance
• Quality of experience
• Big data, open data, and analytical tools
May 2025
Alexander Gegov
·
Boriana Vatchova
·
Yordanka Boneva
·
Alexandar Ichtev
Computer-aided transport modelling is essential for testing different control strategies for traffic lights. One approach to modelling traffic control is to define heuristic fuzzy rules for traffic-light control and apply them to a network of hierarchically dependent crossroads. In this paper, such a network is investigated by modelling its geometry in the simulation environment Aimsun. The simulation is based on real-world traffic data and is used together with the MATLAB R2019a Fuzzy Toolbox. The paper focuses on the development of a network of intersections, four fuzzy models, and the behaviour of these models on the investigated intersections. The transport network consists of four intersections. The novelty of the proposed approach lies in the application of heuristic fuzzy rules to the modelling and control of traffic flow through these intersections. The motivation for this approach is to address inherent uncertainties using a fuzzy method and to analyse its main findings relative to a classical deterministic approach.
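The fuzzy-control idea behind this abstract can be sketched in a few lines: membership functions fuzzify a crisp input (here, queue length on an approach) and the weighted rule consequents are defuzzified into a control output (a green-phase extension). The breakpoints and consequent values below are hypothetical placeholders, not the paper's MATLAB rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def green_extension(queue):
    # Fuzzify: degree to which the queue is "short", "medium", or "long"
    # (breakpoints are illustrative, not taken from the paper).
    short = tri(queue, -1, 0, 10)
    medium = tri(queue, 5, 15, 25)
    long_ = tri(queue, 20, 30, 41)
    # Heuristic rules: short -> 5 s, medium -> 15 s, long -> 30 s extension,
    # defuzzified as a weighted average of rule consequents.
    num = short * 5 + medium * 15 + long_ * 30
    den = short + medium + long_
    return num / den if den else 0.0
```

A queue of 15 vehicles, for example, fires only the "medium" rule and yields the full 15 s extension; intermediate queues blend adjacent rules smoothly.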
May 2025
·
3 Reads
Navneet Kaur
·
Lav Gupta
The rapid integration of the Internet of Medical Things (IoMT) is transforming healthcare through real-time monitoring, AI-driven diagnostics, and remote treatment. However, the growing reliance on IoMT devices, such as robotic surgical systems, life-support equipment, and wearable health monitors, has expanded the attack surface, exposing healthcare systems to cybersecurity risks like data breaches, device manipulation, and potentially life-threatening disruptions. While 6G networks offer significant benefits for healthcare, such as ultra-low latency, extensive connectivity, and AI-native capabilities, as highlighted in the ITU 6G (IMT-2030) framework, they are expected to introduce new and potentially more severe security challenges. These advancements put critical medical systems at greater risk, highlighting the need for more robust security measures. This study leverages AI techniques to systematically identify security vulnerabilities within 6G-enabled healthcare environments. Additionally, the proposed approach strengthens AI-driven security through the use of multiple XAI techniques cross-validated against each other. Drawing on the insights provided by XAI, we tailor our mitigation strategies to the ITU-defined 6G usage scenarios, with a focus on their applicability to medical IoT networks. We propose that these strategies will effectively address potential vulnerabilities and enhance the security of medical systems leveraging IoT and 6G networks.
May 2025
·
4 Reads
Lingfeng Shen
·
Jiangtao Nie
·
Ming Li
·
[...]
·
Xin He
This study concentrates on physical layer security (PLS) in UAV-aided Internet of Things (IoT) networks and proposes an innovative approach to enhance security by optimizing the trajectory of unmanned aerial vehicles (UAVs). In an IoT system with multiple eavesdroppers, formulating the optimal UAV trajectory poses a non-convex and non-differentiable optimization challenge. The paper utilizes the successive convex approximation (SCA) method in conjunction with hypograph theory to address this challenge. First, a set of trajectory increment variables is introduced to replace the original UAV trajectory coordinates, thereby converting the original non-convex problem into a sequence of convex subproblems. Subsequently, hypograph theory is employed to convert these non-differentiable subproblems into standard convex forms, which can be solved using the CVX toolbox. Simulation results demonstrate the UAV’s trajectory fluctuations under different parameters, affirming that trajectory optimization significantly improves PLS performance in IoT systems.
May 2025
·
2 Reads
The accurate identification of look-alike medical vials is essential for patient safety, particularly when similar vials contain different substances, volumes, or concentrations. Traditional methods, such as manual selection or barcode-based identification, are prone to human error or face reliability issues under varying lighting conditions. This study addresses these challenges by introducing a real-time deep learning-based vial identification system, leveraging a Lightweight YOLOv4 model optimized for edge devices. The system is integrated into a Mixed Reality (MR) environment, enabling the real-time detection and annotation of vials with immediate operator feedback. Compared to standard barcode-based methods and the baseline YOLOv4-Tiny model, the proposed approach improves identification accuracy while maintaining low computational overhead. The experimental evaluations demonstrate a mean average precision (mAP) of 98.76%, with an inference speed of 68 milliseconds per frame on HoloLens 2, achieving real-time performance. The results highlight the model’s robustness in diverse lighting conditions and its ability to mitigate misclassifications of visually similar vials. By combining deep learning with MR, this system offers a more reliable and efficient alternative for pharmaceutical and medical applications, paving the way for AI-driven MR-assisted workflows in critical healthcare environments.
May 2025
·
5 Reads
With an increasing number of illegal radio stations, connected cars, and IoT devices, high-accuracy radio source localization techniques are in demand. Traditional methods such as GPS positioning and triangulation suffer from accuracy degradation in NLOS (non-line-of-sight) environments due to obstructions. In contrast, the fingerprinting method builds a database of pre-collected radio information and estimates the source location via pattern matching, maintaining relatively high accuracy in NLOS environments. This study aims to improve the accuracy of fingerprinting-based localization by optimizing UAV flight paths. Previous research mainly relied on RSSI-based localization, but we introduce an AOA model considering AOA (angle of arrival) and EOA (elevation of arrival), as well as a HYBRID model that integrates multiple radio features with weighting. Using Wireless Insite, we conducted ray-tracing simulations based on the Institute of Science Tokyo’s Ookayama campus and optimized UAV flight paths with PSO (Particle Swarm Optimization). Results show that the HYBRID model achieved the highest accuracy, limiting the maximum error to 20 m. Sequential estimation improved accuracy for high-error sources, particularly when RSSI was used first, followed by AOA or HYBRID. Future work includes estimating unknown frequency sources, refining sequential estimation, and implementing cooperative localization.
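The flight-path optimization in this abstract rests on Particle Swarm Optimization, which can be illustrated compactly: a swarm of candidate solutions moves through the search space, pulled toward each particle's own best position and the swarm's global best. The cost function below is a toy 2-D error surface whose minimum is a hypothetical source position; the paper's actual objective (ray-traced fingerprint error over UAV waypoints) is far richer.

```python
import random

def pso(cost, dim=2, n=20, iters=200, lo=-50.0, hi=50.0, seed=0):
    """Minimal PSO: inertia 0.7, cognitive/social coefficients 1.5 (common defaults)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                       # each particle's best position
    pcost = [cost(p) for p in pos]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]              # swarm-wide best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

# Toy cost: squared distance to a hypothetical source at (12, -7).
source = (12.0, -7.0)
best, err = pso(lambda p: (p[0] - source[0]) ** 2 + (p[1] - source[1]) ** 2)
```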
May 2025
·
17 Reads
Smart cities are widely regarded as a promising solution to urbanization challenges; however, environmental aspects such as outdoor thermal comfort and the urban heat island are often less addressed than the social and economic dimensions of sustainability. To address this gap, we developed and evaluated an affordable, scalable weather station platform, consisting of a centralized server and portable edge devices, to facilitate urban heat island and outdoor thermal comfort studies. The edge device is designed in accordance with the ISO 7726 (1998) standards and further enhanced with a positioning system. The device can regularly log parameters such as air temperature, relative humidity, globe temperature, wind speed, and geographical coordinates. Strategic selection of components allowed for a low-cost device that can perform data manipulation and pre-processing, store the data, and exchange data with a centralized server via the internet. The centralized server facilitates scalability, processing, storage, and live monitoring of data acquisition processes. The edge devices’ electrical and shielding design was evaluated against a commercial weather station, showing Mean Absolute Error and Root Mean Square Error values of 0.1 and 0.33, respectively, for air temperature. Further, empirical test campaigns were conducted under two scenarios: “stop-and-go” and “on-the-move”. These tests provided insight into the transition and response times required for urban heat island and thermal comfort studies, and evaluated the platform’s overall performance, validating it for nuanced human-scale thermal comfort, urban heat island, and bio-meteorological studies.
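The validation metrics named in this abstract are standard: MAE is the mean of absolute deviations between the device and the reference, RMSE the square root of the mean squared deviation. The sample readings below are made-up illustrations, not the paper's data (the paper reports air-temperature errors of 0.1 and 0.33).

```python
import math

def mae(pred, ref):
    """Mean Absolute Error between paired readings."""
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(ref)

def rmse(pred, ref):
    """Root Mean Square Error; penalizes large deviations more than MAE."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref))

# Hypothetical air-temperature readings (edge device vs. commercial reference).
device =    [21.1, 22.4, 23.0, 24.2]
reference = [21.0, 22.5, 23.2, 24.0]
```

Because RMSE squares the residuals before averaging, an RMSE well above the MAE (as in the paper's 0.33 vs. 0.1) indicates a few comparatively large deviations rather than uniform error.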
May 2025
·
11 Reads
The rapid expansion of network environments has introduced significant cybersecurity challenges, particularly in handling high-dimensional traffic and detecting sophisticated threats. This study presents a novel, scalable Hybrid Autoencoder–Extreme Learning Machine (AE–ELM) framework for Intrusion Detection Systems (IDS), specifically designed to operate effectively in dynamic, cloud-supported IoT environments. The scientific novelty lies in the integration of an Autoencoder for deep feature compression with an Extreme Learning Machine for rapid and accurate classification, enhanced through adaptive thresholding techniques. Evaluated on the CSE-CIC-IDS2018 dataset, the proposed method demonstrates a high detection accuracy of 98.52%, outperforming conventional models in terms of precision, recall, and scalability. Additionally, the framework exhibits strong adaptability to emerging threats and reduced computational overhead, making it a practical solution for real-time, scalable IDS in next-generation network infrastructures.
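The ELM half of the hybrid framework above is simple enough to sketch: hidden-layer weights are drawn at random and frozen, and only the output layer is solved in closed form by least squares, which is what makes ELM training fast. The autoencoder front end and the CSE-CIC-IDS2018 features are omitted here; the XOR-style toy task is ours, purely to show that the random hidden layer makes a non-linearly-separable problem solvable.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, hidden=64):
    """Train an ELM: random fixed hidden layer + least-squares output weights."""
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                         # random nonlinear features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # closed-form output layer
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# XOR labels: not linearly separable, yet trivial for the random feature map.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
model = elm_fit(X, y)
pred = (elm_predict(model, X) > 0.5).astype(float)
```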
May 2025
·
4 Reads
In complex indoor and outdoor scenarios, traditional GPS-based ranging technology faces limitations in availability due to signal occlusion and user privacy issues. Wireless signal ranging technology based on 5G base stations has emerged as a potential alternative. However, existing methods are limited by low efficiency in constructing static signal databases, poor environmental adaptability, and high resource overhead, restricting their practical application. This paper proposes a 5G wireless signal ranging framework that integrates mobile edge computing (MEC) and crowdsourced intelligence to systematically address the aforementioned issues. This study designs a progressive solution by (1) building a crowdsourced data collection network, using mobile terminals equipped with GPS technology to automatically collect device signal features, replacing inefficient manual drive tests; (2) developing a progressive signal update algorithm that integrates real-time crowdsourced data and historical signals to optimize the signal fingerprint database in dynamic environments; (3) establishing an edge service architecture to offload signal matching and trajectory estimation tasks to MEC nodes, using lightweight computing engines to reduce the load on the core network. Experimental results demonstrate a mean positioning error of 5 m, with 95% of devices achieving errors within 10 m, as well as building and floor prediction error rates of 0.5% and 1%, respectively. The proposed framework outperforms traditional static methods by 3× in ranging accuracy while maintaining computational efficiency, achieving significant improvements in environmental adaptability and service scalability.
May 2025
·
13 Reads
In the original publication [...]
May 2025
·
2 Reads
Container technology is currently one of the mainstream technologies in the field of cloud computing, yet its adoption in resource-constrained, latency-sensitive edge environments introduces unique security challenges. While existing system call-based anomaly-detection methods partially address these issues, they suffer from high false positive rates and excessive computational overhead. To achieve security and observability in edge-native containerized environments and lower the cost of computing resources, we propose an unsupervised anomaly-detection method based on system calls. This method filters out unnecessary system call data through automatic rule generation and an unsupervised classification model. To increase the accuracy of anomaly detection and reduce false positive rates, the method embeds system calls into sequences using the proposed Syscall2vec and processes the remaining sequences to facilitate the anomaly-detection model’s analysis. We conduct experiments on workloads representative of modern containerized cloud microservices. The results show that the detection part of our method improves the F1 score by 23.88% and 41.31% as compared to HIDS and LSTM-VAE, respectively. Moreover, our method reduces the data to be processed to 13% of the original volume, significantly lowering the cost of computing resources.
May 2025
·
18 Reads
Smart cities are urban areas that use contemporary technology to improve citizens’ overall quality of life. These modern digital civil hubs aim to manage environmental conditions, traffic flow, and infrastructure through interconnected and data-driven decision-making systems. Today, many applications employ intelligent sensors for real-time data acquisition, leveraging visualization to derive actionable insights. However, despite the proliferation of such platforms, challenges like high data volume, noise, and incompleteness continue to hinder practical visual analysis. As missing data is a frequent issue in visualizing such urban sensing systems, our approach prioritizes its correction as a fundamental step. To address this, we deploy a hybrid imputation strategy combining SARIMAX, k-nearest neighbors, and random forest regression. Building on this foundation, we propose an interactive web-based pipeline that processes, analyzes, and presents the sensor data provided by Basel’s “Smarte Strasse”. Our platform receives and projects environmental measurements, i.e., NO2, O3, PM2.5, and traffic noise, as well as mobility indicators such as vehicle speed and type, parking occupancy, and electric vehicle charging behavior. By resolving gaps in the data, we provide a solid foundation for high-fidelity visual analytics. Built on the Flask web framework, the platform incorporates performance optimizations through Flask-Caching. The user dashboard supports interactive exploration via dynamic charts and spatial maps. In this way, we demonstrate how future internet technologies make complex urban sensor data accessible for research, planning, and public engagement. Lastly, our open-source web-based application supports reproducible, privacy-aware urban analytics.
May 2025
·
2 Reads
Network Intrusion Detection Systems (NIDS) often suffer from severe class imbalance, where minority attack types are underrepresented, leading to degraded detection performance. To address this challenge, we propose a novel augmentation framework that integrates Soft Nearest Neighbor Loss (SNNL) into Generative Adversarial Networks (GANs), including WGAN, CWGAN, and WGAN-GP. Unlike traditional oversampling methods (e.g., SMOTE, ADASYN), our approach improves feature-space alignment between real and synthetic samples, enhancing classifier generalization on rare classes. Experiments on NSL-KDD, CSE-CIC-IDS2017, and CSE-CIC-IDS2018 show that SNNL-augmented GANs consistently improve minority-class F1-scores without degrading overall accuracy or majority-class performance. UMAP visualizations confirm that SNNL produces more compact and class-consistent sample distributions. We also evaluate the computational overhead, finding the added cost moderate. These results demonstrate the effectiveness and practicality of SNNL as a general enhancement for GAN-based data augmentation in imbalanced NIDS tasks.
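The Soft Nearest Neighbor Loss at the heart of this framework measures, per sample, how much of the similarity mass in a batch falls on same-class neighbors: tight, class-pure clusters give low loss. A small NumPy sketch on a toy 2-D batch (our own illustrative points and temperature, not the paper's features) makes the behavior concrete; in the paper this term is added to the GAN objective to pull synthetic minority samples toward real ones.

```python
import numpy as np

def snnl(X, y, T=1.0):
    """Soft Nearest Neighbor Loss: -mean log(same-class similarity fraction)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    sim = np.exp(-d2 / T)
    np.fill_diagonal(sim, 0.0)                            # exclude self-pairs
    same = (y[:, None] == y[None, :]).astype(float)
    frac = (sim * same).sum(1) / sim.sum(1)               # same-class mass per row
    return -np.log(frac).mean()

# Two tight, well-separated clusters: class-consistent labels give low loss,
# mixed labels across the clusters give high loss.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
tight = snnl(X, np.array([0, 0, 1, 1]))
mixed = snnl(X, np.array([0, 1, 0, 1]))
```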
May 2025
·
6 Reads
Maintaining optimal microclimatic conditions within greenhouses represents a significant challenge in modern agricultural contexts, where prediction systems play a crucial role in regulating temperature and humidity, thereby enabling timely interventions to prevent plant diseases or adverse growth conditions. In this work, we propose a novel approach which integrates a cascaded Feed-Forward Neural Network (FFNN) with the Granular Computing paradigm to achieve accurate microclimate forecasting and reduced computational complexity. The experimental results demonstrate that the accuracy of our approach is the same as that of the FFNN-based approach but the complexity is reduced, making this solution particularly well suited for deployment on edge devices with limited computational capabilities. Our innovative approach has been validated using a real-world dataset collected from four greenhouses and integrated into a distributed network architecture. This setup supports the execution of predictive models both on sensors deployed within the greenhouse and at the network edge, where more computationally intensive models can be utilized to enhance decision-making accuracy.
May 2025
·
3 Reads
Artificial neural networks (ANNs) are increasingly effective in addressing complex scientific and technological challenges. However, challenges persist in synthesizing neural network models and defining their structural parameters. This study investigates the use of parallel evolutionary algorithms on distributed computing systems (DCSs) to optimize energy consumption and computational time. New mathematical models for DCS performance and reliability are proposed, based on a mass service system framework, along with a multi-criteria optimization model designed for resource-intensive computational problems. This model employs a multi-criteria GA to generate a diverse set of Pareto-optimal solutions. Additionally, a decision-support system is developed, incorporating the multi-criteria GA, allowing for customization of the genetic algorithm (GA) and the construction of specialized ANNs for specific problem domains. The application of the decision-support system (DSS) demonstrated performance of 1220.745 TFLOPS and an availability factor of 99.03%. These findings highlight the potential of the proposed DCS framework to enhance computational efficiency in relevant applications.
May 2025
·
14 Reads
The Industrial Internet of Things (IIoT) integrates sensors, machines, and data processing in industrial facilities to enable real-time monitoring, predictive insights, and autonomous control of equipment [...]
May 2025
·
10 Reads
During the COVID-19 pandemic, social media platforms emerged as both vital information sources and conduits for the rapid spread of propaganda and misinformation. However, existing studies often rely on single-label classification, lack contextual sensitivity, or use models that struggle to effectively capture nuanced propaganda cues across multiple categories. These limitations hinder the development of robust, generalizable detection systems in dynamic online environments. In this study, we propose a novel deep learning (DL) framework grounded in fine-tuning the RoBERTa model for a multi-label, multi-class (ML-MC) classification task, selecting RoBERTa due to its strong contextual representation capabilities and demonstrated superiority in complex NLP tasks. Our approach is rigorously benchmarked against traditional and neural methods, including TF-IDF with n-grams, Conditional Random Fields (CRFs), and long short-term memory (LSTM) networks. While LSTM models show strong performance in capturing sequential patterns, our RoBERTa-based model achieves the highest overall accuracy at 88%, outperforming state-of-the-art baselines. Framed within the diffusion of innovations theory, the proposed model offers clear relative advantages—including accuracy, scalability, and contextual adaptability—that support its early adoption by Information Systems researchers and practitioners. This study not only contributes a high-performing detection model but also delivers methodological and theoretical insights for combating propaganda in digital discourse, enhancing resilience in online information ecosystems.
May 2025
·
10 Reads
A high level of data integrity is a strong requirement in systems where people’s lives depend on accurate and timely responses. In healthcare emergency response systems, a centralized authority that handles data related to occurring events is prone to challenges such as disputes over event timestamps and data authenticity. To address both the potential lack of trust among collaborating parties and the inability of an authority to clearly certify events by itself, this paper proposes a blockchain-based framework designed to provide proof of integrity and authenticity of data in healthcare emergency response systems. The proposed solution integrates blockchain technology to certify the accuracy of events throughout their incident lifecycle. Critical events are timestamped and hashed using SHA-256; the resulting hashes are then stored immutably on an EVM-compatible blockchain via smart contracts. The system combines blockchain technology with cloud storage to ensure scalability, security, and transparency. Blockchain technology offers the advantage of eliminating a trusted timestamping server, reducing costs by forgoing such a service. The experimental results, using publicly available incident data, demonstrated the feasibility and effectiveness of this approach. The system provides a cost-effective, scalable solution for managing incident data while maintaining proof of their integrity. The proposed blockchain-based framework offers a reliable, transparent mechanism for certifying incident-related data, fostering trust among healthcare emergency response system actors.
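The hashing step described above can be sketched directly: the event payload is combined with a timestamp, serialized canonically, and digested with SHA-256, so that only the 32-byte hash would need to go on-chain. The smart-contract call itself is omitted, and the field names below are illustrative, not the paper's schema.

```python
import hashlib
import json

def event_digest(event: dict, timestamp: int) -> str:
    """SHA-256 digest of a timestamped event, suitable for on-chain anchoring."""
    # Canonical JSON (sorted keys) so the same event always hashes identically.
    payload = json.dumps({"event": event, "ts": timestamp}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Hypothetical incident record and a fixed example timestamp (seconds since epoch).
incident = {"id": "INC-001", "type": "cardiac", "unit": "EMS-7"}
ts = 1714500000
digest = event_digest(incident, ts)
```

Anyone holding the original payload and timestamp can recompute the digest and compare it with the on-chain copy; any change to either input (even a one-second timestamp shift) produces a different hash.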
May 2025
·
14 Reads
DNS over HTTPS (DoH) is an advanced version of the traditional DNS protocol that prevents eavesdropping and man-in-the-middle attacks by encrypting queries and responses. However, it introduces new challenges such as encrypted traffic communication, masking malicious activity, tunneling attacks, and complicating intrusion detection system (IDS) packet inspection. In contrast, unencrypted packets in the traditional Non-DoH version remain vulnerable to eavesdropping, privacy breaches, and spoofing. To address these challenges, an optimized dual-path feature selection approach is designed to select the most efficient packet features for binary class (DoH-Normal, DoH-Malicious) and multiclass (Non-DoH, DoH-Normal, DoH-Malicious) classification. Ant Colony Optimization (ACO) is integrated with machine learning algorithms such as XGBoost, K-Nearest Neighbors (KNN), Random Forest (RF), and Convolutional Neural Networks (CNNs) using CIRA-CIC-DoHBrw-2020 as the benchmark dataset. Experimental results show that the proposed model selects the most effective features for both scenarios, achieving the highest detection and outperforming previous studies in IDS. The highest accuracy obtained for binary and multiclass classifications was 0.9999 and 0.9955, respectively. The optimized feature set contributed significantly to reducing computational costs and processing time across all utilized classifiers. The results provide a robust, fast, and accurate solution to challenges associated with encrypted DNS packets.
May 2025
·
6 Reads
The study investigates how adversarial training techniques can be used to introduce backdoors into deep learning models by an insider with privileged access to training data. The research demonstrates an insider-driven poison-label backdoor approach in which triggers are introduced into the training dataset. These triggers misclassify poisoned inputs while maintaining standard classification on clean data. An adversary can improve the stealth and effectiveness of such attacks by utilizing XAI techniques, which makes the detection of such attacks more difficult. The study uses publicly available datasets to evaluate the robustness of the deep learning models in this situation. Our experiments show that adversarial training considerably reduces backdoor attacks. These results are verified using various performance metrics, revealing model vulnerabilities and possible countermeasures. The findings demonstrate the importance of robust training techniques and effective adversarial defenses to improve the security of deep learning models against insider-driven backdoor attacks.
May 2025
This paper presents TCReC, an innovative model designed for reconstructing network traffic characteristics in the presence of packet loss. With the rapid expansion of wireless networks driven by edge computing, IoT, and 5G technologies, challenges such as transmission instability, channel competition, and environmental interference have led to significant packet loss rates, adversely impacting deep learning-based network traffic analysis tasks. To address this issue, TCReC leverages masked autoencoder techniques to reconstruct missing traffic features, ensuring reliable input for downstream tasks in edge computing scenarios. Experimental results demonstrate that TCReC maintains detection model accuracy within 10% of the original data, even under packet loss rates as high as 70%. For instance, on the ISCX-VPN-2016 dataset, TCReC achieves a Reconstruction Ability Index (RAI) of 94.02%, while on the CIC-IDS-2017 dataset, it achieves an RAI of 94.99% when combined with LSTM, significantly outperforming other methods such as Transformer, KNN, and RNN. Additionally, TCReC exhibits robustness across various packet loss scenarios, consistently delivering high-quality feature reconstruction for both attack traffic and common Internet application data. TCReC provides a robust solution for network traffic analysis in high-loss edge computing scenarios, offering practical value for real-world deployment.
May 2025
·
5 Reads
This paper investigates, applies, and evaluates state-of-the-art Large Language Models (LLMs) for the classification of posts from a dark web hackers’ forum into four cyber-security categories. The LLMs applied included Mistral-7B-Instruct-v0.2, Gemma-1.1-7B, Llama-3-8B-Instruct, and Llama-2-7B, with zero-shot learning, few-shot learning, and fine-tuning. The four cyber-security categories consisted of “Access Control and Management”, “Availability Protection and Security by Design Mechanisms”, “Software and Firmware Flaws”, and “not relevant”. The hackers’ posts were also classified and labelled by a human cyber-security expert, allowing a detailed evaluation of the classification accuracy per each LLM and customization/learning method. We verified LLM fine-tuning as the most effective mechanism to enhance the accuracy and reliability of the classifications. The results include the methodology applied and the labelled hackers’ posts dataset.
May 2025
·
3 Reads
Named Data Networking (NDN) is highly susceptible to Distributed Denial of Service (DDoS) attacks, such as the Interest Flooding Attack (IFA) and the Cache Pollution Attack (CPA). These attacks exploit the inherent data retrieval and caching mechanisms of NDN, leading to severe disruptions in data availability and network efficiency, thereby undermining the overall performance and reliability of the system. In this paper, an attack detection method based on an improved XGBoost is proposed and applied to the hybrid attack pattern of IFA and CPA. Through experiments, the behavior of the hybrid attacks and the efficacy of the detection algorithm are analyzed. In comparison with other algorithms, the proposed method demonstrates advantages in classification performance, as confirmed by its AUC score.
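The AUC score used to compare classifiers here has a direct probabilistic reading: it is the probability that a randomly chosen positive (attack) sample is scored above a randomly chosen negative one, with ties counting one half. A from-scratch sketch on hypothetical scores (not the paper's data):

```python
def auc(scores, labels):
    """AUC via pairwise comparison: P(random positive outscores random negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# All positives above all negatives -> perfect separation (AUC = 1.0);
# one swapped pair out of four -> AUC = 0.75.
perfect = auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])
partial = auc([0.9, 0.4, 0.6, 0.2], [1, 1, 0, 0])
```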
May 2025
·
12 Reads
Blockchain technology is emerging as a pivotal framework to enhance the security of internet-based systems, especially as advancements in machine learning (ML), artificial intelligence (AI), and cyber–physical systems such as smart grids and IoT applications in healthcare continue to accelerate. Although these innovations promise significant improvements, security remains a critical challenge. Blockchain offers a secure foundation for integrating diverse technologies; however, vulnerabilities—including adversarial exploits—can undermine performance and compromise application reliability. To address these risks effectively, it is essential to comprehensively analyze the vulnerability landscape of blockchain systems. This paper contributes in two key ways. First, it presents a unique layer-based framework for analyzing and illustrating security attacks within blockchain architectures. Second, it introduces a novel taxonomy that classifies existing research on blockchain vulnerability detection. Our analysis reveals that while ML and deep learning offer promising approaches for detecting vulnerabilities, their effectiveness often depends on access to extensive and high-quality datasets. Additionally, the layer-based framework demonstrates that vulnerabilities span all layers of a blockchain system, with attacks frequently targeting the consensus process, network integrity, and smart contract code. Overall, this paper provides a comprehensive overview of blockchain security threats and detection methods, emphasizing the need for a multifaceted approach to safeguard these evolving systems.
May 2025
·
7 Reads
This study introduces an AI-based framework for stroke diagnosis that merges clinical data and curated imaging data. The system utilizes traditional machine learning and advanced deep learning techniques to tackle dataset imbalances and variability in stroke presentations. Our approach involves rigorous data preprocessing, feature engineering, and ensemble techniques to optimize predictive performance. Comprehensive evaluations demonstrate that gradient-boosted models deliver superior accuracy, while CNNs enhance stroke detection rates. Calibration and threshold optimization are used to align predictions with clinical requirements, ensuring diagnostic reliability. This multi-modal framework highlights the capacity of AI to accelerate stroke diagnosis and aid clinical decision making, ultimately enhancing patient outcomes in critical care.
April 2025
·
5 Reads
With its fast advancements, cloud computing opens many research opportunities across applications in the robotics field. In this paper, we further explore the prospect of integrating Cloud AI object recognition services into an industrial robotics sorting task. Starting from our previously implemented solution on a digital twin, we now put our proposed architecture to the test in the real world, on an industrial robot, where factors such as illumination, shadows, and the different colors and textures of the materials influence the performance of the vision system. We compare the results of our suggested method with those of an industrial machine vision software package, indicating promising performance and opening additional application perspectives in the robotics field alongside the continuous improvement of Cloud and AI technology.