Information

Published by MDPI

Online ISSN: 2078-2489

Top-read articles

435 reads in the past 30 days

Artificial Intelligence in Digital Marketing: Insights from a Comprehensive Review

December 2023

·

5,887 Reads

·

71 Citations

Christos Ziakis


Aims and scope


Aims: Information (ISSN 2078-2489) is an international, scientific open access journal of information science and technology, data, knowledge, and communication. It publishes reviews, regular research papers, and short communications. Our aim is to encourage scientists to publish their experimental and theoretical results in as much detail as possible. There are no restrictions on the maximum length of manuscripts. Full experimental details must be provided so that the results can be reproduced.

Scope:

Information theory and methodology, including, but not limited to: coding theory (including data compression, error-correction and cryptographic algorithms); information-theoretic security; quantum information; philosophy/ethics of information; post-quantum computing

Information intelligence, including, but not limited to: knowledge management; social media and social networks; big data and cloud computing; artificial intelligence; internet of things/internet of everything

Information processes, including, but not limited to: digital signal processing; data mining; information extraction

Information applications, including, but not limited to: human–machine interfaces; information in society and social development; business process management; blockchain and emerging technologies

Information and communications technology, including, but not limited to: communication systems and networks; wireless sensor networks; mobile communication services

Recent articles


A Fuzzy-Neural Model for Personalized Learning Recommendations Grounded in Experiential Learning Theory
  • Article

April 2025

·

3 Reads

Christos Troussas

·

Akrivi Krouska

·

Phivos Mylonas

·

Cleo Sgouropoulou

Personalized learning is a defining characteristic of current education, with flexible and adaptable experiences that respond to individual learners’ requirements and approaches to learning. Traditional implementations of educational theories—such as Kolb’s Experiential Learning Theory—often follow rule-based approaches, offering predefined structures but lacking adaptability to dynamically changing learner behavior. In contrast, AI-based approaches such as artificial neural networks (ANNs) have high adaptability but lack interpretability. In this work, a new model, a fuzzy-ANN model, is developed that combines fuzzy logic with ANNs to make recommendations for activities in the learning process, overcoming current model weaknesses. In the first stage, fuzzy logic is used to map Kolb’s dimensions of learning style onto continuous membership values, providing a flexible and easier-to-interpret representation of learners’ preferred approaches to learning. These fuzzy weights are then processed in an ANN, enabling refinement and improvement in learning recommendations through analysis of patterns and adaptable learning. To make recommendations adapt and develop over time, a Weighted Sum Model (WSM) is used, combining learner activity trends and real-time feedback in dynamically updating proposed activity recommendations. Experimental evaluation in an educational environment shows that the model effectively generates personalized and changing experiences for learners, in harmony with learners’ requirements and activity trends.
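The Weighted Sum Model (WSM) step mentioned in the abstract can be sketched in a few lines. The weights, input names, and activity catalog below are invented for illustration; the paper's actual formulation and feature set may differ.

```python
# Illustrative Weighted Sum Model (WSM) update for activity recommendations.
# Inputs are assumed to be normalized to [0, 1]; weights are hypothetical.

def wsm_score(fuzzy_fit, activity_trend, feedback, w=(0.5, 0.3, 0.2)):
    """Combine fuzzy learning-style fit, recent activity trend, and
    real-time learner feedback into a single recommendation score."""
    return w[0] * fuzzy_fit + w[1] * activity_trend + w[2] * feedback

# Toy candidate activities scored for one learner.
activities = {
    "simulation": wsm_score(0.9, 0.6, 0.8),
    "reading":    wsm_score(0.4, 0.7, 0.5),
    "quiz":       wsm_score(0.7, 0.9, 0.6),
}
best = max(activities, key=activities.get)  # highest-scoring activity
```

As the learner's trend and feedback values change over time, re-running the scores updates the ranking dynamically, which is the adaptive behavior the abstract describes.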


A Novel Involution-Based Lightweight Network for Fabric Defect Detection

April 2025

Zhenxia Ke

·

Lingjie Yu

·

Chao Zhi

·

[...]

·

Yuming Zhang

For automatic fabric defect detection with deep learning, a large training set covering diverse textures and defect forms is often required. However, the computation cost of convolutional neural network (CNN)-based models is very high. This research proposes an involution-enabled Faster R-CNN network built on the bottleneck structure of the residual network. Involution has two advantages over convolution: first, it can capture a larger receptive field in the spatial dimension; second, parameters are shared in the channel dimension to reduce information redundancy, thus reducing parameters and computation. The detection performance is evaluated by Params, floating-point operations (FLOPs), and average precision (AP) on a collected dataset containing 6308 defective fabric images. The experimental results demonstrate that the proposed involution-based network achieves a lighter model, with Params reduced to 31.21 M and FLOPs decreased to 176.19 G, compared to the Faster R-CNN’s 41.14 M Params and 206.68 G FLOPs. Additionally, it slightly improves the detection of large defects, increasing the AP value from 50.5% to 51.1%. The findings of this research could offer a promising solution for efficient fabric defect detection in practical textile manufacturing.
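A back-of-the-envelope comparison shows why channel sharing makes involution lighter than convolution. The channel count, kernel sizes, reduction ratio, and group count below are illustrative defaults, not the paper's exact configuration.

```python
# Rough weight-only parameter counts (biases ignored) for one layer.

def conv_params(c_in, c_out, k):
    """Standard k x k convolution: every output channel has its own kernel."""
    return k * k * c_in * c_out

def involution_params(c, k, groups=1, reduction=4):
    """Involution kernels are generated per spatial position by two 1x1
    projections and shared across all channels within a group, so the
    parameter cost sits in the small kernel-generation branch."""
    return c * (c // reduction) + (c // reduction) * (k * k * groups)

conv = conv_params(256, 256, 3)   # 3x3 conv, 256 -> 256 channels
inv = involution_params(256, 7)   # 7x7 involution on 256 channels
```

Even with a much larger 7×7 spatial extent, the involution layer here needs roughly 30× fewer weights than the 3×3 convolution, matching the abstract's larger-receptive-field-at-lower-cost argument.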


Design of a Device for Optimizing Burden Distribution in a Blast Furnace Hopper

April 2025

·

2 Reads

Gabriele Degrassi

·

Lucia Parussini

·

Marco Boscolo

·

[...]

·

Vincenzo Dimastromatteo

The coke and ore are stacked alternately in layers inside the blast furnace. The capability of the charging system to distribute them in the desired manner and with optimum strata thickness is crucial for the efficiency and high-performance operation of the blast furnace itself. The objective of this work is the optimization of the charging equipment of a specific blast furnace. This blast furnace consists of a hopper, a single bell and a deflector inserted in the hopper under the conveyor belt. The focus is the search for a deflector geometry capable of distributing the material as evenly as possible in the hopper in order to ensure the effective disposal of the material released in the blast furnace. This search was performed by coupling the discrete element method with a multi-strategy and self-adapting optimization algorithm. The numerical results were qualitatively validated with a laboratory-scale model. Low cost and the simplicity of operation and maintenance are the strengths of the proposed charging system. Moreover, the methodological approach can be extended to other applications and contexts, such as chemical, pharmaceutical and food processing industries. This is especially true when complex material release conditions necessitate achieving bulk material distribution requirements in containers, silos, hoppers or similar components.


Early Heart Attack Detection Using Hybrid Deep Learning Techniques

April 2025

·

1 Read

Niga Amanj Hussain

·

Aree Ali Mohammed

Given the significant risk that heart disease, particularly heart attacks, poses to individuals’ lives, it is crucial to develop effective techniques for early detection. Advanced machine learning and deep learning algorithms have the ability to predict heart attacks by analyzing a patient’s medical history and overall health. These algorithms can process large datasets, extracting valuable insights that help mitigate the risk of fatal outcomes. This study integrates a deep learning approach to predict and detect heart attacks early by classifying patient data as normal or abnormal. The proposed model combines a Convolutional Neural Network (CNN) with self-attention, leveraging the self-attention mechanism to focus on the most critical aspects of the sequence. Since heart attack risk is closely tied to the changes in vital signs over time, this approach enables the model to learn and assign appropriate weights to each input component. Improvements and modifications to the hybrid model resulted in a 98.71% accuracy rate during testing. The model’s strong performance on evaluation metrics shows its potential effectiveness in detecting heart attacks.
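The weighting of sequence elements that the abstract attributes to self-attention can be shown with a minimal scaled dot-product example. This is only the mechanism in isolation; the paper's CNN + self-attention architecture, learned projections, and feature dimensions are not reproduced here, and the vital-sign values are invented.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over x of shape (seq_len, d),
    using x itself as queries, keys, and values (no learned projections)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x, weights

# Toy 3-step sequence of 2-dimensional vital-sign features.
x = np.array([[0.7, 0.1], [0.8, 0.2], [0.1, 0.9]])
out, attn = self_attention(x)  # attn[i, j]: weight step i gives to step j
```

Each row of `attn` sums to one, so every time step's output is a weighted mixture of the whole sequence, which is how the model can emphasize the most informative changes in vital signs.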


Building a Cybersecurity Culture in Higher Education: Proposing a Cybersecurity Awareness Paradigm

April 2025

·

4 Reads

Reismary Armas

·

Hamed Taherdoost

Today, the world is experiencing constant technological evolution, allowing cyberattacks to manifest through different vectors and widely impacting victims, from individual users to serious damage to institutions’ integrity. Research has shown that a significant percentage of recorded cyber incidents are attributed to social engineering practices or human error. In response to this growing threat, reinforcing cybersecurity awareness among users has become an urgent strategy to develop and apply. However, addressing cybersecurity awareness is a difficult challenge, particularly in the higher education (HE) sector, where cybersecurity awareness should be an essential part of every institution given the amount of critical data it handles. In addition to the need to strengthen the preparation of new professionals, statistics have shown a significant increase in successful security attacks in this sector. Therefore, this study proposes a conceptual Cybersecurity Awareness and Training Framework for Higher Education to facilitate the establishment of systems that improve the cybersecurity awareness of students in any academic institution, extending to all audiences that coexist in it. This framework encompasses key components intended to continually improve the development, integration, delivery, and evaluation of cybersecurity knowledge for individuals directly or indirectly related to the institution’s information assets.


From Pixels to Insights: Unsupervised Knowledge Graph Generation with Large Language Model

April 2025

Lei Chen

·

Zhenyu Chen

·

Wei Yang

·

[...]

·

Yong Li

The role of image data in knowledge extraction and representation has become increasingly significant. This study introduces a novel methodology, termed Image to Graph via Large Language Model (ImgGraph-LLM), which constructs a knowledge graph for each image in a dataset. Unlike existing methods that rely on text descriptions or multimodal data to build a comprehensive knowledge graph, our approach focuses solely on unlabeled individual image data, representing a distinct form of unsupervised knowledge graph construction. To tackle the challenge of generating a knowledge graph from individual images in an unsupervised manner, we first design two self-supervised operations to generate training data from unlabeled images. We then propose an iterative fine-tuning process that uses this self-supervised information, enabling the fine-tuned LLM to recognize the triplets needed to construct the knowledge graph. To improve the accuracy of triplet extraction, we introduce filtering strategies that effectively remove low-confidence training data. Finally, experiments on two large-scale real-world datasets demonstrate the superiority of our proposed model.
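The confidence-filtering step the abstract mentions reduces, in essence, to thresholding extracted triplets. The triplets, scores, and threshold below are invented; the paper's filtering strategies are more elaborate than a single cutoff.

```python
# Keep only (subject, relation, object) triplets whose extraction
# confidence clears a threshold; low-confidence ones are discarded
# before fine-tuning, as in the abstract's filtering step.

triplets = [
    (("dog", "on", "grass"), 0.92),
    (("sky", "above", "dog"), 0.41),
    (("dog", "wearing", "collar"), 0.77),
]
threshold = 0.6
kept = [t for t, conf in triplets if conf >= threshold]
```

The retained triplets then serve as the nodes and edges of the per-image knowledge graph.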


Optimized Marine Target Detection in Remote Sensing Images with Attention Mechanism and Multi-Scale Feature Fusion

April 2025

Xiantao Jiang

·

Tianyi Liu

·

Tian Song

·

Qi Cen

With the continuous growth of maritime activities and the shipping trade, the application of maritime target detection in remote sensing images has become increasingly important. However, existing detection methods face numerous challenges, such as small target localization, recognition of targets with large aspect ratios, and high computational demands. In this paper, we propose an improved target detection model, named YOLOv5-ASC, to address the challenges in maritime target detection. The proposed YOLOv5-ASC integrates three core components: an Attention-based Receptive Field Enhancement Module (ARFEM), an optimized SIoU loss function, and a Deformable Convolution Module (C3DCN). These components work together to enhance the model’s performance in detecting complex maritime targets by improving its ability to capture multi-scale features, optimize the localization process, and adapt to the large aspect ratios typical of maritime objects. Experimental results show that, compared to the original YOLOv5 model, YOLOv5-ASC achieves a 4.36 percentage point increase in mAP@0.5 and a 9.87 percentage point improvement in precision, while maintaining computational complexity within a reasonable range. The proposed method not only achieves significant performance improvements on the ShipRSImageNet dataset but also demonstrates strong potential for application in complex maritime remote sensing scenarios.
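The mAP@0.5 figure cited above is built on plain intersection-over-union (IoU): a detection counts as correct when its IoU with a ground-truth box exceeds 0.5. The SIoU loss used by YOLOv5-ASC adds angle, distance, and shape penalty terms on top of this core overlap measure, which are not shown here; the boxes below are toy values.

```python
def iou(a, b):
    """Axis-aligned IoU for boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

v = iou((0, 0, 4, 4), (2, 2, 6, 6))  # 2x2 overlap, union area 28
```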


Enhancing E-Recruitment Recommendations Through Text Summarization Techniques

April 2025

·

3 Reads

Reham Hesham El-Deeb

·

Walid Abdelmoez

·

Nashwa El-Bendary

This research aims to enhance e-recruitment systems using text summarization techniques and pretrained large language models (LLMs). A job recommender system is built with integrated text summarization. The selected summarization techniques are BART, T5 (Text-to-Text Transfer Transformer), BERT, and Pegasus, and a content-based recommendation model is implemented using the LinkedIn Job Postings dataset. The summarization techniques are evaluated using ROUGE-1, ROUGE-2, and ROUGE-L. The results indicate that recommendation quality improves after text summarization, with BERT outperforming the other summarization techniques. Recommendation evaluations show that, for MRR, BERT performs 256.44% better, ranking relevant recommendations at the top more effectively. For RMSE, there is a 29.29% improvement, indicating recommendations closer to the actual values. For MAP, a 106.46% enhancement is achieved, presenting the highest precision in recommendations. Lastly, for NDCG, there is an 83.94% increase, signifying that the most relevant recommendations are ranked higher.
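Two of the ranking metrics used in the evaluation, MRR and NDCG, are easy to compute directly. The relevance lists below are invented for illustration; they are not the study's data.

```python
import math

def mrr(ranked_relevance_lists):
    """Mean reciprocal rank: average of 1/rank of the first relevant item."""
    total = 0.0
    for rels in ranked_relevance_lists:
        total += next((1.0 / (i + 1) for i, r in enumerate(rels) if r), 0.0)
    return total / len(ranked_relevance_lists)

def ndcg(rels):
    """NDCG for one ranked list of graded relevance scores."""
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(rels))
    ideal = sum(r / math.log2(i + 2)
                for i, r in enumerate(sorted(rels, reverse=True)))
    return dcg / ideal if ideal else 0.0

m = mrr([[0, 1, 0], [1, 0, 0]])  # first hits at ranks 2 and 1
n = ndcg([3, 2, 0, 1])           # one mis-ranked item lowers NDCG below 1
```

Percentage improvements like those reported in the abstract compare such scores before and after summarization.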


Evaluating the Impact of Synthetic Data on Emotion Classification: A Linguistic and Structural Analysis
  • Article
  • Full-text available

April 2025

·

7 Reads

Emotion classification in natural language processing (NLP) has recently witnessed significant advancements. However, class imbalance in emotion datasets remains a critical challenge, as dominant emotion categories tend to overshadow less frequent ones, leading to biased model predictions. Traditional techniques, such as undersampling and oversampling, offer partial solutions. More recently, synthetic data generation using large language models (LLMs) has emerged as a promising strategy for augmenting minority classes and improving model robustness. In this study, we investigate the impact of synthetic data augmentation on German-language emotion classification. Using an imbalanced dataset, we systematically evaluate multiple balancing strategies, including undersampling overrepresented classes and generating synthetic data for underrepresented emotions using a GPT-4-based model in a few-shot prompting setting. Beyond enhancing model performance, we conduct a detailed linguistic analysis of the synthetic samples, examining their lexical diversity, syntactic structures, and semantic coherence to determine their contribution to overall model generalization. Our results demonstrate that integrating synthetic data significantly improves classification performance, particularly for minority emotion categories, while maintaining overall model stability. However, our linguistic evaluation reveals that synthetic examples exhibit reduced lexical diversity and simplified syntactic structures, which may introduce limitations in certain real-world applications. These findings highlight both the potential and the challenges of synthetic data augmentation in emotion classification. By providing a comprehensive evaluation of balancing techniques and the linguistic properties of generated text, this study contributes to the ongoing discourse on improving NLP models for underrepresented linguistic phenomena.
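The balancing arithmetic behind the augmentation step is simple to make concrete: count each class and compute how many synthetic examples the minority classes need to match the largest one. The labels and counts below are invented; in the study, the synthetic texts themselves are generated by a GPT-4-based model with few-shot prompting.

```python
from collections import Counter

# Toy imbalanced emotion labels.
labels = ["joy"] * 50 + ["anger"] * 20 + ["fear"] * 8

counts = Counter(labels)
target = max(counts.values())  # size of the largest class
# Synthetic examples needed per underrepresented class.
needed = {label: target - c for label, c in counts.items() if c < target}
```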


Center-Guided Network with Dynamic Attention for Transmission Tower Detection

April 2025

·

3 Reads

Xiaobin Li

·

Zhuwei Liang

·

Jingbin Yang

·

[...]

·

Yuge Xu

Transmission tower (TT) detection in aerial images is a critical step in the inspection of power transmission equipment, which is essential for the stable operation of the power system. However, transmission towers in aerial images pose numerous challenges for object detection due to their multi-scale elongated shapes, large aspect ratios, and visually similar backgrounds. To address these problems, we propose the Center-Guided network with Dynamic Attention (CGDA) for detecting TTs from aerial images. Specifically, we apply ResNet and FPN as the feature extractor to extract high-quality, multi-scale features. To obtain more discriminative information, a dynamic attention mechanism is employed to dynamically fuse multi-scale feature maps and place more attention on the object regions. In addition, a two-stage detection head is proposed to perform more accurate detection. Extensive experiments are conducted on a subset of the public TTPLA dataset. The results show that CGDA achieves competitive performance in detecting TTs, demonstrating the effectiveness of the proposed approach.


Research on Price Prediction of Stock Price Index Based on Combination Method with Introduction of Options Market Information

April 2025

·

3 Reads

Yi Hu

·

Xin Sui

·

Qi Zhang

·

Wei Zhang

This study establishes a combination method-based prediction model for the CSI 300 stock index price embedded with options market information. Firstly, utilizing options and spot market information, a BP neural network is employed to predict the CSI 300 stock index price. Secondly, a logical framework based on a combination method is constructed to further optimize the CSI 300 stock index price prediction through decomposition–clustering, error adjustment, and weighted integration approaches. The results demonstrate the following: (1) Compared to price predictions based solely on spot market information, the introduction of options market information significantly enhances the forecasting performance for the CSI 300 index price. (2) From the perspective of options moneyness classification, after incorporating options information, different types of options contracts exhibit varying impacts on the CSI 300 index price prediction. Prior to optimization, predictions incorporating in-the-money call options with maximum trading volume yield the optimal performance based on the MSE metric. (3) Under the logical framework of the combination method, the prediction effect for the CSI 300 stock index price is gradually improved after introducing the decomposition–clustering method, the error adjustment method, and the price-weighted integration method, which shows that it is appropriate to use the combination method to optimize the price prediction. Overall, this study proposes a combination method for price forecasting incorporating options market information across diverse contract types. It allows for weighted integration of prediction results derived from various options information, offering a novel research angle for spot market price prediction. The study also underscores the importance of implicit information mining and multi-market information fusion for price prediction, which is expected to become a key research focus in this field.
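One common form of the weighted integration step is to weight each model's forecast inversely to its historical error. The paper's combination framework (decomposition–clustering, error adjustment, price-weighted integration) is more elaborate; the scheme and numbers below are an invented illustration of the weighting idea only.

```python
# Combine several index-price forecasts with weights proportional to
# the inverse of each model's historical MSE (lower error -> more weight).

def combine(predictions, mses):
    inv = [1.0 / m for m in mses]
    s = sum(inv)
    weights = [w / s for w in inv]
    combined = sum(w * p for w, p in zip(weights, predictions))
    return combined, weights

# Three hypothetical CSI 300 forecasts and their past MSEs.
pred, w = combine(predictions=[4010.0, 3990.0, 4025.0], mses=[4.0, 2.0, 8.0])
```

The second model, having the lowest MSE, dominates the combination, pulling the integrated forecast toward its value.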


Synthetic User Generation in Games: Cloning Player Behavior with Transformer Models

April 2025

·

4 Reads

Alfredo Chapa Mata

·

Hisa Nimi

·

Juan Carlos Chacón

User-centered design (UCD) commonly requires direct player participation, yet budget limitations or restricted access to users can impede this goal. To address these challenges, this research explores a transformer-based approach coupled with a diffusion process to replicate real player behavior in a 2D side-scrolling action–adventure environment that emphasizes exploration. By collecting an extensive set of gameplay data from real participants in an open-source game, “A Robot Named Fight!”, this study gathered comprehensive state and input information for training. A transformer model was then adapted to generate button-press sequences from encoded game states, while the diffusion mechanism iteratively introduced and removed noise to refine its predictions. The results indicate a high degree of replication of the participant’s actions in contexts similar to the training data, as well as reasonable adaptation to previously unseen scenarios. Observational analysis further confirmed that the model mirrored essential aspects of the user’s style, including navigation strategies, the avoidance of unnecessary combat, and selective obstacle clearance. Despite hardware constraints and reliance on a single observer’s feedback, these findings suggest that a transformer–diffusion methodology can robustly approximate user behavior. This approach holds promise not only for automated playtesting and level design assistance in similar action–adventure games but also for broader domains where simulating user interaction can streamline iterative design and enhance player-centric outcomes.


Benchmarking Methods for Pointwise Reliability

April 2025

·

5 Reads

The growing interest in machine learning in a critical domain like healthcare emphasizes the need for reliable predictions, as decisions based on these outputs can have significant consequences. This study benchmarks methods for assessing pointwise reliability, focusing on data-driven techniques based on the density principle and the local fit principle. These methods evaluate the reliability of individual predictions by analyzing their similarity to training data and evaluating the performance of the model in local regions. Aiming to establish a standardized comparison, the study introduces a benchmark framework that combines error rate evaluations across reliability intervals with t-distributed Stochastic Neighbor Embedding visualizations to further validate the results. The results demonstrate that methods combining density and local fit principles generally outperform those relying on a single principle, achieving lower error rates for high-reliability predictions. Furthermore, the study identifies challenges such as the adjustment of method parameters and clustering limitations and provides insight into their impact on reliability assessments.
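The density principle described above can be sketched as a nearest-neighbor distance check: a prediction is deemed more reliable when the query point lies close to the training data. The scoring function, data, and `k` below are toy choices, not one of the benchmarked methods.

```python
import math

def knn_reliability(train, x, k=2):
    """Density-principle sketch: reliability is the negated mean distance
    from x to its k nearest training points (higher = more reliable)."""
    dists = sorted(math.dist(x, t) for t in train)
    return -sum(dists[:k]) / k

train = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
inside = knn_reliability(train, (0.5, 0.5))   # query amid the data
outside = knn_reliability(train, (4.0, 4.0))  # query far from the data
```

A local-fit method would additionally evaluate the model's error on those same neighbors; the benchmarked hybrids combine both signals.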


Predicting and Preventing School Dropout with Business Intelligence: Insights from a Systematic Review

April 2025

·

28 Reads

School dropout in higher education remains a significant global challenge with profound socioeconomic consequences. To address this complex issue, educational institutions increasingly rely on business intelligence (BI) and related predictive analytics, such as machine learning and data mining techniques. This systematic review critically examines the application of BI and predictive analytics for analyzing and preventing student dropout, synthesizing evidence from 230 studies published globally between 1996 and 2025. We collected literature from the Google Scholar and Scopus databases using a comprehensive search strategy, incorporating keywords such as “business intelligence”, “machine learning”, and “big data”. The results highlight a wide range of predictive tools and methodologies, notably data visualization platforms (e.g., Power BI) and algorithms like decision trees, Random Forest, and logistic regression, demonstrating effectiveness in identifying dropout patterns and at-risk students. Common predictive variables included personal, socioeconomic, academic, institutional, and engagement-related factors, reflecting dropout’s multifaceted nature. Critical challenges identified include data privacy regulations (e.g., GDPR and FERPA), limited data integration capabilities, interpretability of advanced models, ethical considerations, and educators’ capacity to leverage BI effectively. Despite these challenges, BI applications significantly enhance institutions’ ability to predict dropout accurately and implement timely, targeted interventions. This review emphasizes the need for ongoing research on integrating ethical AI-driven analytics and scaling BI solutions across diverse educational contexts to reduce dropout rates effectively and sustainably.


Active Distribution Network Source–Network–Load–Storage Collaborative Interaction Considering Multiple Flexible and Controllable Resources

April 2025

In the context of rapid advancement of smart cities, a distribution network (DN) serving as the backbone of urban operations is a way to confront multifaceted challenges that demand innovative solutions. Central among these, it is imperative to optimize resource allocation and enhance the efficient utilization of diverse energy sources, with particular emphasis on seamless integration of renewable energy systems into existing infrastructure. At the same time, considering that the traditional power system’s “rigid”, instantaneous, dynamic, and balanced law of electricity, “source-load”, is difficult to adapt to the grid-connection of a high proportion of distributed generations (DGs), the collaborative interaction of multiple flexible controllable resources, like flexible loads, are able to supplement the power system with sufficient “flexibility” to effectively alleviate the uncertainty caused by intermittent fluctuations in new energy. Therefore, an active distribution network (ADN) intraday, reactive, power optimization-scheduling model is designed. The dynamic reactive power collaborative interaction model, considering the integration of DG, energy storage (ES), flexible loads, as well as reactive power compensators into the IEEE 33-node system, is constructed with the goals of reducing intraday network losses, keeping voltage deviations to a minimum throughout the day, and optimizing static voltage stability in an active distribution network. Simulation outcomes for an enhanced IEEE 33-node system show that coordinated operation of source–network–load–storage effectively reduces intraday active power loss, improves voltage regulation capability, and achieves secure and reliable operation under ADN. Therefore, it will contribute to the construction of future smart city power systems to a certain extent.


Automated Construction and Mining of Text-Based Modern Chinese Character Databases: A Case Study of Fujian

April 2025

·

2 Reads

Historical figures are crucial for understanding historical processes and social changes. However, existing databases of historical figures primarily focus on ancient Chinese individuals and are limited by the simplistic organization of textual information, lacking structured processing. Therefore, this study proposes an automatic method for constructing a spatio-temporal database of modern Chinese figures. The character state transition matrix reveals the spatio-temporal evolution of historical figures, while the random walk algorithm identifies their primary migration patterns. Using historical figures from Fujian Province (1840–2009) as a case study, the results demonstrate that this method effectively constructs the spatio-temporal chain of figures, encompassing time, space, and events. The character state transition matrix indicates a fluctuating trend of state change from 1840 to 2009, initially increasing and then decreasing. By applying keyword extraction and the random walk method, this study finds that the state transitions and their causes align with the historical trends. The four-dimensional analytical framework of “character-time-space-event” established in this study holds significant value for the field of digital humanities.
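A state transition matrix and a walk over it can be sketched in a few lines. The region names and probabilities below are invented, and the greedy highest-probability walk is a simplified stand-in for the paper's random walk analysis.

```python
# Toy character state transition matrix over three regions; each row
# holds the probabilities of moving from that region to every region.
regions = ["Fujian", "Taiwan", "Southeast Asia"]
P = [
    [0.4, 0.5, 0.1],   # from Fujian
    [0.2, 0.3, 0.5],   # from Taiwan
    [0.1, 0.2, 0.7],   # from Southeast Asia
]

def greedy_path(start, steps):
    """Follow the highest-probability transition at each step to surface
    the dominant migration chain implied by the matrix."""
    path, state = [start], start
    for _ in range(steps):
        state = max(range(len(P)), key=lambda j: P[state][j])
        path.append(state)
    return [regions[i] for i in path]

path = greedy_path(0, 2)
```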


Optimized Digital Watermarking for Robust Information Security in Embedded Systems

April 2025

·

13 Reads

With the exponential growth in transactions and exchanges carried out via the Internet, the risks of the falsification and distortion of information are multiplying, encouraged by widespread access to the virtual world. In this context, digital image watermarking has emerged as an essential solution for protecting digital content by enhancing its durability and resistance to manipulation. However, no current digital watermarking technology offers complete protection against all forms of attack, with each method often limited to specific applications. This field has recently benefited from the integration of deep learning techniques, which have brought significant advances in information security. This article explores the implementation of digital watermarking in embedded systems, addressing the challenges posed by resource constraints such as memory, computing power, and energy consumption. We propose optimization techniques, including frequency domain methods and the use of lightweight deep learning models, to enhance the robustness and resilience of embedded systems. The experimental results validate the effectiveness of these approaches for enhanced image protection, opening new prospects for the development of information security technologies adapted to embedded environments.


Collaborative Modeling of BPMN and HCPN: Formal Mapping and Iterative Evolution of Process Models for Scenario Changes

April 2025

·

1 Read

Dynamic and changeable business scenarios pose significant challenges to the adaptability and verifiability of process models. Despite its widespread adoption as an ISO-standard modeling language, Business Process Model and Notation (BPMN) faces inherent limitations in formal semantics and verification capabilities, hindering the mathematical validation of process evolution behaviors under scenario changes. To address these challenges, this paper proposes a collaborative modeling framework integrating BPMN with hierarchical colored Petri nets (HCPNs), enabling the efficient iterative evolution and correctness verification of process change through formal mapping and localized evolution mechanism. First, hierarchical mapping rules are established with subnet-based modular decomposition, transforming BPMN elements into an HCPN executable model and effectively resolving semantic ambiguities; second, atomic evolution operations (addition, deletion, and replacement) are defined to achieve partial HCPN updates, eliminating the computational overhead of global remapping. Furthermore, an automated verification pipeline is constructed by analyzing state spaces, validating critical properties such as deadlock freeness and behavioral reachability. Evaluated through an intelligent AI-driven service scenario involving multi-gateway processes, the framework demonstrates behavioral effectiveness. This work provides a pragmatic solution for scenario-driven process evolution in domains requiring agile iteration, such as fintech and smart manufacturing.
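The executable semantics BPMN elements are mapped to can be illustrated with minimal Petri-net firing rules. The two-transition net below is a toy process, not the paper's HCPN, and it omits colors and hierarchy entirely.

```python
# Minimal Petri net: markings are dicts place -> token count, and each
# transition names the tokens it consumes and produces.
transitions = {
    "receive": ({"start": 1}, {"review": 1}),   # (consumes, produces)
    "approve": ({"review": 1}, {"done": 1}),
}

def enabled(marking, t):
    """A transition is enabled when every input place holds enough tokens."""
    pre, _ = transitions[t]
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, t):
    """Consume input tokens and produce output tokens (new marking)."""
    pre, post = transitions[t]
    m = dict(marking)
    for p, n in pre.items():
        m[p] = m[p] - n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

m = {"start": 1}
m = fire(m, "receive")
m = fire(m, "approve")
terminal = not any(enabled(m, t) for t in transitions)  # no transition left
```

State-space checks like deadlock freeness amount to exploring all reachable markings this way and verifying that every marking with no enabled transition is an intended end state.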


A Survey of Open-Source Autonomous Driving Systems and Their Impact on Research

April 2025

·

16 Reads

Open-source autonomous driving systems (ADS) have become a cornerstone of autonomous vehicle development. By providing access to cutting-edge technology, fostering global collaboration, and accelerating innovation, these platforms are transforming the automated vehicle landscape. This survey conducts a comprehensive analysis of leading open-source ADS platforms, evaluating their functionalities, strengths, and limitations. Through an extensive literature review, the survey explores their adoption and utilization across key research domains. Additionally, it identifies emerging trends shaping the field. The main contributions of this survey include (1) a detailed overview of leading open-source platforms, highlighting their strengths and weaknesses; (2) an examination of their impact on research; and (3) a synthesis of current trends, particularly in interoperability with emerging technologies such as AI/ML solutions and edge computing. This study aims to provide researchers and practitioners with a holistic understanding of open-source ADS platforms, guiding them in selecting the right platforms for future innovation.


AI Narrative Modeling: How Machines’ Intelligence Reproduces Archetypal Storytelling

April 2025

·

4 Reads

This study examines how large language models reproduce Jungian archetypal patterns in storytelling. Results indicate that AI excels at replicating structured, goal-oriented archetypes (Hero, Wise Old Man), but it struggles with psychologically complex and ambiguous narratives (Shadow, Trickster). Expert evaluations confirmed these patterns, rating AI higher on narrative coherence and thematic alignment than on emotional depth and creative originality.


Transfer Learning for Facial Expression Recognition

April 2025

·

26 Reads

Facial expressions reflect psychological states and are crucial for understanding human emotions. Traditional facial expression recognition methods face challenges in real-world healthcare applications due to variations in facial structure, lighting conditions, and occlusion. We present a methodology based on transfer learning with the pre-trained models VGG-19 and ResNet-152, and we highlight dataset-specific preprocessing techniques that include resizing images to 124 × 124 pixels, augmenting the data, and selectively freezing layers to enhance the robustness of the model. This study explores the application of deep learning-based facial expression recognition in healthcare, particularly for remote patient monitoring and telemedicine, where accurate facial expression recognition can enhance patient assessment and early diagnosis of psychological conditions such as depression and anxiety. The proposed method achieved an average accuracy of 0.98 on the CK+ dataset, demonstrating its effectiveness in controlled environments. However, performance varied across datasets, with accuracy rates of 0.44 on FER2013 and 0.89 on JAFFE, reflecting the challenges posed by noisy and diverse data. Our findings emphasize the potential of deep learning-based facial expression recognition in healthcare applications while underscoring the importance of dataset-specific model optimization to improve generalization across different data distributions. This research contributes to the advancement of automated facial expression recognition in telemedicine, supporting enhanced doctor-patient communication and improving patient care.
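The selective layer freezing mentioned above can be sketched framework-agnostically, so the idea is visible without a deep-learning dependency. The `Layer` class below is an illustrative stand-in for a pre-trained backbone's layers; it is not the paper's code. In PyTorch, the same effect is achieved by setting `param.requires_grad = False` on the backbone parameters while leaving the new classifier head trainable.

```python
# Sketch of selective freezing for transfer learning: keep the pre-trained
# feature extractor fixed and train only the last few (head) layers.

class Layer:
    """Illustrative stand-in for one layer of a pre-trained model."""
    def __init__(self, name):
        self.name = name
        self.trainable = True  # pre-trained weights start out trainable

def freeze_backbone(layers, n_trainable):
    """Freeze all but the last n_trainable layers (the classifier head)."""
    cutoff = len(layers) - n_trainable
    for i, layer in enumerate(layers):
        layer.trainable = i >= cutoff
    return layers

# Example: a 5-layer convolutional backbone plus one fully connected head.
model = [Layer(f"conv{i}") for i in range(5)] + [Layer("fc")]
freeze_backbone(model, 1)  # only the "fc" head remains trainable
```

Freezing the backbone keeps the generic features learned on the original (ImageNet-scale) data and limits fine-tuning to the task-specific head, which is what makes transfer learning viable on small expression datasets like CK+ and JAFFE.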


ChatGPT in ESL Higher Education: Enhancing Writing, Engagement, and Learning Outcomes

April 2025

·

35 Reads

Artificial intelligence (AI) in education has become increasingly common in higher education, particularly in learning English as a second language (ESL). ChatGPT is a conversational AI model frequently used to support language acquisition by creating personalized, interactive learning experiences. This narrative review explored the impact of ChatGPT on ESL in higher education within the past three years. It employed a qualitative literature review using EBSCOhost, ERIC, and JSTOR databases. A total of 29 peer-reviewed articles published between 2023 and 2025 were selected for review. The Scale for the Assessment of Narrative Review Articles (SANRA) was applied as an assessment tool for quality and reliability. The results indicated that ChatGPT enhances learning outcomes in ESL by helping students improve their writing skills, grammar proficiency, and speaking fluency. Moreover, it fostered student engagement due to its personalized feedback and accessible learning resources. There were, however, concerns about plagiarism, factual errors, and dependency on AI tools. Although ChatGPT and similar models present promising opportunities and benefits in ESL education, there is a need for structured implementation and ethical guidance.


Perspectives on Managing AI Ethics in the Digital Age

April 2025

·

10 Reads

The rapid advancement of artificial intelligence (AI) has introduced unprecedented opportunities and challenges, necessitating a robust ethical and regulatory framework to guide its development. This study reviews key ethical concerns such as algorithmic bias, transparency, accountability, and the tension between automation and human oversight. It discusses the concept of algor-ethics, a framework for embedding ethical considerations throughout the AI lifecycle, as an antidote to algocracy, where power is concentrated in those who control data and algorithms. The study also examines AI's transformative potential in diverse sectors, including healthcare, Insurtech, environmental sustainability, and space exploration, underscoring the need for ethical alignment. Ultimately, it advocates for a global, transdisciplinary approach to AI governance that integrates legal, ethical, and technical perspectives, ensuring AI serves humanity while upholding democratic values and social justice. In the second part of the paper, the author offers a synoptic view of AI governance across six major jurisdictions (the United States, China, the European Union, Japan, Canada, and Brazil), highlighting their distinct regulatory approaches. While the EU's AI Act as well as Japan's and Canada's frameworks prioritize fundamental rights and risk-based regulation, the US strategy leans towards fostering innovation through executive directives and sector-specific oversight. In contrast, China's framework integrates AI governance with state-driven ideological imperatives, enforcing compliance with socialist core values, whereas Brazil's framework, despite its commitment to fairness and democratic oversight, still lacks the institutional depth of the more mature frameworks mentioned above.
Finally, strategic and governance considerations are provided to help chief data/AI officers and AI managers successfully leverage the transformative potential of AI for value creation, also in view of emerging international AI standards.


ET-Mamba: A Mamba Model for Encrypted Traffic Classification

April 2025

·

2 Reads

With the widespread use of encryption protocols for network data, fast and effective encrypted traffic classification can improve the efficiency of traffic analysis. A resampling method combining a Wasserstein GAN with random selection is proposed to address the dataset imbalance problem: the Wasserstein GAN is used for oversampling and random selection for undersampling, achieving class equalization. Building on Mamba, a model with an ultra-low parameter count, we propose an encrypted traffic classification model, ET-Mamba, which has a pre-training phase and a fine-tuning phase. During the pre-training phase, positional embedding is used to characterize the blocks of the traffic grayscale image, and random masking is used to strengthen the learning of the intrinsic correlations among those blocks. During the fine-tuning phase, an agent attention mechanism is adopted for feature extraction to achieve global information modeling at low computational cost, and a SmoothLoss function is designed to address the insufficient generalization ability of the cross-entropy loss function during training. The experimental results show that the proposed model significantly reduces the number of parameters and outperforms other models in classification accuracy on non-VPN datasets.
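The class-equalization step described above can be sketched as follows. This is a simplified illustration, not the paper's code: the paper synthesizes minority-class samples with a Wasserstein GAN, for which plain random duplication stands in here as a placeholder, while majority classes are undersampled by random selection exactly as described.

```python
import random
from collections import defaultdict

def equalize_classes(samples, labels, target, seed=0):
    """Return (samples, labels) with exactly `target` items per class.

    Majority classes are undersampled by random selection; minority classes
    are oversampled (duplication here; a Wasserstein GAN in the paper).
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for s, y in zip(samples, labels):
        by_class[y].append(s)

    out_samples, out_labels = [], []
    for y, items in by_class.items():
        if len(items) > target:          # undersample: random selection
            items = rng.sample(items, target)
        else:                            # oversample: WGAN-generated in the
            items = list(items)          # paper; duplication as a stand-in
            while len(items) < target:
                items.append(rng.choice(items))
        out_samples.extend(items)
        out_labels.extend([y] * target)
    return out_samples, out_labels
```

Equalizing class counts before training prevents the classifier from collapsing onto the majority traffic classes, which is the failure mode the resampling method targets.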


Camera Pose Generation Based on Unity3D

April 2025

·

1 Read

Deep learning models performing complex tasks require the support of datasets. With the advancement of virtual reality technology, virtual datasets are becoming more widely used in deep learning. Indoor scenes represent a significant area of interest for the application of machine vision technologies. Existing virtual indoor datasets exhibit deficiencies in their camera poses, leading to problems such as occlusion, object omission, and objects occupying too small a proportion of the image, and they perform poorly when used to train object detection and simultaneous localization and mapping (SLAM) models. To address the limited capacity of cameras to comprehensively capture scene objects, this study presents an enhanced algorithm based on rapidly exploring random tree star (RRT*) for generating camera poses in 3D indoor scenes. In addition, to generate multimodal data for various deep learning tasks, this study designs an automatic image acquisition module on the Unity3D platform. Experimental results on several mainstream virtual indoor datasets, such as 3D-FRONT and Hypersim, indicate that the image sequences generated in this study improve object capture rate and efficiency. Even in cluttered environments such as those in SceneNet RGB-D, the object capture rate remains stable at around 75%. Compared with the image sequences from the original datasets, those generated in this study improve object detection and SLAM performance, with gains of up to approximately 30% in mAP for the YOLOv10 object detection task and up to approximately 10% in SR for the ORB-SLAM algorithm.
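The tree-growing step underlying RRT*-style pose generation can be sketched in 2D: repeatedly sample a point, find the nearest tree node, and step a bounded distance towards the sample. This minimal sketch omits the parts that make the paper's algorithm specific, namely RRT*'s cost-based rewiring and the camera-oriented scoring (object visibility, image coverage); all names below are illustrative.

```python
import math
import random

def grow_rrt(start, bounds, step, n_iter, seed=0):
    """Grow a basic 2D rapidly exploring random tree.

    start: (x, y) root node; bounds: (width, height) of the scene;
    step: maximum extension distance per iteration.
    """
    rng = random.Random(seed)
    nodes = [start]
    for _ in range(n_iter):
        # Sample a random point in the scene.
        sample = (rng.uniform(0, bounds[0]), rng.uniform(0, bounds[1]))
        # Find the nearest existing node.
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Steer: move at most `step` from the nearest node towards the sample.
        t = min(1.0, step / d)
        nodes.append((near[0] + t * (sample[0] - near[0]),
                      near[1] + t * (sample[1] - near[1])))
    return nodes
```

Because each extension is bounded, the tree spreads incrementally through free space; in the paper's setting each node would additionally carry a camera orientation and be scored by how well it captures scene objects.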


Journal metrics


2.4 (2023)

Journal Impact Factor™


33.44%

Acceptance rate


6.9 (2023)

CiteScore™


3.8 days

Submission to first decision


40 days

Submission to publication


16.4 days

Acceptance to publication


1600 CHF

Article processing charge

Editors