Jerry Chun-Wei Lin’s research while affiliated with Silesian University of Technology and other places


Publications (691)


An illustrative example of highly associated fuzzy churn patterns. The figure simply illustrates the concept of HAFCP. In the diagram, ‘L’, ‘M’, and ‘H’ represent low, medium, and high respectively, which are calculated by the membership function in fuzzy‐set theory.
The proposed framework for mining HAFCPs: This comprehensive framework outlines the process of extracting HAFCPs, detailing the methodology from initiation to conclusion. To facilitate understanding, the extension work will present a straightforward algorithmic example. This example will elaborate each step of the process, providing a clear explanation of how HAFCPs are identified and derived within our framework.
An example of Gaussian membership functions representing linguistic terms (Low, Medium, and High) for age and spending variables in fuzzy‐set theory.
Top‐10 features of explainability in five datasets.
Explainability of Highly Associated Fuzzy Churn Patterns in Binary Classification
  • Article

May 2025

·

11 Reads

·

2 Citations

D. Y. C. Wang

·

·

Jerry Chun‐Wei Lin

Customer churn, particularly in the telecommunications sector, influences both costs and profits. As the explainability of models becomes increasingly important, this study emphasises not only the explainability of customer churn through machine learning models, but also the importance of identifying multivariate patterns and setting soft bounds for intuitive interpretation. The main objective is to use a machine learning model together with fuzzy-set theory and top-k high-utility itemset mining (HUIM) to identify highly associated customer churn patterns in an intuitively interpretable way, referred to as Highly Associated Fuzzy Churn Patterns (HAFCP). Moreover, this method helps uncover association rules among multiple features across low, medium, and high distributions; such discoveries are instrumental in enhancing the explainability of the findings. Experiments show that when the top-5 HAFCPs are included in five datasets, a mixture of performance results is observed, with some showing notable improvements. High-importance features enhance explanatory power through their distributions and their patterns of association with other features. As a result, the study introduces an innovative approach that improves the explainability and effectiveness of customer churn prediction models.
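As a concrete illustration of the fuzzification step described above, the following minimal sketch (not the authors' code) maps a numeric customer feature onto the Low/Medium/High linguistic terms with Gaussian membership functions; the centres and widths are made-up values, not parameters from the paper.

```python
# A minimal sketch of Gaussian membership functions mapping a numeric feature
# such as "age" onto the linguistic terms Low / Medium / High used by HAFCPs.
# The centres and widths below are illustrative assumptions only.
import numpy as np

def gaussian_membership(x, centre, sigma):
    """Degree of membership of x in a fuzzy set with the given centre/width."""
    return np.exp(-((x - centre) ** 2) / (2 * sigma ** 2))

def fuzzify(x, terms):
    """Return membership degrees of x for each linguistic term."""
    return {name: float(gaussian_membership(x, c, s)) for name, (c, s) in terms.items()}

# Hypothetical parameters for a customer "age" feature.
age_terms = {"Low": (25, 8), "Medium": (45, 8), "High": (65, 8)}
print(fuzzify(30, age_terms))  # roughly {'Low': 0.82, 'Medium': 0.17, 'High': 0.00}
```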




NBOEA-FHUI
Distribution of different initial populations
The influence of the proposed population initialization strategy on convergence speed of the algorithm
The influence of the proposed offspring generation strategy on convergence speed of the algorithm
Comparison of convergence speed of five algorithms on four datasets
A novel efficient bi-objective evolutionary algorithm for frequent and high utility itemsets mining

February 2025

·

15 Reads

Memetic Computing

Mining frequent and high utility itemsets (FHUIs) from a transaction database is an important task in data mining. To overcome the difficulties of parameter setting and the huge search space in traditional algorithms for mining FHUIs, previous works modeled the task as a bi-objective problem and solved it with multi-objective evolutionary algorithms (MOEAs). However, MOEAs may become inefficient when the number of transactions and items in the database grows large. To address this problem, we propose a novel efficient bi-objective evolutionary algorithm for mining FHUIs (NBOEA-FHUI). In NBOEA-FHUI, a novel initialization strategy is proposed that takes the support, utility, and diversity of the initial population into account, so that the initial population has relatively high utility and support values together with high population diversity. To improve the quality of the offspring, a method for estimating the support and utility values of itemsets and an offspring generation strategy are also proposed. The estimation method computes, with little computation, support and utility values that are roughly proportional to the true values, and the offspring generation strategy uses these estimates to generate better offspring. Experimental results on several real datasets demonstrate that the proposed algorithm outperforms state-of-the-art MOEAs in terms of convergence speed, search efficiency, and solution accuracy for mining FHUIs.
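The sketch below is a minimal, simplified illustration of the kind of biased initialization the abstract describes: individuals are itemsets sampled with probability proportional to a combined support/utility score, so the initial population tends toward high support and utility while random sampling preserves diversity. The data structures and weighting are assumptions for illustration, not the authors' algorithm.

```python
# Illustrative population initialization biased by per-item support and utility.
import random

def item_scores(transactions, unit_utilities):
    """Combine per-item support and total utility into a selection weight."""
    n = len(transactions)
    scores = {}
    for item, unit_util in unit_utilities.items():
        support = sum(item in t for t in transactions) / n
        total_util = sum(t.get(item, 0) * unit_util for t in transactions)
        scores[item] = support * total_util
    return scores

def init_population(transactions, unit_utilities, pop_size, max_len=4):
    """Sample itemsets with probability proportional to the combined score."""
    scores = item_scores(transactions, unit_utilities)
    items, weights = zip(*scores.items())
    population = []
    for _ in range(pop_size):
        k = random.randint(1, max_len)
        population.append(frozenset(random.choices(items, weights=weights, k=k)))
    return population

# Tiny made-up database: each transaction maps an item to its quantity.
transactions = [{"a": 2, "b": 1}, {"a": 1, "c": 3}, {"b": 2, "c": 1}]
unit_utilities = {"a": 5, "b": 2, "c": 1}
print(init_population(transactions, unit_utilities, pop_size=5))
```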


Towards a Supporting Framework for Neuro-Developmental Disorder: Considering Artificial Intelligence, Serious Games and Eye Tracking

February 2025

·

18 Reads

This paper focuses on developing a framework for uncovering insights about the performance of children with neuro-developmental disorders (NDD) (e.g., raw gaze cluster analysis, duration and area-of-interest analysis for sustained attention, stimuli expectancy, loss of focus/motivation, inhibitory control) and informing their teachers. The hypothesis behind this work is that self-adaptation of games can contribute to improving students' well-being and performance by suggesting personalized activities (e.g., highlighting stimuli to increase attention or choosing a difficulty level that matches students' abilities). The aim is to examine how AI can be used to help solve this problem. The results would not only contribute to a better understanding of the problems of NDD children and their teachers, but would also help psychologists validate the results against their clinical knowledge, improve communication with patients, and identify areas for further investigation, e.g., by explaining the decisions made and preserving the children's private data in the learning process.
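As an illustration of the duration and area-of-interest (AOI) analysis mentioned above, the following minimal sketch (assuming a simple fixed-rate gaze-sample format, not the paper's pipeline) accumulates per-AOI dwell time from raw gaze points.

```python
# Illustrative per-AOI dwell-time computation from fixed-rate gaze samples.
# AOIs are axis-aligned rectangles; each gaze sample is (timestamp_s, x, y).

def dwell_times(samples, aois, sample_period=1 / 60):
    """Accumulate per-AOI dwell time (seconds) from fixed-rate gaze samples."""
    totals = {name: 0.0 for name in aois}
    for _, x, y in samples:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += sample_period
                break
    return totals

# Hypothetical AOIs and 2 seconds of synthetic 60 Hz gaze data.
aois = {"stimulus": (100, 100, 400, 300), "distractor": (500, 100, 800, 300)}
samples = [(t / 60, 150 + t, 200) for t in range(120)]
print(dwell_times(samples, aois))  # {'stimulus': 2.0, 'distractor': 0.0}
```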


Fig. 1: The 10-20 method for electrode placement using reference electrodes A1 and A2.
Fig. 3: SSRepL- and TL-based LSTM-GRU model architecture and parameter settings (SSRepL-ADHD)
Fig. 4: Confusion Matrix of Classification Models
Fig. 5: Graphical Results of Training and Validation
SSRepL-ADHD: Adaptive Complex Representation Learning Framework for ADHD Detection from Visual Attention Tasks

February 2025

·

41 Reads

Self-Supervised Representation Learning (SSRepL) can capture meaningful and robust representations of Attention Deficit Hyperactivity Disorder (ADHD) data and has the potential to improve model performance on downstream detection of other types of neurodevelopmental disorders (NDD). In this paper, a novel SSRepL and Transfer Learning (TL)-based framework incorporating Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models is proposed to detect children with potential symptoms of ADHD. The model uses Electroencephalogram (EEG) signals recorded during visual attention tasks, improving signal quality through normalization, filtering, and data balancing. For the experimental analysis, we use three different models: 1) the SSRepL and TL-based LSTM-GRU model, named SSRepL-ADHD, which integrates LSTM and GRU layers to capture temporal dependencies in the data; 2) a lightweight SSRepL-based DNN model (LSSRepL-DNN); and 3) Random Forest (RF). These models are thoroughly evaluated using well-known performance metrics (i.e., accuracy, precision, recall, and F1-score). The results show that the proposed SSRepL-ADHD model achieves the maximum accuracy of 81.11% while acknowledging the difficulties associated with dataset imbalance and feature selection.
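For readers who want a concrete picture of the LSTM-GRU stack described above, here is a minimal PyTorch sketch; the channel count, window length, and layer sizes are illustrative assumptions, and the self-supervised pretraining and transfer-learning stages are omitted.

```python
# Minimal LSTM + GRU stack over EEG windows, in the spirit of SSRepL-ADHD.
import torch
import torch.nn as nn

class LSTMGRUClassifier(nn.Module):
    def __init__(self, n_channels=19, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time, channels)
        x, _ = self.lstm(x)          # temporal features from the LSTM layer
        x, _ = self.gru(x)           # refined temporal features from the GRU layer
        return self.head(x[:, -1])   # classify from the last time step

model = LSTMGRUClassifier()
logits = model(torch.randn(8, 256, 19))   # 8 windows, 256 samples, 19 electrodes
print(logits.shape)                       # torch.Size([8, 2])
```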


Convolution Bridge: An Effective Algorithmic Migration Strategy From CNNs to GNNs

January 2025

·

6 Reads

IEEE Transactions on Neural Networks and Learning Systems

Graph neural networks (GNNs), as a rising star in machine learning, are widely used in relational data models and have achieved outstanding performance in graph tasks. GNNs continuously take inspiration from mature models in other domains, such as computer vision and natural language processing, to motivate the development of graph algorithms. However, because data structures differ across domains, cross-domain migration of models has to go through a long period of disassembly and reconstruction, which may not yield the desired results. To preserve the excellent properties of convolution and optimize the migration process from convolutional neural networks (CNNs) to GNNs, we propose a convolution bridge. The convolution bridge realizes data alignment from CNN to GNN, so that a CNN-based model can be efficiently migrated to a graph-structured model. To demonstrate the effectiveness of our migration strategy, we migrated the inception module and the U-Net architecture from CNNs to GNNs, named GraInc and GraU-Net, for node-level and graph-level tasks, respectively. Experimental results show that GraInc and GraU-Net are highly competitive compared to current state-of-the-art models, particularly on dense graph datasets.
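The core analogy behind such a migration can be illustrated with the standard normalized graph convolution, where a node aggregates its adjacency-defined neighbourhood much as a CNN kernel aggregates a pixel's grid neighbourhood. The NumPy sketch below shows that standard operation only; it is not the paper's convolution bridge.

```python
# Standard normalized graph convolution H' = D^{-1/2} (A + I) D^{-1/2} H W.
import numpy as np

def graph_conv(adj, features, weight):
    a_hat = adj + np.eye(adj.shape[0])                   # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weight

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
features = np.random.randn(3, 4)                                 # 3 nodes, 4 features
weight = np.random.randn(4, 2)                                   # learned projection
print(graph_conv(adj, features, weight).shape)                   # (3, 2)
```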


Distributed Multi-Head Learning Systems for Power Consumption Prediction

January 2025

·

40 Reads

As vehicles become increasingly automated, power consumption prediction becomes a vital issue for task scheduling and energy management. Most research focuses on automatic vehicles in transportation, but few studies address automatic ground vehicles (AGVs) in smart factories, which face complex environments and generate large amounts of data. There is an inevitable trade-off between feature diversity and interference. In this paper, we propose Distributed Multi-Head learning (DMH) systems for power consumption prediction in smart factories. Multi-head learning mechanisms are proposed in DMH to reduce noise interference and improve accuracy. Additionally, DMH systems are designed for distributed and split learning, reducing the client-to-server transmission cost, sharing knowledge without sharing local data and models, and enhancing privacy and security. Experimental results show that the proposed DMH systems rank in the top two on most datasets and scenarios, and the DMH-E system reduces the error of state-of-the-art systems by 14.5% to 24.0%. Effectiveness studies demonstrate the effectiveness of Pearson correlation-based feature engineering, and feature grouping with the proposed multi-head learning further enhances prediction performance.

Index Terms: power consumption prediction, smart factory, electric vehicles, multi-head learning, split learning
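A rough picture of the multi-head idea, separate from the distributed/split-learning design, is sketched below: features are grouped by Pearson correlation with the target, one regression head is trained per group to limit noise interference, and the heads' predictions are combined. The data, grouping threshold, and Ridge heads are assumptions for illustration, not the DMH systems.

```python
# Illustrative multi-head regression with correlation-based feature grouping.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                      # hypothetical AGV sensor features
y = X[:, 0] * 2 + X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

# Group features by strength of Pearson correlation with power consumption.
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
groups = [np.where(corr >= 0.3)[0], np.where(corr < 0.3)[0]]

heads = [Ridge().fit(X[:, g], y) for g in groups]  # one head per feature group
pred = np.mean([h.predict(X[:, g]) for h, g in zip(heads, groups)], axis=0)
print(f"training RMSE: {np.sqrt(np.mean((pred - y) ** 2)):.3f}")
```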


An efficient PSO-based evolutionary model for closed high-utility itemset mining

Applied Intelligence

High-utility itemset mining (HUIM) is a widely adopted data mining technique for discovering valuable patterns in transactional databases. Although HUIM can provide useful knowledge in various types of data, it can be challenging to interpret the results when many patterns are found. To alleviate this, closed high-utility itemset mining (CHUIM) has been suggested, which provides users with a more concise and meaningful set of solutions. However, CHUIM is a computationally demanding task, and current approaches can require prolonged runtimes. This paper aims to solve this problem and proposes a meta-heuristic model based on particle swarm optimization (PSO) to discover CHUIs, called CHUI-PSO. Moreover, the algorithm incorporates several new strategies to reduce the computational cost associated with similar existing techniques. First, we introduce Extended TWU pruning (ETP), which aims to decrease the number of possible candidates to improve the discovery of solutions in large search spaces. Second, we propose two new utility upper bounds, used to estimate itemset utilities and bypass expensive candidate evaluations. Finally, to increase population diversity and prevent redundant computations, we suggest a structure called ExploredSet to maintain and utilize the evaluated candidates. Extensive experimental results show that CHUI-PSO outperforms the current state-of-the-art algorithms regarding execution time, accuracy, and convergence.
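To make the PSO encoding concrete, the following minimal sketch runs a binary PSO over itemsets on made-up transaction data: each particle is a bit vector selecting items, fitness is the utility of the itemset in transactions containing all selected items, and a sigmoid on the velocity drives bit flips. The closedness check, ETP pruning, utility upper bounds, and ExploredSet from the paper are omitted; this is not CHUI-PSO itself.

```python
# Illustrative binary PSO over itemsets with utility as fitness.
import numpy as np

rng = np.random.default_rng(1)
items = ["a", "b", "c", "d"]
# Each transaction maps an item to its utility in that transaction.
transactions = [
    {"a": 4, "b": 3},
    {"a": 2, "b": 1, "c": 5},
    {"b": 2, "c": 6, "d": 1},
]

def utility(bits):
    """Total utility of the selected itemset over transactions containing it."""
    chosen = [it for it, b in zip(items, bits) if b]
    if not chosen:
        return 0
    return sum(sum(t[it] for it in chosen) for t in transactions
               if all(it in t for it in chosen))

n, swarm = len(items), 10
pos = rng.integers(0, 2, size=(swarm, n))
vel = rng.normal(size=(swarm, n))
pbest = pos.copy()
gbest = pos[np.argmax([utility(p) for p in pos])].copy()

for _ in range(30):
    r1, r2 = rng.random((swarm, n)), rng.random((swarm, n))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random((swarm, n)) < 1 / (1 + np.exp(-vel))).astype(int)  # sigmoid flip
    for i in range(swarm):
        if utility(pos[i]) > utility(pbest[i]):
            pbest[i] = pos[i]
    gbest = pbest[np.argmax([utility(p) for p in pbest])].copy()

print(dict(zip(items, gbest)), utility(gbest))
```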


Collaborative Ontology Matching With Dual Population Genetic Programming and Active Meta-Learning

January 2025

IEEE Transactions on Evolutionary Computation

Ontology provides a structured language to encapsulate domain-specific knowledge and harmonize diverse data. Ontology matching identifies similar entities in distinct ontologies, facilitating knowledge integration and information exchange. Similarity features are crucial for ontology matching by measuring entity resemblance, but noisy and redundant features can obscure relevant ones, reducing matching quality. To improve the accuracy of matching results, we propose dual population genetic programming with active meta-learning to build a high-quality similarity feature, which comprises three novel components. First, dual population genetic programming is developed to construct a high-level similarity feature with a two-layer individual representation, a dual-population co-evolutionary mechanism, and a novel fitness function based on a partial standard alignment. Second, a new active learning model is presented to update the partial standard alignment through an efficient interactive procedure, guiding the algorithm towards building more reliable similarity features. Finally, a weighted random forest meta-learning model is designed to train the expert vote aggregation model from the experts' historical behaviors, and fine-tunes the model's performance with a compact genetic algorithm. Experimental results on the Ontology Alignment Evaluation Initiative's interactive matching tasks demonstrate that our method consistently achieves higher accuracy and better efficiency compared to advanced matching techniques across various expert error rates.
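The basic signal such a fitness function builds on can be illustrated with the standard alignment F-measure; the sketch below is an assumption-level illustration (hypothetical entity names, not the paper's partial-standard-alignment fitness) of scoring a candidate alignment against a reference set of correspondences.

```python
# Illustrative precision/recall/F1 of a candidate ontology alignment.
def alignment_f1(candidate, reference):
    """Score alignments given as sets of (source_entity, target_entity) pairs."""
    tp = len(candidate & reference)
    precision = tp / len(candidate) if candidate else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

candidate = {("ont1#Person", "ont2#Human"), ("ont1#Paper", "ont2#Article")}
reference = {("ont1#Person", "ont2#Human"), ("ont1#Author", "ont2#Writer")}
print(alignment_f1(candidate, reference))  # (0.5, 0.5, 0.5)
```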


Citations (35)


... This PhD study draws on the findings from three of my published papers on customer behavior analysis and predictive modeling [11][12][13]. Fig. 1 shows the proposed framework for this study. The research work carried out as part of the doctoral study has led to the selected papers briefly described below: ...

Reference:

Enhancing Customer Behavior Prediction and Interpretability
Explainability of Highly Associated Fuzzy Churn Patterns in Binary Classification

... This PhD study draws on the findings from three of my published papers on customer behavior analysis and predictive modeling [11][12][13]. Fig. 1 shows the proposed framework for this study. The research work carried out as part of the doctoral study has led to the selected papers briefly described below: ...

A Utility-Mining-Driven Active Learning Approach for Analyzing Clickstream Sequences
  • Citing Conference Paper
  • December 2024

... The federated server collects the local updates, which are combined using aggregation techniques such as Federated Averaging. The global model weights from the federated server are shared with the individual hospitals to make predictions with local models or to tune hyperparameters to enhance the predictions [21]. ...

HTTPS: Heterogeneous Transfer learning for spliT Prediction System evaluated on healthcare data
  • Citing Article
  • January 2025

Information Fusion

... The authors suggest further research to refine DL techniques and explore additional features, such as genetic or behavioural data, for more precise ADHD classification. The paper [33] focuses on the development of an explainable AI (XAI) framework to assist psychologists in diagnosing ADHD. The authors introduce a DL model that not only provides accurate predictions but also offers interpretable insights into the factors influencing ADHD diagnosis, making the model more accessible and understandable for clinical practitioners. ...

Enhancing Psychologists' Understanding Through Explainable Deep Learning Framework for ADHD Diagnosis

... Consequently, developing advanced visual quality inspection methods and systems is essential [2]. Several researchers have developed detection systems for pipelines [3], bearings [4], and additive manufacturing [5], leveraging sophisticated techniques for visual detection. ...

Applied AI in Defect Detection for Additive Manufacturing: Current Literature, Metrics, Datasets, and Open Challenges
  • Citing Article
  • June 2024

IEEE Instrumentation & Measurement Magazine

... Since the enhancement of different types of image design features requires constant filling of detailed information, this will cause large differences between activation graphs of different convolution layers. 25,26 The differences between covariances of different activation graphs are obtained by introducing style loss: ...

An indoor blind area-oriented autonomous robotic path planning approach using deep reinforcement learning
  • Citing Article
  • May 2024

Expert Systems with Applications

... • Structural Similarity Index Measure (SSIM): This index is utilized for gauging the likeness between two images by considering their luminance (lumi), contrast (cont), and structure (stru). The SSIM metric, which is applied to two images with identical dimensions, is expressed as follows [50][51][52]: ...
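For reference, the standard form of the SSIM index, combining the luminance, contrast, and structure comparisons mentioned in the excerpt (the cited work may use a slightly different variant or notation), is:

```latex
\mathrm{SSIM}(x, y) =
  \frac{(2\mu_x\mu_y + C_1)\,(2\sigma_{xy} + C_2)}
       {(\mu_x^2 + \mu_y^2 + C_1)\,(\sigma_x^2 + \sigma_y^2 + C_2)}
```

where the mu terms are local means, the sigma terms are local variances and covariance of the two images, and C1, C2 are small constants that stabilize the division.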

Video dataset containing video quality assessment scores obtained from standardized objective and subjective testing

Data in Brief

... Experimental results show that APSO-RL off significantly improves solution accuracy. Since no existing evolutionary heuristic methods had been applied to top-k HUPM, a particle swarm optimization method for top-k HUPM, TKU-PSO [42], was proposed. The method first leverages existing solutions and utility estimation of itemsets to avoid redundancy and unnecessary candidate evaluations. ...

TKU-PSO: An Efficient Particle Swarm Optimization Model for Top-K High-Utility Itemset Mining
  • Citing Article
  • January 2024

International Journal of Interactive Multimedia and Artificial Intelligence

... The main goal is to develop an integrative framework that connects ML models with top-k high utility itemset mining (HUIM). In a previous study, a churn prediction pattern (CPP) (Wang et al. 2023) was defined as a set of features that can influence several definable indicators for customer churn evaluation. In this study, the extracted patterns are defined as Highly Associated Fuzzy Churn Patterns (HAFCP), which are conceptually illustrated in Figure 1. ...

Explainability of Leverage Points Exploration for Customer Churn Prediction
  • Citing Conference Paper
  • December 2023

... As mentioned in Section 3.3, those were emulated by changing the payload over the chosen threshold. The input dataset described in Section 4.1 was used to train a 2-hidden-layer (80 units per layer) LSTM (Long Short-Term Memory) model and a 2-hidden-layer (80 units per layer) GRU (Gated Recurrent Unit) model forecasting MPC, which proved to be effective in our previous works [2][3][4]. Similarly, the input window size was set to ∆T = 50 elements and the forecast horizon to ∆t = 10. ...

Effective Prediction of Energy Consumption in Automated Guided Vehicles with Recurrent and Convolutional Neural Networks
  • Citing Conference Paper
  • December 2023