FIGURE 4
Source publication
Over the last two decades, Artificial Intelligence (AI) approaches have been applied to various applications of the smart grid, such as demand response, predictive maintenance, and load forecasting. However, AI is still considered to be a "black-box" due to its lack of explainability and transparency, especially for something like solar photovoltaic...
Context in source publication
Context 1
... is also capable of local interpretability of the models. Figures 4-6 show the local explanations for 3 hours of a day, i.e., the 6th, 7th, and 8th hours. LIME results consist of three parts: (1-Left) Prediction probabilities of solar PV power generation forecasting, (2-Middle) The LIME explanation of more instances. ...
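To make the structure of such an explanation concrete, the snippet below is a minimal sketch, not the authors' code, of generating LIME local explanations for individual hours of a tabular solar PV forecasting model; the feature names, data, and model are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): LIME local explanations for single
# hours of a tabular solar PV forecasting model. Features and data are made up.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["irradiance", "temperature", "humidity", "wind_speed"]
rng = np.random.default_rng(42)
X = rng.random((500, len(feature_names)))
y = 3.0 * X[:, 0] - 0.4 * X[:, 2] + rng.normal(0, 0.05, 500)  # toy PV output

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")

# Explain three individual instances, standing in for the 6th, 7th, and 8th
# hours of a day; each explanation lists per-feature local contributions.
for hour, row in zip((6, 7, 8), X[:3]):
    exp = explainer.explain_instance(row, model.predict, num_features=4)
    print(f"hour {hour}:", exp.as_list())
```

Each printed list of per-feature contributions corresponds, roughly, to the middle panel of the LIME output described above.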
Similar publications
In this research, an effort is made to address microgrid systems' operational challenges, characterized by power oscillations that eventually contribute to grid instability. An integrated strategy is proposed, leveraging the strengths of convolutional and Gated Recurrent Unit (GRU) layers. This approach is aimed at effectively extracting temporal d...
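As a rough illustration of the convolutional-plus-GRU pattern mentioned above, the sketch below stacks a 1-D convolution in front of a GRU for one-step-ahead forecasting; layer sizes and input shapes are assumptions, not the cited paper's configuration.

```python
# Illustrative sketch of a Conv1d + GRU forecaster of the kind described above;
# layer sizes and shapes are assumptions, not the cited paper's configuration.
import torch
import torch.nn as nn

class ConvGRUForecaster(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        # 1-D convolution extracts short-range patterns across time steps
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # GRU captures longer temporal dependencies in the convolved features
        self.gru = nn.GRU(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2))       # -> (batch, 32, time)
        out, _ = self.gru(z.transpose(1, 2))   # -> (batch, time, hidden)
        return self.head(out[:, -1])           # one-step-ahead prediction

model = ConvGRUForecaster(n_features=4)
print(model(torch.randn(8, 24, 4)).shape)      # torch.Size([8, 1])
```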
The construction of smart grids has greatly changed the power grid pattern and power supply structure. For the power system, reasonable power planning and demand response are necessary to ensure the stable operation of society. Accurate load prediction is the basis for realizing demand response for the power system. This paper proposes a Pre-Atten...
The fast growth in renewable power generation, crucial for reducing carbon emissions in the traditional energy system, is constrained by negative environmental and economic repercussions, demanding a smarter integration with conventional energy sources. However, the seamless integration of renewable energy into the grid poses major challenges due to...
In the context of the dual carbon goal strategy, the proportion of new energy generation has increased annually, large‐scale renewable energy integration has been achieved, and the intermittent and uncertain operating characteristics pose an enormous challenge to the complete and stable operation of an integrated energy system (IES), promoting the...
Users’ electricity usage information is helpful for improving the performance of load forecasting and demand response. Users’ metering data contains abundant usage information, and various approaches have been developed to extract users’ usage information from metering data. Since a user has several specific operation states and the user’s electricity con...
Citations
... Explainability and user trust are critical for adopting LLMs in critical infrastructure. Explainability addresses the "black box" nature of some LLMs, evaluating their ability to justify decisions, as highlighted by Kuzlu et al. [70] for solar forecasting. Articulating reasoning is key for building confidence. ...
Large Language Models (LLMs) are changing the way we operate our society and will undoubtedly impact power systems as well - but how exactly? By integrating various data streams - including real-time grid data, market dynamics, and consumer behaviors - LLMs have the potential to make power system operations more adaptive, enhance proactive security measures, and deliver personalized energy services. This paper provides a comprehensive analysis of 30 real-world applications across eight key categories: Grid Operations and Management, Energy Markets and Trading, Personalized Energy Management and Customer Engagement, Grid Planning and Education, Grid Security and Compliance, Advanced Data Analysis and Knowledge Discovery, Emerging Applications and Societal Impact, and LLM-Enhanced Reinforcement Learning. Critical technical hurdles, such as data privacy and model reliability, are examined, along with possible solutions. Ultimately, this review illustrates how LLMs can significantly contribute to building more resilient, efficient, and sustainable energy infrastructures, underscoring the necessity of their responsible and equitable deployment.
... In healthcare and law, it is crucial for trust and accountability, while in other areas like image or speech recognition, performance might be more important [28]. XAI has been applied in various fields, including helping diagnose Alzheimer's disease [29] from medical images, planning treatments for glaucoma [30], predicting bankruptcies [31], and forecasting solar panel performance [32]. ...
Explainable AI (XAI) frameworks are becoming essential in many areas, including the medical field, as they help us to understand AI decisions, increasing clinical trust and improving patient care. This research presents a robust and comprehensive Explainable AI framework. To classify images from the BloodMNIST and Raabin‐WBC datasets, various pre‐trained convolutional neural network (CNN) architectures: the VGG, the ResNet, the DenseNet, the EfficientNet, the MobileNet variants, the SqueezeNet, and the Xception are implemented both individually and in combination with SpinalNet. For parameter analysis, four models, VGG16, VGG19, ResNet50, and ResNet101, were combined with SpinalNet. Notably, these SpinalNet hybrid models significantly reduced the model parameters while maintaining or even improving the model accuracy. For example, the VGG 16 + SpinalNet shows a 40.74% parameter reduction and accuracy of 98.92% (BloodMnist) and 98.32% (Raabin‐WBC). Similarly, the combinations of VGG19, ResNet50, and ResNet101 with SpinalNet resulted in weight parameter reductions by 36.36%, 65.33%, and 52.13%, respectively, with improved accuracy for both datasets. These hybrid SpinalNet models are highly efficient and well‐suited for resource‐limited environments. The authors have developed a dynamic model selection framework. This framework optimally selects the best models based on prediction scores, prioritizing lightweight models in cases of ties. This method guarantees that for every input, the most effective model is used, which results in higher accuracy as well as better outcomes. Explainable AI (XAI) techniques: Local Interpretable Model‐agnostic Explanations (LIME), SHapley Additive ExPlanations (SHAP), and Gradient‐weighted Class Activation Mapping (Grad‐CAM) are implemented. These help us to understand the key features that influence the model predictions. By combining these XAI methods with dynamic model selection, this research not only achieves excellent accuracy but also provides useful insights into the elements that influence model predictions.
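The dynamic model selection rule described above (highest prediction score wins, with lighter models preferred on ties) can be sketched roughly as follows; the candidate names, scores, and parameter counts are illustrative, and this is not the authors' implementation.

```python
# Rough sketch (not the authors' implementation) of score-based model selection
# with a lightweight-model tie-break, as described in the abstract above.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    score: float        # prediction confidence for the current input
    n_params: int       # model size, used to break ties

def select_model(candidates, tol=1e-6):
    """Return the candidate with the highest score; on (near-)ties,
    prefer the model with the fewest parameters."""
    best = max(candidates, key=lambda c: c.score)
    tied = [c for c in candidates if abs(c.score - best.score) <= tol]
    return min(tied, key=lambda c: c.n_params)

# Illustrative scores and sizes only
candidates = [
    Candidate("VGG16+SpinalNet", 0.98, 82_000_000),
    Candidate("ResNet50+SpinalNet", 0.98, 9_000_000),
    Candidate("DenseNet121", 0.95, 8_000_000),
]
print(select_model(candidates).name)  # the lighter model of the tied pair
```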
... Furthermore, XAI methods like SHAP can reveal how different inputs affect the predictions, which is crucial for optimizing energy dispatch and enhancing the resilience of power systems [20]. Kuzlu et al. performed solar PV energy forecasting using XAI tools such as SHAP, Local Interpretable Model-agnostic Explanations (LIME), and ELI5, which can facilitate the adoption of XAI technologies in smart grid applications [21]. Additionally, Zyl et al. found SHAP superior for identifying the features that degrade forecast accuracy in time series energy forecasting [22]. ...
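As a concrete, hedged illustration of how SHAP exposes per-feature impacts in an energy forecasting setting, the sketch below computes SHAP values for a toy gradient-boosting forecaster; the model, features, and data are assumptions rather than those used in [20]-[22].

```python
# Minimal sketch: SHAP values exposing per-feature impact on an energy
# forecasting model. Model, features, and data are illustrative only.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

feature_names = ["irradiance", "temperature", "cloud_cover", "hour_of_day"]
rng = np.random.default_rng(1)
X = rng.random((400, 4))
y = 2.5 * X[:, 0] - 1.0 * X[:, 2] + 0.2 * np.sin(X[:, 3] * 6.28) + rng.normal(0, 0.05, 400)

model = GradientBoostingRegressor().fit(X, y)

explainer = shap.TreeExplainer(model)      # fast, exact for tree ensembles
shap_values = explainer.shap_values(X)     # shape: (n_samples, n_features)

# Global view: mean absolute SHAP value per feature ~ overall importance
global_importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, global_importance), key=lambda t: -t[1]):
    print(f"{name:12s} {imp:.3f}")
```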
... However, the literature has not explored the variation in such relationships across seasons. Furthermore, the possibility of using XAI for feature selection has only briefly been touched upon by the research community, with mixed results: one study reported improved accuracy after removing less important features, while another found a decline in performance [14,15]. ...
... Some works developed in this area have used XAI to gain insights into how a model makes its predictions [14,15]. These works applied XAI to random forests and XGBoost ensemble algorithms, respectively. ...
... These works applied XAI to random forests and XGBoost ensemble algorithms, respectively. In [14], the authors provided a comparison between LIME, SHAP, and Explain Like I'm 5 (ELI5), and reported that SHAP was the only method among those used that delivered a global explanation, although it was also the most computationally expensive and time-consuming. In [15], the authors used ELI5 and reported Root Mean Squared Error (RMSE) scores for models built with all available features and models built with just a subset, showing a decline in performance for the latter. ...
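The ELI5-based workflow summarized above can be sketched roughly as follows: compute permutation importances with ELI5, retrain on a reduced feature subset, and compare RMSE against the full-feature model. The data, the retained subset size, and the random-forest model are illustrative assumptions, not the setup of [15].

```python
# Sketch of an ELI5 permutation-importance workflow with an RMSE comparison
# between full-feature and reduced-feature models. Data are illustrative.
import numpy as np
import eli5
from eli5.sklearn import PermutationImportance
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

feature_names = ["irradiance", "temperature", "humidity", "wind_speed", "pressure"]
rng = np.random.default_rng(3)
X = rng.random((600, 5))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

full = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
perm = PermutationImportance(full, random_state=0).fit(X_te, y_te)
print(eli5.format_as_text(eli5.explain_weights(perm, feature_names=feature_names)))

# Retrain on a reduced subset (here: the two most important columns) and
# compare RMSE, mirroring the full-vs-subset comparison reported in [15].
keep = np.argsort(perm.feature_importances_)[::-1][:2]
reduced = RandomForestRegressor(random_state=0).fit(X_tr[:, keep], y_tr)
for name, mdl, Xe in [("full", full, X_te), ("reduced", reduced, X_te[:, keep])]:
    rmse = np.sqrt(mean_squared_error(y_te, mdl.predict(Xe)))
    print(f"{name} RMSE: {rmse:.3f}")
```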
This work explores the effectiveness of explainable artificial intelligence in mapping solar photovoltaic power outputs based on weather data, focusing on short-term mappings. We analyzed the impact values provided by the Shapley additive explanation method when applied to two algorithms designed for tabular data—XGBoost and TabNet—and conducted a comprehensive evaluation of the overall model and across seasons. Our findings revealed that the impact of selected features remained relatively consistent throughout the year, underscoring their uniformity across seasons. Additionally, we propose a feature selection methodology utilizing the explanation values to produce more efficient models, by reducing data requirements while maintaining performance within a threshold of the original model. The effectiveness of the proposed methodology was demonstrated through its application to a residential dataset in Madeira, Portugal, augmented with weather data sourced from SolCast.
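A hedged approximation of the feature-selection methodology described above might look like the following: rank features by mean absolute SHAP value and keep the smallest subset whose error stays within a chosen threshold of the full model's error. The XGBoost model, the 10% tolerance, and the synthetic data are assumptions, not the authors' exact procedure or dataset.

```python
# Hedged approximation (not the authors' code) of SHAP-guided feature selection:
# rank features by mean |SHAP| value and keep the smallest subset whose error
# stays within a tolerance of the full model's error.
import numpy as np
import shap
from xgboost import XGBRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.random((800, 6))                         # illustrative weather features
y = 2.0 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.05, 800)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

full = XGBRegressor(n_estimators=200).fit(X_tr, y_tr)
base_err = mean_absolute_error(y_te, full.predict(X_te))

shap_vals = shap.TreeExplainer(full).shap_values(X_tr)
ranking = np.argsort(np.abs(shap_vals).mean(axis=0))[::-1]   # most impactful first

tolerance = 1.10                                 # allow 10% degradation
selected = ranking
for k in range(1, len(ranking) + 1):
    subset = ranking[:k]
    model = XGBRegressor(n_estimators=200).fit(X_tr[:, subset], y_tr)
    err = mean_absolute_error(y_te, model.predict(X_te[:, subset]))
    if err <= tolerance * base_err:              # good enough with k features
        selected = subset
        break
print("kept feature indices:", selected.tolist())
```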
... The research showcases the potential of AI in enhancing the operation and maintenance of renewable energy systems, indicating a promising avenue for future energy management strategies. In similar research, Kuzlu et al. [23] explore the use of explainable AI tools for forecasting solar PV power generation. The research demonstrated how these AI tools can provide meaningful insights into the complex nature of solar power generation, thereby improving forecasting accuracy and aiding decision-making processes. ...
... According to [30], the most common XAI techniques in the energy field are Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), both of which are compatible with any ML model. In [31], XAI techniques, including Explain Like I'm 5 (ELI5), LIME, and SHAP, were applied to solar power forecasting. Among these, SHAP stands out for its ability to provide both global and local interpretability and is the only method offering a complete explanation of model behavior. ...
... This analysis provides an interpretable framework for understanding the criteria driving each agent's behavior, facilitating the practical implementation of DRL models in real-world operations. Various SHAP methods are available, including the standard method [31] and the Deep-SHAP method [32], which uses a modified calculation approach. In this study, the authors employed the standard SHAP method. ...
Citation: Matsushima, F.; Aoki, M.; Nakamura, Y.; Verma, S.C.; Ueda, K.; Imanishi, Y. Multi-Timescale Voltage Control Method Using Limited Measurable Information with Explainable Deep Reinforcement Learning. Energies 2025, 18, 653. Abstract: The integration of photovoltaic (PV) power generation systems has significantly increased the complexity of voltage distribution in power grids, making it challenging for conventional Load Ratio Control Transformers (LRTs) to manage voltage fluctuations caused by weather-dependent PV output variations. Power Conditioning Systems (PCSs) interconnected with PV installations are increasingly considered for voltage control to address these challenges. This study proposes a Machine Learning (ML)-based control method for sub-transmission grids, integrating long-term LRT tap-changing with short-term reactive power control of PCSs. The approach estimates the voltage at each grid node using a Deep Neural Network (DNN) that processes measurable substation data. Based on these estimated voltages, the method determines optimal LRT tap positions and PCS reactive power outputs using Deep Reinforcement Learning (DRL). This enables real-time voltage monitoring and control using only substation measurements, even in grids without extensive sensor installations, ensuring all node voltages remain within specified limits. To improve the model's transparency, Shapley Additive Explanation (SHAP), an Explainable AI (XAI) technique, is applied to the DRL model. SHAP enhances interpretability and confirms the effectiveness of the proposed method. Numerical simulations further validate its performance, demonstrating its potential for effective voltage management in modern power grids.
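To illustrate, in a hedged way, how the standard (Kernel) SHAP method can be applied to a control policy of the kind described above, the sketch below attributes a stand-in policy's reactive-power action to its state inputs; the policy function, state features, and data are hypothetical and do not reproduce the authors' DRL model.

```python
# Minimal sketch: model-agnostic (Kernel) SHAP applied to a stand-in control
# policy, illustrating how SHAP can attribute a DRL agent's action to its
# state inputs. The policy, state features, and data are hypothetical.
import numpy as np
import shap

state_features = ["substation_voltage", "active_power", "reactive_power", "tap_position"]
rng = np.random.default_rng(0)
background = rng.random((100, 4))        # background states for the explainer

def policy(states):
    """Toy stand-in for a trained DRL policy: maps grid states to a
    reactive-power setpoint in [-1, 1]."""
    return np.tanh(2.0 * (states[:, 0] - 0.5) - 0.5 * states[:, 3])

explainer = shap.KernelExplainer(policy, background)
test_states = rng.random((3, 4))
shap_values = explainer.shap_values(test_states)   # (3, 4) per-feature attributions

for s, sv in zip(test_states, shap_values):
    top = state_features[int(np.argmax(np.abs(sv)))]
    print(f"action {policy(s[None])[0]:+.2f} driven mainly by {top}")
```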
... Khan et al. [15] presented an ensemble method for solar PV prediction that improved prediction accuracy by stacking multiple machine-learning models. Furthermore, Kuzlu et al. [16] used explainable artificial intelligence (XAI) techniques to acquire knowledge of solar PV power production prediction, providing an improved comprehension of the predictive systems' effectiveness and dependability. Table 1 shows the summary table. ...
... Additionally, it offers valuable insights into how the independent factors relate to the target variable, enabling a deeper grasp of the underlying principles governing the system. However, Linear Regression also has its limitations, as pointed out in reference [26]. It assumes a linear relationship between the independent variables and the target variable, which may not always hold in real-world scenarios. Furthermore, it presupposes that errors follow a normal distribution and maintain a constant variance, which may not align with real-world conditions in all cases [26]. ...
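The two assumptions mentioned above, normally distributed errors and constant error variance, can be checked with a few lines of residual diagnostics; the data below are illustrative.

```python
# Illustrative check of the two linear-regression assumptions mentioned above:
# normally distributed residuals and roughly constant residual variance.
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.random((300, 2))
y = 4.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.1, 300)

model = LinearRegression().fit(X, y)
fitted = model.predict(X)
residuals = y - fitted

# Normality of errors: Shapiro-Wilk test (p > 0.05 -> no evidence against normality)
stat, p = stats.shapiro(residuals)
print(f"Shapiro-Wilk p-value: {p:.3f}")

# Constant variance: compare residual spread for low vs high fitted values
low = residuals[fitted < np.median(fitted)]
high = residuals[fitted >= np.median(fitted)]
print(f"residual std (low fits):  {low.std():.3f}")
print(f"residual std (high fits): {high.std():.3f}")
```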
This study explores the potential of using low-computing-cost machine learning models for predicting bending loss in Photonic Crystal Fibers (PCFs). Machine learning algorithms that utilise historical data and trends can provide a potent tool for predicting bending loss. The bending loss data and the other associated parameters of the bent PCF were obtained using the Finite Element Method (FEM)-based modal solution technique. The PCF has 3 rings of air-holes in the cladding, with a pitch length (Λ) of 2.6 µm, a wavelength (λ) of 1.55 µm, and a silica refractive index (n) of 1.445. The bending radius was varied from 10,000 µm to 230 µm, and the calculations were done in the Transverse Electric (TE) mode. The bending loss dataset was used to train and evaluate five different low-computing-cost regression algorithms: Linear Regression, Random Forest Regressor, Gradient Boosting Regressor, Support Vector Machine Regressor, and Gaussian Process Regression. The Linear Regression model was found to be the most accurate and reliable predictor of bending loss in PCFs, achieving a Mean Square Error (MSE) of 0.0002 and an R-squared (R²) score of 0.9999. The findings of this work show how machine learning models can be used to forecast crucial PCF parameters, which could advance the field of photonics even on low-computing-cost computers. Machine learning models have the potential to greatly increase the efficiency and accuracy of predicting important parameters in PCFs, improving the design and optimisation of PCFs for diverse optical applications.
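The model-comparison step described in this abstract can be sketched as follows, fitting the same five scikit-learn regressor families on synthetic stand-in data and reporting MSE and R²; the FEM-generated dataset itself is not reproduced here, and the stand-in relationship is purely illustrative.

```python
# Sketch of the model-comparison step: fit the five regressor families named
# above on synthetic stand-in data and report MSE and R^2. The real dataset
# comes from FEM simulations and is not reproduced here.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
bend_radius = rng.uniform(230, 10_000, size=(500, 1))       # stand-in input
loss = 1e3 / bend_radius[:, 0] + rng.normal(0, 0.01, 500)   # stand-in bending loss

X_tr, X_te, y_tr, y_te = train_test_split(bend_radius, loss, random_state=0)

models = {
    "Linear Regression": LinearRegression(),
    "Random Forest": RandomForestRegressor(random_state=0),
    "Gradient Boosting": GradientBoostingRegressor(random_state=0),
    "SVR": SVR(),
    "Gaussian Process": GaussianProcessRegressor(),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:18s} MSE={mean_squared_error(y_te, pred):.4f} "
          f"R2={r2_score(y_te, pred):.4f}")
```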
... Before training, we employed a feature selection algorithm [57] to identify the data features contributing most to the prediction target. To pre-train the DNN models, we divided the data in the source domain into a training set comprising 90% of the data and a validation set containing the remaining 10%. ...
Traditional textile factories consume substantial energy, making energy-efficient production optimization crucial for sustainability and cost reduction. Meanwhile, deep neural networks (DNNs), which are effective for factory output prediction and operational optimization, require extensive historical data—posing challenges due to high sensor deployment and data collection costs. To address this, we propose Ensemble Deep Transfer Learning (EDTL), a novel framework that enhances prediction accuracy and data efficiency by integrating transfer learning with an ensemble strategy and a feature alignment layer. EDTL pretrains DNN models on data-rich production lines (source domain) and adapts them to data-limited lines (target domain), reducing dependency on large datasets. Experiments on real-world textile factory datasets show that EDTL improves prediction accuracy by 5.66% and enhances model robustness by 3.96% compared to conventional DNNs, particularly in data-limited scenarios (20%–40% data availability). This research contributes to energy-efficient textile manufacturing by enabling accurate predictions with fewer data requirements, providing a scalable and cost-effective solution for smart production systems.
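A compact sketch of the generic pretrain-then-fine-tune pattern that EDTL builds on is given below; the ensemble strategy and feature alignment layer from the paper are not reproduced, and the data, network sizes, and the choice of which layers to freeze are illustrative assumptions.

```python
# Compact sketch of the pretrain-then-fine-tune pattern that EDTL builds on.
# The paper's ensemble strategy and feature alignment layer are not reproduced;
# data, shapes, and hyperparameters here are illustrative.
import torch
import torch.nn as nn

def make_mlp(n_in: int) -> nn.Sequential:
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(),
                         nn.Linear(64, 32), nn.ReLU(),
                         nn.Linear(32, 1))

def train(model, X, y, epochs=200, lr=1e-3, params=None):
    opt = torch.optim.Adam(params or model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

torch.manual_seed(0)
X_src, y_src = torch.randn(1000, 8), torch.randn(1000, 1)   # data-rich source line
X_tgt, y_tgt = torch.randn(80, 8), torch.randn(80, 1)       # data-limited target line

model = make_mlp(8)
train(model, X_src, y_src)                    # 1) pretrain on the source domain

# 2) adapt to the target domain: freeze the early layers, fine-tune the head
for p in model[:2].parameters():
    p.requires_grad = False
head_params = [p for p in model.parameters() if p.requires_grad]
print("target-domain loss:", train(model, X_tgt, y_tgt, epochs=100, params=head_params))
```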
... • Smart Grid: Because the smart grid involves a large amount of real-time data analysis, automated decision-making, and optimization, XAI has been regarded as one of the future solutions in this field. Using tools such as LIME, SHAP, and ELI5, insights into the inner workings of solar photovoltaic energy forecast models can be provided [171]. This understanding can lead to improvements in photovoltaic forecast models and to identifying relevant parameters [171]. ...
The unprecedented advancement of Artificial Intelligence (AI) has positioned Explainable AI (XAI) as a critical enabler in addressing the complexities of next-generation wireless communications. With the evolution of 6G networks, characterized by ultra-low latency, massive data rates, and intricate network structures, the need for transparency, interpretability, and fairness in AI-driven decision-making has become more urgent than ever. This survey provides a comprehensive review of the current state and future potential of XAI in communications, with a focus on network slicing, a fundamental technology for resource management in 6G. By systematically categorizing XAI methodologies, ranging from model-agnostic to model-specific approaches and from pre-model to post-model strategies, this paper identifies their unique advantages, limitations, and applications in wireless communications. Moreover, the survey emphasizes the role of XAI in network slicing for vehicular networks, highlighting its ability to enhance transparency and reliability in scenarios requiring real-time decision-making and high-stakes operational environments. Real-world use cases are examined to illustrate how XAI-driven systems can improve resource allocation, facilitate fault diagnosis, and meet regulatory requirements for ethical AI deployment. By addressing the inherent challenges of applying XAI in complex, dynamic networks, this survey offers critical insights into the convergence of XAI and 6G technologies. Future research directions, including scalability, real-time applicability, and interdisciplinary integration, are discussed, establishing a foundation for advancing transparent and trustworthy AI in 6G communications systems.