Article · PDF Available

Abstract

Microgrids have recently emerged as a building block for smart grids, combining distributed renewable energy sources (RESs), energy storage devices, and load management methodologies. The intermittent nature of RESs poses several challenges to smart microgrids, such as reliability, power quality, and the balance between supply and demand. Thus, forecasting power generation from RESs, such as wind turbines and solar panels, is becoming essential for the efficient and continuous operation of the power grid, and it also helps in attaining optimal utilization of RESs. Energy demand forecasting is likewise an integral part of smart microgrids, as it supports planning of power generation and energy trading with the commercial grid. Machine learning (ML) and deep learning (DL) based models are promising solutions for predicting consumers' demand and the energy generated by RESs. In this context, this manuscript provides a comprehensive survey of the existing DL-based approaches developed for power forecasting of wind turbines and solar panels as well as for electric load forecasting. It also discusses the datasets used to train and test the different DL-based prediction models, enabling future researchers to identify appropriate datasets for their own work. Even though there are a few related surveys on energy management in smart grid applications, they focus on a specific production application, such as either solar or wind. Moreover, none of these surveys reviews forecasting schemes for the production and load sides simultaneously, and none considers the datasets used for forecasting despite their significance in DL-based approaches. Hence, our survey is intrinsically different due to its data-centered view, along with presenting DL-based applications for load and energy generation forecasting in both residential and commercial sectors. The comparison of the DL approaches discussed in this manuscript reveals that the efficiency of such forecasting methods depends heavily on the amount of historical data available, so substantial data storage and processing power are required to handle such big data. Finally, this study raises several open research problems and opportunities in the area of renewable energy forecasting for smart microgrids.
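To make the survey's setting concrete, below is a minimal sketch (not from any surveyed paper) of one-step-ahead electric load forecasting with an LSTM in PyTorch; the synthetic load series, 24-hour window, and network size are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical hourly load series (kW); a stand-in for real smart-meter data.
load = np.sin(np.linspace(0, 60, 2000)) + np.random.normal(0, 0.1, 2000)

def make_windows(series, lookback=24):
    """Slice a series into (24-hour input window, next-hour target) pairs."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y, dtype=torch.float32).unsqueeze(-1))

class LoadLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, lookback, hidden)
        return self.head(out[:, -1])   # next-hour load from the last hidden state

X, y = make_windows(load)
model = LoadLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):                 # short full-batch demo run
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("training MSE:", loss.item())
```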
... Other important fields of study, such as climate change [37], sustainable agriculture [38,39], renewable energy forecasting [40,41], and waste management [42], have been gaining particular interest in recent years, broadening the scope of machine learning in different green scenarios. ...
... In recent years, Deep Learning (DL) methods have proven to be particularly feasible and effective for accurate renewable energy forecasting (Wang et al., 2019; Alkhayat & Mehmood, 2021; Aslam et al., 2021). Nevertheless, power systems are a critical infrastructure that can be targeted by criminal, terrorist, or military attacks. ...
Article
Full-text available
In recent years, researchers have proposed a variety of deep learning models for wind power forecasting. These models predict the wind power generation of wind farms or entire regions more accurately than traditional machine learning algorithms or physical models. However, recent research has shown that deep learning models can often be manipulated by adversarial attacks. Since wind power forecasts are essential for the stability of modern power systems, it is important to protect them from this threat. In this work, we investigate the vulnerability of two different forecasting models to targeted, semi-targeted, and untargeted adversarial attacks. We consider a long short-term memory (LSTM) network for predicting the power generation of individual wind farms and a convolutional neural network (CNN) for forecasting the wind power generation throughout Germany. Moreover, we propose the Total Adversarial Robustness Score (TARS), an evaluation metric for quantifying the robustness of regression models to targeted and semi-targeted adversarial attacks. It assesses the impact of attacks on the model's performance, as well as the extent to which the attacker's goal was achieved, by assigning a score between 0 (very vulnerable) and 1 (very robust). In our experiments, the LSTM forecasting model was fairly robust and achieved a TARS value of over 0.78 for all adversarial attacks investigated. The CNN forecasting model only achieved TARS values below 0.10 when trained conventionally, and was thus very vulnerable. Yet, its robustness could be significantly improved by adversarial training, which always resulted in a TARS above 0.46.
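To illustrate the kind of attack the paper studies, here is a minimal FGSM-style targeted perturbation against a placeholder regression forecaster in PyTorch; the model, input window, target value, and attack budget are all assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

# Placeholder regression forecaster; stands in for the LSTM/CNN models studied.
model = nn.Sequential(nn.Linear(24, 32), nn.ReLU(), nn.Linear(32, 1))
x = torch.randn(1, 24, requires_grad=True)   # one input window
target = torch.tensor([[0.5]])               # attacker's desired output (targeted attack)

# FGSM step: nudge the input in the direction that pulls the
# model's prediction toward the attacker's target.
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
epsilon = 0.01                               # attack budget (assumed)
x_adv = (x - epsilon * x.grad.sign()).detach()

print(model(x).item(), model(x_adv).item())  # prediction before vs. after attack
```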
Article
Full-text available
This paper investigates smart grid energy supply forecasting and economic operation management, with a focus on building an efficient energy supply prediction model. Four datasets were selected for training, and a BiGRU-Attention model optimized by the Snake Optimizer (SO) algorithm was proposed to construct a comprehensive and efficient prediction model, aiming to enhance the reliability, sustainability, and cost-effectiveness of the power system. The research process includes data preprocessing, model training, and model evaluation. Data preprocessing ensures data quality and suitability. In the model training phase, the SO-optimized BiGRU-Attention model combines time series, spatial features, and optimization features to build a comprehensive prediction model. The model evaluation phase calculates metrics such as prediction error, accuracy, and stability, and also examines the model's training time, inference time, number of parameters, and computational complexity to assess its efficiency and scalability. The contribution of this research lies in proposing the SO-optimized BiGRU-Attention model and constructing an efficient comprehensive prediction model. The results indicate that the SO algorithm exhibits significant advantages and contributes to enhancing the effectiveness of the experimental process. The model holds promising applications in the field of energy supply forecasting and provides robust support for the stable operation and optimized economic management of smart grids. Moreover, this study has positive social and economic implications for the development of smart grids and sustainable energy utilization.
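As a rough sketch of the architecture the abstract names, the following PyTorch snippet implements a bidirectional GRU with learned attention pooling; the Snake Optimizer hyperparameter search is omitted, and all dimensions are assumed.

```python
import torch
import torch.nn as nn

class BiGRUAttention(nn.Module):
    """Bidirectional GRU whose hidden states are pooled by learned attention."""
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)    # one score per time step
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                       # x: (batch, time, features)
        h, _ = self.gru(x)                      # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        context = (w * h).sum(dim=1)            # weighted sum of hidden states
        return self.head(context)

model = BiGRUAttention()
print(model(torch.randn(4, 24, 8)).shape)       # -> torch.Size([4, 1])
```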
Chapter
Microgrids are a critical component of smart and sustainable cities, as they provide localized power generation and distribution that can be optimized for efficiency, cost, and environmental impact. Further, many microgrids are characterized by the presence of smart homes, which are an integral part of smart cities and play a crucial role in improving the overall efficiency and sustainability of urban areas. One of the primary benefits of smart homes is the ability to reduce energy consumption and carbon emissions, for example, by automatically adjusting the temperature, lighting, and other settings to optimize energy usage based on the users' needs and preferences. However, the efficient management of smart homes located in microgrids is still an open question. In particular, it requires measuring and processing a large amount of electrical data related to the energy generated by the power sources of the microgrid, the energy consumed by the loads (home appliances), the battery storage level, and the power flow exchanged between the microgrid and the main grid. This paper reflects on how to measure electrical variables at different points of a microgrid using low-cost smart meters with IoT capabilities and how to apply Non-Intrusive Load Monitoring (NILM) methods for energy efficiency purposes. An empirical study involving a real microgrid, including a hybrid wind-solar generation system and a set of home appliances, shows that it is possible to collect data from different parts of the microgrid for efficient management purposes.
Keywords: Microgrids, Smart Sustainable Cities, Smart Homes, Smart Meters, IoT
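A toy illustration of one classic NILM step: event detection via step changes in aggregate power, followed by signature matching. The readings, threshold, and appliance signatures are invented for the example and do not come from the paper.

```python
import numpy as np

# Hypothetical aggregate power readings (W) from a smart meter, 1 Hz.
power = np.array([100, 102, 101, 1600, 1603, 1598, 105, 104, 2305, 2301, 98])

def detect_events(signal, threshold=500):
    """Flag sample indices where power jumps by more than `threshold` W."""
    deltas = np.diff(signal)
    return [(i + 1, d) for i, d in enumerate(deltas) if abs(d) > threshold]

# Illustrative appliance signatures (steady-state draw in W).
signatures = {"kettle": 1500, "oven": 2200}

for idx, delta in detect_events(power):
    # Match each step change to the closest known appliance signature.
    name = min(signatures, key=lambda a: abs(signatures[a] - abs(delta)))
    state = "on" if delta > 0 else "off"
    print(f"t={idx}: {name} switched {state} (step of {delta:+} W)")
```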
Article
Full-text available
Ocean energy technologies are in their developmental stages, like other renewable energy sources. To be usable in the energy market, most components of wave energy devices require further improvement. Additionally, wave resource characteristics must be evaluated and estimated correctly to assess the wave energy potential in various coastal areas. Multiple algorithms integrated with numerical models have recently been developed and utilized to estimate, predict, and forecast wave characteristics and wave energy resources. Each algorithm is vital in designing wave energy converters (WECs) to harvest more energy. Although several optimization-based algorithms have been developed for efficiently designing WECs, they are unreliable and suffer from high computational costs. To this end, novel algorithms incorporating machine learning and deep learning have been presented to forecast wave energy resources and optimize WEC design. This review aims to classify and discuss the key characteristics of machine learning and deep learning algorithms that apply to wave energy forecasting and the optimal configuration of WECs. The review finds that, in terms of convergence rate, combining optimization methods with machine learning and deep learning algorithms can improve both WEC configuration and the forecasting of wave characteristics. In addition, the high capability of learning algorithms for forecasting wave resource and energy characteristics is emphasized. Moreover, a review of power take-off (PTO) coefficients and the control of WECs demonstrates the indispensable ability of learning algorithms to optimize PTO parameters and WEC design.
Article
Full-text available
This paper presents a stochastic model predictive control approach combined with a time-series forecasting technique to tackle the problem of microgrid energy management in the face of uncertainty. A data-driven, non-parametric chance constraint method is used to formulate chance constraints for stochastic model predictive control, removing the dependency on probability density assumptions for uncertain variables while retaining the linear structure of the resulting optimization problem. The proposed approach is suitable for implementation on systems with limited computational power or memory, thanks to its simple linear structure and its ability to provide accurate results within pre-defined confidence levels, even when using small data batches. The proposed forecasting and stochastic model predictive control approaches are applied to a numerical example featuring a small grid-connected microgrid with PV generation, a battery storage system, and a non-controllable load, showing the ability to reduce costs by relaxing the confidence level while still satisfying the pre-defined confidence levels.
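A minimal sketch of the data-driven, non-parametric chance-constraint idea: the power-balance constraint is tightened by an empirical quantile of historical forecast errors, with no distributional assumption. All numbers (horizon, tariff, error history) are assumptions, and cvxpy is used here merely as a convenient LP solver.

```python
import numpy as np
import cvxpy as cp

T = 24                                   # horizon (hours), assumed
rng = np.random.default_rng(0)
pv_forecast = np.clip(np.sin(np.linspace(0, np.pi, T)), 0, None) * 5.0  # kW
load = np.full(T, 3.0)                   # non-controllable load (kW), assumed

# Non-parametric chance constraint: tighten the balance constraint by the
# empirical (1 - alpha) quantile of past PV forecast errors.
errors = rng.normal(0, 0.5, size=(200, T))       # historical forecast errors
alpha = 0.05
margin = np.quantile(errors, 1 - alpha, axis=0)  # per-hour tightening (kW)

grid = cp.Variable(T)                    # power imported from the main grid
price = np.where(np.arange(T) < 18, 0.2, 0.3)    # assumed tariff ($/kWh)
constraints = [grid >= load - (pv_forecast - margin),  # robustified balance
               grid >= 0]
prob = cp.Problem(cp.Minimize(price @ grid), constraints)
prob.solve()
print("expected cost:", round(prob.value, 2))
```

Lowering the confidence level (raising alpha) shrinks the margin and thus the cost, which mirrors the trade-off reported in the abstract.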
Article
Full-text available
The global population growth has led to a considerable rise in demand for wheat. Today, energy consumption in agriculture has also increased due to the need to produce sufficient food for the growing population. Thus, agricultural policymakers in most countries rely on prediction models to inform food security policies. This research aims to predict and reduce energy consumption in wheat production. Data were collected from the farms of Estahban city in Fars province of Iran by the Jihad Agricultural Department's experts over 20 years, from 1994 to 2013. In this study, a novel prediction method based on the energy consumed in the production period is proposed. The model is developed based on artificial intelligence to forecast the output energy in wheat production, using extreme learning machine (ELM) and support vector regression (SVR). In the experimental stage, the evaluation metric values for the SVR and ELM models were 0.000000409 and 0.9531, respectively. Total input energy (consumed) is found to be 1,460,503.1 megajoules (MJ), and output energy (produced wheat) is 1,401,011.945 MJ for Estahban. The results indicate the superiority of the ELM model in supporting the decisions of agricultural policymakers.
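For readers unfamiliar with SVR, here is a minimal scikit-learn sketch of the regression task described, predicting output energy from input energy features; the feature set and synthetic data are assumptions, and the paper's ELM counterpart is not reproduced.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

# Hypothetical farm records: [fertilizer MJ, diesel MJ, seed MJ, labor MJ] per ha.
rng = np.random.default_rng(1)
X = rng.uniform(1000, 5000, size=(200, 4))
y = 0.4 * X.sum(axis=1) + rng.normal(0, 500, 200)   # output energy (MJ), synthetic

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0))
model.fit(X[:150], y[:150])
print("R^2 on held-out farms:", round(r2_score(y[150:], model.predict(X[150:])), 3))
```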
Article
Full-text available
The outbreak of the COVID-19 coronavirus, namely SARS-CoV-2, has created a calamitous situation throughout the world. The cumulative incidence of COVID-19 is rapidly increasing day by day. Machine Learning (ML) and cloud computing can be deployed very effectively to track the disease, predict the growth of the epidemic, and design strategies and policies to manage its spread. This study applies an improved mathematical model to analyse and predict the growth of the epidemic. An ML-based improved model has been applied to predict the potential threat of COVID-19 in countries worldwide. We show that using iterative weighting to fit a Generalized Inverse Weibull distribution yields a better fit and a stronger prediction framework. The framework has been deployed on a cloud computing platform for more accurate and real-time prediction of the growth behavior of the epidemic. Such a data-driven approach with higher accuracy can be very useful for a proactive response from governments and citizens. Finally, we propose a set of research opportunities and set up the grounds for further practical applications.
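The paper's iterative re-weighting scheme is not reproduced here, but as a starting point, SciPy can fit an inverse Weibull (Fréchet) distribution by maximum likelihood; the synthetic samples and parameters below are illustrative.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for epidemic growth data; the real study fits a
# Generalized Inverse Weibull curve with iterative re-weighting.
rng = np.random.default_rng(2)
samples = stats.invweibull.rvs(c=2.5, scale=30.0, size=500, random_state=rng)

# Plain maximum-likelihood fit of the inverse Weibull (Fréchet) distribution.
c, loc, scale = stats.invweibull.fit(samples, floc=0)
print(f"fitted shape={c:.2f}, scale={scale:.2f}")
```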
Article
Full-text available
The recent emergence of Internet of Things (IoT) technologies in mission-critical applications in the maritime industry has led to the introduction of the Internet of Ships (IoS) paradigm. IoS is a novel application domain of IoT that refers to the network of smart interconnected maritime objects, which can be any physical device or infrastructure associated with a ship, a port, or the transportation itself, with the goal of significantly boosting the shipping industry towards improved safety, efficiency, and environmental sustainability. In this manuscript, we provide a comprehensive survey of the IoS paradigm, its architecture, its key elements, and its main characteristics. Furthermore, we review the state of the art for its emerging applications, including safety enhancements, route planning and optimization, collaborative decision making, automatic fault detection and preemptive maintenance, cargo tracking, environmental monitoring, energy-efficient operations, and automatic berthing. Finally, the presented open challenges and future opportunities for research in the areas of satellite communications, security, privacy, maritime data collection, data management, and analytics provide a roadmap towards optimized maritime operations and autonomous shipping.
Article
Full-text available
This paper proposes a novel integrated machine learning (ML) technique to forecast the heat demand of buildings in a District Heating System (DHS). The proposed short-term (24-h-ahead) heat demand forecasting model is based on the integration of Empirical Mode Decomposition (EMD), the Imperialistic Competitive Algorithm (ICA), and the Support Vector Machine (SVM). The proposed model also embeds an ML-based feature selection technique combining a binary genetic algorithm (BGA) and Gaussian Process Regression (GPR). The model is developed using a two-year (2015–2016) hourly dataset of actual district heat demand obtained from various buildings in an area of a city. Several variables from different domains, such as seasonality (calendar), weather, occupancy, and heat demand, are used to construct the initial feature space for the feature selection process. Short-term forecasting models are also implemented using the persistence approach as a reference and eight other ML approaches: artificial neural network (ANN), genetic algorithm combined with ANN (GA-ANN), ICA-ANN, SVM, GA-SVM, ICA-SVM, EMD-GA-ANN, and EMD-ICA-ANN. The performance of the proposed EMD-ICA-SVM-based forecasting model is tested using an out-of-sample one-year (2017) hourly dataset of district heat consumption for various building types. A comparative analysis of the forecasting performance of the models was performed. The obtained results demonstrate that the devised model forecasts heat energy demand with improved performance across various accuracy metrics, outperforming the other nine evaluated models in forecasting accuracy.
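A condensed sketch of the decompose-then-forecast idea (EMD followed by per-component SVR); it assumes the third-party EMD-signal package (imported as PyEMD), uses a synthetic demand series, and omits the ICA tuning and BGA-GPR feature selection.

```python
import numpy as np
from PyEMD import EMD                  # assumed third-party package (pip install EMD-signal)
from sklearn.svm import SVR

# Synthetic hourly heat-demand series standing in for the DHS data.
t = np.arange(1000)
demand = 50 + 10 * np.sin(2 * np.pi * t / 24) + np.random.normal(0, 1, t.size)

imfs = EMD().emd(demand)               # decompose into intrinsic mode functions

def lagged(series, lookback=24):
    """Build (lagged inputs, next value) pairs for one component."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    return X, series[lookback:]

# Forecast each IMF separately with an SVR, then sum the component forecasts.
forecast = 0.0
for imf in imfs:
    X, y = lagged(imf)
    svr = SVR(kernel="rbf").fit(X, y)
    forecast += svr.predict(imf[-24:].reshape(1, -1))[0]  # next unseen hour
print("next-hour demand forecast:", round(forecast, 2))
```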
Article
Full-text available
Accurate prediction of solar irradiance is beneficial in reducing energy waste associated with photovoltaic power plants, preventing system damage caused by severe fluctuations of solar irradiance, and smoothing the power output integration between different power grids. Considering the randomness and high dimensionality of weather data, a hybrid deep learning model that combines a gated recurrent unit (GRU) neural network and an attention mechanism is proposed for forecasting solar irradiance changes in four different seasons. In the first step, an Inception neural network and ResNet are designed to extract features from the original dataset. Secondly, the extracted features are fed into the recurrent neural network (RNN) for model training. Experimental results show that the proposed hybrid deep learning model accurately predicts short-term solar irradiance changes. In addition, the forecasting performance of the model is better than that of traditional deep learning models (such as long short-term memory and GRU).
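A simplified stand-in for the paper's pipeline: a 1-D convolutional feature extractor feeding a GRU. The Inception/ResNet extractors and the attention mechanism are replaced by a single conv layer here, and all dimensions are assumed.

```python
import torch
import torch.nn as nn

class ConvGRUForecaster(nn.Module):
    """1-D conv feature extractor feeding a GRU, a simplified stand-in for
    the paper's Inception/ResNet + GRU-attention pipeline."""
    def __init__(self, n_features=6, channels=16, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(n_features, channels, kernel_size=3, padding=1)
        self.gru = nn.GRU(channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                             # x: (batch, time, weather features)
        f = torch.relu(self.conv(x.transpose(1, 2)))  # convolve along the time axis
        h, _ = self.gru(f.transpose(1, 2))
        return self.head(h[:, -1])                    # next-step irradiance

model = ConvGRUForecaster()
print(model(torch.randn(8, 48, 6)).shape)             # -> torch.Size([8, 1])
```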
Article
Full-text available
Microgrids are becoming an essential part of the power grid with regard to reliability, economy, and the environment, and renewable energies are their main sources of energy. Long-term solar generation forecasting is an important issue in microgrid planning and design from an engineering point of view. Solar generation forecasting mainly depends on solar radiation forecasting; long-term solar radiation forecasting can also be used to estimate the degradation-rate-influenced energy potential of photovoltaic (PV) panels. In this paper, a comparative study of different deep learning approaches is carried out for forecasting hourly and daily solar radiation one year ahead. In the proposed method, state-of-the-art deep learning and machine learning architectures, namely gated recurrent units (GRUs), long short-term memory (LSTM), recurrent neural networks (RNNs), feed-forward neural networks (FFNNs), and support vector regression (SVR) models, are compared. The proposed method uses historical solar radiation data and clear-sky global horizontal irradiance (GHI). Even though all the models performed well, the GRU performed relatively better than the other models. The proposed models are also compared with a traditional method for long-term solar radiation forecasting, i.e., random forest regression (RFR), and outperform it, demonstrating their effectiveness.
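A minimal harness for the kind of comparison the paper performs: training several regressors on the same lagged windows and holding out the final year. The recurrent models are omitted for brevity, and scikit-learn estimators stand in for the FFNN, SVR, and RFR baselines on a synthetic GHI series.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

# Synthetic daily GHI series (kWh/m^2/day) standing in for historical data.
rng = np.random.default_rng(3)
days = np.arange(1500)
ghi = 4 + 2 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 0.3, days.size)

lookback = 30
X = np.stack([ghi[i:i + lookback] for i in range(len(ghi) - lookback)])
y = ghi[lookback:]
split = len(X) - 365                       # hold out the final year

models = {"SVR": SVR(),
          "RFR": RandomForestRegressor(n_estimators=100, random_state=0),
          "FFNN": MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)}
for name, m in models.items():
    m.fit(X[:split], y[:split])
    mae = mean_absolute_error(y[split:], m.predict(X[split:]))
    print(f"{name}: MAE = {mae:.3f} kWh/m^2/day")
```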
Article
In the last few years, the application of machine learning approaches such as Deep Neural Network (DNN) models has become more attractive in the healthcare system, given the rising complexity of healthcare data. Machine Learning (ML) algorithms provide efficient and effective data analysis models to uncover hidden patterns and other meaningful information from the considerable amount of health data that conventional analytics is unable to discover in a reasonable time. In particular, Deep Learning (DL) techniques have shown promise for pattern recognition in healthcare systems. Motivated by this consideration, the contribution of this paper is to investigate the deep learning approaches applied to healthcare systems by reviewing cutting-edge network architectures, applications, and industrial trends. The goal is first to provide extensive insight into the application of deep learning models in healthcare solutions, bridging deep learning techniques and human interpretability in healthcare, and then to present the existing open challenges and future directions.
Article
System-of-Systems (SoS) capability is inherently tied to the participation and performance of the constituent systems and of the network that connects them. It is imperative for SoS stakeholders to quantify the sensitivity of SoS capability and performance to uncertain variations in system participation and network outages, so that system participation can be incentivized and the network design optimized. However, given the independent operations, management, and objectives of constituent systems, along with the increasing number of systems that collectively become part of an SoS, it is difficult to obtain a closed-form analytical function for SoS performance characterization. In this paper, we investigate and compare two machine learning techniques, Artificial Neural Networks and Parametric Bayesian Estimation, to obtain a predictive model of the SoS given the uncertainty in constituent system participation and network conditions. We demonstrate our approach on a smart grid SoS application example and describe how the two machine learning techniques enable SoS robustness and resilience analysis by quantifying the uncertainty in the model and in SoS operations. The results of the smart grid example establish the value of SoS uncertainty quantification (UQ) and show how smart grid operators can utilize UQ models to maintain the desired robustness as operating conditions evolve, and how designers can incorporate low-cost networks into the SoS while maintaining high performance and resilience.
Article
Deep learning has become one of the most widely accepted paradigms in machine learning. It focuses on the use of hierarchical data models and builds upon the notion that, in order to learn high-level data representations, a better understanding of intermediate-level representations is needed. Restricted Boltzmann Machines and deep belief networks are two main types of deep learning algorithms commonly used in a wide array of classification and pattern recognition tasks, such as natural language recognition, neuroimaging studies, time-series forecasting, parametric voice synthesis, and speech emotion recognition. Recent machine learning studies suggest that deep learning networks can help map feature problems into a more advantageous position, hence improving the classification process. However, selecting a suitable deep learning architecture for a specific problem can be difficult. In this study, we investigate whether discriminative measures, such as ANOVA, Pearson's correlation, Fisher score, gain ratio, ReliefF, and OneR, among others, can offer pointers to identify useful neural nodes in a deep learning network, since normally not all hidden neurons provide insightful information for a classification task. Our approach consists in using some of these discriminative measures to rank the hidden neurons based on their output values, and then prune them according to their position within this ranking. Our results indicate that this approach is also helpful in multiclass classification problems and that the pruning process has a positive effect in diminishing the resulting error rate.
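A small sketch of the pruning idea under stated assumptions: hidden activations are scored with the ANOVA F-statistic (scikit-learn's f_classif), ranked, and low-ranked neurons are masked out. The network, data, and the keep-16 cutoff are illustrative, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_selection import f_classif

# Toy classification data and a small network (weights left random here
# purely to illustrate the pruning mechanics).
rng = np.random.default_rng(4)
X = torch.tensor(rng.normal(size=(300, 10)), dtype=torch.float32)
y = rng.integers(0, 2, 300)

hidden = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
head = nn.Linear(32, 2)

# Rank hidden neurons by the ANOVA F-score of their activations w.r.t. labels.
with torch.no_grad():
    acts = hidden(X).numpy()
scores, _ = f_classif(acts, y)
keep = np.argsort(scores)[-16:]            # keep the 16 most discriminative neurons

# Prune by masking: zero the head's weights for low-ranked neurons.
mask = torch.zeros(32)
mask[keep] = 1.0
with torch.no_grad():
    head.weight *= mask                    # broadcasts over the output classes
print("neurons kept:", sorted(keep.tolist()))
```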
Article
The aim of this paper is to explore a multi-agent reinforcement learning approach for residential multi-carrier energy management. Defining the multi-agent system not only makes it possible to dedicate separate demand response programs to different components but also accelerates computation. We employ Q-learning to provide the optimal solution to the presented residential energy management problem. Furthermore, to address uncertainties, a scenario-based method with real data and appropriate probability density functions is used. Deterministic and stochastic numerical calculations are made to justify the effectiveness and robustness of the proposed method. The simulation results indicate that the proposed reinforcement learning-based method leads to lower-cost schemes for consumers than conventional optimization-based energy management programs.
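A minimal tabular Q-learning sketch in the spirit of the paper's method, scheduling a single shiftable appliance over a day; the price profile, reward, and hyperparameters are assumptions, and the multi-agent and multi-carrier aspects are not modeled.

```python
import numpy as np

# States are hours (0-23); actions: 0 = wait, 1 = run the appliance now.
rng = np.random.default_rng(5)
price = np.where(np.arange(24) < 18, 0.2, 0.3)   # cheap day / pricier evening ($/kWh)
Q = np.zeros((24, 2))
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(2000):
    hour, done = 0, False
    while not done:
        a = rng.integers(2) if rng.random() < eps else int(Q[hour].argmax())
        if a == 1 or hour == 23:                 # run now (forced at day end)
            reward, done = -price[hour], True
        else:
            reward, hour_next = 0.0, hour + 1
        if done:
            Q[hour, a] += alpha * (reward - Q[hour, a])
        else:
            Q[hour, a] += alpha * (reward + gamma * Q[hour_next].max() - Q[hour, a])
            hour = hour_next

# Roll out the greedy policy to see when the appliance is scheduled.
hour = 0
while Q[hour].argmax() == 0 and hour < 23:
    hour += 1
print("greedy policy runs the appliance at hour", hour)
```

With discounting, the learned policy defers the run as long as the cheap tariff lasts, which is the qualitative behavior a cost-minimizing demand response scheme should exhibit.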