Conference Paper

An Integrated ML-Ops Framework for Automating AI-based Photovoltaic Forecasting

... A system-of-systems DT for Virtual Power Plants (VPPs) will be developed, functioning as a connector for the above state-of-the-art technologies, and will be extended to cover the use case demonstrators in each CEL of the REV. By utilising the DT for community RES aggregation in distribution networks, short-term decisions can be made, both semi-automated and human-dependent [54]. In addition, the operations of the DT will support a real-time approach, augment the availability of data, and provide information on network parameters, such as congestion of lines and load on transformers, for the current and potential forecasted states of the suggested network configurations [55]. ...
Article
Full-text available
Renewable energy valleys (REVs) represent a transformative concept poised to reshape global energy landscapes. These comprehensive ecosystems transition regions from conventional energy sources to sustainable, self-reliant hubs for renewable energy generation, distribution, and consumption. At their core, REVs integrate advanced information and communication technology (ICT), interoperable digital solutions, social innovation processes, and economically viable business models. They offer a vision of decentralized, low-carbon landscapes accessible to all, capable of meeting local energy demands year-round by harnessing multiple renewable energy sources (RES) and leveraging energy storage technologies. This paper provides an overview of the key components and objectives of REVs, including digital integration through advanced ICT technologies and open digital solutions that enable the seamless management of RES within the REV. The social innovation aspect via the REV’s active communities is also examined, encouraging their participation in the co-design, implementation, and benefit-sharing of renewable energy solutions. In addition, business viability through sustainable business models central to the REV framework is proposed, ensuring affordability and accessibility to all stakeholders. The paper presents a case study of Crete, showcasing how the REV idea can work in real life. Crete utilizes various energy sources to become energy-independent, lower carbon emissions, and enhance system resilience. Advanced energy storage technologies are employed to ensure supply and demand balance within the REV. The island of Crete, Greece, is pioneering the establishment of a Renewable Energy Valley ‘Living Lab’ (REV-Lab), integrating Community Energy Labs (CELs) as innovation hubs. This initiative exemplifies the REV model, striving to create a digitalized, distributed, and low-carbon landscape accessible to all residents throughout the year.
Conference Paper
Full-text available
Gradient Boosting Decision Tree (GBDT) is a popular machine learning algorithm, and has quite a few effective implementations such as XGBoost and pGBRT. Although many engineering optimizations have been adopted in these implementations, the efficiency and scalability are still unsatisfactory when the feature dimension is high and data size is large. A major reason is that for each feature, they need to scan all the data instances to estimate the information gain of all possible split points, which is very time consuming. To tackle this problem, we propose two novel techniques: Gradient-based One-Side Sampling (GOSS) and Exclusive Feature Bundling (EFB). With GOSS, we exclude a significant proportion of data instances with small gradients, and only use the rest to estimate the information gain. We prove that, since the data instances with larger gradients play a more important role in the computation of information gain, GOSS can obtain quite accurate estimation of the information gain with a much smaller data size. With EFB, we bundle mutually exclusive features (i.e., they rarely take nonzero values simultaneously), to reduce the number of features. We prove that finding the optimal bundling of exclusive features is NP-hard, but a greedy algorithm can achieve quite good approximation ratio (and thus can effectively reduce the number of features without hurting the accuracy of split point determination by much). We call our new GBDT implementation with GOSS and EFB LightGBM. Our experiments on multiple public datasets show that, LightGBM speeds up the training process of conventional GBDT by up to over 20 times while achieving almost the same accuracy.
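The GOSS idea described above can be sketched in a few lines: keep the instances with the largest gradients, randomly subsample the rest, and up-weight the sampled small-gradient instances so the information-gain estimate stays unbiased. A minimal illustrative sketch in NumPy (not the actual LightGBM implementation; the sampling ratios `a` and `b` follow the paper's notation):

```python
import numpy as np

def goss_sample(gradients, a=0.2, b=0.1, seed=None):
    """Gradient-based One-Side Sampling (illustrative sketch).

    Keeps the top `a` fraction of instances by |gradient|, randomly
    samples a `b` fraction of the remainder, and returns indices plus
    weights that compensate the small-gradient subsample by (1-a)/b.
    """
    rng = np.random.default_rng(seed)
    n = len(gradients)
    order = np.argsort(-np.abs(gradients))      # large gradients first
    n_top = int(a * n)
    n_rand = int(b * n)
    top = order[:n_top]
    rand = rng.choice(order[n_top:], size=n_rand, replace=False)
    idx = np.concatenate([top, rand])
    weights = np.ones(len(idx))
    weights[n_top:] = (1.0 - a) / b             # re-weight small gradients
    return idx, weights

grads = np.linspace(-1.0, 1.0, 100)
idx, w = goss_sample(grads, a=0.2, b=0.1, seed=0)
# 20 top-gradient instances plus 10 sampled ones, 30 in total
```

The information gain of a candidate split is then estimated on the 30 weighted instances instead of all 100, which is where the training speed-up comes from.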
Article
Full-text available
This paper presents a novel development methodology for artificial intelligence (AI) analytics in energy management that focuses on tailored explainability to overcome the “black box” issue associated with AI analytics. Our approach addresses the fact that any given analytic service is to be used by different stakeholders, with different backgrounds, preferences, abilities, skills, and goals. Our methodology is aligned with the explainable artificial intelligence (XAI) paradigm and aims to enhance the interpretability of AI-empowered decision support systems (DSSs). Specifically, a clustering-based approach is adopted to customize the depth of explainability based on the specific needs of different user groups. This approach improves the accuracy and effectiveness of energy management analytics while promoting transparency and trust in the decision-making process. The methodology is structured around an iterative development lifecycle for an intelligent decision support system and includes several steps, such as stakeholder identification, an empirical study on usability and explainability, user clustering analysis, and the implementation of an XAI framework. The XAI framework comprises XAI clusters and local and global XAI, which facilitate higher adoption rates of the AI system and ensure responsible and safe deployment. The methodology is tested on a stacked neural network for an analytics service, which estimates energy savings from renovations, and aims to increase adoption rates and benefit the circular economy.
Article
Full-text available
Energy management is crucial for various activities in the energy sector, such as effective exploitation of energy resources, reliability in supply, energy conservation, and integrated energy systems. In this context, several machine learning and deep learning models have been developed during the last decades focusing on energy demand and renewable energy source (RES) production forecasting. However, most forecasting models are trained using batch learning, ingesting all data to build a model in a static fashion. The main drawback of models trained offline is that they tend to mis-calibrate after launch. In this study, we propose a novel, integrated online (or incremental) learning framework that recognizes the dynamic nature of learning environments in energy-related time-series forecasting problems. The proposed paradigm is applied to the problem of energy forecasting, resulting in the construction of models that dynamically adapt to new patterns of streaming data. The evaluation process is realized using a real use case consisting of an energy demand and a RES production forecasting problem. Experimental results indicate that online learning models outperform offline learning models by 8.6% in the case of energy demand and by 11.9% in the case of RES forecasting in terms of mean absolute error (MAE), highlighting the benefits of incremental learning.
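The batch-versus-online distinction above can be illustrated with a toy incremental learner: a linear model updated one observation at a time, which adapts after a mid-stream change in the data-generating process that would leave a statically trained model mis-calibrated. A minimal NumPy sketch (not the framework proposed in the paper; the drift scenario is invented):

```python
import numpy as np

class OnlineLinearRegressor:
    """A linear model updated one sample at a time (stochastic gradient descent)."""
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict(self, x):
        return float(x @ self.w)

    def learn_one(self, x, y):
        err = self.predict(x) - y            # squared-error gradient step
        self.w -= self.lr * err * x

rng = np.random.default_rng(0)
model = OnlineLinearRegressor(n_features=1)
for t in range(400):
    x = rng.uniform(0.0, 1.0, size=1)
    slope = 2.0 if t < 200 else 4.0          # concept drift halfway through
    model.learn_one(x, slope * x[0])
# after the drift the weight has moved from ~2 towards the new slope ~4
```

A model fitted once on the first half of the stream would keep predicting with slope 2; the incremental learner tracks the change, which is the mis-calibration problem the paper targets.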
Article
Full-text available
Accurately forecasting solar plant production is critical for balancing supply and demand and for scheduling distribution network operation in the context of inclusive smart cities and energy communities. However, the problem becomes more demanding when there is an insufficient amount of data to adequately train forecasting models, due to plants being recently installed or because of a lack of smart meters. Transfer learning (TL) offers the capability of transferring knowledge from the source domain to different target domains to resolve related problems. This study uses a stacked Long Short-Term Memory (LSTM) model with three TL strategies to provide accurate solar plant production forecasts. TL is exploited both for weight initialization of the LSTM model and for feature extraction, using different freezing approaches. The presented TL strategies are compared to the conventional non-TL model, as well as to the smart persistence model, at forecasting the hourly production of 6 solar plants. Results indicate that TL models significantly outperform the conventional one, achieving 12.6% accuracy improvement in terms of RMSE and 16.3% in terms of forecast skill index with 1 year of training data. The gap between the two approaches becomes even bigger when fewer training data are available (especially in the case of a 3-month training set), breaking new ground in power production forecasting of newly installed solar plants and rendering TL a reliable tool in the hands of self-producers towards the ultimate goal of energy balancing and demand response management from an early stage.
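The weight-initialization flavour of transfer learning described above can be illustrated with a deliberately simplified example: a model fitted on an abundant source dataset warm-starts training on a data-scarce target, versus training from scratch. Linear models stand in for the stacked LSTM here, and all numbers are invented:

```python
import numpy as np

def train(X, y, w_init, lr=0.1, epochs=50):
    """Plain full-batch gradient descent on mean squared error."""
    w = w_init.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
w_true_src = np.array([1.0, 2.0])            # source plant's relationship
w_true_tgt = np.array([1.1, 1.9])            # related but distinct target task

X_src = rng.normal(size=(500, 2))            # abundant source data
X_tgt = rng.normal(size=(10, 2))             # data-scarce target plant
y_src = X_src @ w_true_src
y_tgt = X_tgt @ w_true_tgt

w_src = train(X_src, y_src, np.zeros(2), epochs=200)
w_scratch = train(X_tgt, y_tgt, np.zeros(2), epochs=5)   # few updates, cold start
w_transfer = train(X_tgt, y_tgt, w_src, epochs=5)        # warm start from source

err_scratch = np.linalg.norm(w_scratch - w_true_tgt)
err_transfer = np.linalg.norm(w_transfer - w_true_tgt)
# the warm-started model lands much closer to the target task
```

The same logic carries over to the LSTM setting: initializing from the source plant's weights (and optionally freezing early layers) means far fewer target samples are needed to reach a usable model.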
Article
Full-text available
The inherent instability of wind power production leads to critical problems for smooth power generation from wind turbines, which then requires an accurate forecast of wind power. In this study, an effective short-term wind power prediction methodology is presented, which uses an intelligent ensemble regressor that comprises Artificial Neural Networks and Genetic Programming. In contrast to existing series-based combinations of wind power predictors, whereby the error or variation in the leading predictor is propagated down the stream to the next predictors, the proposed intelligent ensemble predictor avoids this shortcoming by introducing a Genetic Programming-based semi-stochastic combination of neural networks. It is observed that the decisions of the individual base regressors may vary due to the frequent and inherent fluctuations in atmospheric conditions and thus meteorological properties. The novelty of the reported work lies in creating an ensemble to generate an intelligent, collective and robust decision space, thereby avoiding large errors due to the sensitivity of the individual wind predictors. The proposed ensemble-based regressor, a Genetic Programming based ensemble of Artificial Neural Networks, has been implemented and tested on data taken from five different wind farms located in Europe. The obtained numerical results of the proposed model in terms of various error measures are compared with recent artificial intelligence based strategies to demonstrate the efficacy of the proposed scheme. The average root mean squared error of the proposed model for the five wind farms is 0.117575.
Article
Full-text available
TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexibility to the application developer: whereas in previous "parameter server" designs the management of shared state is built into the system, TensorFlow enables developers to experiment with novel optimizations and training algorithms. TensorFlow supports a variety of applications, with particularly strong support for training and inference on deep neural networks. Several Google services use TensorFlow in production, we have released it as an open-source project, and it has become widely used for machine learning research. In this paper, we describe the TensorFlow dataflow model in contrast to existing systems, and demonstrate the compelling performance that TensorFlow achieves for several real-world applications.
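The dataflow model described above — nodes are operations, edges carry values between them — can be illustrated with a toy demand-driven graph interpreter. This is a didactic sketch in plain Python, far removed from TensorFlow's distributed, device-mapped implementation:

```python
import operator

# Toy dataflow graph: each node is (op, inputs). "const" nodes carry a
# value directly; other nodes name an operation and its input nodes.
graph = {
    "a": ("const", 3.0),
    "b": ("const", 4.0),
    "mul": (operator.mul, ["a", "b"]),       # a * b
    "add": (operator.add, ["mul", "b"]),     # (a * b) + b
}

def run(graph, fetch):
    """Evaluate the node `fetch`, computing each dependency exactly once."""
    values = {}
    def evaluate(name):                      # demand-driven evaluation
        if name in values:
            return values[name]
        op, args = graph[name]
        if op == "const":
            values[name] = args
        else:
            values[name] = op(*(evaluate(a) for a in args))
        return values[name]
    return evaluate(fetch)

result = run(graph, "add")                   # 3*4 + 4 = 16.0
```

Representing the computation as an explicit graph is what lets a system like TensorFlow place nodes on different devices, cache shared subresults, and differentiate through the graph.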
Article
Full-text available
Surveying the solar cell landscape: The rate of development and deployment of large-scale photovoltaic systems over recent years has been unprecedented. Because the cost of photovoltaic systems is only partly determined by the cost of the solar cells, efficiency is a key driver to reduce the cost of solar energy. There are several materials systems being explored to achieve high efficiency at low cost. Polman et al. comprehensively and systematically review the leading candidate materials, present the limitations of each system, and analyze how these limitations can be overcome and overall cell performance improved. Science, this issue, p. 10.1126/science.aad4424
Article
Full-text available
In this paper we discuss pandas, a Python library of rich data structures and tools for working with structured data sets common to statistics, finance, social sciences, and many other fields. The library provides integrated, intuitive routines for performing common data manipulations and analysis on such data sets. It aims to be the foundational layer for the future of statistical computing in Python. It serves as a strong complement to the existing scientific Python stack while implementing and improving upon the kinds of data manipulation tools found in other statistical programming languages such as R. In addition to detailing the design and features of pandas, we discuss future avenues of work and growth opportunities for statistics and data analysis applications in the Python language.
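The integrated routines summarized above — labeled data structures with built-in grouping, aggregation, and time-series handling — can be shown in a few lines. A small illustrative example with invented plant-production data:

```python
import pandas as pd

# Small production table illustrating pandas' integrated routines for
# grouping, aggregation, and time-series resampling.
df = pd.DataFrame({
    "plant": ["A", "B", "A", "B", "A", "B"],
    "power_kw": [10.0, 20.0, 12.0, 18.0, 11.0, 22.0],
}, index=pd.date_range("2024-01-01", periods=6, freq="D"))

per_plant = df.groupby("plant")["power_kw"].mean()   # split-apply-combine
# per_plant["A"] == 11.0, per_plant["B"] == 20.0

two_day = df["power_kw"].resample("2D").sum()        # calendar-aware resampling
# sums per two-day window: 30.0, 30.0, 33.0
```

The same operations done with raw arrays would require manual index bookkeeping; pandas ties the labels, the time index, and the computation together.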
Conference Paper
Full-text available
Medium-term price forecasting plays an important role in electricity markets. Obtaining reasonably accurate medium-term electricity price predictions is critical in many applications, such as maintenance scheduling, generation expansion planning, and bilateral contracting. Medium-term price forecasting is a complex task because of the long forecasting horizon, the dependence of medium-term electricity prices on various variables, and the limited availability of explanatory data. In this paper, different aspects of medium-term electricity price forecasting are discussed, and several data-driven 12-month-ahead approaches are developed. The presented approaches are applied to Nord Pool and the Ontario electricity market to examine their performance and effectiveness.
Article
Full-text available
The classical approach to long term load forecasting is often limited to the use of load and weather information occurring with monthly or annual frequency. This low resolution, infrequent data can sometimes lead to inaccurate forecasts. Load forecasters often have a hard time explaining the errors based on the limited information available through the low resolution data. The increasing usage of smart grid and advanced metering infrastructure (AMI) technologies provides the utility load forecasters with high resolution, layered information to improve the load forecasting process. In this paper, we propose a modern approach that takes advantage of hourly information to create more accurate and defensible forecasts. The proposed approach has been deployed across many U.S. utilities, including a recent implementation at North Carolina Electric Membership Corporation (NCEMC), which is used as the case study in this paper. Three key elements of long term load forecasting are being modernized: predictive modeling, scenario analysis, and weather normalization. We first show the superior accuracy of the predictive models attained from hourly data, over the classical methods of forecasting using monthly or annual peak data. We then develop probabilistic forecasts through cross scenario analysis. Finally, we illustrate the concept of load normalization and normalize the load using the proposed hourly models.
Conference Paper
Full-text available
In this paper we present an investigation of short-term (up to 24 hours) load forecasting of the demand for the South Sulawesi (Sulawesi Island, Indonesia) power system, using a multiple linear regression (MLR) method. After a brief analytical discussion of the technique, the usage of polynomial terms and the steps to compose the MLR model will be explained. A report on the implementation of the MLR algorithm using a commercially available tool such as Microsoft Excel will also be discussed. As a case study, historical data consisting of hourly load demand and temperatures of the South Sulawesi electrical system will be used to forecast the short-term load. The results will be presented and analysed; the potential for improvement using alternative methods is also discussed.
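The polynomial-term MLR described above is ordinary least squares on powers of temperature, and the spreadsheet fit can be reproduced programmatically. A minimal sketch (NumPy least squares; the temperature/load numbers are invented for illustration, not the South Sulawesi data):

```python
import numpy as np

# Hypothetical hourly temperatures (°C) and load (MW) following an
# exact quadratic relationship, standing in for historical data.
temp = np.array([22.0, 24.0, 26.0, 28.0, 30.0, 32.0])
load = 100.0 + 2.0 * temp + 0.5 * temp**2

# Design matrix with polynomial terms: [1, T, T^2]
X = np.column_stack([np.ones_like(temp), temp, temp**2])
coef, *_ = np.linalg.lstsq(X, load, rcond=None)
# coef recovers approximately [100, 2, 0.5]

predicted = X @ coef
```

This is exactly what a spreadsheet's LINEST-style regression does under the hood: solve the normal equations for the coefficients of the chosen polynomial terms.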
Article
Full-text available
In the Python world, NumPy arrays are the standard representation for numerical data and enable efficient implementation of numerical computations in a high-level language. As this effort shows, NumPy performance can be improved through three techniques: vectorizing calculations, avoiding copying data in memory, and minimizing operation counts.
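The three techniques named above — vectorizing calculations, avoiding copies, and minimizing operation counts — can each be demonstrated in a line or two (illustrative only):

```python
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# 1. Vectorize: one array expression instead of a Python-level loop.
y = 2.0 * x + 1.0

# 2. Avoid copies: in-place operators and slices-as-views reuse the buffer.
y *= 0.5                     # in-place, no temporary array allocated
head = y[:10]                # a view into y, not a copy

# 3. Minimize operation counts: 0.5 * (2x + 1) == x + 0.5, so a single
#    addition replaces a multiply-and-add pair.
z = x + 0.5
# y and z hold identical values, but z took fewer operations to compute
```

Each technique trades Python-interpreter overhead or memory traffic for work done inside NumPy's compiled loops, which is where the performance gains the paper describes come from.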
Conference Paper
Full-text available
Data integration is the problem of combining data residing at different sources, and providing the user with a unified view of these data. The problem of designing data integration systems is important in current real-world applications, and is characterized by a number of issues that are interesting from a theoretical point of view. This document presents an overview of the material to be presented in a tutorial on data integration. The tutorial is focused on some of the theoretical issues that are relevant for data integration. Special attention will be devoted to the following aspects: modeling a data integration application, processing queries in data integration, dealing with inconsistent data sources, and reasoning on queries.
Article
Full-text available
Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net.
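The ease of use and API consistency mentioned above come from the uniform estimator interface: every model exposes fit and predict, so preprocessing and learning steps chain into a pipeline. A minimal illustrative example (toy data):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

# Toy data with an exact linear relationship y = 2x.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])

# Scaling and regression chained behind one fit/predict interface.
model = make_pipeline(StandardScaler(), LinearRegression())
model.fit(X, y)
pred = model.predict([[5.0]])
# pred is approximately [10.0]
```

Because every scikit-learn estimator follows the same interface, swapping `LinearRegression` for any other regressor leaves the rest of the code untouched — the consistency the abstract emphasizes.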
Article
Full-text available
In most situations, simple techniques for handling missing data (such as complete case analysis, overall mean imputation, and the missing-indicator method) produce biased results, whereas imputation techniques yield valid results without complicating the analysis once the imputations are carried out. Imputation techniques are based on the idea that any subject in a study sample can be replaced by a new randomly chosen subject from the same source population. Imputation of missing data on a variable means replacing the missing value with a value drawn from an estimate of the distribution of this variable. In single imputation, only one estimate is used. In multiple imputation, various estimates are used, reflecting the uncertainty in the estimation of this distribution. Under the general conditions of so-called missing at random and missing completely at random, both single and multiple imputations result in unbiased estimates of study associations. But single imputation results in too small estimated standard errors, whereas multiple imputation results in correctly estimated standard errors and confidence intervals. In this article we explain why all this is the case, and use a simple simulation study to demonstrate our explanations. We also explain and illustrate why two frequently used methods to handle missing data, i.e., overall mean imputation and the missing-indicator method, almost always result in biased estimates.
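The variance-shrinking bias of overall mean imputation described above is easy to demonstrate numerically. A small simulation sketch (NumPy only; a simple random-draw imputation stands in for distribution-based single imputation):

```python
import numpy as np

rng = np.random.default_rng(42)
complete = rng.normal(loc=50.0, scale=10.0, size=10_000)

# Make roughly 30% of values missing completely at random (MCAR).
missing = rng.random(complete.size) < 0.3
observed = complete[~missing]

# Overall mean imputation: every missing value becomes the observed mean.
mean_imputed = complete.copy()
mean_imputed[missing] = observed.mean()

# Draw-based (single) imputation: missing values are drawn from an
# estimate of the variable's distribution, preserving its spread.
draw_imputed = complete.copy()
draw_imputed[missing] = rng.choice(observed, size=int(missing.sum()))

# Mean imputation visibly understates the standard deviation, which is
# why downstream standard errors come out too small.
```

This only illustrates the spread-shrinking mechanism; as the article explains, even draw-based single imputation still understates standard errors, which is what multiple imputation corrects.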
Article
We examine counterfactual explanations for explaining the decisions made by model-based AI systems. The counterfactual approach we consider defines an explanation as a set of the system’s data inputs that causally drives the decision (i.e., changing the inputs in the set changes the decision) and is irreducible (i.e., changing any subset of the inputs does not change the decision). We (1) demonstrate how this framework may be used to provide explanations for decisions made by general data-driven AI systems that can incorporate features with arbitrary data types and multiple predictive models, and (2) propose a heuristic procedure to find the most useful explanations depending on the context. We then contrast counterfactual explanations with methods that explain model predictions by weighting features according to their importance (e.g., Shapley additive explanations [SHAP], local interpretable model-agnostic explanations [LIME]) and present two fundamental reasons why we should carefully consider whether importance-weight explanations are well suited to explain system decisions. Specifically, we show that (1) features with a large importance weight for a model prediction may not affect the corresponding decision, and (2) importance weights are insufficient to communicate whether and how features influence decisions. We demonstrate this with several concise examples and three detailed case studies that compare the counterfactual approach with SHAP to illustrate conditions under which counterfactual explanations explain data-driven decisions better than importance weights.
Article
This book covers topics in portfolio management and multicriteria decision analysis (MCDA), presenting a transparent and unified methodology for the portfolio construction process. The most important feature of the book includes the proposed methodological framework that integrates two individual subsystems, the portfolio selection subsystem and the portfolio optimization subsystem. An additional highlight of the book includes the detailed, step-by-step implementation of the proposed multicriteria algorithms in Python. The implementation is presented in detail; each step is elaborately described, from the input of the data to the extraction of the results. Algorithms are organized into small cells of code, accompanied by targeted remarks and comments, in order to help the reader to fully understand their mechanics. Readers are provided with a link to access the source code through GitHub. This Work may also be considered as a reference which presents the state-of-art research on portfolio construction with multiple and complex investment objectives and constraints. The book consists of eight chapters. A brief introduction is provided in Chapter 1. The fundamental issues of modern portfolio theory are discussed in Chapter 2. In Chapter 3, the various multicriteria decision aid methods, either discrete or continuous, are concisely described. In Chapter 4, a comprehensive review of the published literature in the field of multicriteria portfolio management is considered. In Chapter 5, an integrated and original multicriteria portfolio construction methodology is developed. Chapter 6 presents the web-based information system, in which the suggested methodological framework has been implemented. In Chapter 7, the experimental application of the proposed methodology is discussed and in Chapter 8, the authors provide overall conclusions. 
The readership of the book aims to be a diverse group, including fund managers, risk managers, investment advisors, bankers, private investors, analytics scientists, operations research scientists, and computer engineers, to name just a few. Portions of the book may be used as instructional material for either advanced undergraduate or post-graduate courses in investment analysis, portfolio engineering, decision science, computer science, or financial engineering.
Article
Building energy prediction plays a vital role in developing model predictive controllers for consumers and in optimizing energy distribution plans for utilities. Common approaches for energy prediction include physical models, data-driven models and hybrid models. Among them, data-driven approaches have become a popular topic in recent years due to their ability to discover statistical patterns without expert knowledge. To acquire the latest research trends, this study first summarizes the limitations of earlier reviews: they seldom present a comprehensive review of the entire data-driven process for building energy prediction, and they rarely summarize the input-updating strategies used when applying a trained data-driven model to multi-step energy prediction. To overcome these gaps, this paper provides a comprehensive review of building energy prediction, covering the entire data-driven process, including feature engineering, potential data-driven models and expected outputs. The distribution of 105 papers, which focus on building energy prediction by data-driven approaches, is reviewed over data source, feature types, model utilization and prediction outputs. Then, in order to implement trained data-driven models in multi-step prediction, input-updating strategies are reviewed to deal with the time-series property of energy-related data. Finally, the review concludes with some potential future research directions based on a discussion of existing research gaps.
Article
Integration of photovoltaics into power grids is difficult, as solar energy is highly dependent on climate and geography, often fluctuating erratically. This causes penetration problems and voltage surges, system instability, inefficient utility planning and financial loss. Forecast models can help; however, time stamp, forecast horizon, input correlation analysis, data pre- and post-processing, weather classification, network optimization, uncertainty quantification and performance evaluation need consideration. Thus, contemporary forecasting techniques are reviewed and evaluated. Input correlation analyses reveal that solar irradiance is most correlated with photovoltaic output, so weather classification and cloud motion study are crucial. Moreover, the best data cleansing processes, normalization and wavelet transforms, and augmentation using generative adversarial networks are recommended for network training and forecasting. Furthermore, optimization of inputs and network parameters, using genetic algorithms and particle swarm optimization, is emphasized. Next, the established performance evaluation metrics MAE, RMSE and MAPE are discussed, with suggestions for including economic utility metrics. Subsequently, modelling approaches are critiqued, objectively compared and categorized into physical, statistical, artificial intelligence, ensemble and hybrid approaches. It is determined that ensembles of artificial neural networks are best for short-term photovoltaic power forecasting, that the online sequential extreme learning machine is superb for adaptive networks, and that the bootstrap technique is optimal for estimating uncertainty. Additionally, the convolutional neural network is found to excel at eliciting a model's deep underlying non-linear input-output relationships. The conclusions drawn impart fresh insights into photovoltaic power forecast initiatives, especially in the use of hybrid artificial neural networks and evolutionary algorithms.
Article
Accurate forecasting of wind speed is a challenging and difficult task. However, this task is of great significance for wind farm scheduling and safe integration into the grid. In this paper, the wind speed at 10 or 30 min is predicted using only historical wind speed data. The existing single models are not enough to overcome the instability and inherent complexity of wind speed. To enhance forecasting ability, a novel hybrid model based on complementary ensemble empirical mode decomposition (CEEMD) and modified wind driven optimization is introduced for wind speed forecasting in this paper. CEEMD is utilized to decompose the original wind speed series into several intrinsic mode functions (IMFs), and each IMF is forecasted by a back propagation neural network (BP). A new optimization algorithm combining the Broyden family and wind driven optimization is presented and applied to optimize the initial weights and thresholds of the BP. Finally, all forecasted IMFs are integrated into the final forecasts. The 10-min and 30-min wind speed data from Shandong Province, China, were used as the case study, and the results confirm that the proposed hybrid model can improve forecasting accuracy and stability.
Conference Paper
Tree boosting is a highly effective and widely used machine learning method. In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges. We propose a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning. More importantly, we provide insights on cache access patterns, data compression and sharding to build a scalable tree boosting system. By combining these insights, XGBoost scales beyond billions of examples using far fewer resources than existing systems.
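The residual-fitting loop at the heart of tree boosting can be sketched with depth-1 regression stumps: each round fits a stump to the current residuals and adds a damped copy to the ensemble. This NumPy sketch illustrates the principle only, not XGBoost's sparsity-aware algorithm or weighted quantile sketch:

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-split regression stump on one feature (exhaustive scan)."""
    best = None
    for threshold in np.unique(x):
        left = residual[x <= threshold]
        right = residual[x > threshold]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, threshold, left.mean(), right.mean())
    return best[1:]                          # (threshold, left_value, right_value)

def boost(x, y, n_rounds=300, lr=0.1):
    """Gradient boosting: each stump is fitted to the current residuals."""
    pred = np.zeros_like(y)
    stumps = []
    for _ in range(n_rounds):
        t, lv, rv = fit_stump(x, y - pred)   # residuals = negative gradient of MSE
        pred += lr * np.where(x <= t, lv, rv)
        stumps.append((t, lv, rv))
    return pred, stumps

x = np.linspace(0.0, 1.0, 100)
y = np.sin(2 * np.pi * x)
pred, _ = boost(x, y)
mse = np.mean((y - pred) ** 2)
# the boosted ensemble fits the curve with mse well below var(y) = 0.5
```

Real systems like XGBoost replace the exhaustive split scan with approximate quantile-based candidates, add regularization to the leaf values, and exploit sparsity and caching, which is what makes them scale to billions of examples.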
Article
Due to the challenge of climate and energy crisis, renewable energy generation including solar generation has experienced significant growth. Increasingly high penetration level of photovoltaic (PV) generation arises in smart grid. Solar power is intermittent and variable, as the solar source at the ground level is highly dependent on cloud cover variability, atmospheric aerosol levels, and other atmosphere parameters. The inherent variability of large-scale solar generation introduces significant challenges to smart grid energy management. Accurate forecasting of solar power/irradiance is critical to secure economic operation of the smart grid. This paper provides a comprehensive review of the theoretical forecasting methodologies for both solar resource and PV power. Applications of solar forecasting in energy management of smart grid are also investigated in detail.
Article
Hyperparameter learning has traditionally been a manual task because of the limited number of trials. Today's computing infrastructures allow bigger evaluation budgets, thus opening the way for algorithmic approaches. Recently, surrogate-based optimization was successfully applied to hyperparameter learning for deep belief networks and to WEKA classifiers. The methods combined brute force computational power with model building about the behavior of the error function in the hyperparameter space, and they could significantly improve on manual hyperparameter tuning. What may make experienced practitioners even better at hyperparameter optimization is their ability to generalize across similar learning problems. In this paper, we propose a generic method to incorporate knowledge from previous experiments when simultaneously tuning a learning algorithm on new problems at hand. To this end, we combine surrogate-based ranking and optimization techniques for surrogate-based collaborative tuning (SCoT). We demonstrate SCoT in two experiments where it outperforms standard tuning techniques and single-problem surrogate-based optimization.
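The surrogate-based idea described above — fit a cheap model of the error surface over the hyperparameter space, then query the surrogate for the next trial — can be sketched with a quadratic surrogate in one dimension. The `validation_error` function below is an invented stand-in for an expensive training run:

```python
import numpy as np

def validation_error(lr):
    """Stand-in for an expensive training run: error as a function of
    learning rate, minimized at lr = 1e-2 by construction."""
    return (np.log10(lr) + 2.0) ** 2 + 0.1

# A few initial trials at coarse hyperparameter settings.
trials = np.array([1e-4, 1e-3, 1e-1, 1.0])
errors = np.array([validation_error(lr) for lr in trials])

# Fit a quadratic surrogate in log-space and take its minimizer as the
# next configuration to evaluate (instead of another expensive grid sweep).
logs = np.log10(trials)
a, b, c = np.polyfit(logs, errors, deg=2)    # highest-degree coefficient first
next_log_lr = -b / (2 * a)                   # vertex of the fitted parabola
next_lr = 10.0 ** next_log_lr
# the surrogate proposes lr close to the true optimum 1e-2
```

Methods like SCoT replace this toy parabola with a learned ranking surrogate shared across datasets, so knowledge from tuning previous problems steers the trials on a new one.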
Article
This paper reviews state-of-the-art on wind speed/power forecasting and solar irradiance forecasting with ensemble methods. The ensemble forecasting methods are grouped into two main categories: competitive ensemble forecasting and cooperative ensemble forecasting. The competitive ensemble forecasting is further categorized based on data diversity and parameter diversity. The cooperative ensemble forecasting is divided according to pre-processing and post-processing. Typical articles are discussed according to each category and their characteristics are highlighted. We also conduct comparisons based on reported results and comparisons based on simulations conducted by us. Suggestions for future research include ensemble of different paradigms and inter-category ensemble methods among others.
Article
The smart grid is conceived of as an electric grid that can deliver electricity in a controlled, smart way from points of generation to active consumers. Demand response (DR), by promoting the interaction and responsiveness of customers, may offer a broad range of potential benefits for system operation and expansion and for market efficiency. Moreover, by improving the reliability of the power system and, in the long term, lowering peak demand, DR reduces overall plant and capital cost investments and postpones the need for network upgrades. This paper presents a survey of DR potentials and benefits in smart grids. Innovative enabling technologies and systems, such as smart meters, energy controllers, and communication systems, which are decisive in facilitating the coordination of efficiency and DR in a smart grid, are described and discussed with reference to real industrial case studies and research projects.
Article
The performance of photovoltaic modules is influenced by the solar spectrum even under the same solar irradiance conditions. The spectral factor (SF) is a useful index indicating the ratio of available solar irradiance between actual solar spectra and the standard AM1.5G spectrum. In this study, the influence of the solar spectrum on photovoltaic performance in cloudy as well as fine weather is quantitatively evaluated using the reciprocal of SF (SF−1). In fine weather, the SF−1 suggests that the solar spectrum has little influence (within a few percent) on the performance of pc-Si, a-Si:H/sc-Si, and copper indium gallium (di)selenide modules, because of the "offset effect". The performance of a-Si:H modules and the top layers of a-Si:H/µc-Si:H modules can vary by more than ±10% under extreme conditions in Japan, while the seasonal and locational variations in the SF−1 of the bottom layers are about several percent. A negative correlation between the top and bottom layers indicates that the performance of a-Si:H/µc-Si:H modules does not, through the influence of the solar spectrum, exceed the performance at which the currents of the top and bottom layers are balanced. In cloudy weather, the SF−1 of the pc-Si, a-Si:H/sc-Si, and copper indium gallium (di)selenide modules is generally higher than in fine weather, which is favorable for performance. The a-Si:H module and the top layer of the a-Si:H/µc-Si:H module show much higher SF−1 than in fine weather, while the SF−1 of the bottom layer depends simply on neither season nor location. Copyright © 2011 John Wiley & Sons, Ltd.
Article
The liberalization of electricity markets more than ten years ago in the vast majority of developed countries introduced the need for modelling and forecasting electricity prices and volatilities, both in the short and the long term. There is thus a need for methodology able to deal with the most important features of electricity price series, which are well known for presenting structure not only in the conditional mean but also in time-varying conditional variances. In this work we propose a new model that allows the extraction of conditionally heteroskedastic common factors from the vector of electricity prices. These common factors are estimated jointly with their relationship to the original vector of series and the dynamics affecting both their conditional mean and variance. The estimation of the model is carried out under a state-space formulation. The proposed model is applied to extract seasonal common dynamic factors as well as common volatility factors for electricity prices, and the estimation results are used to forecast electricity prices and their volatilities in the Spanish zone of the Iberian Market. Several simplified or alternative models are also considered as benchmarks to illustrate that the proposed approach is superior to all of them in terms of explanatory and predictive power.
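The idea of a common factor driving several price series with time-varying volatility can be illustrated with a much lighter toy pipeline than the paper's state-space estimator: simulate a panel of zonal prices loading on one heteroskedastic factor, recover the factor with a first principal component, and track its conditional variance with an EWMA filter (a RiskMetrics-style stand-in for a GARCH-type specification). All numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500
# Common factor with time-varying volatility (stand-in for a GARCH process)
vol = 0.5 + 0.4 * np.sin(np.linspace(0, 8, T)) ** 2
factor = rng.normal(0, 1, T) * vol
# Panel of 4 zonal price series loading on the common factor, plus noise
loadings = np.array([1.0, 0.8, 1.2, 0.6])
prices = np.outer(factor, loadings) + rng.normal(0, 0.2, (T, 4))

# Extract the common factor as the first principal component
centered = prices - prices.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
est_factor = centered @ vt[0]

# EWMA conditional variance of the extracted factor: a lightweight
# stand-in for the conditionally heteroskedastic dynamics in the paper
lam = 0.94
var = np.empty(T)
var[0] = est_factor.var()
for t in range(1, T):
    var[t] = lam * var[t - 1] + (1 - lam) * est_factor[t - 1] ** 2
```

The extracted component correlates strongly (up to sign) with the simulated factor, which is the property a common-factor model relies on.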
Article
The relative abilities of two dimensioned statistics, the root-mean-square error (RMSE) and the mean absolute error (MAE), to describe average model-performance error are examined. The RMSE is of special interest because it is widely reported in the climatic and environmental literature; nevertheless, it is an inappropriate and often misinterpreted measure of average error. RMSE is inappropriate because it is a function of three characteristics of a set of errors, rather than of one (the average error): it varies with the variability within the distribution of error magnitudes and with the square root of the number of errors (n^1/2), as well as with the average-error magnitude (MAE). Our findings indicate that MAE is a more natural measure of average error and, unlike RMSE, is unambiguous. Dimensioned evaluations and inter-comparisons of average model-performance error should therefore be based on MAE.
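The RMSE/MAE distinction is easy to see numerically: for a fixed MAE, RMSE grows with the spread of the error magnitudes. A minimal example with one outlier among otherwise equal errors:

```python
import numpy as np

errors = np.array([1.0, 1.0, 1.0, 5.0])  # one outlier among small errors

mae = np.mean(np.abs(errors))            # (1+1+1+5)/4 = 2.0
rmse = np.sqrt(np.mean(errors ** 2))     # sqrt(28/4) = sqrt(7) ≈ 2.65

# RMSE >= MAE always; the gap grows with the variability of the error
# magnitudes, which is why RMSE conflates average size with spread.
print(mae, round(float(rmse), 2))
```

Four errors of exactly 2.0 would give the same MAE but an RMSE of 2.0, showing that RMSE alone cannot distinguish the two error sets' average magnitude from their spread.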
Article
A smart meter is an advanced energy meter that measures electrical energy consumption and provides additional information compared to a conventional energy meter. Integrating smart meters into the electricity grid involves implementing a variety of techniques and software, depending on the features the situation demands. The design of a smart meter depends on the requirements of the utility company as well as the customer. This paper discusses various features and technologies that can be integrated with a smart meter. Deployment of smart meters requires proper selection and implementation of a communication network satisfying the security standards of smart grid communication. This paper outlines various issues and challenges involved in the design, deployment, utilization, and maintenance of smart meter infrastructure. In addition, several applications and advantages of smart meters in view of the future electricity market are discussed in detail, the importance of introducing smart meters in developing countries is explained, and the status of smart metering in various countries is illustrated.
Article
Renewable technologies are considered clean sources of energy, and optimal use of these resources minimizes environmental impacts, produces minimal secondary waste, and is sustainable with respect to current and future economic and social needs. The sun is the source of all energy; its primary forms are heat and light. Sunlight and heat are transformed and absorbed by the environment in a multitude of ways, and some of these transformations result in renewable energy flows such as biomass and wind energy. Renewable energy technologies provide an excellent opportunity to mitigate greenhouse gas emissions and reduce global warming by substituting for conventional energy sources. This article reviews the scope of CO2 mitigation through solar cookers, water heaters, dryers, biofuels, improved cookstoves, and hydrogen.
Article
Worldwide, wind power is rapidly becoming a generation technology of significance. The unpredictability and variability of wind power generation are among the fundamental difficulties faced by power system operators, and good forecasting tools are urgently needed to address the issues associated with integrating wind energy into the power system. This paper gives a bibliographical survey of the general background of research and development in the fields of wind speed and wind power forecasting. Based on an assessment of wind power forecasting models, further directions for research and application are proposed.
Conference Paper
Wind power may present undesirable discontinuities and fluctuations due to considerable variations in wind speed, which may adversely affect the smooth operation of the grid. Effective wind forecasting is essential for reporting the amount of supplied energy with high accuracy, which is crucial for power system operators planning energy resources. Variations in wind power cannot be sufficiently estimated by persistence-type basic forecasting methods, particularly in the medium and long term. Therefore, a new statistical method based on independent component analysis (ICA) and an autoregressive (AR) model is presented in this paper. ICA is utilized to extract hidden factors that may exist in the wind speed time series. ICA methods that exploit temporal structure, such as second-order blind identification (SOBI), are especially suitable as a preliminary step in wind speed forecasting.
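The two-stage ICA-then-AR pipeline can be sketched on synthetic data. This is an illustrative toy, not the paper's method: FastICA is used in place of SOBI, the mixed "wind speed" series and mixing matrix are invented, and a plain least-squares AR(p) fit provides the one-step-ahead forecast of each component.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
t = np.arange(600)
# Two hidden factors mixed into two observed "wind speed" series
# (a toy stand-in for the latent structure extracted with ICA/SOBI)
s1 = np.sin(2 * np.pi * t / 50)
s2 = np.sign(np.sin(2 * np.pi * t / 37))
sources = np.c_[s1, s2] + 0.05 * rng.normal(size=(600, 2))
mixing = np.array([[1.0, 0.5], [0.4, 1.2]])
observed = sources @ mixing.T

# Step 1: unmix the observations into independent components
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(observed)

# Step 2: fit an AR(p) model to each component by least squares
# and produce a one-step-ahead forecast
def ar_forecast(x, p=5):
    X = np.column_stack([x[p - k - 1:-k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return x[-p:][::-1] @ coef

forecasts = [ar_forecast(c) for c in components.T]
```

Forecasting the components separately and remixing them (via the estimated mixing matrix) is the usual way such a decomposition feeds back into a forecast of the original series.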
LightGBM: A highly efficient gradient boosting decision tree
  • Ke
A gentle introduction to imputation of missing values
  • Donders, A. R. T.
  • van der Heijden, G.
  • Stijnen, T.
  • Moons, K. G.