Very short-term electricity load demand forecasting using support vector regression.
ABSTRACT: In this paper, we present a new approach to very short-term electricity load demand forecasting. In particular, we apply support vector regression to predict the load demand every 5 minutes based on historical data from the Australian electricity operator NEMMCO for 2006-2008. The results show that support vector regression is a very promising approach, outperforming backpropagation neural networks, the most popular prediction model among both industry forecasters and researchers. Interestingly, however, support vector regression gives results similar to those of the simpler linear regression and least mean squares models. We also discuss the performance of four different feature sets with these prediction models and the application of a correlation-based subset feature selection method.
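The 5-minute-ahead prediction setup described above can be sketched as follows. This is a minimal illustration, assuming scikit-learn's SVR; the synthetic daily-cycle series and the six-lag feature set stand in for the NEMMCO data and the paper's actual feature sets, which are not reproduced here.

```python
# Hedged sketch: 5-minute-ahead load forecasting with support vector
# regression. The synthetic "load" series below is illustrative only.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic load: a daily cycle (288 five-minute slots per day) plus noise.
t = np.arange(2000)
load = 100 + 20 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 2, t.size)

# Feature set (assumed): the previous 6 observations, i.e. 30 minutes of history.
lags = 6
X = np.column_stack([load[i:i - lags] for i in range(lags)])
y = load[lags:]

split = 1500  # train on the first part of the series, test on the rest
model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"MAPE: {mape:.2f}%")
```

The same lagged feature matrix can be fed to linear regression or a least mean squares filter for the comparison the abstract describes.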
Conference Proceeding: Short term load forecasting
ABSTRACT: The exponential smoothing method and the Box-Jenkins approach to time series analysis for short term load forecasting are presented, and both methods are applied to load forecasting for a public electric utility in Slovenia. The methods are compared for accuracy and simplicity. Electrotechnical Conference, 1991. Proceedings, 6th Mediterranean; 06/1991
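Of the two methods compared above, simple exponential smoothing is the more easily sketched. A minimal hand-rolled version, with an illustrative smoothing factor (the paper's actual parameter choices are not given here):

```python
# Simple exponential smoothing: each forecast is a weighted blend of the
# latest observation and the previous forecast. alpha is illustrative.
def exponential_smoothing(series, alpha=0.3):
    """Return one-step-ahead smoothed values for the series."""
    smoothed = [series[0]]  # seed with the first observation
    for obs in series[1:]:
        smoothed.append(alpha * obs + (1 - alpha) * smoothed[-1])
    return smoothed

loads = [100, 104, 101, 99, 103, 107, 105]
print(exponential_smoothing(loads))  # first two values: 100, 101.2
```

The Box-Jenkins approach instead fits an ARIMA model to the series, trading this simplicity for a more flexible correlation structure.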
Conference Proceeding: Correlation-based Feature Selection for Discrete and Numeric Class Machine Learning.
ABSTRACT: Algorithms for feature selection fall into two broad categories: wrappers that use the learning algorithm itself to evaluate the usefulness of features, and filters that evaluate features according to heuristics based on general characteristics of the data. For application to large databases, filters have proven to be more practical than wrappers because they are much faster. However, most existing filter algorithms only work with discrete classification problems. This paper describes a fast, correlation-based filter algorithm that can be applied to continuous and discrete problems. The algorithm often outperforms the well-known ReliefF attribute estimator when used as a preprocessing step for naive Bayes, instance-based learning, decision trees, locally weighted regression, and model trees. It performs more feature selection than ReliefF does, reducing the data dimensionality by fifty percent in most cases. Also, decision and model trees built from the preprocessed data are often significantly smaller. Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000), Stanford University, Stanford, CA, USA, June 29 - July 2, 2000; 01/2000
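The filter heuristic described above scores a feature subset highly when its features correlate with the class but not with each other. A sketch of that "merit" score, using absolute Pearson correlation as a stand-in on toy continuous data (Hall's algorithm also handles discrete attributes, via symmetrical uncertainty, and searches subsets rather than scoring fixed ones):

```python
# Hedged sketch of the CFS merit heuristic:
#   merit = k * r_cf / sqrt(k + k*(k-1) * r_ff)
# where r_cf is the mean feature-class correlation and r_ff the mean
# feature-feature correlation over the k features in the subset.
import math
import numpy as np

def cfs_merit(X, y, subset):
    """Score a feature subset: reward class relevance, punish redundancy."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        return r_cf
    pairs = [(a, b) for i, a in enumerate(subset) for b in subset[i + 1:]]
    r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1]) for a, b in pairs])
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)

rng = np.random.default_rng(1)
y = rng.normal(size=200)
X = np.column_stack([y + rng.normal(0, 0.5, 200),   # relevant
                     y + rng.normal(0, 0.5, 200),   # relevant but redundant
                     rng.normal(size=200)])          # irrelevant
print(cfs_merit(X, y, [0]), cfs_merit(X, y, [0, 1]), cfs_merit(X, y, [0, 2]))
```

On this toy data, pairing the relevant feature with the irrelevant one lowers the merit, which is exactly the behavior the filter exploits when searching for subsets.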