Article

An approach for constructing parsimonious generalized Gaussian kernel regression models

Intelligent Systems and Diagnostics Group, Department of Electronic and Computer Engineering, University of Portsmouth, Anglesea Building, Anglesea Road, Portsmouth PO1 3DJ, UK; School of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, UK
Neurocomputing. 12/2004; DOI: 10.1016/j.neucom.2004.06.003
Source: DBLP

ABSTRACT: The paper proposes a novel construction algorithm for generalized Gaussian kernel regression models. Each kernel regressor in the generalized Gaussian kernel regression model has an individual diagonal covariance matrix, which is determined by maximizing the correlation between the training data and the regressor using a repeated guided random search based on boosting optimization. The standard orthogonal least squares algorithm is then used to select a sparse generalized kernel regression model from the resulting full regression matrix. Experimental results involving two real data sets demonstrate the effectiveness of the proposed regression modeling approach.
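As a rough, self-contained illustration of the pipeline the abstract describes (not the authors' code), here is a minimal NumPy sketch: each candidate kernel is centred on a training point and given its own diagonal covariance by maximizing the absolute correlation between the kernel output and the training targets, with a plain log-uniform random search standing in for the paper's repeated guided random search with boosting optimization; a standard orthogonal-least-squares forward-selection pass then picks a sparse subset of the full regression matrix. All function names, parameter values and the toy data are illustrative assumptions.

```python
import numpy as np

def gauss_kernel(X, center, diag_cov):
    """Generalized Gaussian kernel with its own diagonal covariance:
    phi(x) = exp(-0.5 * (x - c)^T diag(diag_cov)^{-1} (x - c))."""
    diff = X - center
    return np.exp(-0.5 * np.sum(diff ** 2 / diag_cov, axis=1))

def tune_covariance(X, y, center, n_trials=100, seed=None):
    """Choose the diagonal covariance maximizing |corr(phi, y)|.
    A plain random search stands in for the paper's repeated guided
    random search with boosting optimization (simplification)."""
    rng = np.random.default_rng(seed)
    best_cov, best_corr = np.ones(X.shape[1]), -np.inf
    for _ in range(n_trials):
        diag_cov = 10.0 ** rng.uniform(-2.0, 2.0, size=X.shape[1])
        corr = abs(np.corrcoef(gauss_kernel(X, center, diag_cov), y)[0, 1])
        if corr > best_corr:
            best_corr, best_cov = corr, diag_cov
    return best_cov

def ols_select(Phi, y, n_terms):
    """Standard OLS forward selection: greedily pick columns of the full
    regression matrix by error-reduction ratio (Gram-Schmidt form)."""
    selected = []
    basis = Phi.copy()
    for _ in range(n_terms):
        err = (basis.T @ y) ** 2 / (np.sum(basis ** 2, axis=0) * (y @ y) + 1e-12)
        if selected:
            err[selected] = -np.inf
        k = int(np.argmax(err))
        selected.append(k)
        w = basis[:, k].copy()
        # Orthogonalize the remaining columns against the chosen one.
        basis -= np.outer(w, (basis.T @ w) / (w @ w))
        basis[:, k] = 0.0
    return selected

# Toy data; one candidate kernel per training point, each with its own
# tuned diagonal covariance, stacked into the full regression matrix.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(80, 2))
y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(80)

Phi = np.column_stack([
    gauss_kernel(X, X[i], tune_covariance(X, y, X[i], seed=i))
    for i in range(len(X))
])
terms = ols_select(Phi, y, n_terms=8)
theta, *_ = np.linalg.lstsq(Phi[:, terms], y, rcond=None)
rmse = np.sqrt(np.mean((y - Phi[:, terms] @ theta) ** 2))
print("selected kernels:", terms, " training RMSE:", float(rmse))
```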

Related publications:

  • ABSTRACT: Instead of minimizing the observed training error, Support Vector Regression (SVR) attempts to minimize a bound on the generalization error so as to achieve good generalization performance. The idea of SVR is to compute a linear regression function in a high-dimensional feature space into which the input data are mapped via a nonlinear function. SVR has been applied in many fields: time series and financial (noisy and risky) prediction, approximation of complex engineering analyses, and convex quadratic programming with various choices of loss function, among others. This paper reviews the existing theory, methods, recent developments and scope of SVR. (A minimal SVR usage sketch appears after this list.)
    Neural Information Processing – Letters and Reviews. 11/2007; 11.
  • ABSTRACT: This paper presents a growing algorithm for designing the architecture of an RBF neural network, called the growing RBF (GRBF) algorithm. GRBF starts from a single prototype randomly initialized in the feature space, and the algorithm consists of two major parts: a structure-learning phase and a parameter-adjusting phase. In the structure-learning phase, a growing strategy decides when and where hidden units should be added to the RBF network, based on a sensitivity analysis of the network output. In the parameter-adjusting phase, all weights of the network are adjusted to improve its overall performance. Finally, the proposed GRBF network is employed to track nonlinear functions. The computational complexity analysis and the simulation results confirm the efficiency of the proposed algorithm. (A minimal growing-RBF sketch appears after this list.)
    08/2009: pages 73-82;
  • ABSTRACT: The radial basis function (RBF) network has its foundation in conventional approximation theory and has the capability of universal approximation. The RBF network is a popular alternative to the well-known multilayer perceptron (MLP), since it has a simpler structure and a much faster training process. In this paper, we give a comprehensive survey of the RBF network and its learning. Many aspects of the RBF network are described, such as network structure, universal approximation capability, radial basis functions, RBF network learning, structure optimization, normalized RBF networks, application to dynamic system modeling, and nonlinear complex-valued signal processing. We also compare the features and capabilities of the two models.
    ISRN Applied Mathematics. 03/2012; 2012.
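For the SVR entry above, a minimal usage sketch with scikit-learn's SVR class (assuming scikit-learn is available; the kernel choice and the C, epsilon and gamma values are illustrative, not taken from the reviewed paper):

```python
import numpy as np
from sklearn.svm import SVR

# Toy 1-D regression problem: a noisy sine wave.
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 2.0 * np.pi, size=(80, 1)), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(80)

# RBF-kernel SVR: inputs are implicitly mapped into a high-dimensional
# feature space, where a linear regression function is fitted under the
# epsilon-insensitive loss (errors smaller than epsilon are not penalized).
model = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma="scale")
model.fit(X, y)

y_hat = model.predict(X)
print("support vectors used:", len(model.support_))
print("training RMSE:", float(np.sqrt(np.mean((y - y_hat) ** 2))))
```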
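For the growing-RBF entry, a loose sketch of the two-phase growing idea (assumptions: Gaussian units with a shared fixed width, new units placed at the training point with the largest residual as a stand-in for the paper's output-sensitivity analysis, and least-squares weight refitting as the parameter-adjusting phase; all names and values are illustrative):

```python
import numpy as np

def rbf(X, centers, width):
    """Gaussian RBF activations, one column per hidden unit."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def grow_rbf(X, y, max_units=12, tol=1e-3, width=0.5):
    """Grow an RBF network one hidden unit at a time."""
    rng = np.random.default_rng(0)
    # Structure-learning phase starts from a single random prototype.
    centers = X[rng.integers(len(X))][None, :]
    while True:
        H = rbf(X, centers, width)
        # Parameter-adjusting phase: refit all output weights.
        w, *_ = np.linalg.lstsq(H, y, rcond=None)
        residual = y - H @ w
        if np.sqrt(np.mean(residual ** 2)) < tol or len(centers) >= max_units:
            return centers, w
        # Grow where the current network fails worst (stand-in for the
        # sensitivity analysis of the network output).
        centers = np.vstack([centers, X[np.argmax(np.abs(residual))]])

X = np.linspace(-3.0, 3.0, 120)[:, None]
y = np.sinc(X).ravel()
centers, w = grow_rbf(X, y)
print("hidden units grown:", len(centers))
```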
