Conference Proceeding

Value-at-risk forecasting with combined neural network model.

01/2010; DOI: 10.1109/ICNC.2010.5583173. In proceedings of: Sixth International Conference on Natural Computation, ICNC 2010, Yantai, Shandong, China, 10-12 August 2010
Source: DBLP

ABSTRACT This paper develops a neural network model for solving Value-at-Risk (VaR) forecasting problems. The application of forecasting methods within the neural network model is discussed, combining the normal-GARCH model and the grey forecasting model. Compared with traditional models, the new method is fast, easy to implement, and numerically reliable. After the model is described, experimental results from the Chinese equity market verify the effectiveness and applicability of the proposed approach.
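For readers unfamiliar with the normal-GARCH component named above, the following Python sketch shows how a one-step-ahead VaR forecast is typically computed from a GARCH(1,1) volatility recursion under the normality assumption. It is not the authors' implementation: the function names, the zero-mean return assumption, the quasi-MLE fitting routine, and the synthetic return series are all illustrative, and the grey forecasting model and the neural-network combination step are not reproduced here.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def garch11_neg_loglik(params, r):
    # Negative Gaussian log-likelihood for GARCH(1,1):
    #   sigma2[t] = w + a * r[t-1]**2 + b * sigma2[t-1]
    w, a, b = params
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()  # initialize the recursion at the sample variance
    for t in range(1, len(r)):
        sigma2[t] = w + a * r[t - 1] ** 2 + b * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

def fit_garch11(r):
    # Quasi-MLE with positivity bounds; stationarity (a + b < 1) is not
    # enforced in this sketch.
    res = minimize(garch11_neg_loglik, x0=[1e-6, 0.05, 0.90], args=(r,),
                   bounds=[(1e-12, None), (0.0, 1.0), (0.0, 1.0)])
    return res.x

def var_forecast(r, alpha=0.95):
    # One-step-ahead VaR under normality, assuming zero-mean returns:
    #   VaR[t+1] = -Phi^{-1}(1 - alpha) * sigma[t+1]
    w, a, b = fit_garch11(r)
    sigma2 = r.var()
    for t in range(1, len(r)):
        sigma2 = w + a * r[t - 1] ** 2 + b * sigma2
    sigma2_next = w + a * r[-1] ** 2 + b * sigma2  # one step beyond the sample
    return -norm.ppf(1 - alpha) * np.sqrt(sigma2_next)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    r = 0.01 * rng.standard_normal(1000)  # synthetic stand-in for daily equity returns
    print(f"one-day 95% VaR: {var_forecast(r):.4%}")

At the 95% level the Gaussian quantile is about 1.645, so the reported VaR is roughly 1.645 times the forecast standard deviation of the next day's return.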
