This paper develops a neural network model for Value-at-Risk (VaR) forecasting. The application of forecasting methods within the neural network model is discussed, combining a normal-GARCH model with a grey forecasting model. Compared with traditional models, the new method is fast, easy to implement, and numerically reliable. After the model is described, experimental results from the Chinese equity market verify the effectiveness and applicability of the proposed approach.
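As a rough illustration of the normal-GARCH ingredient mentioned above (not the paper's actual model, and with illustrative rather than fitted parameters), one-day VaR under conditional normality can be sketched as a GARCH(1,1) variance recursion followed by a normal quantile:

```python
import math

def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.9):
    """One-step-ahead conditional variance from a GARCH(1,1) recursion:
    var_t = omega + alpha * r_{t-1}^2 + beta * var_{t-1}.
    Parameter values here are illustrative, not fitted."""
    var = returns[0] ** 2  # crude initialization from the first observation
    for r in returns:
        var = omega + alpha * r ** 2 + beta * var
    return var

def normal_var(returns, z=2.326):
    """One-day Value-at-Risk under conditional normality.
    z = 2.326 is the one-tailed normal quantile for 99% confidence.
    Returned as a positive loss number."""
    sigma = math.sqrt(garch11_variance(returns))
    mu = sum(returns) / len(returns)
    return -(mu - z * sigma)
```

For example, `normal_var([0.01, -0.02, 0.005, -0.01])` returns a small positive loss quantile driven mostly by the conditional volatility term.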
ABSTRACT: Two methods for classification based on the Bayes strategy and nonparametric estimators for probability density functions are reviewed. The two methods are the probabilistic neural network (PNN) and the polynomial Adaline. Both methods involve one-pass learning algorithms that can be implemented directly in parallel neural network architectures. The performance of the two methods is compared with multipass backpropagation networks, and relative advantages and disadvantages are discussed. PNN and the polynomial Adaline are complementary techniques because they implement the same decision boundaries but have different advantages in applications. PNN is easy to use and is extremely fast for moderate-sized databases. For very large databases and for mature applications in which classification speed is more important than training speed, the polynomial equivalent can be found.
IEEE Transactions on Neural Networks, 04/1990.
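The PNN idea described in the abstract — a Parzen-window (Gaussian-kernel) density estimate per class, with the largest summed activation deciding the label — can be sketched as follows. This is a minimal illustration, not the paper's implementation; `sigma` and the toy data are assumptions:

```python
import math

def pnn_classify(x, train, sigma=0.3):
    """Probabilistic neural network sketch: one 'pattern unit' per
    training sample, a Gaussian kernel as the activation, and the
    class with the largest average activation wins.
    `train` maps a class label to a list of feature vectors."""
    def kernel(a, b):
        d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return math.exp(-d2 / (2 * sigma ** 2))
    scores = {c: sum(kernel(x, p) for p in pts) / len(pts)
              for c, pts in train.items()}
    return max(scores, key=scores.get)
```

Because "training" is just storing the samples, this is the one-pass property the abstract highlights: e.g. with `train = {"a": [[0, 0], [0.1, 0]], "b": [[1, 1], [0.9, 1]]}`, the point `[0.05, 0]` classifies as `"a"`.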
ABSTRACT: This article introduces Adaptive Resonance Theory 2-A (ART 2-A), an efficient algorithm that emulates the self-organizing pattern recognition and hypothesis testing properties of the ART 2 neural network architecture, but at a speed two to three orders of magnitude faster. Analysis and simulations show how the ART 2-A systems correspond to ART 2 dynamics at both the fast-learn limit and at intermediate learning rates. Intermediate learning rates permit fast commitment of category nodes but slow recoding, analogous to properties of word frequency effects, encoding specificity effects, and episodic memory. Better noise tolerance is hereby achieved without a loss of learning stability. The ART 2 and ART 2-A systems are contrasted with the leader algorithm. The speed of ART 2-A makes practical the use of ART 2 modules in large scale neural computation.
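A heavily simplified sketch of the ART-2A-style category loop contrasted with the leader algorithm above (an assumed simplification, not the published equations): normalize each input, match it against stored prototypes by dot product, commit a new category when the best match fails a vigilance test, and otherwise move the winning prototype toward the input with learning rate `beta`, where `beta = 1` corresponds to the fast-learn limit:

```python
import math

def _norm(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def art2a_cluster(patterns, vigilance=0.9, beta=0.5):
    """ART-2A-flavored clustering sketch (assumed simplification).
    Returns one category index per input pattern."""
    prototypes = []
    labels = []
    for p in patterns:
        p = _norm(p)
        best, score = None, -1.0
        for j, w in enumerate(prototypes):  # dot-product match
            s = sum(a * b for a, b in zip(p, w))
            if s > score:
                best, score = j, s
        if best is None or score < vigilance:
            prototypes.append(p)            # commit a new category node
            labels.append(len(prototypes) - 1)
        else:                               # recode the winner toward the input
            w = prototypes[best]
            prototypes[best] = _norm([(1 - beta) * a + beta * b
                                      for a, b in zip(w, p)])
            labels.append(best)
    return labels
```

Intermediate `beta` gives the fast-commit/slow-recode behavior the abstract describes: new categories are created in one shot, while existing prototypes drift only gradually.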