Fig 5 - uploaded by Eric Granger
Average performance of fuzzy ARTMAP (with MT+, MT-, WMT and PSO(MT)) versus training subset size for D µ (ξ tot = 13%). Error bars are standard error of the sample mean.

Source publication
Article
Training fuzzy ARTMAP neural networks for classification using data from complex real-world environments may lead to category proliferation and yield poor performance. This problem is known to occur whenever the training set contains noisy and overlapping data. Moreover, when the training set contains identical input patterns that belong to diffe...

Contexts in source publication

Context 1
... Synthetic data with overlapping class distributions: Figure 5 presents the average performance obtained when fuzzy ARTMAP is trained with the four MT strategies (MT-, MT+, WMT and PSO(MT)) on D µ (13%). The generalisation errors for the Quadratic Bayes classifier (CQB), as well as the theoretical probability of error (ξ tot ), are also shown for reference. ...
Context 2
... shown in Figure 5(a), PSO(MT) generally yields the lowest generalisation error over training set sizes, followed by WMT, MT+, and then MT-. With more than 20 training patterns per class, the error of both the MT- and MT+ algorithms tends to increase in a manner that is indicative of fuzzy ARTMAP overtraining [18]. ...
Context 3
... results indicate that the MT process of fuzzy ARTMAP has a considerable impact on performance obtained with overlapping data, especially when ε is optimized. As shown in Figure 5(d), when α = 0.001, β = 1 and ρ = 0, and class distributions overlap, the values of ε that minimize error tend to increase from about 0 towards 0.8 as the training set size grows. Higher ε settings tend to create a growing number of category hyperrectangles close to the boundary between classes. ...
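The MT strategies compared above differ only in the sign and size of ε applied when vigilance is reset after a predictive error. A minimal sketch of that reset rule (the function names and example weight values are illustrative, not taken from the paper):

```python
import numpy as np

def match_value(pattern, w):
    """Fuzzy ARTMAP match function: |pattern AND w| / |pattern|,
    where AND is the component-wise minimum."""
    return np.sum(np.minimum(pattern, w)) / np.sum(pattern)

def raise_vigilance(pattern, w_winner, epsilon):
    """Match tracking: after a predictive error, reset the internal
    vigilance just above (MT+, epsilon > 0) or just below (MT-,
    epsilon < 0) the winning category's match value, forcing the
    search to consider other, or newly created, categories."""
    return match_value(pattern, w_winner) + epsilon

# Complement-coded input and a hypothetical winning category weight:
pattern = np.array([0.5, 0.5, 0.5, 0.5])
w_winner = np.array([0.2, 0.2, 0.8, 0.8])
rho_plus = raise_vigilance(pattern, w_winner, 0.001)   # MT+
rho_minus = raise_vigilance(pattern, w_winner, -0.001)  # MT-
```

With larger positive ε, the reset vigilance rises further above the winner's match, which is consistent with the observation above that higher ε creates more categories near the class boundary.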