Fig 1 - uploaded by Denny Denny
Prototype vectors m_i, where i ∈ {4, 5, 6}, cannot be the BMU for x_i, since d(m, m_i) ≥ 2 · d(x_i, m). These prototype vectors can therefore be ignored when searching for the BMU.

Source publication
Chapter
Triangle inequality optimization is one of several strategies for the k-means algorithm that reduce the search space when finding the nearest prototype vector. This optimization can also be applied to Self-Organizing Map training, particularly when finding the best matching unit in the batch training approach. This paper investigates v...

Context in source publication

Context 1
... then no further BMU searching is necessary for x_i. The rest of the prototype vectors cannot be the BMU for x_i, as their distance to x_i cannot be smaller than d(x_i, m). Figure 1 shows the graphical representation in two-dimensional space. ...
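The pruning rule described above can be sketched in code. This is a minimal illustration, not the paper's implementation: it assumes Euclidean distance, a precomputed matrix `dist_mm` of prototype-to-prototype distances, and illustrative names such as `find_bmu_pruned`.

```python
import numpy as np

def find_bmu_pruned(x, prototypes, dist_mm):
    """Find the best matching unit (BMU) for x, skipping prototypes
    ruled out by the triangle inequality.

    dist_mm[i, j] holds precomputed distances d(m_i, m_j) between
    prototype vectors (names here are illustrative).
    """
    # Start with an arbitrary candidate BMU.
    bmu = 0
    d_best = np.linalg.norm(x - prototypes[0])
    for i in range(1, len(prototypes)):
        # If d(m, m_i) >= 2 * d(x, m), the triangle inequality gives
        # d(x, m_i) >= d(m, m_i) - d(x, m) >= d(x, m),
        # so m_i cannot be closer than the current candidate: skip it.
        if dist_mm[bmu, i] >= 2.0 * d_best:
            continue
        d_i = np.linalg.norm(x - prototypes[i])
        if d_i < d_best:
            d_best, bmu = d_i, i
    return bmu, d_best
```

The skipped prototypes never require a distance computation against x, which is where the savings come from when many prototypes are far from the current candidate.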

Similar publications

Article
The idea of reusing or transferring information from previously learned tasks (source tasks) for the learning of new tasks (target tasks) has the potential to significantly improve the sample efficiency of a reinforcement learning agent. In this work, we describe a novel approach for reusing previously acquired knowledge by using it to guide the ex...
Article
SOM is a popular artificial neural network algorithm for rational clustering of many different data sets. A disadvantage of SOM is that it can only run on a predefined, complete data set. Various problems arise when clustering time-stream data sets with standard SOM, since time-stream data sets are generated dependent...
Preprint
The idea of reusing or transferring information from previously learned tasks (source tasks) for the learning of new tasks (target tasks) has the potential to significantly improve the sample efficiency of a reinforcement learning agent. In this work, we describe a novel approach for reusing previously acquired knowledge by using it to guide the ex...