This paper investigates the stability of two recently proposed modified NLMS learning rules, each based on calculating the smallest weight change that stores the current training pattern exactly. The Lp norm (p = 1, 2, infinity) used to measure the weight update produces a different learning algorithm for each choice of p, and it is shown that both new learning rules (p = 1 and p = infinity) can become unstable, in the sense that the parameter error can increase without bound. This is in direct contrast to the standard (p = 2 norm) NLMS rule, which is unconditionally stable in the sense described in this paper (monotonically non-increasing weight error); indeed, the NLMS rule was originally derived to overcome such limitations. The conditions under which instability can occur are investigated both theoretically and in simulation, and they are shown to depend on the form of the input vector and only indirectly on the learning rate.
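The minimum-disturbance formulation described above (find the smallest Lp-norm weight change dw satisfying x^T (w + dw) = d) has standard closed-form solutions for p = 1, 2, and infinity. The sketch below is an illustrative implementation under that assumption, not the paper's exact algorithms; the function name `min_disturbance_update`, the step size `mu`, and the `eps` guard are hypothetical choices introduced here.

```python
import numpy as np

def min_disturbance_update(w, x, d, mu=1.0, p=2, eps=1e-12):
    """One minimum-disturbance update: the smallest (in the chosen Lp norm)
    weight change that stores the pattern (x, d) exactly when mu = 1.
    Illustrative sketch; mu and eps are assumptions, not from the paper."""
    e = d - x @ w                       # a priori error for this pattern
    if p == 2:
        # Standard NLMS: the minimum L2-norm change lies along x.
        dw = e * x / (x @ x + eps)
    elif p == 1:
        # Minimum L1-norm change: all of the correction falls on the
        # single component where |x_i| is largest.
        j = np.argmax(np.abs(x))
        dw = np.zeros_like(w)
        dw[j] = e / (x[j] + np.copysign(eps, x[j]))
    elif p == np.inf:
        # Minimum L-infinity-norm change: every weight moves by the same
        # magnitude, signed to match x.
        dw = e * np.sign(x) / (np.abs(x).sum() + eps)
    else:
        raise ValueError("p must be 1, 2, or inf")
    return w + mu * dw

# Example: track the parameter error ||w - w_true|| under each rule
# on synthetic data (purely illustrative, not the paper's simulations).
rng = np.random.default_rng(0)
w_true = rng.standard_normal(5)
for p in (2, 1, np.inf):
    w = np.zeros(5)
    for _ in range(200):
        x = rng.standard_normal(5)
        w = min_disturbance_update(w, x, x @ w_true, mu=0.5, p=p)
    print(f"p = {p}: final weight error = {np.linalg.norm(w - w_true):.3e}")
```

Note that the p = 1 and p = infinity updates normalize by different functions of x than the p = 2 rule does, so the size and direction of the correction are sensitive to the shape of the input vector; this is consistent with the abstract's observation that stability depends on the form of the input vector and only indirectly on the learning rate.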