Oleg A. Lebedko's scientific contributions

Publications (4)

Article
Full-text available
Neuro-fuzzy systems based on Radial Basis Function Networks (RBFN) and other hybrid artificial intelligence techniques are currently under intensive investigation. This paper presents an RBFN training algorithm based on evolutionary programming and cooperative evolution. The algorithm alternately applies basis function adaptation and backpropagati...
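The abstract is cut off, but the alternation it describes (a global evolutionary-programming pass over the basis functions, interleaved with local tuning of the linear output layer) can be sketched compactly. What follows is a minimal illustration under stated assumptions, not the paper's algorithm: the local phase here is a closed-form least-squares fit of the output weights, a common stand-in for gradient tuning, and `rbf_forward`, `train_rbfn`, and every hyperparameter are invented for the example.

```python
import numpy as np

def rbf_forward(X, centers, widths):
    # Gaussian basis activations for each sample/center pair.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * widths ** 2))

def train_rbfn(X, y, n_basis=8, pop=20, generations=50, seed=0):
    rng = np.random.default_rng(seed)
    # Population of candidate (centers, widths) pairs for the basis functions.
    population = [(X[rng.choice(len(X), n_basis)],
                   rng.uniform(0.5, 2.0, n_basis))
                  for _ in range(pop)]
    best = None
    for _ in range(generations):
        scored = []
        for centers, widths in population:
            # Local phase: fit the linear output weights in closed form,
            # standing in for the gradient-tuning step.
            phi = rbf_forward(X, centers, widths)
            w, *_ = np.linalg.lstsq(phi, y, rcond=None)
            mse = np.mean((phi @ w - y) ** 2)
            scored.append((mse, centers, widths, w))
        scored.sort(key=lambda s: s[0])
        best = scored[0]
        # Global phase: EP-style Gaussian mutation of the better half.
        survivors = scored[:pop // 2]
        population = [(c + rng.normal(0, 0.1, c.shape),
                       np.abs(s + rng.normal(0, 0.05, s.shape)) + 1e-3)
                      for _, c, s, _ in survivors for _ in range(2)]
    return best  # (mse, centers, widths, output weights)
```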
Article
A hybrid learning algorithm designed for feedforward neural networks is proposed. The presented procedure combines the advantages of both global evolutionary-programming search and local gradient tuning through the cooperation of hidden neurons. Such an approach makes it possible to implement the search within a single network, which differs from traditionally employed evoluti...
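The phrase "cooperation of hidden neurons" suggests neuron-level cooperative coevolution within a single network, in the spirit of SANE/ESP-style schemes: each hidden position keeps a subpopulation of candidate neurons, and a neuron's fitness is the average error of the random networks it is assembled into. The sketch below illustrates that general scheme only; it is not the paper's procedure, and `net_mse`, `coevolve_step`, and all constants are assumptions.

```python
import numpy as np

def net_mse(neurons, X, y):
    """Assemble a one-hidden-layer net from per-neuron parameter vectors
    [input weights..., bias, output weight] and return its MSE."""
    W = np.array([n[:-2] for n in neurons])   # (H, d) input weights
    b = np.array([n[-2] for n in neurons])    # (H,) biases
    v = np.array([n[-1] for n in neurons])    # (H,) output weights
    h = np.tanh(X @ W.T + b)
    return np.mean((h @ v - y) ** 2)

def coevolve_step(pop, X, y, trials=30, seed=1):
    """One generation: each subpopulation holds candidates for one hidden
    position; a neuron's fitness is the average error of the random
    networks it was assembled into (credit sharing)."""
    rng = np.random.default_rng(seed)
    credit = [np.zeros(len(sub)) for sub in pop]
    counts = [np.zeros(len(sub)) for sub in pop]
    for _ in range(trials):
        picks = [rng.integers(len(sub)) for sub in pop]
        err = net_mse([sub[j] for sub, j in zip(pop, picks)], X, y)
        for i, j in enumerate(picks):
            credit[i][j] += err
            counts[i][j] += 1
    new_pop = []
    for sub, c, n in zip(pop, credit, counts):
        score = np.where(n > 0, c / np.maximum(n, 1), np.inf)  # untested rank last
        elite = [sub[k] for k in np.argsort(score)[:len(sub) // 2]]
        # EP-style mutation refills the subpopulation from the elite.
        new_pop.append(elite + [e + rng.normal(0, 0.1, e.shape) for e in elite])
    return new_pop
```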
Article
Full-text available
This paper describes two algorithms based on cooperative evolution of internal hidden network representations and a combination of global evolutionary and local search procedures. The experimental results obtained are better than those of the prototype methods. It is demonstrated that the applications of pure gradient or pure genetic algorithms...
Conference Paper
This paper describes two algorithms based on cooperative evolution of internal hidden network representations and a combination of global evolutionary and local search procedures. The experimental results obtained are better than those of the prototype methods. It is demonstrated that the applications of pure gradient or pure genetic algori...

Citations

... Unlike MLP and more recent deep learning ANNs (which also require larger training datasets), the RBF processing architecture has only one hidden layer. For future advancements, the traditional RBF is still considered a good candidate for modifications that would allow adaptive and evolving operation for incremental learning, such as [11,12]. Future advancements are likely to investigate the underlying KNN responsible for setting the multivariate Gaussian parameters from data, which could be modified with the evolving clustering function (ECF) or other adaptive and evolving classification-model alternatives [13]. ...
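For reference, a conventional way to set multivariate Gaussian parameters from data, which the excerpt suggests could be swapped for ECF or similar alternatives, is a clustering pass: centers from cluster means, widths from nearest-center distances. The sketch below uses plain k-means as that baseline (rather than the excerpt's KNN); it is a textbook heuristic, not the procedure of [13], and `kmeans_rbf_params` with its defaults is invented for illustration.

```python
import numpy as np

def kmeans_rbf_params(X, k=8, iters=20, seed=4):
    """Set Gaussian basis parameters from data: k-means gives the centers,
    and each width is the distance to the nearest other center."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each sample to its nearest center, then move centers.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    # Width heuristic: distance to the nearest other center.
    cd = np.sqrt(((centers[:, None] - centers[None, :]) ** 2).sum(-1))
    np.fill_diagonal(cd, np.inf)
    return centers, cd.min(1)
```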
... In other words, GA identifies the deepest valley and SGD determines a setting at or close to the minimum. In practice, in several studies the hybrid algorithms are reported to outperform either GA or BP alone [2,33,34]. The first, GA-based part of the hybrid methods starts from several (usually hundreds of) random initial settings of the learnable parameters (each setting is called an individual; together they form a population), computes the error/cost function for each one, assigns each a score called fitness based on its distance from the target, selects a certain number of individuals (the survivors), and constructs a new population that inherits features from them. ...
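That pipeline (random population, fitness scoring, survivor selection, inheritance, then local descent) maps onto a short routine. Below is a hedged sketch rather than any cited implementation; `ga_then_sgd`, the uniform-crossover-plus-noise inheritance, and all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def ga_then_sgd(loss, grad, dim, pop_size=200, generations=40,
                survivors=40, sgd_steps=500, lr=0.01):
    """GA locates a promising basin, then SGD descends to its minimum.
    `loss` maps a parameter vector to a scalar; `grad` is its gradient."""
    # 1. Random initial population of learnable-parameter settings.
    pop = rng.normal(0, 1, (pop_size, dim))
    for _ in range(generations):
        # 2. Fitness: lower loss means a fitter individual.
        fitness = np.array([loss(p) for p in pop])
        # 3. Keep the best `survivors` individuals.
        parents = pop[np.argsort(fitness)[:survivors]]
        # 4. New population inherits from parents: uniform crossover + noise.
        children = []
        while len(children) < pop_size - survivors:
            a, b = parents[rng.integers(survivors, size=2)]
            mask = rng.random(dim) < 0.5
            children.append(np.where(mask, a, b) + rng.normal(0, 0.05, dim))
        pop = np.vstack([parents, children])
    # 5. Local refinement: plain gradient descent from the GA's best point.
    x = pop[np.argmin([loss(p) for p in pop])]
    for _ in range(sgd_steps):
        x = x - lr * grad(x)
    return x
```

As a smoke test, `ga_then_sgd(lambda p: ((p - 3) ** 2).sum(), lambda p: 2 * (p - 3), dim=5)` returns a vector close to 3 in every coordinate.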
... The GA can select the optimum parameters and produce a range of possible solutions during the evolution of an adaptive and dynamic system structure [57]. Topchy et al. [58] used a GA technique to choose an optimum set of weights for an ANN, showing that the convergence time of a GA-based model can be reduced drastically and that a satisfactory optimum solution can be obtained. The GA can be applied in several ways to design an ANN. ...
... To address the problem of searching over neural network structures, various restrictions are introduced. For example, only multi-layered ANNs may be considered [31,10,41], with the structure-changing operations limited to adding or removing a layer, or adding or removing nodes in a selected layer. Among NE algorithms with a more flexible search over ANN topology, a popular trick is to allow only growth of the ANN structure [38,37]. ...
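A grow-only mutation of the kind cited for [38,37] can be made concrete. The sketch below widens one hidden layer of an MLP by a single node, zero-initializing the outgoing weights so the network's function is preserved until training adapts the new unit; `add_hidden_node` and its conventions are illustrative assumptions, not taken from the cited NE algorithms.

```python
import numpy as np

def add_hidden_node(W_in, b, W_out, scale=0.01, seed=3):
    """Grow-only structural mutation: widen a hidden layer by one node.
    Incoming weights are small and random; outgoing weights start at zero,
    so the network's output is unchanged until training adjusts the unit."""
    rng = np.random.default_rng(seed)
    W_in = np.hstack([W_in, rng.normal(0, scale, (W_in.shape[0], 1))])  # (d, h) -> (d, h+1)
    b = np.append(b, 0.0)                                               # (h,) -> (h+1,)
    W_out = np.vstack([W_out, np.zeros((1, W_out.shape[1]))])           # (h, k) -> (h+1, k)
    return W_in, b, W_out
```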