Zaur Shibzukhov
Moscow Pedagogical State University · Institute of Mathematics and Informatics

Doctor of Physical-Mathematical Sciences (RU)

About

43
Publications
1,138
Reads
94
Citations

Publications (43)
Chapter
A new robust variant of the formulation of the principal component search problem, and of the method for solving it, is considered. It is based on the application of differentiable, outlier-insensitive estimates of the mean value to construct a robust target functional. This approach makes it possible to overcome the impact of outliers in the data....
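A minimal sketch of this idea, assuming a Huber-type weighting as the outlier-insensitive average and a reweighted-covariance update; the function name and the specific weight rule are illustrative, not the chapter's exact formulation:

```python
# Sketch: robust PCA via an outlier-insensitive average of reconstruction errors.
import numpy as np

def robust_pca(X, n_components=2, delta=1.0, n_iter=50):
    n, d = X.shape
    w = np.ones(n)                                     # per-sample weights, start uniform
    for _ in range(n_iter):
        mu = np.average(X, axis=0, weights=w)          # weighted center
        Xc = X - mu
        C = (Xc * w[:, None]).T @ Xc / w.sum()         # weighted covariance
        _, vecs = np.linalg.eigh(C)
        V = vecs[:, -n_components:]                    # current principal directions
        resid = Xc - (Xc @ V) @ V.T                    # per-sample reconstruction error
        e = np.linalg.norm(resid, axis=1)
        # Huber-type reweighting: samples with large error (outliers) are down-weighted
        w = np.where(e <= delta, 1.0, delta / np.maximum(e, 1e-12))
    return mu, V

# Usage: mu, V = robust_pca(np.random.randn(200, 5))
```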
Article
Full-text available
The author considers a robust approach to constructing machine learning algorithms based on minimizing robust finite sums of parameterized functions. The approach relies on finite robust differentiable aggregating summation functions, which are stable with respect to outliers.
Article
Full-text available
In this paper, we propose an extended version of the principle of minimizing empirical risk (ER) based on the use of averaging aggregating functions (AAF) for calculating the ER instead of the arithmetic mean. This is expedient if the distribution of losses has outliers and hence risk assessments are biased. Therefore, a robust estimate of the aver...
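A schematic form of this replacement, reconstructed from the abstract; the symbols Q, M and ρ below are illustrative and need not match the paper's notation:

```latex
% Classical empirical risk: arithmetic mean of per-sample losses
Q(\theta) \;=\; \frac{1}{N}\sum_{k=1}^{N} \ell_k(\theta),
\qquad \ell_k(\theta) = \ell\bigl(y_k, f(x_k;\theta)\bigr)

% Extended principle: an averaging aggregating function M replaces the mean,
% e.g. an M-average defined as the minimizer of an outlier-insensitive penalty \rho
Q_{\mathrm{M}}(\theta) \;=\; \mathrm{M}\bigl(\ell_1(\theta),\dots,\ell_N(\theta)\bigr),
\qquad
\mathrm{M}(u_1,\dots,u_N) \;=\; \arg\min_{c}\;\sum_{k=1}^{N}\rho(u_k - c).
```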
Chapter
The article considers an approach to the construction of robust methods and machine learning algorithms based on the principle of minimizing outlier-insensitive estimates of average values. The proposed machine learning algorithms rely on the principle of iterative reweighting. Illustrative examples show the ability of the...
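A compact sketch of the iterative-reweighting principle, illustrated on linear regression; the Huber-style weights and the weighted least-squares inner step are assumptions for illustration, not the article's specific aggregating function:

```python
# Sketch: iteratively reweighted fitting with an outlier-insensitive loss average.
import numpy as np

def robust_linreg(X, y, delta=1.0, n_iter=30):
    n, d = X.shape
    w = np.ones(n)
    beta = np.zeros(d)
    for _ in range(n_iter):
        # weighted least-squares step: minimizes the current reweighted sum of losses
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X + 1e-8 * np.eye(d), X.T @ W @ y)
        r = np.abs(y - X @ beta)
        # recompute weights: samples with large residuals (outliers) are down-weighted
        w = np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))
    return beta
```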
Chapter
The ΣΠ-neuron is a biologically inspired formal model for logical information processing. The ΣΠ-neuron model adequately reflects information processing in the cerebral cortex and in the dendritic trees of neurons. The advantage of the ΣΠ-neuron model is the ability to accurately represent any Boolea...
Chapter
The classical approach in machine learning, based on the principle of minimizing the arithmetic mean of parameterized functions, is vulnerable if training is carried out on data containing outliers. The article considers an approach to the construction of robust methods and machine learning algorithms, which are based on the principle...
Article
Full-text available
The ΣΠ-neuron is a biologically inspired formal model for logical information processing. The ΣΠ-neuron model adequately reflects information processing in the cerebral cortex and in the dendritic trees of neurons. The advantage of the ΣΠ-neuron model is the ability to accurately represent any Boolean function and the possibility of const...
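A small sketch of the exact-representation claim for a ΣΠ (sum-of-products) neuron, y = Σ_j w_j Π_{i∈S_j} x_i, using the standard multilinear polynomial of XOR over {0,1}; the helper function is illustrative, not the paper's construction:

```python
# Sketch: a Sigma-Pi neuron as a weighted sum of products of selected inputs.
from itertools import product

def sigma_pi(x, terms):
    """terms: list of (weight, index_tuple) pairs."""
    y = 0.0
    for w, idx in terms:
        p = 1.0
        for i in idx:          # product over the inputs of this term
            p *= x[i]
        y += w * p             # weighted sum of the products
    return y

# XOR(x0, x1) = x0 + x1 - 2*x0*x1  -- exact on Boolean inputs
xor_terms = [(1.0, (0,)), (1.0, (1,)), (-2.0, (0, 1))]
for x in product([0, 1], repeat=2):
    assert sigma_pi(x, xor_terms) == (x[0] ^ x[1])
```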
Article
Full-text available
The problem of finding the centers and scattering matrices for a finite set of points containing outliers in a multidimensional space is considered. A new approach is considered in which differentiable mean values that are insensitive to outliers are used instead of the arithmetic mean. An iterative reweighting scheme for searching for centers and...
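A minimal sketch of such a reweighting scheme for a robust center and scatter matrix; the Mahalanobis-distance weight rule and the cutoff parameter are illustrative assumptions:

```python
# Sketch: iterative reweighting for a robust center and scatter matrix.
import numpy as np

def robust_center_scatter(X, delta=3.0, n_iter=50):
    n, d = X.shape
    w = np.ones(n)
    for _ in range(n_iter):
        mu = np.average(X, axis=0, weights=w)                      # weighted center
        Xc = X - mu
        S = (Xc * w[:, None]).T @ Xc / w.sum()                     # weighted scatter
        Sinv = np.linalg.inv(S + 1e-8 * np.eye(d))
        # Mahalanobis distance of each point to the current center
        dist = np.sqrt(np.einsum('ij,jk,ik->i', Xc, Sinv, Xc))
        # points far from the center (likely outliers) receive smaller weights
        w = np.where(dist <= delta, 1.0, delta / np.maximum(dist, 1e-12))
    return mu, S
```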
Article
Full-text available
A new approach to robust clustering based on the search for cluster centers is proposed. It is based on minimizing robust estimates of averages and sums of the functions of pseudo-distances to the cluster centers. An algorithm of iterative reweighting type for finding cluster centers is proposed. Examples are given showing the stability of th...
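A sketch of a reweighted center update in the spirit of this approach, using plain Euclidean distance as a stand-in for the pseudo-distance functions and Huber-style weights; both choices are illustrative, not the paper's exact algorithm:

```python
# Sketch: robust cluster centers via iterative reweighting (k-means-like loop).
import numpy as np

def robust_kmeans(X, k, delta=1.0, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].astype(float)      # initial centers
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)  # (n, k) distances
        labels = d.argmin(axis=1)                                  # nearest center
        dist = d[np.arange(len(X)), labels]
        # down-weight points far from their assigned center (likely outliers)
        w = np.where(dist <= delta, 1.0, delta / np.maximum(dist, 1e-12))
        for j in range(k):
            mask = labels == j
            if mask.any():
                C[j] = np.average(X[mask], axis=0, weights=w[mask])
    return C, labels
```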
Chapter
The article proposes an extended version of the principle of minimizing the empirical risk for training neural networks that is stable with respect to a large number of outliers in the training data. It is based on the use of M-averaging and WM-averaging functions instead of the arithmetic mean for estimating empirical risk. An iteratively re-weighted sch...
Article
Full-text available
The paper suggests an extended version of the principle of empirical risk minimization and the principle of smoothly winsorized averages minimization for robust neural network learning. It is based on using M-averaging and WM-averaging functions instead of the arithmetic mean for empirical risk estimation. These approaches generalize robust algorithms b...
Article
A robust approach to constructing machine learning algorithms is considered, based on minimizing robust finite sums of parameterized functions. It relies on the application of finite robust differentiable aggregating summation functions, which are stable with respect to outliers.
Chapter
The paper suggests an extended version of the principle of empirical risk minimization and the principle of smoothly winsorized sums minimization for robust neural network learning. It is based on using M-averaging functions instead of the arithmetic mean for empirical risk estimation (M-risk). These approaches generalize robust algorithms based on usi...
Conference Paper
Full-text available
A robust approach to the design of machine learning algorithms, based on minimising finite sums of parametrised functions, is considered. This method implies using robust finite-sum differentiable aggregating functions that are resistant to outliers.
Article
An extended version of the principle of empirical risk minimization is proposed. It is based on the application of averaging aggregation functions, rather than arithmetic means, to compute the empirical risk. This is justified if the distribution of losses has outliers or is substantially distorted, so that the risk estimate becomes biase...
Conference Paper
A new class of the artificial neuron models is described in this work. These models are based on assumptions: (1) contributions of synapses are summing with the help of certain aggregation operation; (2) contribution of synaptic clusters are computed with the help of another aggregation operation on the set of simple synapses. These models include...
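A minimal sketch of such a two-level aggregation neuron, where synapse contributions within a cluster are combined by one operation and the clusters' contributions by another; the particular choice of sum and max is purely illustrative:

```python
# Sketch: a neuron with two levels of aggregation -- within and across synaptic clusters.
import numpy as np

def cluster_neuron(x, clusters, weights, within=np.sum, across=np.max):
    """clusters: list of input index arrays; weights: matching list of weight arrays."""
    contributions = [
        within(w * x[idx])            # aggregate the simple synapses of one cluster
        for idx, w in zip(clusters, weights)
    ]
    return across(contributions)      # aggregate the clusters' contributions

# Usage: two clusters over a 4-input neuron
x = np.array([1.0, 0.5, -0.2, 0.8])
out = cluster_neuron(x, [np.array([0, 1]), np.array([2, 3])],
                     [np.array([0.7, 0.3]), np.array([1.0, 1.0])])
```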
Article
This article discusses a class of pointwise and aggregationally correct recognition and prediction algorithms, together with certain types of correct aggregation operations on them, under which finite sets of pointwise and aggregationally correct algorithms are transformed into new aggregationally correct algorithms. Such operations extend the classes of bas...
Article
Abstract classes of algorithms for processing information coded in a class of algebras with zero and unit are reported. Learning of well-behaved algorithms is implemented using a direct constructive algebraic procedure, which makes it possible to simultaneously form the structure of an algorithm and tune its parameters, so that the algorit...
Article
A consecutive approach to constructing correct closures for finite sets of weak (heuristic) recognition algorithms is considered in the framework of Zhuravlev's algebraic approach. The suggested approach is based on constructing a sequence of extensions using ∑Φ-operators (∑Π-operator extension) of limited complexity, which converges to corre...
Article
The effective construction of correct recognition algorithms from standard learning information is considered within the framework of the algebraic approach to recognition problems. A permissible class of algorithms is determined that transforms the set of initial descriptions of the investigated objects into a set of sparse vectors conforming to one natural requireme...
Article
A class of networks consisting of algebraic Sigma-Pi neurons and Sigma-Pi neuron modules with inputs in an arbitrary ring without zero divisors is considered. Each Sigma-Pi neuron implements the composition of a polylinear function with a scalar one. Each Sigma-Pi neuron module is a set of competitive Sigma-Pi neurons. A recurrent method for supervi...
Article
The paper is concerned with the synthesis of the architecture, minimization of the complexity, and recurrent local learning and self-organization algorithms for Diophantine neural networks over a finite field that generalize logical and threshold-polynomial networks in a Boolean field.
