Preprint

Abstract

The present paper deals with the construction of a new family of neural network operators, namely the Steklov neural network operators. Using a Steklov-type integral, we introduce a new version of neural network operators and obtain convergence theorems for the family, such as pointwise and uniform convergence, and the rate of convergence via moduli of smoothness of order r.
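The abstract does not state the operators' exact form; the following is a minimal numerical sketch, not the paper's definition. It assumes the classical first-order Steklov mean as the sample coefficient, combined with a standard sigmoidal neural network operator with a logistic-generated kernel; the helper names, the truncation width, and the choice h = 1/n are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only (not the paper's construction): first-order Steklov
# means as coefficients, fed into a sigmoidal neural-network-type operator.

def steklov_mean(f, x, h, m=200):
    """First-order Steklov mean: average of f over [x - h/2, x + h/2]."""
    t = np.linspace(-h / 2, h / 2, m)
    return f(x + t).mean()

def phi(x):
    """Kernel generated by the logistic sigmoid: (sigma(x+1) - sigma(x-1)) / 2."""
    sigma = lambda u: 1.0 / (1.0 + np.exp(-u))
    return 0.5 * (sigma(x + 1) - sigma(x - 1))

def steklov_nn(f, x, n, h):
    """phi-weighted operator with Steklov means as coefficients (truncated sum)."""
    k = np.arange(int(np.floor(n * x)) - 30, int(np.ceil(n * x)) + 31)
    coeffs = np.array([steklov_mean(f, kk / n, h) for kk in k])
    w = phi(n * x - k)
    return np.sum(coeffs * w) / np.sum(w)

f, x0 = np.cos, 0.7
for n in (10, 50, 250):
    print(n, abs(steklov_nn(f, x0, n, h=1.0 / n) - f(x0)))  # error decreases in n
```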

References
Article
Full-text available
In this paper we introduce a new class of sampling-type operators, named Steklov sampling operators. The idea is to consider a sampling series based on a kernel function that is a discrete approximate identity, and which constitutes a reconstruction process of a given signal f, based on a family of sample values which are Steklov integrals of order r evaluated at the nodes k/w, k ∈ ℤ, w > 0. The convergence properties of the introduced sampling operators in spaces of continuous functions and in the L^p-setting have been studied. Moreover, the main properties of the Steklov-type functions have been exploited in order to establish results concerning the high order of approximation. Such results have been obtained in a quantitative version thanks to the use of the well-known modulus of smoothness of the approximated functions, and assuming suitable Strang–Fix type conditions, which are very typical assumptions in applications involving Fourier and harmonic analysis. Concerning the quantitative estimates, we propose two different approaches: the first one holds in the case of Steklov sampling operators defined with kernels with compact support; its proof is substantially based on the application of the generalized Minkowski inequality, and it is valid with respect to the p-norm, with 1 ≤ p ≤ +∞. In the second case, the restriction on the support of the kernel is removed and the corresponding estimates are valid only for 1 < p ≤ +∞. Here, the key point of the proof is the application of the well-known Hardy–Littlewood maximal inequality. Finally, a deep comparison between the proposed Steklov sampling series and the already existing sampling-type operators has been given, in order to show the effectiveness of the proposed constructive method of approximation. Examples of kernel functions satisfying the required assumptions have been provided.
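A hedged sketch of the shape of these operators, with only the first-order (r = 1) Steklov integral written explicitly; the general order-r definition is given in the paper itself:

```latex
\[
  (S_w f)(x) = \sum_{k \in \mathbb{Z}} \chi(wx - k)\, \bar{f}_h\!\left(\tfrac{k}{w}\right),
  \qquad
  \bar{f}_h(t) = \frac{1}{h}\int_{-h/2}^{h/2} f(t + u)\, du,
\]
% where \chi is a kernel forming a discrete approximate identity,
% e.g. \sum_{k \in \mathbb{Z}} \chi(u - k) = 1 for every u.
```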
Article
Full-text available
The present paper deals with the construction of a new family of exponential sampling Kantorovich operators based on suitable fractional-type integral operators. We study convergence properties of the newly constructed operators and give a quantitative form of the rate of convergence thanks to the logarithmic modulus of continuity. To obtain an asymptotic formula in the sense of Voronovskaja, we consider locally regular functions. The rest of the paper is devoted to approximations of the newly constructed operators in a logarithmic weighted space of functions. By utilizing a suitable weighted logarithmic modulus of continuity, we obtain a rate of convergence and give a quantitative form of a Voronovskaja-type theorem via the remainder of Mellin–Taylor's formula. Furthermore, some examples of kernels which satisfy certain assumptions are presented and the results are examined by illustrative numerical tables and graphical representations.
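For reference, the logarithmic modulus of continuity typically used for such rates in the Mellin setting (for functions defined on the positive real axis) is:

```latex
\[
  \omega(f, \delta) = \sup\left\{\, |f(u) - f(v)| \;:\; u, v > 0,\ \left|\ln\tfrac{u}{v}\right| \le \delta \,\right\},
  \qquad \delta > 0 .
\]
```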
Article
Full-text available
The present article deals with local and global approximation behaviors of sampling Durrmeyer operators for functions belonging to weighted spaces of continuous functions. After giving some fundamental notations of sampling-type approximation methods and presenting the well-definiteness of the operators on weighted spaces of functions, we examine pointwise and uniform convergence of the family of operators and determine the rate of convergence via a weighted modulus of continuity. A quantitative Voronovskaja theorem is also proved in order to obtain the rate of pointwise convergence and an upper estimate for this convergence. The last section is devoted to some numerical evaluations of sampling Durrmeyer operators with suitable kernels.
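A hedged sketch of the common shape of a sampling Durrmeyer series (the paper's precise weighted setting may add further assumptions): point samples are replaced by integral means against a second kernel ψ,

```latex
\[
  (S_w f)(x) = \sum_{k \in \mathbb{Z}} \chi(wx - k)\; w \int_{\mathbb{R}} \psi(wu - k)\, f(u)\, du,
  \qquad w > 0 .
\]
```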
Article
Full-text available
The present paper deals with an extension of approximation properties of generalized sampling series to weighted spaces of functions. A pointwise and uniform convergence theorem for the series is proved for functions belonging to weighted spaces. A rate of convergence by means of weighted moduli of continuity is presented and a quantitative Voronovskaja type theorem is obtained.
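For orientation, the generalized sampling series under discussion has the classical form (χ denoting the kernel):

```latex
\[
  (G_w f)(x) = \sum_{k \in \mathbb{Z}} f\!\left(\tfrac{k}{w}\right) \chi(wx - k),
  \qquad x \in \mathbb{R},\ w > 0 .
\]
```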
Article
Full-text available
In this paper, we introduce a new family of operators by generalizing the Kantorovich type of exponential sampling series, replacing integral means over exponentially spaced intervals with their more general analogue, the Mellin–Gauss–Weierstrass singular integrals. Pointwise convergence of the family of operators is presented, and a quantitative form of the convergence using a logarithmic modulus of continuity is given. Moreover, considering locally regular functions, an asymptotic formula in the sense of Voronovskaja is obtained. By introducing a new modulus of continuity for functions belonging to a logarithmic weighted space of functions, a rate of convergence is obtained. Some examples of kernels satisfying the obtained results are presented.
Article
Full-text available
In this paper, we establish a quantitative estimate for multivariate sampling Kantorovich operators by means of the modulus of smoothness in the general setting of Orlicz spaces. As a consequence, the qualitative order of convergence can be obtained for functions belonging to suitable Lipschitz classes. In the particular instance of L^p-spaces, using a direct approach, we obtain a sharper estimate than the one that can be deduced from the general case.
Article
Full-text available
Here we provide a unifying treatment of the convergence of a general form of sampling type operators, given by the so-called Durrmeyer sampling type series. In particular, we provide a pointwise and uniform convergence theorem on ℝ, and in this context we also furnish a quantitative estimate for the order of approximation, using the modulus of continuity of the function to be approximated. Then we obtain a modular convergence theorem in the general setting of Orlicz spaces L^φ(ℝ). From the latter result, the convergence in L^p(ℝ)-spaces, L^α log^β L, and the exponential spaces follow as particular cases. Finally, applications and examples with graphical representations are given for several sampling series with special kernels.
Article
Full-text available
In the present paper, an asymptotic expansion and a Voronovskaja type theorem for the neural network operators have been proved. The above results are based on the computation of the algebraic truncated moments of the density functions generated by suitable sigmoidal functions, such as the logistic function, sigmoidal functions generated by splines, and others. Further, operators with high-order convergence are also studied by considering finite linear combinations of the above neural network type operators, and Voronovskaja type theorems are again proved. At the end of the paper, numerical results are provided.
Article
Full-text available
In the present paper we establish a quantitative estimate for the sampling Kantorovich operators with respect to the modulus of continuity in Orlicz spaces defined in terms of the modular functional. At the end of the paper, concrete examples are discussed, concerning both the kernels of the above operators and some concrete instances of Orlicz spaces.
Article
Full-text available
In this paper, we introduce a theoretical basis for a Hadoop-based neural network for parallel and distributed feature selection in Big Data sets. It is underpinned by an associative-memory (binary) neural network which is highly amenable to parallel and distributed processing and fits the Hadoop paradigm. Many feature selectors are described in the literature, each with its own strengths and weaknesses. We present the implementation details of five feature selection algorithms constructed using our artificial neural network framework embedded in Hadoop YARN. Hadoop allows parallel and distributed processing. Each feature selector can be divided into subtasks, and the subtasks can then be processed in parallel. Multiple feature selectors can also be processed simultaneously (in parallel), allowing multiple feature selectors to be compared. We identify commonalities among the five feature selectors. All can be processed in the framework using a single representation, and the overall processing can also be greatly reduced by processing the common aspects of the feature selectors only once and propagating these aspects across all five feature selectors as necessary. This allows both the best feature selector and the actual features to select to be identified for large and high-dimensional data sets, by exploiting the efficiency and flexibility of embedding the binary associative-memory neural network in Hadoop.
Article
Full-text available
This paper deals with the Kantorovich version of generalized sampling series, the first one to be primarily concerned with this version. It is devoted to the study of these series on Orlicz spaces L^φ(ℝ) in the instance of irregularly spaced samples. A modular convergence theorem for functions f ∈ L^φ(ℝ) is deduced. The convergence in L^p(ℝ)-space, L log L-space, and exponential spaces follows as particular results. Applications are given to several sampling series with special kernels, especially in the instance of discontinuous signals. Graphical representations for the various examples are included.
Article
Full-text available
In this paper, we study pointwise and uniform convergence, as well as the order of approximation, for a family of linear positive neural network operators activated by certain sigmoidal functions. Only the case of functions of one variable is considered, but our results can be expected to generalize to multivariate functions as well. Our approach allows us to extend previously existing results. The order of approximation is studied for functions belonging to suitable Lipschitz classes and using a moment-type approach. The special cases of neural network operators activated by logistic, hyperbolic tangent, and ramp sigmoidal functions are considered. In particular, we show that for C^1-functions, the order of approximation obtained here for our operators with logistic and hyperbolic tangent functions is higher than that established in some previous papers. The case of quasi-interpolation operators constructed with sigmoidal functions is also considered.
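An illustrative, runnable sketch of such operators (not the paper's code): the kernel φ is generated from a sigmoidal function σ via φ(x) = (σ(x+1) − σ(x−1))/2, and the operator is the φ-weighted quasi-interpolant of the samples f(k/n). The test function, node, and truncation width are assumptions.

```python
import numpy as np

# Kernels generated from logistic, hyperbolic tangent, and ramp sigmoids.
def make_phi(sigma):
    """phi(x) = (sigma(x + 1) - sigma(x - 1)) / 2, a bell-shaped kernel."""
    return lambda x: 0.5 * (sigma(x + 1) - sigma(x - 1))

logistic = lambda x: 1.0 / (1.0 + np.exp(-x))
tanh_sig = lambda x: 0.5 * (np.tanh(x) + 1.0)            # tanh rescaled to (0, 1)
ramp     = lambda x: np.clip((x + 1.0) / 2.0, 0.0, 1.0)  # ramp sigmoid

def nn_operator(f, x, n, phi):
    """F_n(f)(x) = sum_k f(k/n) phi(nx - k) / sum_k phi(nx - k), truncated."""
    k = np.arange(int(np.floor(n * x)) - 40, int(np.ceil(n * x)) + 41)
    w = phi(n * x - k)
    return np.sum(f(k / n) * w) / np.sum(w)

f, x0 = np.sin, 1.2
for name, sigma in [("logistic", logistic), ("tanh", tanh_sig), ("ramp", ramp)]:
    err = abs(nn_operator(f, x0, 100, make_phi(sigma)) - f(x0))
    print(f"{name:8s} n=100  error = {err:.2e}")
```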
Article
Full-text available
We consider the problem of approximating the Sobolev class of functions by neural networks with a single hidden layer, establishing both upper and lower bounds. The upper bound uses a probabilistic approach, based on the Radon and wavelet transforms, and yields similar rates to those derived recently under more restrictive conditions on the activation function. Moreover, the construction using the Radon and wavelet transforms seems very natural to the problem. Additionally, geometrical arguments are used to establish lower bounds for two types of commonly used activation functions. The results demonstrate the tightness of the bounds, up to a factor logarithmic in the number of nodes of the neural network.
Article
In this paper, we investigate the approximation properties of exponential sampling series within logarithmically weighted spaces of continuous functions. Initially, we demonstrate the pointwise and uniform convergence of exponential sampling series in weighted spaces and present the rates of convergence via a suitable modulus of continuity in logarithmic weighted spaces. Subsequently, we establish a quantitative representation of the pointwise asymptotic behavior of these series using Mellin–Taylor's expansion. Finally, some examples of kernels and numerical evaluations are given.
Article
In this paper, we introduce a family of generalized Kantorovich-type exponential sampling operators of bivariate functions by using the bivariate Mellin–Gauss–Weierstrass operator. The approximation behaviour of the series is established at continuity points of log-uniformly continuous functions. A rate of convergence of the family of operators is presented by means of the logarithmic modulus of continuity, and a Voronovskaja-type theorem is proved in order to determine the rate of pointwise convergence. Convergence of the family of operators is also investigated for functions belonging to a weighted space. Furthermore, some examples of kernels which support our results are given.
Article
This paper deals with approximation properties of bivariate sampling Durrmeyer operators for functions belonging to weighted spaces of functions. After short preliminaries and auxiliary results, we present the well-definiteness of the operators S_w^{ζ,ζ}. The main results of the paper include pointwise and uniform convergence of the family of operators, the rate of convergence via a bivariate weighted modulus of continuity, and a quantitative Voronovskaja type theorem.
Article
In this paper, we generalize the family of exponential sampling series to functions of n variables and study their pointwise and uniform convergence, as well as the rate of convergence, for functions belonging to the space of log-uniformly continuous functions. Furthermore, we state and prove the generalized Mellin–Taylor expansion of multivariate functions and, using this expansion, we establish the pointwise asymptotic behaviour of the series by means of a Voronovskaja type theorem.
Article
This paper is devoted to the construction of multidimensional Kantorovich modifications of exponential sampling series, which allow one to approximate suitable measurable functions by considering their mean values on just one section of the function involved. The approximation behavior of the newly constructed operators is investigated at continuity points for log-uniformly continuous functions. The rate of convergence of the series is presented for the same functions by means of the logarithmic modulus of continuity. A Voronovskaja type theorem is also presented by means of the Mellin derivative.
Article
Here we introduce a generalization of the exponential sampling series of optical physics and establish a pointwise and uniform convergence theorem, also in a quantitative form. Moreover, we compare the error of approximation for Mellin band-limited functions using both the classical and the generalized exponential sampling series.
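A hedged sketch of a generalized exponential sampling series of the kind described here: samples are taken at the exponentially spaced nodes e^{k/w}, and χ is a kernel satisfying a Mellin-type discrete approximate-identity condition:

```latex
\[
  (E_w f)(x) = \sum_{k \in \mathbb{Z}} \chi\!\left(e^{-k} x^{w}\right) f\!\left(e^{k/w}\right),
  \qquad x > 0,\ w > 0 .
\]
```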
Article
In this paper we consider a new definition of generalized sampling type series using an approach introduced by Durrmeyer for the Bernstein polynomials. We establish an asymptotic formula for functions f with a polynomial growth and as a consequence we obtain a Voronovskaja type formula. Then we consider suitable linear combinations that provide a better order of approximation. Finally, some examples are given, in particular certain central B-splines are discussed.
Article
The paper briefy reviews several recent results on hierarchical architectures for learning from examples, that may formally explain the conditions under which Deep Convolutional Neural Networks perform much better in function approximation problems than shallow, one-hidden layer architectures. The paper announces new results for a non-smooth activation function - the ReLU function - used in present-day neural networks, as well as for the Gaussian networks. We propose a new definition of relative dimension to encapsulate different notions of sparsity of a function class that can possibly be exploited by deep networks but not by shallow ones to drastically reduce the complexity required for approximation and learning.
Chapter
If a signal f is band-limited to [−πW, πW] for some W > 0, then f can be completely reconstructed, for all values of t ∈ ℝ, from its sampled values f(k/W), k ∈ ℤ, taken just at the nodes k/W, equally spaced apart on the whole real line, in terms of

  f(t) = Σ_{k=−∞}^{∞} f(k/W) · sin(π(Wt − k)) / (π(Wt − k)),   t ∈ ℝ.   (5.1.1)
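A short numerical check of formula (5.1.1); the test signal and the band-limit W below are illustrative assumptions:

```python
import numpy as np

# Reconstruct a band-limited signal from its samples f(k/W) with the sinc
# kernel of (5.1.1); the infinite series is truncated to |k| <= 200.

W = 4.0                          # band-limit: spectrum contained in [-pi*W, pi*W]
f = lambda t: np.sinc(2.0 * t)   # band-limited to [-2*pi, 2*pi], inside [-4*pi, 4*pi]

k = np.arange(-200, 201)
samples = f(k / W)

def reconstruct(t):
    # np.sinc(u) = sin(pi*u) / (pi*u), so np.sinc(W*t - k) is the kernel in (5.1.1)
    t = np.atleast_1d(t)
    return np.sum(samples * np.sinc(W * t[:, None] - k), axis=1)

t = np.linspace(-1.0, 1.0, 9)
print(np.max(np.abs(reconstruct(t) - f(t))))  # small truncation error
```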
Article
A three-layered neural network having n hidden-layer units can implement a mapping of n points in ℝ d onto ℝ. In this paper, the activation function of hidden-layer units is extended to higher-dimensional functions, so that the sigmoid function defined on ℝ and the radial basis function defined on ℝ d can be treated on a common basis. We assume that activation functions cannot be scaled. Even under this restriction, a wide class of functions can be activation functions for the mapping of a finite number of points. If the support of the Fourier transform of a slowly increasing function includes a converging sequence of points on a line, it can be an activation function for the mapping of points without scaling. This condition does not depend on the dimension of the space on which the activation function is defined; both the logistic function on ℝ and the Gauss kernel on ℝ d satisfy it. The result extends the work of Y. Itô and K. Saito [Math. Sci. 21, 27–33 (1996; Zbl 0852.68037)], in which the activation function is restricted to sigmoid functions.
Article
Let D ⊂ ℝ^d be a compact set and let Φ be a uniformly bounded set of functions D → ℝ. For a given real-valued function f defined on D and a given natural number n, we are looking for a good uniform approximation to f of the form Σ_{i=1}^{n} a_i φ_i, with φ_i ∈ Φ and a_i ∈ ℝ. Two main cases are considered: (1) when D is a finite set, and (2) when the set Φ is formed by the functions φ_{υ,b}(x) := s(υ·x + b), where υ ∈ ℝ^d, b ∈ ℝ, and s is a fixed function ℝ → ℝ.
Article
The sinc-kernel function of the sampling series is replaced by spline functions having compact support, all built up from the B-splines M_n. The resulting generalized sampling series reduces to finite sums, so that no truncation error occurs. Moreover, the approximation error generally decreases more rapidly than for the classical series when W tends to infinity, 1/W being the distance between the sampling points. For the kernel function Φ(t) = 5M_4(t) − 4M_5(t) with support [−5/2, 5/2], e.g., it decreases with order O(W^{−4}) provided the signal has a fourth-order derivative; for the classical series the order is O(W^{−4} log W).
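A runnable sketch of this kernel, using the classical explicit formula for the central B-spline M_n; the code itself is illustrative, not from the paper:

```python
import numpy as np
from math import comb, factorial

# Central B-spline via the classical explicit formula, and the combination
# Phi = 5*M_4 - 4*M_5, which has compact support [-5/2, 5/2].

def central_bspline(n, x):
    """M_n(x) = (1/(n-1)!) * sum_{j=0}^{n} (-1)^j C(n, j) (x + n/2 - j)_+^{n-1}."""
    x = np.asarray(x, dtype=float)
    s = sum((-1) ** j * comb(n, j) * np.maximum(x + n / 2 - j, 0.0) ** (n - 1)
            for j in range(n + 1))
    return s / factorial(n - 1)

Phi = lambda x: 5.0 * central_bspline(4, x) - 4.0 * central_bspline(5, x)

x = np.linspace(-3.0, 3.0, 6001)
print(Phi(np.array([-2.6, 2.6])))    # (numerically) zero outside [-5/2, 5/2]
print(Phi(x).sum() * (x[1] - x[0]))  # integrates to 1 (approximately)
```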
Article
In contrast to the classical Shannon sampling theorem, signal functions are considered which are not band-limited but duration-limited. It is shown that these functions can be approximately represented by a discrete set of samples. The error is estimated that arises when only a finite number of samples is selected.
Article
We present a type of single-hidden layer feedforward neural networks with sigmoidal nondecreasing activation function. We call them ai-nets. They can approximately interpolate, with arbitrary precision, any set of distinct data in one or several dimensions. They can uniformly approximate any continuous function of one variable and can be used for constructing uniform approximants of continuous functions of several variables. All these capabilities are based on a closed expression of the networks.
Article
The aim of this paper is to investigate the error which results from the method of approximation operators with logarithmic sigmoidal function. By means of the method of extending functions, a class of feed-forward neural network operators is introduced. Using these operators as approximation tools, upper bounds for the errors, in the uniform norm, of approximating continuous functions are estimated. Also, a class of quasi-interpolation operators with logarithmic sigmoidal function is constructed for approximating continuous functions defined on the whole real axis.
Article
This paper quantifies the approximation capability of neural networks and their application in machine learning theory. The problem of learning neural networks from samples is considered. The sample size which is sufficient for obtaining an almost-optimal stochastic approximation of function classes is obtained. In terms of the accuracy confidence function, we show that the least-squares estimator is almost optimal for the problem. Moreover, we consider the analogous problems related to learning by radial basis functions. Learning theory is a growing field of research which attracts a large number of researchers from a variety of disciplines such as computer science, economics, and neural networks. Mathematics is important for investigating learning problems since it provides the necessary level of rigorous analysis that leads to understanding the fundamental concepts and properties of learning. Specifically, the learning problem is reduced to finding a regression function (the average function of a given random process) using the corresponding manifold, under the condition that the function is not known but belongs to some given class of functions. Learning network problems have a long history (see the works of V. Vapnik, M.J.D. Powell, P.L. Bartlett, etc.).
Article
A method is developed for representing any communication system geometrically. Messages and the corresponding signals are points in two "function spaces," and the modulation process is a mapping of one space into the other. Using this representation, a number of results in communication theory are deduced concerning expansion and compression of bandwidth and the threshold effect. Formulas are found for the maximum rate of transmission of binary digits over a system when the signal is perturbed by various types of noise. Some of the properties of "ideal" systems which transmit at this maximum rate are discussed. The equivalent number of binary digits per second for certain information sources is calculated.
Baldi, P., Sadowsky, P.: A theory of local learning, the learning channel, and the optimality of backpropagation. Neural Netw. 83, 51–74 (2016).
Brudnii, J.A.: Approximation of functions of n variables by quasi-polynomials. Izv. Akad. Nauk SSSR Ser. Mat. 34, 564–583 (1970) (in Russian).
Popov, V.A., Sendov, B.: Modification of the Steklov function. C. R. Acad. Bulg. Sci. 36, 315–317 (1983).