Journal of Numerical Analysis and Approximation Theory

Published by Academia Romana Filiala Cluj

Online ISSN: 2501-059X · Print ISSN: 2457-6794

Articles


Sharp inequalities for the Neuman-Sandor mean in terms of arithmetic and contra-harmonic means

September 2012

Bao-Yu Liu
In this paper, we find the greatest values $\alpha$ and $\lambda$, and the least values $\beta$ and $\mu$ such that the double inequalities $$C^{\alpha}(a,b)A^{1-\alpha}(a,b)<M(a,b)<C^{\beta}(a,b)A^{1-\beta}(a,b)$$ and $$[C(a,b)/6+5A(a,b)/6]^{\lambda}[C^{1/6}(a,b)A^{5/6}(a,b)]^{1-\lambda}<M(a,b)<[C(a,b)/6+5A(a,b)/6]^{\mu}[C^{1/6}(a,b)A^{5/6}(a,b)]^{1-\mu}$$ hold for all $a,b>0$ with $a\neq b$, where $M(a,b)$, $A(a,b)$ and $C(a,b)$ denote the Neuman-S\'andor, arithmetic, and contra-harmonic means of $a$ and $b$, respectively.
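
As a quick numerical sanity check of the first double inequality, here is a minimal sketch (not taken from the paper) that evaluates the three means, using the standard representation $M(a,b)=\frac{a-b}{2\,\operatorname{arcsinh}\frac{a-b}{a+b}}$ of the Neuman-Sándor mean, and tests a pair of trial exponents; the sharp values of $\alpha$ and $\beta$ are the subject of the paper and are not reproduced here.

```python
import numpy as np

def A(a, b):        # arithmetic mean
    return (a + b) / 2

def C(a, b):        # contra-harmonic mean
    return (a**2 + b**2) / (a + b)

def M(a, b):        # Neuman-Sandor mean: (a-b) / (2*arcsinh((a-b)/(a+b)))
    return (a - b) / (2 * np.arcsinh((a - b) / (a + b)))

def bound(a, b, t): # the two-sided bound C^t(a,b) * A^(1-t)(a,b)
    return C(a, b)**t * A(a, b)**(1 - t)

# Trial exponents only -- the sharp alpha and beta are derived in the paper.
alpha, beta = 1/6, 1/3
rng = np.random.default_rng(0)
for _ in range(5):
    a, b = rng.uniform(0.1, 10.0, size=2)
    print(bound(a, b, alpha) < M(a, b) < bound(a, b, beta))
```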


On Berman's phenomenon for (0,1,2) Hermite-Fejér interpolation

September 2019

Given \(f\in C[-1,1]\) and \(n\) points (nodes) in \([-1,1]\), the Hermite-Fejér interpolation (HFI) polynomial is the polynomial of degree at most \(2n-1\) which agrees with \(f\) and has zero derivative at each of the nodes. In 1916, L. Fejér showed that if the nodes are chosen to be the zeros of \(T_{n}(x)\), the \(n\)th Chebyshev polynomial of the first kind, then the HFI polynomials converge uniformly to \(f\) as \(n\rightarrow\infty\). Later, D.L. Berman established the rather surprising result that this convergence property is no longer true for all \(f\) if the Chebyshev nodes are augmented by including the endpoints \(-1\) and \(1\) as additional nodes. This behaviour has become known as Berman's phenomenon. The aim of this paper is to investigate Berman's phenomenon in the setting of \((0,1,2)\) HFI, where the interpolation polynomial agrees with \(f\) and has vanishing first and second derivatives at each node. The principal result provides simple necessary and sufficient conditions, in terms of the (one-sided) derivatives of \(f\) at \(\pm 1\), for pointwise and uniform convergence of \((0,1,2)\) HFI on the augmented Chebyshev nodes if \(f\in C^{4}[-1,1]\), and confirms that Berman's phenomenon occurs for \((0,1,2)\) HFI.
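
For readers unfamiliar with the classical setting, the following sketch (an illustration only, not the \((0,1,2)\) case studied in the paper) evaluates the ordinary Hermite-Fejér interpolant on the Chebyshev zeros via Fejér's closed form \(H_n(f,x)=\sum_{k=1}^{n}f(x_k)(1-x x_k)\bigl(\tfrac{T_n(x)}{n(x-x_k)}\bigr)^2\) and illustrates the uniform convergence asserted by Fejér's 1916 theorem.

```python
import numpy as np

def hermite_fejer(f, n, x):
    """Classical Hermite-Fejer interpolant on the zeros of T_n:
    H_n(f,x) = sum_k f(x_k) * (1 - x*x_k) * (T_n(x) / (n*(x - x_k)))**2."""
    k = np.arange(1, n + 1)
    xk = np.cos((2 * k - 1) * np.pi / (2 * n))   # zeros of the Chebyshev polynomial T_n
    Tn = np.cos(n * np.arccos(x))                # T_n(x) for x in [-1, 1]
    H = np.zeros_like(x)
    for xi, fi in zip(xk, f(xk)):
        H += fi * (1 - x * xi) * (Tn / (n * (x - xi)))**2
    return H

f = np.abs                                       # continuous but not differentiable at 0
x = np.linspace(-0.999, 0.999, 2001)
for n in (8, 32, 128):
    print(n, np.max(np.abs(hermite_fejer(f, n, x) - f(x))))   # max error decreases with n
```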

[Figures: Fig. 1, the graph of the cubic B-spline for the knots -2,-1,0,1,2; Fig. 2, a mechanical spline.]
About B-splines. Twenty answers to one question: What is the cubic B-spline for the knots -2,-1,0,1,2?

September 2016


In this composition an attempt is made to answer one simple question only: What is the cubic B-spline for the knots -2,-1,0,1,2? The note will take you on a most interesting trip through various fields of Mathematics and finally convince you of how little we know.
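
As a concrete companion to the question, here is a small sketch (assuming the familiar piecewise-polynomial answer, with SciPy's `BSpline.basis_element` as a cross-check) that evaluates the cubic B-spline on the knots -2,-1,0,1,2.

```python
import numpy as np
from scipy.interpolate import BSpline

def cubic_bspline(x):
    """Cubic B-spline for the knots -2,-1,0,1,2 (normalized so that its
    integer translates sum to 1): 2/3 - x^2 + |x|^3/2 on |x| <= 1,
    (2 - |x|)^3 / 6 on 1 <= |x| <= 2, and 0 elsewhere."""
    ax = np.abs(x)
    return np.where(ax <= 1, 2/3 - x**2 + ax**3 / 2,
           np.where(ax <= 2, (2 - ax)**3 / 6, 0.0))

# Cross-check against SciPy's B-spline basis element on the same knots.
B = BSpline.basis_element([-2, -1, 0, 1, 2])
x = np.linspace(-1.95, 1.95, 14)
print(np.allclose(cubic_bspline(x), B(x)))   # expected: True
print(cubic_bspline(0.0))                    # 2/3, the peak value at the middle knot
```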





On the unique solvability and numerical study of absolute value equations

December 2019


The aim of this paper is twofold. Firstly, we consider the unique solvability of absolute value equations (AVE), \(Ax-B\vert x\vert =b\), when the condition \(\Vert A^{-1}\Vert <\frac{1}{\left\Vert B\right\Vert }\) holds. This is a generalization of an earlier result by Mangasarian and Meyer for the special case where \(B=I\). Secondly, a generalized Newton method for solving the AVE is proposed. We show that, under the condition \(\Vert A^{-1}\Vert <\frac{1}{4\Vert B\Vert }\), the algorithm converges globally and linearly to the unique solution of the AVE. Numerical results are reported to show the efficiency of the proposed method and to compare it with an existing method.
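
A minimal sketch of one common form of generalized Newton iteration for the AVE (the paper's precise algorithm and stopping rule may differ): linearize \(|x|\) through \(D(x)=\operatorname{diag}(\operatorname{sign}(x))\) and solve \((A-BD(x_k))x_{k+1}=b\) at each step; the test instance is generated so that the solvability condition can be checked numerically.

```python
import numpy as np

def generalized_newton_ave(A, B, b, tol=1e-10, max_iter=100):
    """Generalized Newton iteration for Ax - B|x| = b:
    with D(x) = diag(sign(x)), solve (A - B D(x_k)) x_{k+1} = b repeatedly."""
    x = np.zeros_like(b)
    for k in range(1, max_iter + 1):
        x_new = np.linalg.solve(A - B @ np.diag(np.sign(x)), b)
        if np.linalg.norm(x_new - x) <= tol * (1 + np.linalg.norm(x_new)):
            return x_new, k
        x = x_new
    return x, max_iter

# Random instance with A dominating B, plus a numerical check of the
# unique-solvability condition ||A^{-1}|| < 1 / (4 ||B||) used in the paper.
rng = np.random.default_rng(1)
n = 50
A = 20 * np.eye(n) + rng.standard_normal((n, n))
B = 0.05 * rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true - B @ np.abs(x_true)

print(np.linalg.norm(np.linalg.inv(A), 2) < 1 / (4 * np.linalg.norm(B, 2)))  # expect True here
x, iters = generalized_newton_ave(A, B, b)
print(iters, np.linalg.norm(x - x_true))   # few iterations, small error
```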

Preconditioned conjugate gradient methods for absolute value equations

September 2020


We investigate the NP-hard absolute value equations (AVE), \(Ax-B|x| =b\), where \(A,B\) are given symmetric matrices in \(\mathbb{R}^{n\times n}\), \(b\in \mathbb{R}^{n}\). By reformulating the AVE as an equivalent unconstrained convex quadratic optimization problem, we prove that the unique solution of the AVE is the unique minimum of the corresponding quadratic optimization problem. Based on the latter, we then adopt preconditioned conjugate gradient methods to determine an approximate solution of the AVE. The computational results show the efficiency of these approaches in dealing with the AVE.
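
The paper's specific quadratic reformulation of the AVE is not reproduced here; purely as a generic illustration of the second ingredient, the sketch below runs a hand-written preconditioned conjugate gradient loop with a Jacobi (diagonal) preconditioner on a stand-in symmetric positive definite quadratic.

```python
import numpy as np

def pcg(H, g, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for the SPD system H x = g,
    i.e. for minimizing the convex quadratic q(x) = 0.5*x'Hx - g'x.
    M_inv(r) applies the inverse of the preconditioner to a residual."""
    x = np.zeros_like(g)
    r = g - H @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Hp = H @ p
        alpha = rz / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        if np.linalg.norm(r) <= tol * np.linalg.norm(g):
            return x, k
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Stand-in SPD system (not the AVE reformulation of the paper);
# Jacobi preconditioner M = diag(H).
rng = np.random.default_rng(2)
n = 200
Q = rng.standard_normal((n, n))
H = Q @ Q.T + n * np.eye(n)        # symmetric positive definite
g = rng.standard_normal(n)
d = np.diag(H)
x, iters = pcg(H, g, lambda r: r / d)
print(iters, np.linalg.norm(H @ x - g))
```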







On accelerating the convergence of the successive approximations method

February 2001


In a previous paper, we showed that no q-superlinear convergence to a fixed point \(x^\ast\) of a nonlinear mapping \(G\) can be attained by the successive approximations when \(G^\prime(x^\ast)\) has no eigenvalue equal to 0. However, high convergence orders may be attained if one considers perturbed successive approximations. We characterize the correction terms which must be added at each step in order to obtain convergence with q-order 2 of the resulting iterates.
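
As an illustration of the idea (not the paper's general characterization), here is a scalar sketch in which the correction term added to the successive approximations is the classical Newton-type one, \(c_k=-\frac{G'(x_k)}{1-G'(x_k)}\,\bigl(x_k-G(x_k)\bigr)\), which is algebraically Newton's method for \(x-G(x)=0\) and turns the linearly convergent iteration into one with q-order 2.

```python
import numpy as np

def successive(G, x0, n_iter):
    """Plain successive approximations x_{k+1} = G(x_k)."""
    x, xs = x0, [x0]
    for _ in range(n_iter):
        x = G(x)
        xs.append(x)
    return np.array(xs)

def perturbed(G, dG, x0, n_iter):
    """Successive approximations with a Newton-type correction term:
    x_{k+1} = G(x_k) - G'(x_k)*(x_k - G(x_k)) / (1 - G'(x_k))."""
    x, xs = x0, [x0]
    for _ in range(n_iter):
        g, d = G(x), dG(x)
        x = g - d * (x - g) / (1 - d)
        xs.append(x)
    return np.array(xs)

G = lambda x: np.cos(x)              # fixed point x* ~ 0.739085, with G'(x*) != 0
dG = lambda x: -np.sin(x)
x_star = 0.7390851332151607
print(np.abs(successive(G, 1.0, 6) - x_star))   # errors decay only linearly
print(np.abs(perturbed(G, dG, 1.0, 6) - x_star))  # errors roughly square at each step
```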

On the acceleration of the convergence of certain iterative proceedings (II)
On the acceleration of the convergence of certain iterative proceedings (II)
The research in this paper originates in the study of the convergence of sequences generated by methods derived from the well-known Newton-Kantorovich method for approximating the solution of an equation in a normed linear space, together with the inverse of the Fréchet derivative at this solution. An important place in the paper is given to the notion of convergence order of a sequence approximating the solution of an equation. Given an approximating sequence which satisfies certain conditions expressed by the inequalities (25), we construct, through the relations (22), another approximating sequence whose convergence order is improved. We analyze several special cases and, at the same time, determine methods that are optimal with respect to the convergence order.

Accurate Chebyshev collocation solutions for the biharmonic eigenproblem on a rectangle

September 2017


We are concerned with accurate Chebyshev collocation (ChC) solutions to fourth-order eigenvalue problems. We consider the 1D case as well as the 2D case. In order to improve the accuracy of the computation we use the preconditioning strategy for the second-order differential operator introduced by Labrosse in 2009. The fourth-order differential operator is factorized as a product of second-order operators. In order to assess the accuracy of our method we calculate the so-called drift of the first five eigenvalues. In both cases the ChC method with the considered preconditioners provides accurate eigenpairs of interest.
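
The following 1D sketch (not the preconditioned solver of the paper) illustrates the factorization idea: build the Chebyshev differentiation matrix, impose Dirichlet conditions on the second-order block, and obtain the fourth-order collocation operator as the square of a second-order one. With the hinged conditions \(u(\pm1)=u''(\pm1)=0\) this factorization is exact, and the first eigenvalues of \(u^{(4)}=\lambda u\) are \((k\pi/2)^4\).

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and Chebyshev-Gauss-Lobatto points
    (after Trefethen's cheb.m, Spectral Methods in MATLAB)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 48
D, x = cheb(N)
D2 = (D @ D)[1:N, 1:N]     # second-order operator with Dirichlet conditions (interior nodes)
D4 = D2 @ D2               # fourth-order operator as a product of two second-order ones
lam = np.sort(np.linalg.eigvals(D4).real)[:5]
exact = (np.arange(1, 6) * np.pi / 2) ** 4
print(lam)                 # first five eigenvalues of u'''' = lam*u, u(+-1)=u''(+-1)=0
print(exact)               # (k*pi/2)^4, k = 1..5
```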

General multivariate arctangent function activated neural network approximations

September 2022


Here we present multivariate quantitative approximations of Banach space valued continuous multivariate functions on a box or on \(\mathbb{R}^{N}\), \(N\in \mathbb{N}\), by the multivariate normalized, quasi-interpolation, Kantorovich type and quadrature type neural network operators. We also treat the case of approximation by iterated operators of the last four types. These approximations are derived by establishing multidimensional Jackson type inequalities involving the multivariate modulus of continuity of the engaged function or its high order Fréchet derivatives. Our multivariate operators are defined by using a multidimensional density function induced by the arctangent function. The approximations are pointwise and uniform. The related feed-forward neural networks have one hidden layer.
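
To make the construction concrete in one dimension (with illustrative constants that may differ from those in the paper), take the arctangent-based sigmoid \(h(x)=\tfrac{2}{\pi}\arctan\tfrac{\pi x}{2}\), form the bell-shaped density \(\psi(x)=\tfrac14\bigl(h(x+1)-h(x-1)\bigr)\), whose integer translates sum to 1, and build the normalized quasi-interpolation operator \(A_n(f)(x)=\sum_k f(k/n)\,\psi(nx-k)\big/\sum_k \psi(nx-k)\).

```python
import numpy as np

def h(x):
    """Arctangent-based sigmoid (illustrative normalization, h -> +-1)."""
    return (2 / np.pi) * np.arctan(np.pi * x / 2)

def psi(x):
    """Bell-shaped density from shifted sigmoids; sum_k psi(x - k) = 1."""
    return 0.25 * (h(x + 1) - h(x - 1))

def quasi_interp(f, n, x):
    """Normalized quasi-interpolation operator
    A_n(f)(x) = sum_k f(k/n) psi(n*x - k) / sum_k psi(n*x - k),
    with the infinite sum truncated to nodes k/n within distance 20 of x."""
    x = np.atleast_1d(x).astype(float)
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        k = np.arange(int(np.floor(n * (xi - 20))), int(np.ceil(n * (xi + 20))) + 1)
        w = psi(n * xi - k)
        out[i] = np.dot(f(k / n), w) / w.sum()
    return out

f = np.sin
x = np.linspace(0.0, 1.0, 5)
for n in (16, 64, 256):
    print(n, np.max(np.abs(quasi_interp(f, n, x) - f(x))))  # error decreases as n grows
```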





