Article

Algorithm 790: CSHEP2D: Cubic Shepard method for bivariate interpolation of scattered data

Authors: R. J. Renka

Abstract

We describe a new algorithm for scattered data interpolation. The method is similar to that of Algorithm 660 but achieves cubic precision and C2 continuity at very little additional cost. An accompanying article presents test results that show the method to be among the most accurate available.
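As background, a modified Shepard interpolant blends local nodal functions P_k with local-support weights, S(x) = sum_k W_k(x) P_k(x) / sum_k W_k(x); in CSHEP2D the P_k are local weighted least-squares cubics. The Python sketch below illustrates only this general blending structure; the Franke-Little weight form, the fixed radius R, and the placeholder nodal functions are assumptions made for the example, not Renka's Algorithm 790 code.

import numpy as np

def modified_shepard_eval(x, y, nodes, nodal_funcs, R=0.25):
    """Evaluate a modified-Shepard-style blend S(x, y) of local nodal functions.

    nodes       : (N, 2) array of data sites.
    nodal_funcs : list of callables P_k(x, y), one per node, each interpolating
                  the data value at its node (local least-squares cubics in a
                  CSHEP2D-like scheme).
    R           : local-support radius for the weights (illustrative constant;
                  Renka's codes choose radii adaptively from nearby nodes).
    """
    d = np.hypot(nodes[:, 0] - x, nodes[:, 1] - y)             # distances to all nodes
    # Franke-Little weights: ((R - d)_+ / (R d))^2, zero outside the support.
    w = np.where(d < R, ((R - d) / (R * np.maximum(d, 1e-12))) ** 2, 0.0)
    if not w.any():                                            # point outside every support
        return nodal_funcs[int(np.argmin(d))](x, y)            # fall back to nearest node
    vals = np.array([P(x, y) for P in nodal_funcs])
    return float(np.dot(w, vals) / w.sum())

Because the weights form a partition of unity, cubic precision follows from nodal functions that reproduce cubics, while the overall smoothness of S is limited by the smoothness of the weight functions.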


... where Ω is the "lynx-eye" shaped domain, given by a horizontal elliptical domain with a vertical elliptical hole. Observe that K = Ω (which is not simply connected) is a generalized sector (see Section 3.2), with boundaries defined by the polar equation of the ellipses $\rho(\theta) = ab\sqrt{(1+\tan^2\theta)/(b^2+a^2\tan^2\theta)}$, $0 \le \theta \le 2\pi$ (4.4), where a and b are the horizontal and vertical semi-axes, respectively; the distribution of N = 312 Xu-like points in K (degree n = 24) is shown in Fig. 4.1 (left) [19]. In Table 4.5 we report the compression errors, corresponding to Xu-like interpolation of the Shepard-like interpolant quoted above, at a sequence of degrees. ...
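The form quoted in (4.4) is the standard polar equation of an origin-centred ellipse; dividing under the square root by $\cos^2\theta$ gives the equivalence

\[
\rho(\theta) \;=\; \frac{ab}{\sqrt{b^{2}\cos^{2}\theta + a^{2}\sin^{2}\theta}}
\;=\; ab\,\sqrt{\frac{1+\tan^{2}\theta}{b^{2}+a^{2}\tan^{2}\theta}},
\qquad 0 \le \theta \le 2\pi ,
\]

which reduces to $\rho \equiv a$ for a circle ($a = b$), as expected.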
... The new operators do not require either the use of special partitions of the node convex hull or special structured data as in [8]. We deeply study their approximation properties and provide an application to the scattered data interpolation problem; the numerical results show that this new approach is comparable with the other well known bivariate schemes QSHEP2D and CSHEP2D by Renka [34,35]. ...
... In section 4 we apply the bivariate Shepard-Bernoulli operators to the scattered data interpolation problem. The numerical results on some commonly used test functions for scattered data approximation [32,36] show that the bivariate interpolation scheme proposed here compares well with the better-known operators QSHEP2D [34] and CSHEP2D [35]. Finally, in section 5 we draw conclusions. ...
... , N can be replaced by the coefficients b_10, b_01, b_20, b_11, b_02 of the cubic polynomial which fits the data values (V_k, f(V_k)), k = 1, ..., N, on a set of nearby nodes in a weighted least-squares sense, as in the definition of the operator CSHEP2D in [35]. ...
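As a rough illustration of the weighted least-squares step mentioned in this snippet (not the actual CSHEP2D code), the following Python sketch fits the non-constant coefficients of a local bivariate cubic around a node; the Franke-Little-type weights and the radius choice are assumptions made for the example.

import numpy as np

def fit_local_cubic(center, f_center, neighbors, values):
    """Weighted least-squares fit of a bivariate cubic about `center`, with the
    constant term fixed to f_center so the node itself is interpolated.
    neighbors : (m, 2) nearby nodes (distinct from center); values : (m,) data.
    Returns coefficients for (dx, dy, dx^2, dx dy, dy^2, dx^3, dx^2 dy, dx dy^2, dy^3)."""
    dx = neighbors[:, 0] - center[0]
    dy = neighbors[:, 1] - center[1]
    A = np.column_stack([dx, dy, dx**2, dx*dy, dy**2,
                         dx**3, dx**2*dy, dx*dy**2, dy**3])
    r = np.hypot(dx, dy)
    R = 1.1 * r.max()                       # assumed weight radius
    sw = (R - r) / (R * r)                  # sqrt of Franke-Little-type weights (assumed)
    coeffs, *_ = np.linalg.lstsq(A * sw[:, None], (values - f_center) * sw, rcond=None)
    return coeffs

Only the linear and quadratic coefficients among these correspond to the b_10, ..., b_02 quoted in the snippet.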
Article
Full-text available
In this paper we extend the Shepard-Bernoulli operators introduced in [6] to the bivariate case. These new interpolation operators are realized by using local support basis functions introduced in [23] instead of classical Shepard basis functions and the bivariate three point extension [13] of the generalized Taylor polynomial introduced by F. Costabile in [11]. The new operators do not require either the use of special partitions of the node convex hull or special structured data as in [8]. We deeply study their approximation properties and provide an application to the scattered data interpolation problem; the numerical results show that this new approach is comparable with the other well known bivariate schemes QSHEP2D and CSHEP2D by Renka [34, 35].
... TOBLER (1979) indicates the similarity of the first possibility with the Laplace equation, and of the second with the biharmonic equation. Other averaging methods are conceivable, such as interpolation using the Modified Shepard algorithm (RENKA 1988, 1999). ...
... Some methods have been tested that construct a continuous surface through the points. Good results were obtained with the Modified Shepard interpolation, both in the quadratic and the cubic variant (RENKA 1988, 1999, THACKER et al. 2010). The linear interpolation is less appropriate than the Modified Shepard interpolation (Fig. 7b). ...
Preprint
Full-text available
When constructing a continuous surface, the properties of the original data should be preserved as much as possible. For data aggregated to polygonal units the main property is the volume. Its preservation is guaranteed by using the pycnophylactic interpolation method. For some polygon networks the original version of the method with a regular grid as the surface reference is only suboptimal. This is the case, for example, if the areas of the polygons vary greatly, if the polygons are very irregular, or the course of the original boundary lines has to be maintained. An alternative to the regular grid is the irregular triangle network (TIN). The segments of the boundaries can be made identical to edges of the triangles. The first step to construct the TIN is using the points of the boundary lines. Additional points necessary for the construction of a smooth surface are inserted into the polygons. Further densification of the mesh can be initiated by using methods for generating networks with specified properties, also called quality meshes. Only minor modifications of the original pycnophylactic interpolation method are necessary for quality meshes.
... The doFORC software implements four nonparametric methods for estimating the regression surface and the corresponding partial derivatives: LOESS 31,32 and three modified Shepard methods [42][43][44] (quadratic polynomial, cubic polynomial, and cosine series). The motivation is to enable researchers to experiment with different algorithms using their data and select one (or more) that is best suited to their needs. ...
... The method was improved by Renka et al. [42][43][44], who also developed several variations of the method based on polynomial and trigonometric functions for P_k in order to increase the precision of the approximation: quadratic polynomial [42] ...
Preprint
Full-text available
First-order reversal curves (FORC) diagram method is one of the most successful characterization techniques used to characterize complex hysteretic phenomena not only in magnetism, but also in other areas of science like in ferroelectricity, geology, archeology, light-induced and pressure hysteresis in spin-transition materials, etc. Because the definition of the FORC diagram involves a second-order derivative, the main problem in their numerical calculation is that the derivative of a function for which only discrete noise-contaminated data values are available magnifies the noise that is inevitably present in measurements. In this paper we present doFORC tool for calculating FORC diagrams of noise scattered data. It can provide both a smooth approximation of the measured magnetization and all its partial derivatives. doFORC is a free, portable application working on various operating systems, with an easy to use graphical interface, with four regression methods implemented to obtain a smooth approximation of the data which may then be differentiated to obtain approximations for derivatives. In order to perform the diagnostics and goodness of fit doFORC computes residuals to characterize the difference between observed and predicted values, generalized cross-validation to measure the predictive performance, two information criteria to quantify the information that is lost by using an approximate model, and three degrees of freedom to compare different amounts of smoothing being performed by different smoothing methods. Based on these doFORC can perform automatic smoothing parameter selection.
... QSHEP5D (developed by Berry [1999] in C++ as an upgrade to QSHEP3D for five-dimensional interpolation) has been translated to Fortran 95 in QSHEPMD. CSHEP2D, a cubic Shepard method, is also a direct translation of the original ACM algorithm, written in FORTRAN 77, developed by Renka [1999a]. However, since the original codes were written using single precision, the tolerance for detecting an ill-conditioned system was changed from the arbitrary value of 0.01 to the square root of machine epsilon for the current processor. ...
... CSHEP2D (cubic Shepard algorithm for two-dimensional data) is Algorithm 790 developed by Renka [1999a]. The subroutine CSHEP2 in SHEPPACK is a direct Fortran 95 translation of Renka's FORTRAN 77 code of the same name. ...
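As a side note, the "square root of machine epsilon" tolerance mentioned above is easy to obtain at run time; a small Python illustration follows (SHEPPACK itself is Fortran 95, so this is only a sketch of the idea).

import numpy as np

# Ill-conditioning tolerance: replace a fixed constant such as 0.01 with
# sqrt(machine epsilon) of the working precision.
tol_single = np.sqrt(np.finfo(np.float32).eps)   # about 3.5e-4
tol_double = np.sqrt(np.finfo(np.float64).eps)   # about 1.5e-8
print(tol_single, tol_double)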
Article
Full-text available
Scattered data interpolation problems arise in many applications. Shepard’s method for constructing a global interpolant by blending local interpolants using local-support weight functions usually creates reasonable approximations. SHEPPACK is a Fortran 95 package containing five versions of the modified Shepard algorithm: quadratic (Fortran 95 translations of Algorithms 660, 661, and 798), cubic (Fortran 95 translation of Algorithm 791), and linear variations of the original Shepard algorithm. An option to the linear Shepard code is a statistically robust fit, intended to be used when the data is known to contain outliers. SHEPPACK also includes a hybrid robust piecewise linear estimation algorithm RIPPLE (residual initiated polynomial-time piecewise linear estimation) intended for data from piecewise linear functions in arbitrary dimension m. The main goal of SHEPPACK is to provide users with a single consistent package containing most existing polynomial variations of Shepard’s algorithm. The algorithms target data of different dimensions. The linear Shepard algorithm, robust linear Shepard algorithm, and RIPPLE are the only algorithms in the package that are applicable to arbitrary dimensional data.
... Since the function is assumed to be known only as a large scattered data set, its values at Xu points have to be computed through an auxiliary function. In our application, we have chosen the cubic Shepard-like interpolant implemented in the ACM Algorithm 790 (CSHEP2D) by R.J. Renka [19]. As is known, for reasonably dense data sets CSHEP2D is among the most accurate and efficient scattered data algorithms available [20]. ...
... where S(x) is the Shepard-like interpolant [19] on the scattered point set. Recall that S(x) has only C^2 regularity, and thus quite slow convergence of its Xu interpolants could be expected, in view of (2.18). ...
Article
Full-text available
In a recent paper, Y. Xu proposed a set of Chebyshev-like points for polynomial interpolation on the square [−1, 1]^2. We have recently proved that the Lebesgue constant of these points grows like log^2 of the degree (as with the best known points for the square), and we have implemented an accurate version of their Lagrange interpolation formula at linear cost. Here we construct non-polynomial Xu-like interpolation formulas on bivariate compact domains with various geometries, by means of composition with suitable smooth transformations. Moreover, we show applications of Xu-like interpolation to the compression of surfaces given as large scattered data sets.
... The following procedure for reconstructing reanalysis temperature values at the weather stations' locations was used. First, we tested a set of interpolation methods comprising bilinear interpolation, third-order polynomial, inverse distance weighted, modified Shepard's interpolation, and basic geostatistical kriging, and found that the modified Shepard's interpolation method [14] is the most accurate for reconstructing the reanalysis data. The testing was done by reconstructing temperatures at relevant regular grid nodes from temperatures at the nodes of the doubled-size grid and comparing them with those given in the reanalysis set. ...
... The testing was done by reconstructing temperatures at relevant regular grid nodes from temperatures at the nodes of the doubled-size grid and comparing them with those given in the reanalysis set. Thereafter, surface air temperature values from the reanalysis datasets were reconstructed at the weather stations' location coordinates using the formulas of the modified Shepard's method [14] and a special sub-grid with a node located exactly at the station location. At these station locations we also have in situ observations of the meteorological values, which can be compared with the corresponding interpolated reanalysis data. ...
Article
Full-text available
The spatiotemporal pattern of the dynamics of surface air temperature and precipitation and those bioclimatic indices that are based upon factors which control vegetation cover are investigated. Surface air temperature and precipitation data are retrieved from the ECMWF ERA Interim reanalysis and APHRODITE JMA datasets, respectively, which were found to be the closest to the observational data. We created an archive of bioclimatic indices for further detailed studies of interrelations between local climate and vegetation cover changes, which include carbon uptake changes related to changes of vegetation types and amount, as well as with spatial shifts of vegetation zones. Meanwhile, analysis reveals significant positive trends of the growing season length accompanied by a statistically significant increase of the sums of the growing degree days and precipitation over the south of West Siberia. The trends hint at a tendency for an increase of vegetation ecosystems' productivity across the south of West Siberia (55°–60°N, 59°–84°E) in the past several decades and (if sustained) may lead to a future increase of vegetation productivity in this region.
... In order to increase the precision of the QSHEP2D operator, in 1999 Renka [49] introduced the CSHEP2D operator ...
... To test the accuracy of approximation of the Shepard Hermite-Birkhoff operators (16) in the bivariate interpolation of large sets of scattered data, we carried out a series of experiments by setting N_w = 13 nodes in the ball B(x_i, R_i^w), in order to define the basis functions W_{µ,i}(x), and N_t = N_w nodes in B(x_i, R_i^t), in order to associate to each node the triangle ∆(i). As for the numerical experiments, we consider the set of Renka's test functions (see [49]) generally used in the bivariate interpolation of large sets of scattered data. The numerical results are obtained by using a set of 1089 regularly distributed interpolation nodes in the unit square R = [0, 1] × [0, 1]. ...
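For readers unfamiliar with this test setup, the 1089 regularly distributed nodes correspond to a 33 × 33 grid on the unit square, and Franke's function is the best known member of the usual test set. The short Python sketch below generates such a grid and evaluates the commonly quoted form of Franke's function; treat it as illustrative of the setup rather than the authors' exact test harness.

import numpy as np

def franke(x, y):
    """Franke's bivariate test function (commonly quoted form)."""
    t1 = 0.75 * np.exp(-((9*x - 2)**2 + (9*y - 2)**2) / 4.0)
    t2 = 0.75 * np.exp(-((9*x + 1)**2) / 49.0 - (9*y + 1) / 10.0)
    t3 = 0.50 * np.exp(-((9*x - 7)**2 + (9*y - 3)**2) / 4.0)
    t4 = 0.20 * np.exp(-(9*x - 4)**2 - (9*y - 7)**2)
    return t1 + t2 + t3 - t4

# 1089 = 33 * 33 regularly distributed interpolation nodes in [0, 1] x [0, 1]
xs, ys = np.meshgrid(np.linspace(0, 1, 33), np.linspace(0, 1, 33))
values = franke(xs, ys)
print(xs.size, values.shape)   # 1089 nodes, (33, 33) array of data values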
Article
Full-text available
Interpolation problems arise in many areas where there is a need to construct a continuous surface from irregularly spaced data points. This problem has a number of solutions and, among them, the choice of interpolation technique depends on the distribution of points in the data set, the application domain, the approximating function, or the method that is prevalent in the discipline. We discuss Shepard's interpolation method and some of its variations, which have been proposed in order to increase the accuracy of approximation of the original method, to improve its efficiency, or even to solve specific interpolation problems.
... Using (9) we then have ... ; from (8), (11), (9), (12) and Proposition 3 we have ...
... The definition of the quadratic Bernoulli polynomial (14) requires the knowledge of the gradient of the function f, ∇f, at each sample point x_i. In line with other well-known operators for scattered data interpolation [11][12][13], we estimate the needed differences of derivatives by using a least-squares fit on a set of vertices of nearby triangles. For each triangle t_j we consider the nearest N_t vertices of nearby triangles and we use them to estimate the previous differences of derivatives. ...
Article
Full-text available
In this paper we discuss an improvement of the triangular Shepard operator proposed by Little to extend the Shepard method. In particular, we use triangle based basis functions in combination with a modified version of the linear local interpolant on the vertices of the triangle. We deeply study the resulting operator, which uses functional and derivative data, has cubic approximation order and a good accuracy of approximation. Suggestions on how to avoid the use of derivative data, without losing both order and accuracy of approximation, are given.
... The weight functions w_k must be equal to unity at the k-th node and must decrease gradually around this node. We obtained good results with Gaussian weight functions with the same standard deviation for all the nodes, and even with the constant function. We can highlight some features of this WLS method relative to the Pike method: a) a versatile data input; b) a better identification of the distribution; c) one can try more weight functions w_k and different values for N_c until the desired precision is reached; d) the basis functions φ_j can also be replaced, for example with cosine series functions [8]. The FORC diagram can be directly used in investigating the physical mechanisms giving rise to hysteresis in magnetic systems [4]. ...
... We can use any method to interpolate the initial FORC diagram, but if we deal with complex systems which can have complicated FORC diagrams (multiple peaks, asymmetries), or if the nodal density varies widely in the Preisach plane, a flexible method is necessary. We obtained very good results with a modified Shepard algorithm with 10-parameter cosine series nodal functions [8]. ...
Article
A new identification strategy of the distribution in Preisach-type models is described in this paper. The mixed second derivative of the First Order Reversal Curves (FORC) is evaluated after an interpolation in the weighted least-squares sense and a Shepard interpolation method is applied in order to replace the initial irregular grid with a regular one. The main advantages of this strategy are that it can deal with FORC curves on irregular grids and when the experimental errors are important, the weighted least-squares sense of this method increases the precision of the FORC diagram. The parameter set values given in this paper have been established using FORC curves obtained by simulations with a Complete-Moving-Hysteresis model on known Preisach distributions.
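For context, the mixed second derivative mentioned here is the standard FORC distribution; in the usual notation (H_a the reversal field, H_b the applied field; sign and normalization conventions vary slightly between authors) it reads

\[
\rho(H_a, H_b) \;=\; -\frac{1}{2}\,
\frac{\partial^{2} M(H_a, H_b)}{\partial H_a\,\partial H_b},
\qquad H_b \ge H_a .
\]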
... Variations of the Shepard method based on linear, quadratic and cubic polynomials, namely the modified Shepard methods, have been proposed [1,2,3] in order to solve the classical problem of scattered data interpolation in R^2 (approximation of an unknown function when only functional values on a set of n scattered nodes are known). They improve the original Shepard method with quadratic and cubic polynomials that interpolate the data value at each node and fit the data values on a set of nearby nodes in a weighted least-squares sense. More recently, Thacker and others [3] have suggested the choice of linear polynomials for the local least-squares fit. ...
... Similarly to the triangular Shepard basis functions, the six-point basis functions (2) are multinode basis functions [16], that is, they are defined as the normalization of the product of inverse distances from the nodes in s_j and, consequently, they are positive and form a partition of unity. For this kind of basis functions the cardinality property changes and becomes the following. ...
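In formula form, multinode basis functions of this type (m = 3 for the triangular Shepard method, m = 6 for the six-point case) are usually written as normalized products of inverse distances; with s_j = {x_{j_1}, ..., x_{j_m}} and an exponent µ > 0, a common form is

\[
B_{\mu,j}(x) \;=\;
\frac{\prod_{k=1}^{m} \lVert x - x_{j_k}\rVert^{-\mu}}
     {\sum_{i} \prod_{k=1}^{m} \lVert x - x_{i_k}\rVert^{-\mu}},
\]

so that each B_{µ,j} is nonnegative and the family sums to one (a partition of unity), which is exactly the property invoked in the snippet.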
Article
Full-text available
The problem of Lagrange interpolation of functions of two variables by quadratic polynomials based on nodes which are vertices of a triangulation has been recently studied and local six-tuples of vertices which assure the uniqueness and the optimal-order of the interpolation polynomial are known. Following the idea of Little and the theoretical results on the approximation order and accuracy of the triangular Shepard method, we introduce an hexagonal Shepard operator with quadratic precision and cubic approximation order for the classical problem of scattered data approximation without least square fit.
... The main advantage of his modified version is that data is fit to a function of three or more independent variables. Later, Renka presented a slightly different implementation of his algorithm, which achieves cubic precision and second derivative continuity at little additional cost [141]. ...
... We use standard numerical methods to solve the discretized system, as discussed in Section 6. In our tests the forcing term is analytically known everywhere; in the general case it will be known only in the domain. We have used Shepard cubic extrapolation ([39]) to compute a smooth extension of the body force. ...
Article
Full-text available
Version: June 2002 We present a new method for the solution of the Stokes equations. Our goal is to develop a robust and scalable methodology for two and three dimensional, moving-boundary, flow simulations. Our method is based on Anita Mayo's method for the Poisson's equation: "The Fast Solution of Poisson's and the Biharmonic Equations on Irregular Regions", SIAM J. Num. Anal., 21 (1984), pp. 285– 299. We embed the domain in a rectangular domain, for which fast solvers are available, and we impose the boundary conditions as interface (jump) conditions on the velocities and tractions. We use an indirect boundary integral formulation for the homogeneous Stokes equations to compute the jumps. The resulting integral equations are discretized by Nyström's method. The rectangular domain problem is discretized by finite elements for a velocity-pressure formulation with equal order interpolation bilinear elements (Q1-Q1). Stabilization is used to circumvent the inf − sup condition for the pressure space. For the integral equations, fast matrix vector multiplications are achieved via a N log N algorithm based on a block representation of the discrete integral operator, combined with (kernel independent) singular value decomposition to sparsify low-rank blocks. Our code is built on top of PETSc, an MPI based parallel linear algebra library. The regular grid solver is a Krylov method (Conjugate Residuals) combined with an optimal two-level Schwartz-preconditioner. For the integral equation we use GMRES. We have tested our algorithm on several numerical examples and we have observed optimal convergence rates.
... The modification improves the method both in addressing shape preservation and in making it local to a neighborhood of points [5] [7]. ACM algorithm 790 (CSHEP2D) [11], used later in Section 6, is a variation of the modified Shepard's method. Another important method of interpolation is known as Hardy's Multiquadrics [6]. ...
Article
Full-text available
We investigate the performance of DEI, an approach [2] that computes off-mesh approximations of PDE solutions, and can also be used as a technique for scattered data interpolation and surface representation. For the general case of unstructured meshes, we found it necessary to modify the original DEI. The resulting method, ADEI, adjusts the parameter of the interpolant, obtaining better performance. Finally, we measure ADEI's performance using different sets of scattered data and test functions and compare ADEI against two methods from the collection of ACM algorithms: Algorithms 752 [10] and 790 [11]. The results show that ADEI is better than, if not comparable to, the best of the compared scattered data interpolation techniques.
... In the Step-1 interpolation, we use the 3 km equally spaced grid. Then in Step-2, we use the 250 m grid to interpolate further by using the modified Shepard's method [33]. [Figure caption: Distributions of the correlation of HSAF_mod versus the averaged residual (the averaged spectral ratio between HSAF_mod and HSAF_obs) for the same 546 sites shown in Figure 6.] ...
Chapter
Full-text available
We first derived site amplification factors (SAFs) from the strong motions observed by the Japanese nationwide networks, namely K-NET and KiK-net of the National Institute of Earthquake Research and Disaster Resilience and the Shindokei (Instrumental Seismic Intensity) Network of the Japan Meteorological Agency, by using the so-called generalized spectral inversion technique. We can use these SAFs for strong motion prediction at these observation sites; however, we need at least observed weak motion or microtremor data to quantify the SAF at an arbitrary site. So we tested whether the current velocity models in Japan can reproduce the observed SAFs at the nearest grid point of the 250 m mesh as one-dimensional theoretical transfer functions (TTF). We found that at about one-half of the sites the calculated 1D TTFs show a more or less acceptable fit to the observed SAFs; however, the TTFs tend to underestimate the observed SAFs in general. Therefore, we propose a simple, empirical method to fill the gap between the observed SAFs and the calculated TTFs. Validation examples show that our proposed method effectively predicts better SAFs than the direct substitution of TTFs at sites without observed data.
... Späth also shows in [36] that S_mod has quadratic precision, that is, if F is quadratic, S_mod(x, y) = F(x, y) for any point (x, y). ACM Algorithm 790 (CSHEP2D), developed by Renka and Brown [30], is a variation of the MSM. This is one of the two methods we use to compare the performance of the technique we introduce in Chapter 3. ...
Article
This paper presents a novel approach to estimate the contact pattern for gear drives. The proposed method is based on the geometric properties of the generated surfaces of the pinion and the gear and it neglects the mechanical characteristics of the mating members. The key feature of the method is the superimposition of a virtual marking compound over the gear surface that mimics the industrial practice of contact pattern inspection. For each meshing condition the instantaneous contact area is estimated as the intersection of the pinion surface with the marking compound gear surface. Finally, the contact pattern is estimated by the convex envelope of all the instantaneous contact areas in the zr-plane. The marking compound shape is identified through an optimization process to match a target contact pattern obtained, e.g. with an accurate loaded tooth contact analysis tool. The proposed method has been tested with a dedicated FEM software package capable of producing a very accurate estimation of the contact pattern under load. Extensive simulations have shown that, once the optimal marking compound shape has been identified, the proposed method computes a reliable contact pattern even for very different surface geometries (e.g. an ease-off on the pinion surface) and assembly errors. The computational cost of the entire procedure is about two orders of magnitude lower than that required to run the full FEM analysis.
... In order to generate, from the data set X, sets of six points s_j, for each node x_i ∈ X we choose another five points from X, pairwise distinct and distinct from x_i, which reduce the bound for the error in Proposition 4. Then, we carry out various numerical experiments with the following set of functions, introduced by Franke [18,19] and Renka and Brown [26]. ...
Article
In this paper, we present an improvement of the Hexagonal Shepard method which uses functional and first-order derivative data. More in detail, we use six-point basis functions in combination with the modified local interpolant on six points. The resulting operator reproduces polynomials up to degree 3 and has quartic approximation order. Several numerical results show the good accuracy of approximation of the proposed operator.
... However, the results of RBF interpolation from highly irregular (Integra) data were worse than for Shepard's interpolation scheme. In particular, we use Renka's method for 2D data [Renka 1988b; Renka 1999] and also for 3D data [Renka 1988a]. An example of resampling BRDF data is shown in Figure 6. ...
Article
Full-text available
We discuss the validation of BTF data measurements by means used for BRDF measurements. First, we show how to apply the Helmholtz reciprocity and isotropy for a single data set. Second, we discuss a cross-validation for BRDF measurement data obtained from two different measurement setups, where the measurements are not calibrated or the level of accuracy is not known. We show the practical problems encountered and the solutions we have used to validate physical setup for four material samples. We describe a novel coordinate system suitable for resampling the BRDF data from one data set to another data set. Further, we show how the perceptually uniform color space CIE 1976 L*a*b* is used for cross-comparison of BRDF data measurements, which were not calibrated.
... Let T_0 be the set of all hexagons with at least a vertex in U_0. The definitions of h' in (10) and M in (13) imply that T_0 contains at least one and at most M hexagons, and for each hexagon t_j ∈ T_0 we have ...
Article
Full-text available
The triangular Shepard method, introduced by Little in 1983 [7], is a convex combination of triangular basis functions with linear polynomials, based on the vertices of the triangles, that locally interpolate the given data at the vertices. The method has linear precision and reaches quadratic approximation order [3]. As specified by Little, the triangular Shepard method can be generalized to higher dimensions and to sets of more than three points. In this paper we introduce the multinode Shepard method as a generalization of the triangular Shepard method to the case of scattered points in R^s, s ∈ N, and we study the remainder term and its asymptotic behavior.
... The basis function P_k(x) was the constant g(k) in the original Shepard algorithm, and later variants used a quadratic polynomial (Franke and Nielson, 1980; Renka, 1988a,b,c; Berry and Minser, 1999), a cubic polynomial (Renka, 1999a), and a cosine trigonometric polynomial (Renka, 1999b). The primary disadvantage for large data sets is that a considerable amount of preprocessing is needed to determine closest points and calculate the local approximation. ...
... The six-tuple set S is computed by means of Dalik's algorithm applied to the Delaunay triangulation of the nodes. The numerical experiments are realized by considering the set of test functions generally used in this field, defined in [8] (see Fig. 9). ...
Chapter
As specified by Little [7], the triangular Shepard method can be generalized to higher dimensions and to sets of more than three points. In line with this idea, the hexagonal Shepard method has been recently introduced by combining six-point basis functions with quadratic Lagrange polynomials interpolating on these points, and the approximation error has been derived by adapting, to the case of six points, the technique developed in [4]. As for the triangular Shepard method, the use of appropriate sets of six points is crucial both for the accuracy and the computational cost of the hexagonal Shepard method. In this paper we discuss some algorithms to find useful six-tuples of points in a fast manner without using any triangulation of the nodes.
... With this method the jumps at the polygon boundaries are considerably smaller, so the iteration procedure may converge faster. Methods other than the modified Shepard interpolation (RENKA 1988, 1999) are also conceivable, for example kernel density estimation (KDE) or trend surfaces. 5.2 Smoothing: In analogy to the computation scheme on the regular grid, the smoothing step applies the distance-weighted mean of the z-values of the neighboring points, according to the formula given, among others, by SHEPARD (1968), using the neighbors and the neighbors of the direct neighbors around the reference point (second ring of neighbors, light grey area). In analogy to gravitation, the exponent p is usually set to 2 when applied in interpolation methods. ...
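For reference, Shepard's (1968) distance-weighted mean has the familiar inverse-distance form, with d_i the distance from the evaluation point to neighbor i and p the exponent (usually p = 2):

\[
\bar z(x) \;=\; \frac{\sum_{i} d_i^{-p}\, z_i}{\sum_{i} d_i^{-p}},
\qquad d_i = \lVert x - x_i \rVert .
\]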
Article
Wolf-Dieter RASE, wolf.rase@t-online.de. Abstract: When constructing a continuous surface, the properties of the original data must be preserved; for area-based data this is the volume over the reference units. The pycnophylactic interpolation method guarantees volume preservation. In some applications the original version of the method, with a regular grid as the surface model, is only suboptimal, for example when the course of the original boundary lines has to be maintained exactly, or when the polygon areas differ greatly in size. One solution is to use an irregular triangular network (TIN) as the data model for the surface. The polygon boundaries are identical to edges of the triangle network. Additional points needed for the construction of a smooth surface are inserted into the polygons. The densification is carried out with methods for generating meshes with specified quality properties (quality meshes). Minor modifications of the original regular-grid algorithm are required to adapt it to the geometric properties of irregular triangle networks.
... The smoothing or averaging step requires different algorithms than the filters presented for the regular grid, because the number of neighbors and their distances vary at each point. Modified inverse distance weighting (Renka 1988, 1999a, 1999b) is one of several methods applicable for averaging the points and smoothing the surface. ...
... In particular, scaling is inevitable for RBFs with compact support [2]. The same problem exists for modifications of Shepard's method [3]. If φ(y) in (1.1) is replaced by φ(y/σ), what is the proper choice of σ? ...
Article
Full-text available
For bivariate and trivariate interpolation we propose in this paper a set of integrable radial basis functions (RBFs). These RBFs are found as fundamental solutions of appropriate PDEs and they are optimal in a special sense. The condition number of the interpolation matrices as well as the order of convergence of the interpolation are estimated. Moreover, the proposed RBFs provide smooth approximations and approximate fulfillment of the interpolation conditions. This property allows us to avoid the undecidable problem of choosing the right scale parameter for the RBFs. Instead we propose an iterative procedure in which a sequence of improving approximations is obtained by means of a decreasing sequence of scale parameters in an a priori given range. The paper provides a few clear examples of the advantage of the proposed interpolation method.
... [25] For this purpose, the population functions and band structure under the k-integral are interpolated with a cubic Shepard method. [26] The equations of motion (1) and (2) in combination with equations (3) to (6) are integrated in time using an adaptive predictor-corrector method. [27] We use a Monkhorst-Pack grid to sample the Brillouin zone. ...
Article
When exploring new materials for their potential in (opto)electronic device applications, it is important to understand the role of various carrier interaction and scattering processes. Research on transition metal dichalcogenide (TMD) semiconductors has recently progressed towards the realisation of working devices, which involve light-emitting diodes, nanocavity lasers, and single-photon emitters. In these two-dimensional atomically thin semiconductors, the Coulomb interaction is known to be much stronger than in quantum wells of conventional semiconductors like GaAs, as witnessed by the 50 times larger exciton binding energy. The question arises, whether this directly translates into equivalently faster carrier-carrier Coulomb scattering of excited carriers. Here we show that a combination of ab-initio band-structure and many-body theory predicts carrier relaxation on a 50-fs time scale, which is less than an order of magnitude faster than in quantum wells. These scattering times compete with the recently reported sub-ps exciton recombination times, thus making it harder to achieve population inversion and lasing.
... Following the general recommendations [28] on the choice of the number of points, the value N_w = 9 was adopted. This modification is well suited to cases of a non-uniform grid that is not known in advance, and it also makes it possible to use, instead of the constant values z_j in formula (2), some functional, for example a linear or quadratic one [29]. In the case under consideration it was decided to restrict ourselves to constant values. ...
Article
Full-text available
In the article, an algorithm for constructing deformable face models is described, based on the use of the Active Shape Model method, the Shepard method of restoring landscape surfaces, and a set of 3D particular face models. As an alternative to the EER, an assessment of accuracy in the task of person recognition from a face image, based on an anchored value of FAR, is offered. The results of testing the algorithm are presented. We demonstrate the results of using the obtained models within the framework of a recognition algorithm on a large base of several thousand images (the FERET image database as of the year 2000), which contains photographs of people at angles of 0, 45 and 90 degrees relative to the optical axis of the camera. Analysis of the results showed that the use of deformable face models does not reduce the quality of person recognition by face image even under difficult initial conditions and in some cases leads to improved recognition results.
... In between the grid points, the color map is visualized with cubic Shepard interpolation [6]. All three maps in Fig. 4 show that the highest dopant concentration can be obtained using a deposition temperature of ∼260 °C and ∼2.5% target Al concentration. ...
Article
Full-text available
The Semilab SE-2000 spectroscopic ellipsometer is a versatile thin film characterization instrument capable of spectroscopic ellipsometry measurements covering a large spectral range from ultraviolet to near infrared within a few seconds and into the mid-infrared in a few minutes. It is suitable for characterizing thin films from monolayers to complex multi-layer laminates and bulk materials. This article demonstrates the unique capabilities of the SE-2000 system by the wide spectral range investigation of Al doped ZnO layers on different substrates and with different layer structures. Using data fits to the Drude dispersion law, the electrical properties of Al:ZnO were determined despite the presence of other conductive layers. The results were corroborated with four-point-probe measurements on a single Al:ZnO layer deposited on a glass substrate.
... Figure 4 shows the 2D energy surface of bilayer C2N, referenced to AB stacking, as a function of the relative shift (ΔX, ΔY). The discrete set of sliding energies was interpolated to a finer mesh by using the Renka II procedure [39][40][41]. Fig. 4 clearly shows that the high-symmetry AB-stacking is not the most favourable stable configuration. ...
Article
Full-text available
In recent years, a 2D graphene-like sheet, monolayer C2N, was synthesized via a simple wet-chemical reaction. Here, we studied the stability and electronic properties of bilayer C2N. According to a previous study, a bilayer may exist in one of three highly symmetric stacking configurations, namely AA, AB and AB′-stacking. For the AA-stacking, the top layer is directly stacked on the bottom layer. Furthermore, AB- and AB′-stacking can be obtained by shifting the top layer of AA-stacking by a/3-b/3 along the zigzag direction and by a/2 along the armchair direction, respectively, where a and b are translation vectors of the unit cell. By using first-principles calculations, we calculated the stability of AA, AB and AB′-stacking C2N and their electronic band structures. We found that the AB-stacking is the most favorable structure and has the highest band gap, which agrees with the previous study. Nevertheless, we furthermore examined the energy landscape and translational sliding barriers between stacking layers. From the energy profiles, we interestingly found that the most stable positions are shifted away from the high-symmetry AB-stacking. In the electronic band structure, the band character can be modified according to the shift. The interlayer shear mode close to the local minimum point was determined to be roughly 2.02 × 10^12 rad/s.
... It is suspected that the averaging step with inverse distance weighting (Shepard 1968) is probably too simple to cope with the varying distances to the neighboring nodes. A more advanced algorithm of the same family, for example the algorithm implemented by Renka (1988, 1999), might be a better solution. If smoothing for irregular triangles can be improved, other algorithms to obtain quality meshes should be tested. ...
Article
Full-text available
The interpolation of continuous surfaces from discrete points is supported by most GIS software packages. Some packages provide additional options for the interpolation from 3D line objects, for example surface-specific lines, or contour lines digitized from topographic maps. Demographic, social and economic data can also be used to construct and display smooth surfaces. The variables are usually published as sums for polygonal units, such as the number of inhabitants in communities or counties. In the case of point and line objects the geometric properties have to be maintained in the interpolated surface. For polygon-based data the geometric properties of the polygon boundary and the volume should be preserved, avoiding redistribution of parts of the volume to neighboring units during interpolation. The pycnophylactic interpolation method computes a continuous surface from polygon-based data and simultaneously enforces volume preservation in the polygons. The original procedure using a regular grid is extended to surface representations based on an irregular triangular network (TIN).
... We use standard numerical methods to solve the discretized system, as discussed in Section 6. In our tests the forcing term is analytically known everywhere; in the general case it will be known only in the domain. We have used Shepard cubic extrapolation ([50]) to compute a smooth extension of the body force. ...
Article
Version: July 2003. We present a new method for the solution of the Stokes equations. The main features of our method are: (1) it can be applied to arbitrary geometries in a black-box fashion; (2) it is second order accurate; and (3) it has optimal algorithmic complexity. Our approach, to which we refer as the Embedded Boundary Integral method, is based on Anita Mayo's work for the Poisson's equation: "The Fast Solution of Poisson's and the Biharmonic Equations on Irregular Regions", SIAM Journal on Numerical Analysis, 21 (1984), pp. 285–299. We embed the domain in a rectangular domain, for which fast solvers are available, and we impose the boundary conditions as interface (jump) conditions on the velocities and tractions. We use an indirect boundary integral formulation for the homogeneous Stokes equations to compute the jumps. The resulting equations are discretized by Nyström's method. The rectangular domain problem is discretized by finite elements for a velocity-pressure formulation with equal order interpolation bilinear elements (Q1-Q1). Stabilization is used to circumvent the inf-sup condition for the pressure space. For the integral equations, fast matrix vector multiplications are achieved via an N log N algorithm based on a block representation of the discrete integral operator, combined with (kernel independent) singular value decomposition to sparsify low-rank blocks. The regular grid solver is a Krylov method (Conjugate Residuals) combined with an optimal two-level Schwartz-preconditioner. For the integral equation we use GMRES. We have tested our algorithm on several numerical examples and we have observed optimal convergence rates.
Article
We present a numerical scheme for the computation of conservative fluid velocity, pressure and temperature fields in a porous medium. For the velocity and pressure we use the primal–dual mixed finite element method of Trujillo and Thomas while for the temperature we use a cell-centered finite volume method. The motivation for this choice of discretization is to compute accurate conservative quantities. Since the variant of the mixed finite element method we use is not commonly used, the numerical schemes are presented in detail. We sketch the computational details and present numerical experiments that justify the accuracy predicted by the theory.
Conference Paper
Full-text available
Scattered data interpolation and approximation problems arise in many applications. Shepard's method for constructing global interpolants by blending local interpolants using locally-supported weight functions usually creates reasonable approximations. This paper describes SHEPPACK, a Fortran 95 package containing five versions of the modified Shepard algorithm. These five versions include quadratic (TOMS Algorithm 660, 661, and 798), cubic (TOMS Algorithm 790), and linear variations of the original Shepard algorithm. The main goal of SHEPPACK is to provide users with a single consistent package consisting of all existing polynomial variations of Shepard's algorithm. The algorithms target data of different dimensions. The linear Shepard algorithm is the only algorithm in the package that is applicable to arbitrary dimensional data. The motivation is to enable researchers to experiment with different algorithms using their data and select one (or more) that is best suited to their needs, and to support interpolation for sparse, high dimensional data sets.
Article
Full-text available
Abstract: The proximal average operator is recognized for its ability to transform two convex functions into another convex function. However, we prove with examples that the proximal average operator does have limitations with respect to convexity. We also look at the importance of λ ∈ [0, 1] and describe an idea of how to plot the proximal average of two convex functions more efficiently.
Article
We present results of accuracy tests on scattered-data fitting methods that have been published as ACM algorithms. The algorithms include seven triangulation-based methods and three modified Shepard methods, two of which are new algorithms. Our purpose is twofold: to guide potential users in the selection of an appropriate algorithm and to provide a test suite for assessing the accuracy of new methods (or existing methods that are not included in this survey). Our test suite consists of five sets of nodes, with node counts ranging from 25 to 100, and 10 test functions. These are made available in the form of three Fortran subroutines: TESTDT returns one of the node sets; TSTFN1 returns a value and, optionally, a gradient value, of one of the test functions; and TSTFN2 returns a value, first partials, and second partial derivatives of one of the test functions.
Article
A three-level model system for the prediction of local flows in mountainous terrain is described. The system is based upon an operational weather prediction model with a horizontal grid spacing of about 10 km. The large-scale flow is transformed to a more detailed terrain, first by a mesoscale model with grid spacing of about 1 km, and then by a local-scale model with a grid spacing of about 0.2 km. The weather prediction model is hydrostatic, while the two other models are non-hydrostatic. As a case study the model system has been applied to estimate wind and turbulence over Várnes airport, Norway, where data on turbulent flight conditions were provided near the runway. The actual case was chosen due to previous experiences, which indicate that south-easterly winds may cause severe turbulence in a region close to the airport. Local terrain induced turbulence seems to be the main reason for these effects. The predicted local flow in the actual region is characterized by narrow secondary vortices along the flow, and large turbulent intensity associated with these vortices. A similar pattern is indicated by the sparse observations, although there seems to be a difference in mean wind direction between data and predictions. Due to fairly coarse data for sea surface temperature, errors could be induced in the turbulence damping via the Richardson number. An adjustment for this data problem improved the predictions.
Article
The first-order reversal curve (FORC) diagram method is one of the most successful characterization techniques used to characterize complex hysteretic phenomena not only in magnetism but also in other areas of science like in ferroelectricity, geology, archeology, in spin-transition materials, etc. Because the definition of the FORC diagram involves a second-order derivative, the main problem in their numerical calculation is that the derivative of a function for which only discrete noise-contaminated data values are available magnifies the noise that is inevitably present in measurements. In this paper, we present the doFORC tool for calculating FORC diagrams of noise scattered data. It can provide both a smooth approximation of the measured magnetization and all its partial derivatives. Even if doFORC is mainly dedicated to FORC diagrams' computation, it can process a general set of arbitrarily distributed two-dimensional points. doFORC is a free, portable application working on various operating systems, with an easy to use graphical interface, with four regression methods implemented to obtain a smooth approximation of the data which may then be differentiated to obtain approximations for derivatives. In order to perform the diagnostics and goodness of fit, doFORC computes residuals to characterize the difference between the observed and predicted values, generalized cross-validation to measure the predictive performance, two information criteria to quantify the information that is lost by using an approximate model, and three degrees of freedom to compare different amounts of smoothing being performed by different smoothing methods. Based on these, doFORC can perform automatic smoothing parameter selection.
Article
We need a method that can predict strong ground motions with sufficient accuracy at any target sites. So far, however, our knowledge about the characteristics of source, path, and site factors of the observed strong motions has not been fully utilized. First, we performed the analysis to investigate the properties of the factors mentioned above based on a generalized inversion technique (GIT) on Fourier spectra of strong motion networks deployed in Japan, then we tried to model not only spectral amplitude but also phase in spectra. Next, we constructed a procedure for predicting the strong motions considering both the spectral difference between the whole duration of motion and the S-wave portion and the effects of soil nonlinearity. Finally, we confirmed the proposed method worked well for the largest aftershock (Mw7.8) of the 2011 Tohoku earthquake with a special consideration of its smaller stress drop as a regional source characteristic.
Article
Full-text available
Stellar physics and evolution calculations enable a broad range of research in astrophysics. Modules for Experiments in Stellar Astrophysics (MESA) is a suite of open source, robust, efficient, thread-safe libraries for a wide range of applications in computational stellar astrophysics. A one-dimensional stellar evolution module, MESAstar, combines many of the numerical and physics modules for simulations of a wide range of stellar evolution scenarios ranging from very low mass to massive stars, including advanced evolutionary phases. MESAstar solves the fully coupled structure and composition equations simultaneously. It uses adaptive mesh refinement and sophisticated timestep controls, and supports shared memory parallelism based on OpenMP. State-of-the-art modules provide equation of state, opacity, nuclear reaction rates, element diffusion data, and atmosphere boundary conditions. Each module is constructed as a separate Fortran 95 library with its own explicitly defined public interface to facilitate independent development. Several detailed examples indicate the extensive verification and testing that is continuously performed and demonstrate the wide range of capabilities that MESA possesses. These examples include evolutionary tracks of very low mass stars, brown dwarfs, and gas giant planets to very old ages; the complete evolutionary track of a 1 M ☉ star from the pre-main sequence (PMS) to a cooling white dwarf; the solar sound speed profile; the evolution of intermediate-mass stars through the He-core burning phase and thermal pulses on the He-shell burning asymptotic giant branch phase; the interior structure of slowly pulsating B Stars and Beta Cepheids; the complete evolutionary tracks of massive stars from the PMS to the onset of core collapse; mass transfer from stars undergoing Roche lobe overflow; and the evolution of helium accretion onto a neutron star. MESA can be downloaded from the project Web site (http://mesa.sourceforge.net/).
Chapter
The problem of reconstructing an unknown function from a finite number of given scattered data is well known and well studied in approximation theory. Several methods have been developed with this goal and are successfully applied in different contexts. Due to the need for fast and accurate approximation methods, in this paper we numerically compare some variations of the Shepard method obtained by considering different basis functions.
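As a point of reference, the basic Shepard interpolant with a pluggable radial weight can be written in a few lines; the Python sketch below is only illustrative (function names and the default inverse-distance-squared weight are assumptions), but it shows where the "different basis functions" enter.

    import numpy as np

    def shepard(nodes, values, points, weight=lambda d: 1.0 / d**2):
        # Global Shepard blend S(x) = sum_k w(d_k) f_k / sum_k w(d_k); the radial
        # weight w is passed in so that different "basis functions" (inverse-
        # distance powers, Gaussian-type or compactly supported kernels, ...)
        # can be compared.  Names and the default weight are illustrative.
        nodes = np.asarray(nodes, float)     # (N, 2) scattered nodes
        values = np.asarray(values, float)   # (N,)   data values
        points = np.asarray(points, float)   # (M, 2) evaluation points
        d = np.linalg.norm(points[:, None, :] - nodes[None, :, :], axis=2)
        exact = d < 1e-14                    # evaluation point coincides with a node
        w = weight(np.where(exact, 1.0, d))  # guard against division by zero
        w[exact.any(axis=1)] = 0.0           # at a node, reproduce the data value
        w[exact] = 1.0
        return (w @ values) / w.sum(axis=1)

Passing, for instance, weight=lambda d: np.exp(-(d / 0.1)**2) turns the classical inverse-distance blend into a Gaussian-weighted one.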
Article
Scattered data interpolation problems arise in many applications. Shepard’s method for constructing a global interpolant by blending local interpolants using local-support weight functions usually creates reasonable approximations. SHEPPACK is a Fortran 95 package containing five versions of the modified Shepard algorithm: quadratic (Fortran 95 translations of Algorithms 660, 661, and 798), cubic (Fortran 95 translation of Algorithm 791), and linear variations of the original Shepard algorithm. An option to the linear Shepard code is a statistically robust fit, intended to be used when the data is known to contain outliers. SHEPPACK also includes a hybrid robust piecewise linear estimation algorithm RIPPLE (residual initiated polynomial-time piecewise linear estimation) intended for data from piecewise linear functions in arbitrary dimension m. The main goal of SHEPPACK is to provide users with a single consistent package containing most existing polynomial variations of Shepard’s algorithm. The algorithms target data of different dimensions. The linear Shepard algorithm, robust linear Shepard algorithm, and RIPPLE are the only algorithms in the package that are applicable to arbitrary dimensional data.
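To give a flavour of the linear variant in arbitrary dimension m, the sketch below blends local linear fits, each constrained to pass through its node, with compact-support weights. It is a simplified illustration under assumed parameter choices, not SHEPPACK's code, and it omits the robust (outlier-resistant) fitting option.

    import numpy as np

    def linear_shepard(X, f, x, n_local=None, radius=None):
        # Linear Shepard-type interpolant in arbitrary dimension m: each node k
        # gets a local linear fit P_k constrained to pass through (x_k, f_k),
        # obtained by least squares over its n_local nearest nodes, and the P_k
        # are blended with compact-support weights.
        X = np.asarray(X, float)                  # (N, m) nodes
        f = np.asarray(f, float)                  # (N,)   data values
        x = np.asarray(x, float)                  # (m,)   evaluation point
        N, m = X.shape
        n_local = n_local or min(N - 1, 2 * m + 1)
        d = np.linalg.norm(X - x, axis=1)         # node-to-evaluation-point distances
        R = radius or 2.0 * np.sort(d)[min(n_local, N - 1)]
        num = den = 0.0
        for k in range(N):
            if d[k] >= R:
                continue                          # node k outside the local radius
            nbrs = np.argsort(np.linalg.norm(X - X[k], axis=1))[1:n_local + 1]
            A = X[nbrs] - X[k]                    # differences force P_k(x_k) = f_k
            b = f[nbrs] - f[k]
            g, *_ = np.linalg.lstsq(A, b, rcond=None)
            Pk = f[k] + (x - X[k]) @ g            # local linear node function
            Wk = ((R - d[k]) / (R * d[k] + 1e-30)) ** 2
            num += Wk * Pk
            den += Wk
        return num / den if den > 0 else f[np.argmin(d)]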
Article
The Perturbed Matrix Method (PMM) approach to be used in combination with Molecular Dynamics (MD) trajectories (MD-PMM) has been recoded from scratch, improved in several respects, and implemented in the Gaussian suite of programs to provide a user-friendly yet flexible tool for estimating quantum chemistry observables in complex systems in condensed phases. Particular attention has been devoted to the description of rigid and flexible quantum centers, together with powerful essential-dynamics and clustering approaches. The default implementation is fully black-box and does not require any external action for either the MD or the PMM sections. At the same time, fine tuning of the different parameters and the use of external data are allowed at all steps of the procedure. Two specific systems (Tyrosine and Uridine) have been reinvestigated with the new version of the code in order to validate the implementation, check its performance, and illustrate some new features.
Article
The UV–vis spectrum of Tyrosine and its response to different backbone protonation states have been studied by applying the Perturbed Matrix Method (PMM) in conjunction with molecular dynamics (MD) simulations. Herein, we theoretically reproduce the UV–vis absorption spectrum of aqueous Tyrosine in its zwitterionic, anionic, and cationic forms, as well as of aqueous p-Cresol (i.e., the moiety that constitutes the side-chain portion of Tyrosine). To achieve better accuracy in the MD sampling, the Tyrosine force field (FF) parameters were derived de novo via quantum mechanical calculations. The UV–vis absorption spectra are computed by considering the electronic transitions, in the vertical approximation, for each chromophore configuration sampled by the classical MD simulations, thus including the effects of the chromophore's semiclassical structural fluctuations. Finally, the explicit treatment of the perturbing effect of the embedding environment permits full modeling of the inhomogeneous bandwidth of the electronic spectra. Comparison between our theoretical–computational results and experimental data shows that the model captures the essential features of the spectroscopic process, thus allowing further analysis of the strict relationship between the quantum properties of the chromophore and the different embedding environments. Tyrosine UV–vis spectroscopy is an important tool for studying protein response to environmental changes; a deep understanding of the effects of the embedding environment on the quantum properties of the chromophore is therefore crucial, and these effects may be described using the PMM procedure in conjunction with MD simulations.
Article
Full-text available
Computational simulation of UV/vis spectra in condensed phases can be performed starting from converged molecular dynamics simulations and then performing QM/MM computations for a statistically significant number of snapshots. However, the need for variational solutions (e.g. ONIOM/EE) for a huge number of snapshots makes the use of state-of-the-art QM Hamiltonians impractical. On the other hand, the effectiveness of perturbative approaches (e.g. PMM) comes at the price of poor convergence for configurations strongly different from the reference one. In this paper we introduce an integrated strategy based on a cluster analysis of the MD snapshots: a representative configuration of each cluster is treated at the ONIOM/EE level, whereas local fluctuations within each cluster are described at the PMM level. Some representative systems (uracil in dimethylformamide and in water, and the tyrosine zwitterion in water) are analysed to show the effectiveness and flexibility of the proposed strategy.
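The clustering step at the heart of this strategy can be prototyped as follows; the choice of per-snapshot descriptor and the use of k-means as the clustering algorithm are assumptions made purely for illustration, not the authors' procedure.

    import numpy as np
    from sklearn.cluster import KMeans

    def representative_snapshots(descriptors, n_clusters=10):
        # Cluster MD snapshots by a per-snapshot feature vector (e.g. some
        # descriptor of the perturbing environment; the descriptor choice is an
        # assumption) and return, for each cluster, the index of the snapshot
        # closest to the cluster centroid.  In the strategy described above,
        # these representatives would be treated at the ONIOM/EE level, while
        # fluctuations within each cluster are handled perturbatively.
        X = np.asarray(descriptors, float)        # (n_snapshots, n_features)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
        reps = []
        for c in range(n_clusters):
            members = np.flatnonzero(km.labels_ == c)
            dist = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
            reps.append(members[np.argmin(dist)])
        return np.array(reps), km.labels_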
Article
Engineering tests generally acquire a large set of response data at once while focusing on a small number of variables. Higher-order response models are better at capturing and displaying the characteristics such data can reveal, but response surface models are traditionally only linear or quadratic (first or second order). If a higher order is attempted, the matrix X of test inputs formed for fitting the test responses often ceases to satisfy the full-rank assumption on X traditionally made for computing the regression coefficients. The resulting product matrix XᵀX is then singular and cannot be inverted, directly or indirectly, to obtain the desired least-squares regression coefficients from the system of normal equations. Ill-conditioning of X near rank deficiency, not unusual even in traditional second-order response modeling, can also render the computed coefficients unreliable. This paper incorporates singular value decomposition (SVD) to handle the rank-deficiency and ill-conditioning problems directly. The proposed SVD-based fitting algorithm enables reliable and efficient response modeling to the desired higher order despite rank deficiency in X. Two proposed trimming algorithms enable the reduction of redundant terms or insignificant coefficients of the fitted polynomials on a sound mathematical basis.
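The core idea, solving the least-squares problem through the SVD with small singular values discarded instead of inverting XᵀX, can be sketched in a few lines of Python; this is the standard truncated-pseudoinverse solution under an assumed cutoff, not the paper's specific algorithm, and it omits the proposed trimming steps.

    import numpy as np

    def svd_fit(X, y, rcond=1e-10):
        # Least-squares coefficients of a polynomial response surface via the SVD.
        # Singular values below rcond * s_max are treated as zero, so a rank-
        # deficient or ill-conditioned design matrix X is handled without ever
        # forming (or inverting) X^T X.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        keep = s > rcond * s[0]
        # beta = V * diag(1/s) * U^T * y, restricted to the retained singular values
        return Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

NumPy's own np.linalg.lstsq applies essentially the same singular-value cutoff through its rcond argument.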
Article
This paper presents a method of constructing a smooth function of two or more variables that interpolates data values at arbitrarily distributed points. Shepard's method for fitting a surface to data values at scattered points in the plane has the advantages of a small storage requirement and an easy generalization to more than two independent variables, but suffers from low accuracy and a high computational cost relative to some alternative methods. Localizations of this method have reasonably low computational costs, but remain relatively inaccurate. We describe a modified Shepard's method that, without sacrificing the advantages, has accuracy comparable to other local methods. Computational efficiency is also improved by using a cell method for nearest-neighbor searching. Test results for two and three independent variables are presented.
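The cell method mentioned above amounts to bucketing the nodes into a uniform grid so that a nearest-neighbor query only examines a few cells around the query point. The Python sketch below illustrates the idea; the data layout, API, and stopping rule are illustrative and do not reproduce the Fortran implementation.

    import numpy as np

    def build_cells(nodes, n_cells):
        # Bucket 2-D nodes into an n_cells x n_cells uniform grid of cells.
        nodes = np.asarray(nodes, float)
        lo, hi = nodes.min(axis=0), nodes.max(axis=0)
        size = np.maximum(hi - lo, 1e-30) / n_cells
        idx = np.clip(((nodes - lo) / size).astype(int), 0, n_cells - 1)
        cells = {}
        for k, ij in enumerate(idx):
            cells.setdefault((int(ij[0]), int(ij[1])), []).append(k)
        return cells, lo, size

    def nearest_node(p, nodes, cells, lo, size, n_cells):
        # Search outward ring by ring from the query point's cell; stop as soon
        # as the best hit cannot be beaten by any node in a farther ring.
        p = np.asarray(p, float)
        nodes = np.asarray(nodes, float)
        ci, cj = np.clip(((p - lo) / size).astype(int), 0, n_cells - 1)
        best, best_d = -1, np.inf
        for r in range(n_cells):
            for i in range(max(ci - r, 0), min(ci + r, n_cells - 1) + 1):
                for j in range(max(cj - r, 0), min(cj + r, n_cells - 1) + 1):
                    if max(abs(i - ci), abs(j - cj)) != r:
                        continue                  # visit only the ring at distance r
                    for k in cells.get((i, j), []):
                        dk = np.linalg.norm(nodes[k] - p)
                        if dk < best_d:
                            best, best_d = k, dk
            if best >= 0 and best_d <= r * size.min():
                break                             # no unvisited cell can hold a closer node
        return best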
Article
QSHEP2D is an implementation of the modified quadratic Shepard method for the case of two independent variables. The software conforms to both the 1966 and 1977 (Subset) ANSI Standards for FORTRAN and has no system dependencies. Header comments in each routine contain detailed descriptions of the calling sequences, and all parameter names conform to the FORTRAN typing default. The primary purpose of the package is to construct a once-continuously differentiable function Q(X, Y) such that Q interpolates a set of N data values F_i at arbitrarily distributed nodes (X_i, Y_i), i = 1, ..., N. Two of the subroutines, STORE2 and GETNP2, may also be used alone to solve closest-point problems.
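Complementing the linear sketch given earlier, the quadratic nodal functions underlying Q can be obtained from a weighted least-squares fit about each node, constrained to interpolate that node's value; blending such nodal functions with compact-support Shepard weights is what yields the C1 continuity. The following Python fragment is a minimal illustration with an assumed neighbor count and weight, not the QSHEP2D Fortran code.

    import numpy as np

    def quadratic_nodal_function(k, X, Y, F, n_q=13):
        # Local quadratic about node k, fitted by weighted least squares to its
        # n_q nearest nodes and constrained to interpolate (X[k], Y[k], F[k]).
        X, Y, F = (np.asarray(a, float) for a in (X, Y, F))
        dx, dy = X - X[k], Y - Y[k]
        d = np.hypot(dx, dy)
        nbrs = np.argsort(d)[1:n_q + 1]              # nearest neighbours of node k
        r = d[nbrs].max()                            # radius of influence
        sw = (r - d[nbrs]) / (r * d[nbrs])           # sqrt of a Shepard-type weight
        A = np.column_stack([dx[nbrs], dy[nbrs],
                             dx[nbrs]**2, dx[nbrs] * dy[nbrs], dy[nbrs]**2])
        b = F[nbrs] - F[k]                           # differences enforce Q_k(X_k, Y_k) = F_k
        coef, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
        return coef                                  # [c_x, c_y, c_xx, c_xy, c_yy] about node k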
Article
We present results of accuracy tests on scattered-data fitting methods that have been published as ACM algorithms. The algorithms include seven triangulation-based methods and three modified Shepard methods, two of which are new algorithms. Our purpose is twofold: to guide potential users in the selection of an appropriate algorithm and to provide a test suite for assessing the accuracy of new methods (or existing methods that are not included in this survey). Our test suite consists of five sets of nodes, with node counts ranging from 25 to 100, and 10 test functions. These are made available in the form of three Fortran subroutines: TESTDT returns one of the node sets; TSTFN1 returns a value and, optionally, a gradient value, of one of the test functions; and TSTFN2 returns a value, first partials, and second partial derivatives of one of the test functions.
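For orientation, the best-known member of such scattered-data test suites is Franke's function on the unit square; a Python transcription is given below. Its exact position in the TSTFN1/TSTFN2 numbering is not asserted here.

    import numpy as np

    def franke(x, y):
        # Franke's classical test surface on [0,1]^2, widely used as a benchmark
        # for scattered-data fitting methods.
        return (0.75 * np.exp(-((9*x - 2)**2 + (9*y - 2)**2) / 4)
                + 0.75 * np.exp(-(9*x + 1)**2 / 49 - (9*y + 1) / 10)
                + 0.5  * np.exp(-((9*x - 7)**2 + (9*y - 3)**2) / 4)
                - 0.2  * np.exp(-(9*x - 4)**2 - (9*y - 7)**2))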