Book

An Introduction to Numerical Analysis

Authors: Kendall E. Atkinson
... The control value set U ⊂ R^m in (2) is assumed to be compact and convex (see e.g., [1,4,9,32]). ...
... Let ||·||_{C_n([0,t_f])} be the norm of the real Banach space C_n([0, t_f]) of continuous functions ψ : [0, t_f] → R^n. Recall the following natural embedding result (see e.g., [2,4]): ...
... where L^2_m[0, t_f] denotes the Lebesgue space of all square-integrable vector functions on [0, t_f] (see e.g., [1,2,22]). We next give a simple characterization of the control functions determined by (4). ...
Article
This paper deals with a novel approximation technique for constrained optimal feedback control problems. We consider a conventional optimal control process governed by closed-loop dynamics and apply the B-relaxation approach. We next establish some useful convergence properties of the resulting relaxations for the extended and for the originally given Feedback Optimal Control Problems (FOCPs). The approximation approach involving the B-relaxation reduces the nonlinear closed-loop dynamic system to a control-affine model. This system reduction makes it possible to apply the generic Hamilton–Jacobi–Bellman (HJB) solution methodology and determine an exact optimal feedback in the relaxed FOCP. The resulting explicit solution scheme is finally applied to the originally given constrained FOCP. The obtained theoretical results provide a rigorous mathematical basis for possible numerical approaches to optimal policy iteration.
... In the case of d = 2 and p ∈ {1, 6}, the minimizers of the worst-case integration error were determined in [43] for N ≤ 16 points. Interestingly, it was observed that if N = F_ℓ is a Fibonacci number (in this range 1, 2, 3, 5, 8, 13), the algorithm found that the Fibonacci lattices given by ...
... We will recall some facts about Hermite interpolation. We start with an error formula ([24], Lemma 2.1, or [3,64]). ...
... The forbidden frequencies are of the form (m_1, m_2) ≠ (0, 0) with m_1 + 3m_2 ≡ 0 mod 5. Our construction will only use frequencies from the square (m_1, m_2) ∈ {0, 1, 2, 3, 4}^2, although a bigger range could also be considered. There, the forbidden frequencies are (1, 3), (2, 1), (3, 4), (4, 2). ...
Preprint
Full-text available
We investigate the question of the optimality of Fibonacci lattices with respect to tensor product energies on the torus, most notably the periodic L_2-discrepancy, diaphony and the worst case error of quasi-Monte Carlo integration over certain parametrized dominating mixed smoothness Sobolev spaces H_p^d of periodic functions. We consider two methods for this question. First, a method based on Delsarte's LP-bound from coding theory, which will give us, among others, the Fibonacci lattices as the natural candidates for optimal point sets. Second, we will adapt the continuous LP-bound on the sphere (and other spaces) to the torus to get optimality in the continuous setting. We conclude with a more in-depth look at the 5-point Fibonacci lattice, giving an effectively computable algorithm for checking if it is optimal and rigorously proving its optimality for quasi-Monte Carlo integration in the range 0 < p ≤ 11.66. We also prove a result on the universal optimality of 3 points in any dimension. The novelty of this approach is the application of LP-methods for tensor product energies in the torus and the systematic study of the simultaneous global optimality of periodic point sets for a class of tensor product potential functions.
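As a quick, hedged illustration of the point sets studied above (our own sketch, not code from the paper): the N-point Fibonacci lattice is the rank-1 lattice with generator (1, F_{ℓ-1}) modulo N = F_ℓ, which for N = 5 uses the generator 3 that also appears in the forbidden-frequency condition m_1 + 3m_2 ≡ 0 mod 5 quoted earlier.

def fibonacci_lattice(ell):
    # F[0] = F[1] = 1, F[2] = 2, ... (zero-based list of Fibonacci numbers)
    F = [1, 1]
    while len(F) <= ell:
        F.append(F[-1] + F[-2])
    N, g = F[ell], F[ell - 1]
    # Rank-1 lattice points (k/N, {k*g/N}) on the unit torus
    return [(k / N, (k * g) % N / N) for k in range(N)]

print(fibonacci_lattice(4))  # the 5-point lattice generated by (1, 3) mod 5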
... This requirement motivates the "sinking" of all large cr_ij entries to the lower triangular zone and the "floating" of all the small entries to the upper triangular zone. The lower and upper triangular zones should remain separated by a unitary diagonal, and thus all row ordering operations must be followed by their transposed column ordering operations [2,24]. A basic ordering algorithm is therefore proposed in Algorithm 1, which attempts to float and sink the rows with the smallest and largest cumulative characteristic coupling ratios cr, respectively. ...
... The use of a reorganized system of equations that groups the strong coupling terms in the lower triangular zone of [J_global]* is hypothesized to improve the performance/stability of a partitioned scheme. This is because partitioned schemes in non-linear resolution methods are analogous to the iterative methods of linear resolution methods [2,24]. Moreover, the fact that execution speeds of many partitioned strategies can be largely superior to monolithic schemes makes their use important in computational physics [13,21,12,17,4]. ...
... The improvement in execution speed is generally due to a smaller algebraic system of equations sent for resolution (iterative or direct) [2,24]. Seeking to profit from reduced execution times, stability and good convergence rates, we design a partitioned scheme that is applied to the reorganized system of equations (16). ...
Article
Full-text available
The role of dimensionless ratios in engineering and physics is ubiquitous, but their utility in the multiphysics community is sometimes overlooked. Notably, in the multiphysics modelling community, coupling methods are often discussed and developed without an explicit monitoring of the various dimensionless ratios of the various inter-physics coupling terms. However, it is evident that the varying strengths of the coupling terms in a multiphysics model of k physics solvers/modules will influence the convergence rate, the stability of the coupling scheme and the program execution speed. In fact, it is well known that the "ordering" of the predictor physics modules is primordial to the performance characteristics of a multiphysics coupling scheme. However, the question of "how to order" (who came first, the chicken or the egg?) the k physics modules remains vaguely discussed. In fact, physics ordering is generally based on the scientist's experience or on problem-specific stability analyses performed on academic computational configurations. In the case of generic multiphysics coupling, where volume, interface and/or surface coupling terms can manifest, the optimal ordering of the physics modules may strongly vary along simulation time (for the same application) and/or across applications. Motivated to find an approximate measure that does not resort to cumbersome and problem-specific stability analyses, we borrow the concept of dimensionless numbers from physics and apply it to the algebraic systems that manifest in multiphysics computational models. The "chicken-egg" algorithm is based on a dimensionless methodology that serves to "reorder" the Jacobian matrix of an exact Newton-Raphson implicit scheme. The method poses a dimensionless preconditioner that estimates the different strengths of the various coupling terms found in the multiphysics application. The chicken-egg algorithm estimates at every given time step the order of magnitude of the coupling terms and correspondingly orders the k partitioned physics solvers automatically. This algorithm is tested for the first time on a thermo-hygro-corrosive multiphysics model and shows promising results. Benchmarking against monolithic and diagonalised calculation strategies, the first numerical tests show a significant reduction in iterations before convergence and thus an over 1.7-fold improvement in program execution time.
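A hedged numpy sketch of the "float/sink" reordering idea described in the excerpts above; this is our illustrative reading of the ordering step (the function name and toy matrix are ours), not the authors' Algorithm 1:

import numpy as np

def reorder_by_coupling(J):
    # Rank rows by cumulative off-diagonal coupling magnitude; weakly
    # coupled rows "float" to the top, strongly coupled rows "sink" down.
    A = np.abs(J).astype(float)
    np.fill_diagonal(A, 0.0)
    order = np.argsort(A.sum(axis=1))
    # Every row permutation is mirrored by the transposed column
    # permutation so the diagonal stays on the diagonal.
    return J[np.ix_(order, order)], order

J = np.array([[1.0, 0.01, 0.5],
              [2.0, 1.0, 0.9],
              [0.1, 0.02, 1.0]])
J_reordered, perm = reorder_by_coupling(J)
print(perm)
print(J_reordered)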
... That is, two additional sets of nodes, S_± = {x_k ± P}_{k=0}^{N}, are artificially introduced and N_{k,n} is then the set of n points in S ∪ S_+ ∪ S_− nearest to x_k. Now, if u has at least n continuous derivatives with respect to the first argument everywhere in Ω, then for each value of t there exists a polynomial interpolant of degree at most n − 1 [19], s_k(x, t) = Σ_{j=1}^{n} ψ_{k,j}(x) u(x_{k,j}, t), that satisfies s_k(x_{k,j}, t) = u(x_{k,j}, t), j = 1, 2, . . . , n. ...
... that is O(h_t^s) accurate [19]. ...
Preprint
Full-text available
A growing body of literature has been leveraging techniques of machine learning (ML) to build novel approaches to approximating the solutions to partial differential equations. Noticeably absent from the literature is a systematic exploration of the stability of the solutions generated by these ML approaches. Here, a recurrent network is introduced that matches precisely the evaluation of a multistep method paired with a collocation method for approximating spatial derivatives in the advection diffusion equation. This allows for two things: 1) the use of traditional tools for analyzing the stability of a numerical method for solving PDEs and 2) bringing to bear efficient techniques of ML for the training of approximations for the action of (spatial) linear operators. Observations on impacts of varying the large number of parameters in even this simple linear problem are presented. Further, it is demonstrated that stable solutions can be found even where traditional numerical methods may fail.
... Sophisticated integration schemes for computing I using values y_k = f(x_k) had been developed in the pre-computer age (before 1950), when the computation of y_k for n of order 10^3 was a problem. Now, BFCTs allow one to obtain results for large n up to 10^8, using simple quadratures such as the formula of trapezoids with equal spacing [35], described below as the MATLAB® code
n = 10^m; X = linspace(a,b,n+1); Y = f(X); h = (b-a)/n;
T_n = h*(sum(Y(2:n)) + (Y(1) + Y(n+1))/2) ...
Article
Full-text available
In this paper, we consider the application of brute force computational techniques (BFCTs) for solving computational problems in mathematical analysis and matrix algebra in a floating-point computing environment. These techniques include, among others, simple matrix computations and the analysis of graphs of functions. Since BFCTs are based on matrix calculations, the program system MATLAB® is suitable for their computer realization. The computations in this paper are completed in double precision floating-point arithmetic, obeying the 2019 IEEE Standard for binary floating-point calculations. One of the aims of this paper is to analyze cases where popular algorithms and software fail to produce correct answers without alerting the user. In real-time control applications, this may have catastrophic consequences with heavy material damage and human casualties. It is known, or suspected, that a number of man-made catastrophes, such as the Dhahran accident (1991), the Ariane 5 launch failure (1996), the Boeing 737 Max tragedies (2018, 2019) and others, are due to errors in computer software and hardware. Another application of BFCTs is finding good initial guesses for known computational algorithms. Sometimes, simple and relatively fast BFCTs are useful tools in solving computational problems correctly and in real time. Among the particular problems considered are the genuine addition of machine numbers, numerically stable computations, finding minimums of arrays, the minimization of functions, solving finite equations, integration and differentiation, computing condensed and canonical forms of matrices and clarifying the concepts of the least squares method in the light of the conflict remainders vs. errors. Usually, BFCTs are applied under the user's supervision, which is not possible in the automatic implementation of computational methods. Implementing BFCTs automatically is a challenging problem in the area of artificial intelligence, and of mathematical artificial intelligence in particular. BFCTs allow one to reveal the underlying arithmetic in the performance of computational algorithms. Last but not least, this paper has tutorial value, as computational algorithms and mathematical software are often taught without considering the properties of computational algorithms and machine arithmetic.
... In this paper we present a nodal integration approach for bilinear quadrilaterals based on the trapezoidal rule of integration (Atkinson, 1989). In the present approach, integration points are made to coincide with nodal points, but they belong to a single element. ...
... In one dimension the trapezoidal rule for numerical integration (Atkinson, 1989) is ...
Article
Full-text available
In this work we present a nodally integrated linear quadrilateral, that is, one with integration points only at the nodes of the element, for plasticity applications. A trapezoidal rule is used instead of the classical Gaussian rules for the integration of the nonlinear stiffness matrix. In principle, this numerical integration rule should be discarded since it cannot exactly integrate the stiffness matrices, even in the limit when the element size becomes infinitesimal. However, the internal forces are correctly calculated and the element is convergent. Comparisons are shown with other formulations and the element appears to be very effective.
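For reference, the one-dimensional rule the excerpts above refer to is the standard composite trapezoidal rule on a uniform grid x_i = a + ih, h = (b − a)/n (quoted here as the textbook formula, not from the cited element paper):

∫_a^b f(x) dx ≈ h [ f(x_0)/2 + f(x_1) + · · · + f(x_{n−1}) + f(x_n)/2 ].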
... Let w_i and w̄ be, respectively, a convergent subsequence and its limit. Since w_i is not convergent to w, there exists a subsequence (w_k), k ≥ 0, that converges to w̄ in the sense of (3.10), but the limit w̄ is distinct from w in the topology of W^{1,2} ...
... (y_{j+k} − y_j)^2 ... Hence, (3.8) and (3.9) follow. ...
Article
Full-text available
We introduce a discrete-time random walk model on a one-dimensional lattice with a nonconstant sojourn time and prove that the discrete density converges to a solution of a continuum diffusion equation. Our random walk model is not Markovian due to the heterogeneity in the sojourn time, unlike a random walk model with a nonconstant walk length. We derive a Markovian process by choosing appropriate subindexes of the time-space grid points and then show the convergence of its discrete density through the parabolic-scale limit. We also find Green’s function of the continuum diffusion equation and present three Monte Carlo simulations to validate the random walk model and the diffusion equation.
... There are numerous approaches to derive these [3][4][5]. Using Lagrange polynomials and integrating the aforementioned polynomials to obtain (2) is one way to interpolate f(x) at n + 1 ...
... Recall that for an integer n, the degree of precision is n + 1 for even n and n for odd n. Although many precise and efficient techniques for numerical integration are available in the literature, Dehghan et al. [6] recently enhanced the precision degree of closed Newton-Cotes quadrature by including the interval's boundary locations as two extra variables and rescaling the original integral to fit the optimal boundary locations [3][4][5]. They have utilised this approach on Gauss-Legendre quadrature [7], Gauss-Chebyshev quadrature [8], and open Newton-Cotes quadrature [9]. ...
Article
An innovative set of closed Newton-Cotes quadrature techniques for numerical integration is introduced, using the value of the arithmetic-mean derivative at the midpoint. The existing closed Newton-Cotes quadrature technique is compared with the computation of numerical integration using arithmetic-mean-derivative-based closed Newton-Cotes quadrature techniques. Additionally, it is demonstrated that the proposed quadrature rule provides more precise solutions and an increase in the order of precision (accuracy) over the traditional closed Newton-Cotes quadrature formula. Error terms for the proposed approach and the existing approach are compared. Lastly, a few numerical examples show the suggested approach's numerical superiority over an existing closed Newton-Cotes quadrature formula.
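The degree-of-precision statement quoted above is easy to verify numerically. A minimal Python check for Simpson's rule (the closed Newton-Cotes rule with n = 2 subintervals, hence precision n + 1 = 3); the helper name is ours:

def simpson(f, a, b):
    # Closed Newton-Cotes rule with n = 2: exact for polynomials up to degree 3
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

for k in range(6):
    exact = 1 / (k + 1)  # integral of x^k over [0, 1]
    approx = simpson(lambda x: x**k, 0.0, 1.0)
    print(k, abs(approx - exact) < 1e-15)  # True for k <= 3, False for k >= 4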
... Additionally, these methods can struggle with non-local operators, singular kernels, or fractional derivatives, which introduce further complexities [20]. Even though more recent approaches, like Gaussian quadrature and collocation methods, provide enhanced accuracy for certain cases, their applicability is often restricted to simpler, well-behaved systems [5,8,[20][21][22]. ...
... These residual connections are crucial for ensuring stable training, particularly when handling multi-dimensional or highly non-linear systems [25,32,33]. Furthermore, RISN leverages high-accuracy numerical techniques, such as Gaussian quadrature for efficient and precise calculation of integral terms, and fractional operational matrices to handle fractional derivatives with minimal error [21,37]. This combination of deep learning with classical numerical methods allows RISN to achieve high accuracy and stability across a wide range of equation types, including those that pose challenges for traditional methods and standard PINNs. ...
Preprint
Full-text available
In this paper, we present the Residual Integral Solver Network (RISN), a novel neural network architecture designed to solve a wide range of integral and integro-differential equations, including one-dimensional, multi-dimensional, ordinary and partial integro-differential, systems, and fractional types. RISN integrates residual connections with highly accurate numerical methods such as Gaussian quadrature and fractional derivative operational matrices, enabling it to achieve higher accuracy and stability than traditional Physics-Informed Neural Networks (PINN). The residual connections help mitigate vanishing gradient issues, allowing RISN to handle deeper networks and more complex kernels, particularly in multi-dimensional problems. Through extensive experiments, we demonstrate that RISN consistently outperforms PINN, achieving significantly lower Mean Absolute Errors (MAE) across various types of equations. The results highlight RISN's robustness and efficiency in solving challenging integral and integro-differential problems, making it a valuable tool for real-world applications where traditional methods often struggle.
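For context, the Gaussian quadrature that the abstract mentions as one of RISN's ingredients can be reproduced in a few lines with NumPy; this is a generic Gauss-Legendre sketch of ours, not the paper's implementation:

import numpy as np

def gauss_legendre(f, a, b, n):
    # n-point Gauss-Legendre rule on [a, b]: exact for degree <= 2n - 1
    x, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)      # affine map to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

print(gauss_legendre(np.sin, 0.0, np.pi, 8))   # ~2.0 to near machine precision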
... see [39]. Hence, using (39) and (42)-(44), we obtain the following discretization u^{n+1}(ω) ≈ u(t_{n+1}, ω) ...
Preprint
Full-text available
This paper deals with the construction of numerically stable solutions of random mean square Fisher-KPP models with advection. The construction of the numerical scheme is performed in two stages. Firstly, a semidiscretization technique transforms the original continuous problem into a nonlinear nonhomogeneous system of random differential equations. Then, by extending to the random framework the ideas of the exponential time differencing method, a full vector discretization of the problem leads to a random vector difference scheme. A sample approach to the random vector difference scheme, the use of properties of Metzler matrices and the logarithmic norm allow the proof of stability of the numerical solutions in the mean square sense. In spite of the computational complexity, the results are illustrated by comparison with a test problem where the exact solution is known.
... We solve the finite element model with FEniCS (a Python-based open-source software package) [29]. The compression strain is applied in increments of 0.001 using the Newton-Raphson method [40] with a random perturbation until the instability occurs. Upon reaching the instability, the dynamic relaxation method [41] was used to obtain the post-wrinkling profile. ...
... Normalized wavelength (left axis) and normalized critical strain (right axis, shown as a negative value), in the respective forms of (40) and (41), as a function of T/λ_c, with δ = 0.5, α = 5 and T = 5, 7.5, 12.5, 20. Four distinct regions, I-IV, are identified and shown in the figure by different gray-scale shading. ...
Article
Full-text available
Wrinkling of film/substrate systems is commonly observed in nature and has recently gained significant research interest in various industrial sectors such as flexible electronics. However, most studies have treated both the film and the substrate as homogeneous materials, and the understanding of the wrinkling of heterogeneous film/substrate systems has remained largely unexplored. In this study, we present a systematic study of the wrinkling of a heterogeneous thin film over a liquid substrate, with a particular focus on the influence of the microstructures on the critical and post-wrinkling behavior of the system, using both analytic methods and numerical simulation. When the critical wavelength is much larger than the characteristic length scale of material heterogeneity, the wrinkling behavior is dictated by the homogenized properties of the film; yet when the two scales are close to each other, the material heterogeneity has a strong influence on the critical wavelength. Furthermore, under such a scenario, new post-wrinkling profiles can also be observed, which are a function of both the geometric and mechanical properties of the film. Such observations not only provide insights into the wrinkling behavior of heterogeneous film/substrate systems, but can also potentially inspire new ways of controlling wrinkling profiles.
... Our primary objective is to develop an open-type generalized quadrature rule with precision 5. To achieve this, we combine two lower-precision rules: the anti-Gaussian rule [1,2,3,11,12] and Steffensen's rule, both of which have a precision of 3. By amalgamating these rules, we introduce a novel technique for achieving higher precision in numerical integration. ...
... Laurie's concept [3] enables deriving an anti-Gaussian quadrature rule from the Gaussian 2-point rule, which integrates polynomials up to degree 3 exactly. The anti-Gaussian rule complements this by minimizing errors for higher-degree polynomials, specifically targeting those orthogonal to the ones integrated accurately by the Gaussian rule [1,8,11,12]. In this paper, we employ the following anti-Gaussian rule. ...
Article
Full-text available
We propose a generalized quadrature rule for approximating indefinite integrals by combining Steffensen's rule with the Anti-Gaussian rule. The convergence properties of this new method are thoroughly analyzed to ensure its reliability. Our error analysis highlights the improved accuracy of the generalized quadrature rule compared to its base methods. We test the rule on various example integrals to support these theoretical findings, demonstrating its effectiveness and precision. This approach significantly improves numerical integration, making it a valuable tool for solving indefinite integrals with greater accuracy and efficiency.
... 83-85). From the well-known convergence result for quadrature rules applied to periodic functions (see [Atk89]), we can prove that, with the new integral curve, the numerical method will converge much faster, depending on the choice of the new integration curve. ...
... where w satisfies Assumption 19 and v is defined by (19). We will approximate the function v by trigonometric interpolation (for references see [Atk89]), and then study the convergence of the numerical integration based on the approximation. Let [A_0, A_1] be divided uniformly into N subintervals, where N is assumed to be even in this paper. ...
Preprint
In this paper, we introduce a high-order numerical method to solve scattering problems with non-periodic incident fields and (locally perturbed) periodic surfaces. For the problems we are considering, the classical methods to treat quasi-periodic scattering problems no longer work, while a Bloch-transform-based numerical method was proposed in [LZ17b]. This numerical method, on one hand, is able to solve this kind of problem convergently; on the other hand, it takes up a lot of time and memory during the computation. The motivation of this paper is to improve this numerical method, based on the regularity results for the Bloch transform of the total field, which have been studied in [Zha17]. As the set of singularities of the total field is discrete in R, and finite in one periodic cell, we are able to improve the numerical method by designing a proper integration contour with special conditions at the singularities. With a good choice of the transformation, we can prove that the new numerical method possesses a super-algebraic convergence rate. This new method improves the efficiency significantly. At the end of this paper, several numerical results are provided to show the fast convergence of the new method. The method also provides a possibility to solve more complicated problems efficiently, e.g., three-dimensional problems or electromagnetic scattering problems.
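The convergence result for periodic functions invoked in the excerpts above (equispaced trapezoidal sums converge super-algebraically for smooth periodic integrands) is easy to observe numerically; a small self-contained Python demonstration under that assumption:

import numpy as np

f = lambda x: np.exp(np.cos(x))
exact = 7.954926521012845      # = 2*pi*I_0(1), I_0 the modified Bessel function
for N in (4, 8, 16, 32):
    x = 2 * np.pi * np.arange(N) / N          # one full period, equispaced
    T = 2 * np.pi / N * f(x).sum()            # trapezoid = Riemann sum here
    print(N, abs(T - exact))                  # error collapses rapidly with N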
... , ξ_n in the generalized Hermite sense. Thus, in Newtonian form, this polynomial is given as (see, e.g., Stoer and Bulirsch [9, Chapter 2] or Atkinson [1, Chapter 3]) ...
... 1. Limiting property: When the ξ_i all tend to 0 simultaneously, it follows from the equations in (2.8) that R_{p,k}(z) tends to the approximant s_{n+k,k}(z) from the method STEA of Sidi [3] as the latter is applied to the Maclaurin series of F(z). Here n = p − k. ...
Preprint
In a series of recent publications of the author, three interpolation procedures, denoted IMPE, IMMPE, and ITEA, were proposed for vector-valued functions F(z), where F : \C \to\C^N, and their algebraic properties were studied. The convergence studies of two of the methods, namely, IMPE and IMMPE, were also carried out as these methods are being applied to meromorphic functions with simple poles, and de Montessus and K\"{o}nig type theorems for them were proved. In the present work, we concentrate on ITEA. We study its convergence properties as it is applied to meromorphic functions with simple poles, and prove de Montessus and K\"{o}nig type theorems analogous to those obtained for IMPE and IMMPE.
... The latter serves as an indicator of the structural build-up of the paste. It is computed as the area enclosed by the loop using the established Simpson's numerical integration rule (see Atkinson [80], Eq. 5.1.15). ...
Article
Full-text available
As the concrete industry moves toward sustainable, automated construction, understanding the rheological behavior of alternative binders is essential. In particular, the rheology of limestone calcined clay cement (LC³) is extremely sensitive to the type of calcium sulfate used. This study systematically investigates the impact of anhydrite (CaSO₄), bassanite (CaSO₄·0.5H₂O), and gypsum (CaSO₄·2H₂O) on hydration kinetics, structural build-up, and workability of LC³ pastes. Isothermal calorimetry and rotational and oscillatory rheometry (Large-Amplitude Oscillatory Shear (LAOS) tests) were used to decouple the interplays between sulfate dissolution, hydration and thixotropic behavior. The results indicate that bassanite accelerates early-age structuration due to its rapid dissolution and ettringite formation, yielding a high structuration rate (A_thix = 0.5 Pa/min) and optimal shear stress evolution (up to 102 Pa). Conversely, gypsum retards structuration and extends workability beyond 140 minutes, but compromises early stiffening. Anhydrite, despite its coarser morphology, exhibited intermediate behavior with rapid workability reduction. LAOS analysis also identified distinct viscoelastic thresholds. Pastes with bassanite reached critical strain (10⁻³) and crossover strain (10⁻²) at minimal deformation, ideal for automated construction, while gypsum formulations showed delayed stiffening. This study demonstrates that sulfate selection directly controls open time, with bassanite formulations requiring a 90-minute operational time frame to balance extrudability and layer stability. These findings underscore the need to tailor the calcium sulfate type to application-specific rheological demands and offer a pathway to optimize LC³ binders for automated processes such as robotic shotcreting and 3D concrete printing.
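Composite Simpson's rule, which the excerpt above uses to compute the area enclosed by the hysteresis loop, reduces to a short sketch in Python (our illustration; the sample count must give an even number of intervals):

import numpy as np

def simpson_composite(y, h):
    # Composite Simpson's rule for samples y_0..y_n, n even, uniform spacing h
    n = len(y) - 1
    assert n % 2 == 0, "composite Simpson needs an even number of intervals"
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

x = np.linspace(0, np.pi, 101)
print(simpson_composite(np.sin(x), x[1] - x[0]))  # ~2.0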
... The grid was transformed from the geographic coordinate system to UTM projection to conduct volume calculation. Applied volume calculation algorithms include the trapezoidal rule, Simpson's rule, and Simpson's 3/8 rule (Atkinson, 1989). Each method approximates different three-dimensional (3D) connection shapes between the data points, which slightly influence the calculated volume. ...
Article
Full-text available
The sustainable management of marginal seas is based on a thorough understanding of their evolutionary trends in the past. The paleogeographic evolution of marginal seas is controlled not only by global and regional driving forces (eustatic sea-level change and isostatic/tectonic movements) but also by sediment erosion, transport, and deposition at smaller scales. Consistent paleogeographic reconstructions at a marginal sea scale considering the global, regional, and local processes are yet to be derived, and this study presents an effort towards this goal. We present a high-resolution (0.01°×0.01°) paleogeographic reconstruction of the entire Baltic Sea and its coast for the Holocene period by combining eustatic sea-level change, glacio-isostatic movement, and sediment deposition. Our results are validated by comparison with field-based reconstructions of relative sea level (RSL) and successfully reproduce the connection/disconnection between the Baltic Sea and the North Sea during the transitions between lake and sea phases. A consistent map of Holocene sediment thickness (SED) in the Baltic Sea has been generated, which shows that relatively thick Holocene sediment deposits (up to 36 m) are located in the southern and central parts of the Baltic Sea, corresponding to depressions of sub-basins, including the Arkona Basin, the Bornholm Basin, and the Eastern and Western Gotland Basin. In addition, some shallower coastal areas in the southern Baltic Sea also host locally confined deposits with thicknesses larger than 20 m and are mostly associated with alongshore sediment transport and the formation of barrier islands and spits. In contrast to the southern Baltic Sea, the Holocene sediment thickness in the northern Baltic Sea is relatively thin and mostly less than 6 m. The morphological evolution of the Baltic Sea and its coastline features two distinct patterns. In the northeastern part, the change in the coastline and offshore morphology is dominated by regression caused by post-glacial rebound that outpaces the eustatic sea-level rise, and the influence of sediment transport is very minor, whereas a transgression, together with active sediment erosion, transport, and deposition, has constantly shaped the coastline and the offshore morphology in the southeastern part, leading to the formation of a wide variety of coastal landscapes such as barrier islands, spits, and lagoons.
... For fundamental results about Simpson's rules (first and second) and corresponding inequalities see [2][3][4][5][6]. Also, more motivational techniques and results can be found in [5,[7][8][9][10][11][12][13][14]] and references therein. ...
Article
Full-text available
This paper deals with a new sharp version of Simpson’s second inequality by using the concepts of absolute continuity, Grüss inequality, and Chebyshev functionals. To demonstrate the applicability of the main result, three examples are given. Also, as a generalization of the main result, a Simpson’s second type inequality related to the class of Riemann–Liouville fractional integrals is obtained. In addition, Simpson’s 3/8 formula is applied to approximate the Riemann integral of an absolutely continuous function as well as to estimate the approximation error.
... entries. Finally, (28) can be accurately evaluated using numerical integration methods, e.g., the 100-point trapezoidal rule [17], where N ← ⌈N*⌉. ...
Preprint
Full-text available
This paper investigates the use of beyond diagonal reconfigurable intelligent surface (BD-RIS) with N elements to advance integrated sensing and communication (ISAC). We address a key gap in the statistical characterizations of the radar signal-to-noise ratio (SNR) and the communication signal-to-interference-plus-noise ratio (SINR) by deriving tractable closed-form cumulative distribution functions (CDFs) for these metrics. Our approach maximizes the radar SNR by jointly configuring radar beamforming and BD-RIS phase shifts. Subsequently, zero-forcing is adopted to mitigate user interference, enhancing the communication SINR. To meet ISAC outage requirements, we propose an analytically-driven successive non-inversion sampling (SNIS) algorithm for estimating network parameters satisfying network outage constraints. Numerical results illustrate the accuracy of the derived CDFs and demonstrate the effectiveness of the proposed SNIS algorithm.
... Thus, we calculate the Hermite interpolation polynomial corresponding to the function based on the values given in Table 1. In many applications, it is important to consider both the interpolation polynomial, which approximates the function, and its derivative, which interpolates the derivative of the function [1]. ...
Article
In this paper we present the stages of implementing a numerical method to approximate a function. To achieve the objective of this work, we consider a function defined on an interval [a,b] and select three nodes within that interval. At these three chosen points we know both the values of the function and the values of its first-order derivative. An interpolation polynomial of minimum degree with the assumed nodes is then obtained.
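A minimal sketch of the construction just described, assuming three nodes with known function values and first derivatives: double each node and build the degree-5 Hermite interpolant via Newton divided differences (a standard textbook scheme; the helper name and the sin test data are ours):

import numpy as np

def hermite_newton(xs, fs, dfs):
    # Doubled nodes; first-order divided differences at a repeated node
    # are replaced by the prescribed derivative f'(x_i).
    z = np.repeat(np.asarray(xs, dtype=float), 2)
    m = len(z)
    q = np.zeros((m, m))
    q[:, 0] = np.repeat(fs, 2)
    for i in range(1, m):
        if z[i] == z[i - 1]:
            q[i, 1] = dfs[i // 2]
        else:
            q[i, 1] = (q[i, 0] - q[i - 1, 0]) / (z[i] - z[i - 1])
    for j in range(2, m):
        for i in range(j, m):
            q[i, j] = (q[i, j - 1] - q[i - 1, j - 1]) / (z[i] - z[i - j])
    c = np.diag(q)                       # Newton-form coefficients
    def p(x):
        val = c[-1]
        for k in range(m - 2, -1, -1):   # Horner evaluation in Newton form
            val = val * (x - z[k]) + c[k]
        return val
    return p

xs = [0.0, 0.5, 1.0]
p = hermite_newton(xs, np.sin(xs), np.cos(xs))
print(p(0.25), np.sin(0.25))             # interpolant vs. exact value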
... Both parameters μ_{A,k} and σ_{A,k} can be calculated in terms of μ_k and σ_k from the distribution N(μ_k, σ_k²) of the indicator values. Using the trapezoidal rule, a well-known and widely used numerical integration method (Atkinson, 1989; Rahman & Schmeisser, 1990), the parameters are given by μ_{A,k} = μ_k(s_k(n_k − 1)) and σ_{A,k} = σ_k(s_k√(n_k) − 1.5), with s_k being a scaling factor. The scaling factor accounts for the differences in width between the k-th time step and the previous time steps and the resulting difference in magnitude between A_k and the areas given by N(μ_{A,k}, σ_{A,k}²). ...
Article
Full-text available
Customer reviews from digital platforms are a vital data resource for recommender and other decision support systems. The performance of these systems is highly dependent on the quality of the underlying data—particularly its currency. Existing metrics for assessing the currency of customer reviews are often based solely on data age. They do not consider that customer reviews can be outdated with respect to one aspect (e.g., guest room after renovation) while still being up-to-date with respect to others (e.g., location). Moreover, they disregard that customer reviews can only become outdated due to state changes of the corresponding item (e.g., renovation), which are associated with uncertainty. We propose a probability-based metric for the aspect-based currency of customer reviews. The values of the metric represent the probability that information in a set of customer reviews is still up-to-date. Our evaluation on a large TripAdvisor dataset shows that the values of the metric are reliable and discriminate well between up-to-date and outdated data, paving the way for data quality-aware decision-making based on customer reviews.
... Changes in stress were calculated at the top (σ_Top), midpoint (σ_Mid), and base (σ_Base) of the layers beneath the centre of the embankment (EXT1) and beneath the edge of the embankment crest (EXT2). These were used to derive the weighted average change in stress in each layer (σ_LayerAve) using Simpson's rule (Atkinson 1989): ...
Article
Full-text available
The nonlinear stress–strain behaviour of stiff clays and weak rocks at small and medium strains may be a critical consideration in the design of geotechnical structures. Empirical methods have been developed for estimating the maximum shear modulus and the normalised shear modulus reduction with strain of fine-grained soils. These are usually expressed as functions of the void ratio (or specific volume) and average effective (confining) stress, based on results from laboratory tests. However, the fidelity of these equations has not been widely evaluated in situ. This paper describes the use of in situ measurements from an instrumented embankment to calculate the operational in situ shear modulus of the underlying stiff clays and weathered mudstones at medium and large strains. It is shown that the shear modulus at very small strain of the weathered clays increased linearly with depth, consistent with empirical equations. The gradients of the normalised, nonlinear stiffnesses of the clays were comparable with those measured in laboratory tests of fine-grained soils, at a range of strains. However, the values for the reference strain, where the maximum shear modulus reduces by 50%, were lower than predicted by the empirical equations.
... Consider, for instance, the definite integration of a function, the problem of finding the roots of an algebraic, trigonometric, or exponential equation, Cauchy problems for ordinary differential equations, boundary value problems for partial differential equations, and linear and non-linear algebraic systems. See, for example, [12,71,85,142,193]. ...
Preprint
Full-text available
Scientific Machine Learning (SciML) is a recently emerged research field which combines physics-based and data-driven models for the numerical approximation of differential problems. Physics-based models rely on the physical understanding of the problem, subsequent mathematical formulation, and numerical approximation. Data-driven models instead aim to extract relations between input and output data without arguing any causality principle underlying the available data distribution. In recent years, data-driven models have been rapidly developed and popularized. Such a diffusion has been triggered by a huge availability of data, increasingly cheap computing power, and the development of powerful ML algorithms. SciML leverages the physical awareness of physics-based models and the efficiency of data-driven algorithms. With SciML, we can inject physics and mathematical knowledge into ML algorithms. Yet, we can rely on data-driven algorithms' capability to discover complex and nonlinear patterns from data and improve the descriptive capacity of physics-based models. After recalling the mathematical foundations of digital modelling and ML algorithms and presenting the most popular ML architectures, we discuss the great potential of a broad variety of SciML strategies in solving complex problems governed by PDEs. Finally, we illustrate the successful application of SciML to the simulation of the human cardiac function, a field of significant socioeconomic importance that poses numerous challenges on both the mathematical and computational fronts. Despite the robustness and accuracy of physics-based models, certain aspects, such as unveiling constitutive laws for cardiac cells and myocardial material properties, as well as devising efficient reduced order models to dominate the extraordinary computational complexity, have been successfully tackled by leveraging data-driven models.
... The error for J_1(ℓ) and the order 2ι + 2 can be written as [42] ...
Article
Full-text available
The primary objective of this study is to present a new technique and library designed to validate the outcomes of numerical methods used for addressing various issues. This paper specifically examines the reverse osmosis (RO) model, a well-known water purification system. A crucial aspect of this problem involves solving an integral that is part of the overall solution. This integral is handled using one of the quadrature integration methods, with a focus on Romberg integration in this study. To manage the number of iterations, as well as to ensure accuracy and minimize errors, we employ the CESTAC method (Controle et Estimation Stochastique des Arrondis de Calculs) alongside the CADNA (Control of Accuracy and Debugging for Numerical Applications) library. By implementing this approach, we aim to achieve not only optimal results but also the best method step and minimal error, and to address numerical instabilities. The results show that only 16 iterations of the Romberg integration rule are enough to find the approximate solutions. To demonstrate the efficacy and precision of our proposed method, we conducted two comprehensive comparative studies with Sinc integration. The first study compares the optimal iteration count, optimal approximation, and optimal error between the single and double exponential decay methods and the Romberg integration technique. The second study evaluates the number of iterations required for convergence within various predefined tolerance values. The findings from both studies consistently indicate that our method outperforms Sinc integration in terms of computational efficiency. Additionally, these comparative analyses highlight the potential of our approach as a reliable and effective tool for numerical integration.
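Romberg integration, the method examined in the study above, builds a triangular table of Richardson extrapolations of refined trapezoidal sums. A compact, self-contained Python sketch (ours, without the CESTAC/CADNA instrumentation the paper uses):

import numpy as np

def romberg(f, a, b, kmax=8):
    # R[k][0]: trapezoid rule with 2^k panels; R[k][j]: j-th extrapolation
    R = [[0.5 * (b - a) * (f(a) + f(b))]]
    for k in range(1, kmax + 1):
        h = (b - a) / 2**k
        mid = sum(f(a + (2 * i - 1) * h) for i in range(1, 2**(k - 1) + 1))
        row = [0.5 * R[k - 1][0] + h * mid]      # refined trapezoid sum
        for j in range(1, k + 1):
            row.append(row[j - 1] + (row[j - 1] - R[k - 1][j - 1]) / (4**j - 1))
        R.append(row)
    return R[-1][-1]

print(romberg(np.exp, 0.0, 1.0), np.e - 1)       # agree to ~1e-15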
... This physical system, as shown below, is described by a set of differential equations, which ought to hold throughout the entire lifespan of the system. To ensure convergence, the simplest method that can be applied in this context is Euler's method [7]. ...
Article
A graph visualisation tool can be invaluable in code comprehension. It is a well-known and researched field of graphical informatics. Several good algorithms were developed, but most of the graph drawing tools mainly focus on the generation of static drawing. In this paper, we present an approach to force-directed layout generation that is orders of magnitudes faster than the trivial implementation. This technique is based on the Runge-Kutta methods and is efficient enough to visualise the user-requested parts (views) quickly for relatively large Semantic Program Graphs of Erlang projects in soft real-time. Such a graph might assist code comprehension in the RefactorErl framework even better.
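For reference, the classical fourth-order Runge-Kutta step underlying such an approach looks as follows in Python; a generic sketch (the damped-spring example is ours, not RefactorErl's force model):

import numpy as np

def rk4_step(f, t, y, h):
    # One classical RK4 step for y' = f(t, y)
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: damped oscillator y'' = -y - 0.1*y' as a first-order system
f = lambda t, y: np.array([y[1], -y[0] - 0.1 * y[1]])
y = np.array([1.0, 0.0])
for n in range(1000):
    y = rk4_step(f, n * 0.01, y, 0.01)
print(y)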
... To compute the total differential area for each subfragment length, we use the Composite Trapezoidal Rule described by Atkinson [22]: ...
Article
Full-text available
Crypto-ransomware attacks have been a growing threat over the last few years. The goal of every ransomware strain is encrypting user data, such that attackers can later demand users a ransom for unlocking their data. To maximise their earning chances, attackers equip their ransomware with strong encryption which produce files with high entropy values. Davies et al. proposed Differential Area Analysis (DAA), a technique that analyses files headers to differentiate compressed, regularly encrypted, and ransomware-encrypted files. In this paper, first we propose three different attacks to perform malicious header manipulation and bypass DAA detection. Then, we propose three countermeasures, namely 2-Fragments (2F), 3-Fragments (3F), and 4-Fragments (4F), which can be applied equally against each of the three attacks we propose. We conduct a number of experiments to analyse the ability of our countermeasures to detect ransomware-encrypted files, whether implementing our proposed attacks or not. Last, we test the robustness of our own countermeasures by analysing the performance, in terms of files per second analysed and resilience to extensive injection of low-entropy data. Our results show that our detection countermeasures are viable and deployable alternatives to DAA.
... Let f be a real-valued function defined on the segment [a, b]. The divided difference of order n of the function f at distinct points x_0, ..., x_n ∈ [a, b] is defined recursively (see [1], [10]). The value f[x_0, . . . , x_n] is independent of the order of the points x_0, . . . ...
... can be approximated by (x_{k+1}, ω_{k+1}), which is computed using an explicit Runge-Kutta method [38] on R^6 × R^3. As for computing the approximation of the solution to Eq. 24 at time t = t_{k+1}, provided the value of R at time t = t_k, the CG method, presented in Section 2.2.2, is applied. ...
Article
Full-text available
This paper addresses the problem of performing aggressive manoeuvres by using multirotor vehicles that include passing through any specific point within the full state space of the vehicle. To this end, the design of optimal trajectories considers the dynamical model of the vehicles by numerically integrating it backwards in time, in the manifold where the dynamics evolve, and dividing the manoeuvres into three distinct phases to accommodate any combination of initial, desired, and final states. In the first phase, the vehicles fly from an initial to a launch configuration to achieve the necessary momenta to reach the desired one in the second phase. To ensure the feasibility of executing the second phase, the relation between snap and body torques is exploited by commanding the vehicles to track geodesic curves on SO(3) during the backwards integration. The vehicles are then driven to a final configuration in the third phase. Most existing solutions to execute aggressive and precise manoeuvres with these rotorcraft focus either on the attitude control problem, leaving the position in open-loop, or use different controllers for different sections of the manoeuvre. In this work, a single tracking controller is considered to validate the proposed trajectory planning strategy in a realistic simulation environment, which involves the PX4 firmware, and in a controlled experimental setup. The results demonstrate that accurate tracking of the designed trajectories enables the vehicles to perform 360-degree loops at great speed and manoeuvres that facilitate the exchange of a parcel between two multirotor vehicles during flight.
... The local truncation error can be analyzed using a Taylor expansion of each term in the LMM. In a k-step LMM, initial values up to time step (k − 1) are required, and both initial error and truncation error influence accuracy [22]. ...
Preprint
Full-text available
Differential equations are a crucial mathematical tool used in a wide range of applications. If the solution to an initial value problem (IVP) can be transformed into an oracle, it can be utilized in various fields such as search and optimization, achieving quadratic speedup with respect to the number of candidates compared to its classical counterpart. In the past, attempts have been made to implement such an oracle using the Euler method. In this study, we propose a quantum linear multistep method (QLMM) that applies the linear multistep method, commonly used to numerically solve IVPs on classical computers, to generate a numerical solution of the IVP for use in a quantum oracle. We also propose a method to find the optimal form of QLMM for a given IVP. Finally, through computer simulations, we derive the QLMM formulation for an example IVP and show that the solution from the optimized QLMM can be used in an optimization problem.
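As a concrete instance of the k-step LMMs discussed above, the two-step Adams-Bashforth method advances via y_{n+1} = y_n + h(3/2 f_n − 1/2 f_{n−1}) and needs one bootstrapped starting value, which is exactly where the initial error mentioned in the excerpt enters. A minimal classical (non-quantum) Python sketch of ours:

import numpy as np

def ab2(f, t0, y0, h, steps):
    # Two-step Adams-Bashforth; the first step is bootstrapped with Euler
    t, y = t0, np.asarray(y0, dtype=float)
    f_prev = f(t, y)
    y = y + h * f_prev
    t += h
    for _ in range(steps - 1):
        f_curr = f(t, y)
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)
        f_prev, t = f_curr, t + h
    return y

print(ab2(lambda t, y: -y, 0.0, 1.0, 0.01, 100), np.exp(-1.0))  # y' = -y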
... Simulations of our model were conducted in Python using the Euler method [57]. The model was calibrated using the epidemiological data described in the previous section, see subsection "Epidemiological data", referring to the daily positive detected cases, hospitalized and ICU cases, as well as the number of deaths due to COVID-19 infection. ...
Article
Full-text available
Background COVID-19, caused by SARS-CoV-2, has spread globally, presenting a significant public health challenge. Vaccination has played a critical role in reducing severe disease and deaths. However, the waning of immunity after vaccination and the emergence of immune-escape variants require the continuation of vaccination efforts, including booster doses, to maintain population immunity. This study models the dynamics of COVID-19 in the Basque Country, Spain, aiming to characterize the population’s immunity profile and assess its impact on the severity of outbreaks from 2020 to 2022. Methods A SIR/DS model was developed to analyze the interplay of virus-specific and vaccine-induced immunity. The model includes three levels of immunity, with boosting effects from reinfection and/or vaccination. It was validated using empirical daily case data from the Basque Country. The model tracks shifts in immunity status and their effects on disease dynamics over time. Results The COVID-19 epidemic in the Basque Country progressed through three distinct phases, each shaped by dynamic interactions between virus transmission, public health interventions, and vaccination efforts. The initial phase was marked by a rapid surge in cases, followed by a decline due to strict public health measures, with a seroprevalence of 1.3%. In the intermediate phase, multiple smaller outbreaks emerged as restrictions were relaxed and new variants, such as Alpha and Delta, appeared. During this period, reinfection rates reached 20%, and seroprevalence increased to 32%. The final phase, dominated by the Omicron variant, saw a significant rise in cases driven by waning immunity and the variant’s high transmissibility. Notably, 34% of infections during this phase occurred in the naive population, with seroprevalence peaking at 43%. Across all phases, the infection of naive and unvaccinated individuals contributed significantly to the severity of outbreaks, emphasizing the critical role of vaccination in mitigating disease impact. Conclusion The findings underscore the importance of continuous monitoring and adaptive public health strategies to mitigate the evolving epidemiological and immunological landscape of COVID-19. Dynamic interactions between immunity levels, reinfections, and vaccinations are critical in shaping outbreak severity and guiding evidence-based interventions.
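The forward Euler discretization mentioned in the methods is the simplest possible time stepper. As an illustrative stand-in for the paper's calibrated SIR/DS model, here is a bare SIR system stepped with Euler in Python; all parameter values are invented for the demo:

beta, gamma, h = 0.3, 0.1, 0.1   # transmission rate, recovery rate, step (days)
S, I, R = 0.99, 0.01, 0.0        # normalized compartments
for step in range(int(200 / h)): # simulate 200 days
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I
    S, I, R = S + h * dS, I + h * dI, R + h * dR
print(S, I, R)                   # final susceptible/infected/recovered fractions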
... When ρ = 1, the conformal mapping in Equation (75) from [9] is equivalent to Equation (57) using the Fourier series (see [14], p. 178). However, for ρ ≠ 1, the two algorithms are slightly different. ...
Article
Full-text available
The discrete null-field equation method (DNFEM) was proposed based on the null-field equation (NFE) of Green’s representation formulation, where only disk domains were discussed. However, the study of the DNFEM for bounded simply connected domains S is essential for practical applications. Since the source nodes must be located outside of a solution domain S, the first issue in computations is how to locate them. It includes two topics—Topic I: The source nodes must be located not only outside S but also outside the exterior boundary layers. The width of the exterior boundary layers is derived as O(1/N), where N is the number of unknowns in the DNFEM. Topic II: There are numerous locations for source nodes outside the exterior boundary layers. Based on the sensitivity index, several better choices of pseudo-boundaries are studied for bounded simply connected domains. The advanced study of Topics I and II needs stability and error analysis. The bounds of condition numbers (Cond) are derived for bounded simply connected domains, and they are similar to those of the method of fundamental solutions (MFS). New error bounds are also provided for bounded simply connected domains. The thorough study of determining better locations of source nodes is also valid for the MFS and the discrete boundary integral equation method (DBIEM). The development of algorithms based on the NFE lags far behind that of the traditional boundary element method (BEM). Some progress has been made by following the MFS, and reported in this paper. From the theory and computations in this paper, the DNFEM may become a competent boundary method in scientific/engineering computing.
... Finally, the inference process of CFM is straightforward and consists of the following steps: (1) get a sample from p_0; and (2) query the learned vector field v_t(x; θ) to solve the ODE (3) with off-the-shelf solvers, e.g., based on the Euler method [50]. ...
Preprint
Full-text available
Diffusion-based visuomotor policies excel at learning complex robotic tasks by effectively combining visual data with high-dimensional, multi-modal action distributions. However, diffusion models often suffer from slow inference due to costly denoising processes or require complex sequential training arising from recent distilling approaches. This paper introduces Riemannian Flow Matching Policy (RFMP), a model that inherits the easy training and fast inference capabilities of flow matching (FM). Moreover, RFMP inherently incorporates geometric constraints commonly found in realistic robotic applications, as the robot state resides on a Riemannian manifold. To enhance the robustness of RFMP, we propose Stable RFMP (SRFMP), which leverages LaSalle's invariance principle to equip the dynamics of FM with stability to the support of a target Riemannian distribution. Rigorous evaluation on eight simulated and real-world tasks shows that RFMP successfully learns and synthesizes complex sensorimotor policies on Euclidean and Riemannian spaces with efficient training and inference phases, outperforming Diffusion Policies while remaining competitive with Consistency Policies.
... Cubic splines are used to interpolate the velocity fields. Time stepping for the particles is performed by means of a fourth-order Adams-Bashforth method (see, e.g., §6.7 in [40]). ...
Preprint
On their roller coaster ride through turbulence, tracer particles sample the fluctuations of the underlying fields in space and time. Quantitatively relating particle and field statistics remains a fundamental challenge in a large variety of turbulent flows. We quantify how tracer particles sample turbulence by expressing their temporal velocity fluctuations in terms of an effective probabilistic sampling of spatial velocity field fluctuations. To corroborate our theory, we investigate an extensive suite of direct numerical simulations of hydrodynamic turbulence covering a Taylor-scale Reynolds number range from 150 to 430. Our approach allows the assessment of particle statistics from the knowledge of flow field statistics only, therefore opening avenues to a new generation of models for transport in complex flows.
... To study mixed-mode waves, we conduct numerical simulations of the coupled system and observe propagation of waves in the spatial domain. We specifically consider a coupled PC with 100 particles in each channel and solve Eq. (2) directly using a fourth-order Runge-Kutta time integration scheme [39] (Fig. 2c). ...
Preprint
We investigate wave mixing effects in a phononic crystal that couples the wave dynamics of two channels -- primary and control ones -- via a variable stiffness mechanism. We demonstrate analytically and numerically that the wave transmission in the primary channel can be manipulated by the control channel's signal. We show that the application of control waves allows the selection of a specific mode through the primary channel. We also demonstrate that the mixing of two wave modes is possible whereby a modulation effect is observed. A detailed study of the design parameters is also carried out to optimize the switching capabilities of the proposed system. Finally, we verify that the system can fulfill both switching and amplification functionalities, potentially enabling the realization of an acoustic transistor.
... All in all, the above two approaches for breaking the exchange symmetry are equivalent. Apart from this issue, we discretize MFEs according to the Euler method [45] with step size dt = 0.1. In Fig. 4 (left) we show examples of numerical integrations for a handful of values of ν. ...
Preprint
The Naming Game is an agent-based model where individuals communicate to name an initially unnamed object. On a large class of networks continual pairwise interactions lead the system to an ultimate consensus state, in which agents converge on a globally shared name. Soon after the introduction of the model, it was observed in literature that on community-based networks the path to consensus passes through metastable multi-language states. Subsequently, it was proposed to use this feature as a mean to discover communities in a given network. In this paper we show that metastable states correspond to genuine multi-language phases, emerging in the thermodynamic limit when the fraction of links connecting communities drops below critical thresholds. In particular, we study the transition to multi-language states in the stochastic block model and on networks with community overlap. We also examine the scaling of critical thresholds under variations of topological properties of the network, such as the number and relative size of communities and the structure of intra-/inter-community links. Our results provide a theoretical justification for the proposed use of the model as a community-detection algorithm.
... Trajectories of fluid tracers were computed using a second-order Adams-Bashforth method (see, e.g., [39]) combined with cubic interpolation performed on four-point interpolation kernels. Convergence tests with higher-order integration and interpolation schemes [40] were performed, but are not reported here. ...
Preprint
We present an analysis of the Navier-Stokes equations based on a spatial filtering technique to elucidate the multi-scale nature of fully developed turbulence. In particular, the advection of a band-pass-filtered small-scale contribution by larger scales is considered, and rigorous upper bounds are established for the various dynamically active scales. The analytical predictions are confirmed with direct numerical simulation data. The results are discussed with respect to the establishment of effective large-scale equations valid for turbulent flows.
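The excerpt above this abstract pairs a second-order Adams-Bashforth integrator with cubic interpolation on four-point kernels. The snippet below sketches only the interpolation ingredient, assuming the standard four-point cubic Lagrange kernel on a periodic grid; the kernel actually used by the authors may differ.

import numpy as np

def cubic_interp(u, x, dx=1.0):
    """Cubic Lagrange interpolation of periodic grid data u at position x."""
    n = len(u)
    j = int(np.floor(x / dx))          # grid index immediately left of x
    s = x / dx - j                     # fractional offset in [0, 1)
    stencil = [(j + m) % n for m in (-1, 0, 1, 2)]   # periodic 4-point kernel
    w = (-s * (s - 1) * (s - 2) / 6,   # Lagrange weights at nodes -1, 0, 1, 2
         (s + 1) * (s - 1) * (s - 2) / 2,
         -(s + 1) * s * (s - 2) / 2,
         (s + 1) * s * (s - 1) / 6)
    return sum(wk * u[i] for wk, i in zip(w, stencil))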
Preprint
Flow-based methods for sampling and generative modeling use continuous-time dynamical systems to represent a transport map that pushes forward a source measure to a target measure. The introduction of a time axis provides considerable design freedom, and a central question is how to exploit this freedom. Though many popular methods seek straight line (i.e., zero acceleration) trajectories, we show here that a specific class of "curved" trajectories can significantly improve approximation and learning. In particular, we consider the unit-time interpolation of any given transport map T and seek the schedule τ: [0,1] → [0,1] that minimizes the spatial Lipschitz constant of the corresponding velocity field over all times t ∈ [0,1]. This quantity is crucial as it allows for control of the approximation error when the velocity field is learned from data. We show that, for a broad class of source/target measures and transport maps T, the optimal schedule can be computed in closed form, and that the resulting optimal Lipschitz constant is exponentially smaller than that induced by an identity schedule (corresponding to, for instance, the Wasserstein geodesic). Our proof technique relies on the calculus of variations and Γ-convergence, allowing us to approximate the aforementioned degenerate objective by a family of smooth, tractable problems.
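For orientation, a standard construction of this kind (our paraphrase; the paper's precise setup may differ) interpolates the map T linearly under a schedule τ,

X_t = (1 − τ(t)) X_0 + τ(t) T(X_0), t ∈ [0, 1],

whose induced velocity field satisfies v_t(X_t) = τ'(t) (T(X_0) − X_0). The identity schedule τ(t) = t recovers the straight-line trajectories, and the question studied above is which τ minimizes the worst-case spatial Lipschitz constant of v_t over t ∈ [0, 1].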
Preprint
Full-text available
This article deals with the efficient and certified numerical approximation of the smallest eigenvalue and the associated eigenspace of a large-scale parametric Hermitian matrix. To this end, we rely on projection-based model order reduction (MOR), i.e., we approximate the large-scale problem by projecting it onto a suitable subspace and reducing it to one of a much smaller dimension. Such a subspace is constructed by means of weak greedy-type strategies. After detailing the connections with the reduced basis method for source problems, we introduce a novel error estimate for the approximation error related to the eigenspace associated with the smallest eigenvalue. Since the difference between the second smallest and the smallest eigenvalue, the so-called spectral gap, is crucial for the reliability of the error estimate, we propose efficiently computable upper and lower bounds for higher eigenvalues and for the spectral gap, which enable the assembly of a subspace for the MOR approximation of the spectral gap. Based on that, a second subspace is then generated for the MOR approximation of the eigenspace associated with the smallest eigenvalue. We also provide efficiently computable conditions to ensure that the multiplicity of the smallest eigenvalue is fully captured in the reduced space. This work is motivated by a specific application: the repeated identification of the states with minimal energy, the so-called ground states, of parametric quantum spin system models.
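At the core of the projection-based MOR described above is a Rayleigh-Ritz step: the large Hermitian matrix is compressed onto the reduced basis and the small projected eigenproblem is solved exactly. A minimal sketch, assuming an orthonormal basis V has already been built (e.g., by the greedy strategy of the paper, which is not reproduced here):

import numpy as np

def reduced_smallest_eigpair(A_mu, V):
    """Rayleigh-Ritz approximation of the smallest eigenpair of A(mu).

    A_mu -- large Hermitian matrix at one parameter value (n x n)
    V    -- orthonormal basis of the reduced space (n x r, r << n)
    """
    A_r = V.conj().T @ A_mu @ V          # r x r projected matrix
    vals, vecs = np.linalg.eigh(A_r)     # cheap exact solve in the subspace
    lam_min = vals[0]                    # approximate smallest eigenvalue
    u_min = V @ vecs[:, 0]               # approximate eigenvector, lifted back
    return lam_min, u_min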
Chapter
We study the individual behavior of the eigenvalues of the Laplacian matrices of the cyclic graph of order n, where one edge has weight α ∈ ℂ, with Re(α) > 1, and all the others have weight 1. This paper is a sequel to two previous ones where we considered Re(α) ∈ [0,1] and Re(α) < 0. Now, we prove that for Re(α) > 1 and n > Re(α)/Re(α−1), one eigenvalue is greater than 4 while the others belong to [0,4] and are distributed as the function x ↦ 4 sin²(x/2). Additionally, we prove that as n tends to ∞, the outlier eigenvalue converges exponentially to 4 Re(α)²/(2 Re(α)−1). We give exact formulas for half of the inner eigenvalues, while for the others we justify the convergence of Newton's method and the fixed-point iteration method. We find asymptotic expansions, as n tends to ∞, both for the eigenvalues belonging to [0,4] and for the outliers. We also compute the eigenvectors and their norms.
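The limiting formula for the outlier eigenvalue stated above is easy to probe numerically. The sketch below builds the Laplacian of the cyclic graph with one edge of weight α (placing the weighted edge between vertices n−1 and 0, a choice that is immaterial up to relabeling) and compares its outlier against 4 Re(α)² / (2 Re(α) − 1); the printed gap should shrink rapidly as n grows.

import numpy as np

def cyclic_laplacian(n, alpha):
    """Laplacian of the n-cycle with one edge of weight alpha, the rest 1."""
    w = np.ones(n, dtype=complex)
    w[-1] = alpha                        # weighted edge between n-1 and 0
    L = np.zeros((n, n), dtype=complex)
    for i in range(n):
        j = (i + 1) % n                  # edge {i, j} with weight w[i]
        L[i, i] += w[i]; L[j, j] += w[i]
        L[i, j] -= w[i]; L[j, i] -= w[i]
    return L

alpha = 3.0 + 0.0j                       # Re(alpha) > 1, as required above
limit = 4 * alpha.real ** 2 / (2 * alpha.real - 1)
for n in (8, 16, 32):
    ev = np.linalg.eigvals(cyclic_laplacian(n, alpha))
    outlier = ev[np.argmax(ev.real)]     # the one eigenvalue above 4
    print(n, outlier.real, abs(outlier - limit))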
Preprint
Full-text available
The Weak-form Estimation of Non-linear Dynamics (WENDy) algorithm is extended to accommodate systems of ordinary differential equations that are nonlinear-in-parameters (NiP). The extension rests on derived analytic expressions for a likelihood function, its gradient and its Hessian matrix. WENDy makes use of these to approximate a maximum likelihood estimator based on optimization routines suited for non-convex optimization problems. The resulting parameter estimation algorithm has better accuracy, a substantially larger domain of convergence, and is often orders of magnitude faster than the conventional output error least squares method (based on forward solvers). The WENDy.jl algorithm is efficiently implemented in Julia. We demonstrate the algorithm's ability to accommodate the weak form optimization for both additive normal and multiplicative log-normal noise, and present results on a suite of benchmark systems of ordinary differential equations. In order to demonstrate the practical benefits of our approach, we present extensive comparisons between our method and output error methods in terms of accuracy, precision, bias, and coverage.
Chapter
It is demonstrated that the Gauss–Kruger projection is not the best conformal projection for the meridional zone but, in accordance with Chebyshev's theorem, for a region bounded by two curves with equal linear scale factors. These curves intersect the meridional zone at two points on the equator, forming a doubly connected region. Using the properties of the projections within the triad based on the Gauss–Kruger projection, both an ideal projection according to the Airy criterion and the best projection from a set of close-to-equal-area projections were constructed. A comparison of the distortions and the corrections to the angles of the resulting projections was carried out. The relationships between the Airy ideal projection for the aforementioned region and the UTM and Cassini–Soldner projections are demonstrated.
Chapter
About ninety years ago, Lewin introduced various dynamic concepts, such as social force field, equilibrium, and barrier, to explain the psychological influence of the environment on the behavior and development of children (Hirsch and Smale, Differential Equations, Dynamical Systems, and Linear Algebra, Academic Press, 1974; Lewin, K., Environmental forces in child behavior and development, in C. Murchison (Ed.), Handbook of Child Psychology, Clark University Press, 1933), tried to develop a new geometry, which he termed hodological space, and proposed a topological psychology (Lewin, Principles of Topological Psychology, McGraw-Hill, 1936).
Book
With an emphasis on both ordinary differential equations (ODEs) and partial differential equations (PDEs), this extensive textbook offers a thorough examination of numerical techniques for solving differential equations. ODEs are treated in the first chapter, which starts with an introduction to basic ideas and moves through a number of numerical solution methods. It begins with Euler's method and progresses to more sophisticated techniques such as the Taylor series method, Picard's method, the first-, second-, and fourth-order Runge-Kutta methods, and multistep techniques like Adams-Bashforth and Adams-Moulton. Detailed derivations, algorithms, benefits, drawbacks, and worked examples are provided for each method. PDEs are the subject of the second chapter, which examines their classification, mathematical behavior, and numerical solution methods. Two important numerical techniques, the Finite Difference Method (FDM) and the Finite Element Method (FEM), are covered in depth. The FEM section discusses elements, shape functions, preprocessing, solver execution, post-processing, and the key ideas of the weak form and Galerkin's approach.
Article
The present work surveys numerical analysis, the branch of mathematics concerned with devising efficient methods for obtaining numerical solutions to difficult mathematical problems. We conducted this research by examining the different types of reviews and by evaluating existing literature review papers. Most of the mathematical problems that arise in science and engineering are very difficult, and sometimes impossible, to solve exactly. Numerical analysis, applied mainly in mathematics and computer science, is therefore continuously creating and applying algorithms to solve such problems numerically.
Article
Full-text available
The new generation of computing devices tends to support multiple floating-point formats and different computing precisions. Besides single and double precision, half precision is embraced and widely supported by new computing devices. Low-precision representations have compact memory size and lightweight computing strength, and they also bring opportunities for the optimization of BLAS routines. This paper proposes a new sparse matrix partition approach based on the IEEE 754 standard floating-point format. An input sparse matrix in double precision is partitioned and transformed into several sub-matrices in different precisions without loss of accuracy. Most non-zero elements can be stored in half or single precision if the most significant bits of the exponent and the least significant bits of the mantissa are zero in the double-precision representation. Based on this mixed-precision representation of the sparse matrix, we also present a new SpMV algorithm, pSpMV, for GPU devices. pSpMV not only reduces the memory access overhead, but also reduces the computing strength of the floating-point operations. Experimental results on two GPU devices show that pSpMV achieves a geometric mean speedup of 1.39x on Tesla V100 and 1.45x on Tesla P100 over double-precision SpMV for 2,554 sparse matrices.
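The partition criterion described above (zero excess exponent bits and zero trailing mantissa bits) is equivalent to a lossless narrowing cast: a double belongs in a narrower bucket exactly when converting it down and back reproduces the same value. A hedged sketch of that test follows; the function names are illustrative and not pSpMV's actual interface.

import numpy as np

def narrowest_precision(x):
    """Return the narrowest IEEE 754 format that stores the double x exactly."""
    with np.errstate(over='ignore'):         # narrowing may overflow to inf;
        if np.float64(np.float16(x)) == x:   # that simply fails the round-trip
            return np.float16
        if np.float64(np.float32(x)) == x:
            return np.float32
    return np.float64

def partition_values(values):
    """Bucket the non-zeros of a matrix by the precision they need."""
    buckets = {np.float16: [], np.float32: [], np.float64: []}
    for v in values:
        buckets[narrowest_precision(v)].append(v)
    return buckets

# Example: 0.5 round-trips through half precision, 0.1 needs the full double.
parts = partition_values([0.5, 1.0, 0.1, 3.0e38, 1.0e300])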
Preprint
The diversity product and the diversity sum are two very important parameters for a well-performing unitary space-time constellation. A basic question is what the maximal diversity product (or sum) is. In this paper we derive general upper bounds on the diversity sum and the diversity product for unitary constellations of any dimension n and any size m, using packing techniques on the compact Lie group U(n).
Preprint
In a recent Rapid Communication [A. Stan, Phys. Rev. B 93, 041103(R) (2016)], the reliability of the Keldysh–Kadanoff–Baym equations (KBE) with correlated self-energy approximations applied to linear and nonlinear response has been questioned. In particular, the existence of a universal attractor has been predicted that would drive the dynamics of any correlated system towards an unphysical homogeneous density distribution, regardless of the system type, the interaction, and the many-body approximation. Moreover, it was conjectured that even the mean-field dynamics would be damped. Here, by performing accurate solutions of the KBE for the situations studied in that paper, we show that these claims are wrong and are the result of numerical inaccuracies.