Article

Numerical Recipes: The Art of Scientific Computing

... Thus, we require a solver that can handle a non-linear partial differential equation. We have used a method for advection-diffusion equations in which space and time are discretized and partial derivatives are approximated by finite differences [205,264]. Let space be discretized into the set of points {r_i}, which form a cubic lattice with lattice spacing ∆x. Let time be similarly discretized as a set of points {t_n} equally spaced by a time step ∆t. ...
... The term Γ_D(r, t) is handled well via centered differences [264]: ...
... The non-linear advective term Γ_adv(r, t) proved to be significantly trickier. To discretize it, we implemented a variation of Richtmyer's Two-Step Lax-Wendroff method [264][265][266]. We start by approximating the solute concentrations marched forward half a time step (due to advection only) and located in between the lattice nodes: ...
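As an illustration (not the thesis's actual coupled lattice Boltzmann solver), the two-step Lax-Wendroff idea for the 1-D advection equation u_t + a u_x = 0 can be sketched as follows; the predictor computes half-step values located between the lattice nodes, exactly as the snippet describes:

```python
import numpy as np

def lax_wendroff_step(u, a, dx, dt):
    """One Richtmyer two-step Lax-Wendroff update for u_t + a*u_x = 0
    with periodic boundaries; half-step values live between lattice nodes."""
    up = np.roll(u, -1)                                  # u_{i+1}
    # predictor: half-step values at (i + 1/2, t + dt/2)
    u_half = 0.5 * (u + up) - (a * dt / (2 * dx)) * (up - u)
    # corrector: full step using the half-step fluxes
    return u - (a * dt / dx) * (u_half - np.roll(u_half, 1))

# advect a Gaussian pulse once around a periodic domain
N, a = 200, 1.0
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
dt = 0.5 * dx / a                                        # CFL number 0.5
u0 = np.exp(-100.0 * (x - 0.5) ** 2)
u = u0.copy()
for _ in range(int(round(1.0 / (a * dt)))):
    u = lax_wendroff_step(u, a, dx, dt)
```

The scheme is conservative (the sum over nodes is preserved to round-off) and second-order accurate, at the price of some dispersive ripple near sharp gradients.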
Thesis
Full-text available
This work focuses on the ways in which the transport of chemically-reactive fluids may be affected by the presence of confinement. Specifically, we study flow through microchannels whose walls are patterned with catalytic material. We explore two setups which differ in their material properties and transport mechanisms. Despite these differences, properties such as the channel corrugation or the distribution of catalytic material play a role in both setups. The first system consists of a gas driven through a catalytic microreactor. On the walls of said reactor the gas undergoes a chemical reaction, which leads to the generation of a second chemical species. The second system comprises a catalytic microchannel/pore filled with a reactive liquid, leading to the formation of a dissolved chemical species. The concentration of this species is spatially-varying near the channel wall. Due to the phenomenon of diffusioosmosis, flows form along the walls. As such, the second system differs fundamentally from the gas-phase reactor, in the sense that motion is driven internally (due to the presence of the chemical reaction) rather than externally (due to the presence of an applied pressure drop). Additionally, the gas-phase reactor explores a regime in which the reactive fluid is compressible. However, it also neglects the inverse chemical reaction that turns product back into reactant. In contrast, said inverse reaction is included in the diffusioosmotic channel. We establish connections between the two above-described systems to the wider topics of microfluidics, synthetic colloidal swimmers, and the design of catalytic reactors. A literature review of these topics is presented, aiming at motivating this work, and to present the state of the art. To perform our studies, we employ both numerical and analytical methods. Our numerical methods consist chiefly of the lattice Boltzmann method, to which we couple an additional finite difference solver.
We further present a detailed derivation of this method, and a demonstration of its validity as a solver of the governing equations. To obtain analytical results, we have effectively reduced the dimensionality of the governing equations by employing the Fick-Jacobs and lubrication approximations. By means of such a theory, we are able to obtain closed-form results for the pressure inside the gas-phase reactor, as well as for the flow of product at the reactor outlet. We find an optimum reactor length that maximizes this flow. We show that in the highly-compressible regime, there is also an optimum corrugation height. We apply our theory to a model monolithic reactor and find an optimum channel size. We then perform lattice Boltzmann simulations of diffusioosmosis-driven catalytically-active channels/pores. We find that fore-aft symmetric pores exhibit three different kinds of dynamics: a mixing state with convection currents, a pumping state where a net flow develops, and an oscillating state where the flow rate exhibits sustained oscillations. We discuss the mechanism behind each of these states, and show how they depend on parameters such as the distribution of catalytic material, and the corrugation height of the channel wall. To complement the above-mentioned study, we develop a Fick-Jacobs theory for such active pores. From it, we conclude that the spontaneous symmetry breaking that leads to pumping is governed by three timescales: the timescales for diffusive and advective transport, as well as the average lifetime of the solute. We show how hysteresis and spontaneous jumps in the flow rate are a generic property of fore-aft asymmetric pores. Said effects can be triggered by asymmetry either in the shape of the pore or in the distribution of catalytic material. Finally, we summarize our main results, and present an outlook for future research.
... Bar Shalom, Klapisch and Oreg proposed in [13] the use of a predictor-corrector due to Hamming (see [30], §24.3) when it is stable, and proposed a modified predictor-corrector for the region in which the latter method is unstable. In [5,14], Wilson, Sonnad, Sterne and Isaacs proposed the use of a Kaps-Rentrop method [31,32]. In [17], Rawitscher proposed a spectral iterative method based on a Chebyshev expansion. ...
... In [5,14], the authors used a Rosenbrock method proposed by Kaps and Rentrop [31], frequently used for stiff equations [32]. Typical numerical methods for stiff equations are usually based on adaptive grid refinement, aiming at correctly sampling all solutions, with all their various scales of variation. ...
... The Kaps-Rentrop (KR) method [31,32] was used by Wilson et al. to solve the amplitude equation related to the Dirac equation, which is analogous to Eq. (12) [5,14]. It is an adaptive mesh refinement method based on an implicit scheme. ...
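For orientation, the defining trick of Rosenbrock methods (of which Kaps-Rentrop is a fourth-order member) is to replace the Newton iteration of a fully implicit step with a single linear solve against the Jacobian. A minimal sketch using the first-order member of the family (linearly implicit Euler, not the KR scheme itself) on a stiff test problem:

```python
import numpy as np

def rosenbrock_euler(f, jac, y0, t0, t1, h):
    """Linearly implicit (Rosenbrock) Euler: the simplest member of the
    Rosenbrock family, of which Kaps-Rentrop is a fourth-order variant.
    Each step solves one linear system with the Jacobian, so stiffness is
    handled without a Newton iteration or an explicit stability limit."""
    y, t = np.atleast_1d(np.asarray(y0, dtype=float)), t0
    I = np.eye(y.size)
    while t < t1 - 1e-12:
        step = min(h, t1 - t)
        W = I - step * jac(t, y)
        y = y + np.linalg.solve(W, step * f(t, y))
        t += step
    return y

# stiff test problem: y' = -1000*(y - cos(t)), y(0) = 0; the solution
# relaxes onto cos(t) on a timescale of 1/1000
lam = 1000.0
f = lambda t, y: -lam * (y - np.cos(t))
jac = lambda t, y: np.array([[-lam]])
y_end = rosenbrock_euler(f, jac, [0.0], 0.0, 1.0, 0.01)
```

Note that an explicit Euler step of the same size (h·λ = 10) would blow up; the linearly implicit step remains stable and tracks the slow manifold.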
Preprint
We propose an explicit numerical method to solve Milne's phase-amplitude equations. Previously proposed methods directly solve Milne's nonlinear equation for the amplitude. For that reason, they exhibit high sensitivity to errors and are prone to instability through the growth of a spurious, rapidly varying component of the amplitude. This makes the systematic use of these methods difficult. In contrast, the present method is based on solving a linear third-order equation which is equivalent to the nonlinear amplitude equation. This linear equation was derived by Kiyokawa, who used it to obtain analytical results on Coulomb wavefunctions [Kiyokawa, AIP Advances, 2015]. The present method uses this linear equation for numerical computation, thus resolving the problem of the growth of a rapidly varying component.
... From the point of view of a measurement process, the force F_i^l(r_i) in Eq. (5) can be interpreted as the noisy result of said measurement and the G_ji^ll(r_ji) as fitting functions for it, with D_i^l(t) being the instantaneous deviations between measured and fitted values (de los Santos-López et al., 2022). Therefore, the values of G_ji^ll(r_ji) can be obtained using the least-squares method (Press et al., 1992). ...
... To solve this equation, the least-squares (LS) method is used (Press et al., 1992), requiring that the values of the elements of G minimize those of D_α^T D_α. This leads to the following equation, ...
... which is solved by using the singular value decomposition (SVD) (Press et al., 1992). We perform the same procedure for each F_α, with α = x, y, z, independently, and thus obtain three different solutions for G, which, when compared, turn out to be equal to within small statistical errors. ...
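The SVD route to a least-squares fit, as advocated in Press et al., can be sketched on a synthetic problem (the matrix X and coefficients g_true below are illustrative, not the paper's actual fitting functions); truncating tiny singular values is what tames ill-conditioned normal equations:

```python
import numpy as np

# Fit y ≈ X @ g by least squares through the SVD: small singular values
# can be truncated to stabilise an ill-conditioned design matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                  # fitting functions, one per column
g_true = np.array([1.5, -2.0, 0.5])
y = X @ g_true + 0.01 * rng.normal(size=50)   # "noisy measurement" of the force

U, s, Vt = np.linalg.svd(X, full_matrices=False)
tol = 1e-10 * s[0]
s_inv = np.where(s > tol, 1.0 / s, 0.0)       # drop negligible singular values
g = Vt.T @ (s_inv * (U.T @ y))                # pseudo-inverse solution
```

The same solve repeated for each Cartesian component of the data would yield the three independent estimates of G compared in the snippet above.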
Article
The determination of the effective interactions in many-body systems still poses a tremendous challenge in Condensed Matter Physics. The problem mainly resides in the fact that, on the one hand, there is not a unique route to obtain the effective forces between the "observable" constituents and, on the other hand, one typically considers that the diluted limit of those components always corresponds to the effective forces at all particle concentrations. In the particular case of colloidal dispersions, it is very important to have experimental, theoretical, and computational tools that allow us to determine, accurately and without using significant approximations, the effective interactions between colloids. In this contribution, we report on a detailed study of depletion potentials in colloidal mixtures at finite concentration, namely, binary and ternary mixtures of disks (in two dimensions or 2D) and spheres (in three dimensions or 3D). To this end, the depletion forces between the large colloidal species are determined through a recently developed scheme based on the so-called contraction of the description, which has been extended and built at the level of the bare forces, i.e., we basically consider that all particles interact through the bare potentials and the net force acting on a given particle is determined by Newton's second law. This scheme can be easily adapted to Molecular Dynamics simulations. To verify the physical consistency of the formalism, we explicitly show that in the diluted limit of large particles, the resulting depletion potential correctly reproduces the AO-Vrij limit. As further proof of the accuracy of the results, a comparison of the structure of the large colloids in the whole mixture with the one that results using exclusively the depletion potential is carried out. In 3D, the results are explicitly compared with the potential of mean force and those obtained with the integral equations formalism.
We also report a new colloidal stability mechanism based on the use of two different species of depletant agents at low and equal concentrations. Although we highlight the fact that the depletion potential depends on the concentration of the large species, we show that this dependence can be eliminated when the colloidal dispersion is open to a reservoir of small particles, i.e., when the chemical potential of small particles is fixed. Finally, we report the effects of polydispersity on the depletion interaction between large colloids.
... Recall the formulation of problem (18). It is required to find a solution of the homogeneous boundary value problem with zero Dirichlet-Robin conditions and a given initial condition, where the heat conductivity coefficient depends on the material properties. ...
... where λ_n > 0 are the positive solutions of the equation √λ = −h tan(√λ), n ∈ ℕ. Finally, the solution of the initial problem (15) is obtained by adding the solution of the boundary value problem (17) and the solution of the homogeneous boundary value problem (18): ...
... Among the numerical algorithms for solving initial and boundary value problems for linear ODEs of the first and second order, there are many methods that use the initial approximation (boundary conditions) as the starting condition that determines the entire subsequent solution of the problem. These include methods such as Euler, Adams-Bashforth, Runge-Kutta, etc. [18]. Other methods, based on approximation of the solution using global functions [1][2][3][4][5], rely on the construction of systems of equations that simultaneously include both the initial (boundary) conditions and conditions that specify the behavior of the derivatives of the desired solution. ...
Article
Full-text available
For one-dimensional inhomogeneous (with respect to the spatial variable) linear parabolic equations, a combined approach is used, dividing the original problem into two subproblems. The first of them is an inhomogeneous one-dimensional Poisson problem with Dirichlet-Robin boundary conditions, the search for a solution of which is based on the Chebyshev collocation method. The method was developed based on previously published algorithms for solving ordinary differential equations, in which the solution is sought in the form of an expansion in Chebyshev polynomials of the 1st kind on Gauss-Lobatto grids, which allows the use of discrete orthogonality of polynomials. This approach turns out to be very economical and stable compared to traditional methods, which often lead to the solution of ill-conditioned systems of linear algebraic equations. In the described approach, the successful use of integration matrices allows complete elimination of the need to deal with ill-conditioned matrices. The second, homogeneous problem of thermal conductivity is solved by the method of separation of variables. In this case, finding the expansion coefficients of the desired solution in the complete set of solutions to the corresponding Sturm-Liouville problem is reduced to calculating integrals of known functions. A simple technique for constructing Chebyshev interpolants of integrands allows the integrals to be calculated by summing interpolation coefficients.
... A commonly employed technique for stepsize control in this method is step doubling [6,7]. However, this approach necessitates 11 derivative evaluations per step, often requiring substantial computing time. ...
... Runge-Kutta algorithms are widely used to integrate Ordinary Differential Equations, thanks to their precision and the stability of the calculations they offer despite being explicit methods. In addition, they provide interesting possibilities for predicting the size of integration steps [6,7,8]. ...
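The step-doubling control mentioned above can be sketched as follows: one RK4 step of size h is compared with two steps of size h/2, and the difference estimates the local error (11 distinct derivative evaluations, since the first k1 could be shared between the large and first small step; this sketch recomputes it for clarity):

```python
import numpy as np

def rk4_step(f, t, y, h):
    """Classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def step_doubling(f, t, y, h, tol):
    """Take one step of size h and two of size h/2, estimate the local
    error from their difference, and propose an adapted step size."""
    y_big = rk4_step(f, t, y, h)
    y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
    err = np.max(np.abs(y_half - y_big))
    # RK4 local error scales as h^5, so the optimal step goes as (tol/err)^(1/5)
    h_new = h * min(5.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.2))
    return y_half, err, h_new

# one controlled step on y' = y
y_new, err, h_new = step_doubling(lambda t, y: y, 0.0, np.array([1.0]), 0.1, 1e-8)
```

The more economical alternative alluded to in modern texts is an embedded pair (e.g. Runge-Kutta-Fehlberg or Dormand-Prince), which obtains the error estimate from the same six stages rather than eleven.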
Preprint
Full-text available
Neural Ordinary Differential Equations (Neural ODEs) represent a significant breakthrough in deep learning, promising to bridge the gap between machine learning and the rich theoretical frameworks developed in various mathematical fields over centuries. In this work, we propose a novel approach that leverages adaptive feedforward gradient estimation to improve the efficiency, consistency, and interpretability of Neural ODEs. Our method eliminates the need for backpropagation and the adjoint method, reducing computational overhead and memory usage while maintaining accuracy. The proposed approach has been validated through practical applications, and showed good performance relative to state-of-the-art Neural ODE methods.
... The case studies presented are compared in terms of accuracy and CPU time. The accuracy of the responses obtained by the CCA_TR-LU method is compared against PSCAD/EMTDC®, while in terms of CPU time the responses of the CCA_TR-LU method are compared against the CCA_TR method including conventional LU decomposition [23]. The method can represent various adverse power-quality phenomena and support stability assessment, covering voltage sags and swells, harmonics, among others. ...
... Also, taking as a reference the permissible values established by IEEE Std 519-2014, the individual current harmonics from the 3rd to the 9th meet the standard, since they are less than 4%, while the voltage harmonics from the 3rd to the 9th also meet the respective standard, since their magnitudes are less than 3%. C. Performance: CPU times. Table II summarizes the CPU processing times obtained during the application of the CCA_TR method, based on a sparse matrix factorization technique [18] and a conventional LU decomposition process [23]. The processing times required by the conventional CCA_TR-LU method are considerably higher than those required by the sparse CCA_TR-LU method; Table II (rows 3-4, columns 2-3) shows these execution times and illustrates that the sparse CCA_TR-LU method is 2.2 and 2.1 times faster than the conventional CCA_TR-LU method. ...
... The Ralston method is a second-order technique that relies on using specific weights for the slope, achieving a balance between accuracy and computational costs. This method calculates two slopes and then combines them to obtain an updated solution, making it suitable for problems that require a moderate level of accuracy [17][18][19]. ...
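The two-slope structure described above can be made concrete; this sketch uses the common parameterisation of Ralston's method (second stage at t + 2h/3, weights 1/4 and 3/4) on a simple decay problem, not the cited authors' application:

```python
import math

def ralston_step(f, t, y, h):
    """Ralston's second-order method: two slope evaluations combined with
    weights 1/4 and 3/4, chosen to minimise the local truncation error
    bound among two-stage second-order Runge-Kutta schemes."""
    k1 = f(t, y)
    k2 = f(t + 2.0 * h / 3.0, y + 2.0 * h / 3.0 * k1)
    return y + h * (0.25 * k1 + 0.75 * k2)

# integrate y' = -2*y, y(0) = 1 up to t = 1
decay = lambda t, y: -2.0 * y
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = ralston_step(decay, t, y, h)
    t += h
```

With only two derivative evaluations per step it costs half as much as RK4, at second-order rather than fourth-order accuracy, which is the accuracy/cost trade-off the snippet refers to.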
Article
Full-text available
This research explores the numerical solutions of nonlinear ordinary differential equations using four distinct methods: Heun method, the Midpoint method, Ralston's method, and the fourth-order Runge-Kutta method (RK4). Given the complexity and prevalence of nonlinear equations in various scientific fields, effective numerical techniques are essential for obtaining accurate solutions. This research contributes to the understanding of numerical methods in solving complex differential equations, aiding practitioners in selecting the most appropriate approach for their needs.
... Figure reprinted from Adler et al. [4] in accordance with AGU Publications permission policy. For each wavelength, the retrieval algorithm scans over possible values of the real part and the imaginary part of the RI of the material and seeks the pair of values (m_real, m_imag) that leads to a minimum of the merit function χ²/N²_sizes, where (see, e.g., [105, their Eq. 15.5.5], [1,27]): ...
Chapter
The ability to accurately calculate the scattering and absorption properties of atmospheric aerosol particles is crucial for atmospheric modeling and remote sensing. Theoretical calculations of the scattering and absorption of solar radiation by aerosol particles have been carried out for decades, and much progress has been made over the years in describing the scattering of solar radiation not just by homogeneous spherical particles, but also by complex aerosol particle configurations and shapes, such as internal mixtures of different components, including layered particles; bare and coated aggregates; spheroids, ellipsoids, cylinders, Chebyshev particles, and other non-spherical shapes. In this review, we will focus on a specific type of complexity in internal aerosol configuration, namely aerosol particles comprised of disordered materials with and without macroscopic porosity. We will review the unique considerations in simulating the scattering of light by such particles and some specific insights obtained over the years from the research of our group.
... A simple periodogram of our RV data (computed as in Press et al. 1992) confirms that rotational modulation dominates the RV fluctuations, with the strongest peak located close to P_rot and associated with a false alarm probability (FAP) of ≃0.01 per cent (see top plot of Fig. C1 in Appendix C). Fitting now our RVs with QP GPR (as previously achieved for our B_ℓ data, see Sec. 4) confirms this result, yielding a period consistent with P_rot (3.00 ± 0.01 d) and a GP amplitude of 0.41 +0.12 −0.09 km s⁻¹ for the rotational modulation, as well as filtered RVs (i.e., the difference between the raw RVs and the derived GP) whose rms of 0.24 km s⁻¹ accounts for most of the excess RV dispersion (with respect to the median RV error bar of 0.16 km s⁻¹) but not for all (χ²_r = 2.08, see Table 4). ...
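The periodogram for unevenly sampled data cited here (Press et al. 1992) is the Lomb-Scargle normalised periodogram; a compact sketch on synthetic data with a 3-day period (the data, baseline, and noise level below are illustrative, not the paper's observations):

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Normalised Lomb-Scargle periodogram for unevenly sampled data,
    following the classic formulation: an offset tau makes the sine and
    cosine terms orthogonal at each trial frequency."""
    y = y - y.mean()
    var = y.var()
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2.0 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[i] = ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s)) / (2.0 * var)
    return power

# unevenly sampled sinusoid with a 3-day period, like a rotation signal
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 30.0, 120))
y = np.sin(2.0 * np.pi * t / 3.0) + 0.2 * rng.normal(size=120)
freqs = np.linspace(0.05, 1.0, 400)
power = lomb_scargle(t, y, freqs)
best = freqs[np.argmax(power)]                 # recovered frequency ≈ 1/3 d⁻¹
```

The exponential distribution of periodogram power under the null hypothesis is what allows the false-alarm probabilities quoted in the text to be assigned to a peak.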
Preprint
This paper presents near-infrared spectropolarimetric and velocimetric observations of the young planet-hosting T Tauri star PDS 70, collected with SPIRou at the 3.6m Canada-France-Hawaii Telescope from 2020 to 2024. Clear Zeeman signatures from magnetic fields at the surface of PDS 70 are detected in our data set of 40 circularly polarized spectra. Longitudinal fields inferred from Zeeman signatures, ranging from -116 to 176 G, are modulated on a timescale of 3.008±0.006 d, confirming that this is the rotation period of PDS 70. Applying Zeeman-Doppler imaging to subsets of unpolarized and circularly polarised line profiles, we show that PDS 70 hosts low-contrast brightness spots and a large-scale magnetic field in its photosphere, featuring in particular a dipole component of strength 200-420 G that evolves on a timescale of months. From the broadening of spectral lines, we also infer that PDS 70 hosts a small-scale field of 2.51±0.12 kG. Radial velocities derived from unpolarized line profiles are rotationally modulated as well, and exhibit additional longer-term chromatic variability, most likely attributable to magnetic activity rather than to a close-in giant planet (with a 3σ upper limit on its minimum mass of ~4 M_Jup at a distance of ~0.2 au). We finally confirm that accretion occurs at the surface of PDS 70, generating modulated red-shifted absorption in the 1083.3-nm He i triplet, and show that the large-scale magnetic field, often strong enough to disrupt the inner accretion disc up to the corotation radius, weakens as the star gets fainter and redder (as in 2022), suggesting that dust from the disc more easily penetrates the stellar magnetosphere in such phases.
... Numerical solution NS10 has an accuracy O(τ + h²) and it is computed with O(MN) operations because I + η∆ is a tridiagonal M-matrix [37,38]. Now we construct the finite difference scheme of equation (42) which uses approximation A_3 for the partial derivative in time and central difference approximation for the partial derivative in space. ...
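The O(MN) cost quoted here rests on solving each tridiagonal system in O(N); the standard tool is the Thomas algorithm (a tridiagonal LU sweep), sketched below and applied to a small I + η∆ matrix of the kind described in the text:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(N) by the Thomas algorithm:
    a = sub-diagonal, b = main diagonal, c = super-diagonal (a[0] and
    c[-1] are ignored), d = right-hand side.  No pivoting is needed when
    the matrix is diagonally dominant, as for the M-matrix I + eta*Delta."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# the matrix I + eta*Delta from the text: diagonal 1 + 2*eta, off-diagonals -eta
n, eta = 6, 0.5
a = np.full(n, -eta); b = np.full(n, 1.0 + 2.0 * eta); c = np.full(n, -eta)
d = np.arange(1.0, n + 1.0)
x = thomas_solve(a, b, c, d)
```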
Preprint
Full-text available
In this paper we consider constructions of first derivative approximations using the generating function. The weights of the approximations contain the powers of a parameter whose modulus is less than one. The values of the initial weights are determined, and the convergence and order of the approximations are proved. The paper discusses applications of approximations of first derivative for numerical solution of ordinary and partial differential equations and proposes an algorithm for fast computation of the numerical solution. Proofs of the convergence and accuracy of the numerical solutions are presented and the performance of the numerical methods considered is compared with the Euler method. The main goal of constructing approximations for integer-order derivatives of this type is their application in deriving high-order approximations for fractional derivatives, whose weights have specific properties. The paper proposes the construction of an approximation for the fractional derivative and its application for numerically solving fractional differential equations. The theoretical results for the accuracy and order of the numerical methods are confirmed by the experimental results presented in the paper.
... Equation (3) is solved for the initial parameters (w_c, μ_c, σ_c and w_f, μ_f, σ_f) using an unsupervised machine learning technique known as the EM algorithm (Dempster, Laird & Rubin 1977; Fraley & Raftery 1998; Press et al. 2007; Deisenroth et al. 2020). Following the execution of the EM algorithm on the initial parameters, the final values of the parameters (w_k, μ_k, σ_k) are obtained. ...
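The EM iteration for a two-component 1-D Gaussian mixture, as referenced above, alternates between computing responsibilities (E-step) and re-estimating (w, μ, σ) from responsibility-weighted moments (M-step). A minimal sketch on synthetic data (the mixture below is illustrative, not the paper's photometric sample):

```python
import numpy as np

def em_gmm_1d(x, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture by the EM algorithm."""
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    sig = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) \
              / (sig * np.sqrt(2.0 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted updates of (w_k, mu_k, sigma_k)
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sig

# synthetic sample: 30% from N(-2, 0.5), 70% from N(2, 0.5)
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(2.0, 0.5, 700)])
w, mu, sig = em_gmm_1d(x)
```

Each iteration is guaranteed not to decrease the likelihood, which is why EM is the standard route to mixture parameters despite converging only to a local optimum.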
Article
Full-text available
We present a statistical, photometric, and spectral energy distribution (SED) analysis to characterize the blue straggler stars (BSSs) populations in the Galactic old open cluster Berkeley 39. Berkeley 39 is a 6.16 Gyr old open cluster located at a distance of 3.99 Kpc.Gaia DR3 astrometry data have been used to estimate the membership probabilities using ensemble-based unsupervised machine learning techniques. We identified 21 BSS candidates on the colour–magnitude diagram, with 19 of them being detected in the Swift/UVOT UVW2 filter. We analysed the radial surface density profile and examined the cluster dynamical states and mass segregation effect. The SEDs of 19 BSSs are constructed using multiwavelength data covering UV to IR wavelengths. A single-component SED is fitted successfully for 14 BSS candidates. We discovered hot companions in five BSS candidates. These hot companions have temperatures of approximately 14 000 to 23 000 K, radii ranging from 0.04 to 0.13 R⊙, and luminosities ranging from 0.16 to 2.91 L⊙. Among these, three are most likely extremely low-mass white dwarfs(WDs) with masses around 0.17 to 0.18 M⊙, and two are low-mass WDs with masses around 0.18 to 0.39 M⊙. This confirms that they are post-mass transfer (Case A or Case B) systems. We also investigated the variable characteristics of BSSs by analysing their light curves using data from TESS. Our analysis confirms that two BSSs identified as eclipsing binaries in Gaia DR3 are indeed eclipsing binaries. Additionally, one of the two eclipsing binary BSSs shows evidence of having hot companions, as indicated by the multiwavelength SEDs.
... To implement this mathematically, correlated input variables are generated for the calculation. The so-called "Gaussian copula" [50][51][52] can be used to generate the correlated random values. ...
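The Gaussian-copula construction mentioned here can be sketched as follows: draw correlated multivariate normals, push each margin through the standard normal CDF to obtain correlated uniforms, then map those through any target inverse CDF (the exponential marginal and the variable name `timber_strength` below are hypothetical placeholders, not the paper's material model):

```python
import math
import numpy as np

def gaussian_copula_samples(n, corr, rng):
    """Draw correlated Uniform(0, 1) variates via a Gaussian copula:
    sample a correlated multivariate normal (Cholesky factor of the
    correlation matrix), then apply the standard normal CDF per margin."""
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, corr.shape[0])) @ L.T
    phi = np.vectorize(lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0))))
    return phi(z)

rng = np.random.default_rng(3)
corr = np.array([[1.0, 0.8], [0.8, 1.0]])
u = gaussian_copula_samples(20000, corr, rng)
# map to an arbitrary target marginal by its inverse CDF, e.g. Exp(1):
timber_strength = -np.log(1.0 - u[:, 0])    # hypothetical Exp(1) marginal
```

Note that the copula preserves rank correlation, so the Pearson correlation of the uniforms ((6/π)·arcsin(ρ/2) ≈ 0.786 for ρ = 0.8) differs slightly from the 0.8 imposed in the normal space.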
Article
Full-text available
Timber–concrete composite (TCC) combines the advantages of timber and reinforced concrete construction. In multi-storey buildings, the design of TCC floors is often governed by the deformation serviceability requirements. One way to improve deformation serviceability is through the design of continuous floor systems. However, the current practice of TCC slab systems is often limited to load-carrying as single-span beams. This paper presents an analytical model based on the component method for determining the moment–rotation relation of continuous TCC slabs in the range with negative bending moments. The investigations involve the development of load–displacement curves for the individual force-transmitting components, including timber and reinforced concrete, followed by their assembly into a component model. The model has been verified using finite element calculations and the results demonstrate that the moment–rotation relation of continuous TCC systems can be accurately captured. As the variability of the material properties strongly influences the stiffness of the joint and therefore the deformation and stresses in the TCC slab, a probabilistic analysis of the material parameters was performed using the Monte Carlo method. The probabilistic investigations show that the variability of the input parameters has a significant impact on the joint stiffness, with notable scattering observed in the moment–rotation relation, especially in the range of the cracking phase of the concrete. The results from the probabilistic investigations enable initial statements about the joint stiffness of continuous timber–concrete composite slabs in ranges with negative bending moments.
... When non-periodic signals are under consideration, they must be zero padded, cf. Press (2007), before Hilbert transformation. ...
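The zero-padding step before the FFT-based Hilbert transform can be sketched as follows (the pad factor and test signal are illustrative; the padding reduces wrap-around artefacts for non-periodic inputs and is trimmed from the result):

```python
import numpy as np

def hilbert_analytic(x, pad_factor=4):
    """Analytic signal via the FFT: zero-pad the (non-periodic) input,
    double the positive-frequency components, zero the negative ones,
    inverse-transform, and trim the padding."""
    n = len(x)
    m = pad_factor * n
    X = np.fft.fft(x, m)                 # FFT of the zero-padded signal
    h = np.zeros(m)
    h[0] = 1.0
    if m % 2 == 0:
        h[m // 2] = 1.0
        h[1:m // 2] = 2.0
    else:
        h[1:(m + 1) // 2] = 2.0
    return np.fft.ifft(X * h)[:n]

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = np.cos(2.0 * np.pi * 50.0 * t)
env = np.abs(hilbert_analytic(x))        # envelope ≈ 1 away from the edges
```

Edge ripple from the implicit truncation remains near the boundaries; only the interior of the envelope is reliable, which is exactly why padding (and discarding the ends) is recommended for non-periodic signals.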
Preprint
In insect locomotion, the transmission of energy from muscles to motion is a process within which there are many sources of dissipation. One significant but understudied source is the structural damping within the insect exoskeleton itself: the thorax and limbs. Experimental evidence suggests that exoskeletal damping shows frequency (or, rate) independence, but investigation into its nature and implications has been hampered by a lack of methods for simulating the time-domain behaviour of this damping. Here, synergising and extending results across applied mathematics and seismic analysis, we provide these methods. We show that existing models of exoskeletal rate-independent damping are equivalent to an important singular integral in time: the Hilbert transform. However, these models are strongly noncausal, violating the directionality of time. We derive the unique causal analogue of these existing exoskeletal damping models, as well as an accessible approximation to them, as Hadamard finite-part integrals in time, and provide methods for simulating them. These methods are demonstrated on several current problems in insect biomechanics. Finally, we demonstrate, for the first time, that existing rate-independent damping models are not strictly dissipative: in certain circumstances they are capable of generating negative power without apparently storing energy, likely violating conservation of energy. This work resolves a key methodological impasse in the understanding of insect exoskeletal dynamics and offers new insights into the micro-structural origins of rate-independent damping as well as the directions required in order to resolve violations of causality and the conservation of energy in existing models.
... The matrix ∆ is the tridiagonal matrix of dimension N − 1 with main diagonal entries equal to 2 and entries above and below the main diagonal equal to −1. Numerical solution (59) is computed with O(MN) operations because I + η∆ is a tridiagonal M-matrix [37,38]. A Crank-Nicolson scheme for the heat equation has second-order accuracy [39,40]. ...
Article
Full-text available
In this paper, we consider constructions of first derivative approximations using the generating function. The weights of the approximations contain the powers of a parameter whose modulus is less than one. The values of the initial weights are determined, and the convergence and order of the approximations are proved. The paper discusses applications of approximations of the first derivative for the numerical solution of ordinary and partial differential equations and proposes an algorithm for fast computation of the numerical solution. Proofs of the convergence and accuracy of the numerical solutions are presented and the performance of the numerical methods considered is compared with the Euler method. The main goal of constructing approximations for integer-order derivatives of this type is their application in deriving high-order approximations for fractional derivatives, whose weights have specific properties. The paper proposes the construction of an approximation for the fractional derivative and its application for numerically solving fractional differential equations. The theoretical results for the accuracy and order of the numerical methods are confirmed by the experimental results presented in the paper.
... This function is sometimes referred to as the incomplete gamma function (Press et al., 1989). The value of F_X(x) represents the probability that an event of at least as great a magnitude as x will occur in one time interval, while the probability of exceedance during the same time interval is given by [1 − F_X(x)]. ...
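The regularised lower incomplete gamma function P(a, x) cited here can be evaluated by its series expansion (the 'gser' route in Numerical Recipes); the test identities P(1, x) = 1 − e⁻ˣ and P(1/2, x) = erf(√x) are standard:

```python
import math

def gammp_series(a, x, eps=1e-12, itmax=200):
    """Regularised lower incomplete gamma function P(a, x) = γ(a, x)/Γ(a)
    by its series expansion, the preferred route for x < a + 1."""
    if x <= 0.0:
        return 0.0
    ap, term = a, 1.0 / a
    total = term
    for _ in range(itmax):
        ap += 1.0
        term *= x / ap                   # next term of the series
        total += term
        if abs(term) < abs(total) * eps:
            break
    # prefactor x^a * exp(-x) / Gamma(a), computed in log space for safety
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

# exceedance probability for an event of magnitude at least x: 1 - F_X(x)
exceedance = 1.0 - gammp_series(1.0, 0.5)
```

For x > a + 1 the continued-fraction representation ('gcf') converges faster; a production routine switches between the two.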
... Comparatively, the proposed PS propagator requires a one-dimensional Fourier transform and three steps of interpolation. The highest-order interpolation used in this simulation was a cubic spline with computational complexity O(n) [23]. Evidently, the linearithmic growth of the Fourier transform dominates, so that the PS propagator has a computational efficiency of O(n log₂ n). ...
Preprint
Full-text available
The propagation of wave fields and their interactions with matter are important for established and emerging fields in optical sciences. Efficient methods for predicting such behaviour have been employed routinely for coherent sources. However, most real world optical systems exhibit partial coherence, for which the present mathematical description involves high dimensional complex functions and hence poses challenges for numerical implementations. This demands significant computational resources to determine the properties of partially coherent wavefields. Here, we describe the novel Phase-Space (PS) propagator, an efficient and self-consistent technique for free space propagation of wave fields which are partially coherent in the spatial domain. The PS propagator makes use of the fact that the propagation of a wave field in free space is equivalent to a shearing of the corresponding PSD function. Computationally, this approach is simpler and the need for using different propagation methods for near and far-field regions is removed.
... Fourth-order space-centred differencing and second-order, one-sided differencing are used. This is combined with a fourth-order accurate (in time) Adams-Bashforth-Moulton (ABM) predictor-corrector sequence for time integration [24]. We introduce two functions of (Φ, ·, ·) which are defined as ...
... A batch size of 1 frame was used for training with the Adam optimiser. For each simulation, fourth-order Runge-Kutta numerical integration was used to calculate the benchmark true y from x (Press et al., 2007). ...
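The classical fourth-order Runge-Kutta integrator used for such benchmark trajectories can be sketched as follows, here applied to a harmonic oscillator written as a first-order system (an illustrative problem, not the cited authors' system):

```python
import numpy as np

def rk4(f, y0, t0, t1, n_steps):
    """Classical fourth-order Runge-Kutta integration of y' = f(t, y),
    e.g. to generate a benchmark 'true' trajectory."""
    y = np.asarray(y0, dtype=float)
    h = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# benchmark: harmonic oscillator y'' = -y as a first-order system;
# after one full period the state should return to [1, 0]
osc = lambda t, y: np.array([y[1], -y[0]])
y_end = rk4(osc, [1.0, 0.0], 0.0, 2.0 * np.pi, 200)
```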
Preprint
Predicting the response of nonlinear dynamical systems subject to random, broadband excitation is important across a range of scientific disciplines, such as structural dynamics and neuroscience. Building data-driven models requires experimental measurements of the system input and output, but it can be difficult to determine whether inaccuracies in the model stem from modelling errors or noise. This paper presents a novel method to identify the causal component of the input-output data from measurements of a system in the presence of output noise, as a function of frequency, without needing a high fidelity model. An output prediction, calculated using an available model, is optimally combined with noisy measurements of the output to predict the input to the system. The parameters of the algorithm balance the two output signals and are utilised to calculate a nonlinear coherence metric as a measure of causality. This method is applicable to a broad class of nonlinear dynamical systems. There are currently no solutions to this problem in the absence of a complete benchmark model.
... We integrate the system using a relaxation method (backward time, centred space); see Press et al. (2007). All simulations were implemented in the software Wolfram Mathematica 14.0. ...
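For the 1-D diffusion analogue, one backward-time, centred-space (BTCS) step amounts to a single tridiagonal solve per time step. A minimal numpy/scipy sketch of the scheme (illustrative only; the study above used Wolfram Mathematica):

```python
import numpy as np
from scipy.linalg import solve_banded

def btcs_step(u, D, dt, dx):
    """One backward-time, centred-space step for u_t = D u_xx,
    with fixed (Dirichlet) endpoint values."""
    n = u.size
    r = D * dt / dx**2
    # Banded matrix for (I - r * d^2/dx^2) in solve_banded's (1, 1) layout.
    ab = np.zeros((3, n))
    ab[0, 1:] = -r            # superdiagonal
    ab[1, :] = 1.0 + 2.0 * r  # diagonal
    ab[2, :-1] = -r           # subdiagonal
    # Keep the endpoints fixed (identity rows).
    ab[1, 0] = ab[1, -1] = 1.0
    ab[0, 1] = ab[2, -2] = 0.0
    return solve_banded((1, 1), ab, u)

x = np.linspace(0.0, 1.0, 101)
u = np.sin(np.pi * x)             # exact solution decays as exp(-pi^2 D t)
for _ in range(100):
    u = btcs_step(u, D=1.0, dt=1e-4, dx=x[1] - x[0])
print(u.max() < 1.0)              # the profile decays; BTCS is
                                  # unconditionally stable
```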
Preprint
Full-text available
The growth of plants is a hydromechanical phenomenon in which cells enlarge by absorbing water, while their walls expand and remodel under turgor-induced tension. In multicellular tissues, where cells are mechanically interconnected, morphogenesis results from the combined effect of local cell growths, which reflects the action of heterogeneous mechanical, physical, and chemical fields, each exerting varying degrees of nonlocal influence within the tissue. To describe this process, we propose a physical field theory of plant growth. This theory treats the tissue as a poromorphoelastic body, namely a growing poroelastic medium, where growth arises from pressure-induced deformations and osmotically-driven imbibition of the tissue. From this perspective, growing regions correspond to hydraulic sinks, leading to the possibility of complex non-local regulations, such as water competition and growth-induced water potential gradients. More generally, this work aims to establish foundations for a mechanistic, mechanical field theory of morphogenesis in plants, where growth arises from the interplay of multiple physical fields, and where biochemical regulations are integrated through specific physical parameters.
... The spectral peaks are expressible as percentages of each peak's contribution to the data variance (var%) or in decibels, dB (Pagiatakis, 1999). GVSA has many benefits and, in numerous ways and situations, outperforms the Fourier method, which is merely a special case of GVSA (Craymer, 1998); for example, when analyzing long gapped records, i.e., most natural-data records (Omerbashich, 2022, 2006; Press et al., 2007; Pagiatakis, 1999; Wells et al., 1985). By discarding unreliable data in the records, such as non-calibrated telescope observations, I also take advantage of this blindness to data gaps, a feature exclusive to the least-squares class of spectral analysis techniques. ...
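The "blindness to data gaps" of least-squares spectral analysis can be illustrated with a toy sketch: ordinary least squares of sine/cosine pairs at trial frequencies on irregularly sampled times (an illustration of the principle only, not the GVSA software itself):

```python
import numpy as np

def ls_power(t, y, freqs):
    """Least-squares spectral power: at each trial frequency, fit
    y ~ a*cos(2*pi*f*t) + b*sin(2*pi*f*t) on the (gapped) times t."""
    power = []
    for f in freqs:
        A = np.column_stack([np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        power.append(np.sum((A @ coef)**2))   # variance explained by the fit
    return np.array(power)

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 300))    # irregular, gapped sampling
y = np.sin(2*np.pi*0.1*t)                    # signal at f = 0.1
freqs = np.linspace(0.02, 0.3, 141)
p = ls_power(t, y, freqs)
print(abs(freqs[p.argmax()] - 0.1) < 0.005)  # peak found at the true frequency
```

No interpolation or gap-filling is needed: the least-squares fit is defined directly on whatever sample times exist.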
Preprint
Full-text available
Rather than as a star classically assumed to feature an elusive dynamo or a proverbial engine and impulsively alternating polarity, the Sun reveals itself in the 385.8-2.439-nHz (1-mo-13-yr) band of polar (φSun > |70°|) wind's decadal dynamics, dominated by fast (>700 km s^-1) winds, as a globally completely vibrating revolving-field magnetic alternator at work at all times. Thus North-South separation of 1994-2008 Ulysses <10 nT wind polar samplings reveals spectral signatures of an entirely >99%-significant Sun-borne global sharp Alfven resonance (AR) governed by Ps=~11-yr Schwabe global damping (equilibrium) mode northside, ~10-yr degeneration equatorially, and ~9-yr southside. AR is accompanied by a symmetrical antiresonance P(-) whose both N/S tailing harmonics P(-17) are the well-known PR=~154-day (Ps/3/3/3 to +-0.1%) Rieger period dominating solar-system (planetary) dynamics and space weather. Upward drifting low-frequency trends reveal a rigid core; undertones a core offset away from apex. Core wobble at 2.2+/-0.1-yr triggers AR. Multiple total spectral symmetries of solar activity (historical solar-cycle lengths, sunspot and calcium numbers) expose the solar alternator moderating sunspots, nanoflares and CMEs that resemble machinery sparking. Unlike a resonating motor restrained from separating its casing, the cageless Sun lacks a stator and vibrates freely, resulting in all-spin and mass release (fast winds) in an axial shake-off beyond L1. The result was verified against remote data and the experiment, replacing dynamo with magnetoalternator and advancing basic knowledge on the >100 billion trillion solar-type stars. Shannon's theory-based Gauss-Vanicek spectral analysis revolutionizes astrophysics by rigorously simulating fleet formations from a single spacecraft and physics by computing nonlinear global dynamics directly (rendering spherical approximation obsolete).
... Spherical Gauss quadratures (as coined by Ref. 41) generalize one-dimensional Gauss quadratures by approximating the integrand by Wigner D-matrix elements or spherical polynomials. Sampling points and weights are obtained from the condition of achieving a given degree L on S² or SO(3) [29, 37, 41, 76]. This yields large systems of equations that are generally hard to solve. ...
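A much simpler (if less node-efficient) alternative is the product quadrature on S², combining Gauss-Legendre nodes in cos θ with equally spaced nodes in φ; a minimal sketch of that construction (generic, not tied to the cited package):

```python
import numpy as np

def sphere_product_quadrature(n_theta, n_phi):
    """Product quadrature on the unit sphere: Gauss-Legendre in cos(theta),
    equally spaced (trapezoidal) in phi. Weights sum to 4*pi."""
    x, w = np.polynomial.legendre.leggauss(n_theta)   # nodes in cos(theta)
    theta = np.arccos(x)
    phi = 2.0 * np.pi * np.arange(n_phi) / n_phi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    W = np.repeat(w[:, None], n_phi, axis=1) * (2.0 * np.pi / n_phi)
    return T.ravel(), P.ravel(), W.ravel()

theta, phi, w = sphere_product_quadrature(8, 16)
# Low-degree spherical polynomials are integrated exactly (up to rounding):
print(abs(w.sum() - 4.0 * np.pi) < 1e-12)                  # integral of 1 is 4*pi
z = np.cos(theta)
print(abs(np.sum(w * z**2) - 4.0 * np.pi / 3.0) < 1e-12)   # integral of z^2 is 4*pi/3
```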
Preprint
Full-text available
In molecular physics, it is often necessary to average over the orientation of molecules when calculating observables, in particular when modelling experiments in the liquid or gas phase. Evaluated in terms of Euler angles, this is closely related to integration over two- or three-dimensional unit spheres, a common problem discussed in numerical analysis. The computational cost of the integration depends significantly on the quadrature method, making the selection of an appropriate method crucial for the feasibility of simulations. After reviewing several classes of spherical quadrature methods in terms of their efficiency and error distribution, we derive guidelines for choosing the best quadrature method for orientation averages and illustrate these with three examples from chiral molecule physics. While Gauss quadratures allow for achieving numerically exact integration for a wide range of applications, other methods offer advantages in specific circumstances. Our guidelines can also be applied to higher-dimensional spherical domains and other geometries. We also present a Python package providing a flexible interface to a variety of quadrature methods.
... Subsequently, after evolution time t > 3000, the superfluid regime is reestablished, albeit with some reduction in energy in the minority component due to the excitations created in the majority BEC component. Numerical simulations were performed using the split-step fast Fourier transform method [31,32] and a Runge-Kutta in the interaction picture [27]. The integration domain L ∈ [−6π, 6π] is employed to accommodate 1024 Fourier modes with a corresponding space step ∆x = 0.036. ...
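The split-step Fourier method alternates an exact nonlinear phase rotation in real space with an exact kinetic propagation in Fourier space. A minimal single-component 1-D sketch for an equation of Gross-Pitaevskii type, i ψ_t = -ψ_xx/2 + g|ψ|²ψ (a generic illustration, not the coupled two-component code of the paper):

```python
import numpy as np

def split_step(psi, x, dt, g, steps):
    """Strang-split step-split Fourier integration of
    i psi_t = -psi_xx / 2 + g |psi|^2 psi with periodic boundaries."""
    k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
    kinetic = np.exp(-0.5j * k**2 * dt)
    for _ in range(steps):
        psi = psi * np.exp(-0.5j * g * np.abs(psi)**2 * dt)  # half nonlinear
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))          # full kinetic
        psi = psi * np.exp(-0.5j * g * np.abs(psi)**2 * dt)  # half nonlinear
    return psi

x = np.linspace(-6 * np.pi, 6 * np.pi, 1024, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2) + 0j
norm0 = (np.abs(psi)**2).sum() * dx
psi = split_step(psi, x, dt=1e-3, g=1.0, steps=200)
# Each sub-step is a pure phase factor, so the norm is conserved exactly
# (up to FFT round-off):
print(abs((np.abs(psi)**2).sum() * dx - norm0) < 1e-8)
```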
Article
Full-text available
Superfluid and dissipative regimes in the dynamics of a two-component quasi-one-dimensional Bose–Einstein condensate (BEC) with unequal atom numbers in the two components have been explored. The system supports localized waves of the symbiotic type owing to the same-species repulsion and cross-species attraction. The minority BEC component moves through the majority component and creates excitations. To quantify the emerging excitations, we introduce a time-dependent function called disturbance. Through numerical simulations of the coupled Gross–Pitaevskii equations with periodic boundary conditions, we have identified a critical velocity of the localized wave, above which a transition from the superfluid to dissipative regime occurs, as evidenced by a sharp increase in the disturbance function. The factors responsible for the discrepancy between the actual critical velocity and the speed of sound, expected from theoretical arguments, have been discussed.
... Here, N_obs is the number of spectral resolution elements, covering isolated H₂O transitions, used in the fitting, F_obs and F_mod are the corresponding observed and model flux, respectively, and σ denotes the uncertainty on the flux (see Table A.1). The uncertainties on the best-fit parameters are taken from the confidence intervals, which are determined for 1σ, 2σ, and 3σ as χ²_red,min + 2.3, χ²_red,min + 6.2, and χ²_red,min + 11.8, respectively (Avni 1976; Press et al. 1992). ...
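The quoted Δχ² thresholds (2.3, 6.2, 11.8) are the standard joint confidence levels for two fitted parameters; they can be reproduced from the χ² distribution with two degrees of freedom (a cross-check with scipy, not the authors' pipeline):

```python
from scipy.stats import chi2, norm

# Two-sided probability content of 1, 2, 3 sigma for a Gaussian,
# converted to the corresponding delta-chi^2 for 2 free parameters.
for n_sigma, quoted in [(1, 2.3), (2, 6.2), (3, 11.8)]:
    p = 2.0 * norm.cdf(n_sigma) - 1.0   # 0.683, 0.954, 0.997
    delta_chi2 = chi2.ppf(p, df=2)      # = -2 ln(1 - p) for df = 2
    print(round(delta_chi2, 1) == quoted)
```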
Preprint
The MRS mode of the JWST-MIRI instrument gives insights into the chemical richness and complexity of the inner regions of planet-forming disks. Here, we analyse the H₂O-rich spectrum of the compact disk DR Tau. We probe the excitation conditions of the H₂O transitions observed in different wavelength regions across the entire spectrum using LTE slab models, probing both the rovibrational and rotational transitions. These regions suggest a radial temperature gradient, as the excitation temperature (emitting radius) decreases (increases) with increasing wavelength. To explain the derived emitting radii, we require a larger inclination for the inner disk (i~20-23 degrees) compared to the outer disk (i~5 degrees), agreeing with our previous analysis on CO. We also analyse the pure rotational spectrum (<10 micron) using a large, structured disk (CI Tau) as a template, confirming the presence of the radial gradient, and by fitting multiple components to further characterise the radial and vertical temperature gradients present in the spectrum. At least three temperature components (T~180-800 K) are required to reproduce the rotational spectrum of H₂O arising from the inner ~0.3-8 au. These components describe a radial temperature gradient that scales roughly as ~R^{-0.5} in the emitting layers. As the H₂O is mainly optically thick, we derive a lower limit on the abundance ratio of H₂O/CO~0.17, suggesting a potential depletion of H₂O. Similarly to previous work, we detect a cold H₂O component (T~180 K) originating from near the snowline. We cannot conclude if an enhancement of the H₂O reservoir is observed following radial drift. A consistent analysis of a larger sample of compact disks is necessary to study the importance of drift in enhancing the H₂O abundances.
... To obtain a bijective association between magnetospheric and ionospheric mesh cells, ionosphere mesh node coordinates are traced upwards along the fieldlines with an adaptive Euler tracing algorithm (Press et al., 1992) (compare Figure 4). Within the magnetospheric simulation domain, the magnetic field values are interpolated using the reconstruction method of Balsara (2017). ...
Preprint
Full-text available
Simulations of the coupled ionosphere-magnetosphere system are a key tool for understanding geospace and its response to space weather. For the most part, they are based on a fluid description of the plasma (magnetohydrodynamics, MHD), coupled to an electrostatic ionosphere. Kinetic approaches to modeling the global magnetosphere with a coupled ionosphere system are still a rarity. We present an ionospheric boundary model for the global near-Earth plasma simulation system Vlasiator. It complements the magnetospheric hybrid-Vlasov simulations with an inner boundary condition that solves the ionospheric potential based on field-aligned currents and plasma quantities from the magnetospheric domain. This new ionospheric module solves the ionospheric potential in a height-integrated approach on an unstructured grid and couples back to the hybrid-kinetic simulation by mapping the resulting electric field to the magnetosphere's inner boundary. The solver is benchmarked against a set of well-established analytic reference cases, and we discuss the benefits of a spherical Fibonacci mesh for use in ionospheric modeling. Preliminary results from coupled global magnetospheric-ionospheric simulations are presented, showing formation of both Region 1 and Region 2 current systems.
... In circuit theory, signal processing and mechanical systems, it is useful to express a first-order nonlinear system as the following autonomous nonlinear system [21-23, 88] ...
Article
Full-text available
Recently, the sinusoidal output response in power series (SORPS) formalism was presented for system identification and simulation. Based on the concept of characteristic curves (CCs), it establishes a mathematical connection between power series and Fourier series for a first-order nonlinear system (Gonzalez in Sci Rep 13:1955, 2023). However, the system identification procedure discussed there, based on fast Fourier transform (FFT), presents the limitations of requiring a sinusoidal single tone for the dynamical variable and equally spaced time steps for the input–output dataset (DS). These limitations are here addressed by introducing a different approach: we use a power series-based model (referred to as model 1) for system modeling instead of FFT, where two hyperparameters Â₀ and Â₁ are optimally defined depending on the DS. Subsequently, two additional models are obtained from parameters obtained in model 1: another power series-based model (model 2) and a Fourier analysis-based model (model 3). These models are useful for comparing parameters obtained from different DSs. Through an illustrative example, we show that while the predicted values from the models are the same due to a mathematical equivalence, the parameters obtained for each model vary to a greater or lesser extent depending on the DS used for system estimation. Hence, the parameters of the Fourier analysis-based model exhibit notably less variation compared to those of the power series-based model, highlighting the reliability of using the Fourier analysis-based model for comparing model parameters obtained from different DSs. Overall, this work expands the applicability of the SORPS formalism to system identification from arbitrary input–output data and represents a groundbreaking contribution relying on the concept of CCs, which can be straightforwardly applied to higher-order nonlinear systems.
The method of CCs can be considered complementary to the commonly used approaches (such as NARMAX models and sparse regression techniques) that emphasize the estimation of the individual parameter values of the model. Instead, CCs-based methods emphasize the computation of the CCs as a whole. CCs-based models present the advantages that the system identification is uniquely defined and that it can be applied to any system without additional algebraic operations. Thus, the parsimonious principle defined by the NARMAX philosophy is extended from the concept of a model with as few parameters as possible to the concept of finding the lowest model order that correctly describes the input–output data. This opens up a wide variety of potential applications, covering areas such as vibration analysis, structural dynamics, viscoelastic materials, design and modeling of nonlinear electric circuits, voltammetry techniques in electrochemistry, structural health monitoring, and fault diagnosis.
... For the minimization of Eq. (29) we use the Levenberg-Marquardt (LM) algorithm as implemented in the LMFIT package in Python. We have also performed Monte-Carlo (MC) simulations of the synthetic data sets (as described in Ref. 106, Chapter 15.6) using 300 to 1000 sets. ...
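The Monte-Carlo procedure of Chapter 15.6 — refitting many synthetic data sets generated from the best-fit model to map out the parameter scatter — can be sketched generically. The example below uses scipy's Levenberg-Marquardt driver (`curve_fit`) rather than LMFIT, and an invented exponential-decay model purely for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, tau):
    return a * np.exp(-t / tau)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 50)
sigma = 0.05

# Fit the "measured" data once (curve_fit defaults to Levenberg-Marquardt
# for unbounded problems) ...
y_obs = model(t, 2.0, 1.5) + rng.normal(0.0, sigma, t.size)
popt, _ = curve_fit(model, t, y_obs, p0=[1.0, 1.0])

# ... then refit 300 synthetic data sets drawn around the best fit
# to estimate the scatter of the parameters.
fits = []
for _ in range(300):
    y_syn = model(t, *popt) + rng.normal(0.0, sigma, t.size)
    p_syn, _ = curve_fit(model, t, y_syn, p0=popt)
    fits.append(p_syn)
a_std, tau_std = np.std(fits, axis=0)

print(abs(popt[0] - 2.0) < 0.1)        # amplitude recovered
print(a_std < 0.1 and tau_std < 0.2)   # MC scatter is small
```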
Article
Full-text available
Interest in the Rosenzweig–Porter model, a parameter-dependent random-matrix model which interpolates between Poisson and Wigner–Dyson (WD) statistics describing the fluctuation properties of the eigenstates of typical quantum systems with regular and chaotic classical dynamics, respectively, has come up again in recent years in the field of many-body quantum chaos. The reason is that the model exhibits parameter ranges in which the eigenvectors are Anderson-localized, non-ergodic (fractal) and ergodic extended, respectively. The central question is how these phases and their transitions can be distinguished through properties of the eigenvalues and eigenvectors. We present numerical results for all symmetry classes of Dyson’s threefold way. We analyzed the fluctuation properties in the eigenvalue spectra, and compared them with existing and new analytical results. Based on these results we propose characteristics of the short- and long-range correlations as measures to explore the transition from Poisson to WD statistics. Furthermore, we performed in-depth studies of the properties of the eigenvectors in terms of the fractal dimensions, the Kullback–Leibler (KL) divergences and the fidelity susceptibility. The ergodic and Anderson transitions take place at the same parameter values and a finite size scaling analysis of the KL divergences at the transitions yields the same critical exponents for all three WD classes, thus indicating superuniversality of these transitions.
... which is solved for E_min by using a secant method [33]. The normalization factor N_0 is determined by substituting E_min into (20). ...
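The secant method replaces the derivative in Newton's method with a finite difference through the last two iterates; a generic sketch (not tied to the equation solved in the paper):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method for f(x) = 0: each step moves to the root of the
    secant line through the two most recent iterates."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:          # flat secant; cannot continue
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

# Example: the positive root of x^2 - 2 = 0.
root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
print(abs(root - 2.0 ** 0.5) < 1e-10)
```

Convergence is superlinear (order ≈ 1.618), at the cost of needing two starting points instead of one derivative evaluation per step.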
Article
Full-text available
The Fermi bubbles and the eROSITA bubbles around the Milky Way Galaxy are speculated to be the aftermaths of past jet eruptions from a supermassive black hole in the galactic center. In this work, a 2.5D axisymmetric relativistic magnetohydrodynamic (RMHD) model is applied to simulate a jet eruption from our galactic center and to reconstruct the observed Fermi bubbles and eROSITA bubbles. High-energy non-thermal electrons are excited around forward shock and discontinuity transition regions in the simulated plasma distributions. The γ-ray and X-ray emissions from these electrons manifest patterns on the skymap that match the observed Fermi bubbles and eROSITA bubbles, respectively, in shape, size and radiation intensity. The influence of the background magnetic field, initial mass distribution in the Galaxy, and the jet parameters on the plasma distributions and hence these bubbles is analyzed. Subtle effects on the evolution of plasma distributions attributed to the adoption of a galactic disk model versus a spiral-arm model are also studied.
... For general matrices, several research papers propose methods to obtain their inverses [14-16]. Other algorithms focus on special types of matrices, such as positive definite [17], tridiagonal [18], and diagonal [19] matrices. ...
Article
Full-text available
In the literature, several research papers are devoted to the generation of non-singular matrices with coefficients in a finite field. One of these papers refers to the generation of such matrices through the multiplication of polynomials modulo a primitive polynomial. However, the complexity bound given for the algorithm is not accurate. Thus, in this paper, we conduct a new analysis of its complexity. We also remove the restriction of using a primitive polynomial to generate the matrix, by using an arbitrary monic polynomial over a finite field whose constant term is distinct from zero.
... where the subscript "b" refers to "at boundary", k² = 4 s_b s / [(s_b + s)² + (z_b − z)²], and K(k) = ∫₀^{π/2} (1 − k² sin²α)^{−1/2} dα is the complete elliptic integral of the first kind [18-20]. This integral approach ensures that the boundary conditions accurately reflect the physical distribution of charge, yielding more accurate solutions within the simulation domain when combined with the SOR method. ...
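The complete elliptic integral of the first kind is available in scipy; a common pitfall is that `scipy.special.ellipk` takes the parameter m = k², not the modulus k itself. A quick cross-check against direct quadrature:

```python
import numpy as np
from scipy.special import ellipk
from scipy.integrate import quad

# K(k) = integral from 0 to pi/2 of dalpha / sqrt(1 - k^2 sin^2(alpha));
# scipy's ellipk expects the parameter m = k^2.
k = 0.6
K_scipy = ellipk(k**2)
K_quad, _ = quad(lambda a: 1.0 / np.sqrt(1.0 - k**2 * np.sin(a)**2),
                 0.0, np.pi / 2.0)

print(abs(K_scipy - K_quad) < 1e-10)
print(abs(ellipk(0.0) - np.pi / 2.0) < 1e-12)   # K(0) = pi/2
```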
Article
Full-text available
Poisson’s equation frequently emerges in many fields, yet its exact solution is rarely feasible, making the numerical approach notably valuable. This study aims to provide a tutorial-level guide to numerically solving Poisson’s equation, focusing on estimating the electrostatic field and potential resulting from an axially symmetric Gaussian charge distribution. The Finite Difference Method is utilized to discretize the desired spatial domain into a grid of points and approximate the derivatives using finite difference approximations. The resulting system of linear equations is then tackled using the Successive Over-Relaxation technique. Our results suggest that the potential obtained from the direct integration of the distance-weighted charge density is a reasonable choice for Dirichlet boundary conditions. We examine a scenario involving a charge in free space; the numerical electrostatic potential is estimated to be within a tolerable error range compared to the exact solution.
... DeMars et al. [27] propose using trapezoidal integration to perform the time integration and methods like the Newton-Cotes quadrature [29] to perform the spherical integration. This reduces computational expense while ensuring user-defined accuracy. ...
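The accuracy trade-off between the trapezoidal rule and a higher-order closed Newton-Cotes rule (Simpson) is easy to demonstrate on a smooth integrand; a generic sketch, unrelated to the orbital-mechanics code above:

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n uniform panels; error O(h^2)."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def simpson(f, a, b, n):
    """Composite Simpson rule (closed Newton-Cotes, 3 points per panel);
    n must be even; error O(h^4)."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3.0 * (y[0] + 4.0 * y[1:-1:2].sum()
                      + 2.0 * y[2:-2:2].sum() + y[-1])

exact = 2.0   # integral of sin over [0, pi]
print(abs(trapezoid(np.sin, 0.0, np.pi, 64) - exact) < 1e-3)
print(abs(simpson(np.sin, 0.0, np.pi, 64) - exact) < 1e-6)  # higher order
```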
Article
In this research, several existing semi-analytical dynamics and uncertainty propagation techniques are combined with conjunction models to achieve fast and accurate probability of collision results. Initial Gaussian uncertainty associated with two objects is split into smaller Gaussian distributions using Gaussian mixture models to achieve mixture components that will maintain linearity over longer propagation times. These mixture components are propagated forward using second-order state transition tensors that can capture the nonlinearity in the propagation accurately by taking into account the desired dynamics. The dynamic solution and these tensors are computed using the Deprit–Lie averaging approach, including transformations between mean and osculating states, which accounts for perturbations due to solar radiation pressure and [Formula: see text]. This simplified dynamic system allows fast and accurate propagation by combining the speed of propagation with averaged dynamics and the accuracy of short-period variation addition. Combined, these mathematical tools are used to propagate the object’s uncertainties forward. The final distributions are compared using analytical conjunction methods to compute the probability of collision, which is then compared to the Monte Carlo result to confirm the method’s validity.
... Equation 12 incorporates the total all-component net molar flow rate (eq S6) at each stage and the LLE constraints (eq S11) between the aqueous and organic streams leaving each stage, which are calculated using eqs 9 and 10 (Figure 4a) and T, using the hybrj function implemented in scipy [41,68]. In MWE, the organic stream entering the liquid−liquid contactor contains a small amount of water, which is calculated by numerically solving a mass balance on water in the solvent stream entering the liquid−liquid contactor (stream 10 in Figure 1). The inlet organic-stream molar flow rate ...
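For context, MINPACK's hybrj (Powell's hybrid method with an analytic Jacobian) is exposed in modern scipy through `scipy.optimize.root` with `method='hybr'`. A generic sketch on a small invented nonlinear system (not the flowsheet equations above):

```python
import numpy as np
from scipy.optimize import root

# Solve the 2x2 system: x^2 + y^2 = 1 and x = y^3.
# With method='hybr' and an analytic Jacobian supplied via jac=,
# scipy dispatches to MINPACK's hybrj routine.
def residual(v):
    x, y = v
    return [x**2 + y**2 - 1.0, x - y**3]

def jacobian(v):
    x, y = v
    return [[2.0 * x, 2.0 * y],
            [1.0, -3.0 * y**2]]

sol = root(residual, x0=[0.5, 0.5], jac=jacobian, method="hybr")
print(sol.success and np.allclose(residual(sol.x), 0.0, atol=1e-8))
```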
... The topology files for the simulations were generated using AmberTools22 [68]. For each system, energy minimization using the steepest descent method [69] was performed first, followed by equilibration for 2 ns under NPT conditions. Convergence of the minimization was declared when the maximum force became smaller than 1000 kJ mol⁻¹ nm⁻¹. ...
Article
Full-text available
Vascular endothelial growth factor 165 (VEGF165) is a prominent isoform of the VEGF-A protein that plays a crucial role in various angiogenesis-related diseases. It is homodimeric, and each of its monomers is composed of two domains connected by a flexible linker. DNA aptamers, which have emerged as potent therapeutic molecules for many proteins with high specificity and affinity, can also work for VEGF165. A DNA aptamer heterodimer composed of monomers of V7t1 and del5-1 connected by a flexible linker (V7t1:del5-1) exhibits a greater binding affinity with VEGF165 compared to either of the two monomers alone. Although the structure of the complex formed between the aptamer heterodimer and VEGF165 is unknown due to the highly flexible linkers, gaining structural information will still be valuable for future developments. Toward this end, we adopt an ensemble docking approach here. We first obtain an ensemble of structures for both VEGF165 and the aptamer heterodimer by considering both small- and large-scale motions. We then proceed through an extraction process based on ensemble docking, molecular dynamics simulations, and binding free energy calculations to predict the structures of the VEGF165/V7t1:del5-1 complex. Through the same procedures, we reach a new aptamer heterodimer that bears a locked nucleic acid-modified counterpart of V7t1, namely RNV66:del5-1, which also binds well with VEGF165. We apply the same protocol to the monomeric units V7t1, RNV66, and del5-1 to target VEGF165. We observe that V7t1:del5-1 and RNV66:del5-1 show higher binding affinities with VEGF165 than any of the monomers, consistent with experiments that support the notion that aptamer heterodimers are more effective anti-VEGF165 aptamers than monomeric aptamers. Among the five different aptamers studied here, the newly designed RNV66:del5-1 shows the highest binding affinity with VEGF165.
We expect that our ensemble docking approach can help in the de novo design of homo/heterodimeric anti-angiogenic drugs targeting the homodimeric VEGF165.
... SH is a complete set of orthogonal functions defined on the surface of a sphere. The coordinates of the vertices on a particle surface can be represented by an SH expansion as follows (Press et al. 1992): (x(θ, ϕ), y(θ, ϕ), z(θ, ϕ))ᵀ ...
... The agglomerative group-average clustering method ("Arithmetic Average Clustering", UPGMA), recommended to reduce distortion of the original matrix during dendrogram construction, was also used (Valentin, 2000). The cophenetic correlation coefficient, which evaluates the degree of deformation introduced by the construction of the dendrogram, was used to validate the dendrogram against the original data matrix (McGarigal et al., 2000). To evaluate the effect of monotonic relationships (based on one descriptor variable at a time) between the descriptor variables, Spearman's rank correlation coefficient (rs) was applied to the following variables: month of the year, depth, temperature, dissolved oxygen, and salinity (Press et al., 1992). The variables considered were standardized by the mean of the analysed variable (mean = 0 and standard deviation = 1). ...
Research
Full-text available
This manuscript is the result of my master's thesis in Ecology. I worked with data from elasmobranch fishing in the Arraial do Cabo Marine Extractive Reserve - Resexmar-AC in the years 1985-1988. I extracted information on bathymetry, sexual maturation, and spatial segregation related to temperature, salinity, and dissolved oxygen.
... Direct measurements were recorded (between 10 a.m. and 12:00 noon) on 3 randomly chosen leaves of each crop for each variety studied. Before calculating the indices, the Savitzky-Golay filter (Savitzky and Golay, 1964; Press et al., 2007) was applied; it is a flexible polynomial algorithm widely used to remove noise and smooth data. The indices were determined for each data set and then averaged, after which the root mean square deviation (RMSE) was determined according to the formula RMSE = √( Σᵢ (x̄ − xᵢ)² / n ), where x̄ is the mean and n is the number of data points. For the calculation of the reflectance indices, the following optical parameters were taken into account: 720 nm for the red edge, 560 nm for the maximum of the green region, 520 nm for the blue-green edge, and 800 nm for the NIR reflectance plateau. ...
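The Savitzky-Golay filter referred to above — a local polynomial least-squares fit in a sliding window — is available in scipy; a minimal smoothing sketch on invented noisy data (illustrative, not the study's processing pipeline):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
clean = np.sin(2.0 * np.pi * x)
noisy = clean + rng.normal(0.0, 0.1, x.size)

# Fit a cubic polynomial in a sliding 21-point window.
smooth = savgol_filter(noisy, window_length=21, polyorder=3)

rmse_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_smooth = np.sqrt(np.mean((smooth - clean) ** 2))
print(rmse_smooth < rmse_noisy)   # smoothing reduces the noise level
```

Unlike a plain moving average, the polynomial fit preserves peak heights and widths of slowly varying features, which is why it is popular for spectral data.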
Article
Full-text available
For this research, the spectral reflectance indices of pigments (chlorophyll, anthocyanin, and carotenoids) contained in the leaves of 6 varieties of Andean crops registered at the Instituto Nacional de Innovación Agraria (INIA) of Ayacucho, Peru, were studied: white-grain maize (MB) INIA 620 Wari and purple-grain-and-cob maize INIA 615 Negro Canaán (MM) (Zea mays); white potato tubers (PB) of the Yungay variety and red potato tubers (PR) INIA 316 Roja Ayacuchana (Solanum tuberosum); and white-grain quinoa (QB) of the Blanca de Junín variety and red-grain quinoa (QR) INIA 620 Pasankalla (Chenopodium quinoa). The indices were determined from spectral reflectance data R(λ) between 350 and 2500 nm, obtained with an ASD FieldSpec 4 spectroradiometer between February 17 and March 9, 2020, a period divided into three well-defined stages (initial, critical, and final). Direct measurements of reflectance R(λ) in the visible region showed a greater presence of anthocyanins in red quinoa (QR) than in the other crops. The 4 chlorophyll indices calculated (SR, NDCI, ChlRE, Chlgreen) show the same downward behaviour for each crop studied, so any of them can be used to quantify chlorophyll content. Red quinoa, unlike the others, showed an upward trend in the last measurement. For anthocyanins and carotenoids, the indices used also show the same behaviour in each crop, that is, a tendency to decrease or stabilize, as in QB, QR, and PR. In the case of the carotenoid/chlorophyll ratio index (Car/Chl), the same trend does not hold for every crop; however, the CClHE index is the one that fits best across the 6 crops, as it is the most stationary for all of them. Nevertheless, it is advisable to validate its use for each crop.
... Since the temperature and water vapor pressure provided by ERA5 are inconsistent with the spatial and temporal resolution of the tomographic results, a Gaussian distance weighting function in the horizontal direction and an exponential function in the vertical direction are used to interpolate ERA5 to be consistent with them. In time, the temperature and water vapor partial pressure of ERA5 can be interpolated by a Chebyshev function of order 9 (Press et al. 1992), which can achieve a time resolution consistent with the tomography results. [Figure: wet refractivity obtained from tomography-derived and radiosonde-derived data, where RS is the wet refractivity derived using radiosonde products, Trad using the traditional tomography method, and Opti using the optimized method.] ...
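A degree-9 Chebyshev fit for time interpolation can be sketched with numpy's polynomial module; the hourly "observations" below are made up for illustration and are not the authors' data or code:

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Hourly samples over one day of a smooth temperature-like signal.
t = np.linspace(0.0, 24.0, 25)
y = 15.0 + 5.0 * np.sin(2.0 * np.pi * (t - 8.0) / 24.0)

# Chebyshev.fit maps [0, 24] to [-1, 1] internally, avoiding the severe
# ill-conditioning of raw polynomial fits on wide domains.
cheb = Chebyshev.fit(t, y, deg=9)

# Evaluate on a 10x finer time grid for the higher-resolution series.
t_fine = np.linspace(0.0, 24.0, 241)
y_fine = cheb(t_fine)

# The fit reproduces the original samples closely.
print(np.max(np.abs(cheb(t) - y)) < 1e-3)
```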
Article
Full-text available
Monitoring the urban heat island (UHI) effect is critical because it causes health problems and excessive energy consumption for cooling buildings. In this study, we propose an approach for UHI monitoring by fusing data from ground-based global navigation satellite system (GNSS), space-based GNSS radio occultation (RO), and radiosonde observations. The idea of the approach is as follows: First, the first and second grid tops are defined based on historical RO and radiosonde observations. Next, the wet refractivities between the first and second grid tops are fitted to higher-order spherical harmonics and used as the inputs of GNSS tomography. Then, the temperature and water vapor partial pressure are estimated using a best-search method based on the tomography-derived wet refractivity. In the end, the UHI intensity is evaluated by calculating the temperature difference between urban regions and nearby rural regions. Feasibility of the UHI intensity monitoring approach was evaluated with GNSS RO and radiosonde data from 2010-2019, as well as ground-based GNSS data in 2020, in Hong Kong, China, taking synoptic temperature data as reference. The result shows that the proposed approach achieved an accuracy of 1.2 K at a 95% confidence level.
Article
Full-text available
In this paper, we give several limit-case Euler–Abel type transforms for alternating cosine and sine series. Using a property of the generalized-difference operator applied in the transforms, we derive transforms for nonalternating series that are stronger than the similar transforms for alternating series given earlier. MSC2020 Classification: Primary 65B10; Secondary 42A38, 40A25, 40A05
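As background for the transforms discussed above, the classical Euler transform of an alternating series can be sketched in a few lines; this is the textbook transform applied to 1 - 1/2 + 1/3 - ... = ln 2, not the limit-case Euler–Abel variants developed in the paper:

```python
import math

# Classical Euler transform: sum_{k>=0} (-1)^k a_k  ->  sum_{n>=0} (-1)^n (D^n a)_0 / 2^(n+1),
# where D is the forward-difference operator D a_k = a_{k+1} - a_k.
def euler_transform_sum(a, n_terms):
    diffs = list(a)                  # current row of forward differences
    total = 0.0
    for n in range(n_terms):
        total += (-1) ** n * diffs[0] / 2 ** (n + 1)
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
    return total

a = [1.0 / (k + 1) for k in range(30)]       # terms of the alternating harmonic series
approx = euler_transform_sum(a, 20)
print(abs(approx - math.log(2)))             # error far below the ~0.025 of the raw partial sum
```

Twenty transformed terms reach an error below 1e-6, whereas the raw 20-term partial sum of the alternating harmonic series is only accurate to about 0.025, which illustrates why such transforms are useful for slowly convergent series.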
Article
Full-text available
Objective This article studies how misreporting errors in crime surveys affect our understanding of the Dark Figure of Crime (DFC). Methods The paper adopts a Partial Identification framework which relies on assumptions that are weaker (and thus more credible) than those required by parametric models. Unlike common parametric models, Partial Identification handles both under-reporting and over-reporting of crimes (due to, say, stigma, memory errors or misunderstanding of upsetting events). We apply this framework to the Crime Survey for England and Wales to characterise the uncertainty surrounding crimes by severity and geographic region. Results Depending on the assumptions considered, the partial identification regions for the DFC vary from [0.000, 0.774] to [0.351, 0.411]. A credible estimate places the true DFC in [0.31, 0.51]. This range was obtained while allowing for a substantial amount of reporting error (25%) and assuming that people do not over-report crimes in surveys (i.e., do not erroneously or falsely report being a victim of crime). Across regions, uncertainty is larger in the north of England. Conclusion Accounting for misreporting introduces uncertainty about the actual magnitude of the DFC. This uncertainty is contingent on the unknown proportion of misreported crimes in the survey. When this proportion is modest (10% or below), raw survey estimates offer valuable insights, albeit with lingering uncertainty. However, researchers may want to opt for Partial Identification regions based on larger misreported proportions when examining relatively infrequent crimes that carry substantial stigma, such as sexual crimes or domestic violence. The width of the partial identification regions in this paper fluctuates among different regions of England and Wales, indicating varying levels of uncertainty surrounding the DFC in distinct localities.
Consequently, previous research that relies on parametric assumptions to produce single point estimates should be re-evaluated in light of the findings presented here.
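The logic of such identification regions can be sketched with a simple Horowitz–Manski-style worst-case bound: if at most a fraction `m` of responses may be misreported, the true rate can deviate from the reported rate by at most `m` in either direction, and ruling out over-reporting removes the downward deviation. This is a simplified illustration, not the paper's estimator, and the rates below are made up:

```python
# Worst-case bounds on a true rate given a reported rate and a cap on the
# fraction of misreported responses. With no over-reporting allowed, errors
# can only hide true cases, so the reported rate becomes a lower bound.
def partial_id_bounds(reported_rate, max_misreport, allow_over_reporting=True):
    lower = reported_rate - max_misreport if allow_over_reporting else reported_rate
    upper = reported_rate + max_misreport
    return max(0.0, lower), min(1.0, upper)

# Illustrative: reported rate 0.40, up to 10% misreporting, no over-reporting
lo, up = partial_id_bounds(0.40, 0.10, allow_over_reporting=False)
print(lo, up)
```

Note how the width of the region grows directly with the assumed misreporting cap, which mirrors the paper's point that wider (more cautious) regions are appropriate for stigmatised, infrequently reported crimes.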
Article
Full-text available
This work examines swipe-based interactions on smart devices, such as smartphones and smartwatches, that detect vibration signals through defined swipe surfaces. We investigate how these devices, held in users’ hands or worn on their wrists, process vibration signals from swipe interactions and ambient noise using a support vector machine (SVM). The work details the signal processing workflow involving filters, sliding windows, feature vectors, SVM kernels, and ambient noise management. This includes how we separate the vibration signal of a potential swipe surface from ambient noise. We explore both software and human factors influencing the signals: the former includes the computational techniques mentioned, while the latter encompasses swipe orientation, contact, and movement. Our findings show that the SVM classifies swipe surface signals with an accuracy of 69.61% when both devices are used, 97.59% with only the smartphone, and 99.79% with only the smartwatch. However, the classification accuracy drops to about 50% in field user studies simulating real-world conditions such as phone calls, typing, walking, and other undirected movements throughout the day. The decline in performance under these conditions suggests challenges in ambient noise discrimination, which this work discusses, along with potential strategies for improvement in future research.
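The window-features-SVM pipeline described above can be sketched as follows; the synthetic "swipe" and "noise" windows, the two hand-picked features (RMS energy and zero-crossing rate), and the kernel choice are illustrative assumptions, not the paper's actual signal model:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Reduce each fixed-length vibration window to a small feature vector
def features(window):
    rms = np.sqrt(np.mean(window ** 2))                      # signal energy
    zcr = np.mean(np.abs(np.diff(np.sign(window))) > 0)      # zero-crossing rate
    return np.array([rms, zcr])

# Synthetic windows: swipes as oscillatory bursts, noise as low-level Gaussian
swipes = [np.sin(np.linspace(0, 40 * np.pi, 256)) * rng.uniform(0.5, 1.0)
          + 0.05 * rng.standard_normal(256) for _ in range(100)]
noise = [0.1 * rng.standard_normal(256) for _ in range(100)]

X = np.array([features(w) for w in swipes + noise])
y = np.array([1] * 100 + [0] * 100)                          # 1 = swipe, 0 = noise

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print(f"held-out accuracy: {clf.score(Xte, yte):.2f}")
```

On cleanly separable synthetic data the classifier is near-perfect; the paper's drop to ~50% in the field reflects exactly the regime where ambient noise stops being this easy to distinguish.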
Preprint
Full-text available
Amblyopia is a developmental disorder that results from abnormal visual experience in early life. Amblyopia typically reduces visual performance in one eye. We studied the representation of visual motion information in area MT and nearby extrastriate visual areas in two monkeys made amblyopic by creating an artificial strabismus in early life, and in a single age-matched control monkey. Tested monocularly, cortical responses to moving dot patterns, gratings, and plaids were qualitatively normal in awake, fixating amblyopic monkeys, with primarily subtle differences between the eyes. However, the number of binocularly driven neurons was substantially lower than normal; of the neurons driven predominantly by one eye, the great majority responded only to stimuli presented to the fellow eye. The small population driven by the amblyopic eye showed reduced coherence sensitivity and a preference for faster speeds, mirroring the behavioral deficits. We conclude that, while we do find important differences between neurons driven by the two eyes, amblyopia does not lead to a large-scale reorganization of visual receptive fields in the dorsal stream when tested through the amblyopic eye, but rather creates a substantial shift in eye preference toward the fellow eye.
Article
Statistical hypothesis testing assumes that the samples being analyzed are statistically independent, meaning that the occurrence of one sample does not affect the probability of the occurrence of another. In reality, however, this assumption may not always hold. When samples are not independent, it is important to consider their interdependence when interpreting the results of the hypothesis test. In this study, we address the issue of sample dependence in hypothesis testing by introducing the concept of adjusted sample size. This adjusted sample size provides additional information about the test results, which is particularly useful when samples exhibit dependence. To determine the adjusted sample size, we use network theory to quantify sample dependence and model the variance of network density as a function of sample size. Our approach involves estimating the adjusted sample size by analyzing the variance of the network density, which reflects the degree of sample dependence. Through simulations, we demonstrate that dependent samples yield a higher variance in network density compared to independent samples, validating our method for estimating the adjusted sample size. Furthermore, we apply our proposed method to genomic datasets, estimating the adjusted sample size to effectively account for sample dependence in hypothesis testing. This guides the interpretation of test results and supports more accurate data analysis.
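The intuition that correlated samples carry less information than their nominal count can be illustrated with the standard design-effect adjustment for an effective sample size; this is a textbook formula, not the paper's network-density estimator, and the numbers are illustrative:

```python
# Effective sample size under a constant average pairwise correlation:
# n_eff = n / (1 + (n - 1) * rho_bar). With rho_bar = 0 this recovers n;
# as rho_bar -> 1 it collapses toward a single effective observation.
def effective_sample_size(n, mean_corr):
    return n / (1.0 + (n - 1) * mean_corr)

# 100 samples with average pairwise correlation 0.2 behave roughly like
# 5 independent samples under this adjustment.
print(round(effective_sample_size(100, 0.2), 1))   # → 4.8
```

Plugging an adjusted sample size of this kind into a test statistic widens confidence intervals and deflates significance, which is the qualitative effect the paper's adjusted sample size is designed to capture.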
Chapter
This chapter presents the basic elements of sensitivity analysis (SA), with an emphasis on their use in life cycle assessment. We discuss topics such as local and global SA, one-at-a-time and all-at-a-time SA, uncertainty apportioning, and the use of scenarios for addressing sensitivity.
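The one-at-a-time (OAT) approach mentioned above can be sketched in a few lines: perturb each input around a baseline while holding the others fixed, and record the change in the model output. The toy model and baseline values are illustrative, not from the chapter:

```python
# Toy model standing in for a life-cycle impact function of three parameters
def model(x):
    a, b, c = x
    return a * b + c ** 2

baseline = [2.0, 3.0, 1.0]
base_out = model(baseline)          # 2*3 + 1^2 = 7.0

# One-at-a-time: vary each input by ±10%, keeping the others at baseline
for i, name in enumerate(["a", "b", "c"]):
    for delta in (-0.1, 0.1):
        x = list(baseline)
        x[i] *= 1.0 + delta
        print(f"{name} {delta:+.0%}: output change {model(x) - base_out:+.3f}")
```

Here a and b each shift the output by ±0.6 while c shifts it by only about ∓0.19/±0.21, so an OAT screen would flag a and b as the more influential parameters at this baseline; an all-at-a-time (global) analysis would additionally probe interactions that OAT misses.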
Article
Full-text available
Discovering nonlinear differential equations that describe system dynamics from empirical data is a fundamental challenge in contemporary science. While current methods can identify such equations, they often require extensive manual hyperparameter tuning, limiting their applicability. Here, we propose a methodology to identify dynamical laws by integrating denoising techniques to smooth the signal, sparse regression to identify the relevant parameters, and bootstrap confidence intervals to quantify the uncertainty of the estimates. We evaluate our method on well-known ordinary differential equations with an ensemble of random initial conditions, time series of increasing length, and varying signal-to-noise ratios. Our algorithm consistently identifies three-dimensional systems, given moderately-sized time series and high levels of signal quality relative to background noise. By accurately discovering dynamical systems automatically, our methodology has the potential to impact the understanding of complex systems, especially in fields where data are abundant, but developing mathematical models demands considerable effort.
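The sparse-regression step at the core of such methods can be sketched with sequential thresholded least squares in the spirit of SINDy-style identification; the candidate library, the threshold, and the noiseless toy system dx/dt = -2x + 3y below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = rng.uniform(-1, 1, 200)
dxdt = -2.0 * x + 3.0 * y                     # "measured" derivative (noiseless)

# Candidate library of terms: [1, x, y, x^2, x*y, y^2]
Theta = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])

# Sequential thresholded least squares: fit, zero out small coefficients,
# refit on the surviving terms, repeat
coef = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
for _ in range(10):
    small = np.abs(coef) < 0.1                # hard sparsity threshold
    coef[small] = 0.0
    big = ~small
    coef[big] = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)[0]

print(np.round(coef, 3))                      # ≈ [0, -2, 3, 0, 0, 0]
```

The regression recovers exactly the two active terms of the toy system. The denoising and bootstrap steps described in the abstract address what this sketch ignores: with real, noisy derivatives the threshold choice matters, and confidence intervals quantify how stable the recovered coefficients are.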