Article

The Fourier Transform & Its Applications


... Due to the relationships between the intensity image I(x, y) in Eq. (1), the PSF h(x, y) in Eq. (2), and the Zernike coefficients a_q in Eq. (7), the Zernike coefficients a_q correspond to a particular intensity result and can thus be used to parameterize and model the associated PSF of an aberrated imaging system. ...
... As has been pointed out in Section 1.2, there is an ambiguity associated with predicting the Zernike coefficients from intensity images. In particular, we have found via symmetry properties of the Fourier transform [2] (details in Appendix) that angularly even Zernike polynomials, which correspond to even values of radial order n and even values of angular frequency m, generate the same PSF and thus the same intensity image for oppositely signed Zernike coefficients. In other words, the signs of these coefficients are immaterial for the description of the intensity PSF. ...
... In order to explicitly show the ambiguity associated with predicting Zernike coefficients from intensity images, we utilize the symmetry properties of the Fourier transform [2]. We begin by writing an expression for the point spread function h(x, y) in terms of the Fourier transform as ...
Preprint
Full-text available
Recovering the turbulence-degraded point spread function from a single intensity image is important for a variety of imaging applications. Here, a deep learning model based on a convolution neural network is applied to intensity images to predict a modified set of Zernike polynomial coefficients corresponding to wavefront aberrations in the pupil due to turbulence. The modified set assigns an absolute value to coefficients of even radial orders due to a sign ambiguity associated with this problem and is shown to be sufficient for specifying the intensity point spread function. Simulated image data of a point object and simple extended objects over a range of turbulence and detection noise levels are created for the learning model. The MSE results for the learning model show that the best prediction is found when observing a point object, but it is possible to recover a useful set of modified Zernike coefficients from an extended object image that is subject to detection noise and turbulence.
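The sign ambiguity described above follows from Fourier symmetry and is easy to check numerically. The sketch below is our own construction, not the paper's code; the grid size, aperture, and coefficient value 0.7 are arbitrary choices. It compares intensity PSFs for oppositely signed defocus, an angularly even Zernike term:

```python
import numpy as np

# Our sketch (not the paper's code): for an angularly even aberration,
# oppositely signed Zernike coefficients give the same intensity PSF.
# Grid size, aperture, and the coefficient value 0.7 are arbitrary.
N = 128
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
rho = np.hypot(X, Y)

pupil = (rho <= 1.0).astype(float)     # circular aperture
defocus = 2.0 * rho ** 2 - 1.0         # Zernike Z(n=2, m=0), angularly even

def intensity_psf(coeff):
    field = pupil * np.exp(1j * coeff * defocus)
    return np.abs(np.fft.fft2(field)) ** 2

print(np.allclose(intensity_psf(0.7), intensity_psf(-0.7)))   # True
```

Because the pupil phase is even, flipping its sign only point-reflects the complex amplitude PSF, leaving the intensity unchanged; an odd aberration such as coma would not have this property.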
... In order to begin our discussion of the new computational method for the Fourier transform of functions, we define the Fourier transform of f(t) in non-unitary angular-frequency form [12] ...
... This function plays an important role in Fourier analysis, probability theory, and many other real applications of mathematics. The Dirac δ-function has the following properties [12]: ...
... As a → 0+, the family of functions g(w, a) satisfies exactly the same properties as the δ-function. That means (2.7) δ(w) = lim_{a→0+} g(w, a). The Fourier transform of the δ-function and its inverse are given by [12] (2.8) ...
Article
Full-text available
The aim of this study is to calculate the well-known Fourier Transforms of functions in a different way. Through our procedure, the Fourier Transform of functions is obtained by considering the Differential Transform Method (DTM) easily without resorting to complex integration.
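The nascent-delta limit δ(w) = lim_{a→0+} g(w, a) quoted above can be verified numerically. The heat-kernel family below is our assumed concrete choice of g(w, a) (the cited text's family may differ), and the sifting integral is approximated by a Riemann sum:

```python
import numpy as np

# Sifting check: the integral of g(w, a) * f(w) tends to f(0) as a -> 0+.
# The heat-kernel family below is an assumed concrete choice for g(w, a).
f = lambda w: np.cos(w) + w ** 2        # arbitrary smooth test function
w = np.linspace(-10.0, 10.0, 200001)
dw = w[1] - w[0]

def sift(a):
    g = np.exp(-w ** 2 / (4.0 * a)) / (2.0 * np.sqrt(np.pi * a))
    return np.sum(g * f(w)) * dw

for a in (1.0, 0.1, 0.001):
    print(a, sift(a))                   # tends to f(0) = 1.0 as a -> 0+
```

Consistently with (2.8), the Fourier transform of such a family tends to the constant 1, the transform of the δ-function in the non-unitary angular-frequency convention.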
... It is well known (Marks II R. J. 1991), (Korohoda P., Borgosz J. 1999), (Osgood B. 2014), (Borys A., Korohoda P. 2017) that signal sampling performed at the critical sampling rate can cause problems when carrying out the inverse operation, that is, when reconstructing (restoring) the analog signal from its samples. ...
... where δ(·) denotes the so-called Dirac delta impulse (Dirac P. A. M. 1947), (Marks II R. J. 1991), (Osgood B. 2014), which is also called the Dirac delta function (improperly) or the Dirac distribution (properly) in the literature. ...
... In the literature, the operation of signal sampling is modeled as a modulation of the so-called Dirac comb (Marks II R. J. 1991), (Osgood B. 2014) ...
Article
Full-text available
In this paper, the problem of perfectly recovering a cosinusoidal signal of any phase sampled critically is considered. It is shown that there is no general solution to this problem. The detailed analysis presented here shows that recovering both the original cosinusoidal signal's amplitude and its phase is not possible at all. Only one of these quantities can be recovered, under the assumption that the second one is known. Even then, some additional calculations are needed. As a byproduct, it is shown here that the transfer function of the recovering filter that must be used in the case of critical sampling differs from the one used when a cosinusoidal signal is sampled with a sampling frequency greater than the Nyquist rate. All the results achieved in this paper are soundly justified by thorough derivations.
... see, for example, (Marks II R. J. 1991), (Osgood B. 2014). It is a useful object in signal processing and telecommunications theories. ...
... The signal reconstruction, or its recovery, performed in the frequency domain means multiplication of the Fourier transform of the sampled signal (Osgood B. 2014). In other words, the above means carrying out the following operation: ...
... Note that another form of H(f) is also used in the literature for the interpolation filter. It differs, however, only slightly from the one given by (19) and (1), and is used, for example, in (Osgood B. 2014). The difference between these transfer functions regards only two points. In what follows, we will consider only the latter point, because the situation at the former one is exactly a mirror image of that at ...
Article
Full-text available
When the sampling of an analog signal uses a sampling rate equal to exactly twice the maximal frequency occurring in the signal spectrum, it is called critical. As known from the literature, this kind of sampling can be ambiguous in the sense that the signal reconstructed from the samples obtained by critical sampling is not unique. For example, such is the case of sampling a cosinusoidal signal of any phase. In this paper, we explain in detail the reasons for this behavior. Furthermore, it is also shown here that manipulating the values of the coefficients of the transfer function of an ideal rectangular reconstruction filter at the transition edges from its zero to non-zero values, and vice versa, does not eliminate the ambiguity mentioned above.
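The ambiguity of critical sampling discussed in the abstracts above is easy to demonstrate: sampling A·cos(2πft + φ) at fs = 2f yields samples A·cos(πk + φ) = A·cos(φ)·(−1)^k, so only the product A·cos(φ) survives. A minimal sketch (the frequency and phase values are arbitrary choices):

```python
import numpy as np

# Critical sampling of A*cos(2*pi*f*t + phi) at fs = 2*f: the samples
# equal A*cos(phi)*(-1)**k, so two different signals sharing the same
# product A*cos(phi) are indistinguishable from their samples.
f = 5.0                     # assumed signal frequency, Hz
fs = 2.0 * f                # critical (Nyquist-rate) sampling
k = np.arange(16)
t = k / fs

def samples(A, phi):
    return A * np.cos(2.0 * np.pi * f * t + phi)

s1 = samples(1.0, np.pi / 3.0)   # A = 1.0, phi = 60 deg -> A*cos(phi) = 0.5
s2 = samples(0.5, 0.0)           # A = 0.5, phi =  0     -> A*cos(phi) = 0.5
print(np.allclose(s1, s2))       # True: identical sample sequences
```

Neither amplitude nor phase can be recovered individually from these samples, which is exactly the non-uniqueness the papers analyze.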
... the square of the plane wave spectrum (ie the power spectrum) is obtained In terms of v from the Fourier transform of the mutual coherence function of the wave field [a]. This is simply because the mutual coherence function is just the complex spatial correlation function of the field; correlation functions and power spectra in the relevant domains are related by the Fourier transform [9]. Furthermore. ...
... Another well known theorem relating to Fourier transforms [9] may now be noted. that is that the convolution of two functions is the transform of the product of where PM is the coherence function and TM is the array sensitivity distribution. ...
... what represents the impulse at the instant kT on the timeline according to the sampling model visualized in Fig. 1c). Further, the sequence of such impulses describes the weighted periodization [24] of the Dirac delta. And, this is a correct result within the model of signal sampling in force [1][2][3][4][5][6][7][8][9][10][11], [24]. ...
... Further, the sequence of such impulses describes the weighted periodization [24] of the Dirac delta. And, this is a correct result within the model of signal sampling in force [1][2][3][4][5][6][7][8][9][10][11], [24]. Now, let us consider for a moment the notation (2) again to check what would happen there for signals other than the Dirac impulse δ(t). ...
... The limiting computational step of applying SΠ in (5) is the multiplication by Ĥ. The recursive structure of Ĥ permits us to compute SΠA in O(N d log N) time, through Fourier methods [69]. ...
... Therefore, multiplying A ∈ R^(N×d) with Π takes O(N d k² log_k N) operations. We follow a similar analysis to that of [69, Section 6.10.2]. ...
Preprint
In this work, we propose methods for speeding up linear regression distributively, while ensuring security. We leverage randomized sketching techniques, and improve straggler resilience in asynchronous systems. Specifically, we apply a random orthonormal matrix and then subsample blocks, to simultaneously secure the information and reduce the dimension of the regression problem. In our setup, the transformation corresponds to an encoded encryption in an approximate gradient coding scheme, and the subsampling corresponds to the responses of the non-straggling workers, in a centralized coded computing network. This results in a distributive iterative sketching approach for an ℓ2-subspace embedding, i.e. a new sketch is considered at each iteration. We also focus on the special case of the Subsampled Randomized Hadamard Transform, which we generalize to block sampling, and discuss how it can be modified in order to secure the data.
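The O(N d log N) multiplication by the recursive matrix Ĥ mentioned above relies on a fast butterfly scheme. As a generic illustration (not the paper's SRHT code), the fast Walsh-Hadamard transform below multiplies a length-2^m vector by the unnormalized Hadamard matrix in O(N log N):

```python
import numpy as np

def fwht(a):
    """Unnormalized fast Walsh-Hadamard transform; len(a) must be 2^m."""
    a = np.array(a, dtype=float)
    h, n = 1, len(a)
    while h < n:
        for i in range(0, n, 2 * h):
            x = a[i:i + h].copy()
            y = a[i + h:i + 2 * h].copy()
            a[i:i + h] = x + y          # butterfly: pairwise sums
            a[i + h:i + 2 * h] = x - y  # butterfly: pairwise differences
        h *= 2
    return a
```

Each of the log2(N) passes touches all N entries once, giving O(N log N) total work versus O(N²) for an explicit matrix-vector product.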
... During this time Pawsey, the leader of the radio astronomy group, invited Bracewell to be co-author of the book Radio Astronomy (Pawsey and Bracewell, 1955), and Bracewell later surmised that this was partly a device to get him more involved in the subject. Pawsey also asked him to produce a pictorial dictionary of Fourier transforms, which later led to Bracewell's most important book, The Fourier Transform and its Applications (Bracewell, 1965). (Bracewell and Roberts, 1954): this paper was particularly important in the early years of radio astronomy, when the relation between the true profile of a source and the profile obtained by scanning ... "Strip Integration in Radio Astronomy" (Bracewell, 1956): this paper considered the construction of two-dimensional images from one-dimensional scans of a source with a range of position angles, as was required, for example, to obtain a solar map from Christiansen's early grating-array observations. The Fourier transform relationships involved are succinctly illustrated in a diagram (Figure 5 in his paper) which Bracewell later refers to, in his chapter in Sullivan's 1984 book, as the 'projection-slice theorem'. ...
... The Fourier Transform and its Applications (Bracewell, 1965): the end of this period saw the publication of ... arrays and operated at 9.1 cm wavelength. It is described by Bracewell and Swarup (1961). ...
... That is, let us write an equivalent of (11) for this case. We then get (17). (14) can be expressed as the so-called Dirac comb multiplied by T (Bracewell R. N. 2000), (Osgood B. 2014). So, this gives ...
... where the symbol comb_T(t) stands for the Dirac comb (Bracewell R. N. 2000), (Osgood B. 2014). ..., an additional operation is needed (for more details see (Borys A. 2020b)). ...
Article
The objective of this paper is to show from another perspective that the definition of the spectrum of a sampled signal, which is used at present by researchers and engineers, is nothing else than an arbitrary choice for what is possibly not uniquely definable. To this end and for illustration, Shannon's proof of the reconstruction formula is used. As we know, an auxiliary mathematical entity is constructed in this proof by performing periodization of the spectrum of an analog, bandlimited, energy signal. Admittedly, this entity is not called a spectrum of the sampled signal there - there is simply no need for this in the proof - but as such it is used in signal processing. And it is not clear why just this auxiliary mathematical object has been chosen in signal processing to play the role of a definition of the spectrum of a sampled signal. We show here the interpretation inconsistencies associated with the above choice. Finally, we propose another, simpler and more useful definition of the spectrum of a sampled signal, for the cases where it can be needed.
... Fourier transforms were defined by Cochran et al. (1967); algorithms and applications of Fast Fourier transforms were presented by Marquardt (1963), Cooley and Tukey (1965), and Cooley et al. (1967). Applications of Fourier Transforms are presented in Bracewell (1986). In the area of potential field geophysical (gravity and magnetic) data, the pioneering studies of Dean (1958), Mesko (1965), and Odegard and Berg (1965) led to the interpretational ease that results from transforming observed data from the space domain to the frequency domain. ...
... This center weighting is highly advantageous as a means of focusing the analysis on the sources that occur within the data interval and suppressing the contribution of field anomalies near the data extremities. In addition, Bracewell (1986) suggested subtracting the data mean value from the field intensity before applying the transformation. This operation effectively removes long wavelengths not defined by the finite data interval. ...
Thesis
Full-text available
Large increases in energy cost during the past two decades have rapidly intensified the search for and the development of nonfossil-fuel and renewable alternative energy sources all over the world, of which geothermal resources are one. Magnetic surveying is an important part of geothermal exploration. In this study, total field magnetic intensity maps were evaluated using spectral methods in order to estimate the heat flow, which is the primary observable quantity in geothermal exploration. The study area is bounded by latitudes 8° 30′ and 10° 00′ North and longitudes 4° 30′ and 6° 00′ East. It is an area of about 27,200 square kilometres situated on the western side of Central Nigeria. Depth estimates were made from the analysis of the spectra of magnetic anomalies. The rates of decay of the spectrum were used to calculate the mean depth to the top of magnetic sources. A significant peak in the spectrum indicates that the source bottom (Curie-point isotherm depth) is detectable. This depth information was then analysed to determine the heat flow. In cognizance of the aliasing error resulting from the digitisation spacing of aeromagnetic maps, an optimal digitisation spacing of 0.875 km was used to digitise the aeromagnetic maps. Since the geology of the area is not complex, a plane least-squares polynomial fit method was applied to the digitised data to obtain regional-residual separation. The upward continuation technique was also utilized to suppress short-wavelength components of the residual magnetic anomalies in the study area. The continuation was carried out at a height of 0.282 km. The results obtained from the spectral evaluation reveal that the depths to deep-seated geologic structures increase from the edges to the central portion of the study area and vary between 0.52 and 4.38 km, with an average of 2.16 ± 0.94 km, while the shallow depths vary from 0.08 to 1.8 km, with an average of 0.71 ± 0.35 km.
Furthermore, the Curie-point isotherm depth estimated in the area varies between 10 and 30 km. Consequently, the heat flow in the surveyed area varies between 10 and 120 mW/m². The average heat flow in thermally ‘normal’ continental regions is about 60 mW/m²; values in excess of about 80-100 mW/m² indicate anomalous geothermal conditions. Anomalously high heat flow values were observed in the surveyed area. Therefore, four prospects have been elucidated and are highly recommended for further detailed geothermal exploration in the study area. These prospects are areas within the axis of Mokwa and Rabba in the west-central part of the study area, Kwati and Akerre in the north-eastern part, Lafiagi and Lata in the south-eastern part of the study area, and around Wuya. These areas may possibly have geothermal sources and reservoirs and should therefore be targeted for detailed geothermal surveys. An advantage of this wide-area regional analysis is the mapping of temperature isotherms and basement structures, which would direct detailed exploration towards the more promising areas. This would also provide information on the economic feasibility of geothermal, mineral-potential, and petroleum-maturation prospects in the study area. Moreover, it would assist in the planning of similar surveys in other areas.
... The Fast Fourier Transform (FFT) is a well-established computational tool that is commonly used to find the frequency components of a signal buried in noise. [18][19][20][21][22][23] It is based on the Fourier analysis method, which states that any periodic function can be represented as an infinite enumerable sum of trigonometric functions. [24] FFT is a method for efficiently computing the Discrete Fourier Transform (DFT) of time series and facilitates power spectrum analysis and filter simulation of signals. ...
Preprint
Amperometry is a commonly used electrochemical method for studying the process of exocytosis in real-time. Given the high precision of recording that amperometry procedures offer, the volume of data generated can span over several hundreds of megabytes to a few gigabytes and therefore necessitates systematic and reproducible methods for analysis. Though the spike characteristics of amperometry traces in the time domain hold information about the dynamics of exocytosis, these biochemical signals are, more often than not, characterized by time-varying signal properties. Such signals with time-variant properties may occur at different frequencies and therefore analyzing them in the frequency domain may provide statistical validation for observations already established in the time domain. This necessitates the use of time-variant, frequency-selective signal processing methods as well, which can adeptly quantify the dominant or mean frequencies in the signal. The Fast Fourier Transform (FFT) is a well-established computational tool that is commonly used to find the frequency components of a signal buried in noise. In this work, we outline a method for spike-based frequency analysis of amperometry traces using FFT that also provides statistical validation of observations on spike characteristics in the time domain. We demonstrate the method by utilizing simulated signals and by subsequently testing it on diverse amperometry datasets generated from different experiments with various chemical stimulations. To our knowledge, this is the first fully automated open-source tool available dedicated to the analysis of spikes extracted from amperometry signals in the frequency domain.
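A minimal sketch of the core idea above, finding the frequency of a tone buried in noise with the FFT; the sampling rate, tone frequency, and noise level are arbitrary choices, not values from the paper:

```python
import numpy as np

# A 123 Hz tone (assumed example values) buried in strong white noise;
# the peak of the FFT magnitude spectrum recovers its frequency.
rng = np.random.default_rng(0)
fs, N = 1000.0, 4096
t = np.arange(N) / fs
x = np.sin(2.0 * np.pi * 123.0 * t) + rng.normal(0.0, 2.0, N)

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(N, d=1.0 / fs)
peak_freq = freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin
print(peak_freq)                             # ~ 123 Hz
```

The coherent FFT gain (the signal energy concentrates into one bin while the noise spreads over all of them) is what makes the tone detectable even at low per-sample SNR.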
... Some properties of the Dirac delta function are as follows [14,16,17]: i. ...
Article
Full-text available
In this paper, we present another method for computing Fourier transforms of functions considering the Variational Iteration Method (VIM). Through our procedure, the Fourier transforms of functions can be calculated precisely and without reference to complex integration.
... Here we determine Fourier series involving an increasing number of harmonics, choosing the number we regard as suitable. By analogy with polynomials, we term the number of harmonics, as in other publications (for instance, [90]), the degree. Degree n has 2n + 1 defining parameters: a constant term and n cosine terms and n sine terms for the x and y components, respectively: ...
Technical Report
Full-text available
This set of examples addresses measurement in healthcare in the following topic areas. The examples show improved and alternative treatments of the evaluation of measurement uncertainty, building on current practice in these areas. A diversity of topics is addressed, such as uncertainty arising in image reconstruction, determination of nanoparticle size distribution in waste water, quantification of small volumes and flows in accurate dose delivery to patients, and the determination of haemoglobin concentration in blood.
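The degree-n Fourier series parameterization quoted above (a constant plus n cosine and n sine terms, i.e. 2n + 1 parameters) can be sketched as a linear least-squares fit; the target curve below is an arbitrary example, not one of the report's measured profiles:

```python
import numpy as np

# Design matrix for a degree-n Fourier series: a constant column plus
# cos(k*theta) and sin(k*theta) columns for k = 1..n, i.e. 2n+1 params.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
y = 1.0 + 2.0 * np.cos(theta) - 0.5 * np.sin(3.0 * theta)  # example curve

n = 3
cols = [np.ones_like(theta)]
for k in range(1, n + 1):
    cols += [np.cos(k * theta), np.sin(k * theta)]
A = np.column_stack(cols)                    # shape (100, 2n+1)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(A.shape[1])                            # 7 parameters for degree 3
```

Increasing the degree adds harmonics two parameters at a time, which is the sense in which it plays the role of a polynomial degree.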
... Here we determine Fourier series involving an increasing number of harmonics, choosing the number we regard as suitable. By analogy with polynomials, we term the number of harmonics, as in other publications (for instance, [390]), the degree. Degree n has 2n + 1 defining parameters: a constant term and n cosine terms and n sine terms for the x and y components, respectively: ...
Book
Full-text available
In this document, the examples illustrate various aspects of uncertainty evaluation and the use of uncertainty statements in conformity assessment. These aspects include, but are not limited to – choice of the mechanism for propagating measurement uncertainty, – reporting measurement results and measurement uncertainty, – conformity assessment, and – evaluating covariances between input quantities.
... The wavelet transform is a mapping function that operates in both the Fourier and real domains to provide significant localized information by considering the vertical, horizontal, and diagonal directions of the inputs [18]. The Fourier method converts signal information from the time domain to the frequency domain [19]. The last transformation method, the Gabor transform, is used to transform an image representation into both the spatial and frequency domains. ...
Article
Full-text available
Fabric quality has an important role in the textile sector. Fabric defects, a highly important factor influencing fabric quality, are a problem that researchers are trying to minimize. Due to the limited capacity of human resources, human-based defect detection results in low performance and significant loss of time. To overcome this limitation, computer vision-based methods have emerged. Thanks to new additions to these methods over time, fabric defect detection methods have begun to show almost one hundred percent performance. Convolutional Neural Networks (CNNs) play a leading role in this high-performance success. However, Convolutional Neural Networks cause information loss in the pooling process. Capsule Networks are a useful technique for minimizing information loss. This paper proposes Capsule Networks, a new-generation method that represents an alternative to Convolutional Neural Networks for deep learning tasks. The TILDA dataset is employed as source data for the training and testing phases. The model is trained for 100, 200, and 270 epochs. Model performance is evaluated based on the accuracy, recall, and precision metrics. Compared to mainstream deep learning algorithms, this method offers improved performance in terms of accuracy. The method has been evaluated under different circumstances and has achieved a performance value of 98.7%. The main contributions of this study are to use Capsule Networks in the fabric defect detection domain and to obtain a significant performance result.
Chapter
This chapter first focusses on the theoretical foundations of quaternion Fourier transforms. Basically, in the complex Fourier transform the imaginary complex unit i is replaced by any constant unit pure quaternion squaring to −1. Since quaternion multiplication is generically non-commutative, a kernel factor can either be placed to the right or the left of the signal to be transformed, the signal itself may be scalar or quaternion valued as well. Furthermore, two kernel factors can be used, one on the left and one on the right, yielding a two-sided QFT. Due to quaternion non-commutativity it is not trivial to derive the properties of QFTs. A recent comprehensive treatment is given in E. Hitzer, Quaternion and Clifford Fourier Transforms, Chapman and Hall/CRC, London, 1st edition (September 22, 2021). The one-sided QFT had been implemented for quite a while in the Quaternion Toolbox for Matlab (QTFM) by S. Sangwine, and most recently he has expanded this in version 3.4 to include the two-sided QFT as well (https://sourceforge.net/projects/qtfm/).
Chapter
This paper presents a novel method for diagnosing compressor faults using the Graph Attention Network (GAT). Specifically, we address the challenge of analyzing multivariate time series data generated by compressors. Our proposed method consists of three main steps. Firstly, we construct a temporal graph for each variable of the multivariate time series using the Limited Penetrable Visibility Graph (LPVG) to capture the temporal dependencies within each variable. Subsequently, we feed these temporal graphs into the GAT to obtain a representation of each variable. Secondly, we leverage the inter-variable dependencies by constructing an adaptive inter-variable graph, where each node represents a variable, and the node feature is the previously obtained variable representation. We then input this inter-variable graph into another GAT to further capture the dependencies between variables. Finally, we use the output from the GAT to train a classifier for fault diagnosis, resulting in an end-to-end model. Our proposed method outperforms existing techniques in diagnosing faults in a real dataset. Keywords: Compressor Fault Diagnosis, Multivariate Time Series, Limited Penetrable Visibility Graph, Graph Attention Network
Article
In geophysics, whether the structure is close to the surface or deeper, information about the depth and location of the structure can be obtained by using magnetic data. The importance of this study is that it is an example of the application of the Normalized Full Gradient (NFG) method to an archaeological site, to find the depth and location of the structures that cause the magnetic anomalies collected there. One of the parameters affecting the shape and size of a magnetic anomaly is the depth of the source causing the anomaly. For this reason, it is important to determine the location of the source correctly. One of the methods used to determine structure depth from magnetic field data is the NFG technique. In the application of downward analytical continuation, distortions occur when the continuation passes the depth of the mass, and the NFG method overcomes this. The NFG technique was tested on anomalies caused by prism-shaped synthetic models. Test studies on synthetic models with the NFG technique yielded satisfactory findings. Based on these findings, the NFG technique was applied to the real magnetic anomaly collected in the ancient city of Sapinuwa. The findings were compared with the building remains unearthed as a result of the proposed trench excavations. The obtained results are satisfactory.
Article
The interaction between the ventricles and atria in the heart is an important aspect of cardiac function. During ventricular arrhythmias, such as ventricular tachycardia and ventricular fibrillation, the atrial interbeat interval appears different from that of normal sinus rhythm, even though there is no direct electrical connection between the ventricles and atria. To understand this phenomenon, bivariate time-series Fourier analysis was performed on ventricular and atrial signals. The results showed different levels of correlation from the ventricles to the atria during ventricular arrhythmias. We found that low interaction was associated with self-terminating ventricular arrhythmias, while strong connections were mostly seen in sustained ventricular arrhythmias. These findings suggest that the underlying mechanism behind this interaction may be due to the presence of mechano-electrical coupling, which serves as a bridge from the ventricles to the atria (reciprocal connections).
Article
Induced polarization (IP) is a widely used geophysical exploration technique. Continuous random noise is one of the most prevalent interferences that can seriously contaminate the IP signal and distort the apparent electrical characteristics. We propose a noise separation algorithm based on deep learning to overcome this issue. The standard IP signals are first produced by combining the Cole-Cole model and Fourier series decomposition, and then the mathematical simulation is used to generate various types of random noise interferences, which are subsequently added to the IP signals. Then, a de-noising auto-encoder deep neural network structure is built and trained by using noisy signals as input samples and pure signals as output samples. The resulting optimum network is capable of automatically reconstructing a clean IP signal from the noisy input. This network is tested using synthetic datasets. The trained neural network can perform the noise reduction of thousands of survey points in a matter of seconds and reduce signal distortion from about 25% to less than 5%. Deep-learning-based de-noising provides superior computation speed and precision compared to the wavelet de-noising and smoothing filtering approach. The data for high-quality signals do not vary considerably before and after noise reduction. The noise interferences are successfully suppressed for low-quality signals. Based on the findings, the de-noising auto-encoder deep neural network has a promising future for suppressing random noise interferences, which can aid in improving the quality of IP data with high efficiency and precision.
Article
Full-text available
By solving the existing expectation‐signal‐to‐noise ratio (expectation‐SNR) based inequality model of the closed‐form instantaneous cross‐correlation function type of Choi‐Williams distribution (CICFCWD), the linear canonical transform (LCT) free parameter selection strategies obtained are usually unsatisfactory. Since the second‐order moment variance outperforms the first‐order moment expectation in accurately characterizing output SNRs, this paper uses the variance analysis technique to improve the parameter selection strategies. The CICFCWD's average variance for deterministic signals embedded in additive zero‐mean stationary circular Gaussian noise processes is first obtained. Then the so‐called variance‐SNRs are defined and applied to model a variance‐SNR based inequality. A stronger inequality system is also formulated by integrating the expectation‐SNR and variance‐SNR based inequality models. Finally, a direct application of the system to noisy one‐component and bi‐component linear frequency‐modulated (LFM) signal detection is studied. The newly derived analytical algebraic constraints on the LCT free parameters seem more accurate than the existing ones, achieving better noise suppression effects. Our methods have potential applications in optical, radar, communication and medical signal processing.
Preprint
Full-text available
Very often, in the course of uncertainty quantification tasks or data analysis, one has to deal with high‐dimensional random variables. Here the interest is mainly to compute characterizations like the entropy, the Kullback–Leibler divergence, more general f‐divergences, or other such characteristics based on the probability density. The density is often not available directly, and it is a computational challenge to just represent it in a numerically feasible fashion in case the dimension is even moderately large. It is an even stronger numerical challenge to then actually compute said characteristics in the high‐dimensional case. In this regard it is proposed to approximate the discretized density in a compressed form, in particular by a low‐rank tensor. This can alternatively be obtained from the corresponding probability characteristic function, or more general representations of the underlying random variable. The mentioned characterizations need point‐wise functions like the logarithm. This normally rather trivial task becomes computationally difficult when the density is approximated in a compressed resp. low‐rank tensor format, as the point values are not directly accessible. The computations become possible by considering the compressed data as an element of an associative, commutative algebra with an inner product, and using matrix algorithms to accomplish the mentioned tasks. The representation as a low‐rank element of a high‐order tensor space allows to reduce the computational complexity and storage cost from exponential in the dimension to almost linear.
Technical Report
Full-text available
An inverse fast Fourier transform (IFFT) algorithm is developed to solve initial value problems (IVPs) for wave propagation in nonlocal peridynamic media. The IFFT solutions compare well with solutions obtained using Mathematica’s NIntegrate function and verified using a spherical Bessel function series solution. A nonlinear dispersion relation is derived using Floquet theory for a periodic elastic medium of infinite extent, which we use to solve an IVP for a homogenized peridynamic medium using our IFFT algorithm; this solution compares well with a spherical Bessel function series solution. A local-nonlocal peridynamic correspondence principle is identified, which enables direct determination of nonlocal Fourier transform domain solutions to IVPs; the correspondence principle only requires identification of the nonlinear dispersion curve for the material and does not require definition of a micromodulus function, although the latter is implicitly defined via an integral equation. Results are useful for modeling and verification of dispersive wave propagation in large-scale peridynamic numerical simulations.
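The FFT-based solution strategy can be illustrated on the classical local wave equation, where the dispersion relation is ω(k) = c|k|: transform the initial data, advance each Fourier mode by cos(ω(k)t) (zero initial velocity), and invert. A peridynamic medium would simply substitute its nonlinear dispersion curve for ω(k), per the correspondence principle described above. This is a hedged one-dimensional sketch with made-up parameters, not the authors' algorithm.

```python
import numpy as np

def evolve_fft(u0, dx, t, omega):
    """Advance initial displacement u0 (zero initial velocity) to time t
    using the dispersion relation omega(k), on a periodic domain."""
    k = 2 * np.pi * np.fft.fftfreq(u0.size, d=dx)
    u_hat = np.fft.fft(u0) * np.cos(omega(k) * t)
    return np.real(np.fft.ifft(u_hat))

# Classical local medium: omega(k) = c|k|. A nonlocal medium would
# plug in its own (nonlinear) dispersion curve here.
c = 1.0
x = np.linspace(-20, 20, 1024, endpoint=False)
dx = x[1] - x[0]
u0 = np.exp(-x**2)                       # Gaussian initial pulse
u = evolve_fft(u0, dx, t=5.0, omega=lambda k: c * np.abs(k))

# d'Alembert check: the pulse splits into two half-amplitude waves.
exact = 0.5 * (np.exp(-(x - 5.0)**2) + np.exp(-(x + 5.0)**2))
assert np.max(np.abs(u - exact)) < 1e-6
```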
Article
Instantaneous frequency is an important seismic attribute, which can indicate thin beds and lithofacies boundaries. However, the instantaneous frequency attribute is susceptible to noise when obtained by the traditional Hilbert transform (HT) method. We propose a robust method for instantaneous frequency estimation. The method first obtains the time-frequency distribution of the seismic signal by inverse spectral decomposition (ISD) and then calculates the analytic signal through a windowed HT. Inverse spectral decomposition achieves a high-resolution time-frequency distribution by adding a sparsity constraint to the corresponding inverse problem, which suppresses the noise. To solve the mixed ℓ2–ℓ1 problem we choose the fast iterative shrinkage-thresholding algorithm (FISTA), which achieves better computational efficiency than the traditional iteratively reweighted least squares (IRLS) algorithm. We apply the method to a quadratic frequency-modulated (QFM) signal, synthetic data based on a wedge model, and field data sets to demonstrate its performance, compared with the HT method and the time-frequency adaptive filtering method.
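For reference, the baseline HT estimator that the paper improves on can be sketched in a few lines: form the analytic signal, then differentiate the unwrapped phase. The chirp parameters below are hypothetical test values; the frequency-domain construction mirrors the standard one used by scipy.signal.hilbert.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal: zero negative frequencies, double positive ones
    (the standard frequency-domain Hilbert-transform construction)."""
    n = len(x)                 # n assumed even here
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.fft.ifft(X * h)

fs = 1000.0                               # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)             # 2000 samples (even)
f0, rate = 50.0, 50.0                     # hypothetical linear chirp
x = np.cos(2 * np.pi * (f0 * t + 0.5 * rate * t**2))

# Instantaneous frequency = derivative of the unwrapped phase.
phase = np.unwrap(np.angle(analytic_signal(x)))
f_inst = np.diff(phase) * fs / (2 * np.pi)

# Midway through the sweep the true value is f0 + rate*t = 100 Hz.
mid = len(f_inst) // 2
assert abs(f_inst[mid] - 100.0) < 1.0
```

On noisy data this phase derivative becomes erratic, which is the motivation for the sparsity-constrained ISD front end.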
Article
Full-text available
We study the sequence fₙ of functions defined by recursive convolutions as f₁(x) = Π(x), f_{n+1}(x) = (fₙ ∗ f₁)(x), n ∈ ℕ, where Π is the unit rectangle function. We find the general closed form of the sequence fₙ and apply it to evaluate the improper integral (2/π) ∫₀^∞ (sin ξ / ξ)ⁿ cos(2xξ) dξ, x ∈ ℝ, n ∈ ℕ, n ≥ 2. We also study some interesting features of the numerical coefficients that appear in the closed-form expression of fₙ. In connection with these coefficients, we introduce a map F defined on ℕ × ℕ₀ by the rule F(n, s) = ∑_{i=0}^{n} iˢ(−1)^{n−i} / (i!(n−i)!), and show that its range lies in ℕ₀, where ℕ₀ := ℕ ∪ {0}.
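The recursion is easy to probe numerically: repeated self-convolution of the unit rectangle yields densities of unit area whose support widens by 1 at each step (f₂ is the triangle function, and the limit shape is Gaussian by the central limit theorem). A small discrete-convolution check, with an arbitrary grid spacing chosen for illustration:

```python
import numpy as np

# Discretize the unit rectangle PI on [-1/2, 1/2).
dx = 1e-3
x = np.arange(-0.5, 0.5, dx)
f1 = np.ones_like(x)                # f_1 = PI

# f_{n+1} = f_n * f_1, approximated by discrete convolution scaled by dx.
fns = [f1]
for _ in range(3):
    fns.append(np.convolve(fns[-1], fns[0]) * dx)

for n, fn in enumerate(fns, start=1):
    # Each f_n is a probability density: unit area, support of width n.
    assert abs(np.sum(fn) * dx - 1.0) < 1e-2
    assert abs(fn.size * dx - n) < 1e-2

# f_2 is the triangle function with peak value 1 at x = 0.
assert abs(np.max(fns[1]) - 1.0) < 1e-6
```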
Article
In this paper, we discuss hierarchical Bayesian (HB) estimation of system reliability for a waste-to-energy (WTE) process. The main goal of this approach is to estimate WTE process reliability under different loss functions for series, parallel, and k-out-of-m systems. In this setting, the system-level result can be derived by estimating each individual component. The components are assumed to be independent and identically distributed exponential random variables. Properties of the HB estimators under different loss functions are also provided, and comparisons are made between Bayesian and HB estimators via Monte Carlo simulation. The implementation of the proposed procedure is illustrated in detail with practical numerical examples drawn from a WTE process in the final part of the paper.
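The structural part of the argument, i.e. how component reliabilities combine in series, parallel, and k-out-of-m configurations with i.i.d. exponential lifetimes, can be sketched and checked against Monte Carlo simulation. The rate, mission time, and sample sizes below are hypothetical, and this is only the combinatorial layer, not the HB estimation itself.

```python
import numpy as np
from math import comb, exp

def r_series(p):                    # all m components must work
    return float(np.prod(p))

def r_parallel(p):                  # at least one component works
    return 1.0 - float(np.prod(1.0 - np.asarray(p)))

def r_k_out_of_m(p_each, k, m):     # identical components, >= k work
    return sum(comb(m, j) * p_each**j * (1 - p_each)**(m - j)
               for j in range(k, m + 1))

# i.i.d. exponential lifetimes with rate lam, mission time t.
lam, t, m = 0.5, 1.0, 4
p = exp(-lam * t)                   # single-component reliability at t

# Monte Carlo check (illustrative sizes only).
rng = np.random.default_rng(1)
alive = rng.exponential(1 / lam, size=(200_000, m)) > t
mc_series = np.mean(alive.all(axis=1))
mc_2_of_4 = np.mean(alive.sum(axis=1) >= 2)

assert abs(mc_series - r_series([p] * m)) < 5e-3
assert abs(mc_2_of_4 - r_k_out_of_m(p, 2, m)) < 5e-3
```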
Article
Ebstein's anomaly is an abnormality in the pediatric heart disease group, described as a structural defect of the heart as a whole. It can manifest with typical findings on auscultation and can be detected with other diagnostic methods such as ECG. A systolic ejection click and murmur are the most important findings in the diagnosis of the disease. In this study, a heart sound signal recorded from a 13-year-old patient was analyzed with several numerical methods alongside a normal heart sound signal. The signals were first examined in the time domain, where the auscultation findings were observed. The frequency components of the signals were then obtained; additional frequency components emerged in the diseased signal compared with the normal one. Spectrograms made it possible to observe the differences in the time, frequency, and amplitude components. Bispectral analysis was performed as a higher-order spectral analysis method to diversify the analysis; in the bispectral analysis of the anomaly, the click and murmurs manifest as equiphase surfaces distributed at high frequencies. Lastly, the power spectral densities of the signals were examined; the decrease in the additional power peak and in the power level of the diseased signal was remarkable.
Article
Full-text available
Audio search algorithms are used to detect a matching file in large databases, especially in multimedia applications and smart appliances. Similar algorithms based on different audio-fingerprint extraction methods have been developed and applied in various fields. These algorithms are expected to perform detection reliably and robustly in the shortest possible time. In this study, an audio fingerprint algorithm based on the spectral peaks method, with a few minor modifications, was developed to accurately detect the matching audio file in the target database. The algorithm was demonstrated, and the effect of spectrogram parameters such as window size, overlap, and FFT length on the reliability and robustness of the program was then investigated under three different noise sources. The database was relatively small, covering five genres of music. The aim was to contribute to audio file detection studies based on the spectral peaks method. It was observed that variation in the spectrogram parameters significantly affected the number of matchings (NM), reliability, and robustness. Under high-noise conditions the optimal spectrogram parameters were determined as 512, 50%, and 512, respectively. However, we did not observe a significant effect of music genre on NM.
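The core of a spectral-peaks fingerprint, i.e. windowed frames, an FFT per frame, and the dominant frequency bins retained as the signature, can be sketched as below. This is a deliberately simplified single-peak-per-frame version with hypothetical parameters, not the full landmark-hashing scheme or the modifications studied in the paper.

```python
import numpy as np

def spectrogram_peaks(x, fs, nfft=512, overlap=0.5):
    """Crude spectral-peaks fingerprint: the strongest frequency bin of
    each Hann-windowed frame (one peak per frame, for illustration)."""
    hop = int(nfft * (1 - overlap))
    win = np.hanning(nfft)
    peaks = []
    for start in range(0, len(x) - nfft + 1, hop):
        mag = np.abs(np.fft.rfft(x[start:start + nfft] * win))
        peaks.append(np.argmax(mag) * fs / nfft)   # peak frequency, Hz
    return np.array(peaks)

# Hypothetical test tone: a 1 kHz sine sampled at 8 kHz.
fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 1000.0 * t)

peaks = spectrogram_peaks(x, fs, nfft=512, overlap=0.5)
# Every frame should peak at 1 kHz (bin spacing fs/nfft = 15.625 Hz).
assert np.all(np.abs(peaks - 1000.0) < fs / 512)
```

The window size, overlap, and FFT length arguments are exactly the spectrogram parameters whose influence on NM the study varies.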
Preprint
We present a new approach to constrained classical fields that enables the action formalism to dictate how external sources must enter the resulting equations of motion. If symmetries asserted upon the varied fields can be modeled as restrictions in Fourier space, we prove that these restrictions are automatically applied to external sources in an unambiguous way. In contrast, the typical procedure inserts symmetric ansätze into the Euler-Lagrange differential equations, even for external sources not being solved for. This requires ad hoc constraints on external sources, which can introduce leading-order errors into model systems despite superficial consistency between model field and source terms. To demonstrate, we consider Robertson-Walker cosmologies within General Relativity and prove that the influence of point-like relativistic pressure sources on cosmological dynamics cannot be excluded by theoretical arguments.
Article
We study Fourier and Laplace transforms for Fourier hyperfunctions with values in a complex locally convex Hausdorff space. Since any hyperfunction with values in a wide class of locally convex Hausdorff spaces can be extended to a Fourier hyperfunction, this gives simple notions of asymptotic Fourier and Laplace transforms for vector-valued hyperfunctions, which improves the existing models of Komatsu, Bäumer, Lumer and Neubrander, and Langenbruch.
Article
We describe how to implement the spectral kurtosis method of interference removal (zapping) on a digitized signal of averaged power values. Spectral kurtosis is a hypothesis test, analogous to the t-test, with a null hypothesis that the amplitudes from which power is formed belong to a ‘good’ distribution – typically Gaussian with zero mean – where power values are zapped if the hypothesis is rejected at a specified confidence level. We derive signal-to-noise ratios (SNRs) as a function of amount of zapping for folded radio pulsar observations consisting of a sum of signals from multiple telescopes in independent radio-frequency interference (RFI) environments, comparing four methods to compensate for lost data with coherent (tied-array) and incoherent summation. For coherently summed amplitudes, scaling amplitudes from non-zapped telescopes achieves a higher SNR than replacing zapped amplitudes with artificial noise. For incoherently summed power values, the highest SNR is given by scaling power from non-zapped telescopes to maintain a constant mean. We use spectral kurtosis to clean a tied-array radio pulsar observation by the Large European Array for Pulsars (LEAP): the signal from one telescope is zapped with time and frequency resolutions of 6.25 ms and 0.16 MHz, removing interference along with 0.27 per cent of ‘good’ data, giving an uncertainty of 0.25 μs in pulse time of arrival (TOA) for PSR J1022+1001. We use a single-telescope observation to demonstrate recovery of the pulse profile shape, with 0.6 per cent of data zapped and a reduction from 1.22 to 0.70 μs in TOA uncertainty.
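The hypothesis test can be sketched with the generalized spectral kurtosis estimator of Nita & Gary, SK = ((M+1)/(M−1))·(M·S₂/S₁² − 1) with S₁ = Σp and S₂ = Σp², whose expected value is 1 when the underlying amplitudes are zero-mean Gaussian. The data sizes, seed, RFI amplitude, and threshold multiple below are hypothetical illustration values, not those of the LEAP pipeline.

```python
import numpy as np

def spectral_kurtosis(power):
    """Generalized SK estimator for M power values, each formed from
    zero-mean Gaussian amplitudes (one accumulation per value)."""
    M = len(power)
    s1, s2 = power.sum(), (power**2).sum()
    return (M + 1) / (M - 1) * (M * s2 / s1**2 - 1)

rng = np.random.default_rng(2)
M = 10_000

# 'Good' data: power = |complex Gaussian amplitude|^2 is exponential.
good = rng.exponential(1.0, size=M)
sk_good = spectral_kurtosis(good)

# Same data plus an impulsive RFI burst in 1% of the samples.
bad = good.copy()
bad[:M // 100] += 50.0
sk_bad = spectral_kurtosis(bad)

# SK is ~1 for Gaussian noise (std roughly sqrt(4/M)); RFI pulls it
# away, so a cut like |SK - 1| > 3*sqrt(4/M) zaps the bad block.
assert abs(sk_good - 1.0) < 0.1
assert abs(sk_bad - 1.0) > 1.0
```

Blocks whose SK falls outside the confidence interval are the ones zapped, after which the summation-compensation schemes compared in the paper decide how to reweight the surviving telescopes.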
Article
Full-text available
An important goal for vision science is to develop quantitative models of the representation of visual signals at post-receptoral sites. To this end, we develop the quadratic color model (QCM) and examine its ability to account for the BOLD fMRI response in human V1 to spatially-uniform, temporal chromatic modulations that systematically vary in chromatic direction and contrast. We find that the QCM explains the same, cross-validated variance as a conventional general linear model, with far fewer free parameters. The QCM generalizes to allow prediction of V1 responses to a large range of modulations. We replicate the results for each subject and find good agreement across both replications and subjects. We find that within the LM cone contrast plane, V1 is most sensitive to L-M contrast modulations and least sensitive to L+M contrast modulations. Within V1, we observe little to no change in chromatic sensitivity as a function of eccentricity.
Article
The exhaust device of a gas-driven fan propulsion system (for a VTOL aircraft) adopts a vector exhaust guide vane (VEGV) to achieve vector thrust over the range 0-90°, which requires the deflection angle of the VEGV to reach about ±45° with a total pressure recovery coefficient above 0.985. The exhaust device therefore needs a wide-range, low-loss VEGV to ensure efficient operation of the propulsion system, and the key requirement is to eliminate flow separation on the suction side at large deflection angles. This paper designs a new type of VEGV that meets these demands by reducing the adverse pressure gradient on the suction surface (one of the necessary conditions for two-dimensional separation). To realize this design process, a spectral method for the small-disturbance equation (SMSDE) of two-dimensional subsonic flow was developed. Two VEGVs were then designed with the SMSDE and verified by numerical simulation at a Reynolds number of 10⁶ and a blade solidity of 1.18. The results show that both VEGVs eliminate suction-side separation at an inlet Mach number of 0.25 and a blade stagger angle of 45°, and the VEGV with a lateral blade maintains this behavior up to Mach 0.3. Moreover, when the inlet Mach number is below 0.3, the VEGV with a lateral blade ensures a total pressure recovery coefficient greater than 0.985, an outlet area ratio greater than 0.95, and an exhaust angle error of less than 3.5°. Finally, experimental verification of the vector exhaust device using the VEGV with a lateral blade was carried out; the results show that the new VEGV has a higher total pressure recovery coefficient and outlet area ratio than the traditional VEGV at large deflection angles.