Questions related to Inverse Problems
I am working on topology optimization for photonic devices. I need to apply a custom spatial filter to the designed geometry to make it fabricable with a CMOS process. I know there are spatial filters that remove pixel-scale and other small features from the geometry, but I have not seen any custom analytical or numerical filters in the literature. Can anyone suggest a reference to help me with this?
I have a series of shadowgraphy images obtained at different angles from a rotating geometry, and I want to use them to reconstruct the 3D geometry. Theoretically, the values recorded in shadowgraphy images are the sums of Laplacians of the refractive index over planes parallel to the detector, as opposed to the absorption coefficients recorded in X-ray tomography. I have calculated the forward projections from the sums of Laplacians in different views, and now, to reconstruct the geometry, I need to solve the inverse problem. In X-ray tomography the inverse problem is solved analytically by back-projecting the recorded values into the image domain. However, the same principle cannot be applied directly to shadowgraphy images, since Beer's law is no longer valid here. I am trying to reconstruct a 3D field of refractive indices from 2D shadowgraphy images; could anyone help me with this problem?
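Since the shadowgraphy projection is linear in the Laplacian of the refractive index, one possible route (a sketch, not the only approach) is to apply standard filtered back-projection to the sinogram to recover the Laplacian of n slice by slice, and then recover n itself by inverting a Poisson equation. A minimal sketch of that second step, assuming periodic boundaries and a uniform grid (the function name and grid are hypothetical):

```python
import numpy as np

def invert_laplacian_fft(lap_n):
    """Recover a field n from its Laplacian on a periodic grid.

    If FBP of the shadowgraphy sinogram has produced lap_n ~ laplacian(n)
    for one slice, dividing by -(kx^2 + ky^2) in Fourier space inverts
    the Poisson equation (the k = 0 mode, i.e. the mean of n, is lost).
    """
    ny, nx = lap_n.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                      # avoid division by zero
    n_hat = np.fft.fft2(lap_n) / (-k2)
    n_hat[0, 0] = 0.0                   # mean is unrecoverable; set to 0
    return np.real(np.fft.ifft2(n_hat))
```

The lost mean corresponds to the background refractive index, which would have to be supplied from prior knowledge.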
In epidemiological modelling, calibrating such a model requires discrete observations of the population compartment functions. If we neglect underreporting and other issues (such as vital dynamics), observing most compartment sizes is relatively easy. The infected (I) population is the cumulative number of new infections per day minus the cumulative removals. The same holds for the removed (R) compartment, which consists of the cumulative newly recovered and deceased per day. In the classical SIR model, again excluding the possibility of reinfection, the susceptible (S) class is calculated as the whole population minus the infected and removed. Statistics are also collected for other compartments, such as quarantined (Q), hospitalized (H), and so on.
But how can one measure the exposed (E) population share? Are there any real-world statistics for it, or can it be calculated from the existing data?
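One hedged back-of-the-envelope estimate, assuming a SEIR structure with a known latency rate sigma (the reciprocal of the mean latent period): the flux from E to I is sigma * E(t), so if the daily count of newly infectious cases approximates that flux, then E(t) is roughly incidence(t) / sigma. A minimal sketch (the case counts and sigma below are made up):

```python
import numpy as np

def estimate_exposed(daily_new_infections, sigma):
    """Estimate the exposed compartment E(t) from observed incidence.

    In a SEIR model the flux from E to I is sigma * E(t), so if the
    daily count of newly infectious cases approximates that flux,
    E(t) ~ incidence(t) / sigma, with sigma = 1 / (mean latent period).
    """
    incidence = np.asarray(daily_new_infections, dtype=float)
    return incidence / sigma

# Hypothetical data: 5 days of newly infectious case counts,
# assumed mean latent period of 4 days (sigma = 0.25 per day).
E = estimate_exposed([100, 120, 150, 160, 155], sigma=0.25)
print(E)  # [400. 480. 600. 640. 620.]
```

This is only as good as the assumed latent period and the assumption that reported cases track the E-to-I flux; underreporting biases it directly.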
I am in the middle of solving a linear algebra problem and have reached the point of having an ill-conditioned matrix in the Lagrangian. To minimize the error function I am using the Tikhonov method; however, after adding an epsilon to the matrix entries I see a considerable difference in the results. I have seen some SciPy routines in Python for dealing with such problems, but I don't know how applicable they are. I would really appreciate it if anyone could kindly help me crack the problem.
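For reference, standard Tikhonov regularization does not perturb the matrix entries themselves; it solves min ||Ax - b||^2 + alpha^2 ||x||^2, which can be done with ordinary least squares on an augmented system. A minimal NumPy sketch (A, b, and the alpha values are toy placeholders); for large sparse systems, scipy.sparse.linalg.lsqr accepts a damp parameter that does the same thing:

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Solve min ||A x - b||^2 + alpha^2 ||x||^2 by stacking:
    the regularized problem is an ordinary least-squares problem
    for the augmented system [A; alpha*I] x = [b; 0]."""
    n = A.shape[1]
    A_aug = np.vstack([A, alpha * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Ill-conditioned toy example (nearly dependent columns).
A = np.array([[1.0, 1.0], [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
for alpha in (0.0, 1e-3, 1e-1):
    print(alpha, tikhonov_solve(A, b, alpha))
```

Seeing a large change in the solution as alpha grows is expected for ill-conditioned A; the usual remedy is to pick alpha systematically (L-curve or discrepancy principle) rather than by a fixed epsilon.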
Greetings. I am doing anomaly detection using EIT. The forward problem is solved with FEM in COMSOL. Now I have to start on the inverse problem of EIT (image reconstruction). Can someone suggest which reconstruction algorithm will give the best result, and whether MATLAB or Python is better suited for solving it? Thanks in advance.
The paper describes the inner workings, that is, the basic concepts of the Math Microscope (MM), demonstrates Super-Resolution (SR) images obtained from the Event Horizon Telescope (EHT), and analyzes the motion of clusters of stars orbiting the Black Hole. The presence of point objects (single stars) in the SR image allowed us to implement a new breakthrough approach to the problem of SR imaging of the Powehi Black Hole within the MM concept. In the paper we review and illustrate new concepts: invertibility indicators and adequacy characteristics of discrete models of apparatus functions (AFs). With these new concepts, in the inverse problem, we were able for the first time to answer two simple questions: What are we dealing with? And have we actually solved the inverse problem? The paper demonstrates the "manual solution" of the problem of reconstruction of AFs and Super-Resolution with the MM. In the discussion at the end of the paper, we pose the problem of creating two artificial intelligences for the automated solution of the R&SR problem, with interpretation of the SR results for BH images from the EHT.
Dear Colleagues and Authors,
Plenty of problems in mathematics, economics, physics, biology, chemistry, and engineering (e.g., optics, radar, acoustics, communication theory, signal processing, medical imaging, computer vision, geophysics, oceanography, astronomy, remote sensing, natural language processing, machine learning, non-destructive testing, and other disciplines) can be reduced to solving an inverse problem in an abstract space, e.g., a Hilbert or Banach space. Inverse problems are so called because they start from the results and then calculate the causes. Solving inverse problems is a non-trivial task that involves many areas of mathematics and numerical techniques. When the problem is ill-posed, small errors in the data are greatly amplified in the solution; therefore, regularization techniques using parameter choice rules with optimal convergence rates are necessary.
Currently, I am editing a Special Issue on "Numerical Analysis: Inverse Problems – Theory and Applications 2021" for the Switzerland-based MDPI journal Mathematics.
I would like to draw your attention to this possibility of submitting research articles:
Please let me know if you need any help.
Thank you for your kind consideration,
Interested in numerical techniques for the traditional deterministic (Tikhonov) paradigm and especially for the information-probabilistic (Bayesian) paradigm, uncertainty propagation analysis, etc.
This is more of a survey question than a query for precise mathematical detailing. Opinions are welcome!
I'm working on inverse problems on topological indices. I have obtained a partial answer to my question, but I need to generalize this result. Please help me in this regard.
I am interested in image reconstruction and I mostly go to IEEE NSS-MIC and Fully3D
I would like to attend new conferences on applied mathematics for image processing, with topics such as inverse problems, optimisation, and machine learning. Any advice?
I am trying to optimize an algorithm that predicts the size of a network of which only some nodes are visible (available for measurement), using time-series data from the observable nodes across multiple experiments. The network can be anything from a linear time-invariant system to a highly non-linear, noisy system such as a biochemical signalling pathway.
In order to optimise this algorithm I need as much data as possible (by which I mean trajectories of the observed nodes in response to some initial conditions, sampled over some time period). What I am asking is:
whether it is possible to synthetically generate data with GANs from the already available time-series data, instead of doing more experiments, which can be costly at best and impossible at worst.
Obviously there is no GAN that would fit any type of system, but for the moment I want to know if
1) this is possible in the first place, and
2) this is practical, in other words whether the dataset size required to train the network is realistic.
To give an example: for a network with total size N = 100 and n = 10 observable nodes, suppose there are M = 150 experiments, and the data from one experiment form a 10x100 matrix holding the states of the 10 nodes sampled at 100 time points. I can then feed the GAN 150 such 10x100 matrices to train it, and from that I want to produce further 10x100 matrices drawn from the same probability distribution.
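Independent of the GAN question, it may be worth having a cheap simulator that produces matrices in exactly this format, both for sanity checks and for pre-training. A toy sketch (all sizes and the linear dynamics are hypothetical, matching only the shapes in the example above):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_experiments(N=100, n_obs=10, T=100, M=150):
    """Simulate M experiments on a random stable linear network of N nodes,
    returning M matrices of shape (n_obs, T): the first n_obs nodes
    sampled at T time points from random initial conditions."""
    A = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
    A *= 0.95 / np.max(np.abs(np.linalg.eigvals(A)))  # enforce stability
    data = np.empty((M, n_obs, T))
    for m in range(M):
        x = rng.normal(size=N)                        # random initial state
        for t in range(T):
            data[m, :, t] = x[:n_obs]                 # observe visible nodes
            x = A @ x + 0.01 * rng.normal(size=N)     # noisy linear dynamics
    return data

batch = simulate_experiments()
print(batch.shape)  # (150, 10, 100)
```

For the LTI end of the spectrum this kind of simulator may be a cheaper data source than a GAN; the GAN becomes interesting precisely where no tractable forward model exists.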
Please excuse the long question and thank you in advance. If something wasn't clear please ask, I will clarify.
Hi everyone, I hope you are all having a wonderful day.
I am working on a baby inverse problem. I have a simple nonlinear model. To estimate the model parameters, I can obtain two different sets of simulated experimental data (an inverse crime) in the following ways:
Set 1: experimental data obtained from a full 2^k factorial design.
Set 2: experimental data obtained from one extreme corner of the 2^k factorial design.
Interestingly, when I estimate parameters from these two data sets, I get better estimates when Set 2 is used. I thought that if the model is well-conditioned (without ill-posedness), using more experimental data should result in better estimates. My performance criterion is how close the parameter estimates are to their assumed true values. Do you have any idea why I am seeing this behavior?
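One way to probe this is a toy version of the comparison. The sketch below uses a made-up nonlinear model, not the asker's; it illustrates the usually expected outcome, where a single replicated corner leaves the parameters poorly identified because only one combination of them is constrained:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
theta_true = np.array([1.0, 0.5])        # assumed "true" parameters

def model(theta, X):
    # toy nonlinear model: y = theta0 * exp(theta1 * (x1 + x2))
    return theta[0] * np.exp(theta[1] * (X[:, 0] + X[:, 1]))

def fit(X):
    """Simulate noisy data at design X and fit the parameters back."""
    y = model(theta_true, X) + 0.01 * rng.normal(size=len(X))
    res = least_squares(lambda th: model(th, X) - y, x0=[0.5, 0.1])
    return res.x

# Set 1: full 2^2 factorial design; Set 2: one corner replicated 4 times.
full = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
corner = np.tile([[1.0, 1.0]], (4, 1))

print("full design :", fit(full))
print("one corner  :", fit(corner))
```

If Set 2 nonetheless wins in your case, possible explanations include higher parameter sensitivity at that particular corner, or replication there averaging down the noise exactly where the model is most informative, rather than ill-posedness of the model itself.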
I have applied two different frameworks, a deterministic one (very fast simulated annealing, VFSA) and a probabilistic one (sampling), to solve a highly non-linear inverse problem with 6 unknown model parameters to be optimised. As you know, the result of VFSA is a single solution for each model parameter, while the result of the probabilistic algorithm is a set of solutions sampled from the posterior. I expected the solution obtained by the deterministic method to be captured by the samples obtained by the probabilistic approach, with no significant discrepancy, but these solutions are far from each other!
For both methods, the prior, the forward model, and the noise level are the same. I defined the prior in the probabilistic method as uniformly distributed, whereas in VFSA the model parameters are updated by sampling from a Cauchy distribution. What causes this discrepancy? Is there any justification for it? Is it usual, and what causes it in such problems?
The attached figure shows the cloud of samples from the probabilistic method, their mean (black dot), and the optimised value from the deterministic algorithm (red dot).
Thanks for your comments.
Deep learning and machine learning methods have improved substantially over the years. We are now at a point where we have enough computational resources, databases, and methods to learn more about (recorded) phenomena. This change is already a reality in image reconstruction.
Will this replace us in the task of improving image reconstruction methods? Let us know your opinion.
The residual functional will be convex because the inverse problem is linear, so the iterative process will converge. I think it is easy to prove that the convergence rate of the minimization process is O(1/k). Convergence to a solution, or to a set of solutions, will depend on whether a uniqueness theorem holds.
Epilepsy is a chronic brain disorder characterized by recurrent seizures. Globally, about 50 million people live with it, while roughly 2.5 million new cases are diagnosed annually.
Unfortunately, one-third of cases show resistance to the anti-epileptic drugs (AEDs) prescribed by the physician. In these cases surgical intervention becomes a must, so we need to precisely localize the region involved in the disease and assess the patient's eligibility for such surgery.
Many functional neuroimaging techniques have been used to perform this kind of assessment (MRI, PET, SPECT, etc.), but these methods lack the high temporal resolution needed for transient events such as spikes, whereas EEG provides it. Hence, much research has been conducted on using EEG alongside signal-processing techniques to provide a better understanding of the spatio-temporal activity of the brain (solving the inverse problem).
I am interested in time-varying source localization. Could anyone kindly suggest some literature reviews related to this issue?
Thank you in advance.
Is the dispersion coefficient dependent on concentration in the ADE (advection-diffusion equation)? If yes, what is its relation or equation as a function of concentration? Regards, Azade
A digital image is a collection of sensor values arranged in a grid. Each sensor has a field of view in the original scene and generates a value based on the phenomenon it measures.
This phenomenon can be visible light (conventional digital images), infrared light (infrared images), positron emission (PET scans), secondary electrons (scanning electron microscopy), or scattered microwaves (synthetic aperture radar, SAR).
In conventional image processing, a matrix is used to represent these values. For inverse problems we convert this matrix to a vector and then use the normal equations or regularization-based methods.
Some PDE-based methods use kernels to process images via 2D convolution. Sometimes we treat an image as a point cloud, construct a graph based on some distance measure, and then study the Laplacians (diffusion maps) or the Hamiltonian operators of these graphs. The eigenvalues of these operators lead to the notions of heat kernel signatures and wave kernel signatures.
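As a concrete instance of the graph view, here is a minimal sketch (unit edge weights and a hypothetical 8x8 image; intensity-dependent weights would replace the -1 entries) that builds the 4-neighbour grid Laplacian and evaluates a heat kernel signature from its eigendecomposition:

```python
import numpy as np

def grid_laplacian(h, w):
    """Combinatorial Laplacian L = D - W of the 4-neighbour grid graph
    underlying an h-by-w image (unit edge weights)."""
    n = h * w
    L = np.zeros((n, n))
    idx = lambda i, j: i * w + j
    for i in range(h):
        for j in range(w):
            for di, dj in ((1, 0), (0, 1)):   # right and down neighbours
                ni, nj = i + di, j + dj
                if ni < h and nj < w:
                    a, b = idx(i, j), idx(ni, nj)
                    L[a, b] = L[b, a] = -1.0
                    L[a, a] += 1.0
                    L[b, b] += 1.0
    return L

def heat_kernel_signature(L, t):
    """HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2, computed from
    the eigendecomposition of the graph Laplacian."""
    lam, phi = np.linalg.eigh(L)
    return (np.exp(-lam * t) * phi**2).sum(axis=1)

L = grid_laplacian(8, 8)
hks = heat_kernel_signature(L, t=1.0)
print(hks.shape)  # (64,)
```

The matrix, convolution, and graph views meet here: the same L acts as a sparse matrix on the vectorized image, and its action is a (graph) convolution.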
We have these alternate views of an image based on the application.
What is the most fundamental view of an image which can bring all these different notions together?
Does such a notion exist in literature?
In particular, what are the resolution limits of radiation planning as applied to cancer therapy? Do these limits depend only on the equipment, or are we starting to see some limitations due to the ways in which the inverse problem of mathematical optimization is solved?
Can anyone recommend a good paper or book that describes the 'layer peeling' approach to inverse scattering for the 1D Schrödinger equation in a very simple procedural style? That, or some MATLAB code? I am of course aware of papers such as "Differential methods in inverse scattering" by A. M. Bruckstein et al. (1985), but I wondered if there is any alternative literature.
The peaks at small T2 appear less intense than the peaks at higher T2, even though the peak areas are correctly evaluated. This suggests a relaxation-time-dependent distortion of the reproduced peak line width.
More details are attached!
I use FDM for the forward model and a genetic algorithm for the inverse problem. I want to estimate a parameter at each node of the FDM grid via the inversion. When the parameter is homogeneous, the inversion estimates it with good accuracy, but for a heterogeneous parameter (a different value at each node of the grid) it does not give good results. I would like to know whether there is a solution to this problem.
We know that an inverse problem has the general form d = Gm, where d is our data, m is our model, and G is the coefficient matrix.
I want to set up the coefficient matrix for a non-linear problem in seismology, but I don't know how to do that. In linear problems we use Green's functions to set up the coefficient matrix, but I have no idea about non-linear problems. Can you please point me to any references? Any paper or website would do.
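In the nonlinear case there is generally no fixed coefficient matrix; a common approach is to linearize. For d = g(m), the sensitivity (Jacobian) matrix G_ij = dg_i/dm_j plays the role that the Green's-function matrix plays in the linear case, and it is recomputed at each iterate, e.g. in Gauss-Newton. A sketch with a finite-difference Jacobian and a toy, non-seismological forward model:

```python
import numpy as np

def jacobian_fd(g, m, eps=1e-6):
    """Sensitivity matrix G_ij = d g_i / d m_j by forward differences."""
    d0 = g(m)
    G = np.empty((d0.size, m.size))
    for j in range(m.size):
        m_pert = m.copy()
        m_pert[j] += eps
        G[:, j] = (g(m_pert) - d0) / eps
    return G

def gauss_newton(g, d_obs, m0, n_iter=20, damp=1e-8):
    """Iteratively linearized least squares: at each step solve
    (G^T G + damp*I) dm = G^T (d_obs - g(m)) and update m."""
    m = m0.astype(float).copy()
    for _ in range(n_iter):
        G = jacobian_fd(g, m)
        r = d_obs - g(m)
        m += np.linalg.solve(G.T @ G + damp * np.eye(m.size), G.T @ r)
    return m

# Hypothetical toy forward model (not a seismological one).
g = lambda m: np.array([m[0]**2 + m[1], np.sin(m[0]) + 2 * m[1]])
m_true = np.array([1.0, 2.0])
m_est = gauss_newton(g, g(m_true), m0=np.array([0.5, 0.5]))
print(m_est)
```

In seismology the Jacobian is usually obtained from perturbation theory or adjoint methods rather than brute-force finite differences, but the structure of the iteration is the same.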
I am working on the EEG inverse problem in the visual area. However, the number of electrodes in my lab is limited, so I place the electrodes in separate recording sessions to predict sources over a wide area. Are there papers using this approach?
I've been using Curry 7 extensively and very much favor its capability in EEG source analysis (with different methods available for solving inverse problems). However it seems to be fabulously expensive to have it for personal use.
Now, I am looking for open-source toolboxes that come with methods for solving the forward/inverse problems with high precision.
I am aware of some toolboxes such as FieldTrip, LORETA, NFT, SPM8, but I don't know their pros and cons.
My research will focus on localizing sources associated with EEG oscillatory responses (rhythms not ERPs) for BCIs.
Any response is highly appreciated.
Usually, Kalman filtering is applied to tracking state variables through nonstationary (time-dependent) measurements. However, considering a case where the measurements are taken at steady state (thus not changing at all), is it possible to use linear/nonlinear Kalman filtering to quantify the system's state?
If so, what would be a reasonable stopping criterion for it?
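One hedged way to set this up: model the static state with identity dynamics x_{k+1} = x_k and feed the filter the same steady-state measurement repeatedly; the filter then reduces to recursive least squares, and the size of the state update (or of the covariance) gives a natural stopping criterion. A sketch (all matrices below are toy placeholders):

```python
import numpy as np

def steady_state_kf(z, H, R, x0, P0, q=0.0, tol=1e-6, max_iter=10000):
    """Linear Kalman filter with identity dynamics x_{k+1} = x_k (process
    noise q*I), fed the same steady-state measurement z repeatedly; this
    reduces to recursive least squares.  Stops when the state update
    falls below tol; returns (x, P, number_of_steps)."""
    x, P = x0.astype(float).copy(), P0.copy()
    n = len(x)
    for k in range(1, max_iter + 1):
        P = P + q * np.eye(n)                    # predict (F = I)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        dx = K @ (z - H @ x)                     # innovation-driven update
        x = x + dx
        P = (np.eye(n) - K @ H) @ P
        if np.linalg.norm(dx) < tol:             # stopping criterion
            break
    return x, P, k

# Hypothetical example: estimate a 2-vector from a repeated
# 2-component steady-state measurement with noise covariance R.
H = np.eye(2)
z = np.array([1.0, -2.0])
x, P, steps = steady_state_kf(z, H, R=0.1 * np.eye(2),
                              x0=np.zeros(2), P0=10.0 * np.eye(2))
print(x, steps)
```

With q = 0 the covariance shrinks roughly like 1/k, so a tolerance on the state update (or on trace(P)) terminates in finitely many steps; with q > 0 the filter settles at a nonzero steady-state covariance instead.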
Currently I am working on 2-D nonlinear joint inversion of VLF and VLF-R data. The VLF-R data I am using are on a log scale, ranging from 0 to 5, and their sensitivity matrix is calculated taking the log of both model and data; the VLF data are on a linear scale, ranging from -40 to 40, and their sensitivity matrix is calculated with the model in log and the data linear. How should I assign different weights to the different data sets in this joint inversion problem?
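A common hedged recipe for this situation is to normalize each data set by its own noise standard deviation, so that all residuals are in comparable dimensionless units regardless of whether they were formed in log or linear space, and then stack the weighted Jacobians. A sketch with made-up arrays and noise levels:

```python
import numpy as np

def weighted_joint_system(J_list, r_list, sigma_list):
    """Stack several data sets into one least-squares system, weighting
    each by 1/sigma (its noise standard deviation) so that all residuals
    are expressed in comparable units of standard deviations."""
    Jw, rw = [], []
    for J, r, sigma in zip(J_list, r_list, sigma_list):
        Jw.append(J / sigma)
        rw.append(r / sigma)
    return np.vstack(Jw), np.concatenate(rw)

# Hypothetical example: VLF-R residuals in log units (std ~0.05) and
# VLF residuals in linear units (std ~2.0); both constrain 3 parameters.
rng = np.random.default_rng(0)
J1, r1 = rng.normal(size=(6, 3)), rng.normal(scale=0.05, size=6)
J2, r2 = rng.normal(size=(8, 3)), rng.normal(scale=2.0, size=8)
Jw, rw = weighted_joint_system([J1, J2], [r1, r2], [0.05, 2.0])
dm, *_ = np.linalg.lstsq(Jw, rw, rcond=None)
print(dm)
```

An additional relative-weight factor per data set (sometimes scaled by the number of data in each set) can then be tuned so that neither data set dominates the combined misfit.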
I encountered an inverse problem described as follows:
AX = b, an ill-posed problem,
where A is an n-by-m matrix, X is an m-by-1 vector, and b is an n-by-1 vector.
When we solve this equation, the solution is unstable because the condition number of the matrix A is large.
So one adds a damping factor to the diagonal elements of the matrix A^T A.
The damped least-squares solution is X = (A^T A + alpha*I)^(-1) A^T b.
If the vector X (that is, the model parameters) contains only one kind of quantity, the alpha we use can be a constant.
But if X contains several kinds of quantities whose scales differ greatly (e.g., a velocity of 2000 m/s versus a density of 2.65 g/cm3, so 2000 is much larger than 2.65), the damping factors for velocity and for density should not be the same.
Please suggest some useful ways to select the best damping factors!
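One hedged option for the mixed-scale case: nondimensionalize the model vector by reference scales s_j (e.g. 2000 for velocity, 2.65 for density). A single damping factor alpha applied to the scaled variables is then equivalent to per-parameter damping on the originals, so only one alpha has to be tuned. A sketch with a toy matrix:

```python
import numpy as np

def damped_lsq_scaled(A, b, alpha, scales):
    """Damped least squares with per-parameter scaling.

    Substituting X = S Y with S = diag(scales) turns A X = b into
    (A S) Y = b; a single damping factor alpha on Y then corresponds
    to a damping of alpha / s_j^2 on each original parameter X_j."""
    S = np.diag(scales)
    As = A @ S
    Y = np.linalg.solve(As.T @ As + alpha * np.eye(len(scales)), As.T @ b)
    return S @ Y                     # back to physical units

# Hypothetical 2-parameter example: velocity ~2000 m/s, density ~2.65 g/cm3.
A = np.array([[1e-3, 0.8], [2e-3, 0.5], [1.5e-3, 0.9]])
x_true = np.array([2000.0, 2.65])
b = A @ x_true
print(damped_lsq_scaled(A, b, alpha=1e-6, scales=[2000.0, 2.65]))
```

The remaining scalar alpha can then be chosen by standard rules (L-curve, discrepancy principle, or cross-validation over trial values).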
Hi everyone. Lately I have been trying to retrieve two phases simultaneously from two known intensity measurements. I designed a retrieval algorithm, but it stops converging after a few hundred iteration loops, and the retrieved results are not accurate enough, so I have to optimize them. My attempt was to apply a small perturbation to the retrieved phases when the algorithm stops converging and then restart the iteration, but it doesn't work.
Can anyone suggest a methodology or direction for solving my problem?
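If the two intensity measurements are in planes related by a Fourier transform (an assumption that may not match your setup), the classic baseline is the Gerchberg-Saxton error-reduction loop, whose error is provably non-increasing; stagnation is then usually attacked with hybrid input-output variants rather than random perturbations. A minimal GS sketch:

```python
import numpy as np

def gerchberg_saxton(I1, I2, n_iter=200, seed=0):
    """Recover the phases in two planes related by a Fourier transform,
    given only the intensities I1 (object plane) and I2 (Fourier plane).
    Returns the complex object-plane field estimate."""
    rng = np.random.default_rng(seed)
    a1, a2 = np.sqrt(I1), np.sqrt(I2)
    f = a1 * np.exp(1j * rng.uniform(0, 2 * np.pi, I1.shape))
    for _ in range(n_iter):
        F = np.fft.fft2(f)
        F = a2 * np.exp(1j * np.angle(F))   # enforce Fourier magnitude
        f = np.fft.ifft2(F)
        f = a1 * np.exp(1j * np.angle(f))   # enforce object magnitude
    return f

# Synthetic test object with known ground truth.
rng = np.random.default_rng(1)
truth = rng.random((32, 32)) * np.exp(1j * rng.uniform(0, 2*np.pi, (32, 32)))
I1, I2 = np.abs(truth)**2, np.abs(np.fft.fft2(truth))**2
est = gerchberg_saxton(I1, I2)
err = np.linalg.norm(np.abs(np.fft.fft2(est)) - np.sqrt(I2))
print(err)
```

If your two planes are not Fourier pairs (e.g. two defocus planes), the same alternating-projection structure applies with the FFT replaced by the appropriate propagation operator.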
I want to study some inverse problems for parabolic equations with free boundary.
But how does one formulate an inverse problem for a parabolic equation with free boundary? How does one judge whether it is well-posed or ill-posed? And how does one prove existence and uniqueness for the inverse problem?
Apart from all the ground disturbances, especially power lines, cables, stretch, which method would be best suited for estimating the subsurface?
I want to transform data from cylindrical to Cartesian coordinates, and I need an operator R such that Rm = d, where d is the data in Cartesian coordinates and m is the data in cylindrical coordinates.
Note that m and d are not single points but 3D arrays, and I need the operator R in order to solve an inverse problem for the best m in the equation Rm = d.
Any help would be appreciated.
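One hedged way to build such an R explicitly: for each Cartesian grid point, find the nearest cylindrical sample and put a 1 in the corresponding column, giving a sparse matrix (linear interpolation would put up to four weights per row instead). The sketch below does a 2D slice with made-up grid sizes; the 3D version applies the same R per z-slice:

```python
import numpy as np
from scipy import sparse

def build_R(nr, ntheta, nx, ny, r_max):
    """Sparse operator R mapping a cylindrical grid (nr x ntheta, flattened)
    to a Cartesian grid (nx x ny, flattened) by nearest-neighbour sampling:
    d = R @ m.  Cartesian points outside r_max get all-zero rows."""
    xs = np.linspace(-r_max, r_max, nx)
    ys = np.linspace(-r_max, r_max, ny)
    rows, cols = [], []
    for iy, y in enumerate(ys):
        for ix, x in enumerate(xs):
            r, th = np.hypot(x, y), np.arctan2(y, x) % (2 * np.pi)
            if r > r_max:
                continue
            ir = min(int(round(r / r_max * (nr - 1))), nr - 1)
            it = int(round(th / (2 * np.pi) * ntheta)) % ntheta
            rows.append(iy * nx + ix)
            cols.append(ir * ntheta + it)
    vals = np.ones(len(rows))
    return sparse.csr_matrix((vals, (rows, cols)),
                             shape=(nx * ny, nr * ntheta))

R = build_R(nr=16, ntheta=32, nx=21, ny=21, r_max=1.0)
m = np.ones(16 * 32)                 # constant field in cylindrical coords
d = R @ m                            # still constant inside the disk
print(R.shape, d.max())
```

For large 3D grids it is often preferable to keep R matrix-free, e.g. via scipy.ndimage.map_coordinates, supplying the action of R (and of its transpose) to an iterative solver instead of storing the matrix.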
Prof. Tensi identified four types of heat transfer modes (HTMs) that should be taken into account when solving the inverse problem (IP). Theoretically, the same cooling curve of a standard probe can be generated by each single mode, yet the boundary condition for each HTM is quite different. To solve the IP correctly, one must know exactly which HTM takes place during quenching.