# Inverse Problems - Science topic

Explore the latest questions and answers in Inverse Problems, and find Inverse Problems experts.
Questions related to Inverse Problems
Question
I am working on topology optimization for photonic devices. I need to apply a custom spatial filter to the designed geometry to make it fabricable with a CMOS process. I know there are spatial filters that remove pixel-scale and other small features from the geometry; however, I have not seen any custom analytical or numerical filters in the literature. Can anyone suggest a reference to help me with this?
Thanks,
Question
I have a series of shadowgraphy images obtained at different angles from a rotating geometry, and I want to use them to reconstruct the 3D geometry. Theoretically, the values recorded in shadowgraphy images are the sum of Laplacians of the refractive index over planes parallel to the detector, as opposed to the absorption coefficients recorded in X-ray tomography. I have calculated the forward projections from the sum of Laplacians in different views, and now, to reconstruct the geometry, I need to solve the inverse problem. In X-ray tomography the inverse problem is solved analytically by back-projecting the recorded values into the image domain. However, the same principles cannot be applied to shadowgraphy images, since Beer's law is no longer valid here. I am trying to reconstruct a 3D domain containing the refractive indices from 2D shadowgraphy images; could anyone help me with this problem?
Thank you.
Please see the link below for some suggestions
Question
In epidemiological modelling, in order to calibrate a model we require discrete observations of the population compartment functions. If we neglect underreporting and other issues (such as vital dynamics), observing most compartment sizes is relatively easy. The infected (I) population is the integral of new infections per day, reduced by the integral of removals. The same holds for the removed (R) compartment, which consists of the integrated newly recovered and deceased per day. In the classical SIR model, again rejecting the possibility of reinfection, the susceptible (S) class is calculated as the whole population reduced by the infected and removed. Statistics are also collected for other compartments such as quarantined (Q), hospitalized (H), and so on.
But how can one measure the exposed (E) population share? Are there any real-world statistics for it, or can it be calculated from the existing data?
Very interesting to add an exposed/quarantined part to the SIR model.
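Since E is typically not observed directly, one common route is to model it: in an SEIR extension, new symptomatic cases per day equal sigma * E, so E(t) can in principle be back-calculated from incidence data. Below is a minimal forward-simulation sketch; all parameter values are illustrative assumptions, not fitted to any real data.

```python
import numpy as np

def simulate_seir(beta, sigma, gamma, S0, E0, I0, R0, days, dt=0.1):
    """Forward-Euler simulation of a standard SEIR model.

    beta: transmission rate, sigma: incubation rate (1/latent period),
    gamma: removal rate. New symptomatic cases per day are sigma * E,
    which is what links E to observable incidence data.
    """
    N = S0 + E0 + I0 + R0
    S, E, I, R = float(S0), float(E0), float(I0), float(R0)
    history = []
    for _ in range(int(days / dt)):
        new_exposed = beta * S * I / N * dt
        new_infectious = sigma * E * dt
        new_removed = gamma * I * dt
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_removed
        R += new_removed
        history.append((S, E, I, R))
    return np.array(history)

# Illustrative (not fitted) parameters: 5-day latency, 7-day infectious period
traj = simulate_seir(beta=0.5, sigma=1 / 5, gamma=1 / 7,
                     S0=9990, E0=10, I0=0, R0=0, days=100)
```

Because every outflow of one compartment is an inflow of another, the total population is conserved at every step, which is a useful sanity check on any such simulation.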
Question
Hello, everyone.
I am in the middle of solving a linear algebra problem and have reached the point of having an ill-conditioned matrix in the written Lagrangian. To minimize the error function I am using the Tikhonov method; however, after adding epsilon to the components of the matrix I see a considerable difference in the results. I have seen some SciPy libraries in Python that deal with such problems, but I don't know how well they apply. I would really appreciate it if anyone could kindly help me crack the problem.
Kind regards
The phenomenon you experienced is quite natural. Adding epsilon changes the problem, and thus changes the solution. Probably the most used way to find the optimal value of epsilon is the L-curve approach.
Alternatively, you can try to solve the linear problem with the Landweber iteration method. Unfortunately, I am not aware of a Python library that implements it, but I think you could easily program it yourself.
You can find information about the methods on the internet.
Good luck with your problems!
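For reference, the Landweber iteration is only a few lines of NumPy. This is a generic sketch on a synthetic system (step size and iteration count are illustrative), not a drop-in for the Lagrangian problem above:

```python
import numpy as np

def landweber(A, b, iterations=5000, omega=None):
    """Landweber iteration: x_{k+1} = x_k + omega * A^T (b - A x_k).

    Converges for 0 < omega < 2 / ||A||_2^2; stopping the iteration early
    acts as regularization (the discrepancy principle is a common rule).
    """
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        x = x + omega * A.T @ (b - A @ x)
    return x

# Consistency check on a small synthetic system
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = rng.standard_normal(10)
x_hat = landweber(A, A @ x_true)
```

For genuinely ill-conditioned matrices, the number of iterations plays the same role as epsilon in Tikhonov: fewer iterations mean stronger regularization.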
Question
Greetings. I am doing anomaly detection using EIT. The forward problem is solved with FEM in COMSOL. Now I have to start on the inverse problem of EIT (image reconstruction). Can someone suggest which reconstruction algorithm will give the best result? And is MATLAB or Python better for solving it? Thanks in advance.
Thank you Ahmed
Question
The paper describes the basic concepts of the Math Microscope (MM), demonstrates super-resolution (SR) images obtained from the Event Horizon Telescope (EHT), and analyzes the movement of clusters of stars that orbit the black hole. The presence of point objects (single stars) in the SR image allowed us to implement a new breakthrough approach to the problem of SR images of the Powehi black hole within the MM concept. In the paper we review and illustrate new concepts: Invertibility Indicators and Adequacy Characteristics of discrete models of apparatus functions (AFs). With these new concepts, in the inverse problem, for the first time we were able to answer simple questions: What are we dealing with? And have we solved the inverse problem? The paper demonstrates the "manual solution" of the problem of reconstruction of AFs and super-resolution with the MM. In the discussion at the end of the paper, we pose the problem of creating two artificial intelligences for the automated solution of the R&SR problem, with interpretation of the SR results of BH images from the EHT.
Dear Pr Evgeni Terentiev,
A very interesting subject, thank you for sharing with us your experience.
Best regards,
Pr Hambaba
Question
Dear Colleagues and Authors,
plenty of problems in mathematics, economics, physics, biology, chemistry, and engineering, e.g., optics, radar, acoustics, communication theory, signal processing, medical imaging, computer vision, geophysics, oceanography, astronomy, remote sensing, natural language processing, machine learning, non-destructive testing, and other disciplines, can be reduced to solving an inverse problem in an abstract space, e.g., a Hilbert or Banach space. Inverse problems are so called because they start with the results and then calculate the causes. Solving inverse problems is a non-trivial task that involves many areas of mathematics and numerical techniques. In cases where the problem is ill-posed, small errors in the data are greatly amplified in the solution, and therefore regularization techniques using parameter choice rules with optimal convergence rates are necessary.
Currently, I am editing a special issue on "Numerical Analysis: Inverse Problems – Theory and Applications 2021" with the Switzerland-based MDPI journal Mathematics.
I would like to draw your attention to this possibility of submitting research articles:
Please let me know if you need any help.
Thank you for your kind consideration,
Christine Böckmann
Dear Prof. Christine Böckmann, that's great. Feel free to invite me as a Reviewer.
Animasaun Isaac. L. (PhD)
Department of Mathematical Sciences,
Fluid Dynamics and Survey Research Group, Center for Research and Development,
The Federal University of Technology, Akure, PMB 704, Nigeria, West Africa.
Tel.: +2348034117546
Question
Interested in numerical techniques for the traditional deterministic (Tikhonov) paradigm, and especially in the information-probabilistic (Bayesian) paradigm, uncertainty propagation analysis, etc.
Attached for your kind perusal.
Please look at Theorem 2.7 (Fréchet derivative) and Theorem 2.6.
Question
This is more of a survey question than a query for precise mathematical detailing. Opinions are welcome!
Question
I'm working on inverse problems on topological indices. I have obtained a partial answer to my question, but I need to generalize this result. Please help me in this regard.
Thank you
SMH
What is the application for this inverse problem?
Question
Hi,
I am interested in image reconstruction and I mostly go to IEEE NSS-MIC and Fully3D.
I would like to attend new conferences on applied mathematics for image processing, with topics such as inverse problems, optimisation and machine learning. Any advice?
Many thanks
The best conferences are
1) The Medical Image Computing and Computer-Assisted Intervention Society (MICCAI)
2) Information Processing in Medical Imaging (IPMI)
3) SPIE Medical Imaging
Question
Hello everyone.
I am trying to optimize an algorithm that can predict the size of a network, only some of the nodes of which are visible (available for measurement), by using time-series data from the observable nodes from multiple experiments. The network can be anything from a linear time-invariant system to a highly non-linear, noisy system such as biochemical signalling pathways.
In order to optimise this algorithm I need as much data as possible (by which I mean trajectories of the observed nodes in response to some initial conditions, sampled over some time period). What I am asking is:
is it possible to synthetically generate data with GANs using already available time-series data, instead of doing more experiments, which can be costly at best and impossible at worst?
Obviously there is no GAN that would fit any type of system but for the moment I want to know if
1) this is possible in the first place
2) this is practical, or in other words if the dataset size required to train the network is realistic.
To give an example: for a network with total size N = 100 and observable nodes n=10, there is let's say M = 150 experiments and the data from one experiment is a 10x100 matrix, which holds the states of the 10 nodes sampled at 100 time points. So I will be able to feed the GAN 150 10x100 matrices in order to train it and from that I want to be able to produce multiple other 10x100 matrices from the same probability distribution.
Please excuse the long question and thank you in advance. If something wasn't clear please ask, I will clarify.
Thank you for you patience in explaining the question in detail. I have a better understanding of the problem.
1) I am not sure how you arrived at M = 1.5*N; from what I understand you should be able to sample 1.7310309e+13 node combinations from N=100 and n=10 (100!/(10!x90!)). Am I missing something?
Irrespective of how you arrived at 1.5*N, I do not think 150 samples of initial and final conditions from 10 nodes is enough to train standard GAN architectures.
From my experience using GANs for image translation, I think that in order to fit a generative model to less data you should first size down the parameter count by discarding layers, and add layers such as dropout and batch normalization for faster convergence. I also feel this problem could be better handled by something with a Graph-CNN-style architecture. Something like a
and
Hope this made more sense than my last answer.
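The combination count quoted above is easy to verify directly from the standard library:

```python
from math import comb

# Number of ways to choose n = 10 observable nodes out of N = 100,
# i.e. 100! / (10! x 90!)
n_combinations = comb(100, 10)
print(n_combinations)  # about 1.7310309e+13, as quoted above
```

This only counts which nodes are observed; the number of distinct trajectories is larger still, which supports the point that 150 samples is very little for a standard GAN.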
Question
Hi everyone, hope you all are having wonderful day.
I am working on a baby inverse problem. I have a simple nonlinear model. To estimate the model parameters I can obtain two different sets of simulated experimental data (inverse problem crime) in the following ways:
Set 1. Experimental data obtained from 2^k factorial design. Set 2. Experimental data obtained from one extreme corner of the 2^k factorial design.
Interestingly, when I estimated parameters from these two data sets, I got better estimates when Set 2 was used. I thought that if the model is well conditioned (without ill-posedness), using more experimental data should result in better estimates. My performance criterion is how close the parameter estimates are to their assumed true values. Do you have any idea why I am seeing this behavior?
Hello.
I assume that your calibration process involves using an optimization algorithm (to find a set of parameters values that minimizes the distance between the experimental data and the outcomes of the model).
With many points in the experimental data it is possible that there are more local minima in the optimization problem than with fewer experimental values. Depending on the optimization algorithm, it can be harder to avoid being trapped in a local minimum that is not the global one, and thus non-optimal values will be returned. It is always a good idea to help the algorithm by giving it some a priori estimates of the parameters, though this is not always possible.
Another possibility is that the model is not able to reproduce some of the experimental data (this happens often in real life). As a consequence, these experimental data will always contribute a lot to the distance to minimize and may force the algorithm to search for the parameter values in a bad area, giving poor parameter values.
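One cheap safeguard against such local minima is a multi-start strategy: run the local optimizer from several random initial guesses inside the parameter bounds and keep the best fit. A generic sketch with SciPy (the toy misfit function and bounds below are made up for illustration):

```python
import numpy as np
from scipy.optimize import minimize

def multistart_minimize(misfit, bounds, n_starts=20, seed=0):
    """Run a local optimizer from several random starting points inside the
    bounds and keep the best result, a simple guard against local minima."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(misfit, x0, method="L-BFGS-B", bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best

# Toy misfit with many local minima (the global minimum is 0 at the origin)
def misfit(p):
    return float(np.sum(p ** 2 + 2.0 * (1.0 - np.cos(2.0 * np.pi * p))))

result = multistart_minimize(misfit, bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

In a real calibration, `misfit` would be the distance between the model output and the experimental data; the rest of the sketch is unchanged.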
Question
I have applied two different frameworks, i.e. deterministic (very fast simulated annealing, VFSA) and probabilistic (sampling) algorithms, to solve a highly non-linear inverse problem with 6 unknown model parameters to be optimised. As you know, the result of VFSA is a single solution for each model parameter, while the result of the probabilistic algorithm is a set of solutions sampled from the posterior. I expected that the solution obtained by the deterministic method would be captured by the samples obtained by the probabilistic approach and that the discrepancy would not be significant, but these solutions are far from each other!
For both methods, the prior, the forward model and the noise level are the same. Actually, I defined the prior in the probabilistic method as uniformly distributed, but in VFSA the model parameters are updated by sampling a Cauchy distribution. What causes this discrepancy? Any justification for this matter? Is it usual, and what causes it in such problems?
P.S.
The attached figure shows the cloud of samples from the probabilistic method, their mean (Black dot) and the optimised value by deterministic algorithm (Red dot)
Hamed Heidari Yes, I think so.
Question
Deep learning and machine learning methods have improved substantially over the years. We are now at a point where we have enough computational resources, databases, and methods to learn more about (recorded) things. This change is already a reality in image reconstruction.
Will this replace us in the task of improving image reconstruction methods? Let us know your opinion.
Specifically, deep learning has a lot of potential to solve the inverse problem, or at least at its current stage is likely to give better results than its human counterpart.
Two examples from my research on image reconstruction and extraction of unavailable information are shown below:
Also, in the past I developed algorithms for a defense agency that reconstruct 3D scenes from 2D ground-penetrating radar data. Thus, from my experience, as far as image reconstruction goes, with the current understanding of DL we can achieve better results.
For further reference I recommend reading on Generative adversarial Networks, Deep Fakes etc.
Sir, considering your expertise in medical imaging: did you ask this question with any specific problem statement in mind (other than the inverse problem)?
It would be worthwhile for me to know of any related problem statement in medical applications.
Question
The residual functional will be convex because the inverse problem is linear, so the iterative process will be convergent. I think it is easy to prove that the convergence rate of the minimization process is 1/k. Convergence to a solution or a set of solutions will depend on whether a uniqueness theorem holds.
Dear Prof. Mohammed Sabah Hussein
Thank you very much for your reply. I understand your difficulties. But it would be very interesting to see your research on a real practical problem with realistic parameters, not just a model example. A practical statement and realistic parameters can push you toward a very interesting study and result. Second, I would like to advise you to use the conjugate operator method to solve your problem (see https://drive.google.com/file/d/0B2IhUwjYFHEBOUFSTVVfeHhlWFk/view). If you are interested in this method, you can easily contact me at my work email karchevs@math.nsc.ru
Question
Hello everyone,
Epilepsy is a chronic brain disorder characterized by recurrent seizures. Globally, 50 million people live with it, while annually about 2.5 million new cases are diagnosed.
Unfortunately, one third of cases show resistance to the anti-epileptic drugs (AEDs) prescribed by the physician. In these cases surgical intervention becomes a must, so we need to precisely localize the region involved in the disease and assess the patient's eligibility for such surgery.
Many functional neuroimaging techniques have been used to perform this kind of assessment (such as MRI, PET, SPECT, etc.), but these methods provide a low temporal resolution, which is needed for transient events such as spikes, while EEG can provide a high temporal resolution. Hence, much research has been conducted on how to use EEG alongside signal-processing techniques to provide a better understanding of the spatio-temporal activity in the brain (solving the inverse problem).
I am interested in time varying source localization. So kindly, can any one of you suggest some literature review related to this issue?
Thank you in advance.
Hello Sajidah,
you may wish to review the literature on the application of eLORETA-based electroencephalography software with Dynamic Electrical Cortical Imaging. Please see the relevant link on this technology below:
Kind regards,
tatyana
Question
Is the dispersion coefficient dependent on concentration in the ADE (advection-diffusion equation)? If yes, what is its relation or equation versus concentration? Regards, Azade
It does occur. I haven't encountered it in my work.
See, for example,
The usual web searches will find more instances.
Diffusion is related to dispersion. See
Question
Is every design problem an inverse problem?
I agree with Samuel Grauer. Most generally, an inverse problem is any problem in which one estimates a quantity-of-interest by measuring a different quantity (e.g. measuring the current from a thermocouple to determine temperature). That being said, the term "inverse problem" is really reserved only for these problems when they are also "ill-posed", meaning cases where: (i) a solution may not exist, (ii) the solution is not unique, or (iii) the solution changes considerably with small changes to the boundary conditions.
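Point (iii), the instability, is easy to demonstrate numerically with a classically ill-conditioned operator such as the Hilbert matrix. This is a standard textbook example, not tied to any particular application above:

```python
import numpy as np
from scipy.linalg import hilbert

# The 10x10 Hilbert matrix has a condition number of roughly 1.6e13
A = hilbert(10)
x_true = np.ones(10)
b = A @ x_true

# Perturb the data by a relative amount of roughly 1e-10 ...
rng = np.random.default_rng(1)
b_noisy = b + 1e-10 * rng.standard_normal(10)

x = np.linalg.solve(A, b)
x_noisy = np.linalg.solve(A, b_noisy)

# ... and the solution changes by many orders of magnitude more
print(np.linalg.norm(x - x_true), np.linalg.norm(x_noisy - x_true))
```

A data perturbation far below any realistic measurement noise already destroys the naive solution, which is exactly why regularization is needed for such problems.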
Question
A digital image is a collection of values from sensors arranged in a grid. Each sensor has a field of view in the original scene and generates a value based on the phenomenon it measures.
This phenomenon can be visible light (conventional digital images), infrared light (infrared images), positron emission (PET scans), secondary electrons (scanning electron microscopy), or scattered microwaves (synthetic aperture radar, SAR).
In conventional image processing, a matrix is used to represent these values. For inverse problems we convert this matrix to a vector and then use the normal equations or regularization-based methods.
Some PDE-based methods use kernels to process images via 2D convolution. Sometimes we use the notion of images as point clouds and construct a graph based on some distance measure. Then we study the Laplacians (diffusion maps) or the Hamiltonian operators of these graphs. The eigenvalues of these operators lead to the notions of heat kernel signatures and wave kernel signatures.
We have these alternate views of an image based on the application.
What is the most fundamental view of an image which can bring all these different notions together?
Does such a notion exist in literature?
There are many other techniques besides the matrix representation of an image.
The first is the FFT (Fast Fourier Transform),
the second is the DCT (Discrete Cosine Transform),
and the third is the DWT (Discrete Wavelet Transform).
They all represent the image in a mathematical form.
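As a concrete example, the DCT gives an orthonormal, exactly invertible representation of an image (the basis of JPEG-style compression). A minimal SciPy sketch on a synthetic image:

```python
import numpy as np
from scipy.fft import dctn, idctn

# A small "image" as a matrix of sensor values
rng = np.random.default_rng(0)
image = rng.random((8, 8))

# Represent the image by its type-II DCT coefficients (orthonormal)
coeffs = dctn(image, norm="ortho")

# The representation is lossless: the inverse transform recovers the image
reconstructed = idctn(coeffs, norm="ortho")
```

Because the orthonormal DCT preserves energy (Parseval), discarding small coefficients yields a controlled, compressed approximation of the image.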
Question
Conjugate gradient method for PDE
I would recommend Shewchuk's "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain".
Question
Could it be like FEM, meshless, statistical inverse problem, differential evolution and how?
Thank you for all ideas.
I am not sure which kind of data you mean by "raw". Identification methods necessarily use real measured data, which have to be conditioned for the identification (e.g. drift and mean value elimination). Anyway, there are several methods for the parameter estimation of PDEs. See for example (and the references given in these papers):
Question
In particular, what are the resolution limits of radiation planning as applied to cancer therapy? Do these limits depend only on the equipment, or are we starting to see some limitations due to the ways in which the inverse problem of mathematical optimization is solved?
The most outstanding limit, I would say, is related to the trade-offs we still need to make between accuracy and speed in the radiation transport calculations, such as the Papanikolau approximation in convolution/superposition methods (see for instance Phys Med Biol 44, R99). The optimization of fluences and beam shapes in IMRT/VMAT isn't much of a problem nowadays.
Question
Can anyone recommend a good paper or book which describes the 'layer peeling' approach to inverse scattering of the 1d Schrodinger equation using a very simple procedural style? That or some MATLAB code? I am of course aware of the papers such as "Differential methods in inverse scattering" by AM Bruckstein et al. (1985) but wondered if there was any alternative literature.
Do you have the code for the Inverse scattering method?
Question
The peaks at small T2 appear less intense than the peaks at higher T2 even if the peaks area is correctly evaluated. This suggests a relaxation time dependent distortion of the reproduced peak line-width.
More details are attached!
If it helps, I recommend reading DOI: 10.1002/jmri.24870, in which the dependency of biexponential T2 values on the choice of echo times is investigated. There might be estimation errors. If after reading you find it relevant and have further questions, do not hesitate to contact me.
Question
I use FDM for the forward model and a genetic algorithm for the inverse problem. I want to estimate a parameter at each node of the FDM grid. With a homogeneous parameter the inversion estimates it with good accuracy, but for a heterogeneous parameter (a different value at each node of the grid) it does not give a good result. I want to know whether there is a solution to this problem.
Hi Guido,
The inverse problem I solve is the estimation of thermal conductivity (which I assume heterogeneous) from surface temperature; the temperature data come from a simulation. For the objective function I minimize the difference between the measured and calculated temperature at each node of the mesh. The thermal conductivity is constrained by maximum and minimum values.
I want to estimate the conductivity at each point of the uniform mesh (I assume the conductivity differs at each point), which is why I solve the inverse problem for every point, but it gives false estimates.
Question
We know that an inverse problem has a general form like:
Gm=d
where "d" is our data, "m" is our model and "G" is the coefficient matrix.
I want to set up the coefficient matrix for a non-linear solution in seismology, but I don't know how I should do that. In linear solutions we use Green's functions to set up the coefficient matrix, but I have no idea about non-linear problems. Can you please suggest any references for this? Any paper or website would do.
Best,
Kamyar
Dear Kamyar:
You must linearize the problem using a Taylor series expansion, then conduct the inversion process on the changes or increments of the variables.
You can read chapter 6 of Lay and Wallace (1995), Modern Global Seismology, or Tarantola (2005), Inverse Problem Theory.
To obtain the slip distribution on a fault plane, you could try the Kikuchi and Kanamori method: http://wwweic.eri.u-tokyo.ac.jp/ETAL/KIKUCHI/
Best regards:
Cesar Jimenez (cjimenezt@unmsm.edu.pe)
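The linearization Cesar describes is the Gauss-Newton scheme: Taylor-expand the forward model about the current model m, solve the linearized system G dm = d - g(m) for the increment dm, update m, and repeat. A toy sketch with a made-up exponential forward model (purely illustrative, not a seismological one):

```python
import numpy as np

def gauss_newton(forward, jacobian, d_obs, m0, iterations=25):
    """Iteratively linearized inversion: at each step, Taylor-expand the
    forward model about the current m and solve G dm = d_obs - g(m)."""
    m = np.array(m0, dtype=float)
    for _ in range(iterations):
        residual = d_obs - forward(m)
        G = jacobian(m)  # sensitivity matrix at the current model
        dm, *_ = np.linalg.lstsq(G, residual, rcond=None)
        m = m + dm
    return m

# Toy nonlinear forward model: d_i = m0 * exp(-m1 * t_i)
t = np.linspace(0.0, 2.0, 15)

def forward(m):
    return m[0] * np.exp(-m[1] * t)

def jacobian(m):
    e = np.exp(-m[1] * t)
    return np.column_stack([e, -m[0] * t * e])

m_true = np.array([2.0, 1.5])
m_est = gauss_newton(forward, jacobian, forward(m_true), m0=[1.5, 1.2])
```

In a seismological application, `forward` would be the nonlinear wave-propagation model and `jacobian` its sensitivity matrix; the iteration structure is the same.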
Question
.
Have you tried Matlab (iradon()) or Mathematica?
Give a look to the attached paper
Best wishes,
Gianni
Question
Dear all
I am trying the EEG inverse problem in the visual area. However, the number of electrodes in my lab is limited, so I place the EEG electrodes in separate recordings to predict a wide EEG source. Are there papers using this approach?
If I remember correctly, Michel and colleagues claim that at least 60 electrodes are needed to reconstruct the scalp signal in source space, but Trujillo-Barreto et al show that their Bayesian Model Averaging method (and variable resolution electromagnetic tomography: VARETA) was able to reconstruct sources quite well with 19 electrodes (as compared to 128 electrodes). Maybe their articles are interesting for you.
Trujillo-Barreto, N. J., Aubert-Vázquez, E., & Valdés-Sosa, P. A. (2004). Bayesian model averaging in EEG/MEG imaging. Neuroimage, 21(4), 1300-1319.
Bosch-Bayard, J., Valdes-Sosa, P., Virues-Alba, T., Aubert-Vazquez, E., John, E. R., Harmony, T., . . . Trujillo-Barreto, N. (2001). 3D statistical parametric mapping of EEG source spectra by means of variable resolution electromagnetic tomography (VARETA). Clinical EEG (electroencephalography), 32(2), 47-61.
Question
I've been using Curry 7 extensively and very much favor its capability in EEG source analysis (with different methods available for solving inverse problems). However it seems to be fabulously expensive to have it for personal use.
Now I am looking for open-source toolboxes that come with methods for solving forward/inverse problems with high precision.
I am aware of some toolboxes such as FieldTrip, LORETA, NFT, SPM8, but I don't know their pros and cons.
My research will focus on localizing sources associated with EEG oscillatory responses (rhythms not ERPs) for BCIs.
Any response is highly appreciated.
Berdakh.
If you don't restrict yourself to MATLAB, you can have a look at MNE.
It supports many inverse methods (MNE, dSPM, sLORETA, LCMV, DICS, MxNE, single dipole fit and soon RAP-MUSIC).
The language used for scripting is Python.
Question
Usually, Kalman filtering is applied to tracking state variables through nonstationary (time-dependent) measurements. However, considering a case where the measurements are taken at steady state (thus not changing at all), is it possible to use linear/nonlinear Kalman filtering to quantify the system's state?
If so, what would be a reasonable stopping criterion?
César,
as far as I understand, you mean estimation of the system's state when the output does not change with time. In simple terms, observability means the ability to unambiguously compute the state vector from the given output. If the derivative happens to be zero, you arrive at some set of (non-linear) equations. Since they may be non-linear, you can use some optimization technique to find an approximate solution, say RLS:
If this is really your situation, I cannot see why you have to use a KF.
Remark: "steady state" does not refer to the evolution and observation models; that usage is wrong. What you mean by that is time-varying versus time-invariant systems. In the former case, you can use the Kalman-Bucy filter. Steady state refers to critical points.
Regards
Question
Currently I am working on 2-D nonlinear joint inversion of VLF and VLF-R data. The VLF-R data I am taking on a log scale, ranging from 0 to 5, and its sensitivity matrix has been calculated by taking the log of both model and data; but the VLF data are on a linear scale, ranging from -40 to 40, and its sensitivity matrix has been calculated with a log model and linear data. How should I weight the different data sets in this joint inversion problem?
You can introduce a weighting matrix with the reciprocals of the standard deviations of the data as diagonal elements. Your data contain the VLF-R data on a log scale, so you should take the standard deviation of the log of the VLF-R data, and the standard deviation of the VLF data on the linear scale.
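In matrix form, this suggestion amounts to solving W G m = W d with W = diag(1/sigma_i). A generic sketch with synthetic data (the sigma values below are illustrative; in your case they would be the standard deviations of the log-scale VLF-R and linear-scale VLF data, respectively):

```python
import numpy as np

def weighted_lstsq(G, d, sigma):
    """Weighted least squares: solve W G m ~= W d, where W is diagonal with
    the reciprocals of the data standard deviations. Each datum then counts
    in units of its own noise level, putting data sets of very different
    scales on an equal footing."""
    W = np.diag(1.0 / np.asarray(sigma, dtype=float))
    m, *_ = np.linalg.lstsq(W @ G, W @ d, rcond=None)
    return m

# Two "data sets" of very different noise levels stacked in one system
rng = np.random.default_rng(0)
G = rng.standard_normal((40, 3))
m_true = np.array([1.0, -2.0, 0.5])
sigma = np.concatenate([np.full(20, 0.01),   # precise data set
                        np.full(20, 10.0)])  # noisy data set
d = G @ m_true + sigma * rng.standard_normal(40)
m_est = weighted_lstsq(G, d, sigma)
```

Without the weighting, the large-amplitude noisy rows would dominate the misfit; with it, the precise data set carries the appropriate influence.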
Question
I encountered an inverse problem described as follows:
Ax = b, an ill-posed problem,
where A is an n-by-m matrix, x is an m-by-1 vector, and b is an n-by-1 vector.
When we solve the equation, the solution is unstable because the condition number of the matrix A is large.
Some authors therefore add a damping factor to the diagonal elements of the matrix A^T A. The damped least-squares solution is
x = (A^T A + alpha^2 I)^(-1) A^T b.
If the vector x (the model parameters) contains only one kind of data, the alpha we use can be constant.
But if x contains several kinds of data whose scales differ greatly (e.g. a velocity of 2000 m/s and a density of 2.65 g/cm^3, so 2000 is much larger than 2.65), the damping factors for velocity and density will not be the same.
Please help me with some useful ways to select the best damping factors!
In my opinion the best way is an empirical estimate from the data-variance versus model-variance trade-off curve, which you can construct from several runs, each with a different damping factor.
After runs over a wide span of damping factors, you construct the trade-off curve between the data variance and the model variance for the full range of possible values of the damping parameter; the optimal damping is then chosen at the lower-left corner (and/or minimum) of the trade-off curve, which yields the best compromise between misfit and model variance. In the literature there are other empirical schemes based on the analysis of trade-off curves of the numerical and statistical results of runs with different damping factors.
As you say, in your case the data scale with different scale factors. Here I advise you to rescale (if the formulation of your inversion procedure allows it) the density or velocity by a factor of 10^3 (similar to a unit change) so the scales are comparable.
If you perform an integrated inversion (i.e. separate inversions, as I suppose for seismo-gravity), you can split the problem of the optimal damping search into the corresponding trade-off curve analyses.
Finally, I suppose you are performing a joint seismo-gravimetric inversion; in this case the dominant problems are a) the parameterization of the two models (how to do it) and b) the relationship between the two sets of model parameters (it depends on whether the v-rho relationship is included in the inversion).
I hope this is useful for you.
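The procedure above (damped solutions over a span of alpha, then the misfit versus model-norm trade-off curve) can be sketched in a few lines; the test matrix below is synthetic:

```python
import numpy as np

def damped_solution(A, b, alpha):
    """Damped least squares: x = (A^T A + alpha^2 I)^(-1) A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha ** 2 * np.eye(n), A.T @ b)

def tradeoff_curve(A, b, alphas):
    """Data misfit ||A x - b|| vs model norm ||x|| over a span of damping
    factors; the lower-left corner of this curve is the usual choice."""
    points = []
    for alpha in alphas:
        x = damped_solution(A, b, alpha)
        points.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
    return np.array(points)

# Synthetic ill-conditioned system with noisy data
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10)) * (10.0 ** -np.arange(10))  # scaled columns
x_true = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(30)

curve = tradeoff_curve(A, b, np.logspace(-4, 1, 30))
```

As the damping grows, the misfit increases monotonically while the model norm shrinks; plotting one against the other and reading off the corner is exactly the trade-off-curve (L-curve) selection described above.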
Question
How to deal with direct wave arrival absence/directivity?
The answer highly depends on the nature and the quality of your data. In general, you have to eliminate any artifacts in your data that cannot be modeled by the physics you are using in FWI. A good estimation of the source signature is highly crucial for the success of the FWI application. Most importantly, the initial velocity model has to be accurate enough to avoid cycle-skipping.
Question
Hi everyone, lately I have been trying to retrieve two phases simultaneously from two known intensity measurements. I designed a retrieval algorithm, but it stops converging after a few hundred iteration loops. The retrieval results are not accurate enough, so I have to optimize them. My attempt was to give a small perturbation to the retrieved phases when the algorithm stops converging and then restart the iteration, but it doesn't work.
Can anyone suggest a methodology or direction to solve my problem?
Can anyone suggest me a methodology or direction to solve my problem?
Hi Wei Liu,
this is a tough problem. I know there are many papers published on this issue and all the algorithms seem to work very well, but in fact it really is difficult, especially if you are not using the Fourier transform (FT) but have to do calculations with, for example, the Fresnel transform (FST) or even the linear canonical transform (LCT).
There are several important issues:
First, you have to make sure your numerical code to calculate the FT, FST, or LCT works very well. If it does not, every time you iterate you will accumulate errors.
Second (and related to the point above), if you do not sample correctly you may get problems due to aliasing, or just be inaccurate in your physical description. So be careful.
Third, you have to choose a phase retrieval algorithm. (In fact you first need to decide whether to use an iterative technique or a direct technique like transport of intensity.) There are very many iterative techniques, but I recommend you read the papers of Fienup. He writes well, and he is a very clever guy who figured a lot of this out for optics. Remember it is easy to make things complicated, so start with the simplest algorithms first, like Gerchberg-Saxton.
Fourth, try to find and use a good initial guess. The closer you start your search to the global minimum, the better your chance of getting there. I know this sounds like very frustrating advice - I am telling you that the better you know the answer, the better your chance of finding it - but in physical systems you sometimes know the general shape of the result (is it continuous, what size is it), so make a good guess; it will help a lot.
There are lots of other important issues, but I will finish by mentioning that the choice of cost function (how you quantify convergence) can greatly affect both how you perceive the quality of convergence and the iterative algorithm itself (if you use the cost function as part of the iterative process). A good cost function lets you know how accurate the result really is and may even improve your convergence.
In conclusion - read good papers, try different stuff, and have fun - you may discover something really exciting if you keep at it.
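To make the "start simple" advice concrete, here is a bare-bones Gerchberg-Saxton sketch for the classic two-plane problem (two measured intensities linked by a Fourier transform). It is a generic illustration on synthetic data, not tuned to the two-phase setup in the question:

```python
import numpy as np

def gerchberg_saxton(amp1, amp2, iterations=200, seed=0):
    """Gerchberg-Saxton: alternate between two planes linked by a Fourier
    transform, enforcing the measured amplitude in each plane while keeping
    the current phase estimate."""
    rng = np.random.default_rng(seed)
    field1 = amp1 * np.exp(1j * rng.uniform(0, 2 * np.pi, amp1.shape))
    for _ in range(iterations):
        field2 = np.fft.fft2(field1)
        field2 = amp2 * np.exp(1j * np.angle(field2))  # plane-2 constraint
        field1 = np.fft.ifft2(field2)
        field1 = amp1 * np.exp(1j * np.angle(field1))  # plane-1 constraint
    return field1

# Synthetic test: a random-phase object, keeping only the two intensities
rng = np.random.default_rng(1)
true_field = np.exp(1j * rng.uniform(0, 2 * np.pi, (32, 32)))
amp1 = np.abs(true_field)
amp2 = np.abs(np.fft.fft2(true_field))
recovered = gerchberg_saxton(amp1, amp2)
err = (np.linalg.norm(np.abs(np.fft.fft2(recovered)) - amp2)
       / np.linalg.norm(amp2))
```

A useful property of this scheme (shown by Fienup) is that the amplitude-constraint error never increases from one iteration to the next, though it can stagnate, which is why restarts and good initial guesses matter.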
Question
I want to work on some inverse problems for parabolic equations with a free boundary.
But how does one define the inverse problem for a parabolic equation with a free boundary? How does one judge well-posedness or ill-posedness? How does one prove existence and uniqueness for the inverse problem?
I can mention some papers on this topic:
1) Mykola Ivanchov, A problem with free boundary for the two-dimensional parabolic equation P.17-28 Journal of Mathematical Sciences Volume 171, Issue 1 (2011)
2) Mykola Ivanchov, Tetyana Savitska, An inverse problem for a parabolic equation in a free-boundary domain degenerating at the initial time moment P. 47-64 Journal of Mathematical Sciences Volume 181, Issue 1 (2012)
Here you may find some more references
Question
Apart from all the ground disturbances especially power lines, cables, stretch, which method could be best suitable for estimating the subsurface?
It depends on the target and its depth. The GPR method is good, but it requires corrections for cultural noise. I propose a different method: you can use an electromagnetic device called PL2000 (made in Japan). I have used it successfully, with minimum error; it works better than GPR, but only for iron targets, for example iron gas pipes, power lines, and conductive cables.
Question
I want to transform data from cylindrical to Cartesian coordinates, and I need an operator R such that Rm = d, where d is the data in Cartesian coordinates and m is the data in cylindrical coordinates.
FYI, m and d are not points but 3D arrays, and I need the operator R to solve an inverse problem for the best m in the equation Rm = d.
Any help would be appreciated.
Thanks a lot, Curtis Andrew Corum, you are a life saver. I will look into what you recommend.
Question
Inverse Problem in Pattern Recognition & Image processing.