Tomography - Science method

Imaging methods that produce sharp images of objects located on a chosen plane and blurred images of objects located above or below that plane.
Questions related to Tomography
  • asked a question related to Tomography
Question
4 answers
I was given neutron tomograms of plant roots. I don't want to analyze them manually to decide whether they differ; for example, counting the number of nodes myself would be problematic. I saw that there is a program called ImageJ, but as far as I understand, you need to manually select pieces of roots (branching points, stem thickness and other parameters). Having these data sets, one could probably apply some kind of statistical test. Does it make sense to train a neural network that will count the branching points itself? I have seen articles on computer vision, but my goal is not to invent something of my own; it is to use the most effective and fastest way to compare the topology and quantitative characteristics of two tomograms. We also expect to have about 8 tomograms per root in order to save time. Initially I thought that since these are 3D images we could use some standard parameters for comparing two pictures, but for 3D that is not an option.
Relevant answer
Answer
Ahmad Bakir I will try! Thank you very much!
  • asked a question related to Tomography
Question
3 answers
Hi everyone,
I am working on a 3D first-arrival traveltime tomography problem. As part of this, I have the data as first-arrival times (d, 205,000 entries), model parameters (m, 170,000 entries), and the length matrix (G, 205,000 by 170,000). The length matrix is generated on tetrahedral meshes (0.5 km near the surface and 2.0 km in the deeper parts) because my initial model is complex. The matrix is very sparse, and I am trying to solve the system using the LSMR and LSQR iterative schemes (https://docs.scipy.org/doc/scipy/reference/sparse.linalg.html). However, if the initial model (x0) is set to None, they generate incorrect results. How do I solve this problem?
Best regards,
Bhaskar
Relevant answer
Answer
Hi,
one way to solve such sparse systems in tomography is the family of SIRT and ART algorithms:
Van Der Sluis, A., Van Der Vorst, H.A.
Numerical solution of large, sparse linear algebraic systems arising from tomographic problems.
(1987) Seismic tomography, pp. 49-83. 
You can find applications also in the works of Aldo Vesnaver, Gualtiero Boehm and myself such as
doi: 10.1111/j.1365-246X.1996.tb05274.x
doi: 10.1111/j.1365-2478.2006.00563.x
doi: 10.3997/1873-0604.2017037
and references therein
Many wishes
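If you want to experiment with the SIRT scheme mentioned above before committing to a full implementation, here is a minimal dense-matrix sketch in Python/NumPy (the function name and toy system are mine; for a 205,000 × 170,000 problem you would keep G in scipy.sparse format):

```python
import numpy as np

def sirt(G, d, n_iter=200, relax=1.0):
    """Simultaneous Iterative Reconstruction Technique (basic form).

    G : (m, n) ray-length matrix (dense here purely for illustration)
    d : (m,) observed traveltimes
    """
    row_sum = G.sum(axis=1) + 1e-12   # normalizes each ray's residual
    col_sum = G.sum(axis=0) + 1e-12   # normalizes the back-projection
    x = np.zeros(G.shape[1])
    for _ in range(n_iter):
        residual = (d - G @ x) / row_sum           # per-ray misfit
        x = x + relax * (G.T @ residual) / col_sum # back-project and update
    return x

# Tiny consistent system: 3 rays crossing 2 cells.
G = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
m_true = np.array([2.0, 3.0])
d = G @ m_true
m_est = sirt(G, d, n_iter=500)
```

With the relaxation factor in (0, 2) this iteration converges for consistent systems, and the same loop works unchanged with scipy.sparse matrices.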
  • asked a question related to Tomography
Question
1 answer
This is a subfield of medical physics/molecular biology. The overall aim of the effort is to improve understanding of the molecular architecture of unknown or understudied biological systems, for example the human sperm (flagellum component), using cryo-electron tomography and advanced image-processing workflows.
In this example, male fertility issues arising from accidents, excessive therapeutic radiation and pathological development in puberty call on scientists to solve the structures of key flagellar macromolecular complexes, in order to understand the molecular mechanisms controlling sperm function in both health and disease.
Do you know the current degree of effectiveness, compared to other approaches, and the reliability of structure elucidation of biological systems by cryo-electron tomography?
Relevant answer
Answer
Cryo-electron tomography (cryo-ET) is a technique that allows the study of the 3D structure of cells and tissues under near-native conditions⁴. It can provide unprecedented insights into the inner workings of molecular machines in their native environment, as well as their functionally relevant conformations and spatial distribution within biological cells or tissues¹.
The field of structure elucidation by cryo-ET has evolved rapidly in recent years, thanks to the advances in instrumentation, sample preparation, image processing, and data analysis. Some of the current trends and challenges in the field are:
- Achieving higher resolution and better contrast for cryo-ET images, by using phase plates, direct electron detectors, and improved alignment and reconstruction algorithms¹³.
- Developing new methods and protocols for cryo-ET sample preparation, such as cryo-focused ion beam milling, cryo-correlative light and electron microscopy, and cryo-electron microscopy of vitreous sections¹⁴.
- Applying cryo-ET to study a wide range of biological systems and processes, such as viruses, bacteria, organelles, cytoskeleton, membrane proteins, and macromolecular complexes¹²³.
- Integrating cryo-ET with other complementary techniques, such as mass spectrometry, X-ray crystallography, nuclear magnetic resonance, and computational modeling, to obtain a comprehensive understanding of the structure and function of biological macromolecules¹⁵.
- Disseminating and standardizing cryo-ET data and structures in public databases, such as the Protein Data Bank (PDB) and the Electron Microscopy Data Bank (EMDB), to facilitate data sharing and reproducibility⁵.
(2) High-resolution in situ structure determination by cryo-electron .... https://www.nature.com/articles/s41596-021-00648-5.pdf.
(3) Cryo-electron tomography: 3-dimensional imaging of soft matter. https://pubs.rsc.org/en/content/articlehtml/2011/sm/c0sm00441c.
(4) JoF | Free Full-Text | Preliminary Structural Elucidation of β-(1,3 .... https://www.mdpi.com/2309-608X/7/2/120/htm.
(5) Evolution of standardization and dissemination of cryo-EM structures .... https://www.jbc.org/article/S0021-9258%2821%2900338-0/pdf.
  • asked a question related to Tomography
Question
2 answers
I'm curious, if there's any way to estimate the two-qubit quantum state purity via direct measurements on quantum computer, without the need of full density matrix reconstruction?
I'd see some use of state purity in algorithms, but performing quantum state tomography seems really prohibitive, both runtime-wise and considering that it's a huge decoherence source...
I'll be glad for any suggestions or paper recommendations.
Relevant answer
Answer
Dear Martin,
A little shameless self-promotion of our result on Fredkin gate and measuring purity/overlap:
Stárek, et al., npj Quantum Information 4, 35 (2018), https://www.nature.com/articles/s41534-018-0087-x
Generally speaking, nonlinear functions of a quantum state (density matrix) can be measured by interfering multiple copies of the state and applying parity detection:
Filip, PRA 65, 062320 (2002),
Ekert et al. PRL 88, 217901 (2002).
The same holds for entanglement characterization:
Horodecki, PRL 90, 167901 (2003),
Fiurášek & Cerf, PRL 93, 063601 (2004).
Experimental tests for bipartite states:
Walborn et al. Nature 440, 1022 (2006),
Islam et al. Nature 528, 77 (2015).
Best,
Miroslav
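As a numerical illustration of the identity these swap-test/parity schemes exploit (a NumPy sketch of the math, not an experimental recipe; the helper names are mine): measuring the SWAP/parity observable on two copies of a state gives Tr[S(ρ⊗ρ)] = Tr(ρ²), i.e. the purity, without reconstructing ρ:

```python
import numpy as np

def random_two_qubit_state(rank=2, seed=0):
    """Random rank-limited 4x4 density matrix (Hermitian, trace 1)."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(4, rank)) + 1j * rng.normal(size=(4, rank))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def swap_operator(d):
    """SWAP on two d-dimensional registers: S |i>|j> = |j>|i>."""
    S = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            S[j * d + i, i * d + j] = 1.0
    return S

rho = random_two_qubit_state()
S = swap_operator(4)
# Two-copy estimate vs direct computation of Tr(rho^2):
purity_swap = np.real(np.trace(S @ np.kron(rho, rho)))
purity_direct = np.real(np.trace(rho @ rho))
```

In an experiment the left-hand side is estimated from ancilla statistics of a controlled-SWAP (Fredkin) circuit, which is exactly what the references above implement.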
  • asked a question related to Tomography
Question
5 answers
We are going to use the Heidelberg OCT Spectralis for mouse OCT acquisition. However, although we found several papers reporting the use of this instrument for rodents, we are not able to do it successfully. We have two separate devices: (1) the Heidelberg OCT Spectralis and (2) the Heidelberg HRA 2. The Heidelberg OCT Spectralis has two lenses, a 30° standard objective lens and an anterior segment lens. The Heidelberg HRA 2 has a 55° widefield lens; unfortunately, the 55° lens cannot be mounted on the Heidelberg OCT Spectralis.
As far as we know, given the high dioptre of the mouse eye, we should use the 55° widefield lens. However, using the standard 30° lens we get a rather acceptable cSLO image, but no OCT image is displayed.
Can anyone help solve this problem? We already tried using an additional lens in front of the device lens, but it still does not work; perhaps the total dioptre of the added lens was not enough.
Also, a paper suggests minor software modifications (using Alt+Ctrl+Shift+O in Heidelberg Eye Explorer software) which we could not figure out how that should be done. (Spectral domain optical coherence tomography in mouse models of retinal degeneration. Invest Ophthalmol Vis Sci. 2009 Dec;50(12):5888-95. doi: 10.1167/iovs.09-3724.)
These are some papers about using the Heidelberg OCT Spectralis for rodents:
1- Quantitative Analysis of Mouse Retinal Layers Using Automated Segmentation of Spectral Domain Optical Coherence Tomography Images. Trans. Vis. Sci. Tech. 2015;4(4):9. doi: https://doi.org/10.1167/tvst.4.4.9.
2- Tracking Longitudinal Retinal Changes in Experimental Ocular Hypertension Using the cSLO and Spectral Domain-OCT. Invest. Ophthalmol. Vis. Sci. 2010;51(12):6504-6513. doi: https://doi.org/10.1167/iovs.10-5551.
3- Giannakaki-Zimmermann H, Kokona D, Wolf S, Ebneter A, Zinkernagel MS. Optical Coherence Tomography Angiography in Mice: Comparison with Confocal Scanning Laser Microscopy and Fluorescein Angiography. Transl Vis Sci Technol. 2016 Aug 18;5(4):11. doi: 10.1167/tvst.5.4.11. PMID: 27570710; PMCID: PMC4997887.
Relevant answer
Answer
Hi Danilo. Unfortunately, no. We could only get some preliminary results with a TOPCON device using the 7-line raster mode; however, we could not image the full field of the retina.
  • asked a question related to Tomography
Question
1 answer
What are the potential sources/causes of noise/artifacts in white beam synchrotron X-ray imaging/tomography? The SNR appears to be poor in white beams as compared to monochromatic beam. Is this because of some kind of radiation streaming? Would detector shielding and collimation of incident beam help?
Relevant answer
Answer
I think you are on the right track;
in case of insufficient collimation and insufficient detector shielding, the scattered radiation affects the detector signal, especially via an offset (background) and its correlated noise on the output.
Monochromatic and white beams are produced by different means and will thus cause different ambient X-ray scatter levels, which, for example in the case of strongly absorbing samples, will contribute non-negligibly to the detector output.
In my X-ray set-ups I always checked this scatter offset in the detector signal by
a) putting a beam stop just behind the sample
and
b) just in front of the detector window.
In case (a) you will see how much X-ray scatter enters the detector in total; in case (a+b) you will see how much scatter enters the detector through the housing/shielding. The difference of these signals [ (a) - (a+b) ] gives you the scatter contribution passing through the detector window.
From the strength of that value you might decide to improve your collimation, and from the signal level of (a + b) you may decide to improve your detector shielding...
Good luck and
best regards
G.M.
  • asked a question related to Tomography
Question
1 answer
I need to use regularization in tropospheric tomography to estimate wet refractivity as an unknown.
Relevant answer
Answer
The Landweber and MART (Multiplicative Algebraic Reconstruction Technique) regularization methods are commonly used in image reconstruction and signal processing applications. Although specific implementations may vary, I can provide you with a general guideline to create MATLAB functions for these regularization methods:
1. Landweber Regularization:
The Landweber regularization method is an iterative algorithm for solving ill-posed inverse problems. Here's a basic MATLAB function that implements the Landweber method:
```matlab
function x = landweber(A, b, iterations, stepsize)
% A: system matrix
% b: measured data vector
% iterations: number of iterations
% stepsize: step size / regularization parameter
%           (convergence requires 0 < stepsize < 2/norm(A)^2)
[~, n] = size(A);
x = zeros(n, 1);  % initialize the solution vector
for iter = 1:iterations
    % Landweber iteration: gradient step on ||A*x - b||^2
    x = x - stepsize * A' * (A * x - b);
end
```
You need to provide the system matrix `A`, the measured data vector `b`, the number of iterations, and the step size (regularization parameter). The function updates the solution iteratively using the Landweber iteration formula.
2. MART Regularization:
The MART regularization method is an iterative technique commonly used in computed tomography (CT) image reconstruction. Here's a simple MATLAB function for implementing the MART algorithm:
```matlab
function x = mart(A, b, iterations)
% A: system matrix (entries should be non-negative)
% b: measured data vector (entries should be positive)
% iterations: number of iterations
[m, n] = size(A);
x = ones(n, 1);  % multiplicative updates need a positive start (zeros would stay zero)
for iter = 1:iterations
    % MART-style multiplicative update; eps avoids division by zero
    x = x .* (A' * (b ./ (A * x + eps))) ./ (A' * ones(m, 1));
end
```
You need to provide the system matrix `A`, the measured data vector `b`, and the number of iterations. The function performs the MART iterations, updating the solution based on the MART iteration formula.
Note: These are basic implementations, and you may need to modify them based on the specific problem you are solving. Additionally, consider incorporating suitable stopping criteria and handling specific constraints if required.
Remember to adapt these functions to your specific problem and matrix representations. Also, ensure that you have the necessary system matrix `A` and measured data vector `b` appropriately defined before using these functions.
  • asked a question related to Tomography
Question
1 answer
I am attempting to write a code for traveltime tomography based on the adjoint-state method. I want to know how to solve the adjoint-state equation, and especially how to get the value of the adjoint state using the boundary condition. In the picture, I wish to solve equation 9(b). I cannot understand the purpose of n or how to solve for it.
Relevant answer
Answer
Part (1) of your question
The purpose of the normal vector (n) in equations that describe the behaviour of surface sensors is to specify the direction in which the sensor is sensitive to incoming signals or perturbations. For example, in the case of seismic waves, the normal vector of a surface sensor is used to determine the direction of the particle motion that induces a voltage or current in the sensor.
The normal vector is also used to calculate the geometric factor, a correction factor that accounts for the geometry of the sensor and the direction of the incoming wavefield. In general, the normal vector (n) is an important parameter in equations that describe the behaviour of surfaces and surface sensors, as it determines the orientation and sensitivity of the sensor with respect to the surrounding environment.
____________________________________________________________________________
Part (2) of your question
Acquiring the value of the adjoint state utilizing the boundary condition involves a multi-step process that has a significant impact on the final outcome.
To begin with, it is essential to derive the adjoint equation from the forward problem and apply the appropriate boundary conditions for deriving the adjoint state. Here are the steps that need to be followed:
Derive the adjoint equation: Starting from the forward problem, which could be the wave equation or any other relevant problem, the adjoint equation can be derived. This is achieved by taking the derivative of the forward problem concerning the model parameters and subsequently applying the adjoint operator. It is crucial to note that the adjoint equation possesses a structure similar to the forward problem. The only difference is that the time is reversed, and a new source term, which is derived from the misfit function, is added.
Define the initial condition for the adjoint equation: The initial condition for the adjoint equation is typically the zero state. This means that the adjoint state is zero at the final time, and this condition ensures causality. The zero state corresponds to the fact that the adjoint state is generated by the mismatch between the observed and modelled data.
Apply the boundary conditions: In the same manner as the forward problem, the adjoint equation requires boundary conditions to be well-posed. It is essential to base the boundary conditions for the adjoint equation on the corresponding conditions of the forward problem. For instance, if the forward problem has Dirichlet or Neumann boundary conditions, the adjoint equation will also have the same type of boundary conditions applied at the same locations.
Solve the adjoint equation: With the adjoint equation, initial condition, and boundary conditions properly defined, the next step involves solving the adjoint equation. This can be accomplished through various numerical methods such as finite differences, finite elements, or spectral methods. Solving the adjoint equation enables the derivation of the adjoint state, which can be further used to compute the gradient of the objective function concerning the model parameters.
It is crucial to keep in mind that the steps outlined above may vary depending on the problem, the forward model, and the numerical methods used. The key to obtaining the adjoint state is to derive the adjoint equation and apply the appropriate initial and boundary conditions.
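Step 4 can be sanity-checked on the simplest, linearized case, where the adjoint state is just the back-projected data residual and the gradient of the misfit 0.5‖Gm − d‖² is Gᵀ(Gm − d). This toy NumPy sketch (my own example; the full eikonal adjoint requires the time-reversed PDE solve described above) compares the adjoint gradient with finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(6, 4))   # toy ray-length (sensitivity) matrix
d = rng.normal(size=6)        # "observed" traveltimes
m = rng.normal(size=4)        # current slowness model

def misfit(m):
    r = G @ m - d
    return 0.5 * r @ r

# Adjoint-state gradient: back-project the data residual.
grad_adjoint = G.T @ (G @ m - d)

# Central finite-difference check, component by component.
eps = 1e-6
grad_fd = np.array([(misfit(m + eps * e) - misfit(m - eps * e)) / (2 * eps)
                    for e in np.eye(4)])
```

If the two gradients agree, the adjoint implementation is consistent with the forward model; the same check (a "gradient test") is standard practice for full adjoint-state codes.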
  • asked a question related to Tomography
Question
1 answer
How can I obtain horizontal planes of voxels in tomography using the topography area?
More precisely, what procedure could give this mode its own mathematical model so that I can intersect the rays with that model?
Relevant answer
Answer
Although I wrote the mathematics of the two x-ray anodes to make resection and intersection back in 1978, I am not an expert in CT mathematical modeling. Therefore, if you know each voxel's x, y, and z coordinates, then a horizontal section is to pick up all voxels with the same z-value. A profile section picks up all voxels with the same y-value, and a cross-section picks up all voxels with the same x-value. If you need a higher resolution than the voxel size, you make more sections around the location of interest and do interpolation.
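In code, if the volume is stored as a NumPy array indexed (z, y, x), each of these sections is a single slice (toy volume and index values chosen arbitrarily):

```python
import numpy as np

vol = np.arange(2 * 3 * 4).reshape(2, 3, 4)  # toy voxel volume, axes (z, y, x)

horizontal = vol[1, :, :]   # all voxels sharing one z-value
profile    = vol[:, 2, :]   # all voxels sharing one y-value
cross      = vol[:, :, 3]   # all voxels sharing one x-value
```

For sub-voxel resolution, an interpolating sampler such as scipy.ndimage.map_coordinates can evaluate the volume between voxel centres, matching the interpolation step mentioned above.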
  • asked a question related to Tomography
Question
3 answers
I work with an optical coherence tomography system. I want to detect the boundary between two features in an OCT image using a CNN. Kindly guide me to anyone who has done similar work or published work on this problem.
Relevant answer
Answer
Actually, there are many tools with which you can develop the code. But when we talk about CNNs, it is a long learning journey; it cannot be picked up on the spot. Maybe start by reading some introductions to machine learning and deep learning.
  • asked a question related to Tomography
Question
2 answers
Quantum state tomography can also be demonstrated with simulators, but hardware is very useful for measuring the state.
Relevant answer
Answer
For QST, you need an entangled photon source that generates signal and idler pair photons, two polarization controllers for aligning bases (H/V, D/A, R/L), two polarization beam splitters, a dual-channel single photon counter to measure coincidence counts (CC), and some code to compute the QST from the CC. The most important thing is that you must obtain CCs for 36 combinations of polarization bases of H/V, D/A, and R/L. Here is the detailed explanation (see page 30): http://research.physics.illinois.edu/QI/Photonics/tomography-files/tomo_chapter_2004.pdf
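The 36 settings are just the Cartesian product of the six polarization bases on the signal and idler arms; a trivial Python sketch enumerates the measurement list:

```python
from itertools import product

bases = ["H", "V", "D", "A", "R", "L"]
# One coincidence-count measurement per (signal, idler) basis pair.
settings = list(product(bases, bases))
```

Each pair in `settings` corresponds to one coincidence-count acquisition to feed into the tomographic reconstruction.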
  • asked a question related to Tomography
Question
1 answer
Hello,
I have a large set of data from serial-imaging FIB/SEM tomography of a superalloy microstructure in binarised form. I was wondering whether there is some sort of programme that allows automated measurement using the linear intercept method on those binary images. Ideally, I'd be able to compute not only the mean thickness of the matrix channels, but also the statistical distribution.
Many thanks in advance!
Relevant answer
Answer
You can use the OriginPro program.
Sincerely
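If no off-the-shelf programme fits, the linear intercept measurement itself is easy to script: for each scan line, measure the run lengths of foreground pixels and collect them into a distribution. A minimal NumPy sketch (my own helper, assuming a 2D binary slice with the matrix phase coded as 1):

```python
import numpy as np

def intercept_lengths(binary_image, axis=1):
    """Run lengths of foreground (value 1) pixels along each scan line."""
    lines = binary_image if axis == 1 else binary_image.T
    lengths = []
    for row in lines:
        padded = np.concatenate(([0], row, [0]))
        edges = np.flatnonzero(np.diff(padded))   # run starts and ends
        starts, ends = edges[::2], edges[1::2]
        lengths.extend(ends - starts)
    return np.array(lengths)

img = np.array([[1, 1, 0, 1, 0],
                [0, 1, 1, 1, 0]])
runs = intercept_lengths(img)        # intercept-length distribution
mean_thickness = runs.mean()         # multiply by pixel size in practice
```

Running this slice by slice over the stack gives the full statistical distribution of channel thicknesses, not just the mean.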
  • asked a question related to Tomography
Question
2 answers
The lab in which I am performing in vivo studies doesn't have an imaging facility, so I need to preserve my in vivo samples for some time. How should I preserve these samples for micro-computed tomography analysis and biomechanical testing?
Thanks
Relevant answer
Answer
Sample: an implant embedded in bone.
Thank you sir Gordon L Warren
  • asked a question related to Tomography
Question
7 answers
Positron emission tomography (PET) images the physiology of a tumor as well as its anatomy, which is a major advantage. It is unique compared to other cross-sectional imaging such as computed tomography (CT) or magnetic resonance imaging (MRI): CT or MRI often cannot detect changes at the cellular level, whereas a PET scan can identify such changes in the patient's cells immediately.
To image a tumor using PET or other methods, one exploits established differences in the physiological and metabolic features of tumors and normal tissues. These differences include tumor surface antigens; uptake of DNA precursors such as thymidine and the rate of protein synthesis, which are often increased in tumors compared to normal tissues; and the transport and incorporation of various amino acids, as well as anaerobic and aerobic glycolysis, which differ in tumor cells. In a wide range of tumor types, glucose uptake increases significantly compared to healthy tissues. In a typical PET system, detectors are separated by lead or tungsten septa so that random photons detected in one shot are not matched with photons detected in other shots. In the diagram below, I plotted the average energy of the emitted positrons for several candidate radionuclides. Which of these radionuclides is best for our purposes?
Relevant answer
Answer
The question you mention is very broad! As for which is so-called "better", it really depends on what type of tumour you are testing for or evaluating. As we know, different types of tumours are better analysed with different types of PET scans; my colleagues generally think they (PET scans) are all the same, but you and I know that is not true. Most commonly, when someone is said to have had a PET scan it is assumed to mean F18-FDG, and clinicians are often surprised when it turns out to be something else (!)
  • asked a question related to Tomography
Question
2 answers
Hi,
I am a beginner in electrical impedance tomography (EIT), electrical capacitance tomography (ECT), etc.
I made two SUS plates to measure the impedance of my arm skin with an Agilent high-precision LCR meter. In every measurement, I observe that the initial impedance is high, then slowly decreases and saturates after a long time.
I carefully maintained the skin-electrode contact to avoid motion artifacts. That was quite effective at reducing noise, but the decreasing impedance was still present.
I also changed the SUS plates to Ag/AgCl electrodes. The noise was dramatically reduced, but the starting impedance still looks high.
My question is why the skin impedance continuously decreases, and how I can prevent or compensate for this instability to get a representative impedance value of the skin.
Thank you!
Seongjun
Relevant answer
In fact, you can use any impedance meter to measure the skin impedance. All you need is to stick two metallic probes on the skin and measure the impedance between the two electrodes. There are many RLC meters that measure the impedance of the skin, where it is equivalent to a parallel combination of a resistance and a capacitance. You have to press the electrodes against the skin to avoid air gaps between the electrode and the skin. You can use salty solutions to wet the electrodes. In this way you will get reproducible results.
You can also use the impedance measuring method in the paper at the given link:
Article Four Voltmeter Vector Impedance Meter Based on Virtual Instrumentation
I used this method to measure the skin impedance and it gave good results.
Best wishes
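For reference, the parallel R-C skin model mentioned above has impedance Z(ω) = R / (1 + jωRC), so the measured magnitude falls from R at low frequency toward zero at high frequency. A small NumPy sketch (component values are purely illustrative, not measured skin data):

```python
import numpy as np

def parallel_rc_impedance(R, C, f):
    """Complex impedance of a parallel R-C network at frequency f (Hz)."""
    omega = 2 * np.pi * f
    return R / (1 + 1j * omega * R * C)

R, C = 100e3, 10e-9   # illustrative values only
Z_low = parallel_rc_impedance(R, C, 10.0)     # near DC: |Z| approaches R
Z_high = parallel_rc_impedance(R, C, 100e3)   # high f: capacitance dominates
```

Fitting measured |Z(ω)| to this model at several frequencies gives R and C separately, which is more informative than a single-frequency reading when the contact is drifting.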
  • asked a question related to Tomography
Question
2 answers
Dear all,
Could you recommend any review paper (or book) comparing various downsampling methods applicable to volumetric data (preferably light microscopy or cell tomography data)?
Relevant answer
Answer
Some publications on downsampling for volumetric data:
1. Optimal distribution-preserving downsampling for large biomedical data.
2. Downsampling method for medical datasets (available via CORE, https://core.ac.uk).
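As a concrete example of the simplest volumetric downsampling approach (block averaging, which preserves the mean of the data), here is a NumPy sketch with my own helper name:

```python
import numpy as np

def block_mean_downsample(vol, factor):
    """Downsample a 3D volume by averaging non-overlapping cubic blocks."""
    z, y, x = (s // factor * factor for s in vol.shape)  # trim remainders
    v = vol[:z, :y, :x]
    return v.reshape(z // factor, factor,
                     y // factor, factor,
                     x // factor, factor).mean(axis=(1, 3, 5))

vol = np.arange(64, dtype=float).reshape(4, 4, 4)
small = block_mean_downsample(vol, 2)   # 4x4x4 -> 2x2x2
```

More sophisticated (distribution-preserving or anti-aliased) methods, like those in the papers above, differ mainly in replacing the plain block mean with a smarter per-block statistic or filter.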
  • asked a question related to Tomography
Question
2 answers
Here is the 3D map of the Observable Universe. Here you should be able to see the Great Attractor and all the Voids we are familiar with.
Since I don't know where they are, I would benefit from any comment, positive or negative.
Here is the video in youtube:
Relevant answer
Answer
Thank you, Qamar,
Notice what we derive from our models. I also derived the equation of state of the Universe. They are very simple since the expansion is the Lightspeed expansion of a known hyperspherical shell volume.
We use different paradigms. They used Friedman-Lemaitre and I used the Hypergeometrical Universe Theory.
I provided evidence refuting Einstein's equations here:
and rebutted L-CDM here:
Hence it is not clear what I can use from their work. That said, I benefit from learning prior art and will reference their work in my paper.
Notice that I created the 3D galaxy density map for the whole Universe. My paradigm is trivial. The transition of Blackholium to Neutronium generated hyperspherical harmonic acoustic oscillations. While in the Neutronium phase (superfluid neutron matter), these waves froze in space due to the sudden change in the velocity of sound.
My job was to find where Earth was within the hyperspherical hypersurface and derive the hyperspherical harmonic acoustic spectral composition. No undefined nonsense related to "Gravitational Waves", or "Quantum Fluctuations".
It is very easy to understand that if suddenly an orbital degree of freedom (neutron orbital angular momentum) is allowed, sound waves would be triggered.
Notice that the last image (CMB_Best) is a simulation and you can compare it with the actual Planck Satellite Observation (CMB_SMICA):
Feel free to ask questions.
Thanks,
Marco Pereira
  • asked a question related to Tomography
Question
4 answers
How could I use ImageJ to transform a set of X-ray images (projections from different angles) into a 3D model? In other words, I would like to turn conventional radiology equipment into a tomograph for analyzing the porosity of iron samples.
Relevant answer
Answer
Assuming you have a full CT data set (sufficient projections to meet the 180° + cone angle criterion), you first need some type of reconstruction algorithm. Typically people use Feldkamp-based reconstructions, for which there are Matlab algorithms that easily do this. However, if you built the system yourself, you need calibration first, which is a bit more complex (but there are published ways to do this too). If you don't meet that criterion, then it is a limited-angle sampling problem, which is closer to tomosynthesis reconstruction.
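To see the principle before committing to a full Feldkamp implementation, here is a toy parallel-beam forward projection and unfiltered backprojection in Python using only SciPy (my own demo; a real pipeline would add ramp filtering and geometric calibration):

```python
import numpy as np
from scipy.ndimage import rotate

# Toy phantom: a bright square, off-centre, in a 64x64 image.
phantom = np.zeros((64, 64))
phantom[24:32, 36:44] = 1.0

angles = np.arange(0, 180, 5)   # parallel-beam projection angles (degrees)

# Forward model: rotate the image and sum along columns.
sinogram = np.array([rotate(phantom, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

# Unfiltered backprojection: smear each projection back and un-rotate.
recon = np.zeros_like(phantom)
for a, proj in zip(angles, sinogram):
    smear = np.tile(proj, (64, 1))
    recon += rotate(smear, -a, reshape=False, order=1)
recon /= len(angles)
# The result is blurry (no ramp filter) but peaks at the phantom location.
```

Adding a ramp filter to each projection before backprojection turns this into filtered backprojection, the parallel-beam ancestor of the Feldkamp cone-beam algorithm.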
  • asked a question related to Tomography
Question
11 answers
I am doing Love wave tomography for the region of India and Tibet. Curious to know is there any method or way to solve the smearing in the tomographic checkerboard test.
Relevant answer
Answer
Thank you very much for your reply @HrvojeTkalcic
  • asked a question related to Tomography
Question
1 answer
Why is a fiber rotator needed in an optical coherence tomography system? I am currently building an endoscopic OCT system from scratch, and the fiber rotator will be used in the sample arm. Since I am a newcomer to this area, could anybody offer an explanation of this device?
Thank you.
Relevant answer
Answer
Well, I never heard of OCT until this minute, but a fiber rotator is generally for selecting the axis of polarization entering (exiting) a polarization maintaining (PM) fiber. A couple minutes with Google and I see polarization sensitive OCT is a thing, so I think that is the idea. If the light source is polarized, the scattering tissue can change the polarization. Different tissues give different polarization effects, so being able to change what polarization you are observing provides extra tissue differentiation and contrast.
  • asked a question related to Tomography
Question
16 answers
  • Are brain tomography and nuclear magnetic resonance emission tomography able to identify areas of the brain that are activated during more participatory learning, and are these related to emotions?
  • Is verbal learning and memory the least well retained in the short term?
  • Are the emotion-related brain areas of students not activated, or refractory, during classroom lessons, lectures or conferences?
  • Is the emotion-related area more active when the student participates in learning that is related, or meaningful, to the real world?
  • Do videoconferencing algorithms, outlines alluding to a certain topic with brief captions, and mental and conceptual maps activate the emotion-related brain area more than lectures and master classes?
Relevant answer
Answer
Sure
  • asked a question related to Tomography
Question
1 answer
Electrical tomography can be utilized to calculate the solid concentration or gas holdup in aqueous multiphase flows. However, there usually exists a strong nonlinear relationship between the solid concentration and the tomographic image. Is there any effective way to handle this problem?
Relevant answer
Answer
Within the classical framework, one can minimize the error between measurements and calculated values, in function of the multiphase electrical properties. There are different optimization algorithms which deal with non-linearity, with the most popular being Gauss-Newton, Simulated Annealing and Kalman filtering. Of these three, Simulated Annealing requires a large processing time and Kalman filtering may be suitable for real time (depending on other aspects).
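As a sketch of the Gauss-Newton option mentioned above (my own toy problem, fitting a nonlinear two-parameter model by repeatedly solving the linearized normal equations):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, n_iter=20):
    """Gauss-Newton iterations for min_x ||residual(x)||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)  # linearized normal equations
        x = x + dx
    return x

# Toy nonlinear "measurement" model: d_i = exp(a * t_i) + b.
t = np.linspace(0.0, 1.0, 20)
a_true, b_true = 1.5, 0.3
d = np.exp(a_true * t) + b_true

def residual(x):
    a, b = x
    return np.exp(a * t) + b - d

def jacobian(x):
    a, b = x
    return np.column_stack([t * np.exp(a * t), np.ones_like(t)])

x_hat = gauss_newton(residual, jacobian, [1.0, 0.0])
```

In a tomographic inversion the residual is the difference between measured and forward-modelled boundary data, and a regularization term is usually added to the normal equations to tame ill-conditioning.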
  • asked a question related to Tomography
Question
6 answers
I am interested in seismic tomography data (such as pictures) of the Dzhungarian Basin, Tarim and Eastern Kazakhstan. The purpose is to understand the geodynamic and tectonic processes at the collision and subduction zones.
Relevant answer
Answer
Mr. Ernest Berkman, thank you for the information. I will definitely review the mentioned articles.
  • asked a question related to Tomography
Question
8 answers
Dear professors and colleagues ,
I'm going to work on various despiking methods for detecting and replacing spikes and outliers in my data set, and I need your guidance on the best methods for this purpose.
Thank you for helping me.
Relevant answer
Answer
Some time ago I used SSA and KDE - they require some fine-tuning, but are capable of giving good results.
I think you need to try simplest approaches first to see if they give acceptable results for your dataset.
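Among the "simplest approaches first", a Hampel-style filter (replace points that deviate from the local median by more than a few scaled MADs) is a common despiking baseline; a NumPy sketch with my own function name:

```python
import numpy as np

def hampel_despike(x, window=3, n_sigmas=3.0):
    """Replace samples deviating from the local median by > n_sigmas * MAD."""
    x = np.asarray(x, dtype=float).copy()
    k = 1.4826   # scales the MAD to a Gaussian standard deviation
    for i in range(len(x)):
        lo, hi = max(0, i - window), min(len(x), i + window + 1)
        med = np.median(x[lo:hi])
        mad = k * np.median(np.abs(x[lo:hi] - med))
        if mad > 0 and abs(x[i] - med) > n_sigmas * mad:
            x[i] = med   # replace the spike with the local median
    return x

signal = np.sin(np.linspace(0.0, 6.0, 100))
signal[40] += 10.0                 # inject a single spike
clean = hampel_despike(signal)
```

If this baseline misses spikes or flags real features, that is a good signal to move up to SSA- or KDE-based methods.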
  • asked a question related to Tomography
Question
9 answers
I am beginning to study freshwater fish gonads and plan to use X-ray fluorescence and X-ray tomography. For the former I am thinking of using lyophilization, but for the tomography I am considering fixing the samples with 10% formalin. Does anyone know whether there is a better preservative substance for the gonads and the technique? If anyone could help me with sources on this matter, I would appreciate it.
Relevant answer
Answer
Marie-Christine Zdora and Alexander Rack We've decided to fix the samples in formalin and are considering embedding them in agarose. Thank you very much!
  • asked a question related to Tomography
Question
2 answers
This is more of a survey question than a query for precise mathematical detailing. Opinions are welcome!
  • asked a question related to Tomography
Question
1 answer
Often both mercury intrusion porosimetry (MIP) (for pores in the range of, say, 10 nm to 150 µm) and neutron tomography (say, 100 µm to a few millimetres) are carried out to assess the pore-size distribution of concrete specimens. Is there a way of covering the full range of pore sizes (say, from 10 nm to a few mm) using neutron tomography in a single run?
Relevant answer
Answer
The neutron imaging limit is about 1 µm on a good day; it seems you need SEM analysis.
  • asked a question related to Tomography
Question
1 answer
For the forward calculations in travel-time tomography, the shortest path ray method provides a good initial guess, but in that case the predicted travel times are approximate. Along with the shortest path ray method, a ray bending technique is also used for extra refinement.
The shortest path ray method provides a good initial guess, which is then used by the ray bending method to converge to a global minimum.
Moser et al. (1992) use the conjugate gradient method to minimize travel time along a ray path. Rays are parameterized as beta-splines, which can express a variety of curves with a small number of control points, hence making the convergence faster.
My question here would be, how would one come up with an algorithm that handles ray bending methods using conjugate gradient method?
I greatly appreciate if you could let me know of the available resources to develop a code for the ray bending method.
Relevant answer
Answer
You should distinguish two different cases:
1. The stationary traveltime is a minimum. Then you minimize the traveltime.
2. The stationary traveltime is a saddle point. Then you minimize the squared traveltime gradient.
In both cases, the conjugate gradient method can be used to minimize the target function, among other optimization methods.
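As an illustration of case 1 (a minimum-traveltime ray), here is a minimal sketch of ray bending with SciPy's conjugate-gradient optimizer: a ray between fixed endpoints is parameterized by interior control points (plain points rather than the beta-splines of Moser et al.), and a discretized traveltime is minimized with `method="CG"`. The slowness field is hypothetical, chosen only so that bending pays off.

```python
import numpy as np
from scipy.optimize import minimize

def slowness(p):
    # hypothetical smooth field: velocity increases with height y
    return 1.0 / (2.0 + 0.5 * p[1])

def traveltime(interior, src, rec, n):
    # discretized traveltime: sum over segments of (length * midpoint slowness)
    pts = np.vstack([src, interior.reshape(n, 2), rec])
    seg = np.diff(pts, axis=0)
    lengths = np.linalg.norm(seg, axis=1)
    mids = 0.5 * (pts[:-1] + pts[1:])
    return float(np.sum(lengths * np.array([slowness(m) for m in mids])))

src, rec, n = np.array([0.0, 0.0]), np.array([10.0, 0.0]), 8
x0 = np.linspace(src, rec, n + 2)[1:-1].ravel()   # straight ray as initial guess
res = minimize(traveltime, x0, args=(src, rec, n), method="CG")
bent_time, straight_time = res.fun, traveltime(x0, src, rec, n)
```

The bent ray dips into the faster part of the medium and arrives earlier than the straight ray, which is exactly the refinement the ray bending step provides on top of a shortest-path initial guess.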
  • asked a question related to Tomography
Question
3 answers
Scotton, Chris J., et al. "Ex vivo micro-computed tomography analysis of bleomycin-induced lung fibrosis for preclinical drug evaluation." European Respiratory Journal 42.6 (2013): 1633-1645.
I want to perform ex vivo micro-computed tomography (micro-CT) of mouse lung after fixation. I found this particular article, in which the method is described. However, I need specific information about how long the lung needs to be air-dried before micro-CT. Also, what kind of packaging should I use to keep the lung before micro-CT? Could someone please help me out?
Relevant answer
Answer
We have done quite a bit of imaging of various tissues in our lab, and we have never air-dried. You can keep your tissues in formalin rather than switching them to ethanol or another organic solvent if you prefer; this will prevent shrinkage, which can also cause artifacts. You can prepare the sample in a tube of any size, fill it with your solvent, and parafilm any areas of potential leakage. Just make sure that you also affix the tube to the mount properly to avoid leakage in the scanner. Depending on your machine, you can also leave your mouse intact and perform imaging. There are many other routes you can go from there.
  • asked a question related to Tomography
Question
13 answers
Dear colleagues,
A good FIB-SEM data stack can contain a few thousand images. It is a very tricky and tedious process to segment them all manually. Which software do you use (real practical experience with big data) for such tasks?
Cheers,
Denis
Relevant answer
Answer
You can also try Dragonfly (https://www.theobjects.com/dragonfly/index.html) from Object Research Systems, it is free for non commercial use.
Dragonfly has many segmentation tools from active shape to deep neural networks. The following video, (https://www.youtube.com/watch?v=1WVlskyuw94) is a deep learning tutorial.
Do not hesitate to contact me if you want more information.
  • asked a question related to Tomography
Question
5 answers
I am trying to build an optical tomography system and generate SHG. I was wondering whether changing the laser incidence angle on my sample can reduce scattering a bit, or whether it is best to keep the incidence angle at 90 degrees. I am not sure whether that might affect my SHG generation or not. Please help me out with this. Thanks!
Relevant answer
Answer
Thank you everybody for your answers, I appreciate it greatly.
  • asked a question related to Tomography
Question
1 answer
The human body is full of spatial fractals and temporal bifurcations. During treatment in medicine (ultrasound, X-rays, photons, neutrons, ions...), is the theory used in computerized tomography (CT) nonlinear or locally linear (linearized)? And is it online or offline in the different cases?
Relevant answer
Answer
Computerized tomography (CT) uses the algebraic reconstruction technique (ART), which can be considered an iterative solver of a system of linear equations. However, there exist nonlinear iterative algorithms for CT, such as the multiplicative algebraic reconstruction technique (MART), and experiments have shown that these can indeed further optimize CT reconstruction, resulting in greater image quality and dose reduction.
To answer your second question, it can be online, offline or even hybrid depending on the case.
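For reference, the additive ART (Kaczmarz) iteration mentioned above can be sketched in a few lines; this is a generic textbook version, not tied to any particular scanner software. `G` is the system matrix, `d` the measured projections, and `lam` a relaxation factor.

```python
import numpy as np

def art(G, d, n_sweeps=50, lam=1.0):
    """Additive ART (Kaczmarz): project the estimate onto each row's hyperplane in turn."""
    m = np.zeros(G.shape[1])
    for _ in range(n_sweeps):
        for i in range(G.shape[0]):
            gi = G[i]
            nrm2 = gi @ gi
            if nrm2 > 0.0:
                m += lam * (d[i] - gi @ m) / nrm2 * gi
    return m

# tiny consistent system: two pixels, three "rays"
G = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
d = G @ np.array([2.0, 3.0])
m = art(G, d)
```

MART differs in that the update is multiplicative rather than additive, which enforces non-negativity of the reconstruction; the loop structure is the same.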
  • asked a question related to Tomography
Question
1 answer
In tomography, a tomogram can be defined as a cross-sectional image of a slice of an object; the images can be produced mechanically and/or digitally, and they can even be reconstructed. Digital tomography can therefore be conducted with X-rays, N-rays (neutrons), gamma rays, microwaves, ultrasound, electron beams, and subatomic particles. Typically, we use both a data acquisition technique and image reconstruction to generate tomograms, which sometimes require iterative techniques, back-projection methods, and/or Fourier transforms.
So, how useful do you find the application of tomography to fractured and/or damaged glass fiber reinforced polymer (GFRP) and/or basalt fiber reinforced polymer (BFRP) structural reinforcing bars? Consider that projections are to be obtained over 180 degrees.
Relevant answer
Answer
In addition to producing lovely pictures, tomography can help with understanding the mechanism of fracture and characterising the surface.
We generally use the areal surface roughness parameters (see ISO 25178-2) to compare surfaces.
  • asked a question related to Tomography
Question
1 answer
Hello,
I am going to do some research on small animals with bioluminescence and fluorescence tomography. I need some data on the optical properties of a mouse at several wavelengths.
Does anyone know the values of μa and μs', between 550 and 700 nm, that we could apply to a mouse, if we make the approximation that the body is homogeneous?
Thank you,
Emmanuel
Relevant answer
Answer
You may find useful the following reference:
Vo-Dinh - Biomedical Photonics Handbook, 2nd ed. Vol. I
Chapter 2 Optical Properties of Tissue Table 2.2
  • asked a question related to Tomography
Question
2 answers
I have a crosshole tomography dataset; my inversion "pushed" all the high-velocity zones to one side (receivers) and the low velocities to the other (sources). Is there a way to combat that?
Relevant answer
Answer
Oops...
Tomographic inversion should fit if you do it right.
Inversion is a model process; tomography is a model.
So if you do it right, it will work, and there should be no bias.
If there is, then you have issues with the assumptions, equations, or calibration. Develop simple check points along the entire data path and include anisotropy (?).
I would assume your inversion is properly benchmarked?
  • asked a question related to Tomography
Question
2 answers
Good morning.
I'm analyzing a computed tomography (CT) scan of an ancient metallic artifact. The sample contains two materials (I determined this using XRF): one is mostly iron and the other mostly copper. But when I analyze the grey scale, the copper part is paler (whiter) than the iron part, and the empty zones are black. I think the iron part should be brighter than the copper part, because the iron density is higher.
If you can explain this to me, I would really appreciate it.
Thanks
Relevant answer
Answer
CT primarily delivers the mean linear attenuation coefficient (µ), which is the product of the mass attenuation coefficient (µ/rho) and the density (rho):
µ = (µ/rho) * rho.
By the way: bulk copper has a higher density than iron and, in addition, a higher atomic number, which also leads to a higher mass attenuation coefficient compared to iron.
You might now draw new conclusions...
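A quick numeric check of this relation, using approximate tabulated values around 100 keV (the mass attenuation coefficients below are indicative, not exact, and real CT spectra are polychromatic):

```python
# linear attenuation mu = (mu/rho) * rho, approximate values at ~100 keV
rho = {"Fe": 7.87, "Cu": 8.96}            # density, g/cm^3
mu_over_rho = {"Fe": 0.37, "Cu": 0.46}    # mass attenuation coefficient, cm^2/g

mu = {el: mu_over_rho[el] * rho[el] for el in rho}
# mu["Fe"] ~ 2.9 cm^-1 vs mu["Cu"] ~ 4.1 cm^-1: copper attenuates more,
# hence appears brighter (paler) in the reconstructed slice
```

So the grey levels in the question are consistent with theory: copper's higher density and higher atomic number both push its linear attenuation above iron's.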
  • asked a question related to Tomography
Question
2 answers
I'm working on iterative phase tomography, which combines phase retrieval and image tomography, but few papers can be found so far. Can you recommend some?
Thanks!
Relevant answer
Answer
Yizhou Lu, thanks for your recommendation.
I have skimmed the book; while it introduces the interaction between X-rays and matter, it is more related to the application of X-rays in materials science. The last chapter covers X-ray absorption and phase-contrast imaging, but there seems to be no phase-retrieval content in the book. Anyway, thank you!
  • asked a question related to Tomography
Question
2 answers
I am trying to understand how the depth information in SD-OCT A-scans is obtained over distances far in excess of the source coherence length.
From what I have gathered so far, it seems that the fringe frequencies within the spectrally resolved interferograms encode depth information, and that this frequency is proportional to path length difference. What I really don't understand is why, and how this information is so precise as to be allocated a specific pixel depth.
Underlying this question is also that of the importance of source bandwidth. I *think* I understand the notion of coherence envelopes, and it makes sense to me that path differences corresponding to integer multiples of the coherence length give rise to new intensity minima and maxima, ergo the axial resolution of the system. But I had considered coherence length to be an intrinsic property of the source, and yet the spectrum must be spatially (SD-OCT) or temporally (SS-OCT) resolved into narrowband components before Fourier analysis. Am I right to assume, then, that wavelength-dependent information is aggregated either before or after the FFT? If so, or even if not, how does backscattered light interfere more frequently with the reference arm? (I believe that I have been misled by the false premise that light will interfere if and only if it is superposed with its identical twin "packet" from where it diverged at the coupler.)
I have inspected the Fourier transform equations of intensity as a function of spectral bandwidth, and indeed, time delay (correlating to depth) is encoded. What I fail to understand is not the maths, per se, but the intuitive behaviour of the light that gives rise to these equations. A lot of the literature evaluates the maths, but so far I do not have an intuitive, or qualitative, understanding of the physical waveforms interfering with the reference beam from different depths and how this translates to predictable variations in the fringe pattern.
This short paper, "Measurement of optical distances by optical spectrum modulation" by A. F. Fercher et al., in part describes my clumsy question above more elegantly, but proceeds to answer it (page two onwards) with maths that I haven't been able to interpret.
My background is biomedical, so perhaps the notation and syntax of the predominantly mathematical descriptions are a barrier to this.
Frustratingly, TD-OCT is relatively a lot simpler, and I've been able to decipher that without much ado.
Thank you in advance - I apologise for the lengthy question.
Relevant answer
Answer
Let me try.
If you understand TD-OCT, great. So you understand that the wider the bandwidth of the light source, the shorter the coherence length (and the higher the resolution of the OCT image). That means if you used infinite bandwidth, you would have an infinitely short coherence length and the signal would appear only exactly at dz=0 (d_obj = d_ref). Now consider it the other way around: start limiting your bandwidth. You will find that the signal oscillation does not disappear as rapidly as before, so there is some vicinity of positions around dz=0 at which you can still see the signal. If you reduce the bandwidth more and more, down to exactly a single wavelength, you find that no matter how far you are from the condition dz=0, there is always a strong signal (it varies as a cosine due to interference of the reference and object light while moving the reference mirror). OK then, let's detect this single-wavelength signal with a spectrometer, as in SD-OCT. What you will see is a single pixel whose amplitude varies as a cosine while the reference mirror moves. Nothing special. So let's include a second wavelength in the light source, making it a two-wavelength laser, with the second wavelength differing only slightly. Consider an SD-OCT with a spectrometer that has a field of view of 100 nm and 2048 pixels, operating at a 1050 nm center wavelength. The second wavelength would then be 1000 nm + 100 nm/2048 = 1000.05 nm. What will the signal look like? The signal at the first pixel still follows cos[k(1000 nm)*dz(t)]. The signal at the second pixel is also a cosine, cos[k(1000.05 nm)*dz(t)]. As you see, the phase (the argument of the cosine) is slightly different. Assume dz = 1 mm; then the phase at the first pixel is k1*dz = 6283.2 and at the second pixel k2*dz = 6282.9, so they differ by 0.3 rad (17 deg), correct? You can check that at the 2048th pixel (1100 nm) the phase is 5712. Now just calculate the phase and cos(phase) for each of the 2048 pixels in Excel and make a graph.
You will see a nice cosine fringe spanning the pixels - this is your virtual spectrometer. Now put in a very small dz, like 10 um. You will see a fringe pattern with just a single period across all pixels. At dz=0 all phases are zero and cos(0)=1 for all wavelengths. That means in SD-OCT you see no fringe signal! This is curious, because in TD-OCT you observe the maximal signal at exactly dz=0, right? Yes - because in TD-OCT you observe the sum of the electric fields of all wavelengths on a single photodiode. That means the peaks and valleys of the electric fields must match each other to produce the highest interference signal. Since the peaks and valleys of the electric fields of all wavelengths match perfectly at dz=0 (they are highly coherent there), they produce the highest interference signal exactly at dz=0.
For dz > 0, the phases of the different wavelengths differ too much and the corresponding electric fields rapidly lose coherence, so their sum quickly decays to ~zero. As you saw in the virtual-spectrometer considerations above, at dz = 1 mm two very close wavelengths (1000 and 1000.05 nm) already differ by 17 deg. That means the wavelength at the 10th pixel is at a phase of ~170 deg relative to the 1st pixel, and at the 20th pixel ~360 deg. So you have one oscillation over 20 pixels, and ~100 oscillations over 2048 pixels. On the spectrometer you still observe nice interference fringes. But if you sum all of them (to simulate the signal in TD-OCT - this is how TD-OCT observes the signal), you will find that the sum is zero! Summing a cosine over 100 oscillations always gives ~0. But if the spectrometer signal has less than one oscillation, its sum is > 0, and it is highest when all pixels read 1.0 :). This is all the magic of TD- and SD-OCT :).
One digression. In TD-OCT, if you don't move the reference mirror, the signal from the photodiode is constant in time, because neither k nor dz depends on time. To observe a signal you need to make k(t) or dz(t) time-dependent. In TD-OCT you can only make dz(t), by moving the reference mirror, so the photodiode signal starts oscillating as cos(k*dz(t)). Note that you may also make k(t) - this is swept-source OCT, where the wavelength of the swept laser changes over time. In SS-OCT you also use a single photodiode, but you no longer observe all wavelengths at once as in TD-OCT. So SS-OCT is just a "time-spectrometer" OCT :). I hope you have a better understanding now.
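The "virtual spectrometer" thought experiment above can be reproduced in a few lines (a sketch with assumed numbers: 2048 pixels sampled uniformly in wavenumber over 1000-1100 nm, a single reflector, phase k·dz per pixel). The FFT of the fringe pattern peaks at a bin proportional to dz, which is exactly how the SD-OCT A-scan assigns each reflector a pixel depth:

```python
import numpy as np

n_pix = 2048
# sample uniformly in wavenumber k (in practice SD-OCT data are resampled to such a grid)
k = np.linspace(2 * np.pi / 1100e-9, 2 * np.pi / 1000e-9, n_pix)

def peak_bin(dz):
    fringes = np.cos(k * dz)                 # one cosine value per spectrometer pixel
    spectrum = np.abs(np.fft.rfft(fringes - fringes.mean()))
    return int(np.argmax(spectrum))          # fringe frequency -> depth bin

# the FFT peak position scales linearly with the path-length difference dz
b1, b2 = peak_bin(0.2e-3), peak_bin(0.4e-3)
```

Doubling dz doubles the fringe frequency across the spectrometer and hence doubles the depth bin of the FFT peak; a reflector at dz=0 gives a flat spectrum and lands in the zero-frequency bin, matching the description above.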
  • asked a question related to Tomography
Question
1 answer
I am a final-year physics student looking for a research project for my final assignment. I am interested in remote sensing, mapping, and tomography. If possible, and if it relates to the topic area above, I would like to join this research team.
Relevant answer
Answer
Unfortunately, this research is over. However, you could collaborate with others who have the same topic as this one. Good luck.
  • asked a question related to Tomography
Question
3 answers
3D imaging has become most valuable in dentistry, particularly in orthodontics, and furthermore in orofacial surgical applications. In 3D diagnostic imaging, a series of anatomical data is gathered using certain technical equipment, processed by a computer, and later displayed on a 2D screen to give the illusion of depth. Three-dimensional imaging provides both qualitative and quantitative data about an object or object system from images acquired with multiple modalities, including digital radiography, computed tomography, positron emission tomography, magnetic resonance imaging, single photon emission computed tomography, and ultrasound.
Papers:
Chen, Xiaozhi, et al., “3D object proposals using stereo imagery for accurate object class detection,” IEEE transactions on pattern analysis and machine intelligence, vol. 40, no. 5, pp. 1259-1272, 2018.
Goswami, Ankita, and Nitin Jain, “Depth image based rendering process for 2D to 3D conversion,” Journal of Technical Reports in Engineering and Applied Science, vol. 3, pp. 160-172, 2016.
Konrad, Janusz, et al., “Learning-based, automatic 2D-to-3D image and video conversion,” IEEE Transactions on Image Processing, vol. 22, no. 9, pp. 3485-3496.
Relevant answer
Answer
CBCT over CT and other methods
  • asked a question related to Tomography
Question
3 answers
Hi,
I am working on travel-time tomography in seismology. I am looking for an algorithm for the ray bending method using the conjugate gradient method. If there is existing code for the ray bending method, that would be useful as well.
Thanks
Relevant answer
Answer
International Association of the Seismology and Physics of the Earth's Interior (IASPEI)
  • asked a question related to Tomography
Question
6 answers
Hi,
I am a novice in travel-time tomography and am writing a code for it. The data I have been given consist of picked travel times and slowness values. I have two questions:
1. Which data should I use to calculate the covariance matrix?
2. How should I calculate the Tikhonov trade-off parameter (lambda) to be used in regularized inversion?
I greatly appreciate your help.
Relevant answer
Answer
Hi Iago,
I am trying to apply Tikhonov regularization to minimize the cost function in Seismic Travel Time Inversion.
In order to construct the L-curve, how do I choose the Tikhonov parameters? I have attached the paper given to me for coding the cost function. Please have a look at equation number 5.
I greatly appreciate your help.
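A common recipe (a sketch on a toy system, not the attached paper's exact formulation) is to solve the damped problem for a range of lambda values, record the data misfit ‖Gm−d‖ against the model norm ‖m‖, plot the pairs on log-log axes, and take the lambda at the corner of the resulting L-curve:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
G = rng.normal(size=(80, 120))                      # toy kernel, stand-in for the ray matrix
d = G @ rng.normal(size=120) + 0.01 * rng.normal(size=80)

lams = np.logspace(-4, 1, 12)
curve = []
for lam in lams:
    m = lsqr(G, d, damp=lam)[0]                     # minimizes ||Gm-d||^2 + lam^2 ||m||^2
    curve.append((np.linalg.norm(G @ m - d), np.linalg.norm(m)))
misfits, mnorms = map(np.array, zip(*curve))
# misfit grows and model norm shrinks as lam increases; the L-curve "corner"
# is the lambda that balances the two terms
```

Small lambdas overfit the noise (tiny misfit, large model norm); large lambdas over-smooth (large misfit, tiny model). The corner, often picked as the point of maximum curvature, is the usual compromise.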
  • asked a question related to Tomography
Question
3 answers
Hi,
I am trying to come up with a code to minimize the cost function in travel-time tomography. I have to come up with a model roughness matrix and also the Tikhonov parameters.
I have attached the paper I am following for my code and the cost function is expressed in equation (5).
Is there any open source code I can refer to?
I appreciate your help.
Relevant answer
Answer
You can check by applying the traveling salesperson algorithm.
  • asked a question related to Tomography
Question
2 answers
Hi,
I am working on travel-time tomography. I have been given a code that does the forward time calculation and computes the tomography matrix.
In the program, an m-by-n slowness grid has been constructed, and the computed tomography matrix has dimensions (m+1) by (m*n).
This makes the number of columns greater than the number of rows.
This is an under-determined system of equations, and I am being asked to apply the least-squares method to it.
Is this the correct dimension of the tomography matrix?
I appreciate your help.
Relevant answer
Answer
Hi,
If your reconstruction problem is underdetermined, you need to either (1) make your reconstructed image smaller (fewer pixels, lower resolution), (2) increase the number of measurements, or (3) incorporate a priori information about your image (smoothness, total variation. etc) to regularize the reconstruction.
Hope this helps,
Guillem
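Route (3) can be sketched with SciPy's LSQR solver, whose `damp` argument adds exactly this kind of zeroth-order (smallest-model) Tikhonov term; with more columns than rows, the damping makes the under-determined solution unique and well-behaved. The matrix below is a dense toy stand-in; for a real ray matrix you would pass a `scipy.sparse` matrix instead.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
# under-determined toy system: more slowness cells (columns) than picks (rows)
G = rng.normal(size=(100, 150))
d = G @ rng.normal(size=150)

m_plain = lsqr(G, d)[0]           # LSQR iterates toward the minimum-norm solution
m_reg = lsqr(G, d, damp=1.0)[0]   # damp=lam solves min ||Gm-d||^2 + lam^2 ||m||^2
```

Even without damping, LSQR applied to a consistent under-determined system converges to the minimum-norm least-squares solution; the `damp` term trades a little data misfit for an even smaller (more regularized) model.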
  • asked a question related to Tomography
Question
4 answers
Are there any websites that host downloadable ct/mri datasets ? I know about some websites but not many
Relevant answer
Answer
Hi David, I can say I know who to contact and they can be shared but not publicly for example....
  • asked a question related to Tomography
Question
4 answers
Hi,
I am working on a software for travel-time tomography. I am following the paper by Van Avendonk, which I have attached. On page 5, equations 6 and 7 require the covariance matrix in order to compute the chi-square.
Can anyone suggest how can I go about computing the chi-square using C++?
Relevant answer
Answer
Saptaparnee Chaudhuri to calculate the covariance matrix, you can use Opencv method:
calcCovarMatrix(const Mat* samples, int nsamples, Mat& covar, Mat& mean, int flags, int ctype=CV_64F)
to find the inverse of the matrix:
invert(InputArray src, OutputArray dst, int flags=DECOMP_LU)
All these functions are already implemented in OpenCV, which is a very powerful library for C++.
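For the common case where the data covariance is diagonal (built from the travel-time picking uncertainties), the chi-square reduces to a weighted sum of squared residuals, r^T C^{-1} r with C = diag(sigma^2). As a cross-check for a C++ implementation, here is a minimal NumPy sketch (the `sigma` vector of pick uncertainties is an assumed input):

```python
import numpy as np

def chi_square(d_obs, d_pred, sigma):
    """Chi-square misfit r^T C^{-1} r for a diagonal data covariance C = diag(sigma^2)."""
    r = np.asarray(d_obs) - np.asarray(d_pred)
    return float(np.sum((r / np.asarray(sigma)) ** 2))

# e.g. two picks with 1 ms uncertainty each and a single 1 ms residual give chi^2 = 1
```

For a full (non-diagonal) covariance you would instead solve C x = r and return r·x, avoiding an explicit matrix inverse.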
  • asked a question related to Tomography
Question
7 answers
Reading about the application of muon tomography in caves, our group is searching for a portable CCC detector. Thanks in advance
Relevant answer
Answer
Dear prof. Raul Perez-Lopez,
we have developed portable CCC-based muon detectors for speleological applications in Wigner RCP, Budapest, Hungary. If you are interested in the application of this technology, please feel free to contact me. My e-mail address is the following: olah.laszlo@wigner.mta.hu
Best regards,
Laszlo Olah
  • asked a question related to Tomography
Question
1 answer
Dear colleagues,
Preparation of blocks for FIB-SEM tomography is a slightly tricky process. Which design do you prefer, and which techniques and tricks do you use in the preparation?
I make the blocks like this one, do you have better ideas?
Cheers,
Denis
Relevant answer
Answer
Not for FIB, but the logic is close...
  • asked a question related to Tomography
Question
3 answers
I'm interested in possible upper asthenospheric responses to Indian and Australian collisions (with Eurasia), expressed by (e.g.) Sulu, Sulawesi, and Sunda-Banda arc-forearc rollback.
  • asked a question related to Tomography
Question
2 answers
This paper was published by C. Laurent, C. Calvin, J.M. Chassery and F. Peyrin.
The paper is attached below. I had many problems while implementing this paper in CUDA. Could somebody please help me?
Relevant answer
Answer
In the paper they used the Cray T3D with 128 processors. It was a shared-memory machine, so it should translate nicely to the GPU.
[Be aware, this paper is about 20 years old, so there are likely updates to this problem. Also, the parallel algorithm could have been solved in architecture or by modern compilers.]
Looking at the algorithms, there are sends and receives. These are message-passing algorithms, and they do not translate as easily to the GPU, but it can be done.
I would say implement this in MPI first, but it also might not translate cleanly. If you are going to compare results, you will need that baseline anyway. Also, if you have not already done so, implement the serial version.
Where are you having problems?
  • asked a question related to Tomography
Question
4 answers
I want to make a high-resolution regional earthquake tomography (epicentral distances of roughly 4 to 20 degrees). I used LOTOS, but it is good for local events, not for regional events. So please tell me which software or algorithm I should use for high-resolution regional earthquake tomography.
  • asked a question related to Tomography
Question
7 answers
I would be happy to hear from anybody who has worked with real SAR data for tomography applications. Indeed, if somebody has access and the ability to give me the dataset, that would be appreciated.
Relevant answer
Answer
I am not sure that Sentinel-1 is entirely suitable for tomography. The six-day revisit time means that you may have incoherent data with more than two images, and you don't have the necessary control over incidence angles (or baselines).
  • asked a question related to Tomography
Question
1 answer
I have a 25 mm thick ceramic-based material with micron-size particles, and I want to do a correct reconstruction of the material from an X-ray computed tomography microscope (XT-400). Which standard procedure should I follow? Please give me your suggestions.
Relevant answer
Answer
Please use the built-in software for reconstruction. It should be optimized for your CT system.
  • asked a question related to Tomography
Question
4 answers
Is there an appropriate way to show the interconnectivity of air voids in a hot mix asphalt (HMA) sample? We have done X-ray tomography of the samples and also have the 2D slices. Now I want to show the connectivity of the air voids in the 3D sample. Can anyone suggest a method or algorithm to show that?
Relevant answer
Answer
The following open-source (free) programs will allow you to do what you need; if one doesn't let you complete the process, just export a 2D image stack in the required format for one of the other software packages:
ImageJ
Drishti
Blob3d
A quick Google search will lead you to the relevant websites.
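If you prefer to script the connectivity step, `scipy.ndimage.label` performs the 3D connected-component analysis directly on a binarized image stack. The toy volume below is a hypothetical stand-in; in practice you would load your thresholded 2D slices into one 3D boolean array:

```python
import numpy as np
from scipy import ndimage

# toy binary volume: True = air void (stand-in for stacked, thresholded CT slices)
vol = np.zeros((20, 20, 20), dtype=bool)
vol[2:5, 2:5, 2:5] = True        # isolated void (3x3x3 = 27 voxels)
vol[10:18, 10, 10] = True        # channel along x...
vol[10, 10:18, 10] = True        # ...meeting a channel along y at voxel (10, 10, 10)

labels, n_components = ndimage.label(vol)    # 6-connectivity by default
sizes = ndimage.sum(vol, labels, index=range(1, n_components + 1))
```

Here the two channels merge into a single connected (percolating) void while the cube stays separate, so `n_components` is 2; the per-component `sizes` give each void's volume in voxels, which is the basis of a connectivity or percolation analysis.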
  • asked a question related to Tomography
Question
4 answers
The density of ray path coverage through a particular point in the subsurface is directly related to the accuracy of the predicted velocity; thus, the bottom and edges of a particular velocity model are generally the least accurate.
Relevant answer
Answer
Dear Collins,
I offer some comments on your question without entering into the mathematical details supporting my answer; this integrates Chris's answer and explains Raed's first answer. In refraction tomography, when we refer to "ray path coverage" as "a measure of the model sampling", we really mean the number of times a ray between a source and a receiver hits, or crosses, the pixels or nodes of our velocity model grid (depending on the parametrization used). In fact, the tomographic software, or inversion procedure, gives the hit counts or the derivative sums per pixel. The number of ray fractions per pixel indicates how many times the velocity parameter is involved in the inversion (properly, in the propagation kernel or ray matrix of the system solving our problem). In what follows, I use "ray coverage" in this sense.
When a pixel or node has a coverage of 0, the corresponding parameter is not involved in the inversion solution. In this case, even though the inversion procedure gives a solution, it is poorly or badly constrained. On the other hand, a very high ray coverage indicates that the velocity parameter is considered many times in the inversion. Although high ray coverage generally indicates accuracy in parameter determination, such pixels sometimes act as a solution attractor, so the solution can also be bad; it also depends on the ray illumination direction of the pixel.
The best practice is to parametrize a model that has the most homogeneous distribution of hit counts or derivative sums. Sometimes a solution obtained with few data and/or low but homogeneous hit counts is better than a solution with many data and high but strongly inhomogeneous hit counts. A good practice is to calculate different models with different grid dimensions and analyze the hit counts, also using pixels of different dimensions or a variable grid dimension.
In your example, the external data, which have no reciprocal constraints, mainly contribute only to the deeper part (bottom) of your model but are unconstrained in the external upper part of the model edges. If the external shots sample the deeper portion of the model space that is not sampled by the reciprocal shots, they can lead to unstable parameter solutions in the deeper part of the model, because they contribute partially and separately to the whole solution with respect to the part constrained by the reciprocal shots.
In the external part of the model (the edges), the parameters are not (or only partially) involved in the inversion; in fact, they remain similar to those of the starting model, which we suppose to be "good". In general, we should limit the model space to the part with the minimum number of constrained pixels in the solution search (about 60-120 m). I advise you to consider in the inversion only the reciprocal shots in this distance range. Nevertheless, the external shots should be considered in constructing the starting model, to constrain the deeper part of the inverted model space (60-120 m). Obviously, if you have robust a priori information about the edges of the model, you can include it in the inversion.
  • asked a question related to Tomography
Question
4 answers
Does anyone work on X-ray tomography of particulate suspensions? I am looking for work and literature on this topic. Kindly share your related work experience or some good papers on the subject.
Relevant answer
Answer
Please send your proposal to ashishka@rrcat.gov.in.
We are doing X-ray micro-tomography using synchrotron radiation at Indore, India.
  • asked a question related to Tomography
Question
8 answers
We have done X-ray tomography on an HMA sample and need to analyze the results. After tomography we have a huge amount of data, and I don't know which software will be best for analysis, graphing, etc. Programming is a weak link, though...
Relevant answer
Answer
You might want to have a look at these somewhat older mailings, which list a lot of software as well:
  • asked a question related to Tomography
Question
12 answers
I need help interpreting the output variables of the PTRAC output file. There is no adequate explanation of particle tracing in the user's guide! Please refer me to any related information that may help me with this.
Kind regards
Relevant answer
Answer
I have,
You will find most of it documented in the MCNP Manual Vol II beginning at 3-151.  Appendix I explains the output lines but it is difficult to follow.
Here is my PTRAC card:
PTRAC FILE=ASC WRITE=all TYPE=p,e EVENT=col,bnk FILTER=150,icl
I am recording all bank and collision events of photons and electrons in volume 150.
Each line is one interaction of one particle.  The first number is the reaction type.
I "unpack" the PTRAC file by using MATLAB to search for a 150 at position 6, this is the volume.  Then look at the 2xxx number.  Appendix I lists the last two digits of this number as event type, i.e. 17 is a knock on electron.  The 2xxx indicates a bank. 1xxx is a source. 9xxx is the last event in the history,  Position 15 is the energy. Position 11,12, and 13 define the location X,Y, Z of the event.  etc. etc.
WARNING.  These output files are HUGE!  You will need a special editor.  I use EM Edit.
Also consider the DBCN card instead.
2017 4 52000 0 3 150 3 1
-0.22647E+00 0.23806E+00 0.19025E+01 0.50385E+00 -0.37274E+00 0.77923E+00 0.12040E+00 0.10000E+01 0.59373E-02
Best Regards - Gregg
  • asked a question related to Tomography
Question
1 answer
Hello, I am working on a super-resolution polarization modulation method, which gives the Stokes parameters with high accuracy, meaning we have the degree of polarization. Our imaging method can see some inner structure inside the nucleus (I mean a boundary or edge); sometimes the edge is very clear. I want to ask how we can verify what this inner part actually is.
Can we verify it by finding the degree of polarization? Our imaging method gives much more information compared to the conventional imaging method. Please suggest how we can differentiate structures with respect to the polarization parameters.
Relevant answer
Answer
Yes, you can differentiate. This link may help you. https://global.britannica.com/science/cell-biology/The-process-of-differentiation 
  • asked a question related to Tomography
Question
1 answer
In the literature, DCM for M/EEG is always done using source reconstruction techniques based on equivalent current dipoles. Is there any limitation to use distributed source models? Would it make a difference if you use distributed source models like sLORETA and if so, how?
Relevant answer
Answer
Hello:
Most DCM studies are actually performed using SPM, whose source engine is based on distributed dipole models (see e.g. link). DCM requires that you select a set of regions of interest on which to perform causal inference testing on the resulting time series. So in short, DCM can be applied to time series regardless of how they have been obtained (distributed or equivalent dipoles), and therefore you may use the tool after sLORETA et al. modeling.
Hope this helps.
Sylvain. 
  • asked a question related to Tomography
Question
4 answers
I would like to get sample tomosynthesis imaging
Relevant answer
Answer
Yes, I want this type of image.
  • asked a question related to Tomography
Question
2 answers
What causes a filling defect on high-resolution computed tomography?
Relevant answer
Answer
What type of material are those fillings made of?
If they are made of high-Z (high atomic number) elements like metal, then rest assured the defects are coming from those elements.
Please share an image so we can see the defects.
  • asked a question related to Tomography
Question
3 answers
Hello
Does anyone know how much is the maximum resolution of Electrical Impedance Tomography devices?
Thanks
Relevant answer
Answer
Dear Ahmad,
I have dealt with the similar question recently.
David Isaacson dealt with this question with respect to measurement precision in his work [1]:
[1] D. Isaacson, Distinguishability of Conductivities by Electric Current Computed Tomography, IEEE Transactions on Medical Imaging, 1986.
Yours sincerely,
JC
  • asked a question related to Tomography
Question
4 answers
I need to know about its applications in the areas other than military.
Relevant answer
Answer
The attached file is a pretty comprehensive treatment of the backprojection algorithm for SAR.
  • asked a question related to Tomography
Question
3 answers
Dear All,
I am applying full waveform tomography (FWT) for 2-D seismic lines.
I wanted to know how the results are affected at the crossing position of two lines in the case of real data when applying FWT in the frequency domain, since we are inverting the data for a selective set of frequencies.
Further, I want to discuss more details on the intricacies involved on real data application of FWT.
Best regards,
Damodara
Relevant answer
Answer
Dear Nikolaos,
In FWI, I am using the same parameter set, including grid dimensions, frequencies and their interval, and all other inversion parameters, for inverting P-wave velocity using visco-acoustic FWI for two crossed SBN lines (wide-angle offshore data), starting from low frequency and going to high frequency.
The only difference between the two SBN lines is that one line has 4 ms sampling and the other 8 ms.
After FWI, we observe very good agreement between synthetic and observed data in the frequency domain at each frequency group, but at the crossing position we are not getting results as confident as the data comparison would suggest.
I want to know the justification for this type of FWI results.
Regards,
Damodara
  • asked a question related to Tomography
Question
1 answer
I want to study the distribution of my nanoparticles in vivo using a fluorescence tomography imager. I would like to know whether Alexa Fluor 488 dye or Alexa Fluor 594 dye is suitable for imaging. I have read that a near-infrared Alexa Fluor dye is suitable; however, I would like to use the blue or red spectrum during imaging.
Relevant answer
Answer
  • asked a question related to Tomography
Question
4 answers
The image is 420*420 pixels of 255 gray levels. 
Thank you.
Relevant answer
Answer
What type of "artifacts" are you referring to? The output of a Radon transform is a sinogram, i.e., something that is not supposed to be understood by a human observer. If you are talking about image noise (but noise is not an "artifact") after Radon inversion (e.g., by filtered back projection) this is usually attenuated by apodization windows during the ramp-filtering step.
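The ramp-filter-plus-apodization step mentioned above can be sketched as follows; the Hamming window is one illustrative choice of apodization (Hann, Shepp-Logan, etc. work similarly):

```python
# Minimal sketch of the apodized ramp filter used in filtered back
# projection: the ideal |f| ramp amplifies high-frequency noise, so it is
# multiplied by a window that rolls off toward the Nyquist frequency.
import numpy as np

def apodized_ramp(n):
    """Return the |f| ramp filter times a Hamming window (frequency domain)."""
    freqs = np.fft.fftfreq(n)                          # cycles/sample
    ramp = np.abs(freqs)                               # ideal ramp |f|
    hamming = 0.54 + 0.46 * np.cos(2 * np.pi * freqs)  # 1 at f=0, 0.08 at Nyquist
    return ramp * hamming

def filter_projection(p):
    """Ramp-filter one projection (one row of the sinogram)."""
    return np.real(np.fft.ifft(np.fft.fft(p) * apodized_ramp(p.size)))
```

Filtering each sinogram row this way before back projection gives the usual noise/resolution trade-off: stronger apodization means smoother but blurrier reconstructions.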
  • asked a question related to Tomography
Question
2 answers
The ordinary way to invert for velocity field in travel-time tomography is to construct sensitivity kernel as a forward operator and then inverting for velocity field by using a proper inversion algorithm. I want to know, Is there any difference in the solutions when we utilize fat ray approximation or the normal ray approximation in seismic travel-time tomography?
Relevant answer
Answer
Most seismic travel-time tomography approaches use the ray tracer as a way to identify the perturbed cells in the grid. Fat ray approximation may decrease resolution as adjacent cells can be hit (and thus subject to perturbation) around a specific ray path. Other than that, the results should be roughly the same if both ray tracers provide comparable responses.
  • asked a question related to Tomography
Question
6 answers
Many research papers state that the reconstruction algorithm being used in the respective research can incorporate  a priori information, without elaboration on where such data can be obtained.
The International Reference Ionosphere (IRI) seems like a popular choice, but does not represent the ionosphere very well at higher latitudes. IONOLAB uses IRI-2001 as an a priori source of information (http://www.ionolab.org/index.php?page=cit&language=en).
Some research has shown that in-situ data injection from another source for the a priori guess yields better results than built-in profiles for GPS CIT (Liu, Z. and Gao, Y., 2001, and Max van de Kamp,M.J.L., 2012 among others ). For CIT using a new type of signal (not GPS), potential data sources are ionosondes, ISR and reconstructions from GPS CIT. 
Can anyone suggest another potential source, please?
Relevant answer
Answer
For CIT using satellite-to-ground links as a primary dataset, the vertical profile is underdetermined. Therefore the datasets you and David suggested should be a good complement to data from Beacons or GPS. You can put these in either as a priori constraints or as observations as long as you account for the uncertainties. If you are using vertical basis functions, it makes sense to adjust these to be consistent with observations of the density-altitude profile before trying to ingest those observations. Zapfe et al. [2006] have a paper on that issue. 
As David mentioned, you need an optical forward model for airglow data, but Link and Cogger's [1988] formulation is quite straightforward for the 630.0-nm redline nightglow emission that occurs in the F-region. I think Tim Duly has coded it into Python with all the necessary supporting empirical models in Pyglow (which is open-access). Ultraviolet data are also available (e.g. 135.6-nm airglow) from DMSP/SSUSI and TIMED/GUVI and soon from NASA's ICON mission. The mutual neutralization term has to be accounted for in order to assimilate 135.6-nm properly. 
None of these datasets are especially good in the E-region (except the ISRs and ionosondes where we have them). This could be a problem for you if you are working at high latitudes, although it is also a problem elsewhere. In the high latitude case, I would suggest adding some a priori constraints from a particle precipitation model. 
Link, R., & Cogger, L. L. (1988). A reexamination of the OI 6300‐Å nightglow. Journal of Geophysical Research: Space Physics, 93(A9), 9883-9892.
Zapfe, B. D., Materassi, M., Mitchell, C. N., & Spalla, P. (2006). Imaging of the equatorial ionospheric anomaly over South America—A simulation study of total electron content. Journal of atmospheric and solar-terrestrial physics, 68(16), 1819-1833.
  • asked a question related to Tomography
Question
3 answers
I am doing some research on the extraction of eccrine sweat glands using OCT fingerprint imaging, but the images in our lab are too poor to recognize the glands. I want to know where I can find some OCT fingertip images.
Relevant answer
Answer
Optical coherence tomography should be well suited for that purpose. Although I have seen some papers dealing with fingertips (e.g. the vascular structure), I highly doubt there is any kind of accessible database.
May I ask which kind of OCT device you are using? (wavelength, FOV and type)
Another question is: Which resolution do you need? (let's say pixel per gland)
Do you need B-scans or C-scans?
  • asked a question related to Tomography
Question
4 answers
I want to use the travel times of P wave and S wave from local earthquakes to study the local tomography.
Relevant answer
Answer
Generally the first thing to do is to organize your data in some sort of database. A popular and free earthquake database software is SEISAN.
Once your data is stored and organized you will have to detect the events. This can be done manually or automatically. The most common approach is to use an STA/LTA detector.
Once you have a list of events you can start picking. This can also be done manually (by visually scanning the timeseries and manually picking the data) or automatically (by sending the timeseries through a picking algorithm, as in the paper posted by Tetsuo). 
Whether you pick manually or automatically will depend on how much time you have and what kind of tomography study you are planning.
here is another reference for automatic picking: http://www.bssaonline.org/content/92/5/1660.short
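The STA/LTA detector mentioned above can be sketched in a few lines; the window lengths below are illustrative choices, not recommended values:

```python
# Classic STA/LTA event detector: the ratio of a short-term average to a
# long-term average of signal energy spikes when an event arrives.
import numpy as np

def sta_lta(x, nsta, nlta):
    """Return the STA/LTA ratio of the squared signal (zeros where undefined)."""
    e = np.asarray(x, dtype=float) ** 2          # instantaneous energy
    csum = np.concatenate(([0.0], np.cumsum(e)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta    # short-term running mean
    lta = (csum[nlta:] - csum[:-nlta]) / nlta    # long-term running mean
    ratio = np.zeros_like(e)
    # both averages end at sample i; defined once i >= nlta - 1
    ratio[nlta - 1:] = sta[nlta - nsta:] / np.maximum(lta, 1e-12)
    return ratio
```

A pick is then declared wherever the ratio crosses a trigger threshold (values around 3-5 are common starting points, tuned per dataset).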
  • asked a question related to Tomography
Question
7 answers
I have medical tomography images varying in time series. How can I segment the dynamic cardiac part? Are there some recommended algorithms or papers? 
Thanks a lot ! 
Yours Sincerely,
Hengda He
Relevant answer
Answer
Dear Neha Baraiya
Thank you so much. You are so nice. :)
Hengda
  • asked a question related to Tomography
Question
6 answers
I would like to find data for Total Electron Content (TEC) as a function of latitude (TEC curves possibly from GPS observations). I also will need an ionospheric map (electron density as a function of altitude and latitude), possibly reconstructed from the TEC data or measured using another source such as radar. I wish to validate my raytrace program by passing rays through the ionospheric map, attempting to reproduce the previously measured TEC curve(s). Any suggestions for open-access databases or journal articles with both TEC curves and electron density maps would be greatly appreciated!
Relevant answer
Answer
CODE will give you interpolated (spherical-harmonic-fitted) TEC maps on a 2.5 x 5.0 lat x lon grid at two-hour temporal resolution. Rather than just using the CODE maps, however, you can also look at the IONEX TEC maps generated by each of the global analysis centers (IGS, CODE, JPL, ESA, etc.). The CODE, IGS, and JPL maps are generally considered the most reliable. ftp://cddis.gsfc.nasa.gov/gnss/products/ionex/
Instructions on how to navigate their ftp and about the data format can be found here: http://cddis.gsfc.nasa.gov/Data_and_Derived_Products/GNSS/atmospheric_products.html#iono
You can also try Anthea Coster's Madrigal TEC maps, which don't use any interpolation (leaving you to choose how you want to interpolate the data). You can get that data here: http://madrigal.naic.edu/ No account is needed but they do ask for your email an institution information for their records.
Electron density is another beast altogether. There doesn't really exist a reliable electron density model out there. The IRI has its issues at high and low latitudes, and the TIEGCM (http://www.hao.ucar.edu/modeling/tgcm/) isn't really at the stage where it can be considered for operational use. There are assimilation models out there, but none of them are openly available or run regularly.
For a formal evaluation, you may want to consider case-studies and seeing if you can get any of your data near one of the North American Incoherent Scatter Radars, which can get accurate, high-resolution electron density information. Poker Flat or Millstone Hill would probably be ideal. Poker Flat is a phased array scanning many beam directions at once and has a bunch of GPS receivers in its vicinity. Millstone Hill is in the central mid-latitudes with excellent range and is covered by an even larger array of GPS receivers. The ISR data can be retrieved from the same Madrigal site I posted a link to above. (They're also hosting an ISR Summer School in Europe this summer that you may be interested in attending -room, board, and flights are generally covered-). http://amisr.com/workshop
Alternatively, I'll be publishing my empirical electron density model for 60+ Magnetic Latitudes later this month. I'll send you the model as soon as I've finished optimizing it, if you want. Until then you can try the IRI and just see what kind of results you get. You can also scale IRI NmF2 to match map TEC, which may work OK at mid latitudes (big issues at high latitudes though).  
  • asked a question related to Tomography
Question
4 answers
Optical Coherence Tomography data is available from spectrum analyzer that needs to be assembled in the form of image. 
Relevant answer
Answer
ImageJ can be used and is Java-scriptable. But you have to be more specific: what sort of data do you have, and what is it that you want to measure? What is the scientific question that you need answered?
-Peter
  • asked a question related to Tomography
Question
4 answers
Hi experts,
I am very new to the seismic tomography area. I've gone through quite a lot of reading but honestly still can't quite picture how the tomography works.
As I understand it, travel time = tomography matrix * slowness (t = L*s).
From an assumed initial velocity model, we do a forward travel-time calculation to obtain the travel times. Then we compare to the observed (true) travel times so we can invert, update the initial velocity model, and get the best one.
However, in order to go through the above process, we first need to initialize the L matrix as a function of the acquisition geometry, and that is very complicated. Would anyone please kindly help me understand this? Or, if you know any program or code that can extract this L matrix for 2D reflection tomography, I'd be very grateful, since I've been stuck here for months!
Many thanks !
Chappi
Relevant answer
Answer
Thank you very much, Bagus. I really appreciate that! It's really helpful.
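The t = L*s bookkeeping asked about in the question can be illustrated with a toy sketch: straight horizontal rays on a tiny grid, so that each entry of L is just the path length of a ray inside a cell. Real reflection tomography traces bent rays through the model, but the structure of L is the same. All sizes and values here are illustrative.

```python
# Toy L matrix for straight horizontal rays on an nx-by-nz grid:
# L[i, j] is the length of ray i inside cell j, so travel times are t = L @ s.
import numpy as np

nx, nz, h = 4, 3, 1.0          # 4 cells wide, 3 deep, 1 km cells
L = np.zeros((nz, nx * nz))    # one horizontal ray per row of cells
for i in range(nz):            # ray i crosses every cell in row i
    for j in range(nx):
        L[i, i * nx + j] = h   # path length inside cell (i, j)

s_true = np.full(nx * nz, 0.5)  # slowness: 0.5 s/km everywhere (2 km/s)
t_obs = L @ s_true              # forward-modelled travel times
# The inverse step of the loop described in the question, here as a
# single least-squares solve (iterative solvers are used in practice):
s_est, *_ = np.linalg.lstsq(L, t_obs, rcond=None)
```

Each ray here crosses four 1-km cells of slowness 0.5 s/km, so every travel time is 2 s; the least-squares solve returns a slowness model reproducing those times. With bent rays, only the loop that fills L changes.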
  • asked a question related to Tomography
Question
4 answers
What algorithm do OCT machines use to generate automatic measurements of thicknesses and volumes?
Relevant answer
Answer
Dear Safal,
Many OCT machines come with a built-in measurement algorithm that can automate thickness and volume measurements. Automated layer detection is quite complex mathematically and it appears most OCTs use proprietary software for this purpose that varies greatly in terms of function and ongoing support. I could not find any freely available source or script for automated OCT measurement during my studies.
The best place to start is to find out the system name of the OCT you are working with and check whether the bundled software is sufficient for your needs. For my research, the software on the machine we used (Spectralis HRA+OCT; Heidelberg Engineering) was not suitable, so instead I manually measured thicknesses using custom scripts in ImageJ and Excel VBA. This was tedious but not difficult.
Regards,
Felix Aplin
  • asked a question related to Tomography
Question
4 answers
Even if the tomography resolution parameter set by the instrument, is there any alignments that can improve  it?  Is it possible to improve during reconstruction?
Relevant answer
Answer
Many parameters will have an effect on the resolution of your tomograms. Not only the imaging system and the reconstruction but also the sample itself are important. In cryo-ET, if you get everything right, radiation damage will become the limiting factor. Here is an overview article on all the parameters that need to be considered to reach this:
  • asked a question related to Tomography
Question
3 answers
Dear experts,
Is it true that solving the eikonal equation gives us only the travel time, not the amplitude?
As I understand it, tomography is categorized into two groups: wavefront tracking (using the eikonal equation) and ray tracing. So basically, tomography gives us only the travel time and does not consider amplitude at all?
Many thanks,
 Phung Chappi
Relevant answer
Answer
Yes, and here is why. To solve the wave propagation problem "asymptotically", we search for solutions of the Helmholtz equation under the so-called travel-time approximation (Rytov or WKB) that look like u(x) = A(x) exp{i omega P(x)}, where omega is the temporal frequency. Once we substitute this ansatz into the Helmholtz equation and group the terms by powers of omega, we obtain the eikonal equation for the phase P(x). So, in order to find an asymptotic solution that satisfies the Helmholtz equation to leading order, the phase of the above ansatz must satisfy the eikonal equation, but the amplitude can be arbitrary. However, once we equate to zero the terms of the next order in omega, we obtain the so-called transport equation, which involves both the amplitude A(x) and the phase P(x). Thus, amplitudes have a higher-order (i.e., smaller) effect under the travel-time approximation.
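Writing out the substitution may help; this is a standard sketch assuming the WKB ansatz u(x) = A(x) e^{i omega P(x)} (note that omega multiplies the phase):

```latex
% Helmholtz equation and WKB ansatz:
\Delta u + \frac{\omega^{2}}{c^{2}(x)}\, u = 0,
\qquad u(x) = A(x)\, e^{\,i\omega P(x)}

% Collecting the O(\omega^{2}) terms gives the eikonal equation (phase only):
\lvert \nabla P(x) \rvert^{2} = \frac{1}{c^{2}(x)}

% Collecting the O(\omega) terms gives the transport equation,
% which couples amplitude and phase:
2\, \nabla A(x) \cdot \nabla P(x) + A(x)\, \Delta P(x) = 0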
  • asked a question related to Tomography
Question
2 answers
I have X-ray tomography images of a vascular tissue consisting of multiple conduits arranged parallel to each other. The conduits are tracheids that have tapered endings. When visualized in VG Studio Max 2.2, the lengths of the tracheids seem to be different when viewed in the xz and yz plane. That is probably because the tissues are not arranged parallel to the plane being used to view the slices but are rather arranged at some angle. How do I estimate the lengths of these tracheids accurately using either VG Studio or Avizo Fire?
Relevant answer
Answer
Interesting, I would like to see these images! I've worked with plant fibers in TEM.
  • asked a question related to Tomography
Question
2 answers
Improvements in x-Ray computed tomography capabilities have occurred over the past decade making this non-invasive damage characterization technology more efficient. Now, how extensive is this modality being used in current research for ballistic impact damage characterization?
Relevant answer
Answer
I cannot speak for the laboratory-based systems; looking through the literature I have collected in past years, I found only one article: DOI: 10.1016/j.ijimpeng.2015.05.012. At the synchrotron light sources I cannot remember having seen such a request, though of course I do not know all activities. More common is that people are starting to exploit MHz-framerate radioscopy at synchrotron light sources to study impact dynamics in situ.
  • asked a question related to Tomography
Question
1 answer
My friend is doing 18F labelling through the Al18F-NOTA strategy but sees defluorination of the drug in vivo. Does anyone have this experience? Could you tell us the probable reason? Let us assume the radiochemical purity is good before injection.
Relevant answer
Answer
Has he carried out stability studies in vitro (pH stability, temperature, serum stability, etc.) and biodistribution in rats/mice?
  • asked a question related to Tomography
Question
2 answers
and it is capable of accepting a breast phantom in ".txt" file format
Relevant answer
Answer
The .txt file format is not supported.
  • asked a question related to Tomography
Question
5 answers
Hi every one, 
As you know, there are generally four ways of building a velocity model. The first is normal moveout (NMO) based velocity analysis (such as semblance). The second is wave-equation-based migration velocity analysis. The third is ray-based migration velocity analysis, also known as traveltime tomography. The fourth is full waveform inversion (FWI).
I have a question: what are the strengths and weaknesses of each of these methods?
Regards
Relevant answer
Answer
I think it's in a reading called "Introduction to Reflection Seismics" written by Dr. Ir. G.G. Drijkoningen from Delft University of Technology.
  • asked a question related to Tomography
Question
5 answers
Method for reconstructing the grain;
method for obtaining the Miller indices.
Relevant answer
Answer
  • asked a question related to Tomography
Question
7 answers
Hi there,
I am very new to tomography. I've searched and read quite a few papers about tomography in hopes of finding a clear algorithm or open-source MATLAB code that can help me initialize the matrix and solve for velocity, but I haven't seen any clues yet. I also feel very confused about where to start.
If you have been working on tomography, would you please give me some of your recommendations? I would be really grateful for that.
Many thanks,
Phung Chappi 
Relevant answer
Answer
You can try the ASTRA toolbox [1], which supports many different tomographic techniques and reconstruction methods, and uses optimized CPU and GPU code. It has a Matlab and Python interface. If you need any help installing or using it, let me know. Disclaimer: I am one of the developers of ASTRA (mainly the Python parts).
  • asked a question related to Tomography
Question
12 answers
Some molecules are secreted from the gastro-intestinal mucosa into the gastro-intestinal lumen and eventually end up in the feces. Some positron emission tomography (PET) tracers may be secreted into the GI lumen in this fashion and therefore potentially confound the use of these tracers for imaging cancers and other pathology in the GI tract. I would appreciate some good references for papers or book chapters which describe the principles and mechanisms of this type of secretion. Thanks.
Relevant answer
Answer
  • asked a question related to Tomography
Question
5 answers
I found some studies that report high-velocity perturbations beneath active volcanic cones instead of low-velocity anomalies, interpreted as magma intrusion. To what extent is this reliable?
Relevant answer
Answer
Thanks for your reply. So there is no way that molten material at shallower depths gives a higher P-wave anomaly; it is solidified magma intrusion.
regards  
  • asked a question related to Tomography
Question
3 answers
I want to know the effect of RMS on the result of a 1D model. Why do we use RMS in 1D tomography for showing results?
Relevant answer
Answer
Sara
First of all I want to clarify that today, in common practice, the term "tomography" has become a synonym for "data inversion". I do not agree with this; if anything, tomography is a part of inversion theory.
Anyway, if I understood well, I think that in your question you mean 1D data inversion.
All inversions depend on the data errors, model parameterization, theory connecting data and parameters, starting model, and regularization approach.
In general, also for 1D inversion, the RMS (or any other function of the difference between the observed data and those predicted by the model) is a measure of the precision of the model. It is an indicator of how well such a model can reproduce the observed data. For this reason it is necessary.
But you must take into account that you can have several models (an infinity) with the same "model precision"; in other words, the "model precision" is not a measure of the "model truthfulness". So the "model precision" is not sufficient to perform an appraisal of your model.
Can you make your 1D inverse problem explicit (type of data, model parameters, theory used to connect the data to model parameters, inversion approach L2, L1, ...)?
  • asked a question related to Tomography
Question
5 answers
For some species like Fagus orientalis or Quercus spp., it is very easy to spot the borders of sapwood and do the measurements. But for some species, the sapwood area is not visually recognizable. Do you have any suggestion for measuring the sapwood area of Carpinus betulus using standard instruments? I should mention that we don't have high-tech instruments like X-ray or tomography.
Relevant answer
Answer
Hi Hormoz,
Have you tried using a colour dye on stem cores? Methyl orange works well on many species; the difference in pH between sapwood and heartwood causes the dye to turn slightly different colours.
Fast-forward to 1 minute and 14 seconds in this YouTube video I made on sap flow, where I use methyl orange on a eucalypt tree to show sapwood vs. heartwood:
Michael
  • asked a question related to Tomography
Question
8 answers
I just want to work on the imaging modalities of PET in Alzheimer's disease.
Relevant answer
Answer
I would suggest two articles:  Small and others, Arch Neurology 2012;69(2):215-222 and Prestia and others in Neurology 2013;80:1048-1056.  Both of these articles are about the role of PET in predicting AD in those who are at high risk because of MCI or because of family Hx of AD. 
  • asked a question related to Tomography
Question
2 answers
Breast screening.
Relevant answer
Answer
Breast tomosynthesis is similar to conventional tomography. However, in digital tomosynthesis only a limited rotation angle (e.g., 15-60 degrees) is used, with a lower number of discrete exposures (e.g., 7-51) than CT. This incomplete set of projections is digitally processed to yield images similar to conventional tomography with a limited depth of field. However, because the image processing is digital, a series of slices at different depths and with different thicknesses can be reconstructed from the same acquisition, saving both time and radiation exposure.
Reconstruction algorithms for tomosynthesis are different from those of conventional CT because the conventional filtered back projection algorithm requires a complete set of data. Iterative algorithms based upon expectation maximization are most commonly used, but are computationally intensive.
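The expectation-maximization iteration mentioned above can be sketched as a generic MLEM update (toy matrix sizes, not any vendor's implementation):

```python
# Minimal MLEM sketch: A is the system (projection) matrix, y the measured
# projections; the multiplicative update keeps the image non-negative.
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM update: x <- x / (A^T 1) * A^T (y / (A x)), applied n_iter times."""
    x = np.ones(A.shape[1])            # flat, positive initial image
    sens = A.T @ np.ones(A.shape[0])   # sensitivity image (column sums of A)
    for _ in range(n_iter):
        proj = A @ x                   # forward project current estimate
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return x
```

For tomosynthesis, A encodes the limited-angle geometry, which is exactly why an iterative scheme like this is preferred over filtered back projection with its complete-data assumption.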
  • asked a question related to Tomography
Question
2 answers
I want to test reconstruction algorithms in ECT. I need data for true objects and data measured using an 8-electrode ECT sensor. Is there a model which I can use to generate data for simulating these algorithms?
Relevant answer
Answer
Dear Josiah, you can see the link below to download a dataset from an EIT sensor.
See below the dataset of Electrical Impedance Tomography (EIT).
I think these datasets can also be used to test your image reconstruction algorithm based on an ECT sensor.
  • asked a question related to Tomography
Question
8 answers
Brain tomography.
Relevant answer
Answer
.dat is a file extension often appended to unrecognized file types, or a generic catch-all file type for data of some format. If you don't know the source, it may sometimes help to open the file in a plain-text editor and see if the first lines or page (before the binary data starts) contain some clue about what format or program created the data.
  • asked a question related to Tomography
Question
1 answer
How do AC magnetic fields behave in 'metallic' magnetic induction tomography (MIT). Do they form a magnetic circuit around a metallic specimen or penetrate through it, or both? This would be in respect of the skin effect limiting the penetration of the AC magnetic driving field (primary).
MIT works by an AC magnetic driving field inducing eddy currents in a metallic specimen (to be imaged) and sensors (coils) on the other side of the specimen picking up the secondary magnetic field due to the eddy currents induced in it. It actually senses or picks up the total field (secondary and primary magnetic fields). Sensor coils measure the spatially distributed total fields to be tomographically represented as an image.
Relevant answer
Answer
Replace the pick-up coils with Hall probes, at least a couple, where one is located in the driving AC magnetic field (only slightly influenced by eddy currents) and the other is used as a probe for spatial magnetic field mapping around the object of interest. By tuning the Hall probe current of one, or both, the part of the Uh voltage output related to the primary magnetic field in the mapping probe can be compensated by the output of the reference one, keeping the eddy-current contribution to Uh "intact" (to some degree) throughout the whole cycle. The mapped data can be used in a Biot-Savart inverse to estimate the distribution of the eddy-current loops, including their dynamics in time, at least as a false picture on a surface.
The mapping can be done by triggering on the selected current phase, or by accumulating the Uh output over the whole cycle (or several) for every knot of the mapping mesh and later applying the phase selection in the numerical evaluation of the full set of mapped data.
Broad range of the Hall probes (concerning the working temperature range, sensitivity to B and dimensions, with active surface down to 25x25 micrometers) are available at the Institute of electrical engineering, Slovak Academy of Sciences, Bratislava (my long term personal experience).
  • asked a question related to Tomography
Question
5 answers
We are performing polarisation quantum state tomography of photon pairs generated through spontaneous parametric down-conversion by following the approach described in this paper: https://www.researchgate.net/publication/235531853_Measurement_of_qubits
Although we expect the quantum state to be pure, we are getting fairly low purity values (defined as purity = trace(rho^2), where rho is the density matrix). What are the typical causes of quantum-state impurity in spontaneous parametric down-converted photon pairs when performing polarisation tomography? How can we check whether something is wrong with the photon pair generation or with our implementation of the tomography procedure?
Relevant answer
Answer
Hi Alexander! Hope everything is OK with you.
Common sources of measured incoherence:
1. Generation of double pairs (with a probability of p^2 where p is the probability of generating one pair). While overall the state is coherent, the inability of most detectors to measure number states will appear as decoherence in the correlations.
2. Imperfect wave plates and polarizers. If wave plates are not aligned, you may be preparing something that is not what you expect.
3. Dark counts of the detectors. They will make your measured counts incoherent if dark counts are not correlated with measured polarization.
4. Depending on the scheme: compensation of the polarization group velocities. If your system is pulsed, and depending if you are using two Type I crossed crystals or one Type II crystal, you will have problems in compensating for the GVM or the GVD.
5. The coherence length of the pump can also cause trouble if it is too short.
There may be others...
Take care!
Gabriel
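The purity measure defined in the question is a one-liner; this sketch compares a pure Bell state with its fully dephased version, the kind of drop the causes above produce:

```python
# purity = Tr(rho^2): 1 for a pure state, 1/d for the maximally mixed state.
import numpy as np

def purity(rho):
    """Return Tr(rho^2) for a density matrix rho."""
    return np.real(np.trace(rho @ rho))

# Pure Bell state |phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_pure = np.outer(phi, phi.conj())          # purity 1

# Fully dephased (classically correlated) version: coherences zeroed out
rho_mixed = np.diag(np.diag(rho_pure))        # purity 0.5
```

Comparing the measured purity against such limiting cases can hint whether the loss is decoherence of the source or a systematic error in the tomography settings.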
  • asked a question related to Tomography
Question
4 answers
Hi everyone,
So I'm working on a tomography sample of a protein monolayer, which, ideally would be around ~3nm thick and ~10nm wide. I'm having problems trying to get the reconstruction of the thickness (z height). Any suggestions or anyone with similar problems and have solved this?
I am running with Imod.
Thanks
Vic
Relevant answer
Answer
Thanks for the feedback. I am doing a negative stain on my sample and it is a struggle, especially when I am trying to get a consistent ~3 nm height. I will try the FFT method and see how it goes.
  • asked a question related to Tomography
Question
35 answers
This question has been bothering me for over forty years. If we have a cylinder containing a low volume of a random distribution of small un-deformable particles (which approximate to points), and we strain the cylinder so that its length increases by (say) 50% while its volume remains constant, will the distribution of particles still be random?
If not, can the degree of non-randomness be measured?
Can the strain be deduced from this measure of non-randomness?
The 2D version of this question is slightly simpler.
In this case we start with an area containing a random distribution of points. We then elongate the area by (say) 50% assuming the area remains constant.
Is the distribution of points still random? If not, can the degree of non-randomness be measured? Can the strain be deduced from the latter measurement?
The answer would be relevant to metallurgy and physics of solids.
Relevant answer
Answer
.
if we assume a simple transformation of a [0, 1]x[0, 1] domain to [0, 2]x[0, 1/2]
x -> 2x
y -> y/2
it seems to me that a Kolmogorov complexity argument shows that if the initial distribution is random (in the Kolmogorov sense), the resulting distribution is random too
as Ute points out above, this of course ignores any kind of interaction that might happen between points or between points and the "substrate" (be it contact, short-range or long-range interaction); we all know that if such interactions are taken into account, phase transitions may occur as the conditions change (pressure, for instance)
.
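The transformation above is also easy to check numerically: apply the area-preserving stretch x -> 2x, y -> y/2 to a uniform sample and test the rescaled marginals for uniformity. A quick sketch with numpy/scipy (the sample size and seed are arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pts = rng.uniform(size=(100_000, 2))      # i.i.d. uniform points in [0,1] x [0,1]

# area-preserving elongation: x -> 2x, y -> y/2
stretched = pts * np.array([2.0, 0.5])

# rescale each coordinate back to [0,1] and test it against the uniform law
p_x = stats.kstest(stretched[:, 0] / 2.0, "uniform").pvalue
p_y = stats.kstest(stretched[:, 1] * 2.0, "uniform").pvalue
print(p_x, p_y)  # typically large p-values: no evidence of non-uniformity
```

This is consistent with the Kolmogorov argument: an affine, volume-preserving map of an i.i.d. uniform sample is again i.i.d. uniform on the image domain, so no statistic computed from a single snapshot of point positions can reveal the strain; only interactions between points (or a known pre-strain reference) break that symmetry.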
  • asked a question related to Tomography
Question
3 answers
In iterative reconstruction we have a matrix corresponding to the projection operator. This operator contains the contribution of each pixel to each projection, and it is known before the reconstruction starts. How is it estimated? Could you give me the details?
Relevant answer
Answer
I would say it's a general approach, both for real and simulated data. All of these models will work, but they will lead to different levels of reconstructed resolution (at least in PET this is the case).
I believe there is no formula you can simply use; however, you can try Siddon's ray-tracing algorithm for the system-matrix computation. Here is a link to the original work (www.ncbi.nlm.nih.gov/pubmed/4048266). There are plenty of modifications of the algorithm that improve its performance.
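To make the idea concrete, here is a minimal 2-D ray tracer in the spirit of Siddon's method (a simplified sketch, not his optimized incremental algorithm: unit square pixels, grid anchored at the origin, ray defined by its two endpoints; `siddon_2d` is just an illustrative name):

```python
import numpy as np

def siddon_2d(p0, p1, nx, ny):
    """Intersection lengths of the ray p0 -> p1 with a unit-pixel nx x ny grid
    whose lower-left corner sits at the origin. Returns {(i, j): length}."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    length = np.hypot(d[0], d[1])
    alphas = [0.0, 1.0]
    # parametric values where the ray crosses vertical / horizontal grid lines
    for axis, n in ((0, nx), (1, ny)):
        if d[axis] != 0.0:
            a = (np.arange(n + 1) - p0[axis]) / d[axis]
            alphas.extend(a[(a > 0.0) & (a < 1.0)])
    alphas = np.unique(alphas)
    weights = {}
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d        # segment midpoint identifies the pixel
        i, j = int(mid[0]), int(mid[1])
        if 0 <= i < nx and 0 <= j < ny:
            weights[(i, j)] = weights.get((i, j), 0.0) + (a1 - a0) * length
    return weights
```

Each returned weight is one entry of the system matrix: the length of the ray inside that pixel, i.e. the pixel's contribution to that projection line.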
  • asked a question related to Tomography
Question
3 answers
I see full-field optical coherence tomography (OCT) as a technique for doing OCT with a light bulb and a 2-D detector. In my opinion, this is attractive for two main reasons: 1) you lose the complexity associated with scanning mirrors, and 2) halogen lamps and CCD/CMOS sensors are generally cheaper than conventional FD-OCT sources such as SLDs/femtosecond lasers and line detectors. It's been many years now, and I still only see microscopy setups using full-field OCT. Why is that? Is the challenge mainly in setting up the Köhler illumination when a real eye is involved?
Relevant answer
Answer
Why would the axial resolution be any better than in any other OCT setup that uses a broadband source? SLDs with bandwidths beyond 60 nm are available from Superlum, and femtosecond lasers have huge bandwidths. You didn't mention cost, which makes it a different story, but if that's what you meant, then you definitely get more bandwidth for your buck using a light bulb for FFOCT.
Assuming you can design the illumination optics so that they are appropriate for human eyes (pretty sure this is one of the biggest obstacles), I don't see why it would be any less comfortable for the patient than the standard flood-illuminated fundus cameras currently on the market.
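To put numbers on the bandwidth argument: for a Gaussian spectrum, the free-space axial resolution is dz = (2 ln 2 / π) · λ₀² / Δλ. A small sketch (the 840 nm centre wavelength and the 60 nm vs. 300 nm bandwidths are illustrative values, not taken from the thread):

```python
import math

def axial_resolution_um(center_nm, bandwidth_nm):
    """Free-space axial resolution (micrometres) of an OCT source with a
    Gaussian spectrum: dz = (2 ln 2 / pi) * lambda0^2 / dlambda."""
    dz_nm = (2.0 * math.log(2.0) / math.pi) * center_nm ** 2 / bandwidth_nm
    return dz_nm / 1000.0

# an 840 nm SLD with 60 nm bandwidth vs. a thermal source filtered to ~300 nm
print(round(axial_resolution_um(840.0, 60.0), 2))   # ~5.19 um
print(round(axial_resolution_um(840.0, 300.0), 2))  # ~1.04 um
```

So the thermal source wins on raw bandwidth; in tissue both figures improve by the refractive index (~1.4), and the practical limit is usually dispersion compensation and the detector, not the source.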
  • asked a question related to Tomography
Question
79 answers
We are working on solid foams. Micro-tomography gives very nice pictures, but what we need is quantitative information from them: surface area, distributions of cell sizes, window sizes and strut thicknesses, open vs. closed porosity, tortuosity, etc. We bought Aphelion 3D, which was quite an expensive piece of software, but we deeply regret it, as it is a true nightmare, very difficult to handle for non-specialists in programming. And the customer support is almost zero; nobody really helps, whereas our needs are very basic. So can you suggest free and easy software for doing the same?
Relevant answer
Answer
Hi,
I recommend the very user-friendly ImageJ:
ImageJ, National Institute of Mental Health, Bethesda, Maryland, USA, http://rsb.info.nih.gov/ij.
and also the freeware WSxM:
I. Horcas, R. Fernández, J.M. Gómez-Rodríguez, J. Colchero, J. Gómez-Herrero, A.M. Baro, WSXM: a software for scanning probe microscopy and a tool for nanotechnology, Review of Scientific Instruments 78 (2007) 013705.
Good luck, I hope my hints were helpful.
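For a concrete starting point, measures such as total vs. open porosity take only a few lines of numpy/scipy once the tomogram is binarised. A sketch (the function name and the convention pore = True are my own assumptions; pore regions touching any face of the volume are counted as "open"):

```python
import numpy as np
from scipy import ndimage

def porosity_stats(vol):
    """Basic quantitative measures from a binarised tomogram.
    vol: 3-D boolean array, True = pore (void), False = solid.
    Returns (total, open, closed) porosity as volume fractions."""
    total_porosity = vol.mean()
    # label connected pore regions; those touching a face count as "open"
    labels, _ = ndimage.label(vol)
    edge_labels = set()
    for axis in range(3):
        edge_labels |= set(np.unique(np.take(labels, 0, axis)))
        edge_labels |= set(np.unique(np.take(labels, -1, axis)))
    edge_labels.discard(0)                     # 0 is the solid background
    open_mask = np.isin(labels, list(edge_labels))
    open_porosity = open_mask.mean()
    return total_porosity, open_porosity, total_porosity - open_porosity
```

The same kind of measurement (plus cell-size distributions and thickness maps) is available interactively in ImageJ through plugins such as BoneJ.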