3D-Imaging in Research
Questions related to 3D-Imaging
  • asked a question related to 3D-Imaging
Question
2 answers
Is there any software that can convert a 2D image of the Köppen climate map of the world into 3D just by importing the image?
Relevant answer
Answer
There is no specific software that can directly convert a 2D image of the Köppen climate classification map into a 3D representation by simply inputting the image. Converting a 2D image to a 3D representation typically requires manual or automated reconstruction using specialized software or 3D modeling tools.
However, there are general-purpose 3D modeling and visualization software that can help you create a 3D representation of the Köppen climate classification map based on the information in the image. Some popular 3D modeling software options include:
  1. Blender: Blender is a free and open-source 3D modeling and animation software. It provides a wide range of tools for creating and manipulating 3D objects, as well as materials and textures. You can use Blender to import the Köppen climate classification image as a reference and manually model the 3D representation based on the image.
  2. Autodesk 3ds Max: 3ds Max is a professional 3D modeling and rendering software. It offers powerful tools for creating detailed 3D models and scenes. With 3ds Max, you can import the Köppen climate classification image and use it as a reference to create a 3D representation by modeling the terrain and applying appropriate textures.
  3. SketchUp: SketchUp is a user-friendly 3D modeling software that is widely used for architectural and spatial design. While it may not be specifically tailored for terrain modeling, you can import the Köppen climate classification image into SketchUp and use it as a reference to create a basic 3D representation by extruding and manipulating shapes.
Keep in mind that creating an accurate and detailed 3D representation of the Köppen climate classification map may require additional data and information beyond the 2D image. It might involve interpreting the climate zones, understanding the topography, and incorporating additional geospatial data to create a more realistic 3D model.
Overall, the process of converting a 2D image of the Köppen climate classification map into a 3D representation typically involves manual reconstruction and modeling using specialized 3D software tools.
This is a free online tool that might help you regarding colors, topologies, etc.; you can lay the image on the contours.
Hope this helps.
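If the goal is just a quick relief-style 3D view rather than a full GIS workflow, the 2D map can be turned into a height field with a few lines of NumPy. The array below is a random stand-in for the real grayscale map, and mapping intensity to height is purely illustrative:

```python
import numpy as np

# Hypothetical stand-in for a grayscale Köppen map loaded as a 2D array
# (in practice: img = plt.imread("koppen.png"), then convert to grayscale).
img = np.random.default_rng(0).integers(0, 256, size=(64, 128)).astype(float)

# Normalise pixel intensity to [0, 1] and treat it as a relief height.
height = (img - img.min()) / (img.max() - img.min())

# Build the X/Y grid that a 3D surface plotter expects.
ny, nx = height.shape
X, Y = np.meshgrid(np.arange(nx), np.arange(ny))

assert X.shape == Y.shape == height.shape
```

The resulting X, Y, and height arrays can then be passed to matplotlib's `Axes3D.plot_surface`, or the normalised map can be used as a displacement texture in Blender.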
  • asked a question related to 3D-Imaging
Question
4 answers
There are many fields where light field can be utilized. For example it is utilized in microscopy [1] and for vision based robot control [2]. Which additional applications do you know?
Thank you in advance!
[1] Li, H., Guo, C. and Jia, S., "High-resolution light-field microscopy," Frontiers in Optics, FW6D. 3 (2017).
[2] Tsai, D., Dansereau, D. G., Peynot, T. and Corke, P. , "Image-Based Visual Servoing With Light Field Cameras," IEEE Robotics and Automation Letters 2(2), 912-919 (2017).
Relevant answer
Answer
Hi Vladimir Farber, I think one field of light field imaging missing from the list is the quality assessment of light field images as they propagate through a communication channel. For your reference, one published paper in this field is:
"Exploiting saliency in quality assessment for light field images."
  • asked a question related to 3D-Imaging
Question
3 answers
A hologram is made by superimposing a second wavefront (normally called the reference beam) on the wavefront of interest, thereby generating an interference pattern that is recorded on a physical medium. When only the second wavefront illuminates the interference pattern, it is diffracted to recreate the original wavefront. Holograms can also be computer-generated by modeling the two wavefronts and adding them together digitally. The resulting digital image is then printed onto a suitable mask or film and illuminated by a suitable source to reconstruct the wavefront of interest.
Why can't the two-intensity combination algorithm be adapted from MATLAB to Octave to create an interference pattern?
  • asked a question related to 3D-Imaging
Question
5 answers
Hello there,
Can someone recommend how I can reconstruct a 3D image from a stack of 2D images?
Please guide me in this regard.
Thanks in advance.
Relevant answer
Answer
If by a 3D image you mean a volumetric reconstruction and not a stereoscopic image, I'd say that choosing the right method will depend on the file format and the aim of the reconstruction. All the options mentioned earlier can work fine for datasets of reasonable size. However, if for some reason you are trying to reconstruct a large dataset (dozens of GB), a technical note we recently presented on using point clouds to reconstruct gigaresolution medical datasets might give you an idea of how to do it:
-DOI: 10.1096/fasebj.2021.35.S1.04298
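As a rough illustration of the point-cloud idea for oversized volumes, here is a sketch that thresholds and subsamples a synthetic volume into a sparse cloud; the volume, threshold, and stride are arbitrary stand-ins:

```python
import numpy as np

# Synthetic stand-in for one chunk of a large volumetric dataset.
vol = np.zeros((40, 40, 40), dtype=np.uint8)
vol[10:30, 10:30, 10:30] = 255            # a bright cube as the "object"

# Keep only voxels above a threshold, and subsample to thin the cloud.
stride = 2                                 # keep every 2nd voxel in each axis
sub = vol[::stride, ::stride, ::stride]
points = np.argwhere(sub > 128) * stride   # voxel indices back in original units

assert points.shape[1] == 3
```

The subsampled point list grows with the object, not with the raw voxel count, which is why this scales to datasets that do not fit in RAM when streamed chunk by chunk.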
  • asked a question related to 3D-Imaging
Question
3 answers
My experiment demands X-ray CMT imaging. I need to place my samples on a substrate that is X-ray transparent. Can anyone please suggest suitable materials (other than foam)?
Thank you in anticipation.
Relevant answer
Answer
Dear Pradip Singha , any stiff material consisting of low atomic number elements is suitable.
Examples are PMMA or polycarbonate.
Patient tables for X-ray CT are built as a sandwich of carbon-fiber sheet / foam / carbon-fiber sheet.
For lightweight samples a thin carbon-fiber plate is sufficient.
You may have seen them already.
In the attached link there are some pictures of such plates:
  • asked a question related to 3D-Imaging
Question
2 answers
I am working on a 3D deformation analysis of buildings for my final-year project. I would love to know how one can use Erdas Imagine to determine the height of an image and achieve the above.
Relevant answer
Answer
Can we measure height through analyzing one SEM image?
  • asked a question related to 3D-Imaging
Question
6 answers
The volume data is so big that using CNNs for the segmentation is difficult, because it does not fit in memory. Can anybody give me advice?
Relevant answer
Answer
The simplest way to segment a 3D volume, without needing many resources, is a random forest algorithm.
  • asked a question related to 3D-Imaging
Question
5 answers
I am looking for an image processing software, that can reconstruct a high resolution SEM 2D-image of a porous Ni-GDC anode, to a 3D representation. The intent is to separate the phases Ni,Ce,Pores, measure tortuosity, porosity etc.
I have contacted people from Avizo(Mercury Computer Systems), but their license will cost above 13000 US$. I do not have the means (financial and time) for this. Can anyone recommend an alternative? 
Appreciated   
Relevant answer
Answer
Waqas Tanveer, if you select the most convenient software, please share it with me.
Thank you in advance.
  • asked a question related to 3D-Imaging
Question
2 answers
Dear Experts,
Have you ever tried to change the nominal resolution on a Scanco uCT50? If I need to achieve 2-, 4- and 8-micron (voxel size) images, is it possible to set these parameters in Scanco, or can I use only the default settings? I know that it's not a problem on an XRadia, but I'm not sure about Scanco.
Thank you for your help.
Alexandra
Relevant answer
Answer
Thank you @H. Douglas Morris for your help!
  • asked a question related to 3D-Imaging
Question
5 answers
Say I have images at slightly different magnifications, or say I used two different SEM devices with different screens, so the magnification differs. While preparing the report, I would like all the images to have the same scale. How can I do this?
Can ImageJ or GIMP help me do this? Could you please suggest the steps?
Thanks.
Relevant answer
Answer
Thank you. ImageJ is a good application for the scale bar.
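In ImageJ this is essentially Analyze → Set Scale followed by a resize. The underlying arithmetic can also be sketched directly: compute the ratio of pixel sizes and resample one image so both end up at the same physical pixel size. Nearest-neighbour is used here for simplicity, and the 4×4 image and pixel sizes are toy values:

```python
import numpy as np

def rescale_to_pixel_size(img, px_in, px_out):
    """Nearest-neighbour resample so the output has pixel size px_out
    (same physical units as px_in, e.g. nm/pixel)."""
    factor = px_in / px_out                 # >1 enlarges, <1 shrinks
    ny, nx = img.shape
    new_ny, new_nx = int(round(ny * factor)), int(round(nx * factor))
    rows = np.clip((np.arange(new_ny) / factor).astype(int), 0, ny - 1)
    cols = np.clip((np.arange(new_nx) / factor).astype(int), 0, nx - 1)
    return img[np.ix_(rows, cols)]

# Example: image B was taken at 4 nm/px; bring it to 2 nm/px to match image A.
b = np.arange(16, dtype=float).reshape(4, 4)
b_rescaled = rescale_to_pixel_size(b, px_in=4.0, px_out=2.0)
assert b_rescaled.shape == (8, 8)
```

For publication figures you would normally use a proper interpolating resampler (bilinear or bicubic), but the pixel-size bookkeeping is the same.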
  • asked a question related to 3D-Imaging
Question
3 answers
Hello
I am a student of MS Computer Science. I am going to start my research career in pattern recognition and 3D image reconstruction. I need a road map; if someone can guide me on where to start, I would appreciate it.
Thanks a lot
Relevant answer
Hello. First of all, congratulations on your new start. Secondly, you should study survey and review papers on pattern recognition, which will show you the existing methods and how the process works. The same goes for 3D reconstruction: you should study image recomposition and stereoscopic images. Best of luck.
  • asked a question related to 3D-Imaging
Question
8 answers
Dear All,
I am using AVIZO to reconstruct 3D volumes from CT-scan images. There are two types of images that I have (from different CT machines). The first series was created by taking slices in 3 directions (X, Y, Z), and I can easily reconstruct it in AVIZO. But there is a second type, in which images are created by rotating the specimen inside the CT machine while scanning (360 degrees). When I load this type of image in AVIZO, instead of getting a spherical shape, for example, I end up with a cylindrical shape. The reason is that AVIZO stacks these slices in one direction instead of reading them in rotation order.
Does anybody know how I can load this type of images?
With all thanks,
Arsalan
Relevant answer
Answer
Nasseh Khodaie Hi there!
Thanks for sharing the blog post.
I have no problem with reconstructed images, as I am familiar with plenty of software packages that can handle the segmentation.
My main problem is when CT images are passed to me without initial reconstruction which should normally be done right after scanning!
I guess I need to figure out how to do this in Matlab :-p
  • asked a question related to 3D-Imaging
Question
5 answers
3D image segmentation
3D image feature extraction using GLCM and Gabor filters
  • asked a question related to 3D-Imaging
Question
5 answers
Hello!
Using 3D models/images in geophysical work has recently become more popular. Based on your experience, which programs (preferably free of charge) can help to produce better/easier 3D outputs from geophysical 2D images?
Cheers,
A.
Relevant answer
Answer
We developed software ourselves for 3D visualization, calculation, and simulation of porous media; however, it is commercial. For free software, I'd like to recommend FDMSS ( http://porenetwork.com/download/fdmss/ ), developed by Kirill M. Gerke et al.
  • asked a question related to 3D-Imaging
Question
4 answers
Hi, I have a .nii file that is a mask of the fusiform gyrus. I would like to extract the xyz coordinates from this image.
Relevant answer
Answer
This is an old thread, but let me put these code pieces here so anyone who needs them in the future can use them! If your mask file was produced with SPM's plugin Marsbar, then you should have an additional sphere-information file in .mat format for each of your .nii mask files (if you created your masks using a sphere definition). Then you can use the code below in Matlab to get the xyz coordinates of each voxel in your mask. Obviously, make sure that you have SPM and Marsbar on your Matlab path (usually Marsbar comes together with SPM by default, but just in case!)
sphereInfo = maroi('myMask.mat');
volumeInfo=spm_vol('myMask.nii');
xyzCoordinates = realpts(sphereInfo,volumeInfo);
If you only have the NIfTI file of your mask, then these two lines of SPM code will be enough to extract that information:
volumeInfo=spm_vol('myMask.nii')
[intensityValues,xyzCoordinates ]=spm_read_vols(volumeInfo);
I hope this helps!
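For readers without SPM/Marsbar, the same voxel-to-world conversion can be sketched in plain NumPy. The mask and the 2 mm affine below are synthetic stand-ins; in practice you would load both from the .nii file with a package such as nibabel:

```python
import numpy as np

# Synthetic binary mask standing in for the .nii data
# (with nibabel: m = nib.load("myMask.nii"); data, affine = m.get_fdata(), m.affine).
data = np.zeros((10, 10, 10))
data[4:6, 4:6, 4:6] = 1
affine = np.diag([2.0, 2.0, 2.0, 1.0])    # hypothetical 2 mm isotropic voxels

ijk = np.argwhere(data > 0)               # voxel indices inside the mask
ijk1 = np.c_[ijk, np.ones(len(ijk))]      # homogeneous coordinates
xyz = (affine @ ijk1.T).T[:, :3]          # world-space (mm) coordinates

assert xyz.shape == (8, 3)
```

This mirrors what `spm_read_vols` does internally: voxel indices are mapped through the image's affine to scanner/world coordinates.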
  • asked a question related to 3D-Imaging
Question
16 answers
I have a set of slices of retina images. Using Optovue software I can reconstruct the 3D structure of a piece of retina (see attached link) and measure some 2D and 3D parameters (like the thickness of the retina at different positions, the volume of a particular region, etc.). I want to find other software for this purpose (to be able to work from home). I tried to use 3D Slicer, but I can't make it work well: I can upload a set of JPG images, but I can't create a 3D model. Can somebody advise other software, or point to a good tutorial on 3D Slicer and JPEG images?
Relevant answer
Answer
Yes, you can convert a 3D model to a series of cross-sectional images using 3D Slicer. For example, you can load the STL file, then in Data module right-click to convert it to segmentation, then right-click on the created segmentation to convert it to labelmap. Post further questions on using 3D Slicer to discourse.slicer.org.
  • asked a question related to 3D-Imaging
Question
3 answers
3D image reconstruction from 2D images and from multiple views.
Relevant answer
Answer
Elissa Mhanna, you may use 3D Fusion from the Kinect SDK; it can be modified, and then you can apply multiview.
  • asked a question related to 3D-Imaging
Question
3 answers
Dear All,
I am interested in using machine learning to analyze MRI data in order to extract pathological changes in neurodegenerative diseases (e.g., Alzheimer's, Huntington's).
Can someone please give some advice on which (bioinformatic, statistical) methods would, in your experience, be best for an initial analysis and for extracting disease-specific features?
Also, what would be the most useful platform, and which visualization tools or popular packages are best for data extraction and presentation in this case?
I have about 500 samples from Philips and Siemens scanners. Could you also suggest the processing power I would typically require, for example if I want to train models and create signature-prediction algorithms? And which MRI parameters would be most informative in each case (i.e., FLAIR, DTI)?
Any help would be greatly appreciated
Regards
Anastasios
  • asked a question related to 3D-Imaging
Question
8 answers
I have a TIFF image of around ~10 GB. I need to perform object or pixel classification on this image. The image data has zyx order. My voxel size is x=0.6, y=0.6 and z=1.2.
Z is the depth of the object.
If I classify the pixels in each Z plane separately and then merge the results to get the final shape and volume of the object, would I lose any information, so that my final shape or volume would be wrong?
Relevant answer
Answer
If your data have a very high slice spacing in the 3rd dimension and you are getting per-slice 2D results, you need to integrate neighbouring slices, resampling to a fixed spacing of 1 mm using linear interpolation, to deal with this problem.
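The interpolation step suggested above can be sketched as follows; this is pure NumPy with toy 4×4 slices standing in for real data:

```python
import numpy as np

def interpolate_slices(a, b, n_between):
    """Linearly interpolate n_between intermediate slices between 2D slices a and b."""
    out = []
    for k in range(1, n_between + 1):
        t = k / (n_between + 1)            # fractional position between a and b
        out.append((1 - t) * a + t * b)
    return out

a = np.zeros((4, 4))
b = np.ones((4, 4)) * 2.0
mid = interpolate_slices(a, b, 1)[0]       # one slice halfway between a and b
assert np.allclose(mid, 1.0)
```

Applied along Z, this turns anisotropic voxels (e.g. 0.6×0.6×1.2) into approximately isotropic ones before merging the per-slice classifications.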
  • asked a question related to 3D-Imaging
Question
3 answers
I have three images of the same scene with a set of triple correspondences. I can calculate the trifocal tensor and recover pairwise fundamental matrices, rotation matrices, and translation vectors (up to a scale between first and second translations). This scale can be easily calculated in a noiseless case (simulation). However, as soon as even small noise is added (Gaussian, with sigma=0.1 pixel), all estimations go seriously wrong. All triples are guaranteed inliers, but even optimization techniques fail to find correct (known in simulation) scale. Does anyone know where the catch might be?
Relevant answer
Answer
Mathematically you can use the Radon transform, but you need more images, not only three.
  • asked a question related to 3D-Imaging
Question
3 answers
I have this code for the attached 3DBodyPhantom image.
I want to make slices of this image, and after slicing it I want to reconstruct it using radon and iradon.
How can I slice this image?
dim = [256 256 131]; %[256 256 116];
fid = fopen('3DBodyPhantom.bin'); %fopen('output42_atn_1.bin');
x= fread(fid, 'float32');
y=reshape(x, dim);
%to view sagittal slices
for i=1:dim(1)
imagesc(squeeze(y(i,:,:)));
pause(0.05);
end
%to view coronal slices
for i=1:dim(2)
imagesc(squeeze(y(:,i,:)));
pause(0.05);
end
%to view axial slices
for i=1:dim(3)
imagesc(squeeze(y(:,:,i)));
pause(0.05);
end
fclose(fid);
Relevant answer
Answer
Download Slicer; it's free.
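For reference, the reading-and-viewing loop from the question can be mirrored in Python/NumPy. Note that MATLAB's reshape is column-major, so Fortran order is needed to get the same slices; the dimensions and file below are reduced synthetic stand-ins for 3DBodyPhantom.bin:

```python
import numpy as np, os, tempfile

dim = (64, 64, 32)                         # reduced stand-in for [256 256 131]

# Synthetic stand-in for 3DBodyPhantom.bin (float32 raw file).
path = os.path.join(tempfile.mkdtemp(), "phantom.bin")
np.arange(np.prod(dim), dtype=np.float32).tofile(path)

# MATLAB's reshape is column-major, so use Fortran order to get the same y(i,j,k).
x = np.fromfile(path, dtype=np.float32)
y = x.reshape(dim, order="F")

sagittal = y[10, :, :]                     # like squeeze(y(i,:,:)) in the MATLAB loop
coronal  = y[:, 10, :]                     # like squeeze(y(:,i,:))
axial    = y[:, :, 10]                     # like squeeze(y(:,:,i))
assert sagittal.shape == (64, 32)
```

Each axial slice can then be fed to a Radon/inverse-Radon pair (e.g. `radon`/`iradon` in scikit-image) for the reconstruction experiment.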
  • asked a question related to 3D-Imaging
Question
2 answers
How can I read a .mhd 3d image? Can you please provide a MATLAB example code.
Relevant answer
Answer
You can read/see them in a number of image viewers such as ITK-SNAP, Slicer, ImageJ etc. Or, you can play with the data in python/matlab.
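Since the MetaImage header is plain "key = value" text pointing at a raw data file, a minimal reader is easy to sketch in Python. Only a few keys are handled here; for anything serious, SimpleITK's `ReadImage` is the robust route:

```python
import numpy as np, os, tempfile

def read_mhd(path):
    """Minimal MetaImage (.mhd) reader: parse the 'key = value' header,
    then load the raw file it points to. Handles only a few element types."""
    hdr = {}
    with open(path) as f:
        for line in f:
            key, _, val = line.partition("=")
            hdr[key.strip()] = val.strip()
    dims = [int(v) for v in hdr["DimSize"].split()]
    dtype = {"MET_UCHAR": np.uint8, "MET_SHORT": np.int16,
             "MET_FLOAT": np.float32}[hdr["ElementType"]]
    raw = os.path.join(os.path.dirname(path), hdr["ElementDataFile"])
    # MetaImage stores x fastest, so reshape as (z, y, x).
    return np.fromfile(raw, dtype=dtype).reshape(dims[::-1])

# Round-trip demo with a synthetic 4x3x2 volume.
tmp = tempfile.mkdtemp()
vol = np.arange(24, dtype=np.uint8).reshape(2, 3, 4)      # (z, y, x)
vol.tofile(os.path.join(tmp, "demo.raw"))
with open(os.path.join(tmp, "demo.mhd"), "w") as f:
    f.write("NDims = 3\nDimSize = 4 3 2\n"
            "ElementType = MET_UCHAR\nElementDataFile = demo.raw\n")
loaded = read_mhd(os.path.join(tmp, "demo.mhd"))
assert np.array_equal(loaded, vol)
```

In MATLAB the equivalent is to parse the header the same way and use `fread` with the matching precision; real headers may also carry `ElementSpacing`, `Offset`, and compression flags that this sketch ignores.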
  • asked a question related to 3D-Imaging
Question
17 answers
I have a building plan in 2d format that I want to convert into 3d, kindly suggest recent software available. 
Relevant answer
BIM (building information modeling) can be a solution:
-Revit
-Archicad
- VisualArq
  • asked a question related to 3D-Imaging
Question
3 answers
I am using tapping mode AFM in MilliQ to image a plasmid PGEM3Z which is ~1um long circular DNA. I prepare the sample in the following way.
I take a 15 mm × 15 mm mica sheet and cleave it using Scotch tape. Then I incubate the mica sheet in a solution of 10 ml methanol + 0.5 ml acetic acid + 0.3 ml APTES for 30 min (this protocol is followed in Prof. Chirlmin Joo's lab). I then clean it with methanol twice and dry it with a nitrogen purge. Then 0.3 ng/µl PGEM3Z suspended in MilliQ water is added onto the APTES-coated mica and incubated for 10 min. I can see the DNA, but the quality of the image is not satisfactory.
I am not sure about the process of tuning the cantilever to a particular frequency, the amplitude setpoint and the gains required to get better image.
  • Is there any order to be followed in which these parameters have to be set depending on their importance?
How do we choose the driving frequency and amplitude setpoint?
Please see the attached images:
In Image1, the DNA height is only about 0.7nm but it is supposed to be 2nm.
In Image2, the DNA is seen in phase image clearly but not in topography.
Why does this happen?
  • asked a question related to 3D-Imaging
Question
11 answers
So I am doing an experiment involving endothelial cells (HMVECs) plated on top of Matrigel and am trying to look at the branching and morphological properties of the cells over a time period. I have been using Matrigel and see tube formation and structures very rapidly and want to slow down this process. Are there any known protocols for this such as a mixture of collagen and Matrigel? If so can someone help me find references.
I'm looking to see slower tube formation over a course of days not hours.
Relevant answer
Answer
Hi Danielle, have you seen real vessel-tube-like structures after culturing ECs on Matrigel? How long did you culture them? My guess is that the ECs might group together in a way that resembles vessels; what do you think? I may be wrong, but EC sprouts might form vessel-like structures that are in fact not real vessels. If there is real vessel formation, you can control angiogenesis by reducing the EC numbers, using a lower concentration of Matrigel, or both. Please update us on a successful attempt to control the rapid angiogenesis on the Matrigel substrate. Thanks.
  • asked a question related to 3D-Imaging
Question
4 answers
So I have these endothelial cells (HMVECs) plated on top of a thick layer of Corning Matrigel and when I start imaging at T0 they are low (in this plate around 890um above the plastic) when I end at T11 which is 11 hours later they are higher (around 1010um in this plate). Can anyone explain why?
We are using Perkin Elmer's HCS Operetta
Relevant answer
Answer
It worked!! Thank you for the recommendation. I went into lab and plated the Matrigel a day in advance and added media on the top. The next day I plated my cells and the images were much better and the cells stayed in the same plane.
  • asked a question related to 3D-Imaging
Question
9 answers
I can adjust the threshold function and lasso a cell outline, but I would like Fiji to automatically recognize and lasso the same cell outline through the z-stack, so I don't have to do it manually before making the measurement. I got it to work once but then could not replicate it, so I know it is possible. Or, if there is an easier way to do it, I'd be happy to try it.
Relevant answer
Answer
You can use the Voxel Counter plugin in ImageJ. After thresholding the stack, you get the number of voxels across all images of the z-stack. Since the size of each voxel is known, you can get the volume by multiplying the two.
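The voxel-counting arithmetic is simple enough to sketch in NumPy; the stack and the voxel dimensions below are hypothetical:

```python
import numpy as np

voxel_size = (0.2, 0.2, 0.5)               # (x, y, z) in microns, hypothetical values

# Synthetic thresholded z-stack: a 10x10x10-voxel "cell".
stack = np.zeros((20, 20, 20), dtype=bool)
stack[5:15, 5:15, 5:15] = True

n_voxels = int(stack.sum())
volume = n_voxels * np.prod(voxel_size)    # voxel count times single-voxel volume
assert n_voxels == 1000
```

This is exactly what the plugin does internally: count thresholded voxels, then scale by the calibrated voxel volume.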
  • asked a question related to 3D-Imaging
Question
5 answers
I want to calculate soft-tissue thickness at different landmarks from 3D MRI/CT-scan images. Can anyone suggest the best-suited automated software for soft-tissue thickness in this case?
Relevant answer
Answer
I think David is on the right path. Here is what I did years ago.
To create a distance (thickness of soft tissue) map on the skin of a horned lizard (two species in this picture link or attached file)
An isosurface was created of the soft tissue or the skin. Then an isosurface was created of the bone.
Points from each surface (skin and bone) were compared to find the shortest distance between both sets of points.
Once the shortest distance was found, this was the value (distance or thickness) of the soft tissue. Each point was assigned a thickness value, in my case only the soft tissue or skin set of points was assigned a thickness value.
This thickness can be assigned a color gradient in the case of the horned lizards very thin or no thickness (horns) was set to white and varying in greater thickness assigned to red, yellow, green, and the thickest was blue and purple.
Free Software used was IsoSurf from Cambridge to create the isosurface but ParaView or MatLab can do the same types of isosurface calculations.
To compare the points I first sorted both sets of points by x then by y then z to reduce the amount of comparisons and then wrote my own code to find the shortest distance.
I think Excel can also do the sorts and find the MIN distance, but I have not tried this. Hope this helps.
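The shortest-distance step described above can be sketched as a brute-force NumPy computation; the two point sets here are synthetic (the "bone" is just the "skin" shifted by 2 units):

```python
import numpy as np

rng = np.random.default_rng(1)
skin = rng.random((200, 3)) * 10           # hypothetical skin-surface points
bone = skin + np.array([0.0, 0.0, 2.0])    # hypothetical bone surface, offset 2 units

# Brute force: for each skin point, distance to its nearest bone point.
diff = skin[:, None, :] - bone[None, :, :]          # (200, 200, 3) pairwise differences
dists = np.linalg.norm(diff, axis=2)
thickness = dists.min(axis=1)                       # shortest distance per skin point

assert thickness.shape == (200,)
```

The per-point thickness values can then be mapped to a colour gradient exactly as described. For large surfaces, `scipy.spatial.cKDTree` makes the nearest-neighbour query far cheaper than the full pairwise matrix.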
  • asked a question related to 3D-Imaging
Question
3 answers
What is chisquare attack and how do i perform chisquare attack for Least significant bit Image steganography
Relevant answer
Answer
The chi-square attack is a simple and well-known method to test the robustness of a security system against attacks. It depends on a probability analysis of the stego image after the information has been embedded using the LSB method; this is then compared with the probability analysis of the original image to check the difference between them. If the difference is near zero, there is no information inside the image; if it is near one, the image holds information.
Best regards,
Zeyad Safaa Younus Saffawi
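A minimal sketch of the pairs-of-values chi-square idea (after Westfeld and Pfitzmann): full LSB embedding equalises the counts of each value pair (2k, 2k+1), so the statistic collapses toward 0 for a stego image. The cover below is a synthetic skewed image, not real data, and embedding is simulated by randomising every LSB:

```python
import numpy as np

def chi_square_stat(pixels):
    """Pairs-of-values statistic: LSB embedding drives the counts of each
    pair (2k, 2k+1) toward equality, pushing this statistic toward 0."""
    h = np.bincount(pixels.ravel(), minlength=256).astype(float)
    even, odd = h[0::2], h[1::2]
    expected = (even + odd) / 2.0
    keep = expected > 0                    # ignore value pairs that never occur
    return float(((even[keep] - expected[keep]) ** 2 / expected[keep]).sum())

rng = np.random.default_rng(0)
# Skewed synthetic cover: adjacent grey values have unequal frequencies.
cover = np.minimum(rng.geometric(0.5, size=(64, 64)) - 1, 255).astype(np.uint8)

# Simulate full-capacity LSB embedding: replace every LSB with a random bit.
stego = (cover & 0xFE) | rng.integers(0, 2, size=cover.shape, dtype=np.uint8)

# The stego statistic should be far smaller than the cover statistic.
assert chi_square_stat(stego) < chi_square_stat(cover)
```

In the full attack, the statistic is converted to a p-value with the chi-square distribution (e.g. `scipy.stats.chi2`) and evaluated over increasing portions of the image to estimate the embedded message length.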
  • asked a question related to 3D-Imaging
Question
7 answers
Dear experts,
I am new to rodents MRI image processing. I have two questions:
1. What is the best program for registering structural rat images and its diffusion Images to its structural atlases and vice versa? Is landmark registration a reliable method in this field?
2.Is it necessary to extract the rodent brain before registration? Does FSL-bet do the job?
Thanks for considering my question!
Relevant answer
Answer
I'm not an image analyst, but many of my colleagues swear by ANTs for registration (see Avants BB, Tustison NJ, Song G, Cook PA, Klein A, Gee JC. 2011. A reproducible evaluation of ANTs similarity metric performance in brain image registration. NeuroImage 54(3):2033–2044, DOI 10.1016/j.neuroimage.2010.09.025); we used this approach for mouse brain images (see Wood, T.C., Simmons, C., Hurley, S.A., Vernon, A.C., Torres, J., Dell'Acqua, F., Williams, S.C.R., Cash, D., 2016. Whole-brain ex-vivo quantitative MRI of the cuprizone mouse model. PeerJ 4, e2632. doi:10.7717/peerj.2632). Finally, I believe it is not essential to extract the brain from the skull prior to registration, but if you wish you can try the program that my colleague modified from FSL-BET and posted here: https://www.nitrc.org/projects/rbet/
  • asked a question related to 3D-Imaging
Question
4 answers
It is well known that the study of computer vision includes acquiring knowledge from images and videos in general. One of its application include deriving 3D information from 2D images.
I would like to know whether the use of 3D imaging from IR cameras be considered in the computer vision field as well. Or is it better suited to be called a Machine Vision application instead?
Relevant answer
Answer
IR and visible imaging are both fully part of the computer vision field. All topics of visible stereo imaging apply to IR stereo imaging (feature point/contour detection, feature tracking, 3D reconstruction from image sequences, and so on). Nevertheless, some computer vision techniques have to be specifically tuned/adapted to IR applications.
Some sample references are available by following:
  • asked a question related to 3D-Imaging
Question
10 answers
I am not familiar with C++ (but I want to learn it); for image processing I use MATLAB, and it is usually enough for me. For which image processing purposes can C++ have an advantage over MATLAB?
Relevant answer
Answer
Every language has advantages and disadvantages. In the case of C++ and Matlab, I can say that Matlab is sometimes easier for quickly testing ideas, but when you want to make a real product or piece of software, you translate your Matlab prototype into a more powerful language like C++. Matlab is used mostly for fast prototyping; once you are sure of your idea, you can spend more time writing a more complex and efficient program. So, for scientific purposes, Matlab is a beautiful language, as is C++. If you work with image processing, there are C++ libraries that can help you a lot, like OpenCV, CImg (for handling image formats and operations), ImageMagick, and others.
  • asked a question related to 3D-Imaging
Question
4 answers
Perhaps using Matlab and some toolkit?
I have found two, one from Hendrick Lab, and Mokka (please see attached links). Can someone let me know if it is possible to calibrate videos, and get 3D kinematics through these tools?
If not, and I should look into purchasing software, can you please suggest some?
Relevant answer
Answer
I have been using 1) Matlab and the Hedrick digitizing tools package (i.e., DLTdv and easyWand) and 2) Argus- an open source program (http://argus.web.unc.edu/). You might consider one or both of these routes. 
  • asked a question related to 3D-Imaging
Question
6 answers
Now the EU Research and Innovation programme Horizon 2020 is starting. Our team wants to participate in the programme and is looking for collaborators. We have experience in studying erythrocyte 3D morphology by digital holographic interference microscopy. Our project proposes to study the 3D morphology of blood erythrocytes in oncological diseases and during treatment (radiation therapy, chemotherapy).
Tishko Tatyana
Relevant answer
Answer
What's the update ? @ Dr. Tatyana
  • asked a question related to 3D-Imaging
Question
4 answers
To get leaf area of a needle using an image processing software, how could we account for the needle thickness?
Relevant answer
Answer
Thanks for your help.
  • asked a question related to 3D-Imaging
Question
6 answers
I want to evaluate my 3D tumor segmentation and I do not have any idea how to evaluate that?
Regards
Ali
Relevant answer
Answer
Use the BraTS dataset.
  • asked a question related to 3D-Imaging
Question
11 answers
When a 3D specimen is inspected using a microscope, the distance between stage and lens is varied. Images are taken at several distances. For each distance some image areas are in focus, some are not.
Are there any standard algorithms for finding focused areas or standard formulas for focus evaluation? Are there Matlab scripts for detecting focused areas?
Relevant answer
Answer
He is indeed Aparna. We believe they would benefit, we would benefit, and users would benefit.
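On the question itself: a standard focus measure is the variance of the Laplacian, since sharper regions have stronger second derivatives. A self-contained NumPy sketch, with random data standing in for an in-focus patch and a crude neighbourhood average standing in for defocus:

```python
import numpy as np

def variance_of_laplacian(img):
    """Common sharpness score: variance of the discrete 5-point Laplacian
    (higher means more in focus)."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))               # stand-in for an in-focus, high-detail area

# Crude blur: average each interior pixel with its 4 neighbours to mimic defocus.
blurred = sharp.copy()
blurred[1:-1, 1:-1] = (sharp[1:-1, 1:-1] + sharp[:-2, 1:-1] + sharp[2:, 1:-1]
                       + sharp[1:-1, :-2] + sharp[1:-1, 2:]) / 5.0

assert variance_of_laplacian(sharp) > variance_of_laplacian(blurred)
```

Computing this score per tile across the z-series gives a focus map, which is the basis of most focus-stacking (extended depth of field) algorithms.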
  • asked a question related to 3D-Imaging
Question
4 answers
I have a set of confocal microscopy images of a single cell, i.e., several 2D stacks of the cell. I need to reconstruct the 3D image and then extract the surface data as (x, y, z) coordinates. Preferably, these coordinates would come from a surface mesh that can later be converted to a surface plot using R or Matlab. I would be very thankful if someone could guide me through this problem.
This is an example of data.
Relevant answer
Answer
This will get you some of the way:
for a given single image "i" where you have imported the image into MATLAB (so it is 1024×1024):
i_a = zeros(size(i));
i_a(i>0) = 1; %find non-zero values of image, ie. the cell.
i_a = imfill(i_a,'holes'); %fill in holes in image_a, have a look at the imfill documentation.
BW = edge(i_a); %find the edges of image_a
imagesc(BW); %have a look at the edges
[y,x] = find(BW); %find the indices of the non-zero (ie edge) values of BW.
z = ones(size(x))*n; %create a vector of z coordinates for the single image, multiply by n, where n is the height from the surface.
You can then repeat this for each image, z1:z18 and concatenate the x, y, and z coordinate vectors.
  • asked a question related to 3D-Imaging
Question
6 answers
Hello Every one,
I am trying to estimate vessel diameter via plane fitting, using the idea presented by İlkay Öksüz in his article. The article is attached here.
In their article, under section "3.4 : Vessel Diameter Estimation via Plane fitting", they used a modified form of plane equation to find best fit plane of the 3D point cloud data. The equation they use is also given below:
nx(x - xc) + ny(y - yc) + nz(z - zc) <= T
where the array (nx,ny,nz) refers to the normal vector, T is an empirically set threshold ( T =2 voxels), and the (xc,yc,zc) arrays and (x,y,z) define the Cartesian coordinates of the centerline location and that of another arbitrary point from the lumen data, respectively.
I need to implement this in Matlab, but I cannot understand it completely. Can anyone guide me on how to implement this equation?
I have 3D point cloud, center points of that point cloud, and normal vector.
Relevant answer
Answer
I segmented the coronary arteries using the region-growing method proposed in the article attached to this question. I generated the 3D point cloud using the isosurface command in MATLAB. The attached image shows a portion of the point cloud; in this image,
1. The grey region represents a portion of the segmented coronary arterial tree.
2. The black spots represent centerline points of the segmented coronary tree.
3. The red points are the points for which the following equation holds
                          nx(x - xc) + ny(y - yc) + nz(z - zc) <= T
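Assuming the inequality is applied to the absolute value (so that a slab of lumen points on both sides of the cutting plane is kept), the selection can be sketched in NumPy; the cloud, centre, normal, and threshold below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
cloud = rng.random((1000, 3)) * 10         # hypothetical lumen point cloud
center = np.array([5.0, 5.0, 5.0])         # one centerline point (xc, yc, zc)
normal = np.array([0.0, 0.0, 1.0])         # unit normal (nx, ny, nz)
T = 0.5                                    # threshold in the same units as the cloud

# Signed distance of each point from the plane through `center` with normal `normal`:
# d = nx(x - xc) + ny(y - yc) + nz(z - zc)
d = (cloud - center) @ normal
cross_section = cloud[np.abs(d) <= T]      # lumen points within T of the cutting plane

assert cross_section.shape[1] == 3
```

The retained points form the local cross-section of the vessel, from which a diameter can be estimated (e.g. by fitting a circle or measuring the point spread in the plane).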
  • asked a question related to 3D-Imaging
Question
3 answers
What is the best way of measuring the surface area and porosity of electrospun nanofibers (other than BET)?
My sample quantity is very small and I do not want to rely on image analysis. The 3D nature of these scaffolds renders this information very useful to investigate cellular penetration.
Relevant answer
Answer
If you have only a small quantity and you would like to keep using the scaffold for tissue engineering you could compare the polymer density with the scaffold density to calculate porosity. This fast, inexpensive method has its limits on accuracy, though when I compared it with nCT scans I got comparable numbers.  With nCT (and thick enough fibers) you could also calculate surface area etc.
DOI: 10.1039/c3tb20995d
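The density-comparison method above amounts to one line of arithmetic; here is a sketch with hypothetical numbers (the PCL bulk density is approximate):

```python
# Gravimetric porosity estimate: compare the apparent scaffold density
# with the bulk density of the polymer it is made of.
def porosity(scaffold_mass_mg, scaffold_volume_mm3, polymer_density_mg_mm3):
    apparent = scaffold_mass_mg / scaffold_volume_mm3
    return 1.0 - apparent / polymer_density_mg_mm3

# Hypothetical numbers: 10 mg scaffold occupying 50 mm^3,
# PCL bulk density ~1.145 mg/mm^3.
p = porosity(10.0, 50.0, 1.145)
assert 0.0 < p < 1.0
```

The scaffold volume is the weak point of this method (usually measured from the outer dimensions), which is where the accuracy limits mentioned above come from.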
  • asked a question related to 3D-Imaging
Question
2 answers
How can I project a depth map onto the three orthogonal planes (without knowledge of the camera's calibration parameters)?
Relevant answer
Answer
Hi Stéphane,
Thank you very much for your answer.
Best Regards
  • asked a question related to 3D-Imaging
Question
3 answers
For tracking finger position in 3D, accuracy is very important.
There are various techniques for 3D measurement.
What is the most accurate one?
Relevant answer
Answer
For non-contact 3D measurement, LIDAR is the most accurate. Dense LIDAR is very expensive; sparse LIDAR (such as automotive units) is cheaper, but returns only a sparse set of points with true depth in the scene. Other, less accurate methods are structured light, time of flight, and stereo/multi-view camera 3D.
  • asked a question related to 3D-Imaging
Question
3 answers
I usually produce 3D images of transient spectra in MATLAB, but I'm looking for alternative software just to manipulate and edit the images for publication. So I do not need software for data analysis, only something to improve the visual quality of the data. Any suggestions?
Relevant answer
Answer
One of the best software packages for graphs (2D and 3D) is Origin (by Origin Lab). The software is quite expensive but may be something that you can get through your organization. 
  • asked a question related to 3D-Imaging
Question
4 answers
I need it for my research. 
Relevant answer
Answer
Thank you so much for your answer. 
  • asked a question related to 3D-Imaging
Question
3 answers
If I collect a huge amount of image data from a biological organ or tissue (e.g. 500 GB to 1500 GB), what are the software and hardware requirements to process those image data for 3D reconstruction and visualization?
Relevant answer
Answer
It depends on the operations you have to implement, the amount of data reduction allowed for visualization, and especially the specifications regarding processing/response time.
While massive RAM and fast processors never hurt, a clever strategy often helps more: regarding e.g. MRI slices, there is no need to hold more than 10-20 layers in RAM at any time if your reconstruction uses a clever approach to, e.g., match contours from layer to layer. Such an approach greatly reduces memory swapping in and out, thus speeding up the whole process.
But - as already stated - it depends on the individual application.
  • asked a question related to 3D-Imaging
Question
5 answers
Can anyone suggest free drawing programs to draw 3D diagrams like the attached picture?
Relevant answer
Answer
Blender is free 3D modeling software that can produce pictures like the attached image; you can try that. There is also Google SketchUp (simpler to learn, but with more basic tooling).
  • asked a question related to 3D-Imaging
Question
14 answers
Here I have attached a 3D surface plot. It shows the variation of copper removal with pH. Some papers I have referred to say the cadmium removal percentage is highest at pH 6 and decreases below and above pH 6. In my case, however, it keeps increasing above pH 6. Could you give me some interpretation based on the 3D surface plot?
Relevant answer
Answer
Dear Renu Bisht
According to my researches pH= 5.3-5.8 was the best range for removing Cadmium.
Regards
  • asked a question related to 3D-Imaging
Question
1 answer
I used best-fit registration, but it usually does not give accurate superimposition.
Relevant answer
Answer
Dear Abdul Gajoum,
It might be a more effective approach if you can detect the individual objects in the CT image; you then have a variety of options from the perspective of feature extraction, ranging from image texture to TensorFlow pixel-based image interpretation for quantification. Some examples are given below.
I would recommend an image preprocessing approach that detects the sub-objects within the set of major image objects, followed by machine learning techniques for classification, as suits the nature of your problem statement.
Best Regards,
JAMIL AHMED
How do I do supervised learning for image recognition of clouds in photos using Python? - ResearchGate. Available from: https://www.researchgate.net/post/How_do_I_do_supervised_learning_for_image_recognition_of_clouds_in_photos_using_Python?tpr_view=wZvxZyTjRJf1NMU6LGacWvY13zsjnV2y1fkb_3#5823f37deeae39ae941dccf1 [accessed Nov 10, 2016].
  • asked a question related to 3D-Imaging
Question
3 answers
Input towards a review journal
Relevant answer
Answer
Hi Liam
A UAV- (a.k.a. drone-) mounted LIDAR system is indeed a very suitable and well established tool to monitor artificial components such as pipeline infrastructure, construction frames and powerlines. As these structures are often very smooth and featureless, highly reflective, and consisting of very narrow components, UAV-photogrammetry often fails at accurately modeling these structures in 3D, due to lack of resolution and/or flawed tie-point matching on these surface types. E.g. for an accurate UAV-based 3D reconstruction of power lines and pylons including insulators, LIDAR is really the only option.
There are several LIDAR units available that have been sufficiently miniaturized for UAVs, e.g. from YellowScan, Velodyne and Riegl. They differ in scanning rates, range and processing options, related to their size and weight, which determines their use on rotary or fixed wing UAVs. The difficulty with UAV-borne LIDAR is that it requires an accurate trajectory reconstruction of the LIDAR system, i.e. highly accurate positioning and orientation information of the LIDAR unit along the entire flight path. Otherwise, it is impossible to align the scan strips to each other and georeference the block. This requires either (preferably) a very precise dual frequency GNSS antenna + receiver together with calibrated IMU unit, or a coupled photogrammetric system from which adjusted (calibrated) positions and orientations are calculated to then adjust the LIDAR block. 
Below are some references showing the process in detail: 
Wallace et al. (2012) Development of a UAV-LiDAR system with application to forest inventory. Remote Sensing 4, 1519-1543, doi:10.3390/rs4061519.
Jozkow et al. (2016) UAS topographic mapping with Velodyne LiDAR sensor. XXIII ISPRS congress, July 2016, Prague (Czech Republic). ISPRS Annals Volume III-1.
  • asked a question related to 3D-Imaging
Question
2 answers
I want to measure vascular density and number of junctions (or junction density) from 2D or 3D images obtained with immunohistofluorescence. I know AngioTool, but I would like to know whether other solutions exist. I have confocal images and images obtained with a NanoZoomer. I also want to correlate the length of the vessels with their diameter. Open-access software would of course be best. Thank you for your answers.
Relevant answer
Answer
Maybe the AnalyzeSkeleton plugin for imageJ could help. It will make a binary version of your image based on a threshold, and skeletonize (removing pixels from both sides of a blob or thick line until a one-pixel-thick "skeleton" remains) its structures. For fluorescence images you could even split color channels and determine thresholds separately. The plugin can, then, count nodes (number of junctions) and measure length and thickness of branches. Junction density can be computed as a function of viewport area, which is usually provided by the microscopy software you are using or can be extrapolated from a scale bar. 
The official documentation for AnalyzeSkeleton:
Hope it helps. Cheers!
  • asked a question related to 3D-Imaging
Question
3 answers
Hi there!
I am confused about what a 3D image really is in hologram reconstruction.
Some papers talk about a 3D volumetric image from a stack of amplitude intensity images. My understanding is that you reconstruct a hologram in a loop over different distances, with some objects in focus and others out of focus; the focused objects are found in different distance planes. This is done automatically with an algorithm that finds the highest intensity change and thus the focal plane of each object. This gives you the Z position of each object in the field, and thus a volumetric representation of the scene from a single hologram.
However, the idea of 3D always seems to be combined with phase-contrast images. What I understand is that the phase images somehow give you depth information. I know that arctan(imag/real) gives you the phase value of each pixel, but I don't know how the depth information is obtained.
How exactly does the phase image, giving depth information for the overall image, combine with stacks of amplitude images to give a 3D volumetric image?
Do you really need phase images to get a volumetric 3D stack? Can't you just obtain the volumetric stack through the first strategy I mentioned, where only the amplitude image stack is used?
Many thanks for all your help!
Relevant answer
Answer
Lindsey,
  I may be misreading your question, but it's important to note that not all 3D images are holograms - some are simply stacks of intensity images without phase data. An example would be confocal microscopy 'image stacks'.
r/
David 
  • asked a question related to 3D-Imaging
Question
3 answers
I have recently joined a research lab and am looking for some challenging topics to work on. Our research group mostly studies membrane curvature: its generation, sensing, and implications. I am mostly interested in optics that can be developed to enhance the understanding of membrane curvature and cell trafficking (something along those lines). I am looking at recent literature but also wanted to reach out to other people who work in a similar field and have in-depth knowledge, to suggest a few ideas.
Thanks.
Relevant answer
Answer
There are many different optical techniques for studying biological samples. I think digital holographic microscopy might be a good option for you, because you can study dynamic processes online with an accuracy of a few nanometers or less...
  • asked a question related to 3D-Imaging
Question
15 answers
Dear everyone,
I have tried to plot the picture like the first attachment using a text file. The structure of the file is like:
#Time Energy Nd
1 0.1 2
2 0.1 4
3 0.1 5
1 0.2 9
2 0.2 10
3 0.2 15
……
And so on.
Then I use splot "datafile" u 1:2:3 with lines to plot the figure. However, there are stray lines connecting the data along the x axis and the y axis, so it still looks different from the picture in the first attachment.
And I also want to plot a picture the same as the second attachment.
Could anybody give me some instructions on how to plot pictures like the following two in Gnuplot?
Thanks very much in advance!
With My Best Regards
Liu
Relevant answer
Answer
Dear Aparna,
Thanks very much! I will try that later.
Best Regards
Liu
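For reference, the stray lines usually appear because gnuplot connects all consecutive data points; splot treats blocks separated by a single blank line as separate isolines of a surface. A sketch assuming the file and column layout from the question:

```gnuplot
# datafile: one block per Energy value, with a blank line between blocks
# 1 0.1 2
# 2 0.1 4
# 3 0.1 5
#            <- single blank line here
# 1 0.2 9
# ...
set hidden3d
splot "datafile" u 1:2:3 with lines
# for a colored surface, pm3d is an option:
# set pm3d
# splot "datafile" u 1:2:3 with pm3d
```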
  • asked a question related to 3D-Imaging
Question
3 answers
I would like detailed information about creating contours from 3D point data or an XYZ text file.
We are working on a project that requires delivering contours generated from an XYZ text file; to generate the contours we currently depend on the Terramodel Premium software. Instead of depending on third-party tools, I would like to develop a tool that generates the contours myself.
Relevant answer
Answer
Hi Oktar,
could you share any link or document regarding the same.
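If you roll your own tool, the first step is usually resampling the scattered XYZ points onto a regular grid; contour lines can then be traced on that grid (e.g. with a marching-squares routine, or matplotlib's contour if Python is an option). A small sketch using inverse-distance weighting; the function name and grid resolution are my own choices:

```python
import numpy as np

def grid_xyz(points, nx=50, ny=50):
    """Interpolate scattered (x, y, z) samples onto a regular grid via
    inverse-distance weighting (simple, no SciPy needed)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    xi = np.linspace(x.min(), x.max(), nx)
    yi = np.linspace(y.min(), y.max(), ny)
    X, Y = np.meshgrid(xi, yi)
    # squared distance from every grid node to every sample
    d2 = (X[..., None] - x) ** 2 + (Y[..., None] - y) ** 2
    w = 1.0 / (d2 + 1e-12)                 # inverse-distance weights
    Z = (w * z).sum(axis=-1) / w.sum(axis=-1)
    return X, Y, Z
```

Usage would be something like `pts = np.loadtxt("survey.xyz")`, then `X, Y, Z = grid_xyz(pts)`, and finally contour extraction on the gridded Z.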
  • asked a question related to 3D-Imaging
Question
3 answers
I am new to confocal microscopy. Recently, while recording an image, I found that when Z-sectioning a 10 micrometer fluorescent bead, it looks like 15 micrometers in the Z direction (when the 3D image of the bead is visualized using the confocal software). What are the ways to get a 3D image (from images taken in a confocal microscope) of an object with correct size dimensions? Or how should the Z sections be stacked so that objects appear at their true size?
Relevant answer
Answer
Dear Ravi,
You said that you are new to confocal microscopy. This is why I want to say that the distortion you observe, especially in the Z-dimension, is a "normal" optical effect. It needs some reading in basic optics of a confocal laser scanning microscope. You may simply start (actually I did that, too) with reading Wikipedia articles about confocal microscopy and the point spread function (PSF). If you understand the limits of your system you know what you can try to get better images.
But what can you do to get better Z-resolution? To be honest from the beginning: You will not get a perfect sphere-like representation of the bead in Z. There are ways to improve, but with a standard confocal there are limits.
1) Reduce the pinhole size. Usually the system is set to a pinhole size of 1 Airy Unit (AU). This decreases the volume you detect the emitted light from. You make the optical section more slim (in Z). It reduces the overall intensity but increases the signal to noise ratio. But at some point there is no more beneficial effect, I wouldn't go below 0.25 AU.
2) If you know the optical section thickness, set the step size of your z-stack to at most half of it. An example: if you have an optical section thickness of 980 nm, your step size should be 490 nm or smaller. Keep in mind that if you reduce the pinhole size, the optical section gets thinner and you will need to reduce the step size as well. This will not harm your fluorescent bead, but live samples do not like too much laser power.
3) As James said, check whether you use the best combination of refractive index (RI) values. It might be better to use a 40x oil objective on a fixed sample (with an embedding medium that resembles the RI of oil) than a 63x water objective.
4) There is the possibility to do post-processing of confocal z-stacks, called deconvolution. But this is very tricky to do and requires even more image setting adjustments.
A last thing to the dimensions in standard confocal microscopy: the size of an object you see/measure in an unprocessed confocal image is never correct. They appear always bigger than they really are. Especially when talking about subcellular compartments etc.
Hope this could help you a little bit.
Best,
Falco
  • asked a question related to 3D-Imaging
Question
2 answers
Does anyone have information or research on evaluating the differences between 3D MSCT/CBCT DICOM-converted-to-STL models (including shrinkage, distortion and Boolean defects) and the various types of 3D optically scanned models, such as dental intra-oral scan data and extra-oral scan data (made by desktop scanners of the laser-projection, photogrammetric and structured-light types), including meshing, registering and patching?
Appreciated.
Relevant answer
Answer
Hello,
You might find mesh comparison tools, such as PolyMeCo or Meshlab, useful. You can take a look at:
Cheers!
  • asked a question related to 3D-Imaging
Question
5 answers
I need to find the surface area of live tissue on my coral fragments in order to standardise dissolved oxygen measurements. Ideally I do not want to have to do this destructively (wax dipping or dry ash weight). 
This paper compares different methods but ImageJ analysis from 2D photos was highly inaccurate. http://link.springer.com/article/10.1007/s00338-008-0459-3
I have already tried foil wrapping and weighing (Marsh, 1970) but this is very fiddly and inaccurate for complex branching structures.
With all the 3D modelling apps available, there must be a simple way to estimate surface area. I have come across the free app 123D (http://www.123dapp.com/), which transforms 2D images into 3D models that can later be 3D printed, but it does not seem to give me surface area data, unless I am missing something?
Relevant answer
Answer
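If any of those apps can export a triangle mesh (e.g. STL/OBJ), the total surface area is just the sum of per-triangle areas, which is easy to compute yourself. A small sketch in Python/NumPy (function name is my own):

```python
import numpy as np

def mesh_surface_area(vertices, faces):
    """Total surface area of a triangle mesh: each triangle contributes
    half the magnitude of the cross product of two of its edges."""
    v = np.asarray(vertices, float)
    f = np.asarray(faces, int)
    a = v[f[:, 1]] - v[f[:, 0]]   # edge 1 of every triangle
    b = v[f[:, 2]] - v[f[:, 0]]   # edge 2 of every triangle
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1).sum()
```

Tools such as MeshLab can also report the surface area of a mesh directly.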
  • asked a question related to 3D-Imaging
Question
1 answer
I would like to visualise the spatial organisation of my promoters, for which I know the positions of the histone centers and some TFBSs. I would like to know whether the TFBSs are accessible or not, close to each other, facing each other or not...
Any help would be really valuable :D 
Thanks a lot
Relevant answer
Answer
If you know the histone centers, you know the DNA segments wrapped around them. 
There are some X-ray crystal structures of individual nucleosomes (e.g., Luger et al. (1997) Nature 389: 251-260) . . .
. . . but the overall conformation of the promoter (i.e., the relative orientation of nucleosomes) will be modulated by intra- and inter-nucleosome interactions, promoter-binding proteins and sequentially remote but spatially close chromatin region(s).
You can build a coarse-grained model of the entire promoter (based on the structural and functional experimental data and models) . . .
. . . let it relax in MD simulations and generate an ensemble of structures but it is more likely that you will get a NASA-style artist's impression rather than a realistic model.
  • asked a question related to 3D-Imaging
Question
11 answers
Thermal image scans provide heat distributions as 2D projections; they cannot provide depth or 3D information about the heat source.
From 2D temperature images alone it may be difficult to determine the distance or relative position of a temperature increase, which may be the location of a failed component.
Relevant answer
Answer
  • asked a question related to 3D-Imaging
Question
6 answers
I selected ROIs using the selection tools, then added them to the ROI Manager. I would adjust each ROI's brightness/contrast and threshold, but when it came to running the 3D Object Counter, it analyzed the entire picture and not the pre-selected ROIs.
Relevant answer
Answer
  • asked a question related to 3D-Imaging
Question
4 answers
I have a DEM in raster format from which I want to extract elevation points.
I wish to test/manipulate the elevations at given locations. I've looked at the imported elements but cannot figure out how to query the elevation data for one location (pixel).
Any guidance very appreciated.
Relevant answer
Answer
Simply convert the raster to ASCII in ArcGIS. Read the documentation first.
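If you end up scripting it, querying one pixel only needs the raster's affine geotransform. A sketch assuming a GDAL-style geotransform with no rotation terms and the DEM already loaded as a 2D array (the function name is my own):

```python
import numpy as np

def elevation_at(dem, geotransform, x, y):
    """Look up the DEM value at map coordinate (x, y).
    geotransform = (x0, dx, rot1, y0, rot2, dy) in GDAL order; rotation
    terms are assumed zero and dy is usually negative (north-up)."""
    x0, dx, _, y0, _, dy = geotransform
    col = int((x - x0) / dx)
    row = int((y - y0) / dy)
    return dem[row, col]
```

With GDAL, `dem = band.ReadAsArray()` and `geotransform = dataset.GetGeoTransform()` provide the two inputs.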
  • asked a question related to 3D-Imaging
Question
3 answers
I want to make 3D diagrams, so I need to learn this software package. Is it better than LaTeX?
Relevant answer
Answer
Asymptote is based on LaTeX but optimized with C++ to draw efficiently, especially in 3D. I recommend you learn it. Other options within LaTeX are the TikZ and tkz-euclide packages!
  • asked a question related to 3D-Imaging
Question
4 answers
microscope experts
Relevant answer
Answer
Hi Chiranjib,
please check the following paper from our group:
the method consists of GUV dispersion in a warm fluid agarose gel that gels at room temperature. It is well suited for GUVs but also for small vesicles (LUVs).
I hope this might help you.
Best regards,
Rafael
  • asked a question related to 3D-Imaging
Question
4 answers
I have three datasets A, B and C, each including tens of thousands of chemical compounds. We know that A[1], the first compound in A, is similar to B[30], and both are similar to C[9999]; we call this compound NEO. A, B and C share many such compounds. I want to show A, B and C in such a way that each compound is a dot (properties of the compound, such as molecular weight, can be used as 2D or 3D coordinates), and I also want to show that they all share NEO and many more such molecules, as well as molecules that are only in A and not the others, etc.
A Venn diagram would be the simplest way. But can I make a Venn diagram of the actual data, showing each compound as a dot and visualizing the shared ones as overlapping?
Or a plot like a microarray, having bins for compounds with certain properties and showing A, B and C with different colors (or A&C, A&B, B&C and A&B&C with colors)?
I also came across hive plots, but I'm not sure whether that is a good option.
In reality, A, B and C are datasets of compounds in natural products, FDA-approved drugs and synthetic chemicals. I want to visualize the distribution of these compounds; in other words, which compounds are shared between these datasets.
Relevant answer
Answer
Thanks Narasim. I like the idea of converting data to pixels, but that's too general. I will most probably use the hive plot.
  • asked a question related to 3D-Imaging
Question
2 answers
I am looking for the 3D structures of miRNAs for an interaction study. Does anyone know how to find or model that kind of data?
Relevant answer
Answer
Please refer to Antonio Messina's answer!
  • asked a question related to 3D-Imaging
Question
1 answer
I have no background in the area of 3D bioprinting and I am unsure how Individually Pore Filling (IPF), Injection Filling (IF) and Fused Deposition Modelling (FDM) are carried out.
Any help would be greatly appreciated 
Relevant answer
Answer
You can find more about the FDM technique in the attached paper.
  • asked a question related to 3D-Imaging
Question
2 answers
I am using SPM to coregister MRI and PA brain images. How can I save the coregistered image displayed in the graphics window as a 3D image?
Relevant answer
Answer
Hi,
When doing coregistration the transformation is actually stored in the header of the source image.
If you want to reslice (interpolate) your source image so that it also has the same image dimensions and orientation as your reference image, you should use 'Coregister & Reslice'. A new image is then generated (the source image prefixed with 'r').
Hope this helps.
Kind regards,
Mathijs Raemaekers
  • asked a question related to 3D-Imaging
Question
7 answers
Hello everyone,
I would like to scan one short sequence (90 seconds) multiple times on one subject, and then co-register and average the images to get rid of artifacts and motion distortions. Can you recommend a tool for this? Is it possible with SPM12? Or should I do it in Photoshop?
Thank you for answers
Relevant answer
Answer
There are pros and cons for either FSL or SPM. Personally, I prefer the latest SPM12 version. The new user manual is easily understandable, but you need to be aware that you need a Matlab license to run SPM!
  • asked a question related to 3D-Imaging
Question
5 answers
How can the quality and accuracy of dental 3D imaging equipment be evaluated?
Quality and accuracy of dental 3D imaging equipment include:
1. - Dimension alignment accuracy
2. - Tissue density accuracy (HU/grayscale accuracy)
Could someone provide me with information (relevant to aspect (1) above) on:
1. - The dental 3D imaging equipment accuracy standards and the imaging test protocols, kits and templates?
2. - The firmware or freeware for CBCT equipment alignment and calibration?
3. - If mechanical calibration/alignment is needed, what is the optimum design of an imaging test template for this equipment? I have seen the phantom-ball test, but it is for professional engineers only. I would like some advice about quick-test templates for clinics.
Relevant answer
Answer
Puhhh - this is just too much! Seriously, I recommend splitting your problem into parts and opening new questions about them.
What I understood so far is that you want to compare two (or more) different techniques using a meshed 3D model. In the field of photogrammetry, a comparison of different 3D models is usually done like this:
  • Define a set of control points. These control points should be well distributed over the model.
  • These control points have to be identified in both 3D models.
  • Use these control points in order to do a transformation from one to the other model. This is called registration, as you certainly already know.
  • Such a registration incorporates a 3D similarity transformation (or Helmert transformation). Because it (normally) uses more control points than necessary, it is solved with a least-squares adjustment. The outcome of such an adjustment is not only the set of transformation parameters, but also the precision (standard deviation) of how well the control points fit together between both models.
  • After the registration you pick some additional, so-called check points at positions away from the control points. Transform these check points and compute the pairwise distances of the check points between both models. From these you can calculate the root mean square (RMS) of the distances. The RMS is comparable to the standard deviation of the control points from the adjustment (during registration). In other words, this gives you a measure of how well both models fit together.
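The control-point fit and check-point RMS from those steps can be sketched as follows. Note this is the rigid-body (rotation + translation) Kabsch solution for simplicity; the full similarity/Helmert transform adds a scale factor on top. Function name is my own:

```python
import numpy as np

def register_and_rms(src_ctrl, dst_ctrl, src_check, dst_check):
    """Least-squares rigid registration of control points (Kabsch
    algorithm), then RMS of check-point distances between the models."""
    P = np.asarray(src_ctrl, float)
    Q = np.asarray(dst_ctrl, float)
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    moved = np.asarray(src_check, float) @ R.T + t
    resid = moved - np.asarray(dst_check, float)
    return np.sqrt((resid ** 2).sum(axis=1).mean())   # check-point RMS
```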
I hope it helps!
Regards
Olaf
  • asked a question related to 3D-Imaging
Question
3 answers
Hi. 
I am currently collecting a dataset of time-lapse, z-stacked image sets of some samples (in *.tif format).
I want to find a tool that can make a time-lapse 3D movie like the one below. (I once had a chance to use Amira version 5.3.2, but I could not find the 4D data manipulation function.)
Relevant answer
Answer
We have recently explored Chimera and Vaa3D. We have only tried 4D in Chimera, which works well except for a Z-depth rendering problem when dealing with volumetric data. Surfaces have no problem, but for volumes one channel always renders in front of the others.
  • asked a question related to 3D-Imaging
Question
3 answers
Normally the image intensity is a combination of specular and diffuse reflection. I have a single grayscale image and want to separate these two components. Under the Lambertian assumption, the intensity is the dot product of the surface normal and the light source direction, with constant albedo. My first question is how to estimate the light source direction from a single grayscale image. Second, under the non-Lambertian assumption, intensity is a linear combination of specular and diffuse components. How can I estimate the specularity from a single grayscale image?
Relevant answer
Answer
If you write the expanded version of the rendering equation for all pixels, could you optimize it for light source direction, so that the equation produces the given pixels?
We use a virtual light stage, but we have an estimation of geometry. Do you have or plan to estimate geometry? Or is it possible to make a hard assumption on surface normals? Is interaction allowed? 
Also, have you seen this: "Specularity removal in images and videos"; SP Mallick, T Zickler, PN Belhumeur; ECCV 2006
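To make the least-squares idea concrete: if (and only if) per-pixel surface normals are known or assumed, the Lambertian model I = rho (n . l) is linear in the light vector, so l (with the albedo folded into its length) drops out of a single solve. A sketch under those assumptions; it ignores the max(0, .) shadow clamp for simplicity, and the function name is my own:

```python
import numpy as np

def estimate_light(normals, intensities):
    """Least-squares light estimate under I = rho * (n . l).
    Returns the unit light direction and the albedo-scaled magnitude."""
    N = np.asarray(normals, float).reshape(-1, 3)
    I = np.asarray(intensities, float).ravel()
    v, *_ = np.linalg.lstsq(N, I, rcond=None)   # solve N v ~ I, v = rho*l
    mag = np.linalg.norm(v)
    return v / mag, mag
```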
  • asked a question related to 3D-Imaging
Question
15 answers
Hi,
I have a few SEM images from a sample that was continuously sliced and imaged. Now I need to align them and construct a 3D image. Most software packages ask for data saved in special formats, but I could only save the SEM images.
Is this sufficient to construct a 3D tomogram in the end? I know the spacing of the planes at which each image was obtained.
Thanks
Relevant answer
Answer
Hi Chiththaka,
another software which can do this really well is Imod (University of Colorado). There is an option to align series of images and also it allows you to do segmentation. You will just need to convert your tifs to mrc file, which is easy (command tif2mrc). 
Best wishes,
Petr
  • asked a question related to 3D-Imaging
Question
4 answers
I am working on Photoelectrochemical cell to split water molecule to produce hydrogen fuel.
Please let me know whether there is any free software that I can use to draw a 3D image of a photoelectrochemical cell.
Thank you very much.
Relevant answer
Answer
Hi. There are a lot of 3D software packages out there; you can try the student versions of Autodesk software such as 3ds Max, Maya, Inventor or AutoCAD. SolidWorks is also a good idea, and if you want free software you can try Blender. There are multiple options.
  • asked a question related to 3D-Imaging
Question
3 answers
I have an image file of micro-CT images in nhdr and raw format (dci format can also be obtained), which I can view using ImageJ/ParaView, and from which I can generate a pore network model. Now I want to simulate this 3D model for permeability prediction. What is the way to simulate such 3D images? I have heard of the Avizo software. Is there any open-source software with which I can perform these simulations?
Relevant answer
Answer
Generally, one should develop or obtain a code to simulate one- or multiphase flow in the network. OpenPNM (as Philip said) may have this feature. Also, some researchers may share their codes with you upon request. My experience with Imperial College London (the team of Prof. Blunt) was appealing most of the time.
  • asked a question related to 3D-Imaging
Question
9 answers
My goal is to understand how software compares different point clouds when I have done a survey with two different techniques (for example, laser scanning and photogrammetry). I have tried two different software packages and obtained two different results, so how can I know which is correct, or at least more correct?
Has anybody written something about this? Thanks!
Relevant answer
Answer
You can find some information about 4 different algorithms in the Paper "Change detection in Terrestrial Laser Scanner Data Via Point Cloud Correspondence" from M. Tsakiri & V. Anagnostopoulos.
The algorithms they compared are Nearest Neighbour, Nearest Neighbour with local modelling, Normal Shooting and ICP.
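The plain nearest-neighbour (cloud-to-cloud) comparison those algorithms build on can be sketched in a few lines; this is brute force, so only for modest cloud sizes (tools like CloudCompare do the same with an octree). Function name is my own:

```python
import numpy as np

def cloud_to_cloud(a, b):
    """For each point of cloud a, the distance to its nearest neighbour
    in cloud b; returns (mean, max) as a simple agreement measure."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2).min(axis=1)
    return d.mean(), d.max()
```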
  • asked a question related to 3D-Imaging