3D-Imaging in Research
Questions related to 3D-Imaging
Is there any software that can convert a 2D Köppen climate map of the world into 3D simply by importing the image?
There are many fields where light fields can be utilized. For example, they are used in microscopy [1] and for vision-based robot control [2]. Which additional applications do you know of?
Thank you in advance!
[1] Li, H., Guo, C. and Jia, S., "High-resolution light-field microscopy," Frontiers in Optics, FW6D. 3 (2017).
[2] Tsai, D., Dansereau, D. G., Peynot, T. and Corke, P. , "Image-Based Visual Servoing With Light Field Cameras," IEEE Robotics and Automation Letters 2(2), 912-919 (2017).
A hologram is made by superimposing a second wavefront (normally called the reference beam) on the wavefront of interest, thereby generating an interference pattern that is recorded on a physical medium. When only the second wavefront illuminates the interference pattern, it is diffracted to recreate the original wavefront. Holograms can also be computer-generated by modeling the two wavefronts and adding them together digitally. The resulting digital image is then printed onto a suitable mask or film and illuminated by a suitable source to reconstruct the wavefront of interest.
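The computer-generated route described above can be sketched numerically: model the two wavefronts, add them, and record only the intensity. A minimal numpy sketch (the wavelength, tilt angle, and sampling below are illustrative assumptions, not values from the text):

```python
import numpy as np

# Sketch: computer-generated hologram of a point source.
wavelength = 633e-9           # He-Ne laser, metres (illustrative)
k = 2 * np.pi / wavelength
n = 512                       # hologram sampled as n x n points
pitch = 2e-6                  # sample spacing, metres
x = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(x, x)

# Object wave: spherical wave from a point 5 cm behind the hologram plane
z0 = 0.05
r = np.sqrt(X**2 + Y**2 + z0**2)
object_wave = np.exp(1j * k * r) / r

# Reference wave: plane wave tilted by a small angle about the x-axis
theta = np.deg2rad(1.0)
reference_wave = np.exp(1j * k * np.sin(theta) * Y)

# The hologram records only the intensity of the summed field
hologram = np.abs(object_wave + reference_wave) ** 2
```

Illuminating this pattern with the reference wave alone (numerically: multiplying `hologram` by `reference_wave` and propagating) reconstructs the object wave, as described above.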
Why can't the two-intensity combination algorithm be adapted from MATLAB to Octave to create an interference pattern?

Hello there,
Can someone tell me how I can reconstruct a 3D image from a stack of 2D images?
Please guide me in this regard.
Thanks in advance.
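As a starting point: if the slices are already aligned and equally spaced, "reconstruction" can be as simple as stacking them into one 3D array. A minimal numpy sketch (the slices here are synthetic, and the filename pattern and spacing are hypothetical):

```python
import numpy as np

# Sketch: build a 3D volume from a stack of aligned 2D slices.
# In practice each slice would come from something like
# imageio.imread("slice_000.png") (hypothetical filename pattern).
slices = [np.random.rand(64, 64) for _ in range(20)]

volume = np.stack(slices, axis=0)   # shape (z, y, x)

# Physical spacing often differs between z and the in-plane axes;
# keep it alongside the data for correct rendering and measurement.
spacing = (2.0, 0.5, 0.5)           # assumed (z, y, x) spacing in microns

print(volume.shape)                  # (20, 64, 64)
```

Viewers such as ImageJ or 3D Slicer do essentially this stacking internally before rendering.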
My experiment demands X-ray CMT imaging. I need to place my samples on a substrate that is X-ray transparent. Can anyone please suggest suitable materials (other than foam) that are X-ray transparent?
Thank you in anticipation.
I am working on a 3D deformation analysis of buildings for my final-year project. I would love to know how one can use ERDAS Imagine to determine the height of an image and to achieve the above.
The volume data is so big that using CNNs for segmentation is difficult, because the data does not fit in memory. Can anybody give me advice?
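One common workaround is patch-based (tiled) inference: split the volume into overlapping patches, segment each patch separately, and average the overlapping predictions. A minimal sketch (patch and step sizes are arbitrary choices, and the CNN call is a placeholder):

```python
import numpy as np

def iter_patches(shape, patch, step):
    """Yield slice tuples tiling a 3D volume with overlap (patch > step)."""
    starts = [list(range(0, max(s - p, 0) + 1, st))
              for s, p, st in zip(shape, patch, step)]
    # make sure the last patch in each axis reaches the volume border
    for dim, (s, p) in enumerate(zip(shape, patch)):
        if starts[dim][-1] + p < s:
            starts[dim].append(s - p)
    for z in starts[0]:
        for y in starts[1]:
            for x in starts[2]:
                yield (slice(z, z + patch[0]),
                       slice(y, y + patch[1]),
                       slice(x, x + patch[2]))

volume = np.random.rand(40, 100, 100).astype(np.float32)
out = np.zeros_like(volume)
count = np.zeros_like(volume)

for sl in iter_patches(volume.shape, patch=(32, 64, 64), step=(16, 32, 32)):
    pred = volume[sl]          # placeholder: run your CNN on this patch instead
    out[sl] += pred
    count[sl] += 1

out /= count                    # average the overlapping predictions
```

Averaging the overlaps also suppresses the border artifacts that single-patch predictions tend to have.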
I am looking for image processing software that can reconstruct a high-resolution 2D SEM image of a porous Ni-GDC anode into a 3D representation. The intent is to separate the Ni, Ce, and pore phases and to measure tortuosity, porosity, etc.
I have contacted people from Avizo (Mercury Computer Systems), but their license costs above 13,000 US$. I do not have the means (financial or time) for this. Can anyone recommend an alternative?
Appreciated
Dear Experts,
Have you ever tried to change the nominal resolution in the Scanco uCT50? If I need to achieve 2-, 4-, and 8-micron (voxel size) images, is it possible to set these parameters in Scanco, or can I use only the default settings? I know that it's not a problem in XRadia, but I am not sure about Scanco.
Thank you for your help.
Alexandra
Say I have images at slightly different magnifications, or say I used two different SEM devices with different screens, so the magnification differs. While preparing the report, I would like all the images to have the same scale. How can I do this?
Can ImageJ or GIMP help me do this? Could you please suggest the steps?
thanks
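One simple approach is to resample every image to a common physical pixel size before adding scale bars; the images then share one scale by construction. A rough numpy sketch using nearest-neighbour resampling (the pixel sizes are made up for illustration; ImageJ's Image > Scale does the same thing with better interpolation):

```python
import numpy as np

def to_common_pixel_size(img, pixel_size, target_size):
    """Nearest-neighbour resample so every image ends up with the same
    physical size per pixel (sizes in e.g. nm/pixel)."""
    scale = pixel_size / target_size
    h = int(round(img.shape[0] * scale))
    w = int(round(img.shape[1] * scale))
    rows = np.clip((np.arange(h) / scale).astype(int), 0, img.shape[0] - 1)
    cols = np.clip((np.arange(w) / scale).astype(int), 0, img.shape[1] - 1)
    return img[rows][:, cols]

# Image A: 10 nm/pixel, image B: 5 nm/pixel -> bring both to 10 nm/pixel
a = np.random.rand(100, 100)
b = np.random.rand(200, 200)
b_resampled = to_common_pixel_size(b, pixel_size=5.0, target_size=10.0)
print(b_resampled.shape)   # (100, 100)
```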
Hello
I am a student of MS Computer Science. I am going to start my research career in pattern recognition and 3D image reconstruction. I need a road map; if someone can guide me on where to start, I would appreciate that.
Thanks a lot
Dear All,
I am using AVIZO to reconstruct 3D volumes from CT scan images. There are two types of images that I have (from different CT machines). The first series is created by having slices in three directions (X, Y, Z), and I can easily reconstruct them in AVIZO. But there is a second type, in which images are created by rotating the specimen inside the CT machine while scanning (360 degrees). When I load this type of image in AVIZO, instead of getting a spherical shape, for example, I end up with a cylindrical shape. The reason is that AVIZO stacks these slices in one direction instead of reading them in rotating order.
Does anybody know how I can load this type of images?
With all thanks,
Arsalan
3D image segmentation
3D image feature extraction using GLCM and Gabor filters
Hello!
Using 3D models/images in geophysical work is becoming more popular. Based on your experience, which programs (preferably free of charge) can help provide better/easier 3D outputs from geophysical 2D images?
Cheers,
A.
Hi, I have a .nii file that is a mask of the fusiform gyrus. I would like to extract the xyz coordinates from this image.
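One way is to take the voxel indices of the nonzero mask voxels and map them to world (xyz) coordinates with the image's affine. A sketch using a synthetic mask and affine, since the actual file isn't available here (with a real .nii, nibabel would supply both, as noted in the comments):

```python
import numpy as np

# Sketch: voxel -> world (xyz) coordinates for the nonzero voxels of a mask.
# With a real .nii you would get `data` and `affine` from nibabel:
#   img = nibabel.load("fusiform_mask.nii")  # hypothetical filename
#   data, affine = img.get_fdata(), img.affine
data = np.zeros((10, 10, 10))
data[3, 4, 5] = 1                       # one "mask" voxel for illustration
affine = np.diag([2.0, 2.0, 2.0, 1.0])  # 2 mm isotropic, no rotation/offset

ijk = np.argwhere(data > 0)                 # voxel indices, shape (n, 3)
ijk_h = np.c_[ijk, np.ones(len(ijk))]       # homogeneous coordinates
xyz = (affine @ ijk_h.T).T[:, :3]           # world coordinates in mm
print(xyz)   # [[ 6.  8. 10.]]
```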
I have a set of retinal image slices. Using the Optovue software I can reconstruct the 3D structure of a piece of the retina (see the attached link) and measure some 2D and 3D parameters (like the thickness of the retina at different positions, the volume of a particular region, etc.). I want to find other software for this purpose (to be able to work from home). I tried 3D Slicer, but I can't make it work well: I can upload a set of JPEG images, but I can't create a 3D model. Can somebody advise another piece of software, or point me to a good tutorial on 3D Slicer and JPEG images?
3D image reconstruction from 2D images and from multiple views.
Dear All,
I am interested in using machine learning to analyze MRI data in order to extract pathological changes for neurodegenerative diseases (e.g., Alzheimer's, Huntington's).
Can someone please give some advice on which (bioinformatic, statistical) methods, from your experience, would be best for an initial analysis and for extracting disease-specific features?
Also, what would be the most useful platform, and which visualization tools or popular packages are best for data extraction and presentation in this case?
I have about 500 samples from Philips and Siemens scanners. Could you also suggest the processing power I would typically require, for example if I want to train on my data and create model or signature prediction algorithms? And which MRI parameters would be most informative in each case (e.g., FLAIR, DTI)?
Any help would be greatly appreciated
Regards
Anastasios
I have a TIFF image of size around ~10 GB. I need to perform object classification or pixel classification on this image. The image data has zyx dimension order. My voxel size is x = 0.6, y = 0.6, and z = 1.2.
Z is the depth of the object.
If I classify the pixels in each Z plane separately and then merge the results to get the final shape and volume of the object, would I lose any information, and would my final shape or volume of the object be wrong?
I have three images of the same scene with a set of triple correspondences. I can calculate the trifocal tensor and recover pairwise fundamental matrices, rotation matrices, and translation vectors (up to a scale between first and second translations). This scale can be easily calculated in a noiseless case (simulation). However, as soon as even small noise is added (Gaussian, with sigma=0.1 pixel), all estimations go seriously wrong. All triples are guaranteed inliers, but even optimization techniques fail to find correct (known in simulation) scale. Does anyone know where the catch might be?
I have this code for the attached 3DBodyPhantom image,
and I want to make slices of this image; after slicing, I want to perform reconstruction using radon and iradon.
How can I make slices of this image?
% Load the 3D body phantom volume (raw float32)
dim = [256 256 131];              % [256 256 116] for output42_atn_1.bin
fid = fopen('3DBodyPhantom.bin'); % fopen('output42_atn_1.bin');
x = fread(fid, 'float32');
fclose(fid);
y = reshape(x, dim);

% View sagittal slices
for i = 1:dim(1)
    imagesc(squeeze(y(i,:,:)));
    pause(0.05);
end

% View coronal slices
for i = 1:dim(2)
    imagesc(squeeze(y(:,i,:)));
    pause(0.05);
end

% View axial slices
for i = 1:dim(3)
    imagesc(squeeze(y(:,:,i)));
    pause(0.05);
end
How can I read a .mhd 3D image? Could you please provide MATLAB example code?
I have a building plan in 2d format that I want to convert into 3d, kindly suggest recent software available.
I am using tapping-mode AFM in MilliQ water to image the plasmid PGEM3Z, a ~1 um long circular DNA. I prepare the sample in the following way.
I take a 15 mm x 15 mm mica sheet and cleave it using Scotch tape. Then I incubate the mica sheet in a solution of 10 ml methanol + 0.5 ml acetic acid + 0.3 ml APTES for 30 min. This protocol is followed in Prof. Chirlmin Joo's lab. Then I clean it with methanol twice and dry it under a nitrogen purge. Then 0.3 ng/ul PGEM3Z suspended in MilliQ water is added onto the APTES-coated mica and incubated for 10 min. I can see the DNA, but the quality of the image is not satisfactory.
I am not sure about the process of tuning the cantilever to a particular frequency, or about the amplitude setpoint and the gains required to get a better image.
- Is there any order to be followed in which these parameters have to be set depending on their importance?
How do we choose the driving frequency and amplitude setpoint?
Please see the attached images:
In Image1, the DNA height is only about 0.7 nm, but it is supposed to be 2 nm.
In Image2, the DNA is seen clearly in the phase image but not in the topography.
Why does this happen?
So I am doing an experiment involving endothelial cells (HMVECs) plated on top of Matrigel, and I am trying to look at the branching and morphological properties of the cells over a time period. I have been using Matrigel and see tube formation and structures very rapidly, and I want to slow down this process. Are there any known protocols for this, such as a mixture of collagen and Matrigel? If so, can someone help me find references?
I'm looking to see slower tube formation over a course of days not hours.
So I have these endothelial cells (HMVECs) plated on top of a thick layer of Corning Matrigel and when I start imaging at T0 they are low (in this plate around 890um above the plastic) when I end at T11 which is 11 hours later they are higher (around 1010um in this plate). Can anyone explain why?
We are using Perkin Elmer's HCS Operetta
I can adjust the threshold function and lasso a cell outline, but I would like Fiji to automatically recognize and lasso the same cell outline through the z stack, so I don't have to do it manually before making the measurement. I got it to work once but then could not replicate it, so I know it is possible. Or, if there is an easier way to do it, I'd be happy to try it.
I want to calculate soft tissue thickness at different landmarks from 3D MRI/CT scan images. Can anyone suggest the best-suited automated software for measuring soft tissue thickness in this case?
What is a chi-square attack, and how do I perform a chi-square attack on least-significant-bit (LSB) image steganography?
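For background: the classic chi-square attack (Westfeld and Pfitzmann) tests whether the histogram counts of each pair of values (2k, 2k+1) have been equalized, which is exactly what sequential LSB embedding of random-looking bits does. A minimal numpy sketch of the statistic (the "cover" and "stego" images below are synthetic toys; a small statistic suggests embedding):

```python
import numpy as np

def chi_square_lsb(pixels, eps=1e-9):
    """Westfeld-Pfitzmann chi-square statistic over value pairs (2k, 2k+1).
    A *small* statistic means the pair frequencies are nearly equal,
    which is the signature of sequential LSB embedding."""
    hist = np.bincount(pixels.ravel(), minlength=256)
    even, odd = hist[0::2], hist[1::2]
    expected = (even + odd) / 2.0
    mask = expected > 0
    return np.sum((even[mask] - expected[mask]) ** 2 / (expected[mask] + eps))

rng = np.random.default_rng(0)
# "Cover": all-even LSBs, so the pairs are maximally unbalanced
cover = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
cover = (cover // 2) * 2
# "Stego": embed random bits in every LSB -> pair counts equalize
stego = (cover & 0xFE) | rng.integers(0, 2, size=cover.shape).astype(np.uint8)

print(chi_square_lsb(cover), chi_square_lsb(stego))
```

In practice the statistic is converted to a p-value with the chi-square distribution (e.g. scipy.stats.chi2) and computed over growing prefixes of the image to estimate how much of it carries a payload.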
Dear experts,
I am new to rodent MRI image processing. I have two questions:
1. What is the best program for registering structural rat images and their diffusion images to structural atlases, and vice versa? Is landmark registration a reliable method in this field?
2. Is it necessary to extract the rodent brain before registration? Does FSL-BET do the job?
Thanks for considering my question!
It is well known that the study of computer vision includes acquiring knowledge from images and videos in general. One of its applications is deriving 3D information from 2D images.
I would like to know whether the use of 3D imaging from IR cameras be considered in the computer vision field as well. Or is it better suited to be called a Machine Vision application instead?
I am not familiar with C++ (but I want to learn it); for image processing purposes I use MATLAB, and it is usually enough for me. For which image processing purposes can C++ have an advantage over MATLAB?
Perhaps using Matlab and some toolkit?
I have found two, one from Hendrick Lab, and Mokka (please see attached links). Can someone let me know if it is possible to calibrate videos, and get 3D kinematics through these tools?
If not, and I should look into purchasing software, can you please suggest some?
The EU Research and Innovation programme Horizon 2020 is now starting. Our team wants to participate in the programme and is looking for collaborators. We have some experience in erythrocyte 3D morphology studies using digital holographic interference microscopy. Our project proposes to study the 3D morphology of blood erythrocytes in oncological diseases and during treatment (radiation therapy, chemotherapy).
Tishko Tatyana
To get the leaf area of a needle using image processing software, how can we account for the needle thickness?
I want to evaluate my 3D tumor segmentation, but I do not have any idea how to evaluate it.
Regards
Ali
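The most common way to evaluate a segmentation is to compare it against a manual (ground truth) mask with overlap metrics such as the Dice coefficient and the Jaccard index. A minimal numpy sketch with toy 3D masks:

```python
import numpy as np

def dice(seg, gt):
    """Dice similarity coefficient between two binary 3D masks (1 = perfect)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    denom = seg.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def jaccard(seg, gt):
    """Jaccard index (intersection over union)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    union = np.logical_or(seg, gt).sum()
    return np.logical_and(seg, gt).sum() / union if union else 1.0

gt = np.zeros((10, 10, 10), bool);  gt[2:6, 2:6, 2:6] = True
seg = np.zeros((10, 10, 10), bool); seg[3:7, 2:6, 2:6] = True   # shifted by 1
print(dice(seg, gt), jaccard(seg, gt))   # 0.75 0.6
```

Surface-based metrics (Hausdorff distance, average surface distance) complement these when boundary accuracy matters more than volume overlap.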
When a 3D specimen is inspected with a microscope, the distance between the stage and the lens is varied. Images are taken at several distances. For each distance some image areas are in focus and some are not.
Are there any standard algorithms for finding focused areas or standard formulas for focus evaluation? Are there Matlab scripts for detecting focused areas?
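One standard focus measure is the variance of the Laplacian: in-focus regions have strong second derivatives, while blurred regions do not. Evaluating it per image (or per tile) across the focal stack identifies the sharpest plane, which is the core of shape-from-focus. A plain-numpy sketch (synthetic images; a crude neighbour-average stands in for defocus blur):

```python
import numpy as np

def laplacian_variance(img):
    """Variance-of-Laplacian focus measure: higher = sharper.
    Plain-numpy version of the classic 3x3 Laplacian convolution."""
    img = img.astype(float)
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
# crude blur: average each pixel with its 4 neighbours
blurred = (sharp[1:-1, 1:-1] + sharp[:-2, 1:-1] + sharp[2:, 1:-1]
           + sharp[1:-1, :-2] + sharp[1:-1, 2:]) / 5.0

print(laplacian_variance(sharp) > laplacian_variance(blurred))   # True
```

Other common measures include the Tenengrad (gradient energy) and normalized gray-level variance; in MATLAB these are all a few lines with conv2.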
I have a set of confocal microscopy images of a single cell, i.e., several 2D stacks of a cell. I need to reconstruct the 3D image and then extract the surface data in the form of (x, y, z) coordinates. Preferably, these coordinates would come from a surface mesh that can later be converted to a surface plot using R or MATLAB. I would be very thankful if someone could guide me through this problem.
This is an example of data.
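For the surface-extraction step, a true triangulated mesh would come from marching cubes (e.g. skimage.measure.marching_cubes on the thresholded stack). As a dependency-free sketch of the same idea, the (x, y, z) coordinates of surface voxels can be pulled from a binary stack like this (the toy "cell" below is just a cube):

```python
import numpy as np

def surface_voxels(mask):
    """Return (z, y, x) indices of foreground voxels that touch background
    through a 6-connected face: a plain-numpy stand-in for a surface mesh."""
    m = np.pad(mask.astype(bool), 1, constant_values=False)
    core = m[1:-1, 1:-1, 1:-1]
    interior = (m[:-2, 1:-1, 1:-1] & m[2:, 1:-1, 1:-1] &
                m[1:-1, :-2, 1:-1] & m[1:-1, 2:, 1:-1] &
                m[1:-1, 1:-1, :-2] & m[1:-1, 1:-1, 2:])
    return np.argwhere(core & ~interior)

cell = np.zeros((8, 8, 8), bool)
cell[2:6, 2:6, 2:6] = True            # toy "cell": a 4x4x4 cube
pts = surface_voxels(cell)            # coordinates of the cube's shell
print(len(pts))                        # 56 = 64 voxels minus the 2x2x2 interior
```

Multiplying the indices by the voxel spacing gives physical coordinates, which R or MATLAB can then mesh and plot.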
Hello Every one,
I am trying to estimate vessel diameter via plane fitting, following the idea presented by İlkay Öksüz in his article. The article is attached here.
In the article, under section "3.4: Vessel Diameter Estimation via Plane Fitting", a modified form of the plane equation is used to find the best-fit plane of the 3D point cloud data:
n_x(x - x_c) + n_y(y - y_c) + n_z(z - z_c) <= T
where (n_x, n_y, n_z) is the normal vector, T is an empirically set threshold (T = 2 voxels), and (x_c, y_c, z_c) and (x, y, z) are the Cartesian coordinates of the centerline location and of an arbitrary point from the lumen data, respectively.
I need to implement this in MATLAB, but I cannot understand it completely. Can anyone guide me on how to implement this equation?
I have the 3D point cloud, the center points of that point cloud, and the normal vector.
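For what it's worth, the equation simply selects the lumen points lying within a slab of half-thickness T around the plane through the centerline point. Interpreting "<= T" as an absolute distance along the unit normal (an assumption about the paper's intent), a sketch looks like this (toy points and normal; the same two lines translate directly to MATLAB):

```python
import numpy as np

def points_near_plane(points, center, normal, T=2.0):
    """Keep points whose distance along the normal from `center` is <= T."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)                  # make it a unit normal
    d = (points - center) @ n                  # signed distance to the plane
    return points[np.abs(d) <= T]

# toy data: plane through the origin with normal +z, threshold T = 2 voxels
pts = np.array([[0, 0, -5], [1, 2, -1.5], [3, 1, 0.0],
                [2, 2, 1.9], [0, 1, 4]], float)
cross_section = points_near_plane(pts, center=np.zeros(3),
                                  normal=[0, 0, 1], T=2.0)
print(len(cross_section))   # 3 points lie within the slab
```

The retained points form the local cross-section; fitting a circle or ellipse to them (after projecting onto the plane) then yields the diameter estimate.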
What is the best way of measuring the surface area and porosity of electrospun nanofibers (other than BET)?
My sample quantity is very small and I do not want to rely on image analysis. The 3D nature of these scaffolds renders this information very useful to investigate cellular penetration.
How can I project a depth map onto the three orthogonal planes (without knowledge of the calibration parameters of the camera)?
For tracking finger position in 3D, accuracy is very important.
There are various techniques for 3D measurement.
Which is the most accurate?
I usually produce 3D images of transient spectra with MATLAB, but I am looking for alternative software just to manipulate and edit the image for publication. So I do not need software to perform data analysis, only something to improve the visual quality of the data. Any suggestions?

If I collect a huge amount of image data from a biological organ or tissue (e.g., 500 GB to 1500 GB), what are the software and hardware requirements to process those image data for 3D reconstruction and visualization?
I want to know about free drawing programs for drawing 3D diagrams like the picture attached.
Figure from dx.doi.org/10.1021/nl300483y

Here I have attached a 3D surface plot. It shows the variation of copper removal with pH. I have referred to some papers; they say the cadmium removal percentage is highest at pH 6 and decreases below and above pH 6. But in my case it increases above pH 6. Could you kindly give me some interpretation based on the 3D surface plot?
I used best-fit registration, but it usually does not give an accurate superimposition.
Input towards a review journal
I want to measure vascular density and the number of junctions (or junction density) from 2D or 3D images obtained with immunohistofluorescence. I know AngioTool, but I want to know if other solutions exist. I have confocal images and images obtained with a NanoZoomer. I also want to correlate the length of the vessels with their diameter. Of course, open access software would be best. Thank you for your answers.
Hi there!
I am confused about what a 3D image really is in the context of hologram reconstruction.
Sometimes a paper talks about a 3D volumetric image from a stack of amplitude intensity images. My understanding is that you reconstruct the hologram at a loop of different distances, with some objects in focus and others out of focus at each distance; the focused objects are found in different distance planes. This is done automatically with an algorithm that finds the highest intensity change and thus the focus plane of each object. That gives you the Z position of each object in the field, and thus a volumetric representation of the scene from one single hologram.
However, the idea of 3D always seems to be combined with phase contrast images. What I understand is that the phase images somehow give you depth information. I know that arctan(imag/real) gives you the phase image pixel intensity, but I don't know how the depth information is obtained.
How exactly does the phase image, giving the depth information of the overall image, combine with stacks of amplitude images to give a 3D volumetric image?
Do you really need phase images to have a volumetric 3D stack? Can't you just build the volumetric stack through the first strategy I mentioned, where only the amplitude image stack is used?
Many thanks for all your help!
I have recently joined a research lab and am looking for some challenging topics to work on. Our research group mostly studies membrane curvature: its generation, sensing, and implications. I am mostly interested in the optics that can be developed to enhance the understanding of membrane curvature and cell trafficking (something along those lines). I am looking at recent literature but also wanted to reach out to other people who work in a similar field and have the in-depth knowledge to suggest a few ideas.
Thanks.
Dear everyone,
I have tried to plot the picture like the first attachment using a text file. The structure of the file is like:
#Time Energy Nd
1 0.1 2
2 0.1 4
3 0.1 5
1 0.2 9
2 0.2 10
3 0.2 15
……
And so on.
And then I use splot "datafile" u 1:2:3 with lines to plot the figure. However, there are lines drawn between the data along the x axis and the data along the y axis, so it still looks different from the picture in the first attachment.
And I also want to plot a picture the same as the second attachment.
Could anybody give me some instructions on how to plot pictures like the following two in Gnuplot?
Thanks very much in advance!
With My Best Regards
Liu


I would like to know detailed information about creating contours from 3D point data or an XYZ text file.
Actually, we are working on a project that requires delivering contours from an XYZ text file. To generate the contours we currently depend on the Terramodel premium software. Instead of depending on third-party tools, I would like to develop a tool to generate contours.
I am new to confocal microscopy. Recently, while recording an image in confocal microscopy, I found that if Z sectioning of a 10 micrometer fluorescent bead is done, it looks like 15 micrometers in the Z direction (if the 3D image of the bead is visualized using the confocal software). What are the ways to get a 3D image (from images taken in a confocal microscope) of an object with the correct size dimensions? Or how should the Z sections be stacked so that the object looks as it really is?
Does anyone have any information or know of research on evaluating the differences between 3D MSCT/CBCT DICOM-converted-to-STL models (including shrinkage, distortion, and Boolean defects) and the various types of 3D optically scanned models, such as dental intra-oral scanned data and extra-oral scanned data (made by desktop scanners, including laser projection, photogrammetric, and structured light types) (including meshing, registering, and patching)?
Appreciated.
I need to find the surface area of live tissue on my coral fragments in order to standardise dissolved oxygen measurements. Ideally I do not want to have to do this destructively (wax dipping or dry ash weight).
This paper compares different methods but ImageJ analysis from 2D photos was highly inaccurate. http://link.springer.com/article/10.1007/s00338-008-0459-3
I have already tried foil wrapping and weighing (Marsh, 1970) but this is very fiddly and inaccurate for complex branching structures.
With all the 3D modelling apps available there must be a simple way to get estimated surface area easily. I have come across the free app 123D http://www.123dapp.com/ which transforms 2D images into 3D models which can later be 3D printed, but this does not seem to give me the surface area data, unless I am missing something?
I would like to visualise the spatial organisation of my promoters, for which I know the positions of the histone centers and some TFBSs. I would like to know whether the TFBSs are accessible or not, close to each other or not, facing each other or not...
Any help would be really valuable :D
Thanks a lot
Thermal image scans can provide heat distributions as 2D projections, but they cannot provide depth or 3D information about the heat source.
From 2D temperature images alone, it may be difficult to determine the distance or relative position of a temperature increase, which may be the location of a failed component.
I have selected ROIs using the selection tools and then added these ROIs to the ROI manager. I would adjust each ROI's brightness/contrast and threshold, but when it came to running the 3D Object Counter, it would analyze the entire picture and not the pre-selected ROIs.
I have a DEM in raster format from which I want to extract point elevations.
I wish to test/manipulate the elevations at given locations. I've looked at the imported elements but cannot figure out how to query the elevation data for one location (pixel).
Any guidance is very much appreciated.
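If the raster's georeferencing is known, a single pixel can be queried with the standard six-element GDAL-style geotransform. A small sketch with a toy grid (assumes a north-up raster with no rotation; with GDAL itself, `ds.GetGeoTransform()` supplies `gt` and a band read supplies the array):

```python
# Sketch: query a DEM raster at a geographic location via the geotransform.
def world_to_pixel(gt, x, y):
    """gt = (origin_x, pixel_w, 0, origin_y, 0, -pixel_h) for north-up rasters."""
    col = int((x - gt[0]) / gt[1])
    row = int((y - gt[3]) / gt[5])
    return row, col

# toy DEM: 3x3 grid, top-left corner at (1000, 2000), 10 m pixels
dem = [[5.0, 6.0, 7.0],
       [8.0, 9.0, 10.0],
       [11.0, 12.0, 13.0]]
gt = (1000.0, 10.0, 0.0, 2000.0, 0.0, -10.0)

row, col = world_to_pixel(gt, x=1015.0, y=1985.0)   # middle of the grid
print(dem[row][col])   # 9.0
```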
I want to make 3D diagrams, so I need to learn this software package. Is it better than LaTeX?
I have three datasets A, B, and C. Each includes tens of thousands of chemical compounds. We know that A[1], the first compound in A, is similar to B[30], and both are similar to C[9999]; we call this compound NEO. A, B, and C share a lot of such compounds. I want to show A, B, and C in such a way that each compound is a dot (properties of the compound, such as molecular weight, can be used as 2D or 3D coordinates), and I also want to show that they all share NEO and many more such shared molecules, plus many molecules that are only in A and not the others, etc.
A Venn diagram would be the simplest way. But can I make a Venn diagram of the actual data, showing each compound as a dot and visualizing the shared ones as overlapping?
Or a plot like a microarray, having bins for compounds with certain properties and showing A, B, and C with different colors (or A&C, A&B, B&C, and A&B&C with colors)?
I also came across hive plots, but I'm not sure whether that is a good option.
In reality, A, B, and C are datasets of compounds in natural products, FDA-approved drugs, and synthetic chemicals. I want to visualize the distribution of these compounds, in other words, which compounds are shared between these datasets.
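Whatever the visualization, the seven Venn regions can be computed directly with set operations on compound identifiers (the IDs below are toys; in practice a canonical key such as an InChIKey would be used, and a plotting package such as matplotlib-venn could then draw the counts):

```python
# Sketch: extract the Venn-style overlaps of three compound sets.
A = {"NEO", "c1", "c2", "c3"}        # natural products (toy IDs)
B = {"NEO", "c2", "c4"}              # FDA-approved drugs
C = {"NEO", "c5"}                    # synthetic chemicals

regions = {
    "A only": A - B - C,
    "B only": B - A - C,
    "C only": C - A - B,
    "A&B only": (A & B) - C,
    "A&C only": (A & C) - B,
    "B&C only": (B & C) - A,
    "A&B&C": A & B & C,
}
for name, members in regions.items():
    print(name, sorted(members))
```

With tens of thousands of compounds per set, these lookups stay fast because Python sets are hash-based; the region memberships can then drive the coloring of a dot plot.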
I am looking for the 3D structures of miRNAs for an interaction study. Does anyone know how to find or model that kind of data?
I have no background in the area of 3D bioprinting, and I am unsure how Individually Pore Filling (IPF), Injection Filling (IF), and Fused Deposition Modelling (FDM) are carried out.
Any help would be greatly appreciated
I am using SPM to coregister MRI and PA brain images. How can I save the co-registered image displayed in the graphics window as a 3D image?
Hello everyone,
I would like to scan one short sequence (90 seconds) multiple times on one subject, and then co-register and average the images to get rid of artifacts and motion distortions. Can you recommend a tool for this? Is it possible with SPM12, or should I do it in Photoshop?
Thank you for answers
How do I evaluate the quality and accuracy of dental 3D imaging equipment?
Quality and accuracy of dental 3D imaging equipment include:
1. Dimensional alignment accuracy
2. Tissue density accuracy (HU/grayscale accuracy)
Could someone provide me with information (relevant to aspect (1) above) on:
1. The dental 3D imaging equipment accuracy standards and the imaging test protocols, kits, and templates?
2. Firmware or freeware for CBCT equipment alignment and calibration?
3. If mechanical calibration/alignment is needed, what is the optimum design of an imaging test template for this equipment? I have seen the phantom-ball test, but it is for professional engineers only. I would like some advice about quick-test templates for clinics.
Hi.
I am currently collecting a dataset of time-lapse, z-stacked image sets of some samples (in *.tif format).
I want to find a tool that can make a time-lapse 3D movie like the one below. (I once had a chance to use Amira version 5.3.2, but I could not find the 4D data manipulation function.)
Normally, image intensity is the combination of specular and diffuse reflection. I have a single grayscale image and want to separate these two components. Under the Lambertian assumption, the intensity is the dot product of the surface normal and the light source direction, with constant albedo. My first question is how to estimate the direction of the light source from a single grayscale image. Second, under the non-Lambertian assumption, the intensity is a linear combination of the specular and diffuse components. How can the specularity be estimated from a single grayscale image?
Hi,
I have a few SEM images from a sample that was continuously sliced and imaged. Now I need to align them and construct a 3D image. Most software asks for data saved in special formats, but I could only save the SEM images.
Is this sufficient to construct a 3D tomogram in the end? I know the spacing of each plane at which each image was obtained.
Thanks
I am working on a photoelectrochemical cell to split water molecules to produce hydrogen fuel.
Please let me know whether there is any free software that I can use to draw a 3D image of a photoelectrochemical cell.
Thank you very much.
I have an image file of micro-CT images in nhdr and raw format (dci format can also be obtained), which I can view using ImageJ/ParaView, and from which I am able to generate a pore network model. Now I want to simulate this 3D model for permeability prediction. What is the way to simulate such 3D images? I have heard of the Avizo software. Is there any open-source software with which I can perform these simulations?



My goal is to understand how the software compares the different point clouds when I have done a survey with two different techniques (for example, laser scanning and photogrammetry). I have tried two different software packages and found two different results; so how can I know which is correct, or at least more correct?
Has anybody written something about this? Thanks!
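As a baseline for judging the two tools' answers, the raw cloud-to-cloud distances can be computed directly and compared. A brute-force numpy sketch with synthetic clouds (packages such as CloudCompare report the same nearest-neighbour statistics, just with spatial indexing instead of a full distance matrix):

```python
import numpy as np

def nearest_distances(a, b):
    """For each point in cloud `a`, distance to its nearest neighbour in `b`.
    Brute force: builds the full (nA, nB) distance matrix."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1)

rng = np.random.default_rng(2)
scan = rng.random((200, 3))                      # e.g. laser-scanner points
photo = scan + rng.normal(0, 0.01, scan.shape)   # photogrammetry with noise

dists = nearest_distances(photo, scan)
print(dists.mean())   # small mean distance -> the clouds agree closely
```

Comparing the mean, median, and tail of `dists` (in both directions) against each tool's reported figures shows which tool's comparison method matches the data; the brute-force matrix only scales to small clouds, so subsample first for large surveys.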