Questions related to 3D Reconstruction
I'm looking for the WinSurf (Surf Driver) software for 3D reconstruction. I've searched all over the internet but couldn't find it. If this software has a setup file, could someone share it with me?
I am looking for a CT dataset of bone fractures in different bones to reconstruct 3D models. I am using these models for developing automated design algorithms. If there is a database of already existing 3D models of bone fractures, that would be useful as well.
I am planning to reconstruct 3D geometries of coronary arteries from angiogram images (not a single artery, but whole coronary artery trees). I am wondering if any of you have software suggestions that could help, or any Matlab/Python code?
I know that there is some open-source software that can be used for CT or MRI images. But I couldn't find any for angiogram images.
Thanks for your help.
I acquired multiple confocal stack images of olfactory epithelium, from rostral to caudal, from one slide. I want to create a 3D reconstruction by combining all the z-stacks to look at the expression pattern of a few neuronal markers. I am able to add multiple stacks by selecting the "add slice" option. However, there is a distance of 50 µm between every two slices. Two questions:
1) Can I add a blank space of 50 µm between every two slices?
2) How do I align for x-y drift between different z-stacks?
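For what it's worth, both steps can be prototyped outside the viewer. A minimal NumPy sketch (assuming the stacks are already loaded as arrays and the within-stack z-step is known, both assumptions to adjust to your acquisition): pad blank slices for the 50 µm gap, and estimate x-y drift between adjacent stacks by phase correlation (e.g. on maximum-intensity projections).

```python
import numpy as np

def insert_gap_slices(stacks, gap_um=50.0, z_step_um=5.0):
    """Concatenate z-stacks, padding a blank gap between consecutive stacks.

    stacks: list of (z, y, x) arrays; gap_um / z_step_um gives the number
    of empty slices to insert (z_step_um is an assumed value).
    """
    n_blank = int(round(gap_um / z_step_um))
    y, x = stacks[0].shape[1:]
    blank = np.zeros((n_blank, y, x), dtype=stacks[0].dtype)
    parts = []
    for i, s in enumerate(stacks):
        if i:
            parts.append(blank)
        parts.append(s)
    return np.concatenate(parts, axis=0)

def xy_shift(a, b):
    """Estimate the (dy, dx) translation of image b relative to a by phase correlation."""
    f = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image into negative offsets
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

`xy_shift` returns the offset to apply to the second projection (e.g. with `np.roll`) to register it onto the first.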
Hello everyone, I recently want to do a 3D reconstruction using histological serial sections, but I have no experience with this, so I'd like to ask whether anyone has done it and can give some advice.
Thank you so much
I was wondering if there is any (open-source) software available for 3D-EBSD reconstruction from data in which the slices do not have constant thickness, or even better, not a constant thickness profile?
Effectively, it means that I collect EBSD maps, and for each EBSD map I know the slice thickness at each position. As far as I know, most 3D-EBSD reconstruction software packages (such as DREAM.3D) only allow constant thicknesses as input, and definitely do not allow irregular thickness profiles...
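One workaround, if no package accepts irregular spacing: resample the stack yourself onto a uniform z grid before handing it to DREAM.3D or similar. A sketch in Python/NumPy (my own assumption about how the data is stored: one scalar map per slice plus a log of cumulative depths; nearest-neighbour selection in z is used because linearly interpolating orientations or grain IDs is not meaningful). A laterally varying thickness profile would need a per-pixel depth array instead of one depth per slice, with the same nearest-z lookup done per pixel.

```python
import numpy as np

def resample_to_uniform_z(maps, z_positions, dz):
    """Resample a stack of 2D maps with irregular z positions onto a uniform grid.

    maps: (n, y, x) array of per-slice maps
    z_positions: (n,) cumulative depth of each slice (from your thickness log)
    dz: target uniform spacing
    Nearest-neighbour in z, which is safer than interpolation for
    orientation-type data (grain IDs, Euler angles).
    """
    z_positions = np.asarray(z_positions, dtype=float)
    z_new = np.arange(z_positions[0], z_positions[-1] + dz / 2, dz)
    # index of the nearest original slice for every target depth
    idx = np.abs(z_new[:, None] - z_positions[None, :]).argmin(axis=1)
    return maps[idx], z_new
```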
I have z-stack confocal images of whole-mount tissue IF staining, marking the tight junction of the cell. And I would like to do 3D reconstruction of the cells to measure cell circularity and area using Imaris.
However, since it's challenging to mount my tissue perfectly flat, my z-stack images are slanted.
This makes the cell reconstruction challenging, since the cell membrane signal for a given cell is spread across multiple z-slices. I attached a few screenshots to demonstrate.
I'm wondering if there is a way to rotate my z-stack image and reassign the xy reference angle to computationally make the image flatter?
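One computational route (a sketch, not Imaris-specific): fit a plane to the tissue surface and shear each (x, y) column in z to cancel the tilt. The NumPy sketch below assumes the brightest voxel per column marks the surface, which may not hold for sparse junctional staining; a segmented surface map would be a more robust input for the plane fit.

```python
import numpy as np

def flatten_stack(stack):
    """Shear a slanted (z, y, x) stack so the brightest surface becomes flat.

    Fits a plane z = a*y + b*x + c to the per-column argmax of intensity,
    then shifts each (y, x) column in z to cancel the tilt. Integer-shift
    sketch; subpixel flattening would need interpolation.
    """
    z_surf = stack.argmax(axis=0).astype(float)          # (y, x) surface map
    yy, xx = np.mgrid[0:stack.shape[1], 0:stack.shape[2]]
    A = np.c_[yy.ravel(), xx.ravel(), np.ones(yy.size)]
    coef, *_ = np.linalg.lstsq(A, z_surf.ravel(), rcond=None)
    tilt = (A @ coef).reshape(z_surf.shape)              # fitted plane
    shift = np.round(tilt - tilt.mean()).astype(int)     # per-column z offset
    out = np.zeros_like(stack)
    for y in range(stack.shape[1]):
        for x in range(stack.shape[2]):
            out[:, y, x] = np.roll(stack[:, y, x], -shift[y, x])
    return out
```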
I am looking for image processing software that can reconstruct high-resolution 2D SEM images of a porous Ni-GDC anode into a 3D representation. The intent is to separate the Ni, Ce, and pore phases, and to measure tortuosity, porosity, etc.
I have contacted people from Avizo (Mercury Computer Systems), but their license costs above 13,000 US$. I do not have the means (financial or time) for this. Can anyone recommend an alternative?
I am working on a spinal cord injury model, and in order to characterize the lesion I want to do a 3D reconstruction of a spinal cord. I have therefore cut a 1 cm spinal cord into 10 µm sections, with 4 cuts on one slide.
I have tried FIJI, but I am not happy with it. I need a 3D reconstruction that I can interact with and use to measure the trauma volume.
Does anyone know how to solve this problem?
Every help is very much appreciated.
I am currently testing a thresholding method and would like to evaluate it on a dataset of images with ground truth to see how well it works. I am not using this thresholding as segmentation to classify any objects; I am just thresholding the image and then testing the binarized image against the ground truth. If anyone knows a dataset I could use, I would appreciate it if you could share it with me.
Thanks in advance
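Datasets aside, the evaluation itself is straightforward once you have the binary mask and the ground truth; a NumPy sketch of the usual overlap metrics:

```python
import numpy as np

def binarization_scores(pred, truth):
    """Compare a binarized image against a binary ground-truth mask.

    Returns common overlap metrics; both inputs are boolean arrays.
    """
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    return {
        "accuracy": (tp + tn) / pred.size,
        "jaccard":  tp / (tp + fp + fn) if tp + fp + fn else 1.0,
        "dice":     2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0,
    }
```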
I have 2 .stl files which are 2 organs of the inner ear. They are segmented from CT slices and exported separately as 2 .stl files.
Some parts of the 2 organs should be in physical contact, and they should share the same surface in the subsequent finite element modeling. However, they are 2 separate STL files, and the 2 surfaces cross each other at the boundary. Is there any way to merge them, with suitable software and method, so that they share the same surface at the desired part?
The sketch is attached. The stl files are 3D surfaces, and the sketch is the cut plane of the model to illustrate our problems. Thank you very much.
Please see the mesh files below if it could make the question clear. Thanks a lot!
I am searching for someone who can do 3D reconstruction using Micro-CT data.
I have the scans but I cannot reconstruct them three-dimensionally!
In my study, we aim to compare void volumes across 4 different endodontic obturation methods.
The time is also very limited!
My colleagues are working on speckled-image 3D reconstruction using deep learning. However, we cannot find suitable public data with speckle. Is there any software or method to simulate binocular camera images of a given 3D target with speckle?
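In the absence of public data, a simple simulator can get you started: generate a random-dot speckle texture, then synthesize the second camera's view from a depth map with the pinhole relation d = f·B/Z. The sketch below is plain NumPy; focal length, baseline, and dot statistics are all made-up parameters to tune, and it ignores perspective distortion, occlusion ordering, and sensor noise, which a serious simulator (or a renderer such as Blender) would add.

```python
import numpy as np

def speckle_pattern(h, w, density=0.08, radius=1, seed=0):
    """Random-dot speckle texture: scatter bright dots on a dark background."""
    rng = np.random.default_rng(seed)
    img = np.zeros((h, w))
    n = int(density * h * w)
    ys = rng.integers(radius, h - radius, n)
    xs = rng.integers(radius, w - radius, n)
    for y, x in zip(ys, xs):
        img[y - radius:y + radius + 1, x - radius:x + radius + 1] = 1.0
    return img

def stereo_pair(texture, depth, f=500.0, baseline=0.1):
    """Warp a speckle texture into a right-camera view.

    disparity d = f * B / Z (pinhole model); the left view is the texture
    itself, the right view shifts each pixel left by its (rounded) disparity.
    """
    h, w = texture.shape
    disp = np.round(f * baseline / depth).astype(int)
    right = np.zeros_like(texture)
    ys, xs = np.mgrid[0:h, 0:w]
    xr = xs - disp
    ok = (xr >= 0) & (xr < w)
    right[ys[ok], xr[ok]] = texture[ys[ok], xs[ok]]
    return texture, right
```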
I am looking for 3D reconstruction software that is easier to use than Amira-Avizo. I need to reconstruct 2D CT scans into 3D. Any recommendations? Open source, maybe?
Hi dear community,
I am currently working on a project where I want to generate a 3D reconstructed model of rubble piles (resulting from a building collapse) via remote sensing. In my case, I employ a LiDAR aerial laser scanner as well as aerial photogrammetry for point cloud generation of the disaster scene. The problem is that only the surface of the scene that lies in the field of view can be reconstructed. However, in order to evaluate the structural behavior of the debris with regard to structural stability, I need to know how the collapsed elements are structured beneath the debris surface. Does anybody have an idea how I can proceed, or has anybody conducted a related study? Is my objective even feasible?
Thank you in advance!
Q1: There have been quite a lot of attempts to combine, or at least co-relate, street-level images and aerial images (mostly from drones). Folks from computer vision have developed algorithms such as SIFT or SURF for feature matching. But what are some better (easier) ways for non-computer-science experts to do this without using commercial software (aerial-image-based 3D reconstruction, https://all3dp.com/1/best-photogrammetry-software/)?
Q2: The main reason for combining street-level and aerial images is to improve the quality of the generated 3D model (mainly due to geometric defects, blurred textures, shadows, etc.). But if we think the other way around, wouldn't it be simpler to combine the point cloud models generated separately from street-level and aerial images? All you have to know is some GPS points to match the point clouds, plus maybe some noise filtering.
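On Q2: if both clouds contain a handful of common GPS-surveyed points, the rigid (or similarity) transform between them can be solved in closed form with the Umeyama/Kabsch method, no feature matching needed. A NumPy sketch:

```python
import numpy as np

def umeyama(src, dst):
    """Similarity transform (scale s, rotation R, translation t) mapping
    src points onto dst in the least-squares sense (Umeyama, 1991).

    src, dst: (n, 3) arrays of corresponding points (e.g. shared GPS markers).
    """
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Apply it with `aligned = s * (cloud @ R.T) + t`, then do noise filtering or ICP refinement on top if needed.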
I performed a 3D reconstruction of a room using a Kinect V2. The idea is to position the Kinect in the middle of the room on a tripod, then take pictures (color and depth images) every 10°. For the processing I use MATLAB, with ICP for the registration, and when the camera completes a full turn the famous problem appears: accumulation of registration error.
So my question is:
how can I minimize the gap between the first and the last point clouds?
I have this paper as support:
Study of parameterizations for the rigid body transformations of the scan registration problem
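Besides a full pose-graph optimization, a common pragmatic fix is: after registering the last scan back to the first (loop closure), measure the residual and spread it over all poses in proportion to their position along the loop. A translation-only NumPy sketch (rotational drift needs the same treatment with quaternion slerp, not shown here):

```python
import numpy as np

def distribute_loop_error(translations, loop_residual):
    """Spread the loop-closure translation error evenly over a circular scan.

    translations: (n, 3) estimated sensor positions around the loop
    loop_residual: 3-vector by which the last pose misses the first
    Each pose i is corrected by the fraction i/(n-1) of the residual.
    """
    t = np.asarray(translations, float)
    n = len(t)
    frac = np.linspace(0.0, 1.0, n)[:, None]
    return t - frac * np.asarray(loop_residual, float)
```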
I perform whole-cell patch-clamp recordings from neurons in acute brain slices and fill the neurons with biocytin for later reconstruction. To preserve morphology, I am used to retracting the pipette slowly to allow for the membrane to reseal.
Lately however, I’ve been having the issue that the cell nuclei stick to the pipette and come along upon retracting, in effect leaving me with an unintended nucleated patch, which often destroys cell morphology. It seems the longer the recording lasts, the greater the likelihood of this occurring. I’ve been playing around with the osmolarity of my (intracellular) solutions without much luck so far. Does anyone have some thoughts on what could cause this? I’m a bit short on ideas and would much appreciate your comments.
I am trying to produce a 3D reconstruction of histological slides using software. To do so, my slides have to be converted from a generic image to the 'pyramidal' TIFF format. I was recommended 'ImageMagick', which enables file conversion. I'm required to input these instructions:
<convert INPUT.tif -compress jpeg -quality 90 -define tiff:tile-geometry=256x256 ptif:OUTPUT.tif> However, I've encountered multiple problems.
1. The tutorials online < https://www.youtube.com/watch?v=hy75zatrDcQ&t=683s > show that there is a 'convert.exe' in the folder where ImageMagick is installed (Fig. 1), but in the latest version I've downloaded (Fig. 2), the executable is missing. I came across a comment saying that it was removed because the command clashes with Windows, and that is why the command is now 'magick', as seen on the website (Fig. 3).
2. Okay, now I know it is no longer 'convert'; however, where am I to input the command? There is no terminal in the ImageMagick interface (Fig. 4).
Solving this issue would really help me move along to actually trying out the software itself.
A good FIB-SEM data stack can contain a few thousand pictures. It is a very tricky and annoying process to segment them all manually. Which software do you use for such tasks (real practical experience with big data)?
I'm currently looking into using the full-color data from the Visible Human Project in an imaging viewer.
I have not, however, been able to locate datasets from the VHP without the blue background from the dyed gelatin solution used.
Do any of you have these data without the blue stuff available or do you happen to know where they can be obtained?
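If the public sets only exist with the gelatin, masking it yourself is usually feasible because the background is strongly blue-dominant. A NumPy sketch (the margin is a made-up number to tune on the actual slices; an HSV-space threshold would be a refinement):

```python
import numpy as np

def mask_blue_background(rgb, margin=30):
    """Zero out pixels where blue clearly dominates red and green.

    rgb: (h, w, 3) uint8 image; margin is an assumed tolerance to tune.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    blue = (b - np.maximum(r, g)) > margin
    out = rgb.copy()
    out[blue] = 0
    return out
```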
I have a question.
Please, what are the dimensions of the image produced by a CT scanner while it scans the patient? It produces axial images, then reconstructs them to produce coronal, sagittal, and oblique planes. So, what are the dimensions of the axial image? Does it have only one dimension along the z-axis, two dimensions, or all three dimensions, due to the movement of the machine during scanning?
If the CT takes images while the table holding the patient is stopped, what are the dimensions of the resulting images? Are they two-dimensional? Can we consider it like a routine x-ray machine that produces 2D images (plain film)?
I have micro-CT reconstructed slices of my open-cell iron foams, made with the NRecon software. I have the impression that the quality of the images is not as good as it should be, and the struts tend to be blurry. This was reflected when I created the 3D model from the 2D slices in another software package. It has been suggested that I use the smoothing function in NRecon, or find software that turns the greyscale images into black-and-white slices and then use those to reconstruct the 3D model. Does anybody know such software? Do you have any other suggestions? I would appreciate your advice!
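The grey-to-binary step is plain global thresholding, which Fiji/ImageJ and most micro-CT packages offer built in. For reference, Otsu's method (a common default choice) is small enough to sketch in NumPy:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: pick the threshold that maximises between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)            # class-0 probability
    mu = np.cumsum(p * centers)     # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return centers[np.argmax(sigma_b)]

def binarize(img):
    """Black-and-white slice from a greyscale slice."""
    return img > otsu_threshold(img)
```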
I am curious, at this point, to compare approaches: Delaunay triangulation, 3D alpha shape, Voxel Grid and Marching Cubes.
Which of these approaches is most used?
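As a rough orientation: Delaunay triangulation and alpha shapes work directly on point sets, while the voxel grid plus Marching Cubes pair dominates when the data is volumetric (CT, MRI, TSDF fusion), and the two chain naturally: first bin points into an occupancy grid, then run Marching Cubes (e.g. `skimage.measure.marching_cubes`) on it. The binning step, sketched in NumPy:

```python
import numpy as np

def voxelize(points, voxel_size):
    """Bin a point cloud into an occupancy grid (the 'Voxel Grid' approach).

    Returns the boolean grid plus its origin, so Marching Cubes can later
    be run on the grid to extract a surface.
    """
    pts = np.asarray(points, float)
    origin = pts.min(axis=0)
    idx = np.floor((pts - origin) / voxel_size).astype(int)
    shape = idx.max(axis=0) + 1
    grid = np.zeros(shape, dtype=bool)
    grid[tuple(idx.T)] = True
    return grid, origin
```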
I am trying to do 3D reconstruction with a SPECT system having multi-pinhole arrays as collimators. Can I use STIR to reconstruct 3D objects? Is it possible to specify the collimator structure in the STIR software?
Thanks in Advance!
Micro-CT is a potent tool for measuring bone morphology. I have a few micro-CT-scanned samples from an in vivo study of rabbit bone, for histomorphometry of the osseointegration of titanium implants. Please suggest software that can rebuild the 2D slices into a 3D reconstruction and measure the osseointegration, and how to go about it.
I want to visualize the error map (RMSE) between a generated 3D face and the ground-truth 3D face for evaluation purposes. Can anyone provide suggestions or existing solutions, please? Thanks a ton!
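If the two meshes share vertex order (typical for 3DMM-based methods), the error map is just the per-vertex distance, rendered as a vertex colormap in any viewer (MeshLab, Open3D, matplotlib). Sketch:

```python
import numpy as np

def vertex_error_map(pred_vertices, gt_vertices):
    """Per-vertex Euclidean error between a reconstructed and a ground-truth
    face mesh with identical topology (same vertex order assumed)."""
    err = np.linalg.norm(pred_vertices - gt_vertices, axis=1)
    rmse = float(np.sqrt((err ** 2).mean()))
    return err, rmse
```

If correspondence is unknown, replace the direct difference with a nearest-neighbour distance (e.g. a KD-tree query against the ground-truth surface).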
I want to analyse the error in the 3D position obtained from stereo vision with two cameras placed perpendicular to each other, after calibration and 3D reconstruction.
I have a set of image slices of the retina. Using Optovue software I can reconstruct the 3D structure of the piece of retina (see attached link) and measure some 2D and 3D parameters (like the thickness of the retina at different positions, the volume of a particular region, etc.). I want to find other software for this purpose (to be able to work from home). I tried 3D Slicer, but I can't make it work well: I can upload a set of jpg images, but I can't create a 3D model. Perhaps somebody can advise another software package, or point me to a good tutorial on 3D Slicer and jpeg images.
Digitizer Program which gives X,Y Coordinates of Lines or Points on Scanned Images
We want to extract the X,Y coordinates manually from imported figure (image).
Figures contain clear or non clear points or lines.
A rapid Google search gives:
Graphmatica: exports figures but does not import them.
Graph digitizer: ?
Engauge Digitizer: I think it works only for distinct points, not merged points or lines?
Thank you for your advice on this subject; we await your experience.
I want to explore the effect of different stimuli on brain glucose metabolism in mice. I now have series of PET/CT images from a microPET/CT scan. I can see the differences in these images, but I can't tell the specific brain regions apart. Due to the small volume of the mouse brain, I cannot find a proper software package for 3D reconstruction. Can anyone give me some help with analysing these data? Thanks so much.
I wish to make a 3D reconstruction from 2 or 4 2D SEM images. I have either 2 images taken from two different angles, or 4 images taken from four different directions (east, west, north, and south). Is there a library that does this? (I work in Python.) If not, what steps must be followed? If you have ready code, it is welcome. Thanks.
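For the two-image tilt case, classical SEM stereo-photogrammetry reduces to a parallax formula once features are matched between the views (OpenCV's SIFT/ORB can do the matching step). A sketch of the height computation, assuming eucentric tilt about the y axis; the tilt angle and pixel size are inputs you must know from the microscope:

```python
import numpy as np

def height_from_parallax(x_left, x_right, tilt_deg, pixel_size=1.0):
    """SEM stereo-photogrammetry height estimate from two tilted images.

    x_left, x_right: x coordinates (pixels) of the same feature in the two
    images, acquired with a relative tilt of tilt_deg about the y axis.
    Classic parallax formula: z = p / (2 * sin(tilt / 2)), p the parallax.
    """
    p = (np.asarray(x_left) - np.asarray(x_right)) * pixel_size
    return p / (2.0 * np.sin(np.radians(tilt_deg) / 2.0))
```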
Why can't I find related papers? Is it because the traditional methods are good enough? Most of what I find related to deep learning concerns low-dose CT reconstruction, super-resolution, and so on. Are these directions currently popular? I am a first-year graduate student and want to use deep learning to do some CT-related work, but very little code from related papers is available. Any suggestions? Thank you very much.
In order to perform a 3D reconstruction by TomoSAR using TerraSAR-X images, can I use a set of images acquired from both ascending and descending orbits, or should I pick only ascending or only descending?
Does anybody have any experience in describing smoothness of biological surfaces using confocal microscopy images that are 3d reconstructed?
I am trying to generate sinograms from scanned images of a multi-pinhole SPECT system. After filtering, I am trying to reconstruct the object volumetrically rather than slice by slice. Can anyone reply, or suggest any publications on this topic?
I would like to 3D print the 3D reconstruction I obtained from Imaris software, but Cura can only work with STL. How can I convert the format?
I am working on a mummified hand attributed to a saint. To retrieve as much information as possible from it, I would like to know which specific features I can look at: anything that will allow me to learn a little more about its former owner's identity and his/her physical state. I know hands are not so popular in osteo-archaeology for determining sex or specific diseases, but if someone could give me some tips, it would be great.
I have a 3D reconstruction of it acquired by X-ray, so I can look through its internal structure as well.
I recently learned how to perform 3D cell reconstructions with ImageJ, but the result is very limited. I am looking for free software that allows me to obtain confocal 3D images at publication resolution (and even small spinning videos), since my group does not have a license for IMARIS. Any suggestions?
Thank you very much
I am interested to know whether it is possible to process (reconstruct) images obtained through TEM. On a few platforms it is mentioned that ImageJ can handle 3D model reconstruction, but I am not able to find anything within ImageJ. Any suggestions will point me in the right direction.
The point cloud created from the (x, y, z) world coordinates generated from Kinect v1 RGB and depth video frames gives the impression of a flat 2D image instead of a proper 3D model. It seems the depth is not properly filled. This may partly be due to the fact that the Kinect encodes depth distances that are too far or too close as 2047 and 0, respectively. Thus, I treat both values as zero and take them into consideration while generating (x, y, z), and so they affect the point cloud. Can anyone provide some insight into why the depth is not filled, and how volume blending can be done to obtain a proper 3D model? Any advice, pointers, or links would be highly appreciated. Thanks
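For what it's worth, the flattening described above is exactly what happens when invalid readings (0/2047) enter the cloud as z = 0 points: a large fraction of the cloud collapses onto the camera plane. Such pixels should be dropped, not zeroed. A back-projection sketch (the intrinsics below are commonly quoted Kinect v1 ballpark values; calibrate your own unit):

```python
import numpy as np

def depth_to_points(depth, fx=594.2, fy=591.0, cx=339.3, cy=242.7):
    """Back-project a Kinect v1 depth map (mm) to (x, y, z) camera coordinates.

    Raw readings of 0 and 2047 are invalid and are masked out entirely,
    NOT treated as z = 0 points (that is what flattens the cloud).
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth.astype(float)
    valid = (depth > 0) & (depth < 2047)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1)
    return pts[valid]   # (n, 3), invalid pixels dropped
```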
I intend to reconstruct the 3D pore structure of my specimens using 2D cross section images obtained from light microscopy. Would it be possible to simultaneously stack two sets of 2D images obtained by serial sectioning in two different directions (height and width of the specimens) to get a 3D structure?
I am interested in reconstructing 3D images from Z-stacks with Amira, but I have never used it and I cannot find any specific tutorial for this.
Let me explain: I acquired 3D stacks (X, Y, and Z coordinates) from different tissue sections. Now I'd like to reconstruct the whole tissue by "attaching" all these stacks together.
I know it's possible in Amira, but I don't know how.
Has anyone ever used Amira?
If so, any suggestions? Or is there a protocol somewhere?
I am trying to implement the MLEM algorithm for SPECT image reconstruction in 3D, but I am confused about the probability matrix. How can I calculate the probability matrix?
Thanks in advance
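On the probability (system) matrix: element A[i, j] is the probability that an emission in voxel j is detected in projection bin i; for a multi-pinhole system it is normally built by ray-tracing each voxel through each pinhole aperture, or measured with a point source. With A in hand, the MLEM update itself is short; a NumPy sketch:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM update: x <- x / (A^T 1) * A^T( y / (A x) ).

    A: (m, n) system (probability) matrix; y: (m,) measured counts.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)   # A^T 1, per-voxel detection sensitivity
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.divide(y, proj, out=np.zeros_like(y, dtype=float),
                          where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```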
Hi, I have a 3D cube with 128x128x128 voxels. If a cone with known details passes through the cube, how can I find which voxels the cone intersects? The attached image shows the situation; the cube is yellow. We know the origin and angle of the cone. The problem is to find which voxels of the cube will be inside this cone.
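A direct way to answer this (a sketch, assuming the cone is specified by its apex, axis direction, and half-angle): a voxel center c lies inside the cone iff the angle between c − apex and the axis is at most the half-angle, i.e. (c − apex)·n ≥ cos(θ)·‖c − apex‖ with n the unit axis. Testing only centers is an approximation; for partial-overlap accuracy, test the eight voxel corners as well.

```python
import numpy as np

def voxels_in_cone(grid_shape, apex, axis, half_angle_deg, voxel_size=1.0):
    """Boolean mask of voxels whose centers lie inside an infinite cone."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    zz, yy, xx = np.mgrid[0:grid_shape[0], 0:grid_shape[1], 0:grid_shape[2]]
    centers = (np.stack([zz, yy, xx], axis=-1) + 0.5) * voxel_size
    v = centers - np.asarray(apex, float)
    proj = v @ axis                               # distance along the axis
    cos_t = np.cos(np.radians(half_angle_deg))
    norm = np.linalg.norm(v, axis=-1)
    inside = (proj > 0) & (proj >= cos_t * norm)  # angle(v, axis) <= half-angle
    return inside
```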
I am doing electron tomography of a dense, approximately spherical object. Due to the sample's geometry I could record an X tilt series only from -55° to +55°. During tomogram composition in IMOD, I get a weird shape of the object in the lower and upper parts of the tomogram. What do you think: does this happen because of a lack of data (a higher-angle tilt series or a dual-axis tilt series is needed), or because of a user alignment mistake? I would appreciate it a lot if any experienced IMOD user could share his/her opinion and answer some more questions about the program.
Hi. I am currently working on a medical image processing project that segments the coronary artery blobs from the axial CT slices of the human heart and converts the segmented coronary blob slices into a 3D coronary artery model. I wish to extend my project by mapping the 3D model back to the 2D axial input slices, i.e., by clicking on a point in the 3D model, it should display the appropriate CT slice and also point to the position in that slice. Are there any methods (techniques) or software available to do this? If so, how?
I am using OpenCV to implement this project.
The EU Research and Innovation programme Horizon 2020 is now starting. Our team wants to participate in the programme and is looking for collaborators. We have some experience in erythrocyte 3D morphology studies using digital holographic interference microscopy. Our project proposes to study the 3D morphology of blood erythrocytes in oncological diseases and during treatment (radiation therapy, chemotherapy).
A monocular SLAM's initial coordinate system is random and its scale unknown.
I want to know how to transform the initial coordinate system into another coordinate system (from a marker, like a chessboard).
What I am doing now is initializing the SLAM with a marker. The SLAM's map points are reconstructed from poses calculated via PnP, so I know the direction and scale of the SLAM's coordinate system. In that case, I can transform the SLAM's coordinate system into the marker's coordinate system easily.
But the map points reconstructed from the marker during SLAM initialization are untrusted, and the graph optimization will change the map points' positions to make a smooth trajectory by minimizing the reprojection error.
In this case, I cannot get an accurate transformation between the SLAM's coordinate system and the marker's coordinate system.
Any idea is appreciated.
By the way, I'm studying ORB-SLAM.
It is really a nice toolkit for DTM extraction from 3D point clouds. We would like to use it for our urban object analysis task on very-high-density MLS point clouds. Is it doable to get the above-ground height for each point directly from the processing? That would be a nice option.
Hello all together,
I have the problem that I want to obtain 3D landmarks using the software Landmark (Editor) from IDAV, but it seems that my .ply files are too big (>150 MB). When loading a second surface, the program crashes. Is it possible to compress the surface files without losing too much information, so that I can work with more files in Landmark? If I'm right, others work with files of only a few MB in size.
Hello Every one,
I am trying to estimate vessel diameter via plane fitting, using the idea presented by İlkay Öksüz in his article, which is attached here.
In their article, under section "3.4 : Vessel Diameter Estimation via Plane fitting", they used a modified form of plane equation to find best fit plane of the 3D point cloud data. The equation they use is also given below:
nx(x - xc) + ny(y - yc) + nz(z - zc) <= T
where the array (nx,ny,nz) refers to the normal vector, T is an empirically set threshold ( T =2 voxels), and the (xc,yc,zc) arrays and (x,y,z) define the Cartesian coordinates of the centerline location and that of another arbitrary point from the lumen data, respectively.
I need to implement this in Matlab, but I cannot understand it completely. So, can anyone guide me on how I should implement this equation?
I have the 3D point cloud, the center points of that point cloud, and the normal vector.
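A NumPy translation of that test, for illustration (note: as printed, the inequality is one-sided; selecting the slab of lumen points within distance T of the cross-sectional plane corresponds to the absolute-value form below, which is worth checking against the article's intent; the same few lines port directly to Matlab):

```python
import numpy as np

def lumen_points_near_plane(points, center, normal, T=2.0):
    """Select lumen points within distance T of the cross-sectional plane.

    Implements |n . (p - c)| <= T: the plane passes through the centerline
    point c with normal n along the vessel direction; T is in voxels.
    points: (n, 3); returns a boolean mask over the points.
    """
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    d = (np.asarray(points, float) - np.asarray(center, float)) @ n
    return np.abs(d) <= T
```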
If we make a 3x3 matrix on a wafer, we get 9 parts. I have ellipsometry data for these 9 parts, but I do not know how to make a 3D image of them in Origin.
Please suggest what I should do.
For tracking finger position in 3D, accuracy is very important.
There are various techniques for 3D measurement.
Which is the most accurate one?
We have a NewTom CBCT machine with a denture scan option. I just want to know whether casts scanned with this option can be used for clear aligner fabrication. Can we convert the normal DICOM data of the maxilla and mandible in occlusion into STL format and use it for clear aligner fabrication? My intention in asking this question is to find out whether we can use CBCT data as an alternative to a 3D scanner in cases where soft tissue details are not of much importance, as with clear aligners.
What we need is a pointer for picking spatial points inside buildings and their rooms, and 3D software which can convert those spatial data points into a 3D model.
We want to pick floor and ceiling points for corners, and the frames of windows, doors, hallways, wall-shape transitions, etc. The idea is to save a ton of time taking and scribing measurements with conventional tools, and to get actual accurate shapes of rooms into the model very quickly.
The model would be imported into Revit for further work and remodeling, to prep for visualization and walk-throughs.
1. How can we get this done, and with what hardware and software?
2. Perhaps finger tracking could work for this; we would need to know whom to contact for more information and possibly assistance.
3. This seems like a great R&D project if a system is not currently available. We can make resources available for QA testing at NMSU.
We had considered that a Kinect might be used, but expect it would not provide the accuracy, or be usable with a set of procedures that work the same in all rooms. We expect that something like an ultrasonic system with a wand and reference unit would be useful and accurate for these purposes.
Thank You for any assistance you may provide, and for taking the time to respond.
Creating 2 spherical images doesn't work, since there are at least 2 directions where the capture will be the same (left and right of the axis perpendicular to the baseline). My best shot is to circumscribe the capture points on a sphere with d = stereo base, then take cubic images and stitch them in post-processing. It seems a bit of a messy solution, especially for creating 360° videos... frame by frame.
I am building a 3D orthomosaic of a Carboniferous coal face and have projected it in Agisoft. However, it has been projected with the z-axis coordinates reversed; I think this may be due to the camera, as this is the first time this has happened. How do you change these coordinates? Does anyone have a link to a tutorial on the coordinate system in Agisoft?
Hello, we are working on different ways to digitally reconstruct histological tissue sections with a very high accuracy for anatomical purpose. Has anyone good results with a specific and commercially available reconstruction software?
I have thousands of polygons, and each polygon contains polylines inside.
As per a requirement from a customer, I need to make a separate polygon where two polylines intersect each other, when the polylines are extended programmatically using .NET.
Please suggest how I can complete this task.
For a clear understanding, please look at the attached image.
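The geometric core of this (where do two extended polylines meet?) is a line-line intersection on the supporting lines of the end segments. Illustrated in Python for brevity; the same algebra carries straight over to .NET:

```python
def segment_intersection(p1, p2, p3, p4):
    """Intersection point of the (infinite) lines through p1-p2 and p3-p4.

    Returns None for parallel lines. Extending the polylines
    'programmatically' amounts to intersecting their supporting lines and
    checking where the hit falls relative to the original endpoints.
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(den) < 1e-12:
        return None                      # parallel or coincident
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    px = (a * (x3 - x4) - (x1 - x2) * b) / den
    py = (a * (y3 - y4) - (y1 - y2) * b) / den
    return (px, py)
```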
We are analyzing image stacks from micro-CT (computed tomography). A typical image sequence comprises 1000 images of 10-30 MB each.
The system we've used so far had 500 GB RAM but was sometimes not strong enough. Image reconstruction is done in Matlab, and 3D rendering is then done in Avizo Studio.
Any suggestions for a good workstation computer to do these analyses? Should we use:
a) 1 Terabyte RAM
b) Two good Intel Xeon processors
c) Two or more high-end graphics cards
or combinations of these?
Thanks very much for answering.
How can I obtain a 3D reconstruction of images after performing FIB-SEM analysis? I am wondering whether a reconstruction facility comes coupled with the FIB-SEM setup, or whether separate software/applications are required.
Monocular-camera-based visual odometry is popular in the fields of computer vision and robotics. However, the absolute scale cannot be obtained from monocular image information alone. Is there any good way to estimate this metric information from monocular images?
I have several 3D vertebra structure datasets, and what I have done so far is:
- calculate parameters such as thickness, connectivity, and space distance using BoneJ on several ROIs;
- calculate a PCA model of those parameters and ROIs;
- choose one ROI and convert it to a DRR;
- compare the similarity between this DRR and an X-ray radiograph.
What I need now (if the difference is too big) is to make some changes to the 3D structure, convert it again to a DRR, and compare it with the X-ray radiograph, but I don't know how to use my PCA model for this purpose or how to change my 3D structure.
Any advice please? I'd really appreciate any help. Thank you.
I am learning computer vision for my project, which involves processing point clouds for 3D reconstruction. I need to mesh my point clouds (after filtering and segmenting them). I am using MATLAB 2016a. Are there inbuilt functions or tutorials I can use for meshing in MATLAB?
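In MATLAB itself, `delaunay`/`delaunayTriangulation` and (for point sets) `boundary` are the built-ins to look at. As a language-neutral illustration of the triangulation step, a SciPy sketch (it assumes the cloud has been projected or sliced to 2D; true 3D surface reconstruction from unorganized points usually means alpha shapes or Poisson reconstruction, e.g. in Open3D or CGAL):

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_points_2d(points):
    """Delaunay-triangulate a set of 2D points (e.g. a projected/sliced cloud)."""
    tri = Delaunay(np.asarray(points, float))
    return tri.simplices   # (m, 3) vertex indices of the triangles
```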
I am facing some trouble transferring a 3D reconstructed model from MIMICS to CATIA V5R20. When I export the file from MIMICS to CATIA in IGES format, the geometry comes out totally distorted. I have attached the two images. Can someone help me with this?
Hi, I'm currently trying to make a 3D model of an isosurface in OpenGL... I have already extracted the isosurface from the volume data; I have the number of faces, the vertices, and the normals for each vertex.
I draw the object in a loop...something like this:
for (i =0; i<Number of Faces;i++)
now the prob