Science topic

3D Reconstruction - Science topic

Explore the latest questions and answers in 3D Reconstruction, and find 3D Reconstruction experts.
Questions related to 3D Reconstruction
  • asked a question related to 3D Reconstruction
Question
2 answers
Must I use RGB-D cameras for 3D reconstruction?
Relevant answer
Answer
Not necessarily. Other options like photogrammetry, LiDAR, or structured-light scanners can do the job too. Each method has its strengths: photogrammetry for detailed textures, LiDAR for very precise large-scale models. It all comes down to what your project needs and what tools you have.
  • asked a question related to 3D Reconstruction
Question
1 answer
Hello,
We have done an MRI acquisition of an animal brain using three times the same T1 sequence. However, we are now trying to average these three data sets and do a 3D reconstruction. This is our first time doing this, so we don't know how to proceed.
For now, we have downloaded FSL and transformed our files to .nii.
Could someone help us figure out the next steps using FSL?
Thank you
Relevant answer
Answer
Hi,
To calculate an average, first you need to make sure that the images are well-aligned and the subject/specimen has not moved between acquisitions, thus you may want to co-register your images (one option would be FSL flirt https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FLIRT/UserGuide).
For calculating the mean you could also make use of FSL functions, see e.g. https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Fslutils (most interesting for you are probably fslmaths and fslmeants)
BR
Peter
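Once the runs are co-registered, the average is just a voxel-wise mean. A minimal sketch, with plain numpy arrays standing in for nibabel-loaded .nii volumes (the equivalent fslmaths call is shown in a comment; all filenames are hypothetical):

```python
import numpy as np

# Stand-ins for three co-registered T1 volumes; in practice you would load
# them with nibabel, e.g. nibabel.load("run1_reg.nii").get_fdata().
rng = np.random.default_rng(0)
runs = [rng.normal(100.0, 10.0, size=(4, 4, 3)) for _ in range(3)]

# Voxel-wise mean across the three acquisitions; equivalent to
#   fslmaths run1_reg -add run2_reg -add run3_reg -div 3 mean_T1
mean_vol = np.mean(np.stack(runs), axis=0)

print(mean_vol.shape)  # (4, 4, 3)
```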
  • asked a question related to 3D Reconstruction
Question
3 answers
I'm looking for the WinSurf (SurfDriver) software for 3D reconstruction. I have searched all over the internet but couldn't find it. If anyone has a setup file for this software, could you share it with me?
Relevant answer
Answer
I concur with Anton Vrdoljak.
  • asked a question related to 3D Reconstruction
Question
8 answers
I am looking for a CT dataset of bone fractures in different bones to reconstruct 3D models. I am using these models for developing automated design algorithms. If there is a database of already existing 3D models of bone fractures, that would be useful as well.
  • asked a question related to 3D Reconstruction
Question
3 answers
Hi
I am planning to reconstruct 3D geometries of coronary arteries from angiogram images (not a single artery, but whole coronary artery trees). I am wondering if any of you have software suggestions that could help, or any Matlab/Python code that could help?
I know that there is some open-source software that can be used for CT or MRI images. But I couldn't find any for angiogram images.
Thanks for your help.
Navid
  • asked a question related to 3D Reconstruction
Question
5 answers
Hi,
I acquired multiple confocal stack images of olfactory epithelium, from rostral to caudal positions, from one slide. I want to create a 3D reconstruction by combining all the z-stacks to look at the expression pattern of a few neuronal markers. I am able to add multiple stacks by selecting the "add slice" option. However, there is a distance of 50 microns between every two slices. Two questions:
1) Can I add blank space of 50 micron between every 2 slices?
2) How to align for x-y drift between different z stacks?
Thank you
Raghu
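For question 1, padding the 50 µm gaps amounts to inserting blank z-planes between consecutive stacks. A minimal numpy sketch (the z-step value and array shapes are hypothetical; the x-y drift of question 2 would still need to be corrected first, e.g. by phase correlation):

```python
import numpy as np

def pad_gaps(stacks, gap_um, z_step_um):
    """Concatenate z-stacks, inserting blank planes for the physical gap.

    stacks    : list of arrays shaped (z, y, x), already drift-corrected
    gap_um    : physical distance between consecutive stacks (e.g. 50)
    z_step_um : z sampling of the acquisition (hypothetical here)
    """
    n_blank = int(round(gap_um / z_step_um))
    y, x = stacks[0].shape[1:]
    blank = np.zeros((n_blank, y, x), dtype=stacks[0].dtype)
    parts = []
    for i, s in enumerate(stacks):
        if i:
            parts.append(blank)
        parts.append(s)
    return np.concatenate(parts, axis=0)

a = np.ones((5, 8, 8))
b = np.ones((5, 8, 8))
merged = pad_gaps([a, b], gap_um=50, z_step_um=2)
print(merged.shape)  # (35, 8, 8): 5 + 25 blank + 5
```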
Relevant answer
Answer
Raghu Ram Katreddi Thanks Raghu. Already very helpful.
Cheers.
  • asked a question related to 3D Reconstruction
Question
3 answers
Hello everyone. I recently want to do a 3D reconstruction using histological serial sections, but I have no experience with this, so I would like to ask whether anyone has done it and could give some advice.
Thank you so much
Relevant answer
Answer
Hi Liu Yang,
May I recommend the CT scan processing software 3D Slicer (https://www.slicer.org/). In this free program you can load a series of TIFF images of cross-sections and convert them to voxels for all sorts of visualizations and analyses. Since these are not typical CT files, you will have to scale the z-direction based on your histological section spacing. Also, I believe this only works with black-and-white images, so you may have to convert yours to grayscale.
You can also adjust the window of pixel values (normally X-ray attenuation factors) to select a desired range of densities (in the case of histological sections, colors) and turn the 3D voxels into a 3D mesh (STL, OBJ, etc.). There are lots of tutorials on YouTube, as it has become popular for people to take their medical CT scans and 3D print them.
The input file format is really important: you will have to trick Slicer into thinking your images are tomographic slices in TIFF format. I suggest downloading a batch image processing program to convert and label each successive section in a folder.
There is also a similar program with a wide range of utilities called Molcer (White Rabbit Co.). The free version has some nice utilities, but you would probably have to pay for the full version to process histological sections as if they were tomographic images.
I hope this helps. Let me know if you have any additional questions.
best regards,
David
  • asked a question related to 3D Reconstruction
Question
3 answers
I was wondering if there is any (open-source) software available for 3D-EBSD reconstruction from data in which the slices do not have constant thickness, or even better, not a constant thickness profile?
Effectively, it means that I collect EBSD maps, and for each EBSD map, I know the slice thickness at each position. As far as I know, most 3D-EBSD reconstruction software packages (such as dream3D) only allow constant thicknesses as input, and definitely do not allow irregular thickness profiles...
Thanks!
Relevant answer
Answer
Muhammad Umar Thanks for the answer, but Mtex does not have 3D functionality, but for pure visualization it can indeed work.
  • asked a question related to 3D Reconstruction
Question
1 answer
I have z-stack confocal images of whole-mount tissue IF staining, marking the tight junction of the cell. And I would like to do 3D reconstruction of the cells to measure cell circularity and area using Imaris.
However, since it's challenging to mount my tissue perfectly flat, my z-stack images are slanted.
This makes the cell reconstruction challenging, since the cell membrane signal for a given cell is spread across multiple z-slices. I attached a few screenshots to demonstrate.
I'm wondering if there is a way to rotate my z-stack image and reassign the xy reference angle to computationally make the image flatter?
Relevant answer
Answer
If you are using Imaris to open these images (which you hashtagged in your question so I am assuming), then after opening the images, imaris gives you the option to click on the image using the cursor and reposition it the way you want it.
Uncheck the "frame" option to get rid of the frame. Take a snapshot and open it in ImageJ, and there you can reconstruct the xy axes (make sure you know the pixel size before you uncheck the "frame" option, and draw the xy axes accordingly). Hope this helps.
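Outside Imaris, one way to estimate how much to rotate a slanted stack is to fit a plane to a few surface landmarks by least squares and convert the slopes to tilt angles. A sketch with hypothetical landmark coordinates (the actual resampling could then be done with e.g. scipy.ndimage.rotate):

```python
import numpy as np

# Hypothetical landmarks picked on the tissue surface: (x, y, z) in voxels.
pts = np.array([[0, 0, 10.0], [100, 0, 18.0], [0, 100, 14.0],
                [100, 100, 22.0], [50, 50, 16.0]])

# Least-squares fit of the plane z = a*x + b*y + c.
A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
(a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)

# Tilt angles (degrees) needed to bring the fitted plane flat;
# the stack could then be rotated by these angles to "re-flatten" it.
tilt_about_y = np.degrees(np.arctan(a))
tilt_about_x = np.degrees(np.arctan(b))
print(round(tilt_about_y, 2), round(tilt_about_x, 2))
```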
  • asked a question related to 3D Reconstruction
Question
5 answers
I am looking for image processing software that can reconstruct a high-resolution 2D SEM image of a porous Ni-GDC anode into a 3D representation. The intent is to separate the phases (Ni, Ce, pores) and measure tortuosity, porosity, etc.
I have contacted people from Avizo (Mercury Computer Systems), but their license costs above 13,000 US$. I do not have the means (financial or time) for this. Can anyone recommend an alternative?
Appreciated   
Relevant answer
Answer
Waqas Tanveer Sorry to bother you; if you select the most convenient software, please share it with me.
Thank you in advance
  • asked a question related to 3D Reconstruction
Question
1 answer
Hello,
I am working on a spinal cord injury model, and in order to characterize the lesion I want to do a 3D reconstruction of a spinal cord. I have cut a spinal cord of 1 cm into 10 µm sections, with 4 cuts per slide.
I have tried FIJI but I am not happy with it. I need a 3D reconstruction that I can interact with and use to measure the trauma volume.
Does anyone know how to solve this problem?
Every help is very much appreciated.
Relevant answer
Answer
Hello Nima,
I am curious about your need, what type of images do you acquire: brightfield, fluorescence of some kind, other...?
I am currently working on an automated serial block face system which allows image acquisition and 3D reconstruction of unstained samples based on autofluorescence signal. This system could typically be useful for the question you ask. Maybe we can discuss?
  • asked a question related to 3D Reconstruction
Question
1 answer
Hello,
I am currently testing a thresholding method, and I would like to try it on a dataset of images with ground truth to see how well it works. I am not trying to use this thresholding as segmentation to classify objects; I am just trying to threshold the image and then test the binarized image against the ground truth. If anyone knows a dataset that I can use, I would appreciate it if you could share it with me.
Thanks in advance
  • asked a question related to 3D Reconstruction
Question
14 answers
Hi all,
I have 2 .stl files which are 2 organs of the inner ear. They are segmented from CT slices and exported separately as 2 .stl files.
Some parts of the two organs should be in physical contact, and they should share the same surface in the subsequent finite element modeling. However, they are two separate STL files, and the two surfaces cross each other at the boundary. Is there any way to merge them, with suitable software and a suitable method, so that they share the same surface at the desired part?
The sketch is attached. The stl files are 3D surfaces, and the sketch is the cut plane of the model to illustrate our problems. Thank you very much.
Please see the mesh files below if it could make the question clear. Thanks a lot!
Relevant answer
Answer
Check the free Autodesk Meshmixer here: https://www.meshmixer.com. You can combine parts or separate them. Many useful tools included.
  • asked a question related to 3D Reconstruction
Question
5 answers
I am searching for someone who can do 3D reconstruction using Micro-CT data.
I have the scans but I cannot reconstruct them three-dimensionally!
In my study, we aim to compare the void volume across four different endodontic obturation methods.
The time is also very limited!
Relevant answer
Answer
I use ITK SNAP
  • asked a question related to 3D Reconstruction
Question
7 answers
Hi there,
My colleagues are working on 3D reconstruction of speckled images using deep learning. However, we cannot find suitable public data with speckle. Is there any software or method to simulate the images of binocular cameras for a given 3D target with speckle?
Thanks,
Relevant answer
Answer
Maybe you can use a general purpose robotics simulator, like Gazebo, where you can create a scene and add two cameras and capture images. Then, add the speckle in a post-processing step
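That post-processing step can be as simple as applying the standard multiplicative speckle model, I' = I·(1 + n) with Gaussian n. A minimal sketch (a rendered projected-speckle pattern would be more faithful for structured-light setups; sigma here is illustrative):

```python
import numpy as np

def add_speckle(img, sigma=0.2, seed=0):
    """Multiplicative speckle: each pixel scaled by (1 + n), n ~ N(0, sigma).
    This is the usual simulation model, not a physical optics render."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=img.shape)
    out = img * (1.0 + noise)
    return np.clip(out, 0.0, 1.0)

clean = np.full((64, 64), 0.5)     # stand-in for a rendered camera image
speckled = add_speckle(clean)
print(speckled.shape)
```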
  • asked a question related to 3D Reconstruction
Question
7 answers
I am looking for 3D reconstruction software that is easier to use than Amira-Avizo. I need to reconstruct 2D CT scans into 3D. Any recommendations? Open source, maybe?
  • asked a question related to 3D Reconstruction
Question
3 answers
Is there research on generative restitution of historical buildings?
Relevant answer
Answer
Not specifically, but a similar approach could be applied.
  • asked a question related to 3D Reconstruction
Question
5 answers
Hi dear community,
I am currently working on a project where I want to generate a 3D reconstructed model of rubble piles (resulting from a building collapse) via remote sensing. In my case, I employ an aerial LiDAR laser scanner as well as aerial photogrammetry for point cloud generation of the disaster scene. The problem is that only the surface of the scene that lies in the field of view can be reconstructed. However, in order to evaluate the structural behavior of the debris with regard to structural stability, I need to know how the collapsed elements are structured beneath the debris surface. Does somebody have an idea of how I can proceed, or has anybody conducted a related study? Is my objective even feasible?
Thank you in advance!
Relevant answer
Answer
Dear Amir,
I have worked with ground-based LiDAR and with terrestrial GPR (ground-penetrating radar). LiDAR cannot penetrate a solid rubble pile, and GPR does not work well on rubble, especially if the surface is irregular...
Respectfully yours, Joel
  • asked a question related to 3D Reconstruction
Question
5 answers
Q1: There have been quite a lot of attempts to combine, or at least co-relate, street-level images and aerial images (mostly from drones). Folks in computer vision have developed algorithms such as SIFT or SURF for feature matching. But what are some better (easier) ways for non-computer-science experts to do this without using commercial software (aerial-image-based 3D reconstruction, https://all3dp.com/1/best-photogrammetry-software/)?
Q2: The main reason for combining street-level and aerial images is to improve the quality of the generated 3D model (mainly due to geometric defects, blurred textures, shadows, etc.). But if we think the other way around, wouldn't it be simpler to combine the point cloud models generated separately from street-level and aerial images? All you have to know is some GPS points to match the point clouds, plus maybe some noise filtering.
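The GPS-based merging idea in Q2 can be sketched in a few lines: translate one cloud into the other's frame using a shared reference point and concatenate. All coordinates below are hypothetical, and a real pipeline would refine the alignment with ICP and filter outliers:

```python
import numpy as np

# Hypothetical point clouds (n, 3), each in its own local frame.
aerial = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 10.0]])
street = np.array([[0.0, 0.0, 1.5], [0.0, 1.0, 1.5]])

# A shared GPS reference point, expressed in both frames.
gps_in_aerial = np.array([5.0, 5.0, 0.0])
gps_in_street = np.array([2.0, 2.0, 0.0])

# Translate the street-level cloud into the aerial frame and concatenate.
offset = gps_in_aerial - gps_in_street
merged = np.vstack([aerial, street + offset])
print(merged.shape)  # (4, 3)
```

With several GPS correspondences one would instead solve for a full rigid transform (rotation plus translation), not just a shift.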
Relevant answer
Answer
Hi, following a point-based registration approach seems to be the most viable option at the moment until machine learning is able to bridge the non-canonical views properly. Besides Francesco Nex's reply and linked research, there is an interesting approach by Zhu et al. (2020):
with the code and a respective test data set publicly available:
  • asked a question related to 3D Reconstruction
Question
6 answers
I performed a 3D reconstruction of a room using a Kinect V2. The idea is to position the Kinect in the middle of the room on a tripod, then take pictures (color and depth images) every 10°. For the processing I use MATLAB, with ICP for the registration, and when the camera completes a full turn the famous problem appears: accumulation of registration error.
So my question is:
how can I minimize the gap between the first and the last point clouds?
I have this paper as support:
Study of parameterizations for the rigid body transformations of the scan registration problem
Relevant answer
You are looking to close the loop. You need to find N registration parameters, and then apply loop closure constraint to refine the individual parameters. Kaess et al's iSAM paper provides a way to do a smoothing operation to reduce the error between the first and last frame. However, in your case you may be able to get away with a simpler iterative approach. Essentially you estimate registration in a series for scan 1-2, 2-3,...(n-1)-n. After that you swap the source and the template to find registration n-(n-2), ....2-1. You repeat this process a few times, every time applying the estimated transformation to the next point cloud. In a few iterations, the overall registration should get refined.
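The simplest form of the loop-closure idea is to measure the residual after a full turn and spread it evenly over the N relative transforms. A toy sketch for the rotation component only (the 10° step and the bias value are illustrative, not measured):

```python
import numpy as np

# Relative rotations (degrees) for a full turn in 10-degree steps; a small
# per-scan ICP bias (illustrative) makes them sum past 360.
rel = np.full(36, 10.0) + 0.0833333

# Loop-closure residual after one complete turn.
loop_error = rel.sum() - 360.0

# Spread the residual evenly over all relative transforms.
rel_corrected = rel - loop_error / len(rel)

print(round(float(rel_corrected.sum()), 6))  # 360.0
```

Proper pose-graph smoothing (e.g. iSAM) weights each constraint by its uncertainty instead of distributing the error uniformly.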
  • asked a question related to 3D Reconstruction
Question
7 answers
I perform whole-cell patch-clamp recordings from neurons in acute brain slices and fill the neurons with biocytin for later reconstruction. To preserve morphology, I am used to retracting the pipette slowly to allow for the membrane to reseal.
Lately however, I’ve been having the issue that the cell nuclei stick to the pipette and come along upon retracting, in effect leaving me with an unintended nucleated patch, which often destroys cell morphology. It seems the longer the recording lasts, the greater the likelihood of this occurring. I’ve been playing around with the osmolarity of my (intracellular) solutions without much luck so far. Does anyone have some thoughts on what could cause this? I’m a bit short on ideas and would much appreciate your comments.
Relevant answer
Answer
Hey Mariana Vargas-Caballero, good to hear from you and thanks for the pro tips! Regarding retraction speed, I too prefer a relatively fast retraction over a couple of seconds, starting slowly and gradually increasing pace if you see your cell is closing nicely on the membrane seal test. This has always worked fine for me; it’s just recently that I’ve been having the issue of sticky nuclei. A bit of positive pressure only helps occasionally sadly… When the lab reopens from the corona lock-down I’ll review everything and hope I find the culprit. If so, I’ll be sure to report back here. Be well.
  • asked a question related to 3D Reconstruction
Question
3 answers
I am trying to produce a 3D reconstruction of histological slides using a software. In order to do so, my slides are to be converted from a generic image to the 'pyramidal' TIFF format. I was recommended to use 'ImageMagick', which enables file conversion. I'm required to input these instructions
<convert INPUT.tif -compress jpeg -quality 90 -define tiff:tile-geometry=256x256 ptif:OUTPUT.tif> However, I've encountered multiple problems.
1. The tutorials online < https://www.youtube.com/watch?v=hy75zatrDcQ&t=683s > show that there is a 'convert.exe' in the file location where ImageMagick is downloaded (Fig. 1), but for the latest version I've downloaded (Fig. 2), the command is missing. I came across a comment saying that it has been removed as the command clashes with Windows, and that is why the command is now 'magick' as seen on the website (Fig. 3).
2. Okay now I know it is no longer 'convert', however where am I to input the command? There is no terminal in the ImageMagick interface (Fig. 4).
Solving this issue would really help me move along to actually trying out the software itself.
Relevant answer
Answer
You might also be better off loading your pyramidal TIFFs into a software more suited for what you want to do. ImageJ is freely available and is more suited for scientific analysis like you are trying to do, and I suspect it could open your files natively.
  • asked a question related to 3D Reconstruction
Question
13 answers
Dear colleagues,
A good FIB-SEM data stack can contain a few thousand pictures. It is a very tricky and tedious process to segment them all manually. Which software do you use (with real practical experience on big data) for such tasks?
Cheers,
Denis
Relevant answer
Answer
You can also try Dragonfly (https://www.theobjects.com/dragonfly/index.html) from Object Research Systems, it is free for non commercial use.
Dragonfly has many segmentation tools from active shape to deep neural networks. The following video, (https://www.youtube.com/watch?v=1WVlskyuw94) is a deep learning tutorial.
Do not hesitate to contact me if you want more information.
  • asked a question related to 3D Reconstruction
Question
2 answers
Dear all,
I'm currently looking into using the full-color data from the Visible Human Project in an imaging viewer; please see:
I have not, however, been able to locate the VHP datasets without the blue background from the dyed gelatin solution used.
Do any of you have these data without the blue background, or do you happen to know where they can be obtained?
Kind regards
Relevant answer
Answer
Mr. Reyes,
Thank you for your elaborate answer. I ended up with a solution where I converted each RGB pixel to HSV format and specified a Hue range to remove and ended up with an acceptable solution.
  • asked a question related to 3D Reconstruction
Question
8 answers
I would like to know which journal might be best suited for publishing an article entitled "About 3D Reconstruction in Computer Vision"?
Relevant answer
Answer
Thank you.
  • asked a question related to 3D Reconstruction
Question
3 answers
I would like to know which journal might be best suited for publishing an article entitled "About 3D Reconstruction in Computer Vision"?
Relevant answer
Answer
If your research is into the general 3D reconstruction then the following might be of interest to you:
# International Journal of Computer Vision
# IEEE Transactions on Image Processing
# Image and Vision Computing
Else, if the content is leaned more towards 3D reconstruction in the medical field then you could check:
# IEEE Transactions on Medical Imaging
  • asked a question related to 3D Reconstruction
Question
4 answers
Hello everyone,
I have a question.
Please, what are the dimensions of the image produced by a CT scanner while it is scanning the patient? It produces axial images, then reconstructs them to produce the coronal, sagittal, and oblique planes. So, what are the dimensions of an axial image? Does it have only one dimension along the z-axis, two dimensions, or all three dimensions due to the movement of the machine during scanning?
If the CT takes images while the table holding the patient is stopped, what are the dimensions of the resulting images? Are they two-dimensional? Can we consider it like a routine X-ray machine producing 2D images (plain film)?
Relevant answer
Answer
While I completely agree with what you said, I was just pointing out the existence of cone-beam CT scanners. Thanks again for your respective reply.
  • asked a question related to 3D Reconstruction
Question
8 answers
Hello,
I have micro-CT reconstructed slices of my open-cell iron foams made with NRecon software. I have the impression that the quality of the images is not as good as it should be, and the struts tend to be blurry. This was reflected when I created the 3D model from the 2D slices in another software. It has been suggested that I use the smoothing function in NRecon, or find software that turns the grey images into black-and-white slices and then use those black-and-white slices to reconstruct the 3D model. Does anybody know of such software? Do you have any other suggestions? I would appreciate your advice!
Thanks,
Reza
Relevant answer
Answer
Thanks a lot for sharing your answers. I think the scanning parameters are also important: choosing the proper voltage and adjusting the source and detector distances both affect the final image resolution.
  • asked a question related to 3D Reconstruction
Question
17 answers
I am curious, at this point, to compare approaches: Delaunay triangulation, 3D alpha shape, Voxel Grid and Marching Cubes.
Which of these approaches is most used?
Relevant answer
Answer
Please read the book "Polygon Mesh Processing" by Mario Botsch, Leif Kobbelt, Mark Pauly, Pierre Alliez, and Bruno Lévy.
  • asked a question related to 3D Reconstruction
Question
1 answer
Hi,
I am trying to do 3D reconstruction with a SPECT system that has multi-pinhole arrays as collimators. Can I use STIR to reconstruct 3D objects? Will it be possible to specify the collimator structure in the STIR software?
Thanks in Advance!
Manu
Relevant answer
Answer
Hello Manu Francis ,
Yes, you can use STIR for 3D reconstruction. Check [1]: you will find the STIR paper and the open-source program, which will be useful for this topic.
There are a lot of 3D reconstruction algorithms working under STIR, so you should google "STIR 3D reconstruction".
Good luck
Mohamed Samir
references:
  • asked a question related to 3D Reconstruction
Question
4 answers
Micro-CT is a potent tool for measuring bone morphology. I have a few micro-CT-scanned samples from an in vivo study of rabbit bone, for histomorphometry of the osseointegration of titanium implants. Please suggest software that can rebuild the 2D slices into a 3D reconstruction and measure the osseointegration, and how to go about it.
Relevant answer
Answer
I'm not sure about the 3D reconstruction, but for histomorphometry you could use ImageJ with the plugin Map_BoneMicrostructure. It can give you accurate measurements of trabecular morphometry, and you can complement it with some of the basic package measurements such as integrated density.
  • asked a question related to 3D Reconstruction
Question
2 answers
I want to visualize the error map(RMSE) between the generated 3D face and the groundtruth 3D face for evaluation purpose. Can anyone provide suggestions or existing solutions please? Thanks a ton!
Relevant answer
Answer
Thank you Lambert Zijp
  • asked a question related to 3D Reconstruction
Question
2 answers
I want to analyse the error in the 3D position obtained after stereo vision calibration with two cameras placed perpendicular to each other, following their calibration and 3D reconstruction.
Relevant answer
Answer
You can calculate the distance between the observed image point and the projection of the reconstructed 3D point back into the image. This is called the re-projection error.
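Concretely, the re-projection error is computed by projecting the reconstructed 3D point through each calibrated camera and measuring the pixel distance to the observed point. A minimal pinhole sketch (the intrinsics and the test point are hypothetical):

```python
import numpy as np

def reprojection_error(X, K, R, t, uv_observed):
    """Pixel distance between an observed image point and the projection
    of 3D point X through the camera (K, R, t)."""
    x_cam = R @ X + t              # world -> camera frame
    uvw = K @ x_cam                # camera -> homogeneous pixel coords
    uv = uvw[:2] / uvw[2]          # perspective division
    return float(np.linalg.norm(uv - uv_observed))

K = np.array([[800.0, 0, 320],     # hypothetical intrinsics
              [0, 800.0, 240],
              [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
X = np.array([0.1, 0.0, 2.0])

# A perfect observation should give (near-)zero error.
uv = (K @ X)[:2] / (K @ X)[2]
print(round(reprojection_error(X, K, R, t, uv), 6))  # 0.0
```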
  • asked a question related to 3D Reconstruction
Question
16 answers
I have a set of slices of retina images. Using Optovue software I can reconstruct the 3D structure of the piece of retina (see attached link) and measure some 2D and 3D parameters (like the thickness of the retina at different positions, the volume of a particular region, etc.). I want to find other software for this purpose (to be able to work from home). I tried 3D Slicer, but I can't make it work well: I can upload a set of JPG images, but I can't create the 3D model. Could somebody advise another software, or give a good tutorial on 3D Slicer with JPEG images?
Relevant answer
Answer
Yes, you can convert a 3D model to a series of cross-sectional images using 3D Slicer. For example, you can load the STL file, then in Data module right-click to convert it to segmentation, then right-click on the created segmentation to convert it to labelmap. Post further questions on using 3D Slicer to discourse.slicer.org.
  • asked a question related to 3D Reconstruction
Question
3 answers
Hello All,
I work on a JSM-7200F,
basically dealing with thin-film samples and nanoparticles.
So how do we do 3D reconstruction from SEM images?
What software is available for that?
Is anyone familiar with such software?
Relevant answer
Answer
MeX is good software. I believe there are some other programs for working with stereo pairs. By the way, the calculations and measurements are simple: if just a few points are of interest, it is possible to measure with no software at all (but then you have to read about the method).
  • asked a question related to 3D Reconstruction
Question
5 answers
Digitizer program that gives X,Y coordinates of lines or points on scanned images.
We want to extract the X,Y coordinates manually from an imported figure (image).
Figures contain clear or unclear points or lines.
A quick Google search gives:
Graphmatica: exports figures but does not import them
Graph Digitizer: ?
Engauge Digitizer: I think it works only for distinct points, not for merged points or lines?
UnGraph: ?
Plot Digitizer: ?
Thank you for your advice on this subject; we await your experience.
Relevant answer
Answer
WebPlotDigitizer is the best and free. It works online as well as offline.
  • asked a question related to 3D Reconstruction
Question
5 answers
Hi everyone,
I have 4 fingerprint images from 4 different views. All are 2-dimensional images.
How can I build a 3D image from those 4 images?
Thank you so much.
Relevant answer
Answer
Structure-from-Motion (SfM) is the technique most commonly used to create a 3D scene (point cloud) from a set of unordered images. There is a lot of open-source software that can perform SfM for you. Examples are:
VisualSFM that can be found at: http://ccwu.me/vsfm/
OpenSfM that can be found at: https://github.com/mapillary/OpenSfM
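At the core of any SfM pipeline, once camera poses are known, is triangulation of matched points. A minimal two-view linear (DLT) triangulation sketch with hypothetical camera matrices:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 camera
    matrices and its pixel observations in each view."""
    A = np.vstack([uv1[0] * P1[2] - P1[0],
                   uv1[1] * P1[2] - P1[1],
                   uv2[0] * P2[2] - P2[0],
                   uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A, homogeneous point
    return X[:3] / X[3]

# Two hypothetical cameras: identity pose and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Synthesize observations of a known point, then recover it.
X_true = np.array([0.5, 0.2, 4.0])
uv1 = (P1 @ np.append(X_true, 1))[:2] / X_true[2]
uv2 = (P2 @ np.append(X_true, 1))[:2] / X_true[2]
print(triangulate(P1, P2, uv1, uv2))
```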
  • asked a question related to 3D Reconstruction
Question
3 answers
I want to explore the effect of different stimuli on brain glucose metabolism in mice. I now have a series of PET/CT images from a microPET/CT scan. I can see the differences in these images, but I can't tell the specific brain regions. Due to the small volume of the mouse brain, I cannot find proper software for 3D reconstruction. Can anyone give me some help with analysing these data? Thanks so much.
  • asked a question related to 3D Reconstruction
Question
6 answers
I wish to make a 3D reconstruction from 2 or 4 2D SEM images. I have 2 images taken from two different angles, or 4 images taken from 4 different directions (east, west, north, and south). Is there a library that does this? (I work in Python.) If not, what steps must be followed? If you have ready-made code, it is welcome. Thanks.
Relevant answer
Answer
It helps if you have more pictures from different views. Here is a list that may help: https://all3dp.com/1/best-photogrammetry-software/
I recommend Agisoft... it is not free, but if you are with a university you can get a discount. Hope this helps.
  • asked a question related to 3D Reconstruction
Question
7 answers
Why can't I find related papers? Is it because the traditional methods are good enough? Most of what I found related to deep learning concerns low-dose CT reconstruction, super-resolution, and so on. Are these directions currently popular? I am a first-year graduate student and want to use deep learning to do some work related to CT, but very few of the related papers release code. Is there any suggestion? Thank you very much.
Relevant answer
Answer
I don't know if he's active on ResearchGate but I know that @JonasAdler has done some work in this topic. His publication list should get you started and I know he normally has his code on GitHub too. Even if his applications don't exactly match your own then at least the code should still be good. Bear in mind that deep learning is dictated by having good training data, there is a nice low-dose CT dataset available online which is probably why a lot of people have gone for that first...
  • asked a question related to 3D Reconstruction
Question
3 answers
Does anybody have experience in describing the smoothness of biological surfaces using 3D-reconstructed confocal microscopy images?
Relevant answer
Answer
Hi Adrian,
Smoothness is the lack of roughness and is related to areal surface texture analysis in metrology. Follow:
Regards
  • asked a question related to 3D Reconstruction
Question
3 answers
Hi,
I am trying to generate sinograms from the scanned images of a multi-pinhole SPECT system. After filtering, I am trying to reconstruct the object volumetrically rather than slice by slice. Can anyone help, or suggest any publications on this topic?
regards,
manu
Relevant answer
Answer
Based on my experience with multi-pinhole SPECT, there is no conventional sinogram space as there is in conventional SPECT.
In the multi-pinhole SPECT system I used to work with, there were 3 detector heads, and the projections from each head were used for image reconstruction, something similar to a stitching method. These projections were nevertheless referred to as sinograms.
  • asked a question related to 3D Reconstruction
Question
3 answers
So I made 3D reconstructions of z-stacks with ImageJ, and I was wondering if there is any (free) software with which I can superimpose all these 3D reconstructions. Imagine creating a sort of averaged picture, or maybe even a heat map.
Any help is welcome!
Relevant answer
Answer
3D Slicer is free, open-source software that provides a flexible, modular platform for image analysis and visualization, and may be useful for you.
  • asked a question related to 3D Reconstruction
Question
6 answers
I would like to 3D print the 3D reconstruction I obtained from Imaris software, but Cura can only work with STL. How can I convert the format?
Relevant answer
Answer
Hi,
Maybe this question was already solved somewhere else , but I found a way quite easy that might be useful for someone.
1) Create a Surface object (if the threshold is quite high, you can include most of the information that is in the Volume object.
2) Select the object. Surpass --> Export selected objects --> Save as .vrml
3) Manually, change extension to .wrl
Now it should be suitable for MeshLab, and there you can save it as .stl.
  • asked a question related to 3D Reconstruction
Question
4 answers
I am working on a mummified hand attributed to a saint. To retrieve as much information as possible from it, I would like to know whether there are specific features I can look for. Anything that will allow me to learn a little more about its former owner's identity and physical state would help. I know hands are not so popular in osteo-archaeology for determining sex or specific diseases, but if someone could give me some tips, it would be great.
I have a 3D reconstruction of it acquired by X-ray, so I can look at its internal structure as well.
Relevant answer
Answer
Thanks Neils, I will!
  • asked a question related to 3D Reconstruction
Question
2 answers
Hello,
I recently learned how to perform 3D cell reconstructions with ImageJ, but the result is very limited. I am looking for some software available for free that allows me to obtain confocal 3D images at publication resolution (and even small spinning videos), since my group does not have a license for IMARIS. Any suggestions?
Thank you very much
James
Relevant answer
Answer
Hi James,
There are several packages available with ImageJ that it sounds like you are aware of, but there is a stand-alone package called Vaa3D that I think will do all that you want. I've used it for very simple rotational videos. It also allows you to load time-stack information. It's available for free at the link below,
and a link to the original paper is here
  • asked a question related to 3D Reconstruction
Question
6 answers
I am interested to know whether it is possible to process (reconstruct) images obtained through TEM. On a few platforms it is mentioned that ImageJ can handle 3D model reconstruction, but I am not able to find anything within ImageJ. Any suggestions will point me in the right direction.
Relevant answer
  • asked a question related to 3D Reconstruction
Question
5 answers
Can anyone suggest a fully automated, single-view, image-based 3D reconstruction method/paper?
Relevant answer
Answer
Though it would just be an estimate, some work can be referred to in the area of hand gesture recognition, where 3D modelling of the hand is attempted from a single reference (2.5D) frame. Although it is not general, it might give some clues:
-- Kirac, F., Kara, Y.E. and Akarun, L., 2014. Hierarchically constrained 3D hand pose estimation using regression forests from single frame depth data. Pattern Recognition Letters, 50, pp. 91-100.
-- Zhang, C., Yang, X. and Tian, Y., 2013, April. Histogram of 3D facets: A characteristic descriptor for hand gesture recognition. In Automatic Face and Gesture Recognition (FG), 2013 10th IEEE International Conference and Workshops on (pp. 1-8). IEEE.
  • asked a question related to 3D Reconstruction
Question
2 answers
The point cloud created from the generated (x,y,z) world coordinates, using the Kinect v1 RGB and depth video frames, gives the impression of a plain 2D image instead of a proper 3D model. It seems depth is not properly filled in. This may partially be due to the fact that the Kinect encodes depth distances that are too far or too close as 2047 and 0 respectively. Thus, I treat both values as zero and take them into account while generating (x,y,z), and so it affects the point cloud. Can anyone provide some insights into why the depth is not filled up and how volume blending can be done to get a proper 3D model? Any advice, pointers, or links would be highly appreciated. Thanks
Relevant answer
Answer
Thanks JLSL for the quick insight. Indeed, I am using a cropped frame (128x128) for generating (x,y,z), where I can't really see much depth filled in. The Kinect v1 originally produces 640x480 RGB and depth images, and I tried generating (x,y,z) from the full images, but the results are still not very promising. Can you advise on what techniques can be employed to remove such effects? I did not understand how the computer's hardware can have an influence. Thanks
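For reference, the back-projection with invalid-code masking discussed in this thread can be sketched in Python/NumPy. The intrinsics and the raw-disparity-to-metres formula below are the commonly quoted approximate Kinect v1 values, not calibrated ones, so treat them as placeholders:

```python
import numpy as np

def depth_to_points(depth_raw, fx=585.0, fy=585.0, cx=320.0, cy=240.0):
    """Back-project a Kinect v1 depth frame to 3D points.

    depth_raw: 2D array of raw 11-bit disparity codes; 0 and 2047
    mark invalid pixels and are masked out rather than kept as z = 0.
    The intrinsics and the raw-to-metres model are approximate,
    illustrative values, not calibrated ones.
    """
    valid = (depth_raw > 0) & (depth_raw < 2047)
    # Widely quoted approximate raw-disparity-to-metres model for Kinect v1.
    z = 1.0 / (depth_raw.astype(np.float64) * -0.0030711016 + 3.3309495161)
    v, u = np.indices(depth_raw.shape)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1)
    return pts[valid]  # (N, 3) array; invalid pixels dropped entirely
```

Masking the invalid pixels out entirely (rather than keeping them with z = 0) avoids the flattened, "2D-looking" cloud, since zero-depth points all collapse towards the camera centre.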
  • asked a question related to 3D Reconstruction
Question
2 answers
Hi. I create 3D reconstructions with OpenMVG + OpenMVS. OpenMVG creates the sparse point cloud correctly, but OpenMVS cannot create the depth maps. The log file is attached.
Relevant answer
Answer
Hello, it appears (from the log file) that there are too few registered points in your images to begin with. I would suspect that either there is something inconsistent with the output of openMVG into the openMVS workflow or your set of images is not appropriately shot. Try another set/scene to see if the io works first.
  • asked a question related to 3D Reconstruction
Question
5 answers
I intend to reconstruct the 3D pore structure of my specimens using 2D cross-section images obtained by light microscopy. Would it be possible to simultaneously stack two sets of 2D images obtained by serial sectioning in two different directions (along the height and the width of the specimens) to get a 3D structure?
Thanks
Relevant answer
Answer
Sorry for the delay.
Thank you all for your answers, I found them very helpful.
I tried stitching in one dimension and then stacked the resultant images with trakEM2.
Best regards
  • asked a question related to 3D Reconstruction
Question
7 answers
Dear all,
I am interested in reconstructing 3D images from Z-stacks with Amira, but I have never used it and I cannot find any specific tutorial for this.
To explain: I acquired 3D stacks (coordinates X, Y and Z) from different tissue sections. Now I'd like to reconstruct the whole tissue by "attaching" all these stacks together.
I know it's possible in Amira, but I don't know how.
Has anyone ever used Amira?
If so, any suggestions? Or is there a protocol somewhere?
Thank you
Relevant answer
Answer
Hello Yingnan,
Just follow the tutorials in this order:
1. Alignment of 2D Physical Cross-Sections
2. Image segmentation
3. Surface reconstruction
an example is given in help/demos: "Geometry Reconstruction"
  • asked a question related to 3D Reconstruction
Question
4 answers
Hi,
I am trying to implement the MLEM algorithm for SPECT image reconstruction in 3D, but I am confused about the probability matrix. How can I calculate the probability matrix?
Thanks in advance
Manu
Relevant answer
Answer
Analytical calculation + PSF. Ref: "Accurate image reconstruction based on standard-Gaussian-model fitted system matrix in multipinhole small animal SPECT imaging".
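To make the terms concrete: each element a_ij of the probability (system) matrix is the probability that a photon emitted in voxel j is detected in projection bin i; once that matrix A is available (e.g. from the analytical + PSF model referenced above), the MLEM update itself is short. A minimal sketch in NumPy, with A and the measured projections y as dense toy arrays (real system matrices are huge and sparse, so this is only to show the update rule):

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Basic MLEM: A is the (n_bins x n_voxels) system/probability
    matrix, y the measured projection data. Returns the voxel estimate.
    Update: x <- x * A^T(y / Ax) / A^T 1  (multiplicative, so the
    estimate stays non-negative)."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0) + eps         # sensitivity image A^T 1
    for _ in range(n_iter):
        fwd = A @ x + eps              # forward projection
        x *= (A.T @ (y / fwd)) / sens  # multiplicative update
    return x
```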
  • asked a question related to 3D Reconstruction
Question
3 answers
Hi, I have a 3D cube of 128x128x128 voxels. If a cone with known parameters passes through the cube, how can I find which voxels the cone intersects? The attached image shows the situation: the cube is yellow. We know the origin and angle of the cone. The problem is to find which voxels of the cube lie inside the cone.
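One straightforward approach is to test each voxel centre against the cone's half-angle: a centre p lies inside the cone with apex o, unit axis a, and half-angle θ when (p − o)·a ≥ ‖p − o‖ cos θ (with the projection non-negative). A NumPy sketch of that test; note it uses the voxel-centre approximation, so a partially covered voxel whose centre falls outside the cone is not flagged:

```python
import numpy as np

def voxels_in_cone(shape, apex, axis, half_angle):
    """Boolean mask of voxels whose centres lie inside an infinite cone.

    shape: (nx, ny, nz) of the cube; apex: cone origin; axis: cone
    direction (need not be normalised); half_angle in radians.
    """
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    centres = np.indices(shape).reshape(3, -1).T + 0.5   # voxel centres
    v = centres - np.asarray(apex, float)
    dist = np.linalg.norm(v, axis=1)
    along = v @ axis                                     # projection on axis
    inside = (along >= 0) & (along >= dist * np.cos(half_angle))
    return inside.reshape(shape)
```

For a 128x128x128 cube this is a single vectorised pass; if partial voxel coverage matters, one could additionally test the eight voxel corners.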
  • asked a question related to 3D Reconstruction
Question
1 answer
Dear researchers,
I am doing electron tomography of a dense, approximately spherical object. Due to the sample's geometry I could record an X tilt series only from -55 to +55 degrees. During tomogram composition in IMOD I get a weird shape of the object in the lower and upper parts of the tomogram. What do you think: does this happen because of a lack of data (a higher-angle tilt series or a dual-axis tilt series is needed) or because of a user alignment mistake? I would appreciate it a lot if any experienced IMOD users could share their opinion and answer some more questions about the program.
Relevant answer
Answer
Yes, it can be. A bad reconstruction can have several causes:
The limited tilt range is a very likely one; higher tilts or a second tilt axis would make a lot of difference.
If the sample is dense, you need to check that the beam gets all the way through at all angles. If it blocks the beam you can get artefacts in the reconstruction (as around gold fiducial markers).
If you are not using fiducial markers, try to: alignment is much easier and more accurate. Also: join the IMOD user group/list.
  • asked a question related to 3D Reconstruction
Question
4 answers
Hi. I am currently working on a medical image processing project that segments the coronary artery blobs from axial CT slices of the human heart and converts the segmented coronary blob slices into a 3D coronary artery model. I wish to extend the project by mapping the 3D model back to the 2D axial input slices, i.e., clicking on a point in the 3D model should display the appropriate CT slice and also point to the corresponding position in that slice. Are there any methods (techniques) or software available to do this? If so, how?
I am using OpenCV to implement this project.
Relevant answer
Answer
Suggest using OpenGL
1. set your camera so screen projection plane is parallel to the slice...
2. clear screen buffer as usual with glClearColor set to background color
3. Clear your depth buffer with glClearDepth set to Z-coordinate of the slice in camera space
4. set glDepthFunc(GL_EQUAL)
5. render mesh
glClearColor( 0.0,0.0,0.0,0.0 ); // <0.0,1.0> r,g,b,a
glClearDepth( 0.5 ); // <0.0,1.0> ... 0.0 = z_near, 1.0 = z_far
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDepthFunc(GL_EQUAL);
This will render only the slice whose fragments have Z == slice coordinate. This can also be done in GLSL by discarding all fragments with a different Z. DirectX should have something similar (I do not use it, so I do not know for sure).
As most meshes are boundary-representation models (hollow), you will obtain the circumference of your slice, so you will most likely need to fill it afterwards to suit your needs.
  • asked a question related to 3D Reconstruction
Question
6 answers
The EU Research and Innovation programme Horizon 2020 is now starting. Our team wants to participate in the programme and is looking for collaborators. We have some experience in erythrocyte 3D morphology studies using digital holographic interference microscopy. Our project proposes to study the 3D morphology of blood erythrocytes in oncological diseases and during treatment (radiation therapy, chemotherapy).
Tishko Tatyana
Relevant answer
Answer
What's the update ? @ Dr. Tatyana
  • asked a question related to 3D Reconstruction
Question
12 answers
The monocular SLAM's initial coordinate system is arbitrary and its scale is unknown.
But I want to know how to transform the initial coordinate system into another coordinate system (from a marker, like a chessboard).
What I am doing now is initializing the SLAM with a marker. The SLAM's map points are reconstructed from the poses calculated by PnP, so I know the SLAM coordinate system's direction and scale. In that case, I can transform the SLAM's coordinate system into the marker's coordinate system easily.
But the map points reconstructed from the marker during the SLAM's initialization are untrusted, and the graph optimization will change the map points' positions to produce a smooth trajectory by minimizing the projection error.
In this case, how can I get an accurate transformation between the SLAM's coordinate system and the marker's coordinate system?
Any idea is appreciated.
Thank you!
By the way, I'm studying ORB-SLAM.
Relevant answer
Answer
Hi Lei,
I am not sure what is the main problem in your application, so will try to address each of the points you made in your query, hoping that what you need to know is inside these responses:
1) "The monocular SLAM's initial coordinate system is random and scale-unknown."
It is true that a single camera does not suffice to obtain the scale of a scene geometrically reconstructed from correspondences in two (or more) views. In other words, the baseline (i.e., the vector connecting the two camera centers) has unknown scale/norm. So we usually pick a norm for this vector (usually 1 if no other information is given) and everything else falls into place. However, the initial reference frame is not random. It is simply taken as the identity orientation and the zero-vector [I | 0] in terms of position/translation, simply because there is no other reference to express it in terms of. So this is easy. Coordinate frames have a meaning only in relation to other coordinate frames.
The usual SLAM initialization will entail the recovery of a homography (see PTAM) or essential matrix (see my technical report here: https://www.plymouth.ac.uk/uploads/production/document/path/8/8593/Relative_Pose_-_George_Terzakis.pdf ). Of course, the baseline will have arbitrary scale, which typically is chosen as 1 and the rest will fall into place.
2) "But I want to know how to transform the initial coordinate system to another coordinate system(from a marker, like chessboard)".
Suppose that you have two coordinate frames placed at positions b1 and b2, with respective orientations R1 and R2 (containing the direction unit vectors of each frame as columns). Then a point with coordinates M1 in frame #1 has coordinates M2 in frame #2, related by (using MATLAB syntax):
M2 = R2' * ( R1*M1 + b1 - b2 )
Now, if the first frame is taken as the global (or world) reference (hence R1 = I3 and b1 = 0), then the transformation becomes: M2 = R2' * (M1 - b2).
In the case that you are describing, you have to use the general case, since the PnP will give a pose in terms of the local coordinate frame of your landmarks.
3) "But the map points reconstructed from the marker during the SLAM's initialization are untrusted, and the graph optimization will change the map points' positions to produce a smooth trajectory by minimizing the projection error."
You are using the PnP to obtain camera pose (i.e. orientation and position) from known points. You mention that these points are "untrusted" - I suppose you mean that their true positions, or rather their *tracked projections* on the first two images in the sequence, are uncertain. Yes, that is correct, and moreover it will propagate uncertainty into your camera pose estimate. This estimate can only be improved by tracking these points further in subsequent frames, repeating the PnP for those camera frames to obtain the respective camera poses, and then running *bundle adjustment* to improve the pose estimates (usually BA improves both camera poses and feature locations, but since you know your landmark locations, BA should improve only the poses and potentially the 3D locations of features that were detected in new frames but did not belong to your initial set of landmarks).
Feel free to ask more, or take a look at my PhD thesis (chapters 2-3) for more information.
I really hope the above is helpful.
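As a tiny worked example of the frame change in point 2 (world frame taken as the reference), here is a NumPy sketch; the function name and values are illustrative only:

```python
import numpy as np

def to_frame(M1, R2, b2):
    """Coordinates of world point(s) M1 in the frame with orientation
    R2 (axis direction vectors as columns) placed at position b2,
    i.e. M2 = R2' * (M1 - b2). Works for a single point or an (N, 3)
    array of row-vector points: (M1 - b2) @ R2 applies R2 transposed."""
    return (np.asarray(M1, float) - np.asarray(b2, float)) @ np.asarray(R2, float)
```

For the general case one would first map M1 into world coordinates with R1 and b1 before applying the same expression.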
  • asked a question related to 3D Reconstruction
Question
3 answers
It is really a nice toolkit for DTM extraction from 3D point clouds. We would like to use it for our urban object analysis task on very-high-density MLS point clouds. Is it doable to get the above-ground-level height for each point directly from the processing? It would be a nice option.
Relevant answer
Answer
 great, I see. Thank  you for the suggestion. 
  • asked a question related to 3D Reconstruction
Question
9 answers
Hello all together,
I have the problem that I want to obtain 3D landmarks using the software Landmark (Editor) from IDAV, but it seems that my .ply files are too big (>150 MB). When loading a second surface the program crashes. Is it possible to compress the surface files without losing too much information, so that I can work with more files in Landmark? If I'm right, others work with files of only a few MB in size.
Thanks!
Relevant answer
Answer
Peter, I encourage you to try out Stratovan Checkpoint as well. I wrote the IDAV Landmark Editor software while I was a post-doc at UC Davis some years ago. I left the university and started Stratovan, and Checkpoint is essentially the "next" version of Landmark. Checkpoint can do many of the things that Landmark does and much more: it's more robust, can work with CT scans, can handle larger data, and is easier to use.
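If a different tool is not an option, the .ply files can also be shrunk by mesh decimation, e.g. MeshLab's Quadric Edge Collapse Decimation filter, or programmatically. Below is a minimal vertex-clustering sketch in NumPy, a much cruder method than quadric decimation, shown only to illustrate the idea; the face list would then be remapped through the returned index array, with degenerate faces dropped:

```python
import numpy as np

def cluster_vertices(verts, cell):
    """Crude decimation by vertex clustering: vertices falling in the
    same grid cell of size `cell` are merged into their mean. Returns
    the reduced vertex set and, for each input vertex, the index of
    its representative (used to remap a face list)."""
    v = np.asarray(verts, float)
    keys = np.floor(v / cell).astype(np.int64)
    uniq, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    reps = np.column_stack([np.bincount(inv, weights=v[:, d]) / counts
                            for d in range(3)])
    return reps, inv
```

Larger `cell` values give smaller files at the cost of geometric detail; whether the loss is acceptable for landmarking is for the user to judge.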
  • asked a question related to 3D Reconstruction
Question
6 answers
Hello Every one,
I am trying to estimate vessel diameter via plane fitting, utilizing the idea presented by İlkay Öksüz in his article. The article is attached here.
In the article, under section "3.4: Vessel Diameter Estimation via Plane Fitting", they used a modified form of the plane equation to find the best-fit plane of the 3D point cloud data. The equation they use is given below:
nx(x - xc) + ny(y - yc) + nz(z - zc) <= T
where the array (nx,ny,nz) refers to the normal vector, T is an empirically set threshold ( T =2 voxels), and the (xc,yc,zc) arrays and (x,y,z) define the Cartesian coordinates of the centerline location and that of another arbitrary point from the lumen data, respectively.
I need to implement this in MATLAB, but I cannot understand it completely. Can anyone guide me on how to implement this equation?
I have the 3D point cloud, the center points of that point cloud, and the normal vector.
Relevant answer
Answer
I segmented the coronary arteries using the region-growing method proposed in the article attached to this question. I generated the 3D point cloud using the isosurface command in MATLAB. The attached image shows a portion of the point cloud; in this image,
1. The grey region represents a portion of the segmented coronary arterial tree.
2. The black spots represent centerline points of the segmented coronary tree.
3. The red points are the points for which the following equation holds:
                          nx(x - xc) + ny(y - yc) + nz(z - zc) <= T
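Although the thread asks for MATLAB, the inequality itself is just a slab test around the cross-sectional plane, which may be easier to see in a short NumPy sketch. This sketch takes the inequality two-sided, i.e. |n·(p − c)| ≤ T, so the slab straddles the plane; the diameter would then be measured on the selected points:

```python
import numpy as np

def lumen_slab(points, centre, normal, T=2.0):
    """Select lumen points within T voxels of the cross-sectional plane
    through `centre` with normal `normal`: the points p satisfying
    |n . (p - c)| <= T.  `points` is an (N, 3) array; returns a
    boolean mask over its rows."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)                      # ensure unit normal
    d = (np.asarray(points, float) - np.asarray(centre, float)) @ n
    return np.abs(d) <= T
```

The same three lines translate directly to MATLAB with `bsxfun`/implicit expansion and `abs(...) <= T`.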
  • asked a question related to 3D Reconstruction
Question
2 answers
Given an Euclidean Distance Map computed on a 3D image, what is the efficient (accurate + fast) way to extract the medial axis ?
Relevant answer
Answer
Thank you for your suggestion, I am going to read it.
Best regards,
Amal
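For reference, one classical heuristic is to take the ridge (local maxima) of the Euclidean distance map as an approximate medial axis. It is fast but yields a thick, possibly disconnected axis, so a thinning pass is still needed for a one-voxel-wide skeleton. A SciPy sketch (assuming `scipy.ndimage` is available):

```python
import numpy as np
from scipy import ndimage

def ridge_voxels(mask):
    """Approximate medial-axis voxels of a 3D binary object as local
    maxima of its Euclidean distance map (the classical ridge
    heuristic; thick and possibly disconnected, so follow with a
    thinning step for a proper skeleton)."""
    mask = np.asarray(mask, bool)
    edt = ndimage.distance_transform_edt(mask)
    local_max = edt == ndimage.maximum_filter(edt, size=3)
    return local_max & mask
```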
  • asked a question related to 3D Reconstruction
Question
1 answer
If we make a 3x3 matrix on a wafer, we get 9 parts. I have ellipsometry data for these 9 parts, but I do not know how to make a 3D image of them in Origin.
Please suggest what to do.
Relevant answer
Answer
If you work with MATLAB, try the plot3() function.
  • asked a question related to 3D Reconstruction
Question
3 answers
For tracking finger position in 3D, accuracy is very important.
There are various techniques for 3D measurement.
Which is the most accurate one?
Relevant answer
Answer
For non-contact 3D measurement, LIDAR is the most accurate. Dense LIDAR is very expensive; sparse LIDAR (such as automotive) is cheaper, but will return only a sparse set of points with true depth in the scene. Other, less accurate methods are structured light, time-of-flight, and stereo/multiview camera 3D.
  • asked a question related to 3D Reconstruction
Question
2 answers
We have a NewTom CBCT machine with a denture-scan option. I just want to know whether casts scanned with this option can be used for clear-aligner fabrication. Can we convert the normal DICOM data of the maxilla and mandible in occlusion into STL format and use it for clear-aligner fabrication? My intention in asking this question is to find out whether we can use CBCT data as an alternative to a 3D scanner in cases where soft-tissue details are not of much importance, as with clear aligners.
Relevant answer
Answer
Intra-oral scan data, yes... CBCT data is not accurate enough for intra-oral measurements for aligner production :)
  • asked a question related to 3D Reconstruction
Question
10 answers
What we need is a pointer for picking spatial points inside buildings and their rooms, and 3D software which can convert those spatial data points into a 3D model.
We want to pick floor and ceiling points for corners, and frames of windows and doors and hallways and wall-shape transitions, etc. The idea is to save a ton of time taking and scribing measurements with conventional tools, and to get actual, accurate shapes of rooms into the model very quickly.
The model would be imported into Revit for further work and remodeling, to prep for visualization and walk-throughs.
1. How can we get this done, and with what hardware and software?
2. Perhaps finger tracking could work for this; we would need to know whom to contact for more information and possibly assistance implementing it.
3. This seems like a great R&D project if a system is not currently available. We can make resources available for QA testing at NMSU.
We had considered that Kinect might be used, but expect it would not provide the accuracy or be usable with a set of procedures that work the same in all rooms. We expect that something like an ultrasonic system with a wand and a reference unit would be useful and accurate for these purposes.
Thank you for any assistance you may provide, and for taking the time to respond.
George Szynal
575-415-8205
Relevant answer
Answer
Excellent! I also discovered that Autodesk has something along these lines, although as with all of these systems we need to try them to see what they can do.
Autodesk ReCap 360
  • asked a question related to 3D Reconstruction
Question
3 answers
Creating two spherical images doesn't work, since there are at least two directions where the capture will be the same (left and right of the axis perpendicular to the baseline). My best shot is to circumscribe the capture points on a sphere with d = stereo base, then take cubic images and stitch them in post-processing. It seems a bit of a messy solution, especially for creating 360 videos... frame by frame.
  • asked a question related to 3D Reconstruction
Question
2 answers
I am building a 3D orthomosaic of a Carboniferous coal face and I have projected it in Agisoft. However, it has projected with the z-axis coordinates reversed; I think this may be due to the camera, as this is the first time this has happened. How do you change these coordinates? Does anyone have a link to a tutorial for the coordinate system in Agisoft?
Relevant answer
Answer
Francois-Xavier
Have you got any further with resolving this problem? I think it is a case of converting a geographic to a mathematical coordinate system, so if the coordinates are converted to a local datum it might work. It has worked for one of my models so far, but failed with a few too!
  • asked a question related to 3D Reconstruction
Question
7 answers
Hello, we are working on different ways to digitally reconstruct histological tissue sections with a very high accuracy for anatomical purpose. Has anyone good results with a specific and commercially available reconstruction software?
  • asked a question related to 3D Reconstruction
Question
2 answers
Hi All,
I have thousands of polygons, and each polygon contains polylines inside.
As per a customer requirement, I need to make a separate polygon wherever two polylines intersect each other after the polylines are extended programmatically using .NET.
Please suggest how to complete this task.
For a clear understanding, please look at the attached image.
Relevant answer
Answer
Hello,
As Mohamed said, your problem isn't fully specified yet: your example above would yield different partitions depending on the order in which you "extend" your lines.
For the intersection implementation I would recommend using a robust library like Boost.Polygon or CGAL's 2D regularized Boolean set operations. These are C++ libraries (so you would need to write a wrapper assembly in C++/CLI), or you can use Clipper, which has a C# interface.
  • asked a question related to 3D Reconstruction
Question
7 answers
We are analyzing image stacks from micro-CT (computed tomography). A typical image sequence comprises 1000 images of 10-30 MB each.
The system we've used so far had 500 GB RAM but was sometimes not powerful enough. Image reconstruction is done in MATLAB, and 3D rendering is then done in Avizo.
Any suggestions for a good workstation computer to do these analyses? Should we use:
a) 1 Terabyte RAM
b) Two good Intel Xeon processors
c) Two or more high-end graphics cards
or combinations of these?
Thanks very much for answering.
Relevant answer
I highly advise a prebuilt workstation like the Dell Precision 7910 at maximum configuration:
  • dual Xeon, 2×18 cores at 2.4 GHz
  • 1 TB of RAM
  • NVIDIA Quadro with 8 GB
  • 10 TB of SSD
  • Linux (Red Hat) or Windows 10
You will get high performance and no worries about compatibility between components.
It is extremely expensive, but for your kind of application, and its future, you need this kind of computer.
  • asked a question related to 3D Reconstruction
Question
4 answers
How can I obtain a 3D reconstruction of images after performing FIB-SEM analysis? I am wondering whether a reconstruction facility is coupled with the FIB-SEM setup or whether separate software/an application is required.
Relevant answer
Answer
I would also strongly recommend that you look at MIPAR at http://MIPAR.us. FIB SEM serial sections often require challenging segmentation, which is where MIPAR excels and many others struggle. We specifically designed the 3D aspect of MIPAR to handle FIB SEM datasets. I am happy to make a demo with a sample of your dataset if you'd like.
Cheers,
John
  • asked a question related to 3D Reconstruction
Question
2 answers
Monocular-camera-based visual odometry is popular in the fields of computer vision and robotics. However, the absolute scale cannot be obtained from monocular image information alone. Is there a good way to estimate this metric information from monocular images?
Relevant answer
Answer
To inject scale in the reconstruction, you generally need to have external and/or prior information related to either of the following: a) Depth of the scene (i.e., the depth of some or all of the reconstructed points), or b) Prior knowledge or measurements of camera motion,
In robotics, the most typical way of obtaining scale is to have a motion model of your robot (differential drive, for instance) and additional sensors such as an optical encoder (accelerometers can also be used, but they're not very reliable on their own; they can be very useful when combined with a motion model, however). Once you get an estimate of the baseline length (i.e., the difference in the robot/camera position) between two image captures, this is effectively your scale factor.
Another way of obtaining scale is by incorporating a depth sensor such as the Kinect. In this case you have reliable estimates for points that lie in the range of 0.5-4.5 m from the camera. Provided that your scene indeed contains objects at this depth (typically indoor environments), the Kinect is a very reliable source of scale for your structure-from-motion method.
Once you have either depth or baseline length, you would have to scale all your points by the recovered factor (there are many ways to do that, some of which are described in my PhD thesis). Regardless, you would afterwards have to perform bundle adjustment on the new estimates. If you are bothered by execution time, my suggestion would be to optimize only a few of the points that you positively regard as inliers, along with the baseline vector of course.
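Mechanically, applying a recovered scale factor is simple: every point and camera centre is multiplied by the same factor (rotations are untouched), followed by bundle adjustment as described above. A sketch, with illustrative names:

```python
import numpy as np

def apply_scale(points, cam_positions, baseline_est, baseline_true):
    """Rescale a monocular reconstruction once one real distance is
    known, e.g. an odometry-measured baseline between two camera
    poses. All points and camera centres are multiplied by the same
    factor; camera rotations are unaffected."""
    s = baseline_true / baseline_est
    return np.asarray(points, float) * s, np.asarray(cam_positions, float) * s
```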
  • asked a question related to 3D Reconstruction
Question
2 answers
I have several 3D vertebra structure datasets, and what I have done so far is:
  • calculate parameters such as thickness, connectivity, and space distance using BoneJ on several ROIs
  • calculate a PCA model of those parameters and ROIs
  • choose one ROI and convert it to a DRR
  • compare the similarity between this DRR and an X-ray radiograph
What I need now (if the difference is too big) is to make some changes to the 3D structure, convert it again to a DRR, and compare it with the X-ray radiograph, but I don't know how to use my PCA model for this purpose or how to change my 3D structure.
Any advice, please? I'd really appreciate any help. Thank you.
Relevant answer
Answer
Thank you Filipe, that is exactly what I need, I need to minimize the difference by using my PCA model and changing the ROI 3D structure.
  • asked a question related to 3D Reconstruction
Question
3 answers
I am learning computer vision to work on my project. It involves processing point clouds for 3D reconstruction. I need to mesh my point clouds (after filtering and segmenting them). I am using MATLAB 2016a. Are there built-in functions or tutorials I can use for meshing in MATLAB?
Relevant answer
Answer
Thank you Chao Xing and Raphael Rouveure. That was helpful.
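For a roughly single-valued ("2.5D") cloud, one quick way to get a mesh is to Delaunay-triangulate the (x, y) projection of the points; MATLAB's delaunay/trisurf do the same job, while closed surfaces need a Poisson or ball-pivoting reconstruction instead. A SciPy sketch of the 2.5D case:

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_heightfield(points):
    """Triangulate a roughly single-valued (2.5D) point cloud by
    Delaunay-triangulating its (x, y) projection. Returns the vertex
    array and an (M, 3) array of triangle indices into it."""
    pts = np.asarray(points, float)
    tri = Delaunay(pts[:, :2])
    return pts, tri.simplices
```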